Search results for: twin prime decomposition
909 Fault Detection and Diagnosis of the Broken Bar Problem in Induction Motors Based on Wavelet Analysis and the EMD Method: Case Study of Mobarakeh Steel Company in Iran
Authors: M. Ahmadi, M. Kafil, H. Ebrahimi
Abstract:
Nowadays, induction motors play a significant role in industry. Condition monitoring (CM) of this equipment has gained remarkable importance in recent years due to huge production losses, substantial imposed costs, and increased levels of vulnerability, risk, and uncertainty. Motor current signature analysis (MCSA) is one of the most important techniques in CM and can be used to detect broken rotor bars. Signal processing methods such as the Fast Fourier Transform (FFT), the wavelet transform, and Empirical Mode Decomposition (EMD) are used to analyze MCSA output data. In this study, these signal processing methods are applied to broken bar detection in Mobarakeh Steel Company induction motors. Based on the wavelet transform method, an index for fault detection, CF, is introduced, defined as the ratio of the maximum to the mean of the wavelet transform coefficients. We find that the CF factor is greater in the broken bar condition than in the healthy condition. Based on the EMD method, the energy of the intrinsic mode functions (IMFs) is calculated, and we find that when motor bars become broken, the energy of the IMFs increases.

Keywords: broken bar, condition monitoring, diagnostics, empirical mode decomposition, Fourier transform, wavelet transform
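As a rough illustration of the CF index described above, the sketch below computes the ratio of the maximum to the mean of the absolute wavelet detail coefficients for two synthetic stator-current signals. The one-level Haar decomposition, the 50 Hz supply frequency, the sideband amplitudes and the slip value are all illustrative assumptions, not the study's actual data or wavelet choice:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet decomposition: approximation and detail parts."""
    x = x[: len(x) // 2 * 2]                 # drop a trailing sample if odd length
    a = (x[0::2] + x[1::2]) / np.sqrt(2)     # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)     # detail coefficients
    return a, d

def cf_index(signal, levels=4):
    """CF fault index: ratio of max to mean of |detail coefficients|,
    pooled over all decomposition levels."""
    details = []
    a = np.asarray(signal, float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(np.abs(d))
    coeffs = np.concatenate(details)
    return coeffs.max() / coeffs.mean()

# Hypothetical stator-current signals, 1 kHz sampling
t = np.arange(0, 1, 1e-3)
healthy = np.sin(2 * np.pi * 50 * t)
# A broken bar adds sidebands at (1 +/- 2s)*50 Hz; s = 0.04 is an assumed slip
faulty = healthy + 0.3 * np.sin(2 * np.pi * 46 * t) + 0.3 * np.sin(2 * np.pi * 54 * t)

print(cf_index(healthy), cf_index(faulty))
```

By construction the index is at least 1; the diagnostic claim in the abstract is that it grows in the faulty condition.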
Procedia PDF Downloads 150

908 Production of Nanocomposite Electrical Contact Materials Ag-SnO2, W-Cu and Cu-C in Thermal Plasma
Authors: A. V. Samokhin, A. A. Fadeev, M. A. Sinaiskii, N. V. Alekseev, A. V. Kolesnikov
Abstract:
Composite materials in which a metal matrix is reinforced by ceramic or metal particles are of great interest for use in the manufacture of electrical contacts. Significant improvement of the composite physical and mechanical properties, as well as an increase in the performance parameters of composite-based products, can be achieved if a nanoscale structure is obtained in the composite materials by using nanosized powders as starting components. The results of the synthesis of nanosized composite powders (Ag-SnO2, W-Cu and Cu-C) in DC thermal plasma flows are presented in this paper. The investigations included the following processes: recondensation of a micron powder mixture Ag + SnO2 in a nitrogen plasma; reduction of an oxide powder mixture (WO3 + CuO) in a hydrogen-nitrogen plasma; and decomposition of copper formate and copper acetate powders in a nitrogen plasma. Calculations of the equilibrium compositions of the multicomponent systems Ag-Sn-O-N, W-Cu-O-H-N and Cu-O-C-H-N in the temperature range of 400-5000 K were carried out to estimate the basic process characteristics. Experimental studies of the processes were performed using a plasma reactor with a confined jet flow. The plasma jet net power was in the range of 2-13 kW, and the feedstock flow rate was up to 0.35 kg/h. The obtained powders were characterized by TEM, HR-TEM, SEM, EDS, ED-XRF, XRD, BET and QEA methods. Nanocomposite Ag-SnO2 (12 wt.%). Processing of the initial powder mixture (Ag + SnO2) in a nitrogen thermal plasma stream made it possible to produce nanopowders with a specific surface area of up to 24 m2/g, consisting predominantly of particles smaller than 100 nm. According to the XRD results, tin was present in the obtained products as the SnO2 phase and also as intermetallic AgxSn phases. Nanocomposite W-Cu (20 wt.%). Reduction of the (WO3 + CuO) mixture in the hydrogen-nitrogen plasma provides W-Cu nanopowder with particle sizes in the range of 10-150 nm.
The particles have a mainly spherical shape and a tungsten core - copper shell structure. The shell is several nanometers thick and is composed of copper and its oxides (Cu2O, CuO). The nanopowders had 1.5 wt.% oxygen impurity; heat treatment in a hydrogen atmosphere reduces the oxygen content to less than 0.1 wt.%. Nanocomposite Cu-C. Copper nanopowders were obtained as products of the decomposition of the starting copper compounds. The nanopowders primarily had a spherical shape with a particle size of less than 100 nm. The main phase was copper, with small amounts of the Cu2O and CuO oxides. The copper formate decomposition products had a specific surface area of 2.5-7 m2/g and contained 0.15-4 wt.% carbon; the copper acetate decomposition products had a specific surface area of 5-35 m2/g and a carbon content of 0.3-5 wt.%. Compacting of the nanocomposites (sintering in hydrogen for Ag-SnO2 and spark plasma sintering (SPS) for W-Cu) showed that samples having a relative density of 97-98% can be obtained with a submicron structure. The studies indicate the possibility of using high-intensity plasma processes to create new technologies for producing nanocomposite materials for electrical contacts.

Keywords: electrical contact, material, nanocomposite, plasma, synthesis
Procedia PDF Downloads 235

907 A Content Analysis of ‘Junk Food’ Content in Children’s TV Programs: A Comparison of UK Broadcast TV and Video-On-Demand Services
Authors: Alexander B. Barker, Megan Parkin, Shreesh Sinha, Emma Wilson, Rachael L. Murray
Abstract:
Objectives: Exposure to imagery of foods high in fat, sugar, or salt (HFSS) is associated with HFSS consumption, and subsequently obesity, among young people. We report and compare the results of two content analyses, one of two popular terrestrial children’s television channels in the UK and the other of a selection of children’s programs available on video-on-demand (VOD) streaming sites. Design: Content analysis of three days’ worth of programs (including advertisements) on two popular children’s television channels broadcast on UK television (CBeebies and Milkshake), as well as a sample of the 40 highest-rated children’s programs available on the VOD platforms Netflix and Amazon Prime, using 1-minute interval coding. Setting: United Kingdom. Participants: None. Results: HFSS content was seen in 181 broadcasts (36%) and in 417 intervals (13%) on terrestrial television; ‘Milkshake’ had a significantly higher proportion of programs/adverts containing HFSS content than ‘CBeebies’. On the VOD platforms, HFSS content was seen in 82 episodes (72% of the total number of episodes) across 459 intervals (19% of the total number of intervals), with no significant difference in the proportion of programs containing HFSS content between Netflix and Amazon Prime. Conclusions: This study demonstrates that HFSS content is common both on popular UK children’s television channels and in children's programs on VOD services. Since previous research has shown that HFSS content in the media has an effect on HFSS consumption, children’s television programs broadcast either on TV or on VOD services are likely having an effect on HFSS consumption in children, and legislative opportunities to prevent this exposure are being missed.

Keywords: public health, epidemiology, obesity, content analysis
Procedia PDF Downloads 186

906 Depression of Copper-Activated Pyrite by Potassium Ferrate in Copper Ore Flotation Using High Salinity Process Water
Authors: Yufan Mu
Abstract:
High salinity process water (HSPW) is often applied in copper ore flotation to alleviate freshwater shortage; however, it is detrimental to copper flotation as it strongly enhances copper activation of pyrite. In this study, the depression effect of a strong oxidiser, potassium ferrate (K₂FeO₄), on the flotation of copper-activated pyrite was tested to realise the selective separation of pyrite from copper minerals (e.g., chalcopyrite) in flotation using HSPW. The flotation results show that when K₂FeO₄ was added to the flotation cell during conditioning, it could selectively depress copper-activated pyrite while improving chalcopyrite flotation. The depression mechanism of K₂FeO₄ on pyrite was ascribed to the significant increase in the pulp potential (Eₕ), the dissolved oxygen (DO) concentration, and the amount of ferric oxyhydroxides resulting from ferrate decomposition. In the flotation cell, the high Eₕ and DO concentration promoted the oxidation of low-valency metal species (Cu⁺/Fe²⁺) released from mineral surfaces and forged steel grinding media. The resultant high-valency metal oxyhydroxides Cu(OH)₂/Fe(OH)₃, together with the ferric oxyhydroxides from ferrate decomposition, preferentially precipitated on the pyrite surface due to its more cathodic nature compared with chalcopyrite, which increased pyrite surface hydrophilicity and reduced its floatability. This study reveals that K₂FeO₄ is a highly efficient depressant for pyrite when separating copper minerals from pyrite in flotation using HSPW, if dosed properly.

Keywords: copper flotation, pyrite depression, copper-activated pyrite, potassium ferrate, high salinity process water
Procedia PDF Downloads 72

905 Acetic Acid Adsorption and Decomposition on Pt(111): Comparisons to Ni(111)
Authors: Lotanna Ezeonu, Jason P. Robbins, Ziyu Tang, Xiaofang Yang, Bruce E. Koel, Simon G. Podkolzin
Abstract:
The interaction of organic molecules with metal surfaces is of interest in numerous technological applications, such as catalysis, bone replacement, and biosensors. Acetic acid is one of the main products of the bio-oils produced from the pyrolysis of hemicellulosic feedstocks; however, the high oxygen content of these bio-oils makes them unsuitable for use as fuels. Hydrodeoxygenation is a proven technique for the catalytic deoxygenation of bio-oils. An understanding of the energetics and control of the bond-breaking sequences of biomass-derived oxygenates on metal surfaces will enable guided optimization of existing catalysts and the development of more active and selective processes for biomass transformation to fuels. Such investigations have been carried out with the aid of ultrahigh vacuum and its concomitant techniques. The high catalytic activity of platinum in the transformation of biomass-derived oxygenates has sparked a lot of interest. We herein exploit infrared reflection absorption spectroscopy (IRAS), temperature-programmed desorption (TPD), and density functional theory (DFT) to study the adsorption and decomposition of acetic acid on a Pt(111) surface, which is then compared with Ni(111), a model non-noble metal. We found that acetic acid adsorbs molecularly on the Pt(111) surface at 90 K, interacting through the lone pair of electrons of one oxygen atom. At 140 K, the molecular form is still predominant, with some dissociative adsorption (in the form of acetate and hydrogen). Annealing to 193 K led to complete dehydrogenation of the molecular acetic acid species, leaving adsorbed acetate. At 440 K, decomposition of the acetate species occurs via decarbonylation and decarboxylation, as evidenced by desorption peaks for H₂, CO, CO₂ and CHₓ fragments (x = 1, 2) in the TPD. The assignments of the experimental IR peaks were made by visualization of the DFT-calculated vibrational modes. The results showed that acetate adsorbs in a bridged bidentate (μ²η²(O,O)) configuration.
The coexistence of linear and bridge-bonded CO was also predicted by the DFT results. A similar molecular acid adsorption energy was predicted for Ni(111), whereas a significant difference was found for acetate adsorption.

Keywords: acetic acid, platinum, nickel, infrared reflection absorption spectroscopy, temperature-programmed desorption, density functional theory
Procedia PDF Downloads 108

904 Performance and Voyage Analysis of Marine Gas Turbine Engine, Installed to Power and Propel an Ocean-Going Cruise Ship from Lagos to Jeddah
Authors: Mathias U. Bonet, Pericles Pilidis, Georgios Doulgeris
Abstract:
An aero-derivative marine Gas Turbine engine model is simulated to be installed as the main propulsion prime mover to power a cruise ship which is designed and routed to transport intending Muslim pilgrims for the annual hajj pilgrimage from Nigeria to the Islamic port city of Jeddah in Saudi Arabia. A performance assessment of the Gas Turbine engine has been conducted by examining the effect of the varying aerodynamic and hydrodynamic conditions encountered at various geographical locations along the scheduled transit route during the voyage. The investigation focuses on the overall behavior of the Gas Turbine engine employed to power and propel the ship as it operates under the ideal and adverse conditions to be encountered during calm and rough weather, according to the different seasons of the year in which the voyage may be undertaken. The variation of engine performance under varying operating conditions is a very important economic issue, addressed by determining the time and speed at which the journey is completed, as well as the quantity of fuel required for the voyage. The assessment also considers the increased resistance caused by fouling of the submerged portion of the ship hull surface, with its resultant effect on the power output of the engine as well as the overall performance of the propulsion system. Daily ambient temperature levels were obtained from the UK Meteorological Office, while the varying degree of turbulence along the transit route, according to the Beaufort scale, was also obtained as a major input variable of the investigation.
By assuming the ship to be navigating the Atlantic Ocean and the Mediterranean Sea during the winter, spring and summer seasons, the performance modeling and simulation were accomplished using an integrated Gas Turbine performance simulation code known as ‘Turbomach’, along with a Matlab-generated code named ‘Poseidon’, both of which have been developed at the Power and Propulsion Department of Cranfield University. The results of this case study under the various assumptions further reveal that the marine Gas Turbine is a reliable and available alternative to the conventional marine propulsion prime movers that have dominated the maritime industry until now. The techno-economic and environmental assessment of this type of propulsion prime mover has enabled the determination of the effect of changes in weather and sea conditions on the ship speed, as well as on the trip time and the quantity of fuel required to be burned throughout the voyage.

Keywords: ambient temperature, hull fouling, marine gas turbine, performance, propulsion, voyage
Procedia PDF Downloads 186

903 Moving beyond the Gender Pay Gap: An Investigation of Pension Gender Inequalities across European Countries
Authors: Enva Doda
Abstract:
Recent statistical analyses within the European Union (EU) underscore the enduring significance of the Gender Pay Gap in amplifying the Gender Pension Gap, a phenomenon resisting proportional reduction over time. This study meticulously calculates the Pension Gap, scrutinizing the contributing variables within diverse pension systems. Furthermore, it investigates whether the "unexplained" segment of the Gender Gap correlates with political institutions, economic systems, historical events, or discrimination, using quantitative methods and the Blinder-Oaxaca decomposition method to pinpoint potential discriminatory factors. The descriptive analysis reveals a conspicuous Gender Pension Gap across European nations, with notable variation. While an overall reduction in the Gender Gap is observed, the degree of improvement varies among countries. Subsequent analyses will delve into the specific reasons or variables driving the distinct Gender Gap percentages, forming the basis for nuanced policy recommendations. This research enriches the ongoing discourse on gender equality and economic equity. By focusing on the root causes of the Pension Gap, the study has the potential to instigate policy adjustments, urging policymakers to reassess systemic structures and contribute to informed decision-making. Emphasizing gender equality as essential for a flourishing and resilient economy, the research aspires to drive positive change on both academic and policy fronts.

Keywords: Blinder-Oaxaca decomposition method, discrimination, gender pension gap, quantitative methods, unexplained gender gap
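The Blinder-Oaxaca machinery referred to above can be sketched as a two-group OLS decomposition of a mean gap into an "explained" (endowments) and an "unexplained" (coefficients) part. The synthetic "pension" data, covariates and coefficients below are invented for illustration and are not the study's variables:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols(X, y):
    """OLS coefficients with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Synthetic data: pensions driven by years worked and wage (assumed covariates)
n = 2000
Xm = np.column_stack([rng.normal(38, 5, n), rng.normal(30, 6, n)])   # group A (men)
Xw = np.column_stack([rng.normal(33, 5, n), rng.normal(26, 6, n)])   # group B (women)
ym = 2000 + 120 * Xm[:, 0] + 80 * Xm[:, 1] + rng.normal(0, 300, n)
yw = 2000 + 100 * Xw[:, 0] + 80 * Xw[:, 1] + rng.normal(0, 300, n)

bm, bw = ols(Xm, ym), ols(Xw, yw)
xm = np.concatenate([[1], Xm.mean(axis=0)])   # group mean characteristics
xw = np.concatenate([[1], Xw.mean(axis=0)])

gap = ym.mean() - yw.mean()
explained = (xm - xw) @ bm      # endowment differences, valued at group-A coefficients
unexplained = xw @ (bm - bw)    # coefficient differences: the "unexplained" part
print(gap, explained + unexplained)
```

Because OLS with an intercept forces mean(y) = x̄'β̂ within each group, the two components sum to the raw gap exactly; the choice of reference coefficients (here group A's) is one of several conventions.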
Procedia PDF Downloads 42

902 Constructions of Linear and Robust Codes Based on Wavelet Decompositions
Authors: Alla Levina, Sergey Taranov
Abstract:
The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient encoding and decoding algorithms, but they concentrate their detection and correction abilities on certain error configurations. Robust codes, in contrast, can protect against any configuration of errors with a predetermined probability. This is accomplished by using perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. This paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are cleaning a signal of noise, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust codes: the first is based on the multiplicative inverse in a finite field; in the second, the redundancy part is the cube of the information part. This paper also investigates the characteristics of the proposed robust and linear codes.

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability
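The second robust construction, where the redundancy part is the cube of the information part, can be illustrated in a small finite field. The choice of GF(2³) with primitive polynomial x³ + x + 1 is an assumption made for this sketch, not the authors' parameters:

```python
# GF(2^3) arithmetic with primitive polynomial x^3 + x + 1 (0b1011)
def gf8_mul(a, b):
    """Carry-less polynomial multiplication, reduced modulo x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:       # degree-3 overflow: reduce by the primitive polynomial
            a ^= 0b1011
        b >>= 1
    return r

def gf8_cube(x):
    return gf8_mul(x, gf8_mul(x, x))

def encode(info):
    """Robust codeword: information part followed by its cube."""
    return (info, gf8_cube(info))

def check(word):
    info, red = word
    return gf8_cube(info) == red

w = encode(5)
assert check(w)
# Flip a bit in the information part only: because cubing is a nonlinear
# map, the stored redundancy no longer matches and this error is detected.
corrupted = (w[0] ^ 1, w[1])
print(check(corrupted))   # detected: prints False
```

The point of the nonlinearity is that no fixed additive error pattern passes the check for all codewords, which is what bounds the error masking probability uniformly.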
Procedia PDF Downloads 489

901 Cadaver Free Fatty Acid Distribution Associated with Burial in Mangrove and Oil Palm Plantation Soils under Tropical Climate
Authors: Siti Sofo Ismail, Siti Noraina Wahida Mohd Alwi, Mohamad Hafiz Ameran, Masrudin M. Yusoff
Abstract:
Locating a clandestine cadaver is crucially important in forensic investigations; however, it requires a lot of manpower and is costly and time-consuming. The development of a new method to locate clandestine graves is therefore urgently needed, as cases involving the burial of cadavers in different types of soil under tropical climates are still not well explored. This study focused on burial in mangrove and oil palm plantation soils, comparing the fatty acid distributions at different soil acidities. A simulated burial experiment was conducted using domestic pig (Sus scrofa) to substitute for human tissues. Approximately 20 g of pig fatty flesh was allowed to decompose in mangrove and oil palm plantation soils, mimicking burial in a shallow grave. The associated soils were collected at designated sampling points corresponding to different decomposition stages. A modified Bligh-Dyer extraction method was applied to extract the soil free fatty acids, and the obtained free fatty acids were analyzed by gas chromatography with flame ionization detection (GC-FID). A similar fatty acid distribution was observed for both mangrove and oil palm plantation soils. Palmitic acid (C₁₆) was the most abundant free fatty acid, followed by stearic acid (C₁₈); however, the concentration of palmitic acid (C₁₆) was higher in oil palm plantation soil than in mangrove soil. In conclusion, the decomposition rate of a cadaver can be affected by the type of soil.

Keywords: clandestine grave, burial, soils, free fatty acid
Procedia PDF Downloads 399

900 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics
Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere
Abstract:
Most observational data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, and thereby exhibit strong fluctuations at all time scales and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique that is part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals; it consists in decomposing a signal, in an auto-adaptive way, into a sum of oscillating components named IMFs (Intrinsic Mode Functions), and thereby acts as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, the main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap. To overcome this problem, J. Gilles proposed an alternative entitled “Empirical Wavelet Transform” (EWT), which consists in building a bank of filters from a segmentation of the original signal's Fourier spectrum. The method is based on the ideas used in the construction of both Littlewood-Paley and Meyer wavelets. The heart of the method lies in segmenting the Fourier spectrum based on local maxima detection, in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore allows the mode-mixing problem to be overcome.
On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, it does not associate the detected frequencies with a specific mode of variability, as EMD does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on coupling the EMD and EWT techniques by using the spectral content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. A comparison of the results obtained respectively by the EMD, EWT and EAWD techniques on time series of total ozone columns recorded at Reunion Island over the period 1978-2019 is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.

Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet
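The spectrum-segmentation step that EWT (and hence EAWD) builds on can be sketched as follows. The ideal boxcar filter bank below stands in for the smooth Meyer-type wavelet filters of the actual method, and the two-tone test signal is purely illustrative:

```python
import numpy as np

def spectrum_segments(x, n_modes):
    """EWT-style segmentation: keep the n_modes largest local maxima of the
    magnitude spectrum and place boundaries midway between consecutive peaks."""
    X = np.abs(np.fft.rfft(x))
    peaks = [i for i in range(1, len(X) - 1) if X[i - 1] < X[i] > X[i + 1]]
    peaks = sorted(sorted(peaks, key=lambda i: -X[i])[:n_modes])
    return [0] + [(a + b) // 2 for a, b in zip(peaks, peaks[1:])] + [len(X)]

def bandpass_bank(x, bounds):
    """Ideal (boxcar) filter bank over the segments; the real EWT replaces
    these hard cutoffs with smooth Meyer-wavelet transitions."""
    X = np.fft.rfft(x)
    modes = []
    for lo, hi in zip(bounds, bounds[1:]):
        M = np.zeros_like(X)
        M[lo:hi] = X[lo:hi]           # keep only this spectral segment
        modes.append(np.fft.irfft(M, n=len(x)))
    return modes

t = np.linspace(0, 1, 1024, endpoint=False)
x = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 90 * t)
modes = bandpass_bank(x, spectrum_segments(x, 2))
print(len(modes))   # one reconstructed component per detected spectral segment
```

Because the segments partition the spectrum, the extracted components sum back to the original signal; EAWD's refinement is to place these boundaries using the spectral content of the EMD-derived IMFs instead of raw peak picking.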
Procedia PDF Downloads 137

899 Gas Network Noncooperative Game
Authors: Teresa Azevedo Perdicoúlis, Paulo Lopes Dos Santos
Abstract:
The conceptualisation of the problem of network optimisation as a noncooperative game sets up a holistic interactive approach that brings together different network features (e.g., compressor stations, sources, and pipelines, in the gas context) whose optimisation objectives differ, so that a single optimisation procedure becomes possible without having to feed results from diverse software packages into each other. A mathematical model of this type, where independent entities take action, offers the ideal modularity and subsequent problem decomposition with a view to designing a decentralised algorithm to optimise the operation and management of the network. In a game framework, compressor stations and sources are understood as players that communicate through the network connectivity constraints, i.e., the pipeline model. That is, in a scheme similar to tâtonnement, the players choose their best settings and then interact to check for network feasibility. The resulting degree of network infeasibility informs the players about the 'quality' of their settings, and this two-phase iterative scheme is repeated until a global optimum is obtained. Due to network transients, the optimisation needs to be assessed at different points of the control interval. For this reason, the proposed approach to optimisation has two stages: (i) the first stage computes along the period of optimisation in order to fulfil the requirement just mentioned; (ii) the second stage is initialised with the solution found at the first stage and computes at the end of the period of optimisation to rectify that solution. The viability of the proposed scheme is proven on an abstract prototype and three example networks.

Keywords: connectivity matrix, gas network optimisation, large-scale, noncooperative game, system decomposition
Procedia PDF Downloads 152

898 Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling
Authors: Ali Ben Abbes, Imed Riadh Farah, Vincent Barra
Abstract:
Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Nowadays, change detection in urban areas is a subject of intensive research, and timely and accurate data on the spatio-temporal changes of urban areas are therefore required. The data extracted from multi-temporal satellite images are usually non-stationary; in fact, the changes evolve in time and space. This paper proposes a methodology for change detection in urban areas by combining a non-stationary decomposition method and stochastic modeling. We consider as input of our methodology a sequence of satellite images I1, I2, ..., In at different periods (t = 1, 2, ..., n). Firstly, preprocessing of the multi-temporal satellite images is applied (e.g., radiometric, atmospheric and geometric corrections). The systematic study of global urban expansion in our methodology can be approached in two ways: the first considers the urban area as one single object as opposed to non-urban areas (e.g., vegetation, bare soil and water), with the objective of extracting the urban mask; the second aims to obtain deeper knowledge of the urban area by distinguishing different types of tissue within it. In order to validate our approach, we used a database of Tres Cantos, Madrid, in Spain, derived from Landsat over the period from January 2004 to July 2013 by collecting two frames per year at a spatial resolution of 25 meters. The obtained results show the effectiveness of our method.

Keywords: multi-temporal satellite image, urban growth, non-stationary, stochastic model
Procedia PDF Downloads 428

897 Ammonia Cracking: Catalysts and Process Configurations for Enhanced Performance
Authors: Frea Van Steenweghen, Lander Hollevoet, Johan A. Martens
Abstract:
Compared to other hydrogen (H₂) carriers, ammonia (NH₃) is one of the most promising, as it contains 17.6 wt% hydrogen and is easily liquefied at 9–10 bar pressure at ambient temperature. More importantly, NH₃ is a carbon-free hydrogen carrier, with no CO₂ emission at final decomposition. Ammonia has a well-defined regulatory framework and a good track record regarding safety. Furthermore, industry already has an existing transport infrastructure consisting of pipelines, tank trucks and shipping technology, as ammonia has been manufactured and distributed around the world for over a century. While NH₃ synthesis and transportation technologies are at hand, the missing link in the hydrogen delivery scheme from ammonia is an energy-lean and efficient technology for cracking ammonia into H₂ and N₂. The most explored option for ammonia decomposition is thermocatalytic cracking, which is the most energy-lean and robust approach compared to other technologies, such as plasma- and electrolysis-based decomposition. The decomposition reaction is favoured only at high temperatures (> 300°C) and low pressures (1 bar), as the thermocatalytic ammonia cracking process faces thermodynamic limitations. At 350°C, the thermodynamic equilibrium at 1 bar pressure limits the conversion to 99%; gaining additional conversion up to, e.g., 99.9% necessitates heating to ca. 530°C. However, actually reaching thermodynamic equilibrium is infeasible, as a sufficient driving force is needed, requiring even higher temperatures; limiting the conversion to below the equilibrium composition is the more economical option. Thermocatalytic ammonia cracking is documented in the scientific literature. Among the investigated metal catalysts (Ru, Co, Ni, Fe, …), ruthenium is the most active for ammonia decomposition, with an onset of cracking activity around 350°C. For establishing > 99% conversion, temperatures close to 600°C are required.
Such high temperatures are likely to reduce not only the round-trip efficiency but also the catalyst lifetime, because of sintering of the supported metal phase. In this research, the first focus was on catalyst bed design, avoiding diffusion limitations. Experiments in our packed bed tubular reactor set-up showed that extragranular diffusion limitations occur at low concentrations of NH₃ when reaching high conversion, a phenomenon often overlooked in experimental work. A second focus was thermocatalyst development for ammonia cracking, avoiding the use of noble metals. To this end, candidate metals and mixtures were deposited on a range of supports. Sintering resistance at high temperatures and the basicity of the support were found to be crucial catalyst properties. The catalytic activity was promoted by adding alkali and alkaline earth metals. A third focus was studying the optimum process configuration through process simulations. A trade-off between conversion and favourable operational conditions (i.e., low pressure and high temperature) may lead to different process configurations, each with its own pros and cons. For example, high-pressure cracking would eliminate the need for post-compression but is detrimental to the thermodynamic equilibrium, leading to an optimum cracking pressure in terms of energy cost.

Keywords: ammonia cracking, catalyst research, kinetics, process simulation, thermodynamic equilibrium
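The temperature/pressure trade-off discussed above can be sketched with a simple equilibrium solver for NH₃ → ½ N₂ + ³⁄₂ H₂. The ΔH and ΔS values below are rough ideal-gas estimates assumed constant with temperature, not the simulation data of this work:

```python
import math

R = 8.314  # J/(mol K)

def k_eq(T):
    """Rough equilibrium constant for NH3 -> 1/2 N2 + 3/2 H2, from assumed
    dH ~ 45.9 kJ/mol and dS ~ 99 J/(mol K), taken as T-independent."""
    dG = 45_900 - T * 99.0
    return math.exp(-dG / (R * T))

def conversion(T, P_bar):
    """Equilibrium NH3 conversion at T (K) and P (bar), solved by bisection.
    Starting from 1 mol NH3 and conversion a: total moles = 1 + a (Dn = +1)."""
    Kp = k_eq(T)
    def f(a):                      # Kp minus the reaction quotient Q(a)
        y_nh3 = (1 - a) / (1 + a)
        y_n2 = 0.5 * a / (1 + a)
        y_h2 = 1.5 * a / (1 + a)
        return Kp - (y_n2**0.5 * y_h2**1.5 / y_nh3) * P_bar
    lo, hi = 1e-9, 1 - 1e-9        # f > 0 at lo, f < 0 at hi; Q is increasing in a
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return lo

for T in (500, 600, 700):
    print(T, round(conversion(T, 1.0), 4), round(conversion(T, 10.0), 4))
```

The solver reproduces the two qualitative trends in the abstract: conversion rises with temperature and falls with pressure (Δn = +1), which is why high-pressure cracking saves post-compression at the cost of equilibrium conversion.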
Procedia PDF Downloads 66

896 Frequency Domain Decomposition, Stochastic Subspace Identification and Continuous Wavelet Transform for Operational Modal Analysis of Three Story Steel Frame
Authors: Ardalan Sabamehr, Ashutosh Bagchi
Abstract:
Recently, Structural Health Monitoring (SHM) based on the vibration of structures has attracted the attention of researchers in different fields, such as civil, aeronautical and mechanical engineering. Operational Modal Analysis (OMA) has been developed to identify the modal properties of infrastructure such as bridges and buildings. Frequency Domain Decomposition (FDD), Stochastic Subspace Identification (SSI) and the Continuous Wavelet Transform (CWT) are the three most common methods in output-only modal identification. FDD, SSI and CWT operate in the frequency domain, the time domain and the time-frequency plane, respectively; FDD and SSI are therefore not able to display time and frequency at the same time, and both have difficulties in noisy environments and in finding closely spaced modes. The CWT technique works on the time-frequency plane and performs reasonably under such conditions. A further advantage of the wavelet transform over the other current techniques is that it can also be applied to non-stationary signals. The aim of this paper is to compare the three most common modal identification techniques in finding the modal properties (natural frequency, mode shape, and damping ratio) of a three story steel frame, built in the Concordia University lab, using ambient vibration. The frame is made of galvanized steel, 60 cm long, 27 cm wide and 133 cm high, with no bracing along the long and short spans. Three uniaxial wired accelerometers (MicroStrain, 100 mV/g sensitivity) were attached to the middle of each floor; a gateway receives the data and sends it to the PC using the Node Commander software. Real-time monitoring was performed for 20 seconds at a 512 Hz sampling rate. The test was repeated five times in each direction, with excitation by hand shaking and an impact hammer. CWT is able to detect the instantaneous frequency by means of ridge detection.
In this paper, a partial-derivative ridge detection technique is applied to the local maxima of the time-frequency plane to detect the instantaneous frequency. The results extracted from all three methods are compared, demonstrating that CWT has the better performance in terms of accuracy in a noisy environment. The modal parameters, such as natural frequency, damping ratio and mode shapes, are identified by all three methods.
Keywords: ambient vibration, frequency domain decomposition, stochastic subspace identification, continuous wavelet transform
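The ridge idea can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it builds a Morlet-based CWT of a synthetic two-mode acceleration record (the 3 Hz and 8 Hz components, the 4 s length and the amplitudes are assumptions for the example; only the 512 Hz sampling rate comes from the abstract) and takes the ridge as the frequency of maximum wavelet magnitude at each instant.

```python
import numpy as np

fs = 512.0                                   # sampling rate used in the paper
t = np.arange(0, 4, 1 / fs)                  # short synthetic record (assumed length)
rng = np.random.default_rng(0)
x = np.sin(2*np.pi*3*t) + 0.5*np.sin(2*np.pi*8*t) + 0.1*rng.standard_normal(t.size)

def morlet_cwt(x, fs, freqs, w=6.0):
    """|CWT| of x at the given centre frequencies, using a complex Morlet wavelet."""
    out = np.empty((freqs.size, x.size))
    for i, f in enumerate(freqs):
        s = w * fs / (2 * np.pi * f)         # scale corresponding to frequency f
        m = int(min(10 * s, x.size))         # truncated wavelet support
        tt = (np.arange(m) - m / 2) / s
        wav = np.exp(1j * w * tt) * np.exp(-tt**2 / 2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(x, wav, mode="same"))
    return out

freqs = np.linspace(1, 15, 57)
C = morlet_cwt(x, fs, freqs)
ridge = freqs[np.argmax(C, axis=0)]          # ridge: argmax along the frequency axis
print(round(float(np.median(ridge)), 2))     # dominated by the strongest mode
```

In an OMA setting, each ridge segment would then be tracked over time to read off instantaneous frequencies of the structural modes.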
Procedia PDF Downloads 296
895 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach
Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed
Abstract:
Sign languages (SL) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, one closely tied to their lexical and syntactic levels of organization. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for the visual recognition of complex, structured hand gestures such as those found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on Type-2 Fuzzy HMMs (T2FHMMs) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD extends eigendecomposition to non-square matrices and is used here to reduce multi-attribute hand gesture data to feature vectors; it optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators by adequate type-2 fuzzy operators, which permits us to relax the additive constraint of probability measures. Therefore, T2FHMMs are able to handle both the random and the fuzzy uncertainties existing universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals and achieve better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov model
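The SVD feature-extraction step can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's pipeline: the image size, the number k of retained singular values and the normalisation are placeholders; only the idea of reducing a hand-image matrix to a vector of leading singular values comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((64, 48))                 # stand-in for a segmented grey-level hand image

k = 10                                       # number of leading singular values kept (assumed)
s = np.linalg.svd(image, compute_uv=False)   # singular values, sorted in descending order
features = s[:k] / np.linalg.norm(s)         # compact, scale-normalised feature vector

print(features.shape)                        # (10,)
```

Each training or test image would be mapped to such a vector before being fed to the (fuzzy) HMM as an observation.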
Procedia PDF Downloads 462
894 The Role of Digital Technology in Crime Prevention
Authors: Muhammad Ashfaq
Abstract:
Main theme: The prime focus of this study is the role of digital technology in crime prevention, with special focus on the Cellular Forensic Unit, Capital City Police Peshawar, Khyber Pakhtunkhwa, Pakistan. Objective(s) of the study: The prime objective of this study is to provide the statistics, strategies and patterns of analysis used for crime prevention in the Cellular Forensic Unit of Capital City Police Peshawar, Khyber Pakhtunkhwa, Pakistan. Research method and procedure: A qualitative research method was used, obtaining secondary data from the research wing and the Information Technology (IT) section of Peshawar police. Content analysis was the method used for the conduct of the study. The study is delimited to the Capital City Police and the Cellular Forensic Unit Peshawar, KP, Pakistan. Major finding(s): It is evident that the old traditional approach will never provide solutions for better management in controlling crime. The best way to control crime and promote proactive policing is to adopt new technologies. The study reveals that technology has made the police more effective and vigilant compared to traditional policing. Heinous crimes such as abduction, missing persons, snatching, burglaries and blind murder cases are now traceable with the help of technology. Recommendation(s): From the analysis of the data, it is recommended that IT experts be recruited along with research analysts to assist and facilitate the operational as well as the investigation units of the police in a timely manner. A mobile locator should be provided to the Cellular Forensic Unit to apprehend criminals promptly, and the latest digital analysis software should be provided to equip the unit.
Keywords: crime prevention, digital technology, Pakistan, police
Procedia PDF Downloads 65
893 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface
Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto
Abstract:
Motor imagery (MI) based brain-computer interfaces (BCIs) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and noise in EEG measurements, methods based on band-pass filters defined over a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on the decomposition of the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on representing EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, which is used as the training vector of a global SVM classifier.
Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (its dimension is 68% smaller than that of the original signal), the resulting FFT matrix retains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall system classification rate compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using the FFT. The accuracy improvement of over 10% and the reduction in computational cost denote the potential of the FFT in EEG signal filtering applied to the context of MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns
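The FFT-based sub-band step can be sketched as follows. This is a minimal illustration, not the authors' code: the 250 Hz sampling rate, the 2 s trial, the test tones and the two example bands are assumptions; only the idea of selecting FFT coefficient bins per sub-band, instead of running one IIR band-pass filter per sub-band, comes from the abstract.

```python
import numpy as np

fs = 250.0                                   # assumed EEG sampling rate
t = np.arange(0, 2, 1 / fs)                  # one 2 s trial (500 samples)
x = np.sin(2*np.pi*10*t) + 0.3*np.sin(2*np.pi*22*t)   # synthetic mu/beta content

X = np.fft.rfft(x)                           # one channel's frequency-domain coefficients
f = np.fft.rfftfreq(x.size, 1 / fs)

def subband(X, f, lo, hi, n):
    """Keep only the bins in [lo, hi) Hz and return the time-domain sub-band signal."""
    Y = np.where((f >= lo) & (f < hi), X, 0)
    return np.fft.irfft(Y, n=n)

alpha = subband(X, f, 8, 12, x.size)         # retains only the 10 Hz component
beta = subband(X, f, 20, 24, x.size)         # retains only the 22 Hz component
print(round(float(np.std(alpha)), 3), round(float(np.std(beta)), 3))   # 0.707 0.212
```

Each sub-band signal would then feed its own CSP filter and LDA classifier, as in the SBCSP structure the abstract describes.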
Procedia PDF Downloads 128
892 Precipitation Intensity: Duration Based Threshold Analysis for Initiation of Landslides in Upper Alaknanda Valley
Authors: Soumiya Bhattacharjee, P. K. Champati Ray, Shovan L. Chattoraj, Mrinmoy Dhara
Abstract:
The entire Himalayan range is globally renowned for rainfall-induced landslides. The prime focus of the study is to determine a rainfall-based threshold for the initiation of landslides that can be used as an important component of an early warning system for alerting stakeholders. This research deals with the temporal dimension of slope failures due to extreme rainfall events along National Highway-58 from Karanprayag to Badrinath in the Garhwal Himalaya, India. Post-processed 3-hourly rainfall intensity data, with the corresponding durations derived from daily rainfall data available from the Tropical Rainfall Measuring Mission (TRMM), were used as the prime source of rainfall data. Landslide event records from the Border Roads Organisation (BRO) and ancillary landslide inventory data for 2013 and 2014 were used to determine an intensity-duration (ID) rainfall threshold. The derived governing threshold equation, I = 4.738D^(-0.025), was validated against the landslides of August and September 2014 with an accuracy of 70%, and is considered suitable for the prediction of landslides in the study region. From the obtained results and validation, it can be inferred that this equation can be used to anticipate landslide initiation in the study area as part of an early warning system. The results can improve significantly with ground-based rainfall estimates and a better database of landslide records. Thus, the study demonstrates a very low-cost method to obtain first-hand information on the possibility of an impending landslide in any region, thereby providing alerts and better preparedness for landslide disaster mitigation.
Keywords: landslide, intensity-duration, rainfall threshold, TRMM, slope, inventory, early warning system
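Applying such an intensity-duration threshold is straightforward. The sketch below uses the equation I = 4.738D^(-0.025) from the abstract; the 3 h storm duration, the 6 mm/h intensity and the mm/h units are illustrative assumptions, and an operational warning system would of course add ground validation.

```python
# Minimal sketch of applying the derived intensity-duration threshold:
# a storm plotting above the curve flags possible landslide initiation.

def threshold_intensity(duration_h: float) -> float:
    """Critical mean rainfall intensity (assumed mm/h) for a storm of given duration (h)."""
    return 4.738 * duration_h ** -0.025

def exceeds_threshold(intensity_mmh: float, duration_h: float) -> bool:
    return intensity_mmh >= threshold_intensity(duration_h)

# A 3 h storm (TRMM's temporal resolution in the study) at an assumed 6 mm/h:
print(round(threshold_intensity(3.0), 2))    # critical intensity ~= 4.61 mm/h
print(exceeds_threshold(6.0, 3.0))           # True -> alert
```

Note how weakly the critical intensity depends on duration: the exponent -0.025 makes the threshold curve nearly flat.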
Procedia PDF Downloads 272
891 The Neuropsychology of Obsessive Compulsion Disorder
Authors: Mia Bahar, Özlem Bozkurt
Abstract:
Obsessive-compulsive disorder (OCD) is a common, persistent, and long-lasting mental health condition in which a person experiences uncontrollable, recurrent thoughts ("obsessions") and/or activities ("compulsions") that they feel compelled to engage in repeatedly. Obsessive-compulsive disorder is both underdiagnosed and undertreated. It frequently manifests in a variety of medical settings and is persistent, expensive, and burdensome. Obsessive-compulsive neurosis was long believed to be a condition that offered valuable insight into the inner workings of the unconscious mind. Obsessive-compulsive disorder is now recognized as a prime example of a neuropsychiatric condition susceptible to particular pharmacotherapeutic and psychotherapeutic treatments and mediated by pathology in particular neural circuits. OCD usually has two components, one cognitive and the other behavioral, although either can occur alone. Obsessions are often repetitive and intrusive thoughts that invade consciousness and are extremely hard to control or dismiss. People who have OCD often engage in rituals to reduce the anxiety associated with intrusive thoughts. Once the ritual is formed, the person may feel extreme relief and be free from anxiety until thoughts of contamination intrude once again. These thoughts are strengthened through negative reinforcement because they allow the person to avoid anxiety. They are described as autogenous, meaning they most likely come from nowhere. These unwelcome thoughts are related to actions, a link described as thought-action fusion: the thought becomes equated with an action, such that if the person refuses to perform the ritual, something bad might happen, and so they perform the ritual to escape the intrusive thought. In almost all cases of OCD, the person's life is severely disrupted by compulsions and obsessions.
Studies estimate the prevalence of OCD at 1.1%, making it a challenging condition with high comorbidity with depressive episodes, panic disorder, and specific phobias. Numerous CT investigations were the first to reveal brain anomalies in OCD, although the results were inconsistent. A few studies have focused on the orbitofrontal cortex (OFC), anterior cingulate gyrus (AC), and thalamus, structures also implicated in the pathophysiology of OCD by functional neuroimaging studies, but few have found consistent results. However, some studies have found abnormalities in the basal ganglia. There has also been discussion of whether OCD might be genetic. OCD has been linked to families in studies of familial aggregation, and findings from twin studies show that this relationship is partly influenced by genetic variables. Some research has shown that OCD is a heritable, polygenic condition that can result from de novo harmful mutations as well as from common and rare variants. Numerous studies have also presented solid evidence in favor of a significant additive genetic component to OCD risk, with distinct OCD symptom dimensions showing both shared and individual genetic risks.
Keywords: compulsions, obsessions, neuropsychiatric, genetic
Procedia PDF Downloads 64
890 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles
Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo
Abstract:
Non-Cooperative Target Identification (NCTI) has become a key research domain in the defense industry since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles (HRRPs), one-dimensional radar images in which the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. To face this problem, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition (SVD) to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, namely the test set, with the profiles included in a pre-loaded database, namely the training set. The classification is improved by using SVD, since it allows each aircraft to be modelled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, thereby reducing unwanted information such as noise. SVD permits the definition of a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. This way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two SVD-based metrics, F1 and F2, are computed in the identification process. In the case of F2 the angle is weighted, since the top vectors set the importance of the contribution to the formation of a target signal, whereas F1 simply shows the evolution of the unweighted angle.
In order to have a wide database of radar signatures and to evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft along defined trajectories taken from an actual measurement. Given the nature of the datasets, the main drawback of using simulated profiles instead of actual measured profiles is that the former imply an ideal identification scenario, since measured profiles suffer from noise, clutter and other unwanted information while simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so to assess the feasibility of the approach, the addition of noise has been considered before the creation of the test set. The identification results applying the unweighted and weighted metrics are analysed to demonstrate which algorithm provides the best robustness against noise in a possible real scenario. To confirm the validity of the methodology, identification experiments with profiles coming from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance is improved when weighting is applied. Future experiments with larger sets are expected to be conducted with the aim of finally using actual profiles as test sets in a real hostile situation.
Keywords: HRRP, NCTI, simulated/synthetic database, SVD
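The subspace-angle classifier at the core of the approach can be sketched as follows. This is a minimal illustration with synthetic stand-ins for range profiles (the 64 range cells, 20 training profiles per target, subspace rank 3 and noise level are all assumptions); it implements the unweighted-angle idea, akin to F1, not the authors' weighted F2 metric.

```python
import numpy as np

rng = np.random.default_rng(2)

def signal_subspace(profiles, k=3):
    """Columns of `profiles` are range profiles; keep the top-k left singular vectors."""
    U, _, _ = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :k]

def angle_to_subspace(x, U):
    """Angle between a test profile and the subspace spanned by U's columns."""
    proj = U @ (U.T @ x)
    cos = np.linalg.norm(proj) / np.linalg.norm(x)
    return np.arccos(np.clip(cos, 0.0, 1.0))

# Two mock targets: 64 range cells, 20 noisy training profiles each
base_a, base_b = rng.standard_normal((2, 64))
train_a = base_a[:, None] + 0.1 * rng.standard_normal((64, 20))
train_b = base_b[:, None] + 0.1 * rng.standard_normal((64, 20))
Ua, Ub = signal_subspace(train_a), signal_subspace(train_b)

test = base_a + 0.1 * rng.standard_normal(64)   # noisy profile of target A
pred = "A" if angle_to_subspace(test, Ua) < angle_to_subspace(test, Ub) else "B"
print(pred)                                     # the noisy A profile lands in subspace A
```

The weighted variant would scale each singular vector's contribution to the angle by its singular value before comparing targets.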
Procedia PDF Downloads 354
889 Morphology Feature of Nanostructure Bainitic Steel after Tempering Treatment
Authors: Chih Yuan Chen, Chien Chon Chen, Jin-Shyong Lin
Abstract:
The microstructural characterization of tempered nanocrystalline bainitic steel is investigated in the present study. It is found that two types of plastic relaxation, dislocation debris and nanotwins, occur in the displacive transformation due to the relatively low transformation temperature and high carbon content. Because most carbon atoms are trapped at dislocations, a high dislocation density can be sustained during the tempering process. More carbides can be found only at high tempering temperatures, due to the intense progression of recovery.
Keywords: nanostructure bainitic steel, tempered, TEM, nano-twin, dislocation debris, accommodation
Procedia PDF Downloads 535
888 Skew Cyclic Codes over Fq+uFq+…+u^(k-1)Fq
Abstract:
This paper studies a special class of linear codes, called skew cyclic codes, over the ring R = Fq + uFq + … + u^(k-1)Fq, where q is a prime power. A Gray map ɸ from R to Fq and a Gray map ɸ' from R^n to Fq^n are defined, as well as an automorphism Θ over R. It is proved that the images of skew cyclic codes over R under the maps ɸ' and Θ are cyclic codes over Fq, and that they preserve the duality relation.
Keywords: skew cyclic code, Gray map, automorphism, cyclic code
Procedia PDF Downloads 297
887 Influence of Propeller Blade Lift Distribution on Whirl Flutter Stability Characteristics
Authors: J. Cecrdle
Abstract:
This paper deals with the whirl flutter of turboprop aircraft structures, focusing on the influence of the span-wise distribution of blade lift on whirl flutter stability. It first gives the overall theoretical background of the whirl flutter phenomenon. After that, the solution of the propeller blade forces and the options for modelling the blade lift are described. The problem is demonstrated on the example of a twin turboprop aircraft structure. The influences on the propeller aerodynamic derivatives are evaluated, and finally the influences on the whirl flutter speed and the whirl flutter margin, respectively.
Keywords: aeroelasticity, flutter, propeller blade force, whirl flutter
Procedia PDF Downloads 536
886 On Paranorm Zweier I-Convergent Sequence Spaces
Authors: Nazneen Khan, Vakeel A. Khan
Abstract:
In this article we introduce the paranorm Zweier I-convergent sequence spaces for a sequence of positive real numbers. We study some topological properties, prove the decomposition theorem and study some inclusion relations on these spaces.
Keywords: ideal, filter, I-convergence, I-nullity, paranorm
Procedia PDF Downloads 481
885 Determinants of Child Nutritional Inequalities in Pakistan: Regression-Based Decomposition Analysis
Authors: Nilam Bano, Uzma Iram
Abstract:
Globally, undernutrition has been a notable concern for researchers, academics, and policymakers for many centuries because of its severe consequences. Nutritional deficiencies create hurdles for people in achieving a better standard of living. The consequences of undernutrition affect the economic progress of a country at both the micro and the macro level. The first five years of a child's life are considered critical for physical growth and brain development. In this regard, children require special care and good quality food (nutrient intake) to fulfil the nutritional demands of the growing body. Given their sensitive stature and health, children, especially those under 5 years of age, are more vulnerable to poor economic, housing, environmental and other social conditions. Besides confronting economic challenges and political upheavals, Pakistan is also going through a rough patch in the context of social development. A majority of children face serious health problems in the absence of the required nutrition. The issue is becoming more severe by the day, and children in particular are left with various immune problems and vitamin and mineral deficiencies. It is noted that children from well-off backgrounds are less likely to be affected by undernutrition. To underline this issue, the present study aims to highlight the existing nutritional inequalities among children under five years of age in Pakistan. Moreover, it strives to decompose the factors that most strongly shape this inequality and to bring them to the attention of the concerned authorities. The Pakistan Demographic and Health Survey 2012-13 was employed to assess the relevant indicators of undernutrition, such as stunting, wasting and underweight, and the associated socioeconomic factors.
The objectives were pursued using the relevant empirical techniques. Concentration indices were constructed to measure the nutritional inequalities using three measures of undernutrition: stunting, wasting and underweight. In addition, a decomposition analysis based on logistic regression was carried out to unfold the determinants that most affect the nutritional inequalities. The negative values of the concentration indices illustrate that children from marginalized backgrounds are affected by undernutrition more than their counterparts from rich households. Furthermore, the results of the decomposition analysis indicate that child age, size of the child at birth, wealth index, household size, parents' education, mother's health and place of residence are the most important contributing factors to the prevalence of the existing nutritional inequalities. Considering the results of the study, policymakers are advised to design policies that stimulate the health sector of Pakistan in a productive manner. Increasing the number of effective health awareness programs for mothers would make a notable difference. Moreover, parental education should be a focus for policymakers, given its significant association with eradicating nutritional inequalities among children.
Keywords: concentration index, decomposition analysis, inequalities, undernutrition, Pakistan
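The concentration index underlying the analysis can be sketched as follows. This is a minimal illustration with made-up numbers, not DHS data: eight children are ranked from poorest to richest with a binary stunting indicator; the covariance form CI = 2·cov(y, fractional rank)/mean(y) is the standard convenient formula, and a negative value indicates the outcome is concentrated among the poor.

```python
import numpy as np

wealth_rank = np.array([1, 2, 3, 4, 5, 6, 7, 8])   # poorest -> richest (illustrative)
stunted = np.array([1, 1, 1, 0, 1, 0, 0, 0])       # undernutrition indicator (illustrative)

def concentration_index(y, rank):
    """CI = 2 * cov(y, fractional wealth rank) / mean(y)."""
    n = y.size
    frac_rank = (np.argsort(np.argsort(rank)) + 0.5) / n   # fractional rank in (0, 1)
    return 2 * np.cov(y, frac_rank, bias=True)[0, 1] / y.mean()

ci = concentration_index(stunted, wealth_rank)
print(round(ci, 3))    # negative: stunting concentrated among poorer households
```

A regression-based decomposition then splits such an index into the contributions of covariates like wealth, parental education and household size.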
Procedia PDF Downloads 132
884 Use of Nanoclay in Various Modified Polyolefins
Authors: Michael Tupý, Alice Tesaříková-Svobodová, Dagmar Měřínská, Vít Petránek
Abstract:
Polyethylene (PE), polypropylene (PP), poly(ethylene-vinyl acetate) (EVA) and Surlyn (modified PE) nanocomposite samples were prepared with the montmorillonite fillers Cloisite 93A and Dellite 67G. The amount of modified Na+ montmorillonite (MMT) was fixed at 5% (w/w). A twin-screw kneader was used for compounding the polymer matrix with the chosen nanofillers. The level of MMT intercalation or exfoliation in the nanocomposite systems was studied by transmission electron microscopy (TEM) observations. The properties of the samples were evaluated by dynamic mechanical analysis (E* modulus at 30 °C) and by the measurement of tensile properties (stress and strain at break).
Keywords: polyethylene, polypropylene, polyethylene(vinyl acetate), clay, nanocomposite, montmorillonite
Procedia PDF Downloads 535
883 Flexible PVC Based Nanocomposites With the Incorporation of Electric and Magnetic Nanofillers for the Shielding Against EMI and Thermal Imaging Signals
Authors: H. M. Fayzan Shakir, Khadija Zubair, Tingkai Zhao
Abstract:
Electromagnetic (EM) waves are now used widely. Cell phone signals, Wi-Fi signals, wireless telecommunications and so on all use EM waves, which in turn create EM pollution. EM pollution can have serious effects on both human health and nearby electronic devices. EM waves have electric and magnetic components that disturb the flow of charged particles in both the human nervous system and electronic devices. The shielding of both humans and electronic devices is a prime concern today. EM waves can cause headaches, anxiety, suicide and depression, nausea, fatigue and loss of libido in humans, and malfunctioning in electronic devices. Polyaniline (PANI) and polypyrrole (PPY) were successfully synthesized by chemical polymerization using ammonium persulfate and DBSNa as oxidants, respectively. Barium ferrites (BaFe) were also prepared by the co-precipitation method and calcined at 1050 °C for 8 h. Nanocomposite thin films with various combinations and compositions of polyvinyl chloride (PVC), PANI, PPY and BaFe were prepared. X-ray diffraction was first used to confirm the successful fabrication of all nanofillers, a particle size analyzer to measure their exact size, and scanning electron microscopy to examine their shape. According to electromagnetic interference theory, electrical conductivity is the prime property required for EMI shielding, so the four-probe technique was used to evaluate the DC conductivity of all samples. Samples with high concentrations of PPY and PANI exhibit remarkably increased electrical conductivity due to the formation of an interconnected network structure inside the PVC matrix, which is also confirmed by SEM analysis. Less than 1% transmission was observed over the whole NIR region (700 nm - 2500 nm). Also, an electromagnetic interference shielding effectiveness below -80 dB was observed in the microwave region (0.1 GHz to 20 GHz).
Keywords: nanocomposites, polymers, EMI shielding, thermal imaging
Procedia PDF Downloads 106
882 Hydrogen Production at the Forecourt from Off-Peak Electricity and Its Role in Balancing the Grid
Authors: Abdulla Rahil, Rupert Gammon, Neil Brown
Abstract:
The rapid growth of renewable energy sources and their integration into the grid have been motivated by the depletion of fossil fuels and environmental issues. Unfortunately, the grid is unable to cope with the predicted growth of renewable energy, which would lead to instability. To solve this problem, energy storage devices could be used. Electrolytic hydrogen production in an electrolyser is considered a promising option since it is a clean energy source (zero emissions). Flexible operation of an electrolyser (producing hydrogen during the off-peak electricity period and stopping at other times) could bring many benefits, such as reducing the cost of hydrogen and helping to balance the electricity system. This paper investigates the price of hydrogen under flexible operation compared with continuous operation, while serving the customer (a hydrogen filling station) without interruption. An optimization algorithm is applied to investigate the hydrogen station in both cases (flexible and continuous operation). Three different scenarios are tested to see whether the off-peak electricity price can enhance the reduction of the hydrogen cost. These scenarios are: a standard tariff (single-tier system) throughout the day (assumed 12 p/kWh) while still satisfying the demand for hydrogen; using off-peak electricity at a lower price (assumed 5 p/kWh) and shutting down the electrolyser at other times; and using lower-price electricity at off-peak times and higher-price electricity at other times. The study looks at the city of Derna, which is located on the coast of the Mediterranean Sea (32° 46′ 0 N, 22° 38′ 0 E) and has high wind resource potential.
Hourly wind speed data collected over 24½ years, from 1990 to 2014, were used in addition to hourly radiation and hourly electricity demand data collected over a one-year period, together with the petrol station data.
Keywords: hydrogen filling station, off-peak electricity, renewable energy, electrolytic hydrogen
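The flexible-versus-continuous comparison can be sketched as follows. The 12 p/kWh and 5 p/kWh prices come from the scenarios in the abstract; the electrolyser specific energy, the daily hydrogen demand and the 8 h off-peak window are illustrative assumptions, and the sketch only compares electricity cost, not the paper's full optimization.

```python
# Minimal sketch: daily electricity cost of meeting a fixed hydrogen demand
# under continuous vs flexible (off-peak-only) electrolyser operation.

SPECIFIC_ENERGY = 55.0      # kWh of electricity per kg of H2 (assumed)
DAILY_DEMAND_KG = 100.0     # hydrogen demand of the filling station (assumed)

def electricity_cost(price_day_p, price_offpeak_p, offpeak_hours, flexible):
    """Daily electricity cost in pounds, given tariffs in pence/kWh."""
    energy_kwh = DAILY_DEMAND_KG * SPECIFIC_ENERGY
    if flexible:
        # all production shifted into the off-peak window
        return energy_kwh * price_offpeak_p / 100.0
    # continuous operation spreads production over the whole day
    share_offpeak = offpeak_hours / 24.0
    avg_price = share_offpeak * price_offpeak_p + (1 - share_offpeak) * price_day_p
    return energy_kwh * avg_price / 100.0

continuous = electricity_cost(12.0, 5.0, 8, flexible=False)
flexible = electricity_cost(12.0, 5.0, 8, flexible=True)
print(round(continuous), round(flexible))   # flexible operation is cheaper
```

In the paper, this trade-off is handled by an optimization that must also keep enough hydrogen in storage to serve the station without interruption.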
Procedia PDF Downloads 231
881 A Review of Digital Twins to Reduce Emission in the Construction Industry
Authors: Zichao Zhang, Yifan Zhao, Samuel Court
Abstract:
The carbon emission problem of the traditional construction industry has long been a pressing issue. With the growing emphasis on environmental protection and the advancement of science and technology, the organic integration of digital technology and emission reduction has gradually become a mainstream solution. Among various sophisticated digital technologies, digital twins, which involve creating virtual replicas of physical systems or objects, have gained enormous attention in recent years as tools to improve productivity, optimize management and reduce carbon emissions. However, the relatively high implementation costs of digital twins, in terms of finance, time, and manpower, have limited their widespread adoption. As a result, most current applications are concentrated within a few industries. In addition, the creation of digital twins relies on a large amount of data and requires designers to possess exceptional skills in information collection, organization, and analysis. Unfortunately, these capabilities are often lacking in the traditional construction industry. Furthermore, as a relatively new concept, digital twins are expressed and used differently across industries, and this lack of standardized practice poses a challenge in creating a high-quality digital twin framework for construction. This paper first reviews the current academic studies and industrial practices focused on reducing greenhouse gas emissions in the construction industry using digital twins. It then identifies the challenges that may be encountered during the design and implementation of a digital twin framework specific to this industry and proposes potential directions for future research. This study shows that digital twins possess substantial potential and significance in enhancing the working environment within the traditional construction industry, particularly in their ability to support decision-making processes.
It shows that digital twins can improve the work efficiency and energy utilization of the related machinery while helping the industry save energy and reduce emissions. This work will help scholars in this field to better understand the relationship between digital twins and energy conservation and emission reduction, and it also serves as a conceptual reference for practitioners implementing the related technologies.
Keywords: digital twins, emission reduction, construction industry, energy saving, life cycle, sustainability
Procedia PDF Downloads 100880 Integrating Cyber-Physical System toward Advance Intelligent Industry: Features, Requirements and Challenges
Authors: V. Reyes, P. Ferreira
Abstract:
In response to high levels of competitiveness, industrial systems have evolved to improve productivity. As a consequence, rapid increases in production volume and, simultaneously, customization demand lower costs, more variety, and consistent product quality. Reducing production cycle times, enabling customizability, and ensuring continuous quality improvement are key features of the advanced intelligent industry. In this scenario, customers and producers will be able to participate in the ongoing production life cycle through real-time interaction. To achieve this vision, transparency, predictability, and adaptability are the key features that give industrial systems the capability to adapt to customer demands, modifying the manufacturing process through autonomous responses and acting preventively to avoid errors. The industrial system incorporates a diverse set of components that, in the advanced industry, are expected to be decentralized, to communicate end to end, and to be capable of making their own decisions through feedback. The evolution towards the advanced intelligent industry defines a set of stages that endow components with intelligence and enhance efficiency up to the decision-making stage. The integrated system follows an industrial cyber-physical system (CPS) architecture whose real-time integration, based on a set of enabler technologies, links the physical and virtual worlds, generating the digital twin (DT). This instance allows sensor data to be incorporated from the real into the virtual world and provides the transparency required for real-time monitoring and control, contributing to important features of the advanced intelligent industry while simultaneously improving sustainability.
Taking the industrial CPS as the core technology on the way to the latest stage of the advanced intelligent industry, this paper reviews and highlights the correlations and contributions of the enabler technologies to the operationalization of each stage on that path. Based on this review, a real-time integration architecture for a cyber-physical system with applications to collaborative robotics is proposed, and the functionalities and issues involved in endowing the industrial system with adaptability are identified.
Keywords: cyber-physical systems, digital twin, sensor data, system integration, virtual model
Procedia PDF Downloads 118