Search results for: inverse method
16069 Optimization Analysis of a Concentric Tube Heat Exchanger with Field Synergy Principle
Abstract:
The paper presents an optimization analysis of heat exchanger design, using the response surface method and a genetic algorithm to explore the relationship between the optimal fluid flow velocity and the temperature of the heat exchanger under the field synergy principle. First, the finite volume method is used to calculate the flow temperature and flow rate distributions for the numerical analysis. The most suitable simulation equations are identified by response surface methodology. A genetic algorithm is then applied to optimize the relationship between fluid flow velocity and flow temperature of the heat exchanger. The results show that the field synergy angle plays a vital role in the performance of a real heat exchanger.
Keywords: optimization analysis, field synergy, heat exchanger, genetic algorithm
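The abstract couples a CFD model with a genetic algorithm. As a minimal sketch of the optimization step only, the snippet below evolves (velocity, temperature) pairs against a stand-in objective; the quadratic fitness function, the bounds, and the GA settings are illustrative assumptions, since the paper's true objective comes from its finite volume model.

```python
import random

def fitness(v, t):
    # Hypothetical objective standing in for the field-synergy criterion:
    # penalize deviation from an assumed optimal velocity (m/s) and
    # temperature (K). The real objective would come from the CFD model.
    return (v - 1.5) ** 2 + 0.01 * (t - 330.0) ** 2

def genetic_search(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    # Individuals are (velocity, temperature) pairs in assumed bounds.
    pop = [(rng.uniform(0.1, 5.0), rng.uniform(280.0, 380.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind))
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()                     # arithmetic crossover
            child = (w * a[0] + (1 - w) * b[0],
                     w * a[1] + (1 - w) * b[1])
            # Gaussian mutation
            child = (child[0] + rng.gauss(0, 0.05),
                     child[1] + rng.gauss(0, 1.0))
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda ind: fitness(*ind))

v_opt, t_opt = genetic_search()
```

A real application would replace `fitness` with a call into the finite volume solver, which is where most of the computational cost lies.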
Procedia PDF Downloads 308

16068 Industrial Wastewater Sludge Treatment in Chongqing, China
Authors: Victor Emery David Jr., Jiang Wenchao, Yasinta John, Md. Sahadat Hossain
Abstract:
Sludge originates from the treatment of wastewater. It is the byproduct of wastewater treatment, containing concentrated heavy metals, poorly biodegradable trace organic compounds, and potentially pathogenic organisms (viruses, bacteria, etc.), which make it difficult to treat or dispose of. China, like other countries, is no stranger to the challenges posed by increasing volumes of wastewater. Treatment and disposal of sludge have been a problem for most cities in China, and the problem has been exacerbated by issues such as lack of technology and funding. Suitable methods for local climatic conditions are still unavailable for modern cities in China. Against this background, this paper describes the methods used for the treatment and disposal of industrial sludge and suggests a suitable method for Chongqing, China. The research found that the highest treatment rate of sludge in Chongqing was 10.08%, and that the industrial waste piping system is not separated from the domestic system. Considering the proliferation of industry and urbanization, sludge production in Chongqing is likely to increase; if this sludge is not properly managed, adverse health and environmental effects may follow. Disposal costs and methods for Chongqing were also analyzed. Research showed that incineration is the most expensive method of sludge disposal in Chongqing and in China generally, so subsequent research considered alternatives such as composting. Composting represents a relatively cheap disposal method given the vast population and the current technological and economic conditions of Chongqing, and of China at large.
Keywords: Chongqing/China, disposal, industrial, sludge, treatment
Procedia PDF Downloads 321

16067 Modelling the Effect of Distancing and Wearing of Face Masks on Transmission of COVID-19 Infection Dynamics
Authors: Nurudeen Oluwasola Lasisi
Abstract:
COVID-19 is an infection caused by a coronavirus, which has been designated a pandemic. In this paper, we propose a model to study the effect of distancing and wearing masks on the transmission dynamics of COVID-19 infection. The invariant region of the model is established. The COVID-19-free equilibrium and the reproduction number of the model were obtained. The local and global stability of the model is determined using the linearization technique and the Lyapunov method. It was found that the COVID-19-free equilibrium state is locally asymptotically stable in the feasible region Ω if R₀ < 1 and globally asymptotically stable if R₀ < 1, and unstable if R₀ > 1. Numerical analysis and simulations of the dynamics of COVID-19 infection are also presented.
Keywords: distancing, reproduction number, wearing of mask, local and global stability, modelling, transmission
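The threshold behavior described above (disease dies out when R₀ < 1, persists when R₀ > 1) can be illustrated with a minimal sketch: a basic SIR integration where masks and distancing scale down the contact rate. The transmission rate, recovery rate, coverages, and efficacies below are illustrative assumptions, not the paper's fitted values.

```python
def effective_r0(beta=0.4, gamma=0.2, mask_cov=0.0, mask_eff=0.5,
                 dist_cov=0.0, dist_eff=0.6):
    # Basic reproduction number R0 = beta/gamma, with the contact rate
    # scaled down by mask coverage/efficacy and distancing
    # coverage/efficacy. All rates and efficacies are illustrative.
    beta_eff = beta * (1 - mask_cov * mask_eff) * (1 - dist_cov * dist_eff)
    return beta_eff / gamma

def simulate_sir(r0, gamma=0.2, i0=1e-4, days=300):
    # Forward-Euler SIR integration (dt = 1 day); returns the final
    # epidemic size (fraction of the population ever infected).
    s, i, r = 1.0 - i0, i0, 0.0
    beta = r0 * gamma
    for _ in range(days):
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r

r0_base = effective_r0()                            # no intervention
r0_ctrl = effective_r0(mask_cov=0.8, dist_cov=0.7)  # masks + distancing
```

With these assumed parameters, the interventions push the effective reproduction number below 1, and the simulated epidemic size collapses accordingly.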
Procedia PDF Downloads 138

16066 Micromechanics Modeling of 3D Network Smart Orthotropic Structures
Authors: E. M. Hassan, A. L. Kalamkarov
Abstract:
Two micromechanical models for 3D smart composites with an embedded periodic or nearly periodic network of generally orthotropic reinforcements and actuators are developed and applied to cubic structures with unidirectional orientation of constituents. Analytical formulas for the effective piezothermoelastic coefficients are derived using the Asymptotic Homogenization Method (AHM). A Finite Element Analysis (FEA) is subsequently developed and used to examine the same periodic 3D network reinforced smart structures; the deformation responses from the FE simulations are used to extract effective coefficients, and the results from both techniques are compared. This work considers piezoelectric materials that respond linearly to changes in electric field, electric displacement, mechanical stress and strain, and thermal effects. This combination of electric fields and thermo-mechanical response in smart composite structures is characterized by piezoelectric and thermal expansion coefficients. The problem is represented by a unit cell, and the models are developed using the AHM and the FEA to determine the effective piezoelectric and thermal expansion coefficients. Each unit cell contains a number of orthotropic inclusions in the form of structural reinforcements and actuators. Using a matrix representation of the coupled response of the unit cell, the effective piezoelectric and thermal expansion coefficients are calculated and compared with the results of the asymptotic homogenization method. A very good agreement is shown between the two approaches.
Keywords: asymptotic homogenization method, finite element analysis, effective piezothermoelastic coefficients, 3D smart network composite structures
Procedia PDF Downloads 400

16065 An Intelligent Scheme Switching for MIMO Systems Using Fuzzy Logic Technique
Authors: Robert O. Abolade, Olumide O. Ajayi, Zacheaus K. Adeyemo, Solomon A. Adeniran
Abstract:
Link adaptation is an important strategy for achieving robust wireless multimedia communications based on quality of service (QoS) demand. Scheme switching in multiple-input multiple-output (MIMO) systems is an aspect of link adaptation: it involves selecting among different MIMO transmission schemes or modes so as to adapt to varying radio channel conditions for the purpose of achieving QoS delivery. However, finding the most appropriate switching method in MIMO links is still a challenge, as existing methods are either computationally complex or not always accurate. This paper presents an intelligent switching method for a MIMO system consisting of two schemes, transmit diversity (TD) and spatial multiplexing (SM), using a fuzzy logic technique. In this method, two channel quality indicators (CQI), namely the average received signal-to-noise ratio (RSNR) and the received signal strength indicator (RSSI), are measured and passed as inputs to the fuzzy logic system, which then gives a decision, i.e., an inference. The switching decision of the fuzzy logic system is fed back to the transmitter to switch between the TD and SM schemes. Simulation results show that the proposed fuzzy logic-based switching technique outperforms the conventional static switching technique in terms of bit error rate and spectral efficiency.
Keywords: channel quality indicator, fuzzy logic, link adaptation, MIMO, spatial multiplexing, transmit diversity
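A minimal sketch of the fuzzy inference step described above: triangular membership functions grade the two CQIs, and a two-rule base maps weak channels to TD (robustness) and strong channels to SM (throughput). The membership breakpoints and the rule aggregation are illustrative assumptions, not the paper's tuned values.

```python
def tri(x, a, b, c):
    # Triangular membership function with feet a, c and peak b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def switch_scheme(rsnr_db, rssi_dbm):
    # Membership grades (breakpoints are illustrative assumptions).
    snr_low    = tri(rsnr_db, -5, 5, 15)
    snr_high   = tri(rsnr_db, 10, 20, 30)
    sig_weak   = tri(rssi_dbm, -100, -85, -70)
    sig_strong = tri(rssi_dbm, -75, -60, -45)
    # Rule base: weak SNR OR weak signal -> TD; strong SNR AND strong
    # signal -> SM. max/min implement fuzzy OR/AND.
    td = max(snr_low, sig_weak)
    sm = min(snr_high, sig_strong)
    return "SM" if sm > td else "TD"

mode = switch_scheme(25.0, -50.0)   # a strong channel favors SM
```

The transmitter would evaluate this decision periodically and feed it back over the control channel; a full Mamdani system would add more granular linguistic terms and a defuzzification stage.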
Procedia PDF Downloads 153

16064 Synthesis of a Hybrid Material (PVA/SiO₂/TiO₂) by Sol-Gel Method
Authors: Gueridi Bachir, Dadache Derradji, Rouabah Farid
Abstract:
This work focuses on the preparation and characterization of poly(vinyl alcohol)/silica gel/nano-TiO₂ films, and on the effect of 1% titanium dioxide (TiO₂) nanoparticles on the properties of poly(vinyl alcohol) (PVA)/silica films. Fourier transform infrared spectroscopy (FT-IR), water contact angle measurements, and ultraviolet-visible spectrometry (UV-VIS) were used to characterize the hybrid films obtained. The PVA/SiO₂/nano-TiO₂ films were successfully synthesized. The FT-IR analysis of the chemical bonds clearly showed that the PVA backbone is linked to the (SiO₂-TiO₂) network. UV-VIS tests indicated that the UV shielding properties of the hybrid films were drastically enhanced as a result of the addition of TiO₂. The water contact angle results revealed that TiO₂ nanoparticles used as a doping compound have an important influence on the hydrophilicity of PVA/SiO₂ thin films.
Keywords: sol-gel method, hybrid materials, PVA/SiO₂/TiO₂, spectroscopical characterization
Procedia PDF Downloads 14

16063 Cupric Oxide Thin Films for Optoelectronic Application
Authors: Sanjay Kumar, Dinesh Pathak, Sudhir Saralch
Abstract:
Copper oxide is a semiconductor that has been studied for several reasons: the natural abundance of the starting material copper (Cu), the ease of production by Cu oxidation, its non-toxic nature, and its reasonably good electrical and optical properties. Copper oxide is well known as cuprite. Cuprite is a p-type semiconductor with a band gap energy of 1.21 to 1.51 eV; as a p-type semiconductor, conduction arises from the presence of holes in the valence band (VB) due to doping/annealing. CuO is attractive as a selective solar absorber since it has high solar absorbency and low thermal emittance, and it is a very promising candidate for solar cell applications as a suitable material for photovoltaic energy conversion. It has been demonstrated that the dip technique can be used to deposit CuO films in a simple manner using a metallic chloride (CuCl₂.2H₂O) as the starting material. Copper oxide films were prepared using a methanolic solution of cupric chloride (CuCl₂.2H₂O) at three baking temperatures. Three samples were made, which turn black after heating. XRD data confirm that the films are of the CuO phase at a particular temperature. The optical band gap of the CuO films calculated from optical absorption measurements is 1.90 eV, which is quite comparable to the reported value. The dip technique is a very simple and low-cost method that requires no sophisticated specialized setup. Coating of substrates with a large surface area can easily be obtained by this technique compared with physical evaporation techniques and spray pyrolysis. Another advantage of the dip technique is that it is very easy to coat both sides of the substrate instead of only one, and to deposit on otherwise inaccessible surfaces. This method is well suited for applying coatings to the inner and outer surfaces of tubes of various diameters and shapes.
The main advantage of the dip coating method lies in the fact that it is possible to deposit a variety of layers having good homogeneity and mechanical and chemical stability with a very simple setup. In this paper, the preparation of CuO thin films by the dip coating method and their characterization will be presented.
Keywords: absorber material, cupric oxide, dip coating, thin film
Procedia PDF Downloads 309

16062 Evaluating Urban City Indices: A Study for Investigating Functional Domains, Indicators and Integration Methods
Authors: Fatih Gundogan, Fatih Kafali, Abdullah Karadag, Alper Baloglu, Ersoy Pehlivan, Mustafa Eruyar, Osman Bayram, Orhan Karademiroglu, Wasim Shoman
Abstract:
Nowadays, many cities around the world are investing their efforts and resources to facilitate their citizens' lives and make cities more livable and sustainable by implementing the newly emerged phenomenon of the smart city. For this purpose, related research institutions prepare and publish smart city indices or benchmarking reports aiming to measure a city's current 'smartness' status. Several functional domains and various indicators, along with different selection and calculation methods, are found within such indices and reports. The selection criteria vary across institutions, resulting in inconsistency in ranking and evaluation. This research aims to evaluate the impact of selecting such functional domains, indicators, and calculation methods, which may cause changes in the rank. For that, six functional domains, i.e., Environment, Mobility, Economy, People, Living, and Governance, were selected, covering 19 focus areas and 41 sub-focus (variable) areas. 60 out of 191 indicators were also selected according to several criteria, identified through an extensive literature review of 13 well-known global indices and reports and the ISO 37120 standard for the sustainable development of communities. The values of the identified indicators were obtained from reliable sources for ten cities, then normalized and standardized to objectively investigate the impact of the chosen indicators. Moreover, the effect of choosing an integration method to represent the indicator values for each city is investigated by comparing the results of two of the most used methods, i.e., geometric aggregation and fuzzy logic. The essence of these methods is assigning each indicator a weight reflecting its relative significance; however, the two methods resulted in different weights for the same indicator.

As a result of this study, the alternation in city ranking resulting from each method was investigated and discussed separately. Generally, each method produced a different ranking for the selected cities. However, it was observed that within certain functional areas the rank remained unchanged in both integration methods. Based on the results of the study, it is recommended to utilize a common platform and method to objectively evaluate cities around the world. The common method should provide policymakers with proper tools to evaluate their decisions and investments relative to other cities. Moreover, for smart city indices, at least 481 different indicators were found, which is an immense number of indicators to consider, especially for a smart city index. Further work should be devoted to finding mutual indicators representing the index purpose globally and objectively.
Keywords: functional domain, urban city index, indicator, smart city
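The normalization and geometric-aggregation pipeline described above can be sketched in a few lines. The city names, indicator values, and equal weights below are hypothetical placeholders; the study's actual 60 indicators and learned weights are not reproduced here.

```python
def minmax_normalize(values, higher_is_better=True):
    # Rescale a list of indicator values to [0, 1] across cities.
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scores = [(v - lo) / (hi - lo) for v in values]
    return scores if higher_is_better else [1 - s for s in scores]

def geometric_index(indicator_scores, weights):
    # Weighted geometric aggregation; a small epsilon keeps a single
    # zero score from annihilating the whole index.
    eps = 1e-6
    idx = 1.0
    for s, w in zip(indicator_scores, weights):
        idx *= (s + eps) ** w
    return idx

# Hypothetical data: green area per capita (higher is better) and
# PM2.5 concentration (lower is better) for three made-up cities.
raw = {
    "CityA": {"green_area": 40.0, "pm25": 12.0},
    "CityB": {"green_area": 25.0, "pm25": 8.0},
    "CityC": {"green_area": 10.0, "pm25": 30.0},
}
names = list(raw)
green = minmax_normalize([raw[c]["green_area"] for c in names], True)
air = minmax_normalize([raw[c]["pm25"] for c in names], False)
weights = [0.5, 0.5]
index = {c: geometric_index([g, a], weights)
         for c, g, a in zip(names, green, air)}
ranking = sorted(names, key=lambda c: -index[c])
```

Because the geometric mean penalizes any very low score, it rewards balanced cities, whereas an arithmetic mean would let one strong domain fully compensate for a weak one; this is one reason different integration methods reorder the ranking.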
Procedia PDF Downloads 147

16061 Estimation and Restoration of Ill-Posed Parameters for Underwater Motion Blurred Images
Authors: M. Vimal Raj, S. Sakthivel Murugan
Abstract:
Underwater images suffer quality degradation due to environmental conditions. One of the major problems in an underwater image is motion blur caused by the imaging device or the movement of the object. To rectify this in post-processing, the parameters of the blurred image must be estimated; the point spread function is therefore estimated from the properties of the image spectrum. To improve the estimation accuracy of the parameters, an Optimized Polynomial Lagrange Interpolation (OPLI) method is applied after the angle and length measurement of the motion-blurred images. The data were collected from real-time environments in Chennai and processed. The proposed OPLI method shows better accuracy than the existing classical cepstral, Hough, and Radon transform estimation methods for underwater images.
Keywords: image restoration, motion blur, parameter estimation, radon transform, underwater
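Once the blur angle and length are estimated, restoration typically proceeds by building the corresponding linear-motion PSF and inverting it. The sketch below uses a frequency-domain Wiener filter with NumPy; it assumes the PSF parameters are already known (the paper's contribution is precisely their estimation), and the regularization constant is an illustrative choice.

```python
import numpy as np

def motion_psf(shape, length, angle_deg):
    # Linear motion-blur PSF of the given length and angle, centered
    # in the array (parameters assumed known from the estimation step).
    psf = np.zeros(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    rad = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, 4 * length):
        y = int(round(cy + t * np.sin(rad)))
        x = int(round(cx + t * np.cos(rad)))
        if 0 <= y < shape[0] and 0 <= x < shape[1]:
            psf[y, x] = 1.0
    return psf / psf.sum()

def wiener_deblur(blurred, psf, k=1e-3):
    # Frequency-domain Wiener filter: F_hat = H* / (|H|^2 + k) * G,
    # where k regularizes frequencies at which H is near zero.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))

# Demo: blur a synthetic square, then restore it.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
psf = motion_psf((64, 64), 9, 0.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deblur(blurred, psf)
```

On noisy underwater frames, `k` would be raised toward the noise-to-signal power ratio; with a badly estimated angle or length, the restoration degrades quickly, which is why accurate parameter estimation matters.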
Procedia PDF Downloads 176

16060 Liver Lesion Extraction with Fuzzy Thresholding in Contrast Enhanced Ultrasound Images
Authors: Abder-Rahman Ali, Adélaïde Albouy-Kissi, Manuel Grand-Brochier, Viviane Ladan-Marcus, Christine Hoeffl, Claude Marcus, Antoine Vacavant, Jean-Yves Boire
Abstract:
In this paper, we present a new segmentation approach for focal liver lesions in contrast enhanced ultrasound imaging. This approach, based on a two-cluster Fuzzy C-Means methodology, uses type-II fuzzy sets to handle the uncertainty due to the image modality (presence of speckle noise, low contrast, etc.) and to calculate the optimum inter-cluster threshold. Fine boundaries are detected by a local recursive merging of ambiguous pixels. The method has been tested on a representative database. Compared to both the Otsu and type-I Fuzzy C-Means techniques, the proposed method significantly reduces segmentation errors.
Keywords: defuzzification, fuzzy clustering, image segmentation, type-II fuzzy sets
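A minimal sketch of the underlying two-cluster (type-I) Fuzzy C-Means thresholding on gray levels: memberships and centroids alternate until convergence, and the inter-cluster threshold is taken as the centroid midpoint. The paper's type-II extension and recursive boundary merging are not reproduced; this only illustrates the base clustering step.

```python
import numpy as np

def fcm_threshold(pixels, m=2.0, iters=50):
    # Two-cluster fuzzy C-means on gray levels (fuzzifier m > 1).
    x = np.asarray(pixels, dtype=float)
    c = np.array([x.min(), x.max()])                 # centroid init
    for _ in range(iters):
        d = np.abs(x[:, None] - c[None, :]) + 1e-9   # pixel-centroid distances
        u = 1.0 / (d ** (2 / (m - 1)))               # memberships (unnormalized)
        u /= u.sum(axis=1, keepdims=True)
        c = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    # Inter-cluster threshold: midpoint of the converged centroids.
    return float(c.mean()), c

# Demo on synthetic bimodal intensities (background vs lesion).
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(50, 5, 300), rng.normal(150, 5, 300)])
thr, centroids = fcm_threshold(pixels)
```

Pixels above the threshold are assigned to the lesion cluster; the type-II variant would additionally model uncertainty in the fuzzifier `m` before defuzzifying.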
Procedia PDF Downloads 485

16059 Fault Analysis of Induction Machine Using Finite Element Method (FEM)
Authors: Wiem Zaabi, Yemna Bensalem, Hafedh Trabelsi
Abstract:
The paper presents a finite element (FE) based efficient analysis procedure for induction machines (IM). Two FE formulations are used to achieve this goal: the magnetostatic and the non-linear transient time-stepped formulations. Studies based on finite element models offer much more information on the phenomena characterizing the operation of electrical machines than classical analytical models, which explains the growing interest in finite element investigations of electrical machines. Based on finite element models, this paper studies the influence of stator and rotor faults on the behavior of the IM. A simple dynamic model for an IM with an inter-turn winding fault and a broken bar fault is presented, and this fault model is used to study the IM under various fault conditions and severities. Simulations are conducted to validate the fault model for different levels of fault severity, and the comparison of the simulation results verifies the precision of the proposed FEM model. The paper also presents a detection technique based on Fast Fourier Transform (FFT) analysis of the stator current and the electromagnetic torque to detect broken rotor bar faults. The technique used and the results obtained clearly show the possibility of extracting signatures to detect and locate faults.
Keywords: Finite Element Method (FEM), Induction Motor (IM), short-circuit fault, broken rotor bar, Fast Fourier Transform (FFT) analysis
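The FFT-based signature extraction mentioned above can be sketched without the FE model: broken rotor bars are known to produce sidebands at (1 ± 2s)f around the supply frequency f in the stator current spectrum. Below, a synthetic stator current stands in for the FE output; the supply frequency, slip, and sideband amplitude are illustrative assumptions.

```python
import numpy as np

def sideband_amplitudes(current, fs, f_sup=50.0, slip=0.04):
    # Amplitudes of the (1 - 2s)f and (1 + 2s)f sideband components,
    # normalized by the fundamental, from a Hann-windowed FFT.
    n = len(current)
    spec = np.abs(np.fft.rfft(current * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    def amp_at(f):
        return spec[np.argmin(np.abs(freqs - f))]
    fund = amp_at(f_sup)
    return (amp_at((1 - 2 * slip) * f_sup) / fund,
            amp_at((1 + 2 * slip) * f_sup) / fund)

# Synthetic currents: healthy machine vs. broken-bar sidebands at
# 46 Hz and 54 Hz (slip = 0.04, supply = 50 Hz), 10 s at 1 kHz.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
healthy = np.sin(2 * np.pi * 50 * t)
faulty = (healthy
          + 0.05 * np.sin(2 * np.pi * 46 * t)
          + 0.05 * np.sin(2 * np.pi * 54 * t))
```

In practice the fault severity is tracked by the sideband-to-fundamental ratio in dB; a long acquisition window is needed so the frequency resolution separates the sidebands from the fundamental.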
Procedia PDF Downloads 301

16058 High Temperature Oxidation Behavior of Aluminized Steel by Arc Spray and Cementation Techniques
Authors: Minoo Tavakoli, Alireza Kiani Rashid, Abbas Afrasiabi
Abstract:
An aluminum coating was deposited on a mild steel substrate by electric arc spray and diffused into the base steel by diffusion treatment at 800 and 900 °C for 1 and 3 hours in static air. The alloy layers formed by diffusion at both temperatures were investigated, and their features were compared with those of pack cementation aluminized steel. High-temperature oxidation tests were carried out in air at 600 °C for 145 hours. The results indicated that the aluminide coatings obtained from this process significantly improve the high-temperature oxidation resistance in both methods due to the formation of an Al₂O₃ scale. Furthermore, the isothermal oxidation resistance of the arc spray technique is better than that of the pack cementation method. This can be attributed to voids formed at the intermetallic layer/Al layer interface, which are more numerous in the pack cementation method.
Keywords: electric arc spray, pack cementation, oxidation resistance, aluminized steel
Procedia PDF Downloads 468

16057 An Investigation on Electric Field Distribution around 380 kV Transmission Line for Various Pylon Models
Authors: C. F. Kumru, C. Kocatepe, O. Arikan
Abstract:
In this study, electric field distribution analyses for three pylon models are carried out with Finite Element Method (FEM) based software. Analyses are performed in both the stationary and time domains to observe instantaneous values along with the effective ones. The results show that different line geometries considerably affect the magnitude and distribution of the electric field, although the line voltages are the same. Furthermore, the maximum instantaneous electric field values obtained in the time domain analysis are considerably higher than the effective values in stationary mode. Consequently, electric field distribution analyses should be made individually for each different line model, and the exposure limit values or distances to residential buildings should be defined according to the results obtained.
Keywords: electric field, energy transmission line, finite element method, pylon
Procedia PDF Downloads 728

16056 The Effects of Logistical Centers Realization on Society and Economy
Authors: Anna Dolinayova, Juraj Camaj, Martin Loch
Abstract:
Presently, it is necessary to ensure the sustainable development of passenger and freight transport. The increasing volume of road freight has a negative impact on the environment and society, so it is necessary to increase the competitiveness of intermodal transport, which is more environmentally friendly. The study describes the effectiveness of realizing logistical centers for companies and society, and investigates how the partial internalization of external costs is reflected in the efficient use of these centers and increases the competitiveness of intermodal transport relative to road freight. In our research, we use comparative analysis and market research to describe the advantages of logistical centers for their users as well as for society as a whole. Normal costing is used to calculate infrastructure and total costs, and conversion costing to determine the external costs. We model the total societal costs of road freight transport and of the intermodal transport chain (assuming that most of the traffic is carried by rail) with different loading schemes for conditions in the Slovak Republic. Our research has shown that higher utilization of the intermodal transport chain benefits not only society but also the companies providing freight services. Increased use of the intermodal transport chain can bring many benefits to society that do not yield a direct, immediate financial return; they often bring multiplier effects, such as greater use of environmentally friendly transport modes and a reduction in total societal costs.
Keywords: delivery time, economy effectiveness, logistical centers, ecological efficiency, optimization, society
Procedia PDF Downloads 443

16055 Using Self Organizing Feature Maps for Classification in RGB Images
Authors: Hassan Masoumi, Ahad Salimi, Nazanin Barhemmat, Babak Gholami
Abstract:
Artificial neural networks have gained a lot of interest as empirical models for their powerful representational capacity and their multi-input and output mapping characteristics. In fact, most feed-forward networks with nonlinear nodal functions have been proved to be universal approximators. In this paper, we propose a new supervised method for color image classification based on self-organizing feature maps (SOFM). The algorithm is based on competitive learning: it partitions the input space using self-organizing feature maps to introduce the concept of local neighborhoods. Our image classification system takes RGB images as input. Experiments with simulated data showed that the separability of classes increased with longer training time. In addition, the results show that the proposed algorithm is effective for color image classification.
Keywords: classification, SOFM algorithm, neural network, neighborhood, RGB image
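The competitive-learning core of a SOFM can be sketched compactly: for each RGB input, the best matching unit (BMU) is found, and it and its grid neighbors are pulled toward the input, with learning rate and neighborhood radius decaying over time. The grid size, decay schedules, and synthetic two-color data below are illustrative choices, not the paper's configuration.

```python
import numpy as np

def train_sofm(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    # 2-D self-organizing feature map over RGB vectors. The learning
    # rate and Gaussian neighborhood radius decay linearly per epoch.
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows * cols, data.shape[1]))
    coords = np.array([(r, c) for r in range(rows)
                       for c in range(cols)], dtype=float)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        sigma = sigma0 * (1 - e / epochs) + 0.5
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # winner unit
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-dist2 / (2 * sigma ** 2))         # neighborhood
            w += lr * h[:, None] * (x - w)                # pull toward x
    return w

def bmu(x, w):
    return int(np.argmin(((w - x) ** 2).sum(axis=1)))

# Demo: two simulated RGB pixel classes (reddish vs. greenish).
rng = np.random.default_rng(2)
red = rng.normal([1.0, 0.0, 0.0], 0.05, (100, 3))
green = rng.normal([0.0, 1.0, 0.0], 0.05, (100, 3))
data = np.vstack([red, green])
w = train_sofm(data)
```

For supervised classification, each map unit is labeled by majority vote of the training pixels it wins; a new pixel then inherits the label of its BMU.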
Procedia PDF Downloads 478

16054 DC Bus Voltage Ripple Control of Photo Voltaic Inverter in Low Voltage Ride-Through Operation
Authors: Afshin Kadri
Abstract:
Using Renewable Energy Sources (RES) as a type of distributed generation (DG) unit is developing in distribution systems. The connection of these generation units to existing AC distribution systems changes the structure and some operational aspects of these grids. Most RES require power-electronic-based interfaces for connection to AC systems, and these interfaces consist of at least one DC/AC conversion unit. Nowadays, grid-connected inverters must be able to support the grid under voltage sag conditions. Two curves describe these conditions: the magnitude of the reactive component of the current as a function of the voltage drop value, and the minimum time for which the inverter must remain connected to the grid. This feature is named low voltage ride-through (LVRT). Implementing this feature causes problems in the operation of the inverter, among them an increase in the amplitude of high-frequency components of the injected current and operation away from the maximum power point in inverters connected to photovoltaic panels. An important phenomenon in these conditions is ripple in the DC bus voltage, which affects the operation of the inverter both directly and indirectly. Losses in the DC bus capacitors, which are electrolytic capacitors, increase their temperature and decrease their lifespan. In addition, if the inverter is connected directly to the photovoltaic panels and has the duty of maximum power point tracking, these ripples cause oscillations around the operating point and decrease the generated energy. The traditional method to eliminate these ripples is to use a bidirectional converter in the DC bus, which works as a buck and boost converter and transfers the ripples to its own DC bus. Despite eliminating the ripples in the DC bus, this method cannot solve the reliability problem, because it still uses an electrolytic capacitor in its DC bus.

In this work, a control method is proposed that uses the bidirectional converter as the fourth leg of the inverter and eliminates the DC bus ripples by injecting unbalanced currents into the grid. The proposed method works on the basis of constant power control; in addition to supporting the amplitude of the grid voltage, it stabilizes its frequency by injecting active power. The proposed method can also eliminate the DC bus ripples under deep voltage drops, which would otherwise increase the amplitude of the reference current beyond the nominal current of the inverter. In these conditions, the amplitude of the injected current for the faulty phases is kept at the nominal value, and its phase, together with the phase and amplitude of the other phases, is adjusted so that the ripples in the DC bus are eliminated, although the generated power decreases.
Keywords: renewable energy resources, voltage drop value, DC bus ripples, bidirectional converter
Procedia PDF Downloads 76

16053 Evaluation of Seismic Behavior of Steel Shear Wall with Opening with Hardener and Beam with Reduced Cross Section under Cycle Loading with Finite Element Analysis Method
Authors: Masoud Mahdavi
Abstract:
During an earthquake, the structure is subjected to seismic loads that cause stresses in the members of the building. The use of energy dissipation elements in the structure reduces the share of seismic forces carried by the main members of the building (especially the columns). The steel plate shear wall, one of the most widely used types of energy dissipation element, has evolved, and regular drilling of its inner plate has become a common configuration. In the present study, using the finite element method, a one-story steel plate shear wall (with dimensions of 447 × 246.6 cm) is modeled in Abaqus software in three different configurations, to which a cyclic load is applied. The steel shear wall has a horizontal element (beam) with a reduced beam section (RBS). The openings in the interior plate of the models are created with progressively increasing area, which makes the effect of increasing the opening area on the seismic performance of the steel shear wall completely clear. It was found that with an increasing opening area in the steel shear wall (with a reduced-section beam), the total displacement and plastic strain indicators increased, the structural capacity and total energy indicators decreased, and the von Mises stress index did not change much.
Keywords: steel plate shear wall with opening, cyclic loading, reduced cross-section beam, finite element method, Abaqus software
Procedia PDF Downloads 123

16052 Effect of Thermal Energy on Inorganic Coagulation for the Treatment of Industrial Wastewater
Authors: Abhishek Singh, Rajlakshmi Barman, Tanmay Shah
Abstract:
Coagulation is considered one of the predominant water treatment processes that improve the cost effectiveness of wastewater treatment. The purpose of this experiment on thermal coagulation is to increase the efficiency and the rate of reaction. The process uses renewable sources of energy and an improved, time-minimized method, aimed at easing the water scarcity of regions on the brink of depletion. This paper covers the various effects of temperature on the standard coagulation treatment of wastewater and their effect on water quality. In addition, the coagulation is performed with an admixture of bottom/fly ash that acts as an adsorbent and removes most of the minor and macro particles by adsorption, which not only helps reduce the environmental burden of fly ash but also brings economic benefit. The method of sand filtration is also incorporated into the process; the sand filter is an environmentally friendly wastewater treatment method that is relatively simple and inexpensive. The existing parameters were satisfied by the experimental results obtained in this study. The initial turbidity of the wastewater is 162 NTU and its initial temperature is 27 °C. The temperature variation of the entire process is 50-80 °C, and the concentration of alum in the wastewater is 60-320 mg/L. The turbidity range after treatment is 8.31-28.1 NTU, with a pH variation of 7.73-8.29. The effective time taken is 10 minutes for thermal mixing and sedimentation. The results indicate that the presence of thermal energy affects the coagulation treatment process. The influence of thermal energy on turbidity is assessed along with renewable energy sources and the increase in the rate of reaction of the treatment process.
Keywords: adsorbent, sand filter, temperature, thermal coagulation
Procedia PDF Downloads 321

16051 The Criteria of the Aesthetic Quality of Art: Contemporary Photography
Authors: Artem Surkov
Abstract:
This work is devoted to the problem of determining aesthetic quality in the context of contemporary art. The object of study is photography, regarded as a kind of art that demands a specific system of quality assessment. Objective: to define aesthetic criteria in the art of photography. For this study, texts by such influential authors as Clement Greenberg and Rosalind Krauss, Theodor Adorno and Herbert Marcuse, Charlotte Cotton and Boris Groys, and Viktor Miziano and Ekaterina Degot' were analyzed. First of all, there are two different kinds of photography: classic art photography (by Ansel Adams) and photography as a kind of art (by Andreas Gursky). This text concerns photography as a kind of art. The main principle of the study is the synthesis of two different approaches, modernism and postmodernism. This method helps us define uniform criteria of aesthetic quality in photography as a kind of art. The criteria mentioned in the conclusion are: aesthetic rationality, aesthetic economy, awareness (of photographic techniques and references), and the intention to go beyond form, practice, and method.
Keywords: aesthetic, art, criteria of quality, photography, visually
Procedia PDF Downloads 418

16050 Robust Heart Rate Estimation from Multiple Cardiovascular and Non-Cardiovascular Physiological Signals Using Signal Quality Indices and Kalman Filter
Authors: Shalini Rankawat, Mansi Rankawat, Rahul Dubey, Mazad Zaveri
Abstract:
Physiological signals such as the electrocardiogram (ECG) and arterial blood pressure (ABP) in the intensive care unit (ICU) are often seriously corrupted by noise, artifacts, and missing data, which lead to errors in the estimation of heart rate (HR) and incidences of false alarms from ICU monitors. Clinical support in the ICU requires the most reliable heart rate estimation. Cardiac activity, because of its relatively high electrical energy, may introduce artifacts in electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) recordings. This paper presents a robust heart rate estimation method based on the detection of R-peaks of ECG artifacts in EEG, EMG, and EOG signals, using an energy-based function and a novel Signal Quality Index (SQI) assessment technique. The SQIs of the physiological signals (EEG, EMG, and EOG) were obtained by correlating the nonlinear energy operator (Teager energy) of these signals with either the ECG or the ABP signal. HR is estimated from the ECG, ABP, EEG, EMG, and EOG signals by separate Kalman filters based on the individual SQIs. Data fusion of the HR estimates is then performed by weighting each estimate by the corresponding Kalman filter's SQI-modified innovations. The fused HR estimate is more accurate and robust than any of the individual estimates. The method was evaluated on the MIMIC II database of PhysioNet, recorded from bedside monitors of ICU patients, and provides an accurate HR estimate even in the presence of noise and artifacts.
Keywords: ECG, ABP, EEG, EMG, EOG, ECG artifacts, Teager-Kaiser energy, heart rate, signal quality index, Kalman filter, data fusion
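The per-channel filtering and SQI-weighted fusion described above can be sketched with a scalar Kalman filter and a weighted average. The process/measurement noise values, the example HR observations, and the per-channel SQI values below are illustrative assumptions; the paper's weighting uses SQI-modified innovations rather than the raw SQIs shown here.

```python
def kalman_step(x, p, z, q=0.1, r=1.0):
    # Scalar constant-level Kalman filter: predict (add process noise q),
    # then update toward the measurement z with gain k.
    p = p + q
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

def fuse_estimates(estimates, sqis):
    # SQI-weighted fusion: each channel's HR estimate contributes in
    # proportion to its signal quality index.
    total = sum(sqis)
    if total == 0:
        return None     # no usable channel
    return sum(e * s for e, s in zip(estimates, sqis)) / total

# Demo: track a ~72 bpm heart rate from noisy observations, then fuse
# three hypothetical channel estimates (ECG, ABP, and an artifact-laden
# EEG-derived estimate) by their assumed SQIs.
zs = [70, 74, 71, 73, 72, 73, 71, 72]     # noisy HR observations (bpm)
x, p = 60.0, 1.0                          # deliberately poor initial state
for z in zs:
    x, p = kalman_step(x, p, z)
fused = fuse_estimates([72.5, 71.0, 90.0], [0.9, 0.7, 0.1])
```

The low-SQI channel (the 90 bpm outlier) is nearly ignored in the fusion, which is the mechanism by which the method suppresses false alarms from corrupted leads.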
Procedia PDF Downloads 696
16049 Ultrasonic Pulse Velocity Investigation of Polypropylene and Steel Fiber Reinforced Concrete
Authors: Erjola Reufi, Jozefita Marku, Thomas Bier
Abstract:
The ultrasonic pulse velocity (UPV) method has long been shown to provide a reliable means of estimating material properties, and it offers a unique opportunity for direct, quick and safe assessment of buildings damaged by earthquake, fatigue, conflagration and other catastrophic scenarios. In this investigation, hybrid reinforced concrete has been studied by the UPV method. Hooked-end steel fiber of lengths 50 and 30 mm was added to concrete in proportions of 0, 0.25, 0.5, and 1% by volume of concrete. In addition, polypropylene fiber of lengths 12, 6, and 3 mm was added at 0.1, 0.2, and 0.4% by volume of concrete. Fifteen different mixtures were prepared to investigate the relation between compressive strength and UPV values, and to investigate the effect of the volume and type of fiber on UPV values.
Keywords: compressive strength, polypropylene fiber, steel fiber, ultrasonic pulse velocity, volume, type of fiber
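A common way to relate UPV readings to compressive strength is an exponential law f = a·e^(bV); the abstract does not state which regression form the authors used, so the exponential model below is an assumption. The sketch log-linearizes it and fits by ordinary least squares.

```python
# Hypothetical sketch: fit f = a * exp(b * V), relating compressive
# strength f (MPa) to ultrasonic pulse velocity V (km/s), by
# log-linearizing ln f = ln a + b*V and applying ordinary least squares.
# The exponential form is a common empirical choice, not the paper's.
import math

def fit_exponential(v, f):
    """Return (a, b) of f = a*exp(b*V) via least squares on ln f."""
    y = [math.log(fi) for fi in f]
    n = len(v)
    mv, my = sum(v) / n, sum(y) / n
    b = (sum((vi - mv) * (yi - my) for vi, yi in zip(v, y))
         / sum((vi - mv) ** 2 for vi in v))
    a = math.exp(my - b * mv)
    return a, b

def predict(a, b, v):
    """Predicted strength at pulse velocity v."""
    return a * math.exp(b * v)
```

With the fifteen mixtures' (UPV, strength) pairs as input, the fitted curve would give the calibration the study is after.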
Procedia PDF Downloads 402
16048 Optimal Harmonic Filters Design of Taiwan High Speed Rail Traction System
Authors: Ying-Pin Chang
Abstract:
This paper presents a method combining particle swarm optimization with nonlinear time-varying evolution and orthogonal arrays (PSO-NTVEOA) in the planning of harmonic filters for the high speed railway traction system with specially connected transformers in unbalanced three-phase power systems. The objective is to simultaneously minimize the cost of the filter, the filter loss, and the total harmonic distortion of currents and voltages at each bus. An orthogonal array is first used to obtain the initial solution set, which is then treated as the initial training sample. Next, the PSO-NTVEOA method parameters are determined by matrix experiments with an orthogonal array, in which a minimal number of experiments approximates the effect of full factorial experiments. The PSO-NTVEOA method is then applied to design optimal harmonic filters in the Taiwan High Speed Rail (THSR) traction system, where both rectifiers and inverters with IGBTs are used. The results of the illustrative examples verify the feasibility of the PSO-NTVEOA for designing an optimal passive harmonic filter of the THSR system, and the design approach greatly reduces the harmonic distortion. Three design schemes are compared: the V-V connection suppresses the 3rd order harmonic, while the Scott and Le Blanc connections achieve better harmonic improvement than the V-V connection.
Keywords: harmonic filters, particle swarm optimization, nonlinear time-varying evolution, orthogonal arrays, specially connected transformers
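The nonlinear time-varying evolution and orthogonal-array initialization are specific to the paper's PSO-NTVEOA; the canonical PSO that it extends can be sketched as follows, with all parameter values (inertia, acceleration weights, swarm size) as illustrative assumptions.

```python
# Minimal canonical particle swarm optimization (PSO) sketch. The
# paper's PSO-NTVEOA adds nonlinear time-varying coefficients and
# orthogonal-array initialization, omitted here; parameter values
# (w, c1, c2, swarm size) are illustrative assumptions.
import random

def pso(objective, bounds, n_particles=20, n_iter=200, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration weights
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp the updated position to the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the filter-planning problem, a weighted sum of filter cost, filter loss and THD terms would play the role of `objective`, with filter component values as the particle coordinates.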
Procedia PDF Downloads 392
16047 The Incidental Linguistic Information Processing and Its Relation to General Intellectual Abilities
Authors: Evgeniya V. Gavrilova, Sofya S. Belova
Abstract:
The present study was aimed at clarifying the relationship between general intellectual abilities and efficiency in a free recall task and a rhymed-word generation task after incidental exposure to linguistic stimuli. Theoretical frameworks stress that general intellectual abilities are based on intentional mental strategies. In this context, it seems crucial to examine the efficiency of processing incidentally presented information in a cognitive task and its relation to general intellectual abilities. The sample consisted of 32 Russian students. Participants were exposed to pairs of words; each pair consisted of two common nouns or two city names. Participants had to decide whether a city name was presented in each pair, so the words' semantics were processed intentionally. The city names were considered focal stimuli, whereas the common nouns were considered peripheral stimuli. In addition, each pair of words could be rhymed or non-rhymed, but this phonemic characteristic of the stimuli was processed incidentally. Then participants were asked to produce as many rhymes as they could for new words; the stimuli presented earlier could be used as well. After that, participants had to retrieve all words presented earlier. Finally, verbal and non-verbal abilities were measured with a number of psychometric tests. In the free recall task, the intentionally processed focal stimuli had an advantage in recall compared to the peripheral stimuli. In addition, all rhymed stimuli were recalled more effectively than non-rhymed ones. The inverse effect was found in the word generation task, where participants tended to use mainly peripheral stimuli rather than focal ones. Furthermore, peripheral rhymed stimuli were the most frequently used category of stimuli in this task.
Thus the information that was processed incidentally had a supplemental influence on the efficiency of stimuli processing in the free recall task as well as in the word generation task. Different patterns of correlations between intellectual abilities and processing efficiency for the different stimuli were revealed in the two tasks. Non-verbal reasoning ability correlated positively with free recall of peripheral rhymed stimuli but was not related to performance on the rhymed-word generation task. Verbal reasoning ability correlated positively with free recall of focal stimuli. In the rhymed-word generation task, verbal intelligence correlated negatively with generation of focal stimuli and positively with generation of all peripheral stimuli. The present findings lead to two key conclusions. First, incidentally processed stimuli had an advantage in the free recall and word generation tasks; incidental information processing thus appears to be crucial for subsequent cognitive performance. Secondly, incidentally processed stimuli were recalled more frequently by participants with high non-verbal reasoning ability and were used more effectively by participants with high verbal reasoning ability in subsequent cognitive tasks. This implies that general intellectual abilities could benefit from operating on different levels of information processing during cognitive problem solving. This research was supported by the "Grant of President of RF for young PhD scientists" (contract № 14.Z56.17.2980-MK) and Grant № 15-36-01348a2 of the Russian Foundation for Humanities.
Keywords: focal and peripheral stimuli, general intellectual abilities, incidental information processing
Procedia PDF Downloads 231
16046 Management and Agreement Protocol in Computer Security
Authors: Abdulameer K. Hussain
Abstract:
When dealing with a cryptographic system, we note that many activities are performed by the parties of the system; the most prominent is the process of agreement between the parties on how to perform the cryptographic tasks so that the system is more secure, trustworthy and reliable. The most common agreement among parties is a key agreement, alongside other types of agreements. Although there have been attempts from some quarters to find other effective agreement methods, these methods remain limited to the traditional agreements. This paper presents different parameters to perform the agreement task more effectively, including the key alternative, the agreement on the encryption method used, and the agreement to prevent denial of service. To manage and achieve these goals, the method proposes a control and monitoring entity that manages these agreements by collecting statistical information on the opinions of the authorized parties in the cryptographic system. These statistics help the entity take the proper decision about the agreement factors. This entity is called the Agreement Manager (AM).
Keywords: agreement parameters, key agreement, key exchange, security management
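As a concrete illustration of the key agreement mentioned above, a textbook Diffie-Hellman exchange can be sketched. The Agreement Manager and its statistics gathering are not modeled here, and the prime modulus is illustrative only; real deployments use standardized large prime groups.

```python
# Textbook Diffie-Hellman key agreement, as a concrete instance of the
# "key agreement" discussed in the abstract. The Agreement Manager (AM)
# is NOT modeled; the prime below (Mersenne prime 2**127 - 1) is for
# illustration only -- real systems use standardized DH groups.
import secrets

P = 2 ** 127 - 1   # illustrative prime modulus
G = 3              # illustrative generator

def keypair():
    """Generate a private exponent and the matching public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_secret(own_priv, their_pub):
    """Both sides compute the same value: g^(ab) mod p."""
    return pow(their_pub, own_priv, P)

# Two parties agree on a secret without ever transmitting it:
a_priv, a_pub = keypair()   # Alice publishes a_pub
b_priv, b_pub = keypair()   # Bob publishes b_pub
k_alice = shared_secret(a_priv, b_pub)
k_bob = shared_secret(b_priv, a_pub)
```

The AM entity proposed in the paper would sit above such an exchange, deciding (from the parties' collected opinions) which agreement parameters, such as group or encryption method, to use.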
Procedia PDF Downloads 421
16045 The Different Roles between Sodium and Potassium Ions in Ion Exchange of WO3/SiO2 Catalysts
Authors: Kritsada Pipitthapan
Abstract:
WO3/SiO2 catalysts were modified by an ion exchange method with sodium hydroxide or potassium hydroxide solution. The performance of the modified catalysts was tested in the metathesis of ethylene and trans-2-butene to propylene. During ion exchange, sodium and potassium ions played different roles. Sodium-modified catalysts showed constant trans-2-butene conversion and propylene selectivity as the sodium concentration in the solution was varied. In contrast, potassium-modified catalysts showed a reduction in conversion and an increase in selectivity. These results suggest that potassium hydroxide may affect the transformation of the tungsten oxide active species, resulting in the decrease in conversion, whereas sodium hydroxide does not. Moreover, the modification improved catalyst stability by lowering the amount of coke deposited on the catalyst surface.
Keywords: acid sites, alkali metal, isomerization, metathesis
Procedia PDF Downloads 251
16044 Application of Nonparametric Geographically Weighted Regression to Evaluate the Unemployment Rate in East Java
Authors: Sifriyani Sifriyani, I Nyoman Budiantara, Sri Haryatmi, Gunardi Gunardi
Abstract:
East Java Province ranks first among Indonesian provinces in the number of counties and cities, and it has the largest population. In 2015, the population reached 38,847,561, a figure reflecting very high population growth. High population growth is feared to increase unemployment levels. In this study, the researchers mapped and modeled the unemployment rate with six variables expected to influence it. Modeling was done by the nonparametric geographically weighted regression method with a truncated spline approach. This method was chosen because the spline method is flexible, so the models tend to find their own form of estimation. The modeling involves knot points, which mark changes in the data. The optimum knot points were selected by the minimum value of Generalized Cross Validation (GCV). Based on the research, six variables were found to affect the level of unemployment in East Java: the percentage of the population educated above high school, the rate of economic growth, the population density, the investment ratio of the total labor force, the regional minimum wage, and the ratio of big and medium scale industry to the work force. The nonparametric geographically weighted regression model with the truncated spline approach had a coefficient of determination of 98.95% and an MSE of 0.0047.
Keywords: East Java, nonparametric geographically weighted regression, spatial, spline approach, unemployment rate
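The GCV-based knot selection step can be illustrated with a one-knot truncated linear spline, a deliberate simplification: the paper's model is geographically weighted and multi-predictor, and the GCV denominator here uses the parameter count p in place of the trace of the hat matrix. GCV(k) = n·RSS(k)/(n − p)², minimized over candidate knots k.

```python
# One-knot truncated linear spline f(x) = b0 + b1*x + b2*(x-k)_+ with
# GCV-based knot selection. Simplified sketch: the paper's model is
# geographically weighted and multi-predictor; GCV here approximates
# the trace of the hat matrix by the parameter count p = 3.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def fit_truncated_spline(xs, ys, knot):
    """Least-squares fit via the normal equations; returns (coef, RSS)."""
    rows = [[1.0, x, max(0.0, x - knot)] for x in xs]
    p = 3
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    rhs = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(p)]
    coef = solve(A, rhs)
    rss = sum((y - sum(c * v for c, v in zip(coef, r))) ** 2
              for r, y in zip(rows, ys))
    return coef, rss

def gcv(xs, ys, knot):
    """GCV(k) = n * RSS / (n - p)^2 for the one-knot spline."""
    n, p = len(xs), 3
    _, rss = fit_truncated_spline(xs, ys, knot)
    return n * rss / (n - p) ** 2
```

Minimizing `gcv` over a grid of candidate knots mirrors the paper's selection of optimum knot points.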
Procedia PDF Downloads 321
16043 Teachers and Innovations in Information and Communication Technology
Authors: Martina Manenova, Lukas Cirus
Abstract:
This article introduces research focused on elementary school teachers' approach to innovations in ICT. The diffusion of innovations theory, developed by E. M. Rogers, captures the processes of innovation adoption. The research method was derived from this theory, and Rogers' questionnaire on the diffusion of innovations was used as the basic research instrument. The research sample consisted of elementary school teachers. Comparison with Rogers' results shows that the teachers in the sample fell predominantly into the so-called early majority, and the overall distribution of the data was rather central (early adopters, early majority, and late majority). Teachers very rarely appeared in the edge positions (innovators, laggards). The obtained results can be applied to teaching practice, especially in the implementation of new technologies and techniques into the educational process.
Keywords: innovation, diffusion of innovation, information and communication technology, teachers
Procedia PDF Downloads 293
16042 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model
Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier
Abstract:
Human motion recognition has received extensive attention in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification avoids the misclassification that can happen when recognizing similar motions. Two experiments are conducted. In the first one, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, waving, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods on the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
Keywords: human motion recognition, motion representation, Laban Movement Analysis, Discrete Hidden Markov Model
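The two-direction DHMM idea can be sketched with the standard forward algorithm: each class scores a sequence with one model for the forward direction and one for the reversed sequence, and the class with the best combined score wins. The toy parameters below are illustrative, not the paper's trained models.

```python
# Forward algorithm for a discrete HMM, plus the two-direction scoring
# idea from the abstract: each motion class keeps one model for the
# forward sequence and one for the reversed sequence. Parameters are
# illustrative; exp/log mixing is adequate only for short sequences.
import math

def log_forward(obs, pi, A, B):
    """Log-likelihood of `obs` under DHMM (pi, A, B)."""
    n_states = len(pi)
    alpha = [math.log(pi[s]) + math.log(B[s][obs[0]])
             for s in range(n_states)]
    for o in obs[1:]:
        alpha = [
            math.log(sum(math.exp(alpha[t]) * A[t][s]
                         for t in range(n_states)))
            + math.log(B[s][o])
            for s in range(n_states)
        ]
    return math.log(sum(math.exp(a) for a in alpha))

def classify(obs, models):
    """models: {label: (forward_model, backward_model)}, each model a
    (pi, A, B) triple. Combines forward score on obs with the
    backward model's score on the reversed sequence."""
    def score(label):
        fwd, bwd = models[label]
        return log_forward(obs, *fwd) + log_forward(obs[::-1], *bwd)
    return max(models, key=score)
```

In the paper's setting, `obs` would be the quantized LMA descriptor sequence of a motion, and each gesture class would contribute one forward-trained and one backward-trained DHMM.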
Procedia PDF Downloads 207
16041 Effect of Mesh Size on the Supersonic Viscous Flow Parameters around an Axisymmetric Blunt Body
Authors: Haoui Rabah
Abstract:
The aim of this work is to analyze a viscous flow around an axisymmetric blunt body, taking into account the mesh size both in the free stream and in the boundary layer. The Navier-Stokes equations are solved using the finite volume method to determine the flow parameters and the detached shock position. The numerical technique uses the Flux Vector Splitting method of Van Leer. Adequate time stepping parameters, CFL coefficient and mesh size level are selected to ensure numerical convergence. The effect of the mesh size is significant on the shear stress and velocity profiles. The best solution is obtained with a very fine grid. This study enabled us to confirm that the boundary layer thickness can be determined only if the mesh size is smaller than a certain limit value given by our calculations.
Keywords: supersonic flow, viscous flow, finite volume, blunt body
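The CFL coefficient mentioned above limits the explicit time step to Δt = CFL·Δx/(|u| + c), taken as the minimum over all cells. A one-dimensional inviscid sketch of this rule is below; the axisymmetric viscous solver would additionally include a diffusive time-step limit, which is omitted here.

```python
# Sketch of the CFL-limited explicit time step dt = CFL * dx/(|u| + c),
# taken as the minimum over all cells. One-dimensional inviscid form;
# a viscous solver would also impose a diffusive time-step limit.
GAMMA = 1.4  # ratio of specific heats for air

def sound_speed(p, rho):
    """Speed of sound c = sqrt(gamma * p / rho)."""
    return (GAMMA * p / rho) ** 0.5

def cfl_time_step(cells, cfl=0.5):
    """cells: list of (dx, u, p, rho) tuples; returns the stable dt."""
    return cfl * min(dx / (abs(u) + sound_speed(p, rho))
                     for dx, u, p, rho in cells)
```

Refining the mesh (smaller dx) therefore shrinks the stable time step, which is why the mesh-size study trades accuracy in the boundary layer against computational cost.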
Procedia PDF Downloads 604
16040 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods
Authors: Dario Milani, Guido Morgenthal
Abstract:
Fluid dynamic computation of wind-induced forces on bluff bodies, e.g. light flexible civil structures or airplane wings approaching the ground at high incidence, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the use of small-scale devices such as guide-vanes in bridge design to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One of the solution methods for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free stream velocity and are also diffused. This representation yields the main advantages of low numerical diffusion, compact discretization as the vorticity is strongly localized, implicit treatment of the free-space boundary conditions typical for this class of FSI problems, and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the accuracy achievable. In the classical VPM method, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails or fairings.
For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization might become prohibitively expensive even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution, without substantially increasing the global computational cost, by computing a correction of the particle-particle interaction in regions of interest. In this paper, different strategies are presented to extend the conventional VPM method so as to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal sub-stepping, to increase the accuracy of the particle convection in certain regions, as well as dynamically re-discretizing the particle map to control both the global and the local number of particles. Finally, these methods are applied to a test case, and the resulting improvements in efficiency and accuracy of the proposed extensions are presented, along with their relevant applications.
Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method
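The O(Np²) particle-particle interaction referred to above is the direct Biot-Savart summation over all vortex pairs; a 2D sketch is below. The smoothing radius `eps` is a common regularization whose exact form in the paper is not stated and is assumed here.

```python
# Direct O(N^2) Biot-Savart velocity evaluation for 2D point vortices,
# the particle-particle interaction whose cost the adaptive strategies
# aim to reduce. The smoothing radius eps is an assumed regularization.
import math

def induced_velocities(xs, ys, gammas, eps=1e-6):
    """Velocity (u, v) induced at each particle by all other vortices:
    u = -Gamma_j*(y_i - y_j)/(2*pi*r^2),  v = Gamma_j*(x_i - x_j)/(2*pi*r^2)."""
    n = len(xs)
    u = [0.0] * n
    v = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx, dy = xs[i] - xs[j], ys[i] - ys[j]
            r2 = dx * dx + dy * dy + eps * eps  # smoothed squared distance
            coeff = gammas[j] / (2.0 * math.pi * r2)
            u[i] += -coeff * dy
            v[i] += coeff * dx
    return u, v
```

The double loop makes the quadratic cost explicit: doubling the particle count quadruples the work, which motivates both the local re-discretization and the localized interaction corrections discussed in the abstract.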
Procedia PDF Downloads 262