Search results for: feature method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20002

16612 Performance Evaluation and Comparison between the Empirical Mode Decomposition, Wavelet Analysis, and Singular Spectrum Analysis Applied to the Time Series Analysis in Atmospheric Science

Authors: Olivier Delage, Hassan Bencherif, Alain Bourdier

Abstract:

Signal decomposition approaches represent an important step in time series analysis, providing useful knowledge and insight into the data and the characteristics of the underlying dynamics, while also facilitating tasks such as noise removal and feature extraction. As most observational time series are nonlinear and nonstationary, resulting from the interaction of several physical processes at different time scales, experimental time series exhibit fluctuations at all time scales and require the development of specific signal decomposition techniques. The most commonly used techniques are data driven, making it possible to obtain well-behaved signal components without any prior assumptions about the input data. Among the most popular and most cited time series decomposition techniques are the empirical mode decomposition and its variants, the empirical wavelet transform, and singular spectrum analysis. With the increasing popularity and utility of these methods in wide-ranging applications, it is imperative to gain a good understanding of and insight into the operation of these algorithms. In this work, we describe all of the techniques mentioned above, as well as their ability to denoise signals, capture trends, identify components corresponding to the physical processes involved in the evolution of the observed system, and deduce the dimensionality of the underlying dynamics. Results obtained with all of these methods on experimental total ozone column and rainfall time series are discussed and compared.
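Of the three techniques discussed, singular spectrum analysis is the most compact to illustrate in code. The following is a minimal SSA sketch in Python (an illustration only, not the authors' implementation), applied to a synthetic series with an assumed trend-plus-oscillation structure:

```python
import numpy as np

def ssa_decompose(series, window, n_components):
    """Basic SSA: embed the series into a trajectory (Hankel) matrix,
    take the SVD, then reconstruct each rank-1 term by anti-diagonal
    (Hankel) averaging back to a series."""
    n = len(series)
    k = n - window + 1
    # Each column is a lagged window of the series.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = []
    for j in range(n_components):
        Xj = s[j] * np.outer(U[:, j], Vt[j])  # rank-1 elementary matrix
        # Average each anti-diagonal to recover a series of length n.
        comp = np.array([np.mean(Xj[::-1, :].diagonal(i - window + 1))
                         for i in range(n)])
        components.append(comp)
    return np.array(components)

# Synthetic example: linear trend + oscillation + noise.
t = np.linspace(0, 10, 500)
signal = 0.5 * t + np.sin(2 * np.pi * t)
rng = np.random.default_rng(0)
noisy = signal + 0.1 * rng.standard_normal(t.size)
# A linear trend and a sine each occupy roughly two SSA components.
comps = ssa_decompose(noisy, window=100, n_components=4)
denoised = comps.sum(axis=0)
```

Summing the leading components acts as the denoising step; the discarded trailing components carry most of the noise.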

Keywords: denoising, empirical mode decomposition, singular spectrum analysis, time series, underlying dynamics, wavelet analysis

Procedia PDF Downloads 117
16611 Detonalization of Punjabi: Towards a Loss of Linguistic Indigeneity

Authors: Sukhvinder Singh

Abstract:

Punjabi belongs to the New Indo-Aryan group of languages, which in turn is a branch of the Indo-European language family. Punjabi is spoken widely in the western Punjab (in Pakistan) and the eastern Punjab (the Indian state of Punjab, Haryana, Delhi, Himachal Pradesh and J&K), as well as abroad (particularly in Canada, the USA, the UK and the Arab Emirates). Besides India and Pakistan, Punjabi is one of the most widely spoken languages in Canada after English and French, and it is taught as a second language in many community schools in British Columbia. The total number of Punjabi speakers worldwide, including India, Pakistan and the diaspora, is more than one hundred million. Punjabi has a long linguistic tradition, and a large number of scholars have studied it at different linguistic levels. Various studies are devoted to its special phonological characteristics, especially tone, which has now started disappearing in favour of aspiration, a rare example of a language change in progress in the reverse direction. This paper treats this reversal as a change towards a loss of linguistic indigeneity. Tone, a distinctive linguistic feature of Punjabi, is being lost due to the increasing influence of Hindi and English, particularly in the speech of urban Punjabis and Punjabis settled abroad. In this paper, an attempt has been made to discuss the sociolinguistics and sociology of the Punjabi language and Punjab in order to trace the initiation and progression of this change towards a loss of linguistic indigeneity.

Keywords: language change in reversal, reaspiration, detonalization, new Indo-Aryan group

Procedia PDF Downloads 172
16610 Urban Design as a Tool to Address Safety in a Crime Ridden Area: A Case Study of Malviya Nagar, New Delhi

Authors: Shramana Mondal

Abstract:

As a city grows in population, sprawl, and complexity, the use of public spaces increases considerably, and ensuring people's safety becomes an utmost priority. While active monitoring measures may be necessary in some places, urban design can play a major role in devising self-policing spaces and encouraging active public life. This paper aims to explore the various spatial and psychological reasons for the occurrence of crime and the role of urban design in addressing this issue. In this research, the principles of urban design are examined and then projected onto an actual site. The sociological, psychological, typological and morphological factors that affect the safety of a space are addressed, and possible framing guidelines, controls and urban design strategies for a safe neighborhood are explored. On the basis of a statistical survey, the residential and street network of Malviya Nagar in Delhi was chosen as the area of demonstration. The proposed program comprises a safe neighborhood and a movement network, addressed on the basis of the four principles of natural surveillance, territoriality, community building, and connectivity. The paper concludes with a discussion of urban design as an effective tool: creating an intensely active zone with mixed-use features to ensure round-the-clock activity, and ensuring a safe pedestrian zone by introducing a sense of community and territoriality, thus achieving an active, useful and public-friendly space.

Keywords: crime, public life, safety, urban design

Procedia PDF Downloads 399
16609 Development and Validation of a Rapid Turbidimetric Assay to Determine the Potency of Cefepime Hydrochloride in Powder Injectable Solution

Authors: Danilo F. Rodrigues, Hérida Regina N. Salgado

Abstract:

Introduction: The emergence of microorganisms resistant to a large number of clinically approved antimicrobials has been increasing, which restricts the options for the treatment of bacterial infections. As a strategy, drugs with high antimicrobial activity are in evidence. Among them, the cephalosporins stand out; the fourth-generation cephalosporin cefepime (CEF) is a semi-synthetic product with activity against various aerobic Gram-positive bacteria (e.g. oxacillin-resistant Staphylococcus aureus) and Gram-negative bacteria (e.g. Pseudomonas aeruginosa). There are few studies in the literature on the development of microbiological methodologies for the analysis of this antimicrobial, so research in this area is highly relevant to optimize the analysis of this drug in industry and ensure the quality of the marketed product. The development of microbiological methods for the analysis of antimicrobials has gained strength in recent years and has been highlighted in relation to physicochemical methods, especially because they make it possible to determine the bioactivity of the drug against a microorganism. In this context, the aim of this work was the development and validation of a microbiological method for the quantitative analysis of CEF in lyophilized powder for injectable solution by turbidimetric assay. Method: Staphylococcus aureus ATCC 6538 IAL 2082 was used as the test microorganism, and the culture medium chosen was Casoy broth. The test was performed under temperature control (35.0 °C ± 2.0 °C) and incubated for 4 hours in a shaker. Readings were taken at a wavelength of 530 nm using a spectrophotometer. The turbidimetric microbiological method was validated by determining the following parameters: linearity, precision (repeatability and intermediate precision), accuracy and robustness, according to ICH guidelines.
Results and discussion: Among the parameters evaluated for method validation, the linearity showed suitable results in the statistical analyses, with correlation coefficients (r) of 0.9990 for the CEF reference standard and 0.9997 for the CEF sample. The precision presented the following values: 1.86% (intraday), 0.84% (interday) and 0.71% (between analysts). The accuracy of the method was proven through the recovery test, where the mean value obtained was 99.92%. The robustness was verified by changing the volume of culture medium, the brand of culture medium, the incubation time in the shaker and the wavelength. The potency of CEF present in the samples of lyophilized powder for injectable solution was 102.46%. Conclusion: The proposed turbidimetric microbiological method for the quantification of CEF in lyophilized powder for injectable solution proved to be fast, linear, precise, accurate and robust, in accordance with all the requirements, and can be used in routine quality control analysis in the pharmaceutical industry as an option for microbiological analysis.
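As an illustration of the linearity parameter reported above, the correlation coefficient of a calibration line can be computed from an ordinary least-squares fit; the concentrations and turbidance readings below are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical calibration points: CEF concentration (µg/mL) vs. turbidance reading.
conc = np.array([3.0, 4.5, 6.0, 7.5, 9.0])
reading = np.array([0.152, 0.226, 0.301, 0.374, 0.449])

# Least-squares calibration line and its correlation coefficient.
slope, intercept = np.polyfit(conc, reading, 1)
r = np.corrcoef(conc, reading)[0, 1]
# ICH-style linearity acceptance is typically r very close to 1 (e.g. r >= 0.999).
```

The same fit also yields the slope and intercept used later to convert sample readings back into potency.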

Keywords: cefepime hydrochloride, quality control, turbidimetric assay, validation

Procedia PDF Downloads 362
16608 A Method for Quantifying Arsenolipids in Sea Water by HPLC-High Resolution Mass Spectrometry

Authors: Muslim Khan, Kenneth B. Jensen, Kevin A. Francesconi

Abstract:

Trace amounts (ca 1 µg/L, 13 nM) of arsenic are present in sea water, mostly as the oxyanion arsenate. In contrast, arsenic is present in marine biota (animals and algae) at very high levels (up to 100,000 µg/kg), a significant portion of which occurs as lipid-soluble compounds collectively termed arsenolipids. The complex nature of sea water presents an analytical challenge for detecting trace compounds and monitoring their environmental path. We developed a simple method using liquid-liquid extraction combined with HPLC-high resolution mass spectrometry capable of detecting traces of arsenolipids; the method removed more than 99% of the sample matrix while recovering more than 80% of the six target arsenolipids, with a limit of detection of 0.003 µg/L.

Keywords: arsenolipids, sea water, HPLC-high resolution mass spectrometry

Procedia PDF Downloads 366
16607 Determining Optimal Number of Trees in Random Forests

Authors: Songul Cinaroglu

Abstract:

Background: Random Forest is an efficient, multi-class machine learning method used for classification, regression and other tasks. The method operates by constructing each tree from a different bootstrap sample of the data. Determining the number of trees in a random forest is an open question in the literature on improving the classification performance of random forests. Aim: The aim of this study is to analyze whether there is an optimal number of trees in Random Forests and how the performance of Random Forests differs as the number of trees increases, using sample health data sets in R. Method: We analyzed the performance of Random Forests as the number of trees grows, doubling the number of trees at every iteration, using the "randomForest" package in R. For determining the minimum and optimal number of trees, we performed McNemar's test and used the Area Under the ROC Curve, respectively. Results: The analysis showed that as the number of trees grows, the forest does not always perform better than forests with fewer trees. In other words, a larger number of trees only increases computational costs without improving performance. Conclusion: Although the general practice in using random forests is to generate a large number of trees to obtain high performance, this study shows that increasing the number of trees does not always improve performance. Future studies can compare different kinds of data sets and different performance measures to test whether Random Forest performance changes as the number of trees increases.
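The doubling experiment described in the Method section can be sketched as follows; this uses scikit-learn and a synthetic data set rather than the R "randomForest" package and the health data of the study:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a health data set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Double the number of trees at every iteration and record test accuracy.
scores = {}
n_trees = 10
while n_trees <= 320:
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=42)
    rf.fit(X_tr, y_tr)
    scores[n_trees] = accuracy_score(y_te, rf.predict(X_te))
    n_trees *= 2
```

Plotting `scores` against the tree count typically shows accuracy plateauing after a few hundred trees, matching the paper's observation that extra trees mainly add computational cost.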

Keywords: classification methods, decision trees, number of trees, random forest

Procedia PDF Downloads 395
16606 Method for Selecting and Prioritising Smart Services in Manufacturing Companies

Authors: Till Gramberg, Max Kellner, Erwin Gross

Abstract:

This paper presents a comprehensive investigation into the topic of smart services and IIoT-Platforms, focusing on their selection and prioritization in manufacturing organizations. First, a literature review is conducted to provide a basic understanding of the current state of research in the area of smart services. Based on discussed and established definitions, a definition approach for this paper is developed. In addition, value propositions for smart services are identified based on the literature and expert interviews. Furthermore, the general requirements for the provision of smart services are presented. Subsequently, existing approaches for the selection and development of smart services are identified and described. In order to determine the requirements for the selection of smart services, expert opinions from successful companies that have already implemented smart services are collected through semi-structured interviews. Based on the results, criteria for the evaluation of existing methods are derived. The existing methods are then evaluated according to the identified criteria. Furthermore, a novel method for the selection of smart services in manufacturing companies is developed, taking into account the identified criteria and the existing approaches. The developed concept for the method is verified in expert interviews. The method includes a collection of relevant smart services identified in the literature. The actual relevance of the use cases in the industrial environment was validated in an online survey. The required data and sensors are assigned to the smart service use cases. The value proposition of the use cases is evaluated in an expert workshop using different indicators. Based on this, a comparison is made between the identified value proposition and the required data, leading to a prioritization process. The prioritization process follows an established procedure for evaluating technical decision-making processes. 
In addition to the technical requirements, the prioritization process includes other evaluation criteria such as the economic benefit, the conformity of the new service offering with the company strategy, or the customer retention enabled by the smart service. Finally, the method is applied and validated in an industrial environment. The results of these experiments are critically reflected upon and an outlook on future developments in the area of smart services is given. This research contributes to a deeper understanding of the selection and prioritization process as well as the technical considerations associated with smart service implementation in manufacturing organizations. The proposed method serves as a valuable guide for decision makers, helping them to effectively select the most appropriate smart services for their specific organizational needs.

Keywords: smart services, IIoT, industrie 4.0, IIoT-platform, big data

Procedia PDF Downloads 89
16605 Application of Support Vector Machines in Fault Detection and Diagnosis of Power Transmission Lines

Authors: I. A. Farhat, M. Bin Hasan

Abstract:

A developed approach for the protection of power transmission lines using the Support Vector Machines (SVM) technique is presented. In this paper, the SVM technique is utilized for the classification and isolation of faults in power transmission lines. Accurate fault classification and location results are obtained for all possible types of short-circuit faults. As in distance protection, the approach utilizes post-fault voltage and current samples as inputs. The main advantage of the method introduced here is that it can easily be extended to any power transmission line.
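As a hedged sketch of the classification step (not the authors' implementation), an SVM can be trained on post-fault voltage/current features; the feature values and fault classes below are entirely synthetic:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical post-fault features (voltage pu, current pu) for three fault
# classes, e.g. single line-to-ground, line-to-line, three-phase.
rng = np.random.default_rng(1)
centers = np.array([[0.2, 4.0], [0.5, 2.5], [0.9, 1.0]])
X = np.vstack([c + 0.05 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat([0, 1, 2], 50)

# RBF-kernel SVM separating the fault classes.
clf = SVC(kernel="rbf", C=10.0)
clf.fit(X, y)
train_acc = clf.score(X, y)
```

In a real relay, the feature vector would come from a windowed sample of the measured post-fault waveforms rather than hand-placed cluster centers.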

Keywords: fault detection, classification, diagnosis, power transmission line protection, support vector machines (SVM)

Procedia PDF Downloads 559
16604 Development of a Program for the Evaluation of Thermal Performance Applying the Centre Scientifique et Technique du Bâtiment Method, Case Study: Classroom

Authors: Iara Rezende, Djalma Silva, Alcino Costa Neto

Abstract:

Considering the transformations of the contemporary world linked to globalization and the climate changes caused by global warming, environmental and energy issues have become increasingly present in decisions on the world stage. Thus, with the aim of reducing the impacts caused by human activities, energy efficiency measures have emerged, which are also applicable within the scope of civil engineering. A large part of the energy demand of buildings is related to the need to adapt the internal environment to the users' comfort and productivity, so measures capable of reducing this need can minimize both the impacts of climate change and the energy consumption of the building. However, these important measures are currently little used by civil engineers, either because of the interdisciplinarity of the subject, the time required to apply certain methods, or the difficulty of interpreting the results produced by computational programs that often take a complex and rarely applied approach. Thus, the development of a Java application with a simpler, more applied approach was proposed to evaluate the thermal performance of a building, in order to obtain results capable of assisting civil engineers in decision-making related to users' thermal comfort. The program was built in the Java programming language, and the method used for the evaluation was that of the Centre Scientifique et Technique du Bâtiment (CSTB). The program was used to evaluate the thermal performance of a university classroom. The analysis was carried out through simulations considering the worst climatic situation during the building's occupation. At the end of the process, a favorable result was obtained regarding the classroom comfort zone and the feasibility of using the program, thus achieving the proposed objectives.

Keywords: building occupation, CSTB method, energy efficiency measures, Java application, thermal comfort

Procedia PDF Downloads 131
16603 Investigating the Viability of Small-Scale Rapid Alloy Prototyping of Interstitial Free Steels

Authors: Talal S. Abdullah, Shahin Mehraban, Geraint Lodwig, Nicholas P. Lavery

Abstract:

The defining property of Interstitial Free (IF) steels is formability, comprehensively measured using the Lankford coefficient (r-value) on uniaxial tensile test data. The contributing factors supporting this feature are grain size, orientation, and elemental additions. The processes that effectively modulate these factors are the casting procedure, hot rolling, and heat treatment. An existing methodology is well-practised in the steel industry; however, large-scale production and experimentation consume significant amounts of time, money, and material. Introducing small-scale rapid alloy prototyping (RAP) as an alternative process would considerably reduce these drawbacks relative to standard practice. The aim is to fine-tune the existing fundamental procedures implemented in the industrial plant to adapt them to the RAP route. IF material is remelted in an 80-gram coil induction melting (CIM) glovebox. To produce small grains, maximum deformation must be induced in the cast material during the hot rolling process. The rolled strip must then satisfy the polycrystalline behaviour of the bulk material by displaying a resemblance in microstructure, hardness, and formability to the literature and to actual plant steel. A successful outcome of this work is that small-scale RAP can achieve target compositions with similar microstructures and statistically consistent mechanical properties, which complements and accelerates the development of novel steel grades.

Keywords: rapid alloy prototyping, plastic anisotropy, interstitial free, miniaturised tensile testing, formability

Procedia PDF Downloads 114
16602 Optimal Injected Current Control for Shunt Active Power Filter Using Artificial Intelligence

Authors: Brahim Berbaoui

Abstract:

In this paper, a new particle swarm optimization (PSO) based method is proposed for the implementation of optimal harmonic power flow in power systems. In this approach, a proportional integral controller for the reference compensating currents of the active power filter is designed in order to minimize the total harmonic distortion (THD). The simulation results show that the new control method using the PSO approach is not only easy to implement but also very effective in reducing the unwanted harmonics and compensating reactive power. The studies were carried out using the MATLAB Simulink Power System Toolbox.
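A minimal PSO loop of the kind used to tune such a controller can be sketched as follows; the objective here is a generic test function standing in for the THD cost, not the paper's power system model:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimisation: each particle tracks its personal
    best, the swarm shares a global best, and velocities blend inertia with
    cognitive and social pulls."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Stand-in objective: a smooth bowl playing the role of the THD cost surface;
# in the paper the decision variables would be the PI controller gains.
best_x, best_val = pso(lambda x: np.sum(x**2), dim=3)
```

Replacing the lambda with a function that simulates the filter and returns the measured THD turns this into the tuning loop the abstract describes.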

Keywords: shunt active power filter, power quality, current control, proportional integral controller, particle swarm optimization

Procedia PDF Downloads 616
16601 The Analysis of a Reactive Hydromagnetic Internal Heat Generating Poiseuille Fluid Flow through a Channel

Authors: Anthony R. Hassan, Jacob A. Gbadeyan

Abstract:

In this paper, the analysis of a reactive hydromagnetic Poiseuille fluid flow under each of sensitized, Arrhenius and bimolecular chemical kinetics through a channel in the presence of a heat source is carried out. An exothermic reaction is assumed, while the consumption of the material is neglected. The Adomian Decomposition Method (ADM) together with Padé approximation is used to obtain the solutions of the governing nonlinear non-dimensional differential equations. The effects of various physical parameters on the velocity and temperature fields of the fluid flow are investigated. The entropy generation analysis and the conditions for thermal criticality are also presented.

Keywords: chemical kinetics, entropy generation, thermal criticality, Adomian decomposition method (ADM), Padé approximation

Procedia PDF Downloads 464
16600 Comparison of Wet and Microwave Digestion Methods for the Al, Cu, Fe, Mn, Ni, Pb and Zn Determination in Some Honey Samples by ICPOES in Turkey

Authors: Huseyin Altundag, Emel Bina, Esra Altıntıg

Abstract:

The aim of this study is to determine the amounts of Al, Cu, Fe, Mn, Ni, Pb and Zn in honey samples gathered from the Sakarya and Istanbul regions of Turkey. Sample preparation was performed via a wet decomposition method and a microwave digestion system. The accuracy of the methods was checked against the standard reference materials Tea Leaves (INCT-TL-1) and NIST SRM 1515 Apple Leaves. A comparison between the gathered data and literature values has been made, and possible sources of contamination of the honey samples have been considered. The obtained results will be presented at ICCIS 2015: XIII International Conference on Chemical Industry and Science.

Keywords: wet decomposition, microwave digestion, trace element, honey, ICP-OES

Procedia PDF Downloads 462
16599 Evaluation of Dynamic Behavior of a Rotor-Bearing System in Operating Conditions

Authors: Mohammad Hadi Jalali, Behrooz Shahriari, Mostafa Ghayour, Saeed Ziaei-Rad, Shahram Yousefi

Abstract:

Most flexible rotors can be considered as beam-like structures. In many cases, rotors are modeled as one-dimensional bodies, made basically of beam-like shafts with rigid bodies attached to them. This approach is typical of rotor dynamics, both analytical and numerical, and several rotor dynamic codes, based on the finite element method, follow this trend. In this paper, a finite element model based on Timoshenko beam elements is utilized to analyze the lateral dynamic behavior of a certain rotor-bearing system in operating conditions.

Keywords: finite element method, Timoshenko beam elements, operational deflection shape, unbalance response

Procedia PDF Downloads 428
16598 Study of Wake Dynamics for a Rim-Driven Thruster Based on Numerical Method

Authors: Bao Liu, Maarten Vanierschot, Frank Buysschaert

Abstract:

The present work examines the wake dynamics of a rim-driven thruster (RDT) with Computational Fluid Dynamics (CFD). Unsteady Reynolds-averaged Navier-Stokes (URANS) equations were solved in the commercial solver ANSYS Fluent in combination with the SST k-ω turbulence model. The moving reference frame (MRF) and sliding mesh (SM) approaches to handling the rotational movement of the propeller were compared in the transient simulations. Validation and verification of the numerical model were performed to ensure numerical accuracy. Two representative scenarios were considered: the bollard condition (J=0) and a very light loading condition (J=0.7). From the results, it is confirmed that, compared to the SM method, the MRF method is not suitable for resolving the unsteady flow features, as it only gives the general mean flow and smooths out many characteristic details of the flow field. Based on the simulation results obtained with the SM technique, the instantaneous wake flow field under both conditions is presented and analyzed, most notably the helical vortex structure. It is observed that the tip vortices, blade shed vortices, and hub vortices are present in the wake flow field and convect downstream in a highly non-linear way. The shear-layer vortices shedding from the duct display a strong interaction with the distorted tip vortices in an irregular manner.

Keywords: computational fluid dynamics, rim-driven thruster, sliding mesh, wake dynamics

Procedia PDF Downloads 259
16597 An Analysis of Oil Price Changes and Other Factors Affecting Iranian Food Basket: A Panel Data Method

Authors: Niloofar Ashktorab, Negar Ashktorab

Abstract:

Oil exports fund nearly half of Iran's government expenditures, and for many years other countries have imposed various sanctions against Iran. Sanctions that primarily target Iran's key energy sector have harmed Iran's economy, although their strategic effects may diminish as Iran adjusts to them economically. In this study, we evaluate the impact of the oil price and of sanctions against Iran on food commodity prices using a panel data method. We find that the food commodity prices, the oil price and the real exchange rate are stationary. The results show a positive effect of oil price changes, the real exchange rate and sanctions on food commodity prices.
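A fixed-effects panel estimate of the kind reported can be sketched with the within (demeaning) transformation; the data below are simulated, not the Iranian series used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_periods = 10, 40          # e.g. food commodities observed monthly
unit = np.repeat(np.arange(n_units), n_periods)

oil = rng.normal(size=n_units * n_periods)      # stand-in regressor (oil price)
alpha = rng.normal(size=n_units)[unit]          # unobserved unit fixed effects
# True data-generating process with a known slope of 0.8 on the oil variable.
price = 2.0 + 0.8 * oil + alpha + 0.1 * rng.normal(size=oil.size)

def demean(v):
    """Within transformation: subtract each unit's mean from its observations."""
    means = np.array([v[unit == i].mean() for i in range(n_units)])
    return v - means[unit]

# OLS on the demeaned data recovers the slope while sweeping out fixed effects.
beta = np.linalg.lstsq(demean(oil)[:, None], demean(price), rcond=None)[0][0]
```

The demeaning step removes the unit-specific intercepts, so the regression recovers the common slope even though the fixed effects are never estimated explicitly.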

Keywords: oil price, food basket, sanctions, panel data, Iran

Procedia PDF Downloads 356
16596 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System

Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim

Abstract:

The general transport equation has a wide range of applications in fluid mechanics and heat transfer problems. When the variable φ, which represents a flow property, is taken to be a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, seeking numerical rather than analytic solutions is the more frequently used procedure, and the finite difference method is a commonly used numerical solution method. In these equations, using velocity and pressure gradients instead of stress tensors decreases the number of unknowns, and by adding the continuity equation to the system, the number of equations becomes equal to the number of unknowns. In this situation, velocity and pressure emerge as the two important parameters, and in the solution of the differential equation system they must be solved together. However, when pressure and velocity values are solved jointly at the same nodal points of the considered grid system, some problems arise. To overcome this, a staggered grid system is the preferred approach. For computerized solutions of the staggered grid system, various algorithms have been developed, of which the two most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were solved numerically for an incompressible, laminar Newtonian flow, with mass and gravitational forces neglected, in the hydrodynamically fully developed region and in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure and Reynolds numbers were used. The differential equations were discretized using the central difference and hybrid schemes, and the discretized equation system was solved by the Gauss-Seidel iteration method. SIMPLE and SIMPLER were used as solution algorithms. The results obtained with the central difference and hybrid discretization methods were compared, and the SIMPLE and SIMPLER solution algorithms were compared with each other. It was observed that the hybrid discretization method gave better results over a larger area. Furthermore, despite some disadvantages, the SIMPLER algorithm proved more practical and gave results in a shorter time. For this study, a code was developed in the Delphi programming language. The values obtained by the program were converted into graphs and discussed. During plotting, the quality of the graphs was improved by adding intermediate values to the obtained results using the Lagrange interpolation formula. The numbers of grid points and nodes required for the solution were estimated. At the same time, to show that the obtained results are sufficiently accurate, a grid-independence (GCI) analysis was performed for coarse, medium and fine grids over the solution domain. When the graphs and program outputs were compared with similar studies, highly satisfactory results were observed.
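The Gauss-Seidel sweep used to solve the discretized system can be illustrated on a simpler model problem; the following sketch solves the 2-D Laplace equation with a central-difference stencil (not the full SIMPLE/SIMPLER pressure-velocity coupling of the study):

```python
import numpy as np

def gauss_seidel_laplace(n=20, tol=1e-4, max_iter=5000):
    """Solve the 2-D Laplace equation on a unit square with a central-difference
    stencil and Gauss-Seidel sweeps. Dirichlet boundaries: top edge held at 1,
    the other three edges at 0."""
    phi = np.zeros((n, n))
    phi[0, :] = 1.0                       # top-edge boundary condition
    for it in range(max_iter):
        max_change = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Five-point stencil; updated values are reused immediately,
                # which is what distinguishes Gauss-Seidel from Jacobi.
                new = 0.25 * (phi[i + 1, j] + phi[i - 1, j]
                              + phi[i, j + 1] + phi[i, j - 1])
                max_change = max(max_change, abs(new - phi[i, j]))
                phi[i, j] = new
        if max_change < tol:
            return phi, it + 1
    return phi, max_iter

phi, iters = gauss_seidel_laplace()
```

In the momentum equations the stencil coefficients come from the central-difference or hybrid scheme rather than being uniform, but the sweep structure is the same.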

Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms

Procedia PDF Downloads 391
16595 Diagnosis of Gingivitis Based on Correlations of Laser Doppler Data and Gingival Fluid Cytology

Authors: A. V. Belousov, Yakushenko

Abstract:

One of the main problems of modern dentistry is the development of a reliable method to detect inflammation of the gums at the stages of diagnosis and assessment of treatment efficacy. We have proposed a method of gingival fluid intake which successfully combines accessibility with the exclusion of factors that irritate and damage the gingival sulcus, and which provides reliable results (patent of RF № 2342956, Method of gingival fluid intake). The objects of the study were 75 student volunteers of the Faculty of Dentistry aged 20-21 years. The cellular composition of the gingival fluid was studied using an Olympus CX 31 microscope (Japan), with calculation of the epithelial leukocyte index (ELI). Assessment of gingival microcirculation was performed using the LAKK-01 apparatus (Lazma, Moscow). The cytological investigation showed the high informativeness of the epithelial leukocyte index (ELI), which reflected changes in the protective mechanisms of the gums. ELI increases when phagocytosis mechanisms are inhibited and epithelial desquamation is activated. The cytological data correlate with the microcirculation indicators obtained by laser Doppler flowmetry. We have identified and confirmed correlations between laser Doppler flowmetry parameters and gingival fluid cytology data in patients with gingivitis.

Keywords: gingivitis, laser Doppler flowmetry, gingival fluid cytology, epithelial leukocyte index (ELI)

Procedia PDF Downloads 328
16594 Evaluation of Heterogeneity of Paint Coating on Metal Substrate Using Laser Infrared Thermography and Eddy Current

Authors: S. Mezghani, E. Perrin, J. L. Bodnar, J. Marthe, B. Cauwe, V. Vrabie

Abstract:

Non-contact evaluation of the thickness of paint coatings can be attempted by different destructive and nondestructive methods, such as cross-section microscopy, gravimetric mass measurement, magnetic gauges, Eddy current, ultrasound or terahertz techniques. Infrared thermography is a nondestructive and non-invasive method that can be envisaged as a useful tool to measure surface thickness variations by analyzing the temperature response. In this paper, the thermal quadrupole method for two-layered samples heated by a pulsed excitation is first used. By analyzing the thermal responses as a function of the thermal properties and thicknesses of both layers, optimal parameters for the excitation source can be identified. Simulations show that a pulsed excitation with a duration of ten milliseconds allows a substrate-independent thermal response to be obtained. Based on this result, an experimental setup consisting of a near-infrared laser diode and an infrared camera was then used to evaluate the variation of paint coating thickness between 60 µm and 130 µm on two samples. The results show that the parameters extracted from the thermal images are correlated with the thicknesses estimated by the Eddy current method. Pulsed laser thermography is thus an interesting alternative nondestructive method that can, moreover, be used for non-conductive substrates.

Keywords: non destructive, paint coating, thickness, infrared thermography, laser, heterogeneity

Procedia PDF Downloads 639
16593 The Value and Role of Higher Education in the Police Profession

Authors: Habib Ahmadi, Mohamad Ali Ameri

Abstract:

In this research, the perception and understanding of police officers about the value of higher education have been investigated. A qualitative research approach and a phenomenological method were used, and data were analyzed with the Colaizzi method. Seventeen people with different degrees and occupations were selected by purposive sampling until saturation and were investigated using a semi-structured interview tool. After the data were collected, recorded, and coded in the ATLAS.ti software, they were formulated into main categories and concepts. The general views of the police officers participating in this research show the importance of university education for police jobs (76%). The analysis of participants' experiences led to the identification of seven main categories of the value and role of higher education: 1) improvement of behavior and social skills, 2) opportunities to improve job performance, 3) professionalization of police work, 4) financial motivation, 5) public satisfaction with police services, 6) improvement of writing and technical reporting skills, and 7) raised and sometimes misplaced expectations (a negative perception). The findings of this study support the positive attitude and professionalism of educated police officers. Therefore, considering the paradigm shift in society, changing technologies, more complex organizational designs, and the perceptions of police officers, it is concluded that the police field needs officers with higher education to enable them to understand the new global environment.

Keywords: lived experience, higher education, police professionalization, perceptions of police officers

Procedia PDF Downloads 83
16592 Behavior of Common Philippine-Made Concrete Hollow Block Structures Subjected to Seismic Load Using Rigid Body Spring-Discrete Element Method

Authors: Arwin Malabanan, Carl Chester Ragudo, Jerome Tadiosa, John Dee Mangoba, Eric Augustus Tingatinga, Romeo Eliezer Longalong

Abstract:

Concrete hollow blocks (CHB) are the most commonly used masonry blocks for walls in residential houses, school buildings and public buildings in the Philippines. During the recent 2013 Bohol earthquake (Mw 7.2), it was demonstrated that CHB walls are very vulnerable to severe external actions such as strong ground motion. In this paper, a numerical model of CHB structures is proposed, and the seismic behavior of CHB houses is presented. In the modeling, the Rigid Body Spring-Discrete Element method (RBS-DEM) is used, wherein masonry blocks are discretized into rigid elements and connected by nonlinear springs at preselected contact points. The shear and normal stiffnesses of the springs are derived from the material properties of the CHB unit, incorporating the grout and mortar fillings through a volumetric transformation of the dimensions using material ratios. Numerical models of reinforced and unreinforced walls are first subjected to linearly increasing in-plane loading to observe the different failure mechanisms. These wall models are then assembled to form typical model masonry houses and subjected to the El Centro and Pacoima earthquake records. Numerical simulations show that the elastic, failure and collapse behavior of the model houses agrees well with shaking table test results. The effectiveness of the method in replicating failure patterns will serve as a basis for improving the design and provides a good starting point for strengthening such structures.
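The nonlinear springs at the contact points can be illustrated with a minimal sketch of a normal-contact law: linear in compression, with a brittle tension cutoff once the contact cracks. The stiffness and strength values below are invented for illustration and are not the paper's calibrated properties:

```python
def normal_spring_force(delta, kn, tensile_strength, state):
    """Nonlinear normal spring at a block contact point.

    delta > 0: overlap (compression), delta < 0: separation (tension).
    Once the tensile strength is exceeded, the contact 'cracks' and can
    no longer transmit tension (a simple brittle cutoff).
    """
    if delta >= 0:                      # compression: always transmitted
        return kn * delta
    force = kn * delta                  # tension (negative force)
    if state["cracked"] or -force > tensile_strength:
        state["cracked"] = True
        return 0.0
    return force

# Hypothetical contact parameters (illustrative only)
kn = 5.0e8           # N/m normal stiffness
ft = 2.0e4           # N tensile capacity
contact = {"cracked": False}

f_comp = normal_spring_force(1e-4, kn, ft, contact)    # compression
f_tens = normal_spring_force(-2e-5, kn, ft, contact)   # mild tension, still bonded
f_fail = normal_spring_force(-1e-3, kn, ft, contact)   # exceeds strength -> cracks
f_after = normal_spring_force(-1e-5, kn, ft, contact)  # cracked: no tension
```

Summing such spring forces over all preselected contact points gives the resultant force and moment on each rigid block at every time step.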

Keywords: concrete hollow blocks, discrete element method, earthquake, rigid body spring model

Procedia PDF Downloads 372
16591 Investigation about Structural and Optical Properties of Bulk and Thin Film of 1H-CaAlSi by Density Functional Method

Authors: M. Babaeipour, M. Vejdanihemmat

Abstract:

The optical properties of bulk and thin-film 1H-CaAlSi were studied for two directions, (1,0,0) and (0,0,1). The calculations were carried out by the Density Functional Theory (DFT) method using the full-potential approach, with the GGA approximation for the exchange-correlation energy, and were performed with the WIEN2k package. The results showed that the absorption edge is shifted by 0.82 eV in the thin film relative to the bulk for both directions. The static values of the real part of the dielectric function were obtained for the four cases, as were the static values of the refractive index. The reflectivity graphs show a pronounced difference between the reflectivity of the thin film and that of the bulk in the ultraviolet region.
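The static refractive index and reflectivity discussed above follow from the complex dielectric function through standard relations; the sketch below shows those relations with an illustrative (not computed) dielectric constant:

```python
import math

def optical_constants(eps1, eps2):
    """Refractive index n and extinction coefficient kappa from the
    real (eps1) and imaginary (eps2) parts of the dielectric function."""
    mod = math.hypot(eps1, eps2)           # |eps|
    n = math.sqrt((mod + eps1) / 2.0)
    kappa = math.sqrt((mod - eps1) / 2.0)
    return n, kappa

def normal_reflectivity(n, kappa):
    """Fresnel reflectivity at normal incidence from vacuum."""
    return ((n - 1.0) ** 2 + kappa ** 2) / ((n + 1.0) ** 2 + kappa ** 2)

# Illustrative static limit: a lossless medium (eps2 -> 0), so n = sqrt(eps1)
n0, k0 = optical_constants(16.0, 0.0)
R0 = normal_reflectivity(n0, k0)   # ((4-1)^2)/((4+1)^2) = 9/25
```

With the static dielectric values from the DFT run, the same two relations yield the static refractive index and reflectivity reported for each case.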

Keywords: 1H-CaAlSi, absorption, bulk, optical, thin film

Procedia PDF Downloads 518
16590 Synthesis of Montmorillonite/CuxCd1-xS Nanocomposites and Their Application to the Photodegradation of Methylene Blue

Authors: H. Boukhatem, L. Djouadi, H. Khalaf, R. M. Navarro, F. V. Ganzalez

Abstract:

Synthetic organic dyes are used in various industries, such as the textile, leather tanning, paper production and hair dye industries. Wastewaters containing these dyes may be harmful to the environment and living organisms. Therefore, it is very important to remove or degrade these dyes before discharging them into the environment. In addition to standard technologies for the degradation and/or removal of dyes, several new specific technologies, the so-called advanced oxidation processes (AOPs), have been developed to eliminate dangerous compounds from polluted waters. AOPs are all characterized by the same chemical feature: the production of radicals (•OH) through a multistep process, although different reaction systems are used. These radicals show little selectivity of attack and are able to oxidize various organic pollutants due to their high oxidative capacity (reduction potential of HO•: E° = 2.8 V). Heterogeneous photocatalysis, as one of the AOPs, can be effective in the oxidation/degradation of organic dyes. A major advantage of using heterogeneous photocatalysis for this purpose is the total mineralization of organic dyes into CO2, H2O and the corresponding mineral acids. In this study, nanomaterials based on montmorillonite and CuxCd1-xS with different Cu concentrations (0.3 < x < 0.7) were used for the degradation of the commercial cationic textile dye methylene blue (MB), taken as a model pollutant. The synthesized nanomaterials were characterized by Fourier transform infrared spectroscopy (FTIR) and thermogravimetric-differential thermal analysis (TG–DTA). Photocatalysis tests on methylene blue under UV-visible irradiation show that the photoactivity of the montmorillonite/CuxCd1-xS nanomaterials increases with increasing Cu concentration. The kinetics of MB dye degradation were described with the Langmuir–Hinshelwood (L–H) kinetic model.
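At low dye concentrations the Langmuir–Hinshelwood model reduces to pseudo-first-order kinetics, ln(C0/C) = k_app·t, which is the usual way the apparent rate constant is extracted. The sketch below fits k_app to synthetic (not experimental) decay data:

```python
import math

def lh_rate(C, k, K):
    """Langmuir-Hinshelwood rate r = k*K*C / (1 + K*C)."""
    return k * K * C / (1.0 + K * C)

def fit_apparent_rate_constant(times, concentrations):
    """Fit ln(C0/C) = k_app * t by least squares through the origin."""
    C0 = concentrations[0]
    y = [math.log(C0 / c) for c in concentrations]
    num = sum(t * yi for t, yi in zip(times, y))
    den = sum(t * t for t in times)
    return num / den

# Synthetic decay with k_app = 0.05 min^-1 (illustrative, not measured data)
k_true = 0.05
times = [0.0, 10.0, 20.0, 40.0, 60.0]
conc = [10.0 * math.exp(-k_true * t) for t in times]
k_app = fit_apparent_rate_constant(times, conc)
```

With real concentration-time measurements in place of the synthetic series, the same fit gives the apparent rate constant for each Cu loading.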

Keywords: heterogeneous photocatalysis, methylene blue, montmorillonite, nanomaterial

Procedia PDF Downloads 373
16589 Comparison of Entropy Coefficient and Internal Resistance of Two (Used and Fresh) Cylindrical Commercial Lithium-Ion Battery (NCR18650) with Different Capacities

Authors: Sara Kamalisiahroudi, Zhang Jianbo, Bin Wu, Jun Huang, Laisuo Su

Abstract:

The temperature rise within a battery cell depends on the level of heat generation, the thermal properties and the heat transfer around the cell. Temperature rise is a serious problem of lithium-ion batteries, and the internal resistance of the battery is the main cause of this heating, so the heat generation rate of the batteries is an important factor in battery pack design. The delivered power of a battery is directly related to its capacity; a decrease in battery capacity reflects the growth of the solid electrolyte interface (SEI) layer, formed by deposits of lithium from the electrolyte, which increases the internal resistance of the battery. In this study, two cylindrical lithium-ion (NCR18650) batteries from the same manufacturer with a noticeable difference in capacity (a fresh and a used battery) were compared, focusing on their heat generation parameters (entropy coefficient and internal resistance) according to the Bernardi model, using the potentiometric method for the entropy coefficient and electrochemical impedance spectroscopy (EIS) for the internal resistance measurement. The results clarify the effect of the capacity difference on the electrical (R) and thermal (dU/dT) parameters of the cell, which is very relevant to battery pack design and safety.
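A minimal sketch of the simplified Bernardi heat generation model, combining the irreversible Joule term (via the internal resistance from EIS) and the reversible entropic term (via dU/dT from the potentiometric method). Sign conventions vary in the literature, and all numerical values here are illustrative, not the paper's measurements:

```python
def bernardi_heat_rate(current, r_internal, temperature, entropy_coeff):
    """Simplified Bernardi heat generation rate [W].

    current       : cell current in A (positive on discharge, by convention)
    r_internal    : internal resistance in ohm (e.g. from EIS)
    temperature   : cell temperature in K
    entropy_coeff : dU/dT in V/K (from the potentiometric method)

    Q = I^2 * R - I * T * dU/dT
    (irreversible Joule term plus reversible entropic term)
    """
    return current ** 2 * r_internal - current * temperature * entropy_coeff

# Illustrative values for an 18650-class cell (not the paper's data)
q_fresh = bernardi_heat_rate(2.0, 0.040, 298.0, -0.1e-3)
q_aged = bernardi_heat_rate(2.0, 0.065, 298.0, -0.1e-3)  # higher R from SEI growth
```

The comparison shows why the aged cell with the grown SEI layer heats more at the same current: only the resistance term differs.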

Keywords: heat generation, Solid Electrolyte Interface (SEI), potentiometric method, entropy coefficient

Procedia PDF Downloads 473
16588 Rapid Detection and Differentiation of Camel Pox, Contagious Ecthyma and Papilloma Viruses in Clinical Samples of Camels Using a Multiplex PCR

Authors: A. I. Khalafalla, K. A. Al-Busada, I. M. El-Sabagh

Abstract:

Pox and pox-like diseases of camels are a group of exanthematous skin conditions that have become increasingly important economically. They may be caused by three distinct viruses: camelpox virus (CMPV), camel contagious ecthyma virus (CCEV) and camel papillomavirus (CAPV). These diseases are difficult to differentiate based on clinical presentation during disease outbreaks. Molecular methods such as PCR targeting species-specific genes have been developed and used to identify CMPV and CCEV, but not simultaneously in a single tube. Recently, multiplex PCR has gained a reputation as a convenient diagnostic method with cost- and time-saving benefits. In the present communication, we describe the development, optimization and validation of a multiplex PCR assay able to detect the genomes of the three viruses simultaneously in one single test, allowing rapid and efficient molecular diagnosis. The assay was developed by evaluating and combining published and new primer sets, and was applied to the testing of 110 tissue samples. The method showed high sensitivity, and its specificity was confirmed by PCR-product sequencing. In conclusion, this rapid, sensitive and specific assay is considered a useful method for identifying three important viruses in specimens from camels and as part of a molecular diagnostic regime.
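The idea of one tube resolving three viruses can be illustrated with a toy in-silico PCR: each species-specific primer pair binds only its own template and yields a distinguishable amplicon. The sequences and primers below are invented stand-ins, not the real viral genomes or the assay's primer sets:

```python
def insilico_pcr(template, forward, reverse_rc):
    """Return the amplicon length if the forward primer and then the
    reverse-complemented reverse primer both bind the template, else
    None.  A toy stand-in for real primer design and validation."""
    i = template.find(forward)
    if i < 0:
        return None
    j = template.find(reverse_rc, i + len(forward))
    if j < 0:
        return None
    return j + len(reverse_rc) - i

# Invented toy templates and primer sites (NOT real viral sequences)
templates = {
    "CMPV": "AAGGTT" + "ATCGATCG" * 10 + "CCATGG",
    "CCEV": "GGCCAA" + "TTAACCGG" * 6 + "TTGGCC",
    "CAPV": "TTAAGG" + "CGCGATAT" * 3 + "AACCTT",
}
primers = {  # (forward primer, reverse primer already reverse-complemented)
    "CMPV": ("AAGGTT", "CCATGG"),
    "CCEV": ("GGCCAA", "TTGGCC"),
    "CAPV": ("TTAAGG", "AACCTT"),
}

def multiplex(template):
    """One 'tube': all three primer pairs are present; report which amplify."""
    return {virus: insilico_pcr(template, f, r)
            for virus, (f, r) in primers.items()
            if insilico_pcr(template, f, r) is not None}

result = multiplex(templates["CCEV"])   # only the CCEV pair should amplify
```

In the real assay, the species-specific amplicon sizes play the role of the lengths returned here, letting one gel lane or melt curve identify the virus.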

Keywords: multiplex PCR, diagnosis, pox and pox-like diseases, camels

Procedia PDF Downloads 468
16587 Behavior Consistency Analysis for Workflow Nets Based on Branching Processes

Authors: Wang Mimi, Jiang Changjun, Liu Guanjun, Fang Xianwen

Abstract:

Loop structures often appear in business process modeling, and analyzing the consistency of workflow net models containing loop structures is a problem: existing behavior consistency methods cannot effectively analyze process models with loop structures. In this paper, by analyzing five kinds of behavior relations between transitions, a three-dimensional figure and a two-dimensional behavior relation matrix are proposed. Based on these, an analysis method for the behavior consistency of business processes based on Petri net branching processes is proposed. Finally, an example is given that shows the method is effective.
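The notion of a two-dimensional behavior relation matrix can be illustrated with a simplified analogue that classifies ordered transition pairs from observed firing sequences, akin to a process-mining footprint matrix; this is a sketch, not the paper's exact five-relation construction:

```python
from itertools import product

def behavior_relation_matrix(traces):
    """Classify each ordered pair of transitions from firing sequences:
    '->' (a directly before b only), '<-' (b directly before a only),
    '||' (both orders observed, e.g. inside a loop), '#' (never adjacent).
    A simplified analogue of a behavior relation matrix."""
    transitions = sorted({t for trace in traces for t in trace})
    direct = set()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            direct.add((a, b))
    matrix = {}
    for a, b in product(transitions, repeat=2):
        ab, ba = (a, b) in direct, (b, a) in direct
        matrix[a, b] = "||" if ab and ba else "->" if ab else "<-" if ba else "#"
    return transitions, matrix

# Toy workflow with a loop body (b c) that may repeat: a (b c)+ d
traces = [["a", "b", "c", "d"],
          ["a", "b", "c", "b", "c", "d"]]
names, M = behavior_relation_matrix(traces)
```

Note how the loop makes the pair (b, c) appear in both orders, which is exactly the kind of structure that defeats consistency measures built only on one-directional ordering.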

Keywords: workflow net, behavior consistency measures, loop, branching process

Procedia PDF Downloads 388
16586 Object Recognition System Operating from Different Type Vehicles Using Raspberry and OpenCV

Authors: Maria Pavlova

Abstract:

Nowadays, cameras can be mounted on various vehicles such as quadcopters, trains and airplanes, and a camera can serve as the input sensor in many different systems. Object recognition, as an integral part of monitoring and control, can therefore be a key component of most intelligent systems. The aim of this paper is to focus on the object recognition process during vehicle movement. While the vehicle moves, the camera takes pictures of the environment without storing them in a database. If the camera detects a special object (for example, a human or an animal), the system saves the picture and sends it to the workstation in real time. This functionality is very useful in emergency or security situations where it is necessary to find a specific object. In another application, the camera can be mounted at a crossroad with little pedestrian traffic: if one or more persons approach the road, the traffic light turns green so that they can cross. This paper presents a system that solves the aforementioned problems. The architecture of the object recognition system, which includes the camera, the Raspberry Pi platform, a GPS receiver, a neural network, software and a database, is presented. The camera in the system takes the pictures, and the object recognition is done in real time using the OpenCV library on the Raspberry Pi. An additional feature is the ability to record the GPS coordinates of the captured object's position. The results of this processing are sent to a remote station, so the location of the specific object is known. Using a neural network, the module can learn to solve problems from incoming data and become part of a larger intelligent system. The present paper focuses on the design and integration of image recognition as a part of smart systems.
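A pure-Python stand-in for the detection loop described above: frames are discarded unless they differ sufficiently from the previous frame, in which case the frame is "saved" together with the GPS fix. The real system uses OpenCV detectors on the Raspberry Pi; the frame-differencing rule, toy frames and all thresholds here are simplifications for illustration:

```python
def mean_abs_diff(frame_a, frame_b):
    """Mean absolute pixel difference between two grayscale frames
    (frames as lists of rows of 0-255 ints)."""
    total = count = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return total / count

def monitor(frames, gps_fix, threshold=10.0):
    """Toy detection loop: frames are discarded unless the change
    against the previous frame exceeds the threshold, in which case
    the frame index and GPS coordinates are 'saved' (in the real
    system, the picture would be sent to the workstation)."""
    saved = []
    previous = frames[0]
    for idx, frame in enumerate(frames[1:], start=1):
        if mean_abs_diff(previous, frame) > threshold:
            saved.append((idx, gps_fix))
        previous = frame
    return saved

# Three 4x4 toy frames: static, static, then a bright object appears
blank = [[10] * 4 for _ in range(4)]
object_frame = [[10] * 4 for _ in range(3)] + [[200] * 4]
events = monitor([blank, blank, object_frame], gps_fix=(42.698, 23.322))
```

In the deployed system, the differencing step would be replaced by an OpenCV detector (e.g. a trained classifier), but the save-and-send decision logic is the same.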

Keywords: camera, object recognition, OpenCV, Raspberry

Procedia PDF Downloads 218
16585 The Reenactment of Historic Memory and the Ways to Read past Traces through Contemporary Architecture in European Urban Contexts: The Case Study of the Medieval Walls of Naples

Authors: Francesco Scarpati

Abstract:

Because of their long history, ranging from ancient times to the present day, European cities feature many historical layers, whose individual identities are represented by traces surviving in the urban design. However, urban transformations, in particular those produced by the property speculation of the 20th century, have often compromised the readability of these traces, resulting in a loss of the historical identities of the individual layers. The purpose of this research is therefore a reflection on the theme of the reenactment of historical memory in stratified European contexts, and on how contemporary architecture can help to reveal the past signs of cities. The research starts from an analysis of a series of emblematic examples that have already provided original solutions to the described problem, ranging from the scale of the architectural detail to the urban and landscape scale. The results of these analyses are then applied to the case study of the city of Naples, an emblematic example of a stratified city of ancient Greek origin, where it is possible to read most of the traces of its transformations. Particular consideration is given to the trace of the medieval walls of the city, which once clearly divided the city from the surrounding fields and is no longer readable today. Finally, solutions and methods of intervention are proposed to ensure that the trace of the walls, read as a boundary, can be revealed through the contemporary project.

Keywords: contemporary project, historic memory, historic urban contexts, medieval walls, naples, stratified cities, urban traces

Procedia PDF Downloads 264
16584 The Study of Heat and Mass Transfer for Ferrous Materials' Filtration Drying

Authors: Dmytro Symak

Abstract:

Drying is a complex technological, thermal and energy process. The energy cost of drying is in many cases the most expensive stage of production and can exceed 50% of total costs. In Ukraine, over 85% of Portland cement is produced by the wet process, where energy accounts for almost 60% of the cost of the finished product. In wet cement production, energy costs amount to over 5500 kJ/kg of clinker, while in the dry process they are only 3100 kJ/kg; that is, switching to dry Portland cement production would nearly halve the energy costs. Studying the drying of raw materials in the manufacture of Portland cement is therefore a highly relevant task. The drying of fine ferrous materials (small pyrites, red mud, clay Kyoko) is recommended to be done by the filtration method, which is one of the most intensive. The essence of filtration drying lies in filtering the heat agent through a stationary layer of wet material located on a perforated partition, in the system "layer of dispersed material - perforated partition." For optimum drying, it is necessary to establish the dependence of the pressure loss in the layer of dispersed material, as well as the heat and mass transfer coefficients, on the filtration velocity of the gas flow. In our research, the experimentally determined pressure loss in the layer of dispersed material was generalized in the form of dimensionless complexes, as were the heat exchange coefficients. We also determined the relation between the mass and heat transfer coefficients. As a result of theoretical and experimental investigations, a methodology was developed for calculating the optimal parameters of the thermal agent and the main parameters of the filtration drying installation. A comparison of the operating expenses, calculated by known methods, for drying small pyrites in a rotating drum and by the filtration method shows savings of up to 618 kWh per 1,000 kg of dry material, and 700 kWh for the filtration drying of clay.
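The dependence of pressure loss on filtration velocity is commonly generalized with the Ergun equation for packed layers; the sketch below uses it with illustrative layer and gas properties, not the dimensionless correlations fitted in this work:

```python
def ergun_pressure_drop(velocity, height, particle_d, voidage,
                        gas_viscosity, gas_density):
    """Ergun equation: pressure loss [Pa] of a gas filtering through a
    packed layer of dispersed material of the given height."""
    eps = voidage
    viscous = (150.0 * gas_viscosity * (1 - eps) ** 2 * velocity
               / (eps ** 3 * particle_d ** 2))
    inertial = (1.75 * gas_density * (1 - eps) * velocity ** 2
                / (eps ** 3 * particle_d))
    return (viscous + inertial) * height

# Illustrative layer of fine pyrite-like particles under hot air
dp1 = ergun_pressure_drop(velocity=0.5, height=0.1, particle_d=5e-4,
                          voidage=0.4, gas_viscosity=2.5e-5, gas_density=0.9)
dp2 = ergun_pressure_drop(velocity=1.0, height=0.1, particle_d=5e-4,
                          voidage=0.4, gas_viscosity=2.5e-5, gas_density=0.9)
# Pressure loss grows faster than linearly with filtration velocity,
# which is why an optimal heat-agent velocity must be selected.
```

The quadratic inertial term is what makes doubling the filtration velocity more than double the pressure loss, driving the trade-off between drying rate and fan power.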

Keywords: drying, cement, heat and mass transfer, filtration method

Procedia PDF Downloads 262
16583 Continuous Plug Flow and Discrete Particle Phase Coupling Using Triangular Parcels

Authors: Anders Schou Simonsen, Thomas Condra, Kim Sørensen

Abstract:

Various processes are modelled using a discrete phase, where particles are seeded from a source. Such particles can represent liquid water droplets, which affect the continuous phase by exchanging thermal energy, momentum, species etc. Discrete phases are typically modelled using parcels, each of which represents a collection of particles sharing properties such as temperature and velocity. When coupling the phases, the exchange rates are integrated over the cell in which the parcel is located, which can cause spikes and fluctuating exchange rates. This paper presents an alternative method of coupling a discrete phase and a continuous plug flow phase, using triangular parcels that span between nodes following the dynamics of single droplets. The triangular parcels are thus propagated using their corner nodes. At each time step, the exchange rates are spatially integrated over the surface of the triangular parcels, which yields a smooth continuous exchange rate to the continuous phase. The results show that the method is more stable, converges slightly faster and yields smoother exchange rates than the steam tube approach. However, the computational requirements are about five times greater, so the applicability of the alternative method should be limited to processes where the exchange rates are important. The overall balances of the exchanged properties did not change significantly with the new approach.
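The spatial integration over a triangular parcel can be sketched for the simplest case: a field that varies linearly over the triangle integrates exactly to the triangle area times the mean of the corner-node values. A minimal sketch with an illustrative field (this is the quadrature idea, not the paper's full exchange-rate model):

```python
def triangle_area(p0, p1, p2):
    """Area of a triangle from its corner nodes (2D, shoelace formula)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    return abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0

def integrate_linear_over_triangle(corners, corner_values):
    """Exact integral of a linearly varying field over a triangular
    parcel: area times the mean of the corner (node) values."""
    area = triangle_area(*corners)
    return area * sum(corner_values) / 3.0

# Exchange rate sampled at the three parcel nodes: illustrative field f = x + y
corners = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
values = [x + y for x, y in corners]
total = integrate_linear_over_triangle(corners, values)
# Analytic check: the integral of (x + y) over this unit right triangle is 1/3
```

Because the corner nodes follow single-droplet dynamics, repeating this integration each time step spreads the exchange smoothly over the cells the triangle covers, instead of dumping it into one cell.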

Keywords: CFD, coupling, discrete phase, parcel

Procedia PDF Downloads 267