Search results for: Homogenization method

4615 On the Approximate Solution of a Nonlinear Singular Integral Equation

Authors: Nizami Mustafa, C. Ardil

Abstract:

In this study, the existence and uniqueness of the solution of a nonlinear singular integral equation defined on a region in the complex plane are proven, and a method is given for finding the solution.
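
As an illustration of the fixed-point principle named in the keywords, the sketch below solves a generic nonlinear integral equation by successive approximations on a grid. The equation, kernel, and parameters are hypothetical stand-ins, not the Vekua-operator equation treated in the paper; the iteration converges only under a contraction assumption (small lam).

```python
import numpy as np

n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5  # trapezoid weights

f = np.sin(np.pi * x)                          # hypothetical right-hand side
K = np.exp(-np.abs(x[:, None] - x[None, :]))   # hypothetical kernel
lam = 0.1                                      # small enough for contraction

# solve u(x) = f(x) + lam * integral_0^1 K(x,t) u(t)^2 dt by Picard iteration
u = f.copy()
for it in range(200):
    u_new = f + lam * K @ (w * u**2)           # apply the integral operator
    if np.max(np.abs(u_new - u)) < 1e-12:      # successive-approximation stop
        break
    u = u_new
print(f"converged in {it} iterations")
```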

Keywords: Approximate solution, Fixed-point principle, Nonlinear singular integral equations, Vekua integral operator

4614 Estimation and Removal of Chlorophenolic Compounds from Paper Mill Waste Water by Electrochemical Treatment

Authors: R. Sharma, S. Kumar, C. Sharma

Abstract:

A number of toxic chlorophenolic compounds are formed during pulp bleaching. The nature and concentration of these chlorophenolic compounds largely depend upon the amount and nature of the bleaching chemicals used. These compounds are highly recalcitrant and difficult to remove, and are only partially removed by the biochemical treatment processes adopted by the paper industry. Identification and estimation of these chlorophenolic compounds have been carried out in the primary and secondary clarified effluents from the paper mill by GC-MS. Twenty-six chlorophenolic compounds have been identified and estimated in paper mill waste waters. Electrochemical treatment is an efficient method for the oxidation of pollutants and has successfully been used to treat textile and oil waste water. Electrochemical treatment using a less expensive anode material, stainless steel electrodes, has been tried to study their removal. The electrochemical assembly comprised a DC power supply, a magnetic stirrer and stainless steel (316 L) electrodes. The operating conditions were optimized and the treatment was performed under the optimized conditions. Results indicate that 68.7% and 83.8% of chlorophenolic compounds are removed during 2 h of electrochemical treatment from primary and secondary clarified effluent, respectively. Further, there is a reduction of 65.1%, 60% and 92.6% in COD, AOX and color, respectively, for primary clarified effluent, and of 83.8%, 75.9% and 96.8% in COD, AOX and color, respectively, for secondary clarified effluent. Electrochemical treatment has also been found to significantly increase the biodegradability index of the wastewater through conversion of the non-biodegradable fraction into a biodegradable fraction. Thus, electrochemical treatment is an efficient method for the degradation of chlorophenolic compounds and the removal of color, AOX and other recalcitrant organic matter present in paper mill waste water.

Keywords: Chlorophenolics, effluent, electrochemical treatment, wastewater.

4613 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images

Authors: Mehrnoosh Omati, Mahmod Reza Sahebi

Abstract:

Information on land use/land cover change plays an essential role in environmental assessment, planning and management for regional development. Remotely sensed imagery is widely used for providing information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, first, two PolSAR images are segmented by integrating a marker-controlled watershed algorithm with a coupled Markov random field (MRF). Then, object-based classification is performed to determine changed/unchanged image objects. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object level. The experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show a 3% and 6% improvement in overall accuracy and kappa coefficient, respectively. In addition, the proposed method can correctly distinguish homogeneous image parcels.

Keywords: Coupled Markov random field, environment, object-based analysis, Polarimetric SAR images.

4612 Environmental Decision Making Model for Assessing On-Site Performances of Building Subcontractors

Authors: Buket Metin

Abstract:

Buildings place a variety of loads on the environment due to activities performed at each stage of the building life cycle. Construction is the first stage that affects both the natural and built environments, at different steps of the process: transportation of materials within the construction site, formation and preparation of materials on-site, and the application of materials to realize the building subsystems. All of these steps require the use of technology, which varies based on the facilities that contractors and subcontractors have. Hence, the environmental consequences of the construction process should be tackled by focusing on the construction technology options used in every step of the process. This paper presents an environmental decision-making model for assessing the on-site performances of subcontractors based on the construction technology options they can supply. First, construction technologies, which comprise information, tools and methods, are classified. Then, environmental performance criteria are set forth related to resource consumption, ecosystem quality, and human health issues. Finally, the model is developed based on the relationships between the construction technology components and the environmental performance criteria. The Fuzzy Analytical Hierarchy Process (FAHP) method is used for weighting the environmental performance criteria according to the environmental priorities of the decision-maker(s), while the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method is used for ranking the on-site environmental performances of subcontractors using quantitative data related to the construction technology components. Thus, the model aims to give decision-maker(s) insight into the environmental consequences of the construction process and to provide an opportunity to improve the overall environmental performance of construction sites.
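
A minimal sketch of the TOPSIS ranking step described above, assuming the criterion weights have already been obtained (e.g., from FAHP); the subcontractor scores and weights below are hypothetical.

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows of X) by closeness to the ideal solution.
    benefit[j] is True if criterion j is to be maximized."""
    R = X / np.linalg.norm(X, axis=0)          # vector-normalize columns
    V = R * weights                            # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti  = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # higher = better

# hypothetical scores of 3 subcontractors on 3 environmental criteria
# (resource consumption, ecosystem damage, health impact: all costs)
X = np.array([[3.0, 40.0, 0.2], [2.5, 55.0, 0.3], [4.0, 35.0, 0.1]])
w = np.array([0.5, 0.3, 0.2])                  # e.g. from FAHP
print(topsis(X, w, benefit=np.array([False, False, False])))
```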

Keywords: Construction process, construction technology, decision making, environmental performance, subcontractors.

4611 Probabilistic Damage Tolerance Methodology for Solid Fan Blades and Discs

Authors: Andrej Golowin, Viktor Denk, Axel Riepe

Abstract:

Solid fan blades and discs in aero engines are subjected to high combined low- and high-cycle fatigue loads, especially around the contact areas between blade and disc. Therefore, special coatings (e.g. dry film lubricant) and surface treatments (e.g. shot peening or laser shock peening) are applied to increase the strength with respect to combined cyclic fatigue and fretting fatigue, but also to improve the damage tolerance capability. The traditional deterministic damage tolerance assessment based on fracture mechanics analysis, which treats service damage as an initial crack, often gives overly conservative results, especially in the presence of vibratory stresses. A probabilistic damage tolerance methodology using crack initiation data has been developed for fan discs exposed to relatively high vibratory stresses in cross- and tail-wind conditions at certain resonance speeds for limited time periods. This Monte-Carlo based method uses a damage databank from similar designs, measured vibration levels at typical aircraft operations and wind conditions, and experimental crack initiation data derived from testing of artificially damaged specimens with representative surface treatment under combined fatigue conditions. The proposed methodology leads to a more realistic prediction of the minimum damage tolerance life for the most critical locations applicable to modern fan disc designs.
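
The flavor of such a Monte-Carlo assessment can be sketched as follows; every distribution and the crack-initiation law are hypothetical placeholders, standing in for the paper's damage databank, measured vibration levels, and specimen test data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# sampled inputs (hypothetical distributions)
damage_depth = rng.lognormal(mean=np.log(50e-6), sigma=0.5, size=n)  # m
vib_stress   = np.clip(rng.normal(80e6, 15e6, size=n), 1e6, None)   # Pa

# hypothetical Basquin-style crack-initiation life, penalized by damage depth
C, m = 1e30, 3.0
life_cycles = C / (vib_stress**m * (damage_depth / 50e-6))

# minimum damage tolerance life taken as a low percentile of the population
print(f"1st-percentile life: {np.percentile(life_cycles, 1):.3e} cycles")
```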

Keywords: Damage tolerance, Monte-Carlo method, fan blade and disc, laser shock peening.

4610 Wetting Characterization of High Aspect Ratio Nanostructures by Gigahertz Acoustic Reflectometry

Authors: C. Virgilio, J. Carlier, P. Campistron, M. Toubal, P. Garnier, L. Broussous, V. Thomy, B. Nongaillard

Abstract:

The wetting efficiency of microstructures or nanostructures patterned on Si wafers is a real challenge in integrated circuit manufacturing. In fact, bad or non-uniform wetting during wet processes limits chemical reactions and can lead to incomplete etching or cleaning inside the patterns and device defectivity. This issue is becoming more and more important with transistor size shrinkage and mainly concerns high aspect ratio structures. Deep Trench Isolation (DTI) structures enabling pixel isolation in imaging devices are subject to this phenomenon. While low-frequency acoustic reflectometry is a well-known method for non-destructive testing applications, we have recently shown that it is also well suited for nanostructure wetting characterization in a higher frequency range. In this paper, we present a high-frequency acoustic reflectometry characterization of DTI wetting through a comparison of experimental and modeling results. The proposed acoustic method is based on evaluating the reflection of a longitudinal acoustic wave generated by a 100 µm diameter ZnO piezoelectric transducer sputtered on the silicon wafer backside using MEMS technologies. The transducers have been fabricated to work at 5 GHz, corresponding to a wavelength of 1.7 µm in silicon. The DTI structures studied, manufactured on the wafer frontside, are crossing trenches 200 nm wide and 4 µm deep (aspect ratio of 20) etched into the Si wafer. In that case, the acoustic signal reflection occurs at the bottom and at the top of the DTI, enabling its characterization by monitoring the electrical reflection coefficient of the transducer. A Finite Difference Time Domain (FDTD) model has been developed to predict the behavior of the emitted wave. The model shows that the separation of the reflected echoes (top and bottom of the DTI) from different acoustic modes is possible at 5 GHz. A good correspondence between experimental and theoretical signals is observed, and the model enables the identification of the different acoustic modes. The evaluation of DTI wetting is then performed by focusing on the first reflected echo, obtained through the reflection at the Si bottom interface, where wetting efficiency is crucial. The reflection coefficient is measured with different water/ethanol mixtures (tunable surface tension) deposited on the wafer frontside. Two cases are studied: with and without PFTS hydrophobic treatment. In the untreated surface case, acoustic reflection coefficient values with water show that liquid imbibition is partial. In the treated surface case, the acoustic reflection is total with water (no liquid in the DTI). Impalement of the liquid occurs for a specific surface tension but is still partial for pure ethanol. The DTI bottom shape and local pattern collapse of the trenches can explain these incomplete wetting phenomena. The sensitivity of this high-frequency acoustic method, coupled with an FDTD propagative model, thus enables the local determination of the wetting state of a liquid on real structures. Partial wetting states for non-hydrophobic surfaces or low surface tension liquids are then detectable with this method.
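
A back-of-the-envelope check of why the echo amplitude reveals the wetting state: the normal-incidence reflection coefficient at the trench bottom depends on the acoustic impedance of the medium in contact with the silicon. The sketch below uses nominal literature impedance values, not the paper's measured data.

```python
# Z = density * longitudinal sound speed (nominal literature figures)
Z_si    = 2330 * 8433        # silicon
Z_water = 1000 * 1480        # water (wetted trench bottom)
Z_air   = 1.2  * 343         # air (dry trench bottom)

def r(Z1, Z2):
    """Pressure reflection coefficient for a wave in medium 1 hitting medium 2."""
    return (Z2 - Z1) / (Z2 + Z1)

print(f"Si/air   : |r| = {abs(r(Z_si, Z_air)):.4f}  (near-total reflection, dry)")
print(f"Si/water : |r| = {abs(r(Z_si, Z_water)):.4f}  (partial reflection, wetted)")
print(f"wavelength in Si at 5 GHz: {8433 / 5e9 * 1e6:.2f} um")  # ~1.7 um, as stated
```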

Keywords: Wetting, acoustic reflectometry, gigahertz, semiconductor.

4609 Experimental Study on Capturing of Magnetic Nanoparticles Transported in an Implant Assisted Cylindrical Tube under Magnetic Field

Authors: Anurag Gaur, Nidhi, Shashi Sharma

Abstract:

Targeted drug delivery is a method of delivering medication to a patient in a manner that increases the concentration of the medication in some parts of the body relative to others. It seeks to concentrate the medication in the tissues of interest while reducing its relative concentration in the remaining tissues, improving the efficacy of the treatment while reducing side effects. In the present work, we investigate the effect of magnetic field, flow rate and particle concentration on the capture of magnetic particles transported in a stent-implanted fluidic channel. Iron oxide (Fe3O4) magnetic nanoparticles were synthesized via the co-precipitation method. The synthesized Fe3O4 nanoparticles were added to de-ionized (DI) water to prepare a suspension of Fe3O4 magnetic particles. This fluid was transported in a cylindrical tube of diameter 8 mm with the help of a peristaltic pump at different flow rates (25-40 ml/min). A ferromagnetic coil of SS 430 was implanted inside the cylindrical tube to enhance the capture of magnetic nanoparticles under a magnetic field. The capture of magnetic nanoparticles was observed at different magnetic fields, flow rates and particle concentrations. It is observed that the capture efficiency increases from 47% to 67% as the magnetic field increases from 2 to 5 kG, at a particle concentration of 0.6 mg/ml and a flow rate of 30 ml/min. However, the capture efficiency decreases from 65% to 44% as the flow rate increases from 25 to 40 ml/min. Furthermore, the capture efficiency increases from 51% to 67% as the particle concentration increases from 0.3 to 0.6 mg/ml.

Keywords: Capture efficiency, implant-assisted magnetic drug targeting (IA-MDT), magnetic nanoparticles, in vitro study.

4608 A Fuzzy MCDM Approach for Health-Care Waste Management

Authors: Mehtap Dursun, E. Ertugrul Karsak, Melis Almula Karadayi

Abstract:

The management of health-care wastes is one of the most important problems in Istanbul, a city with more than 12 million inhabitants, as it is in most developing countries. Negligence in the appropriate treatment and final disposal of health-care wastes can lead to adverse impacts on public health and the environment. This paper employs a fuzzy multi-criteria group decision making approach, based on the principles of fusion of fuzzy information, the 2-tuple linguistic representation model, and the technique for order preference by similarity to ideal solution (TOPSIS), to evaluate health-care waste (HCW) treatment alternatives for Istanbul. The evaluation criteria are determined employing the nominal group technique (NGT), a method of systematically developing a consensus of group opinion. The employed method is apt to manage information assessed using multigranular linguistic information in a decision making problem with multiple information sources. The decision making framework employs the ordered weighted averaging (OWA) operator, which encompasses several operators, as the aggregation operator, since it can implement different aggregation rules by changing the order weights. The aggregation process is based on the unification of information by means of fuzzy sets on a basic linguistic term set (BLTS). Then, the unified information is transformed into linguistic 2-tuples in a way that rectifies the information loss problem of other fuzzy linguistic approaches.
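
A minimal sketch of the OWA aggregation idea mentioned above: the weights attach to rank positions rather than to specific criteria, so different order-weight vectors reproduce different aggregation rules. The scores are hypothetical.

```python
import numpy as np

def owa(values, order_weights):
    """Ordered weighted averaging: sort descending, then weight by rank."""
    return np.sort(values)[::-1] @ np.asarray(order_weights)

scores = np.array([0.7, 0.4, 0.9, 0.6])   # hypothetical criterion scores
print(owa(scores, [1, 0, 0, 0]))          # = max  (optimistic rule)
print(owa(scores, [0, 0, 0, 1]))          # = min  (pessimistic rule)
print(owa(scores, [0.25] * 4))            # = mean (neutral rule)
```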

Keywords: Group decision making, health care waste management, multi-criteria decision making, OWA, TOPSIS, 2-tuple linguistic representation

4607 Fractal Dimension of Breast Cancer Cell Migration in a Wound Healing Assay

Authors: R. Sullivan, T. Holden, G. Tremberger, Jr, E. Cheung, C. Branch, J. Burrero, G. Surpris, S. Quintana, A. Rameau, N. Gadura, H. Yao, R. Subramaniam, P. Schneider, S. A. Rotenberg, P. Marchese, A. Flamhlolz, D. Lieberman, T. Cheung

Abstract:

Migration in a breast cancer cell wound healing assay has been studied using image fractal dimension analysis. The migration of MDA-MB-231 cells (highly motile) in a wound healing assay was captured using time-lapse phase contrast video microscopy and compared to MDA-MB-468 cell migration (moderately motile). The Higuchi fractal method was used to compute the fractal dimension of the image intensity fluctuation along a single-pixel-width region parallel to the wound. The near-wound region fractal dimension was found to decrease three times faster initially in the MDA-MB-231 cells as compared to the less cancerous MDA-MB-468 cells. The inner region fractal dimension was found to be fairly constant in time for both cell types and suggests a wound influence range of about 15 cell layers. The box-counting fractal dimension method was also used to study regions of interest (ROI). The MDA-MB-468 ROI area fractal dimension was found to decrease continuously up to 7 hours. The MDA-MB-231 ROI area fractal dimension was found to increase, consistent with the behavior of an HGF-treated MDA-MB-231 wound healing assay posted in the public domain. A fractal dimension based capacity index has been formulated to quantify the invasiveness of the MDA-MB-231 cells in the perpendicular-to-wound direction. Our results suggest that image intensity fluctuation fractal dimension analysis can be used as a tool to quantify cell migration in terms of cancer severity and treatment responses.
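
A minimal sketch of Higuchi's fractal dimension estimator applied to a 1-D intensity profile, as used above for the single-pixel-width region; the test signals are synthetic.

```python
import numpy as np

def higuchi_fd(signal, k_max=8):
    """Higuchi's method: average curve length L(k) over coarse-grainings;
    the slope of log L(k) vs log(1/k) estimates the fractal dimension."""
    x = np.asarray(signal, dtype=float)
    N = len(x)
    ks, Ls = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            norm = (N - 1) / ((len(idx) - 1) * k)   # Higuchi normalization
            lengths.append(np.sum(np.abs(np.diff(x[idx]))) * norm / k)
        ks.append(k); Ls.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / np.array(ks)), np.log(Ls), 1)
    return slope

# white noise should give a dimension near 2, a smooth ramp near 1
rng = np.random.default_rng(1)
print(higuchi_fd(rng.standard_normal(1000)))   # ~2
print(higuchi_fd(np.linspace(0, 1, 1000)))     # ~1
```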

Keywords: Higuchi fractal dimension, box-counting fractal dimension, cancer cell migration, wound healing.

4606 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah

Abstract:

Hyperspectral imagery (HSI) typically provides a wealth of information captured in a wide range of the electromagnetic spectrum for each pixel in the image. Hence, a pixel in HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution, and semantic interpretation is a challenging task of HSI analysis. In this paper, we focus on object classification as HSI semantic interpretation. However, HSI classification still faces some issues, among which are the spatial variability of spectral signatures, the high number of spectral bands, and the high cost of true sample labeling. The high number of spectral bands combined with the low number of training samples poses the problem of the curse of dimensionality. In order to resolve this problem, we introduce a dimensionality reduction process to improve the classification of HSI. The presented approach is a semi-supervised band selection method based on a spatial hypergraph embedding model that represents higher-order relationships with different weights for the spatial neighbors of the central pixel. This semi-supervised band selection has been developed to select useful bands for object classification. The presented approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods. The experimental results demonstrate the efficacy of our approach compared to many existing dimensionality reduction methods for HSI classification.

Keywords: Hyperspectral image, spatial hypergraph, dimensionality reduction, semantic interpretation, band selection, feature extraction.

4605 Limited Component Evaluation of the Effect of Regular Cavities on the Sheet Metal Element of the Steel Plate Shear Wall

Authors: Seyyed Abbas Mojtabavi, Mojtaba Fatzaneh Moghadam, Masoud Mahdavi

Abstract:

The steel plate shear wall is one of the most common and widely used energy dissipation systems in structures and, with the increase in the construction of metal structures, is used today as a damping system. In the present study, a steel plate shear wall with dimensions of 5×3 m and a thickness of 0.024 m was modeled over two floors of total height from the base level with the finite element method in Abaqus software. The loading is applied as a concentrated load at the upper point of the shear wall on the second floor, using a buckling analysis step. The mesh in the model is applied in the two directions of the length and width of the shear wall, with sizes of 0.02 and 0.033 respectively, and the mesh in the models is of the sweep type. It was found that in the steel plate shear wall with cavities (CSPSW), compared to the SPSW model, S (Mises), Smax (In-Plane Principal), Smax (In-Plane Principal-ABS) and Smax (Min Principal) increased by 53%, 70%, 68% and 43%, respectively. The presence of cavities has led to an increase in the estimated stresses, but it has also caused the critical stresses and critical deformations to be moved away from the inner surface of the shear wall and transferred to the designated sections (the regular cavities), which can be suggested as a solution in seismic design and structural improvement for transferring possible damage during an earthquake or storm to a desired, pre-designed location in the structure.

Keywords: Steel plate shear wall, Abaqus software, finite element method, boundary element, seismic structural improvement, von Mises stress.

4604 Drainage Prediction for Dam using Fuzzy Support Vector Regression

Authors: S. Wiriyarattanakun, A. Ruengsiriwatanakun, S. Noimanee

Abstract:

Drainage estimation is an important factor in dam management. In this paper, we use fuzzy support vector regression (FSVR) to predict the drainage of the Sirikrit Dam in Uttaradit province, Thailand. The results show that FSVR is a suitable method for drainage estimation.
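
As a baseline sketch of the idea, the snippet below fits a standard (non-fuzzy) epsilon-SVR with scikit-learn; the paper's FSVR additionally weights training samples by fuzzy membership, and the data here are synthetic placeholders, not Sirikrit Dam records.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(200, 3))        # e.g. inflow, level, rainfall
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 2, 200)  # synthetic drainage

# scale features, then fit an RBF-kernel support vector regressor
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X[:150], y[:150])
print("held-out R^2:", model.score(X[150:], y[150:]))
```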

Keywords: Drainage Estimation, Prediction.

4603 Efficient Design Optimization of Multi-State Flow Network for Multiple Commodities

Authors: Yu-Cheng Chou, Po Ting Lin

Abstract:

The network for delivering commodities has been an important design problem in our daily lives and many transportation applications. The delivery performance is evaluated based on the system reliability of delivering commodities from a source node to a sink node in the network. The system reliability is thus maximized to find the optimal routing. However, the design problem is not simple because (1) each path segment has randomly distributed attributes; (2) there are multiple commodities that consume various path capacities; (3) the optimal routing must successfully complete the delivery process within the allowable time constraints. In this paper, we focus on the design optimization of the Multi-State Flow Network (MSFN) for multiple commodities. We propose an efficient approach to evaluate the system reliability in the MSFN with respect to randomly distributed path attributes and to find the optimal routing subject to the allowable time constraints. The delivery rates, also known as delivery currents, of the path segments are evaluated and the minimal-current arcs are eliminated to reduce the complexity of the MSFN. Accordingly, the correct optimal routing is found and the worst-case reliability is evaluated. It has been shown that the reliability of the optimal routing is at least as high as the worst-case measure. Two benchmark examples are utilized to demonstrate the proposed method. The comparisons between the original and the reduced networks show that the proposed method is very efficient.

Keywords: Multiple Commodities, Multi-State Flow Network (MSFN), Time Constraints, Worst-Case Reliability (WCR)

4602 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based On Local Color Histograms

Authors: Mawloud Mosbah, Bachir Boucheham

Abstract:

The color histogram is considered the oldest method used by CBIR systems for indexing images. Global histograms, however, do not include spatial information, which is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods such as the CCV (Color Coherence Vector) are based on strong segmentation. Indexing based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and computing the histogram of each block. The dissimilarity between two images then reduces to computing the distances between the N local histograms of both images, resulting in N*N values; generally, the lowest value is used to rank images, meaning the lowest value designates which sub-region is used to index the images of the queried collection. In this paper, we examine the local histogram indexing method and compare its results against those given by the global histogram. We also address another noteworthy issue when relying on local histograms, namely which of the N*N values to trust when comparing images; in other words, on which of the sub-regions to base the indexing. Based on the results achieved here, relying on local histograms, which imposes extra overhead on the system through an additional preprocessing step, namely segmentation, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than relying on the local histogram having the lowest distance to the query histograms.
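
A minimal sketch of the local-histogram indexing and N*N distance matrix described above, using non-overlapping blocks for simplicity (the paper uses overlapping ones) and random images as placeholders.

```python
import numpy as np

def local_histograms(img, grid=4, bins=8):
    """Split an RGB image into grid x grid sub-regions (weak segmentation)
    and return one normalized color histogram per sub-region."""
    H, W, _ = img.shape
    hists = []
    for i in range(grid):
        for j in range(grid):
            block = img[i*H//grid:(i+1)*H//grid, j*W//grid:(j+1)*W//grid]
            h, _ = np.histogramdd(block.reshape(-1, 3),
                                  bins=(bins,)*3, range=((0, 256),)*3)
            hists.append(h.ravel() / h.sum())
    return np.array(hists)                    # shape (grid*grid, bins**3)

rng = np.random.default_rng(0)
a = local_histograms(rng.integers(0, 256, (64, 64, 3)))
b = local_histograms(rng.integers(0, 256, (64, 64, 3)))

# N*N Euclidean distances between sub-region histograms of the two images;
# ranking by the minimum entry is the convention discussed in the abstract
D = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
print(D.shape, D.min())
```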

Keywords: CBIR, Color Global Histogram, Color Local Histogram, Weak Segmentation, Euclidean Distance.

4601 Effect of High Injection Pressure on Mixture Formation, Burning Process and Combustion Characteristics in Diesel Combustion

Authors: Amir Khalid, B. Manshoor

Abstract:

Mixture formation prior to the ignition process is a key element of diesel combustion. Parametric studies of mixture formation and the ignition process under various injection parameters have received considerable attention for their potential to reduce emissions. The purpose of this study is to clarify the effects of injection pressure on mixture formation and ignition, especially during the ignition delay period, which significantly influences the subsequent combustion process and exhaust emissions. This study investigated the effects of injection pressure on fundamental diesel combustion using a rapid compression machine. The detailed behavior of mixture formation during the ignition delay period was investigated using a schlieren photography system with a high-speed camera. This method can clearly capture spray evaporation, spray interference, mixture formation and flame development with real images. The ignition process and flame development were investigated by direct photography using a light-sensitive high-speed color digital video camera. The injection pressure and air motion are important variables that strongly affect the fuel evaporation and the endothermic and pyrolysis processes during the ignition delay. An increased injection pressure makes the spray tip penetration longer and promotes a greater amount of fuel-air mixing during the ignition delay. The greater quantity of fuel prepared during the ignition delay period thus predominantly promotes more rapid heat release.
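
The injection-pressure trend can be illustrated with the classic Hiroyasu-Arai spray tip penetration correlation; this is a textbook stand-in for the optical measurements in the paper, with assumed nozzle and in-cylinder values.

```python
import numpy as np

def spray_tip_penetration(t, dP, d0=0.15e-3, rho_l=830.0, rho_g=25.0):
    """Hiroyasu-Arai correlation. dP: injection pressure drop (Pa);
    d0: nozzle hole diameter (m); rho_l, rho_g: liquid / in-cylinder
    gas densities (kg/m3). All defaults are assumed values."""
    t_b = 28.65 * rho_l * d0 / np.sqrt(rho_g * dP)        # breakup time
    before = 0.39 * np.sqrt(2.0 * dP / rho_l) * t         # t < t_b
    after = 2.95 * (dP / rho_g) ** 0.25 * np.sqrt(d0 * t) # t >= t_b
    return np.where(t < t_b, before, after)

t = 1e-3  # 1 ms after start of injection
for dP in (40e6, 80e6, 160e6):   # higher injection pressure -> longer penetration
    S = float(spray_tip_penetration(t, dP))
    print(f"dP = {dP/1e6:.0f} MPa -> S = {S*1e3:.1f} mm")
```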

Keywords: Mixture Formation, Diesel Combustion, Ignition Process, Spray, Rapid Compression Machine.

4600 Solution of The KdV Equation with Asymptotic Degeneracy

Authors: Tapas Kumar Sinha, Joseph Mathew

Abstract:

Recently, T. C. Au-Yeung, C. Au, and P. C. W. Fung [2] gave the solution of the KdV equation [1] for a boundary condition involving a constant b. We have further extended the method of [2] to find the solution of the KdV equation with asymptotic degeneracy. Via simulations we find both bright and dark solitons (i.e. solitons with opposite phases).
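
For reference, the familiar one-soliton solution of the standard KdV form u_t + 6uu_x + u_xxx = 0 can be written down and shifted onto a constant background b, which is one simple way to picture the asymptotic degeneracy; this sketch is illustrative, not the paper's derivation.

```python
import numpy as np

def kdv_soliton(x, t, c=4.0, b=0.0):
    """Bright soliton of speed c on constant background b; the Galilean
    shift u -> b + v re-speeds the soliton to c + 6b."""
    xi = x - (c + 6.0 * b) * t
    return b + 0.5 * c / np.cosh(0.5 * np.sqrt(c) * xi) ** 2

x = np.linspace(-20, 20, 2001)
u = kdv_soliton(x, t=1.0, b=0.5)
print(f"peak height above background: {u.max() - 0.5:.3f} (= c/2)")
```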

Keywords: KdV equation, Asymptotic Degeneracy, Solitons, Inverse Scattering

4599 Accurate Control of a Pneumatic System using an Innovative Fuzzy Gain-Scheduling Pattern

Authors: M. G. Papoutsidakis, G. Chamilothoris, F. Dailami, N. Larsen, A Pipe

Abstract:

Due to their high power-to-weight ratio and low cost, pneumatic actuators are attractive for robotics and automation applications; however, achieving fast and accurate control of their position has been known as a complex control problem. A methodology for obtaining high position accuracy with a linear pneumatic actuator is presented. During experimentation with a number of classical PID control approaches over many operations of the pneumatic system, the need for frequent manual re-tuning of the controller could not be eliminated. The reason for this problem is thermal and energy losses inside the cylinder body due to the complex friction forces developed by the piston displacements. Although PD controllers performed very well over short periods, it was necessary in our research project to introduce some form of automatic gain-scheduling to achieve good long-term performance. We chose a fuzzy logic system to do this, which proved to be an easily designed and robust approach. Since the PD approach showed very good behaviour in terms of position accuracy and settling time, it was incorporated into a modified form of the first-order Takagi-Sugeno fuzzy method to build an overall controller. This fuzzy gain-scheduler uses an input variable which automatically changes the PD gain values of the controller according to the frequency of repeated system operations. The performance of the new controller was significantly improved and the need for manual re-tuning was eliminated without a decrease in performance. The performance of the controller operating with the above method is going to be tested through a high-speed web network (GRID) for research purposes.
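
A conceptual sketch of PD control with a scheduled proportional gain: here a simple interpolated schedule keyed to the operation count stands in for the Takagi-Sugeno fuzzy rule base, and all gains and breakpoints are assumed values, not the authors' tuning.

```python
import numpy as np

def scheduled_kp(n_ops, breakpoints=(0, 500, 2000), gains=(8.0, 6.5, 5.0)):
    """Interpolate Kp downward as friction/thermal losses build with usage."""
    return float(np.interp(n_ops, breakpoints, gains))

def pd_step(setpoint, pos, vel, n_ops, kd=0.8):
    """One PD control update with the scheduled proportional gain."""
    error = setpoint - pos
    return scheduled_kp(n_ops) * error - kd * vel

# the same position error yields a gentler command after many operations
print(pd_step(0.10, 0.08, 0.0, n_ops=10))
print(pd_step(0.10, 0.08, 0.0, n_ops=1900))
```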

Keywords: Fuzzy logic, gain scheduling, leaky integrator, pneumatic actuator.

4598 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation

Authors: H. Khanfari, M. Johari Fard

Abstract:

Dealing with carbonate reservoirs can be challenging for reservoir engineers due to the various diagenetic processes that cause properties to vary throughout the reservoir. A good estimation of reservoir heterogeneity, defined as the variation in rock properties with location in a reservoir or formation, can help in modeling the reservoir and thus offer a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods describe the variations in rock properties based on the similarities of the environments in which different beds were deposited. To illustrate the vertical heterogeneity of a reservoir, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), both reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum from that point of view. In this paper, we correlated the statistics-based Lorenz method to a petroleum concept, i.e. the Kozeny-Carman equation, and derived the straight-line plot of the Lorenz graph for a homogeneous system. Finally, we applied the two methods to a heterogeneous field in South Iran and discussed each separately, with numbers and figures. As expected, these methods show great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.
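
A minimal sketch of computing the Lorenz coefficient from layered permeability/porosity data: sort by flow capacity per unit storage, build the Lorenz curve, and take twice the area between it and the homogeneous diagonal. The layer data are hypothetical.

```python
import numpy as np

def lorenz_coefficient(k, phi, h):
    """Sort layers by k/phi (descending), build cumulative flow capacity
    F = cum(k*h) vs storage capacity C = cum(phi*h); L = 2 * (area under
    the F(C) curve - 0.5). L = 0 for a homogeneous system."""
    order = np.argsort(k / phi)[::-1]
    F = np.cumsum((k * h)[order]);   F = np.insert(F / F[-1], 0, 0.0)
    C = np.cumsum((phi * h)[order]); C = np.insert(C / C[-1], 0, 0.0)
    area = np.sum((F[1:] + F[:-1]) / 2 * np.diff(C))   # trapezoid rule
    return 2.0 * (area - 0.5)

# hypothetical layered reservoir data: permeability (md), porosity (-), thickness (m)
k   = np.array([250.0, 90.0, 10.0, 400.0, 35.0])
phi = np.array([0.21, 0.17, 0.09, 0.24, 0.12])
h   = np.ones(5)
print(f"L = {lorenz_coefficient(k, phi, h):.3f}  # 0 = homogeneous, -> 1 heterogeneous")
```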

Keywords: Carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L).

4597 Electrophoretic Deposition of p-Type Bi2Te3 for Thermoelectric Applications

Authors: Tahereh Talebi, Reza Ghomashchi, Pejman Talemi, Sima Aminorroaya

Abstract:

Electrophoretic deposition (EPD) of p-type Bi2Te3 material has been accomplished, and a high-quality crack-free thick film has been achieved for thermoelectric (TE) applications. TE generators (TEGs) can convert waste heat into electricity, which can potentially help solve global warming problems. However, TEGs are expensive due to the high cost of materials, as well as the complex and expensive manufacturing process. EPD is a simple and cost-effective method which has recently been used for advanced applications. In EPD, when a DC electric field is applied to charged powder particles suspended in a suspension, they are attracted to and deposited on the oppositely charged substrate. In this study, it has been shown that it is possible to prepare a TE film using the EPD method and potentially achieve high TE properties at low cost. The relationship between the deposition weight and the EPD-related process parameters, such as applied voltage and time, has been investigated, and a linear dependence has been observed, in good agreement with the theoretical principles of EPD. A stable EPD suspension of p-type Bi2Te3 was prepared in a mixture of acetone and ethanol with triethanolamine as a stabilizer. To achieve a high-quality homogeneous film on a copper substrate, the optimum voltage and time of the EPD process were investigated. The morphology and microstructure of the green deposited films have been investigated using a scanning electron microscope (SEM). The green Bi2Te3 films have shown good adhesion to the substrate. In summary, this study has shown not only that EPD of p-type Bi2Te3 material is possible, but also that its thick film is of high enough quality for TE applications.
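
The linear dependence reported above is what Hamaker's law for EPD predicts: deposited mass grows linearly with field, concentration, and time. A sketch with illustrative parameter values:

```python
def epd_deposit_mass(mu, C, E, A, t, f=1.0):
    """Hamaker's law: m = f * mu * C * E * A * t.
    mu: electrophoretic mobility (m^2/V/s), C: particle concentration (kg/m^3),
    E: electric field (V/m), A: electrode area (m^2), t: time (s),
    f: sticking efficiency (<= 1). All values below are assumptions."""
    return f * mu * C * E * A * t

mu, C, A = 2e-8, 10.0, 1e-4          # assumed suspension/electrode values
for volts in (10, 30, 60):           # 1 cm electrode gap assumed
    E = volts / 0.01
    print(f"{volts:>2} V, 5 min -> {epd_deposit_mass(mu, C, E, A, 300)*1e3:.2f} mg")
```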

Keywords: Electrical conductivity, electrophoretic deposition, p-type Bi2Te3, thermoelectric materials, thick films.

4596 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation

Authors: Somayeh Komeylian

Abstract:

Direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, the antenna array output modeling involves numerous parameters, including noise samples, signal waveform, signal directions, signal number, and signal-to-noise ratio (SNR), and thereby the methods of DoA estimation rely heavily on the generalization characteristic for establishing a large number of training data sets. Hence, we present two different optimization models of DoA estimation: (1) the implementation of a decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) an optimization method based on a deep neural network (DNN) with radial basis functions (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for three classes. However, the accuracy and robustness of DoA estimation are still highly sensitive to technological imperfections of the antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, and thereby the method may fail to deliver high precision for DoA estimation. Therefore, this work makes a further contribution by developing the DNN-RBF model for DoA estimation, to overcome the limitations of non-parametric and data-driven methods in terms of array imperfection and generalization. The numerical results of implementing the DNN-RBF model have confirmed better DoA estimation performance compared with the LS-SVM algorithm. Consequently, we evaluated the performance of the two aforementioned optimization methods for DoA estimation using the mean squared error (MSE).

Keywords: DoA estimation, adaptive antenna array, Deep Neural Network, LS-SVM optimization model, radial basis function, MSE.

4595 Value of Sharing: Viral Advertisement

Authors: Duygu Aydın, Aşina Gülerarslan, Süleyman Karaçor, Tarık Doğan

Abstract:

The motivations of consumers for sharing viral advertisements, and the impacts of these advertisements on brand perception, are examined in this study. Three fundamental questions are answered: individuals' motivations for watching and sharing advertisements, the criteria for liking a viral advertisement, and the impact of individual attitudes toward a viral advertisement on brand perception. The study is carried out via a viral advertisement that ran in Turkey. The data are collected by an online survey whose sample consists of individuals who experienced the sample advertisement, and are analyzed using the SPSS statistical package program. Recently, the traditional advertising mindset has been changing, and new advertising approaches with significant impacts on consumers have been put forward. Viral advertising is a modern advertising approach which offers significant advantages to brands apart from traditional advertising channels such as television, radio and magazines. Viral advertising, also known as electronic word-of-mouth (eWOM), consists of the free spread of convincing messages sent by brands through interpersonal communication. Compared to traditional advertising, a more provocative thematic approach is taken, whose foundation is to create advertisements that consumers find worth sharing with others. When that fact is taken into consideration, it can, in a manner of speaking, also be stated that viral advertising is media engineering. Content worth sharing makes people volunteer spokespeople for a brand and strengthens the emotional bonds between brand and consumer. Especially for some sectors in countries with traditional advertising channel limitations, viral advertising creates vital advantages.

Keywords: Viral advertising, marketing, consumers, brands.

4594 Solar Tracking System: More Efficient Use of Solar Panels

Authors: J. Rizk, Y. Chaiko

Abstract:

This paper shows the potential system benefits of a simple solar tracking system using a stepper motor and a light sensor. The method increases power collection efficiency by means of a device that tracks the sun so as to keep the panel at a right angle to its rays. A solar tracking system is designed, implemented and experimentally tested. The design details and the experimental results are presented.

Keywords: Renewable Energy, Power Optimization.

4593 Very Large Scale Integration Architecture of Finite Impulse Response Filter Implementation Using Retiming Technique

Authors: S. Jalaja, A. M. Vijaya Prakash

Abstract:

The recursive combination of an algorithm based on Karatsuba multiplication is exploited to design a generalized transpose and parallel Finite Impulse Response (FIR) filter. Mid-range Karatsuba multiplication and a carry save adder based on Karatsuba multiplication reduce the time complexity of higher-order multiplication implemented up to n bits. As a result, we design a modified N-tap transpose and parallel symmetric FIR filter structure using the Karatsuba algorithm. The mathematical formulation of the FFA filter is derived. The proposed architecture involves a significantly smaller area-delay product (ADP) than the existing block implementation. By adopting the retiming technique, hardware cost is reduced further. The filter architecture is designed using a 90 nm technology library and implemented using Cadence EDA tools. The synthesized results show better performance for different word lengths and block sizes. The design achieves switching activity reduction and low power consumption, with and without retiming, for different combinations of the circuit. The proposed structure achieves more than half of the power reduction by adopting the with- and without-retiming techniques compared to the earlier design structure. As a proof of concept, for block size 16 and filter length 64, the CKA method achieves 51% and 70% less power by applying the retiming technique, and the CSA method achieves 57% and 77% less power by applying the retiming technique, compared to the previously proposed design.
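
For reference, the Karatsuba recursion that the architecture exploits replaces four half-size products with three, giving O(n^1.585) instead of O(n^2) multiplication cost; a minimal software version:

```python
def karatsuba(x, y):
    """Recursive Karatsuba multiplication of non-negative integers:
    three half-size products instead of four."""
    if x < 16 or y < 16:                     # small base case
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> n, x & ((1 << n) - 1)
    hi_y, lo_y = y >> n, y & ((1 << n) - 1)
    a = karatsuba(hi_x, hi_y)                             # high parts
    b = karatsuba(lo_x, lo_y)                             # low parts
    c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b       # cross terms
    return (a << 2 * n) + (c << n) + b

assert karatsuba(1234567, 7654321) == 1234567 * 7654321
print("ok")
```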

Keywords: Carry save adder, Karatsuba multiplication, mid-range Karatsuba multiplication, modified FFA, transposed filter, retiming.

4592 Markov Chain Based QoS Support for Wireless Body Area Network Communication in Health Monitoring Services

Authors: R. A. Isabel, E. Baburaj

Abstract:

Wireless Body Area Networks (WBANs) are essential for real-time health monitoring of patients and for diagnosing many diseases. WBANs comprise many sensors to monitor a large range of ambient conditions. Quality of Service (QoS) is a key challenge in WBANs, because the different state information of the neighboring nodes has to be monitored in an accurate manner. However, energy consumption increases while predicting and maintaining exact information in highly dynamic environments. In order to reduce energy consumption and end-to-end delay, a Markov Chain Based Quality of Service Support (MC-QoSS) method is designed for the health monitoring services of WBAN communication. Energy consumption is reduced by forming a Markov chain with high-energy nodes in the sensor network communication path, and the low-energy sensor nodes are removed using the transition probability in order to reduce end-to-end delay. High-energy nodes are formed into the chain structure of the corresponding path to enhance communication. After choosing the communication path through high-energy nodes, the packets are sent from the source node to the sink node with a higher packet delivery ratio. The simulation results show that the MC-QoSS method improves the packet delivery ratio and reduces energy consumption with minimum end-to-end delay, compared to existing methods.

Keywords: Wireless body area networks, quality of service, Markov chain, health monitoring services.

4591 Integrated Design in Additive Manufacturing Based on Design for Manufacturing

Authors: E. Asadollahi-Yazdi, J. Gardan, P. Lafon

Abstract:

Nowadays, manufacturers are confronted with producing different versions of products due to quality, cost and time constraints. On the other hand, Additive Manufacturing (AM), as a production method based on a CAD model, disrupts the design and manufacturing cycle with new parameters. To address these issues, researchers have applied the Design for Manufacturing (DFM) approach to AM, but until now there has been no integrated approach for the design and manufacture of a product through AM. This paper therefore aims to provide a general methodology for managing the different production issues, as well as supporting interoperability with the AM process and different Product Life Cycle Management tools. The problem is that the systems engineering models used for managing complex systems cannot support product evolution and its impact on the product life cycle. Therefore, it seems necessary to provide a general methodology for managing the product diversity created by using AM. This methodology must consider manufacture and assembly during product design, as early as possible in the design stage. The latest DFM approach, as a methodology to analyze the system comprehensively, integrates manufacturing constraints into the numerical model upstream. DFM for AM is thus used to import the characteristics of AM into the design and manufacturing process of a hybrid product, to manage the criteria coming from AM. The research also presents an integrated design method that takes into account the knowledge of layer manufacturing technologies. For this purpose, an interface model based on the skin and skeleton concepts is provided: the usage and manufacturing skins are used to show the functional surfaces of the product, while the material flow and the links between the skins are demonstrated by the usage and manufacturing skeletons. This integrated approach is therefore a helpful methodology for designers and manufacturers in decisions such as material and process selection, as well as the evaluation of product manufacturability.

Keywords: Additive manufacturing, 3D printing, design for manufacturing, integrated design, interoperability.

4590 Rotor Bearing System Analysis Using the Transfer Matrix Method with Thickness Assumption of Disk and Bearing

Authors: Omid Ghasemalizadeh, Mohammad Reza Mirzaee, Hossein Sadeghi, Mohammad Taghi Ahmadian

Abstract:

There are many different ways to find the natural frequencies of a rotating system. One of the most effective methods, used for its precision and correctness, is the transfer matrix method. With this method, the entire continuous system is subdivided and the corresponding differential equation can be stated in matrix form. To analyze the shaft considered in this paper, the rotor is divided into several elements along the shaft, each with its own mass and moment of inertia, which makes it possible to define the named matrix. Choosing a larger number of elements makes the matrix larger and, as a result, yields more accurate answers. In this paper, the dynamics of a rotor-bearing system is analyzed, considering the gyroscopic effect. To increase the accuracy of the modeling, the thickness of the disk and bearings is also taken into account, which leads to a more complicated matrix to be solved. Entering these parameters into the model changes the results completely; these differences are shown in the results. As stated above, defining the transfer matrix to reach the natural frequencies of the probed system requires introducing some elements. For the boundary conditions of these elements, the bearings at the end of the shaft are modeled as equivalent springs and dampers for the discretized system, while a continuous model is used for the shaft itself. With the above considerations and using the transfer matrix, exact results are obtained from the calculations. The results show that, by increasing the thickness of the bearing, the amplitude of vibration decreases, while the stiffness of the shaft and the natural frequencies of the system grow. Consequently, it is easily understood that ignoring the influence of bearing and disk thicknesses would yield unrealistic answers.
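
A minimal transfer-matrix sketch for a lumped chain shows the mechanics of the method: a 2×2 field matrix per flexible segment, a point matrix per inertia station, and natural frequencies where the boundary-condition residual of the overall matrix vanishes. This toy model omits the slopes, moments, gyroscopic terms, and disk/bearing thickness effects the paper includes.

```python
import numpy as np

def overall_matrix(w, masses, stiffs):
    """Chain the 2x2 transfer matrices at frequency w.
    State vector: [displacement, transmitted force]."""
    T = np.eye(2)
    for m, k in zip(masses, stiffs):
        field = np.array([[1.0, 1.0 / k], [0.0, 1.0]])   # flexible segment
        point = np.array([[1.0, 0.0], [-m * w**2, 1.0]]) # inertia station
        T = point @ field @ T
    return T

masses, stiffs = [1.0, 1.0], [1e4, 1e4]
# fixed-free chain: zero displacement at the base, zero force at the free
# end -> the residual is the force-transfer entry T[1,1]
ws = np.linspace(1.0, 300.0, 3000)
res = np.array([overall_matrix(w, masses, stiffs)[1, 1] for w in ws])
roots = ws[np.where(np.diff(np.sign(res)))[0]]
print(roots)  # ~ [61.8, 161.8] rad/s for this 2-DOF chain
```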

Keywords: Rotor System, Disk and Bearing Thickness, Transfer Matrix, Amplitude.

4589 Liquid Chromatography Microfluidics for Detection and Quantification of Urine Albumin Using Linear Regression Method

Authors: Patricia B. Cruz, Catrina Jean G. Valenzuela, Analyn N. Yumang

Abstract:

Nearly a hundred per million of the Filipino population are diagnosed with Chronic Kidney Disease (CKD). The early stage of CKD has no symptoms and can only be discovered once the patient undergoes urinalysis. Over the years, different methods have been discovered and used for the quantification of urinary albumin, such as immunochemical assays, most of which require large machinery with high maintenance and resource costs, and the dipstick test, which is yet to be proven and is still debated as a reliable method for detecting early stages of microalbuminuria. This research study involves the use of the liquid chromatography concept in a microfluidic instrument with a biosensor, as the means of separation and detection respectively, and linear regression to quantify human urinary albumin. The researchers' main objective was to create a miniature system that detects and quantifies patients' urinary albumin while reducing the volume used per five test samples. For this study, 30 urine samples of unknown albumin concentration were tested using the VITROS Analyzer and the microfluidic system for comparison. Based on the data from both methods, the actual vs. predicted regression produced a positive linear relationship with an R2 of 0.9995 and a linear equation of y = 1.09x + 0.07, indicating that the predicted values and actual values are approximately equal. Furthermore, the microfluidic instrument uses 75% less total volume (sample and reagents combined) per five test samples compared to the VITROS Analyzer.
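
The calibration step reduces to an ordinary least-squares line fit; the sketch below reproduces the form y = 1.09x + 0.07 on fabricated placeholder values (not the study's 30 patient samples).

```python
import numpy as np

rng = np.random.default_rng(0)
actual = rng.uniform(5, 300, 30)                         # mg/L, assumed range
predicted = 1.09 * actual + 0.07 + rng.normal(0, 1.5, 30)  # synthetic readings

# least-squares line fit of predicted vs. actual, plus goodness of fit
slope, intercept = np.polyfit(actual, predicted, 1)
r = np.corrcoef(actual, predicted)[0, 1]
print(f"y = {slope:.2f}x + {intercept:.2f},  R^2 = {r**2:.4f}")
```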

Keywords: Chronic kidney disease, microfluidics, linear regression, VITROS analyzer, urinary albumin.

4588 A Study on the Effectiveness of Alternative Commercial Ventilation Inlets That Improve Energy Efficiency of Building Ventilation Systems

Authors: Brian Considine, Aonghus McNabola, John Gallagher, Prashant Kumar

Abstract:

Passive air pollution control devices known as aspiration efficiency reducers (AERs) have been developed using aspiration efficiency (AE) concepts. Their purpose is to reduce the concentration of particulate matter (PM) drawn into a building air handling unit (AHU) through alterations in the inlet design, thereby improving energy consumption. In this paper, an examination is conducted into the effect of installing a deflector system around an AER-AHU inlet for both forward- and rear-facing orientations relative to the wind. The study found that these deflectors are an effective passive control method for reducing AE at various ambient wind speeds over a range of microparticles of varying diameter. The deflector system was found to induce a large wake zone at low ambient wind speeds for a rear-facing AER-AHU, resulting in significantly lower AE in comparison to the case without deflectors. As the wind speed increased, both configurations contained a wake zone, but the concentration gradients were much lower with the deflectors. For the forward-facing models, the deflector system at low ambient wind speed was preferred at higher Stokes numbers, but there was negligible difference as the Stokes number decreased. Similarly, there was no significant difference at higher wind speeds across the Stokes number range tested. The results demonstrate that a deflector system is a viable passive control method for the reduction of ventilation energy consumption.
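
The Stokes number referenced above follows the standard definition Stk = rho_p * dp^2 * U / (18 * mu * L); a quick estimate with illustrative values (the characteristic inlet dimension L is an assumption):

```python
def stokes_number(dp, U, L, rho_p=1000.0, mu=1.81e-5):
    """dp: particle diameter (m), U: wind speed (m/s),
    L: characteristic inlet dimension (m, assumed),
    rho_p: particle density (kg/m^3), mu: air viscosity (Pa s)."""
    return rho_p * dp**2 * U / (18.0 * mu * L)

for dp_um in (1, 10, 50):
    stk = stokes_number(dp_um * 1e-6, U=5.0, L=0.3)
    print(f"dp = {dp_um:>2} um -> Stk = {stk:.2e}")
```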

Keywords: Aspiration efficiency, energy, particulate matter, ventilation.

4587 Strategy in Controlling Rice-Field Conversion in Pangkep Regency, South Sulawesi, Indonesia

Authors: Nurliani, Ida Rosada

Abstract:

National rice consumption keeps increasing along with rising household incomes and rapid population growth, yet food availability, particularly rice, is limited. The impacts of rice-field conversion accumulate, as seen in the potential losses of rice and crop production, as well as of work opportunities, which keep increasing year by year. Therefore, policy recommendations are required to control rice-field conversion through economic, social, and ecological approaches. The research used a survey method intended to: (1) identify internal factors, i.e. the quality and productivity of the land, as causes of land conversion; (2) identify external factors of land conversion, namely the value of the rice-field and the competitor's land, workforce absorption, and regulation; and (3) formulate strategies for controlling rice-field conversion. The population of the research was farmers who had converted land in Pangkep Regency, South Sulawesi. Samples were determined using the incidental sampling method. Data analysis used productivity analysis, land quality analysis, total economic value analysis, and SWOT analysis. Results showed that both the quality of the rice-fields and the average productivity of the grains (unhulled rice) were low. The total economic value of the rice-field was lower than the economic value of the embankment, while the workforce absorption value of the rice-field was higher than that of the embankment. Strategies for controlling such rice-field conversion include increasing rice-field productivity, improving land quality, applying location-specific cultivation techniques, improving the irrigation lines, and publicizing the regulations and sanctions concerning the transfer of land use.

Keywords: Land conversion, quality of rice-field, land economic value, strategy in controlling.

4586 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure

Authors: T. Nozu, K. Hibi, T. Nishiie

Abstract:

This paper discusses the applicability of a numerical model for predicting damage from an accidental hydrogen explosion occurring in a hydrogen facility. The numerical model was based on the unstructured finite volume method (FVM) code "NuFD/FrontFlowRed". For simulating the unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, where a G-equation model and a conserved scalar model expressed the propagation of the premixed flame surface and the diffusion combustion process, respectively. For validation of this numerical model, we simulated two previous hydrogen explosion tests. The first is an open-space explosion test, in which the source was a prismatic 5.27 m3 volume with a 30% hydrogen-air mixture; a reinforced concrete wall was set 4 m away from the front surface of the source, which was ignited at the bottom center by a spark. The second is a vented-enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m2 on one side; the test was performed with ignition at the center of the wall opposite the vent, using hydrogen-air mixtures with hydrogen concentrations close to 18 vol.%. The results from the numerical simulations are compared with the previous experimental data to assess the accuracy of the numerical model, and we have verified that the simulated overpressures and flame time-of-arrival data are in good agreement with the results of the two previous explosion tests.

Keywords: Deflagration, Large Eddy Simulation, Turbulent combustion, Vented enclosure.
