Search results for: linear transformation and pattern recognition.

224 Modeling and Visualizing Seismic Wave Propagation in Elastic Medium Using Multi-Dimension Wave Digital Filtering Approach

Authors: Jason Chien-Hsun Tseng, Nguyen Dong-Thai Dao, Chong-Ching Chang

Abstract:

A novel PDE solver using the multidimensional wave digital filtering (MDWDF) technique to achieve the solution of a 2D seismic wave system is presented. In essence, the continuous physical system, served by a linear Kirchhoff circuit, is transformed into an equivalent discrete dynamic system implemented by a multidimensional wave digital filtering (MDWDF) circuit. This amounts to numerically approximating the differential equations used to describe elements of a MD passive electronic circuit by grid-based difference equations implemented through the so-called state quantities within the passive MDWDF circuit. The digital model can therefore track the wave field on a dense 3D grid of points. Details about how to transform the continuous system into the desired discrete passive system are addressed. In addition, initial and boundary conditions are properly embedded into the MDWDF circuit in terms of state quantities. Graphic results clearly demonstrate physical effects of seismic wave (P-wave and S-wave) propagation, including radiation, reflection, and refraction from and across hard boundaries. A comparison between the MDWDF technique and the finite difference time domain (FDTD) approach is also made in terms of computational efficiency.
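
The abstract benchmarks the MDWDF circuit against a conventional finite difference time domain (FDTD) solver. As a point of reference only, the sketch below shows a minimal FDTD update for a 2D scalar (acoustic) wave equation in Python; it is not the MDWDF method, and the grid spacing, wave speed, source and boundaries are hypothetical.

```python
import numpy as np

# Minimal 2D scalar (acoustic) wave FDTD sketch -- a conventional reference point,
# not the MDWDF circuit. Grid spacing, wave speed and source are hypothetical.
nx, nz = 200, 200
dx = 10.0                                  # grid spacing [m] (assumed)
c = 2000.0                                 # wave speed [m/s] (assumed)
dt = 0.5 * dx / (c * np.sqrt(2.0))         # time step satisfying the 2D CFL limit

u_prev = np.zeros((nz, nx))                # wavefield at t - dt
u_curr = np.zeros((nz, nx))                # wavefield at t
u_curr[nz // 2, nx // 2] = 1.0             # impulsive point source at the grid centre

for _ in range(300):
    # 5-point Laplacian (np.roll gives periodic boundaries, kept for brevity)
    lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0) +
           np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1) - 4.0 * u_curr) / dx**2
    u_next = 2.0 * u_curr - u_prev + (c * dt) ** 2 * lap   # leapfrog time update
    u_prev, u_curr = u_curr, u_next
```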

Keywords: Seismic Wave Propagation, Multi-Dimension Wave Digital Filters, Partial Differential Equations.

223 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector

Authors: Mariam Vardiashvili

Abstract:

The economic significance of the asset impairment process is considerable. Impairment reflects the reduction of the future economic benefits or service potential embodied in an asset. The assets owned by public sector entities either bring economic benefits or are used for the delivery of free-of-charge services; consequently, they are classified as cash-generating and non-cash-generating assets. IPSAS 21 - Impairment of Non-Cash-Generating Assets and IPSAS 26 - Impairment of Cash-Generating Assets have been designed with this specificity in mind. When measuring impairment of assets, it is important to select the relevant methods. For measurement of impaired non-cash-generating assets, IPSAS 21 recommends three methods: the depreciated replacement cost approach, the restoration cost approach, and the service units approach. Value in use of cash-generating assets (as per IPSAS 26) is measured by the discounted value of the cash flows to be received in the future. The article classifies assets in the public sector as non-cash-generating and cash-generating assets and also deals with the factors which should be considered when evaluating impairment of assets. The essence of impairment of non-financial assets and the methods of its measurement are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is put on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and on how these methods are selected. The traditional and the expected cash flow approaches for calculation of the discounted value are reviewed. The article also discusses the recognition of impairment loss and its reflection in financial reporting. The article concludes that, regardless of the functional purpose of the impaired asset and whichever method is used for measuring it, presentation of realistic information regarding the value of the assets should be ensured in the financial reporting. In the theoretical development of the issue, the methods of scientific abstraction, analysis and synthesis were used, and the research was carried out with a systemic approach. The research process uses international accounting standards, theoretical research and publications of Georgian and foreign scientists.
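
As an illustration of the expected cash flow approach mentioned above, the short Python sketch below computes a value in use by discounting expected future inflows and derives an impairment loss as carrying amount minus recoverable amount; all figures and the discount rate are hypothetical, and the sketch is a simplified reading of the concepts rather than text taken from IPSAS 21/26.

```python
# Hypothetical figures for a cash-generating asset (not taken from IPSAS 21/26)
expected_cash_flows = [120_000.0, 110_000.0, 100_000.0, 90_000.0]   # expected net inflows per year
discount_rate = 0.06

value_in_use = sum(cf / (1.0 + discount_rate) ** (t + 1)
                   for t, cf in enumerate(expected_cash_flows))
fair_value_less_costs_to_sell = 340_000.0
recoverable_amount = max(value_in_use, fair_value_less_costs_to_sell)

carrying_amount = 420_000.0
impairment_loss = max(carrying_amount - recoverable_amount, 0.0)
print(f"value in use = {value_in_use:,.0f}, impairment loss = {impairment_loss:,.0f}")
```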

Keywords: Non-cash-generating assets, cash-generating assets, recoverable value, recoverable service amount, value in use.

222 Effect of Oxytocin on Cytosolic Calcium Concentration of Alpha and Beta Cells in Pancreas

Authors: Rauza Sukma Rita, Katsuya Dezaki, Yuko Maejima, Toshihiko Yada

Abstract:

Oxytocin is a nine-amino acid peptide synthesized in the paraventricular nucleus (PVN) and supraoptic nucleus (SON) of the hypothalamus. Oxytocin promotes contraction of the uterus during birth and milk ejection during breast feeding. Although oxytocin receptors are found predominantly in the breasts and uterus of females, many tissues and organs express oxytocin receptors, including the pituitary, heart, kidney, thymus, vascular endothelium, adipocytes, osteoblasts, adrenal gland, pancreatic islets, and many cell lines. In pancreatic islets, oxytocin receptors are expressed in both α-cells and β-cells, with stronger expression in α-cells. However, to our knowledge there are no reports yet about the effect of oxytocin on the cytosolic calcium response in α- and β-cells. This study aims to investigate the effect of oxytocin on α-cells and β-cells and its oscillation pattern. Islets of Langerhans from wild-type mice were isolated by collagenase digestion. Isolated and dissociated single cells, either α-cells or β-cells, on coverslips were mounted in an open chamber and superfused in HKRB. The cytosolic calcium concentration ([Ca2+]i) in single cells was measured by fura-2 microfluorimetry. After measurement of [Ca2+]i, α-cells were identified by subsequent immunocytochemical staining using an anti-glucagon antiserum. In β-cells, the [Ca2+]i increase in response to oxytocin was observed only under the 8.3 mM glucose condition, whereas in α-cells, a [Ca2+]i increase induced by oxytocin was observed at both 2.8 mM and 8.3 mM glucose. Oscillations were induced more frequently in β-cells than in α-cells. In conclusion, the present study demonstrated that oxytocin directly interacts with both α-cells and β-cells and induces an increase of [Ca2+]i with specific oscillation patterns.

Keywords: α-cells, β-cells, cytosolic calcium concentration, oscillation, oxytocin.

221 Application of Particle Image Velocimetry in the Analysis of Scale Effects in Granular Soil

Authors: Zuhair Kadhim Jahanger, S. Joseph Antony

Abstract:

Studies in the literature that systematically deal with the scale effects of strip footings on different sand packings remain scarce. In this research, the variation of ultimate bearing capacity and deformation pattern of soil beneath strip footings of different widths under plane-strain conditions on the surface of loose, medium-dense and dense sand has been systematically studied using experimental and non-invasive methods for measuring microscopic deformations. The presented analyses are based on model-scale compression tests analysed using the Particle Image Velocimetry (PIV) technique. Upper bound analysis of the current study shows that the maximum vertical displacement of the sand under the ultimate load increases with the width of the footing, but at a decreasing rate with the relative density of the sand, whereas the relative vertical displacement in the sand decreases with increasing footing width. Good agreement is observed between the experimental results for different footing widths and relative densities. The experimental analyses have shown that a pronounced scale effect exists for strip surface footings. The bearing capacity factors decrease rapidly up to footing widths of B = 0.25 m, 0.35 m, and 0.65 m for loose, medium-dense and dense sand respectively, after which there is no further significant decrease. The deformation modes of the soil as well as the ultimate bearing capacity values are affected by the footing width. The obtained results could be used to improve settlement calculations for foundations interacting with granular soil.

Keywords: PIV, granular mechanics, scale effect, upper bound analysis.

220 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles

Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi

Abstract:

Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses data from around 40,000 vehicles, comprising vehicle specifications and operational environmental conditions such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data are used to investigate the accuracy of the machine learning algorithms linear regression (LR), k-nearest neighbor (KNN) and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. Performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements. The performance of the algorithms is compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data, and 4.2% on operational data.
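
To make the comparison concrete, here is a minimal Python/scikit-learn sketch that evaluates LR, KNN and a small ANN on synthetic stand-in data with plain (non-nested) cross-validation; the features, targets and hyperparameters are hypothetical and not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                                   # stand-in vehicle/operation features
y = 25.0 + X @ rng.normal(size=6) + rng.normal(0.5, 1.0, 500)   # stand-in FC target [l/100 km]

models = {
    "LR": LinearRegression(),
    "KNN": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5)),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=5)          # plain 5-fold CV (the paper nests it)
    rel_err = np.mean(np.abs(pred - y) / np.abs(y))      # mean relative prediction error
    print(f"{name}: mean relative error = {rel_err:.2%}")
```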

Keywords: Artificial neural networks, fuel consumption, machine learning, regression, statistical tests.

219 Determination of Surface Deformations with Global Navigation Satellite System Time Series

Authors: I. Tiryakioglu, M. A. Ugur, C. Ozkaymak

Abstract:

The development of Global Navigation Satellite System (GNSS) technology has led to increasingly widespread and successful applications of GNSS surveys for monitoring crustal movements. Instead of multi-period GNSS solutions, this study utilizes GNSS time series, which are required to determine the vertical deformations in the study area more precisely. In recent years, surface deformations that are parallel and semi-parallel to the Bolvadin fault have occurred in Western Anatolia. These surface deformations have continued to occur in the Bolvadin settlement area, which is located mostly on alluvial ground. Due to these surface deformations, a number of cracks in buildings located in the residential areas and breaks in underground water and sewage systems have been observed. In order to determine the amount of vertical surface deformation, two continuous GNSS stations have been established in the region. The stations have been operating since 2015 and 2017, respectively. In this study, GNSS observations from these two stations were processed with the GAMIT/GLOBK (GNSS Analysis Massachusetts Institute of Technology/GLOBal Kalman) program package to create coordinate time series. With the time series analyses, the GNSS stations' behaviour models (linear, periodic, etc.), the causes of these behaviours, and mathematical models were determined. The results from the time series analysis of these two GNSS stations show approximately 50-90 mm/yr of vertical movement.
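
A common way to extract such behaviour models from a coordinate time series is to fit an offset, a linear trend and annual/semi-annual harmonics by least squares. The Python sketch below illustrates this on a synthetic vertical (Up) component; the rate, noise level and time span are hypothetical, and the sketch is independent of the GAMIT/GLOBK processing itself.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(2015.0, 2020.0, 1826)              # decimal years, daily solutions (synthetic)
up = (-70.0 * (t - t[0]) + 5.0 * np.sin(2 * np.pi * t)
      + rng.normal(0.0, 3.0, t.size))              # synthetic Up component [mm]

# Design matrix: offset, linear trend, annual and semi-annual harmonics
A = np.column_stack([
    np.ones_like(t),
    t - t[0],
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),  # annual terms
    np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),  # semi-annual terms
])
coef, *_ = np.linalg.lstsq(A, up, rcond=None)
print(f"estimated vertical rate: {coef[1]:.1f} mm/yr")   # the linear-trend coefficient
```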

Keywords: Bolvadin fault, GAMIT, GNSS time series, surface deformations.

218 Complex Wavelet Transform Based Image Denoising and Zooming Under the LMMSE Framework

Authors: T. P. Athira, Gibin Chacko George

Abstract:

This paper proposes a dual-tree complex wavelet transform (DT-CWT) based directional interpolation scheme for noisy images. The problems of denoising and interpolation are modelled as estimating the noiseless and missing samples under the same framework of optimal estimation. Initially, the DT-CWT is used to decompose an input low-resolution noisy image into low- and high-frequency subbands. The high-frequency subband images are interpolated by linear minimum mean square error (LMMSE) based interpolation, which preserves the edges of the interpolated images. For each noisy LR image sample, multiple estimates are computed along different directions and then fused for a more accurate denoised LR image. The estimation parameters calculated in the denoising process can be readily reused to interpolate the missing samples. The inverse DT-CWT is applied to the denoised input and interpolated high-frequency subband images to obtain the high-resolution image. Compared with conventional schemes that perform denoising and interpolation in tandem, the proposed DT-CWT based noisy image interpolation method can reduce many noise-caused interpolation artifacts and preserve the image edge structures well. The visual and quantitative results show that the proposed technique outperforms many existing denoising and interpolation methods.

Keywords: Dual-tree complex wavelet transform (DT-CWT), denoising, interpolation, optimal estimation, super resolution.

217 A Novel Low-Profile Coupled-Fed Printed Twelve-Band Mobile Phone Antenna with Slotted Ground Plane for LTE/GSM/UMTS/WIMAX/WLAN Operations

Authors: Omar A. Saraereh, M. A. Smadi, A. K. S. Al-Bayati, Jasim A. Ghaeb, Qais H. Alsafasfeh

Abstract:

A low-profile planar antenna for twelve-band operation in a mobile phone is presented. The radiating elements of the proposed antenna occupy an area of 17 × 50 mm² and are mounted on the compact no-ground portion of the system circuit board to achieve a simple low-profile structure. In order to overcome the narrow bandwidth of conventional planar printed antennas, a novel bandwidth enhancement approach for multiband handset antennas is proposed. The technique used in this study shows that by using a coupled-fed mechanism and a slotted ground structure, multiband operation with wideband characteristics can be achieved. The modifications introduced into the ground plane significantly improved the bandwidths of the designed antenna. The slotted ground plane structure with the coupled-fed elements contributes its lowest, middle and higher-order resonant modes to form four operating modes. The generated modes are able to cover LTE 700/2300/2500, GSM 850/900/1800/1900, UMTS, WiMAX 3500 and WLAN 2400/5200/5800 operations. Parametric studies via simulation are provided and discussed. The proposed antenna's gain, efficiency and radiation pattern characteristics over the desired operating bands are obtained and discussed. The reasonable results observed can meet the requirements of practical mobile phones.

Keywords: Antenna, handset, LTE, Mobile, Multiband, Slotted ground, specific absorption rate (SAR).

216 Estimation of the Park-Ang Damage Index for Floating Column Building with Infill Wall

Authors: Susanta Banerjee, Sanjaya Kumar Patro

Abstract:

Buildings with floating columns are highly undesirable in seismically active areas. Many urban multi-storey buildings today adopt floating columns to accommodate parking at the ground floor or reception lobbies in the first storey. The earthquake forces developed at different floor levels in a building need to be brought down along the height to the ground by the shortest path; any deviation or discontinuity in this load transfer path results in poor performance of the building. Floating column buildings are severely damaged during earthquakes. Damage to such structures can be reduced by taking into account the effect of infill walls. This paper presents the effect of infill wall stiffness on the damage occurring in a floating column building when the ground shakes. Modelling and analysis are carried out with the non-linear analysis programme IDARC-2D. Damage occurring in beams, columns and storeys is studied by formulating a modified Park-Ang model to evaluate damage indices. Overall structural damage indices of the buildings due to ground shaking are also obtained. Dynamic response parameters, i.e. lateral floor displacement, storey drift, time period and base shear of the buildings, are obtained and the results are compared with ordinary moment-resisting frame buildings. The formation of cracks, yielding and plastic hinges is also observed during the analysis.
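
For reference, the classical Park-Ang index combines a displacement-ductility term with a hysteretic-energy term; the Python sketch below evaluates that standard formulation with hypothetical member response quantities (it is not the modified model formulated in the paper).

```python
def park_ang_damage_index(delta_m, delta_u, hysteretic_energy, q_y, beta=0.05):
    """Classical Park-Ang index: ductility demand term plus normalized hysteretic-energy term."""
    return delta_m / delta_u + beta * hysteretic_energy / (q_y * delta_u)

# Hypothetical member response quantities (units must be consistent, e.g. kN, m, kN*m)
di = park_ang_damage_index(delta_m=0.035,          # maximum displacement reached
                           delta_u=0.060,          # ultimate displacement under monotonic loading
                           hysteretic_energy=12.0, # dissipated hysteretic energy
                           q_y=150.0,              # yield strength
                           beta=0.05)              # strength-deterioration parameter
print(f"Park-Ang damage index = {di:.2f}")         # values near or above 1.0 indicate severe damage
```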

Keywords: Floating column, Infill Wall, Park-Ang Damage Index, Damage State.

215 Changing Geomorphosites in a Changing Lake: How Environmental Changes in Urmia Lake Have Been Driving Vanishing or Creating of Geomorphosites

Authors: D. Mokhtari

Abstract:

Any variation in the environmental characteristics of geomorphosites can destabilise their geotouristic values anywhere on the planet. Urmia Lake, with an area of approximately 5,500 km² and a catchment area of 51,876 km², has, for various reasons over time and especially in the last fifty years, seen a sharp decline, and its area has decreased by about 93% in the two recent decades. These variations are not only driving significant changes in the morphology and ecology of the present lake landscape, but are at the same time shaping newly formed morphologies, which have caused some valuable geomorphosites to vanish or to shrink into smaller geomorphosites that still carry significant scientific and cultural value. This paper analyses and discusses features and evolution in several representative coastal and island geomorphosites. For this purpose, a total of 23 geomorphosites were studied in two data series (1963 and 2015) and the respective data were compared and analysed. The results showed that the total loss in geomorphosite area over half a century amounted to more than 90% of the valuable geomorphosites. Moreover, the comparison between the mean yearly value of coastal area lost over the entire period and the yearly average calculated for the shorter period (1998-2014) clearly indicates a pattern of acceleration. This acceleration in the rate of reduction in lake area was seen in most of the southern half of the lake. In the region, the general fall in water level is not only causing the loss of a significant water resource, with major impacts on regional ecosystems, but is also driving the most marked recent (last century) changes in the geotouristic landscapes. In fact, the disappearance of geomorphosites means the loss of a tourism phenomenon. In this context, attention must be paid to the question of conservation. The action needed to safeguard geomorphosites includes: 1) preventive action, 2) corrective action, and 3) sharing knowledge.

Keywords: Changing lake, environmental changes, geomorphosite, northwest of Iran, Urmia lake.

214 Pushover Analysis of Reinforced Concrete Buildings Using Full Jacket Technics: A Case Study on an Existing Old Building in Madinah

Authors: Tarek M. Alguhane, Ayman H. Khalil, M. N. Fayed, Ayman M. Ismail

Abstract:

Retrofitting existing buildings to resist seismic loads is very important to avoid loss of life or financial disaster. The aim of the retrofitting process is to increase the total strength of the structure by increasing its stiffness or ductility ratio. In addition, the response modification factor (R) has to satisfy the code requirements for the suggested retrofitting types. In this study, two types of jackets are used, i.e. full reinforced concrete jackets and surrounding steel plate jackets. The study is carried out on an existing building in Madinah by performing static pushover analysis before and after retrofitting the columns. The selected model building represents nearly all typical structures built about 30 years ago in Madinah City, KSA. The comparison of the results indicates a good enhancement of the structure with respect to the applied seismic forces. The response modification factor of the RC building is also evaluated for the studied cases before and after retrofitting, and the design of all vertical elements (columns) is given. The results show that the design of the retrofitted columns satisfies the code's design stress requirements. However, for some retrofitting types, the ductility requirements represented by the response modification factor do not satisfy the KSA design code (SBC-301).

Keywords: Concrete jackets, steel jackets, RC buildings, pushover analysis, non-linear analysis.

213 Antibody Reactivity of Synthetic Peptides Belonging to Proteins Encoded by Genes Located in Mycobacterium tuberculosis-Specific Genomic Regions of Differences

Authors: Abu Salim Mustafa

Abstract:

Comparisons of mycobacterial genomes have identified several Mycobacterium tuberculosis-specific genomic regions that are absent in other mycobacteria and are known as regions of differences. Due to their M. tuberculosis specificity, the peptides encoded by these regions could be useful in the specific diagnosis of tuberculosis. To explore this possibility, overlapping synthetic peptides corresponding to 39 proteins predicted to be encoded by genes present in regions of differences were tested for antibody reactivity with sera from tuberculosis patients and healthy subjects. The results identified four immunodominant peptides corresponding to four different proteins, with three of the peptides showing significantly stronger antibody reactivity and rates of positivity with sera from tuberculosis patients than from healthy subjects. The fourth peptide was recognized equally well by the sera of tuberculosis patients and healthy subjects. Prediction of antibody epitopes by bioinformatics analyses using the ABCpred server identified multiple linear epitopes in each peptide. Furthermore, peptide sequence analysis for sequence identity using BLAST suggested M. tuberculosis specificity for the three peptides that reacted preferentially with sera from tuberculosis patients, whereas the peptide with equal reactivity with sera of TB patients and healthy subjects showed significant identity with sequences present in non-tuberculous mycobacteria. The three identified M. tuberculosis-specific immunodominant peptides may be useful in the serological diagnosis of tuberculosis.

Keywords: Genomic regions of differences, Mycobacterium tuberculosis, peptides, serodiagnosis.

212 Thermo-Mechanical Approach to Evaluate Softening Behavior of Polystyrene: Validation and Modeling

Authors: Salah Al-Enezi, Rashed Al-Zufairi, Naseer Ahmad

Abstract:

A thermo-mechanical technique was developed to determine the softening point temperature/glass transition temperature (Tg) of polystyrene exposed to high pressures. The design utilizes the ability of carbon dioxide to lower the glass transition temperature of polymers and act as a plasticizer. In this apparatus, the sorption of carbon dioxide to induce softening of polymers as a function of temperature/pressure is performed and the extent of softening is measured in three-point flexural bending mode. The polymer strip was placed in the cell in contact with a linear variable differential transformer (LVDT). CO2 was pumped into the cell from a supply cylinder to reach high pressure. The results clearly showed full softening of the samples, accompanied by a large deformation of the polymer strip. The deflection curves are initially relatively flat and then undergo a dramatic increase as the temperature is elevated. It was found that increasing the pressure of CO2 shifts the temperature curves to lower temperatures by about 45 K over the pressure range of 0-120 bar. The obtained experimental Tg values were validated against values reported in the literature. Finally, it is concluded that the deflection model fits the generated experimental results consistently and describes in more detail how the central deflection of a thin polymer strip is affected by CO2 diffusion into the polymer samples.

Keywords: Softening, high pressure, polystyrene, CO2 diffusion.

211 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis

Authors: Petr Gurný

Abstract:

One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the possibility of determining a financial institution's PD with credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks. These models are then compared and verified on a control sample in order to choose the best one. The second part of the paper applies the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. The values of particular indicators are therefore sampled randomly and the PD distribution is estimated, while it is assumed that the indicators are distributed according to a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all banks are relatively healthy, there is still a high chance that "a financial crisis" will occur, at least in terms of probability. This is indicated by the estimation of various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
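
As a minimal illustration of the credit-scoring idea (using logistic regression, one of the three model families mentioned), the Python sketch below fits a logit model on synthetic bank indicators and scores a new institution; the features, sample size and coefficients are hypothetical and not drawn from the paper's dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))                        # stand-in financial ratios for 300 banks
logit = X @ np.array([1.2, -0.8, 0.5, 0.3]) - 1.0    # hypothetical "true" score
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))    # 1 = default, 0 = non-default

model = LogisticRegression().fit(X, y)               # the logit model from the abstract's trio
new_bank = rng.normal(size=(1, 4))                   # indicators of a bank to be scored
pd_estimate = model.predict_proba(new_bank)[0, 1]    # estimated probability of default
print(f"estimated PD: {pd_estimate:.2%}")
```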

Keywords: Credit-scoring Models, Multidimensional Subordinated Lévy Model, Probability of Default.

210 CFD Modeling of Air Stream Pressure Drop inside Combustion Air Duct of Coal-Fired Power Plant with and without Airfoil

Authors: Pakawhat Khumkhreung, Yottana Khunatorn

Abstract:

The flow pattern inside the rectangular intake air duct of a 300 MW lignite coal-fired power plant is investigated in order to analyze and reduce the overall inlet system pressure drop. The system consists of a 45-degree inlet elbow, the flow instrument, a 90-degree mitered elbow and fans, respectively. The energy loss in each section can be determined by Bernoulli's equation and the ASHRAE standard tables. Hence, computational fluid dynamics (CFD) is used in this study, based on the Navier-Stokes equations and the standard k-epsilon turbulence model. The inlet boundary condition is a mass flow rate of 175 kg/s through the duct of 11 m² cross-sectional area. At this inlet air flow rate, the Reynolds number of the airstream is 2.7×10⁶ (based on the hydraulic duct diameter), so the flow is turbulent. The numerical results are validated with real operation data. It is found that the numerical results agree well with the operating data, and that the dominant loss occurs at the flow rate measurement device. Normally, the air flow rate is measured by an airfoil, which produces a high pressure drop inside the duct. To overcome this problem, the airfoil is planned to be replaced with another type of measuring instrument, such as an average pitot tube, which generates a low airstream pressure drop. The numerical result for the average pitot tube case shows that the pressure drop inside the inlet airstream duct decreases significantly, and the energy consumption of the inlet air system is reduced as well.
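
The quoted Reynolds number can be approximately reproduced from the stated mass flow and duct area once air properties and a duct shape are assumed; the Python sketch below does this with hypothetical air density and viscosity and a roughly square cross-section, and also computes the dynamic pressure that the ASHRAE-style loss coefficients multiply.

```python
m_dot = 175.0        # kg/s, inlet mass flow rate (from the abstract)
area = 11.0          # m^2, duct cross-sectional area (from the abstract)
rho = 1.1            # kg/m^3, assumed air density
mu = 1.8e-5          # Pa*s, assumed dynamic viscosity

velocity = m_dot / (rho * area)                 # mean duct velocity
side = area ** 0.5                              # assuming a roughly square cross-section
d_h = 4.0 * area / (4.0 * side)                 # hydraulic diameter 4A/P (equals the side here)
reynolds = rho * velocity * d_h / mu            # comes out on the order of 10^6, i.e. turbulent
q_dyn = 0.5 * rho * velocity ** 2               # dynamic pressure used with loss coefficients
print(f"V = {velocity:.1f} m/s, Re = {reynolds:.2e}, q = {q_dyn:.1f} Pa")
```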

Keywords: Airfoil, average pitot tube, combustion air, CFD, pressure drop, rectangular duct.

209 A Shape Optimization Method in Viscous Flow Using Acoustic Velocity and Four-step Explicit Scheme

Authors: Yoichi Hikino, Mutsuto Kawahara

Abstract:

The purpose of this study is to derive optimal shapes of a body located in viscous flow by the finite element method, using the acoustic velocity and the four-step explicit scheme. The formulation is based on optimal control theory, in which a performance function of the fluid force is introduced. The performance function should be minimized while satisfying the state equation. This problem can be transformed into a minimization problem without constraint conditions by using the adjoint equation, with adjoint variables corresponding to the state equation. The performance function is defined by the drag and lift forces acting on the body. The weighted gradient method is applied as the minimization technique, the Galerkin finite element method is used for the spatial discretization, and the four-step explicit scheme is used for the temporal discretization to solve the state equation and the adjoint equation. As the interpolation, the orthogonal basis bubble function is employed for velocity and the linear function for pressure. When the orthogonal basis bubble function is used, the mass matrix can be diagonalized without any artificial centralization. The shape optimization is performed by the presented method.

Keywords: Shape Optimization, Optimal Control Theory, Finite Element Method, Weighted Gradient Method, Fluid Force, Orthogonal Basis Bubble Function, Four-step Explicit Scheme, Acoustic Velocity.

208 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System

Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi

Abstract:

Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate pilot-assisted channel estimation for DC-biased optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to enhance the bit-error rate (BER) of PACE-DCO-OFDM. Results show that the DCO-OFDM system based on the PACE scheme achieves better BER performance than the conventional system without pilot-assisted channel estimation. Simulation results also show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm estimates the channel more accurately and achieves better BER performance than the LS-based PACE-DCO-OFDM and the traditional system without PACE. For the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ with LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
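
To show the difference between the two pilot-based estimators in the simplest possible setting, the Python sketch below compares an LS estimate with a scalar-Wiener estimate (the LMMSE solution under an assumed uncorrelated, equal-power channel) on synthetic pilots; it is a toy model, not the MATLAB/Simulink system of the paper, and the pilot count, channel statistics and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pilots = 64
x_p = np.ones(n_pilots, dtype=complex)                    # known unit-amplitude pilot symbols
h = rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)   # true channel (power = 2)
noise_var = 0.1
noise = np.sqrt(noise_var / 2) * (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots))
y_p = x_p * h + noise                                     # received pilot subcarriers

h_ls = y_p / x_p                                          # LS: divide out the known pilots
sigma_h2 = 2.0                                            # assumed channel power per subcarrier
h_lmmse = (sigma_h2 / (sigma_h2 + noise_var)) * h_ls      # scalar Wiener shrinkage of the LS estimate

mse = lambda est: np.mean(np.abs(est - h) ** 2)
print(f"LS MSE = {mse(h_ls):.4f}, LMMSE MSE = {mse(h_lmmse):.4f}")
```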

Keywords: Channel estimation, OFDM, pilot-assist, VLC.

207 Rice Area Determination Using Landsat-Based Indices and Land Surface Temperature Values

Authors: Burçin Saltık, Levent Genç

Abstract:

In this study, it was aimed to determine a procedure for the identification of rice cultivation areas within the Thrace and Marmara regions of Turkey using remote sensing and GIS. Landsat 8 (OLI-TIRS) imagery acquired in the 2013 production season with Path/Row 181/32 was used. Four different seasonal images were generated utilizing the original bands and different transformation techniques. All images were classified individually using supervised classification techniques, and Land Use Land Cover (LULC) maps were generated with 8 classes. The area (ha, %) of each class was calculated. In addition, district-based rice distribution maps were developed and their results were compared with the actual rice cultivation area records of the Turkish Statistical Institute (TurkSTAT; TSI). Accuracy assessments were conducted, and the most accurate map was selected based on the accuracy assessment and coherency with the TSI results. Additionally, rice areas on slopes over 4° were considered mis-classified pixels and were eliminated using a slope map and GIS tools. Finally, randomized rice zones were selected to obtain the maximum-minimum value ranges of the NDVI, LSWI, and LST images for each date (May, June, July, August and September images separately), to test whether they can be used for rice area determination via the raster calculator tool of ArcGIS. The most accurate classification for rice determination was obtained from the seasonal LSWI LULC map; considering the TSI data and the accuracy assessment results, mis-classified pixels were eliminated from this map. According to the results, 83,151.5 ha of rice area exists within the study area; however, this result is higher than the TSI records by 12,702.3 ha. The use of the maximum-minimum ranges of rice-area NDVI, LSWI, and LST was tested in Meric district. It was seen that using the value ranges obtained from the July imagery gave the closest results to the TSI records, with a difference of only 206.4 ha. This difference is normal due to the relatively low resolution of the images. Thus, the use of images with higher spectral, spatial, temporal and radiometric resolutions may provide more reliable results.
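
The spectral indices named above are simple band ratios; as a reminder of how they are computed from Landsat 8 OLI surface reflectance (NIR = band 5, red = band 4, SWIR1 = band 6), the Python sketch below evaluates NDVI and LSWI on stand-in arrays and applies hypothetical thresholds of the kind a raster-calculator rule would use; the threshold values are illustrative, not the ranges derived in the study.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); Landsat 8 OLI: NIR = band 5, Red = band 4."""
    return (nir - red) / (nir + red + 1e-10)

def lswi(nir, swir1):
    """LSWI = (NIR - SWIR1) / (NIR + SWIR1); Landsat 8 OLI: SWIR1 = band 6."""
    return (nir - swir1) / (nir + swir1 + 1e-10)

# Stand-in surface reflectance arrays; a real workflow would read the OLI bands from GeoTIFFs
nir = np.array([[0.45, 0.50], [0.30, 0.28]])
red = np.array([[0.08, 0.07], [0.20, 0.22]])
swir1 = np.array([[0.12, 0.10], [0.25, 0.27]])

# Hypothetical raster-calculator style rule combining index ranges
rice_mask = (ndvi(nir, red) > 0.4) & (lswi(nir, swir1) > 0.2)
print(rice_mask)
```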

Keywords: Landsat 8 (OLI-TIRS), LULC, spectral indices, rice.

206 The Effects of Food Deprivation on Hematological Indices and Blood Indicators of Liver Function in Oxyleotris marmorata

Authors: N. Sridee, S. Boonanuntanasarn

Abstract:

Oxyleotris marmorata is considered an undomesticated fish, and its culture occasionally faces the problem of food deprivation. The present study aims to evaluate alterations in hematological indices and blood chemistry associated with liver function during 4 weeks of fasting. Non-linear relationships between fasting days and hematological parameters were demonstrated (red blood cell number: y = -0.002x² + 0.041x + 1.249, R² = 0.915, P < 0.05; hemoglobin: y = -0.002x² + 0.030x + 3.470, R² = 0.460, P > 0.05; mean corpuscular volume: y = -0.180x² + 2.183x + 149.61, R² = 0.732, P > 0.05; mean corpuscular hemoglobin: y = -0.041x² + 0.862x + 29.864, R² = 0.818, P > 0.05; mean corpuscular hemoglobin concentration: y = -0.044x² + 0.711x + 21.580, R² = 0.730, P > 0.05). A significant change in hematocrit (Ht) during the fasting period was observed: Ht increased sharply in the first week of fasting, and higher Ht was also detected during weeks 2-4. A significant reduction of the hepatosomatic index was observed (y = -0.007x² - 0.096x + 1.414, R² = 0.968, P < 0.05). Moreover, alterations in enzymes associated with liver function were evaluated during the 4 weeks of fasting (alkaline phosphatase: y = -0.026x² - 0.935x + 12.188, R² = 0.737, P > 0.05; serum glutamic oxaloacetic transaminase: y = 0.005x³ - 0.201x² + 1.297x + 33.256, R² = 1, P < 0.01; serum glutamic pyruvic transaminase: y = 0.007x³ - 0.274x² + 2.277x + 25.257, R² = 0.807, P > 0.05). Taken together, prolonged fasting has deleterious effects on hematological indices, liver mass and enzymes associated with liver function, with the marked adverse effects occurring after the first week of fasting.
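
Each of the relationships quoted above is a low-order polynomial fitted to values observed over the fasting period; the Python sketch below reproduces that kind of fit on hypothetical red-blood-cell counts and reports the coefficients and R² in the same form (the data points are illustrative, not the study's measurements).

```python
import numpy as np

fasting_days = np.array([0, 7, 14, 21, 28])                # weekly sampling over 4 weeks of fasting
rbc = np.array([1.25, 1.42, 1.38, 1.10, 0.78])             # hypothetical red blood cell counts

coeffs = np.polyfit(fasting_days, rbc, deg=2)              # quadratic fit, y = ax^2 + bx + c
fitted = np.polyval(coeffs, fasting_days)
ss_res = np.sum((rbc - fitted) ** 2)
ss_tot = np.sum((rbc - rbc.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"y = {coeffs[0]:.4f}x^2 + {coeffs[1]:.4f}x + {coeffs[2]:.4f},  R^2 = {r_squared:.3f}")
```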

Keywords: food deprivation, Oxyleotris marmorata, hematology, alkaline phosphatase, SGOT, SGPT

205 Analysis of Temperature Change under Global Warming Impact using Empirical Mode Decomposition

Authors: Md. Khademul Islam Molla, Akimasa Sumi, M. Sayedur Rahman

Abstract:

The empirical mode decomposition (EMD) represents any time series as a finite set of basis functions. The bases are termed intrinsic mode functions (IMFs), which are mutually orthogonal and contain a minimum amount of cross-information. The EMD successively extracts the IMFs with the highest local frequencies in a recursive way, which effectively yields a set of low-pass filters based entirely on the properties exhibited by the data. In this paper, EMD is applied to explore the properties of multi-year air temperature and to observe its effects on climate change under global warming. This method decomposes the original time series into intrinsic time scales. It is capable of analyzing nonlinear, non-stationary climatic time series that cause problems for many linear statistical methods and their users. The analysis results show that the modes of EMD present seasonal variability. Most of the IMFs have a normal distribution, and the energy density distribution of the IMFs satisfies a Chi-square distribution. The IMFs are more effective in isolating physical processes of various time scales and are also statistically significant. The analysis results also show that the EMD method does a good job of identifying many characteristics of interannual climate. The results suggest that climate fluctuations of every single element, such as temperature, are the result of variations in the global atmospheric circulation.
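
Once IMFs have been extracted, the instantaneous frequency and Hilbert spectrum named in the keywords follow from the analytic signal of each IMF; the Python sketch below shows that step on a synthetic stand-in IMF using SciPy's Hilbert transform (the EMD sifting itself is not shown, and the sampling rate and waveform are hypothetical).

```python
import numpy as np
from scipy.signal import hilbert

# A stand-in IMF; in practice this would come from an EMD sifting routine applied to the
# temperature series. Sampling rate and waveform are hypothetical.
fs = 365.0                                         # samples per year (daily data)
t = np.arange(0.0, 10.0, 1.0 / fs)                 # ten years
imf = (1.0 + 0.3 * np.sin(2 * np.pi * 0.1 * t)) * np.sin(2 * np.pi * 1.0 * t)

analytic = hilbert(imf)                            # analytic signal of the IMF
amplitude = np.abs(analytic)                       # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))              # unwrapped instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)      # instantaneous frequency [cycles/year]
print(f"mean instantaneous frequency ~ {inst_freq.mean():.2f} cycles/year")
```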

Keywords: Empirical mode decomposition, instantaneous frequency, Hilbert spectrum, Chi-square distribution, anthropogenic impact.

204 Sea Level Characteristics Referenced to Specific Geodetic Datum in Alexandria, Egypt

Authors: Ahmed M. Khedr, Saad M. Abdelrahman, Kareem M. Tonbol

Abstract:

Two geo-referenced sea level datasets (September 2008 - November 2010 and April 2012 - January 2014) were recorded at Alexandria Western Harbour (AWH). An accurate re-definition of the tidal datum, referred to the latest International Terrestrial Reference Frame (ITRF-2014), was discussed and updated to improve our understanding of the old predefined tidal datum at Alexandria. Tidal and non-tidal components of sea level were separated with the use of the Delft-3D hydrodynamic model tide suite (Delft-3D, 2015). Tidal characteristics at AWH were investigated, and harmonic analysis yielded the 34 most significant constituents with their amplitudes and phases. The tide was identified as semi-diurnal, as indicated by form factors of 0.24 and 0.25 for the two datasets, respectively. Principal tidal datums related to major tidal phenomena were recalculated with reference to a meaningful geodetic height datum. The portion of residual energy (surge) out of the total sea level energy was computed for each dataset and found to be 77% and 72%, respectively. Power spectral density (PSD) showed accurate resolvability in the high band (1-6 cycles/day) for the nominated independent constituents, except for some neighbouring constituents that are too close in frequency. Wind and atmospheric pressure data covering the sea level recording periods were analysed and cross-correlated with the surge signals, and a moderate association between surge and wind and atmospheric pressure was obtained. In addition, the long-term sea level rise trend at AWH was computed and showed good agreement with earlier estimated rates.
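
The "Form Factor" cited above is the standard amplitude ratio F = (K1 + O1) / (M2 + S2) built from the harmonic constituents; the Python sketch below computes it and applies the conventional classification thresholds, with the constituent amplitudes being hypothetical stand-ins rather than the AWH values.

```python
# Amplitudes (m) of the four principal constituents from a harmonic analysis (hypothetical values)
amplitudes = {"M2": 0.070, "S2": 0.040, "K1": 0.015, "O1": 0.012}

form_factor = (amplitudes["K1"] + amplitudes["O1"]) / (amplitudes["M2"] + amplitudes["S2"])
if form_factor < 0.25:
    regime = "semi-diurnal"
elif form_factor < 1.5:
    regime = "mixed, mainly semi-diurnal"
elif form_factor < 3.0:
    regime = "mixed, mainly diurnal"
else:
    regime = "diurnal"
print(f"F = {form_factor:.2f} -> {regime}")
```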

Keywords: Alexandria, Delft-3D, Egypt, geodetic reference, harmonic analysis, sea level.

203 Comparison of Different Hydrograph Routing Techniques in XPSTORM Modelling Software: A Case Study

Authors: Fatema Akram, Mohammad Golam Rasul, Mohammad Masud Kamal Khan, Md. Sharif Imam Ibne Amir

Abstract:

A variety of routing techniques are available to develop surface runoff hydrographs from rainfall. The selection of the runoff routing method is vital, as it is directly related to the type of watershed and the required degree of accuracy. Different modelling software packages are available to explore the rainfall-runoff process in urban areas. XPSTORM, a link-node based, integrated stormwater modelling software, has been used in this study to develop the surface runoff hydrograph for a golf course area located in Rockhampton in Central Queensland, Australia. Four commonly used methods, namely SWMM runoff, kinematic wave, Laurenson, and time-area, are employed to generate runoff hydrographs for the design storm of this study area. In the runoff mode of XPSTORM, the rainfall, infiltration, evaporation and depression storage for the subcatchments were simulated, and the runoff from the subcatchment to the collection node was calculated. The simulation results are presented, discussed and compared. The total surface runoff generated by the SWMM runoff, kinematic wave and time-area methods is found to be reasonably close, which indicates that any of these methods can be used for developing the runoff hydrograph of the study area. The Laurenson method produces a comparatively smaller amount of surface runoff; however, it produces the highest peak surface runoff of all the methods, which may make it suitable for hilly regions. Although the Laurenson hydrograph technique is a widely accepted surface runoff routing technique in Queensland (Australia), extensive investigation with detailed topographic and hydrologic data is recommended in order to assess its suitability for use in the case study area.

Keywords: ARI, design storm, IFD, rainfall temporal pattern, routing techniques, surface runoff, XPSTORM.

202 Bio-Estimation of Selected Heavy Metals in Shellfish and Their Surrounding Environmental Media

Authors: Ebeed A. Saleh, Kadry M. Sadek, Safaa H. Ghorbal

Abstract:

Because determining the pollution status of fresh resources in Egyptian territorial waters is very important for public health, this study was carried out to reveal the levels of heavy metals in shellfish and their environment and their relation to the highly developed industrial activities in those areas. A total of 100 shellfish samples from the Rosetta, Edku, El-Maadiya, Abo-Kir and El-Max coasts [10 crustaceans (shrimp) and 10 mollusks (oysters) from each coast] were randomly collected. Additionally, 10 samples of both water and sediment were collected from each coast. Each collected sample was analyzed for cadmium, chromium, copper, lead and zinc residues using a Perkin Elmer atomic absorption spectrophotometer (AAS). The results showed that the levels of heavy metals were highest in the water and sediment from Abo-Kir and decreased successively for the Rosetta, Edku, El-Maadiya, and El-Max coasts; the concentrations of heavy metals in shellfish, except copper and zinc, exhibited the same pattern. Regarding the concentrations of heavy metals in shellfish tissue, zinc was highest and the concentrations decreased successively for copper, lead, chromium and cadmium for all coasts except Abo-Kir, where the chromium level was highest and the other metals decreased successively for zinc, copper, lead and cadmium. In Rosetta, chromium was higher only in the mollusks, while the level of this metal was lower in the crustaceans; this trend was observed at the Edku, El-Maadiya and El-Max coasts as well. Herein, we discuss the importance of such contamination for public health and the sources of shellfish contamination with heavy metals. We suggest measures to minimize and prevent these pollutants in the aquatic environment and, furthermore, ways to protect humans from excessive intake.

Keywords: Atomic absorption, heavy metals, sediment, shellfish, water.

201 Obsession of Time and the New Musical Ontologies: The Concert for Saxophone, Daniel Kientzy and Orchestra by Myriam Marbe

Authors: Luminiţa Duţică

Abstract:

For the composer Myriam Marbe, musical time and memory represent two complementary phenomena with a decisive impact on the establishment of new musical ontologies. Summarizing the most important achievements of contemporary composition techniques, her vision of the microform presented in the Concert for Daniel Kientzy, saxophone and orchestra transcends linear and unidirectional time in favour of a flexible, multivectorial discourse with spiral developments, where the sound substance is auto(re)generated by analogy with the fundamental processes of memory. The conceptual model is of an archetypal essence, the composer being concerned with identifying the mechanisms of the creative process, especially those specific to collective creation (of oral tradition). Hence the spontaneity of expression, the improvisational tint, free rhythm, micro-interval intonation, and a coloristic-timbral universe dominated by multiphonics and unique sound effects; hence the atmosphere of ritual, purged, however, of its primary connotations and reprojected into a wonderful spectacular space. The Concert is a work of artistic maturity and commands respect, among other things, for the timbral diversity of the three species of saxophone required by the composer (baritone, sopranino and alto); in Part III, Daniel Kientzy performs on two saxophones concomitantly. Myriam Marbe's score contains deeply spiritualized music, full of archetypal symbols, music whose drama suggests a truly cinematographic movement.

Keywords: Archetype, chronogenesis, concert, multiphonics.

200 Optimization the Conditions of Electrophoretic Deposition Fabrication of Graphene-Based Electrode to Consider Applications in Electro-Optical Sensors

Authors: Sepehr Lajevardi Esfahani, Shohre Rouhani, Zahra Ranjbar

Abstract:

Graphene has gained much attention owing to its unique optical and electrical properties. Charge carriers in graphene sheets (GS) obey a linear dispersion relation near the Fermi energy and behave as massless Dirac fermions, resulting in unusual attributes such as the quantum Hall effect and the ambipolar electric field effect. Graphene also exhibits nondispersive transport characteristics with an extremely high electron mobility (15,000 cm²/(V·s)) at room temperature. Recently, considerable progress has been achieved in the fabrication of single- or multilayer GS for functional device applications in the field of optoelectronics, such as field-effect transistors, ultrasensitive sensors and organic photovoltaic cells. In addition to device applications, graphene can also serve as a reinforcement to enhance the mechanical, thermal, or electrical properties of composite materials. Electrophoretic deposition (EPD) is an attractive method for the development of various coatings and films, and it is readily applied to any powdered solid that forms a stable suspension. The deposition parameters were controlled to obtain various thicknesses. In this study, the graphene electrodeposition conditions were optimized. The results were obtained from SEM, ohmic resistance measurement and AFM characterization tests. The minimum sheet resistance of the electrodeposited reduced graphene oxide layers is achieved at a deposition voltage of 2 V applied for 10 s, followed by annealing at 200 °C for 1 minute.

Keywords: Electrophoretic deposition, graphene oxide, electrical conductivity, electro-optical devices.

199 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena of mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely the regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate carried out in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, number-average molecular weight and weight-average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
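
The adaptive-sampling idea described above (take more samples where the response varies fastest) can be sketched in a few lines; the Python snippet below is an illustrative weighting scheme on a synthetic conversion-like curve with a sharp gel-effect-style transition, not the paper's actual algorithm, and the curve, region and sample count are hypothetical.

```python
import numpy as np

def adaptive_sample(x_grid, y_grid, n_samples, seed=0):
    """Pick more samples where the response varies fastest (a sketch of the idea only)."""
    slopes = np.abs(np.gradient(y_grid, x_grid))
    weights = slopes / slopes.sum()          # higher local variation -> higher pick probability
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(x_grid), size=n_samples, replace=False, p=weights)
    return np.sort(idx)

# Stand-in conversion-like curve with a sharp transition around x = 0.7
x = np.linspace(0.0, 1.0, 500)
y = 1.0 / (1.0 + np.exp(-40.0 * (x - 0.7)))
idx = adaptive_sample(x, y, n_samples=50)
print(x[idx])    # most selected points cluster near x = 0.7, where the curve changes fastest
```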

Keywords: Adaptive sampling, batch bulk methyl methacrylate polymerization, large margin nearest neighbor regression, machine learning.

198 Qualitative Parametric Comparison of Load Balancing Algorithms in Parallel and Distributed Computing Environment

Authors: Amit Chhabra, Gurvinder Singh, Sandeep Singh Waraich, Bhavneet Sidhu, Gaurav Kumar

Abstract:

Decreases in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. One of the biggest issues in such systems is the development of effective techniques/algorithms for distributing the processes/load of a parallel program over multiple hosts to achieve goals such as minimizing execution time, minimizing communication delays, maximizing resource utilization and maximizing throughput. Substantive research using queuing analysis, and assuming job arrivals follow a Poisson pattern, has shown that in a multi-host system the probability of one host being idle while another host has multiple jobs queued up can be very high. Such imbalances in system load suggest that performance can be improved by either transferring jobs from the currently heavily loaded hosts to the lightly loaded ones or by distributing the load evenly/fairly among the hosts. The algorithms known as load balancing algorithms help to achieve the above goals. These algorithms fall into two basic categories - static and dynamic. Whereas static load balancing (SLB) algorithms make decisions regarding the assignment of tasks to processors at compile time, based on the average estimated values of process execution times and communication delays, dynamic load balancing (DLB) algorithms are adaptive to changing situations and make decisions at run time. The objective of this paper is to identify qualitative parameters for the comparison of the above algorithms. In future, this work can be extended to develop an experimental environment to study these load balancing algorithms quantitatively, based on the comparative parameters.
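
To make the static/dynamic distinction concrete, the Python sketch below implements the simplest dynamic policy implied by the abstract - send each arriving job to the currently least-loaded host - using a heap of host loads; the job costs and host count are hypothetical, and real DLB algorithms additionally account for migration and communication costs.

```python
import heapq

def assign_jobs(job_costs, n_hosts):
    """Greedy dynamic balancing sketch: each arriving job goes to the currently least-loaded host."""
    heap = [(0.0, host) for host in range(n_hosts)]     # (current load, host id)
    heapq.heapify(heap)
    assignment = {host: [] for host in range(n_hosts)}
    for job, cost in enumerate(job_costs):
        load, host = heapq.heappop(heap)                # least-loaded host right now
        assignment[host].append(job)
        heapq.heappush(heap, (load + cost, host))
    return assignment

jobs = [5.0, 3.0, 8.0, 2.0, 7.0, 4.0, 6.0]              # estimated execution times (hypothetical)
print(assign_jobs(jobs, n_hosts=3))
```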

Keywords: SLB, DLB, Host, Algorithm and Load.

197 Indoor Air Pollution of the Flexographic Printing Environment

Authors: Jelena S. Kiurski, Vesna S. Kecić, Snežana M. Aksentijević

Abstract:

The identification and evaluation of organic and inorganic pollutants were performed in a flexographic facility in Novi Sad, Serbia. Air samples were collected and analyzed in situ during 4-hour working periods at five sampling points, using a mobile gas chromatograph and an ozonometer, during the printing of collagen casing. The experimental results showed that the concentrations of isopropyl alcohol, acetone, total volatile organic compounds and ozone varied during the sampling times. The highest average concentrations of 94.80 ppm and 102.57 ppm were reached 200 minutes after the start of production for isopropyl alcohol and total volatile organic compounds, respectively. The mutual dependences between the target hazardous substances and the microclimate parameters were confirmed using a multiple linear regression model in the software package STATISTICA 10. The multiple coefficients of determination obtained for ozone and acetone (0.507 and 0.589) with the microclimate parameters indicated a moderate correlation between the observed variables. However, a strong positive correlation was obtained for isopropyl alcohol and total volatile organic compounds (0.760 and 0.852) with the microclimate parameters. Values of the F parameter higher than F-critical for all examined dependences indicated a statistically significant relationship between the concentration levels of the target pollutants and the microclimate parameters. Given that the microclimate parameters significantly affect the emission of the investigated gases, the application of eco-friendly materials in the production process is a necessity.
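
For readers unfamiliar with the reported statistics, the Python sketch below fits a multiple linear regression of a pollutant concentration on two microclimate parameters and reports R², the F statistic and its p-value; the data are synthetic stand-ins generated for illustration, not measurements from the facility or output from STATISTICA 10.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 48                                            # stand-in: readings over the 4-hour sampling window
temperature = rng.normal(25.0, 2.0, n)            # microclimate parameters (synthetic)
humidity = rng.normal(45.0, 5.0, n)
ipa = 30.0 + 2.0 * temperature + 0.4 * humidity + rng.normal(0.0, 3.0, n)   # isopropyl alcohol [ppm]

X = sm.add_constant(np.column_stack([temperature, humidity]))
result = sm.OLS(ipa, X).fit()                     # multiple linear regression
print(f"R^2 = {result.rsquared:.3f}, F = {result.fvalue:.1f}, p(F) = {result.f_pvalue:.2e}")
```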

Keywords: Flexographic printing, indoor air, multiple regression analysis, pollution emission.

196 Study of Storms on the Javits Center Green Roof

Authors: A. Cho, H. Sanyal, J. Cataldo

Abstract:

A quantitative analysis of the different variables on both the South and North green roofs of the Jacob K. Javits Convention Center was carried out to find mathematical relationships between net radiation and evapotranspiration (ET), average outside temperature, and the lysimeter weight. Groups of datasets were analyzed, and the relationships were plotted on linear and semi-log graphs to find consistent relationships. Antecedent conditions for each rainstorm were also recorded and plotted against the volumetric water difference within the lysimeter. The first relation was the inverse parabolic relationship between the lysimeter weight and the net radiation and ET. The peaks and valleys of the lysimeter weight corresponded to valleys and peaks in the net radiation and ET respectively, with the 8/22/15 and 1/22/16 datasets showing this trend. The U-shaped and inverse U-shaped plots of the two variables coincided, indicating an inverse relationship between them. Cross-variable relationships were examined through graphs with lysimeter weight as the dependent variable on the y-axis; 10 out of 16 plots of lysimeter weight vs. outside temperature had R² values > 0.9. Antecedent conditions were also recorded for rainstorms, categorized by the amount of precipitation accumulating during the storm. Plotted against the change in the volumetric water weight difference within the lysimeter, a logarithmic regression with large R² values was found. The datasets were compared using the Mann-Whitney U-test at a 5% significance level to see whether they were statistically different; the resulting U test statistics indicated that the compared datasets were not statistically different.
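
The Mann-Whitney comparison mentioned at the end is a standard non-parametric two-sample test; the Python sketch below runs it with SciPy on synthetic stand-in ET series for the two roofs and reports the decision at the 5% level used in the study (the data are illustrative, not the Javits Center measurements).

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)
south_roof = rng.normal(3.2, 0.6, 40)             # stand-in daily ET values [mm/day]
north_roof = rng.normal(3.4, 0.6, 40)

stat, p_value = mannwhitneyu(south_roof, north_roof, alternative="two-sided")
verdict = "datasets differ" if p_value < 0.05 else "no significant difference"
print(f"U = {stat:.0f}, p = {p_value:.3f}: {verdict} at the 5% level")
```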

Keywords: Green roof, green infrastructure, Javits Center, evapotranspiration, net radiation, lysimeter.

195 A Retrospective Drug Utilization Study of Antiplatelet Drugs in Patients with Ischemic Heart Disease

Authors: K. Jyothi, T. S. Mohamed Saleem, L. Vineela, C. Gopinath, K. B. Yadavender Reddy

Abstract:

Objective: Acute coronary syndrome is a clinical condition encompassing ST-segment elevation myocardial infarction, non-ST-segment elevation myocardial infarction and unstable angina, and is characterized by ruptured coronary plaque, stress and myocardial injury. Angina pectoris is a pressure-like pain in the chest that is induced by exertion or stress and relieved within minutes after cessation of effort or use of sublingual nitroglycerin. The present research was undertaken to study the drug utilization pattern of antiplatelet drugs for ischemic heart disease in a tertiary care hospital. Method: The present study is a retrospective drug utilization study with a study period of 6 months. The data were collected from the discharge case sheets of the general medicine department at Rajiv Gandhi Institute of Medical Sciences, Kadapa. The tentative sample size fixed was 250 patients; of the 250 cases, 19 were excluded because of unrelated data. Results: A total of 250 prescriptions were collected for the study. According to the inclusion criteria, 233 prescriptions were diagnosed with ischemic heart disease and 17 prescriptions were excluded due to unrelated information. Of the 233 prescriptions, 128 patients were male (54.9%) and 105 were female (45%). According to the gender distribution, the prevalence of ischemic heart disease was 90 (70.31%) in males and 39 (37.1%) in females. In the same way, the prevalence of ischemic heart disease along with cerebrovascular disease was 39 (29.6%) in males and 66 (62.6%) in females. Conclusion: We found that 94.8% drug utilization of antiplatelet drugs was achieved at Rajiv Gandhi Institute of Medical Sciences, Kadapa, from 2011-2012.

Keywords: Angina pectoris, aspirin, clopidogrel, myocardial infarction.
