Search results for: Normalized Cumulative Spectral Distribution

1305 Performance Analysis of MIMO-OFDM Using Convolution Codes with QAM Modulation

Authors: I Gede Puja Astawa, Yoedy Moegiharto, Ahmad Zainudin, Imam Dui Agus Salim, Nur Annisa Anggraeni

Abstract:

The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error-correction code) to detect and correct errors that occur during data transmission; one option is the convolution code. This paper presents the performance of OFDM with the Space Time Block Code (STBC) diversity technique, using QAM modulation and a code rate of ½. The evaluation is done by analyzing the Bit Error Rate (BER) versus the energy-per-bit to noise power spectral density ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath channel. Achieving a BER of 10^-3 requires an SNR of 10 dB in the SISO-OFDM scheme and 10 dB in the 2x2 MIMO-OFDM scheme. The 4x4 MIMO-OFDM scheme requires 5 dB, while adding convolution coding to the 4x4 MIMO-OFDM system improves performance to about 0 dB for the same BER. This demonstrates a power saving of 3 dB relative to the uncoded 4x4 MIMO-OFDM system, a saving of 7 dB relative to 2x2 MIMO-OFDM, and a significant power saving relative to the SISO-OFDM system.
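
As a rough illustration of the kind of BER-versus-Eb/No evaluation described above, the following Python sketch runs a Monte Carlo simulation of Gray-coded QPSK over a flat Rayleigh fading channel. It is a single-antenna baseline only: the paper's STBC/MIMO combining, 256-subcarrier OFDM chain and convolution coding are not reproduced, and the function name and parameters are our own.

```python
import numpy as np

def qpsk_ber_rayleigh(ebno_db, n_bits=200_000, seed=0):
    """Monte Carlo BER of Gray-coded QPSK over a flat Rayleigh fading
    channel with perfect channel knowledge at the receiver."""
    rng = np.random.default_rng(seed)
    esno = 2 * 10 ** (ebno_db / 10)          # 2 bits/symbol -> Es/N0 = 2 Eb/N0
    bits = rng.integers(0, 2, n_bits)
    sym = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)
    h = (rng.standard_normal(sym.size) + 1j * rng.standard_normal(sym.size)) / np.sqrt(2)
    noise = np.sqrt(1 / (2 * esno)) * (rng.standard_normal(sym.size)
                                       + 1j * rng.standard_normal(sym.size))
    z = (h * sym + noise) / h                # zero-forcing equalization
    bhat = np.empty(n_bits, dtype=int)
    bhat[0::2], bhat[1::2] = (z.real < 0), (z.imag < 0)
    return float(np.mean(bhat != bits))

for ebno in [0, 5, 10, 15, 20]:
    print(f"Eb/No = {ebno:2d} dB -> BER = {qpsk_ber_rayleigh(ebno):.4f}")
```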

Keywords: Convolution code, OFDM, MIMO, QAM, BER.

1304 Feature Reduction of Nearest Neighbor Classifiers using Genetic Algorithm

Authors: M. Analoui, M. Fadavi Amiri

Abstract:

The design of a pattern classifier includes an attempt to select, among a set of possible features, a minimum subset of weakly correlated features that better discriminate the pattern classes. This is usually a difficult task in practice, normally requiring the application of heuristic knowledge about the specific problem domain. The selection and quality of the features representing each pattern have a considerable bearing on the success of subsequent pattern classification. Feature extraction is the process of deriving new features from the original features in order to reduce the cost of feature measurement, increase classifier efficiency, and allow higher classification accuracy. Many current feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and increasing classification efficiency, it does not necessarily reduce the number of features that must be measured, since each new feature may be a linear combination of all of the features in the original pattern vector. In this paper a new approach to feature extraction is presented in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. In this approach each feature value is first normalized by a linear equation, then scaled by the associated weight prior to training, testing, and classification. A k-NN classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes a vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. With this approach, the number of features used in classification can be sharply reduced.
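
A compact sketch of the simultaneous weighting-and-classification idea: a genetic algorithm evolves a feature-weight vector, and each candidate is scored by the cross-validated accuracy of a k-NN classifier on the normalized, weighted features. The population size, operators and the Iris stand-in dataset are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X, y = load_iris(return_X_y=True)
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))   # linear normalization

def fitness(w):
    # Score one weight vector: k-NN accuracy on the weighted features.
    return cross_val_score(KNeighborsClassifier(5), X * w, y, cv=3).mean()

pop = rng.random((30, X.shape[1]))                  # population of weight vectors
for gen in range(40):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]         # truncation selection
    children = []
    while len(parents) + len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(X.shape[1]) < 0.5         # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(0, 0.1, X.shape[1]) * (rng.random(X.shape[1]) < 0.2)
        children.append(np.clip(child, 0, 1))       # mutation, kept in [0, 1]
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("weights:", np.round(best, 2))  # near-zero weights flag removable features
```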

Keywords: Feature reduction, genetic algorithm, pattern classification, nearest neighbor rule classifiers (k-NNR).

1303 Accurate Time Domain Method for Simulation of Microstructured Electromagnetic and Photonic Structures

Authors: Vijay Janyani, Trevor M. Benson, Ana Vukovic

Abstract:

A time-domain numerical model within the framework of transmission line modeling (TLM) is developed to simulate electromagnetic pulse propagation inside multiple microcavities forming photonic crystal (PhC) structures. The model developed is quite general and is capable of simulating complex electromagnetic problems accurately. The field quantities can be mapped onto a passive electrical circuit equivalent, which ensures that TLM is provably stable and conservative at a local level. Furthermore, the circuit representation allows a high level of hybridization of TLM with other techniques and with lumped circuit models of components and devices. As an example, a photonic crystal structure formed by rods (or blocks) of high-permittivity dielectric material embedded in a low-dielectric background medium is simulated. The model gives vital spatio-temporal information about the signal, and also gives spectral information over a wide frequency range in a single run. The model has wide applications in microwave communication systems, optical waveguides and the simulation of electromagnetic materials.

Keywords: Computational Electromagnetics, Numerical Simulation, Transmission Line Modeling.

1302 On the Mechanism of Broadening of the Optical Spectrum of a Solvated Electron in Ammonia

Authors: V.K. Mukhomorov

Abstract:

The solvated electron is self-trapped (a polaron) owing to strong interaction with the quantum polarization field. If the electron and the quantum field are strongly coupled, a collective localized state of the field and quasi-particle is formed. In such a formation the electron motion is rather intricate. On the one hand the electron oscillates within a rather deep polarization potential well and undergoes optical transitions; on the other, it moves together with the center of inertia of the system and participates in the thermal random walk. The problem is to separate these motions correctly, rigorously taking into account the conservation laws. This can be conveniently done using the Bogolyubov-Tyablikov method of canonical transformation to collective coordinates. This transformation removes the translational degeneracy and allows one to develop a successive approximation algorithm for the energy and wave function while simultaneously fulfilling the law of conservation of total momentum of the system. The resulting equations determine the electron transitions and depend explicitly on the translational velocity of the quasi-particle as a whole. The frequency of the optical transition is calculated for the solvated electron in ammonia, and an estimate is made for the thermally induced spectral bandwidth.

Keywords: Canonical transformations, solvated electron, width of the optical spectrum.

1301 Phenotypical and Genotypical Assessment Techniques for Identification of Some Contagious Mastitis Pathogens

Authors: A. El Behiry, R. N. Zahran, R. Tarabees, E. Marzouk, M. Al-Dubaib

Abstract:

Mastitis is one of the most economically important diseases affecting dairy cows worldwide. Its classical diagnosis, using bacterial culture and biochemical findings, is difficult and time-consuming. In this research, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) permitted identification of different microorganisms with high accuracy and rapidity (only 24 hours for microbial growth and analysis). Using MALDI-TOF MS, one hundred twenty strains of Staphylococcus and Streptococcus species isolated from the milk of cows affected by clinical and subclinical mastitis were identified, and the results were compared with those obtained by traditional methods such as the API and VITEK 2 systems. 37 of 39 strains (~95%) of Staphylococcus aureus (S. aureus) were correctly identified by MALDI-TOF MS and then confirmed by a nuc-based PCR technique, whereas accurate identification was observed in 100% of the coagulase-negative staphylococci (CNS; 50 isolates) and Streptococcus agalactiae (31 isolates). In brief, our results demonstrate that MALDI-TOF MS is a fast and reliable technique with the capability to replace conventional identification of several bacterial strains usually isolated in clinical microbiology laboratories.

Keywords: Identification, mastitis pathogens, mass spectral, phenotypical.

1300 The Effect of Bottom Shape and Baffle Length on the Flow Field in Stirred Tanks in Turbulent and Transitional Flow

Authors: Jie Dong, Binjie Hu, Andrzej W Pacek, Xiaogang Yang, Nicholas J. Miles

Abstract:

The effects of the shape of the vessel bottom and the length of the baffles on the velocity distributions in turbulent and transitional flow have been simulated. The turbulent flow was simulated using the standard k-ε model and verified using LES, whereas the transitional flow was simulated using LES only. It has been found that both the shape of the tank bottom and the length of the baffles have a significant effect on the flow pattern and velocity distribution below the impeller. In the dished-bottom tank with baffles reaching the edge of the dish, a large rotating volume of liquid formed below the impeller; liquid in this rotating region did not mix fully, forming a dead zone. The size and intensity of circulation within this zone calculated by the k-ε model and by LES were practically identical, which reinforces the accuracy of the numerical simulations. Both types of simulations also show that employing full-length baffles can reduce the size of the dead zone formed below the impeller. LES was also used to simulate the velocity distribution below the impeller in transitional flow, and it has been found that secondary circulation loops formed near the tank bottom in all investigated geometries. However, in this case the length of the baffles has a smaller effect on the volume of rotating liquid than in turbulent flow.

Keywords: Baffles length, dished bottom, dead zone, flow field.

1299 Inferring User Preference Using Distance Dependent Chinese Restaurant Process and Weighted Distribution for a Content Based Recommender System

Authors: Bagher Rahimpour Cami, Hamid Hassanpour, Hoda Mashayekhi

Abstract:

Nowadays, websites provide a vast number of resources for users. Recommender systems have been developed as an essential element of these websites to provide a personalized environment for users, helping them retrieve resources of interest from large sets of available resources. Due to the dynamic nature of user preference, constructing an appropriate model to estimate it is the major task of recommender systems. Profile matching and latent factors are two main approaches to identifying user preference. In this paper, we employ latent factors and profile matching to cluster the user profile and identify user preference, respectively. The method uses the Distance Dependent Chinese Restaurant Process as a Bayesian nonparametric framework to extract the latent factors from the user profile. These latent factors are mapped to user interests, and a weighted distribution is used to identify user preferences. We evaluate the proposed method using a real-world dataset that contains news tweets of a news agency (BBC). The experimental results and comparisons show the superior recommendation accuracy of the proposed approach relative to existing methods, and its ability to effectively evolve over time.

Keywords: Content-based recommender systems, dynamic user modeling, extracting user interests, predicting user preference.

1298 Using Teager Energy Cepstrum and HMM Distances in Automatic Speech Recognition and Analysis of Unvoiced Speech

Authors: Panikos Heracleous

Abstract:

In this study, the use of a silicon NAM (Non-Audible Murmur) microphone in automatic speech recognition is presented. NAM microphones are special acoustic sensors which are attached behind the talker's ear and can capture not only normal (audible) speech, but also very quietly uttered speech (non-audible murmur). As a result, NAM microphones can be applied in automatic speech recognition systems when privacy is desired in human-machine communication. Moreover, NAM microphones show robustness against noise, and they might be used in special systems (speech recognition, speech conversion, etc.) for sound-impaired people. Using a small amount of training data and adaptation approaches, 93.9% word accuracy was achieved for a 20k Japanese vocabulary dictation task. Non-audible murmur recognition in noisy environments is also investigated. In this study, further analysis of NAM speech has been made using distance measures between hidden Markov model (HMM) pairs. Using a metric distance, it has been shown that the spectral space of NAM speech is reduced; however, the locations of the different NAM phonemes are similar to those of the phonemes of normal speech, and the NAM sounds are well discriminated. Promising results using nonlinear features are also introduced, especially under noisy conditions.
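
The nonlinear features mentioned above build on the Teager energy operator, whose discrete form is Psi[x(n)] = x(n)^2 - x(n-1)x(n+1). A few lines of Python make this concrete; the toy sine and sampling rate are our own, not the study's recordings.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator:
    Psi[x(n)] = x(n)^2 - x(n-1) * x(n+1)."""
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]   # replicate the edge values
    return psi

fs = 16_000
t = np.arange(0, 0.02, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)         # toy 440 Hz tone
print(teager_energy(x)[:5])             # roughly constant for a pure sine
```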

Keywords: Speech recognition, unvoiced speech, nonlinear features, HMM distance measures

1297 Low Complexity Peak-to-Average Power Ratio Reduction in Orthogonal Frequency Division Multiplexing System by Simultaneously Applying Partial Transmit Sequence and Clipping Algorithms

Authors: V. Sudha, D. Sriram Kumar

Abstract:

Orthogonal Frequency Division Multiplexing (OFDM) has been used in many advanced wireless communication systems due to its high spectral efficiency and robustness to frequency-selective fading channels. However, the major concern with an OFDM system is the high peak-to-average power ratio (PAPR) of the transmitted signal. Two popular techniques for PAPR reduction in OFDM systems are conventional partial transmit sequences (CPTS) and clipping. In this paper, a parallel combination/hybrid scheme of PAPR reduction using the clipping and CPTS algorithms is proposed. The proposed method intelligently applies both algorithms in order to reduce both PAPR and computational complexity. The scheme slightly degrades bit error rate (BER) performance due to the clipping operation; this degradation can be limited by selecting an appropriate value of the clipping ratio (CR). The simulation results show that the proposed algorithm achieves significant PAPR reduction with much reduced computational complexity.
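
A small sketch of the two ingredients of the hybrid scheme, amplitude clipping at a chosen clipping ratio (CR) and a minimal PTS phase search, applied to one 256-subcarrier QPSK OFDM block. The partitioning into 4 interleaved sub-blocks and the phase set {+1, -1} are simplifying assumptions for illustration; the paper combines the two stages, while here they are shown independently for clarity.

```python
import itertools
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip(x, cr_db):
    """Amplitude clipping at a clipping ratio CR in dB above the RMS
    level; the phase is preserved, only the envelope is limited."""
    a_max = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (cr_db / 20)
    return np.minimum(np.abs(x), a_max) * np.exp(1j * np.angle(x))

rng = np.random.default_rng(0)
n = 256                                       # subcarriers
sym = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
ofdm = np.fft.ifft(sym) * np.sqrt(n)          # time-domain OFDM block

# Minimal PTS search: 4 interleaved sub-blocks, phase factors in {+1, -1}.
blocks = []
for v in range(4):
    s = np.zeros(n, complex)
    s[v::4] = sym[v::4]                       # disjoint subcarrier partition
    blocks.append(np.fft.ifft(s) * np.sqrt(n))
pts = min((sum(b * p for b, p in zip(blocks, ph))
           for ph in itertools.product([1, -1], repeat=4)),
          key=papr_db)

print(f"plain: {papr_db(ofdm):.2f} dB, clipped (CR=3 dB): "
      f"{papr_db(clip(ofdm, 3)):.2f} dB, PTS: {papr_db(pts):.2f} dB")
```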

Keywords: CCDF, OFDM, PAPR, PTS.

1296 Combining the Deep Neural Network with the K-Means for Traffic Accident Prediction

Authors: Celso L. Fernando, Toshio Yoshii, Takahiro Tsubota

Abstract:

Understanding the causes of road accidents and predicting their occurrence is key to preventing deaths and serious injuries from road accident events. Traditional statistical methods such as Poisson and logistic regressions have been used to find the association of traffic environmental factors with accident occurrence; recently, the artificial neural network (ANN), a computational technique that learns from historical data to make more accurate predictions, has emerged. Despite its ability to make accurate predictions, the ANN has difficulty dealing with a highly unbalanced distribution of attribute patterns in the training dataset; in such circumstances, the ANN treats the minority group as noise. However, in real-world data the minority group is often the group of interest; e.g., in road traffic accident data, accident events are the group of interest. This study proposes a combination of k-means with the ANN to improve the predictive ability of the neural network model by alleviating the effect of the unbalanced distribution of attribute patterns in the training dataset. The results show that the proposed method improves the ability of the neural network to make predictions on a dataset with a highly unbalanced distribution of attribute patterns; on an evenly distributed dataset, however, the proposed method performs almost like a standard neural network.
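
The abstract does not spell out the exact coupling, but one plausible reading, replacing the majority class with its k-means centroids so that both classes contribute comparably before the network is trained, can be sketched as follows; the dataset, class sizes and network architecture are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def rebalance_with_kmeans(X, y, majority=0, n_clusters=None):
    """Under-sample the majority class by replacing it with its k-means
    centroids so both classes contribute comparably to training (one
    plausible reading of the paper's k-means + ANN combination)."""
    X_maj, X_min = X[y == majority], X[y != majority]
    k = n_clusters or len(X_min)               # shrink to the minority size
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_maj)
    Xb = np.vstack([km.cluster_centers_, X_min])
    yb = np.hstack([np.full(k, majority), y[y != majority]])
    return Xb, yb

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (950, 4)), rng.normal(2, 1, (50, 4))])
y = np.hstack([np.zeros(950, int), np.ones(50, int)])    # 19:1 imbalance
Xb, yb = rebalance_with_kmeans(X, y)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(Xb, yb)
print("minority recall:", (clf.predict(X[y == 1]) == 1).mean())
```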

Keywords: Accident risk estimation, artificial neural network, deep learning, k-means, road safety.

1295 Impact of Music on Brain Function during Mental Task using Electroencephalography

Authors: B. Geethanjali, K. Adalarasu, R. Rajsekaran

Abstract:

Music has a great effect on the human body and mind; it can have a positive effect on the hormone system. The objective of this study is to analyze the effect of music (Carnatic, hard rock and jazz) on brain activity during mental workload using electroencephalography (EEG). Eight healthy subjects without special musical education participated in the study. EEG signals were acquired at the frontal (Fz), parietal (Pz) and central (Cz) lobes of the brain while listening to music under three experimental conditions (rest, music without mental task, and music with mental task). Spectral power features were extracted for the alpha, theta and beta brain rhythms. While listening to jazz music, the alpha and theta powers were significantly (p < 0.05) higher at rest than during music with and without mental task at Cz. While listening to Carnatic music, the beta power was significantly (p < 0.05) higher with mental task than at rest or during music without mental task at the Cz and Fz locations. This finding corroborates that attention-based activities are enhanced while listening to jazz and Carnatic music as compared to hard rock during a mental task.
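
The band-power features described above can be computed for one channel roughly as follows; the sampling rate, band edges and alpha-dominated toy trace are our assumptions, not the study's recording parameters.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs=256):
    """Spectral power of one EEG channel in the theta/alpha/beta rhythms,
    estimated with Welch's method and integrated over each band."""
    f, pxx = welch(eeg, fs=fs, nperseg=2 * fs)
    return {name: float(np.trapz(pxx[(f >= lo) & (f < hi)],
                                 f[(f >= lo) & (f < hi)]))
            for name, (lo, hi) in BANDS.items()}

rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 256)
cz = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(band_powers(cz))                  # alpha should dominate this toy trace
```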

Keywords: Music, Brain Function, Electroencephalography (EEG), Mental Task, Feature Extraction Parameters.

1294 An Investigation of Performance versus Security in Cognitive Radio Networks with Supporting Cloud Platforms

Authors: Kurniawan D. Irianto, Demetres D. Kouvatsos

Abstract:

The growth of wireless devices affects the availability of limited frequencies or spectrum bands, since spectrum bands are a natural resource that cannot be expanded, while the licensed frequencies are idle most of the time. Cognitive radio is one of the solutions to these problems. Cognitive radio is a promising technology that allows unlicensed users, known as secondary users (SUs), to access licensed bands without interfering with licensed users, or primary users (PUs). As cloud computing has become popular in recent years, cognitive radio networks (CRNs) can be integrated with cloud platforms. One of the important issues in CRNs is security: CRNs use radio frequencies as the transmission medium and therefore share the vulnerabilities of wireless communication systems. Another critical issue in CRNs is performance. Security has an adverse effect on performance, and there are trade-offs between the two. The goal of this paper is to investigate the performance-security trade-off in CRNs with supporting cloud platforms. Furthermore, queuing network models with preemptive-resume and preemptive-repeat-identical priority are applied in this work to measure the impact of security on performance in CRNs with and without a cloud platform. The generalized exponential (GE) type distribution is used to reflect the bursty inter-arrival and service times at the servers. The results show that the best performance is obtained when security is disabled and the cloud platform is enabled.

Keywords: Cloud Platforms, Cognitive Radio Networks, GE-type Distribution, Performance vs. Security.

1293 Regional Analysis of Streamflow Drought: A Case Study for Southwestern Iran

Authors: M. Byzedi, B. Saghafian

Abstract:

Droughts are complex natural hazards that, to a varying degree, affect some parts of the world every year. The range of drought impacts is related to drought occurring in different stages of the hydrological cycle, and usually different types of droughts, such as meteorological, agricultural, hydrological, and socioeconomic, are distinguished. Streamflow drought was analyzed by the truncation level method (at the 70% level) on daily discharges measured at 54 hydrometric stations in southwestern Iran. Frequency analysis was carried out for the annual maximum series (AMS) of drought deficit volume and duration. Physiographic, climatic, geologic, and vegetation cover factors were studied as influential factors in the regional analysis. According to the results of factor analysis, the six most effective factors were identified as watershed area, rainfall from December to February, the percentage of area with Normalized Difference Vegetation Index (NDVI) < 0.1, the percentage of convex area, drainage density, and the minimum watershed elevation, which together explained 90.9% of the variance. Homogeneous regions were determined by cluster analysis and discriminant function analysis. Suitable multivariate regression models were evaluated for streamflow drought deficit volume with a 2-year return period; the significance level of the regression models was 0.01. The results showed that watershed area is the most effective factor, with a high correlation with deficit volume. Drought duration, by contrast, was not a suitable drought index for regional analysis.
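
A minimal sketch of the truncation-level method at the 70% level: the threshold is the flow exceeded 70% of the time, and each uninterrupted spell below it yields one (duration, deficit volume) pair, from which the annual maximum series is taken. The gamma-distributed synthetic discharge is purely illustrative.

```python
import numpy as np

def drought_events(q, pct_exceedance=70):
    """Truncation-level analysis of a daily discharge series: the
    threshold is the flow exceeded pct_exceedance% of the time, and each
    uninterrupted spell below it gives one (duration, deficit) pair."""
    threshold = np.percentile(q, 100 - pct_exceedance)   # e.g. Q70
    events, d, v = [], 0, 0.0
    for flow in q:
        if flow < threshold:
            d += 1
            v += threshold - flow        # daily deficit volume
        elif d:
            events.append((d, v))
            d, v = 0, 0.0
    if d:
        events.append((d, v))
    return threshold, events

rng = np.random.default_rng(0)
q = rng.gamma(2.0, 5.0, 365)             # one year of synthetic discharge
thr, ev = drought_events(q)
print(f"threshold {thr:.2f}; annual max deficit {max(v for _, v in ev):.1f}")
```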

Keywords: Iran, Streamflow drought, truncation level method, regional analysis.

1292 Improved Automated Classification of Alcoholics and Non-alcoholics

Authors: Ramaswamy Palaniappan

Abstract:

In this paper, several improvements are proposed to previous work on the automated classification of alcoholics and non-alcoholics. In the previous paper, a multilayer-perceptron neural network classifying the energy of gamma-band Visual Evoked Potential (VEP) signals gave the best classification performance using 800 VEP signals from 10 alcoholics and 10 non-alcoholics. Here, the dataset is extended to include 3560 VEP signals from 102 subjects: 62 alcoholics and 40 non-alcoholics. Three modifications are introduced to improve the classification performance: i) increasing the gamma-band spectral range by increasing the pass-band width of the filter used, ii) using the Multiple Signal Classification (MUSIC) algorithm to obtain the power of the dominant frequency in gamma-band VEP signals as features, and iii) using the simple but effective k-nearest neighbour classifier. To validate that these modifications do give improved performance, a 10-fold cross-validation classification (CVC) scheme is used. Repeat experiments of the previously used methodology on the extended dataset are performed here, and an improvement from 94.49% to 98.71% in maximum averaged CVC accuracy is obtained using the modifications. These latest results show that VEP-based classification of alcoholics is worth exploring further for system development.
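
The 10-fold cross-validation classification (CVC) protocol used for validation looks roughly like this; the synthetic feature matrix merely stands in for the gamma-band VEP power features, and k, the number of repeats and the class sizes are our choices.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def cvc_accuracy(X, y, k=5, folds=10, repeats=10):
    """Repeated 10-fold cross-validation classification (CVC) accuracy
    of a k-nearest-neighbour classifier, averaged over random shuffles."""
    accs = []
    for seed in range(repeats):
        cv = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed)
        accs.append(cross_val_score(KNeighborsClassifier(k), X, y, cv=cv).mean())
    return float(np.mean(accs))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 6)),     # stand-in for gamma-band
               rng.normal(1.5, 1.0, (200, 6))])    # VEP power features
y = np.repeat([0, 1], 200)
print(f"mean CVC accuracy: {cvc_accuracy(X, y):.4f}")
```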

Keywords: Alcoholic, Multilayer-perceptron, Nearest neighbour, Gamma band, MUSIC, Visual evoked potential.

1291 An Advanced Stereo Vision Based Obstacle Detection with a Robust Shadow Removal Technique

Authors: Saeid Fazli, Hajar Mohammadi D., Payman Moallem

Abstract:

This paper presents a robust method to detect obstacles in stereo images using a shadow removal technique and color information. Stereo vision based obstacle detection aims to detect obstacles and compute their depth using stereo matching and a disparity map. The proposed method is divided into three phases: the first phase detects obstacles and removes shadows, the second performs matching, and the last computes depth. In the first phase, we propose a robust method for detecting obstacles in stereo images using a shadow removal technique based on color information in HIS space. We use Normalized Cross Correlation (NCC) matching with a 5 × 5 window, prepare an empty matching table τ, and grow disparity components by drawing a seed s from a seed set S, computed using the Canny edge detector, and adding it to τ. In this way we achieve higher performance than previous works [2,17]. A fast stereo matching algorithm is proposed that visits only a small fraction of the disparity space in order to find a semi-dense disparity map; it works by growing from a small set of correspondence seeds. The obstacle identified in phase one, which appears in the disparity map of phase two, enters the third phase of depth computation. Finally, experimental results are presented to show the effectiveness of the proposed method.
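
The core NCC scoring step over the 5 × 5 window can be sketched as follows; the seed-growing bookkeeping (the table τ and seed set S) is omitted, and the synthetic image pair with a known shift is our own test fixture.

```python
import numpy as np

def ncc(a, b, eps=1e-9):
    """Normalized cross-correlation of two equally sized patches."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + eps))

def best_disparity(left, right, row, col, max_d=32, w=2):
    """Score candidate disparities for one pixel; w = 2 gives the 5 x 5
    window used in the paper."""
    ref = left[row - w:row + w + 1, col - w:col + w + 1]
    scores = [ncc(ref, right[row - w:row + w + 1, col - d - w:col - d + w + 1])
              for d in range(min(max_d, col - w) + 1)]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
right = rng.random((60, 80))
left = np.roll(right, 7, axis=1)       # synthetic pair with a known 7-px shift
print(best_disparity(left, right, row=30, col=50))   # -> 7
```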

Keywords: Obstacle detection, stereo vision, shadow removal, color, stereo matching.

1290 Flow Characteristics around Rectangular Obstacles with the Varying Direction of Obstacles

Authors: Hee-Chang Lim

Abstract:

The study aims to understand the surface pressure distribution around rectangular bodies, such as the suction pressure at the leading edge on the top and side faces, as the aspect ratio of the bodies and the wind direction are varied. We carried out wind-tunnel measurements and numerical simulations around a series of rectangular bodies (40d×80w×80h, 80d×80w×80h, 160d×80w×80h, 80d×40w×80h and 80d×160w×80h, in mm) placed in a deep turbulent boundary layer. On a modern numerical platform, the Navier-Stokes equations were solved with both the typical two-equation k-ε model and the DES (Detached Eddy Simulation) turbulence model, and both were compared with the measurement data. Regarding the turbulence models, the DES model makes better predictions than the k-ε model, especially when calculating the separated turbulent flow around a bluff body with sharp-edged corners. In order to observe the effect of wind direction on the pressure variation around the cube (e.g., 80d×80w×80h in mm), it was rotated to 0º, 10º, 20º, 30º, and 45º, representing the salient wind directions in the tunnel. The results show that the surface pressure variation is highly dependent upon the approaching wind direction, especially on the top and side faces of the cube. In addition, the transverse width has a substantial effect on the variation of surface pressure around the bodies, while the longitudinal length has little or no influence.

Keywords: Rectangular bodies, wind direction, aspect ratio, surface pressure distribution, wind-tunnel measurement, k-ε model, DES model, CFD.

1289 Stability Optimization of Functionally Graded Pipes Conveying Fluid

Authors: Karam Y. Maalawi, Hanan E.M EL-Sayed

Abstract:

This paper presents an exact analytical model for optimizing the stability of thin-walled, composite, functionally graded pipes conveying fluid. The critical flow velocity at which divergence occurs is maximized for a specified total structural mass in order to ensure the economic feasibility of the attained optimum designs. The composition of the material of construction is optimized by defining the spatial distribution of volume fractions of the material constituents using piecewise variations along the pipe length. The major aim is to tailor the material distribution in the axial direction so as to avoid the occurrence of divergence instability without the penalty of increased structural mass. Three types of boundary conditions have been examined, namely Hinged-Hinged, Clamped-Hinged and Clamped-Clamped pipelines. The resulting optimization problem has been formulated as a nonlinear mathematical programming problem solved by invoking the MATLAB optimization toolbox routines, which implement the constrained function minimization routine "fmincon" interacting with the associated eigenvalue problem routines. The proposed mathematical models have succeeded in maximizing the critical flow velocity without mass penalty, producing efficient and economic designs with enhanced stability characteristics as compared with the baseline designs.

Keywords: Functionally graded materials, pipe flow, optimum design, fluid-structure interaction.

1288 Automatic Classification of Lung Diseases from CT Images

Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari

Abstract:

Pneumonia is a lung disease that creates congestion in the chest, and severe congestion can lead to loss of life. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. The early prediction and classification of such lung diseases help reduce the mortality rate. In this paper, we propose an automatic Computer-Aided Diagnosis (CAD) system using a deep learning approach. The proposed CAD system takes raw computerized tomography (CT) scans of the patient's chest as input and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are first pre-processed to enhance their quality for further analysis. We then apply a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract features from the pre-processed CT image automatically; this CNN model ensures feature learning with effective 1D feature extraction for each input CT image. The output of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. Simulation outcomes on a publicly available dataset demonstrate the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
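
The Min-Max normalization step named above is standard; a per-feature sketch, with the matrix shape and target range as assumptions:

```python
import numpy as np

def min_max(features, lo=0.0, hi=1.0, eps=1e-12):
    """Min-Max normalization of a feature matrix (rows = CT scans,
    columns = CNN features) onto [lo, hi], column by column."""
    f = np.asarray(features, dtype=float)
    fmin, fmax = f.min(axis=0), f.max(axis=0)
    return lo + (hi - lo) * (f - fmin) / (fmax - fmin + eps)

rng = np.random.default_rng(0)
feats = rng.normal(10, 5, (8, 4))      # stand-in for pooled CNN features
norm = min_max(feats)
print(norm.min(axis=0), norm.max(axis=0))   # each column spans ~[0, 1]
```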

Keywords: CT scans, COVID-19, deep learning, image processing, pneumonia, lung disease.

1287 Forecasting Stock Price Manipulation in Capital Market

Authors: F. Rahnamay Roodposhti, M. Falah Shams, H. Kordlouie

Abstract:

The aim of this article is to extend and develop econometric and network-structure-based methods able to detect price manipulation on the Tehran stock exchange; the principal goal is to offer a model for identifying such manipulation. To do so, a sample of 397 companies listed on the Tehran stock exchange was selected, information on their prices and trading volumes during the years 2001 to 2009 was collected, and, by performing runs tests, skewness tests and duration correlation tests, the companies were divided into two sets: manipulated and non-manipulated. In the next stage, by investigating the cumulative return process and trading volume of the manipulated companies, the starting date of price manipulation was identified. Then, using a logit model, an artificial neural network and multiple discriminant analysis, together with information on company size, information clarity, P/E ratio and stock liquidity one year prior to the manipulation, models for forecasting price manipulation of stocks listed on the Tehran stock exchange were designed. Finally, the forecasting power of the models was studied on a test set: 92.1% for the logit model, 94.1% for the artificial neural network and 90.2% for the multiple discriminant analysis model. All three models therefore have high power to forecast price manipulation, and there is no considerable difference among their forecasting powers.

Keywords: Price Manipulation, Liquidity, Size of Company, Floating Stock, Information Clarity

1286 Paleoclimate Reconstruction during Pabdeh, Gurpi, Kazhdumi and Gadvan Formations (Cretaceous-Tertiary) Based on Clay Mineral Distribution

Authors: B. Soleimani

Abstract:

Paleoclimate was reconstructed from the clay mineral assemblages of shale units of the Pabdeh (Paleocene-Oligocene), Gurpi (Upper Cretaceous), Kazhdumi (Albian-Cenomanian) and Gadvan (Aptian-Neocomian) formations in the Bangestan anticline. For comparison with the clay mineral assemblages in these formations, selected samples were also taken from the available formations in drilled wells in the Ahvaz, Marun, Karanj, and Parsi oil fields. Collected samples were prepared using standard clay mineral methodology and treated as normal, glycolated and heated oriented glass slides; identification was made from X-ray diffractograms. Illite content varies from 8 to 36%, increasing from the Pabdeh to the Gurpi Formation, which may be due to a dominantly dry climate. Kaolinite is in the range of 12-49%; its pattern of variation in the different formations could be a marker of climate changes from wet to dry, which is supported by the lithological changes. Chlorite (4-28%) is detected only in samples without kaolinite. Mixed-layer minerals, as mixtures of illite-chlorite and illite-vermiculite-montmorillonite, vary from 6 to 36%, decreasing during Kazhdumi deposition from base to top; this may reflect a weakening of the illite leaching process. Vermiculite was also found, in very small quantities, in units without kaolinite. Montmorillonite varies from 8 to 43%, and its presence reflects terrestrial depositional conditions. The stratigraphic record also supports the idea that clay mineral distribution is a function of climate change. The present results thus indicate a possible procedure for evaluating ancient climate changes.

Keywords: Clay Minerals, Paleoclimate, XRD, oriented slide

1285 An Image Encryption Method with Magnitude and Phase Manipulation using Carrier Images

Authors: S. R. M. Prasanna, Y. V. Subba Rao, A. Mitra

Abstract:

We describe an effective method for image encryption which employs magnitude and phase manipulation using carrier images. Although it involves traditional methods like magnitude and phase encryption, the novelty of this work lies in deploying the concept of carrier images for encryption purposes. To this end, a carrier image is randomly chosen from a set of stored images. A one-dimensional (1-D) discrete Fourier transform (DFT) is then carried out on the original image to be encrypted and on the carrier image. Row-wise spectral addition and scaling is performed between the magnitude spectra of the original and carrier images on randomly selected rows, and row-wise phase addition and scaling is likewise performed between the phase spectra of the two images on randomly selected rows. The encrypted image obtained by these two operations is further subjected to one more level of magnitude and phase manipulation, using another randomly chosen carrier image and a 1-D DFT along the columns. The resulting encrypted image is found to be fully distorted, increasing the robustness of the proposed scheme. Further, applying the reverse process at the receiver, the decrypted image is found to be distortionless.
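
One level of the row-wise mixing can be sketched as below, with details the abstract leaves open filled in by assumption (the scaling factor alpha and a 50% row-selection rate); the full scheme repeats the operation column-wise with a second carrier.

```python
import numpy as np

def encrypt_rows(img, carrier, alpha=0.5, seed=0):
    """One row-wise mixing level: add scaled copies of the carrier's
    magnitude and phase spectra to randomly selected rows of the image's
    1-D row DFTs, then invert. alpha and the 50% row rate are assumed."""
    rng = np.random.default_rng(seed)
    F_img = np.fft.fft(img, axis=1)
    F_car = np.fft.fft(carrier, axis=1)
    mag, ph = np.abs(F_img), np.angle(F_img)
    rows = rng.random(img.shape[0]) < 0.5        # secret row-selection key
    mag[rows] += alpha * np.abs(F_car[rows])     # magnitude addition + scaling
    ph[rows] += alpha * np.angle(F_car[rows])    # phase addition + scaling
    enc = np.fft.ifft(mag * np.exp(1j * ph), axis=1)   # complex in this sketch
    return enc, rows

rng = np.random.default_rng(1)
img, carrier = rng.random((64, 64)), rng.random((64, 64))
enc, key = encrypt_rows(img, carrier)
# Decryption reverses the additions on the keyed rows and inverts the DFT;
# keeping the complex values makes this sketch exactly invertible.
```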

Keywords: Encryption, Carrier images, Magnitude manipulation, Phase manipulation.

1284 Numerical and Experimental Investigation of Airflow inside a Car Cabin

Authors: Mokhtar Djeddou, Amine Mehel, Georges Fokoua, Anne Tanière, Patrick Chevrier

Abstract:

Commuters' exposure to air pollution, particularly to particulate matter inside vehicles, is a significant health issue. Assessing particle concentrations and characterizing their distribution is an important first step in understanding and proposing solutions to improve car cabin air quality. It is known that particle dynamics is intimately driven by particle-turbulence interactions. In order to analyze and model pollutant distribution inside car cabins, it is crucial first to examine the single-phase flow topology and its associated turbulence characteristics. Within this context, Computational Fluid Dynamics (CFD) simulations were conducted to model airflow inside a full-scale car cabin using the Reynolds Averaged Navier-Stokes (RANS) approach combined with the first-order realizable k-ε model to close the RANS equations. To assess the numerical model, a campaign of velocity field measurements at different locations in the front and back of the car cabin was carried out using the hot-wire anemometry technique. Comparison between numerical and experimental results shows good agreement of the velocity profiles. Additionally, visualization of streamlines shows the formation of jet flow developing out of the dashboard air vents and the formation of large vortex structures, particularly between the front and back-seat compartments. These vortical structures could play a key role in the accumulation and clustering of particles in a turbulent flow.

Keywords: Car cabin, CFD, hot-wire anemometry, vortical flow.

1283 Three Dimensional Large Eddy Simulation of Blood Flow and Deformation in an Elastic Constricted Artery

Authors: Xi Gu, Guan Heng Yeoh, Victoria Timchenko

Abstract:

In the current work, a three-dimensional geometry of a 75% stenosed blood vessel is analyzed. Large eddy simulation (LES) with a dynamic subgrid-scale Smagorinsky model is applied to model the turbulent pulsatile flow. The geometry, the transmural pressure and the properties of the blood and the elastic boundary were based on clinical measurement data. For the flexible wall model, a thin solid region is constructed around the 75% stenosed blood vessel. The deformation of this solid region was modelled as a deforming boundary to reduce the computational cost of the solid model. Fluid-structure interaction is realized via a two-way coupling between the blood flow modelled via LES and the deforming vessel. The flow pressure and wall motion information was exchanged continually during the cycle by an arbitrary Lagrangian-Eulerian method, with the boundary condition of the current time step depending on previous solutions. The fluctuation of the velocity in the post-stenotic region was analyzed in the study; the axial velocity at normalized position Z=0.5 shows a negative value near the vessel wall. The displacement of the elastic boundary was also examined; in particular, the wall displacements at systole and diastole were compared. The negative displacement at the stenosis indicates a collapse at the maximum velocity and during the deceleration phase.

Keywords: Large Eddy Simulation, Fluid Structural Interaction, Constricted Artery, Computational Fluid Dynamics.

1282 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis

Authors: Petr Gurný

Abstract:

One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. This paper discusses the possibility of determining a financial institution's PD using credit-scoring models, and it is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks; these models are then compared and verified on a control sample with a view to choosing the best one. The second part of the paper applies the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. To this end, the values of the particular indicators are sampled randomly and the PD distribution is estimated, under the assumption that the indicators are distributed according to a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all the banks are relatively healthy, there is still a high chance that "a financial crisis" will occur, at least in terms of probability; this is indicated by the various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
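
As a toy illustration of the credit-scoring idea (not the authors' estimated model), a logit maps a bank's financial indicators to a probability of default; the data and coefficients below are entirely synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (300, 4))              # stand-in for bank indicators
w_true = np.array([-1.5, 2.0, 0.5, -0.8])   # invented "true" coefficients
p = 1 / (1 + np.exp(-(X @ w_true - 1)))
y = (rng.random(300) < p).astype(int)       # 1 = default

model = LogisticRegression().fit(X, y)
pd_hat = model.predict_proba(X[:3])[:, 1]   # PD = P(default | indicators)
print(pd_hat.round(3))
```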

Keywords: Credit-scoring Models, Multidimensional Subordinated Lévy Model, Probability of Default.

1281 A New Multi-Target, Multi-Agent Search-and-Rescue Path Planning Approach

Authors: Jean Berger, Nassirou Lo, Martin Noel

Abstract:

Perfectly suited to natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangian relaxation of the integrality constraints. Should a target be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.

Keywords: Search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization.

1280 Curing Time Effect on Behavior of Cement Treated Marine Clay

Authors: H. W. Xiao, F. H. Lee

Abstract:

Cement stabilization has been widely used for improving the strength and stiffness of soft clayey soils. Cement treated soil specimens used to investigate stress-strain behaviour in laboratory studies are usually cured for 7 days. This paper examines the effects of curing time on the strength and stress-strain behaviour of cement treated marine clay under triaxial loading conditions. Laboratory-prepared cement treated Singapore marine clay with different mix proportions S-C-W (soil solid-cement solid-water) and curing times (7 days to 180 days) was investigated by conducting unconfined compressive strength tests and triaxial tests. The results show that curing time has a significant effect on the unconfined compressive strength q_u, the isotropic compression behaviour and the stress-strain behaviour. Although the primary yield loci of cement treated soil specimens with the same mix proportion expand with curing time, they are very narrowly banded and have nearly the same shape after being normalized by the isotropic compression primary yield stress p'_py. The isotropic compression primary yield stress p'_py was shown to be linearly related to the unconfined compressive strength q_u for specimens with different curing times and mix proportions. The effect of curing time on the hardening behaviour diminishes once the consolidation stress exceeds the isotropic compression primary yield stress, but its damping rate depends on the cement content.

Keywords: Cement treated soil, curing time effect, hardening behaviour, isotropic compression primary yield stress, unconfined compressive strength.

1279 Medical Image Segmentation Based On Vigorous Smoothing and Edge Detection Ideology

Authors: Jagadish H. Pujar, Pallavi S. Gurjal, Shambhavi D. S, Kiran S. Kunnur

Abstract:

Medical image segmentation based on image smoothing followed by edge detection assumes a great degree of importance in the field of image processing. This paper proposes a novel algorithm for medical image segmentation based on vigorous smoothing, achieved by identifying the type of noise, followed by an edge detection ideology, which promises to be a boon in medical image diagnosis. The main objective of the algorithm is to take a particular medical image as input, preprocess it to remove the noise content by employing a suitable filter chosen according to the identified noise type, and finally carry out edge detection for image segmentation. The algorithm consists of three parts. First, the type of noise present in the medical image is identified as additive, multiplicative or impulsive by analysis of local histograms, and the image is denoised by employing a Median, Gaussian or Frost filter accordingly. Second, edge detection of the filtered medical image is carried out using the Canny edge detection technique. Third, the edge-detected medical image is segmented by the method of Normalized Cut eigenvectors. The method is validated through experiments on real images, with the proposed algorithm simulated on the MATLAB platform. The simulation results show that the proposed algorithm is very effective: it can deal with low-quality or marginally vague images with high spatial redundancy, low contrast and substantial noise, and has potential for practical use in medical image diagnosis.
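
A condensed sketch of the pipeline using OpenCV: a crude extreme-pixel count stands in for the paper's local-histogram noise analysis, the Frost-filter branch for multiplicative noise is omitted, and the final Normalized Cut segmentation step is left out for brevity.

```python
import cv2
import numpy as np

def denoise_and_edges(img):
    """Pick a smoothing filter from a crude noise test, then run Canny.
    The extreme-pixel fraction below is a simplified stand-in for the
    paper's local-histogram analysis; the Frost branch for multiplicative
    (speckle) noise is omitted for brevity."""
    extremes = np.mean((img < 5) | (img > 250))
    if extremes > 0.02:                          # impulsive noise -> median
        smooth = cv2.medianBlur(img, 5)
    else:                                        # additive noise -> Gaussian
        smooth = cv2.GaussianBlur(img, (5, 5), 1.5)
    return cv2.Canny(smooth, 50, 150)

img = np.full((128, 128), 120, np.uint8)
cv2.circle(img, (64, 64), 30, 200, -1)           # synthetic bright region
u = np.random.default_rng(0).random(img.shape)
img[u < 0.02], img[u > 0.98] = 0, 255            # add salt-and-pepper noise
edges = denoise_and_edges(img)
print("edge pixels:", int((edges > 0).sum()))
```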

Keywords: Image Segmentation, Image smoothing, Edge Detection, Impulsive noise, Gaussian noise, Median filter, Canny edge, Eigen values, Eigen vector.

1278 Fingerprint Compression Using Contourlet Transform and Multistage Vector Quantization

Authors: S. Esakkirajan, T. Veerakumar, V. Senthil Murugan, R. Sudhakar

Abstract:

This paper presents a new fingerprint coding technique based on the contourlet transform and multistage vector quantization. Wavelets have shown their ability to represent natural images that contain smooth areas separated by edges; however, wavelets cannot efficiently exploit the fact that the edges usually found in fingerprints are smooth curves. This issue is addressed by directional transforms, known as contourlets, which have the property of preserving edges. The contourlet transform is an extension of the wavelet transform in two dimensions using nonseparable and directional filter banks. The computation and storage requirements are the major difficulty in implementing a vector quantizer: in the full-search algorithm, the computation and storage complexity is an exponential function of the number of bits used in quantizing each frame of spectral information, whereas the storage requirement of multistage vector quantization is much lower. The coefficients of the contourlet transform are quantized by multistage vector quantization, and the quantized coefficients are encoded by Huffman coding. The results obtained are tabulated and compared with existing wavelet-based techniques.
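
Multistage VQ itself is simple to sketch: each stage quantizes the residual left by the previous one, so storage grows linearly with the number of stages instead of exponentially with the bit budget. The codebook sizes and the random stand-in for contourlet coefficient vectors are our assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def multistage_vq(vectors, stages=3, codebook_size=16, seed=0):
    """Multistage vector quantization: each stage quantizes the residual
    of the previous stage, so total storage is stages * codebook_size
    codewords instead of codebook_size ** stages for full-search VQ."""
    residual = vectors.astype(float)
    codebooks, indices = [], []
    for s in range(stages):
        cb, idx = kmeans2(residual, codebook_size, seed=seed + s, minit="++")
        codebooks.append(cb)
        indices.append(idx)
        residual = residual - cb[idx]        # pass the residual onward
    return codebooks, indices

rng = np.random.default_rng(0)
coeffs = rng.normal(0, 1, (500, 8))          # stand-in for contourlet coefficient vectors
cbs, idxs = multistage_vq(coeffs)
recon = sum(cb[idx] for cb, idx in zip(cbs, idxs))
print("MSE:", float(np.mean((coeffs - recon) ** 2)))
```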

Keywords: Contourlet Transform, Directional Filter bank, Laplacian Pyramid, Multistage Vector Quantization

1277 Functional Lipids and Bioactive Compounds from Oil Rich Indigenous Seeds

Authors: Azza. S. Naik, S. S. Lele

Abstract:

The Indian subcontinent has a plethora of traditional medicine systems that provide promising solutions to lifestyle disorders in an 'all natural way'. Spices and oilseeds hold prominence in Indian cuisine; hence the focus of the current study was to evaluate the bioactive molecules from Linum usitatissimum (LU), Lepidium sativum (LS), Nigella sativa (NS) and Guizotia abyssinica (GA) seeds. The seeds were characterized for functional lipids such as omega-3 fatty acids, antioxidant capacity, phenolic compounds, dietary fiber and anti-nutritional factors. Analysis of the seeds by GC-MS revealed LU and LS to be rich sources of α-linolenic acid, an omega-3 fatty acid (41.85 ± 0.33% and 26.71 ± 0.63%, respectively). In the study of antioxidant potential, NS seeds demonstrated the highest antioxidant capacity (61.68 ± 0.21 TEAC/100 g DW), owing to phenolics and terpenes identified by mass spectral analysis. When screened for the anti-nutritional factor cyanogenic glycoside, LS seeds showed a content as high as 1674 ± 54 mg HCN/kg. GA is probably a good source of a stable vegetable oil (SFA:PUFA 1:2.3). The seeds showed diversified bioactive profiles, and hence further studies using different biomolecules in tandem for the development of a possible 'nutraceutical cocktail' have been initiated.

Keywords: antioxidants, bioactives, functional lipids and oilseeds

1276 The Emission Spectra Due to Exciton-Exciton Collisions in GaAs/AlGaAs Quantum Well System

Authors: Surendra K Pandey

Abstract:

Optical emission based on excitonic scattering processes becomes important in dense exciton systems, in which the average distance between excitons is of the order of a few Bohr radii but still below the exciton screening threshold. Phenomena due to interactions among excited states play a significant role in the emission near the band edge of the material. The theory of two-exciton collisions for GaAs/AlGaAs quantum well systems presented here is a modest attempt to understand the physics associated with the optical spectra due to excitonic scattering processes in these novel systems. The four typical processes considered give different spectral shapes, peak positions and temperature dependences of the emission spectra. We have used scattering theory together with second-order perturbation theory to derive the radiative power spontaneously emitted at an energy ħω by these processes. The results arrived at are purely qualitative in nature. The intensity of emitted light in quantum well systems varies inversely with the square of the temperature, whereas in bulk materials it simply decreases with temperature.

Keywords: Exciton-Exciton Collisions, Excitonic Scattering Processes, Interacting Excitonic States, Quantum Wells.
