Search results for: partitioned matrices

39 SLM Using Riemann Sequence Combined with DCT Transform for PAPR Reduction in OFDM Communication Systems

Authors: Pepin Magnangana Zoko Goyoro, Ibrahim James Moumouni, Sroy Abouty

Abstract:

Orthogonal Frequency Division Multiplexing (OFDM) is an efficient data transmission method for high-speed communication systems. Its main drawback, however, is a high Peak-to-Average Power Ratio (PAPR), which causes inefficient use of the high power amplifier and can limit transmission efficiency. Because an OFDM signal is the sum of a large number of independent subcarriers, its amplitude can exhibit very high peaks. In this paper, we propose an effective reduction scheme that combines the DCT and SLM techniques: the DCT is applied first, followed by SLM using the Riemann matrix to generate the phase sequences. Simulation results show that PAPR can be greatly reduced by the proposed scheme: whereas plain OFDM exhibited a PAPR of about 10.4 dB, the proposed method achieved a reduction of about 4.7 dB with low computational complexity. The approach also avoids randomness in phase-sequence selection, which simplifies decoding at the receiver. As an added benefit, the matrices can be regenerated at the receiver to recover the data signal, so no side information (SI) needs to be transmitted.
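
The sketch below illustrates the kind of pipeline the abstract describes, on toy data: QPSK subcarriers, DCT spreading, and SLM candidates whose binary phase sequences are taken from rows of a Riemann matrix (MATLAB gallery('riemann', n) convention). The construction details (8 candidates, the sign-based phase mapping, the separate DCT of real and imaginary parts) are our assumptions for illustration, not the authors' exact scheme.

```python
# Hedged toy sketch of DCT + SLM PAPR reduction with Riemann-matrix phases.
import numpy as np
from scipy.fft import dct

def riemann_matrix(n):
    """B(i,j) = i-1 if i divides j else -1, for i, j = 2..n+1 (gallery convention)."""
    idx = np.arange(2, n + 2)
    I, J = np.meshgrid(idx, idx, indexing="ij")
    return np.where(J % I == 0, I - 1, -1).astype(float)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
N = 64                                                         # subcarriers
sym = (rng.choice([1, -1], N) + 1j * rng.choice([1, -1], N)) / np.sqrt(2)   # QPSK

baseline = np.fft.ifft(sym) * np.sqrt(N)
print("plain OFDM PAPR  %.2f dB" % papr_db(baseline))

# DCT spreading of the real and imaginary parts (illustrative precoding step)
precoded = dct(sym.real, norm="ortho") + 1j * dct(sym.imag, norm="ortho")
R = riemann_matrix(N)
candidates = []
for u in range(8):                                             # 8 candidate phase sequences
    phases = np.where(R[u] > 0, 1.0, -1.0)                     # row mapped to a +/-1 sequence
    candidates.append(np.fft.ifft(precoded * phases) * np.sqrt(N))
best = min(candidates, key=papr_db)                            # SLM: keep lowest-PAPR candidate
print("DCT + SLM PAPR   %.2f dB" % papr_db(best))
```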

Keywords: DCT transform, OFDM, PAPR, Riemann matrix, SLM.

38 Soil Moisture Control System: A Product Development Approach

Authors: Swapneel U. Naphade, Dushyant A. Patil, Satyabodh M. Kulkarni

Abstract:

In this work, we propose the concept and geometrical design of a soil moisture control system (SMCS) module, following a product development approach, to develop an inexpensive, easy-to-use and quick-to-install product targeted at agriculture practitioners. The module delivers water to agricultural land efficiently by sensing the soil moisture and activating the delivery valve. We start by identifying the general needs of the potential customer. Then, based on customer needs, we establish product specifications and identify the important measurable quantities for evaluating our product. Keeping these specifications in mind, we develop several conceptual solutions of the product and select the best one through concept screening and selection matrices. We then develop the product architecture by integrating the subsystems into the final product. Finally, the geometric design is carried out using human factors engineering concepts such as heuristic analysis, task analysis, and human error reduction analysis. The human factors analysis reveals the remedies that should be applied when designing the geometry and software components of the product. We find that, for a power-type grip, a grip diameter of 35 mm is ideal in terms of comfort and applied force.

Keywords: Agriculture, human factors, product design, soil moisture control.

37 Optimal Tuning of Linear Quadratic Regulator Controller Using a Particle Swarm Optimization for Two-Rotor Aerodynamical System

Authors: Ayad Al-Mahturi, Herman Wahid

Abstract:

This paper presents an optimal state feedback controller based on the Linear Quadratic Regulator (LQR) for a two-rotor aero-dynamical system (TRAS). The TRAS is a highly nonlinear multi-input multi-output (MIMO) system with two degrees of freedom and cross coupling. Two parameters define the behavior of the LQR controller, the state weighting matrix and the control weighting matrix, and both influence its performance. Particle Swarm Optimization (PSO) is proposed to optimally tune the weighting matrices of the LQR. The major concern in using the LQR controller is to stabilize the TRAS by making the beam move quickly and accurately, whether tracking a trajectory or reaching a desired altitude. The simulations were carried out in MATLAB/Simulink, with the system decoupled into two single-input single-output (SISO) systems. Compared with the optimized proportional-integral-derivative (PID) controller provided by INTECO, the results show that the PSO-tuned LQR controller gives better performance in terms of both the transient and steady-state responses.
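
A hedged sketch of the tuning loop: a bare-bones PSO searches over the diagonal entries of Q and the scalar R, the Riccati equation is solved for each candidate gain, and an ITAE-style index of the regulated output is minimized. The 2-state plant below stands in for one decoupled TRAS axis and is an assumption, not the INTECO model.

```python
# Hedged sketch: PSO tuning of diagonal LQR weights on a generic 2-state plant.
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.signal import StateSpace, lsim

A = np.array([[0.0, 1.0], [-2.0, -0.5]])       # illustrative plant, not the TRAS model
B = np.array([[0.0], [1.0]])

def lqr_gain(q_diag, r):
    Q, R = np.diag(q_diag), np.array([[r]])
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)          # K = R^-1 B^T P

def cost(theta):                                # theta = [q1, q2, r] > 0
    K = lqr_gain(theta[:2], theta[2])
    cl = StateSpace(A - B @ K, B, np.array([[1.0, 0.0]]), np.zeros((1, 1)))
    t = np.linspace(0, 10, 500)
    _, y, _ = lsim(cl, U=np.zeros_like(t), T=t, X0=[1.0, 0.0])   # regulation from x0
    return float(np.sum(t * np.abs(y)) * (t[1] - t[0]))          # ITAE-style index

# bare-bones PSO over [q1, q2, r]
rng = np.random.default_rng(1)
n, dim = 20, 3
pos = rng.uniform(0.1, 100.0, (n, dim)); vel = np.zeros((n, dim))
pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pcost.argmin()]
for _ in range(30):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.1, 100.0)
    c = np.array([cost(p) for p in pos])
    better = c < pcost
    pbest[better], pcost[better] = pos[better], c[better]
    gbest = pbest[pcost.argmin()]
print("tuned [q1, q2, r]:", np.round(gbest, 2), " ITAE:", round(pcost.min(), 4))
```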

Keywords: Linear quadratic regulator, LQR controller, optimal control, particle swarm optimization, PSO, two-rotor aero-dynamical system, TRAS.

36 Static Headspace GC Method for Aldehydes Determination in Different Food Matrices

Authors: A. Mandić, M. Sakač, A. Mišan, B. Šojić, L. Petrović, I. Lončarević, B. Pajin, I. Sedej

Abstract:

Aldehydes, as secondary lipid oxidation products, are highly specific to the oxidative degradation of the particular polyunsaturated fatty acids present in foods. Gas chromatographic analysis of these volatile compounds has been widely used to monitor the deterioration of food products. A static headspace gas chromatography method with flame ionization detection (SHS-GC-FID) was developed and applied to monitor the aldehydes present in processed foods such as bakery, meat and confectionery products.

Five selected aldehydes were determined in samples without any sample preparation other than grinding of the bakery and meat products. SHS-GC analysis allows the separation of propanal, pentanal, hexanal, heptanal and octanal within 15 min. Aldehydes were quantified in fresh and stored samples; the obtained ranges were 1.62±0.05 – 9.95±0.05 mg/kg in crackers, 6.62±0.46 – 39.16±0.39 mg/kg in sausages, and 0.48±0.01 – 1.13±0.02 mg/kg in cocoa spread cream. From these results it can be concluded that the proposed method is suitable for different types of samples, and that the aldehyde content varies with the type of sample and differs between fresh and stored samples of the same type.

Keywords: Lipid oxidation, aldehydes, crackers, sausage, cocoa cream spread.

35 Flutter Analysis of Slender Beams with Variable Cross Sections Based on Integral Equation Formulation

Authors: Z. El Felsoufi, L. Azrar

Abstract:

This paper studies a mathematical model, based on integral equations, for the dynamic analysis and numerical investigation of non-uniform or multi-material composite beams. The beam is subjected to a sub-tangential follower force and rests on an elastic foundation. The boundary conditions are represented by generalized parameterized fixations using linear and rotary springs. A mathematical formulation based on Euler-Bernoulli beam theory is presented for beams with variable cross-sections. The non-uniform section introduces non-uniformity in the rigidity and inertia of the beam and, consequently, a more complicated governing equilibrium equation. Using the boundary element method and radial basis functions, the equation of motion is reduced to an algebro-differential system relating internal and boundary unknowns. Generalized formulas for the deflection, the slope, the moment and the shear force are presented. The free vibration of non-uniform loaded beams is formulated in compact matrix form and all needed matrices are explicitly given. The dynamic stability analysis of slender beams is illustrated numerically based on the coalescence criterion. A realistic case related to an industrial chimney is investigated.
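
For reference, a standard Euler-Bernoulli statement of the problem described here (our notation and a hedged simplification, not necessarily the authors' exact integral-equation formulation) is

```latex
% Transverse vibration of a variable-section beam on an elastic foundation
% under an axial (follower) force P -- hedged reference form only.
\frac{\partial^{2}}{\partial x^{2}}\!\left[ EI(x)\,\frac{\partial^{2} w}{\partial x^{2}} \right]
  + P\,\frac{\partial^{2} w}{\partial x^{2}}
  + k_{f}(x)\, w
  + \rho A(x)\,\frac{\partial^{2} w}{\partial t^{2}} = 0
```

where EI(x) and ρA(x) are the variable flexural rigidity and mass per unit length, k_f(x) the foundation modulus and P the axial follower force; the sub-tangentiality of the follower force and the linear/rotary end springs enter through the boundary conditions, and the integral-equation formulation recasts this differential statement in terms of influence (Green-type) functions.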

Keywords: Chimney, BEM and integral equation formulation, non-uniform cross section, vibration and flutter.

34 3D Liver Segmentation from CT Images Using a Level Set Method Based on a Shape and Intensity Distribution Prior

Authors: Nuseiba M. Altarawneh, Suhuai Luo, Brian Regan, Guijin Tang

Abstract:

Liver segmentation from medical images poses more challenges than analogous segmentations of other organs. This contribution introduces a liver segmentation method for a series of computed tomography images. Overall, we present a novel method for segmenting the liver by coupling density matching with shape priors. Density matching signifies a tracking method which operates by maximizing the Bhattacharyya similarity measure between the photometric distribution of an estimated image region and a model photometric distribution. Density matching controls the direction of the evolution process and slows down the evolving contour in regions with weak edges. The shape prior improves the robustness of density matching and discourages the evolving contour from crossing the liver's boundary in regions where the edges are weak. The model is implemented using a modified distance regularized level set (DRLS) model. The experimental results show that the method achieves satisfactory results. Compared with the original DRLS model, it is evident that the proposed model is more effective in addressing the over-segmentation problem. Finally, we gauge the performance of our model using metrics comprising accuracy, sensitivity, and specificity.
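
The driving quantity is easy to state in code. The sketch below computes the Bhattacharyya coefficient between the intensity histogram inside the current contour and a model histogram; the intensity values are synthetic stand-ins, and the coupling with the DRLS level set evolution is omitted.

```python
# Minimal sketch of the Bhattacharyya similarity used to drive the contour.
import numpy as np

def bhattacharyya(p, q, eps=1e-12):
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return np.sum(np.sqrt(p * q))            # 1.0 means identical distributions

rng = np.random.default_rng(0)
model_pixels  = rng.normal(120, 10, 5000)    # assumed liver intensity model
region_pixels = rng.normal(125, 12, 3000)    # intensities inside the evolving contour
bins = np.linspace(0, 255, 65)
p, _ = np.histogram(region_pixels, bins=bins)
q, _ = np.histogram(model_pixels, bins=bins)
print("Bhattacharyya coefficient: %.3f" % bhattacharyya(p.astype(float), q.astype(float)))
```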

Keywords: Bhattacharyya distance, distance regularized level set (DRLS) model, liver segmentation, level set method.

33 Stochastic Subspace Modelling of Turbulence

Authors: M. T. Sichani, B. J. Pedersen, S. R. K. Nielsen

Abstract:

Turbulence of the incoming wind field is of paramount importance to the dynamic response of civil engineering structures. Hence, reliable stochastic models of the turbulence should be available from which time series can be generated for dynamic response and structural safety analysis. In this paper, an empirical cross-spectral density function for the along-wind turbulence component over the wind field area is taken as the starting point. The spectrum is spatially discretized in terms of a Hermitian cross-spectral density matrix for the turbulence state vector, which turns out not to be positive definite. Since the succeeding state space and ARMA modelling of the turbulence rely on the positive definiteness of the cross-spectral density matrix, the non-positive definiteness of such matrices is first addressed and suitable treatments are proposed. From the adjusted positive definite cross-spectral density matrix, a frequency response matrix is constructed which determines the turbulence vector as a linear filtration of Gaussian white noise. Finally, an accurate state space modelling method is proposed which allows selection of an appropriate model order and, in a single stage, estimation of a state space model for the vector turbulence process incorporating its phase spectrum; its results are compared with a conventional ARMA modelling method.
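
One common repair for a non-positive-definite cross-spectral density matrix, sketched below on random data, is to clip negative eigenvalues of the Hermitian matrix and then factor the adjusted matrix so that realizations of the turbulence vector can be generated by filtering white noise; the paper's own adjustment and the subsequent state-space/ARMA fitting may differ.

```python
# Hedged sketch: eigenvalue clipping of a Hermitian cross-spectral matrix,
# then a Cholesky factor used as a white-noise shaping filter at one frequency.
import numpy as np

def make_psd(S, floor=0.0):
    S = 0.5 * (S + S.conj().T)               # enforce Hermitian symmetry
    w, V = np.linalg.eigh(S)
    w = np.clip(w, floor, None)               # remove negative eigenvalues
    return (V * w) @ V.conj().T

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
S = M + M.conj().T                            # Hermitian but typically indefinite
S_psd = make_psd(S)
H = np.linalg.cholesky(S_psd + 1e-10 * np.eye(4))      # shaping ("frequency response") factor
x = H @ (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(2)   # one realization
print("min eigenvalue after repair:", np.linalg.eigvalsh(S_psd).min())
```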

Keywords: Turbulence, wind turbine, complex coherence, state space modelling, ARMA modelling.

32 Compressed Sensing of Fetal Electrocardiogram Signals Based on Joint Block Multi-Orthogonal Least Squares Algorithm

Authors: Xiang Jianhong, Wang Cong, Wang Linyu

Abstract:

With the rise of medical IoT technologies, wireless body area networks (WBANs) can collect fetal electrocardiogram (FECG) signals to support telemedicine analysis. A compressed sensing (CS)-based WBAN system avoids sampling a large amount of redundant information and reduces the complexity and computing time of data processing, but existing algorithms have poor signal compression and reconstruction performance. In this paper, a joint block multi-orthogonal least squares (JBMOLS) algorithm is proposed. We apply the FECG signal to the joint block sparse model (JBSM), and a comparative study of sparse transforms and measurement matrices is carried out. An FECG signal compression and transmission mode based on the Rbio5.5 wavelet, a Bernoulli measurement matrix, and the JBMOLS algorithm is proposed to improve the compression and reconstruction performance of FECG signals in CS-based WBANs. Experimental results show that the compression ratio (CR) achievable for accurate reconstruction in this transmission mode is increased by nearly 10%, and the runtime is reduced by about 30%.
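
As a hedged illustration of the measurement side, the snippet below senses a synthetic sparse vector with a ±1 Bernoulli matrix and reconstructs it with plain orthogonal matching pursuit. This is a generic stand-in for the CS stage only; it is not the authors' JBMOLS algorithm, the joint block sparse model, or a real FECG record.

```python
# Hedged sketch: Bernoulli sensing matrix + generic OMP recovery (not JBMOLS).
import numpy as np

def omp(Phi, y, k):
    """Recover a k-sparse vector from y = Phi @ x via orthogonal matching pursuit."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                                  # ambient dim, measurements, sparsity
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # Bernoulli measurement matrix
y = Phi @ x
x_hat = omp(Phi, y, k)
print("relative reconstruction error: %.2e" % (np.linalg.norm(x - x_hat) / np.linalg.norm(x)))
```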

Keywords: Telemedicine, fetal electrocardiogram, compressed sensing, joint sparse reconstruction, block sparse signal.

31 Static and Dynamic Analysis of Hyperboloidal Helix Having Thin Walled Open and Closed Sections

Authors: Merve Ermis, Murat Yılmaz, Nihal Eratlı, Mehmet H. Omurtag

Abstract:

The static and dynamic analyses of a hyperboloidal helix having closed and open square box sections are investigated via a mixed finite element formulation based on Timoshenko beam theory. The Frenet triad is taken as the local coordinate system for the helix geometry. The helix domain is discretized with two-noded curved elements and linear shape functions. Each node of the curved element has 12 degrees of freedom, namely three translations, three rotations, two shear forces, one axial force, two bending moments and one torque. The finite element matrices are derived using exact nodal values of curvature and arc length, interpolated linearly along the element axis. The torsional moments of inertia for the closed and open square box sections are obtained by a finite element solution of the St. Venant torsion formulation. With the proposed method, the torsional rigidity of simply and multiply connected cross-sections can also be calculated in the same manner. The influence of the closed and open square box cross-sections on the static and dynamic behaviour of the hyperboloidal helix is investigated, and benchmark problems are presented for the literature.

Keywords: Hyperboloidal helix, squared cross section, thin walled cross section, torsional rigidity.

30 A Novel Approach for Coin Identification using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms

Authors: J. Prakash, K. Rajesh

Abstract:

In this paper we present a new method for coin identification. The proposed method adopts a hybrid scheme using the eigenvalues of the covariance matrix, the Circular Hough Transform (CHT) and Bresenham's circle algorithm. The statistical and geometrical properties of the small and large eigenvalues of the covariance matrix of a set of edge pixels over a connected region of support are exploited for circular object detection. A sparse matrix technique is used to perform the CHT; since sparse matrices squeeze out zero elements and store only a small number of non-zero elements, they save matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using a raster scan algorithm that exploits geometrical symmetry. After the circular objects are found, the proposed method uses the texture on the surface of the coins, characterized by textons (the fundamental micro-structures of generic natural images), as a property unique to each coin. The method has been tested on several real-world images including coin and non-coin images, and its performance is also evaluated in terms of noise-withstanding capability.
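
The covariance-eigenvalue cue is simple to reproduce. In the sketch below, edge pixels scattered around a circle give two nearly equal eigenvalues of the coordinate covariance matrix, while an elongated (line-like) edge set gives a large gap; the point sets are synthetic, and the CHT, neighborhood suppression and raster scan stages are not reproduced.

```python
# Hedged sketch of the covariance-eigenvalue cue for circular edge sets.
import numpy as np

def eig_ratio(points):
    cov = np.cov(points.T)                     # 2x2 covariance of (x, y) coordinates
    small, large = np.sort(np.linalg.eigvalsh(cov))
    return small / large                       # ~1 for circles, ~0 for lines

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 400)
circle = np.c_[30 + 12 * np.cos(theta), 40 + 12 * np.sin(theta)] + rng.normal(0, 0.3, (400, 2))
line = np.c_[np.linspace(0, 60, 400), 0.2 * np.linspace(0, 60, 400)] + rng.normal(0, 0.3, (400, 2))
print("circle eigenvalue ratio: %.2f" % eig_ratio(circle))   # close to 1
print("line   eigenvalue ratio: %.2f" % eig_ratio(line))     # close to 0
```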

Keywords: Circular Hough Transform, Coin detection, Covariance matrix, Eigenvalues, Raster scan Algorithm, Texton.

29 Accurate Visualization of Graphs of Functions of Two Real Variables

Authors: Zeitoun D. G., Thierry Dana-Picard

Abstract:

The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One constraint of such systems comes from the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least squares penalty method for automatic generation of a mesh that controls the behavior of the chosen function is presented. The special feature of this method is its ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The method is formulated as a minimax problem, and the non-uniform mesh is generated using an iterative algorithm. Results show that, for large poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.

Keywords: Function singularities, mesh generation, point allocation, visualization, collocation least squares method, augmented Lagrangian method, Uzawa's algorithm, preconditioned conjugate gradient method.

28 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Mixed Integration Method: Stability Aspects and Computational Efficiency

Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino

Abstract:

In order to reduce the numerical computations in the nonlinear dynamic analysis of seismically base-isolated structures, a Mixed Explicit-Implicit time integration Method (MEIM) has been proposed. By adopting the explicit, conditionally stable central difference method to compute the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method to determine the linear response of the superstructure, the proposed MEIM, which is conditionally stable due to the use of the central difference method, avoids the iterative procedure generally required by conventional monolithic solution approaches within each time step of the analysis. The main aim of this paper is to investigate the stability and computational efficiency of the MEIM when it is employed to perform the nonlinear time history analysis of base-isolated structures with sliding bearings; in this case, the critical time step could become smaller than the one needed to define the earthquake excitation accurately, because of the very high initial stiffness of such devices. The numerical results obtained from nonlinear dynamic analyses of a base-isolated structure with a friction pendulum bearing system, performed using the proposed MEIM, are compared to those obtained with a conventional monolithic solution approach, i.e. the implicit, unconditionally stable Newmark constant average acceleration method employed in conjunction with the iterative pseudo-force procedure. According to the numerical results, in the presented application the MEIM shows no stability problems, since the critical time step remains larger than the ground acceleration time step despite the high initial stiffness of the friction pendulum bearings. In addition, compared to the conventional monolithic solution approach, the proposed algorithm preserves its computational efficiency even when it is adopted to perform the nonlinear dynamic analysis with a smaller time step.

Keywords: Base isolation, computational efficiency, mixed explicit-implicit method, partitioned solution approach, stability.

27 Describing the Fine Electronic Structure and Predicting Properties of Materials with ATOMIC MATTERS Computation System

Authors: Rafal Michalski, Jakub Zygadlo

Abstract:

We present the concept, scientific methods and algorithms of our computation system called ATOMIC MATTERS. This is the first presentation of the new computer package, which allows its user to describe the physical properties of atomic, localized electron systems subject to electromagnetic interactions. Our solution applies to situations where an unclosed 2p/3p/3d/4d/5d/4f/5f electron subshell interacts with an electrostatic potential of definable symmetry and an external magnetic field. Our methods are based on the Crystal Electric Field (CEF) approach, which takes into consideration the electrostatic ligand field as well as the magnetic Zeeman effect. The application allows us to predict macroscopic properties of materials, such as magnetic, spectral and calorimetric properties, from the physical properties of their fine electronic structure. We emphasize the importance of the symmetry of the charge surroundings of the atom/ion, of spin-orbit interactions (spin-orbit coupling), and of the use of complex-number matrices in the definition of the Hamiltonian. The calculation methods, algorithms and convention recalculation tools collected in ATOMIC MATTERS were chosen to permit the prediction of magnetic and spectral properties of materials in isostructural series.

Keywords: Atomic matters, crystal electric field, spin-orbit coupling, localized states, electron subshell, fine electronic structure.

26 Innovative Activity and Development: Analyzing Firm Data from Eurozone Country-Members

Authors: Ilias A. Makris

Abstract:

In this work, we attempt to associate firm characteristics with innovative activity. We collect microdata from listed firms of selected Eurozone country-members, after the onset of the 2007 financial crisis. Following the literature, several indicators of growth and performance were selected and tested for their ability to explain innovative activity. The main scope is to examine possible differences in performance and growth between innovative and non-innovative firms during a severe recession. In addition, a special focus is placed on whether macroeconomic performance and the national innovation system determine the extent of innovators' performance. Preliminary findings, through correlation matrices and non-parametric tests, strongly indicate a positive relation between innovative activity and most of the measures used (profitability, size, employment), confirming that even during a recessionary period innovative firms not only survive but also seem to achieve better economic results in almost all indexes relative to non-innovative firms. However, even though innovators seem to perform better in all economies examined, the extent of that performance appears to be strongly affected by the supportive mechanisms (financial and structural) that their country provides. Thus, it is clear that the technology-intensive 'gap' between the European South and North deepened dramatically during the economic crisis, due to the harsh austerity measures and reduced budgets in the southern countries, even in sectors with high potential for economic activity and employment, compounding the effects of the crisis and reinforcing the vicious circle of recession.
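
A minimal sketch of the statistical machinery mentioned above, on synthetic firm data: a Mann-Whitney U test comparing a performance ratio between innovative and non-innovative firms, plus a Spearman rank correlation. The variable names and figures are illustrative only, not the study's microdata.

```python
# Hedged sketch of a non-parametric comparison between two firm groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
innovative     = rng.normal(0.08, 0.05, 120)   # toy profitability ratios
non_innovative = rng.normal(0.05, 0.05, 150)

u, p = stats.mannwhitneyu(innovative, non_innovative, alternative="greater")
print("Mann-Whitney U = %.0f, one-sided p = %.4f" % (u, p))

rnd_spend = rng.normal(1.0, 0.3, 120)          # toy innovation-intensity proxy
rho, p_rho = stats.spearmanr(rnd_spend, innovative)
print("Spearman rho = %.2f (p = %.3f)" % (rho, p_rho))
```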

Keywords: Eurozone, innovative activity, development, firm performance, non-parametric tests.

25 An Approximate Lateral-Torsional Buckling Mode Function for Cantilever I-Beams

Authors: H. Ozbasaran

Abstract:

Lateral-torsional buckling is a global buckling mode which should be considered in the design of slender structural members under flexure about their strong axis. The load which causes lateral-torsional buckling of a beam can be computed by finite element analysis; however, closed-form equations, which can be obtained by the energy method, are needed in engineering practice for ease of calculation. In applications of the energy method to lateral-torsional buckling, a proper function for the critical buckling mode should be chosen, which can be thought of as the variation of the twisting angle along the buckled beam. The accuracy of the results depends on how close the chosen function is to the exact mode. Since the critical lateral-torsional buckling mode of cantilever I-beams varies with material properties, section properties and loading case, the hardest step in applying the energy method is determining a proper mode function. This paper presents an approximate function for the critical lateral-torsional buckling mode of doubly symmetric cantilever I-beams. Coefficient matrices are calculated for a concentrated load at the free end, a uniformly distributed load, and a constant moment along the beam. The critical lateral-torsional buckling modes obtained with the presented function are compared to exact solutions, and it is found that they coincide with the differential equation solutions for the considered loading cases.

Keywords: Buckling mode, cantilever, lateral-torsional buckling, I-beam.

24 Mercerization Treatment Parameter Effect on Natural Fiber Reinforced Polymer Matrix Composite: A Brief Review

Authors: Mohd Yussni Hashim, Mohd Nazrul Roslan, Azriszul Mohd Amin, Ahmad Mujahid Ahmad Zaidi, Saparudin Ariffin

Abstract:

Environmental awareness and the depletion of petroleum resources are among the vital factors motivating a number of researchers to explore the potential of reusing natural fibers as alternative composite materials in industries such as packaging, automotive and building construction. Natural fibers are abundant, low in cost, yield lightweight polymer composites and, most importantly, are biodegradable, which is why such composites are often called "eco-friendly" materials. However, their applications are still limited by several factors, such as moisture absorption, poor wettability and large scatter in mechanical properties. Among the main challenges of natural fiber reinforced matrix composites is the fibers' inclination to entangle and form agglomerates during processing due to fiber-fiber interaction. This tends to prevent good dispersion of the fibers in the matrix, resulting in poor interfacial adhesion between the hydrophobic matrix and the hydrophilic reinforcing natural fiber. To overcome this challenge, fiber treatment is a common alternative that can be used to modify the fiber surface topology chemically, physically or mechanically. This paper focuses on the effect of mercerization treatment on the mechanical property enhancement of natural fiber reinforced composites, or so-called biocomposites. It specifically discusses mercerization parameters and the resulting enhancement of the mechanical properties of natural fiber reinforced composites.

Keywords: Mercerization treatment, mechanical properties, natural fiber, bio composite.

23 A Product Development for Green Logistics Model by Integrated Evaluation of Design and Manufacturing and Green Supply Chain

Authors: Yuan-Jye Tseng, Yen-Jung Wang

Abstract:

A product development for green logistics model using the fuzzy analytic network process method is presented for evaluating the relationships among the product design, the manufacturing activities, and the green supply chain. In the product development stage, there can be alternative ways to design the detailed components to satisfy the design concept and product requirement. In different design alternative cases, the manufacturing activities can be different. In addition, the manufacturing activities can affect the green supply chain of the components and product. In this research, a fuzzy analytic network process evaluation model is presented for evaluating the criteria in product design, manufacturing activities, and green supply chain. The comparison matrices for evaluating the criteria among the three groups are established. The total relational values between the three groups represent the relationships and effects. In application, the total relational values can be used to evaluate the design alternative cases for decision-making to select a suitable design case and the green supply chain. In this presentation, an example product is illustrated. It shows that the model is useful for integrated evaluation of design and manufacturing and green supply chain for the purpose of product development for green logistics.
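
The core numerical step behind such an evaluation can be illustrated with a crisp (non-fuzzy) pairwise comparison matrix: priority weights are taken from the principal eigenvector and a consistency ratio is checked. The judgments in the matrix below are made up for illustration and are not from the paper.

```python
# Hedged sketch of an AHP/ANP priority-vector computation from one comparison matrix.
import numpy as np

A = np.array([[1.0,  3.0, 5.0],     # product design vs. manufacturing vs. green supply chain
              [1/3., 1.0, 2.0],     # (illustrative judgments only)
              [1/5., 1/2., 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                                     # priority weights
lam_max = vals[k].real
ci = (lam_max - len(A)) / (len(A) - 1)           # consistency index
cr = ci / 0.58                                   # Saaty random index for n = 3
print("weights:", np.round(w, 3), " CR = %.3f" % cr)
```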

Keywords: Supply chain management, green supply chain, product development for logistics, fuzzy analytic network process.

22 Polymer Modification of Fine Grained Concretes Used in Textile Reinforced Cementitious Composites

Authors: Esma Gizem Daskiran, Mehmet Mustafa Daskiran, Mustafa Gencoglu

Abstract:

Textile reinforced cementitious composite (TRCC) is a composite material in which textile and a fine-grained concrete matrix are used in combination. These matrices offer high-performance properties in many respects. To achieve high performance, polymer-modified fine-grained concretes with high flexural strength were used as the matrix material. In this study, ten latex polymers and ten powder polymers were added to fine-grained concrete mixtures at different rates relative to the binder weight. Mechanical properties such as compressive and flexural strength were studied. The results showed that latex-modified and redispersible-powder-modified fine-grained concretes exhibit different mechanical performance. A wide range of both latex and redispersible powder polymers was studied. As the addition rate increased, the compressive strength decreased for all mixtures. The flexural strength increased with the addition rate, but a significant enhancement was not observed for all mixtures.

Keywords: Textile reinforced composite, cement, fine grained concrete, latex, redispersible powder.

21 Web Proxy Detection via Bipartite Graphs and One-Mode Projections

Authors: Zhipeng Chen, Peng Zhang, Qingyun Liu, Li Guo

Abstract:

With the Internet becoming the dominant channel for business and everyday life, a growing number of IPs are masked by web proxies for illegal purposes such as propagating malware, impersonating phishing pages to steal sensitive data, or redirecting victims to other malicious targets. Moreover, as Internet traffic continues to grow in size and complexity, detecting proxy services has become increasingly challenging due to their dynamic updates and high anonymity. In this paper, we present an approach based on behavioral graph analysis to study the behavior similarity of web proxy users. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of these bipartite graphs for discovering the social-behavior similarity of web proxy users. Based on the similarity matrices of end-users from the derived one-mode projection graphs, we apply a simple yet effective spectral clustering algorithm to discover the inherent behavior clusters of web proxy users. The URL of a web proxy may vary from time to time, but the underlying interest does not. Based on this intuition, using our private tools implemented with WebDriver, we examine whether the top URLs visited by web proxy users are web proxies. Our experimental results on real datasets show that the behavior clusters not only reduce the number of URLs to analyze but also provide an effective way to detect web proxies, especially unknown ones.
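
A compact sketch of the graph pipeline on toy traffic: a client-by-URL incidence matrix, its one-mode projection onto clients (counting shared URLs), and spectral clustering of the resulting similarity matrix. The data, cluster count and scikit-learn clustering choice are assumptions; the WebDriver verification stage is not reproduced.

```python
# Hedged sketch: bipartite incidence -> one-mode projection -> spectral clustering.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
n_clients, n_urls = 12, 30
B = np.zeros((n_clients, n_urls))
B[:6, rng.choice(n_urls, 8, replace=False)] = 1       # group visiting proxy-like URLs
B[6:, rng.choice(n_urls, 8, replace=False)] = 1       # group with ordinary browsing
B += (rng.random((n_clients, n_urls)) < 0.05)          # background noise visits

S = B @ B.T                                            # one-mode projection: shared URLs per client pair
np.fill_diagonal(S, 0)
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(S)
print("behavior clusters:", labels)
```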

Keywords: Bipartite graph, clustering, one-mode projection, web proxy detection.

20 Environmental Impact Assessment of Gotvand Hydro-Electric Dam on the Karoon River Using the ICOLD Technique

Authors: A. Sayadi, A. Khodadadi D., S. Partani

Abstract:

Today, Environmental Impact Assessment (EIA) is known as one of the most important tools for decision makers in the construction of civil and industrial projects aimed at sustainable development. In the past, projects were evaluated based on cost-benefit analysis, regardless of the physical and biological environmental effects and the socio-economic impacts. According to the regulations of the Department of Environment (DOE) of Iran, the construction of hydroelectric dams is an activity that requires an EIA report. In this paper, the environmental impact assessment of the Gotvand hydro-electric dam is evaluated for three groups of environmental elements: biological, physical-chemical and cultural. This dam, with a reservoir volume of 4500 MCM, is one of the largest dams in Iran and is going to be the last dam on the Karoon River in the south of Iran. The ICOLD (International Commission on Large Dams) technique was employed for the environmental impact assessment of the dam. The study covers the socio-economic and environmental effects of the dam during construction and operation, and environmental management, monitoring and mitigation of negative impacts are analyzed. In this project, the results led to the use of techniques to mitigate the destructive impacts on biological aspects, which also include long-term effects, whereas the impacts on physical aspects are mostly temporary and negative and can be restored and rehabilitated through natural processes over the long term during the operation period.

Keywords: "Gotvand Hydro Electric Dam", "EIA", "ICOLD and Leopold matrices"

19 Qualitative and Quantitative Analyses of Phytochemicals and Antioxidant Activity of Ficus sagittifolia (Warburg Ex Mildbread and Burret)

Authors: Taiwo O. Margaret, Olaoluwa O. Olaoluwa

Abstract:

The Moraceae family has immense phytochemical constituents and significant pharmacological properties, and hence great medicinal value. The aim of this study was to screen and quantify phytochemicals, as well as to assess the antioxidant activities, of the leaf and stem bark extracts and fractions (crude ethanol extracts, n-hexane, ethyl acetate and aqueous ethanol fractions) of Ficus sagittifolia. Leaf and stem bark of F. sagittifolia were extracted by the maceration method using ethanol to give the ethanol crude extract, which was then partitioned with n-hexane and ethyl acetate to give the respective fractions. All the extracts were screened for their phytochemicals using standard methods. The total phenolic, flavonoid, tannin and saponin contents and the antioxidant activity were determined by spectrophotometric methods, while the alkaloid content was evaluated by a titrimetric method. The amount of total phenolics in the extracts and fractions was estimated in comparison to gallic acid, whereas total flavonoids, tannins and saponins were estimated relative to quercetin, tannic acid and saponin, respectively. The 2,2-diphenylpicrylhydrazyl radical (DPPH) and phosphomolybdate methods were used to evaluate the antioxidant activities of the leaf and stem bark of F. sagittifolia. Phytochemical screening revealed the presence of flavonoids, saponins, terpenoids/steroids and alkaloids in both the leaf and stem bark extracts of F. sagittifolia. The phenolic content of F. sagittifolia was most abundant in the leaf ethanol crude extract, at 3.53 ± 0.03 mg/g equivalent of gallic acid. Total flavonoid and tannin contents were highest in the stem bark aqueous ethanol fraction of F. sagittifolia, estimated as 3.41 ± 0.08 mg/g equivalent of quercetin and 1.52 ± 0.05 mg/g equivalent of tannic acid, respectively. The hexane leaf fraction of F. sagittifolia had the highest saponin and alkaloid contents, at 5.10 ± 0.48 mg/g equivalent of saponins and 0.171 ± 0.39 g of alkaloids. The leaf aqueous ethanol fraction of F. sagittifolia showed high antioxidant activity (IC50 value of 63.092 µg/mL) by the DPPH method, and the stem ethanol crude extract (227.43 ± 0.78 mg/g equivalent of ascorbic acid) by the phosphomolybdate method; the least active in both methods was the stem hexane fraction (313.32 µg/mL; 16.21 ± 1.30 mg/g equivalent of ascorbic acid). The presence of these phytochemicals in the leaf and stem bark of F. sagittifolia is responsible for their therapeutic importance as well as their ability to scavenge free radicals in living systems.

Keywords: Antioxidant activity, Ficus sagittifolia, Moraceae, phytochemicals.

18 A State Aggregation Approach to Singularly Perturbed Markov Reward Processes

Authors: Dali Zhang, Baoqun Yin, Hongsheng Xi

Abstract:

In this paper, we propose a single-sample-path-based algorithm with state aggregation to optimize the average reward of singularly perturbed Markov reward processes (SPMRPs) with large-scale state spaces. It is assumed that such a reward process depends on a set of parameters. Unlike other kinds of Markov chains, SPMRPs have their own hierarchical structure; based on this special structure, our algorithm can alleviate the computational load of the performance optimization. Moreover, our method can be applied online because it evolves with the simulated sample path. Compared with the original algorithms applied to these problems for general MRPs, a new gradient formula for the average reward performance metric in SPMRPs is introduced and proved in the Appendix. Based on these gradients, the schedule of the iteration algorithm, which relies on a single sample path, is presented. A special case in which the parameters only dominate the disturbance matrices is then analyzed, and a precise comparison is made between our algorithm and existing algorithms aimed at solving these problems for general Markov reward processes; when applied to SPMRPs, our method converges faster in these cases. Furthermore, to illustrate the practical value of SPMRPs, a simple example of multiprogramming in computer systems is presented and simulated, and the physical meaning of SPMRPs in networks of queues is clarified with respect to a practical model.

Keywords: Singularly perturbed Markov processes, Gradient of average reward, Differential reward, State aggregation, Perturbed closed network.

17 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique

Authors: Satyasen Panda, Urmila Bhanja

Abstract:

In this paper, we present and analyze three-dimensional (3-D) wavelength/time/space code matrices for optical code division multiple access (OCDMA) networks with a NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3-D OCDMA system is based on measurement of the signal-to-noise ratio (SNR), BER and eye diagram for different numbers of simultaneous users; various types of noise and multiple access interference (MAI) effects were considered in the analysis. The results obtained with the NAND detection technique were compared with those obtained with OR and AND subtraction techniques. The comparison proved that the NAND detection technique with the 3-D MQC/MP code can accommodate a larger number of simultaneous users over longer fiber distances with minimum BER, compared to the OR and AND subtraction techniques. The received optical power is also measured at various BER levels to analyze the effect of attenuation.

Keywords: Cross correlation, three-dimensional optical code division multiple access, spectral amplitude coding optical code division multiple access, multiple access interference, phase induced intensity noise, three-dimensional modified quadratic congruence/modified prime code.

16 Adaptive Shape Parameter (ASP) Technique for Local Radial Basis Functions (RBFs) and Their Application for Solution of Navier-Stokes Equations

Authors: A. Javed, K. Djidjeli, J. T. Xing

Abstract:

The concept of adaptive shape parameters (ASP) is presented for the solution of the incompressible Navier-Stokes equations using mesh-free local radial basis functions (RBFs). The aim is to avoid ill-conditioning of the coefficient matrices of the RBF weights and inaccuracies in RBF interpolation resulting from non-optimized shapes of the basis functions when the data points (or nodes) are not distributed uniformly throughout the domain. Unlike conventional approaches, which assume globally similar values of the RBF shape parameter, the presented ASP technique calculates the shape parameter for each data point (or node) individually, based on the distribution of data points within its own influence domain. This ensures interpolation accuracy while still maintaining a well-conditioned system of equations for the RBF weights. The performance and accuracy of the ASP technique have been tested by evaluating derivatives and the Laplacian of a known function using RBFs in finite difference mode (RBF-FD), with and without adaptivity of the shape parameters. The application of adaptive shape parameters to the solution of the incompressible Navier-Stokes equations is demonstrated by solving the lid-driven cavity flow problem on a mesh-free domain using RBF-FD. The results are compared for fixed and adaptive shape parameters. Improved accuracy is achieved with the use of ASP in RBF-FD, especially in regions where large gradients of the field variables exist.
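
A minimal RBF-FD sketch of the adaptive-shape-parameter idea: for each stencil, the Gaussian shape parameter is tied to the local node spacing rather than a single global value, and the Laplacian weights follow from the local interpolation system. The scattered nodes, the test function and the simple "one over mean spacing" rule are illustrative assumptions, not the paper's exact ASP formula.

```python
# Hedged sketch: RBF-FD Laplacian weights with a per-stencil (adaptive) shape parameter.
import numpy as np

def rbf_fd_laplacian_weights(center, neighbors, c=1.0):
    rc = np.linalg.norm(neighbors - center, axis=1)       # distances to the stencil center
    eps = c / rc.mean()                                    # adaptive (local) shape parameter
    r = np.linalg.norm(neighbors[:, None] - neighbors[None, :], axis=2)
    A = np.exp(-(eps * r) ** 2)                            # Gaussian RBF interpolation matrix
    L = (4 * eps**4 * rc**2 - 4 * eps**2) * np.exp(-(eps * rc) ** 2)  # 2-D Laplacian of the RBF at the center
    return np.linalg.lstsq(A, L, rcond=None)[0]            # weights w with Lap(u)(center) ~ w . u(neighbors)

rng = np.random.default_rng(0)
nodes = rng.uniform(-1, 1, (300, 2))                       # scattered (non-uniform) nodes
f = lambda p: np.sin(np.pi * p[:, 0]) * np.cos(np.pi * p[:, 1])
lap_exact = lambda p: -2 * np.pi**2 * f(p)

i = 0                                                      # evaluate at the first node
dist = np.linalg.norm(nodes - nodes[i], axis=1)
nbrs = nodes[np.argsort(dist)[:15]]                        # 15 nearest nodes (stencil includes the node itself)
w = rbf_fd_laplacian_weights(nodes[i], nbrs)
print("RBF-FD Laplacian %.3f   exact %.3f" % (w @ f(nbrs), lap_exact(nodes[i:i + 1])[0]))
```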

Keywords: CFD, meshless particle method, radial basis functions, shape parameters.

15 The Direct and Indirect Effects of the Achievement Motivation on Nurturing Intellectual Giftedness

Authors: Al-Shabatat, M. Ahmad, Abbas, M., Ismail, H. Nizam

Abstract:

Achievement motivation is believed to promote giftedness, attracting people to invest in many programs for gifted students that provide them with challenging activities. Intellectual giftedness is founded on fluid intelligence and extends to more specific abilities through growth and input from achievement motivation. Acknowledging the roles played by motivation in the development of giftedness leads to more effective nurturing of gifted individuals. However, no study has investigated the direct and indirect effects of achievement motivation and fluid intelligence on intellectual giftedness. Thus, this study investigated the contribution of motivational factors to giftedness development by conducting tests of fluid intelligence using the Cattell Culture Fair Test (CCFT), tests of analytical abilities using culture-reduced test items covering problem solving, pattern recognition, audio-logic, audio-matrices and artificial language, and a self-report questionnaire for the motivational factors. A sample of 180 high-scoring students was selected using the CCFT from a leading university in Malaysia. Structural equation modeling was employed, using Amos V.16, to determine the direct and indirect effects of achievement motivation factors (self-confidence, success, perseverance, competition, autonomy, responsibility, ambition, and locus of control) on intellectual giftedness. The findings showed that the hypothesized model fitted the data, supporting the model postulates, and showed significant, strong direct and indirect effects of motivation and fluid intelligence on intellectual giftedness.

Keywords: Achievement motivation, intellectual giftedness, fluid intelligence, analytical giftedness, CCFT, structural equation modeling.

14 Automatic Removal of Ocular Artifacts using JADE Algorithm and Neural Network

Authors: V Krishnaveni, S Jayaraman, A Gunasekaran, K Ramadoss

Abstract:

The electroencephalogram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong electrooculogram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making its visual or automated inspection difficult. The goal of ocular artifact removal is to remove the ocular artifacts from the recorded EEG, leaving the underlying background signals due to brain activity. In recent times, Independent Component Analysis (ICA) algorithms have demonstrated superior potential for obtaining the least dependent source components. In this paper, the independent components are obtained using the JADE algorithm (the best separating algorithm) and are classified as either artifact components or neural components. A neural network is used for the classification of the obtained independent components. Neural networks require input features that accurately represent the true character of the input signals, so that the network can classify the signals based on the key characteristics that differentiate between various signals. In this work, autoregressive (AR) coefficients are used as the input features for classification. Two neural network approaches are used to learn classification rules from EEG data: first, a polynomial neural network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm, and second, a feed-forward neural network classifier trained by a standard back-propagation algorithm. The results show that JADE-FNN performs better than JADE-PNN.
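
A hedged sketch of the two stages on synthetic data: ICA unmixing (scikit-learn's FastICA used as a stand-in, since JADE is not part of scikit-learn) followed by Yule-Walker AR coefficients computed per independent component. In the paper these AR features feed the PNN or feed-forward network that labels each component as ocular or neural.

```python
# Hedged sketch: ICA unmixing (FastICA stand-in for JADE) + AR-coefficient features.
import numpy as np
from sklearn.decomposition import FastICA

def ar_coeffs(x, order=6):
    """Yule-Walker estimate of AR coefficients, used as classification features."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order] / len(x)
    R = np.array([[acf[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, acf[1:order + 1])

rng = np.random.default_rng(0)
t = np.arange(2000) / 250.0                                   # 8 s at 250 Hz
brain = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)        # alpha-like source
blink = np.exp(-(t[:, None] - np.array([2.0, 5.5])) ** 2 / 0.01).sum(axis=1) * 8.0  # EOG bursts
X = np.c_[brain + 0.9 * blink, brain + 0.2 * blink, brain + 0.05 * blink]  # three "channels"

sources = FastICA(n_components=3, random_state=0).fit_transform(X)
for k in range(3):
    print("IC%d AR coefficients:" % k, np.round(ar_coeffs(sources[:, k]), 2))
```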

Keywords: Auto Regressive (AR) Coefficients, Feed Forward Neural Network (FNN), Joint Approximation Diagonalisation of Eigen matrices (JADE) Algorithm, Polynomial Neural Network (PNN).

13 Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction

Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal

Abstract:

The noise requirements for naval and research vessels have seen an increasing demand for quieter ships in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in a closed test section is interesting for analyzing hydrodynamic performance but can involve significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise, which strongly reduce the signal-to-noise ratio. It is proposed to estimate the reflection coefficients using an inverse method and reference transfer functions measured in the tunnel. This approach reduces the uncertainties of the propagation model used in the inverse problem. In order to reduce the boundary layer noise, a cleaning algorithm taking advantage of the low-rank and sparse structure of the cross-spectral matrices of the acoustic and boundary layer noise is presented. This approach allows the acoustic signal to be recovered even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from the beamforming and DAMAS algorithms.

Keywords: Acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation.

12 Replicating Brain’s Resting State Functional Connectivity Network Using a Multi-Factor Hub-Based Model

Authors: B. L. Ho, L. Shi, D. F. Wang, V. C. T. Mok

Abstract:

The brain's functional connectivity, while temporally non-stationary, expresses consistency at a macro spatial level. The study of stable resting-state connectivity patterns hence provides opportunities for the identification of diseases if such stability is severely perturbed. A mathematical model replicating the brain's spatial connections is useful for understanding the brain's representative geometry and complements the empirical model where it falls short. Empirical computations tend to involve large matrices and become infeasible with fine parcellation; the proposed analytical model has no such computational problems. To improve replicability, data from 92 subjects are obtained from two open sources. The proposed methodology, inspired by financial theory, uses multivariate regression to find relationships of every cortical region of interest (ROI) with a set of pre-identified hubs, which act as representatives of the entire cortical surface. A variance-covariance framework of all ROIs is then built from these relationships to link up all the ROIs. The result is a high level of match between model and empirical correlations, in the range of 0.59 to 0.66 after adjusting for sample size, an increase of almost forty percent. More significantly, the model framework provides an intuitive way to delineate systemic drivers from idiosyncratic noise while reducing the dimensionality by more than 30-fold, hence providing a way to conduct attribution analysis. Due to its analytical nature and simple structure, the model is useful as a standalone toolkit for network dependency analysis or as a module in other mathematical models.
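
The hub-based construction can be sketched in a few lines: regress every ROI time series on a small set of hub series, then rebuild the full ROI-by-ROI covariance from the hub covariance plus the idiosyncratic residual variances. The synthetic data, the number of hubs and the least-squares estimator below are assumptions; the study's subject data and hub selection are not reproduced.

```python
# Hedged sketch: hub regression -> systemic + idiosyncratic covariance model.
import numpy as np

rng = np.random.default_rng(0)
T, n_roi, n_hub = 200, 40, 5
H = rng.normal(size=(T, n_hub))                        # hub time series
B_true = rng.normal(scale=0.8, size=(n_hub, n_roi))    # true loadings (unknown in practice)
X = H @ B_true + 0.5 * rng.normal(size=(T, n_roi))     # ROI time series

beta, *_ = np.linalg.lstsq(H, X, rcond=None)           # multivariate regression on hubs
resid = X - H @ beta
Sigma_hub = np.cov(H, rowvar=False)
Sigma_model = beta.T @ Sigma_hub @ beta + np.diag(resid.var(axis=0))   # systemic + idiosyncratic

corr = lambda S: S / np.sqrt(np.outer(np.diag(S), np.diag(S)))
empirical, model = corr(np.cov(X, rowvar=False)), corr(Sigma_model)
iu = np.triu_indices(n_roi, 1)
print("model vs. empirical correlation match: r = %.2f"
      % np.corrcoef(empirical[iu], model[iu])[0, 1])
```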

Keywords: Functional magnetic resonance imaging, multivariate regression, network hubs, resting state functional connectivity.

11 Data Hiding in Images in Discrete Wavelet Domain Using PMM

Authors: Souvik Bhattacharyya, Gautam Sanyal

Abstract:

Over the last two decades, owing to the hostile environment of the Internet, concerns about the confidentiality of information have increased at a phenomenal rate. Therefore, to safeguard information from attacks, a number of data/information hiding methods have evolved, mostly in the spatial and transform domains. In spatial-domain data hiding techniques, the information is embedded directly in the image plane itself. In transform-domain techniques, the image is first changed from the spatial domain to some other domain and the secret information is then embedded, so that it remains more secure against attack. Information hiding algorithms in the time or spatial domain have high capacity but relatively low robustness; in contrast, algorithms in transform domains such as the DCT and DWT have a certain robustness against some multimedia processing. In this work, the authors propose a novel steganographic method for hiding information in the transform domain of a grey-scale image. The proposed approach converts the grey-level image to the transform domain using a discrete integer wavelet technique through the lifting scheme. It performs a 2-D lifting wavelet decomposition of the cover image with the Haar lifted wavelet and computes the approximation coefficient matrix CA and the detail coefficient matrices CH, CV, and CD. The next step is to apply the PMM technique to those coefficients to form the stego image. The aim of this paper is to propose a high-capacity image steganography technique that uses the pixel mapping method in the integer wavelet domain with acceptable levels of imperceptibility and distortion in the cover image and a high level of overall security. The solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.
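
A hedged sketch of the transform-domain workflow using PyWavelets: a one-level Haar DWT splits the cover image into CA and (CH, CV, CD), a secret bit stream is carried by coefficient parity in CH, and the stego image is obtained by the inverse transform. The parity embedding is a simplistic stand-in for the paper's pixel mapping method, and pywt's floating-point Haar transform stands in for the integer lifting scheme.

```python
# Hedged sketch: Haar DWT sub-bands + parity embedding (stand-in for PMM).
import numpy as np
import pywt

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (64, 64)).astype(float)    # stand-in grey-scale cover image
bits = rng.integers(0, 2, 100)                           # secret bit stream

CA, (CH, CV, CD) = pywt.dwt2(cover, "haar")              # approximation + detail sub-bands

flat = CH.flatten()
for i, b in enumerate(bits):                             # force coefficient parity to carry each bit
    q = int(np.round(flat[i]))
    flat[i] = q + ((b - q) % 2)                          # nearest integer with matching parity
CH_stego = flat.reshape(CH.shape)

stego = pywt.idwt2((CA, (CH_stego, CV, CD)), "haar")
recovered = (np.round(pywt.dwt2(stego, "haar")[1][0].flatten()[:100]) % 2).astype(int)
print("bits recovered correctly:", int((recovered == bits).sum()), "/ 100")
print("max pixel distortion:", float(np.abs(stego - cover).max()))
```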

Keywords: Cover image, Pixel Mapping Method (PMM), stego image, integer wavelet transform.

10 Three Dimensional Finite Element Analysis of Functionally Graded Radiation Shielding Nanoengineered Sandwich Composites

Authors: Nasim Abuali Galehdari, Thomas J. Ryan, Ajit D. Kelkar

Abstract:

In recent years, nanotechnology has played an important role in the design of efficient radiation-shielding polymeric composites. It is well known that high loadings of nanomaterials with radiation absorption properties can enhance the radiation attenuation efficiency of shielding structures. However, due to difficulties in dispersing nanomaterials in polymer matrices, higher loading percentages of nanoparticles in the polymer matrix have been limited. Therefore, the objective of the present work is to provide a methodology to fabricate, and then to characterize, functionally graded radiation-shielding structures that provide efficient radiation absorption along with good structural integrity. Sandwich structures composed of ultra-high molecular weight polyethylene (UHMWPE) fabric face sheets and a functionally graded epoxy nanocomposite core were fabricated. A method to fabricate a functionally graded core panel with a controllable gradient of nanoparticle dispersion is discussed. In order to optimize the design of the functionally graded sandwich composites and to analyze the stress distribution through the thickness of the sandwich composite, a finite element method was used. The sandwich panels were discretized using 3-dimensional 8-noded brick elements. Classical laminate analysis in conjunction with simplified micromechanics equations was used to obtain the properties of the face sheets. The presented finite element model provides insight into the deformation and damage mechanics of functionally graded sandwich composites from the structural point of view.

Keywords: Nanotechnology, functionally graded material, radiation shielding, sandwich composites, finite element method.
