Search results for: Linear Equalizers.
152 Estimation of the Park-Ang Damage Index for Floating Column Building with Infill Wall
Authors: Susanta Banerjee, Sanjaya Kumar Patro
Abstract:
Buildings with floating columns are highly undesirable in seismically active areas, yet many urban multi-storey buildings adopt them to accommodate parking at the ground floor or reception lobbies in the first storey. The earthquake forces developed at different floor levels of a building must be brought down to the ground along the shortest path; any deviation or discontinuity in this load-transfer path results in poor performance of the building, and floating column buildings are therefore severely damaged during earthquakes. This damage can be reduced by taking the effect of the infill walls into account. This paper presents the effect of infill wall stiffness on the damage sustained by a floating column building during ground shaking. Modelling and analysis are carried out with the nonlinear analysis programme IDARC-2D. Damage in beams, columns and storeys is studied by formulating a modified Park & Ang model to evaluate damage indices, and overall structural damage indices for the buildings are also obtained. Dynamic response parameters, i.e. lateral floor displacement, storey drift, time period and base shear, are obtained and compared with those of ordinary moment-resisting frame buildings. The formation of cracks, yielding and plastic hinges is also observed during the analysis.
Keywords: Floating column, Infill Wall, Park-Ang Damage Index, Damage State.
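The modified Park-Ang index referenced above combines the peak deformation demand with the absorbed hysteretic energy. A minimal sketch of the classical formula (all numeric inputs below are hypothetical, not taken from the paper):

```python
def park_ang_damage_index(delta_m, delta_u, beta, q_y, hysteretic_energy):
    """Park-Ang damage index for an RC member.

    delta_m: maximum displacement reached under the earthquake
    delta_u: ultimate displacement under monotonic loading
    beta:    strength-degradation parameter (often in the 0.05-0.15 range)
    q_y:     yield strength
    hysteretic_energy: cumulative absorbed hysteretic energy
    """
    return delta_m / delta_u + beta * hysteretic_energy / (q_y * delta_u)

# Hypothetical member response; DI < 0.4 is often read as repairable
# damage and DI > 1.0 as collapse.
di = park_ang_damage_index(delta_m=0.06, delta_u=0.10, beta=0.1,
                           q_y=150.0, hysteretic_energy=45.0)
# di is about 0.9: a 0.6 deformation term plus a 0.3 energy term.
```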
151 Pushover Analysis of Reinforced Concrete Buildings Using Full Jacket Techniques: A Case Study on an Existing Old Building in Madinah
Authors: Tarek M. Alguhane, Ayman H. Khalil, M. N. Fayed, Ayman M. Ismail
Abstract:
The retrofitting of existing buildings to resist seismic loads is very important to avoid loss of life or financial disaster. Retrofitting aims to increase the total strength of the structure by increasing its stiffness or ductility ratio. In addition, the response modification factor (R) has to satisfy the code requirements for the suggested retrofitting types. In this study, two types of jackets are used: full reinforced concrete jackets and surrounding steel plate jackets. The study is carried out on an existing building in Madinah by performing static pushover analysis before and after retrofitting the columns. The selected model building represents nearly all typical structures built about 30 years ago in Madinah City, KSA. Comparison of the results indicates a good enhancement of the structure with respect to the applied seismic forces. The response modification factor of the RC building is also evaluated for the studied cases before and after retrofitting, and the design of all vertical elements (columns) is given. The results show that the design of the retrofitted columns satisfies the code's design stress requirements. However, for some retrofitting types, the ductility requirements represented by the response modification factor do not satisfy the KSA design code (SBC-301).
Keywords: Concrete jackets, steel jackets, RC buildings, pushover analysis, non-linear analysis.
150 Antibody Reactivity of Synthetic Peptides Belonging to Proteins Encoded by Genes Located in Mycobacterium tuberculosis-Specific Genomic Regions of Differences
Authors: Abu Salim Mustafa
Abstract:
The comparison of mycobacterial genomes has identified several Mycobacterium tuberculosis-specific genomic regions that are absent in other mycobacteria and are known as regions of differences. Due to their M. tuberculosis-specificity, the peptides encoded by these regions could be useful in the specific diagnosis of tuberculosis. To explore this possibility, overlapping synthetic peptides corresponding to 39 proteins predicted to be encoded by genes present in regions of differences were tested for antibody reactivity with sera from tuberculosis patients and healthy subjects. The results identified four immunodominant peptides corresponding to four different proteins, with three of the peptides showing significantly stronger antibody reactivity and rate of positivity with sera from tuberculosis patients than from healthy subjects; the fourth peptide was recognized equally well by the sera of tuberculosis patients and healthy subjects. Prediction of antibody epitopes by bioinformatics analysis using the ABCpred server identified multiple linear epitopes in each peptide. Furthermore, peptide sequence analysis for sequence identity using BLAST suggested M. tuberculosis-specificity for the three peptides that reacted preferentially with sera from tuberculosis patients, whereas the peptide with equal reactivity showed significant identity with sequences present in non-tuberculous mycobacteria. The three identified M. tuberculosis-specific immunodominant peptides may be useful in the serological diagnosis of tuberculosis.
Keywords: Genomic regions of differences, Mycobacterium tuberculosis, peptides, serodiagnosis.
149 Thermo-Mechanical Approach to Evaluate Softening Behavior of Polystyrene: Validation and Modeling
Authors: Salah Al-Enezi, Rashed Al-Zufairi, Naseer Ahmad
Abstract:
A thermo-mechanical technique was developed to determine the softening point temperature/glass transition temperature (Tg) of polystyrene exposed to high pressures. The design exploits the ability of carbon dioxide to lower the glass transition temperature of polymers by acting as a plasticizer. In this apparatus, the sorption of carbon dioxide induces softening of the polymer as a function of temperature/pressure, and the extent of softening is measured in three-point flexural bending mode. The polymer strip was placed in the cell in contact with a linear variable differential transformer (LVDT), and CO2 was pumped into the cell from a supply cylinder to reach high pressure. The results clearly showed the full softening point of the samples, accompanied by a large deformation of the polymer strip. The deflection curves are initially relatively flat and then increase dramatically as the temperature is elevated. It was found that increasing the CO2 pressure shifts the temperature curves downward by about 45 K over the pressure range of 0-120 bar. The obtained experimental Tg values were validated against values reported in the literature. Finally, it is concluded that the deflection model fits the experimental results consistently and describes in more detail how the central deflection of a thin polymer strip is affected by CO2 diffusion into the polymeric samples.
Keywords: Softening, high-pressure, polystyrene, CO2 diffusions.
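In the elastic range, the central deflection measured by the LVDT in three-point bending follows the standard beam relation δ = FL³/(48EI). A small illustration (the strip dimensions and moduli are hypothetical) of why CO2-induced softening, i.e. a drop in E, shows up directly as increased deflection:

```python
def central_deflection(force, span, e_modulus, width, thickness):
    # Elastic central deflection of a simply supported strip under a
    # midspan point load: delta = F*L^3 / (48*E*I), with the second
    # moment of area I = b*h^3/12 for a rectangular cross-section.
    inertia = width * thickness ** 3 / 12.0
    return force * span ** 3 / (48.0 * e_modulus * inertia)

# Hypothetical polystyrene strip (SI units): as CO2 plasticizes the
# polymer, E drops, so the same load produces a larger deflection.
d_glassy = central_deflection(0.5, 0.04, 3.0e9, 0.01, 0.001)
d_soft = central_deflection(0.5, 0.04, 0.3e9, 0.01, 0.001)
```

A tenfold drop in modulus gives exactly a tenfold rise in deflection, which is why the deflection curve turns sharply upward at the softening point.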
148 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis
Authors: Petr Gurný
Abstract:
One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. This paper discusses the possibility of determining a financial institution's PD with credit-scoring models. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks; these models are then compared and verified on a control sample in order to choose the best one. The second part of the paper applies the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks: the values of the particular indicators are sampled randomly and the PD distribution is estimated, under the assumption that the indicators follow a multidimensional subordinated Lévy model (specifically, the Variance Gamma model and the Normal Inverse Gaussian model). Although the obtained results show that all the banks are relatively healthy, there is still a high chance that "a financial crisis" will occur, at least in terms of probability; this is indicated by estimates of various quantiles of the fitted distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
Keywords: Credit-scoring Models, Multidimensional Subordinated Lévy Model, Probability of Default.
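Of the three model families compared in the paper, the logit model is the simplest to sketch. A toy version on synthetic bank indicators (the two features, the cluster means and the sample sizes are invented for illustration; scikit-learn is assumed available):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical financial indicators (say, capital ratio and ROA) for
# 200 surviving and 200 defaulted banks -- synthetic, not the paper's data.
healthy = rng.normal([0.12, 0.015], 0.02, size=(200, 2))
defaulted = rng.normal([0.06, -0.005], 0.02, size=(200, 2))
X = np.vstack([healthy, defaulted])
y = np.array([0] * 200 + [1] * 200)  # 1 = default

# Logit model: PD = 1 / (1 + exp(-(b0 + b'x)))
model = LogisticRegression().fit(X, y)
pd_estimate = model.predict_proba([[0.07, -0.002]])[0, 1]  # P(default)
```

The same workflow, swapping in a probit link or discriminant analysis, gives the other two model families from the first part of the paper.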
147 A Shape Optimization Method in Viscous Flow Using Acoustic Velocity and Four-Step Explicit Scheme
Authors: Yoichi Hikino, Mutsuto Kawahara
Abstract:
The purpose of this study is to derive optimal shapes of a body located in viscous flow by the finite element method, using the acoustic velocity and the four-step explicit scheme. The formulation is based on an optimal control theory in which a performance function of the fluid force is introduced; the performance function should be minimized while satisfying the state equation. This problem can be transformed into a minimization problem without constraint conditions by using the adjoint equation, with adjoint variables corresponding to the state equation. The performance function is defined by the drag and lift forces acting on the body. The weighted gradient method is applied as the minimization technique, the Galerkin finite element method is used for spatial discretization, and the four-step explicit scheme is used for temporal discretization to solve the state equation and the adjoint equation. As interpolation, the orthogonal basis bubble function is employed for velocity and the linear function for pressure. When the orthogonal basis bubble function is used, the mass matrix can be diagonalized without any artificial lumping. The shape optimization is performed by the presented method.
Keywords: Shape Optimization, Optimal Control Theory, Finite Element Method, Weighted Gradient Method, Fluid Force, Orthogonal Basis Bubble Function, Four-step Explicit Scheme, Acoustic Velocity.
146 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System
Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi
Abstract:
Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate pilot-assisted channel estimation for DC-biased optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to enhance the bit-error-rate (BER) of PACE-DCO-OFDM. Simulation results show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm estimates the channel more accurately and achieves better BER performance than both the LS-based PACE-DCO-OFDM and the traditional system without PACE. For the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ for LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
Keywords: Channel estimation, OFDM, pilot-assist, VLC.
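The gap between the LS and LMMSE estimators reported above can be reproduced in a few lines. A stripped-down frequency-domain sketch (unit-power pilots, an i.i.d. Rayleigh channel and a known noise variance are simplifying assumptions; with an identity channel covariance the LMMSE estimator reduces to a per-subcarrier Wiener shrinkage of the LS estimate):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 256                        # pilot subcarriers
x = np.ones(n, dtype=complex)  # known unit-power pilot symbols
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
noise_var = 0.5
w = np.sqrt(noise_var / 2) * (rng.standard_normal(n)
                              + 1j * rng.standard_normal(n))
y = h * x + w                  # received pilots

h_ls = y / x                   # least-squares estimate (just inverts pilots)
# LMMSE with an (assumed known) identity channel covariance: shrink the
# noisy LS estimate toward zero by the Wiener factor 1/(1 + noise_var).
h_lmmse = h_ls / (1.0 + noise_var)

mse_ls = np.mean(np.abs(h_ls - h) ** 2)
mse_lmmse = np.mean(np.abs(h_lmmse - h) ** 2)
```

The LS error equals the noise variance, while the shrinkage trades a little bias for a lower overall error, mirroring the BER ordering in the abstract.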
145 The Effects of Food Deprivation on Hematological Indices and Blood Indicators of Liver Function in Oxyleotris marmorata
Authors: N. Sridee, S. Boonanuntanasarn
Abstract:
Oxyleotris marmorata is considered an undomesticated fish, and its culture occasionally faces the problem of food deprivation. The present study aims to evaluate alterations in hematological indices and blood chemistry associated with liver function during 4 weeks of fasting. Non-linear relationships between fasting days and hematological parameters were demonstrated (red blood cell number: y = -0.002x² + 0.041x + 1.249, R² = 0.915, P<0.05; hemoglobin: y = -0.002x² + 0.030x + 3.470, R² = 0.460, P>0.05; mean corpuscular volume: y = -0.180x² + 2.183x + 149.61, R² = 0.732, P>0.05; mean corpuscular hemoglobin: y = -0.041x² + 0.862x + 29.864, R² = 0.818, P>0.05; mean corpuscular hemoglobin concentration: y = -0.044x² + 0.711x + 21.580, R² = 0.730, P>0.05). A significant change in hematocrit (Ht) during the fasting period was observed: Ht rose sharply in the first week of fasting, and higher Ht was also detected during weeks 2-4. A significant reduction of the hepatosomatic index was observed (y = -0.007x² - 0.096x + 1.414, R² = 0.968, P<0.05). Moreover, alteration of enzymes associated with liver function was evaluated during the 4 weeks of fasting (alkaline phosphatase: y = -0.026x² - 0.935x + 12.188, R² = 0.737, P>0.05; serum glutamic oxaloacetic transaminase: y = 0.005x³ - 0.201x² + 1.297x + 33.256, R² = 1, P<0.01; serum glutamic pyruvic transaminase: y = 0.007x³ - 0.274x² + 2.277x + 25.257, R² = 0.807, P>0.05). Taken together, prolonged fasting has deleterious effects on hematological indices, liver mass and enzymes associated with liver function, with the marked adverse effects occurring after the first week of fasting.
Keywords: food deprivation, Oxyleotris marmorata, hematology, alkaline phosphatase, SGOT, SGPT.
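The quadratic trends and R² values quoted above come from ordinary polynomial regression. A sketch of that procedure on made-up fasting-day data (the numbers are illustrative, not the paper's measurements):

```python
import numpy as np

# Hypothetical fasting-day measurements (not the paper's raw data):
days = np.array([0.0, 7.0, 14.0, 21.0, 28.0])
rbc = np.array([1.25, 1.42, 1.45, 1.30, 0.85])  # illustrative RBC counts

coeffs = np.polyfit(days, rbc, deg=2)      # quadratic fit, as in the paper
predicted = np.polyval(coeffs, days)

# Coefficient of determination R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((rbc - predicted) ** 2)
ss_tot = np.sum((rbc - rbc.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

For the data above the leading coefficient comes out negative, i.e. the rise-then-fall shape the paper reports for most indices.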
144 Analysis of Temperature Change under Global Warming Impact using Empirical Mode Decomposition
Authors: Md. Khademul Islam Molla, Akimasa Sumi, M. Sayedur Rahman
Abstract:
The empirical mode decomposition (EMD) represents any time series as a finite set of basis functions. These bases are termed intrinsic mode functions (IMFs); they are mutually orthogonal and contain a minimum amount of cross-information. EMD successively extracts the IMFs with the highest local frequencies in a recursive way, which effectively yields a set of low-pass filters based entirely on the properties exhibited by the data. In this paper, EMD is applied to explore the properties of multi-year air temperature and to observe its effects on climate change under global warming. The method decomposes the original time series into intrinsic time scales and is capable of analyzing the nonlinear, non-stationary climatic time series that cause problems for many linear statistical methods and their users. The analysis results show that the modes of EMD exhibit seasonal variability. Most of the IMFs have a normal distribution, and the energy density distribution of the IMFs satisfies a Chi-square distribution. The IMFs are effective in isolating physical processes at various time scales and are also statistically significant. The results also show that the EMD method does a good job of revealing many characteristics of interannual climate, and they suggest that climate fluctuations of every single element, such as temperature, are the result of variations in the global atmospheric circulation.
Keywords: Empirical mode decomposition, instantaneous frequency, Hilbert spectrum, Chi-square distribution, anthropogenic impact.
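Once EMD has isolated a mono-component IMF, its instantaneous frequency (the Hilbert spectrum mentioned in the keywords) is obtained from the phase of the analytic signal. A minimal sketch on a synthetic 50 Hz component, with SciPy assumed available:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
imf = np.cos(2 * np.pi * 50 * t)   # a mono-component "IMF" at 50 Hz

# Analytic signal -> unwrapped phase -> instantaneous frequency (Hz).
analytic = hilbert(imf)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)

# Away from the record boundaries the estimate sits at 50 Hz.
core = inst_freq[100:-100]
```

Applying this to each IMF of a temperature record, and weighting by amplitude, builds the Hilbert spectrum used to separate seasonal from interannual variability.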
143 Obsession of Time and the New Musical Ontologies: The Concert for Saxophone, Daniel Kientzy and Orchestra by Myriam Marbe
Authors: Luminiţa Duţică
Abstract:
For the composer Myriam Marbe, musical time and memory represent two complementary phenomena with a conclusive impact on the settlement of new musical ontologies. Summarizing the most important achievements of contemporary composition techniques, her vision of the microform, presented in The Concert for Daniel Kientzy, saxophone and orchestra, transcends linear and unidirectional time in favour of a flexible, multivectorial discourse with spiral developments, in which the sound substance is auto(re)generated by analogy with the fundamental processes of memory. The conceptual model is of an archetypal essence, the composer being concerned with identifying the mechanisms of the creative process, especially those specific to collective creation (of oral tradition). Hence the spontaneity of expression, the tint of improvisation, the free rhythm, the micro-interval intonation, and a coloristic-timbral universe dominated by multiphonics and unique sound effects; hence, too, the atmosphere of ritual, purged, however, of its primary connotations and reprojected into a wonderful spectacular space. The Concert is a work of artistic maturity and commands respect, among other things, for the timbral diversity of the three species of saxophone required by the composer (baritone, sopranino and alto); in Part III Daniel Kientzy performs on two saxophones concomitantly. Myriam Marbe's score contains deeply spiritualized music, full of archetypal symbols, whose drama suggests a real cinematographic movement.
Keywords: Archetype, chronogenesis, concert, multiphonics.
142 Optimization of the Conditions of Electrophoretic Deposition Fabrication of Graphene-Based Electrodes to Consider Applications in Electro-Optical Sensors
Authors: Sepehr Lajevardi Esfahani, Shohre Rouhani, Zahra Ranjbar
Abstract:
Graphene has gained much attention owing to its unique optical and electrical properties. Charge carriers in graphene sheets (GS) obey a linear dispersion relation near the Fermi energy and behave as massless Dirac fermions, resulting in unusual attributes such as the quantum Hall effect and the ambipolar electric field effect. Graphene also exhibits nondispersive transport characteristics with an extremely high electron mobility (15000 cm²/(V·s)) at room temperature. Recently, several advances have been made in the fabrication of single- or multilayer GS for functional device applications in optoelectronics, such as field-effect transistors, ultrasensitive sensors and organic photovoltaic cells. Beyond device applications, graphene can also serve as a reinforcement to enhance the mechanical, thermal or electrical properties of composite materials. Electrophoretic deposition (EPD) is an attractive method for developing various coatings and films; it is readily applied to any powdered solid that forms a stable suspension, and the deposition parameters can be controlled to obtain various thicknesses. In this study, the graphene electrodeposition conditions were optimized. The results were obtained from SEM, sheet-resistance measurements and AFM characterization. The minimum sheet resistance of the electrodeposited reduced graphene oxide layers is achieved at 2 V for 10 s, annealed at 200 °C for 1 minute.
Keywords: Electrophoretic deposition, graphene oxide, electrical conductivity, electro-optical devices.
141 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling
Authors: Florin Leon, Silvia Curteanu
Abstract:
Developing complete mechanistic models for polymerization reactors is not easy: complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena of mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely the regression methods typical of the machine learning field. These have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws, and are therefore useful for modeling complex processes such as the free-radical polymerization of methyl methacrylate carried out in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, number-average molecular weight and weight-average molecular weight; this process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which selects more samples around the regions where the values show higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
Keywords: Adaptive sampling, batch bulk methyl methacrylate polymerization, large margin nearest neighbor regression, machine learning.
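The adaptive-sampling idea, placing extra samples where the response changes fastest, can be sketched independently of the regression models. A toy version on an invented conversion profile with a sharp gel-effect-like rise (both the profile and the greedy refinement rule are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def conversion(t):
    # Toy conversion profile with a sharp "gel effect"-style rise around
    # t = 0.6 (illustrative only -- not the paper's kinetic model).
    return 1.0 / (1.0 + np.exp(-40.0 * (t - 0.6)))

# Start from a coarse uniform design, then repeatedly add a sample at the
# midpoint of the interval showing the largest output variation.
x = list(np.linspace(0.0, 1.0, 6))
for _ in range(20):
    y = [conversion(v) for v in x]
    jumps = [abs(y[i + 1] - y[i]) for i in range(len(x) - 1)]
    k = int(np.argmax(jumps))
    x.insert(k + 1, 0.5 * (x[k] + x[k + 1]))

dense = sum(1 for v in x if 0.5 < v < 0.7)  # samples in the steep region
```

Almost all of the added points land in the steep region, which is exactly where a regression model would otherwise miss the gel effect.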
140 Indoor Air Pollution of the Flexographic Printing Environment
Authors: Jelena S. Kiurski, Vesna S. Kecić, Snežana M. Aksentijević
Abstract:
The identification and evaluation of organic and inorganic pollutants were performed in a flexographic facility in Novi Sad, Serbia. Air samples were collected and analyzed in situ during 4-hour working periods at five sampling points, using a mobile gas chromatograph and an ozonometer, during the printing of collagen casing. Experimental results showed that the concentrations of isopropyl alcohol, acetone, total volatile organic compounds and ozone varied during the sampling times. The highest average concentrations, 94.80 ppm for isopropyl alcohol and 102.57 ppm for total volatile organic compounds, were reached 200 minutes after the start of production. The mutual dependences between the target hazardous substances and the microclimate parameters were confirmed using a multiple linear regression model in the software package STATISTICA 10. The multiple coefficients of determination obtained for ozone and acetone (0.507 and 0.589) with the microclimate parameters indicated a moderate correlation between the observed variables, whereas a strong positive correlation was obtained for isopropyl alcohol and total volatile organic compounds (0.760 and 0.852). Values of the F statistic higher than Fcritical for all examined dependences indicated a statistically significant relationship between the concentration levels of the target pollutants and the microclimate parameters. Given that the microclimate parameters significantly affect the emission of the investigated gases, the application of eco-friendly materials in the production process is a necessity.
Keywords: Flexographic printing, indoor air, multiple regression analysis, pollution emission.
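The R²-and-F-test workflow described above is standard multiple linear regression. A compact sketch with synthetic microclimate data (the predictor names, ranges and coefficients are invented; the F statistic is compared against roughly the 5% critical value for (2, 45) degrees of freedom):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 48
# Hypothetical microclimate predictors: temperature (degC) and relative
# humidity (%) -- illustrative stand-ins for the paper's measurements.
temp = rng.uniform(20.0, 30.0, n)
rh = rng.uniform(35.0, 65.0, n)
voc = 40.0 + 2.1 * temp + 0.4 * rh + rng.normal(0.0, 3.0, n)  # synthetic

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), temp, rh])
beta, *_ = np.linalg.lstsq(X, voc, rcond=None)
fitted = X @ beta

ss_res = np.sum((voc - fitted) ** 2)
ss_tot = np.sum((voc - voc.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

k = 2  # number of predictors
f_stat = (r2 / k) / ((1.0 - r2) / (n - k - 1))
# F > Fcritical (about 3.2 at the 5% level here) -> regression significant.
```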
139 Study of Storms on the Javits Center Green Roof
Authors: A. Cho, H. Sanyal, J. Cataldo
Abstract:
A quantitative analysis of the different variables on both the South and North green roofs of the Jacob K. Javits Convention Center was undertaken to find mathematical relationships between net radiation and evapotranspiration (ET), average outside temperature, and lysimeter weight. Groups of datasets were analyzed, and the relationships were plotted on linear and semi-log graphs to find consistent relationships. The first relation found was an inverse parabolic relationship between the lysimeter weight and both net radiation and ET: the peaks and valleys of the lysimeter weight corresponded to valleys and peaks in net radiation and ET respectively, with the 8/22/15 and 1/22/16 datasets showing this trend. The U-shaped and inverse-U-shaped plots of the two variables coincided, indicating an inverse relationship. Cross-variable relationships were examined through graphs with lysimeter weight as the dependent variable on the y-axis; 10 out of 16 of the lysimeter weight vs. outside temperature plots had R² values > 0.9. Antecedent conditions were also recorded for each rainstorm, categorized by the amount of precipitation accumulating during the storm; plotted against the change in volumetric water weight within the lysimeter, a logarithmic regression was found with large R² values. The datasets were compared using the Mann-Whitney U-test at a 5% significance level to determine whether they were statistically different; all comparisons yielded U-test statistic values indicating that the datasets were statistically different.
Keywords: Green roof, green infrastructure, Javits Center, evapotranspiration, net radiation, lysimeter.
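The dataset comparison above uses the Mann-Whitney U-test at the 5% level, which SciPy exposes directly. A small sketch on invented ET readings (the values are illustrative; with complete separation of two samples of eight, U equals n₁·n₂ = 64):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical daily ET readings from two periods (mm/day, illustrative).
summer = np.array([3.9, 4.2, 4.5, 4.1, 4.8, 4.4, 4.0, 4.6])
winter = np.array([1.1, 0.9, 1.4, 1.2, 0.8, 1.3, 1.0, 1.5])

# H0: the two samples come from the same distribution.
u_stat, p_value = mannwhitneyu(summer, winter, alternative="two-sided")
different = p_value < 0.05  # reject H0 at the 5% significance level
```

Because every summer value exceeds every winter value, the U statistic takes its maximum (64) and the exact two-sided p-value is far below 0.05.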
138 A Sensitive Approach to Trace Analysis of Methylparaben in Wastewater and Cosmetic Products Using Molecularly Imprinted Polymer
Authors: Soukaina Motia, Nadia El Alami El Hassani, Alassane Diouf, Benachir Bouchikhi, Nezha El Bari
Abstract:
Parabens are antimicrobial molecules largely used in cosmetic products as preservative agents. Among them, methylparaben (MP) is the ingredient most frequently used in cosmetic preparations. Nevertheless, their potential dangers have led to the development of sensitive and reliable methods for their determination in environmental samples. Firstly, a sensitive and selective molecularly imprinted polymer (MIP) sensor based on a screen-printed gold electrode (Au-SPE), assembled on a polymeric layer of carboxylated poly(vinyl chloride) (PVC-COOH), was developed. After template removal, the obtained material was able to rebind MP and discriminate it among other interfering species such as glucose, sucrose and citric acid. The behavior of the molecularly imprinted sensor was characterized by cyclic voltammetry (CV), differential pulse voltammetry (DPV) and electrochemical impedance spectroscopy (EIS). The biosensor was found to have a linear detection range from 0.1 pg.mL⁻¹ to 1 ng.mL⁻¹ and low limits of detection of 0.12 fg.mL⁻¹ and 5.18 pg.mL⁻¹ by DPV and EIS, respectively. For applications, this biosensor was employed to determine the MP content of four wastewaters in Meknes city and of two cosmetic products (shower gel and shampoo), and its operational reproducibility and stability were also studied. Secondly, another MIP biosensor, based on tungsten trioxide (WO3) functionalized by gold nanoparticles (Au-NPs) assembled on a polymeric layer of PVC-COOH, was developed with the main goal of increasing the sensitivity of the biosensor. The developed MIP biosensor was successfully applied to MP determination in wastewater samples and cosmetic products.
Keywords: Cosmetic products, methylparaben, molecularly imprinted polymer, wastewater.
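Linear detection ranges and detection limits like those quoted above come from a calibration line; over such a wide concentration range the response is typically fitted against log concentration. A sketch on invented DPV readings (the currents, concentrations and the 3.3·s/slope rule-of-thumb estimate on the log axis are all illustrative assumptions, not the paper's calibration):

```python
import numpy as np

# Hypothetical DPV calibration: peak current vs log10(concentration, pg/mL),
# spanning 0.1 pg/mL to 1 ng/mL as in the reported linear range.
log_c = np.array([-1.0, 0.0, 1.0, 2.0, 3.0])
current = np.array([2.1, 3.9, 6.2, 8.0, 10.1])  # uA, illustrative values

# Sensitivity = slope of the calibration line.
slope, intercept = np.polyfit(log_c, current, 1)
fitted = slope * log_c + intercept
residual_sd = np.std(current - fitted, ddof=2)

# Rule-of-thumb detection limit on the log-concentration axis:
# LOD = 3.3 * s / slope.
lod_log = 3.3 * residual_sd / slope
```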
137 Boundary Layer Flow of a Casson Nanofluid past a Vertical Exponentially Stretching Cylinder in the Presence of a Transverse Magnetic Field with Internal Heat Generation/Absorption
Authors: G. Sarojamma, K. Vendabai
Abstract:
An analysis is carried out to investigate the effects of a magnetic field and a heat source on the steady boundary layer flow and heat transfer of a Casson nanofluid over a vertical cylinder stretching exponentially along its radial direction. Using a similarity transformation, the governing mathematical equations with the boundary conditions are reduced to a system of coupled, non-linear ordinary differential equations. The resulting system is solved numerically by the fourth-order Runge-Kutta scheme with a shooting technique. The influence of various physical parameters, such as the Reynolds number, Prandtl number, magnetic field, Brownian motion parameter, thermophoresis parameter, Lewis number and natural convection parameter, on the non-dimensional velocity, temperature and nanoparticle volume fraction is presented graphically and discussed. Numerical data for the skin-friction coefficient, local Nusselt number and local Sherwood number have been tabulated for various parametric conditions. It is found that the local Nusselt number is a decreasing function of the Brownian motion parameter Nb and the thermophoresis parameter Nt.
Keywords: Casson nanofluid, Boundary layer flow, Internal heat generation/absorption, Exponentially stretching cylinder, Heat transfer, Brownian motion, Thermophoresis.
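The Runge-Kutta-with-shooting approach described above can be illustrated on the classical Blasius boundary-layer equation, a much simpler stand-in for the paper's coupled Casson-nanofluid system (SciPy's Runge-Kutta integrator and a bracketing root-finder replace hand-written RK4):

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Blasius flat-plate problem: f''' + 0.5*f*f'' = 0,
# with f(0) = f'(0) = 0 and the far-field condition f'(inf) = 1.
def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def shoot(fpp0):
    # Integrate out to a "numerical infinity" and return the miss
    # distance on the far-field condition f'(inf) = 1.
    sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, fpp0],
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# Shooting: adjust the unknown initial curvature f''(0) until the
# far-field condition is met; the classical value is about 0.332.
fpp0 = brentq(shoot, 0.1, 1.0)
```

The same idea, with more unknown initial slopes adjusted simultaneously, handles the coupled momentum, energy and nanoparticle-fraction equations of the paper.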
136 Effect of Strain and Storage Period on Some Qualitative and Quantitative Traits of Table Eggs
Authors: Hani N. Hermiz, Sukar H. Ali
Abstract:
This study includes the effect of strain and storage period, and their interaction, on some quantitative and qualitative traits and the percentages of the egg components in eggs collected at the start of production (at 24 weeks of age). Eggs were divided into three storage periods (1, 7 and 14 days) under refrigerator temperature (5-7 °C). Fifty-seven eggs were obtained randomly from each strain, Isa Brown and Lohman White. The General Linear Model within the SAS programme was used to analyze the collected data, and correlations between the studied traits were calculated for each strain. Average egg weight (EW), Haugh unit (HU), yolk index (YI), yolk % (YP), albumin % (AP) and yolk-to-albumin ratio (YAR) were 56.629 g, 87.968%, 0.493, 22.13%, 67.74% and 32.76, respectively. Eggs produced by ISA Brown surpassed those produced by Lohman White significantly (P<0.01) in EW (59.337 vs. 53.921 g) and AP (68.46 vs. 67.02%), while Lohman White surpassed ISA Brown significantly (P<0.01) in HU (91.998 vs. 83.939%), YI (0.498 vs. 0.487), YP (22.83 vs. 21.44%) and YAR (34.12 vs. 31.40). Storage period did not have any significant effect on EW or YI. Increasing the storage period caused a significant (P<0.01) decrease in HU; a non-significant increase in YP together with a significant decrease in AP due to increasing storage period caused a significant increase in YAR. The interaction between strain and storage period affected EW, HU and YI significantly (P<0.01), while its effect on YP, AP and YAR was not significant. The highest and significant (P<0.01) correlation was recorded between YP and YAR (0.99) in both strains, while the lowest values were between AP and YAR, being -0.97 and -0.95 in ISA Brown and Lohman White, respectively. In conclusion, increasing the storage period caused only a small decrease in egg weight, enabling the consumer to store eggs without damage. Because albumin is used in many food industries, it is very important to focus on its weight. The correlations between some of the studied traits were significant, which means that selection for any trait will improve the others.
Keywords: Quality, Quantity, Storage period, Strain, Table egg.
135 A Perceptually Optimized Wavelet Embedded Zero Tree Image Coder
Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf
Abstract:
In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that applies a perceptual weighting to the wavelet transform coefficients prior to SPIHT encoding, in order to reach a targeted bit rate with an improvement in perceptual quality over that obtained with the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS); this metric plays an important role in our POEZIC quality assessment. Our POEZIC coder is based on a vision model that incorporates various masking effects of HVS perception. The coder weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) luminance masking and contrast masking, 2) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting, and 3) the Wavelet Error Sensitivity (WES), used to reduce the perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder achieves very good performance in terms of quality measurement.
Keywords: DWT, linear-phase 9/7 filter, Wavelet Error Sensitivity (WES), CSF implementation approaches, Just Noticeable Difference (JND), luminance masking, contrast masking, standard SPIHT, objective quality measure, Probability Score (PS).
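The CSF-based decomposition weighting in step 2 can be illustrated with a small sketch: each subband gets a weight from a contrast sensitivity function evaluated at its radial centre frequency. The Mannos-Sakrison CSF form and the frequency values below are common assumptions, not necessarily the authors' choices:

```python
import numpy as np

def csf(f):
    """Mannos-Sakrison-style contrast sensitivity at frequency f (cyc/deg)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

# Assumed radial centre frequencies (cycles/degree) for a 3-level DWT; the
# real values depend on viewing distance and display resolution.
subband_freqs = {"HH1": 16.0, "HL1": 11.3, "LH1": 11.3,
                 "HH2": 8.0, "HL2": 5.7, "LH2": 5.7,
                 "HH3": 4.0, "HL3": 2.8, "LH3": 2.8, "LL3": 1.0}

# One perceptual weight per subband, normalised so the most visually
# important subband gets weight 1; coefficients would be scaled by these
# weights before a SPIHT-style encoding pass.
raw = {band: float(csf(f)) for band, f in subband_freqs.items()}
peak = max(raw.values())
weights = {band: w / peak for band, w in raw.items()}
```

With these assumed frequencies the mid-frequency subbands receive the largest weights, matching the band-pass character of the CSF.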
134 A Failure Criterion for Unsupported Boreholes in Poorly Cemented Granular Formations
Authors: Sam S. Hashemi
Abstract:
The breakage of bonding between sand particles and their dislodgment from the borehole wall are among the main factors resulting in a borehole failure in poorly cemented granular formations. The grain debonding usually precedes the borehole failure and it can be considered as a sign that the onset of the borehole collapse is imminent. Detecting the bonding breakage point and introducing an appropriate failure criterion will play an important role in borehole stability analysis. To study the influence of different factors on the initiation of sand bonding breakage at the borehole wall, a series of laboratory tests was designed and conducted on poorly cemented sand samples. The total absorbed strain energy per volume of material up to the point of the observed particle debonding was computed. The results indicated that the particle bonding breakage point at the borehole wall was reached both before and after the peak strength of the thick-walled hollow cylinder specimens depending on the stress path and cement content. Three different cement contents and two borehole sizes were investigated to study the influence of the bonding strength and scale on the particle dislodgment. Test results showed that the stress path has a significant influence on the onset of the sand bonding breakage. It was shown that for various stress paths, there is a near linear relationship between the absorbed energy and the normal effective mean stress.
Keywords: Borehole stability, experimental studies, total strain energy, poorly cemented sands, particle bonding breakage.
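The absorbed strain energy per volume used here as the failure indicator is the area under the stress-strain curve up to the observed debonding point. A minimal sketch, with an illustrative quadratic stress-strain curve rather than the paper's test data:

```python
import numpy as np

# Illustrative stress-strain record from a thick-walled hollow-cylinder
# test; the quadratic curve below is made up for the sketch.
strain = np.linspace(0.0, 0.02, 201)                 # axial strain [-]
stress = 40e6 * strain - 900e6 * strain ** 2         # stress [Pa]

def absorbed_energy_per_volume(strain, stress, up_to):
    """Trapezoidal area under the stress-strain curve up to a given strain,
    i.e. the absorbed strain energy per unit volume [J/m^3]."""
    mask = strain <= up_to + 1e-9   # tolerance guards the endpoint sample
    s, e = stress[mask], strain[mask]
    return float(np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(e)))

# Energy absorbed up to a hypothetical observed debonding strain of 1 %.
energy = absorbed_energy_per_volume(strain, stress, up_to=0.01)
```

For this curve the exact integral up to 1 % strain is 1700 J/m³, which the trapezoidal sum reproduces closely.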
133 Cascaded ANN for Evaluation of Frequency and Air-gap Voltage of Self-Excited Induction Generator
Authors: Raja Singh Khela, R. K. Bansal, K. S. Sandhu, A. K. Goel
Abstract:
A Self-Excited Induction Generator (SEIG) builds up voltage as it enters its magnetic saturation region. Due to its non-linear magnetic characteristics, the performance analysis of a SEIG involves cumbersome mathematical computations. The dependence of air-gap voltage on saturated magnetizing reactance can only be established at rated frequency by conducting a laboratory test commonly known as the synchronous run test. However, there is no laboratory method to determine the saturated magnetizing reactance and air-gap voltage of a SEIG at varying speed, terminal capacitance and other loading conditions. For overall analysis of a SEIG, prior information on magnetizing reactance, generated frequency and air-gap voltage is essential; thus, analytical methods are the only alternative to determine these variables. The non-existence of a direct mathematical relationship between these variables for different terminal conditions has forced researchers to evolve new computational techniques. Artificial Neural Networks (ANNs) are very useful for the solution of such complex problems, as they do not require any a priori information about the system. In this paper, an attempt is made to use cascaded neural networks to first determine the generated frequency and magnetizing reactance under varying terminal conditions, and then the air-gap voltage of the SEIG. The results obtained from the ANN model are used to evaluate the overall performance of the SEIG and are found to be in good agreement with experimental results. Hence, it is concluded that analysis of SEIGs can be carried out effectively using ANNs.
Keywords: Self-Excited Induction Generator, Artificial Neural Networks, exciting capacitance, saturated magnetizing reactance.
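The cascade itself can be sketched as two small feed-forward networks wired in series: the first maps terminal conditions to generated frequency and magnetizing reactance, and the second takes those outputs together with the terminal conditions and predicts the air-gap voltage. The sketch below shows only the wiring, with untrained random weights; the layer sizes and inputs are assumptions, and a real application would train both stages on measured SEIG data:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(sizes):
    """Randomly initialised (untrained) weights for a small feed-forward net."""
    return [(rng.normal(0.0, 0.3, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)       # hidden-layer activation
    return x

# Stage 1: terminal conditions (speed, capacitance, load) -> (frequency, Xm).
net1 = mlp([3, 8, 2])
# Stage 2: terminal conditions + stage-1 outputs -> air-gap voltage.
net2 = mlp([5, 8, 1])

x = np.array([[1.0, 0.8, 0.5]])          # hypothetical normalised inputs
freq_xm = forward(net1, x)               # cascaded: feeds the second net
airgap_v = forward(net2, np.hstack([x, freq_xm]))
```

The cascading is the key design choice: the second network never has to learn the saturation relationship from scratch, because the first network supplies frequency and reactance estimates as extra inputs.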
132 Route Training in Mobile Robotics through System Identification
Authors: Roberto Iglesias, Theocharis Kyriacou, Ulrich Nehmzow, Steve Billings
Abstract:
Fundamental sensor-motor couplings form the backbone of most mobile robot control tasks, and often need to be implemented fast, efficiently and nevertheless reliably. Machine learning techniques are therefore often used to obtain the desired sensor-motor competences. In this paper we present an alternative to established machine learning methods such as artificial neural networks that is very fast, easy to implement, and has the distinct advantage of generating transparent, analysable sensor-motor couplings: system identification through nonlinear polynomial mapping. This work, which is part of the RobotMODIC project at the universities of Essex and Sheffield, aims to develop a theoretical understanding of the interaction between the robot and its environment. One of the purposes of this research is to enable the principled design of robot control programs. As a first step towards this aim we model the behaviour of the robot, as it emerges from its interaction with the environment, with the NARMAX modelling method (Nonlinear Auto-Regressive Moving Average models with eXogenous inputs). This method produces explicit polynomial functions that can subsequently be analysed using established mathematical methods. In this paper we demonstrate the fidelity of the obtained NARMAX models in the challenging task of robot route learning; we present a set of experiments in which a Magellan Pro mobile robot was taught to follow four different routes, always using the same mechanism to obtain the required control law.
Keywords: Mobile robotics, system identification, non-linear modelling, NARMAX.
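Once the polynomial regressor set is fixed, a NARMAX-style model of this kind reduces to a linear least-squares fit of the term coefficients. A minimal sketch on a synthetic system with a known polynomial law (full NARMAX identification also selects the term set, e.g. via error-reduction ratios, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic single-input single-output system with a KNOWN polynomial law,
# standing in for logged sensor input u and motor response y.
n = 300
u = rng.uniform(-1.0, 1.0, n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * u[t - 1] - 0.3 * u[t - 1] ** 2

# Fixed polynomial regressor set (degree 2, lag 1); least squares recovers
# the explicit, analysable coefficients that NARMAX models provide.
rows = range(1, n)
X = np.column_stack([
    [y[t - 1] for t in rows],
    [u[t - 1] for t in rows],
    [u[t - 1] ** 2 for t in rows],
])
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
```

Because the identified model is an explicit polynomial, its coefficients can be read and analysed directly, which is exactly the transparency argument made above.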
131 Designing Social Care Plans Considering Cause-Effect Relationships: A Study in Scotland
Authors: Sotirios N. Raptis
Abstract:
The paper links social needs to social classes through the creation of cohorts of public services, matching some services as causes to others as effects using cause-effect (CE) models. It then compares these associations with typical regression methods (LR, ARMA). The paper discusses such public-service groupings offered in Scotland over the long term, estimating the risk of multiple causes or effects; linking upcoming services to their likely causes can ultimately reduce healthcare cost. The same generic goal can be achieved using LR or ARMA, and the differences are discussed. The work uses Health and Social Care (H&Sc) public-services data from 11 service packs offered by Public Health Services (PHS) Scotland, which boil down to 110 single-attribute year series, called ’factors’. The study took place at Macmillan Cancer Support, UK and Abertay University, Dundee, from 2020 to 2023. The paper uses CE relationships as its main method and compares sample findings with Linear Regression (LR) and ARMA to see how the services are linked. CE relationships were found between smoking-related healthcare provision, mental-health-related services, and epidemiological weight in Primary-1-Education Body-Mass-Index (BMI) in children. Insurance companies and public policymakers can bundle CE-linked services into long-term plans, such as those for the elderly or for low-income people. The linkage of services was confirmed, allowing more accurate resource planning.
Keywords: Probability, regression, cause-effect cohorts, data frames, services, prediction.
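A cause-effect association of the kind described can be sketched as a lagged regression of an 'effect' factor on a 'cause' factor from the previous year; the yearly series below are synthetic stand-ins, not the H&Sc data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic yearly factors: a 'cause' series (e.g. smoking-related service
# demand) driving an 'effect' series one year later, plus noise.
years = 20
cause = rng.normal(100.0, 10.0, years)
effect = np.empty(years)
effect[1:] = 0.6 * cause[:-1] + rng.normal(0.0, 1.0, years - 1)
effect[0] = 60.0

# Lagged linear regression effect[t] ~ cause[t-1]: a recovered slope near
# the true 0.6 supports treating the first factor as a cause of the second.
X = np.column_stack([np.ones(years - 1), cause[:-1]])
beta, *_ = np.linalg.lstsq(X, effect[1:], rcond=None)
slope = float(beta[1])
```

The lag is what distinguishes this from a plain LR fit of contemporaneous series: the candidate cause must precede the effect in time.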
130 New Simultaneous High Performance Liquid Chromatographic Method for Determination of NSAIDs and Opioid Analgesics in Advanced Drug Delivery Systems and Human Plasma
Authors: Asad Ullah Madni, Mahmood Ahmad, Naveed Akhtar, Muhammad Usman
Abstract:
A new and cost-effective RP-HPLC method was developed and validated for simultaneous analysis of the non-steroidal anti-inflammatory drugs Diclofenac sodium (DFS) and Flurbiprofen (FLP) and the opioid analgesic Tramadol (TMD) in advanced drug delivery systems (liposomes and microcapsules), marketed brands and human plasma. An isocratic system was employed for the flow of the mobile phase, consisting of 10 mM sodium dihydrogen phosphate buffer and acetonitrile in a molar ratio of 67:33 at an adjusted pH of 3.2. The stationary phase was a Hypersil ODS column (C18, 250 × 4.6 mm i.d., 5 μm) at a controlled temperature of 30 °C. DFS in liposomes, microcapsules and marketed drug products was determined in the range of 99.76-99.84%. FLP and TMD in microcapsules and brand formulations were 99.78-99.94% and 99.80-99.82%, respectively. A single-step liquid-liquid extraction procedure using a combination of acetonitrile and trichloroacetic acid (TCA) as protein-precipitating agent was employed. The detection limits (at S/N ratio 3) of quality control solutions and plasma samples were 10, 20, and 20 ng/ml for DFS, FLP and TMD, respectively. The assay was acceptable over the linear dynamic range. All other validation parameters were within the limits of the FDA and ICH method validation guidelines. The proposed method is sensitive, accurate and precise, and could be applied to routine analysis in the pharmaceutical industry as well as to human plasma samples for bioequivalence and pharmacokinetic studies.
Keywords: Diclofenac sodium, Flurbiprofen, Tramadol, HPLC-UV detection, Validation.
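The detection limit reported at S/N = 3 is commonly cross-checked against the ICH calibration-based estimate LOD = 3.3σ/slope. A sketch with hypothetical calibration points, not the paper's data:

```python
import numpy as np

# Hypothetical calibration points: concentration (ng/ml) vs peak area.
conc = np.array([10.0, 20.0, 50.0, 100.0, 200.0, 500.0])
area = np.array([0.41, 0.82, 2.03, 4.10, 8.15, 20.40])

# Ordinary least-squares calibration line: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)
residual_sd = float(np.std(area - (slope * conc + intercept), ddof=2))

# ICH Q2-style estimates from the calibration residuals.
lod = 3.3 * residual_sd / slope    # limit of detection
loq = 10.0 * residual_sd / slope   # limit of quantitation
```

Both routes (signal-to-noise and residual-based) are accepted by ICH; agreement between them strengthens a validation report.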
129 Feature Analysis of Predictive Maintenance Models
Authors: Zhaoan Wang
Abstract:
Research in predictive maintenance modeling has improved in recent years, predicting failures and needed maintenance with high accuracy, saving cost and improving manufacturing efficiency. However, classic prediction models provide little insight into which features contribute most to a failure. By analyzing and quantifying feature importance in predictive maintenance models, cost savings can be optimized based on business goals. First, multiple classifiers are evaluated with cross-validation to predict the multiple classes of failures. Second, predictive performance with features provided by different feature selection algorithms is further analyzed. Third, features selected by different algorithms are ranked and combined based on their predictive power. Finally, the linear explainer SHAP (SHapley Additive exPlanations) is applied to interpret classifier behavior and provide further insight into the specific roles of features in both local predictions and global model behavior. The results of the experiments suggest that certain features play dominant roles in predictive models while others have significantly less impact on overall performance. Moreover, for multi-class prediction of machine failures, the most important features vary with the type of machine failure. The results may lead to improved productivity and cost savings by prioritizing sensor deployment, data collection, and data processing of the more important features over the less important ones.
Keywords: Automated supply chain, intelligent manufacturing, predictive maintenance, machine learning, feature engineering, model interpretation.
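Quantifying each feature's contribution, as SHAP does above, can be approximated more simply by permutation importance: shuffle one feature and measure the accuracy drop. A self-contained sketch on synthetic predictive-maintenance data (the linear classifier and feature layout are assumptions for illustration; the paper itself uses SHAP):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic predictive-maintenance table: 200 machines, 4 sensor features;
# by construction only features 0 and 1 actually drive the failure label.
X = rng.normal(0.0, 1.0, (200, 4))
y = (1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0.0, 0.3, 200) > 0).astype(int)

# Least-squares linear classifier as a stand-in for the trained model.
A = np.column_stack([np.ones(len(X)), X])
w, *_ = np.linalg.lstsq(A, 2 * y - 1, rcond=None)

def accuracy(Xm):
    pred = (np.column_stack([np.ones(len(Xm)), Xm]) @ w > 0).astype(int)
    return float((pred == y).mean())

# Permutation importance: shuffling an informative feature breaks its link
# to the label, so the accuracy drop measures the feature's contribution.
base = accuracy(X)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(base - accuracy(Xp))
```

As in the study's findings, the dominant feature shows a large accuracy drop while the uninformative ones show drops near zero.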
128 Determining the Maximum Lateral Displacement Due to Severe Earthquakes without Using Nonlinear Analysis
Authors: Mussa Mahmoudi
Abstract:
For seismic design, it is important to estimate the maximum lateral displacement (inelastic displacement) of structures due to severe earthquakes, for several reasons. Seismic design provisions estimate the maximum roof and storey drifts occurring in major earthquakes by amplifying the drifts obtained from elastic analysis under the seismic design load with a coefficient named the "displacement amplification factor", which is greater than one. This coefficient depends on various parameters, such as the ductility and overstrength factors. The present research aims to evaluate the value of the displacement amplification factor in seismic design codes and then proposes a value for estimating the maximum lateral structural displacement due to severe earthquakes without using non-linear analysis. Since, in seismic codes, the displacement amplification is related to the "force reduction factor", this relation has been adopted in the current study. Two methodologies are applied to evaluate the displacement amplification factor and its relation to the force reduction factor. In the first methodology, which applies to all structures, the ratio of the displacement amplification and force reduction factors is determined directly; in the second, applicable only to R/C moment-resisting frames, the ratio is obtained by calculating the two factors separately. The results of the two methodologies agree and place the ratio of the two factors between 1 and 1.2. The results also indicate that this ratio differs from the values proposed by seismic provisions such as NEHRP, IBC and the Iranian seismic code (standard no. 2800).
Keywords: Displacement amplification factor, Ductility factor, Force reduction factor, Maximum lateral displacement.
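In code terms, the amplification step described above is a one-line calculation: the elastic drift under reduced design forces is multiplied by the displacement amplification factor. A sketch with NEHRP-style example values (the numbers are illustrative, not the paper's):

```python
# Displacement amplification as used by seismic provisions: the storey drift
# from elastic analysis under REDUCED design forces is multiplied by Cd.
# All numbers below are illustrative values, not the paper's results.
def amplified_drift(delta_elastic, Cd, importance_factor=1.0):
    """Estimated maximum inelastic drift from an elastic design-level drift."""
    return Cd * delta_elastic / importance_factor

R = 8.0          # hypothetical force reduction factor
ratio = 1.1      # the paper's finding: Cd / R falls between 1 and 1.2
Cd = ratio * R   # implied displacement amplification factor

# 0.4 % elastic storey drift under reduced forces -> inelastic estimate.
drift = amplified_drift(delta_elastic=0.004, Cd=Cd)
```

A ratio Cd/R near 1 means the inelastic displacement roughly equals the elastic displacement the structure would have under unreduced forces, which is the practical content of the paper's proposal.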
127 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases
Authors: Mohammad A. Bani-Khaled
Abstract:
In this work we use the Discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical simulations obtained from finite element models. The outcomes of this analysis will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, in both large-scale structures (aeronautical structures) and nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to simulate the dynamics numerically. When using numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time-Proper Orthogonal Decomposition transform is a powerful tool for processing databases of the dynamics; it will be used to study the coupled dynamics of thin-walled basic structures. These structures are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry.
Keywords: Coupled dynamics, geometric complexity, Proper Orthogonal Decomposition (POD), thin walled beams.
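The POD step itself reduces to a singular value decomposition of the snapshot database: the left singular vectors are the proper orthogonal modes, and the squared singular values measure each mode's energy. A sketch on a synthetic two-mode snapshot matrix standing in for finite element output:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic snapshot database standing in for finite element output:
# 50 nodal DOFs x 200 time steps built from two coupled mode shapes.
t = np.linspace(0.0, 10.0, 200)
phi1 = np.sin(np.linspace(0.0, np.pi, 50))        # bending-like shape
phi2 = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))  # torsion-like shape
snapshots = (np.outer(phi1, np.sin(2.0 * t))
             + 0.3 * np.outer(phi2, np.sin(5.0 * t))
             + 0.01 * rng.normal(0.0, 1.0, (50, 200)))

# POD via SVD of the mean-centred snapshots: columns of U are the proper
# orthogonal modes; squared singular values give each mode's energy share.
centred = snapshots - snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
energy = s ** 2 / np.sum(s ** 2)
```

The energy spectrum is what quantifies the strength of coupling: if two POD modes both carry significant energy, the corresponding fields of motion are meaningfully coupled.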
126 Color Characteristics of Dried Cocoa Using Shallow Box Fermentation Technique
Authors: Khairul Bariah Sulaiman, Tajul Aris Yang
Abstract:
Fermentation is well known as an essential process for developing chocolate flavor in dried cocoa beans. Besides developing the precursors of cocoa flavor, it also induces color changes in the beans. The fermentation process is influenced by various factors, such as planting material, preconditioning of the cocoa pod and fermentation technique. Therefore, this study was conducted to evaluate the color of Malaysian cocoa beans and how the duration of pod storage and fermentation in a shallow box affect its color characteristics. Two factors were studied, i.e. duration of cocoa pod storage (0, 2, 4 and 6 days) and duration of cocoa fermentation (0, 1, 2, 3, 4 and 5 days). The experiment was arranged as a 4 × 6 factorial with 24 treatments in a Completely Randomised Design (CRD). The produced beans were inspected for color changes under artificial light during the cut test and divided into four color groups, namely fully brown, purple brown, fully purple and slaty. Cut tests indicated that cocoa beans dried directly without undergoing fermentation had the highest slaty percentage. However, applying pod storage before fermentation was found to decrease the slaty percentage. In contrast, the percentage of fully brown beans started to dominate after two days of fermentation, especially in the four- and six-day pod storage batches, whereas almost all batches had a fully purple percentage of less than 20%. Interestingly, purple brown beans were scattered throughout the bean batches without any specific trend. Meanwhile, statistical analysis using a General Linear Model showed that pod storage, rather than fermentation duration, had a significant effect on the color characteristics of the Malaysian dried beans.
Keywords: Cocoa beans, color, fermentation, shallow box.
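The cut-test grading described above is, computationally, a tally of beans per color group expressed as percentages of the batch. A minimal sketch for one hypothetical treatment batch (the counts are made up):

```python
from collections import Counter

# Hypothetical cut-test calls for one treatment batch (storage x fermentation);
# labels follow the study's four colour groups.
beans = (["fully brown"] * 62 + ["purple brown"] * 20
         + ["fully purple"] * 12 + ["slaty"] * 6)

counts = Counter(beans)
percent = {colour: 100.0 * n / len(beans) for colour, n in counts.items()}
```

One such percentage table per treatment is what feeds the 4 × 6 factorial GLM analysis.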
125 A Three Elements Vector Valued Structure’s Ultimate Strength-Strong Motion-Intensity Measure
Authors: A. Nicknam, N. Eftekhari, A. Mazarei, M. Ganjvar
Abstract:
This article presents an alternative collapse capacity intensity measure in three-element form, which is influenced by the spectral ordinates at periods longer than the first-mode period at near- and far-source sites. A parameter, denoted by β, is defined through which the effects of the spectral ordinates up to the effective period (2T1) on the intensity measure are taken into account. The methodology permits meeting the hazard-levelled target extreme event in both probabilistic and deterministic forms. A MATLAB code involving OpenSees is developed to calculate the collapse capacities of 8 archetype RC structures of 2 to 20 stories for the regression process. The incremental dynamic analysis (IDA) method is used to calculate the structures' collapse values, accounting for element stiffness and strength deterioration. The general near-field record set presented by FEMA is used in the series of nonlinear analyses. Eight linear relationships are developed for the 8 structures, with correlation coefficients up to 0.93. A collapse capacity near-field prediction equation is then developed, taking into account the results of the regression processes obtained from the 8 structures. The proposed prediction equation is validated against a set of actual near-field records, showing good agreement. Applying the proposed equation to four archetype RC structures demonstrated different collapse capacities at near-field sites compared to those of FEMA; the differences are believed to be due to accounting for the spectral shape effects.
Keywords: Collapse capacity, fragility analysis, spectral shape effects, IDA method.
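Each per-structure fit is an ordinary linear regression of collapse capacity on the intensity measure, with the correlation coefficient reported (the study finds r up to 0.93). A sketch on synthetic IDA-style points, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic IDA-style pairs for one archetype: beta-weighted intensity
# measure vs collapse capacity, with scatter (illustrative values only).
im = np.linspace(0.2, 2.0, 30)
collapse = 1.8 * im + 0.4 + rng.normal(0.0, 0.12, 30)

# Per-structure linear fit and its correlation coefficient, the quantity
# the study reports reaching up to 0.93 across its 8 archetypes.
slope, intercept = np.polyfit(im, collapse, 1)
r = float(np.corrcoef(im, collapse)[0, 1])
```

Pooling such per-structure fits over all archetypes is what yields the single near-field prediction equation described above.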
124 Support Vector Regression for Retrieval of Soil Moisture Using Bistatic Scatterometer Data at X-Band
Authors: Dileep Kumar Gupta, Rajendra Prasad, Pradeep Kumar, Varun Narayan Mishra, Ajeet Kumar Vishwakarma, Prashant Kumar Srivastava
Abstract:
An approach was evaluated for the retrieval of the soil moisture of a bare soil surface using bistatic scatterometer data in the angular range of 20° to 70° at VV- and HH-polarization. The microwave data were acquired by a specially designed X-band (10 GHz) bistatic scatterometer. Linear regression analysis between the scattering coefficients and the soil moisture content was done to select the most suitable incidence angle for retrieval of soil moisture content; the 25° incidence angle was found most suitable. Support vector regression was used to approximate the function described by the input-output relationship between the scattering coefficient and the corresponding measured values of soil moisture content. The performance of the support vector regression algorithm was evaluated by comparing the observed and estimated soil moisture content using the statistical performance indices %Bias, root mean squared error (RMSE) and Nash-Sutcliffe Efficiency (NSE). At HH-polarization, the values of %Bias, RMSE and NSE were 2.9451, 1.0986 and 0.9214, respectively; at VV-polarization, they were 3.6186, 0.9373 and 0.9428, respectively.
Keywords: Bistatic scatterometer, soil moisture, support vector regression, RMSE, %Bias, NSE.
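The three performance indices are straightforward to compute from observed and estimated moisture values; a sketch with hypothetical numbers (the formulas are the standard definitions of these indices, and the moisture values are made up):

```python
import numpy as np

def rmse(obs, est):
    return float(np.sqrt(np.mean((obs - est) ** 2)))

def percent_bias(obs, est):
    return float(100.0 * np.sum(est - obs) / np.sum(obs))

def nse(obs, est):
    """Nash-Sutcliffe Efficiency: 1 is a perfect match, <= 0 means the model
    is no better than predicting the observed mean."""
    return float(1.0 - np.sum((obs - est) ** 2)
                 / np.sum((obs - np.mean(obs)) ** 2))

# Hypothetical observed vs SVR-estimated volumetric soil moisture (%).
obs = np.array([12.0, 18.5, 25.0, 31.2, 36.8, 41.0])
est = np.array([13.1, 17.9, 24.2, 32.0, 35.9, 42.1])

scores = {"RMSE": rmse(obs, est),
          "%Bias": percent_bias(obs, est),
          "NSE": nse(obs, est)}
```

Reporting all three together is informative because RMSE measures scatter, %Bias measures systematic over- or under-estimation, and NSE measures skill relative to the observed mean.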
123 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry
Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich
Abstract:
The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. To assess the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, using only the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. These interferograms were obtained by means of a Mach-Zehnder interferometer using a beam of light from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured and analyzed, and its full width at half maximum (FWHM) was measured, using the Amcap and ImageJ software. The FWHM values were organized into three groups. The average of the FWHMs of group A shows almost linear behavior; it is therefore probable that the exposure time is not relevant when the oil is kept at constant temperature. Group B follows a slightly exponential model as the temperature rises between 373 K and 393 K. A Student's t-test shows, with 95% probability (0.05), the existence of variation in the molecular composition of the two samples. Furthermore, we found a correlation between the iodine indexes (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
Keywords: Food industry, interferometric, oils, quality control.
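Measuring the FWHM of a fringe profile, as done here with ImageJ, amounts to locating the two half-maximum crossings. A sketch that interpolates them linearly (assuming a single interior peak), checked against a Gaussian whose analytic FWHM is 2*sqrt(2 ln 2):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile, with linear
    interpolation of the two half-maximum crossings (peak assumed interior)."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # Left crossing: y rises through `half` between i0-1 and i0.
    xl = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    # Right crossing: y falls through `half` between i1 and i1+1.
    xr = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return float(xr - xl)

# Check against a Gaussian fringe profile with sigma = 1, whose analytic
# FWHM is 2*sqrt(2*ln 2), approximately 2.3548.
x = np.linspace(-5.0, 5.0, 1001)
width = fwhm(x, np.exp(-x ** 2 / 2.0))
```

Applied to an intensity profile extracted across an interferogram fringe, the same function gives the per-sample FWHM values that the project groups and compares.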