Search results for: optimum signal approximation
1296 The Analysis of Defects Prediction in Injection Molding
Authors: Mehdi Moayyedian, Kazem Abhary, Romeo Marian
Abstract:
This paper presents an evaluation of a plastic defect in injection molding, known as the short shot defect, before it occurs in the process. The aim of this paper is to evaluate the different parameters that affect the possibility of the short shot defect. The analysis of short shot possibility is conducted via SolidWorks Plastics and the Taguchi method to determine the most significant parameters. The Finite Element Method (FEM) is employed to analyze two circular flat polypropylene plates of 1 mm thickness. Filling time, part cooling time, pressure holding time and melt temperature are chosen as process parameters, and gate type as the geometric parameter. A methodology is presented herein to predict the possibility of short shot occurrence. The analysis determined that melt temperature is the most influential parameter affecting the possibility of the short shot defect, with a contribution of 74.25%, followed by filling time with a contribution of 22% and gate type with a contribution of 3.69%. It was also determined that the optimum levels of the parameters leading to a reduction in the possibility of short shot are gate type at level 1, filling time at level 3 and melt temperature at level 3. Finally, the most significant parameters affecting the possibility of short shot were determined to be melt temperature, filling time, and gate type.
Keywords: injection molding, plastic defects, short shot, Taguchi method
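The percentage contributions reported above come from a Taguchi analysis-of-means/ANOVA decomposition. The sketch below shows, under assumed data, how such contributions can be computed from an L9 orthogonal array; the array layout, factor names and response values are illustrative placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical L9 orthogonal array: 9 runs, 3 factors (gate type, filling time,
# melt temperature), each at 3 levels.  Responses are illustrative short-shot
# "possibility" scores, NOT the data of the paper.
levels = np.array([
    [1, 1, 1], [1, 2, 2], [1, 3, 3],
    [2, 1, 2], [2, 2, 3], [2, 3, 1],
    [3, 1, 3], [3, 2, 1], [3, 3, 2],
])
response = np.array([0.82, 0.55, 0.31, 0.78, 0.40, 0.85, 0.62, 0.90, 0.70])

grand_mean = response.mean()
ss_total = ((response - grand_mean) ** 2).sum()

for j, factor in enumerate(["gate type", "filling time", "melt temperature"]):
    ss_factor = 0.0
    for lvl in (1, 2, 3):
        sel = response[levels[:, j] == lvl]
        # sum of squares of the level means around the grand mean
        ss_factor += len(sel) * (sel.mean() - grand_mean) ** 2
    print(f"{factor}: contribution = {100 * ss_factor / ss_total:.2f} %")
```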
Procedia PDF Downloads 218
1295 Nonlinear Modelling of Sloshing Waves and Solitary Waves in Shallow Basins
Authors: Mohammad R. Jalali, Mohammad M. Jalali
Abstract:
The earliest theories of sloshing waves and solitary waves based on potential theory idealisations and irrotational flow have been extended to be applicable to more realistic domains. To this end, computational fluid dynamics (CFD) methods are widely used. Three-dimensional CFD methods, such as Navier-Stokes solvers with volume-of-fluid treatment of the free surface and Navier-Stokes solvers with mappings of the free surface, inherently impose high computational expense; therefore, considerable effort has gone into developing depth-averaged approaches. Examples of such approaches include the Green–Naghdi (GN) equations. In the Cartesian system, the GN velocity profile depends on the horizontal directions, the x-direction and the y-direction. The effect of the vertical direction (z-direction) is also taken into consideration by applying a weighting function in the approximation. GN theory considers the effect of vertical acceleration and the consequent non-hydrostatic pressure. Moreover, in GN theory, the flow is rotational. The present study illustrates the application of the GN equations to the propagation of sloshing waves and solitary waves. For this purpose, a GN equations solver is verified for the benchmark tests of Gaussian hump sloshing and solitary wave propagation in shallow basins. Analysis of the free surface sloshing of even harmonic components of an initial Gaussian hump demonstrates that the GN model gives predictions in satisfactory agreement with the linear analytical solutions. Discrepancies between the GN predictions and the linear analytical solutions arise from the effect of wave nonlinearities arising from the wave amplitude itself and wave-wave interactions. Numerically predicted solitary wave propagation indicates that the GN model produces simulations in good agreement with the analytical solution of the linearised wave theory. Comparison between the GN model numerical prediction and the result from perturbation analysis confirms that the nonlinear interaction between a solitary wave and a solid wall is satisfactorily modelled. Moreover, simulations of solitary wave propagation at an angle to the x-axis and of the interaction of solitary waves with each other are conducted to validate the developed model.
Keywords: Green–Naghdi equations, nonlinearity, numerical prediction, sloshing waves, solitary waves
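For reference, solitary-wave benchmarks of the kind described above are typically initialised with the classical first-order (weakly nonlinear) solitary-wave profile of amplitude H on still-water depth h; the standard expressions below are given for orientation and are not equations taken from the paper.

```latex
\eta(x,t) = H \,\mathrm{sech}^{2}\!\left[\sqrt{\frac{3H}{4h^{3}}}\,(x - c t)\right],
\qquad
c = \sqrt{g\,(h + H)}
```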
Procedia PDF Downloads 286
1294 Assessment of Pre-Processing Influence on Near-Infrared Spectra for Predicting the Mechanical Properties of Wood
Authors: Aasheesh Raturi, Vimal Kothiyal, P. D. Semalty
Abstract:
We studied the mechanical properties of Eucalyptus tereticornis using FT-NIR spectroscopy. Firstly, the spectra were pre-processed to eliminate useless information. Then, a prediction model was constructed by partial least squares regression. To study the influence of pre-processing on the prediction of mechanical properties in NIR analysis of wood samples, we applied various pretreatment methods such as straight line subtraction, constant offset elimination, vector normalization, min-max normalization, multiplicative scatter correction, first derivative and second derivative, as well as their combinations with other treatments, such as first derivative + straight line subtraction, first derivative + vector normalization and first derivative + multiplicative scatter correction. For each combination of pre-processing method and NIR region, the RMSECV, RMSEP and optimum number of factors/rank were obtained through the optimization process of model development. More than 350 combinations were obtained during the optimization process. More than one pre-processing method gave good calibration/cross-validation and prediction/test models, but only the best calibration/cross-validation and prediction/test models are reported here. The results show that one can safely use the NIR region between 4000 and 7500 cm-1 with the straight line subtraction, constant offset elimination, first derivative and second derivative pre-processing methods, which were found to be the most appropriate for model development.
Keywords: FT-NIR, mechanical properties, pre-processing, PLS
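A minimal sketch of the preprocessing-plus-PLS workflow described above is given below, assuming SciPy and scikit-learn; the spectra, property values, derivative window and number of latent factors are placeholders, not the study's data or settings.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: NIR spectra (n_samples x n_wavenumbers), y: a mechanical property.
# Random data stand in for the wood measurements here.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))            # hypothetical spectra
y = rng.normal(loc=10.0, scale=2.0, size=60)

# Pre-processing: first derivative (Savitzky-Golay) followed by vector
# normalisation, one of the combinations examined in the study.
X_d1 = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)
X_pre = X_d1 / np.linalg.norm(X_d1, axis=1, keepdims=True)

pls = PLSRegression(n_components=6)        # rank would be chosen by cross-validation
y_cv = cross_val_predict(pls, X_pre, y, cv=5).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmsecv:.3f}")
```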
Procedia PDF Downloads 362
1293 Implementation of a Novel Modified Multilevel Inverter Topology for Grid Connected PV System
Authors: Dhivya Balakrishnan, Dhamodharan Shanmugam
Abstract:
Multilevel converters offer high power capability, associated with lower output harmonics and lower commutation losses. Their main disadvantage is their complexity, requiring a great number of power devices and passive components, and rather complex control circuitry. This paper proposes a single-phase seven-level inverter for grid-connected PV systems with a novel pulse-width-modulated (PWM) control scheme. Three reference signals that are identical to each other, with an offset equivalent to the amplitude of the triangular carrier signal, were used to generate the PWM signals. The inverter is capable of producing seven output-voltage levels from the dc supply voltage. This paper proposes a new multilevel inverter topology using an H-bridge output stage with two bidirectional auxiliary switches. The new topology produces a significant reduction in the number of power devices and capacitors required to implement a multilevel output using the asymmetric cascade configuration.
Keywords: asymmetric cascade configuration, H-Bridge, multilevel inverter, Pulse Width Modulation (PWM)
Procedia PDF Downloads 357
1292 Intelligent Semi-Active Suspension Control of an Electric Model Vehicle System
Authors: Shiuh-Jer Huang, Yun-Han Yeh
Abstract:
A four-wheel-drive electric vehicle was built with hub DC motors and an FPGA embedded control structure. A 40-step manually adjustable motorcycle shock absorber was refitted with a DC motor driving mechanism to serve as a semi-active suspension system. Accelerometer and potentiometer sensors are installed to measure the sprung-mass acceleration and the suspension compression or rebound states for control purposes. An intelligent fuzzy logic controller was proposed to search in real time for the appropriate damping ratio based on the vehicle running condition. Then, a robust fuzzy sliding mode controller (FSMC) is employed to regulate the target damping ratio of the semi-active suspension at each wheel axis. Finally, different road surface conditions are chosen to evaluate the control performance of this semi-active suspension and to compare it with that of a passive system based on the wheel-axis acceleration signal.
Keywords: acceleration, FPGA, fuzzy sliding mode control, semi-active suspension
Procedia PDF Downloads 419
1291 Error Probability of Multi-User Detection Techniques
Authors: Komal Babbar
Abstract:
Multiuser detection is the intelligent estimation/demodulation of transmitted bits in the presence of Multiple Access Interference (MAI). The authors present the bit-error rate (BER) achieved by linear multi-user detectors: the matched filter (which treats the MAI as AWGN), the decorrelating detector and the MMSE detector. In this work, the authors investigate the bit error probability analysis for the matched filter, decorrelating and MMSE detectors. This problem arises in several practical CDMA applications where the receiver may not have full knowledge of the number of active users and their signature sequences. In particular, the behavior of MAI at the output of the multi-user detectors (MUD) is examined under various asymptotic conditions, including large signal-to-noise ratio, large near-far ratios and a large number of users. In the last section, the authors also show MATLAB simulation results for the multiuser detection techniques, i.e., matched filter, decorrelating and MMSE, for 2 users and 10 users.
Keywords: code division multiple access, decorrelating, matched filter, minimum mean square error (MMSE) detection, multiple access interference (MAI), multiuser detection (MUD)
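The three linear detectors compared above can be written in a few lines; the sketch below, with an assumed signature matrix, amplitudes and noise level, shows the matched filter, decorrelating and MMSE decisions for a small synchronous CDMA example (not the simulation setup of the paper).

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 2, 31                      # users, spreading-sequence length
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # signature matrix
A = np.diag([1.0, 2.0])           # received amplitudes (near-far situation)
sigma = 0.3                       # noise standard deviation

b = rng.choice([-1.0, 1.0], size=K)              # transmitted bits
r = S @ A @ b + sigma * rng.normal(size=N)       # received chip vector

y_mf = S.T @ r                                   # matched filter (MAI treated as noise)
R = S.T @ S                                      # cross-correlation matrix
y_dec = np.linalg.solve(R, y_mf)                 # decorrelating detector
y_mmse = np.linalg.solve(R + sigma**2 * np.linalg.inv(A @ A), y_mf)  # MMSE detector

for name, y in [("MF", y_mf), ("Decorrelator", y_dec), ("MMSE", y_mmse)]:
    print(name, np.sign(y))
```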
Procedia PDF Downloads 528
1290 Optimizing Oxidation Process Parameters of Al-Li Base Alloys Using Taguchi Method
Authors: Muna K. Abbass, Laith A. Mohammed, Muntaha K. Abbas
Abstract:
The oxidation of Al-Li base alloys containing small amounts of rare earth (RE) oxides, such as 0.2 wt% Y2O3 and 0.2 wt% Nd2O3 particles, has been studied at temperatures of 400°C, 500°C and 550°C for 60 h in dry air. The alloys used in this study were prepared by melting and casting in a permanent steel mould under a controlled atmosphere. Identification of the oxidation kinetics was carried out using weight gain/surface area (∆W/A) measurements, while scanning electron microscopy (SEM) and X-ray diffraction analysis were used for the microstructural morphologies and phase identification of the oxide scales. It was observed that the oxidation kinetics of all studied alloys follow the parabolic law in most experimental tests at the different oxidation temperatures. It was also found that the alloy containing 0.2 wt% Y2O3 particles possesses the lowest oxidation rate and shows great improvement in oxidation resistance compared to the alloy containing 0.2 wt% Nd2O3 particles and the Al-Li base alloy. In this work, the Taguchi method is applied to estimate the optimum weight gain/area (∆W/A) parameter in the oxidation process of Al-Li base alloys so as to obtain a minimum thickness of the oxidation layer. The Taguchi method is used to formulate the experimental layout, to analyse the effect of each parameter (time, temperature and alloy type) on the oxidation, and to predict the optimal choice for each parameter; the effect of these parameters on the weight gain/area (∆W/A) parameter is then analysed. The analysis shows that temperature significantly affects the (∆W/A) parameter.
Keywords: Al-Li base alloy, oxidation, Taguchi method, temperature
Procedia PDF Downloads 372
1289 Investigating the Experiences of Higher Education Academics on the Blended Approach Used during the Induction Course
Authors: Ann-May Marais
Abstract:
South African higher education institutions are following the global adoption of a blended approach to teaching and learning. Blended learning is viewed as a transformative teaching-learning approach, as it provides students with the optimum experience by mixing the best of face-to-face and online learning. Although academics realise the benefits of blended learning, they find it challenging and time-consuming to implement blended strategies. Professional development is a critical component of the adoption of higher education teaching-learning approaches. The Institutional course for higher education academics offered at a South African University was designed in a blended model, implemented and evaluated. This paper reports on a study that investigated the experiences of academics on the blended approach used during the induction course. A qualitative design-based research methodology was employed, and data was collected using participant feedback and document analysis. The data gathered from each of the four ICNL offerings were used to inform the design of the next course. Findings indicated that lecturers realised that blended learning could cater to student diversity, different learning styles, engagement, and innovation. Furthermore, it emerged that the course has to cater for diversity in technology proficiency and readiness of participants. Participants also require ongoing support in technology usage and discipline-specific blended learning workshops. This paper contends that the modelling of a blended approach to professional development can be an effective way to motivate academics to apply blended learning in their teaching-learning experiences.Keywords: blended learning, professional development, induction course, integration of technology
Procedia PDF Downloads 162
1288 Effect of Wind and Humidity on Microwave Links in North West Libya
Authors: M. S. Agha, A. M. Eshahiry, S. A. Aldabbar, Z. M. Alshahri
Abstract:
Microwave propagation is affected by rain and dust particles, which cause signal attenuation and de-polarization. Computations of these effects require knowledge of the propagation characteristics of microwave and millimeter-wave energy in the climate conditions of the studied region. This paper presents the effect of wind and humidity on wireless communication, such as microwave links, in the North West region of Libya (Al-Khoms). The experimental procedure is carried out on three selected antenna towers (Nagaza station, Al-Khoms center station, Al-Khoms gateway station) to determine the attenuation loss per unit length and the change in cross-polarization discrimination (XPD). Dust particles are collected across the region of the study to measure the particle size distribution (PSD), calculate the concentration and chemically analyze the contents, from which the dielectric constant can be calculated. The results show that humidity, dust, antenna height and visibility affect both attenuation and phase shift; hence, a few considerations must be taken into account in the communication power budget.
Keywords: attenuation, scattering, transmission loss
Procedia PDF Downloads 215
1287 Effects of Fenugreek Seed Extract on in vitro Maturation and Subsequent Development of Sheep Oocytes
Authors: Ibrahim A. H. Barakat, Ahmed R. Al-Himaidi
Abstract:
The present study was conducted to determine the role and optimum concentration of fenugreek seed extract during in-vitro maturation on in-vitro maturation and developmental competence of Neaimi sheep oocytes following in-vitro fertilization. The Cumulus Oocyte Complexes (COCs) collected from sheep slaughterhouse ovaries were randomly divided into three groups, and they were matured for 24 hrs. in maturation medium containing fenugreek seed extract (0, 1 and 10 µg ml-1). Oocytes of a control group were matured in a medium containing 1 µg ml-1 estradiol 17β. After maturation, half of oocytes were fixed and stained for evaluation of nuclear maturation. The rest of oocytes were fertilized in vitro with fresh semen, then cultured for 9 days for the assessment of the developmental capacity of the oocytes. The results showed that the mean values of oocytes with expanded cumulus cells percentage were not significantly different among all groups (P < 0.05). But nuclear maturation rate of oocytes matured with 10 µg ml-1 fenugreek seed extract was significantly higher than that of the control group. The maturation rate and development to morula and blastocyst stage for oocytes matured at 10 µg ml-1 fenugreek seed extract was significantly higher than those matured at 1µg ml-1 of fenugreek seed extract and the control group. In conclusion, better maturation and developmental capacity rate to morula and blastocyst stage were obtained by the addition of 10 µg ml-1 fenugreek seed extract to maturation medium than addition of 1 µg ml-1 estradiol-17β (P < 0.05).Keywords: fenugreek seed extract, in vitro maturation, sheep oocytes, in vitro fertilization, embryo development
Procedia PDF Downloads 392
1286 Pineapple Waste Valorization through Biogas Production: Effect of Substrate Concentration and Microwave Pretreatment
Authors: Khamdan Cahyari, Pratikno Hidayat
Abstract:
Indonesia produced more than 1.8 million tons of pineapple fruit in 2013, part of which turned into waste due to industrial processing, deterioration and low quality. It was estimated that this waste accounted for more than 40 percent of the harvested fruit. In addition, pineapple leaves were one of the biomass wastes from pineapple farming land, which contributed an even higher percentage. Most of the waste was simply dumped into landfill areas without proper pretreatment, causing severe environmental problems. This research was meant to valorize the pineapple waste by producing the renewable energy source biogas through a mesophilic (30℃) anaerobic digestion process. In particular, it aimed to investigate the effect of the substrate concentration of pineapple fruit waste, i.e. peel and core, as well as the effect of microwave pretreatment of pineapple leaf waste. The substrate concentration was set at 12, 24 and 36 g VS/liter of culture, whereas the 800-Watt microwave pretreatment was conducted for 2 and 5 minutes. It was noticed that optimum biogas production was obtained at a concentration of 24 g VS/l with a biogas yield of 0.649 liter/g VS (45%v CH4), whereas microwave pretreatment of 2 minutes duration performed better compared to 5 minutes due to the shorter exposure to microwave heat. These results suggest that valorization of pineapple waste could be carried out through biogas production at the aforementioned process conditions. Application of this method is able both to reduce the environmental problem of the waste and to produce the renewable energy source biogas to fulfill the local energy demand of pineapple farming areas.
Keywords: pineapple waste, substrate concentration, microwave pretreatment, biogas, anaerobic digestion
Procedia PDF Downloads 580
1285 Development of Numerical Method for Mass Transfer across the Moving Membrane with Selective Permeability: Approximation of the Membrane Shape by Level Set Method for Numerical Integral
Authors: Suguru Miyauchi, Toshiyuki Hayase
Abstract:
Biological membranes have selective permeability, and the capsules or cells enclosed by the membrane show the deformation by the osmotic flow. This mass transport phenomenon is observed everywhere in a living body. For the understanding of the mass transfer in a body, it is necessary to consider the mass transfer phenomenon across the membrane as well as the deformation of the membrane by a flow. To our knowledge, in the numerical analysis, the method for mass transfer across the moving membrane has not been established due to the difficulty of the treating of the mass flux permeating through the moving membrane with selective permeability. In the existing methods for the mass transfer across the membrane, the approximate delta function is used to communicate the quantities on the interface. The methods can reproduce the permeation of the solute, but cannot reproduce the non-permeation. Moreover, the computational accuracy decreases with decreasing of the permeable coefficient of the membrane. This study aims to develop the numerical method capable of treating three-dimensional problems of mass transfer across the moving flexible membrane. One of the authors developed the numerical method with high accuracy based on the finite element method. This method can capture the discontinuity on the membrane sharply due to the consideration of the jumps in concentration and concentration gradient in the finite element discretization. The formulation of the method takes into account the membrane movement, and both permeable and non-permeable membranes can be treated. However, searching the cross points of the membrane and fluid element boundaries and splitting the fluid element into sub-elements are needed for the numerical integral. Therefore, cumbersome operation is required for a three-dimensional problem. In this paper, we proposed an improved method to avoid the search and split operations, and confirmed its effectiveness. The membrane shape was treated implicitly by introducing the level set function. As the construction of the level set function, the membrane shape in one fluid element was expressed by the shape function of the finite element method. By the numerical experiment, it was found that the shape function with third order appropriately reproduces the membrane shapes. The same level of accuracy compared with the previous method using search and split operations was achieved by using a number of sampling points of the numerical integral. The effectiveness of the method was confirmed by solving several model problems.Keywords: finite element method, level set method, mass transfer, membrane permeability
Procedia PDF Downloads 250
1284 A 1.8 GHz to 43 GHz Low Noise Amplifier with 4 dB Noise Figure in 0.1 µm Gallium Arsenide Technology
Authors: Mantas Sakalas, Paulius Sakalas
Abstract:
This paper presents the analysis and design of an ultra-wideband 1.8 GHz to 43 GHz Low Noise Amplifier (LNA) in 0.1 μm Gallium Arsenide (GaAs) pseudomorphic High Electron Mobility Transistor (pHEMT) technology. Feedback-based bandwidth extension techniques are analyzed and, based on the outcome, a two-stage LNA is designed. The impedance fine-tuning is implemented using Transmission Line (TL) structures. The measured performance shows good agreement with the simulation results and outstanding wideband noise matching. The measured small-signal gain was 12 dB, and a 3 dB gain flatness in the range from 1.8 to 43 GHz was reached. The noise figure was below 4 dB over almost the entire frequency band of 1.8 GHz to 43 GHz, the output power at the 1 dB compression point was 6 dBm, and the DC power consumption was 95 mW. To the best knowledge of the authors, the designed LNA outperforms the State-of-the-Art (SotA) reported LNA designs in terms of the combined parameters of noise figure within the addressed ultra-wide 3 dB bandwidth, linearity and DC power consumption.
Keywords: feedback amplifiers, GaAs pHEMT, monolithic microwave integrated circuit, LNA, noise matching
Procedia PDF Downloads 216
1283 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy
Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon
Abstract:
Purpose: Inorganic scintillating dosimetry is the most recent promising technique to solve several dosimetric issues and provide quality assurance in radiation therapy. Despite several advantages, the major issue of using scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this research work is to evaluate the performance of a novel infrared inorganic scintillator detector (IR-ISD) in the radiation therapy treatment to ensure Cerenkov free signal and the best matches between the delivered and prescribed doses during treatment. Methods: A simple and small-scale infrared inorganic scintillating detector of 100 µm diameter with a sensitive scintillating volume of 2x10-6 mm3 was developed. A prototype of the dose verification system has been introduced based on PTIR1470/F (provided by Phosphor Technology®) material used in the proposed novel IR-ISD. The detector was tested on an Elekta LINAC system tuned at 6 MV/15MV and a brachytherapy source (Ir-192) used in the patient treatment protocol. The associated dose rate was measured in count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20ph/s). Overall measurements were performed in IBATM water tank phantoms by following international Technical Reports series recommendations (TRS 381) for radiotherapy and TG43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters such as PDD, beam profiling, Cerenkov measurement, dose linearity, dose rate linearity repeatability, and scintillator stability. Finally, a comparative study is also shown using a reference microdiamond dosimeter, Monte-Carlo (MC) simulation, and data from recent literature. Results: This study is highlighting the complete removal of the Cerenkov effect especially for small field radiation beam characterization. The detector provides an entire linear response with the dose in the 4cGy to 800 cGy range, independently of the field size selected from 5 x 5 cm² down to 0.5 x 0.5 cm². A perfect repeatability (0.2 % variation from average) with day-to-day reproducibility (0.3% variation) was observed. Measurements demonstrated that ISD has superlinear behavior with dose rate (R2=1) varying from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water present identical behavior with a build-up maximum depth dose at 15 mm for different small fields irradiation. A low dimension of 0.5 x 0.5 cm² field profiles have been characterized, and the field cross profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02% while having a very low convolution effect, thanks to lower sensitive volume. Finally, during brachytherapy, a comparison with MC simulations shows that considering energy dependency, measurement agrees within 0.8% till 0.2 cm source to detector distance. Conclusion: The proposed scintillating detector in this study shows no- Cerenkov radiation and efficient performance for several radiation therapy measurement parameters. Therefore, it is anticipated that the IR-ISD system can be promoted to validate with direct clinical investigations, such as appropriate dose verification and quality control in the Treatment Planning System (TPS).Keywords: IR-Scintillating detector, dose measurement, micro-scintillators, Cerenkov effect
Procedia PDF Downloads 182
1282 Development of Sustainable Building Environmental Model (SBEM) in Hong Kong
Authors: Kwok W. Mui, Ling T. Wong, F. Xiao, Chin T. Cheung, Ho C. Yu
Abstract:
This study addresses the concept of the Sustainable Building Environmental Model (SBEM), developed to optimize energy consumption in air conditioning and ventilation (ACV) systems without any deterioration of indoor environmental quality (IEQ). The SBEM incorporates two main components: an adaptive comfort temperature control module (ACT) and a new carbon dioxide demand control module (nDCV). These two modules take an innovative approach to maintaining satisfaction with the IEQ at optimum energy consumption, and they provide a rational basis for effective control. A total of 2133 sets of measurement data of indoor air temperature (Ta), relative humidity (Rh) and carbon dioxide concentration (CO2) were collected in some Hong Kong offices to investigate the potential of integrating the SBEM. A simulation was used to evaluate the dynamic performance of the energy and air conditioning system with the integration of the SBEM in an air-conditioned building. It gives a clear picture of the control strategies and allows the controllers to be pre-tuned before being utilized in real systems. With the integration of the SBEM, it was possible to save up to 12.3% of the overall electricity consumption in simulation and 15% in field measurement, while maintaining the average carbon dioxide concentration within 1000 ppm and occupant dissatisfaction below 20%.
Keywords: sustainable building environmental model (SBEM), adaptive comfort temperature (ACT), new demand control ventilation (nDCV), energy saving
Procedia PDF Downloads 636
1281 The TiO2 Refraction Film for CsI Scintillator
Authors: C. C. Chen, C. W. Hun, C. J. Wang, C. Y. Chen, J. S. Lin, K. J. Huang
Abstract:
Cesium iodide (CsI) melt was injected into an anodic aluminum oxide (AAO) template and solidified into CsI columns. The controllable AAO channel size (10~500 nm) makes it possible to obtain CsI columns from 10 to 500 nm in diameter. In order to shorten the light path from the top to the bottom of each single CsI column, the AAO template was coated with a TiO2 nano-film. The TiO2 film acts as a refraction film and gives the X-ray a shorter irradiation path in the CsI crystal, producing a stronger photo-electron signal. When the incident light passes from air (R=1.0) to the first CsI surface (R=1.84), the first refraction happens; the refracted light continues into the TiO2 film (R=2.88) and produces the second refraction at a low angle. The light then continues into the AAO wall (R=1.78) and produces the third refraction, after which refractions between the CsI and the AAO wall (R=1.78) produce the fourth refraction. After the refractions through the CsI and the TiO2 film, the incident light arrives at the second CsI surface. Therefore, the TiO2 film provides a shorter refraction path for the incident light and increases the photo-electron conversion efficiency.
Keywords: cesium iodide, anodic aluminum oxide (AAO), TiO2, refraction, X-ray
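Taking the quoted R values as refractive indices, the successive refraction angles through the air/CsI/TiO2/AAO stack follow from Snell's law; the sketch below assumes an arbitrary 30-degree incidence angle purely for illustration.

```python
import numpy as np

# Refractive indices quoted in the abstract: air -> CsI -> TiO2 film -> AAO wall.
layers = [("air", 1.00), ("CsI", 1.84), ("TiO2", 2.88), ("AAO", 1.78)]

theta = np.radians(30.0)                 # hypothetical incidence angle in air
for (name_in, n_in), (name_out, n_out) in zip(layers, layers[1:]):
    # Snell's law: n_in * sin(theta_in) = n_out * sin(theta_out)
    theta = np.arcsin(np.clip(n_in * np.sin(theta) / n_out, -1.0, 1.0))
    print(f"{name_in} -> {name_out}: refraction angle = {np.degrees(theta):.2f} deg")
```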
Procedia PDF Downloads 425
1280 Comparative Growth Rates of Treculia africana Decne: Embryo in Varied Strengths of Murashige and Skoog Basal Medium
Authors: Okafor C. Uche, Agbo P. Ejiofor, Okezie C. Eziuche
Abstract:
This study provides a regeneration protocol for Treculia africana Decne (an endangered plant) through embryo culture. Mature zygotic embryos of T. africana were excised from the seeds aseptically and cultured on varied strengths (full, half and quarter) of Murashige and Skoog (MS) basal medium supplemented. All treatments experienced 100±0.00 percent sprouting except for half and quarter strengths. Plantlets in MS full strength had the highest fresh weight, leaf area, and longest shoot length when compared to other treatments. All explants in full, half, quarter strengths and control had the same number of leaves and sprout rate. Between the treatments, there was a significant difference (P>0.05) in their effect on the length of shoot and root, number of adventitious root, leaf area, and fresh weight. Full strength had the highest mean value in all the above-mentioned parameters and differed significantly (P>0.05) from others except in shoot length, number of adventitious roots, and root length where it did not differ (P<0.05) from half strength. The result of this study indicates that full strength MS basal medium offers a better option for the optimum growth for Treculia africana regeneration in vitro.Keywords: medium strengths, Murashige and Skoog, Treculia africana, zygotic embryos
Procedia PDF Downloads 254
1279 Design, Analysis and Optimization of Space Frame for BAJA SAE Chassis
Authors: Manoj Malviya, Shubham Shinde
Abstract:
The present study focuses on the determination of torsional stiffness of a space frame chassis and comparison of elements used in the Finite Element Analysis of frame. The study also discusses various concepts and design aspects of a space frame chassis with the emphasis on their applicability in BAJA SAE vehicles. Torsional stiffness is a very important factor that determines the chassis strength, vehicle control, and handling. Therefore, it is very important to determine the torsional stiffness of the vehicle before designing an optimum chassis so that it should not fail during extreme conditions. This study determines the torsional stiffness of frame with respect to suspension shocks, roll-stiffness and anti-roll bar rates. A spring model is developed to study the effects of suspension parameters. The engine greatly contributes to torsional stiffness, and therefore, its effects on torsional stiffness need to be considered. Deflections in the tire have not been considered in the present study. The proper element shape should be selected to analyze the effects of various loadings on chassis while implementing finite element methods. The study compares the accuracy of results and computational time for different element types. Shape functions of these elements are also discussed. Modelling methodology is discussed for the multibody analysis of chassis integrated with suspension arms and engine. Proper boundary conditions are presented so as to replicate the real life conditions.Keywords: space frame chassis, torsional stiffness, multi-body analysis of chassis, element selection
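One common way to extract the torsional stiffness from such an FE model is to apply an equal-and-opposite force couple at the front suspension pickups with the rear fixed and evaluate K = T/θ; the sketch below uses assumed numbers, not the study's results.

```python
import numpy as np

# Hypothetical FEA output for a torsion load case: equal and opposite vertical
# forces F applied at the front suspension pickups, rear pickups fixed.
F = 500.0            # N, applied at each front pickup
track = 1.20         # m, lateral distance between the two loaded pickups
dz_left, dz_right = 0.0042, 0.0039   # m, vertical deflections from the FE model

torque = F * track                                   # applied couple, N*m
twist = np.arctan((dz_left + dz_right) / track)      # twist angle, rad
k_torsional = torque / np.degrees(twist)             # N*m per degree of twist

print(f"Torsional stiffness ~ {k_torsional:.0f} N*m/deg")
```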
Procedia PDF Downloads 354
1278 Effect of Number of Baffles on Pressure Drop and Heat Transfer in a Shell and Tube Heat Exchanger
Authors: A. Falavand Jozaei, A. Ghafouri, M. Mosavi Navaei
Abstract:
In this paper, for a given heat duty, the effect of the number of baffles on pressure drop and heat transfer is studied in a STHX (Shell and Tube Heat Exchanger) with single segmental baffles. The effect of the number of baffles, from 9 to 52 (baffle spacing variations from 4 to 24 inches), on the ratio of the OHTC (Overall Heat Transfer Coefficient) to the pressure drop (U/Δp ratio) is examined. The results show that the U/Δp ratio is low when the baffle spacing is at its minimum (4 inches), because the pressure drop is high even though the heat transfer coefficient is very significant. Then, with an increase in baffle spacing, the pressure drop rapidly decreases and the OHTC also decreases, but the decrease in OHTC is smaller than that in pressure drop, so the U/Δp ratio increases. When the baffle spacing is increased beyond 12 inches, the variation in pressure drop becomes gradual and approximately constant while the OHTC keeps decreasing; consequently, the U/Δp ratio decreases again. If the baffle spacing reaches 24 inches, the STHX has the minimum pressure drop, but the OHTC decreases, so the required heat transfer surface increases and the U/Δp ratio decreases. For baffle spacing greater than 12 inches, the variation of the shell-side pressure drop is negligible. Therefore, the optimum baffle spacing is suggested to be between 8 and 12 inches (43 to 63 percent of the inside shell diameter) for a sufficient heat duty and a low pressure drop.
Keywords: shell and tube heat exchanger, single segmental baffle, overall heat transfer coefficient, pressure drop
Procedia PDF Downloads 546
1277 Discovering Event Outliers for Drug as Commercial Products
Authors: Arunas Burinskas, Aurelija Burinskiene
Abstract:
On average, ten percent of drugs (commercial products) are not available in pharmacies due to shortage. A shortage event unbalances sales and requires a recovery period, which is too long. Therefore, one of the critical issues is that pharmacies do not record potential sales transactions during shortage and recovery periods. The authors suggest estimating outliers during shortage and recovery periods. To shorten the recovery period, the authors suggest using a prediction of average sales per sales day, which helps to protect the data from being skewed downwards or upwards. The authors use an outlier visualization method across different drugs and apply the Grubbs test for significance evaluation. The researched sample is 100 drugs in a one-month time frame. The authors detected that products with high demand variability had outliers. Among the analyzed drugs, which are commercial products: i) high demand variability drugs have a one-week shortage period, and the probability of facing a shortage is equal to 69.23%; ii) mid demand variability drugs have a three-day shortage period, and the likelihood of falling into deficit is equal to 34.62%. To avoid shortage events and minimize the recovery period, the real data must be set up. Even though there are some outlier detection methods for drug data cleaning, they have not been used for the minimization of the recovery period once a shortage has occurred. The authors use the Grubbs' test, a real-life data cleaning method, for outlier adjustment. In the paper, the outlier adjustment method is applied with a confidence level of 99%. In practice, the Grubbs' test has been used to detect outliers for cancer drugs, with reported positive results. The Grubbs' test is applied to detect outliers which exceed the boundaries of the normal distribution. The result is a probability that indicates the core data of actual sales. The application of the outlier test method helps to represent the difference between the mean of the sample and the most extreme data, considering the standard deviation. The test detects one outlier at a time, with different probabilities, from a data set with an assumed normal distribution. Based on the approximation data, the authors constructed a framework for scaling potential sales and estimating outliers with the Grubbs' test method. The suggested framework is applicable during the shortage event and recovery periods. The proposed framework has practical value and could be used for the minimization of the recovery period required after the occurrence of a shortage event.
Keywords: drugs, Grubbs' test, outlier, shortage event
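A sketch of the two-sided Grubbs' test at the 99% confidence level, removing one outlier at a time as described above, is given below; the daily sales series is invented for illustration and is not the paper's data.

```python
import numpy as np
from scipy import stats

def grubbs_remove_outliers(x, alpha=0.01):
    """Iteratively remove the single most extreme value while Grubbs' test
    rejects it (two-sided test, normality assumed)."""
    x = np.asarray(x, dtype=float)
    outliers = []
    while x.size > 2:
        mean, std = x.mean(), x.std(ddof=1)
        idx = np.argmax(np.abs(x - mean))
        g = abs(x[idx] - mean) / std
        n = x.size
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
        if g <= g_crit:
            break
        outliers.append(x[idx])
        x = np.delete(x, idx)
    return x, outliers

# Hypothetical daily sales of one drug; the near-zero days mimic a shortage.
sales = [14, 15, 13, 16, 15, 14, 0, 1, 0, 15, 14, 16, 42]
cleaned, removed = grubbs_remove_outliers(sales)
print("removed:", removed)
```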
Procedia PDF Downloads 134
1276 A Hybrid Digital Watermarking Scheme
Authors: Nazish Saleem Abbas, Muhammad Haris Jamil, Hamid Sharif
Abstract:
Digital watermarking is a technique that allows an individual to add and hide secret information, a copyright notice, or another verification message inside a digital audio, video, or image file. Today, with the advancement of technology, modern healthcare systems in many countries manage patients' diagnostic information digitally. When transmitted between hospitals through the internet, the medical data become vulnerable to attacks and require security and confidentiality. Digital watermarking techniques are used in order to ensure the authenticity, security and management of medical images and related information. This paper proposes a watermarking technique that embeds a watermark in medical images imperceptibly and securely. In this work, digital watermarking of medical images is carried out using the Least Significant Bit (LSB) with the Discrete Cosine Transform (DCT). The proposed methods of embedding and extracting a watermark in a watermarked image are performed in the frequency domain using the LSB with an XOR operation. The quality of the watermarked medical image is measured by the peak signal-to-noise ratio (PSNR). It was observed that the watermarked medical image obtained by performing the XOR operation between DCT and LSB survived a compression attack, with a PSNR of up to 38.98 dB.
Keywords: watermarking, image processing, DCT, LSB, PSNR
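One plausible reading of the embedding step described above (an assumption, not necessarily the authors' exact scheme) is to XOR the watermark bit into the least significant bit of a quantised mid-frequency DCT coefficient and to measure the resulting degradation with PSNR; a sketch on a single 8x8 block follows.

```python
import numpy as np
from scipy.fft import dctn, idctn

def psnr(a, b):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def embed_bit(block, bit, pos=(3, 2), step=16.0):
    """Embed one watermark bit: XOR it into the LSB of a quantised
    mid-frequency DCT coefficient (coefficient choice and step are assumptions)."""
    c = dctn(block.astype(float), norm="ortho")
    q = int(round(c[pos] / step))
    q = (q & ~1) | ((q & 1) ^ bit)        # XOR the watermark bit into the LSB
    c[pos] = q * step
    return np.clip(idctn(c, norm="ortho"), 0, 255)

rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(8, 8)).astype(np.uint8)   # stand-in image tile
marked = embed_bit(image, bit=1)
print(f"PSNR = {psnr(image, marked):.2f} dB")
```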
Procedia PDF Downloads 47
1275 Analysis of a Lignocellulose Degrading Microbial Consortium to Enhance the Anaerobic Digestion of Rice Straws
Authors: Supanun Kangrang, Kraipat Cheenkachorn, Kittiphong Rattanaporn, Malinee Sriariyanun
Abstract:
Rice straw is a lignocellulosic biomass which can be utilized as a substrate for biogas production. However, due to the properties and composition of rice straw, it is difficult for hydrolysis enzymes to degrade it. One of the pretreatment methods that modifies such properties of lignocellulosic biomass is the application of lignocellulose-degrading microbial consortia. The aim of this study is to investigate the effect of microbial consortia on enhancing biogas production. To select the most efficient consortium, cellulase enzymes were extracted and their activities were analyzed. The results suggested that the microbial consortium culture obtained from cattle manure is the best candidate compared to those from decomposed wood and horse manure. A microbial consortium isolated from cattle manure was then mixed with anaerobic sludge and used as the inoculum for biogas production. The optimal conditions for biogas production were investigated using response surface methodology (RSM). The tested parameters were the ratio of the amount of isolated microbial consortium to the amount of anaerobic sludge (MI:AS), the substrate-to-inoculum ratio (S:I) and the temperature. The regression coefficient of the fitted model was R2 = 0.7661, which is high enough to support the significance of the model. The highest cumulative biogas yield was 104.6 ml/g rice straw at the optimum MI:AS ratio, S:I ratio and temperature of 2.5:1, 15:1 and 44°C, respectively.
Keywords: lignocellulolytic biomass, microbial consortium, cellulase, biogas, Response Surface Methodology (RSM)
Procedia PDF Downloads 398
1274 Emotiv EPOC BCI Matrix Speller Based on Single Emokey
Authors: S. M. Abdullah Al Mamun
Abstract:
Human Computer Interaction (HCI) is an excellent area for researchers aiming to make daily life simpler and faster. The hardware equipment necessary for any BCI is generally expensive and not affordable for most people. Emotiv is one of the solutions to this problem; it can provide electroencephalography (EEG) signals and explain brain activities. A BCI virtual speller is one of the important applications for people who have lost their hand function or speaking ability because of disease or an unexpected accident. In this paper, a matrix speller has been designed for the first time for Bengali-speaking people around the world. Bengali is one of the most commonly spoken languages, and many disabled persons among its speakers will be able to express their wishes in their mother tongue. This application is also usable for social networks and daily life communications. For this virtual keyboard, the well-known matrix speller method with column flashing is applied and controlled by a single Emokey only. Emokey is a great feature which translates emotional states into application inputs. In this paper, it is shown that the ITR (Information Transfer Rate) was 29.4 bits/min and the typing speed reached up to 7.43 char/min.
Keywords: brain computer interface, Emotiv EPOC, EEG, virtual keyboard, matrix speller
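Speller throughput figures such as the ITR quoted above are usually computed with the Wolpaw formula; the sketch below uses illustrative numbers (matrix size, accuracy and selection rate are assumptions, not the measured values of the paper).

```python
import math

def wolpaw_itr(n_symbols, accuracy, selections_per_min):
    """Information transfer rate in bits/min (Wolpaw formula)."""
    p, n = accuracy, n_symbols
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# Illustrative numbers: a 6x6 matrix speller, 90 % selection accuracy,
# 8 selections per minute (NOT the values measured in the paper).
print(f"{wolpaw_itr(36, 0.90, 8):.1f} bits/min")
```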
Procedia PDF Downloads 308
1273 Analysis of a Discrete-time Geo/G/1 Queue Integrated with (s, Q) Inventory Policy at a Service Facility
Authors: Akash Verma, Sujit Kumar Samanta
Abstract:
This study examines a discrete-time Geo/G/1 queueing-inventory system attached with (s, Q) inventory policy. Assume that the customers follow the Bernoulli process on arrival. Each customer demands a single item with arbitrarily distributed service time. The inventory is replenished by an outside supplier, and the lead time for the replenishment is determined by a geometric distribution. There is a single server and infinite waiting space in this facility. Demands must wait in the specified waiting area during a stock-out period. The customers are served on a first-come-first-served basis. With the help of the embedded Markov chain technique, we determine the joint probability distributions of the number of customers in the system and the number of items in stock at the post-departure epoch using the Matrix Analytic approach. We relate the system length distribution at post-departure and outside observer's epochs to determine the joint probability distribution at the outside observer's epoch. We use probability distributions at random epochs to determine the waiting time distribution. We obtain the performance measures to construct the cost function. The optimum values of the order quantity and reordering point are found numerically for the variety of model parameters.Keywords: discrete-time queueing inventory model, matrix analytic method, waiting-time analysis, cost optimization
Procedia PDF Downloads 44
1272 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms
Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov
Abstract:
The article presents a parallel iterative solver for large sparse linear systems which can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on multi-CPU/multi-GPU clusters. For example, most attempts to implement the classical conjugate gradient method at best kept the solution time constant as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration instead of two for standard CG. The standard and pipelined CG methods need the vector entries generated by the current GPU and other GPUs for the matrix-vector products, so the communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize the communication between the parallel parts of the algorithms. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, the possibility of asynchronous calculations and communications, and load balancing between the CPU and GPU for solving large linear systems allows for scalability. The algorithm is implemented with the combined use of the technologies MPI, OpenMP, and CUDA. We show that an almost optimum speedup on 8 CPUs/2 GPUs may be reached (relative to a single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, as compared to one GPU.
Keywords: conjugate gradient, GPU, parallel programming, pipelined algorithm
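For illustration, a serial NumPy sketch of the pipelined CG recurrence (Ghysels–Vanroose form, no preconditioner) is given below; it shows why only one fused reduction per iteration is needed, but it is a sketch, not the authors' MPI/OpenMP/CUDA implementation.

```python
import numpy as np

def pipelined_cg(A, b, x0=None, tol=1e-8, maxiter=500):
    """Serial sketch of pipelined CG.  In a distributed run the two dot
    products below would be merged into one global reduction, overlapped
    with the matrix-vector product A @ w."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    w = A @ r
    z = s = p = np.zeros_like(b)
    gamma_old = alpha = 1.0
    for it in range(maxiter):
        gamma = r @ r                      # reduction 1
        delta = w @ r                      # reduction 2 (fused with the first)
        q = A @ w                          # extra mat-vec, overlapped with reduction
        if np.sqrt(gamma) < tol:
            return x, it
        if it == 0:
            beta, alpha = 0.0, gamma / delta
        else:
            beta = gamma / gamma_old
            alpha = gamma / (delta - beta * gamma / alpha)
        z = q + beta * z
        s = w + beta * s
        p = r + beta * p
        x = x + alpha * p
        r = r - alpha * s
        w = w - alpha * z
        gamma_old = gamma
    return x, maxiter

# Small SPD test problem (1D Laplacian).
n = 100
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
b = np.ones(n)
x, iters = pipelined_cg(A, b)
print(iters, np.linalg.norm(A @ x - b))
```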
Procedia PDF Downloads 165
1271 High-Throughput, Purification-Free, Multiplexed Profiling of Circulating miRNA for Discovery, Validation, and Diagnostics
Authors: J. Hidalgo de Quintana, I. Stoner, M. Tackett, G. Doran, C. Rafferty, A. Windemuth, J. Tytell, D. Pregibon
Abstract:
We have developed the Multiplexed Circulating microRNA assay, which allows the detection of up to 68 microRNA targets per sample. The assay combines particle-based multiplexing, using patented Firefly hydrogel particles, with a single-step RT-PCR signal. Thus, the Circulating microRNA assay leverages PCR sensitivity while eliminating the need for separate reverse transcription reactions and mitigating amplification biases introduced by target-specific qPCR. Furthermore, the ability to multiplex targets in each well eliminates the need to split valuable samples into multiple reactions. Results from the Circulating microRNA assay are interpreted using Firefly Analysis Workbench, which allows visualization, normalization, and export of experimental data. To aid discovery and validation of biomarkers, we have generated fixed panels for Oncology, Cardiology, Neurology, Immunology, and Liver Toxicology. Here we present the data from several studies investigating circulating and tumor microRNA, showcasing the ability of the technology to sensitively and specifically detect microRNA biomarker signatures from fluid specimens.
Keywords: biomarkers, biofluids, miRNA, photolithography, flowcytometry
Procedia PDF Downloads 369
1270 Polymer Patterning by Dip Pen Nanolithography
Authors: Ayse Cagil Kandemir, Derya Erdem, Markus Niederberger, Ralph Spolenak
Abstract:
Dip Pen Nanolithography (DPN), which is a tip-based method, offers a novel approach to producing nano- and micro-scaled patterns owing to its high resolution and pattern flexibility. It was introduced as a new constructive scanning probe lithography (SPL) technique. DPN delivers materials in the form of an ink by using the tip of a cantilever as the pen and the substrate as the paper in order to form surface architectures. The first studies relied on the delivery of small organic molecules onto gold substrates in ambient conditions. As time passed, different inks, such as polymers, colloidal particles, oligonucleotides and metallic salts, were examined on a variety of surfaces. The discovery of DPN also enabled patterning with multiple inks by using multiple cantilevers for the first time in SPL history. Specifically, polymer inks, which constitute a flexible matrix for various materials, have potential in MEMS, NEMS and drug delivery applications. Our study aims to construct polymer patterns using DPN by studying the wetting behavior of the polymer on semiconductor, metal and polymer surfaces. The optimum viscosity range of the polymer and the effect of environmental conditions such as humidity and temperature are examined. It is observed that there is an inverse relation between ink viscosity and depletion time. This study also yields the optimal writing conditions to produce consistent patterns with DPN. It is shown that written dot sizes increase with dwell time, indicating that the examined writing conditions yield repeatable patterns.
Keywords: dip pen nanolithography, polymer, surface patterning, surface science
Procedia PDF Downloads 397
1269 Studies on Irrigation and Nutrient Interactions in Sweet Orange (Citrus sinensis Osbeck)
Authors: S. M. Jogdand, D. D. Jagtap, N. R. Dalal
Abstract:
Sweet orange (Citrus sinensis Osbeck) is one of the most important commercially cultivated fruit crops in India. It ranks second among citrus crops, after mandarin. Irrigation and fertigation are of vital importance to a sweet orange orchard and are considered to be the most critical cultural operations. The soil acts as the reservoir of water and applied nutrients, and the interaction between irrigation and fertigation leads to the ultimate quality and production of fruits. The increasing cost of fertilizers and the scarcity of irrigation water have forced farmers towards the optimum use of irrigation and nutrients. The experiment was conducted with the objective of finding out the irrigation and nutrient interaction in sweet orange in order to optimize the use of both factors. The experiment was conducted in medium to deep soil. The irrigation level I3, drip irrigation at 90% ER (effective rainfall), and the fertigation level F3, 80% RDF (recommended dose of fertilizer), recorded the significantly maximum plant height, plant spread, canopy volume, number of fruits, weight of fruit, and fruit yield in kg/plant and t/ha, followed by F2, fertigation with 70% RDF. The interaction effect of irrigation and fertigation on growth was also significant, and the maximum plant height, E-W spread, N-S spread, canopy volume, highest number of fruits, weight of fruit and yield in kg/plant and t/ha were recorded in T9, i.e. I3F3, drip irrigation at 90% ER and fertigation with 80% of RDF, followed by I3F2, drip irrigation at 90% ER and fertigation with 70% of RDF.
Keywords: sweet orange, fertigation, irrigation, interactions
Procedia PDF Downloads 180
1268 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies
Authors: Paolino Di Felice
Abstract:
The background and significance of this study come from papers that have already appeared in the literature, which measured the impact of public services (e.g., hospitals, schools, ...) on the satisfaction of citizens' needs (one of the dimensions of QOL studies) by calculating the distance between the place where they live and the location of the services on the territory. Those studies assume that the citizens' dwelling coincides with the centroid of the polygon that expresses the boundary of the administrative district, within the city, they belong to. Such an assumption "introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district.". The case study this abstract reports on investigates the implications of adopting such an approach at geographical scales greater than the urban one, namely at the three levels of nesting of the Italian administrative units: the (20) regions, the (110) provinces, and the 8,094 municipalities. To carry out this study, it has to be decided: a) how to store the huge amount of (spatial and descriptive) input data and b) how to process them. The latter aspect involves: b.1) the design of algorithms to investigate the geometry of the boundary of the Italian administrative units; b.2) their coding in a programming language; b.3) their execution and, eventually, b.4) archiving the results in a permanent support. The IT solution we implemented is centered around a (PostgreSQL/PostGIS) Geo DataBase structured in terms of three tables that fit well the hierarchy of nesting of the Italian administrative units: municipality(id, name, provinceId, istatCode, regionId, geometry); province(id, name, regionId, geometry); region(id, name, geometry). The adoption of DBMS technology allows us to implement steps "a)" and "b)" easily. In particular, step "b)" is simplified dramatically by calling spatial operators and spatial built-in User Defined Functions within SQL queries against the Geo DB. The major findings coming from our experiments can be summarized as follows. The approximation that, on average, results from assimilating the residence of the citizens with the centroid of the administrative unit of reference is of a few kilometers (4.9 km) at the municipality level, while it becomes conspicuous at the other two levels (28.9 km and 36.1 km, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but not further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries all over the world.
Keywords: quality of life, distance measurement error, Italian administrative units, spatial database
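The maximum error discussed above is the greatest centroid-to-boundary distance of each administrative polygon. A sketch of that computation with Shapely on a toy polygon is shown below; in the study itself the equivalent is done with spatial functions in SQL queries against the PostgreSQL/PostGIS Geo DataBase, and the polygon here is an invented stand-in.

```python
from shapely.geometry import Point, Polygon

def max_centroid_boundary_distance(polygon: Polygon) -> float:
    """Greatest distance between a polygon's centroid and its boundary,
    i.e. the maximum error of assimilating residences with the centroid.
    (The maximum over the boundary is attained at a vertex.)"""
    c = polygon.centroid
    return max(c.distance(Point(x, y)) for x, y in polygon.exterior.coords)

# Toy municipality outline (coordinates in km); a real run would read the
# geometries from the municipality/province/region tables of the Geo DB.
municipality = Polygon([(0, 0), (12, 0), (14, 6), (7, 11), (-2, 5)])
print(f"max centroid-boundary distance = "
      f"{max_centroid_boundary_distance(municipality):.1f} km")
```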
Procedia PDF Downloads 371
1267 The Convolution Recurrent Network of Using Residual LSTM to Process the Output of the Downsampling for Monaural Speech Enhancement
Authors: Shibo Wei, Ting Jiang
Abstract:
Convolutional-recurrent neural networks (CRN) have recently achieved much success in the speech enhancement field. The common processing method is to use convolution layers to compress the feature space through multiple downsampling steps and then model the compressed features with an LSTM layer. Finally, the enhanced speech is obtained by deconvolution operations that integrate the global information of the speech sequence. However, the feature space compression process may cause a loss of information, so we propose to model the output of each downsampling step with a residual LSTM layer, then join it with the output of the corresponding deconvolution layer and feed them to the next deconvolution layer; in this way, we aim to integrate the global information of the speech sequence better. The experimental results show that the network model we introduce (RES-CRN) can achieve better performance than LSTM without residual connections and than simply stacking LSTM layers in the original CRN, in terms of scale-invariant signal-to-distortion ratio (SI-SNR), speech quality (PESQ), and intelligibility (STOI).
Keywords: convolutional-recurrent neural networks, speech enhancement, residual LSTM, SI-SNR
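A minimal PyTorch sketch of the idea follows, assuming (not taken from the paper) the feature sizes and a concatenation-style join with the decoder feature map.

```python
import torch
import torch.nn as nn

class ResidualLSTMSkip(nn.Module):
    """Models one encoder (downsampling) output with an LSTM plus a residual
    connection, before it is joined with the matching decoder feature map."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)

    def forward(self, x):                  # x: (batch, frames, feat_dim)
        y, _ = self.lstm(x)
        return x + y                       # residual connection

# Toy usage with hypothetical shapes: 64 features per frame, 100 frames.
skip = ResidualLSTMSkip(64)
enc_out = torch.randn(8, 100, 64)          # encoder (downsampling) output
dec_in = torch.randn(8, 100, 64)           # matching decoder feature map
joined = torch.cat([skip(enc_out), dec_in], dim=-1)   # fed to the next deconv layer
print(joined.shape)                         # torch.Size([8, 100, 128])
```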
Procedia PDF Downloads 201