Search results for: Maximum Likelihood Estimation

210 A Community Compromised Approach to Combinatorial Coalition Problem

Authors: Laor Boongasame, Veera Boonjing, Ho-fung Leung

Abstract:

A buyer coalition with a combination of items is a group of buyers joining together to purchase a combination of items at a larger discount. The primary aim of existing research on buyer coalitions with a combination of items is to generate a large total discount. However, this aim is hard to achieve because such research is based on the assumption that each buyer completely knows the other buyers' information, or that at least one buyer knows the other buyers' information in a coalition through information exchange. These assumptions contrast with the real-world environment, where buyers join a coalition with incomplete information, i.e., they are concerned only with their expected discounts. Therefore, this paper proposes a new buyer community coalition formation with a combination of items scheme, called the Community Compromised Combinatorial Coalition scheme, for such an environment of incomplete information. In order to generate a larger total discount, after buyers who want to join a coalition propose their minimum required savings, a coalition structure that gives the maximum total retail price is formed. Then, the total discount of the coalition is divided among the buyers in the coalition according to their minimum required savings, and the division is Pareto optimal. In a mathematical analysis, we compare the concepts of this scheme with those of an existing buyer coalition scheme. Our results show that the total discount of the coalition in this scheme is larger than that in the existing buyer coalition scheme.

Keywords: group decision and negotiations, group buying, game theory, combinatorial coalition formation, Pareto optimality

209 Vibration Analysis of a Solar Powered UAV

Authors: Kevin Anderson, Sukhwinder Singh Sandhu, Nouh Anies, Shilpa Ravichandra, Steven Dobbs, Donald Edberg

Abstract:

This paper presents the results of a finite element based vibration analysis of a solar powered Unmanned Aerial Vehicle (UAV). The purpose was to quantify the free vibration and the forced vibration response due to differing point inputs, in order to predict the relative response magnitudes and frequencies at various wing locations of vibration-induced power generators (magnet in coil) excited by gusts and/or control surface pulse-decays used to help power the flight of the electric UAV. A Fluid Structure Interaction (FSI) study was performed in order to ascertain pertinent design stresses and deflections as well as aerodynamic parameters of the UAV airfoil. The 10 ft span airfoil is modeled using Mylar as the primary material. Results show that the free bending mode is at 4.8 Hz, while the first forced bending mode is in the range of 16.2 to 16.7 Hz depending on the location of excitation. The free torsional mode is at 28.3 Hz, and the first forced torsional mode is in the range of 26.4 to 27.8 Hz, depending on the location of excitation. The FSI results predict coefficients of aerodynamic drag and lift of 0.0052 and 0.077, respectively, which match the hand calculations used to validate the finite element based results. The FSI-based maximum von Mises stress and deflection were found to be 0.282 MPa and 3.4 mm, respectively. Dynamic pressures on the airfoil range from 1.04 to 1.23 kPa, corresponding to velocity magnitudes in the range of 22 to 66 m/s.

Keywords: ANSYS, finite element, FSI, UAV, vibrations.

208 Arsenic Mobility from Mining Tailings of Monte San Nicolas to Presa de Mata in Guanajuato, Mexico

Authors: I. Cano-Aguilera, B. E. Rubio-Campos, G. De la Rosa, A. F. Aguilera-Alvarado

Abstract:

Mining tailings are a source of heavy-metal-rich material that poses a potential danger to public health and the environment, since these metals, under certain conditions, can leach and contaminate aqueous systems that serve as potable water supplies. The strategy for this work is based on observation, experimentation and simulation, combining real measurements of the hydrodynamic behavior of metals leached from mining tailings with the applied mathematics that provides the logical structure to decipher the individual effects within the overall physicochemical phenomenon. The case study presented herein focuses on mining tailings deposits located in Monte San Nicolas, Guanajuato, Mexico, an abandoned mine. This was considered the contamination source that, under certain physicochemical conditions, can favor metal leaching and transport towards aqueous systems. In addition, the cartography, meteorology, geology, hydrodynamics and hydrological characteristics of the site are helpful in determining the way and the time in which these systems can interact. Preliminary results demonstrated that arsenic presents great mobility, since it was identified in several surface aqueous systems of the micro-watershed, as well as in sediments, at concentrations that exceed the maximum limits established in the official norms. Variations in pH and oxidation-reduction potential were also registered, conditions that favor the presence of different species of this element and hence affect its solubility and therefore its mobility.

Keywords: Arsenic, mining tailings, transport.

207 Implementing a Strategy of Reliability Centered Maintenance (RCM) in the Libyan Cement Industry

Authors: Khalid M. Albarkoly, Kenneth S. Park

Abstract:

The substantial development of the construction industry has forced the cement industry, its major supplier, to focus on achieving maximum productivity to meet the growing demand for this material. This means that the reliability of a cement production system needs to be at the highest level that can be achieved by good maintenance. This paper studies the extent to which the implementation of RCM is needed as a strategy for increasing the reliability of production system components, thus ensuring continuous productivity. In a case study of four Libyan cement factories, 80 employees were surveyed and 12 top and middle managers interviewed. It is evident that these factories usually break down more often than once per month, which has led to a decline in productivity, and in many cases they cannot achieve even the minimum production level. This results from the poor reliability of their production systems caused by poor or insufficient maintenance. It was found that most of the factories' employees misunderstand maintenance and its importance. The main cause of this problem is the lack of qualified and trained staff; in addition, most employees were found to lack motivation as a result of a lack of management support and interest. In response to these findings, it is suggested that the RCM strategy should be implemented in the four factories. The results show the importance of developing maintenance strategies through the implementation of RCM in these factories, the purpose of which would be to overcome these problems and secure the reliability of the production systems. This study could be a useful source of information for academic researchers and for industrial organizations that are still experiencing problems in maintenance practices.

Keywords: Libyan cement industry, maintenance, production, reliability centered maintenance, reliability.

206 Concrete Mix Design Using Neural Network

Authors: Rama Shanker, Anil Kumar Sachan

Abstract:

The basic ingredients of concrete are cement, fine aggregate, coarse aggregate and water. To produce a concrete with certain specified properties, optimum proportions of these ingredients are mixed. The important factors that govern the mix design are the grade of concrete, the type of cement, and the size, shape and grading of the aggregates. The conventional concrete mix design method is based on experimentally evolved empirical relationships between these factors. Its basic drawbacks are that it does not always produce the desired strength, the calculations are cumbersome, and a number of tables have to be consulted to arrive at a trial mix proportion; moreover, the variation in attainment of the desired strength is uncertain, may fall below the target strength, and may even fail. To solve this problem, a large number of cubes of standard grades were prepared and their 28-day strength determined for different combinations of cement, fine aggregate, coarse aggregate and water. An artificial neural network (ANN) was built using these data. The inputs of the ANN were the grade of concrete, type of cement, and size, shape and grading of aggregates, and the outputs were the proportions of the various ingredients. With these inputs and outputs, the ANN was trained using a feed-forward backpropagation model. Finally, the trained ANN was validated, and it was seen that it gave results with a maximum error of 4 to 5%. Hence, a specific type of concrete can be prepared from given material properties, and the proportions of these materials can be quickly evaluated using the proposed ANN.
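As an illustration of the approach described above, the following minimal Python sketch trains a small feed-forward network that maps mix requirements to ingredient proportions. It is not the authors' model: the feature encoding, network size and toy data are illustrative assumptions only.

```python
# Minimal sketch (not the authors' model): a feed-forward network trained by
# backpropagation that maps mix requirements to ingredient proportions.
# Feature encoding, network size and the toy data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Inputs: [target 28-day strength (MPa), cement type code, max aggregate size (mm)]
X = np.array([[20, 1, 20], [25, 1, 20], [30, 2, 10], [40, 2, 20], [35, 1, 10]], dtype=float)
# Outputs: proportions [cement, fine agg., coarse agg., water] in kg/m^3 (made-up values)
y = np.array([[300, 700, 1150, 180], [330, 690, 1140, 175], [360, 680, 1120, 170],
              [410, 650, 1100, 160], [380, 670, 1110, 165]], dtype=float)

scaler = StandardScaler()
model = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                     solver="adam", max_iter=5000, random_state=0)
model.fit(scaler.fit_transform(X), y)

# Predict proportions for a new mix requirement
new_mix = scaler.transform([[28, 1, 20]])
print(model.predict(new_mix))  # approximate ingredient proportions (kg/m^3)
```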

Keywords: Aggregate Proportions, Artificial Neural Network, Concrete Grade, Concrete Mix Design.

205 Conflation Methodology Applied to Flood Recovery

Authors: E. L. Suarez, D. E. Meeroff, Y. Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The CFR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.
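The conflation operation described above (the normalized product of the input probability density functions) can be sketched numerically as follows. The rate parameters are illustrative assumptions, not values from the study; the check relies on the known fact that the normalized product of two exponential densities is again exponential with the summed rate.

```python
# Numerical sketch of the conflation operation: normalize the product of the
# input probability density functions. The exponential parameters below are
# illustrative assumptions, not values from the study.
import numpy as np
from scipy import stats, integrate

t = np.linspace(0.0, 60.0, 6001)                 # recovery time axis (e.g., days)
f_severe = stats.expon(scale=30.0).pdf(t)        # recovery from severe events (assumed mean 30)
f_nuisance = stats.expon(scale=5.0).pdf(t)       # recovery from nuisance events (assumed mean 5)

product = f_severe * f_nuisance
conflated = product / integrate.trapezoid(product, t)   # normalize so the area is 1

# Known property: the conflation of exponentials is exponential with the summed rate.
analytic = stats.expon(scale=1.0 / (1.0 / 30.0 + 1.0 / 5.0)).pdf(t)
print(np.max(np.abs(conflated - analytic)))      # only a small numerical difference remains
```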

Keywords: Community resilience, conflation, flood risk, nuisance flooding.

204 Simulation of Lid Cavity Flow in Rectangular, Half-Circular and Beer Bucket Shapes using Quasi-Molecular Modeling

Authors: S. Kulsri, M. Jaroensutasinee, K. Jaroensutasinee

Abstract:

We developed a new method based on quasi-molecular modeling to simulate the cavity flow in three cavity shapes: rectangular, half-circular and beer bucket, in cgs units. Each quasi-molecule was a group of particles that interacted in a fashion entirely analogous to classical Newtonian molecular interactions. When a cavity flow was simulated, the instantaneous velocity vector fields were obtained by using an inverse distance weighted interpolation method. In all three cavity shapes, the fluid motion rotated counter-clockwise. The velocity vector fields of the three cavity shapes showed a primary vortex located near the upstream corners at times t ~ 0.500 s, t ~ 0.450 s and t ~ 0.350 s, respectively. The configurational kinetic energy of the cavities increased with time until it reached a maximum at t ~ 0.02 s, and then decreased as time increased. The rectangular cavity system showed the lowest kinetic energy, while the half-circular cavity system showed the highest kinetic energy. The kinetic energy of the rectangular, beer bucket and half-circular cavities fluctuated about stable average values of 35.62 x 10^3, 38.04 x 10^3 and 40.80 x 10^3 ergs/particle, respectively. This indicated that the half-circular shape was the most suitable for a shrimp pond, because the water flows best in it compared with the rectangular and beer bucket shapes.
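The inverse distance weighted interpolation used to recover the instantaneous velocity fields can be sketched as follows. This is a generic IDW routine with an assumed power parameter and toy particle data, not the authors' implementation.

```python
# Generic inverse-distance-weighted (IDW) interpolation of scattered particle
# velocities onto a grid point -- a minimal sketch, not the authors' code.
import numpy as np

def idw_velocity(grid_point, particle_xy, particle_uv, power=2.0, eps=1e-12):
    """Estimate the velocity vector at grid_point from nearby particle velocities."""
    d = np.linalg.norm(particle_xy - grid_point, axis=1)
    if np.any(d < eps):                       # a particle sits exactly on the grid point
        return particle_uv[np.argmin(d)]
    w = 1.0 / d**power                        # closer particles get larger weights
    return (w[:, None] * particle_uv).sum(axis=0) / w.sum()

# Toy data: 4 particles with positions (cm) and velocities (cm/s), cgs-style units
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
uv = np.array([[1.0, 0.0], [0.8, 0.2], [0.2, 0.8], [0.0, 1.0]])
print(idw_velocity(np.array([0.5, 0.5]), xy, uv))  # interpolated velocity at the cavity centre
```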

Keywords: Quasi-molecular modelling, particle modelling, lid driven cavity flow.

203 Patient’s Knowledge and Use of Sublingual Glyceryl Trinitrate Therapy in Taiping Hospital, Malaysia

Authors: Wan Azuati Wan Omar, Selva Rani John Jasudass, Siti Rohaiza Md Saad

Abstract:

Background: The objectives of this study were to assess patients' knowledge of appropriate sublingual glyceryl trinitrate (GTN) use and to investigate how patients commonly store and carry their sublingual GTN tablets. Methodology: This was a cross-sectional survey using a validated researcher-administered questionnaire. The study involved cardiac patients receiving sublingual GTN attending the outpatient and inpatient departments of Taiping Hospital, a non-academic public care hospital. The minimum calculated sample size was 92, but 100 patients were conveniently sampled. Respondents were interviewed on three areas: demographic data, knowledge, and use of sublingual GTN. Eight items were used to calculate each subject's knowledge score and six items were used to calculate the use score. Results: Of the 96 patients who consented to participate, the majority (96.9%) were well aware of the indication of sublingual GTN. With regard to the mechanism of action of sublingual GTN, 73 (76%) patients did not know how the medication works. The majority of the patients (66.7%) knew about the proper storage of the tablet. In relation to the maximum number of sublingual GTN tablets that can be taken during each angina episode, 36.5% did not know that up to 3 tablets can be taken. Fifty-four (56.2%) patients were not aware that they need to replace sublingual GTN every 8 weeks after receiving the tablets. The majority (69.8%) of the patients demonstrated a lack of knowledge with regard to the use of sublingual GTN for the prevention of chest pain. Conclusion: Overall, patients' knowledge regarding the self-administration of sublingual GTN is still inadequate. The findings support the need for more frequent reinforcement of patient education, especially in the areas of preventive use, storage and drug stability.

Keywords: Glyceryl trinitrate, knowledge, adherence.

202 Effect of Plant Growth Promoting Rhizobacteria (PGPR) and Planting Pattern on Yield and Its Components of Rice (Oryza sativa L.) in Ilam Province, Iran

Authors: Ali Rahmani, Abbas Maleki, Mohammad Mirzaeiheydari, Rahim Naseri

Abstract:

Many parts of the world, including Iran, face excessive consumption of fertilizers, which are used to achieve high yields but increase production costs and degrade soil and water resources. This experiment was carried out to study the effect of PGPR and planting pattern on the yield and yield components of rice (Oryza sativa L.), using a split-plot arrangement based on a randomized complete block design with three replications in Ilam province, Iran. Bio-fertilizer treatments, including Azotobacter, Nitroxin and a control (no application), were assigned to the main plots, and planting patterns of 15 × 10, 15 × 15 and 15 × 20, together with 3, 4 and 5 plants per hill, were considered as the sub-plots. The results showed that bio-fertilizer, planting pattern and the number of plants per hill significantly affected yield and yield components. The interaction between bio-fertilizer and planting pattern significantly affected the number of spikelets per panicle and the harvest index, and the interaction between bio-fertilizer and the number of plants per hill significantly affected the number of spikelets per panicle. The maximum grain yield was obtained with Nitroxin inoculation, the 15 × 15 planting pattern and 4 plants per hill, with means of 1110.6 g.m-2, 959.9 g.m-2 and 928.4 g.m-2, respectively.

Keywords: Bio-fertilizer, Grain yield, Planting pattern, Rice.

201 Verification of Sr-90 Determination in Water and Spruce Needles Samples Using IAEA-TEL-2016-04 ALMERA Proficiency Test Samples

Authors: S. Visetpotjanakit, N. Nakkaew

Abstract:

Determination of 90Sr in environmental samples has been widely developed with several radioanalytical methods and radiation measurement techniques, since 90Sr is one of the most hazardous radionuclides produced in nuclear reactors. A liquid extraction technique using di-(2-ethylhexyl) phosphoric acid (HDEHP) to separate and purify 90Y, and Cherenkov counting using a liquid scintillation counter to determine 90Y in secular equilibrium with 90Sr, was developed and performed at our institute, the Office of Atoms for Peace. The approach is inexpensive, non-laborious, and fast for analysing 90Sr in environmental samples. To validate our analytical performance against the accuracy and precision criteria, determination of 90Sr in the IAEA-TEL-2016-04 ALMERA proficiency test samples was performed for statistical evaluation. The experiment used two spiked tap water samples and one naturally contaminated spruce needles sample from Austria, collected shortly after the Chernobyl accident. Results showed that all three analyses passed both the accuracy and precision criteria, obtaining "Accepted" statuses. The two water samples gave measured results of 15.54 Bq/kg and 19.76 Bq/kg, with relative biases of 5.68% and -3.63% against Maximum Acceptable Relative Bias (MARB) values of 15% and 20%, respectively, and the spruce needles sample gave a measured result of 21.04 Bq/kg, with a relative bias of 23.78% against a MARB of 30%. These results confirm our analytical performance for 90Sr determination in water and spruce needles samples using the developed method.
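The "Accepted" statuses follow from comparing the relative bias of each measured value against its Maximum Acceptable Relative Bias. The short sketch below reproduces that check; the target values are back-calculated (and rounded) from the measured values and biases reported above.

```python
# Sketch of the ALMERA accuracy check implied above: the relative bias of the
# measured value against the target value must not exceed the MARB.
# Target values are back-calculated (rounded) from the reported results.
def relative_bias(measured, target):
    return 100.0 * (measured - target) / target

def accepted(measured, target, marb):
    return abs(relative_bias(measured, target)) <= marb

samples = [
    ("water 1", 15.54, 14.70, 15.0),   # measured Bq/kg, target Bq/kg, MARB %
    ("water 2", 19.76, 20.50, 20.0),
    ("spruce",  21.04, 17.00, 30.0),
]
for name, measured, target, marb in samples:
    print(name, round(relative_bias(measured, target), 2), "%",
          accepted(measured, target, marb))
```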

Keywords: ALMERA proficiency test, Cherenkov counting, determination of 90Sr, environmental samples.

200 Design and Performance Analysis of One Dimensional Zero Cross-Correlation Coding Technique for a Fixed Wavelength Hopping SAC-OCDMA

Authors: Satyasen Panda, Urmila Bhanja

Abstract:

This paper presents a SAC-OCDMA code with a zero cross-correlation property to minimize Multiple Access Interference (MAI), called the New Zero Cross Correlation (NZCC) code, which is found to be more scalable than other existing SAC-OCDMA codes. The NZCC code is constructed using an address segment and a data segment. In this work, the proposed NZCC code is implemented in an optical system using the OptiSystem software for the spectral amplitude coded optical code-division multiple-access (SAC-OCDMA) scheme. The main contribution of the proposed NZCC code is its zero cross-correlation, which reduces both MAI and PIIN noise. The proposed NZCC code exhibits minimum cross-correlation, flexibility in selecting the code parameters, and support for a large number of users, combined with high data rates and longer fiber lengths. Simulation results reveal that the optical code division multiple access system based on the proposed NZCC code accommodates the maximum number of simultaneous users with higher data rate transmission, lower Bit Error Rates (BER) and longer travelling distance without any signal quality degradation, compared to the existing SAC-OCDMA codes.
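The zero cross-correlation property means that the in-phase cross-correlation (inner product) between the spectral code sequences of any two distinct users is zero. The sketch below illustrates the property with made-up 0/1 code words; it is not the actual NZCC construction from address and data segments.

```python
# Illustration of the zero cross-correlation property for SAC-OCDMA code words:
# the inner product of any two distinct users' spectral code sequences is zero.
# These 0/1 code words are made up for illustration; they are NOT the NZCC codes.
import numpy as np

codes = np.array([
    [1, 1, 0, 0, 0, 0],   # user 1 occupies wavelengths 1-2
    [0, 0, 1, 1, 0, 0],   # user 2 occupies wavelengths 3-4
    [0, 0, 0, 0, 1, 1],   # user 3 occupies wavelengths 5-6
])

for i in range(len(codes)):
    for j in range(len(codes)):
        cc = int(np.dot(codes[i], codes[j]))
        # equals the code weight on the diagonal, 0 for any pair of distinct users
        print(f"cross-correlation(user{i+1}, user{j+1}) = {cc}")
```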

Keywords: Cross Correlation, Optical Code Division Multiple Access, Spectral Amplitude Coding Optical Code Division Multiple Access, Multiple Access Interference, Phase Induced Intensity Noise, New Zero Cross Correlation code.

199 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and is a popular topic in the forecasting field. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and for healthy and reliable grid systems. Effective forecasting of renewable energy load leads decision makers to minimize the costs of electric utilities and power plants. Forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. In this study, we present models for predicting renewable energy loads based on deep neural networks, especially Long Short-Term Memory (LSTM) algorithms. Deep learning allows multiple layers of models to learn representations of data, and LSTM algorithms are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, for example predicting wind and solar power. Historical load and weather information are the most important input variables in power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution, using publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out on these data via a deep neural network approach including the LSTM technique for the Turkish electricity market. 432 different models were created by varying the number of layers, cell counts and dropout. The adaptive moment estimation (ADAM) algorithm was used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results out of the 432 tested models are 0.66, 0.74, 0.85 and 1.09. The forecasting performance of the proposed LSTM models is successful compared to results reported in the literature.
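A minimal TensorFlow/Keras sketch in the spirit of the models described above is given below: one LSTM layer with dropout, trained with the ADAM optimizer and evaluated with MSE/MAE. The window length, layer sizes and synthetic hourly series are illustrative assumptions, not the study's configuration or data.

```python
# Minimal LSTM load-forecasting sketch (TensorFlow/Keras): one LSTM layer with
# dropout, ADAM optimizer, MSE loss and MAE metric. Window length, sizes and the
# synthetic hourly series are illustrative assumptions only.
import numpy as np
import tensorflow as tf

WINDOW = 24  # use the previous 24 hourly loads to predict the next hour

# Synthetic hourly load series standing in for the 2016-2017 measurements
series = np.sin(np.arange(2000) * 2 * np.pi / 24) + 0.1 * np.random.randn(2000)
X = np.array([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
y = series[WINDOW:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.1, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [MSE, MAE]
```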

Keywords: Deep learning, long-short-term memory, energy, renewable energy load forecasting.

198 Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea

Authors: Kyomin Lee, Joohee Kim, Sangho Kang

Abstract:

Kori Unit 1, the first commercial nuclear power reactor in South Korea, a 587 MWe pressurized water reactor that started operation in 1978, was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is scheduled to be the first South Korean nuclear power unit to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed based on the following series of steps: first, the plant inventory was investigated based on various documents (i.e., equipment/component lists, construction records, general arrangement drawings). Second, the radiological conditions of the systems, structures and components (SSCs) were established to estimate the amount of radioactive waste by waste classification. Third, the waste management strategies for Kori Unit 1, including waste packaging, were established. Fourth, the proper decontamination and dismantling (D&D) technologies were selected considering various factors. Finally, the amount of decommissioning waste by classification for Kori Unit 1 was estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results show that the expected amounts of decommissioning waste were less than about 2% and 8% of the total waste generated (i.e., the sum of clean waste and radwaste) before and after waste processing, respectively, and that the majority of contaminated material was carbon or alloy steel and stainless steel. In addition, within the range of available information, the results of the evaluation were compared with data from various decommissioning experiences and international/national decommissioning studies. The comparison shows that the radioactive waste amount from Kori Unit 1 decommissioning is much less than that from plants decommissioned in the U.S. and comparable to that from plants in Europe. This result comes from differences in disposal cost and clearance criteria (i.e., free release level) between the U.S. and non-U.S. countries. The preliminary evaluation performed using the methodology established in this study will be useful as important information in establishing the decommissioning plan, including the decommissioning schedule and the waste management strategy covering transportation, packaging, handling, and disposal of radioactive wastes.

Keywords: Characterization, classification, decommissioning, decontamination and dismantling, Kori 1, radioactive waste.

197 The Effect of Development of Two-Phase Flow Regimes on the Stability of Gas Lift Systems

Authors: Khalid. M. O. Elmabrok, M. L. Burby, G. G. Nasr

Abstract:

Flow instability during gas lift operation is caused by three major phenomena: density wave oscillation, casing heading pressure, and flow perturbation within the two-phase flow region. This paper focuses on the causes and the effect of flow instability during gas lift operation and suggests ways to control it in order to maximise productivity. A laboratory-scale two-phase flow system was designed and built to study the effects of flow perturbation. The apparatus comprises a 2 m long, 66 mm ID transparent PVC pipe with an air injection point situated 0.1 m above the base of the pipe; this is the point where stabilised bubbles were clearly visible after injection. Air is injected into the water-filled transparent pipe at different flow rates and pressures. The behavior of the different sizes of bubbles generated within the two-phase region was captured using a digital camera, and the images were analysed using an advanced image processing package. It was observed that the average maximum bubble size increased from 29.72 to 47 mm with increasing length of the vertical pipe column. Increasing the air injection pressure from 0.5 to 3 bar increased the bubble size from 29.72 mm to 44.17 mm, which then decreased when the pressure reached 4 bar. It was observed that at a higher bubble velocity of 6.7 m/s, larger-diameter bubbles coalesce and burst due to high agitation and collision with each other. This collapse of the bubbles causes a pressure drop and reverse flow within the two-phase flow and is the main cause of the flow instability phenomenon.

Keywords: Gas lift instability, bubble forming, bubble collapsing, image processing.

196 Use of GIS for the Performance Evaluation of Canal Irrigation System in Rice Wheat Cropping Zone

Authors: Umm-e- Kalsoom, M. Arshad, Sadia Iqbal, M. Usman, M. Adnan

Abstract:

This research study evaluated the performance of an irrigation system using scientific tools such as Remote Sensing and GIS technology, so that proper measures could be taken for sustainable agriculture and water management. Different performance evaluation parameters were calculated; for this purpose, data were gathered from field investigations and from different government and private organizations. According to the calculations, organic matter ranges from 0.19% (low value) to 0.76% (high value). In the flat irrigation system, wheat yield ranges from 3347.16 to 5260.39 kg/ha, the total water applied to the wheat crop ranges from 252.94 to 279.19 mm, and WUE ranges from 13.07 to 18.37 kg/ha/mm; rice yield ranges from 3347.47 to 5433.07 kg/ha, the total water supplied to the rice crop ranges from 764.71 to 978.15 mm, and WUE ranges from 3.49 to 5.71 kg/ha/mm. Similarly, in the raised bed system, wheat yield ranges from 4569.13 to 6008.60 kg/ha, total water supplied ranges from 158.87 to 185.09 mm, and WUE ranges from 27.20 to 33.54 kg/ha/mm, while for the rice crop, yield ranges from 5285.04 to 6716.69 kg/ha, total water supplied ranges from 600.72 to 755.06 mm, and WUE ranges from 6.41 to 10.05 kg/ha/mm. Almost 51.3% water saving is observed in the bed irrigation system compared to the flat system. The smaller amount of water supplied to the beds is used more effectively, as its WUE values are higher than in the flat system, where more water is supplied in both seasons. Similarly, the RWS values show a maximum water deficit, while only a minimal area receives an adequate water supply. Greater yield is recorded in the bed system because the number of plants per square metre is higher than in the flat system. Thus, the integration of GIS tools to regularly compute performance indices could provide irrigation managers with the means for managing the irrigation system efficiently.
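The WUE figures follow from the definition WUE = yield / total water applied. The small check below uses the reported flat-system wheat figures; because the extreme yields and extreme water depths need not occur on the same field, the computed extremes only bracket the reported WUE range.

```python
# Worked check of the water use efficiency (WUE) figures: WUE = yield / water applied.
# Values are the flat-system wheat figures quoted above.
yield_low, yield_high = 3347.16, 5260.39      # kg/ha
water_low, water_high = 252.94, 279.19        # mm

wue_low = yield_low / water_high              # lowest yield over highest water use
wue_high = yield_high / water_low             # highest yield over lowest water use
# ~11.99 to ~20.80 kg/ha/mm, which brackets the reported 13.07-18.37 kg/ha/mm
print(round(wue_low, 2), round(wue_high, 2))
```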

Keywords: Field survey, Relative Water Supply (RWS), Remote sensing maps, Water Use Efficiency (WUE).

195 Evaluation of Shear Strength Parameters of Amended Loess through Using Common Admixtures in Gorgan, Iran

Authors: Seyed Erfan Hosseini, Mohammad K. Alizadeh, Amir Mesbah

Abstract:

Non-saturated soils that greatly decrease in volume upon saturation and undergo sudden settlement due to increasing moisture, fracturing and structural cracking are called loess soils. Given the importance of civil projects such as dams, canals and structures founded on this type of soil, and the problems associated with it, more research and study of loess soils is required. This research determines shear strength parameters using grading, Atterberg limit, compaction, direct shear and consolidation tests, and then studies the effect of cement and lime additives on the stability of loess soils. In these tests, lime and cement are separately added to the soil at different percentages, the stabilized samples are cured for different times, and the effect of the additives on the shear strength parameters of the soil is studied. The results show that, with curing time under the effect of the additives, the collapse potential is greatly decreased, and that with increasing percentages of cement and lime the maximum dry density decreases while the optimum moisture content increases. In addition, the liquid limit and plasticity index decrease, while the plastic limit increases. It should be noted that the direct shear test results reveal an increase in soil shear strength due to increases in the cohesion and friction angle.

Keywords: Loess Soils, Shear Strength, Cement, Lime.

194 Polyvinyl Alcohol Processed Templated Polyaniline Films: Preparation, Characterization and Assessment of Tensile Strength

Authors: J. Subbalakshmi, G. Dhruvasamhith, S. M. Hussain

Abstract:

Polyaniline (PANI) is one of the most extensively studied conducting polymers due to its simple synthesis by chemical and electrochemical routes. PANIs have the advantages of chemical stability and high conductivity, making their commercial applications quite attractive. However, to our knowledge, very little work has been reported on the tensile strength properties of templated PANIs processed with polyvinyl alcohol, and no detailed study has been carried out. We have investigated the effect of small-molecule and polymer templates on PANI. Stable aqueous colloidal suspensions of trisodium citrate (TSC), poly(ethylenedioxythiophene)-polystyrene sulfonate (PEDOT-PSS) and polyethylene glycol (PEG) templated PANIs were prepared through chemical synthesis, processed with polyvinyl alcohol (PVA), and fabricated into films by solution casting. Absorption and infrared spectra were studied to gain insight into the possible molecular interactions. Surface morphology was studied by scanning electron microscopy and optical microscopy. Interestingly, tensile testing revealed the least strain for pure PVA compared to the blends of templated PANI. Furthermore, among the blends, TSC-templated PANI possessed the maximum elasticity. The ultimate tensile strength of PVA-processed, PEG-templated PANI was found to be five times greater than that of the other blends considered in this study. We establish a structure-property correlation using the morphology, spectral characterization and tensile testing studies.

Keywords: Processed films, polyvinyl alcohol, spectroscopy, surface morphology, templated polyanilines, tensile test.

193 Removal of Malachite Green from Aqueous Solution using Hydrilla verticillata -Optimization, Equilibrium and Kinetic Studies

Authors: R. Rajeshkannan, M. Rajasimman, N. Rajamohan

Abstract:

In this study, the sorption of malachite green (MG) on Hydrilla verticillata biomass, a submerged aquatic plant, was investigated in a batch system. The effects of operating parameters such as temperature, adsorbent dosage, contact time, adsorbent size and agitation speed on the sorption of malachite green were analyzed using response surface methodology (RSM). According to the ANOVA results, the proposed quadratic model for the central composite design (CCD) fitted the experimental data well enough to be used to navigate the design space. The optimum sorption conditions were determined as a temperature of 43.5 °C, adsorbent dosage of 0.26 g, contact time of 200 min, adsorbent size of 0.205 mm (65 mesh), and agitation speed of 230 rpm. The Langmuir and Freundlich isotherm models were applied to the equilibrium data. The maximum monolayer coverage capacity of Hydrilla verticillata biomass for MG was found to be 91.97 mg/g at an initial pH of 8.0, indicating that this is the optimum initial pH for sorption. The external and intra-particle diffusion models were also applied to the sorption data of Hydrilla verticillata biomass with MG, and it was found that both external and intra-particle diffusion contribute to the actual sorption process. The pseudo-second-order kinetic model described the MG sorption process with a good fit.
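A hedged sketch of fitting the Langmuir and Freundlich isotherms to equilibrium data with SciPy is shown below. The (Ce, qe) points are synthetic, not the study's measurements; the fitted Langmuir qmax plays the role of the monolayer capacity (91.97 mg/g in the study).

```python
# Sketch of fitting the Langmuir and Freundlich isotherms to equilibrium sorption
# data with SciPy. The (Ce, qe) points are synthetic, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce**(1.0 / n)

Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])    # equilibrium dye conc. (mg/L), synthetic
qe = np.array([25.0, 45.0, 62.0, 76.0, 85.0, 90.0])  # dye sorbed (mg/g), synthetic

lang_params, _ = curve_fit(langmuir, Ce, qe, p0=[90.0, 0.1])
freu_params, _ = curve_fit(freundlich, Ce, qe, p0=[20.0, 2.0])
print("Langmuir  qmax, KL:", np.round(lang_params, 3))   # qmax ~ monolayer capacity
print("Freundlich KF, n:  ", np.round(freu_params, 3))
```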

Keywords: Response surface methodology, Hydrilla verticillata, malachite green, adsorption, central composite design

192 Effect of Needle Height on Discharge Coefficient and Cavitation Number

Authors: Azadeh Yazdi, Mohammadreza Nezamirad, Sepideh Amirahmadian, Nasim Sabetpour, Amirmasoud Hamedi

Abstract:

Cavitation inside a diesel injector nozzle is investigated using the Reynolds-averaged Navier-Stokes equations, with the Schnerr-Sauer model used for cavitation modeling. The working fluid in the current study is diesel fuel. The flow is first verified against previous experimental data, and it was found that the k-epsilon turbulence model gives better accuracy than the k-omega turbulence model. Moreover, the numerically obtained mass flow rate is compared with the experimental value, and the discrepancy was found to be less than 5%, which shows the accuracy of the current results. Finally, a real-size four-hole nozzle is investigated and the flow inside it is visualized in terms of velocity profile, discharge coefficient and cavitation number. It was found that the mesh density could be reduced significantly by utilizing a periodic boundary condition. The velocity contour at the mid-nozzle showed that the maximum velocity occurs at the end of the needle, before the flow enters the orifice area. Last but not least, at the same boundary conditions, when different needle heights were utilized, it was found that as the needle height increases, the cavitation number and the discharge coefficient increase, and this increase is more pronounced at smaller needle heights.
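For reference, the sketch below uses common working definitions of the discharge coefficient and cavitation number for injector nozzles; conventions vary, so these forms and the diesel-like values are assumptions rather than the paper's exact definitions.

```python
# Common working definitions used in injector-nozzle cavitation studies
# (conventions vary; these forms and values are assumptions, not the paper's).
import math

def discharge_coefficient(m_dot, area, rho, p_in, p_out):
    """Cd = actual mass flow / ideal (Bernoulli) mass flow through the orifice."""
    return m_dot / (area * math.sqrt(2.0 * rho * (p_in - p_out)))

def cavitation_number(p_in, p_out, p_vap):
    """K = (p_in - p_vap) / (p_in - p_out); lower K means a stronger cavitation tendency."""
    return (p_in - p_vap) / (p_in - p_out)

# Illustrative diesel-like values (assumed)
rho = 830.0                               # kg/m^3
d = 0.16e-3                               # orifice diameter, m
area = math.pi * d**2 / 4.0
m_dot = 0.0035                            # kg/s
p_in, p_out, p_vap = 60e6, 5e6, 2000.0    # Pa

print(round(discharge_coefficient(m_dot, area, rho, p_in, p_out), 3))
print(round(cavitation_number(p_in, p_out, p_vap), 3))
```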

Keywords: cavitation, diesel fuel, CFD, real size nozzle, mass flow rate

191 Factors Affecting Slot Machine Performance in an Electronic Gaming Machine Facility

Authors: Etienne Provencal, David L. St-Pierre

Abstract:

A facility operating only electronic gambling machines (EGMs) opened in 2007 in Quebec City, Canada under the name Salons de Jeux du Québec (SdjQ). This facility is one of the first worldwide to rely on that business model. This paper models the performance of such EGMs. The interest from a managerial point of view is to identify the variables that can be controlled or influenced, so that a comprehensive model can help improve the overall performance of the business. The EGM individual performance model contains eight variables under study (Game Title, Progressive Jackpot, Bonus Round, Minimum Coin-in, Maximum Coin-in, Denomination, Slant Top and Position). Using data from Quebec City's SdjQ, a linear regression analysis explains 90.80% of the variance in EGM performance. Moreover, the results show a behavior slightly different from that of a casino. The addition of GameTitle as a factor to predict EGM performance is one of the main contributions of this paper: the choice of game (GameTitle) is very important. Games with a better position do not perform significantly better than games located elsewhere on the gaming floor. Progressive jackpots have a positive and significant effect on the individual performance of EGMs. The impact of BonusRound on the dependent variable is significant but negative, and the effect of Denomination is significant but weakly negative. As expected, the language of an EGM does not impact its individual performance. This paper highlights some possible improvements by indicating which features are performing well, and recommendations are given to increase EGM performance.
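The kind of regression described above, mixing categorical factors (e.g., GameTitle) with numeric ones, can be sketched as follows. The column names and rows are hypothetical stand-ins, not the SdjQ data.

```python
# Minimal sketch of a linear regression on EGM performance with mixed categorical
# and numeric predictors. Column names and rows are hypothetical, not the SdjQ data.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "GameTitle":    ["A", "B", "A", "C", "B", "C", "A", "B"],
    "Progressive":  [1, 0, 1, 0, 0, 1, 0, 1],
    "BonusRound":   [1, 1, 0, 0, 1, 0, 1, 0],
    "Denomination": [0.01, 0.05, 0.01, 0.25, 0.05, 0.25, 0.05, 0.01],
    "SlantTop":     [0, 1, 0, 1, 1, 0, 0, 1],
    "CoinIn":       [5200.0, 3100.0, 4800.0, 2500.0, 3300.0, 2900.0, 4100.0, 3600.0],
})

# One-hot encode the categorical game title; numeric columns pass through unchanged
X = pd.get_dummies(df.drop(columns="CoinIn"), columns=["GameTitle"], drop_first=True)
y = df["CoinIn"]

model = LinearRegression().fit(X, y)
print(round(model.score(X, y), 3))                 # R^2 of the fit on the toy data
print(dict(zip(X.columns, model.coef_.round(1))))  # sign and size of each factor's effect
```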

Keywords: EGM, linear regression, model prediction, slot operations.

190 Investigations of Metals and Metal-Antibrowning Agents Effects on Polyphenol Oxidase Activity from Red Poppy Leaf

Authors: G. Arabaci

Abstract:

Heavy metals are one of the major groups of contaminants in the environment, and many of them are toxic to plants and animals even at very low concentrations. However, some metals play important roles in the biological function of many enzymes in living organisms. Metals such as zinc, iron and copper are important for the survival and activity of enzymes in plants, whereas heavy metals can inhibit enzymes that are responsible for the defense systems of plants. Polyphenol oxidase (PPO) is a copper-containing metalloenzyme that is responsible for the enzymatic browning reaction of plants. Enzymatic browning is a major problem in the handling of vegetables and fruits in the food industry, and it can be increased and affected by many different factors, such as metals present in nature and in the soil. In the present work, PPO was isolated and characterized from green leaves of the red poppy plant (Papaver rhoeas). Then, the effects of metals and of some known antibrowning agents that can form complexes with metals were investigated on red poppy PPO activity. The results showed that glutathione had the most potent inhibitory effect on PPO activity. Cu(II) and Fe(II) increased the enzyme activity, whereas Sn(II) had the maximum inhibitory effect and Zn(II) and Pb(II) had no significant effect on the enzyme activity. In order to reduce the effect of heavy metals, the effects of metal-antibrowning agent complexes on PPO activity were determined. EDTA-metal complexes had no significant effect on the enzyme. L-ascorbic acid-metal complexes decreased the activity, but the L-ascorbic acid-Cu(II) complex had no effect. Glutathione-metal complexes had the best inhibitory effect on red poppy leaf PPO activity.

Keywords: Inhibition, metal, red poppy, Polyphenol oxidase (PPO).

189 A Growing Neural Gas Approach for Evaluating Quality of Software Modules

Authors: Parvinder S. Sandhu, Sandeep Khimta, Kiranpreet Kaur

Abstract:

The prediction of software quality during the development life cycle of a software project helps the development organization make efficient use of available resources to produce a product of the highest quality. A "whether a module is faulty or not" approach can be used to predict the quality of a software module. A number of software quality prediction models based on genetic algorithms, artificial neural networks and other data mining algorithms have been described in the literature. One of the promising approaches for quality prediction is based on clustering techniques. Most quality prediction models that are based on clustering techniques make use of the K-means, Mixture-of-Gaussians, Self-Organizing Map, Neural Gas or fuzzy K-means algorithms. All these techniques require a predefined structure, i.e., the number of neurons or clusters must be known before the clustering process starts. In the case of Growing Neural Gas, however, there is no need to predetermine the number of neurons or the topology of the structure; it starts with a minimal neuron structure that is incremented during training until it reaches a user-defined maximum number of clusters. Hence, in this work Growing Neural Gas is used as the underlying clustering algorithm: it produces an initial set of labeled clusters from the training data set, and this set of clusters is then used to predict the quality of the software modules in the test data set. The best testing results show 80% accuracy in evaluating the quality of software modules. Hence, the proposed technique can be used by programmers in evaluating the quality of modules during software development.
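The cluster-then-label workflow described above can be sketched as follows. Growing Neural Gas is not available in scikit-learn, so KMeans is used here purely as a stand-in clustering step; the module metrics and fault labels are synthetic.

```python
# Cluster-then-label sketch for fault prediction, with scikit-learn's KMeans as a
# stand-in for Growing Neural Gas. Module metrics and labels are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy software-module metrics: [lines of code, cyclomatic complexity]
X_train = np.vstack([rng.normal([200, 5],  [50, 2],  (40, 2)),   # mostly fault-free modules
                     rng.normal([800, 25], [150, 6], (40, 2))])  # mostly faulty modules
faulty_train = np.array([0] * 40 + [1] * 40)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_train)

# Label each cluster by the majority class of the training modules it contains
cluster_label = {c: int(round(faulty_train[km.labels_ == c].mean())) for c in range(2)}

# Predict the quality of unseen modules from their cluster membership
X_test = np.array([[150, 4], [900, 30], [600, 20]])
pred = [cluster_label[c] for c in km.predict(X_test)]
print(pred)   # 1 = predicted faulty, 0 = predicted not faulty
```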

Keywords: Growing Neural Gas, data clustering, fault prediction.

188 Optimal Efficiency Control of Pulse Width Modulation - Inverter Fed Motor Pump Drive Using Neural Network

Authors: O. S. Ebrahim, M. A. Badr, A. S. Elgendy, K. O. Shawky, P. K. Jain

Abstract:

This paper demonstrates an improved Loss Model Control (LMC) for a 3-phase induction motor (IM) driving a pump load. Compared with other power loss reduction algorithms for IMs, the presented one has the advantages of fast and smooth flux adaptation, high accuracy, and versatile implementation. The performance of LMC depends mainly on the accuracy of modeling the motor drive and its losses. A loss model for the IM drive has been developed that considers the surplus power loss caused by inverter voltage harmonics using closed-form equations and also includes magnetic saturation. Further, an Artificial Neural Network (ANN) controller is synthesized and trained offline to determine the optimal flux level that achieves maximum drive efficiency. The drive's voltage and speed control loops are connected via the stator frequency to avoid the possibility of excessive magnetization. Besides, the resistance change due to temperature is considered by a first-order thermal model; the obtained thermal information enhances motor protection and control. Together, these features have the potential of making the proposed algorithm reliable. Simulation and experimental studies are performed on a 5.5 kW test motor using the proposed control method. The test results are provided and compared with fixed-flux operation to validate the effectiveness.

Keywords: Artificial neural network, ANN, efficiency optimization, induction motor, IM, Pulse Width Modulated, PWM, harmonic losses.

187 Roundabout Optimal Entry and Circulating Flow Induced by Road Hump

Authors: Amir Hossein Pakshir, A. Hossein Pour, N. Jahandar, Ali Paydar

Abstract:

Roundabouts work on the principle of circulating and entry flows, where the maximum entry flow rate depends largely on the circulating flow, bearing in mind that entry flows must give way to circulating flows. Where an existing roundabout has a road hump installed at the entry arm, it can be hypothesized that the kinematics of vehicles may prevent the entry arm from achieving optimum performance. Road humps are traffic calming devices placed across the road width solely as a speed reduction mechanism. They are the preferred traffic calming option in Malaysia and are often used on single and dual carriageway local routes. The speed limit on local routes is 30 mph (50 km/h). Road humps in their various forms achieved the biggest mean speed reduction (based on a mean speed before traffic calming of 30 mph) of up to 10 mph (16 km/h), according to the UK Department of Transport. The underlying aim of reduced speed should be to achieve a 'safe' distribution of speeds that reflects the function of the road and the impacts on the local community. Constraining the safe distribution of speeds may lead to poor driver timing and delayed reflex reactions that can cause accidents. Previous studies on road hump impact have focused mainly on speed reduction, traffic volume, noise and vibration, and discomfort and delay from the use of road humps. This paper is aimed at the optimal entry and circulating flow induced by road humps. Results show that roundabout entry and circulating flows perform better where there is no road hump at the entrance.

Keywords: Road hump, Roundabout, Speed Reduction

186 Assessment and Uncertainty Analysis of ROSA/LSTF Test on Pressurized Water Reactor 1.9% Vessel Upper Head Small-Break Loss-of-Coolant Accident

Authors: Takeshi Takeda

Abstract:

An experiment utilizing the ROSA/LSTF (rig of safety assessment/large-scale test facility) simulated a 1.9% vessel upper head small-break loss-of-coolant accident with an accident management (AM) measure under the total failure of the high-pressure injection system of the emergency core cooling system in a pressurized water reactor. Steam generator (SG) secondary-side depressurization as the AM measure was started by fully opening the relief valves in both SGs when the maximum core exit temperature rose to 623 K. A large increase took place in the cladding surface temperature of the simulated fuel rods on account of the late and slow response of the core exit thermocouples during core boil-off. The author analyzed the LSTF test by reference to the matrix of integral effect tests for the validation of a thermal-hydraulic system code. Problems remained in predicting the primary coolant distribution and the core exit temperature with the RELAP5/MOD3.3 code. The uncertainty analysis results obtained with the RELAP5 code confirmed that the sample size used in the order-statistics approach influences both the value of the peak cladding temperature estimated with a 95% probability at a 95% confidence level and the Spearman's rank correlation coefficient.
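The 95% probability / 95% confidence statement rests on the standard order-statistics (Wilks) sample-size formula. The sketch below computes the minimum first-order sample size (59 runs) and a Spearman rank correlation between an uncertain input and the peak cladding temperature using synthetic samples; it illustrates the standard formulas, not the exact procedure of the paper.

```python
# Standard (Wilks) order-statistics sample size for a 95%/95% one-sided bound,
# plus a Spearman rank correlation on synthetic input/output samples.
import math
import numpy as np
from scipy import stats

def wilks_sample_size(beta=0.95, gamma=0.95):
    """Minimum runs so the sample maximum bounds the gamma-quantile with confidence beta."""
    return math.ceil(math.log(1.0 - beta) / math.log(gamma))

print(wilks_sample_size())   # 59 code runs for 95%/95% (first-order statistic)

# Rank correlation between an uncertain input parameter and the peak cladding temperature
rng = np.random.default_rng(1)
inp = rng.uniform(0.8, 1.2, 59)                  # e.g., a multiplier on a heat-transfer model
pct = 900 + 150 * inp + rng.normal(0, 20, 59)    # synthetic PCT response (K)
rho, p_value = stats.spearmanr(inp, pct)
print(round(rho, 2), round(p_value, 4))
```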

Keywords: LSTF, LOCA, uncertainty analysis, RELAP5.

185 Study on Compressive Strength and Setting Times of Fly Ash Concrete after Slump Recovery Using Superplasticizer

Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert

Abstract:

Fresh concrete has a dynamic property known as slump, which is designed to be compatible with the placing method. Due to the hydration reaction of cement, the slump of concrete is lost over time; therefore, delayed concrete may be rejected because its slump has become unacceptable. In order to recover the slump of delayed concrete, a second dose of superplasticizer (naphthalene-based, type F) is added to the system; slump recovery is possible as long as the concrete has not set. However, adding a second dose of superplasticizer to recover otherwise unusable slump-loss concrete may affect other concrete properties. Therefore, this paper observed the setting times and compressive strength of concrete after it was re-dosed with a chemical admixture of type F (superplasticizer, naphthalene based) for slump recovery. The concrete used in this study was fly ash concrete with fly ash replacement levels of 0%, 30% and 50%. The concrete mixes for the test specimens were prepared with paste contents (ratio of the volume of cement to the volume of voids in the aggregate) of 1.2 and 1.3, water-to-binder ratios (w/b) in the range of 0.3 to 0.58, and initial superplasticizer (SP) doses ranging from 0.5 to 1.6%. The setting times of the concrete were tested both before and after re-dosing with different amounts of the second dose and at different times of dosing. The research concluded that the addition of a second dose of superplasticizer increases both initial and final setting times according to the dosage added. For fly ash concrete, the prolongation effect was higher as the fly ash replacement increased, and it can reach a maximum of about 4 hours. In the case of compressive strength, the re-dosed concrete showed strength fluctuations within an acceptable range of ±10%.

Keywords: Compressive strength, Fly ash concrete, Second dose of superplasticizer, Slump recovery, Setting times.

184 Evaluation of Dynamic Behavior a Machine Tool Spindle System through Modal and Unbalance Response Analysis

Authors: Khairul Jauhari, Achmad Widodo, Ismoyo Haryanto

Abstract:

The spindle system is one of the most important components of a machine tool. The dynamic properties of the spindle affect machining productivity and the quality of the workpieces. Thus, it is important and necessary to determine the dynamic characteristics of spindles during design and development in order to avoid forced resonance. The finite element method (FEM) has been adopted to obtain the dynamic behavior of the spindle system. For this reason, obtaining the Campbell diagrams and determining the critical speeds are very useful for evaluating the spindle system dynamics. The unbalance response of the system to a center-of-mass unbalance at the cutting tool is also calculated to investigate the dynamic behavior. In this paper, an ANSYS Parametric Design Language (APDL) program based on the finite element method was implemented to perform the full dynamic analysis and the evaluation of the results. Results show that the calculated critical speeds are far from the operating speed range of the spindle; thus, the spindle would not experience resonance, and the maximum unbalance response at operating speed is still within an acceptable limit. ANSYS Parametric Design Language (APDL) can be used by spindle designers as a tool to increase product quality and to reduce cost and time in the design and development stages.

Keywords: ANSYS parametric design language (APDL), Campbell diagram, Critical speeds, Unbalance response, The Spindle system.

183 The Effects of Three Months of HIIT on Plasma Adiponectin on Overweight College Men

Authors: M. J. Pourvaghar, M. E. Bahram, M. Sayyah, Sh. Khoshemehry

Abstract:

Adiponectin is a cytokine secreted by adipose tissue that functions as an anti-inflammatory, anti-atherogenic and anti-diabetic substance. Its concentration is inversely correlated with body mass index. The purpose of this research was to examine the effect of 12 weeks of high-intensity interval training (HIIT) on the level of serum adiponectin and some selected adiposity markers in overweight and obese college students. This was a clinical study in which 24 students with BMI between 25 kg/m2 and 30 kg/m2 took part. The sample was purposefully selected and then randomly assigned into an experimental group (age = 22.7±1.5 yr; weight = 85.8±3.18 kg; height = 178.7±3.29 cm) and a control group (age = 23.1±1.1 yr; weight = 79.1±2.4 kg; height = 181.3±4.6 cm). The experimental group participated in an aerobic exercise program for 12 weeks, three sessions per week, at a high intensity between 85% and 95% of maximum heart rate (considering the overload principle). Before and after the exercise protocol, the level of serum adiponectin, BMI, waist-to-hip ratio and body fat percentage were measured. The data were analyzed using SPSS 16.0, with statistical procedures such as ANCOVA. The results indicated that 12 weeks of intensive interval training led to an increase in the serum adiponectin level and decreases in body weight, body fat percentage, body mass index and waist-to-hip ratio (P < 0.05). Based on these results, it may be concluded that participation in intensive interval training for 12 weeks is a non-invasive way to increase the adiponectin level while decreasing some of the anthropometric indices associated with obesity or being overweight.

Keywords: Adiponectin, interval, intensive, overweight, training.

182 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour

Authors: Libor Zachoval, Daire O Broin, Oisin Cawley

Abstract:

E-learning platforms such as Blackboard have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and a lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms that could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), a large number of additional types of data can be captured, and that opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome because they can hinder a learner's knowledge development. Behaviours that hinder knowledge development also raise ambiguity about a learner's knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees pass courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing, as opposed to going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points and knowledge estimation models to model the knowledge mastery of a learner more accurately, and analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be utilised to make changes to a course; for example, course content may require modifications (certain sections of learning material may be shown not to be helpful to many learners in mastering the intended learning outcomes) or course design changes (such as the type and duration of feedback).
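One of the behaviours above, trying each option of a multiple-choice question, can be detected from xAPI-like statements as sketched below. The statement dictionaries are heavily simplified stand-ins for real xAPI JSON, and the rule threshold is an assumption.

```python
# Minimal sketch of detecting one behaviour -- "trying each option of a
# multiple-choice question" -- from simplified xAPI-like statements.
# The dictionaries are heavily simplified stand-ins for real xAPI JSON.
from collections import defaultdict

statements = [
    {"actor": "learner1", "question": "q1", "response": "a", "success": False},
    {"actor": "learner1", "question": "q1", "response": "b", "success": False},
    {"actor": "learner1", "question": "q1", "response": "c", "success": False},
    {"actor": "learner1", "question": "q1", "response": "d", "success": True},
    {"actor": "learner2", "question": "q1", "response": "d", "success": True},
]

NUM_OPTIONS = {"q1": 4}   # options per multiple-choice question (assumed known)

attempts = defaultdict(set)
for s in statements:
    attempts[(s["actor"], s["question"])].add(s["response"])

for (actor, question), tried in attempts.items():
    if len(tried) >= NUM_OPTIONS[question]:
        print(f"{actor} tried every option of {question} -> likely guessing")
```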

Keywords: Compliance Course, Corporate Training, Learner Behaviours, xAPI.

181 Optimal Duty-Cycle Modulation Scheme for Analog-To-Digital Conversion Systems

Authors: G. Sonfack, J. Mbihi, B. Lonla Moffo

Abstract:

This paper presents an optimal duty-cycle modulation (ODCM) scheme for analog-to-digital conversion (ADC) systems. The overall ODCM-based ADC problem is decoupled into optimal DCM and digital filtering sub-problems, while taking into account the constraints of mutual design parameters between the two. Using a set of three lemmas and four morphological theorems, the ODCM sub-problem is modelled as a nonlinear cost function with nonlinear constraints. Then, a weighted least pth norm of the error between the ideal and predicted frequency responses is used as the cost function for the digital filtering sub-problem. In addition, the MATLAB fmincon and iirlnorm tools are used as the optimal DCM and least pth norm solvers, respectively. Furthermore, the virtual simulation scheme of an overall prototype ODCM-based ADC system is implemented and tested with the help of the Simulink tool, according to a relevant set of design data, i.e., 3 kHz modulating bandwidth, 172 kHz maximum modulation frequency and 25 MHz sampling frequency. Finally, the results obtained show that, over the 3 kHz modulating bandwidth, the ODCM-based ADC achieves 57 dBc SINAD (signal-to-noise and distortion), 58 dB SFDR (spurious-free dynamic range), -80 dBc THD (total harmonic distortion), and a minimum resolution of 10 bits. These performance levels appear to be a great challenge within the class of oversampling ADC topologies with a 2nd-order IIR (infinite impulse response) decimation filter.

Keywords: Digital IIR filter, morphological lemmas and theorems, optimal DCM-based ADC, virtual simulation, weighted least pth norm.
