Search results for: optimal operating parameters
10545 Identification of Rice Quality Using Gas Sensors and Neural Networks
Authors: Moh Hanif Mubarok, Muhammad Rivai
Abstract:
Public demand for high-quality rice is very high, so it is necessary to set minimum standards for checking the quality of rice. Most rice quality measurements still use manual methods, which are prone to errors due to limited human vision and the subjectivity of testers. A gas detection system can therefore be a highly effective and objective solution to these problems. The use of gas sensors in testing rice quality must take several parameters into account. The parameters measured in this research are the percentage of rice water content, gas concentration, output voltage, and measurement time. Therefore, this research was carried out to identify carbon dioxide (CO₂), nitrous oxide (N₂O) and methane (CH₄) gases in rice quality assessment using an array of gas sensors and the Neural Network method.
Keywords: carbon dioxide, dinitrogen oxide, methane, semiconductor gas sensor, neural network
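As an illustration of the classification step described in the abstract, the following is a minimal sketch, not the authors' implementation: the sensor feature names, sample values, and network size are hypothetical, and scikit-learn's MLPClassifier stands in for the neural network.

```python
# Minimal sketch: classify rice quality from gas-sensor readings with a small
# neural network. Feature names and sample data are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row: [water content %, CO2 sensor voltage, N2O sensor voltage, CH4 sensor voltage]
X = np.array([
    [12.1, 0.42, 0.31, 0.28],   # good-quality sample (hypothetical)
    [13.8, 0.95, 0.64, 0.71],   # degraded sample (hypothetical)
    [11.9, 0.40, 0.30, 0.25],
    [14.2, 1.02, 0.70, 0.80],
])
y = np.array([1, 0, 1, 0])      # 1 = good quality, 0 = poor quality

# Scale the sensor readings, then train a small multilayer perceptron.
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
model.fit(X, y)
print(model.predict([[12.5, 0.50, 0.33, 0.30]]))  # predicted quality class
```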
Procedia PDF Downloads 48
10544 Orthogonal Regression for Nonparametric Estimation of Errors-In-Variables Models
Authors: Anastasiia Yu. Timofeeva
Abstract:
Two new algorithms for nonparametric estimation of errors-in-variables models are proposed. The first algorithm is based on a penalized regression spline. The spline is represented as a piecewise-linear function, and for each linear portion an orthogonal regression is estimated. This algorithm is iterative. The second algorithm involves locally weighted regression estimation. When the independent variable is measured with error, such estimation is a complex nonlinear optimization problem. The simulation results have shown the advantage of the second algorithm under the assumption that the true smoothing parameter values are known. Nevertheless, the use of some indices of fit for smoothing parameter selection gives similar results and has an oversmoothing effect.
Keywords: grade point average, orthogonal regression, penalized regression spline, locally weighted regression
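For the orthogonal-regression building block used on each linear portion of the spline, a minimal total-least-squares sketch (with hypothetical data) might look like this; it is an illustration of the general technique, not the authors' code:

```python
# Orthogonal (total least squares) regression for a straight line via SVD.
import numpy as np

x = np.array([1.0, 2.1, 2.9, 4.2, 5.1])   # hypothetical noisy regressor
y = np.array([2.2, 3.9, 6.1, 8.0, 10.2])  # hypothetical noisy response

# Center the data and find the direction of smallest orthogonal scatter.
xc, yc = x - x.mean(), y - y.mean()
M = np.column_stack([xc, yc])
_, _, Vt = np.linalg.svd(M, full_matrices=False)
normal = Vt[-1]                     # normal vector of the best-fit line

# Line: normal[0]*(x - x_mean) + normal[1]*(y - y_mean) = 0
slope = -normal[0] / normal[1]
intercept = y.mean() - slope * x.mean()
print(slope, intercept)
```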
Procedia PDF Downloads 416
10543 Parametric Study on the Behavior of Reinforced Concrete Continuous Beams Flexurally Strengthened with FRP Plates
Authors: Mohammed A. Sakr, Tarek M. Khalifa, Walid N. Mansour
Abstract:
External bonding of fiber reinforced polymer (FRP) plates to reinforced concrete (RC) beams is an effective technique for flexural strengthening. This paper presents an analytical parametric study on the behavior of RC continuous beams flexurally strengthened with externally bonded FRP plates on the upper and lower fibers, conducted using a simple uniaxial nonlinear finite element model (UNFEM). UNFEM is able to estimate the load-carrying capacity, the different failure modes, and the interfacial stresses of RC continuous beams flexurally strengthened with externally bonded FRP plates on the upper and lower fibers. The study investigated the effect of five key parameters on the behavior and moment redistribution of FRP-reinforced continuous beams. The investigated parameters were the length of the FRP plate, the width and thickness of the FRP plate, the ratio of the area of the FRP plate to the concrete area, the cohesive shear strength of the adhesive layer, and the concrete compressive strength. The investigation resulted in a number of important conclusions reflecting the effects of the studied parameters on the behavior of RC continuous beams flexurally strengthened with externally bonded FRP plates.
Keywords: continuous beams, parametric study, finite element, fiber reinforced polymer
Procedia PDF Downloads 371
10542 A Spatial Perspective on the Metallized Combustion Aspect of Rockets
Authors: Chitresh Prasad, Arvind Ramesh, Aditya Virkar, Karan Dholkaria, Vinayak Malhotra
Abstract:
A Solid Propellant Rocket is a rocket that utilises a combination of a solid Oxidizer and a solid Fuel. Success in Solid Rocket Motor design and development depends significantly on knowledge of the burning rate behaviour of the selected solid propellant under all motor operating conditions and design limit conditions. Most Solid Motor Rockets consist of the Main Engine, along with multiple Boosters that provide an additional thrust to the space-bound vehicle. Though widely used, they have been eclipsed by Liquid Propellant Rockets because of the latter's better performance characteristics. The addition of a catalyst such as Iron Oxide, on the other hand, can drastically enhance the performance of a Solid Rocket. This scientific investigation tries to emulate the working of a Solid Rocket using Sparklers and Energized Candles, with a central Energized Candle acting as the Main Engine and surrounding Sparklers acting as the Boosters. The Energized Candle is made of Paraffin Wax, with Magnesium filings embedded in its wick. The Sparkler is made up of 45% Barium Nitrate, 35% Iron, 9% Aluminium, 10% Dextrin, and the remaining composition consists of Boric Acid. The Magnesium in the Energized Candle, and the combination of Iron and Aluminium in the Sparkler, act as catalysts and enhance the burn rates of both materials. This combustion of Metallized Propellants has an influence on the regression rate of the subject candle. The experimental parameters explored here are Separation Distance, Systematically Varying Configuration, and Layout Symmetry. The major performance parameter under observation is the Regression Rate of the Energized Candle. The rate of regression is significantly affected by the orientation and configuration of the sparklers, which usually act as heat sources for the energized candle. The Overall Efficiency of any engine is the product of its thermal and propulsive efficiencies. Numerous efforts have been made to improve one or the other. This investigation focuses on the orientation of the Rocket Motor Design to maximize the Overall Efficiency. The primary objective is to analyse the Flame Spread Rate variations of the energized candle, which resembles the solid rocket propellant used in the first stage of rocket operation, thereby affecting the Specific Impulse values of a Rocket, which in turn have a deciding impact on its Time of Flight. Another objective of this research venture is to determine the effectiveness of the key controlling parameters explored. This investigation also emulates the exhaust gas interactions of the Solid Rocket through concurrent ignition of the Energized Candle and Sparklers, and their behaviour is analysed. Modern space programmes intend to explore the universe outside our solar system. To accomplish these goals, it is necessary to design a launch vehicle which is capable of providing continuous propulsion along with better efficiency for long durations. The main motivation of this study is to enhance Rocket performance and Overall Efficiency through better design and optimization techniques, which will play a crucial role in this human quest for knowledge.
Keywords: design modifications, improving overall efficiency, metallized combustion, regression rate variations
Procedia PDF Downloads 178
10541 Action Potential of Lateral Geniculate Neurons at Low Threshold Currents: Simulation Study
Authors: Faris Tarlochan, Siva Mahesh Tangutooru
Abstract:
The Lateral Geniculate Nucleus (LGN) is the relay center in the visual pathway, as it receives most of the input information from retinal ganglion cells (RGC) and sends it to the visual cortex. Low threshold calcium currents (IT) at the membrane are the unique indicator used to characterize this firing functionality of the LGN neurons gained by the RGC input. According to the LGN functional requirements, such as the functional mapping of RGC to LGN, the morphologies of the LGN neurons were developed. During neurological disorders such as glaucoma, the mapping between RGC and LGN is disconnected, and hence stimulating the LGN electrically using deep brain electrodes can restore its functionality. A computational model was developed for simulating LGN neurons with three predominant morphologies, each representing a different functional mapping of RGC to LGN. The firing of action potentials at the LGN neuron due to IT was characterized by varying the stimulation parameters, morphological parameters, and orientation. A wide range of stimulation parameters (stimulus amplitude, duration and frequency) represents the various strengths of the electrical stimulation with different morphological parameters (soma size, dendrite size and structure). The orientation (0–180°) of the LGN neuron with respect to the stimulating electrode represents the angle at which the extracellular deep brain stimulation towards the LGN neuron is performed. A reduced dendrite structure was used in the model using the Bush–Sejnowski algorithm to decrease the computational time while conserving its input resistance and total surface area. The major finding is that an input potential of 0.4 V is required to produce an action potential in an LGN neuron placed at a distance of 100 µm from the electrode. From this study, it can be concluded that the neuroprostheses under design would need to be capable of inducing at least 0.4 V to produce action potentials in the LGN.
Keywords: Lateral Geniculate Nucleus, visual cortex, finite element, glaucoma, neuroprostheses
Procedia PDF Downloads 279
10540 Decarbonising Urban Building Heating: A Case Study on the Benefits and Challenges of Fifth-Generation District Heating Networks
Authors: Mazarine Roquet, Pierre Dewallef
Abstract:
The building sector, both residential and tertiary, accounts for a significant share of greenhouse gas emissions. In Belgium, partly due to the poor insulation of the building stock, but especially because of the massive use of fossil fuels for heating buildings, this share reaches almost 30%. To reduce carbon emissions from urban building heating, district heating networks emerge as a promising solution, as they offer various assets such as improving the load factor, integrating combined heat and power systems, and enabling energy source diversification, including renewable sources and waste heat recovery. However, mainly for the sake of simple operation, most existing district heating networks still operate at high or medium temperatures ranging between 120°C and 60°C (the so-called second- and third-generation district heating networks). Although these district heating networks offer energy savings in comparison with individual boilers, such temperature levels generally require the use of fossil fuels (mainly natural gas) with combined heat and power. Fourth-generation district heating networks improve the transport and energy conversion efficiency by decreasing the operating temperature to between 50°C and 30°C. Yet, to decarbonise building heating, one must increase waste heat recovery and use mainly wind, solar or geothermal sources for the remaining heat supply. Fifth-generation networks operating between 35°C and 15°C offer the possibility to decrease transport losses even further, to increase the share of waste heat recovery, and to use electricity from renewable resources through heat pumps to generate low-temperature heat. The main objective of this contribution is to exhibit, on a real-life test case, the benefits of replacing an existing third-generation network with a fifth-generation one in order to decarbonise the heat supply of the building stock. The second objective of the study is to highlight the difficulties resulting from the use of a fifth-generation, low-temperature district heating network. To do so, a simulation model of the district heating network, including its regulation, is implemented in the modelling language Modelica. This model is applied to the test case of the heating network on the University of Liège's Sart Tilman campus, consisting of around sixty buildings. The model is validated with monitoring data and then adapted for low-temperature networks. A comparison of primary energy consumption as well as CO2 emissions is made between the two cases to underline the benefits in terms of energy independence and GHG emissions. To highlight the complexity of operating a low-temperature network, the difficulty of adapting the mass flow rate to the heat demand is considered. This shows the difficult balance between thermal comfort and the electrical consumption of the circulation pumps. Several control strategies are considered and compared with respect to the global energy savings. The developed model can be used to assess the potential for energy and CO2 emissions savings when retrofitting an existing network or when designing a new one.
Keywords: building simulation, fifth-generation district heating network, low-temperature district heating network, urban building heating
Procedia PDF Downloads 83
10539 Formation of Human Resources in the Light of Sustainable Development and the Achievement of Full Employment
Authors: Kaddour Fellague Mohammed
Abstract:
In recent years, the world has seen significant developments that have affected various aspects of life and influenced different types of institutions. A new world has thus been born, a world of globalization dominated by the scientific revolution and tremendous technological developments, which have contributed to the re-formation of human resources in contemporary organizations and have, at the same time, strongly raised new values and ideas. Organizations have become more flexible and faster in responding to consumers and environmental conditions; they have overcome the constraints of time and place through communication, human interaction, and the use of advanced information technology adopted as a principal mechanism for running their operations; they have focused on performance and relied on strategic thinking and approaches in order to achieve their strategic goals with high degrees of superiority and excellence. This new reality has created an increasing need for a new type of human resources: one that aims for renewal, aspires to be a strategic player in managing the organization and drafting its various strategies, thinks globally and acts locally so as to accommodate local variables in international markets, and is able to work across different cultures. Human resources management is among the most important management functions because it focuses on the human element, which is the most valuable resource of the organization and the most influential in productivity. The management and development of human resources is considered a cornerstone in the majority of organizations; it aims to strengthen organizational capacity and to enable companies to attract and develop the competencies needed to keep up with current and future challenges. Human resources can contribute strongly to achieving the objectives and profit of the organization, and even to creating new jobs that alleviate unemployment and achieve full employment. Human resources management means, in short, the optimal use of the available and expected human element, since the efficiency, capabilities, and experience of this human element, and its enthusiasm for work, determine the efficiency and success of the organization in reaching its goals. Management scientists have therefore developed principles and foundations that help to make the most of each individual in the organization through human resources management. These foundations start with planning and selection and continue with training, incentives, and evaluation; they are not separate from each other but are integrated as a system in order to reach efficient human resources management and an efficient organization as a whole, in the context of sustainable development.
Keywords: configuration, training, development, human resources, operating
Procedia PDF Downloads 432
10538 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs
Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa
Abstract:
Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient classification model for a hatchability rate greater than 90%. In this study, seven extrinsic parameters were considered: egg weight, moisture loss, breeder age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the variables most influencing hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify the hatchability. This grouping process was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeder age, shell width, and shell length, while positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. Multiple linear regression models were more accurate than single linear models, giving the highest coefficient of determination (R²) of 94% and the minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying the breeder outcomes, which are economically profitable or not, in a commercial hatchery.
Keywords: classification models, egg weight, fertilised eggs, multiple linear regression
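A minimal sketch of the binary classification step follows; it is not the authors' code, and the feature distributions, labels, and model settings are hypothetical stand-ins for the survey data:

```python
# Label batches with hatchability above 90% as commercially viable and compare
# three of the classifiers mentioned in the abstract on hypothetical data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "egg_weight": rng.normal(55, 4, n),
    "moisture_loss": rng.normal(12, 1.5, n),
    "breeder_age": rng.integers(30, 60, n),
    "fertilised_eggs": rng.integers(80, 120, n),
    "shell_width": rng.normal(42, 2, n),
    "shell_length": rng.normal(55, 3, n),
    "shell_thickness": rng.normal(0.35, 0.03, n),
})
# Hypothetical hatchability and binary target: 1 if above 90%, else 0.
hatchability = 0.95 - 0.004 * (df["egg_weight"] - 55) + rng.normal(0, 0.03, n)
y = (hatchability > 0.90).astype(int)

for name, model in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                    ("CART", DecisionTreeClassifier(random_state=0)),
                    ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    model.fit(df, y)
    print(name, model.score(df, y))   # training accuracy only, for illustration
```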
Procedia PDF Downloads 87
10537 Comparison between Separable and Irreducible Goppa Code in McEliece Cryptosystem
Authors: Newroz Nooralddin Abdulrazaq, Thuraya Mahmood Qaradaghi
Abstract:
The McEliece cryptosystem is an asymmetric type of cryptography based on error-correcting codes. The classical McEliece used an irreducible binary Goppa code, which is considered unbreakable until now, especially with the parameters [1024, 524, 101]; however, it suffers from a large public key matrix, which makes it difficult to use practically. In this work, irreducible and separable Goppa codes have been introduced. The irreducible and separable Goppa codes used have flexible parameters and dynamic error vectors. A comparison between separable and irreducible Goppa codes in the McEliece cryptosystem has been carried out. For the encryption stage, to obtain better results for comparison, two types of testing have been chosen: in the first, the random message is constant while the parameters of the Goppa code are changed; in the second test, the parameters of the Goppa code are constant (m=8 and t=10) while the random message is changed. The results show that the time needed to calculate the parity check matrix in the separable case is higher than that for the irreducible McEliece cryptosystem, which is an expected result because an extra parity check matrix for g2(z) must be calculated in the decryption process for the separable type, while the time needed to execute the error locator in the decryption stage for the separable type is better than the time needed to calculate it for the irreducible type. The proposed implementation has been done in Visual Studio C#.
Keywords: McEliece cryptosystem, Goppa code, separable, irreducible
Procedia PDF Downloads 266
10536 Compression Index Estimation by Water Content and Liquid Limit and Void Ratio Using Statistics Method
Authors: Lizhou Chen, Abdelhamid Belgaid, Assem Elsayed, Xiaoming Yang
Abstract:
Compression index is essential in foundation settlement calculations. The traditional method for determining the compression index is the consolidation test, which is expensive and time-consuming. Many researchers have used regression methods to develop empirical equations for predicting the compression index from soil properties. Based on a large number of compression index data collected from consolidation tests, the accuracy of some popular empirical equations was assessed. It was found that the primary compression index is significantly overestimated in some equations while it is underestimated in others. Sensitivity analyses of soil parameters including water content, liquid limit, and void ratio were performed. The results indicate that the compression index obtained from the void ratio is the most accurate. The ANOVA (analysis of variance) demonstrates that the equations with multiple soil parameters cannot provide better predictions than the equations with a single soil parameter. In other words, it is not necessary to develop relationships between the compression index and multiple soil parameters. Meanwhile, it was noted that the secondary compression index is approximately 0.7–5.0% of the primary compression index, with an average of 2.0%. In the end, prediction equations using the power regression technique are proposed that can provide more accurate predictions than those from existing equations.
Keywords: compression index, clay, settlement, consolidation, secondary compression index, soil parameter
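The power-regression idea can be sketched as a single-parameter fit of the form Cc = a · e0^b, estimated by linear regression on the logarithms; the coefficients and sample data below are hypothetical illustrations, not the paper's equations.

```python
# Fit a power-law relation between compression index Cc and initial void ratio e0.
import numpy as np

e0 = np.array([0.65, 0.80, 0.95, 1.10, 1.30])   # initial void ratio (hypothetical)
Cc = np.array([0.12, 0.18, 0.24, 0.31, 0.42])   # compression index from consolidation tests

# log(Cc) = log(a) + b*log(e0)  ->  ordinary least squares in log space
b, log_a = np.polyfit(np.log(e0), np.log(Cc), 1)
a = np.exp(log_a)
print(f"Cc ≈ {a:.3f} * e0^{b:.3f}")

# Predict the compression index for a new soil sample with e0 = 1.0
print(a * 1.0 ** b)
```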
Procedia PDF Downloads 163
10535 Non-Destructive Evaluation for Physical State Monitoring of an Angle Section Thin-Walled Curved Beam
Authors: Palash Dey, Sudip Talukdar
Abstract:
In this work, a cross-breed approach is presented for obtaining both the damage intensity and the location of damage existing in thin-walled members. This cross-breed approach is developed based on response surface methodology (RSM) and a genetic algorithm (GA). A theoretical finite element (FE) model of a cracked angle-section thin-walled curved beam has been linked to the developed approach to carry out trial experiments and generate response surface functions (RSFs) of free, forced, and heterogeneous dynamic response data. Subsequently, the error between the computed response surface functions and the measured dynamic response data has been minimized using the GA to find the optimum damage parameters (damage intensity and location). A single crack of varying location and depth has been considered in this study. The presented approach has been found to show good accuracy in the prediction of crack parameters and to possess great potential in crack detection, as it requires only the current response of a cracked beam.
Keywords: damage parameters, finite element, genetic algorithm, response surface methodology, thin walled curved beam
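The inverse step can be sketched as minimizing the mismatch between a response surface function and the measured response. In the sketch below, the quadratic RSF and the measured value are hypothetical stand-ins, and SciPy's differential_evolution is used as a readily available population-based optimizer in place of the authors' genetic algorithm:

```python
# Search for the crack depth and location that best reproduce a measured response.
import numpy as np
from scipy.optimize import differential_evolution

def rsf_frequency(depth_ratio, location_ratio):
    # Hypothetical response surface fitted from FE trial runs (not the paper's RSF).
    return 120.0 - 35.0 * depth_ratio**2 - 10.0 * depth_ratio * location_ratio

measured_frequency = 112.4  # hypothetical measured response of the cracked beam

def objective(params):
    depth_ratio, location_ratio = params
    return (rsf_frequency(depth_ratio, location_ratio) - measured_frequency) ** 2

bounds = [(0.0, 0.6), (0.0, 1.0)]  # crack depth ratio, crack location ratio
result = differential_evolution(objective, bounds, seed=0)
print("estimated damage parameters:", result.x)
```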
Procedia PDF Downloads 248
10534 Analysis of BSF Layer n-GaAs/p-GaAs/p⁺-GaAs Solar Cell
Authors: Abderrahmane Hemmani, Hamid Khachab, Dennai Benmoussa, Hassane Benslimane, Abderrachid Helmaoui
Abstract:
Back surface field GaAs solar cells with n-p-p⁺ structures are found to have better characteristics than conventional solar cells. A theory based on the transport of both minority carriers under the charge neutrality condition has been developed in the present paper, which explains the behavior of back surface field solar cells. An efficiency of 25.05% is reported (Jsc = 33.5 mA/cm², Voc = 0.87 V and fill factor 86% under AM1.5 global conditions). We present the effect of the technological parameters of the p⁺ layer on the conversion efficiency of the solar cell. Good agreement is achieved between our results and the simulation results for the variation of the equivalent recombination velocity of the p⁺ layer as a function of BSF thickness and BSF doping.
Keywords: back surface field, GaAs, solar cell, technological parameters
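The reported efficiency can be cross-checked from the listed cell parameters using η = Jsc·Voc·FF / Pin, assuming the standard AM1.5G input power density of 100 mW/cm² (an assumption, since the abstract does not state Pin):

```python
# Quick consistency check of the reported conversion efficiency.
Jsc = 33.5      # short-circuit current density, mA/cm^2
Voc = 0.87      # open-circuit voltage, V
FF = 0.86       # fill factor
P_in = 100.0    # assumed AM1.5G irradiance, mW/cm^2

efficiency = Jsc * Voc * FF / P_in * 100   # percent
print(f"{efficiency:.2f} %")               # ≈ 25.06 %, consistent with the reported 25.05 %
```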
Procedia PDF Downloads 433
10533 Effect of Tool Size and Cavity Depth on Response Characteristics during Electric Discharge Machining on Superalloy Metal - An Experimental Investigation
Authors: Sudhanshu Kumar
Abstract:
The electrical discharge machining (EDM) process is one of the most applicable machining processes for removing material from hard-to-machine materials such as superalloys. The EDM process converts electrical energy into sparks that erode the metal in the presence of a dielectric medium. In the present investigation, the superalloy Inconel 718 has been selected as the workpiece and electrolytic copper as the tool electrode. An attempt has been made to understand the effect of tool size with varying cavity depth during hole drilling through the EDM process. For a systematic investigation, tool size in terms of tool diameter and cavity depth, along with other important electrical parameters, namely peak current, pulse-on time and servo voltage, have been varied at three different values, and the experiments have been designed using the fractional factorial (Taguchi) method. Each experiment has been repeated twice under the same conditions in order to understand the variability within the experiments. The effect of the variations in parameters has been evaluated in terms of material removal rate, tool wear rate and surface roughness. Results reveal that a change in tool diameter during machining affects the response characteristics significantly. The larger tool diameter yielded a 13% higher material removal rate than the smaller tool diameter. The analysis of the effect of variation in cavity depth is notable: there is no significant effect of cavity depth on material removal rate, tool wear rate and surface quality. This indicates that experiments to analyze the effects of other parameters can be performed even at a smaller cavity depth, which can reduce the cost and time of the experiments. Further, statistical analysis has been carried out to identify the interaction effects between parameters.
Keywords: EDM, Inconel 718, material removal rate, roughness, tool wear, tool size
Procedia PDF Downloads 216
10532 Efficient Implementation of Finite Volume Multi-Resolution WENO Scheme on Adaptive Cartesian Grids
Authors: Yuchen Yang, Zhenming Wang, Jun Zhu, Ning Zhao
Abstract:
An easy-to-implement and robust finite volume multi-resolution Weighted Essentially Non-Oscillatory (WENO) scheme is proposed on adaptive Cartesian grids in this paper. This multi-resolution WENO scheme is combined with the ghost cell immersed boundary method (IBM) and a wall-function technique to solve the Navier-Stokes equations. Unlike the k-exact finite volume WENO schemes, which involve large amounts of extra storage, repeated solution of the matrix generated in a least-squares method, or the process of calculating optimal linear weights on adaptive Cartesian grids, the present methodology adds only a very small overhead and can be easily implemented in existing edge-based computational fluid dynamics (CFD) codes with minor modifications. Also, the linear weights of this adaptive finite volume multi-resolution WENO scheme can be any positive numbers on the condition that their sum is one. This is a way of bypassing the calculation of the optimal linear weights, and such a multi-resolution WENO scheme avoids dealing with negative linear weights on adaptive Cartesian grids. Some benchmark viscous problems are numerically solved to show the efficiency and good performance of this adaptive multi-resolution WENO scheme. Compared with a second-order edge-based method, the presented method can be implemented in an adaptive Cartesian grid with slight modification for high Reynolds number problems.
Keywords: adaptive mesh refinement method, finite volume multi-resolution WENO scheme, immersed boundary method, wall-function technique
Procedia PDF Downloads 149
10531 Technical Efficiency in Organic and Conventional Wheat Farms: Evidence from a Primary Survey from Two Districts of Ganga River Basin, India
Authors: S. P. Singh, Priya, Komal Sajwan
Abstract:
With the increasing spread of organic farming in India, the costs, returns, efficiency, and social and environmental sustainability of organic vis-à-vis conventional farming systems have become topics of interest among agricultural scientists, economists, and policy analysts. A study on technical efficiency estimation under these farming systems, particularly in the Ganga River Basin, where the promotion of organic farming is incentivized, can help to understand whether the inputs are utilized to their maximum possible level and what measures can be taken to improve efficiency. This paper, therefore, analyses the technical efficiency of wheat farms operating under organic and conventional farming systems. The study is based on a primary survey of 600 farms (300 organic and 300 conventional) conducted in 2021 in two districts located in the Middle Ganga River Basin, India. Technical, managerial, and scale efficiencies of individual farms are estimated by applying the data envelopment analysis (DEA) methodology. The per-hectare value of wheat production is taken as the output variable, and the values of seeds, human labour, machine cost, plant nutrients, farmyard manure (FYM), plant protection, and irrigation charges are considered input variables for estimating the farm-level efficiencies. The post-DEA analysis is conducted using the Tobit regression model to identify the efficiency-determining factors. The results show that technical efficiency is significantly higher in conventional than in organic farming systems, due to a larger gap in scale efficiency than in managerial efficiency. Further, 9.8% of conventional and only 1.0% of organic farms are found to operate at the most productive scale size (MPSS), while 99% of organic and 81% of conventional farms operate at increasing returns to scale (IRS). Organic farms perform well in managerial efficiency, but their technical efficiency is lower than that of conventional farms, mainly due to their relatively smaller scale size. The paper suggests that technical efficiency in organic wheat can be increased by upscaling farm size through incentivizing group/collective farming in clusters.
Keywords: organic, conventional, technical efficiency, determinants, DEA, Tobit regression
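The DEA step can be sketched as one small linear program per farm (the input-oriented CCR envelopment model); the tiny input/output matrices below are hypothetical, whereas the survey used seven inputs and one output per farm:

```python
# Input-oriented CCR DEA: one LP per decision-making unit (farm).
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.5], [3.0, 2.5], [5.0, 4.0]])  # inputs (farms x inputs)
Y = np.array([[10.0], [8.0], [9.5], [12.0]])                     # outputs (farms x outputs)
n, m = X.shape
s = Y.shape[1]

for o in range(n):
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    print(f"farm {o}: technical efficiency = {res.x[0]:.3f}")
```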
Procedia PDF Downloads 99
10530 Assessment of Five Photoplethysmographic Methods for Estimating Heart Rate Variability
Authors: Akshay B. Pawar, Rohit Y. Parasnis
Abstract:
Heart Rate Variability (HRV) is a widely used indicator of the regulation between the autonomic nervous system (ANS) and the cardiovascular system. Besides being non-invasive, it also has the potential to predict mortality in cases involving critical injuries. The gold standard method for determining HRV is based on the analysis of RR interval time series extracted from ECG signals. However, because it is much more convenient to obtain photoplethysmographic (PPG) signals than ECG signals (which require the attachment of several electrodes to the body), many researchers have used pulse cycle intervals instead of RR intervals to estimate HRV. They have also compared this method with the gold standard technique. Though most of their observations indicate a strong correlation between the two methods, recent studies show that in healthy subjects, except for a few parameters, the pulse-based method cannot be a surrogate for the standard RR interval-based method. Moreover, the former tends to overestimate short-term variability in heart rate. This calls for improvements in or alternatives to the pulse-cycle interval method. In this study, besides the systolic peak-peak interval method (PP method) that has been studied several times, four recent PPG-based techniques, namely the first derivative peak-peak interval method (P1D method), the second derivative peak-peak interval method (P2D method), the valley-valley interval method (VV method) and the tangent-intersection interval method (TI method), were compared with the gold standard technique. ECG and PPG signals were obtained from 10 young and healthy adults (both males and females) seated in the armchair position. In order to de-noise these signals and eliminate baseline drift, they were passed through digital filters. After filtering, the following HRV parameters were computed from PPG using each of the five methods and also from ECG using the gold standard method: time domain parameters (SDNN, pNN50 and RMSSD) and frequency domain parameters (very low-frequency power (VLF), low-frequency power (LF), high-frequency power (HF) and total power, "TP"). Besides, Poincaré plots were also plotted and their SD1/SD2 ratios determined. The resulting sets of parameters were compared with those yielded by the standard method using measures of statistical correlation (correlation coefficient) as well as statistical agreement (Bland-Altman plots). From the viewpoint of correlation, our results show that the best PPG-based methods for the determination of most parameters and Poincaré plots are the P2D method (more than 93% correlation with the standard method) and the PP method (mean correlation: 88%), whereas the TI, VV and P1D methods perform poorly (<70% correlation in most cases). However, our evaluation of statistical agreement using Bland-Altman plots shows that none of the five techniques agrees satisfactorily with the gold standard method as far as time-domain parameters are concerned. In conclusion, excellent statistical correlation implies that certain PPG-based methods provide a good amount of information on the pattern of heart rate variation, whereas poor statistical agreement implies that PPG cannot completely replace ECG in the determination of HRV.
Keywords: photoplethysmography, heart rate variability, correlation coefficient, Bland-Altman plot
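The three time-domain parameters named above have standard definitions, sketched below for a series of inter-beat intervals in milliseconds (whether ECG RR intervals or PPG pulse-cycle intervals); the interval values are hypothetical:

```python
# SDNN, RMSSD and pNN50 from a series of inter-beat intervals.
import numpy as np

intervals_ms = np.array([812, 845, 790, 860, 835, 802, 870, 825])  # hypothetical

sdnn = np.std(intervals_ms, ddof=1)                 # SDNN: std of all intervals
diffs = np.diff(intervals_ms)                       # successive differences
rmssd = np.sqrt(np.mean(diffs ** 2))                # RMSSD
pnn50 = 100.0 * np.mean(np.abs(diffs) > 50)         # pNN50: % of diffs exceeding 50 ms

print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, pNN50 = {pnn50:.1f} %")
```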
Procedia PDF Downloads 324
10529 Ultrasound-Assisted Sol–Gel Synthesis of Nano-Boehmite for Biomedical Purposes
Authors: Olga Shapovalova, Vladimir Vinogradov
Abstract:
Among many different sol–gel matrices, only alumina can be successfully injected parenterally into the human body. This is not surprising, because boehmite (aluminium oxyhydroxide) is a metal oxide approved by the FDA and EMA for intravenous and intramuscular administration, and it has long been used as an adjuvant in the production of many modern vaccines. In our earlier study, it was shown that the denaturation temperature of enzymes entrapped in a sol–gel boehmite matrix increases by 30–60 °C while preserving the initial activity. This makes such matrices more attractive for the long-term storage of unstable drugs. In the current work, we present an ultrasound-assisted sol–gel synthesis of nano-boehmite. This method provides a bio-friendly, very stable, highly homogeneous alumina sol using only water and aluminium isopropoxide as a precursor. Many parameters of the synthesis were studied in detail: ultrasound treatment time, US frequency, surface area, pore and nanoparticle size, zeta potential, and others. Here we investigated the stability of the colloidal sols and the textural properties of the final composites as a function of the ultrasonic treatment time. The chosen ultrasonic treatment times were between 30 and 180 minutes. The surface area, average pore diameter, and total pore volume of the final composites were measured with a Nova 1200 Quantachrome surface and pore size analyzer. It was shown that the matrices with an ultrasonic treatment time of 90 minutes have the largest surface area, 431 ± 24 m²/g. On the other hand, such matrices are less stable in comparison with the samples with an ultrasonic treatment time of 120 minutes, which have a surface area of 390 ± 21 m²/g. It was shown that stable sols could be formed only after 120 minutes of ultrasonic treatment; otherwise, a white precipitate of boehmite forms. We conclude that the optimal ultrasonic treatment time is 120 minutes.
Keywords: boehmite matrix, stabilisation, ultrasound-assisted sol-gel synthesis
Procedia PDF Downloads 267
10528 Influence Analysis of Pelamis Wave Energy Converter Structure Parameters
Authors: Liu Shengnan, Sun Liping, Zhu Jianxun
Abstract:
Based on three-dimensional potential flow theory and hinged rigid body motion equations, the structural RAOs of the Pelamis wave energy converter are analyzed. Numerical simulations are carried out for the Pelamis in irregular wave conditions, and the motion response of the structures and the total generated power are obtained. The paper analyzes the factors influencing the average power, including the diameter of the floating body, the section form of the floating body, the draft, the hinge stiffness, and the damping. The optimum parameters are obtained for the wave conditions of Zhejiang Province. Compared with the results of the Pelamis experiment conducted by Glasgow University, the method applied in this paper is shown to be feasible.
Keywords: Pelamis, hinge, floating multibody, wave energy
Procedia PDF Downloads 465
10527 Association of the Frequency of the Dairy Products Consumption by Students and Health Parameters
Authors: Radyah Ivan, Khanferyan Roman
Abstract:
Milk and dairy products are an important component of a balanced diet. Dairy products represent a heterogeneous food group of solid, semi-solid and liquid, fermented or non-fermented foods, each differing in nutrients such as fat and micronutrient content. A deficiency of milk and dairy products affects the main health parameters of the various age groups of the population. The goal of this study was to analyze the frequency of consumption of milk and various groups of dairy products by students and its association with their body mass index (BMI), body composition, and other physiological parameters. 388 full-time students of the Medical Institute of RUDN University (185 male and 203 female, average age 20.4±2.2 and 21.9±1.7 years, respectively) took part in the cross-sectional study. Anthropometric measurements were taken, and BMI and body composition were estimated by bioelectrical impedance analysis. The frequency of consumption of milk and various groups of dairy products was studied using a modified food frequency questionnaire. The questionnaire data on the frequency of consumption of dairy products demonstrated that only 11% of respondents consume milk daily, 5% cottage cheese, 4% and 1% natural and flavoured fermented milk products, respectively, and 4% hard cheese. The study also showed that about 16% of the respondents did not consume milk at all over the past month, about one third did not consume cottage cheese, 22% natural sour-milk products, and 18% sour-milk products with various fillers; hard cheeses and pickled cheeses were not consumed by 9% and 26% of respondents, respectively. Gender differences in consumer preferences were revealed: female students are less likely to consume cream, sour cream, soft cheese, and milk compared to male students. Among female students, the prevalence of overweight was higher (25%) than among male students (19%). A moderate inverse relationship was demonstrated between daily milk and dairy product consumption and BMI and body composition parameters (r = -0.61 and r = -0.65). The study showed insufficient daily consumption of milk and dairy products by students and demonstrated a relationship between low and infrequent consumption of dairy products and the main indicators of physical development and health.
Keywords: frequency of consumption, milk, dairy products, physical development, nutrition, body mass index
Procedia PDF Downloads 36
10526 Mathematical Simulation of Performance Parameters of Pulse Detonation Engine
Authors: Subhash Chander, Tejinder Kumar Jindal
Abstract:
Due to its simplicity, pulse detonation engine technology has recently emerged as a future aerospace propulsion technology. In this paper, we studied various parameters affecting the performance of a pulse detonation engine (PDE), such as the tube length required for proper deflagration-to-detonation transition (DDT), the combustion tube diameter, the Shchelkin spiral, the cell size, and the equivalence ratio of the fuel used. We have discussed various techniques for reducing the length of the pulse tube by using various DDT-enhancing devices. The effect of tube length from 40 mm to 3000 mm and diameter from 10 mm to 100 mm has been analyzed. The fuel used is C₂H₂ and the oxidizer is O₂. The results are processed in MATLAB for drawing valid conclusions.
Keywords: pulse detonation engine (PDE), deflagration to detonation (DDT), Shchelkin spiral, cell size (λ)
Procedia PDF Downloads 572
10525 High-Frequency Acoustic Microscopy Imaging of Pellet/Cladding Interface in Nuclear Fuel Rods
Authors: H. Saikouk, D. Laux, Emmanuel Le Clézio, B. Lacroix, K. Audic, R. Largenton, E. Federici, G. Despaux
Abstract:
Pressurized Water Reactor (PWR) fuel rods are made of ceramic pellets (e.g., UO2 or (U,Pu)O2) assembled in a zirconium cladding tube. By design, an initial gap exists between these two elements. During irradiation, they both undergo transformations leading progressively to the closure of this gap. A local and non-destructive examination of the pellet/cladding interface could be a useful aid in identifying the zones where the two materials are in contact, particularly at high burnups, when a strong chemical bonding occurs under nominal operating conditions in PWR fuel rods. The evolution of the pellet/cladding bonding during irradiation is also an area of interest. In this context, the Institute of Electronics and Systems (IES - UMR CNRS 5214), in collaboration with the Alternative Energies and Atomic Energy Commission (CEA), is developing a high-frequency acoustic microscope adapted to the control and imaging of the pellet/cladding interface with high resolution. Because the geometrical, chemical and mechanical nature of the contact interface is neither axially nor radially homogeneous, 2D images of this interface need to be acquired via this ultrasonic system with high-performance signal processing and by means of controlled displacement of the sample rod along both its axis and its circumference. Modeling the multi-layer system (water, cladding, fuel, etc.) is necessary in the present study and aims to take into account all the parameters that have an influence on the resolution of the acquired images. The first prototype of this microscope and the first results of the visualization of the inner face of the cladding will be presented in a poster in order to highlight the potential of the system, whose final objective is to be integrated into the existing MEGAFOX bench dedicated to the non-destructive examination of irradiated fuel rods at the LECA-STAR facility in CEA Cadarache.
Keywords: high-frequency acoustic microscopy, multi-layer model, non-destructive testing, nuclear fuel rod, pellet/cladding interface, signal processing
Procedia PDF Downloads 191
10524 Geometric Imperfections in Lattice Structures: A Simulation Strategy to Predict Strength Variability
Authors: Xavier Lorang, Ahmadali Tahmasebimoradi, Chetra Mang, Sylvain Girard
Abstract:
Additive manufacturing processes (e.g., selective laser melting) allow us to produce lattice structures which have lower weight, higher impact absorption capacity, and better thermal exchange properties compared to classical structures. Unfortunately, geometric imperfections (defects) in the lattice structures are by-products of the manufacturing process. These imperfections decrease the lifetime and the strength of the lattice structures and alter their mechanical responses. The objective of the paper is to present a simulation strategy which allows us to take into account the effect of the geometric imperfections on the mechanical response of the lattice structure. In the first part, an identification method for the geometric imperfection parameters of the lattice structure based on point clouds is presented. These point clouds are based on tomography measurements. The point clouds are fed into the platform LATANA (LATtice ANAlysis) developed by IRT-SystemX to characterize the geometric imperfections. This is done by projecting the point clouds of each microbeam along the beam axis onto a 2D surface. Then, by fitting an ellipse to the 2D projections of the points, the geometric imperfections are characterized by three parameters of an ellipse: the semi-major and semi-minor axes and the angle of rotation. With regard to the calculated parameters of the microbeam geometric imperfections, a statistical analysis is carried out to determine a probability density law based on a statistical hypothesis. Microbeam samples are randomly drawn from the density law and are used to generate lattice structures. In the second part, a finite element model for the lattice structure with the simplified geometric imperfections (ellipse parameters) is presented. This numerical model is used to simulate the generated lattice structures. The propagation of the uncertainties of the geometric imperfections is shown through the distribution of the computed mechanical responses of the lattice structures.
Keywords: additive manufacturing, finite element model, geometric imperfections, lattice structures, propagation of uncertainty
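A simple moment-based sketch of extracting the three ellipse parameters from the projected cross-section points is shown below; this is an approximation based on the point covariance, not the actual LATANA least-squares ellipse fit, and the points are hypothetical:

```python
# Approximate semi-major axis, semi-minor axis and rotation angle of a
# microbeam cross-section from its 2D projected points.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical projected cross-section points (in mm): an ellipse rotated by 0.3 rad.
t = rng.uniform(0, 2 * np.pi, 500)
pts = np.column_stack([0.45 * np.cos(t), 0.30 * np.sin(t)])
R = np.array([[np.cos(0.3), -np.sin(0.3)], [np.sin(0.3), np.cos(0.3)]])
pts = pts @ R.T

centered = pts - pts.mean(axis=0)
cov = np.cov(centered.T)
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order

# Factor 2 because the points lie on the ellipse boundary with uniform parameter t.
semi_minor, semi_major = np.sqrt(2.0 * eigvals)
angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # orientation of the major axis
print(semi_major, semi_minor, np.degrees(angle))
```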
Procedia PDF Downloads 187
10523 A Comprehensive Study on Freshwater Aquatic Life Health Quality Assessment Using Physicochemical Parameters and Planktons as Bio Indicator in a Selected Region of Mahaweli River in Kandy District, Sri Lanka
Authors: S. M. D. Y. S. A. Wijayarathna, A. C. A. Jayasundera
Abstract:
The Mahaweli River is the longest and largest river in Sri Lanka, and it is the major drinking water source for a large portion of the 2.5 million inhabitants of the Central Province. The aim of this study was to determine the water quality and aquatic life health quality in a selected region of the Mahaweli River. Six sampling locations (Site 1: 7° 16' 50" N, 80° 40' 00" E; Site 2: 7° 16' 34" N, 80° 40' 27" E; Site 3: 7° 16' 15" N, 80° 41' 28" E; Site 4: 7° 14' 06" N, 80° 44' 36" E; Site 5: 7° 14' 18" N, 80° 44' 39" E; Site 6: 7° 13' 32" N, 80° 46' 11" E) with various anthropogenic activities on the bank of the river were selected for a period of three months, from Tennekumbura Bridge to the Victoria Reservoir. Temperature, pH, Electrical Conductivity (EC), Total Dissolved Solids (TDS), Dissolved Oxygen (DO), 5-day Biological Oxygen Demand (BOD5), Total Suspended Solids (TSS), hardness, the concentration of anions, and metal concentration were measured as physicochemical parameters according to standard methods. Plankton were considered as biological parameters. Using a plankton net (20 µm mesh size), surface water samples were collected into acid-washed, dried vials and were stored in an ice box during transportation. The diversity and abundance of plankton were identified within 4 days of sample collection under the light microscope using standard manuals of plankton identification. Almost all the measured physicochemical parameters were within the CEA standard limits for aquatic life, the Sri Lanka Standards (SLS), or the World Health Organization's guidelines for drinking water. The concentration of orthophosphate ranged between 0.232 and 0.708 mg L-1 and exceeded the CEA standard limit for aquatic life (0.400 mg L-1) at Site 1 and Site 2, where there is high disturbance from cultivation and nearby households. According to the Pearson correlation (significant correlation at p < 0.05), some physicochemical parameters (temperature, DO, TDS, TSS, phosphate, sulphate, chloride, fluoride, and sodium) were significantly correlated with the distribution of some plankton species such as Aulacoseira, Navicula, Synedra, Pediastrum, Fragilaria, Selenastrum, Oscillatoria, Tribonema and Microcystis. Furthermore, species that appear in blooms (Aulacoseira), that indicate organic pollutants (Navicula), and that indicate phosphate-rich eutrophic water (Microcystis) were found, indicating deteriorated water quality in the Mahaweli River due to agricultural activities, solid waste disposal, and the release of domestic effluents. Therefore, it is necessary to improve environmental monitoring and management to control the further deterioration of the water quality of the river.
Keywords: bio indicator, environmental variables, planktons, physicochemical parameters, water quality
Procedia PDF Downloads 106
10522 Optimal Construction Using Multi-Criteria Decision-Making Methods
Authors: Masood Karamoozian, Zhang Hong
Abstract:
The necessity and complexity of the decision-making process, the interplay of various factors in making decisions, and the need to consider all the relevant factors in a problem are very obvious nowadays. Hence, researchers have shown interest in multi-criteria decision-making methods. In this research, the Analytical Hierarchy Process (AHP), Simple Additive Weighting (SAW), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) methods of multi-criteria decision-making have been used to solve the problem of selecting optimal construction systems. The systems evaluated in this problem include the Light Steel Frame (LSF), the Insulating Concrete Form (ICF), the Ordinary Construction System (OCS), and the Prefabricated Concrete System (PRCS), based on case-study designs from the Zhang Hong studio at the Southeast University of Nanjing. Crowdsourcing was carried out using a questionnaire with a sample of 200 people. Questionnaires were distributed among experts, university centers, and conferences. According to the results of the research, the use of the different decision-making methods led to broadly similar results. With all three multi-criteria decision-making methods mentioned above, the Prefabricated Concrete System (PRCS) ranked first and the Light Steel Frame (LSF) system ranked second. Also, the Prefabricated Concrete System (PRCS) was ranked first in terms of performance standards and economics, while the Light Steel Frame (LSF) system was ranked first in terms of environmental standards.
Keywords: multi-criteria decision making, AHP, SAW, TOPSIS
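The TOPSIS ranking step can be sketched as follows; the decision matrix, weights, and criterion directions are hypothetical placeholders, not the survey data:

```python
# TOPSIS: rank alternatives by relative closeness to the ideal solution.
import numpy as np

# Rows: LSF, ICF, OCS, PRCS; columns: performance, economics, environment (hypothetical scores)
D = np.array([[7.0, 6.0, 8.0],
              [6.5, 7.0, 6.0],
              [5.0, 8.0, 5.0],
              [8.5, 7.5, 7.0]])
weights = np.array([0.40, 0.35, 0.25])
benefit = np.array([True, True, True])      # all criteria treated as "larger is better"

R = D / np.sqrt((D ** 2).sum(axis=0))       # vector normalization
V = R * weights                             # weighted normalized matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_minus = np.sqrt(((V - anti_ideal) ** 2).sum(axis=1))
closeness = d_minus / (d_plus + d_minus)    # relative closeness to the ideal solution

for name, c in zip(["LSF", "ICF", "OCS", "PRCS"], closeness):
    print(name, round(c, 3))
```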
Procedia PDF Downloads 110
10521 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model
Authors: Bokkasam Sasidhar, Ibrahim Aljasser
Abstract:
The problem of finding optimal schedules for each piece of equipment in a production process is considered. The process consists of a single stage of manufacturing and can handle different types of products, where changing over from one type of product to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each of the products in a set-up. The changeover costs increase with the number of set-ups; hence, to minimize the costs associated with product changeovers, the planning should be such that similar types of products are processed successively, so that the total number of changeovers and, in turn, the associated set-up costs are minimized. The problem of cost minimization is equivalent to the problem of minimizing the number of set-ups or, equivalently, maximizing the capacity utilization between every set-up, or maximizing the total capacity utilization. Further, production is usually planned against customers' orders, and generally different customers' orders are assigned one of two priorities – "normal" or "priority" order. The problem of production planning in such a situation can be formulated as a Multiple Arc Network (MAN) model and can be solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that the customer-wise delivery schedules are fulfilled, keeping in view the customer priorities. Algorithms have been presented for solving the MAN formulation of the production planning problem with customer priorities. The application of the model is demonstrated through numerical examples.
Keywords: scheduling, maximal flow problem, multiple arc network model, optimization
Procedia PDF Downloads 402
10520 Parameters Influencing Human Machine Interaction in Hospitals
Authors: Hind Bouami
Abstract:
Handling the complexity of life-critical systems requires appropriate technology and the right human agent functions, such as knowledge, experience, and competence in problem prevention and solving. Human agents are involved in the management and control of the human-machine system's performance. Documenting human agents' situation awareness is crucial to support human-machine designers' decision-making. Knowledge about risks, critical parameters, and factors that can impact and threaten the automation system's performance should be collected using preventive and retrospective approaches. This paper aims to document operators' situation awareness through the analysis of automated organizations' feedback. The analysis of feedback from automated hospital pharmacies helps to identify and control critical parameters influencing human-machine interaction in order to enhance the system's performance and security. Our human-machine system evaluation approach has been deployed in Macon hospital center's pharmacy, which has been equipped with automated drug dispensing systems since 2015. The automation specifications are related to technical aspects, human-machine interaction, and human aspects. The evaluation of drug delivery automation performance in Macon hospital center has shown that the performance of the automated activity depends on the performance of the automated solution chosen, and also on the control of systemic factors. In fact, 80.95% of the automation specifications related to the chosen Sinteco automated solution are met. The performance of the chosen automated solution accounts for 28.38% of the automation specification performance in Macon hospital center. The remaining systemic parameters involved in automation specification performance need to be controlled.
Keywords: life-critical systems, situation awareness, human-machine interaction, decision-making
Procedia PDF Downloads 181
10519 Topology Optimization of Heat and Mass Transfer for Two Fluids under Steady State Laminar Regime: Application on Heat Exchangers
Authors: Rony Tawk, Boutros Ghannam, Maroun Nemer
Abstract:
The topology optimization technique presents a potential tool for the design and optimization of structures involved in mass and heat transfer. The method starts with an initial intermediate domain and should be able to progressively distribute the solid and the two fluids exchanging heat. The multi-objective function of the problem takes into account the minimization of total pressure loss and the maximization of heat transfer between solid and fluid subdomains. Existing methods account for the presence of only one fluid, while the present work extends the optimization to the distribution of a solid and two different fluids. This requires separating the channels of both fluids and ensuring a minimum solid thickness between them. This is done by adding a third objective function to the multi-objective optimization problem. This article uses a density approach in which each cell holds two local design parameters ranging from 0 to 1, and the combination of their extreme values defines the presence of solid, cold fluid, or hot fluid in the cell. The finite volume method is used for the direct solver, coupled with a discrete adjoint approach for sensitivity analysis and the method of moving asymptotes for numerical optimization. Several examples are presented to show the ability of the method to find a trade-off between the minimization of power dissipation and the maximization of heat transfer while ensuring the separation and continuity of the channel of each fluid without crossing or mixing the fluids. The main conclusion is that it is possible to find an optimal bi-fluid domain using topology optimization, defining a fluid-to-fluid heat exchanger device.
Keywords: topology optimization, density approach, bi-fluid domain, laminar steady state regime, fluid-to-fluid heat exchanger
Procedia PDF Downloads 399
10518 Extracting an Experimental Relation between SMD, Mass Flow Rate, Velocity and Pressure in Swirl Fuel Atomizers
Authors: Mohammad Hassan Ziraksaz
Abstract:
Fuel atomizers are used in a wide range of IC engines, turbojets, and a variety of liquid propellant rocket engines. As the fuel spray fully develops, its characteristics approach their ultimate values. Fuel spray characteristics such as SMD, injection pressure, mass flow rate, droplet velocity, and spray cone angle play important roles in atomizing the liquid fuel into finely atomized droplets and finally forming the fine fuel spray. A well-performed, fully developed, fine spray without any defects motivates the idea of finding an experimental relation between the main effective spray characteristics. Extracting an experimental relation between SMD and the other physical spray characteristics in swirl fuel atomizers is the main scope of this experimental work. Droplet velocity, fuel mass flow rate, SMD, and spray cone angle are the parameters that are measured. A set of twelve reverse-engineered atomizers without any spray defects and a set of eight original atomizers, serving as the reference for well-performed spray, are used in this work. More than 350 tests, mostly repeated, were performed. This work shows that although the spray cone angle plays a very effective role in spray formation, after formation it smoothly approaches an almost constant value while the other characteristics change to create fine droplets. Therefore, the work to find the relation between the characteristics is focused on SMD, droplet velocity, fuel mass flow rate, and injection pressure. The process of fuel spray formation begins at an injection pressure of 5 psig, where a tiny fuel onion attaches to the injector tip, and ends at an injection pressure of 250 psig, where a fully developed fine fuel spray forms. The injection pressure is gradually increased to observe how the spray forms. In each step, all parameters are measured and recorded carefully to provide a data bank. Various diagrams have been drawn to study the behavior of the parameters in more detail. Experiments and graphs show that a power equation can best describe the changes in the parameters. The experimental relation of SMD with pressure P, fuel mass flow rate Q̇, and droplet velocity V is extracted individually in pairs. Therefore, the proportional relation of SMD with the other parameters is found. The next step is to find an experimental relation including all the parameters. Using the obtained proportional relation, replacing the parameters with experimentally measured ones, and drawing the graphs of experimental SMD versus proportional SMD (SMD_P), a correction equation and consequently the final experimental equation are obtained. This experimental equation is specific to swirl fuel atomizers, and its use under different conditions shows about 3% error; lower error and consequently higher accuracy are expected by increasing the number of experiments and the accuracy of data collection.
Keywords: droplet velocity, experimental relation, mass flow rate, SMD, swirl fuel atomizer
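A combined power-law correlation of the kind described above can be sketched as a fit of SMD = C · P^a · Q^b · V^c by linear least squares on the logarithms; the exponent form and the sample data are hypothetical illustrations, not the paper's final experimental equation:

```python
# Fit a multi-variable power-law correlation for SMD in log space.
import numpy as np

P = np.array([50, 100, 150, 200, 250], dtype=float)   # injection pressure, psig
Q = np.array([2.1, 3.0, 3.7, 4.3, 4.8])               # mass flow rate, g/s
V = np.array([12.0, 17.5, 21.0, 24.5, 27.0])          # droplet velocity, m/s
SMD = np.array([95.0, 72.0, 61.0, 54.0, 49.0])        # Sauter mean diameter, micrometres

# log(SMD) = log(C) + a*log(P) + b*log(Q) + c*log(V)
A = np.column_stack([np.ones_like(P), np.log(P), np.log(Q), np.log(V)])
coeffs, *_ = np.linalg.lstsq(A, np.log(SMD), rcond=None)
logC, a, b, c = coeffs
print(f"SMD ≈ {np.exp(logC):.1f} * P^{a:.2f} * Q^{b:.2f} * V^{c:.2f}")
```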
Procedia PDF Downloads 161
10517 Evaluating the Relationship between Overconfidence of Senior Managers and Abnormal Cash Fluctuations with Respect to Financial Flexibility in Companies Listed in Tehran Stock Exchange
Authors: Hadi Mousavi, Majid Davoudi Nasr
Abstract:
Executives can maximize profits by recognizing the factors that affect investment and using them to obtain the optimal level of investment. Inefficient markets have shortcomings that can impact the optimal level of investment, leading to over-investment or under-investment. In the present study, the relationship between the overconfidence of senior managers and abnormal cash fluctuations with respect to financial flexibility in companies listed on the Tehran Stock Exchange from 2009 to 2013 was evaluated. The sample consists of 84 companies selected by a systematic elimination method, amounting to 420 company-years in total. In this research, EViews software was used to test the research hypotheses by linear regression and correlation coefficient analysis. After designing and testing the research hypotheses, it was concluded that there was a significant relationship between the overconfidence of senior managers and abnormal cash fluctuations, although this relationship was not significant at any level of financial flexibility. Moreover, the findings of the research showed that there was a significant relationship between senior managers' overconfidence and positive abnormal cash flow fluctuations in firms, and that this relationship is significant only for companies with high financial flexibility. Finally, the results indicate that there is no significant relationship between senior managers' overconfidence and negative cash flow abnormalities overall, while the relationship between senior managers' overconfidence and negative cash flow fluctuations was confirmed for companies with high financial flexibility.
Keywords: abnormal cash fluctuations, overconfidence of senior managers, financial flexibility, accounting
Procedia PDF Downloads 131
10516 Quality-Of-Service-Aware Green Bandwidth Allocation in Ethernet Passive Optical Network
Authors: Tzu-Yang Lin, Chuan-Ching Sue
Abstract:
Sleep mechanisms are commonly used to ensure the energy efficiency of each optical network unit (ONU) subject to a single-class delay constraint in the Ethernet Passive Optical Network (EPON). How long the ONUs can sleep without violating the delay constraint has become a research problem. In particular, an analytical model can be derived to determine the optimal sleep time of ONUs in every cycle without violating the maximum class delay constraint. The bandwidth allocation considering such optimal sleep time is called Green Bandwidth Allocation (GBA). Although the GBA mechanism guarantees that the different class delay constraints do not violate the maximum class delay constraint, packets with a more relaxed delay constraint will be treated like those with the most stringent delay constraint and may be sent early. This means that the ONU wastes energy in active mode by sending packets in advance that did not need to be sent at the current time. Accordingly, we propose a QoS-aware GBA using a novel intra-ONU scheduling scheme to control which packets are sent according to their respective delay constraints, thereby enhancing energy efficiency without deteriorating delay performance. If packets are not explicitly classified but have different packet delay constraints, the intra-ONU scheduling can be modified to classify packets according to their packet delay constraints rather than their classes. Moreover, we propose a switchable ONU architecture in which the ONU can switch its architecture according to the sleep time length, thus improving energy efficiency in the QoS-aware GBA. The simulation results show that the QoS-aware GBA ensures that packets in different classes or with different delay constraints do not violate their respective delay constraints, and that it consumes less power than the original GBA.
Keywords: Passive Optical Networks, PONs, Optical Network Unit, ONU, energy efficiency, delay constraint
Procedia PDF Downloads 284