Search results for: error bound
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2292

1572 Health Expenditure and Household Age Composition in India: Consequences for Health System Development

Authors: Milind Bharambe, Chander Shekhar

Abstract:

India is a vast country whose population of 1.21 billion at the dawn of the new decade accounts for one sixth of the world's people today. It is well known that health expenditure in India is dominated by private spending. This is an unfortunate consequence of India's development path, because the large positive externalities associated with health spending make health a merit good. This paper uses data from the NSSO and the Indian Government's health spending as reported by the Ministry of Health and Family Welfare. Understanding the dynamics of the population's age structure would greatly help optimize expenditure on health care services. A country with good public health indicators is bound to possess good human capital, which is an asset to economic growth and an indicator of a country's development status. The paper presents the linkages between the health expenditure incurred by different states at various stages of demographic transition and the efficiency with which that expenditure is utilized. It also examines how the allocative efficiency of health services can be improved, and explores per capita spending on health and how the demographic transition taking place in different states of India affects the required quantity and quality of health services.

Keywords: age structure, demographic transition, health expenditure, morbidity

Procedia PDF Downloads 384
1571 Study of Error Analysis and Sources of Uncertainty in the Measurement of Residual Stresses by X-Ray Diffraction

Authors: E. T. Carvalho Filho, J. T. N. Medeiros, L. G. Martinez

Abstract:

Residual stresses are self-equilibrating stresses that exist in a rigid body and act on the microstructure of the material without the application of an external load. They are elastic stresses and can be induced by mechanical, thermal and chemical processes that cause a deformation gradient in the crystal lattice, favoring premature failure in mechanical components. The search for measurements with good reliability has been of great importance to the manufacturing industries. Several methods can quantify these stresses according to physical principles and the mechanical response of the material. The X-ray diffraction technique is among the most sensitive to small variations of the crystalline lattice, since the X-ray beam interacts with the interplanar distance. Being a very sensitive technique, it is also susceptible to variations in the measurements, requiring a study of the factors that influence the final result. Instrumental and operational factors, form deviations of the samples, and the geometry of the analyses are some of the variables that need to be considered and analyzed in order to obtain the true measurement. The aim of this work is to analyze the sources of error inherent to the residual stress measurement process by the X-ray diffraction technique, making an interlaboratory comparison to verify the reproducibility of the measurements. In this work, two specimens were machined, differing from each other in surface finish: grinding and polishing. Additionally, iron powder with particle size less than 45 µm was selected to serve as a reference (as recommended by the ASTM E915 standard) for the tests. To verify the deviations caused by the equipment, the specimens were positioned and, under the same analysis conditions, seven measurements were carried out at 11 Ψ tilts. To verify sample positioning errors, seven measurements were performed, repositioning the sample before each measurement. To check geometry errors, the measurements were repeated for the Bragg-Brentano and parallel-beam geometries. To verify the reproducibility of the method, the measurements were performed in two different laboratories with different equipment. The results were statistically analyzed and the errors quantified.

Keywords: residual stress, x-ray diffraction, repeatability, reproducibility, error analysis

Procedia PDF Downloads 165
1570 Relay Node Placement for Connectivity Restoration in Wireless Sensor Networks Using Genetic Algorithms

Authors: Hanieh Tarbiat Khosrowshahi, Mojtaba Shakeri

Abstract:

Wireless Sensor Networks (WSNs) consist of a set of sensor nodes with limited capabilities. WSNs may suffer multiple node failures when exposed to harsh environments such as military zones or disaster locations, losing connectivity by becoming partitioned into disjoint segments. Relay nodes (RNs) are then introduced to restore connectivity. They cost more than sensors, since they benefit from mobility, more power and a longer transmission range, so a minimum number of them should be used. This paper addresses the problem of RN placement in a network with multiple disjoint segments by developing a genetic algorithm (GA). The problem is recast as the Steiner tree problem (known to be NP-hard), with the aim of finding the minimum number of Steiner points at which RNs are to be placed to restore connectivity. An upper bound on the number of RNs is first computed to set the length of the initial chromosomes. The GA then iteratively reduces the number of RNs and determines their locations at the same time. Experimental results indicate that the proposed GA is capable of establishing network connectivity using a reasonable number of RNs compared to the best existing work.
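
As an illustration of the encoding described above, the following minimal Python sketch uses a fixed-length chromosome (sized by the upper bound on RNs) whose inactive genes let the GA shrink the relay count while optimizing positions. The segment locations, ranges, and operators are our assumptions, not the authors' implementation.

```python
import random

SEGMENTS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # segment centroids (assumed)
COMM_RANGE = 6.0                                   # transmission range (assumed)
UPPER_BOUND = 6                                    # max RNs; sets chromosome length

def connected(points, r):
    # Union-find over all nodes within range r of each other.
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= r * r:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))}) == 1

def fitness(chrom):
    relays = [g for g in chrom if g is not None]
    # Heavily penalize disconnection; otherwise prefer fewer relays.
    penalty = 0 if connected(SEGMENTS + relays, COMM_RANGE) else 1000
    return penalty + len(relays)

def random_gene():
    # None encodes an unused relay slot, letting the GA reduce the RN count.
    if random.random() < 0.3:
        return None
    return (random.uniform(0, 10), random.uniform(0, 10))

population = [[random_gene() for _ in range(UPPER_BOUND)] for _ in range(50)]
for _ in range(200):                                  # generations
    population.sort(key=fitness)
    survivors = population[:25]
    children = []
    for _ in range(25):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, UPPER_BOUND)        # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                     # mutation
            child[random.randrange(UPPER_BOUND)] = random_gene()
        children.append(child)
    population = survivors + children

best = min(population, key=fitness)
print("relays used:", [g for g in best if g is not None])
```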

Keywords: connectivity restoration, genetic algorithms, multiple-node failure, relay nodes, wireless sensor networks

Procedia PDF Downloads 222
1569 Free and Open Source Licences, Software Programmers, and the Social Norm of Reciprocity

Authors: Luke McDonagh

Abstract:

Over the past three decades, free and open source software (FOSS) programmers have developed new, innovative and legally binding licences that have in turn enabled the creation of innumerable pieces of everyday software, including Linux, Mozilla Firefox and Open Office. That FOSS has been highly successful in competing with 'closed source software' (e.g. Microsoft Office) is now undeniable, but in noting this success, it is important to examine in detail why the FOSS system has been so successful. One key reason is the existence of networks or communities of programmers, who are bound together by a key shared social norm of 'reciprocity'. At the same time, these FOSS networks are not unitary: they are highly diverse, and there are large divergences of opinion among members regarding which licences are generally preferable. Some members favour the flexible 'free' or 'no copyleft' licences, such as BSD and MIT, while other members favour the 'strong open' or 'strong copyleft' licences, such as the GPL. This paper argues that without both the existence of the shared norm of reciprocity and the diversity of licences, it is unlikely that the innovative legal framework provided by FOSS would have succeeded to the extent that it has.

Keywords: open source, copyright, licensing, copyleft

Procedia PDF Downloads 351
1568 System Identification and Quantitative Feedback Theory Design of a Lathe Spindle

Authors: M. Khairudin

Abstract:

This paper investigates system identification and quantitative feedback theory (QFT) design for the robust control of a lathe spindle. The dynamics of the lathe spindle are uncertain and time-varying due to variation in the depth of cut during machining. System identification was used to obtain a dynamic model of the lathe spindle. In this work, real-time system identification is used to construct linear models of the nonlinear system; these linear models and their uncertainty bounds can then be used for controller synthesis. The real-time nonlinear system identification process yields a set of linear models of the lathe spindle that represents the operating ranges of the dynamic system. With a selected input signal, the output response data are acquired, and system identification is performed in MATLAB to obtain a linear model of the system. Practical design steps are presented in which the QFT-based conditions are formulated to obtain a compensator and a pre-filter to control the lathe spindle. The performance of the proposed controller is evaluated in terms of the velocity responses of the lathe machine spindle, incorporating depth of cut during machining.
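
To make the identification step concrete, here is a minimal sketch (in Python rather than the study's MATLAB) of fitting a discrete linear ARX model to input/output data by least squares; the "plant" and its coefficients below are invented stand-ins for the spindle dynamics.

```python
import numpy as np

# Fit y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] by least squares.
rng = np.random.default_rng(2)
N = 500
u = rng.normal(size=N)                        # excitation input
y = np.zeros(N)
for k in range(2, N):                         # "true" plant, unknown in practice
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.2 * u[k-2]
y += 0.01 * rng.normal(size=N)                # measurement noise

Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])   # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated [a1, a2, b1, b2]:", np.round(theta, 3))
```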

Keywords: lathe spindle, QFT, robust control, system identification

Procedia PDF Downloads 520
1567 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks

Authors: Andrew N. Saylor, James R. Peters

Abstract:

Scoliosis is a complex 3D deformity of the thoracic and lumbar spine, clinically diagnosed by the measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed best used ReLU neurons, three hidden layers, and 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
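
A minimal sketch of the best-performing configuration reported above (three hidden layers of 100 ReLU neurons, MSE loss, SGD with learning rate 0.01, batch size 10, early stopping). It is written against the modern tf.keras API rather than the TensorFlow 1.13.1 used in the study, and data loading is omitted.

```python
import tensorflow as tf

# Dense regression network: flattened 500x187 X-ray in, Cobb angle out.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(500, 187)),   # resized, [0,1]-scaled image
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(1),                          # regressed Cobb angle (degrees)
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="mse", metrics=["mae"])

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                              restore_best_weights=True)
# model.fit(x_train, y_train, batch_size=10, epochs=500,
#           validation_split=0.1, callbacks=[early_stop])
```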

Keywords: scoliosis, artificial neural networks, Cobb angle, medical imaging

Procedia PDF Downloads 113
1566 A Comparison of the Adsorption Mechanism of Arsenic on Iron-Modified Nanoclays

Authors: Michael Leo L. Dela Cruz, Khryslyn G. Arano, Eden May B. Dela Pena, Leslie Joy Diaz

Abstract:

Arsenic adsorbents are continuously being researched to ease the detrimental impact of arsenic on human health. A comparative study on the adsorption mechanism of arsenic on iron-modified nanoclays was undertaken. Iron-intercalated montmorillonite (Fe-MMT) and montmorillonite-supported zero-valent iron (ZVI-MMT) were the adsorbents investigated in this study. Fe-MMT was produced through ion exchange by replacing the sodium ions intercalated in montmorillonite with iron(III) ions. The iron(III) in Fe-MMT was later reduced to zero-valent iron, producing ZVI-MMT. The adsorption study was performed by the batch technique. The data obtained were fitted to the intra-particle diffusion, pseudo-first-order, and pseudo-second-order models and the Elovich equation to determine the kinetics of adsorption. The adsorption of arsenic on Fe-MMT followed the intra-particle diffusion model with an intra-particle rate constant of 0.27 mg/g-min^0.5. Arsenic was found to be chemically bound on ZVI-MMT, as suggested by the pseudo-second-order model and the Elovich equation. The derived pseudo-second-order rate constant was 0.0027 g/mg-min, and the initial adsorption rate computed from the Elovich equation was 113 mg/g-min.
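
For readers reproducing this kind of kinetic analysis, a minimal sketch of fitting the three models named above with scipy follows; the t/qt values are placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([5, 10, 20, 40, 60, 120], dtype=float)   # contact time (min), fake
qt = np.array([1.2, 1.9, 2.6, 3.1, 3.3, 3.5])         # uptake (mg/g), fake

def intraparticle(t, k_id, c):          # qt = k_id * t^0.5 + c
    return k_id * np.sqrt(t) + c

def pseudo_second(t, k2, qe):           # qt = k2*qe^2*t / (1 + k2*qe*t)
    return (k2 * qe**2 * t) / (1 + k2 * qe * t)

def elovich(t, a, b):                   # qt = (1/b) * ln(1 + a*b*t); a = initial rate
    return (1.0 / b) * np.log(1 + a * b * t)

for name, f, p0 in [("intra-particle", intraparticle, (0.3, 0.0)),
                    ("pseudo-2nd", pseudo_second, (0.01, 4.0)),
                    ("Elovich", elovich, (1.0, 1.0))]:
    p, _ = curve_fit(f, t, qt, p0=p0, maxfev=10000)
    ss_res = np.sum((qt - f(t, *p)) ** 2)
    r2 = 1 - ss_res / np.sum((qt - qt.mean()) ** 2)
    print(f"{name}: params={np.round(p, 4)}, R^2={r2:.3f}")
```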

Keywords: adsorption mechanism, arsenic, montmorillonite, zero valent iron

Procedia PDF Downloads 399
1565 A Geographic Information System Mapping Method for Creating Improved Satellite Solar Radiation Dataset Over Qatar

Authors: Sachin Jain, Daniel Perez-Astudillo, Dunia A. Bachour, Antonio P. Sanfilippo

Abstract:

The future of solar energy in Qatar is evolving steadily. Hence, high-quality spatial solar radiation data is of the utmost importance for any planning and commissioning of solar technology. Generally, two types of solar radiation data are available: satellite data and ground observations. Satellite solar radiation data is developed by physical and statistical models, while ground data is collected by solar radiation measurement stations. The ground data is of high quality; however, it is limited to distributed point locations, with high costs of installation and maintenance for the ground stations. On the other hand, satellite solar radiation data is continuous and available across geographical locations, but it is relatively less accurate than ground data. To utilize the advantages of both, a product has been developed here which provides spatial continuity and higher accuracy than either dataset alone. The popular satellite database, the National Solar Radiation Data Base, NSRDB (PSM V3 model, spatial resolution: 4 km), is chosen here for merging with ground-measured solar radiation in Qatar. The spatial distribution of ground solar radiation measurement stations is comprehensive in Qatar, with a network of 13 ground stations. The monthly average of the daily total Global Horizontal Irradiation (GHI) component from ground and satellite data is used for the error analysis. Normalized root mean square error (NRMSE) values of 3.31%, 6.53%, and 6.63% were observed for October, November, and December 2019, respectively, when comparing in-situ and NSRDB data. The method is based on the Empirical Bayesian Kriging Regression Prediction model available in ArcGIS, ESRI, whose workflow combines regression and kriging methods. A regression model (OLS, ordinary least squares) is fitted between the ground and NSRDB data points. A semi-variogram model is fitted to the experimental semi-variogram obtained from the residuals. The kriging residuals obtained after fitting the semi-variogram model were added to the NSRDB values predicted by the regression model to obtain the final predicted values. The NRMSE values obtained after merging are 1.84%, 1.28%, and 1.81% for October, November, and December 2019, respectively. One more explanatory variable, the ground elevation, has been incorporated in the regression and kriging methods to reduce the error and to provide higher spatial resolution (30 m). The final GHI maps have been created after merging, and NRMSE values of 1.24%, 1.28%, and 1.28% have been observed for October, November, and December 2019, respectively. The proposed merging method has thus proven to be highly accurate. An additional method is also proposed here: a calibrated model is generated using the regression and kriging models and then used to produce solar radiation maps from the explanatory variable alone when not enough historical ground data is available for long-term analysis. The NRMSE values obtained after comparing the calibrated maps with ground data are 5.60% and 5.31% for November and December 2019, respectively.
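
The merging workflow described above (OLS between ground and satellite values, ordinary kriging of the residuals, residuals added back to the regression prediction) can be approximated with open tools as sketched below. The coordinates and values are placeholders, and this only approximates ESRI's Empirical Bayesian Kriging Regression Prediction tool.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging

lon = np.array([51.2, 51.4, 51.5, 51.3, 51.6])       # 13 stations in reality
lat = np.array([25.3, 25.1, 25.5, 25.7, 25.2])
ghi_ground = np.array([5.9, 6.1, 5.8, 6.0, 6.2])     # monthly mean daily GHI
ghi_sat = np.array([6.2, 6.3, 6.1, 6.4, 6.5])        # collocated NSRDB values

# Step 1: OLS regression between satellite and ground GHI.
ols = LinearRegression().fit(ghi_sat.reshape(-1, 1), ghi_ground)
residuals = ghi_ground - ols.predict(ghi_sat.reshape(-1, 1))

# Step 2: fit a semi-variogram model to the residuals and krige them.
ok = OrdinaryKriging(lon, lat, residuals, variogram_model="spherical")

# Step 3: merged value = regression prediction + kriged residual.
q_lon, q_lat, q_sat = np.array([51.35]), np.array([25.4]), np.array([6.3])
res_pred, _ = ok.execute("points", q_lon, q_lat)
merged = ols.predict(q_sat.reshape(-1, 1)) + res_pred
print("merged GHI estimate:", merged)
```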

Keywords: global horizontal irradiation, GIS, empirical bayesian kriging regression prediction, NSRDB

Procedia PDF Downloads 74
1564 0.13-µm Complementary Metal-Oxide Semiconductor Vector Modulator for Beamforming System

Authors: J. S. Kim

Abstract:

This paper presents a 0.13-µm complementary metal-oxide semiconductor (CMOS) vector modulator for a beamforming system. The vector modulator features a 360° phase range and a gain range of -10 dB to 10 dB, with root mean square phase and amplitude errors of only 2.2° and 0.45 dB, respectively. These features make it suitable for wireless backhaul systems in the 5 GHz industrial, scientific, and medical (ISM) band. It draws a current of 20.4 mA from a 1.2 V supply, and the total chip size is 1.87 × 1.34 mm².

Keywords: CMOS, vector modulator, beamforming, 802.11ac

Procedia PDF Downloads 191
1563 Study and Analysis of the Factors Affecting Road Safety Using Decision Tree Algorithms

Authors: Naina Mahajan, Bikram Pal Kaur

Abstract:

The purpose of traffic accident analysis is to find the possible causes of an accident. Road accidents cannot be totally prevented, but with suitable traffic engineering and management the accident rate can be reduced to a certain extent. This paper discusses the classification techniques C4.5 and ID3 using the WEKA data mining tool, applied to an NH (National Highway) dataset. The C4.5 and ID3 techniques give the best results, with high accuracy, low computation time, and a low error rate.
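
As a rough open-source analogue of the WEKA workflow described above, the sketch below trains a decision tree with the entropy (information gain) criterion that underlies ID3/C4.5. Note that scikit-learn implements CART rather than C4.5/J48, and the features and labels here are invented stand-ins for the NH dataset.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Encoded stand-ins for accident attributes (e.g., road, weather, light
# conditions) and a severity label.
X = [[0, 1, 2], [1, 0, 0], [2, 1, 1], [0, 0, 2], [1, 2, 0], [2, 2, 1]]
y = [0, 1, 0, 1, 1, 0]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=42)
clf = DecisionTreeClassifier(criterion="entropy").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```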

Keywords: C4.5, ID3, NH (National Highway), WEKA data mining tool

Procedia PDF Downloads 313
1562 A Conceptual Stakeholder Engagement Model for Change Management in the South African Public Sector

Authors: Mokgata Matjie, Sibo Mayime

Abstract:

The 4IR brought with it an inevitable need for change in all organisations, regardless of sector. As members of the global community, South African organisations are bound to experience the pressures of the 4IR, and the need to digitise becomes unavoidable. The South African government comprises various departments, one of which is the land administration office, solely responsible for the registration, management, and maintenance of the property registry of South Africa. For many years, the registration of deeds was done manually, taking 7-10 days, with loads of paperwork handled manually by conveyancers and registry clerks. Some information might get lost during the registration period, delaying the whole process. This conceptual paper proposes ways to digitalise the land administration office by consulting the relevant literature and ultimately developing a theoretical change management framework for all public sector organisations in South Africa. Change is inevitable, but careful consideration is necessary in terms of consulting all relevant stakeholders to gain their buy-in and implement digitalisation successfully. The developed framework will serve as the theoretical basis for the empirical research envisaged as a PhD study.

Keywords: stakeholders, engagement, change management, land administration, digitalisation, South African public sector

Procedia PDF Downloads 88
1561 Assessment of Students' Skills in Error Detection in SQL Classes Using a Rubric Framework: An Empirical Study

Authors: Dirson Santos De Campos, Deller James Ferreira, Anderson Cavalcante Gonçalves, Uyara Ferreira Silva

Abstract:

Rubrics for learning research provide evaluation criteria and expected performance standards linked to defined student activities, learning goals, and pedagogical objectives. Although rubrics are used in education at all levels, academic literature on rubrics as a tool to support research in SQL education is quite rare. There is a large class of SQL queries that are syntactically correct but certainly not semantically correct, and detecting and correcting such errors is a recurring problem in SQL education. In this paper, we use the Rubric Abstract Framework (RAF), which consists of steps that allow us to map the information needed to measure student performance, guided by the didactic objectives defined by the teacher, as long as the domain modeling is contextualized by the rubric. An empirical study was conducted that demonstrates how rubrics can mitigate student difficulties in finding logical errors and ease the teacher's workload in SQL education. Detecting and correcting logical errors is an important skill for students, and researchers have proposed several ways to improve SQL education, because mastering this paradigm is crucial in software engineering and computer science. The RAF was instantiated in an empirical study developed during the COVID-19 pandemic in a database course. The pandemic transformed face-to-face and remote education, eliminating in-person classes. The lab activities were conducted remotely, which hinders the teaching-learning process, in particular, for this research, in verifying the evidence or statements of students' knowledge, skills, and abilities (KSAs). Much research in academia and industry involves databases. The innovation proposed in this paper is an approach in which the results obtained when using rubrics to map logical errors in query formulation are analyzed, with the gains obtained by students verified empirically. The research approach can be used in the post-pandemic period in both classroom and distance learning.
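
To make the syntactic/semantic distinction concrete, the sketch below runs two syntactically valid queries against an invented schema; the second exhibits the kind of logical error the rubric targets.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, total REAL);
    INSERT INTO orders VALUES (1, 'ana', 50), (2, 'ana', 70), (3, 'bo', 20);
""")

# Intended task: customers whose *summed* order total exceeds 100.
correct = """SELECT customer FROM orders
             GROUP BY customer HAVING SUM(total) > 100"""

# Syntactically correct, logically wrong: filters individual rows, so a
# customer with many small orders (ana: 50 + 70 = 120) is missed.
wrong = """SELECT DISTINCT customer FROM orders WHERE total > 100"""

print("correct:", con.execute(correct).fetchall())   # [('ana',)]
print("wrong:  ", con.execute(wrong).fetchall())     # []
```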

Keywords: rubric, logical error, structured query language (SQL), empirical study, SQL education

Procedia PDF Downloads 166
1560 Impact of Import Restriction on Rice Production in Nigeria

Authors: C. O. Igberi, M. U. Amadi

Abstract:

This research paper on the impact of import restriction on rice production in Nigeria aims at proffering valid solutions to the age-long problem of rice self-sufficiency through a better understanding of policy measures used in the past, in this case, the effectiveness of the rice import restriction of the early 1990s. It tries to answer the questions of whether import restriction boosts domestic rice production and which macroeconomic factors determine the Gross Domestic Rice Product (GDRP). The research question is investigated through literature and analytical frameworks, such that time series data on the GDRP, Gross Fixed Capital Formation (GFCF), average foreign rice producers' prices (PPF), domestic producers' prices (PPN), and the labour force (LABF) are collated for analysis (with an import restriction dummy variable, POL1). The research objectives/hypotheses are analysed using cointegration, the Vector Error Correction Model (VECM), the Impulse Response Function (IRF), and the Granger Causality Test (GCT). Results show that in the short-run error correction specification for GDRP, a percentage (1%) deviation away from the long-run equilibrium in a current quarter is corrected by only 0.14% in the subsequent quarter. Also, the rice import restriction policy had no significant effect on the GDRP in this period, although the policy period did have effects on the PPN and LABF. The chosen variables are valid macroeconomic factors that explain the GDRP of Nigeria, as adduced from the IRF and GCT, and in the long run. Policy recommendations suggest that import restriction is not disqualified as a veritable tool for improving domestic rice production; rather, better enforcement procedures and strict adherence to the policy dictates are needed. Furthermore, accompanying policies which drive public and private capital investment and accumulation must be introduced. Also, the employment rate and labour substitution in the agricultural sector should not be drastically changed; rather, their welfare and efficiency should be improved.
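
A minimal sketch of the estimation step, using statsmodels' VECM with a Johansen rank selection; the series are random stand-ins so the snippet runs, and the variable set merely mirrors the one named above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(0)
n = 120                                             # quarters
df = pd.DataFrame(rng.normal(size=(n, 5)).cumsum(axis=0),
                  columns=["GDRP", "GFCF", "PPF", "PPN", "LABF"])
pol1 = (np.arange(n) < 40).astype(float)            # import-restriction dummy

# Johansen trace test for the cointegration rank (force at least 1 here
# so the toy data does not break the fit).
rank = select_coint_rank(df, det_order=0, k_ar_diff=2)
model = VECM(df, exog=pol1.reshape(-1, 1), k_ar_diff=2,
             coint_rank=max(rank.rank, 1), deterministic="ci")
res = model.fit()
print(res.alpha[0])   # speed of adjustment of GDRP toward long-run equilibrium
```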

Keywords: import restriction, gross domestic rice production, cointegration, VECM, Granger causality, impulse response function

Procedia PDF Downloads 183
1559 On the Performance Analysis of Coexistence between IEEE 802.11g and IEEE 802.15.4 Networks

Authors: Chompunut Jantarasorn, Chutima Prommak

Abstract:

This paper presents an intensive measurement study of network performance when IEEE 802.11g Wireless Local Area Networks (WLANs) coexist with IEEE 802.15.4 Wireless Personal Area Networks (WPANs). The measurement results show that the coexistence of the two networks can increase the Frame Error Rate (FER) of the IEEE 802.15.4 networks by up to 60% and decrease the throughput of the IEEE 802.11g networks by up to 55%.

Keywords: wireless performance analysis, coexistence analysis, IEEE 802.11g, IEEE 802.15.4

Procedia PDF Downloads 529
1558 Generation of High-Quality Synthetic CT Images from Cone Beam CT Images Using A.I. Based Generative Networks

Authors: Heeba A. Gurku

Abstract:

Introduction: Cone Beam CT (CBCT) images play an integral part in proper patient positioning for cancer patients undergoing radiation therapy, but these images are low in quality. The purpose of this study is to generate high-quality synthetic CT images from CBCT using generative models. Material and Methods: This study utilized two datasets from The Cancer Imaging Archive (TCIA): 1) a lung cancer dataset of 20 patients (with full-view CBCT images) and 2) a pancreatic cancer dataset of 40 patients (only the 27 patients having limited-view images were included in the study). Cycle Generative Adversarial Networks (Cycle GAN) and its variant, Attention-Guided Generative Adversarial Networks (AGGAN), were used to generate the synthetic CTs. Models were evaluated visually and on four metrics, Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE), comparing the synthetic CT and original CT images. Results: For the pancreatic dataset with limited-view CBCT images, our study showed that with the Cycle GAN model, MAE improved from 12.57 to 8.49, RMSE from 20.94 to 15.29, and PSNR from 21.85 to 24.63, but structural similarity only marginally increased, from 0.78 to 0.79. Similar results were achieved with AGGAN, with no improvement over Cycle GAN. However, for the lung dataset with full-view CBCT images, Cycle GAN was able to reduce MAE significantly, from 89.44 to 15.11, and AGGAN was able to reduce it to 19.77. Similarly, RMSE decreased from 92.68 to 23.50 with Cycle GAN and to 29.02 with AGGAN. SSIM and PSNR also improved significantly, from 0.17 to 0.59 and from 8.81 to 21.06 with Cycle GAN, respectively, while with AGGAN, SSIM increased to 0.52 and PSNR to 19.31. In both datasets, the GAN models were able to reduce artifacts and noise and deliver better resolution and contrast enhancement. Conclusion and Recommendation: Both Cycle GAN and AGGAN significantly improved MAE, RMSE and PSNR in both datasets. However, the full-view lung dataset showed more improvement in SSIM and image quality than the limited-view pancreatic dataset.
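
The four evaluation metrics named above can be computed per registered image pair as in the following sketch; random arrays stand in for the synthetic and reference CT slices.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(1)
ct = rng.uniform(-1000, 1000, size=(256, 256))        # reference CT slice (HU)
sct = ct + rng.normal(0, 40, size=ct.shape)           # synthetic CT from the GAN

data_range = ct.max() - ct.min()
ssim = structural_similarity(ct, sct, data_range=data_range)
psnr = peak_signal_noise_ratio(ct, sct, data_range=data_range)
mae = np.mean(np.abs(ct - sct))
rmse = np.sqrt(np.mean((ct - sct) ** 2))
print(f"SSIM={ssim:.3f}  PSNR={psnr:.2f} dB  MAE={mae:.2f}  RMSE={rmse:.2f}")
```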

Keywords: CT images, CBCT images, cycle GAN, AGGAN

Procedia PDF Downloads 67
1557 Mathematical Competence as It Is Defined through Learners' Errors in Arithmetic and Algebra

Authors: Michael Lousis

Abstract:

Mathematical competence is the great aim of every mathematical teaching and learning endeavour. It can be defined as an idealised conceptualisation of the quality of cognition and of the ability to implement in practice the mathematical subject matter included in the curriculum; it is displayed only through the performance of doing mathematics. The present study gives a clear definition of mathematical competence in the domains of arithmetic and algebra that stems from the explanation of learners' errors in these domains. The learners whose errors are explained were the Greek and English participants of a large, international, longitudinal, comparative research programme entitled the Kassel Project. The participants' errors emerged from their work on the mathematical questions and problems of the tests presented to them. The tests were constructed so that only the outcomes of the participants' work would be captured, and not the course of thinking that led to these outcomes. The intention was that the tests should provide directly comparable results while avoiding any probable bias. Bias could stem from involving so many markers from different countries and cultures, with so many different belief systems concerning the assessment of learners' thinking; in this way, the validity of the research was protected. This fact forced the implementation of specific research methods and theoretical perspectives in order to disclose the participants' erroneous ways of thinking. These were methodological pragmatism, symbolic interactionism, the philosophy of mind and the ideas of computationalism, which were used for deciding and establishing the grounds of the adequacy and legitimacy of the kinds of knowledge obtained through the explanations given by the error analysis. The employment of this methodology and of these theoretical perspectives resulted in the definition of the learners' mathematical competence, which is the thesis of the present study. Thus, learners' mathematical competence depends upon three key elements that should be developed in their minds: appropriate representations, appropriate meaning, and appropriately developed schemata. This definition then determined the development of appropriate teaching practices and interventions conducive to the achievement, and finally the entailment, of mathematical competence.

Keywords: representations, meaning, appropriate developed schemata, computationalism, error analysis, explanations for the probable causes of the errors, Kassel Project, mathematical competence

Procedia PDF Downloads 251
1556 Performance of PAPR Reduction in OFDM Systems for Wireless Communications

Authors: Alcardo Alex Barakabitze, Saddam Aziz, Muhammad Zubair

Abstract:

Orthogonal Frequency Division Multiplexing (OFDM) is a special form of multicarrier transmission that splits the total transmission bandwidth into a number of orthogonal, non-overlapping subcarriers and transmits the collections of bits called symbols in parallel over these subcarriers. In this paper, we explore the Peak-to-Average Power Ratio (PAPR) problem in OFDM systems and provide a performance analysis of the CCDF and BER through MATLAB simulations.
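
For reference, the PAPR and its CCDF can be computed directly from IFFT-generated OFDM symbols, as in this numpy sketch (the study used MATLAB; the subcarrier count and QPSK mapping here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
n_sub, n_sym = 64, 10000                       # subcarriers, OFDM symbols
qpsk = (rng.choice([-1, 1], (n_sym, n_sub)) +
        1j * rng.choice([-1, 1], (n_sym, n_sub))) / np.sqrt(2)

x = np.fft.ifft(qpsk, axis=1)                  # parallel transmission via IFFT
papr_db = 10 * np.log10(np.max(np.abs(x) ** 2, axis=1) /
                        np.mean(np.abs(x) ** 2, axis=1))

# CCDF: probability that the PAPR exceeds a threshold.
for z in (6, 8, 10):
    print(f"P(PAPR > {z} dB) = {np.mean(papr_db > z):.4f}")
```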

Keywords: bit error ratio (BER), OFDM, peak-to-average power ratio (PAPR), sub-carriers

Procedia PDF Downloads 523
1555 Estimating the Government Consumption and Investment Multipliers Using Local Projection Method on the US Data from 1966 to 2020

Authors: Mustofa Mahmud Al Mamun

Abstract:

Government spending, one of the major components of gross domestic product (GDP), is composed of government consumption, investment, and transfer payments. A change in government spending during recessionary periods can generate an increase in GDP greater than the increase in spending; this is called the "multiplier effect". Accurate estimation of the government spending multiplier is important because fiscal policy has been used to stimulate a flagging economy. Many recent studies have focused on identifying the parts of the economy that respond more to a stimulus under a variety of circumstances. This paper uses a US dataset from 1966 to 2020 and the local projection method, assuming a standard identification strategy, to estimate the multipliers. The model includes important macroaggregates and controls for forecasted government spending, the interest rate, the consumer price index (CPI), exports, imports, and the level of public debt. Investment multipliers are found to be positive and larger than the consumption multipliers. Consumption multipliers are either negative or not significantly different from zero. The results do not vary across the business cycle; however, the consumption multiplier estimated from pre-1980 data is positive.
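
A minimal sketch of the local projection estimator: for each horizon h, the outcome at t+h is regressed on the identified shock at t with controls, and the shock coefficient traces out the response path. The data and controls below are random stand-ins, not the study's dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
T, H = 220, 12                               # quarters, max horizon
shock = rng.normal(size=T)                   # identified spending shock
y = np.cumsum(0.5 * shock + rng.normal(size=T))   # outcome (e.g., log GDP)
controls = rng.normal(size=(T, 3))           # stand-ins for CPI, rate, debt

irf = []
for h in range(H + 1):
    # Regress y_{t+h} on shock_t and controls_t, with HAC standard errors.
    X = sm.add_constant(np.column_stack([shock[:T - h], controls[:T - h]]))
    res = sm.OLS(y[h:], X).fit(cov_type="HAC", cov_kwds={"maxlags": h + 1})
    irf.append(res.params[1])                # response of y at horizon h
print(np.round(irf, 3))
```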

Keywords: business cycle, consumption multipliers, forecasted government spending, investment multipliers, local projection method, zero lower bound

Procedia PDF Downloads 209
1554 An Extended Basic Period and Power-of-Two Policy for Economic Lot-Size Batch-Shipment Scheduling Problem

Authors: Wen-Tsung Ho, Ku-Kuang Chang, Hsin-Yuan Chang

Abstract:

In this study, we consider an economic lot-size batch-shipment scheduling problem (ELBSP) with extended basic period (EBP) and power-of-two (PoT) policies. In this problem, a supplier uses a single facility to manufacture multiple products, and equally sized batches are then delivered by the supplier to buyers over an infinite planning horizon. Relaxing the production schedule converts the ELBSP into an economic lot-size batch-shipment problem (ELBP) with EBP and PoT policies, and a nonlinear integer programming model of the ELBP is constructed. Using the replenishment cycle division and recursive tightening methods, optimal solutions are then found separately for each product; the sum of these optimal solutions is a lower bound on the ELBSP. A proposed heuristic method with polynomial complexity is then applied to find near-optimal solutions of the ELBSP. A numerical example is presented to confirm the efficacy of the proposed method.

Keywords: economic lot-size scheduling problem, extended basic period, replenishment cycle division, recursive tightening, power-of-two

Procedia PDF Downloads 329
1553 Numerical Evolution Methods of Rational Form for Diffusion Equations

Authors: Said Algarni

Abstract:

The purpose of this study was to investigate selected numerical methods that demonstrate good performance in solving PDEs. We adapted an alternative method that involves rational polynomials: the Padé time stepping (PTS) method, which is highly stable for the purposes of the present application and is associated with lower computational costs. Furthermore, PTS was modified for our study, which focused on diffusion equations. Numerical runs were conducted to obtain the optimal local error control threshold.
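
As a concrete instance of rational (Padé) time stepping for the diffusion equation, the sketch below applies the (1,1) Padé approximant of the matrix exponential, exp(dt*A) ≈ (I - dt*A/2)^(-1)(I + dt*A/2), which coincides with Crank-Nicolson; the paper's modified PTS and error control are not reproduced here.

```python
import numpy as np

D, L, N = 1.0, 1.0, 101                       # diffusivity, domain, grid points
dx = L / (N - 1)
dt = 0.001
x = np.linspace(0, L, N)

# Second-order finite-difference Laplacian for u_t = D * u_xx.
A = (np.diag(-2 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) * D / dx**2
A[0, :] = A[-1, :] = 0.0                      # homogeneous Dirichlet boundaries

I = np.eye(N)
step = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)  # Padé(1,1) propagator

u = np.sin(np.pi * x)                         # initial condition
for _ in range(500):
    u = step @ u

# Exact solution with this IC: exp(-pi^2 t) * sin(pi x).
exact = np.exp(-np.pi**2 * 500 * dt) * np.sin(np.pi * x)
print("max error:", np.max(np.abs(u - exact)))
```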

Keywords: Padé time stepping, finite difference, reaction diffusion equation, PDEs

Procedia PDF Downloads 284
1552 Effect of Threshold Corrections on Proton Lifetime and Emergence of Topological Defects in Grand Unified Theories

Authors: Rinku Maji, Joydeep Chakrabortty, Stephen F. King

Abstract:

Grand unified theories (GUTs) rationalize the arbitrariness of the standard model (SM) and explain many enigmas of nature from the outset of a single gauge group. GUTs predict proton decay, and the spontaneous symmetry breaking (SSB) of the higher symmetry group may lead to the formation of topological defects, which are indispensable in the context of cosmological observations. The Super-Kamiokande (Super-K) experiment sets sacrosanct bounds on the partial lifetime (τ) of proton decay for different channels, e.g., τ(p → e⁺π⁰) > 1.6×10³⁴ years, which is the most relevant channel for testing the viability of nonsupersymmetric GUTs. The GUTs based on the gauge groups SO(10) and E(6) are broken to the SM spontaneously through one or two intermediate gauge symmetries, with the manifestation of left-right symmetry at least at a single intermediate stage, and the proton lifetime for these breaking chains has been computed. The threshold corrections that arise from integrating out the heavy fields at the breaking scales alter the running of the gauge couplings and are eventually found to keep many GUTs clear of the Super-K bound. The possible topological defects arising in the course of SSB at the different breaking scales have been studied for all breaking chains.

Keywords: grand unified theories, proton decay, threshold correction, topological defects

Procedia PDF Downloads 159
1551 Feasibility of Voluntary Deep Inspiration Breath-Hold Radiotherapy Technique Implementation without Deep Inspiration Breath-Hold-Assisting Device

Authors: Auwal Abubakar, Shazril Imran Shaukat, Noor Khairiah A. Karim, Mohammed Zakir Kassim, Gokula Kumar Appalanaido, Hafiz Mohd Zin

Abstract:

Background: Voluntary deep inspiration breath-hold radiotherapy (vDIBH-RT) is an effective cardiac dose reduction technique during left breast radiotherapy. This study aimed to assess the accuracy of implementing the vDIBH technique among left breast cancer patients without the use of a special device such as a surface-guided imaging system. Methods: The vDIBH-RT technique was implemented among thirteen (13) left breast cancer patients at the Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia. Breath-hold monitoring was performed based on breath-hold skin marks and laser light congruence observed on zoomed CCTV images from the control console during each delivery. The initial setup was verified using cone beam computed tomography (CBCT) during breath-hold. Each field was delivered using multiple beam segments to keep the delivery time to 20 seconds, which patients can tolerate in breath-hold. The data were analysed using an in-house MATLAB algorithm, and the PTV margin was computed based on van Herk's margin recipe. Results: The setup errors analysed from CBCT show that the population systematic errors in the lateral (x), longitudinal (y), and vertical (z) axes were 2.28 mm, 3.35 mm, and 3.10 mm, respectively. Based on the CBCT image guidance, the planning target volume (PTV) margin that would be required for vDIBH-RT using the CCTV/laser monitoring technique is 7.77 mm, 10.85 mm, and 10.93 mm in the x, y, and z axes, respectively. Conclusion: It is feasible to safely implement vDIBH-RT among left breast cancer patients without special equipment. The breath-hold monitoring technique is cost-effective, radiation-free, easy to implement, and allows real-time breath-hold monitoring.
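
For reference, van Herk's recipe gives the margin as 2.5 times the population systematic error plus 0.7 times the random error. The sketch below recomputes the reported margins, back-solving the (unreported) random errors purely for illustration.

```python
# PTV margin = 2.5 * Sigma + 0.7 * sigma (van Herk), all in mm.
sigma_sys = {"x": 2.28, "y": 3.35, "z": 3.10}       # reported systematic errors
margins = {"x": 7.77, "y": 10.85, "z": 10.93}       # reported PTV margins

for axis in "xyz":
    # Random error implied by the reported margin (illustrative only).
    sigma_rand = (margins[axis] - 2.5 * sigma_sys[axis]) / 0.7
    recomputed = 2.5 * sigma_sys[axis] + 0.7 * sigma_rand
    print(f"{axis}: Sigma={sigma_sys[axis]} mm, implied sigma={sigma_rand:.2f} mm,"
          f" margin={recomputed:.2f} mm")
```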

Keywords: vDIBH, cone beam computed tomography, radiotherapy, left breast cancer

Procedia PDF Downloads 34
1550 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK

Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick

Abstract:

The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in late January 2020 in the UK, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to build a predictive machine learning archetype that could forecast COVID-19 cases within the UK. This study concentrates on the statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new cases encountered on a daily basis, total deaths registered, and deaths per day due to Coronavirus is collected from the World Health Organisation (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data is split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests, and linear regression algorithms are chosen to study the models' performance in predicting new COVID-19 cases. From evaluation metrics such as the r-squared value and the mean squared error, the statistical performance of the models in predicting new COVID cases is evaluated. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the Random Forest algorithm performs more effectively and efficiently in predicting new COVID cases, which could help the health sector take relevant control measures against the spread of the virus.
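
A minimal sketch of the reported best setup (Random Forest, n=30 estimators, 8:2 split); synthetic lag features stand in for the WHO daily series, and the 7-day windowing is our assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(5)
cases = np.abs(np.cumsum(rng.normal(200, 50, 425)))         # daily new cases (fake)
X = np.column_stack([cases[i:-(7 - i)] for i in range(7)])  # 7-day lag window
y = cases[7:]

split = int(0.8 * len(y))                                   # 8:2 chronological split
rf = RandomForestRegressor(n_estimators=30, random_state=0)
rf.fit(X[:split], y[:split])
pred = rf.predict(X[split:])
print("R^2 :", r2_score(y[split:], pred))
print("MSE :", mean_squared_error(y[split:], pred))
```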

Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest

Procedia PDF Downloads 101
1549 Dynamics of Chirped RZ Modulation Format in GEPON Fiber to the Home (FTTH) Network

Authors: Anurag Sharma, Manoj Kumar, Ashima, Sooraj Parkash

Abstract:

This paper presents a simulative comparison of different modulation formats, namely NRZ, Manchester, and CRZ, in a 100-subscriber Gigabit Ethernet Passive Optical Network (GEPON) FTTH network at a 5 Gbps bit rate. It is observed from the simulation results that the CRZ modulation format is best suited for the designed system. A link design with a 1:100 splitter is used as the Passive Optical Network (PON) element, which creates communication between the central office and the different users. The Bit Error Rate (BER) is found to be 2.8535e-10 for the 5 Gbit/s system with the CRZ modulation format.

Keywords: PON, FTTH, OLT, ONU, CO, GEPON

Procedia PDF Downloads 684
1548 Implementing Green IT Practices in Non-IT Industries in Sri Lanka: Contemplating the Feasibility and Methods to Ensure Sustainability

Authors: Manuela Nayantara Jeyaraj

Abstract:

Green IT is a term that refers to the collective strategic and tactical practices that directly reduce the carbon footprint of an establishment's computing procedures. This concept has been tightly knit with IT-related organizations; hence, it has been presumed not to apply to non-IT organizations in Sri Lanka. With the turn of the century, computing technologies have taken over commonplace activities in every nook and corner of Sri Lanka, which is still on the verge of moving forth in its march towards being a developed country. Hence, it needs to be shown that non-IT industries are well bound to adhere to 'Green IT' practices as well, in order to reduce their carbon footprint, and to consider the practicality of implementing Green IT practices within their work routines. There are several spheres that need to be taken into account in creating awareness of 'Green IT', such as the economic breach, the technologies available, legislative bounds, community mindset and many more. This paper reconnoiters the causes that currently restrain non-IT organizations from considering Green IT concepts. By doing so, it is expected to prove the beneficial providence gained by implementing this concept within an organization. The ultimate goal is to propose feasible 'Green IT' practices that could be implemented within the context of Sri Lankan non-IT sectors in order to ensure an organization's sustainable growth towards long-term existence.

Keywords: computing practices, Green IT, non-IT industries, Sri Lanka, sustainability

Procedia PDF Downloads 231
1547 Semilocal Convergence of a Three Step Fifth Order Iterative Method under Hölder Continuity Condition in Banach Spaces

Authors: Ramandeep Behl, Prashanth Maroju, S. S. Motsa

Abstract:

In this paper, we study the semilocal convergence of a fifth-order iterative method using recurrence relations, under the assumption that the first-order Fréchet derivative satisfies the Hölder condition. We also calculate the R-order of convergence and provide some a priori error bounds. Based on this, we give the existence and uniqueness region of the solution for a nonlinear Hammerstein integral equation of the second kind.
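
For reference, the Hölder continuity assumption on the first Fréchet derivative has the standard form (our notation):

```latex
\| F'(x) - F'(y) \| \le K \, \| x - y \|^{p}, \qquad x, y \in \Omega, \quad p \in (0, 1], \; K > 0,
```

which reduces to the Lipschitz condition when p = 1.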

Keywords: Hölder continuity condition, Fréchet derivative, fifth order convergence, recurrence relations

Procedia PDF Downloads 595
1546 Exploring Error-Minimization Protocols for Upper-Limb Function During Activities of Daily Life in Chronic Stroke Patients

Authors: M. A. Riurean, S. Heijnen, C. A. Knott, J. Makinde, D. Gotti, J. VD. Kamp

Abstract:

Objectives: The current study is done in preparation for a randomized controlled study investigating the effects of an implicit motor learning protocol implemented using an extension-supporting glove. It will explore different protocols to find out which is preferred when studying motor learning in the chronic stroke population that struggles with hand spasticity. Design: This exploratory study will follow 24 individuals with a chronic stroke (> 6 months) during their usual care journey. We will record the results of two 9-Hole Peg Tests (9HPT) done during their therapy sessions with a physiotherapist or in their home, before and after 4 weeks of wearing an extension-supporting glove used to employ the to-be-studied protocols. The participants will wear the glove 3 times per week for one hour while performing their activities of daily living and record the times they wore it in a diary. Their experience will be monitored through telecommunication once every week. Subjects: Individuals who have had a stroke at least 6 months prior to participation, with hand spasticity of at most 3 on the modified Ashworth Scale and finger flexion motor control of at least 19/33 on the Motricity Index. Exclusion criteria: extreme hemi-neglect. Methods: The participants will be randomly divided into 3 groups: one group using the glove with a pre-set schedule of decreasing support (implicit motor learning), one group using the glove with self-controlled decreasing support (autonomous motor learning), and a third using the glove with constant support (as control). Before and after the 4-week period, there will be an intake session and a post-assessment session. Analysis: We will compare the results of the two 9HPTs to check whether the protocols were effective. Furthermore, we will compare the results between the three groups to find the preferred one. A qualitative analysis will be run on the experience of participants throughout the 4-week period. Expected results: We expect that the group using the implicit learning protocol will show superior results.

Keywords: implicit learning, hand spasticity, stroke, error minimization, motor task

Procedia PDF Downloads 39
1545 Optimal Evaluation of Weather Risk Insurance for Wheat

Authors: Slim Amami

Abstract:

A model is developed to prevent the risks related to climate conditions in the agricultural sector. It determines the yearly optimum premium to be paid by a farmer in order to reach his required turnover. The model is mainly based on both climatic stability and the 'soft' responses of commonly grown species to average climate variations at a given place, inside a safety ball which can be determined from past meteorological data. This allows the use of a linear regression expression for the dependence of the production result on the driving meteorological parameters, the main ones being daily average sunlight, rainfall and temperature. By a simple best-parameter fit from the expert table drawn up with professionals, an optimal representation of yearly production is deduced from records of previous years, and the yearly payback is evaluated from the minimum yearly produced turnover. The optimal premium is then deduced and gives the producer a useful bound for negotiating an offer by insurance companies to effectively protect their harvest. The application to wheat production in the French Oise department illustrates the reliability of the present model, with as little as 6% difference between predicted and real data. The model can be adapted to almost every agricultural field by changing the state parameters and calibrating their associated coefficients.
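
A toy rendering of the model's two steps (linear regression of yearly production on the meteorological drivers, then a premium from the shortfall against a required turnover); the coefficients, units, and shortfall rule below are our simplifications, not the author's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(4)
years = 20
weather = np.column_stack([rng.normal(5.0, 0.5, years),    # sunlight (kWh/m2/day)
                           rng.normal(650, 80, years),     # rainfall (mm/yr)
                           rng.normal(11.0, 1.0, years)])  # temperature (C)
turnover = weather @ np.array([60.0, 0.5, 20.0]) + rng.normal(0, 40, years)

X = np.column_stack([np.ones(years), weather])
coef, *_ = np.linalg.lstsq(X, turnover, rcond=None)        # best-parameter fit
predicted = X @ coef

required = np.quantile(turnover, 0.4)                      # required turnover
premium = np.mean(np.maximum(required - predicted, 0.0))   # expected shortfall
print("fitted coefficients:", np.round(coef, 2))
print("indicative premium :", round(premium, 2))
```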

Keywords: agriculture, database, meteorological factors, production model, optimal price

Procedia PDF Downloads 206
1544 Performance Improvement of Long-Reach Optical Access Systems Using Hybrid Optical Amplifiers

Authors: Shreyas Srinivas Rangan, Jurgis Porins

Abstract:

Internet traffic has increased exponentially due to the high data rates demanded by users, and the constantly growing metro and access networks are focused on improving the maximum transmission distance of long-reach optical networks. One common method of improving the maximum transmission distance of long-reach optical networks at the component level is to use broadband optical amplifiers. The Erbium-Doped Fiber Amplifier (EDFA) provides high amplification with a low noise figure, but due to its characteristics, its operation is limited to the C-band and L-band. In contrast, the Raman amplifier exhibits a wide amplification spectrum, and negative effective noise figure values can be achieved; however, to obtain such results, high-power pump sources are required. Operating Raman amplifiers with such high-power optical sources may cause fire hazards and may damage the optical system. In this paper, we implement a hybrid optical amplifier configuration in which EDFA and Raman amplifiers are combined to exploit the advantages of both and improve the reach of the system. Using this setup, we analyze the maximum transmission distance of the network by obtaining a correlation diagram between the length of the single-mode fiber (SMF) and the Bit Error Rate (BER). This hybrid amplifier configuration is implemented in a Wavelength Division Multiplexing (WDM) system with a BER of 10⁻⁹ using the NRZ modulation format, and the gain uniformity, signal-to-noise ratio (SNR), pump source efficiency, and optical signal gain efficiency of the amplifier are studied in a mathematical modelling environment. Numerical simulations were implemented in RSoft OptSim simulation software based on the nonlinear Schrödinger equation, using the split-step Fourier method and the Monte Carlo method for estimating the BER.
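
The split-step Fourier method named above alternates linear (dispersion/loss) steps in the frequency domain with nonlinear steps in the time domain; a minimal numpy sketch for the scalar NLSE follows, with typical SMF parameters rather than those of the simulated WDM link.

```python
import numpy as np

# NLSE: dA/dz = -(alpha/2) A - i (beta2/2) d^2A/dt^2 + i gamma |A|^2 A
alpha = 0.0461          # 0.2 dB/km attenuation in 1/km
beta2 = -21.0           # group-velocity dispersion, ps^2/km
gamma = 1.3             # nonlinear coefficient, 1/(W km)

nt, t_win = 1024, 400.0                             # samples, time window (ps)
t = np.linspace(-t_win / 2, t_win / 2, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])   # angular frequency (rad/ps)

A = np.sqrt(1e-3) / np.cosh(t / 20.0)               # input pulse, ~1 mW peak
dz, n_steps = 0.1, 1000                             # 0.1 km steps, 100 km span

# Symmetric split step: half linear, full nonlinear, half linear.
half_linear = np.exp((-alpha / 2 + 1j * beta2 / 2 * w**2) * dz / 2)
for _ in range(n_steps):
    A = np.fft.ifft(half_linear * np.fft.fft(A))        # dispersion + loss
    A = A * np.exp(1j * gamma * np.abs(A) ** 2 * dz)    # Kerr nonlinearity
    A = np.fft.ifft(half_linear * np.fft.fft(A))        # dispersion + loss

print("output peak power (W):", np.max(np.abs(A) ** 2))
```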

Keywords: Raman amplifier, erbium doped fibre amplifier, bit error rate, hybrid optical amplifiers

Procedia PDF Downloads 47
1543 Curcumin and Methotrexate Loaded Montmorillonite Clay for Sustained Oral Drug Delivery Application

Authors: Subrata Kar, Banani Kundu, Papiya Nandy, Ruma Basu, Sukhen Das

Abstract:

Natural montmorillonite clay is a common ingredient in pharmaceutical products, both as an excipient and as an active support; hence, it is considered a suitable candidate for drug delivery systems. In this work, the cationic detergent CTAB is used to increase the interlayer spacing of Na⁺-montmorillonite clay in order to intercalate curcumin and methotrexate. Methotrexate is a folic acid antagonist and an anti-proliferative and immunosuppressive agent, while curcumin is a bioactive constituent of the rhizomes of Curcuma longa, possessing remarkable chemo-preventive and anti-inflammatory properties. The resultant inorganic-organic hybrids are characterized by X-ray diffraction (XRD), infrared spectroscopy (FTIR) and thermogravimetric analysis (TGA) to confirm the successful intercalation of curcumin and methotrexate within the clay layers. The pharmaceutical potential of the hybrids is explored by studying the drug loading (%), encapsulation efficiency and release kinetics. Finally, in-vitro studies are performed using cancer cells to find the effect of the released curcumin in improving the sensitivity to clay-bound methotrexate and ameliorating cell death, compared to their effectiveness when used without the inorganic aluminosilicate vehicle.

Keywords: montmorillonite, methotrexate, curcumin, loading efficiency, release kinetics, anticancer activity

Procedia PDF Downloads 503