Search results for: Exponential smoothing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 281

41 Blockchain for IoT Security and Privacy in Healthcare Sector

Authors: Umair Shafique, Hafiz Usman Zia, Fiaz Majeed, Samina Naz, Javeria Ahmed, Maleeha Zainab

Abstract:

The Internet of Things (IoT) has become a hot topic over the last couple of years. This innovative technology has shown promising progress in various areas, and the world has witnessed exponential growth in multiple application domains. Researchers are investigating its capabilities in order to harness its true potential. At the same time, however, IoT networks open up a new range of vulnerabilities and physical threats to data integrity, privacy, and confidentiality. This is due to centralized control, a data-silo approach to handling information, and a lack of standardization in IoT networks. Blockchain is a technology that creates secure distributed ledgers to store and communicate data; its benefits include resiliency, integrity, anonymity, decentralization, and autonomous control. The potential for blockchain technology to provide the key to managing and controlling IoT has created a new wave of excitement around the idea of putting data back into the hands of end-users. In this manuscript, we propose a model that combines blockchain and IoT networks to address potential security and privacy issues in the healthcare domain, and we describe how various stakeholders will interact with the system.

Keywords: Internet of Things, IoT, blockchain, data integrity, authentication, data privacy.

40 Impact of Increasing Distributed Solar PV Systems on Distribution Networks in South Africa

Authors: Aradhna Pandarum

Abstract:

South Africa is experiencing an exponential growth of distributed solar PV installations. This is due to various factors with the predominant one being increasing electricity tariffs along with decreasing installation costs, resulting in attractive business cases to some end-users. Despite there being a variety of economic and environmental advantages associated with the installation of PV, their potential impact on distribution grids has yet to be thoroughly investigated. This is especially true since the locations of these units cannot be controlled by Network Service Providers (NSPs) and their output power is stochastic and non-dispatchable. This report details two case studies that were completed to determine the possible voltage and technical losses impact of increasing PV penetration in the Northern Cape of South Africa. Some major impacts considered for the simulations were ramping of PV generation due to intermittency caused by moving clouds, the size and overall hosting capacity and the location of the systems. The main finding is that the technical impact is different on a constrained feeder vs a non-constrained feeder. The acceptable PV penetration level is much lower for a constrained feeder than a non-constrained feeder, depending on where the systems are located.

Keywords: Medium voltage networks, power system losses, power system voltage, solar photovoltaic, PV.

39 An Improved k Nearest Neighbor Classifier Using Interestingness Measures for Medical Image Mining

Authors: J. Alamelu Mangai, Satej Wagle, V. Santhosh Kumar

Abstract:

The exponential increase in the volume of medical image databases has imposed new challenges on clinical routine in maintaining patient history, diagnosis, treatment and monitoring. With the advent of data mining and machine learning techniques, it is possible to automate and/or assist physicians in clinical diagnosis. In this research, a medical image classification framework using data mining techniques is proposed. It involves feature extraction, feature selection, feature discretization and classification. In the classification phase, the performance of the traditional k-nearest neighbor (kNN) classifier is improved using a feature weighting scheme and distance-weighted voting instead of simple majority voting. Feature weights are calculated using the interestingness measures used in association rule mining. Experiments on retinal fundus images show that the proposed framework improves the classification accuracy of traditional kNN from 78.57% to 92.85%.
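
The core of the classification step, feature-weighted distances combined with distance-weighted voting, can be sketched as follows. This is only an illustration of the idea, not the authors' implementation: the feature weights are assumed to be supplied by the caller (for instance, derived elsewhere from association-rule interestingness measures).

```python
# Sketch: kNN with per-feature weights and distance-weighted voting.
import numpy as np

def weighted_knn_predict(X_train, y_train, x, feature_weights, k=5):
    # Weighted Euclidean distance: important features contribute more.
    d = np.sqrt(((X_train - x) ** 2 * feature_weights).sum(axis=1))
    nn = np.argsort(d)[:k]                      # indices of the k nearest neighbours
    votes = {}
    for i in nn:
        w = 1.0 / (d[i] + 1e-9)                 # distance-weighted vote, not simple majority
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + w
    return max(votes, key=votes.get)

# Toy usage with made-up data and weights
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
y = np.array([0, 0, 1, 1])
print(weighted_knn_predict(X, y, np.array([0.85, 0.15]), feature_weights=np.array([0.7, 0.3])))
```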

Keywords: Medical Image Mining, Data Mining, Feature Weighting, Association Rule Mining, k nearest neighbor classifier.

38 Reliability Analysis of Computer Centre at Yobe State University Using LRU Algorithm

Authors: V. V. Singh, Yusuf Ibrahim Gwanda, Rajesh Prasad

Abstract:

In this paper, we focus on the reliability and performance analysis of the Computer Centre (CC) at Yobe State University, Damaturu, Nigeria. The CC consists of three servers: one database/mail server, one redundant server, and one server shared with the client computers in the CC (called the local server). Observing the different possibilities of the functioning of the CC, the analysis evaluates the various popular measures of reliability, such as availability, reliability, mean time to failure (MTTF), and the profit arising from the operation of the system. The system can ultimately fail due to failure of the router, failure of the redundant server before the mail server is repaired, or switch failure. The system can also partially fail when the local server fails. Failed devices are restored according to the Least Recently Used (LRU) technique. The system can also fail entirely due to a cooling failure of the server, an electricity failure, or a natural calamity such as an earthquake, fire or tsunami. All failure rates are assumed to be constant and to follow the exponential distribution, while repair follows two types of distributions: general and Gumbel-Hougaard family copula.
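
The constant-failure-rate (exponential) assumption is what makes simple closed forms for MTTF and availability possible. The following is a minimal sketch of that standard book-keeping, not the paper's copula-based repair model; all rates and the repair time are hypothetical placeholders.

```python
# Standard closed forms for exponential (constant-rate) components.
FAILURE_RATES = {"mail_server": 2e-4, "redundant_server": 2e-4,
                 "local_server": 5e-4, "router": 1e-4, "switch": 1e-4}  # per hour (placeholders)

def mttf_series(rates):
    # Series system of exponential units: the total failure rate is the sum of the rates.
    return 1.0 / sum(rates)

def mttf_parallel_two_identical(rate):
    # Two identical active units, system fails when both fail:
    # MTTF = 1/(2*rate) + 1/rate = 3/(2*rate).
    return 1.5 / rate

def availability(mttf, mttr):
    return mttf / (mttf + mttr)

core = mttf_series([FAILURE_RATES["router"], FAILURE_RATES["switch"]])
mail = mttf_parallel_two_identical(FAILURE_RATES["mail_server"])
print(f"core path MTTF = {core:.1f} h, mail pair MTTF = {mail:.1f} h")
print(f"core availability (8 h repair) = {availability(core, mttr=8.0):.4f}")
```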

Keywords: Reliability, availability, Gumbel-Hougaard family copula, MTTF, internet data center.

37 Multi-Rate Exact Discretization based on Diagonalization of a Linear System - A Multiple-Real-Eigenvalue Case

Authors: T. Sakamoto, N. Hori

Abstract:

A multi-rate discrete-time model, whose response agrees exactly with that of a continuous-time original at all sampling instants for any sampling periods, is developed for a linear system, which is assumed to have multiple real eigenvalues. The sampling rates can be chosen arbitrarily and individually, so that their ratios can even be irrational. The state space model is obtained as a combination of a linear diagonal state equation and a nonlinear output equation. Unlike the usual lifted model, the order of the proposed model is the same as the number of sampling rates, which is less than or equal to the order of the original continuous-time system. The method is based on a nonlinear variable transformation, which can be considered as a generalization of linear similarity transformation, which cannot be applied to systems with multiple eigenvalues in general. An example and its simulation result show that the proposed multi-rate model gives exact responses at all sampling instants.
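
For the diagonal part of such a model, exact agreement at the sampling instants follows directly from the continuous solution of each decoupled state. The sketch below illustrates only that diagonal-state idea with each state sampled at its own (even mutually irrational) period; the paper's nonlinear output equation and the transformation for multiple eigenvalues are not reproduced, and the eigenvalues used are arbitrary examples.

```python
# Exact per-state discretization of a diagonalized autonomous system xdot_i = lam_i * x_i.
import numpy as np

lam = np.array([-1.0, -3.0])          # example real eigenvalues
T   = np.array([0.1, 0.1 * np.pi])    # individual sampling periods; their ratio is irrational
x0  = np.array([1.0, 2.0])

def exact_samples(lam_i, T_i, x0_i, n_steps):
    # x_i[k] = (exp(lam_i * T_i))**k * x_i[0], which matches the continuous response exactly.
    phi = np.exp(lam_i * T_i)
    return x0_i * phi ** np.arange(n_steps + 1)

for i in range(len(lam)):
    xs = exact_samples(lam[i], T[i], x0[i], 5)
    cont = x0[i] * np.exp(lam[i] * T[i] * np.arange(6))   # continuous-time response at the samples
    assert np.allclose(xs, cont)                          # exact at all sampling instants
    print(f"state {i}:", np.round(xs, 6))
```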

Keywords: Multi-rate discretization, linear systems, triangularization, similarity transformation, diagonalization, exponential transformation, multiple eigenvalues

36 Forward Speed and Draught Requirement of a Semi-Automatic Cassava Planter under Different Wheel Usage

Authors: M. O. Ale, S. I. Manuwa, O. J. Olukunle, T. Ewetumo

Abstract:

Five speeds of 1.5, 1.8, 2.1, 2.3 and 2.6 km/h were used at a constant soil depth of 100 mm to determine the effects of forward speed on the draught requirement of a semi-automatic cassava planter under pneumatic-wheel and rigid-wheel usage on a well-prepared sandy clay loam soil. The soil draught was measured electronically using an on-the-go draught-measuring instrumentation system developed for the purpose of this research. The results showed an exponential relationship between forward speed and draught in the rigid-wheel experiment, in which draught, ranging between 24.91 and 744.44 N, increased with increasing forward speed. This is contrary to the polynomial relationship observed in the pneumatic-wheel experiment, in which the draught varied between 96.09 and 343.53 N. It was observed that the optimum speed of 1.5 km/h gave the lowest draught values in both the pneumatic-wheel and rigid-wheel experiments, with higher values in the pneumatic experiment. It was generally noted that the rigid-wheel planter, with its lower draught, requires less energy for operation. It is therefore concluded that operating the semi-automatic cassava planter with rigid wheels will be more economical for cassava farmers than operating the planter with pneumatic wheels.
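
An exponential draught-speed relationship of the form D(v) = a·exp(b·v) can be fitted by a simple log-linear regression. The sketch below is illustrative only: the draught values used are placeholders spanning the reported range, not the paper's measured data.

```python
# Log-linear fit of an exponential draught-speed model D(v) = a * exp(b * v).
import numpy as np

speed   = np.array([1.5, 1.8, 2.1, 2.3, 2.6])          # km/h (test speeds from the abstract)
draught = np.array([25.0, 60.0, 150.0, 320.0, 744.0])  # N (placeholder values)

b, ln_a = np.polyfit(speed, np.log(draught), 1)        # ln(D) = ln(a) + b * v
a = np.exp(ln_a)
print(f"D(v) ~ {a:.2f} * exp({b:.2f} * v)")
```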

Keywords: Cassava planter, planting, forward speed, draught, wheel type.

35 Fingerprint Compression Using Contourlet Transform and Multistage Vector Quantization

Authors: S. Esakkirajan, T. Veerakumar, V. Senthil Murugan, R. Sudhakar

Abstract:

This paper presents a new fingerprint coding technique based on contourlet transform and multistage vector quantization. Wavelets have shown their ability in representing natural images that contain smooth areas separated with edges. However, wavelets cannot efficiently take advantage of the fact that the edges usually found in fingerprints are smooth curves. This issue is addressed by directional transforms, known as contourlets, which have the property of preserving edges. The contourlet transform is a new extension to the wavelet transform in two dimensions using nonseparable and directional filter banks. The computation and storage requirements are the major difficulty in implementing a vector quantizer. In the full-search algorithm, the computation and storage complexity is an exponential function of the number of bits used in quantizing each frame of spectral information. The storage requirement in multistage vector quantization is less when compared to full search vector quantization. The coefficients of contourlet transform are quantized by multistage vector quantization. The quantized coefficients are encoded by Huffman coding. The results obtained are tabulated and compared with the existing wavelet based ones.
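
The storage advantage of multistage vector quantization comes from quantizing residuals stage by stage, so codebook size grows additively rather than exponentially with the bit budget. The sketch below shows a two-stage encoder on a single coefficient vector; the random codebooks are placeholders (in practice they would be trained, e.g. with LBG/k-means), and the contourlet and Huffman stages are not reproduced.

```python
# Two-stage vector quantization of a coefficient vector.
import numpy as np
rng = np.random.default_rng(0)

def nearest(codebook, v):
    return int(np.argmin(((codebook - v) ** 2).sum(axis=1)))

dim, n1, n2 = 8, 16, 16                     # vector length and codebook sizes (4 + 4 bits)
C1 = rng.normal(size=(n1, dim))             # stage-1 codebook (placeholder)
C2 = rng.normal(scale=0.3, size=(n2, dim))  # stage-2 codebook for the residual (placeholder)

x  = rng.normal(size=dim)                   # a coefficient vector to encode
i1 = nearest(C1, x)                         # stage-1 index
r  = x - C1[i1]                             # residual left by stage 1
i2 = nearest(C2, r)                         # stage-2 index
x_hat = C1[i1] + C2[i2]                     # reconstruction
print("indices:", (i1, i2), " distortion:", float(((x - x_hat) ** 2).mean()))
```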

Keywords: Contourlet Transform, Directional Filter bank, Laplacian Pyramid, Multistage Vector Quantization

34 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach

Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar

Abstract:

Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of five years, provided timely diagnosis, detection and prediction, which reduces the need for treatment options carrying the risk of invasive surgery and thereby increases the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) for image enhancement, which gives the best results. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region-property measurements of area, perimeter, diameter, centroid and eccentricity are computed for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used to determine whether the patient's condition is normal or abnormal, while an Artificial Neural Network (ANN) identifies the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technique shows encouraging results for real-time information and online detection in future research.
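
Two of the GLCM texture features mentioned above (contrast and energy) can be computed directly from a normalized co-occurrence matrix. The following didactic sketch builds the GLCM for a single pixel offset on synthetic data; it is not the authors' full DWT/GLCM pipeline.

```python
# Minimal gray-level co-occurrence matrix (GLCM) and two texture features.
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    # Count how often gray level i occurs next to gray level j at offset (dy, dx).
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()                       # normalize to joint probabilities

img = np.random.default_rng(1).integers(0, 8, size=(32, 32))   # synthetic 8-level image
P = glcm(img)
i, j = np.indices(P.shape)
contrast = float((P * (i - j) ** 2).sum())
energy   = float((P ** 2).sum())
print(f"contrast = {contrast:.3f}, energy = {energy:.3f}")
```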

Keywords: ANN, DWT, GLCM, KNN, ROI, artificial neural networks, discrete wavelet transform, gray-level co-occurrence matrix, k-nearest neighbor, region of interest.

33 An Evaluation of Average Run Length of MaxEWMA and MaxGWMA Control Charts

Authors: S. Phanyaem

Abstract:

The exponentially weighted moving average (EWMA) control chart is a popular chart used for detecting shifts in the mean of a distribution parameter in quality control. The objective of this paper is to compare the efficiency of control charts in detecting an increase in the mean of a process. In particular, we compared the Maximum Exponentially Weighted Moving Average (MaxEWMA) and Maximum Generally Weighted Moving Average (MaxGWMA) control charts when the observations follow an exponential distribution. The criterion for evaluating the performance of a control chart is the Average Run Length (ARL). The results of the comparison show that, for small sample sizes, the MaxEWMA control chart is more efficient at detecting a shift in the process mean than the MaxGWMA control chart. For large sample sizes, the MaxEWMA control chart is more sensitive in detecting small shifts in the process mean than the MaxGWMA control chart, whereas for a large shift in the mean, the MaxGWMA control chart is more sensitive in detecting the shift than the MaxEWMA control chart.
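
The ARL criterion is typically estimated by simulating the chart until it signals and averaging the run lengths. The sketch below does this for a plain one-sided EWMA chart on exponential observations; the MaxEWMA/MaxGWMA statistics of the paper are more elaborate, and the control limit h used here is an arbitrary placeholder rather than a calibrated value.

```python
# Monte-Carlo ARL estimate for a one-sided EWMA chart on exponential data.
import numpy as np
rng = np.random.default_rng(2)

def run_length(lam=0.1, h=1.4, in_control_mean=1.0, shift=1.0, max_n=100000):
    z = in_control_mean                      # start the EWMA at the target mean
    for n in range(1, max_n + 1):
        x = rng.exponential(scale=in_control_mean * shift)   # shift > 1 means a mean increase
        z = lam * x + (1 - lam) * z          # EWMA recursion
        if z > h:                            # out-of-control signal
            return n
    return max_n

arl0 = np.mean([run_length(shift=1.0) for _ in range(2000)])   # in-control ARL
arl1 = np.mean([run_length(shift=1.5) for _ in range(2000)])   # ARL after a 50% mean increase
print(f"ARL0 ~ {arl0:.1f}, ARL1 ~ {arl1:.1f}")
```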

Keywords: Maximum Exponentially Weighted Moving Average, Maximum General Weighted Moving Average, Average Run Length.

32 Pricing European Options under Jump Diffusion Models with Fast L-stable Padé Scheme

Authors: Salah Alrabeei, Mohammad Yousuf

Abstract:

The goal of option pricing theory is to help investors manage their money, enhance returns and control their financial future by theoretically valuing their options. Modeling option pricing by Black-Scholes models with jumps guarantees that the market movement is considered. However, only numerical methods can solve this model, and not all numerical methods are efficient for these models because the payoffs are non-smooth or have discontinuous derivatives at the exercise price. In this paper, the exponential time differencing (ETD) method is applied to solve partial integro-differential equations arising in pricing European options under Merton's and Kou's jump-diffusion models. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). A partial-fraction form of Padé schemes is used to overcome the complexity of inverting polynomials of matrices. These two tools guarantee efficient and accurate numerical solutions. We construct a parallel and easy-to-implement version of the numerical scheme. Numerical experiments are given to show how fast and accurate our scheme is.
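
The O(M²) to O(M log M) reduction rests on a standard trick: a Toeplitz matrix (the structure the jump integral produces on a uniform grid) can be embedded in a circulant matrix, which the FFT diagonalizes. The sketch below shows only that matrix-vector product; the ETD/Padé time stepping itself is not reproduced.

```python
# Toeplitz matrix-vector product in O(M log M) via circulant embedding and the FFT.
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    M = len(x)
    # First column of the 2M-circulant that embeds the Toeplitz matrix.
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(M)])))
    return y[:M].real

# Check against the direct O(M^2) product on a small random Toeplitz matrix.
rng = np.random.default_rng(3)
M = 6
col, row = rng.normal(size=M), rng.normal(size=M)
row[0] = col[0]
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(M)] for i in range(M)])
x = rng.normal(size=M)
assert np.allclose(T @ x, toeplitz_matvec(col, row, x))
print(toeplitz_matvec(col, row, x))
```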

Keywords: Integro-differential equations, L-stable methods, pricing European options, jump-diffusion model.

31 Strength and Permeability Characteristics of Steel Fibre Reinforced Concrete

Authors: A. P. Singh

Abstract:

The results reported in this paper are the part of an extensive laboratory investigation undertaken to study the effects of fibre parameters on the permeability and strength characteristics of steel fibre reinforced concrete (SFRC). The effect of varying fibre content and curing age on the water permeability, compressive and split tensile strengths of SFRC was investigated using straight steel fibres having an aspect ratio of 65. Samples containing three different weight fractions of 1.0%, 2.0% and 4.0% were cast and tested for permeability and strength after 7, 14, 28 and 60 days of curing. Plain concrete samples were also cast and tested for reference purposes.

Permeability was observed to decrease significantly with the addition of steel fibres and continued to decrease with increasing fibre content and increasing curing age. An exponential relationship was observed between permeability and the compressive and split tensile strengths for SFRC as well as PCC. To evaluate the effect of fibre content on the permeability and strength characteristics, the Analysis of Variance (ANOVA) statistical method was used, with an α level (probability of error) of 0.05. Regression analysis was carried out to develop relationships between permeability, compressive strength and curing age.

Keywords: Permeability, grade of concrete, fibre shape, fibre content, curing age, steady state, Darcy’s law, method of penetration.

30 Peak Data Rate Enhancement Using Switched Micro-Macro Diversity in Cellular Multiple-Input-Multiple-Output Systems

Authors: Jihad S. Daba, J. P. Dubois, Yvette Antar

Abstract:

With the exponential growth of cellular users, a new generation of cellular networks is needed to enhance the required peak data rates. The co-channel interference between neighboring base stations inhibits peak data rate increase. To overcome this interference, multi-cell cooperation known as coordinated multipoint transmission is proposed. Such a solution makes use of multiple-input-multiple-output (MIMO) systems under two different structures: Micro- and macro-diversity. In this paper, we study the capacity and bit error rate in cellular networks using MIMO technology. We analyse both micro- and macro-diversity schemes and develop a hybrid model that switches between macro- and micro-diversity in the case of hard handoff based on a cut-off range of signal-to-noise ratio values. We conclude that our hybrid switched micro-macro MIMO system outperforms classical MIMO systems at the cost of increased hardware and software complexity.
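
The baseline quantity in such a study is the ergodic MIMO capacity, C = E[log2 det(I + (SNR/Nt)·H·Hᴴ)], estimated by Monte Carlo over channel realizations. The sketch below uses this textbook expression with an i.i.d. Rayleigh channel; the final if/else only illustrates the idea of an SNR cut-off for switching between macro- and micro-diversity, with hypothetical threshold values, and is not the paper's exact rule.

```python
# Ergodic MIMO capacity by Monte Carlo, plus a hypothetical SNR-based mode switch.
import numpy as np
rng = np.random.default_rng(4)

def ergodic_capacity(nt=2, nr=2, snr_db=10.0, trials=5000):
    snr = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
        caps.append(np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real))
    return float(np.mean(caps))

print(f"ergodic capacity ~ {ergodic_capacity():.2f} bit/s/Hz")

snr_db_measured, cutoff_db = 3.0, 5.0        # placeholder values
mode = "macro-diversity" if snr_db_measured < cutoff_db else "micro-diversity"
print("selected mode:", mode)
```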

Keywords: Cooperative multipoint transmission, ergodic capacity, hard handoff, macro-diversity, micro-diversity, multiple-input-multiple-output systems, MIMO, orthogonal frequency division multiplexing, OFDM.

29 Increase of Organization in Complex Systems

Authors: Georgi Yordanov Georgiev, Michael Daly, Erin Gombos, Amrit Vinod, Gajinder Hoonjan

Abstract:

Measures of complexity and entropy have not converged to a single quantitative description of levels of organization of complex systems. The need for such a measure is increasingly necessary in all disciplines studying complex systems. To address this problem, starting from the most fundamental principle in Physics, here a new measure for quantity of organization and rate of self-organization in complex systems based on the principle of least (stationary) action is applied to a model system - the central processing unit (CPU) of computers. The quantity of organization for several generations of CPUs shows a double exponential rate of change of organization with time. The exact functional dependence has a fine, S-shaped structure, revealing some of the mechanisms of self-organization. The principle of least action helps to explain the mechanism of increase of organization through quantity accumulation and constraint and curvature minimization with an attractor, the least average sum of actions of all elements and for all motions. This approach can help describe, quantify, measure, manage, design and predict future behavior of complex systems to achieve the highest rates of self organization to improve their quality. It can be applied to other complex systems from Physics, Chemistry, Biology, Ecology, Economics, Cities, network theory and others where complex systems are present.

Keywords: Organization, self-organization, complex system, complexification, quantitative measure, principle of least action, principle of stationary action, attractor, progressive development, acceleration, stochastic.

28 Flow-Through Supercritical Installation for Producing Biodiesel Fuel

Authors: Y. A. Shapovalov, F. M. Gumerov, M. K. Nauryzbaev, S. V. Mazanov, R. A. Usmanov, A. V. Klinov, L. K. Safiullina, S. A. Soshin

Abstract:

A flow-through installation was created and manufactured for the transesterification of triglycerides of fatty acids and the production of biodiesel fuel under supercritical fluid conditions. Transesterification of rapeseed oil with ethanol was carried out by varying two parameters, temperature and the alcohol/oil ratio, at a constant pressure of 19 MPa. The kinetics of the yield of fatty acid ethyl esters (FAEE) was determined in the temperature range of 320-380 °C at alcohol/oil molar ratios of 6:1-20:1. The content of the formed FAEE was determined by correlating it with the kinematic viscosity of the resulting biodiesel fuel. The maximum FAEE yield (about 90%) was obtained within 30 min at an ethanol/oil molar ratio of 12:1 and a temperature of 380 °C. In studying the transesterification of triglycerides, a kinetic model of an isothermal flow reactor was used, and the reaction order implemented in the flow reactor was determined. The first order of the reaction was confirmed by data on the conversion to FAEE during the reaction at different temperatures and molar ratios of the initial reagents (ethanol/oil). Using the Arrhenius equation, the values of the effective rate constants of the transesterification reaction were calculated at different reaction temperatures. In addition, based on the experimental data, the activation energy and the pre-exponential factor of the transesterification reaction were determined.
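
The last analysis step, extracting the activation energy and pre-exponential factor, is a straight-line fit of ln(k) against 1/T from the Arrhenius equation k = A·exp(-Ea/(R·T)). The rate constants in the sketch below are invented placeholders, not the paper's values.

```python
# Arrhenius fit: slope of ln(k) vs 1/T gives -Ea/R, intercept gives ln(A).
import numpy as np

R = 8.314                                   # J/(mol*K)
T = np.array([593.0, 623.0, 653.0])         # reaction temperatures in K (320-380 C)
k = np.array([0.05, 0.12, 0.27])            # effective rate constants, 1/min (placeholders)

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R                             # activation energy, J/mol
A  = np.exp(intercept)                      # pre-exponential factor
print(f"Ea ~ {Ea/1000:.1f} kJ/mol, A ~ {A:.3e} 1/min")
```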

Keywords: Biodiesel, fatty acid esters, supercritical fluid technology, transesterification.

27 Time/Temperature-Dependent Finite Element Model of Laminated Glass Beams

Authors: Alena Zemanová, Jan Zeman, Michal Šejnoha

Abstract:

The polymer foil used in the manufacturing of laminated glass members behaves in a viscoelastic manner with temperature dependence. This contribution aims at incorporating the time/temperature-dependent behavior of the interlayer into our earlier elastic finite element model for laminated glass beams. The model is based on a refined beam theory: each layer behaves according to the finite-strain shear-deformable formulation by Reissner, and the adjacent layers are connected via Lagrange multipliers ensuring the inter-layer compatibility of a laminated unit. The time/temperature-dependent behavior of the interlayer is accounted for by the generalized Maxwell model and by the time-temperature superposition principle due to Williams, Landel, and Ferry. The resulting system is solved by the Newton method with consistent linearization, and the viscoelastic response is determined incrementally by the exponential algorithm. By comparing the model predictions against available experimental data, we demonstrate that the proposed formulation is reliable and accurately reproduces the behavior of laminated glass units.
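
The Williams-Landel-Ferry step amounts to scaling the relaxation times of the generalized Maxwell chain by a temperature-dependent shift factor a_T. The sketch below uses the commonly quoted "universal" constants and an arbitrary reference temperature as placeholders; a real interlayer material would have its own calibrated values.

```python
# WLF shift factor: log10(a_T) = -C1*(T - T_ref) / (C2 + T - T_ref), T in deg C.
import math

def wlf_shift_factor(T, T_ref=20.0, C1=17.44, C2=51.6):
    return 10.0 ** (-C1 * (T - T_ref) / (C2 + T - T_ref))

for T in (0.0, 20.0, 40.0):
    aT = wlf_shift_factor(T)
    print(f"T = {T:5.1f} C -> a_T = {aT:.3e}  (scaled relaxation time = a_T * tau)")
```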

Keywords: Laminated glass, finite element method, finite-strain Reissner model, Lagrange multipliers, generalized Maxwell model, Williams-Landel-Ferry equation, Newton method.

26 Thermal Analysis of Extrusion Process in Plastic Making

Authors: S. K. Fasogbon, T. M. Oladosu, O. S. Osasuyi

Abstract:

Plastic extrusion has been an important plastic production process since the 19th century. In the plastic extrusion process, wide variation in temperature along the extrudate usually leads to scrap formation on the surface of finished products. To avoid this, there is a need to deeply understand the temperature distribution along the extrudate in the plastic extrusion process. This work developed an analytical model that predicts the temperature distribution over the billet (the polymer melt) along the extrudate during the extrusion process, with the limitation that the polymer in question does not cover biopolymers such as DNA. The model was solved and simulated. Results for two different plastic materials (polyvinyl chloride and polycarbonate), generated using self-developed MATLAB code and commercially developed software (ANSYS), were compared. It was observed that heat is transferred from the billet at the die entry down to the die exit. The plots indicate a natural exponential decay of temperature with time and along the die length, the temperature being 413 K and 474 K for polyvinyl chloride and polycarbonate, respectively, at the entry and 299.3 K and 328.8 K at the exit, when the temperature of the surroundings was 298 K. The extrusion model was validated by comparing the MATLAB code simulation with the commercially available ANSYS simulation, and the results agree favourably. This work concludes that the developed mathematical model and the self-generated MATLAB code are reliable tools for predicting the temperature distribution along the extrudate in the plastic extrusion process.
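
The reported behaviour corresponds to a lumped exponential decay toward the surrounding temperature, T(t) = T_s + (T_0 - T_s)·exp(-t/τ). The sketch below uses the entry and surrounding temperatures quoted in the abstract, but the time constant τ and the time axis are arbitrary placeholders, not values fitted to the authors' model.

```python
# Lumped-parameter exponential cooling of the melt toward the surroundings.
import numpy as np

def melt_temperature(t, T0, T_s=298.0, tau=1.0):
    return T_s + (T0 - T_s) * np.exp(-t / tau)

t = np.linspace(0.0, 5.0, 6)                       # normalized residence time (placeholder)
for name, T0 in (("PVC", 413.0), ("PC", 474.0)):   # entry temperatures from the abstract, K
    print(name, np.round(melt_temperature(t, T0), 1))
```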

Keywords: ANSYS, extrusion process, MATLAB, plastic making, thermal analysis.

25 Study of Human Upper Arm Girth during Elbow Isokinetic Contractions Based on a Smart Circumferential Measuring System

Authors: Xi Wang, Xiaoming Tao, Raymond C. H. So

Abstract:

As one of the convenient and noninvasive sensing approaches, automatic limb girth measurement has been applied to detect the intention behind human motion from muscle deformation. The validity of this sensing approach has been demonstrated by preliminary research but still needs more fundamental study, especially on kinetic contraction modes. Based on novel fabric strain sensors, a soft and smart limb girth measurement system was developed by the authors' group, which can measure the limb girth in motion. Experiments were carried out on elbow isometric flexion and elbow isokinetic flexion (biceps' isokinetic contractions) at 90°/s, 60°/s, and 120°/s for 10 subjects (2 canoeists and 8 ordinary people). After removal of the natural circumferential increments due to elbow position, the joint torque was found to be not uniformly sensitive to the limb circumferential strains, but to decline as the elbow joint angle rises, regardless of the angular speed. Moreover, the maximum joint torque was found to be an exponential function of the joint's angular speed. This research contributes to the application of automatic limb girth measurement during kinetic contractions, and it is useful for predicting the contraction level of voluntary skeletal muscles.

Keywords: Fabric strain sensor, muscle deformation, isokinetic contraction, joint torque, limb girth strain.

24 All Types of Base Pair Substitutions Induced by γ-Rays in Haploid and Diploid Yeast Cells

Authors: Natalia Koltovaya, Nadezhda Zhuchkina, Ksenia Lyubimova

Abstract:

We study the biological effects induced by ionizing radiation in view of therapeutic exposure and the prospect of space flights beyond Earth's magnetosphere. In particular, we examine the differences in base pair substitution induction by ionizing radiation between model haploid and diploid yeast Saccharomyces cerevisiae cells. Such mutations are difficult to study in higher eukaryotic systems. In our research, we have used a collection of six isogenic trp5 strains and 14 isogenic haploid and diploid cyc1 strains that are specific markers of all possible base-pair substitutions. These strains differ from each other only in single base substitutions within codon 50 of the trp5 gene or codon 22 of the cyc1 gene. Different mutation spectra were obtained for the two haploid genetic assays (trp5 and cyc1), and for the same cyc1 genetic system in cells of different ploidy (haploid and diploid). The dose dependence was a linear function in haploid cells and exponential in diploid cells. We suggest that the differences between haploid yeast strains reflect the dependence on sequence context, while the differences between haploid and diploid strains reflect different molecular mechanisms of mutation.

Keywords: Base pair substitutions, γ-rays, haploid and diploid cells, yeast Saccharomyces cerevisiae.

23 Understanding Innovation by Analyzing the Pillars of the Global Competitiveness Index

Authors: Ujjwala Bhand, Mridula Goel

Abstract:

Global Competitiveness Index (GCI) prepared by World Economic Forum has become a benchmark in studying the competitiveness of countries and for understanding the factors that enable competitiveness. Innovation is a key pillar in competitiveness and has the unique property of enabling exponential economic growth. This paper attempts to analyze how the pillars comprising the Global Competitiveness Index affect innovation and whether GDP growth can directly affect innovation outcomes for a country. The key objective of the study is to identify areas on which governments of developing countries can focus policies and programs to improve their country’s innovativeness. We have compiled a panel data set for top innovating countries and large emerging economies called BRICS from 2007-08 to 2014-15 in order to find the significant factors that affect innovation. The results of the regression analysis suggest that government should make policies to improve labor market efficiency, establish sophisticated business networks, provide basic health and primary education to its people and strengthen the quality of higher education and training services in the economy. The achievements of smaller economies on innovation suggest that concerted efforts by governments can counter any size related disadvantage, and in fact can provide greater flexibility and speed in encouraging innovation.

Keywords: Innovation, Global Competitiveness Index, BRICS, economic growth.

22 Enhancing Performance of Bluetooth Piconets Using Priority Scheduling and Exponential Back-Off Mechanism

Authors: Dharmendra Chourishi “Maitraya”, Sridevi Seshadri

Abstract:

Bluetooth is a personal wireless communication technology being applied in many scenarios. It is an emerging standard for short-range, low-cost, low-power wireless access technology. Existing MAC (Medium Access Control) scheduling schemes provide only best-effort service for all master-slave connections. It is very challenging to provide QoS (Quality of Service) support for different connections due to the Master-Driven TDD (Time Division Duplex) feature, and no solution is available that supports both the delay and bandwidth guarantees required by real-time applications. This paper addresses the issue of how to enhance QoS support in a Bluetooth piconet. The Bluetooth specification proposes a Round Robin scheduler as a possible solution for scheduling the transmissions in a Bluetooth piconet. We propose an algorithm which reduces bandwidth waste and enhances the efficiency of the network. We define token counters to estimate the traffic of real-time slaves. To increase bandwidth utilization, a back-off mechanism is then presented for best-effort slaves to decrease the frequency of polling idle slaves. Simulation results demonstrate that our scheme achieves better performance than Round Robin scheduling.
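
The back-off idea for best-effort slaves can be illustrated as follows: an idle slave sees its polling interval doubled up to a cap, and the interval resets as soon as it has data. This is only a sketch of the mechanism described above, with hypothetical interval values; the token counters that protect real-time slaves are not modelled here.

```python
# Exponential back-off of the polling interval for a best-effort slave.
class BestEffortSlave:
    def __init__(self, base_interval=1, max_interval=64):
        self.base = base_interval
        self.max = max_interval
        self.interval = base_interval   # current polling interval, in slot pairs (placeholder unit)
        self.next_poll = 0

    def polled(self, now, had_data):
        if had_data:
            self.interval = self.base                           # reset on activity
        else:
            self.interval = min(self.interval * 2, self.max)    # exponential back-off when idle
        self.next_poll = now + self.interval

slave = BestEffortSlave()
for now, had_data in [(0, False), (2, False), (6, True), (7, False)]:
    slave.polled(now, had_data)
    print(f"t={now}: interval={slave.interval}, next poll at t={slave.next_poll}")
```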

Keywords: Piconet, Medium Access Control, Polling algorithm, Scheduling, QoS, Time Division Duplex (TDD).

21 From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis

Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen

Abstract:

Hepatitis is one of the most common and dangerous diseases that affects humankind, and exposes millions of people to serious health risks every year. Diagnosis of Hepatitis has always been a challenge for physicians. This paper presents an effective method for diagnosis of hepatitis based on interval Type-II fuzzy. This proposed system includes three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the preprocessing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an “indirect approach” is used for fuzzy system modeling by implementing the exponential compactness and separation index for determining the number of rules in the fuzzy clustering approach. Therefore, we first proposed a Type-I fuzzy system that had an accuracy of approximately 90.9%. In the proposed system, the process of diagnosis faces vagueness and uncertainty in the final decision. Thus, the imprecise knowledge was managed by using interval Type-II fuzzy logic. The results that were obtained show that interval Type-II fuzzy has the ability to diagnose hepatitis with an average accuracy of 93.94%. The classification accuracy obtained is the highest one reached thus far. The aforementioned rate of accuracy demonstrates that the Type-II fuzzy system has a better performance in comparison to Type-I and indicates a higher capability of Type-II fuzzy system for modeling uncertainty.

Keywords: Hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection.

20 An Investigation of Performance versus Security in Cognitive Radio Networks with Supporting Cloud Platforms

Authors: Kurniawan D. Irianto, Demetres D. Kouvatsos

Abstract:

The growth of wireless devices affects the availability of limited frequencies, or spectrum bands, since spectrum is a natural resource that cannot be expanded, while licensed frequencies are idle most of the time. Cognitive radio is one of the solutions to these problems. Cognitive radio is a promising technology that allows unlicensed users, known as secondary users (SUs), to access licensed bands without causing interference to licensed users, or primary users (PUs). As cloud computing has become popular in recent years, cognitive radio networks (CRNs) can be integrated with a cloud platform. One of the important issues in CRNs is security. It is a concern because CRNs use radio frequencies as the transmission medium and therefore share the same issues as other wireless communication systems. Another critical issue in CRNs is performance. Security has an adverse effect on performance, and there are trade-offs between them. The goal of this paper is to investigate the performance-security trade-off in CRNs with supporting cloud platforms. Furthermore, queueing network models with preemptive resume and preemptive repeat identical priority are applied in this project to measure the impact of security on performance in CRNs with or without a cloud platform. The generalized exponential (GE) type distribution is used to reflect the bursty inter-arrival and service times at the servers. The results show that the best performance is obtained when security is disabled and the cloud platform is enabled.
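
The GE-type distribution is often described as a mixture: with some probability the interval is zero (a batch arrival) and otherwise it is exponential, which reproduces a chosen mean and a squared coefficient of variation greater than one. The sampler below is a sketch under that commonly used parameterization, with tau = 2/(SCV + 1); the priority queueing network of the paper is not reproduced, and the rate/SCV values are placeholders.

```python
# Sampling bursty inter-arrival times from a GE-type distribution.
import numpy as np
rng = np.random.default_rng(5)

def ge_samples(rate, scv, n):
    tau = 2.0 / (scv + 1.0)
    zero = rng.random(n) > tau                       # zero-length (batch) intervals, prob 1 - tau
    t = rng.exponential(scale=1.0 / (tau * rate), size=n)
    t[zero] = 0.0
    return t

x = ge_samples(rate=1.0, scv=4.0, n=200000)
print(f"mean ~ {x.mean():.3f} (target 1.0), scv ~ {x.var()/x.mean()**2:.2f} (target 4.0)")
```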

Keywords: Cloud Platforms, Cognitive Radio Networks, GE-type Distribution, Performance vs. Security.

19 Statistical Modeling of Local Area Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes

Authors: Jihad S. Daba, J. P. Dubois

Abstract:

Fading noise degrades the performance of cellular communication, most notably in femto- and pico-cells in 3G and 4G systems. When the wireless channel consists of a small number of scattering paths, the statistics of the fading noise are not analytically tractable and pose a serious challenge to developing closed canonical forms that can be analysed and used in the design of efficient and optimal receivers. In this context, the noise is multiplicative and is referred to as stochastically local fading. In many analytical investigations of multiplicative noise, exponential or Gamma statistics are invoked. More recent advances by the author of this paper utilized Poisson-modulated weighted generalized Laguerre polynomials with controlling parameters and uncorrelated-noise assumptions. In this paper, we investigate the statistics of a multi-diversity, stochastically local area fading channel when the channel consists of randomly distributed Rayleigh and Rician scattering centers with a coherent Nakagami-distributed line-of-sight component and an underlying doubly stochastic Poisson process driven by a lognormal intensity. These combined statistics form a unifying triply stochastic filtered marked Poisson point process model.
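
One ingredient of such a model, the doubly stochastic (Cox) Poisson process with lognormal intensity, can be simulated by first drawing the random intensity and then drawing a Poisson number of scatterers with attached marks. The sketch below is only that ingredient with placeholder parameters; the full triply stochastic filtered construction of the paper is not reproduced.

```python
# Cox (doubly stochastic Poisson) scatterers with lognormal intensity and Rayleigh marks.
import numpy as np
rng = np.random.default_rng(6)

def cox_lognormal_points(mu=1.0, sigma=0.5, area=1.0):
    lam = rng.lognormal(mean=mu, sigma=sigma)        # random intensity (points per unit area)
    n = rng.poisson(lam * area)                      # number of scatterers given the intensity
    pts = rng.uniform(0.0, 1.0, size=(n, 2)) * np.sqrt(area)   # scatterer locations
    marks = rng.rayleigh(scale=1.0, size=n)          # e.g. Rayleigh-faded path amplitudes
    return pts, marks

pts, marks = cox_lognormal_points()
print(f"{len(pts)} scatterers, mean path amplitude ~ {(marks.mean() if len(marks) else 0):.2f}")
```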

Keywords: Cellular communication, femto- and pico-cells, stochastically local area fading channel, triply stochastic filtered marked Poisson point process.

18 A Computational Stochastic Modeling Formalism for Biological Networks

Authors: Werner Sandmann, Verena Wolf

Abstract:

Stochastic models of biological networks are well established in systems biology, where the computational treatment of such models is often focused on the solution of the so-called chemical master equation via stochastic simulation algorithms. In contrast, the development of storage-efficient model representations that are directly suitable for computer implementation has received significantly less attention. Instead, a model is usually described in terms of a stochastic process or a "higher-level paradigm" with a graphical representation, such as a stochastic Petri net. A serious problem then arises due to the exponential growth of the model's state space, which is in fact a main reason for the popularity of stochastic simulation, since simulation suffers less from the state space explosion than non-simulative numerical solution techniques. In this paper we present transition class models for the representation of biological network models, a compact mathematical formalism that circumvents state space explosion. Transition class models can also serve as an interface between different higher-level modeling paradigms, stochastic processes and the implementation coded in a programming language. Besides, the compact model representation provides the opportunity to apply non-simulative solution techniques, thereby preserving the possible use of stochastic simulation. Illustrative examples of transition class representations are given for an enzyme-catalyzed substrate conversion and a part of the bacteriophage λ lysis/lysogeny pathway.
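
The enzyme-catalyzed substrate conversion example can be simulated by the standard Gillespie algorithm, where each reaction is described, much like a transition class, by a state-dependent propensity and an integer state-change vector. The sketch below uses placeholder rate constants and copy numbers and is not the authors' formalism itself.

```python
# Gillespie stochastic simulation of E + S <-> ES -> E + P.
import numpy as np
rng = np.random.default_rng(7)

state = np.array([100, 1000, 0, 0])                  # copy numbers [E, S, ES, P] (placeholders)
changes = np.array([[-1, -1, +1, 0],                 # E + S -> ES
                    [+1, +1, -1, 0],                 # ES -> E + S
                    [+1,  0, -1, +1]])               # ES -> E + P
k = np.array([0.001, 0.005, 0.1])                    # rate constants (placeholders)

def propensities(x):
    return np.array([k[0] * x[0] * x[1], k[1] * x[2], k[2] * x[2]])

t, t_end = 0.0, 50.0
while t < t_end:
    a = propensities(state)
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)                   # time to the next reaction
    state = state + changes[rng.choice(3, p=a / a0)] # which reaction fires
print("final [E, S, ES, P] =", state)
```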

Keywords: Computational Modeling, Biological Networks, Stochastic Models, Markov Chains, Transition Class Models.

17 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry

Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich

Abstract:

The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. In order to assess the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, relying only on the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. These interferograms were obtained by means of a Mach-Zehnder interferometer using a beam from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured and analyzed, and its full width at half maximum (FWHM) was measured using the AMCap and ImageJ software. The FWHM values were organized into three groups. It was observed that the average of the FWHMs of group A shows a behavior that is almost linear; therefore, it is probable that the exposure time is not relevant when the oil is kept at constant temperature. Group B exhibits a slightly exponential behavior when the temperature rises between 373 K and 393 K. Results of the Student's t-test show, with 95% confidence (α = 0.05), the existence of variation in the molecular composition of both samples. Furthermore, we found a correlation between the iodine indexes (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
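
The FWHM of a sampled intensity profile can be computed by locating the half-maximum level and measuring the width between the two crossings. The sketch below does this on a synthetic Gaussian profile as test data; it does not use the authors' interferograms.

```python
# FWHM of a sampled intensity profile, with linear interpolation at the crossings.
import numpy as np

def fwhm(x, y):
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i1, i2 = above[0], above[-1]
    # Interpolate the two half-maximum crossings (rising edge, then falling edge).
    x_left  = np.interp(half, [y[i1 - 1], y[i1]], [x[i1 - 1], x[i1]])
    x_right = np.interp(half, [y[i2 + 1], y[i2]], [x[i2 + 1], x[i2]])
    return x_right - x_left

x = np.linspace(-5, 5, 1001)
y = np.exp(-x**2 / (2 * 1.2**2))                     # synthetic Gaussian test profile
print(f"FWHM ~ {fwhm(x, y):.3f} (theory {2.355 * 1.2:.3f})")
```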

Keywords: Food industry, interferometric, oils, quality control.

16 The Effect of Magnetite Particle Size on Methane Production by Fresh and Degassed Anaerobic Sludge

Authors: E. Al-Essa, R. Bello-Mendoza, D. G. Wareham

Abstract:

Anaerobic batch experiments were conducted to investigate the effect of magnetite-supplementation (7 mM) on methane production from digested sludge undergoing two different microbial growth phases, namely fresh sludge (exponential growth phase) and degassed sludge (endogenous decay phase). Three different particle sizes were assessed: small (50 - 150 nm), medium (168 – 490 nm) and large (800 nm - 4.5 µm) particles. Results show that, in the case of the fresh sludge, magnetite significantly enhanced the methane production rate (up to 32%) and reduced the lag phase (by 15% - 41%) as compared to the control, regardless of the particle size used. However, the cumulative methane produced at the end of the incubation was comparable in all treatment and control bottles. In the case of the degassed sludge, only the medium-sized magnetite particles increased significantly the methane production rate (12% higher) as compared to the control. Small and large particles had little effect on the methane production rate but did result in an extended lag phase which led to significantly lower cumulative methane production at the end of the incubation period. These results suggest that magnetite produces a clear and positive effect on methane production only when an active and balanced microbial community is present in the anaerobic digester. It is concluded that, (i) the effect of magnetite particle size on increasing the methane production rate and reducing lag phase duration is strongly influenced by the initial metabolic state of the microbial consortium, and (ii) the particle size would positively affect the methane production if it is provided within the nanometer size range.

Keywords: Anaerobic digestion, iron oxide (Fe3O4), methanogenesis, nanoparticle.

15 Advantages of Large Strands in Precast/Prestressed Concrete Highway Application

Authors: Amin Akhnoukh

Abstract:

The objective of this research is to investigate the advantages of using large-diameter 0.7 inch prestressing strands in pretensioning applications. The advantages of large-diameter strands are mainly beneficial in heavy construction applications. Bridges and tunnels are subjected to higher daily traffic with an exponential increase in trucks' ultimate weight, which raises the demand for higher structural capacity of bridges and tunnels. In this research, precast prestressed I-girders were considered as a case study. Flexural capacities of girders fabricated using 0.7 inch strands and different concrete strengths were calculated and compared to the capacities of 0.6 inch strand girders fabricated using equivalent concrete strength. The effect of bridge deck concrete strength on composite deck-girder section capacity was investigated due to its possible effect on the final section capacity. Finally, a comparison was made between the bridge cross-sections of girders designed using regular 0.6 inch strands and the large-diameter 0.7 inch strands. The research findings showed that the structural advantages of 0.7 inch strands allow for using fewer bridge girders, reduced material quantity, and lighter-weight members. The structural advantages of 0.7 inch strands are maximized when high-strength concrete (HSC) is used in girder fabrication and concrete of at least 5 ksi compressive strength is used in pouring the bridge decks. The use of 0.7 inch strands in the bridge industry can partially contribute to the improvement of bridge conditions, minimize construction cost, and reduce the construction duration of the project.

Keywords: 0.7 Inch Strands, I-Girders, Pretension, Flexure Capacity

14 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase

Authors: A. Lauvray, F. Poulhaon, P. Michaud, P. Joyot, E. Duc

Abstract:

Additive Friction Stir Manufacturing, or AFSM, is a new industrial process that follows the emergence of friction-based processes. The AFSM process is a solid-state additive process using the energy produced by the friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as axial force, rotation speed or friction coefficient. The feeder material is a metallic rod that flows through a hole in the tool. There is still a lack of understanding of the physical phenomena taking place during the process. This research aims at a better understanding and implementation of the AFSM process, thanks to numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way of studying the influence of the process parameters and, finally, of identifying a relevant process window. The deposition of material through the AFSM process takes place in several phases; in chronological order, these phases are the docking phase, the dwell-time phase, the deposition phase, and the removal phase. The present work focuses on the dwell-time phase, which produces the temperature rise of the system through pure friction. An analytical model of frictional heat generation takes the rotational speed and the contact pressure as its main parameters. Another influential parameter is the friction coefficient, which is assumed to be variable due to the self-lubrication of the system as temperature rises, or to the smoothing of the roughness of the materials in contact over time. Through numerical modeling followed by experimental validation, this study examines the influence of the various input parameters on the dwell-time phase. Rotation speed, temperature, spindle torque and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of the input parameters such as axial force and rotational speed, strongly influence the temperature reached and/or the time required to reach the target temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
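
A widely used sliding-friction estimate for the dwell-phase heat input under a flat circular tool integrates the friction shear stress (μ·p) times the local velocity (ω·r) over the contact, giving Q = (2/3)·π·μ·p·ω·R³. This is a generic estimate, not necessarily the authors' exact model; the values below are placeholders, and in practice μ would be temperature-dependent.

```python
# Frictional heat generation under a flat circular tool of radius R during dwell.
import math

def frictional_heat(mu, p, omega_rpm, R):
    omega = omega_rpm * 2.0 * math.pi / 60.0              # rad/s
    return (2.0 / 3.0) * math.pi * mu * p * omega * R**3  # W

Q = frictional_heat(mu=0.3, p=50e6, omega_rpm=400, R=0.008)   # 50 MPa, 400 rpm, 8 mm tool (placeholders)
print(f"heat input ~ {Q:.0f} W")
```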

Keywords: Numerical model, additive manufacturing, frictional heat generation, process.

13 Simplified Stress Gradient Method for Stress-Intensity Factor Determination

Authors: Jeries J. Abou-Hanna

Abstract:

Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. In addition to yielding overly conservative results, numerical methods that require extensive computational effort, and those requiring copious user parameters, hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the use of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum-stress-value approach, but requires the use of a critical volume in which the crack exists. In order to understand the effectiveness of this technique, this study investigated components of different notch geometry and varying levels of stress gradient. Two forms of weighting functions were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the “exponential” weighting function was superior to the “absolute” weighting function. An error band of ±10% was met for cases ranging from a steep stress gradient in a sharp V-notch to the less severe stress transitions of a large circular notch. The incorporation of the proposed method has been shown to be a worthwhile consideration.
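
The general idea of an exponential weighting function can be illustrated as an effective stress formed by averaging the decaying stress ahead of the notch rather than taking the single peak value. The weight w(x) = exp(-x/L), the length scale L and the stress profile in the sketch below are assumptions for illustration only; they are not the paper's calibrated forms or its stress-intensity formula.

```python
# Illustrative weighted-average "effective" stress ahead of a notch root.
import numpy as np

x = np.linspace(0.0, 2.0, 401)                 # distance ahead of the notch root, mm
sigma = 300.0 * (1.0 + 1.0 / (1.0 + 8.0 * x))  # a made-up decaying stress profile, MPa

def effective_stress(x, sigma, L=0.2):
    w = np.exp(-x / L)                          # exponential weighting function (assumed form)
    return float((sigma * w).sum() / w.sum())   # discrete weighted average

print(f"peak stress      = {sigma[0]:.1f} MPa")
print(f"effective stress = {effective_stress(x, sigma):.1f} MPa")
```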

Keywords: Fracture mechanics, finite element method, stress intensity factor, stress gradient.

12 The Difficulties Witnessed by People with Intellectual Disability in Transition to Work in Saudi Arabia

Authors: Adel S. Alanazi

Abstract:

The transition of a student with a disability from school to work is the most crucial phase in the move from adolescence into early adulthood. In this process, young individuals face various difficulties and challenges in order to accomplish the next venture of life successfully. In this respect, this paper aims to examine the challenges encountered by individuals with intellectual disabilities in the transition to work in Saudi Arabia. For this purpose, the study adopted a qualitative methodology, following an interpretivist philosophy with an inductive approach and an exploratory research design. The data for the research were gathered with the help of semi-structured interviews, whose findings were analysed using thematic analysis. Semi-structured interviews were conducted with parents of persons with intellectual disabilities, officials, supervisors and specialists of two vocational rehabilitation centres providing training to intellectually disabled students, as well as directors of companies and websites involved in hiring those individuals. The total number of respondents for the interviews was 15. The purposive sampling method was used to select the respondents; this is a non-probability sampling method which draws respondents from a known population and allows flexibility and suitability in selecting the participants for the study. The findings gathered from the interviews revealed that a lack of awareness among parents regarding the rights of their intellectually disabled children, a lack of adequate communication and coordination between the various entities, and concerns regarding training and subsequent employment are the key difficulties experienced by individuals with intellectual disabilities. Training in programmes such as bookbinding, carpentry, computing, agriculture, electricity and telephone exchange operations was involved as the key training provision. The findings of this study also revealed that information technology and media play a significant role in smoothing the transition to employment for individuals with intellectual disabilities. Furthermore, religious and cultural attitudes were identified as restricting people with such disabilities from taking advantage of job opportunities. On the basis of these findings, the information gathered through this study should be highly beneficial for Saudi Arabian schools and rehabilitation centres for individuals with intellectual disability, helping them overcome the problems they encounter during the transition to work.

Keywords: Intellectual disability, transition services, rehabilitation centre.
