Search results for: random copolymers
1877 Cyber Security Enhancement via Software Defined Pseudo-Random Private IP Address Hopping
Authors: Andre Slonopas, Zona Kostic, Warren Thompson
Abstract:
Obfuscation is one of the most useful tools to prevent network compromise. Previous research focused on the obfuscation of the network communications between external-facing edge devices. This work proposes the use of two edge devices, external and internal facing, which communicate via private IPv4 addresses in a software-defined pseudo-random IP hopping. This methodology does not require additional IP addresses and/or resources to implement. Statistical analyses demonstrate that the hopping surface must be at least 1e3 IP addresses in size with a broad standard deviation to minimize the possibility of coincidence of monitored and communication IPs. The probability of breaking the hopping algorithm requires a collection of at least 1e6 samples, which for large hopping surfaces will take years to collect. The probability of dropped packets is controlled via memory buffers and the frequency of hops and can be reduced to levels acceptable for video streaming. This methodology provides an impenetrable layer of security ideal for information and supervisory control and data acquisition systems.
Keywords: moving target defense, cybersecurity, network security, hopping randomization, software defined network, network security theory
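The shared-surface hopping idea can be sketched in a few lines. This is an illustrative Python sketch, not the authors' implementation: the function name, the shared seed, and the hypothetical 10.0.0.0/16 hopping surface are assumptions for demonstration only.

```python
import ipaddress
import random

def hop_sequence(seed, n_hops, network="10.0.0.0/16"):
    """Derive a pseudo-random sequence of private IPv4 addresses.
    Both edge devices hold the same secret seed, so each can compute
    the identical hop schedule without provisioning extra addresses."""
    net = ipaddress.ip_network(network)
    rng = random.Random(seed)            # shared secret seed
    n_hosts = net.num_addresses - 2      # exclude network/broadcast
    return [str(net.network_address + 1 + rng.randrange(n_hosts))
            for _ in range(n_hops)]

# Both endpoints derive identical schedules from the shared seed;
# a /16 surface offers ~6.5e4 addresses, well above the 1e3 minimum.
internal = hop_sequence(seed=2024, n_hops=5)
external = hop_sequence(seed=2024, n_hops=5)
print(internal == external)  # True
```

An observer without the seed sees only an apparently random walk over the private address space.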
Procedia PDF Downloads 185
1876 Cellular Uptake and Endocytosis of Doxorubicin Loaded Methoxy Poly (Ethylene Glycol)-Block-Poly (Glutamic Acid) [DOX/mPEG-b-PLG] Nanoparticles against Human Breast Cancer Cell Lines
Authors: Zaheer Ahmad, Afzal Shah
Abstract:
pH-responsive block copolymers consisting of mPEG and glutamic acid units were synthesized in different formulations. The synthesized polymers were structurally investigated. Doxorubicin hydrochloride (DOX-HCl), a chemotherapy medication for the treatment of cancer, was selected. DOX-HCl was loaded, and the drug loading content and drug loading efficiency were determined. The nanocarriers obtained were small, well shaped, and slightly negative in surface charge. The release study was carried out at both pH 7.4 and 5.5 and revealed that the release was sustained and controlled, with no initial burst release. The in vitro release study was further carried out for different formulations with different glutamic acid moieties. Time-dependent cell proliferation inhibition of the free drug and drug-loaded nanoparticles against the human breast cancer cell lines MCF-7 and ZR-75-30 was observed. Cellular uptake and endocytosis were investigated by confocal laser scanning microscopy (CLSM) and flow cytometry. The biocompatibility, optimum size, shape, and surface charge of the developed nanoparticles make them an efficient drug delivery carrier.
Keywords: doxorubicin, glutamic acid, cell proliferation inhibition, breast cancer cell
Procedia PDF Downloads 143
1875 Unified Assessment of Power System Reserve-based Reliability Levels
Authors: B. M. Alshammari, M. A. El-Kady
Abstract:
This paper presents a unified framework for the assessment of reserve-based reliability levels in electric power systems. The unified approach is based on reserve-based analysis and assessment of the relationship between available generation capacities and required demand levels. The developed approach takes into account load variations as well as contingencies, which occur randomly, causing some generation and/or transmission capacities to be lost (become unavailable). The calculated reserve-based indices, which are important for assessing the reserve capabilities of the power system under various operating scenarios, are therefore probabilistic in nature. They reflect the fact that neither the load levels nor the generation or transmission capacities are known with absolute certainty; rather, they are subject to random variations. Consequently, the calculated reserve-based reliability indices are themselves subject to random variations, and only expected values of these indices can be evaluated. This paper presents a unified approach to reserve-based reliability assessment of power systems using various reserve assessment criteria. Practical applications to the Saudi electricity power grid are also presented for demonstration purposes.
Keywords: assessment, power system, reserve, reliability
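The probabilistic relationship between available capacity and demand can be illustrated with a toy capacity-outage calculation. This is a generic sketch, not the paper's framework: the three-unit system, its forced-outage rates, and the demand level are invented for demonstration.

```python
from itertools import product

def loss_of_load_prob(units, demand):
    """Illustrative reserve-based index: the probability that randomly
    available generation falls short of demand. Each unit is a
    (capacity, forced_outage_rate) pair; unit up/down states are
    enumerated exhaustively, which is fine for a handful of units."""
    p_short = 0.0
    for states in product([0, 1], repeat=len(units)):
        p, cap = 1.0, 0.0
        for up, (c, outage_rate) in zip(states, units):
            p *= (1 - outage_rate) if up else outage_rate
            cap += c if up else 0.0
        if cap < demand:
            p_short += p
    return p_short

units = [(100, 0.02), (100, 0.02), (50, 0.05)]  # MW, forced-outage rate
print(round(loss_of_load_prob(units, demand=150), 5))
```

The expected value of such an index, over load scenarios, is the kind of reserve-based reliability level the paper assesses.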
Procedia PDF Downloads 617
1874 Design of Enhanced Adaptive Filter for Integrated Navigation System of FOG-SINS and Star Tracker
Authors: Nassim Bessaad, Qilian Bao, Zhao Jiangkang
Abstract:
The fiber optic gyroscope in the strap-down inertial navigation system (FOG-SINS) suffers from precision degradation due to the influence of random errors. In this work, an enhanced Allan variance (AV) stochastic modeling method combined with discrete wavelet transform (DWT) signal denoising is implemented to estimate the random processes in the FOG signal. Furthermore, we devise a measurement-based iterative adaptive Sage-Husa nonlinear filter with augmented states to integrate a star tracker sensor with SINS. The proposed filter adapts the measurement noise covariance matrix based on the available data. Moreover, the enhanced stochastic modeling scheme is used to tune the process noise covariance matrix and the augmented-state Gauss-Markov process parameters. Finally, the effectiveness of the proposed filter is investigated using data collected under laboratory conditions. The results show the filter's improved accuracy in comparison with the conventional Kalman filter (CKF).
Keywords: inertial navigation, adaptive filtering, star tracker, FOG
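The Allan variance named above has a simple non-overlapping form that can be written down directly. This is an illustrative sketch of the basic statistic on synthetic white noise, not the authors' enhanced AV method.

```python
import random

def allan_variance(y, m):
    """Non-overlapping Allan variance at cluster size m: average the
    signal in consecutive clusters of m samples, then take half the
    mean squared difference of adjacent cluster averages."""
    k = len(y) // m
    means = [sum(y[i * m:(i + 1) * m]) / m for i in range(k)]
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(k - 1)]
    return sum(diffs) / (2 * (k - 1))

rng = random.Random(1)
white = [rng.gauss(0, 1) for _ in range(10000)]
# For white (angle random walk-like) noise, AVAR falls roughly as 1/m,
# which is the signature used to identify the random process.
a1, a2 = allan_variance(white, 10), allan_variance(white, 100)
print(a2 < a1)  # True
```

Plotting AVAR against m on log-log axes gives the slopes from which the Gauss-Markov and other noise parameters are read off.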
Procedia PDF Downloads 80
1873 An Innovative Auditory Impulsed EEG and Neural Network Based Biometric Identification System
Authors: Ritesh Kumar, Gitanjali Chhetri, Mandira Bhatia, Mohit Mishra, Abhijith Bailur, Abhinav
Abstract:
The prevalence of the internet and technology in our day-to-day lives is creating more security issues than ever. The need to protect and provide secure access to private and business data has led to the development of many security systems. One of the potential solutions is to employ a biometric authentication technique. In this paper, we present an innovative biometric authentication method that utilizes a person's EEG signal, which is acquired in response to an auditory stimulus and transferred wirelessly to a computer running the necessary ANN algorithm: a multilayer perceptron neural network, chosen for its ability to differentiate between information that is not linearly separable. In order to determine the weights of the hidden layer, we use Gaussian random weight initialization. The MLP utilizes a supervised learning technique called backpropagation for training the network. The complex algorithm used for EEG classification reduces the chances of intrusion into the protected public or private data.
Keywords: EEG signal, auditory evoked potential, biometrics, multilayer perceptron neural network, back propagation rule, Gaussian random weight initialization
Procedia PDF Downloads 409
1872 Intelligent Chemistry Approach to Improvement of Oxygenates Analytical Method in Light Hydrocarbon by Multidimensional Gas Chromatography - FID and MS
Authors: Ahmed Aboforn
Abstract:
Butene-1 is effectively an important raw material in polyethylene production; however, existing oxygenate impurities affect ethylene/butene-1 copolymers synthesized over titanium-magnesium-supported Ziegler-Natta catalysts. Petrochemical industries also contend with poor-quality butene-1 and other C4-mix feedstocks, which translates into business impact and production losses. In addition, propylene product suffers from contamination by oxygenate components, causing production losses and plant upsets in polypropylene process plants. Multidimensional gas chromatography (MDGC) is an innovative analytical technique used to separate complex samples, such as mixtures of different functional groups (hydrocarbon and oxygenate compounds) with similar retention factors, by running the eluent through two or more columns instead of the customary single column. This analytical study strives to enhance the quality of the oxygenate analytical method, monitoring the concentration of oxygenates accurately and precisely by utilizing a multidimensional GC supported by a backflush technique and a flame ionization detector, which together provide high-performance separation of hydrocarbons and oxygenates, and by improving the minimum detection limit (MDL) to detect concentrations <1.0 ppm. Different types of oxygenates (alcohols, aldehydes, ketones, esters, and ethers) may also be determined in other hydrocarbon streams, from C3 and C4-mix up to C12 mixtures, supported by a liquid-injection autosampler.
Keywords: analytical chemistry, gas chromatography, petrochemicals, oxygenates
Procedia PDF Downloads 83
1871 Estimation of Population Mean Using Characteristics of Poisson Distribution: An Application to Earthquake Data
Authors: Prayas Sharma
Abstract:
This paper proposes a generalized class of estimators, an exponential class of estimators based on the adaptation of Sharma and Singh (2015) and Solanki and Singh (2013), and a simple difference estimator for estimating the unknown population mean in the case of a Poisson distributed population under simple random sampling without replacement. The expressions for the mean square errors of the proposed classes of estimators are derived to the first order of approximation. It is shown that the adapted version of Solanki and Singh (2013), the exponential class of estimators, is always more efficient than the usual estimator, the ratio, product, exponential ratio, and exponential product type estimators, and equally efficient to the simple difference estimator. Moreover, the adapted version of Sharma and Singh's (2015) estimator is always more efficient than all the estimators available in the literature. In addition, the theoretical findings are supported by an empirical study showing the superiority of the constructed estimators over others, with an application to earthquake data from Turkey.
Keywords: auxiliary attribute, point bi-serial, mean square error, simple random sampling, Poisson distribution
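The gain from a simple difference estimator over the plain sample mean can be seen by simulation. This is a rough sketch under invented parameters (a Poisson study variable, a strongly correlated auxiliary variable, and a difference coefficient fixed at 1), not the paper's estimators.

```python
import math
import random
import statistics

def rpois(rng, lam):
    """Knuth's Poisson sampler."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def simulate(N=2000, n=100, reps=2000, lam=4.0, seed=1):
    """Compare the MSE of the usual sample mean against the simple
    difference estimator y_bar + k*(X_bar - x_bar), which exploits a
    correlated auxiliary variable, under SRSWOR from a Poisson population."""
    rng = random.Random(seed)
    pop_y = [rpois(rng, lam) for _ in range(N)]
    pop_x = [y + rng.gauss(0, 0.5) for y in pop_y]   # auxiliary variable
    Ybar, Xbar = statistics.mean(pop_y), statistics.mean(pop_x)
    k = 1.0                                          # assumed known here
    err_mean = err_diff = 0.0
    for _ in range(reps):
        idx = rng.sample(range(N), n)                # SRSWOR
        yb = statistics.mean(pop_y[i] for i in idx)
        xb = statistics.mean(pop_x[i] for i in idx)
        err_mean += (yb - Ybar) ** 2
        err_diff += (yb + k * (Xbar - xb) - Ybar) ** 2
    return err_mean / reps, err_diff / reps

mse_mean, mse_diff = simulate()
print(mse_diff < mse_mean)  # True: auxiliary information reduces the MSE
```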
Procedia PDF Downloads 156
1870 The Effect of Cognitive Restructuring and Assertive Training on Improvement of Sexual Behavior of Secondary School Adolescents in Nigeria
Authors: Azu Kalu Oko, Ugboaku Nwanpka
Abstract:
The study investigated the effect of cognitive restructuring and assertive training on the improvement of sexual behavior of secondary school adolescents in Nigeria. To guide the study, three research questions and four hypotheses were formulated. The study featured a 2×3 factorial design with a sample of 48 male and female students selected by random sampling using a table of random sample numbers. The three groups were assertive training, cognitive restructuring, and a control group. The study identified adolescents with deviant sexual behavior using the Students Sexual Behavior Inventory (S.S.B.I.) as the research instrument. ANCOVA and t-test statistics were used to analyze the data. The findings revealed that: I. Assertive training and cognitive restructuring significantly improved the sexual behavior of subjects at post-test when compared with the control group. II. The treatment gains made by the two techniques were sustained at the one-month follow-up interval. III. Cognitive restructuring was more effective than assertiveness training in the improvement of the sexual behavior of students. Implications for education, psychotherapy, and counseling were highlighted.
Keywords: cognitive restructuring, assertiveness training, adolescents, sexual behavior
Procedia PDF Downloads 587
1869 The Staff Performance Efficiency of the Faculty of Management Science, Suan Sunandha Rajabhat University
Authors: Nipawan Tharasak, Ladda Hirunyava
Abstract:
The objective of the research was to study the factors affecting working efficiency and the relationship between the working environment, satisfaction with human resources management, and the working efficiency of operational employees of the Faculty of Management Science, Suan Sunandha Rajabhat University. The sample of the research comprised 33 employees of the Faculty of Management Science. The researcher classified the support employees into 4 divisions using stratified random sampling; individual samples were then randomized using simple random sampling. Data were collected through the research instrument. The Statistical Package for Windows was utilized for data processing. Percentage, mean, standard deviation, the t-test, one-way ANOVA, and the Pearson product-moment correlation coefficient were applied. The results found that the support employees' satisfaction with the human resources management of the Faculty of Management Science in the following areas: remuneration; employee recruitment and selection; manpower planning; performance evaluation; staff training and development; and spirit and fairness, was overall at a good level.
Keywords: faculty of management science, operational factors, practice performance, staff working
Procedia PDF Downloads 235
1868 The Effect of Penalizing Wrong Answers in the Computerized Modified Multiple Choice Testing System
Authors: Min Hae Song, Jooyong Park
Abstract:
Even though assessment using information and communication technology will most likely lead the future of educational assessment, there is little research on this topic. Computerized assessment will not only cut costs but also measure students' performance in ways not possible before. In this context, this study introduces a tool which can overcome the problems of multiple-choice tests. Multiple-choice (MC) tests are efficient for automatic grading; however, their structural problems allow students to find the correct answer from the options even when they do not know the answer. A computerized modified multiple-choice testing system (CMMT) was developed using the interactivity of computers; it presents questions first, and options later for a short time when the student requests them. This study was conducted to find out whether penalizing wrong answers in CMMT could lower random guessing. We checked whether students knew the answers by having them respond to short-answer tests before choosing the given options in the CMMT or MC format. Ninety-four students were tested with the directions that they would be penalized for wrong answers, but not for no response. There were 4 experimental conditions: two conditions of high or low penalty percentage, each in the traditional multiple-choice or CMMT format. In the low penalty condition, the penalty rate was the probability of getting the correct answer by random guessing. In the high penalty condition, students were penalized at twice the percentage of the low penalty condition. The results showed that the number of no responses was significantly higher for the CMMT format and the number of random guesses was significantly lower for the CMMT format. There were no significant differences between the two penalty conditions. This result may be due to the fact that the actual score difference between the two conditions was too small. In the discussion, the possibility of applying the CMMT format while penalizing wrong answers in actual testing settings was addressed.
Keywords: computerized modified multiple choice test format, multiple-choice test format, penalizing, test format
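The two penalty conditions can be written down as a small scoring rule. This is an illustrative sketch only; the exact scoring formula, test length, and option count used in the study are not reproduced here.

```python
def score(n_right, n_wrong, n_blank, n_options=4, penalty_factor=1.0):
    """Illustrative scoring rule: wrong answers are penalized, blanks
    (n_blank) contribute nothing. With penalty_factor=1 the penalty per
    wrong answer equals the chance of guessing correctly (1/k), as in
    the low-penalty condition; the high-penalty condition doubles it."""
    penalty = penalty_factor / n_options
    return n_right - penalty * n_wrong

# A pure guesser on 40 four-option items expects 10 right and 30 wrong:
low = score(10, 30, 0, penalty_factor=1.0)   # 10 - 30/4 = 2.5
high = score(10, 30, 0, penalty_factor=2.0)  # 10 - 60/4 = -5.0
print(low, high)
```

Under such a rule, skipping (no response) dominates guessing whenever the expected penalty exceeds the expected gain, which is the behavior the CMMT results reflect.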
Procedia PDF Downloads 167
1867 Can We Meet the New Challenges of NonIsocyanates Polyurethanes (NIPU) towards NIPU Foams?
Authors: Adrien Cornille, Marine Blain, Bernard Boutevin, Sylvain Caillol
Abstract:
Generally, linear polyurethanes (PUs) are obtained by the reaction between an oligomeric diol, a short diol as chain extender, and a diisocyanate. However, the use of diisocyanates should be avoided since they are generally very harmful to human health. Therefore, the synthesis of NIPUs (non-isocyanate PUs) by step-growth polymerization of dicyclocarbonates and diamines should be favoured. This method is particularly interesting since no hazardous isocyanates are used. Thus, this reaction, extensively studied by Endo et al., is currently gaining a lot of attention as a substitution route for the synthesis of NIPUs, both from the industrial and academic communities. However, the reactivity of the reaction between amines and cyclic carbonates is a major scientific issue, since cyclic carbonates are poorly reactive. Thus, our team developed several synthetic routes to various di-cyclic carbonates based on C5-, C6- and dithio-cyclic carbonates, from different biobased raw materials (glycerin, isosorbide, vegetable oils…). These monomers were used to synthesize NIPUs with various mechanical and thermal properties for various applications. We studied the reactivity of the reaction with various catalysts and found optimized conditions for room-temperature reaction. We also studied the radical copolymerization of cyclic carbonate monomers into styrene-acrylate copolymers for coating applications. We also succeeded in the elaboration of biobased NIPU flexible foams. To the best of our knowledge, there is no report in the literature on the preparation of non-isocyanate polyurethane foams.
Keywords: foam, nonisocyanate polyurethane, cyclic carbonate, blowing agent, scanning electron microscopy
Procedia PDF Downloads 232
1866 Infrastructural Investment and Economic Growth in Indian States: A Panel Data Analysis
Authors: Jonardan Koner, Basabi Bhattacharya, Avinash Purandare
Abstract:
The study is focused on finding out the impact of infrastructural investment on economic development in Indian states. The study uses panel data analysis to measure the impact of infrastructural investment on Real Gross Domestic Product in Indian states. The panel data analysis incorporates the unit root test, cointegration test, pooled ordinary least squares, the fixed effect approach, the random effect approach, and the Hausman test. The study analyzes panel data (annual in frequency) ranging from 1991 to 2012 and concludes that infrastructural investment has a desirable impact on economic development in India. Finally, the study reveals that infrastructural investment significantly explains the variation of the economic indicator.
Keywords: infrastructural investment, real GDP, unit root test, cointegration test, pooled ordinary least squares, fixed effect approach, random effect approach, Hausman test
Procedia PDF Downloads 402
1865 Acoustic Induced Vibration Response Analysis of Honeycomb Panel
Authors: Po-Yuan Tung, Jen-Chueh Kuo, Chia-Ray Chen, Chien-Hsing Li, Kuo-Liang Pan
Abstract:
The main-body structure of a satellite is mainly constructed of lightweight materials and should be able to withstand certain vibration loads during launch. Given the various kinds of possible changes in space, studying the random vibration response of the satellite structure is extremely important work. Based on the reciprocity relationship between sound and structural response, this paper evaluates the dynamic response of the satellite main body under random acoustic load excitation. This paper studies the technical process and verifies the feasibility of sonic-borne vibration analysis. One simple plate exposed to a uniform acoustic field is utilized to obtain some important parameters and to validate the acoustic field model of the reverberation chamber. Both the structure and acoustic field chamber models are then imported into vibro-acoustic coupling analysis software to predict the structural response. During the modeling process, experimental verification is performed to ensure the quality of the numerical models. Finally, the surface vibration level can be calculated through the modal participation factor, and the analysis results are presented as a PSD spectrum.
Keywords: vibration, acoustic, modal, honeycomb panel
Procedia PDF Downloads 555
1864 A Multigrid Approach for Three-Dimensional Inverse Heat Conduction Problems
Authors: Jianhua Zhou, Yuwen Zhang
Abstract:
A two-step multigrid approach is proposed to solve the inverse heat conduction problem in a 3-D object under laser irradiation. In the first step, the location of the laser center is estimated using a coarse and uniform grid system. In the second step, the front-surface temperature is recovered with good accuracy using a multiple grid system in which a fine mesh is used at the laser spot center to capture the drastic temperature rise in this region, while a coarse mesh is employed in the peripheral region to reduce the total number of sensors required. The effectiveness of the two-step approach and the multiple grid system is demonstrated by the illustrative inverse solutions. If the measurement data for the temperature and heat flux on the back surface do not contain random error, the proposed multigrid approach can yield more accurate inverse solutions. When the back-surface measurement data contain random noise, accurate inverse solutions cannot be obtained if both temperature and heat flux are measured on the back surface.
Keywords: conduction, inverse problems, conjugated gradient method, laser
Procedia PDF Downloads 369
1863 The Effect of Institutions on Economic Growth: An Analysis Based on Bayesian Panel Data Estimation
Authors: Mohammad Anwar, Shah Waliullah
Abstract:
This study investigated panel data regression models. This paper used Bayesian and classical methods to study the impact of institutions on economic growth using data from 1990-2014, especially in developing countries. Under the classical and Bayesian methodologies, two panel data models were estimated: common effects and fixed effects. For the Bayesian approach, prior information is used in this paper, and a normal-gamma prior is used for the panel data models. The analysis was done through WinBUGS14 software. The estimated results of the study showed that panel data models are valid models in Bayesian methodology. In the Bayesian approach, all independent variables had positive and significant effects on the dependent variable. Based on the standard errors of all models, the fixed effect model is the best model in the Bayesian estimation of panel data models. It was also shown that the fixed effect model has the lowest standard error compared to the other models.
Keywords: Bayesian approach, common effect, fixed effect, random effect, dynamic random effect model
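The classical counterpart of the fixed-effect model is the within (demeaning) estimator. The single-regressor Python sketch below is illustrative only and does not reproduce the paper's Bayesian estimation; the panel dimensions, intercepts, and slope are synthetic.

```python
import random
from collections import defaultdict

def within_estimator(y, x, groups):
    """One-regressor fixed-effects (within) estimator: demean y and x
    inside each panel unit, then run OLS on the demeaned data. The
    unit-specific intercepts drop out of the regression entirely."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for yi, xi, g in zip(y, x, groups):
        s = sums[g]
        s[0] += yi; s[1] += xi; s[2] += 1
    num = den = 0.0
    for yi, xi, g in zip(y, x, groups):
        sy, sx, n = sums[g]
        yd, xd = yi - sy / n, xi - sx / n
        num += xd * yd
        den += xd * xd
    return num / den

# Simulated panel: 50 units x 10 periods, unit intercepts, true slope 2
rng = random.Random(0)
groups = [g for g in range(50) for _ in range(10)]
alpha = [rng.gauss(0, 5) for _ in range(50)]
x = [rng.gauss(0, 1) for _ in groups]
y = [alpha[g] + 2.0 * xi + rng.gauss(0, 0.1) for g, xi in zip(groups, x)]
print(round(within_estimator(y, x, groups), 3))  # close to the true slope 2.0
```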
Procedia PDF Downloads 68
1862 A Machine Learning Approach for Intelligent Transportation System Management on Urban Roads
Authors: Ashish Dhamaniya, Vineet Jain, Rajesh Chouhan
Abstract:
Traffic management is a gigantic issue on most urban roads in almost all metropolitan cities in India. Speed is one of the critical traffic parameters for effective Intelligent Transportation System (ITS) implementation, as it decides the arrival rate of vehicles at intersections, which are the major points of congestion. The study aimed to leverage Machine Learning (ML) models to produce precise predictions of speed on urban roadway links. The research objective was to assess how categorized traffic volume and road width, serving as variables, influence speed prediction. Four tree-based regression models, namely Decision Tree (DT), Random Forest (RF), Extra Tree (ET), and Extreme Gradient Boost (XGB), are employed for this purpose. The models' performances were validated using test data, and the results demonstrate that Random Forest surpasses the other machine learning techniques and a conventional utility-theory-based model in speed prediction. The study is useful for managing urban roadway network performance under mixed traffic conditions and for effective implementation of ITS.
Keywords: stream speed, urban roads, machine learning, traffic flow
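The tree-based regression idea common to DT, RF, ET, and XGB can be illustrated with its smallest building block, a depth-1 regression stump. This is a didactic sketch on synthetic volume-speed data, not the models or data used in the study.

```python
def fit_stump(x, y):
    """Minimal regression stump (depth-1 decision tree): try every
    split threshold on x and keep the one whose piecewise means
    minimize the total squared error."""
    best = None
    pairs = sorted(zip(x, y))
    for i in range(1, len(pairs)):
        thr = pairs[i][0]
        left = [yy for xx, yy in pairs if xx < thr]
        right = [yy for xx, yy in pairs if xx >= thr]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((yy - ml) ** 2 for yy in left)
               + sum((yy - mr) ** 2 for yy in right))
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    _, thr, ml, mr = best
    return lambda v: ml if v < thr else mr

# Synthetic link data: speed drops once volume crosses a threshold
volume = list(range(100))
speed = [60.0 if v < 40 else 35.0 for v in volume]
predict = fit_stump(volume, speed)
print(predict(10), predict(80))  # 60.0 35.0
```

Ensembles such as Random Forest average many deeper trees of exactly this kind, each fit on a bootstrap sample.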
Procedia PDF Downloads 70
1861 Application of GeoGebra into Teaching and Learning of Linear and Quadratic Equations amongst Senior Secondary School Students in Fagge Local Government Area of Kano State, Nigeria
Authors: Musa Auwal Mamman, S. G. Isa
Abstract:
This study was carried out in order to investigate the effectiveness of GeoGebra software in the teaching and learning of linear and quadratic equations amongst senior secondary school students in Fagge Local Government Area, Kano State, Nigeria. Five research items were raised as objectives, research questions, and hypotheses, respectively. A random sampling method was used in selecting 398 students from a population of 2,098 SS2 students. The experimental group was taught using the GeoGebra software, while the control group was taught using the conventional teaching method. The instrument used for the study was the Mathematics Performance Test (MPT), which was administered at the beginning and at the end of the study. The results of the study revealed that students taught with the GeoGebra software (experimental group) performed better than students taught with the traditional teaching method. The t-test was used to analyze the data obtained from the study.
Keywords: GeoGebra Software, mathematics performance, random sampling, mathematics teaching
Procedia PDF Downloads 247
1860 Progressive Type-I Interval Censoring with Binomial Removal-Estimation and Its Properties
Authors: Sonal Budhiraja, Biswabrata Pradhan
Abstract:
This work considers statistical inference based on progressive Type-I interval censored data with random removal. The scheme of progressive Type-I interval censoring with random removal can be described as follows. Suppose n identical items are placed on a test at time T0 = 0 with k pre-fixed inspection times T1 < T2 < . . . < Tk, where Tk is the scheduled termination time of the experiment. At inspection time Ti, Ri of the remaining surviving units Si are randomly removed from the experiment. The removal follows a binomial distribution with parameters Si and pi for i = 1, . . . , k, with pk = 1. In this censoring scheme, the number of failures in different inspection intervals and the number of randomly removed items at the pre-specified inspection times are observed. Asymptotic properties of the maximum likelihood estimators (MLEs) are established under some regularity conditions. A β-content γ-level tolerance interval (TI) is determined for the two-parameter Weibull lifetime model using the asymptotic properties of the MLEs. The minimum sample size required to achieve the desired β-content γ-level TI is determined. The performance of the MLEs and TI is studied via simulation.
Keywords: asymptotic normality, consistency, regularity conditions, simulation study, tolerance interval
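The censoring scheme described above is easy to simulate. This is an illustrative sketch with exponential lifetimes and invented inspection times and removal probabilities, not the Weibull setup or the simulation design of the paper.

```python
import random

def simulate_scheme(n, inspection_times, p_removal, rng):
    """One run of progressive Type-I interval censoring with binomial
    removal: at each inspection time T_i, count the failures in
    (T_{i-1}, T_i], then remove a Binomial(S_i, p_i) number of the
    S_i survivors; p_k = 1 removes everyone left at T_k."""
    alive = [rng.expovariate(1.0) for _ in range(n)]   # latent lifetimes
    prev = 0.0
    failures, removals = [], []
    for t, p in zip(inspection_times, p_removal):
        d = sum(prev < lt <= t for lt in alive)        # failures in interval
        alive = [lt for lt in alive if lt > t]         # survivors S_i
        r = sum(rng.random() < p for _ in alive)       # binomial removal count
        for idx in sorted(rng.sample(range(len(alive)), r), reverse=True):
            alive.pop(idx)                             # withdraw r survivors
        failures.append(d)
        removals.append(r)
        prev = t
    return failures, removals

rng = random.Random(7)
fails, rems = simulate_scheme(100, [0.5, 1.0, 1.5], [0.3, 0.3, 1.0], rng)
print(sum(fails) + sum(rems))  # 100: every unit fails or is removed
```

Only the interval failure counts and removal counts are observed, which is exactly the likelihood information the MLEs are built from.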
Procedia PDF Downloads 250
1859 Random Variation of Treated Volumes in Fractionated 2D Image Based HDR Brachytherapy for Cervical Cancer
Authors: R. Tudugala, B. M. A. I. Balasooriya, W. M. Ediri Arachchi, R. W. M. W. K. Rathnayake, T. D. Premaratna
Abstract:
Brachytherapy involves placing a source of radiation near the cancer site and offers a promising prognosis for cervical cancer treatment. The purpose of this study was to evaluate the effect of random variation of treated volumes between fractions in 2D image based fractionated high dose rate brachytherapy for cervical cancer at the National Cancer Institute Maharagama, Sri Lanka. Dose plans were analyzed for 150 cervical cancer patients with orthogonal radiograph (2D) based brachytherapy. ICRU treated volumes were modeled by translating the applicators with the help of Multisource HDR plus software. The difference of treated volumes with respect to the applicator geometry was analyzed using SPSS 18 software to derive patient-population-based estimates of delivered treated volumes relative to ideally treated volumes. Packing was evaluated according to bladder dose, rectum dose, and the geometry of the dose distribution by three consultant radiation oncologists. The difference of treated volumes depended on the type of applicator used in fractionated brachytherapy. The mean Difference of Treated Volume (DTV) was -0.48 cm³ for the evenly activated tandem (ET) length group and 11.85 cm³ for the unevenly activated tandem length (UET) group. The range of the DTV was 35.80 cm³ for the ET group, whereas it was 104.80 cm³ for the UET group. A one-sample t-test was performed to compare the DTV with the ideal treated volume difference (0.00 cm³); the p-value was 0.732 for the ET group and 0.00 for the UET group. Moreover, an independent two-sample t-test was performed to compare the ET and UET groups, and the calculated p-value was 0.005. Packing was evaluated under three categories: 59.38% used a convenient packing technique, 33.33% used a fairly convenient packing technique, and 7.29% used a not convenient packing technique in their fractionated brachytherapy treatments. Random variation of treated volume in the ET group is much lower than in the UET group, and there is a significant difference (p<0.05) between the ET and UET groups, which affects the dose distribution of the treatment. Furthermore, it can be concluded that nearly 92.71% of patients' packing used an acceptable packing technique at NCIM, Sri Lanka.
Keywords: brachytherapy, cervical cancer, high dose rate, tandem, treated volumes
Procedia PDF Downloads 201
1858 Inference for Compound Truncated Poisson Lognormal Model with Application to Maximum Precipitation Data
Authors: M. Z. Raqab, Debasis Kundu, M. A. Meraou
Abstract:
In this paper, we have analyzed maximum precipitation data during a particular period of time obtained from different stations in the Global Historical Climatology Network of the USA. One important point to mention is that some stations are shut down on certain days for one reason or another; hence, the maximum values are recorded by excluding those readings. It is assumed that the number of stations that operate follows a zero-truncated Poisson random variable and that the daily precipitation follows a lognormal random variable. We call this model a compound truncated Poisson lognormal model. The proposed model has three unknown parameters, and it can take a variety of shapes. The maximum likelihood estimators can be obtained quite conveniently using the Expectation-Maximization (EM) algorithm. Approximate maximum likelihood estimators are also derived. The associated confidence intervals can also be obtained from the observed Fisher information matrix. Simulations have been performed to check the performance of the EM algorithm, and it is observed that the EM algorithm works quite well in this case. When we analyze the precipitation data set using the proposed model, it is observed that the proposed model provides a better fit than some of the existing models.
Keywords: compound Poisson lognormal distribution, EM algorithm, maximum likelihood estimation, approximate maximum likelihood estimation, Fisher information, skew distribution
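The data-generating mechanism of the compound model is straightforward to simulate. This is an illustrative sketch with invented parameter values (mu, sigma, lam), not the paper's fitted model or estimation procedure.

```python
import math
import random

def sample_max_precip(mu, sigma, lam, rng):
    """One draw from the compound model: the number of operating
    stations is zero-truncated Poisson(lam), each station records a
    lognormal(mu, sigma) precipitation, and only the maximum over the
    operating stations is observed."""
    n = 0
    while n == 0:                        # zero-truncation by rejection
        L, k, p = math.exp(-lam), 0, 1.0
        while p > L:                     # Knuth's Poisson sampler
            k += 1
            p *= rng.random()
        n = k - 1
    return max(rng.lognormvariate(mu, sigma) for _ in range(n))

rng = random.Random(3)
draws = [sample_max_precip(mu=1.0, sigma=0.5, lam=5.0, rng=rng)
         for _ in range(2000)]
mean_single = math.exp(1.0 + 0.5 ** 2 / 2)     # mean of one lognormal
print(sum(draws) / len(draws) > mean_single)   # True: maxima sit higher
```

Taking the maximum over a random number of stations is what skews the observed distribution relative to a single lognormal, which is why the compound model fits better.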
Procedia PDF Downloads 108
1857 Churn Prediction for Savings Bank Customers: A Machine Learning Approach
Authors: Prashant Verma
Abstract:
Commercial banks are facing immense pressure, including financial disintermediation, interest rate volatility, and digital modes of finance. Retaining an existing customer is 5 to 25 times less expensive than acquiring a new one. This paper explores customer churn prediction based on various statistical and machine learning models and uses under-sampling to improve the predictive power of these models. The results show that among the various machine learning models, Random Forest, which predicts churn with 78% accuracy, is the most powerful model for this scenario. Customer vintage, customer's age, average balance, occupation code, population code, average withdrawal amount, and average number of transactions were found to be the variables with high predictive power for the churn prediction model. The model can be deployed by commercial banks in order to avoid customer churn, so that they may retain the funds kept by savings bank (SB) customers. The article suggests a customized campaign to be initiated by commercial banks to avoid SB customer churn. Hence, by giving better customer satisfaction and experience, commercial banks can limit customer churn and maintain their deposits.
Keywords: savings bank, customer churn, customer retention, random forests, machine learning, under-sampling
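The under-sampling step mentioned above can be sketched in a few lines. This is an illustrative sketch on synthetic data, not the paper's pipeline; the 5% churn rate and single dummy feature are invented for demonstration.

```python
import random

def undersample(X, y, rng):
    """Random under-sampling: keep all minority-class (churn) rows and
    draw an equal-sized random subset of the majority class, so the
    classifier is not swamped by non-churners."""
    minority = [i for i, lab in enumerate(y) if lab == 1]
    majority = [i for i, lab in enumerate(y) if lab == 0]
    keep = minority + rng.sample(majority, len(minority))
    rng.shuffle(keep)
    return [X[i] for i in keep], [y[i] for i in keep]

rng = random.Random(0)
X = [[rng.random()] for _ in range(1000)]
y = [1 if i < 50 else 0 for i in range(1000)]   # 5% churners
Xb, yb = undersample(X, y, rng)
print(sum(yb), len(yb))  # 50 100: balanced training set
```

The balanced set is then fed to the classifier (Random Forest in the paper), which otherwise tends to predict the majority "no churn" class for everyone.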
Procedia PDF Downloads 143
1856 Understanding the Thermal Transformation of Random Access Memory Cards: A Pathway to Their Efficient Recycling
Authors: Khushalini N. Ulman, Samane Maroufi, Veena H. Sahajwalla
Abstract:
Globally, electronic waste (e-waste) continues to grow at an alarming rate. Several technologies have been developed to recover valuable materials from e-waste; however, their efficiency can be increased with better knowledge of the e-waste components. Random access memory cards (RAMs) are considered high-value scrap by e-waste recyclers. Despite their high precious metal content, RAMs are still recycled in a conventional manner, resulting in a huge loss of resources. Our research work highlights the precious metal rich components of a RAM. Inductively coupled plasma (ICP) analysis of RAMs from six different generations has been carried out and the trends in their metal content have been investigated. Over the past decade, the copper content of RAMs has halved and their tin content has increased by 70%. Stricter environmental laws have facilitated a ~96% drop in the lead content of RAMs. To elucidate the fundamentals of the thermal transformation of RAMs, our research provides a detailed kinetic study of this process. This can assist e-waste recyclers in optimising their metal recovery processes. Thus, understanding the chemical and thermal behaviour of RAMs can open new avenues for efficient e-waste recycling. Keywords: electronic waste, kinetic study, recycling, thermal transformation
Procedia PDF Downloads 145
1855 Designing Stochastic Non-Invasively Applied DC Pulses to Suppress Tremors in Multiple Sclerosis by Computational Modeling
Authors: Aamna Lawrence, Ashutosh Mishra
Abstract:
Tremors occur in 60% of patients who have Multiple Sclerosis (MS), the most common demyelinating disease affecting the central and peripheral nervous system, and are the primary cause of disability in young adults. While pharmacological agents provide minimal benefit, surgical interventions like Deep Brain Stimulation and thalamotomy are riddled with dangerous complications, which makes non-invasive electrical stimulation an appealing treatment of choice for dealing with tremors. Hence, we hypothesized that if the non-invasive electrical stimulation parameters (mainly frequency) can be computed by mathematically modeling the nerve fibre, taking into consideration the minutest details of the axon morphology, tremors due to demyelination can be optimally alleviated. In this computational study, we modeled the random demyelination pattern that typically manifests in MS using the high-density Hodgkin-Huxley model with suitable modifications to account for the myelin. The internode of the nerve fibre in our model could have up to ten demyelinated regions, each having random length and myelin thickness. The arrival time of action potentials traveling along the demyelinated and the normally myelinated nerve fibres between two fixed points in space was noted, and its relationship with the nerve fibre radius, ranging from 5 µm to 12 µm, was analyzed. Interestingly, there was no overlap between the arrival times of action potentials traversing the demyelinated and the normally myelinated nerve fibres, even when a single internode was demyelinated. The study gave us an opportunity to design DC pulses whose frequency of application would be a function of the random demyelination pattern, so as to block only the delayed tremor-causing action potentials.
The DC pulses could be delivered to the peripheral nervous system non-invasively by an electrode bracelet that would suppress any shakiness beyond it, thus paving the way for wearable neuro-rehabilitative technologies. Keywords: demyelination, Hodgkin-Huxley model, non-invasive electrical stimulation, tremor
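As a minimal building block for such a model, a single-compartment Hodgkin-Huxley membrane with the classic squid-axon parameters can be integrated with forward Euler; the multi-compartment fibre, internode geometry, and myelin modifications used in the study are not reproduced here:

```python
import numpy as np

def hh_spike(I_ext=10.0, T=50.0, dt=0.01):
    """Single-compartment Hodgkin-Huxley membrane (forward Euler, dt in ms).
    Classic squid-axon parameters; a starting point for multi-compartment
    fibre models with per-internode myelin parameters."""
    C = 1.0                             # membrane capacitance, uF/cm^2
    gNa, gK, gL = 120.0, 36.0, 0.3      # max conductances, mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.4    # reversal potentials, mV
    V, m, h, n = -65.0, 0.053, 0.596, 0.317   # resting state

    # Standard rate functions (1/ms) for the gating variables.
    def am(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    def bm(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
    def ah(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
    def bh(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    def an(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    def bn(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

    Vs = []
    for _ in range(int(T / dt)):
        INa = gNa * m**3 * h * (V - ENa)
        IK = gK * n**4 * (V - EK)
        IL = gL * (V - EL)
        V += dt * (I_ext - INa - IK - IL) / C
        m += dt * (am(V) * (1 - m) - bm(V) * m)
        h += dt * (ah(V) * (1 - h) - bh(V) * h)
        n += dt * (an(V) * (1 - n) - bn(V) * n)
        Vs.append(V)
    return np.array(Vs)

V = hh_spike()
```

With a sustained 10 µA/cm² drive, the trace fires action potentials whose timing could be compared across compartments, analogous to the arrival-time analysis above.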
Procedia PDF Downloads 128
1854 Evaluation of Spatial Correlation Length and Karhunen-Loeve Expansion Terms for Predicting Reliability Level of Long-Term Settlement in Soft Soils
Authors: Mehrnaz Alibeikloo, Hadi Khabbaz, Behzad Fatahi
Abstract:
The spectral random field method is one of the most widely used methods to obtain reliable and accurate results in geotechnical problems involving material variability. The Karhunen-Loeve (K-L) expansion method was applied to perform random field discretization of cross-correlated creep parameters. The Karhunen-Loeve expansion is based on the eigenfunctions and eigenvalues of the covariance function, adopting a kernel integral solution. In this paper, the accuracy of the Karhunen-Loeve expansion was investigated for predicting the long-term settlement of soft soils, adopting an elastic visco-plastic creep model. For this purpose, a parametric study was carried out to evaluate the effect of the number of K-L expansion terms and the spatial correlation length on the reliability of the results. The results indicate that small values of spatial correlation length require more K-L expansion terms. Moreover, by increasing the spatial correlation length, the coefficient of variation (COV) of the creep settlement increases, confirming a more conservative and safer prediction. Keywords: Karhunen-Loeve expansion, long-term settlement, reliability analysis, spatial correlation length
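To illustrate the reported interaction between correlation length and the number of K-L terms, a 1-D exponential-covariance field can be discretized on a grid and its eigen-spectrum truncated at, say, 95% of the variance. The grid size, domain length, covariance model, and threshold below are illustrative choices, not the paper's settings:

```python
import numpy as np

def kl_terms(length, corr_len, n_grid=200, energy=0.95, sigma=1.0):
    """Number of Karhunen-Loeve terms needed to capture a given variance
    fraction for a 1-D exponential-covariance field (Nystrom-style
    discretization of the covariance kernel)."""
    x = np.linspace(0.0, length, n_grid)
    cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    w = length / n_grid                     # quadrature weight per grid cell
    vals, _ = np.linalg.eigh(cov * w)       # eigenvalues, ascending
    vals = vals[::-1]                       # sort descending
    frac = np.cumsum(vals) / vals.sum()     # cumulative variance captured
    return int(np.searchsorted(frac, energy)) + 1

# Smaller correlation length -> more K-L terms needed, as reported above.
m_short = kl_terms(length=10.0, corr_len=0.5)
m_long = kl_terms(length=10.0, corr_len=5.0)
```

Running both cases makes the abstract's parametric finding concrete: the short-correlation field needs noticeably more expansion terms than the long-correlation one.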
Procedia PDF Downloads 159
1853 Random Subspace Neural Classifier for Meteor Recognition in the Night Sky
Authors: Carlos Vera, Tetyana Baydyk, Ernst Kussul, Graciela Velasco, Miguel Aparicio
Abstract:
This article describes the Random Subspace Neural Classifier (RSC) for the recognition of meteors in the night sky. We used images of meteors entering the atmosphere at night, between 8:00 p.m. and 5:00 a.m. The objective of this project is to classify meteor and star images (with stars as the image background). The monitoring of the sky and the classification of meteors are intended for future applications by scientists. The image database was collected from different websites. We worked with RGB images with dimensions of 220x220 pixels stored in the bitmap (BMP) format. Subsequent window scanning and processing were carried out for each image. The scan window from which the features were extracted had a size of 20x20 pixels, with a scanning step of 10 pixels. Brightness, contrast and contour orientation histograms were used as inputs for the RSC. The RSC worked with two classes: 1) with meteors and 2) without meteors. Different tests were carried out by varying the number of training cycles and the number of images for training and recognition. The percentage error of the neural classifier was calculated. The results show a good RSC classifier response, with 89% correct recognition. The results of these experiments are presented and discussed. Keywords: contour orientation histogram, meteors, night sky, RSC neural classifier, stars
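The window-scanning stage described above (20x20 windows, 10-pixel step over a 220x220 image) can be sketched as follows. The brightness-histogram bin count and the simple max-minus-min contrast measure are illustrative stand-ins, the image is treated as grayscale, and the contour orientation histograms are omitted:

```python
import numpy as np

def window_features(img, win=20, step=10, bins=8):
    """Scan an image with win x win windows at the given stride and extract
    a brightness histogram plus mean and contrast per window: RSC-style
    input features."""
    feats = []
    H, W = img.shape
    for r in range(0, H - win + 1, step):
        for c in range(0, W - win + 1, step):
            patch = img[r:r + win, c:c + win]
            b_hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
            contrast = patch.max() - patch.min()
            feats.append(np.concatenate([b_hist, [patch.mean(), contrast]]))
    return np.array(feats)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(220, 220)).astype(float)  # dummy frame
F = window_features(img)   # 21 x 21 = 441 windows, 10 features each
```

Each row of `F` would then feed one window's features to the neural classifier.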
Procedia PDF Downloads 139
1852 Motion Detection Method for Clutter Rejection in the Bio-Radar Signal Processing
Authors: Carolina Gouveia, José Vieira, Pedro Pinho
Abstract:
Cardiopulmonary signal monitoring without contact electrodes or any type of in-body sensor has several applications, such as sleep monitoring and continuous monitoring of vital signs in bedridden patients. Such a system also has applications in the vehicular environment, to monitor the driver in order to avoid accidents in case of cardiac failure. The bio-radar system proposed in this paper can measure vital signs accurately by using the Doppler effect, which relates the received signal properties to the change in distance between the radar antennas and the person's chest wall. Since the aim of the bio-radar is to monitor subjects in real time and over long periods, it is impossible to guarantee patient immobilization; hence their random motion will interfere with the acquired signals. In this paper, a mathematical model of the bio-radar is presented, as well as its simulation in MATLAB. The algorithm used for breath rate extraction is explained, and a method for DC offset removal based on a motion detection system is proposed. Furthermore, experimental tests were conducted to demonstrate that the unavoidable random motion can be used to estimate the DC offsets accurately and thus remove them successfully. Keywords: bio-signals, DC component, Doppler effect, ellipse fitting, radar, SDR
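One common way to estimate DC offsets from the IQ arc traced by chest-wall motion is a least-squares circle (Kasa) fit, a simplified relative of the ellipse fitting named in the keywords. The sketch below uses a synthetic arc with known offsets; all values are illustrative and this is not the paper's implementation:

```python
import numpy as np

def fit_circle_center(i, q):
    """Kasa least-squares circle fit: solve 2*a*i + 2*b*q + c = i^2 + q^2
    for the center (a, b) of the circle the IQ samples lie on. The center
    is the DC offset pair to subtract from the baseband signal."""
    A = np.column_stack([i, q, np.ones_like(i)])
    rhs = i**2 + q**2
    coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coef[0] / 2.0, coef[1] / 2.0

# Synthetic arc: true DC offsets (0.7, -0.3), unit radius; chest motion
# only sweeps part of the circle, so the fit must work from a short arc.
rng = np.random.default_rng(1)
theta = np.linspace(0.2, 1.4, 400) + 0.01 * rng.normal(size=400)
i = 0.7 + np.cos(theta)
q = -0.3 + np.sin(theta)
ci, cq = fit_circle_center(i, q)
```

Subtracting `(ci, cq)` from the IQ samples recenters the arc before phase demodulation.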
Procedia PDF Downloads 141
1851 Stock Prediction and Portfolio Optimization Thesis
Authors: Deniz Peksen
Abstract:
This thesis aims to predict the trend movement of stock closing prices and to maximize portfolio returns by utilizing the predictions. In this context, the study aims to define a stock portfolio strategy from models created by using Logistic Regression, Gradient Boosting and Random Forest. Recently, predicting the trend of stock prices has gained a significant role in making buy and sell decisions and in generating returns with investment strategies formed by machine-learning-based decisions. There are plenty of studies in the literature on the prediction of stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in terms of target definition: ours is a classification problem focused on the market trend over the next 20 trading days. To predict the trend direction, fourteen years of data were used for training, the following three years for validation, and the final three years for testing. Training data are between 2002-06-18 and 2016-12-30, validation data between 2017-01-02 and 2019-12-31, and testing data between 2020-01-02 and 2022-03-17. We defined the Hold Stock Portfolio, the Best Stock Portfolio and the USD-TRY exchange rate as benchmarks to outperform, and compared the return of our machine-learning-based portfolio on the test data against these benchmarks. Model performance was assessed with the help of the ROC-AUC score and lift charts. We used Logistic Regression, Gradient Boosting and Random Forest with a grid search approach to fine-tune hyper-parameters. As a result of the empirical study, the uptrends and downtrends of five stocks could not be predicted by the models, and when these predictions were used to define buy and sell decisions, the resulting model-based portfolio failed on the test dataset.
Model-based buy and sell decisions generated a stock portfolio strategy whose returns could not outperform the non-model portfolio strategies on the test dataset. We found that any effort to predict a trend formulated on the stock price is a challenge. Our results agree with the Random Walk Theory, which claims that stock prices and price changes are unpredictable: our model iterations failed on the test dataset, and although we built several good models on the validation dataset, they did not carry over to the test dataset. We implemented Random Forest, Gradient Boosting and Logistic Regression, and discovered that the complex models provided no advantage or additional performance over Logistic Regression; more complexity did not lead to better performance, so using a complex model is not the answer to the stock prediction problem. Our approach was to predict the trend instead of the price, which converted the problem into classification. However, this labeling approach neither solves the stock prediction problem nor refutes the Random Walk Theory for stock prices. Keywords: stock prediction, portfolio optimization, data science, machine learning
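The 20-trading-day trend target can be sketched as a labeling function on a closing-price series, followed by a strictly chronological split. The geometric-random-walk series and the split fractions below are illustrative, not the thesis data:

```python
import numpy as np

def trend_labels(close, horizon=20):
    """Binary labels: 1 if the close is higher `horizon` trading days ahead.
    This turns price forecasting into the classification problem above."""
    future = close[horizon:]
    return (future > close[:-horizon]).astype(int)

# Dummy price path: geometric random walk with slight drift.
rng = np.random.default_rng(7)
close = 100.0 * np.exp(np.cumsum(0.001 + 0.02 * rng.normal(size=500)))
y = trend_labels(close)

# Chronological split, mirroring the train/validation/test scheme above
# (never shuffle time series before splitting).
n = len(y)
train = y[: int(0.6 * n)]
valid = y[int(0.6 * n): int(0.8 * n)]
test = y[int(0.8 * n):]
```

Any of the three classifiers named above can then be fit on `train`, tuned on `valid`, and evaluated once on `test`.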
Procedia PDF Downloads 80
1850 An Authentic Algorithm for Ciphering and Deciphering Called Latin Djokovic
Authors: Diogen Babuc
Abstract:
The question motivating this work is how many devote themselves to discovery in a world of science where much has been discerned and revealed, but where, at the same time, much remains unknown. Methods: The building blocks of this algorithm are the ciphering and deciphering schemes of Playfair, Caesar, and Vigenère. Only a few of their main properties are taken and modified, with the aim of forming the specific functionality of the algorithm called Latin Djokovic. Specifically, a string is entered as input data. A key k is given, with a random value between a and b = a+3. The obtained value is stored in a variable so that it remains constant during the run of the algorithm. According to the given key, the string is divided into groups of substrings, each of length k characters. The next step encodes each substring in the list of existing substrings. Encoding is performed on the basis of the Caesar algorithm, i.e., shifting by k characters; however, k is incremented by 1 when moving to the next substring in the list. When the value of k becomes greater than b+1, it returns to its initial value. The algorithm proceeds in the same manner until the last substring in the list is traversed. Results: Using this polyalphabetic method, ciphering and deciphering of strings are achieved. The algorithm also works for a 100-character string. The character x is not used when the number of characters in a substring is incompatible with the expected length. The algorithm is simple to implement, but it is questionable whether it works better than the other methods in terms of execution time and storage space. Keywords: ciphering, deciphering, authentic, algorithm, polyalphabetic cipher, random key, methods comparison
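The steps above can be sketched directly; the lowercase Latin alphabet and the pass-through handling of non-letters are assumptions, while the substring split, incrementing shift, and b+1 wrap follow the description:

```python
import random
import string

ALPHA = string.ascii_lowercase

def _transform(text, k0, a, sign):
    """Shift each k0-character substring by a per-substring key that starts
    at k0, increments by 1, and wraps back to k0 once it exceeds b + 1,
    where b = a + 3. sign = +1 enciphers, -1 deciphers."""
    b = a + 3
    out, k = [], k0
    for i in range(0, len(text), k0):
        for ch in text[i:i + k0]:
            if ch in ALPHA:
                out.append(ALPHA[(ALPHA.index(ch) + sign * k) % 26])
            else:
                out.append(ch)   # non-letters pass through (an assumption)
        k += 1
        if k > b + 1:
            k = k0
    return "".join(out)

def encipher(text, a=3, seed=None):
    rng = random.Random(seed)
    k0 = rng.randint(a, a + 3)   # random key, constant for the whole run
    return _transform(text, k0, a, +1), k0

def decipher(cipher, k0, a=3):
    return _transform(cipher, k0, a, -1)

cipher, key = encipher("attackatdawn", seed=42)
plain = decipher(cipher, key)
```

Because deciphering replays the same key schedule with the shift negated, the round trip restores the original string exactly.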
Procedia PDF Downloads 103
1849 Parameter Estimation for Contact Tracing in Graph-Based Models
Authors: Augustine Okolie, Johannes Müller, Mirjam Kretzchmar
Abstract:
We adopt a maximum-likelihood framework to estimate the parameters of a stochastic susceptible-infected-recovered (SIR) model with contact tracing on a rooted random tree. Given the number of detectees per index case, our estimator allows us to determine the degree distribution of the random tree as well as the tracing probability. Since we do not discover all infectees via contact tracing, this estimation is non-trivial. To keep things simple and stable, we develop an approximation suited to realistic situations (the contact tracing probability is small, or the probability of detecting index cases is small). In this approximation, the only epidemiological parameter entering the estimator is the basic reproduction number R0. The estimator is tested in a simulation study and applied to COVID-19 contact tracing data from India. The simulation study underlines the efficiency of the method. For the empirical COVID-19 data, we are able to compare different degree distributions and perform a sensitivity analysis. We find that a power-law and a negative binomial degree distribution in particular fit the data well, and that the tracing probability is rather large. The sensitivity analysis shows no strong dependence on the reproduction number. Keywords: stochastic SIR model on graph, contact tracing, branching process, parameter inference
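The estimation setting can be illustrated with a toy forward simulation, assuming, purely for illustration, a Poisson degree distribution and independent detection of each infected contact (this is a stand-in for the paper's tree-based SIR model, not its estimator):

```python
import numpy as np

def detectees_per_index(n_index, offspring_mean, p_trace, rng):
    """Each index case infects a Poisson number of contacts (a stand-in
    for the tree's degree distribution); each infected contact is then
    detected by tracing independently with probability p_trace."""
    contacts = rng.poisson(offspring_mean, size=n_index)
    return rng.binomial(contacts, p_trace)

rng = np.random.default_rng(3)
counts = detectees_per_index(50_000, offspring_mean=4.0, p_trace=0.6, rng=rng)

# Method-of-moments style check: the mean number of detectees is close to
# offspring_mean * p_trace = 2.4, the kind of relation a likelihood-based
# estimator exploits when the offspring mean and p_trace are unknown.
est = counts.mean()
```

The confounding the abstract mentions is visible here: the mean alone cannot separate a large offspring mean with weak tracing from a small one with strong tracing, which is why the full distribution of detectee counts is needed.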
Procedia PDF Downloads 77
1848 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially in subsequent risk assessment of different types of structures. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques in ground motion prediction, namely Artificial Neural Networks, Random Forests, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models, reducing the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitudes 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for them.
The accuracy of the models in predicting intensity measures, the generalization capability of the models for future data, and the usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms. However, the conventional method is a better tool when limited data are available. Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
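A minimal sketch of the nonparametric alternative: a Random Forest regressor trained on a synthetic GMPE-like relation. The coefficients, the Vs30 site proxy, and the noise level are invented for illustration, and the random-effects treatment of event and site terms is omitted:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 3_000
mag = rng.uniform(3.0, 5.8, n)        # magnitude range matching the database
dist = rng.uniform(4.0, 500.0, n)     # hypocentral distance, km
vs30 = rng.uniform(200.0, 800.0, n)   # site-condition proxy (assumed)

# Synthetic GMPE-like target: magnitude scaling, geometric attenuation,
# and a mild site term, plus lognormal-style scatter.
log_pga = (1.2 * mag - 1.5 * np.log10(dist) - 0.3 * np.log10(vs30)
           + 0.3 * rng.normal(size=n))

X = np.column_stack([mag, dist, vs30])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, log_pga)
pred = model.predict(np.array([[5.0, 50.0, 400.0]]))  # one scenario query
```

No functional form is specified, yet the fitted forest recovers the magnitude scaling and distance decay from the data alone, which is the behavior the study reports.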
Procedia PDF Downloads 122