Search results for: generalized random graphs
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3047

2507 Thermal Radiation and Chemical Reaction Effects on MHD Casson Fluid Past a Permeable Stretching Sheet in a Porous Medium

Authors: Y. Sunita Rani, Y. Hari Krishna, M. V. Ramana Murthy, K. Sudhaker Reddy

Abstract:

This article studies the effects of thermal radiation and chemical reaction on MHD Casson fluid flow past a permeable stretching sheet in a porous medium. Suitable similarity transformations are used to reduce the governing partial differential equations to ordinary differential equations, which are then solved numerically by the Runge-Kutta-Fehlberg shooting technique. The effects of the various governing parameters on the velocity, temperature and concentration fields are displayed through graphs and discussed.
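
The shooting technique mentioned above can be illustrated on the classical Blasius boundary-layer problem, used here as a much simpler stand-in for the paper's coupled Casson-fluid equations; SciPy's adaptive Runge-Kutta solver plays the role of the Runge-Kutta-Fehlberg integrator. This is a minimal sketch under those assumptions, not the authors' model.

```python
# Hedged sketch: shooting method for the Blasius equation f''' + 0.5*f*f'' = 0,
# f(0)=f'(0)=0, f'(inf)=1, as a stand-in for the paper's Casson-fluid ODE system.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

ETA_MAX = 10.0  # truncation of the semi-infinite similarity domain

def blasius(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def residual(s):
    """Mismatch in the far-field condition f'(inf)=1 for a guessed f''(0)=s."""
    sol = solve_ivp(blasius, (0.0, ETA_MAX), [0.0, 0.0, s],
                    method="RK45", rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# Adjust the unknown initial curvature f''(0) until the far-field BC is met.
s_star = brentq(residual, 0.1, 1.0)
print(f"f''(0) ~ {s_star:.5f}")   # classical value is about 0.332
```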

Keywords: MHD, Casson fluid, porous medium, permeable stretching sheet

Procedia PDF Downloads 109
2506 Location Quotient Analysis: Case Study

Authors: Seyed Habib A. Rahmati, Mohamad Hasan Sadeghpour, Parsa Fallah Sheikhlari

Abstract:

The location quotient (LQ) is a comparison technique that contrasts the economic structure of a single zone with that of a standard reference area in order to identify each zone's specializations. In other words, the exact calculation of this metric can show decision makers the main core competencies and critical capabilities of an area. This research focuses on the exact calculation of the LQ for the Iranian province of Qazvin and, through a case study, presents the LQ of Qazvin's most capable industries. Finally, through different graphs and tables, it allows the recognized capabilities to be compared.
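
The standard location quotient formula compares an industry's share of regional employment with its share in the reference economy; a sketch of that calculation is shown below. The employment figures are made up for illustration and are not data for Qazvin.

```python
# Hedged sketch: location quotient LQ = (e_i / e) / (E_i / E), where e_i and e are
# regional industry and total employment, and E_i and E the reference-area values.
import pandas as pd

data = pd.DataFrame({
    "industry": ["agriculture", "textiles", "electronics"],
    "region_employment": [12000, 30000, 8000],
    "nation_employment": [900000, 600000, 700000],
})

e_total = data["region_employment"].sum()
E_total = data["nation_employment"].sum()

data["LQ"] = (data["region_employment"] / e_total) / (data["nation_employment"] / E_total)
print(data[["industry", "LQ"]])  # LQ > 1 flags a regional specialization
```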

Keywords: location quotient, case study, province analysis, core competency

Procedia PDF Downloads 634
2505 Progressive Type-I Interval Censoring with Binomial Removal-Estimation and Its Properties

Authors: Sonal Budhiraja, Biswabrata Pradhan

Abstract:

This work considers statistical inference based on progressive Type-I interval censored data with random removal. The scheme of progressive Type-I interval censoring with random removal can be described as follows. Suppose n identical items are placed on test at time T0 = 0, with k pre-specified inspection times T1 < T2 < . . . < Tk, where Tk is the scheduled termination time of the experiment. At inspection time Ti, Ri of the Si remaining surviving units are randomly removed from the experiment. The removal follows a binomial distribution with parameters Si and pi for i = 1, . . . , k, with pk = 1. In this censoring scheme, the number of failures in different inspection intervals and the number of randomly removed items at the pre-specified inspection times are observed. Asymptotic properties of the maximum likelihood estimators (MLEs) are established under some regularity conditions. A β-content γ-level tolerance interval (TI) is determined for the two-parameter Weibull lifetime model using the asymptotic properties of the MLEs. The minimum sample size required to achieve the desired β-content γ-level TI is determined. The performance of the MLEs and the TI is studied via simulation.
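
The censoring scheme described above can be mimicked numerically; the sketch below simulates progressive Type-I interval censoring with binomial removals for a two-parameter Weibull lifetime, which is the kind of data the MLEs would be fitted to. The shape, scale, inspection times and removal probabilities are illustrative assumptions.

```python
# Hedged sketch: simulate progressive Type-I interval censoring with binomial removals.
# Lifetimes ~ Weibull(shape, scale); inspection times T1 < ... < Tk; p_k = 1.
import numpy as np

rng = np.random.default_rng(0)
n, shape, scale = 100, 1.5, 2.0
T = np.array([0.5, 1.0, 1.5, 2.0])          # pre-specified inspection times
p = np.array([0.2, 0.2, 0.2, 1.0])          # removal probabilities, p_k = 1

lifetimes = scale * rng.weibull(shape, size=n)
on_test = np.ones(n, dtype=bool)
prev_t = 0.0
for Ti, pi in zip(T, p):
    # failures observed in the interval (prev_t, Ti]
    failed = on_test & (lifetimes > prev_t) & (lifetimes <= Ti)
    on_test &= ~failed
    survivors = np.flatnonzero(on_test & (lifetimes > Ti))
    Ri = rng.binomial(len(survivors), pi)    # random number of removals
    removed = rng.choice(survivors, size=Ri, replace=False)
    on_test[removed] = False
    print(f"T={Ti}: failures={failed.sum()}, S_i={len(survivors)}, removed R_i={Ri}")
    prev_t = Ti
```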

Keywords: asymptotic normality, consistency, regularity conditions, simulation study, tolerance interval

Procedia PDF Downloads 232
2504 Random Variation of Treated Volumes in Fractionated 2D Image Based HDR Brachytherapy for Cervical Cancer

Authors: R. Tudugala, B. M. A. I. Balasooriya, W. M. Ediri Arachchi, R. W. M. W. K. Rathnayake, T. D. Premaratna

Abstract:

Brachytherapy involves placing a source of radiation near the cancer site and gives a promising prognosis for cervical cancer treatments. The purpose of this study was to evaluate the effect of random variation of treated volumes between fractions in 2D image based fractionated high dose rate brachytherapy for cervical cancer at the National Cancer Institute Maharagama, Sri Lanka. Dose plans were analyzed for 150 cervical cancer patients treated with orthogonal radiograph (2D) based brachytherapy. ICRU treated volumes were modeled by translating the applicators with the help of the "Multisource HDR plus" software. The difference of treated volumes with respect to the applicator geometry was analyzed using SPSS 18 software, to derive patient-population-based estimates of delivered treated volumes relative to ideally treated volumes. Packing was evaluated according to bladder dose, rectum dose and the geometry of the dose distribution by three consultant radiation oncologists. The difference of treated volumes depends on the type of applicator used in fractionated brachytherapy. The mean "Difference of Treated Volume" (DTV) was -0.48 cm3 for the "evenly activated tandem length" (ET) group and 11.85 cm3 for the "unevenly activated tandem length" (UET) group. The range of the DTV for the ET group was 35.80 cm3, whereas for the UET group it was 104.80 cm3. A one-sample T test was performed to compare the DTV with the ideal treated volume difference (0.00 cm3); the P value was 0.732 for the ET group and 0.00 for the UET group. Moreover, an independent two-sample T test was performed to compare the ET and UET groups, and the calculated P value was 0.005. Packing was evaluated under three categories: 59.38% of treatments used the "Convenient Packing Technique", 33.33% the "Fairly Packing Technique" and 7.29% the "Not Convenient Packing" technique. The random variation of treated volume in the ET group is much lower than in the UET group, and there is a significant difference (p<0.05) between the ET and UET groups, which affects the dose distribution of the treatment. Furthermore, it can be concluded that nearly 92.71% of patients' packing used an acceptable technique at NCIM, Sri Lanka.
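
The two hypothesis tests reported above (a one-sample t-test of DTV against the ideal value of 0 cm³ and an independent two-sample t-test between the ET and UET groups) can be reproduced with SciPy; the arrays below are synthetic placeholders, not the NCIM data, so the printed p-values will differ from those in the abstract.

```python
# Hedged sketch: the two t-tests described in the abstract, on synthetic DTV values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
dtv_et = rng.normal(-0.48, 8.0, size=60)    # placeholder ET group DTVs (cm^3)
dtv_uet = rng.normal(11.85, 25.0, size=90)  # placeholder UET group DTVs (cm^3)

t1, p1 = stats.ttest_1samp(dtv_et, popmean=0.0)   # DTV vs ideal 0.00 cm^3
t2, p2 = stats.ttest_ind(dtv_et, dtv_uet)         # ET vs UET groups
print(f"one-sample: p={p1:.3f}, two-sample: p={p2:.3f}")
```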

Keywords: brachytherapy, cervical cancer, high dose rate, tandem, treated volumes

Procedia PDF Downloads 187
2503 Inference for Compound Truncated Poisson Lognormal Model with Application to Maximum Precipitation Data

Authors: M. Z. Raqab, Debasis Kundu, M. A. Meraou

Abstract:

In this paper, we have analyzed maximum precipitation data during a particular period of time obtained from different stations in the Global Historical Climatological Network of the USA. One important point to mention is that some stations are shut down on certain days for some reason or the other. Hence, the maximum values are recorded by excluding those readings. It is assumed that the number of stations that operate follows a zero-truncated Poisson distribution, and the daily precipitation follows a lognormal random variable. We call this model a compound truncated Poisson lognormal model. The proposed model has three unknown parameters, and it can take a variety of shapes. The maximum likelihood estimators can be obtained quite conveniently using the Expectation-Maximization (EM) algorithm. Approximate maximum likelihood estimators are also derived. The associated confidence intervals can also be obtained from the observed Fisher information matrix. Simulations have been performed to check the performance of the EM algorithm, and it is observed that the EM algorithm works quite well in this case. When we analyze the precipitation data set using the proposed model, it is observed that the proposed model provides a better fit than some of the existing models.
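
One way to picture the proposed model: the number of operating stations follows a zero-truncated Poisson law, each station's reading is lognormal, and the recorded value is the maximum over the operating stations (this reading of "compound" follows the abstract's description of recorded maxima). The sketch below only simulates such data with arbitrary parameter values; the EM fitting step itself is not reproduced.

```python
# Hedged sketch: simulate a zero-truncated-Poisson number of lognormal readings
# and record their maximum, as one interpretation of the compound model.
import numpy as np

rng = np.random.default_rng(2)
lam, mu, sigma = 5.0, 1.0, 0.5   # illustrative parameter values

def zero_truncated_poisson(lam, rng):
    """Rejection sampling: draw Poisson(lam) until a positive value appears."""
    while True:
        n = rng.poisson(lam)
        if n > 0:
            return n

def sample_maximum(size):
    out = np.empty(size)
    for i in range(size):
        n_stations = zero_truncated_poisson(lam, rng)        # stations operating
        readings = rng.lognormal(mu, sigma, size=n_stations)  # daily precipitation
        out[i] = readings.max()                               # recorded maximum
    return out

data = sample_maximum(1000)
print(data.mean(), data.max())
```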

Keywords: compound Poisson lognormal distribution, EM algorithm, maximum likelihood estimation, approximate maximum likelihood estimation, Fisher information, skew distribution

Procedia PDF Downloads 97
2502 Extension of Positive Linear Operator

Authors: Manal Azzidani

Abstract:

This research considers the extension of the special maps called positive linear operators. A bounded linear operator defined from a normed space into a Banach space extends to the closure of its domain, and a linear functional defined on a vector subspace extends by the Hahn-Banach theorem, which can be generalized to positive linear operators.

Keywords: extension, positive operator, Riesz space, sublinear function

Procedia PDF Downloads 508
2501 Churn Prediction for Savings Bank Customers: A Machine Learning Approach

Authors: Prashant Verma

Abstract:

Commercial banks are facing immense pressure, including financial disintermediation, interest rate volatility and digital ways of finance. Retaining an existing customer is 5 to 25 times less expensive than acquiring a new one. This paper explores customer churn prediction based on various statistical and machine learning models and uses under-sampling to improve the predictive power of these models. The results show that, out of the various machine learning models, Random Forest, which predicts churn with 78% accuracy, is the most powerful model for this scenario. Customer vintage, customer age, average balance, occupation code, population code, average withdrawal amount, and average number of transactions were found to be the variables with high predictive power for the churn prediction model. The model can be deployed by commercial banks in order to counter customer churn, so that they may retain the funds kept by savings bank (SB) customers. The article suggests a customized campaign to be initiated by commercial banks to avoid SB customer churn. Hence, by providing better customer satisfaction and experience, commercial banks can limit customer churn and maintain their deposits.
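
A bare-bones version of this kind of pipeline (random under-sampling of the majority class followed by a random forest) might look like the sketch below; the features and labels are synthetic placeholders rather than the savings-bank dataset, and the under-sampling is done by hand instead of with a dedicated library.

```python
# Hedged sketch: under-sampling + random forest churn classifier, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=8, weights=[0.9, 0.1],
                           random_state=0)            # imbalanced churn labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Random under-sampling: keep all churners, sample an equal number of non-churners.
rng = np.random.default_rng(0)
churn_idx = np.flatnonzero(y_tr == 1)
keep_majority = rng.choice(np.flatnonzero(y_tr == 0), size=len(churn_idx), replace=False)
idx = np.concatenate([churn_idx, keep_majority])

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr[idx], y_tr[idx])
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```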

Keywords: savings bank, customer churn, customer retention, random forests, machine learning, under-sampling

Procedia PDF Downloads 125
2500 The Impact of Board Characteristics on Firm Performance: Evidence from Banking Industry in India

Authors: Manmeet Kaur, Madhu Vij

Abstract:

The Board of Directors in a firm performs the primary role of an internal control mechanism. This study seeks to understand the relationship between internal governance and the performance of banks in India. The research paper investigates the effect of board structure (proportion of non-executive directors, gender diversity, board size and meetings per year) on firm performance. The paper evaluates the impact of corporate governance mechanisms on banks' financial performance using panel data for 28 banks listed on the National Stock Exchange of India for the period 2008-2014. Return on Assets, Return on Equity, Tobin's Q and Net Interest Margin were used as the financial performance indicators. To estimate the relationship between governance and bank performance, the study initially uses Pooled Ordinary Least Squares (OLS) estimation and Generalized Least Squares (GLS) estimation. Then a well-developed panel Generalized Method of Moments (GMM) estimator is employed to investigate the dynamic nature of the performance-governance relationship. The study empirically confirms that the two-step system GMM approach controls for the problems of unobserved heterogeneity and endogeneity, as compared to the OLS and GLS approaches. The results suggest that banks with small boards, boards with female members, and boards that meet more frequently tend to be more efficient, which subsequently has a positive impact on bank performance. The study offers insights to policy makers interested in enhancing the quality of governance of banks in India. The findings also suggest that board structure plays a vital role in improving the corporate governance mechanism of financial institutions. There is a need for efficient boards in banks to improve the overall health of financial institutions and the economic development of the country.

Keywords: board of directors, corporate governance, GMM estimation, Indian banking

Procedia PDF Downloads 243
2499 From Convexity in Graphs to Polynomial Rings

Authors: Ladznar S. Laja, Rosalio G. Artes, Jr.

Abstract:

This paper introduces a graph polynomial relating convexity concepts. A graph polynomial is a polynomial representing a graph in terms of some parameters. A subgraph H of a graph G is said to be convex in G if, for every pair of vertices in H, every shortest path with these end-vertices lies entirely in H. We define the convex subgraph polynomial of a graph G to be the generating function of the sequence of the numbers of convex subgraphs of G of cardinalities ranging from zero to the order of G. This graph polynomial is monic since G itself is convex. The convex index, which counts the number of convex subgraphs of G of all orders, is simply the evaluation of this polynomial at 1. Relationships between algebraic properties of the convex subgraph polynomial and graph-theoretic concepts are established.
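
For small graphs the convex subgraph polynomial can be computed by brute force: enumerate vertex subsets, keep those whose induced subgraph is convex (every shortest path between two of its vertices stays inside the subset), and tally the counts by cardinality. The sketch below follows that definition; treating convex subgraphs as induced by vertex subsets is an assumption about the paper's convention.

```python
# Hedged sketch: brute-force convex subgraph polynomial of a small graph.
from itertools import combinations
import networkx as nx

def is_convex(G, S):
    """S is convex if every shortest path in G between vertices of S stays in S."""
    S = set(S)
    for u, v in combinations(S, 2):
        for path in nx.all_shortest_paths(G, u, v):
            if not set(path) <= S:
                return False
    return True

def convex_subgraph_polynomial(G):
    n = G.number_of_nodes()
    coeffs = [0] * (n + 1)
    coeffs[0] = 1                      # the empty subgraph, counted by convention
    for k in range(1, n + 1):
        for S in combinations(G.nodes, k):
            if is_convex(G, S):
                coeffs[k] += 1
    return coeffs                      # coeffs[k] = number of convex subgraphs of order k

G = nx.cycle_graph(5)
c = convex_subgraph_polynomial(G)
print("coefficients:", c)                         # leading coefficient is 1 (monic)
print("convex index (evaluation at 1):", sum(c))
```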

Keywords: convex subgraph, convex index, generating function, polynomial ring

Procedia PDF Downloads 196
2498 Understanding the Thermal Transformation of Random Access Memory Cards: A Pathway to Their Efficient Recycling

Authors: Khushalini N. Ulman, Samane Maroufi, Veena H. Sahajwalla

Abstract:

Globally, electronic waste (e-waste) continues to grow at an alarming rate. Several technologies have been developed to recover valuable materials from e-waste; however, their efficiency can be increased with better knowledge of the e-waste components. Random access memory cards (RAMs) are considered high-value scrap by e-waste recyclers. Despite their high precious metal content, RAMs are still recycled in a conventional manner, resulting in a huge loss of resources. Our research work highlights the precious-metal-rich components of a RAM. Inductively coupled plasma (ICP) analyses of RAMs of six different generations have been carried out, and the trends in their metal content have been investigated. Over the past decade, the copper content of RAMs has halved and their tin content has increased by 70%. Stricter environmental laws have facilitated a ~96% drop in the lead content of RAMs. To comprehend the fundamentals of the thermal transformation of RAMs, our research provides a detailed kinetic study. This can assist e-waste recyclers in optimising their metal recovery processes. Thus, understanding the chemical and thermal behaviour of RAMs can open new avenues for efficient e-waste recycling.

Keywords: electronic waste, kinetic study, recycling, thermal transformation

Procedia PDF Downloads 135
2497 Designing Stochastic Non-Invasively Applied DC Pulses to Suppress Tremors in Multiple Sclerosis by Computational Modeling

Authors: Aamna Lawrence, Ashutosh Mishra

Abstract:

Tremors occur in 60% of the patients who have Multiple Sclerosis (MS), the most common demyelinating disease that affects the central and peripheral nervous system, and are a primary cause of disability in young adults. While pharmacological agents provide minimal benefits, surgical interventions like Deep Brain Stimulation and Thalamotomy are riddled with dangerous complications, which makes non-invasive electrical stimulation an appealing treatment option for dealing with tremors. Hence, we hypothesized that if the non-invasive electrical stimulation parameters (mainly frequency) can be computed by mathematically modeling the nerve fibre, taking into consideration the minutest details of the axon morphologies, tremors due to demyelination can be optimally alleviated. In this computational study, we have modeled the random demyelination pattern in a nerve fibre that typically manifests in MS using the high-density Hodgkin-Huxley model with suitable modifications to account for the myelin. The internode of the nerve fibre in our model could have up to ten demyelinated regions, each with a random length and myelin thickness. The arrival time of action potentials traveling the demyelinated and the normally myelinated nerve fibre between two fixed points in space was noted, and its relationship with the nerve fibre radius ranging from 5µm to 12µm was analyzed. It was interesting to note that there were no overlaps between the arrival times of action potentials traversing the demyelinated and normally myelinated nerve fibres, even when a single internode of the nerve fibre was demyelinated. The study gave us an opportunity to design DC pulses whose frequency of application would be a function of the random demyelination pattern, so as to block only the delayed tremor-causing action potentials. The DC pulses could be delivered to the peripheral nervous system non-invasively by an electrode bracelet that would suppress any shakiness beyond it, thus paving the way for wearable neuro-rehabilitative technologies.
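
For readers unfamiliar with the underlying membrane model, a minimal single-compartment Hodgkin-Huxley simulation with standard squid-axon parameters is sketched below; the authors' multi-compartment, randomly demyelinated fibre model is far more elaborate, so this is only a starting point and not their implementation.

```python
# Hedged sketch: single-compartment Hodgkin-Huxley neuron with standard parameters,
# not the paper's multi-compartment demyelinated fibre model.
import numpy as np

C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3       # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387             # mV

a_m = lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
b_m = lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0)
a_h = lambda V: 0.07 * np.exp(-(V + 65.0) / 20.0)
b_h = lambda V: 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
a_n = lambda V: 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
b_n = lambda V: 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, t_end, I_ext = 0.01, 50.0, 10.0               # ms, ms, uA/cm^2
V, m, h, n = -65.0, 0.05, 0.6, 0.32
spikes = []
for step in range(int(t_end / dt)):
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    V_new = V + dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V < 0.0 <= V_new:                          # crude spike detector (upward zero crossing)
        spikes.append(step * dt)
    V = V_new
print("spike times (ms):", [round(t, 2) for t in spikes])
```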

Keywords: demyelination, Hodgkin-Huxley model, non-invasive electrical stimulation, tremor

Procedia PDF Downloads 113
2496 Geometric Nonlinear Dynamic Analysis of Cylindrical Composite Sandwich Shells Subjected to Underwater Blast Load

Authors: Mustafa Taskin, Ozgur Demir, M. Mert Serveren

Abstract:

The precise study of the impact of underwater explosions on structures is of great importance in the design and engineering calculations of floating structures, especially those used for military purposes, as well as power generation facilities such as offshore platforms that can become targets in case of war. Considering that ship and submarine structures are mostly curved surfaces, it is extremely important and interesting to examine the destructive effects of underwater explosions on curvilinear surfaces. In this study, a geometric nonlinear dynamic analysis of cylindrical composite sandwich shells subjected to an instantaneous pressure load is performed. The instantaneous pressure load is defined as an underwater explosion, and the effects of the liquid medium are taken into account. There are equations in the literature for the pressure due to underwater explosions, but these equations have been obtained for flat plates. For this reason, the instantaneous pressure load equations are arranged to be suitable for curvilinear structures before proceeding with the analyses. Fluid-solid interaction is defined by using Taylor's plate theory. The lower and upper layers of the cylindrical composite sandwich shell are modeled as composite laminates, and the middle layer consists of a soft core. The geometric nonlinear dynamic equations of the shell are obtained by Hamilton's principle, taking into account the von Kármán theory of large displacements. Then, the time-dependent geometric nonlinear equations of motion are solved with the help of the generalized differential quadrature method (GDQM), and the dynamic behavior of cylindrical composite sandwich shells exposed to underwater explosion is investigated. An algorithm that can work parametrically for the solution has been developed within the scope of the study.
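
The core of the GDQM is replacing derivatives by weighted sums of function values at grid points; Shu's explicit formula gives the first-order weighting coefficients. A small sketch of that ingredient (on a Chebyshev-Gauss-Lobatto grid, tested against a known function) is given below; the grid choice is an assumption and this is not the shell solver itself.

```python
# Hedged sketch: first-order GDQ weighting coefficients via Shu's explicit formula.
import numpy as np

def gdq_first_derivative_matrix(x):
    N = len(x)
    # M[i] = product over k != i of (x_i - x_k)
    M = np.array([np.prod(np.delete(x[i] - x, i)) for i in range(N)])
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                A[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        A[i, i] = -A[i].sum()            # row sums of the weighting matrix are zero
    return A

# Chebyshev-Gauss-Lobatto points on [0, 1], a common grid choice in GDQM studies.
N = 15
x = 0.5 * (1.0 - np.cos(np.pi * np.arange(N) / (N - 1)))
A = gdq_first_derivative_matrix(x)

f = np.sin(2 * np.pi * x)
err = np.max(np.abs(A @ f - 2 * np.pi * np.cos(2 * np.pi * x)))
print("max error of GDQ derivative:", err)
```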

Keywords: cylindrical composite sandwich shells, generalized differential quadrature method, geometric nonlinear dynamic analysis, underwater explosion

Procedia PDF Downloads 175
2495 Evaluation of Spatial Correlation Length and Karhunen-Loeve Expansion Terms for Predicting Reliability Level of Long-Term Settlement in Soft Soils

Authors: Mehrnaz Alibeikloo, Hadi Khabbaz, Behzad Fatahi

Abstract:

The spectral random field method is one of the most widely used methods to obtain more reliable and accurate results in geotechnical problems involving material variability. The Karhunen-Loeve (K-L) expansion method was applied to perform random field discretization of cross-correlated creep parameters. The Karhunen-Loeve expansion is based on the eigenfunctions and eigenvalues of the covariance function, adopting a kernel integral solution. In this paper, the accuracy of the Karhunen-Loeve expansion was investigated for predicting the long-term settlement of soft soils, adopting an elastic visco-plastic creep model. For this purpose, a parametric study was carried out to evaluate the effect of the number of K-L expansion terms and the spatial correlation length on the reliability of the results. The results indicate that small values of the spatial correlation length require more K-L expansion terms. Moreover, by increasing the spatial correlation length, the coefficient of variation (COV) of the creep settlement increases, indicating a more conservative and safer prediction.
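
In its discrete form, the K-L expansion amounts to an eigendecomposition of the covariance matrix of the discretized field; a short sketch with an exponential covariance kernel is given below to show how the number of retained terms and the correlation length interact. The kernel choice and parameter values are illustrative assumptions, not the paper's creep-parameter fields.

```python
# Hedged sketch: truncated Karhunen-Loeve expansion of a 1-D Gaussian random field
# with exponential covariance C(x, x') = sigma^2 * exp(-|x - x'| / l_c).
import numpy as np

rng = np.random.default_rng(3)
n_pts, sigma, l_c, n_terms = 200, 1.0, 0.3, 20   # illustrative values
x = np.linspace(0.0, 1.0, n_pts)

C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / l_c)
eigvals, eigvecs = np.linalg.eigh(C)             # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

# Fraction of the field variance captured by the retained terms: a shorter l_c
# spreads energy over more modes, so more K-L terms are needed.
print("captured variance:", eigvals[:n_terms].sum() / eigvals.sum())

xi = rng.standard_normal(n_terms)                # independent standard normals
realization = eigvecs[:, :n_terms] @ (np.sqrt(eigvals[:n_terms]) * xi)
print("realization std:", realization.std())
```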

Keywords: Karhunen-Loeve expansion, long-term settlement, reliability analysis, spatial correlation length

Procedia PDF Downloads 143
2494 A Reduced Distributed State Space for Modular Petri Nets

Authors: Sawsen Khlifa, Chiheb AMeur Abid, Belhassan Zouari

Abstract:

Modular verification approaches have been widely attempted to cope with the well-known state explosion problem. This paper deals with the modular verification of modular Petri nets. We propose a reduced version of the modular state space of a given modular Petri net. The new structure allows the creation of smaller modular graphs. Each one captures the behavior of the corresponding module and outlines some global information. Hence, this version helps to overcome the explosion problem and to use less memory space. In this condensed structure, the verification of some generic properties concerning one module is limited to the exploration of its associated graph.

Keywords: distributed systems, modular verification, Petri nets, state space explosion

Procedia PDF Downloads 97
2493 Random Subspace Neural Classifier for Meteor Recognition in the Night Sky

Authors: Carlos Vera, Tetyana Baydyk, Ernst Kussul, Graciela Velasco, Miguel Aparicio

Abstract:

This article describes the Random Subspace Neural Classifier (RSC) for the recognition of meteors in the night sky. We used images of meteors entering the atmosphere at night between 8:00 p.m. and 5:00 a.m. The objective of this project is to classify meteor and star images (with stars as the image background). The monitoring of the sky and the classification of meteors are intended for future applications by scientists. The image database was collected from different websites. We worked with RGB images with dimensions of 220x220 pixels stored in the bitmap (BMP) format. Subsequent window scanning and processing were carried out for each image. The scan window from which the features were extracted had a size of 20x20 pixels and a scanning step of 10 pixels. Brightness, contrast and contour orientation histograms were used as inputs for the RSC. The RSC worked with two classes, classifying images as: 1) with meteors and 2) without meteors. Different tests were carried out by varying the number of training cycles and the number of images for training and recognition. The percentage error of the neural classifier was calculated. The results show a good RSC classifier response, with 89% correct recognition. The results of these experiments are presented and discussed.
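
The window-scanning step described above (a 20x20 window sliding in 10-pixel steps, with brightness and contrast extracted per window) is easy to mock up; the sketch below uses a random image in place of the meteor photographs and omits the contour-orientation histograms and the RSC itself, so it only illustrates the feature-extraction stage.

```python
# Hedged sketch: sliding-window brightness/contrast features, as in the scan step.
import numpy as np

rng = np.random.default_rng(4)
image = rng.integers(0, 256, size=(220, 220, 3), dtype=np.uint8)  # stand-in image
gray = image.mean(axis=2)

win, step = 20, 10
features = []
for r in range(0, gray.shape[0] - win + 1, step):
    for c in range(0, gray.shape[1] - win + 1, step):
        patch = gray[r:r + win, c:c + win]
        brightness = patch.mean()
        contrast = patch.std()
        hist, _ = np.histogram(patch, bins=8, range=(0, 255))     # coarse brightness histogram
        features.append(np.concatenate(([brightness, contrast], hist)))

features = np.array(features)
print("windows:", features.shape[0], "features per window:", features.shape[1])
```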

Keywords: contour orientation histogram, meteors, night sky, RSC neural classifier, stars

Procedia PDF Downloads 126
2492 Motion Detection Method for Clutter Rejection in the Bio-Radar Signal Processing

Authors: Carolina Gouveia, José Vieira, Pedro Pinho

Abstract:

Cardiopulmonary signal monitoring, without the usage of contact electrodes or any type of in-body sensors, has several applications, such as sleep monitoring and continuous monitoring of vital signs in bedridden patients. This system also has applications in the vehicular environment to monitor the driver, in order to avoid possible accidents in case of cardiac failure. Thus, the bio-radar system proposed in this paper can measure vital signs accurately by using the Doppler effect principle, which relates the received signal properties to the change in distance between the radar antennas and the person's chest wall. Since the bio-radar aims to monitor subjects in real time and over long periods, it is impossible to guarantee the patient's immobilization; hence, their random motion will interfere with the acquired signals. In this paper, a mathematical model of the bio-radar is presented, as well as its simulation in MATLAB. The algorithm used for breath rate extraction is explained, and a method for DC offset removal based on a motion detection system is proposed. Furthermore, experimental tests were conducted to show that the unavoidable random motion can be used to estimate the DC offsets accurately and thus remove them successfully.
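
One common way to estimate the DC offsets of a CW Doppler radar is to fit a circle (or ellipse) to the I/Q samples, since chest-wall motion traces an arc whose centre is the offset; a least-squares circle-fit sketch is shown below. This illustrates the general idea only and is not necessarily the motion-detection-based method proposed in the paper; the signal parameters are synthetic.

```python
# Hedged sketch: algebraic least-squares circle fit of I/Q data to estimate DC offsets.
import numpy as np

rng = np.random.default_rng(5)
dc_i, dc_q, radius = 0.7, -0.3, 1.0                 # ground truth for the demo
theta = 2.0 + 0.8 * np.sin(2 * np.pi * 0.3 * np.linspace(0, 10, 500))  # arc of phases
I = dc_i + radius * np.cos(theta) + 0.01 * rng.standard_normal(theta.size)
Q = dc_q + radius * np.sin(theta) + 0.01 * rng.standard_normal(theta.size)

# Fit x^2 + y^2 = 2*a*x + 2*b*y + c by linear least squares; centre (a, b) = DC offsets.
A = np.column_stack([2 * I, 2 * Q, np.ones_like(I)])
b_vec = I**2 + Q**2
(a, b, c), *_ = np.linalg.lstsq(A, b_vec, rcond=None)
print("estimated DC offsets:", a, b, "radius:", np.sqrt(c + a**2 + b**2))

I_corr, Q_corr = I - a, Q - b                       # offset-free quadratures
phase = np.unwrap(np.arctan2(Q_corr, I_corr))       # displacement-proportional phase
```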

Keywords: bio-signals, DC component, Doppler effect, ellipse fitting, radar, SDR

Procedia PDF Downloads 122
2491 The Effects of SMS on the Formal Writings of the Students: A Comparative Study among the Students of Different Departments of IUB

Authors: Sumaira Saleem

Abstract:

This study reveals that the use of SMS affects the formal writing of students. SMS has been in vogue since the last decade, but its detrimental effects touch not only the established norms of writing; deviant forms of expression, with which not everyone is acquainted, have also entered the community, creating a hurdle to effective communication. The study also determines the reasons behind the usage of SMS practices in formal writing, such as in assignments and examinations. For this study, a questionnaire was designed for faculty and students; the data were collected from The Islamia University Bahawalpur, and the formal work of the students was also collected to check for the manifestation of SMS practices in their writing. Data were analysed in Excel, and tables and graphs are used to explain the ratios and percentages of SMS usage. The results show that the usage of SMS has a very strong effect upon the students' writing.

Keywords: technology, writing, effects, SMS

Procedia PDF Downloads 363
2490 Stock Prediction and Portfolio Optimization Thesis

Authors: Deniz Peksen

Abstract:

This thesis aims to predict the trend movement of the closing price of stocks and to maximize a portfolio by utilizing the predictions. In this context, the study aims to define a stock portfolio strategy from models created using Logistic Regression, Gradient Boosting and Random Forest. Recently, predicting the trend of stock prices has gained a significant role in making buy and sell decisions and generating returns with investment strategies formed by machine-learning-based decisions. There are plenty of studies in the literature on the prediction of stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in terms of the target definition: ours is a classification problem focusing on the market trend over the next 20 trading days. To predict the trend direction, fourteen years of data were used for training, the following three years for validation, and the last three years for testing. Training data are between 2002-06-18 and 2016-12-30, validation data are between 2017-01-02 and 2019-12-31, and testing data are between 2020-01-02 and 2022-03-17. We define the Hold Stock Portfolio, the Best Stock Portfolio and the USD-TRY exchange rate as benchmarks to outperform. We compared the return of our machine-learning-based portfolio on the test data with the returns of the Hold Stock Portfolio, the Best Stock Portfolio and the USD-TRY exchange rate. We assessed model performance with the help of the ROC-AUC score and lift charts. We used Logistic Regression, Gradient Boosting and Random Forest with a grid search approach to fine-tune hyper-parameters. As a result of the empirical study, the existence of uptrends and downtrends of five stocks could not be predicted by the models. When we used these predictions to define buy and sell decisions for a model-based portfolio, the model-based portfolio failed on the test dataset: the model-based buy and sell decisions generated a stock portfolio strategy whose returns could not outperform the non-model portfolio strategies. We found that any effort at predicting a trend formulated on the stock price is a challenge, obtaining the same result that the Random Walk Theory claims, namely that stock prices and price changes are unpredictable. Our model iterations failed on the test dataset: although we built several good models on the validation dataset, they did not carry over to the test dataset. We implemented Random Forest, Gradient Boosting and Logistic Regression and discovered that the complex models did not provide an advantage or additional performance when compared with Logistic Regression; more complexity did not lead to better performance, and using a complex model is not the answer to the stock prediction problem. Our approach was to predict the trend instead of the price, which converted the problem into classification. However, this labelling approach does not solve the stock prediction problem either, nor does it refute the Random Walk Theory for stock prices.
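
The modelling workflow described above (chronological train/validation/test splits, a 20-day-ahead trend label, and grid-searched tree ensembles) can be outlined as in the sketch below; the price series, features and label construction are placeholders rather than the thesis data, and, as the thesis itself reports, no claim is made that such a model beats the benchmarks.

```python
# Hedged sketch: chronological splits + grid-searched random forest for a
# 20-day-ahead trend label, on a synthetic price-like series.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, PredefinedSplit
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
dates = pd.bdate_range("2002-06-18", "2022-03-17")
price = pd.Series(np.cumsum(rng.standard_normal(len(dates))) + 100.0, index=dates)

horizon = 20
X = pd.DataFrame({"ret_5": price.pct_change(5), "ret_20": price.pct_change(20)})
y = (price.shift(-horizon) > price).astype(int)          # uptrend over the next 20 days
data = pd.concat([X, y.rename("y")], axis=1).dropna().iloc[:-horizon]

train = data.loc[:"2016-12-30"]
valid = data.loc["2017-01-02":"2019-12-31"]
test = data.loc["2020-01-02":]

# Grid search that validates only on the chronologically later validation block.
X_tv = pd.concat([train, valid]).drop(columns="y")
y_tv = pd.concat([train, valid])["y"]
fold = np.r_[np.full(len(train), -1), np.zeros(len(valid), dtype=int)]
gs = GridSearchCV(RandomForestClassifier(random_state=0),
                  {"n_estimators": [200, 500], "max_depth": [3, 6]},
                  scoring="roc_auc", cv=PredefinedSplit(fold))
gs.fit(X_tv, y_tv)
proba = gs.predict_proba(test.drop(columns="y"))[:, 1]
print("test ROC-AUC:", roc_auc_score(test["y"], proba))
```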

Keywords: stock prediction, portfolio optimization, data science, machine learning

Procedia PDF Downloads 65
2489 An Authentic Algorithm for Ciphering and Deciphering Called Latin Djokovic

Authors: Diogen Babuc

Abstract:

The question that motivates this writing is how many devote themselves to discovering something in the world of science, where much is discerned and revealed but, at the same time, much remains unknown. Methods: The insightful elements of this algorithm are the ciphering and deciphering algorithms of Playfair, Caesar, and Vigenère. Only a few of their main properties are taken and modified, with the aim of forming the specific functionality of the algorithm called Latin Djokovic. Specifically, a string is entered as input data. A key k is given, with a random value between the values a and b = a+3. The obtained value is stored in a variable with the aim of remaining constant during the run of the algorithm. In correlation to the given key, the string is divided into several groups of substrings, and each substring has a length of k characters. The next step involves encoding each substring from the list of existing substrings. Encoding is performed on the basis of the Caesar algorithm, i.e., shifting by k characters. However, k is incremented by 1 when moving to the next substring in the list. When the value of k becomes greater than b+1, it returns to its initial value. The algorithm is executed, following the same procedure, until the last substring in the list is traversed. Results: Using this polyalphabetic method, ciphering and deciphering of strings are achieved. The algorithm also works for a 100-character string. The x character isn't used when the number of characters in a substring is incompatible with the expected length. The algorithm is simple to implement, but it is questionable whether it works better than the other methods from the point of view of execution time and storage space.
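
The description above translates almost line-by-line into code; the sketch below follows it for lowercase letters: the string is split into substrings of the initial key length, each substring is Caesar-shifted by a key that increments per substring and wraps back to its starting value once it exceeds b+1. Alphabet handling and the treatment of a final short substring are assumptions made for this illustration, since the abstract only hints at them.

```python
# Hedged sketch of the "Latin Djokovic" scheme as described in the abstract.
import random
import string

ALPHABET = string.ascii_lowercase

def shift(text, amount):
    out = []
    for ch in text:
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + amount) % 26])
        else:
            out.append(ch)                      # leave non-letters untouched
    return "".join(out)

def latin_djokovic(text, k0, b, decrypt=False):
    chunks = [text[i:i + k0] for i in range(0, len(text), k0)]
    key, out = k0, []
    for chunk in chunks:
        out.append(shift(chunk, -key if decrypt else key))
        key += 1                                # key grows for the next substring ...
        if key > b + 1:
            key = k0                            # ... and wraps back to its start value
    return "".join(out)

a = 3
b = a + 3
k = random.randint(a, b)                        # random key between a and b = a + 3
cipher = latin_djokovic("attackatdawn", k, b)
plain = latin_djokovic(cipher, k, b, decrypt=True)
print(k, cipher, plain)
```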

Keywords: ciphering, deciphering, authentic, algorithm, polyalphabetic cipher, random key, methods comparison

Procedia PDF Downloads 89
2488 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection

Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad

Abstract:

The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution involves the use of electrical segmentation, which consists of creating coherency zones where electrical disturbances mainly remain within the zone. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on the sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical losses, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design zones that are robust under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, a resilient segmentation can be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that utilizes a unified representation to compute a flattening of all layers. This unified situation can be penalized to obtain (K) connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-zone electrical perturbation and low variance of electrical perturbation. Through experiments, we show when robust electrical segmentation is beneficial and in which contexts.
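
A simple way to obtain one segmentation that holds across operating situations is to flatten the layers of the multiplex graph into a single weighted graph and run a community-detection algorithm on it; the sketch below does this with networkx on toy layers and is only one possible reading of the flattening-plus-penalisation idea described above, not the authors' model.

```python
# Hedged sketch: flatten the layers of a multiplex graph (same vertex set, different
# edges per operating situation) and detect communities on the flattened graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy layers: each layer is one grid situation over the same buses 0..7.
layers = [
    [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7), (3, 4)],
    [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7), (0, 2)],
    [(0, 1), (1, 3), (2, 3), (4, 6), (5, 6), (6, 7), (3, 4)],
]

flat = nx.Graph()
flat.add_nodes_from(range(8))
for layer in layers:
    for u, v in layer:
        w = flat[u][v]["weight"] + 1 if flat.has_edge(u, v) else 1
        flat.add_edge(u, v, weight=w)           # edge weight = number of layers containing it

communities = greedy_modularity_communities(flat, weight="weight")
print([sorted(c) for c in communities])         # candidate coherent electrical zones
```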

Keywords: community detection, electrical segmentation, multiplex graph, power grid

Procedia PDF Downloads 59
2487 Parameter Estimation for Contact Tracing in Graph-Based Models

Authors: Augustine Okolie, Johannes Müller, Mirjam Kretzchmar

Abstract:

We adopt a maximum-likelihood framework to estimate the parameters of a stochastic susceptible-infected-recovered (SIR) model with contact tracing on a rooted random tree. Given the number of detectees per index case, our estimator allows us to determine the degree distribution of the random tree as well as the tracing probability. Since we do not discover all infectees via contact tracing, this estimation is non-trivial. To keep things simple and stable, we develop an approximation suited to realistic situations (the contact tracing probability is small, or the probability for the detection of index cases is small). In this approximation, the only epidemiological parameter entering the estimator is the basic reproduction number R0. The estimator is tested in a simulation study and applied to COVID-19 contact tracing data from India. The simulation study underlines the efficiency of the method. For the empirical COVID-19 data, we are able to compare different degree distributions and perform a sensitivity analysis. We find that, in particular, a power-law and a negative binomial degree distribution fit the data well and that the tracing probability is rather large. The sensitivity analysis shows no strong dependency on the reproduction number.

Keywords: stochastic SIR model on graph, contact tracing, branching process, parameter inference

Procedia PDF Downloads 66
2486 The Determinants of Corporate Social Responsibility Disclosure Extent and Quality: The Case of Jordan

Authors: Hani Alkayed, Belal Omar, Eileen Roddy

Abstract:

This study focuses on investigating the determinants of Corporate Social Responsibility Disclosure (CSRD) extent and quality in Jordan. The study examines factors that influence CSR disclosure extent and quality, such as corporate characteristics (size, gearing, firm age, and industry type), corporate governance (board size, number of meetings, non-executive directors, female directors on the board, family directors on the board, foreign members, audit committee, type of external auditors, and CEO duality) and ownership structure (government ownership, institutional ownership, and ownership concentration). Legitimacy theory is utilised as the main theory for our theoretical framework. A quantitative approach is adopted for this research, and the content analysis technique is used to gather CSR disclosure extent and quality from the annual reports. The sample is drawn from the annual reports of 118 Jordanian companies over the period 2010-2015. A CSRD index is constructed and includes the disclosures of the following categories: environmental, human resources, product and consumers, and community involvement. A 7-point scale was developed to examine the quality of disclosure, where 0 = no disclosures; 1 = general disclosures (non-monetary); 2 = general disclosures (non-monetary) with pictures, charts, and graphs; 3 = descriptive/qualitative disclosures, specific details (non-monetary); 4 = descriptive/qualitative disclosures, specific details with pictures, charts, and graphs; 5 = numeric disclosures, full descriptions with supporting numbers; 6 = numeric disclosures, full descriptions with supporting numbers, pictures, and charts. This study fills a gap in the literature regarding CSRD in Jordan, as previous studies have ignored a clear categorisation as a measurement of quality. The results show that the extent of CSRD is higher than its quality in Jordan. Regarding the determinants of CSR disclosures, the following were found to have a significant relationship with both the extent and the quality of CSRD (except non-executive directors, for which a significant relationship was found only with the extent of CSRD): board size, non-executive directors, firm age, foreign members on the board, number of board meetings, the presence of audit committees, Big 4 auditors, government ownership, firm size, and industry type.

Keywords: content analysis, corporate governance, corporate social responsibility disclosure, Jordan, quality of disclosure

Procedia PDF Downloads 213
2485 Machine Learning Techniques for Estimating Ground Motion Parameters

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially in the subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forest, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. The accuracy of the models in predicting intensity measures, the generalization capability of the models for future data, as well as the usability of the models are discussed in the evaluation process. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and Random Forest in particular outperforms the other algorithms. However, the conventional method is a better tool when limited data are available.

Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine

Procedia PDF Downloads 112
2484 Generator Subgraphs of the Wheel

Authors: Neil M. Mame

Abstract:

We consider only finite graphs without loops or multiple edges. Let G be a graph with E(G) = {e1, e2, …, em}. The edge space of G, denoted by ε(G), is a vector space over the field Z2. The elements of ε(G) are all the subsets of E(G). Vector addition is defined as X + Y = X Δ Y, the symmetric difference of the sets X and Y, for X, Y ∈ ε(G). Scalar multiplication is defined as 1.X = X and 0.X = Ø for X ∈ ε(G). The set S ⊆ ε(G) is called a generating set if every element of ε(G) is a linear combination of the elements of S. For a non-empty set X ∈ ε(G), the smallest subgraph with edge set X is called the edge-induced subgraph of G, denoted by G[X]. The set E_H(G) = { A ∈ ε(G) : G[A] ≅ H } denotes the uniform set of H with respect to G, and ε_H(G) denotes the subspace of ε(G) generated by E_H(G). If ε_H(G) is a generating set, then we call H a generator subgraph of G. This paper gives a characterization of the generator subgraphs of the wheel that contain cycles and gives necessary conditions for the acyclic generator subgraphs of the wheel.
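
For a concrete wheel the definition can be checked directly: collect every edge set A with G[A] isomorphic to H, view each as a 0/1 vector over Z2, and test whether those vectors span the edge space, i.e. have rank |E(G)| over GF(2). The brute-force sketch below does this for a small wheel; it is only an illustration of the definition, not the paper's characterization.

```python
# Hedged sketch: test whether H is a generator subgraph of a wheel by checking that the
# characteristic vectors of { A : G[A] isomorphic to H } span the edge space over GF(2).
from itertools import combinations
import networkx as nx

def gf2_rank(vectors):
    """Rank over GF(2) of 0/1 vectors, via elimination on their integer encodings."""
    basis = {}                                   # leading-bit position -> basis vector
    for v in vectors:
        x = int("".join(map(str, v)), 2)
        while x:
            hb = x.bit_length() - 1
            if hb not in basis:
                basis[hb] = x
                break
            x ^= basis[hb]
    return len(basis)

def is_generator_subgraph(G, H):
    edges = list(G.edges())
    m = len(edges)
    vectors = []
    for A in combinations(range(m), H.number_of_edges()):
        subG = G.edge_subgraph(edges[i] for i in A)      # edge-induced subgraph G[A]
        if nx.is_isomorphic(subG, H):
            vectors.append([1 if i in A else 0 for i in range(m)])
    return gf2_rank(vectors) == m                # E_H(G) generates the edge space iff rank = |E(G)|

W = nx.wheel_graph(6)                            # hub plus a 5-cycle, 10 edges
H = nx.cycle_graph(3)                            # triangle
# Triangles only reach the cycle space of the wheel, so expect False here.
print(is_generator_subgraph(W, H))
```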

Keywords: edge space, edge-induced subgraph, generator subgraph, wheel

Procedia PDF Downloads 455
2483 Application of Directed Acyclic Graphs for Threat Identification Based on Ontologies

Authors: Arun Prabhakar

Abstract:

Threat modeling is an important activity carried out in the initial stages of the development lifecycle that helps in building proactive security measures in the product. Though there are many techniques and tools available today, one of the common challenges with the traditional methods is the lack of a systematic approach in identifying security threats. The proposed solution describes an organized model by defining ontologies that help in building patterns to enumerate threats. The concepts of graph theory are applied to build the pattern for discovering threats for any given scenario. This graph-based solution also brings in other benefits, making it a customizable and scalable model.
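
In practice such an ontology can be encoded as a directed acyclic graph whose vertices are concepts (assets, entry points, threat categories) and whose edges express "can lead to" relations; enumerating the threats relevant to a given scenario then becomes a graph traversal. The sketch below is a toy illustration of that idea with invented node names, not the paper's ontology or patterns.

```python
# Hedged sketch: a toy threat ontology as a DAG; the threats for a scenario are the
# threat-category nodes reachable from its entry points.
import networkx as nx

ontology = nx.DiGraph()
ontology.add_edges_from([
    ("public_api", "input_validation_flaw"),
    ("input_validation_flaw", "sql_injection"),
    ("input_validation_flaw", "cross_site_scripting"),
    ("user_session", "weak_token"),
    ("weak_token", "session_hijacking"),
])
assert nx.is_directed_acyclic_graph(ontology)

threat_categories = {"sql_injection", "cross_site_scripting", "session_hijacking"}

def threats_for(scenario_entry_points):
    reachable = set()
    for node in scenario_entry_points:
        reachable |= nx.descendants(ontology, node)
    return sorted(reachable & threat_categories)

print(threats_for(["public_api"]))   # -> ['cross_site_scripting', 'sql_injection']
```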

Keywords: directed acyclic graph, ontology, patterns, threat identification, threat modeling

Procedia PDF Downloads 126
2482 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2

Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk

Abstract:

Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluating the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands' characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models may be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots for three sites across Germany. These sites represent a North-South gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. The predictors used are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² ≈ 0.5), predictions of biodiversity indices were poor (r² ≈ 0.14). DNN showed a higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and ensuring spatial independence, so as to obtain realistic and transferable models in which plot-level information can be upscaled to the landscape scale.

Keywords: ecosystem services, grassland management, machine learning, remote sensing

Procedia PDF Downloads 202
2481 Predicting the Diagnosis of Alzheimer’s Disease: Development and Validation of Machine Learning Models

Authors: Jay L. Fu

Abstract:

Patients with Alzheimer's disease progressively lose their memory and thinking skills and, eventually, the ability to carry out simple daily tasks. The disease is irreversible, but early detection and treatment can slow down its progression. In this research, publicly available MRI data and demographic data from 373 MRI imaging sessions were utilized to build models to predict dementia. Various machine learning models, including logistic regression, k-nearest neighbors, support vector machine, random forest, and neural network, were developed. The data were divided into training and testing sets, where the training sets were used to build the predictive models and the testing sets were used to assess the accuracy of prediction. Key risk factors were identified, and the various models were compared to arrive at the best prediction model. Among these models, the random forest model appeared to be the best, with an accuracy of 90.34%. MMSE, nWBV, and gender were the three most important factors contributing to the detection of Alzheimer's. The percentage of testing inputs for which at least 4 of the 5 models shared the same diagnosis was 90.42%. These machine learning models allow early detection of Alzheimer's with good accuracy, which ultimately leads to early treatment of these patients.

Keywords: Alzheimer's disease, clinical diagnosis, magnetic resonance imaging, machine learning prediction

Procedia PDF Downloads 126
2480 The Effects of Three Levels of Contextual Interference among Adult Athletes

Authors: Abdulaziz Almustafa

Abstract:

Considering the critical role permanence has on predictions related to the contextual interference effect in laboratory and field research, this study sought to determine whether the paradigm of the effect depends on the complexity of the skill during the acquisition and transfer phases. The purpose of the present study was to investigate the effects of contextual interference (CI) by extending previous laboratory and field research with adult athletes through the acquisition and transfer phases. Male athletes (n=60), aged 18-22 years, were chosen randomly from Eastern Province clubs. They were assigned to complete blocked, random, or serial practice. Analysis of variance with repeated measures (MANOVA) indicated that the results did not support the notion of CI: there were no significant differences in the acquisition phase between the blocked, serial and random practice groups. During the transfer phase, there were no major differences between the practice groups. Apparently, due to the task complexity, participants were probably confused and not able to use the advantages of contextual interference. This is another result contradicting contextual interference effects in the acquisition and transfer phases in sport settings. One major factor that can influence the effect of contextual interference is task characteristics, namely the level of difficulty of the sport-related skill.

Keywords: contextual interference, acquisition, transfer, task difficulty

Procedia PDF Downloads 450
2479 Enhanced Test Scheme based on Programmable Write Time for Future Computer Memories

Authors: Nor Zaidi Haron, Fauziyah Salehuddin, Norsuhaidah Arshad, Sani Irwan Salim

Abstract:

Resistive random access memories (RRAMs) are one of the main candidates for future computer memories. However, due to their tiny size and immature device technology, the quality of outgoing RRAM chips is seen as a serious issue. Defective RRAM cells might behave differently from existing semiconductor memories (dynamic RAM, static RAM, and Flash), meaning that they are difficult to detect using existing test schemes. This paper presents an enhanced test scheme, referred to as Programmable Short Write Time (PSWT), that is able to improve the detection of faulty RRAM cells. It is developed by applying multiple weak write operations, each with a different time duration. The test circuit embedded in the RRAM chip is made programmable in order to supply different weak write times during testing. The RRAM electrical model is described in the Verilog-AMS language and is simulated using HSPICE simulation tools. Simulation results show that the proposed test scheme offers better open-resistive fault detection compared to existing test schemes.

Keywords: memory fault, memory test, design-for-testability, resistive random access memory

Procedia PDF Downloads 369
2478 The Effect of Radiation on Unsteady MHD Flow past a Vertical Porous Plate in the Presence of Heat Flux

Authors: Pooja Sharma

Abstract:

In the present paper, the effects of radiation are studied on the unsteady flow of a viscous, incompressible, electrically conducting fluid past a vertical porous plate embedded in a porous medium in the presence of constant heat flux. A uniform transverse magnetic field is considered, and the induced magnetic field is assumed to be negligible. The non-linear governing equations are solved numerically. Numerical results for the velocity and temperature fields are shown through graphs. The results illustrate that an appropriate combination of the regulated values of the thermo-physical parameters is effective for controlling the flow system.

Keywords: heat transfer, radiation, MHD flow, porous medium

Procedia PDF Downloads 424