Search results for: volatility clustering
420 Dissecting ESG: The Impact of Environmental, Social, and Governance Factors on Stock Price Risk in European Markets
Authors: Sylwia Frydrych, Jörg Prokop, Michał Buszko
Abstract:
This study investigates the complex relationship between corporate ESG (Environmental, Social, Governance) performance and stock price risk within the European market context. By analyzing a dataset of 435 companies across 19 European countries, the research assesses the impact of both combined ESG performance and its individual components on various risk measures, including volatility, idiosyncratic risk, systematic risk, and downside risk. The findings reveal that while overall ESG scores do not significantly influence stock price risk, disaggregating the ESG components uncovers significant relationships. Governance practices are shown to consistently reduce market risk, positioning them as critical in risk management. However, environmental engagement tends to increase risk, particularly in times of regulatory shifts like those introduced in the EU post-2018. This research provides valuable insights for investors and corporate managers on the nuanced roles of ESG factors in financial risk, emphasizing the need for careful consideration of each ESG pillar in decision-making processes.
Keywords: ESG performance, ESG factors, ESG pillars, ESG scores
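The pillar-level analysis described above can be sketched as a regression of a risk measure on the three pillar scores. This is an illustrative reconstruction on synthetic data, not the authors' model or dataset; the coefficient signs are chosen to mirror the reported findings (governance lowers risk, environmental engagement raises it):

```python
import numpy as np

def pillar_risk_regression(E, S, G, risk):
    """OLS of a stock-risk measure on the three ESG pillar scores.
    Returns (intercept, beta_E, beta_S, beta_G); the sign of each beta
    shows whether that pillar raises or lowers risk."""
    X = np.column_stack([np.ones(len(risk)), E, S, G])
    beta, *_ = np.linalg.lstsq(X, np.asarray(risk, dtype=float), rcond=None)
    return beta

# Synthetic firms: environmental score raises volatility, governance lowers it.
rng = np.random.default_rng(0)
E, S, G = (rng.uniform(0, 100, 200) for _ in range(3))
risk = 30 + 0.05 * E - 0.08 * G + rng.normal(0, 1, 200)
beta = pillar_risk_regression(E, S, G, risk)
```

A disaggregated fit like this is what reveals the opposing pillar effects that a single combined ESG score averages away.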
Procedia PDF Downloads 264
419 Orphan Node Inclusion Protocol for Wireless Sensor Network
Authors: Sandeep Singh Waraich
Abstract:
Wireless sensor networks (WSNs) consist of a large number of sensor nodes. The disparity in their energy consumption usually leads to a loss of equilibrium in the network, which may further result in an energy hole problem. In this paper, we consider the inclusion of orphan nodes, which usually remain unutilized, as intermediate nodes in multi-hop routing. The Orphan Node Inclusion (ONI) protocol lets cluster members bring orphan nodes into their clusters, thereby saving important resources and increasing network lifetime in critical WSN applications.
Keywords: wireless sensor network, orphan node, clustering, ONI protocol
Procedia PDF Downloads 421
418 Factors Affecting Cesarean Section among Women in Qatar Using Multiple Indicator Cluster Survey Database
Authors: Sahar Elsaleh, Ghada Farhat, Shaikha Al-Derham, Fasih Alam
Abstract:
Background: Cesarean section (CS) delivery is one of the major concerns in both developing and developed countries. CS rates are on the rise globally, and especially in Qatar. Many socio-economic, demographic, clinical and institutional factors play an important role in cesarean sections. This study aims to investigate factors affecting the prevalence of CS among women in Qatar using UNICEF's Multiple Indicator Cluster Survey (MICS) 2012 database. Methods: The study focused on the women's questionnaire of the MICS, which was successfully distributed to 5699 participants. Following the study's inclusion and exclusion criteria, a final sample of 761 women aged 19-49 years who had given birth at least once before the survey was included. A number of socio-economic, demographic, clinical and institutional factors, identified through a literature review and available in the data, were considered for the analyses. Bivariate and multivariate logistic regression models, along with multi-level modelling to investigate clustering effects, were undertaken to identify the factors that affect CS prevalence in Qatar. Results: The bivariate analyses showed that a number of categorical factors are statistically significantly associated with the dependent variable (CS). In the multivariate logistic regression, only three categorical factors, 'age of women', 'place of delivery' and 'baby weight', appeared to significantly affect CS among women in Qatar. Although the MICS dataset is based on a cluster survey, an exploratory multi-level analysis did not show any clustering effect, i.e. no significant variation in results at the higher level (households), suggesting that all analyses at the lower level (individual respondents) are valid without any significant bias.
Conclusion: The study found a statistically significant association between the dependent variable (CS delivery) and age of women, frequency of TV watching, assistance at birth and place of birth. These results need to be interpreted cautiously; however, they can be used as an evidence base for further research on cesarean section delivery in Qatar.
Keywords: cesarean section, factors, multiple indicator cluster survey, MICS database, Qatar
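The bivariate step described above can be illustrated with a 2x2 association test on one binary factor. The counts below are entirely hypothetical, not taken from the MICS data, and the factor name is an assumption for illustration only:

```python
def odds_ratio_and_chi2(table):
    """Bivariate association between a binary factor and CS delivery from
    a 2x2 table [[a, b], [c, d]] (rows: factor yes/no, columns: CS yes/no).
    Returns (odds ratio, chi-square statistic); with 1 degree of freedom,
    chi2 > 3.84 is significant at the 5% level."""
    (a, b), (c, d) = table
    n = a + b + c + d
    odds = (a * d) / (b * c)
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return odds, chi2

# Hypothetical counts for a made-up factor (e.g. private-facility delivery).
odds, chi2 = odds_ratio_and_chi2([[120, 180], [40, 421]])
```

A multivariate logistic regression would then adjust each such factor for the others; the bivariate table is only the screening step.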
Procedia PDF Downloads 118
417 Environmental Pb-Free Cu Front Electrode for Si-Base Solar Cell Application
Authors: Wen-Hsi Lee, C.G. Kao
Abstract:
In this study, a Cu paste was prepared and printed with a narrow-line screen printing process on a polycrystalline Si solar cell on which the back Al printing and the deposition of double anti-reflection coatings (DARCs) had already been completed. A two-step firing process was then applied to sinter the front electrode and obtain ohmic contact between the front electrode and the solar cell. The first step was carried out in an air atmosphere: the PbO-based glass frit etched the DARCs and Ag recrystallized at the surface of the Si, forming the preliminary contact. The second step was carried out in a reducing atmosphere: CuO was reduced to Cu and sintered. In addition, Ag nanoparticles recrystallized in the glass layer at the interface, due to the interactions between H2, Ag and the PbO-based glass frit and the volatility of Pb, establishing ohmic contact between the electrode and the solar cell. From the experiments and analysis, the reaction mechanism at each stage was deduced, and it was also shown that ohmic contact and good sheet resistance for the front electrode can both be obtained with the newly invented paste and process.
Keywords: front electrode, solar cell, ohmic contact, screen printing, paste
Procedia PDF Downloads 333
416 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods
Authors: Sohyoung Won, Heebal Kim, Dajeong Lim
Abstract:
Genomic prediction is an effective way to measure the breeding ability of livestock based on genomic estimated breeding values, which are statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability of a quantitative trait locus being in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes in genomic prediction efficiently, optimal ways of defining haplotypes need to be found. In this study, 770K SNP chip data were collected from a Hanwoo (Korean cattle) population consisting of 2506 cattle. Haplotypes were first defined in three different ways using the 770K SNP chip data: based on 1) the length of the haplotype (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, haplotypes defined by all methods were set to have comparable sizes; in each method, haplotypes defined to contain an average of 5, 10, 20 or 50 SNPs were tested respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the phenotype for testing. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP, with few differences in reliability between the haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes of around 20 SNPs can be optimal markers for genomic prediction.
When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performance. This suggests that defining haplotypes based on LD can reduce computational costs and allow efficient prediction. Finding optimal ways to define haplotypes and using haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium
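Of the three haplotype definitions, the fixed-number-of-SNPs one is the simplest to sketch. The toy genotypes below are hypothetical phased SNP strings; this only illustrates recoding SNPs into haplotype-allele predictors, not the authors' pipeline or the GBLUP fit itself:

```python
def snp_windows(snp_ids, snps_per_hap=20):
    """Partition an ordered list of SNP identifiers into consecutive
    haplotype blocks of a fixed number of SNPs (method 2 above);
    the last block keeps whatever SNPs remain."""
    return [snp_ids[i:i + snps_per_hap]
            for i in range(0, len(snp_ids), snps_per_hap)]

def haplotype_alleles(genotypes, snps_per_hap=20):
    """Recode each individual's phased SNP string into haplotype alleles:
    one tuple of SNP states per block, usable as a predictor variable in
    a haplotype-based GBLUP design matrix."""
    blocks = snp_windows(list(range(len(genotypes[0]))), snps_per_hap)
    return [[tuple(g[i] for i in block) for block in blocks]
            for g in genotypes]

# Tiny example: 4 SNPs, blocks of 2, three hypothetical haplotypes.
haps = haplotype_alleles(["0110", "0111", "0110"], snps_per_hap=2)
```

Two individuals share a haplotype allele exactly when their SNP states agree across the whole block, which is why fewer, longer blocks shrink the predictor count.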
Procedia PDF Downloads 141
415 Hydrochemical Contamination Profiling and Spatial-Temporal Mapping with the Support of Multivariate and Cluster Statistical Analysis
Authors: Sofia Barbosa, Mariana Pinto, José António Almeida, Edgar Carvalho, Catarina Diamantino
Abstract:
The aim of this work was to test a methodology able to generate spatial-temporal maps that synthesize simultaneously the trends of distinct hydrochemical indicators in an old radium-uranium tailings dam deposit. Dimensionality reduction by principal component analysis and subsequent data aggregation by clustering analysis make it possible to identify distinct hydrochemical behavioural profiles and to generate synthetic evolutionary hydrochemical maps.
Keywords: contamination plume migration, K-means of PCA scores, groundwater and mine water monitoring, spatial-temporal hydrochemical trends
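The two-stage workflow named in the keywords, k-means on PCA scores, can be sketched as follows, with synthetic data standing in for the hydrochemical indicators (not the study's monitoring data, and a deliberately plain k-means):

```python
import numpy as np

def pca_scores(X, n_comp=2):
    """Project samples onto their leading principal components
    (the multidimensionality-reduction step)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_comp].T

def kmeans(scores, k, iters=50, seed=0):
    """Plain k-means on the PCA scores; each resulting cluster is one
    hydrochemical behavioural profile."""
    rng = np.random.default_rng(seed)
    cent = scores[rng.choice(len(scores), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(scores[:, None, :] - cent[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        cent = np.array([scores[labels == j].mean(axis=0)
                         if np.any(labels == j) else cent[j]
                         for j in range(k)])
    return labels

# Synthetic stand-in: two distinct chemistry profiles over 5 indicators.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(6, 1, (20, 5))])
labels = kmeans(pca_scores(X), k=2)
```

Plotting the cluster label of each sampling point per campaign is one way such evolutionary maps can be assembled.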
Procedia PDF Downloads 236
414 Clustering-Based Detection of Alzheimer's Disease Using Brain MR Images
Authors: Sofia Matoug, Amr Abdel-Dayem
Abstract:
This paper presents a comprehensive survey of recent research studies that segment and classify brain MR (magnetic resonance) images in order to detect significant changes in the brain ventricles. The paper also presents a general framework for detecting regions of atrophy, which can help neurologists in detecting and staging Alzheimer's disease. Furthermore, a prototype was implemented that segments brain MR images to extract the region of interest (ROI); a classifier was then employed to differentiate between normal and abnormal brain tissue. Experimental results show that the proposed scheme can provide a reliable second opinion from which neurologists can benefit.
Keywords: Alzheimer's disease, brain images, classification techniques, magnetic resonance images (MRI)
Procedia PDF Downloads 303
413 Clustering Ethno-Informatics of Naming Village in Java Island Using Data Mining
Authors: Atje Setiawan Abdullah, Budi Nurani Ruchjana, I. Gede Nyoman Mindra Jaya, Eddy Hermawan
Abstract:
Ethnoscience examines culture from a scientific perspective, helping to understand how people develop various forms of knowledge and belief, initially focusing on ecology and history and the contributions found there. One of the areas studied in ethnoscience is ethno-informatics, the application of informatics to culture. In this study, the informatics technique used is data mining, a process that automatically extracts knowledge from large databases in order to obtain interesting patterns and, from them, knowledge. The cultural application is a database of village names on the island of Java, obtained from the Geospatial Information Agency of Indonesia (BIG) in 2014. The purposes of this study are: first, to classify the village names on the island of Java based on the word structure of the name, including the prefix of the word, the syllables contained, and the complete word; second, to classify the meanings of the village names into specific categories, as well as their role in the behavioural characteristics of the community; third, to visualize the village names on a location map, in order to see the similarity of village names across provinces. In this research we developed two theorems: an area theorem, derived from the collected intersections of village names in each province on the island of Java, and a wedge-composition theorem on the sets of provinces in Java, used to view the peculiarities of a study location. The methodology of this study is based on the Knowledge Discovery in Databases (KDD) process in data mining, which includes preprocessing, data mining and post-processing. The results show that Javanese communities prioritize merit in their lives, always work hard to achieve a more prosperous life, and value water and environmental sustainability.
Village names in adjacent provinces have a high degree of similarity and influence each other. The cultures of Central Java, East Java and West Java-Banten show high similarity, whereas Jakarta-Yogyakarta shows low similarity. This research characterizes communities through the meanings of the village names on the island of Java; this characterization is expected to serve as a guide to the daily behaviour of the people of Java.
Keywords: ethnoscience, ethno-informatics, data mining, clustering, Java island culture
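The first classification criterion, word structure via prefixes, can be sketched in a few lines. The village names and the four-character prefix length are illustrative assumptions, not the study's actual parameters:

```python
from collections import defaultdict

def cluster_by_prefix(village_names, prefix_len=4):
    """Group village names by their leading prefix, the first
    word-structure feature the study clusters on. Prefixes shared
    across provinces hint at cultural similarity."""
    groups = defaultdict(list)
    for name in village_names:
        groups[name[:prefix_len].lower()].append(name)
    return dict(groups)

# Hypothetical Javanese toponyms; "Karang-" and "Sido-" are common prefixes.
groups = cluster_by_prefix(["Karanganyar", "Karangrejo", "Sidomulyo", "Sidoarjo"])
```

Comparing the prefix groups present in two provinces (their intersection) is the kind of set operation the area theorem above builds on.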
Procedia PDF Downloads 283
412 Analysis of Ozone Episodes in the Forest and Vegetation Areas with Using HYSPLIT Model: A Case Study of the North-West Side of Biga Peninsula, Turkey
Authors: Deniz Sari, Selahattin İncecik, Nesimi Ozkurt
Abstract:
Surface ozone, regarded as one of the most critical pollutants of the 21st century, threatens human health, forests and vegetation. In rural areas specifically, surface ozone significantly affects agricultural production and trees. In this study, in order to understand surface ozone levels in rural areas, we focus on the north-western side of the Biga Peninsula, which is covered by mountainous and forested terrain. Ozone concentrations were measured for the first time in this rural area, with passive sampling at 10 sites and two online monitoring stations, from 2013 to 2015. Using the hourly O3 measurements during daylight hours (08:00-20:00) exceeding the threshold of 40 ppb, the cumulative index AOT40 (Accumulated hourly O3 concentrations Over a Threshold of 40 ppb) was calculated over three months (May, June and July) for agricultural crops, and over six months (April to September) for forest trees. AOT40 is defined by EU Directive 2008/50/EC to evaluate whether ozone pollution is a risk to vegetation, and is calculated from hourly ozone concentrations recorded by monitoring systems. In the present study, we performed trajectory analysis with the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model to follow the long-range transport sources contributing to the high ozone levels in the region. The ozone episodes observed between 2013 and 2015 were analysed using the HYSPLIT model developed by NOAA-ARL. In addition, cluster analysis was used to identify homogeneous groups of air mass transport patterns, by grouping similar trajectories in terms of air mass movement. Backward trajectories produced for three years by the HYSPLIT model were assigned to different clusters according to their moving speed and direction using a k-means clustering algorithm. According to the cluster analysis results, northerly flows towards the study area cause high ozone levels in the region.
The results show that the ozone values in the study area are above the critical levels for forests and vegetation defined in EU Directive 2008/50/EC.
Keywords: AOT40, Biga Peninsula, HYSPLIT, surface ozone
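The AOT40 index described above has a direct numerical form: the sum of the hourly excesses over 40 ppb during daylight hours. A minimal sketch, assuming readings are paired with their local hour and using the 08:00-20:00 window from the abstract (the exact treatment of the window boundary is an assumption):

```python
def aot40(hourly_ppb, hours):
    """AOT40 (ppb*h): sum of the excess over 40 ppb, counting only
    daylight hours 08:00-20:00, following the definition referenced
    from EU Directive 2008/50/EC. `hourly_ppb` and `hours` are parallel
    lists: O3 readings and the local hour of each reading."""
    return sum(o3 - 40.0
               for o3, h in zip(hourly_ppb, hours)
               if 8 <= h < 20 and o3 > 40.0)

# 55 ppb at 09:00 and 70 ppb at 12:00 count; 38 ppb is under the
# threshold and 62 ppb at 21:00 is outside the daylight window.
total = aot40([55, 38, 62, 70], [9, 10, 21, 12])
```

Summing such daily contributions over May-July (crops) or April-September (forests) gives the seasonal values the study compares against the critical levels.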
Procedia PDF Downloads 255
411 Foreign Debt and Firm Performance: Evidence from French Non-Financial Firms
Authors: Salma Mefteh-Wali, Marie-Josephe Rigobert
Abstract:
We investigate the impact of foreign currency debt on firm performance for a sample of non-financial French firms over the period 2002 to 2012. As foreign currency debt is both a financing instrument and a hedging instrument against foreign exchange risk, we draw on optimal hedging theory and capital structure theory. When we study the impact on firm value, our main results show that before and after the financial crisis of 2008, foreign debt behaved in the same way as domestic debt. We find that during the crisis period, foreign debt positively affected firm value. Investors perceive foreign debt as a natural hedging instrument that is likely to reduce the costs of underinvestment, alleviate cash flow volatility, limit the costs of financial distress, and generate tax shield benefits. Our results also show that foreign leverage negatively affected firm performance, proxied by ROA and ROE, during and after the financial crisis, whereas this impact was positive in the pre-crisis period.
Keywords: foreign currency derivatives, foreign currency debt, foreign currency hedging, firm performance
Procedia PDF Downloads 314
410 K-Means Based Matching Algorithm for Multi-Resolution Feature Descriptors
Authors: Shao-Tzu Huang, Chen-Chien Hsu, Wei-Yen Wang
Abstract:
Matching high-dimensional features between images is computationally expensive for exhaustive search approaches in computer vision. Although the dimension of the feature can be reduced by exploiting prior knowledge of the homography, matching accuracy may degrade as a trade-off. In this paper, we present a feature matching method based on the k-means algorithm that reduces the matching cost and matches the features between images without relying on a simplified geometric assumption. Experimental results show that the proposed method outperforms previous linear exhaustive search approaches in terms of the inlier ratio of matched pairs.
Keywords: feature matching, k-means clustering, SIFT, RANSAC
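The core idea, clustering reference descriptors so a query is matched only against its nearest cluster, can be sketched as below. The 2-D descriptors and the tiny k-means are illustrative assumptions; real SIFT descriptors are 128-D and the paper's method differs in its details:

```python
import numpy as np

def build_index(ref_desc, k=2, iters=20, seed=0):
    """Cluster reference descriptors with k-means so that matching a
    query only searches the nearest cluster, not every descriptor."""
    rng = np.random.default_rng(seed)
    cent = ref_desc[rng.choice(len(ref_desc), size=k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(ref_desc[:, None] - cent[None], axis=2).argmin(1)
        cent = np.array([ref_desc[labels == j].mean(0) if np.any(labels == j)
                         else cent[j] for j in range(k)])
    return cent, labels

def match(query, ref_desc, cent, labels):
    """Return the index of the best-matching reference descriptor,
    searched only within the query's nearest cluster."""
    c = np.linalg.norm(cent - query, axis=1).argmin()
    members = np.flatnonzero(labels == c)
    return int(members[np.linalg.norm(ref_desc[members] - query, axis=1).argmin()])

# Four toy descriptors forming two well-separated groups.
ref = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
cent, labels = build_index(ref)
idx = match(np.array([5.05, 5.0]), ref, cent, labels)
```

The cost drops from one distance computation per reference descriptor to one per centroid plus one per member of the chosen cluster, which is the saving over linear exhaustive search.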
Procedia PDF Downloads 359
409 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that does not correspond to an allele of a potential contributor and is considered an artefact, presumed to arise from miscopying or slippage during PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, calculated relative to the height of the corresponding parent allele. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and significantly enhance the use of continuous peak height information, resulting in more efficient and reliable interpretations. A sound methodology for distinguishing between stutters and real alleles is therefore essential for the accuracy of the interpretation, and any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in the clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is represented by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple.
Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of choosing the number of components. The Chinese restaurant process (CRP), the stick-breaking process and the Pólya urn scheme are frequently used as Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter
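The Chinese restaurant process mentioned above can be simulated directly: each customer joins an existing table with probability proportional to its occupancy, or opens a new table with probability proportional to the concentration parameter alpha. A minimal sketch of the standard CRP (not the authors' modified version):

```python
import random

def crp_assignments(n, alpha, seed=0):
    """Chinese restaurant process. Customer i sits at table t with
    probability count[t] / (i + alpha), or opens a new table with
    probability alpha / (i + alpha); the number of tables (mixture
    components) is therefore not fixed in advance."""
    rng = random.Random(seed)
    tables = []   # occupancy count per table
    seats = []    # table index chosen by each customer
    for i in range(n):
        weights = tables + [alpha]      # existing tables, then "new table"
        r = rng.uniform(0, i + alpha)   # total weight is i + alpha
        acc = 0.0
        for t, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if t == len(tables):
            tables.append(1)            # open a new table
        else:
            tables[t] += 1
        seats.append(t)
    return seats, tables

seats, tables = crp_assignments(100, alpha=1.0)
```

In the mixture model, each table carries its own component parameters (here, one simple linear regression for the stutter ratio), so the data decide how many components are needed.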
Procedia PDF Downloads 331
408 The Experimental Measurement of the LiBr Concentration of a Solar Absorption Machine
Authors: N. Hatraf, L. Merabti, Z. Neffah, W. Taane
Abstract:
The excessive consumption of fossil-fuel (electrical) energy during summer, driven by technological development, contributes more and more to climate warming. In order to reduce the worst impacts of the gas emissions produced by conventional air conditioning, heat-driven solar absorption chillers are quite promising: they use solar energy, which is clean and environmentally friendly, as the motive energy to provide cooling. A solar absorption machine is composed of four components and uses lithium bromide/water as the refrigerating couple. LiBr-water is the most promising couple in chiller applications due to its high safety, high volatility ratio, high affinity, high stability and high latent heat. The lithium bromide solution consists of the salt lithium bromide, which absorbs water under certain conditions of pressure and temperature; however, if the concentration of the solution in the absorption chiller is too high, exceeding 70%, the solution will crystallize. The main aim of this article is to study the phenomenon of crystallization and to evaluate the dependence between the electrical conductivity and the concentration, which should be controlled.
Keywords: absorption, crystallization, experimental results, lithium bromide solution
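In practice, the conductivity-concentration dependence the article sets out to measure would be used through a calibration curve. The sketch below assumes a linear calibration and entirely hypothetical data points; the real LiBr relationship has to come from the experiment itself and need not be linear:

```python
def fit_line(x, y):
    """Least-squares line y = a + b*x through calibration points."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def libr_concentration(conductivity, calib_cond, calib_conc, limit=70.0):
    """Estimate LiBr mass concentration (%) from a conductivity reading
    via a linear calibration, and flag crystallization risk above the
    ~70% threshold mentioned in the abstract."""
    a, b = fit_line(calib_cond, calib_conc)
    conc = a + b * conductivity
    return conc, conc > limit

# Hypothetical calibration pairs (conductivity in mS/cm, concentration in %).
conc, risky = libr_concentration(120.0, [100.0, 200.0, 300.0], [50.0, 60.0, 70.0])
```

Such a soft sensor is one way the concentration "which should be controlled" could be monitored online from a conductivity probe.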
Procedia PDF Downloads 310
407 Construction of a Supply Chain Model Using the PREVA Method: The Case of Innovative Sargasso Recovery Projects in the Lesser Antilles
Authors: Maurice Bilioniere, Katie Lanneau
Abstract:
Having appeared suddenly in 2011, invasions of the sargassum seaweeds Sargassum fluitans and Sargassum natans are a climatic hazard causing many problems in the Caribbean. Faced with the growth and frequency of massive sargassum strandings on their coasts, the French West Indies are moving towards industrial recovery. In this context of innovative projects, we analyse the requirements for the management and performance of the supply chain, taking into account the observed volatility of the sargassum input. Our prospective approach consists of studying the theoretical framework for modelling a hybrid supply chain by coupling discrete event simulation (DES) with a valuation of process costs according to the activity-based costing (ABC) method. The PREVA (PRocess EVAluation) approach chosen for our modelling has the advantage of evaluating the financial flows of the logistic process using an analytical model chained with an action model for the evaluation or optimization of physical flows.
Keywords: sargasso, PREVA modelling, supply chain, ABC method, discrete event simulation (DES)
Procedia PDF Downloads 177
406 Integrating Data Mining with Case-Based Reasoning for Diagnosing Sorghum Anthracnose
Authors: Mariamawit T. Belete
Abstract:
Cereal production and marketing are the means of livelihood for millions of households in Ethiopia. However, cereal production is constrained by technical and socio-economic factors, and among the technical factors, cereal crop diseases are major contributors to low yields. The aim of this research is to develop an integrated data mining and knowledge-based system for sorghum anthracnose disease diagnosis that assists agricultural experts and development agents in making timely decisions. The anthracnose diagnosis system gathers information from the Melkassa Agricultural Research Center and attempts to score the anthracnose severity scale. Empirical research was designed for data exploration, modelling, and confirmatory procedures for hypothesis testing and prediction, in order to draw sound conclusions. WEKA (Waikato Environment for Knowledge Analysis) was employed for the modelling. Knowledge-based systems take a variety of approaches depending on the knowledge representation method; case-based reasoning (CBR), a problem-solving strategy that uses previous cases to solve new problems, is one of the most popular. The system utilizes hidden knowledge extracted by applying clustering algorithms, specifically k-means clustering, to a sampled anthracnose dataset. Clustered cases with centroid values are mapped to jCOLIBRI, and the integrator application is created using NetBeans with JDK 8.0.2. The important parts of a case-based reasoning model include retrieval, the similarity-measuring stage; reuse, which allows the domain expert to transfer a retrieved case's solution to suit the current case; revision, to test the solution; and retention, to store the confirmed solution in the case base for future use. The system was evaluated for both performance and user acceptance, with seven test cases used to test the prototype.
Experimental results show that the system achieves average precision and recall values of 70% and 83%, respectively. User acceptance testing was also performed with five domain experts, and an average acceptance of 83% was achieved. Although the results of this study are promising, further investigation of hybrid approaches, such as rule-based reasoning and a pictorial retrieval process, is recommended.
Keywords: sorghum anthracnose, data mining, case-based reasoning, integration
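The retrieval step, matching a new case to the most similar k-means centroid, can be sketched as follows. The centroid features (humidity, temperature, severity) are hypothetical stand-ins, not the attributes of the Melkassa dataset:

```python
import math

def retrieve(case, centroids):
    """CBR retrieval: return the index of the most similar cluster
    centroid (a k-means prototype) for a new anthracnose case; the
    solution stored with that cluster is then reused and, if needed,
    revised by the domain expert."""
    return min(range(len(centroids)),
               key=lambda j: math.dist(case, centroids[j]))

# Hypothetical centroids: (humidity %, temperature C, severity scale 1-5).
centroids = [(60.0, 22.0, 1.0), (85.0, 28.0, 4.0)]
best = retrieve((82.0, 27.0, 3.5), centroids)
```

The remaining CBR stages (reuse, revise, retain) operate on the solution attached to the retrieved cluster, closing the loop back into the case base.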
Procedia PDF Downloads 82
405 Ionic Liquid 1-Butyl-3-Methylimidazolium Bromide as Reaction Medium for the Synthesis of Flavanones under Solvent-Free Conditions
Authors: Cecilia Espindola, Juan Carlos Palacios
Abstract:
Flavonoids are a large group of natural compounds found in many fruits and vegetables. A subgroup of these, called flavanones, displays a wide range of biological activities and also has an important physiological role in plants. Ionic liquids (ILs) are compounds consisting of an organic cation paired with an organic or inorganic anion. Due to their unique properties, such as high electrical conductivity, a wide liquid-state temperature range, thermal and electrochemical stability, high ionic density, and low volatility and flammability, they are considered ecological solvents in organic synthesis and catalysis, electrolytes in accumulators and electrochemistry, non-volatile plasticizers, and media for chemical separation. The ionic liquid 1-butyl-3-methylimidazolium bromide was synthesized solvent-free and used as the reaction medium for flavanone synthesis under several reaction conditions of temperature, time and yield. The compounds obtained were characterized by melting point, elemental analysis, and IR and UV-vis spectroscopy.
Keywords: 1-butyl-3-methylimidazolium bromide, flavonoids, solvent-free, IR spectroscopy
Procedia PDF Downloads 120
404 Review and Comparison of Associative Classification Data Mining Approaches
Authors: Suzan Wedyan
Abstract:
Data mining is one of the main phases of Knowledge Discovery in Databases (KDD), responsible for finding hidden and useful knowledge in databases. Data mining encompasses many different tasks, including regression, pattern recognition, clustering, classification, and association rule mining. In recent years a promising data mining approach called associative classification (AC) has been proposed; AC integrates classification and association rule discovery to build classification models (classifiers). This paper surveys and critically compares several AC algorithms with reference to the different procedures used in each algorithm, such as rule learning, rule sorting, rule pruning, classifier building, and class allocation for test cases.
Keywords: associative classification, classification, data mining, learning, rule ranking, rule pruning, prediction
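Two of the surveyed procedures, rule sorting and class allocation, are concrete enough to sketch. The ranking below follows the common confidence-then-support-then-size order used by CBA-style algorithms; the rules themselves are toy examples, not drawn from any surveyed paper:

```python
def sort_rules(rules):
    """Rank class-association rules as many AC algorithms do:
    higher confidence first, ties broken by higher support,
    then by shorter antecedent."""
    return sorted(rules, key=lambda r: (-r["conf"], -r["supp"], len(r["items"])))

def classify(case, rules, default="unknown"):
    """Class allocation: the first (highest-ranked) rule whose
    antecedent is contained in the test case fires; otherwise
    a default class is returned."""
    for r in sort_rules(rules):
        if set(r["items"]) <= set(case):
            return r["label"]
    return default

rules = [
    {"items": ["a"], "conf": 0.7, "supp": 0.3, "label": "yes"},
    {"items": ["a", "b"], "conf": 0.9, "supp": 0.2, "label": "no"},
]
pred = classify(["a", "b", "c"], rules)
```

Algorithms differ mainly in this ranking function, in how rules are pruned, and in whether a single rule or several fire per case, which is exactly the axis along which the survey compares them.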
Procedia PDF Downloads 537
403 Modelling Spatial Dynamics of Terrorism
Authors: André Python
Abstract:
To this day, terrorism persists as a worldwide threat, exemplified by the deadly attacks of January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. In order to increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models that are able to capture the complex spatial dynamics of terrorism occurring at a local scale. Although empirical research carried out at country level has confirmed theories explaining the diffusion processes of terrorism across space and time, scholars have failed to assess these diffusion theories at a local scale. Moreover, since scholars have not made the most of recent statistical modelling approaches, they have been unable to build predictive models that are accurate in both space and time. In an effort to address these shortcomings, this research suggests a novel approach to systematically assess the theories of terrorism's diffusion at a local scale and provide a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on the lethal terrorist events that occurred after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocalised data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), the surface of which is discretised in the form of Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through an integrated nested Laplace approximation, a recent fitting approach that computes fast and accurate estimates of posterior marginals.
Hence, for each location in the world, the model provides a probability of encountering a lethal terrorist attack and measures of volatility, which inform on the model's predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former describes an expansion from areas with a high concentration of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effect of geographical, economic and demographic variables, the results of the model suggest that the diffusion processes of lethal terrorism are jointly driven by contagious and non-contagious factors operating at a local scale, as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.
Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling
Procedia PDF Downloads 351402 The Role of the Rate of Profit Concept in Creating Economic Stability in Islamic Financial Market
Authors: Trisiladi Supriyanto
Abstract:
This study aims to establish a concept of the rate of profit in Islamic banking that can create economic justice and stability in the Islamic financial market (banking and capital markets). A rate of profit that creates economic justice and stability can be achieved through its role in maintaining the stability of the financial system, in which there is an equitable distribution of income and wealth. To determine the role of the rate of profit as the basis of the profit-sharing system implemented in the Islamic financial system, we examine its connection to financial stability, especially in the asset-liability management of financial institutions that generate a stable net margin, that is, a rate of profit unaffected by the ups and downs of market risk factors, including the indirect effect of interest rates. Furthermore, Islamic financial stability can be seen in the role of the rate of profit in stabilising the value of Islamic financial assets, as measured by the price volatility of Islamic financial assets in the Islamic bond market.Keywords: economic justice, equitable distribution of income, equitable distribution of wealth, rate of profit, stability in the financial system
Procedia PDF Downloads 314401 Unsupervised Learning of Spatiotemporally Coherent Metrics
Authors: Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, Yann LeCun
Abstract:
Current state-of-the-art classification and detection algorithms rely on supervised training. In this work, we study unsupervised feature learning in the context of temporally coherent video data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity. We establish a connection between slow feature learning and metric learning and show that the trained encoder can be used to define a more temporally and semantically coherent metric.Keywords: machine learning, pattern clustering, pooling, classification
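The slowness and sparsity regularization described in this abstract can be sketched as a loss over a pair of adjacent frames. The linear encoder/decoder and the weight layout below are illustrative stand-ins for the paper's convolutional pooling architecture, not the authors' implementation:

```python
# Toy sketch of a slowness/sparsity-regularized auto-encoder loss.
# The linear encode/decode pair is an assumption for illustration only.

def encode(x, w):
    # one code unit per weight row (toy linear "encoder")
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def decode(z, w):
    # transpose of the encoder weights as a toy linear "decoder"
    return [sum(w[k][i] * z[k] for k in range(len(w))) for i in range(len(w[0]))]

def temporal_coherence_loss(x_t, x_t1, w, lam_slow=0.1, lam_sparse=0.01):
    """Reconstruction + slowness (L1 between codes of adjacent frames) + sparsity (L1)."""
    z_t, z_t1 = encode(x_t, w), encode(x_t1, w)
    recon = sum((a - b) ** 2 for a, b in zip(decode(z_t, w), x_t))
    slowness = sum(abs(a - b) for a, b in zip(z_t, z_t1))  # adjacent frames -> similar codes
    sparsity = sum(abs(a) for a in z_t) + sum(abs(a) for a in z_t1)
    return recon + lam_slow * slowness + lam_sparse * sparsity
```

The slowness term is what encodes the temporal-coherence assumption: adjacent frames are pushed toward nearby codes, so distances in code space become a semantically coherent metric.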
Procedia PDF Downloads 456400 Comparing Community Detection Algorithms in Bipartite Networks
Authors: Ehsan Khademi, Mahdi Jalili
Abstract:
Despite the special features of bipartite networks, they are common in many systems. Real-world bipartite networks may show community structure, similar to what one can find in one-mode networks. However, the interpretation of the community structure in bipartite networks differs from that in one-mode networks. In this manuscript, we compare a number of available methods that are frequently used to discover the community structure of bipartite networks. These methods fall into two broad classes. One class first transforms the network into a one-mode network and then applies community detection algorithms. The other class consists of algorithms developed specifically for bipartite networks. These algorithms are applied to a model network with prescribed community structure.Keywords: community detection, bipartite networks, co-clustering, modularity, network projection, complex networks
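The first class of methods rests on a one-mode projection, which can be sketched in a few lines. This is a generic illustration of the projection step, not any specific algorithm from the manuscript:

```python
# One-mode projection of a bipartite network: two row-nodes are linked
# with weight equal to the number of column-nodes they share, i.e. the
# (i, j) entry of B * B^T with the diagonal zeroed out.

def project_bipartite(biadjacency):
    n = len(biadjacency)
    proj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                proj[i][j] = sum(a * b for a, b in zip(biadjacency[i], biadjacency[j]))
    return proj
```

Any one-mode community detection algorithm (e.g. modularity optimisation) can then run on the projected weighted network, at the cost of the information lost in the projection.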
Procedia PDF Downloads 627399 Integrating Geographic Information into Diabetes Disease Management
Authors: Tsu-Yun Chiu, Tsung-Hsueh Lu, Tain-Junn Cheng
Abstract:
Background: Traditional chronic disease management has paid little attention to the effects of geographic factors on compliance with treatment regimens, resulting in geographic inequality in the outcomes of chronic disease management. This study aims to examine the geographic distribution and clustering of quality indicators of diabetes care. Method: We first extracted the addresses, demographic information, and quality-of-care indicators (number of visits, complications, prescription and laboratory records) of patients with diabetes for 2014 from the medical information system of a medical center in Tainan City, Taiwan, and transformed the patients’ addresses into district- and village-level data. We then compared the differences in geographic distribution and clustering of quality-of-care indicators between districts and villages. In addition to the descriptive results, rate ratios and 95% confidence intervals (CI) were estimated for the indices of care in order to compare the quality of diabetes care among different areas. Results: A total of 23,588 patients with diabetes were extracted from the hospital data system, of whom 12,716 patients’ information and medical records were included in the analysis. More than half of the subjects were male and between 60 and 79 years old. Furthermore, the quality of diabetes care did indeed vary by geographic level, and smaller levels allowed clustered areas to be identified more precisely. Fuguo Village (of Yongkang District) and Zhiyi Village (of Sinhua District) were found to be “hotspots” for nephropathy and cerebrovascular disease, while Wangliau Village and Erwang Village (of Yongkang District) were “coldspots”, with the lowest proportion of ≥80% compliance with blood lipid examinations. On the other hand, Yuping Village (in Anping District) had the lowest proportion of ≥80% compliance with all laboratory examinations.
Conclusion: In addition to examining the geographic distribution, calculating rate ratios and their 95% CIs is a useful and consistent method to test the association. This information is useful for health planners, diabetes case managers, and other affiliated practitioners in allocating care resources to the areas most in need.
Keywords: catchment area of healthcare, chronic disease management, Geographic information system, quality of diabetes care
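The rate-ratio comparison this abstract relies on can be computed directly. The abstract does not name the exact estimator, so the standard Wald interval on the log scale is assumed here:

```python
import math

# Rate ratio of area A vs. area B with an approximate 95% CI.
# The log-scale Wald interval (variance ~ 1/a + 1/b) is an assumption,
# since the abstract does not specify the estimator used.

def rate_ratio(cases_a, pop_a, cases_b, pop_b, z=1.96):
    rr = (cases_a / pop_a) / (cases_b / pop_b)
    se_log = math.sqrt(1 / cases_a + 1 / cases_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi
```

An area whose interval excludes 1.0 would then be flagged as significantly better or worse than the comparison area.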
Procedia PDF Downloads 284398 ARIMA-GARCH, A Statistical Modeling for Epileptic Seizure Prediction
Authors: Salman Mohamadi, Seyed Mohammad Ali Tayaranian Hosseini, Hamidreza Amindavar
Abstract:
In this paper, we provide a procedure to analyze and model the EEG (electroencephalogram) signal as a time series using ARIMA-GARCH to predict an epileptic attack. The heteroskedasticity of the EEG signal is examined through ARCH or GARCH (autoregressive conditional heteroskedasticity, generalized autoregressive conditional heteroskedasticity) tests. The best ARIMA-GARCH model in the AIC sense is utilized to measure the volatility of the EEG from epileptic canine subjects and to forecast future values of the EEG. An ARIMA-only model can perform prediction, but an ARCH or GARCH model acting on the residuals of the ARIMA attains a considerably improved forecast horizon. First, we estimate the best ARIMA model; then, different orders of ARCH and GARCH models are surveyed to determine the best heteroskedastic model of the residuals of the aforementioned ARIMA. Using the simulated conditional variance of the selected ARCH or GARCH model, we suggest a procedure to predict oncoming seizures. The results indicate that GARCH modeling detects the dynamic changes of variance well before the onset of a seizure. It can be inferred that the prediction capability comes from the ability of the combined ARIMA-GARCH modeling to cover the heteroskedastic nature of EEG signal changes.Keywords: epileptic seizure prediction, ARIMA, ARCH and GARCH modeling, heteroskedasticity, EEG
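The conditional-variance recursion at the heart of this procedure can be illustrated for GARCH(1,1). The parameter values below are made up; a real analysis would estimate them by maximum likelihood from the ARIMA residuals of the EEG series:

```python
# GARCH(1,1) conditional-variance recursion on ARIMA residuals:
#   sigma^2_t = omega + alpha * eps^2_{t-1} + beta * sigma^2_{t-1}
# A sustained rise in sigma^2 would be the seizure-warning signal.

def garch11_variance(residuals, omega, alpha, beta, sigma2_0):
    sigma2 = [sigma2_0]
    for eps in residuals[:-1]:
        sigma2.append(omega + alpha * eps ** 2 + beta * sigma2[-1])
    return sigma2
```

A large residual at time t-1 immediately inflates the variance at time t, which is why the GARCH layer reacts to volatility clustering that the ARIMA mean model alone cannot capture.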
Procedia PDF Downloads 406397 Good Banks, Bad Banks, and Public Scrutiny: The Determinants of Corporate Social Responsibility in Times of Financial Volatility
Authors: A. W. Chalmers, O. M. van den Broek
Abstract:
This article examines the relationship between the global financial crisis and the corporate social responsibility activities of financial services firms. It challenges the general consensus in existing studies that firms, when faced with economic hardship, tend to jettison CSR commitments. Instead, building on recent insights into the institutional determinants of CSR, it is argued that firms are constrained in their ability to abandon CSR by the extent to which they are subject to intense public scrutiny by regulators and the news media. This argument is tested in the context of the European sovereign debt crisis, drawing on a unique dataset of 170 firms in 15 countries over a six-year period. Controlling for a battery of alternative explanations and comparing financial service providers to firms operating in other economic sectors, the results provide considerable evidence supporting the main argument. Rather than abandoning CSR during times of economic hardship, financial industry firms ramp up their CSR commitments in order to manage their public image and foster public trust in light of intense public scrutiny.Keywords: corporate social responsibility (CSR), public scrutiny, global financial crisis, financial services firms
Procedia PDF Downloads 307396 High-Frequency Cryptocurrency Portfolio Management Using Multi-Agent System Based on Federated Reinforcement Learning
Authors: Sirapop Nuannimnoi, Hojjat Baghban, Ching-Yao Huang
Abstract:
Over the past decade, with the fast development of blockchain technology since the birth of Bitcoin, there has been a massive increase in the usage of Cryptocurrencies. Cryptocurrencies are not seen as an investment opportunity due to the market’s erratic behavior and high price volatility. With the recent success of deep reinforcement learning (DRL), portfolio management can be modeled and automated. In this paper, we propose a novel DRL-based multi-agent system to automatically make proper trading decisions on multiple cryptocurrencies and gain profits in the highly volatile cryptocurrency market. We also extend this multi-agent system with horizontal federated transfer learning for better adapting to the inclusion of new cryptocurrencies in our portfolio; therefore, we can, through the concept of diversification, maximize our profits and minimize the trading risks. Experimental results through multiple simulation scenarios reveal that this proposed algorithmic trading system can offer three promising key advantages over other systems, including maximized profits, minimized risks, and adaptability.Keywords: cryptocurrency portfolio management, algorithmic trading, federated learning, multi-agent reinforcement learning
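The portfolio bookkeeping such a trading system must perform each period can be sketched briefly. The softmax step below merely stands in for whatever allocation the DRL policy network outputs, and transaction costs are ignored; none of this is the authors' implementation:

```python
import math

# Hypothetical sketch of one portfolio-management step.
# weights come from the (here simulated) policy network; price_relatives
# are y_i = price_i(t+1) / price_i(t) for each asset.

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def step_portfolio(value, weights, price_relatives):
    """New portfolio value after one period: v_{t+1} = v_t * (w . y)."""
    return value * sum(w * y for w, y in zip(weights, price_relatives))
```

Diversification shows up directly in the dot product: spreading weight across assets with imperfectly correlated price relatives dampens the variance of the portfolio value from step to step.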
Procedia PDF Downloads 119395 A Literature Review on Development of a Forecast Supported Approach for the Continuous Pre-Planning of Required Transport Capacity for the Design of Sustainable Transport Chains
Authors: Georg Brunnthaller, Sandra Stein, Wilfried Sihn
Abstract:
Logistics service providers are facing increasing volatility in future transport demand. Short-term planning horizons and planning uncertainties lead to reduced capacity utilisation and increasing empty mileage. To overcome these challenges, a model is proposed to continuously pre-plan future transport capacity in order to redesign and adjust the intermodal fleet accordingly. It is expected that the model will enable logistics service providers to organise more economically and ecologically sustainable transport chains in a more flexible way. To further describe such planning aspects, this paper gives a structured literature review on transport planning problems. The focus is on the strategic and tactical planning levels, comprising relevant fleet-sizing, network-design, and choice-of-carriers problems. Models and their solution techniques are presented, and the literature review is concluded with an outlook on our future research objectives.Keywords: choice of transport mode, fleet-sizing, freight transport planning, multimodal, review, service network design
Procedia PDF Downloads 365394 Institutional Segmentation and Country Clustering: Implications for Multinational Enterprises Over Standardized Management
Authors: Jung-Hoon Han, Jooyoung Kwak
Abstract:
Distances between cultures and institutions are regaining academic attention since the classical debate on the validity of globalization. Despite incessant efforts to define international segments with various concepts, no significant attempts have been made that consider the institutional dimensions. Resource-based theory and institutional theory provide useful insights for assessing the market environment and understanding when and how MNEs lose or gain advantages. This study consists of two parts: identifying institutional clusters and predicting the effect of MNEs’ origin on the applicability of competitive advantages. MNEs in one country cluster are expected to use similar management systems.Keywords: institutional theory, resource-based theory, institutional environment, cultural dimensions, cluster analysis, standardized management
Procedia PDF Downloads 489393 Comparative Study of Ad Hoc Routing Protocols in Vehicular Ad-Hoc Networks for Smart City
Authors: Khadija Raissi, Bechir Ben Gouissem
Abstract:
In this paper, we investigate several routing protocols in the Vehicular Ad-Hoc Network (VANET) context. Specifically, we study the efficiency of protocols such as Dynamic Source Routing (DSR), Ad hoc On-demand Distance Vector routing (AODV), Destination-Sequenced Distance Vector (DSDV), the Optimized Link State Routing protocol (OLSR), and the Vehicular Multi-hop Algorithm for Stable Clustering (VMASC) in terms of packet delivery ratio (PDR) and throughput. The performance evaluation and comparison of the studied protocols show that VMASC is the best protocol regarding fast data transmission and link stability in VANETs. All results are validated with the NS3 simulator.Keywords: VANET, smart city, AODV, OLSR, DSR, DSDV, VMASC, routing protocols, NS3
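The two evaluation metrics used in this comparison are simple ratios and can be stated precisely. The helpers below are illustrative; the field names are assumptions, not NS3 trace output:

```python
# The two VANET comparison metrics from the abstract, as plain functions.

def packet_delivery_ratio(received, sent):
    """Fraction of sent packets that reached their destination."""
    return received / sent if sent else 0.0

def throughput_bps(received_bytes, duration_s):
    """Throughput in bits per second over the simulation interval."""
    return received_bytes * 8 / duration_s
```

In a simulator run, `received` and `sent` would be counted from the trace files of each protocol before the per-protocol comparison is made.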
Procedia PDF Downloads 298392 Care: A Cluster Based Approach for Reliable and Efficient Routing Protocol in Wireless Sensor Networks
Authors: K. Prasanth, S. Hafeezullah Khan, B. Haribalakrishnan, D. Arun, S. Jayapriya, S. Dhivya, N. Vijayarangan
Abstract:
The main goal of our approach is to find the optimal positions for the sensor nodes, reinforcing communications at points where a lack of connectivity is found. Routing is the major problem in a sensor network’s data transfer between nodes. We provide an efficient routing technique so that data signals reach the base station quickly and without interruption. Clustering and routing are the two key factors to be considered in a WSN. To carry out the communication from the nodes to their cluster head, we propose a parameterizable protocol so that the developer can indicate whether the routing should be sensitive to the link quality of the nodes or to their battery levels.Keywords: clusters, routing, wireless sensor networks, three phases, sensor networks
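The parameterizable forwarding choice described above can be sketched as a single configurable selection. The neighbour fields (`lqi`, `battery`) and the function shape are assumptions for illustration, not the protocol's actual message format:

```python
# Hypothetical sketch of the parameterizable next-hop choice: the developer
# configures whether routing favours link quality or remaining battery.

def choose_next_hop(neighbours, metric="link_quality"):
    """Pick the neighbour maximizing the configured metric on the way
    to the cluster head; metric is 'link_quality' or 'battery'."""
    key = {"link_quality": lambda n: n["lqi"],
           "battery": lambda n: n["battery"]}[metric]
    return max(neighbours, key=key)
```

Exposing the metric as a parameter is what lets the same protocol trade reliability (link quality) against network lifetime (battery) per deployment.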
Procedia PDF Downloads 506391 Modified Active (MA) Algorithm to Generate Semantic Web Related Clustered Hierarchy for Keyword Search
Authors: G. Leena Giri, Archana Mathur, S. H. Manjula, K. R. Venugopal, L. M. Patnaik
Abstract:
Keyword search in XML documents is based on the notion of lowest common ancestors in the labelled-tree model of XML documents and has recently gained considerable research interest in the database community. In this paper, we propose the Modified Active (MA) algorithm, an improvement over the active clustering algorithm that takes into consideration the entity aspect of the nodes to determine the level of the node pertaining to a particular keyword entered by the user. A portion of the bibliography database is used to experimentally evaluate the MA algorithm, and the results show that it performs better than the active algorithm. Our modification improves the response time of the system and thereby increases its efficiency.Keywords: keyword matching patterns, MA algorithm, semantic search, knowledge management
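The lowest-common-ancestor notion this line of work builds on can be illustrated in a few lines. Parent pointers stand in here for the labelled-tree encoding of the XML document; this is the generic LCA operation, not the MA algorithm itself:

```python
# LCA of two nodes in a tree given as a child -> parent map.
# In XML keyword search, the LCA of two keyword-matching nodes is the
# smallest subtree containing both matches.

def lowest_common_ancestor(parent, u, v):
    ancestors = set()
    while u is not None:        # walk u up to the root, recording the path
        ancestors.add(u)
        u = parent.get(u)
    while v not in ancestors:   # walk v up until it meets that path
        v = parent[v]
    return v
```

Given keyword matches at nodes `d` and `e`, the LCA identifies the element that answers the query, which is the primitive the clustering levels in the MA algorithm are built over.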
Procedia PDF Downloads 415