Search results for: gp-closed sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1220

950 The Dialectic of Law and Politics for Georg Wilhelm Friedrich Hegel

Authors: Djehich Mohamed Yousri

Abstract:

This paper addresses the dialectic of law and politics in Hegel's philosophy of the state by examining the concept of law, which in its general meaning refers to the set of rules and legislation that man establishes to be applied within society, and which is considered one of the primary and necessary conditions for the functioning and organization of social life, since it defines the rights and duties of every individual belonging to the state. The concept is approached through central notions of political philosophy, such as the state, freedom, and the people. The most prominent result reached through our analysis of the research problem concerns the relationship between law and politics in Hegel's philosophical system: on the one hand, the state is rational only to the extent that it resorts to the law and works under it, while the law does not realize its essence and effectiveness unless it is drawn from the customs, traditions, and culture of the people, so that it does not conflict with the ideal goal of its existence, which is to achieve freedom and to protect it. The existence of a state does not at all mean reducing the freedom of the people, so there is no conflict between law and freedom.

Keywords: Hegel, the law, country, freedom, citizen

Procedia PDF Downloads 53
949 The Relationship between Basic Human Needs and Opportunity Based on Social Progress Index

Authors: Ebru Ozgur Guler, Huseyin Guler, Sera Sanli

Abstract:

The Social Progress Index (SPI), whose foundations were laid at the World Economic Forum, is an index that aims to form a systematic basis for guiding strategy for inclusive growth, which requires achieving both economic and social progress. This research aims to determine the relations between the “Basic Human Needs” (BHN) dimension (comprising the four variables ‘Nutrition and Basic Medical Care’, ‘Water and Sanitation’, ‘Shelter’ and ‘Personal Safety’) and the “Opportunity” (OPT) dimension (composed of the ‘Personal Rights’, ‘Personal Freedom and Choice’, ‘Tolerance and Inclusion’, and ‘Access to Advanced Education’ components) of the 2016 SPI for the 138 countries listed on the website of the Social Progress Imperative, by carrying out canonical correlation analysis (CCA), a data-reduction technique that maximizes the correlation between two variable sets. In interpreting the results, the first pair of canonical variates, which corresponds to the highest canonical correlation, has been taken into account. The first canonical correlation coefficient has been found to be 0.880, indicating a strong relationship between the BHN and OPT variable sets. The Wilks' Lambda statistic has revealed that the overall effect of 0.809 for the full model is large enough to be counted as statistically significant (with a p-value of 0.000). According to the standardized canonical coefficients, the largest contribution to the BHN set of variables comes from the ‘shelter’ variable. The most effective variable in the OPT set is ‘access to advanced education’. Findings based on canonical loadings also confirm these results with respect to the contributions to the first canonical variates. When the canonical cross-loadings (structure coefficients) are examined for the first pair of canonical variates, the largest contributions are provided by the ‘shelter’ and ‘access to advanced education’ variables. Since the signs of the structure coefficients are negative for all variables, all OPT variables are positively related to all of the BHN variables. When the canonical communality coefficients, which are the sums of the squares of the structure coefficients across all interpretable functions, are taken as the basis, the ‘personal rights’ and ‘tolerance and inclusion’ variables can be said not to be useful in the model, with coefficients of 0.318721 and 0.341722, respectively. On the other hand, while the redundancy index for the BHN set has been found to be 0.615, the OPT set has a lower redundancy index of 0.475. High redundancy implies high predictability. The proportion of the total variation in the BHN set of variables that is explained by all of the opposite canonical variates has been calculated as 63%, and finally, the proportion of the total variation in the OPT set that is explained by all of the canonical variates in the BHN set has been determined as 50.4%, a large part of which belongs to the first pair. The results suggest that there is a high and statistically significant relationship between BHN and OPT. This relationship is mainly accounted for by ‘shelter’ and ‘access to advanced education’.
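
The CCA workflow described above can be reproduced with standard statistical libraries. Below is a minimal Python sketch on synthetic data standing in for the 2016 SPI component scores of the 138 countries; the variable names, the random data, and the redundancy formula used here are illustrative assumptions about the standard CCA quantities, not the authors' code.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 138                                    # number of countries in the study
bhn = rng.normal(size=(n, 4))              # Basic Human Needs: 4 component scores (synthetic)
opt = rng.normal(size=(n, 4))              # Opportunity: 4 component scores (synthetic)

cca = CCA(n_components=4)
U, V = cca.fit_transform(bhn, opt)         # canonical variates of each set

# Canonical correlations of the four pairs of variates
r = np.array([np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(4)])
print("canonical correlations:", r.round(3))

# Wilks' lambda for the full model (all canonical functions)
print("Wilks' lambda:", round(float(np.prod(1.0 - r**2)), 3))

# Structure coefficients (canonical loadings) and cross-loadings for the first pair
loadings_bhn = np.array([np.corrcoef(bhn[:, j], U[:, 0])[0, 1] for j in range(4)])
cross_bhn = np.array([np.corrcoef(bhn[:, j], V[:, 0])[0, 1] for j in range(4)])

# Redundancy index of the BHN set given the first OPT variate:
# mean squared loading of the BHN variables times the squared canonical correlation
print("redundancy (BHN | 1st OPT variate):",
      round(float(np.mean(loadings_bhn**2) * r[0]**2), 3))
```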

Keywords: canonical communality coefficient, canonical correlation analysis, redundancy index, social progress index

Procedia PDF Downloads 191
948 Simulating the Hot Hand Phenomenon in Basketball with Bayesian Hidden Markov Models

Authors: Gabriel Calvo, Carmen Armero, Luigi Spezia

Abstract:

A basketball player is said to have a hot hand if his/her performance is better than expected in different periods of time. A way to deal with this phenomenon is to make use of latent variables, which can indicate whether the player is ‘on fire’ or not. This work aims to model the hot hand phenomenon through a Bayesian hidden Markov model (HMM) with two states (cold and hot) and two different probabilities of success depending on the corresponding hidden state. The task is illustrated through a comprehensive simulation study. The simulated data sets emulate the field goal attempts in an NBA season from players with different profiles. This model can be a powerful tool to assess the ‘streakiness’ of each player, and it provides information about the general performance of the players during the match. Finally, the Bayesian HMM allows computing the posterior probability of any type of streak.
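
The simulation design is straightforward to sketch. The following minimal Python example generates field-goal attempts from a two-state (cold/hot) hidden Markov chain with state-specific success probabilities; the transition matrix, success probabilities, and number of attempts are illustrative assumptions, and the full Bayesian inference of the posterior streak probabilities (for example via MCMC) is not shown.

```python
import numpy as np

rng = np.random.default_rng(42)
P = np.array([[0.90, 0.10],               # transition probabilities: cold -> {cold, hot}
              [0.30, 0.70]])              #                            hot  -> {cold, hot}
p_success = np.array([0.40, 0.65])        # probability of a made shot in each hidden state
n_attempts = 500                          # field-goal attempts over a season

states = np.empty(n_attempts, dtype=int)
makes = np.empty(n_attempts, dtype=int)
states[0] = 0                             # start the season in the cold state
makes[0] = rng.random() < p_success[states[0]]
for t in range(1, n_attempts):
    states[t] = rng.choice(2, p=P[states[t - 1]])
    makes[t] = rng.random() < p_success[states[t]]

print("overall shooting percentage:", makes.mean())
print("shooting percentage while 'hot':", makes[states == 1].mean())
```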

Keywords: Bernoulli trials, field goals, latent variables, posterior distribution

Procedia PDF Downloads 135
947 Regularity and Maximal Congruence in Transformation Semigroups with Fixed Sets

Authors: Chollawat Pookpienlert, Jintana Sanwong

Abstract:

An element a of a semigroup S is called left (right) regular if there exists x in S such that a = xa² (a = a²x), and is said to be intra-regular if there exist u, v in S such that a = ua²v. Let T(X) be the semigroup of all full transformations on a set X under composition of maps. For a fixed nonempty subset Y of X, let Fix(X,Y) = {α ∈ T(X) : yα = y for all y ∈ Y}, where yα is the image of y under α. Then Fix(X,Y) is a semigroup of full transformations on X which fix all elements in Y. Here, we characterize the left regular, right regular and intra-regular elements of Fix(X,Y); these characterizations are as follows: for α ∈ Fix(X,Y), (i) α is left regular if and only if Xα\Y = Xα²\Y, (ii) α is right regular if and only if πα = πα², (iii) α is intra-regular if and only if |Xα\Y| = |Xα²\Y|, where Xα = {xα : x ∈ X} and πα = {xα⁻¹ : x ∈ Xα}, in which xα⁻¹ = {a ∈ X : aα = x}. Moreover, those regularities are equivalent if Xα\Y is a finite set. In addition, we count the number of such elements of Fix(X,Y) when X is a finite set. Finally, we determine the maximal congruence ρ on Fix(X,Y) when X is finite and Y is a nonempty proper subset of X. If we let |X\Y| = n, then we obtain that ρ = (Fixₙ × Fixₙ) ∪ (Hε × Hε), where Fixₙ = {α ∈ Fix(X,Y) : |Xα\Y| < n} and Hε is the group of units of Fix(X,Y). Furthermore, we show that this maximal congruence is unique.
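
The first characterization can be checked directly on a small example. The following is a minimal brute-force Python sketch, using the right-action convention yα of the abstract, that verifies the left-regularity criterion on Fix(X,Y); the choice X = {0, 1, 2, 3} and Y = {0, 1} is an illustrative assumption.

```python
from itertools import product

X = range(4)
Y = {0, 1}                                   # the fixed subset of X

# Fix(X, Y): all full transformations of X (tuples with a[p] = image of p) fixing every y in Y
FixXY = [a for a in product(X, repeat=len(X)) if all(a[y] == y for y in Y)]

def mul(a, b):
    """Product ab in the right-action convention of the abstract: p(ab) = (pa)b."""
    return tuple(b[a[p]] for p in X)

def image_minus_Y(a):
    return {a[p] for p in X} - Y

for a in FixXY:
    a2 = mul(a, a)
    left_regular = any(a == mul(x, a2) for x in FixXY)          # a = x a^2 for some x
    criterion = image_minus_Y(a) == image_minus_Y(a2)           # X a \ Y = X a^2 \ Y
    assert left_regular == criterion
print("left-regularity criterion verified on Fix(X, Y) with |X| = 4 and Y = {0, 1}")
```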

Keywords: intra-regular, left regular, maximal congruence, right regular, transformation semigroup

Procedia PDF Downloads 195
946 System Survivability in Networks in the Context of Defense/Attack Strategies: The Large Scale

Authors: Asma Ben Yaghlane, Mohamed Naceur Azaiez, Mehdi Mrad

Abstract:

We investigate large-scale networks in the context of network survivability under attack. We use appropriate techniques to evaluate both the attacker-based and the defender-based network survivability. The attacker is unaware of the links operated by the defender. Each attacked link has some pre-specified probability of being disconnected. The defender's choice is made so as to maximize the chance of successfully sending the flow to the destination node. The attacker, in turn, selects the cut-set with the highest chance of being disabled in order to partition the network. Moreover, we extend the problem to the case of selecting the best p paths to operate by the defender and the best k cut-sets to target by the attacker, for arbitrary integers p, k > 1. We investigate some variations of the problem and suggest polynomial-time solutions.
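
Under the probabilistic assumptions stated above, the two players' objectives can be sketched directly. The following minimal Python example evaluates the defender's best single path and the attacker's best single cut-set on a toy four-link network; the network, the link-failure probabilities, and the enumerated paths and cut-sets are illustrative assumptions.

```python
from math import prod

q = {                               # probability that each link is disconnected when attacked
    ("s", "a"): 0.2, ("a", "t"): 0.3,
    ("s", "b"): 0.1, ("b", "t"): 0.4,
}

def path_survival(path_links):
    """Defender view: chance the flow reaches the destination over this path."""
    return prod(1 - q[e] for e in path_links)

def cutset_disabled(cut_links):
    """Attacker view: chance the whole cut-set fails, partitioning the network."""
    return prod(q[e] for e in cut_links)

paths = [[("s", "a"), ("a", "t")], [("s", "b"), ("b", "t")]]
cuts = [[("s", "a"), ("s", "b")], [("a", "t"), ("b", "t")]]

best_path = max(paths, key=path_survival)     # defender maximizes the survival chance
best_cut = max(cuts, key=cutset_disabled)     # attacker maximizes the disabling chance
print("defender's best path:   ", best_path, round(path_survival(best_path), 3))
print("attacker's best cut-set:", best_cut, round(cutset_disabled(best_cut), 3))
```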

Keywords: defense/attack strategies, large scale, networks, partitioning a network

Procedia PDF Downloads 243
945 Facial Recognition on the Basis of Facial Fragments

Authors: Tetyana Baydyk, Ernst Kussul, Sandra Bonilla Meza

Abstract:

There are many articles that attempt to establish the role of different facial fragments in face recognition. Various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to the fragment. This approach can only give an approximate estimate. In this paper, we propose a more direct measure of the importance of different fragments for face recognition: we select a recognition method and a face database and experimentally investigate the recognition rate using different fragments of faces. We present two such experiments in the paper. We selected the PCNC neural classifier as the face recognition method and parts of the LFW (Labeled Faces in the Wild) face database as the training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face.

Keywords: face recognition, labeled faces in the wild (LFW) database, random local descriptor (RLD), random features

Procedia PDF Downloads 328
944 The Death of God: Between Nietzsche and Muhammad Iqbal

Authors: Maruf Hasan

Abstract:

This article investigates how Muhammad Iqbal integrated key aspects of Nietzsche's philosophy into Islamic spirituality, and at the same time examines how Iqbal developed a Tawhidic paradigm that differed from Nietzsche's atheism. The literary study of Nietzsche's and Iqbal's texts, as well as of their influence on Western and Muslim discourse, is used as part of a qualitative research methodology. The result is that Iqbal accepted most of Nietzsche's ideas while affirming Islamic Aqida. The article concludes that, so long as a Tawhidic paradigm is upheld, there are no restrictions on the integration of knowledge. Iqbal's writings serve as a template for the fusion of Western and Islamic thought, presenting a solution to the apparent incompatibility of these two sets of beliefs. In order to deepen our awareness of the world and foster respect and understanding among cultures and religions, this research emphasizes the value of engaging with other perspectives.

Keywords: Nietzsche, Muhammad Iqbal, post-Islamism, post-Islamism interpretative approach

Procedia PDF Downloads 78
943 Proposed Alternative System for Existing Traffic Signal System

Authors: Alluri Swaroopa, L. V. N. Prasad

Abstract:

Along with fast urbanization around the world, the traffic control problem has become a big issue in urban construction. Having an efficient and reliable traffic control system is crucial to macro-traffic control. Traffic signals are used to manage conflicting requirements by allocating different sets of mutually compatible traffic movements during distinct time intervals. Many approaches have been proposed to solve this discrete stochastic problem. Recognizing the need to minimize right-of-way impacts while efficiently handling the anticipated high traffic volumes, the proposed alternative system gives an effective design. This model allows for increased traffic capacity and reduces delays by eliminating a step in maneuvering through the freeway interchange. The concept proposed in this paper involves the construction of bridges and ramps at the intersection of four roads to control vehicular congestion and to prevent traffic breakdown.

Keywords: bridges, junctions, ramps, urban traffic control

Procedia PDF Downloads 518
942 Comparison of the Seismic Response of Planar Regular and Irregular Steel Frames

Authors: Robespierre Chavez, Eden Bojorquez, Alfredo Reyes-Salazar

Abstract:

This study compares the seismic response of regular and vertically irregular steel frames determined by nonlinear time history analysis using several sets of earthquake records, which are divided into two categories: the first category contains 20 stiff-soil ground motion records obtained from the NGA database, and the second contains 30 soft-soil ground motions recorded in the Lake Zone of Mexico City and exhibiting a dominant period (Ts) of two seconds. The steel frames, in both the regular and irregular configurations, were designed according to the Mexico City Seismic Design Provisions (MCSDP). The effects of irregularity along the height on the maximum interstory drifts are estimated.

Keywords: irregular steel frames, maximum interstory drifts, seismic response, seismic records

Procedia PDF Downloads 290
941 Using Genetic Algorithms and Rough Set Based Fuzzy K-Modes to Improve Centroid Model Clustering Performance on Categorical Data

Authors: Rishabh Srivastav, Divyam Sharma

Abstract:

We propose an algorithm to cluster categorical data, named ‘Genetic algorithm initialized rough set based fuzzy K-modes for categorical data’. The algorithm is an amalgamation of the simple K-modes algorithm, the rough and fuzzy set based K-modes, and the genetic algorithm, which, we hypothesize, will provide better centroid-model clustering results than existing standard algorithms. In the proposed algorithm, the initialization and updating of the modes are done by means of genetic algorithms, while the membership values are calculated using rough sets and fuzzy logic.
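
For illustration, the fuzzy membership computation at the heart of fuzzy K-modes can be sketched as follows; the matching dissimilarity and the membership formula are the usual fuzzy K-modes definitions assumed here, and neither the rough-set approximation nor the GA-based initialization of the modes is shown.

```python
import numpy as np

def matching_dissimilarity(x, mode):
    """Number of attributes on which a categorical object differs from a mode."""
    return sum(a != b for a, b in zip(x, mode))

def fuzzy_memberships(X, modes, m=1.5):
    """Membership of every object in every cluster (rows sum to 1)."""
    W = np.zeros((len(X), len(modes)))
    for i, x in enumerate(X):
        d = np.array([matching_dissimilarity(x, z) for z in modes], dtype=float)
        if np.any(d == 0):                         # object coincides with one or more modes
            W[i, d == 0] = 1.0 / np.sum(d == 0)
        else:
            inv = d ** (-1.0 / (m - 1))            # standard fuzzy weighting of dissimilarities
            W[i] = inv / inv.sum()
    return W

X = [("red", "small", "round"), ("red", "large", "round"), ("blue", "small", "square")]
modes = [("red", "small", "round"), ("blue", "large", "square")]
print(fuzzy_memberships(X, modes).round(3))
```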

Keywords: categorical data, fuzzy logic, genetic algorithm, K modes clustering, rough sets

Procedia PDF Downloads 208
940 OILU Tag: A Projective Invariant Fiducial System

Authors: Youssef Chahir, Messaoud Mostefai, Salah Khodja

Abstract:

This paper presents the development of a 2D visual marker derived from recently patented work in the field of numbering systems. The proposed fiducial uses a group of projective-invariant straight-line patterns that are easily detectable and remotely recognizable. Based on an efficient data coding scheme, the developed marker enables producing a large panel of unique real-time identifiers with highly distinguishable patterns. The proposed marker incorporates both decimal and binary information simultaneously, making it readable by both humans and machines. This important feature opens up new opportunities for the development of efficient visual human-machine communication and monitoring protocols. Extensive experimental tests validate the robustness of the marker against acquisition and geometric distortions.

Keywords: visual markers, projective invariants, distance map, level sets

Procedia PDF Downloads 126
939 Improved K-Means Clustering Algorithm Using RHadoop with Combiner

Authors: Ji Eun Shin, Dong Hoon Lim

Abstract:

Data clustering is a common technique used in data analysis and in many applications, such as artificial intelligence, pattern recognition, economics, ecology, psychiatry and marketing. K-means clustering is a well-known algorithm aiming to cluster a set of data points into a predefined number of clusters. In this paper, we implement the K-means algorithm on the MapReduce framework with RHadoop to make the clustering method applicable to large-scale data. RHadoop is a collection of R packages that allow users to manage and analyze data with Hadoop. The main idea is to introduce a combiner as a function of our map output to decrease the amount of data that needs to be processed by the reducers. The experimental results demonstrate that the K-means algorithm using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also show that our K-means algorithm using RHadoop with a combiner is faster than the regular algorithm without a combiner as the size of the data set increases.
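
The role of the combiner can be illustrated with a single-machine sketch: mappers emit (cluster, point) pairs, the combiner collapses each input split into per-cluster partial sums and counts, and the reducer merges the partial sums into new centroids. The Python example below only illustrates this idea; the actual RHadoop/MapReduce plumbing is not shown, and the data are synthetic.

```python
import numpy as np

def map_assign(points, centroids):
    """Map step: emit (nearest-centroid id, point) for every point in a split."""
    for p in points:
        yield int(np.argmin(np.linalg.norm(centroids - p, axis=1))), p

def combine(mapped):
    """Combiner: collapse one split's output into per-cluster (sum, count) pairs."""
    partial = {}
    for k, p in mapped:
        s, n = partial.get(k, (np.zeros_like(p), 0))
        partial[k] = (s + p, n + 1)
    return partial

def reduce_centroids(partials, k, dim):
    """Reducer: merge the partial sums from all splits into new centroids."""
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    for part in partials:
        for c, (s, n) in part.items():
            sums[c] += s
            counts[c] += n
    return sums / np.maximum(counts, 1)[:, None]

rng = np.random.default_rng(1)
splits = [rng.normal(size=(1000, 2)) + off for off in (0.0, 5.0)]   # two input splits
centroids = rng.normal(size=(3, 2))
for _ in range(10):                                                  # K-means iterations
    partials = [combine(map_assign(s, centroids)) for s in splits]
    centroids = reduce_centroids(partials, k=3, dim=2)
print(centroids.round(2))
```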

Keywords: big data, combiner, K-means clustering, RHadoop

Procedia PDF Downloads 395
938 Recruitment Model (FSRM) for Faculty Selection Based on Fuzzy Soft Sets

Authors: G. S. Thakur

Abstract:

This paper presents a Fuzzy Soft Recruitment Model (FSRM) for faculty selection in MHRD technical institutions. The selection criteria are based on the 4-tier flexible structure in these institutions. The Advisory Committee on Faculty Recruitment (ACoFAR) suggested nine criteria for faculty in the proposed FSRM. The fuzzy soft model is proposed, in consultation with ACoFAR, on the basis of the selection criteria. Fuzzy soft distance and similarity measures are applied for finding the best faculty from the applicant pool.

Keywords: fuzzy soft set, fuzzy sets, fuzzy soft distance, fuzzy soft similarity measures, ACoFAR

Procedia PDF Downloads 312
937 Euthanasia in Dementia Cases: An Interview Study of Dutch Physicians' Experiences

Authors: J. E. Appel, R. N. Bouwmeester, L. Crombach, K. Georgieva, N. O’Shea, T. I. van Rijssel, L. Wingens

Abstract:

The Netherlands has a unique and progressive euthanasia law. Even people with advanced neurodegenerative diseases, like dementia, can request euthanasia when an Advance Euthanasia Directive (AED) has been written. Although the law sets some guidelines, in practice many complexities occur. Doctors in particular experience difficult situations, as they have to decide whether euthanasia is justified. Research suggests that this leads to an emotional burden for them, due to feelings of isolation, fear of prosecution, as well as pressure from patients, families, or society. The existing literature, however, has failed to address the problems arising in dementia cases in particular, as well as possible sources of support. In order to investigate these issues, semi-structured in-depth interviews with 20 Dutch general practitioners and elderly care physicians will be conducted. Results are expected to be obtained by the end of December 2017.

Keywords: dementia, euthanasia, general practitioners, elderly care physicians, palliative care

Procedia PDF Downloads 177
936 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments

Authors: Skyler Kim

Abstract:

An early diagnosis of leukemia has always been a challenge to doctors and hematologists. On a worldwide basis, it was reported that there were approximately 350,000 new cases in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods was the AI approach. This approach has become a major trend in recent years, and several research groups have been working on developing these diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity throughout different ages. We decided to select acute lymphocytic leukemia to develop our diagnostic system since acute lymphocytic leukemia is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia. The results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15135 total images, 8491 of these are images of abnormal cells, and 5398 images are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger sets. The proposed diagnostic system has the function of detecting and classifying leukemia. Different from other AI approaches, we explore hybrid architectures to improve the current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, fusing the features from specific abstraction layers can be deemed as auxiliary features and lead to further improvement of the classification accuracy. In this approach, features extracted from the lower levels are combined into higher dimension feature maps to help improve the discriminative capability of intermediate features and also overcome the problem of network gradient vanishing or exploding. By comparing VGG19 and ResNet50 and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model’s performance and their pros and cons will be presented in the conference.
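
A minimal PyTorch sketch of the kind of feature-level fusion described above is given below: a VGG19 branch and a ResNet50 branch each produce a pooled feature vector, the two vectors are concatenated, and a small head classifies the image. The layer choices, feature dimensions, and two-class head (normal vs. leukemic) are assumptions made for illustration; the authors additionally fuse features from intermediate abstraction layers and use transfer learning, which are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, resnet50

class HybridLeukemiaNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        vgg = vgg19()          # pretrained ImageNet weights would normally be loaded here
        resnet = resnet50()
        self.vgg_branch = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))    # -> 512-d
        self.resnet_branch = nn.Sequential(*list(resnet.children())[:-1])         # -> 2048-d
        self.head = nn.Sequential(
            nn.Linear(512 + 2048, 256),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        f1 = torch.flatten(self.vgg_branch(x), 1)      # VGG19 features
        f2 = torch.flatten(self.resnet_branch(x), 1)   # ResNet50 features
        return self.head(torch.cat([f1, f2], dim=1))   # feature-level fusion

model = HybridLeukemiaNet()
logits = model(torch.randn(4, 3, 224, 224))            # a batch of 4 blood-smear crops
print(logits.shape)                                     # torch.Size([4, 2])
```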

Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning

Procedia PDF Downloads 157
935 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, but with this increase, computation speed becomes the critical factor. Data reduction is one of the solutions to this problem. Removing the redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are only software implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time for fetching as well as processing the instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct contains all the attributes from the core. In this paper, the hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as the input. The output of the algorithm is the superreduct, which is the reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are most frequent in the decision table. The algorithm described above has two disadvantages: (i) it generates the superreduct instead of the reduct, and (ii) the additional first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit for controlling the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and in software were compared. The results show an increase in the speed of data processing.
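
For reference, a software sketch of the two-stage algorithm is shown below (the contribution of the paper is its hardware implementation). Stage 1 extracts the core from the singleton cells of the discernibility matrix; stage 2 greedily adds the most frequently occurring attribute until every cell is covered, yielding a superreduct. The tiny decision table and the exact frequency criterion are illustrative assumptions.

```python
from collections import Counter

# rows: condition attribute values (a0, a1, a2) plus a decision value d
table = [
    ((0, 0, 0), 0),
    ((1, 0, 0), 1),
    ((0, 1, 1), 1),
    ((1, 1, 1), 1),
]
n_attr = 3

# Discernibility matrix: for each pair of objects with different decisions,
# the set of condition attributes on which they differ.
cells = []
for i in range(len(table)):
    for j in range(i + 1, len(table)):
        (xi, di), (xj, dj) = table[i], table[j]
        if di != dj:
            cells.append({a for a in range(n_attr) if xi[a] != xj[a]})

# Stage 1: the core is the union of all singleton cells.
core = {next(iter(c)) for c in cells if len(c) == 1}

# Stage 2: greedily add the most frequent attribute among still-uncovered cells.
superreduct = set(core)
uncovered = [c for c in cells if not (c & superreduct)]
while uncovered:
    counts = Counter(a for c in uncovered for a in c)
    superreduct.add(counts.most_common(1)[0][0])
    uncovered = [c for c in cells if not (c & superreduct)]

print("core:", core, "superreduct:", superreduct)
```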

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 186
934 Factors Impacting Entrepreneurial Intention: A Literature Review

Authors: Abir S. AL-Harrasi, Eyad B. AL-Zadjali, Zahran S. AL-Salti

Abstract:

Entrepreneurship has captured the attention of policy-makers, educators and researchers in the last few decades. It has been regarded as a main driver for economic growth, development and employment generation in many countries worldwide. However, scholars have not agreed on the key factors that impact entrepreneurial intention. This study attempts, through an extensive literature review, to provide a holistic view and a more comprehensive understanding of the key factors that lead university undergraduate students to become entrepreneurs. A systematic literature review is conducted and several scientific articles and reports have been examined. The results of this study indicate that there are four main sets of factors: the personality-traits factors, contextual factors, motivational factors, and personal background factors. This research will serve as a base for future studies and will have valuable implications for policy makers and educators.

Keywords: entrepreneurship, entrepreneurial intention, literature review, economic growth

Procedia PDF Downloads 252
933 Variations in the Frequency-Magnitude Distribution with Depth in Kalabsha Area, Aswan, South Egypt

Authors: Ezzat Mohamed El-Amin

Abstract:

Mapping the earthquake-size distribution in various tectonic regimes on local to regional scales reveals statistically significant variations, in the range of at least 0.4 to 2.0, in the b-value of the frequency-magnitude distribution. We map the earthquake frequency-magnitude distribution (b value) as a function of depth in the Reservoir Triggered Seismicity (RTS) area of Kalabsha, in south Egypt. About 1680 well-located events recorded during 1981–2014 in the Kalabsha area are selected for the analysis. The earthquake data sets are separated into 5 km slices from 0 to 25 km depth. The results show a systematic decrease in b value down to 12 km, followed by an increase. The increase in b value is interpreted to be caused by the presence of fluids. We also investigate the spatial distribution of the b value with depth. Significant variations in the b value are detected, with b ranging from 0.7 to 1.19. Low b-value areas at 5 km depth indicate localized high stresses, which are favorable for future rupture.
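
The b value in each depth slice is typically obtained with the maximum-likelihood estimator for the Gutenberg-Richter relation log10(N) = a - bM. The Python sketch below applies the Aki (1965) estimator to a synthetic catalogue split into 5 km depth slices; the completeness magnitude, the synthetic data, and the slicing are illustrative assumptions, not the study's catalogue.

```python
import numpy as np

def b_value(magnitudes, m_c):
    """Aki (1965) maximum-likelihood b-value for events with M >= m_c.
    (For a catalogue binned to width dM, m_c is usually replaced by m_c - dM/2.)"""
    m = magnitudes[magnitudes >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

rng = np.random.default_rng(7)
# synthetic catalogue of 1680 events: magnitudes exponential above M_c = 2.0 (true b = 1.0)
mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=1680)
depths = rng.uniform(0.0, 25.0, size=1680)

# b value in 5 km depth slices from 0 to 25 km, as in the abstract
for z0 in range(0, 25, 5):
    sel = (depths >= z0) & (depths < z0 + 5)
    print(f"{z0:2d}-{z0 + 5:2d} km: b = {b_value(mags[sel], m_c=2.0):.2f}")
```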

Keywords: seismicity, frequency-magnitude, b-value, earthquake

Procedia PDF Downloads 534
932 The Influence of Emotional Intelligence Skills on Innovative Start-Ups Coaching: A Neuro-Management Approach

Authors: Alina Parincu, Giuseppe Empoli, Alexandru Capatina

Abstract:

The purpose of this paper is to identify the most influential predictors, among the emotional intelligence skills of 20 business innovation coaches, of the co-creation of knowledge through coaching services delivered to innovative start-ups from Europe funded through Horizon 2020 – SME Instrument. We considered the emotional intelligence skills (self-awareness, self-regulation, motivation, empathy and social skills) as antecedent conditions of the outcome, namely the quality of coaching services as perceived by the entrepreneurs who received funding within the SME Instrument, using the fuzzy-set qualitative comparative analysis (fsQCA) approach. The findings reveal that emotional intelligence skills, trained with neuro-management techniques, were associated with increased goal-focused business coaching skills.
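
For illustration, the two core fsQCA measures, consistency and coverage, can be computed as below for a single candidate condition against the outcome; the membership scores are invented for the example, and the calibration of raw data into fuzzy sets and the truth-table analysis are not shown.

```python
import numpy as np

def consistency(x, y):
    """Degree to which cases with condition x also show outcome y (x 'subset of' y)."""
    return np.minimum(x, y).sum() / x.sum()

def coverage(x, y):
    """Degree to which the outcome y is accounted for by the condition x."""
    return np.minimum(x, y).sum() / y.sum()

empathy = np.array([0.9, 0.7, 0.4, 0.8, 0.2, 0.6])   # fuzzy membership of six coaches in "high empathy"
quality = np.array([0.8, 0.9, 0.5, 0.7, 0.3, 0.9])   # membership in "high perceived coaching quality"

print("consistency:", round(consistency(empathy, quality), 3))
print("coverage:   ", round(coverage(empathy, quality), 3))
```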

Keywords: neuro-management, innovative start-ups, business coaching, fsQCA

Procedia PDF Downloads 134
931 The Administration of Infectious Diseases during the COVID-19 Pandemic and the Role of Differential Diagnosis with the Biomarker VB10

Authors: Sofia Papadimitriou

Abstract:

INTRODUCTION: The differential diagnosis between acute viral and bacterial infections is an important cost-effectiveness parameter at the treatment stage, where the aim is to achieve the maximum therapeutic benefit at the minimum cost while ensuring the proper use of antibiotics. The discovery of sensitive and robust molecular diagnostic tests based on the host response to infection has enhanced the accurate diagnosis and differentiation of infections. METHOD: The study used six independent sets of blood samples (total = 756) associated with human protein-protein interactions, each of which, at the transcription stage, expresses a different host-network response to viral versus bacterial infections. The individual blood samples are passed through a sequence of computational filters that identify a gene panel corresponding to a standalone diagnostic score; the data set and the mapping of the gene panel to the diagnostic score constitute the new Bangalore Viral-Bacterial (BL-VB) signature. FINDINGS: We use a blood-based biomarker of 10 genes (Panel-VB) that has important prognostic value for distinguishing viral from bacterial infections, with a weighted average AUROC of 0.97 (95% CI: 0.96-0.99) in eleven independent data sets (n = 898). We derived a patient-level score (VB10) based on the panel, which has significant diagnostic value with a weighted average AUROC of 0.94 (95% CI: 0.91-0.98) in 2996 patient samples from 56 public data sets from 19 different countries. We also studied VB10 in a new South Indian cohort (BL-VB, n = 56) and found 97% accuracy in confirmed cases of viral and bacterial infections. We found that VB10 (a) accurately identifies the type of infection even in culture-negative, unspecified cases, (b) reflects the patient's clinical recovery, and (c) applies to all age groups, covering a wide range of acute bacterial and viral infections, including those caused by non-specific pathogens. We applied our VB10 score to publicly available COVID-19 data and found that the score diagnosed viral infection in patient samples. RESULTS: The results of the study demonstrate the diagnostic power of the VB10 biomarker as a test for the accurate diagnosis of acute infections and the monitoring of recovery. We anticipate that it will help clinicians make decisions about prescribing antibiotics and that it will be integrated into antibiotic stewardship policies. CONCLUSIONS: Overall, we have developed a new RNA-based biomarker and a new blood test to differentiate between viral and bacterial infections, to assist physicians in designing the optimal treatment regimen, to contribute to the proper use of antibiotics, and to reduce the burden of antimicrobial resistance (AMR).

Keywords: acute infections, antimicrobial resistance, biomarker, blood transcriptome, systems biology, classifier diagnostic score

Procedia PDF Downloads 126
930 Aspects of the Detail Design of an Automated Biomethane Test

Authors: Ilias Katsanis, Paraskevas Papanikos, Nikolas Zacharopoulos, Vassilis C. Moulianitis, Evgenios Scourboutis, Diamantis T. Panagiotarakos

Abstract:

This paper presents aspects of the detailed design of an automated biomethane potential (BMP) measurement system using CAD techniques. First, the design specifications, grouped into eight sets and used to develop the design alternatives, are briefly presented. Then, the major components of the final concept, as well as the design of the test, are presented. The material selection process is carried out using the ANSYS EduPack database software. The mechanical behavior of one component, developed in Creo v.5, is evaluated using finite element analysis. Finally, aspects of the software development that integrates the BMP test are presented. This paper shows the advantages of CAD techniques in product design, applied here to the design of a mechatronic product.

Keywords: automated biomethane test, detail mechatronics design, materials selection, mechanical analysis

Procedia PDF Downloads 51
929 Identifying Families in C-SPAN's U.S. Presidential Ratings: 2000, 2009, and 2017

Authors: Alexander Cramer, Kenneth Cramer

Abstract:

Since the inauguration of President George Washington in 1789, the United States of America has seen the governance of some 44 individual presidents. Although such presidents share a variety of attributes, they still differ from one another on many others. Significantly, these traits may be used to construct distinct sets of 'families' of presidents throughout American history. By comparatively analyzing data from experts on the U.S. presidency – in this case, the C-SPAN Presidential Historians Surveys from 2000, 2009, and 2017 – this article identifies a consistent set of six presidential families: the All Stars; the Conservative Visionaries; the Postwar Progressives; the Average Joes; the Forgettables; and the Regrettables. In situating these categories in history, this article argues that U.S. presidents can be accurately organized into cohesive, like-performing families whose constituents share a common set of criteria.

Keywords: C-SPAN, POTUS presidential performance, presidential ranking, presidential studies, presidential surveys, United States

Procedia PDF Downloads 153
928 Undirected Endo-Cayley Digraphs of Cyclic Groups of Prime Order

Authors: Chanon Promsakon, Sayan Panma

Abstract:

Let S be a finite semigroup, A a subset of S, and f an endomorphism on S. The endo-Cayley digraph of a semigroup S corresponding to a connecting set A and an endomorphism f, denoted by endo-Cay_f(S, A), is the digraph whose vertex set is S and in which a vertex u is adjacent to a vertex v if and only if v = f(u)a for some a ∈ A. A digraph D is called undirected if for any edge uv in D, there exists an edge vu in D. We consider the undirectedness of endo-Cayley digraphs of cyclic groups of prime order, Zp. In this work, we investigate conditions on connecting sets and endomorphisms that make endo-Cayley digraphs of cyclic groups of prime order undirected. Moreover, we give some conditions for an undirected endo-Cayley digraph of a cyclic group of any order.

Keywords: endo-Cayley graph, undirected digraphs, cyclic groups, endomorphism

Procedia PDF Downloads 306
927 Solving Flowshop Scheduling Problems with Ant Colony Optimization Heuristic

Authors: Arshad Mehmood Ch, Riaz Ahmad, Imran Ali Ch, Waqas Durrani

Abstract:

This study deals with the application of the Ant Colony Optimization (ACO) approach to solve the no-wait flowshop scheduling problem (NW-FSSP). The ACO algorithm so developed has been coded in Matlab. The paper covers the detailed steps to apply ACO and focuses on judging the strength of ACO in relation to other solution techniques previously applied to solve the no-wait flowshop problem. The general-purpose approach was able to find reasonably accurate solutions for almost all the problems under consideration and was able to handle a fairly large spectrum of problems with far reduced CPU effort. Careful scrutiny of the results reveals that the algorithm presented performs better than other approaches, such as genetic algorithm and tabu search heuristics, earlier applied to solve NW-FSSP data sets.
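
Any metaheuristic for the NW-FSSP, including the ACO used here, needs an objective function that respects the no-wait constraint. The Python sketch below computes the minimum start-time delay between consecutive jobs and the resulting makespan of a sequence, and checks all permutations of a small illustrative instance; the processing times are assumptions, and the pheromone model and tour construction of the ACO itself are not shown.

```python
from itertools import permutations
import numpy as np

p = np.array([[3, 5, 2],          # p[j][m] = processing time of job j on machine m
              [4, 2, 6],
              [2, 4, 3],
              [5, 3, 4]])

def delay(i, j):
    """Minimum gap between the starts of consecutive jobs i then j so that j never waits."""
    m = p.shape[1]
    return max(p[i, :k + 1].sum() - p[j, :k].sum() for k in range(m))

def makespan(seq):
    """No-wait makespan of a job sequence: accumulated delays plus the span of the last job."""
    return sum(delay(a, b) for a, b in zip(seq, seq[1:])) + p[seq[-1]].sum()

best = min(permutations(range(len(p))), key=makespan)     # exhaustive check on 4 jobs
print("best sequence:", best, "makespan:", makespan(best))
```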

Keywords: no-wait, flowshop, scheduling, ant colony optimization (ACO), makespan

Procedia PDF Downloads 400
926 An Overview of New Era in Food Science and Technology

Authors: Raana Babadi Fathipour

Abstract:

The strict requirements of scientific journals, which jointly demand that experimental data be shown to be (in)significant from the statistical point of view, have driven a steep increase in the use and development of statistical software. Notably, the use of mathematical and statistical methods, including chemometrics and many other statistical methods/algorithms, in food science and technology has increased steeply over the last 20 years. The computational tools available can be used not only to run statistical analyses such as univariate and bivariate tests, as well as multivariate calibration and the development of complex models, but also to run simulations of different scenarios for a given set of inputs, or simply to make predictions for particular data sets or conditions. A quick search in the most reputable scientific databases (PubMed, ScienceDirect, Scopus) shows that statistical methods have gained enormous ground in many areas.

Keywords: food science, food technology, food safety, computational tools

Procedia PDF Downloads 37
925 Performance Analysis and Optimization for Diagonal Sparse Matrix-Vector Multiplication on Machine Learning Unit

Authors: Qiuyu Dai, Haochong Zhang, Xiangrong Liu

Abstract:

Diagonal sparse matrix-vector multiplication is a well-studied topic in the fields of scientific computing and big data processing. However, when diagonal sparse matrices are stored in DIA format, there can be a significant number of padded zero elements and scattered points, which can lead to a degradation in the performance of the current DIA kernel. This can also lead to excessive consumption of computational and memory resources. In order to address these issues, the authors propose the DIA-Adaptive scheme and its kernel, which leverages the parallel instruction sets on MLU. The researchers analyze the effect of allocating a varying number of threads, clusters, and hardware architectures on the performance of SpMV using different formats. The experimental results indicate that the proposed DIA-Adaptive scheme performs well and offers excellent parallelism.
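
For context, a plain (non-adaptive) DIA SpMV kernel can be sketched in a few lines. The NumPy example below uses the usual DIA layout, in which data[k, j] stores A[j - offsets[k], j] with zero padding; it is only a reference implementation, not the proposed DIA-Adaptive kernel or its MLU thread/cluster scheduling.

```python
import numpy as np

def dia_spmv(data, offsets, x):
    """y = A @ x for A stored in DIA format: data[k, j] = A[j - offsets[k], j]."""
    n = x.size
    y = np.zeros(n)
    for diag, off in zip(data, offsets):
        rows = np.arange(0, n - off) if off >= 0 else np.arange(-off, n)
        y[rows] += diag[rows + off] * x[rows + off]
    return y

offsets = np.array([-1, 0, 1])                # a tridiagonal example
data = np.array([
    [1.0, 2.0, 3.0, 0.0],                     # sub-diagonal   A[i+1, i], padded at the end
    [4.0, 5.0, 6.0, 7.0],                     # main diagonal  A[i, i]
    [0.0, 8.0, 9.0, 10.0],                    # super-diagonal A[i, i+1], padded at the start
])
x = np.array([1.0, 2.0, 3.0, 4.0])
print(dia_spmv(data, offsets, x))

# Cross-check against a dense reconstruction of the same matrix
A = np.diag(data[1]) + np.diag(data[0][:-1], -1) + np.diag(data[2][1:], 1)
print(A @ x)
```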

Keywords: adaptive method, DIA, diagonal sparse matrices, MLU, sparse matrix-vector multiplication

Procedia PDF Downloads 78
924 Empirical and Indian Automotive Equity Portfolio Decision Support

Authors: P. Sankar, P. James Daniel Paul, Siddhant Sahu

Abstract:

A brief review of the empirical studies on stock market decision support methodology indicates that they are at the threshold of validating the accuracy of the traditional models and of the fuzzy, artificial neural network and decision tree models. Many researchers have been attempting to compare these models using various data sets worldwide. However, the research community has yet to reach conclusive confidence in the emerging models. This paper uses automotive sector stock prices from the National Stock Exchange (NSE), India, and analyzes them for intra-sectoral support for stock market decisions. The study identifies the significant variables, and their lags, which affect the price of the stocks, using OLS analysis and decision tree classifiers.

Keywords: Indian automotive sector, stock market decisions, equity portfolio analysis, decision tree classifiers, statistical data analysis

Procedia PDF Downloads 451
923 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

There exist a plethora of methods in the scientific literature which tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc. and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority being classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state of the art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, of which we mention the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score and Kolmogorov-Smirnov statistic, respectively. Finally, we obtain encouraging results, which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 91
922 The Analogue of a Property of Pisot Numbers in Fields of Formal Power Series

Authors: Wiem Gadri

Abstract:

This study delves into the intriguing properties of Pisot and Salem numbers within the framework of formal Laurent series over finite fields, a domain where these numbers’ spectral charac-teristics, Λm(β) and lm(β), have yet to be fully explored. Utilizing a methodological approach that combines algebraic number theory with the analysis of power series, we extend the foundational work of Erdos, Joo, and Komornik to this new setting. Our research uncovers bounds for lm(β), revealing how these depend on the degree of the minimal polynomial of β and thus offering a novel characterization of Pisot and Salem formal power series. The findings significantly contribute to our understanding of these numbers, highlighting their distribution and properties in the context of formal power series. This investigation not only bridges number theory with formal power series analysis but also sets the stage for further interdisciplinary research in these areas.

Keywords: Pisot numbers, Salem numbers, formal power series, over a finite field

Procedia PDF Downloads 15
921 Black Box Model and Evolutionary Fuzzy Control Methods of Coupled-Tank System

Authors: S. Yaman, S. Rostami

Abstract:

In this study, a black-box model of the coupled-tank system is obtained by using fuzzy sets. The derived model is tested via an adaptive neuro-fuzzy inference system (ANFIS). In order to achieve better control performance, the parameters of three different controller types, the classical proportional-integral-derivative (PID) controller, the fuzzy PID controller, and the function tuner method, are tuned by genetic algorithm, one of the evolutionary computation methods. All tuned controllers are applied to the fuzzy model of the coupled-tank experimental setup and analyzed under different reference input values. According to the results, the function tuner method demonstrates better robust control performance and guarantees closed-loop stability.
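
The GA-based tuning loop can be sketched compactly. In the Python example below, PID gains are evolved against the integral of squared error of a unit step response on a simple first-order surrogate plant; the plant model, gain bounds, and GA settings are illustrative assumptions, and the ANFIS-identified fuzzy model of the coupled tanks used in the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, t_end, setpoint = 0.1, 30.0, 1.0

def cost(gains):
    """Integral of squared error for a unit step on a first-order plant h' = -0.5h + 0.5u."""
    kp, ki, kd = gains
    h, integ, e_prev, ise = 0.0, 0.0, setpoint, 0.0
    for _ in range(int(t_end / dt)):
        e = setpoint - h
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        u = np.clip(u, 0.0, 5.0)                       # actuator (pump) saturation
        h += dt * (-0.5 * h + 0.5 * u)                 # Euler step of the plant
        ise += e * e * dt
        e_prev = e
    return ise

# Simple real-coded GA: elitism + blend crossover + Gaussian mutation
bounds = np.array([[0.0, 10.0], [0.0, 5.0], [0.0, 2.0]])      # ranges for [Kp, Ki, Kd]
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(30, 3))
for _ in range(40):
    fitness = np.array([cost(ind) for ind in pop])
    elites = pop[np.argsort(fitness)[:6]]                      # keep the best individuals
    children = []
    while len(children) < len(pop) - len(elites):
        a, b = elites[rng.integers(6, size=2)]
        child = 0.5 * (a + b) + rng.normal(0.0, 0.1, size=3)   # crossover + mutation
        children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.vstack([elites, children])

best = pop[np.argmin([cost(ind) for ind in pop])]
print("tuned gains Kp, Ki, Kd:", best.round(3))
```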

Keywords: function tuner method (FTM), fuzzy modeling, fuzzy PID controller, genetic algorithm (GA)

Procedia PDF Downloads 273