Search results for: bayesian inference

72 Security of Database Using Chaotic Systems

Authors: Eman W. Boghdady, A. R. Shehata, M. A. Azem

Abstract:

Database (DB) security demands permitting the actions of authorized users and prohibiting those of unauthorized users and intruders on the DB and the objects inside it. Successfully run organizations demand confidentiality for their DBs. They do not allow unauthorized access to their data/information, and they demand assurance that their data is protected against any malicious or accidental modification. DB protection and confidentiality are therefore key security concerns. There are four types of controls for DB protection: access control, information flow control, inference control, and cryptographic control. Cryptographic control is considered the backbone of DB security, since it secures the DB by encryption during storage and communication. Current cryptographic techniques fall into two types: traditional classical cryptography using standard algorithms (DES, AES, IDEA, etc.) and chaos cryptography using continuous (Chua, Rossler, Lorenz, etc.) or discrete (Logistic, Henon, etc.) algorithms. The most important characteristic of chaos is its extreme sensitivity to the initial conditions of the system. In this paper, DB-security systems based on chaotic algorithms are described. Pseudo Random Number Generators (PRNGs) based on the different chaotic algorithms are implemented in Matlab, and their statistical properties are evaluated using NIST and other statistical test suites. These algorithms are then used to secure a conventional DB (plaintext), and the statistical properties of the ciphertext are also tested. To increase the complexity of the PRNGs and to ensure they pass all the NIST statistical tests, we propose two hybrid PRNGs: one based on two chaotic Logistic maps and another based on two chaotic Henon maps, where the two maps run side by side, starting from independent random initial conditions and parameters (encryption keys). The resulting hybrid PRNGs passed the NIST statistical test suite.
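
As a rough illustration of the hybrid construction described above (not the authors' Matlab implementation), the sketch below combines two independently keyed logistic maps into one keystream and uses it as a stream cipher; the map parameters and the byte-mixing step are illustrative assumptions.

```python
# Illustrative sketch of a hybrid PRNG built from two logistic maps running
# side by side with independent keys (initial conditions and parameters).
# This is not the authors' Matlab code; constants and mixing are assumptions.

def logistic(x, r):
    """One iteration of the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    return r * x * (1.0 - x)

def hybrid_keystream(n_bytes, key1=(0.4123, 3.9991), key2=(0.7321, 3.9987)):
    """Generate n_bytes pseudo-random bytes by XOR-mixing two logistic maps."""
    x1, r1 = key1
    x2, r2 = key2
    out = bytearray()
    for _ in range(n_bytes):
        x1 = logistic(x1, r1)
        x2 = logistic(x2, r2)
        b1 = int(x1 * 256) & 0xFF      # quantize each map state to a byte
        b2 = int(x2 * 256) & 0xFF
        out.append(b1 ^ b2)            # combine the two independent streams
    return bytes(out)

def encrypt(plaintext: bytes, key1, key2) -> bytes:
    """Stream-cipher style encryption: XOR plaintext with the hybrid keystream."""
    ks = hybrid_keystream(len(plaintext), key1, key2)
    return bytes(p ^ k for p, k in zip(plaintext, ks))

if __name__ == "__main__":
    msg = b"database record"
    ct = encrypt(msg, (0.4123, 3.9991), (0.7321, 3.9987))
    pt = encrypt(ct, (0.4123, 3.9991), (0.7321, 3.9987))   # XOR is its own inverse
    assert pt == msg
```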

Keywords: algorithms and data structure, DB security, encryption, chaotic algorithms, Matlab, NIST

Procedia PDF Downloads 242
71 Class Size Effects on Reading Achievement in Europe: Evidence from Progress in International Reading Literacy Study

Authors: Ting Shen, Spyros Konstantopoulos

Abstract:

During the past three decades, class size effects have been a focal debate in education. The idea of having smaller classes is enormously popular among parents, teachers, and policy makers. The rationale for its popularity is that small classrooms could provide a better learning environment with more teacher-pupil interaction and more individualized instruction, and these early-stage benefits are expected to have long-term positive effects. It is a common belief that reducing class size may increase student achievement. However, the empirical evidence about class-size effects from experimental or quasi-experimental studies has been mixed overall. This study sheds more light on whether class size reduction impacts reading achievement in eight European countries: Bulgaria, Germany, Hungary, Italy, Lithuania, Romania, Slovakia, and Slovenia. We examine class size effects on reading achievement using national probability samples of fourth graders. All eight European countries participated in the Progress in International Reading Literacy Study (PIRLS) in 2001, 2006, and 2011. Methodologically, the quasi-experimental method of instrumental variables (IV) is used to facilitate causal inference about class size effects. Overall, the results indicate that class size effects on reading achievement are not significant across countries and years. However, class size effects are evident in Romania, where reducing class size increases reading achievement. In contrast, in Germany, increasing class size seems to increase reading achievement. In future work, it would be valuable to evaluate differential class size effects for minority or economically disadvantaged student groups or for low- and high-achievers. Replication studies with different samples and in various settings would also be informative. Future research should continue examining class size effects in different age groups and countries using rich international databases.
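
A minimal sketch of the instrumental-variables idea used here, written as textbook two-stage least squares (2SLS) with NumPy; the variable names and synthetic data are illustrative assumptions, not PIRLS data or the study's exact estimator.

```python
# Minimal two-stage least squares (2SLS) sketch: instrument Z, endogenous
# class size X, outcome Y (reading score). Synthetic data for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=n)                      # instrument (e.g., a class-size rule)
u = rng.normal(size=n)                      # unobserved confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)  # endogenous class size
y = -0.3 * x + 0.5 * u + rng.normal(size=n) # reading achievement, true effect -0.3

def ols(X, y):
    """Ordinary least squares with an intercept column added."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Stage 1: regress the endogenous regressor on the instrument.
b1 = ols(z, x)
x_hat = b1[0] + b1[1] * z

# Stage 2: regress the outcome on the fitted values from stage 1.
b2 = ols(x_hat, y)
print("2SLS estimate of the class-size effect:", round(b2[1], 3))
```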

Keywords: class size, reading achievement, instrumental variables, PIRLS

Procedia PDF Downloads 267
70 Quality of Service Based Routing Algorithm for Real Time Applications in MANETs Using Ant Colony and Fuzzy Logic

Authors: Farahnaz Karami

Abstract:

Routing is an important and challenging task in mobile ad hoc networks due to node mobility, lack of central control, unstable links, and limited resources. Ant colony optimization has been found to be an attractive technique for routing in Mobile Ad Hoc Networks (MANETs). However, existing swarm intelligence based routing protocols find an optimal path by considering only one or two route selection metrics, without considering the correlations among such parameters, which makes them unsuitable on their own for routing real-time applications. Fuzzy logic can combine multiple route selection parameters that contain uncertain or imprecise information, but it does not naturally provide multipath routing for load balancing. The objective of this paper is to design a routing algorithm using fuzzy logic and ant colony optimization that can solve some of the routing problems in mobile ad hoc networks, such as optimizing node energy consumption to increase network lifetime, reducing the link failure rate to increase packet delivery reliability, and providing load balancing to make better use of the available bandwidth. In the proposed algorithm, the ants supply path information to a fuzzy inference system. Based on the available path information and the parameters required for quality of service (QoS), the fuzzy cost of each path is calculated and the optimal paths are selected. NS-2.35 simulation tools are used for simulation, and the results are compared with the newest QoS-based algorithms in MANETs according to the packet delivery ratio, end-to-end delay, and routing overhead ratio criteria. The simulation results show a significant improvement in the performance of these networks in terms of decreasing end-to-end delay and routing overhead ratio and increasing packet delivery ratio.
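
As a toy illustration of how a fuzzy cost could be computed from the path metrics that the ants collect (the membership functions, metric ranges, and rule weights below are assumptions, not the paper's tuned inference system):

```python
# Toy fuzzy path-cost evaluation from ant-collected QoS metrics.
# Membership functions, ranges, and rule weights are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_cost(delay_ms, residual_energy, hop_count):
    """Return a crisp cost in [0, 1]; lower means a better route."""
    # Fuzzify each QoS metric (ranges are assumed for the example).
    delay_high = tri(delay_ms, 50, 150, 300)
    energy_low = tri(residual_energy, 0.0, 0.0, 0.5)
    hops_many  = tri(hop_count, 3, 10, 20)
    # Simple weighted aggregation standing in for a full fuzzy rule base.
    return 0.5 * delay_high + 0.3 * energy_low + 0.2 * hops_many

paths = {
    "P1": (40, 0.9, 4),    # (end-to-end delay ms, residual energy, hop count)
    "P2": (180, 0.4, 12),
}
best = min(paths, key=lambda p: fuzzy_cost(*paths[p]))
print("selected path:", best)
```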

Keywords: mobile ad hoc networks, routing, quality of service, ant colony, fuzzy logic

Procedia PDF Downloads 33
69 Companies’ Internationalization: Multi-Criteria-Based Prioritization Using Fuzzy Logic

Authors: Jorge Anibal Restrepo Morales, Sonia Martín Gómez

Abstract:

A model based on a logical framework was developed to quantify SMEs' internationalization capacity. To do so, linguistic variables, such as human talent, infrastructure, innovation strategies, FTAs, marketing strategies, finance, etc., were integrated. It is argued that a company's management of international markets depends on internal factors, especially the capabilities and resources available. This study considers internal factors the biggest business challenge because they force companies to develop an adequate set of capabilities; at this stage, importance and strategic relevance have to be defined in order to build competitive advantages. A fuzzy inference system is proposed to model the resources, skills, and capabilities that determine the success of internationalization. Data: 157 linguistic variables were used. These variables were defined by international trade entrepreneurs, experts, consultants, and researchers. Using expert judgment, the variables were condensed into 18 factors that explain SMEs' export capacity. The proposed model is applied by means of a case study of the textile and clothing cluster in Medellin, Colombia. In the model implementation, a general index of 28.2 was obtained for internationalization capabilities. The result confirms that the sector's current capabilities and resources are not sufficient for successful integration into the international market. The model specifies the factors and variables which need to be worked on in order to improve export capability. In the case of the textile companies, the lack of continuous recording of information stands out. Likewise, there are very few studies directed towards developing long-term plans, and there is little consistency in export criteria. This method emerges as an innovative management tool linked to internal organizational spheres and their different abilities.

Keywords: business strategy, exports, internationalization, fuzzy set methods

Procedia PDF Downloads 270
68 Phylogenetic Analysis Based On the Internal Transcribed Spacer-2 (ITS2) Sequences of Diadegma semiclausum (Hymenoptera: Ichneumonidae) Populations Reveals Significant Adaptive Evolution

Authors: Ebraheem Al-Jouri, Youssef Abu-Ahmad, Ramasamy Srinivasan

Abstract:

The parasitoid Diadegma semiclausum (Hymenoptera: Ichneumonidae) is one of the most effective exotic parasitoids of the diamondback moth (DBM), Plutella xylostella, in the lowland areas of Homs, Syria. Molecular evolution studies are useful tools to shed light on the molecular bases of insect geographical spread and adaptation to new hosts and environments, and for designing better control strategies. In this study, a molecular evolution analysis was performed based on 42 nuclear internal transcribed spacer-2 (ITS2) sequences representing D. semiclausum and eight other Diadegma spp. from Syria and worldwide. Possible recombination events were identified with the RDP4 program. Four potential recombinants of the American D. insulare and D. fenestrale (Jeju) were detected. After detecting and removing recombinant sequences, the ratio of non-synonymous (dN) to synonymous (dS) substitutions per site (dN/dS = ω) was used to identify codon positions involved in adaptive processes. Bayesian techniques were applied to detect selective pressures at the codon level using five different approaches: fixed effects likelihood (FEL), internal fixed effects likelihood (IFEL), random effects likelihood (REL), the mixed effects model of evolution (MEME), and Phylogenetic Analysis by Maximum Likelihood (PAML). Among the 40 positively selected amino acids (aa) that differed significantly between clades of Diadegma species, three aa under positive selection were identified only in D. semiclausum. Additionally, all D. semiclausum tree branches were found to be under episodic diversifying selection (EDS) at p ≤ 0.05. Our study provides evidence that both recombination and positive selection have contributed to the molecular diversity of Diadegma spp. and highlights the significant contribution of adaptive evolution to fitness in the DBM parasitoid D. semiclausum.

Keywords: diadegma sp, DBM, ITS2, phylogeny, recombination, dN/dS, evolution, positive selection

Procedia PDF Downloads 390
67 Exploring Affordable Care Practs in Nigeria’s Health Insurance Discourse

Authors: Emmanuel Chinaguh, Kehinde Adeosun

Abstract:

Nigerians die untimely: life expectancy is 55.75 years, 17.45 years below the world average of 73.2 (Worldometer, 2020). This is due, among other factors, to the country's limited access to high-quality healthcare. To increase access to good and affordable healthcare services, the National Health Insurance Authority (NHIA) Bill 2022, which repealed the National Health Insurance Scheme Act 2004, was passed into law. Applying Jacob Mey's (2001) pragmatic act (pract) theory, this study explores how the NHIA seeks to actualise these healthcare goals by characterising the general situational prototypes (pragmemes) and pragmatic acts in its institutional communications. Data were sourced from the NHIA operational guidelines, which run to 147 pages in four sections, and from posters shared on the NHIA Nigeria Twitter handle, which has 14,200 followers. Digital humanities tools such as AntConc and Voyant were used for text encoding and data visualisation in the analysis. This study identifies the following discourse tokens in the data: advertisement and programmes, standards and accreditation, records and information, and offences and penalties. Advertisement and programmes pract facilitating, propagating, prospecting, advising, and informing; standards and accreditation, and records and information, pract stating, informing, and instructing; and offences and penalties pract stating and sanctioning. These practs combine to advance the goals of affordable care and universal access to quality healthcare services. The pragmatic acts were marked by these pragmatic tools: shared situational knowledge (SSK), relevance (REL), reference (REF), and inference (INF). This paper adds to the understanding of health insurance discourse in Nigeria as a mediated social practice that promotes the health of Nigerians.

Keywords: affordable care, NHIA, Nigeria’s health insurance discourse, pragmatic acts.

Procedia PDF Downloads 50
66 Detecting Memory-Related Gene Modules in sc/snRNA-seq Data by Deep-Learning

Authors: Yong Chen

Abstract:

Understanding the detailed molecular mechanisms of memory formation in engram cells is one of the most fundamental questions in neuroscience. Recent single-cell RNA-seq (scRNA-seq) and single-nucleus RNA-seq (snRNA-seq) techniques have allowed us to explore the sparsely activated engram ensembles, enabling access to the molecular mechanisms that underlie experience-dependent memory formation and consolidation. However, the absence of specific and powerful computational methods to detect memory-related genes (modules) and their regulatory relationships in sc/snRNA-seq datasets has strictly limited the analysis of the underlying mechanisms and memory coding principles in mammalian brains. Here, we present a deep-learning method named SCENTBOX to detect memory-related gene modules and causal regulatory relationships among them from sc/snRNA-seq datasets. SCENTBOX first constructs a co-differential expression gene network (CEGN) from case versus control sc/snRNA-seq datasets. It then detects highly correlated modules of differentially expressed genes (DEGs) in the CEGN. Deep network embedding and attention-based convolutional neural network strategies are employed to precisely detect regulatory relationships among the DEGs in a module. We applied SCENTBOX to scRNA-seq datasets of TRAP;Ai14 mouse neurons with fear memory and detected not only known memory-related genes but also their modules and potential causal regulations. Our results revealed novel regulations within an interesting module including Arc, Bdnf, Creb, Dusp1, Rgs4, and Btg2. Overall, our method provides a general computational tool for processing sc/snRNA-seq data from case versus control studies and for the systematic investigation of fear-memory-related gene modules.
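
A highly simplified sketch of the first step only (grouping correlated differentially expressed genes into candidate modules); the thresholds and toy expression matrix are assumptions, and the deep network embedding and attention stages of SCENTBOX are not shown.

```python
# Toy module detection: correlate DEGs across cells and group genes whose
# absolute correlation exceeds a threshold into connected components (modules).
import numpy as np

rng = np.random.default_rng(1)
n_cells, genes = 200, ["Arc", "Bdnf", "Creb", "Dusp1", "Rgs4", "Btg2"]
base = rng.normal(size=n_cells)
expr = np.column_stack([base + 0.3 * rng.normal(size=n_cells) for _ in genes[:3]] +
                       [rng.normal(size=n_cells) for _ in genes[3:]])

corr = np.corrcoef(expr, rowvar=False)            # gene-by-gene correlation
adj = (np.abs(corr) > 0.6) & ~np.eye(len(genes), dtype=bool)

# Connected components of the thresholded correlation graph = candidate modules.
modules, seen = [], set()
for i in range(len(genes)):
    if i in seen:
        continue
    stack, comp = [i], set()
    while stack:
        g = stack.pop()
        if g in comp:
            continue
        comp.add(g)
        stack.extend(np.flatnonzero(adj[g]))
    seen |= comp
    modules.append([genes[g] for g in sorted(comp)])

print(modules)
```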

Keywords: sc/snRNA-seq, memory formation, deep learning, gene module, causal inference

Procedia PDF Downloads 86
65 A Distributed Mobile Agent Based on Intrusion Detection System for MANET

Authors: Maad Kamal Al-Anni

Abstract:

This study concerns an artificial neural network approach, based on the Multilayer Perceptron (MLP), to the classification and clustering of Mobile Ad Hoc Network vulnerabilities. A mobile ad hoc network (MANET) is an autonomous system of intelligent internetworking mobile devices, connected via wireless links, that can sense their environment. Security is the most important concern in MANETs because such auto-configuring networks are easily penetrated. One of the powerful techniques used for inspecting network packets is the Intrusion Detection System (IDS); in this article, we show the effectiveness of artificial neural networks used for machine learning, along with a stochastic approach (information gain), in classifying malicious behaviors in a simulated network with respect to different IDS techniques. The monitoring agent is responsible for the detection inference engine; the audit data is collected from the collecting agent by simulating node attacks, and the outputs are contrasted with the normal behaviors of the framework. Whenever there is any deviation from the ordinary behaviors, the monitoring agent considers the event an attack. We demonstrate a signature-based IDS approach in a MANET by implementing the back-propagation algorithm over an ensemble-based Traffic Table (TT), so that the signatures of malicious behaviors or undesirable activities can be predicted and identified efficiently. By tuning the parameters of the back-propagation algorithm, the experimental results empirically show its effectiveness, with a detection index of up to 98.6 percent. Performance metrics are also included, with Xgraph plots of different throughputs such as Packet Delivery Ratio (PDR), Throughput (TP), and Average Delay (AD).
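
A minimal sketch of the MLP classification step on traffic features; the feature names and synthetic data below are assumptions standing in for the NS2-simulated audit data and ensemble-based Traffic Table used in the study.

```python
# Sketch: train a multilayer perceptron (back-propagation) to label traffic
# records as normal or malicious. Features and data are invented placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000
# Assumed features: packet rate, drop ratio, route-request frequency.
X_normal = rng.normal([50, 0.02, 5], [10, 0.01, 2], size=(n // 2, 3))
X_attack = rng.normal([120, 0.30, 40], [20, 0.05, 8], size=(n // 2, 3))
X = np.vstack([X_normal, X_attack])
y = np.array([0] * (n // 2) + [1] * (n // 2))   # 0 = normal, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)                              # back-propagation training
print("detection accuracy:", round(clf.score(X_te, y_te), 3))
```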

Keywords: Intrusion Detection System (IDS), Mobile Adhoc Networks (MANET), Back Propagation Algorithm (BPA), Neural Networks (NN)

Procedia PDF Downloads 164
64 Lexical Semantic Analysis to Support Ontology Modeling of Maintenance Activities – Case Study of Offshore Riser Integrity

Authors: Vahid Ebrahimipour

Abstract:

Word representation and the contextual meaning of text-based documents play an essential role in knowledge modeling. Business procedures written in natural language are meant to store technical and engineering information, management decisions, and operational experience during the production system life cycle. Context meaning representation is highly dependent upon word sense, lexical relativity, and the semantic features of the argument. This paper proposes a method for lexical semantic analysis and context meaning representation of maintenance activity in a mass production system. Our approach constructs a straightforward lexical semantic analysis of the semantic and syntactic features of the context structure of maintenance reports, to facilitate the translation, interpretation, and conversion of human-readable text into a computer-readable representation with less heterogeneity and ambiguity. The methodology enables users to obtain a representation format that maximizes shareability and accessibility for multi-purpose usage. It provides a contextualized structure from which a generic context model can be obtained and utilized during the system life cycle. First, it employs a co-occurrence-based clustering framework to recognize a group of highly frequent contextual features that correspond to a maintenance report text. Then the keywords are identified for syntactic and semantic extraction analysis. The analysis applies causality-driven logic over keyword senses to reveal the structural and meaning dependency relationships between the words in a context. The output is a contextualized word representation of maintenance activity that supports computer-based representation and inference using OWL/RDF.
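
A small sketch of the co-occurrence step on toy maintenance sentences; the example text, vocabulary, and co-occurrence threshold are assumptions, and the downstream keyword analysis and OWL/RDF output are not shown.

```python
# Toy co-occurrence analysis over maintenance report sentences: build a
# term-document matrix, derive term-term co-occurrence counts, and flag
# strongly associated keyword pairs. Example sentences are invented.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

reports = [
    "riser clamp corrosion detected during inspection",
    "corrosion repair of riser clamp completed",
    "pump seal leak detected during inspection",
    "pump seal replaced after leak",
]
vec = CountVectorizer()
X = vec.fit_transform(reports)               # documents x terms
cooc = (X.T @ X).toarray()                   # term x term co-occurrence counts
np.fill_diagonal(cooc, 0)
terms = vec.get_feature_names_out()

# Report pairs of terms that co-occur in at least two reports.
for i in range(len(terms)):
    for j in range(i + 1, len(terms)):
        if cooc[i, j] >= 2:
            print(terms[i], "<->", terms[j], "count:", cooc[i, j])
```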

Keywords: lexical semantic analysis, metadata modeling, contextual meaning extraction, ontology modeling, knowledge representation

Procedia PDF Downloads 79
63 Introducing Two Species of Parastagonospora (Phaeosphaeriaceae) on Grasses from Italy and Russia, Based on Morphology and Phylogeny

Authors: Ishani D. Goonasekara, Erio Camporesi, Timur Bulgakov, Rungtiwa Phookamsak, Kevin D. Hyde

Abstract:

Phaeosphaeriaceae comprises a large number of species occurring mainly on grasses and cereal crops as endophytes, saprobes, and especially pathogens. Parastagonospora is an important genus in Phaeosphaeriaceae that includes pathogens causing leaf and glume blotch on cereal crops. Currently, fifteen Parastagonospora species have been described, including both pathogens and saprobes. In this study, one sexual morph species and one asexual morph species, occurring as saprobes on members of Poaceae, are introduced based on morphology and a combined molecular analysis of LSU, SSU, ITS, and RPB2 gene sequence data. The sexual morph species Parastagonospora elymi was isolated from a Russian sample of Elymus repens, a grass commonly known as couch grass, which is important for grazing animals, notable as a weed, and used in traditional Austrian medicine. P. elymi is similar to the sexual morph of P. avenae in having cylindrical asci bearing eight overlapping, biseriate, fusiform ascospores, but it can be distinguished by its wider, subglobose to conical ascomata. In addition, no sheath was observed surrounding the ascospores. The asexual morph species was isolated from a specimen from Italy on Dactylis glomerata, a common grass distributed in temperate regions. It is introduced as Parastagonospora macrouniseptata, a coelomycete, and bears a close resemblance to P. allouniseptata and P. uniseptata in having globose to subglobose, pycnidial conidiomata and hyaline, cylindrical, 1-septate conidia. However, the new species can be distinguished by its much larger conidiomata. In the phylogenetic analysis, which consisted of maximum likelihood and Bayesian analyses, P. elymi showed low bootstrap support but was well segregated from other strains within the Parastagonospora clade. P. neoallouniseptata formed a sister clade with P. allouniseptata with high statistical support.

Keywords: dothideomycetes, multi-gene analysis, Poaceae, saprobes, taxonomy

Procedia PDF Downloads 91
62 Application of Host Factors as Biomarker in Early Diagnosis of Pulmonary Tuberculosis

Authors: Ambrish Tiwari, Sudhasini Panda, Archana Singh, Kalpana Luthra, S. K. Sharma

Abstract:

Introduction: On the basis of the available literature, we know that various host factors play a role in the outcome of tuberculosis (TB) infection by modulating innate immunity. One such factor is the inducible nitric oxide synthase enzyme (iNOS), which helps in the production of nitric oxide (NO), an antimicrobial agent. Expression of iNOS is controlled by various host factors, one of which is vitamin D together with its nuclear receptor, the vitamin D receptor (VDR). Vitamin D, along with its receptor, also induces the production of cathelicidin (an antimicrobial agent). With this background, we investigated the levels of vitamin D and NO, along with their associated molecules, in tuberculosis patients and household contacts as compared to healthy controls, and assessed the implications of these findings for susceptibility to TB. Study subjects and methods: 100 active TB patients, 75 household contacts, and 70 healthy controls were recruited. VDR and iNOS mRNA levels were studied using real-time PCR. Serum VDR, cathelicidin, and iNOS levels were measured using ELISA. Serum vitamin D levels were measured using a chemiluminescence-based immunoassay. NO was measured using a colorimetry-based kit. Results: VDR and iNOS mRNA levels were found to be lower in the active TB group compared to household contacts and healthy controls (P=0.0001 and 0.005, respectively). Serum levels of vitamin D were also lower in the active TB group compared to healthy controls (P=0.001). Levels of cathelicidin and NO were higher in the patient group compared to the other groups (p=0.01 and 0.5, respectively). However, the expression of VDR and iNOS and the levels of vitamin D were significantly (P < 0.05) higher in household contacts compared to both the active TB and healthy control groups. Inference: The higher levels of vitamin D, along with VDR and iNOS expression, in household contacts compared to patients suggest that vitamin D might have a protective role against TB that prevents activation of the disease. From our data, we conclude that decreased vitamin D levels could be implicated in disease progression, and that cathelicidin and NO could be used as biomarkers for the early diagnosis of pulmonary tuberculosis.

Keywords: vitamin D, VDR, iNOS, tuberculosis

Procedia PDF Downloads 278
61 Collaboration-Based Islamic Financial Services: Case Study of Islamic Fintech in Indonesia

Authors: Erika Takidah, Salina Kassim

Abstract:

Digital transformation has accelerated in the new millennium and is reshaping the financial services industry from a traditional system to financial technology. Moreover, the financial inclusion rate in Indonesia is less than 60%, and an innovative model is needed to address this national problem. On the other hand, the Islamic financial services industry and financial technology are growing fast as new aspirations in economic development. Islamic banks, takaful, Islamic microfinance, Islamic financial technology, and Islamic social finance institutions could collaborate to raise the financial inclusion rate in Indonesia. The primary motive of this paper is to examine the strategy of collaboration-based Islamic financial services to enhance financial inclusion in Indonesia, particularly in the digital era. The fundamental findings concern the foundations and the key ecosystem actors involved in the development of collaboration-based Islamic financial services. Using the Interpretive Structural Model (ISM) approach, the core problems faced in developing the model are the lack of policy instruments governing collaboration-based Islamic financial services with fintech work processes and the availability of human resources for fintech. The core strategy, or foundation, needed in the framework of collaboration-based Islamic financial services is the ability to manage and analyze data in the big data era. Regarding the ecosystem, or the actors involved in developing this model, the important actors are the government or regulator, educational institutions, and the existing industries (Islamic financial services). The outcome of the study indicates that a collaboration strategy among Islamic financial services institutions requires support from robust technology, a legal and regulatory commitment from the regulators and policymakers of Islamic financial institutions, and extensive public awareness of financial inclusion in Indonesia. The study limits itself to financial inclusion, particularly Islamic finance development, in Indonesia. The study has implications for the concerned professional bodies, regulators, policymakers, stakeholders, and practitioners of Islamic financial service institutions.

Keywords: collaboration, financial inclusion, Islamic financial services, Islamic fintech

Procedia PDF Downloads 107
60 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions: kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function, and they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and has good performance in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique: for each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term "nearest proportion" used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small data-size situations, so an improved estimator based on shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to retaining the advantages of DNP, KDNP surpasses DNP in the experimental results. According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, and NWFE, and of their kernel versions, KPCA, GDA, and KNWFE.
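
The DNP and KDNP formulations themselves are not reproduced here, but the sketch below shows the generic kernel-based feature-extraction workflow the paper builds on (a kernel projection followed by a simple classifier), using kernel PCA on synthetic data; all data and parameters are assumptions, and this is not KDNP itself.

```python
# Sketch of kernel-based feature extraction for classification: project data
# with kernel PCA, then classify in the reduced space. Synthetic stand-in for
# hyperspectral pixels; this illustrates the kernelization step, not KDNP.
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_moons(n_samples=400, noise=0.15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(
    KernelPCA(n_components=2, kernel="rbf", gamma=5.0),  # nonlinear projection
    LogisticRegression(),
)
model.fit(X_tr, y_tr)
print("test accuracy with kernel features:", round(model.score(X_te, y_te), 3))
```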

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest proportion feature extraction

Procedia PDF Downloads 299
59 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation

Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro

Abstract:

This study aimed to evaluate the implications of block size and testing order for the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments; precision was defined as the inverse of the variance of the treatment mean (or effect) estimates. The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment, with 112 testers and all their grades, as well as to its partially balanced subgroups, namely: a) the experiment with the four initial EU; b) the experiment with EU 5 to 8; c) the experiment with EU 9 to 12; and d) the experiment with EU 13 to 16. Responses were recorded on a nine-point hedonic scale, and a mixed linear model was assumed, with random tester and treatment effects and a fixed test-order effect. Analysis with a cumulative random effects probit link model was very similar, with essentially no different conclusions, so for simplicity we present the results under the Gaussian assumption. The R (CRAN) library lme4 and its function lmer (fit linear mixed-effects models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with its function clmm (cumulative link mixed model), were used to check the Bayesian analysis of threshold models and the cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating acceptance. However, providing a large number of samples can help to improve sample discrimination.
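
The analysis itself was run in R with lme4 (lmer) and ordinal (clmm); purely as an illustration of the model structure described above (random tester effect, fixed testing-order effect), here is an analogous sketch in Python with statsmodels on invented data, not the Dulce de leche dataset.

```python
# Sketch of the mixed-model structure: hedonic grade ~ fixed testing-order
# effect + random tester intercept. Data below are invented; the original
# analysis used lme4::lmer in R.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
testers, orders = 40, 8
rows = []
for t in range(testers):
    tester_effect = rng.normal(0, 0.8)          # random tester effect
    for order in range(1, orders + 1):
        grade = 6.5 + tester_effect - 0.1 * order + rng.normal(0, 1.0)
        rows.append({"tester": t, "order": order, "grade": grade})
df = pd.DataFrame(rows)

# Random intercept per tester, fixed effect of testing order.
model = smf.mixedlm("grade ~ order", df, groups=df["tester"])
result = model.fit()
print(result.summary())
```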

Keywords: acceptance, block size, mixed linear model, testing order

Procedia PDF Downloads 295
58 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), in which items are selected and ability is estimated using the statistical methods of maximum information selection / selection from the posterior and maximum-likelihood (ML) / maximum a posteriori (MAP) estimation, respectively. This study aims at combining classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates the process of ability estimation, and at comparing it to traditional CAT models designed using IRT. The study uses Python as the base coding language, PyMC for statistical modelling of the IRT, and scikit-learn for the neural network implementations. On creating the models and comparing them, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although it performs worse than the IRT model, the neural network model can be beneficially used in back-ends to reduce time complexity, as the IRT model has to re-calculate the ability every time it gets a request, whereas the prediction from a trained neural network regressor can be done in a single step. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets beyond the normal IRT feature set and use a neural network's capacity for learning unknown functions to give rise to better CAT models. Categorical features, such as test type, could be learnt and incorporated into IRT functions with the help of techniques like logistic regression, and could be used to learn functions expressed as models that are not trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessment. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher-quality testing.
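
A compact sketch of the ability-estimation step for a two-parameter logistic (2PL) IRT model, using a MAP estimate with a standard-normal prior; the item parameters and response pattern are invented, and this is not the study's PyMC model.

```python
# MAP ability estimation for a 2PL IRT model: P(correct) = sigmoid(a*(theta - b)).
# Item parameters and the response pattern are invented for illustration.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # item discriminations
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])  # item difficulties
responses = np.array([1, 1, 1, 0, 0])      # observed correct / incorrect

def neg_log_posterior(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    log_lik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    log_prior = -0.5 * theta ** 2           # standard normal prior on ability
    return -(log_lik + log_prior)

theta_map = minimize_scalar(neg_log_posterior, bounds=(-4, 4), method="bounded").x
print("MAP ability estimate:", round(theta_map, 3))
```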

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 153
57 Modelling Volatility Spillovers and Cross Hedging among Major Agricultural Commodity Futures

Authors: Roengchai Tansuchat, Woraphon Yamaka, Paravee Maneejuk

Abstract:

In the recent past, the global financial crisis, economic instability, and large fluctuations in agricultural commodity prices have led to increased concern about volatility transmission among commodities. The problem is further exacerbated when a commodity's volatility is driven by other commodities' price fluctuations; hence, deciding on a hedging strategy has become both costly and less effective. This paper therefore analyses the volatility spillover effect among major agricultural commodities, including corn, soybeans, wheat, and rice, to help commodity suppliers hedge their portfolios and manage their risk and co-volatility. We provide a switching-regime approach to analyzing volatility spillovers in different economic conditions, namely economic upturns and downturns. In particular, we investigate the relationships and volatility transmissions between these commodities in different economic conditions. We propose a copula-based multivariate Markov-switching GARCH model with two regimes that depend on the economic conditions and perform a simulation study to check the accuracy of the proposed model. In this study, the correlation term in the cross-hedge ratio is obtained from six copula families: two elliptical copulas (Gaussian and Student-t) and four Archimedean copulas (Clayton, Gumbel, Frank, and Joe). We use one-step maximum likelihood estimation techniques to estimate our models and compare the performance of these copulas using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). In the application to agricultural commodities, weekly data from 4 January 2005 to 1 September 2016 are used, covering 612 observations. The empirical results indicate that the volatility spillover effects among cereal futures differ in response to different economic conditions. In addition, the hedge effectiveness results suggest optimal cross-hedge strategies for different economic conditions, especially economic upturns and downturns.
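
The full copula-based Markov-switching GARCH model is beyond a short example, but the cross-hedge ratio at the end of such a pipeline follows the standard minimum-variance form h* = rho * sigma_spot / sigma_futures; the sketch below computes it per assumed regime with invented numbers standing in for the regime-dependent copula and GARCH estimates.

```python
# Minimum-variance cross-hedge ratio h* = rho * sigma_spot / sigma_futures,
# computed separately for two assumed regimes (upturn / downturn). The
# correlations and volatilities are invented placeholders, not estimates
# from the paper's copula-based Markov-switching GARCH model.
regimes = {
    # (correlation between spot and cross-hedging futures, sigma_spot, sigma_futures)
    "upturn":   (0.45, 0.021, 0.028),
    "downturn": (0.70, 0.035, 0.033),
}

for name, (rho, sigma_s, sigma_f) in regimes.items():
    h = rho * sigma_s / sigma_f
    print(f"{name}: optimal cross-hedge ratio = {h:.3f}")
```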

Keywords: agricultural commodity futures, cereal, cross-hedge, spillover effect, switching regime approach

Procedia PDF Downloads 170
56 University-home Partnerships for Enhancing Students’ Career Adapting Responses: A Moderated-mediation Model

Authors: Yin Ma, Xun Wang, Kelsey Austin

Abstract:

Purpose – Building upon career construction theory and conservation of resources theory, we developed a moderated mediation model to examine how perceived university support impacts students' career adapting responses, namely crystallization, exploration, decision, and preparation, via the mediator career adaptability and the moderator perceived parental support. Design/methodology/approach – A multi-stage sampling strategy was employed and survey data were collected. Structural equation modeling was used to perform the analysis. Findings – Perceived university support directly promotes students' career adaptability and promotes three career adapting responses, namely exploration, decision, and preparation. It also impacts all four career adapting responses via the mediating effect of career adaptability. Its impact on students' career adaptability increases greatly when students receive parental career-related support. Research limitations/implications – The cross-sectional design limits causal inference. Conducted in China, our findings should be interpreted cautiously in other countries due to cultural differences. Practical implications – University support is vital to students' career adaptability, and support from parents can enhance this process. University-home collaboration is necessary to promote students' career adapting responses. For students, seeking and utilizing as many supporting resources as possible is vital for their human resource development. At an organizational level, universities could benefit from our findings by introducing practices that ask students to rate career-related courses and encourage them to talk with their parents regularly. Originality/value – Using a recently developed scale, the current work contributes to the literature by investigating the impact of multiple contextual factors on students' career adapting responses. It also provides empirical support for the role of human intervention in fostering career adapting responses.

Keywords: career adaptability, university and parental support, China studies, sociology of education

Procedia PDF Downloads 30
55 Machine Learning in Agriculture: A Brief Review

Authors: Aishi Kundu, Elhan Raza

Abstract:

"Necessity is the mother of invention" - Rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is considered to be food which can be satisfied through farming. Farming is one of the major revenue generators for the Indian economy. Agriculture is not only considered a source of employment but also fulfils humans’ basic needs. So, agriculture is considered to be the source of employment and a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing Machine Learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities to make their availability in the market faster and more effective. This paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.).Due to climate changes, crop production is affected. Machine learning can analyse the changing patterns and come up with a suitable approach to minimize loss and maximize yield. Machine Learning algorithms/ models (regression, support vector machines, bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes which can be vital in increasing the productivity of the Agricultural Food Industry. It is to demonstrate vividly agricultural works under machine learning to sensor data. Machine Learning is the ongoing technology benefitting farmers to improve gains in agriculture and minimize losses. This paper discusses how the irrigation and farming management systems evolve in real-time efficiently. Artificial Intelligence (AI) enabled programs to emerge with rich apprehension for the support of farmers with an immense examination of data.

Keywords: machine Learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting

Procedia PDF Downloads 76
54 Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data

Authors: Gayathri Nagarajan, L. D. Dhinesh Babu

Abstract:

Health care is one of the prominent industries that generate voluminous data, creating a need for machine learning techniques with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy, and heterogeneity are a few of the common challenges to be addressed for the efficient processing and mining of health care data. In comparison with other applications, accuracy and fast processing are of higher importance for health care applications because they relate directly to human life. Though there are many machine learning techniques and big data solutions used for efficient processing and prediction of health care data, different techniques and frameworks have proved effective for different applications, largely depending on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique of gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with other traditional frameworks. Unlike other works that focus on a single technique, our work presents a comparison of six different machine learning techniques, along with gradient boosted trees, on datasets with different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different machine learning techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform, specifically for health care big data, and ii) discuss the results from the experiments conducted on datasets with different characteristics, thereby drawing inferences and conclusions. The experimental results show that, for the other machine learning techniques, accuracy is largely dependent on the characteristics of the datasets, whereas gradient boosted trees yield reasonably stable accuracy without depending as heavily on the dataset characteristics.
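
A minimal sketch of gradient boosted trees on the Spark platform using pyspark.ml; the file name, column names, and parameters are assumptions, not the paper's benchmark datasets or configuration.

```python
# Sketch: gradient boosted trees classification on the Spark platform.
# The CSV path and column names are placeholders, not the benchmark datasets.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("gbt-healthcare").getOrCreate()
df = spark.read.csv("healthcare.csv", header=True, inferSchema=True)  # assumed file

feature_cols = [c for c in df.columns if c != "label"]                # assumed label column
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
data = assembler.transform(df).select("features", "label")
train, test = data.randomSplit([0.8, 0.2], seed=42)

gbt = GBTClassifier(labelCol="label", featuresCol="features", maxIter=50)
model = gbt.fit(train)
preds = model.transform(test)

evaluator = MulticlassClassificationEvaluator(labelCol="label", metricName="accuracy")
accuracy = evaluator.evaluate(preds)
print("misclassification error rate:", 1.0 - accuracy)
spark.stop()
```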

Keywords: big data analytics, ensemble machine learning, gradient boosted trees, Spark platform

Procedia PDF Downloads 216
53 Evaluation of the Effect of Milk Recording Intervals on the Accuracy of an Empirical Model Fitted to Dairy Sheep Lactations

Authors: L. Guevara, Glória L. S., Corea E. E, A. Ramírez-Zamora M., Salinas-Martinez J. A., Angeles-Hernandez J. C.

Abstract:

Mathematical models are useful for identifying the characteristics of sheep lactation curves in order to develop and implement improved strategies. However, the accuracy of these models is influenced by factors such as the recording regime, mainly the intervals between test-day records (TDR). The current study aimed to evaluate the effect of different TDR intervals on the goodness of fit of the Wood model (WM) applied to dairy sheep lactations. A total of 4,494 weekly TDRs from 156 lactations of dairy crossbred sheep were analyzed. Three new databases were generated from the original weekly TDR data (7D), comprising intervals of 14 (14D), 21 (21D), and 28 (28D) days. The parameters of the WM were estimated using the "minpack.lm" package in the R software. The shape of the lactation curve (typical or atypical) was defined based on the WM parameters. Goodness of fit was evaluated using the mean square of prediction error (MSPE), the root of the MSPE (RMSPE), Akaike's information criterion (AIC), the Bayesian information criterion (BIC), and the coefficient of correlation (r) between the actual and estimated total milk yield (TMY). The WM gave an adequate estimate of TMY regardless of the TDR interval (P=0.21) and the shape of the lactation curve (P=0.42). However, we found higher values of r for typical curves compared to atypical curves (0.9 vs. 0.74), with the highest values for the 28D interval (r=0.95). In the same way, we observed an overestimated peak yield (0.92 vs. 6.6 l) and an underestimated time of peak yield (21.5 vs. 1.46) in atypical curves. The best values of RMSPE were observed for the 28D interval for both lactation curve shapes. The significantly lowest values of AIC (P=0.001) and BIC (P=0.001) were obtained with the 7D interval for typical and atypical curves. These results represent a first approach to defining an adequate recording-interval regime for dairy sheep in Latin America and show a better fit of the Wood model using a 7D interval. However, it is possible to obtain good estimates of TMY using a 28D interval, which reduces the sampling frequency and would save additional costs for dairy sheep producers.
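
The Wood model fitted here is the incomplete gamma form y(t) = a * t^b * exp(-c * t); as a rough illustration of the fitting step (the original analysis used minpack.lm in R), the sketch below fits it with SciPy on invented weekly test-day records.

```python
# Fit the Wood lactation model y(t) = a * t**b * exp(-c * t) to test-day records.
# The weekly records below are invented; the study used minpack.lm in R.
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    return a * t ** b * np.exp(-c * t)

t = np.arange(1, 22)                                           # weeks in milk (7D interval)
rng = np.random.default_rng(0)
y = wood(t, 1.4, 0.25, 0.05) + rng.normal(0, 0.05, t.size)     # noisy milk yields (l)

params, _ = curve_fit(wood, t, y, p0=[1.0, 0.2, 0.05])
a, b, c = params
peak_time = b / c                                              # time of peak yield
peak_yield = wood(peak_time, a, b, c)
total_yield = wood(t, a, b, c).sum()                           # sum of weekly estimates
print(f"a={a:.3f} b={b:.3f} c={c:.3f}; peak {peak_yield:.2f} l at week {peak_time:.1f}; TMY~{total_yield:.1f} l")
```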

Keywords: gamma incomplete, ewes, shape curves, modeling

Procedia PDF Downloads 41
52 Machine Learning Techniques to Predict Cyberbullying and Improve Social Work Interventions

Authors: Oscar E. Cariceo, Claudia V. Casal

Abstract:

Machine learning offers a set of techniques to promote social work interventions and can support practitioners' decisions by predicting new behaviors based on data produced by organizations, service agencies, users, clients, or individuals. Machine learning techniques include a set of generalizable algorithms that are data-driven, which means that rules and solutions are derived by examining data, based on the patterns that are present within any data set. In other words, the goal of machine learning is to teach computers through 'examples': training on data to test specific hypotheses and predict a given outcome based on the current scenario, and improving on that experience. Machine learning can be classified into two general categories depending on the nature of the problem the technique needs to tackle. First, supervised learning involves a dataset whose outputs are already known. Supervised learning problems are categorized into regression problems, which predict quantitative variables using a continuous function, and classification problems, which predict discrete qualitative variables. For social work research, machine learning generates predictions as a key element in improving social interventions on complex social issues by providing better inference from data and establishing more precise estimated effects, for example in services that seek to improve their outcomes. This paper presents the results of a classification algorithm to predict cyberbullying among adolescents. Data were retrieved from the National Polyvictimization Survey conducted by the government of Chile in 2017. A logistic regression model was created to predict whether an adolescent would experience cyberbullying based on gender, age, grade, type of school, and self-esteem sentiments and their interactions. The model predicts with an accuracy of 59.8% whether an adolescent will suffer cyberbullying. These results can help to promote programs to prevent cyberbullying at schools and improve evidence-based practice.
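
A minimal sketch of the classification setup described above; the survey variables and synthetic data below are invented stand-ins, not the National Polyvictimization Survey data or the study's fitted model.

```python
# Sketch: logistic regression predicting cyberbullying (1 = yes) from a few
# demographic and self-esteem features. Data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 1500
gender = rng.integers(0, 2, n)           # 0 = male, 1 = female
age = rng.integers(12, 18, n)
grade = rng.integers(7, 13, n)
school_type = rng.integers(0, 3, n)      # assumed coding of school type
self_esteem = rng.normal(0, 1, n)

# Synthetic outcome with a weak signal, mimicking modest predictive accuracy.
logit = -1.0 + 0.4 * gender - 0.6 * self_esteem + 0.05 * (age - 15)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([gender, age, grade, school_type, self_esteem])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```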

Keywords: cyberbullying, evidence based practice, machine learning, social work research

Procedia PDF Downloads 141
51 The Computational Psycholinguistic Situational-Fuzzy Self-Controlled Brain and Mind System Under Uncertainty

Authors: Ben Khayut, Lina Fabri, Maya Avikhana

Abstract:

The models of modern Artificial Narrow Intelligence (ANI) cannot: a) function independently and continuously without human intelligence, which is used for retraining and reprogramming the ANI models, and b) think, understand, be conscious, cognize, infer, and more in a state of uncertainty and of changes in situations and environmental objects. To eliminate these shortcomings and build a new generation of Artificial Intelligence systems, this paper proposes a conception, model, and method of a Computational Psycholinguistic Cognitive Situational-Fuzzy Self-Controlled Brain and Mind System Under Uncertainty (CPCSFSCBMSUU). The system uses a neural network as its computational memory and operates under uncertainty. It activates its functions by perceiving and identifying real objects, performing fuzzy situational control, forming images of these objects, and modeling the psychological, linguistic, cognitive, and neural values of their properties and features, whose meanings are identified, interpreted, generated, and formed taking into account the identified subject area, using the data, information, knowledge, and images accumulated in the Memory. The functioning of the CPCSFSCBMSUU is carried out by its subsystems: fuzzy situational control of all processes; computational perception; identification of reactions and actions; psycholinguistic cognitive fuzzy logical inference; decision making; reasoning; systems thinking; planning; awareness; consciousness; cognition; intuition; wisdom; analysis and processing of psycholinguistic, subject, visual, signal, sound, and other objects; accumulation and use of data, information, and knowledge in the Memory; and communication and interaction with other computing systems, robots, and humans in order to solve joint tasks. To investigate the functional processes of the proposed system, the principles of situational control, fuzzy logic, psycholinguistics, informatics, and the modern possibilities of data science were applied. The proposed self-controlled Brain and Mind System is intended for use as a plug-in in multilingual subject applications.

Keywords: computational brain, mind, psycholinguistic, system, under uncertainty

Procedia PDF Downloads 141
50 Using Business Intelligence Capabilities to Improve the Quality of Decision-Making: A Case Study of Mellat Bank

Authors: Jalal Haghighat Monfared, Zahra Akbari

Abstract:

Today, business executives need useful information to make better decisions. Banks have also been using information tools so that they can direct the decision-making process toward their desired goals by rapidly extracting information from sources with the help of business intelligence. This research investigates whether there is a relationship between the quality of decision making and the business intelligence capabilities of Mellat Bank. Each of the factors studied is divided into several components, and these components and their relationships are measured by a questionnaire. The statistical population of this study consists of all managers and experts of Mellat Bank's general departments (190 people) who use business intelligence reports. The sample size of 123 was determined randomly by statistical methods. Relevant statistical inference was used for data analysis and hypothesis testing. In the first stage, the normality of the data was investigated using the Kolmogorov-Smirnov test, and in the next stage, the construct validity of both variables and their resulting indexes was verified using confirmatory factor analysis. Finally, the research hypotheses were tested using structural equation modeling and Pearson's correlation coefficient. The results confirmed the existence of a positive relationship between decision quality and business intelligence capabilities in Mellat Bank. Among the various capabilities, including data quality, correlation with other systems, user access, flexibility, and risk management support, the flexibility of the business intelligence system was the most strongly correlated with the dependent variable of the present research. This shows that Mellat Bank needs to pay more attention to choosing business intelligence systems with high flexibility, in terms of the ability to produce custom-formatted reports. After flexibility, the quality of data in the business intelligence systems showed the strongest relationship with the quality of decision making. Therefore, improving data quality (the internal or external source of the data, its quantitative or qualitative type, its credibility, and the perceptions of those who use the business intelligence system) improves the quality of decision making in Mellat Bank.

Keywords: business intelligence, business intelligence capability, decision making, decision quality

Procedia PDF Downloads 88
49 Yield Loss Estimation Using Multiple Drought Severity Indices

Authors: Sara Tokhi Arab, Rozo Noguchi, Tofeal Ahamed

Abstract:

Drought is a natural disaster that occurs in a region due to a lack of precipitation and high temperatures over a continuous period, or in a single season, as a consequence of climate change. Precipitation deficits and prolonged high temperatures mostly affect the agricultural sector, water resources, socioeconomics, and the environment. Consequently, drought causes agricultural product loss, food shortage, famine, migration, and natural resource degradation in a region. Agriculture is the first sector affected by drought; it is therefore important to develop an agricultural drought risk and loss assessment to mitigate drought impacts in the agricultural sector. In this context, the main purpose of this study was to assess yield loss using a composite drought index (CDI) in drought-affected vineyards. The CDI was developed for the years 2016 to 2020 by combining five indices: the vegetation condition index (VCI), the temperature condition index (TCI), the deviation of NDVI from the long-term mean (NDVI DEV), the normalized difference moisture index (NDMI), and the precipitation condition index (PCI). A quantitative principal component analysis (PCA) approach was used to assign a weight to each input parameter, and the weighted indices were then combined into one composite drought index. Finally, Bayesian regularized artificial neural networks (BRANNs) were used to evaluate the yield variation in each affected vineyard. The composite drought index indicated that moderate to severe droughts occurred across Kabul Province during 2016 and 2018. The results showed that no vineyard was in extreme drought conditions, so only the severe and moderate conditions were considered. According to the BRANN results, R=0.87 and R=0.94 in severe drought conditions for 2016 and 2018, and R=0.85 and R=0.91 in moderate drought conditions for 2016 and 2018, respectively. Within the two drought years in Kabul Province, there was a significant deficit in the vineyards: 2018 had the highest rate of loss, almost -7 ton/ha, whereas in 2016 the loss rate was about -1.2 ton/ha. This research will support stakeholders in identifying drought-affected vineyards and supporting farmers during severe drought.
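
A simplified sketch of the weighting step only: derive weights for the five input indices from the first principal component and combine them into one CDI value per spatial unit; the toy values are invented, not the Kabul Province imagery, and the BRANN yield model is not shown.

```python
# Sketch: PCA-derived weights for five drought indices (VCI, TCI, NDVI DEV,
# NDMI, PCI) combined into a composite drought index. Values are invented.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
# Rows = spatial units (e.g., vineyard pixels), columns = the five indices,
# each assumed to be already scaled to the range 0-1.
indices = rng.random((500, 5))

pca = PCA(n_components=1)
pca.fit(indices)
loadings = np.abs(pca.components_[0])
weights = loadings / loadings.sum()          # normalize loadings to sum to 1
print("index weights (VCI, TCI, NDVI_DEV, NDMI, PCI):", np.round(weights, 3))

cdi = indices @ weights                      # composite drought index per unit
print("example CDI values:", np.round(cdi[:5], 3))
```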

Keywords: grapes, composite drought index, yield loss, satellite remote sensing

Procedia PDF Downloads 121
48 Genomic Characterisation of Equine Sarcoid-derived Bovine Papillomavirus Type 1 and 2 Using Nanopore-Based Sequencing

Authors: Lien Gysens, Bert Vanmechelen, Maarten Haspeslagh, Piet Maes, Ann Martens

Abstract:

Bovine papillomavirus (BPV) types 1 and 2 play a central role in the etiology of the most common neoplasm in horses, the equine sarcoid. The unknown mechanism behind the unique variety in clinical presentation, on the one hand, and the host-dependent clinical outcome of BPV-1 infection, on the other, indicate the involvement of additional factors. Earlier studies have reported the potential functional significance of intratypic sequence variants, along with the existence of sarcoid-sourced BPV variants; intratypic sequence variation therefore seems to be an important emerging viral factor. This study aimed to give a broad insight into sarcoid-sourced BPV variation and explore its potential association with disease presentation. To do this, a nanopore sequencing approach was successfully optimized for screening a wide spectrum of clinical samples. Specimens of each tumour were initially screened for BPV-1/-2 by quantitative real-time PCR. A custom-designed primer set was used on BPV-positive samples to amplify the complete viral genome in two multiplex PCR reactions, resulting in a set of overlapping amplicons. For phylogenetic analysis, separate alignments were made of all available complete genome sequences for BPV-1/-2, and the resulting alignments were used to infer Bayesian phylogenetic trees. We found substantial genetic variation among sarcoid-derived BPV-1, although this variation could not be linked to disease severity. Several of the BPV-1 genomes had multiple major deletions; remarkably, the majority of these cluster within the region coding for the late viral genes. Together with the extent (up to 603 nucleotides) of the described deletions, this suggests an altered function of L1/L2 in disease pathogenesis. By generating a significant number of full-length BPV genomes, we succeeded in introducing next-generation sequencing into veterinary research on the equine sarcoid, providing the first report of both nanopore-based sequencing of complete sarcoid-sourced BPV-1/-2 and the simultaneous nanopore sequencing of multiple complete genomes originating from a single clinical sample.

Keywords: bovine papillomavirus, equine sarcoid, horse, nanopore sequencing, phylogenetic analysis

Procedia PDF Downloads 150
47 Expert Supporting System for Diagnosing Lymphoid Neoplasms Using Probabilistic Decision Tree Algorithm and Immunohistochemistry Profile Database

Authors: Yosep Chong, Yejin Kim, Jingyun Choi, Hwanjo Yu, Eun Jung Lee, Chang Suk Kang

Abstract:

For the past decades, immunohistochemistry (IHC) has played an important role in the diagnosis of human neoplasms, helping pathologists make clearer decisions on differential diagnosis, subtyping, personalized treatment planning, and, finally, prognosis prediction. However, the IHC performed in various tumors in daily practice often yields conflicting results that are very challenging to interpret. Even a comprehensive diagnosis synthesizing clinical, histologic, and immunohistochemical findings can be unhelpful in some difficult cases. Another important issue is that IHC data are increasing exponentially, and more and more information has to be taken into account. For these reasons, we set out to develop an expert supporting system to help pathologists make better decisions when diagnosing human neoplasms with IHC results. We devised a probabilistic decision tree algorithm and tested it with real case data of lymphoid neoplasms, in which the IHC profile is more important for making a proper diagnosis than in other human neoplasms. We designed the probabilistic decision tree based on Bayes' theorem, programmed the computational process using MATLAB (The MathWorks, Inc., USA), and prepared an IHC profile database (about 104 disease categories and 88 IHC antibodies) based on the WHO classification by reviewing the literature. The initial probability of each neoplasm was set using the epidemiologic data on lymphoid neoplasms in Korea. With the IHC results of 131 sequentially selected patients, the top three presumptive diagnoses for each case were made and compared with the original diagnoses. After review of the data, 124 of the 131 cases were used for the final analysis. As a result, the presumptive diagnoses were concordant with the original diagnoses in 118 cases (93.7%). The major reason for discordant cases was the similarity of the IHC profiles between two or three different neoplasms. The expert supporting system algorithm presented in this study is at an elementary stage and needs more optimization using more advanced technology, such as deep learning with data from real cases, especially for differentiating T-cell lymphomas. Although it needs more refinement, it may be used to aid pathological decision making in the future. A further application to determine IHC antibodies for a certain subset of differential diagnoses might be possible in the near future.
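
The core Bayes-theorem updating step can be sketched as follows in Python (the disease categories, antibodies, priors, and positivity rates shown are illustrative placeholders, not the study's MATLAB implementation or IHC database): each observed IHC result multiplies the prior by the corresponding likelihood, and the normalized posteriors are ranked to yield the top presumptive diagnoses.

```python
# Hypothetical sketch of Bayes-theorem updating: rank presumptive diagnoses from observed
# IHC results given prior disease probabilities and per-disease antibody positivity rates.
priors = {"DLBCL": 0.30, "Follicular lymphoma": 0.20, "Mantle cell lymphoma": 0.10}
# P(antibody positive | disease); values are placeholders, not the study's database.
positivity = {
    "DLBCL":                {"CD20": 0.95, "CD5": 0.10, "CyclinD1": 0.02},
    "Follicular lymphoma":  {"CD20": 0.95, "CD5": 0.05, "CyclinD1": 0.02},
    "Mantle cell lymphoma": {"CD20": 0.95, "CD5": 0.90, "CyclinD1": 0.95},
}

def rank_diagnoses(ihc_results, priors, positivity, top_k=3):
    """ihc_results maps antibody name -> True (positive) / False (negative)."""
    posteriors = {}
    for disease, prior in priors.items():
        p = prior
        for antibody, positive in ihc_results.items():
            q = positivity[disease][antibody]
            p *= q if positive else (1.0 - q)   # naive conditional-independence assumption
        posteriors[disease] = p
    total = sum(posteriors.values())
    return sorted(((d, p / total) for d, p in posteriors.items()),
                  key=lambda item: item[1], reverse=True)[:top_k]

# A CD20+/CD5+/CyclinD1+ profile pushes the posterior strongly toward mantle cell lymphoma.
print(rank_diagnoses({"CD20": True, "CD5": True, "CyclinD1": True}, priors, positivity))
```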

Keywords: database, expert supporting system, immunohistochemistry, probabilistic decision tree

Procedia PDF Downloads 205
46 Socio-cultural Dimensions Inhibiting Female Condom Use by Female Students: Experiences from a University in Rural South Africa

Authors: Christina Tafadzwa

Abstract:

Global HIV and AIDS trends show that Sub-Saharan Africa is the hardest-hit region, and women are disproportionately affected and infected by HIV. The trend is conspicuous in South Africa, where adolescent girls and young women (AGYW), female university students included, bear the burden of HIV infection. Although the female condom (FC) is the only female-oriented HIV and AIDS technology that provides dual protection against unwanted pregnancy and HIV, its uptake and use remain erratic, especially among youth and young women in institutions of higher learning. This paper explores empirical evidence from the University of Venda (UniVen), which is located in a rural area of Limpopo Province in South Africa and is among the higher learning institutions experiencing low uptake and use of the FC. A phenomenological approach consisting of in-depth interviews was used to collect data from 20 female university students at UniVen who were purposively sampled based on their participation in HIV and AIDS dialogues and campaigns conducted on campus. The findings, which were analysed thematically, revealed that notions of rurality and sociocultural beliefs surrounding women's sexual and reproductive health are key structural factors influencing the low use and uptake of the FC at the rural university. The evidence revealed that female students are discouraged from collecting the FC or initiating its use because of cultural dictates or prescripts that place the responsibility to collect condoms and initiate condom use on men. Hence the inference that UniVen female students' realities are compounded by notions of rurality and society's patriarchal nature, which intersect and limit women's autonomy in matters of sex. Guided by women's empowerment theory, this paper argues that such practices take away UniVen female students' agency to decide on their sexual and reproductive health. The normalisation of harmful socio-cultural and gender practices also represents a retrogression in the women's health agenda. The paper recommends a holistic approach that engages traditional and community leaders, particularly men, to unlearn and uproot harmful gender norms and patriarchal elements that hinder the promotion and use of the FC.

Keywords: female condom, UniVen, socio-cultural factors, female students, HIV and AIDS

Procedia PDF Downloads 57
45 Investigating Homicide Offender Typologies Based on Their Clinical Histories and Crime Scene Behaviour Patterns

Authors: Valeria Abreu Minero, Edward Barker, Hannah Dickson, Francois Husson, Sandra Flynn, Jennifer Shaw

Abstract:

Purpose – The purpose of this paper is to identify offender typologies based on aspects of the offenders’ psychopathology and their associations with crime scene behaviours, using data derived from the National Confidential Enquiry into Suicide and Safety in Mental Health concerning homicides in England and Wales committed by offenders in contact with mental health services in the year preceding the offence (n=759). Design/methodology/approach – The authors used multiple correspondence analysis to investigate the interrelationships between the variables and hierarchical agglomerative clustering to identify offender typologies. Variables describing the offender’s mental health history, the offender’s mental state at the time of the offence, characteristics useful for police investigations, and patterns of crime scene behaviours were included. Findings – Results showed differences in the offenders’ histories in relation to their crime scene behaviours. Further, analyses revealed three homicide typologies: externalising, psychotic and depressive. Practical implications – These typologies may assist the police during homicide investigations by furthering their understanding of the crime or likely suspect, offering insights into crime patterns, and providing advice as to what an offender’s offence behaviour might signify about his/her mental health background. The findings suggest that information concerning offender psychopathology may be useful for offender profiling purposes in cases of homicide offenders with schizophrenia, depression, and comorbid diagnoses of personality disorder and alcohol/drug dependence. Originality/value – Empirical studies with an emphasis on offender profiling have almost exclusively focused on the inference of offender demographic characteristics. This study provides a first step in the exploration of offender psychopathology and its integration into the multivariate analysis of offence information for the purposes of investigative profiling of homicide, by identifying the dominant patterns of mental illness within homicidal behaviour.
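
A minimal sketch of the clustering step is given below (illustrative variables and categories, not the Enquiry's actual coding): one-hot encoding is used here as a simplified stand-in for multiple correspondence analysis, and hierarchical agglomerative clustering with Ward linkage cuts the resulting dendrogram into three typologies.

```python
# Illustrative sketch of the clustering step: categorical offender/crime-scene variables are
# one-hot encoded (a simplified stand-in for multiple correspondence analysis) and grouped
# with hierarchical agglomerative clustering.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder records; variable names and categories are hypothetical.
cases = pd.DataFrame({
    "diagnosis": ["psychosis", "depression", "psychosis", "personality_disorder"],
    "weapon":    ["sharp", "other", "sharp", "blunt"],
    "victim":    ["stranger", "family", "stranger", "acquaintance"],
})
encoded = pd.get_dummies(cases)                      # binary indicator matrix

# Ward linkage on the indicator matrix; cut the dendrogram into three typologies.
tree = linkage(encoded.values.astype(float), method="ward")
cases["typology"] = fcluster(tree, t=3, criterion="maxclust")
print(cases)
```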

Keywords: offender profiling, mental illness, psychopathology, multivariate analysis, homicide, crime scene analysis, crime scene behaviours, investigative advice

Procedia PDF Downloads 95
44 Freedom, Thought, and the Will: A Philosophical Reconstruction of Muhammad Iqbal’s Conception of Human Agency

Authors: Anwar ul Haq

Abstract:

Muhammad Iqbal was arguably the most significant South Asian Islamic philosopher of the last two centuries. While he is the most revered philosopher of the region, particularly in Pakistan, he is probably the least studied philosopher outside the region. The paper offers a philosophical reconstruction of Iqbal’s view of human agency; it has three sections. Section 1 focuses on Iqbal’s starting point of reflection in practical philosophy (inspired by Kant): our consciousness of ourselves as free agents. The paper brings out Iqbal’s continuity with Kant but also his divergence, in particular his non-Kantian view that we possess a non-sensory intuition of ourselves as free personal causes. It also offers an argument on Iqbal’s behalf for this claim, which is meant as a defense against a Kantian objection to the possibility of an intuition of freedom and a skeptic’s challenge to the possibility of freedom in general. The remaining part of the paper offers a reconstruction of Iqbal’s two preconditions of the possibility of free agency. Section 2 discusses the first precondition, namely, the unity of consciousness involved in thought (this is a precondition of agency whether or not it is free). The unity has two aspects, a quantitative (or numerical) aspect and a qualitative (or rational) one. Section 2 offers a defense of these two aspects of the unity of consciousness presupposed by agency by focusing, with Iqbal, on the case of inference. Section 3 discusses a second precondition of the possibility of free agency, namely that thought and will must be identical in a free agent. Iqbal offers this condition in relief against Bergson’s view. Bergson (on Iqbal’s reading of him) argues that freedom of the will is possible only if the will’s ends are entirely its own and are wholly undetermined by anything from without, not even by thought. Iqbal observes that Bergson’s position ends in an insurmountable dualism of will and thought. Bergson’s view, Iqbal argues in particular, rests on an untenable conception of what an end consists in. An end, correctly understood, is framed by a thinking faculty, the intellect, and not by an extra-rational faculty. The present section outlines Iqbal’s argument for this claim, which rests on the premise that ends possess a certain unity which is intrinsic to particular ends and holds together different ends, and that this unity is none other than the quantitative and qualitative unity of a thinking consciousness in its practical application. Having secured the rational origin of ends, Iqbal argues that a free will must be identical with thought, or else it will be determined from without and will not be free on that account. Freedom of the self is not a freedom from thought but a freedom in thought: it involves the ability to live a thoughtful life.

Keywords: Iqbal, freedom, will, self

Procedia PDF Downloads 37
43 A Heteroskedasticity Robust Test for Contemporaneous Correlation in Dynamic Panel Data Models

Authors: Andreea Halunga, Chris D. Orme, Takashi Yamagata

Abstract:

This paper proposes a heteroskedasticity-robust Breusch-Pagan test of the null hypothesis of zero cross-section (or contemporaneous) correlation in linear panel-data models, without necessarily assuming independence of the cross-sections. The procedure allows for either fixed, strictly exogenous and/or lagged dependent regressor variables, as well as quite general forms of both non-normality and heteroskedasticity in the error distribution. The asymptotic validity of the test procedure is predicated on the number of time series observations, T, being large relative to the number of cross-section units, N, in that: (i) either N is fixed as T→∞; or (ii) N²/T→0, as both T and N diverge, jointly, to infinity. Given this, asymptotic theory is not expected to provide an adequate guide to finite sample performance when T/N is "small". Because of this, we also propose, and establish the asymptotic validity of, a number of wild bootstrap schemes designed to provide improved inference when T/N is small. Across a variety of experimental designs, a Monte Carlo study suggests that the predictions from asymptotic theory do, in fact, provide a good guide to the finite sample behaviour of the test when T is large relative to N. However, when T and N are of similar orders of magnitude, discrepancies between the nominal and empirical significance levels occur, as predicted by the first-order asymptotic analysis. On the other hand, for all the experimental designs, the proposed wild bootstrap approximations do improve the agreement between nominal and empirical significance levels when T/N is small, with a recursive-design wild bootstrap scheme performing best in general and providing quite close agreement between the nominal and empirical significance levels of the test even when T and N are of similar size. Moreover, in comparison with the wild bootstrap "version" of the original Breusch-Pagan test, our experiments indicate that the corresponding version of the heteroskedasticity-robust Breusch-Pagan test appears reliable. As an illustration, the proposed tests are applied to a dynamic growth model for a panel of 20 OECD countries.
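
For intuition, the sketch below implements the classical Breusch-Pagan LM statistic for zero cross-section correlation together with a simple fixed-design wild bootstrap using per-observation Rademacher weights; this is an illustrative simplification, not the authors' heteroskedasticity-robust statistic or their recursive-design scheme.

```python
# Simplified sketch: classical Breusch-Pagan LM statistic, T * sum_{i<j} rho_ij^2, with a
# fixed-design wild bootstrap. Per-observation Rademacher weights preserve each unit's
# heteroskedasticity pattern while imposing zero cross-section correlation under the null.
import numpy as np

def bp_lm(residuals):
    """Breusch-Pagan LM statistic from a T x N matrix of regression residuals."""
    T, _ = residuals.shape
    corr = np.corrcoef(residuals, rowvar=False)        # N x N pairwise correlations
    return T * np.sum(np.triu(corr, k=1) ** 2)         # T times the sum over i < j of rho_ij^2

def wild_bootstrap_pvalue(residuals, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    stat = bp_lm(residuals)
    count = 0
    for _ in range(n_boot):
        weights = rng.choice([-1.0, 1.0], size=residuals.shape)   # Rademacher draws
        if bp_lm(weights * residuals) >= stat:
            count += 1
    return (1 + count) / (n_boot + 1)

# Toy panel with T = 50, N = 5, heteroskedastic units, and no true cross-section correlation.
rng = np.random.default_rng(1)
resid = rng.standard_normal((50, 5)) * rng.uniform(0.5, 2.0, size=5)
print(bp_lm(resid), wild_bootstrap_pvalue(resid))
```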

Keywords: cross-section correlation, time-series heteroskedasticity, dynamic panel data, heteroskedasticity robust Breusch-Pagan test

Procedia PDF Downloads 406