Search results for: conflicting claim on credit of discovery of ridge regression
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4549

4369 Early Warning System of Financial Distress Based On Credit Cycle Index

Authors: Bi-Huei Tsai

Abstract:

Previous studies on financial distress prediction adopt the conventional failing/non-failing dichotomy; however, the extent of distress differs substantially among financial distress events. To address this, we use three states, “non-distressed”, “slightly-distressed” and “reorganization and bankruptcy”, to approximate the continuum of corporate financial health. This paper models these financial distress events using a two-stage method. First, it adopts firm-specific financial ratios, corporate governance and market factors to measure the probability of the various financial distress events with multinomial logit models; a bootstrapping simulation is performed to examine differences in estimated misclassification cost (EMC). Second, it applies macroeconomic factors to establish a credit cycle index and uses that index to determine the distressed cut-off indicator of the two-stage models. Two models, a one-stage and a two-stage prediction model, are developed to forecast financial distress, and their results are compared with each other and with the collected data. The findings show that the two-stage model incorporating financial ratios, corporate governance and market factors has the lowest misclassification error rate, and that it is more accurate than the one-stage model because its distressed cut-off indicators are adjusted according to the macroeconomic-based credit cycle index.
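As a minimal sketch (not the authors' fitted model), a multinomial logit assigns each firm a probability over the three distress states via a softmax of class-specific linear scores, with "non-distressed" as the reference category. The coefficients and predictor values below are hypothetical:

```python
import math

def multinomial_logit_probs(x, coefs):
    """Class probabilities under a multinomial logit model.

    x     : predictor vector (financial ratios, governance, market factors),
            with a leading 1.0 for the intercept.
    coefs : one coefficient vector per non-reference class; the reference
            class ("non-distressed") has its score fixed at 0.
    """
    scores = [0.0] + [sum(b * xi for b, xi in zip(beta, x)) for beta in coefs]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical coefficients for "slightly-distressed" and
# "reorganization and bankruptcy" relative to "non-distressed".
coefs = [[-2.0, 1.5, 0.8], [-4.0, 2.5, 1.2]]
probs = multinomial_logit_probs([1.0, 0.4, 0.1], coefs)
```

In a two-stage setup like the one described, the cut-off applied to these probabilities would then be shifted according to the credit cycle index rather than held fixed.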

Keywords: Multinomial logit model, corporate governance, company failure, reorganization, bankruptcy

Procedia PDF Downloads 350
4368 Digitalised Welfare: Systems for Both Seeing and Working with Mess

Authors: Amelia Morris, Lizzie Coles-Kemp, Will Jones

Abstract:

This paper examines how community welfare initiatives transform how individuals use and experience an ostensibly universal welfare system. This paper argues that the digitalisation of welfare overlooks the complex reality of being unemployed or in low-wage work, and erects digital barriers to accessing welfare. Utilising analysis of ethnographic research in food banks and community groups, the paper explores the ways that Universal Credit has not abolished face-to-face support, but relocated it to unofficial sites of welfare. The apparent efficiency and simplicity of the state’s digital welfare apparatus, therefore, is produced not by reducing the ‘messiness’ of welfare, but by rendering it invisible within the digital framework. Using the analysis of the study’s data, this paper recommends three principles of service design that would render the messiness visible to the state.

Keywords: welfare, digitalisation, food bank, Universal Credit

Procedia PDF Downloads 119
4367 Integrated Nested Laplace Approximations for Quantile Regression

Authors: Kajingulu Malandala, Ranganai Edmore

Abstract:

The asymmetric Laplace distribution (ALD) is commonly used as the likelihood function in Bayesian quantile regression, and it offers a family of likelihoods for quantile regression. Notwithstanding its popularity and practicality, the ALD is not smooth, which makes its likelihood difficult to maximize. Furthermore, Bayesian inference is time consuming, the choice of likelihood may mislead the inference since Bayes' theorem does not automatically validate the posterior, and the ALD does not accommodate greater skewness and kurtosis. This paper develops a new quantile regression approach for count data based on the inverse of the cumulative distribution function of the Poisson, binomial and Delaporte distributions, using integrated nested Laplace approximations (INLA). Our results validate the benefit of using INLA and support the approach for count data.
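The non-smoothness the abstract refers to can be made concrete: maximizing an ALD likelihood at quantile level tau is equivalent to minimizing the check (pinball) loss, which has a kink at zero. A minimal sketch, with hypothetical data and a coarse grid search standing in for a real optimizer:

```python
def check_loss(residual, tau):
    """Quantile (pinball) loss. Maximizing an asymmetric Laplace likelihood
    at level tau is equivalent to minimizing this loss, which is
    non-differentiable at 0 -- the source of the optimization difficulty."""
    return residual * (tau - (1.0 if residual < 0 else 0.0))

def sample_quantile(ys, tau, grid):
    """Grid value minimizing total check loss: an empirical tau-quantile."""
    return min(grid, key=lambda q: sum(check_loss(y - q, tau) for y in ys))

ys = [1, 2, 3, 4, 100]            # hypothetical counts with an outlier
grid = [i * 0.5 for i in range(21)]
med = sample_quantile(ys, 0.5, grid)   # tau = 0.5 recovers the median
```

Note how the tau = 0.5 fit lands on the median (3) and ignores the outlier, which is exactly why quantile regression is attractive for skewed count data.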

Keywords: quantile regression, Delaporte distribution, count data, integrated nested Laplace approximation

Procedia PDF Downloads 134
4366 The Use of Geographically Weighted Regression for Deforestation Analysis: Case Study in Brazilian Cerrado

Authors: Ana Paula Camelo, Keila Sanches

Abstract:

Geographically Weighted Regression (GWR) was proposed in the geography literature to allow the relationships in a regression model to vary over space. In Brazil, agricultural exploitation of the Cerrado biome is the main cause of deforestation. In this study, we propose a methodology using geostatistical methods to characterize the spatial dependence of deforestation in the Cerrado based on agricultural production indicators, combining exploratory spatial data analysis (ESDA) tools with confirmatory analysis using GWR. We first calibrated a non-spatial model, evaluated the nature of the regression curve, selected variables through a stepwise process, and checked for multicollinearity. After evaluating the non-spatial model, we fitted the spatial regression model, statistically evaluated the intercept, and verified its effect on calibration. Spearman's correlation between deforestation and livestock was +0.783, and between deforestation and soybeans +0.405. The model presented R² = 0.936 and showed a strong spatial dependence of deforestation on the agricultural activity of soybeans associated with maize and cotton crops. GWR proved a very effective tool, presenting results closer to the reality of deforestation in the Cerrado than other analyses.
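The core mechanism of GWR is a separate weighted least-squares fit at each location, with observations downweighted by distance through a kernel. A simplified one-dimensional sketch (not the study's implementation; locations, bandwidth and data are hypothetical):

```python
import math

def gaussian_weight(d, bandwidth):
    """GWR kernel: nearby observations get more influence on the local fit."""
    return math.exp(-0.5 * (d / bandwidth) ** 2)

def local_fit(focal, points, bandwidth):
    """Weighted least squares of y on x around a focal location.
    points: list of (location, x, y). Returns (intercept, slope),
    which in GWR vary from one focal location to the next."""
    w = [gaussian_weight(abs(loc - focal), bandwidth) for loc, _, _ in points]
    sw = sum(w)
    xm = sum(wi * x for wi, (_, x, _) in zip(w, points)) / sw
    ym = sum(wi * y for wi, (_, _, y) in zip(w, points)) / sw
    sxy = sum(wi * (x - xm) * (y - ym) for wi, (_, x, y) in zip(w, points))
    sxx = sum(wi * (x - xm) ** 2 for wi, (_, x, _) in zip(w, points))
    slope = sxy / sxx
    return ym - slope * xm, slope
```

Repeating `local_fit` over a grid of focal locations yields the spatially varying coefficient surfaces that distinguish GWR from a single global regression.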

Keywords: deforestation, geographically weighted regression, land use, spatial analysis

Procedia PDF Downloads 329
4365 Web-Based Cognitive Writing Instruction (WeCWI): A Theoretical-and-Pedagogical e-Framework for Language Development

Authors: Boon Yih Mah

Abstract:

Web-based Cognitive Writing Instruction (WeCWI)'s contribution towards language development can be divided into linguistic and non-linguistic perspectives. From the linguistic perspective, WeCWI focuses on literacy and language discoveries, while cognitive and psychological discoveries are the hubs of the non-linguistic perspective. Linguistically, WeCWI draws attention to free reading and enterprises, which are supported by language acquisition theories, and its adoption of the process genre approach as a hybrid guided writing approach fosters literacy development. Literacy and language development are interconnected in the communication process; hence, WeCWI encourages meaningful discussion based on the interactionist theory, which involves input, negotiation, output, and interactional feedback. Rooted in the e-learning interaction-based model, WeCWI promotes online discussion via synchronous and asynchronous communication, allowing interaction among learners, the instructor, and digital content. From the non-linguistic perspective, WeCWI highlights the contribution of reading, discussion, and writing to cognitive development. Based on inquiry models, learners' critical thinking is fostered during the information exploration process through interaction and questioning. Lastly, to lower writing anxiety, WeCWI develops its instructional tool with supportive features that facilitate the writing process. To bring a positive user experience to learners, WeCWI aims to offer different interface designs based on two types of perceptual learning style.

Keywords: WeCWI, literacy discovery, language discovery, cognitive discovery, psychological discovery

Procedia PDF Downloads 534
4364 Screening for Hit Identification against Mycobacterium abscessus

Authors: Jichan Jang

Abstract:

Mycobacterium abscessus is a rapidly growing, life-threatening mycobacterium with multiple drug-resistance mechanisms. In this study, we screened a compound library to identify molecules active against Mycobacterium abscessus using resazurin live/dead assays. In this screening assay, the Z-factor was 0.7, an indication of the statistical confidence of the assay. Applying a cut-off of 80% growth inhibition at a single concentration (20 μM) resulted in the identification of four different compounds. Dose-response curves identified three different hit candidates, which generated good inhibitory curves, and all hit candidates were expected to have different molecular targets. Thus, the identified compound X may be a promising candidate for the M. abscessus drug discovery pipeline.
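The Z-factor quoted above is a standard assay-quality statistic computed from positive- and negative-control readouts; values above roughly 0.5 are usually taken to indicate a robust screen. A minimal sketch with hypothetical control values:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    """Sample standard deviation."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def z_factor(pos, neg):
    """Z-factor of a screening assay: 1 - 3*(sd_pos + sd_neg)/|mean_pos - mean_neg|.
    The wider the separation band between controls, the closer to 1."""
    return 1 - 3 * (sd(pos) + sd(neg)) / abs(mean(pos) - mean(neg))

# Hypothetical fluorescence readouts for growth and no-growth controls.
z = z_factor([100, 102, 98, 100], [10, 11, 9, 10])
```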

Keywords: Mycobacterium abscessus, antibiotics, drug discovery, emerging pathogen

Procedia PDF Downloads 174
4363 Sparse Modelling of Cancer Patients’ Survival Based on Genomic Copy Number Alterations

Authors: Khaled M. Alqahtani

Abstract:

Copy number alterations (CNA) are variations in the structure of the genome, where certain regions deviate from the typical two chromosomal copies. These alterations are pivotal in understanding tumor progression and are indicative of patients' survival outcomes. However, effectively modeling patients' survival based on their genomic CNA profiles while identifying relevant genomic regions remains a statistical challenge. Various methods, such as the Cox proportional hazard (PH) model with ridge, lasso, or elastic net penalties, have been proposed but often overlook the inherent dependencies between genomic regions, leading to results that are hard to interpret. In this study, we enhance the elastic net penalty by incorporating an additional penalty that accounts for these dependencies. This approach yields smooth parameter estimates and facilitates variable selection, resulting in a sparse solution. Our findings demonstrate that this method outperforms other models in predicting survival outcomes, as evidenced by our simulation study. Moreover, it allows for a more meaningful interpretation of genomic regions associated with patients' survival. We demonstrate the efficacy of our approach using both real data from a lung cancer cohort and simulated datasets.
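One way to read the penalty described above is as an elastic net plus a fused (smoothness) term over adjacent genomic regions. The exact form used in the paper may differ; the sketch below, with hypothetical tuning parameters `lam1`-`lam3`, only illustrates the idea:

```python
def penalty(beta, lam1, lam2, lam3):
    """Elastic net (L1 + L2) plus a fused term penalizing differences between
    coefficients of adjacent genomic regions. The L1 part gives sparsity,
    the fused part gives smooth estimates along the genome, capturing the
    dependence between neighbouring CNA regions."""
    l1 = sum(abs(b) for b in beta)
    l2 = sum(b * b for b in beta)
    fused = sum((beta[j] - beta[j - 1]) ** 2 for j in range(1, len(beta)))
    return lam1 * l1 + lam2 * l2 + lam3 * fused
```

In a Cox PH setting this penalty would be added to the negative partial log-likelihood before optimization.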

Keywords: copy number alterations, cox proportional hazard, lung cancer, regression, sparse solution

Procedia PDF Downloads 12
4362 Weighted Rank Regression with Adaptive Penalty Function

Authors: Kang-Mo Jung

Abstract:

The use of regularization in statistical methods has become popular, and the least absolute shrinkage and selection operator (LASSO) framework has become the standard tool for sparse regression. However, it is well known that the LASSO is sensitive to outliers and leverage points. We consider a new robust estimator composed of a weighted loss function on the pairwise differences of residuals and an adaptive penalty function regulating the tuning parameter for each variable. Rank regression is resistant to regression outliers, but not to leverage points; by adopting a weighted loss function, the proposed method is also robust to leverage points in the predictor variables. Furthermore, the adaptive penalty function gives good statistical properties in variable selection, such as the oracle property and consistency. We develop an efficient algorithm to compute the proposed estimator using basic functions in R, with the tuning parameter selected by the Bayesian information criterion (BIC). Numerical simulation shows that the proposed estimator is effective for analyzing real and contaminated data sets.
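The "weighted loss function of the pairwise difference of residuals" is, in spirit, a weighted Wilcoxon/Jaeckel-type dispersion. A minimal sketch (illustrative only, not the authors' R code; the weights would come from leverage measures in practice):

```python
def weighted_rank_dispersion(residuals, weights):
    """Weighted pairwise-difference loss used in rank regression:
    sum over pairs of w_i * w_j * |e_i - e_j|. Ranking the residuals
    makes the fit robust to response outliers; downweighting
    high-leverage points (small w_i) adds robustness to leverage."""
    n = len(residuals)
    return sum(
        weights[i] * weights[j] * abs(residuals[i] - residuals[j])
        for i in range(n) for j in range(i + 1, n)
    )
```

Minimizing this dispersion over the regression coefficients (residuals being y - Xb), plus an adaptive penalty per coefficient, gives the estimator the abstract describes.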

Keywords: adaptive penalty function, robust penalized regression, variable selection, weighted rank regression

Procedia PDF Downloads 431
4361 Improving Cryptographically Generated Address Algorithm in IPv6 Secure Neighbor Discovery Protocol through Trust Management

Authors: M. Moslehpour, S. Khorsandi

Abstract:

As the transition to widespread use of IPv6 addresses has gained momentum, IPv6 has been shown to be vulnerable to certain security attacks, such as those targeting the Neighbor Discovery Protocol (NDP), which provides address resolution in IPv6. To protect this protocol, Secure Neighbor Discovery (SEND) was introduced. SEND uses Cryptographically Generated Addresses (CGA) and asymmetric cryptography as a defense against threats to the integrity and identity of NDP. Although SEND protects NDP against attacks, it is computationally intensive due to the Hash2 condition in CGA. To improve CGA computation speed, we parallelized the CGA generation process and used the available resources in a trusted network. Furthermore, we focused on the influence of malicious nodes on the overall load of non-malicious ones in the network. According to the evaluation results, malicious nodes have an adverse impact on the average CGA generation time and on the average number of tries. We utilized a trust management system capable of detecting and isolating malicious nodes to remove possible incentives for malicious behavior, and we demonstrate its effectiveness in detecting malicious nodes and improving overall system performance.
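The Hash2 condition that makes CGA generation expensive is, per RFC 3972, a brute-force search for a modifier whose SHA-1 digest (over the modifier, nine zero octets, and the public key) begins with 16*Sec zero bits. A simplified sketch of that search, which is exactly the work that can be split across trusted nodes:

```python
import hashlib

def hash2(modifier: int, public_key: bytes) -> bytes:
    """SHA-1 over modifier | 9 zero octets | public key
    (simplified from the RFC 3972 Hash2 computation)."""
    return hashlib.sha1(
        modifier.to_bytes(16, "big") + b"\x00" * 9 + public_key
    ).digest()

def find_modifier(public_key: bytes, sec: int) -> int:
    """Brute-force the Hash2 condition: increment the modifier until the
    digest starts with 16*sec zero bits (checked here as 2*sec zero bytes).
    Expected cost grows as 2**(16*sec), which is the bottleneck the
    parallelization targets."""
    modifier = 0
    while hash2(modifier, public_key)[: 2 * sec] != b"\x00" * (2 * sec):
        modifier += 1
    return modifier
```

Parallelizing simply means partitioning the modifier space among nodes, with the trust management layer deciding which nodes' results to accept.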

Keywords: CGA, ICMPv6, IPv6, malicious node, modifier, NDP, overall load, SEND, trust management

Procedia PDF Downloads 158
4360 The Underground Ecosystem of Credit Card Frauds

Authors: Abhinav Singh

Abstract:

Point-of-sale (POS) malware has been stealing the limelight this year, and has been the elemental factor in some of the biggest breaches uncovered in the past couple of years. Some of them include: Target, a retail giant, reported close to 40 million credit card records stolen; Home Depot, a home-product retailer, reported a breach of close to 50 million credit records; and Kmart, a US retailer, recently announced a breach of 800 thousand credit card details. In 2014 alone, there were reports of over 15 major breaches of payment systems around the globe. Memory-scraping malware infecting point-of-sale devices has been the lethal weapon in these attacks. This malware reads payment information from the payment device's memory before it is encrypted and then sends the stolen details to its parent server, capturing all the critical payment information, such as the card number, security number and owner, in raw format. This talk will cover what happens after these details have been sent to the malware authors. The entire ecosystem of credit card fraud can be broadly classified into three steps: purchase of the raw details and dumps; converting them to plastic cash/cards; and then: shop, shop, shop. The focus of the talk is on these steps and how they form an organized network of cyber-crime. The first step involves the buying and selling of stolen details; the key points to emphasize are how this raw information is sold in the underground market, the buyer and seller anatomy, building a shopping cart and preferences, the importance of reputation and vouches, and customer support and replacements/refunds. But the story doesn't end there: at this point the buyer only has the raw card information. How is it converted to plastic cash?
Now comes the second part of this underground economy, wherein the raw details are converted into actual cards. Well-organized services running underground can convert these details into plastic cards, and we will discuss this technique in detail. Finally, the last step involves shopping with the stolen cards. Cards generated from the stolen details can easily be used to swipe-and-pay for goods at retail shops, usually expensive items with good resale value. Apart from using the cards in stores, there are underground services that deliver online orders to dummy addresses; once the package is received, it is forwarded to the original buyer, with the service charging based on the value of the item delivered. The overall underground ecosystem of credit card fraud works in a bulletproof way, involving people working in close groups and making heavy profits. This is a brief summary of what I plan to present at the talk. I have done extensive research and collected a good deal of material to present as samples, including a list of underground forums, credit card dumps, IRC chats among these groups, personal chats with big card sellers, and an inside view of forum owners. The talk will conclude by throwing light on how these breaches are tracked during investigation: how credit card breaches are traced, and what steps financial institutions can take to build an incident response around them.

Keywords: POS malware, credit card fraud, enterprise security, underground ecosystem

Procedia PDF Downloads 408
4359 MapReduce Logistic Regression Algorithms with RHadoop

Authors: Byung Ho Jung, Dong Hoon Lim

Abstract:

Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. It is used extensively in numerous disciplines, including the medical and social science fields. In this paper, we address the problem of estimating logistic regression parameters in the MapReduce framework with RHadoop, which integrates R with the Hadoop environment and is applicable to large-scale data. We consider three learning algorithms for logistic regression: gradient descent, cost minimization, and the Newton-Raphson method. The Newton-Raphson method does not require a learning rate, while gradient descent and cost minimization need a manually chosen one. The experimental results demonstrate that our learning algorithms using RHadoop scale well and efficiently process large data sets on commodity hardware. We also compared the performance of the Newton-Raphson method with gradient descent and cost minimization; the results showed that the Newton-Raphson method was the most robust across all data tested.
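The per-record sums in the logistic regression gradient are what make the estimation MapReduce-friendly: each mapper accumulates gradient contributions over its data shard and the reducer adds them up. A single-machine, single-feature sketch of the gradient descent variant (hypothetical data, not the RHadoop code):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, steps=2000):
    """Single-feature logistic regression by gradient descent. The two sums
    below are the part MapReduce distributes: each mapper computes them over
    its shard, the reducer combines the partial sums before the update."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / n
        gb = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw       # gradient descent needs this manually chosen lr;
        b -= lr * gb       # Newton-Raphson would use the Hessian instead
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
```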

Keywords: big data, logistic regression, MapReduce, RHadoop

Procedia PDF Downloads 245
4358 A Breakthrough Improvement Brought by Taxi-Calling APPs for Taxi Operation Level

Authors: Yuan-Lin Liu, Ye Li, Tian Xia

Abstract:

Taxi-calling APPs have been widely used, bringing both benefits and a variety of issues to the taxi market, and many countries do not know whether the benefits outweigh the issues. This paper establishes a comparison between a baseline scenario (2009-2012) and a taxi-calling software usage scenario (2012-2015) to explain the impact of taxi-calling APPs. The comparison illustrates three impacts: 1) the supply and demand distribution is more balanced, extending from the city center to the suburbs, with improved availability of taxi service in low-density areas and improved thin-market attributes; 2) the share of short-distance taxi trips decreased, long-distance service increased, mileage utilization increased, and the empty-running rate decreased; 3) the popularity of taxi-calling APPs reduced the average empty distance, cruise time, empty mileage rate and average times of loading passengers, and enhanced the average operating speed, improving the taxi operating level and reducing social cost, although some disadvantages remain. This paper argues that the taxi industry and government can establish an integrated third-party credit information platform, based on credit scores evaluated from drivers' driving behavior, to supervise drivers. Taxi-calling APPs under fully covered supervision in the mobile Internet environment will become a new trend.

Keywords: taxi, taxi-calling APPs, credit, scenario comparison

Procedia PDF Downloads 227
4357 A Generalized Weighted Loss for Support Vector Classification and Multilayer Perceptron

Authors: Filippo Portera

Abstract:

Standard algorithms usually employ a loss in which each error is the mere absolute difference between the true value and the prediction, in the case of a regression task. Here, we present several error-weighting schemes that generalize this consolidated routine. We study both a binary classification model for Support Vector Classification and a regression network for the Multilayer Perceptron. Results show that the error is never worse than with the standard procedure and is often better.

Keywords: loss, binary-classification, MLP, weights, regression

Procedia PDF Downloads 63
4356 An Empirical Analysis of the Effects of Corporate Derivatives Use on the Underlying Stock Price Exposure: South African Evidence

Authors: Edson Vengesai

Abstract:

Derivative products have become essential instruments in portfolio diversification, price discovery, and, most importantly, risk hedging. Derivatives are complex instruments; their valuation, volatility implications, and real impact on the underlying assets' behaviour are not well understood. Little is documented empirically, with conflicting conclusions on how these instruments affect firm risk exposures. Given the growing interest in using derivatives in risk management and portfolio engineering, this study examines the practical impact of derivative usage on the underlying stock price exposure and systematic risk. The paper uses data from South African listed firms. The study employs GARCH models to understand the effect of derivative uses on conditional stock volatility. The GMM models are used to estimate the effect of derivatives use on stocks' systematic risk as measured by Beta and on the total risk of stocks as measured by the standard deviation of returns. The results provide evidence on whether derivatives use is instrumental in reducing stock returns' systematic and total risk. The results are subjected to numerous controls for robustness, including financial leverage, firm size, growth opportunities, and macroeconomic effects.
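The GARCH model used to study conditional stock volatility boils down to a simple variance recursion. A GARCH(1,1) sketch with hypothetical parameters (estimation, and the GMM step for systematic risk, are beyond this fragment):

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}.
    alpha + beta < 1 is required for a finite long-run variance
    omega / (1 - alpha - beta), used here as the starting value."""
    sigma2 = [omega / (1 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

# Hypothetical daily returns; a large shock raises next-period variance.
path = garch11_variance([0.0, 0.0, 0.0, 0.0, 0.0], 0.1, 0.1, 0.8)
```

In a study like the one above, a dummy for derivatives use would typically enter the variance equation to test whether hedging lowers conditional volatility.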

Keywords: derivatives use, hedging, volatility, stock price exposure

Procedia PDF Downloads 76
4355 Interference among Lambsquarters and Oil Rapeseed Cultivars

Authors: Reza Siyami, Bahram Mirshekari

Abstract:

Seed and oil yield of rapeseed are considerably affected by interference from weeds, including mustard (Sinapis arvensis L.), lambsquarters (Chenopodium album L.) and redroot pigweed (Amaranthus retroflexus L.), throughout the East Azerbaijan province in Iran. To formulate the relationship between four independent growth variables measured in our experiment and a dependent variable, multiple regression analysis was carried out with weed leaf number per plant (X1), green cover percentage (X2), LAI (X3) and leaf area per plant (X4) as independent variables and rapeseed oil yield as the dependent variable. The multiple regression equation is: seed essential oil yield (kg/ha) = 0.156 + 0.0325 X1 + 0.0489 X2 + 0.0415 X3 + 0.133 X4. Furthermore, stepwise regression analysis was carried out on the data to test the significance of the independent variables affecting oil yield. The resulting stepwise regression equation is: oil yield = 4.42 + 0.0841 X2 + 0.0801 X3 (R² = 81.5). The stepwise regression analysis verified that weed green cover percentage and LAI had a marked increasing effect on the oil yield of rapeseed.
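The two fitted equations reported above can be applied directly for prediction; a sketch (input values are hypothetical):

```python
def oil_yield_full(x1, x2, x3, x4):
    """Full multiple regression equation from the abstract:
    X1 = weed leaves per plant, X2 = green cover %, X3 = LAI,
    X4 = leaf area per plant."""
    return 0.156 + 0.0325 * x1 + 0.0489 * x2 + 0.0415 * x3 + 0.133 * x4

def oil_yield_stepwise(x2, x3):
    """Stepwise equation: only green cover percentage (X2) and weed LAI (X3)
    were retained (R² = 81.5)."""
    return 4.42 + 0.0841 * x2 + 0.0801 * x3
```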

Keywords: green cover percentage, independent variable, interference, regression

Procedia PDF Downloads 389
4354 Proposing an Architecture for Drug Response Prediction by Integrating Multiomics Data and Utilizing Graph Transformers

Authors: Nishank Raisinghani

Abstract:

Efficiently predicting drug response remains a challenge in the realm of drug discovery. To address this issue, we propose four model architectures that combine graphical representation with varying positions of multiheaded self-attention mechanisms. By leveraging two types of multi-omics data, transcriptomics and genomics, we create a comprehensive representation of target cells and enable drug response prediction in precision medicine. A majority of our architectures utilize multiple transformer models, one with a graph attention mechanism and the other with a multiheaded self-attention mechanism, to generate latent representations of both drug and omics data, respectively. Our model architectures apply an attention mechanism to both drug and multiomics data, with the goal of procuring more comprehensive latent representations. The latent representations are then concatenated and input into a fully connected network to predict the IC-50 score, a measure of cell drug response. We experiment with all four of these architectures and extract results from all of them. Our study greatly contributes to the future of drug discovery and precision medicine by looking to optimize the time and accuracy of drug response prediction.

Keywords: drug discovery, transformers, graph neural networks, multiomics

Procedia PDF Downloads 112
4353 Continuous-Time Convertible Lease Pricing and Firm Value

Authors: Ons Triki, Fathi Abid

Abstract:

Along with the increase in the use of leasing contracts in corporate finance, multiple studies aim to model the credit risk of the lease in order to cover the lessor's losses if the lessee goes bankrupt. In the current paper, a convertible lease contract is elaborated in a continuous-time stochastic universe, aiming to ensure the financial stability of the firm and quickly recover the counterparties' losses in case of default. This work examines the term structure of lease rates, taking into account credit default risk and the capital structure of the firm. The interaction between the lessee's capital structure and the equilibrium lease rate is assessed by applying the competitive lease market argument developed by Grenadier (1996) and the endogenous structural default model of Leland and Toft (1996). The cumulative probability of default is calculated following Leland and Toft (1996) and Yildirim and Huan (2006). Additionally, the link between lessee credit risk and the lease rate is addressed in order to explore the impact of convertible lease financing on the term structure of lease rates, the optimal leverage ratio, the cumulative default probability, and the optimal firm value, by applying an endogenous conversion threshold. The numerical analysis suggests that the duration structure of lease rates increases with the market price of risk. The maximal value of the firm decreases with the effect of the optimal leverage ratio. The results indicate that the cumulative probability of default increases with the maturity of the lease contract if the volatility of the asset service flows is significant. Introducing the convertible lease contract will increase the optimal value of the firm as a function of asset volatility for a high initial service flow level and a conversion ratio close to 1.

Keywords: convertible lease contract, lease rate, credit risk, capital structure, default probability

Procedia PDF Downloads 51
4352 Analysis of Technical Efficiency and Its Determinants among Cattle Fattening Enterprises in Kebbi State, Nigeria

Authors: Gona Ayuba, Isiaka Mohammed, Kotom Mohammed Baba, Mohammed Aabubakar Maikasuwa

Abstract:

The study examined the technical efficiency of cattle fattening enterprises in Kebbi State, Nigeria, and its determinants. Data were collected from a sample of 160 fatteners between June 2010 and June 2011 using a multistage random sampling technique, and a translog stochastic frontier production function was employed for the analysis. Results show that technical efficiency indices varied from 0.74 to 0.98, with a mean of 0.90, indicating no wide gap between the efficiency of the best technically efficient fatteners and that of the average fattener. The results also showed that fattening experience and herd size influenced the level of technical efficiency at the 1% level. It is recommended that credit agencies monitor the credit made available to fatteners to ensure appropriate utilization.

Keywords: technical efficiency, determinants, cattle, fattening enterprises

Procedia PDF Downloads 405
4351 Copula-Based Estimation of Direct and Indirect Effects in Path Analysis Model

Authors: Alam Ali, Ashok Kumar Pathak

Abstract:

Path analysis is a statistical technique used to evaluate the strength of the direct and indirect effects of variables. One or more structural regression equations are used to estimate a series of parameters in order to find a better fit to the data. Sometimes, exogenous variables do not show significant direct and indirect effects when the assumptions of classical regression (ordinary least squares, OLS) are violated by the nature of the data. The main motive of this article is to investigate the efficacy of the copula-based regression approach over the classical regression approach and to calculate the direct and indirect effects of variables when the data violate the OLS assumptions and the variables are linked through an elliptical copula. We perform this study using a well-organized numerical scheme, and a real data application is also presented to demonstrate the superiority of the copula approach.

Keywords: path analysis, copula-based regression models, direct and indirect effects, k-fold cross validation technique

Procedia PDF Downloads 46
4350 Cross-Cultural Pragmatics: Apology Strategies by Libyans

Authors: Ahmed Elgadri

Abstract:

In the last thirty years, studies on cross-cultural pragmatics in general, and apology strategies in particular, have focused on Western and East Asian societies, and little research has investigated speech act production by speakers of Arabic dialects. This study therefore investigated the apology strategies used by Libyan Arabic speakers through an online Discourse Completion Task (DCT) questionnaire. The DCT consisted of six situations covering different social contexts and was written in the Libyan Arabic dialect to elicit vernacular speech as much as possible. The participants were 25 Libyan nationals: 12 females and 13 males. To gain a deeper understanding of the motivation behind the use of certain strategies, the researcher also interviewed four participants in Libyan Arabic. The results revealed high use of IFIDs, offers of repair, and explanations. Although this might support the universality claim for speech act strategies, it was clear that cultural norms and religion significantly determined the choice of apology strategies, leading to the discovery of new culture-specific strategies, as outlined later in this paper. This study gives an insight into politeness strategies in Libyan society and is hoped to contribute to the field of cross-cultural pragmatics.

Keywords: apologies, cross-cultural pragmatics, language and culture, Libyan Arabic, politeness, pragmatics, socio-pragmatics, speech acts

Procedia PDF Downloads 123
4349 Farmers’ Access to Agricultural Extension Services Delivery Systems: Evidence from a Field Study in India

Authors: Ankit Nagar, Dinesh Kumar Nauriyal, Sukhpal Singh

Abstract:

This paper examines the key determinants of farmers' access to agricultural extension services and the sources of such services preferred and accessed by farmers. An ordered logistic regression model was used to analyse data on 360 sample households from a primary survey conducted in western Uttar Pradesh, India. The study finds that farmers' decision to engage in the agricultural extension programme is significantly influenced by factors such as education level, gender, farming experience, social group, group membership, farm size, credit access, awareness of the extension scheme, farmers' perception, and distance from extension sources. The most intriguing finding of this study is that progressive farmers, who have long been regarded as a major source of knowledge diffusion, are the most distrusted sources of information, as they are suspected of withholding vital information from potential beneficiaries. The positive relationship between farm size and access underlines that extension services should revisit their strategies for targeting marginal and small farmers, who constitute over 85 percent of agricultural households, by incorporating their priorities into outreach programs. The study suggests that marginal and small farmers' productive potential could still be greatly augmented by appropriate technology, advisory services, guidance, and improved market access. Also, the perception of poor quality in public extension services can be corrected by initiatives aimed at building extension workers' capacity.
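An ordered logit model of the kind used above turns a linear score (from education, farm size, credit access, and so on) into probabilities over ordered access categories via cutpoints. A minimal sketch with hypothetical score and cutpoints:

```python
import math

def ordered_logit_probs(score, cutpoints):
    """Category probabilities in an ordered logit model: cumulative
    probabilities P(Y <= k) = logistic(c_k - score) at each cutpoint,
    differenced to give per-category probabilities (e.g. low / medium /
    high access to extension services). Cutpoints must be increasing."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(c - score) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

probs = ordered_logit_probs(0.5, [-1.0, 1.0])  # three access categories
```

A positive coefficient on, say, farm size raises the score and shifts probability mass toward the higher-access categories, which is exactly the relationship the abstract reports.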

Keywords: agriculture, access, extension services, ordered logistic regression

Procedia PDF Downloads 178
4348 Ontology-Driven Knowledge Discovery and Validation from Admission Databases: A Structural Causal Model Approach for Polytechnic Education in Nigeria

Authors: Bernard Igoche Igoche, Olumuyiwa Matthew, Peter Bednar, Alexander Gegov

Abstract:

This study presents an ontology-driven approach for knowledge discovery and validation from admission databases in Nigerian polytechnic institutions. The research aims to address the challenges of extracting meaningful insights from vast amounts of admission data and utilizing them for decision-making and process improvement. The proposed methodology combines the knowledge discovery in databases (KDD) process with a structural causal model (SCM) ontological framework. The admission database of Benue State Polytechnic Ugbokolo (Benpoly) is used as a case study. The KDD process is employed to mine and distill knowledge from the database, while the SCM ontology is designed to identify and validate the important features of the admission process. The SCM validation is performed using the conditional independence test (CIT) criteria, and an algorithm is developed to implement the validation process. The identified features are then used for machine learning (ML) modeling and prediction of admission status. The results demonstrate the adequacy of the SCM ontological framework in representing the admission process and the high predictive accuracies achieved by the ML models, with k-nearest neighbors (KNN) and support vector machine (SVM) achieving 92% accuracy. The study concludes that the proposed ontology-driven approach contributes to the advancement of educational data mining and provides a foundation for future research in this domain.
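The SCM validation step described above relies on conditional independence test (CIT) criteria. One common concrete choice, which may differ from the exact test used in the paper, is a partial-correlation test with a Fisher z transform; the following self-contained sketch tests whether X is independent of Y given Z on synthetic data:

```python
import math
import random

def residuals(y, z):
    # Residuals of a simple least-squares regression of y on z (with intercept).
    n = len(y)
    mz, my = sum(z) / n, sum(y) / n
    b = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y)) / sum((zi - mz) ** 2 for zi in z)
    a = my - b * mz
    return [yi - (a + b * zi) for zi, yi in zip(z, y)]

def ci_test(x, y, z, crit=1.96):
    """Test X independent of Y given Z via partial correlation + Fisher z.

    Returns True when independence is NOT rejected (two-sided, ~alpha=0.05).
    """
    rx, ry = residuals(x, z), residuals(y, z)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry))
    r = num / den
    zstat = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 4)  # Fisher z
    return abs(zstat) < crit

random.seed(0)
z = [random.gauss(0, 1) for _ in range(500)]
x = [zi + random.gauss(0, 1) for zi in z]      # X depends only on Z
y = [2 * zi + random.gauss(0, 1) for zi in z]  # Y depends only on Z
independent = ci_test(x, y, z)  # True with high probability: X and Y share only Z
```

An edge X -> Y in the SCM is contradicted when such a test finds X and Y independent given Y's other parents; repeating the test over all implied independencies validates the graph against the admission data.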

Keywords: admission databases, educational data mining, machine learning, ontology-driven knowledge discovery, polytechnic education, structural causal model

Procedia PDF Downloads 23
4347 Performance Analysis of Proprietary and Non-Proprietary Tools for Regression Testing Using Genetic Algorithm

Authors: K. Hema Shankari, R. Thirumalaiselvi, N. V. Balasubramanian

Abstract:

The present paper addresses research in the area of regression testing, with emphasis on automated tools and the prioritization of test cases. The uniqueness of regression testing and its cyclic nature are pointed out. The difference in approach between industry, with the business model as its basis, and academia, with its focus on data mining, is highlighted. Test metrics are discussed as a prelude to our formula for prioritization, and a case study illustrates this methodology. An industrial case study is also described, in which the number of test cases is so large that they have to be grouped into test suites. In such situations, the genetic algorithm we propose can be used to reconfigure these test suites in each cycle of regression testing. A proprietary tool and an open-source tool are compared using the above-mentioned metrics, and our approach is clarified through several tables.
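As a sketch of the two ingredients named in the keywords, the snippet below computes the APFD (Average Percentage of Faults Detected) metric for a test-case ordering and runs a deliberately simplified, mutation-only genetic algorithm over permutations to improve it. The fault matrix and GA parameters are illustrative, not drawn from the paper's industrial case study:

```python
import random

def apfd(order, faults_detected, n_faults):
    """APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2n), where TFi is the
    1-based position in `order` of the first test exposing fault i."""
    n = len(order)
    first = [next(i + 1 for i, t in enumerate(order) if f in faults_detected[t])
             for f in range(n_faults)]
    return 1 - sum(first) / (n * n_faults) + 1 / (2 * n)

def evolve(faults_detected, n_faults, generations=200, pop_size=20, seed=1):
    """Mutation-only GA over test-case permutations, maximizing APFD."""
    rng = random.Random(seed)
    n = len(faults_detected)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: apfd(o, faults_detected, n_faults), reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda o: apfd(o, faults_detected, n_faults))

# Hypothetical fault matrix: test index -> set of faults it detects
faults = {0: {0}, 1: {1, 2}, 2: set(), 3: {3}, 4: {0, 3}}
best = evolve(faults, n_faults=4)
```

A production GA would add crossover (e.g. order crossover for permutations) and a fitness that mixes APFD with execution cost, but the selection-and-mutate loop above is the core of reconfiguring test suites each regression cycle.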

Keywords: APFD metric, genetic algorithm, regression testing, RFT tool, test case prioritization, Selenium tool

Procedia PDF Downloads 401
4346 Neural Correlates of Decision-Making Under Ambiguity and Conflict

Authors: Helen Pushkarskaya, Michael Smithson, Jane E. Joseph, Christine Corbly, Ifat Levy

Abstract:

Studies of decision making under uncertainty generally focus on imprecise information about outcome probabilities (“ambiguity”). It is not clear, however, whether conflicting information about outcome probabilities affects decision making in the same manner as ambiguity does. Here we combine functional Magnetic Resonance Imaging (fMRI) and a simple gamble design to study this question. In this design, the levels of ambiguity and conflict are parametrically varied, and ambiguity and conflict gambles are matched on both expected value and variance. Behaviorally, participants avoided conflict more than ambiguity, and attitudes toward ambiguity and conflict did not correlate across subjects. Neurally, regional brain activation was differentially modulated by ambiguity level and aversion to ambiguity and by conflict level and aversion to conflict. Activation in the medial prefrontal cortex was correlated with the level of ambiguity and with ambiguity aversion, whereas activation in the ventral striatum was correlated with the level of conflict and with conflict aversion. This novel double dissociation indicates that decision makers process imprecise and conflicting information differently, a finding that has important implications for basic and clinical research.

Keywords: decision making, uncertainty, ambiguity, conflict, fMRI

Procedia PDF Downloads 527
4345 Bank's Role in Economic Growth: Case of Africa

Authors: S. Khalifa, R. Chkoundali

Abstract:

The specific role of banks in economic development varies with scope. The participation of banks in economic development centres on providing credit and services to generate revenues, which are then invested back into a local, national, or international community. The specific roles banks play in the economic development of a small community differ from those they play in national or international economic development. Although the role can vary, factors such as access to credit and bank investment policies or practices remain constant whatever the scope of economic development. This paper provides an overview of the economic situation of Africa and its short-term outlook. It reviews the progress made in implementing the Medium-Term Strategy (2008-2012) and some of the Bank's major achievements, such as the speed and flexibility with which it responded to the oil, food, and financial crises.

Keywords: economic growth, bank, Africa, economic development

Procedia PDF Downloads 436
4344 Discovery the Relics of Buddhist Stupa at Thanesar, Kurukshetra

Authors: Chander Shekhar, Manoj Kumar

Abstract:

The present paper deals with the discovery of stupa relics belonging to the Kushana period. These remains were found during scientific clearance work at a mound near Brahma-Sarovar, Thanesar, Kurukshetra. The archaeological work was carried out by the Department of Archaeology & Museums, Government of Haryana. The relics indicate that the stupa would have been similar to the Assandh and Damekh stupas. According to Buddhist literature, Gautam Buddha visited Thanesar, and in memory of the Buddha's journey, King Ashoka built a large stupa at Thanesar on the bank of the Sarasvati River. The Chinese pilgrim Yuan Chuang also referred to a monastery and stupa near the Aujas Ghat of Brahma-Sarovar. The newly discovered remains may be part of the settlement he mentioned.

Keywords: archaeology, stupa, Buddhism, excavation

Procedia PDF Downloads 152
4343 A Hybrid Model Tree and Logistic Regression Model for Prediction of Soil Shear Strength in Clay

Authors: Ehsan Mehryaar, Seyed Armin Motahari Tabari

Abstract:

Without a doubt, shear strength is the most important property of a soil: the majority of fatal and catastrophic geological accidents are related to shear strength failure. Its prediction is therefore a matter of high importance. However, acquiring the shear strength is usually a cumbersome task that may require complicated laboratory testing, so predicting it from common and easily obtained soil properties can simplify projects substantially. In this paper, a hybrid model based on the classification and regression tree (CART) algorithm and logistic regression is proposed, in which each leaf of the tree holds an independent regression model. A database of 189 data points for clay soils, including moisture content, liquid limit, plastic limit, clay content, and shear strength, was collected. The performance of the developed model was compared with existing models and equations using the root mean squared error and the coefficient of correlation.
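The hybrid described above pairs a CART-style partition with a regression model at each leaf. The following heavily simplified sketch substitutes a single median split and ordinary linear regression per leaf for the full CART/logistic machinery, to show the leaf-wise idea; all soil values are invented:

```python
def linfit(xs, ys):
    """Ordinary least squares y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

class OneSplitHybrid:
    """Depth-1 'model tree': split samples on a median threshold of one
    feature, then fit an independent linear model in each leaf."""

    def fit(self, split_x, reg_x, y):
        self.t = sorted(split_x)[len(split_x) // 2]  # median threshold
        left = [(r, v) for s, r, v in zip(split_x, reg_x, y) if s <= self.t]
        right = [(r, v) for s, r, v in zip(split_x, reg_x, y) if s > self.t]
        self.left = linfit([r for r, _ in left], [v for _, v in left])
        self.right = linfit([r for r, _ in right], [v for _, v in right])
        return self

    def predict(self, s, r):
        a, b = self.left if s <= self.t else self.right
        return a + b * r

# Invented data: clay content (%), moisture content (%), shear strength (kPa)
clay     = [10, 12, 15, 30, 35, 40, 45, 50]
moisture = [20, 22, 25, 18, 20, 24, 26, 30]
tau      = [55, 52, 48, 70, 66, 58, 54, 46]
m = OneSplitHybrid().fit(clay, moisture, tau)
pred = m.predict(35, 20)  # shear strength estimate for one clay/moisture pair
```

A real model tree would grow the partition recursively with an impurity criterion and prune it, but the essential property is the same: each region of the feature space gets its own locally fitted regression.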

Keywords: model tree, CART, logistic regression, soil shear strength

Procedia PDF Downloads 165
4342 A Regression Model for Residual-State Creep Failure

Authors: Deepak Raj Bhat, Ryuichi Yatabe

Abstract:

In this study, a residual-state creep failure model was developed based on residual-state creep test results for clayey soils. To develop the proposed model, regression analyses were performed in R. The model estimates of failure time (tf) and critical displacement (δc) were compared with experimental results and found to be in close agreement. It is expected that the proposed regression model for residual-state creep failure will be useful for predicting the displacement of different clayey soils.

Keywords: regression model, residual-state creep failure, displacement prediction, clayey soils

Procedia PDF Downloads 376
4341 A Fuzzy Nonlinear Regression Model for Interval Type-2 Fuzzy Sets

Authors: O. Poleshchuk, E. Komarov

Abstract:

This paper presents a regression model for interval type-2 fuzzy sets based on the least-squares estimation technique. Unknown coefficients are assumed to be triangular fuzzy numbers. The basic idea is to determine aggregation intervals for the type-1 fuzzy sets whose membership functions are the lower and upper membership functions of an interval type-2 fuzzy set; these aggregation intervals are called weighted intervals. The lower and upper membership functions of the input and output interval type-2 fuzzy sets in the developed regression models are taken to be piecewise linear functions.
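As a crude stand-in for the method sketched above (it ignores the fuzzy-coefficient and membership-function machinery entirely), the snippet below fits separate least-squares lines to the lower and upper endpoints of interval-valued outputs, so that each prediction is itself an interval; the data are invented:

```python
def ols(xs, ys):
    """Ordinary least squares y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

class IntervalRegression:
    """Fit one OLS line to the lower and one to the upper endpoints of
    interval outputs; prediction is the interval [lo(x), hi(x)]."""

    def fit(self, xs, lo, hi):
        self.lo = ols(xs, lo)
        self.hi = ols(xs, hi)
        return self

    def predict(self, x):
        (al, bl), (au, bu) = self.lo, self.hi
        return al + bl * x, au + bu * x

xs = [1, 2, 3, 4, 5]
lo = [2.0, 3.9, 6.1, 8.0, 9.9]   # lower endpoints of weighted intervals
hi = [3.1, 5.0, 7.0, 9.1, 11.0]  # upper endpoints
m = IntervalRegression().fit(xs, lo, hi)
low, high = m.predict(3)
```

In the paper's setting the endpoints would come from the weighted intervals aggregating the lower and upper membership functions, and the coefficients themselves would be triangular fuzzy numbers rather than crisp slopes.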

Keywords: interval type-2 fuzzy sets, fuzzy regression, weighted interval

Procedia PDF Downloads 335
4340 Determinants of Non-Performing Loans: An Empirical Investigation of Bank-Specific Micro-Economic Factors

Authors: Amir Ikram, Faisal Ijaz, Qin Su

Abstract:

This empirical study was undertaken to explore the determinants of non-performing loans (NPLs) in the small and medium enterprise (SME) portfolios of commercial banks. Primary data were collected through a well-structured survey questionnaire from credit analysts/bankers at 42 branches of 9 commercial banks operating in the district of Lahore (Pakistan) for 2014-2015. Selective descriptive analysis and the Pearson chi-square technique were used to illustrate and evaluate the significance of different variables affecting NPLs. Branch age, loan duration, and credit policy were found to be significant determinants of NPLs. The study proposes that bank-specific and SME-specific microeconomic variables directly influence NPLs, while macroeconomic factors act as intermediary variables. A framework exhibiting the causal nexus of NPLs was also drawn on the basis of the empirical findings. The results elaborate various origins of NPLs and suggest that they are primarily instigated by the loan-sanctioning procedure of the financial institution. The paper also underlines the risk management practices adopted by banks at the branch level to mitigate the risk of loan default. The empirical investigation of bank-specific microeconomic factors of NPLs with respect to Pakistan's economy is the novelty of the study. Broader strategic policy implications are provided for credit analysts and entrepreneurs.
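The Pearson chi-square technique mentioned above can be illustrated on a 2x2 contingency table; the counts below are invented for illustration, not the study's data:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (e.g. strict vs. lenient credit policy against NPL status)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = row[i] * col[j] / n      # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts: rows = strict/lenient credit policy, cols = NPL yes/no
stat = chi_square_2x2([(10, 40), (25, 25)])
significant = stat > 3.841  # chi-square critical value, df = 1, alpha = 0.05
```

Here the statistic exceeds the 5% critical value, so the (invented) association between credit policy and loan default would be judged significant, which is the form of evidence the study reports for branch age, loan duration, and credit policy.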

Keywords: commercial banks, microeconomic factors, non-performing loans, small and medium enterprises

Procedia PDF Downloads 232