Search results for: probability and statistics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2911

2641 An Analysis of a Queueing System with Heterogeneous Servers Subject to Catastrophes

Authors: M. Reni Sagayaraj, S. Anand Gnana Selvam, R. Reynald Susainathan

Abstract:

This study analyzes a queueing system with blocking and no waiting line. Customers arrive according to a Poisson process, and there are two non-identical servers in the system. The queue discipline is FCFS, and customers select servers on a fastest-server-first (FSF) basis. The service times are exponentially distributed with parameters μ1 and μ2 at servers I and II, respectively. In addition, catastrophes occur in the system in a Poisson manner with rate γ. When server I is busy or blocked, a customer who arrives in the system leaves without being served; such customers are called lost customers. The probability of losing a customer is computed for the system. The explicit time-dependent probabilities of the system size are obtained, and a numerical example is presented to illustrate the managerial insights of the model. Finally, the probability that an arriving customer finds the system busy and the average number of busy servers in steady state are obtained numerically.
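The loss system described above has only four states (empty, only server I busy, only server II busy, both busy), so its stationary behaviour can be sketched with a small generator-matrix solve. The rates below are hypothetical, and the code illustrates this model class rather than the authors' time-dependent solution:

```python
import numpy as np

# States of the loss system: 0 = empty, 1 = only server I busy,
# 2 = only server II busy, 3 = both busy (an arriving customer is lost).
# Illustrative rates (hypothetical): lam = arrival rate, mu1 > mu2 = service
# rates of servers I and II, gamma = catastrophe rate that empties the system.
lam, mu1, mu2, gamma = 1.0, 1.2, 0.8, 0.1

Q = np.zeros((4, 4))
Q[0, 1] = lam              # arrival joins the faster server I first (FSF)
Q[1, 3] = lam              # server I busy: arrival takes server II
Q[2, 3] = lam              # server II busy: arrival takes server I
Q[1, 0] = mu1 + gamma      # service completion at I, or catastrophe
Q[2, 0] = mu2 + gamma      # service completion at II, or catastrophe
Q[3, 2] = mu1              # I finishes while II is still busy
Q[3, 1] = mu2              # II finishes while I is still busy
Q[3, 0] = gamma            # catastrophe flushes both customers at once
np.fill_diagonal(Q, -Q.sum(axis=1))

# Stationary distribution: solve pi Q = 0 with probabilities summing to 1.
M = Q.T.copy()
M[-1, :] = 1.0
pi = np.linalg.solve(M, np.array([0.0, 0.0, 0.0, 1.0]))
print(pi)  # pi[3] = probability an arriving customer finds both servers busy
```

With γ > 0 the chain is irreducible, so the linear system has a unique probability solution.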

Keywords: queueing system, blocking, Poisson process, heterogeneous servers, queue discipline FCFS, busy period

Procedia PDF Downloads 477
2640 An Experimental Investigation of the Cognitive Noise Influence on the Bistable Visual Perception

Authors: Alexander E. Hramov, Vadim V. Grubov, Alexey A. Koronovskii, Maria K. Kurovskaya, Anastasija E. Runnova

Abstract:

The perception of visual signals in the brain was among the first issues discussed in terms of multistability, a concept introduced to provide mechanisms for information processing in biological neural systems. In this work, the influence of cognitive noise on the visual perception of multistable pictures has been investigated. The study includes an experiment with the bistable Necker cube illusion and the theoretical background explaining the obtained experimental results. In our experiments, Necker cubes with different wireframe contrast were demonstrated repeatedly to different people, and the probability of the choice of one of the cube's projections was calculated for each picture. The Necker cube was placed at the middle of a computer screen as black lines on a white background. The contrast of the three middle lines centered in the left middle corner was used as a control parameter. Between two successive demonstrations of Necker cubes, another picture was shown to distract attention and to make the perception of the next Necker cube more independent of the previous one. Eleven subjects, male and female, aged 20 through 45, were studied. The choice of the Necker cube projection was detected with the electroencephalograph recorder Encephalan-EEGR-19/26 (Medicom MTD). To treat the experimental results, we carried out a theoretical consideration using the simplest double-well potential model in the presence of noise, which leads to the Fokker-Planck equation for the probability density of the stochastic process. For the first time, an analytical solution for the probability of the selection of one of the Necker cube projections for different values of wireframe contrast has been obtained. Furthermore, using the results of the experimental measurements and the method of least squares, we calculated the value of the parameter corresponding to the cognitive noise of the person being studied.
The range of cognitive noise parameter values for the studied subjects turned out to be [0.08, 0.55]. It should be noted that the experimental results have good reproducibility: the same person, studied again on another day, produces very similar data with very close levels of cognitive noise. We found excellent agreement between the analytically deduced probability and the results obtained in the experiment. A good qualitative agreement between theoretical and experimental results indicates that even such a simple model allows simulating brain cognitive dynamics and estimating an important cognitive characteristic of the brain, such as brain noise.
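A minimal sketch of the fitting step: in a stationary two-state reduction of the double-well model, the probability of selecting one projection is a logistic function of the contrast asymmetry, with the cognitive-noise level D as the smoothing parameter. All numbers below are hypothetical, and the two-state reduction is our illustrative simplification of the paper's Fokker-Planck treatment:

```python
import math

def p_left(a, D):
    # Stationary two-state reduction of the double-well model: the deeper
    # well is preferred, and the noise level D smooths the transition.
    return 1.0 / (1.0 + math.exp(-a / D))

# Synthetic "experimental" choice frequencies at several contrast levels,
# generated from a true noise level D = 0.3 (hypothetical values).
contrasts = [-0.4, -0.2, 0.0, 0.2, 0.4]
observed = [p_left(a, 0.3) for a in contrasts]

# Least-squares fit of D by a simple grid search over [0.05, 1.0].
candidates = [0.05 + 0.001 * i for i in range(951)]
D_fit = min(candidates,
            key=lambda D: sum((p_left(a, D) - y) ** 2
                              for a, y in zip(contrasts, observed)))
print(round(D_fit, 2))  # recovers a value near the true 0.3
```

A value inside the reported range [0.08, 0.55] would then be interpreted as the subject's cognitive-noise level.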

Keywords: bistability, brain, noise, perception, stochastic processes

Procedia PDF Downloads 422
2639 Employers' Preferences When Employing the Solo Self-Employed: A Vignette Study in the Netherlands

Authors: Lian Kösters, Wendy Smits, Raymond Montizaan

Abstract:

The number of solo self-employed in the Netherlands has been increasing for years, and the relative increase is among the largest in the EU. To explain this increase, most studies have focused on the supply side: the workers who offer themselves as solo self-employed. Studies that focus on the demand side, the employers who hire the solo self-employed, are still scarce. Studies of employer behaviour conducted until now show that employers mainly choose self-employed workers when they have a temporary need for specialist knowledge, but also during projects or production peaks. These studies do not provide insight into employers' considerations regarding different contract types. In this study, interviews with employers were conducted and the available literature was consulted to provide an overview of the factors employers use to compare different contract types. That input was used to set up a vignette study, carried out at the end of 2021 among almost 1,000 business owners, HR managers, and business leaders of Dutch companies. Each respondent was given two sets of five fictitious candidates for two possible positions in their organization and was asked to rank these candidates. The positions varied in the type of tasks (core tasks or support tasks) and the time it takes to train new people for the position. The respondents were asked additional questions about the positions, such as the required level of education, the duration, and the degree of predictability of the tasks. The fictitious candidates varied, among other things, in the type of contract under which they would come to work for the organization. The results were analyzed using a rank-ordered logit analysis. This vignette setup makes it possible to see which factors are most important for employers when they choose to hire a solo self-employed person rather than workers on other contracts.
The results show no indication that employers would want to hire solo self-employed workers en masse; they prefer regular employee contracts. The probability of being chosen with a solo self-employed contract over someone who comes to work as a temporary employee is 32 percent. This probability is even lower than that for on-call and temporary agency workers. For a permanent contract, the probability is 46 percent. The results indicate that employers consider knowledge and skills more important than the solo self-employed contract, and that these can compensate for it. A solo self-employed candidate with 10 years of work experience has a 63 percent probability of being found attractive by an employer compared to a temporary employee without work experience. This suggests that employers are willing to accept a contract that is less attractive to them if the worker so wishes. The results also show that the probability that a solo self-employed person is preferred over a candidate with a temporary employee contract is somewhat higher in business economics, administrative, and technical professions. No significant results were found for factors where solo self-employed workers were expected to be preferred more often, such as unpredictable or temporary work.
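The rank-ordered (exploded) logit underlying the analysis can be sketched as follows: the probability of observing a full ranking is the product of multinomial-logit choice probabilities over successively shrinking choice sets. The utilities below are hypothetical placeholders, not estimates from the study:

```python
import math
from itertools import permutations

def ranking_probability(utilities, ranking):
    """Rank-ordered (exploded) logit: the probability of a complete ranking
    is the product of multinomial-logit choice probabilities, where the
    chosen alternative is removed from the choice set at each step."""
    prob = 1.0
    remaining = list(range(len(utilities)))
    for chosen in ranking:
        denom = sum(math.exp(utilities[j]) for j in remaining)
        prob *= math.exp(utilities[chosen]) / denom
        remaining.remove(chosen)
    return prob

# Hypothetical systematic utilities for three contract types:
# permanent, temporary employee, solo self-employed.
u = [1.0, 0.5, -0.2]
total = sum(ranking_probability(u, r) for r in permutations(range(3)))
print(round(total, 6))  # probabilities over all possible rankings sum to 1
```

Estimation then maximizes the product of such ranking probabilities over all respondents.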

Keywords: employer behaviour, rank-ordered logit analysis, solo self-employment, temporary contract, vignette study

Procedia PDF Downloads 45
2638 Wireless Transmission of Big Data Using Novel Secure Algorithm

Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha

Abstract:

This paper presents a novel algorithm for the secure, reliable, and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay, and destination nodes. Big data has to be transmitted from source to relay and from relay to destination, with security deployed at the physical layer. The cooperative jamming scheme makes the transmission of big data more secure by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data-transmitting region, segmenting the selected region, determining the probability ratio for each node (capture node, non-capture node, and eavesdropper node) in every segment, and evaluating the probability using binary-based evaluation. If the transmission is secure, the two-hop transmission of big data resumes; otherwise, the attackers are countered by the cooperative jamming scheme and the data is transmitted in two-hop transmission.

Keywords: big data, two-hop transmission, physical layer wireless security, cooperative jamming, energy balance

Procedia PDF Downloads 456
2637 Study of Seismic Damage Reinforced Concrete Frames in Variable Height with Logistic Statistic Function Distribution

Authors: P. Zarfam, M. Mansouri Baghbaderani

Abstract:

In seismic design, the proper reaction to an earthquake and the correct and accurate prediction of its subsequent effects on the structure are critical. Choosing a proper probability distribution, one that gives a more realistic probability of the structure's damage rate, is essential in damage discussions. With the development of performance-based design, the analytical method of modal pushover, being inexpensive, efficacious, and quick in estimating structures' seismic response, is broadly used in engineering contexts. In this research, three concrete frames of 3, 6, and 13 stories are analyzed by non-linear modal pushover with 30 different earthquake records in OpenSEES software; then the damage indexes of roof displacement and relative inter-story displacement ratio are calculated against two parameters: peak ground acceleration and spectral acceleration. These indexes are used to establish the damage relations with the log-normal distribution and the logistic distribution. Finally, the values of these relations are compared, and the effect of height on the mentioned damage relations is studied as well.
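As a sketch of the two candidate damage models, the snippet below evaluates fragility (exceedance-probability) curves under log-normal and logistic assumptions on the intensity measure; the parameter values are hypothetical, not those estimated in the study:

```python
import math

def lognormal_fragility(pga, median, beta):
    # P(damage >= state | PGA) under a log-normal fragility model:
    # standard normal CDF of ln(PGA / median) / beta.
    return 0.5 * (1.0 + math.erf(math.log(pga / median) / (beta * math.sqrt(2))))

def logistic_fragility(pga, location, scale):
    # Same exceedance probability under a logistic model on ln(PGA).
    return 1.0 / (1.0 + math.exp(-(math.log(pga) - math.log(location)) / scale))

# Hypothetical parameters: both curves cross 50% at PGA = 0.35 g.
for pga in (0.1, 0.35, 0.8):
    print(round(lognormal_fragility(pga, 0.35, 0.5), 3),
          round(logistic_fragility(pga, 0.35, 0.3), 3))
```

Comparing the two curves fitted to the same damage indexes is essentially what the study does across frame heights.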

Keywords: modal pushover analysis, concrete structure, seismic damage, log-normal distribution, logistic distribution

Procedia PDF Downloads 221
2636 Convex Restrictions for Outage Constrained MU-MISO Downlink under Imperfect Channel State Information

Authors: A. Preetha Priyadharshini, S. B. M. Priya

Abstract:

In this paper, we consider the MU-MISO downlink scenario under imperfect channel state information (CSI). The main issue with imperfect CSI is keeping the probability of each user's achievable rate outage below a given threshold level. Such rate outage constraints present significant analytical challenges. Many probabilistic methods are used to solve the transmit optimization problem under imperfect CSI. Here, decomposition-based large deviation inequality and Bernstein-type inequality convex restriction methods are used to solve the optimization problem under imperfect CSI. These methods achieve improved output quality at lower complexity, and they provide a safe, tractable approximation of the original rate outage constraints. Based on implementations of these methods, performance has been evaluated in terms of feasible rate and average transmission power. The simulation results show that both methods offer significantly improved outage quality and lower computational complexity.

Keywords: imperfect channel state information, outage probability, multiuser multi-input single-output, channel state information

Procedia PDF Downloads 776
2635 The Best Prediction Data Mining Model for Breast Cancer Probability in Women Residents in Kabul

Authors: Mina Jafari, Kobra Hamraee, Saied Hossein Hosseini

Abstract:

The prediction of breast cancer disease is one of the challenges in medicine. In this paper, we collected 528 records of women who live in Kabul, including demographic, lifestyle, diet, and pregnancy data. There are many classification algorithms for breast cancer prediction, and we tried to find the best model, with the most accurate result and the lowest error rate. We evaluated several common supervised data mining algorithms to find the best model for predicting breast cancer disease among Afghan women living in Kabul, with the mammography result as the target variable. For evaluating these algorithms, we used cross-validation, which is an established method for measuring the performance of models. After comparing the error rate and accuracy of three models (Decision Tree, Naive Bayes, and Rule Induction), the Decision Tree, with an accuracy of 94.06% and an error rate of 15%, was found to be the best model for predicting breast cancer disease based on the health care records.
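A minimal sketch of the cross-validation step, using a toy stand-in classifier (a one-feature decision stump, not the study's Decision Tree) and synthetic data, purely to show how the fold-wise accuracy estimate is assembled:

```python
import random

def k_fold_accuracy(records, labels, k, fit):
    """k-fold cross-validation: each fold is held out once for testing while
    the classifier is fitted on the remaining folds; returns mean accuracy."""
    idx = list(range(len(records)))
    random.Random(42).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        predict = fit([records[i] for i in train], [labels[i] for i in train])
        correct = sum(predict(records[i]) == labels[i] for i in fold)
        scores.append(correct / len(fold))
    return sum(scores) / k

def stump(train_x, train_y):
    # Toy stand-in classifier: threshold on the mean of a single feature.
    threshold = sum(x[0] for x in train_x) / len(train_x)
    return lambda x: int(x[0] > threshold)

# Hypothetical single-feature records with two well-separated classes.
data = [(v,) for v in list(range(10)) + list(range(20, 30))]
target = [0] * 10 + [1] * 10
print(k_fold_accuracy(data, target, 5, stump))  # → 1.0 on this separable toy set
```

The same loop applies unchanged when `fit` trains a decision tree or Naive Bayes model on the real records.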

Keywords: decision tree, breast cancer, probability, data mining

Procedia PDF Downloads 108
2634 The Methodology of Out-Migration in Georgia

Authors: Shorena Tsiklauri

Abstract:

Out-migration is an important issue for Georgia, as, since independence, the country has lost one fifth of its population to emigration. During Soviet times, out-migration from the USSR was almost impossible, and one of the most important instruments for regulating population movement within the Soviet Union was the system of compulsory residential registration, the so-called "propiska". Since independence, there has been no such regulation of migration from Georgia. The majority of Georgian migrants go abroad on tourist visas and then overstay, becoming irregular labor migrants. The official statistics on migration published for this period, based on the administrative system of population registration, were insignificant in terms of numbers and did not represent the real scope of these migration movements. This paper discusses the data quality and methodology of migration statistics in Georgia, and we answer the question: what is the real reason for the increase in immigration flows according to the official numbers since the 2000s?

Keywords: data quality, Georgia, methodology, migration

Procedia PDF Downloads 386
2633 Mathematical Model of Corporate Bond Portfolio and Effective Border Preview

Authors: Sergey Podluzhnyy

Abstract:

One of the most important tasks of investment and pension fund management is building a decision support system that helps make the right decisions on corporate bond portfolio formation. Today there are several basic methods of bond portfolio management: duration management, immunization, and convexity management. These methods have a serious disadvantage: they do not take into account the credit risk, or insolvency risk, of the issuer, so they can be applied only to the management and evaluation of high-quality sovereign bonds. This article proposes a mathematical model for building a corporate bond portfolio that is optimal with respect to risk and yield. The proposed model takes the default probability into account in the bond valuation formula, which results in a more correct evaluation of bond prices. Moreover, the model provides tools for visualizing the efficient frontier of a corporate bond portfolio taking into account the exposure to credit risk, which will increase the quality of portfolio managers' investment decisions.
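The valuation idea, weighting each cash flow by its survival probability and adding a recovery payment on default, can be sketched as follows; the annual default probability, recovery rate, and bond terms below are hypothetical, not the paper's calibration:

```python
def bond_price(face, coupon_rate, maturity, rate, p_default, recovery=0.4):
    """Discounted expected cash flows with a constant annual default
    probability: coupons (and the face value at maturity) are paid only on
    survival; on default the holder recovers a fraction of face value."""
    coupon = face * coupon_rate
    price = 0.0
    for t in range(1, maturity + 1):
        survive_t = (1.0 - p_default) ** t                    # alive through year t
        default_t = (1.0 - p_default) ** (t - 1) * p_default  # defaults during year t
        cash = coupon * survive_t + recovery * face * default_t
        if t == maturity:
            cash += face * survive_t
        price += cash / (1.0 + rate) ** t
    return price

# Hypothetical 10-year 5% coupon bond discounted at 5%:
print(bond_price(100, 0.05, 10, 0.05, 0.0))   # default-free: prices at par
print(bond_price(100, 0.05, 10, 0.05, 0.02))  # 2% annual default risk lowers the price
```

Plotting portfolio risk against the expected yield of such default-adjusted prices traces out the efficient frontier the abstract mentions.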

Keywords: corporate bond portfolio, default probability, effective boundary, portfolio optimization task

Procedia PDF Downloads 296
2632 Challenges for IoT Adoption in India: A Study Based on Foresight Analysis for 2025

Authors: Shruti Chopra, Vikas Rao Vadi

Abstract:

In the era of the digital world, the Internet of Things (IoT) has been receiving significant attention. Its ubiquitous connectivity between humans, machine to machine (M2M), and machines and humans gives it the potential to transform society and establish an ecosystem that serves new dimensions of the country's economy. This study has therefore attempted to identify, through a literature survey, the challenges that seem prevalent in IoT adoption in India. Further, data were collected from the opinions of experts to conduct a foresight analysis, and they were analyzed with the help of the scenario planning process: Micmac, Mactor, Multipol, and Smic-Prob. Methodologically, the study identified the relationships between variables through variable analysis using Micmac and actor analysis using Mactor, and then attempted to generate the entire field of possibilities in terms of hypotheses and to construct various scenarios through Multipol. Lastly, the findings include the final scenarios, selected using Smic-Prob by assigning a probability to every scenario (including the conditional probability). This study may help practitioners and policymakers remove the obstacles to successfully implementing IoT in India.

Keywords: Internet of Things (IoT), foresight analysis, scenario planning, challenges, policymaking

Procedia PDF Downloads 126
2631 Statistical Correlation between Ply Mechanical Properties of Composite and Its Effect on Structure Reliability

Authors: S. Zhang, L. Zhang, X. Chen

Abstract:

Due to the large uncertainty in the mechanical properties of FRP (fibre reinforced plastic), the reliability evaluation of FRP structures is currently receiving much attention in industry. However, possible statistical correlation between ply mechanical properties has so far been overlooked, and they are mostly assumed to be independent random variables. In this study, the statistical correlation between the ply mechanical properties of uni-directional and plain weave composite is first analyzed by a combination of Monte-Carlo simulation and finite element modeling of the FRP unit cell. Large linear correlation coefficients between the in-plane mechanical properties are observed, and the correlation coefficients depend heavily on the uncertainty of the fibre volume ratio. It is also observed that the correlation coefficients related to Poisson's ratio are negative while the others are positive. To obtain the statistical correlation coefficients between the in-plane mechanical properties of FRP experimentally, all concerned in-plane mechanical properties of the same specimen need to be known. The in-plane shear modulus of FRP is experimentally derived by the approach suggested in ASTM standard D5379M. Tensile tests are conducted using the same specimens used for the shear test, and due to non-uniform tensile deformation a modification factor is derived by finite element modeling. Digital image correlation is adopted to characterize the specimen's non-uniform deformation. The preliminary experimental results show good agreement with the numerical analysis on the statistical correlation. Then, the failure probability of laminate plates is calculated in cases considering and not considering the statistical correlation, using the Monte-Carlo and Markov Chain Monte-Carlo methods, respectively. The results highlight the importance of accounting for the statistical correlation between ply mechanical properties to achieve an accurate failure probability of laminate plates.
Furthermore, it is found that for the multi-layer laminate plate, the statistical correlation between the ply elastic properties significantly affects the laminate reliability, while the effect of the statistical correlation between the ply strengths is minimal.
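A minimal Monte-Carlo sketch of the effect: sampling two correlated ply properties via a Cholesky factor of the covariance matrix and comparing the failure probability with and without correlation. The property values, correlation coefficient, and limit state are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical ply properties: longitudinal modulus E1 (MPa) and tensile
# strength Xt (MPa), positively correlated through the fibre volume ratio.
mean = np.array([130e3, 1800.0])
std = np.array([8e3, 120.0])
rho = 0.8
cov = np.diag(std) @ np.array([[1.0, rho], [rho, 1.0]]) @ np.diag(std)

# Correlated samples via the Cholesky factor; independent ones for contrast.
corr_samples = mean + rng.standard_normal((n, 2)) @ np.linalg.cholesky(cov).T
ind_samples = mean + rng.standard_normal((n, 2)) * std

# Toy limit state: failure when the stress at a fixed strain exceeds strength.
strain = 0.012
pf_corr = np.mean(corr_samples[:, 0] * strain > corr_samples[:, 1])
pf_ind = np.mean(ind_samples[:, 0] * strain > ind_samples[:, 1])
print(pf_corr, pf_ind)  # positive correlation lowers this failure probability
```

Ignoring the correlation here overstates the failure probability by more than an order of magnitude, which is the kind of discrepancy the study quantifies for laminate plates.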

Keywords: failure probability, FRP, reliability, statistical correlation

Procedia PDF Downloads 129
2630 Patients' Interpretation of Prescribed Medication Instructions: A Pilot Study among Diabetes Mellitus Patients at Makanye Clinic in Limpopo Province, South Africa

Authors: Charity Ngoatle, Tebogo M. Mothiba, Mahlapahlapana J. Themane

Abstract:

Misapprehension of medication instructions due to poor health literacy is common in diabetic patients, predominantly leading to suboptimal medication therapy caused by taking less than expected or achieving inadequate medication concentration. Globally, 50% of adults have been reported to misunderstand medication instructions, which could be the cause of not using medication as prescribed. Reading material has been found not to improve people's knowledge to the extent that they would be informed and knowledgeable about their health. This depicts that instructive materials alone cannot improve health literacy; further patient education is still needed to explain what the information really means. The aim of this study was to investigate patients' interpretation of prescribed medication instructions at Makanye Clinic in Limpopo Province, South Africa. The study used a mixed-method approach. Non-probability purposive and simple random sampling strategies will be used to select ten (10) participants for the pilot study. Semi-structured interviews with a guide and self-administered structured questionnaires will be used to collect data. Tesch's eight steps for qualitative data analysis and SPSS version 24 with descriptive statistics will be adopted. Preliminary findings from other studies show that: (a) poor health literacy negatively affects medication adherence, (b) general literacy influences health literacy, and (c) there are poor health outcomes and medication adverse effects due to poor medication comprehension.

Keywords: instructions, diabetes mellitus, patients, prescribed medication

Procedia PDF Downloads 115
2629 Cyber Security Enhancement via Software Defined Pseudo-Random Private IP Address Hopping

Authors: Andre Slonopas, Zona Kostic, Warren Thompson

Abstract:

Obfuscation is one of the most useful tools for preventing network compromise. Previous research focused on obfuscating the network communications between external-facing edge devices. This work proposes the use of two edge devices, external- and internal-facing, which communicate via private IPv4 addresses in a software-defined pseudo-random IP hopping scheme. This methodology does not require additional IP addresses and/or resources to implement. Statistical analyses demonstrate that the hopping surface must be at least 1e3 IP addresses in size, with a broad standard deviation, to minimize the possibility of coincidence between monitored and communication IPs. Breaking the hopping algorithm requires a collection of at least 1e6 samples, which for large hopping surfaces would take years to collect. The probability of dropped packets is controlled via memory buffers and the frequency of hops and can be reduced to levels acceptable for video streaming. This methodology provides an impenetrable layer of security ideal for information systems and supervisory control and data acquisition systems.
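A sketch of the synchronization idea, assuming a pre-shared key and a keyed hash (HMAC-SHA256, our choice for illustration) so that both edge devices derive the same pseudo-random hop sequence over a surface of 4096 private addresses without ever exchanging them:

```python
import hashlib
import hmac
import ipaddress

def hop_address(shared_key: bytes, slot: int, surface_size: int = 4096) -> str:
    """Derive the private IPv4 address for a given time slot from a shared
    key, so both edge devices compute the same hop sequence independently."""
    digest = hmac.new(shared_key, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    offset = int.from_bytes(digest[:4], "big") % surface_size
    # Hop within a /20-sized block (4096 addresses) of the 10.0.0.0/8 range.
    base = int(ipaddress.IPv4Address("10.0.0.0"))
    return str(ipaddress.IPv4Address(base + offset))

key = b"pre-shared secret"
sequence_a = [hop_address(key, t) for t in range(5)]
sequence_b = [hop_address(key, t) for t in range(5)]
print(sequence_a == sequence_b)  # → True: both endpoints stay synchronized
```

An observer without the key sees only an unpredictable sequence of private addresses, which is the obfuscation the abstract describes; the 4096-address surface satisfies the stated 1e3 minimum.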

Keywords: moving target defense, cybersecurity, network security, hopping randomization, software defined network, network security theory

Procedia PDF Downloads 157
2628 Capacity Building and Motivation as Determinants of Productivity among Library Personnel in Colleges of Education in Southwest, Nigeria

Authors: E. K. Soyele

Abstract:

This study examines capacity building and motivation as determinants of productivity among library personnel in colleges of education in South West, Nigeria. It used a descriptive survey research design. A total enumeration sampling technique was used to select the sample, which consisted of 40 library personnel. The instrument used for the study was a structured questionnaire divided into four parts. Data were analyzed using descriptive statistics (frequencies and percentages) and regression analysis. Findings revealed that capacity building and motivation have a positive impact on library personnel productivity, with percentages greater than the 50% acceptance level. The null hypotheses were tested at the P < 0.05 significance level to examine the relationship between capacity building and productivity, which was positive at P < 0.05. This implies that capacity building and motivation significantly determine productivity among library personnel in the selected college libraries in Nigeria. The study concluded that institutions need to equip their library personnel via training programmes, in-service training, digital training, ICT training, seminars, conferences, etc. Incentives should be provided to motivate personnel toward high productivity. The study therefore recommends that government, institutions, and library management fund college libraries adequately so as to enhance capacity building, staff commitment, and training for further education.

Keywords: capacity building, library personnel, motivation, productivity

Procedia PDF Downloads 169
2627 Fuzzy Availability Analysis of a Battery Production System

Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz

Abstract:

In today’s competitive market, there are many alternative products that can be used in a similar manner and for a similar purpose. Therefore, the utility of the product is an important issue for the preferability of the brand. This utility can be measured in terms of functionality, durability, and reliability, all of which are affected by the system's capabilities. Reliability is an important system design criterion for manufacturers seeking high availability. Availability is the probability that a system (or a component) is performing its function properly at a specific point in time or over a specified period of time. System availability provides valuable input for estimating the production rate the company needs to realize its production plan. When considering only the corrective maintenance downtime of the system, the mean time between failures (MTBF) and the mean time to repair (MTTR) are used to obtain system availability. The MTBF and MTTR values are also important measures for improving system performance by adopting suitable maintenance strategies, both for reliability engineers and for practitioners working on a system. The failure and repair time probability distributions of each component in the system must be known for a conventional availability analysis. Generally, however, companies do not have statistics or quality control departments to store such a large amount of data, and real events or situations are described deterministically instead of with the stochastic data needed for a complete description of real systems. Fuzzy set theory is an alternative used to analyze the uncertainty and vagueness in real systems. The aim of this study is to present a novel approach to computing system availability by representing MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR (15%, 20%, and 25%) were chosen to obtain the lower and upper limits of the fuzzy numbers.
To the best of our knowledge, the proposed method is the first application that uses fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. The method is easy for practitioners in industry to apply to any repairable production system, and it lets reliability engineers, managers, and practitioners analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery production factory in Turkey, focusing on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that when a component fails, it becomes as good as new after repair. Instead of classical methods, using fuzzy set theory and obtaining intervals for these measures is very useful for system managers and practitioners analyzing system qualifications to find better results for their working conditions. Thus, much more detailed information about system characteristics is obtained.
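A minimal sketch of the fuzzy availability computation with triangular fuzzy numbers, using the 20% spread and hypothetical MTBF/MTTR values; the bounds follow from interval arithmetic on A = MTBF / (MTBF + MTTR), which increases in MTBF and decreases in MTTR:

```python
def fuzzy_availability(mtbf, mttr):
    """Steady-state availability A = MTBF / (MTBF + MTTR) extended to
    triangular fuzzy numbers (lower, mode, upper) via interval arithmetic:
    the lower availability pairs the smallest MTBF with the largest MTTR."""
    bl, bm, bu = mtbf
    rl, rm, ru = mttr
    return (bl / (bl + ru), bm / (bm + rm), bu / (bu + rl))

# Hypothetical data: crisp MTBF = 200 h and MTTR = 10 h with a 20% spread.
mtbf = (160.0, 200.0, 240.0)
mttr = (8.0, 10.0, 12.0)
low, mid, high = fuzzy_availability(mtbf, mttr)
print(round(low, 4), round(mid, 4), round(high, 4))
```

The mode stays at the crisp value 200/210 ≈ 0.952, while the lower and upper bounds give managers an availability interval rather than a single point estimate.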

Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)

Procedia PDF Downloads 193
2626 The Effect of Excel on Undergraduate Students’ Understanding of Statistics and the Normal Distribution

Authors: Masomeh Jamshid Nejad

Abstract:

Nowadays, statistical literacy is not merely a desirable skill but an essential one, with broad applications across diverse fields, especially in operational decision areas such as business management, finance, and economics. As such, learning and a deep understanding of statistical concepts are essential in the context of business studies. One of the crucial topics in statistical theory and its application is the normal distribution, often called the bell-shaped curve. To interpret data and conduct hypothesis tests, comprehending the properties of the normal distribution (the mean and standard deviation) is essential for business students. This requires undergraduate students in economics and business management to visualize and work with data following a normal distribution. Since technology is now interconnected with education, it is important to teach statistics topics to undergraduate students in the context of Python, RStudio, and Microsoft Excel. This research endeavours to shed light on the effect of Excel-based instruction on learners' knowledge of statistics, specifically the central concept of the normal distribution. Two groups of undergraduate students from the Business Management program were compared in this research study: one group underwent Excel-based instruction, and the other relied only on traditional teaching methods. We analyzed experimental data and BBA participants' responses to statistics-related questions focusing on the normal distribution, including its key attributes, such as the mean and standard deviation. The results of our study indicate that exposing students to Excel-based learning helps them comprehend statistical concepts more effectively than the traditional method alone. In addition, students receiving Excel-based instruction showed a stronger ability to visualize and interpret data following a normal distribution.

Keywords: statistics, Excel-based instruction, data visualization, pedagogy

Procedia PDF Downloads 30
2625 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions

Authors: Valerii Dashuk

Abstract:

The usage of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with this technique allows checking the shift and the probability of that shift (i.e., portfolio risks) simultaneously. Another application is based on the normal distribution, which is fully defined by its mean and variance and therefore can be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (i.e., whether they may be considered identical at a given significance level). Then the absolute difference in probabilities at each 'point' of the domain of these distributions is calculated. This measure is transformed into a function of cumulative distribution functions and compared to critical values, whose table was constructed from simulations. The approach was compared with other techniques for the univariate case. It differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are also strengths of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most previous works. At the moment, the extension to the two-dimensional case is complete and allows up to five parameters to be tested jointly. The derived technique is therefore equivalent to classic tests in standard situations but gives more efficient alternatives in nonstandard problems and on large amounts of data.
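The abstract does not reproduce the exact CDF transform or the simulated critical-value table, so the sketch below only illustrates the idea under stated assumptions: the distance between two fitted normals is measured as the maximum absolute difference of their CDFs over a grid, and the critical value is simulated under the null of two samples from the same normal distribution.

```python
import math
import random

def norm_cdf(x, mu, sigma):
    # CDF of N(mu, sigma^2) via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def cdf_distance(mu1, s1, mu2, s2, lo=-10.0, hi=10.0, n=401):
    # maximum absolute difference between the two normal CDFs on a grid
    grid = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return max(abs(norm_cdf(x, mu1, s1) - norm_cdf(x, mu2, s2)) for x in grid)

def critical_value(n_obs, alpha=0.05, reps=200, seed=0):
    # simulate the statistic under H0: both samples drawn from the same normal
    rng = random.Random(seed)

    def fit(z):
        m = sum(z) / len(z)
        v = sum((w - m) ** 2 for w in z) / (len(z) - 1)
        return m, math.sqrt(v)

    stats = []
    for _ in range(reps):
        m1, s1 = fit([rng.gauss(0, 1) for _ in range(n_obs)])
        m2, s2 = fit([rng.gauss(0, 1) for _ in range(n_obs)])
        stats.append(cdf_distance(m1, s1, m2, s2))
    stats.sort()
    return stats[int((1 - alpha) * reps)]
```

Two parameter sets are judged identical when their cdf_distance falls below critical_value(n_obs); a shift in either the mean or the variance inflates the distance, which is what makes this a joint mean-variance test.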

Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function

Procedia PDF Downloads 146
2624 Women Empowerment in Cassava Production: A Case Study of Southwest Nigeria

Authors: Adepoju A. A., Olapade-Ogunwole F., Ganiyu M. O.

Abstract:

This study examined women's empowerment in cassava production in southwest Nigeria. It assessed the contributions of five domains (decisions about agricultural production, decision-making power over productive resources, control over the use of income, leadership, and time allocation) to women's disempowerment, profiled the women based on their socio-economic features, and determined the factors influencing women's disempowerment. Primary data were collected from the women farmers and processors through structured questionnaires. Purposive sampling was used to select the LGAs and villages with large numbers of cassava farmers and processors, while cluster sampling was used to select 360 respondents in the study area. Descriptive statistics such as bar charts and percentages, the Women's Empowerment in Agriculture Index (WEAI), and the logit regression model were used to analyze the data collected. The results revealed that 63.88% of the women were disempowered. Lack of decision-making power over productive resources (36.47%) and of leadership skills (33.26%) contributed most to the women's disempowerment. About 85% of the married women were disempowered, while 76.92% of the women who participated in social group activities were more empowered than their disempowered counterparts. The findings showed that women with more years of processing experience are more likely to be disempowered, while those who engage in farming as a primary livelihood activity and participate in social groups, among others, tend to be empowered. In view of this, it was recommended that women be encouraged to farm and to take part in social group activities.
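The WEAI computation itself is not reproduced in the abstract; the sketch below is a hypothetical illustration of the profiling step, assuming binary inadequacy indicators, equal weights across the five domains, and a 20% disempowerment cutoff (the published WEAI uses its own indicator weights and cutoff).

```python
# Hypothetical WEAI-style profiling sketch; the domain names, equal weights and
# 20% cutoff are illustrative assumptions, not the authors' exact specification.
DOMAINS = ["production", "resources", "income", "leadership", "time"]
WEIGHT = 1.0 / len(DOMAINS)
CUTOFF = 0.20

def inadequacy_score(indicators):
    # indicators: dict mapping each domain to 1 (inadequate) or 0 (adequate)
    return sum(WEIGHT * indicators[d] for d in DOMAINS)

def profile(sample):
    # headcount of disempowered women, plus each domain's share of the
    # total inadequacy among the disempowered
    disempowered = [s for s in sample if inadequacy_score(s) > CUTOFF]
    headcount = len(disempowered) / len(sample)
    total = sum(inadequacy_score(s) for s in disempowered)
    contrib = {d: sum(WEIGHT * s[d] for s in disempowered) / total for d in DOMAINS}
    return headcount, contrib
```

With such domain contributions in hand, one can read off which domains (in the study, productive resources and leadership) drive most of the disempowerment.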

Keywords: cassava, production, empowerment, southwest, Nigeria

Procedia PDF Downloads 24
2623 Corporate Cultures Management towards the Retention of Employees: Case Study Company in Thailand

Authors: Duangsamorn Rungsawanpho

Abstract:

The objectives of this paper are to explore corporate culture management as a determinant of employee retention in a company in Thailand. The study used a mixed-methods design, with data collected through questionnaires and in-depth interviews. The statistics used for data analysis were percentage, mean, standard deviation, and inferential statistics. The results show that no corporate culture management approach is perfect for every organization; it depends on the business and the industry, because the situations and circumstances that corporate executives face differ. The findings indicate that the company's employees tie achievement to the value orientation set by the corporate culture, and that international relations are perceived as most valuable for their organizations. In addition, we found that employees' perception of participation can be interpreted positively: many employees feel that they are part of management because their opinions and ideas related to their work are taken into account.

Keywords: corporate culture, employee retention, retention of employees, management approaches

Procedia PDF Downloads 275
2622 Organizational Innovations of the 20th Century as High Tech of the 21st: Evidence from Patent Data

Authors: Valery Yakubovich, Shuping Wu

Abstract:

Organization theorists have long claimed that organizational innovations are nontechnological, in part because they are unpatentable. The claim rests on the assumption that organizational innovations are abstract ideas embodied in persons and contexts rather than in context-free practical tools. However, over the last three decades, organizational knowledge has been increasingly embodied in digital tools which, in principle, can be patented. To provide the first empirical evidence regarding the patentability of organizational innovations, we trained two machine learning algorithms to identify a population of 205,434 patent applications for organizational technologies (OrgTech) and, among them, 141,285 applications that use organizational innovations accumulated over the 20th century. Our event history analysis of the probability of patenting an OrgTech invention shows that ideas from organizational innovations decrease the probability of patent allowance unless they describe a practical tool. We conclude that the present-day digital transformation places organizational innovations in the realm of high tech and turns the debate about organizational technologies into the challenge of designing practical organizational tools that embody big ideas about organizing. We outline an agenda for patent-based research on OrgTech as an emerging phenomenon.

Keywords: organizational innovation, organizational technology, high tech, patents, machine learning

Procedia PDF Downloads 94
2621 The Impact of Corporate Social Responsibility on Brand Equity of the Telecommunication Industry in South Africa

Authors: Keitumetse Gaesirwe

Abstract:

This study investigated the effect of corporate social responsibility (CSR) on brand equity. Specific objectives include examining the connections between the ethics and philanthropy constructs of CSR and brand loyalty in the telecommunication industry in South Africa. A convenience sampling technique was used, and closed-ended questionnaires were administered to 800 research participants across the nine provinces of South Africa. Data collected from the field were analyzed using inferential statistics (ordinary least squares regression and correlation analysis) as well as descriptive statistics. Findings show positive and significant connections between the constructs of CSR and brand loyalty. The implications of the findings indicate that maintaining ethical and philanthropic standards can be a source of competitive advantage and can guarantee brand loyalty for telecommunication companies in South Africa.
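The abstract names correlation analysis alongside the OLS regression; a minimal sketch of the correlation step, using fabricated Likert-scale construct scores in place of the survey data, might look like this (stdlib only).

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation between two equal-length score lists
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Fabricated illustrative means: CSR ethics construct vs. brand loyalty
ethics = [3.1, 4.0, 2.5, 4.4, 3.8]
loyalty = [3.0, 4.2, 2.8, 4.5, 3.6]
```

A strongly positive r between a CSR construct and brand loyalty is the kind of "positive and significant connection" the study reports (significance itself would require a t-test on r).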

Keywords: CSR, brand awareness, telecommunication industry, COVID-19, South Africa

Procedia PDF Downloads 83
2620 Determinants of Income Diversification among Support Zone Communities of National Parks in Nigeria

Authors: Daniel Etim Jacob, Samuel Onadeko, Edem A. Eniang, Imaobong Ufot Nelson

Abstract:

This paper examined the determinants of income diversification among households in the support zone communities of national parks in Nigeria. It involved the use of household data collected through questionnaires administered randomly to 1009 household heads in the study area. The data obtained were analyzed using probability and non-probability statistical analyses, such as regression and analysis of variance, to test for mean differences between parks. The results indicate that the majority of the household heads were male (92.57%), in the age class of 21 – 40 years (44.90%), had non-formal education (38.16%), were farmers (65.21%), owned land (95.44%), with a household size of 1 – 5 (36.67%) and an annual income range of ₦401,000 - ₦600,000 (24.58%). The mean Simpson index of diversity showed a generally low (0.375) level of income diversification among the households. Income, age, off-farm dependence, education, household size and occupation were significant (p<0.01) factors that affected households' income diversification. The study recommends improvement of the existing infrastructure and social capital in the communities as avenues to improve livelihoods and ensure positive conservation behaviors in the study area.
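Assuming the usual form of the Simpson index of diversity (one minus the sum of squared income shares), the per-household figure behind the reported mean of 0.375 can be computed as in this sketch; the income sources and amounts are hypothetical.

```python
def simpson_diversity(incomes):
    # Simpson index of diversity: 1 - sum of squared income shares;
    # 0 = a single income source, values near 1 = highly diversified
    total = sum(incomes)
    if total == 0:
        return 0.0
    return 1.0 - sum((v / total) ** 2 for v in incomes)

# e.g. a household earning from farming, trading and wage labour
# (hypothetical annual incomes in naira)
household = [250_000, 100_000, 50_000]
```

Averaging this index across all sampled households gives the study-level figure; a mean of 0.375 sits well below the value for even two equal sources (0.5), hence the "generally low" diversification.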

Keywords: income diversification, protected area, livelihood, poverty, Nigeria

Procedia PDF Downloads 112
2619 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks

Authors: Ahmed Abdullah Ahmed

Abstract:

The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although great effort has been made by previous studies to produce various methods, their performance, especially in terms of accuracy, falls short, and room for improvement remains wide open. The proposed technique employs optimal-codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of the writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create the beginning cluster, and similarly the ending strokes are grouped to create the ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every group of similar fragments. The writings under study are then represented by computing the probability of occurrence of codebook patterns, and this probability distribution is used to characterize each writer. Two writings are then compared by computing distances between their respective probability distributions. The evaluations were carried out on the standard ICFHR dataset of 206 writers, using the beginning and ending codebooks separately. The ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
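The representation-and-matching step can be sketched as follows. The abstract does not name the distance function, so the chi-square distance used here is an assumption, and the fragment labels are hypothetical codebook indices.

```python
from collections import Counter

def pattern_distribution(fragment_labels, codebook_size):
    # probability of occurrence of each codebook pattern in one writing sample
    counts = Counter(fragment_labels)
    n = len(fragment_labels)
    return [counts.get(k, 0) / n for k in range(codebook_size)]

def chi2_distance(p, q, eps=1e-12):
    # chi-square distance between two probability distributions
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))

def identify(query, references):
    # the nearest reference writer (smallest distribution distance) wins
    return min(references, key=lambda w: chi2_distance(query, references[w]))
```

Here identify returns the reference writer whose codebook-pattern distribution lies closest to the query writing's distribution.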

Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments

Procedia PDF Downloads 485
2618 Using Machine Learning to Enhance Win Ratio for College Ice Hockey Teams

Authors: Sadixa Sanjel, Ahmed Sadek, Naseef Mansoor, Zelalem Denekew

Abstract:

Sports analytics for collegiate ice hockey (NCAA) differs from that for national-level hockey (NHL). We apply and compare multiple machine learning models, such as linear regression, random forest, and neural networks, to predict a team's win ratio from its statistics. Data exploration helps determine which statistics are most useful in increasing the win ratio, which would be beneficial to coaches and team managers. We ran experiments to select the best model and chose random forest as the best performing. We conclude with how to bridge the gap between the college and national levels of sports analytics, and how machine learning can enhance team performance despite the lack of extensive metrics or a budget for automatic tracking.
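Of the three compared models, the sketch below shows only the simplest: a least-squares fit of win ratio on a single team statistic, with fabricated NCAA-style numbers. A fuller pipeline would fit scikit-learn's RandomForestRegressor on the complete feature set and compare validation error across models.

```python
def ols_fit(x, y):
    # simple least-squares line: win_ratio ~ a + b * statistic
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    return my - b * mx, b

# hypothetical team data: shots-on-goal differential per game vs. win ratio
shot_diff = [-6.0, -2.0, 0.0, 3.0, 7.0]
win_ratio = [0.20, 0.35, 0.45, 0.60, 0.80]
a, b = ols_fit(shot_diff, win_ratio)
predict = lambda s: a + b * s
```

The sign and size of b is exactly the kind of "which statistics increase the win ratio" signal the data exploration step looks for.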

Keywords: NCAA, NHL, sports analytics, random forest, regression, neural networks, game predictions

Procedia PDF Downloads 82
2617 Real-World Comparison of Adherence to and Persistence with Dulaglutide and Liraglutide in UAE e-Claims Database

Authors: Ibrahim Turfanda, Soniya Rai, Karan Vadher

Abstract:

Objectives— The study aims to compare real-world adherence to and persistence with dulaglutide and liraglutide in patients with type 2 diabetes (T2D) initiating treatment in UAE. Methods— This was a retrospective, non-interventional study (observation period: 01 March 2017–31 August 2019) using the UAE Dubai e-Claims database. Included: adult patients initiating dulaglutide/liraglutide 01 September 2017–31 August 2018 (index period) with: ≥1 claim for T2D in the 6 months before index date (ID); ≥1 claim for dulaglutide/liraglutide during index period; and continuous medical enrolment for ≥6 months before and ≥12 months after ID. Key endpoints, assessed 3/6/12 months after ID: adherence to treatment (proportion of days covered [PDC; PDC ≥80% considered 'adherent'], per-group mean±standard deviation [SD] PDC); and persistence (number of continuous therapy days from ID until discontinuation [i.e., >45 days gap] or end of observation period). Patients initiating dulaglutide/liraglutide were propensity score matched (1:1) based on baseline characteristics. Between-group comparison of adherence was analyzed using the McNemar test (α=0.025). Persistence was analyzed using Kaplan–Meier estimates with log-rank tests (α=0.025) for between-group comparisons. This study presents 12-month outcomes. Results— Following propensity score matching, 263 patients were included in each group. Mean±SD PDC for all patients at 12 months was significantly higher in the dulaglutide versus the liraglutide group (dulaglutide=0.48±0.30, liraglutide=0.39±0.28, p=0.0002). The proportion of adherent patients favored dulaglutide (dulaglutide=20.2%, liraglutide=12.9%, p=0.0302), as did the probability of being adherent to treatment (odds ratio [97.5% CI]: 1.70 [0.99, 2.91]; p=0.03). The proportion of persistent patients also favored dulaglutide (dulaglutide=15.2%, liraglutide=9.1%, p=0.0528), as did the probability of discontinuing treatment 12 months after ID (p=0.027).
Conclusions— Based on the UAE Dubai e-Claims database data, dulaglutide initiators exhibited significantly greater adherence in terms of mean PDC versus liraglutide initiators. The proportion of adherent patients and the probability of being adherent favored the dulaglutide group, as did treatment persistence.
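The PDC and persistence endpoints can be computed from claims data as in this sketch. The fill-date/days-supply representation is an assumption about the claims format; overlapping fills are handled by counting each covered calendar day once.

```python
from datetime import date, timedelta

def pdc(fills, start, days=365):
    # fills: list of (fill_date, days_supply) pairs; PDC is the share of
    # days in the observation window covered by at least one fill
    covered = set()
    for fill_date, supply in fills:
        for i in range(supply):
            d = fill_date + timedelta(days=i)
            if 0 <= (d - start).days < days:
                covered.add(d)
    return len(covered) / days

def persistent(fills, max_gap=45):
    # discontinuation = a gap of more than 45 days between the end of one
    # fill's supply and the next fill date
    fills = sorted(fills)
    for (d1, s1), (d2, _) in zip(fills, fills[1:]):
        if (d2 - (d1 + timedelta(days=s1))).days > max_gap:
            return False
    return True
```

Under the study's definitions, a patient is 'adherent' when pdc(...) ≥ 0.80 over the 12-month window, and 'persistent' when no refill gap exceeds 45 days.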

Keywords: adherence, dulaglutide, effectiveness, liraglutide, persistence

Procedia PDF Downloads 85
2616 Comparison of Bone Mineral Density of Lumbar Spines between High Level Cyclists and Sedentary

Authors: Mohammad Shabani

Abstract:

Physical activities can produce different skeletal outcomes depending on the nature of the mechanical stresses they induce on bone. The purpose of this study was to compare the bone mineral density (BMD) of the lumbar spine between high-level cyclists and sedentary subjects. Materials and Methods: In the present study, 73 senior cyclists (age: 25.81 ± 4.35 years; height: 179.66 ± 6.31 cm; weight: 71.55 ± 6.31 kg) and 32 sedentary subjects (age: 28.28 ± 4.52 years; height: 176.56 ± 6.2 cm; weight: 74.47 ± 8.35 kg) participated voluntarily. All cyclists belonged to different teams of the International Cycling Union and had trained competitively for 10 years. The BMD of the lumbar spine of the subjects was measured using dual-energy X-ray absorptiometry (DXA, Lunar). Descriptive statistics were computed using data-processing software (Statview 5, SAS Institute Inc., USA). The comparison of the two independent distributions (BMD of high-level cyclists and of sedentary subjects) was made with the standard Student's t-test, and a probability of 0.05 (p≤0.05) was adopted as the significance level. Results: The BMD values of the lumbar spine of the sedentary subjects were significantly higher for all measured segments. Conclusion and Discussion: Cycling is both a widespread sport and an endurance sport. It is now accepted that weight-bearing exercises have an osteogenic effect compared to non-weight-bearing exercises. Thus, endurance sports such as cycling, unlike activities that impose intense forces over a short time, do not seem to be particularly osteogenic. It can therefore be concluded that cycling provides a weak osteogenic stimulus because of the specific biomechanical forces of the sport and its lack of impact loading.
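The between-group comparison by standard Student's t-test can be sketched as below; the two-sided 5% critical value of about 1.98 for this study's 103 degrees of freedom (73 + 32 - 2) comes from standard t tables, hardcoded here because stdlib Python has no t-distribution.

```python
import math

def pooled_t(x, y):
    # two-sample Student's t statistic with pooled variance
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    t = (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))
    return t, nx + ny - 2

# for df = 103, the two-sided 5% critical value is about 1.98; |t| above it
# indicates a significant BMD difference between cyclists and sedentary subjects
```

With the BMD value lists for the two groups in hand, a |t| exceeding the cutoff corresponds to the "significantly higher" sedentary values the study reports.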

Keywords: BMD, lumbar spine, high level cyclist, cycling

Procedia PDF Downloads 243
2615 Decision Support System for Examination Selection

Authors: Katejarinporn Chaiya, Jarumon Nookong, Nutthapat Kaewrattanapat

Abstract:

The purposes of this research were to develop a decision support system for examination selection and to measure users' satisfaction after using it. The research presents the design of an information system that compiles the statistics needed on examinations from candidates' examination results and applies difficulty-setting statistics to the test. In addition, the research included performance appraisals by experts and a user satisfaction survey. The analysis showed that the experts' appraisal of the system as a whole was at a good level (mean 3.44, S.D. 0.55), and overall user satisfaction was likewise at a good level (mean 3.37, S.D. 0.42); it can be concluded that the system is effective at a good level, and the work was completed in accordance with its scope. The website was developed in PHP, with MySQL 5.0.45 as the database.

Keywords: decision support system, examination, PHP, information systems

Procedia PDF Downloads 420
2614 Navigating Government Finance Statistics: Effortless Retrieval and Comparative Analysis through Data Science and Machine Learning

Authors: Kwaku Damoah

Abstract:

This paper presents a methodology and software application (App) designed to empower users in accessing, retrieving, and comparatively exploring data within the hierarchical network framework of the Government Finance Statistics (GFS) system. It explores the ease of navigating the GFS system and identifies the gaps filled by the new methodology and App. The GFS embodies a complex Hierarchical Network Classification (HNC) structure, encapsulating institutional units, revenues, expenses, assets, liabilities, and economic activities. Navigating this structure demands specialized knowledge, experience, and skill, posing a significant challenge for effective analytics and fiscal policy decision-making. Many professionals encounter difficulties deciphering these classifications, hindering confident utilization of the system. This accessibility barrier obstructs a vast number of professionals, students, policymakers, and members of the public from leveraging the abundant data and information within the GFS. Leveraging the R programming language, data science analytics, and machine learning, an efficient methodology was developed that enables users to access, navigate, and conduct exploratory comparisons. The machine learning Fiscal Analytics App (FLOWZZ) democratizes access to advanced analytics through its user-friendly interface, breaking down expertise barriers.

Keywords: data science, data wrangling, drilldown analytics, government finance statistics, hierarchical network classification, machine learning, web application

Procedia PDF Downloads 32
2613 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories

Authors: Haj Najafi Leila, Tehranizadeh Mohsen

Abstract:

Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized as an imperative issue in acquiring accurate loss estimates. Dependencies among component damage costs can be taken into account by considering the two distinct limiting states of independent or perfectly dependent component damage states; however, to the best of our knowledge, there is no available procedure that accounts for loss dependencies at the story level. This paper presents a method called the 'modal cost superposition method' for decoupling the story damage costs of buildings subjected to earthquake ground motions. It deals with closed-form differential equations relating damage cost to engineering demand parameters, which are solved as a coupled system covering all stories' cost equations by means of the introduced 'substituted matrices of mass and stiffness'. Costs are treated as probabilistic variables with definite statistical factors, namely median and standard deviation, and a presumed probability distribution. To supplement the proposed procedure and to display the straightforwardness of its application, a benchmark study was conducted. Acceptable compatibility was found between the damage costs estimated by the newly proposed modal approach and by the frequently used stochastic approach for the entire building; at the story level, however, the insufficiency of employing a single modification factor to incorporate occurrence-probability dependencies between stories was revealed, owing to the discrepant degrees of dependency between the damage costs of different stories. A greater dependency contribution to the occurrence probability of loss can also be concluded from the greater compatibility of the loss results in the higher stories than in the lower ones, whereas reducing the number of included cost modes still provides an acceptable level of accuracy and avoids the time-consuming calculations that a large number of cost modes would require.

Keywords: dependency, story-cost, cost modes, engineering demand parameter

Procedia PDF Downloads 151
2612 A Theoretical Approach on Electoral Competition, Lobby Formation and Equilibrium Policy Platforms

Authors: Deepti Kohli, Meeta Keswani Mehra

Abstract:

The paper develops a theoretical model of electoral competition with purely opportunistic candidates and a uni-dimensional policy, using the probabilistic voting approach while focusing on the aspect of lobby formation, to analyze the inherent complex interactions between centripetal and centrifugal forces and their effects on equilibrium policy platforms. There exist three types of agents, namely Left-wing, Moderate, and Right-wing, who together comprise the total voting population. It is assumed that the Left and Right agents are free to initiate a lobby of their choice. If initiated, these lobbies generate donations, which in turn can be contributed to one (or both) electoral candidates in order to influence them to implement the lobby's preferred policy. Four lobby-formation scenarios are considered: no lobby, only Left, only Right, and both Left and Right. The equilibrium policy platforms, the individual donations by agents to their respective lobbies, and the contributions offered to the electoral candidates are solved for under each of these four cases. Since the agents cannot coordinate each other's actions during the lobby-formation stage, a lobby forms only with some probability, which is also solved for in the model. The results indicate that the policy platforms of the two electoral candidates converge completely in the cases of no lobby and of both (extreme) lobbies forming, but diverge in the cases where only one (Left or Right) lobby forms. This is because, when no lobby is formed, only the centripetal forces (emerging from the election-winning aspect) are present, while when both extreme (Left-wing and Right-wing) lobbies are formed, the centrifugal forces (emerging from the lobby-formation aspect) also arise but cancel each other out, again resulting in a pure policy-convergence phenomenon. In contrast, when only one lobby is formed, the centripetal and centrifugal forces interact strategically, leading the two electoral candidates to choose completely different policy platforms in equilibrium. Additionally, it is found that in equilibrium, while the donation by a specific agent type increases when both lobbies form compared to when only one lobby forms, the probability of implementation of the policy advocated by that lobby group falls.

Keywords: electoral competition, equilibrium policy platforms, lobby formation, opportunistic candidates

Procedia PDF Downloads 306