Search results for: weighted rank regression

3712 Weighted Risk Scores Method Proposal for Occupational Safety Risk Assessment

Authors: Ulas Cinar, Omer Faruk Ugurlu, Selcuk Cebi

Abstract:

Occupational safety risk management is the most important element of a safe working environment. Effective risk management is only possible with accurate analysis and evaluation. Scoring-based risk assessment methods offer considerable ease of application because they convert linguistic expressions into numerical results, and they can be easily adapted to any field. Despite these advantages, important problems with scoring-based methods are frequently discussed, and effective measurability is one of the most critical. Existing methods allow experts to choose a single score for each parameter, so experts select the score of the most likely outcome of a risk while all other possible consequences are neglected. As a result, assessments from the existing methods express the most probable level of risk, not the real risk of the enterprise. This study aims to develop a method that provides a more comprehensive evaluation than existing methods by weighting the probability and severity scores over all sub-parameters and potential outcomes, and a new scoring-based method is proposed.
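
A minimal sketch of the weighted-score idea described above: instead of scoring only the most likely consequence, every possible consequence contributes in proportion to its probability. The severity scale and the example probabilities are illustrative assumptions, not the authors' parameters.

```python
def weighted_risk_score(outcomes):
    """outcomes: list of (probability, severity) pairs covering all consequences of a hazard."""
    total_p = sum(p for p, _ in outcomes)
    # Normalise so the probabilities of the enumerated consequences sum to 1.
    return sum((p / total_p) * s for p, s in outcomes)

# Single-outcome scoring would use only the most likely consequence (p=0.7, severity=2),
# whereas the weighted score also reflects the rare but severe outcome.
print(weighted_risk_score([(0.7, 2), (0.25, 3), (0.05, 9)]))
```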

Keywords: occupational health and safety, risk assessment, scoring based risk assessment method, underground mining, weighted risk scores

Procedia PDF Downloads 120
3711 Spatial Temporal Rainfall Trends in Australia

Authors: Bright E. Owusu, Nittaya McNeil

Abstract:

Rainfall is one of the most essential quantities in meteorology and hydrology. It has important impacts on people’s daily lives, and an excess or shortage of it can bring tremendous economic losses and cause fatalities. Population growth around the globe tends to be accompanied by increased settlement and industrialization, and some countries are occasionally affected by floods and droughts due to climate change, which disrupt most daily activities. Knowledge of trends in spatial and temporal rainfall variability, and their physical explanations, would be beneficial for climate change assessment and for determining erosivity. This study describes the spatial-temporal variability of daily rainfall in Australia and the corresponding long-term trends during 1950-2013. The spatial patterns were investigated using exploratory factor analysis, and the long-term trends in the rainfall time series were determined by linear regression, the Mann-Kendall rank statistic, and Sen’s slope test. The exploratory factor analysis explained most of the variation in the data and grouped Australia into eight distinct rainfall regions with different rainfall patterns. Significant increasing trends in annual rainfall were observed in the northern regions of Australia, and the northeastern part was the wettest of all eight rainfall regions.
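
A minimal sketch of the Mann-Kendall trend test and Sen's slope estimator named above, written from their standard textbook definitions (no tie correction); this is not the authors' code.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the Mann-Kendall S statistic and its two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0   # continuity correction
    return s, 2 * (1 - norm.cdf(abs(z)))

def sens_slope(x):
    """Median of all pairwise slopes: a robust estimate of the trend magnitude."""
    x = np.asarray(x, dtype=float)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return float(np.median(slopes))
```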

Keywords: climate change, exploratory factor analysis, Mann-Kendall and Sen’s slope test, rainfall

Procedia PDF Downloads 328
3710 Urban Sustainability and Sustainable Mobility, Lessons Learned from the Case of Chile

Authors: Jorge Urrutia-Mosquera, Luz Flórez-Calderón, Yasna Cortés

Abstract:

We assessed the state of progress in terms of urban sustainability indicators and studied the impact of current land use conditions and the level of spatial accessibility to basic urban amenities on travel patterns and sustainable mobility in Santiago de Chile. We determined the spatial impact of urban facilities on sustainable travel patterns through statistical analysis, data visualisation, and geographically weighted regression (GWR) models. The results show a need to diversify land use in more than 60% of the communes, although in 85% of the communes accessibility to public spaces is guaranteed. The findings also suggest improving access to early education facilities, as only 26% of the communes meet the sustainability standard, which negatively impacts travel by sustainable modes. The level of access to urban facilities also generates spatial heterogeneity in the city, which negatively affects travel patterns, with travel times exceeding 60 minutes and more travel by private vehicle. The results allow us to identify opportunities for public policy intervention to promote and adopt sustainable mobility.
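
A minimal, self-contained sketch of geographically weighted regression (GWR), the kind of model referred to by the keywords: a weighted least-squares fit is computed at every location with a spatial kernel. The Gaussian kernel and the bandwidth are illustrative assumptions, not the study's specification.

```python
import numpy as np

def gwr_local_coefficients(coords, X, y, bandwidth):
    """coords: (n, 2) locations; X: (n, k) design matrix with intercept; y: (n,) response.
    Returns an (n, k) array of local regression coefficients."""
    n, k = X.shape
    betas = np.empty((n, k))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian spatial kernel
        W = np.diag(w)
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return betas
```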

Keywords: land use, urban sustainability, travel patterns, spatial heterogeneity, GWR model, sustainable mobility

Procedia PDF Downloads 49
3709 Developing Variable Repetitive Group Sampling Control Chart Using Regression Estimator

Authors: Liaquat Ahmad, Muhammad Aslam, Muhammad Azam

Abstract:

In this article, we propose a control chart based on a repetitive group sampling scheme for the location parameter. The charting scheme is based on the regression estimator, an estimator that capitalizes on the relationship between the variables of interest to provide more sensitive control than the commonly used individual variables. The control limit coefficients have been estimated for different sample sizes for weakly and highly correlated variables. Monitoring of the production process follows the procedure of the Shewhart x-bar control chart. Performance is verified by average run length calculations when a shift occurs in the average value of the estimator. It has been observed that weakly correlated variables lead to faster false alarms.
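
A minimal sketch of the regression estimator the charting scheme is built on: the sample mean of the study variable is adjusted using an auxiliary variable with a known population mean, and a Shewhart-style rule is applied to the result. The control limits here are generic illustrations, not the paper's tabulated coefficients.

```python
import numpy as np

def regression_estimate(y, x, mu_x):
    """Regression estimator of the mean of y using auxiliary variable x."""
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return np.mean(y) + b * (mu_x - np.mean(x))

def out_of_control(y, x, mu_x, center, sigma_est, k=3.0):
    """Shewhart-style decision rule applied to the regression estimator."""
    z = regression_estimate(y, x, mu_x)
    return abs(z - center) > k * sigma_est
```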

Keywords: average run length, control charts, process shift, regression estimators, repetitive group sampling

Procedia PDF Downloads 541
3708 BART Matching Method: Using Bayesian Additive Regression Tree for Data Matching

Authors: Gianna Zou

Abstract:

Propensity score matching (PSM), introduced by Paul R. Rosenbaum and Donald Rubin in 1983, is a popular statistical matching technique that tries to estimate treatment effects by taking into account covariates that could impact the efficacy of the study medication in clinical trials. PSM can be used to reduce the bias due to confounding variables. However, PSM assumes that the response values are normally distributed, and in some cases this assumption may not hold. In this paper, a machine learning method, Bayesian Additive Regression Trees (BART), is used as a more robust method of matching. BART can work well when models are misspecified, since it can be used to model heterogeneous treatment effects; moreover, it can handle non-linear main effects and multiway interactions. In this research, a BART Matching Method (BMM) is proposed to provide a more reliable matching method than PSM. A comparison of the analysis results from PSM and BMM shows that BMM performs well and has better prediction capability when the response values are not normally distributed.
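
A minimal sketch of score-based matching in the spirit described above: fit a flexible model to predict treatment from covariates, then match each treated unit to the nearest control on the predicted score. A gradient-boosted classifier stands in here for BART purely for illustration; it is not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def match_on_scores(X, treated):
    """treated: 0/1 array. Returns (treated_index, control_index) nearest-neighbour pairs."""
    model = GradientBoostingClassifier().fit(X, treated)
    scores = model.predict_proba(X)[:, 1]
    t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
    return [(i, c_idx[np.argmin(np.abs(scores[c_idx] - scores[i]))]) for i in t_idx]
```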

Keywords: BART, Bayesian, matching, regression

Procedia PDF Downloads 123
3707 A Data Driven Approach for the Degradation of a Lithium-Ion Battery Based on Accelerated Life Test

Authors: Alyaa M. Younes, Nermine Harraz, Mohammad H. Elwany

Abstract:

Lithium-ion batteries are currently used for many applications, including satellites, electric vehicles, and mobile electronics. Their ability to store a relatively large amount of energy in a limited space makes them most appropriate for critical applications, so evaluation of the life and reliability of these batteries becomes crucial to the systems they support. The reliability of Li-ion batteries has mainly been considered in terms of lifetime; however, another factor that can be considered critical in many applications, such as electric vehicles, is the cycle duration. The present work presents the results of an experimental investigation of the degradation behavior of a laptop Li-ion battery (type TKV2V) and the effect of the applied load on the battery cycle time. Reliability was evaluated using an accelerated life test. Least squares linear regression with median rank estimation was used to estimate the Weibull distribution parameters needed to estimate the reliability functions. The probability density function, failure rate, and reliability function under each of the applied loads were evaluated and compared. An inverse power model is introduced that can predict cycle time at any given stress level.
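
A minimal sketch of Weibull parameter estimation by median-rank regression, the procedure named above; the failure times are illustrative placeholders, not the paper's data.

```python
import numpy as np

def weibull_median_rank_regression(failure_times):
    """Least-squares fit of the linearised Weibull CDF using Bernard's median ranks."""
    t = np.sort(np.asarray(failure_times, dtype=float))
    n = len(t)
    ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # Bernard's approximation
    x = np.log(t)
    y = np.log(-np.log(1.0 - ranks))                     # ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta)
    beta, intercept = np.polyfit(x, y, 1)
    eta = np.exp(-intercept / beta)
    return beta, eta                                     # shape, scale

beta, eta = weibull_median_rank_regression([152, 207, 233, 265, 301, 344])
reliability = lambda t: np.exp(-(t / eta) ** beta)       # reliability function R(t)
```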

Keywords: accelerated life test, inverse power law, lithium-ion battery, reliability evaluation, Weibull distribution

Procedia PDF Downloads 148
3706 The Relationship Between Hourly Compensation and Unemployment Rate Using the Panel Data Regression Analysis

Authors: S. K. Ashiquer Rahman

Abstract:

The paper concentrates on the importance of hourly compensation, emphasizing the significance of the unemployment rate. Two of the most important factors for a nation are its unemployment rate and hourly compensation; these are not merely statistics but have profound effects on individuals, families, and the economy, and they are inversely related to one another. The unemployment rate will probably decline as hourly compensation in manufacturing rises, because higher compensation can reduce unemployment and increase job prospects; increased hourly compensation in the manufacturing sector could therefore have a favorable effect on job mobility. Moreover, the relationship between hourly compensation and unemployment is complex and influenced by broader economic factors. In this paper, we use panel data regression models to evaluate the expected link between hourly compensation and the unemployment rate in order to determine the effect of hourly compensation on the unemployment rate. We estimate the fixed effects model, evaluate the error components, and determine which model (the FEM or ECM) is better by pooling all 60 observations. We then analyze the data by comparing three countries (the United States, Canada, and the United Kingdom) using panel data regression models. Finally, we provide results, analysis, and a summary of the extensive research on how hourly compensation affects the unemployment rate. Additionally, this paper offers relevant and useful information to help governments and the academic community use an econometric and social approach to lessen the effect of hourly compensation on the unemployment rate.
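
A minimal sketch of the within (fixed-effects) estimator used in panel data regression: demean the data within each country and run pooled OLS on the demeaned values. The column names and the toy panel shape are assumptions for illustration, not the study's dataset.

```python
import numpy as np
import pandas as pd

def fixed_effects(df, y_col, x_cols, entity_col):
    """Within estimator: OLS on entity-demeaned y and X."""
    cols = [y_col] + x_cols
    demeaned = df[cols] - df.groupby(entity_col)[cols].transform("mean")
    X = demeaned[x_cols].to_numpy()
    y = demeaned[y_col].to_numpy()
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(x_cols, beta))

# Example panel: 3 countries x 20 periods gives the 60 observations mentioned above,
# e.g. fixed_effects(panel, "unemployment", ["hourly_compensation"], "country").
```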

Keywords: hourly compensation, unemployment rate, panel data regression models, dummy variables, random effects model, fixed effects model, linear regression model

Procedia PDF Downloads 51
3705 Group Sequential Covariate-Adjusted Response Adaptive Designs for Survival Outcomes

Authors: Yaxian Chen, Yeonhee Park

Abstract:

Driven by evolving FDA recommendations, modern clinical trials demand innovative designs that strike a balance between statistical rigor and ethical considerations. Covariate-adjusted response-adaptive (CARA) designs bridge this gap by utilizing patient attributes and responses to skew treatment allocation in favor of the treatment that is best for an individual patient’s profile. However, existing CARA designs for survival outcomes often hinge on specific parametric models, constraining their applicability in clinical practice. In this article, we address this limitation by introducing a CARA design for survival outcomes (CARAS) based on the Cox model and a variance estimator. This method addresses issues of model misspecification and enhances the flexibility of the design. We also propose a group sequential overlap-weighted log-rank test to preserve the type I error rate in group sequential trials, and we use extensive simulation studies to demonstrate the clinical benefit, statistical efficiency, and robustness to model misspecification of the proposed method compared to traditional randomized controlled trial designs and response-adaptive randomization designs.
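
A minimal sketch of overlap weights, the weighting idea behind the overlap-weighted test mentioned above: each subject is weighted by the estimated probability of being assigned to the opposite arm, which emphasises the region of covariate overlap. The logistic propensity model is an illustrative stand-in, not the paper's estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def overlap_weights(X, treated):
    """treated: 0/1 array; returns one overlap weight per subject."""
    e = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    return np.where(treated == 1, 1.0 - e, e)
```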

Keywords: Cox model, log-rank test, optimal allocation ratio, overlap weight, survival outcome

Procedia PDF Downloads 37
3704 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy because complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena of mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws, and are therefore useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate carried out in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, number average molecular weight, and gravimetric average molecular weight; this process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented that can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor, and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
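
A minimal sketch of the adaptive sampling idea described above: start from a coarse sample of the process output and add points where the local variation of the response is largest. The target function is a stand-in for the reactor model, not the paper's process data.

```python
import numpy as np

def adaptive_sample(f, lo, hi, n_init=10, n_total=40):
    """Greedily add sample points in the subinterval whose endpoint values differ the most."""
    x = list(np.linspace(lo, hi, n_init))
    while len(x) < n_total:
        x.sort()
        y = [f(v) for v in x]
        gaps = [abs(y[i + 1] - y[i]) for i in range(len(x) - 1)]
        i = int(np.argmax(gaps))                  # region with the highest variation
        x.append(0.5 * (x[i] + x[i + 1]))         # refine there
    return np.array(sorted(x))

# Steep sigmoid as a toy stand-in for a conversion curve with a sharp gel effect.
samples = adaptive_sample(lambda t: 1.0 / (1.0 + np.exp(-20 * (t - 0.6))), 0.0, 1.0)
```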

Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression

Procedia PDF Downloads 283
3703 Working Memory and Phonological Short-Term Memory in the Acquisition of Academic Formulaic Language

Authors: Zhicheng Han

Abstract:

This study examines the correlation between knowledge of formulaic language, working memory (WM), and phonological short-term memory (PSTM) in Chinese L2 learners of English. It investigates whether WM and PSTM correlate differently with the acquisition of formulaic language, which may be relevant for the discourse around the conceptualization of formulas. Connectionist approaches have led scholars to argue that formulas are form-meaning connections stored whole, making PSTM significant in the acquisition process as it pertains to the storage and retrieval of chunk information. Generativist scholars, on the other hand, have argued for the active participation of interlanguage grammar in the acquisition and use of formulaic language, where formulas are represented in the mind but retain an internal structure built around a lexical core. This would make WM, especially its processing component, an important cognitive factor, since it plays a role in processing and holding information for further analysis and manipulation. The current study asked L1 Chinese learners of English enrolled in graduate programs in China to complete a preference ranking task in which they ranked their preference for formulas, grammatical non-formulaic expressions, and ungrammatical phrases with and without the lexical core in academic contexts. Participants were asked to rank the options according to how likely they would be to encounter these phrases in the test sentences within academic contexts. Participants’ syntactic proficiency was controlled with a cloze test and a grammar test. Regression analysis found a significant relationship between the processing component of WM and the preference for formulaic expressions in the preference ranking task, while no significant correlation was found for PSTM or syntactic proficiency. The correlational analysis found that WM, PSTM, and the two proficiency test scores significantly covary; however, WM and PSTM have different predictor values for participants’ preference for formulaic language. Both the storage and processing components of WM are significantly correlated with the preference for formulaic expressions, while PSTM is not. These findings favor the role of interlanguage grammar and syntactic knowledge in the acquisition of formulaic expressions. The differing effects of WM and PSTM suggest that selective attention to and processing of the input, beyond simple retention, play a key role in successfully acquiring formulaic language. Similar correlational patterns were found for preferring the ungrammatical phrases with the lexical core of the formula over those without the lexical core, attesting to learners’ awareness of the lexical core around which formulas are constructed. These findings support the view that formulaic phrases retain internal syntactic structures that are recognized and processed by learners.

Keywords: formulaic language, working memory, phonological short-term memory, academic language

Procedia PDF Downloads 38
3702 Optimization of Monitoring Networks for Air Quality Management in Urban Hotspots

Authors: Vethathirri Ramanujam Srinivasan, S. M. Shiva Nagendra

Abstract:

Air quality management in urban areas is a serious concern in both developed and developing countries. In this regard, more air quality monitoring stations are planned in order to mitigate air pollution in urban areas. In India, the Central Pollution Control Board has set up 574 air quality monitoring stations across the country and has proposed to set up another 500 stations in the next few years; the number of monitoring stations for each city has been decided based on population data. Setting up ambient air quality monitoring stations and their operation and maintenance are highly expensive, so there is a need to optimize monitoring networks for air quality management. The present paper discusses various methods, such as the Indian Standards (IS) method, the US EPA method, and the European Union (EU) method, to arrive at the minimum number of air quality monitoring stations. In addition, optimization of the rain-gauge method and the Inverse Distance Weighted (IDW) method using a Geographical Information System (GIS) are explored in the present work for the design of the air quality network in Chennai city. In summary, an additional 18 stations are required for Chennai city, and the potential monitoring locations with their corresponding land use patterns are ranked and identified from 1 km x 1 km grids.
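
A minimal sketch of Inverse Distance Weighted (IDW) interpolation, one of the spatial methods mentioned above. The station coordinates and the power parameter p are illustrative assumptions.

```python
import numpy as np

def idw(known_xy, known_values, query_xy, p=2.0):
    """Interpolate values at query points as distance-weighted averages of known points."""
    known_xy, known_values, query_xy = map(np.asarray, (known_xy, known_values, query_xy))
    out = np.empty(len(query_xy))
    for i, q in enumerate(query_xy):
        d = np.linalg.norm(known_xy - q, axis=1)
        if np.any(d == 0):                         # query point coincides with a station
            out[i] = known_values[np.argmin(d)]
        else:
            w = 1.0 / d ** p
            out[i] = np.sum(w * known_values) / np.sum(w)
    return out
```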

Keywords: air quality monitoring network, inverse distance weighted method, population based method, spatial variation

Procedia PDF Downloads 161
3701 Chemometric QSRR Evaluation of Behavior of s-Triazine Pesticides in Liquid Chromatography

Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević

Abstract:

This study considers the selection of the most suitable in silico molecular descriptors that could be used for the characterization of s-triazine pesticides. Suitable descriptors among topological, geometrical, and physicochemical descriptors are used to establish quantitative structure-retention relationship (QSRR) models. The models were established using linear regression (LR) and multiple linear regression (MLR) analysis. In this paper, MLR models were established while avoiding multicollinearity among the selected molecular descriptors. The statistical quality of the established models was evaluated by standard and cross-validation statistical parameters. For the detection of similarity or dissimilarity among the investigated s-triazine pesticides and their classification, principal component analysis (PCA) and hierarchical cluster analysis (HCA) were used and gave similar groupings. This study is financially supported by COST Action TD1305.
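
A minimal sketch of one common way to avoid multicollinearity when building an MLR/QSRR model: compute the variance inflation factor (VIF) of each candidate descriptor and drop those above a conventional threshold. The descriptor matrix and the threshold are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF_j = 1 / (1 - R_j^2), regressing descriptor j on the remaining descriptors."""
    X = np.asarray(X, dtype=float)
    vifs = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return vifs  # e.g. keep only descriptors with VIF below about 5
```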

Keywords: chemometrics, classification analysis, molecular descriptors, pesticides, regression analysis

Procedia PDF Downloads 367
3700 Support Vector Regression Combined with Different Optimization Algorithms to Predict Global Solar Radiation on Horizontal Surfaces in Algeria

Authors: Laidi Maamar, Achwak Madani, Abdellah El Ahdj Abdellah

Abstract:

The aim of this work is to use support vector regression (SVR) combined with the dragonfly, firefly, bee colony, and particle swarm optimization algorithms to predict global solar radiation on horizontal surfaces in some cities in Algeria. Combining these optimization algorithms with SVR aims principally to enhance accuracy by fine-tuning the parameters, speeding up the convergence of the SVR model, and exploring a larger search space efficiently; these parameters are the regularization parameter (C), the kernel parameters, and the epsilon parameter. In doing so, the goal is to improve the generalization and predictive accuracy of the SVR model, and to leverage the strengths of both SVR and the optimization algorithms to create a more powerful and effective regression model for various cities and different climate conditions. Results demonstrate close agreement between predicted and measured data in terms of different metrics. In summary, SVR has proven to be a valuable tool in modeling global solar radiation, offering accurate predictions and demonstrating versatility when combined with other algorithms or used in hybrid forecasting models.
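
A minimal sketch of tuning SVR hyperparameters (C, gamma, epsilon) with a tiny particle swarm optimizer, in the spirit of the hybrid models described above. The swarm settings, search ranges, and cross-validation score are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def pso_tune_svr(X, y, n_particles=10, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array([0.1, 1e-3, 1e-3]), np.array([100.0, 1.0, 1.0])   # C, gamma, epsilon
    pos = rng.uniform(lo, hi, size=(n_particles, 3))
    vel = np.zeros_like(pos)

    def score(p):  # higher is better (negative MSE)
        model = SVR(C=p[0], gamma=p[1], epsilon=p[2])
        return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

    pbest, pbest_val = pos.copy(), np.array([score(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 3))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([score(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)]
    return dict(C=gbest[0], gamma=gbest[1], epsilon=gbest[2])
```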

Keywords: support vector regression (SVR), optimization algorithms, global solar radiation prediction, hybrid forecasting models

Procedia PDF Downloads 10
3699 Statistic Regression and Open Data Approach for Identifying Economic Indicators That Influence e-Commerce

Authors: Apollinaire Barme, Simon Tamayo, Arthur Gaudron

Abstract:

This paper presents a statistical approach to identify explanatory variables linearly related to e-commerce sales. The proposed methodology allows specifying a regression model in order to quantify the relationship between openly available data (economic and demographic) and national e-commerce sales. The methodology consists of collecting data, preselecting input variables, performing regressions to choose variables and models, and testing and validating. The usefulness of the proposed approach is twofold: on the one hand, it allows identifying the variables that influence e-commerce sales with an accessible approach, and on the other hand, it can be used to model future sales from the input variables. Results show that e-commerce is linearly dependent on 11 economic and demographic indicators.
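
A minimal sketch of the "perform regressions to choose variables" step: greedy forward selection of indicators by adjusted R-squared. The indicator matrix is a placeholder, not the study's open-data sources, and forward selection is one possible selection scheme, not necessarily the authors'.

```python
import numpy as np

def adjusted_r2(X, y):
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - resid.var() / y.var()
    n, k = len(y), X.shape[1]
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def forward_select(X, y, max_vars=11):
    chosen, remaining = [], list(range(X.shape[1]))
    while remaining and len(chosen) < max_vars:
        scores = {j: adjusted_r2(X[:, chosen + [j]], y) for j in remaining}
        best = max(scores, key=scores.get)
        if chosen and scores[best] <= adjusted_r2(X[:, chosen], y):
            break                                   # no further improvement
        chosen.append(best)
        remaining.remove(best)
    return chosen
```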

Keywords: e-commerce, statistical modeling, regression, empirical research

Procedia PDF Downloads 203
3698 Design of Low Latency Multiport Network Router on Chip

Authors: P. G. Kaviya, B. Muthupandian, R. Ganesan

Abstract:

On-chip routers typically have buffers at the input or output ports for temporarily storing packets, and these buffers consume a considerable share of the router area and power. Multiple queues operate in parallel, as in a virtual-channel (VC) router. While running a traffic trace, not all input ports have incoming packets to transfer, so large numbers of queues are empty while others are busy, and time consumption becomes high under heavy traffic. A RoShaQ architecture is therefore used to minimize buffer area and time: input packets travel through the shared queues at low traffic, while at high traffic loads they bypass the shared queues, which reduces power and area consumption. A parallel crossbar architecture is proposed in this project in order to reduce power consumption, and a new adaptive weighted routing algorithm for an 8-port router architecture is proposed in order to decrease the delay of the network-on-chip router. The proposed system is simulated using ModelSim and synthesized using Xilinx Project Navigator.
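
A minimal sketch of an adaptive weighted routing decision of the kind described above: each candidate output port is scored by a weighted sum of hop distance and current congestion, and the lowest-cost port is chosen. The weights and fields are illustrative assumptions, not the proposed algorithm's exact formulation.

```python
def choose_output_port(candidates, w_hops=1.0, w_congestion=2.0):
    """candidates: list of dicts like {'port': 3, 'hops': 2, 'queue_occupancy': 5}."""
    cost = lambda c: w_hops * c["hops"] + w_congestion * c["queue_occupancy"]
    return min(candidates, key=cost)["port"]

print(choose_output_port([
    {"port": 0, "hops": 2, "queue_occupancy": 6},
    {"port": 1, "hops": 3, "queue_occupancy": 1},
]))
```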

Keywords: buffer, RoShaQ architecture, shared queue, VC router, weighted routing algorithm

Procedia PDF Downloads 523
3697 An Enhanced Distributed Weighted Clustering Algorithm for Intra and Inter Cluster Routing in MANET

Authors: K. Gomathi

Abstract:

A Mobile Ad hoc Network (MANET) is defined as a collection of routable wireless mobile nodes with no centralized administration that communicate with each other using radio signals. MANETs are often deployed in hostile environments, where attackers try to disturb secure data transfer and drain valuable network resources. Since a MANET is a battery-operated network, preserving network resources is essential. For resource-constrained computation, efficient routing, and increased network stability, the network is divided into smaller groups called clusters. The clustering architecture consists of cluster heads (CH), ordinary nodes, and gateways, and the CH is responsible for inter- and intra-cluster routing. CH election is a prominent research area, and many algorithms have been developed using different metrics. A CH with a longer life sustains the network lifetime; for this purpose, a secondary cluster head (SCH) is also elected, which is more economical. To nominate an efficient CH, an Enhanced Distributed Weighted Clustering Algorithm (EDWCA) is proposed. This approach considers metrics such as battery power, degree difference, and node speed for CH election. The performance of the proposed algorithm is evaluated and compared with an existing algorithm using Network Simulator (NS-2).
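
A minimal sketch of weighted cluster-head election in the spirit of EDWCA: each node receives a combined weight from battery power, degree difference, and speed, and the best-weighted node in a neighbourhood becomes cluster head. The weight values and the ideal degree are illustrative assumptions, not the algorithm's published coefficients.

```python
def ch_weight(node, ideal_degree=6, w_battery=0.5, w_degree=0.3, w_speed=0.2):
    """Higher is better: prefer high battery, degree close to ideal, low speed."""
    degree_diff = abs(node["degree"] - ideal_degree)
    return w_battery * node["battery"] - w_degree * degree_diff - w_speed * node["speed"]

def elect_cluster_head(neighbourhood):
    return max(neighbourhood, key=ch_weight)["id"]

nodes = [{"id": 1, "battery": 0.9, "degree": 5, "speed": 2.0},
         {"id": 2, "battery": 0.6, "degree": 6, "speed": 0.5}]
print(elect_cluster_head(nodes))
```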

Keywords: MANET, EDWCA, clustering, cluster head

Procedia PDF Downloads 367
3696 Support Vector Regression for Retrieval of Soil Moisture Using Bistatic Scatterometer Data at X-Band

Authors: Dileep Kumar Gupta, Rajendra Prasad, Pradeep Kumar, Varun Narayan Mishra, Ajeet Kumar Vishwakarma, Prashant K. Srivastava

Abstract:

An approach was evaluated for the retrieval of the soil moisture of a bare soil surface using bistatic scatterometer data in the angular range of 20° to 70° at VV- and HH-polarization. The microwave data were acquired by a specially designed X-band (10 GHz) bistatic scatterometer. Linear regression analysis was carried out between the scattering coefficients and soil moisture content to select a suitable incidence angle for the retrieval of soil moisture content; the 25° incidence angle was found to be the most suitable. Support vector regression analysis was used to approximate the function described by the input-output relationship between the scattering coefficient and the corresponding measured values of soil moisture content. The performance of the support vector regression algorithm was evaluated by comparing the observed and estimated soil moisture content using the statistical performance indices %Bias, root mean squared error (RMSE), and Nash-Sutcliffe efficiency (NSE). The values of %Bias, RMSE, and NSE were found to be 2.9451, 1.0986, and 0.9214, respectively, at HH-polarization, and 3.6186, 0.9373, and 0.9428, respectively, at VV-polarization.
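
A minimal sketch of the three performance indices reported above (%Bias, RMSE, and Nash-Sutcliffe efficiency), written from their standard definitions.

```python
import numpy as np

def percent_bias(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
```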

Keywords: bistatic scatterometer, soil moisture, support vector regression, RMSE, %Bias, NSE

Procedia PDF Downloads 398
3695 Runoff Estimation Using NRCS-CN Method

Authors: E. K. Naseela, B. M. Dodamani, Chaithra Chandran

Abstract:

GIS and remote sensing techniques facilitate the accurate estimation of surface runoff from a watershed. In the present study, an attempt has been made to evaluate the applicability of the Natural Resources Conservation Service Curve Number (NRCS-CN) method using GIS and remote sensing techniques in the upper Krishna basin (69,425 sq. km). Landsat 7 satellite data (30 m resolution) for the year 2012 were used for the preparation of the land use/land cover (LU/LC) map, and the hydrologic soil groups were mapped on a GIS platform. The weighted curve numbers (CN) for all five subcatchments were calculated on the basis of LU/LC type and hydrologic soil class, considering the antecedent moisture condition. Monthly rainfall data were available for 58 rain gauge stations, and an overlay technique was adopted for generating the weighted curve numbers. Results of the study show that land use changes determined from satellite images are useful in studying the runoff response of the basin, and that there is no significant difference between observed and estimated runoff depths. For each subcatchment, statistically positive correlations were detected between observed and estimated runoff depth (0.6

Keywords: curve number, GIS, remote sensing, runoff
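
A minimal sketch of the NRCS-CN runoff calculation applied in the study: an area-weighted curve number per subcatchment and the standard SCS-CN runoff equation. Units are millimetres; the example areas, curve numbers, and rainfall are placeholders, not the basin's data.

```python
def weighted_curve_number(areas_km2, curve_numbers):
    return sum(a * cn for a, cn in zip(areas_km2, curve_numbers)) / sum(areas_km2)

def runoff_depth_mm(rainfall_mm, cn, ia_ratio=0.2):
    s = 25400.0 / cn - 254.0                      # potential maximum retention (mm)
    ia = ia_ratio * s                             # initial abstraction
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

cn = weighted_curve_number([120.0, 80.0], [78, 85])
print(runoff_depth_mm(95.0, cn))
```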

Procedia PDF Downloads 518
3694 A Comparative Analysis of Machine Learning Techniques for PM10 Forecasting in Vilnius

Authors: Mina Adel Shokry Fahim, Jūratė Sužiedelytė Visockienė

Abstract:

With the growing concern over air pollution (AP), the issue has gained more prominence than ever before, and awareness of how poor air quality indices (AQI) damage human health continues to increase, together with a sense of duty to disseminate that knowledge. The study focuses on assessing air pollution prediction models specifically for Lithuania, addressing a substantial need for empirical research within the region. Concentrating on Vilnius, it specifically examines concentrations of particulate matter with a diameter of 10 micrometres or less (PM10). Utilizing Gaussian Process Regression (GPR), Regression Tree Ensemble, and Regression Tree methodologies, predictive forecasting models are validated and tested using hourly data from January 2020 to December 2022. The study explores the classification of AP data into anthropogenic and natural sources, the impact of AP on human health, and its connection to cardiovascular diseases. The study revealed varying levels of accuracy among the models, with GPR achieving the highest accuracy, indicated by an RMSE of 4.14 in validation and 3.89 in testing.
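
A minimal sketch of fitting a Gaussian Process Regression model to hourly PM10 data, the best-performing approach reported above. The feature (hour index), kernel choice, and synthetic data are illustrative assumptions, not the study's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
hours = np.arange(0, 24 * 14, dtype=float).reshape(-1, 1)           # two weeks of hourly steps
pm10 = 20 + 8 * np.sin(2 * np.pi * hours[:, 0] / 24) + rng.normal(0, 2, len(hours))

kernel = 1.0 * RBF(length_scale=12.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(hours, pm10)

future = np.arange(24 * 14, 24 * 15, dtype=float).reshape(-1, 1)    # next 24 hours
mean, std = gpr.predict(future, return_std=True)                    # forecast with uncertainty
```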

Keywords: air pollution, anthropogenic and natural sources, machine learning, Gaussian process regression, tree ensemble, forecasting models, particulate matter

Procedia PDF Downloads 30
3693 Forecasting Equity Premium Out-of-Sample with Sophisticated Regression Training Techniques

Authors: Jonathan Iworiso

Abstract:

Forecasting the equity premium out-of-sample is a major concern for researchers in finance and emerging markets. The quest for a superior model that can forecast the equity premium with significant economic gains has resulted in several controversies among scholars on the choice of variables and suitable techniques. This research focuses mainly on the application of Regression Training (RT) techniques to forecast the monthly equity premium out-of-sample recursively with an expanding window method. A broad category of sophisticated regression models involving model complexity was employed: the RT models, including Ridge, Forward-Backward (FOBA) Ridge, Least Absolute Shrinkage and Selection Operator (LASSO), Relaxed LASSO, Elastic Net, and Least Angle Regression, were trained and used to forecast the equity premium out-of-sample. The empirical investigation of the RT models demonstrates significant evidence of equity premium predictability, both statistically and economically, relative to the benchmark historical average, delivering significant utility gains. The models provide meaningful economic information on mean-variance portfolio investment for investors who are timing the market to earn future gains at minimal risk. Thus, the forecasting models appear to benefit an investor who optimally reallocates a monthly portfolio between equities and risk-free treasury bills using equity premium forecasts, at minimal risk.
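
A minimal sketch of recursive out-of-sample forecasting with an expanding window, using ridge regression as one representative of the RT models listed above, together with an out-of-sample R² against the historical-average benchmark. The data layout and penalty value are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def expanding_window_forecasts(X, y, start=120):
    """Refit on all data up to month t-1 and forecast y[t], for t = start .. end."""
    preds = []
    for t in range(start, len(y)):
        model = Ridge(alpha=1.0).fit(X[:t], y[:t])
        preds.append(model.predict(X[t:t + 1])[0])
    return np.array(preds)

def out_of_sample_r2(y_true, y_pred, y_hist_mean):
    """OOS R^2 relative to the prevailing historical-average forecast."""
    return 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_hist_mean) ** 2)
```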

Keywords: regression training, out-of-sample forecasts, expanding window, statistical predictability, economic significance, utility gains

Procedia PDF Downloads 78
3692 Self-Image of Police Officers

Authors: Leo Carlo B. Rondina

Abstract:

Self-image is an important factor in improving the self-esteem of personnel. The purpose of this study is to determine the self-image of the police. The respondents were 503 policemen assigned to different police stations in Davao City, chosen using random sampling. Using Exploratory Factor Analysis (EFA), latent construct variables of police image were identified as follows: professionalism, obedience, morality, and justice and fairness. Further, ordinal regression indicates a statistically significant effect for ages 21-40, which means that the age of the respondent statistically improves self-image.

Keywords: police image, exploratory factor analysis, ordinal regression, Galatea effect

Procedia PDF Downloads 260
3691 Detecting Port Maritime Communities in Spain with Complex Network Analysis

Authors: Nicanor Garcia Alvarez, Belarmino Adenso-Diaz, Laura Calzada Infante

Abstract:

In recent years, researchers have shown an interest in modelling maritime traffic as a complex network. In this paper, we propose a bipartite weighted network to model maritime traffic and detect port maritime communities. The bipartite weighted network considers two different types of nodes: the first represents Spanish ports, while the second represents the countries with which there is major import/export activity. The flow between the two types of nodes is modeled by weighting each link with the volume of product transported. To illustrate the model, the data are segmented by type of traffic. This allows fine-tuning and the creation of communities for each type of traffic, and therefore the identification of similar ports for a specific type of traffic, which provides decision-makers with tools to search for alliances or identify their competitors. The traffic with the greatest impact on the Spanish gross domestic product is selected, and the evolution of the communities formed by the most important ports, and their differences between 2009 and 2019, is analyzed. Finally, the set of communities formed by the ports of the Spanish port system is inspected to determine global similarities between them, analyzing the sum of the memberships of the different ports in the communities formed for each type of traffic.
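
A minimal sketch of the bipartite weighted network described above: ports on one side, partner countries on the other, and edges weighted by transported volume. Community detection here uses networkx's greedy modularity routine as an illustrative stand-in for the Infomap method named in the keywords; the ports, countries, and tonnages are placeholders.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
ports = ["Valencia", "Algeciras", "Barcelona"]
countries = ["China", "USA", "Morocco"]
G.add_nodes_from(ports, bipartite=0)
G.add_nodes_from(countries, bipartite=1)
# Edge weights: volume of a given traffic type (placeholder figures).
G.add_weighted_edges_from([
    ("Valencia", "China", 120.0), ("Valencia", "USA", 80.0),
    ("Algeciras", "Morocco", 200.0), ("Barcelona", "China", 95.0),
])
communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])
```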

Keywords: bipartite networks, competition, infomap, maritime traffic, port communities

Procedia PDF Downloads 128
3690 Regression Analysis of Travel Indicators and Public Transport Usage in Urban Areas

Authors: Mehdi Moeinaddini, Zohreh Asadi-Shekari, Muhammad Zaly Shah, Amran Hamzah

Abstract:

Currently, planners try to provide more green travel options to reduce economic, social, and environmental problems. Therefore, this study tries to find significant urban travel factors that can be used to increase the usage of alternative urban travel modes. This paper attempts to identify the relationship between prominent urban mobility indicators and daily trips by public transport in 30 cities from various parts of the world. Different travel mode, infrastructure, and cost indicators were evaluated in this research as mobility indicators. The results of multi-linear regression analysis indicate that there is a significant relationship between mobility indicators and the daily usage of public transport.

Keywords: green travel modes, urban travel indicators, daily trips by public transport, multi-linear regression analysis

Procedia PDF Downloads 526
3689 Development of Generalized Correlation for Liquid Thermal Conductivity of N-Alkane and Olefin

Authors: A. Ishag Mohamed, A. A. Rabah

Abstract:

The objective of this research is to develop a generalized correlation for the prediction of the thermal conductivity of n-alkanes and alkenes. There is little research and a lack of correlations for the thermal conductivity of liquids in the open literature. The available experimental data were collected, covering the groups of n-alkanes and alkenes. The data were assumed to correlate with temperature using the Filippov correlation. Nonparametric regression with the Grace algorithm was used to develop the generalized correlation model. A spreadsheet program based on Microsoft Excel was used to plot and calculate the values of the coefficients. The results obtained were compared with the data found in Perry's Chemical Engineers' Handbook. The experimental data were correlated with temperature in the range 273.15 to 673.15 K, with R² = 0.99. The developed correlation reproduced experimental data that were not included in the regression with an absolute average percent deviation (AAPD) of less than 7%. Thus, the spreadsheet is quite accurate and produces reliable data.
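
A minimal sketch of fitting a temperature correlation and screening it with the absolute average percent deviation (AAPD) reported above. The quadratic form in temperature and the data points are illustrative assumptions, not the developed correlation or the collected data.

```python
import numpy as np

def fit_conductivity_correlation(T, k_exp, degree=2):
    return np.polyfit(T, k_exp, degree)            # k(T) approximated by a polynomial in T

def aapd(k_exp, k_pred):
    k_exp, k_pred = np.asarray(k_exp, float), np.asarray(k_pred, float)
    return 100.0 * np.mean(np.abs(k_pred - k_exp) / k_exp)

T = np.array([280.0, 320.0, 360.0, 400.0, 440.0])          # K
k = np.array([0.145, 0.135, 0.125, 0.116, 0.108])          # W/(m K), placeholder values
coef = fit_conductivity_correlation(T, k)
print(aapd(k, np.polyval(coef, T)))
```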

Keywords: N-Alkanes, N-Alkenes, nonparametric, regression

Procedia PDF Downloads 635
3688 Comparing Groundwater Fluoride Level with WHO Guidelines and Classifying At-Risk Age Groups; Based on Health Risk Assessment

Authors: Samaneh Abolli, Kamyar Yaghmaeian, Ali Arab Aradani, Mahmood Alimohammadi

Abstract:

The main route of fluoride uptake is drinking water. Fluoride absorption in the acceptable range (0.5-1.5 mg L⁻¹) is suitable for the body, but excessive consumption can have irreversible health effects. To compare fluoride concentrations with the WHO guidelines, 112 water samples were taken from groundwater aquifers in 22 villages of Garmsar County, in the central part of Iran, during 2018 to 2019. Fluoride concentration was measured by the SPADNS method, and its non-carcinogenic impacts were calculated using the estimated daily intake (EDI) and hazard quotient (HQ). The statistical population was divided into four categories: infants, children, teenagers, and adults. Linear regression and Spearman rank correlation coefficient tests were used to investigate the relationship between well depth and fluoride concentration in the water samples. The annual mean concentrations of fluoride in 2018 and 2019 were 0.75 and 0.64 mg L⁻¹, and the mean fluoride concentrations in the samples from the cold and hot seasons of the studied years were 0.709 and 0.689 mg L⁻¹, respectively. The amount of fluoride in 27% of the samples in both years was less than the acceptable minimum (0.5 mg L⁻¹). Also, 11% of the samples in 2018 (6 samples) had fluoride levels higher than 1.5 mg L⁻¹. The HQ showed that children were the most vulnerable group, followed by teenagers and adults. Statistical tests showed an inverse and significant correlation (R² = 0.02, p < 0.0001) between well depth and fluoride content. The border between the usefulness and harmfulness of fluoride is very narrow and requires extensive study.
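
A minimal sketch of the non-carcinogenic risk indices used above: estimated daily intake (EDI = C x IR / BW) and hazard quotient (HQ = EDI / RfD). The intake rates, body weights, and fluoride reference dose are textbook-style assumptions, not the study's exact exposure parameters.

```python
def edi(concentration_mg_per_l, intake_l_per_day, body_weight_kg):
    return concentration_mg_per_l * intake_l_per_day / body_weight_kg

def hazard_quotient(edi_value, reference_dose=0.06):    # mg/kg/day, assumed RfD for fluoride
    return edi_value / reference_dose

groups = {"infant": (0.8, 10), "children": (1.5, 20), "teenagers": (2.0, 50), "adults": (2.5, 70)}
for name, (ir, bw) in groups.items():
    print(name, round(hazard_quotient(edi(0.75, ir, bw)), 3))   # HQ > 1 flags potential risk
```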

Keywords: fluoride, groundwater, health risk assessment, hazard quotient, Garmsar

Procedia PDF Downloads 46
3687 Language Development and Growing Spanning Trees in Children Semantic Network

Authors: Somayeh Sadat Hashemi Kamangar, Fatemeh Bakouie, Shahriar Gharibzadeh

Abstract:

In this study, we aim to exploit maximum spanning trees (MSTs) of children's semantic networks to investigate their language development. To do so, we examine the graph-theoretic properties of word-embedding networks. The nodes of the networks are words children learn prior to the age of 30 months, and the links are built from the cosine vector similarity of words normatively acquired by children prior to two and a half years of age. These networks are weighted graphs, and the strength of each link is determined by the numerical similarity of the two words (nodes) on either side of the link. To avoid converting the weighted networks to binary ones by setting a threshold, constructing MSTs presents a solution. An MST is a unique sub-graph that connects all the nodes in such a way that the sum of all the link weights is maximized without forming cycles. MSTs, as the backbone of the semantic networks, are suitable for examining developmental changes in semantic network topology in children. From these trees, several parameters were calculated to characterize the developmental change in network organization. We show that MSTs provide an elegant method sensitive enough to capture subtle developmental changes in semantic network organization.
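
A minimal sketch of building a weighted word-similarity graph and extracting its maximum spanning tree, as described above. The tiny word vectors are placeholders, not the normative child-vocabulary embeddings used in the study.

```python
import numpy as np
import networkx as nx

words = {"dog": [0.9, 0.1, 0.2], "cat": [0.85, 0.15, 0.25], "ball": [0.1, 0.9, 0.3]}

G = nx.Graph()
for w1 in words:
    for w2 in words:
        if w1 < w2:
            v1, v2 = np.array(words[w1]), np.array(words[w2])
            sim = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))  # cosine similarity
            G.add_edge(w1, w2, weight=sim)

mst = nx.maximum_spanning_tree(G, weight="weight")
print(sorted(mst.edges(data="weight")))
```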

Keywords: maximum spanning trees, word-embedding, semantic networks, language development

Procedia PDF Downloads 120
3686 Pure Scalar Equilibria for Normal-Form Games

Authors: Herbert W. Corley

Abstract:

A scalar equilibrium (SE) is an alternative type of equilibrium in pure strategies for an n-person normal-form game G. It is defined using optimization techniques to obtain a pure strategy for each player of G by maximizing an appropriate utility function over the acceptable joint actions. The players’ actions are determined by the choice of the utility function, which could be agreed upon by the players or chosen by an arbitrator. An SE is an equilibrium since no player of G can increase the value of this utility function by changing their strategy. SEs are formally defined, and examples are given. In a greedy SE, the goal is to assign actions to the players giving them the largest individual payoffs jointly possible. In a weighted SE, each player is assigned weights modeling the degree to which he helps every player, including himself, achieve as large a payoff as jointly possible. In a compromise SE, each player wants a fair payoff for a reasonable interpretation of fairness. In a parity SE, the players want their payoffs to be as nearly equal as jointly possible. Finally, a satisficing SE achieves a personal target payoff value for each player. The vector payoffs associated with each of these SEs are shown to be Pareto optimal among all such acceptable vectors, as well as computationally tractable.
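
A minimal sketch of computing a scalar equilibrium for a two-player normal-form game by maximizing a chosen scalar utility over all joint pure actions. The greedy utility (sum of payoffs) and the parity utility (negative payoff spread) are shown as examples; the payoff matrices are placeholders, not from the paper.

```python
import itertools
import numpy as np

payoff_1 = np.array([[3, 0], [5, 1]])       # row player's payoffs
payoff_2 = np.array([[3, 5], [0, 1]])       # column player's payoffs

def scalar_equilibrium(utility):
    joint_actions = itertools.product(range(payoff_1.shape[0]), range(payoff_1.shape[1]))
    return max(joint_actions, key=lambda a: utility(payoff_1[a], payoff_2[a]))

greedy = scalar_equilibrium(lambda p1, p2: p1 + p2)          # largest joint payoff
parity = scalar_equilibrium(lambda p1, p2: -abs(p1 - p2))    # payoffs as equal as possible
print(greedy, parity)
```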

Keywords: compromise equilibrium, greedy equilibrium, normal-form game, parity equilibrium, pure strategies, satisficing equilibrium, scalar equilibria, utility function, weighted equilibrium

Procedia PDF Downloads 96
3685 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Polya were the first significant compilation in this field; that work presents fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated through operators: in 1989, weighted Hardy inequalities were obtained for integration operators, and weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator, and recently many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators; these inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results have appeared involving Copson and Hardy inequalities on time scales to obtain new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics, with many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale inequalities of Hardy and Copson in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs are carried out by introducing restrictions on the operator in several cases and by using concepts of time-scale calculus, which unifies and extends many problems from the theories of differential and difference equations, together with the chain rule, some properties of multiple integrals on time scales, Fubini-type theorems, and Hölder's inequality.
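
For reference, the classical one-dimensional Hardy inequality that serves as the starting point above can be stated in its standard continuous textbook form (for a nonnegative measurable f and p > 1):

```latex
\[
  \int_0^{\infty} \left( \frac{1}{x}\int_0^{x} f(t)\,dt \right)^{p} dx
  \;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^{\infty} f(x)^{p}\,dx .
\]
```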

Keywords: time scales, Hardy inequality, Copson inequality, Steklov operator

Procedia PDF Downloads 63
3684 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm

Authors: Ghada Badr, Arwa Alturki

Abstract:

The biological function of an RNA molecule depends on its structure, and the objective of alignment is finding the homology between two or more RNA secondary structures. Knowing the common functionalities between two RNA structures allows a better understanding and a discovery of other relationships between them. Besides, identifying non-coding RNAs (those that are not translated into a protein) is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed, but most of these methods perform partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignment. Less attention is given in the literature to the use of efficient RNA structure representations, and structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N²) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and N is the maximum number of components in the two structures. The proposed algorithm compares two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments on different real and simulated datasets illustrate the efficiency of the CompPSA algorithm compared to other approaches. The CompPSA algorithm shows an accurate similarity measure between components and gives the user the flexibility to align two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm demonstrates scalability and efficiency in time and memory performance.
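
A minimal sketch of the component-based comparison idea: each RNA secondary structure is reduced to components described by a few numeric features, and components are paired by a weighted distance on those features. The feature names, weights, and greedy pairing are illustrative assumptions, not CompPSA's exact definitions.

```python
def component_distance(c1, c2, weights={"position": 0.4, "full_length": 0.3, "stem_length": 0.3}):
    """Weighted distance between two components described as feature dictionaries."""
    return sum(w * abs(c1[f] - c2[f]) for f, w in weights.items())

def align_components(structure_a, structure_b):
    """Greedy pairing of components from two structures by smallest weighted distance."""
    pairs, used = [], set()
    for c1 in structure_a:
        candidates = [(component_distance(c1, c2), i)
                      for i, c2 in enumerate(structure_b) if i not in used]
        if candidates:
            dist, i = min(candidates)
            used.add(i)
            pairs.append((c1, structure_b[i], dist))
    return pairs
```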

Keywords: alignment, RNA secondary structure, pairwise, component-based, data mining

Procedia PDF Downloads 435
3683 Response Surface Methodology for the Optimization of Paddy Husker by Medium Brown Rice Peeling Machine 6 Rubber Type

Authors: S. Bangphan, P. Bangphan, C. Ketsombun, T. Sammana

Abstract:

Response surface methodology (RSM) was employed to study the effects of three factors (rubber clearance, spindle speed, and rice moisture) in a brown rice peeling machine on the optimal good rice yield (99.67, the average of three repeats). The optimum conditions derived from the RSM regression were analyzed using regression analysis and analysis of variance (ANOVA). At a significance level of α = 0.05, the adjusted R² was 96.55% and the standard deviation was 1.05056. The independent variables are the initial rubber clearance, spindle speed, and rice moisture, and the investigated responses are the final rubber clearance, spindle speed, and rice moisture.
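
A minimal sketch of the second-order response surface fit that RSM regression produces for three factors. The factor names follow the abstract; the design points, ranges, and synthetic response are placeholders, not the experiment's measurements.

```python
import itertools
import numpy as np

def quadratic_design_matrix(X):
    """Columns: intercept, x1..x3, all pairwise interactions, and squared terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(k), 2)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# X columns: rubber clearance, spindle speed, rice moisture; y: good rice yield (%).
X = np.random.default_rng(1).uniform([0.5, 800, 12], [1.5, 1200, 16], size=(15, 3))
y = 99 - 2 * (X[:, 0] - 1.0) ** 2 + 0.001 * (X[:, 1] - 1000)    # synthetic response
coef, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
```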

Keywords: brown rice, response surface methodology (RSM), peeling machine, optimization, paddy husker

Procedia PDF Downloads 550