Search results for: models error comparison
10322 Modeling Football Penalty Shootouts: How Improving Individual Performance Affects Team Performance and the Fairness of the ABAB Sequence
Authors: Pablo Enrique Sartor Del Giudice
Abstract:
Penalty shootouts often decide the outcome of important soccer matches. Although usually referred to as "lotteries", there is evidence that some national teams and clubs consistently perform better than others. The outcomes are therefore not explained by mere luck alone, and there are ways to improve the average performance of players, naturally at the expense of some sort of effort. In this article we study the payoff of player performance improvements in terms of the performance of the team as a whole. To do so we develop an analytical model with static individual performances, as well as Monte Carlo models that take into account the known influence of partial score and round number on individual performances. We find that within a range of usual values, the team performance improves more than 70% faster than individual performances do. Using these models, we also estimate that the new ABBA penalty shootout ordering under test removes almost all the known bias in favor of the first-shooting team under the current ABAB system.
Keywords: football, penalty shootouts, Monte Carlo simulation, ABBA
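The interaction between kick order and score-dependent pressure described above can be illustrated with a small Monte Carlo sketch. This is not the authors' model: the success probabilities (0.76 normally, 0.64 when the kicker's team trails) are assumed round numbers, and the early termination of already-decided shootouts is ignored.

```python
import random

def simulate_shootout(order, rng, p_base=0.76, p_trailing=0.64):
    """One shootout under a cycling kick order ('AB' -> ABAB..., 'ABBA' -> ABBAABBA...).
    A kicker whose team currently trails scores with lower probability,
    a stylized stand-in for the pressure effect discussed in the literature."""
    score = {"A": 0, "B": 0}
    kicks = {"A": 0, "B": 0}
    i = 0
    while True:
        team = order[i % len(order)]
        other = "B" if team == "A" else "A"
        p = p_trailing if score[team] < score[other] else p_base
        if rng.random() < p:
            score[team] += 1
        kicks[team] += 1
        i += 1
        # decide once both teams have taken the same number of kicks
        # (best of 5, then sudden death in pairs)
        if kicks["A"] == kicks["B"] and kicks["A"] >= 5 and score["A"] != score["B"]:
            return "A" if score["A"] > score["B"] else "B"

def first_team_win_rate(order, n=20000, seed=42):
    rng = random.Random(seed)
    return sum(simulate_shootout(order, rng) == "A" for _ in range(n)) / n

abab = first_team_win_rate("AB")
abba = first_team_win_rate("ABBA")
print(f"first-team win rate  ABAB: {abab:.3f}  ABBA: {abba:.3f}")
```

Under these assumed probabilities the ABAB rate sits visibly above one half while the ABBA rate is closer to fair, mirroring the qualitative finding of the abstract.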
Procedia PDF Downloads 162
10321 Revolving Ferrofluid Flow in Porous Medium with Rotating Disk
Authors: Paras Ram, Vikas Kumar
Abstract:
The seasonal transmission of Malaria was studied through the use of mathematical models. The data from the annual number of Malaria cases reported to the Division of Epidemiology, Ministry of Public Health, Thailand during the period 1997-2011 were analyzed. The transmission of Malaria with seasonality was studied by formulating a mathematical model which had been modified to describe different situations encountered in the transmission of Malaria. In our model, the population was separated into two groups, the human and vector groups, from which a system of nonlinear differential equations was constructed. Each human group was divided into susceptible, infectious in hot season, infectious in rainy season, infectious in cool season and recovered classes. The vector population was separated into two classes only: susceptible and infectious vectors. The analysis of the models was given by standard dynamical modeling.
Keywords: ferrofluid, magnetic field, porous medium, rotating disk, Neuringer-Rosensweig Model
Procedia PDF Downloads 421
10320 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use "toxicity" as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist slurs, etc.), and thus context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labeling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations of both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking ours against previous models on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual or contextual but agnostic of the conversation structure.
Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
Procedia PDF Downloads 170
10319 Emancipation through the Inclusion of Civil Society in Contemporary Peacebuilding: A Case Study of Peacebuilding Efforts in Colombia
Authors: D. Romero Espitia
Abstract:
Research on peacebuilding has taken a critical turn toward examining the neoliberal and hegemonic conception of peace operations. Alternative peacebuilding models have been analyzed, but the scholarly discussion fails to bring them together or form connections between them. The objective of this paper is to rethink peacebuilding by extracting the positive aspects of the various peacebuilding models, connecting them with the local context, and thereby promoting emancipation in contemporary peacebuilding efforts. Moreover, local ownership has been widely labelled as one of, if not the, core principles necessary for a successful peacebuilding project. Yet, definitions of what constitutes the 'local' remain debated. Through a qualitative review of the literature, this paper unpacks the contemporary conception of peacebuilding in nexus with 'local ownership' as manifested through civil society. Using Colombia as a case study, this paper argues that a new peacebuilding framework, one that reconsiders the terms of engagement between international and national actors, is needed in order to foster effective peacebuilding efforts in contested transitional states.
Keywords: civil society, Colombia, emancipation, peacebuilding
Procedia PDF Downloads 134
10318 Channel Estimation for Orthogonal Frequency Division Multiplexing Systems over Doubly Selective Channels Based on the DCS-DCSOMP Algorithm
Authors: Linyu Wang, Furui Huo, Jianhong Xiang
Abstract:
The Doppler shift generated by high-speed movement and multipath effects in the channel are the main reasons for the generation of a time-frequency doubly-selective (DS) channel. There is severe inter-carrier interference (ICI) in the DS channel. Channel estimation for an orthogonal frequency division multiplexing (OFDM) system over a DS channel is very difficult. The simultaneous orthogonal matching pursuit algorithm under distributed compressive sensing theory (DCS-SOMP) has been used in channel estimation for OFDM systems over DS channels. However, the reconstruction accuracy of the DCS-SOMP algorithm is not high enough in the low-SNR regime. To solve this problem, in this paper, we propose an improved DCS-SOMP algorithm based on an inner product difference comparison operation (DCS-DCSOMP). The reconstruction accuracy is improved by increasing the number of candidate indexes and designing the comparison conditions of the inner product difference. We combine the DCS-DCSOMP algorithm with the basis expansion model (BEM) to reduce the complexity of channel estimation. Simulation results show the effectiveness of the proposed algorithm and its advantages over other algorithms.
Keywords: OFDM, doubly selective, channel estimation, compressed sensing
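The greedy recovery step at the heart of SOMP-style algorithms can be sketched with plain orthogonal matching pursuit on a single measurement vector. This is only the textbook building block, not the authors' DCS-DCSOMP: the dictionary, sparsity level, and signal below are synthetic assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit the selected columns by
    least squares. SOMP extends the selection step across several
    measurement vectors; DCS-DCSOMP further changes how candidate
    indexes are compared."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary columns
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 0.8]    # a 3-sparse coefficient vector
y = A @ x_true                            # noiseless measurements
x_hat = omp(A, y, k=3)
print("recovered support:", np.nonzero(x_hat)[0])
```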
Procedia PDF Downloads 95
10317 0.13-µm Complementary Metal-Oxide Semiconductor Vector Modulator for Beamforming System
Authors: J. S. Kim
Abstract:
This paper presents a 0.13-µm Complementary Metal-Oxide Semiconductor (CMOS) vector modulator for a beamforming system. The vector modulator features a 360° phase range and a gain range of -10 dB to 10 dB, with a root mean square phase and amplitude error of only 2.2° and 0.45 dB, respectively. These features make it suitable for wireless backhaul systems in the 5 GHz industrial, scientific, and medical (ISM) band. It draws a current of 20.4 mA from a 1.2 V supply. The total chip size is 1.87x1.34 mm².
Keywords: CMOS, vector modulator, beamforming, 802.11ac
Procedia PDF Downloads 210
10316 Modeling Waiting and Service Time for Patients: A Case Study of Matawale Health Centre, Zomba, Malawi
Authors: Moses Aron, Elias Mwakilama, Jimmy Namangale
Abstract:
Spending long periods in queues for a basic service remains a common challenge in most developing countries, including Malawi. In the health sector in particular, the Out-Patient Department (OPD) experiences long queues, which puts the lives of patients at risk. However, by using queuing analysis to understand the nature of the problems and the efficiency of service systems, such problems can be abated. Depending on the kind of service, the literature proposes different possible queuing models. However, rather than using the generalized assumed models proposed by the literature, real-time case study data can help in understanding the particular problem model more deeply, including how such a model can vary from one day to another and from one case to another. As such, this study uses data obtained from one urban HC for BP, Pediatric and General OPD cases to investigate the average queuing time for patients within the system. It seeks to identify the proper queuing model by investigating the distribution functions of patients' arrival time, inter-arrival time, waiting time and service time. Compared with the standard values set by the WHO, the study found that patients at this HC spend more time waiting than being served. On model investigation, different days presented different models, ranging from an assumed M/M/1 and M/M/2 to M/Er/2. Through sensitivity analysis, a commonly assumed M/M/1 model failed to fit the data overall, whereas an M/Er/2 model was demonstrated to fit well. An M/Er/3 model appeared better in terms of resource utilization, suggesting a need to increase medical personnel at this HC, whereas an M/Er/4 model was shown to cause more idleness of human resources.
Keywords: health care, out-patient department, queuing model, sensitivity analysis
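The assumed M/M/1 baseline mentioned above has closed-form steady-state measures, which a short sketch can compute; the arrival and service rates used here are illustrative, not the HC's data.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state measures for an M/M/1 queue (Poisson arrivals,
    exponential service times, a single server); requires utilisation < 1."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("unstable queue: arrival rate >= service rate")
    W = 1 / (service_rate - arrival_rate)      # mean time in system
    Wq = rho / (service_rate - arrival_rate)   # mean wait before service
    L = arrival_rate * W                       # mean number in system (Little's law)
    return {"utilisation": rho, "W": W, "Wq": Wq, "L": L}

# e.g. 10 patients/hour arriving at a single clinician who serves 12/hour
metrics = mm1_metrics(10, 12)
print({k: round(v, 3) for k, v in metrics.items()})
```

Even at 83% utilisation, the mean wait before service (25 minutes here) dwarfs the 5-minute mean service time, the same qualitative imbalance the study reports.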
Procedia PDF Downloads 435
10315 Modelling and Simulation Efforts in Scale-Up and Characterization of Semi-Solid Dosage Forms
Authors: Saurav S. Rath, Birendra K. David
Abstract:
The generic pharmaceutical industry has to operate within strict timelines of product development and scale-up from lab to plant. Hence, detailed product and process understanding and implementation of appropriate mechanistic modelling and Quality-by-Design (QbD) approaches are imperative in the product life cycle. This work provides example cases of such efforts in topical dosage products. Topical products are typically in the form of emulsions, gels, thick suspensions or even simple solutions. The efficacy of such products is determined by characteristics like rheology and morphology. Defining and scaling up the right manufacturing process with a given set of ingredients, to achieve the right product characteristics, presents a challenge to the process engineer. For example, the non-Newtonian rheology varies not only with CPPs and CMAs but is also an implicit function of globule size (a CQA). Hence, this calls for various mechanistic models to help predict the product behaviour. This paper focusses on such models obtained from computational fluid dynamics (CFD) coupled with population balance modelling (PBM) and constitutive models (like shear, energy density). In the special case of the use of high shear homogenisers (HSHs) for the manufacture of thick emulsions/gels, this work presents findings on (i) a scale-up algorithm for HSHs using shear strain, a novel scale-up parameter for estimating mixing parameters, (ii) the non-linear relationship between viscosity and shear imparted into the system, and (iii) the effect of hold time on the rheology of the product. Specific examples of how this approach enabled scale-up across the 1L, 10L, 200L, 500L and 1000L scales are discussed.
Keywords: computational fluid dynamics, morphology, quality-by-design, rheology
Procedia PDF Downloads 269
10314 Forecasting Stock Indexes Using Bayesian Additive Regression Tree
Authors: Darren Zou
Abstract:
Forecasting the stock market is a very challenging task. Various economic indicators such as GDP, exchange rates, interest rates, and unemployment have a substantial impact on the stock market. Time series models are the traditional methods used to predict stock market changes. In this paper, a machine learning method, the Bayesian Additive Regression Tree (BART), is used to predict stock market indexes based on multiple economic indicators. BART can be used to model heterogeneous treatment effects and thereby works well when models are misspecified. It also has the capability to handle non-linear main effects and multi-way interactions without much input from financial analysts. In this research, BART is proposed to provide a reliable prediction of day-to-day stock market activities. Comparing the analysis results from BART with those of time series methods shows that BART performs well and has better prediction capability than the traditional methods.
Keywords: BART, Bayesian, predict, stock
Procedia PDF Downloads 130
10313 Study and Analysis of the Factors Affecting Road Safety Using Decision Tree Algorithms
Authors: Naina Mahajan, Bikram Pal Kaur
Abstract:
The purpose of traffic accident analysis is to find the possible causes of an accident. Road accidents cannot be totally prevented, but through suitable traffic engineering and management the accident rate can be reduced to a certain extent. This paper discusses the classification techniques C4.5 and ID3 using the WEKA data mining tool. These techniques are applied to the NH (National Highway) dataset. The C4.5 and ID3 techniques give the best results and high accuracy, with less computation time and a lower error rate.
Keywords: C4.5, ID3, NH (National Highway), WEKA data mining tool
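ID3's core splitting criterion is information gain, which the sketch below computes on a toy accident table (the rows and labels are invented for illustration, not drawn from the NH dataset). C4.5 refines this criterion with gain ratio and other extensions.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """ID3's criterion: entropy reduction from splitting on one attribute."""
    n = len(labels)
    by_value = {}
    for row, lab in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(lab)
    remainder = sum(len(part) / n * entropy(part) for part in by_value.values())
    return entropy(labels) - remainder

# toy accident records (hypothetical): attributes -> severity label
rows = [
    {"road": "NH", "light": "day"},    {"road": "NH", "light": "night"},
    {"road": "NH", "light": "night"},  {"road": "local", "light": "day"},
    {"road": "local", "light": "day"}, {"road": "local", "light": "night"},
]
labels = ["severe", "severe", "severe", "minor", "minor", "minor"]
g_road = information_gain(rows, labels, "road")
g_light = information_gain(rows, labels, "light")
print(f"gain(road)={g_road:.3f}  gain(light)={g_light:.3f}")
```

The tree-building step simply splits on the attribute with the largest gain (here, road type) and recurses on each subset.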
Procedia PDF Downloads 338
10312 The Application of Lesson Study Model in Writing Review Text in Junior High School
Authors: Sulastriningsih Djumingin
Abstract:
This study has several objectives. First, it aims at describing the ability of the second-grade students to write review text without applying the Lesson Study model at SMPN 18 Makassar. Second, it seeks to describe the ability of the second-grade students to write review text by applying the Lesson Study model at SMPN 18 Makassar. Third, it aims at testing the effectiveness of the Lesson Study model in writing review text at SMPN 18 Makassar. This research was a true experimental design with a posttest-only group design involving two groups: one control class and one experimental class. The research population was all 250 second-grade students at SMPN 18 Makassar, spread across 8 classes, and the sampling technique was purposive sampling. The control class was VIII2, consisting of 30 students, while the experimental class was VIII8, also consisting of 30 students. The research instruments were observation and tests. The collected data were analyzed using descriptive and inferential statistical techniques, with t-tests processed using SPSS 21 for Windows. The results show that: (1) of the 30 students in the control class, only 14 (47%) score above 7.5, categorized as inadequate; (2) in the experimental class, 26 (87%) students obtain a score of at least 7.5, categorized as adequate; (3) the Lesson Study model is effective when applied to writing review text. The comparison of the control and experimental classes indicates that the value of t-count is greater than the value of t-table (2.411 > 1.667), so the alternative hypothesis (H1) proposed by the researcher is accepted.
Keywords: application, lesson study, review text, writing
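The t-count versus t-table comparison reported above is a standard two-sample t-test; a minimal sketch with hypothetical posttest scores (not the study's data) looks like this.

```python
from statistics import mean, variance

def students_t(sample1, sample2):
    """Two-sample Student's t statistic with pooled variance, the
    equal-variance form commonly used for posttest-only comparisons.
    Returns the statistic and its degrees of freedom."""
    n1, n2 = len(sample1), len(sample2)
    sp2 = ((n1 - 1) * variance(sample1) + (n2 - 1) * variance(sample2)) / (n1 + n2 - 2)
    t = (mean(sample1) - mean(sample2)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    return t, n1 + n2 - 2

# hypothetical posttest scores for two small groups (NOT the study's data)
experimental = [8.0, 7.5, 9.0, 8.5, 7.0, 8.0]
control = [6.5, 7.0, 6.0, 7.5, 6.5, 7.0]
t, df = students_t(experimental, control)
print(f"t = {t:.3f} on {df} degrees of freedom")
```

If t exceeds the critical t-table value for the chosen significance level and degrees of freedom, the null hypothesis of equal means is rejected, as in the abstract's 2.411 > 1.667 comparison.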
Procedia PDF Downloads 202
10311 Effect of Realistic Lubricant Properties on Thermal Elastohydrodynamic Lubrication Behavior in Circular Contacts
Authors: Puneet Katyal, Punit Kumar
Abstract:
A great deal of effort has been devoted to the field of thermal effects in elastohydrodynamic lubrication (TEHL) during the last five decades. The focus was primarily on the development of an efficient numerical scheme to deal with the computational challenges involved in the solution of the TEHL model; however, some important aspects related to the accurate description of lubricant properties such as viscosity, rheology and thermal conductivity in EHL point contact analysis remain largely neglected. The few studies available in this regard are based upon highly complex mathematical models that are difficult to formulate and execute. Using a simplified thermal EHL model for point contacts, this work sheds some light on the importance of accurate characterization of the lubricant properties and demonstrates that the computed TEHL characteristics are highly sensitive to lubricant properties. It also emphasizes the use of appropriate mathematical models with experimentally determined parameters to account for correct lubricant behaviour.
Keywords: TEHL, shear thinning, rheology, conductivity
Procedia PDF Downloads 200
10310 Orthogonal Metal Cutting Simulation of Steel AISI 1045 via Smoothed Particle Hydrodynamic Method
Authors: Seyed Hamed Hashemi Sohi, Gerald Jo Denoga
Abstract:
Machining or metal cutting is one of the most widely used production processes in industry. The quality of the process and the resulting machined product depends on parameters like tool geometry, material, and cutting conditions. However, the relationships of these parameters to the cutting process are often based mostly on empirical knowledge. In this study, computer modeling and simulation using LS-DYNA software and a Smoothed Particle Hydrodynamics (SPH) methodology were performed on the orthogonal metal cutting process to analyze the three-dimensional deformation of AISI 1045 medium carbon steel during machining. The simulation was performed using the following constitutive models: the Power Law model, the Johnson-Cook model, and the Zerilli-Armstrong (Z-A) model. The outcomes were compared against the simulated results obtained by Cenk Kiliçaslan using the Finite Element Method (FEM) and the empirical results of Jaspers and Filice. The analysis shows that the SPH method combined with the Zerilli-Armstrong constitutive model is a viable alternative for simulating the metal cutting process. The tangential force was overestimated by 7%, and the normal force was underestimated by 16% when compared with empirical values. The simulated values of flow stress versus strain at various temperatures were also validated against empirical values. The SPH method using the Z-A model has also proven to be robust against issues of time-scaling. Experimental work was also done to investigate the effects of friction, rake angle and tool tip radius on the simulation.
Keywords: metal cutting, smoothed particle hydrodynamics, constitutive models, experimental, cutting forces analyses
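Of the constitutive models compared, the Johnson-Cook flow stress has a particularly compact form, sketched below. The AISI 1045 parameter defaults are an illustrative set quoted in the literature, not necessarily the values used in this study.

```python
from math import log

def johnson_cook_stress(strain, strain_rate, T,
                        A=553.1, B=600.8, n=0.234, C=0.0134, m=1.0,
                        ref_rate=1.0, T_room=293.0, T_melt=1733.0):
    """Johnson-Cook flow stress in MPa:
    sigma = (A + B*eps^n) * (1 + C*ln(rate/ref_rate)) * (1 - T*^m),
    where T* = (T - T_room)/(T_melt - T_room) is the homologous
    temperature. Defaults are an illustrative AISI 1045 set (assumed;
    check against the source before reuse)."""
    T_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * strain ** n)
            * (1 + C * log(strain_rate / ref_rate))
            * (1 - T_star ** m))

# flow stress at 20% strain, a 1000/s strain rate, and 500 K
sigma = johnson_cook_stress(0.2, 1000.0, 500.0)
print(f"flow stress: {sigma:.1f} MPa")
```

The three factors cleanly separate strain hardening, strain-rate sensitivity, and thermal softening, which is why the model is a common baseline in cutting simulations.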
Procedia PDF Downloads 261
10309 Reproductive Performance of Dairy Cows at Different Parities: A Case Study in Enrekang Regency, Indonesia
Authors: Muhammad Yusuf, Abdul Latief Toleng, Djoni Prawira Rahardja, Ambo Ako, Sahiruddin Sahiruddin, Abdi Eriansyah
Abstract:
The objective of this study was to assess the reproductive performance of dairy cows at different parities. A total of 60 Holstein-Friesian dairy cows of parities one to three, from five small farms, were used in the study. All cows were confined in a tie-stall barn with rubber over the concrete floor. The herds were visited twice for a survey with the help of a questionnaire. The reproductive parameters used in the study were days open, calving interval, and services per conception (S/C). The results showed that the mean (±SD) days open of the cows in parity 2 was slightly longer than in parity 3 (228.2±121.5 vs. 205.5±144.5; P=0.061). No cows conceived within 85 days postpartum in parity 3, compared with 13.8% of cows in parity 2. However, the proportions of cows conceiving within 150 days postpartum in parity 2 and parity 3 were 30.1% and 36.4%, respectively. Likewise, by 210 days after calving, the proportion of cows conceiving in parity 3 was higher than in parity 2 (72.8% vs. 44.8%; P<0.05). The mean (±SD) calving intervals of the cows in parity 2 and parity 3 were 508.2±121.5 and 495.5±144.1 days, respectively. The proportions of cows with calving intervals within 400 and 450 days were higher in parity 3 than in parity 2 (23.1% vs. 17.2% and 53.9% vs. 31.0%). Cows in parity 1 had a significantly (P<0.01) lower number of S/C than cows in parity 2 and parity 3 (1.6±1.2 vs. 3.5±3.4 and 3.3±2.1). It can be concluded that the reproductive performance of the cows is affected by parity.
Keywords: dairy cows, parity, days open, calving interval, service per conception
Procedia PDF Downloads 257
10308 A Study on the New Weapon Requirements Analytics Using Simulations and Big Data
Authors: Won Il Jung, Gene Lee, Luis Rabelo
Abstract:
Since many weapon systems are becoming more complex and diverse, various problems arise in terms of acquisition cost, time, and performance limitations. Experiment execution in the real world is costly, dangerous, and time-consuming as a way to obtain the Required Operational Characteristics (ROC) for a new weapon acquisition, even though it enhances the fidelity of the experiment results. Also, until now most of the research has relied on a large number of assumptions, so a bias is present in the experiment results. Here, a new methodology is proposed to solve these problems without such a variety of assumptions. The ROC of the new weapon system is developed through the new methodology, which analyzes big data generated by simulating various scenarios based on virtual and constructive models involving 6 Degrees of Freedom (6DoF). The new methodology enables us to identify unbiased ROC for new weapons by reducing assumptions, and it provides support for optimal weapon system acquisition.
Keywords: big data, required operational characteristics (ROC), virtual and constructive models, weapon acquisition
Procedia PDF Downloads 289
10307 Assessment of Students' Skills in Error Detection in SQL Classes Using a Rubric Framework - An Empirical Study
Authors: Dirson Santos De Campos, Deller James Ferreira, Anderson Cavalcante Gonçalves, Uyara Ferreira Silva
Abstract:
Rubrics in learning research provide evaluation criteria and expected performance standards linked to defined student activities for learning and pedagogical objectives. Although rubrics are used in education at all levels, academic literature on rubrics as a tool to support research in SQL education is quite rare. A large class of SQL queries is syntactically correct, but certainly not all are semantically correct. Detecting and correcting errors is a recurring problem in SQL education. In this paper, we use the Rubric Abstract Framework (RAF), which consists of steps that allow us to map the information needed to measure student performance, guided by the didactic objectives defined by the teacher, as long as the domain modeling is contextualized by the rubric. An empirical study was done that demonstrates how rubrics can mitigate student difficulties in finding logical errors and ease the teacher's workload in SQL education. Detecting and correcting logical errors is an important skill for students. Researchers have proposed several ways to improve SQL education because these skills are crucial in software engineering and computer science. The RAF instantiation was used in an empirical study conducted in a database course during the COVID-19 pandemic. The pandemic transformed face-to-face education into remote education, without in-person classes. The lab activities were conducted remotely, which hinders the teaching-learning process, in particular for this research, in verifying the evidence or statements of knowledge, skills, and abilities (KSAs) of students. Much research in academia and industry involves databases. The innovation proposed in this paper is the approach used, in which the results obtained when using rubrics to map logical errors in query formulation are analyzed, with the gains obtained by students empirically verified.
The research approach can be used in the post-pandemic period in both classroom and distance learning.
Keywords: rubric, logical error, structured query language (SQL), empirical study, SQL education
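The distinction the paper draws between syntactic and logical (semantic) correctness can be made concrete with a small in-memory example; the schema and grades below are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE grades (student TEXT, course TEXT, grade TEXT);
INSERT INTO grades VALUES
  ('ana', 'db', 'A'), ('ana', 'os', 'F'),
  ('bob', 'db', 'B'), ('bob', 'os', 'C');
""")

# Intended question: which students never received an 'F'?
# Syntactically valid but logically wrong: it returns any student with at
# least one non-F grade, so 'ana' (who failed 'os') still appears.
wrong = [r[0] for r in con.execute(
    "SELECT DISTINCT student FROM grades WHERE grade <> 'F'")]

# A correct formulation excludes students who have any failing grade.
right = [r[0] for r in con.execute(
    "SELECT DISTINCT student FROM grades WHERE student NOT IN "
    "(SELECT student FROM grades WHERE grade = 'F')")]

print("wrong:", sorted(wrong), " right:", sorted(right))
```

Both queries run without error, which is exactly why a rubric that targets the logical intent of the query, rather than its syntax, is needed to grade such work.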
Procedia PDF Downloads 190
10306 Neuroevolution Based on Adaptive Ensembles of Biologically Inspired Optimization Algorithms Applied for Modeling a Chemical Engineering Process
Authors: Sabina-Adriana Floria, Marius Gavrilescu, Florin Leon, Silvia Curteanu, Costel Anton
Abstract:
Neuroevolution is a subfield of artificial intelligence used to solve various problems in different application areas. Specifically, neuroevolution is a technique that applies biologically inspired methods to generate neural network architectures and optimize their parameters automatically. In this paper, we use different biologically inspired optimization algorithms in an ensemble strategy with the aim of training multilayer perceptron neural networks, resulting in regression models used to simulate the industrial chemical process of obtaining bricks from silicone-based materials. Installations in the raw ceramics industry, i.e., bricks, are characterized by significant energy consumption and large quantities of emissions. In addition, the initial conditions that were taken into account during the design and commissioning of the installation can change over time, which leads to the need to add new mixes to adjust the operating conditions for the desired purpose, e.g., material properties and energy saving. The present approach follows the study by simulation of a process of obtaining bricks from silicone-based materials, i.e., the modeling and optimization of the process. Optimization aims to determine the working conditions that minimize the emissions represented by nitrogen monoxide. We first use a search procedure to find the best values for the parameters of various biologically inspired optimization algorithms. Then, we propose an adaptive ensemble strategy that uses only a subset of the best algorithms identified in the search stage. The adaptive ensemble strategy combines the results of selected algorithms and automatically assigns more processing capacity to the more efficient algorithms. Their efficiency may also vary at different stages of the optimization process. In a given ensemble iteration, the most efficient algorithms aim to maintain good convergence, while the less efficient algorithms can improve population diversity. 
The proposed adaptive ensemble strategy outperforms the individual optimizers and the non-adaptive ensemble strategy in convergence speed, and the obtained results provide lower error values.
Keywords: optimization, biologically inspired algorithm, neuroevolution, ensembles, bricks, emission minimization
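The evolutionary training loop underlying such neuroevolution can be reduced to a minimal sketch: a (1+lambda) evolution strategy tuning the weights of a tiny perceptron on a toy regression task. This is a simplified stand-in for illustration, not the paper's multi-algorithm adaptive ensemble, and the toy target replaces the plant data.

```python
import numpy as np

HIDDEN = 8  # one hidden layer of 8 tanh units; 25 weights in total

def mlp(params, x):
    """Tiny one-hidden-layer perceptron over a flat weight vector."""
    W1 = params[:HIDDEN]                 # input -> hidden weights
    b1 = params[HIDDEN:2 * HIDDEN]       # hidden biases
    W2 = params[2 * HIDDEN:3 * HIDDEN]   # hidden -> output weights
    b2 = params[-1]                      # output bias
    h = np.tanh(x[:, None] * W1 + b1)
    return h @ W2 + b2

def evolve(x, y, pop=40, gens=300, sigma=0.3, seed=0):
    """(1+lambda) evolution strategy: mutate the incumbent weight vector,
    keep a mutant only if it lowers the mean squared error, and slowly
    shrink the mutation step."""
    rng = np.random.default_rng(seed)
    best = 0.5 * rng.standard_normal(3 * HIDDEN + 1)
    best_err = np.mean((mlp(best, x) - y) ** 2)
    for _ in range(gens):
        for _ in range(pop):
            cand = best + sigma * rng.standard_normal(best.size)
            err = np.mean((mlp(cand, x) - y) ** 2)
            if err < best_err:
                best, best_err = cand, err
        sigma *= 0.995
    return best, best_err

x = np.linspace(-1, 1, 20)
y = x ** 2                      # toy regression target
weights, err = evolve(x, y)
print(f"final MSE: {err:.4f}")
```

An ensemble version would run several such optimizers on a shared population and, as the paper describes, reallocate processing capacity toward whichever optimizer is currently most efficient.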
Procedia PDF Downloads 116
10305 Gravitational Frequency Shifts for Photons and Particles
Authors: Jing-Gang Xie
Abstract:
This research considers the integration of Quantum Field Theory and the General Theory of Relativity. Although both are successful models for explaining the behavior of particles, they are incompatible, since they work at different masses and energy scales, as evidenced by difficulties in describing black holes and the formation of the universe. Previous efforts to merge the two theories include the likes of String Theory, Quantum Gravity models, and others. In a bid to propose an actionable experiment, the paper's approach starts with derivations from the existing theories. It goes on to test the derivations by applying the same initial assumptions, coupled with several deviations. The resulting equations give results similar to those of the classical Newtonian model, quantum mechanics, and general relativity as long as conditions are normal. However, outcomes differ when conditions are extreme; specifically, there are no breakdowns even below the Schwarzschild radius or at the Planck length. This supports the possibility of integrating the two theories.
Keywords: general relativity theory, particles, photons, Quantum Gravity Model, gravitational frequency shift
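For the gravitational frequency shift itself, standard general relativity gives a closed-form factor for light escaping a Schwarzschild potential; the sketch below evaluates it for the Sun using approximate constants, and is the textbook result rather than this paper's derivation.

```python
from math import sqrt

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def redshift_factor(M, r):
    """Ratio f_observed / f_emitted for a photon climbing from radius r
    to infinity outside a mass M: sqrt(1 - 2GM/(r c^2))."""
    rs = 2 * G * M / c ** 2        # Schwarzschild radius
    if r <= rs:
        raise ValueError("emission radius inside the Schwarzschild radius")
    return sqrt(1 - rs / r)

M_sun, R_sun = 1.989e30, 6.957e8   # kg, m (approximate)
f = redshift_factor(M_sun, R_sun)
print(f"f_obs/f_emit from the solar surface: {f:.8f}")
```

The factor is undefined at and below the Schwarzschild radius, which is precisely the extreme regime where the abstract claims its modified equations avoid a breakdown.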
Procedia PDF Downloads 359
10304 Effectiveness of Office-Based Occupational Therapy for Office Workers with Low Back Pain: A Public Health Approach
Authors: Dina Jalalvand, Joshua A. Cleland
Abstract:
This double-blind, randomized controlled trial with parallel groups aimed to examine the effectiveness of office-based occupational therapy for office workers with low back pain, measured by pain intensity and range of motion. Seventy-two male office workers (age: 20-50 years) with chronic low back pain (more than three months, with at least two symptoms of chronic low back pain) satisfied the eligibility criteria and agreed to participate in this study. The absence of a joint burst on magnetic resonance imaging (MRI) was an important inclusion criterion as well. Subjects were randomly assigned to a control or experimental group. The experimental group received the modified package of exercise-based occupational therapy, which included 11 simple exercise movements (derived from Williams and McKenzie), while the control group received conventional therapy consisting of their routine physiotherapy sessions. The subjects completed the exercises three times a week for six weeks, with each session lasting 10-15 minutes. Pain intensity and range of motion were the primary outcomes and were measured at baseline, 6 weeks, and 12 weeks after the end of the intervention using the numerical rating scale (NRS) and a goniometer, respectively. Repeated-measures ANOVA was used for analyzing the data. The results showed a significant decrease in pain intensity (p ≤ 0.05) and an increase in range of motion (p ≤ 0.001) in the experimental group compared with the control group after 6 and 12 weeks of intervention (between-group comparisons). In addition, there was a significant decrease in pain intensity (p ≤ 0.05) and an increase in range of motion (p ≤ 0.001) in the intervention group compared with baseline after 6 and 12 weeks (within-group comparison). This shows a positive effect of exercise-based occupational therapy that could potentially be delivered at low cost to office workers who suffer from low back pain.
In addition, it should be noted that the introduced exercise package is easy to perform and requires no specific instruction.Keywords: public health, office workers, low back pain, occupational therapy
Procedia PDF Downloads 21810303 Promoting Biofuels in India: Assessing Land Use Shifts Using Econometric Acreage Response Models
Authors: Y. Bhatt, N. Ghosh, N. Tiwari
Abstract:
Acreage response functions are modeled taking account of expected harvest prices, weather-related variables, and other non-price variables, allowing for partial adjustment. At the outset, based on the literature on price expectation formation, we explored suitable formulations for estimating the farmers' expected prices. Assuming that farmers form expectations rationally, the prices of food and biofuel crops are modeled using time-series methods, testing for possible ARCH/GARCH effects to account for volatility. The prices projected on the basis of these models are then inserted as proxies for the expected prices in the acreage response functions. Food crop acreages in different growing states are found to be sensitive to their prices relative to those of one or more of the biofuel crops considered. The percentage improvement in food crop yields required to offset the acreage loss is also worked out.Keywords: acreage response function, biofuel, food security, sustainable development
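As a rough illustration of the partial-adjustment (Nerlovian) acreage response described above, the following Python sketch simulates how planted acreage moves toward a price-driven target one season at a time; all coefficients, prices, and acreage values are hypothetical and not taken from the study:

```python
# Sketch of a Nerlovian partial-adjustment acreage response (illustrative
# only; coefficients are hypothetical). Desired acreage responds to the
# expected price of the food crop relative to a competing biofuel crop;
# actual acreage adjusts only partially each season toward that target.

def desired_acreage(expected_rel_price, intercept=50.0, price_elast=20.0):
    """Long-run (desired) acreage as a linear function of expected relative price."""
    return intercept + price_elast * expected_rel_price

def adjust_acreage(previous_acreage, expected_rel_price, adjustment=0.4):
    """Partial adjustment: close a fraction of the gap toward desired acreage."""
    target = desired_acreage(expected_rel_price)
    return previous_acreage + adjustment * (target - previous_acreage)

acreage = 60.0
for rel_price in [1.0, 1.2, 1.2, 1.2]:  # food price rises relative to the biofuel crop
    acreage = adjust_acreage(acreage, rel_price)
print(round(acreage, 2))
```

With an adjustment coefficient of 0.4, acreage closes 40% of the remaining gap to the desired level each season, so a price change feeds through to planting decisions only gradually.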
Procedia PDF Downloads 30110302 The Use of Empirical Models to Estimate Soil Erosion in Arid Ecosystems and the Importance of Native Vegetation
Authors: Meshal M. Abdullah, Rusty A. Feagin, Layla Musawi
Abstract:
When humans mismanage arid landscapes, soil erosion can become a primary mechanism leading to desertification. This study focuses on applying soil erosion models to a disturbed landscape in Umm Nigga, Kuwait, and identifying its predicted change under restoration plans. The northern portion of Umm Nigga, containing both coastal and desert ecosystems, falls within the boundaries of the Demilitarized Zone (DMZ) adjacent to Iraq and has been fenced off to restrict public access since 1994. The central objective of this project was to utilize GIS and remote sensing to compare the MPSIAC (Modified Pacific Southwest Inter-Agency Committee), EMP (Erosion Potential Method), and USLE (Universal Soil Loss Equation) soil erosion models and determine their applicability to arid regions such as Kuwait. Spatial analysis was used to develop the necessary datasets for factors such as soil characteristics, vegetation cover, runoff, climate, and topography. Results showed that the MPSIAC and EMP models produced a similar spatial distribution of erosion, though the MPSIAC results had more variability. For the MPSIAC model, approximately 45% of the land surface ranged from moderate to high soil loss, while 35% did so for the EMP model. The USLE model had contrasting results and a different spatial distribution of soil loss, with 25% of the area ranging from moderate to high erosion and 75% ranging from low to very low. We concluded that MPSIAC and EMP were the most suitable models for arid regions in general, with the MPSIAC model performing best. We then applied the MPSIAC model to compare the amount of soil loss between coastal and desert areas, and between fenced and unfenced sites. In the desert area, soil loss differed between fenced and unfenced sites: in the fenced desert sites, 88% of the surface was covered with vegetation and soil loss was very low, while at the unfenced desert sites vegetation cover was only 3% and soil loss was correspondingly higher. 
In the coastal areas, the amount of soil loss was similar between fenced and unfenced sites. These results imply that vegetation cover plays an important role in reducing soil erosion, and that fencing is much more important in desert ecosystems as protection against overgrazing. When applying the MPSIAC model predictively, we found that vegetation cover could be increased from 3% to 37% in unfenced areas, and soil erosion could then decrease by 39%. We conclude that the MPSIAC model is best suited to predicting soil erosion for arid regions such as Kuwait.Keywords: soil erosion, GIS, Modified Pacific Southwest Inter-Agency Committee model (MPSIAC), erosion potential method (EMP), universal soil loss equation (USLE)
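For reference, the USLE model compared above is a simple product of empirical factors, A = R·K·LS·C·P. The sketch below evaluates it with hypothetical factor values (not the study's datasets) to illustrate how a high cover-management factor C, i.e., sparse vegetation, raises predicted soil loss:

```python
# Minimal sketch of the Universal Soil Loss Equation (USLE): A = R*K*LS*C*P.
# All factor values below are hypothetical, chosen only to show the effect
# of vegetation cover (via the C factor) on predicted annual soil loss.

def usle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A from rainfall erosivity R, soil erodibility K,
    slope length-steepness LS, cover-management C, and support practice P."""
    return R * K * LS * C * P

bare_site = usle_soil_loss(R=80, K=0.3, LS=1.2, C=0.45, P=1.0)       # sparse cover
vegetated_site = usle_soil_loss(R=80, K=0.3, LS=1.2, C=0.05, P=1.0)  # dense cover
print(round(bare_site, 2), round(vegetated_site, 2))
```

Because the factors multiply, reducing C alone by an order of magnitude reduces predicted loss by the same order, which is consistent with the vegetation effect the study reports.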
Procedia PDF Downloads 29710301 Removal of Heavy Metal from Wastewater using Bio-Adsorbent
Authors: Rakesh Namdeti
Abstract:
Liquid waste (wastewater) is essentially the water supply of the community after it has been used in a variety of applications. In recent years, heavy metal concentrations, besides other pollutants, have increased to reach levels dangerous for the living environment in many regions. Among the heavy metals, lead has the most damaging effects on human health. It can enter the human body through the uptake of food (65%), water (20%), and air (15%). Against this background, a low-cost and easily available biosorbent was used and reported in this study. The scope of the present study is to remove lead from aqueous solution using Olea europaea resin as the biosorbent. The results showed that the biosorption capacity of the Olea europaea resin biosorbent was high for lead removal. The Langmuir, Freundlich, Temkin, and Dubinin-Radushkevich (D-R) models were used to describe the biosorption equilibrium of lead on the Olea europaea resin biosorbent, and the biosorption followed the Langmuir isotherm. The kinetic models showed that the pseudo-second-order rate expression represented the biosorption data well.Keywords: novel biosorbent, central composite design, lead, isotherms, kinetics
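The Langmuir isotherm that the biosorption data followed has the closed form q_e = q_max·K_L·C_e / (1 + K_L·C_e), where q_e is the equilibrium uptake and C_e the equilibrium concentration. A minimal sketch with hypothetical parameters (not the study's fitted values):

```python
# Sketch of the Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e).
# q_max (maximum monolayer capacity) and K_L (Langmuir constant) below are
# hypothetical placeholders, not values fitted in the study.

def langmuir(Ce, q_max=30.0, K_L=0.2):
    """Equilibrium uptake q_e (mg/g) at equilibrium concentration Ce (mg/L)."""
    return q_max * K_L * Ce / (1.0 + K_L * Ce)

# Uptake rises with concentration but saturates toward q_max (monolayer coverage):
print(round(langmuir(5.0), 2), round(langmuir(500.0), 2))
```

The saturation toward q_max is the signature of Langmuir behavior, in contrast to the Freundlich model, whose power-law form has no finite capacity limit.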
Procedia PDF Downloads 7810300 Surface Roughness Formed during Hybrid Turning of Inconel Alloy
Authors: Pawel Twardowski, Tadeusz Chwalczuk, Szymon Wojciechowski
Abstract:
Inconel 718 is a material characterized by unique mechanical properties, high-temperature strength, low thermal conductivity, and corrosion resistance. However, these features result in the low machinability of this material, which is usually manifested by intense tool wear and poor surface finish. Therefore, this paper is focused on the evaluation of surface roughness during hybrid machining of Inconel 718. The primary aim of the study was to determine the relations between the vibrations generated during hybrid turning and the resulting surface roughness. Moreover, a comparison of the tested machining techniques in terms of vibrations, tool wear, and surface roughness has been made. The conducted tests included face turning of Inconel 718 with laser assistance over a range of cutting speeds. The surface roughness was inspected with a stylus profilometer, and vibration accelerations were measured with a three-component piezoelectric accelerometer. The research shows that the application of laser-assisted machining can contribute to the reduction of surface roughness and cutting vibrations in comparison to conventional turning. Moreover, the obtained results enable the selection of an effective cutting speed allowing the improvement of surface finish and cutting dynamics.Keywords: hybrid machining, nickel alloys, surface roughness, turning, vibrations
Procedia PDF Downloads 32410299 Finite Element Modeling Techniques of Concrete in Steel and Concrete Composite Members
Authors: J. Bartus, J. Odrobinak
Abstract:
The paper presents a nonlinear 3D finite element model of composite steel and concrete beams with web openings, analyzed using the Finite Element Method (FEM). The core of the study is the introduction of basic modeling techniques, comprising the description of material behavior, the selection of appropriate elements, and recommendations for overcoming convergence problems. Results from various finite element models are compared in the study. The main objective is to observe the concrete failure mechanism and its influence on the structural performance of the numerical models of the beams at particular load stages. The bearing capacity of the beams and the corresponding deformations, stresses, strains, and fracture patterns were determined. The results show how load-bearing elements consisting of concrete parts can be analyzed using FEM software, with various options for creating the most suitable numerical model. The paper demonstrates the versatility of Ansys software for structural simulations.Keywords: Ansys, concrete, modeling, steel
Procedia PDF Downloads 12110298 Generalization of Zhou Fixed Point Theorem
Authors: Yu Lu
Abstract:
Fixed point theory is a basic tool for the study of the existence of Nash equilibria in game theory. This paper presents a significant generalization of the Veinott-Zhou fixed point theorem for increasing correspondences, which serves as an essential framework for investigating the existence of Nash equilibria in supermodular and quasisupermodular games. To establish our proofs, we explore different conceptions of multivalued increasingness and provide comprehensive results concerning the existence of the largest/least fixed point. We provide two distinct approaches to the proof, each offering unique insights and advantages. These advancements not only extend the applicability of the Veinott-Zhou theorem to a broader range of economic scenarios but also enhance the theoretical framework for analyzing equilibrium behavior in complex game-theoretic models. Our findings pave the way for future research in the development of more sophisticated models of economic behavior and strategic interaction.Keywords: fixed-point, Tarski’s fixed-point theorem, Nash equilibrium, supermodular game
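On finite lattices, the least fixed point of an increasing map of the kind discussed above can be reached constructively by iterating the map from the bottom element until it stabilizes. A toy illustration on the powerset lattice ordered by inclusion (the particular map is an arbitrary example for demonstration, not one from the paper):

```python
# Illustrative computation of the least fixed point of a monotone
# (increasing) map on a finite lattice, by iteration from the bottom
# element. The lattice here is the subsets of {0, 1, 2} under inclusion.

def least_fixed_point(f, bottom):
    """Iterate a monotone map from the bottom element until it stabilizes."""
    current = bottom
    while True:
        nxt = f(current)
        if nxt == current:
            return current
        current = nxt

# An example monotone map on subsets: always include 0, and include 2
# whenever 1 is present. Adding elements to the input never removes
# elements from the output, so the map is increasing under inclusion.
def f(s):
    out = set(s) | {0}
    if 1 in out:
        out.add(2)
    return frozenset(out)

print(sorted(least_fixed_point(f, frozenset())))
```

Starting instead from {1} converges to {0, 1, 2}, the least fixed point above that element, which illustrates how the fixed-point set itself carries lattice structure.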
Procedia PDF Downloads 5510297 Statistical Modeling of Mobile Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes
Authors: Jihad S. Daba, J. P. Dubois
Abstract:
Understanding the statistics of non-isotropic scattering multipath channels that fade randomly with respect to time, frequency, and space in a mobile environment is crucial for the accurate detection of received signals in wireless and cellular communication systems. In this paper, we derive stochastic models for the probability density function (PDF) of the shift in the carrier frequency caused by the Doppler effect on the received illuminating signal in the presence of a dominant line of sight. Our derivation is based on a generalized Clarke model and a two-wave partially developed scattering model, where the statistical distribution of the frequency shift is shown to be consistent with the power spectral density of the Doppler-shifted signal.Keywords: Doppler shift, filtered Poisson process, generalized Clarke's model, non-isotropic scattering, partially developed scattering, Rician distribution
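For orientation, in the classical isotropic special case of Clarke's model the Doppler shift is f = f_m·cos(θ) with a uniformly distributed arrival angle θ, which gives the arcsine ("Jakes") density p(f) = 1/(π·sqrt(f_m² − f²)). A Monte Carlo sketch below illustrates that special case with a hypothetical maximum shift; the paper's actual contribution concerns the harder non-isotropic case with a dominant line of sight:

```python
import math, random

# Monte Carlo sketch of the Doppler shift under the isotropic Clarke model:
# uniform arrival angles make f = f_m * cos(theta) follow the arcsine
# ("Jakes") density, which piles probability up near +/- f_m.

random.seed(42)
f_m = 100.0  # maximum Doppler shift in Hz (hypothetical)
shifts = [f_m * math.cos(random.uniform(0.0, 2.0 * math.pi))
          for _ in range(100_000)]

# The arcsine law predicts far more mass near the band edges than near zero:
near_edges = sum(1 for s in shifts if abs(s) > 0.9 * f_m)
near_zero = sum(1 for s in shifts if abs(s) < 0.1 * f_m)
print(near_edges > near_zero)
```

The U-shaped histogram this produces mirrors the familiar "bathtub" Doppler power spectral density of the classical model, the baseline against which non-isotropic generalizations are derived.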
Procedia PDF Downloads 37210296 Cirrhosis Mortality Prediction as Classification using Frequent Subgraph Mining
Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride
Abstract:
In this work, we use machine learning and novel data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidities, clinical procedures, and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-stage Liver Disease (MELD) score is used as a comparator for mortality prediction. All of our models statistically significantly outperform the MELD-score model, showing an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM combined with ensemble methods further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements on its own. To the best of our knowledge, this is the first work to apply modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients, and it builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning
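The AUC used above to compare models has a simple probabilistic reading: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A self-contained sketch of that pairwise computation with toy scores (not study data):

```python
# Minimal sketch of the area under the ROC curve (AUC) via its pairwise
# interpretation: the fraction of (positive, negative) pairs in which the
# positive case scores higher, with ties counted as half a win.

def auc(scores, labels):
    """AUC from raw scores and binary labels (1 = positive, 0 = negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3, 0.2]  # toy model outputs
labels = [1, 1, 0, 1, 0]            # toy one-year mortality outcomes
print(auc(scores, labels))
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, so an average 10% absolute improvement over a baseline score is a substantial gain in discrimination.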
Procedia PDF Downloads 13410295 Impact of Import Restriction on Rice Production in Nigeria
Authors: C. O. Igberi, M. U. Amadi
Abstract:
This research paper on the impact of import restriction on rice production in Nigeria aims at finding valid solutions to the age-old problem of rice self-sufficiency through a better understanding of policy measures used in the past, in this case, the effectiveness of the rice import restriction of the early 1990s. It asks whether import restriction boosts domestic rice production, and what the macroeconomic determinants of the Gross Domestic Rice Product (GDRP) are. The research questions are investigated through literature and analytical frameworks, such that time-series data on the GDRP, Gross Fixed Capital Formation (GFCF), average foreign rice producers' prices (PPF), domestic producers' prices (PPN), and the labour force (LABF) are collated for analysis (with an import restriction dummy variable, POL1). The research objectives/hypotheses are analysed using cointegration, the Vector Error Correction Model (VECM), the Impulse Response Function (IRF), and the Granger Causality Test (GCT). Results show that, in the short-run error correction specification for GDRP, a one percent (1%) deviation away from the long-run equilibrium in a current quarter is corrected by only 0.14% in the subsequent quarter. Also, the rice import restriction policy had no significant effect on the GDRP over this period. Other findings show that the policy period has, in fact, had effects on the PPN and LABF. The chosen variables are valid macroeconomic factors that explain the GDRP of Nigeria, as adduced from the IRF and GCT, and in the long run. Policy recommendations suggest that import restriction is not disqualified as a veritable tool for improving domestic rice production; rather, better enforcement procedures and strict adherence to the policy dictates are needed. Furthermore, accompanying policies which drive public and private capital investment and accumulation must be introduced. 
Also, the employment rate and labour substitution in the agricultural sector should not be changed drastically; rather, the sector's welfare and efficiency should be improved.Keywords: import restriction, gross domestic rice production, cointegration, VECM, Granger causality, impulse response function
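The short-run error correction reported above can be pictured with a toy iteration: each quarter, only a small fraction of the remaining deviation from long-run equilibrium is removed. The 0.0014 speed-of-adjustment coefficient below mirrors the reported 0.14% per quarter; everything else in this sketch is illustrative and not estimated from the study's data:

```python
# Toy error-correction dynamic in the spirit of the VECM result above:
# a fixed fraction (the speed-of-adjustment coefficient) of last quarter's
# deviation from long-run equilibrium is corrected each quarter. With a
# coefficient this small, convergence back to equilibrium is very slow.

def correct_deviation(deviation, speed=0.0014):
    """Apply one quarter of error correction toward long-run equilibrium."""
    return deviation * (1.0 - speed)

dev = 1.0  # start 1% above the long-run equilibrium
for _ in range(4):  # one year of quarterly adjustment
    dev = correct_deviation(dev)
print(round(dev, 6))
```

After a full year, almost all of the initial deviation remains, which conveys how weak the estimated short-run adjustment toward equilibrium is.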
Procedia PDF Downloads 20610294 Comparison Analysis on the Safety Culture between the Executives and the Operators: Case Study in the Aircraft Manufacturer in Taiwan
Authors: Wen-Chen Hwang, Yu-Hsi Yuan
Abstract:
According to estimates by safety and hygiene researchers, 80% to 90% of workplace accidents in enterprises can be attributed to human factors. Nevertheless, human factors are not the only cause of accidents; the occurrence of accidents is also closely associated with the safety culture of the organization. Therefore, the most effective way of reducing the accident rate would be to improve the social and organizational factors that influence an organization's safety performance. The aim of the present study is to understand the current level of safety culture in manufacturing enterprises. A tool for evaluating safety culture matching the needs and characteristics of manufacturing enterprises was developed by reviewing the literature on safety culture and taking the particular backgrounds of the case enterprises into consideration. Expert validity was also applied in developing the questionnaire. Moreover, the safety culture assessment was conducted through a practical investigation of the case enterprises. In total, 505 samples were involved, of which 53 were executives and 452 were operators. The comparison of safety culture levels between the executives and the operators reached significance in 8 dimensions: Safety Commitment, Safety System, Safety Training, Safety Involvement, Reward and Motivation, Communication and Reporting, Leadership and Supervision, and Learning and Changing. In general, the overall safety culture score was higher for executives than for operators (M: 74.98 > 69.08; t = 2.87; p < 0.01).Keywords: questionnaire survey, safety culture, t-test, media studies
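The executive-operator comparison above rests on an independent-samples t statistic. A minimal sketch of the pooled-variance computation with toy scores (not the survey responses, whose group sizes were 53 and 452):

```python
import math

# Sketch of the independent-samples (Student's) t statistic with pooled
# variance, the kind of test used above to compare executive and operator
# safety-culture scores. The two samples below are toy data.

def pooled_t(x, y):
    """Two-sample t statistic assuming equal variances."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    ssx = sum((v - mx) ** 2 for v in x)
    ssy = sum((v - my) ** 2 for v in y)
    sp2 = (ssx + ssy) / (nx + ny - 2)  # pooled variance estimate
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

executives = [78, 72, 75, 77, 73]  # toy scores
operators = [70, 68, 69, 71, 67]   # toy scores
print(round(pooled_t(executives, operators), 2))
```

The statistic is then compared against the t distribution with n1 + n2 − 2 degrees of freedom to obtain the p-value reported in the abstract.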
Procedia PDF Downloads 31510293 A Non-Parametric Based Mapping Algorithm for Use in Audio Fingerprinting
Authors: Analise Borg, Paul Micallef
Abstract:
Over the past few years, online multimedia collections have grown at a fast pace. Several companies have shown interest in studying different ways to organize this volume of audio information without the need for human intervention to generate metadata. In the past few years, many applications have emerged on the market which are capable of identifying a piece of music in a short time. Different audio effects and degradations make it much harder to identify an unknown piece. In this paper, an audio fingerprinting system which makes use of a non-parametric mapping algorithm is presented. Parametric analysis is also performed using Gaussian Mixture Models (GMMs). The feature extraction methods employed are the Mel spectrum coefficients and the MPEG-7 basic descriptors. Bin numbers replaced the extracted feature coefficients during the non-parametric modelling. The results show that the non-parametric analysis offers promising results comparable to those reported in the literature.Keywords: audio fingerprinting, mapping algorithm, Gaussian Mixture Models, MFCC, MPEG-7
Procedia PDF Downloads 421