Search results for: sport business model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19003

13543 Vehicular Emission Estimation of Islamabad Using the COPERT-5 Model

Authors: Muhammad Jahanzaib, Muhammad Z. A. Khan, Junaid Khayyam

Abstract:

Islamabad is the capital of Pakistan, with a population of 1.365 million people and a vehicular fleet size of 0.75 million. The vehicular fleet is growing at an annual rate of 11%. Vehicular emissions are a major source of black carbon (BC). In developing countries like Pakistan, most vehicles consume conventional fuels like petrol, diesel, and CNG. These fuels are the major emitters of pollutants like CO, CO2, NOx, CH4, VOCs, and particulate matter (PM10). Carbon dioxide and methane are the leading contributors to global warming, with global shares of 9-26% and 4-9%, respectively. NOx is the precursor of nitrates, which ultimately form aerosols that are noxious to human health. In this study, COPERT (COmputer Programme to calculate Emissions from Road Transport) was used for vehicular emission estimation in Islamabad. COPERT is a Windows-based program developed for the calculation of emissions from the road transport sector. The emissions calculated for the year 2016 include pollutants such as CO, NOx, VOC, and PM, as well as energy consumption. Different variables were input to the model for emission estimation, including meteorological parameters, average vehicular trip length and respective time duration, fleet configuration, activity data, degradation factor, and fuel effect. The estimated emissions of CO, CH4, CO2, NOx, and PM10 were found to be 9814.2, 44.9, 279196.7, 3744.2, and 304.5 tons, respectively.
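
For illustration, the bookkeeping behind such an inventory is sketched below: activity (vehicle-kilometres per category) multiplied by per-kilometre emission factors and summed over the fleet. The categories, fleet sizes, and emission factors are assumed placeholder values, not the actual COPERT-5 coefficients.

```python
# Sketch of a COPERT-style emission inventory. All numbers are
# illustrative placeholders, not COPERT-5 coefficients.

fleet = {
    # category: (number of vehicles, annual km per vehicle) -- assumed
    "petrol_car": (450_000, 12_000),
    "diesel_car": (150_000, 15_000),
    "cng_car":    (150_000, 13_000),
}

# Emission factors in g/km per pollutant -- assumed values.
emission_factors = {
    "petrol_car": {"CO": 2.1, "NOx": 0.6, "PM10": 0.03},
    "diesel_car": {"CO": 0.6, "NOx": 1.1, "PM10": 0.08},
    "cng_car":    {"CO": 0.9, "NOx": 0.4, "PM10": 0.01},
}

def total_emissions_tons(fleet, emission_factors):
    """Sum activity (vehicles * km) times emission factor over all
    categories, converting grams to metric tons."""
    totals = {}
    for category, (n_vehicles, km_per_vehicle) in fleet.items():
        activity_km = n_vehicles * km_per_vehicle
        for pollutant, ef in emission_factors[category].items():
            totals[pollutant] = totals.get(pollutant, 0.0) + activity_km * ef / 1e6
    return totals

print(total_emissions_tons(fleet, emission_factors))
```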

Keywords: COPERT Model, emission estimation, PM10, vehicular emission

Procedia PDF Downloads 247
13542 Experimental Assessment of Artificial Flavors Production

Authors: M. Unis, S. Turky, A. Elalem, A. Meshrghi

Abstract:

The esterification kinetics of acetic acid with isopropanol in the presence of sulfuric acid as a homogeneous catalyst was studied with isothermal batch experiments at 60, 70, and 80°C and at different molar ratios of isopropanol to acetic acid. Investigation of the reaction kinetics indicated that a low molar ratio favors the esterification reaction, since the reaction is catalyzed by acid. The maximum conversion, approximately 60.6%, was obtained at 80°C for a molar ratio of 1:3 acid:alcohol. It was found that increasing the reaction temperature increases the rate constant and conversion at a given mole ratio, as the esterification is exothermic. The homogeneous reaction has been described with a simple power-law model. The chemical equilibrium conversion calculated from the kinetic model is in agreement with the measured chemical equilibrium.
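
A minimal numerical sketch of such a power-law description: a reversible second-order rate law for the batch reaction, integrated isothermally. The rate constants and initial concentrations below are assumptions for illustration, not the values fitted in this study.

```python
# Power-law kinetics for batch esterification:
#   acetic acid (A) + isopropanol (B) <-> ester (E) + water (W)
#   r = kf*Ca*Cb - kr*Ce*Cw
# kf, kr, and the initial charge are assumed values, not fitted ones.
from scipy.integrate import solve_ivp

kf, kr = 1.2e-4, 4.0e-5    # L/(mol*s), assumed
Ca0, Cb0 = 2.0, 6.0        # mol/L, a 1:3 acid:alcohol charge

def rhs(t, c):
    ca, cb, ce, cw = c
    r = kf * ca * cb - kr * ce * cw
    return [-r, -r, r, r]

sol = solve_ivp(rhs, (0.0, 3.6e4), [Ca0, Cb0, 0.0, 0.0])  # 10 h batch
conversion = 1.0 - sol.y[0, -1] / Ca0
print(f"acid conversion after 10 h: {conversion:.1%}")
```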

Keywords: artificial flavors, esterification, chemical equilibria, isothermal

Procedia PDF Downloads 316
13541 Modelling of Groundwater Resources for Al-Najaf City, Iraq

Authors: Hayder H. Kareem, Shunqi Pan

Abstract:

Groundwater is a vital water resource in many areas of the world, particularly in the Middle East region, where water resources have become scarce and are depleting. Sustainable management and planning of groundwater resources have become essential and urgent given the impact of global climate change. In recent years, numerical models have been widely used to predict the flow pattern and assess water resource security, as well as groundwater quality affected by transported contaminants. In this study, MODFLOW is used to study the current status of groundwater resources and the risk to water resource security in the region centred at Al-Najaf City, which is located in the mid-west of Iraq, adjacent to the Euphrates River. A conceptual model is built using the geologic and hydrogeologic data collected for the region, together with Digital Elevation Model (DEM) data obtained from the Global Land Cover Facility (GLCF) and the United States Geological Survey (USGS) for the study area. The computational model also incorporates the distribution of 69 wells in the area, with steady, pre-defined hydraulic heads along its boundaries. The model is then applied with a recharge rate (from precipitation) of 7.55 mm/year, derived from the analysis of field data in the study area for the period 1980-2014. The hydraulic conductivity measured at the well locations is interpolated for model use. The model is calibrated with the hydraulic heads measured at 50 of the 69 wells in the domain, and the results show good agreement. The standard error of estimate (SEE), root-mean-square error (RMSE), normalized RMSE, and correlation coefficient are 0.297 m, 2.087 m, 6.899%, and 0.971, respectively. Sensitivity analysis is also carried out, and it is found that the model is sensitive to recharge, particularly when the rate is greater than 15 mm/year. Hydraulic conductivity is found to be another parameter that can affect the results significantly; therefore, it requires high-quality field data. The results show a general flow pattern from the west to the east of the study area, which agrees well with the observations and the gradient of the ground surface. It is found that with the current operational pumping rates of the wells in the area, a dry area results in Al-Najaf City due to the large quantity of groundwater withdrawn. The computed water balance with the current operational pumping quantity shows that the Euphrates River supplies approximately 11,759 m³/day to the groundwater, instead of gaining 11,178 m³/day from the groundwater as it would if there were no pumping from the wells. It is expected that the results obtained from this study can provide important information for the sustainable and effective planning and management of the regional groundwater resources for Al-Najaf City.
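
The calibration statistics quoted above can be reproduced from paired observed and simulated heads; a minimal sketch follows, with synthetic head values standing in for the field data.

```python
# Sketch of the calibration metrics (SEE, RMSE, normalized RMSE,
# correlation coefficient) for observed vs. simulated hydraulic heads.
# The head arrays are synthetic placeholders, not the Al-Najaf data.
import numpy as np

observed  = np.array([18.2, 17.5, 19.1, 16.8, 20.3, 18.9])  # m, placeholder
simulated = np.array([18.0, 17.9, 18.7, 17.1, 20.0, 19.2])  # m, placeholder

residuals = simulated - observed
rmse = np.sqrt(np.mean(residuals**2))
see = np.sqrt(np.sum(residuals**2) / (len(observed) - 2))   # residual spread
nrmse = rmse / (observed.max() - observed.min()) * 100.0    # percent
r = np.corrcoef(observed, simulated)[0, 1]

print(f"RMSE={rmse:.3f} m, SEE={see:.3f} m, NRMSE={nrmse:.2f}%, r={r:.3f}")
```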

Keywords: Al-Najaf city, conceptual modelling, groundwater, unconfined aquifer, visual MODFLOW

Procedia PDF Downloads 198
13540 Response Analysis of a Steel Reinforced Concrete High-Rise Building during the 2011 Tohoku Earthquake

Authors: Naohiro Nakamura, Takuya Kinoshita, Hiroshi Fukuyama

Abstract:

The 2011 off the Pacific Coast of Tohoku Earthquake caused considerable damage to wide areas of eastern Japan. A large number of earthquake observation records were obtained at various places. To design more earthquake-resistant buildings and improve earthquake disaster prevention, it is necessary to utilize these data to analyze and evaluate the behavior of a building during an earthquake. This paper presents an earthquake response simulation analysis (hereafter, a seismic response analysis) that was conducted using data recorded during the main earthquake (hereafter, the main shock) as well as the earthquakes before and after it. The data were obtained at a high-rise steel-reinforced concrete (SRC) building in the bay area of Tokyo. We first give an overview of the building, along with the characteristics of the earthquake motion and the building during the main shock. The data indicate that there was a change in the natural period before and after the earthquake. Next, we present the results of our seismic response analysis. First, the analysis model and conditions are shown; then, the analysis result is compared with the observational records. Using the analysis result, we then study the effect of soil-structure interaction on the response of the building. By identifying the characteristics of the building during the earthquake (i.e., the 1st natural period and the 1st damping ratio) with the Auto-Regressive eXogenous (ARX) model, we compare the analysis result with the observational records so as to evaluate the accuracy of the response analysis. In this study, a lumped-mass sway-rocking (SR) model was used to conduct a seismic response analysis using observational data as input waves. The main results of this study are as follows: 1) The observational records of the 3/11 main shock put it between a level 1 and level 2 earthquake. The result of the ground response analysis showed that the maximum shear strain in the ground was about 0.1% and that the possibility of liquefaction occurring was low. 2) During the 3/11 main shock, the observed wave showed that the eigenperiod of the building became longer; this behavior could be generally reproduced in the response analysis. This prolonged eigenperiod was due to the nonlinearity of the superstructure, and the effect of the nonlinearity of the ground seems to have been small. 3) As for the 4/11 aftershock, a continuous analysis was conducted in which the subject seismic wave was input after the 3/11 main shock. The analyzed values generally corresponded well with the observed values. This means that the effect of the nonlinearity of the main shock was retained by the building, which is important to consider when conducting the response evaluation. 4) The first period and the damping ratio during a vibration were evaluated by an ARX model. Our results show that the response analysis model in this study is generally good at estimating a change in the response of the building during a vibration.
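
A minimal sketch of the ARX identification step is given below, assuming a simple second-order, single-input structure and synthetic signals; the dominant natural period then follows from the poles of the fitted autoregressive part.

```python
# ARX identification by least squares:
#   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + e[k]
# The excitation and structural response below are synthetic.
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Stack lagged outputs/inputs into a regressor matrix and solve
    for the ARX coefficients in a least-squares sense."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        rows.append(np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]]))
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]

dt = 0.01
u = np.random.randn(2000)          # base excitation (white noise)
y = np.zeros(2000)
for k in range(2, 2000):           # toy lightly damped 2nd-order system
    y[k] = 1.96 * y[k - 1] - 0.965 * y[k - 2] + 1e-3 * u[k - 1]

a, _ = fit_arx(y, u)
pole = np.roots([1.0, -a[0], -a[1]])[0]
wn = np.abs(np.log(pole)) / dt     # continuous-time natural frequency
print(f"estimated 1st natural period: {2 * np.pi / wn:.2f} s")
```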

Keywords: ARX model, response analysis, SRC building, the 2011 off the Pacific Coast of Tohoku Earthquake

Procedia PDF Downloads 153
13539 Input Data Balancing in a Neural Network PM-10 Forecasting System

Authors: Suk-Hyun Yu, Heeyong Kwon

Abstract:

Recently, PM-10 has become a social and global issue. It is one of the major air pollutants affecting human health. Therefore, it needs to be forecast rapidly and precisely. However, PM-10 comes from various emission sources, and its concentration level depends largely on meteorological and geographical factors of the local and global regions, so forecasting PM-10 concentration is very difficult. A neural network model can be used in this case, but there are few cases of high-concentration PM-10, which makes the learning of the neural network model difficult. In this paper, we suggest a simple input balancing method for when the data distribution is uneven, based on the probability of appearance of the data. Experimental results show that the input balancing makes the neural network's learning easier and improves the forecasting rates.
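
A minimal sketch of the balancing idea, assuming illustrative concentration bins and synthetic data: each sample is replicated in inverse proportion to the appearance probability of its concentration bin, so rare high-concentration cases are seen more often during training.

```python
# Probability-of-appearance input balancing for skewed PM-10 data.
# Bin edges and the synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
pm10 = rng.gamma(shape=2.0, scale=25.0, size=5000)  # skewed toward low values

bins = np.array([0.0, 30.0, 80.0, 150.0, 1e9])      # concentration bins
bin_idx = np.digitize(pm10, bins) - 1
counts = np.bincount(bin_idx, minlength=len(bins) - 1)
appearance_prob = counts / counts.sum()

# Replicate each sample inversely to its bin's appearance probability.
weights = 1.0 / appearance_prob[bin_idx]
replication = np.maximum(1, np.round(weights / weights.min())).astype(int)
balanced = np.repeat(pm10, replication)

print("before:", counts)
print("after: ", np.bincount(np.digitize(balanced, bins) - 1,
                             minlength=len(bins) - 1))
```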

Keywords: artificial intelligence, air quality prediction, neural networks, pattern recognition, PM-10

Procedia PDF Downloads 219
13538 Modeling the Deterioration of Road Bridges at the Provincial Level in Laos

Authors: Hatthaphone Silimanotham, Michael Henry

Abstract:

The effective maintenance of road bridge infrastructure is becoming a widely researched topic in the civil engineering field. Deterioration is one of the main issues in bridge performance, and it is necessary to understand how bridges deteriorate to optimally plan budget allocation for bridge maintenance. In Laos, many bridges are in a deteriorated state, which may affect their performance. Due to bridge deterioration, the Ministry of Public Works and Transport is interested in deterioration models to allocate the budget efficiently and support bridge maintenance planning. A deterioration model can be used to predict the bridge condition in the future based on the behavior observed in the past. This paper analyzes the available inspection data of road bridges on the classified road network to build deterioration prediction models for the main bridge types found at the provincial level (concrete slab, concrete girder, and steel truss), using probabilistic deterioration modeling by the linear regression method. The analysis targets these three bridge types in the 18 provinces of Laos and estimates the bridge deterioration rating in order to evaluate each bridge's remaining life, considering the relationship between the service period and the bridge condition to represent the probability of the bridge's condition in the future. The results of the study can be used for a variety of bridge management tasks, including maintenance planning, budgeting, and evaluating bridge assets.
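
A minimal sketch of such a regression-based deterioration model follows, with invented inspection records for the three bridge types: condition rating is regressed on service period, and remaining life is read off the fitted line.

```python
# Linear-regression deterioration model per bridge type. The inspection
# records are invented placeholders, not the Laos inventory data.
import numpy as np

# (service period in years, condition rating 1 = good ... 5 = critical)
records = {
    "concrete_slab":   [(2, 1), (8, 2), (15, 2), (22, 3), (30, 4)],
    "concrete_girder": [(3, 1), (10, 2), (18, 3), (25, 3), (35, 5)],
    "steel_truss":     [(5, 1), (12, 2), (20, 3), (28, 4), (40, 5)],
}

for bridge_type, data in records.items():
    age, rating = map(np.array, zip(*data))
    slope, intercept = np.polyfit(age, rating, deg=1)
    # Remaining life: years until the fitted rating reaches the worst state.
    remaining = (5 - intercept) / slope - age.max()
    print(f"{bridge_type}: {slope:.3f} rating/yr, "
          f"~{remaining:.0f} yr to rating 5 from the newest inspection")
```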

Keywords: deterioration model, bridge condition, bridge management, probabilistic modeling

Procedia PDF Downloads 148
13537 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and greatly delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can most benefit from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model’s ability to generalize its outcomes. The performance of the model for the detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 160
13536 Impact of a Virtual Reality-Training on Real-World Hockey Skill: An Intervention Trial

Authors: Matthew Buns

Abstract:

Training specificity is imperative for the successful performance of the elite athlete. Virtual reality (VR) has been successfully applied to a broad range of training domains. However, to date there is little research investigating the use of VR for sport training. The purpose of this study was to address the question of whether VR training can improve real-world hockey shooting performance. Twenty-four volunteers were recruited and randomly assigned either to complete the virtual training intervention or to enter a control group with no training. Four primary types of data were collected: 1) participants' experience with video games and hockey, 2) participants' motivation toward video game use, 3) participants' technical performance in real-world hockey, and 4) participants' technical performance in virtual hockey. A one-way analysis of variance (ANOVA) indicated that the intervention group demonstrated significantly higher real-world hockey accuracy [F(1,24) = 15.43, p < .01, ES = 0.56] while shooting on goal than their control group counterparts [intervention M accuracy = 54.17%, SD = 12.38; control M accuracy = 46.76%, SD = 13.45]. A repeated-measures multivariate analysis of variance (MANOVA) indicated significantly higher outcome scores on real-world accuracy (35.42% versus 54.17%; ES = 1.52) and velocity (51.10 mph versus 65.50 mph; ES = 0.86) of hockey shooting on goal. This research supports the idea that virtual training is an effective tool for increasing real-world hockey skill.
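
The reported accuracy effect size can be roughly reproduced from the group means and standard deviations above; the sketch below uses Cohen's d with a simple pooled SD, which may differ slightly from the paper's exact pooling.

```python
# Cohen's d from the reported group statistics.
import math

def cohens_d(m1, sd1, m2, sd2):
    pooled_sd = math.sqrt((sd1**2 + sd2**2) / 2)
    return (m1 - m2) / pooled_sd

# Intervention vs. control accuracy while shooting on goal:
print(round(cohens_d(54.17, 12.38, 46.76, 13.45), 2))  # ~0.57, cf. ES = 0.56
```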

Keywords: virtual training, hockey skills, video game, esports

Procedia PDF Downloads 138
13535 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder

Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen

Abstract:

Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction; it is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on a discriminative Encoder-Decoder architecture have achieved noticeable results. However, we find that these discriminative generators usually generate explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model. In fact, the same logic information exists in both the premise-hypothesis pair and the explanation, and it is easy to extract the logic information that is explicitly contained in the target explanation. Hence, we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained with the guidance of explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called Logic Supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark, explanation-Stanford Natural Language Inference (e-SNLI), demonstrate that the proposed VariationalEG achieves significant improvement compared to previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.

Keywords: natural language inference, explanation generation, variational auto-encoder, generative model

Procedia PDF Downloads 134
13534 Static and Dynamic Hand Gesture Recognition Using Convolutional Neural Network Models

Authors: Keyi Wang

Abstract:

Similar to the touchscreen, hand-gesture-based human-computer interaction (HCI) is a technology that could allow people to perform a variety of tasks faster and more conveniently. This paper proposes a training method for an image-based hand gesture and video clip recognition system using CNNs (Convolutional Neural Networks). A dataset containing 6 hand gestures is used to train a 2D CNN model; ~98% accuracy is achieved. Furthermore, a 3D CNN model is trained on a dataset containing 4 hand gesture video clips, resulting in ~83% accuracy. It is demonstrated that a Cozmo robot loaded with the pre-trained models is able to recognize static and dynamic hand gestures.
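
A minimal sketch of a 2D CNN of the kind described, assuming 64x64 grayscale inputs and illustrative layer widths rather than the authors' exact architecture:

```python
# Small 2D CNN for 6-class static hand-gesture images. Input size and
# layer widths are assumptions, not the paper's exact model.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),          # grayscale gesture image
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(6, activation="softmax"),    # 6 gesture classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A 3D CNN for the video-clip gestures would follow the same pattern, with Conv3D/MaxPooling3D layers over (frames, height, width, channels) inputs.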

Keywords: deep learning, hand gesture recognition, computer vision, image processing

Procedia PDF Downloads 123
13533 Optimization of Hate Speech and Abusive Language Detection on Indonesian-Language Twitter Using Genetic Algorithms

Authors: Rikson Gultom

Abstract:

Hate speech and abusive language on social media are difficult to detect; usually they are detected only after becoming viral in cyberspace, by which point it is too late for prevention. An early detection system with fairly good accuracy is needed to reduce the conflicts that occur in society caused by postings on social media that attack individuals, groups, and governments in Indonesia. The purpose of this study is to find an early detection model for Twitter social media using machine learning that has the highest accuracy among several machine learning methods studied. In this study, the Support Vector Machine (SVM), Naïve Bayes (NB), and Random Forest Decision Tree (RFDT) methods were compared with the Support Vector Machine with genetic algorithm (SVM-GA), Naïve Bayes with genetic algorithm (NB-GA), and Random Forest Decision Tree with genetic algorithm (RFDT-GA). The study produced a comparison table for the accuracy of the hate speech and abusive language detection models, presented as a graph of the accuracy of the six algorithms developed on the Indonesian-language Twitter dataset, and concluded with the best model, i.e., the one with the highest accuracy.
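
A minimal sketch of the GA-optimized SVM idea: a small population of (C, gamma) candidates evolves by selection, crossover, and mutation, with cross-validated accuracy as the fitness. The synthetic features and GA settings are illustrative assumptions, not the study's configuration.

```python
# Genetic-algorithm tuning of SVM hyperparameters (log10 C, log10 gamma).
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=50, random_state=0)

def fitness(genes):
    c, g = genes
    return cross_val_score(SVC(C=10**c, gamma=10**g), X, y, cv=3).mean()

random.seed(0)
pop = [(random.uniform(-2, 3), random.uniform(-4, 0)) for _ in range(10)]
for _ in range(15):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]                            # selection
    children = []
    while len(children) < len(pop) - len(parents):
        (c1, g1), (c2, g2) = random.sample(parents, 2)
        child = [(c1 + c2) / 2, (g1 + g2) / 2]   # crossover
        if random.random() < 0.3:                # mutation
            child[random.randrange(2)] += random.gauss(0, 0.5)
        children.append(tuple(child))
    pop = parents + children

best = max(pop, key=fitness)
print(f"best C=10^{best[0]:.2f}, gamma=10^{best[1]:.2f}, "
      f"CV accuracy={fitness(best):.3f}")
```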

Keywords: abusive language, hate speech, machine learning, optimization, social media

Procedia PDF Downloads 115
13530 Migrational Entrepreneurship: Ethnography of a Journey That Changes Lives and the Territory

Authors: Francesca Alemanno

Abstract:

As a complex socio-spatial phenomenon, migration is a practice that also contains a strong imaginative component with respect to the place that, through displacement, a person wants to reach. Every migrant undertakes the journey with a mental image of the displacement about to be made, of its implications, and finally of the place or city in which they will, or would like to, land. Often, however, the imaginary built before departure does not fully correspond to the reality of arrival; this discrepancy, which can be more or less wide, plays an important role in the relationship established with the territory and, therefore, in the evolution of the city itself. In this sense, the clash between the imagined and the real is one of the factors that can make a migrant's entry into a new territory critical. Starting from this perspective, the experiences of people who come from a migratory context and who, over time, manage to create a bond with the receiving land are considered stories of resistance, as they are necessarily charged with a force capable of driving difficult and articulated processes of change. The phenomenon of migrant entrepreneurship considered here plays a very important role because it highlights the stories of the many people who have managed to build such a close bond with the new territory of arrival that they can imagine, and then realize, the construction of their own personal business. The margin of contrast between the imagined city and the one actually inhabited, together with the implications this may have on real life, has been observed and analyzed through a period of fieldwork, practicing ethnography, through the narratives of people who find themselves living in a new city as a result of a migration path, particularly those who, by realizing a business project, have acted directly on the reality in which they landed; it has been contextualized with the support of semi-structured interviews and field notes. At the theoretical level, the research is set within a constructionist framework, particularly suited to detecting and analyzing processes of change and the construction of the imaginary and its modification, capable of capturing the repercussions of this process on the conceptual, emotional, and practical levels.

Keywords: entrepreneurship, imagination, migration, resistance

Procedia PDF Downloads 138
13531 An Automated Procedure for Estimating the Glomerular Filtration Rate and Determining the Normality or Abnormality of the Kidney Stages Using an Artificial Neural Network

Authors: Hossain A., Chowdhury S. I.

Abstract:

Introduction: The use of a gamma camera is a standard procedure in nuclear medicine facilities and hospitals to diagnose chronic kidney disease (CKD), but the gamma camera does not precisely stage the disease. The authors sought to determine whether an artificial neural network (ANN) could be used to determine whether CKD is in a normal or abnormal stage based on GFR values. Method: The 250 kidney patients (188 for training, 62 for testing) who underwent an ultrasonography test for renal diagnosis in our nuclear medicine center were scanned using a gamma camera. Before the scanning procedure, the patients received an injection of ⁹⁹ᵐTc-DTPA. The gamma camera computes the pre- and post-syringe radioactive counts after the injection has been pushed into the patient's vein. The artificial neural network uses the softmax function with cross-entropy loss in the output layer to determine whether CKD is normal or abnormal based on the GFR value. Results: The proposed ANN model had 99.20% accuracy according to K-fold cross-validation. The sensitivity and specificity were 99.10% and 99.20%, respectively. The AUC was 0.994. Conclusion: The proposed model can distinguish between normal and abnormal stages of CKD using an artificial neural network. The gamma camera could be upgraded to diagnose normal or abnormal stages of CKD with an appropriate GFR value following the clinical application of the proposed model.
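
A minimal sketch of the output stage described, a softmax layer trained with cross-entropy loss by gradient descent; the GFR values and class structure below are synthetic assumptions.

```python
# Softmax + cross-entropy classifier on a single GFR input. Data are
# synthetic placeholders, not the 250-patient dataset.
import numpy as np

rng = np.random.default_rng(1)
gfr = np.concatenate([rng.normal(95, 15, 200),    # normal function
                      rng.normal(45, 15, 200)])   # reduced (abnormal)
labels = np.concatenate([np.zeros(200, int), np.ones(200, int)])

x = (gfr - gfr.mean()) / gfr.std()                # standardize input
W, b = np.zeros((1, 2)), np.zeros(2)

for _ in range(500):                              # gradient descent
    logits = x[:, None] @ W + b
    expz = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = expz / expz.sum(axis=1, keepdims=True)    # softmax
    grad = probs - np.eye(2)[labels]              # d(cross-entropy)/d(logits)
    W -= 0.1 * (x[:, None].T @ grad) / len(x)
    b -= 0.1 * grad.mean(axis=0)

accuracy = ((x[:, None] @ W + b).argmax(axis=1) == labels).mean()
print(f"training accuracy: {accuracy:.2%}")
```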

Keywords: artificial neural network, glomerular filtration rate, stages of the kidney, gamma camera

Procedia PDF Downloads 86
13530 CFD Simulation of Surge Wave Generated by Flow-Like Landslides

Authors: Liu-Chao Qiu

Abstract:

The damage caused by surge waves generated in water bodies by flow-like landslides can be very high in terms of human lives and economic losses. The complicated phenomena occurring in this highly unsteady process are difficult to model because three interacting phases (air, water, and sediment) are involved. The problem is therefore challenging, since the effects of the non-Newtonian fluid describing the rheology of the flow-like landslides, multi-phase flow, and the free surface have to be included in the simulation. In this work, the commercial computational fluid dynamics (CFD) package FLUENT is used to model the surge waves due to flow-like landslides. The comparison between the numerical results and experimental data reported in the literature confirms the accuracy of the method.

Keywords: flow-like landslide, surge wave, VOF, non-Newtonian fluids, multi-phase flows, free surface flow

Procedia PDF Downloads 407
13529 Synthesis and Characterization of New Polyesters Based on Diarylidene-1-Methyl-4-Piperidone

Authors: Tareg M. Elsunaki, Suleiman A. Arafa, Mohamed A. Abd-Alla

Abstract:

New, thermally stable polyesters containing a 1-methyl-4-piperidone moiety in the main chain have been synthesized. These polyesters were synthesized by the interfacial polycondensation technique from 3,5-bis(4-hydroxybenzylidene)-1-methyl-4-piperidone (I) and 3,5-bis(4-hydroxy-3-methoxybenzylidene)-1-methyl-4-piperidone (II) with terephthaloyl, isophthaloyl, 4,4'-diphenic, adipoyl, and sebacoyl dichlorides. The yield and the reduced viscosity values of the produced polyesters were found to be affected by the type of organic phase. In order to characterize these polymers, the necessary model compounds (A) and (B) were prepared from (I) and (II), respectively, and benzoyl chloride. The structures of monomers (I) and (II), the model compounds, and the resulting polyesters were confirmed by IR, elemental analysis, and 1H-NMR spectroscopy. Various characteristics of the resulting polymers, including solubility, thermal properties, viscosity, and X-ray analysis, were also studied.

Keywords: synthesis, characterization, new polyesters, chemistry

Procedia PDF Downloads 447
13528 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality

Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo

Abstract:

Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance, and giving them a scientific interpretation without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by technology, that pay attention to model creation rather than only the execution of mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision making, this work reports on a model eliciting activity (MEA). The intention was to apply the Model and Modeling Perspective to design an activity related to civil engineering that would be understandable to students while involving them in its solution. Furthermore, the activity should pose a decision-making challenge based on sample data, and the use of the computer should be considered. The activity was designed considering the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out over three school cycles in the Probability and Statistics class for civil engineering students at the University of Guadalajara. The analysis of the way in which the students sought to solve the activity was made using audio and video recordings, as well as the students' individual and team reports. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making). With the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, the students' resistance to moving from the linear to the probabilistic model; second, the difficulty of visualizing (inferring) the behavior of the population through the sample data; third, viewing the sample as an isolated event and not as part of a random process that must be viewed in the context of a probability distribution; and fourth, the difficulty of decision-making with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and recognizing that they do not change after an intervention, allows the interventions and the MEA to be modified so that the students may themselves identify erroneous solutions when carrying out the MEA. The MEA also showed itself to be democratic, since several students who had little participation and low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, for example, in plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the civil engineering students improved their probabilistic knowledge and understanding of fundamental concepts such as sample, population, and probability distribution.
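
As an illustration of the sample-based binomial reasoning the MEA targets, a minimal sketch follows (in Python rather than the RStudio environment used in class); all numbers are invented for illustration.

```python
# Is an observed defect count plausible under a claimed proportion?
from scipy.stats import binom

n = 50            # inspected sample size (assumed)
p_claimed = 0.10  # claimed defect proportion (assumed)
observed = 9      # defectives found in the sample (assumed)

# Probability of a result at least this extreme if the claim holds:
p_tail = binom.sf(observed - 1, n, p_claimed)   # P(X >= observed)
print(f"P(X >= {observed} | n={n}, p={p_claimed}) = {p_tail:.3f}")

# Expected count and a rough plausibility band under the claim:
mean, sd = binom.mean(n, p_claimed), binom.std(n, p_claimed)
print(f"expected {mean:.1f} +/- {2 * sd:.1f} defectives")
```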

Keywords: linear model, models and modeling, probability, randomness, sample

Procedia PDF Downloads 110
13527 Risking Injury: Exploring the Relationship between Risk Propensity and Injuries among an Australian Rules Football Team

Authors: Sarah A. Harris, Fleur L. McIntyre, Paola T. Chivers, Benjamin G. Piggott, Fiona H. Farringdon

Abstract:

Australian Rules Football (ARF) is an invasion-based, contact field sport with over one million participants. The contact nature of the game increases exposure to all injuries, including head trauma. Evidence suggests that both concussion and sub-concussive traumas such as head knocks may damage the brain, in particular the prefrontal cortex. The prefrontal cortex may not reach full maturity until a person is in their early twenties, with males taking longer to mature than females. Repeated trauma to the prefrontal cortex during maturation may lead to negative social, cognitive, and emotional effects. It is also during this period that males exhibit high levels of risk-taking behaviours. Risk propensity and the incidence of injury is an unexplored area of research: little research has considered whether the level of players' risk propensity in everyday life places them at an increased risk of injury. Hence, the current study investigated whether a relationship exists between risk propensity and self-reported injuries, including diagnosed concussion and head knocks, among male ARF players aged 18 to 31 years. Method: The study was conducted over 22 weeks with one West Australian Football League (WAFL) club during the 2015 competition. Pre-season risk propensity was measured using the 7-item self-report Risk Propensity Scale. Possible scores ranged from 9 to 63, with higher scores indicating higher risk propensity. Players reported their self-perceived injuries (concussion, head knocks, upper body, and lower body injuries) fortnightly using the WAFL Injury Report Survey (WIRS). A unique ID code was used to ensure player anonymity, and it also enabled the linkage of survey responses and injury data tracking over the season. A General Linear Model (GLM) was used to analyse whether there was a relationship between risk propensity score and the total number of injuries for each injury type. Results: Seventy-one players (N=71), with an age range of 18.40 to 30.48 years and a mean age of 21.92 years (±2.96 years), participated in the study. Four hundred and ninety-five (495) injuries were reported. The most frequently reported injury was head knocks, representing 39.19% of total reported injuries. The GLM identified a significant relationship between risk propensity and head knocks (F=4.17, p=.046). No other injury types were significantly related to risk propensity. Discussion: A positive relationship between risk propensity and head trauma in contact sports (specifically WAFL) was discovered. Assessing players' risk propensity may therefore identify those more at risk of head injuries, potentially leading to greater monitoring and education of these players throughout the season regarding self-identification of head knocks and symptoms that may indicate trauma to the brain. This is important because many players involved in WAFL are in their late teens or early twenties and hence may be at greater risk of negative outcomes if they experience repeated head trauma. Continued education and research into the risks associated with head injuries has the potential to improve player well-being.

Keywords: football, head injuries, injury identification, risk

Procedia PDF Downloads 321
13526 The Importance of Cultural Adaptation of B2C E-Services Design in Germany

Authors: Rasha Alhendawi

Abstract:

This research presents introductory ideas for the cultural adaptation of B2C E-Service design in Germany. Given the intense competition in E-Service development, many companies have realized the importance of understanding the emotional and cultural characteristics of their customers. Ignoring customers’ needs and requirements throughout the E-Service design can lead to faults, mistakes, and gaps. The notion of E-Service usability has now broadened: it is no longer only about developing high-quality E-Services, but also extends to customer satisfaction and making the service feel local to customers.

Keywords: human computer interaction (HCI), usability, cultural usability, E-Services, business-to-consumer (B2C)

Procedia PDF Downloads 424
13525 A Study on Determining Market Orientation, Innovation Orientation and Firm Performance

Authors: Emel Gelmez, Derya Özilhan

Abstract:

In this study, the relationship between market orientation, innovation orientation, and firm performance in hotel enterprises in Konya was examined. Research data were obtained by the survey method, and the research was conducted on enterprises operating in the tourism business in Konya. Hypotheses were tested in line with the main aim of the present study. According to the findings, there is a positive and significant relationship between each pair of parameters.

Keywords: firm performance, innovation, innovation orientation, market orientation

Procedia PDF Downloads 338
13524 Simulation of Stress in Graphite Anode of Lithium-Ion Battery: Intra and Inter-Particle

Authors: Wenxin Mei, Jinhua Sun, Qingsong Wang

Abstract:

The volume expansion of lithium-ion batteries is mainly induced by intercalation-induced stress within the negative electrode, resulting in capacity degradation and even battery failure. Stress generation due to lithium intercalation into graphite particles is investigated based on an electrochemical-mechanical model in this work. The two-dimensional model presented is fully coupled, inclusive of the impacts of intercalation-induced stress and stress-induced intercalation, to evaluate the lithium concentration, stress generation, and displacement within and between particles. The results show that the distributions of lithium concentration and stress exhibit an analogous pattern, which reflects the relation between lithium diffusion and stress. The inter-particle results indicate that the larger von Mises stress is displayed where the two particles are in contact with each other, and deformation at the edges of the particles is also observed, predicting fracture. Additionally, the maximum inter-particle stress at the end of lithium intercalation is nearly ten times the intra-particle stress, and the maximum inter-particle displacement is increased by 24% compared to the single-particle case. Finally, the effect of the graphite particle arrangement on inter-particle stress is studied. It is found that a tighter arrangement exhibits lower inter-particle stress. This work can provide guidance for predicting intra- and inter-particle stress in order to take measures to avoid cracking of the electrode material.

Keywords: electrochemical-mechanical model, graphite particle, lithium concentration, lithium ion battery, stress

Procedia PDF Downloads 175
13523 City Image of Rio de Janeiro as the Host City of the 2016 Olympic Games

Authors: Luciana Brandao Ferreira, Janaina de Moura Engracia Giraldi, Fabiana Gondim Mariutti, Marina Toledo de Arruda Lourencao

Abstract:

Developing countries, such as the BRICS (Brazil, Russia, India, China, and South Africa), are hosting sports mega-events to promote socio-economic development and image enhancement. Thus, this paper aims to verify the image of Rio de Janeiro, Brazil, as the host city of the 2016 Olympic Games, considering the main cognitive and affective image dimensions. The research design uses exploratory factor analysis to find the most important factors highlighted in the city image dimensions. The data were collected by structured questionnaires from a sample of international respondents (n=274) with high international travel experience. The results show that Rio's image as a sport mega-event host city has two main factors in each dimension: cognitive ('General Infrastructure'; 'Services and Attractions') and affective ('Positive Feelings'; 'Negative Feelings'). The most important factor in the cognitive dimension was 'Services and Attractions', which is more related to tourism activities. In the affective dimension, 'Positive Feelings' was the most important factor, a good result considering that Rio is a city in an emerging country with many unmet social demands.

Keywords: Rio de Janeiro, 2016 olympic games, host city image, cognitive image dimension, affective image dimension

Procedia PDF Downloads 133
13522 Determinants of Budget Performance in an Oil-Based Economy

Authors: Adeola Adenikinju, Olusanya E. Olubusoye, Lateef O. Akinpelu, Dilinna L. Nwobi

Abstract:

Since the enactment of the Fiscal Responsibility Act (2007), the Federal Government of Nigeria (FGN) has made public its fiscal budget and the subsequent implementation report. A critical review of these documents shows significant variations in the five macroeconomic variables that are inputs to each Presidential budget: oil production target (mbpd), oil price ($), foreign exchange rate (N/$), Gross Domestic Product growth rate (%), and inflation rate (%). This results in underperformance of the Federal budget's expected output in terms of oil and non-oil revenue aggregates. This paper evaluates, first, the existing variance between budgeted and actual figures; then the relationship and causality between the determinants of the Federal fiscal budget assumptions; and finally the determinants of the FGN's gross oil revenue. The paper employs descriptive statistics, the autoregressive distributed lag (ARDL) model, and a profit-oil probabilistic model to achieve these objectives. The ARDL model permits both static and dynamic effects of the independent variables on the dependent variable, unlike a static model that accounts for static or fixed effects only. It offers a technique for checking the existence of a long-run relationship between variables, unlike other tests of cointegration, such as the Engle-Granger and Johansen tests, which consider only non-stationary series that are integrated of the same order. Finally, even with a small sample size, the ARDL model is known to generate valid results. The results showed that there is a long-run relationship between oil revenue, as a proxy for budget performance, and its determinants: oil price, produced oil quantity, and foreign exchange rate. There is a short-run relationship between oil revenue and its determinants: oil price, produced oil quantity, and foreign exchange rate. There is a long-run relationship between non-oil revenue and its determinants: inflation rate, GDP growth rate, and foreign exchange rate. The Granger causality test results show that there is mono-directional causality between oil revenue and its determinants. The Federal budget assumptions explain only 68% of oil revenue and 62% of non-oil revenue. There is mono-directional causality between non-oil revenue and its determinants. The profit-oil model identifies production sharing contracts, joint ventures, and modified carrying arrangements as the greatest contributors to the FGN's gross oil revenue. This provides empirical justification for the macroeconomic variables selected for Federal budget design and performance evaluation. The research recommends that other variables, such as debt and money supply, be included in the Federal budget design to further explain Federal budget revenue performance.

Keywords: ARDL, budget performance, oil price, oil quantity, oil revenue

Procedia PDF Downloads 155
13521 Modeling of CREB Pathway Induced Gene Induction: From Stimulation to Repression

Authors: K. Julia Rose Mary, Victor Arokia Doss

Abstract:

Electrical and chemical stimulations up-regulate the phosphorylation of CREB, a transcription factor that induces its target gene production for memory consolidation and late long-term potentiation (L-LTP) in the CA1 region of the hippocampus. L-LTP requires complex interactions among second-messenger signaling cascade molecules such as cAMP, CAMKII, CAMKIV, MAPK, RSK, and PKA, all of which converge to phosphorylate CREB, which, along with CBP, induces the transcription of target genes involved in memory consolidation. A differential-equation-based model for L-LTP, representing the stimulus-mediated activation of downstream mediators, which confirms the steep, supralinear stimulus-response effects of activation and inhibition, was used. The model was extended to accommodate the inhibitory effect of the Inducible cAMP Early Repressor (ICER), the natural inducible CREB antagonist that represses CRE-mediated gene transcription involved in long-term plasticity for learning and memory. After verifying the sensitivity and robustness of the model, we simulated it with various empirical levels of repressor concentration to analyse their effect on gene induction. The model appears to predict the regulatory dynamics of repression on L-LTP and agrees with the experimental values. The flux data obtained in the simulations demonstrate various aspects of the equilibrium between gene induction and repression.
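
A toy sketch of the induction-repression balance follows; the equations, rate constants, and the form of the ICER term are invented for illustration and are not the paper's fitted model.

```python
# Toy ODE model: stimulus -> pCREB -> target gene, with ICER repressing
# CRE-mediated transcription. All parameters are illustrative.
from scipy.integrate import solve_ivp

def rhs(t, y, icer_level):
    pcreb, gene = y
    stim = 1.0 if t < 10 else 0.0                # brief stimulation pulse
    d_pcreb = stim - 0.5 * pcreb                 # phosphorylation/decay
    induction = pcreb / (1.0 + 5.0 * icer_level) # ICER represses CRE sites
    d_gene = induction - 0.2 * gene              # transcription vs. decay
    return [d_pcreb, d_gene]

for icer in (0.0, 0.5, 1.0):
    sol = solve_ivp(rhs, (0, 60), [0.0, 0.0], args=(icer,), max_step=0.5)
    print(f"ICER={icer}: peak gene product {sol.y[1].max():.2f}")
```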

Keywords: CREB, L-LTP, mathematical modeling, simulation

Procedia PDF Downloads 280
13520 Failure Load Investigations in Adhesively Bonded Single-Strap Joints of Dissimilar Materials Using Cohesive Zone Model

Authors: B. Paygozar, S.A. Dizaji

Abstract:

Adhesive bonding is a highly valued way of fastening mechanical parts in complex structures, where the joining of simple components is always needed. This method has several merits, such as uniform stress distribution, appropriate bonding strength, good fatigue performance, and lightness, thereby outweighing other sorts of bonding methods. This study investigates the failure load of adhesive single-strap joints with adherends of different sizes and materials. This kind of adhesive joint is very practical in different industries, especially when repairing existing joints or attaching substrates of dissimilar materials. In this research, experimentally validated numerical analyses carried out in a commercial finite element package, ABAQUS, are utilized to extract the failure loads of the joints based on the cohesive zone model. In addition, stress analyses of the substrates are performed in order to determine the effects of lowering the thickness of the substrates on the stress distribution inside them, so as to avoid designs suffering from necking or failure of the adherends. It was found that this method of bonding is feasible for joining dissimilar materials and can be utilized in a variety of applications. Moreover, the stress analyses indicated the minimum thickness of the adherends needed to avoid their failure.

Keywords: cohesive zone model, dissimilar materials, failure load, single strap joint

Procedia PDF Downloads 110
13519 Mathematical Modelling of Slag Formation in an Entrained-Flow Gasifier

Authors: Girts Zageris, Vadims Geza, Andris Jakovics

Abstract:

Gasification processes are of great interest due to their generation of renewable energy in the form of syngas from biodegradable waste. It is, therefore, important to study the factors that play a role in the efficiency of gasification and the longevity of the machines in which gasification takes place. This study focuses on the latter, aiming to optimize an entrained-flow gasifier by reducing slag formation on its walls to reduce maintenance costs. A CFD mathematical model for an entrained-flow gasifier is constructed: the model of an actual gasifier is rendered in 3D and appropriately meshed. Then, the turbulent gas flow in the gasifier is modeled with the realizable k-ε approach, taking devolatilization, combustion, and coal gasification into account. Various such simulations are conducted, obtaining results for different air inlet positions and tracking particles of varying sizes undergoing devolatilization and gasification. The model identifies potentially problematic zones where most particles collide with the gasifier walls, indicating risk regions where ash deposits are most likely to form. In conclusion, the effects of air inlet positioning and of the particle sizes allowed in the main gasifier tank on the formation of an ash layer are discussed, and possible solutions for decreasing the number of undesirable deposits are proposed. Additionally, an estimate is given of the impact of different factors, such as temperature, gas properties and content, and the different forces acting on the particles undergoing gasification.

Keywords: biomass particles, gasification, slag formation, turbulence k-ε modelling

Procedia PDF Downloads 271
13518 Application of Principal Component Analysis and Ordered Logit Model in Diabetic Kidney Disease Progression in People with Type 2 Diabetes

Authors: Mequanent Wale Mekonen, Edoardo Otranto, Angela Alibrandi

Abstract:

Diabetic kidney disease is one of the main microvascular complications caused by diabetes. Several clinical and biochemical variables are reported to be associated with diabetic kidney disease in people with type 2 diabetes. However, their interrelations could distort the effect estimation of these variables on the disease's progression. The objective of the study is to determine, through advanced statistical methods, how the biochemical and clinical variables in people with type 2 diabetes are interrelated with each other and what their effects on kidney disease progression are. First, principal component analysis was used to explore how the biochemical and clinical variables intercorrelate, which helped us reduce a set of correlated biochemical variables to a smaller number of uncorrelated variables. Then, ordered logit regression models (cumulative, stage, and adjacent) were employed to assess the effect of the biochemical and clinical variables on the ordinal response variable (progression of kidney function), considering the proportionality assumption for more robust effect estimation. This retrospective cross-sectional study retrieved data from a type 2 diabetes cohort at a polyclinic hospital of the University of Messina, Italy. The principal component analysis yielded three uncorrelated components: principal component 1, with negative loadings of glycosylated haemoglobin, glycemia, and creatinine; principal component 2, with negative loadings of total cholesterol and low-density lipoprotein; and principal component 3, with a negative loading of high-density lipoprotein and a positive loading of triglycerides. The ordered logit models (cumulative, stage, and adjacent) showed that the first component (glycosylated haemoglobin, glycemia, and creatinine) had a significant effect on the progression of kidney disease. For instance, the cumulative odds model indicated that the first principal component (a linear combination of glycosylated haemoglobin, glycemia, and creatinine) had a strong and significant effect on the progression of kidney disease, with an odds ratio of 0.423 (P value = 0.000). However, this effect was inconsistent across levels of kidney disease because the first principal component did not meet the proportionality assumption. To address the proportionality problem and provide robust effect estimates, alternative ordered logit models, such as the partial cumulative odds model, the partial adjacent-category model, and the partial continuation-ratio model, were used. These models suggested that clinical variables such as age, sex, body mass index, and medication (metformin), and biochemical variables such as glycosylated haemoglobin, glycemia, and creatinine, have a significant effect on the progression of kidney disease.
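
A minimal sketch of the two-stage pipeline (PCA, then a cumulative ordered logit) on synthetic data; statsmodels' OrderedModel is used here as one common implementation of the cumulative model.

```python
# PCA on correlated biochemical variables, then an ordered logit on a
# 3-level disease stage. Data are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 300
biochem = pd.DataFrame({
    "hba1c":      rng.normal(7, 1.5, n),
    "glycemia":   rng.normal(140, 30, n),
    "creatinine": rng.normal(1.2, 0.4, n),
    "total_chol": rng.normal(190, 35, n),
    "ldl":        rng.normal(110, 30, n),
})

components = PCA(n_components=3).fit_transform(
    StandardScaler().fit_transform(biochem))

# Ordinal outcome: stage worsens with the first component (toy rule).
stage = np.asarray(pd.cut(components[:, 0] + rng.normal(0, 1, n),
                          bins=3, labels=[0, 1, 2]).astype(int))

fit = OrderedModel(stage, components, distr="logit").fit(method="bfgs",
                                                         disp=False)
print(np.exp(fit.params[:3]))   # odds ratios for the three components
```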

Keywords: diabetic kidney disease, ordered logit model, principal component analysis, type 2 diabetes

Procedia PDF Downloads 20
13517 Computational Team Dynamics in Student New Product Development Teams

Authors: Shankaran Sitarama

Abstract:

Teamwork is an extremely effective pedagogical tool in engineering education. New Product Development (NPD) has been an effective strategy for companies to streamline and bring innovative products and solutions to customers. Thus, engineering curricula in many schools, some in collaboration with business schools, have brought NPD into the curriculum at the graduate level. Teamwork is invariably used during instruction, where students work in teams to come up with new products and solutions. A significant portion of the grade is placed on the semester-long teamwork so that it is taken seriously by students. As the students work in teams and go through this process to develop new product prototypes, their effectiveness and learning depend to a great extent on how they function as a team and go through the creative process, come together, and work towards the common goal. A core attribute of a successful NPD team is its creativity and innovation. The team needs to be creative as a group, generating a breadth of ideas and innovative solutions that solve or address the problem they are targeting and meet the users' needs. They also need to be very efficient in their teamwork as they work through the various stages of development of these ideas, resulting in a proof-of-concept (POC) implementation or a prototype of the product. The simultaneous requirement that teams be creative and at the same time converge and work together imposes different types of tensions on their interactions. These ideational, and sometimes relational, tensions and conflicts are inevitable. Effective teams have to deal with these team dynamics and manage them to be resilient and yet creative. This research paper provides a computational analysis of the teams' communication that is reflective of the team dynamics and, through a superimposition of latent semantic analysis and social network analysis, provides a computational methodology for arriving at patterns of visual interaction. These team interaction patterns have clear correlations to the team dynamics and provide insights into the functioning, and thus the effectiveness, of the teams. Twenty-three student NPD teams over two years of a course on managing NPD, with a blend of engineering and business school students, are considered, and the results are presented. The analysis is also correlated with the teams' detailed and tailored individual and group feedback and their self-reflection and evaluation questionnaires.
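
A minimal sketch of superimposing latent semantic analysis on a network view of team communication, with invented messages: truncated SVD supplies the semantic space, and networkx supplies the network measures.

```python
# LSA (truncated SVD over TF-IDF) + a similarity graph per team member.
# Messages are invented placeholders for a team's communication log.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
import networkx as nx

messages = {
    "alice": "prototype user needs sketch concept test",
    "bob":   "budget schedule milestone review",
    "carol": "user test feedback concept iterate",
    "dave":  "schedule review budget scope",
}

tfidf = TfidfVectorizer().fit_transform(messages.values())
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
sim = cosine_similarity(lsa)

# Connect members whose communication is semantically similar.
names = list(messages)
G = nx.Graph()
G.add_nodes_from(names)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if sim[i, j] > 0.5:
            G.add_edge(names[i], names[j], weight=float(sim[i, j]))

print(nx.degree_centrality(G))   # one simple interaction-pattern measure
```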

Keywords: team dynamics, social network analysis, team interaction patterns, new product development teamwork, NPD teams

Procedia PDF Downloads 97
13516 Dynamic Modeling of Energy Systems Adapted to Low Energy Buildings in Lebanon

Authors: Nadine Yehya, Chantal Maatouk

Abstract:

Low-energy buildings have been developed to achieve global climate commitments to reducing energy consumption. They comprise energy-efficient buildings, zero-energy buildings, positive-energy buildings, and passive houses. The reduced energy demands of low-energy buildings call for advanced building energy modeling that focuses on studying active building systems such as heating, cooling, and ventilation, the improvement of system performance, and the development of control systems. Modeling and building simulation have expanded to cover different modeling approaches, i.e., detailed physical models, dynamic empirical models, and hybrid approaches, which are adopted by various simulation tools. This paper uses DesignBuilder with the EnergyPlus simulation engine in order to, first, study the impact of efficiency measures on building energy behavior by comparing a low-energy residential model to a conventional one in Beirut, Lebanon; second, choose the appropriate energy systems for the studied case, which is characterized by an important cooling demand; and third, study the dynamic modeling of the Variable Refrigerant Flow (VRF) system in EnergyPlus, which is chosen due to its advantages over other systems and its availability in the Lebanese market. Finally, simulating different energy system models with different modeling approaches is necessary to compare the modeling approaches and to investigate the interaction between the energy systems and the building envelope that affects the total energy consumption of low-energy buildings.

Keywords: physical model, variable refrigerant flow heat pump, dynamic modeling, EnergyPlus, the modeling approach

Procedia PDF Downloads 208
13515 Using Machine Learning to Classify Human Fetal Health and Analyze Feature Importance

Authors: Yash Bingi, Yiqiao Yin

Abstract:

Reduction of child mortality is an ongoing struggle and a commonly used factor in determining progress in the medical field. The under-5 mortality number is around 5 million worldwide, with many of the deaths being preventable. In light of this issue, cardiotocograms (CTGs) have emerged as a leading tool to determine fetal health. By using ultrasound pulses and reading the responses, CTGs help healthcare professionals assess the overall health of the fetus and determine the risk of child mortality. However, interpreting the results of CTGs is time-consuming and inefficient, especially in underdeveloped areas where an expert obstetrician is hard to come by. Using a support vector machine (SVM) and oversampling, this paper proposes a model that classifies fetal health with an accuracy of 99.59%. To further explain the CTG measurements, an algorithm based on Randomized Input Sampling for Explanation of Black-box Models (RISE) was created, called Feature Alteration for explanation of Black Box Models (FAB), and the findings were compared to Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). This allows doctors and medical professionals to classify fetal health with high accuracy and determine which features were most influential in the process.
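
A minimal sketch of the oversample-then-SVM pipeline, on a synthetic imbalanced stand-in for the CTG data; the oversampling here is plain resampling with replacement, one simple choice among several.

```python
# Oversample minority classes in the training split, then fit an SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.utils import resample

# Imbalanced 3-class stand-in (normal / suspect / pathological).
X, y = make_classification(n_samples=2000, n_features=21, n_informative=10,
                           n_classes=3, weights=[0.78, 0.14, 0.08],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

majority = max(np.bincount(y_tr))
parts = [resample(X_tr[y_tr == c], y_tr[y_tr == c], replace=True,
                  n_samples=majority, random_state=0)
         for c in np.unique(y_tr)]
X_bal = np.vstack([p[0] for p in parts])
y_bal = np.concatenate([p[1] for p in parts])

clf = SVC(kernel="rbf", C=10).fit(X_bal, y_bal)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```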

Keywords: machine learning, fetal health, gradient boosting, support vector machine, Shapley values, local interpretable model agnostic explanations

Procedia PDF Downloads 131
13514 Hardware Co-Simulation Based Direct Torque Control for Induction Motor Drive

Authors: Hanan Mikhael Dawood, Haider Salim, Jafar Al-Wash

Abstract:

This paper presents a Proportional-Integral (PI) controller to improve system performance, giving better torque and flux response. In addition, it reduces the undesirable torque ripple. The conventional DTC controller approach for induction machines, based on an improved torque and stator flux estimator, is implemented using Xilinx System Generator (XSG) for the MATLAB/Simulink environment through Xilinx blocksets. The design was achieved in VHDL, based on a MATLAB/Simulink simulation model. The hardware-in-the-loop results are obtained by implementing the proposed model on the Xilinx NEXYS2 Spartan-3E 1200 FG320 kit.
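
A minimal sketch of a discrete PI loop acting on torque error, with illustrative gains and a toy first-order torque response; this stands in for the behavior of the controller, not the VHDL/FPGA implementation itself.

```python
# Discrete PI controller on torque error with a toy first-order plant.
# Gains, period, and the plant time constant are assumed values.
kp, ki = 2.0, 40.0
dt = 1e-4                  # control period (s), assumed
tau = 0.01                 # toy torque time constant (s), assumed
torque_ref, torque = 10.0, 0.0
integral = 0.0

for step in range(2000):
    error = torque_ref - torque
    integral += error * dt
    u = kp * error + ki * integral        # PI control law
    torque += dt * (u - torque) / tau     # first-order torque dynamics

print(f"torque after {2000 * dt * 1e3:.0f} ms: {torque:.2f} N*m")
```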

Keywords: induction motor, Direct Torque Control (DTC), Xilinx FPGA, motor drive

Procedia PDF Downloads 606