Search results for: progressive sports operational model
13185 The Role of Personality Characteristics and Psychological Harassment Behaviors to Which Employees Are Exposed on Work Alienation
Authors: Hasan Serdar Öge, Esra Çiftçi, Kazım Karaboğa
Abstract:
The main purpose of the research is to address the role that psychological harassment behaviors (mobbing) to which employees are exposed, and personality characteristics, play in work alienation. The research population was composed of the employees of the Provincial Special Administration. A survey with four sections was created to measure the variables and reach the basic goals of the research. Correlation and step-wise regression analyses were performed to investigate the separate and overall effects of the sub-dimensions of psychological harassment behaviors and personality characteristics on the work alienation of employees. Correlation analysis revealed significant but weak relationships between work alienation and psychological harassment and personality characteristics. Step-wise regression analysis also revealed significant relationships between the work alienation variable and assault to personality, direct negative behaviors (sub-dimensions of mobbing) and openness (a sub-dimension of personality characteristics). Each variable was introduced into the model step by step to investigate the effects of the significant variables in explaining the variation in work alienation. While the explanation ratio of the first model was 13%, the last model, including three variables, had an explanation ratio of 24%.
Keywords: alienation, five-factor personality characteristics, mobbing, psychological harassment, work alienation
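To illustrate the step-wise procedure the abstract describes, here is a minimal forward-selection sketch in Python; the data, predictor names, and effect sizes are hypothetical stand-ins, not the study's survey measurements.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data: work alienation vs. three hypothetical predictors.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
y = 0.5 * X[:, 0] + 0.4 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(size=n)

def forward_stepwise(X, y, names):
    """Add one predictor per step; report R-squared after each addition."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        # Pick the candidate that maximizes R-squared when added.
        best = max(remaining, key=lambda j: sm.OLS(
            y, sm.add_constant(X[:, selected + [j]])).fit().rsquared)
        selected.append(best)
        remaining.remove(best)
        r2 = sm.OLS(y, sm.add_constant(X[:, selected])).fit().rsquared
        print(f"step {len(selected)}: + {names[best]}, R^2 = {r2:.2f}")

forward_stepwise(X, y, ["assault_to_personality", "direct_negative", "openness"])
```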
Procedia PDF Downloads 405
13184 Risk, Capital Buffers, and Bank Lending: The Adjustment of Euro Area Banks
Authors: Laurent Maurin, Mervi Toivanen
Abstract:
This paper estimates euro area banks’ internal target capital ratios and investigates whether banks’ adjustment to the targets has an impact on credit supply and holdings of securities during the financial crisis of 2005-2011. Using data on listed banks and country-specific macro-variables, a partial adjustment model is estimated in a panel context. The results indicate, firstly, that an increase in the riskiness of banks’ balance sheets has a positive influence on the target capital ratios. Secondly, the adjustment towards higher equilibrium capital ratios has a significant impact on banks’ assets. The impact is found to be more sizeable on security holdings than on loans, thereby suggesting a pecking order.
Keywords: euro area, capital ratios, credit supply, partial adjustment model
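A minimal sketch of the partial adjustment mechanism described above, with simulated data; the adjustment speed, the risk proxy, and the target equation are illustrative assumptions, not the paper's panel specification.

```python
import numpy as np
import statsmodels.api as sm

# Partial adjustment: k_t - k_{t-1} = lam * (k_star_t - k_{t-1}) + e_t,
# with the unobserved target k_star driven by balance-sheet risk.
rng = np.random.default_rng(14)
T, lam = 200, 0.3
risk = rng.normal(size=T)                 # risk-weighted-assets proxy
k_star = 8.0 + 0.5 * risk                 # target capital ratio (assumed)
k = np.empty(T)
k[0] = 8.0
for t in range(1, T):
    k[t] = k[t - 1] + lam * (k_star[t] - k[t - 1]) + rng.normal(0, 0.1)

# Estimate: k_t = (1-lam)*k_{t-1} + lam*b0 + lam*b1*risk_t + e_t.
X = sm.add_constant(np.column_stack([k[:-1], risk[1:]]))
fit = sm.OLS(k[1:], X).fit()
lam_hat = 1 - fit.params[1]
print(f"adjustment speed ~ {lam_hat:.2f}, risk effect on target ~ {fit.params[2] / lam_hat:.2f}")
```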
Procedia PDF Downloads 448
13183 Using Machine Learning as an Alternative for Predicting Exchange Rates
Authors: Pedro Paulo Galindo Francisco, Eli Dhadad Junior
Abstract:
This study addresses the Meese-Rogoff puzzle by introducing the latest machine learning techniques as alternatives for predicting exchange rates. Using RMSE as a comparison metric, Meese and Rogoff discovered that economic models are unable to outperform the random walk model as short-term exchange rate predictors. Decades after this study, no statistical prediction technique has proven effective in overcoming this obstacle; although there have been positive results, they did not apply to all currencies and defined periods. Recent advancements in artificial intelligence technologies have paved the way for a new approach to exchange rate prediction. Leveraging this technology, we applied five machine learning techniques in an attempt to overcome the Meese-Rogoff puzzle. We considered daily data for the real, yen, British pound, euro, and Chinese yuan against the US dollar over a time horizon from 2010 to 2023. Our results showed that none of the presented techniques was able to produce an RMSE lower than the random walk model. However, some models, particularly LSTM and N-BEATS, were able to outperform the ARIMA model. The results also suggest that machine learning models have untapped potential and could represent an effective long-term possibility for overcoming the Meese-Rogoff puzzle.
Keywords: exchange rate, prediction, machine learning, deep learning
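The random-walk benchmark comparison at the heart of the Meese-Rogoff puzzle can be sketched as follows; the series is synthetic, and the simple drift forecaster merely stands in for a trained LSTM or N-BEATS model.

```python
import numpy as np

def rmse(pred, actual):
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(actual)) ** 2))

# Synthetic daily exchange-rate series (stand-in for real data).
rng = np.random.default_rng(1)
rate = 5.0 + np.cumsum(rng.normal(0, 0.02, size=500))
train, test = rate[:400], rate[400:]

# Random walk benchmark: tomorrow's forecast is today's observed rate.
rw_forecast = rate[399:-1]

# Any candidate model's one-step forecasts are scored the same way;
# here a naive drift model stands in for a machine learning forecaster.
drift = (train[-1] - train[0]) / (len(train) - 1)
model_forecast = rate[399:-1] + drift

print("RMSE random walk:", rmse(rw_forecast, test))
print("RMSE drift model:", rmse(model_forecast, test))
```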
Procedia PDF Downloads 32
13182 Modeling the Moment of Resistance Generated by an Ore-Grinding Mill
Authors: Marinka Baghdasaryan, Tigran Mnoyan
Abstract:
The pertinence of modeling the moment of resistance generated by an ore-grinding mill is substantiated. Based on the ranking of technological indices obtained from a survey among specialists at several beneficiating plants, the factors determining the level of the moment of resistance generated by the mill are revealed. An a priori diagram of the ranks is obtained, in which the factors are arranged in descending order of their degree of impact on the level of the moment. The obtained model of the moment of resistance reflects the technological character of the operating modes of the ore-grinding mill and can be used for improving the operating modes of the motor-mill system and preventing abnormal modes of the drive synchronous motor.
Keywords: model, abnormal mode, mill, correlation, moment of resistance, rotational speed
Procedia PDF Downloads 452
13181 Effectiveness of Adopting Software Quality Frameworks in Software Organizations: A Qualitative Review
Authors: Sarah K. Amer, Nagwa Badr, Osman Ibrahim, Ahmed Hamad
Abstract:
This paper surveys the effectiveness of software process quality assurance frameworks, with some focus on Capability Maturity Model Integration (CMMI) - a framework that has become widely adopted in software organizations. The importance of quality improvement in software development, and the differences in the outcomes of quality framework implementation between Middle Eastern and North African (MENA-region) countries and non-MENA-region countries, are discussed. The greatest challenges met in the MENA region are identified, with particular focus on Egypt and its rising software development industry.
Keywords: software quality, software process improvement, software development methodologies, capability maturity model integration
Procedia PDF Downloads 354
13180 Air Pollution and Respiratory-Related Restricted Activity Days in Tunisia
Authors: Mokhtar Kouki, Inès Rekik
Abstract:
This paper focuses on the assessment of the relationship between air pollution and morbidity in Tunisia. Air pollution is measured by the ambient ozone concentration, and morbidity is measured by the number of respiratory-related restricted activity days during the 2-week period prior to the interview. Socioeconomic data are also collected in order to adjust for any confounding covariates. Our sample is composed of 407 Tunisian respondents; 44.7% are women, the average age is 35.2, nearly 69% live in a house built after 1980, and 27.8% reported at least one day of respiratory-related restricted activity. The model consists of regressing the number of respiratory-related restricted activity days on the air quality measure and the socioeconomic covariates. In order to correct for zero-inflation and heterogeneity, we estimate several models (Poisson, negative binomial, zero-inflated Poisson, Poisson hurdle, negative binomial hurdle and finite mixture Poisson models). Bootstrapping and post-stratification techniques are used in order to correct for any sample bias. According to the Akaike information criterion, the hurdle negative binomial model has the greatest goodness of fit. The main result indicates that, after adjusting for socioeconomic data, the ozone concentration increases the probability of a positive number of restricted activity days.
Keywords: bootstrapping, hurdle negbin model, overdispersion, ozone concentration, respiratory-related restricted activity days
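A minimal sketch of comparing count models by AIC, as the abstract describes; the data are simulated with excess zeros, and only the Poisson and negative binomial fits are shown (the hurdle and finite-mixture variants the authors estimate would be compared the same way).

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in: restricted-activity-day counts with excess zeros,
# regressed on an ozone measure and one socioeconomic covariate.
rng = np.random.default_rng(2)
n = 407
ozone = rng.normal(50, 10, n)
age = rng.normal(35, 12, n)
lam = np.exp(-4 + 0.05 * ozone)               # mean increases with ozone
y = rng.poisson(lam) * (rng.random(n) > 0.5)  # extra zeros -> overdispersion

X = sm.add_constant(np.column_stack([ozone, age]))
poisson_fit = sm.Poisson(y, X).fit(disp=False)
negbin_fit = sm.NegativeBinomial(y, X).fit(disp=False)

# Lower AIC indicates better fit.
print("Poisson AIC:", poisson_fit.aic)
print("NegBin  AIC:", negbin_fit.aic)
print("ozone coefficient (NegBin):", negbin_fit.params[1])
```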
Procedia PDF Downloads 257
13179 Study on the Relationship between the Emission Property of Barium-Tungsten Cathode and Micro-Area Activity
Authors: Zhen Qin, Yufei Peng, Jianbei Li, Jidong Long
Abstract:
In order to study the activity of coated aluminate barium-tungsten cathodes during activation, aging, poisoning and long-term use, we tested the micro-area emission performance of the cathode under different conditions using a hot-cathode micro-area emission uniformity study device. The change in the activity of the cathode micro-areas was obtained. The influence of micro-area activity on the performance of the cathode was explained by the ageing model of the barium-tungsten cathode. This helps to improve the design and processing of the cathode and can point the way toward identifying the factors that affect cathode life during operation.
Keywords: barium-tungsten cathode, ageing model, micro-area emission, emission uniformity
Procedia PDF Downloads 409
13178 Development and Modeling of the Process of Narrow-Seam Laser Welding of Ni-Superalloy in a Hard-to-Reach Place
Authors: Vladimir Isakov, Evgeniy Rykov, Lubov Magerramova, Nikolay Emmaussky
Abstract:
For the manufacture of critical hollow products, a narrow-seam laser welding scheme based on delivering the laser beam into the inner cavity has been developed. The report presents the results of comprehensive studies aimed at creating a sealed weld that follows the geometric shape of the inner cavity using a rotary mirror. Laser welding of hard-to-reach places requires preliminary modeling of the process to identify defect-free modes performed at the highest possible welding speed. Optimization of the technological modes was performed for a welded joint with a seam width-to-depth ratio of 1/5 in Ni superalloy 6.0 mm thick, using the Verhulst limited-growth model in a discrete representation. This mathematical model, in the form of a recurrence relation, made it possible to numerically investigate the entire variety of laser melting modes: chaotic, self-oscillating, stationary and attenuated. The control parameter and the order parameter, to which the other variables of the laser welding technological system are subordinated, were established. The coefficient of relative heat capacity of the melt bath was used as the control parameter, characterizing the competition between the heat input by the laser and the heat sink into the surrounding metal. The order parameter of the narrow-seam laser welding process, in this interpretation, is the dimensionless penetration depth, which is the argument of the desired logistic equation. Experimental studies of narrow-seam welding were performed using a water-cooled copper mirror and radiation from a high-power fiber laser. The obtained results were used to validate the evolutionary mathematical model of the laser welding process.
Keywords: laser welding, internal cavity, limited growth model, Ni-superalloy
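The discrete Verhulst (logistic) recurrence the authors use can be sketched as below; the mapping of melt-pool depth onto the logistic variable and the parameter values are illustrative assumptions, but the parameter sweep reproduces the named mode families.

```python
import numpy as np

def verhulst(r, x0=0.2, n_steps=200):
    """Discrete Verhulst (logistic) recurrence x_{k+1} = r * x_k * (1 - x_k)."""
    x = np.empty(n_steps)
    x[0] = x0
    for k in range(n_steps - 1):
        x[k + 1] = r * x[k] * (1 - x[k])
    return x

# Sweeping the control parameter reproduces the mode families named in
# the abstract: stationary, self-oscillating, and chaotic responses.
for r, regime in [(2.8, "stationary (fixed point)"),
                  (3.2, "self-oscillating (period-2)"),
                  (3.9, "chaotic")]:
    tail = verhulst(r)[-4:]
    print(f"r = {r}: {regime}, tail of orbit = {np.round(tail, 3)}")
```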
Procedia PDF Downloads 3
13177 Effects of Cash Transfers in Mitigating the Impacts of Socioeconomic External Shocks: Evidence from Egypt
Authors: Basma Yassa
Abstract:
Evidence on cash transfers’ effectiveness in mitigating the impacts of macro and idiosyncratic shocks has been mixed and is mostly concentrated in Latin America, Sub-Saharan Africa, and South Asia, with very limited evidence from the MENA region. Yet conditional cash transfer schemes have been continually used, especially in Egypt, as the main social protection tool in response to the recent socioeconomic crises and macro shocks. We use two panel datasets and one cross-sectional dataset to estimate the effectiveness of cash transfers as a shock-mitigation mechanism in the Egyptian context. In this paper, the results from the different models (a panel fixed effects model and a regression discontinuity design (RDD) model) confirm that micro and macro shocks lead to a significant decline in several household-level welfare outcomes and that Takaful cash transfers have a significant positive impact in mitigating the negative shock impacts, especially on households’ debt incidence, debt levels, and asset ownership, but not necessarily on food and non-food expenditure levels. The results indicate large positive significant effects, decreasing the household incidence of debt by up to 12.4 percent and lowering debt size by approximately 18 percent among Takaful beneficiaries compared to non-beneficiaries. Similar evidence is found on asset ownership levels, as the RDD model shows significant positive effects on total asset ownership and productive asset ownership, but the model failed to detect positive impacts on per capita food and non-food expenditures. Further extensions are still in progress to compare the models’ results with the DID model results when using the nationally representative ELMPS panel data rounds (2018/2024). Finally, our initial analysis suggests that conditional cash transfers are effective in buffering the negative shock impacts on certain welfare indicators, even after the successive macroeconomic shocks of 2022 and 2023 in the Egyptian context.
Keywords: cash transfers, fixed effects, household welfare, household debt, micro shocks, regression discontinuity design
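A minimal sharp-RDD sketch of the kind of estimate reported; the running variable, cutoff, bandwidth, and effect size are hypothetical, not the Takaful program's actual eligibility rule or data.

```python
import numpy as np
import statsmodels.api as sm

# Sharp RDD: treatment (cash transfer receipt) switches at a cutoff of a
# running variable (e.g., a proxy-means test score); names are illustrative.
rng = np.random.default_rng(3)
n = 2000
score = rng.uniform(-1, 1, n)            # running variable, cutoff at 0
treated = (score < 0).astype(float)      # households below cutoff receive cash
debt = 1.0 + 0.3 * score - 0.12 * treated + rng.normal(0, 0.2, n)

# Local linear regression within a bandwidth around the cutoff, with
# separate slopes on each side; the 'treated' coefficient is the effect.
h = 0.5
mask = np.abs(score) < h
X = np.column_stack([treated, score, treated * score])[mask]
fit = sm.OLS(debt[mask], sm.add_constant(X)).fit()
print("estimated effect on debt outcome:", fit.params[1])
```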
Procedia PDF Downloads 47
13176 Bonding Characteristics Between FRP and Concrete Substrates
Authors: Houssam A. Toutanji, Meng Han
Abstract:
This study focuses on the development of a fracture-mechanics-based model that predicts the debonding behavior of FRP-strengthened RC beams. In this study, a database of 351 concrete prisms bonded with FRP plates, tested in single and double shear, was prepared. The existing fracture-mechanics-based models are applied to this database. Unfortunately, the properties of the adhesive layer, especially a soft adhesive layer, used on the specimens in the existing studies could not always be found. Thus, the proposed new model was based on fifteen newly conducted pullout tests and twenty-four data points selected from two independent existing studies in which soft adhesive layers were applied and the adhesive properties were available.
Keywords: carbon fiber composite materials, interface response, fracture characteristics, maximum shear stress, ultimate transferable load
Procedia PDF Downloads 269
13175 Development and Validation of an Instrument Measuring Coping Strategies in Situations of Stress
Authors: Lucie Côté, Martin Lauzier, Guy Beauchamp, France Guertin
Abstract:
Stress causes deleterious effects at the physical, psychological and organizational levels, which highlights the need to use effective coping strategies to deal with it. Several coping models exist, but they do not integrate the different strategies in a coherent way, nor do they take into account the new research on emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from a review of the scientific literature on coping, from a qualitative study carried out among workers with low or high levels of stress, and from an analysis of clinical cases. The model allows one to understand under what circumstances the strategies are effective or ineffective and to learn how one might use them more wisely. It includes specific strategies for controllable situations (Modification of the Situation and Resignation-Disempowerment), specific strategies for non-controllable situations (Acceptance and Stubborn Relentlessness), as well as so-called general strategies (Wellbeing and Avoidance). This study undertakes and presents the process of development and validation of an instrument to measure coping strategies based on this model. An initial pool of items was generated from the conceptual definitions, and three expert judges validated the content. Of these, 18 items were selected for a short-form questionnaire. A sample of 300 students and employees from a Quebec university was used for the validation of the questionnaire. Concerning the reliability of the instrument, the indices observed for inter-rater agreement (Krippendorff’s alpha) and the coefficients of internal consistency (Cronbach's alpha) are satisfactory. To evaluate construct validity, a confirmatory factor analysis using Mplus supports the existence of a model with six factors. The results of this analysis also suggest that this configuration is superior to other alternative models. The correlations show that the factors are only loosely related to each other. Overall, the analyses carried out suggest that the instrument has good psychometric qualities and demonstrate the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess coping strategies for dealing with stress and thus prevent mental health issues.
Keywords: acceptance, coping strategies, stress, validation process
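The internal-consistency check mentioned above (Cronbach's alpha) can be computed directly from an item-response matrix; the item data below are simulated, not the instrument's actual responses.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three hypothetical items of one coping subscale, 300 respondents.
rng = np.random.default_rng(4)
trait = rng.normal(size=300)
items = np.column_stack([trait + rng.normal(0, 0.8, 300) for _ in range(3)])
print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
```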
Procedia PDF Downloads 339
13174 Use of a Symptom Scale Based on Degree of Functional Impairment for Acute Concussion
Authors: Matthew T. McCarthy, Sarah Janse, Natalie M. Pizzimenti, Anthony K. Savino, Brian Crosser, Sean C. Rose
Abstract:
Concussion is diagnosed clinically using a comprehensive history and exam, supported by ancillary testing. Frequently, symptom checklists are used as part of the evaluation of concussion. Existing symptom scales are based on a subjective Likert scale, without relation of symptoms to clinical or functional impairment. This is a retrospective review of 133 patients under age 30 seen in an outpatient neurology practice within 30 days of a probable or definite concussion. Each patient completed 2 symptom checklists at the initial visit – the SCAT-3 symptom evaluation (22 symptoms, 0-6 scale) and a scale based on the degree of clinical impairment for each symptom (22 symptoms, 0-3 scale related to functional impact of the symptom). Final clearance date was determined by the treating physician. 60.9% of patients were male with mean age 15.7 years (SD 2.3). Mean time from concussion to first visit was 6.9 days (SD 6.2), and 101 patients had definite concussions (75.9%), while 32 were diagnosed as probable (24.1%). 94 patients had a known clearance date (70.7%) with mean clearance time of 20.6 days (SD 18.6) and median clearance time of 19 days (95% CI 16-21). Mean total symptom score was 27.2 (SD 22.9) on the SCAT-3 and 14.7 (SD 11.9) for the functional impairment scale. Pearson’s correlation between the two scales was 0.98 (p < 0.001). After adjusting for patient and injury characteristics, an equivalent increase in score on each scale was associated with longer time to clearance (SCAT-3 hazard ratio 0.885, 95% CI 0.835-0.938, p < 0.001; functional impairment scale hazard ratio 0.851, 95% CI 0.802-0.902, p < 0.001). A concussion symptom scale based on degree of functional impairment correlates strongly with the SCAT-3 scale and demonstrates a similar association with time to clearance. By assessing the degree of impact on clinical functioning, this symptom scale reflects a more intuitive approach to rating symptoms and can be used in the management of concussion.
Keywords: checklist, concussion, neurology, scale, sports, symptoms
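The hazard ratios reported suggest a survival-type model of time to clearance; a minimal sketch using the third-party lifelines package follows, with simulated patients rather than the study cohort.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative cohort: symptom score at first visit vs. days to clearance.
rng = np.random.default_rng(5)
n = 133
score = rng.gamma(shape=2.0, scale=7.0, size=n)        # impairment-scale score
days = rng.exponential(scale=12 * np.exp(0.16 * score / 10), size=n)
df = pd.DataFrame({
    "days_to_clearance": np.round(days) + 1,
    "cleared": rng.random(n) < 0.75,                   # some patients censored
    "symptom_score": score,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_clearance", event_col="cleared")
# hazard ratio (exp(coef)) below 1 means higher scores delay clearance
print(cph.summary[["exp(coef)", "p"]])
```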
Procedia PDF Downloads 153
13173 Predictive Analytics in the Oil and Gas Industry
Authors: Suchitra Chnadrashekhar
Abstract:
Information technology, earlier seen as a support function in an organization, has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data, which would have been unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. IT has given the oil and gas industry the leverage to store, manage and process data in the most efficient way possible, thus deriving economic value in its day-to-day operations. Proper synchronization between the operational data system and the information technology system is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity and security, and by increasing equipment utilization. Predictive analytics goes beyond early warning by providing insights into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic or systems approach towards asset optimization and thus have functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analysis in the oil and gas industry is redefining the dynamics of this sector. The paper is supported by real-time data and the evaluation of data for a given oil production asset using an application tool, SAS. The reason for using SAS as the application for our analysis is that SAS provides an analytics-based framework to improve uptimes, performance and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions. With state-of-the-art analytics and reporting, we can predict maintenance problems before they happen and determine root causes in order to update processes for future prevention.
Keywords: hydrocarbon, information technology, SAS, predictive analytics
Procedia PDF Downloads 360
13172 Determinants of Aggregate Electricity Consumption in Ghana: A Multivariate Time Series Analysis
Authors: Renata Konadu
Abstract:
In Ghana, electricity has become the main form of energy on which all sectors of the economy rely for their businesses. Therefore, as the economy grows, the demand for and consumption of electricity grow alongside it due to this heavy dependence. However, since the supply of electricity has not increased to match demand, there have been frequent power outages and load shedding, affecting business performance. To solve this problem and advance policies to secure electricity in Ghana, it is imperative that the factors that cause consumption to increase be analysed by considering the three classes of consumers: residential, industrial and non-residential. The main argument, however, is that exports of electricity to neighbouring countries should be included in the electricity consumption model and considered one of the significant factors that can decrease or increase consumption. The author made use of multivariate time series data from 1980-2010 and econometric models such as ordinary least squares (OLS) and a vector error correction model. Findings show that GDP growth, urban population growth, electricity exports and industry value added to GDP were cointegrated. The results also showed that there is unidirectional causality from electricity exports, GDP growth and industry value added to GDP to electricity consumption in the long run. In the short run, however, causality was found between all the variables and electricity consumption. The results have useful implications for energy policy makers, especially with regards to electricity consumption, demand and supply.
Keywords: electricity consumption, energy policy, GDP growth, vector error correction model
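A minimal VECM sketch of the kind of cointegration analysis described, using the statsmodels implementation; the three simulated series merely share a stochastic trend and do not represent Ghana's actual data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Annual stand-ins for 1980-2010: consumption, GDP growth proxy, exports.
rng = np.random.default_rng(6)
t = 31
common = np.cumsum(rng.normal(size=t))       # shared stochastic trend
data = pd.DataFrame({
    "consumption": common + rng.normal(0, 0.3, t),
    "gdp":         common + rng.normal(0, 0.3, t),
    "exports":     common + rng.normal(0, 0.3, t),
})

# Johansen-style test chooses the cointegration rank, then the VECM's
# error-correction terms capture the long-run relationship.
rank = select_coint_rank(data, det_order=0, k_ar_diff=1, signif=0.05)
model = VECM(data, k_ar_diff=1,
             coint_rank=max(rank.rank, 1),   # at least 1 for illustration
             deterministic="co")
res = model.fit()
print("selected cointegration rank:", rank.rank)
print("loading coefficients (alpha):\n", res.alpha)
```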
Procedia PDF Downloads 437
13171 Early Evaluation of Long-Span Suspension Bridges Using Smartphone Accelerometers
Authors: Ekin Ozer, Maria Q. Feng, Rupa Purasinghe
Abstract:
Structural deterioration of bridge systems poses an ongoing threat to transportation networks. Moreover, the integrity and safety of landmark bridges are about more than functionality alone, since these structures provide a strong presence for society and nations. Therefore, an innovative and sustainable method to inspect landmark bridges is essential to ensure their resiliency in the long run. In this paper, a recently introduced concept, smartphone-based modal frequency estimation, is addressed, and the paper aims to authenticate the fidelity of smartphone-based vibration measurements gathered from three landmark suspension bridges. Firstly, smartphones located at the bridge mid-span are adopted as portable and standalone vibration measurement devices. Then, their embedded accelerometers are utilized to gather the vibration response under operational loads, and eventually frequency-domain characteristics are deduced. The preliminary analysis results are compared with reference publications and high-quality monitoring data to validate the usability of smartphones on long-span landmark suspension bridges. If technical challenges such as long vibration periods, low-amplitude excitation, embedded smartphone sensor features, sampling, and citizen engagement are tackled, smartphones can provide a novel and cost-free crowdsourcing tool for the maintenance of these landmark structures. This study presents the early-phase findings from three signature structures located in the United States.
Keywords: smart and mobile sensing, structural health monitoring, suspension bridges, vibration analysis
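Modal frequency estimation from an accelerometer record typically reduces to spectral peak picking; a minimal sketch follows, with a synthetic signal whose 0.2 Hz mode stands in for a real bridge's fundamental frequency.

```python
import numpy as np
from scipy.signal import welch, find_peaks

# Synthetic mid-span acceleration: a 0.2 Hz dominant mode (a long period,
# typical order for suspension bridges) buried in sensor noise.
fs = 100.0                       # assumed smartphone sampling rate (Hz)
t = np.arange(0, 600, 1 / fs)    # ten minutes of ambient vibration
rng = np.random.default_rng(7)
accel = 0.01 * np.sin(2 * np.pi * 0.2 * t) + 0.05 * rng.normal(size=t.size)

# Welch power spectral density, then pick spectral peaks as candidate
# modal frequencies (output-only, operational identification).
f, pxx = welch(accel, fs=fs, nperseg=2 ** 14)
peaks, _ = find_peaks(pxx, height=10 * np.median(pxx))
print("candidate modal frequencies (Hz):", f[peaks][:3])
```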
Procedia PDF Downloads 292
13170 Modelling Mode Choice Behaviour Using Cloud Theory
Authors: Leah Wright, Trevor Townsend
Abstract:
Mode choice models are crucial instruments in the analysis of travel behaviour. These models show the relationship between an individual’s choice of transportation mode for a given O-D pair and the individual’s socioeconomic characteristics, such as household size, income level, age and/or gender, as well as the features of the transportation system. The most popular functional forms of these models are based on utility-based choice theory, which addresses the uncertainty in the decision-making process with the use of an error term. However, with the development of artificial intelligence, many researchers have started to take a different approach to travel demand modelling. In recent times, researchers have looked at using neural networks, fuzzy logic and rough set theory to develop improved mode choice formulas. The concept of cloud theory has recently been introduced to model decision-making under uncertainty. Unlike the previously mentioned theories, cloud theory recognises a relationship between randomness and fuzziness, two of the most common types of uncertainty. This research aims to investigate the use of cloud theory in mode choice models. This paper highlights the conceptual framework of a mode choice model using cloud theory. Merging decision-making under uncertainty with mode choice models is state of the art. The cloud theory model is expected to address the issues and concerns with the nested logit model and to improve the design of mode choice models and their use in travel demand analysis.
Keywords: cloud theory, decision-making, mode choice models, travel behaviour, uncertainty
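The forward normal cloud generator, a standard construct in the cloud theory literature (not the authors' full mode choice model), shows how the theory couples randomness and fuzziness; all numeric parameters below are assumed.

```python
import numpy as np

def forward_normal_cloud(Ex, En, He, n_drops=1000, rng=None):
    """Forward normal cloud generator: each 'cloud drop' is a value x with
    a membership degree, combining randomness (En) and fuzziness (He)."""
    if rng is None:
        rng = np.random.default_rng(8)
    En_prime = rng.normal(En, He, n_drops)             # second-order randomness
    x = rng.normal(Ex, np.abs(En_prime))               # drop positions
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))  # membership degrees
    return x, mu

# Illustrative linguistic concept, e.g. "a short travel time is attractive":
# Ex = expectation, En = entropy, He = hyper-entropy (all assumed values).
x, mu = forward_normal_cloud(Ex=15.0, En=4.0, He=0.5)
print("mean drop:", x.mean().round(2), " mean membership:", mu.mean().round(2))
```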
Procedia PDF Downloads 388
13169 1-g Shake Table Tests to Study the Impact of PGA on Foundation Settlement in Liquefiable Soil
Authors: Md. Kausar Alam, Mohammad Yazdi, Peiman Zogh, Ramin Motamed
Abstract:
Liquefaction-induced ground settlement has caused severe damage to structures in past decades, and the amount of building settlement caused by liquefaction is directly proportional to the intensity of the ground shaking. To reduce this soil liquefaction effect, it is essential to examine the influence of peak ground acceleration (PGA). Unfortunately, limited studies have been carried out on this issue. In this study, a series of moderate-scale 1-g shake table experiments was conducted at the University of Nevada, Reno to evaluate the influence of PGA, at the same duration, in liquefiable soil layers. The model was prepared based on a large-scale shake table test with a scaling factor of N = 5, which had been conducted at the University of California, San Diego. The model ground has three soil layers with relative densities of 50% for the crust, 30% for the liquefiable layer, and 90% for the dense layer. In addition, a shallow foundation is seated over an unsaturated crust layer. After preparing the model, input motions with various peak ground accelerations (i.e., 0.16g, 0.25g, and 0.37g) and the same duration (10 s) were applied. Based on the experimental results, when the PGA increased from 0.16g to 0.37g, the foundation settlement increased from 20 mm to 100 mm. In addition, the expected foundation settlement based on the scaling factor was 25 mm, while the actual settlement for PGA 0.25g over 10 seconds was 50 mm.
Keywords: foundation settlement, liquefaction, peak ground acceleration, shake table test
Procedia PDF Downloads 77
13168 Numerical Analysis of a Pilot Solar Chimney Power Plant
Authors: Ehsan Gholamalizadeh, Jae Dong Chung
Abstract:
The solar chimney power plant is a feasible solar thermal system that produces electricity from the sun. The objective of this study is to investigate buoyancy-driven flow and heat transfer through a built pilot solar chimney system called the 'Kerman Project'. The system has a chimney with a height of 60 m and a diameter of 3 m; the average radius of its solar collector is about 20 m, and the average collector height is about 2 m. A three-dimensional simulation was conducted to analyze the system using computational fluid dynamics (CFD). In this model, the radiative transfer equation was solved using the discrete ordinates (DO) radiation model, taking into account non-gray radiation behavior. In order to model solar irradiation from the sun’s rays, the solar ray-tracing algorithm was coupled to the computation via a source term in the energy equation. The model was validated by comparison with the experimental data of the Manzanares prototype and with the performance of the built pilot system. Then, based on the numerical simulations, the velocity and temperature distributions through the system, the temperature profile of the ground surface and the system performance were presented. The analysis accurately shows the flow and heat transfer characteristics through the pilot system and predicts its performance.
Keywords: buoyancy-driven flow, computational fluid dynamics, heat transfer, renewable energy, solar chimney power plant
Procedia PDF Downloads 262
13167 Research Networks and Knowledge Sharing: An Exploratory Study of Aquaculture in Europe
Authors: Zeta Dooly, Aidan Duane
Abstract:
The collaborative European-funded research and development landscape provides prime conditions for multi-disciplinary teams to learn and enhance their knowledge beyond the capability of training and learning within their own organisational cocoons. Whilst the emergence of the academic entrepreneur has changed the focus of educational institutions to that of quasi-businesses, the training and professional development of lecturers and academic staff are often not formalised to the same level as in industry. This research focuses on industry-academic collaborative research funded by the European Commission. The impact of research is scalable if an optimum research network is created and managed effectively. This paper investigates network embeddedness, the nature of relationships, links, and nodes within a research network, and the enhancement of the network’s knowledge. The contribution of this paper extends our understanding of establishing and maintaining effective collaborative research networks. The effects of network embeddedness are recognized in the literature as pertinent to innovation and the economy. The network theory literature claims that networks are essential to innovative clusters such as Silicon Valley and to innovation in high-tech industries. This research provides evidence of the impact collaborative research has on disparate individuals' innovative contributions to their organisations and on their own professional development. This study adopts a qualitative approach and uncovers some of the challenges of multi-disciplinary research through case study insights. This paper recommends the establishment of scaffolding to accommodate cooperation in research networks, role appointment, and the early addressing of contextual complexities to avoid problem cultivation. Furthermore, it offers recommendations on network formation and on intra-network challenges relating to open data, competition, friendships, and competency enhancement. Network capability is enhanced by the adoption of the relevant theories - network theory, open innovation, and social exchange - with the understanding that the network structure has an impact on innovation and social exchange in research networks. The research concludes that there is an opportunity to deepen our understanding of the impact of network reuse and network hopping, which provide scaffolding for network members to enhance and build upon their knowledge using a progressive approach.
Keywords: research networks, competency building, network theory, case study
Procedia PDF Downloads 126
13166 Drift-Wave Turbulence in a Tokamak Edge Plasma
Authors: S. Belgherras Bekkouche, T. Benouaz, S. M. A. Bekkouche
Abstract:
Tokamak plasma is far from having a stable background. The study of turbulent transport is an important part of current research, and advanced scenarios have been devised to minimize it. To do this, we used a three-wave interaction model which allows us to investigate the occurrence of drift-wave turbulence driven by pressure gradients in the edge plasma of a tokamak. In order to simulate the energy redistribution among different modes, growth/decay rates for the three waves were added. After a numerical simulation, we can determine certain aspects of the temporal dynamics exhibited by the model. Indeed, for a wide range of the wave decay rate, an intermittent transition from periodic behavior to chaos is observed. A chaos control strategy was then introduced with the aim of reducing or eliminating the weak turbulence.
Keywords: wave interaction, plasma drift waves, wave turbulence, tokamak, edge plasma, chaos
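One common real-amplitude caricature of a three-wave interaction with growth/decay rates can be integrated as below; the sign conventions and the rate values are illustrative assumptions, not the authors' exact system.

```python
import numpy as np
from scipy.integrate import solve_ivp

def three_wave(t, a, g1, g2, g3):
    """Resonant triad with linear growth/decay and quadratic coupling
    (one real-amplitude form; conventions vary across the literature)."""
    a1, a2, a3 = a
    return [g1 * a1 - a2 * a3,
            -g2 * a2 + a1 * a3,
            -g3 * a3 + a1 * a2]

# One pump wave driven unstable (g1 > 0), two damped daughter waves.
# Varying the decay rates sweeps the dynamics between periodic and chaotic.
sol = solve_ivp(three_wave, (0, 200), [0.1, 0.1, 0.1],
                args=(1.0, 1.5, 1.5), dense_output=True, rtol=1e-8)
a1_late = sol.sol(np.linspace(150, 200, 5))[0]
print("late-time pump amplitude samples:", np.round(a1_late, 3))
```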
Procedia PDF Downloads 552
13165 Coral Reef Fishes in the Marine Protected Areas in Southern Cebu, Philippines
Authors: Christine M. Corrales, Gloria G. Delan, Rachel Luz V. Rica, Alfonso S. Piquero
Abstract:
The marine protected areas (MPAs) in the study sites were established 8-13 years ago and are presently operational. This study was conducted to gather baseline information on the diversity, density and biomass of coral reef fishes inside and outside the four marine protected areas (MPAs) of Cawayan, Dalaguete; Daan-Lungsod Guiwang, Alcoy; North Granada, Boljoon; and Sta. Cruz, Ronda. Coral reef fishes in the MPAs were identified using the fish visual census method. Results of the t-test showed that the mean diversity (fish species/250m2) of target and non-target reef fish species found inside and outside the MPAs was significantly different. The density (ind./1,000m2) of target species inside and outside the MPAs showed no significant difference. Similarly, the density of non-target species inside and outside the MPAs showed no significant difference. This is an indication that fish density inside and outside the MPAs was more or less the same. The mean biomass (kg/1,000m2) of target species inside and outside the MPAs showed a significant difference, in contrast with non-target species inside and outside the MPAs, which showed no significant difference. Higher biomass of target fish species belonging to the families Caesionidae (fusiliers) and Scaridae (parrotfishes) was commonly observed inside the MPAs. Results showed that fish species were more diverse, with higher density and biomass, inside the MPAs than outside. However, fish diversity and density were mostly contributed by non-target species. Hence, long-term protection and management of MPAs is needed to effectively increase fish diversity, density and biomass, specifically of target fish species.
Keywords: biomass, density, diversity, marine protected area, target fish species
Procedia PDF Downloads 397
13164 Vehicular Emission Estimation of Islamabad Using the COPERT 5 Model
Authors: Muhammad Jahanzaib, Muhammad Z. A. Khan, Junaid Khayyam
Abstract:
Islamabad is the capital of Pakistan, with a population of 1.365 million people and a vehicular fleet size of 0.75 million. The vehicular fleet is growing annually at a rate of 11%. Vehicular emissions are a major source of black carbon (BC). In developing countries like Pakistan, most vehicles consume conventional fuels like petrol, diesel, and CNG. These fuels are major emitters of pollutants like CO, CO2, NOx, CH4, VOCs, and particulate matter (PM10). Carbon dioxide and methane are leading contributors to global warming, with global shares of 9-26% and 4-9%, respectively. NOx is the precursor of nitrates, which ultimately form aerosols that are noxious to human health. In this study, COPERT (Computer Programme to Calculate Emissions from Road Transport) was used for vehicular emission estimation in Islamabad. COPERT is a Windows-based program developed for the calculation of emissions from the road transport sector. Emissions of pollutants like CO, NOx, VOCs, and PM, as well as energy consumption, were calculated for the year 2016. Different variables were input to the model for emission estimation, including meteorological parameters, average vehicular trip length and respective time duration, fleet configuration, activity data, degradation factor, and fuel effect. The estimated emissions of CO, CH4, CO2, NOx, and PM10 were found to be 9814.2, 44.9, 279196.7, 3744.2 and 304.5 tons, respectively.
Keywords: COPERT model, emission estimation, PM10, vehicular emission
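At its core, an inventory of this kind multiplies fleet activity by per-kilometre emission factors and sums over vehicle categories; the sketch below uses purely illustrative factors, not COPERT 5 values.

```python
# Tier-1-style estimate: emissions = vehicles * annual mileage * emission
# factor, summed over categories. All numbers are illustrative assumptions.
fleet = {
    "petrol_car":  dict(n=400_000, km=12_000, co=2.5, nox=0.6),   # g/km
    "diesel_car":  dict(n=150_000, km=15_000, co=0.6, nox=0.9),
    "cng_vehicle": dict(n=200_000, km=18_000, co=1.8, nox=0.4),
}

for pollutant in ("co", "nox"):
    # grams -> tonnes: divide by 1e6
    tons = sum(v["n"] * v["km"] * v[pollutant] for v in fleet.values()) / 1e6
    print(f"{pollutant.upper()}: {tons:,.0f} t/year")
```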
Procedia PDF Downloads 262
13163 Experimental Assessment of Artificial Flavors Production
Authors: M. Unis, S. Turky, A. Elalem, A. Meshrghi
Abstract:
The esterification kinetics of acetic acid with isopropanol in the presence of sulfuric acid as a homogeneous catalyst were studied in isothermal batch experiments at 60, 70 and 80°C and at different molar ratios of isopropanol to acetic acid. Investigation of the reaction kinetics indicated that a low molar ratio is favored for the esterification reaction, because the reaction is acid-catalyzed. The maximum conversion, approximately 60.6%, was obtained at 80°C for a 1:3 acid-to-alcohol molar ratio. It was found that increasing the reaction temperature increases the rate constant and the conversion at a given mole ratio, which is attributed to the exothermic nature of the esterification. The homogeneous reaction has been described with a simple power-law model. The chemical equilibrium conversion calculated from the kinetic model is in agreement with the measured chemical equilibrium.
Keywords: artificial flavors, esterification, chemical equilibria, isothermal
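A reversible second-order rate law of the power-law form mentioned above can be integrated numerically; the rate constants and concentrations below are assumed for illustration, not the study's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reversible second-order esterification: acid + alcohol <-> ester + water,
# r = k1*Ca*Cb - k2*Ce*Cw. Rate constants are illustrative assumptions.
k1, k2 = 2.0e-4, 1.2e-4          # L/(mol*min), assumed values at 80 C
Ca0, Cb0 = 2.0, 6.0              # mol/L, 1:3 acid-to-alcohol ratio

def rates(t, c):
    ca, cb, ce, cw = c
    r = k1 * ca * cb - k2 * ce * cw
    return [-r, -r, r, r]

sol = solve_ivp(rates, (0, 3000), [Ca0, Cb0, 0.0, 0.0], rtol=1e-8)
conversion = (Ca0 - sol.y[0, -1]) / Ca0
print(f"acid conversion after {sol.t[-1]:.0f} min: {conversion:.1%}")
```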
Procedia PDF Downloads 335
13162 Response Analysis of a Steel Reinforced Concrete High-Rise Building during the 2011 Tohoku Earthquake
Authors: Naohiro Nakamura, Takuya Kinoshita, Hiroshi Fukuyama
Abstract:
The 2011 off the Pacific Coast of Tohoku Earthquake caused considerable damage to wide areas of eastern Japan. A large number of earthquake observation records were obtained at various places. To design more earthquake-resistant buildings and improve earthquake disaster prevention, it is necessary to utilize these data to analyze and evaluate the behavior of a building during an earthquake. This paper presents an earthquake response simulation analysis (hereafter a seismic response analysis) that was conducted using data recorded during the main earthquake (hereafter the main shock) as well as the earthquakes before and after it. The data were obtained at a high-rise steel-reinforced concrete (SRC) building in the bay area of Tokyo. We first give an overview of the building, along with the characteristics of the earthquake motion and the building during the main shock. The data indicate that there was a change in the natural period before and after the earthquake. Next, we present the results of our seismic response analysis. First, the analysis model and conditions are shown; then, the analysis results are compared with the observational records. Using the analysis results, we then study the effect of soil-structure interaction on the response of the building. By identifying the characteristics of the building during the earthquake (i.e., the 1st natural period and the 1st damping ratio) with the Auto-Regressive eXogenous (ARX) model, we compare the analysis results with the observational records so as to evaluate the accuracy of the response analysis. In this study, a lumped-mass system SR model was used to conduct a seismic response analysis using observational data as input waves. The main results of this study are as follows: 1) The observational records of the 3/11 main shock put it between a level 1 and level 2 earthquake. The result of the ground response analysis showed that the maximum shear strain in the ground was about 0.1% and that the possibility of liquefaction occurring was low. 2) During the 3/11 main shock, the observed wave showed that the eigenperiod of the building became longer; this behavior could be generally reproduced in the response analysis. This prolonged eigenperiod was due to the nonlinearity of the superstructure, and the effect of the nonlinearity of the ground seems to have been small. 3) As for the 4/11 aftershock, a continuous analysis was conducted in which the subject seismic wave was input after the 3/11 main shock. The analyzed values generally corresponded well with the observed values. This means that the effect of the nonlinearity of the main shock was retained by the building; it is important to consider this when conducting the response evaluation. 4) The first period and the damping ratio during a vibration were evaluated with an ARX model. Our results show that the response analysis model in this study is generally good at estimating a change in the response of the building during a vibration.
Keywords: ARX model, response analysis, SRC building, the 2011 off the Pacific Coast of Tohoku Earthquake
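ARX identification of a first natural period and damping ratio can be sketched as follows; the regressor construction and pole conversion are standard, while the synthetic single-degree-of-freedom system merely stands in for the building's records.

```python
import numpy as np

def arx_modal_id(y, u, dt, na=2, nb=2):
    """Least-squares ARX(na, nb) fit, then modal frequency and damping
    from the dominant discrete-time pole of the AR polynomial."""
    N, k0 = len(y), max(na, nb)
    # Regressors: past outputs (AR part) and past inputs (X part).
    phi = np.column_stack(
        [-y[k0 - i:N - i] for i in range(1, na + 1)]
        + [u[k0 - j:N - j] for j in range(1, nb + 1)])
    theta, *_ = np.linalg.lstsq(phi, y[k0:], rcond=None)
    a = np.concatenate(([1.0], theta[:na]))       # AR polynomial coefficients
    p = np.roots(a)
    p = p[np.argmax(np.abs(p))]                   # dominant pole
    lam = np.log(p) / dt                          # continuous-time pole
    w = np.abs(lam)
    return w / (2 * np.pi), -lam.real / w         # (f_n in Hz, damping ratio)

# Synthetic check: SDOF system, f_n = 0.5 Hz, 3% damping, white-noise input.
dt, fn, zeta = 0.02, 0.5, 0.03
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta ** 2)
rng = np.random.default_rng(9)
u = rng.normal(size=20000)
y = np.zeros_like(u)
# exact 2nd-order AR recursion for the damped oscillator's pole pair
a1 = -2 * np.exp(-zeta * wn * dt) * np.cos(wd * dt)
a2 = np.exp(-2 * zeta * wn * dt)
for k in range(2, len(u)):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + u[k - 1]
f_est, z_est = arx_modal_id(y, u, dt)
print(f"identified f_n = {f_est:.3f} Hz, damping = {z_est:.3f}")
```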
Procedia PDF Downloads 164
13161 Input Data Balancing in a Neural Network PM-10 Forecasting System
Authors: Suk-Hyun Yu, Heeyong Kwon
Abstract:
Recently, PM-10 has become a social and global issue. It is one of the major air pollutants affecting human health. Therefore, it needs to be forecast rapidly and precisely. However, PM-10 comes from various emission sources, and its concentration level is largely dependent on meteorological and geographical factors of the local and global regions, so forecasting PM-10 concentration is very difficult. A neural network model can be used in this case, but there are few cases of high-concentration PM-10, which makes the learning of the neural network model difficult. In this paper, we suggest a simple input balancing method for when the data distribution is uneven. It is based on the probability of appearance of the data. Experimental results show that input balancing makes the neural networks’ learning easy and improves the forecasting rates.
Keywords: artificial intelligence, air quality prediction, neural networks, pattern recognition, PM-10
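A minimal sketch of balancing by probability of appearance: training pairs are resampled with weights inversely proportional to their target-bin frequency. The bin count and the simulated PM-10 distribution are assumptions, not the paper's method in detail.

```python
import numpy as np

def balance_by_rarity(X, y, n_bins=10, rng=None):
    """Resample training pairs with probability inversely proportional to
    how often their target bin appears, so rare high-PM10 days are not
    swamped by the common low-concentration days."""
    if rng is None:
        rng = np.random.default_rng(10)
    edges = np.quantile(y, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(y, edges)
    counts = np.bincount(bins, minlength=n_bins)
    w = 1.0 / counts[bins]            # rare bins get large weights
    w /= w.sum()
    idx = rng.choice(len(y), size=len(y), replace=True, p=w)
    return X[idx], y[idx]

# Skewed PM-10 targets: few high-concentration episodes.
rng = np.random.default_rng(11)
y = rng.lognormal(mean=3.0, sigma=0.6, size=5000)     # ug/m3, illustrative
X = rng.normal(size=(5000, 8))                        # meteorological inputs
Xb, yb = balance_by_rarity(X, y)
print("share above 100 ug/m3 before/after:",
      (y > 100).mean().round(3), (yb > 100).mean().round(3))
```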
Procedia PDF Downloads 232
13160 Application of Medical Information System for Image-Based Second Opinion Consultations – Georgian Experience
Authors: Kldiashvili Ekaterina, Burduli Archil, Ghortlishvili Gocha
Abstract:
Introduction – The medical information system (MIS) is at the heart of information technology (IT) implementation policies in healthcare systems around the world. Different architectures and application models of MIS have been developed. Despite obvious advantages and benefits, application of MIS in everyday practice is slow. Objective – Based on an analysis of the existing models of MIS in Georgia, a multi-user web-based approach has been created. This presentation will present the architecture of the system and its application for image-based second opinion consultations. Methods – The MIS has been created with .Net technology and an SQL database architecture. It realizes local (intranet) and remote (internet) access to the system and management of databases. The MIS is a fully operational approach, which is successfully used for medical data registration and management as well as for the creation, editing and maintenance of electronic medical records (EMR). Five hundred Georgian-language electronic medical records from the cervical screening activity, illustrated by images, were selected for second opinion consultations. Results – The primary goal of the MIS is patient management. However, the system can be successfully applied for image-based second opinion consultations. Discussion – The ideal of healthcare in the information age must be to create a situation where healthcare professionals spend more time creating knowledge from medical information and less time managing medical information. The application of easily available and adaptable technology and the improvement of infrastructure conditions are the basis for eHealth applications. Conclusion – The MIS is a promising and relevant technology solution. It can be successfully and effectively used for image-based second opinion consultations.
Keywords: digital images, medical information system, second opinion consultations, electronic medical record
Procedia PDF Downloads 450
13159 Modeling the Deterioration of Road Bridges at the Provincial Level in Laos
Authors: Hatthaphone Silimanotham, Michael Henry
Abstract:
The effective maintenance of road bridge infrastructure is becoming a widely researched topic in the civil engineering field. Deterioration is one of the main issues in bridge performance, and it is necessary to understand how bridges deteriorate to optimally plan budget allocation for bridge maintenance. In Laos, many bridges are in a deteriorated state, which may affect their performance. Due to bridge deterioration, the Ministry of Public Works and Transport is interested in deterioration models to allocate the budget efficiently and support bridge maintenance planning. A deterioration model can be used to predict the bridge condition in the future based on the behavior observed in the past. This paper analyzes the available inspection data for road bridges on the road classification network to build deterioration prediction models for the main bridge types found at the provincial level (concrete slab, concrete girder, and steel truss) using probabilistic deterioration modeling by the linear regression method. The analysis targets these three bridge types in the 18 provinces of Laos and estimates the bridge deterioration rating for evaluating the bridges' remaining life. This research thus considers the relationship between the service period and the bridge condition to represent the probability of bridge condition in the future. The results of the study can be used for a variety of bridge management tasks, including maintenance planning, budgeting, and evaluating bridge assets.
Keywords: deterioration model, bridge condition, bridge management, probabilistic modeling
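The linear regression deterioration model can be sketched in a few lines; the ratings, rates, and intervention threshold below are illustrative, not the Laos inspection data.

```python
import numpy as np

# Condition ratings (say 1 = failed ... 5 = excellent) vs. bridge age for
# one bridge type; all values are simulated for illustration.
rng = np.random.default_rng(12)
age = rng.uniform(0, 40, 60)
condition = np.clip(5.0 - 0.07 * age + rng.normal(0, 0.3, 60), 1, 5)

# Linear deterioration model: condition = b0 + b1 * age.
b1, b0 = np.polyfit(age, condition, deg=1)
threshold = 2.0                        # rating at which intervention is due
print(f"deterioration rate: {-b1:.3f} rating/yr")
print(f"predicted service life: {(threshold - b0) / b1:.0f} years")
```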
Procedia PDF Downloads 159
13158 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals
Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar
Abstract:
Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and greatly delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can most benefit from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to achieve the detection of R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model’s ability to generalize its outcomes. The performance of the model in detecting R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise, and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks
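As a point of reference for what the learned detector must do, here is a classical filtering-plus-thresholding baseline for R-peak detection (not the IncResU-Net itself); the toy signal and threshold values are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs):
    """Simple band-pass + adaptive-threshold baseline for R-peak detection
    (a classical stand-in for a learned detector)."""
    # Band-pass around QRS energy (roughly 5-15 Hz).
    b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    energy = filtfilt(b, a, ecg) ** 2
    # Peaks must clear an adaptive threshold and be > 200 ms apart
    # (a physiological refractory period).
    peaks, _ = find_peaks(energy,
                          height=0.3 * np.percentile(energy, 99),
                          distance=int(0.2 * fs))
    return peaks

# Toy ECG: 1 Hz train of sharp R-like spikes plus baseline noise.
fs = 360
t = np.arange(0, 10, 1 / fs)
ecg = np.random.default_rng(13).normal(0, 0.05, t.size)
ecg[::fs] += 1.0                                  # one spike per second
print("detected beats:", len(detect_r_peaks(ecg, fs)))  # expect ~10
```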
Procedia PDF Downloads 186
13157 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder
Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen
Abstract:
Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction, which is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on discriminative encoder-decoder architectures have achieved noticeable results. However, we find that these discriminative generators usually generate explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model. In fact, the same logic information exists in both the premise-hypothesis pair and its explanation, and it is easy to extract the logic information that is explicitly contained in the target explanation. Hence, we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained under the guidance of explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called logic supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark explanation-Stanford Natural Language Inference (e-SNLI) demonstrate that the proposed VariationalEG achieves significant improvement compared to previous studies and yields a state-of-the-art result. Furthermore, we perform an analysis of the generated explanations to demonstrate the effect of the latent variable.
Keywords: natural language inference, explanation generation, variational auto-encoder, generative model
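A minimal sketch of the latent-variable machinery described: reparameterized sampling, a KL term, and a classification head on z that mirrors the paper's logic supervision idea. Dimensions, names, and the reconstruction term are assumed placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalEGSketch(nn.Module):
    """Encode a pooled premise-hypothesis representation into (mu, logvar),
    sample z by reparameterization, and supervise z with the 3-way logic
    label so the decoder cannot ignore it (anti-posterior-collapse)."""
    def __init__(self, d_in=512, d_z=64, n_labels=3):
        super().__init__()
        self.to_mu = nn.Linear(d_in, d_z)
        self.to_logvar = nn.Linear(d_in, d_z)
        self.logic_clf = nn.Linear(d_z, n_labels)   # logic supervision head

    def forward(self, h):
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar

def losses(model, h, logic_label, recon_loss):
    z, mu, logvar = model(h)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    logic = F.cross_entropy(model.logic_clf(z), logic_label)
    return recon_loss + kl + logic   # decoder reconstruction term assumed given

h = torch.randn(8, 512)                # pooled pair encodings (placeholder)
label = torch.randint(0, 3, (8,))      # entailment / neutral / contradiction
print(losses(VariationalEGSketch(), h, label, recon_loss=torch.tensor(0.0)))
```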
Procedia PDF Downloads 151
13156 Bridging the Gap through New Media Technology Acceptance: Exploring Chinese Family Business Culture
Authors: Farzana Sharmin, Mohammad Tipu Sultan
Abstract:
Emerging new media technologies such as social media and social networking sites have changed family business dynamics in East Asia. Family businesses in China have developed toward technology at an exponential rate. In the last two decades, many of these family businesses have succeeded in becoming major players in the Chinese and world economy. However, very little literature is available on social media acceptance in the Chinese family business context. Therefore, this study tries to bridge the gap between culture and new media technology in order to understand the attitudes of young Chinese entrepreneurs towards the family business. This paper focuses on two cultural dimensions (collectivism and long-term orientation), adopted from Geert Hofstede, together with perceived usefulness and ease of use, adopted from the Technology Acceptance Model (TAM), to explore the actual behavior of technology acceptance in the family business. A quantitative survey method (n=275) was used to collect data from Chinese family business owners in Shanghai. Inferential statistical analysis was applied to extract trait factors and to verify the model. The results show that using social media for family business promotion is highly influenced by cultural values (collectivism and long-term orientation). The theoretical contribution of this research may also assist policymakers and practitioners in other developing countries in advertising and promoting the family business through social media.
Keywords: China, cultural dimensions, family business, technology acceptance model, TAM
Procedia PDF Downloads 148