Search results for: game outcome prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4687

997 Characterising the Dynamic Friction in the Staking of Plain Spherical Bearings

Authors: Jacob Hatherell, Jason Matthews, Arnaud Marmier

Abstract:

Anvil staking is a cold-forming process that is used in the assembly of plain spherical bearings into a rod-end housing. This process ensures that the bearing outer lip conforms to the chamfer in the matching rod end to produce a lightweight mechanical joint with sufficient strength to meet the pushout load requirement of the assembly. Finite Element (FE) analysis is being used extensively to predict the behaviour of metal flow in cold-forming processes to support industrial manufacturing and product development. On-going research aims to validate FE models across a wide range of bearing and rod-end geometries by systematically isolating and understanding the uncertainties caused by variations in material properties, load-dependent friction coefficients and strain rate sensitivity. The improved confidence in these models aims to eliminate the costly and time-consuming process of experimental trials in the introduction of new bearing designs. Previous literature has shown that friction coefficients do not remain constant during cold-forming operations; however, the understanding of this phenomenon varies significantly and is rarely implemented in FE models. In this paper, a new approach to evaluating the normal contact pressure versus friction coefficient relationship is outlined using friction calibration charts generated via iterative FE models and ring compression tests. When compared to previous research, this new approach greatly improves the prediction of the forming geometry and the forming load during the staking operation. This paper also aims to standardise the FE approach to modelling ring compression tests and determining friction calibration charts.
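As an illustration of how a load-dependent friction coefficient can feed into an FE model, the sketch below interpolates a coefficient from a friction calibration chart rather than assuming a constant value. The pressure and friction values are hypothetical placeholders, not data from the paper:

```python
import numpy as np

# Hypothetical friction calibration chart: normal contact pressure (MPa)
# versus friction coefficient, as might be derived from iterative FE
# models and ring compression tests.
pressure_mpa = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
friction_coeff = np.array([0.25, 0.20, 0.15, 0.12, 0.10])

def friction_at(pressure):
    """Interpolate the load-dependent friction coefficient at a given
    normal contact pressure (clamps at the chart endpoints)."""
    return float(np.interp(pressure, pressure_mpa, friction_coeff))

mu = friction_at(150.0)  # halfway between the 100 and 200 MPa entries
```

In an FE run, each contact element would query such a chart at its current contact pressure instead of using a single global coefficient.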

Keywords: anvil staking, finite element analysis, friction coefficient, spherical plain bearing, ring compression tests

Procedia PDF Downloads 202
996 Critical Assessment of Herbal Medicine Usage and Efficacy by Pharmacy Students

Authors: Anton V. Dolzhenko, Tahir Mehmood Khan

Abstract:

An ability to make an evidence-based decision is a critically important skill required for practicing pharmacists. The development of this skill is incorporated into the pharmacy curriculum. In this study, we aimed to estimate the perception of pharmacy students regarding herbal medicines and their ability to assess information on herbal medicines professionally. The current Monash University curriculum in Pharmacy does not provide comprehensive study material on herbal medicines, and students must find this information themselves, assess its quality and make a professional decision. In the Pharmacy course, students are trained to apply this process to conventional medicines. In our survey of 93 undergraduate students from years 1-4 of the Pharmacy course at Monash University Malaysia, we found that students’ views on herbal medicines are sometimes associated with common beliefs, which affect students’ ability to make evidence-based conclusions regarding the therapeutic potential of herbal medicines. The use of herbal medicines is widespread, and 95.7% of the participating students had prior experience of using them. On a scale of 1 to 10, students rated the importance of acquiring herbal medicine knowledge as 8.1±1.6. More than half (54.9%) agreed that herbal medicines have the same clinical significance as conventional medicines in treating diseases. Even more students agreed that healthcare settings should give equal importance to both conventional and herbal medicine use (80.6%) and that herbal medicines should comply with the same strict quality control procedures as conventional medicines (84.9%). The latter statement also indicates that students take the safety issues associated with the use of herbal medicines seriously. This was further confirmed by 94.6% of students saying that safety and toxicity information on herbs and spices is important to pharmacists and 95.7% of students admitting that drug-herb interactions may affect therapeutic outcomes.
Only 36.5% of students consider herbal medicines a safer alternative to conventional medicines. The students obtain information on herbal medicines from various sources and media. Most of the students (81.7%) obtain information on herbal medicines from the Internet, and only 20.4% mentioned lectures/workshops/seminars as a source of such information. Therefore, we can conclude that students who have attained the skills of critically assessing the therapeutic properties of conventional medicines have the potential to use these skills for evidence-based decisions regarding herbal medicines.

Keywords: evidence-based decision, pharmacy education, student perception, traditional medicines

Procedia PDF Downloads 271
995 TNFRSF11B Gene Polymorphisms A163G and G1181C in Prediction of Osteoporosis Risk

Authors: I. Boroňová, J. Bernasovská, J. Kľoc, Z. Tomková, E. Petrejčíková, D. Gabriková, S. Mačeková

Abstract:

Osteoporosis is a complex disease characterized by low bone mineral density, which is determined by an interaction of genetics with metabolic and environmental factors. Current research in the genetics of osteoporosis is focused on the identification of responsible genes and polymorphisms. The TNFRSF11B gene plays a key role in bone remodeling. The aim of this study was to investigate the genotype and allele distribution of the A163G (rs3102735) osteoprotegerin gene promoter and G1181C (rs2073618) osteoprotegerin first exon polymorphisms in a group of 180 unrelated postmenopausal women with diagnosed osteoporosis and 180 normal controls. Genomic DNA was isolated from peripheral blood leukocytes using standard methodology. Genotyping for the presence of the polymorphisms was performed using Custom Taqman® SNP Genotyping assays. Hardy-Weinberg equilibrium was tested for each SNP in the groups of participants using the chi-square (χ2) test. The distribution of the investigated genotypes in the group of patients with osteoporosis was as follows: AA (66.7%), AG (32.2%), GG (1.1%) for the A163G polymorphism; GG (19.4%), CG (44.4%), CC (36.1%) for the G1181C polymorphism. The distribution of genotypes in normal controls was as follows: AA (71.1%), AG (26.1%), GG (2.8%) for the A163G polymorphism; GG (22.2%), CG (48.9%), CC (28.9%) for the G1181C polymorphism. For the A163G polymorphism, the variant G allele was more common among patients with osteoporosis: 17.2% versus 15.8% in normal controls. Similarly, for the G1181C polymorphism, the C allele occurred more frequently in the group of patients with osteoporosis (58.3% versus 53.3%). Genotype and allele distributions showed no significant differences (A163G: χ2=0.270, p=0.605; χ2=0.250, p=0.616; G1181C: χ2=1.730, p=0.188; χ2=1.820, p=0.177).
Our results represent an initial study; further studies with larger samples and association analyses will be carried out. Knowing the distribution of genotypes is important for assessing the impact of these polymorphisms on various parameters associated with osteoporosis. Screening to identify “at-risk” women likely to develop osteoporosis, and initiating early intervention, appears to be the most effective strategy to substantially reduce the risk of osteoporosis.
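The case-control comparison above can be reproduced in outline with a Pearson chi-square statistic on a contingency table. The allele counts below are reconstructed from the reported percentages (2 × 180 alleles per group) and are therefore approximate; no continuity correction is applied:

```python
def chi_square(observed):
    """Pearson chi-square statistic for a 2-D contingency table
    given as a list of row lists of observed counts."""
    rows = [sum(r) for r in observed]
    cols = [sum(c) for c in zip(*observed)]
    total = sum(rows)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# A163G allele counts [G, A] reconstructed from the reported
# frequencies (17.2% G in patients, 15.8% G in controls).
patients = [62, 298]
controls = [57, 303]
stat = chi_square([patients, controls])
```

With these reconstructed counts the statistic lands close to the paper's reported allele-level χ2 of 0.250, though the exact grouping the authors used is not stated in the abstract.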

Keywords: osteoporosis, real-time PCR method, SNP polymorphisms

Procedia PDF Downloads 322
994 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder

Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen

Abstract:

Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction; this task is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on a discriminative Encoder-Decoder architecture have achieved noticeable results. However, we find that these discriminative generators usually generate explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model. In fact, the same logic information exists in both the premise-hypothesis pair and the explanation, and it is easy to extract the logic information that is explicitly contained in the target explanation. Hence, we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained with the guidance of explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called Logic Supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark explanation-Stanford Natural Language Inference (e-SNLI) demonstrate that the proposed VariationalEG achieves significant improvement compared to previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.
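The abstract does not specify the architecture, but the latent-variable machinery it builds on is the standard VAE reparameterisation and KL regulariser, sketched here in NumPy; "posterior collapse" corresponds to the KL term being driven to zero so the decoder ignores the latent variable:

```python
import numpy as np

def sample_latent(mu, log_var, rng):
    """Reparameterisation trick: draw z ~ N(mu, sigma^2) as a
    deterministic function of (mu, log_var) and external noise, so
    gradients can flow through the sampling step."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL(N(mu, sigma^2) || N(0, 1)), the regulariser shaping the
    latent space. If training drives this to ~0 everywhere, the
    latent variable carries no information (posterior collapse)."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.zeros(4), np.zeros(4)
z = sample_latent(mu, log_var, rng)
```

Logic Supervision, as described, would add a classification loss on z so the latent code is forced to predict the explicit logic label, keeping the KL from collapsing.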

Keywords: natural language inference, explanation generation, variational auto-encoder, generative model

Procedia PDF Downloads 143
993 Success in Construction Projects via the Effectiveness of the Project Team

Authors: Zarabizan Zakaria, Hayati Zainal

Abstract:

The construction industry is one of the most important sectors that contribute to the nation’s economy and catalyzes the growth of other industries. However, some construction projects are not completed within their stipulated time, scope and budget due to several factors. This problem arises from weaknesses in human factors, especially ineffective leadership practiced by project managers and contractors in managing project teams. Therefore, a construction project should emphasize the project team. The project team is formed at the implementation of the project, which includes the project brief, project scope, customer requirements and provided designs. Many organizations in the construction sector use teams to meet today's global competition and customer expectations; however, team effectiveness evaluation is required. To ensure that the construction team is successful and effective, the construction department must encourage, measure, set up, and evaluate or review the effectiveness of the project team that was formed. To produce a better outcome for a high-end project, an effective and efficient project team is required, which also helps increase overall productivity. The purpose of this study is to determine the role of team effectiveness in the construction project team based on overall construction project performance. It examines several different factors related to team effectiveness, as well as the relationship between team effectiveness factors and project performance aspects. A Team Effectiveness Review and a Project Performance Review were developed for data collection. The data collected were analyzed using several statistical tests, and the results were validated using semi-structured interviews. In addition, a comprehensive survey was developed to assess how construction project teams maintain their effectiveness throughout the project phases.
Project Team Leadership was found to be the most important factor in determining whether a project is successful. In addition, a definition of team effectiveness in the construction project team was developed based on the perspectives of project clients and project team members. The results of this study are expected to provide an idea of the factors that need to be focused on to improve team effectiveness with respect to project performance aspects. At the same time, the definition of team effectiveness from the views of team members and owners has been developed to provide a better understanding of the term team effectiveness in construction projects.

Keywords: project team, leadership, construction project, project successful

Procedia PDF Downloads 168
992 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks

Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer

Abstract:

New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), which is unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a period of time. Once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging (HSI) to honey frames before bulk extraction to minimise the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) models showed limited efficacy in interpreting chemical footprints due to large non-linear relationships between predictors and predictand in a large sample set, likely due to honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing hyperspectral data to extract biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey from multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
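To illustrate the kind of operation a 1D-CNN applies to each pixel spectrum, here is a minimal valid-mode convolution plus ReLU in NumPy. The spectrum and filter values are invented toy data; a real model learns many such filters over hundreds of spectral bands:

```python
import numpy as np

def conv1d(spectrum, kernel):
    """Valid-mode 1-D convolution (strictly, cross-correlation, as in
    CNN layers) of a spectrum with a filter."""
    n, k = len(spectrum), len(kernel)
    return np.array([np.dot(spectrum[i:i + k], kernel)
                     for i in range(n - k + 1)])

def relu(x):
    """Standard CNN nonlinearity: clip negative activations to zero."""
    return np.maximum(x, 0.0)

# A 5-band toy "pixel spectrum" and a 3-tap gradient-detecting filter.
spectrum = np.array([0.1, 0.2, 0.9, 0.8, 0.1])
kernel = np.array([-1.0, 0.0, 1.0])
features = relu(conv1d(spectrum, kernel))
```

Stacking such layers lets the network respond to non-linear combinations of spectral features, which is what gave 1D-CNN its edge over the linear PLS model here.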

Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics

Procedia PDF Downloads 131
991 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single cell data simultaneously without relying on particular genes or thresholds chosen by eye. It detects communities in a Shared Nearest Neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with weak cluster membership. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, taken before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the post-chemotherapy ovarian data with interesting marker genes that are potentially relevant for medical research.
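The core similarity measure behind the network described above can be sketched in a few lines. The neighbour lists below are toy data; the actual pipeline derives them from gene-expression profiles and then runs modularity-based community detection on the resulting weighted graph:

```python
def shared_nn_similarity(neighbors, a, b):
    """Shared Nearest Neighbour (SNN) similarity: the number of
    k-nearest neighbours two cells have in common. Low-weight edges
    and weakly attached vertices are candidates for noise filtering."""
    return len(set(neighbors[a]) & set(neighbors[b]))

# Toy k-nearest-neighbour lists (k = 3) for five cells.
neighbors = {
    0: [1, 2, 3],
    1: [0, 2, 3],
    2: [0, 1, 3],
    3: [0, 1, 2],
    4: [0, 3, 2],  # a "noisy" cell: near others, reciprocated by none
}
w = shared_nn_similarity(neighbors, 0, 1)
```

Cells 0-3 form a tight community (each appears in the others' lists), while cell 4 is similar to several of them but belongs to none, which is exactly the pattern the vertex-removal step targets.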

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 106
990 Expression of Micro-RNA268 in Zinc-Deficient Rice

Authors: Sobia Shafqat, Saeed Ahmad Qaisrani

Abstract:

MicroRNAs play an essential role in the regulation and development of most processes in eukaryotes because of their prospective role as mediators controlling cell growth and differentiation, and as part of the RNA response in plants under biotic and abiotic stressors. Although Zn is an essential element, as a heavy metal it is toxic to plants in excess, though far less so than metals such as Cd, Hg, and Pb. In rice, microRNA268 is induced under abiotic stress such as Zn stress. When microRNA268 was overexpressed in transgenic plants, the seedlings accumulated large amounts of malondialdehyde and hydrogen peroxide and an excessive quantity of Zn at the seedling stage, indicating that microRNA268 acts as a negative regulator of rice tolerance under Zn stress; its role as a modulator, however, varies with ecological conditions. A better understanding of the role of microRNA268 under stress conditions could be applied to increase plant tolerance under Zn stress, because microRNA intervention is a technique for regulating gene expression. The proposed study examined the genetic factors of Zn stress and toxicity effects on rice plants at District Vehari, Pakistan. The trial was performed with three replications in a randomized complete block design (RCBD), with blocks receiving different treatment concentrations. Under Zn deficiency, seedling growth of microRNA268-overexpressing rice was not arrested, despite the accumulation of malondialdehyde, hydrogen peroxide, and Zn in the seedlings. The results confirmed that microRNA268 acts as a negative regulator under Zn stress and plays a necessary role in the stress tolerance of rice plants.
This agronomic framework yielded high agronomic applicability and good yield outcomes in rice with a specific amount of Zn application.

Keywords: micro RNA268, zinc, rice, agronomic approach

Procedia PDF Downloads 55
989 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data.
The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate strong correlations between the reconstruction error and distinctiveness of images and their memorability scores. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
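The two quantities correlated with memorability can be sketched directly. The latent codes below are toy values rather than MemCat features, and mean squared error stands in for the study's structural/perceptual losses:

```python
import numpy as np

def reconstruction_error(original, reconstructed):
    """Mean squared error between an image and its autoencoder output,
    a simple stand-in for the loss functions in the study."""
    return float(np.mean((original - reconstructed) ** 2))

def distinctiveness(latents, i):
    """Euclidean distance from image i's latent code to its nearest
    neighbour in latent space, excluding itself."""
    dists = np.linalg.norm(latents - latents[i], axis=1)
    dists[i] = np.inf  # exclude self-distance
    return float(dists.min())

# Toy latent codes: image 2 sits far from the others, so it should
# score as the most distinctive (and, per the study, most memorable).
latents = np.array([[0.0, 0.0],
                    [0.1, 0.0],
                    [3.0, 4.0]])
scores = [distinctiveness(latents, i) for i in range(len(latents))]
```
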

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 75
988 Prediction of Positive Cloud-to-Ground Lightning Striking Zones for Charged Thundercloud Based on Line Charge Model

Authors: Surajit Das Barman, Rakibuzzaman Shah, Apurv Kumar

Abstract:

Bushfires are one of the dominant factors in the creation of pyrocumulus thunderclouds, which ignite new fires through pyrocumulonimbus (pyroCb) lightning strikes and cause major losses of lives and property worldwide. A conceptual model-based risk planning approach would be beneficial to predict the lightning striking zones on the surface of the earth underneath a pyroCb thundercloud. A pyroCb thundercloud can generate both positive cloud-to-ground (+CG) and negative cloud-to-ground (-CG) lightning, of which +CG tends to ignite more bushfires and cause massive damage to nature and infrastructure. In this paper, a simple line-charge-structured thundercloud model is constructed in 2-D coordinates using the method of image charges to predict the probable +CG lightning striking zones on the earth's surface for two conceptual thundercloud charge configurations: a tilted dipole structure and a conventional tripole structure with an excessive lower positive charge region, both of which lead to +CG lightning. The electric potential and surface charge density along the earth's surface are investigated for both structures by continuously adjusting the position and the charge density of their charge regions. Simulation results for the tilted dipole structure confirm the down-shear extension of the upper positive charge region in the direction of the cloud's forward flank by 4 to 8 km, resulting in negative surface charge density; +CG lightning would be expected to strike within 7.8 km to 20 km of the earth's surface in the direction of the cloud's forward flank. On the other hand, the conceptual tripole charge structure with an enhanced lower positive charge region develops negative surface charge density on the earth's surface in the range |x| < 6.5 km beneath the thundercloud and highly favors the production of +CG lightning strikes.
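A minimal sketch of the method of image charges for a single line charge over a grounded plane is shown below. This is the standard textbook building block, not the paper's full dipole/tripole configuration, and the charge magnitude and height are hypothetical:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def potential(x, y, line_charge, h):
    """Potential at (x, y) of an infinite line charge (C/m) at height h
    over a grounded plane at y = 0, by the method of image charges:
    the real charge plus an opposite-sign image at -h."""
    r_real = math.hypot(x, y - h)
    r_image = math.hypot(x, y + h)
    return line_charge / (2 * math.pi * EPS0) * math.log(r_image / r_real)

def surface_charge_density(x, line_charge, h):
    """Induced surface charge density on the ground plane (C/m^2)."""
    return -line_charge * h / (math.pi * (x * x + h * h))

# Hypothetical -1 C/m lower charge centre at 2 km altitude: the induced
# surface density is positive, the precondition for +CG attachment.
sigma0 = surface_charge_density(0.0, -1.0, 2000.0)
```

The paper's configurations superpose several such line charges (and their images), so the sign of the net surface density along x marks the predicted striking zones.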

Keywords: pyrocumulonimbus, cloud-to-ground lightning, charge structure, surface charge density, forward flank

Procedia PDF Downloads 104
987 Thulium Laser Vaporisation and Enucleation of Prostate in Patients on Anticoagulants and Antiplatelet Agents

Authors: Abdul Fatah, Naveenchandra Acharya, Vamshi Krishna, T. Shivaprasad, Ramesh Ramayya

Abstract:

Background: A significant number of patients with bladder outlet obstruction due to BPH are on antiplatelets and anticoagulants. Prostate surgery in this group of patients, either in the form of TURP or open prostatectomy, is associated with an increased risk of bleeding complications requiring transfusions, packing of the prostatic fossa, or ligation or embolization of the internal iliac arteries. Withholding antiplatelets and anticoagulants may be associated with cardiac and other complications. The efficacy of the thulium laser in the above group of patients was evaluated in terms of peri-operative, postoperative and delayed bleeding complications as well as cardiac events in the peri-operative and immediate postoperative period. Methods: 217 patients with a mean age of 68.8 years were enrolled between March 2009 and March 2013 and treated for BPH with ThuLEP. Every patient was evaluated at baseline according to: Digital Rectal Examination (DRE), prostate volume, Post-Void Residual volume (PVR), International Prostate Symptom Score (I-PSS), PSA values, urine analysis and urine culture, and uroflowmetry. The postoperative complications noted were drop in hemoglobin level, transfusion rate, postoperative cardiac events within 30 days, delayed hematuria, and events such as deep vein thrombosis and pulmonary embolism. Results: Our data showed a better postoperative outcome in terms of postoperative bleeding requiring intervention, 7 (3.2%); transfusion rate, 4 (1.8%); cardiac events within 30 days, 4 (1.8%); and delayed hematuria within 6 months, 2 (0.9%), compared with other series of prostatectomies. Conclusion: Thulium laser prostatectomy is a safe and effective option for patients with cardiac comorbidities and those on antiplatelet agents and anticoagulants. The complication rate is lower than in larger series reported with open and transurethral prostatectomies.

Keywords: thulium laser, prostatectomy, antiplatelet agents, bleeding

Procedia PDF Downloads 389
986 Role of Spatial Variability in the Service Life Prediction of Reinforced Concrete Bridges Affected by Corrosion

Authors: Omran M. Kenshel, Alan J. O'Connor

Abstract:

Estimating the service life of Reinforced Concrete (RC) bridge structures located in corrosive marine environments is of great importance to their owners/engineers. Traditionally, bridge owners/engineers relied mostly on subjective engineering judgment, e.g. visual inspection, in their estimation approach. However, because financial resources are often limited, rational calculation methods are needed to aid in making reliable and more accurate predictions of the service life of RC structures, in order to direct funds to the bridges found to be the most critical. Criticality of the structure can be considered either from the Structural Capacity (i.e. Ultimate Limit State) or from the Serviceability viewpoint, whichever is adopted. This paper considers the service life of the structure only from the Structural Capacity viewpoint. Considering the great variability associated with the parameters involved in the estimation process, a probabilistic approach is most suited. The probabilistic modelling adopted here used the Monte Carlo simulation technique to estimate the reliability (i.e. probability of failure) of the structure under consideration. In this paper the authors used their own experimental data for the Correlation Length (CL) of the most important deterioration parameters. The CL is a parameter of the Correlation Function (CF) by which the spatial fluctuation of a given deterioration parameter is described. The CL data used here were produced by analyzing 45 chloride profiles obtained from a 30-year-old RC bridge located in a marine environment. The service life of the structure was predicted in terms of the load carrying capacity of an RC bridge beam girder. The analysis showed that the influence of spatial variability (SV) is only evident if the reliability of the structure is governed by flexure failure rather than by shear failure.
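In its simplest form, the Monte Carlo reliability estimate described above reduces to counting samples in which the load effect exceeds the (corrosion-degraded) resistance. The distributions and their parameters below are illustrative placeholders, not the paper's data, and the spatial-variability modelling is omitted:

```python
import random

def probability_of_failure(n_samples, seed=42):
    """Crude Monte Carlo estimate of P(failure) = P(R < S) for a
    girder with normally distributed resistance R and load effect S.
    Means and spreads are invented for illustration (kN·m)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        resistance = rng.gauss(1500.0, 150.0)   # degrades with corrosion
        load_effect = rng.gauss(900.0, 180.0)
        if resistance < load_effect:
            failures += 1
    return failures / n_samples

pf = probability_of_failure(100_000)
```

Spatial variability enters by making the resistance of different girder segments correlated random fields (via the correlation length CL) rather than a single random variable, which changes where and how often the weakest section fails.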

Keywords: chloride-induced corrosion, Monte Carlo simulation, reinforced concrete, spatial variability

Procedia PDF Downloads 469
985 Comparison between Roller-Foam and Neuromuscular Facilitation Stretching on Flexibility of the Hamstring Muscles

Authors: Paolo Ragazzi, Olivier Peillon, Paul Fauris, Mathias Simon, Raul Navarro, Juan Carlos Martin, Oriol Casasayas, Laura Pacheco, Albert Perez-Bellmunt

Abstract:

Introduction: The use of stretching techniques in the sports world is frequent and widespread because of their many effects. One of the main benefits is the gain in flexibility and range of motion and the facilitation of sporting performance. Recently, the use of the Roller-Foam (RF) has spread in sports practice at both elite and recreational levels, its benefits being similar to those observed with stretching. The objective of the following study is to compare the results of the Roller-Foam with proprioceptive neuromuscular facilitation (PNF) stretching (one of the stretching techniques with the most evidence) on the hamstring muscles. Study design: The study is a single-blind, randomized controlled trial with 40 healthy volunteers. Intervention: The subjects were distributed randomly into one of the following groups; PNF stretching intervention group: 4 repetitions of PNF stretching (5 seconds of contraction, 5 seconds of relaxation, 20 seconds of stretch); Roller-Foam intervention group: 2 minutes of Roller-Foam applied to the hamstring muscles. Main outcome measures: hamstring muscle flexibility was assessed at the beginning, during (after 30 seconds of intervention) and at the end of the session using the Modified Sit and Reach (MSR) test. Results: The baseline data in both groups are comparable. The PNF group obtained an increase in flexibility of 3.1 cm at 30 seconds (first series) and of 5.1 cm at 2 minutes (after the last series). The RF group obtained a 0.6 cm difference at 30 seconds and 2.4 cm after 2 minutes of Roller-Foam application. The results were statistically significant within groups but not between groups. Conclusions: Although the use of the Roller-Foam is spreading in the sports and rehabilitation fields, the results of the present study suggest that the gain in hamstring flexibility is greater with PNF-type stretching than with RF.
These results may be due to the fact that the Roller-Foam acts more on the fascial tissue, while stretching acts more on the myotendinous unit. Future studies are needed, increasing the sample size and diversifying the types of stretching.

Keywords: hamstring muscle, stretching, neuromuscular facilitation stretching, roller foam

Procedia PDF Downloads 183
984 An Experimental Investigation of the Surface Pressure on Flat Plates in Turbulent Boundary Layers

Authors: Azadeh Jafari, Farzin Ghanadi, Matthew J. Emes, Maziar Arjomandi, Benjamin S. Cazzolato

Abstract:

The turbulence within the atmospheric boundary layer induces highly unsteady aerodynamic loads on structures. These loads, if not accounted for in the design process, can lead to structural failure and are therefore important for the design of structures. For an accurate prediction of wind loads, understanding the correlation between atmospheric turbulence and the aerodynamic loads is necessary. The aim of this study is to investigate the effect of turbulence within the atmospheric boundary layer on the surface pressure on a flat plate over a wide range of turbulence intensities and integral length scales. The flat plate is chosen as a fundamental geometry representing structures such as solar panels and billboards. Experiments were conducted at the University of Adelaide large-scale wind tunnel. Two wind tunnel boundary layers with different intensities and length scales of turbulence were generated using two sets of spires with different dimensions and a fetch of roughness elements. Average longitudinal turbulence intensities of 13% and 26% were achieved in each boundary layer, and the longitudinal integral length scale within the two boundary layers was between 0.4 m and 1.22 m. The pressure distributions on a square flat plate at elevation angles between 30° and 90° were measured within the two boundary layers. It was found that the peak pressure coefficient on the flat plate increased with increasing turbulence intensity and integral length scale. For example, the peak pressure coefficient on a flat plate elevated at 90° increased from 1.2 to 3 as turbulence intensity increased from 13% to 26%. Furthermore, both the mean and the peak pressure distributions on the flat plate varied with turbulence intensity and length scale. The results of this study can be used to provide a more accurate estimation of the unsteady wind loads on structures such as buildings and solar panels.
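The pressure coefficients reported in the abstract follow the standard dynamic-pressure normalisation; the sketch below illustrates the definition with invented numbers, not the study's wind-tunnel data.

```python
# Illustrative: the standard pressure-coefficient definition used for plate
# surface pressures. All numbers below are hypothetical examples.

def pressure_coefficient(p_surface, p_static, rho, u_mean):
    """Cp = (p - p_inf) / (0.5 * rho * U^2)."""
    q = 0.5 * rho * u_mean ** 2          # dynamic pressure, Pa
    return (p_surface - p_static) / q

# Example: 90 Pa gauge pressure in a 10 m/s flow of air (rho ~ 1.2 kg/m^3)
cp = pressure_coefficient(p_surface=90.0, p_static=0.0, rho=1.2, u_mean=10.0)
print(round(cp, 2))  # 1.5
```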

Keywords: atmospheric boundary layer, flat plate, pressure coefficient, turbulence

Procedia PDF Downloads 133
983 Deorbiting Performance of Electrodynamic Tethers to Mitigate Space Debris

Authors: Giulia Sarego, Lorenzo Olivieri, Andrea Valmorbida, Carlo Bettanini, Giacomo Colombatti, Marco Pertile, Enrico C. Lorenzini

Abstract:

International guidelines recommend removing any artificial body in Low Earth Orbit (LEO) within 25 years of mission completion. Among disposal strategies, electrodynamic tethers appear to be a promising option for LEO, thanks to their limited storage mass and minimal interface requirements to the host spacecraft. In particular, recent technological advances make it feasible to deorbit large objects with tether lengths of a few kilometers or less. To further investigate such an innovative passive system, the European Union is currently funding the project E.T.PACK – Electrodynamic Tether Technology for Passive Consumable-less Deorbit Kit in the framework of the H2020 Future Emerging Technologies (FET) Open program. The project focuses on the design of an end-of-life disposal kit for LEO satellites. This kit deploys a taped tether that can be activated at the spacecraft's end of life to perform autonomous deorbit within the international guidelines. In this paper, the orbital performance of the E.T.PACK deorbiting kit is compared to that of other disposal methods. In addition, the orbital decay prediction is parametrized as a function of spacecraft mass and tether system performance. Different values of tether length, width, and thickness will be evaluated for various scenarios (i.e., different initial orbital parameters), and the results will be compared to other end-of-life disposal methods with similar allocated resources. The performance of a more innovative configuration, in which the tape is coated with a low-work-function (LWT) thermionic material so that no active cathode component is required, will also be briefly discussed. The results show that the electrodynamic tether option can be a competitive, high-performance solution for satellite disposal compared to other deorbit technologies.
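The orbital-decay parametrization described above can be sketched crudely: for a circular orbit under a constant retarding force F, energy arguments give da/dt = -2*F*a^(3/2)/(m*sqrt(mu)). The integration below uses a constant force as a stand-in for the orbit-averaged electrodynamic drag; all numbers are invented, not E.T.PACK results.

```python
import math

MU = 3.986e14  # Earth's gravitational parameter, m^3/s^2

def decay_time_days(a0_km, a_final_km, drag_force_n, mass_kg, dt=1000.0):
    """Euler-integrate da/dt = -2*F*a**1.5 / (m*sqrt(MU)) for a circular
    orbit with a constant retarding force (a crude stand-in for the
    orbit-averaged electrodynamic tether drag). Returns elapsed days."""
    a = (6371.0 + a0_km) * 1e3           # semi-major axis, m
    a_final = (6371.0 + a_final_km) * 1e3
    t = 0.0
    while a > a_final:
        a -= 2.0 * drag_force_n * a ** 1.5 / (mass_kg * math.sqrt(MU)) * dt
        t += dt
    return t / 86400.0

# Hypothetical 500 kg spacecraft, 2 mN average drag, 800 km -> 300 km altitude
days = decay_time_days(800.0, 300.0, 2e-3, 500.0)
```

For these toy values the decay takes on the order of two years, which is why even millinewton-level electrodynamic drag can satisfy the 25-year guideline.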

Keywords: deorbiting performance, H2020, spacecraft disposal, space electrodynamic tethers

Procedia PDF Downloads 168
982 Classifying Turbomachinery Blade Mode Shapes Using Artificial Neural Networks

Authors: Ismail Abubakar, Hamid Mehrabi, Reg Morton

Abstract:

Currently, extensive signal analysis is performed in order to evaluate the structural health of turbomachinery blades. This approach is constrained by time and the availability of qualified personnel, so new approaches to blade dynamics identification that provide faster and more accurate results are sought after. Generally, modal analysis is employed to acquire the dynamic properties of a vibrating turbomachinery blade and is widely adopted in blade condition monitoring. The analysis provides useful information on the different modes of vibration and their natural frequencies by exploring the different shapes the blade can take up during vibration, since every mode shape has a corresponding natural frequency. Experimental modal testing and finite element analysis are the traditional methods used to evaluate mode shapes, but they have limited applicability in real-life scenarios and so cannot support a robust condition monitoring scheme. Real-time mode shape evaluation requires rapid evaluation and low computational cost, for which the traditional techniques are unsuitable. In this study, an artificial neural network is developed to evaluate the mode shapes of a lab-scale rotating blade assembly, using results from finite element modal analysis as training data. The network performance evaluation shows that the artificial neural network (ANN) is capable of mapping the correlation between natural frequencies and mode shapes, without the need for extensive signal analysis. The approach offers the advantages of real-time mode shape classification, simplicity of implementation, and accurate prediction. The work paves the way for further development of a robust condition monitoring system that incorporates real-time mode shape evaluation.
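The core idea — a classifier trained on FE modal results that maps a measured natural frequency to a mode-shape label — can be illustrated with a deliberately simpler stand-in. The paper uses an ANN; the sketch below uses a nearest-centroid classifier instead, and both the frequencies and the labels are invented placeholders, not FE results from the study.

```python
# Nearest-centroid stand-in for the paper's ANN: map a measured natural
# frequency (Hz) to a mode-shape label. Training "FE results" are invented.

TRAINING = {
    "1st bending": [82.0, 85.0, 79.5],
    "1st torsion": [140.0, 143.5, 138.0],
    "2nd bending": [210.0, 205.0, 215.0],
}

# One centroid (mean frequency) per mode-shape class
CENTROIDS = {label: sum(freqs) / len(freqs) for label, freqs in TRAINING.items()}

def classify(freq_hz):
    """Return the mode-shape label whose centroid is closest to freq_hz."""
    return min(CENTROIDS, key=lambda label: abs(freq_hz - CENTROIDS[label]))

print(classify(84.0))   # "1st bending"
```

An ANN earns its keep over this toy when the mapping involves many correlated features (several frequencies, damping ratios, operating speed) rather than a single scalar.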

Keywords: modal analysis, artificial neural network, mode shape, natural frequencies, pattern recognition

Procedia PDF Downloads 148
981 Assessment of the Level of Sedation and Associated Factors Among Intubated Critically Ill Children in the Pediatric Intensive Care Unit of Jimma University Medical Center: A Fourteen-Month Prospective Observational Study, 2023

Authors: Habtamu Wolde Engudai

Abstract:

Background: Sedation can be provided to facilitate a procedure or to stabilize patients admitted to the pediatric intensive care unit (PICU). Sedation is often necessary to maintain optimal care for critically ill children requiring mechanical ventilation. However, sedation that is too deep or too light has its own adverse effects, and hence it is important to monitor the level of sedation and maintain an optimal level. Objectives: To assess the level of sedation and associated factors among intubated critically ill children admitted to the PICU of JUMC, Jimma. Methods: A prospective observational study was conducted in the PICU of JUMC, beginning in September 2021, on 105 patients admitted to the PICU, aged less than 14 years and with a GCS > 8. Data were collected by residents and nurses working in the PICU, and data entry was done with EpiData Manager (version 4.6.0.2). Statistical analysis and chart creation were performed using SPSS version 26. Data are presented as means, percentages, and standard deviations. The assumptions of logistic regression were checked. To find potential predictors, bivariable logistic regression was used for each predictor and the outcome variable. A p-value of <0.05 was considered statistically significant. Findings are presented using figures, AORs, percentages, and a summary table. Results: 105 critically ill children who had been started on continuous or intermittent sedative drugs were involved in this study. Sedation level was assessed using a COMFORT scale three times per day. Based on these assessments, suboptimal sedation was found in 44.8% of patients at baseline, 36.2% at eight hours, and 24.8% at sixteen hours.
There was a significant association between suboptimal sedation and both the duration of stay on mechanical ventilation and the rate of unplanned extubation (p < 0.05); the Hosmer-Lemeshow test indicated adequate model fit (p = 0.44).
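The bivariable association reported above can be illustrated with a minimal odds-ratio calculation on a 2×2 table. The counts below are invented for illustration only; they are not the study's data.

```python
import math

# Hypothetical 2x2 table: suboptimal sedation vs. prolonged mechanical
# ventilation (counts invented, not the study's data).
a, b = 30, 17   # suboptimal sedation: prolonged / not prolonged
c, d = 19, 39   # optimal sedation:    prolonged / not prolonged

odds_ratio = (a * d) / (b * c)

# Wald 95% confidence interval on the log-odds scale
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
```

A confidence interval excluding 1 corresponds to the p < 0.05 significance criterion the study applied.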

Keywords: level of sedation, critically ill children, Pediatric intensive care unit, Jimma university

Procedia PDF Downloads 57
980 A Continuous Real-Time Analytic for Predicting Instability in Acute Care Rapid Response Team Activations

Authors: Ashwin Belle, Bryce Benson, Mark Salamango, Fadi Islim, Rodney Daniels, Kevin Ward

Abstract:

A reliable, real-time, non-invasive system that can identify patients at risk for hemodynamic instability is needed to aid clinicians in their efforts to anticipate patient deterioration and initiate early interventions. The purpose of this pilot study was to explore the clinical capabilities of a real-time analytic from a single lead of an electrocardiograph to correctly distinguish between rapid response team (RRT) activations due to hemodynamic (H-RRT) and non-hemodynamic (NH-RRT) causes, as well as predict H-RRT cases with actionable lead times. The study consisted of a single-center, retrospective cohort of 21 patients with RRT activations from step-down and telemetry units. Through electronic health record review, and blinded to the analytic's output, clinicians categorized each patient as an H-RRT or NH-RRT case. The analytic output and the categorization were compared, and the prediction lead time prior to the RRT call was calculated. The analytic correctly distinguished between H-RRT and NH-RRT cases with 100% accuracy, demonstrating 100% positive and negative predictive values and 100% sensitivity and specificity. In H-RRT cases, the analytic detected hemodynamic deterioration with a median lead time of 9.5 hours prior to the RRT call (range 14 minutes to 52 hours). The study demonstrates that an electrocardiogram (ECG) based analytic has the potential to provide clinical decision and monitoring support, helping caregivers identify at-risk patients within a clinically relevant timeframe and allowing for increased vigilance and early interventional support to reduce the chances of continued patient deterioration.
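The performance figures quoted above (sensitivity, specificity, PPV, NPV) all derive from the confusion matrix. The sketch below shows the standard definitions; the 10/11 split of the 21 patients into H-RRT and NH-RRT is hypothetical, while the zero off-diagonal counts mirror the perfect separation the study reports.

```python
# Confusion-matrix metrics for a binary classifier (H-RRT = positive class).
# The 10/11 split of the 21 patients is assumed; the zeros reflect the
# perfect classification reported in the abstract.
tp, fp = 10, 0   # H-RRT correctly flagged / NH-RRT wrongly flagged
fn, tn = 0, 11   # H-RRT missed / NH-RRT correctly cleared

sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value
```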

Keywords: critical care, early warning systems, emergency medicine, heart rate variability, hemodynamic instability, rapid response team

Procedia PDF Downloads 140
979 Temperature-Based Detection of Initial Yielding Point in Loading of Tensile Specimens Made of Structural Steel

Authors: Aqsa Jamil, Tamura Hiroshi, Katsuchi Hiroshi, Wang Jiaqi

Abstract:

The yield point represents the upper limit of the forces that can be applied to a specimen without causing any permanent deformation. After yielding, the behavior of the specimen changes suddenly, including the possibility of cracking or buckling, so the accumulation of damage and the type of fracture depend on this condition. Because it is difficult to accurately detect the yield points of the several stress concentration points in structural steel specimens, an effort has been made in this research work to develop a convenient technique using thermography (temperature-based detection) during tensile tests for the precise detection of yield point initiation. To verify the applicability of the thermography camera, tests were conducted under different loading conditions, measuring deformation with various strain gauges and monitoring the surface temperature with the thermography camera. The yield point of the specimens was estimated with the help of the temperature dip that occurs, due to the thermoelastic effect, at the onset of plastic deformation. The scattering of the data was checked by performing a repeatability analysis. The effects of ambient temperature and light sources were checked by carrying out tests both in the daytime and at midnight and by calculating the signal-to-noise ratio (SNR) of the noisy data from the infrared thermography camera; from this it can be concluded that the camera is independent of the testing time and of the presence of a visible light source. Furthermore, a fully coupled thermal-stress analysis was performed using the Abaqus/Standard exact implementation technique to validate the temperature profiles obtained from the thermography camera and to check the feasibility of numerical simulation for predicting the results extracted with the thermographic technique.
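The SNR check described above uses the standard power-ratio definition. The sketch below computes SNR in decibels for a short temperature trace; the trace and its noise are invented, not camera data from the study.

```python
import math

def snr_db(signal, noisy):
    """Signal-to-noise ratio in dB: 10*log10(signal power / noise power),
    where noise is taken as the difference between the noisy and the
    reference trace."""
    p_signal = sum(s * s for s in signal) / len(signal)
    noise = [n - s for s, n in zip(signal, noisy)]
    p_noise = sum(e * e for e in noise) / len(noise)
    return 10.0 * math.log10(p_signal / p_noise)

# Hypothetical surface-temperature trace (deg C) with small sensor noise
clean = [25.0, 25.1, 25.2, 25.1, 24.9, 24.8]
noisy = [25.02, 25.08, 25.23, 25.07, 24.91, 24.79]
snr = snr_db(clean, noisy)
```

A comparable SNR for daytime and midnight traces is what supports the conclusion that the camera is independent of the visible light source.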

Keywords: signal to noise ratio, thermoelastic effect, thermography, yield point

Procedia PDF Downloads 100
978 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence

Authors: Sogand Barghi

Abstract:

The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. 
This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.
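The fraud-detection monitoring described above boils down to flagging transactions that deviate from an established pattern. Real systems use far richer models; the sketch below is a toy stand-in using a robust modified z-score (median/MAD), with an invented ledger.

```python
import statistics

def flag_anomalies(amounts, threshold=3.5):
    """Flag indices whose modified z-score (median/MAD based, robust to the
    very outliers it is looking for) exceeds the threshold. A toy stand-in
    for the AI-driven fraud-detection monitoring described above."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(x - med) for x in amounts)
    return [i for i, x in enumerate(amounts)
            if 0.6745 * abs(x - med) / mad > threshold]

# Hypothetical ledger: routine payments plus one suspicious transfer
ledger = [120.0, 95.5, 101.3, 99.8, 110.2, 104.9, 5000.0, 98.7]
print(flag_anomalies(ledger))   # [6]
```

The median/MAD form is chosen deliberately: a plain mean/standard-deviation z-score is itself distorted by a large outlier and can fail to flag it.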

Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting

Procedia PDF Downloads 61
977 Culture of Human Mesenchymal Stem Cells in Xeno-Free Serum-Free Culture Conditions on Laminin-521

Authors: Halima Albalushi, Mohadese Boroojerdi, Murtadha Alkhabori

Abstract:

Introduction: Maintaining stem cell properties during culture necessitates recreating the natural cell niche. Studies have reported promising maintenance of mesenchymal stem cell (MSC) properties when using extracellular matrices such as CELLstart™, the recommended coating material for stem cells cultured in serum-free and xeno-free conditions. Laminin-521 is a crucial adhesion protein found in the natural stem cell niche, and it plays an important role in maintaining the self-renewal, pluripotency, standard morphology, and karyotype of human pluripotent stem cells (PSCs). The aim of this study is to investigate the effects of Laminin-521 on the characteristics of human umbilical cord-derived mesenchymal stem cells (UC-MSCs) as a step toward clinical application. Methods: Human MSCs were isolated from the umbilical cord via the explant method. UC-MSCs were cultured in serum-free and xeno-free conditions in the presence of Laminin-521 for six passages. Cultured cells were evaluated by morphology and expansion index at each passage. Phenotypic characterization of UC-MSCs cultured on Laminin-521 was performed by assessment of cell surface markers. Results: UC-MSCs formed small colonies and expanded as a homogeneous monolayer when cultured on Laminin-521, reaching confluence after 4 days in culture. No statistically significant difference was detected in any passage when comparing the expansion index of UC-MSCs cultured on LN-521 and on CELLstart™. Phenotypic characterization of UC-MSCs cultured on LN-521 using flow cytometry revealed positive expression of CD73, CD90, and CD105 and negative expression of CD34, CD45, CD19, CD14, and HLA-DR. Conclusion: Laminin-521 is comparable to CELLstart™ in supporting UC-MSC expansion and maintaining their characteristics during culture in xeno-free and serum-free conditions.
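The expansion index compared across coatings above is a simple ratio; a related, commonly reported quantity is cumulative population doublings. The sketch below shows both definitions with invented cell counts, not data from the study.

```python
import math

def expansion_index(n_harvested, n_seeded):
    """Fold expansion over one passage: harvested / seeded cell count."""
    return n_harvested / n_seeded

def population_doublings(n_harvested, n_seeded):
    """Population doublings over a passage: log2(Nf / Ni)."""
    return math.log2(n_harvested / n_seeded)

# Hypothetical passage: seed 1e5 cells, harvest 8e5 (invented numbers)
fold = expansion_index(8.0e5, 1.0e5)       # 8-fold expansion
pd = population_doublings(8.0e5, 1.0e5)    # 3.0 doublings
```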

Keywords: mesenchymal stem cells, culture, laminin-521, xeno-free serum-free

Procedia PDF Downloads 66
976 Probabilistic Building Life-Cycle Planning as a Strategy for Sustainability

Authors: Rui Calejo Rodrigues

Abstract:

Building refurbishing and maintenance is a major area of knowledge that is ultimately left to user/occupant criteria. Optimizing the service life of a building requires a special background, as it is one of those concepts that needs proficiency to be implemented. ISO 15686-2, Buildings and constructed assets - Service life planning - Part 2: Service life prediction procedures, states a factorial method based on deterministic data for the life span of building components. A deterministic approach has major consequences: users/occupants cannot sense the end of a component's life span and so simply act on deterministic periods, and the resulting costly and resource-consuming solutions do not meet global targets of planet sustainability. With an estimated 2 billion conventional buildings in the world, submitting them to a probabilistic rather than a deterministic method for service life planning would provide an immense amount of resource savings. Since 1989, the research team, now the CEES - Center for Building in Service Studies, has developed a methodology based on the Monte Carlo method for a probabilistic approach to the life span of building components, their cost, and service life care time spans. The research question deals with the importance of a probabilistic approach to building life planning compared with deterministic methods. The mathematical model developed for the probabilistic lifespan approach is presented, and experimental data are obtained and compared with deterministic data. Assuming that a building's life cycle depends largely on component replacement, this methodology allows conclusions on the global impact of fixed-replacement methodologies such as those resulting from the use of deterministic models. Major conclusions based on estimates for conventional buildings are presented and evaluated from a sustainability perspective.
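The contrast the abstract draws — deterministic fixed-interval replacement versus a Monte Carlo treatment of component life span — can be sketched in a few lines. The distribution, its parameters, and the planning horizon below are illustrative assumptions, not the CEES model.

```python
import random

def expected_replacements(mean_life, sd_life, horizon_years,
                          runs=10000, seed=1):
    """Monte Carlo estimate of how many times a component is replaced over a
    planning horizon when its service life is normally distributed.
    (Distribution choice and parameters are illustrative assumptions.)"""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        t, n = 0.0, 0
        while True:
            t += max(1.0, rng.gauss(mean_life, sd_life))  # one service life
            if t > horizon_years:
                break
            n += 1                                         # replacement event
        total += n
    return total / runs

# Component with a 15 +/- 4 year life over a 50-year horizon (invented values)
mc = expected_replacements(mean_life=15.0, sd_life=4.0, horizon_years=50.0)
deterministic = 50 // 15   # fixed-interval plan: 3 replacements
```

For these toy values the probabilistic expectation falls below the fixed-interval count, which is the kind of gap the paper argues translates into resource savings at stock scale.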

Keywords: building components life cycle, building maintenance, building sustainability, Monte Carlo simulation

Procedia PDF Downloads 201
975 Prisoners for Sexual Offences: Custodial Regime, Prison Experience and Reintegration Interventions

Authors: Nikolaos Koulouris, Anna Kasapoglou, Dimitris Koros

Abstract:

The paper aims to present the course of ongoing research concerning the treatment of pretrial detainees and convicted or released prisoners in relation to sexual offenses, an area that has not received much attention in Greece in terms of the prison experience and the reintegration potential of this specific category of prisoners. The study plan provides for the use of a combination of research methods (focus groups with prisoners, structured individual interviews with prisoners and prison staff). Interviews with ex-prisoners detained for sexual offenses will also take place. In Greece, there are no special provisions for the treatment of sexual offenders in prison, nor are there any special programs in place for their rehabilitation. Sexual offenders are usually separated from other prisoners, as the informal code of the social organization of the prison community dictates, despite there being no relevant legal framework. The study aims to explore the reasons for the separate detention of sexual offenders and to discuss their special (non-)treatment from different points of view, namely the legality and legitimacy of this discriminatory practice in terms of prisoners' protection, safety, stigmatization, and possible social exclusion, as well as their post-release expectations and social reintegration potential. The purpose of the research is the exploration of the prison experience of sexual offenders, the exercise of their legal rights, and their adjustment to the demands of social life in prison, as well as the role of prison officers and of the various interventions aimed at preparing them for reentry to society.
The study will take into consideration European and international prison/penitentiary standards and best practices in order to examine the issue comparatively, while the contributions of the United Nations and the Council of Europe and their standards will be used to assess the treatment of sexual offenders in terms of its compatibility with international and European model rules and trends. The outcome will be utilized to form main directions and propositions for a coherent and consistent human rights-based and social integration-oriented penal policy regarding the treatment of persons accused or convicted of sexual offenses in Greece.

Keywords: prisoners’ treatment, sex offenders, social exclusion, social reintegration

Procedia PDF Downloads 150
974 Development of a Model for Predicting Radiological Risks in Interventional Cardiology

Authors: Stefaan Carpentier, Aya Al Masri, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: During an 'Interventional Radiology (IR)' procedure, the patient's skin dose may become high enough for burns, necrosis, and ulceration to appear. In order to prevent these deterministic effects, predicting the patient's peak skin dose before the intervention is important for improving the post-operative care given to the patient. The objective of this study is to estimate, before the intervention, the patient dose for 'Chronic Total Occlusion (CTO)' procedures by selecting relevant clinical indicators. Materials and methods: 103 procedures were performed in the 'Interventional Cardiology (IC)' department using a Siemens Artis Zee image intensifier that provides the air kerma of each IC exam. The Peak Skin Dose (PSD) was measured for each procedure using radiochromic films, and patient parameters such as sex, age, weight, and height were recorded. The J-CTO complexity score, specific to each intervention, was determined by the cardiologist. A correlation method applied to these indicators allowed their influence on the dose to be specified, and a predictive model of the dose was created using multiple linear regression. Results: Of the 103 patients involved in the study, 5 were excluded for clinical reasons and 2 for placement of radiochromic films outside the exposure field; 96 2D dose maps were finally used. The influencing factors with the highest correlation with the PSD are the patient's diameter and the J-CTO score, and the predictive model is based on these parameters. The comparison between estimated and measured skin doses shows an average difference of 0.85 ± 0.55 Gy for doses of less than 6 Gy, while the mean difference between air kerma and PSD is 1.66 ± 1.16 Gy. Conclusion: Using the developed method, a first estimate of the patient's skin dose is available before the start of the procedure, which helps the cardiologist in carrying out the intervention. This estimate is more accurate than that provided by the air kerma.
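The predictive model above is a multiple linear regression on patient diameter and J-CTO score. The sketch below reduces it to a one-predictor ordinary least squares fit to keep the code minimal; the (diameter, PSD) pairs are invented, not the study's measurements.

```python
def fit_line(x, y):
    """Ordinary least squares for one predictor: returns (slope, intercept).
    A one-variable sketch of the paper's multiple regression, which also
    includes the J-CTO complexity score as a second predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical pairs of (patient diameter in cm, measured peak skin dose in Gy)
diam = [24.0, 27.0, 30.0, 33.0, 36.0]
psd = [1.1, 1.9, 2.6, 3.5, 4.2]

slope, intercept = fit_line(diam, psd)
psd_at_31cm = slope * 31.0 + intercept   # pre-procedure estimate for a new patient
```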

Keywords: chronic total occlusion procedures, clinical experimentation, interventional radiology, patient's peak skin dose

Procedia PDF Downloads 128
973 Chemical Kinetics and Computational Fluid-Dynamics Analysis of H2/CO/CO2/CH4 Syngas Combustion and NOx Formation in a Micro-Pilot-Ignited Supercharged Dual Fuel Engine

Authors: Ulugbek Azimov, Nearchos Stylianidis, Nobuyuki Kawahara, Eiji Tomita

Abstract:

A chemical kinetics and computational fluid dynamics (CFD) analysis was performed to evaluate the combustion of syngas derived from biomass and coke-oven solid feedstock in a micro-pilot-ignited supercharged dual-fuel engine under lean conditions. For this analysis, a new reduced syngas chemical kinetics mechanism was constructed and validated by comparing the ignition delay and laminar flame speed data with those obtained from experiments and from other detailed chemical kinetics mechanisms available in the literature. A reaction sensitivity analysis was conducted for ignition delay at elevated pressures in order to identify the important chemical reactions that govern the combustion process. The chemical kinetics of NOx formation was analyzed for H2/CO/CO2/CH4 syngas mixtures using counterflow burner and premixed laminar flame speed reactor models. The new mechanism showed very good agreement with experimental measurements and accurately reproduced the effect of pressure, temperature, and equivalence ratio on NOx formation. In order to identify the species important for NOx formation, a sensitivity analysis was conducted for pressures of 4 bar, 10 bar, and 16 bar and a preheat temperature of 300 K. The results show that NOx formation is driven mostly by hydrogen-based species, while other species, such as N2, CO2, and CH4, also have important effects on combustion. Finally, the new mechanism was used in a multidimensional CFD simulation to predict the combustion of syngas in a micro-pilot-ignited supercharged dual-fuel engine, and the results were compared with experiments. The mechanism gave the closest prediction of the in-cylinder pressure and the rate of heat release (ROHR).
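The elementary reactions in a mechanism like the one described above are typically expressed in modified Arrhenius form; the strong temperature sensitivity of these rates is what the ignition-delay sensitivity analysis probes. The sketch below evaluates that form with invented parameters, not coefficients from the paper's mechanism.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius(a_factor, n_exp, ea_j_mol, temperature_k):
    """Modified Arrhenius rate k = A * T^n * exp(-Ea / (R*T)), the form used
    by most chemical kinetics mechanisms. Parameters here are invented
    placeholders, not values from the paper's reduced syngas mechanism."""
    return a_factor * temperature_k ** n_exp * math.exp(
        -ea_j_mol / (R * temperature_k))

# Rate of a hypothetical reaction (Ea = 120 kJ/mol) at two temperatures
k_1000 = arrhenius(1.0e9, 0.0, 120e3, 1000.0)
k_1200 = arrhenius(1.0e9, 0.0, 120e3, 1200.0)
```

A 200 K temperature rise increases this toy rate by roughly an order of magnitude, which is why ignition delay shortens so sharply with temperature.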

Keywords: syngas, chemical kinetics mechanism, internal combustion engine, NOx formation

Procedia PDF Downloads 403
972 An Empirical Exploration of Factors Influencing Lecturers' Acceptance of Open Educational Resources for Enhanced Knowledge Sharing in North-East Nigerian Universities

Authors: A. Bello, M. I. Abba, M. Abdullahi, S. Dauda, A. T. Shittu

Abstract:

This study investigated the predictors of lecturers' acceptance of knowledge sharing on Open Educational Resources (OER) in North-East Nigerian universities. The study population comprised 632 lecturers of federal universities in North-East Nigeria, from which a sample of 338 lecturers was selected purposively from the federal universities of Adamawa, Bauchi, and Borno States. The study adopted a predictive correlational research design, and the instrument used for data collection was a questionnaire. Experts in the field of educational technology validated the instrument, and its reliability was checked using Cronbach's alpha. The constructs on lecturers' acceptance to share OER yielded reliability coefficients of α = .956 for performance expectancy, α = .925 for effort expectancy, α = .955 for social influence, α = .879 for facilitating conditions, and α = .948 for acceptance to share OER. The researchers contacted the deaneries of the faculties of education and enlisted local coordinators to facilitate the data collection process at each university. The data were analysed using multiple sequential regression at a significance level of 0.05 in SPSS version 23.0. The findings revealed that performance expectancy (β = 0.658; t = 16.001; p = 0.000), effort expectancy (β = 0.194; t = 3.802; p = 0.000), and social influence (β = 0.306; t = 5.246; p = 0.000) collectively have the predictive capacity to stimulate lecturers' acceptance to share their resources in an OER repository. However, facilitating conditions (β = .053; t = .899; p = 0.369) do not have such predictive capacity. Based on these findings, the study recommends, among others, that university management consider adjusting OER policy to center on actualizing lecturers' career progression.
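The reliability coefficients reported above are Cronbach's alpha values; alpha compares the sum of item variances with the variance of the total score. The sketch below computes it for a toy scale; the item scores are invented, not the study's questionnaire data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item,
    same respondents in the same order):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = len(items)
    n = len(items[0])

    def var(xs):                       # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Three 5-point Likert items answered by five hypothetical respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 1],
]
alpha = cronbach_alpha(items)   # high alpha: the items move together
```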

Keywords: acceptance, lecturers, open educational resources, knowledge sharing

Procedia PDF Downloads 59
971 Decisional Regret in Men with Localized Prostate Cancer among Various Treatment Options and the Association with Erectile Functioning and Depressive Symptoms: A Moderation Analysis

Authors: Caren Hilger, Silke Burkert, Friederike Kendel

Abstract:

Men with localized prostate cancer (PCa) have to choose among different treatment options, such as active surveillance (AS) and radical prostatectomy (RP). All available treatment options may be accompanied by specific psychological or physiological side effects. Depending on the nature and extent of these side effects, patients are more or less likely to be satisfied or to struggle with their treatment decision in the long term. Therefore, the aim of this study was to assess and explain decisional regret in men with localized PCa. The role of erectile functioning as one of the main physiological side effects of invasive PCa treatment, depressive symptoms as a common psychological side effect, and the association of erectile functioning and depressive symptoms with decisional regret were investigated. Men with localized PCa initially managed with AS or RP (N=292) were matched according to length of therapy (mean 47.9±15.4 months). Subjects completed mailed questionnaires assessing decisional regret, changes in erectile functioning, depressive symptoms, and sociodemographic variables. Clinical data were obtained from case report forms. Differences among the two treatment groups (AS and RP) were calculated using t-tests and χ²-tests, relationships of decisional regret with erectile functioning and depressive symptoms were computed using multiple regression. Men were on average 70±7.2 years old. The two treatment groups differed markedly regarding decisional regret (p<.001, d=.50), changes in erectile functioning (p<.001, d=1.2), and depressive symptoms (p=.01, d=.30), with men after RP reporting higher values, respectively. Regression analyses showed that after adjustment for age, tumor risk category, and changes in erectile functioning, depressive symptoms were still significantly associated with decisional regret (B=0.52, p<.001). 
Additionally, when predicting decisional regret, the interaction of changes in erectile functioning and depressive symptoms reached significance for men after RP (B=0.52, p<.001), but not for men under AS (B=-0.16, p=.14). With increased changes in erectile functioning, the association of depressive symptoms with decisional regret became stronger in men after RP. Decisional regret is a phenomenon more prominent in men after RP than in men under AS. Erectile functioning and depressive symptoms interact in their prediction of decisional regret. Screening and treating depressive symptoms might constitute a starting point for interventions aiming to reduce decisional regret in this target group.
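The moderation analysis described above amounts to a multiple regression with a product term between the predictor (depressive symptoms) and the moderator (changes in erectile functioning). A minimal sketch of such a test, using simulated data rather than the study's actual dataset (all variable names, weights, and values here are invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a dataset loosely shaped like the one in the abstract:
# n=292 men, mean age 70 (SD 7.2); all scores are hypothetical.
rng = np.random.default_rng(0)
n = 292
df = pd.DataFrame({
    "age": rng.normal(70, 7.2, n),
    "ef_change": rng.normal(0, 1, n),   # changes in erectile functioning
    "depression": rng.normal(0, 1, n),  # depressive symptoms
})
# Build the outcome with a true interaction effect of 0.5
df["regret"] = (0.3 * df["depression"]
                + 0.2 * df["ef_change"]
                + 0.5 * df["depression"] * df["ef_change"]
                + rng.normal(0, 1, n))

# Moderation is tested via the product term depression:ef_change,
# adjusting for age (tumor risk category omitted for brevity).
model = smf.ols("regret ~ age + ef_change * depression", data=df).fit()
print(model.params["ef_change:depression"],
      model.pvalues["ef_change:depression"])
```

A significant product-term coefficient indicates that the strength of the depression-regret association depends on the level of erectile-functioning change, which is the pattern the study reports for the RP group.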

Keywords: active surveillance, decisional regret, depressive symptoms, erectile functioning, prostate cancer, radical prostatectomy

Procedia PDF Downloads 214
970 Determining the Materiality of an Undisclosed Fact: An Onerous Duty on the Assured

Authors: Adekemi Adebowale

Abstract:

The duty of disclosure in Nigerian insurance law is in need of reform. The materiality of an undisclosed fact (notwithstanding that the non-disclosure was honest and innocent) currently entitles insurers to avoid insurance policies, leaving an insured with an uncovered loss. While the test of materiality requires an insured to voluntarily disclose facts that will influence an insurer's decision, without proper guidelines from the insurer, the insurer is only expected to prove that the undisclosed fact influenced its judgment in fixing the premium or determining whether to accept the risk. This places an onerous duty on the assured to volunteer to the insurer every material fact, even though the insured has only a slight idea of the mind of a hypothetical prudent insurer. This paper explores a modern approach to revisiting the problem of an insured’s pre-contractual obligation to determine material facts in Nigerian insurance law. The aim is to build upon the change in the structure of insurance contract obligations in other common law jurisdictions, such as the United Kingdom. The doctrinal and comparative methodology captures the burden imposed on the insured under existing Nigerian insurance law. It finds that the continued application of the law leaves the insured in the weakest position, and he stands to lose in a contract supposedly created for his benefit. It is apparent that if this problem remains unresolved, the overall consequence will be a significant decline in insurance contracting, which may affect the Nigerian economy. The paper aims to evaluate the risks of the continued application of the traditional law, which does not keep pace with modern insurance practice. It will ultimately propose a legally sound reform that departs significantly from the archaic structure of current Nigerian insurance law.
This paper forms part of an on-going PhD research on "The insured’s pre-contractual duty of utmost good faith". The research to date finds that the insured bears the burden of the obligation to act in utmost good faith where it concerns disclosure of material facts.

Keywords: disclosure, materiality, Nigeria, United Kingdom, utmost good faith

Procedia PDF Downloads 114
969 Approach for Evaluating Wastewater Reuse Options in Agriculture

Authors: Manal Elgallal, Louise Fletcher, Barbara Evans

Abstract:

Water scarcity is a growing concern in many arid and semi-arid countries. Increasing water scarcity threatens economic development and the sustainability of human livelihoods, as well as the environment, especially in developing countries. Globally, agriculture is the largest water-consuming sector, accounting for approximately 70% of all freshwater extraction. Growing competition between agricultural uses and higher-value urban and industrial uses of high-quality freshwater supplies, especially in regions where water scarcity is a major problem, will increase the pressure on this precious resource. In these circumstances, wastewater may provide a reliable source of water for agriculture and enable freshwater to be exchanged for more economically valuable purposes. Concern regarding the risks that microbial and toxic components pose to human health and environmental quality is a serious obstacle to wastewater reuse, particularly in agriculture. Although powerful approaches and tools for microbial risk assessment and management for the safe use of wastewater are now available, few studies have attempted to provide any mechanism to quantitatively assess and manage the environmental risks resulting from reusing wastewater. In seeking pragmatic solutions for sustainable wastewater reuse, there remains a lack of research that combines health and environmental risk assessment and management with economic analysis in order to quantitatively weigh costs, benefits and risks and so rank alternative reuse options. This study seeks to enhance the effective reuse of wastewater for irrigation in arid and semi-arid areas. Its outcome is an evaluation approach that can be used to assess different reuse strategies and to determine the scale at which treatment alternatives and interventions are possible, feasible and cost-effective, in order to optimise the trade-off between the risks to public health and the environment and the substantial benefits of reuse.
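The abstract proposes quantitatively combining costs, benefits and risks to rank alternative reuse options. One simple way such a ranking could work is a weighted scoring model, sketched below; the options, scores and weights are entirely hypothetical and not taken from the study:

```python
# Hypothetical weighted-score ranking of wastewater reuse options.
# Each option carries an annualised life-cycle cost score, a benefit
# score, and a combined health/environmental risk score (all invented).
options = {
    "no treatment, restricted crops": (1.0, 2.0, 4.0),
    "pond treatment + drip irrigation": (3.0, 4.0, 2.0),
    "full tertiary treatment": (6.0, 5.0, 1.0),
}
w_cost, w_benefit, w_risk = 1.0, 1.5, 2.0  # illustrative weights

def net_score(cost, benefit, risk):
    """Higher is better: weighted benefit minus weighted cost and risk."""
    return w_benefit * benefit - w_cost * cost - w_risk * risk

# Rank options from best to worst net score
ranked = sorted(options.items(),
                key=lambda kv: net_score(*kv[1]), reverse=True)
for name, (c, b, r) in ranked:
    print(f"{name}: {net_score(c, b, r):.1f}")
```

In practice the weights would encode the trade-off the study describes between protecting public health and the environment and preserving the economic benefits of reuse, and the scores would come from risk assessment and life-cycle costing rather than being assigned by hand.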

Keywords: environmental risks, management, life cycle costs, waste water irrigation

Procedia PDF Downloads 256
968 Impacting the Processes of Freight Logistics at Upper Austrian Companies by the Use of Mobility Management

Authors: Theresa Steiner, Markus Pajones, Christian Haider

Abstract:

Traffic is induced by companies through their economic activities. Basically, two types of traffic occur at company sites: freight traffic and commuting traffic. Because these traffic types are connected in various ways, an integrated approach to managing them is useful. Mobility management is a proven method for companies to handle the traffic processes caused by their business activities. According to recent trend analyses in Austria, freight traffic as well as individual traffic, as part of commuting traffic, will continue to increase. More congestion, as well as negative environmental impacts, is expected in the future. Mobility management is a tool to influence traffic behavior with the aim of reducing emissions and other negative effects caused by traffic. Until now, mobility management has mainly been used to optimize commuting traffic without taking freight logistics processes into consideration. However, the method can equally be used to improve a company's freight traffic. This paper focuses in particular on analyzing to what extent companies already use mobility management to influence not only the commuting traffic they generate but also their freight logistics processes. A further objective is to acquire knowledge about the motivating factors that persuade companies to introduce and apply mobility management. Additionally, the advantages and disadvantages of this tool will be defined, and its limitations and success factors, with a special focus on freight logistics, will be depicted. The first step of this paper is a literature review on mobility management with a special focus on freight logistics processes.
To compare the theoretical findings with practice, interviews following a structured interview guideline will be conducted with mobility managers of different companies in Upper Austria. A qualitative analysis of these interviews will first show the motivation behind using mobility management to improve traffic processes and how far this approach is already being used to influence, in particular, the freight traffic of the companies. One outcome of this publication will be an evaluation of the extent to which mobility management is already applied at Upper Austrian companies to manage freight logistics processes. Furthermore, the results of the theoretical and practical analysis will reveal not only the possibilities but also the limitations of using mobility management to influence freight logistics processes.

Keywords: freight logistics processes, freight traffic, mobility management, passenger traffic

Procedia PDF Downloads 309