Search results for: subjective probability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1903

433 Contagion of the Global Financial Crisis and Its Impact on Systemic Risk in the Banking System: Extreme Value Theory Analysis in Six Emerging Asia Economies

Authors: Ratna Kuswardani

Abstract:

This paper studies the impact of the recent Global Financial Crisis (GFC) on six selected emerging Asian economies (Indonesia, Malaysia, Thailand, the Philippines, Singapore, and South Korea). We first trace the contagion of the GFC from the US and Europe to the selected emerging Asian countries by studying the tail dependence of stock market returns between those countries, applying the concept of Extreme Value Theory (EVT) to model the dependence between the multiple return series under examination, and we explore the factors driving contagion between the regions. We find that dependencies between markets are influenced by their size: large markets in emerging Asian countries tend to show higher dependency on markets in more advanced economies such as the U.S. and parts of Europe. The results also suggest that dependencies between market returns and bank stock returns within the same region tend to be higher than the corresponding dependencies across two different regions. We extend the analysis to the impact of the GFC on systemic risk in the banking system. We find that larger institutions show stronger dependencies with the stock market, suggesting that large banks can cause disruption in the market. Further, a higher probability of extreme loss is observed during the crisis period, reflected in the non-linear dependency between the pre-crisis and post-crisis periods. Finally, our analysis suggests that systemic risk is present in the domestic banking systems of emerging Asia, as shown by the extreme dependencies among banks within each system. Overall, our results caution policy makers and investors alike about the possible contagion of the global financial crisis across different markets.
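As a minimal illustration of the tail-dependence idea used in this abstract, the sketch below estimates an empirical upper-tail dependence coefficient between two return series via rank-based pseudo-uniform margins. The data, quantile level, and estimator are illustrative assumptions, not the authors' EVT methodology:

```python
import numpy as np

def upper_tail_dependence(x, y, q=0.95):
    """Empirical estimate of P(Y extreme | X extreme) at quantile level q,
    using ranks as pseudo-uniform margins. Near (1 - q) under tail
    independence; approaches lambda_U as q -> 1."""
    n = len(x)
    u = np.argsort(np.argsort(x)) / (n - 1)   # ranks scaled to [0, 1]
    v = np.argsort(np.argsort(y)) / (n - 1)
    joint = np.mean((u > q) & (v > q))        # both series extreme together
    return joint / (1.0 - q)                  # conditional exceedance probability

rng = np.random.default_rng(0)
shock = rng.standard_normal(5000)             # common "market" factor
a = shock + 0.3 * rng.standard_normal(5000)   # two dependent markets
b = shock + 0.3 * rng.standard_normal(5000)
c = rng.standard_normal(5000)                 # an unrelated series
print(upper_tail_dependence(a, b))            # high: joint extremes are common
print(upper_tail_dependence(a, c))            # near 0.05: effectively tail independent
```

With shared shocks the conditional exceedance probability is far above the independence baseline, which is the kind of signal the paper reads as contagion.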

Keywords: contagion, extreme value theory, global financial crisis, systemic risk

Procedia PDF Downloads 146
432 Distinct Patterns of Resilience Identified Using Smartphone Mobile Experience Sampling Method (M-ESM) and a Dual Model of Mental Health

Authors: Hussain-Abdulah Arjmand, Nikki S. Rickard

Abstract:

Responses to stress can be highly heterogeneous and may be influenced by methodological factors. The integrity of data is optimized by measuring both positive and negative affective responses to an event, by measuring responses in real time, as close to the stressful event as possible, and by using data collection methods that do not interfere with naturalistic behaviours. The aim of the current study was to explore short-term prototypical responses to major stressor events on outcome measures encompassing both positive and negative indicators of psychological functioning. A novel mobile experience sampling methodology (m-ESM) was used to monitor affective responses to stressors in real time. A smartphone mental health app ('Moodprism'), which prompts users daily to report their positive and negative mood as well as whether any significant event has occurred in the past 24 hours, was developed for this purpose. A sample of 142 participants was recruited as part of the promotion of this app. Participants' daily reported stressor events, levels of depressive symptoms, and positive affect were collected across a 30-day period as they used the app. For each participant, major stressor events were identified based on the subjective severity of the event as rated by the user. Depression and positive affect ratings were extracted for the three days following the event, and responses to the event were scaled relative to the participant's general reactivity across the remainder of the 30-day period. Participants were first clustered into groups based on initial reactivity and subsequent recovery following a stressor event, which revealed distinct patterns of responding in depressive symptomatology and positive affect. Participants were then grouped based on their cluster allocations in each outcome variable. The way participants responded to stressor events, in symptoms of depression and levels of positive affect, was highly individualized.
A complete description of the novel profiles identified will be presented at the conference. These findings suggest that real-time measurement of both positive and negative responses to stressors yields a more complex set of responses than previously observed with retrospective reporting. The use of smartphone technology to measure individualized responding also provided significant insight.

Keywords: depression, experience sampling methodology, positive functioning, resilience

Procedia PDF Downloads 235
431 Sensitivity and Uncertainty Analysis of One Dimensional Shape Memory Alloy Constitutive Models

Authors: A. B. M. Rezaul Islam, Ernur Karadogan

Abstract:

Shape memory alloys (SMAs) are known for their shape memory effect and pseudoelastic behavior. Their thermomechanical behavior has been modeled by numerous researchers from both microscopic thermodynamic and macroscopic phenomenological points of view. The Tanaka, Liang-Rogers, and Ivshin-Pence models are among the most popular SMA macroscopic phenomenological constitutive models; they describe SMA behavior in terms of stress, strain, and temperature. These models involve material parameters whose values carry uncertainty. At different operating temperatures, this uncertainty propagates to the output when the material is subjected to loading followed by unloading, and such propagation can result in performance discrepancies or failure at extreme conditions when the models are used in real-life applications. To address this, we used a probabilistic approach to perform sensitivity and uncertainty analysis of the Tanaka, Liang-Rogers, and Ivshin-Pence models. Sobol and extended Fourier Amplitude Sensitivity Testing (eFAST) methods were used to perform the sensitivity analysis for simulated isothermal loading/unloading at various operating temperatures. The results show that the models' behavior varies with operating temperature and loading condition. Average and stress-dependent sensitivity indices identify the most significant parameters at several temperatures. This work presents the sensitivity and uncertainty analysis results and compares them across temperatures and loading conditions for all three models. The analysis will aid in designing engineering applications by reducing the probability of model failure due to uncertainty in the input parameters. Thus, a proper understanding of the sensitive parameters and of uncertainty propagation at several operating temperatures and loading conditions is recommended for the Tanaka, Liang-Rogers, and Ivshin-Pence models.
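The Sobol analysis mentioned in this abstract can be sketched with a pick-freeze Monte Carlo estimator of first-order indices. The toy model below is a hypothetical linear stand-in, not any of the SMA constitutive models, and the inputs are assumed uniform on [0, 1]:

```python
import numpy as np

def sobol_first_order(model, n_params, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    (Saltelli's estimator) for a model with independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, n_params))
    B = rng.random((n, n_params))
    fA, fB = model(A), model(B)
    var = fA.var()
    S = np.empty(n_params)
    for i in range(n_params):
        AB = A.copy()
        AB[:, i] = B[:, i]                    # freeze all columns except the i-th
        S[i] = np.mean(fB * (model(AB) - fA)) / var
    return S

# Toy stand-in for a constitutive model: output dominated by parameter 0
def toy_model(X):
    return 5.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2]

S = sobol_first_order(toy_model, 3)
print(S)   # first index near 25/26.01, i.e. parameter 0 dominates
```

For this linear model the analytic first-order indices are 25/26.01, 1/26.01, and 0.01/26.01, so the estimator should recover a ranking that mirrors the "most significant parameters" the abstract reports.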

Keywords: constitutive models, FAST sensitivity analysis, sensitivity analysis, Sobol, shape memory alloy, uncertainty analysis

Procedia PDF Downloads 137
430 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
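A hedged sketch of the Random Forest approach described above, using scikit-learn on synthetic activity-level data. The feature names (scope changes, material-delivery delay, crew size) echo the cost drivers the abstract mentions, but the dataset, effect sizes, and model settings are illustrative assumptions, not the study's:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500
# Hypothetical activity-level features (names are illustrative, not the study's data)
scope_changes = rng.poisson(2, n)            # count of scope-of-work changes
material_delay = rng.exponential(3, n)       # days of material-delivery delay
crew_size = rng.integers(5, 50, n)
noise = rng.normal(0, 1, n)
# Synthetic overrun: driven mainly by scope changes and delivery delays
overrun_pct = 4.0 * scope_changes + 2.0 * material_delay + 0.01 * crew_size + noise

X = np.column_stack([scope_changes, material_delay, crew_size])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, overrun_pct)
for name, imp in zip(["scope_changes", "material_delay", "crew_size"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")              # the two true drivers dominate
```

The `feature_importances_` attribute is what supports the kind of cost-driver ranking the abstract describes.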

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 37
429 D-Wave Quantum Computing Ising Model: A Case Study for Forecasting of Heat Waves

Authors: Dmytro Zubov, Francesco Volponi

Abstract:

In this paper, a D-Wave quantum computing Ising model is used to forecast positive extremes of daily mean air temperature. Forecast models are designed with two to five qubits, which represent 2-, 3-, 4-, and 5-day historical data respectively. The Ising model's real-valued weights and dimensionless coefficients are calculated using daily mean air temperatures from 119 places around the world, as well as sea level (Aburatsu, Japan). In comparison with current methods, this approach is better suited to predicting heat wave values because it does not require estimating a probability distribution from scarce observations. The proposed quantum computing forecast algorithm is simulated on a traditional computer architecture, with combinatorial optimization of the Ising model parameters, for the Ronald Reagan Washington National Airport dataset with 1-day lead time on the learning sample (1975-2010). Analysis of the forecast accuracy (the ratio of successful predictions to the total number of predictions) on the validation sample (2011-2014) shows that the three-qubit Ising model has 100% accuracy, which is significant compared with other methods; however, the number of identified heat waves is small (only one out of nineteen in this case). The models with 2, 4, and 5 qubits have 20%, 3.8%, and 3.8% accuracy respectively. The three-qubit forecast model is also applied to predict heat waves at five other locations: Aurel Vlaicu, Romania (28.6% accuracy); Bratislava, Slovakia (21.7%); Brussels, Belgium (33.3%); Sofia, Bulgaria (50%); and Akhisar, Turkey (21.4%). These predictions are not ideal, but they are nonzero, and they can be used independently or together with predictions generated by other methods. The loss of human life, as well as the environmental, economic, and material damage caused by extreme air temperatures, could be reduced if some heat waves are predicted. Even a small success rate implies a large socio-economic benefit.
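A classical analogue of the qubit-based forecast described above can be sketched as follows: past days are encoded as spins (+1 hot, -1 not), and the forecast spin is the one that minimizes the Ising energy given that history. The couplings and bias below are illustrative, not the fitted values from the paper:

```python
import numpy as np

def forecast_next_day(history, J, h):
    """Choose the spin s (+1 = heat-wave day, -1 = not) that minimizes the
    Ising energy H(s) = -(sum_i J_i * history_i) * s - h * s, i.e. align
    the forecast spin with the effective local field."""
    field = np.dot(J, history) + h
    return 1 if field > 0 else -1

# Illustrative (not fitted) weights: more recent days matter more
J = np.array([0.2, 0.5, 1.0])    # couplings to days t-3, t-2, t-1
h = -0.3                          # bias toward "no heat wave"

hot_streak = np.array([1, 1, 1])
cool_spell = np.array([-1, -1, -1])
print(forecast_next_day(hot_streak, J, h))   # -> 1
print(forecast_next_day(cool_spell, J, h))   # -> -1
```

On D-Wave hardware the same energy minimization is performed by quantum annealing rather than this one-line sign rule, but the model structure (weights on historical spins plus a bias) is the same.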

Keywords: heat wave, D-wave, forecast, Ising model, quantum computing

Procedia PDF Downloads 492
428 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 29
427 Selection of Qualitative Research Strategy for Bullying and Harassment in Sport

Authors: J. Vveinhardt, V. B. Fominiene, L. Jeseviciute-Ufartiene

Abstract:

Relevance of Research: Qualitative research is still regarded as highly subjective and as insufficiently scientific to achieve objective research results. However, it is agreed that a qualitative study allows the hidden motives of research participants to be revealed, new theories to be created, and problem areas to be highlighted. Enough research has been done to reveal these aspects of qualitative inquiry. However, each research area has its own specificity, and sport is unique because of the image of its participants, who are understood as strong and invincible. A sport participant may therefore have personal difficulty recognizing himself or herself as a victim in the context of bullying and harassment, and the researcher accordingly faces a dilemma in getting a victim in sport to speak at all. Thus, the ethical aspects of qualitative research become relevant. The many different fields within sport also make it difficult to determine the sample size of a study. The corresponding problem of this research is therefore which qualitative research strategies are the most suitable for revealing the phenomenon of bullying and harassment in sport, and why. Object of research: qualitative research strategy for bullying and harassment in sport. Purpose of the research: to analyze strategies of qualitative research and to select a suitable one for bullying and harassment in sport. Methods of research: analysis of scientific research applying qualitative methods to bullying and harassment. Research Results: Four main strategies are applied in qualitative research: inductive, deductive, retroductive, and abductive. Inductive and deductive strategies are commonly used when researching bullying and harassment in sport. The inductive strategy is applied, as in quantitative research, to reveal and describe the prevalence of bullying and harassment in sport.
The deductive strategy is used through qualitative methods to explain the causes of bullying and harassment, and to predict the actions of the participants in bullying and harassment in sport and the possible consequences of those actions. The most commonly used qualitative method for researching bullying and harassment in sport is the semi-structured interview, conducted either orally or in writing. However, these methods may restrict participants' openness when responses are being recorded, or may yield incomplete answers when participants respond in writing, because the answers cannot be refined. Qualitative research increasingly draws on technology-mediated research data. For example, focus group research in a closed forum allows participants to interact freely with each other because the confidentiality of the selected participants is preserved, and the moderator can purposefully formulate and submit problem-solving questions to them. Hence, applying intelligent technology in in-depth qualitative research can help discover new and specific information on bullying and harassment in sport. Acknowledgement: This research is funded by the European Social Fund under the activity 'Improvement of researchers' qualification by implementing world-class R&D projects' of Measure No. 09.3.3-LMT-K-712.

Keywords: bullying, focus group, harassment, narrative, sport, qualitative research

Procedia PDF Downloads 172
426 Using Structured Analysis and Design Technique Method for Unmanned Aerial Vehicle Components

Authors: Najeh Lakhoua

Abstract:

Introduction: Scientific developments and techniques for the systemic approach have generated several names for it: systems analysis, systemic analysis, structural analysis. The main purpose of these reflections is to find a multi-disciplinary approach that organizes knowledge, creates a universal design language, and controls complex sets. System analysis is structured sequentially in steps: observation of the system by various observers and in various aspects, analysis of interactions and regulatory chains, modeling that takes the evolution of the system into account, and simulation and real tests in order to reach consensus. The system approach thus allows two types of analysis, according to the structure and the function of the system. The purpose of this paper is to present an application of system analysis to Unmanned Aerial Vehicle (UAV) components in order to represent the architecture of this system. Method: Various analysis methods are proposed in the literature for carrying out global analysis from different points of view, such as the SADT method (Structured Analysis and Design Technique) and Petri nets. The methodology adopted here to contribute to the system analysis of an Unmanned Aerial Vehicle is based on the use of SADT. We present a functional analysis, based on the SADT method, of the UAV components (body, power supply and platform, computing, sensors, actuators, software, loop principles, flight controls, and communications). Results: In this part, we present the application of the SADT method to the functional analysis of the UAV components. This SADT model is composed exclusively of actigrams. It starts with the main function 'To analyse the UAV components'. This function is then broken into sub-functions, and the process is repeated until the last decomposition level is reached (levels A1, A2, A3, and A4).
Recall that SADT techniques are semi-formal; for the same subject, different correct models can be built without knowing with certainty which model is good or, at least, best. This kind of model allows users sufficient freedom in its construction, so the subjective factor introduces a supplementary dimension for its validation; that is why the validation step as a whole requires the confrontation of different points of view. Conclusion: In this paper, we presented an application of system analysis to Unmanned Aerial Vehicle components, based on the SADT method (Structured Analysis and Design Technique). This functional analysis demonstrated the usefulness of the SADT method and its ability to describe complex dynamic systems.
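The top-down actigram decomposition described in the Results can be sketched as a small tree of function boxes. The sub-function labels below are illustrative groupings of the components the abstract lists, not the paper's actual A1-A4 boxes:

```python
from dataclasses import dataclass, field

@dataclass
class Actigram:
    """A node in an SADT actigram tree: a function box, optionally carrying
    ICOM arrows (controls and mechanisms here) and child sub-functions."""
    code: str
    function: str
    controls: list = field(default_factory=list)
    mechanisms: list = field(default_factory=list)
    children: list = field(default_factory=list)

    def decompose(self, child):
        """Attach a sub-function (the next decomposition level)."""
        self.children.append(child)
        return child

# Sketch of a top-level decomposition (labels are hypothetical groupings)
a0 = Actigram("A0", "To analyse the UAV components")
a0.decompose(Actigram("A1", "To analyse body, power supply and platform"))
a0.decompose(Actigram("A2", "To analyse computing, sensors and actuators"))
a0.decompose(Actigram("A3", "To analyse software and loop principles"))
a0.decompose(Actigram("A4", "To analyse flight controls and communications"))
print([c.code for c in a0.children])   # -> ['A1', 'A2', 'A3', 'A4']
```

Each child could itself be decomposed the same way, mirroring SADT's recursive refinement down to the last level.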

Keywords: system analysis, unmanned aerial vehicle, functional analysis, architecture

Procedia PDF Downloads 194
425 Mechanisms Underlying Comprehension of Visualized Personal Health Information: An Eye Tracking Study

Authors: Da Tao, Mingfu Qin, Wenkai Li, Tieyan Wang

Abstract:

While electronic personal health portals have gained increasing popularity in the healthcare industry, users often have difficulty comprehending and correctly responding to personal health information, partly due to inappropriate or poor presentation of the information. The way personal health information is visualized may affect how users perceive and assess it. This study examined the effects of visualization format and visualization mode on the comprehension and perceptions of personal health information among its users, with eye tracking techniques. A two-factor within-subjects experimental design was employed, in which participants completed a series of personal health information comprehension tasks under two visualization modes (i.e., whether the visualization is static or dynamic) and three visualization formats (i.e., bar graph, instrument-like graph, and text-only format). Data on comprehension performance, perceptions, and eye movement indicators were collected during task completion, and repeated-measures analyses of variance (RM-ANOVAs) were used for data analysis. The results showed that while visualization format had no effect on comprehension performance, it significantly affected users' perceptions (such as perceived ease of use and satisfaction): the two graphic visualizations received significantly more favorable subjective evaluations than the text-only format. Visualization mode had no effect on the perception measures but significantly affected comprehension performance, in that dynamic visualization significantly reduced users' information search time. Both visualization format and visualization mode had significant main effects on eye movement behaviors, and their interaction effects were also significant.
While the bar graph and text formats had similar times to first fixation across dynamic and static visualizations, the instrument-like graph format had a longer time to first fixation for dynamic than for static visualization. The two graphic visualization formats yielded shorter total fixation durations than the text-only format, indicating that they improve the efficiency of information comprehension. The results suggest that dynamic visualization can improve efficiency in comprehending important health information and that graphic visualization formats are favored by users. The findings shed light on the mechanisms underlying comprehension of visualized personal health information and provide important implications for the optimal design and visualization of personal health information.
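The RM-ANOVAs reported above can be illustrated with a minimal one-way repeated-measures F computation on synthetic ratings. The subject count, condition labels, and effect sizes below are invented for illustration, not the study's data:

```python
import numpy as np

def rm_anova_F(data):
    """One-way repeated-measures ANOVA F statistic.
    data: (n_subjects, k_conditions) array, one score per cell."""
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between-subject variance
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()   # condition effect
    ss_err = ss_total - ss_subj - ss_cond                    # residual (subject x condition)
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_err / df_err)

rng = np.random.default_rng(1)
base = rng.normal(5, 1, size=(30, 1))         # per-subject baseline (the "repeated" part)
effect = np.array([0.0, 0.0, 1.5])            # e.g. text, bar, instrument-like conditions
scores = base + effect + rng.normal(0, 0.5, size=(30, 3))
F = rm_anova_F(scores)
print(f"F(2, 58) = {F:.1f}")                  # large, as the condition effect is real
```

Removing the subject sum of squares before forming the error term is what distinguishes this from a between-subjects one-way ANOVA.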

Keywords: eye tracking, information comprehension, personal health information, visualization

Procedia PDF Downloads 101
424 Learning with Music: The Effects of Musical Tension on Long-Term Declarative Memory Formation

Authors: Nawras Kurzom, Avi Mendelsohn

Abstract:

The effects of background music on learning and memory are inconsistent, partly due to the intrinsic complexity and variety of music and partly due to individual differences in music perception and preference. A prominent musical feature known to elicit strong emotional responses is musical tension. Musical tension can be brought about by building anticipation in rhythm, harmony, melody, and dynamics; delaying the resolution of dominant-to-tonic chord progressions, as well as using dissonant harmonies, can elicit feelings of tension, which can in turn affect memory formation for concomitant information. The aim of the studies presented here was to explore how the formation of declarative memory is influenced by musical tension, brought about within continuous music as well as by isolated chords with varying degrees of dissonance/consonance. The effects of musical tension on long-term memory for declarative information were studied in two ways: 1) by evoking tension within continuous music pieces by delaying the release of harmonic progressions from dominant to tonic chords, and 2) by using isolated single complex chords with various degrees of dissonance/roughness. Musical tension was validated through subjective reports of tension, as well as physiological measurements of skin conductance response (SCR) and pupil dilation in response to the chords. In addition, music information retrieval (MIR) was used to quantify musical properties associated with tension and its release. Each experiment included an encoding phase, in which individuals studied stimuli (words or images) under different musical conditions; memory for the studied stimuli was tested 24 hours later via recognition tasks. In three separate experiments, we found positive relationships between tension perception and the physiological measurements of SCR and pupil dilation. As for memory performance, we found that background music in general led to superior memory performance compared to silence.
We detected a trade-off between tension perception and memory: individuals who perceived musical tension as such displayed reduced memory performance for images encoded during musical tension, whereas tense music benefited memory in those who were less sensitive to the perception of musical tension. Musical tension thus interacts in complex ways with perception, emotional responses, and cognitive performance in individuals with and without musical training. Delineating the conditions and mechanisms that underlie the interactions between musical tension and memory can benefit our understanding of musical perception at large and of the diverse effects that music has on the ongoing processing of declarative information.
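The reported positive relationship between tension perception and SCR can be illustrated with a simple Pearson correlation. The ratings and SCR amplitudes below are invented for illustration; the study's actual values are not given in the abstract:

```python
import numpy as np

# Hypothetical data: subjective tension ratings (1-7) and SCR amplitudes (muS)
# for ten chords of increasing dissonance (values are illustrative only)
tension = np.array([1, 2, 2, 3, 4, 4, 5, 6, 6, 7], dtype=float)
scr = np.array([0.05, 0.08, 0.07, 0.12, 0.15, 0.14, 0.20, 0.24, 0.22, 0.30])

r = np.corrcoef(tension, scr)[0, 1]   # Pearson correlation coefficient
print(f"r = {r:.2f}")                 # strongly positive for this made-up data
```

A correlation of this kind is what "positive relationships between tension perception and physiological measurements" amounts to quantitatively.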

Keywords: musical tension, declarative memory, learning and memory, musical perception

Procedia PDF Downloads 93
423 Exploratory Study of Individual User Characteristics That Predict Attraction to Computer-Mediated Social Support Platforms and Mental Health Apps

Authors: Rachel Cherner

Abstract:

Introduction: The current study investigates several user characteristics that may predict the adoption of digital mental health supports. The extent to which individual characteristics predict preferences for functional elements of computer-mediated social support (CMSS) platforms and mental health (MH) apps is relatively unstudied. Aims: The present study seeks to illuminate the relationship between broad user characteristics and perceived attraction to CMSS platforms and MH apps. Methods: Participants (n = 353) were recruited using convenience sampling methods (i.e., digital flyers, email distribution, and online survey forums). The sample was 68% male and 32% female, with a mean age of 29. The racial and ethnic breakdown of participants was 75% White, 7%, 5% Asian, and 5% Black or African American. Participants completed a 25-minute self-report questionnaire comprising empirically validated measures of a battery of characteristics: subjective levels of anxiety and depression via the PHQ-9 (Patient Health Questionnaire, 9-item) and GAD-7 (Generalized Anxiety Disorder, 7-item); attachment style via the MAQ (Measure of Attachment Qualities); personality type via the TIPI (Ten-Item Personality Inventory); and growth mindset and mental help-seeking attitudes via the GM (Growth Mindset Scale) and MHSAS (Mental Help Seeking Attitudes Scale); together with subsequent attitudes toward CMSS platforms and MH apps. Results: A stepwise linear regression was used to test whether user characteristics significantly predicted attitudes toward key features of CMSS platforms and MH apps. The overall regression was statistically significant (R² = .20, F(1, 344) = 14.49, p < .000). Conclusion: This study examines the clinical and sociocultural factors influencing decisions to use CMSS platforms and MH apps. The findings provide valuable insight for increasing adoption of, and engagement with, digital mental health supports.
Fostering a growth mindset may be a method of increasing participant/patient engagement. In addition, CMSS platforms and MH apps may empower under-resourced and minority groups to gain basic access to mental health support. We do not assume that this final model contains the best predictors of use; it is merely a preliminary step toward understanding the psychology and attitudes of CMSS platform and MH app users.
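A stepwise linear regression of the kind reported above can be sketched as greedy forward selection by incremental R². The predictors, effect sizes, and stopping threshold below are hypothetical, not the study's measures or criteria:

```python
import numpy as np

def forward_stepwise(X, y, names, threshold=0.02):
    """Forward stepwise linear regression: greedily add the predictor that
    most increases R^2, stopping when the gain falls below threshold."""
    def r2(cols):
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        return 1.0 - resid.var() / y.var()

    selected, best_r2 = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        new_r2, c = max((r2(selected + [col]), col) for col in remaining)
        if new_r2 - best_r2 < threshold:
            break                              # gain too small: stop adding
        selected.append(c)
        remaining.remove(c)
        best_r2 = new_r2
    return [names[c] for c in selected], best_r2

rng = np.random.default_rng(7)
n = 353                                        # same n as the study; data is synthetic
growth_mindset = rng.normal(size=n)
anxiety = rng.normal(size=n)
age = rng.normal(size=n)
# Hypothetical outcome: attraction to MH apps driven mainly by growth mindset
attraction = 0.7 * growth_mindset + 0.3 * anxiety + rng.normal(size=n)

X = np.column_stack([growth_mindset, anxiety, age])
names, R2 = forward_stepwise(X, attraction, ["growth_mindset", "anxiety", "age"])
print(names, round(R2, 2))
```

The irrelevant predictor should be dropped by the threshold rule, mirroring how stepwise selection prunes characteristics that add no explanatory power.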

Keywords: computer-mediated social support platforms, digital mental health, growth mindset, health-seeking attitudes, mental health apps, user characteristics

Procedia PDF Downloads 87
422 Liquidity Risk of Banks in Light of a Dominant Share of Foreign Capital in the Polish Banking Sector

Authors: Karolina Patora

Abstract:

This article investigates liquidity risk management by banks, which has gained significant importance since the global financial crisis of 2008. The issue is of particular interest for countries such as Poland, in which foreign capital plays a dominant role. Such an ownership structure poses certain risks to the local banking sector, which faces an increased probability of the withdrawal of funding or of asset transfers abroad in case of a crisis. Both factors can have a detrimental influence on the liquidity position of foreign-owned banks and hence negatively affect the financial stability of the whole banking sector. The aim of this study is to evaluate the impact of a dominant share of foreign investors in the Polish banking sector on the liquidity position of commercial banks. The study hypothesizes that the ownership structure of the Polish banking sector, in which banks are predominantly controlled by foreign investors, does not pose a threat to the liquidity position of Polish banks. A supplementary research hypothesis is that the liquidity risk profile of foreign-owned banks differs from that of domestic banks. The sample consists of 14 foreign-owned banks and 5 domestic banks owned by local investors, which together constitute approximately 87% of the banking sector's assets; the data cover the period 2004-2014. The regression models show no evidence of significant differences in the dynamics of changes in liquidity buffers between the foreign-owned and domestic banks, although the signs of the coefficients might suggest that the foreign-owned banks decreased their holdings of liquid assets at a slower pace over the examined period than the domestic banks did; however, these differences were not statistically significant. The supplementary research hypothesis that the liquidity risk profile of foreign-controlled banks differs from that of domestic banks was therefore rejected.

Keywords: foreign-owned banks, liquidity position, liquidity risk, financial stability

Procedia PDF Downloads 289
421 Passenger Preferences on Airline Check-In Methods: Traditional Counter Check-In Versus Common-Use Self-Service Kiosk

Authors: Cruz Queen Allysa Rose, Bautista Joymeeh Anne, Lantoria Kaye, Barretto Katya Louise

Abstract:

The study presents passengers’ preferences regarding the quality of service provided by the two airline check-in methods currently present in airports: traditional counter check-in and common-use self-service kiosks. A previous study showed that airlines perceive self-service kiosks alone as sufficient to ensure adequate service and customer satisfaction, whereas agents and passengers stated that kiosks alone are not enough and that human interaction is essential. In reference to former studies that established opposing ideas about which airline check-in method is more favorable to employ, the purpose of this study is to present a recommendation that fills the gap between these conflicting ideas by comparing the perceived quality of service through the RATER model. Furthermore, this study discusses the major competencies present in each method, supported by two theories: the FIRO Theory of Needs, which upholds the importance of inclusion, control and affection, and Queueing Theory, which points out the discipline of passengers and the length of the queue line as important factors affecting quality of service. The findings of the study were based on data gathered by the researchers from selected Thomasian third- and fourth-year college students enrolled in the first semester of the academic year 2014-2015 who had already experienced both airline check-in methods, selected through stratified probability sampling. The statistical treatments applied to interpret the data were mean, frequency, standard deviation, t-test, logistic regression and the chi-square test. The study concluded that passengers reported greater satisfaction with common-use self-service kiosks than with traditional counter check-in.
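One of the listed treatments, the chi-square test of independence, can be sketched on hypothetical preference counts; the figures below are invented for illustration and are not the study's data:

```python
# Do check-in method preferences differ between third- and fourth-year students?
observed = [[70, 30],    # third-year:  kiosk, counter
            [45, 55]]    # fourth-year: kiosk, counter

row = [sum(r) for r in observed]
col = [observed[0][0] + observed[1][0], observed[0][1] + observed[1][1]]
n = sum(row)
# Pearson chi-square: sum over cells of (observed - expected)^2 / expected
chi2 = sum((observed[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
           for i in range(2) for j in range(2))
print(round(chi2, 2), chi2 > 3.841)   # 3.841 = critical value for df=1, alpha=0.05
```

A statistic above the critical value would indicate that preference is associated with year level; the study's actual conclusions rest on its own collected data.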

Keywords: traditional counter check-in, common-use self-service Kiosks, airline check-in methods

Procedia PDF Downloads 403
420 A Comprehensive Comparative Study on Seasonal Variation of Parameters Involved in Site Characterization and Site Response Analysis by Using Microtremor Data

Authors: Yehya Rasool, Mohit Agrawal

Abstract:

Site characterization and site response analysis are crucial steps for reliable seismic microzonation of an area, so the basic parameters involved in these fundamental steps must be chosen properly in order to efficiently characterize the vulnerable sites of the study region. In this study, efforts are made to delineate the variations in the physical parameters of the soil between the summer and monsoon seasons of the year 2021 by using Horizontal-to-Vertical Spectral Ratios (HVSRs) recorded at five sites of the Indian Institute of Technology (Indian School of Mines), Dhanbad, Jharkhand, India. The data recording at each site was carried out in such a way that minimal anthropogenic noise was captured. The analysis was performed for six seismic parameters, namely predominant frequency, H/V ratio, phase velocity of Rayleigh waves, shear wave velocity (Vs), compressional wave velocity (Vp), and Poisson’s ratio, for both seasons of the year. From the results, it is observed that these parameters vary markedly for the upper layers of soil, which in turn may affect the amplification ratios and the probability of exceedance obtained from seismic hazard studies. The HVSR peak is higher in monsoon, with a shift in predominant frequency compared to the summer season of 2021. A drastic reduction in shear wave velocity (up to ~10 m depth) of approximately 7%-15% is also perceived during the monsoon period, together with a slight decrease in compressional wave velocity. Poisson’s ratios are generally found to be higher during the monsoon than in the summer period. Our study may be very beneficial to various agricultural and geotechnical engineering projects.
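The HVSR technique underlying the study can be sketched on synthetic three-component noise records. The 2 Hz "site frequency", amplitudes, and window lengths below are arbitrary assumptions for illustration, not values from the study:

```python
import numpy as np

fs = 100.0                      # sampling rate (Hz)
seg, nseg = 600, 10             # ten 6-second windows
t = np.arange(nseg * seg) / fs
f0 = 2.0                        # assumed site fundamental frequency (Hz)
rng = np.random.default_rng(0)
ns = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size)        # N-S
ew = np.cos(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size)        # E-W
vt = 0.2 * np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size)  # vertical

def avg_power(x):
    """Segment-averaged power spectrum (averaging suppresses noise variance)."""
    return np.mean(np.abs(np.fft.rfft(x.reshape(nseg, seg), axis=1)) ** 2, axis=0)

freqs = np.fft.rfftfreq(seg, 1 / fs)
# H/V ratio: quadratic mean of horizontal spectra over the vertical spectrum
hvsr = np.sqrt((avg_power(ns) + avg_power(ew)) / 2 / (avg_power(vt) + 1e-12))
band = (freqs >= 0.5) & (freqs <= 20)
f_peak = freqs[band][np.argmax(hvsr[band])]
print(f_peak)   # predominant frequency recovered near the assumed 2.0 Hz
```

Real HVSR processing additionally applies tapering and Konno-Ohmachi smoothing and rejects transient-contaminated windows; this sketch only shows why the H/V peak tracks the site's predominant frequency.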

Keywords: HVSR, shear wave velocity profile, Poisson ratio, microtremor data

Procedia PDF Downloads 82
419 Effect of Malnutrition at Admission on Length of Hospital Stay among Adult Surgical Patients in Wolaita Sodo University Comprehensive Specialized Hospital, South Ethiopia: Prospective Cohort Study, 2022

Authors: Yoseph Halala Handiso, Zewdi Gebregziabher

Abstract:

Background: Malnutrition in hospitalized patients remains a major public health problem in both developed and developing countries. Despite the fact that malnourished patients are more prone to stay longer in hospital, there is limited data regarding the magnitude of malnutrition and its effect on length of stay among surgical patients in Ethiopia, while nutritional assessment is also often a neglected component of health service practice. Objective: This study aimed to assess the prevalence of malnutrition at admission and its effect on the length of hospital stay among adult surgical patients in Wolaita Sodo University Comprehensive Specialized Hospital, South Ethiopia, 2022. Methods: A facility-based prospective cohort study was conducted among 398 adult surgical patients admitted to the hospital. Participants were chosen using a convenience sampling technique. The Subjective Global Assessment (SGA) was used within 48 hours of admission to determine the nutritional status of patients with a minimum stay of 24 hours. Data were collected using the Open Data Kit (ODK) version 2022.3.3 software, while Stata version 14.1 software was employed for statistical analysis. The Cox regression model was used to determine the effect of malnutrition on the length of hospital stay (LOS) after adjusting for several potential confounders taken at admission. The adjusted hazard ratio (AHR) with a 95% confidence interval was used to show the effect of malnutrition. Results: The prevalence of hospital malnutrition at admission was 64.32% (95% CI: 59%-69%) according to the SGA classification. Adult surgical patients who were malnourished at admission had a higher median LOS (12 days; 95% CI: 11-13) as compared to well-nourished patients (8 days; 95% CI: 8-9), meaning that patients malnourished at admission were at higher risk of a reduced chance of discharge with improvement, i.e., prolonged LOS (AHR: 0.37, 95% CI: 0.29-0.47), as compared to well-nourished patients.
Presence of comorbidity (AHR: 0.68, 95% CI: 0.50-0.90), polymedication (AHR: 0.69, 95% CI: 0.55-0.86), and history of admission within the previous five years (AHR: 0.70, 95% CI: 0.55-0.87) were found to be significant covariates of the length of hospital stay. Conclusion: The magnitude of hospital malnutrition at admission was found to be high. Malnourished patients at admission had a higher risk of prolonged length of hospital stay as compared to well-nourished patients. The presence of comorbidity, polymedication, and history of admission were found to be significant covariates of LOS. All stakeholders should give attention to reducing the magnitude of malnutrition and its covariates to reduce the burden of prolonged LOS.
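As a quick plausibility check, the reported 95% confidence interval for the prevalence follows from a standard normal approximation; this is a back-of-envelope sketch, not necessarily the authors' exact method:

```python
import math

# 64.32% malnourished among n = 398 patients; Wald 95% CI = p +/- 1.96*SE
p, n = 0.6432, 398
se = math.sqrt(p * (1 - p) / n)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"{lo:.1%} - {hi:.1%}")   # close to the reported 59%-69%
```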

Keywords: effect of malnutrition, length of hospital stay, surgical patients, Ethiopia

Procedia PDF Downloads 59
418 Seismic Hazard Assessment of Tehran

Authors: Dorna Kargar, Mehrasa Masih

Abstract:

Due to its special geological and geographical conditions, Iran has always been exposed to various natural hazards. The earthquake is a natural hazard of random nature that can cause significant financial damage and casualties, a serious threat especially in areas with active faults. Therefore, considering the population density in some parts of the country, locating and zoning high-risk areas is necessary and significant. In the present study, a seismic hazard assessment via probabilistic and deterministic methods was carried out for Tehran, the capital of Iran, which is located in the Alborz-Azerbaijan seismotectonic province. The seismicity study covers a radius of 200 km around the site in northern Tehran (35.74°N, 51.37°E) to identify the seismic sources and seismicity parameters of the study region. In order to identify the seismic sources, geological maps at the scale of 1:250,000 are used. In this study, we used Kijko-Sellevoll's method (1992) to estimate the seismicity parameters. The maximum likelihood estimation of earthquake hazard parameters (maximum regional magnitude Mmax, activity rate λ, and the Gutenberg-Richter parameter b) from incomplete data files is extended to the case of uncertain magnitude values. By combining the seismicity and seismotectonic studies of the site, the acceleration that, with a specified probability, may occur during the useful life of the structure is calculated with probabilistic and deterministic methods. Applying the results of the performed seismicity and seismotectonic studies and assigning proper weights to the attenuation relationships used, the maximum horizontal and vertical accelerations for return periods of 50, 475, 950 and 2475 years are calculated.
Horizontal peak ground acceleration on the seismic bedrock for the 50, 475, 950 and 2475-year return periods is 0.12g, 0.30g, 0.37g and 0.50g, and vertical peak ground acceleration for the same return periods is 0.08g, 0.21g, 0.27g and 0.36g, respectively.
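The return periods above correspond to probabilities of exceedance over a design life under the usual Poisson assumption, P = 1 - exp(-t/T); a minimal sketch for a 50-year exposure time:

```python
import math

def exceedance_probability(return_period_yr, exposure_yr=50):
    """Poisson probability of at least one exceedance during the exposure time."""
    return 1.0 - math.exp(-exposure_yr / return_period_yr)

for T in (50, 475, 950, 2475):
    print(f"{T:>5} yr: {exceedance_probability(T):.1%}")
# the 475-yr return period corresponds to ~10% in 50 years, 2475 yr to ~2%
```

This is why codes phrase hazard levels as "10% probability of exceedance in 50 years" (475-year event) and "2% in 50 years" (2475-year event).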

Keywords: peak ground acceleration, probabilistic and deterministic, seismic hazard assessment, seismicity parameters

Procedia PDF Downloads 63
417 Effects of Bleaching Procedures on Dentine Sensitivity

Authors: Suhayla Reda Al-Banai

Abstract:

Problem Statement: Tooth whitening has been used for over one hundred and fifty years. The question concerning the whiteness of teeth is a complex one, since tooth whiteness varies from individual to individual, dependent on age, culture, etc. Tooth whiteness following treatment may depend on the type of whitening system used. There are a few side-effects to the process, including tooth sensitivity and gingival irritation, although some individuals may experience no pain or sensitivity following the procedure. Purpose: To systematically review the literature published until 31st December 2021, to identify all relevant studies for inclusion, and to determine whether there was any evidence demonstrating that the application of whitening procedures resulted in tooth sensitivity. Aim: To systematically review the available published literature to identify all relevant studies for inclusion and to determine any evidence demonstrating that the application of 10% and 15% carbamide peroxide in tooth whitening procedures resulted in tooth sensitivity. Material and Methods: Following a review of 70 relevant papers retrieved by searching both electronic databases (OVID MEDLINE and PUBMED) and by hand searching relevant print journals, 49 studies were identified, 42 papers were subsequently excluded, and 7 studies were finally accepted for inclusion. The extraction of data for inclusion was conducted by two reviewers. The main outcome measures were the methodology and assessment used by investigators to evaluate tooth sensitivity in tooth whitening studies. Results: The reported evaluation of tooth sensitivity during tooth whitening procedures was based on the subjective response of subjects rather than a recognized evaluation methodology. One of the problems in the evaluation was the lack of homogeneity in study design. Seven studies were included.
The included studies featured essential design elements, namely randomized groups, placebo controls, and double-blind and single-blind designs. Drop-out data were available for two of the included studies. Three of the included studies reported sensitivity at the baseline visit. Two of the included studies mentioned the exclusion criteria. Conclusions: The results were inconclusive due to the limited number of included studies, the study methodologies, and the evaluation of dentine sensitivity reported. Tooth whitening procedures adversely affect both hard and soft tissues in the oral cavity. Side-effects are mild and transient in nature. Whitening solutions with greater than 10% carbamide peroxide cause more tooth sensitivity. Studies using nightguard vital bleaching with 10% carbamide peroxide reported two side effects, tooth sensitivity and gingival irritation, although tooth sensitivity was more prevalent than gingival irritation.

Keywords: dentine, sensitivity, bleaching, carbamide peroxide

Procedia PDF Downloads 66
416 Comparison of Parametric and Bayesian Survival Regression Models in Simulated and HIV Patient Antiretroviral Therapy Data: Case Study of Alamata Hospital, North Ethiopia

Authors: Zeytu G. Asfaw, Serkalem K. Abrha, Demisew G. Degefu

Abstract:

Background: HIV/AIDS remains a major public health problem in Ethiopia, heavily affecting people of productive and reproductive age. We aimed to compare the performance of parametric survival analysis and Bayesian survival analysis using simulations and a real-dataset application focused on determining predictors of HIV patient survival. Methods: Parametric survival models with Exponential, Weibull, Log-normal, Log-logistic, Gompertz and Generalized gamma distributions were considered. A simulation study was carried out with two different algorithms, using informative and noninformative priors. A retrospective cohort study was implemented for HIV-infected patients under Highly Active Antiretroviral Therapy in Alamata General Hospital, North Ethiopia. Results: A total of 320 HIV patients were included in the study, of whom 52.19% were female and 47.81% male. According to Kaplan-Meier survival estimates for the two sex groups, females showed better survival times than their male counterparts. The median survival time of HIV patients was 79 months. During the follow-up period, 89 (27.81%) deaths and 231 (72.19%) censored individuals were registered. The average baseline cluster of differentiation 4 (CD4) cell count for HIV/AIDS patients was 126.01, but after a three-year antiretroviral therapy follow-up the average CD4 cell count was 305.74, which is quite encouraging. Age, functional status, tuberculosis screening, past opportunistic infection, baseline CD4 cell count, World Health Organization clinical stage, sex, marital status, employment status, occupation type, and baseline weight were found to be statistically significant factors for longer survival of HIV patients. The standard errors of all covariates in the Bayesian log-normal survival model were smaller than in the classical one.
Hence, Bayesian survival analysis showed better performance than classical parametric survival analysis when a subjective data analysis was performed by incorporating expert opinions and historical knowledge about the parameters. Conclusions: HIV/AIDS patient mortality could be reduced through timely antiretroviral therapy with special care regarding the potential factors. Moreover, the Bayesian log-normal survival model was preferable to the classical log-normal survival model for determining predictors of HIV patient survival.
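The key mechanics of a parametric log-normal survival model, in which deaths contribute the log-density and censored observations the log-survival, can be sketched as follows. The follow-up times below are invented for illustration and are not the study's data:

```python
import math

# Hypothetical follow-up months; event = 1 death, 0 censored
times  = [5, 12, 20, 20, 33, 40, 51, 60, 72, 79, 85, 90]
events = [1,  0,  1,  0,  1,  0,  1,  1,  0,  1,  0,  1]

def loglik(mu, sigma):
    """Right-censored log-normal log-likelihood."""
    ll = 0.0
    for t, d in zip(times, events):
        z = (math.log(t) - mu) / sigma
        if d:   # observed death: log f(t)
            ll += -math.log(t * sigma * math.sqrt(2 * math.pi)) - 0.5 * z * z
        else:   # censored: log S(t) = log(1 - Phi(z))
            ll += math.log(0.5 * math.erfc(z / math.sqrt(2)))
    return ll

# Coarse grid search for the MLE (a real analysis would use an optimizer,
# or place priors on mu and sigma for the Bayesian version)
best = max(((loglik(mu, s), mu, s)
            for mu in [3.0 + 0.02 * i for i in range(100)]
            for s in [0.3 + 0.05 * j for j in range(25)]),
           key=lambda x: x[0])
print(round(math.exp(best[1]), 1))   # model-implied median survival (months)
```

The Bayesian variant replaces the grid maximization with posterior sampling, which is where informative priors (expert opinion, historical data) enter and shrink the standard errors, as the abstract reports.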

Keywords: antiretroviral therapy (ART), Bayesian analysis, HIV, log-normal, parametric survival models

Procedia PDF Downloads 188
415 Historical Development of Negative Emotive Intensifiers in Hungarian

Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges

Abstract:

In this study, an exhaustive analysis was carried out on the historical development of negative emotive intensifiers in the Hungarian language via NLP methods. Intensifiers are linguistic elements which modify or reinforce a variable character in the lexical unit they apply to. Intensifiers therefore appear with other lexical items, such as adverbs, adjectives, verbs, and, infrequently, nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), many lexical items can operate as intensifiers, and the group of intensifiers is admittedly one of the most rapidly changing elements in the language. From a linguistic point of view, particularly interesting is a special group of intensifiers, the so-called negative emotive intensifiers, which, on their own, without context, have semantic content that can be associated with negative emotion, but in particular cases may function as intensifiers (e.g. borzasztóan jó ’awfully good’, which means ’excellent’). Despite their special semantic features, negative emotive intensifiers are scarcely examined in the literature based on large historical corpora via NLP methods. In order to become better acquainted with trends over time concerning these intensifiers, we exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame.
Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the ‘magyarlanc’ NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. Outcomes of the research reveal in detail how these words have proceeded through grammaticalization over time, i.e., they change from lexical elements to grammatical ones and slowly go through a delexicalization process (their negative content diminishes over time). Moreover, it was also pointed out which negative emotive intensifiers are at the same stage in this process in the same time period. Taking a closer look at the different domains of the analysed corpus, it also became clear that during this process the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion at a certain level.
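The frequency and collocation step can be sketched in a few lines; the toy token stream and the two intensifiers below are illustrative, not drawn from the actual corpus, whose real pipeline runs on POS-tagged and lemmatized text:

```python
from collections import Counter

# Toy token stream (Hungarian words; glosses: jó 'good', drága 'expensive',
# rossz 'bad', nagyon 'very')
tokens = ("borzasztóan jó szörnyen drága nagyon jó borzasztóan "
          "rossz szörnyen jó borzasztóan jó").split()
INTENSIFIERS = {"borzasztóan", "szörnyen"}   # 'awfully', 'terribly'

# Occurrence frequency of each intensifier
freq = Counter(t for t in tokens if t in INTENSIFIERS)
# Collocations: intensifier followed immediately by the word it modifies
collocations = Counter((t, nxt) for t, nxt in zip(tokens, tokens[1:])
                       if t in INTENSIFIERS)
print(freq.most_common())
print(collocations.most_common(1))   # most frequent intensifier-adjective pair
```

Tracking how such frequency and collocation profiles shift across the corpus's time slices is what reveals the grammaticalization and delexicalization trends the abstract describes.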

Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time

Procedia PDF Downloads 225
414 A Qualitative Look at Mental Health Stressors in Response to COVID-19

Authors: Gabriel G. Gaft, Xayvinay Xiong, Amanda Sunday

Abstract:

The pandemic caused by the COVID-19 virus has forced people to adjust to major changes. These changes affect all elements of family and work life and have required people to engage in novel behaviors. For many people, the social norms to which they were accustomed no longer prevail. Not surprisingly, such enormous changes in daily life have been associated with greater problems in mental health, and research regarding ways in which mental health professionals can support people is more necessary than ever before. It is often useful to assess people’s reactions through surveys and utilize quantitative data to answer questions about coping strategies, etc. It is also likely, however, that a host of individual factors contribute to what might be considered 'good' or 'bad' coping mechanisms in a worldwide pandemic. To this end, qualitative studies, where the individual’s subjective experience is highlighted, are likely to provide more vital information for mental health professionals interested in supporting the particular person in front of them. This study reports on qualitative data, where X participants were asked questions about social distancing, coping strategies, and general attitudes towards social changes resulting from the COVID-19 pandemic. Informal interviews were conducted during June-July 2020. Data were analyzed using Interpretative Phenomenological Analysis. Themes were identified first for each participant and then compared across participants. Several findings emerged. First, all participants understood the major health messages being imparted by governing bodies such as the CDC and WHO. The researchers feel this finding is important, as it suggests health messages are at least being effectively communicated.
Second, there was a clear trend for themes which highlighted the conflicting emotions participants felt about the changes they were expected to endure: positive and negative elements were identified, although a participant who had pre-existing conditions placed greater emphasis on the negative elements. One participant who was particularly interested in impression management also exclusively emphasized negative emotions. Third, participants who were able to reevaluate priorities—what Lazarus might call secondary appraisals—experienced social distancing as a positive rather than negative phenomenon. Finally, participants who were able to develop specific strategies—such as boundaries for work and self-care—reported themes of adjustment and contentment. Taken together, these findings suggest mental health practitioners can assist people to adjust more positively through specific techniques focusing on re-evaluation of life priorities and strategic coping skills.

Keywords: COVID-19, pandemic, phenomenology, virus

Procedia PDF Downloads 115
413 Multidisciplinary Approach for a Tsunami Reconstruction Plan in Coquimbo, Chile

Authors: Ileen Van den Berg, Reinier J. Daals, Chris E. M. Heuberger, Sven P. Hildering, Bob E. Van Maris, Carla M. Smulders, Rafael Aránguiz

Abstract:

Chile is located along the subduction zone of the Nazca plate beneath the South American plate, where large earthquakes and tsunamis have taken place throughout history. The last significant earthquake (Mw 8.2) occurred in September 2015 and generated a destructive tsunami, which mainly affected the city of Coquimbo (71.33°W, 29.96°S); local authorities therefore started a reconstruction process immediately after the event. The inundation area consisted of a beach, a damaged seawall, a damaged railway, a wetland and an old neighborhood. Moreover, a seismic gap has been identified in the same area, and another large event could take place in the near future. The present work proposes an integrated tsunami reconstruction plan for the city of Coquimbo that considers several variables such as safety, nature and recreation, neighborhood welfare, visual obstruction, infrastructure, construction process, and durability and maintenance. Possible future tsunami scenarios are simulated by means of the Non-hydrostatic Evolution of Ocean WAVEs (NEOWAVE) model with 5 nested grids and a highest grid resolution of ~10 m. Based on the scores from a multi-criteria analysis, the costs of the alternatives and a preference for a multifunctional solution, the alternative that includes an elevated coastal road with floodgates to reduce tsunami overtopping and control the return flow of the tsunami was selected as the best solution. Under this alternative, the wetlands are significantly restored to their former configuration and their dynamic behavior is stimulated. The numerical simulation showed that the new coastal protection decreases damage and the probability of loss of life by delaying tsunami arrival time. In addition, new evacuation routes and a smaller inundation zone in the city increase safety for the area.

Keywords: tsunami, Coquimbo, Chile, reconstruction, numerical simulation

Procedia PDF Downloads 236
412 Accountability Mechanisms of Leaders and Its Impact on Performance and Value Creation: Comparative Analysis (France, Germany, United Kingdom)

Authors: Bahram Soltani, Louai Ghazieh

Abstract:

Managerial accountability has gained importance in the wake of the financial crisis and the various pressures companies face in fulfilling their duties. The main objective of this study is to explain the variation in managerial accountability mechanisms across advanced capitalist economies, and then to study the impact of these mechanisms on performance and value creation in European companies. To reach our goal, we established a final sample composed of, on average, 284 French, British and German companies listed on stock exchanges, with 2,272 annual reports examined over the period from 2005 to 2012. We first examined the causal links between firm-level determining mechanisms, such as the characteristics of the board of directors, the composition of shareholding and the ethics of the company, on the one hand, and the profitability of the company on the other. The results show that the smooth running of the board of directors and its specialist committees are very important determinants of managerial accountability, and that they positively impact performance and value creation in the company. Furthermore, our results confirm that the presence of a solid ethical environment within the company increases the probability that managers make ethical choices in organizational decision-making. Second, we studied the impact of the determining mechanisms related to the manager's function and profile, namely relational ties, remuneration, training, age and experience, on performance and value creation in the company. Our results highlight the existence of a negative relationship between a manager's relational ties, very high remuneration and the overall profitability of the company. This study contributes to the literature on the determining mechanisms of company directors' accountability.
It establishes an empirical and comparative analysis between three influential European countries, namely France, the United Kingdom and Germany.

Keywords: leaders, company’s performance, accountability mechanisms, corporate governance, value creation of firm, financial crisis

Procedia PDF Downloads 375
411 Downtime Estimation of Building Structures Using Fuzzy Logic

Authors: M. De Iuliis, O. Kammouh, G. P. Cimellaro, S. Tesfamariam

Abstract:

Community resilience has gained significant attention due to recent unexpected natural and man-made disasters. Resilience is the process of maintaining livable conditions in the event of interruptions in normally available services. Estimating the resilience of systems, ranging from individuals to communities, is a formidable task due to the complexity involved in the process. The most challenging parameter involved in the resilience assessment is the 'downtime', the time needed for a system to recover its services following a disaster event. Estimating the exact downtime of a system requires many inputs and resources that are not always obtainable. The uncertainties in the downtime estimation are usually handled using probabilistic methods, which necessitate acquiring large historical datasets, and the estimation process also involves ignorance, imprecision, vagueness, and subjective judgment. In this paper, a fuzzy-based approach to estimate the downtime of building structures following earthquake events is proposed. Fuzzy logic can integrate descriptive (linguistic) knowledge and numerical data into the fuzzy system. This ability allows the use of walk-down surveys, which collect data in linguistic or numerical form, and permits a fast and economical estimation of parameters that involve uncertainties. The first step of the method is to determine the building's vulnerability: a rapid visual screening is designed to acquire information about the analyzed building (e.g., year of construction, structural system, site seismicity, etc.). Then, fuzzy logic is implemented using a hierarchical scheme to determine the building damageability, which is the main ingredient for estimating the downtime. Generally, the downtime can be divided into three main components: downtime due to the actual damage (DT1); downtime caused by rational and irrational delays (DT2); and downtime due to utilities disruption (DT3).
In this work, DT1 is computed by relating the building damageability results obtained from the visual screening to already-defined component repair times available in the literature. DT2 and DT3 are estimated using the REDi Guidelines. The downtime of the building is finally obtained by combining the three components. The proposed method also allows identifying the downtime corresponding to each of the three recovery states: re-occupancy, functional recovery, and full recovery. Future work is aimed at improving the current methodology to pass from the downtime to the resilience of buildings, providing a simple tool that can be used by the authorities for decision making.
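The DT1 step and the final combination can be sketched with triangular membership functions and a centroid-style defuzzification. The membership shapes, repair times, and DT2/DT3 values below are illustrative assumptions, not the paper's calibrated values:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets over a 0-1 damageability score, each paired with a representative
# repair time in days (hypothetical values):
SETS = {"slight":   ((-0.01, 0.0, 0.4), 10),
        "moderate": ((0.2, 0.5, 0.8),   60),
        "severe":   ((0.6, 1.0, 1.01),  180)}

def dt1(damageability):
    """DT1: membership-weighted (centroid-style) repair time."""
    num = den = 0.0
    for (a, b, c), days in SETS.values():
        mu = tri(damageability, a, b, c)
        num += mu * days
        den += mu
    return num / den if den else 0.0

dt2, dt3 = 45.0, 15.0   # assumed delay and utility-disruption components (days)
total = dt1(0.7) + dt2 + dt3
print(round(dt1(0.7), 1), round(total, 1))   # repair component, then total downtime
```

A damageability of 0.7 falls partly in the "moderate" and partly in the "severe" set, so its repair time interpolates between the two representative values, which is the payoff of the fuzzy formulation over hard damage classes.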

Keywords: resilience, restoration, downtime, community resilience, fuzzy logic, recovery, damage, built environment

Procedia PDF Downloads 156
410 Correlation of Urinary Waxy Casts with Renal Pathology

Authors: Muner M. B. Mohamed, Vipin Varghese, Dustin Chalmers, Khalid M. G. Mohammed, Juan Carlos Q. Velez

Abstract:

Background: Urinary waxy casts (uWxC) are traditionally described in textbooks as indicative of chronic renal parenchymal disease. However, data supporting this contention are lacking. uWxC can be seen in the context of various renal syndromes, including acute kidney injury, chronic kidney disease, rapidly progressive glomerulonephritis (GN), and nephrotic syndrome. Thus, we investigated the correlation between the identification of uWxC and renal pathological findings. Methods: We prospectively collected data from patients seen in nephrology consultation with a urine specimen subjected to microscopic examination of the urinary sediment (MicrExUrSed) over a 3-year period. Within this cohort, we identified cases in which a kidney biopsy was concomitantly performed. We assessed the association of uWxC with glomerular or tubular pathology and with chronicity [interstitial fibrosis and tubular atrophy (IFTA) and glomerular obsolescence (GO)]. Results: Among 683 patients with MicrExUrSed, 103 (15%) underwent kidney biopsy and were included. The mean age was 55 years; 51% were women, 50% white, and 38% self-identified black. Median serum creatinine was 3.2 (0.7-15.6) mg/dL and not significantly different between those with and without uWxC (4.7 vs 3.8 mg/dL, p=0.13). uWxC were identified in 35 (34%) cases. A glomerulopathy was diagnosed in 79 (77%). Among those with uWxC (n=35), a glomerulopathy was more likely to be found with concomitant acute tubular injury (ATI) than without ATI (57% vs. 23%, p=0.0006), whereas among those without uWxC, glomerulopathies were found with or without concomitant ATI at similar frequency (41% vs. 34%, p=0.48). Overall (n=103), more patients with uWxC had ≥ 20% IFTA compared to those without uWxC (74% vs 51%, p=0.03). Among those with glomerulopathy (n=79), more patients with uWxC had ≥ 20% IFTA compared to those without uWxC (89% vs. 56%, p=0.004). uWxC did not correlate with GO.
Conclusion: Identification of uWxC denotes a greater likelihood of finding evidence of ATI superimposed on a glomerulopathy rather than an isolated glomerular lesion. uWxC are associated with a greater probability of finding ≥ 20% IFTA in a kidney biopsy specimen, particularly in those with a glomerular pathology. This observation may help clinicians weigh the suitability of a kidney biopsy when chronicity or the coexistence of ATI is in question.
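The reported overall association between uWxC and ≥ 20% IFTA can be checked approximately with a 2×2 chi-square test; the counts below are reconstructed from the reported percentages (74% of 35 with uWxC, 51% of the 68 without) and should be treated as approximate:

```python
obs = [[26, 9],     # uWxC present: IFTA >=20%, IFTA <20%
       [35, 33]]    # uWxC absent:  IFTA >=20%, IFTA <20%

row = [sum(r) for r in obs]
col = [obs[0][0] + obs[1][0], obs[0][1] + obs[1][1]]
n = sum(row)
# Pearson chi-square against the independence expectation row*col/n
chi2 = sum((obs[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
           for i in range(2) for j in range(2))
print(round(chi2, 2))   # exceeds the 3.84 critical value (df=1, alpha=0.05)
```

The statistic lands near 5, consistent with the study's reported p=0.03 for this comparison.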

Keywords: waxy cast, kidney biopsy, acute tubular injury, glomerulopathy

Procedia PDF Downloads 87
409 Appearance-Based Discrimination in a Workplace: An Emerging Problem for Labor Law Relationships

Authors: Irmina Miernicka

Abstract:

Nowadays, dress codes and, more broadly, personal appearance are becoming more important in the workplace. Dress codes are often used to standardize the image of an employer, to communicate a corporate image, and to ensure that customers can easily identify it. They are also a way to build the employer's professionalism. Additionally, in many cases, an employer will introduce a dress code for health and safety reasons. Employers increasingly oblige employees to follow certain rules concerning their clothing, grooming, make-up, body art, or even weight. An important research problem is to find the limits of the employer's interference with the external appearance of employees. These limits are primarily determined by the two main obligations of the employer, i.e., the obligation to respect the employee's personal rights and the principle of equal treatment and non-discrimination in employment. It should also be remembered that the limits of the employer's interference will be different when certain rules concerning the employee's appearance result directly from the provisions of laws and other acts of universally binding law (workwear, official clothing, and uniforms). The analysis of this issue was based on literature and jurisprudence, both domestic and foreign, including U.S. and European case law, and led the author to put forward a thesis that there are four main principles which will protect the employer from an allegation of discrimination. First, there is the principle of adequacy: the requirements regarding dress code must be appropriate to the position and type of work performed by the employee. Secondly, in accordance with the purpose limitation principle, an employer may introduce certain requirements regarding the appearance of employees if there is a legitimate, objective justification for this (such as work safety or the type of work performed), not dictated by the employer's subjective feelings and preferences.
Thirdly, these requirements must not place an excessive burden on workers or be disproportionate in relation to the employer's objective (principle of proportionality). Fourthly, the employer should ensure that the requirements imposed in the workplace are equally burdensome for, and enforceable against, all groups of employees. Otherwise, it may expose itself to allegations of discrimination based on sex or age. At the same time, it is possible to differentiate the situation of some employees if the differences are small, reflect established habits and traditions, and employees are obliged to maintain the same level of professionalism in their positions. Although this subject may seem insignificant, the frequent application of dress codes and the increasing awareness of both employees and employers indicate that its legal aspects need to be thoroughly analyzed. Many legal cases brought before U.S. and European courts show that employees seek legal protection when they consider that their rights have been violated by a dress code introduced in the workplace.

Keywords: labor law, the appearance of an employee, discrimination in the workplace, dress code in a workplace

Procedia PDF Downloads 120
408 Avian and Rodent Pest Infestations of Lowland Rice (Oryza sativa L.) and Evaluation of Attributable Losses in Savanna Transition Environment

Authors: Okwara O. S., Osunsina I. O. O., Pitan O. R., Afolabi C. G.

Abstract:

Rice (Oryza sativa L.) belongs to the family Poaceae and has become one of the most popular staple foods. Globally, this crop faces the menace of vertebrate pests, of which birds and rodents are the most implicated. This study of avian and rodent infestations and the evaluation of attributable losses was carried out in 2020 and 2021 with the objectives of identifying the bird and rodent species associated with lowland rice and determining the infestation levels, damage intensity, and crop loss induced by these pests. The experiment was laid out in a split-plot arrangement fitted into a Randomized Complete Block Design (RCBD), with the main plots being protected and unprotected groups and the sub-plots being four rice varieties: Ofada, WITA-4, NERICA L-34, and Arica-3. Data collection was done over a 16-week period, and the data obtained were transformed using a square root transformation before Analysis of Variance (ANOVA) was performed at the 5% probability level. The results showed that the infestation levels of both birds and rodents across all the treatment means of the varieties were not significantly different (p > 0.05) in both seasons. The damage intensity by these pests in both years was also not significantly different (p > 0.05) among the means of the varieties, reflecting the indiscriminate feeding of birds and rodents across varieties. The infestation level under the protected group was significantly lower (p < 0.05) than that recorded under the unprotected group. Consequently, estimated crop losses of 91.94% and 90.75% were recorded in 2020 and 2021, respectively, and the identified pest birds were Ploceus melanocephalus, Ploceus cucullatus, and Spermestes cucullatus. Conclusively, vertebrate pests cause damage to lowland rice, which could result in a high percentage of crop loss if left uncontrolled.
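The analysis pipeline described above (square-root transforming count data, then ANOVA at the 5% probability level) can be sketched as follows. The infestation counts below are synthetic, invented purely to illustrate the steps, and the varietal comparison is reduced to a one-way ANOVA rather than the full split-plot model:

```python
# Square-root transformation of count data followed by ANOVA across
# rice varieties. Counts are synthetic illustration data, not the
# study's measurements; sqrt(x + 0.5) is a common variance-stabilizing
# transform for counts.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
varieties = ["Ofada", "WITA-4", "NERICA L-34", "Arica-3"]
counts = {v: rng.poisson(lam=12, size=8) for v in varieties}  # 8 plots each

transformed = {v: np.sqrt(c + 0.5) for v, c in counts.items()}

f_stat, p_value = f_oneway(*transformed.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No significant difference among varieties at the 5% level")
```

Because the synthetic counts are drawn from the same distribution for every variety, this sketch mimics the study's finding of no significant varietal differences.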

Keywords: pests, infestations, evaluation, losses, rodents, avian

Procedia PDF Downloads 119
407 The Effects of Some Organic Amendments on Sediment Yield, Splash Loss, and Runoff of Soils of Selected Parent Materials in Southeastern Nigeria

Authors: Leonard Chimaobi Agim, Charles Arinzechukwu Igwe, Emmanuel Uzoma Onweremadu, Gabreil Osuji

Abstract:

Soil erosion has been linked to stream sedimentation, ecosystem degradation, and loss of soil nutrients. A study was conducted to evaluate the effect of some organic amendments on sediment yield, splash loss, and runoff of soils of selected parent materials in southeastern Nigeria. A total of 20 locations, five from each of four parent materials, namely Asu River Group (ARG), Bende Ameki Group (BAG), Coastal Plain Sand (CPS), and Falsebedded Sandstone (FBS), were used for the study. Collected soil samples were analyzed with standard methods for the initial soil properties. Rainfall simulation at an intensity of 190 mm hr-1 was conducted for 30 minutes on the soil samples, both at the initial stage and after amendment, to obtain erosion parameters. The influence of parent material on sediment yield, splash loss, and runoff based on rainfall simulation was tested using one-way analysis of variance, while the influence of the organic materials and their combinations was tested in a factorial arrangement fitted into a randomized complete block design. The organic amendments included goat droppings (GD), poultry droppings (PD), municipal solid waste (MSW), and their combination (COA), each applied at four rates of 0, 10, 20, and 30 t ha-1. Data were analyzed using analysis of variance suitable for a factorial experiment. Significant means were separated using LSD at the 5% probability level. Results showed significantly (p ≤ 0.05) lower values of sediment yield, splash loss, and runoff following amendment. For instance, the organic amendments reduced sediment yield under wet and dry runs by 12.91% and 26.16% in Ishiagu, 40.76% and 45.67% in Bende, 16.17% and 50% in Obinze, and 22.80% and 42.35% in Umulolo, respectively. Goat droppings and the combination of amendments gave the best results in reducing sediment yield.
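Separating significant treatment means with Fisher's LSD, as done above at the 5% level, reduces to comparing each pairwise mean difference against t(0.025, df_error) * sqrt(2 * MSE / n). A minimal sketch with invented sediment-yield values (the treatment names and numbers are hypothetical, not the study's data), assuming equal replication per treatment:

```python
# Fisher's Least Significant Difference (LSD) test at the 5% level,
# as used in the abstract to separate significant treatment means.
# Group data are synthetic illustration values, not the study's results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 6  # replicates per treatment
groups = {  # hypothetical sediment yield (t/ha) under three treatments
    "control":     rng.normal(10.0, 1.0, n),
    "GD 20 t/ha":  rng.normal(7.5, 1.0, n),
    "COA 20 t/ha": rng.normal(7.0, 1.0, n),
}

k = len(groups)
df_error = k * (n - 1)  # error degrees of freedom
# pooled within-group mean square (MSE); valid for equal group sizes
mse = np.mean([np.var(g, ddof=1) for g in groups.values()])
t_crit = stats.t.ppf(0.975, df_error)
lsd = t_crit * np.sqrt(2 * mse / n)
print(f"LSD(0.05) = {lsd:.2f}")

means = {name: g.mean() for name, g in groups.items()}
names = list(means)
for i, a in enumerate(names):
    for b in names[i + 1:]:  # each pair once
        diff = abs(means[a] - means[b])
        verdict = "significant" if diff > lsd else "ns"
        print(f"{a} vs {b}: diff = {diff:.2f} ({verdict})")
```

Any pair of treatment means differing by more than the LSD value is declared significantly different at the 5% level.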

Keywords: organic amendment, parent material, rainfall simulation, soil erosion

Procedia PDF Downloads 341
406 Molecular Dynamic Simulation of CO2 Absorption into Mixed Aqueous Solutions MDEA/PZ

Authors: N. Harun, E. E. Masiren, W. H. W. Ibrahim, F. Adam

Abstract:

The amine absorption process is an approach for mitigating CO2 in flue gas produced by power plants. This process is the most common system used in the chemical and oil industries for gas purification to remove acid gases. One of the challenges of this process is the high energy requirement for solvent regeneration to release CO2. In the past few years, mixed alkanolamines have received increasing attention. In most cases, the mixtures contain N-methyldiethanolamine (MDEA) as the base amine with the addition of one or two more reactive amines such as piperazine (PZ). The reason for the application of such amine blends is to take advantage of the high reaction rate of CO2 with the activator, combined with the advantage of the low heat of regeneration of MDEA. Several experimental and simulation studies have been undertaken to understand this process using the blended MDEA/PZ solvent. Despite those studies, the mechanism of CO2 absorption into aqueous MDEA is not well understood, and the knowledge available in the open literature is limited. The aim of this study is to investigate the intermolecular interactions of blended MDEA/PZ using Molecular Dynamics (MD) simulation. The MD simulation was run at 313 K and 1 atm using the NVE ensemble for 200 ps and the NVT ensemble for 1 ns. The results were interpreted in terms of Radial Distribution Function (RDF) analysis for two systems of interest, i.e., binary and ternary. The binary system explains the interaction between amine and water molecules, while the ternary system is used to determine the interaction between the amine and CO2 molecules. For the binary system, it was observed that the -OH group of MDEA is more attracted to water molecules than the -NH group of MDEA. The -OH group of MDEA can form hydrogen bonds with water that assist the solubility of MDEA in water. The intermolecular interaction probability of the -OH and -NH groups of MDEA with CO2 in blended MDEA/PZ is higher than with single MDEA. These findings show that the PZ molecule acts as an activator to promote the intermolecular interaction between MDEA and CO2. Thus, blending MDEA with PZ is expected to increase the absorption rate of CO2 and reduce the regeneration heat requirement.
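A radial distribution function g(r) of the kind used in the analysis above is computed from MD coordinates by histogramming pairwise distances under periodic boundary conditions and normalizing by the ideal-gas expectation. A minimal sketch for a cubic box, using random (ideal-gas-like) positions rather than an actual MDEA/PZ trajectory:

```python
# Radial distribution function g(r) from particle positions in a
# periodic cubic box. Positions here are random (ideal gas), so g(r)
# should fluctuate around 1; a real MD trajectory would show peaks
# at preferred separations (e.g., hydrogen-bonding distances).
import numpy as np

def rdf(positions, box_length, n_bins=50):
    n = len(positions)
    r_max = box_length / 2.0
    # pairwise separation vectors with the minimum-image convention
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box_length * np.round(diff / box_length)
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(n, k=1)  # each pair counted once
    hist, edges = np.histogram(dist[iu], bins=n_bins, range=(0.0, r_max))
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    volume = box_length ** 3
    ideal = n * (n - 1) / 2.0 * shell_vol / volume  # expected pair count
    r = 0.5 * (edges[:-1] + edges[1:])
    return r, hist / ideal

rng = np.random.default_rng(42)
pos = rng.uniform(0.0, 10.0, size=(400, 3))
r, g = rdf(pos, box_length=10.0)
print(f"mean g(r) over outer bins: {g[10:].mean():.2f}")  # near 1
```

In practice g(r) is averaged over many trajectory frames and computed between specific atom types (e.g., the MDEA -OH oxygen and water oxygen) to quantify the interaction probabilities discussed above.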

Keywords: amine absorption process, blend MDEA/PZ, CO2 capture, molecular dynamic simulation, radial distribution function

Procedia PDF Downloads 289
405 The Impact of COVID-19 on Antibiotic Prescribing in Primary Care in England: Evaluation and Risk Prediction of the Appropriateness of Type and Repeat Prescribing

Authors: Xiaomin Zhong, Alexander Pate, Ya-Ting Yang, Ali Fahmi, Darren M. Ashcroft, Ben Goldacre, Brian Mackenna, Amir Mehrkar, Sebastian C. J. Bacon, Jon Massey, Louis Fisher, Peter Inglesby, Kieran Hand, Tjeerd van Staa, Victoria Palin

Abstract:

Background: This study aimed to predict the risks of potentially inappropriate antibiotic type and repeat prescribing and to assess changes during COVID-19. Methods: With the approval of NHS England, we used the OpenSAFELY platform to access the TPP SystmOne electronic health record (EHR) system and selected patients prescribed antibiotics from 2019 to 2021. Multinomial logistic regression models predicted the patient’s probability of receiving an inappropriate antibiotic type or repeating the antibiotic course for each common infection. Findings: The population included 9.1 million patients with 29.2 million antibiotic prescriptions. 29.1% of prescriptions were identified as repeat prescribing. Those with a same-day incident infection coded in the EHR had considerably lower rates of repeat prescribing (18.0%), and 8.6% had a potentially inappropriate type. No major changes in the rates of repeat antibiotic prescribing during COVID-19 were found. In the ten risk prediction models, good levels of calibration and moderate levels of discrimination were found. Important predictors included age, prior antibiotic prescribing, and region. Patients varied in their predicted risks. For sore throat, the range from the 2.5th to the 97.5th percentile was 2.7 to 23.5% (inappropriate type) and 6.0 to 27.2% (repeat prescription). For otitis externa, these numbers were 25.9 to 63.9% and 8.5 to 37.1%, respectively. Interpretation: Our study found no evidence of changes in the level of inappropriate or repeat antibiotic prescribing after the start of COVID-19. Repeat antibiotic prescribing was frequent and varied according to regional and patient characteristics. There is a need for treatment guidelines to be developed around antibiotic failure, and for clinicians to be provided with individualised patient information.
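The risk models above are multinomial logistic regressions predicting, per infection, the probability of each prescribing outcome. A minimal sketch with synthetic features mirroring the predictors the abstract names (age, prior antibiotic prescribing, region); the data, labels, and model specification are invented for illustration and are not the OpenSAFELY data or the study's actual models:

```python
# Multinomial logistic regression predicting a three-class prescribing
# outcome (appropriate / inappropriate type / repeat course) from
# synthetic patient features. All data and coefficients are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000
age = rng.integers(18, 90, n)
prior_rx = rng.poisson(2, n)    # prior antibiotic prescriptions
region = rng.integers(0, 5, n)  # coded region
X = np.column_stack([age, prior_rx, region])

# synthetic outcome loosely driven by age and prior prescribing
logits = np.column_stack([
    np.zeros(n),          # 0 = appropriate
    0.02 * age - 2.5,     # 1 = inappropriate type
    0.4 * prior_rx - 2.0, # 2 = repeat course
])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])

model = LogisticRegression(max_iter=1000).fit(X, y)  # multinomial for 3 classes
pred = model.predict_proba(X[:5])
print(pred.round(3))  # one probability per class; each row sums to 1
```

Each patient then receives a per-class predicted risk, which is the quantity whose 2.5th-97.5th percentile spread the abstract reports.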

Keywords: antibiotics, infection, COVID-19 pandemic, antibiotic stewardship, primary care

Procedia PDF Downloads 112
404 Integrated Mass Rapid Transit System for Smart City Project in Western India

Authors: Debasis Sarkar, Jatan Talati

Abstract:

This paper is an attempt to develop an Integrated Mass Rapid Transit System (MRTS) for a smart city project in Western India. Integrated transportation is one of the enablers of smart transportation, providing a seamless intercity as well as regional travel experience. At the city level, the success of a smart city project with respect to transportation lies in properly integrating the different mass rapid transit modes by integrating information, physical infrastructure, route networks, fares, etc. The methodology adopted for this study was primary data research through a questionnaire survey. The respondents gave their perceptions of the ways and means to improve public transport services in urban cities. The respondents were also required to identify the factors and attributes that might motivate more people to shift towards the public mode, and were questioned about the factors that they feel might restrain the integration of the various MRTS modes. Furthermore, this study also develops a utility equation for respondents with the help of multiple linear regression analysis and estimates the probability of shifting to public transport for the factors listed in the questionnaire. It was observed that the most important factors for shifting to public transport were travel time saving and comfort rating. Also, an integrated MRTS can be obtained by combining metro rail with BRTS, metro rail with monorail, monorail with BRTS, and metro rail with Indian Railways. Providing a common smart card for accessing all the available modes would be a pragmatic step towards integration of the MRTS modes.
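A utility equation of the kind described can be fitted by least squares and converted to a shift probability with a binary logit form. The variables, coefficients, and functional form below are hypothetical illustrations under that assumption; the abstract does not report its actual equation:

```python
# Fitting a linear utility equation U = b0 + b1*time_saving + b2*comfort
# by least squares, then mapping utility to a probability of shifting to
# public transport with a logistic function. All data and coefficients
# are synthetic; the study's actual equation is not reported here.
import numpy as np

rng = np.random.default_rng(3)
n = 200
time_saving = rng.uniform(0, 30, n)  # minutes saved per trip
comfort = rng.uniform(1, 5, n)       # comfort rating (1-5)

# synthetic stated utility with noise around assumed true coefficients
true_b = np.array([-3.0, 0.08, 0.6])
X = np.column_stack([np.ones(n), time_saving, comfort])
utility = X @ true_b + rng.normal(0, 0.3, n)

coef, *_ = np.linalg.lstsq(X, utility, rcond=None)
print("fitted coefficients:", coef.round(2))

def p_shift(t, c):
    """Binary-logit probability of shifting to public transport."""
    u = coef @ np.array([1.0, t, c])
    return 1.0 / (1.0 + np.exp(-u))

print(f"P(shift | 20 min saved, comfort 4) = {p_shift(20, 4):.2f}")
```

The fitted coefficients play the role of the attribute weights the survey elicited: larger time savings and comfort ratings raise utility and hence the modeled probability of shifting to the public mode.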

Keywords: mass rapid transit systems, smart city, metro rail, bus rapid transit system, multiple linear regression, smart card, automated fare collection system

Procedia PDF Downloads 266