Search results for: projective methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15397

2587 Temperamental Determinants of Eye-Hand Coordination Formation in the Special Aerial Gymnastics Instruments (SAGI)

Authors: Zdzisław Kobos, Robert Jędrys, Zbigniew Wochyński

Abstract:

Motor activity and good health are sine qua non conditions for the proper practice of a profession, especially in aviation. Therefore, candidates for aviation are selected according to their psychomotor abilities by specialist medical commissions. Moreover, they must pass a physical fitness examination. During studies at the air force academy, eye-hand coordination is formed in two stages. Besides all-purpose physical education, future aircraft pilots must undergo specialist training on SAGI. Training includes: looping, aerowheel, and gyroscope. The aim of training on the above-listed apparatuses is to form the eye-hand coordination needed for tasks in the air. Such coordination is necessary to perform various figures in real flight. Therefore, during the education of future pilots, determinants of the effective formation of this important parameter of human body functioning are sought. Several studies in sport psychology indicate an important role of temperament as a factor determining human behavior during task performance and the acquisition of operating skills. The Polish psychologist Jan Strelau refers temperament to the basic, relatively constant personality features which manifest themselves in the formal characteristics of human behavior. Temperament, initially determined by inborn physiological mechanisms, changes in the course of maturation and under some environmental factors, and concerns the energetic level and temporal characteristics of reactions. Objectives: This study aimed at seeking a relationship between temperamental features and eye-hand coordination formation during training on SAGI. Material and Methods: A group of 30 pilotage students was examined in two situations. The first assessment of the eye-hand coordination level was carried out before the beginning of a 30-hour training on SAGI. The second assessment was carried out after training completion. Training lasted for 2 hours once a week.
Temperament was evaluated with the Formal Characteristics of Behavior − Temperament Inventory (FCB-TI) developed by Bogdan Zawadzki and Jan Strelau. Eye-hand coordination was assessed with a computer version of the Warsaw System of Psychological Tests. Results: The training on SAGI increased the level of eye-hand coordination in the examined students. Conclusions: A higher level of eye-hand coordination was obtained after completion of the training. Moreover, the relationship between eye-hand coordination level and selected temperamental features was statistically significant.

Keywords: temperament, eye-hand coordination, pilot, SAGI

Procedia PDF Downloads 441
2586 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (i.e., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by combustion processes at high temperatures (i.e., car engines or thermal power stations). The same process applies to industrial plants. What has to be investigated – and this is the topic of this paper – is whether or not there really is a correlation between noise pollution and air pollution (taking NO₂ into account) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, so it can stay outdoors, with an external battery that allows it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone and calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board into which the sensors and all the other components are plugged. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time, covering both week and weekend days; in this way, it will be possible to see how the situation changes during the week.
The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained from the sensors. To do so, the data will be converted to fit a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions to apply in the area of analysis, because it will make it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper will describe in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
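The percentage conversion and correlation check described above can be sketched as follows; the sensor readings and function names are hypothetical placeholders, and the actual pipeline adds GIS mapping on top of this step:

```python
import math

def to_percent(values):
    """Rescale raw sensor readings so the maximum observed value maps to 100%."""
    peak = max(values)
    return [100.0 * v / peak for v in values]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical hourly readings from one co-located sensor pair:
noise_db = [55, 62, 70, 68, 58, 52]   # OpeNoise app, dB(A)
no2_ppb = [18, 25, 33, 30, 21, 16]    # AirMonitor board, ppb

noise_pct = to_percent(noise_db)
no2_pct = to_percent(no2_ppb)
r = pearson(noise_pct, no2_pct)
```

Note that rescaling to percentages is a linear transform, so the correlation coefficient is unchanged by it; the percentage scale mainly helps when plotting the two pollutants on one map legend.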

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 213
2585 Utilizing Topic Modelling for Assessing mHealth Apps’ Risks to Users’ Health before and during the COVID-19 Pandemic

Authors: Pedro Augusto Da Silva E Souza Miranda, Niloofar Jalali, Shweta Mistry

Abstract:

BACKGROUND: Software developers utilize automated solutions to scrape users’ reviews and extract meaningful knowledge to identify problems (e.g., bugs, compatibility issues) and possible enhancements (e.g., users’ requests) to their solutions. However, most of these solutions do not consider the health risks to users. Recent works have shed light on the importance of including health risk considerations in the development cycle of mHealth apps to prevent harm to their users. PROBLEM: The COVID-19 pandemic in Canada (and the world) is currently forcing physical distancing upon the general population. This new lifestyle made the usage of mHealth applications more essential than ever, with a projected market forecast of 332 billion dollars by 2025. However, this surge in mHealth usage comes with possible risks to users’ health due to mHealth app problems (e.g., a wrong insulin dosage indication due to a UI error). OBJECTIVE: This work aims to raise awareness amongst mHealth developers of the importance of considering risks to users’ health within their development lifecycle. Moreover, it also aims to help mHealth developers with a Proof-of-Concept (POC) solution to understand, process, and identify possible health risks to users of mHealth apps based on users’ reviews. METHODS: We conducted a mixed-method study design. We developed a crawler to mine the negative reviews from Google Play store users of two sample mHealth apps (my fitness, medisafe). For each mHealth app, we performed the following steps: • The reviews are divided into two groups: before COVID-19 (reviews’ submission date before 15 Feb 2019) and during COVID-19 (reviews’ submission date from 16 Feb 2019 till Dec 2020).
• For each period, the Latent Dirichlet Allocation (LDA) topic model was used to identify the different clusters of reviews based on similar topics. • The topics before and during COVID-19 are compared, and significant differences in the frequency and severity of similar topics are identified. RESULTS: We successfully scraped, filtered, processed, and identified health-related topics in both qualitative and quantitative approaches. The results demonstrated the similarity between topics before and during COVID-19.
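The review-partitioning step described above can be sketched as follows. The split date matches the one reported in the study; the review texts and function names are hypothetical, and the subsequent LDA step would be run on each group with a topic-modelling library (e.g., gensim or scikit-learn):

```python
from datetime import date

CUTOFF = date(2019, 2, 15)  # split date reported in the study

def split_reviews(reviews):
    """Partition scraped reviews into pre- and during-COVID groups by
    submission date. Each review is a (submission_date, text) pair; a real
    pipeline would then fit an LDA topic model to each group separately."""
    before, during = [], []
    for submitted, text in reviews:
        (before if submitted <= CUTOFF else during).append(text)
    return before, during

# Hypothetical scraped negative reviews:
reviews = [
    (date(2018, 6, 1), "app crashes when logging doses"),
    (date(2019, 1, 10), "reminder notifications never fire"),
    (date(2020, 3, 5), "sync fails, lost my medication history"),
]
pre_covid, during_covid = split_reviews(reviews)
```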

Keywords: natural language processing (NLP), topic modeling, mHealth, COVID-19, software engineering, telemedicine, health risks

Procedia PDF Downloads 131
2584 Effect of Different Phosphorus Levels on Vegetative Growth of Maize Variety

Authors: Tegene Nigussie

Abstract:

Introduction: Maize is the most domesticated of all the field crops. Wild maize has not been found to date, and there has been much speculation on its origin. Regardless of the validity of the different theories, it is generally agreed that the center of origin of maize is Central America, primarily Mexico and the Caribbean. Maize in Africa is a recent introduction, although data suggest that it was present in Nigeria even before Columbus's voyages. After being taken to Europe in 1493, maize was introduced to Africa and spread through the continent by different routes. Maize is an important cereal crop in Ethiopia; it is the primary staple food, and rural households show a strong preference for it. For human food, the important constituents of the grain are carbohydrates (starch and sugars), protein, fat or oil (in the embryo), and minerals. About 75 percent of the kernel is starch (within a range of 60-80 percent), but protein content is low (8-15%). In Ethiopia, the introduction of modern farming techniques appears to be a priority. However, the adoption of modern inputs by peasant farmers is found to be very slow; for example, the adoption rate of fertilizer, an input that is relatively widely adopted, is very slow. Differences in socio-economic factors, including input prices and marketing, lie behind the low rate of technological adoption. Objective: The aim of the study is to determine the optimum application rate or level of different phosphorus fertilizers for the vegetative growth of maize and to identify the effect of different phosphorus rates on the growth and development of maize. Methods: The above-ground vegetative parameters were measured on five plants randomly sampled from the middle rows of each plot.
Results: The interaction of nitrogen and maize variety showed a significant (p<0.01) effect on plant height, with the combined application of 60 kg/ha and the BH140 maize variety, and on root length, with the application of 60 kg/ha of nitrogen and the BH140 variety of maize. The highest mean number of leaves per plant (12.33) and mean number of nodes per plant (7.1) indicate that this combination can be used as an alternative for better vegetative growth of maize. Conclusion and Recommendation: Maize is one of the popular and widely cultivated crops in Ethiopia. This study was conducted to investigate the best dosage of phosphorus for the vegetative growth, yield, and quality of a maize variety and to recommend a phosphorus rate and the best variety adaptable to the specific soil condition or area.

Keywords: leaf, carbohydrate protein, adoption, sugar

Procedia PDF Downloads 18
2583 Factors Affecting Cesarean Section among Women in Qatar Using Multiple Indicator Cluster Survey Database

Authors: Sahar Elsaleh, Ghada Farhat, Shaikha Al-Derham, Fasih Alam

Abstract:

Background: Cesarean section (CS) delivery is one of the major concerns in both developing and developed countries. The rate of CS deliveries is on the rise globally, and especially in Qatar. Many socio-economic, demographic, clinical, and institutional factors play an important role in cesarean sections. This study aims to investigate factors affecting the prevalence of CS among women in Qatar using UNICEF’s Multiple Indicator Cluster Survey (MICS) 2012 database. Methods: The study focused on the women’s questionnaire of the MICS, which was successfully distributed to 5699 participants. Following the study's inclusion and exclusion criteria, a final sample of 761 women aged 19-49 years who had given birth at least once before the survey was included. A number of socio-economic, demographic, clinical, and institutional factors, identified through a literature review and available in the data, were considered for the analyses. Bivariate and multivariate logistic regression models, along with multi-level modeling to investigate the clustering effect, were undertaken to identify the factors that affect CS prevalence in Qatar. Results: The bivariate analyses showed that a number of categorical factors are statistically significantly associated with the dependent variable (CS). When identifying the factors in a multivariate logistic regression, the study found that only three categorical factors – ‘age of women’, ‘place at delivery’ and ‘baby weight’ – appeared to significantly affect CS among women in Qatar. Although the MICS dataset is based on a cluster survey, an exploratory multi-level analysis did not show any clustering effect, i.e., no significant variation in results at the higher level (households), suggesting that all analyses at the lower level (individual respondent) are valid without any significant bias in results.
Conclusion: The study found a statistically significant association between the dependent variable (CS delivery) and age of women, frequency of TV watching, assistance at birth, and place of birth. These results need to be interpreted cautiously; however, they can be used as an evidence base for further research on cesarean section delivery in Qatar.
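A minimal sketch of the kind of bivariate association test described above, on a hypothetical 2x2 table of CS outcome by delivery place. The counts are invented for illustration (they are not the MICS data), and the study's multivariate step used logistic regression rather than these simple statistics:

```python
def odds_ratio(table):
    """Odds ratio for a 2x2 table [[a, b], [c, d]]:
    rows = exposure group, columns = outcome (CS yes / CS no)."""
    (a, b), (c, d) = table
    return (a * d) / (b * c)

def chi_square(table):
    """Pearson chi-square statistic for a 2x2 table (no continuity correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts (n = 761): CS yes/no by place of delivery
table = [[120, 280],   # private facility: 120 CS, 280 vaginal
         [60, 301]]    # public facility:   60 CS, 301 vaginal

or_estimate = odds_ratio(table)
chi2 = chi_square(table)
```

With one degree of freedom, a chi-square statistic above 3.84 is significant at the 0.05 level, which is the kind of screening the bivariate stage performs before the multivariate model.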

Keywords: cesarean section, factors, multiple indicator cluster survey, MICS database, Qatar

Procedia PDF Downloads 119
2582 Evaluating the Ability to Cycle in Cities Using Geographic Information Systems Tools: The Case Study of Greek Modern Cities

Authors: Christos Karolemeas, Avgi Vassi, Georgia Christodoulopoulou

Abstract:

Although over the past decades planning a cycle network has become an inseparable part of all transportation plans, there is still a lot of room for improvement in the way planning is done, in order to create safe and direct cycling networks that bring together the parameters that positively influence one's decision to cycle. The aim of this article is to study, evaluate and visualize the bikeability of cities. This term is often used to mean 'the ability of a person to bike'; this study, however, adopts the term in the sense of bikeability as 'the ability of the urban landscape to be biked'. The methodology used included assessing cities' accessibility by cycling, based on international literature and corresponding walkability methods, and the creation of a 'bikeability index'. Initially, a literature review was carried out to identify the factors that positively affect the use of bicycle infrastructure. Those factors were used to create the spatial index and quantitatively compare the city networks. Finally, the bikeability index was applied in two case studies: two Greek municipalities that, although similar in terms of land uses, population density and traffic congestion, are totally different in terms of geomorphology. The factors suggested by the international literature were (a) safety, (b) directness, (c) comfort and (d) the quality of the urban environment. Those factors were quantified through the following parameters: slope, junction density, traffic density, traffic speed, natural environment, built environment, activities coverage, centrality, and accessibility to public transport stations. Each road section was graded for the above-mentioned parameters, and the overall grade shows the level of bicycle accessibility (low, medium, high). Each parameter, as well as the overall accessibility levels, was analyzed and visualized through Geographic Information Systems.
This paper presents the bikeability index, its results, the problems that have arisen, and the conclusions from its implementation, through a Strengths-Weaknesses-Opportunities-Threats analysis. The purpose of this index is to make it easy for researchers, practitioners, politicians, and stakeholders to quantify, visualize and understand which parts of the urban fabric are suitable for cycling.
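The segment-grading step can be sketched as follows. The weights and thresholds below are hypothetical placeholders (the paper's actual weighting scheme is not reproduced in the abstract), and each parameter is assumed pre-normalised to [0, 1] with 1 being most favourable for cycling (e.g., a flat slope, low traffic speed):

```python
# Illustrative parameter weights (hypothetical; they sum to 1).
WEIGHTS = {
    "slope": 0.20,
    "junction_density": 0.05,
    "traffic_density": 0.15,
    "traffic_speed": 0.15,
    "natural_environment": 0.10,
    "built_environment": 0.05,
    "activities_coverage": 0.10,
    "centrality": 0.10,
    "pt_accessibility": 0.10,
}

def grade(segment):
    """Return (score, level) for one road segment, given a dict mapping
    each parameter name to its normalised [0, 1] value."""
    score = sum(WEIGHTS[k] * segment[k] for k in WEIGHTS)
    if score < 1 / 3:
        level = "low"
    elif score < 2 / 3:
        level = "medium"
    else:
        level = "high"
    return score, level
```

In a GIS workflow, `grade` would be applied per road segment and the resulting level written back as an attribute for choropleth-style visualization.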

Keywords: accessibility, cycling, green spaces, spatial data, urban environment

Procedia PDF Downloads 112
2581 Perspectives and Challenges of a Functional Bread with Yeast Extract to Improve the Human Diet

Authors: Cláudia Patrocínio, Beatriz Fernandes, Ana Filipa Pires

Abstract:

Background: Mirror therapy (MT) is used to improve motor function after stroke. During MT, a mirror is placed between the two upper limbs (UL), thus reflecting movements of the non-affected side as if it were the affected side. Objectives: The aim of this review is to analyze the evidence on the effectiveness of MT in the recovery of UL function in the chronic post-stroke population. Methods: The literature search was carried out in the PubMed, ISI Web of Science, and PEDro databases. Inclusion criteria: a) studies that include individuals diagnosed with stroke for at least 6 months; b) intervention with MT in the UL, or comparing it with other interventions; c) articles published until 2023; d) articles published in English or Portuguese; e) randomized controlled studies. Exclusion criteria: a) animal studies; b) studies that do not provide a detailed description of the intervention; c) studies using central electrical stimulation. The methodological quality of the included studies was assessed using the Physiotherapy Evidence Database (PEDro) scale. Studies scoring < 4 on the PEDro scale were excluded. Eighteen studies met all the inclusion criteria. Main results and conclusions: The quality of the studies varies between 5 and 8. One article compared muscular strength training (MST) with vs without MT, and four articles compared the use of MT vs conventional therapy (CT); one study compared extracorporeal shock therapy (EST) with and without MT, and another study compared functional electrical stimulation (FES), MT, and biofeedback; three studies compared MT with Mesh Glove (MG) or sham therapy, five articles compared performing bimanual exercises with and without MT, and three studies compared MT with virtual reality (VR) or robot training (RT).
The assessment of changes in function and structure (International Classification of Functioning, Disability and Health parameter) was carried out in each article mainly using the Fugl-Meyer Assessment-Upper Limb scale; activity and participation (International Classification of Functioning, Disability and Health parameters) were evaluated using different scales in each study. Positive results were seen in these parameters globally. Results suggest that MT combined with these therapies is more effective for motor recovery and function of the affected UL than the techniques alone, although the results were modest in most of the included studies. There is also a more significant improvement in the distal movements of the affected hand than in the rest of the UL.

Keywords: physical therapy, mirror therapy, chronic stroke, upper limb, hemiplegia

Procedia PDF Downloads 56
2580 Weight Loss and Symptom Improvement in Women with Secondary Lymphedema Using Semaglutide

Authors: Shivani Thakur, Jasmin Dominguez Cervantes, Ahmed Zabiba, Fatima Zabiba, Sandhini Agarwal, Kamalpreet Kaur, Hussein Maatouk, Shae Chand, Omar Madriz, Tiffany Huang, Saloni Bansal

Abstract:

The prevalence of lymphedema in women in rural communities highlights the importance of developing effective treatment and prevention methods. Subjects with secondary lymphedema in California’s Central Valley were surveyed at 6 surgical clinics to assess demographics and symptoms of lymphedema. Additionally, subjects on semaglutide treatment for obesity and/or T2DM were monitored for their diabetes management, weight loss progress, and lymphedema symptoms, compared to subjects who were not treated with semaglutide. The subjects were followed for 12 months. Subjects who were treated with semaglutide completed pre-treatment questionnaires and follow-up post-treatment questionnaires at 3, 6, 9, and 12 months, along with a medical assessment. The untreated subjects completed similar questionnaires. The questionnaires investigated subjective feelings regarding lymphedema symptoms and management using a Likert scale; quantitative leg measurements were collected, and blood work was reviewed at these appointments. Paired difference t-tests, chi-squared tests, and independent sample t-tests were performed. 50 subjects, aged 18-75 years, completed the surveys evaluating secondary lymphedema: 90% female, 69% Hispanic, 45% Spanish speaking, 42% disabled, 57% employed, 54% with an income below 30 thousand dollars, and an average BMI of 40. Both treatment and non-treatment groups noted that the most common symptoms were leg swelling (x̄=3.2, SD=1.3), leg pain (x̄=3.2, SD=1.6), loss of daily function (x̄=3, SD=1.4), and negative body image (x̄=4.4, SD=0.54).
Subjects with >3 months of semaglutide treatment, compared to the untreated group, demonstrated the following: 55% of subjects in the treated group had a 10% weight loss vs 3% in the untreated group (average BMI reduction of 11% vs 2.5% untreated, p<0.05) and improved subjective feelings about their lymphedema symptoms: leg swelling (x̄=2.4, SD=0.45 vs x̄=3.2, SD=1.3, p<0.05), leg pain (x̄=2.2, SD=0.45 vs x̄=3.2, SD=1.6, p<0.05), and heaviness (x̄=2.2, SD=0.45 vs x̄=3, SD=1.56, p<0.05). Improvement in diabetes management was demonstrated by an average 0.9% decrease in A1C values compared to 0.1% in the untreated group, p<0.05. In comparison to untreated subjects, subjects on semaglutide showed a 6 cm decrease in the circumference of the leg, knee, calf, and ankle, compared to 2 cm in untreated subjects, p<0.05. Semaglutide was shown to significantly improve weight loss, T2DM management, leg circumference, and the functional, physical, and psychosocial symptoms of secondary lymphedema.

Keywords: diabetes, secondary lymphedema, semaglutide, obesity

Procedia PDF Downloads 63
2579 Cultural Awareness, Intercultural Communication Competence and Academic Performance of Foreign Students Towards an Education ASEAN Integration in Global Education

Authors: Rizalito B. Javier

Abstract:

Research has shown that foreign students with higher levels of cultural awareness and intercultural communication competence tend to have better academic performance outcomes. This study aimed to find out the cultural awareness, intercultural communication competence, and academic performance of foreign students and the relationships among these variables. The methods used were a descriptive-comparative and correlational research design with a quota purposive sampling technique, while frequency counts and percentages, mean and standard deviation, t- and F-tests, and chi-square were utilized to analyze the data. The results revealed that the majority of the respondents were in the age bracket of 21-25 years old, mostly males, all single, and mostly citizens of Papua New Guinea, Angola, Vanuatu, Tanzania, Nigeria, Korea, Rwanda, and Myanmar. The most common language spoken was English; many of them were born-again Christians; the majority took the BS business management degree program; their studies were mainly supported by their parents; they had stayed in the Philippines for 3-4 years; most of them had attended five to six cultural awareness/competence workshop-seminars; the majority of their parents' occupations were family-owned businesses; and their families had been earning a monthly income of P61,000 and above. The respondents were highly aware of their culture in terms of clients' issues. The respondents' intercultural communication competence was rated slightly aware in terms of intercultural awareness, while the foreign students obtained good marks in their average academic performance. However, the profiles of the participants in terms of age, gender, civil status, nationality, course/degree program taken, support for the study, length of stay, workshops attended, and parents' occupation showed significant differences in academic performance, except for type of family, language spoken, religion, and family monthly income.
Moreover, cultural awareness was significantly related to intercultural communication competence, and neither was related to academic performance. It is recommended that foreign students be provided with cultural orientation programs, offered language support services, and engaged in intercultural exchange activities, and that inclusive teaching practices be implemented, to allow students to effectively navigate and interact with people from different cultural backgrounds, fostering a more inclusive and collaborative learning environment.

Keywords: cultural competence, communication competence, intercultural competence, culture-academic performance

Procedia PDF Downloads 23
2578 Forced-Choice Measurement Models of Behavioural, Social, and Emotional Skills: Theory, Research, and Development

Authors: Richard Roberts, Anna Kravtcova

Abstract:

Introduction: The realisation that personality can change over the course of a lifetime has led to a new companion model to the Big Five: the behavioural, emotional, and social skills approach (BESSA). BESSA hypothesizes that this set of skills represents how the individual is thinking, feeling, and behaving when the situation calls for it, as opposed to traits, which represent how someone tends to think, feel, and behave averaged across situations. The five major skill domains share parallels with the Big Five Factor (BFF) model: creativity and innovation (openness), self-management (conscientiousness), social engagement (extraversion), cooperation (agreeableness), and emotional resilience (emotional stability) skills. We point to noteworthy limitations in the current operationalisation of BESSA skills (i.e., via Likert-type items) and offer a different measurement approach: forced choice. Method: In this forced-choice paradigm, individuals were given three skill items (e.g., managing my time) and asked to select the one they believed they were “best at” and the one they were “worst at”. Thurstonian IRT models allow these responses to be placed on a normative scale. Two multivariate studies (N = 1178) were conducted with a 22-item forced-choice version of the BESSA, a published measure of the BFF, and various criteria. Findings: Confirmatory factor analysis of the forced-choice assessment showed acceptable model fit (RMSEA < 0.06), while reliability estimates were reasonable (around 0.70 for each construct). Convergent validity evidence was as predicted (correlations between 0.40 and 0.60 for corresponding BFF and BESSA constructs). Notable was the extent to which the forced-choice BESSA assessment improved upon test-criterion relationships over and above the BFF.
For example, typical regression models find BFF personality accounting for 25% of the variance in life satisfaction scores; both studies showed incremental gains over the BFF exceeding 6% (i.e., BFF and BESSA together accounted for over 31% of the variance in both studies). Discussion: Forced-choice measurement models offer the promise of creating equated test forms that may unequivocally measure skill gains and are less prone to faking and reference-bias effects. Implications for practitioners are discussed, especially those interested in selection, succession planning, and training and development. We also discuss how the forced-choice method can be applied to other constructs like emotional immunity, cross-cultural competence, and self-estimates of cognitive ability.
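A simple rank-scoring stand-in for the forced-choice triplet format can be sketched as follows. The item wording and the +1/0/-1 scoring are illustrative only; the studies themselves fit Thurstonian IRT models to the best/worst choices rather than using sum scores:

```python
def score_block(items, best, worst):
    """Score one forced-choice triplet: +1 for the item marked 'best at',
    -1 for 'worst at', 0 for the remaining item."""
    return {item: (1 if item == best else -1 if item == worst else 0)
            for item in items}

def total_scores(blocks):
    """Accumulate item scores over several triplet blocks for one respondent."""
    totals = {}
    for items, best, worst in blocks:
        for item, s in score_block(items, best, worst).items():
            totals[item] = totals.get(item, 0) + s
    return totals

# Hypothetical responses over two triplets of skill items:
blocks = [
    (("managing my time", "staying calm", "leading discussions"),
     "managing my time", "leading discussions"),
    (("managing my time", "cooperating", "generating ideas"),
     "cooperating", "managing my time"),
]
totals = total_scores(blocks)
```

Because every block awards the same total (+1, 0, -1), these raw sums are purely ipsative (they compare items within a person); this is precisely the limitation that Thurstonian IRT overcomes by recovering normative scale positions.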

Keywords: Big Five, forced-choice method, BFF, methods of measurements

Procedia PDF Downloads 95
2577 Innovations in the Implementation of Preventive Strategies and Measuring Their Effectiveness Towards the Prevention of Harmful Incidents to People with Mental Disabilities who Receive Home and Community Based Services

Authors: Carlos V. Gonzalez

Abstract:

Background: Providers of in-home and community-based services strive for the elimination of preventable harm to the people under their care, as well as to the employees who support them. Traditional models of safety and protection from harm have assumed that the absence of incidents of harm is a good indicator of safe practices. However, this model creates an illusion of safety that is easily shaken by sudden and inadvertent harmful events. As an alternative, we have developed and implemented an evidence-based resilient model of safety known as C.O.P.E. (Caring, Observing, Predicting and Evaluating). Within this model, safety is not defined by the absence of harmful incidents, but by the presence of continuous monitoring, anticipation, learning, and rapid response to events that may lead to harm. Objective: The objective was to evaluate the effectiveness of the C.O.P.E. model for the reduction of harm to individuals with mental disabilities who receive home and community-based services. Methods: Over the course of 2 years we counted the number of incidents of harm and near misses. We trained employees on strategies to eliminate incidents before they fully escalated. We trained employees to track patient status on a scale from 0 to 10. Additionally, we provided direct support professionals and supervisors with customized smartphone applications to track and notify the team of changes in that status every 30 minutes. Finally, the information that we collected was saved in a private computer network that analyzes and graphs the outcome of each incident. Results and conclusions: The use of the C.O.P.E. model resulted in: a reduction in incidents of harm; a reduction in the use of restraints and other physical interventions; an increase in Direct Support Professionals' ability to detect and respond to health problems; and improved employee alertness, with less sleeping on duty.
It also improved caring and positive interaction between Direct Support Professionals and the person being supported, and produced a method to globally measure and assess the effectiveness of harm-prevention plans. Future applications of the C.O.P.E. model for the reduction of harm to people who receive home and community-based services are discussed.
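The 0-10 status tracking with 30-minute check-ins could be sketched with a simple alert rule like the one below. The threshold values and function names are hypothetical; the abstract does not specify how the C.O.P.E. applications decide when to notify the team:

```python
STATUS_MIN, STATUS_MAX = 0, 10
ESCALATION_THRESHOLD = 7   # hypothetical absolute cut-off on the 0-10 scale
RISE_ALERT = 3             # hypothetical jump between two 30-minute checks

def needs_intervention(previous, current):
    """Flag a 30-minute check-in for early intervention: either the status
    has crossed the escalation threshold, or it has risen sharply since
    the previous check."""
    if not (STATUS_MIN <= current <= STATUS_MAX):
        raise ValueError("status must be on the 0-10 scale")
    return current >= ESCALATION_THRESHOLD or (current - previous) >= RISE_ALERT
```

The point of such a rule is the resilient-safety idea in the abstract: the system reacts to trajectories that may lead to harm, not only to harm that has already occurred.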

Keywords: harm, patients, resilience, safety, mental illness, disability

Procedia PDF Downloads 449
2576 Modeling of the Heat and Mass Transfer in Fluids through Thermal Pollution in Pipelines

Authors: V. Radulescu, S. Dumitru

Abstract:

Introduction: Determination of the temperature field inside a fluid in motion has many practical issues, especially in the case of turbulent flow. The phenomenon is greater when the solid walls have a different temperature than the fluid. The turbulent heat and mass transfer have an essential role in case of the thermal pollution, as it was the recorded during the damage of the Thermoelectric Power-plant Oradea (closed even today). Basic Methods: Solving the theoretical turbulent thermal pollution represents a particularly difficult problem. By using the semi-empirical theories or by simplifying the made assumptions, based on the experimental measurements may be assured the elaboration of the mathematical model for further numerical simulations. The three zones of flow are analyzed separately: the vicinity of the solid wall, the turbulent transition zone, and the turbulent core. For each area are determined the distribution law of temperature. It is determined the dependence of between the Stanton and Prandtl numbers with correction factors, based on measurements experimental. Major Findings/Results: The limitation of the laminar thermal substrate was determined based on the theory of Landau and Levice, using the assumption that the longitudinal component of the velocity pulsation and the pulsation’s frequency varies proportionally with the distance to the wall. For the calculation of the average temperature, the formula is used a similar solution as for the velocity, by an analogous mediation. On these assumptions, the numerical modeling was performed with a gradient of temperature for the turbulent flow in pipes (intact or damaged, with cracks) having 4 different diameters, between 200-500 mm, as there were in the Thermoelectric Power-plant Oradea. 
Conclusions: A superposition of the molecular and turbulent viscosities was made, followed by the addition of the molecular and turbulent transfer coefficients, as needed to elaborate the theoretical and numerical models. The laminar boundary layer has a different thickness when flow with heat transfer is compared to flow without a temperature gradient. The obtained results lie within a margin of error of 5% between the classical semi-empirical theories and the developed model, based on the experimental data. Finally, a general correlation between the Stanton number and the Prandtl number is obtained for a specific flow (with the associated Reynolds number).
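The Stanton number underlying such St-Pr correlations can be illustrated with a minimal sketch. This is not the authors' model for the Oradea pipes; it uses the textbook Dittus-Boelter correlation and an assumed Reynolds/Prandtl pair purely to show how St is formed from Nu, Re, and Pr.

```python
# Illustrative sketch only: Stanton number for fully developed turbulent
# pipe flow, via the textbook Dittus-Boelter correlation (heating case).
# The correlation and the example Re, Pr values are assumptions, not
# results from the Oradea power-plant study.

def nusselt_dittus_boelter(re, pr):
    """Dittus-Boelter correlation: Nu = 0.023 * Re^0.8 * Pr^0.4."""
    return 0.023 * re**0.8 * pr**0.4

def stanton(re, pr):
    """Stanton number, St = Nu / (Re * Pr)."""
    return nusselt_dittus_boelter(re, pr) / (re * pr)

# Example: water-like fluid at Re = 1e5, Pr = 4 (assumed values)
st = stanton(1e5, 4.0)
```

A correlation of the kind the abstract describes would replace the textbook constants with correction factors fitted to the experimental measurements.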

Keywords: experimental measurements, numerical correlations, thermal pollution through pipelines, turbulent thermal flow

Procedia PDF Downloads 165
2575 Accurate Mass Segmentation Using U-Net Deep Learning Architecture for Improved Cancer Detection

Authors: Ali Hamza

Abstract:

Accurate segmentation of breast ultrasound images is of paramount importance in enhancing the diagnostic capabilities of breast cancer detection. This study presents an approach utilizing the U-Net architecture for segmenting breast ultrasound images aimed at improving the accuracy and reliability of mass identification within the breast tissue. The proposed method encompasses a multi-stage process. Initially, preprocessing techniques are employed to refine image quality and diminish noise interference. Subsequently, the U-Net architecture, a deep learning convolutional neural network (CNN), is employed for pixel-wise segmentation of regions of interest corresponding to potential breast masses. The U-Net's distinctive architecture, characterized by a contracting and expansive pathway, enables accurate boundary delineation and detailed feature extraction. To evaluate the effectiveness of the proposed approach, an extensive dataset of breast ultrasound images is employed, encompassing diverse cases. Quantitative performance metrics such as the Dice coefficient, Jaccard index, sensitivity, specificity, and Hausdorff distance are employed to comprehensively assess the segmentation accuracy. Comparative analyses against traditional segmentation methods showcase the superiority of the U-Net architecture in capturing intricate details and accurately segmenting breast masses. The outcomes of this study emphasize the potential of the U-Net-based segmentation approach in bolstering breast ultrasound image analysis. The method's ability to reliably pinpoint mass boundaries holds promise for aiding radiologists in precise diagnosis and treatment planning. However, further validation and integration within clinical workflows are necessary to ascertain its practical clinical utility and facilitate seamless adoption by healthcare professionals.
In conclusion, leveraging the U-Net architecture for breast ultrasound image segmentation showcases a robust framework that can significantly enhance diagnostic accuracy and advance the field of breast cancer detection. This approach represents a pivotal step towards empowering medical professionals with a more potent tool for early and accurate breast cancer diagnosis.
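The segmentation metrics named above can be sketched directly from binary masks. This is a minimal, self-contained illustration with toy masks (not breast-ultrasound output); the Hausdorff distance is omitted as it requires spatial coordinates.

```python
# Hedged sketch: Dice coefficient, Jaccard index, sensitivity, and
# specificity computed on flat binary masks. The masks are toy data.

def confusion(pred, truth):
    """Counts of true/false positives and negatives over paired pixels."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    return tp, fp, fn, tn

def dice(pred, truth):
    tp, fp, fn, _ = confusion(pred, truth)
    return 2 * tp / (2 * tp + fp + fn)   # 2|A∩B| / (|A| + |B|)

def jaccard(pred, truth):
    tp, fp, fn, _ = confusion(pred, truth)
    return tp / (tp + fp + fn)           # |A∩B| / |A∪B|

def sensitivity(pred, truth):
    tp, _, fn, _ = confusion(pred, truth)
    return tp / (tp + fn)

def specificity(pred, truth):
    _, fp, _, tn = confusion(pred, truth)
    return tn / (tn + fp)

pred  = [1, 1, 1, 0, 0, 0, 1, 0]   # toy predicted mask (flattened)
truth = [1, 1, 0, 0, 0, 1, 1, 0]   # toy ground-truth mask
```

In practice these would be computed per image over the full pixel grid and averaged across the evaluation set.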

Keywords: image segmentation, U-Net, deep learning, breast cancer detection, diagnostic accuracy, mass identification, convolutional neural network

Procedia PDF Downloads 85
2574 Measuring Self-Regulation and Self-Direction in Flipped Classroom Learning

Authors: S. A. N. Danushka, T. A. Weerasinghe

Abstract:

The diverse necessities of instruction can be addressed effectively with the support of new dimensions of ICT-integrated learning such as blended learning, a combination of face-to-face and online instruction that ensures greater flexibility in student learning and congruity of course delivery. As blended learning has become the ‘new normality' in education, many experimental and quasi-experimental research studies provide ample evidence of its successful implementation in many fields of study, but it is hard to justify whether blended learning could work similarly in the delivery of technology-teacher development programmes (TTDPs). The present study addresses this research uncertainty, and, having considered existing research approaches, the study methodology was designed to identify efficient instructional strategies for flipped classroom learning in TTDPs. In a quasi-experimental pre-test and post-test design with a mixed-method research approach, the major study objective was tested with two heterogeneous samples (N=135) identified in a virtual learning environment in a Sri Lankan university. A non-randomized informal ‘before-and-after without control group’ design was employed, and two data collection methods, identical pre-tests and post-tests and Likert-scale questionnaires, were used in the study. Two selected instructional strategies, self-directed learning (SDL) and self-regulated learning (SRL), were tested in an appropriate instructional framework with the two heterogeneous samples (pre-service and in-service teachers). Data were statistically analyzed, and the more efficient instructional strategy was determined via t-test, ANOVA, and ANCOVA. The effectiveness of the two instructional strategy implementation models was determined via multiple linear regression analysis.
ANOVA (p < 0.05) shows that age, prior educational qualifications, gender, and work experience do not impact the learning achievements of the two diverse groups of learners when the instructional strategy is changed. ANCOVA (p < 0.05) analysis shows that SDL is more efficient than SRL for the two diverse groups of technology-teachers. Multiple linear regression (p < 0.05) analysis shows that the staged self-directed learning (SSDL) model and the four-phased model of motivated self-regulated learning (COPES model) are efficient in the delivery of course content in flipped classroom learning.

Keywords: COPES model, flipped classroom learning, self-directed learning, self-regulated learning, SSDL model

Procedia PDF Downloads 200
2573 Effective Apixaban Clearance with Cytosorb Extracorporeal Hemoadsorption

Authors: Klazina T. Havinga, Hilde R. H. de Geus

Abstract:

Introduction: Pre-operative coagulation management of patients prescribed Apixaban, a new oral anticoagulant (a factor Xa inhibitor), is difficult, especially when chronic kidney disease (CKD) causes drug overdose. Apixaban is not dialyzable due to its high level of protein binding. An antidote, Andexanet α, is available but expensive and has an unfavorably short half-life. We report the successful extracorporeal removal of Apixaban prior to emergency surgery with the CytoSorb® hemoadsorption device. Methods: An 89-year-old woman with CKD, with an Apixaban prescription for atrial fibrillation, presented at the ER with traumatic rib fractures, a flail chest, and an unstable spinal fracture (T12) for which emergency surgery was indicated. However, due to very high Apixaban levels, this surgery had to be postponed. Based on the Apixaban-specific anti-factor Xa activity (AFXaA) measurements at admission and 10 hours later, complete clearance was expected only after 48 hours. In order to enhance Apixaban removal, reduce the time to operation, and thereby reduce pulmonary complications, CRRT with a CytoSorb® cartridge was initiated. AFXaA was measured frequently as a surrogate for Apixaban drug concentrations, pre- and post-adsorber, in order to calculate the adsorber-related clearance. Results: The admission AFXaA concentration, as a surrogate for the Apixaban drug level, was 218 ng/ml, which decreased to 157 ng/ml after ten hours. Due to sustained anticoagulation effects, surgery was again postponed. However, after CRRT (Multifiltrate Pro, Fresenius Medical Care; blood flow 200 ml/min, dialysate flow 4000 ml/h, prescribed renal dose 51 ml/kg/h) with CytoSorb® connected in series into the circuit was initiated, the AFXaA levels decreased quickly to sub-therapeutic levels (within 5 hours).
The adsorber-related (indirect) Apixaban clearance was calculated every half hour as Cl = Qe × (AFXaA_pre − AFXaA_post) / AFXaA_pre, with Qe the plasma flow rate calculated with Ht = 0.38 and a system blood flow rate of 200 ml/min, yielding 100 ml/min, 72 ml/min, and 57 ml/min. Although, as expected, the adsorber-related clearance decreased quickly due to saturation of the beads, the reduction rate achieved still resulted in a very rapid decrease in AFXaA levels. Surgery was ordered and possible within 5 hours after CytoSorb® initiation. Conclusion: The CytoSorb® hemoadsorption device enabled rapid correction of Apixaban-associated anticoagulation.
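The clearance arithmetic above can be sketched as follows. The blood flow (200 ml/min) and hematocrit (0.38) are taken from the case report; the pre/post AFXaA pair is a hypothetical example chosen only to illustrate the calculation.

```python
# Sketch of the adsorber clearance calculation:
#   Cl = Qe * (AFXaA_pre - AFXaA_post) / AFXaA_pre
# with effective plasma flow Qe = blood_flow * (1 - Ht).
# The pre/post AFXaA values below are hypothetical.

def plasma_flow(blood_flow_ml_min, hematocrit):
    """Plasma flow rate through the adsorber, in ml/min."""
    return blood_flow_ml_min * (1.0 - hematocrit)

def adsorber_clearance(qe_ml_min, afxaa_pre, afxaa_post):
    """Indirect clearance from the pre/post extraction ratio."""
    return qe_ml_min * (afxaa_pre - afxaa_post) / afxaa_pre

qe = plasma_flow(200, 0.38)               # 124 ml/min, as in the case report
cl = adsorber_clearance(qe, 150.0, 60.0)  # hypothetical pre/post levels
```

With a fully saturating adsorber (post = 0) the clearance equals the plasma flow, which is why the reported values start near Qe and fall as the beads saturate.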

Keywords: Apixaban, CytoSorb, emergency surgery, Hemoadsorption

Procedia PDF Downloads 159
2572 Posterior Acetabular Fractures-Optimizing the Treatment by Enhancing Practical Skills

Authors: Olivera Lupescu, Taina Elena Avramescu, Mihail Nagea, Alexandru Dimitriu

Abstract:

Acetabular fractures represent a real challenge due to their impact upon the long-term function of the hip joint and the risk of intra- and peri-operative complications, especially as they affect young, active people. That is why treating these fractures requires certain skills, which must be exercised in the pre-operative planning as well as in the execution of surgery. The authors retrospectively analyse 38 cases with acetabular fractures operated using the posterior approach in our hospital between 01.01.2013 and 01.01.2015, for which complete medical records ensure a follow-up of 24 months, in order to establish the main causes of potential errors and to underline the methods for preventing them. This target is included in the Erasmus+ project ‘Collaborative learning for enhancing practical skills for patient-focused interventions in gait rehabilitation after orthopedic surgery (COR-skills)’. This paper analyses the pitfalls revealed by these cases, as well as the measures necessary to enhance the practical skills of the surgeons who perform acetabular surgery. Pre-operative planning matched the intra- and post-operative outcome in 88% of the analyzed points, rising from 72% at the beginning to 94% in the last case, meaning that experience is very important in treating this injury. The main problems detected for the posterior approach were nerve complications in 3 cases, 1 of them a complete paralysis of the sciatic nerve, which recovered 6 months after surgery; in another 2 cases, an intra-articular position of the screws was demonstrated by post-operative CT scans, so secondary screw removal was necessary. We analysed this incident as well, given the lack of information about the relationship between the screws and the joint in this approach. Septic complications appeared in 3 cases, 2 superficial and 1 deep (requiring implant removal).
The most important problems were the reduction of the fractures and the positioning of the screws so as not to interfere with the articular space. In posterior acetabular fractures, complex pre-operative planning is important in order to achieve maximum treatment efficacy with minimum risk; optimal training of the surgeons, insisting on the main points of potential mistakes, ensures the success of the procedure, as well as a favorable outcome for the patient.

Keywords: acetabular fractures, articular congruency, surgical skills, vocational training

Procedia PDF Downloads 207
2571 A Review of Digital Twins to Reduce Emission in the Construction Industry

Authors: Zichao Zhang, Yifan Zhao, Samuel Court

Abstract:

The carbon emission problem of the traditional construction industry has long been a pressing issue. With the growing emphasis on environmental protection and advancement of science and technology, the organic integration of digital technology and emission reduction has gradually become a mainstream solution. Among various sophisticated digital technologies, digital twins, which involve creating virtual replicas of physical systems or objects, have gained enormous attention in recent years as tools to improve productivity, optimize management and reduce carbon emissions. However, the relatively high implementation costs including finances, time, and manpower associated with digital twins have limited their widespread adoption. As a result, most of the current applications are primarily concentrated within a few industries. In addition, the creation of digital twins relies on a large amount of data and requires designers to possess exceptional skills in information collection, organization, and analysis. Unfortunately, these capabilities are often lacking in the traditional construction industry. Furthermore, as a relatively new concept, digital twins have different expressions and usage methods across different industries. This lack of standardized practices poses a challenge in creating a high-quality digital twin framework for construction. This paper firstly reviews the current academic studies and industrial practices focused on reducing greenhouse gas emissions in the construction industry using digital twins. Additionally, it identifies the challenges that may be encountered during the design and implementation of a digital twin framework specific to this industry and proposes potential directions for future research. This study shows that digital twins possess substantial potential and significance in enhancing the working environment within the traditional construction industry, particularly in their ability to support decision-making processes. 
It further shows that digital twins can improve the work efficiency and energy utilization of related machinery while helping the industry save energy and reduce emissions. This work will help scholars in this field to better understand the relationship between digital twins and energy conservation and emission reduction, and it also serves as a conceptual reference for practitioners implementing related technologies.

Keywords: digital twins, emission reduction, construction industry, energy saving, life cycle, sustainability

Procedia PDF Downloads 106
2570 Characterization of WNK2 Role on Glioma Cells Vesicular Traffic

Authors: Viviane A. O. Silva, Angela M. Costa, Glaucia N. M. Hajj, Ana Preto, Aline Tansini, Martin Roffé, Peter Jordan, Rui M. Reis

Abstract:

Autophagy is a recycling and degradative system suggested to be a major cell death pathway in cancer cells. The autophagy pathway is interconnected with the endocytosis pathways, which share the same ultimate lysosomal destination. Lysosomes are crucial regulators of cell homeostasis, responsible for downregulating receptor signalling and turnover. It seems highly likely that derailed endocytosis makes major contributions to several hallmarks of cancer. WNK2, a member of the WNK (with-no-lysine [K]) subfamily of protein kinases, has been found to be downregulated by promoter hypermethylation and has been proposed to act as a specific tumour-suppressor gene in brain tumors. Although some contradictory studies have indicated WNK2 as an autophagy modulator, its role in cancer cell death is largely unknown. There is also growing evidence for additional roles of WNK kinases in vesicular traffic. Aim: To evaluate the role of WNK2 in autophagy and endocytosis in the glioma context. Methods: Wild-type (wt) A172 cells (WNK2 promoter-methylated) and A172 cells transfected either with an empty vector (Ev) or with a WNK2 expression vector were used to assess the cellular basal capacity to promote autophagy, through western blot and flow-cytometry analysis. Additionally, we evaluated the effect of WNK2 on general endocytic trafficking routes by immunofluorescence. Results: The re-expression of ectopic WNK2 did not interfere with the expression levels of the autophagy-related protein light chain 3 (LC3-II), nor did it alter the mTOR signaling pathway, when compared with Ev or wt A172 cells. However, the restoration of WNK2 resulted in a marked increase (from 8% to 92.4%) in the formation of acidic vesicular organelles (AVOs). Moreover, our results also suggest that WNK2-expressing cells show a delayed uptake and internalization rate of cholera toxin B and transferrin ligands. Conclusions: The restoration of WNK2 interferes with vesicular traffic during the endocytosis pathway and increases AVO formation.
These results also suggest a role for WNK2 in growth factor receptor turnover related to cell growth and homeostasis, and further link WNK2 silencing to the genesis of gliomas.

Keywords: autophagy, endocytosis, glioma, WNK2

Procedia PDF Downloads 370
2569 How Defining the Semi-Professional Journalist Is Creating Nuance and a Familiar Future for Local Journalism

Authors: Ross Hawkes

Abstract:

The rise of hyperlocal journalism and its role in the wider local news ecosystem has been debated across both industry and academic circles, particularly via the lens of structures, models, and platforms. The nuances within this sphere are now allowing for the semi-professional journalist to emerge as a key component of the landscape at the fringes of journalism. By identifying and framing the labour of these individuals against a backdrop of change within the professional local newspaper publishing industry, it is possible to address wider debates around the ways in which participants enter and exist in the space between amateur and professional journalism. Considerations around prior experience and understanding allow us to better shape and nuance the hyperlocal landscape in order to understand the challenges and opportunities facing local news via this emergent form of semi-professional journalistic production. The disruption to local news posed by the changing nature of audiences, long-established methods of production, the rise of digital platforms, and increased competition in the online space has brought questions around the very nature and identity of local news, as well as the uncertain future and precarity which surrounds it. While the hyperlocal sector has long been associated as a potential future direction for local journalism through an alternative approach to reporting and as a mechanism for participants to pass between amateurism towards professionalism, there is now a semi-professional space being occupied in a different way. Those framed as semi-professional journalists are not necessarily transiting through this space at the fringes of the professional industry; instead, they are occupying and claiming the space as an entity within itself. 
By framing the semi-professional journalist through a lens of prior experience and knowledge of the sector, it is possible to identify how their motivations vary from the traditional metrics of financial gain, personal progression, or a sense of civic or community duty. While such factors may be by-products of their labour, the desire of such reporters to recreate and retain experiences and values from their past as a participant or consumer is the central basis of the framework to define the semi-professional journalist. Through understanding the motivations, aims and factors shaping the semi-professional journalist within the wider journalism and hyperlocal journalism debates and landscape, it will be possible to better frame the role they can play in sustaining the longer term provision of local news and addressing broader issues and factors within the sector.

Keywords: hyperlocal, journalism, local news, semi-professionalism

Procedia PDF Downloads 30
2568 Effects of Roasting as Preservative Method on Food Value of the Runner Groundnuts, Arachis hypogaea

Authors: M. Y. Maila, H. P. Makhubele

Abstract:

Roasting is one of the oldest preservation methods used for foods such as nuts and seeds. It is a process by which heat is applied to dry foodstuffs without the use of oil or water as a carrier. Groundnut seeds, also known as peanuts when sun-dried or roasted, are among the oldest oil crops and are mostly consumed as a snack, after roasting, in many parts of South Africa. However, roasting can denature proteins, destroy amino acids, decrease nutritive value, and induce undesirable chemical changes in the final product. The aim of this study, therefore, was to evaluate the effect of various roasting times on the food value of runner groundnut seeds. A constant temperature of 160 °C and various time intervals (20, 30, 40, 50 and 60 min) were used for roasting groundnut seeds in an oven. Roasted groundnut seeds were then cooled and milled to flour. The milled sun-dried, raw groundnuts served as the reference. The proximate analysis (moisture, energy and crude fats) was performed, and the results were determined using standard methods. The antioxidant content was determined using HPLC. Mineral (cobalt, chromium, silicon and iron) contents were determined by first digesting the ash of sun-dried and roasted seed samples in 3 M hydrochloric acid, followed by atomic absorption spectrometry. All results were subjected to ANOVA using SAS software. Relative to the reference, roasting time significantly (p ≤ 0.05) reduced the moisture (71%–88%), energy (74%) and crude fat (5%–64%) of the runner groundnut seeds, whereas the antioxidant content was significantly (p ≤ 0.05) increased (35%–72%) with increasing roasting time. Similarly, the tested mineral contents of the roasted runner groundnut seeds were also significantly (p ≤ 0.05) reduced at all roasting times: cobalt (21%–83%), chromium (48%–106%) and silicon (58%–77%). However, the iron content was not significantly (p ≤ 0.05) affected.
Generally, the tested runner groundnut seeds had a higher food value in the raw state than in the roasted state, except for the antioxidant content. Moisture is a critical factor affecting the shelf life, texture and flavor of the final product; loss of moisture prolongs shelf life, which contributes to the stability of the roasted peanuts. Also, the increased antioxidant content in roasted groundnuts is beneficial, as antioxidants are health-promoting compounds. In conclusion, the overall reduction in the proximate and mineral contents of the runner groundnut seeds due to roasting is sufficient to suggest an influence of roasting time on the food value and shelf life of the final product.
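The percentage reductions quoted above are changes relative to the sun-dried (raw) reference, which can be sketched as a simple percent-change calculation. The numbers below are hypothetical placeholders, not the measured values.

```python
# Minimal sketch of the percent-change calculation behind the reported
# reductions, relative to the sun-dried (raw) reference.
# The example values are hypothetical, not the study's measurements.

def percent_change(raw, roasted):
    """Percent change relative to the raw reference; positive = reduction."""
    return (raw - roasted) / raw * 100.0

# e.g. a hypothetical moisture content falling from 7.5% to 1.5% after roasting
moisture_reduction = percent_change(7.5, 1.5)
```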

Keywords: dry roasting, legume, oil source, peanuts

Procedia PDF Downloads 291
2567 Modelling Soil Inherent Wind Erodibility Using Artificial Intelligence and Hybrid Techniques

Authors: Abbas Ahmadi, Bijan Raie, Mohammad Reza Neyshabouri, Mohammad Ali Ghorbani, Farrokh Asadzadeh

Abstract:

In recent years, vast areas of Urmia Lake in Dasht-e-Tabriz have dried up, exposing saline sediments at the surface; the lake's coastal areas are highly susceptible to wind erosion. This study was conducted to investigate wind erosion and its relationship to soil physicochemical properties, and to model wind erodibility (WE) using artificial intelligence techniques. For this purpose, 96 soil samples were collected from the 0-5 cm depth across 414000 hectares using a stratified random sampling method. To measure WE, all samples (<8 mm) were exposed to 5 different wind velocities (9.5, 11, 12.5, 14.1 and 15 m s-1 at a height of 20 cm) in a wind tunnel, and the relationship with soil physicochemical properties was evaluated. According to the results, WE varied within the range of 9.98-76.69 (g m-2 min-1)/(m s-1) with a mean of 10.21 and a coefficient of variation of 94.5%, showing a relatively high variation in the studied area. WE was significantly (P<0.01) affected by soil physical properties, including mean weight diameter, erodible fraction (secondary particles smaller than 0.85 mm) and the percentages of the secondary particle size classes 2-4.75, 1.7-2 and 0.1-0.25 mm. Results showed that the mean weight diameter, erodible fraction and percentage of the 0.1-0.25 mm size class demonstrated the strongest relationships with WE (coefficients of determination 0.69, 0.67 and 0.68, respectively). This study also compared the efficiency of multiple linear regression (MLR), gene expression programming (GEP), an artificial neural network (MLP), an artificial neural network based on a genetic algorithm (MLP-GA) and an artificial neural network based on the whale optimization algorithm (MLP-WOA) in predicting soil wind erodibility in Dasht-e-Tabriz. Among the 32 measured soil variables, the percentages of fine sand, the 1.7-2.0 and 0.1-0.25 mm size classes (secondary particles) and organic carbon were selected as the model inputs by step-wise regression.
Findings showed MLP-WOA to be the most powerful artificial intelligence technique (R2=0.87, NSE=0.87, ME=0.11 and RMSE=2.9) for predicting soil wind erodibility in the study area, followed by MLP-GA, MLP, GEP and MLR; the difference between these methods was significant according to the MGN test. Based on the above findings, MLP-WOA may be used as a promising method to predict soil wind erodibility in the study area.
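Two of the skill metrics quoted above (RMSE and the Nash-Sutcliffe efficiency, NSE) can be sketched as follows. The observed/predicted values are toy numbers, not the wind-tunnel data.

```python
# Hedged sketch of two model-skill metrics used above: RMSE and the
# Nash-Sutcliffe efficiency, NSE = 1 - SS_res / SS_tot.
# The observed/predicted values are toy data.
import math

def rmse(obs, pred):
    """Root-mean-square error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def nse(obs, pred):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs  = [10.0, 12.0, 8.0, 14.0]   # toy observed erodibility values
pred = [9.0, 12.5, 8.5, 13.0]    # toy model predictions
```

An NSE near 0.87, as reported for MLP-WOA, means the model explains most of the observed variance; an NSE of 0 would be no better than predicting the mean.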

Keywords: wind erosion, erodible fraction, gene expression programming, artificial neural network

Procedia PDF Downloads 73
2566 Association of Severe Preeclampsia with Offspring Neurodevelopmental and Psychiatric Disorders: A Finnish Population-Based Cohort Study

Authors: Linghua Kong, Xinxia Chen, Mika Gissler, Catharina Lavebratt

Abstract:

Background: Prenatal exposure to preeclampsia has been associated with an increased risk of offspring attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD), and intellectual disability. However, little is known about the association between prenatal exposure to severe preeclampsia and neurodevelopmental and psychiatric disorders in offspring. Objective: This study aimed to assess the risk of maternal preeclampsia combined with perinatal problems, specifically low birth weight and prematurity, on offspring neuropsychiatric disorders. Methods: All singleton live births in Finland between 1996 and 2014 (n=1 012 723) were followed up in nation-wide registries until 2018. Main exposures included preeclampsia, small for gestational age, and delivery before 34 gestational weeks. Offspring neurodevelopmental and psychiatric disorders (ICD-10 codes) were examined as outcome variables. Offspring birth year, sex, maternal age at delivery, parity, marital status at birth, mother's country of birth, maternal smoking, maternal gestational diabetes, maternal use of psychotropic medication during pregnancy, and maternal systemic inflammatory diseases were used as covariates. Risks for neurodevelopmental and psychiatric disorders were estimated using Cox proportional hazards modeling. Results: Of the 1 012 723 offspring, 25 901 (2.6%) were exposed to preeclampsia, and 93 281 (9.2%) were diagnosed with a neuropsychiatric disorder.
Compared to births unexposed to preeclampsia, small for gestational age or delivery before 34 gestational weeks, those exposed to preeclampsia only had a 21% increase in the likelihood of any neuropsychiatric disorders after adjusting for potential confounding (adjusted HR=1.21, 95% CI: 1.15-1.26), while exposure to preeclampsia combined with small for gestational age or delivery before 34 gestational weeks had a more than twofold increased risk of having a child with neuropsychiatric disorders (adjusted HR=2.16, 95% CI: 2.02-2.32). The adjusted HR for neuropsychiatric disorders in offspring with small for gestational age or delivery before 34 gestational weeks only was 1.79 (95% CI: 1.73-1.83). In addition, the risk estimate in offspring exposed to both preeclampsia and perinatal problems was greater than those only exposed to preeclampsia for having personality disorders (adjusted HR=1.66; 95% CI: 1.07-2.57), intellectual disabilities (adjusted HR=3.47; 95% CI: 2.86-4.22), specific developmental disorders (adjusted HR=2.91; 95% CI: 2.69-3.15), ASD (adjusted HR=1.75; 95% CI: 1.42-2.17), ADHD and conduct disorders (adjusted HR=2.00; 95%CI: 1.76-2.27), and other behavioral and emotional disorders (adjusted HR=2.09; 95% CI: 1.84-2.37). Conclusion: In utero exposure to severe preeclampsia increased the risk of several neurodevelopmental and psychiatric disorders in offspring. Our findings are relevant to women with hypertensive disorders with regard to pregnancy consultation and management and may yield effective clues for the prevention of neurodevelopmental and psychiatric disorders in childhood.

Keywords: low birth weight, neurodevelopmental disorders, preeclampsia, prematurity, psychiatric disorders

Procedia PDF Downloads 148
2565 Improving the Detection of Depression in Sri Lanka: Cross-Sectional Study Evaluating the Efficacy of a 2-Question Screen for Depression

Authors: Prasad Urvashi, Wynn Yezarni, Williams Shehan, Ravindran Arun

Abstract:

Introduction: Primary health services are often the first point of contact that patients with mental illness have with the healthcare system. A number of tools have been developed to increase the detection of depression in the context of primary care. However, one challenge among many is utilizing these tools within the limited primary care consultation timeframe. Therefore, short questionnaires that screen for depression and are just as effective as more comprehensive diagnostic tools may be beneficial in improving detection rates among patients visiting a primary care setting. Objective: To develop and determine the sensitivity and specificity of a 2-Question Questionnaire (2-QQ) to screen for depression in a suburban primary care clinic in Ragama, Sri Lanka. The purpose is to develop a short, culturally adapted screening tool for depression in order to increase the detection of depression in the Sri Lankan patient population. Methods: This was a cross-sectional study involving two steps. Step one: verbal administration of the 2-QQ to patients by their primary care physician. Step two: completion of the Peradeniya Depression Scale (PDS), a validated diagnostic tool for depression, by the patient after their consultation with the primary care physician. The results from the PDS were then correlated with the results from the 2-QQ for each patient to determine the sensitivity and specificity of the 2-QQ. Results: A score of 1 or above on the 2-QQ was most sensitive but least specific. Thus, setting the threshold at this level is effective for correctly identifying depressed patients, but also inaccurately captures patients who are not depressed. A score of 6 on the 2-QQ was most specific but least sensitive. Setting the threshold at this level is effective for correctly identifying patients without depression, but not very effective at capturing patients with depression.
Discussion: In the context of primary care, it may be worthwhile setting the 2-QQ screen at a lower threshold for positivity (such as a score of 1 or above). This would generate a high test sensitivity and thus capture the majority of patients that have depression. On the other hand, by setting a low threshold for positivity, patients who do not have depression but score higher than 1 on the 2-QQ will also be falsely identified as testing positive for depression. However, the benefits of identifying patients who present with depression may outweigh the harms of falsely identifying a non-depressed patient. It is our hope that the 2-QQ will serve as a quick primary screen for depression in the primary care setting and serve as a catalyst to identify and treat individuals with depression.
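The threshold trade-off described above can be sketched with a small sensitivity/specificity calculation. The scores and diagnoses below are invented solely to illustrate how a low cut-off maximizes sensitivity while a high cut-off maximizes specificity.

```python
# Illustrative sketch of the screening threshold trade-off: raising the
# positivity cut-off lowers sensitivity and raises specificity.
# Scores and reference diagnoses below are made up, not study data.

def sens_spec_at(threshold, scores, depressed):
    """Sensitivity and specificity when score >= threshold counts as positive."""
    tp = sum(s >= threshold and d for s, d in zip(scores, depressed))
    fn = sum(s < threshold and d for s, d in zip(scores, depressed))
    tn = sum(s < threshold and not d for s, d in zip(scores, depressed))
    fp = sum(s >= threshold and not d for s, d in zip(scores, depressed))
    return tp / (tp + fn), tn / (tn + fp)

scores    = [0, 1, 2, 3, 4, 5, 6, 2, 0, 6]               # toy 2-QQ-style scores
depressed = [False, False, True, True, False, True,
             True, False, False, True]                    # toy reference diagnoses

low_sens, low_spec = sens_spec_at(1, scores, depressed)   # permissive cut-off
high_sens, high_spec = sens_spec_at(6, scores, depressed) # strict cut-off
```

On this toy data, the cut-off of 1 catches every depressed case at the cost of false positives, while the cut-off of 6 produces no false positives but misses most cases, mirroring the pattern reported for the 2-QQ.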

Keywords: depression, primary care, screening tool, Sri Lanka

Procedia PDF Downloads 259
2564 Immunocytochemical Stability of Antigens in Cytological Samples Stored in In-house Liquid-Based Medium

Authors: Anamarija Kuhar, Veronika Kloboves Prevodnik, Nataša Nolde, Ulrika Klopčič

Abstract:

The decision for immunocytochemistry (ICC) is usually made on the basis of the findings in Giemsa- and/or Papanicolaou-stained smears. More demanding diagnostic cases require the preparation of additional cytological preparations. It is therefore convenient to suspend cytological samples in a liquid-based medium (LBM) that preserves antigen and morphological properties. However, how long these properties remain preserved in the medium is usually unknown. Eventually, cell morphology becomes impaired and altered, and antigen properties may be lost or become diffuse. In this study, the influence of the length of storage of cytological samples in an in-house liquid-based medium on antigen properties and cell morphology is evaluated. The question is how long cytological samples can be stored in this medium so that the results of immunocytochemical reactions are still reliable and can be safely used in routine cytopathological diagnostics. The stability of the 6 ICC markers most frequently used in everyday routine work was tested: Cytokeratin AE1/AE3, Calretinin, epithelial specific antigen Ep-CAM (MOC-31), CD45, oestrogen receptor (ER), and melanoma triple cocktail, on methanol-fixed cytospins prepared from fresh fine-needle aspiration biopsies, effusion samples, and disintegrated lymph nodes suspended in the in-house cell medium. Cytospins were prepared on the day of sampling as well as on the second, fourth, fifth, and eighth day after sample collection. Next, they were fixed in methanol and immunocytochemically stained. Finally, the percentage of positively stained cells, reaction intensity, counterstaining, and cell morphology were assessed using two assessment methods: internal assessment and the UK NEQAS ICC scheme assessment.
Results show that the antigen properties for Cytokeratin AE1/AE3, MOC-31, CD 45, ER, and Melanoma triple cocktail were preserved even after 8 days of storage in the in-house LBM, while the antigen properties for Calretinin remained unchanged for only 4 days. The key parameters for assessing antigen detection are the proportion of cells with a positive reaction and the intensity of staining. Well-preserved cell morphology is highly important for reliable interpretation of an ICC reaction. Therefore, it would be valuable to perform a similar analysis for other ICC markers to determine the duration for which antigen and morphological properties are preserved in LBM.

Keywords: cytology samples, cytospins, immunocytochemistry, liquid-based cytology

Procedia PDF Downloads 145
2563 Optical and Surface Characteristics of Direct Composite, Polished and Glazed Ceramic Materials After Exposure to Tooth Brush Abrasion and Staining Solution

Authors: Maryam Firouzmandi, Moosa Miri

Abstract:

Aim and background: Esthetic and structural reconstruction of anterior teeth may require the application of different restoration materials. In this regard, the combination of a direct composite veneer and a ceramic crown is a common treatment option. Despite the initial matching, their long-term harmony in terms of optical and surface characteristics is a matter of concern. The purpose of this study was to evaluate and compare the optical and surface characteristics of direct composite, polished ceramic, and glazed ceramic materials after exposure to toothbrush abrasion and a staining solution. Materials and Methods: Ten 2-mm-thick disk-shaped specimens were prepared from IPS Empress Direct composite and twenty specimens from IPS e.max CAD blocks. The composite specimens and ten of the ceramic specimens were polished using D&Z composite and ceramic polishing kits. The other ten ceramic specimens were glazed with glazing liquid. Baseline measurements of roughness, CIELab coordinates, and luminance were recorded. Then the specimens underwent thermocycling, toothbrushing, and coffee staining, after which the final measurements were recorded. Color coordinates were used to calculate ΔE76, ΔE00, the translucency parameter, and the contrast ratio. Data were analyzed by one-way ANOVA and the post hoc LSD test. Results: Baseline and final roughness of the study groups were not different. At baseline, the order of roughness for the study groups was as follows: composite < glazed ceramic < polished ceramic, but after aging, no difference between the ceramic groups was detected. The comparison of baseline and final luminance was similar to that of roughness, but in reverse order. Unlike the differential roughness, which was comparable between the groups, the change in luminance of the glazed ceramic group was greater than in the other groups. ΔE76 and ΔE00 were 18.35 and 12.84 in the composite group, 1.3 and 0.79 in the glazed ceramic group, and 1.26 and 0.85 in the polished ceramic group.
These values for the composite group were significantly different from those of the ceramic groups. The translucency of the composite at baseline was significantly higher than at the final measurement, but there was no significant difference between these values in the ceramic groups. The composite was more translucent than the ceramic at both the baseline and final measurements. Conclusion: The glazed ceramic surface was smoother than the polished ceramic surface. Aging did not change the roughness. The optical properties (color and translucency) of the composite were influenced by aging. The luminance of the composite, glazed ceramic, and polished ceramic decreased after aging, but the reduction in the glazed ceramic was more pronounced.
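The ΔE76 values reported above are Euclidean distances between CIELAB coordinates, and the translucency parameter is commonly defined as the color difference of a specimen measured over black versus white backings. A minimal sketch of both computations follows; the coordinate values are hypothetical illustrations, not the study's measurements:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def translucency_parameter(lab_over_black, lab_over_white):
    """TP: color difference between readings over black and white backings."""
    return delta_e76(lab_over_black, lab_over_white)

# Hypothetical baseline and post-aging CIELAB readings (L*, a*, b*)
baseline = (75.0, 2.0, 18.0)
aged = (62.0, 8.0, 30.0)
print(round(delta_e76(baseline, aged), 2))
```

ΔE00 (CIEDE2000) adds weighting and rotation terms on top of this distance and is considerably longer, which is why studies usually report both.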

Keywords: ceramic, tooth-brush abrasion, staining solution, composite resin

Procedia PDF Downloads 186
2562 Investigation of Deep Eutectic Solvents for Microwave Assisted Extraction and Headspace Gas Chromatographic Determination of Hexanal in Fat-Rich Food

Authors: Birute Bugelyte, Ingrida Jurkute, Vida Vickackaite

Abstract:

The most complicated step in the determination of volatile compounds in complex matrices is the separation of the analytes from the matrix. Traditional analyte separation methods (liquid extraction, Soxhlet extraction) require a lot of time and labour; moreover, there is a risk of losing the volatile analytes. In recent years, headspace gas chromatography has been used to determine volatile compounds. To date, traditional extraction solvents have been used in headspace gas chromatography. As a rule, such solvents are rather volatile; therefore, a large amount of solvent vapour enters the headspace together with the analyte. Because of that, the determination sensitivity for the analyte is reduced, and a huge solvent peak in the chromatogram can overlap with the peaks of the analytes. The sensitivity is also limited by the fact that the sample cannot be heated to a temperature higher than the solvent's boiling point. In 2018, it was suggested to replace traditional headspace gas chromatographic solvents with non-volatile, eco-friendly, biodegradable, inexpensive, and easy-to-prepare deep eutectic solvents (DESs). Generally, deep eutectic solvents have low vapour pressure, a relatively wide liquid range, and a much lower melting point than that of any of their individual components. Those features make DESs very attractive as matrix media for application in headspace gas chromatography. Also, DESs are polar compounds, so they can be applied for microwave assisted extraction. The aim of this work was to investigate the possibility of applying deep eutectic solvents for microwave assisted extraction and headspace gas chromatographic determination of hexanal in fat-rich food. Hexanal is considered one of the most suitable indicators of the degree of lipid oxidation, as it is the main secondary oxidation product of linoleic acid, which is one of the principal fatty acids of many edible oils.
Eight hydrophilic and hydrophobic deep eutectic solvents have been synthesized, and the influence of the temperature and microwaves on their headspace gas chromatographic behaviour has been investigated. Using the most suitable DES, microwave assisted extraction conditions and headspace gas chromatographic conditions have been optimized for the determination of hexanal in potato chips. Under optimized conditions, the quality parameters of the prepared technique have been determined. The suggested technique was applied for the determination of hexanal in potato chips and other fat-rich food.

Keywords: deep eutectic solvents, headspace gas chromatography, hexanal, microwave assisted extraction

Procedia PDF Downloads 196
2561 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink

Authors: Sanjay Rathee, Arti Kashyap

Abstract:

Extraction of useful information from large datasets is one of the most important research problems. Association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. There exist many algorithms to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine. Therefore, to meet the demands of this ever-growing volume of data, an Apriori algorithm based on multiple machines is needed. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks with the MapReduce approach for distributed storage and distributed processing of huge datasets using clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their inbuilt support for distributed computations. Earlier, we proposed a Reduced-Apriori algorithm on the Spark platform that outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. Therefore, this work is a natural sequel of that work and targets implementing, testing, and benchmarking Apriori, Reduced-Apriori, and our new algorithm ReducedAll-Apriori (RA-Apriori) on Apache Flink, comparing them with the Spark implementation.
Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipeline-based structure allows the next iteration to start as soon as partial results of the earlier iteration are available; there is no need to wait for all reducer results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency, and scalability of the Apriori and RA-Apriori algorithms on Flink.
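The candidate-generation-and-prune loop that makes Apriori highly iterative (and hence disk-I/O-heavy on Hadoop, where each iteration is a separate MapReduce round) can be sketched on a single machine in plain Python; the transactions and minimum-support threshold below are illustrative:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Single-machine Apriori: frequent itemsets mapped to support counts."""
    transactions = [frozenset(t) for t in transactions]
    # L1: count single items and keep the frequent ones
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # Candidate generation: join frequent (k-1)-itemsets
        prev = list(frequent)
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        # Prune: every (k-1)-subset of a candidate must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, k - 1))}
        # Support counting (the pass that costs a full MapReduce round per k)
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        k += 1
    return result

# Illustrative market-basket data
baskets = [{"bread", "milk"}, {"bread", "butter"}, {"milk", "butter"},
           {"bread", "milk", "butter"}, {"bread", "milk"}]
freq = apriori(baskets, min_support=3)
print(freq[frozenset({"bread", "milk"})])  # bread & milk co-occur in 3 baskets
```

Each pass of the `while` loop corresponds to one distributed round; Flink's pipelining lets round k+1 begin consuming partial counts before round k has fully finished, which is the advantage the abstract describes.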

Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining

Procedia PDF Downloads 299
2560 Time Estimation of Return to Sports Based on Classification of Health Levels of Anterior Cruciate Ligament Using a Convolutional Neural Network after Reconstruction Surgery

Authors: Zeinab Jafari A., Ali Sharifnezhad B., Mohammad Razi C., Mohammad Haghpanahi D., Arash Maghsoudi

Abstract:

Background and Objective: Sports-related rupture of the anterior cruciate ligament (ACL) and the injuries that follow it have been associated with various disorders, such as long-lasting changes in muscle activation patterns in athletes, which might persist after ACL reconstruction (ACLR). The rupture of the ACL might result in abnormal patterns of movement execution, extending the treatment period and delaying the athlete's return to sports (RTS). As ACL injury is especially prevalent among athletes, the lengthy treatment process and the athlete's absence from sports are of great concern to athletes and coaches. Thus, estimating the safe time of RTS is of crucial importance. Therefore, using a deep neural network (DNN) to classify the health levels of the ACL in injured athletes, this study aimed to estimate the safe time for athletes to return to competition. Methods: Ten athletes with ACLR and fourteen healthy controls participated in this study. Three health levels of the ACL were defined: healthy, six months post-ACLR surgery, and nine months post-ACLR surgery. Athletes with ACLR were tested six and nine months after the ACLR surgery. During the course of this study, surface electromyography (sEMG) signals were recorded from five knee muscles, namely the Rectus Femoris (RF), Vastus Lateralis (VL), Vastus Medialis (VM), Biceps Femoris (BF), and Semitendinosus (ST), during single-leg drop landing (SLDL) and single-leg forward hopping (SLFH) tasks. The Pseudo-Wigner-Ville distribution (PWVD) was used to produce three-dimensional (3-D) images of the energy distribution patterns of the sEMG signals. These 3-D images were then converted to two-dimensional (2-D) images using a heat-mapping technique and fed to a deep convolutional neural network (DCNN). Results: We estimated the safe time of RTS by designing a DCNN classifier with an accuracy of 90%, which could classify the ACL into the three health levels.
Discussion: The findings of this study demonstrate the potential of the DCNN classification technique using sEMG signals for estimating the RTS time, which will assist in evaluating the recovery process after ACLR in athletes.
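As a rough illustration of the preprocessing pipeline described above, the sketch below turns a signal into a normalised time-frequency energy image, i.e. the kind of 2-D heat map a DCNN could consume. It uses a short-time Fourier energy spectrum as a simplified stand-in for the PWVD, and the surrogate signal and window parameters are illustrative, not the authors' settings:

```python
import numpy as np

def stft_heatmap(signal, win=64, hop=32):
    """Time-frequency energy image normalised to [0, 1]; a simplified
    stand-in for the PWVD-based heat maps described in the abstract."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # energy per frame/bin
    spec = spec.T                                    # rows = frequency, cols = time
    return (spec - spec.min()) / (spec.max() - spec.min() + 1e-12)

# Illustrative surrogate for an sEMG burst: a 50 Hz tone plus noise at 1 kHz sampling
rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.standard_normal(t.size)
img = stft_heatmap(x)
print(img.shape)  # (frequency bins, time frames)
```

A stack of such images, one per muscle and task, could then be fed to any standard image classifier; unlike the STFT, the PWVD used in the study offers higher joint time-frequency resolution at the cost of cross-term artifacts.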

Keywords: anterior cruciate ligament reconstruction, return to sports, surface electromyography, deep convolutional neural network

Procedia PDF Downloads 79
2559 A Triad Pedagogy for Increased Digital Competence of Human Resource Management Students: Reflecting on Human Resource Information Systems at a South African University

Authors: Esther Pearl Palmer

Abstract:

Driven by the increased pressure on Higher Education Institutions (HEIs) to produce work-ready graduates for the modern world of work, this study reflects on triad teaching and learning practices to increase student engagement and employability. In the South African higher education context, the employability of graduates is imperative for strengthening the country’s economy and increasing its competitiveness. Within this context, the field of Human Resource Management (HRM) calls for innovative methods and approaches to teaching, learning, and assessing the skills and competencies of graduates to render them employable. Digital competency in Human Resource Information Systems (HRIS) is an important component of, and prerequisite for, employment in HRM. The purpose of this research is to reflect on the HRIS subject developed by lecturers at the Central University of Technology, Free State (CUT), with the intention of actively engaging students in real-world learning activities and increasing their employability. The Enrichment Triad Model (ETM) was used as the theoretical framework to develop the subject, as it supports a triad teaching and learning approach to education. It is, furthermore, an inter-structured model that supports collaboration between industry, academics, and students. The study follows a mixed-methods approach to reflect on the learning experiences of industry, academics, and students in the subject field over the past three years. This paper is a work in progress and seeks to broaden the scope of extant studies on student engagement in work-related learning to increase employability. Based on the ETM as theoretical framework and pedagogical practice, this paper proposes that following a triad teaching and learning approach will increase students' work-related skills.
Findings from the study show that students, academics, and industry alike regard educational opportunities that incorporate active learning experiences from the world of work as enhancing student engagement in learning and rendering students more employable.

Keywords: digital competence, enrichment triad model, human resource information systems, student engagement, triad pedagogy

Procedia PDF Downloads 93
2558 Effects of the Affordable Care Act on Preventive Care Disparities

Authors: Cagdas Agirdas

Abstract:

Background: The Affordable Care Act (ACA) requires non-grandfathered private insurance plans, starting with plan years beginning on or after September 23rd, 2010, to provide certain preventive care services without any cost sharing in the form of deductibles, copayments, or coinsurance. This requirement may affect racial and ethnic disparities in preventive care, as it provides the largest copay reduction in preventive care. Objectives: We ask whether the ACA’s free preventive care benefits are associated with a reduction in racial and ethnic disparities in the utilization of four preventive services: cholesterol screenings, colonoscopies, mammograms, and Pap smears. Methods: We use a data set of over 6,000 individuals from the 2009, 2010, and 2013 Medical Expenditure Panel Surveys (MEPS). We restrict the data set to individuals who are old enough to be eligible for each preventive service. Our difference-in-differences logistic regression model classifies privately-insured Hispanics, African Americans, and Asians as the treatment groups and 2013 as the after-policy year. Our control group consists of non-Hispanic whites on Medicaid, as this program already covered preventive care services for free or at low cost before the ACA. Results: After controlling for income, education, marital status, preferred interview language, self-reported health status, employment, having a usual source of care, age, and gender, we find that the ACA is associated with increases in the probability that the median privately-insured Hispanic person gets a colonoscopy (by 3.6%) and a mammogram (by 3.1%), compared to a non-Hispanic white person on Medicaid. Similarly, we find that the median privately-insured African American person’s probability of receiving these two preventive services improved by 2.3% and 2.4% compared to a non-Hispanic white person on Medicaid. We do not find any significant improvements for any racial or ethnic group for cholesterol screenings or Pap smears.
Furthermore, our results do not indicate any significant changes for Asians compared to non-Hispanic whites in the utilization of the four preventive services. These reductions in racial/ethnic disparities are robust to reconfigurations of the time periods, previous diagnoses, and residential status. Conclusions: The early effects of the ACA’s provision of free preventive care are significant for Hispanics and African Americans. Further research is needed for later years, as more individuals became aware of these benefits.
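The difference-in-differences logic behind these estimates can be sketched with group means: the pre-to-post change in utilization for a treated group minus the corresponding change for the control group. The rates below are hypothetical illustrations, not the MEPS estimates:

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD estimate: (treated group's change) minus (control group's change)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical screening rates (share of eligible adults screened),
# before (2009/2010) and after (2013) the ACA provision took effect
did = diff_in_diff(treat_pre=0.40, treat_post=0.47,
                   ctrl_pre=0.42, ctrl_post=0.45)
print(round(did, 3))  # → 0.04: a 4-point gain net of the control group's trend
```

In the paper's regression version, the same contrast is the coefficient on a treatment-group × after-policy interaction term inside a logistic model, which additionally adjusts for the covariates listed above.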

Keywords: preventive care, Affordable Care Act, cost sharing, racial disparities

Procedia PDF Downloads 155