Search results for: cooling methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16143

2553 Perspectives and Challenges a Functional Bread With Yeast Extract to Improve Human Diet

Authors: Cláudia Patrocínio, Beatriz Fernandes, Ana Filipa Pires

Abstract:

Background: Mirror therapy (MT) is used to improve motor function after stroke. During MT, a mirror is placed between the two upper limbs (UL), thus reflecting movements of the non-affected side as if they were movements of the affected side. Objectives: The aim of this review is to analyze the evidence on the effectiveness of MT in the recovery of UL function in the chronic post-stroke population. Methods: The literature search was carried out in PubMed, ISI Web of Science, and the PEDro database. Inclusion criteria: a) studies that include individuals diagnosed with stroke for at least 6 months; b) intervention with MT in the UL or comparing it with other interventions; c) articles published until 2023; d) articles published in English or Portuguese; e) randomized controlled studies. Exclusion criteria: a) animal studies; b) studies that do not provide a detailed description of the intervention; c) studies using central electrical stimulation. The methodological quality of the included studies was assessed using the Physiotherapy Evidence Database (PEDro) scale. Studies scoring < 4 on the PEDro scale were excluded. Eighteen studies met all the inclusion criteria. Main results and conclusions: The quality of the studies varied between 5 and 8 on the PEDro scale. One article compared muscular strength training (MST) with vs without MT, four articles compared the use of MT vs conventional therapy (CT), one study compared extracorporeal shock therapy (EST) with and without MT, another study compared functional electrical stimulation (FES), MT and biofeedback, three studies compared MT with Mesh Glove (MG) or sham therapy, five articles compared performing bimanual exercises with and without MT, and three studies compared MT with virtual reality (VR) or robot training (RT). Changes in function and structure (International Classification of Functioning, Disability and Health parameter) were assessed in each article mainly using the Fugl-Meyer Assessment-Upper Limb scale, whereas activity and participation (International Classification of Functioning, Disability and Health parameters) were evaluated using different scales in each study. Globally, positive results were seen in these parameters. Results suggest that adding MT to other therapies is more effective for motor recovery and function of the affected UL than those techniques alone, although the results were modest in most of the included studies. There is also a more significant improvement in the distal movements of the affected hand than in the rest of the UL.

Keywords: physical therapy, mirror therapy, chronic stroke, upper limb, hemiplegia

Procedia PDF Downloads 55
2552 Weight Loss and Symptom Improvement in Women with Secondary Lymphedema Using Semaglutide

Authors: Shivani Thakur, Jasmin Dominguez Cervantes, Ahmed Zabiba, Fatima Zabiba, Sandhini Agarwal, Kamalpreet Kaur, Hussein Maatouk, Shae Chand, Omar Madriz, Tiffany Huang, Saloni Bansal

Abstract:

The prevalence of lymphedema in women in rural communities highlights the importance of developing effective treatment and prevention methods. Subjects with secondary lymphedema in California’s Central Valley were surveyed at 6 surgical clinics to assess demographics and symptoms of lymphedema. Additionally, subjects on semaglutide treatment for obesity and/or T2DM were monitored for their diabetes management, weight loss progress, and lymphedema symptoms compared to subjects who were not treated with semaglutide. The subjects were followed for 12 months. Subjects who were treated with semaglutide completed pre-treatment questionnaires and follow-up post-treatment questionnaires at 3, 6, 9, and 12 months, along with medical assessment. The untreated subjects completed similar questionnaires. The questionnaires investigated subjective feelings regarding lymphedema symptoms and management using a Likert scale; quantitative leg measurements were collected, and blood work was reviewed at these appointments. Paired difference t-tests, chi-squared tests, and independent sample t-tests were performed. 50 subjects, aged 18-75 years, completed the surveys evaluating secondary lymphedema: 90% female, 69% Hispanic, 45% Spanish speaking, 42% disabled, 57% employed, 54% with an income below 30 thousand dollars, and an average BMI of 40. Both treatment and non-treatment groups noted that the most common symptoms were leg swelling (x̄=3.2, SD=1.3), leg pain (x̄=3.2, SD=1.6), loss of daily function (x̄=3, SD=1.4), and negative body image (x̄=4.4, SD=0.54). Subjects in the semaglutide treatment group with >3 months of treatment, compared to the untreated group, demonstrated the following: 55% of subjects in the treated group had a 10% weight loss vs 3% in the untreated group (average BMI reduction of 11% vs 2.5% in the untreated group, p<0.05) and improved subjective feelings about their lymphedema symptoms: leg swelling (x̄=2.4, SD=0.45 vs x̄=3.2, SD=1.3, p<0.05), leg pain (x̄=2.2, SD=0.45 vs x̄=3.2, SD=1.6, p<0.05), and heaviness (x̄=2.2, SD=0.45 vs x̄=3, SD=1.56, p<0.05). Improvement in diabetes management was demonstrated by an average 0.9% decrease in A1C values compared to 0.1% in the untreated group, p<0.05. In comparison to untreated subjects, treatment subjects on semaglutide noted a 6 cm decrease in the circumference of the leg, knee, calf, and ankle compared to 2 cm in untreated subjects, p<0.05. Semaglutide was shown to significantly improve weight loss, T2DM management, leg circumference, and secondary lymphedema functional, physical, and psychosocial symptoms.

Keywords: diabetes, secondary lymphedema, semaglutide, obesity

Procedia PDF Downloads 61
2551 Cultural Awareness, Intercultural Communication Competence and Academic Performance of Foreign Students Towards an Education ASEAN Integration in Global Education

Authors: Rizalito B. Javier

Abstract:

Research has shown that foreign students with higher levels of cultural awareness and intercultural communication competence tend to have better academic performance outcomes. This study aimed to find out the cultural awareness, intercultural communication competence, and academic performance of foreign students and the relationships among these variables. The methods used were descriptive-comparative and correlational research designs with a quota purposive sampling technique, while frequency counts and percentages, means and standard deviations, t- and F-tests, and chi-square were utilized to analyze the data. The results revealed that the majority of the respondents were in the age bracket of 21-25 years old, mostly males, all single, and mostly citizens of Papua New Guinea, Angola, Vanuatu, Tanzania, Nigeria, Korea, Rwanda, and Myanmar. The most commonly spoken language was English, many of them were born-again Christians, the majority took the BS business management degree program, their studies were mainly supported by their parents, they had stayed in the Philippines for 3-4 years, most of them had attended five to six cultural awareness/competence workshop-seminars, the majority of their parents' occupations were family-owned businesses, and they had a family monthly income of P61,000 and above. The respondents were highly aware of their culture in terms of clients' issues. In intercultural communication competence, the respondents were only slightly aware in terms of intercultural awareness, while the foreign students earned good remarks in their average academic performance. The profiles of the participants in terms of age, gender, civil status, nationality, course/degree program taken, support for the study, length of stay, workshops attended, and parents' occupation showed significant differences in academic performance, except for type of family, language spoken, religion, and family monthly income. Moreover, cultural awareness was significantly related to intercultural communication competence, but neither was related to academic performance. It is recommended that foreign students be provided with cultural orientation programs, offered language support services, and supported through intercultural exchange activities and inclusive teaching practices to allow them to effectively navigate and interact with people from different cultural backgrounds, fostering a more inclusive and collaborative learning environment.

Keywords: cultural competence, communication competence, intercultural competence, culture-academic performance

Procedia PDF Downloads 19
2550 Forced-Choice Measurement Models of Behavioural, Social, and Emotional Skills: Theory, Research, and Development

Authors: Richard Roberts, Anna Kravtcova

Abstract:

Introduction: The realisation that personality can change over the course of a lifetime has led to a new companion model to the Big Five, the behavioural, emotional, and social skills approach (BESSA). BESSA hypothesizes that this set of skills represents how the individual is thinking, feeling, and behaving when the situation calls for it, as opposed to traits, which represent how someone tends to think, feel, and behave averaged across situations. The five major skill domains share parallels with the Big Five Factor (BFF) model: creativity and innovation (openness), self-management (conscientiousness), social engagement (extraversion), cooperation (agreeableness), and emotional resilience (emotional stability) skills. We point to noteworthy limitations in the current operationalisation of BESSA skills (i.e., via Likert-type items) and offer a different measurement approach: forced choice. Method: In this forced-choice paradigm, individuals were given three skill items (e.g., managing my time) and asked to select the one they believed they were “best at” and the one they were “worst at”. Thurstonian IRT models allow these responses to be placed on a normative scale. Two multivariate studies (N = 1178) were conducted with a 22-item forced-choice version of the BESSA, a published measure of the BFF, and various criteria. Findings: Confirmatory factor analysis of the forced-choice assessment showed acceptable model fit (RMSEA < 0.06), while reliability estimates were reasonable (around 0.70 for each construct). Convergent validity evidence was as predicted (correlations between 0.40 and 0.60 for corresponding BFF and BESSA constructs). Notable was the extent to which the forced-choice BESSA assessment improved test-criterion relationships over and above the BFF. For example, typical regression models find BFF personality accounting for 25% of the variance in life satisfaction scores; both studies showed incremental gains over the BFF exceeding 6% (i.e., BFF and BESSA together accounted for over 31% of the variance in both studies). Discussion: Forced-choice measurement models offer the promise of creating equated test forms that may unequivocally measure skill gains and are less prone to faking and reference bias effects. Implications for practitioners are discussed, especially those interested in selection, succession planning, and training and development. We also discuss how the forced-choice method can be applied to other constructs like emotional immunity, cross-cultural competence, and self-estimates of cognitive ability.
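
As a rough illustration of the incremental-validity comparison reported above (BFF alone explaining about 25% of life-satisfaction variance, BFF plus BESSA over 31%), the sketch below computes the R² gain from a hierarchical regression. It is not the study's analysis; the arrays are synthetic placeholders and the variable names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
bff = rng.normal(size=(n, 5))      # placeholder Big Five trait scores
bessa = rng.normal(size=(n, 5))    # placeholder BESSA skill scores (e.g. Thurstonian IRT theta estimates)
life_sat = bff @ rng.normal(size=5) + 0.3 * (bessa @ rng.normal(size=5)) + rng.normal(size=n)

r2_bff = LinearRegression().fit(bff, life_sat).score(bff, life_sat)
both = np.hstack([bff, bessa])
r2_both = LinearRegression().fit(both, life_sat).score(both, life_sat)

print(f"R2 (BFF only):    {r2_bff:.3f}")
print(f"R2 (BFF + BESSA): {r2_both:.3f}")
print(f"Incremental R2:   {r2_both - r2_bff:.3f}")  # the 'over 6%' gain reported corresponds to this quantity
```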

Keywords: Big Five, forced-choice method, BFF, methods of measurements

Procedia PDF Downloads 94
2549 Estimating the Ladder Angle and the Camera Position From a 2D Photograph Based on Applications of Projective Geometry and Matrix Analysis

Authors: Inigo Beckett

Abstract:

In forensic investigations, it is often the case that the most potentially useful recorded evidence derives from coincidental imagery, recorded immediately before or during an incident, and that during the incident (e.g. a ‘failure’ or fire event) the evidence is changed or destroyed. To an image analysis expert involved in photogrammetric analysis for civil or criminal proceedings, traditional computer vision methods involving calibrated cameras are often not appropriate because image metadata cannot be relied upon. This paper presents an approach for resolving this problem, considering in particular, by way of a case study, the angle of a simple ladder shown in a photograph. The UK Health and Safety Executive (HSE) guidance document published in 2014 (INDG455) advises that a leaning ladder should be erected at 75 degrees to the horizontal axis. Personal injury cases can arise in the construction industry because a ladder is too steep or too shallow. Ad-hoc photographs of such ladders in their incident position provide a basis for analysis of their angle. This paper presents a direct approach for ascertaining the position of the camera and the angle of the ladder simultaneously from the photograph(s) by way of a workflow that encompasses a novel application of projective geometry and matrix analysis. Mathematical analysis shows that for a given pixel ratio of directly measured collinear points (i.e. features that lie on the same line segment) from the 2D digital photograph with respect to a given viewing point, we can constrain the 3D camera position to the surface of a sphere in the scene. Depending on what we know about the ladder, we can enforce another independent constraint on the possible camera positions, which enables us to narrow the possible positions even further. Experiments were conducted using synthetic and real-world data. The synthetic data modeled a vertical plane with a ladder on a horizontally flat plane resting against a vertical wall. The real-world data were captured using an Apple iPhone 13 Pro and 3D laser scan survey data, whereby a ladder was placed in a known location and at a known angle to the vertical axis. For each case, we calculated camera positions and ladder angles using this method and cross-compared them against their respective ‘true’ values.
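
For context on the quantity being estimated (this is not the paper's photogrammetric workflow): once the horizontal stand-off of the ladder foot and the height of its wall contact point are known, the lean angle follows from basic trigonometry, and the common "one unit out for every four units up" rule corresponds to roughly 76 degrees, close to the 75-degree HSE guidance. The sketch below is a minimal illustration with made-up measurements.

```python
import math

def ladder_angle_deg(base_distance_m: float, contact_height_m: float) -> float:
    """Angle of the ladder to the horizontal, in degrees."""
    return math.degrees(math.atan2(contact_height_m, base_distance_m))

# Hypothetical measurements (e.g. recovered from a scaled scene reconstruction)
print(ladder_angle_deg(1.0, 4.0))   # ~75.96 deg -- the 1-in-4 rule, close to the 75 deg guidance
print(ladder_angle_deg(1.5, 4.0))   # ~69.4 deg  -- too shallow relative to the guidance
```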

Keywords: image analysis, projective geometry, homography, photogrammetry, ladders, forensics, mathematical modeling, planar geometry, matrix analysis, collinear, cameras, photographs

Procedia PDF Downloads 52
2548 Innovations in the Implementation of Preventive Strategies and Measuring Their Effectiveness Towards the Prevention of Harmful Incidents to People with Mental Disabilities who Receive Home and Community Based Services

Authors: Carlos V. Gonzalez

Abstract:

Background: Providers of in-home and community-based services strive for the elimination of preventable harm to the people under their care as well as to the employees who support them. Traditional models of safety and protection from harm have assumed that the absence of incidents of harm is a good indicator of safe practices. However, this model creates an illusion of safety that is easily shaken by sudden and inadvertent harmful events. As an alternative, we have developed and implemented an evidence-based resilient model of safety known as C.O.P.E. (Caring, Observing, Predicting and Evaluating). Within this model, safety is not defined by the absence of harmful incidents, but by the presence of continuous monitoring, anticipation, learning, and rapid response to events that may lead to harm. Objective: The objective was to evaluate the effectiveness of the C.O.P.E. model for the reduction of harm to individuals with mental disabilities who receive home and community based services. Methods: Over the course of 2 years we counted the number of incidents of harm and near misses. We trained employees on strategies to eliminate incidents before they fully escalated. We trained employees to track different levels of patient status on a scale from 0 to 10. Additionally, we provided direct support professionals and supervisors with customized smartphone applications to track and notify the team of changes in that status every 30 minutes. Finally, the information that we collected was saved in a private computer network that analyzes and graphs the outcome of each incident. Results and conclusions: The use of the COPE model resulted in: a reduction in incidents of harm; a reduction in the use of restraints and other physical interventions; an increase in Direct Support Professionals’ ability to detect and respond to health problems; improvement in employee alertness by decreasing sleeping on duty; improvement in caring and positive interaction between Direct Support Professionals and the person who is supported; and the development of a method to globally measure and assess the effectiveness of harm prevention plans. Future applications of the COPE model for the reduction of harm to people who receive home and community based services are discussed.

Keywords: harm, patients, resilience, safety, mental illness, disability

Procedia PDF Downloads 447
2547 Modeling of the Heat and Mass Transfer in Fluids through Thermal Pollution in Pipelines

Authors: V. Radulescu, S. Dumitru

Abstract:

Introduction: Determination of the temperature field inside a fluid in motion has many practical implications, especially in the case of turbulent flow. The phenomenon is more pronounced when the solid walls have a different temperature than the fluid. Turbulent heat and mass transfer play an essential role in thermal pollution, as was recorded during the damage at the Oradea thermoelectric power plant (still closed today). Basic methods: Solving the turbulent thermal pollution problem theoretically is particularly difficult. By using semi-empirical theories or by simplifying the assumptions made, and based on experimental measurements, the mathematical model can be elaborated for further numerical simulations. The three zones of flow are analyzed separately: the vicinity of the solid wall, the turbulent transition zone, and the turbulent core. For each zone, the temperature distribution law is determined. The dependence between the Stanton and Prandtl numbers is determined with correction factors, based on experimental measurements. Major findings/results: The limit of the laminar thermal sublayer was determined based on the theory of Landau and Levich, using the assumption that the longitudinal component of the velocity pulsation and the pulsation’s frequency vary proportionally with the distance to the wall. For the calculation of the average temperature, a solution similar to that for the velocity is used, obtained by an analogous averaging. On these assumptions, numerical modeling was performed with a temperature gradient for turbulent flow in pipes (intact or damaged, with cracks) having 4 different diameters, between 200-500 mm, as installed at the Oradea thermoelectric power plant. Conclusions: A superposition of the molecular and turbulent viscosities was made, followed by the addition of the molecular and turbulent transfer coefficients, necessary to elaborate the theoretical and numerical modeling. The laminar boundary layer has a different thickness when the flow with heat transfer is compared with the flow without a temperature gradient. The obtained results are within a margin of error of 5% between the classical semi-empirical theories and the developed model, based on the experimental data. Finally, a general correlation between the Stanton number and the Prandtl number is obtained for a specific flow (with its associated Reynolds number).
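
For reference, the dimensionless groups involved and the classical Colburn form that correlations of this type generalise are shown below; the paper fits its own corrected St-Pr correlation from experimental data, which is not reproduced here.

```latex
St = \frac{Nu}{Re\,Pr} = \frac{h}{\rho\, c_p\, \bar{u}}, \qquad
St\, Pr^{2/3} \approx \frac{C_f}{2} \approx 0.023\, Re^{-0.2}
\quad \text{(Colburn analogy for fully turbulent pipe flow)}
```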

Keywords: experimental measurements, numerical correlations, thermal pollution through pipelines, turbulent thermal flow

Procedia PDF Downloads 164
2546 Accurate Mass Segmentation Using U-Net Deep Learning Architecture for Improved Cancer Detection

Authors: Ali Hamza

Abstract:

Accurate segmentation of breast ultrasound images is of paramount importance in enhancing the diagnostic capabilities of breast cancer detection. This study presents an approach utilizing the U-Net architecture for segmenting breast ultrasound images aimed at improving the accuracy and reliability of mass identification within the breast tissue. The proposed method encompasses a multi-stage process. Initially, preprocessing techniques are employed to refine image quality and diminish noise interference. Subsequently, the U-Net architecture, a deep learning convolutional neural network (CNN), is employed for pixel-wise segmentation of regions of interest corresponding to potential breast masses. The U-Net's distinctive architecture, characterized by a contracting and expansive pathway, enables accurate boundary delineation and detailed feature extraction. To evaluate the effectiveness of the proposed approach, an extensive dataset of breast ultrasound images is employed, encompassing diverse cases. Quantitative performance metrics such as the Dice coefficient, Jaccard index, sensitivity, specificity, and Hausdorff distance are employed to comprehensively assess the segmentation accuracy. Comparative analyses against traditional segmentation methods showcase the superiority of the U-Net architecture in capturing intricate details and accurately segmenting breast masses. The outcomes of this study emphasize the potential of the U-Net-based segmentation approach in bolstering breast ultrasound image analysis. The method's ability to reliably pinpoint mass boundaries holds promise for aiding radiologists in precise diagnosis and treatment planning. However, further validation and integration within clinical workflows are necessary to ascertain their practical clinical utility and facilitate seamless adoption by healthcare professionals. In conclusion, leveraging the U-Net architecture for breast ultrasound image segmentation showcases a robust framework that can significantly enhance diagnostic accuracy and advance the field of breast cancer detection. This approach represents a pivotal step towards empowering medical professionals with a more potent tool for early and accurate breast cancer diagnosis.
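
As a reference for the overlap metrics cited (Dice coefficient and Jaccard index), the sketch below computes them for a pair of binary masks with NumPy. It is illustrative only and is not the study's evaluation code.

```python
import numpy as np

def dice_and_jaccard(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7):
    """Overlap metrics for binary masks (1 = mass pixel, 0 = background)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
    jaccard = (intersection + eps) / (np.logical_or(pred, truth).sum() + eps)
    return dice, jaccard

# Toy example: predicted vs. ground-truth mask
pred = np.zeros((64, 64), dtype=np.uint8); pred[20:40, 20:40] = 1
truth = np.zeros((64, 64), dtype=np.uint8); truth[25:45, 25:45] = 1
print(dice_and_jaccard(pred, truth))
```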

Keywords: image segmentation, U-Net, deep learning, breast cancer detection, diagnostic accuracy, mass identification, convolutional neural network

Procedia PDF Downloads 84
2545 Measuring Self-Regulation and Self-Direction in Flipped Classroom Learning

Authors: S. A. N. Danushka, T. A. Weerasinghe

Abstract:

The diverse necessities of instruction could be addressed effectively with the support of new dimensions of ICT-integrated learning such as blended learning, a combination of face-to-face and online instruction that ensures greater flexibility in student learning and congruity of course delivery. As blended learning has become the ‘new normality’ in education, many experimental and quasi-experimental research studies provide ample evidence of its successful implementation in many fields of study, but it is hard to justify whether blended learning could work similarly in the delivery of technology-teacher development programmes (TTDPs). The present study addresses this particular research uncertainty, and, having considered existing research approaches, the study methodology was set to decide the efficient instructional strategies for flipped classroom learning in TTDPs. In a quasi-experimental pre-test and post-test design with a mixed-method research approach, the major study objective was tested with two heterogeneous samples (N = 135) identified in a virtual learning environment in a Sri Lankan university. A non-randomized informal ‘before-and-after without control group’ design was employed, and two data collection methods, identical pre-tests and post-tests and Likert-scale questionnaires, were used in the study. Two selected instructional strategies, self-directed learning (SDL) and self-regulated learning (SRL), were tested in an appropriate instructional framework with two heterogeneous samples (pre-service and in-service teachers). Data were statistically analyzed, and the more efficient instructional strategy was determined via t-tests, ANOVA, and ANCOVA. The effectiveness of the two instructional strategy implementation models was determined via multiple linear regression analysis. ANOVA (p < 0.05) shows that age, prior educational qualifications, gender, and work experience do not impact the learning achievements of the two diverse groups of learners when the instructional strategy is changed. ANCOVA (p < 0.05) analysis shows that SDL is more efficient than SRL for the two diverse groups of technology-teachers. Multiple linear regression (p < 0.05) analysis shows that the staged self-directed learning (SSDL) model and the four-phased model of motivated self-regulated learning (COPES Model) are efficient in the delivery of course content in flipped classroom learning.

Keywords: COPES model, flipped classroom learning, self-directed learning, self-regulated learning, SSDL model

Procedia PDF Downloads 197
2544 Effective Apixaban Clearance with Cytosorb Extracorporeal Hemoadsorption

Authors: Klazina T. Havinga, Hilde R. H. de Geus

Abstract:

Introduction: Pre-operative coagulation management of patients prescribed Apixaban, a new oral anticoagulant (a factor Xa inhibitor), is difficult, especially when chronic kidney disease (CKD) causes drug overdose. Apixaban is not dialyzable due to its high level of protein binding. An antidote, Andexanet α, is available but expensive and has an unfavorable short half-life. We report the successful extracorporeal removal of Apixaban prior to emergency surgery with the CytoSorb® Hemoadsorption device. Methods: An 89-year-old woman with CKD, with an Apixaban prescription for atrial fibrillation, presented at the ER with traumatic rib fractures, a flail chest, and an unstable spinal fracture (T12) for which emergency surgery was indicated. However, due to very high Apixaban levels, this surgery had to be postponed. Based on the Apixaban-specific anti-factor Xa activity (AFXaA) measurements at admission and 10 hours later, complete clearance was expected only after 48 hours. In order to enhance Apixaban removal, reduce the time to operation, and therefore reduce pulmonary complications, CRRT with a CytoSorb® cartridge was initiated. AFXaA was measured frequently, as a substitute for Apixaban drug concentrations, pre- and post-adsorber, in order to calculate the adsorber-related clearance. Results: The admission AFXaA concentration, as a substitute for Apixaban drug levels, was 218 ng/ml, which decreased to 157 ng/ml after ten hours. Due to sustained anticoagulation effects, surgery was again postponed. However, the AFXaA levels decreased quickly to sub-therapeutic levels after CRRT (Multifiltrate Pro, Fresenius Medical Care; blood flow 200 ml/min, dialysate flow 4000 ml/h, prescribed renal dose 51 ml/kg/h) with CytoSorb® connected in series into the circuit was initiated (within 5 hours). The adsorber-related (indirect) Apixaban clearance was calculated every half hour as Cl = Qe × (AFXaA_pre − AFXaA_post) / AFXaA_pre, with Qe the plasma flow rate calculated from a hematocrit of 0.38 and a system blood flow rate of 200 ml/min: 100 ml/min, 72 ml/min and 57 ml/min. Although, as expected, the adsorber-related clearance decreased quickly due to saturation of the beads, the reduction rate achieved still resulted in a very rapid decrease in AFXaA levels. Surgery was ordered and possible within 5 hours after CytoSorb initiation. Conclusion: The CytoSorb® Hemoadsorption device enabled rapid correction of Apixaban-associated anticoagulation.
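
A worked sketch of the adsorber clearance calculation quoted above, using the reported haematocrit (0.38) and blood flow (200 ml/min). The pre-/post-adsorber AFXaA values shown are hypothetical, chosen only to reproduce a clearance of roughly 100 ml/min.

```python
def adsorber_clearance(afxaa_pre: float, afxaa_post: float,
                       blood_flow_ml_min: float = 200.0, hct: float = 0.38) -> float:
    """Indirect adsorber clearance: Cl = Qe * (pre - post) / pre, with Qe the plasma flow."""
    qe = blood_flow_ml_min * (1.0 - hct)          # plasma flow, ~124 ml/min here
    return qe * (afxaa_pre - afxaa_post) / afxaa_pre

# Hypothetical pre-/post-adsorber anti-factor Xa activities (ng/ml)
print(round(adsorber_clearance(150.0, 29.0), 1))  # ~100 ml/min, matching the first reported value
```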

Keywords: Apixaban, CytoSorb, emergency surgery, Hemoadsorption

Procedia PDF Downloads 157
2543 Posterior Acetabular Fractures-Optimizing the Treatment by Enhancing Practical Skills

Authors: Olivera Lupescu, Taina Elena Avramescu, Mihail Nagea, Alexandru Dimitriu

Abstract:

Acetabular fractures represent a real challenge due to their impact upon the long-term function of the hip joint and due to the risk of intra- and peri-operative complications, especially as they affect young, active people. That is why treating these fractures requires certain skills which must be exercised, regarding the pre-operative planning as well as the execution of surgery. The authors retrospectively analyse 38 cases with acetabular fractures operated on using the posterior approach in our hospital between 01.01.2013 and 01.01.2015, for which complete medical records ensure a follow-up of 24 months, in order to establish the main causes of potential errors and to underline the methods for preventing them. This target is included in the Erasmus+ project ‘Collaborative learning for enhancing practical skills for patient-focused interventions in gait rehabilitation after orthopedic surgery COR-skills’. This paper analyses the pitfalls revealed by these cases, as well as the measures necessary to enhance the practical skills of the surgeons who perform acetabular surgery. Pre-op planning matched the intra- and post-operative outcome in 88% of the analyzed points, rising from 72% at the beginning to 94% in the last case, meaning that experience is very important in treating this injury. The main problems detected for the posterior approach were: neurological complications in 3 cases, 1 of them a complete paralysis of the sciatic nerve, which recovered 6 months after surgery; and in another 2 cases an intra-articular position of the screws was demonstrated by post-operative CT scans, so secondary screw removal was necessary in these cases. We analysed this incident, too, due to the lack of information about the relationship between the screws and the joint secondary to this approach. Septic complications appeared in 3 cases, 2 superficial and 1 deep (requiring implant removal). The most important problems were the reduction of the fractures and the positioning of the screws so as not to interfere with the articular space. In posterior acetabular fractures, complex pre-op planning is important in order to achieve maximum treatment efficacy with minimum risk; optimal training of the surgeons, insisting on the main points of potential mistakes, ensures the success of the procedure, as well as a favorable outcome for the patient.

Keywords: acetabular fractures, articular congruency, surgical skills, vocational training

Procedia PDF Downloads 206
2542 A Review of Digital Twins to Reduce Emission in the Construction Industry

Authors: Zichao Zhang, Yifan Zhao, Samuel Court

Abstract:

The carbon emission problem of the traditional construction industry has long been a pressing issue. With the growing emphasis on environmental protection and advancement of science and technology, the organic integration of digital technology and emission reduction has gradually become a mainstream solution. Among various sophisticated digital technologies, digital twins, which involve creating virtual replicas of physical systems or objects, have gained enormous attention in recent years as tools to improve productivity, optimize management and reduce carbon emissions. However, the relatively high implementation costs including finances, time, and manpower associated with digital twins have limited their widespread adoption. As a result, most of the current applications are primarily concentrated within a few industries. In addition, the creation of digital twins relies on a large amount of data and requires designers to possess exceptional skills in information collection, organization, and analysis. Unfortunately, these capabilities are often lacking in the traditional construction industry. Furthermore, as a relatively new concept, digital twins have different expressions and usage methods across different industries. This lack of standardized practices poses a challenge in creating a high-quality digital twin framework for construction. This paper firstly reviews the current academic studies and industrial practices focused on reducing greenhouse gas emissions in the construction industry using digital twins. Additionally, it identifies the challenges that may be encountered during the design and implementation of a digital twin framework specific to this industry and proposes potential directions for future research. This study shows that digital twins possess substantial potential and significance in enhancing the working environment within the traditional construction industry, particularly in their ability to support decision-making processes. It proves that digital twins can improve the work efficiency and energy utilization of related machinery while helping this industry save energy and reduce emissions. This work will help scholars in this field to better understand the relationship between digital twins and energy conservation and emission reduction, and it also serves as a conceptual reference for practitioners to implement related technologies.

Keywords: digital twins, emission reduction, construction industry, energy saving, life cycle, sustainability

Procedia PDF Downloads 101
2541 Characterization of WNK2 Role on Glioma Cells Vesicular Traffic

Authors: Viviane A. O. Silva, Angela M. Costa, Glaucia N. M. Hajj, Ana Preto, Aline Tansini, Martin Roffé, Peter Jordan, Rui M. Reis

Abstract:

Autophagy is a recycling and degradative system suggested to be a major cell death pathway in cancer cells. The autophagy pathway is interconnected with the endocytosis pathways, sharing the same ultimate lysosomal destination. Lysosomes are crucial regulators of cell homeostasis, responsible for downregulating receptor signalling and turnover. It seems highly likely that derailed endocytosis can make major contributions to several hallmarks of cancer. WNK2, a member of the WNK (with-no-lysine [K]) subfamily of protein kinases, has been found downregulated by promoter hypermethylation and has been proposed to act as a specific tumour-suppressor gene in brain tumors. Although some contradictory studies indicated WNK2 as an autophagy modulator, its role in cancer cell death is largely unknown. There is also growing evidence for additional roles of WNK kinases in vesicular traffic. Aim: To evaluate the role of WNK2 in autophagy and endocytosis in the glioma context. Methods: Wild-type (wt) A172 cells (WNK2 promoter-methylated), and A172 cells transfected either with an empty vector (Ev) or with a WNK2 expression vector, were used to assess the basal cellular capacity to promote autophagy, through western blot and flow-cytometry analysis. Additionally, we evaluated the effect of WNK2 on general endocytosis trafficking routes by immunofluorescence. Results: The re-expression of ectopic WNK2 did not interfere with the expression levels of the autophagy-related protein light chain 3 (LC3-II) and did not promote alterations in the mTOR signaling pathway when compared with Ev or wt A172 cells. However, the restoration of WNK2 resulted in a marked increase (from 8% to 92.4%) in the formation of acidic vesicular organelles (AVOs). Moreover, our results also suggest that WNK2-expressing cells show a delayed uptake and internalization rate of cholera toxin B and transferrin ligands. Conclusions: The restoration of WNK2 interferes with vesicular traffic during the endocytosis pathway and increases AVO formation. These results also suggest a role for WNK2 in growth factor receptor turnover related to cell growth and homeostasis, and further associate WNK2 silencing with the genesis of gliomas.

Keywords: autophagy, endocytosis, glioma, WNK2

Procedia PDF Downloads 370
2540 How Defining the Semi-Professional Journalist Is Creating Nuance and a Familiar Future for Local Journalism

Authors: Ross Hawkes

Abstract:

The rise of hyperlocal journalism and its role in the wider local news ecosystem has been debated across both industry and academic circles, particularly via the lens of structures, models, and platforms. The nuances within this sphere are now allowing for the semi-professional journalist to emerge as a key component of the landscape at the fringes of journalism. By identifying and framing the labour of these individuals against a backdrop of change within the professional local newspaper publishing industry, it is possible to address wider debates around the ways in which participants enter and exist in the space between amateur and professional journalism. Considerations around prior experience and understanding allow us to better shape and nuance the hyperlocal landscape in order to understand the challenges and opportunities facing local news via this emergent form of semi-professional journalistic production. The disruption to local news posed by the changing nature of audiences, long-established methods of production, the rise of digital platforms, and increased competition in the online space has brought questions around the very nature and identity of local news, as well as the uncertain future and precarity which surrounds it. While the hyperlocal sector has long been associated as a potential future direction for local journalism through an alternative approach to reporting and as a mechanism for participants to pass between amateurism towards professionalism, there is now a semi-professional space being occupied in a different way. Those framed as semi-professional journalists are not necessarily transiting through this space at the fringes of the professional industry; instead, they are occupying and claiming the space as an entity within itself. By framing the semi-professional journalist through a lens of prior experience and knowledge of the sector, it is possible to identify how their motivations vary from the traditional metrics of financial gain, personal progression, or a sense of civic or community duty. While such factors may be by-products of their labour, the desire of such reporters to recreate and retain experiences and values from their past as a participant or consumer is the central basis of the framework to define the semi-professional journalist. Through understanding the motivations, aims and factors shaping the semi-professional journalist within the wider journalism and hyperlocal journalism debates and landscape, it will be possible to better frame the role they can play in sustaining the longer term provision of local news and addressing broader issues and factors within the sector.

Keywords: hyperlocal, journalism, local news, semi-professionalism

Procedia PDF Downloads 28
2539 Effects of Roasting as Preservative Method on Food Value of the Runner Groundnuts, Arachis hypogaea

Authors: M. Y. Maila, H. P. Makhubele

Abstract:

Roasting is one of the oldest preservation methods used for foods such as nuts and seeds. It is a process by which heat is applied to dry foodstuffs without the use of oil or water as a carrier. Groundnut seeds, also known as peanuts when sun-dried or roasted, are among the oldest oil crops and are mostly consumed as a snack, after roasting, in many parts of South Africa. However, roasting can denature proteins, destroy amino acids, decrease nutritive value and induce undesirable chemical changes in the final product. The aim of this study, therefore, was to evaluate the effect of various roasting times on the food value of runner groundnut seeds. A constant temperature of 160 °C and various time intervals (20, 30, 40, 50 and 60 min) were used for roasting groundnut seeds in an oven. Roasted groundnut seeds were then cooled and milled to flour. The milled sun-dried, raw groundnuts served as reference. The proximate analysis (moisture, energy and crude fats) was performed and the results were determined using standard methods. The antioxidant content was determined using HPLC. Mineral (cobalt, chromium, silicon and iron) contents were determined by first digesting the ash of sun-dried and roasted seed samples in 3 M hydrochloric acid and then measuring by atomic absorption spectrometry. All results were subjected to ANOVA using SAS software. Relative to the reference, roasting time significantly (p ≤ 0.05) reduced moisture (71%–88%), energy (74%) and crude fat (5%–64%) of the runner groundnut seeds, whereas the antioxidant content was significantly (p ≤ 0.05) increased (35%–72%) with increasing roasting time. Similarly, the tested mineral contents of the roasted runner groundnut seeds were also significantly (p ≤ 0.05) reduced at all roasting times: cobalt (21%–83%), chromium (48%–106%) and silicon (58%–77%). However, the iron content was not significantly affected. Generally, the tested runner groundnut seeds had higher food value in the raw state than in the roasted state, except for the antioxidant content. Moisture is a critical factor affecting the shelf life, texture and flavor of the final product. Loss of moisture ensures prolonged shelf life, which contributes to the stability of the roasted peanuts. Also, the increased antioxidant content of roasted groundnuts is valuable, since antioxidants are important health-promoting compounds. In conclusion, the overall reduction in the proximate and mineral contents of the runner groundnut seeds due to roasting is sufficient to suggest an influence of roasting time on the food value of the final product and its shelf life.

Keywords: dry roasting, legume, oil source, peanuts

Procedia PDF Downloads 287
2538 Modelling Soil Inherent Wind Erodibility Using Artifical Intellligent and Hybrid Techniques

Authors: Abbas Ahmadi, Bijan Raie, Mohammad Reza Neyshabouri, Mohammad Ali Ghorbani, Farrokh Asadzadeh

Abstract:

In recent years, vast areas of Urmia Lake in Dasht-e-Tabriz have dried up, leading to the exposure of saline sediments on the surface, and the lake's coastal areas are highly susceptible to wind erosion. This study was conducted to investigate wind erosion and its relevance to soil physicochemical properties, and to model wind erodibility (WE) using artificial intelligence techniques. For this purpose, 96 soil samples were collected from 0-5 cm depth across 414,000 hectares using a stratified random sampling method. To measure WE, all samples (<8 mm) were exposed to 5 different wind velocities (9.5, 11, 12.5, 14.1 and 15 m s-1 at a height of 20 cm) in a wind tunnel, and its relationship with soil physicochemical properties was evaluated. According to the results, WE varied within the range of 9.98-76.69 (g m-2 min-1)/(m s-1) with a mean of 10.21 and a coefficient of variation of 94.5%, showing relatively high variation in the studied area. WE was significantly (P<0.01) affected by soil physical properties, including mean weight diameter, erodible fraction (secondary particles smaller than 0.85 mm) and the percentage of the secondary particle size classes 2-4.75, 1.7-2 and 0.1-0.25 mm. Results showed that the mean weight diameter, erodible fraction and percentage of the 0.1-0.25 mm size class demonstrated stronger relationships with WE (coefficients of determination were 0.69, 0.67 and 0.68, respectively). This study also compared the efficiency of multiple linear regression (MLR), gene expression programming (GEP), an artificial neural network (MLP), an artificial neural network based on a genetic algorithm (MLP-GA) and an artificial neural network based on the whale optimization algorithm (MLP-WOA) in predicting soil wind erodibility in Dasht-e-Tabriz. Among the 32 measured soil variables, the percentage of fine sand, the 1.7-2.0 and 0.1-0.25 mm size classes (secondary particles) and organic carbon were selected as the model inputs by step-wise regression. Findings showed MLP-WOA to be the most powerful artificial intelligence technique (R2=0.87, NSE=0.87, ME=0.11 and RMSE=2.9) for predicting soil wind erodibility in the study area, followed by MLP-GA, MLP, GEP and MLR, and the differences between these methods were significant according to the MGN test. Based on the above findings, MLP-WOA may be used as a promising method to predict soil wind erodibility in the study area.
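
For clarity on the reported goodness-of-fit statistics (R², NSE, ME, RMSE), the sketch below computes them for a vector of observed and predicted erodibility values. The definitions are the standard ones; taking ME as the mean error (bias) is an assumption about the authors' notation, and the numbers are placeholders.

```python
import numpy as np

def fit_metrics(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    resid = obs - pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2       # squared Pearson correlation
    nse = 1.0 - ss_res / ss_tot                  # Nash-Sutcliffe efficiency
    me = resid.mean()                            # mean error (bias), assumed meaning of "ME"
    rmse = np.sqrt(np.mean(resid ** 2))
    return {"R2": r2, "NSE": nse, "ME": me, "RMSE": rmse}

# Toy observed vs. predicted wind erodibility values, (g m-2 min-1)/(m s-1)
obs = np.array([5.2, 8.1, 12.4, 20.7, 35.0, 9.9])
pred = np.array([6.0, 7.5, 13.1, 19.0, 33.2, 11.0])
print(fit_metrics(obs, pred))
```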

Keywords: wind erosion, erodible fraction, gene expression programming, artificial neural network

Procedia PDF Downloads 71
2537 Association of Severe Preeclampsia with Offspring Neurodevelopmental and Psychiatric Disorders: A Finnish Population-Based Cohort Study

Authors: Linghua Kong, Xinxia Chen, Mika Gissler, Catharina Lavebratt

Abstract:

Background: Prenatal exposure to preeclampsia has been associated with an increased risk of offspring attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD), and intellectual disability. However, little is known about the association between prenatal exposure to severe preeclampsia and neurodevelopmental and psychiatric disorders in offspring. Objective: This study aimed to assess the risk of maternal preeclampsia combined with perinatal problems, specifically low birth weight and prematurity, on offspring neuropsychiatric disorders. Methods: All singleton live births in Finland between 1996 and 2014 (n=1 012 723) were followed up in nation-wide registries until 2018. Main exposures included preeclampsia, small for gestational age, and delivery before 34 gestational weeks. Offspring neurodevelopmental and psychiatric disorders (ICD-10 codes) were examined as outcome variables. Offspring birth year, sex, maternal age at delivery, parity, marital status at birth, mother's country of birth, maternal smoking, maternal gestational diabetes, maternal use of psychotropic medication during pregnancy, and maternal systemic inflammatory diseases were used as covariates. Risks for neurodevelopmental and psychiatric disorders were estimated using Cox proportional hazards modeling. Results: Of the 1 012 723 offspring, 25 901 (2.6%) were exposed to preeclampsia, and 93 281 (9.2%) were diagnosed with a neuropsychiatric disorder. Compared to births unexposed to preeclampsia, small for gestational age or delivery before 34 gestational weeks, those exposed to preeclampsia only had a 21% increase in the likelihood of any neuropsychiatric disorder after adjusting for potential confounding (adjusted HR=1.21, 95% CI: 1.15-1.26), while exposure to preeclampsia combined with small for gestational age or delivery before 34 gestational weeks carried a more than twofold increased risk of neuropsychiatric disorders in the child (adjusted HR=2.16, 95% CI: 2.02-2.32). The adjusted HR for neuropsychiatric disorders in offspring with small for gestational age or delivery before 34 gestational weeks only was 1.79 (95% CI: 1.73-1.83). In addition, the risk estimates in offspring exposed to both preeclampsia and perinatal problems were greater than in those exposed only to preeclampsia for personality disorders (adjusted HR=1.66; 95% CI: 1.07-2.57), intellectual disabilities (adjusted HR=3.47; 95% CI: 2.86-4.22), specific developmental disorders (adjusted HR=2.91; 95% CI: 2.69-3.15), ASD (adjusted HR=1.75; 95% CI: 1.42-2.17), ADHD and conduct disorders (adjusted HR=2.00; 95% CI: 1.76-2.27), and other behavioral and emotional disorders (adjusted HR=2.09; 95% CI: 1.84-2.37). Conclusion: In utero exposure to severe preeclampsia increased the risk of several neurodevelopmental and psychiatric disorders in offspring. Our findings are relevant to women with hypertensive disorders with regard to pregnancy consultation and management and may yield effective clues for the prevention of neurodevelopmental and psychiatric disorders in childhood.
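
A minimal sketch of the kind of Cox proportional hazards model described, using the lifelines package. The data frame, column names, covariates, and synthetic data below are assumptions for illustration only and do not reflect the registry data or the full covariate set.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "preeclampsia":   rng.binomial(1, 0.03, n),
    "sga_or_preterm": rng.binomial(1, 0.09, n),   # small for gestational age or <34 weeks
    "maternal_age":   rng.normal(30, 5, n),
})
# Synthetic follow-up times and diagnosis indicator (placeholders, not registry data)
baseline = rng.exponential(30, n)
df["followup_years"] = np.minimum(baseline / np.exp(0.2 * df["preeclampsia"] + 0.6 * df["sga_or_preterm"]), 22)
df["neuropsych_dx"] = (df["followup_years"] < 22).astype(int)   # 0 = censored at end of follow-up

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="neuropsych_dx")
cph.print_summary()   # exp(coef) column gives adjusted hazard ratios for the covariates
```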

Keywords: low birth weight, neurodevelopmental disorders, preeclampsia, prematurity, psychiatric disorders

Procedia PDF Downloads 147
2536 Improving the Detection of Depression in Sri Lanka: Cross-Sectional Study Evaluating the Efficacy of a 2-Question Screen for Depression

Authors: Prasad Urvashi, Wynn Yezarni, Williams Shehan, Ravindran Arun

Abstract:

Introduction: Primary health services are often the first point of contact that patients with mental illness have with the healthcare system. A number of tools have been developed to increase the detection of depression in the context of primary care. However, one challenge amongst many is utilizing these tools within the limited primary care consultation timeframe. Therefore, short questionnaires that screen for depression and are just as effective as more comprehensive diagnostic tools may be beneficial in improving detection rates among patients visiting a primary care setting. Objective: To develop and determine the sensitivity and specificity of a 2-Question Questionnaire (2-QQ) to screen for depression in a suburban primary care clinic in Ragama, Sri Lanka. The purpose is to develop a short screening tool for depression that is culturally adapted in order to increase the detection of depression in the Sri Lankan patient population. Methods: This was a cross-sectional study involving two steps. Step one: verbal administration of the 2-QQ to patients by their primary care physician. Step two: completion of the Peradeniya Depression Scale (PDS), a validated diagnostic tool for depression, by the patient after their consultation with the primary care physician. The results from the PDS were then correlated with the results from the 2-QQ for each patient to determine the sensitivity and specificity of the 2-QQ. Results: A score of 1 or above on the 2-QQ was most sensitive but least specific. Thus, setting the threshold at this level is effective for correctly identifying depressed patients, but also inaccurately captures patients who are not depressed. A score of 6 on the 2-QQ was most specific but least sensitive. Setting the threshold at this level is effective for correctly identifying patients without depression, but not very effective at capturing patients with depression. Discussion: In the context of primary care, it may be worthwhile setting the 2-QQ screen at a lower threshold for positivity (such as a score of 1 or above). This would generate a high test sensitivity and thus capture the majority of patients that have depression. On the other hand, by setting a low threshold for positivity, patients who do not have depression but score 1 or above on the 2-QQ will also be falsely identified as testing positive for depression. However, the benefits of identifying patients who present with depression may outweigh the harms of falsely identifying a non-depressed patient. It is our hope that the 2-QQ will serve as a quick primary screen for depression in the primary care setting and serve as a catalyst to identify and treat individuals with depression.
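
For reference, the sketch below shows how sensitivity and specificity are computed for a given 2-QQ cut-off against the reference diagnosis (PDS). The 2x2 counts are hypothetical and only illustrate the trade-off described above: a low cut-off raises sensitivity at the cost of specificity, and vice versa.

```python
def sensitivity_specificity(tp: int, fp: int, tn: int, fn: int):
    """Screen result vs. reference diagnosis at a chosen 2-QQ cut-off."""
    sensitivity = tp / (tp + fn)   # depressed patients correctly flagged by the screen
    specificity = tn / (tn + fp)   # non-depressed patients correctly passed by the screen
    return sensitivity, specificity

# Hypothetical counts at a cut-off of "score >= 1"
print(sensitivity_specificity(tp=18, fp=12, tn=15, fn=2))   # high sensitivity, modest specificity
# Hypothetical counts at a stricter cut-off of "score = 6"
print(sensitivity_specificity(tp=6, fp=1, tn=26, fn=14))    # high specificity, low sensitivity
```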

Keywords: depression, primary care, screening tool, Sri Lanka

Procedia PDF Downloads 257
2535 Immunocytochemical Stability of Antigens in Cytological Samples Stored in In-house Liquid-Based Medium

Authors: Anamarija Kuhar, Veronika Kloboves Prevodnik, Nataša Nolde, Ulrika Klopčič

Abstract:

The decision to perform immunocytochemistry (ICC) is usually made on the basis of the findings in Giemsa- and/or Papanicolaou-stained smears. More demanding diagnostic cases require additional cytological preparations. It is therefore convenient to suspend cytological samples in a liquid-based medium (LBM) that preserves antigen and morphological properties. However, the duration for which these properties are preserved in the medium is usually unknown. Eventually, cell morphology becomes impaired and altered, and antigen properties may be lost or become diffuse. In this study, the influence of the length of cytological sample storage in an in-house liquid-based medium on antigen properties and cell morphology is evaluated. The question is how long cytological samples can be stored in this medium so that the results of immunocytochemical reactions are still reliable and can be safely used in routine cytopathological diagnostics. The stability of the 6 ICC markers most frequently used in everyday routine work was tested: Cytokeratin AE1/AE3, Calretinin, Epithelial specific antigen Ep-CAM (MOC-31), CD 45, Oestrogen receptor (ER), and Melanoma triple cocktail were tested on methanol-fixed cytospins prepared from fresh fine needle aspiration biopsies, effusion samples, and disintegrated lymph nodes suspended in the in-house cell medium. Cytospins were prepared on the day of sampling as well as on the second, fourth, fifth, and eighth day after sample collection. Next, they were fixed in methanol and immunocytochemically stained. Finally, the percentage of positively stained cells, reaction intensity, counterstaining, and cell morphology were assessed using two assessment methods: the internal assessment and the UK NEQAS ICC scheme assessment. Results show that the antigen properties for Cytokeratin AE1/AE3, MOC-31, CD 45, ER, and Melanoma triple cocktail were preserved even after 8 days of storage in the in-house LBM, while the antigen properties for Calretinin remained unchanged for only 4 days. The key parameters for assessing antigen detection are the proportion of cells with a positive reaction and the intensity of staining. Well-preserved cell morphology is highly important for reliable interpretation of an ICC reaction. Therefore, it would be valuable to perform a similar analysis for other ICC markers to determine the duration for which antigen and morphological properties are preserved in LBM.

Keywords: cytology samples, cytospins, immunocytochemistry, liquid-based cytology

Procedia PDF Downloads 143
2534 Optical and Surface Characteristics of Direct Composite, Polished and Glazed Ceramic Materials After Exposure to Tooth Brush Abrasion and Staining Solution

Authors: Maryam Firouzmandi, Moosa Miri

Abstract:

Aim and background: Esthetic and structural reconstruction of anterior teeth may require the application of different restoration materials. In this regard, a combination of a direct composite veneer and a ceramic crown is a common treatment option. Despite the initial matching, their long-term harmony in terms of optical and surface characteristics is a matter of concern. The purpose of this study was to evaluate and compare the optical and surface characteristics of direct composite, polished ceramic, and glazed ceramic materials after exposure to tooth-brush abrasion and staining solution. Materials and methods: Ten 2 mm thick disk-shaped specimens were prepared from IPS Empress Direct composite and twenty specimens from IPS e.max CAD blocks. The composite specimens and ten ceramic specimens were polished using D&Z composite and ceramic polishing kits. The other ten ceramic specimens were glazed with glazing liquid. Baseline measurements of roughness, CIELab coordinates, and luminance were recorded. Then the specimens underwent thermocycling, tooth brushing, and coffee staining. Afterwards, the final measurements were recorded. The color coordinates were used to calculate ΔE76, ΔE00, the translucency parameter, and the contrast ratio. Data were analyzed by one-way ANOVA and post hoc LSD tests. Results: The baseline and final roughness of the study groups were not different. At baseline, the order of roughness for the study groups was as follows: composite < glazed ceramic < polished ceramic, but after aging no difference between the ceramic groups was detected. The comparison of baseline and final luminance was similar to that of roughness but in reverse order. Unlike the change in roughness, which was comparable between the groups, the change in luminance of the glazed ceramic group was higher than in the other groups. ΔE76 and ΔE00 in the composite group were 18.35 and 12.84, in the glazed ceramic group 1.3 and 0.79, and in the polished ceramic group 1.26 and 0.85. These values for the composite group were significantly different from the ceramic groups. The translucency of the composite at baseline was significantly higher than the final value, but there was no significant difference between these values in the ceramic groups. The composite was more translucent than the ceramic at both the baseline and final measurements. Conclusion: The glazed ceramic surface was smoother than the polished ceramic. Aging did not change the roughness. The optical properties (color and translucency) of the composite were influenced by aging. The luminance of the composite, glazed ceramic, and polished ceramic decreased after aging, but the reduction in the glazed ceramic was more pronounced.
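
For reference, the colour-difference and translucency quantities used above follow standard CIELab definitions; the sketch below computes ΔE*76 and the translucency parameter (TP) from L*, a*, b* coordinates. The coordinates in the example are made up, and ΔE00 is omitted because its formula is considerably longer.

```python
import math

def delta_e76(lab1, lab2):
    """CIE 1976 colour difference between two (L*, a*, b*) triplets."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def translucency_parameter(lab_black, lab_white):
    """TP: colour difference of the same specimen measured over black vs. white backings."""
    return delta_e76(lab_black, lab_white)

baseline = (78.4, -1.2, 12.5)      # hypothetical composite specimen at baseline
aged     = (66.0,  3.4, 25.1)      # hypothetical values after thermocycling/brushing/staining
print(round(delta_e76(baseline, aged), 2))          # of the order of the ~18 reported for composite
print(round(translucency_parameter((70.1, -0.8, 10.2), (82.3, -1.5, 14.0)), 2))
```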

Keywords: ceramic, tooth-brush abrasion, staining solution, composite resin

Procedia PDF Downloads 185
2533 Investigation of Deep Eutectic Solvents for Microwave Assisted Extraction and Headspace Gas Chromatographic Determination of Hexanal in Fat-Rich Food

Authors: Birute Bugelyte, Ingrida Jurkute, Vida Vickackaite

Abstract:

The most complicated step in the determination of volatile compounds in complex matrices is the separation of the analytes from the matrix. Traditional analyte separation methods (liquid extraction, Soxhlet extraction) require a lot of time and labour; moreover, there is a risk of losing the volatile analytes. In recent years, headspace gas chromatography has been used to determine volatile compounds. To date, traditional extraction solvents have been used in headspace gas chromatography. As a rule, such solvents are rather volatile; therefore, a large amount of solvent vapour enters the headspace together with the analyte. Because of that, the determination sensitivity for the analyte is reduced, and a huge solvent peak in the chromatogram can overlap with the peaks of the analytes. The sensitivity is also limited by the fact that the sample cannot be heated to a temperature higher than the solvent boiling point. In 2018 it was suggested to replace traditional headspace gas chromatographic solvents with non-volatile, eco-friendly, biodegradable, inexpensive, and easy-to-prepare deep eutectic solvents (DESs). Generally, deep eutectic solvents have low vapour pressure, a relatively wide liquid range, and a much lower melting point than that of any of their individual components. Those features make DESs very attractive as matrix media for application in headspace gas chromatography. Also, DESs are polar compounds, so they can be applied for microwave assisted extraction. The aim of this work was to investigate the possibility of applying deep eutectic solvents for microwave assisted extraction and headspace gas chromatographic determination of hexanal in fat-rich food. Hexanal is considered one of the most suitable indicators of the degree of lipid oxidation, as it is the main secondary oxidation product of linoleic acid, which is one of the principal fatty acids of many edible oils. Eight hydrophilic and hydrophobic deep eutectic solvents were synthesized, and the influence of temperature and microwaves on their headspace gas chromatographic behaviour was investigated. Using the most suitable DES, the microwave assisted extraction conditions and headspace gas chromatographic conditions were optimized for the determination of hexanal in potato chips. Under the optimized conditions, the quality parameters of the prepared technique were determined. The suggested technique was applied to the determination of hexanal in potato chips and other fat-rich foods.

Keywords: deep eutectic solvents, headspace gas chromatography, hexanal, microwave assisted extraction

Procedia PDF Downloads 195
2532 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink

Authors: Sanjay Rathee, Arti Kashyap

Abstract:

Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days exceeds the capacity of a single machine. Therefore, to meet the demands of this ever-growing data, there is a need for a multi-machine Apriori algorithm. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks using the MapReduce approach for distributed storage and distributed processing of huge datasets on clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their built-in support for distributed computation. Earlier, we proposed a Reduced-Apriori algorithm on the Spark platform that outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel: it targets implementing, testing, and benchmarking Apriori, Reduced-Apriori, and our new algorithm, ReducedAll-Apriori (RA-Apriori), on Apache Flink and compares them with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelined structure allows the next iteration to start as soon as partial results of the earlier iteration are available, so there is no need to wait for all reducer results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency, and scalability of the Apriori and RA-Apriori algorithms on Flink.
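To make the iterative structure that Flink's pipelining exploits concrete, here is a minimal single-machine Python sketch of the Apriori candidate-generate-and-count loop. It illustrates the base algorithm only, not the authors' R-Apriori or RA-Apriori variants, and the transactions and support threshold are made up.

```python
# Minimal single-machine sketch of the Apriori candidate-generate-and-count loop
# that the distributed versions parallelize; demo data and min_support are illustrative.
from itertools import combinations
from collections import defaultdict

def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    # L1: count single items and keep the frequent ones
    counts = defaultdict(int)
    for t in transactions:
        for item in t:
            counts[frozenset([item])] += 1
    frequent = {s for s, c in counts.items() if c >= min_support}
    all_frequent = {s: counts[s] for s in frequent}

    k = 2
    while frequent:
        # join step: union pairs of frequent (k-1)-itemsets into k-item candidates
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        # prune step: every (k-1)-subset of a candidate must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(sub) in frequent for sub in combinations(c, k - 1))}
        # count step (the map + reduce phase in the distributed versions)
        counts = defaultdict(int)
        for t in transactions:
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        frequent = {c for c in candidates if counts[c] >= min_support}
        all_frequent.update({c: counts[c] for c in frequent})
        k += 1
    return all_frequent

if __name__ == "__main__":
    demo = [["bread", "milk"], ["bread", "diapers", "beer"],
            ["milk", "diapers", "beer"], ["bread", "milk", "diapers", "beer"]]
    print(apriori(demo, min_support=2))
```

In the distributed setting, the count step is the map/reduce phase that Hadoop re-reads from disk at every k, whereas Flink can begin generating k+1 candidates while counts for k are still streaming in.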

Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining

Procedia PDF Downloads 294
2531 Time Estimation of Return to Sports Based on Classification of Health Levels of Anterior Cruciate Ligament Using a Convolutional Neural Network after Reconstruction Surgery

Authors: Zeinab Jafari A., Ali Sharifnezhad B., Mohammad Razi C., Mohammad Haghpanahi D., Arash Maghsoudi

Abstract:

Background and Objective: Sports-related rupture of the anterior cruciate ligament (ACL) and subsequent injuries have been associated with various disorders, such as long-lasting changes in athletes' muscle activation patterns, which may persist after ACL reconstruction (ACLR). Rupture of the ACL might result in abnormal patterns of movement execution, extending the treatment period and delaying athletes' return to sports (RTS). As ACL injury is especially prevalent among athletes, the lengthy treatment process and athletes' absence from sports are of great concern to athletes and coaches. Thus, estimating a safe time for RTS is of crucial importance. Therefore, using a deep neural network (DNN) to classify the health levels of the ACL in injured athletes, this study aimed to estimate the safe time for athletes to return to competition. Methods: Ten athletes with ACLR and fourteen healthy controls participated in this study. Three health levels of the ACL were defined: healthy, six months post-ACLR surgery, and nine months post-ACLR surgery. Athletes with ACLR were tested six and nine months after surgery. During the study, surface electromyography (sEMG) signals were recorded from five knee muscles, namely the Rectus Femoris (RF), Vastus Lateralis (VL), Vastus Medialis (VM), Biceps Femoris (BF), and Semitendinosus (ST), during single-leg drop landing (SLDL) and single-leg forward hopping (SLFH) tasks. The Pseudo-Wigner-Ville distribution (PWVD) was used to produce three-dimensional (3-D) images of the energy distribution patterns of the sEMG signals. These 3-D images were then converted to two-dimensional (2-D) images using a heat-mapping technique and fed to a deep convolutional neural network (DCNN). Results: We estimated the safe time of RTS by designing a DCNN classifier with an accuracy of 90% that could classify the ACL into the three health levels. Discussion: The findings of this study demonstrate the potential of DCNN classification of sEMG signals for estimating RTS time, which will assist in evaluating the recovery process after ACLR in athletes.
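The sketch below illustrates the general shape of such a pipeline in Python: a 1-D sEMG segment is turned into a 2-D time-frequency "heat map" and passed to a small convolutional network with a three-class head. It substitutes an ordinary STFT spectrogram for the Pseudo-Wigner-Ville distribution, and the helper names (emg_to_image, SmallDCNN), network layout, and random data are invented for illustration, not the authors' model.

```python
# Illustrative pipeline sketch (not the authors' implementation): sEMG segment ->
# time-frequency image -> small CNN with a three-class head (the three ACL health levels).
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

def emg_to_image(signal, fs=2000, size=(64, 64)):
    """Compute a log-magnitude spectrogram and crudely resize it to a fixed grid."""
    f, t, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=192)
    img = np.log1p(Sxx)
    # nearest-neighbour resize to a fixed input size (placeholder for proper resampling)
    rows = np.linspace(0, img.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, size[1]).astype(int)
    img = img[np.ix_(rows, cols)]
    img = (img - img.mean()) / (img.std() + 1e-8)
    return torch.tensor(img, dtype=torch.float32).unsqueeze(0)  # shape (1, H, W)

class SmallDCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    fake_emg = np.random.randn(4000)             # stand-in for a recorded sEMG segment
    image = emg_to_image(fake_emg).unsqueeze(0)  # batch of 1
    logits = SmallDCNN()(image)
    print(logits.shape)                          # torch.Size([1, 3])
```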

Keywords: anterior cruciate ligament reconstruction, return to sports, surface electromyography, deep convolutional neural network

Procedia PDF Downloads 78
2530 A Triad Pedagogy for Increased Digital Competence of Human Resource Management Students: Reflecting on Human Resource Information Systems at a South African University

Authors: Esther Pearl Palmer

Abstract:

Driven by the increased pressure on Higher Education Institutions (HEIs) to produce work-ready graduates for the modern world of work, this study reflects on triad teaching and learning practices to increase student engagement and employability. In the South African higher education context, the employability of graduates is imperative for strengthening the country's economy and increasing its competitiveness. Within this context, the field of Human Resource Management (HRM) calls for innovative methods and approaches to teaching, learning, and assessing the skills and competencies of graduates to render them employable. Digital competency in Human Resource Information Systems (HRIS) is an important component of, and prerequisite for, employment in HRM. The purpose of this research is to reflect on the HRIS subject developed by lecturers at the Central University of Technology, Free State (CUT), with the intention of actively engaging students in real-world learning activities and increasing their employability. The Enrichment Triad Model (ETM) was used as the theoretical framework to develop the subject, as it supports a triad teaching and learning approach to education. It is, furthermore, an inter-structured model that supports collaboration between industry, academics, and students. The study follows a mixed-methods approach to reflect on the learning experiences of industry, academics, and students in the subject field over the past three years. This paper is a work in progress and seeks to broaden the scope of extant studies on student engagement in work-related learning to increase employability. Based on the ETM as theoretical framework and pedagogical practice, this paper proposes that following a triad teaching and learning approach will increase students' work-related skills. Findings from the study show that students, academics, and industry alike regard educational opportunities that incorporate active learning experiences with the world of work as enhancing student engagement in learning and rendering students more employable.

Keywords: digital competence, enriched triad model, human resource information systems, student engagement, triad pedagogy

Procedia PDF Downloads 92
2529 Effects of the Affordable Care Act On Preventive Care Disparities

Authors: Cagdas Agirdas

Abstract:

Background: The Affordable Care Act (ACA) requires non-grandfathered private insurance plans, starting with plan years beginning on or after September 23rd, 2010, to provide certain preventive care services without any cost sharing in the form of deductibles, copayments, or coinsurance. This requirement may affect racial and ethnic disparities in preventive care, as it provides the largest copay reduction in preventive care. Objectives: We ask whether the ACA's free preventive care benefits are associated with a reduction in racial and ethnic disparities in the utilization of four preventive services: cholesterol screenings, colonoscopies, mammograms, and Pap smears. Methods: We use a data set of over 6,000 individuals from the 2009, 2010, and 2013 Medical Expenditure Panel Surveys (MEPS). We restrict the data set to individuals who are old enough to be eligible for each preventive service. Our difference-in-differences logistic regression model classifies privately-insured Hispanics, African Americans, and Asians as the treatment groups and 2013 as the after-policy year. Our control group consists of non-Hispanic whites on Medicaid, as this program already covered preventive care services for free or at low cost before the ACA. Results: After controlling for income, education, marital status, preferred interview language, self-reported health status, employment, having a usual source of care, age, and gender, we find that the ACA is associated with increases in the probability that the median privately-insured Hispanic person receives a colonoscopy (by 3.6%) and a mammogram (by 3.1%), compared to a non-Hispanic white person on Medicaid. Similarly, the median privately-insured African American person's probability of receiving these two preventive services improved by 2.3% and 2.4%, respectively, compared to a non-Hispanic white person on Medicaid. We do not find significant improvements for any racial or ethnic group for cholesterol screenings or Pap smears, and our results do not indicate any significant changes for Asians relative to non-Hispanic whites for the four preventive services. These reductions in racial/ethnic disparities are robust to reconfigurations of time periods, previous diagnoses, and residential status. Conclusions: The early effects of the ACA's provision of free preventive care are significant for Hispanics and African Americans. Further research is needed for later years, as more individuals became aware of these benefits.
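For concreteness, the Python sketch below shows the general form of a difference-in-differences logistic regression of this kind, where the coefficient on the treated-by-post interaction carries the policy effect. The variable names (got_colonoscopy, treated, post) and the simulated data are illustrative stand-ins, not the MEPS variables or the paper's exact specification.

```python
# Hedged sketch of a difference-in-differences logit; all data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 6000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = privately-insured minority group
    "post":    rng.integers(0, 2, n),   # 1 = 2013 (after free preventive care)
    "age":     rng.integers(50, 80, n),
    "female":  rng.integers(0, 2, n),
})
# simulate an outcome with a small positive interaction (the DiD effect)
logit_p = (-1.0 + 0.2 * df.treated + 0.3 * df.post + 0.25 * df.treated * df.post
           + 0.01 * (df.age - 65) + 0.1 * df.female)
p = 1 / (1 + np.exp(-logit_p))
df["got_colonoscopy"] = rng.binomial(1, p)

# the coefficient on treated:post is the difference-in-differences estimate
model = smf.logit("got_colonoscopy ~ treated * post + age + female", data=df).fit(disp=0)
print(model.summary())
```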

Keywords: preventive care, Affordable Care Act, cost sharing, racial disparities

Procedia PDF Downloads 153
2528 Preparedness Level of European Cultural Institutions and Catering Establishments Within the Sanitary and Epidemiological Dimension During the COVID-19 Pandemic

Authors: Magdalena Barbara Kaziuk

Abstract:

Introduction: In December 2019, the first case of an acute infectious respiratory disease caused by the SARS-CoV-2 virus was recorded in Wuhan in Central China. On March 11, 2020, the World Health Organization declared the string of COVID-19 cases a pandemic, and most countries implemented numerous sanitary and epidemiological restrictions, among others in the travel industry. Aim: The aim of the study was to assess the preparedness of European cultural institutions and catering establishments within the sanitary and epidemiological dimension during the COVID-19 pandemic, as perceived by Polish tourists, and the tourists' sense of safety in selected destinations. Material and methods: The study involved 300 Polish tourists (125 females, 175 males, age 46.5 +/- 12.9 years) who traveled during the COVID-19 pandemic to Southern European countries. The five most popular travel destinations were selected: Italy, Austria, Greece, Croatia, and the Mediterranean islands. The tourists assessed cultural institutions and catering establishments using a proprietary questionnaire covering preparedness regarding sanitary and epidemiological requirements and the tourists' sense of safety. The number of respondents per country was deliberately set at 60. Results: The more stringent the sanitary regime, the higher the sense of safety in the studied group of females aged 45-50 (p<0.005), while the more stringent the sanitary and epidemiological measures implemented, the shorter the stay (p<0.001). Less stringent restrictions resulted in an increased sense of freedom and mental rest in the group of studied males (p<0.005). Conclusions: The respondents' opinions revealed that the highest level of safety with regard to sanitary and epidemiological requirements (masks covering the mouth and nose worn by both personnel and the public, the necessity to present a COVID passport, the possibility of disinfecting hands) was observed in Austria and Italy, while the shorter length of stay in these countries resulted from high prices, particularly in catering establishments. According to the respondents, less stringent restrictions, including the lack of a requirement to hold a COVID passport, were associated with Croatia and the Mediterranean islands; there, the sense of safety was satisfactory, while the sense of freedom and mental rest was high.

Keywords: sanitary and epidemiological regimes, tourist facilities, COVID-19 pandemic, sense of safety

Procedia PDF Downloads 124
2527 Possible Role of Fenofibrate and Clofibrate in Attenuated Cardioprotective Effect of Ischemic Preconditioning in Hyperlipidemic Rat Hearts

Authors: Gurfateh Singh, Mu Khan, Razia Khanam, Govind Mohan

Abstract:

Objective: The present study was designed to investigate the beneficial roles of fenofibrate and clofibrate in the attenuated cardioprotective effect of ischemic preconditioning (IPC) in hyperlipidemic rat hearts. Materials & Methods: Experimental hyperlipidemia was produced by feeding rats a high-fat diet for 28 days. Isolated Langendorff-perfused normal and hyperlipidemic rat hearts were subjected to global ischemia for 30 min followed by reperfusion for 120 min. Myocardial infarct size was assessed macroscopically using triphenyltetrazolium chloride staining. The coronary effluent was analyzed for lactate dehydrogenase (LDH) and creatine kinase-MB (CK-MB) release to assess the extent of cardiac injury. Moreover, oxidative stress in the heart was assessed by measuring thiobarbituric acid reactive substances (TBARS), superoxide anion generation, and the reduced form of glutathione. Results: Ischemia-reperfusion (I/R) induced oxidative stress, increasing TBARS and superoxide anion generation and decreasing reduced glutathione in normal and hyperlipidemic rat hearts. Moreover, I/R produced myocardial injury, assessed in terms of an increase in myocardial infarct size, LDH and CK-MB release in the coronary effluent, and a decrease in coronary flow rate, in normal and hyperlipidemic rat hearts. In addition, the hyperlipidemic rat hearts showed enhanced I/R-induced myocardial injury with a higher degree of oxidative stress compared with normal rat hearts subjected to I/R. Four episodes of IPC (5 min each) afforded cardioprotection against I/R-induced myocardial injury in normal rat hearts, as assessed in terms of improvement in coronary flow rate and reduction in myocardial infarct size, LDH, CK-MB, and oxidative stress. On the other hand, IPC-mediated myocardial protection against I/R injury was abolished in hyperlipidemic rat hearts. However, treatment with fenofibrate (100 mg/kg/day, i.p.) or clofibrate (300 mg/kg/day, i.p.), agonists of PPAR-α, did not affect the cardioprotective effect of IPC in normal rat hearts but markedly restored the cardioprotective potential of IPC in hyperlipidemic rat hearts. Conclusion: The high degree of oxidative stress produced in hyperlipidemic rat hearts during reperfusion and the consequent downregulation of PPAR-α may be responsible for abolishing the cardioprotective potential of IPC.

Keywords: hyperlipidemia, ischemia-reperfusion injury, ischemic preconditioning, PPAR-α

Procedia PDF Downloads 288
2526 Moderate Electric Field Influence on Carotenoids Extraction Time from Heterochlorella luteoviridis

Authors: Débora P. Jaeschke, Eduardo A. Merlo, Rosane Rech, Giovana D. Mercali, Ligia D. F. Marczak

Abstract:

Carotenoids are high value-added pigments that can be alternatively extracted from some microalgae species. However, the application of carotenoids synthesized by microalgae is still limited due to the use of toxic organic solvents. In this context, studies involving alternative extraction methods have been conducted with more sustainable solvents in order to replace the solvent and reduce the solvent volume and the extraction time. The aim of the present work was to evaluate the extraction time of carotenoids from the microalga Heterochlorella luteoviridis using a moderate electric field (MEF) as a pre-treatment to the extraction. The extraction methodology consisted of a pre-treatment in the presence of MEF (180 V) and ethanol (25%, v/v) for 10 min, followed by a diffusive step performed for 50 min using a higher ethanol concentration (75%, v/v). The extraction experiments were conducted at 30 °C; to keep the temperature at this value, an extraction cell with a water jacket connected to a water bath was used. Also, to enable the evaluation of the MEF effect on the extraction, control experiments were performed in the same cell and under the same conditions without voltage application. During the extraction experiments, samples were withdrawn at 1, 5, and 10 min of the pre-treatment and at 1, 5, 30, 40, and 50 min of the diffusive step. Samples were then centrifuged, and carotenoid analyses were performed on the supernatant. Furthermore, an exhaustive extraction with ethyl acetate and methanol was performed, and the carotenoid content found in this analysis was considered the total carotenoid content of the microalga. The results showed that the application of MEF as a pre-treatment influenced the extraction yield and the extraction time during the diffusive step; after the MEF pre-treatment and 50 min of the diffusive step, it was possible to extract up to 60% of the total carotenoid content. Also, the carotenoid concentrations of the extracts withdrawn at 5 and 30 min of the diffusive step did not present a statistical difference, meaning that carotenoid diffusion occurs mainly at the very beginning of the extraction. On the other hand, the results for the control experiments showed that carotenoid diffusion occurs mostly over the first 30 min of the diffusive step, which evidences the MEF effect on the extraction time. Moreover, the carotenoid concentrations in samples withdrawn during the pre-treatment (1, 5, and 10 min) were below the quantification limit of the analysis, indicating that the extraction occurred in the diffusive step, when ethanol (75%, v/v) was added to the medium. It is possible that MEF promoted cell membrane permeabilization and, when ethanol (75%) was added, carotenoids interacted with the solvent and diffused easily. Based on the results, it is possible to infer that MEF decreased the carotenoid extraction time by increasing the permeability of the cell membrane, which facilitates diffusion from the cell to the medium.

Keywords: moderate electric field (MEF), pigments, microalgae, ethanol

Procedia PDF Downloads 463
2525 Physical Education Effect on Sports Science Analysis Technology

Authors: Peter Adly Hamdy Fahmy

Abstract:

The aim of the study was to examine the effects of a physical education program on student learning by comparing a physical education model combined with the teaching of personal and social responsibility (TPSR) and a traditional teaching model combined with TPSR, with learning outcomes involving sports self-efficacy, athletic performance, enthusiasm for sport, group cohesion, sense of responsibility, and game performance. The participants were 3 secondary school physical education teachers and 6 physical education classes, comprising 133 students (75 in the experimental group and 58 in the control group); each teacher taught both an experimental class and a control class for 16 weeks. The research methods included surveys, interviews, and focus group meetings. Research instruments included the Personal and Social Responsibility Questionnaire, the Sports Enthusiasm Scale, the Group Cohesion Scale, the Sports Self-Efficacy Scale, and the Game Performance Assessment Tool. Multivariate analyses of covariance and repeated-measures ANOVA were used to examine differences in student learning outcomes between the TPSR-physical education model and the TPSR-traditional teaching model. The research findings are as follows: 1) The TPSR sports education model can improve students' learning outcomes, including sports self-efficacy, game performance, sports enthusiasm, team cohesion, group awareness, and responsibility. 2) The traditional teaching model with TPSR could improve student learning outcomes, including sports self-efficacy, responsibility, and game performance. 3) The sports education model with TPSR could improve learning outcomes more than the traditional teaching model with TPSR, including sports self-efficacy, sports enthusiasm, responsibility, and game performance. 4) Based on qualitative data on teachers' and students' learning experiences, the physical education model with TPSR significantly improves learning motivation, group interaction, and sense of play. The results suggest that physical education with TPSR could further improve learning outcomes in a physical education program. On the other hand, the hybrid curriculum projects TPSR-Physical Education and TPSR-Traditional Education are good curriculum projects for moral character education that can be used in school physical education.

Keywords: approach competencies, physical education, teachers employment, graduate, physical education and sport sciences, SWOT analysis, character education, sport season, game performance, sport competence

Procedia PDF Downloads 46
2524 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms

Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson

Abstract:

This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total-field magnetometer arrays. Our research has focused on the development of a vertically integrated suite of platforms, all utilizing common data acquisition, data processing, and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings, the sensor arrays are deployed from either a hydrodynamic bottom-following wing towed from a surface vessel or a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system, providing immediate access to data and metadata for remote processing, analysis, and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be applied efficiently to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings, as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
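As a rough illustration of the dipole modeling that such classification relies on, the Python sketch below evaluates the point-dipole forward model and projects it onto the ambient field direction to obtain the total-field anomaly a scalar magnetometer would observe. The sensor geometry, dipole moment, and ambient field values are invented for the example and do not describe the systems above.

```python
# Sketch of the point-dipole forward model behind dipole-based anomaly
# classification in total-field magnetometry; all geometry and values are illustrative.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_b_field(r_obs, r_dip, m):
    """Magnetic flux density (T) of a point dipole with moment m (A*m^2)."""
    r = np.atleast_2d(r_obs) - np.asarray(r_dip, dtype=float)
    rn = np.linalg.norm(r, axis=1, keepdims=True)
    rhat = r / rn
    return (MU0 / (4 * np.pi)) * (3 * rhat * (rhat @ m)[:, None] - m) / rn**3

def total_field_anomaly(b_dipole, b_earth):
    """Scalar anomaly seen by a total-field magnetometer: projection of the
    dipole field onto the direction of the ambient Earth field."""
    f_hat = b_earth / np.linalg.norm(b_earth)
    return b_dipole @ f_hat

if __name__ == "__main__":
    # sensor profile 2 m above a buried dipole; ambient field ~50,000 nT, north-down
    xs = np.linspace(-10, 10, 201)
    sensors = np.column_stack([xs, np.zeros_like(xs), np.full_like(xs, 2.0)])
    b_earth = np.array([20e-6, 0.0, -45e-6])   # T
    moment = np.array([0.0, 0.0, 5.0])         # A*m^2, vertical moment
    anomaly_nT = total_field_anomaly(dipole_b_field(sensors, [0, 0, 0], moment), b_earth) * 1e9
    print(anomaly_nT.max(), anomaly_nT.min())
```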

Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection

Procedia PDF Downloads 464