Search results for: final yield
686 Public-Private Partnership for Community Empowerment and Sustainability: Exploring Save the Children’s 'School Me' Project in West Africa
Authors: Gae Hee Song
Abstract:
This paper aims to address the evolution of public-private partnerships for mainstreaming an evaluation approach in a community-based education project. It examines the distinctive features of Save the Children’s School Me project in terms of the empowerment evaluation principles introduced by David M. Fetterman, especially community ownership, capacity building, and organizational learning. School Me is a Save the Children Korea-funded project that has been implemented in Cote d’Ivoire and Sierra Leone since 2016. The objective of this project is to reduce gender-based disparities in school completion and learning outcomes by creating an empowering learning environment for girls and boys. Both quasi-experimental and experimental methods for impact evaluation have been used to explore changes in learning outcomes, gender attitudes, and learning environments. To locate School Me in the public-private partnership framework for community empowerment and sustainability, data have been collected from School Me progress and final reports, baseline and endline reports, fieldwork observations, and inter-rater reliability checks of baseline and endline data collected from a total of 75 schools in Cote d’Ivoire and Sierra Leone. The findings of this study show that the School Me project has a significant evaluation component, including qualitative exploratory research, participatory monitoring, and impact evaluation. It strongly encourages key actors, girls, boys, parents, teachers, community leaders, and local education authorities, to participate in the collection and interpretation of data. For example, 45 community volunteers collected baseline data in Cote d’Ivoire, while three local government officers and fourteen enumerators participated in the follow-up data collection in Sierra Leone. Not only does this public-private partnership improve local government and community members’ knowledge and skills in monitoring and evaluation, but the evaluative findings also help them find their own problems and solutions with a strong sense of community ownership. Such community empowerment enables Save the Children country offices and member offices to gain invaluable experiences and lessons learned. As a result, empowerment evaluation leads to community-oriented governance and the sustainability of the School Me project.
Keywords: community empowerment, Cote d’Ivoire, empowerment evaluation, public-private partnership, save the children, school me, Sierra Leone, sustainability
Procedia PDF Downloads 126
685 Prevalence and Factors Associated with Illicit Drug Use Among Undergraduate Students in the University of Lagos, Nigeria
Authors: Abonyi, Emmanuel Ebuka, Amina Jafaru O.
Abstract:
Background: Illicit substance use among students is a phenomenon that has been widely studied, but it remains of interest due to its high prevalence and potential consequences. It is a major mental health concern among university students, which may result in behavioral and academic problems, psychiatric disorders, and infectious diseases. Thus, this study was done to ascertain the prevalence and factors associated with the use of illicit drugs among these groups of people. Methods: A cross-sectional, descriptive survey was conducted among undergraduate students of the University of Lagos over a period of three (3) months (August to October 2021). A total of 938 undergraduate students were selected from seventeen faculties in the university. Pretested questionnaires were administered, completed, and returned. The data were analyzed using descriptive statistics and multivariate regression analysis. Results: From the data collected, it was observed that of the 938 undergraduate students of the University of Lagos who completed and returned the questionnaires, 56.3% were female and 43.7% were male. No gender differences were observed in the prevalence of use of any of the illicit substances. The results showed that the majority of the students who participated in the research were female (56.6%); there were a total of 541 second-year students (57.7%) and 397 final-year students (42.3%). Students in the 20-24 year age bracket had the highest frequency of illicit drug use (648, 69.1%), as did students in non-health-related disciplines. The results also showed that marijuana was the most commonly reported substance (31.7%), followed by lifetime use of LSD (6.3%), heroin (4.8%), cocaine (4.7%), ecstasy (4.5%), and ketamine (3.4%). In addition, the use of alcohol was below average (44.1%). Marijuana was the substance most commonly taken by students, and most of these respondents had experienced relationship problems with their family (50.9%). From the responses obtained, the major reasons students indulge in illicit drug use were: curiosity to experiment, relief of stress after rigorous academic activities, social media influence, and peer pressure. Most undergraduate students are in their most hyperactive stage of life, which makes them vulnerable to wanting to explore practically every adventure. Hence, individual factors and social media influence are identified as major contributors to the prevalence of illicit drug use among undergraduate students at the University of Lagos, Nigeria. Conclusion: Control programs are most needed among the students. They should be comprehensive and focused on students' psycho-education about substances and their related negative consequences, plus the promotion of students' life skills and integration into family- and peer-based preventive interventions.
Keywords: illicit drugs, addiction, undergraduate students, prevalence, substances
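As a rough illustration of the multivariate analysis described in this abstract, the sketch below fits a logistic regression of drug use on candidate factors. The file name, column names, and model specification are assumptions for illustration only, not the authors' actual data or model.

```python
# Illustrative multivariate logistic regression sketch (hypothetical columns).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey export; these column names are assumptions.
df = pd.read_csv("survey_responses.csv")
# expected columns: used_illicit_drug (0/1), age_group, sex, study_year, faculty_type

model = smf.logit(
    "used_illicit_drug ~ C(age_group) + C(sex) + C(study_year) + C(faculty_type)",
    data=df,
).fit()

print(model.summary())            # coefficients and p-values
print(np.exp(model.params))       # odds ratios for each factor
```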
Procedia PDF Downloads 105
684 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant
Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula
Abstract:
Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although these are reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least 1 AI-ECG screen for AF pre-transplant above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest. When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated screen pre-transplant (n=1084 on account of n=20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08, the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is low, and a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation of transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of the ECG-based screening.
Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning
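The screening metrics quoted above (AUROC, and negative predictive value and sensitivity at a fixed 0.08 cutoff) can be computed as in the minimal sketch below. The score and label arrays are placeholders, not the study's data.

```python
# Sketch: AUROC plus NPV/sensitivity at a fixed probability threshold.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 0, 1, 0])        # 1 = developed POAF (placeholder)
y_score = np.array([0.02, 0.05, 0.12, 0.07, 0.30,
                    0.01, 0.04, 0.09, 0.06, 0.03])        # AI-ECG probabilities (placeholder)

auroc = roc_auc_score(y_true, y_score)

threshold = 0.08
y_pred = (y_score >= threshold).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
npv = tn / (tn + fn)             # negative predictive value
sensitivity = tp / (tp + fn)

print(f"AUROC={auroc:.2f}, NPV={npv:.2f}, sensitivity={sensitivity:.2f}")
```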
Procedia PDF Downloads 137
683 Introducing Principles of Land Surveying by Assigning a Practical Project
Authors: Introducing Principles of Land Surveying by Assigning a Practical Project
Abstract:
A practical project is used in an engineering surveying course to expose sophomore and junior civil engineering students to several important issues related to the use of basic principles of land surveying. The project, which is the design of a two-lane rural highway connecting two arbitrary points, requires students to draw the profile of the proposed highway along with the existing ground level. Areas of all cross-sections are then computed to enable quantity computations between them. Lastly, the mass-haul diagram is drawn with all important parts and features shown on it for clarity. At the beginning, students faced challenges getting started on the project. They had to spend time and effort thinking of the best way to proceed and how the work would flow. It was even more challenging when they had to visualize images of cut, fill, and mixed cross-sections in three dimensions before they could draw them to complete the necessary computations. These difficulties were then somewhat overcome with the help of the instructor and thorough discussions among team members and/or between different teams. The method of assessment used in this study was a well-prepared end-of-semester questionnaire distributed to students after the completion of the project and the final exam. The survey contained a wide spectrum of questions, ranging from students' learning experience when this course development was implemented to students' satisfaction with the class instructions provided to them and the instructor's competency in presenting the material and helping with the project. It also covered the adequacy of the project as a sample of a real-life civil engineering application and whether any excitement was added by implementing this idea. At the end of the questionnaire, students had the chance to provide their constructive comments and suggestions for future improvements of the land surveying course. Outcomes are presented graphically and in tabular format. Graphs provide a visual explanation of the results, while tables summarize numerical values along with descriptive statistics, such as the mean, standard deviation, and coefficient of variation, for each student and each question. In addition to gaining experience in teamwork, communications, and customer relations, students felt the benefit of being assigned such a project. They noticed the beauty of the practical side of civil engineering work and how theories are utilized in real-life engineering applications. It was even recommended by students that such a project be assigned every time this course is offered so future students can have the same learning opportunity they had.
Keywords: land surveying, highway project, assessment, evaluation, descriptive statistics
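The quantity computation between successive cross-sections and the build-up of the mass-haul ordinates described above can be sketched as follows. The station spacing and cut/fill areas are made-up values, and the average end-area rule is used as one common convention (not necessarily the one prescribed in the course).

```python
# Sketch: earthwork volumes between cross-sections (average end-area method)
# and cumulative volumes for a mass-haul diagram. Values are hypothetical.
def average_end_area_volume(area1, area2, distance):
    """Volume between two cross-sections, same units as the inputs."""
    return (area1 + area2) / 2.0 * distance

areas = [12.4, 8.1, -3.5, -9.0, -2.2, 6.7]   # cut (+) / fill (-) areas in m^2
spacing = 20.0                                # station interval in m

volumes = [average_end_area_volume(a1, a2, spacing)
           for a1, a2 in zip(areas, areas[1:])]

# Running total of volumes gives the ordinates of the mass-haul diagram.
mass_haul = [0.0]
for v in volumes:
    mass_haul.append(mass_haul[-1] + v)

print(volumes)
print(mass_haul)
```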
Procedia PDF Downloads 231
682 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS
Authors: Eunsu Jang, Kang Park
Abstract:
In developing an armored ground combat vehicle (AGCV), it is a very important step to analyze the vulnerability (or the survivability) of the AGCV against an enemy's attack. In the vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which can cause damage to internal components or harm to the crew. The penetration equations are derived from penetration experiments, which require a long time and great effort. However, they usually hold only for the specific material of the target and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option for calculating penetration depth. However, it is very important to model the targets and select the input parameters in order to obtain an accurate penetration depth. This paper performed a sensitivity analysis of the ANSYS input parameters on the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters for ANSYS was performed, and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary conditions, material properties, and target diameter, are tested and selected to minimize the error between the simulation results and the experimental data taken from papers on the penetration equations. To minimize the calculation time, the parameter values obtained from the accuracy analysis are adjusted to obtain optimized overall performance. As a result of the analysis, the following was found: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 2) As the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives more penetration depth than that with both the side and rear surfaces fixed. Using the above findings, the input parameters can be tuned to minimize the error between simulations and experiments. By using the simulation tool ANSYS with carefully tuned input parameters, penetration analysis can be done on a computer without actual experiments. Penetration experiment data are usually hard to obtain for security reasons, and published papers provide them only for a limited range of target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating the known penetration experiments. This result may not be accurate enough to replace the penetration experiments, but such simulations can be used in the early, modelling-and-simulation stage of the AGCV design process.
Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis
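The accuracy measure used above, the RMS error between simulated and experimental penetration depths for a given parameter set, can be computed as in this minimal sketch. The depth values are placeholders, not data from the paper or from ANSYS.

```python
# Sketch: RMS error between simulated and experimental penetration depths.
import numpy as np

depth_experiment = np.array([18.2, 25.7, 33.4, 41.0])   # mm, hypothetical test data
depth_simulation = np.array([17.5, 26.9, 32.1, 43.2])   # mm, hypothetical ANSYS results

rms_error = np.sqrt(np.mean((depth_simulation - depth_experiment) ** 2))
print(f"RMS error = {rms_error:.2f} mm")
```

Repeating this calculation for each candidate mesh size, boundary condition, or target diameter gives the sensitivity of the error to that parameter.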
Procedia PDF Downloads 402
681 Virtual Reality and Other Real-Time Visualization Technologies for Architecture Energy Certifications
Authors: Román Rodríguez Echegoyen, Fernando Carlos López Hernández, José Manuel López Ujaque
Abstract:
Interactive management of energy certification ratings has remained on the sidelines of the evolution of virtual reality (VR) despite related advances in architecture in other areas such as BIM and real-time working programs. This research studies to what extent VR software can help the stakeholders to better understand energy efficiency parameters in order to obtain reliable ratings assigned to the parts of the building. To evaluate this hypothesis, the methodology has included the construction of a software prototype. Current energy certification systems do not follow an intuitive data entry system; neither do they provide a simple or visual verification of the technical values included in the certification by manufacturers or other users. This software, by means of real-time visualization and a graphical user interface, proposes different improvements to the current energy certification systems that ease the understanding of how the certification parameters work in a building. Furthermore, the difficulty of using current interfaces, which are not friendly or intuitive for the user, means that untrained users usually get a poor idea of the grounds for certification and how the program works. In addition, the proposed software allows users to add further information, such as financial and CO₂ savings, energy efficiency, and an explanatory analysis of results for the least efficient areas of the building through a new visual mode. The software also helps the user to evaluate whether or not an investment to improve the materials of an installation is worth the cost of the different energy certification parameters. The evaluated prototype (named VEE-IS) shows promising results when it comes to representing in a more intuitive and simple manner the energy rating of the different elements of the building. Users can also personalize all the inputs necessary to create a correct certification, such as floor materials, walls, installations, or other important parameters. Working in real-time through VR allows for efficiently comparing, analyzing, and improving the rated elements, as well as the parameters that we must enter to calculate the final certification. The prototype also allows for visualizing the building in efficiency mode, which lets us move over the building to analyze thermal bridges or other energy efficiency data. This research also finds that the visual representation of energy efficiency certifications makes it easy for the stakeholders to examine improvements progressively, which adds value to the different phases of design and sale.Keywords: energetic certification, virtual reality, augmented reality, sustainability
Procedia PDF Downloads 188
680 Congenital Diaphragmatic Hernia Outcomes in a Low-Volume Center
Authors: Michael Vieth, Aric Schadler, Hubert Ballard, J. A. Bauer, Pratibha Thakkar
Abstract:
Introduction: Congenital diaphragmatic hernia (CDH) is a condition characterized by the herniation of abdominal contents into the thoracic cavity requiring postnatal surgical repair. Previous literature suggests improved CDH outcomes at high-volume regional referral centers compared to low-volume centers. The purpose of this study was to examine CDH outcomes at Kentucky Children’s Hospital (KCH), a low-volume center, compared to the Congenital Diaphragmatic Hernia Study Group (CDHSG). Methods: A retrospective chart review was performed at KCH from 2007-2019 for neonates with CDH, who were then subdivided into two cohorts: those requiring ECMO therapy and those not requiring ECMO therapy. Basic demographic data and measures of mortality and morbidity, including ventilator days and length of stay, were compared to the CDHSG. Measures of morbidity for the ECMO cohort, including duration of ECMO, clinical bleeding, intracranial hemorrhage, sepsis, need for continuous renal replacement therapy (CRRT), need for sildenafil at discharge, timing of surgical repair, and total ventilator days, were collected. Statistical analysis was performed using IBM SPSS Statistics version 28. One-sample t-tests and the one-sample Wilcoxon signed-rank test were utilized as appropriate. Results: There were a total of 27 neonatal patients with CDH at KCH from 2007-2019; 9 of the 27 required ECMO therapy. Birth weight and gestational age were similar between KCH and the CDHSG (2.99 kg vs 2.92 kg, p = 0.655; 37.0 weeks vs 37.4 weeks, p = 0.51). About half of the patients were inborn in both cohorts (52% vs 56%, p = 0.676). The KCH cohort had significantly more Caucasian patients (96% vs 55%, p < 0.001). Unadjusted mortality was similar in both groups (KCH 70% vs CDHSG 72%, p = 0.857). Using ECMO utilization (KCH 78% vs CDHSG 52%, p = 0.118) and need for surgical repair (KCH 95% vs CDHSG 85%, p = 0.060) as proxies for severity, the two groups' mortality was comparable. No significant difference was noted for pulmonary outcomes such as average ventilator days (KCH 43.2 vs. CDHSG 17.3, p = 0.078) and home oxygen dependency (KCH 44% vs. CDHSG 24%, p = 0.108). The average length of hospital stay for patients treated at KCH was similar to the CDHSG (64.4 vs 49.2, p = 1.000). Conclusion: Our study demonstrates that outcome in CDH patients is independent of a center's case volume. Management of CDH with a standardized approach in a low-volume center can yield similar outcomes. These data support the treatment of patients with CDH at low-volume centers as opposed to transferring them to higher-volume centers.
Keywords: ECMO, case volume, congenital diaphragmatic hernia, congenital diaphragmatic hernia study group, neonate
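The one-sample comparisons described above (a single-center sample compared against a CDHSG reference value) can be sketched as follows. The per-patient values are invented for illustration; only the reference mean of 17.3 ventilator days comes from the abstract.

```python
# Sketch: one-sample t-test and Wilcoxon signed-rank test against a reference mean.
from scipy import stats

kch_ventilator_days = [12, 25, 40, 51, 60, 33, 47, 58, 29, 77]   # hypothetical values
cdhsg_reference_mean = 17.3                                       # reference from the abstract

t_stat, p_t = stats.ttest_1samp(kch_ventilator_days, popmean=cdhsg_reference_mean)
# One-sample Wilcoxon: signed-rank test on the differences from the reference value.
w_stat, p_w = stats.wilcoxon([d - cdhsg_reference_mean for d in kch_ventilator_days])

print(f"one-sample t-test p = {p_t:.3f}, Wilcoxon signed-rank p = {p_w:.3f}")
```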
Procedia PDF Downloads 96
679 Achieving Product Robustness through Variation Simulation: An Industrial Case Study
Authors: Narendra Akhadkar, Philippe Delcambre
Abstract:
In power protection and control products, assembly process variations due to the individual parts manufactured from single or multi-cavity tooling are a major problem. The dimensional and geometrical variations on the individual parts, in the form of manufacturing tolerances and assembly tolerances, are sources of clearance in the kinematic joints, polarization effects in the joints, and tolerance stack-up. All these variations adversely affect product quality, functionality, cost, and time-to-market. Variation simulation analysis may be used in the early product design stage to predict such uncertainties. Usually, variations exist in both manufacturing processes and materials. In the tolerance analysis, the effect of the dimensional and geometrical variations of the individual parts on the functional characteristics (conditions) of the final assembled products is studied. A functional characteristic of the product may be affected by a set of interrelated dimensions (functional parameters) that usually form a geometrical closure in a 3D chain. In power protection and control products, the prerequisite is: when a fault occurs in the electrical network, the product must respond quickly to react and break the circuit to clear the fault. Usually, the response time is in milliseconds. Any failure in clearing the fault may result in severe damage to the equipment or network, and human safety is at stake. In this article, we have investigated two important functional characteristics that are associated with the robust performance of the product. It is demonstrated that the experimental data obtained at the Schneider Electric Laboratory prove the very good prediction capabilities of the variation simulation performed using CETOL (tolerance analysis software) in an industrial context. In particular, this study allows design engineers to better understand the critical parts in the product that need to be manufactured with good, capable tolerances. By contrast, some parts are not critical for the functional characteristics (conditions) of the product and may allow some reduction of the manufacturing cost while still ensuring robust performance. Capable tolerancing is one of the most important aspects in product and manufacturing process design. In the case of the miniature circuit breaker (MCB), the product's quality and its robustness are mainly impacted by two aspects: (1) allocation of design tolerances between the components of a mechanical assembly and (2) manufacturing tolerances in the intermediate machining steps of component fabrication.
Keywords: geometrical variation, product robustness, tolerance analysis, variation simulation
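To illustrate the kind of tolerance stack-up a variation simulation evaluates, the sketch below runs a Monte Carlo stack of a simple one-dimensional chain. The dimensions, tolerances, distribution assumption (normal, with the tolerance taken as three sigma), and the stack function are all invented; a commercial tool such as CETOL would of course handle full 3D chains.

```python
# Sketch: Monte Carlo tolerance stack-up of a hypothetical 1D dimension chain.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Nominal dimension +/- tolerance, assumed normal with 3*sigma = tolerance (mm).
part_a = rng.normal(10.00, 0.05 / 3, n)
part_b = rng.normal(4.50, 0.03 / 3, n)
part_c = rng.normal(5.40, 0.04 / 3, n)

gap = part_a - (part_b + part_c)           # functional clearance of the stack

print(f"mean gap = {gap.mean():.3f} mm, sigma = {gap.std():.3f} mm")
print(f"fraction below 0.05 mm clearance: {(gap < 0.05).mean():.4%}")
```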
Procedia PDF Downloads 164
678 Integration Process and Analytic Interface of different Environmental Open Data Sets with Java/Oracle and R
Authors: Pavel H. Llamocca, Victoria Lopez
Abstract:
The main objective of our work is the comparative analysis of environmental data from Open Data bases belonging to different governments. This means that data from various different sources have to be integrated. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on Open Data is therefore increasing. However, each government has its own procedures for publishing its data, which results in a variety of data set formats, because there are no international standards specifying the formats of the data sets in Open Data bases. Due to this variety of formats, we must build a data integration process that is able to put together all kinds of formats. Some software tools have been developed to support the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they need a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve an automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its Open Data bases on environmental indicators in real time. In the same way, other governments have published Open Data sets relative to the environment (such as Andalucia or Bilbao). All of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to perform and visualize some analyses over the real-time data. Once the integration task is done, all the data from any government have the same format, and the analysis process can be initiated in a computationally better way. So the tool presented in this work has two goals: 1. the integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open our software tool, as a second approach, we also developed an implementation in the R language as a mature open source technology. R is a really powerful open source programming language that allows us to process and analyze a huge amount of data with high performance. There are also some R libraries for building a graphic interface, such as Shiny. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an Official Real-Time Integrated Data Set about Environment Data in Spain to any developer so that they can build their own applications.
Keywords: open data, R language, data integration, environmental data
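The automated integration step described above amounts to one adapter per data source that maps each government's format onto a common schema. The sketch below shows that idea in Python/pandas; the file names, column names, and units are assumptions for illustration and are not taken from the authors' Java/Oracle or R implementations.

```python
# Sketch: per-source adapters mapping heterogeneous open data onto one schema.
import pandas as pd

COMMON_COLUMNS = ["timestamp", "city", "indicator", "value"]

def load_madrid(path):
    # Hypothetical semicolon-separated export with Spanish decimal commas.
    df = pd.read_csv(path, sep=";", decimal=",")
    return pd.DataFrame({
        "timestamp": pd.to_datetime(df["FECHA_HORA"], dayfirst=True),
        "city": "Madrid",
        "indicator": df["MAGNITUD"],
        "value": df["VALOR"],
    })

def load_bilbao(path):
    # Hypothetical JSON export with different field names.
    df = pd.read_json(path)
    return pd.DataFrame({
        "timestamp": pd.to_datetime(df["date"]),
        "city": "Bilbao",
        "indicator": df["parameter"],
        "value": df["measurement"],
    })

# One adapter per source; the concatenated frame feeds the analytic interface.
integrated = pd.concat(
    [load_madrid("madrid_air_quality.csv"), load_bilbao("bilbao_air_quality.json")],
    ignore_index=True,
)[COMMON_COLUMNS]
print(integrated.head())
```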
Procedia PDF Downloads 315
677 An Analysis of Gamification in the Post-Secondary Classroom
Authors: F. Saccucci
Abstract:
Gamification has now started to take root in the post-secondary classroom. Educators have learned much about gamification to date, but there is still a great deal to learn. One definition of gamification is the ability to engage post-secondary students with games that are fun and correlate to the classroom curriculum. There is no shortage of literature illustrating the advantages of gamification in the classroom. This study is an extension of similar thought, as well as of a previous study in which in-class testing, using paired t-tests, showed that gamification significantly improved the students' understanding of the subject material. Gamification in the classroom can range from high-end computer-simulated software to paper-based games, both of which have advantages and disadvantages. This analysis used a paper-based game to highlight certain qualitative advantages of gamification. The paper-based game in this analysis was inexpensive, required low preparation time for the faculty member, and consumed approximately 20 minutes of classroom time. Data for the study were collected through in-class student feedback surveys and narrative from the faculty member moderating the game. Students were randomly assigned to groups of four. Qualitative advantages identified in this analysis included: 1. Students had a chance to meet, connect with, and get to know other students. 2. Students enjoyed the gamification process, given that there was a sense of fun and competition. 3. The post-assessment that followed the simulation game was not part of their grade calculation; therefore, it was an opportunity to participate in a low-risk activity whereby students could subsequently self-assess their understanding of the subject material. 4. In the view of the students, content knowledge did increase after the gamification process. These qualitative advantages identified in this analysis contribute to the argument that there should be an attempt to use gamification in today’s post-secondary classroom. The analysis also highlighted that eighty (80) percent of the respondents believed twenty minutes devoted to the gamification process was appropriate; however, twenty (20) percent of respondents believed that, rather than scheduling the gamification process and its post-quiz in the last week, a review for the final exam may have been more useful. A follow-up study hopes to determine whether the scheduling of the gamification had any correlation with the percentage of students not wanting to be engaged in the process. The follow-up study also hopes to determine at what incremental level of time invested in classroom gamification no material incremental benefit to the student is produced, and whether any correlation exists between respondents preferring not to have it at the end of the semester and students not believing the gamification process added to their curricular knowledge.
Keywords: gamification, inexpensive, non-quantitative advantages, post-secondary
Procedia PDF Downloads 212
676 Lumbar Punctures: Re-Audit of Procedure Documentation Following the Introduction of a Standardised Procedure Checklist
Authors: Hayley Lawrence, Nabi Shah, Sarah Dyer
Abstract:
Aims: Lumbar punctures are a common bedside procedure performed in acute medicine. Published guidance exists on the standardised documentation of invasive procedures in order to reduce the risk of complications. The audit aim was to assess current standards of documentation in accordance with both the GMC and the National Standards for Invasive Procedures guidelines. A second cycle was conducted after introducing a standardised sticker created using current guidelines. This would assess whether the sticker improved documentation, aiming for 100% standard in each step of the procedure. Methods: An initial prospective audit of current practice was conducted over a 3-month period. Patients were identified by their presenting complaints and by colleagues assessing acute medical patients. Initial findings were presented locally, and a further prospective audit was conducted following the implementation of a standardised sticker. Results: 19 lumbar punctures were included in the first cycle and 13 procedures in the second. Pre-procedure documentation was collected for each cycle, whereby documentation of ‘Indication’ improved from 5.3% to 84.6%, ‘Consent’ from 84.2% to 100%, ‘Coagulopathy’ from 0% to 61.5%, ‘Drug Chart checked’ from 0% to 100%, ‘Position of patient’ from 26.3% to 100% and use of ‘Aseptic Technique’ from 83.3% to 100% from the first to the second cycle respectively. ‘Level of Doctor’ and ‘Supervision’ decreased from 53% to 31% and 53% to 46%, respectively, in the second cycle. Documentation of the procedure itself also demonstrated improvements, with ‘Level of Insertion’ 15.8% to 100%, ‘Name of Antiseptic Used’ 11.1% to 69.2%, ‘Local Anaesthetic Used’ 26.3% to 53.8%, ‘Needle Gauge’ 42.1% to 76.9%, ‘Number of Attempts’ 78.9% to 100% and ‘Traumatic/Atraumatic’ procedure 26.3% to 92.3%, respectively. A similar number of opening pressures were documented in each cycle at 57.9% and 53.8%, respectively, but its documentation was deemed ‘Not Applicable’ in a higher number of patients in the second cycle. Post-procedure documentation improved, with ‘Number of Samples obtained’ increasing from 52.6% to 92.3% and documentation of ‘Immediate Complications’ increasing from 78.9% to 100%. ‘Dressing Applied’ was poorly documented in the first cycle at 16.7%. This was not included on the standardised sticker, resulting in 0% documentation in the second cycle. Documentation of Clinicians’ Name and Bleep reduced from 63.2% to 15.4%, but when the name only was analysed, this increased to 84.6%. Conclusions: Standardised stickers for lumbar punctures do improve documentation and hence should result in improved patient safety. There is still room for improvement to reach 100% standard in each area, especially with respect to the clinician’s name and contact details being documented. Final adjustments will be made to the sticker before being included in a lumbar puncture kit, which will be made readily available in the acute medical wards. Future audits could be extended to include other common bedside procedures performed in acute medicine to ensure documentation of all these procedures reaches 100% standard.Keywords: invasive procedure, lumbar puncture, medical record keeping, procedure checklist, procedure documentation, standardised documentation
Procedia PDF Downloads 109
675 Basal Cell Carcinoma Excision Intraoperative Frozen Section for Tumor Clearance and Reconstructive Surgery: A Prospective Open Label Interventional Study
Authors: Moizza Tahir, Uzma Bashir, Aisha Akhtar, Zainab Ansari, Sameen Ansari, Muhammad Ali Tahir
Abstract:
The global cancer burden has increased. Among cutaneous cancers, basal cell carcinoma (BCC) constitutes the vast majority of skin cancers. There is a need for appropriate evaluation of the diagnostic, therapeutic, and prognostic significance of skin cancers. The present study reports intraoperative frozen section (FS) histopathological clearance for excision of BCC in a tertiary care center and finds the frequency of involvement of the surgical margin with reference to anatomical site, size, and surgical technique. It was a prospective open-label interventional study conducted at the Dermatology department of a tertiary care hospital in Rawalpindi, Pakistan, in liaison with the histopathology department, from January 2023 to April 2024. A total of thirty-six (n = 36) patients between 45 and 80 years of age with basal cell carcinoma of 10-20 mm on the face were included following inclusion/exclusion criteria by a purposive sampling technique. Informed consent was taken. Surgical excision was performed, and intraoperative frozen section histopathology clearance of the tumor margin was obtained from the histopathologist by telephone. Surgical reconstruction was then done. The final histopathology report was re-examined on day 10 for margin and depth clearance. Descriptive statistics were calculated for age, gender, sun exposure, reconstructive technique, anatomical site, and the tumor-free margin report on frozen section analysis. The chi-square test was employed to assess the statistical significance of surgical margin involvement with reference to anatomical site, size, and the decision on reconstructive surgical technique; a p-value of <0.05 was considered significant. A total of 36 patients with BCC were enrolled: 12 males (33.3%) and 24 females (66.6%). Age ranged from 45 to 80 years (mean 58.36 ± SD 7.8). The size of the BCC ranged from 10 mm to 35 mm (mean 25 mm ± SD 0.63). Morphology was nodular in 18 (50%), superficial spreading in 11 (30.6%), morphoeic in 1 (2.8%), and ulcerative in 6 (16.7%) cases. Intraoperative frozen section histopathological margin clearance with a 2-3 mm safety margin had a p-value of 0.51 with respect to surgical technique, 0.24 for anatomical site, and 0.84 for size. Thus, intraoperative frozen section (FS) histopathological clearance for facial BCC with a 2-3 mm safety margin showed no significant association with reconstructive technique, anatomical site, or size of the BCC.
Keywords: basal cell carcinoma, tumor free margin, basal cell carcinoma and frozen section, safety margin
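The chi-square comparison described above (margin involvement against a categorical factor such as anatomical site) can be run as in the minimal sketch below. The contingency counts are hypothetical and are not the study's data.

```python
# Sketch: chi-square test of margin involvement vs. anatomical site (hypothetical counts).
from scipy.stats import chi2_contingency

#                      margin clear   margin involved
table = [[10, 2],    # nose
         [ 9, 1],    # cheek
         [11, 3]]    # periorbital

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2f}")   # p < 0.05 considered significant
```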
Procedia PDF Downloads 57
674 The Impact of Mycotoxins on the Anaerobic Digestion Process
Authors: Harald Lindorfer, Bettina Frauz, Dietmar Ramhold
Abstract:
Next to the well-known inhibitors in anaerobic digestion like ammonia, antibiotics or disinfectants, the number of process failures connected with mould growth in the feedstock increased significantly in the last years. It was assumed that mycotoxins are the cause of the negative effects. The financial damage to plants associated with these process failures is considerable. The aim of this study was to find a way of predicting the failures and furthermore strategies for a fast process recovery. In a first step, mould-contaminated feedstocks causing process failures in full-scale digesters were sampled and analysed on mycotoxin content. A selection of these samples was applied to biological inhibition tests. In this test, crystalline cellulose is applied in addition to the feedstock sample as standard substrate. Affected digesters were also sampled and analytical process data as well as operational data of the plants were recorded. Additionally, different mycotoxin substances, Deoxynivalenol, Zearalenon, Aflatoxin B1, Mycophenolic acid and Citrinin, were applied as pure substances to lab-scale digesters, individually and in various combinations, and effects were monitored. As expected, various mycotoxins were detected in all of the mould-contaminated samples. Nevertheless, inhibition effects were observed with only one of the collected samples, after applying it to an inhibition test. With this sample, the biogas yield of the standard substrate was reduced by approx. 20%. This result corresponds with observations made on full-scale plants. However, none of the tested mycotoxins applied as pure substance caused a negative effect on biogas production in lab scale digesters, neither after application as individual substance nor in combination. The recording of the process data in full-scale plants affected by process failures in most cases showed a severe accumulation of fatty acids alongside a decrease in biogas production and methane concentration. In the analytical data of the digester samples, a typical distribution of fatty acids with exceptionally high acetic acid concentrations could be identified. This typical fatty acid pattern can be used as a rapid identification parameter pointing to the cause of the process troubles and enable a fast implication of countermeasures. The results of the study show that more attention needs to be paid to feedstock storage and feedstock conservation before their application to anaerobic digesters. This is all the more important since first studies indicate that the occurrence of mycotoxins will likely increase in Europe due to the ongoing climate change.Keywords: Anaerobic digestion, Biogas, Feedstock conservation, Fungal mycotoxins, Inhibition, process failure
Procedia PDF Downloads 130
673 Dynamic Wetting and Solidification
Authors: Yulii D. Shikhmurzaev
Abstract:
The modelling of non-isothermal free-surface flows coupled with the solidification process has become the topic of intensive research with the advent of additive manufacturing, where complex 3-dimensional structures are produced by successive deposition and solidification of microscopic droplets of different materials. The issue is that both the spreading of liquids over solids and the propagation of the solidification front into the fluid and along the solid substrate pose fundamental difficulties for their mathematical modelling. The first of these processes, known as ‘dynamic wetting’, leads to the well-known ‘moving contact-line problem’ where, as shown recently both experimentally and theoretically, the contact angle formed by the free surface with the solid substrate is not a function of the contact-line speed but is rather a functional of the flow field. The modelling of the propagating solidification front requires generalization of the classical Stefan problem, which would be able to describe the onset of the process and the non-equilibrium regime of solidification. Furthermore, given that both dynamic wetting and solidification occur concurrently and interactively, they should be described within the same conceptual framework. The present work addresses this formidable problem and presents a mathematical model capable of describing the key element of additive manufacturing in a self-consistent and singularity-free way. The model is illustrated with simple examples highlighting its main features. The main idea of the work is that both dynamic wetting and solidification, as well as some other fluid flows, are particular cases in a general class of flows where interfaces form and/or disappear. This conceptual framework allows one to derive a mathematical model from first principles using the methods of irreversible thermodynamics. Crucially, the interfaces are not considered as zero-mass entities introduced using the Gibbsian ‘dividing surface’ but rather as 2-dimensional surface phases produced by the continuum limit in which the thickness of what is physically an interfacial layer vanishes, and whose properties are characterized by ‘surface’ parameters (surface tension, surface density, etc.). This approach allows for mass exchange between the surface and bulk phases, which is the essence of interface formation. As shown numerically, the onset of solidification is preceded by a pure interface-formation stage, whilst the Stefan regime is the final stage, where the temperature at the solidification front asymptotically approaches the solidification temperature. The developed model can also be applied to flows with substrate melting as well as to complex flows where both types of phase transition take place.
Keywords: dynamic wetting, interface formation, phase transition, solidification
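For context, the classical Stefan problem that the abstract says the model generalizes (and asymptotically recovers in its final stage) imposes the following conditions at the solidification front; the notation below is standard textbook notation and is not taken from the paper.

```latex
% Classical Stefan condition at the solidification front (standard form):
% the latent heat released by the advancing front balances the jump in
% conductive heat flux across it, with the front held at the melting temperature.
\[
  \rho\, L\, v_n \;=\; k_s \frac{\partial T_s}{\partial n}
                   \;-\; k_l \frac{\partial T_l}{\partial n},
  \qquad T_s = T_l = T_m \ \ \text{on the front},
\]
% \rho: density, L: latent heat of fusion, v_n: normal velocity of the front,
% k_s, k_l: solid and liquid thermal conductivities, T_m: equilibrium melting temperature.
```

The non-equilibrium regime discussed in the abstract corresponds to relaxing the equilibrium constraint $T = T_m$ at the front, which the classical formulation cannot do.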
Procedia PDF Downloads 66
672 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources
Authors: Mustafa Alhamdi
Abstract:
An industrial application for classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional and recursive neural networks has shown a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on feature extraction methods, followed by classification. The features extracted from the spectrum profiles try to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the potential to enhance classification accuracy. The nonlinear feature extraction performed by neural networks involves a variety of transformations and mathematical optimizations, while principal component analysis depends on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation used to extract the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, by combining the votes of many models, improved the classification accuracy of the neural networks. The ability to discriminate gamma and neutron events in a single prediction approach has shown high accuracy using deep learning. The paper's findings show the ability to improve the classification accuracy by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
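The windowed time-frequency pre-processing described above can be sketched as a short-time Fourier transform of the detector signal. In the sketch below, the sampling rate, window choice (Hann, as one example of "a suitable windowing function"), and segment lengths are assumptions, and the synthetic noise waveform stands in for the simulated CdTe readout data.

```python
# Sketch: spectrogram (windowed STFT) features from a detector waveform.
import numpy as np
from scipy import signal

fs = 1_000_000                                         # sampling rate in Hz (assumed)
x = np.random.default_rng(0).normal(0.0, 1.0, fs)      # placeholder for simulated readout

f, t, Sxx = signal.spectrogram(x, fs=fs, window="hann",
                               nperseg=1024, noverlap=512)

features = np.log1p(Sxx)        # log-scaled time-frequency features for the classifier
print(features.shape)           # (frequency bins, time frames)
```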
Procedia PDF Downloads 151
671 Care as a Situated Universal: Defining Care as a Practical Phenomenology Study
Authors: Amanda Aliende da Matta
Abstract:
This communication presents an aspect of phenomenon selection in an applied hermeneutic phenomenology study on care and vulnerability: the need to consider it as a situated universal. For that, we will first present the study and its methodology. Secondly, we will expose the need to understand phenomena as situation-defined, incorporating feminist thought. In an informatics class for 14 year olds, we explained the exercise: students have to make a 5 slide presentation about a topic of their choice. A does it on streetwear, B on Cristiano Ronaldo, C on Marvel, but J did it on Down Syndrome. Introducing it to the class, J explains the physical and cognitive differences caused by trisomy; when asked to explain it further, he says: "they are angels, teacher," and shows us a poster on his cellphone that says: if you laugh at a different child he will laugh with you because his innocence outweighs your ignorance. The anecdote shows, better than any theoretical explanation, something that some vulnerable people have; something beautiful and special but difficult to define. Let's call this something caring. The research has the main objective of accounting for the experience of caregiving in vulnerability, and it will be carried out with Applied Hermeneutic Phenomenology (AHP). The method's objective is to investigate the lived human experience in its pre-reflexive dimension to know its meaning structures. Contrary to other research methods, AHP does not produce theory about a specific context but seeks the meaning of the lived experience, in its characteristic of human experience. However, it is necessary that we understand care as defined in a concrete situation. We cannot start the research with an a priori definitive concept of care, or we would fall into the mistake of closing ourselves to only what we already know, as explained by Levinas. We incorporate, then, the notion of situated universals. Loyal to phenomenology, the definition of the phenomenon should start with an investigation of the word's etymology: the word cura, in its etymological root, means care. And care comes from the Latin word cogitātus/cōgĭto, which means "to pursue something in mind" and "to consider thoroughly." The verb cōgĭto, meanwhile, is composed of co- (altogether) and agitare (to deal with or think committedly about something, to concern oneself with) / ăgĭto (to set in motion, to move). Care, therefore, has in its origin a meditation on something, a concern about something, a verb that has a sense of action and movement. To care is to act out of concern for something/someone. This etymology, though, is not the final definition of the phenomenon, but only its skeleton. It needs to be embodied in the concrete situation to become a possible lived experience. And that means that the lived experience descriptions (LEDs) should be selected by taking into consideration how and if care was engendered in that concrete experience. Defining the phenomenon has to take into consideration situated knowledge.Keywords: applied hermeneutic phenomenology, care ethics, hermeneutics, phenomenology, situated universalism
Procedia PDF Downloads 89
670 The Effects of Lighting Environments on the Perception and Psychology of Consumers of Different Genders in a 3C Retail Store
Authors: Yu-Fong Lin
Abstract:
The main purpose of this study is to explore the impact of different lighting arrangements that create different visual environments in a 3C retail store on the perception, psychology, and shopping tendencies of consumers of different genders. In recent years, the ‘emotional shopping’ model has been widely accepted in the consumer market; in addition to the emotional meaning and value of a product, the in-store ‘shopping atmosphere’ has also been increasingly regarded as significant. The lighting serves as an important environmental stimulus that influences the atmosphere of a store. Altering the lighting can change the color, the shape, and the atmosphere of a space. A successful retail lighting design can not only attract consumers’ attention and generate their interest in various goods, but it can also affect consumers’ shopping approach, behavior, and desires. 3C electronic products have become mainstream in the current consumer market. Consumers of different genders may demonstrate different behaviors and preferences within a 3C store environment. This study tests the impact of a combination of lighting contrasts and color temperatures in a 3C retail store on the visual perception and psychological reactions of consumers of different genders. The research design employs an experimental method to collect data from subjects and then uses statistical analysis adhering to a 2 x 2 x 2 factorial design to identify the influences of different lighting environments. This study utilizes virtual reality technology as the primary method by which to create four virtual store lighting environments. The four lighting conditions are as follows: high contrast/cool tone, high contrast/warm tone, low contrast/cool tone, and low contrast/warm tone. Differences in the virtual lighting and the environment are used to test subjects’ visual perceptions, emotional reactions, store satisfaction, approach-avoidance intentions, and spatial atmosphere preferences. The findings of our preliminary test indicate that female subjects have a higher pleasure response than male subjects in a 3C retail store. Based on the findings of our preliminary test, the researchers modified the contents of the questionnaires and the virtual 3C retail environment with different lighting conditions in order to conduct the final experiment. The results will provide information about the effects of retail lighting on the environmental psychology and the psychological reactions of consumers of different genders in a 3C retail store lighting environment. These results will enable useful practical guidelines about creating 3C retail store lighting and atmosphere for retailers and interior designers to be established.Keywords: 3C retail store, environmental stimuli, lighting, virtual reality
Procedia PDF Downloads 391
669 Discrete Element Simulations of Composite Ceramic Powders
Authors: Julia Cristina Bonaldo, Christophe L. Martin, Severine Romero Baivier, Stephane Mazerat
Abstract:
Alumina refractories are commonly used in steel and foundry industries. These refractories are prepared through a powder metallurgy route. They are a mixture of hard alumina particles and graphite platelets embedded into a soft carbonic matrix (binder). The powder can be cold pressed isostatically or uniaxially, depending on the application. The compact is then fired to obtain the final product. The quality of the product is governed by the microstructure of the composite and by the process parameters. The compaction behavior and the mechanical properties of the fired product depend greatly on the amount of each phase, on their morphology and on the initial microstructure. In order to better understand the link between these parameters and the macroscopic behavior, we use the Discrete Element Method (DEM) to simulate the compaction process and the fracture behavior of the fired composite. These simulations are coupled with well-designed experiments. Four mixes with various amounts of Al₂O₃ and binder were tested both experimentally and numerically. In DEM, each particle is modelled and the interactions between particles are taken into account through appropriate contact or bonding laws. Here, we model a bimodal mixture of large Al₂O₃ and small Al₂O₃ covered with a soft binder. This composite is itself mixed with graphite platelets. X-ray tomography images are used to analyze the morphologies of the different components. Large Al₂O₃ particles and graphite platelets are modelled in DEM as sets of particles bonded together. The binder is modelled as a soft shell that covers both large and small Al₂O₃ particles. When two particles with binder indent each other, they first interact through this soft shell. Once a critical indentation is reached (towards the end of compaction), hard Al₂O₃ - Al₂O₃ contacts appear. In accordance with experimental data, DEM simulations show that the amount of Al₂O₃ and the amount of binder play a major role for the compaction behavior. The graphite platelets bend and break during the compaction, also contributing to the macroscopic stress. Firing step is modeled in DEM by ascribing bonds to particles which contact each other after compaction. The fracture behavior of the compacted mixture is also simulated and compared with experimental data. Both diametrical tests (Brazilian tests) and triaxial tests are carried out. Again, the link between the amount of Al₂O₃ particles and the fracture behavior is investigated. The methodology described here can be generalized to other particulate materials that are used in the ceramic industry.Keywords: cold compaction, composites, discrete element method, refractory materials, x-ray tomography
Procedia PDF Downloads 139
668 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications
Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon
Abstract:
The focus in the automotive industry is to reduce human operator and machine interaction so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time, as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, also described as Automated Fibre Placement (AFP). The AFP process is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding, and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent that is already impregnated onto the tape through the application of heat. The stitching method is used as a baseline to compare the new splicing methods to the traditional technique currently in use. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the parameters of the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment, and splicing parameters; these were then tested in tension using a tensile testing machine. The initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique. It analysed the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; there was no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the baseline of a stitched bond. It was found that the addition of an adhesive was the best splicing method, achieving a maximum load of over 500 N compared to the 26 N load achieved by a stitching splice and 94 N by the binding method.
Keywords: analysis, automated fibre placement, high speed, splicing
Procedia PDF Downloads 155667 DNA Hypomethylating Agents Induced Histone Acetylation Changes in Leukemia
Authors: Sridhar A. Malkaram, Tamer E. Fandy
Abstract:
Purpose: 5-Azacytidine (5AC) and decitabine (DC) are DNA hypomethylating agents. We recently demonstrated that both drugs increase the enzymatic activity of the histone deacetylase enzyme SIRT6. Accordingly, we compared the genome-wide H3K9 acetylation changes induced by both drugs in leukemia cells. Description of Methods & Materials: Mononuclear cells from the bone marrow of six de-identified naive acute myeloid leukemia (AML) patients were cultured with either 500 nM of DC or 5AC for 72 h, followed by ChIP-Seq analysis using a ChIP-validated acetylated-H3K9 (H3K9ac) antibody. ChIP-Seq libraries were prepared from treated and untreated cells using the SMARTer ThruPLEX DNA-seq kit (Takara Bio, USA) according to the manufacturer’s instructions. Libraries were purified and size-selected with AMPure XP beads at a 1:1 (v/v) ratio. All libraries were pooled prior to sequencing on an Illumina HiSeq 1500. The dual-indexed single-read Rapid Run was performed with 1x120 cycles at 5 pM final concentration of the library pool. Sequence reads with average Phred quality < 20, with length < 35 bp, PCR duplicates, and those aligning to blacklisted regions of the genome were filtered out using Trim Galore v0.4.4 and cutadapt v1.18. Reads were aligned to the reference human genome (hg38) using Bowtie v2.3.4.1 in end-to-end alignment mode. H3K9ac enriched (peak) regions were identified with diffReps v1.55.4, using input samples for background correction. The statistical significance of differential peak counts was assessed with a negative binomial test, using all individuals as replicates. Data & Results: The data from the six patients showed significant (Padj < 0.05) acetylation changes at 925 loci after 5AC treatment versus 182 loci after DC treatment. Both drugs induced H3K9 acetylation changes at different chromosomal regions, including promoters, coding exons, introns, and distal intergenic regions. Ten common genes showed H3K9 acetylation changes by both drugs. Approximately 84% of the genes showed an H3K9 acetylation decrease by 5AC versus only 54% by DC. Figures 1 and 2 show the heatmaps for the top 100 genes and the 99 genes showing H3K9 acetylation decrease after 5AC treatment and DC treatment, respectively. Conclusion: Despite the similarity in hypomethylating activity and chemical structure, the effect of both drugs on H3K9 acetylation change was significantly different. More changes in H3K9 acetylation were observed after 5AC treatment compared to DC. The impact of these changes on gene expression and the clinical efficacy of these drugs require further investigation. Keywords: DNA methylation, leukemia, decitabine, 5-Azacytidine, epigenetics
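A hedged sketch of the kind of negative binomial test described above, applied to a single H3K9ac peak region with the six patients as replicates. The counts below are invented; the actual analysis used diffReps with input-sample background correction, not this simplified GLM.

```python
# Negative binomial GLM: does treatment change the read count at one peak region?
import numpy as np
import statsmodels.api as sm

counts  = np.array([120, 95, 130, 88, 110, 105,   # untreated patients (placeholder counts)
                    60,  50,  72, 45,  66,  58])  # 5AC-treated patients
treated = np.array([0] * 6 + [1] * 6)
X = sm.add_constant(treated)                      # intercept + treatment indicator

model = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5))
fit = model.fit()
print(fit.params)         # second coefficient approximates the log fold-change
print(fit.pvalues[1])     # p-value for the treatment effect at this region
```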
Procedia PDF Downloads 149666 Effects of Global Validity of Predictive Cues upon L2 Discourse Comprehension: Evidence from Self-paced Reading
Authors: Binger Lu
Abstract:
It remains unclear whether second language (L2) speakers can use discourse context cues to predict upcoming information as native speakers do during online comprehension. Some researchers propose that L2 learners may have a reduced ability to generate predictions during discourse processing. At the same time, there is evidence that discourse-level cues are weighed more heavily in L2 processing than in L1. Previous studies showed that L1 prediction is sensitive to the global validity of predictive cues. The current study aims to explore whether and to what extent L2 learners can dynamically and strategically adjust their prediction in accord with the global validity of predictive cues in L2 discourse comprehension as native speakers do. In a self-paced reading experiment, Chinese native speakers (N=128), C-E bilinguals (N=128), and English native speakers (N=128) read high-predictable (e.g., Jimmy felt thirsty after running. He wanted to get some water from the refrigerator.) and low-predictable (e.g., Jimmy felt sick this morning. He wanted to get some water from the refrigerator.) discourses in two-sentence frames. The global validity of predictive cues was manipulated by varying the ratio of predictable (e.g., Bill stood at the door. He opened it with the key.) and unpredictable fillers (e.g., Bill stood at the door. He opened it with the card.), such that across conditions, the predictability of the final word of the fillers ranged from 100% to 0%. The dependent variable was reading time on the critical region (the target word and the following word), analyzed with linear mixed-effects models in R. C-E bilinguals showed reliable prediction across all validity conditions (β = -35.6 ms, SE = 7.74, t = -4.601, p < .001), and Chinese native speakers showed a significant effect (β = -93.5 ms, SE = 7.82, t = -11.956, p < .001) in two of the four validity conditions (namely, the High-validity and MedLow conditions, where fillers ended with predictable words in 100% and 25% of cases, respectively), whereas English native speakers didn’t predict at all (β = -2.78 ms, SE = 7.60, t = -.365, p = .715). There was neither a main effect of Validity (χ²(3) = .256, p = .968) nor an interaction of Validity with the speaker groups (Predictability: Background: Validity, χ²(3) = 1.229, p = .746; Predictability: Validity, χ²(3) = 2.520, p = .472; Background: Validity, χ²(3) = 1.281, p = .734). The results suggest that prediction occurs in L2 discourse processing but to a much lesser extent in L1, with a significant effect in only some conditions for L1 Chinese and a null effect for L1 English processing, consistent with the view that L2 speakers are more sensitive to discourse cues compared with L1 speakers. Additionally, the pattern of L1 and L2 predictive processing was not affected by the global validity of predictive cues. C-E bilinguals’ predictive processing could be partly transferred from their L1, as prior research showed that discourse information played a more significant role in L1 Chinese processing. Keywords: bilingualism, discourse processing, global validity, prediction, self-paced reading
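A minimal sketch of the mixed-effects analysis described above, using synthetic reading times and Python's statsmodels instead of the R models used in the study. Variable names, effect sizes, and the simple by-subject random-intercept structure are assumptions for illustration.

```python
# Fit a mixed model of reading times with a predictability fixed effect
# and by-subject random intercepts (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for subj in range(24):
    subj_offset = rng.normal(0, 30)              # by-subject baseline shift
    for item in range(8):
        pred = item % 2                          # 1 = high-predictable, 0 = low-predictable
        rt = 420 - 35 * pred + subj_offset + rng.normal(0, 40)
        rows.append({"subject": subj, "predictability": pred, "rt": rt})
df = pd.DataFrame(rows)

m = smf.mixedlm("rt ~ predictability", df, groups=df["subject"]).fit()
print(m.summary())   # a negative predictability slope indicates facilitation (prediction)
```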
Procedia PDF Downloads 139665 Participatory Cartography for Disaster Reduction in Progreso, Yucatan, Mexico
Authors: Gustavo Cruz-Bello
Abstract:
Progreso is a coastal community in Yucatan, Mexico, highly exposed to floods produced by severe storms and tropical cyclones. A participatory cartography approach was conducted to help reduce flood disasters and assess social vulnerability within the community. The first step was to engage local authorities in risk management to facilitate the process. Two workshops were conducted. In the first, a poster-size print of a high spatial resolution satellite image of the town was used to gather information from the participants: eight women and seven men, among them construction workers, students, government employees and fishermen; their ages ranged between 23 and 58 years. For the first task, participants were asked to locate emblematic places and mark them on the image to familiarize themselves with it. Then, they were asked to locate areas that get flooded, the buildings that they use as refuges, and to list actions that they usually take to reduce vulnerability, as well as to collectively come up with others that might reduce disasters. The spatial information generated at the workshops was digitized and integrated into a GIS environment. A printed version of the map was reviewed by local risk management experts, who validated the feasibility of the proposed actions. In the second workshop, the information was returned to the community for feedback. Additionally, a survey was applied to one household per block in the community to obtain socioeconomic, prevention and adaptation data. The information generated from the workshops was contrasted, through t-tests and chi-squared tests, with the survey data in order to test the hypothesis that poorer or less educated people are less prepared to face floods (more vulnerable) and live in or near areas with a higher presence of floods. Results showed that a great majority of people in the community are aware of the hazard and are prepared to face it. However, there was not a consistent relationship between regularly flooded areas and people’s average years of education, household services, or house modifications against heavy rains. We could say that the participatory cartography intervention made participants aware of their vulnerability and made them collectively reflect on actions that can reduce disasters produced by floods. They also considered that the final map could be used as a communication and negotiation instrument with NGOs and government authorities. It was not found that poorer and less educated people are located in areas with a higher presence of floods. Keywords: climate change, floods, Mexico, participatory mapping, social vulnerability
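An illustrative sketch (invented numbers) of the two tests mentioned above: comparing years of education between households inside and outside the mapped flood areas with a t-test, and checking preparedness against flood exposure with a chi-squared test.

```python
# t-test and chi-squared test on placeholder household survey data.
import numpy as np
from scipy import stats

educ_flooded     = np.array([6, 8, 9, 5, 7, 10, 6, 8])    # years of education, flooded blocks
educ_not_flooded = np.array([7, 9, 8, 6, 9, 11, 7, 10])   # years of education, other blocks
t, p_t = stats.ttest_ind(educ_flooded, educ_not_flooded, equal_var=False)

#                  prepared  not prepared
table = np.array([[42,       18],     # household in flood-prone block
                  [55,       25]])    # household outside flood-prone block
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

print(f"t = {t:.2f}, p = {p_t:.3f}; chi2 = {chi2:.2f}, p = {p_chi:.3f}")
```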
Procedia PDF Downloads 114664 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model
Authors: Seydou Sinde
Abstract:
The aims of this paper are to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that, directly or indirectly, affect the behavior of SPP values. Fluid rheology and well hydraulics are some of these essential factors. Mud plastic viscosity, yield point, flow power, consistency index, flow rate, drillstring, and annular geometries are represented by the frictional pressure (Pf), which is one of the input independent parameters and is calculated, in this paper, using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), applied load or weight on the bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and hole inclination and direction coupled in the hole curvature or dogleg (DL). The technique of repeating parameters and the Buckingham Pi theorem are used to reduce the number of input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already in the dimensionless form of radians. Multivariable linear and polynomial regression using PTC Mathcad Prime 4.0 is used to analyze and determine the exact relationships between the dependent parameter, which is SPP, and the remaining three dimensionless groups. Three models proved sufficiently satisfactory to estimate the standpipe pressure: multivariable linear regression model 1 containing three regression coefficients for vertical wells; multivariable linear regression model 2 containing four regression coefficients for deviated wells; and a multivariable polynomial quadratic regression model containing six regression coefficients for both vertical and deviated wells. Although the linear regression model 2 (with four coefficients) is more complex and contains an additional term compared to the linear regression model 1 (with three coefficients), the former did not add significant improvements over the latter except for some minor values. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the input independent parameters without significant losses of accuracy. The polynomial quadratic regression model is considered the most accurate model due to its relatively higher accuracy for most of the cases. Data from nine wells in the Middle East were used to run the developed models, with satisfactory results provided by all of them, even though the multivariable polynomial quadratic regression model gave the best and most accurate results. Development of these models is useful not only to monitor and predict, with accuracy, the values of SPP but also to control and check early for the integrity of the well hydraulics, as well as to take corrective actions should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc. Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression
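A hedged sketch of fitting a quadratic multivariable regression of SPP on the dimensionless groups named above (RPMd, TRQd, and the dogleg). The data values are placeholders, the feature set is a generic full quadratic rather than the paper's six-coefficient form, and scikit-learn stands in for the PTC Mathcad workflow used by the author.

```python
# Quadratic multivariable regression of standpipe pressure on dimensionless drilling groups.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# columns: RPMd, TRQd, DL (radians); target: SPP (psi) - all invented
X = np.array([[0.8, 0.30, 0.02],
              [1.1, 0.42, 0.05],
              [0.9, 0.35, 0.00],
              [1.4, 0.55, 0.07],
              [1.2, 0.48, 0.03],
              [1.0, 0.40, 0.04],
              [1.3, 0.52, 0.06],
              [0.7, 0.28, 0.01]])
spp = np.array([2100, 2750, 2300, 3400, 2950, 2600, 3200, 1950])

model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
model.fit(X, spp)
print(model.predict([[1.05, 0.45, 0.03]]))   # predicted SPP for a new operating point
```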
Procedia PDF Downloads 84663 Estimation of Rock Strength from Diamond Drilling
Authors: Hing Hao Chan, Thomas Richard, Masood Mostofi
Abstract:
The mining industry relies on an estimate of rock strength at several stages of a mine life cycle: mining (excavating, blasting, tunnelling) and processing (crushing and grinding), both very energy-intensive activities. An effective comminution design that can yield significant dividends often requires a reliable estimate of the material rock strength. Common laboratory tests such as rod mill, ball mill, and uniaxial compressive strength tests share common shortcomings such as time, sample preparation, bias in plug selection, cost, repeatability, and the sample amount required to ensure reliable estimates. In this paper, the authors present a methodology to derive an estimate of the rock strength from drilling data recorded while coring with a diamond core head. The work presented in this paper builds on a phenomenological model of the bit-rock interface proposed by Franca et al. (2015) and is inspired by the now well-established use of the scratch test with a PDC (Polycrystalline Diamond Compact) cutter to derive the rock uniaxial compressive strength. The first part of the paper introduces the phenomenological model of the bit-rock interface for a diamond core head that relates the forces acting on the drill bit (torque, axial thrust) to the bit kinematic variables (rate of penetration and angular velocity) and introduces the intrinsic specific energy, or the energy required to drill a unit volume of rock with an ideally sharp drilling tool (meaning ideally sharp diamonds and no contact between the bit matrix and rock debris), which is found to correlate well with the rock uniaxial compressive strength for PDC and roller cone bits. The second part describes the laboratory drill rig, the experimental procedure that is tailored to minimize the effect of diamond polishing over the duration of the experiments, and the step-by-step methodology to derive the intrinsic specific energy from the recorded data. The third section presents the results and shows that the intrinsic specific energy correlates well with the uniaxial compressive strength for the 11 tested rock materials (7 sedimentary and 4 igneous rocks). The last section discusses best drilling practices and a method to estimate the rock strength from field drilling data, considering the compliance of the drill string and frictional losses along the borehole. The approach is illustrated with a case study from drilling data recorded while drilling an exploration well in Australia. Keywords: bit-rock interaction, drilling experiment, impregnated diamond drilling, uniaxial compressive strength
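A hedged sketch of turning drilling data into a specific energy. This uses the common mechanical specific energy form (thrust plus torque work per unit volume of rock removed by the kerf), not necessarily the exact bit-rock interface model of Franca et al.; the bit geometry and operating values are placeholders.

```python
# Specific energy from torque, thrust, rotary speed and penetration rate for a core bit.
import math

def specific_energy(wob_N, torque_Nm, rpm, rop_m_per_h, od_m, id_m):
    """Energy per unit volume of rock removed by the annular kerf, in MPa (MJ/m^3)."""
    kerf_area = math.pi * (od_m**2 - id_m**2) / 4.0   # annular cutting face area
    rop = rop_m_per_h / 3600.0                        # penetration rate, m/s
    omega = 2.0 * math.pi * rpm / 60.0                # angular velocity, rad/s
    e = wob_N / kerf_area + omega * torque_Nm / (kerf_area * rop)   # Pa
    return e / 1e6

# Example with assumed core-bit dimensions and drilling parameters
print(specific_energy(wob_N=5000, torque_Nm=15, rpm=400,
                      rop_m_per_h=6.0, od_m=0.0757, id_m=0.0476))
```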
Procedia PDF Downloads 138662 Mechanical and Material Characterization on the High Nitrogen Supersaturated Tool Steels for Die-Technology
Authors: Tatsuhiko Aizawa, Hiroshi Morita
Abstract:
Tool steels such as SKD11 and SKH51 have been utilized as punch and die substrates for cold stamping, forging, and fine blanking processes. The heat-treated SKD11 punches with a hardness of 700 HV worked well in the stamping of SPCC normal steel plates and non-ferrous alloys such as brass sheet. However, they suffered from severe damage in the fine blanking of holes smaller than 1.5 mm in diameter. Under a high aspect ratio of punch length to diameter, elastoplastic buckling of slender punches occurred on the production line. The heat-treated punches also had a risk of chipping at their edges. To be free from those damages, the blanking punch must have sufficient rigidity and strength at the same time. In the present paper, a small-hole blanking punch with a dual toughness structure is proposed to provide a solution to this engineering issue in production. A low-temperature plasma nitriding process was utilized to form a nitrogen supersaturated thick layer in the original SKD11 punch. Through plasma nitriding at 673 K for 14.4 ks, a nitrogen supersaturated layer, with a thickness of 50 μm and without nitride precipitates, was formed as a high nitrogen steel (HNS) layer surrounding the original SKD11 punch. In this two-zone structured SKD11 punch, the surface hardness increased from 700 HV for the heat-treated SKD11 to 1400 HV. This outer high nitrogen SKD11 (HN-SKD11) layer had a homogeneous nitrogen solute depth profile with a nitrogen solute content plateau of 4 mass% up to the border between the outer HN-SKD11 layer and the original SKD11 matrix. When stamping a brass sheet with a thickness of 1 mm using this dually toughened SKD11 punch, the punch life was extended from 500 K shots to 10,000 K shots, yielding a much more stable production line for the brass American snaps. Furthermore, with the aid of a masking technique, the punch side surface layer with a thickness of 50 μm was modified by this high nitrogen supersaturation process to have a stripe structure where the un-nitrided SKD11 and the HN-SKD11 layers were alternately aligned from the punch head to the punch bottom. This flexible structuring promoted the mechanical integrity of total rigidity and toughness for a punch with an extremely small diameter. Keywords: high nitrogen supersaturation, semi-dry cold stamping, solid solution hardening, tool steel dies, low temperature nitriding, dual toughness structure, extremely small diameter punch
Procedia PDF Downloads 89661 Data-Driven Surrogate Models for Damage Prediction of Steel Liquid Storage Tanks under Seismic Hazard
Authors: Laura Micheli, Majd Hijazi, Mahmoud Faytarouni
Abstract:
The damage reported at oil and gas industrial facilities has revealed the acute vulnerability of steel liquid storage tanks to seismic events. The failure of steel storage tanks may yield devastating and long-lasting consequences on built and natural environments, including the release of hazardous substances, uncontrolled fires, and soil contamination with hazardous materials. It is, therefore, fundamental to reliably predict the damage that steel liquid storage tanks will likely experience under future seismic hazard events. The seismic performance of steel liquid storage tanks is usually assessed using vulnerability curves obtained from the numerical simulation of a tank under different hazard scenarios. However, the computational demand of high-fidelity numerical simulation models, such as finite element models, makes the vulnerability assessment of liquid storage tanks time-consuming and often impractical. As a solution, this paper presents a surrogate model-based strategy for predicting seismic-induced damage in steel liquid storage tanks. In the proposed strategy, the surrogate model is leveraged to reduce the computational demand of time-consuming numerical simulations. To create the data set for training the surrogate model, field damage data from past earthquake reconnaissance surveys and reports are collected. Features representative of steel liquid storage tank characteristics (e.g., diameter, height, liquid level, yield stress) and seismic excitation parameters (e.g., peak ground acceleration, magnitude) are extracted from the field damage data. The collected data are then utilized to train a data-driven surrogate model that maps the relationship between tank characteristics, seismic hazard parameters, and seismic-induced damage. Different types of surrogate algorithms, including naïve Bayes, k-nearest neighbors, decision tree, and random forest, are investigated, and results in terms of accuracy are reported. The model that yields the most accurate predictions is employed to predict future damage as a function of tank characteristics and seismic hazard intensity level. Results show that the proposed approach can be used to estimate the extent of damage in steel liquid storage tanks, where the use of data-driven surrogates represents a viable alternative to computationally expensive numerical simulation models. Keywords: damage prediction, data-driven model, seismic performance, steel liquid storage tanks, surrogate model
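A minimal sketch of comparing the four surrogate classifiers named above by cross-validated accuracy, with a synthetic data set standing in for the field damage records; the feature meanings in the comment are assumptions.

```python
# Compare naive Bayes, kNN, decision tree and random forest surrogates on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# features stand in for tank diameter, height, liquid level, yield stress, PGA, magnitude
X, y = make_classification(n_samples=400, n_features=6, n_informative=4, random_state=0)

models = {
    "naive Bayes":   GaussianNB(),
    "kNN":           KNeighborsClassifier(n_neighbors=5),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:14s} mean CV accuracy = {acc:.3f}")
```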
Procedia PDF Downloads 143660 Spatial Pattern and Predictors of Malaria in Ethiopia: Application of Autologistic Spatial Regression
Authors: Melkamu A. Zeru, Yamral M. Warkaw, Aweke A. Mitku, Muluwerk Ayele
Abstract:
Introduction: Malaria is a severe health threat in the world, mainly in Africa. It is a major cause of health problems, and the risk of morbidity and mortality associated with malaria cases is characterized by spatial variations across the country. This study aimed to investigate the spatial patterns and predictors of malaria distribution in Ethiopia. Methods: A weighted sample of 15,239 individuals with rapid diagnostic tests was obtained from the Central Statistical Agency and the Ethiopia malaria indicator survey of 2015. Global Moran's I and Moran scatter plots were used in determining the distribution of malaria cases, whereas the local Moran's I statistic was used in identifying exposed areas. In data manipulation, machine learning was used for variable reduction, and the statistical software packages R, Stata, and Python were used for data management and analysis. The autologistic spatial binary regression model was used to investigate the predictors of malaria. Results: The final autologistic regression model reported that male clients had a positive significant effect on malaria cases as compared to female clients [AOR=2.401, 95% CI: (2.125 - 2.713)]. The distribution of malaria across the regions was different. The highest incidence of malaria was found in Gambela [AOR=52.55, 95% CI: (40.54 - 68.12)], followed by Beneshangul [AOR=34.95, 95% CI: (27.159 - 44.963)]. Individuals in Amhara [AOR=0.243, 95% CI: (0.195 - 0.303)], Oromiya [AOR=0.197, 95% CI: (0.158 - 0.244)], Dire Dawa [AOR=0.064, 95% CI: (0.049 - 0.082)], Addis Ababa [AOR=0.057, 95% CI: (0.044 - 0.075)], Somali [AOR=0.077, 95% CI: (0.059 - 0.097)], SNNPR [OR=0.329, 95% CI: (0.261 - 0.413)] and Harari [AOR=0.256, 95% CI: (0.201 - 0.325)] were less likely to have malaria as compared with Tigray. Furthermore, for a one-meter increase in altitude, the odds of a positive rapid diagnostic test (RDT) decrease by 1.6% [AOR = 0.984, 95% CI: (0.984 - 0.984)]. The use of a shared toilet facility was found to be significantly associated with malaria in Ethiopia [AOR=1.671, 95% CI: (1.504 - 1.854)]. The spatial autocorrelation variable changes the constant from AOR = 0.471 for logistic regression to AOR = 0.164 for autologistic regression. Conclusions: This study found that the incidence of malaria in Ethiopia had a spatial pattern associated with socio-economic, demographic, and geographic risk factors. Spatial clustering of malaria cases occurred in all regions, and the risk of clustering differed across the regions. The risk of malaria was found to be higher for those who live in houses with soil floors as compared to those who live in houses with cement or ceramic floors. Similarly, households with thatched, metal and thin, or other roof types have a higher risk of malaria than houses with ceramic tile roofs. Moreover, using a protected anti-mosquito net reduced the risk of malaria incidence. Keywords: malaria, Ethiopia, autologistic regression, spatial model, spatial clustering
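A hedged sketch of the autologistic idea described above: an ordinary logistic regression augmented with a spatial autocovariate, here taken as the share of positive RDTs among neighbouring clusters. The data, the distance-based neighbour definition, and the single altitude covariate are all synthetic stand-ins for the survey variables.

```python
# Logistic regression with a spatial autocovariate (a simple autologistic approximation).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))          # cluster locations (arbitrary units)
altitude = rng.uniform(1000, 2500, n)             # metres
rdt = (rng.random(n) < 1 / (1 + np.exp(0.004 * (altitude - 1600)))).astype(int)

# autocovariate: mean RDT result among clusters within 1.5 distance units
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
w = (d < 1.5) & (d > 0)
autocov = np.where(w.sum(1) > 0, (w @ rdt) / np.maximum(w.sum(1), 1), 0.0)

X = sm.add_constant(np.column_stack([altitude, autocov]))
fit = sm.Logit(rdt, X).fit(disp=False)
print(fit.params)    # altitude effect and spatial-autocorrelation coefficient
```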
Procedia PDF Downloads 37659 Beware the Trolldom: Speculative Interests and Policy Implications behind the Circulation of Damage Claims
Authors: Antonio Davola
Abstract:
Building on the evaluations made by Richard Posner in his judgment on the case Carhart v. Halaska, the paper seeks to analyse the so-called ‘litigation troll’ phenomenon and the development of a damage claims market, i.e. a market in which the right to propose claims is voluntarily exchangeable for money and can be asserted by private buyers. The aim of our study is to assess whether the implementation of a ‘damage claims market’ might represent a resource for victims or if, on the contrary, it might operate solely as a speculation tool for private investors. The analysis will move from the US experience and will then focus on the EU framework. Firstly, the paper will analyse the relation between the litigation troll phenomenon and patent troll activity: even though these activities are considered similar by Posner, a comparative study shows how these practices significantly differ in their impact on the market and on consumer protection, even when starting from similar economic perspectives. The second part of the paper will focus on the main specific concerns related to litigation trolling activity. The main issues that will be addressed are the risk that the circulation of damage claims might spur non-meritorious litigation and the implications of the misalignment between the victim of a tort and the actual plaintiff in court arising from the sale of a claim. In its third part, the paper will then focus on the opportunities and benefits that the introduction and regulation of a claims market might imply both for potential claims sellers and buyers, in order to ultimately assess whether such a solution might actually increase individuals’ legal empowerment. Through the damage claims market, compensation would be granted more quickly and easily to consumers who had suffered harm: tort victims would, in fact, be compensated instantly upon the sale of their claims without any burden of proof. On the other hand, claim-buyers would profit from the gap between the amount that a consumer would accept for an immediate refund and the compensation awarded in court. In the fourth part of the paper, the analysis will focus on the legal legitimacy of litigation trolling activity in the US and the EU framework. Even though there is no express provision that forbids the sale of the right to pursue a claim in court, or that deems such a right to be non-transferable, the procedural laws of individual States (especially in the EU panorama) must be taken into account in evaluating this aspect. The fifth and final part of the paper will summarize the data collected in order to evaluate whether, and through which normative solutions, litigation trolling might benefit competition, and what its overall effect on consumer protection would be. Keywords: competition, claims, consumer's protection, litigation
Procedia PDF Downloads 232658 Pill-Box Dispenser as a Strategy for Therapeutic Management: A Qualitative Evaluation
Authors: Bruno R. Mendes, Francisco J. Caldeira, Rita S. Luís
Abstract:
Population ageing is directly correlated with an increase in medicine consumption. Beyond the latter and the polymedicated profile of the elderly, there is a need for pharmacotherapeutic monitoring due to cognitive and physical impairment. In this sense, the tracking, organization and administration of medicines become a daily challenge, and the pill-box dispenser system a solution. The pill-box dispenser (system) consists of a small compartmentalized container for unit-dose organization, i.e., a container able to correlate the patient’s prescribed dose regimen with the time schedule of intake. In many European countries, this system is part of the pharmacist’s role in clinical pharmacy. Despite this simple solution, therapy compliance is only possible if the patient adheres to the system, so it is important to establish a qualitative and quantitative analysis of the patient's perception of the benefits and risks of the pill-box dispenser, as well as the identification of the ideal system. The analysis was conducted through an observational study, based on the application of a standardized questionnaire structured with a 5-level numerical Likert scale and previously validated on the population. The study was performed during a limited period of time on a randomized sample of 188 participants. The questionnaire consisted of 22 questions: 6 background measures and 16 specific measures. The standards for the final comparative analysis were obtained from the state of the art on the subject. The study carried out using the Likert scale yielded a degree of agreement and discordance between measures (Sample vs. Standard) of 56.25% and 43.75%, respectively. It was concluded that the pill-box dispenser has greater acceptance among a younger population, which was not the initial target of the system; this, however, suggests high adherence can be expected in the future. Additionally, it was noted that the cost associated with this service is not a limiting factor for its use. The pill-box dispenser system, as currently implemented, demonstrates an important weakness regarding the quality and effectiveness of the medicines, which is not understood by the patient, revealing a significant lack of literacy when it comes to medicines. The characteristics of an ideal system remain unchanged, which means that the size, appearance and availability of information in the pill-box continue to be indispensable elements for compliance with the system. The pill-box dispenser remains unsuitable regarding container size and the type of treatment to which it applies. Despite that, it might be a future standard for clinical pharmacy, allowing a differentiation of the pharmacist's role, as well as a wider range of applications to other age groups and treatments. Keywords: clinical pharmacy, medicines, patient safety, pill-box dispenser
Procedia PDF Downloads 198657 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust
Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin
Abstract:
The presented work concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first is the engine exhaust manifold, turbocharger, and catalytic converters, which are called the "hot part." The second part is the gas exhaust system, which contains elements exclusively for reducing exhaust noise (mufflers, resonators), the accepted designation of which is the "cold part." The design of the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the "cold part" with high accuracy in a given frequency range, but only on the condition of accurately specifying the input parameters, namely, the amplitude spectrum of the input noise and the acoustic impedance of the noise source, i.e., the engine with its "hot part". Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linear regime) do not allow purely calculated results to be applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with a "hot part" based on a combination of computational and experimental studies. The presented methodology includes several parts. The first part is a finite element simulation of the "cold part" of the exhaust system (taking into account the acoustic impedance of radiation of the outlet pipe into open space), with the result in the form of the input impedance of the "cold part". The second part is a finite element simulation of the "hot part" of the exhaust system (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), with the result in the form of the input impedance of the "hot part". The third part of the technique consists of the mathematical processing of these results according to the proposed formula for the convergence of the series obtained by summing the multiple reflections of the acoustic signal between the "cold part" and the "hot part". This is followed by a set of tests on an engine stand with two high-temperature pressure sensors measuring pulsations in the nozzle between the "hot part" and the "cold part" of the exhaust system, and subsequent processing of the test results according to a well-known technique in order to separate the "incident" and "reflected" waves. The final stage consists of the mathematical processing of all calculated and experimental data to obtain a result in the form of a spectrum of the amplitude of the engine noise and its acoustic impedance. Keywords: acoustic impedance, engine exhaust system, FEM model, test stand
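A hedged sketch of the two-sensor wave decomposition alluded to above: from the complex pressure spectra at two axial positions in the nozzle, solve for the incident and reflected wave amplitudes and hence a reflection coefficient at the reference plane. Mean flow and temperature gradients are neglected here, the sensor spacing and spectra are placeholders, and the paper's exact processing method may differ.

```python
# Decompose two measured pressure spectra into incident and reflected plane waves.
import numpy as np

def decompose(p1, p2, x1, x2, freq, c=500.0):
    """p1, p2: complex pressure spectra at positions x1, x2 (m); c: speed of sound (m/s)."""
    k = 2 * np.pi * np.atleast_1d(freq) / c            # wavenumber per frequency bin
    A = np.empty(len(k), dtype=complex)                # incident wave amplitudes
    B = np.empty(len(k), dtype=complex)                # reflected wave amplitudes
    for i, ki in enumerate(k):
        # p(x) = A exp(-j k x) + B exp(+j k x) evaluated at the two sensor positions
        M = np.array([[np.exp(-1j * ki * x1), np.exp(1j * ki * x1)],
                      [np.exp(-1j * ki * x2), np.exp(1j * ki * x2)]])
        A[i], B[i] = np.linalg.solve(M, [p1[i], p2[i]])
    R = B / A                                          # reflection coefficient at x = 0
    return A, B, R

freq = np.array([100.0, 200.0, 400.0])                 # Hz (illustrative bins)
p1 = np.array([1.0 + 0.2j, 0.8 + 0.1j, 0.6 - 0.1j])    # placeholder spectra, sensor 1
p2 = np.array([0.9 + 0.3j, 0.7 + 0.2j, 0.5 + 0.0j])    # placeholder spectra, sensor 2
A, B, R = decompose(p1, p2, x1=0.05, x2=0.15, freq=freq)
print(np.abs(R))                                       # magnitude of the reflection coefficient
```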
Procedia PDF Downloads 59