Search results for: bridge condition assessment
76 The Politics of Health Education: A Cultural Analysis of Tobacco Control Communication in India
Authors: Ajay Ivan
Abstract:
This paper focuses on the cultural politics of health-promotional and disease-preventive pedagogic practices in the context of the national tobacco control programme in India. Tobacco consumption is typically problematised as a paradox: tobacco poses objective health risks such as cancer and heart disease, but its production, sale and export contribute significantly to state revenue. A blanket ban on tobacco products, therefore, is infeasible though desirable. Instead, initiatives against tobacco use have prioritised awareness creation and behaviour change to reduce its demand. This paper argues that public health communication is not, as commonly assumed, an apolitical and neutral transmission of disease-preventive information. Drawing on Michel Foucault’s concept of governmentality, it examines such campaigns as techniques of disciplining people rather than coercing them to give up tobacco use, which would be both impractical and counter-productive. At the level of the population, these programmes constitute a security mechanism that reduces risks without eliminating them, so as to ensure an optimal level of public health without hampering the economy. Anti-tobacco pedagogy thus aligns with a contemporary paradigm of health that emphasises risk-assessment and lifestyle management as tools of governance, using pedagogic techniques to teach people how to be healthy. The paper analyses the pictorial health warnings on tobacco packets and anti-tobacco advertisements in movie theatres mandated by the state, along with awareness-creation messages circulated by anti-tobacco advocacy groups in India, to show how they discursively construct tobacco and its consumption as a health risk. Smoking is resignified from a pleasurable and sociable practice to a deadly addiction that jeopardises the health of those who smoke and those who passively inhale the smoke. 
While disseminating information about the health risks of tobacco, these initiatives employ emotional and affective techniques of persuasion to discipline tobacco users. They incite fear of death and of social ostracism to motivate behaviour change, complementing their appeals to reason. Tobacco is portrayed as a grave moral danger to the family and a detriment to the vitality of the nation, such that using it contradicts one’s duties as a parent or citizen. Awareness programmes reproduce prevailing societal assumptions about health and disease, normalcy and deviance, and proper and improper conduct. Pedagogy thus functions as an apparatus of public health governance, recruiting subjects as volunteers in their own regulation and aligning their personal goals and aspirations to the objectives of tobacco control. The paper links this calculated management of subjectivity and the self-responsibilisation of the pedagogic subject to a distinct mode of neoliberal civic governance in contemporary India. Health features prominently in this mode of governance that serves the biopolitical obligation of the state as laid down in Article 39 of the Constitution, which includes a duty to ensure the health of its citizens. Insofar as the health of individuals is concerned, the problem is how to balance this duty of the state with the fundamental right of the citizen to choose how to live. Public health pedagogy, by directing the citizen’s ‘free’ choice without unduly infringing upon it, offers a tactical solution.
Keywords: public health communication, pedagogic power, tobacco control, neoliberal governance
Procedia PDF Downloads 84
75 Developing a Sustainable Transit Planning Index Using Analytical Hierarchy Process Method for ZEB Implementation in Canada
Authors: Mona Ghafouri-Azar, Sara Diamond, Jeremy Bowes, Grace Yuan, Aimee Burnett, Michelle Wyndham-West, Sara Wagner, Anand Pariyarath
Abstract:
Transportation is the fastest growing source of greenhouse gas emissions worldwide. In Canada, it is responsible for 23% of total CO2 emissions from fuel combustion, and emissions from the transportation sector are the second largest source of emissions after the oil and gas sector. Currently, most Canadian public transportation systems rely on buses that operate on fossil fuels. Canada is currently investing billions of dollars to replace diesel buses with electric buses, as this is perceived to have a significant impact on climate mitigation. This paper focuses on the possible impacts of zero emission buses (ZEB) on sustainable development, considering three dimensions of sustainability: environmental quality, economic growth, and social development. A sustainable transportation system is one that is safe, affordable, accessible, efficient, and resilient, and that contributes minimal emissions of carbon and other pollutants. To enable implementation of these goals, relevant indicators were selected and defined that measure progress towards a sustainable transportation system. These were drawn from Canadian and international examples. Studies compare different European cities in terms of development, sustainability, and infrastructure by using transport performance indicators. A Normalized Transport Sustainability index measures and compares policies in different urban areas and allows fine-tuning of policies. Analysts use a number of methods for sustainability analysis, such as cost-benefit analysis (CBA) to assess economic benefit, life-cycle assessment (LCA) to assess social, economic, and environmental factors and goals, and multi-criteria decision making (MCDM) analysis, which can compare differing stakeholder preferences. A multi-criteria decision making approach is an appropriate methodology to plan and evaluate sustainable transit development and to provide insights and meaningful information for decision makers and transit agencies.
It is essential to develop a system that aggregates specific discrete indices to assess the sustainability of transportation systems. These prioritize indicators appropriate to the different Canadian transit system agencies and their preferences and requirements. This study will develop an integrating index that combines existing discrete indexes to support a reliable comparison between the current transportation system (diesel buses) and the new ZEB system emerging in Canada. As a first step, the indexes for each category are selected and the index matrix is constructed. Second, the selected indicators are normalized to remove any inconsistency between them. Next, the normalized matrix is weighted based on the relative importance of each index to the main domains of sustainability using the analytical hierarchy process (AHP) method. This is accomplished through expert judgement about the relative importance of different attributes with respect to the goals, expressed through a pairwise comparison matrix. The consideration of multiple environmental, economic, and social factors (including equity and health) is integrated into a sustainable transit planning index (STPI) which supports realistic ZEB implementation in Canada and beyond and is useful to different stakeholders, agencies, and ministries.
Keywords: zero emission buses, sustainability, sustainable transit, transportation, analytical hierarchy process, environment, economy, social
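The AHP weighting step described in this abstract can be sketched with a toy calculation. The pairwise judgements below are hypothetical placeholders, not the study's values; the row geometric-mean method shown is one common way to approximate AHP priority weights from a pairwise comparison matrix.

```python
from math import prod

def ahp_weights(matrix):
    """Approximate AHP priority weights via the row geometric-mean method."""
    n = len(matrix)
    geo_means = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical pairwise comparison of the three sustainability domains on
# Saaty's 1-9 scale, e.g. environment judged 3x as important as economy.
pairwise = [
    [1.0, 3.0, 2.0],      # environment vs. (environment, economy, social)
    [1 / 3, 1.0, 1 / 2],  # economy
    [1 / 2, 2.0, 1.0],    # social
]

weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])
```

For a reciprocal judgement matrix the resulting weights sum to one; a full implementation would also compute Saaty's consistency ratio before accepting the expert judgements.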
Procedia PDF Downloads 129
74 The Use of Artificial Intelligence in the Context of a Space Traffic Management System: Legal Aspects
Authors: George Kyriakopoulos, Photini Pazartzis, Anthi Koskina, Crystalie Bourcha
Abstract:
The need for securing safe access to and return from outer space, as well as ensuring the viability of outer space operations, keeps alive the debate over the organization of space traffic through a Space Traffic Management System (STM). The proliferation of outer space activities in recent years, as well as the dynamic emergence of the private sector, has gradually resulted in a diverse universe of actors operating in outer space. These developments have had an increasingly adverse impact on outer space sustainability, as the growing number of space debris clearly demonstrates. This landscape poses considerable threats to the outer space environment and its operators that need to be addressed by a combination of scientific-technological measures and regulatory interventions. In this context, recourse to recent technological advancements and, in particular, to Artificial Intelligence (AI) and machine learning systems could achieve exponential results in promoting space traffic management with respect to collision avoidance as well as launch and re-entry procedures/phases. New technologies can support the prospects of a successful space traffic management system at an international scale by enabling, inter alia, timely, accurate and analytical processing of large data sets and rapid decision-making, more precise space debris identification and tracking, and an overall minimization of collision risks and reduction of operational costs. What is more, a significant part of space activities (i.e. the launch and/or re-entry phase) takes place in airspace rather than in outer space, hence the overall discussion also involves the highly developed, both technically and legally, international (and national) Air Traffic Management System (ATM). Nonetheless, from a regulatory perspective, the use of AI for the purposes of space traffic management puts forward implications that merit particular attention.
Key issues in this regard include the delimitation of AI-based activities as space activities, the designation of the applicable legal regime (international space or air law, national law), the assessment of the nature and extent of international legal obligations regarding space traffic coordination, as well as the appropriate liability regime applicable to AI-based technologies when operating for space traffic coordination, taking into particular consideration the dense regulatory developments at EU level. In addition, the prospects of institutionalizing international cooperation and promoting an international governance system, together with the challenges of establishing a comprehensive international STM regime, are revisited in the light of the intervention of AI technologies. This paper aims at examining the regulatory implications advanced by the use of AI technology in the context of space traffic management operations and its key correlating concepts (SSA, space debris mitigation), drawing in particular on international and regional considerations in the field of STM (e.g. UNCOPUOS, the International Academy of Astronautics, the European Space Agency, among other actors), the promising advancements of the EU approach to AI regulation and, last but not least, national approaches regarding the use of AI in the context of space traffic management, in toto. Acknowledgment: The present work was co-funded by the European Union and Greek national funds through the Operational Program "Human Resources Development, Education and Lifelong Learning" (NSRF 2014-2020), under the call "Supporting Researchers with an Emphasis on Young Researchers – Cycle B" (MIS: 5048145).
Keywords: artificial intelligence, space traffic management, space situational awareness, space debris
Procedia PDF Downloads 261
73 Optimizing Solids Control and Cuttings Dewatering for Water-Powered Percussive Drilling in Mineral Exploration
Authors: S. J. Addinell, A. F. Grabsch, P. D. Fawell, B. Evans
Abstract:
The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising down-hole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. This system has shown superior rates of penetration in water-rich, hard rock formations at depths exceeding 500 metres. With fluid flow rates of up to 120 litres per minute at 200 bar operating pressure to energise the bottom hole tooling, excessive quantities of high quality drilling fluid (water) would be required for a prolonged drilling campaign. As a result, drilling fluid recovery and recycling has been identified as a necessary option to minimise costs and logistical effort. While the majority of the cuttings report as coarse particles, a significant fines fraction will typically also be present. To maximise tool life longevity, the percussive bottom hole assembly requires high quality fluid with minimal solids loading and any recycled fluid needs to have a solids cut point below 40 microns and a concentration less than 400 ppm before it can be used to reenergise the system. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process shows a strong power law relationship for particle size distributions. This data is critical in optimising solids control strategies and cuttings dewatering techniques. Optimisation of deployable solids control equipment is discussed and how the required centrate clarity was achieved in the presence of pyrite-rich metasediment cuttings. 
Key results were the successful pre-aggregation of fines through the selection and use of high molecular weight anionic polyacrylamide flocculants and the techniques developed for optimal dosing prior to scroll decanter centrifugation, thus keeping sub-40-micron solids loading within prescribed limits. Experiments on maximising fines capture in the presence of thixotropic drilling fluid additives (e.g. xanthan gum and other biopolymers) are also discussed. As no core is produced during the drilling process, it is intended that the particle-laden returned drilling fluid is used for top-of-hole geochemical and mineralogical assessment. A discussion is therefore presented on the biasing and latency of cuttings representivity by dewatering techniques, as well as the resulting detrimental effects on depth fidelity and accuracy. Data pertaining to sample biasing with respect to geochemical signatures due to particle size distributions is presented and shows that, depending on the solids control and dewatering techniques used, it can have an unwanted influence on top-of-hole analysis. Strategies are proposed to overcome these effects, improving sample quality. Successful solids control and cuttings dewatering for water-powered percussive drilling is presented, contributing towards the successful advancement of coiled tubing based greenfields mineral exploration.
Keywords: cuttings, dewatering, flocculation, percussive drilling, solids control
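The power-law relationship for particle size distributions mentioned in this abstract can be recovered from cumulative sieve data with a simple log-log regression. The data below are hypothetical, chosen only to illustrate the fit; the slope plays the role of the distribution modulus in a Gates-Gaudin-Schumann-type power law, not a value reported by the study.

```python
import math

# Hypothetical cumulative particle-size data (not measured cuttings data):
# size in microns vs. cumulative mass percent passing that size.
sizes_um = [10, 20, 40, 80, 160]
pct_passing = [5.0, 10.0, 20.0, 40.0, 80.0]

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) against log(x)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

modulus = loglog_slope(sizes_um, pct_passing)
print(round(modulus, 3))
```

A slope near 1 here simply reflects the synthetic data; for real cuttings the fitted modulus would characterise how strongly the distribution skews towards fines, which is the input needed for sizing cut points and dewatering equipment.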
Procedia PDF Downloads 250
72 Research Cooperation with Ukraine in Terms of Food Chain Safety Control in the Frame of the MICRORISK Project
Authors: Kinga Wieczorek, Elzbieta Kukier, Remigiusz Pomykala, Beata Lachtara, Renata Szewczyk, Krzysztof Kwiatek, Jacek Osek
Abstract:
The MICRORISK project (Research cooperation in assessment of microbiological hazard and risk in the food chain) was funded by the European Commission under the FP7 PEOPLE 2012 IRSES call within the International Research Staff Exchange Scheme of the Marie Curie Actions and carried out from 2014 to 2015. The main aim of the project was to establish cooperation between the European Union (EU) and a third country in an area important from the public health point of view. The following organizations were engaged in the activity: the National Veterinary Research Institute (NVRI) in Pulawy, Poland (coordinator); the French Agency for Food, Environmental and Occupational Health & Safety (ANSES) in Maisons-Alfort, France; the National Scientific Center Institute of Experimental and Clinical Veterinary Medicine (NSC IECVM) in Kharkov, Ukraine; and the State Scientific and Research Institute of Laboratory Diagnostics and Veterinary and Sanitary Expertise (SSRILDVSE) in Kyiv, Ukraine. The results of the project showed that Ukraine applies microbiological criteria in accordance with Commission Regulation (EC) No 2073/2005 of 15 November 2005 on microbiological criteria for foodstuffs. Compliance concerns both the food safety criteria applicable at retail and the evaluation criteria for process hygiene in food production. In addition, Ukrainian legislation provides for criteria that have no counterparts in the food law of the European Union and are based on the provisions of Ukrainian law. Partial coherence of the Ukrainian and EU legal requirements in terms of microbiological criteria for food and feed concerns microbiological parameters such as total plate count, coliforms, and coagulase-positive Staphylococcus spp., including S. aureus.
Analysis of the laboratory methods used for microbiological hazard control in the food production chain showed that most methods used in the EU are well known to the Ukrainian partners, and many of them are routinely applied either as the only standards in laboratory practice or simultaneously with Ukrainian methods. The area without any legislation, where the EU regulations and analytical methods should be implemented, is the detection of Shiga toxin-producing E. coli, including E. coli O157, and of staphylococcal enterotoxins. During the project, an analysis of the existing Ukrainian and EU data concerning the prevalence of the most important food-borne pathogens at different stages of the food production chain was performed. In particular, the prevalence of Salmonella spp., Campylobacter spp., L. monocytogenes as well as clostridia was examined. The analysis showed that poultry meat still appears to be the most important food-borne source of Campylobacter and Salmonella in the EU. On the other hand, L. monocytogenes was seldom detected above the legal safety limit (100 cfu/g) among the EU countries. Moreover, the analysis revealed a lack of comprehensive data regarding the prevalence of the most important food-borne pathogens in Ukraine. The results of the MICRORISK project and the networking activities among the research organizations participating in its tasks will help with better mutual recognition of areas that are very important from the public health point of view, such as microbiological hazards in the food production chain, and will ultimately help to improve food quality and safety for consumers.
Keywords: cooperation, European Union, food chain safety, food law, microbiological risk, Microrisk, Poland, Ukraine
Procedia PDF Downloads 377
71 De-convolution Based IVIVC Correlation for Tacrolimus ER Tablet (Narrow Therapeutic Index Drug) With Widening of Dissolution Prediction for Virtual Bioequivalence
Authors: Sajad Khaliq Dar, Dipanjan Goswami, Arshad H. Khuroo, Mohd. Akhtar, Pulak Kumar Metia, Sudershan Kumar
Abstract:
Background: Development of modified-release oral solid dosage (OSD) formulations like tacrolimus in narrow therapeutic categories, together with high levels of intra-individual variability, imposes greater challenges. Risk assessment for bioequivalence studies requires developing a suitable design through pilot studies involving the comparison of multiple formulations of the same product with a marketed product to understand the in-vivo behaviour. These formulations could have varying coating levels and other minor quantitative differences to achieve the desired release rate for the final product. Although small-scale studies are critical before the conduct of full-scale pharmacokinetic (PK) studies, regulatory agencies evaluate critical bioavailability attributes (CBA) before approving the submitted dossiers. Since tacrolimus is a BCS Class II drug, developing the extended-release formulation, in addition to the associated challenges, provides an opportunity to present in vitro-in vivo correlations (IVIVC) to regulatory agencies, not only to exhibit product quality but also to reduce the burden of additional human trials and the cost of bringing the product to market. Objective: The objective of this study was to develop a Level A in vitro-in vivo correlation (IVIVC) model for Sun Pharma’s test formulation Tacrolimus ER tablet 4 mg and extend its application to a widened dissolution window of 25% at the 2.5-hour (critical release time) sampling time point. Experimental Procedure: Following two in-vivo studies, a pilot study evaluating two test prototypes in 24 subjects (under fasting conditions) and a pivotal study in 50 subjects (under fasting conditions), the observed pharmacokinetic profiles were used for IVIVC model development. The dissolution medium was 0.005% HPC + 0.25% SLS in 900 mL water at pH 4.50, using USP II (paddle) apparatus with alternative sinkers operated at 100 RPM.
The sampling time points were chosen to mimic the drug absorption in vivo. The best fit to the dissolution data was obtained using Makoid-Banakar kinetics. Deconvolution, anchored in single-compartment theory via the Wagner-Nelson method, was then applied to the tacrolimus slow-release formulation batch with a film coating weight build-up of 5.4% (used in the pilot bio-study), the medium-release batch with Hypromellose (retard-release exhibit batch used in the pivotal study), and the fast-release formulation batch with a film coating weight build-up of 5.05% (used in the pilot bio-study). Results and Conclusion: The results were deemed acceptable, as the prediction errors for internal and external validation were < 3%, indicating that in-vitro drug release mimics in-vivo absorption. Moreover, the prediction result for the Test/Reference ratio was < 15% for all test formulations, and the widened dissolution predictions (i.e., 39%-64% drug release at 2.5 hours) were well within 80-125% when compared against Envarsus XR (the reference drug). This validated IVIVC model can be used in future exploration of dose titration with a 1 mg tacrolimus ER OSD as a surrogate for in-vivo bioequivalence trials.
Keywords: pharmacokinetics, BCS, oral dosage form, bioavailability, intra-individual variability
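The Wagner-Nelson deconvolution used in this abstract estimates the fraction absorbed as F(t) = (C(t) + ke * AUC(0-t)) / (ke * AUC(0-inf)) under a one-compartment assumption. The concentration-time values and elimination rate constant below are hypothetical, chosen only to illustrate the calculation, not tacrolimus study data.

```python
# Hypothetical plasma concentration-time profile (not study data).
times_h = [0, 1, 2, 4, 8, 12]
conc = [0.0, 2.0, 3.5, 3.0, 1.5, 0.7]
ke = 0.2  # assumed first-order elimination rate constant (1/h)

def cumulative_auc(t, c):
    """Cumulative AUC at each sampling time by the linear trapezoidal rule."""
    auc, out = 0.0, [0.0]
    for i in range(1, len(t)):
        auc += 0.5 * (c[i] + c[i - 1]) * (t[i] - t[i - 1])
        out.append(auc)
    return out

auc_t = cumulative_auc(times_h, conc)
auc_inf = auc_t[-1] + conc[-1] / ke  # tail extrapolation from the last sample

# Wagner-Nelson fraction absorbed at each time point.
frac_absorbed = [(c + ke * a) / (ke * auc_inf) for c, a in zip(conc, auc_t)]
print([round(f, 3) for f in frac_absorbed])
```

In an IVIVC workflow, this in-vivo fraction-absorbed curve is what gets correlated (Level A, point-to-point) against the in-vitro fraction dissolved at matching times.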
Procedia PDF Downloads 5
70 Re-Designing Community Foodscapes to Enhance Social Inclusion in Sustainable Urban Environments
Authors: Carles Martinez-Almoyna Gual, Jiwon Choi
Abstract:
Urban communities face risks of disintegration and segregation as a consequence of globalised migration processes towards urban environments. Linking social and cultural components with environmental and economic dimensions becomes the goal of all the disciplines that aim to shape more sustainable urban environments. Solutions require interdisciplinary approaches and the use of a complex array of tools. One of these tools is the implementation of urban farming, which provides a wide range of advantages for creating more inclusive spaces and integrated communities. Since food is strongly related to the values and identities of any cultural group, it can be used as a medium to promote social inclusion in the context of urban multicultural societies. By bringing people together into specific urban sites, food production can be integrated into multifunctional spaces while addressing social, economic and ecological goals. The goal of this research is to assess different approaches to urban agriculture by analysing three existing community gardens located in Newtown, a suburb of Wellington, New Zealand. As a context for developing research, Newtown offers different approaches to urban farming and is particularly valuable for observing current trends of socialization in diverse and multicultural societies. All three spaces are located on public land owned by Wellington City Council and confined to a small, complex and progressively denser urban area. The analysis focused on social, cultural and physical dimensions, combining community engagement with different techniques of spatial assessment. At the same time, a detailed investigation of each community garden was conducted with comparative analysis methodologies. This multidirectional analysis setting was established to extract both specific and typological knowledge from the case studies. Each site was analysed and categorised under three broad themes: people, space and food.
The analysis revealed that the three case studies had markedly different spatial settings, different approaches to food production, and varying profiles of supportive communities. The main differences identified were demographics, values, objectives, internal organization, appropriation, and perception of the space. The community gardens were approached as case studies for developing design research. Following participatory design processes with the different communities, the knowledge gained from the analysis was used to propose changes in the physical environment. The end goal of the design research was to improve the capacity of the spaces to facilitate social inclusiveness. In order to generate tangible changes, a range of small, strategic and feasible spatial interventions was explored. The smallness of the proposed interventions facilitates implementation by reducing time frames, technical resources, funding needs, and legal processes, working within the community's own realm. These small interventions are expected to be implemented over time as part of an ongoing collaboration between the different communities, the university, and the local council. The applied research methodology showcases the capacity of universities to develop civic engagement by working with real communities that have concrete needs and face overall threats of disintegration and segregation.
Keywords: community gardening, landscape architecture, participatory design, placemaking, social inclusion
Procedia PDF Downloads 128
69 Challenges to Developing a Trans-European Programme for Health Professionals to Recognize and Respond to Survivors of Domestic Violence and Abuse
Authors: June Keeling, Christina Athanasiades, Vaiva Hendrixson, Delyth Wyndham
Abstract:
Recognition and education in violence, abuse, and neglect for medical and healthcare practitioners (REVAMP) is a trans-European project aiming to introduce a training programme that has been specifically developed by partners across seven European countries to meet the needs of medical and healthcare practitioners. Amalgamating the knowledge and experience of clinicians, researchers, and educators from interdisciplinary and multi-professional backgrounds, REVAMP has tackled the under-resourced and underdeveloped area of domestic violence and abuse. The team designed an online training programme to support medical and healthcare practitioners to recognise and respond appropriately to survivors of domestic violence and abuse at their point of contact with a health provider. The REVAMP partner countries are France, Lithuania, Germany, Greece, Iceland, Norway, and the UK. The training is delivered through a series of interactive online modules, adapting evidence-based pedagogical approaches to learning. Capturing and addressing the complexities of the project impacted the methodological decisions and approaches to evaluation. The challenge was to find an evaluation methodology that captured valid data across all partner languages to demonstrate the extent of the change in knowledge and understanding. Co-development by all team members was a lengthy iterative process, challenged by a lack of consistency in terminology. A mixed methods approach enabled both qualitative and quantitative data to be collected at the start, during, and at the conclusion of the training for the purposes of evaluation. The module content and evaluation instrument were accessible in each partner country's language. Collecting both types of data provided a high-level snapshot of attainment via the quantitative dataset and an in-depth understanding of the impact of the training from the qualitative dataset. The analysis was mixed methods, with integration at multiple interfaces.
The primary focus of the analysis was to support the overall project evaluation for the funding agency. A key project outcome was identifying that the trans-European approach posed several challenges. Firstly, the project partners did not share a first language or a legal or professional approach to domestic abuse and neglect. This was negotiated through complex, systematic, and iterative interaction between team members so that consensus could be achieved. Secondly, the context of the data collection in several different cultural, educational, and healthcare systems across Europe challenged the development of a robust evaluation. The participants in the pilot evaluation shared that the training was contemporary, well-designed, and of great relevance to inform practice. Initial results from the evaluation indicated that the participants were drawn from more than eight partner countries due to the online nature of the training. The primary results indicated a high level of engagement with the content and achievement through the online assessment. The main finding was that the participants perceived the impact of domestic abuse and neglect in very different ways in their individual professional contexts. Most significantly, the participants recognised the need for the training and the gap that existed previously. It is notable that a mixed-methods evaluation of a trans-European project is unusual at this scale.
Keywords: domestic violence, e-learning, health professionals, trans-European
Procedia PDF Downloads 85
68 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators
Authors: K. O'Malley
Abstract:
Background: The emergence and rapid evolution of large language models (LLMs) (i.e., models of generative artificial intelligence, or AI) has been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated using predictive modeling of vast swathes of internet text and data and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error. This poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT 4.0) in standardized state medical licensing examinations. Methods: A considered search strategy was used to interrogate the PubMed electronic database. The keywords ‘ChatGPT’ AND ‘medical education’ OR ‘medical school’ OR ‘medical licensing exam’ were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years. The search was limited to publications in the English language only. Eligibility was ascertained based on the study title and abstract and confirmed by consulting the full-text document. Data were extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications that were screened. 225 original articles were identified, of which 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent.
The mean score across all studies was 82.49 percent (SD = 5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT, as outlined above, may lend itself to unfair use (such as the plagiarism of deliverable coursework) and pose unforeseen ethical challenges (arising from algorithmic bias). Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. In the aforementioned studies, ChatGPT exhibits a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education who are less likely to possess the requisite insight or knowledge to recognize errors, inaccuracies or false information. Educators must inform themselves of these emerging challenges to effectively address them and mitigate potential disruption in academic fora.
Keywords: artificial intelligence, ChatGPT, generative AI, large language models, licensing exam, medical education, medicine, university
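The quoted error margins follow directly from the reported score range; a quick arithmetic check using only the figures reported in this abstract:

```python
# Score range reported across the reviewed studies (percent).
low_score, high_score = 67.1, 88.6

# The margin of error is the shortfall from a perfect score, so the best
# score yields the smallest margin and the worst score the largest.
margins = (round(100 - high_score, 1), round(100 - low_score, 1))
print(margins)  # (11.4, 32.9)
```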
Procedia PDF Downloads 34
67 The Development, Use and Impact of an Open Source, Web-Based, Video-Annotation Tool to Provide Job-Embedded Professional Development for Educators: The Coaching Companion
Authors: Gail Joseph
Abstract:
In the United States, as the quality and education requirements of PreK teachers advance, there are concerns regarding barriers for existing early childhood educators to access formal degrees and ongoing professional development. Barriers relate to affordability and access. Affordability is a key factor that impacts teachers' access to degree programs: the lack of financial resources makes it difficult for many qualified candidates to begin, and complete, degree programs. Even if funding were not an issue, accessibility remains a pressing issue in higher education. Common barriers include geography, long work hours, lack of professional community, childcare, and unclear articulation agreements. Greater flexibility is needed to allow all early childhood professionals to pursue college coursework that takes into consideration the many competing demands on their schedules. For these busy professionals, it is particularly important that professional development opportunities are available "on demand" and are seen as relevant to their work. Courses that are available during non-traditional hours make attendance more accessible, and professional development that is relevant to what teachers need to know and be able to do to be effective in their current positions increases both access to, and the impact of, ongoing professional education. EarlyEdU at the University of Washington provides institutes of higher education and state professional development systems with free, comprehensive, competency-based college courses based on the latest science of how to optimize child learning and outcomes across developmental domains. The coursework embeds an Intentional Teaching Framework, which requires teachers to know what to do in the moment, see effective teaching in themselves and others, enact these practices in the classroom, reflect on what works and what does not, and improve with thoughtful practices.
Reinforcing the Intentional Teaching Framework in EarlyEdU courses is the Coaching Companion, an open-source, web-based video annotation learning tool that supports coaching in higher education by enabling students to view and refine their teaching practices. The tool is integrated throughout EarlyEdU courses. With the Coaching Companion, students upload videos of their teaching interactions and then reflect on the degree to which they incorporate evidence-based practices. The Coaching Companion eliminates the traditional separation of theory and practice in college-based teacher preparation. Together, the Intentional Teaching Framework and the Coaching Companion transform the course instructor into a job-embedded coach. The instructor watches student interactions with children on video using the Coaching Companion and looks specifically for interactions defined in course assignments, readings, and lectures. Based on these observations, the instructor offers feedback and proposes next steps. Developed with federal and philanthropic funds, all EarlyEdU courses and the Coaching Companion are available for free to 2- and 4-year colleges and universities with early childhood degrees, as well as to state early learning and education departments, to increase access to high-quality professional development. We studied the impact of the Coaching Companion in two courses and demonstrated a significant increase in the quality of teacher-child interactions as measured by the PreK CLASS quality teaching assessment. Implications related to policy and practice are discussed.
Keywords: education technology, distance education, early childhood education, professional development
Procedia PDF Downloads 134
66 The Effect of Using EMG-Based Luna Neurorobotics for Strengthening of the Affected Side in Chronic Stroke Patients - Retrospective Study
Authors: Surbhi Kaura, Sachin Kandhari, Shahiduz Zafar
Abstract:
Chronic stroke, characterized by persistent motor deficits, often necessitates comprehensive rehabilitation interventions to improve functional outcomes and mitigate long-term dependency. Luna neurorobotic devices, integrated with EMG feedback systems, provide an innovative platform for facilitating neuroplasticity and functional improvement in stroke survivors. This retrospective study investigates the impact of EMG-based Luna neurorobotic interventions on the strengthening of the affected side in chronic stroke patients. Stroke is a debilitating condition that, when not effectively treated, can result in significant deficits and lifelong dependency; common issues such as neglect of the affected limbs can lead to weakness in chronic stroke cases. In rehabilitation, active patient participation activates the sensorimotor network during motor control far more than passive movement does. This study therefore assesses how electromyographically triggered (EMG-triggered) robotic treatments affect walking and ankle muscle force after an ischemic stroke, and how the coactivation of agonist and antagonist muscles contributes to neuroplasticity with the assistance of robotic biofeedback. Methods: The study utilized EMG-based robotic techniques for daily rehabilitation in chronic stroke patients, offering feedback and monitoring progress. Each patient received one session per day for two weeks; the intervention group underwent 45 minutes of robot-assisted training and exercise at the hospital, while the control group performed exercises at home. Eight participants with impaired motor function and gait after stroke were involved in the study.
EMG-based biofeedback exercises were administered through the LUNA neurorobotic machine, progressing from trigger-and-release mode to trigger-and-hold, and later transitioning to dynamic mode. Assessments were conducted at baseline and after two weeks, and included the Timed Up and Go (TUG) test, a 10-meter walk test (10MWT), the Berg Balance Scale (BBS), gait parameters such as cadence and step length, upper limb strength measured by EMG threshold in microvolts, and force in Newton meters. Results: The assessment scales for motor strength and balance illustrate the benefits of EMG biofeedback following LUNA robotic therapy. In the analysis of the left hemiparetic group, an increase in strength post-rehabilitation was observed. The pre-rehabilitation TUG mean value was 72.4 seconds, which decreased to 42.4 ± 0.03880133 seconds post-rehabilitation, with a significant difference indicated by a p-value below 0.05, reflecting a reduced task completion time. Similarly, in the force-based task, the pre-rehabilitation dynamic knee force was 18.2 Nm, which increased to 31.26 Nm during knee extension post-rehabilitation. A Student's t-test comparing pre- and post-rehabilitation values showed a p-value of 0.026, signifying a significant difference and indicating an increase in the strength of the knee extensor muscles after LUNA robotic rehabilitation. Lastly, at baseline, the EMG value for ankle dorsiflexion was 5.11 µV, which increased to 43.4 ± 0.06 µV post-rehabilitation, signifying an increase in the threshold and the patient's ability to recruit more motor units during left ankle dorsiflexion. Conclusion: This study evaluated the impact of EMG- and dynamic-force-based rehabilitation devices on walking and strength of the affected side in chronic stroke patients, without nominal data comparisons among stroke patients.
Additionally, it provides insights into the inclusion of EMG-triggered neurorehabilitation robots in the daily rehabilitation of patients.
Keywords: neurorehabilitation, robotic therapy, stroke, strength, paralysis
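The pre/post comparison reported above (TUG time falling from ~72.4 s to ~42.4 s, p < 0.05) is a paired design, since each patient serves as their own control. A minimal sketch of that test follows; the per-patient times are hypothetical, as the abstract gives only group-level values for the eight participants.

```python
# Paired pre/post comparison sketch; all per-patient TUG times (seconds)
# are hypothetical stand-ins consistent with the reported group means.
from math import sqrt
from statistics import mean, stdev

tug_pre = [88.0, 70.5, 61.2, 79.8, 66.4, 75.1, 68.9, 69.3]   # hypothetical
tug_post = [50.2, 41.8, 35.6, 47.3, 39.9, 44.0, 40.5, 39.9]  # hypothetical

# Paired design: test the per-patient differences, not the raw groups
diffs = [pre - post for pre, post in zip(tug_pre, tug_post)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / sqrt(n))

# Two-tailed critical t for df = n - 1 = 7 at alpha = 0.05 is about 2.365
significant = t_stat > 2.365
print(f"mean reduction = {mean(diffs):.1f} s, t = {t_stat:.2f}, significant: {significant}")
```

A paired test is the appropriate choice here because within-patient variability is much smaller than between-patient variability in chronic stroke cohorts.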
Procedia PDF Downloads 63
65 Comparative Assessment of the Thermal Tolerance of Spotted Stemborer, Chilo partellus Swinhoe (Lepidoptera: Crambidae) and Its Larval Parasitoid, Cotesia sesamiae Cameron (Hymenoptera: Braconidae)
Authors: Reyard Mutamiswa, Frank Chidawanyika, Casper Nyamukondiwa
Abstract:
Under stressful thermal environments, insects adjust their behaviour and physiology to maintain key life-history activities and improve survival. For interacting species, mutual or antagonistic, thermal stress may affect the participants in differing ways, which may then affect the outcome of the ecological relationship. In agroecosystems, this may be the fate of relationships between insect pests and their antagonistic parasitoids under acute and chronic thermal variability. Against this background, we investigated the thermal tolerance of different developmental stages of Chilo partellus Swinhoe (Lepidoptera: Crambidae) and its larval parasitoid Cotesia sesamiae Cameron (Hymenoptera: Braconidae) using both dynamic and static protocols. In laboratory experiments, we determined upper and lower lethal temperatures using direct plunge protocols in programmable water baths (Systronix Scientific, South Africa); the effects of ramping rate on critical thermal limits following standardized protocols, using insulated double-jacketed chambers ('organ pipes') connected to a programmable water bath (Lauda Eco Gold, Lauda Dr. R. Wobser GmbH and Co. KG, Germany); supercooling points (SCPs) following dynamic protocols, using a Pico logger connected to a programmable water bath; and heat knock-down time (HKDT) and chill-coma recovery time (CCRT) following static protocols in climate chambers (HPP 260, Memmert GmbH + Co. KG, Germany) connected to a camera (HD Covert Network Camera, DS-2CD6412FWD-20, Hikvision Digital Technology Co., Ltd, China). When exposed for two hours to a static temperature, lower lethal temperatures ranged from -9 to 6, -14 to -2, and -1 to 4 °C, while upper lethal temperatures ranged from 37 to 48, 41 to 49, and 36 to 39 °C for C. partellus eggs, larvae and C. sesamiae adults, respectively. Faster heating rates improved critical thermal maxima (CTmax) in C. partellus larvae and in adults of both C. partellus and C. sesamiae.
Lower cooling rates improved critical thermal minima (CTmin) in C. partellus and C. sesamiae adults while compromising CTmin in C. partellus larvae. The mean SCPs for C. partellus larvae, pupae and adults were -11.82 ± 1.78, -10.43 ± 1.73 and -15.75 ± 2.47 °C respectively, with adults having the lowest SCPs. Heat knock-down time and chill-coma recovery time varied significantly between C. partellus larvae and adults. Larvae had a higher HKDT than adults, while the latter recovered significantly faster following chill-coma. Current results suggest developmental-stage differences in C. partellus thermal tolerance (with respect to lethal temperatures and critical thermal limits) and a compromised temperature tolerance of the parasitoid C. sesamiae relative to its host, suggesting potential asynchrony between host and parasitoid population phenology and, consequently, reduced biocontrol efficacy under global change. These results have broad implications for biological pest management and insect-natural enemy interactions under rapidly changing thermal environments.
Keywords: chill-coma recovery time, climate change, heat knock-down time, lethal temperatures, supercooling point
Procedia PDF Downloads 239
64 EGF Serum Level in Diagnosis and Prediction of Mood Disorder in Adolescents and Young Adults
Authors: Monika Dmitrzak-Weglarz, Aleksandra Rajewska-Rager, Maria Skibinska, Natalia Lepczynska, Piotr Sibilski, Joanna Pawlak, Pawel Kapelski, Joanna Hauser
Abstract:
Epidermal growth factor (EGF) is a well-known neurotrophic factor involved in neuronal growth and synaptic plasticity. Proteomic research aimed at identifying novel candidate biological markers for mood disorders has reported elevated EGF serum levels in patients during depressive episodes. However, the association of EGF with the mood disorder spectrum among adolescents and young adults has not been studied extensively. In this study, we aim to investigate the serum levels of EGF in adolescents and young adults during hypo/manic and depressive episodes and in remission, compared to a healthy control group. We enrolled 80 patients aged 12-24 years with a primary diagnosis of mood disorder spectrum in a 2-year follow-up study, together with 35 healthy volunteers matched by age and gender. Diagnoses were established according to DSM-IV-TR criteria using structured clinical interviews: K-SADS for children and adolescents, and SCID for young adults. Clinical and biological evaluations were made at baseline and in euthymic mood (at the 3rd or 6th month of treatment and after 1 and 2 years). The Young Mania Rating Scale and the Hamilton Rating Scale for Depression were used for assessment. The study protocols were approved by the relevant ethics committee. Serum protein concentration was determined by the Enzyme-Linked Immunosorbent Assay (ELISA) method, using the Human EGF DuoSet ELISA kit (cat. no DY 236; R&D Systems). Serum EGF levels were analysed against the following variables: age, age under 18 vs. above 18 years, sex, family history of affective disorders, and drug-free vs. medicated status. The Shapiro-Wilk test was used to test the normality of the data, and the homogeneity of variance was calculated with Levene's test. EGF levels showed a non-normal distribution, and the homogeneity of variance was violated.
Non-parametric tests were therefore applied in the analyses: the Mann-Whitney U test, Kruskal-Wallis ANOVA, Friedman's ANOVA, the Wilcoxon signed-rank test, and the Spearman correlation coefficient. The statistical significance level was set at p<0.05. Elevated EGF levels at baseline (p=0.001) and at month 24 (p=0.02) were detected in study subjects compared with controls. An increased EGF level was observed in women compared to men in the study group at month 12 (p=0.02). Using the Wilcoxon signed-rank test, differences in EGF levels were detected: a decrease from baseline to month 3 (p=0.014) and increases between month 3 and month 24 (p=0.013), month 6 and month 12 (p=0.021), and month 6 and month 24 (p=0.008). The EGF level at baseline was negatively correlated with depression and mania occurrence at 24 months. The EGF level at 24 months was positively correlated with depression and mania occurrence at 12 months. No other correlations of EGF levels with clinical and demographic variables were detected. The findings of the present study indicate that the EGF serum level is significantly elevated in the study group of patients compared to the controls. We also observed fluctuations in EGF levels during the two years of observation. EGF seems promising as an early marker for prediction of diagnosis, course of illness and treatment response in young patients during a first episode of mood disorder, which requires further investigation. This work was funded by the National Science Centre, Poland, grant no 2011/03/D/NZ5/06146.
Keywords: biological marker, epidermal growth factor, mood disorders, prediction
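The Mann-Whitney U test used above for the non-normal EGF data can be sketched as follows. The serum concentrations are hypothetical stand-ins; the abstract reports only the group-level result (elevated EGF in patients, p = 0.001 at baseline).

```python
# Mann-Whitney U sketch for a patients-vs-controls comparison; all serum
# EGF concentrations (pg/mL) below are hypothetical.
from math import erfc, sqrt

patients = [310, 255, 402, 287, 350, 298, 371, 264, 330, 289]  # hypothetical
controls = [210, 188, 240, 205, 230, 195, 221, 212, 199, 218]  # hypothetical

# Rank all observations together, averaging ranks over ties
pooled = sorted(patients + controls)
def avg_rank(value):
    positions = [i + 1 for i, x in enumerate(pooled) if x == value]
    return sum(positions) / len(positions)

n1, n2 = len(patients), len(controls)
r1 = sum(avg_rank(v) for v in patients)  # rank sum of the patient group
u1 = r1 - n1 * (n1 + 1) / 2
u = min(u1, n1 * n2 - u1)                # Mann-Whitney U statistic

# Normal approximation to the null distribution (reasonable for n >= ~8)
mu = n1 * n2 / 2
sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
p_two_tailed = erfc(abs((u - mu) / sigma) / sqrt(2))

print(f"U = {u:.0f}, two-tailed p = {p_two_tailed:.5f}")
```

Because the test compares ranks rather than raw values, it needs neither the normality nor the equal-variance assumptions that the Shapiro-Wilk and Levene checks showed to be violated.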
Procedia PDF Downloads 190
63 Meta-Analysis of Previously Unsolved Cases of Aviation Mishaps Employing Molecular Pathology
Authors: Michael Josef Schwerer
Abstract:
Background: Analyzing any aircraft accident is mandatory based on the regulations of the International Civil Aviation Organization and the respective country's criminal prosecution authorities. Legal medicine investigations are unavoidable when fatalities involve the flight crew or when doubts arise concerning the pilot's aeromedical health status before the event. As a result of the frequently tremendous blunt and sharp force trauma, the impact of the aircraft with the ground, subsequent blast or fire exposure of the occupants, or putrefaction of the bodies in cases of delayed recovery, relevant findings can be masked or destroyed and are therefore inaccessible to standard pathology practice comprising just forensic autopsy and histopathology. Such cases are at considerable risk of remaining unsolved, without legal consequences for those responsible. Further, no lessons can be drawn from these scenarios to improve flight safety and prevent future mishaps. Aims and Methods: To learn from previously unsolved aircraft accidents, re-evaluations of the investigation files and modern molecular pathology studies were performed. Genetic testing involved predominantly PCR-based analysis of gene regulation, studying DNA promoter methylation, RNA transcription and post-transcriptional regulation. In addition, the presence or absence of infective agents, particularly DNA and RNA viruses, was studied. Technical adjustments of molecular genetic procedures were necessary when working with archived sample material, and standards for the proper interpretation of the respective findings had to be established. Results and Discussion: Additional molecular genetic testing significantly contributes to the quality of forensic pathology assessment in aviation mishaps. Previously undetected cardiotropic viruses potentially explain, e.g., a pilot's sudden incapacitation resulting from cardiac failure or myocardial arrhythmia.
In contrast, negative results for infective agents help rule out concerns about an accident pilot's fitness to fly and support the aeromedical examiner's precedent decision to issue him or her an aeromedical certificate. Care must be taken in the interpretation of genetic testing for pre-existing diseases such as hypertrophic cardiomyopathy or ischemic heart disease. Molecular markers such as mRNAs or miRNAs, which can establish these diagnoses in clinical patients, might be misleading in flight crew members because of adaptive changes in their tissues resulting from, for instance, repeated mild hypoxia during flight. Military pilots especially demonstrate significant physiological adjustments to their somatic burdens in flight, such as cardiocirculatory stress and air combat maneuvers. Their non-pathogenic alterations in gene regulation and expression will likely be mistaken for genuine disease by inexperienced investigators. Conclusions: The growing influence of molecular pathology on legal medicine practice has found its way into aircraft accident investigation. Provided that appropriate quality standards for laboratory work and data interpretation are in place, forensic genetic testing supports the medico-legal analysis of aviation mishaps and potentially reduces the number of unsolved events in the future.
Keywords: aviation medicine, aircraft accident investigation, forensic pathology, molecular pathology
Procedia PDF Downloads 47
62 Factors Associated with Risky Sexual Behaviour in Adolescent Girls and Young Women in Cambodia: A Systematic Review
Authors: Farwa Rizvi, Joanne Williams, Humaira Maheen, Elizabeth Hoban
Abstract:
There is an increase in risky sexual behaviour and unsafe sex among adolescent girls and young women aged 15 to 24 years in Cambodia, which negatively affects their reproductive health by increasing the risk of contracting sexually transmitted infections and of unintended pregnancies. Risky sexual behaviour includes ‘having sex at an early age, having multiple sexual partners, having sex while under the influence of alcohol or drugs, and unprotected sexual behaviors’. A systematic review of quantitative research conducted in Cambodia was undertaken, using the theoretical framework of the Social Ecological Model to identify the personal, social and cultural factors associated with risky sexual behaviour and unsafe sex in young Cambodian women. PRISMA guidelines were used to search databases including Medline Complete, PsycINFO, CINAHL Complete, Academic Search Complete, Global Health, and Social Work Abstracts. Additional searches were conducted in Science Direct, Google Scholar and in grey literature sources. A risk-of-bias tool developed explicitly for systematic reviews of cross-sectional studies was used. After inter-rater agreement, the summary assessment of overall risk of bias was high in two studies, moderate in one study and low in one study. The search strategy included a combination of subject terms and free-text terms. The medical subject headings (MeSH) terms included were: contracept* or ‘birth control’ or ‘family planning’ or pregnan* or ‘safe sex’ or ‘protected intercourse’ or ‘unprotected intercourse’ or ‘protected sex’ or ‘unprotected sex’ or ‘risky sexual behaviour*’ or ‘abort*’ or ‘planned parenthood’ or ‘unplanned pregnancy’ AND ( barrier* or obstacle* or challenge* or knowledge or attitude* or factor* or determinant* or choic* or uptake or discontinu* or acceptance or satisfaction or ‘needs assessment’ or ‘non-use’ or ‘unmet need’ or ‘decision making’ ) AND Cambodia*.
Initially, 300 studies were identified using keywords, and finally four quantitative studies were selected based on the inclusion criteria. The four studies were published between 2010 and 2016. The study participants ranged in age from 10 to 24 years, single or married, with 3 to 10 completed years of education. The mean age at sexual debut was reported to be 18 years. Using the perspective of the Social Ecological Model, risky sexual behaviour was associated with individual-level factors including young age at sexual debut, low education, unsafe sex under the influence of alcohol and substance abuse, and multiple sexual partners or transactional sex. Family-level factors included living away from parents, orphan status and low levels of family support. Peer- and partner-level factors included peer delinquency and lack of condom use. Low socioeconomic status at the society level was also associated with risky sexual behaviour. There is scant research on the sexual and reproductive health of adolescent girls and young women in Cambodia. Individual, family and social factors were significantly associated with risky sexual behaviour. More research is required to inform potential preventive strategies and policies that address young women's sexual and reproductive health.
Keywords: adolescents, high-risk sex, sexual activity, unplanned pregnancies
Procedia PDF Downloads 247
61 An Intelligence-Led Methodology for Detecting Dark Actors in Human Trafficking Networks
Authors: Andrew D. Henshaw, James M. Austin
Abstract:
Introduction: Human trafficking is an increasingly serious transnational criminal enterprise and social security issue. Despite ongoing efforts to mitigate the phenomenon and a significant expansion of security scrutiny over past decades, it is not receding. This is true for many nations in Southeast Asia, widely recognized as the global hub for trafficked persons, including men, women, and children. Human trafficking is difficult to address because numerous drivers, causes, and motivators allow it to persist, including non-military and non-traditional security challenges such as climate change, displacement driven by global warming, and natural disasters, which make displaced persons and refugees particularly vulnerable. The issue is so large that conservative estimates put its value at more than $150 billion per year (Niethammer, 2020), spanning sexual slavery and exploitation, forced labour in construction, mining and conflict roles, and the forced marriage of girls and women. Coupled with corruption throughout military, police, and civil authorities around the world, and the active hands of powerful transnational criminal organizations, it is likely that such figures are grossly underestimated, as human trafficking is misreported, under-detected, and deliberately obfuscated to protect those profiting from it. For example, the 2022 UN report on human trafficking shows a 56% reduction in convictions in that year alone (UNODC, 2022). Our Approach: To better understand this, our research utilizes a bespoke methodology. Applying a JAM (Juxtaposition Assessment Matrix), which we previously developed to detect flows of dark money around the globe (Henshaw, A. & Austin, J., 2021), we now focus on the human trafficking paradigm. Indeed, utilizing a JAM methodology has identified key indicators of human trafficking not previously explored in depth.
As a set of structured analytical techniques that provide panoramic interpretations of the subject matter, this iteration of the JAM further incorporates behavioral and driver indicators, including the employment of open-source artificial intelligence (OS-AI) across multiple collection points. The extracted behavioral data were then applied to identify non-traditional indicators as they contribute to human trafficking. Furthermore, as the JAM OS-AI analyses data from the inverted position, i.e., the viewpoint of the traffickers, it examines the behavioral and physical traits required to succeed. This transposed examination of the requirements of success delivers potential leverage points for exploitation in the fight against human trafficking in a new and novel way. Findings: Our approach identified innovative datasets that have previously been overlooked or, at best, undervalued. For example, the JAM OS-AI approach identified critical 'dark agent' lynchpins within human trafficking networks; our preliminary data suggest these have gone unnoticed in part because, in extant research, such agents are difficult to detect and much harder to connect directly to the actors and organizations in trafficking networks. Our research demonstrates that new investigative techniques such as the OS-AI-aided JAM introduce a powerful toolset to increase understanding of human trafficking and transnational crime and to illuminate networks that, to date, avoid global law enforcement scrutiny.
Keywords: human trafficking, open-source intelligence, transnational crime, human security, international human rights, intelligence analysis, JAM OS-AI, dark money
Procedia PDF Downloads 92
60 Sandstone Petrology of the Kolhan Basin, Eastern India: Implications for the Tectonic Evolution of a Half-Graben
Authors: Rohini Das, Subhasish Das, Smruti Rekha Sahoo, Shagupta Yesmin
Abstract:
The Paleoproterozoic Kolhan Group (Purana) constitutes the youngest lithostratigraphic 'outlier' in the Singhbhum Archaean craton. The Kolhan unconformably overlies both the Singhbhum granite and the Iron Ore Group (IOG). Representing a typical sandstone-shale (± carbonate) sequence, the Kolhan is characterized by thin and discontinuous patches of basal conglomerate draped by sandstone beds. The IOG fault limits the western 'distal' margin of the Kolhan basin, which shows evidence of passive subsidence subsequent to the initial rifting stage. The basin evolved as a half-graben under the influence of an extensional stress regime. The inferred tectonic setting of the NE-SW-trending Kolhan basin relates its opening to the E-W extensional stress system that prevailed during the development of the Newer Dolerite dykes. The Paleoproterozoic age of the Kolhan basin is based on the conformable stress pattern responsible both for the basin opening and for the development of the conjugate fracture system along which the Newer Dolerite dykes intruded the Singhbhum Archaean craton. The Kolhan sandstones show a progressive change towards greater textural and mineralogical maturity up-section. The trends of variation in different mineralogical and textural attributes, however, exhibit inflections at different lithological levels. Petrological studies collectively indicate that the sandstones were dominantly derived from a weathered granitic crust under humid climatic conditions. Provenance-derived variations in sandstone composition are therefore key to unraveling regional tectonic histories. The basin axis controlled the progradation direction, which was likely driven by climatically induced sediment influx, a eustatic fall, or both. In the case of the incongruent shift, increased sediment supply permitted the rivers to cross the basinal deep.
The temporal association of the Kolhan with tectonic structures in the belt indicates that syn-tectonic thrust uplift, not isostatic uplift or climate, caused the influx of quartz. The sedimentation pattern in the Kolhan reflects a change from a braided fluvial-ephemeral pattern to a fan-delta-lacustrine type. The channel geometries and the climate exerted a major control on the processes of sediment transfer. Repeated fault-controlled uplift of the source, followed by subsidence and forced regression, generated multiple sediment cycles that led to the fluvial-fan delta sedimentation pattern. The marked variations in the thickness of the fan delta succession and in the stacking pattern in different measured profiles reflect the overriding tectonic controls on fan delta evolution. Accumulated fault displacement created higher accommodation and thicker delta sequences. Intermittent uplift of the fault blocks exposed fresh bedrock to mechanical weathering, generated a large amount of detritus, and resulted in forced closure of the land-locked basin, repeatedly disrupting the fining-upward pattern. The control of source rock lithology or climate was of secondary importance relative to tectonic effects. Such a retrograding fan delta could be a stratigraphic response of connected rift basins at an early stage of extension.
Keywords: Kolhan basin, petrology, sandstone, tectonics
Procedia PDF Downloads 506
59 Quantitative Texture Analysis of Shoulder Sonography for Rotator Cuff Lesion Classification
Authors: Chung-Ming Lo, Chung-Chien Lee
Abstract:
In many countries, the lifetime prevalence of shoulder pain is up to 70%. In America, the health care system spends $7 billion per year on the health issues of shoulder pain. With respect to its origin, up to 70% of shoulder pain is attributed to rotator cuff lesions. This study proposed a computer-aided diagnosis (CAD) system to assist radiologists in classifying rotator cuff lesions with less operator dependence. Quantitative features were extracted from shoulder ultrasound images acquired using an ALOKA alpha-6 US scanner (Hitachi-Aloka Medical, Tokyo, Japan) with a linear array probe (scan width: 36 mm) ranging from 5 to 13 MHz. During examination, patients were in a standard sitting position and the regular routine was followed. After acquisition, the shoulder US images were exported from the scanner and stored as 8-bit images with pixel values ranging from 0 to 255. Based on the sonographic appearance, the boundary of each lesion was delineated by a physician to indicate the specific pattern for analysis. The three lesion categories for classification comprised 20 cases of tendon inflammation, 18 cases of calcific tendonitis, and 18 cases of supraspinatus tear. For each lesion, second-order statistics were quantified in the feature extraction. Second-order statistics are texture features describing the correlations between adjacent pixels in a lesion. Because echogenicity patterns are expressed in grey scale, grey-scale co-occurrence matrices with four angles of adjacent pixels were used. The texture metrics included the mean and standard deviation of energy, entropy, correlation, inverse difference moment, inertia, cluster shade, cluster prominence, and Haralick correlation. The quantitative features were then combined in a multinomial logistic regression classifier to generate a prediction model of rotator cuff lesions.
A multinomial logistic regression classifier is widely used for classification into more than two categories, such as the three lesion types used in this study. In the classifier, backward elimination was used to select the most relevant feature subset, chosen as the trained classifier with the lowest error rate. Leave-one-out cross-validation was used to evaluate the performance of the classifier: each case was held out in turn and classified by the model trained on the remaining cases. With the physician's assessment as the reference standard, the performance of the proposed CAD system was reported as accuracy. As a result, the proposed system achieved an accuracy of 86%. A CAD system based on statistical texture features for interpreting echogenicity values in shoulder musculoskeletal ultrasound was thus established to generate a prediction model for rotator cuff lesions. Clinically, it is difficult to distinguish some kinds of rotator cuff lesions, especially partial-thickness tears of the rotator cuff. Shoulder orthopaedic surgeons and musculoskeletal radiologists report greater diagnostic test accuracy than general radiologists or ultrasonographers, based on the available literature. Consequently, the proposed CAD system, which was developed according to the expertise of a shoulder orthopaedic surgeon, can provide reliable suggestions to general radiologists or ultrasonographers. More quantitative features related to the specific patterns of different lesion types will be investigated in further studies to improve the prediction.
Keywords: shoulder ultrasound, rotator cuff lesions, texture, computer-aided diagnosis
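The second-order statistics described above can be sketched on a toy patch. The study computed eight Haralick-type metrics at four angles over physician-delineated lesion regions; only the horizontal (0 degree) direction and two metrics are shown here, and the 4-level patch is illustrative, not study data.

```python
# Grey-level co-occurrence matrix (GLCM) sketch on a toy 4-level patch.
from math import log2

patch = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
levels = 4

# Symmetric GLCM for horizontally adjacent pixel pairs (angle 0),
# counting each pair in both directions
glcm = [[0.0] * levels for _ in range(levels)]
for row in patch:
    for a, b in zip(row, row[1:]):
        glcm[a][b] += 1
        glcm[b][a] += 1

total = sum(sum(row) for row in glcm)
p = [[v / total for v in row] for row in glcm]  # normalise to probabilities

# Two of the texture metrics listed in the abstract
energy = sum(v * v for row in p for v in row)
entropy = -sum(v * log2(v) for row in p for v in row if v > 0)

print(f"energy = {energy:.3f}, entropy = {entropy:.3f}")
```

In the full pipeline, such metrics (computed at all four angles, with means and standard deviations) would form the feature vector fed to the multinomial logistic regression classifier under leave-one-out cross-validation.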
Procedia PDF Downloads 286
58 A Study of Seismic Design Approaches for Steel Sheet Piles: Hydrodynamic Pressures and Reduction Factors Using CFD and Dynamic Calculations
Authors: Helena Pera, Arcadi Sanmartin, Albert Falques, Rafael Rebolo, Xavier Ametller, Heiko Zillgen, Cecile Prum, Boris Even, Eric Kapornyai
Abstract:
Sheet pile systems can be an interesting solution for harbor or quay designs. However, current design methods lead to conservative approaches due to the lack of a specific basis of design. For instance, some design features still rely on pseudo-static approaches, although the problem is dynamic. In this context, the study focuses on the definition of hydrodynamic water pressure and on the stability analysis of sheet pile systems under seismic loads. During a seismic event, seawater produces hydrodynamic pressures on structures. Current design methods introduce hydrodynamic forces by means of the Westergaard formulation and Eurocode recommendations, applying a constant hydrodynamic pressure on the front sheet pile during the entire earthquake. As a result, the hydrodynamic load may represent 20% of the total force on the sheet pile. Nonetheless, some studies question that approach. Hence, this study assesses the soil-structure-fluid interaction of sheet piles under seismic action in order to evaluate whether current design strategies overestimate hydrodynamic pressures. For that purpose, the study performs various simulations in Plaxis 2D, a well-known geotechnical software, and in CFD models, which capture fluid dynamic behaviour. Since neither Plaxis nor CFD can resolve a coupled soil-fluid problem, the investigation imposes sheet pile displacements from Plaxis as input data for the CFD model. The CFD model then provides hydrodynamic pressures under seismic action, which fit the theoretical Westergaard pressures when these are calculated using the acceleration at each moment of the earthquake. Thus, hydrodynamic pressures fluctuate during the seismic action instead of remaining constant, as design recommendations propose. Additionally, these findings show that, due to its instantaneous nature, the hydrodynamic pressure contributes only about 5% of the total load applied on the sheet pile.
These results are in line with other studies that use added-mass methods for hydrodynamic pressures. Another important feature in sheet pile design is the assessment of overall geotechnical stability. This relies on pseudo-static analysis, since dynamic analysis cannot directly provide a safety calculation, and the seismic action therefore has to be estimated. One of the relevant factors is the selection of the seismic reduction factor. Many studies discuss its importance as well as its uncertainties. Moreover, current European standards do not make a clear statement on this and recommend using a reduction factor equal to 1, which leads to conservative requirements compared with more advanced methods. Given this situation, the study calibrates the seismic reduction factor by fitting results from pseudo-static to dynamic analyses. The investigation concludes that pseudo-static analyses could reduce the seismic action by 40-50%. These results are in line with studies by Japanese and European working groups. In addition, it seems suitable to account for the flexibility of the sheet pile-soil system. Nevertheless, the calibrated reduction factor is subject to the particular conditions of each design case. Further research would help specify recommendations for selecting reduction factor values in the early stages of design. In conclusion, sheet pile design still has room for improvement in its methodologies and approaches, and advanced methods such as those presented in this study can lead to better seismic designs.
Keywords: computational fluid dynamics, hydrodynamic pressures, pseudo-static analysis, quays, seismic design, steel sheet pile
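The contrast between a constant design pressure and an instantaneous one can be sketched with the Westergaard formulation for a rigid vertical wall, p(z) = (7/8) * rho_w * a_h * sqrt(H * z). The water depth, peak acceleration, and sinusoidal ground motion below are illustrative assumptions, not values from the study.

```python
import math

RHO_W = 1025.0  # seawater density, kg/m^3 (assumed)
G = 9.81        # gravitational acceleration, m/s^2

def westergaard_pressure(a_h, z, h_total):
    """Westergaard hydrodynamic pressure (Pa) at depth z on a rigid
    vertical wall retaining water of total depth h_total, for a
    horizontal acceleration a_h (m/s^2): p = (7/8) * rho * a_h * sqrt(H*z)."""
    return 7.0 / 8.0 * RHO_W * a_h * math.sqrt(h_total * z)

H = 12.0          # water depth in front of the sheet pile, m (assumed)
a_peak = 0.3 * G  # peak horizontal ground acceleration (assumed)

# Conventional design: a constant pressure computed from the peak acceleration.
p_design = westergaard_pressure(a_peak, H, H)

# Instantaneous evaluation: using a(t) at each moment makes the pressure
# fluctuate; a crude sinusoidal ground motion illustrates the point.
times = [i * 0.05 for i in range(200)]  # 10 s record at 20 Hz
accels = [a_peak * math.sin(2.0 * math.pi * 1.5 * t) for t in times]
p_history = [westergaard_pressure(a, H, H) for a in accels]
```

The fluctuating pressure only touches the constant design value at the instants of peak acceleration, which is consistent with the study's observation that the instantaneous hydrodynamic contribution is far below the constant-pressure assumption.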
Procedia PDF Downloads 143
57 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems
Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana
Abstract:
Large-scale critical industrial scheduling problems are based on Resource-Constrained Project Scheduling Problems (RCPSP) that necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones), thus requiring practical solutions (i.e., modular, computationally efficient, and producing feasible solutions). To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that addresses the issues arising in the delivery of complex projects. With three interlinked entities (project, task, resource), each with its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can easily be integrated with other optimization problems, existing industrial tools, and unique constraints as required by the use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), planning of a future NPP maintenance operation, and an application in the defense industry on supply chain and factory relocation. In the first use case, the solution, in addition to resource availability and the tasks' logical relationships, also integrates several project-specific constraints for outage management, such as handling resource incompatibilities, updating task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer. This time-effectiveness matches the nature of the problem, where several scenarios (30-40 simulations) are required before finalizing the schedules.
The second use case is a factory relocation project where production lines must be moved to a new site while ensuring continuity of production. This raises the challenge of merging job shop scheduling and the RCPSP with location constraints. Our solution allows the automation of the production tasks while considering the expected production rates. The simulation algorithm manages the use and movement of resources and products to respect a given relocation scenario. The last use case establishes a future maintenance operation in an NPP. The project contains complex and hard constraints, such as strict Finish-Start precedence relationships (successor tasks must start immediately after their predecessors while respecting all constraints), shareable coactivity for managing workspaces, and requirements on the state of "cyclic" resources (resources that can take multiple states but only one at a time), where performing a task can require a unique combination of several cyclic resources. Our solution satisfies the requirement of minimizing the state changes of cyclic resources coupled with makespan minimization. It computes a solution for 80 cyclic resources with 50 incompatibilities between levels in less than a minute. In conclusion, we propose a fast and feasible modular approach to various industrial scheduling problems, validated by domain experts and compatible with existing industrial tools. This approach can be further enhanced by applying machine learning techniques to historically repeated tasks to gain further insights for delay risk mitigation.
Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP
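The greedy heuristic with a dynamic cost function described above can be sketched as a priority-based serial scheduler over discrete time steps. The toy task data, the single renewable resource pool, and the particular priority rule are illustrative assumptions, not the authors' industrial tool.

```python
def greedy_schedule(tasks, capacity):
    """tasks: {name: (duration, demand, [predecessors])} -> ({name: start}, makespan).
    Serial greedy scheduler over discrete time steps with one renewable
    resource pool of the given capacity."""
    start, finish, running = {}, {}, {}
    t = 0
    while len(finish) < len(tasks):
        # Retire tasks that finish at this time step.
        for name, f in list(running.items()):
            if f <= t:
                finish[name] = f
                del running[name]
        used = sum(tasks[n][1] for n in running)
        # Eligible: not yet started and all predecessors finished.
        eligible = [n for n in tasks
                    if n not in start and all(p in finish for p in tasks[n][2])]
        # Dynamic cost: prefer tasks unlocking many successors, then long ones.
        succ = {n: sum(n in tasks[m][2] for m in tasks) for n in tasks}
        eligible.sort(key=lambda n: (-succ[n], -tasks[n][0]))
        for n in eligible:
            if used + tasks[n][1] <= capacity:
                start[n] = t
                running[n] = t + tasks[n][0]
                used += tasks[n][1]
        t += 1
    return start, max(finish.values())

# Toy project: (duration, resource demand, predecessors).
tasks = {
    "A": (3, 2, []),
    "B": (2, 2, ["A"]),
    "C": (4, 1, ["A"]),
    "D": (1, 2, ["B", "C"]),
}
schedule, makespan = greedy_schedule(tasks, capacity=3)
```

Re-evaluating eligibility and priority at every time step is what makes the cost function "dynamic": the ranking can change as the situation evolves, which is the behaviour the abstract describes for its situational assessment.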
Procedia PDF Downloads 201
56 Structural Behavior of Subsoil Depending on Constitutive Model in Calculation Model of Pavement Structure-Subsoil System
Authors: M. Kadela
Abstract:
The load caused by traffic should be transferred harmlessly through the road structure: via the stiff upper layers of the structure (e.g., the asphalt wearing and binding courses), through the base and sub-base layers, and onto the subsoil, directly or through an improved subsoil layer. A reliable description of the interaction in the “road construction – subsoil” system should therefore be one of the basic requirements for assessing the internal forces of the structure and its durability. Analyses of road structures are based on elements of mechanics, which allow computational models to be created, and on experimental results incorporated in the criteria of fatigue life analyses. This approach is a fundamental feature of the commonly used mechanistic methods, which allow arbitrarily complex numerical models to be used in fatigue life evaluations. Considering the work of the “road construction – subsoil” system, it is commonly accepted that, as a result of repetitive loads on the subsoil under the pavement, relatively small deformation grows in the initial phase; this increase then disappears, and the deformation becomes completely reversible. The reliability of the calculation model depends on the appropriate use (for a given type of analysis) of constitutive relationships. Phenomena occurring in the initial stage of the “road construction – subsoil” system are unfortunately difficult to interpret in the modeling process. The classic interpretation of material behavior in the elastic-plastic (e-p) model is that the elastic phase (e) passes into the elastic-plastic phase (e-p) as the load increases (or as deformation grows in the damaged structure).
The paper presents the essence of the calibration process of the cooperating subsystem in the calculation model of the “road construction – subsoil” system, created for mechanistic analysis. The calibration process was directed at showing the impact of the applied constitutive models on the deformation and stress response. The proper comparative base for assessing the reliability of the created models should be, however, the actual, monitored “road construction – subsoil” system. The paper therefore also presents the behavior of the subsoil under cyclic load transmitted by the pavement layers. The response of the subsoil to cyclic load is recorded in situ by an observation system (sensors) installed on a testing ground prepared for this purpose, part of a test road near Katowice, Poland. A different behavior of the homogeneous subsoil under the pavement is observed in different seasons of the year: the pavement works as a flexible structure in summer and as a rigid plate in winter. Although the observed character of the subsoil response is the same regardless of the applied load and area values, this response can be divided into a zone of indirect action of the applied load, extending to a depth of 1.0 m under the pavement, and a zone of small strain, extending to about 2.0 m. This work was supported by the on-going research project “Stabilization of weak soil by application of layer of foamed concrete used in contact with subsoil” (LIDER/022/537/L-4/NCBR/2013) financed by The National Centre for Research and Development within the LIDER Programme. M. Kadela is with the Department of Building Construction Elements and Building Structures on Mining Areas, Building Research Institute, Silesian Branch, Katowice, Poland (e-mail: m.kadela@itb.pl).
Keywords: road structure, constitutive model, calculation model, pavement, soil, FEA, response of soil, monitored system
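The elastic-to-elastic-plastic transition described above can be illustrated with a 1D return-mapping scheme for an elastic-perfectly-plastic material. The stiffness, yield stress, and strain history below are illustrative values, not calibrated to the subsoil in the study.

```python
def ep_response(strain_history, E, sigma_y):
    """Stress history of a 1D elastic-perfectly-plastic material via an
    elastic predictor / plastic corrector (return mapping) scheme."""
    eps_p = 0.0  # accumulated plastic strain
    stresses = []
    for eps in strain_history:
        sigma_trial = E * (eps - eps_p)       # elastic predictor, phase (e)
        if abs(sigma_trial) <= sigma_y:
            sigma = sigma_trial               # still elastic
        else:                                 # phase (e-p): yield reached
            sigma = sigma_y if sigma_trial > 0 else -sigma_y
            eps_p += (sigma_trial - sigma) / E  # return to the yield surface
        stresses.append(sigma)
    return stresses

E = 50e6          # Young's modulus, Pa (illustrative)
sigma_y = 100e3   # yield stress, Pa (illustrative)
loading = [0.0, 0.001, 0.003, 0.001]  # load, yield, unload to original strain
sigmas = ep_response(loading, E, sigma_y)
# After unloading back to the original strain the stress returns to zero,
# but the accumulated plastic strain eps_p remains: the deformation is no
# longer fully reversible, unlike in the purely elastic phase (e).
```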
Procedia PDF Downloads 357
55 Large Scale Method to Assess the Seismic Vulnerability of Heritage Buildings: Modal Updating of Numerical Models and Vulnerability Curves
Authors: Claire Limoge Schraen, Philippe Gueguen, Cedric Giry, Cedric Desprez, Frédéric Ragueneau
Abstract:
The Mediterranean area is characterized by numerous monumental or vernacular masonry structures illustrating old ways of building and living. These precious buildings are often poorly documented, present complex shapes and loadings, and are protected by the States, leading to legal constraints. The area also presents moderate to high seismic activity: even moderate earthquakes can be magnified by local site effects and cause collapse or significant damage. Moreover, the structural resistance of masonry buildings, especially the less famous ones or those located in rural zones, has generally been lowered by many factors: poor maintenance, unsuitable restoration, ambient pollution, and previous earthquakes. Recent earthquakes prove that any damage to these architectural witnesses to our past is irreversible, leading to the necessity of acting preventively. This means providing preventive assessments for hundreds of structures with no or few documents. In this context, we propose a general method, based on hierarchized numerical models, to provide preliminary structural diagnoses at a regional scale, indicating whether more precise investigations and models are necessary for each building. To this aim, we adapt different tools, some being developed, such as photogrammetry, and others to be created, such as a preprocessor that builds meshes for FEM software starting from pictures, in order to allow dynamic studies of the buildings in the panel. We made an inventory of 198 baroque chapels and churches situated in the French Alps. Their structural characteristics were then determined thanks to field surveys and the MicMac photogrammetric software. Using structural criteria, we determined eight types of churches and seven types of chapels. We studied their dynamic behavior with CAST3M, using the EC8 spectrum and accelerograms of the studied zone. This allowed us to quantify the effect of the needed simplifications in the most sensitive zones and to choose the most effective ones.
We also proposed threshold criteria based on the damage observed in the in situ surveys, old pictures, and the Italian code; these are relevant in linear models. To validate the structural types, we carried out a vibratory measurement campaign using ambient noise and velocimeters. It also allowed us to validate this method on old masonry and to identify the modal characteristics of 20 churches. We then performed a dynamic identification between numerical and experimental modes and updated the linear models through material and geometrical parameters, often unknown because of the complexity of the structures and materials. The numerically optimized values were verified against the measurements we made on the masonry components in situ and in the laboratory. We are now working on non-linear models redistributing the strains, in order to validate the damage threshold criteria used to compute the vulnerability curves of each defined structural type. Our current results show a good correlation between experimental and numerical data, validating the final modeling simplifications and the global method. We now plan to use non-linear analysis in the critical zones in order to test reinforcement solutions.
Keywords: heritage structures, masonry numerical modeling, seismic vulnerability assessment, vibratory measure
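The model-updating step, matching a numerical natural frequency to one identified from ambient noise by tuning an uncertain stiffness parameter, can be sketched on a single-degree-of-freedom idealisation. The modal mass and the measured frequency below are assumptions for illustration, not data from the surveyed churches.

```python
import math

def natural_freq(k, m):
    """First natural frequency (Hz) of a 1-DOF mass-spring idealisation."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def update_stiffness(f_measured, m, k_lo, k_hi, tol=1e-8):
    """Bisection on the stiffness k until the model frequency matches the
    identified one (natural_freq is monotonically increasing in k)."""
    while k_hi - k_lo > tol * k_hi:
        k_mid = 0.5 * (k_lo + k_hi)
        if natural_freq(k_mid, m) < f_measured:
            k_lo = k_mid
        else:
            k_hi = k_mid
    return 0.5 * (k_lo + k_hi)

m = 2.0e5      # modal mass, kg (assumed)
f_exp = 3.2    # first frequency identified from ambient noise, Hz (assumed)
k_updated = update_stiffness(f_exp, m, 1e6, 1e12)
```

The real updating problem involves many modes and parameters at once, but the principle is the same: adjust uncertain material or geometrical parameters until numerical and experimental modal characteristics agree.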
Procedia PDF Downloads 493
54 A Systematic Review and Comparison of Non-Isolated Bi-Directional Converters
Authors: Rahil Bahrami, Kaveh Ashenayi
Abstract:
This paper presents a systematic classification and comparative analysis of non-isolated bi-directional DC-DC converters (BDCs). The increasing demand for efficient energy conversion in diverse applications has spurred the development of various converter topologies. In this study, we categorize bi-directional converters into three distinct classes: inverting, non-inverting, and interleaved. Each category is characterized by its unique operational characteristics and benefits. Furthermore, a practical comparison is conducted by evaluating simulation results for each bi-directional converter. BDCs can be classified into isolated and non-isolated topologies. Non-isolated converters share a common ground between input and output, making them suitable for applications with minimal voltage change. They are easy to integrate, lightweight, and cost-effective but have limitations such as limited voltage gain, switching losses, and no protection against high voltages. Isolated converters use transformers to separate input and output, offering safety benefits, high voltage gain, and noise reduction. They are larger and more costly but are essential for automotive designs where safety is crucial. This paper focuses on non-isolated systems and discusses their classification based on several criteria. Common factors used for classification include topology, voltage conversion, control strategy, power capacity, voltage range, and application. These factors serve as a foundation for categorizing converters, although the specific scheme might vary depending on contextual, application, or system-specific requirements. The paper presents a three-category classification for non-isolated bi-directional DC-DC converters: inverting, non-inverting, and interleaved.
In the inverting category, converters produce an output voltage with reversed polarity compared to the input voltage, achieved through specific circuit configurations and control strategies. This is valuable in applications such as motor control and grid-tied solar systems. The non-inverting category consists of converters maintaining the same voltage polarity, useful in scenarios like battery equalization. Lastly, the interleaved category employs parallel converter stages to enhance power delivery and reduce current ripple. This classification framework enhances the comprehension and analysis of non-isolated bi-directional DC-DC converters, and the findings contribute to a deeper understanding of the trade-offs and merits associated with different converter types. As a result, this work aids researchers, practitioners, and engineers in selecting appropriate bi-directional converter solutions for specific energy conversion requirements. The simulation process uses PSIM to model and simulate non-isolated bi-directional converters from both the inverting and non-inverting categories. The aim is to conduct a comprehensive comparative analysis of these converters, considering key performance indicators such as rise time, efficiency, ripple factor, and maximum error. This systematic evaluation provides valuable insight into the dynamic response, energy efficiency, output stability, and overall precision of the converters. The results of this comparison facilitate informed decision-making and potential optimizations, ensuring that the chosen converter configuration aligns effectively with the designated operational criteria and performance goals.
Keywords: bi-directional, DC-DC converter, non-isolated, energy conversion
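The polarity difference between the inverting and non-inverting categories can be illustrated with their ideal steady-state conversion ratios in continuous conduction mode, together with a simple ripple-factor metric. This is a hand-calculation sketch with an assumed input voltage, not a substitute for the PSIM simulations in the paper.

```python
def inverting_buck_boost_gain(duty):
    """Ideal inverting buck-boost in CCM: Vout/Vin = -D / (1 - D)."""
    return -duty / (1.0 - duty)

def noninverting_buck_boost_gain(duty):
    """Ideal non-inverting (cascaded buck-boost) ratio: Vout/Vin = D / (1 - D)."""
    return duty / (1.0 - duty)

def ripple_factor(samples):
    """Peak-to-peak ripple of a steady-state waveform relative to its mean."""
    mean = sum(samples) / len(samples)
    return (max(samples) - min(samples)) / mean

vin = 48.0  # input voltage, V (assumed)
# Both classes share the same magnitude of gain: D < 0.5 steps the voltage
# down and D > 0.5 steps it up; only the output polarity differs.
rows = [(d, vin * inverting_buck_boost_gain(d), vin * noninverting_buck_boost_gain(d))
        for d in (0.25, 0.5, 0.75)]
```

Interleaved topologies keep the same per-phase ratios but split the current over parallel stages, which is why they reduce the ripple factor measured at the output.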
Procedia PDF Downloads 101
53 Environmental Restoration Science in New York Harbor - Community Based Restoration Science Hubs, or “STEM Hubs”
Authors: Lauren B. Birney
Abstract:
The project utilizes the Billion Oyster Project (BOP-CCERS) place-based “restoration through education” model to promote computational thinking in NYC high school teachers and their students. Key learning standards such as the Next Generation Science Standards and the NYC CS4All Equity and Excellence initiative are used to develop a computer science curriculum that connects students to their Harbor through hands-on activities based on BOP field science and educational programming. Project curriculum development is grounded in BOP-CCERS restoration science activities and data collection, which are enacted by students and educators at two Restoration Science STEM Hubs or conveyed through virtual materials. New York City public school teachers with relevant experience are recruited as consultants to provide curriculum assessment and design feedback. The completed curriculum units are then conveyed to NYC high school teachers through professional learning events held at the Pace University campus and led by BOP educators. In addition, Pace University educators run the Summer STEM Institute, an intensive two-week computational thinking camp centered on applying data analysis tools and methods to BOP-CCERS data. Both qualitative and quantitative analyses were performed throughout the five-year study. STEM+C – Community Based Restoration STEM Hubs: STEM Hubs are active scientific restoration sites capable of hosting school and community groups of all grade levels as well as professional scientists and researchers conducting long-term restoration ecology research. The STEM Hubs program has grown to include 14 STEM Hubs across all five boroughs of New York City and focuses on bringing in-field monitoring experience as well as coastal classroom experience to students. Restoration Science STEM Hubs activities resulted in the recruitment of 11 public schools, 6 community groups, and 12 teachers, with over 120 students receiving exposure to BOP activities.
Field science protocols were designed exclusively around the use of the Oyster Restoration Station (ORS), a small-scale in situ experimental platform suspended from a dock or pier. The ORS is intended to be used and “owned” by an individual school, teacher, class, or group of students, whereas the STEM Hub is explicitly designed as a collaborative space for large-scale community-driven restoration work and in situ experiments. The ORS is also an essential tool for gathering Harbor data from disparate locations and instilling ownership of the research process among students. As such, it will continue to be used in that way. New and previously participating students will continue to deploy and monitor their own ORS, uploading data to the digital platform and conducting analyses of their own harbor-wide datasets. Programming the STEM Hub will necessitate establishing working relationships between schools and local research institutions. NYHF will provide introductions and facilitate initial workshops in school classrooms. However, once a particular STEM Hub has been established as a space for collaboration, each partner group, school, university, or CBO will schedule its own events at the site using the digital platform’s scheduling and registration tool. Monitoring of research collaborations will be accomplished through the platform’s research publication tool and has thus far provided valuable information on the project’s trajectory, strategic plan, and pathway.
Keywords: environmental science, citizen science, STEM, technology
Procedia PDF Downloads 98
52 HydroParks: Directives for Physical Environment Interventions Battling Childhood Overweight in Berlin, Germany
Authors: Alvaro Valera Sosa
Abstract:
Background: In recent years, childhood overweight and obesity have become an increasing and challenging phenomenon in Berlin and in Germany in general. The highest shares of childhood overweight in Berlin are found in district localities within the inner city ring with the lowest socio-economic levels and the highest share of residents with a migration background. Most factors explaining overweight and obesity are linked to individual dispositions and nutrition balances. Among various strategies, targeting the drinking behavior of children and adolescents has proven effective: on the one hand, encouraging the intake of water, which does not contain energy and thus may support a healthy weight status; on the other hand, reducing the consumption of sugar-containing beverages, which are linked to weight gain and obesity. However, these preventive approaches have mostly developed into individual or educational interventions, largely neglecting environmental modifications. Therefore, little is known about how urban physical environment patterns and features can act as influence factors for childhood overweight. Aiming at the development of a physical environment intervention tackling childhood overweight, this study evaluated urban situations surrounding public playgrounds in Berlin where the issue is evident. It verified the presence and state of physical environmental conditions that can be conducive for children to engage in physical activity and water intake. Methods: The study included public playgrounds for children aged 0-12 within district localities with the highest prevalence of childhood overweight, highest population density, and highest mix of uses. A systematic observation was carried out to describe physical environment patterns and features that may affect children's health behavior leading to overweight. Neighborhood walkability for all age groups was assessed using the Walkability for Health framework (TU-Berlin).
Playground physical environment conditions were evaluated using Active Living Research assessment sheets. Finally, the food environment in the playgrounds' pedestrian catchment areas was reviewed, focusing on proximity to suppliers offering sugar-containing beverages and on physical access to drinking water for children aged 5 and up, following the Drinking Fountains and Public Health guidelines of the Pacific Institute. Findings: Out of 114 locations, only 7 had a child population over 3,000. The three with the lowest socio-economic index and highest percentage of residents with a migration background were selected. All three urban situations presented similar walkability: large trafficked avenues without a buffer bordering at least one side of the playground, and important block-to-block disconnections for active travel. All three playgrounds rated equipment conditions from good to very good. None had water fountains within reach of a 5-year-old, and all had convenience stores and/or fast food outlets selling sugar-containing beverages near the perimeter. Conclusion: The three selected playground situations are representative of Berlin locations where most factors influencing childhood overweight are found. The results delivered urban and architectural design directives for an environmental intervention, used to study children's health-related behavior. A post-intervention evaluation could demonstrate associations between designed spaces and reductions in childhood overweight rates, creating a precedent in public health interventions and providing novel strategies for the health sector.
Keywords: children overweight, evaluation research, public playgrounds, urban design, urban health
Procedia PDF Downloads 159
51 Auto Rickshaw Impacts with Pedestrians: A Computational Analysis of Post-Collision Kinematics and Injury Mechanics
Authors: A. J. Al-Graitti, G. A. Khalid, P. Berthelson, A. Mason-Jones, R. Prabhu, M. D. Jones
Abstract:
Motor vehicle related pedestrian road traffic collisions are a major road safety challenge, since they are a leading cause of death and serious injury worldwide, contributing to a third of the global disease burden. The auto rickshaw, which is a common form of urban transport in many developing countries, plays a major transport role, both as a vehicle for hire and for private use. The most common auto rickshaws are quite unlike the ‘typical’ four-wheeled motor vehicle, being typically characterised by three wheels, a non-tilting sheet-metal body or open frame construction, a canvas roof and side curtains, a small driver's cabin, handlebar controls and a passenger space at the rear. Given the propensity, in developing countries, for auto rickshaws to be used in mixed cityscapes, where pedestrians and vehicles share the roadway, the potential for auto rickshaw impacts with pedestrians is relatively high. Whilst auto rickshaws are used in some Western countries, their limited number and spatial separation from pedestrian walkways, as a result of city planning, have not resulted in significant accident statistics. Thus, auto rickshaws have not been subject to the vehicle-impact-related pedestrian crash kinematic analyses and/or injury mechanics assessments typically associated with motor vehicle development in Western Europe, North America and Japan. This study presents a parametric analysis of auto rickshaw related pedestrian impacts by computational simulation, using a Finite Element model of an auto rickshaw and an LS-DYNA 50th percentile male Hybrid III Anthropometric Test Device (dummy). Parametric variables include auto rickshaw impact velocity, auto rickshaw impact region (front, centre or offset) and relative pedestrian impact position (front, side and rear).
The output data of each impact simulation were correlated against reported injury metrics: the Head Injury Criterion (front, side and rear), the Neck Injury Criterion (front, side and rear), the Abbreviated Injury Scale and reported risk level. This adds greater understanding to the issue of auto rickshaw related pedestrian injury risk. The parametric analyses suggest that pedestrians are subject to a relatively high risk of injury during impacts with an auto rickshaw at velocities of 20 km/h or greater, which in some of the impact simulations may even risk fatalities. The present study provides valuable evidence for informing a series of recommendations and guidelines for making the auto rickshaw safer during collisions with pedestrians. Whilst it is acknowledged that the present research findings are based in the field of safety engineering and may over-represent injury risk compared to real-world accidents, many of the simulated interactions produced injury response values significantly greater than current threshold curves and thus justify their inclusion in the study. To reduce the injury risk level and increase the safety of the auto rickshaw, there should be a reduction in the velocity of the auto rickshaw and/or consideration of engineering solutions, such as retrofitting injury mitigation technologies to those auto rickshaw contact regions which present the greatest risk of producing pedestrian injury.
Keywords: auto rickshaw, finite element analysis, injury risk level, LS-DYNA, pedestrian impact
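The Head Injury Criterion referenced above is computed from a resultant head acceleration trace a(t) expressed in g: HIC is the maximum over window (t1, t2) of (t2 - t1) * [(1/(t2 - t1)) * integral of a dt] ^ 2.5. The brute-force sketch below (with an assumed 36 ms window cap, approximating the HIC36 variant) illustrates the definition; the study itself relies on LS-DYNA post-processing, not this script.

```python
def hic(times, accels, max_window=0.036):
    """Brute-force HIC over all sample-aligned windows up to max_window
    seconds (0.036 s approximates the common HIC36 variant); accels in g."""
    n = len(times)
    # Cumulative trapezoidal integral of a(t) gives O(1) window integrals.
    cum = [0.0]
    for i in range(1, n):
        cum.append(cum[-1] + 0.5 * (accels[i] + accels[i - 1]) * (times[i] - times[i - 1]))
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dt = times[j] - times[i]
            if dt > max_window:
                break
            avg = (cum[j] - cum[i]) / dt
            if avg > 0:
                best = max(best, dt * avg ** 2.5)
    return best

# A constant 100 g pulse lasting 15 ms gives HIC = 0.015 * 100**2.5 = 1500,
# well above the commonly cited injury threshold of 1000.
times = [i * 0.001 for i in range(16)]
accels = [100.0] * 16
```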
Procedia PDF Downloads 194
50 Contribution of Research to Innovation Management in the Traditional Fruit Production
Authors: Camille Aouinaït, Danilo Christen, Christoph Carlen
Abstract:
Introduction: Small and Medium-sized Enterprises (SMEs) are facing various challenges such as pressure on environmental resources, the rise of downstream power, and trade liberalization. Remaining competitive by implementing innovations and engaging in collaborations could be a strategic solution. In Switzerland, the Federal Institute for Research in Agriculture (Agroscope), the Federal Institutes of Technology (EPFL and ETHZ), cantonal universities and Universities of Applied Sciences (UAS) can provide substantial inputs. UAS were developed with the specific mission of matching labor market and societal needs. Research projects produce patents, publications and improved networks of scientific expertise. The study's goal is to measure the contribution of UAS and research organizations to innovation, and the impact of collaborations with partners in the non-academic environment, in Swiss traditional fruit production. Materials and methods: The European projects Traditional Food Network to improve the transfer of knowledge for innovation (TRAFOON) and Social Impact Assessment of Productive Interactions between science and society (SIAMPI) frame the present study. The former aims to fill the gap between the needs of traditional food producing SMEs and innovations implemented following European projects. The latter developed a method to assess the impacts of scientific research. On the one hand, interviews with market players were performed to make an inventory of the needs of Swiss SMEs producing apricots and berries. This participative method allowed the current needs to be matched with existing innovations from past European projects. Swiss stakeholders (e.g., producers, retailers, an inter-branch organization of fruits and vegetables) directly rated the needs on a five-point Likert scale. To transfer the knowledge to SMEs, training workshops on specific topics were organized separately for apricot and berry actors.
On the other hand, a social network map was drawn to characterize the links between actors, with a focus on the Swiss canton of Valais and the UAS Valais Wallis. The type and frequency of interactions among actors were identified through interviews. Preliminary results: A list of 369 SME needs grouped in 22 categories was produced from 37 completed questionnaires. Swiss stakeholders rated 31 needs as very important. Training workshops on apricot focus on varietal innovation, storage, disease (bacterial blight), pests (Drosophila suzukii), sorting and rootstocks. Entrepreneurship was targeted through trademark discussions in berry production. The UAS Valais Wallis collaborated on a few projects with Agroscope and with industry, at European and national levels. Political and public bodies are involved in agricultural extension, which fosters close relationships between research and practice. Conclusions: The needs identified by Swiss stakeholders are becoming part of training workshops to incentivize innovation. The UAS Valais Wallis takes part in collaborative projects with the research environment and market players that bring innovations helping SMEs in their contextual environment. A Strategic Research and Innovation Agenda will then be created in order to pursue the research and address the issues faced by SMEs.
Keywords: agriculture, innovation, knowledge transfer, university and research collaboration
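The needs-ranking step described in the abstract (stakeholders scoring needs on a five-point Likert scale, with a subset emerging as "very important") can be sketched as follows. The need names, ratings, and the importance cutoff are all invented for illustration and are not TRAFOON survey data.

```python
from statistics import mean

# Illustrative five-point Likert ratings (1 = not important, 5 = very important).
# Needs and scores are hypothetical examples, not actual survey responses.
ratings = {
    "varietal innovation": [5, 4, 5, 4, 5],
    "storage technology": [3, 4, 4, 3, 4],
    "Drosophila suzukii control": [5, 5, 4, 5, 5],
}

# Treat a need as "very important" when its mean rating meets a chosen cutoff.
CUTOFF = 4.0
mean_scores = {need: mean(scores) for need, scores in ratings.items()}
very_important = sorted(
    (need for need, score in mean_scores.items() if score >= CUTOFF),
    key=lambda need: -mean_scores[need],
)
print(very_important)  # highest-rated needs first
```

In practice the same aggregation would be run over all 369 needs and 37 questionnaires, with the cutoff chosen to match how the study defined "very important".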
Procedia PDF Downloads 396
49 A Case Study of Brownfield Revitalization in Taiwan
Authors: Jen Wang, Wei-Chia Hsu, Zih-Sin Wang, Ching-Ping Chu, Bo-Shiou Guo
Abstract:
In the late 19th century, the Jinguashi ore deposit in northern Taiwan was discovered, accompanied by flourishing mining activity. However, tons of contaminants including heavy metals, sulfur dioxide, and total petroleum hydrocarbons (TPH) were released into the surroundings and caused environmental problems. Site T was a copper smelter located on a coastal hill near the Jinguashi ore deposit. Over more than ten years of operation, a variety of contaminants were emitted, degrading the surrounding soil and groundwater quality. In order to exhaust the fumes produced by the smelting process, three stacks were built along the hill behind the factory. The sediment inside the stacks contains high concentrations of heavy metals such as arsenic, lead, and copper. Moreover, the soil around the discarded stacks suffered serious contamination as deposits leached from ruptures in the stacks. Consequently, Site T (including the factory and its surroundings) was declared a pollution remediation site, on which visits and land-use activities are forbidden. However, the natural landscape and cultural attractions of Site T are spectacular and attract many visitors annually. Moreover, land resources are extremely precious in Taiwan, and the Taiwan Environmental Protection Administration (EPA) is actively promoting its contaminated-land revitalization policy. Therefore, this study took Site T as a case study for brownfield revitalization planning, aiming to remediate the site and reactivate its natural resources as far as possible. Land-use suitability analysis and risk mapping were applied in this study to derive appropriate risk management measures and a redevelopment plan for the site.
In the land-use suitability analysis, surrounding factors such as environmentally sensitive areas, biological resources, land use, contamination, culture, and landscapes were taken into consideration to assess the development potential of each area; health risk mapping was introduced to visualize the risk assessment results based on the site contamination investigation. According to the land-use suitability analysis, the site was divided into four zones: a priority area (for high-efficiency development), a secondary area (for co-development with the priority area), a conditional area (for reusing existing buildings) and a limited area (for eco-tourism and education). According to the investigation, polychlorinated biphenyls (PCBs), heavy metals and TPH were considered the target contaminants, while oral intake, inhalation and dermal contact were taken as the major exposure pathways in the health risk assessment. According to the health risk map, the highest risk was found on the southwest and east sides. Based on these results, the development plan focused on zoning and land use. It was recommended that Site T be divided into a public facility zone, public architectonic art zone, viewing zone, existing-building preservation zone, historic building zone, and cultural landscape zone for various purposes. In addition, risk management measures including continued remediation, exposure elimination and administrative management are applied to ensure that particular places are suitable for visiting and to protect visitors' health. The consolidated results were corroborated by analyzing aspects of law, land acquisition methods, maintenance and management, and public participation. Therefore, this study provides a useful reference for promoting the contaminated-land revitalization policy in Taiwan.
Keywords: brownfield revitalization, land-use suitability analysis, health risk map, risk management
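A land-use suitability analysis of the kind the abstract describes is commonly implemented as a weighted overlay: each parcel gets normalized scores for the relevant factors, a weighted sum produces a suitability score, and score bands map to zones. The sketch below follows that general pattern; the factor names, weights, and zone cutoffs are assumptions for illustration, not values published by the study.

```python
# Hedged sketch of a weighted-overlay suitability score. Weights and cutoffs
# are invented; the study's actual criteria are not reproduced here.
WEIGHTS = {
    "environmental_sensitivity": 0.25,  # scored 0-1, higher = more suitable
    "contamination_risk": 0.35,         # scored 0-1, higher = lower risk
    "cultural_value": 0.20,
    "landscape_value": 0.20,
}

def suitability(scores: dict) -> float:
    """Weighted sum of normalized (0-1) factor scores for one parcel."""
    return sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)

def zone(score: float) -> str:
    """Map a suitability score to one of the four zones (cutoffs hypothetical)."""
    if score >= 0.75:
        return "priority area"
    if score >= 0.5:
        return "secondary area"
    if score >= 0.25:
        return "conditional area"
    return "limited area"

# Example parcel with invented factor scores.
parcel = {"environmental_sensitivity": 0.8, "contamination_risk": 0.6,
          "cultural_value": 0.7, "landscape_value": 0.9}
s = suitability(parcel)
print(round(s, 2), zone(s))
```

In a real GIS workflow the same scoring would run per raster cell or cadastral parcel, with weights set through expert judgment or stakeholder consultation.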
Procedia PDF Downloads 186
48 Nonlinear Homogenized Continuum Approach for Determining Peak Horizontal Floor Acceleration of Old Masonry Buildings
Authors: Andreas Rudisch, Ralf Lampert, Andreas Kolbitsch
Abstract:
It is well known within the engineering community that earthquakes of comparatively low magnitude can cause serious damage to nonstructural components (NSCs) of buildings, even when the supporting structure performs relatively well. Past research focused mainly on NSCs of nuclear power plants and industrial plants. Particular attention should also be given to architectural façade elements of old masonry buildings (e.g. ornamental figures, balustrades, vases), which are very vulnerable under seismic excitation. Large numbers of these historical nonstructural components (HiNSCs) can be found in highly frequented historical city centers, and in the event of failure they pose a significant danger to persons. In order to estimate the vulnerability of acceleration-sensitive HiNSCs, the peak horizontal floor acceleration (PHFA) is used. The PHFA depends on the dynamic characteristics of the building, the ground excitation, and induced nonlinearities. Consequently, the PHFA cannot be generalized as a simple function of height. In the present research work, an extensive case study was conducted to investigate the influence of induced nonlinearity on the PHFA of old masonry buildings. Probabilistic nonlinear FE time-history analyses considering three different hazard levels were performed. A set of eighteen synthetically generated ground motions was used as input to the structure models. An elastoplastic macro-model (multiPlas) for nonlinear homogenized continuum FE calculation was calibrated at multiple scales and applied, taking the specific failure mechanisms of masonry into account. The macro-model was calibrated according to the results of specific laboratory tests and cyclic in-situ shear tests. The nonlinear macro-model is based on the concept of multi-surface rate-independent plasticity. Material damage and crack formation are represented by reducing the initial strength after failure due to shear or tensile stress.
As a result, once cracking begins, shear forces can only be transmitted to a limited extent by friction, and the tensile strength is reduced to zero. The first goal of the calibration was consistency of the load-displacement curves between experiment and simulation. The calibrated macro-model matches well with regard to the initial stiffness and the maximum horizontal load. Another goal was the correct reproduction of the observed crack pattern and the plastic strain activity; again the macro-model proved to work well and shows very good correlation. The results of the case study show significant scatter in the absolute distribution of the PHFA between the applied ground excitations. An absolute distribution along the normalized building height was determined in the framework of probability theory. It can be observed that the extent of nonlinear behavior varies between the three hazard levels. Due to the detailed scope of the present research work, a robust comparison with code recommendations and simplified PHFA distributions is possible. The chosen methodology offers a way to determine the distribution of PHFA along the building height of old masonry structures. This permits a proper hazard assessment of HiNSCs under seismic loads.
Keywords: nonlinear macro-model, nonstructural components, time-history analysis, unreinforced masonry
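The core post-processing quantity in the abstract, the PHFA, is simply the peak absolute value of each floor's horizontal acceleration history, often normalized by the peak ground acceleration (PGA) to express amplification over building height. The sketch below shows that reduction step; the acceleration records are invented placeholders, whereas a real analysis would use the floor-level response histories from the nonlinear FE time-history runs.

```python
# Minimal sketch of extracting peak horizontal floor acceleration (PHFA)
# from time-history results. Records below are invented example values.

def phfa(acc_history):
    """Peak absolute horizontal acceleration of one floor record."""
    return max(abs(a) for a in acc_history)

# One record per floor, ground level first (units: m/s^2, hypothetical).
floor_records = [
    [0.8, -1.2, 1.0, -0.9],   # ground level (gives the PGA)
    [1.1, -1.6, 1.4, -1.2],   # first floor
    [1.5, -2.1, 1.9, -1.7],   # roof
]

pga = phfa(floor_records[0])
# Amplification of PHFA relative to PGA along the building height.
amplification = [phfa(record) / pga for record in floor_records]
print([round(a, 2) for a in amplification])
```

Repeating this over the eighteen ground motions and three hazard levels would yield the scatter and height-wise distributions the study reports.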
Procedia PDF Downloads 169
47 Assessment of Airborne PM0.5 Mutagenic and Genotoxic Effects in Five Different Italian Cities: The MAPEC_LIFE Project
Authors: T. Schilirò, S. Bonetta, S. Bonetta, E. Ceretti, D. Feretti, I. Zerbini, V. Romanazzi, S. Levorato, T. Salvatori, S. Vannini, M. Verani, C. Pignata, F. Bagordo, G. Gilli, S. Bonizzoni, A. Bonetti, E. Carraro, U. Gelatti
Abstract:
Air pollution is one of the most important worldwide health concerns. In recent years, in both the US and Europe, new directives and regulations supporting more restrictive pollution limits have been published. Nevertheless, early health effects of air pollution still occur, especially in urban populations. Several epidemiological and toxicological studies have documented the remarkable effect of particulate matter (PM) in increasing morbidity and mortality from cardiovascular disease, lung cancer and natural causes. The finest fractions of PM (aerodynamic diameter of 2.5 µm and less) play a major role in causing chronic diseases. The International Agency for Research on Cancer (IARC) has recently classified air pollution and fine PM as carcinogenic to humans (Group 1). The structure and composition of PM influence the biological properties of the particles. The chemical composition varies with the season and region of sampling, photochemical-meteorological conditions and emission sources. The aim of the MAPEC (Monitoring Air Pollution Effects on Children for supporting public health policy) study is to evaluate the associations between air pollution and biomarkers of early biological effect in the oral mucosa cells of 6-8 year old children recruited from first-grade schools. The study was performed in five Italian towns (Brescia, Torino, Lecce, Perugia and Pisa) characterized by different levels of airborne PM (PM10 annual average from 44 µg/m3 in Torino to 20 µg/m3 in Lecce). Two to five schools in each town were chosen to evaluate the variability of pollution within the same town. Child exposure to urban air pollution was evaluated by collecting ultrafine PM (PM0.5) in the school area on the same day as the biological sampling. PM samples were collected for 72 h using a high-volume gravimetric air sampler and glass fiber filters in two different seasons (winter and spring).
Gravimetric analysis of the collected filters was performed; PM0.5 organic extracts were chemically analyzed (PAHs, nitro-PAHs) and tested on A549 cells by the Comet assay and Micronucleus test, and on Salmonella strains (TA100, TA98, TA98NR and YG1021) by the Ames test. Results showed that PM0.5 represents a highly variable percentage of PM10 (range 19.6-63%). PM10 concentrations were generally lower than 50 µg/m3 (the EU daily limit). All PM0.5 extracts showed a mutagenic effect with the TA98 strain (net revertants/m3 range 0.3-1.5), suggesting the presence of indirect mutagens, while a lower effect was observed with the TA100 strain. The results with the TA98NR and YG1021 strains showed the presence of nitroaromatic compounds, as confirmed by the chemical analysis. No genotoxic or oxidative effect of the PM0.5 extracts was observed using the Comet assay (with/without Fpg enzyme) or the Micronucleus test, except for some sporadic samples. The low biological effect observed could be related to the low level of air pollution during this winter sampling, associated with high atmospheric instability. For a better understanding of the relationship between PM size, composition and biological effects, the results obtained in this study suggest investigating the biological effects of the other PM fractions, in particular the PM0.5-1 fraction.
Keywords: airborne PM, Ames test, Comet assay, Micronucleus test
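The "net revertants/m3" metric the abstract reports for the Ames test is typically derived by subtracting the spontaneous (untreated-plate) revertant count from the treated-plate count and dividing by the volume of air the plate dose represents. A minimal sketch of that calculation follows; the plate counts and sampled air volume are invented for illustration, not MAPEC data.

```python
# Hedged sketch: net revertants per cubic metre of sampled air from Ames-test
# plate counts. All numbers below are hypothetical examples.

def net_revertants_per_m3(sample_count, spontaneous_count, air_volume_m3):
    """Revertant colonies above the spontaneous background, normalized to
    the volume of air represented by the extract dose on the plate."""
    return (sample_count - spontaneous_count) / air_volume_m3

# Example: 95 revertants on the treated plate, 60 spontaneous revertants,
# plate dose equivalent to 25 m^3 of sampled air.
print(net_revertants_per_m3(95, 60, 25.0))  # 1.4
```

The example value falls within the 0.3-1.5 range reported for TA98, but the inputs themselves are illustrative only.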
Procedia PDF Downloads 323