Search results for: open jet testing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5979

249 Small Town, Big Urban Issues: The Case of Kiryat Ono, Israel

Authors: Ruth Shapira

Abstract:

Introduction: The rapid urbanization of the last century confronts planners, regulatory bodies, developers and, most of all, the public with seemingly unsolvable conflicts regarding values, capital, and the wellbeing of the built and un-built urban space. This is reflected in the quality of urban form and life, which has seen no significant progress in the last two to three decades despite the ever-growing urban population. The objective of this paper is to analyze some of these fundamental issues through the case study of a relatively small town in the center of Israel (Kiryat Ono, 100,000 inhabitants), unfold the deep structure of qualities versus disruptors, present some of the remedies we have developed to bridge the gap, and humbly suggest a practice that may be generic for similar cases. Basic Methodologies: The OBJECT, the town of Kiryat Ono, shall be experimented upon in a series of four action processes: de-composition, re-composition, the centering process and, finally, controlled structural disintegration. Each stage will be based on facts, analysis of previous multidisciplinary interventions on various layers, and the inevitable reaction of the OBJECT, leading to conclusions based on innovative theoretical and practical methods that we have developed and that we believe are proper for the open-ended network, setting the rules for the contemporary urban society to cluster by. The Study: Kiryat Ono was founded 70 years ago as an agricultural settlement and rapidly turned into an urban entity. In spite of the massive intensification, the original DNA of the old small town remained deeply embedded, mostly in the quality of the public space and in the sense of clustered communities. In the past 20 years, the demand for housing has been addressed at the national level with recent master plans and urban regeneration policies that mostly encourage individual economic initiatives.
Unfortunately, owing to the obsolete existing planning platform, the present urban renewal is characterized by developer pressure, a dramatic change in building scale and widespread disintegration of the existing urban and social tissue. Our office was commissioned to conceptualize two master plans for the two contradictory processes of Kiryat Ono’s future: intensification and conservation. Following a comprehensive investigation into the deep structures and qualities of the existing town, we developed a new vocabulary of conservation terms, thus redefining the sense of PLACE. The main challenge was to create master plans that offer a regulatory basis for the accelerated and sporadic development, provide for the public good, and preserve the characteristics of the PLACE, consisting of a toolbox of design guidelines with the ability to reorganize space along the time axis in a coherent way. In Conclusion: The system of rules that we have developed can generate endless possible patterns, making sure that at each implemented fragment an event is created and a better place is revealed. It takes time and perseverance, but it seems to be the way to provide a healthy framework for the accelerated urbanization of our chaotic present.

Keywords: housing, architecture, urban qualities, urban regeneration, conservation, intensification

Procedia PDF Downloads 361
248 Momentum Profits and Investor Behavior

Authors: Aditya Sharma

Abstract:

Profits earned from the relative-strength strategy of a zero-cost portfolio, i.e., taking a long position in winner stocks and a short position in loser stocks from the recent past, are termed momentum profits. In recent times, there has been a lot of controversy and concern about the sources of momentum profits, since the existence of these profits is evidence of earning abnormal returns from publicly available information, directly contradicting the Efficient Market Hypothesis. A literature review reveals conflicting theories and differing evidence on the sources of momentum profits. This paper aims at re-examining the sources of momentum profits in Indian capital markets. The study focuses on assessing the effect of fundamental as well as behavioral sources in order to understand the role of investor behavior in stock returns and to suggest improvements (if any) to existing behavioral asset pricing models. This paper adopts the calendar-time methodology to calculate momentum profits for 6 different strategies, with and without skipping a month between the ranking and holding periods. For each J/K strategy under this methodology, at the beginning of each month t stocks are ranked on the past j months’ average returns and sorted in descending order. Stocks in the upper decile are termed winners and those in the bottom decile losers. After ranking, long and short positions are taken in winner and loser stocks respectively, and both portfolios are held for the next k months, in such a manner that at any given point in time we have K overlapping long and K overlapping short portfolios, formed from month t-1 to month t-K. At the end of the period, the returns of both the long and short portfolios are calculated by taking an equally weighted average across all months. The long-minus-short (LMS) return is the momentum profit for each strategy. After testing for momentum profits, to study the role market risk plays in momentum profits, CAPM- and Fama-French three-factor-model-adjusted LMS returns are calculated.
In the final phase of studying the sources, a decomposition methodology has been used to break the profits up into unconditional means, serial correlations, and cross-serial correlations. This methodology is unbiased, can be used with the decile-based methodology, and helps to test the effect of behavioral and fundamental sources altogether. From the analysis, it was found that momentum profits do exist in Indian capital markets, with market risk playing little role in defining them. It was also observed that although momentum profits have multiple sources (risk, serial correlations, and cross-serial correlations), cross-serial correlations, i.e., the effect of the returns of other stocks, play the major role in defining these profits. This means that in addition to studying investors’ reactions to information about the same firm, it is also important to study how they react to information about other firms. The analysis confirms that investor behavior does play an important role in stock returns and that incorporating both aspects of investors’ reactions into behavioral asset pricing models helps make them better.
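The J/K overlapping-portfolio calculation described above can be sketched in code. The following is a minimal illustration, not the paper's own implementation: the return matrix is simulated, deciles are taken as N/10 stocks, and the parameter values are assumptions.

```python
import numpy as np

def momentum_lms(returns, j=6, k=6, skip=0):
    """Average monthly long-minus-short (LMS) momentum profit for a J/K
    calendar-time strategy: rank stocks on the past j months' mean returns,
    go long the top decile and short the bottom decile, and hold each
    portfolio for k months so that k overlapping portfolios coexist.

    returns: (T months x N stocks) array of monthly returns.
    skip:    months skipped between ranking and holding periods.
    """
    T, N = returns.shape
    decile = max(N // 10, 1)
    lms = []
    for t in range(j + skip, T):
        # LMS return at month t, averaged over the k overlapping portfolios
        # formed at months t-1 ... t-k (each ranked on the j months before it).
        month_lms = []
        for lag in range(1, k + 1):
            form = t - lag - skip          # formation month
            if form - j < 0:
                continue
            past = returns[form - j:form].mean(axis=0)
            order = np.argsort(past)[::-1]  # descending: winners first
            winners, losers = order[:decile], order[-decile:]
            month_lms.append(returns[t, winners].mean() - returns[t, losers].mean())
        if month_lms:
            lms.append(np.mean(month_lms))
    return float(np.mean(lms))

rng = np.random.default_rng(0)
r = rng.normal(0.01, 0.05, size=(60, 50))  # 60 months, 50 stocks, simulated
print(momentum_lms(r, j=6, k=6))
```

With purely random returns the LMS profit hovers near zero, which is exactly the Efficient Market Hypothesis benchmark the abstract tests against.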

Keywords: investor behavior, momentum effect, sources of momentum, stock returns

Procedia PDF Downloads 302
247 Bridging Minds, Building Success Beyond Metrics: Uncovering Human Influence on Project Performance: Case Study of University of Salford

Authors: David Oyewumi Oyekunle, David Preston, Florence Ibeh

Abstract:

The paper provides an overview of the impact of the human dimension in project management and team management on projects, which increasingly affects the performance of organizations. Recognizing its crucial significance, the research focuses on analyzing the psychological and interpersonal dynamics within project teams. This research is highly significant in the dynamic field of project management, as it addresses important gaps and offers vital insights that align with the constantly changing demands of the profession. A case study was conducted at the University of Salford to examine how human activity affects project management and performance. The study employed a mixed methodology to gain a deeper understanding of the real-world experiences of the subjects and project teams. Data analysis procedures to address the research objectives included the deductive approach, which involves testing a clear hypothesis or theory, as well as descriptive analysis and visualization. The survey comprised a sample of 40 participants out of 110 project management professionals, including staff and final-year students in the Salford Business School, selected using a purposeful sampling method. To mitigate bias, the study ensured diversity in the sample by including both staff and final-year students. A smaller sample size allowed for more in-depth analysis and a focused exploration of the research objective. Conflicts, for example, are intricate occurrences shaped by a multitude of psychological stimuli and social interactions, and may have either a detrimental or a beneficial effect on project performance and project management productivity. The study identified conflict elements, including culture, environment, personality, attitude, individual project knowledge, team relationships, leadership, and team dynamics among team members, as crucial human factors to address in order to minimize conflict.
The findings provide project professionals with valuable insights that can help them create a collaborative and high-performing project environment. In uncovering human influence on project performance, the study shows that effective communication, optimal team synergy, and a keen understanding of project scope are necessary for the management of projects to attain exceptional performance and efficiency. To achieve the aims of this study, it was acknowledged that productive team dynamics and strong group cohesiveness are crucial for managing conflicts in a beneficial and forward-thinking manner. Addressing the identified human influences will contribute to a more sustainable project management approach and offers opportunities for further exploration and potential contributions to both academia and practical project management.

Keywords: human dimension, project management, team dynamics, conflict resolution

Procedia PDF Downloads 104
246 “Laws Drifting Off While Artificial Intelligence Thriving”: A Comparative Study with Special Reference to Computer Science and Information Technology

Authors: Amarendar Reddy Addula

Abstract:

Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is a foundational medium for digital business, according to a new report by Gartner. The last 10 years represent a breakthrough period in AI’s development, spurred by a confluence of factors including the rise of big data, advancements in computing infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. The extension of AI to a broader set of use cases and users is gaining popularity because it improves AI’s versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50 percent of all data analysis by deep neural networks will occur at the edge, up from less than 10 percent in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and things, but it can also be used for code generation, creating synthetic data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects.
Frequently, the two are conflated and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, abstract, relates to the idea and content of ethics; the second, functional, concerns its relationship with the law. Both set up models of social behavior, but they are different in scope and nature. The juridical analysis is grounded on a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a step preliminary to the definition of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or diverse nature of AI. From that theoretical and practical basis, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the main legal framework for the regulation of AI.

Keywords: artificial intelligence, ethics & human rights issues, laws, international laws

Procedia PDF Downloads 93
245 The U.S. Missile Defense Shield and Global Security Destabilization: An Inconclusive Link

Authors: Michael A. Unbehauen, Gregory D. Sloan, Alberto J. Squatrito

Abstract:

Missile proliferation and global stability are intrinsically linked. Missile threats continually appear at the forefront of global security issues. North Korea’s recently demonstrated nuclear and intercontinental ballistic missile (ICBM) capabilities, for the first time since the Cold War, renewed public interest in strategic missile defense capabilities. To protect against limited ICBM attacks from so-called rogue actors, the United States developed the Ground-based Midcourse Defense (GMD) system. This study examines whether the GMD missile defense shield has contributed to a safer world or triggered a new arms race. Based upon increased missile-related developments and the lack of adherence to international missile treaties, it is generally perceived that the GMD system is a destabilizing factor for global security. By examining the current state of arms control treaties as well as existing missile arsenals and ongoing efforts in technologies to overcome U.S. missile defenses, this study seeks to analyze the contribution of GMD to global stability. A thorough investigation cannot ignore that, through the establishment of this limited capability, the U.S. violated longstanding, successful weapons treaties and caused concern among states that possess ICBMs. GMD capability contributes to the perception that ICBM arsenals could become ineffective, creating an imbalance in favor of the United States and leading to increased global instability and tension. While blame for the deterioration of global stability and non-adherence to arms control treaties is often placed on U.S. missile defense, the facts do not necessarily support this view. The notion of a renewed arms race due to GMD is supported neither by current missile arsenals nor by the inevitable development of new and enhanced missile technology, including multiple independently targetable reentry vehicles (MIRVs), maneuverable reentry vehicles (MaRVs), and hypersonic glide vehicles (HGVs).
The methodology in this study encapsulates a period of time, pre- and post-GMD introduction, while analyzing international treaty adherence, missile counts and types, and research in new missile technologies. The decline in international treaty adherence, coupled with a measurable increase in the number and types of missiles or research in new missile technologies during the period after the introduction of GMD, could be perceived as a clear indicator of GMD contributing to global instability. However, research into improved technology (MIRV, MaRV and HGV) prior to GMD, as well as a decline of various global missile inventories and testing of systems during this same period, would seem to invalidate this theory. U.S. adversaries have exploited the perception of the U.S. missile defense shield as a destabilizing factor as a pretext to strengthen and modernize their militaries and justify their policies. As a result, it can be concluded that global stability has not significantly decreased due to GMD; but rather, the natural progression of technological and missile development would inherently include innovative and dynamic approaches to target engagement, deterrence, and national defense.

Keywords: arms control, arms race, global security, GMD, ICBM, missile defense, proliferation

Procedia PDF Downloads 143
244 From Indigeneity to Urbanity: A Performative Study of Indian Saang (Folk Play) Tradition

Authors: Shiv Kumar

Abstract:

In the shifting scenario of the postmodern age, which foregrounds the multiplicity of meanings and discourses, the present research article seeks to investigate various paradigm shifts in contemporary performances of Haryanvi Saangs, the folk plays widely performed in the regional territory of Haryana, a northern state of India. Folk arts cannot be studied efficiently using the tools of literary criticism, because folk art differs from literature in many respects. One of the most essential differences is that literary works invariably have an author; folk works, on the contrary, never do. The situation is quite clear: either we acknowledge the presence of folk art as a phenomenon in the social and cultural history of a people, or we do not acknowledge it and argue that it is merely a form of poetry or fictional art. This paper is an effort to understand the performative tradition of Saang (also rendered Swang or Svang), which became a popular source of instruction and entertainment in the region and neighbouring states. Scholars and critics have long debated the origin of the word swang/svang/saang and its relationship to the Sanskrit word Sangit, which means singing and music. In the cultural context of Haryana, however, the word Saang means ‘to impersonate’, ‘to imitate’ or ‘to copy someone or something’. The stories these plays portray are derived for the most part from the same myths, tales and epics, and from the lives of Indian religious and folk heroes. The use of poetic sense, prose style and elaborate figurative technique all contribute to the productivity of a performance. All use music and song as an integral part of the performance, so that it is also appropriate to call them folk opera. These folk plays are performed strictly by aboriginal people of the state. These people, sometimes denominated Saangi, possess a culture distinct from the rest of Indian folk performance.
The form is also known by various other names, such as Manch, Khayal, Opera and Nautanki. Such folk plays can be seen as a dynamic activity, performed in the open space of the theatre. Nowadays, producers have contributed greatly to creating a rapidly growing musical outlet for a budding new style of folk presentation, giving rise to an electronically focused genre employing many musicians and performers who have become precursors of the folk tradition in the region. Moreover, the paper proposes to examine the available sources relevant to this study, from which it is believed some different conclusions can be drawn. For instance, being a spectator of ongoing performances provides enough guidance to move forward on this route. In this connection, the paper focuses critically upon the major performative aspects of Haryanvi Saang in relation to several inquiries: the study of these plays in the context of the Indian literary scenario, gender visualization and its dramatic representation, the song-music tradition in folk creativity, and the development of Haryanvi dramatic art against the contemporary socio-political background.

Keywords: folk play, indigenous, performance, Saang, tradition

Procedia PDF Downloads 155
243 Optimal Framework of Policy Systems with Innovation: Use of Strategic Design for Evolution of Decisions

Authors: Yuna Lee

Abstract:

In the current policy process, there has been growing interest in more open approaches that incorporate creativity and innovation, based on forecasting groups composed of the public and experts together, into scientific data-driven foresight methods in order to implement more effective policymaking. In particular, citizen participation as collective intelligence in policymaking with design, and deep-scale innovation at the global level, have been developed, and human-centred design thinking is considered one of the most promising methods for strategic foresight. Yet there is a lack of a common theoretical foundation for a comprehensive approach to the current situation and the post-COVID-19 era, and substantial changes in policymaking practice remain insignificant and proceed by trial and error. This project hypothesized that rigorously developed policy systems and tools that support strategic foresight by considering public understanding could maximize ways to create new possibilities for a preferable future; however, this must involve a better understanding of behavioural insights, including individual and cultural values, profit motives and needs, and psychological motivations, for implementing holistic and multilateral foresight and creating more positive possibilities. To what extent is a policymaking system theoretically possible that incorporates holistic and comprehensive foresight into policy process implementation, assuming that theory and practice, in reality, are different and not connected? What components and environmental conditions should be included in the strategic foresight system to enhance policymakers’ capacity to predict alternative futures, or to detect uncertainties of the future more accurately? And, compared to the required environmental conditions, what are the environmental vulnerabilities of the current policymaking system?
In this light, this research contemplates the question of how effectively policymaking practices have been implemented through the synthesis of scientific, technology-oriented innovation with strategic design for tackling complex societal challenges and devising more significant insights to make society greener and more liveable. Here, this study conceptualizes the notion of a new collaborative form of strategic foresight that aims to maximize mutual benefits between policy actors and citizens through cooperation stemming from evolutionary game theory. The study applies a mixed methodology, including interviews with policy experts, to cases in which digital transformation and strategic design provided future-oriented solutions or directions for cities’ sustainable development goals and society-wide urgent challenges such as COVID-19. As a result, artistic and sensory interpretive capabilities cultivated through strategic design promote a concrete form of ideas toward a stable connection from the present to the future and enhance understanding and active cooperation among decision-makers, stakeholders, and citizens. Ultimately, the improved theoretical foundation proposed in this study is expected to help societies respond strategically to the highly interconnected future changes of the post-COVID-19 world.
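The cooperation mechanism invoked above comes from evolutionary game theory. As a purely illustrative sketch of how cooperation can spread in such models (the payoff matrix and starting fractions are assumptions, not values from the study), here is a minimal replicator-dynamics example:

```python
def replicator_step(x, payoff, dt=0.01):
    """One Euler step of replicator dynamics for a 2-strategy game.
    x: fraction of cooperators in the population.
    payoff: 2x2 matrix [[CC, CD], [DC, DD]] of row-player payoffs."""
    fc = payoff[0][0] * x + payoff[0][1] * (1 - x)   # cooperator fitness
    fd = payoff[1][0] * x + payoff[1][1] * (1 - x)   # defector fitness
    return x + dt * x * (1 - x) * (fc - fd)

# Stag-hunt-like payoffs: mutual cooperation pays best, so cooperation
# takes over once enough of the population already cooperates (x > 2/3 here).
stag_hunt = [[4, 0], [3, 2]]
x = 0.7
for _ in range(2000):
    x = replicator_step(x, stag_hunt)
print(round(x, 3))  # approaches 1.0 from this starting point
```

Starting below the threshold (e.g. x = 0.5), the same dynamics drive cooperation to extinction, which is why such models emphasize building a critical mass of cooperating actors.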

Keywords: policymaking, strategic design, sustainable innovation, evolution of cooperation

Procedia PDF Downloads 194
242 The Effects of the GAA15 (Gaelic Athletic Association 15) on Lower Extremity Injury Incidence and Neuromuscular Functional Outcomes in Collegiate Gaelic Games: A 2 Year Prospective Study

Authors: Brenagh E. Schlingermann, Clare Lodge, Paula Rankin

Abstract:

Background: Gaelic football, hurling and camogie are highly popular field games in Ireland. Research into the epidemiology of injury in Gaelic games revealed that approximately three quarters of the injuries in these games occur in the lower extremity. These injuries can have player, team and institutional impacts due to multiple factors, including financial burden and time lost from competition. Research has shown it is possible to record injury data consistently within the GAA through a closed online recording system known as the GAA injury surveillance database. It has been established that determining the incidence of injury is the first step of injury prevention. The goals of this study were to create a dynamic GAA15 injury prevention programme which addressed five key components/goals: avoid positions associated with a high risk of injury, enhance flexibility, enhance strength, optimize plyometrics and address sports-specific agilities. These key components are internationally recognized through the Prevent Injury and Enhance Performance (PEP) programme, which has been shown to reduce ACL injuries by 74%. In national Gaelic games, the programme devised from the principles of the PEP is known as the GAA15. No such injury prevention strategies have been published on this cohort in Gaelic games to date. This study investigates the effects of the GAA15 on injury incidence and neuromuscular function in Gaelic games. Methods: A total of 154 players (mean age 20.32 ± 2.84) were recruited from the GAA teams within the Institute of Technology Carlow (ITC). Preseason and post-season testing involved two objective screening tests: the Y Balance Test and the Three Hop Test. Practical workshops, with ongoing liaison, were provided to the coaches on the implementation of the GAA15.
The programme was performed before every training session and game, and the existing GAA injury surveillance database was accessed by the college sports rehabilitation athletic therapist to monitor players’ injuries. Retrospective analysis of the ITC clinic records was performed in conjunction with the database analysis as a means of tracking injuries that may have been missed. The effects of the programme were analysed by comparing the intervention group’s Y Balance and Three Hop Test scores to those of an age/gender-matched control group. Results: Year 1 results revealed significant increases in neuromuscular function as a result of the GAA15. Y Balance Test scores for the intervention group increased in both the posterolateral (p=.005 and p=.001) and posteromedial reach directions (p=.001 and p=.001). A decrease in performance was determined for the Three Hop Test (p=.039). Overall, twenty-five injuries were reported during the season, resulting in an injury rate of 3.00 injuries/1000 hrs of participation: 1.25 injuries/1000 hrs in training and 4.25 injuries/1000 hrs in match play. Non-contact injuries accounted for 40% of the injuries sustained. Year 2 results are pending and expected in April 2016. Conclusion: It is envisaged that implementation of the GAA15 will continue to reduce the risk of injury and improve neuromuscular function in collegiate Gaelic games athletes.
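The exposure-based incidence quoted above (injuries per 1000 player-hours) is a simple ratio. The sketch below shows the calculation; only the 1.25 and 4.25 rates come from the abstract, while the injury counts and exposure hours are assumptions chosen purely to make the arithmetic concrete.

```python
def injury_rate(injuries: int, exposure_hours: float) -> float:
    """Injuries per 1000 player-hours of exposure."""
    return 1000 * injuries / exposure_hours

# Hypothetical split of injuries and exposure hours (assumed values,
# not taken from the study) reproducing the reported per-setting rates.
training = injury_rate(8, 6400)   # 1.25 injuries/1000 hrs
matches = injury_rate(17, 4000)   # 4.25 injuries/1000 hrs
print(training, matches)
```

The same formula applied to season totals gives the overall participation rate reported in the abstract.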

Keywords: GAA15, Gaelic games, injury prevention, neuromuscular training

Procedia PDF Downloads 336
241 Digital Image Correlation Based Mechanical Response Characterization of Thin-Walled Composite Cylindrical Shells

Authors: Sthanu Mahadev, Wen Chan, Melanie Lim

Abstract:

Anisotropy-dominated continuous-fiber composite materials have garnered attention in numerous mechanical and aerospace structural applications. Tailored mechanical properties in advanced composites can exhibit superiority in terms of stiffness-to-weight ratio, strength-to-weight ratio, and low-density characteristics, coupled with significant improvements in fatigue resistance, as opposed to metal structure counterparts. Extensive research has demonstrated their core potential as more than just lightweight substitutes for conventional materials. Prior work by Mahadev and Chan focused on formulating a modified-composite-shell-theory-based prognosis methodology for investigating the structural response of thin-walled circular cylindrical shell type composite configurations under in-plane mechanical loads. The prime motivation to develop this theory stemmed from its capability to generate simple yet accurate closed-form analytical results that can efficiently characterize circular composite shell construction. It showcased the development of a novel mathematical framework to analytically identify the location of the centroid for thin-walled, open cross-section, curved composite shells characterized by circumferential arc angle, thickness-to-mean-radius ratio, and total laminate thickness. Ply stress variations for curved cylindrical shells were analytically examined under the application of centric tensile and bending loading. This work presents a cost-effective, small-platform experimental methodology that takes advantage of the full-field measurement capability of digital image correlation (DIC) for an accurate assessment of key mechanical parameters such as in-plane stresses and strains, centroid location, etc. Mechanical property measurement of advanced composite materials can become challenging due to their anisotropy and complex failure mechanisms.
Full-field displacement measurements are well suited to characterizing the mechanical properties of composite materials because of the complexity of their deformation. This work encompasses the fabrication of a set of curved cylindrical shell coupons, the design and development of a novel test fixture, and an innovative experimental methodology that demonstrates the capability to predict very accurately the location of the centroid in such curved composite cylindrical strips by employing a DIC-based strain measurement technique. The percentage difference between experimental centroid measurements and the previously estimated analytical centroid results shows that the two are in good agreement. The developed analytical modified shell theory provides the capability to understand the fundamental behavior of thin-walled cylindrical shells and offers the potential to generate novel avenues for understanding the physics of such structures at a laminate level.
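For the simplest idealization, the centroid location that the analytical theory predicts and the DIC experiment verifies can be checked against the textbook thin circular-arc result y = R·sin(α)/α. The sketch below assumes a uniform homogeneous strip (it ignores the laminate-level effects the paper's modified shell theory captures), and the dimensions are illustrative.

```python
import math

def arc_centroid(radius: float, arc_angle_deg: float) -> float:
    """Distance of the centroid of a thin, uniform circular arc from its
    center of curvature, measured along the axis of symmetry.
    Textbook result: y = R * sin(alpha) / alpha, alpha = half the arc angle.
    """
    alpha = math.radians(arc_angle_deg) / 2
    return radius * math.sin(alpha) / alpha

# Example: 50 mm mean radius, 90 degree circumferential arc angle.
print(round(arc_centroid(50.0, 90.0), 2))  # approx 45.02 mm
```

As a sanity check, a full circle (360 degrees) returns a centroid at the center of curvature, and small arcs return values approaching the mean radius, consistent with intuition.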

Keywords: anisotropy, composites, curved cylindrical shells, digital image correlation

Procedia PDF Downloads 316
240 Brittle Fracture Tests on Steel Bridge Bearings: Application of the Potential Drop Method

Authors: Natalie Hoyer

Abstract:

Usually, steel structures are designed for the upper region of the steel toughness-temperature curve. To address the reduced toughness properties in the temperature transition range, additional safety assessments based on fracture mechanics are necessary. These assessments enable the appropriate selection of steel materials to prevent brittle fracture. In this context, recommendations were established in 2011 to regulate the appropriate selection of steel grades for bridge bearing components. However, these recommendations are no longer fully aligned with more recent insights: Designing bridge bearings and their components in accordance with DIN EN 1337 and the relevant sections of DIN EN 1993 has led to an increasing trend of using large plate thicknesses, especially for long-span bridges. However, these plate thicknesses surpass the application limits specified in the national appendix of DIN EN 1993-2. Furthermore, compliance with the regulations outlined in DIN EN 1993-1-10 regarding material toughness and through-thickness properties requires some further modifications. Therefore, these standards cannot be directly applied to the material selection for bearings without additional information. In addition, recent findings indicate that certain bridge bearing components are subjected to high fatigue loads, necessitating consideration in structural design, material selection, and calculations. To address this issue, the German Center for Rail Traffic Research initiated a research project aimed at developing a proposal to enhance the existing standards. This proposal seeks to establish guidelines for the selection of steel materials for bridge bearings to prevent brittle fracture, particularly for thick plates and components exposed to specific fatigue loads. The results derived from theoretical analyses, including finite element simulations and analytical calculations, are verified through component testing on a large-scale. 
During these large-scale tests, in which brittle failure is deliberately induced in a bearing component, an artificially generated defect is introduced into the specimen at the predetermined hotspot. A dynamic load is then applied until crack initiation occurs, replicating realistic conditions akin to a sharp notch resembling a fatigue crack. To stop the dynamic loading in time, it is important to precisely determine the point at which crack growth transitions from stable to unstable. To achieve this, the potential drop measurement method is employed. The proposed paper discusses the choice of measurement method (alternating current potential drop (ACPD) versus direct current potential drop (DCPD)), presents results from correlations with the associated FE models, and proposes a new approach for introducing beach marks into the fracture surface within the framework of potential drop measurement.
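As a minimal illustration (not the study's actual instrumentation logic), the transition from stable to unstable crack growth can be flagged by monitoring the rate of change of the potential-drop signal; the sampling interval and rate threshold below are hypothetical values chosen for the sketch:

```python
def detect_unstable_growth(potentials, dt, rate_threshold):
    """Return the index of the first sample where the potential-drop
    signal rises faster than rate_threshold (volts per second), taken
    here as the onset of unstable crack growth; None if never reached."""
    for i in range(1, len(potentials)):
        rate = (potentials[i] - potentials[i - 1]) / dt
        if rate > rate_threshold:
            return i
    return None

# Slowly rising potential (stable crack growth) followed by a sharp jump.
signal = [1.00, 1.01, 1.02, 1.03, 1.20, 1.60]
trip = detect_unstable_growth(signal, dt=1.0, rate_threshold=0.05)
```

In practice the trigger would feed back into the test rig to halt the dynamic load; here it simply reports the sample index at which the jump occurs.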

Keywords: beach marking, bridge bearing design, brittle fracture, design for fatigue, potential drop

Procedia PDF Downloads 39
239 Hydrogeomatic System for the Economic Evaluation of Damage by Flooding in Mexico

Authors: Alondra Balbuena Medina, Carlos Diaz Delgado, Aleida Yadira Vilchis Fránces

Abstract:

In Mexico, news is disseminated each year about the ravages of floods: the total loss of housing; damage to fields; increases in food costs derived from lost harvests; health problems such as skin infections; and social problems such as delinquency and damage to educational institutions and the population in general. Flooding is a consequence of heavy rains, tropical storms, or hurricanes that generate excess water in drainage systems beyond their capacity. In urban areas, heavy rains can be one of the main factors causing flooding, in addition to excessive precipitation, dam breakage, and human activities such as excessive garbage in the drains. In agricultural areas, floods can affect large areas of cultivation. It should be mentioned that, for both types of areas, one of the significant impacts of floods is that they can permanently affect the livelihoods of many families and cause damage to their workplaces, such as farmland, commercial or industrial areas, and places where services are provided. In recent years, Information and Communication Technologies (ICT) have undergone accelerated development, reflected in exponential growth and innovation that result in the daily generation of new technologies, updates, and applications. Innovation in the development of information technology applications has had an impact on all areas of human activity. These technologies influence every aspect of individuals' lives, reconfiguring the way people perceive and analyze the world and interrelate with one another, as individuals and as a society, in the economic, political, social, cultural, educational, and environmental spheres.
Therefore, the present work describes the creation of a system for calculating flood costs for housing areas, retail establishments, and agricultural areas of the Mexican Republic, based on the use and application of geomatic tools, capable of serving the public, educational, and private sectors. To analyze hydrometeorological impacts and turn the results into a usable product, the geoinformatics tool was constructed from two different points of view: the geoinformatic (design and development of GIS software) and the methodology of flood damage validation, in order to integrate a tool that provides the user with a monetary estimate of the effects caused by floods. With information from the period 2000-2014, the functionality of the application was corroborated. For the years 2000 to 2009, only the agricultural and housing areas were analyzed, with information on commercial establishments incorporated for the period 2010-2014. The method proposed for this research project is a fundamental contribution to society, in addition to the tool itself. In summary, conceiving problems of the physical-geographical environment from the point of view of spatial analysis makes it possible to offer different alternative solutions and to open new avenues for academia and research.
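A minimal sketch of the monetary-estimation step such a tool performs, assuming a simple affected-area times unit-cost model per sector; the sector names, areas, and unit costs below are hypothetical, not values from the study:

```python
def flood_damage_cost(affected, unit_costs):
    """Monetary estimate of flood damage: sum over sectors of the
    affected extent times a per-unit replacement cost for that sector."""
    return sum(affected[sector] * unit_costs[sector] for sector in affected)

# Hypothetical affected extents (arbitrary area units) and unit costs (MXN).
affected = {"housing": 120.0, "retail": 35.0, "agriculture": 800.0}
unit_costs = {"housing": 5000.0, "retail": 8000.0, "agriculture": 300.0}
total = flood_damage_cost(affected, unit_costs)
```

A GIS front end would derive the affected extents from flood-extent layers intersected with land-use polygons; the cost model itself reduces to this weighted sum.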

Keywords: floods, technological innovation, monetary estimation, spatial analysis

Procedia PDF Downloads 224
238 Pharmacokinetic Assessment of Antimicrobial Treatment of Acute Exacerbations of Chronic Obstructive Pulmonary Disease in Hospitalized Patients Colonized with Pseudomonas aeruginosa

Authors: Juliette Begin, Juliano Colapelle, Andrea Taratanu, Daniel Thirion, Amelie Marsot, Bryan A. Ross

Abstract:

Chronic obstructive pulmonary disease (COPD), a leading cause of death globally, is characterized by chronic airflow obstruction and acute exacerbations (AECOPDs) that are often triggered by respiratory infections. Pseudomonas aeruginosa (P. aeruginosa), a potentially serious bacterial cause of AECOPDs, is treated with targeted anti-pseudomonal antibiotics. These select few antimicrobials are often used as first-line therapy in patients who are clinically unwell and/or in those suspected of P. aeruginosa-related infection prior to confirmation, potentially contributing to antimicrobial resistance. The present study evaluates prescribing practices in patients with a confirmed sputum history of P. aeruginosa admitted for AECOPD at the McGill University Health Centre (MUHC) and treated with anti-pseudomonal antibiotics. Serum antibiotic concentrations were measured from the same-day peak, trough, and mid-dose blood sampling intervals after reaching steady-state (on or after day 3) and were quantified using ultra-high-performance liquid chromatography (UHPLC). Demographic, clinical, and treatment outcomes were extracted from patient medical charts. Treatment failure was defined by respiratory-related death or mechanical ventilation after ≥3 days of antibiotics; antibiotic therapy extended beyond 2 weeks or a new antibiotic regimen started; or urgent care readmission within 30 days for AECOPD. To date, 9 of 30 planned participants have completed testing: seven received ciprofloxacin, one received meropenem, and one received piperacillin-tazobactam. Due to serum sample batching requirements, the serum ciprofloxacin concentration results for the first 2/8 participants are presented at the time of writing. The first participant had serum levels of 5.45mg/L (T₀), 4.74mg/L (T₅₀), and 4.49mg/L (T₁₀₀), while the second had serum levels of 5mg/L (T₀), 2.6mg/L (T₅₀), and 2.51mg/L (T₁₀₀). 
Pharmacokinetic parameters Cmax (5.18±0.43 mg/L), T₁/₂ (23.56±18.94 hours), and AUC (181.9±155.95 mg·h/L) were higher than the reported monograph values and met the target AUC-to-MIC ratio of >125. The patients treated with meropenem and with piperacillin-tazobactam experienced treatment failure. Preliminary results suggest that standard ciprofloxacin dosing in patients experiencing an AECOPD and colonized with P. aeruginosa appears to achieve effective serum concentrations. Final cohort results will inform the pharmacokinetic appropriateness and clinical sufficiency of current AECOPD antimicrobial strategies in P. aeruginosa-colonized patients. This study will guide clinicians in determining appropriate dosing for AECOPD treatment to achieve therapeutic levels, optimize outcomes, and minimize adverse effects. It could also highlight the value of routine antibiotic level monitoring in patients with treatment failure to ensure optimal serum concentrations.
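The AUC-to-MIC check described above can be sketched as follows; the sampling times (0, 6, and 12 h over a 12-hour dosing interval), the MIC of 0.5 mg/L, and twice-daily dosing are illustrative assumptions, not study parameters:

```python
def auc_over_mic(concs, times, mic, intervals_per_day):
    """Trapezoidal AUC over one dosing interval from timed serum
    concentrations, scaled to 24 h and divided by the MIC. For
    ciprofloxacin the usual efficacy target is AUC24/MIC > 125."""
    auc_tau = sum((concs[i] + concs[i + 1]) / 2 * (times[i + 1] - times[i])
                  for i in range(len(concs) - 1))
    return auc_tau * intervals_per_day / mic

# Participant 1 levels (mg/L) at assumed 0, 6, 12 h sampling times,
# with a hypothetical MIC of 0.5 mg/L and twice-daily dosing.
ratio = auc_over_mic([5.45, 4.74, 4.49], [0, 6, 12], mic=0.5, intervals_per_day=2)
```

With these assumed inputs the ratio comfortably exceeds the 125 target, consistent with the preliminary conclusion above.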

Keywords: acute exacerbation, antimicrobial resistance, chronic obstructive pulmonary disease, pharmacokinetics/pharmacodynamics, Pseudomonas aeruginosa

Procedia PDF Downloads 10
237 Fully Instrumented Small-Scale Fire Resistance Benches for Aeronautical Composites Assessment

Authors: Fabienne Samyn, Pauline Tranchard, Sophie Duquesne, Emilie Goncalves, Bruno Estebe, Serge Boubigot

Abstract:

Stringent fire safety regulations are enforced in the aeronautical industry due to the consequences that a potential fire event on an aircraft might imply, so much so that the fire issue is considered right from the design of the aircraft structure. Due to the incorporation of an increasing amount of polymer matrix composites in replacement of more conventional materials like metals, the nature of the fire risks is changing. The choice of materials used is consequently of prime importance, as is the evaluation of their resistance to fire. Fire testing is mostly done using so-called certification tests according to standards such as ISO 2685:1998(E). The latter describes a protocol to evaluate the fire resistance of structures located in fire zones (the ability to withstand fire for 5 min). The test consists in exposing a sample of at least 300x300 mm² to a 1100°C propane flame with a calibrated heat flux of 116 kW/m². This type of test is time-consuming and expensive, and gives access to limited information on the fire behavior of the materials (pass or fail). Consequently, it can barely be used for material development purposes. In this context, the UMET laboratory, in collaboration with industrial partners, has developed horizontal and vertical small-scale instrumented fire benches for the characterization of the fire behavior of composites. The benches use smaller samples (no more than 150x150 mm²), which cuts costs and hence increases sampling throughput. However, the main added value of our benches is the instrumentation used to collect information that helps in understanding the behavior of the materials. Indeed, measurements of the sample backside temperature are performed using an IR camera in both configurations. In addition, for the vertical setup, a complete characterization of the degradation process can be achieved via mass loss measurements and quantification of the gases released during the tests.
These benches have been used to characterize and study the fire behavior of aeronautical carbon/epoxy composites. The horizontal setup has been used in particular to study the performance and durability of a protective intumescent coating on 2 mm thick 2D laminates. The efficiency of this approach has been validated, and the optimized coating thickness has been determined, as well as the performance after aging. The reduction in performance after aging was attributed to the migration of some of the coating additives. The vertical setup has enabled investigation of the degradation process of composites under fire. An isotropic and a unidirectional 4 mm thick laminate were characterized using the bench and post-fire analyses. The mass loss measurements and gas phase analyses of the two composites do not present significant differences, unlike the temperature profiles through the thickness of the samples. The differences have been attributed to differences in thermal conductivity as well as to delamination, which is much more pronounced for the isotropic composite (observed on the IR images). This has been confirmed by X-ray microtomography. The developed benches have proven to be valuable tools for developing fire-safe composites.

Keywords: aeronautical carbon/epoxy composite, durability, intumescent coating, small-scale ‘ISO 2685 like’ fire resistance test, X-ray microtomography

Procedia PDF Downloads 267
236 Index and Mechanical Geotechnical Properties and Their Control on the Strength and Durability of the Cainozoic Calcarenites in KwaZulu-Natal, South Africa

Authors: Luvuno N. Jele, Warwick W. Hastie, Andrew Green

Abstract:

Calcarenite is a clastic sedimentary beach rock composed of more than 50% sand-sized (0.0625–2 mm) carbonate grains. In South Africa, these rocks occur as a narrow belt along most of the coast of KwaZulu-Natal and sporadically along the coast of the Eastern Cape. Calcarenites contain a high percentage of calcium carbonate and, owing to a number of physical and structural features such as porosity, cementing material, sedimentary structures, grain shape, and grain size, they are more prone to chemical and mechanical weathering. The objective of this research is to study the strength and compressibility characteristics of the calcarenites along the coast of KwaZulu-Natal in order to better understand the geotechnical behaviour of these rocks, which may help predict areas along the coast potentially susceptible to failure or differential settlement resulting in damage to property. A total of 148 cores were prepared and analyzed, both perpendicular and parallel to bedding. Tests were carried out in accordance with the relevant codes and recommendations of the International Society for Rock Mechanics, American Standard Testing Methods, and the Committee of Land and Transport Standard Specifications for Road and Bridge Works for State Road Authorities. Tests carried out included: X-ray diffraction, petrography, shape preferred orientation (SPO), 3D tomography, rock porosity, rock permeability, ethylene glycol, slake durability, rock water absorption, Duncan swelling index, triaxial compressive strength, Brazilian tensile strength, and uniaxial compression with elastic modulus. The beach-rocks have a uniaxial compressive strength (UCS) ranging from 17.84 MPa to 287.35 MPa and exhibit three types of failure: (1) single sliding shear failure, (2) complete cone development, and (3) splitting failure. The Brazilian tensile strength of the rocks ranges from 2.56 MPa to 12.40 MPa, with cores tested perpendicular to bedding showing lower tensile strength.
Triaxial compressive tests indicate that the calcarenites have strengths ranging from 86.10 MPa to 371.85 MPa. The common failure mode in the triaxial test is single sliding shear failure. The porosity of the rocks varies from 1.25% to 26.52%. The rock tests indicate that the direction of loading, whether parallel or perpendicular to bedding, plays no significant role in the strength and durability of the calcarenites; porosity, cement type, and grain texture play the major roles. UCS results indicate that saturated cores are weaker than dry samples. Thus, water or moisture content plays a significant role in the strength and durability of the beach-rock. Loosely packed, highly porous, low magnesian-calcite-bearing calcarenites show a decrease in strength compared to densely packed, low-porosity, high magnesian-calcite-bearing calcarenites.

Keywords: beach-rock, calcarenite, cement, compressive, failure, porosity, strength, tensile, grains

Procedia PDF Downloads 92
235 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format

Authors: Maryam Fallahpoor, Biswajeet Pradhan

Abstract:

Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretations and, thereby, reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: neuroimaging informatics technology initiative (NIfTI) and digital imaging and communications in medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions. 
The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of the original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area Under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
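The resampling and normalization pipeline described above (crop/resample to 128 × 128 × 60, clip to the (−1000, 400) HU window, then normalize and zero-center) can be sketched roughly as follows; nearest-neighbour resampling stands in for the interpolation actually used, which the abstract does not specify:

```python
import numpy as np

def preprocess_ct(volume, target_shape=(128, 128, 60), hu_window=(-1000, 400)):
    """Nearest-neighbour resample to a fixed grid, clip intensities to the
    HU window, rescale to [0, 1], then zero-centre -- a simplified stand-in
    for the resampling/normalization pipeline described in the abstract."""
    idx = [np.linspace(0, s - 1, t).round().astype(int)
           for s, t in zip(volume.shape, target_shape)]
    vol = volume[np.ix_(*idx)].astype(np.float32)
    lo, hi = hu_window
    vol = (np.clip(vol, lo, hi) - lo) / (hi - lo)  # map HU window to [0, 1]
    return vol - vol.mean()                        # zero-centre

# Synthetic HU volume standing in for one DICOM series (512 x 512, 73 slices).
scan = np.random.randint(-1200, 600, size=(512, 512, 73))
x = preprocess_ct(scan)
```

The segmentation and model-training stages would consume volumes in this standardized form; only the geometric and intensity standardization is shown here.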

Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format

Procedia PDF Downloads 85
234 Innovation in PhD Training in the Interdisciplinary Research Institute

Authors: B. Shaw, K. Doherty

Abstract:

The Cultural Communication and Computing Research Institute (C3RI) is a diverse multidisciplinary research institute encompassing art, design, media production, communication studies, computing, and engineering. Across these disciplines, there can seem to be enormous differences in research practice and convention, including differing positions on objectivity and subjectivity, certainty and evidence, and different political and ethical parameters. These differences sit within often unacknowledged histories, codes, and communication styles of specific disciplines, and it is all these aspects that can make understanding research practice across disciplines difficult. To explore this, a one-day event was orchestrated, testing how a PhD community might communicate and share research in progress in a multidisciplinary context. Instead of presenting results at a conference, research students were tasked with articulating their method of inquiry. A working party of students from across disciplines had to design a conference call, a visual identity, and an event framework that would work for students across all disciplines. The process of establishing the shape and identity of the conference was revealing. Even finding a linguistic frame that would meet the expectations of different disciplines for the conference call was challenging. The first abstracts submitted either resorted to reporting findings or described method only briefly. It took several weeks of supported intervention for research students to get ‘inside’ their method and to understand their research practice as a process rich with philosophical and practical decisions and implications. In response to the abstracts, the conference committee generated key methodological categories for conference sessions, including sampling, capturing ‘experience’, ‘making models’, researcher identities, and ‘constructing data’.
Each session involved presentations by visual artists, communications students, and computing researchers, with interdisciplinary dialogue facilitated by alumni Chairs. The apparently simple focus on method illuminated the research process as a site of creativity, innovation and discovery, and also built epistemological awareness, drawing attention to what is being researched and how it can be known. It was surprisingly difficult to limit students to discussing method, and it was apparent that the vocabulary available for method is sometimes limited. However, by focusing on method rather than results, the genuine process of research, rather than one constructed for approval, could be captured. In unlocking the twists and turns of planning and implementing research, and the impact of circumstance and contingency, students had to reflect frankly on successes and failures. This level of self- and public critique emphasised the degree of critical thinking and rigour required in executing research and demonstrated that honest reportage of research, faults and all, is good, valid research. The process also revealed the degree to which disciplines can learn from each other: the computing students gained insights from the sensitive social contextualising generated by the communications and art and design students, and the art and design students gained understanding from the greater ‘distance’ and emphasis on application that the computing students applied to their subjects. Finding the means to develop dialogue across disciplines makes researchers better equipped to devise and tackle research problems across disciplines, potentially laying the ground for more effective collaboration.

Keywords: interdisciplinary, method, research student, training

Procedia PDF Downloads 206
233 Effect of Printing Process on Mechanical Properties and Porosity of 3D Printed Concrete Strips

Authors: Wei Chen

Abstract:

3D concrete printing technology is a novel and highly efficient construction method that holds significant promise for advancing low-carbon initiatives within the construction industry. In contrast to traditional construction practices, 3D printing offers a largely automated, formwork-free approach, resulting in a transformative shift in labor requirements and fabrication techniques. This transition yields substantial reductions in carbon emissions during the construction phase, as well as decreased on-site waste generation. Furthermore, when compared to conventionally cast concrete, 3D printed concrete exhibits mechanical anisotropy due to its layer-by-layer construction. It is therefore imperative to investigate the influence of the printing process on the mechanical properties of 3D printed strips and to optimize the mechanical characteristics of the hardened strips. In this study, we conducted three-dimensional reconstructions of blocks printed using both circular and directional print heads, incorporating various overlap distances between strips, and employed CT scanning for comprehensive analysis. Our research focused on assessing mechanical properties and micro-pore characteristics under different loading orientations. Our findings reveal that increasing the degree of overlap between strips enhances their mechanical properties. However, once full overlap is achieved, further increases in overlap do not lead to a further decrease in porosity between strips.
Additionally, owing to its larger printed cross-sectional area, the square print head exhibited the most favorable impact on mechanical properties. This paper also aims to improve the tensile strength, tensile ductility, and bending toughness of a recently developed ‘one-part’ geopolymer for 3D concrete printing (3DCP) applications, in order to address the insufficient tensile strength and brittle fracture characteristics of geopolymer materials in 3D printing scenarios where materials are subjected to tensile stress. The effects of steel fiber content and aspect ratio on mechanical properties were systematically discussed, including compressive strength, flexural strength, splitting tensile strength, uniaxial tensile strength, bending toughness, and the anisotropy of 3DP-OPGFRC. The fiber distribution in the printed samples was obtained through X-ray computed tomography (X-CT) testing. In addition, the underlying mechanisms were discussed to provide a deep understanding of the role steel fiber plays in the reinforcement. The experimental results showed that the flexural strength increased by 282% to 26.1 MPa, and the compressive strength reached 104.5 MPa. High tensile ductility, appreciable bending toughness, and strain-hardening behavior can be achieved with steel fiber incorporation. The material also has an advantage over the OPC-based steel fiber-reinforced 3D printing materials reported in the existing literature (flexural strength 15 MPa), and its tensile strength is superior to that (<6 MPa) of current geopolymer fiber reinforcements used for 3D printing. It is anticipated that this 3D printable steel fiber reinforced ‘one-part’ geopolymer will be used to meet high tensile strength requirements in printing scenarios.

Keywords: 3D printing concrete, mechanical anisotropy, micro-pore structure, printing technology

Procedia PDF Downloads 76
232 Assessment of Potential Chemical Exposure to Betamethasone Valerate and Clobetasol Propionate in Pharmaceutical Manufacturing Laboratories

Authors: Nadeen Felemban, Hamsa Banjer, Rabaah Jaafari

Abstract:

One of the most common hazards in the pharmaceutical industry is the chemical hazard, which can cause harm or lead to occupational diseases and illnesses through chronic exposure to hazardous substances. Therefore, a chemical agent management system is required, including hazard identification, risk assessment, controls for specific hazards, and inspections, to keep the workplace healthy and safe. Routine management monitoring is also required to verify the effectiveness of the control measures. Betamethasone Valerate and Clobetasol Propionate are among the APIs (Active Pharmaceutical Ingredients) with a highly hazardous classification, Occupational Hazard Category 4 (OHC 4), which requires full containment (ECA-D) during handling to avoid chemical exposure. According to the Safety Data Sheets, these chemicals are reproductive toxicants (H360D), which may damage the unborn child or impair fertility, and may therefore particularly affect female workers' health. In this study, a qualitative chemical risk assessment (qCRA) was conducted to assess chemical exposure during the handling of Betamethasone Valerate and Clobetasol Propionate in pharmaceutical laboratories. The qCRA identified a risk of potential chemical exposure (risk rating 8, amber). Therefore, immediate actions were taken to ensure that interim controls (according to the hierarchy of controls) were in place and in use to minimize the risk of chemical exposure. No open handling should be performed outside the Steroid Glove Box Isolator (SGB), and the required Personal Protective Equipment (PPE) must be worn. The PPE includes coveralls, nitrile gloves, safety shoes, and powered air-purifying respirators (PAPR). Furthermore, a quantitative assessment (personal air sampling) was conducted to verify the effectiveness of the engineering controls (SGB isolator) and to confirm whether there was chemical exposure, as indicated earlier by the qCRA.
Three personal air samples were collected using an air sampling pump and filters (IOM2 filters, 25 mm glass fiber media). The collected samples were analyzed by HPLC in the BV lab, and the measured concentrations were reported in µg/m³ with reference to the 8-hour Occupational Exposure Limits (8-hr TWA OELs) for each analyte. The analytical results were expressed as 8-hr time-weighted averages (TWA) and analyzed using Bayesian statistics (IHDataAnalyst). The results of the Bayesian likelihood graph indicate Category 0, meaning exposures are "de minimis" (trivial or non-existent) and employees have little to no exposure. These results also indicate that the three samples are representative, with very low variation (SD = 0.0014). In conclusion, the engineering controls were effective in protecting the operators from such exposure. However, routine chemical monitoring is required every 3 years unless there is a change in the process or the type of chemicals. Frequent management monitoring (daily, weekly, and monthly) is also required to ensure that the control measures are in place and in use. Furthermore, a Similar Exposure Group (SEG) was identified in this activity and included in the annual health surveillance for health monitoring.
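The 8-hr TWA values referred to above are exposure-weighted means over the reference shift; a minimal sketch, with hypothetical concentrations and sampling durations (unsampled time treated as zero exposure):

```python
def eight_hour_twa(samples):
    """8-hr time-weighted average: sum of concentration x duration over
    each sampling period, divided by the 8-hour reference shift."""
    return sum(conc * hours for conc, hours in samples) / 8.0

# Hypothetical personal-sampling results: (concentration in ug/m3, hours).
twa = eight_hour_twa([(0.02, 3.0), (0.01, 5.0)])
```

The resulting TWA is what would be compared against the analyte's 8-hr OEL before the Bayesian exposure-category analysis.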

Keywords: occupational health and safety, risk assessment, chemical exposure, hierarchy of control, reproductive

Procedia PDF Downloads 169
231 Evaluation of Toxicity of Cerium Oxide on Zebrafish Developmental Stages

Authors: Roberta Pecoraro, Elena Maria Scalisi

Abstract:

Engineered nanoparticles (ENPs) and nanomaterials (ENMs) constitute an active research area and a sector in full expansion. Their physicochemical characteristics and small size give them improved performance compared to common materials. Due to the increase in their production and their subsequent release into the environment, new strategies are emerging to assess the risk of nanomaterials. NPs can be released into the environment through aquatic systems by human activities and exert toxicity on living organisms. We evaluated the potential toxic effect of cerium oxide (CeO2) nanoparticles, which are used in many fields owing to their distinctive properties. To assess nanoparticle toxicity, the Fish Embryo Toxicity (FET) test was performed. Powders of CeO2 NPs supplied by the CNR-IMM of Catania are indicated as CeO2 type 1 (as-prepared) and CeO2 type 2 (modified), while CeO2 type 3 (commercial) was supplied by Sigma-Aldrich. Starting from a stock solution (0.001 g in 10 ml of dilution water) of each type of CeO2 NPs, the other solutions were obtained by adding 1 ml of the previous solution to 9 ml of dilution water, leading to three different concentrations (10⁻⁴, 10⁻⁵, 10⁻⁶ g/ml). All the solutions were sonicated to counteract the natural tendency of NPs to aggregate and sediment. The FET test was performed according to the OECD guidelines for testing chemicals, using our internal protocol. Eight selected fertilized eggs were placed in each beaker filled with 5 ml of each concentration of the three types of CeO2 NPs; control samples were incubated only with dilution water. A replicate was performed for each concentration. During the exposure period, we observed four endpoints (embryo coagulation, lack of somite formation, failure of the tail to detach from the yolk sac, and absence of heartbeat) under a stereomicroscope every 24 hours.
Immunohistochemical analysis of treated larvae was performed to evaluate the expression of metallothioneins (MTs), heat shock protein 70 (HSP70), and 7-ethoxyresorufin-O-deethylase (EROD). Our results showed no evident alterations in embryonic development: all embryos completed development, and hatching, which started around the 48th hour after exposure, was complete by the last observation at 72 hours. Good reactivity was found both in the embryos and in the newly hatched larvae. A heartbeat was also observed in embryos with reduced mobility, confirming their viability. Higher expression of the EROD biomarker was observed in larvae exposed to the three types of CeO2, showing a clear difference from the control. Weak positivity was found for the MTs biomarker in treated larvae as well as in the control. HSP70 was expressed homogeneously across all the nanoparticle types tested, at levels not much greater than the control. Our results are in agreement with other studies in the literature, in which exposure of Danio rerio larvae to other metal oxide nanoparticles showed no adverse effects on survival or hatching time. Further studies are necessary to clarify the role of these NPs and to resolve conflicting findings.
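The dilution scheme described above (1 ml of the previous solution transferred into 9 ml of dilution water, starting from 0.001 g in 10 ml) can be checked with a short sketch:

```python
def tenfold_series(stock_g_per_ml, steps):
    """Concentrations obtained by repeated 1 ml + 9 ml dilution-water
    transfers: each transfer divides the concentration by ten."""
    return [stock_g_per_ml / 10 ** i for i in range(steps)]

# 0.001 g in 10 ml of dilution water gives a 1e-4 g/ml stock;
# two successive 1:10 transfers yield the remaining test concentrations.
concs = tenfold_series(0.001 / 10, 3)
```

This reproduces the three exposure concentrations of 10⁻⁴, 10⁻⁵, and 10⁻⁶ g/ml stated in the abstract.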

Keywords: Danio rerio, endpoints, fish embryo toxicity test, metallic nanoparticles

Procedia PDF Downloads 132
230 Project Management and International Development: Competencies for International Assignment

Authors: M. P. Leroux, C. Coulombe

Abstract:

Projects are popular vehicles through which international aid is delivered in developing countries. To achieve their objectives, many northern organizations partner with local organizations in developing countries through technical assistance projects. International aid and international development projects have long been criticized for poor results, although billions are spent every year. Little empirical research in the field of project management has focused on knowledge transfer in the international development context. This paper focuses on the personal dimensions of international assignees working within local teams in the host country. We propose to explore possible links from a human resource management perspective in order to shed light on the under-researched problem of knowledge transfer in development cooperation projects. The process leading to capacity building being complex, involving multiple dimensions, and far from linear, we propose to assess whether traditional research on expatriates in multinational corporations pertains to the field of project management in developing countries. The following question is addressed: in the context of international development project cooperation, which personal determinants should the selection process focus on when looking to fill a technical assistance position in a developing country? To answer that question, we first reviewed the literature on expatriates in the context of inter-organizational knowledge transfer. Second, we proposed a theoretical framework combining perspectives from development studies and management to explore whether parallels can be drawn between traditional international assignments and technical assistance project assignments in developing countries. We conducted an exploratory study using case studies from technical assistance initiatives led in Haiti, a country in the Caribbean.
Data were collected from multiple sources following qualitative research methods. Direct observations in the field were permitted by the local leaders of six organizations; individual interviews with present and past international assignees and with local team members, as well as focus groups, were organized in order to triangulate the information collected. Contrary to empirical research on knowledge transfer in multinational corporations, results tend to show that technical expertise ranks well behind many other characteristics. Results also point to the importance of soft skills as a prerequisite for success in projects where local teams have to collaborate. More importantly, international assignees who spoke of knowledge sharing rather than knowledge transfer seemed to feel more satisfied at the end of their mandate than the others. Reciprocally, local team members who perceived they had participated in a project with an expatriate looking to share rather than to transfer knowledge tended to describe the project results in more positive terms. The results of this exploratory study open the way for a promising research agenda in the field of project management. They emphasize the urgent need for a better understanding of the complex set of soft skills project managers or project chiefs would benefit from developing, in particular the ability to absorb knowledge and the willingness to share one's knowledge.

Keywords: international assignee, international project cooperation, knowledge transfer, soft skills

Procedia PDF Downloads 141
229 Rapid, Automated Characterization of Microplastics Using Laser Direct Infrared Imaging and Spectroscopy

Authors: Andreas Kerstan, Darren Robey, Wesam Alvan, David Troiani

Abstract:

Over the last 3.5 years, Quantum Cascade Laser (QCL) technology has become increasingly important in infrared (IR) microscopy. The advantages over Fourier transform infrared (FTIR) are that large areas of a few square centimeters can be measured in minutes and that the light-intensive QCL makes it possible to obtain spectra with excellent S/N, even with just one scan. A firmly established application of the Laser Direct Infrared (LDIR) 8700 imaging system is the analysis of microplastics. The presence of microplastics in the environment, drinking water, and food chains is gaining significant public interest. To study their presence, rapid and reliable characterization of microplastic particles is essential. Significant technical hurdles in microplastic analysis stem from the sheer number of particles to be analyzed in each sample. Total particle counts of several thousand are common in environmental samples, while well-treated bottled drinking water may contain relatively few. While visual microscopy has been used extensively, it is prone to operator error and bias and is limited to particles larger than 300 µm. As a result, vibrational spectroscopic techniques such as Raman and FTIR microscopy have become more popular; however, they are time-consuming. There is a demand for rapid and highly automated techniques to measure particle count and size and to provide high-quality polymer identification. Analysis directly on the filter that often forms the last stage in sample preparation is highly desirable: by removing a sample preparation step, it can both improve laboratory efficiency and decrease opportunities for error. Recent advances in infrared micro-spectroscopy combining a QCL with scanning optics have created a new paradigm, LDIR. It offers improved speed of analysis as well as high levels of automation. Its mode of operation, however, requires an IR-reflective background, and this has, to date, limited the ability to perform direct "on-filter" analysis.
This study explores the potential to combine the filter membrane with an infrared-reflective surface. By combining an IR-reflective material or coating on a filter membrane with advanced image analysis and detection algorithms, it is demonstrated that such filters can indeed be used in this way. Vibrational spectroscopic techniques play a vital role in the investigation and understanding of microplastics in the environment and the food chain. While vibrational spectroscopy is widely deployed, improvements and novel innovations that increase the speed of analysis and ease of use can provide pathways to higher testing rates and, hence, an improved understanding of the impacts of microplastics in the environment. Due to its capability to measure large areas in minutes, its speed, degree of automation, and excellent S/N, LDIR could also be applied to various other samples, such as food adulteration, coatings, laminates, fabrics, textiles, and tissues. This presentation will highlight a few of them and focus on the benefits of LDIR versus classical techniques.
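The automated polymer identification step can be pictured as a spectral library search. The sketch below matches a measured spectrum to reference spectra by cosine similarity; the spectra, band positions, and threshold are synthetic stand-ins for illustration, not the LDIR 8700's actual algorithm or data.

```python
import numpy as np

# Minimal sketch of polymer identification by spectral library matching:
# each measured particle spectrum is compared to reference spectra by
# cosine similarity and assigned to the best-scoring polymer.
# All spectra here are synthetic stand-ins, not real IR data.

def identify(spectrum, library, threshold=0.8):
    """Return the best-matching polymer name, or 'unknown' below threshold."""
    best_name, best_score = "unknown", threshold
    for name, ref in library.items():
        score = np.dot(spectrum, ref) / (
            np.linalg.norm(spectrum) * np.linalg.norm(ref)
        )
        if score > best_score:
            best_name, best_score = name, score
    return best_name

wavenumbers = np.linspace(1000, 1800, 200)
library = {
    "PE":  np.exp(-((wavenumbers - 1465) / 20) ** 2),  # invented CH2 band
    "PET": np.exp(-((wavenumbers - 1715) / 20) ** 2),  # invented C=O band
}
measured = library["PET"] + 0.05 * np.random.default_rng(0).normal(size=200)
print(identify(measured, library))  # -> PET
```

A production system would add baseline correction and particle segmentation before matching, but the core assignment step is this comparison against a reference library.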

Keywords: QCL, automation, microplastics, tissues, infrared, speed

Procedia PDF Downloads 66
228 Embryonic Aneuploidy – Morphokinetic Behaviors as a Potential Diagnostic Biomarker

Authors: Banafsheh Nikmehr, Mohsen Bahrami, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Mallory Pitts, Tolga B. Mesen, Tamer M. Yalcinkaya

Abstract:

The number of people receiving in vitro fertilization (IVF) treatment has increased on a startling trajectory over the past two decades. Despite advances in the field, particularly the introduction of intracytoplasmic sperm injection (ICSI) and preimplantation genetic screening (PGS), IVF success rates remain low. A major factor contributing to IVF failure is embryonic aneuploidy (abnormal chromosome content), which often results in miscarriage and birth defects. Although PGS is often used as the standard diagnostic tool to identify aneuploid embryos, it is an invasive approach that could affect embryo development, and it remains inaccessible to many patients due to its high cost. As such, there is a clear need for a non-invasive, cost-effective approach to identify euploid embryos for single embryo transfer (SET). The reported differences between the morphokinetic behaviors of aneuploid and euploid embryos have shown promise to address this need. However, the current literature is inconclusive, and further research is urgently needed to translate current findings into clinical diagnostics. In this ongoing study, we found significant differences between the morphokinetic behaviors of euploid and aneuploid embryos, providing important insights and reaffirming the promise of such behaviors for developing non-invasive methodologies. Methodology: A total of 242 embryos (euploid: 149, aneuploid: 93) from 74 patients who underwent IVF treatment at Carolinas Fertility Clinics in Winston-Salem, NC, were analyzed. All embryos were incubated in an EmbryoScope incubator. The patients were randomly selected from January 2019 to June 2021, with most patients having both euploid and aneuploid embryos. All embryos reached the blastocyst stage and had known PGS outcomes. The ploidy assessment was done by a third-party testing laboratory on day 5-7 embryo biopsies.
The morphokinetic variables of each embryo were measured with the EmbryoViewer software (Unisense FertiliTech) on time-lapse images using 7 focal depths. We compared the times to pronuclei fading (tPNf), division to 2, 3, ..., 9 cells (t2, t3, ..., t9), start of embryo compaction (tSC), morula formation (tM), start of blastocyst formation (tSB), blastocyst formation (tB), and blastocyst expansion (tEB), as well as the intervals between them (e.g., c23 = t3 - t2). We used a mixed regression method for our statistical analyses to account for the correlation between multiple embryos per patient. Major Findings: The average patient age was 35.04 years and did not differ between euploid and aneuploid embryos (P = 0.6454). We found a significant difference in c45 = t5 - t4 (P = 0.0298); our results indicate that this interval lasts significantly longer on average for aneuploid embryos: c45(aneuploid) = 11.93 hr vs. c45(euploid) = 7.97 hr. In a separate analysis limited to embryos from the same patients (patients = 47, total embryos = 200, euploid = 112, aneuploid = 88), we obtained the same result (P = 0.0316). The statistical power for this analysis exceeded 87%. No other variable differed between the two groups. Conclusion: Our results demonstrate the importance of morphokinetic variables as potential biomarkers that could aid in non-invasively characterizing euploid and aneuploid embryos. We seek to study a larger population of embryos and to incorporate embryo quality in future studies.
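The interval bookkeeping behind variables such as c45 = t5 - t4 is straightforward to sketch. The timings below are invented for illustration, not the study's data, and the mixed regression itself would be fit with a statistics package; only the derivation of the c-intervals from annotated division times is shown.

```python
# Sketch of deriving morphokinetic intervals (e.g., c45 = t5 - t4) from
# annotated division times; hours below are made up for illustration.

def intervals(times):
    """Compute c-intervals between consecutive cell-division times (hours)."""
    out = {}
    for k in range(2, 9):
        a, b = f"t{k}", f"t{k + 1}"
        if a in times and b in times:
            out[f"c{k}{k + 1}"] = times[b] - times[a]
    return out

embryo = {"t2": 26.0, "t3": 37.5, "t4": 38.0, "t5": 49.9, "t6": 52.0}
print(intervals(embryo))  # c45 = t5 - t4 = 11.9 hr for this fake embryo
```

In the study, these per-embryo intervals would then enter the mixed regression as outcomes, with patient as the grouping factor to absorb the correlation between embryos from the same patient.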

Keywords: IVF, embryo, euploidy, aneuploidy, morphokinetic

Procedia PDF Downloads 87
227 Finite Element Analysis of Human Tarsals, Metatarsals and Phalanges for Predicting Probable Locations of Fractures

Authors: Irfan Anjum Manarvi, Fawzi Aljassir

Abstract:

Human bones have long been a keen area of research in the field of biomechanical engineering. Medical professionals, as well as engineering academics and researchers, have investigated various bones using medical, mechanical, and materials approaches to build the available body of knowledge. Their major focus has been to establish the properties of these bones and ultimately to develop processes and tools either to prevent fracture or to repair its damage. The literature shows that mechanical professionals conducted a variety of tests for hardness, deformation, and strain field measurement to arrive at their findings. However, they considered the accuracy of these results insufficient due to various limitations of tools and test equipment and difficulties in the availability of human bones. They proposed the need for further studies to first overcome inaccuracies in measurement methods, testing machines, and experimental errors and then carry out experimental or theoretical studies. Finite element analysis is a technique that was developed for the aerospace industry due to the complexity of its designs and materials. Over time it has found applications in many other industries due to its accuracy and its flexibility in the selection of materials and the types of loading that can be theoretically applied to an object under study. In the past few decades, the field of biomechanical engineering has also started to adopt it. However, the work done on the tarsals, metatarsals, and phalanges using this technique is very limited. Therefore, the present research focuses on applying this technique to these critical bones of the human body. The technique requires a 3-dimensional geometric computer model of the object to be analyzed. In the present research, a 3D laser scanner was used to obtain accurate geometric scans of individual tarsals, metatarsals, and phalanges from a typical human foot to build these computer geometric models.
These were then imported into finite element analysis software, and a mesh refinement process was carried out prior to analysis to ensure the computer models were true representatives of the actual bones. Each bone was then analyzed individually. A number of constraints and load conditions were applied to observe the stress and strain distributions in these bones under compressive and tensile loads or their combination. Results were collected for deformations along various axes, and the stress and strain distributions were examined to identify critical locations where fracture could occur. A comparative analysis of the failure properties of all three types of bones was carried out to establish which of them could fail first, and it is presented in this research. The results of this investigation could be used for further experimental studies by academics and researchers, as well as by industrial engineers, for the development of foot protection devices or tools for surgical operations and the recovery treatment of these bones. Researchers could build on these models to analyze a complete human foot through finite element analysis under various loading conditions such as walking, marching, running, and landing after a jump.
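The core of any such analysis is assembling element stiffness matrices and solving for nodal displacements, from which strains and stresses follow. A toy one-dimensional version (a bar fixed at one end under axial load, split into two elements) illustrates the principle; the material values are arbitrary placeholders, not measured bone properties.

```python
import numpy as np

# Toy 1D finite element analysis: a bar fixed at node 0, axial load at the
# free end, discretized into two equal elements. Material values are
# arbitrary illustrations, not bone properties.

E, A, L = 1.0e9, 1.0e-4, 0.1   # Young's modulus (Pa), area (m^2), element length (m)
k = E * A / L                  # axial stiffness of one element

# Assemble the global stiffness matrix for 3 nodes (2 elements).
K = np.zeros((3, 3))
for e in (0, 1):
    K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

F = np.array([0.0, 0.0, 100.0])  # 100 N tensile load at the free end

# Apply the fixed boundary condition at node 0 and solve K u = F.
u = np.zeros(3)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

strain = np.diff(u) / L   # element strains
stress = E * strain       # element stresses (Pa)
print(u, stress)          # uniform axial stress = F/A = 1 MPa
```

Real bone models differ in scale (3D tetrahedral meshes, anisotropic material laws, complex boundary conditions), but the solve-then-postprocess structure that yields the stress maps used to locate probable fracture sites is the same.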

Keywords: tarsals, metatarsals, phalanges, 3D scanning, finite element analysis

Procedia PDF Downloads 326
226 Screening for Women with Chorioamnionitis: An Integrative Literature Review

Authors: Allison Herlene Du Plessis, Dalena (R.M.) Van Rooyen, Wilma Ten Ham-Baloyi, Sihaam Jardien-Baboo

Abstract:

Introduction: Women die in pregnancy and childbirth for five main reasons: severe bleeding, infections, unsafe abortions, hypertensive disorders (pre-eclampsia and eclampsia), and medical complications including cardiac disease, diabetes, or HIV/AIDS complicated by pregnancy. In 2015, the WHO classified sepsis as the third highest cause of maternal mortality in the world. Chorioamnionitis is a clinical syndrome of intrauterine infection during any stage of pregnancy; it refers to bacteria ascending from the vaginal canal into the uterus, causing infection. While the incidence of chorioamnionitis is not well documented, its complications are, and midwives still struggle to identify this condition in time due to its complex nature. Few diagnostic methods are available in public health services, due to escalating laboratory costs. Often the affordable biomarkers, such as C-reactive protein (CRP), full blood count (FBC) and white blood cell count (WBC), have low significance in diagnosing chorioamnionitis. A lack of screening impacts the effective and timeous management of chorioamnionitis, and early identification and management of risks could help to prevent neonatal complications and reduce the subsequent series of morbidities and healthcare costs for infants affected by perinatal infections. Objective: This integrative literature review provides an overview of the current best research evidence on the screening of women at risk for chorioamnionitis. Design: An integrative literature review was conducted using a systematic electronic literature search through EBSCOhost, Cochrane Online, Wiley Online, PubMed, Scopus and Google. Guidelines, research studies, and reports in English related to chorioamnionitis from 2008 up until 2020 were included in the study. Findings: After critical appraisal, 31 articles were included.
Two thirds (67%) of the included literature ranked at the three highest levels of evidence (Levels I, II and III). Data extracted regarding screening for chorioamnionitis were synthesized into four themes, namely: screening by clinical signs and symptoms, screening by causative factors of chorioamnionitis, screening of obstetric history, and essential biomarkers to diagnose chorioamnionitis. Key conclusions: There are factors that midwives can use to identify women at risk for chorioamnionitis. However, there is a paucity of established sociological, epidemiological and behavioral factors for screening this population. Several biomarkers are available to diagnose chorioamnionitis. Increased interleukin-6 in amniotic fluid is the best indicator and strongest predictor of histological chorioamnionitis, whereas the available rapid matrix metalloproteinase-8 test requires further validation. Maternal white blood cell count (WBC) has shown poor selectivity and sensitivity, and C-reactive protein (CRP) thresholds varied among studies and are not ideal for a conclusive diagnosis of subclinical chorioamnionitis. Implications for practice: Screening of women at risk for chorioamnionitis by health care providers caring for pregnant women, including midwives, is important for diagnosis and management before complications arise, particularly in resource-constrained settings.

Keywords: chorioamnionitis, guidelines, best evidence, screening, diagnosis, pregnant women

Procedia PDF Downloads 122
225 Computer Aided Design Solution Based on Genetic Algorithms for FMEA and Control Plan in Automotive Industry

Authors: Nadia Belu, Laurenţiu Mihai Ionescu, Agnieszka Misztal

Abstract:

The automotive industry is one of the most important industries in the world, concerning not only the economy but also world culture. In the present financial and economic context, the field faces new challenges posed by the current crisis: companies must maintain product quality and deliver on time at a competitive price in order to achieve customer satisfaction. Two of the techniques most recommended by the specific standards of the automotive industry for quality management in product development are Failure Mode and Effects Analysis (FMEA) and the Control Plan. FMEA is a methodology for risk management and quality improvement aimed at identifying potential causes of failure in products and processes, quantifying them by risk assessment, ranking the identified problems according to their importance, and determining and implementing the related corrective actions. Companies use Control Plans, built from the FMEA results, to evaluate a process or product for strengths and weaknesses and to prevent problems before they occur. Control Plans are written descriptions of the systems used to control and minimize product and process variation. In addition, Control Plans specify the process monitoring and control methods (for example, Special Controls) used to control Special Characteristics. In this paper, we propose a computer-aided solution based on Genetic Algorithms in order to reduce the effort of drafting the FMEA analysis and Control Plan reports required for product launch, and to improve the knowledge of development teams for future projects. The solution allows the design team to enter the data required for the FMEA. The actual analysis is performed using Genetic Algorithms to find an optimum between the RPN risk factor and the cost of production. A feature of Genetic Algorithms is that they can be used to find solutions to multi-criteria optimization problems.
In our case, the three specific FMEA risk factors are considered together with the reduction of production cost. The analysis tool generates final reports for all FMEA processes. The data obtained in the FMEA reports are automatically integrated with the other entered parameters into the Control Plan. The solution is implemented as an application running on an intranet with two servers: one containing the analysis and plan generation engine and the other containing the database where the initial parameters and results are stored. The results can then be used as starting solutions in the synthesis of other projects. The solution was applied to the welding, laser cutting, and bending processes used to manufacture bus chassis. The advantages of the solution are the efficient elaboration of documents in the current project, by automatically generating the FMEA and Control Plan reports using multi-criteria optimization of production, and the building of a solid knowledge base for future projects. The solution we propose is a cheap alternative to other solutions on the market, using Open Source tools in its implementation.
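The optimization step can be sketched as a small genetic algorithm that trades off RPN (severity x occurrence x detection) against the cost of corrective actions. The failure modes, candidate actions, and fitness weighting below are invented for illustration and are not the paper's actual model.

```python
import random

# Toy genetic algorithm balancing FMEA risk (RPN) against corrective-action
# cost. Each gene selects one candidate action per failure mode; all numbers
# are illustrative, not from the paper.

random.seed(42)

# (severity, occurrence, detection, cost) per candidate action, per failure mode
ACTIONS = [
    [(8, 6, 7, 0), (8, 3, 4, 120), (8, 2, 2, 300)],  # failure mode 0
    [(6, 5, 5, 0), (6, 2, 5, 80),  (6, 2, 2, 250)],  # failure mode 1
]

def fitness(chrom):
    """Lower is better: total RPN plus a cost-weighted penalty."""
    total = 0.0
    for mode, gene in enumerate(chrom):
        s, o, d, cost = ACTIONS[mode][gene]
        total += s * o * d + 0.5 * cost  # 0.5 is an arbitrary trade-off weight
    return total

def evolve(pop_size=20, generations=30):
    pop = [[random.randrange(len(a)) for a in ACTIONS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, len(ACTIONS))
            child = p1[:cut] + p2[cut:]         # one-point crossover
            if random.random() < 0.2:           # mutation
                i = random.randrange(len(ACTIONS))
                child[i] = random.randrange(len(ACTIONS[i]))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

For this toy instance the optimum is the mid-cost action for both failure modes: the expensive actions cut RPN further but their cost penalty outweighs the gain, which is exactly the trade-off the paper's multi-criteria search navigates.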

Keywords: automotive industry, FMEA, control plan, automotive technology

Procedia PDF Downloads 405
224 An Aptasensor Based on Magnetic Relaxation Switch and Controlled Magnetic Separation for the Sensitive Detection of Pseudomonas aeruginosa

Authors: Fei Jia, Xingjian Bai, Xiaowei Zhang, Wenjie Yan, Ruitong Dai, Xingmin Li, Jozef Kokini

Abstract:

Pseudomonas aeruginosa is a Gram-negative, aerobic, opportunistic human pathogen present in soil, water, and food. This microbe has been recognized as a representative food-borne spoilage bacterium that can lead to many types of infections. Considering the casualties and property losses caused by P. aeruginosa, the development of a rapid and reliable technique for its detection is crucial. The whole-cell aptasensor, an emerging biosensor that uses an aptamer as a capture probe to bind the whole cell, has attracted much attention for food-borne pathogen detection due to its convenience and high sensitivity. Here, a low-field magnetic resonance imaging (LF-MRI) aptasensor for the rapid detection of P. aeruginosa was developed. The basic detection principle of the magnetic relaxation switch (MRSw) nanosensor lies in the 'T₂-shortening' effect of magnetic nanoparticles in NMR measurements. Briefly, the transverse relaxation time (T₂) of neighboring water protons is shortened when magnetic nanoparticles cluster due to cross-linking upon the recognition and binding of biological targets, or simply when the concentration of the magnetic nanoparticles increases. Such shortening is related both to the state change (aggregation or dissociation) and to the concentration change of the magnetic nanoparticles, and it can be detected using NMR relaxometry or MRI scanners. In this work, two sizes of magnetic nanoparticles, 10 nm (MN₁₀) and 400 nm (MN₄₀₀) in diameter, were separately functionalized with an anti-P. aeruginosa aptamer through 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC)/N-hydroxysuccinimide (NHS) chemistry to capture and enrich P. aeruginosa cells. When incubated with the target, a 'sandwich' (MN₁₀-bacteria-MN₄₀₀) complex is formed, driven by the binding of MN₄₀₀ to P. aeruginosa through aptamer recognition as well as the aggregation of MN₁₀ on the surface of P. aeruginosa.
Due to the different magnetic performance of MN₁₀ and MN₄₀₀ in a magnetic field, caused by their different saturation magnetizations, the MN₁₀-bacteria-MN₄₀₀ complex, as well as the unreacted MN₄₀₀ in solution, can be quickly removed by magnetic separation; as a result, only unreacted MN₁₀ remains in the solution. The remaining MN₁₀, which are superparamagnetic and stable in a low-field magnetic field, serve as the signal readout for the T₂ measurement. Under optimum conditions, the LF-MRI platform provides both image analysis and quantitative detection of P. aeruginosa, with a detection limit as low as 100 cfu/mL. The feasibility and specificity of the aptasensor were demonstrated on real food samples and validated against plate counting methods. With only two steps and less than 2 hours needed for the detection procedure, this robust aptasensor can detect P. aeruginosa over a wide linear range from 3.1 × 10² cfu/mL to 3.1 × 10⁷ cfu/mL, which is superior to the conventional plate counting method and other molecular biology testing assays. Moreover, the aptasensor has the potential to detect other bacteria or toxins by switching to suitable aptamers. Considering its excellent accuracy, feasibility, and practicality, the whole-cell aptasensor provides a promising platform for the quick, direct, and accurate determination of food-borne pathogens at the cell level.
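Quantification with such a sensor typically rests on a calibration curve relating the measured T₂ to bacterial concentration over the linear range. The sketch below fits a log-linear calibration on invented data points (not the study's measurements) and inverts it to estimate an unknown sample.

```python
import numpy as np

# Sketch of T2-based quantification: fit T2 versus log10(concentration) on
# calibration standards, then invert the fit for an unknown sample.
# All calibration numbers are invented for illustration.

conc_cfu_ml = np.array([3.1e2, 3.1e3, 3.1e4, 3.1e5, 3.1e6, 3.1e7])
t2_ms = np.array([480.0, 445.0, 412.0, 378.0, 340.0, 310.0])  # fake readings

# T2 decreases roughly linearly with log10(concentration) in this sketch.
slope, intercept = np.polyfit(np.log10(conc_cfu_ml), t2_ms, 1)

def estimate_concentration(t2):
    """Invert the linear calibration: log10(c) = (T2 - intercept) / slope."""
    return 10 ** ((t2 - intercept) / slope)

sample_t2 = 400.0
print(f"estimated concentration: {estimate_concentration(sample_t2):.3g} cfu/mL")
```

The estimate for a T₂ of 400 ms lands between the two bracketing standards, as expected from the monotonically decreasing calibration; a real assay would also report the fit quality and restrict estimates to the validated linear range.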

Keywords: magnetic resonance imaging, meat spoilage, P. aeruginosa, transverse relaxation time

Procedia PDF Downloads 151
223 A Semi-supervised Classification Approach for Trend Following Investment Strategy

Authors: Rodrigo Arnaldo Scarpel

Abstract:

Trend following is a widely accepted investment strategy that adopts a rule-based trading mechanism rather than striving to predict market direction or relying on information gathering to decide when to buy and when to sell a stock. Thus, in trend following one must respond to market movements that have recently happened and that are currently happening, rather than to what will happen. The optimal outcome of a trend-following strategy is to catch a bull market at its early stage, ride the trend, and liquidate the position at the first evidence of the subsequent bear market. To apply the trend-following strategy, one needs to find the trend and identify trade signals. In order to avoid false signals, i.e., to identify fluctuations over short, mid and long terms and to separate noise from real changes in the trend, most academic works rely on moving averages and other technical analysis indicators, such as the moving average convergence divergence (MACD) and the relative strength index (RSI), to uncover intelligible stock trading rules in the trend-following spirit. Recently, some works have applied machine learning techniques to trading rule discovery. In those works, the process of rule construction is based on evolutionary learning, which aims to adapt the rules to the current environment and searches for the globally optimal rules in the search space. In this work, instead of focusing on the use of machine learning techniques for creating trading rules, a time series trend classification employing a semi-supervised approach was used to identify early both the beginning and the end of upward and downward trends. Such a classification model can be employed to identify trade signals, and the decision-making procedure is that if an up-trend (down-trend) is identified, a buy (sell) signal is generated.
Semi-supervised learning is used for model training when only part of the data is labeled; semi-supervised classification aims to train a classifier on both the labeled and the unlabeled data, such that it outperforms a supervised classifier trained only on the labeled data. To illustrate the proposed approach, daily trade information was employed, including the open, high, low and closing values and the volume, from January 1, 2000 to December 31, 2022, for the São Paulo Exchange Composite index (IBOVESPA). Over this time period, consistent changes in price, upwards or downwards, were identified visually for assigning labels, leaving the rest of the days (when there is no consistent change in price) unlabeled. To train the classification model, a pseudo-label semi-supervised learning strategy was used, employing different technical analysis indicators as features. In this learning strategy, the core idea is to use the unlabeled data to generate pseudo-labels for supervised training. The results were evaluated using the annualized return and excess return, along with the Sortino and Sharpe ratios. Over the evaluated time period, the obtained results were very consistent and can be considered promising for generating the intended trading signals.
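The pseudo-label strategy can be sketched as a self-training loop: fit on the labeled days, predict the unlabeled days, and absorb only confident predictions as new labels before refitting. The toy features, base classifier (nearest centroid), and confidence rule below are illustrative stand-ins for the paper's indicator set and model.

```python
import numpy as np

# Toy pseudo-label self-training: start from a few labeled "up"/"down" days,
# repeatedly classify the unlabeled days with a nearest-centroid model, and
# absorb only confident predictions as pseudo-labels. The two features
# (imagine a moving-average gap and an RSI-like value) are synthetic.

rng = np.random.default_rng(0)
X_lab = np.array([[1.0, 0.8], [0.9, 1.1], [-1.0, -0.9], [-1.1, -0.7]])
y_lab = np.array([1, 1, 0, 0])  # 1 = up-trend, 0 = down-trend
X_unl = rng.normal(0, 0.3, (40, 2)) + np.where(rng.random(40) < 0.5, 1, -1)[:, None]

def centroids(X, y):
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

X, y = X_lab.copy(), y_lab.copy()
for _ in range(5):                         # a few self-training rounds
    if len(X_unl) == 0:
        break
    C = centroids(X, y)
    d = np.linalg.norm(X_unl[:, None, :] - C[None, :, :], axis=2)
    pred = d.argmin(axis=1)
    margin = np.abs(d[:, 0] - d[:, 1])     # confidence = distance margin
    keep = margin > 0.5                    # pseudo-label only confident points
    if not keep.any():
        break
    X = np.vstack([X, X_unl[keep]])
    y = np.concatenate([y, pred[keep]])
    X_unl = X_unl[~keep]

print(len(y), "training points after self-training")
```

The confidence gate is what keeps the loop from reinforcing its own mistakes on ambiguous (sideways) days, which in the paper corresponds to the days left unlabeled because no consistent price change was visible.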

Keywords: evolutionary learning, semi-supervised classification, time series data, trading signals generation

Procedia PDF Downloads 88
222 Sustainability in Higher Education: A Case of Transition Management from a Private University in Turkey (Ongoing Study)

Authors: Ayse Collins

Abstract:

Agenda 2030 puts Higher Education Institutions (HEIs) in a situation where they should emphasize ways to promote sustainability accordingly. However, it is still unclear a) how sustainability is understood and b) which actions have been taken, in both discourse and practice, by HEIs regarding the three pillars of sustainability: society, environment, and economy. There are models of sustainable universities developed by different authors from different countries, for example, the Global Reporting Initiative (GRI) methodology, which offers a variety of indicators to diagnose performance. However, such frameworks were not developed for universities in particular. Any model, in this sense, cannot be completed adequately without defining the appropriate tools to measure, analyze and control the performance of initiatives. There is a need to conduct research in different universities from different countries to understand where we stand in terms of sustainable higher education. Therefore, this study aims at exploring the actions taken by a university in Ankara, Turkey, since Agenda 2030 objectives and targets should be localized to a given geography. This university has just announced 2021-2022 as its "Sustainability Year." The research is a multi-methodology longitudinal study that uses the theoretical framework of organization and transition management (TM). It is designed to examine activities that are strategic, tactical, operational, and reflexive in nature and covers six main aspects: the academic community, administrative staff, operations and services, teaching, research, and extension. The preliminary research will establish the role of top university governance, the perception of stakeholders (students, instructors, administrative and support staff) regarding sustainability, and the level of achievement at the mid-year and end-of-year evaluations.
TM theory is a multi-scale, multi-actor, process-oriented approach with an analytical framework to explore and promote change in social systems. The stages and the respective data collection methods in this research are as follows. Pre-development stage: a) semi-structured interviews with university governance, b) an open-ended survey with faculty, students, and administrative staff, c) semi-structured interviews with support staff, and d) analysis of current secondary data on sustainability. Take-off stage: a) semi-structured interviews with university governance, faculty, students, and administrative and support staff, and b) analysis of secondary data. Breakthrough/stabilization stage: a) a survey with all stakeholders at the university, and b) secondary data analysis using selected indicators for the university's first sustainability report. The findings from the pre-development stage highlight how stakeholders, coming from different faculties and disciplines with different identities and characteristics, face the sustainability challenge differently. Though similar sustainable development goals (social, environmental, and economic) are set in the institution, there are differences across disciplines and among stakeholders that need to be considered to reach the optimum goal. It is believed that the results will help HEIs change their organizational culture to embed sustainability values in their strategic planning and in academic and managerial work, by devoting enough time and resources to coping successfully with sustainability.

Keywords: higher education, sustainability, sustainability auditing, transition management

Procedia PDF Downloads 105
221 The End Justifies the Means: Using Programmed Mastery Drill to Teach Spoken English to Spanish Youngsters, without Relying on Homework

Authors: Robert Pocklington

Abstract:

Most current language courses expect students to be ‘vocational’, sacrificing their free time in order to learn. However, pupils with a full-time job, or bringing up children, hardly have a spare moment. Others just need the language as a tool or a qualification, as if it were book-keeping or a driving licence. Then there are children in unstructured families whose stressful lives make private study almost impossible. And there are the countless parents whose evenings and weekends have become a nightmare of trying to get the children to do their homework. There are many arguments against homework being a necessity (rather than an optional extra for more ambitious or dedicated students), making a clear case for teaching methods which ensure full learning of the key content within the classroom. A methodology which could be described as Programmed Mastery Learning has been used at Fluency Language Academy (Spain) since 1992 to teach English to over 4,000 pupils yearly, with a staff of around 100 teachers, barely requiring homework. The course is structured according to the tenets of Programmed Learning: small, manageable teaching steps; immediate feedback; and constant successful activity. For the Mastery component (not stopping until everyone has learned), memorisation and practice are entrusted to flashcard-based drilling in the classroom, leading all students to progress together and develop a permanently growing knowledge base. Vocabulary and expressions are memorised using flashcards as stimuli, obliging the brain to constantly recover words from long-term memory and convert them into reflex knowledge before they are deployed in sentence building. The use of grammar rules is practised with ‘cue’ flashcards: the brain refers consciously to the grammar rule each time it produces a phrase, until it comes easily. This automation of lexicon and correct grammar use greatly facilitates all other language and conversational activities. 
The full B2 course consists of 48 units, each of which takes a class an average of 17.5 hours to complete, allowing the vast majority of students to reach B2 level in 840 class hours, as corroborated by an 85% pass rate in the Cambridge University B2 exam (First Certificate). In the past, studying for qualifications was just one of many options open to young people. Nowadays, youngsters need to stay at school and obtain qualifications in order to get any kind of job. There are many students in our classes who have little intrinsic interest in what they are studying; they just need the certificate. In these circumstances, and with increasing government pressure to minimise failure, teachers can no longer think ‘If they don’t study, and fail, it’s their problem’. It is now becoming the teacher’s problem. Teachers are ever more in need of methods which make their pupils successful learners, which means assuring learning in the classroom. Furthermore, homework is arguably the main divider between successful middle-class schoolchildren and failing working-class children who drop out: if everything important is learned at school, the latter will have a much better chance, favouring inclusiveness in the language classroom.

Keywords: flashcard drilling, fluency method, mastery learning, programmed learning, teaching English as a foreign language

Procedia PDF Downloads 109
220 Owning (up to) the 'Art of the Insane': Re-Claiming Personhood through Copyright Law

Authors: Mathilde Pavis

Abstract:

From Schumann to Van Gogh, Frida Kahlo, and Ray Charles, stories narrating the careers of artists with physical or mental disabilities are becoming increasingly popular. From the emergence of ‘pathography’ at the end of the 18th century to cinematographic portrayals, the work and lives of differently-abled creative individuals continue to fascinate readers, spectators and researchers. The achievements of those artists form the tip of an iceberg composed of complex politico-cultural movements which continue to advocate for wider recognition of disabled artists’ contribution to western culture. This paper envisages copyright law as a potential tool to that end. It investigates the array of rights available to artists with intellectual disabilities to assert their position as authors of their artwork in the twenty-first century, looking at international and national copyright laws (UK and US). Put simply, this paper questions whether an artist’s intellectual disability could be a barrier to asserting their intellectual property rights over their creation. From a legal perspective, basic principles of non-discrimination would contradict any representation of an artist’s disability as an obstacle to authorship as granted by intellectual property laws. Yet empirical studies reveal that artists with intellectual disabilities are often denied the opportunity to exercise their intellectual property rights or any form of agency over their work. In practice, it appears that, unlike for non-disabled artists, the prospect for differently-abled creators to make use of their rights is contingent on the context in which the creative process takes place. The management of such rights will often rest with the institution, art therapist or mediator involved in the artists’ work, as these artists will have required greater support than their non-disabled peers for a variety of reasons, whether medical or practical. 
Moreover, the financial setbacks suffered by medical institutions and private therapy practices have renewed administrators’ and physicians’ interest in monetising the artworks produced under their supervision. Adding to those economic incentives, the rise of criminal and civil litigation in psychiatric cases has also encouraged the retention of patients’ work by therapists who feel compelled to keep comprehensive medical records to shield themselves from liability in the event of a lawsuit. Unspoken transactions, contracts, implied agreements and consent forms have thus progressively made their way into the relationship between those artists and their therapists or assistants, disregarding any notions of copyright. The question of artists’ authorship finds itself caught in an unusually multi-faceted web of issues formed by tightening purse strings, ethical concerns and the fear of civil or criminal liability. Whilst those issues are playing out behind closed doors, the popularity of what was once called the ‘Art of the Insane’ continues to grow and open new commercial avenues. This socio-economic context exacerbates the need to devise a legal framework able to help practitioners, artists and their advocates navigate through those issues in such a way that neither this minority nor our cultural heritage suffers from the fragmentation of the legal protection available to them.

Keywords: authorship, copyright law, intellectual disabilities, art therapy and mediation

Procedia PDF Downloads 147