Search results for: soft computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1898

68 A Lightning Strike Mimic: The Abusive Use of Dog Shock Collar Presents as Encephalopathy, Respiratory Arrest, Cardiogenic Shock, Severe Hypernatremia, Rhabdomyolysis, and Multiorgan Injury

Authors: Merrick Lopez, Aashish Abraham, Melissa Egge, Marissa Hood, Jui Shah

Abstract:

A 3-year-old male with unknown medical history presented initially with encephalopathy, was intubated for respiratory failure, and was admitted to the pediatric intensive care unit (PICU) with refractory shock. During resuscitation in the emergency department, he was found to be in severe metabolic acidosis with a pH of 7.03 and was escalated on vasopressor drips for hypotension. His initial sodium was 174. He was noted to have burn injuries to his scalp, forehead, right axilla, bilateral arm creases, and lower legs. He had rhabdomyolysis (initial creatine kinase 5,430 U/L with peak levels of 62,340 U/L; normal <335 U/L), cardiac injury (initial troponin 88 ng/L with a peak at 145 ng/L; normal <15 ng/L), hypernatremia (peak 174; normal 140), hypocalcemia, liver injury, acute kidney injury, and neuronal loss on magnetic resonance imaging (MRI). Soft restraints and a shock collar were found in the home. He was critically ill for 8 days but was gradually weaned off drips, extubated, and started on feeds.

Discussion: Electrical injury, specifically lightning injury, is an uncommon but devastating cause of injury in pediatric patients. This patient with suspected abusive use of a dog shock collar presented similarly to a lightning strike. Common entrance points include the hands and head, as in our patient with linear wounds on his forehead. When current enters, it passes through tissues with the least resistance. Nerves, blood vessels, and muscles have high fluid and electrolyte content and are commonly affected. Exit points are typically the extremities, as in our child, who had circumferential burns around his arm creases and ankles. Linear burns preferentially follow areas of high sweat concentration and are thought to be due to vaporization of water on the skin's surface. The most common cause of death from a lightning strike is cardiopulmonary arrest. The massive depolarization of the myocardium can result in arrhythmias and myocardial necrosis. The patient presented in cardiogenic shock with evident cardiac damage. Electricity passing through vessels can lead to vaporization of intravascular water, which can explain his severe hypernatremia. He also sustained other internal organ injuries (adrenal glands, pancreas, liver, and kidneys). Electrical discharge also leads to direct skeletal muscle injury in addition to prolonged muscular spasm. Rhabdomyolysis, the acute breakdown of muscle, releases potentially toxic components into the circulation, which can lead to acute renal failure. The patient had severe rhabdomyolysis and renal injury. Early hypocalcemia has been consistently demonstrated in patients with rhabdomyolysis; it was present in this patient and led to increased vasopressor needs. Central nervous system injuries are also common and can include encephalopathy, hypoxic injury, and cerebral infarction. The patient had evidence of brain injury on MRI.

Conclusion: Electrical injuries due to lightning strikes and abusive use of a dog shock collar are rare, but both can present with respiratory failure, shock, hypernatremia, rhabdomyolysis, brain injury, and multiorgan damage. Although rare, early identification and prompt management of acute and chronic complications are essential in these children.

Keywords: cardiogenic shock, dog shock collar, lightning strike, rhabdomyolysis

Procedia PDF Downloads 76
67 Enabling and Ageing-Friendly Neighbourhoods: An Eye-Tracking Study of Multi-Sensory Experience of Senior Citizens in Singapore

Authors: Zdravko Trivic, Kelvin E. Y. Low, Darko Radovic, Raymond Lucas

Abstract:

Our understanding and experience of the built environment are primarily shaped by multi-sensory, emotional, and symbolic modes of exchange with spaces. The sensory and cognitive declines that come with ageing substantially affect the overall quality of life of elderly citizens and the ways they perceive and use the urban environment. Reduced mobility and increased risk of falls, problems with spatial orientation and communication, lower confidence and independence levels, decreased willingness to go out, and social withdrawal are some of the major consequences of sensory decline that challenge almost all segments of seniors' everyday living. However, contemporary urban environments are often either sensorially overwhelming or depleting, resulting in physical, mental, and emotional stress. Moreover, the design and planning of housing neighbourhoods hardly go beyond passive 'do-no-harm' and universal design principles and the limited provision of often non-integrated eldercare and inter-generational facilities. This paper explores and discusses the largely neglected relationships between the 'hard' and 'soft' aspects of housing neighbourhoods and urban experience, focusing on seniors' perception and multi-sensory experience as vehicles for the design and planning of high-density housing neighbourhoods that are inclusive and empathetic yet also build senior residents' physical and mental abilities at different stages of ageing. The paper outlines methods and key findings from research conducted in two high-density housing neighbourhoods in Singapore, which aimed to capture and evaluate the multi-sensorial qualities of the two neighbourhoods from the perspective of senior residents. Research methods employed included: on-site recordings of 'objective' quantitative sensory data (air temperature and humidity, sound level, and luminance) using a multi-function environment meter; spatial mapping of patterns of elderly users' transient and stationary activity; socio-sensory perception surveys; and sensorial journeys with local residents using eye-tracking glasses, supplemented by walk-along or post-walk interviews. The paper develops a multi-sensory framework to synthesise, cross-reference, and visualise the activity and spatio-sensory rhythms and patterns and to distil key issues pertinent to ageing-friendly and health-supportive neighbourhood design. Key findings show senior residents' concerns with walkability, safety and wayfinding, overall aesthetic qualities, cleanliness, smell, noise, and crowdedness in their neighbourhoods, as well as the lack of design support for all-day use in the context of Singapore's tropical climate and for inter-generational social interaction. The (ongoing) analysis of eye-tracking data reveals the spatial elements that senior residents look at and interact with most frequently, with the visual range often directed towards the ground. With its capacity to meaningfully combine quantitative and qualitative, measured and experienced sensory data, the multi-sensory framework proves fruitful for distilling key design opportunities based on often ignored aspects of subjective and taken-for-granted interactions with the familiar outdoor environment. It offers an alternative way of leveraging the potential of housing neighbourhoods to take a more active role in enabling healthful living at all stages of ageing.

Keywords: ageing-friendly neighbourhoods, eye-tracking, high-density environment, multi-sensory approach, perception

Procedia PDF Downloads 135
66 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems

Authors: Alejandro Adorjan

Abstract:

Universities are motivated to understand the factors contributing to low retention of engineering undergraduates. While the number of precollege students interested in engineering increases, the number of engineering graduates continues to decrease and attrition rates for engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate Engineering Science programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out of engineering cite Calculus specifically as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. Calculus 1 at Universidad ORT Uruguay focuses on developing several competencies in our Software Engineering students, such as the capacity for synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE Computing Curricula). Every semester we try to reflect on our practice and answer the following research question: What kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week, non-compulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were enrolled in C1. Traditional lectures in this bridge course led students simply to transcribe notes from the blackboard. Last semester we proposed a hands-on lab course using GeoGebra (interactive geometry and Computer Algebra System (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and developed most of the tasks and topics in GeoGebra. As a result of this approach, several pros and cons were found. There was an excessive number of weekly hours of mathematics for students and, as the course was non-compulsory, attendance decreased over time. Nevertheless, the activity succeeded in improving final test results, and most students expressed pleasure in working with this methodology. This technology-oriented teaching approach strengthens the math competencies needed for Calculus 1 and improves student performance, engagement, and self-confidence. It is important as teachers to reflect on our practice, including innovative proposals with the objective of engaging students, increasing retention, and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer so as to validate these preliminary results.

Keywords: calculus, engineering education, precalculus, summer program

Procedia PDF Downloads 281
65 Electric Vehicle Fleet Operators in the Energy Market - Feasibility and Effects on the Electricity Grid

Authors: Benjamin Blat Belmonte, Stephan Rinderknecht

Abstract:

The transition to electric vehicles (EVs) stands at the forefront of innovative strategies designed to address environmental concerns and reduce fossil fuel dependency. As the number of EVs on the roads increases, so too does the potential for their integration into energy markets. This research explores the transformative possibilities of using electric vehicle fleets, specifically electric bus fleets, not just as consumers but as active participants in the energy market. This paper investigates the feasibility and grid effects of electric vehicle fleet operators in the energy market. Our objective centers on a comprehensive exploration of the sector coupling domain, with an emphasis on the economic potential in both electricity and balancing markets. Methodologically, our approach combines data mining techniques with thorough pre-processing, pulling from a rich repository of electricity and balancing market data. Our findings are grounded in the actual operational realities of a bus fleet operator in Darmstadt, Germany. We employ a Mixed Integer Linear Programming (MILP) approach, with the bulk of the computations processed on the High-Performance Computing (HPC) platform ‘Lichtenbergcluster’. Our findings underscore the compelling economic potential of EV fleets in the energy market. With electric buses becoming more prevalent, the considerable size of these fleets, paired with their substantial battery capacity, opens up new horizons for energy market participation. Notably, our research reveals that economic viability is not the sole advantage: participating actively in the energy market also translates into pronounced positive effects on grid stabilization. Essentially, EV fleet operators can serve a dual purpose, facilitating transport while simultaneously playing an instrumental role in enhancing grid reliability and resilience. This research highlights the symbiotic relationship between the growth of EV fleets and the stabilization of the energy grid. Such systems could lead to both commercial and ecological advantages, reinforcing the value of electric bus fleets in the broader landscape of sustainable energy solutions. In conclusion, the electrification of transport offers more than just a means to reduce local greenhouse gas emissions. By positioning electric vehicle fleet operators as active participants in the energy market, there lies a powerful opportunity to drive forward the energy transition. This study serves as a testament to the synergistic potential of EV fleets in bolstering both economic viability and grid stabilization, signaling a promising trajectory for future sector coupling endeavors.
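
The abstract does not give the optimization model itself. As a rough illustration of the kind of MILP involved, the sketch below schedules overnight charging and discharging of a small bus fleet against hourly prices using the PuLP library; the fleet size, prices, and battery parameters are invented, and the authors' actual formulation (and their HPC setup) is certainly richer.

```python
# Illustrative sketch (not the authors' model): schedule charging/discharging
# of an electric bus fleet against hourly prices. All numbers are assumptions.
import pulp

hours = range(24)
buses = range(10)                     # hypothetical fleet of 10 electric buses
price = [0.30 if 8 <= h <= 20 else 0.15 for h in hours]  # EUR/kWh, assumed
p_max = 150.0                         # max charge/discharge power per bus [kW]
e_cap = 350.0                         # battery capacity per bus [kWh]
e_need = 280.0                        # energy each bus must hold by hour 23 [kWh]

prob = pulp.LpProblem("fleet_charging", pulp.LpMinimize)
# Charging (grid -> bus) and discharging (bus -> grid) power per bus and hour.
c = pulp.LpVariable.dicts("charge", (buses, hours), 0, p_max)
d = pulp.LpVariable.dicts("discharge", (buses, hours), 0, p_max)

# Objective: cost of energy bought minus revenue from energy sold back.
prob += pulp.lpSum(price[h] * (c[b][h] - d[b][h]) for b in buses for h in hours)

for b in buses:
    # State of charge stays within the battery at every hour (efficiency ignored).
    for h in hours:
        soc = pulp.lpSum(c[b][t] - d[b][t] for t in range(h + 1))
        prob += soc <= e_cap
        prob += soc >= 0
    # Each bus must end the horizon with enough energy for service.
    prob += pulp.lpSum(c[b][h] - d[b][h] for h in hours) >= e_need

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```

A real sector-coupling model would add balancing-market bids, charger limits, and timetable constraints, which is what pushes the problem onto HPC hardware.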

Keywords: electric vehicle fleet, sector coupling, optimization, electricity market, balancing market

Procedia PDF Downloads 59
64 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning

Authors: Shayan Mohajer Hamidi

Abstract:

Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
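
The abstract leaves the training objective at a high level. A minimal sketch of the client-side distillation step in a PyTorch setting follows; the coded-teacher construction and the server-side federated aggregation are not shown, and all names and hyperparameters here are illustrative assumptions rather than the paper's specification.

```python
# Sketch of a client's local distillation round; private data stays on-device.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of soft-target KL (teacher -> student) and hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                      # rescale gradients for the temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def client_update(student, teacher, loader, epochs=1, lr=1e-3):
    """One client's local round: distill the shared teacher into its student."""
    opt = torch.optim.SGD(student.parameters(), lr=lr)
    teacher.eval()
    for _ in range(epochs):
        for x, y in loader:          # private data never leaves the client
            with torch.no_grad():
                t_logits = teacher(x)
            loss = distillation_loss(student(x), t_logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student.state_dict()      # only distilled weights are shared
```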

Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning

Procedia PDF Downloads 57
63 Biotech Processes to Recover Valuable Fraction from Buffalo Whey Usable in Probiotic Growth, Cosmeceutical, Nutraceutical and Food Industries

Authors: Alberto Alfano, Sergio D’ambrosio, Darshankumar Parecha, Donatella Cimini, Chiara Schiraldi

Abstract:

The main objective of this study concerns the setup of an efficient small-scale platform for the conversion of local renewable waste materials, such as whey, into added-value products, thereby reducing the environmental impact and costs deriving from the disposal of processing waste. Buffalo milk whey derived from the cheese-making process, called second cheese whey, is the main by-product of the dairy industry. Whey is the most polluting by-product of cheese manufacturing, consisting of lactose, lactic acid, proteins, and salts; these components also make whey a potential source of added-value products. In Italy, and in particular in the Campania region, soft cheese production generates a large volume of liquid waste, especially during late spring and summer. This project is part of a circular economy perspective focused on the conversion of potentially polluting, difficult-to-purify waste into a resource to be exploited, and it embodies the concept of the three "R"s: reduce, recycle, and reuse. Special focus was paid to the production of health-promoting biomolecules and biopolymers, which may be exploited in different segments of the food and pharmaceutical industries. These biomolecules may be recovered through appropriate processes and reused to obtain added-value products. Ultrafiltration and nanofiltration processes were therefore performed to fractionate bioactive components starting from buffalo milk whey. In this direction, the present study focused on the implementation of a downstream process that converts waste generated from food and food-processing industries into added-value products with potential applications. Owing to innovative downstream and biotechnological processes, whey may be considered a resource rather than a waste product, yielding high added-value products such as food supplements (probiotics), cosmeceuticals, biopolymers, and recyclable purified water. Besides targeting gastrointestinal disorders, probiotics such as Lactobacilli have been reported to improve immunomodulation and protection of the host against infections caused by viral and bacterial pathogens. Interestingly, inactivated microbial (probiotic) cells and their metabolic products, termed parabiotics and postbiotics respectively, also play a crucial role and act as mediators in the modulation of the host's immune function. To boost the production of biomass (both viable and/or heat-inactivated cells) and/or the synthesis of growth-related postbiotics, such as EPS, efficient and sustainable fermentation processes are necessary. Based on a "zero-waste" approach, wastes generated from local industries can be recovered and recycled to develop sustainable biotechnological processes to obtain probiotics as well as postbiotics and parabiotics, to be tested as bioactive compounds against gastrointestinal disorders. The results have shown that it was possible to recover an ultrafiltration retentate with characteristics suitable for use against skin dehydration, for forming films (e.g., packaging for the food industry), or as a wound repair agent, and a nanofiltration retentate from which to recover lactic acid and carbon sources (e.g., lactose, glucose) for microbial cultivation. Finally, the last goal is to obtain purified water that can be reused throughout the process. In fact, water reclamation and reuse provide a unique and viable opportunity to augment traditional water supplies, a key issue nowadays.

Keywords: biotech process, downstream process, probiotic growth, from waste to product, buffalo whey

Procedia PDF Downloads 56
62 A Case Study of Remote Location Viewing and Its Significance in Mobile Learning

Authors: James Gallagher, Phillip Benachour

Abstract:

As location-aware mobile technologies become ever more omnipresent, the prospect of exploiting their context awareness to support learning approaches grows. Building on the growing acceptance of ubiquitous computing and the steady progress in both the accuracy and battery usage of pervasive devices, we present a case study of remote location viewing and how it can be utilized to support mobile learning in situ using an existing scenario. Through the case study we introduce an innovative new application, Mobipeek, based around a request/response protocol for the viewing of a remote location, and explore how it can apply both as part of a teacher-led activity and in informal learning situations. The system developed allows a user to select a point on a map and send a request. Users can attach messages alongside time and distance constraints. Users within the bounds of the request can respond with an image and an accompanying message, providing context to the response. This application can be used alongside a structured learning activity such as the use of mobile phone cameras outdoors as part of an interactive lesson. An example of a learning activity would be to collect photos in the wild of plants, vegetation, and foliage as part of a geography or environmental science lesson. Another example could be to take photos of architectural buildings and monuments as part of an architecture course. These images can be uploaded and then displayed back in the classroom for students to share their experiences and compare their findings with their peers. This can help foster students' active participation while helping students understand lessons in a more interesting and effective way. Mobipeek could augment the student learning experience by providing further interaction with other peers in a remote location. The activity can be part of a wider study between schools in different areas of the country, enabling sharing and interaction between more participants. Remote location viewing can be used to access images in a specific location, the choice of which will depend on the activity and lesson. For example, architectural buildings of a specific period can be shared between two or more cities. The augmentation of the learning experience can be manifested in the different contextual and cultural influences as well as in the sharing of images from different locations. In addition to the implementation of Mobipeek, we analyse this application and a set of further possible solutions targeted towards making learning more engaging. Consideration is given to the benefits of such a system, privacy concerns, and the feasibility of widespread usage. We also propose elements of 'gamification' in an attempt to further the engagement derived from such a tool and encourage usage. We conclude by identifying limitations from both a technical and a mobile learning perspective.
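
As a concrete illustration of the request/response protocol described above, here is a minimal sketch in Python; the field names, transport, and the haversine bounds check are our own assumptions, not Mobipeek's actual wire format.

```python
# Sketch of a Mobipeek-style exchange: a request carries a map point plus time
# and distance constraints; only devices inside those bounds may respond.
import math, time
from dataclasses import dataclass

@dataclass
class PeekRequest:
    lat: float            # requested map point
    lon: float
    radius_m: float       # distance constraint: who may respond
    expires_at: float     # time constraint (unix seconds)
    message: str          # note attached by the requester

@dataclass
class PeekResponse:
    request_id: str
    image_uri: str        # photo taken at the remote location
    message: str          # context supplied by the responder

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def may_respond(req: PeekRequest, my_lat: float, my_lon: float) -> bool:
    """A device may answer only inside the request's time and distance bounds."""
    in_range = haversine_m(req.lat, req.lon, my_lat, my_lon) <= req.radius_m
    return in_range and time.time() < req.expires_at
```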

Keywords: context aware, location aware, mobile learning, remote viewing

Procedia PDF Downloads 278
61 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow

Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat

Abstract:

Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater variability. Our solution uses real-time multimodal multisensor data labeled by objective performance outcomes to infer the engagement of students. The study involved four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a new type of continuous performance test, the Seek-X type, is introduced. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best results: 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors, and we found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze. It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that is not subject to inter-rater variability, does not depend on human observation, and is not reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
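
For readers who want the shape of the evaluation, the following sketch reproduces the leave-one-out random forest setup with scikit-learn; the feature matrix here is a random placeholder standing in for the nine extracted gaze/EEG/pose/interaction features.

```python
# Sketch of the LOOCV evaluation loop, assuming a feature matrix X (nine
# multimodal features per observation) and engaged/disengaged labels y.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 9))        # placeholder for the 9 extracted features
y = rng.integers(0, 2, size=120)     # 1 = engaged, 0 = disengaged (placeholder)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
y_hat = cross_val_predict(clf, X, y, cv=LeaveOneOut())
print(classification_report(y, y_hat, target_names=["disengaged", "engaged"]))

# Per-class recall corresponds to the per-state accuracies reported above;
# feature_importances_ of a fitted forest ranks the sensor features.
clf.fit(X, y)
print(sorted(zip(clf.feature_importances_, range(9)), reverse=True)[:3])
```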

Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement

Procedia PDF Downloads 79
60 Product Life Cycle Assessment of Generatively Designed Furniture for Interiors Using Robot Based Additive Manufacturing

Authors: Andrew Fox, Qingping Yang, Yuanhong Zhao, Tao Zhang

Abstract:

Furniture is a very significant subdivision of architecture and its inherent interior design activities. The furniture industry has developed from an artisan-driven craft industry, whose forerunners saw themselves manifested in their crafts and treasured a sense of pride in the creativity of their designs, into what is these days a largely anonymous, collective, mass-produced output. Although the industry is very conservative, there is great potential for the implementation of collaborative digital technologies, allowing a reconfigured artisan experience to be reawakened in a new and exciting form. The furniture manufacturing industry, in general, has been slow to adopt new design methodologies using artificial-intelligence- and rule-based generative design. This tardiness has meant the loss of potential to enhance its capabilities in producing sustainable, flexible, and mass-customizable 'right first time' designs. This paper aims to demonstrate a concept methodology for the creation of alternative and inspiring aesthetic structures for robot-based additive manufacturing (RBAM). These technologies can enable the economic creation of previously unachievable structures, which traditionally would not have been commercially viable to manufacture. The integration of these technologies with the computing power of generative design provides the tools for practitioners to create concepts well beyond the insight of even the most accomplished traditional design teams. The paper addresses the problem by introducing generative design methodologies employing the Autodesk Fusion 360 platform. Examination of alternative methods for its use has the potential to significantly reduce environmental impact, an estimated 80% of which is determined at the initial design phase. Though predominantly a design methodology, generative design combined with RBAM has the potential to leverage many lean manufacturing and quality assurance benefits, enhancing the efficiency and agility of modern furniture manufacturing. Through a case study examination of a furniture artifact, the results will be compared to a traditionally designed and manufactured product employing the Ecochain Mobius product life cycle analysis (LCA) platform. This will highlight the benefits of both generative design and robot-based additive manufacturing from an environmental impact and manufacturing efficiency standpoint. These step changes in design methodology and environmental assessment have the potential to revolutionise the design-to-manufacturing workflow, giving momentum to the concept of a pre-industrial model of manufacturing, with the global demand for a circular economy and bespoke sustainable design at its heart.

Keywords: robot, manufacturing, generative design, sustainability, circular economy, product life cycle assessment, furniture

Procedia PDF Downloads 129
59 Big Data for Local Decision-Making: Indicators Identified at International Conference on Urban Health 2017

Authors: Dana R. Thomson, Catherine Linard, Sabine Vanhuysse, Jessica E. Steele, Michal Shimoni, Jose Siri, Waleska Caiaffa, Megumi Rosenberg, Eleonore Wolff, Tais Grippa, Stefanos Georganos, Helen Elsey

Abstract:

The Sustainable Development Goals (SDGs) and the Urban Health Equity Assessment and Response Tool (Urban HEART) identify dozens of key indicators to help local decision-makers prioritize and track inequalities in health outcomes. However, presentations and discussions at the International Conference on Urban Health (ICUH) 2017 suggested that additional indicators are needed to make decisions and policies. A local decision-maker may realize that malaria or road accidents are a top priority, but s/he needs additional health determinant indicators, for example about standing water or traffic, to address the priority and reduce inequalities. Health determinants reflect the physical and social environments that influence health outcomes, often at community and societal levels, and include such indicators as access to quality health facilities, access to safe parks, traffic density, location of slum areas, air pollution, social exclusion, and social networks. Indicator identification and disaggregation are necessarily constrained by available datasets – typically collected about households and individuals in surveys, censuses, and administrative records. Continued advancements in earth observation, data storage, computing, and mobile technologies mean that new sources of health determinant indicators derived from 'big data' are becoming available at fine geographic scale. Big data includes high-resolution satellite imagery and aggregated, anonymized mobile phone data. While big data are themselves not representative of the population (e.g., satellite images depict the physical environment), they can provide information about population density, wealth, mobility, and social environments with tremendous detail and accuracy when combined with population-representative survey, census, administrative, and health system data. The aim of this paper is to (1) flag to data scientists important indicators needed by health decision-makers at the city and sub-city scale – ideally free and publicly available – and (2) summarize for local decision-makers new datasets that can be generated from big data, with layperson descriptions of the difficulties in generating them. We include SDG and Urban HEART indicators, as well as indicators mentioned by decision-makers attending ICUH 2017.

Keywords: health determinant, health outcome, mobile phone, remote sensing, satellite imagery, SDG, urban HEART

Procedia PDF Downloads 197
58 Transforming Mindsets and Driving Action through Environmental Sustainability Education: A Course in Case Studies and Project-Based Learning in Public Education

Authors: Sofia Horjales, Florencia Palma

Abstract:

Our society is currently experiencing a profound transformation, demanding a proactive response from governmental bodies and higher education institutions to empower the next generation as catalysts for change. Environmental sustainability is rooted in the critical need to maintain the equilibrium and integrity of natural ecosystems, ensuring the preservation of precious natural resources and biodiversity for the benefit of both present and future generations. It is an essential cornerstone of sustainable development, complementing social and economic sustainability. In this evolving landscape, active methodologies take a central role, aligning with the principles of the 2030 Agenda for Sustainable Development and emerging as a pivotal element of teacher education. The emphasis on active learning methods has been driven by the urgent need to nurture sustainability and instill social responsibility in our future leaders. The Universidad Tecnológica del Uruguay (UTEC) is a public, technologically oriented institution established in 2012. UTEC is dedicated to decentralization, expanding access to higher education throughout Uruguay, and promoting inclusive social development. Operating through Regional Technological Institutes (ITRs) and associated centers spread across the country, UTEC faces the challenge of remote student populations. To address this, UTEC utilizes e-learning for equal opportunities, self-regulated learning, and digital skills development, enhancing communication among students, teachers, and peers through virtual classrooms. The Interdisciplinary Continuing Education Program is part of the Innovation and Entrepreneurship Department of UTEC. Its main goal is to strengthen innovation skills through a transversal and multidisciplinary approach. Within this Program, we have developed a virtual course in case studies and project-based learning designed for university students and open to the broader UTEC community. The primary aim of this course is to establish a strong foundation for comprehending and addressing environmental sustainability issues from an interdisciplinary perspective. Upon completing the course, we expect students not only to understand the intricate interactions between social and ecosystem environments but also to utilize their knowledge and innovation skills to develop projects that offer enhancements or solutions to real-world challenges. Our course design centers on innovative learning experiences rooted in active methodologies, and we explore the intersection of these methods with sustainability and social responsibility in the education of university students. A paramount focus lies in gathering student feedback and empowering students to generate ideas autonomously with guidance from instructors, and even to define their own project topics. This approach underscores that when students are genuinely engaged in subjects of their choice, they not only acquire the necessary knowledge and skills but also develop essential attributes like effective communication, critical thinking, and problem-solving abilities. These qualities will benefit them throughout their lifelong learning journey. We are convinced that education serves as the conduit to merge knowledge and cultivate interdisciplinary collaboration, igniting awareness and instigating action for environmental sustainability. While systemic changes are undoubtedly essential for society and the economy, we are making significant progress by shaping perspectives and sparking small, everyday actions within the UTEC community. This approach empowers our students to become engaged global citizens, actively contributing to the creation of a more sustainable future.

Keywords: active learning, environmental education, project-based learning, soft skills development

Procedia PDF Downloads 55
57 Simplified Modeling of Post-Soil Interaction for Roadside Safety Barriers

Authors: Charly Julien Nyobe, Eric Jacquelin, Denis Brizard, Alexy Mercier

Abstract:

The performance of roadside safety barriers depends largely on the dynamic interactions between post and soil. These interactions play a key role in the response of barriers to crash testing. In the literature, soil-post interaction is modeled in crash test simulations using three approaches. Many researchers have initially used the finite element approach, in which the post is embedded in a continuum soil modelled by solid finite elements. This is the most comprehensive and detailed approach, employing a mesh-based continuum to model the soil's behavior and its interaction with the post. Although this method takes all soil properties into account, it is very costly in terms of simulation time. In the second approach, all the points of the post located at a predefined depth are fixed. Although this approach reduces CPU time, it overestimates the soil-post stiffness. The third approach involves modeling the post as a beam supported by a set of nonlinear springs in the horizontal directions; for support in the vertical direction, the post is constrained at a node at ground level. This approach is less costly, but the literature does not provide a simple procedure to determine the constitutive law of the springs. The aim of this study is to propose a simple, low-cost procedure to obtain the constitutive law of the nonlinear springs that model the soil-post interaction. To achieve this objective, we first present a procedure to obtain the constitutive law of the nonlinear springs through the simulation of a soil compression test. The test consists of compressing the soil contained in a tank with a rigid solid, up to a vertical displacement of 200 mm. The resultant force exerted by the ground on the rigid solid and its vertical displacement are extracted, and a force-displacement curve is determined. The proposed procedure for replacing the soil with springs must be tested against a reference model. The reference model consists of a wooden post embedded in the ground and struck by an impactor. Two simplified models with springs are studied. In the first, called the Kh-Kv model, the springs are attached to the post in both the horizontal and vertical directions. The second, the Kh model, is the one described in the literature. The two simplified models are compared with the reference model according to several criteria: the vertical and horizontal displacement of a node located at the top of the post, the displacement of the post's center of rotation, and the impactor velocity. The results given by both simplified models are very close to the reference model results. The Kh-Kv model is slightly better than the Kh model; further, the former is more interesting than the latter as it involves fewer arbitrary conditions. The simplified models also reduce the simulation time by a factor of 4. The Kh-Kv model can therefore be used as a reliable tool to represent the soil-post interaction in future research and development of road safety barriers.
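
The procedure for turning the compression test into a spring law is amenable to a short sketch. The following Python fragment fits a candidate constitutive form to (displacement, force) pairs from such a simulation; the cubic-hardening form and all numbers are illustrative assumptions, not the authors' identified law.

```python
# Illustrative recovery of a nonlinear spring law from the compression test:
# fit the force-displacement curve and tabulate it for the spring elements.
import numpy as np
from scipy.optimize import curve_fit

# Placeholder (displacement [m], resultant soil force [N]) pairs standing in
# for values extracted from the 200 mm compression simulation described above.
d = np.linspace(0.0, 0.200, 21)
f = 8.0e5 * d + 2.5e7 * d**3 + np.random.default_rng(1).normal(0, 500, d.size)

def spring_law(x, k1, k3):
    """Hardening spring: F(x) = k1*x + k3*x^3 (one candidate constitutive form)."""
    return k1 * x + k3 * x**3

(k1, k3), _ = curve_fit(spring_law, d, f, p0=(1e5, 1e6))
print(f"k1 = {k1:.3e} N/m, k3 = {k3:.3e} N/m^3")

# Most FE codes accept the law as a force-displacement table attached to the
# nonlinear spring elements at the post nodes:
table = np.column_stack([d, spring_law(d, k1, k3)])
np.savetxt("spring_law.csv", table, delimiter=",", header="disp_m,force_N")
```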

Keywords: crash tests, nonlinear springs, soil-post interaction modeling, constitutive law

Procedia PDF Downloads 8
56 The 4th Critical R: Conceptualising the Development of Resilience as an Addition to the 3 Rs of the Essential Education Curricula

Authors: Akhentoolove Corbin, Leta De Jonge, Charmaine De Jonge

Abstract:

Introduction: Various writers have promoted the adoption of a 4th R in the education curricula (relationships, respect, reasoning, religion, computing, science, art, conflict management, music) and a 5th R (responsibility). They argue that the traditional 3 Rs are not adequate for the modern environment and for the requirement that students become functional citizens in society. In particular, the developing countries of the anglophone Caribbean (most of which are tiny islands) are susceptible to the dangers and complexities of climate change and global economic volatility. These proposed additions to the 3 Rs do have some justification, but this research considers resilience even more important and relevant in a world faced with the negative prospects of climate change, poverty, discrimination, and economic volatility. It is argued that the foundation for resilient citizens, workers, and workplaces must be built in the elementary and secondary/middle schools and then through the tertiary level, to achieve an outcome of more resilient students. Government, business, and society require widespread resilience to be capable of 'bouncing back' and to be more adaptable, transformational, and sustainable. Methodology: The paper utilises a mixed-methods approach incorporating a questionnaire and interviews to determine participants' opinions on the importance and relevance of resilience in schools' curricula and to government, business, and society. The target groups are as follows: educators at all levels, education administrators, and members of the business sector, public sector, and third sector. The research specifically targets the anglophone Caribbean developing countries (Barbados, Guyana, Jamaica, Trinidad, St. Lucia, and St. Vincent and the Grenadines). The research utilises SPSS for data analysis. Major Findings: The preliminary findings suggest that the majority of participants support the adoption of resilience as a 4th R in the curricula of elementary, secondary/middle schools, and the tertiary level in the anglophone Caribbean. The final results will allow the researchers to reveal more specific details on any variations among the islands in the sample and to engage in an in-depth discussion of the relevance and importance of resilience as the 4th R. Conclusion: The results seem to suggest that the education system should adopt the 4th R of resilience so that educators, working in collaboration with the family and community/village, can develop young citizens who are more resilient and capable of manifesting the behaviours and attitudes associated with 'bouncing back,' adaptability, transformation, and sustainability. These findings may be useful for education decision-makers and governments in these Caribbean islands, who have the authority and responsibility for the development of education policy, laws, and regulations.

Keywords: education, resilient students, adaptable, transformational, resilient citizens, workplaces, government

Procedia PDF Downloads 60
55 The Role of Professional Teacher Development in Introducing Trilingual Education into the Secondary School Curriculum: Lessons from Kazakhstan, Central Asia

Authors: Kairat Kurakbayev, Dina Gungor, Adil Ashirbekov, Assel Kambatyrova

Abstract:

Kazakhstan, a post-Soviet economy located in Central Asia, is making great efforts to internationalize its national system of education. The country is ambitious in making the national economy internationally competitive, and education has become one of the main pillars of the nation's strategic development plan for 2030. This paper discusses the role of professional teacher development in upgrading the secondary education curriculum with the introduction of English as a medium of instruction (EMI) in grades 10-11. With Kazakh as the state language and Russian as the official language, English has the status of a foreign language in the country. The development of trilingual education is very high on the agenda of the Ministry of Education and Science. It is planned that by 2019 STEM-related subjects – Biology, Chemistry, Computing, and Physics – will be taught in EMI. Introducing English-medium education is a very drastic reform, and the teaching cadre is the key driver. At the same time, after the collapse of the Soviet Union, the teaching profession is still struggling to become attractive in the eyes of local youth. Moreover, the quality of Kazakhstan's secondary education has been put in question by OECD national review reports. The paper presents a case study of the nationwide professional development programme arranged for 5,010 school teachers so that they will be able to teach their content subjects in English from 2019 onwards. The study is based on mixed methods research involving data derived from surveys and semi-structured interviews held with the programme participants, i.e., school teachers. The findings of the study highlight the significance of school teachers' attitudes towards the top-down reform of trilingual education. The qualitative research data reveal the teachers' beliefs about the advantages and disadvantages of having their content subjects (e.g., Biology or Chemistry) taught in EMI. The study highlights teachers' concerns about their professional readiness to implement the top-down reform of English-medium education and discusses possible risks of academic underperformance on the part of students whose English language proficiency is not advanced. This paper argues that for the effective implementation of English-medium education in secondary schools, the state should adopt a comprehensive approach to upgrading the national academic system, in which teachers' attitudes and beliefs play the key role in making the trilingual education policy effective. The study presents lessons for other national academic systems considering transferring their secondary education to English as a medium of instruction.

Keywords: teacher education, teachers' beliefs, trilingual education, case study

Procedia PDF Downloads 163
54 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases

Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar

Abstract:

The farming community in India, as in other parts of the world, is one of the most highly stressed communities due to reasons such as increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue, in some cases leading to farmer suicides. The lack of an integrated farm advisory system in India adds to farmers' problems. Farmers need the right information during the early stages of a crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases using images taken by farmers with their smartphones. The research work leads to building a smart assistant using analytics and big data that could help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet-style classification using convolutional neural networks (CNNs) has been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers, and dropouts (to avoid overfitting). The models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to take the weights learnt on the ImageNet dataset and apply them to crop diseases, which reduces the number of training epochs needed. One-shot learning is used to learn from very few images, while data augmentation techniques are used to improve accuracy with images taken from farms, using techniques such as rotation, zoom, shift, and blurring. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated using the tomato crop. In India, tomato is affected by 10 different diseases, and our model achieves an accuracy of more than 95% in correctly classifying them. The main contribution of our research is to create a personal assistant for farmers for managing plant disease; although the model was validated using the tomato crop, it can easily be extended to other crops. The advancement of computing technology and the availability of large datasets have made possible the success of deep learning applications in computer vision, natural language processing, image recognition, etc. With these robust models and huge smartphone penetration, the feasibility of implementation is high, resulting in timely advice to farmers, thus increasing farmers' income and reducing input costs.
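
As an illustration of the transfer-learning setup described (the abstract does not name the base network), a minimal tf.keras sketch might look as follows, with MobileNetV2 standing in for whatever ImageNet-pretrained backbone was used and the augmentation mirroring the rotation/zoom/shift operations mentioned:

```python
# Sketch: ImageNet transfer learning for multi-class leaf-disease images.
# MobileNetV2 and all paths/hyperparameters are assumptions for illustration.
import tensorflow as tf

NUM_CLASSES = 10   # tomato diseases mentioned in the abstract

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                       # reuse ImageNet-learnt filters

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),            # dropout to avoid overfitting
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy", metrics=["accuracy"])

# Data augmentation as in the abstract: rotation, zoom and shift on farm images.
train = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, rotation_range=30, zoom_range=0.2,
    width_shift_range=0.1, height_shift_range=0.1,
).flow_from_directory("leaf_images/train", target_size=(224, 224))

model.fit(train, epochs=5)   # few epochs needed thanks to transfer learning
```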

Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning

Procedia PDF Downloads 109
53 Development of Chitosan/Dextran Gelatin Methacrylate Core/Shell 3D Scaffolds and Protein/Polycaprolactone Melt Electrowriting Meshes for Tissue Regeneration Applications

Authors: J. D. Cabral, E. Murray, P. Turner, E. Hewitt, A. Ali, M. McConnell

Abstract:

Worldwide demand for organ replacement and tissue regeneration is progressively increasing. Three-dimensional (3D) bioprinting, where a physical construct is produced using computer-aided design, is a promising tool to advance the tissue engineering and regenerative medicine fields. In this paper we describe two different approaches to developing 3D bioprinted constructs for use in tissue regeneration. Bioink development is critical in achieving the 3D biofabrication of functional, regenerative tissues. Hydrogels, cross-linked macromolecules that absorb large amounts of water, have received widespread interest as bioinks due to their relevant soft tissue mechanics, biocompatibility, and tunability. In turn, not only is bioink optimisation crucial, but the creation of vascularized tissues remains a key challenge for the successful fabrication of thicker, more clinically relevant bioengineered tissues. Among the various methodologies, cell-laden hydrogels are regarded as a favorable approach, and when combined with novel core/shell 3D bioprinting technology, they offer an innovative strategy towards creating new vessel-like structures. In this work, we investigate this cell-based approach by using human umbilical vein endothelial cells (HUVECs) entrapped in a viscoelastic chitosan/dextran (CD)-based core hydrogel, printed simultaneously along with a gelatin methacrylate (GelMA) shell. We have expanded beyond our previously reported FDA-approved, commercialised, post-surgical CD hydrogel, Chitogel®, by functionalizing it with cell adhesion and proteolytic peptides in order to promote bone marrow-derived mesenchymal stem cell (immortalized BMSC cell line, hTERT) and HUVEC growth. The biocompatibility and biodegradability of these cell lines in a 3D bioprinted construct are demonstrated. Our studies show that particular peptide combinations crosslinked within the CD hydrogel were found to increase the in vitro growth of BMSCs and HUVECs by more than two-fold. These gels were then used as a core bioink, combined with the more mechanically robust, UV-irradiated GelMA shell bioink, to create 3D regenerative, vessel-like scaffolds with high print fidelity. As well, microporous melt electrowriting (MEW) scaffolds made from milk proteins blended with polycaprolactone (PCL) were found to show promising bioactivity, exhibiting a significant increase in keratinocyte (HaCaT) and fibroblast (normal human dermal fibroblast, NhDF) cell migration and proliferation when compared to PCL-only scaffolds. In conclusion, our studies indicate that a peptide-functionalized CD hydrogel bioink reinforced with a GelMA shell is biocompatible, biodegradable, and an appropriate cell delivery vehicle in the creation of regenerative 3D constructs. In addition, MEW, a novel 3D printing technique which allows the fabrication of micrometer-fibre meshes, was used to 3D print PCL scaffolds blended with the bioactive milk proteins lactoferrin (LF) and whey protein (WP) for potential skin regeneration applications. MEW milk protein/PCL scaffolds exhibited high porosity, low overall biodegradation, and rapid protein release. Human fibroblasts and keratinocytes were seeded onto the scaffolds. Scaffolds containing high concentrations of LF and combined proteins (LF+WP) showed improved cell viability over time compared to PCL-only scaffolds. This research highlights two scaffolds, made using two different 3D printing techniques and a combination of natural and synthetic biomaterial components, to create regenerative constructs as potential chronic wound treatments.

Keywords: biomaterials, hydrogels, regenerative medicine, 3D bioprinting

Procedia PDF Downloads 255
52 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing

Authors: Rowan P. Martnishn

Abstract:

During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables – historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours' worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to both clean and format all the data. First, a basic spelling and grammar check was applied, along with a Python script for normalized formatting which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words frequently mistranscribed during transcription were recorded and then replaced throughout all other documents. Then, to remove banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker) and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor – a form of natural language processing in which significant words in a document are selected – was run on each paragraph of every interview, and every proper noun was put into a data structure corresponding to its respective interview. A Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was then applied to each paragraph that included any of the proper nouns selected from the interview. At this stage the information to review had been reduced from about 60 hours' worth of data to 20. The data was further processed through light manual observation – any summaries which fit the criteria of the proposed deliverable were selected, along with their locations within the document. This narrowed the data down to about 5 hours' worth of processing. The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 to satisfy the grant. The curation of this methodology raised a conceptual finding crucial to working with qualitative data of this magnitude. In the use of artificial intelligence there is a general trade-off in a model between breadth of knowledge and specificity. If the model is too general, the user risks leaving out important data; if the tool is too specific, it has not seen enough data to be useful. This methodology proposes a solution to that trade-off. The data is never altered outside of grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
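
The pipeline lends itself to a compact sketch. The fragment below mirrors the described steps – speaker-turn splitting, the 300-character filter, proper-noun extraction, and BART summarisation of matching paragraphs – assuming spaCy as a stand-in for the unnamed keyword extractor and the Hugging Face BART summariser; the "recurring noun" threshold is our own assumption.

```python
# Sketch of the transcript-reduction pipeline; library choices are assumptions.
import collections
import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def paragraphs(transcript: str):
    """Split on speaker change (blank-line turns here) and drop short banter."""
    return [p.strip() for p in transcript.split("\n\n") if len(p.strip()) >= 300]

def proper_nouns(text: str):
    return {tok.text for tok in nlp(text) if tok.pos_ == "PROPN"}

def review_candidates(transcript: str, min_paragraphs: int = 2):
    """Summarise paragraphs that mention the interview's recurring proper nouns."""
    paras = paragraphs(transcript)
    counts = collections.Counter(n for p in paras for n in proper_nouns(p))
    significant = {n for n, c in counts.items() if c >= min_paragraphs}
    out = []
    for i, p in enumerate(paras):
        if significant & proper_nouns(p):
            s = summarizer(p, max_length=60, min_length=15)[0]["summary_text"]
            out.append((i, s))    # keep location so coders can consult raw text
    return out
```

Keeping the paragraph index alongside each summary is what preserves the "indicator" property: coders read the summary but verify against the unaltered source.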

Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding

Procedia PDF Downloads 9
51 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry

Authors: C. A. Barros, Ana P. Barroso

Abstract:

Any industrial company needs to determine the amount of variation that exists within its measurement processes and guarantee the reliability of its data by studying the performance of its measurement systems in terms of linearity, bias, repeatability, reproducibility, and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of the mandatory tools. Frequently, the measurement systems in companies are not connected to the equipment and do not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all statistical tests proposed in the MSA-4 reference manual from AIAG, as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented. Firstly, a benchmarking analysis of the current competitors and commercial solutions linked to MSA was performed with respect to the Industry 4.0 paradigm. Next, an analysis of the size of the target market for the MSA tool was carried out. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in Python was developed to perform the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the 'big data'. The main results of this R&D project are: the MSA tool, web and cloud-based; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry has triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical, and food industries are already validating it.
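
As one example of the MSA-4 statistics such a tool automates, the sketch below computes a gage R&R study by the AIAG average-and-range method; the K constants are the published AIAG values for 3 trials, 2 appraisers, and 10 parts, but the tidy data layout and function shape are our own illustration, not the tool's API.

```python
# Sketch of an average-and-range gage R&R computation on tidy study data
# (one measurement per row). Constants assume 3 trials, 2 appraisers, 10 parts.
import numpy as np
import pandas as pd

def gage_rr(df: pd.DataFrame, k1=0.5908, k2=0.7071, k3=0.3146):
    """df columns: part, appraiser, trial, value. Returns %GRR of total variation."""
    spread = lambda s: s.max() - s.min()
    # Repeatability (EV): average within-appraiser, within-part range.
    r_bar = df.groupby(["part", "appraiser"])["value"].agg(spread).mean()
    ev = r_bar * k1
    # Reproducibility (AV): spread of appraiser means, corrected for EV.
    means = df.groupby("appraiser")["value"].mean()
    x_diff = means.max() - means.min()
    n_parts = df["part"].nunique()
    n_trials = df["trial"].nunique()
    av = np.sqrt(max((x_diff * k2) ** 2 - ev**2 / (n_parts * n_trials), 0.0))
    # Part variation (PV) and total variation (TV).
    part_means = df.groupby("part")["value"].mean()
    pv = (part_means.max() - part_means.min()) * k3
    grr = np.hypot(ev, av)
    tv = np.hypot(grr, pv)
    return 100.0 * grr / tv          # AIAG guideline: under 10% is acceptable

# Example: measurements = pd.read_csv("study.csv"); print(gage_rr(measurements))
```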

Keywords: automotive industry, Industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis

Procedia PDF Downloads 204
50 A Low-Cost Disposable PDMS Microfluidic Cartridge with Reagent Storage Silicone Blisters for Isothermal DNA Amplification

Authors: L. Ereku, R. E. Mackay, A. Naveenathayalan, K. Ajayi, W. Balachandran

Abstract:

Over the past decade, the increase in sexually transmitted infections (STIs), especially in the developing world, due to high costs and a lack of sufficient medical testing, has given rise to the need for a rapid, low-cost point-of-care medical diagnostic that is disposable and, most significantly, reproduces results equivalent to those achieved within centralised laboratories. This paper presents the development of a disposable PDMS microfluidic cartridge incorporating blisters filled with the reagents required for isothermal DNA amplification in clinical diagnostics and point-of-care testing. To circumvent the need for complex external microfluidic pumps, on-chip pressurised fluid reservoirs are implemented using finger actuation and blister storage. The fabrication of the blisters takes into consideration three factors: material characteristics, fluid volume, and structural design. Silicone rubber is the chosen material due to its good chemical stability, considerable tear resistance, and moderate tension/compression strength. Fluid capacity and structural form go hand in hand, as the reagent volume needed for the experimental analysis determines the volume of the blisters, whereas the structural form has to be designed to provide low compression stress when deformed for fluid expulsion. Furthermore, the top and bottom sections of the blisters are embedded with miniature magnets of opposite polarity at a defined parallel distance. These magnets are needed to lock or restrain the blisters when fully compressed so as to prevent unwanted backflow resulting from elasticity. The integrated chip is bonded onto a large microscope glass slide (50 mm x 75 mm). Each part is manufactured using a 3D-printed mould designed using Solidworks software. Die-casting is employed, using 3D-printed moulds, to form the deformable blisters by forcing a proprietary liquid silicone rubber through the positive mould cavity. The set silicone rubber is removed from the cast, prefilled with liquid reagent, and then sealed with a thin (0.3 mm) burstable layer of recast silicone rubber. The main microfluidic cartridge is fabricated using classical soft lithographic techniques. The cartridge incorporates microchannel circuitry, a mixing chamber, an inlet port, an outlet port, a reaction chamber, and a waste chamber. Polydimethylsiloxane (PDMS, QSil 216) is mixed at a 10:1 ratio and degassed using a centrifuge, then poured after the prefilled blisters are correctly positioned on the negative mould. Heat treatment at about 50 °C to 60 °C in an oven for about 3 hours is needed to achieve curing. The latter stage of chip production involves bonding the cured PDMS to the glass slide. A plasma corona treater device, BD20-AC (Electro-Technic Products Inc., US), is used to activate the PDMS and glass slide before they are joined, adequately compressed together, and then left in the oven overnight to ensure bonding. There are two blisters in total needed for experimentation; the first will be used as a wash buffer to remove any remaining cell debris and unbound DNA, while the second will contain 100 µL of amplification reagents. This paper will present results of chemical cell lysis, extraction using a biopolymer paper membrane, and isothermal amplification on a low-cost platform using the finger-actuated blisters for reagent storage. The platform has been shown to detect 1x10^5 copies of Chlamydia trachomatis using Recombinase Polymerase Amplification (RPA).

Keywords: finger actuation, point of care, reagent storage, silicone blisters

Procedia PDF Downloads 359
49 Effectiveness of Peer Reproductive Health Education Program in Improving Knowledge, Attitude, and Use Health Service of High School Adolescent Girls in Eritrea in 2014

Authors: Ghidey Ghebreyohanes, Eltahir Awad Gasim Khalil, Zemenfes Tsighe, Faiza Ali

Abstract:

Background: reproductive health (RH) is a state of physical, mental, and social well-being in all matters relating to the reproductive system at all stages of life. In East Africa, including Eritrea, adolescents comprise more than a quarter of the population. The region holds the highest rates of sexually transmitted diseases, HIV, unwanted pregnancy, and unsafe abortion with its complications. Young girls carry the highest burden of reproductive health problems due to their risk-taking behavior, lack of knowledge, peer pressure, physiological immaturity, and low socioeconomic status. Design: this was a community-based, randomized, case-controlled, pre-test-post-test intervention study. Setting: Zoba Debub was randomly selected out of the six zobas in Eritrea. Four high schools out of the 26 in Zoba Debub were randomly selected as study target schools. Over three-quarters of the people live on farming. The target population was female students attending grade nine, the majority of whom live in distant villages and walk to school. The study participants were randomly selected (n=165) from each school. Furthermore, the one intervention and three control schools for the study arms were assigned randomly. Objectives: this study aimed to assess the effectiveness of peer reproductive health education in improving knowledge, attitude, and health service use of high school adolescent girls in Eritrea. Methods: the protocol was reviewed and approved by the Scientific and Ethics Committees of the Faculty of Nursing Sciences, University of Khartoum. Data were collected using a pre-designed and pretested questionnaire emphasizing reproductive health knowledge, attitude, and practice. Sample size was calculated using the proportion formula (α 0.01; power of 95%). Measures used were scores and proportions. Descriptive and inferential statistics, t-tests, and chi-square tests at α .01 with 99% confidence intervals were used to compare changes between pre- and post-intervention scores using SPSS software. Seventeen students were selected as peer educators by the school principals and other teachers, based on inclusion criteria that included good academic performance and acceptable behavior. One peer educator educated one group composed of 8-10 students for two months. One faculty member was selected to supervise the peer educators. The principal investigator conducted the training of trainers and provided supervision and discussion to the peer educators every two weeks until the end of the intervention. Results: following informed consent, 627 students (164 in the intervention and 463 in the control group, an approximate ratio of 1 to 3) were enrolled in the study. The mean age for the total study population was 15.4±1.0 years. The intervention group mean age was 15.3±1.0 years, while the control group had a mean age of 15.4±1.0. The mean ages for the study arms were similar (p=0.4). The majority (96%) of the study participants are from the Tigrigna ethnic group. Reproductive knowledge scores, calculated out of a total of 61 points: intervention group (pre-test 6.7%, post-test 33.6%; p=0.0001); control group (pre-test 7.3%, post-test 7.3%; p=0.92). Proportion difference in attitude, calculated out of 100%: intervention group (pre-test 42.3%, post-test 54.7%; p=0.001); control group (pre-test 45%, post-test 44.8%; p=0.7). Proportion difference in practice, calculated out of 100%: intervention group (pre-test 15.4%, post-test 80.4%; p=0.0001); control group (pre-test 16.8%, post-test 16.9%; p=0.8). Mothers were cited as the major (>90%) source of reproductive health information. All focus group discussants and most survey participants agreed on the urgent need for reproductive health information and services for adolescent girls. Conclusion: reproductive health knowledge and use of facilities are poor among adolescent girls in sub-urban Eritrea. School-based peer reproductive health education is effective and is the best strategy to improve reproductive health knowledge and attitudes.
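A minimal sketch of the pre/post proportion comparison described in the methods, assuming SciPy; the counts in the example are hypothetical, not study data.

```python
# Minimal sketch of a pre/post proportion comparison, assuming
# SciPy; the example counts are hypothetical, not study data.
from scipy.stats import chi2_contingency

def compare_proportions(pre_yes, pre_n, post_yes, post_n, alpha=0.01):
    # 2x2 contingency table: rows = pre/post, columns = yes/no
    table = [[pre_yes, pre_n - pre_yes],
             [post_yes, post_n - post_yes]]
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2, p, p < alpha

# Hypothetical counts for the intervention arm (n = 164)
chi2, p, significant = compare_proportions(25, 164, 132, 164)
```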

Keywords: reproductive health, adolescent girls, Eritrea, health education

Procedia PDF Downloads 345
48 Associated Problems with the Open Dump Site and Its Possible Solutions

Authors: Pangkaj Kumar Mahanta, Md. Rafizul Islam

Abstract:

The rapid growth of the population causes a substantial increase in household waste all over the world. Waste management is becoming one of the most challenging issues of the present day. The most environmentally friendly final disposal process for waste is sanitary landfilling, which is practiced in most developing countries. However, in Southeast Asia, most final disposal points are open dump sites. Due to the lack of proper waste management and monitoring, an open dump site pollutes the surrounding environment more than a sanitary landfill does. Khulna is the third-largest metropolitan city in Bangladesh, with a population of around 1.5 million, and produces approximately 450 tons per day of municipal solid waste. The municipal solid waste of Khulna city is disposed of at the Rajbandh open dump site. The surrounding air is polluted by the gas produced at the open dump site. The open dump site also produces leachate, which contains various heavy metals such as Cadmium (Cd), Chromium (Cr), Lead (Pb), Manganese (Mn), Mercury (Hg), and Strontium (Sr). Leachate pollutes the soil as well as the groundwater of the open dump site and its surrounding area through seepage. Moreover, during the rainy season, the surface water is polluted by leachate runoff. The plastic waste flowing out of the open dump site through various channels also pollutes the nearby environment. The health risk assessment associated with heavy metals was carried out by computing the chronic daily intake (CDI), hazard quotient (HQ), and hazard index (HI) via different exposure pathways, following the USEPA guidelines. For ecological risk, the potential contamination index (Cp), contamination factor (CF), pollution load index (PLI), numerical integrated contamination factor (NICF), enrichment factor (EF), ecological risk index (ER), and potential ecological risk index (PERI) were computed. The health risk and ecological risk assessment results reveal that some heavy metals pose strong health and ecological risks. In addition, children face higher health risks from several heavy metals than adults for all exposure pathways and media. The conversion of an open dump site into a sanitary landfill with a proper management system can reduce the problems associated with an open dump site. In a sanitary landfill, the produced gas is managed properly to save the surrounding atmosphere from being polluted. The seepage of leachate can be minimized by installing a compacted clay layer (CCL) as a baseline and leachate collection in a sanitary landfill, to save the underlying soil layer and surrounding water bodies from leachate. Another important component of a sanitary landfill, the conversion of plastic waste to energy, will minimize plastic pollution in the landfill area as well as in the surrounding soil and water bodies. Also, in a sanitary landfill, the bio-waste can be used to make compost, reducing the volume of bio-waste and enabling proper utilization of the landfill area.
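For reference, the USEPA-style non-carcinogenic screening quantities named above follow a simple chain from intake to quotient to index; a hedged sketch, with illustrative (not study) parameter values.

```python
# Hedged sketch of USEPA-style non-carcinogenic risk screening;
# parameter values below are illustrative defaults, not study inputs.
def chronic_daily_intake(c, ir, ef, ed, bw, at):
    # CDI (mg/kg-day) = (C * IR * EF * ED) / (BW * AT)
    # C: concentration (mg/kg), IR: intake rate (kg/day),
    # EF: exposure frequency (days/yr), ED: exposure duration (yr),
    # BW: body weight (kg), AT: averaging time (days)
    return (c * ir * ef * ed) / (bw * at)

def hazard_quotient(cdi, rfd):
    # HQ = CDI / RfD; HQ > 1 flags potential non-carcinogenic risk
    return cdi / rfd

def hazard_index(hazard_quotients):
    # HI sums HQs over exposure pathways (or over metals)
    return sum(hazard_quotients)

# Hypothetical example: a child ingesting Pb-contaminated soil
cdi = chronic_daily_intake(c=50.0, ir=2e-4, ef=350, ed=6, bw=15.0, at=6 * 365)
hq = hazard_quotient(cdi, rfd=3.5e-3)
```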

Keywords: ecological risk, health risk, open dump site, sanitary landfill

Procedia PDF Downloads 184
47 Data Quality on Regular Childhood Immunization Programme at Degehabur District: Somali Region, Ethiopia

Authors: Eyob Seife

Abstract:

Immunization is a life-saving intervention which prevents needless suffering through sickness, disability, and death. Emphasis on data quality and use will become even stronger with the development of the Immunization Agenda 2030 (IA2030). Quality of data is a key factor in generating reliable health information that enables monitoring progress, financial planning, vaccine forecasting capacities, and making decisions for continuous improvement of the national immunization program. However, ensuring data of sufficient quality and promoting an information-use culture at the point of collection remains critical and challenging, especially in hard-to-reach and pastoralist areas. The Degehabur district was selected based on the hypothesis that there is no difference in consistency between reported and recounted immunization data. Data quality depends on different factors, including organizational, behavioral, technical, and contextual ones. A cross-sectional quantitative study was conducted in September 2022 in the Degehabur district. The study used the World Health Organization (WHO) recommended data quality self-assessment (DQS) tools. Immunization tally sheets, registers, and reporting documents were reviewed at 5 health facilities (2 health centers and 3 health posts) of primary health care units for one fiscal year (12 months) to determine the accuracy ratio. The data were collected by trained DQS assessors to explore the quality of monitoring systems at health posts, health centers, and the district health office. A quality index (QI) was assessed, and accuracy ratios were formulated for the first and third doses of pentavalent vaccine, full immunization (FI), and the first dose of measles-containing vaccine (MCV). In this study, facility-level results showed both over-reporting and under-reporting at health posts when computing the accuracy ratio of tally sheets to the health post reports found at health centers, for almost all antigens verified: pentavalent 1 was 88.3%, 60.4%, and 125.6% for health posts A, B, and C, respectively. For the first dose of measles-containing vaccine (MCV), the accuracy ratio was similarly found to be 126.6%, 42.6%, and 140.9% for health posts A, B, and C, respectively. The accuracy ratio for fully immunized children was 0% for health posts A and B and 100% for health post C. A relatively better accuracy ratio was seen at health centers, where the first pentavalent dose was 97.4% and 103.3% for health centers A and B, while the first dose of measles-containing vaccine (MCV) was 89.2% and 100.9% for health centers A and B, respectively. The quality index (QI) of all facilities ranged from a maximum of 33.33% to a minimum of 0%. Most of the verified immunization data accuracy ratios were found to be relatively better at the health center level. However, the quality of the monitoring system is poor at all levels, in addition to poor data accuracy at all health posts. Attention should therefore be given to improving staff capacity and the quality of monitoring system components, namely recording, reporting, archiving, data analysis, and using information for decisions at all levels, especially in pastoralist areas, alongside improving data quality at the root and health post levels.
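A minimal sketch of the DQS-style accuracy ratio used throughout the results, with hypothetical counts; on this convention, a ratio below 100% suggests over-reporting and above 100% suggests under-reporting.

```python
# Hedged sketch of a WHO DQS-style accuracy (verification) ratio;
# the counts below are hypothetical, not the study's tallies.
def accuracy_ratio(recounted: int, reported: int) -> float:
    # Recounted doses (tally sheets) over reported doses, in percent.
    # < 100% suggests over-reporting; > 100% suggests under-reporting.
    return 100.0 * recounted / reported

# Hypothetical pentavalent-1 check for one health post (~88.3%)
ratio = accuracy_ratio(recounted=440, reported=498)
```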

Keywords: accuracy ratio, Degehabur District, regular childhood immunization program, quality of monitoring system, Somali Region-Ethiopia

Procedia PDF Downloads 88
46 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication, and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the 'Internet of Things (IoT)' has taken sensor research to a new level, which involves the development of long-lasting, low-cost, environmentally friendly smart sensors; new wireless data communication technologies; big data analytics algorithms; and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor network consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on the field gateway; the data transmission layer, where data and instructions are exchanged; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; however, to summarize them, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, the collected data are far too large for traditional applications to send, store, or process. The sensor unit must be intelligent enough to pre-process collected data locally on board (this process may occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model, and the machine-learning-based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation smart sensor network. For example, in a water level monitoring system, a weather forecast can be extracted from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate or, conversely, switch on sleeping mode. In this paper, we describe the deployment of 11 affordable water level sensors in the Dodder catchment in Dublin, Ireland. The objective of this paper is to use the deployed river level sensor network as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, assisting agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Each key component of the smart sensor network is discussed, which we hope will inspire researchers working in the sensor domain.
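A hedged sketch of the simplest of the three smart-sensing methods (thresholding), together with the forecast-driven adaptive sampling described above; the class and field names are illustrative assumptions, not the deployed system's design.

```python
# Hedged sketch: threshold-based smart sensing plus forecast-driven
# adaptive sampling; names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LevelSensorNode:
    alert_level_m: float           # water level that triggers an alert
    sample_interval_s: int = 900   # default: one reading per 15 minutes

    def on_reading(self, level_m: float) -> bool:
        # On-board pre-processing: flag threshold crossings instead of
        # streaming every raw reading to the server.
        return level_m >= self.alert_level_m

    def on_forecast(self, heavy_rain_expected: bool) -> None:
        # Data fusion: the server pushes an instruction derived from an
        # external weather forecast; sample faster ahead of a storm,
        # sleep more otherwise.
        self.sample_interval_s = 60 if heavy_rain_expected else 900
```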

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 371
45 Platform Virtual for Joint Amplitude Measurement Based in MEMS

Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez

Abstract:

Motion capture (MC) is the construction of a precise and accurate digital representation of real motion. MC systems have been used in recent years in a wide range of applications, from film special effects and animation, interactive entertainment, and medicine, to highly competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents an inertial and magnetic sensor based technological platform, intended for joint amplitude monitoring and telerehabilitation processes, offering an efficient compromise between cost and technical performance. The platform's particularities offer high social impact by making telerehabilitation accessible to large population sectors in marginal socio-economic conditions, especially in underdeveloped countries where, in contrast to developed countries, specialists are scarce and high technology is unavailable or nonexistent. This platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to provide a diagnosis service over the web or other available communication networks. The amplitude information is generated by the sensors and then transferred to a computing device with adequate interfaces to make it accessible to inexperienced personnel, providing high social value. Amplitude measurements of the virtual platform system presented a good fit to their respective reference system. Analyzing the robotic arm results (estimation error RMSE 1 = 2.12° and estimation error RMSE 2 = 2.28°), it can be observed that during arm motion in either sense, the estimation error is negligible; in fact, error appears only during sense inversion, which can easily be explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors present a time-constant delay which acts as a first-order filter, attenuating signals at large acceleration values, as is the case for a change of sense in motion. A damped response of the virtual platform can be seen in other images, where error analysis shows that at maximum amplitude an underestimation of amplitude is present, whereas at minimum amplitude an overestimation is observed. This work presents and describes the virtual platform as a motion capture system suitable for telerehabilitation, with the cost/quality and precision/accessibility relations optimized. These characteristics, achieved by efficiently using state-of-the-art accessible generic technology in sensors and hardware, together with adequate software for capture, transmission, analysis, and visualization, provide the capacity to offer good telerehabilitation services, reaching large and often marginalized populations where technologies and specialists are not available but basic communication networks are accessible.
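As an illustration of how a joint angle might be estimated from low-cost inertial sensors and scored against a reference system, here is a hedged sketch using a complementary filter; the filter constant and signal names are assumptions, not the paper's actual design.

```python
# Hedged sketch: complementary-filter angle estimation from an IMU
# and the RMSE metric quoted above; not the paper's actual filter.
import numpy as np

def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.98):
    # Trust the integrated gyro at short time scales (fast but drifting)
    # and the accelerometer-derived angle at long time scales (noisy
    # but stable); alpha sets the crossover of the filter.
    gyro_rate = np.asarray(gyro_rate, dtype=float)
    accel_angle = np.asarray(accel_angle, dtype=float)
    angle = np.empty_like(accel_angle)
    angle[0] = accel_angle[0]
    for k in range(1, len(angle)):
        angle[k] = alpha * (angle[k - 1] + gyro_rate[k] * dt) \
                   + (1 - alpha) * accel_angle[k]
    return angle

def rmse(estimated, reference):
    # Root-mean-square estimation error against the reference system,
    # the form in which the results above are reported (e.g., 2.12°)
    err = np.asarray(estimated) - np.asarray(reference)
    return float(np.sqrt(np.mean(err ** 2)))
```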

Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation

Procedia PDF Downloads 251
44 Electromagnetic Modeling of a MESFET Transistor Using the Moments Method Combined with Generalised Equivalent Circuit Method

Authors: Takoua Soltani, Imen Soltani, Taoufik Aguili

Abstract:

The demands of communications and radar systems give rise to new developments in the domain of active integrated antennas (AIA) and arrays. The main advantages of AIA arrays are simplicity of fabrication, low manufacturing cost, and the combination of free-space power combining and scanning without a phase shifter. Modeling an active integrated antenna involves coupling the electromagnetic model with the transport model, a coupling that becomes significant at high frequencies. Global modeling of active circuits is important for simulating EM coupling, the interaction between active devices and EM waves, and the effects of EM radiation on active and passive components. The present work focuses on the modeling of the active element, a MESFET transistor immersed in a rectangular waveguide. The proposed EM analysis is based on the Method of Moments combined with the Generalised Equivalent Circuit method (MOM-GEC). The Method of Moments is among the most common and powerful numerical techniques used in solving electromagnetic problems. In this class of numerical techniques, MoM is the dominant technique for solving the Maxwell and transport integral equations for an active integrated antenna. In this situation, the equivalent circuit is introduced in the development of an integral method formulation, based on transposing field problems into a generalised equivalent circuit that is simpler to treat. The Generalised Equivalent Circuit method (MGEC) was suggested in order to represent integral equations as circuits that describe the unknown electromagnetic boundary conditions. The equivalent circuit presents a true electrical image of the studied structure, describing the discontinuity and its environment. The aim of our method is to investigate antenna parameters such as the input impedance, the current density distribution, and the electric field distribution. In this work, we propose a global EM modeling of the GaAs MESFET transistor using an integral method. We begin by describing the modeling structure, which allows defining an equivalent EM scheme translating the electromagnetic equations considered. Secondly, the projection of these equations onto common-type test functions leads to a linear matrix equation in which the unknown variable represents the amplitudes of the current density. Solving this equation provides the input impedance, the distribution of the current density, and the electric field distribution. From the electromagnetic calculations, we were able to present the convergence of the input impedance for different numbers of test functions as a function of the number of guide modes. This paper presents a pilot study mapping the variation of the current evaluated by the MOM-GEC. The essential improvement of our method is the reduction of computing time and memory requirements in order to provide a sufficient global model of the MESFET transistor.
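A schematic of the projection step described above, hedged: the integral-equation operator is projected onto test functions to form a dense linear system whose solution gives the current-density amplitudes. The operator, excitation, and basis/test functions here are placeholders, not the paper's waveguide Green's functions.

```python
# Schematic MoM solve: project the operator onto N test functions to
# form Z @ I = V and solve for the current-density amplitudes. The
# operator, excitation, and basis/test functions are placeholders.
import numpy as np

def mom_solve(operator, excitation, basis, test):
    # Z[m, n] = <test_m, L(basis_n)>;  V[m] = <test_m, excitation>
    n = len(basis)
    Z = np.empty((n, n), dtype=complex)
    V = np.empty(n, dtype=complex)
    for m, t in enumerate(test):
        for k, b in enumerate(basis):
            Z[m, k] = np.vdot(t, operator(b))
        V[m] = np.vdot(t, excitation)
    # Amplitudes of the current density; input impedance and field
    # distributions are then derived from these amplitudes.
    return np.linalg.solve(Z, V)
```

Convergence studies of the kind reported above amount to repeating this solve while increasing the number of test functions and guide modes and tracking the derived input impedance.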

Keywords: active integrated antenna, current density, input impedance, MESFET transistor, MOM-GEC method

Procedia PDF Downloads 188
43 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89

Authors: A. Chatel, I. S. Torreguitart, T. Verstraete

Abstract:

The paper deals with a single-point optimization of the LS89 turbine using adjoint optimization and defining the design variables within a CAD system. The advantage of including the CAD model in the design system is that higher-level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches, where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each step. This study presents a methodology to progressively refine the design space, which combines parametric effectiveness with a differential evolutionary algorithm in order to create an optimal parameterization. In this manuscript, we show that by doing the parameterization at the CAD level, we can impose higher-level constraints on the shape, such as the axial chord length, the trailing edge radius, and G2 geometric continuity between the suction side and pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered out and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows computing the grid sensitivities to machine accuracy and avoids the limited arithmetic precision and the truncation error of finite differences. Then, the parametric effectiveness is computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which allows introducing new control points whilst preserving the initial shape. The position of the inserted knots is generally assumed. However, this assumption can hinder the creation of better parameterizations that would allow producing more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolutionary algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in the entropy generation are achieved whilst keeping the exit flow angle fixed; the trailing edge and axial chord length are kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single-level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization in which the parameterization is created using the optimal knot positions results in a more efficient strategy to reach a better optimum than the multilevel optimization in which the position of the knots is arbitrarily assumed.
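A hedged sketch of the knot-placement idea, assuming SciPy's B-spline utilities: differential evolution searches for the knot location that maximizes a parametric-effectiveness score, which here is a placeholder for the paper's adjoint-based measure.

```python
# Hedged sketch of optimal knot placement, assuming SciPy. The
# 'effectiveness' callable is a placeholder for the paper's
# adjoint-based parametric-effectiveness measure.
from scipy import interpolate
from scipy.optimize import differential_evolution

def insert_knot(tck, u):
    # B-spline knot insertion adds a control point (enlarging the
    # design space) while preserving the initial shape exactly.
    return interpolate.insert(u, tck, m=1)

def best_knot_location(tck, effectiveness, bounds=(0.05, 0.95)):
    # Differential evolution maximizes effectiveness over the knot
    # parameter u, instead of assuming the knot position a priori.
    result = differential_evolution(
        lambda u: -effectiveness(insert_knot(tck, float(u[0]))),
        bounds=[bounds], seed=0, maxiter=50)
    return float(result.x[0])
```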

Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness

Procedia PDF Downloads 101
42 Innovations and Challenges: Multimodal Learning in Cybersecurity

Authors: Tarek Saadawi, Rosario Gennaro, Jonathan Akeley

Abstract:

There is rapidly growing demand for professionals to fill positions in Cybersecurity. This is recognized as a national priority both by government agencies and the private sector. Cybersecurity is a very wide technical area which encompasses all measures that can be taken in an electronic system to prevent criminal or unauthorized use of data and resources. This requires defending computers, servers, networks, and their users from any kind of malicious attack. The need to address this challenge has been recognized globally but is particularly acute in the New York metropolitan area, home to some of the largest financial institutions in the world, which are prime targets of cyberattacks. In New York State alone, there are currently around 57,000 jobs in the Cybersecurity industry, with more than 23,000 unfilled positions. The Cybersecurity Program at City College is a collaboration between the Departments of Computer Science and Electrical Engineering. In Fall 2020, The City College of New York matriculated its first students in the Cybersecurity Master of Science program. The program was designed to fill gaps in the previous offerings and evolved out of an established partnership with Facebook on Cybersecurity Education. City College has designed a program where courses, curricula, syllabi, materials, labs, etc., are developed in cooperation and coordination with industry whenever possible, ensuring that students graduating from the program will have the necessary background to seamlessly segue into industry jobs. The Cybersecurity Program has created multiple pathways for prospective students to obtain the necessary prerequisites to apply, in order to build a more diverse student population. The program can also be pursued on a part-time basis, which makes it available to working professionals. Since City College's Cybersecurity M.S. program was established to equip students with the advanced technical skills needed to thrive in a high-demand, rapidly evolving field, it incorporates a range of pedagogical formats. From its outset, the Cybersecurity program has sought to provide both the theoretical foundations necessary for meaningful work in the field and labs and applied learning projects aligned with skill sets required by industry. The efforts have involved collaboration with outside organizations and with visiting professors designing new courses on topics such as Adversarial AI, Data Privacy, Secure Cloud Computing, and Blockchain. Although the program was initially designed with a single asynchronous course in the curriculum, with the rest of the classes designed to be offered in person, the advent of the COVID-19 pandemic necessitated a move to fully online learning. The shift to online learning has provided lessons for future development, offering examples of some inherent advantages of the medium in addition to its drawbacks. This talk will address the structure of the newly implemented Cybersecurity Master's Program and discuss the innovations, challenges, and possible future directions.

Keywords: cybersecurity, New York, City College, graduate degree, Master of Science

Procedia PDF Downloads 129
41 Detection of Curvilinear Structure via Recursive Anisotropic Diffusion

Authors: Sardorbek Numonov, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Dongeun Choi, Byung-Woo Hong

Abstract:

The detection of curvilinear structures often plays an important role in the analysis of images. In particular, it is considered a crucial step in the diagnosis of chronic respiratory diseases to localize the fissures in chest CT imagery, where the lung is divided into five lobes by fissures characterized by linear features in appearance. However, the characteristic linear features of the fissures are often subtle due to the high intensity variability, pathological deformation, or image noise involved in the imaging procedure, which leads to uncertainty in the quantification of anatomical or functional properties of the lung. Thus, it is desirable to enhance the linear features present in chest CT images so that the distinctiveness of the lobe delineation is improved. We propose a recursive diffusion process that prefers coherent features based on the analysis of the structure tensor in an anisotropic manner. The local image features associated with certain scales and directions can be characterized by the eigenanalysis of the structure tensor, which is often regularized via isotropic diffusion filters. However, the isotropic diffusion filters involved in the computation of the structure tensor generally blur geometrically significant structure of the features, degrading their characteristic power in the feature space. Thus, the local structure of the features in scale and direction must be taken into consideration when computing the structure tensor. We apply an anisotropic diffusion that considers the scale and direction of the features in the computation of the structure tensor; the eigenanalysis of this tensor then provides the geometrical structure of the features and determines the shape of the anisotropic diffusion kernel. The recursive application of the anisotropic diffusion, with a kernel whose shape is derived from the structure tensor, leads to an anisotropic scale-space in which the geometrical features are preserved via the eigenanalysis of the structure tensor computed from the diffused image. This recursive interaction between the anisotropic diffusion based on geometry-driven kernels and the computation of the structure tensor that determines the shape of the diffusion kernels yields a scale-space in which the geometrical properties of the image structure are effectively characterized. We apply our recursive anisotropic diffusion algorithm to the detection of curvilinear structure in chest CT imagery, where the fissures present curvilinear features and define the boundaries of the lobes. It is shown that our algorithm yields precise detection of the fissures while overcoming the subtlety in defining the characteristic linear features. The quantitative evaluation demonstrates the robustness and effectiveness of the proposed algorithm for the detection of fissures in chest CT in terms of false positive and true positive measures. The receiver operating characteristic curves indicate the potential of our algorithm as a segmentation tool in the clinical environment. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
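A hedged sketch of one structure-tensor step of the recursion, assuming SciPy; isotropic Gaussian smoothing stands in here for the paper's geometry-driven anisotropic kernel, and the coherence measure illustrates how eigenanalysis highlights curvilinear structure.

```python
# Hedged sketch of a structure-tensor step; Gaussian smoothing is a
# stand-in for the paper's geometry-driven anisotropic kernel.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor(image, sigma=2.0):
    ix = sobel(image, axis=1)  # horizontal gradient
    iy = sobel(image, axis=0)  # vertical gradient
    # Outer-product components, regularized by smoothing
    jxx = gaussian_filter(ix * ix, sigma)
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    return jxx, jxy, jyy

def coherence(jxx, jxy, jyy):
    # Closed-form eigenvalues of the 2x2 tensor at every pixel
    root = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    lam1 = 0.5 * (jxx + jyy + root)
    lam2 = 0.5 * (jxx + jyy - root)
    # Coherence approaches 1 along curvilinear structures (fissures)
    return ((lam1 - lam2) / (lam1 + lam2 + 1e-12)) ** 2
```

In the recursion described above, the eigenvectors and eigenvalues of this tensor would shape the next diffusion kernel, and the tensor would then be recomputed from the diffused image.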

Keywords: anisotropic diffusion, chest CT imagery, chronic respiratory disease, curvilinear structure, fissure detection, structure tensor

Procedia PDF Downloads 220
40 Improved Soil and Snow Treatment with the Rapid Update Cycle Land-Surface Model for Regional and Global Weather Predictions

Authors: Tatiana G. Smirnova, Stan G. Benjamin

Abstract:

The Rapid Update Cycle (RUC) land surface model (LSM) was the land-surface component in several generations of operational weather prediction models at the National Centers for Environmental Prediction (NCEP) of the National Oceanic and Atmospheric Administration (NOAA). It was designed for short-range weather predictions with an emphasis on severe weather and was originally intentionally simple to avoid uncertainties from poorly known parameters. Nevertheless, the RUC LSM, when coupled with the hourly-assimilating atmospheric model, can produce a realistic evolution of time-varying soil moisture and temperature, as well as the evolution of snow cover on the ground surface. This result is possible only if the soil/vegetation/snow component of the coupled weather prediction model has sufficient skill to avoid long-term drift. The RUC LSM was first implemented in the operational NCEP Rapid Update Cycle (RUC) weather model in 1998 and later in the Weather Research and Forecasting (WRF)-based Rapid Refresh (RAP) and High-Resolution Rapid Refresh (HRRR). Being available to the international WRF community, it was implemented in operational weather models in Austria, New Zealand, and Switzerland. Based on feedback from the US weather service offices and the international WRF community, and also based on our own validation, the RUC LSM has matured over the years. A sea-ice module was also added to the RUC LSM for surface predictions over Arctic sea ice. Other modifications include refinements to the snow model and a more accurate specification of albedo, roughness length, and other surface properties. At present, the RUC LSM is being tested in the regional application of the Unified Forecast System (UFS). The next-generation UFS-based regional Rapid Refresh FV3 Standalone (RRFS) model will replace the operational RAP and HRRR at NCEP. Over time, the RUC LSM has participated in several international model intercomparison projects to verify its skill using observed atmospheric forcing. ESM-SnowMIP was the last of these experiments, focused on the verification of snow models for open and forested regions. The simulations were performed for ten sites located in different climatic zones of the world, forced with observed atmospheric conditions. While most of the 26 participating models have more sophisticated snow parameterizations than RUC, the RUC LSM received a high ranking in simulations of both snow water equivalent and surface temperature. However, the ESM-SnowMIP experiment also revealed some issues in the RUC snow model, which will be addressed in this paper. One of them is the treatment of grid cells partially covered with snow. The RUC snow module computes the energy and moisture budgets of snow-covered and snow-free areas separately, aggregating the solutions at the end of each time step. Such treatment elevates the importance of the model's computation of snow cover fraction. Improvements to the original simplistic threshold-based approach have been implemented and tested both offline and in the coupled weather model. A detailed description of the changes to the snow cover fraction and other modifications to the RUC soil and snow parameterizations will be presented in this paper.
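A minimal sketch of the mosaic treatment described above: solve the budgets separately for the snow-covered and snow-free parts of a grid cell, then aggregate by snow cover fraction. The threshold value and function shapes are illustrative, not the RUC LSM's actual physics or its improved snow-fraction scheme.

```python
# Minimal sketch of mosaic aggregation over a partially snow-covered
# grid cell; values and shapes are illustrative, not RUC LSM physics.
def snow_cover_fraction(swe, swe_threshold=0.02):
    # Simplistic threshold-based approach: fraction grows linearly
    # with snow water equivalent (kg m^-2) up to full cover.
    return min(swe / swe_threshold, 1.0)

def aggregate_cell(snow_fraction, flux_snow, flux_snow_free):
    # Energy or moisture budgets are solved separately for the
    # snow-covered and snow-free parts, then aggregated at the end
    # of the time step.
    return snow_fraction * flux_snow + (1.0 - snow_fraction) * flux_snow_free
```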

Keywords: land-surface models, weather prediction, hydrology, boundary-layer processes

Procedia PDF Downloads 75
39 The Outcome of Early Balance Exercises and Agility Training in Sports Rehabilitation for Patients Post Anterior Cruciate Ligament (ACL) Reconstruction

Authors: S. M. A. Ismail, M. I. Ibrahim, H. Masdar, F. M. Effendi, M. F. Suhaimi, A. Suun

Abstract:

Introduction: It is generally known that the rehabilitation process is as important as the reconstruction surgery. Much of the literature has focused on how early rehabilitation modalities can be initiated after surgery to ensure a safe return of patients to sports, or at least the regaining of the pre-injury level of function, following an ACL reconstruction. Objectives: The main objective is to study and evaluate the outcome of early balance exercises and agility training in sports rehabilitation for patients post ACL reconstruction, and to compare intervention and control arms (material versus non-material exercise). Subjects were recruited either for material exercise (balance exercises and agility training with strengthening) or for a strengthening-only rehabilitation protocol (non-material). The study followed a prospective intervention trial design. Materials and Methods: Patients whose post-operative ACL reconstructions were performed in Selayang and Sg Buloh Hospitals from 2012 to 2014 were selected for this study. They were taken from the Malaysian Knee Ligament Registry (MKLR), and all patients had single-bundle reconstruction with autograft hamstring tendon (semitendinosus and gracilis). ACL injuries from any type of sport were included. Subjects performed various types of physical rehabilitation activity over 18 weekly sessions; all subjects attended all 18 sessions, and evaluation was done during the first, 9th, and 18th sessions. The rehabilitation protocol was initiated 24 weeks after surgery. The evaluation format was based on clinical assessment (anterior drawer, Lachmann, pivot shift, laxity with rolimeter, the end-point, and thigh circumference) and scoring (Lysholm Knee score and Tegner Activity Level scale). Results and Discussion: 100 patients were selected, of whom 94 were male and 6 female. Ages ranged from 18 to 54 years, with an average of 28 years. All patients were evaluated from 24 weeks after surgery. 50 of them were recruited for material exercise (balance exercises and agility training with strengthening) and 50 for the strengthening-only rehabilitation protocol (non-material). Demographically, 85% had suffered sports injuries, mainly from futsal and football. 39% of them had an abnormal BMI (26-38) and involvement of the left knee. 100% of patients had basic radiographic knee X-rays, and 98% had MRI. All patients had negative anterior drawer, Lachman, and pivot shift tests after completing rehabilitation post ACL reconstruction. There were 95 subjects who sustained grade I injury, 5 grade II, and none grade III, with 90% of them having a soft end-point. Overall, they scored badly on presentation, with a Lysholm score of 53% (poor) and a Tegner activity level of 3/10. After completing 9 weeks of exercises, 90% of the material group had grade I laxity, 75% a firm end-point, a Lysholm score of 71% (fair), and a Tegner activity level of 5/10, compared with the non-material group, which had 62% grade I laxity, 54% firm end-point, a Lysholm score of 62% (poor), and a Tegner activity level of 4/10.
After completing 18 weeks of exercises, the material group maintained 90% grade I laxity, with 100% showing a firm end-point, the Lysholm score increasing to 91% (excellent), and a Tegner activity level of 7/10, compared with the non-material group, which had 69% grade I laxity but maintained 54% firm end-point, a Lysholm score of 76% (fair), and a Tegner activity level of 5/10. This shows that improvement was achieved faster in the material group, which reached a satisfactory level after the 9th session of exercises (75%, 15/20), compared with the non-material group, which achieved only 54% (7/13) after completing the 18th session. Most of the subjects were grade I. These concepts are consolidated into our approach to preparing patients for return to play, including field testing and maintenance training. Conclusions: The basic approach in ACL rehabilitation is to ensure return to sports at 6 months post-operatively. Grade I and II laxity has a favourable and early satisfactory outcome based on clinical assessment and Lysholm and Tegner scores. Reduction in laxity grading indicates a satisfactory outcome. A firm end-point indicates the adequacy of rehabilitation before resuming the previous sport. Material exercise (balance exercises and agility training with strengthening) was beneficial and reliable for achieving a favourable and early satisfactory outcome compared with strengthening only (non-material). We have identified that rehabilitation protocols vary between patients. Therefore, future post ACL reconstruction rehabilitation guidelines should focus on rehabilitation techniques instead of time.

Keywords: post anterior cruciate ligament (ACL) reconstruction, single bundle, hamstring tendon, sports rehabilitation, balance exercises, agility balance

Procedia PDF Downloads 241