Search results for: pedagogical approaches
381 Multi-Label Approach to Facilitate Test Automation Based on Historical Data
Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally
Abstract:
The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which in most cases conflicts with the tight time constraints allotted to quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. The approach is based on supervised learning and analyzes test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For test case generation, the approach exploits historical data of test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than the current state of the art. The prediction quality is expected to increase for larger systems with correspondingly richer historical data. Consequently, this technique reduces the time needed to establish test automation and does so independently of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions that increase the usability of this approach, e.g., in case labelled historical data is scarce. Keywords: machine learning, multi-class, multi-label, supervised learning, test automation
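As an illustration of the supervised multi-label prediction described in this abstract, the sketch below maps test-step text to automation components and reports subset accuracy. The toy data, component names, and the choice of TF-IDF features with one-vs-rest logistic regression are assumptions for illustration; the abstract does not disclose the authors' exact features or classifier.

```python
# Minimal sketch: multi-label prediction of test-automation components from test-step text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import accuracy_score  # on a multi-label indicator matrix this is subset accuracy

# Hypothetical historical data: test-step specifications and the components that implemented them.
steps = [
    "Switch ignition on and wait 2 seconds",
    "Send CAN message and check response",
    "Switch ignition off",
    "Check diagnostic trouble codes after sending CAN message",
]
labels = [
    ["IgnitionControl", "Wait"],
    ["CanSend", "CanCheck"],
    ["IgnitionControl"],
    ["CanSend", "DtcRead"],
]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(steps, Y)

# Predict components for an unseen specification; a real study would evaluate on a held-out split.
new_step = ["Switch ignition on and read diagnostic trouble codes"]
print("Predicted components:", mlb.inverse_transform(model.predict(new_step)))
print("Subset accuracy (training data):", accuracy_score(Y, model.predict(steps)))
```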
Procedia PDF Downloads 132
380 Valorization of Banana Peels for Mercury Removal in Environmental Realist Conditions
Authors: E. Fabre, C. Vale, E. Pereira, C. M. Silva
Abstract:
Introduction: Mercury is one of the most troublesome toxic metals responsible for the contamination of aquatic systems due to its accumulation and bioamplification along the food chain. The United Nations 2030 Agenda for Sustainable Development promotes improving water quality by reducing water pollution and calls for enhanced wastewater treatment, encouraging recycling and safe water reuse globally. Sorption processes are widely used in wastewater treatment due to their many advantages, such as high efficiency and low operational costs. In these processes, the target contaminant is removed from the solution by a solid sorbent. The more selective and low-cost the biosorbent, the more attractive the process becomes. Agricultural wastes are especially attractive sorbents: they are largely available, have no commercial value, and require little or no processing. In this work, banana peels were tested for mercury removal from low-concentration solutions. In order to investigate the applicability of this solid, six water matrices were used, increasing in complexity from natural waters to a real wastewater. Kinetic and equilibrium studies were also performed using the best-known models to evaluate the viability of the process. In line with the concept of circular economy, this study adds value to this by-product as well as contributes to liquid waste management. Experimental: The solutions were prepared with a Hg(II) initial concentration of 50 µg L-1 in natural waters, at 22 ± 1 ºC, pH 6, with magnetic stirring at 650 rpm and a biosorbent mass of 0.5 g L-1. NaCl was added to obtain the salt solutions, seawater was collected from the Portuguese coast, and the real wastewater was kindly provided by ISQ - Instituto de Soldadura e Qualidade (Welding and Quality Institute) and diluted to the same concentration of 50 µg L-1. Banana peels were previously freeze-dried, milled and sieved, and the particle fraction < 1 mm was used. Results: Banana peels removed more than 90% of Hg(II) from all the synthetic solutions studied. In these cases, the increase in the complexity of the water matrix promoted higher mercury removal. In salt waters, the biosorbent showed removals of 96%, 95% and 98% for 3, 15 and 30 g L-1 of NaCl, respectively. The residual concentration of Hg(II) in solution reached the drinking water regulatory level (1 µg L-1). For real matrices, the lower Hg(II) elimination (93% for seawater and 81% for the real wastewater) can be explained by the competition between the Hg(II) ions and the other elements present in these solutions for the sorption sites. Regarding the equilibrium study, the experimental data are better described by the Freundlich isotherm (R² = 0.991). The Elovich equation provided the best fit to the kinetic data. Conclusions: The results demonstrate the great ability of banana peels to remove mercury. The environmentally realistic conditions studied in this work highlight their potential use as biosorbents in water remediation processes. Keywords: banana peels, mercury removal, sorption, water treatment
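For readers who want to reproduce the equilibrium analysis mentioned above, the following minimal sketch fits the Freundlich isotherm, q_e = K_F·C_e^(1/n), and computes R². The data points below are illustrative placeholders, not the measured values from the study.

```python
# Minimal sketch: fitting the Freundlich isotherm to sorption equilibrium data.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(ce, kf, n):
    """Freundlich isotherm: sorbed amount q_e as a function of equilibrium concentration C_e."""
    return kf * ce ** (1.0 / n)

# Equilibrium concentration (ug/L) and sorbed amount (ug/g): hypothetical values.
ce = np.array([1.0, 2.5, 5.0, 10.0, 20.0, 40.0])
qe = np.array([18.0, 30.0, 44.0, 63.0, 90.0, 128.0])

params, _ = curve_fit(freundlich, ce, qe, p0=(10.0, 2.0))
kf, n = params

# Coefficient of determination for the fitted isotherm.
residuals = qe - freundlich(ce, kf, n)
r_squared = 1.0 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
print(f"K_F = {kf:.2f}, 1/n = {1.0/n:.2f}, R^2 = {r_squared:.3f}")
```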
Procedia PDF Downloads 155
379 Building Community through Discussion Forums in an Online Accelerated MLIS Program: Perspectives of Instructors and Students
Authors: Mary H Moen, Lauren H. Mandel
Abstract:
Creating a sense of community in online learning is important for student engagement and success. The integration of discussion forums within online learning environments presents an opportunity to explore how this computer mediated communications format can cultivate a sense of community among students in accelerated master’s degree programs. This research has two aims, to delve into the ways instructors utilize this communications technology to create community and to understand the feelings and experiences of graduate students participating in these forums in regard to its effectiveness in community building. This study is a two-phase approach encompassing qualitative and quantitative methodologies. The data will be collected at an online accelerated Master of Library and Information Studies program at a public university in the northeast of the United States. Phase 1 is a content analysis of the syllabi from all courses taught in the 2023 calendar year, which explores the format and rules governing discussion forum assignments. Four to six individual interviews of department faculty and part time faculty will also be conducted to illuminate their perceptions of the successes and challenges of their discussion forum activities. Phase 2 will be an online survey administered to students in the program during the 2023 calendar year. Quantitative data will be collected for statistical analysis, and short answer responses will be analyzed for themes. The survey is adapted from the Classroom Community Scale Short-Form (CSS-SF), which measures students' self-reported responses on their feelings of connectedness and learning. The prompts will contextualize the items from their experience in discussion forums during the program. Short answer responses on the challenges and successes of using discussion forums will be analyzed to gauge student perceptions and experiences using this type of communication technology in education. This research study is in progress. The authors anticipate that the findings will provide a comprehensive understanding of the varied approaches instructors use in discussion forums for community-building purposes in an accelerated MLIS program. They predict that the more varied, flexible, and consistent student uses of discussion forums are, the greater the sense of community students will report. Additionally, students’ and instructors’ perceptions and experiences within these forums will shed light on the successes and challenges faced, thereby offering valuable recommendations for enhancing online learning environments. The findings are significant because they can contribute actionable insights for instructors, educational institutions, and curriculum designers aiming to optimize the use of discussion forums in online accelerated graduate programs, ultimately fostering a richer and more engaging learning experience for students.Keywords: accelerated online learning, discussion forums, LIS programs, sense of community, g
Procedia PDF Downloads 84
378 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely acceptable approach to estimate an adjusted relative risk and risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods to estimate relative risks to represent conditional and marginal estimation approaches. We consider the log-binomial, generalised linear models (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SE); log-binomial GLM with convex optimisation and model-based SEs; log-binomial GLM with convex optimisation and permutation tests; modified-Poisson GLM IWLS and robust SEs; log-binomial generalised estimation equations (GEE) and robust SEs; marginal standardisation and delta method SEs; and marginal standardisation and permutation test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcomes (10%, 50%, and 80%), and covariates (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships. Treatment effects (ranging from 0, -0.5, 1; on the log-scale) will consider null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimations may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than GLM IWLS log-binomial.Keywords: binary outcomes, statistical methods, clinical trials, simulation study
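As a minimal sketch of one simulation replicate for two of the candidate estimators named above (the modified-Poisson GLM with robust standard errors and marginal standardisation based on a logistic model), the code below simulates a single trial dataset and compares the estimated relative risks. The sample size, baseline risk, and covariate strength are illustrative and do not correspond to the study's scenarios.

```python
# Minimal sketch: two relative-risk estimators on one simulated randomised-trial dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
treat = rng.integers(0, 2, n)            # randomised treatment indicator
x = rng.normal(size=n)                   # a prognostic covariate
# Data-generating model on the log-risk scale: baseline risk ~20%, log relative risk = 0.4.
risk = np.exp(np.log(0.2) + 0.4 * treat + 0.3 * x)
y = rng.binomial(1, np.clip(risk, 0, 1))

X = sm.add_constant(np.column_stack([treat, x]))  # columns: intercept, treatment, covariate

# 1) Modified Poisson: Poisson GLM for a binary outcome with robust (sandwich) SEs.
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
rr_poisson = np.exp(poisson_fit.params[1])

# 2) Marginal standardisation: logistic model, then average predicted risks with
#    treatment set to 1 and to 0 for every subject.
logit_fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
X1, X0 = X.copy(), X.copy()
X1[:, 1], X0[:, 1] = 1, 0
rr_marginal = logit_fit.predict(X1).mean() / logit_fit.predict(X0).mean()

print(f"Modified-Poisson RR: {rr_poisson:.3f}, standardised (marginal) RR: {rr_marginal:.3f}")
```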
Procedia PDF Downloads 115
377 Survey of the Literacy by Radio Project as an Innovation in Literacy Promotion in Nigeria
Authors: Stella Chioma Nwizu
Abstract:
The National Commission for Adult and Non Formal Education (NMEC) in Nigeria is charged with the reduction of illiteracy rate through the development, monitoring, and supervision of literacy programmes in Nigeria. In spite of various efforts by NMEC to reduce illiteracy, literature still shows that the illiteracy rate is still high. According to NMEC/UNICEF, about 60 million Nigerians are non-literate, and nearly two thirds of them are women. This situation forced the government to search for innovative and better approaches to literacy promotion and delivery. The literacy by radio project was adopted as an innovative intervention to literacy delivery in Nigeria because the radio is the cheapest and most easily affordable medium for non-literates. The project aimed at widening access to literacy programmes for the non-literate marginalized and disadvantaged groups in Nigeria by taking literacy programmes to their door steps. The literacy by radio has worked perfectly well in non-literacy reduction in Cuba. This innovative intervention of literacy by radio is anchored on the diffusion of innovation theory by Rogers. The literacy by radio has been going on for fifteen years and the efficacy and contributions of this innovation need to be investigated. Thus, the purpose of this research is to review the contributions of the literacy by radio in Nigeria. The researcher adopted the survey research design for the study. The population for the study consisted of 2,706 participants and 47 facilitators of the literacy by radio programme in the 10 pilot states in Nigeria. A sample of four states made up of 302 participants and eight facilitators were used for the study. Information was collected through Focus Group Discussion (FGD), interviews and content analysis of official documents. The data were analysed qualitatively to review the contributions of literacy by radio project and determine the efficacy of this innovative approach in facilitating literacy in Nigeria. Results from the field experience showed, among others, that more non-literates have better access to literacy programmes through this innovative approach. The pilot project was 88% successful; not less than 2,110 adults were made literate through the literacy by radio project in 2017. However, lack of enthusiasm and commitment on the part of the technical committee and facilitators due to non-payment of honorarium, poor signals from radio stations, interruption of lectures with adverts, low community involvement in decision making in the project are challenges to the success rate of the project. The researcher acknowledges the need to customize all materials and broadcasts in all the dialects of the participants and the inclusion of more civil rights, environmental protection and agricultural skills into the project. The study recommends among others, improved and timely funding of the project by the Federal Government to enable NMEC to fulfill her obligations towards the greater success of the programme, setting up of independent radio stations for airing the programmes and proper monitoring and evaluation of the project by NMEC and State Agencies for greater effectiveness. In an era of the knowledge-driven economy, no one should be allowed to get saddled with the weight of illiteracy.Keywords: innovative approach, literacy, project, radio, survey
Procedia PDF Downloads 66
376 Method for Requirements Analysis and Decision Making for Restructuring Projects in Factories
Authors: Rene Hellmuth
Abstract:
The requirements for the factory planning and the building concerned have changed in the last years. Factory planning has the task of designing products, plants, processes, organization, areas, and the building of a factory. Regular restructuring gains more importance in order to maintain the competitiveness of a factory. Restrictions regarding new areas, shorter life cycles of product and production technology as well as a VUCA (volatility, uncertainty, complexity and ambiguity) world cause more frequently occurring rebuilding measures within a factory. Restructuring of factories is the most common planning case today. Restructuring is more common than new construction, revitalization and dismantling of factories. The increasing importance of restructuring processes shows that the ability to change was and is a promising concept for the reaction of companies to permanently changing conditions. The factory building is the basis for most changes within a factory. If an adaptation of a construction project (factory) is necessary, the inventory documents must be checked and often time-consuming planning of the adaptation must take place to define the relevant components to be adapted, in order to be able to finally evaluate them. The different requirements of the planning participants from the disciplines of factory planning (production planner, logistics planner, automation planner) and industrial construction planning (architect, civil engineer) come together during reconstruction and must be structured. This raises the research question: Which requirements do the disciplines involved in the reconstruction planning place on a digital factory model? A subordinate research question is: How can model-based decision support be provided for a more efficient design of the conversion within a factory? Because of the high adaptation rate of factories and its building described above, a methodology for rescheduling factories based on the requirements engineering method from software development is conceived and designed for practical application in factory restructuring projects. The explorative research procedure according to Kubicek is applied. Explorative research is suitable if the practical usability of the research results has priority. Furthermore, it will be shown how to best use a digital factory model in practice. The focus will be on mobile applications to meet the needs of factory planners on site. An augmented reality (AR) application will be designed and created to provide decision support for planning variants. The aim is to contribute to a shortening of the planning process and model-based decision support for more efficient change management. This requires the application of a methodology that reduces the deficits of the existing approaches. The time and cost expenditure are represented in the AR tablet solution based on a building information model (BIM). Overall, the requirements of those involved in the planning process for a digital factory model in the case of restructuring within a factory are thus first determined in a structured manner. The results are then applied and transferred to a construction site solution based on augmented reality.Keywords: augmented reality, digital factory model, factory planning, restructuring
Procedia PDF Downloads 134
375 Precursor Synthesis of Carbon Materials with Different Aggregates Morphologies
Authors: Nikolai A. Khlebnikov, Vladimir N. Krasilnikov, Evgenii V. Polyakov, Anastasia A. Maltceva
Abstract:
Carbon materials with advanced surfaces are widely used both in modern industry and in environmental protection. The physical-chemical nature of these materials is determined by the morphology of primary atomic and molecular carbon structures, which are the basis for synthesizing the following materials: zero-dimensional (fullerenes), one-dimensional (fiber, tubes), two-dimensional (graphene) carbon nanostructures, three-dimensional (multi-layer graphene, graphite, foams) with unique physical-chemical and functional properties. Experience shows that the microscopic morphological level is the basis for the creation of the next mesoscopic morphological level. The dependence of the morphology on the chemical way and process prehistory (crystallization, colloids formation, liquid crystal state and other) is the peculiarity of the last called level. These factors determine the consumer properties of carbon materials, such as specific surface area, porosity, chemical resistance in corrosive environments, catalytic and adsorption activities. Based on the developed ideology of thin precursor synthesis, the authors discuss one of the approaches of the porosity control of carbon-containing materials with a given aggregates morphology. The low-temperature thermolysis of precursors in a gas environment of a given composition is the basis of the above-mentioned idea. The processes of carbothermic precursor synthesis of two different compounds: tungsten carbide WC:nC and zinc oxide ZnO:nC containing an impurity phase in the form of free carbon were selected as subjects of the research. In the first case, the transition metal (tungsten) forming carbides was the object of the synthesis. In the second case, there was selected zinc that does not form carbides. The synthesis of both kinds of transition metals compounds was conducted by the method of precursor carbothermic synthesis from the organic solution. ZnO:nC composites were obtained by thermolysis of succinate Zn(OO(CH2)2OO), formate glycolate Zn(HCOO)(OCH2CH2O)1/2, glycerolate Zn(OCH2CHOCH2OH), and tartrate Zn(OOCCH(OH)CH(OH)COO). WC:nC composite was synthesized from ammonium paratungstate and glycerol. In all cases, carbon structures that are specific for diamond- like carbon forms appeared on the surface of WC and ZnO particles after the heat treatment. Tungsten carbide and zinc oxide were removed from the composites by selective chemical dissolution preserving the amorphous carbon phase. This work presents the results of investigating WC:nC and ZnO:nC composites and carbon nanopowders with tubular, tape, plate and onion morphologies of aggregates that are separated by chemical dissolution of WC and ZnO from the composites by the following methods: SEM, TEM, XPA, Raman spectroscopy, and BET. The connection between the carbon morphology under the conditions of synthesis and chemical nature of the precursor and the possibility of regulation of the morphology with the specific surface area up to 1700-2000 m2/g of carbon-structured materials are discussed.Keywords: carbon morphology, composite materials, precursor synthesis, tungsten carbide, zinc oxide
Procedia PDF Downloads 335
374 CRM Cloud Computing: An Efficient and Cost Effective Tool to Improve Customer Interactions
Authors: Gaurangi Saxena, Ravindra Saxena
Abstract:
Lately, cloud computing has been used to enhance the ability to attain corporate goals more effectively and efficiently at lower cost. This new computing paradigm, cloud computing, has emerged as a powerful tool for optimum utilization of resources and for gaining competitiveness through cost reduction, achieving business goals with greater flexibility. Realizing the importance of this new technique, most well-known companies in the computer industry, such as Microsoft, IBM, Google, and Apple, are spending millions of dollars researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that by using the right middleware, a cloud computing system can execute all the programs a normal computer could run. Potentially, everything from the simplest generic word-processing software to highly specialized and customized programs designed for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments but also support “interactive user-facing applications” such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm. It draws on existing technologies and approaches, such as utility computing, software-as-a-service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they don’t have it available on site. Cloud computing gives these companies the option of storing data on someone else’s hardware, removing the need for physical space on the front end. Prominent service providers such as Amazon, Google, Sun, IBM, Oracle, and Salesforce are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases, and applications. Application services could include email, office applications, finance, video, audio, and data processing. By using a cloud computing system, a company can improve its customer relationship management. A CRM cloud computing system may be highly useful in delivering to a sales team a blend of unique functionalities to improve agent/customer interactions. This paper first defines cloud computing as a tool for running business activities more effectively and efficiently at a lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss the application of cloud computing in different disciplines of management, especially in the field of marketing, with special reference to the use of cloud computing in CRM. The study concludes that a CRM cloud computing platform helps a company track data such as orders, discounts, references, competitors, and much more. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently and at a lower cost, gain competitive advantage. Keywords: cloud computing, competitive advantage, customer relationship management, grid computing
Procedia PDF Downloads 312
373 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration
Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu
Abstract:
Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until obtaining the final product. To meet the desired product specification, process parameters are strictly followed. To be able to ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is very crucial for the efficiency of the upcoming processes. In order to maximize the process efficiency, the determination of the quality of distillates should be as fast as possible, reliable, and cost-effective. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates which are Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are named after separation by the number of carbons it contains. LSRN consists of five to six carbon-containing hydrocarbons, HSRN consist of six to ten, and kerosene consists of sixteen to twenty-two carbon-containing hydrocarbons. Physical properties of three different crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using Near-Infrared Spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum sample for almost four years. Several different crude oil grades were processed during sample collection times. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to FT-NIR spectra of samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to preprocessed FT-NIR spectra. Predictive performance of each multivariate calibration technique and preprocessing techniques were compared, and the best models were chosen according to the reproducibility of ASTM reference methods. This work demonstrates the developed models can be used for routine analysis instead of conventional analytical methods with over 90% accuracy.Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery
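The following sketch illustrates the preprocessing-plus-calibration workflow described above: Savitzky-Golay derivative filtering of spectra followed by PLS regression against a laboratory reference property. The synthetic spectra, the target property, the filter settings, and the number of latent variables are assumptions for illustration, not the study's actual data or tuned parameters.

```python
# Minimal sketch: Savitzky-Golay preprocessing followed by PLS calibration of a reference property.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_samples, n_channels = 200, 1500                     # binned spectra spanning 10000-4000 cm^-1 (assumed)
spectra = rng.normal(size=(n_samples, n_channels)).cumsum(axis=1)   # smooth-ish synthetic curves
# Hypothetical reference property that depends on a spectral band (plus measurement noise).
property_ref = spectra[:, 500] - spectra[:, 450] + rng.normal(scale=0.5, size=n_samples)

# Savitzky-Golay first derivative suppresses baseline shifts between samples.
spectra_sg = savgol_filter(spectra, window_length=15, polyorder=2, deriv=1, axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    spectra_sg, property_ref, test_size=0.25, random_state=0
)
pls = PLSRegression(n_components=8)
pls.fit(X_train, y_train)
print("R^2 on held-out samples:", round(r2_score(y_test, pls.predict(X_test).ravel()), 3))
```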
Procedia PDF Downloads 131
372 Using Computer Vision and Machine Learning to Improve Facility Design for Healthcare Facility Worker Safety
Authors: Hengameh Hosseini
Abstract:
The design of large healthcare facilities, such as hospitals, multi-service-line clinics, and nursing facilities, that can accommodate patients with wide-ranging disabilities is a challenging endeavor and one that is poorly understood among healthcare facility managers, administrators, and executives. An even less understood extension of this problem is the implications of weakly or insufficiently accommodative facility design for healthcare workers in physically intensive jobs who may also suffer from a range of disabilities and who are therefore at increased risk of workplace accident and injury. Combine this reality with the vast range of facility types, ages, and designs, and the problem of universal accommodation becomes even more daunting and complex. In this study, we focus on the implications of facility design for healthcare workers with low vision who also have physically active jobs. The points of difficulty are myriad: health service infrastructure, the equipment used in health facilities, and transport to and from appointments and other services can all pose a barrier to health care if they are inaccessible, less accessible, or even simply less comfortable for people with various disabilities. We conduct a series of surveys and interviews with employees and administrators of 7 facilities of a range of sizes and ownership models in the Northeastern United States, and combine that corpus with in-facility observations and data collection to identify five major points of failure common to all the facilities that we concluded could pose safety threats, ranging from very minor to severe, to employees with vision impairments. We determine that lack of design empathy is a major commonality among facility management and ownership. We subsequently propose three methods for remedying this lack of empathy-informed design and the dangers it poses to employees: the use of an existing open-source augmented reality application to simulate the low-vision experience for designers and managers; the use of a machine learning model we develop to automatically infer facility shortcomings from large datasets of recorded patient and employee reviews and feedback; and the use of a computer vision model fine-tuned on images of each facility to infer and predict facility features, locations, and workflows that could pose meaningful dangers to visually impaired employees of each facility. After conducting a series of real-world comparative experiments with each of these approaches, we conclude that each is a viable solution under particular sets of conditions, and finally characterize the range of facility types, workforce composition profiles, and work conditions under which each of these methods would be most apt and successful. Keywords: artificial intelligence, healthcare workers, facility design, disability, visually impaired, workplace safety
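As an illustration of the second proposed remedy, the sketch below trains a simple text classifier to flag potential facility shortcomings in free-text feedback. The example reviews, labels, and the TF-IDF plus logistic regression model are assumptions for illustration; the abstract does not specify the authors' model architecture.

```python
# Minimal sketch: flagging potential low-vision hazards in employee/patient feedback text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "The corridor lighting near the supply room is too dim to read labels",
    "Signage on the third floor uses very small, low-contrast fonts",
    "Staff at reception were friendly and helpful",
    "The new break room is comfortable and quiet",
    "Glass doors lack contrast markings and are hard to see",
    "Parking instructions were clear and easy to follow",
]
labels = [1, 1, 0, 0, 1, 0]  # 1 = potential hazard for low-vision staff, 0 = no issue raised

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(reviews, labels)

new_feedback = ["Stairwell edges have no high-contrast strips and lighting is poor"]
print("Flagged as potential hazard:", bool(clf.predict(new_feedback)[0]))
```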
Procedia PDF Downloads 116
371 Reuse of Historic Buildings for Tourism: Policy Gaps
Authors: Joseph Falzon, Margaret Nelson
Abstract:
Background: Regeneration and re-use of abandoned historic buildings present a continuous challenge for policy makers and stakeholders in the tourism and leisure industry. Obsolete historic buildings offer great potential for tourism and leisure accommodation, presenting unique heritage experiences to travellers and host communities. Contemporary demands in the hospitality industry continuously require higher standards, some of which are in conflict with heritage conservation principles. Objective: The aim of this research paper is to critically discuss regeneration policies with stakeholders of the tourism and leisure industry and to examine current practices in policy development and the resultant impact of policies on the Maltese tourism and leisure industry. Research Design: Six stakeholders involved in the tourism and leisure industry participated in semi-structured interviews. A number of measures were taken to reduce bias and thus improve trustworthiness. Clear statements of the purpose of the research study were provided at the start of each interview to reduce expectancy bias. The interviews were semi-structured to minimise interviewer bias. Interviewees were allowed to expand and elaborate as necessary, with probing questions used only when needed, to allow free expression of opinions and practices. The interview guide was sent to participants at least two weeks before the interview so that they could prepare and to prevent recall bias during the interview as much as possible. Interview questions and probes contained both positive and negative aspects to prevent interviewer bias. Policy documents were available during the interview to prevent recall bias. Interview recordings were transcribed using intelligent verbatim transcription. Analysis was carried out using thematic analysis, with the coding frame developed independently by two researchers. All phases of the study were governed by research ethics. Findings: Findings were grouped into main themes: financing of regeneration, governance, legislation, and policies. Other key issues included the value of historic buildings and approaches to regeneration. Whilst regeneration of historic buildings was noted, participants discussed a number of barriers that hindered regeneration. Stakeholders identified gaps in policies and gaps at the policy implementation stage. European Union funding policies facilitated regeneration initiatives, but funding criteria based on economic deliverables left a gap with respect to intangible heritage. Stakeholders identified niche markets for heritage tourism accommodation. A lack of research-based policies was also identified. Conclusion: The potential for regeneration is hindered by a legal framework that inadequately supports the contemporary needs of the tourism industry. Policies should be developed through active stakeholder participation. Adequate funding schemes have to support the tangible and intangible components of the built heritage. Keywords: governance, historic buildings, policy, tourism
Procedia PDF Downloads 235
370 Predicting OpenStreetMap Coverage by Means of Remote Sensing: The Case of Haiti
Authors: Ran Goldblatt, Nicholas Jones, Jennifer Mannix, Brad Bottoms
Abstract:
Accurate, complete, and up-to-date geospatial information is the foundation of successful disaster management. When the 2010 Haiti Earthquake struck, accurate and timely information on the distribution of critical infrastructure was essential to the disaster response community for effective search and rescue operations. Existing geospatial datasets such as Google Maps did not have comprehensive coverage of these features. In the days following the earthquake, many organizations released high-resolution satellite imagery, catalyzing a worldwide effort to map Haiti and support the recovery operations. Of these organizations, OpenStreetMap (OSM), a collaborative project to create a free editable map of the world, used the imagery to support volunteers in digitizing roads, buildings, and other features, creating the most detailed map of Haiti in existence in just a few weeks. However, large portions of the island are still not fully covered by OSM. There is an increasing need for a tool that automatically identifies areas in Haiti, as well as in other countries vulnerable to disasters, that are not fully mapped. The objective of this project is to leverage different types of remote sensing measurements, together with machine learning approaches, in order to identify geographical areas where OSM coverage of building footprints is incomplete. Several remote sensing measures and derived products were assessed as potential predictors of OSM building footprint coverage, including: intensity of light emitted at night (based on VIIRS measurements), spectral indices derived from the Sentinel-2 satellite (normalized difference vegetation index (NDVI), normalized difference built-up index (NDBI), soil-adjusted vegetation index (SAVI), urban index (UI)), surface texture (based on Sentinel-1 SAR measurements), and elevation and slope. Additional remote sensing derived products, such as Hansen Global Forest Change, DLR's Global Urban Footprint (GUF), and World Settlement Footprint (WSF), were also evaluated as predictors, as well as the OSM street and road network (including junctions). A supervised random forest model predicted 89% of the variation in OSM building footprint area in a given cell. These predictions allowed the identification of cells that are predicted to contain building footprints but are not yet mapped. With these results, this methodology could be adapted to any location to assist in preparing for future disastrous events and to ensure that essential geospatial information is available to support response and recovery efforts during and following major disasters. Keywords: disaster management, Haiti, machine learning, OpenStreetMap, remote sensing
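A minimal sketch of the supervised workflow described above follows: a random forest relates per-cell remote-sensing predictors to OSM building-footprint area, so that under-mapped cells can be flagged where predicted coverage exceeds what is currently mapped. The synthetic feature table and the simple relationship used to generate it stand in for the real raster-derived dataset.

```python
# Minimal sketch: predicting OSM building-footprint area per grid cell from remote-sensing features.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n_cells = 5000
cells = pd.DataFrame({
    "viirs_ntl": rng.gamma(2.0, 2.0, n_cells),     # night-time light intensity (VIIRS)
    "ndvi": rng.uniform(-0.1, 0.9, n_cells),
    "ndbi": rng.uniform(-0.5, 0.5, n_cells),
    "slope": rng.uniform(0, 30, n_cells),
    "road_junctions": rng.poisson(3, n_cells),     # OSM road-network junctions per cell
})
# Hypothetical footprint area: grows with lights, built-up index, and roads; shrinks with slope.
footprint = (5 * cells["viirs_ntl"] + 40 * cells["ndbi"].clip(0)
             + 2 * cells["road_junctions"] - 0.3 * cells["slope"]
             + rng.normal(0, 2, n_cells)).clip(0)

X_train, X_test, y_train, y_test = train_test_split(cells, footprint, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X_train, y_train)
print("Variance explained (R^2):", round(r2_score(y_test, rf.predict(X_test)), 2))
# Cells whose predicted footprint greatly exceeds the mapped value would be candidates
# for targeted mapping campaigns.
```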
Procedia PDF Downloads 125
369 Foreseen the Future: Human Factors Integration in European Horizon Projects
Authors: José Manuel Palma, Paula Pereira, Margarida Tomás
Abstract:
The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics, and intelligent machinery must integrate human factors to address the need to optimize systems and processes, thereby contributing to the creation of a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge Industry 4.0 and 5.0 technology. AGILEHAND aims to create advanced technologies for autonomously sorting, handling, and packaging soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. The two projects employ different approaches to explore HFI. AGILEHAND is mainly empirical, involving a comparison between current and future working conditions, coupled with an understanding of best practices and the enhancement of safety aspects, primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND, we will identify emergent safety risks and challenges, their causes, and how to overcome them by resorting to interviews, questionnaires, literature review, and case studies. Findings and results will be presented in the “Strategies for Workers’ Skills Development, Health and Safety, Communication and Engagement” handbook. The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on early detection and elimination (or mitigation) of risks associated with the new technology, as well as guidance to comply with European Union safety and privacy regulations, ensuring HFI and thereby contributing to an optimized, safe work environment. To achieve this, we will embed safety by design, apply questionnaires, perform site visits, provide risk assessments, and closely track progress while suggesting and recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled “Human Safety and Privacy Measures”. These projects received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation programme under grant agreements No 101092043 (AGILEHAND) and No 101135707 (FORTIS). Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0
Procedia PDF Downloads 65
368 Improving Literacy Level Through Digital Books for Deaf and Hard of Hearing Students
Authors: Majed A. Alsalem
Abstract:
In our contemporary world, literacy is an essential skill that enables students to increase their efficiency in managing the many assignments they receive that require understanding and knowledge of the world around them. In addition, literacy enhances student participation in society improving their ability to learn about the world and interact with others and facilitating the exchange of ideas and sharing of knowledge. Therefore, literacy needs to be studied and understood in its full range of contexts. It should be seen as social and cultural practices with historical, political, and economic implications. This study aims to rebuild and reorganize the instructional designs that have been used for deaf and hard-of-hearing (DHH) students to improve their literacy level. The most critical part of this process is the teachers; therefore, teachers will be the center focus of this study. Teachers’ main job is to increase students’ performance by fostering strategies through collaborative teamwork, higher-order thinking, and effective use of new information technologies. Teachers, as primary leaders in the learning process, should be aware of new strategies, approaches, methods, and frameworks of teaching in order to apply them to their instruction. Literacy from a wider view means acquisition of adequate and relevant reading skills that enable progression in one’s career and lifestyle while keeping up with current and emerging innovations and trends. Moreover, the nature of literacy is changing rapidly. The notion of new literacy changed the traditional meaning of literacy, which is the ability to read and write. New literacy refers to the ability to effectively and critically navigate, evaluate, and create information using a range of digital technologies. The term new literacy has received a lot of attention in the education field over the last few years. New literacy provides multiple ways of engagement, especially to those with disabilities and other diverse learning needs. For example, using a number of online tools in the classroom provides students with disabilities new ways to engage with the content, take in information, and express their understanding of this content. This study will provide teachers with the highest quality of training sessions to meet the needs of DHH students so as to increase their literacy levels. This study will build a platform between regular instructional designs and digital materials that students can interact with. The intervention that will be applied in this study will be to train teachers of DHH to base their instructional designs on the notion of Technology Acceptance Model (TAM) theory. Based on the power analysis that has been done for this study, 98 teachers are needed to be included in this study. This study will choose teachers randomly to increase internal and external validity and to provide a representative sample from the population that this study aims to measure and provide the base for future and further studies. This study is still in process and the initial results are promising by showing how students have engaged with digital books.Keywords: deaf and hard of hearing, digital books, literacy, technology
Procedia PDF Downloads 490
367 Pyramid of Deradicalization: Causes and Possible Solutions
Authors: Ashir Ahmed
Abstract:
Generally, radicalization happens when a person's thinking and behaviour become significantly different from how most members of their society and community view social issues and participate politically. Radicalization often leads to violent extremism, which refers to the beliefs and actions of people who support or use violence to achieve ideological, religious, or political goals. Studies on radicalization dispel the common myths that someone must be in a group to be radicalised or that anyone who experiences radical thoughts is a violent extremist. Moreover, it is erroneous to suggest that radicalisation is always linked to religion. Generally, the common motives of radicalization include ideological, issue-based, ethno-nationalist, or separatist underpinnings. Moreover, there are a number of factors that further increase the chances that someone becomes radicalised and chooses the path of violent extremism and possibly terrorism. Since many, and sometimes quite different, factors contribute to radicalization and violent extremism, it is highly unlikely that a single solution could produce effective outcomes against radicalization, violent extremism, and terrorism. The pathway to deradicalization, like the pathway to radicalisation, is different for everyone. Considering the need for customized deradicalization measures, this study proposes a multi-tier framework, called the ‘pyramid of deradicalization’, that first helps identify the stage at which an individual may be on the radicalization pathway and then proposes a customized strategy to deal with the respective stage. The first tier (tier 1) addresses the broader community and proposes a ‘universal approach’ aiming to offer community-based design and delivery of educational programs to raise awareness and provide general information on possible factors leading to radicalization and their remedies. The second tier focuses on members of the community who are more vulnerable and disengaged from the rest of the community. This tier proposes a ‘targeted approach’, reaching vulnerable members of the community through early interventions such as anonymous helplines where people feel confident and comfortable seeking help without fearing disclosure of their identity. The third tier focuses on people for whom there is clear evidence of moving toward extremism or becoming radicalized. People falling into this tier are supported through an ‘interventionist approach’, which advocates community engagement and community policing, introduces deradicalization programmes to the targeted individuals, and looks after their physical and mental health. The fourth and last tier suggests strategies to deal with people who are actively breaking the law. The ‘enforcement approach’ includes measures such as strong law enforcement, fairness and accuracy in reporting radicalization events, unbiased treatment under the law regardless of gender, race, nationality, or religion, and strengthening family connections. It is anticipated that the operationalization of the proposed framework (‘pyramid of deradicalization’) would help categorise people according to their tendency to become radicalized and then offer an appropriate strategy to help them become valuable and peaceful members of the community. Keywords: deradicalization, framework, terrorism, violent extremism
Procedia PDF Downloads 269
366 Design Flood Estimation in Satluj Basin-Challenges for Sunni Dam Hydro Electric Project, Himachal Pradesh-India
Authors: Navneet Kalia, Lalit Mohan Verma, Vinay Guleria
Abstract:
Introduction: Design Flood studies are essential for effective planning and functioning of water resource projects. Design flood estimation for Sunni Dam Hydro Electric Project located in State of Himachal Pradesh, India, on the river Satluj, was a big challenge in view of the river flowing in the Himalayan region from Tibet to India, having a large catchment area of varying topography, climate, and vegetation. No Discharge data was available for the part of the river in Tibet, whereas, for India, it was available only at Khab, Rampur, and Luhri. The estimation of Design Flood using standard methods was not possible. This challenge was met using two different approaches for upper (snow-fed) and lower (rainfed) catchment using Flood Frequency Approach and Hydro-metrological approach. i) For catchment up to Khab Gauging site (Sub-Catchment, C1), Flood Frequency approach was used. Around 90% of the catchment area (46300 sqkm) up to Khab is snow-fed which lies above 4200m. In view of the predominant area being snow-fed area, 1 in 10000 years return period flood estimated using Flood Frequency analysis at Khab was considered as Probable Maximum Flood (PMF). The flood peaks were taken from daily observed discharges at Khab, which were increased by 10% to make them instantaneous. Design Flood of 4184 cumec thus obtained was considered as PMF at Khab. ii) For catchment between Khab and Sunni Dam (Sub-Catchment, C2), Hydro-metrological approach was used. This method is based upon the catchment response to the rainfall pattern observed (Probable Maximum Precipitation - PMP) in a particular catchment area. The design flood computation mainly involves the estimation of a design storm hyetograph and derivation of the catchment response function. A unit hydrograph is assumed to represent the response of the entire catchment area to a unit rainfall. The main advantage of the hydro-metrological approach is that it gives a complete flood hydrograph which allows us to make a realistic determination of its moderation effect while passing through a reservoir or a river reach. These studies were carried out to derive PMF for the catchment area between Khab and Sunni Dam site using a 1-day and 2-day PMP values of 232 and 416 cm respectively. The PMF so obtained was 12920.60 cumec. Final Result: As the Catchment area up to Sunni Dam has been divided into 2 sub-catchments, the Flood Hydrograph for the Catchment C1 has been routed through the connecting channel reach (River Satluj) using Muskingum method and accordingly, the Design Flood was computed after adding the routed flood ordinates with flood ordinates of catchment C2. The total Design Flood (i.e. 2-Day PMF) with a peak of 15473 cumec was obtained. Conclusion: Even though, several factors are relevant while deciding the method to be used for design flood estimation, data availability and the purpose of study are the most important factors. Since, generally, we cannot wait for the hydrological data of adequate quality and quantity to be available, flood estimation has to be done using whatever data is available. Depending upon the type of data available for a particular catchment, the method to be used is to be selected.Keywords: design flood, design storm, flood frequency, PMF, PMP, unit hydrograph
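For readers unfamiliar with the routing step mentioned above, the sketch below applies the Muskingum method to route a sub-catchment C1 hydrograph through a channel reach and add it to the C2 hydrograph. Apart from the peak values quoted in the abstract, the hydrograph ordinates and the K and X parameters are illustrative placeholders, not the study's calibrated values.

```python
# Minimal sketch: Muskingum channel routing and combination of two sub-catchment hydrographs.
import numpy as np

def muskingum_route(inflow, K, X, dt):
    """Route an inflow hydrograph through a reach (K and dt in hours, flows in cumec)."""
    denom = 2.0 * K * (1.0 - X) + dt
    c1 = (dt - 2.0 * K * X) / denom
    c2 = (dt + 2.0 * K * X) / denom
    c3 = (2.0 * K * (1.0 - X) - dt) / denom   # note c1 + c2 + c3 = 1
    outflow = np.zeros_like(inflow, dtype=float)
    outflow[0] = inflow[0]                    # assume an initial steady state
    for j in range(len(inflow) - 1):
        outflow[j + 1] = c1 * inflow[j + 1] + c2 * inflow[j] + c3 * outflow[j]
    return outflow

# Hypothetical inflow hydrograph for sub-catchment C1 at Khab (6-hour ordinates, peak 4184 cumec).
inflow_c1 = np.array([500, 900, 1800, 3200, 4184, 3600, 2700, 1900, 1300, 900, 700], float)
routed_c1 = muskingum_route(inflow_c1, K=12.0, X=0.2, dt=6.0)

# Hypothetical C2 (rain-fed) hydrograph at the dam site, peak near the 12920 cumec quoted above.
hydrograph_c2 = np.array([200, 600, 2500, 7000, 12000, 12920, 9500, 6000, 3500, 1800, 900], float)
total = routed_c1 + hydrograph_c2
print("Peak of combined design hydrograph (cumec):", round(total.max()))
```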
Procedia PDF Downloads 327
365 A Biophysical Study of the Dynamic Properties of Glucagon Granules in α Cells by Imaging-Derived Mean Square Displacement and Single Particle Tracking Approaches
Authors: Samuele Ghignoli, Valentina de Lorenzi, Gianmarco Ferri, Stefano Luin, Francesco Cardarelli
Abstract:
Insulin and glucagon are the two essential hormones for maintaining proper blood glucose homeostasis, which is disrupted in Diabetes. A constantly growing research interest has been focused on the study of the subcellular structures involved in hormone secretion, namely insulin- and glucagon-containing granules, and on the mechanisms regulating their behaviour. Yet, while several successful attempts were reported describing the dynamic properties of insulin granules, little is known about their counterparts in α cells, the glucagon-containing granules. To fill this gap, we used αTC1 clone 9 cells as a model of α cells and ZIGIR as a fluorescent Zinc chelator for granule labelling. We started by using spatiotemporal fluorescence correlation spectroscopy in the form of imaging-derived mean square displacement (iMSD) analysis. This afforded quantitative information on the average dynamical and structural properties of glucagon granules having insulin granules as a benchmark. Interestingly, the iMSD sensitivity to average granule size allowed us to confirm that glucagon granules are smaller than insulin ones (~1.4 folds, further validated by STORM imaging). To investigate possible heterogeneities in granule dynamic properties, we moved from correlation spectroscopy to single particle tracking (SPT). We developed a MATLAB script to localize and track single granules with high spatial resolution. This enabled us to classify the glucagon granules, based on their dynamic properties, as ‘blocked’ (i.e., trajectories corresponding to immobile granules), ‘confined/diffusive’ (i.e., trajectories corresponding to slowly moving granules in a defined region of the cell), or ‘drifted’ (i.e., trajectories corresponding to fast-moving granules). In cell-culturing control conditions, results show this average distribution: 32.9 ± 9.3% blocked, 59.6 ± 9.3% conf/diff, and 7.4 ± 3.2% drifted. This benchmarking provided us with a foundation for investigating selected experimental conditions of interest, such as the glucagon-granule relationship with the cytoskeleton. For instance, if Nocodazole (10 μM) is used for microtubule depolymerization, the percentage of drifted motion collapses to 3.5 ± 1.7% while immobile granules increase to 56.0 ± 10.7% (remaining 40.4 ± 10.2% of conf/diff). This result confirms the clear link between glucagon-granule motion and cytoskeleton structures, a first step towards understanding the intracellular behaviour of this subcellular compartment. The information collected might now serve to support future investigations on glucagon granules in physiology and disease. Acknowledgment: This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 866127, project CAPTUR3D).Keywords: glucagon granules, single particle tracking, correlation spectroscopy, ZIGIR
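The following sketch illustrates the core of a single-particle-tracking analysis of the kind described above: computing a time-averaged mean square displacement (MSD) for one trajectory, fitting an anomalous-diffusion power law, and classifying the motion from the fitted exponent. The simulated trajectory and the classification thresholds are assumptions for illustration, not the criteria calibrated in the study.

```python
# Minimal sketch: time-averaged MSD of a 2D granule trajectory and motion classification.
import numpy as np
from scipy.optimize import curve_fit

def time_averaged_msd(xy, max_lag):
    """Time-averaged MSD of a 2D trajectory (n_frames x 2 array) for lags 1..max_lag."""
    return np.array([np.mean(np.sum((xy[lag:] - xy[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

def power_law(t, d, alpha):
    """Anomalous diffusion model MSD(t) = 4*D*t**alpha."""
    return 4.0 * d * t ** alpha

def classify(alpha, low=0.2, high=1.2):
    """Illustrative thresholds for blocked / confined-diffusive / drifted motion."""
    if alpha < low:
        return "blocked"
    return "confined/diffusive" if alpha <= high else "drifted"

rng = np.random.default_rng(1)
dt = 0.05                                         # frame interval in seconds (assumed)
steps = rng.normal(scale=0.03, size=(400, 2))     # random-walk steps in micrometres
drift = np.array([0.01, 0.0])                     # constant drift, e.g. active transport
trajectory = np.cumsum(steps + drift, axis=0)

lags = np.arange(1, 51) * dt
msd = time_averaged_msd(trajectory, 50)
(d_fit, alpha_fit), _ = curve_fit(power_law, lags, msd, p0=(0.01, 1.0))
print(f"D = {d_fit:.4f} um^2/s^alpha, alpha = {alpha_fit:.2f} ->", classify(alpha_fit))
```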
Procedia PDF Downloads 109
364 Application of Multidimensional Model of Evaluating Organisational Performance in Moroccan Sport Clubs
Authors: Zineb Jibraili, Said Ouhadi, Jorge Arana
Abstract:
Introduction: Organizational performance is recognized by some theorists as a one-dimensional concept and by others as multidimensional. This concept, which is already difficult to apply in traditional companies, is even harder to identify, measure, and manage where voluntary organizations are concerned, essentially because of the complexity of that form of organization: sport clubs are characterized by multiple goals and multiple constituencies. Indeed, the new culture of professionalization and modernization around organizational performance brings new pressures from the state, sponsors, members, and other stakeholders, which have required these sport organizations to become more performance-oriented, or to build their capacity in order to better manage their organizational performance. The evaluation of performance can be made by evaluating the input (e.g. available resources), throughput (e.g. processing of the input), and output (e.g. goals achieved) of the organization. In non-profit organizations (NPOs), questions of performance have become increasingly important in the world of practice. To our knowledge, most studies have used the same methods to evaluate performance in NPSOs, but no recent study has proposed a club-specific model. Based on a review of the studies that specifically addressed the organizational performance (and effectiveness) of NPSOs at the operational level, the present paper aims to provide a multidimensional framework in order to understand, analyse, and measure the organizational performance of sport clubs. This paper combines the dimensions found in the literature and chooses those best suited to the model that we develop for the case of Moroccan sport clubs. Method: We apply our unified model of evaluating organizational performance, which takes into account the limitations found in the literature, to a sample of Moroccan sport clubs (football, basketball, handball, and volleyball) using a qualitative study. The sample of our study comprises data from sport clubs participating in the first division of the professional football league over the period from 2011 to 2016. Each football club had to meet specific criteria in order to be included in the sample: 1. Each club must have full financial data published in their annual financial statements, audited by an independent chartered accountant. 2. Each club must have sufficient data regarding its sporting and financial performance. 3. Each club must have participated at least once in the 1st division of the professional football league. Result: The study showed that the dimensions that constitute the model exist in the field, with some small modifications. The correlations between the different dimensions are positive. Discussion: The aim of this study is to test, for the Moroccan case, the unified model that emerged from earlier and narrower approaches. Using the input-throughput-output model as a sketch of efficiency, it was possible to identify and define five dimensions of organizational effectiveness applied to this field of study. Keywords: organisational performance, multidimensional model, evaluation of organizational performance, sport clubs
Procedia PDF Downloads 323363 Nurture Early for Optimal Nutrition: A Community-Based Randomized Controlled Trial to Improve Infant Feeding and Care Practices Using Participatory Learning and Actions Approach
Authors: Priyanka Patil, Logan Manikam
Abstract:
Background: The first 1000 days of life are a critical window, and inadequate nutrition during this period can result in adverse health consequences. South-Asian (SA) communities face significant health disparities, particularly in maternal and child health. Community-based interventions, often employing Participatory Learning and Action (PLA) approaches, have effectively addressed health inequalities in lower-income nations. The aim of this study was to assess the feasibility of implementing a PLA intervention to improve infant feeding and care practices in SA communities living in London. Methods: Comprehensive analyses were conducted to assess the feasibility/fidelity of this pilot randomized controlled trial. Summary statistics were computed to compare key metrics, including participant consent rates, attendance, retention, intervention support, and perceived effectiveness, against predefined progression rules guiding toward a definitive trial. Secondary outcomes were analyzed, drawing insights from multiple sources, such as the Children's Eating Behaviour Questionnaire (CEBQ), the Parental Feeding Style Questionnaire (PFSQ), food diaries, and the Equality Impact Assessment (EIA) tool. A video analysis of children's mealtime behavior trends was conducted. Feedback interviews were collected from study participants. Results: Process-outcome measures met the predefined progression rules for a definitive trial, indicating that the intervention is feasible and acceptable. The secondary outcomes analysis revealed no significant changes in children's BMI z-scores. This could be attributed to the abbreviated follow-up period of 6 months, reduced from 12 months due to COVID-19-related delays. CEBQ analysis showed increased food responsiveness, along with decreased emotional over/undereating. A similar trend was observed in the PFSQ. The EIA tool found no potential areas of discrimination, and video analysis revealed a decrease in force-feeding practices. Participant feedback revealed improved awareness and knowledge sharing. Conclusion: This study demonstrates that a co-adapted PLA intervention is feasible and well-received in optimizing infant-care practices among South-Asian community members in a high-income country. These findings highlight the potential of community-based interventions to enhance health outcomes, promoting health equity.Keywords: child health, childhood obesity, community-based, infant nutrition
Procedia PDF Downloads 56362 Machine learning Assisted Selective Emitter design for Solar Thermophotovoltaic System
Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko
Abstract:
Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic
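Since the abstract does not give implementation details, the following Python sketch illustrates the general workflow it describes: a random forest surrogate trained on (hypothetical) layer-thickness/figure-of-merit data for the SiC/W/SiO2/W stack, followed by a minimal genetic algorithm that searches the thickness space using the surrogate. The toy objective function, parameter ranges, and GA settings are assumptions for illustration, not values from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Hypothetical training data: thicknesses (nm) of the SiC/W/SiO2/W layers versus a
# spectral-selectivity figure of merit. In practice these values would come from
# electromagnetic simulations of the emitter, not from this toy function.
def toy_selectivity(thicknesses):
    target = np.array([60.0, 10.0, 90.0, 200.0])          # made-up "ideal" stack
    return -np.sum((thicknesses - target) ** 2, axis=1)

low = np.array([20.0, 5.0, 40.0, 100.0])
high = np.array([120.0, 30.0, 160.0, 300.0])
X = rng.uniform(low, high, size=(500, 4))
y = toy_selectivity(X)

# Random forest surrogate mapping layer thicknesses to the figure of merit
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def genetic_search(pop_size=60, generations=40, mutation=0.1):
    """Minimal genetic algorithm that maximizes the surrogate's prediction."""
    pop = rng.uniform(low, high, size=(pop_size, 4))
    for _ in range(generations):
        fitness = surrogate.predict(pop)
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]                     # selection
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = children + rng.normal(0, mutation * (high - low), children.shape)  # mutation
        pop = np.clip(np.vstack([parents, children]), low, high)
    return pop[np.argmax(surrogate.predict(pop))]

print("best layer thicknesses (nm):", genetic_search())
```

In a real setting the toy objective would be replaced by simulated or measured emissivity spectra, and the surrogate would be retrained as new candidate stacks are evaluated.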
Procedia PDF Downloads 61361 Blended Learning Instructional Approach to Teach Pharmaceutical Calculations
Authors: Sini George
Abstract:
Active learning pedagogies are valued for their success in increasing 21st-century learners’ engagement, developing transferable skills like critical thinking or quantitative reasoning, and creating deeper and more lasting educational gains. 'Blended learning' is an active learning pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter. This project aimed to develop a blended learning instructional approach to teaching concepts around pharmaceutical calculations to year 1 pharmacy students. The wrong dose, strength or frequency of a medication accounts for almost a third of medication errors in the NHS; therefore, progression to year 2 requires a 70% pass in this calculation test, in addition to the standard progression requirements. Many students were struggling to achieve this requirement in the past. It was also challenging to teach these concepts to students of a large class (> 130) with mixed mathematical abilities, especially within a traditional didactic lecture format. Therefore, short screencasts with voice-over by the lecturer were provided in advance of a total of four teaching sessions (two hours/session), incorporating the core content of each session and talking through how the calculations were approached in order to model metacognition. Links to the screencasts were posted on the learning management system. Viewership counts were used to determine that the students were indeed accessing and watching the screencasts on schedule. In the classroom, students had to apply the knowledge learned beforehand to a series of increasingly difficult questions. Students were then asked to create a question in group settings (two students/group) and to discuss the questions created by their peers in their groups to promote deep conceptual learning. Students were also given time for a question-and-answer period to seek clarification on the concepts covered. Student responses to this instructional approach and their test grades were collected. After collecting and organizing the data, statistical analysis was carried out to calculate binomial statistics for the two data sets: the test grades for students who received blended learning instruction and the test grades for students who received instruction in a standard lecture format in class, to compare the effectiveness of each type of instruction. Student responses and their performance data on the assessment indicate that the learning of content in the blended learning instructional approach led to higher levels of student engagement, satisfaction, and more substantial learning gains. The blended learning approach enabled each student to learn how to do calculations at their own pace, freeing class time for interactive application of this knowledge. Although time-consuming for an instructor to implement, the findings of this research demonstrate that the blended learning instructional approach improves student academic outcomes and represents a valuable method to incorporate active learning methodologies while still maintaining broad content coverage.
Satisfaction with this approach was high, and we are currently developing more pharmacy content for delivery in this format.Keywords: active learning, blended learning, deep conceptual learning, instructional approach, metacognition, pharmaceutical calculations
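As a small illustration of the kind of comparison described above, the sketch below runs a two-proportion z-test on pass rates (≥70% on the calculations test) for a blended-learning cohort versus a lecture-format cohort. The counts are hypothetical placeholders, not the study's data, and the original analysis may have used a different binomial procedure.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical pass counts on the 70% calculations test -- placeholders only
passed = [112, 96]        # blended-learning cohort, traditional-lecture cohort
enrolled = [130, 130]

z_stat, p_value = proportions_ztest(passed, enrolled)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```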
Procedia PDF Downloads 172360 Optimization of Cobalt Oxide Conversion to Co-Based Metal-Organic Frameworks
Authors: Aleksander Ejsmont, Stefan Wuttke, Joanna Goscianska
Abstract:
Gaining control over particle shape, size and crystallinity is an ongoing challenge for many materials. Metal-organic frameworks (MOFs) in particular have recently been widely studied. Besides their remarkable porosity and interesting topologies, morphology has proven to be a significant feature. It can affect the material's further application. Thus, seeking new approaches that enable MOF morphology modulation is important. MOFs are reticular structures whose building blocks are made up of organic linkers and metallic nodes. The most common strategy for providing the metal source is using salts, which usually exhibit high solubility and hinder morphology control. However, there has been growing interest in using metal oxides as structure-directing agents towards MOFs due to their very low solubility and shape preservation. Metal oxides can be treated as a metal reservoir during MOF synthesis. Up to now, reports on obtaining MOFs from metal oxides have mostly presented the conversion of ZnO to ZIF-8. However, there are other oxides, for instance Co₃O₄, which are often overlooked due to their structural stability and insolubility in aqueous solutions. Cobalt-based materials are famed for their catalytic activity; therefore, the development of their efficient synthesis is worth attention. In the presented work, an optimized Co₃O₄ transition to Co-MOF via a solvothermal approach was proposed. The starting point of the research was the synthesis of Co₃O₄ flower petals and needles under hydrothermal conditions using different cobalt salts (e.g., cobalt(II) chloride and cobalt(II) nitrate), in the presence of urea and hexadecyltrimethylammonium bromide (CTAB) surfactant as a capping agent. After receiving cobalt hydroxide, the calcination process was performed at various temperatures (300–500 °C). Then cobalt oxides, as a source of cobalt cations, were subjected to reaction with trimesic acid in a solvothermal environment at a temperature of 120 °C, leading to Co-MOF fabrication. The solution maintained in the system was a mixture of water, dimethylformamide, and ethanol, with the addition of strong acids (HF and HNO₃). To establish how solvents affect metal oxide conversion, several different solvent ratios were also applied. The materials received were characterized with analytical techniques, including X-ray powder diffraction, energy dispersive spectroscopy, low-temperature nitrogen adsorption/desorption, and scanning and transmission electron microscopy. It was confirmed that the synthetic routes led to the formation of Co₃O₄ and Co-based MOF particles varied in shape and size. The diffractograms showed a crystalline phase for Co₃O₄ as well as for Co-MOF. The Co₃O₄ obtained from nitrates and with low-temperature calcination resulted in smaller particles. The study indicated that Co₃O₄ particles of different sizes influence the efficiency of conversion and the morphology of Co-MOF. The highest conversion was achieved using metal oxides with small crystallites.Keywords: Co-MOF, solvothermal synthesis, morphology control, core-shell
Procedia PDF Downloads 162359 Delving into Market-Driving Behavior: A Conceptual Roadmap to Delineating Its Key Antecedents and Outcomes
Authors: Konstantinos Kottikas, Vlasis Stathakopoulos, Ioannis G. Theodorakis, Efthymia Kottika
Abstract:
Theorists have argued that Market Orientation is comprised of two facets, namely the Market Driven and the Market Driving components. The present theoretical paper centers on the latter, which to date has been notably under-investigated. The term Market Driving (MD) pertains to influencing the structure of the market, or the behavior of market players, in a direction that enhances the competitive edge of the firm. Presently, the main objectives of the paper are the specification of key antecedents and outcomes of Market Driving behavior. Market Driving firms behave proactively, by leading their customers and changing the rules of the game rather than by responding passively to them. Leading scholars were the first to conceptually conceive the notion, followed by some qualitative studies and a limited number of quantitative publications. However, recently, academicians noted that research on the topic remains limited, expressing a strong necessity for further insights. Concerning the key antecedents, top management's Transformational Leadership (i.e., the form of leadership which influences organizational members by aligning their values, goals and aspirations to facilitate value-consistent behaviors) is one of the key drivers of MD behavior. Moreover, scholars have linked the MD concept with Entrepreneurship. Finally, the role that Employees' Creativity plays in the development of MD behavior has been theoretically exemplified by a stream of literature. With respect to the key outcomes, it has been demonstrated that MD Behavior positively triggers firm Performance, while theorists argue that it empowers the Competitive Advantage of the firm. Likewise, researchers explicate that MD Behavior produces Radical Innovation. In order to test the robustness of the proposed theoretical framework, a combination of qualitative and quantitative methods is proposed. In particular, in-depth interviews with distinguished executives and academicians, accompanied by a large-scale quantitative survey, will be employed in order to triangulate the empirical findings. Given that it triggers overall firm success, the MD concept is of high importance to managers. Managers can become aware that passively reacting to market conditions is no longer sufficient. On the contrary, behaving proactively, leading the market, and shaping its status quo are new innovative approaches that lead to a paramount competitive posture and Innovation outcomes. This study also shows that managers can foster MD Behavior through Transformational Leadership, Entrepreneurship and the recruitment of Creative Employees. To date, the majority of publications on Market Orientation are unilaterally directed towards the responsive (i.e., the Market Driven) component. The present paper further builds on scholars' exhortations and investigates the Market Driving facet, ultimately aspiring to conceptually integrate the somewhat fragmented scientific findings in a holistic framework.Keywords: entrepreneurial orientation, market driving behavior, market orientation
Procedia PDF Downloads 384358 Food Safety in Wine: Removal of Ochratoxin a in Contaminated White Wine Using Commercial Fining Agents
Authors: Antònio Inês, Davide Silva, Filipa Carvalho, Luís Filipe-Riberiro, Fernando M. Nunes, Luís Abrunhosa, Fernanda Cosme
Abstract:
The presence of mycotoxins in foodstuffs is a matter of concern for food safety. Mycotoxins are toxic secondary metabolites produced by certain molds, with ochratoxin A (OTA) being one of the most relevant. Wines can also be contaminated with these toxicants. Several authors have demonstrated the presence of mycotoxins in wine, especially ochratoxin A. Its chemical structure is a dihydro-isocoumarin connected at the 7-carboxy group to a molecule of L-β-phenylalanine via an amide bond. As these toxicants can never be completely removed from the food chain, many countries have defined maximum levels in food in order to address health concerns. OTA contamination of wines might be a risk to consumer health, thus requiring treatments to achieve acceptable standards for human consumption. The maximum acceptable level of OTA in wines is 2.0 μg/kg according to Commission Regulation No. 1881/2006. Therefore, the aim of this work was to reduce OTA to safer levels using different fining agents and to assess their impact on white wine physicochemical characteristics. To evaluate their efficiency, 11 commercial fining agents (mineral, synthetic, animal and vegetable proteins) were used to explore new approaches to OTA removal from white wine. Trials (including a control without addition of a fining agent) were performed in white wine artificially supplemented with OTA (10 µg/L). OTA analyses were performed after wine fining. Wine was centrifuged at 4000 rpm for 10 min, and 1 mL of the supernatant was collected and mixed with an equal volume of acetonitrile/methanol/acetic acid (78:20:2 v/v/v). The solid fractions obtained after fining were also centrifuged (4000 rpm, 15 min), the resulting supernatant discarded, and the pellet extracted with 1 mL of the above solution and 1 mL of H2O. OTA analysis was performed by HPLC with fluorescence detection. The most effective fining agent in removing OTA (80%) from white wine was a commercial formulation that contains gelatin, bentonite and activated carbon. Removals between 10-30% were obtained with potassium caseinate, yeast cell walls and pea protein. With bentonites, carboxymethylcellulose, polyvinylpolypyrrolidone and chitosan, no considerable OTA removal was verified. Subsequently, the effectiveness of seven commercial activated carbons was also evaluated and compared with the commercial formulation containing gelatin, bentonite and activated carbon. The different activated carbons were applied at the concentration recommended by the manufacturer in order to evaluate their efficiency in reducing OTA levels. Trials and OTA analysis were performed as explained previously. The results showed that in white wine all activated carbons except one reduced OTA by 100%. The commercial formulation that contains gelatin, bentonite and activated carbon reduced OTA concentration by only 73%. These results may provide useful information for winemakers, namely for the selection of the most appropriate oenological product for OTA removal, reducing wine toxicity and simultaneously enhancing food safety and wine quality.Keywords: wine, OTA removal, food safety, fining
Procedia PDF Downloads 538357 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang
Abstract:
Knowledge of land surface temperature (LST) is of crucial importance in energy balance studies and environmental modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at the regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric effect correction has to be performed to remove the atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum at fixed view angles with a 3 km pixel size at nadir, offering new and unique capabilities for LST and LSE measurements. However, due to its high temporal resolution, the atmospheric correction cannot be performed with radiosonde profiles or reanalysis data, since these profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined with ECMWF data at only four synoptic times (UTC times: 00:00, 06:00, 12:00, 18:00) per day for each location, several approaches are adopted in this study. It is well known that the atmospheric transmittance and upwelling radiance are related to the water vapour content (WVC). With the aid of simulated data, this relationship can be determined for each viewing zenith angle and each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are preliminarily removed with the aid of the instantaneous WVC, which is retrieved from the brightness temperatures in SEVIRI channels 5, 9 and 10, and a group of brightness temperatures for the surface-leaving radiance (Tg) is acquired. Subsequently, a group of the six parameters of the DTC model is fitted to these Tg by a Levenberg-Marquardt least squares algorithm (denoted as DTC model 1). Although the retrieval error of WVC and the approximate relationships between WVC and the atmospheric parameters introduce some uncertainties, this does not significantly affect the determination of the three parameters td, ts and β (β is the angular frequency, td is the time at which Tg reaches its maximum, and ts is the starting time of attenuation) in the DTC model. Furthermore, due to the large fluctuation in temperature and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before sunrise to two hours after sunrise are excluded. With the knowledge of td, ts and β, a new DTC model (denoted as DTC model 2) is accurately fitted again to the Tg at UTC times 05:57, 11:57, 17:57 and 23:57, which are atmospherically corrected with ECMWF data. A new group of the six parameters of the DTC model is thereby generated, and subsequently the Tg at any given time are acquired. Finally, this method is applied to SEVIRI data in channel 9 successfully. The result shows that the proposed method can be performed reasonably without additional assumptions, and the Tg derived with the improved method is much more consistent with that from radiosonde measurements.Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI
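To make the DTC fitting step more concrete, here is a minimal Python sketch of fitting a generic two-part six-parameter diurnal temperature cycle (cosine daytime branch joined to an exponential nighttime decay) to surface-leaving brightness temperatures with a Levenberg-Marquardt least-squares solver. The functional form, starting values, and synthetic observations are assumptions for illustration; the exact parameterization used in the paper may differ.

```python
import numpy as np
from scipy.optimize import least_squares

def dtc(t, T0, Ta, beta, td, ts, dT):
    """Generic two-part six-parameter DTC: cosine daytime branch before ts and an
    exponential nighttime decay after ts, joined smoothly at ts. This is a
    textbook-style form; the paper's exact parameterization may differ."""
    day = T0 + Ta * np.cos(beta * (t - td))
    k = 1.0 / (beta * np.tan(beta * (ts - td)) + 1e-9)      # decay constant from smooth join
    night = T0 + dT + (Ta * np.cos(beta * (ts - td)) - dT) * np.exp(-(t - ts) / k)
    return np.where(t < ts, day, night)

def fit_dtc(t_obs, tg_obs):
    """Least-squares (Levenberg-Marquardt) fit of the six DTC parameters."""
    x0 = [tg_obs.min(), np.ptp(tg_obs), 2 * np.pi / 24, 13.0, 17.0, 2.0]
    res = least_squares(lambda p: dtc(t_obs, *p) - tg_obs, x0, method="lm")
    return res.x

# Synthetic surface-leaving brightness temperatures (K) sampled every 15 minutes
t = np.arange(0.0, 24.0, 0.25)
tg = dtc(t, 290.0, 12.0, 2 * np.pi / 26, 13.0, 17.5, 3.0)
tg = tg + np.random.default_rng(2).normal(0.0, 0.3, t.size)
print(fit_dtc(t, tg))   # recovered T0, Ta, beta, td, ts, dT
```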
Procedia PDF Downloads 268356 Yield Loss in Maize Due to Stem Borers and Their Integrated Management
Authors: C. P. Mallapur, U. K. Hulihalli, D. N. Kambrekar
Abstract:
Maize (Zea mays L.), an important cereal crop worldwide, has diversified uses including human consumption, animal feed, and industrial uses. A major constraint to the low productivity of maize in India is undoubtedly insect pests, particularly two species of stem borers, Chilo partellus (Swinhoe) and Sesamia inferens (Walker). The stem borers cause varying levels of yield loss in different agro-climatic regions (25.7 to 80.4%), resulting in huge economic losses to farmers. Although these pests are rather difficult to manage, efforts have been made to combat the menace by using effective insecticides. In the present study, efforts were made to integrate various possible approaches for sustainable management of these borers. Two field experiments were conducted separately during 2016-17 at the Main Agricultural Research Station, University of Agricultural Sciences, Dharwad, Karnataka, India. In the first experiment, six treatments were randomized in RBD. Insect eggs at the pinhead stage (@ 40 eggs/plant) were stapled to the under surface of leaves covering 15-20% of plants in each plot after 15 days of sowing. The second experiment was planned with nine treatments replicated thrice. A border crop of NB-21 grass was planted all around the plots in the specific treatments, while the cowpea intercrop (@ 6:1 row proportion) was sown along with the main crop; later, insecticidal sprays with chlorantraniliprole and nimbecidine were applied on a need basis in the specific treatments. The results indicated that leaf injury and dead heart incidence were relatively higher in treatments T₂ and T₄, wherein no insect control measures were taken after the insect release (58.30 and 40.0% leaf injury and 33.42 and 25.74% dead heart). In addition, these treatments recorded higher stem tunneling (32.4 and 24.8%) and resulted in lower grain yield (17.49 and 26.79 q/ha) compared to 29.04, 32.68, 40.93 and 46.38 q/ha recorded in the T₁, T₃, T₅ and T₆ treatments, respectively. A maximum yield loss of 28.89 percent was noticed in T₂, followed by 19.59 percent in T₄, where no sprays were imposed. The data on the integrated management trial revealed the lowest stem borer damage (19.28% leaf injury and 1.21% dead heart) in T₅ (seed treatment with thiamethoxam 70FS @ 8 ml/kg seed + cowpea intercrop along with nimbecidine 0.03EC @ 5.0 ml/l and chlorantraniliprole 18.5SC spray @ 0.2 ml/l). The next best treatment was T₆ (seed treatment + NB-21 border with nimbecidine and chlorantraniliprole sprays), with 21.3 and 1.99 percent leaf injury and dead heart incidence, respectively. These treatments resulted in the highest grain yields (77.71 and 75.53 q/ha in T₅ and T₆, respectively) compared to the standard check T₁ (seed treatment + chlorantraniliprole spray), wherein 27.63 percent leaf injury and 3.68 percent dead heart were noticed, with a grain yield of 60.14 q/ha. The stem borers can cause yield losses of up to 25-30 percent in maize, which can be well tackled by seed treatment with thiamethoxam 70FS @ 8 ml/kg seed and sowing the crop along with cowpea as an intercrop (6:1 row proportion) or NB-21 grass as a border crop, followed by the application of nimbecidine 0.03EC @ 5.0 ml/l and chlorantraniliprole 18.5SC @ 0.2 ml/l on a need basis.Keywords: Maize stem borers, Chilo partellus, Sesamia inferens, crop loss, integrated management
Procedia PDF Downloads 179355 The Theotokos of the Messina Missal as a Byzantine Icon in Norman Sicily: A Study on Patronage and Devotion
Authors: Jesus Rodriguez Viejo
Abstract:
The aim of this paper is to study cross-cultural interactions between the West and Byzantium, in the fields of art and religion, by analyzing the decoration of one luxury manuscript. The Spanish National Library is home to one of the most extraordinary examples of illuminated manuscript production of Norman Sicily – the Messina Missal. Dating from the late twelfth century, this liturgical book was the result of the intense activity of artistic patronage of an Englishman, Richard Palmer. Appointed bishop of the Sicilian city in the second half of the century, Palmer set up a painting workshop attached to his cathedral. The illuminated manuscripts produced there combine a clear Byzantine iconographic language with a myriad of elements imported from France, such as a large number of decorated initials. The most remarkable depiction contained in the Missal is that of the Theotokos (fol. 80r). Its appearance immediately recalls portable Byzantine icons of the Mother of God in South Italy and Byzantium and implies the intervention of an artist familiar with icon painting. The richness of this image is clear proof of the prestige that Byzantine art enjoyed on the island after the Norman takeover. The production of the school of Messina under Richard Palmer could be considered a counterpart, in the field of manuscript illumination, to the court art of the Sicilian kings in Palermo and the impressive commissions for the cathedrals of Monreale and Cefalù. However, the ethnic composition of Palmer's workshop has never been analyzed, and therefore we intend to shed light on the permanent presence of Greek-speaking artists in Norman Messina. The east of the island was the last stronghold of the Greeks, and soon after the Norman conquest the previous exchanges between the cities of this territory and Byzantium restarted, mainly by way of trade. Palmer was not a Norman statesman but a churchman, and his love for religion and culture prevailed over the wars and struggles for power of the Sicilian kingdom in the central Mediterranean. On the other hand, the representation of the Theotokos can prove that Eastern devotional approaches to images were still common in the east of the island more than a century after the collapse of Byzantine rule. Local Norman lords repeatedly founded churches devoted to Greek saints, and medieval Greek-speaking authors were widely copied in Sicilian scriptoria. The Madrid Missal and its Theotokos are doubtless the product of Western initiative, but in a land culturally dominated by Byzantium. Westerners such as Palmer and his circle could have been immersed in this Hellenophile culture and therefore naturally predisposed to perform prayers and rituals, in both public and private contexts, linked to ideas and practices of Greek origin, such as the concept of the icon.Keywords: history of art, byzantine art, manuscripts, norman sicily, messina, patronage, devotion, iconography
Procedia PDF Downloads 350354 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration
Authors: Danny Barash
Abstract:
Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first to be applied for the purpose of riboswitch detection, and they can also be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models were also implemented in the pHMM Riboswitch Scanner web application, independently of Infernal. Other computational approaches that have been developed include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving-window approach is heavily geared towards secondary structure consideration relative to the sequence, which is treated as a constraint. However, the method cannot be used genome-wide due to its high cost, because each folding prediction by energy minimization in the moving window is computationally expensive, allowing scans only in the vicinity of genes of interest. The second idea was to remedy the inefficiency of the previous approach by constructing a pipeline that consists of inverse RNA folding, which considers RNA secondary structure, followed by a BLAST search that is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and our own in-house fragment-based inverse RNA folding program RNAfbinv in particular, shows the capability to find attractive candidates that are missed by Infernal and other standard methods used for riboswitch detection. We demonstrate attractive candidates found by both the moving-window approach and the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise for detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods
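For readers unfamiliar with the sequence-based pattern-matching approach mentioned above, the sketch below shows a bare-bones scan of a FASTA file against a consensus-like motif expressed as a regular expression. The motif, spacer lengths, and file name are made-up placeholders; production tools rely on curated descriptors matched with affix arrays, profile HMMs, or covariance models rather than a regex.

```python
import re

# A hypothetical consensus-like pattern for illustration only -- real riboswitch
# descriptors (e.g., for the TPP or purine classes) are far more elaborate.
PATTERN = re.compile(r"CUGAGA.{20,60}?GGAC.{10,40}?UUC", re.IGNORECASE)

def read_fasta(path):
    """Minimal FASTA reader yielding (header, RNA sequence) pairs."""
    header, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header:
                    yield header, "".join(seq)
                header, seq = line[1:], []
            else:
                seq.append(line.upper().replace("T", "U"))
        if header:
            yield header, "".join(seq)

def scan(path):
    """Report motif hits per sequence with their coordinates."""
    for header, seq in read_fasta(path):
        for m in PATTERN.finditer(seq):
            print(f"{header}\t{m.start()}-{m.end()}\t{m.group()[:30]}...")

# scan("genome.fasta")   # hypothetical input file
```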
Procedia PDF Downloads 234353 Linguistic Cyberbullying, a Legislative Approach
Authors: Simona Maria Ignat
Abstract:
Bullying online has been an increasingly studied topic in recent years. Different approaches, whether psychological, linguistic, or computational, have been applied. To the best of our knowledge, a definition and a set of characteristics of the phenomenon agreed upon internationally as a common framework are still lacking. Thus, the objectives of this paper are the identification of bullying utterances on Twitter and the specification of their algorithms. This research paper is focused on the identification of words or groups of words, categorized as “utterances”, with bullying effect on the Twitter platform, extracted according to a set of legislative criteria. This set is the result of analysis followed by synthesis of law documents on (online) bullying from the United States of America, the European Union, and Ireland. The outcome is a linguistic corpus with approximately 10,000 entries. The methods applied to the first objective were the following. Discourse analysis was applied to the identification of keywords with bullying effect in texts retrieved through the Google search engine, Images link. Transcription and anonymization were applied to texts grouped in CL1 (Corpus Linguistics 1). The keyword search method and the legislative criteria were used for identifying bullying utterances on Twitter. The texts with at least 30 representations on Twitter were grouped; they form the second linguistic corpus, Bullying Utterances from Twitter (CL2). The entries were identified by applying the legislative criteria on the BoW method principle. BoW is a method of extracting words or groups of words with the same meaning in any context. The methods applied for reaching the second objective were the conversion of parts of speech to alphabetical and numerical symbols and the writing of the bullying utterances as algorithms. The converted form of the parts of speech was chosen on the criterion of relevance within the bullying message. The inductive reasoning approach was applied in sampling and identifying the algorithms. The results are groups with interchangeable elements. The outcomes convey two aspects of bullying: the form and the content or meaning. The form conveys the intentional intimidation against somebody, expressed at the level of texts by grammatical and lexical marks. This outcome has applicability in forensic linguistics for establishing the intentionality of an action. Another outcome related to form is a complex of graphemic variations essential in detecting harmful texts online. This research enriches the lexicon already known on the topic. The second aspect, the content, revealed topics like threat, harassment, assault, or suicide. They are subcategories of a broader harmful content which is a constant concern for task forces and legislators at national and international levels. These topic outcomes of the dataset are a valuable source for detection. The analysis of content revealed algorithms and lexicons which could be applied to other harmful content. A third outcome of the content analysis is the contribution to Stylistics, which is a rich source for discourse analysis of social media platforms. In conclusion, this linguistic corpus is structured on legislative criteria and could be used in various fields.Keywords: corpus linguistics, cyberbullying, legislation, natural language processing, twitter
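The keyword-based identification step described above can be sketched in a few lines of Python: tweets are normalized (graphemic variations such as repeated letters and digit substitutions are collapsed) and then matched against a lexicon keyed by legislative category. The lexicon entries, categories, normalization rules, and example tweet are illustrative placeholders; the study's ~10,000-entry corpus and its algorithms are not reproduced here.

```python
import re

# Illustrative lexicon keyed by legislative criterion -- placeholder phrases only
LEXICON = {
    "threat": ["i will hurt you", "watch your back"],
    "harassment": ["nobody wants you here", "shut up loser"],
}

def normalize(text):
    """Collapse common graphemic variations (digit substitutions, repeated letters)."""
    text = text.lower()
    text = text.translate(str.maketrans("013$@", "olesa"))
    return re.sub(r"(.)\1{2,}", r"\1", text)

def label_tweet(tweet):
    """Return the legislative categories whose phrases occur in the normalized tweet."""
    cleaned = normalize(tweet)
    return [cat for cat, phrases in LEXICON.items() if any(p in cleaned for p in phrases)]

print(label_tweet("W@tch your b@ck!!! nobody wants you here"))   # ['threat', 'harassment']
```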
Procedia PDF Downloads 86352 A Top-down vs a Bottom-up Approach on Lower Extremity Motor Recovery and Balance Following Acute Stroke: A Randomized Clinical Trial
Authors: Vijaya Kumar, Vidayasagar Pagilla, Abraham Joshua, Rakshith Kedambadi, Prasanna Mithra
Abstract:
Background: Post-stroke rehabilitation aims to accelerate optimal sensorimotor recovery and functional gain and to reduce long-term dependency. Intensive physical therapy interventions can enhance this recovery, as experience-dependent neural plastic changes act either directly at cortical neural networks or at the distal peripheral level (muscular components). Neuromuscular Electrical Stimulation (NMES), a traditional bottom-up approach, and mirror therapy (MT), a relatively new top-down approach, have been found to be effective adjuvant treatment methods for lower extremity motor and functional recovery in stroke rehabilitation. However, there is a scarcity of evidence comparing their therapeutic gain in stroke recovery. Aim: To compare the efficacy of neuromuscular electrical stimulation (NMES) and mirror therapy (MT) in the very early phase of post-stroke rehabilitation, addressed to lower extremity motor recovery and balance. Design: Observer-blinded randomized clinical trial. Setting: Neurorehabilitation Unit, Department of Physical Therapy, tertiary care hospitals. Subjects: 32 acute stroke subjects with a first episode of unilateral stroke with hemiparesis, referred for rehabilitation (onset < 3 weeks), Brunnstrom lower extremity recovery stage ≥3 and MMSE score more than 24 were randomized into two groups [Group A-NMES and Group B-MT]. Interventions: Both groups received an eclectic approach to remediate lower extremity recovery, which included treatment components of the Rood, Bobath and motor learning approaches, for 30 minutes a day for 6 days. Following this, Group A (N=16) received 30 minutes of surface NMES training for six major paretic muscle groups (gluteus maximus and medius, quadriceps, hamstrings, tibialis anterior and gastrocnemius). Group B (N=16) was administered 30 minutes of mirror therapy sessions to facilitate lower extremity motor recovery. Outcome measures: Lower extremity motor recovery, balance and activities of daily living (ADLs) were measured by the Fugl-Meyer Assessment (FMA-LE), the Berg Balance Scale (BBS) and the Barthel Index (BI) before and after the intervention. Results: Pre-post analysis across time revealed statistically significant improvement (p < 0.001) for all outcome variables in both groups. The NMES group had greater change scores than the MT group for all parameters, as follows: FMA-LE (25.12±3.01 vs. 23.31±2.38), BBS (35.12±4.61 vs. 34.68±5.42) and BI (40.00±10.32 vs. 37.18±7.73). Between-group comparison of pre-post values showed no significance for FMA-LE (p=0.09), BBS (p=0.80) and BI (p=0.39), respectively. Conclusion: Although both groups had significant improvement (pre to post intervention), neither was superior to the other in lower extremity motor recovery and balance among acute stroke subjects. We conclude that the eclectic approach is an effective treatment, irrespective of whether NMES or MT is added as an adjunct.Keywords: balance, motor recovery, mirror therapy, neuromuscular electrical stimulation, stroke
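As an illustration of the pre-post and between-group comparisons reported above, the Python sketch below runs paired t-tests within each group and an independent t-test on the change scores. The FMA-LE values are synthetic placeholders generated for the example; the trial's raw data and its exact statistical procedures are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic FMA-LE scores (16 subjects per group); placeholders, not trial data
nmes_pre = rng.normal(8, 2, 16)
nmes_post = nmes_pre + rng.normal(25, 3, 16)   # assumed within-subject improvement
mt_pre = rng.normal(8, 2, 16)
mt_post = mt_pre + rng.normal(23, 3, 16)

# Within-group pre-post comparison (paired t-test)
print("NMES pre vs post:", stats.ttest_rel(nmes_post, nmes_pre))
print("MT pre vs post:  ", stats.ttest_rel(mt_post, mt_pre))

# Between-group comparison of change scores (independent t-test)
print("NMES vs MT change:", stats.ttest_ind(nmes_post - nmes_pre, mt_post - mt_pre))
```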
Procedia PDF Downloads 281