Search results for: rapid miner tool
467 Neuropsychiatric Outcomes of Intensive Music Therapy in Stroke Rehabilitation: A Preliminary Investigation
Authors: Honey Bryant, Elvina Chu
Abstract:
Stroke is the leading cause of disability in adults in Canada and is directly related to depression, anxiety, and sleep disorders, with an estimated annual health care cost of $50 billion. Strokes impact not only the individual but society as a whole. Current stroke rehabilitation does not include music therapy, although it has shown success in clinical research on stroke rehabilitation. This study examines the use of neurologic music therapy (NMT) in conjunction with stroke rehabilitation to improve sleep quality, reduce stress levels, and promote neurogenesis. Existing research on NMT in stroke is limited, which means any conclusive information gathered during this study will be significant. Our novel hypotheses are that a) stroke patients will become less depressed and less anxious, with improved sleep, following NMT; b) NMT will reduce stress levels and promote neurogenesis in stroke patients admitted for rehabilitation; and c) the beneficial effects of NMT will be sustained at least short-term following treatment. Participants were recruited from the in-patient stroke rehabilitation program at Providence Care Hospital in Kingston, Ontario, Canada. All participants maintained their usual stroke rehabilitation treatment. The study was split into two groups: the first received Passive Music Listening (PML) and the second Neurologic Music Therapy (NMT). Each group underwent 10 sessions of intensive music therapy lasting 45 minutes on 10 consecutive days, excluding weekends. Psychiatric assessments, the Epworth Sleepiness Scale (ESS), the Hospital Anxiety and Depression Scale (HADS), and the Music Engagement Questionnaire (MusEQ) were completed, followed by a general feedback interview. Physiological markers of stress were measured through blood pressure and heart rate variability. Serum collections assessed neurogenesis via brain-derived neurotrophic factor (BDNF) and stress via cortisol levels. As this study is still ongoing, a formal analysis of the data has not been fully completed, although trends are following our hypotheses. A decrease in sleepiness and anxiety is seen in the first cohort of PML. Feedback interviews have indicated that most participants subjectively felt more relaxed and thought PML was useful in their recovery. If the hypotheses are supported, larger external funding will allow for greater investigation of the use of NMT in stroke rehabilitation. NMT is not covered under the Ontario Health Insurance Plan (OHIP), so there is limited scientific data surrounding its use as a clinical tool. This research will provide detailed findings on the treatment of neuropsychiatric aspects of stroke. Concurrently, a passive music listening study is being designed to further review the use of PML in rehabilitation.
Keywords: music therapy, psychotherapy, neurologic music therapy, passive music listening, neuropsychiatry, counselling, behavioural, stroke, stroke rehabilitation, rehabilitation, neuroscience
466 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop
Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen
Abstract:
Lenia is a system of cellular automata with continuous states, space, and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL interoperability. We demonstrate how CUDA, as a low-level GPU programming paradigm, allows optimizing the performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata, which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time, and states, thus providing additional fluidity and richness in the emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids, and over longer periods without tedious waiting times, enabling the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when additional time-consuming algorithms, such as computer vision or machine learning, are included to evolve and optimize specific Lenia configurations. We developed a Lenia implementation for the GPU using the C++ and CUDA programming languages, with CUDA/OpenGL interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation against the existing ones in terms of speed, memory usage, configurability, and scalability. In our comparison we focus on the most important Lenia implementations, selected for their prominence, accessibility, and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust, and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single versus double precision floating point arithmetic.
The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the ALife community for further development.
Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis
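For readers who want to experiment before the CUDA C++ code is released, below is a minimal NumPy sketch of a single Lenia update step using the FFT-based convolution discussed above. The kernel shape and growth-function parameters (mu, sigma, dt) follow Lenia's standard form, but the specific values here are illustrative assumptions, not those of the paper's implementation.

```python
import numpy as np

def lenia_step(world, kernel_fft, dt=0.1, mu=0.15, sigma=0.015):
    """One Lenia update: convolve, apply growth mapping, clip to [0, 1].

    FFT-based convolution turns the O(N^2 K^2) spatial convolution into
    O(N^2 log N), which is the main speed lever discussed above.
    """
    # Convolution theorem: multiply in frequency space
    potential = np.real(np.fft.ifft2(np.fft.fft2(world) * kernel_fft))
    # Gaussian growth mapping (standard Lenia form; mu/sigma illustrative)
    growth = 2.0 * np.exp(-((potential - mu) ** 2) / (2 * sigma ** 2)) - 1.0
    return np.clip(world + dt * growth, 0.0, 1.0)

def make_kernel(size, radius=13):
    """Smooth ring kernel, normalized to sum 1, pre-transformed with FFT."""
    y, x = np.ogrid[-size // 2:size // 2, -size // 2:size // 2]
    r = np.sqrt(x * x + y * y) / radius
    with np.errstate(divide="ignore", over="ignore", invalid="ignore"):
        ring = np.exp(4.0 - 1.0 / (r * (1.0 - r)))  # bump on 0 < r < 1
    ring = np.where((r > 0) & (r < 1), ring, 0.0)
    kernel = np.fft.ifftshift(ring / ring.sum())    # origin to (0, 0)
    return np.fft.fft2(kernel)

world = np.random.rand(256, 256)
k_fft = make_kernel(256)
for _ in range(100):
    world = lenia_step(world, k_fft)
```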
465 Landslide Susceptibility Analysis in the St. Lawrence Lowlands Using High Resolution Data and Failure Plane Analysis
Authors: Kevin Potoczny, Katsuichiro Goda
Abstract:
The St. Lawrence lowlands extend from Ottawa to Quebec City and are known for large deposits of sensitive Leda clay. Leda clay deposits are responsible for many large landslides, such as the 1993 Lemieux and 2010 St. Jude (4 fatalities) landslides. Due to the large extent and sensitivity of Leda clay, regional hazard analysis for landslides is an important tool in risk management. A 2018 regional study by Farzam et al. on the susceptibility of Leda clay slopes to landslide hazard used 1 arc second topographical data. A qualitative method known as Hazus is used to estimate susceptibility by checking various criteria at a location and determining a susceptibility rating on a scale of 0 (no susceptibility) to 10 (very high susceptibility). These criteria are slope angle, geological group, soil wetness, and distance from waterbodies. Given the flat nature of the St. Lawrence lowlands, the current assessment fails to capture local slopes, such as the St. Jude site. Additionally, the data did not allow failure planes to be analyzed accurately. This study substantially improves the analysis performed by Farzam et al. in two respects. First, regional assessment with high-resolution data allows the identification of local slopes that may previously have been classified as low susceptibility. This then provides the opportunity to conduct a more refined analysis of the failure plane of the slope. Slopes derived from 1 arc second data are relatively gentle (0-10 degrees) across the region; however, the 1- and 2-meter resolution 2022 HRDEM provided by NRCAN shows that short, steep slopes are present. At a regional level, 1 arc second data can underestimate the susceptibility of short, steep slopes, which can be dangerous as Leda clay landslides behave retrogressively and travel upwards into flatter terrain. At the location of the St. Jude landslide, slope differences are significant: 1 arc second data shows a maximum slope of 12.80 degrees and a mean slope of 4.72 degrees, while the HRDEM data shows a maximum slope of 56.67 degrees and a mean slope of 10.72 degrees. This equates to a difference of three susceptibility levels when the soil is dry and one susceptibility level when wet. GIS software is used to create a regional susceptibility map across the St. Lawrence lowlands at 1- and 2-meter resolutions. Failure planes are necessary to differentiate between small and large landslides, which have so far been ignored in regional analysis. Leda clay failures can only retrogress as far as their failure planes, so the regional analysis must be able to transition smoothly into a more robust local analysis. It is expected that slopes within the region, previously assessed at low susceptibility scores, contain local areas of high susceptibility. The goal is to create opportunities for local failure plane analysis to be undertaken, which has not been possible before. Due to the low resolution of previous regional analyses, any slope near a waterbody could be considered hazardous; high-resolution regional analysis allows for more precise determination of hazard sites.
Keywords: Hazus, high-resolution DEM, Leda clay, regional analysis, susceptibility
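To make the resolution effect concrete, here is a small Python sketch that computes slope in degrees from a DEM array using central differences, a simplified stand-in for the GIS slope tools used in the study. The synthetic terrain and the two cell sizes are illustrative assumptions only.

```python
import numpy as np

def slope_degrees(dem, cell_size):
    """Central-difference slope (degrees) from a gridded DEM.

    Finer cell sizes (1-2 m HRDEM vs ~30 m for 1 arc second data)
    preserve the short, steep banks that coarse regional data smooths away.
    """
    dzdy, dzdx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# Illustrative comparison: the same synthetic terrain sampled at two resolutions
rng = np.random.default_rng(0)
fine = np.cumsum(rng.normal(0, 0.5, (200, 200)), axis=1)  # 1 m grid
coarse = fine[::30, ::30]                                  # ~30 m grid
print("fine   max/mean slope:",
      slope_degrees(fine, 1.0).max(), slope_degrees(fine, 1.0).mean())
print("coarse max/mean slope:",
      slope_degrees(coarse, 30.0).max(), slope_degrees(coarse, 30.0).mean())
```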
464 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. To achieve this, a new type of continuous performance test, the Seek-X type, is introduced. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best results: 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors, and we found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze. We have shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that is not subject to inter-rater reliability or human observation and is not reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
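As an illustration of the classification pipeline described above, the following sketch trains a random forest under leave-one-out cross-validation with scikit-learn. The feature matrix and labels are synthetic stand-ins; the study's actual nine features and CPT-derived labels are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import classification_report

# Hypothetical feature matrix: one row per trial window, columns standing in
# for the nine features (eye gaze, EEG, body pose, interaction data, plus
# handpicked compound features); labels from CPT outcomes (1 = engaged).
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 9))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 120) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())
print(classification_report(y, y_pred, target_names=["disengaged", "engaged"]))

# Feature importances indicate which sensor mode drives the prediction
clf.fit(X, y)
print("importances:", np.round(clf.feature_importances_, 3))
```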
463 The Effects of Alpha-Lipoic Acid Supplementation on Post-Stroke Patients: A Systematic Review and Meta-Analysis of Randomized Controlled Trials
Authors: Hamid Abbasi, Neda Jourabchi, Ranasadat Abedi, Kiarash Tajernarenj, Mehdi Farhoudi, Sarvin Sanaie
Abstract:
Background: Alpha-lipoic acid (ALA), a fat- and water-soluble, sulfur-containing coenzyme, has received considerable attention for its potential therapeutic role in diabetes, cardiovascular diseases, cancers, and central nervous system disease. This investigation aims to evaluate the probable protective effects of ALA in stroke patients. Methods: This meta-analysis was performed based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The PICO criteria for this meta-analysis were as follows: Population/Patients (P: stroke patients); Intervention (I: ALA); Comparison (C: control); Outcome (O: blood glucose, lipid profile, oxidative stress, inflammatory factors). In vitro, in vivo, and ex vivo studies, case reports, and quasi-experimental studies were excluded from the analysis. The Scopus, PubMed, Web of Science, and EMBASE databases were searched until August 2023. Results: Of the 496 records screened at the title/abstract stage, 9 studies were included in this meta-analysis. The sample sizes of the included studies vary between 28 and 90. Risk of bias in the included randomized controlled trials (RCTs) was assessed using the second version of the Cochrane RoB assessment tool; 8 studies had a definitely high risk of bias. Discussion: To the best of our knowledge, the present meta-analysis is the first study addressing the effectiveness of ALA supplementation in enhancing post-stroke metabolic markers, including lipid profile, oxidative stress, and inflammatory indices. It is imperative to acknowledge certain potential limitations inherent in this study. First of all, the type of treatment (oral or intravenous infusion) could alter the bioavailability of ALA. Our study had restricted evidence regarding the impact of ALA supplementation on the included outcomes. Therefore, further research is warranted to delve into the effects of ALA specifically on inflammation and oxidative stress. Funding: The research protocol was approved and supported by the Student Research Committee, Tabriz University of Medical Sciences (grant number: 72825). Registration: This study was registered in the International Prospective Register of Systematic Reviews (PROSPERO ID: CR42023461612).
Keywords: alpha-lipoic acid, lipid profile, blood glucose, inflammatory factors, oxidative stress, meta-analysis, post-stroke
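For readers unfamiliar with how such trials are pooled, below is a minimal sketch of DerSimonian-Laird random-effects pooling, a standard approach for meta-analyses of this kind. The per-study effect sizes and variances are invented for illustration and are not the review's data.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird).

    `effects` are per-study mean differences and `variances` their
    within-study variances; returns the pooled estimate, its 95% CI,
    and the between-study variance tau^2.
    """
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                           # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fe) ** 2)     # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_re = 1.0 / (variances + tau2)
    theta = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return theta, (theta - 1.96 * se, theta + 1.96 * se), tau2

# Hypothetical mean differences in a metabolic marker across 5 trials
est, ci, tau2 = dersimonian_laird([-0.30, -0.10, -0.45, 0.05, -0.25],
                                  [0.04, 0.06, 0.09, 0.05, 0.07])
print(f"pooled MD = {est:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), tau^2 = {tau2:.3f}")
```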
462 Salmonella Emerging Serotypes in Northwestern Italy: Genetic Characterization by Pulsed-Field Gel Electrophoresis
Authors: Clara Tramuta, Floris Irene, Daniela Manila Bianchi, Monica Pitti, Giulia Federica Cazzaniga, Lucia Decastelli
Abstract:
This work presents the results obtained by the Regional Reference Centre for Salmonella Typing (CeRTiS) in a retrospective study aimed at investigating, through pulsed-field gel electrophoresis (PFGE) analysis, the genetic relatedness of emerging Salmonella serotypes of human origin circulating in the North-West of Italy. A further goal of this work was to create a regional database to facilitate foodborne outbreak investigation and to detect outbreaks at an earlier stage. A total of 112 strains, isolated from 2016 to 2018 in hospital laboratories, were included in this study. The isolates were previously identified as Salmonella according to standard microbiological techniques, and serotyping was performed according to ISO 6579-3 and the Kaufmann-White scheme using O and H antisera (Statens Serum Institut®). All strains were characterized by PFGE; the analysis was conducted according to a standardized PulseNet protocol. The restriction enzyme XbaI was used to generate several distinguishable genomic fragments on the agarose gel. PFGE was performed on a CHEF Mapper system, separating large fragments and generating comparable genetic patterns. The agarose gel was then stained with GelRed® and photographed under ultraviolet transillumination. The PFGE patterns obtained from the 112 strains were compared using Bionumerics version 7.6 software with the Dice coefficient, with 2% band tolerance and 2% optimization. For each serotype, the data obtained with PFGE were compared according to geographical origin and year of isolation. Salmonella strains were identified as follows: S. Derby, n. 34; S. Infantis, n. 38; S. Napoli, n. 40. All the isolates had appreciable restriction digestion patterns ranging from approximately 40 to 1100 kb. In general, a fairly heterogeneous distribution of pulsotypes emerged in the different provinces. Cluster analysis indicated high genetic similarity (≥ 83%) among strains of S. Derby (n. 30; 88%), S. Infantis (n. 36; 95%) and S. Napoli (n. 38; 95%) circulating in north-western Italy. The study underlines the genomic similarities shared by the emerging Salmonella strains in Northwest Italy and allowed us to create a database to detect outbreaks at an early stage. The results therefore confirmed that PFGE is a powerful and discriminatory tool for investigating the genetic relationships among strains in order to monitor and control the spread of salmonellosis outbreaks. PFGE still represents one of the most suitable approaches to characterize strains, in particular for laboratories for which NGS techniques are not available.
Keywords: emerging Salmonella serotypes, genetic characterization, human strains, PFGE
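The Dice-coefficient comparison performed in Bionumerics can be illustrated with a small sketch: binary band-presence vectors are compared pairwise and clustered with UPGMA (average linkage). The band matrix below is hypothetical, and thresholding the tree at 17% distance is one way to mirror the ≥ 83% similarity criterion used above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def dice(a, b):
    """Dice coefficient between two binary band-presence vectors."""
    shared = np.sum(a & b)
    return 2.0 * shared / (a.sum() + b.sum())

# Hypothetical band-presence matrix: rows = strains, columns = band
# positions called from the XbaI-PFGE gels (1 = band present)
rng = np.random.default_rng(1)
bands = (rng.random((10, 25)) > 0.5).astype(int)

n = len(bands)
# Condensed distance vector (1 - Dice), as expected by scipy's linkage
dist = np.array([1.0 - dice(bands[i], bands[j])
                 for i in range(n) for j in range(i + 1, n)])
tree = linkage(dist, method="average")  # UPGMA, a usual choice for PFGE
clusters = fcluster(tree, t=0.17, criterion="distance")  # ~83% similarity
print("cluster assignment per strain:", clusters)
```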
461 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning
Authors: Hossein Havaeji, Tony Wong, Thien-My Dao
Abstract:
1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) uses BT to drive SCS transparency, security, durability, and process integrity, as SCS data is not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. These costs must be estimated, as they can impact existing cost control strategies. To account for system and deployment costs, a key hurdle must be overcome: the costs of developing and running a BT in SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to the BT installation cost, which has a direct impact on the total costs of SCS. Predicting BT installation cost in SCS may help managers decide whether BT will be an economic advantage. The first purpose of the research is to identify the main BT installation cost components in SCS needed for deeper cost analysis; we then identify and categorize the main groups of cost components in more detail to utilize them in the prediction process. The second objective is to determine the suitable Supervised Learning technique to predict the costs of developing and running BT in SCS in a particular case study. The last aim is to investigate how the running BT cost can be incorporated into the total cost of SCS. 2. Work Performed: Applied successfully in various fields, Supervised Learning is a method in which the data is framed, treated, and used to train a model; the trained model then predicts an outcome measurement from previously unseen input data. The following steps are conducted to pursue the objectives of our subject. The first step is a literature review to identify the different cost components of BT installation in SCS. Based on the literature review, we choose Supervised Learning methods suitable for BT installation cost prediction in SCS; algorithms which provide a powerful tool to classify BT installation components and predict BT installation cost include the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). The third step is choosing a case study to feed data into the models. Finally, we will identify the model with the best predictive performance for finding the minimum BT installation costs in SCS. 3. Expected Results and Conclusion: This study proposes a cost prediction of BT installation in SCS with the help of Supervised Learning algorithms. At the first attempt, we will select a case study in the field of BT-enabled SCS, and then use Supervised Learning algorithms to predict BT installation cost in SCS. We then determine the best predictive performance for developing and running BT in SCS. Finally, the paper will be presented at the conference.
Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning
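As a sketch of the second objective, the following trains a Support Vector Regression model, one of the algorithms named above, on synthetic cost data with scikit-learn. The four cost-component features and the cost function are assumptions for illustration, not data from the planned case study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical cost-component features per deployment case: node count,
# transactions/day, integration effort, licence/consulting cost indices.
rng = np.random.default_rng(7)
X = rng.uniform(0, 1, size=(60, 4))
cost = (50 + 120 * X[:, 0] + 80 * X[:, 1] + 40 * X[:, 2] * X[:, 3]
        + rng.normal(0, 5, 60))          # synthetic installation cost (k$)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=1.0))
r2 = cross_val_score(model, X, cost, cv=5, scoring="r2")
print("cross-validated R^2:", np.round(r2, 3))

model.fit(X, cost)
print("predicted cost for a new case:", model.predict([[0.5, 0.3, 0.7, 0.2]]))
```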
460 Link People from Different Ages Together: Attitude and Behavior Changes in Inter-Generational Interaction Program
Authors: Qian Sun, Dannie Dai, Vivian Lou
Abstract:
Background: Changes in population structure and modernization have left traditional channels of achieving intergenerational solidarity in crisis. Policies and projects that purposefully structure intergenerational interaction are regarded as effective ways to promote positive attitude changes between generations. However, few intergenerational interaction programs have put equal emphasis on promoting positive changes in both attitude and behavior across generational groups. Objective: This study evaluated the effectiveness of an intergenerational interaction program which aims to facilitate positive attitudinal and behavioral interaction between young and old individuals in Hong Kong. Method: A quasi-experimental design was adopted with a sample of 150 older participants and 161 young participants. Of these, 73 older and 78 young participants belonged to the experimental groups, while 77 older and 84 young participants belonged to the control groups. The Age Group Evaluation and Description scale (AGED) was adopted to measure older participants' attitudes toward young people, and the Chinese version of Kogan's Attitudes towards Older People scale (KAOP) as well as Polizzi's refined version of the Ageing Semantic Differential Scale (ASD) were used to measure the younger generation's attitudes toward older people. The interpersonal behaviour of participants was assessed using Belgrave's behavioural observation tool. Six primary verbal or non-verbal interpersonal behaviours were identified and observed: smiles, looks, touches, encouragement, initiated conversations, and assists. Findings: The effectiveness of attitude and behavior changes in both younger and older participants was confirmed in the results. Compared with participants from the control group, older participants in the experimental group showed significant positive changes in attitudes toward the younger generation as assessed by the AGED (F=138.34, p < .001). Moreover, older participants showed significant positive changes in three of six behaviours (visual attention: t=2.26, p<0.05; initiating conversation: t=3.42, p<0.01; and touch: t=2.28, p<0.05). Younger participants from the experimental group showed significant positive changes in attitude toward older people (with F-scores of 47.22 for KAOP and 72.75 for ASD, p<.001). Young participants also showed significant positive changes in two of six behaviours (visual attention: t=3.70, p<0.01; initiating conversation: t=2.04, p<0.001). There was no significant relationship between attitude change and behaviour change in either the older (p=0.86) or younger (p=0.22) group. Conclusion: This study has practical implications for social work. The effective model of this program could assist social workers and allied professionals in designing relevant projects to nurture intergenerational solidarity. Furthermore, the insignificant results between attitude and behavior changes revealed that attitude change was not a strong predictor of behavior change; hence, intergenerational programs against age stereotypes should put equal emphasis on both attitudinal and behavioral aspects.
Keywords: attitude and behaviour changes, intergenerational interaction, intergenerational solidarity, program design
459 Evaluation Method for Fouling Risk Using Quartz Crystal Microbalance
Authors: Natsuki Kishizawa, Keiko Nakano, Hussam Organji, Amer Shaiban, Mohammad Albeirutty
Abstract:
One of the most important tasks in operating desalination plants using the reverse osmosis (RO) method is preventing RO membrane fouling caused by foulants found in seawater. Optimal design of the pre-treatment process for RO plants enables the reduction of foulants, so a quantitative evaluation of the fouling risk in pre-treated water, which is fed to the RO stage, is required for optimal design. Some measurement methods for water quality, such as the silt density index (SDI) and total organic carbon (TOC), have been conventionally applied for such evaluations. However, these methods have not been effective in some situations for evaluating the fouling risk of RO feed water. Furthermore, stable management of plants would be possible through alerts and appropriate control of the pre-treatment process if the method could be applied to inline monitoring of the fouling risk of RO feed water. The purpose of this study is to develop a method to evaluate the fouling risk of RO feed water. We applied a quartz crystal microbalance (QCM) to measure the amount of foulants found in seawater, using a sensor whose surface is coated with a polyamide thin film, the main material of an RO membrane. The increase in the weight of the sensor after a certain period during which the sample water passes over it directly indicates the fouling risk of the sample. We refer to these values as "FP: Fouling Potential". The method is characterized by its ability to measure very small amounts of substances in seawater in a short time (< 2 h) and from a small volume of sample water (< 50 mL). Using RO cell filtration units in a laboratory-scale test, we confirmed a higher correlation between FP and the pressure increase caused by RO fouling than for SDI or TOC. Then, to establish this correlation in an actual bench-scale RO membrane module, and to confirm the feasibility of the monitoring system as a control tool for the pre-treatment process, we started a long-term test at an experimental desalination site by the Red Sea in Jeddah, Kingdom of Saudi Arabia. Implementing inline equipment for the method made it possible to measure FP intermittently (4 times per day) and automatically. Moreover, over two 3-month operations, the RO operating pressure was compared among feed water samples of different qualities. A pressure increase through the RO membrane module was observed in the high-FP RO unit, in which the feed water was treated by a cartridge filter only. In contrast, no pressure increase was observed in the low-FP RO unit, in which the feed water was treated by an ultrafilter. The correlation in an actual-scale RO membrane was therefore established in two runs with two types of feed water. The results suggest that the FP method enables the evaluation of the fouling risk of RO feed water.
Keywords: fouling, monitoring, QCM, water quality
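The abstract does not state how the authors convert sensor weight gain to FP, but the standard Sauerbrey relation for a QCM links a measured frequency shift to areal mass uptake. Below is a sketch under that assumption; the crystal frequency and the example shift are illustrative values, not measurements from the study.

```python
def sauerbrey_mass(delta_f_hz, f0_hz=5e6):
    """Areal mass uptake (ng/cm^2) from a QCM frequency shift.

    Sauerbrey relation: delta_m = -delta_f / C_f, with the sensitivity
    constant C_f = 2 * f0^2 / sqrt(rho_q * mu_q) for an AT-cut quartz
    crystal. The 5 MHz fundamental frequency is an assumed example.
    """
    rho_q = 2.648        # quartz density, g/cm^3
    mu_q = 2.947e11      # quartz shear modulus, g/(cm*s^2)
    c_f = 2.0 * f0_hz ** 2 / (rho_q * mu_q) ** 0.5   # Hz*cm^2/g
    mass_g_cm2 = -delta_f_hz / c_f
    return mass_g_cm2 * 1e9  # convert g/cm^2 to ng/cm^2

# A -30 Hz shift over the sampling period on a 5 MHz crystal
print(f"{sauerbrey_mass(-30.0):.0f} ng/cm^2 deposited")  # ~530 ng/cm^2
```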
458 Nutritional Education in Health Resort Institutions in the Face of Demographic and Epidemiological Changes in Poland
Authors: J. Woźniak-Holecka, T. Holecki, S. Jaruga
Abstract:
Spa treatment is an important area of the health care system in Poland due to the increasing needs of the population and the historical conditions underlying this form of therapy. It extends the range of financing possibilities available to facilities and increases the potential of spa services, which is very important in the context of demographic and epidemiological changes. The main advantages of spa treatment services include relatively wide availability, low risk of side effects, good patient tolerance, a long-lasting curative effect, and a relatively low cost. In addition, patients should be provided with a proper diet and enabled to participate in health education and health promotion classes aimed at health problems consistent with the treatment profile. Challenges for global health care systems include a sharp increase in spending on benefits, the dynamic development of health technologies, and growing social expectations. This requires extending the competences of health resort facilities in health promotion. In every type of health resort institution in Poland, nutritional education services are implemented, aimed at creating and consolidating proper eating habits. Choosing the right diet can speed up recovery or become one of the methods of alleviating the symptoms of chronic diseases. During spa treatment, the patient learns the principles of rational nutrition and the diet therapy adequate to their diseases. The aim of the project is to assess the frequency and quality of nutritional education provided to patients in health resort facilities from a nationwide perspective. The material for the study will be data obtained through in-depth interviews conducted among heads of nutrition departments of selected institutions. The use of nutritional education in a health resort may be an important element of state health policy as a useful tool to reduce the risk of diet-related diseases. Recognizing nutritional education in health resort institutions as a fully fledged health service can provide effective system support for health policy, including for seniors, given the demographic changes currently occurring in the Polish population. Furthermore, it is necessary to increase the interest and motivation of patients to follow the recommendations of nutritional education, because this will bring tangible benefits for the long-term effects of therapy; care should also be taken with the form and methodology of the nutritional education implemented in health resort institutions. Finally, it is necessary to construct an educational offer for the groups of patients with the greatest health needs: the elderly and the disabled. In conclusion, the system of nutritional education implemented in Polish health resort institutions requires thorough change and strong systemic correction.
Keywords: health care system, nutritional education, public health, spa and treatment
457 Metal-Organic Frameworks-Based Materials for Volatile Organic Compounds Sensing Applications: Strategies to Improve Sensing Performances
Authors: Claudio Clemente, Valentina Gargiulo, Alessio Occhicone, Giovanni Piero Pepe, Giovanni Ausanio, Michela Alfè
Abstract:
Volatile organic compound (VOC) emissions represent a serious risk to human health and the integrity of ecosystems, especially at high concentrations. For this reason, it is very important to continuously monitor environmental quality and to develop fast and reliable portable sensors that allow on-site analysis. Chemiresistors have become promising candidates for VOC sensing due to their ease of fabrication, the variety of suitable sensitive materials, and their simple sensing output. A chemoresistive gas sensor is a transducer that measures the concentration of an analyte in the gas phase, because its change in resistance is proportional to the amount of the analyte present. The selection of the sensitive material, which interacts with the target analyte, is very important for sensor performance. The materials most widely used for VOC detection are metal oxides (MOx), owing to their rapid recovery, high sensitivity to various gas molecules, and easy fabrication; their sensing performance can still be improved in terms of operating temperature, selectivity, and detection limit. Metal-organic frameworks (MOFs) have also attracted much attention in the field of gas sensing due to their high porosity, high surface area, tunable morphologies, and structural variety. MOFs are generated by the self-assembly of multidentate organic ligands connecting adjacent multivalent metal nodes via strong coordination interactions, producing stable and highly ordered crystalline porous materials with well-designed structures. However, most MOFs intrinsically exhibit low electrical conductivity. To improve this property, MOFs can be combined with organic and inorganic materials in a hybrid fashion to produce composite materials, or they can be transformed into more stable structures. Indeed, MOFs can be employed as precursors of metal oxides with well-designed architectures via calcination. MOF-derived MOx partially preserve the original structure, with a high surface area and intrinsic open pores that act as trapping centers for gas molecules, and show higher electrical conductivity. Core-shell heterostructures, in which the surface of a metal oxide core is completely coated by a MOF shell, forming a junction at the core-shell heterointerface, can also be synthesized. Nanocomposites in which MOF structures are intercalated with graphene-related materials can likewise be produced; here the conductivity increases thanks to the high electron mobility of the carbon component. As MOF structures, zinc-based MOFs belonging to the ZIF family were selected in this work. Several Zn-based materials based on and/or derived from MOFs were produced, structurally characterized, and arranged in a chemoresistive architecture, also exploring the potential of different approaches for sensing-layer deposition based on PLD (pulsed laser deposition) and, in the case of thermally labile materials, MAPLE (matrix-assisted pulsed laser evaporation) to enhance adhesion to the support. The sensors were tested in a controlled-humidity chamber that allows the concentration of ethanol, a typical analyte chosen among the VOCs for a first survey, to be varied. The effect of heating the chemiresistor to improve sensing performance was also explored.
Future research will focus on exploring new manufacturing processes for MOF-based gas sensors with the aim of improving sensitivity and selectivity and reducing operating temperatures.
Keywords: chemiresistors, gas sensors, graphene related materials, laser deposition, MAPLE, metal-organic frameworks, metal oxides, nanocomposites, sensing performance, transduction mechanism, volatile organic compounds
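As a sketch of the chemoresistive read-out described above, the response can be computed as the relative resistance change, and the sensitivity as the slope of response versus concentration. The resistance readings below are invented for illustration and do not come from the sensors in this work.

```python
import numpy as np

def sensor_response(r_baseline, r_gas):
    """Relative resistance change used as the chemiresistive response."""
    return (r_gas - r_baseline) / r_baseline

# Hypothetical resistance readings (ohms) at increasing ethanol levels
r0 = 1.2e6
readings = {50: 1.02e6, 100: 0.88e6, 200: 0.69e6}   # ppm -> ohms
for ppm, r in readings.items():
    print(f"{ppm:>4} ppm: S = {sensor_response(r0, r):+.2f}")

# A linear fit of |S| vs concentration gives the sensitivity (slope)
ppm = np.array(list(readings))
s = np.abs([sensor_response(r0, r) for r in readings.values()])
slope, intercept = np.polyfit(ppm, s, 1)
print(f"sensitivity ~ {slope:.2e} per ppm")
```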
456 Assessing Children's Probabilistic and Creative Thinking in a Non-formal Learning Context
Authors: Ana Breda, Catarina Cruz
Abstract:
Daily, we face unpredictable events, often attributed to chance, as there is no justification for their occurrence. Chance, understood as a source of uncertainty, is present in several aspects of human life, such as weather forecasts, dice rolling, and lotteries. Surprisingly, humans and some animals can quickly adjust their behavior to handle doubly stochastic processes efficiently (random events with two layers of randomness, like unpredictable weather affecting dice rolling). This adjustment ability suggests that the human brain has built-in mechanisms for perceiving, understanding, and responding to simple probabilities. It also explains why current trends in mathematics education include probability concepts in official curriculum programs, starting from the third year of primary education onwards. In the first years of schooling, children learn to use a specific vocabulary, with terms such as never, always, rarely, perhaps, likely, and unlikely, to help them perceive and understand the probability of events; these terms are of crucial importance for their perception and understanding of probabilities. The development of probabilistic concepts comes from facts and cause-effect sequences resulting from the subject's actions, as well as from the notion of chance and intuitive estimates based on everyday experiences. As part of a junior summer school program which took place at a Portuguese university, a non-formal learning experiment was carried out with 18 children in the 5th and 6th grades. This experiment was designed to be implemented within the dynamic of a serious ice-breaking game, to assess the children's levels of probabilistic, critical, and creative thinking in understanding impossible, certain, equally probable, likely, and unlikely events, and also to gain insight into how the non-formal learning context influenced their achievements. The criteria used to evaluate probabilistic thinking included the creative ability to conceive events classified in the specified categories, the ability to properly justify the categorization, the ability to critically assess the events classified by other children, and the ability to make predictions based on a given probability. The data analysis employs a qualitative, descriptive, and interpretative-methods approach based on students' written productions, audio recordings, and researchers' field notes. This methodology allowed us to conclude that such an approach is an appropriate and helpful formative assessment tool. The promising results of this initial exploratory study call for a future research study with children at these levels of education, from different regions, attending public or private schools, to validate and expand our findings.
Keywords: critical and creative thinking, non-formal mathematics learning, probabilistic thinking, serious game
455 Cycleloop Personal Rapid Transit: An Exploratory Study for Last Mile Connectivity in Urban Transport
Authors: Suresh Salla
Abstract:
In this paper, the author explores the most sustainable last-mile transport mode, addressing the present problems of traffic congestion, jams, pollution, and travel stress. The development of energy-efficient, sustainable, integrated transport systems is a must to make our cities more livable. Emphasis on autonomous, connected, electric, shared systems for the effective utilization of vehicles and public infrastructure is on the rise. Many surface mobility innovations, like public bike sharing (PBS), ride hailing, and ride sharing, although workable, add to already congested roads when analyzed holistically, are difficult to ride in hostile weather, cause pollution, and impose commuter stress. The sustainability of transportation is evaluated with respect to public adoption, average speed, energy consumption, and pollution. Why does the public prefer certain modes over others? How does commute time play a role in mode selection or shift? What factors play a role in energy consumption and pollution? Based on the study, it is clear that the public prefers a transport mode which is exhaustive (i.e., less need for interchange, with a widespread network), intensive (i.e., less waiting time, with vehicles available at frequent intervals), and convenient, with the latest technologies. Average speed depends on stops, the number of intersections, signals, clear route availability, etc. It is clear from physics that the higher the kerb weight of a vehicle, the higher its operational energy consumption; higher kerb weight also demands heavier infrastructure. Pollution depends on the source of energy, the efficiency of the vehicle, and the average speed. A mode can be made exhaustive when the unit infrastructure cost is low and offered intensively when the vehicle cost is low. Reliable and seamless integrated mobility down to the last quarter mile (the five-minute walk, FMW) is a must to encourage sustainable public transportation. The study shows that the average speed and reliability of dedicated modes (like metro, PRT, BRT, etc.) are high compared with road vehicles, and electric vehicles, even more so battery-less or third-rail vehicles, reduce pollution. One potential mode is Cycleloop PRT, in which the commuter rides an e-cycle in a dedicated path: elevated, at grade, or underground. An e-bike with a kerb weight per rider of 15 kg, 1/50th that of a car or 1/10th that of other PRT systems, makes it a sustainable mode. The Cycleloop tube will be light, sleek, and scalable, and can be erected in modular fashion, either on modified street lamp posts or suspended between two stations. Embarking and disembarking points, or offline stations, can be placed at intervals which suit FMW access to mass public transit. In terms of convenience, the guided e-bike can be made self-balancing, thus enabling driverless on-demand vehicles. An e-bike equipped with smart electronics and drive controls can intelligently respond to field sensors and move autonomously in reaction to a central controller. Smart switching allows travel from origin to destination without interchange of cycles. A DC-powered, battery-less e-cycle with voluntary manual pedaling makes the mode sustainable and provides health benefits. Tandem e-bikes, smart switching, and platoon-operation algorithms provide superior throughput for the Cycleloop. Thus, Cycleloop PRT will be an exhaustive, intensive, convenient, reliable, speedy, sustainable, safe, pollution-free, and healthy alternative mode for last-mile connectivity in cities.
Keywords: cycleloop PRT, five-minute walk, lean modular infrastructure, self-balanced intelligent e-cycle
454 A Numerical Study for Improving the Performance of a Vertical Axis Wind Turbine by a Wind Power Tower
Authors: Soo-Yong Cho, Chong-Hyun Cho, Chae-Whan Rim, Sang-Kyu Choi, Jin-Gyun Kim, Ju-Seok Nam
Abstract:
Recently, vertical axis wind turbines (VAWTs) have been widely used to produce electricity, even in urban areas. They have several merits, such as low noise, easy installation of the generator, and a simple structure without a yaw-control mechanism. However, their blades operate under the influence of the trailing vortices generated by the preceding blades. This phenomenon deteriorates their output power and makes it difficult to predict their performance correctly. To improve the performance of VAWTs, wind power towers can be applied. A wind power tower is usually constructed as a multi-story building to increase the frontal area presented to the wind stream; hence, multiple sets of VAWTs can be installed within the wind power tower and operated at high elevation. Many different types of wind power tower can be used in the field. In this study, a wind power tower with a circular column shape was applied, and the VAWT was installed at the center of the tower. Seven guide walls were used as struts between the floors of the wind power tower. These guide walls were utilized not only to increase the wind velocity within the tower but also to adjust the wind direction to create better working conditions for the VAWT. Hence, some important design variables, such as the distance between the wind turbine and the guide wall, the outer diameter of the wind power tower, and the direction of the guide wall relative to the wind direction, should be considered to enhance the output power of the VAWT. A numerical analysis was conducted to find the optimum dimensions of the design variables using computational fluid dynamics (CFD), chosen from among the available prediction methods. CFD can be an accurate prediction method compared with stream-tube methods, but obtaining accurate CFD results requires transient analysis and full three-dimensional (3-D) computation. However, full 3-D CFD is hardly a practical tool because it requires huge computation times. Therefore, a reduced computational domain was applied as a practical alternative. In this study, the computations were conducted in the reduced computational domain and compared with experimental results in the literature, and the mechanism behind the differences between the experimental and computational results was examined. The computed results showed that this computational method could be effective in a design methodology using an optimization algorithm. After validation of the numerical method, CFD on the wind power tower was conducted with the important design variables affecting the performance of the VAWT. The results showed that the output power of the VAWT obtained using the wind power tower was increased compared with that obtained without the tower. In addition, they showed that the increase in output power depended greatly on the dimensions of the guide wall.
Keywords: CFD, performance, VAWT, wind power tower
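The benefit of the guide walls can be sketched with the standard wind-power relation P = Cp · ½ρAV³, which shows why even a modest velocity increase inside the tower raises output strongly (power scales with the cube of wind speed). The swept area, speeds, and rotor Cp below are assumed values for illustration, not the paper's CFD results.

```python
def power_coefficient(power_w, rho, area_m2, wind_speed):
    """Cp = P / (0.5 * rho * A * V^3): fraction of wind power captured."""
    return power_w / (0.5 * rho * area_m2 * wind_speed ** 3)

rho, area, cp = 1.225, 4.0, 0.30   # air density (kg/m^3), swept area and Cp assumed
for label, v in [("free stream", 6.0), ("inside tower (guide walls)", 8.0)]:
    p = cp * 0.5 * rho * area * v ** 3       # output at fixed rotor efficiency
    print(f"{label}: V = {v} m/s -> P = {p:.0f} W, "
          f"Cp check = {power_coefficient(p, rho, area, v):.2f}")
```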
453 Suggestion of Methodology to Detect Building Damage Level Collectively with Flood Depth Utilizing Geographic Information System at Flood Disaster in Japan
Authors: Munenari Inoguchi, Keiko Tamura
Abstract:
In Japan, we suffered earthquake, typhoon, and flood disasters in 2019. In particular, 38 of 47 prefectures were affected by Typhoon #1919, which occurred in October 2019. In this disaster, 99 people died, three people went missing, and 484 people were injured. Furthermore, 3,081 buildings totally collapsed and 24,998 buildings were half-collapsed. Once a disaster occurs, local responders have to inspect the damage level of each building themselves in order to certify building damage so that survivors can start their life reconstruction process. In this disaster, the total number of buildings to be inspected was very high. Given this situation, the Cabinet Office of Japan approved an efficient way to detect building damage levels, namely collective detection. However, it proposed just a guideline, and local responders had to establish a concrete and infallible method by themselves. Against this issue, we decided to establish an effective and efficient methodology to detect building damage levels collectively from flood depth. Since flood depth depends on land height, we decided to utilize GIS (Geographic Information System) for analyzing the elevation spatially, focusing on spatial interpolation, an analysis tool usually utilized to survey groundwater levels. In establishing the methodology, we considered four key points: 1) how to satisfy the conditions defined in the guideline approved by the Cabinet Office for detecting building damage levels; 2) how to satisfy survivors with the resulting building damage levels; 3) how to maintain equitability and fairness, because the detection of building damage levels is executed by a public institution; and 4) how to reduce the cost in time and human resources, because responders do not have enough of either for disaster response. We then proposed a five-step methodology for detecting building damage levels collectively from flood depth utilizing GIS. The first step is to obtain the boundary of the flooded area. The second is to collect actual flood depths as samples over the flooded area. The third is to execute spatial interpolation with the sampled flood depths to derive the two-dimensional flood depth extent. The fourth is to divide the area into blocks by four categories of flood depth (non-flooded, over the floor to 100 cm, 100 cm to 180 cm, and over 180 cm), following road lines, to obtain the satisfaction of survivors. The fifth is to assign a flood depth level to each building. In Koriyama city, Fukushima prefecture, we proposed the methodology of collective detection of building damage levels described above, and local responders decided to adopt it for Typhoon #1919 in 2019. We and the local responders then collectively detected building damage levels for over 1,000 buildings. We have received positive feedback that the methodology was simple and reduced the cost in time and human resources.
Keywords: building damage inspection, flood, geographic information system, spatial interpolation
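The third and fourth steps can be sketched in a few lines: inverse-distance-weighted (IDW) interpolation of sampled depths, followed by classification into the four depth categories. IDW is one common spatial-interpolation choice (the paper's GIS tool is not named beyond "spatial interpolation"), and the coordinates and depths below are hypothetical.

```python
import numpy as np

def idw(sample_xy, sample_depth, grid_xy, power=2.0):
    """Inverse-distance-weighted flood depth at grid points."""
    d = np.linalg.norm(grid_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                   # avoid division by zero
    w = 1.0 / d ** power
    return (w * sample_depth).sum(axis=1) / w.sum(axis=1)

def depth_category(depth_cm):
    """The four classes used for block-level damage certification."""
    if depth_cm <= 0:
        return "non-flooded"
    if depth_cm < 100:
        return "over the floor to 100 cm"
    if depth_cm < 180:
        return "100 cm to 180 cm"
    return "over 180 cm"

samples = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])  # survey points (m)
depths = np.array([30.0, 120.0, 200.0])                        # sampled depths (cm)
grid = np.array([[25.0, 10.0], [75.0, 40.0]])                  # block centroids
for pt, dep in zip(grid, idw(samples, depths, grid)):
    print(pt, f"{dep:.0f} cm ->", depth_category(dep))
```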
452 Managed Aquifer Recharge (MAR) for the Management of Stormwater on the Cape Flats, Cape Town
Authors: Benjamin Mauck, Kevin Winter
Abstract:
The city of Cape Town in South Africa has shown consistent economic and population growth in the last few decades, and that growth is expected to continue into the future. These projected growth rates are set to place additional pressure on the city's already strained water supply system. Thus, given Cape Town's water scarcity, increasing water demands, and stressed water supply system, coupled with global awareness of the issues of sustainable development, environmental protection, and climate change, alternative water management strategies are required to ensure water is sustainably managed. Water Sensitive Urban Design (WSUD) is an approach to sustainable urban water management that attempts to assign a resource value to all forms of water in the urban context, viz. stormwater, wastewater, potable water, and groundwater. WSUD employs a wide range of strategies to improve the sustainable management of urban water, such as water reuse, developing alternative supply sources, sustainable stormwater management, and enhancing the aesthetic and recreational value of urban water. Managed Aquifer Recharge (MAR) is one WSUD strategy which has proven successful in a number of places around the world. MAR is the process whereby an aquifer is intentionally or artificially recharged, which provides a valuable means of water storage while enhancing the aquifer's supply potential. This paper investigates the feasibility of implementing MAR in the sandy, unconfined Cape Flats Aquifer (CFA) in Cape Town. The main objective of the study is to assess whether MAR is a viable strategy for stormwater management on the Cape Flats, aiding the prevention or mitigation of the seasonal flooding that occurs there while also improving the supply potential of the aquifer. This involves infiltrating stormwater into the CFA during the wet winter months and, in turn, abstracting from the CFA during the dry summer months for fit-for-purpose uses, in order to optimise the recharge and storage capacity of the CFA. The fully integrated MIKE SHE model is used in this study to simulate both surface water and groundwater hydrology. This modelling approach enables the testing of the various potential recharge and abstraction scenarios required for the implementation of MAR on the Cape Flats. Further MIKE SHE scenario analysis under projected future climate scenarios provides insight into the performance of MAR as a stormwater management strategy under climate change conditions. Scenario analysis using an integrated model such as MIKE SHE is a valuable tool for evaluating the feasibility of MAR as a stormwater management strategy and its potential to contribute to improving Cape Town's water security into the future.
Keywords: managed aquifer recharge, stormwater management, Cape Flats Aquifer, MIKE SHE
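The recharge/abstraction scenario logic can be caricatured with a simple bucket water balance: recharge in the wet winter, abstraction in the dry summer, tracked against a storage cap. This is only a toy illustration of what the fully integrated MIKE SHE model simulates; the capacity and all monthly volumes are assumed numbers, not study data.

```python
# Toy monthly water balance for one recharge/abstraction scenario on an
# unconfined aquifer (southern-hemisphere seasons: wet winter mid-year).
capacity = 120.0    # usable storage, million m^3 (assumed)
storage = 60.0      # starting storage, million m^3 (assumed)

recharge = [0, 0, 1, 3, 6, 9, 10, 9, 6, 3, 1, 0]      # Jan..Dec (assumed)
abstraction = [9, 8, 6, 4, 2, 1, 1, 1, 2, 4, 7, 9]    # fit-for-purpose use

for month, (r, a) in enumerate(zip(recharge, abstraction), start=1):
    spill = max(0.0, storage + r - a - capacity)       # rejected recharge
    storage = min(capacity, max(0.0, storage + r - a))
    print(f"month {month:2d}: storage {storage:6.1f} Mm^3, spill {spill:4.1f}")
```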
451 Comparison of Two Methods of Cryopreservation of Testicular Tissue from Prepubertal Lambs
Authors: Rensson Homero Celiz Ygnacio, Marco Aurélio Schiavo Novaes, Lucy Vanessa Sulca Ñaupas, Ana Paula Ribeiro Rodrigues
Abstract:
The cryopreservation of testicular tissue emerges as an alternative for preserving the reproductive potential of individuals who cannot yet produce sperm but will undergo treatments that may affect their fertility (e.g., chemotherapy). The present work therefore aims to compare two cryopreservation methods, slow freezing and vitrification, in testicular tissue of prepubertal lambs. To obtain the testicular tissue, the animals were castrated and the testicles were collected immediately into a physiological solution supplemented with antibiotics. In the laboratory, the testes were cut into small pieces. Testicular fragments of 3×3×1 mm³ were placed in a dish containing Minimum Essential Medium (MEM-HEPES). The fragments were distributed randomly into non-cryopreserved (fresh control), slow freezing (SF), and vitrification groups. For the SF procedure, two fragments from a given male were placed in a 2.0 mL cryogenic vial containing 1.0 mL MEM-HEPES supplemented with 20% fetal bovine serum (FBS) and 20% dimethyl sulfoxide (DMSO). Tubes were placed in a Mr. Frosty™ freezing container with isopropyl alcohol and transferred to a -80 °C freezer for overnight storage. On the next day, each tube was plunged into liquid nitrogen (LN). For vitrification, the ovarian tissue cryosystem (OTC) device was used. Testicular fragments were placed in the OTC device and exposed for four minutes to the first vitrification solution (VS1), composed of MEM-HEPES supplemented with 10 mg/mL bovine serum albumin (BSA), 0.25 M sucrose, 10% ethylene glycol (EG), 10% DMSO, and 150 μM alpha-lipoic acid. VS1 was discarded, and the fragments were then submerged in a second vitrification solution (VS2) with the same composition as VS1 but with 20% EG and 20% DMSO. VS2 was then discarded, and each OTC device containing up to four testicular fragments was closed and immersed in LN. After the storage period, the fragments were removed from the LN, kept at room temperature for one minute, and then immersed in a 37 °C water bath for 30 seconds. Samples were warmed by sequential immersion in solutions of MEM-HEPES supplemented with 3 mg/mL BSA and decreasing concentrations of sucrose. Hematoxylin-eosin staining was used to analyze the tissue architecture. Scores ranged from 0 (morphologically normal) to 3 (severe alteration). The histomorphological evaluation of the testicular tissue shows that, for nuclear alterations (distinction of nucleoli and condensation of nuclei), there are no differences between slow freezing and the control, whereas vitrification presents greater damage (p < 0.05). On the other hand, when evaluating epithelial alterations, we observed that freezing showed scores statistically equal to the control for variables such as retraction of the basement membrane, formation of gaps, and organization of the peritubular cells. The results of the study demonstrate that cryopreservation using the slow freezing method is an excellent tool for the preservation of prepubertal testicular tissue.
Keywords: cryopreservation, slow freezing, vitrification, testicular tissue, lambs
450 Intellectual Property Law as a Tool to Enhance and Sustain Museums in Digital Era
Authors: Nayira Ahmed Galal Elden Hassan, Amr Mostafa Awad Kassem
Abstract:
The management of intellectual property (IP) in museums presents a multifaceted challenge, requiring a balance between granting access to cultural assets and maintaining control over them. In the digital age, IP has emerged as a critical aspect of museum operations, encompassing valuable assets within collections and museum-generated content. Effective IP management enables museums to generate revenue, protect rights, and promote cultural heritage while leveraging digital technologies. Opportunities such as e-commerce and licensing can drive economic growth, but they also introduce complexities related to IP protection and regulation. This study explores the dual nature of IP assets, collection-based and museum-generated, highlighting their implications for sustainability and cultural preservation. The analysis includes examples such as the German State Museum's management of replicas of the Nefertiti bust, showcasing the challenges museums face when navigating IP frameworks. The research underscores the importance of a comprehensive understanding of IP laws to prevent legal disputes, reputational risks, and revenue loss. By adopting an analytical and comparative methodology, this paper examines museums that have effectively implemented IP rules to enhance their operations and sustain their resources. It investigates how IP management can help museums fulfill their mission of community engagement, education, and outreach while ensuring long-term sustainability, focusing on the benefits and challenges associated with adopting IP management strategies. Additionally, the study addresses the question of ownership by investigating who holds the rights to cultural assets and how these rights can be managed effectively to align with both institutional goals and the preservation of cultural heritage. The findings underscore the pivotal role of effective IP management in empowering museums to navigate the digital landscape, maximize revenue streams, and safeguard cultural heritage, and they emphasize the necessity of a balanced approach to IP management which aligns institutional goals with the ethical and legal considerations of cultural heritage preservation.
Keywords: intellectual property, museums, IP management, digital technologies, sustainability, cultural heritage
Procedia PDF Downloads 5
449 Simultaneous Optimization of Design and Maintenance through a Hybrid Process Using Genetic Algorithms
Authors: O. Adjoul, A. Feugier, K. Benfriha, A. Aoussat
Abstract:
In general, issues related to design and maintenance are considered independently. However, the decisions made in these two areas influence each other. Design for maintenance is considered an opportunity to optimize the life cycle cost of a product, particularly in the nuclear or aeronautical field, where maintenance expenses represent more than 60% of life cycle costs. The design of large-scale systems starts with the product architecture, a choice of components in terms of cost, reliability, weight, and other attributes corresponding to the specifications. On the other hand, the design must take maintenance into account by improving, in particular, real-time monitoring of equipment through the integration of new technologies such as connected sensors and intelligent actuators. We note that the approaches used in existing Design for Maintenance (DFM) methods are limited to the simultaneous characterization of the reliability and maintainability of a multi-component system. This article proposes a DFM method that assists designers in specifying dynamic maintenance for multi-component industrial systems. The term "dynamic" refers to the ability to integrate available monitoring data to adapt the maintenance decision in real time. The goal is to maximize the availability of the system at a given life cycle cost. This paper presents an approach for the simultaneous optimization of the design and maintenance of multi-component systems. Here the design is characterized by four decision variables for each component (reliability level, maintainability level, redundancy level, and level of monitoring data), and the maintenance by two decision variables (the dates of the maintenance stops and the maintenance operations to be performed on the system during these stops). The DFM model helps designers choose technical solutions for large-scale industrial products, i.e., complex multi-component systems with long life cycles, such as trains and aircraft. The method is based on a two-level hybrid algorithm for the simultaneous optimization of design and maintenance, using genetic algorithms. The first level selects a design solution for a given system, considering the life cycle cost and the reliability. The second level determines a dynamic and optimal maintenance plan to be deployed for that design solution. This level is based on the Maintenance Free Operating Period (MFOP) concept, which takes into account decision criteria such as total reliability, maintenance cost, and maintenance time. Depending on the life cycle duration, the desired availability, and the desired business model (sales or rental), this tool provides visibility of overall costs and of the optimal product architecture.
Keywords: availability, design for maintenance (DFM), dynamic maintenance, life cycle cost (LCC), maintenance free operating period (MFOP), simultaneous optimization
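To make the two-level structure concrete, here is a minimal sketch of a nested optimization in which an outer genetic algorithm searches design vectors (four discrete variables per component, as in the abstract) and an inner routine picks a maintenance plan for each candidate design. The fitness model, parameter values, and all numeric coefficients are toy assumptions for illustration, not the authors' model.

```python
# Outer GA over designs; inner search over maintenance stops per design.
import random

N_COMPONENTS = 3          # components in the system (assumed)
LEVELS = range(4)         # discrete levels for each design variable (assumed)

def random_design():
    # Four decision variables per component: reliability, maintainability,
    # redundancy, and monitoring level (as listed in the abstract).
    return [tuple(random.choice(LEVELS) for _ in range(4))
            for _ in range(N_COMPONENTS)]

def maintenance_plan(design):
    # Inner level (placeholder): pick the number of maintenance stops that
    # maximizes availability minus a life-cycle-cost penalty.
    best = None
    for stops in range(1, 6):
        reliability = sum(c[0] for c in design) / (3 * len(design))  # in [0, 1]
        availability = 1 - (1 - reliability) / stops
        cost = 10 * stops + 5 * sum(sum(c) for c in design)
        score = availability - 0.001 * cost
        if best is None or score > best[0]:
            best = (score, stops)
    return best  # (fitness, number of stops)

def evolve(pop_size=20, generations=30):
    pop = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda d: maintenance_plan(d)[0], reverse=True)
        parents = ranked[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_COMPONENTS)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.2:                 # mutation
                i = random.randrange(N_COMPONENTS)
                child[i] = tuple(random.choice(LEVELS) for _ in range(4))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda d: maintenance_plan(d)[0])

best = evolve()
print("best design:", best)
print("(fitness, stops):", maintenance_plan(best))
```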
Procedia PDF Downloads 118
448 Evaluating the Knowledge and Skill of Final Year Pharmacy Students in Maternal and Child Health at a University in South Africa
Authors: E. O. Egieyeh, N. Butler, R. Coetzee, M. Van Huyssteen, A. Bheekie
Abstract:
Background: High rates of maternal and child mortality are a global concern. Nationally, they form part of South Africa's quadruple burden of disease. Pharmacists have a crucial role in maternal and child health care delivery and as such should be equipped with the knowledge and skill required to contribute to maternal and child well-being. The International Pharmaceutical Federation statement of policy (2013) outlines pharmacist-led interventions in accordance with the World Health Organisation's interventions in maternal, new-born and child health care. The South African Pharmacy Council's guideline on Good Pharmacy Practice (2010) also stipulates the minimum standards required to participate in reproductive, maternal and child care. Pharmacy schools are obliged to train pharmacy students to meet the priority health needs of the population so that graduates are 'fit for purpose'. The purpose of the study is to evaluate the knowledge and skill of final year pharmacy students at a university in South Africa to determine their preparedness to contribute effectively to maternal and child health care. Method: A quantitative, descriptive, non-randomized baseline study was conducted among the final year students at the School of Pharmacy. Data were collected using a questionnaire designed in sections to assess knowledge of contraception and of maternal and child health, directed at the primary care level and framed within the scope of practice required of an entry-level generalist pharmacist. Participants' skill in infant growth assessment was assessed in a section of the questionnaire in written format. Participants also ticked the topics they had been exposed to on a curriculum content assessment tool, which was not graded. A pilot study examined the clarity and suitability of the question items and the time needed to complete the questionnaire. A score of 50% in each section of the questionnaire indicated a pass. The questionnaire was administered in a campus lecture venue. Results: Of the 102 final-year students, 53 (52%) consented to participate in the study. Only 13.2% of participants scored above 50% in every section. Forty-five participants (85%) scored above 50% in the contraception section, while 40 (75%) scored below 50% in the skills assessment. Less than half of the participants (45.3%) had a total score above 50%. Being a parent or working part-time as a pharmacist's assistant did not influence participants' performance. Evaluation of participants' curriculum content exposure showed differences in exposure to the various topics, with contraception teaching receiving the most recognition. Conclusion: The maternal and child health curriculum content should be reviewed at the university to enhance the knowledge and skill of pharmacy graduates.
Keywords: final year pharmacy students, knowledge and skill, maternal and child health, South Africa
Procedia PDF Downloads 152
447 Hypoglossal Nerve Stimulation (Baseline vs. 12 months) for Obstructive Sleep Apnea: A Meta-Analysis
Authors: Yasmeen Jamal Alabdallat, Almutazballlah Bassam Qablan, Hamza Al-Salhi, Salameh Alarood, Ibraheem Alkhawaldeh, Obada Abunar, Adam Abdallah
Abstract:
Obstructive sleep apnea (OSA) is a disorder caused by the repeated collapse of the upper airway during sleep. It is the most common cause of sleep-related breathing disorder; OSA can cause loud snoring and daytime fatigue, or more severe problems such as high blood pressure, cardiovascular disease, coronary artery disease, insulin-resistant diabetes, and depression. The hypoglossal nerve stimulator (HNS) is an implantable medical device that reduces the occurrence of obstructive sleep apnea by electrically stimulating the hypoglossal nerve in rhythm with the patient's breathing, causing the tongue to move. This stimulation helps keep the patient's airway clear during sleep. This systematic review and meta-analysis aimed to assess the clinical outcomes of hypoglossal nerve stimulation as a treatment for obstructive sleep apnea. A computerized literature search of PubMed, Scopus, Web of Science, and the Cochrane Central Register of Controlled Trials was conducted from inception until August 2022. Studies assessing the following clinical outcomes were pooled in the meta-analysis using Review Manager software: Apnea-Hypopnea Index (AHI), Epworth Sleepiness Scale (ESS), Functional Outcomes of Sleep Questionnaire (FOSQ), Oxygen Desaturation Index (ODI), and Oxygen Saturation (SaO2). We assessed the quality of studies according to the Cochrane risk-of-bias tool for randomized trials (RoB2), the Risk of Bias in Non-randomized Studies of Interventions (ROBINS-I) tool, and a modified version of the Newcastle-Ottawa Scale (NOS) for the non-comparative cohort studies. Thirteen studies (six clinical trials and seven prospective cohort studies) with a total of 817 patients were included in the meta-analysis. AHI results were reported in 11 studies examining 696 OSA patients; there was a significant improvement in the AHI after 12 months of HNS (MD = 18.2, 95% CI 16.7 to 19.7; I² = 0%; P < 0.00001). Further, 12 studies reported ESS results after 12 months of intervention, with a significant improvement in sleepiness among the 757 OSA patients examined (MD = 5.3, 95% CI 4.75 to 5.86; I² = 65%; P < 0.0001). Moreover, nine studies involving 699 participants reported FOSQ results after 12 months of HNS, with a significant reported improvement (MD = -3.09, 95% CI -3.41 to -2.77; I² = 0%; P < 0.00001). In addition, ten studies reported ODI results, with a significant improvement after 12 months of HNS among the 817 patients examined (MD = 14.8, 95% CI 13.25 to 16.32; I² = 0%; P < 0.00001). Hypoglossal nerve stimulation showed a significant positive impact on obstructive sleep apnea patients after 12 months of therapy in terms of the apnea-hypopnea index, the oxygen desaturation index, manifestations of the behavioral morbidity associated with obstructive sleep apnea, and functional status related to sleepiness.
Keywords: apnea, meta-analysis, hypoglossal, stimulation
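For readers unfamiliar with how such pooled effects are obtained, the following is a minimal sketch of inverse-variance pooling of study-level mean differences, with Cochran's Q and I² for heterogeneity. The study values are hypothetical placeholders, not the trials pooled in this meta-analysis, and the fixed-effect weighting shown is only one of the models Review Manager offers.

```python
# Fixed-effect inverse-variance pooling of mean differences (MD).
import math

# (mean_difference, standard_error) per study -- assumed numbers
studies = [(17.5, 1.2), (19.0, 1.5), (18.4, 0.9), (17.9, 2.0)]

weights = [1 / se**2 for _, se in studies]            # fixed-effect weights
pooled = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Heterogeneity: Cochran's Q and I^2
q = sum(w * (md - pooled) ** 2 for (md, _), w in zip(studies, weights))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"MD = {pooled:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}], I^2 = {i2:.0f}%")
```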
Procedia PDF Downloads 115
446 Metacognitive Processing in Early Readers: The Role of Metacognition in Monitoring Linguistic and Non-Linguistic Performance and Regulating Students' Learning
Authors: Ioanna Taouki, Marie Lallier, David Soto
Abstract:
Metacognition refers to the capacity to reflect upon our own cognitive processes. Although there is an ongoing discussion in the literature on the role of metacognition in learning and academic achievement, little is known about its neurodevelopmental trajectories in early childhood, when children begin to receive formal education in reading. Here, we evaluate the metacognitive ability, estimated under a recently developed Signal Detection Theory model, of a cohort of children aged between 6 and 7 (N=60), who performed three two-alternative forced-choice tasks (two linguistic: a lexical decision task and a visual attention span task; one non-linguistic: an emotion recognition task) including trial-by-trial confidence judgements. Our study has three aims. First, we investigated how metacognitive ability (i.e., how well confidence ratings track accuracy in the task) relates to performance in standardized tasks measuring students' reading and general cognitive abilities, using Spearman's and Bayesian correlation analyses. Second, we assessed whether young children recruit common mechanisms supporting metacognition across the different task domains or whether there is evidence for domain-specific metacognition at this early stage of development. This was done by examining correlations in metacognitive measures across task domains and evaluating cross-task covariance with a hierarchical Bayesian model. Third, using robust linear regression and Bayesian regression models, we assessed whether metacognitive ability at this early stage is related to children's longitudinal learning in a linguistic and a non-linguistic task. Notably, we did not observe any association between students' reading skills and metacognitive processing at this early stage of reading acquisition. Some evidence consistent with domain-general metacognition was found, with significant positive correlations in metacognitive efficiency between the lexical decision and emotion recognition tasks and substantial covariance indicated by the Bayesian model. However, no reliable correlations were found between metacognitive performance in the visual attention span task and the remaining tasks. Remarkably, metacognitive ability significantly predicted children's learning in the linguistic and non-linguistic domains a year later. These results suggest that metacognitive skill may be dissociated to some extent from general (i.e., language and attention) abilities, and they further stress the importance of creating educational programs that foster students' metacognitive ability as a tool for long-term learning. More research is crucial to understand whether such programs can enhance metacognitive ability as a transferable skill across distinct domains or whether individual domains should be targeted separately.
Keywords: confidence ratings, development, metacognitive efficiency, reading acquisition
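The core measure in the abstract above is how well confidence tracks accuracy. The study estimates this under a Signal Detection Theory model; as a simpler stand-in, the sketch below computes the type-2 ROC area (the probability that a randomly chosen correct trial carries higher confidence than a randomly chosen incorrect one) on simulated data. The generative model and all numbers are illustrative assumptions, not the study's model or data.

```python
# Type-2 AUROC: how well trial-by-trial confidence discriminates correct
# from incorrect responses (0.5 = chance, 1.0 = perfect monitoring).
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
correct = rng.random(n_trials) < 0.75                  # 75% task accuracy
# Confidence (1-4): higher on correct trials for a metacognitively
# sensitive observer (assumed generative model)
confidence = np.where(correct,
                      rng.integers(2, 5, n_trials),
                      rng.integers(1, 4, n_trials))

def type2_auroc(correct, confidence):
    # P(confidence on a correct trial > confidence on an incorrect trial),
    # counting ties as half.
    c_conf, i_conf = confidence[correct], confidence[~correct]
    wins = (c_conf[:, None] > i_conf[None, :]).sum()
    ties = (c_conf[:, None] == i_conf[None, :]).sum()
    return (wins + 0.5 * ties) / (len(c_conf) * len(i_conf))

print(f"type-2 AUROC = {type2_auroc(correct, confidence):.3f}")
```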
Procedia PDF Downloads 150
445 Measuring Emotion Dynamics on Facebook: Associations between Variability in Expressed Emotion and Psychological Functioning
Authors: Elizabeth M. Seabrook, Nikki S. Rickard
Abstract:
Examining time-dependent measures of emotion, such as variability, instability, and inertia, provides critical and complementary insights into mental health status. Observing changes in the pattern of emotional expression over time could act as a tool to identify meaningful shifts between psychological well- and ill-being. From a practical standpoint, however, examining emotion dynamics day-to-day is likely to be burdensome and invasive. Utilizing social media data as a facet of lived experience can provide real-world, temporally specific access to emotional expression. Emotional language on social media may provide accurate and sensitive insights into individual and community mental health and well-being, particularly when the focus is placed on the within-person dynamics of online emotion expression. The objective of the current study was to examine the dynamics of emotional expression on the social network platform Facebook for active users and their relationship with psychological well- and ill-being. It was expected that greater positive and negative emotion variability, instability, and inertia would be associated with poorer psychological well-being and greater depression symptoms. Data were collected using a smartphone app, MoodPrism, which delivered demographic questionnaires and psychological inventories assessing depression symptoms and psychological well-being, and which collected the status updates of consenting participants. MoodPrism also delivered an experience sampling methodology in which participants completed items assessing positive affect, negative affect, and arousal daily for a 30-day period. The number of positive and negative words in posts was extracted and automatically collated by MoodPrism, and the proportion of positive and negative words relative to the total words written in posts was then calculated. Preliminary analyses have been conducted with the data of 9 participants. While these analyses are underpowered due to sample size, they reveal trends that greater variability in the emotion valence expressed in posts is positively associated with greater depression symptoms (r(9) = .56, p = .12), as is greater instability in emotion valence (r(9) = .58, p = .099). Full data analysis utilizing time-series techniques to explore the Facebook data set will be presented at the conference. Identifying the features of emotion dynamics (variability, instability, inertia) that are relevant to mental health in social media emotional expression is a fundamental step in creating automated screening tools for mental health that are temporally sensitive, unobtrusive, and accurate. The current findings show how monitoring basic social network characteristics over time can provide greater depth in predicting risk and changes in depression and positive well-being.
Keywords: emotion, experience sampling methods, mental health, social media
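The three dynamics named in the abstract above have standard operationalizations: variability as the within-person standard deviation, instability as the mean squared successive difference (MSSD), and inertia as the lag-1 autocorrelation. Below is a minimal sketch on a hypothetical daily series of positive-word proportions; the series is invented, since the abstract does not publish its data.

```python
# Emotion-dynamics measures on a daily valence series (proportion of
# positive words per day, hypothetical values).
import numpy as np

valence = np.array([0.62, 0.55, 0.71, 0.40, 0.45, 0.68, 0.30,
                    0.52, 0.49, 0.66, 0.58, 0.35, 0.60, 0.47])

# Variability: within-person standard deviation
variability = valence.std(ddof=1)

# Instability: mean squared successive difference (MSSD)
instability = np.mean(np.diff(valence) ** 2)

# Inertia: lag-1 autocorrelation of the series
centered = valence - valence.mean()
inertia = (centered[:-1] * centered[1:]).sum() / (centered ** 2).sum()

print(f"variability = {variability:.3f}, instability = {instability:.3f}, "
      f"inertia = {inertia:.3f}")
```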
Procedia PDF Downloads 250
444 Option Pricing Theory Applied to the Service Sector
Authors: Luke Miller
Abstract:
This paper develops an option pricing methodology to value strategic pricing decisions in the service sector. More specifically, this study provides a unifying taxonomy of current service sector pricing practices, frames these pricing decisions as strategic real options, demonstrates accepted option valuation techniques for assessing service sector pricing decisions, and suggests future research areas where pricing decisions and real options overlap. Enhancing revenue in the service sector requires proactive decision-making in a world of uncertainty. In an effort to strategically price service products, revenue enhancement necessitates a careful study of the service costs, customer base, competition, legalities, and shared economies with the market. Pricing decisions involve the quality of inputs, manpower, and best practices to maintain superior service. These decisions further hinge on identifying relevant pricing strategies and understanding how these strategies impact a firm's value. A relatively new area of research applies option pricing theory to investments in real assets and is commonly known as real options. The real options approach is based on the premise that many corporate decisions to invest or divest in assets are simply options, wherein the firm has the right to make an investment without any obligation to act. The decision maker therefore has more flexibility, and the value of this operating flexibility should be taken into consideration. The real options framework has already been applied to numerous areas, including manufacturing, inventory, natural resources, research and development, strategic decisions, technology, and stock valuation. Additionally, numerous surveys have identified a growing need for the real options decision framework within all areas of corporate decision-making. Despite the wide applicability of real options, no study has been carried out linking service sector pricing decisions and real options. This is surprising given that the service sector comprises 80% of US employment and Gross Domestic Product (GDP). Identifying real options as a practical tool to value different service sector pricing strategies is believed to have a significant impact on firm decisions. This paper identifies and discusses four distinct pricing strategies available to the service sector from an options perspective: (1) cost-based profit margin, (2) increased customer base, (3) platform pricing, and (4) buffet pricing. Within each strategy lie several pricing tactics available to the service firm. These tactics can be viewed as options the decision maker has to best manage a strategic position in the market. To demonstrate the effectiveness of including flexibility in the pricing decision, a series of pricing strategies were developed and valued using a real options binomial lattice structure. The option pricing approach discussed in this study allows service firms to directly incorporate market-driven perspectives into the decision process and thus synchronize service operations with organizational economic goals.
Keywords: option pricing theory, real options, service sector, valuation
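The abstract above names a binomial lattice but does not give its parameters, so the following is a minimal sketch that values a generic American-style option to invest in a new pricing strategy on a Cox-Ross-Rubinstein lattice; every number (project value, investment cost, volatility, rate, horizon) is an illustrative assumption, not a figure from the paper.

```python
# CRR binomial lattice for a real (call-like) option to invest, with early
# exercise allowed at every node.
import math

S0    = 100.0   # present value of expected cash flows from the strategy
K     = 95.0    # investment required to exercise (assumed)
sigma = 0.30    # volatility of the project value (assumed)
r     = 0.05    # risk-free rate (assumed)
T     = 2.0     # years over which the firm can defer the decision
n     = 100     # lattice steps

dt = T / n
u = math.exp(sigma * math.sqrt(dt))          # up factor
d = 1 / u                                    # down factor
p = (math.exp(r * dt) - d) / (u - d)         # risk-neutral probability
disc = math.exp(-r * dt)

# Terminal payoffs of the option to invest
values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]

# Backward induction with early exercise (American-style real option)
for step in range(n - 1, -1, -1):
    for j in range(step + 1):
        cont = disc * (p * values[j + 1] + (1 - p) * values[j])
        exercise = S0 * u**j * d**(step - j) - K
        values[j] = max(cont, exercise)

print(f"real option value: {values[0]:.2f}")   # vs. static NPV = S0 - K = 5.00
```

The gap between the lattice value and the static NPV is the value of the operating flexibility the abstract argues pricing decisions should capture.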
Procedia PDF Downloads 355
443 Big Data for Local Decision-Making: Indicators Identified at International Conference on Urban Health 2017
Authors: Dana R. Thomson, Catherine Linard, Sabine Vanhuysse, Jessica E. Steele, Michal Shimoni, Jose Siri, Waleska Caiaffa, Megumi Rosenberg, Eleonore Wolff, Tais Grippa, Stefanos Georganos, Helen Elsey
Abstract:
The Sustainable Development Goals (SDGs) and the Urban Health Equity Assessment and Response Tool (Urban HEART) identify dozens of key indicators to help local decision-makers prioritize and track inequalities in health outcomes. However, presentations and discussions at the International Conference on Urban Health (ICUH) 2017 suggested that additional indicators are needed to make decisions and policies. A local decision-maker may realize that malaria or road accidents are a top priority, but s/he needs additional health determinant indicators, for example about standing water or traffic, to address the priority and reduce inequalities. Health determinants reflect the physical and social environments that influence health outcomes, often at community and societal levels, and include such indicators as access to quality health facilities, access to safe parks, traffic density, location of slum areas, air pollution, social exclusion, and social networks. Indicator identification and disaggregation are necessarily constrained by available datasets, typically collected about households and individuals in surveys, censuses, and administrative records. Continued advancements in earth observation, data storage, computing, and mobile technologies mean that new sources of health determinant indicators derived from 'big data' are becoming available at fine geographic scale. Big data includes high-resolution satellite imagery and aggregated, anonymized mobile phone data. While big data are themselves not representative of the population (e.g., satellite images depict the physical environment), they can provide information about population density, wealth, mobility, and social environments with tremendous detail and accuracy when combined with population-representative survey, census, administrative, and health system data. The aim of this paper is to (1) flag to data scientists important indicators needed by health decision-makers at the city and sub-city scale, ideally free and publicly available, and (2) summarize for local decision-makers new datasets that can be generated from big data, with layperson descriptions of the difficulties in generating them. We include SDG and Urban HEART indicators, as well as indicators mentioned by decision-makers attending ICUH 2017.
Keywords: health determinant, health outcome, mobile phone, remote sensing, satellite imagery, SDG, urban HEART
Procedia PDF Downloads 209
442 Nano-MFC (Nano Microbial Fuel Cell): Utilization of Carbon Nano Tube to Increase Efficiency of Microbial Fuel Cell Power as an Effective, Efficient and Environmentally Friendly Alternative Energy Sources
Authors: Annisa Ulfah Pristya, Andi Setiawan
Abstract:
Electricity is a primary requirement in today's world, including in Indonesia, because electrical energy is flexible to use. Fossil fuels are the major energy source used in power plants. Unfortunately, this conversion process depletes fossil fuel reserves and increases the amount of CO2 in the atmosphere, harming health and contributing to ozone depletion and the greenhouse effect. Solutions that have been applied include solar cells, ocean wave power, wind, water, and so forth. However, low efficiency and complicated maintenance mean that most people and industries in Indonesia still use fossil fuels. In response, the fuel cell was developed. Fuel cells are an electrochemical technology that continuously converts the chemical energy of a fuel and an oxidizer into electrical energy, with an efficiency of 40-60%, considerably higher than earlier sources of electrical energy. However, fuel cells still have some weaknesses, notably their use of an expensive platinum catalyst that is scarce and not environmentally friendly. A source of electrical energy that is both continuous and environmentally friendly is therefore required. On the other hand, Indonesia is a country rich in marine sediments whose organic content is never exhausted. This accumulating organic matter can serve as an alternative energy source in a further development of the fuel cell: the Microbial Fuel Cell (MFC). A Microbial Fuel Cell is a device that uses bacteria to generate electricity from organic and inorganic compounds. Like a conventional fuel cell, an MFC is composed of an anode, a cathode, and an electrolyte. Its main advantages are that the catalyst is a microorganism and that it operates in neutral solution at low temperatures, making it more environmentally friendly than previous chemical fuel cells. However, compared to chemical fuel cells, MFCs reach an efficiency of only 40%. Therefore, the authors propose a solution in the form of Nano-MFC (Nano Microbial Fuel Cell): utilization of carbon nanotubes to increase the power efficiency of the microbial fuel cell as an effective, efficient, and environmentally friendly alternative energy source. Nano-MFC has the advantages of being effective, highly efficient, cheap, and environmentally friendly. Related stakeholders include government ministries, especially the Ministry of Energy, research institutes, and industry as a production facilitator. The strategic steps to achieve this begin with preliminary research, followed by lab-scale testing, dissemination and the building of cooperation with related parties (MOUs), final research and its application in the field, and then the licensing and production of Nano-MFC on an industrial scale, together with publication to the public.
Keywords: CNT, efficiency, electric, microorganisms, sediment
Procedia PDF Downloads 409
441 Adaptation and Validation of Voice Handicap Index in Telugu Language
Authors: B. S. Premalatha, Kausalya Sahani
Abstract:
Background: Voice is multidimensional, conveying emotion, feeling, and communication. Voice disorders have an adverse effect on the physical, emotional, and functional domains of an individual. Self-rating by clients of their voice problem helps clinicians plan intervention strategies. The Voice Handicap Index (VHI) is one such self-rating scale; it contains 30 questions that quantify the functional, physical, and emotional impacts of a voice disorder on a patient's quality of life, with 10 questions in each subsection. Adapted and validated versions of the VHI are available in other Indian languages but not in Telugu, a Dravidian language native to India, spoken mainly in Andhra Pradesh and neighbouring states in southern India. Objectives: To adapt and validate the English version of the Voice Handicap Index (VHI) into Telugu and evaluate its internal consistency and clinical validity in the Telugu-speaking population. Materials: The study was carried out in three stages. In the first stage, the English version of the VHI was given for forward translation into Telugu to ten experts who were proficient in reading and writing Telugu, and to five speech-language pathologists. In the second stage, the Telugu version was given for backward translation to a different group of ten experts (also proficient in reading and writing Telugu) and five speech-language pathologists who were native Telugu speakers with good proficiency in both Telugu and English. The third stage was the administration of the translated Telugu version to the target population. Forty clinical subjects and 40 normal controls served as participants; each group had 26 males and 14 females in the age range of 20 to 60 years. The clinical group comprised individuals with laryngectomy with tracheoesophageal puncture (n=18), laryngitis (n=11), vocal nodules (n=7), and vocal fold palsy (n=4). Participants were asked to rate each item on a 5-point equal-appearing scale (0 = never, 1 = almost never, 2 = sometimes, 3 = almost always, 4 = always), giving a maximum total score of 120. Results: Statistical analysis was performed using SPSS software (version 22.0.0). Means, standard deviations, and percentages were calculated for all participants in both groups. Internal consistency of the Telugu VHI was good to excellent, with scores of 0.742, 0.934, and 0.938 for the physical, emotional, and functional domains, respectively. Validity analysis showed a significant difference between the clinical and control groups for the physical, emotional, and functional domains and for total scores (p < 0.001). Negative correlations with age and gender were found for the physical, emotional, and functional domain scores and total scores in both the dysphonic and control groups. Conclusion: The present study indicates that the Telugu VHI discriminates participants with voice pathology from the normal population, which makes it a valid tool for collecting information from participants about their voice.
Keywords: adaptation, Telugu version, translation, Voice Handicap Index (VHI)
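The per-domain internal-consistency figures quoted in the abstract above are conventionally computed as Cronbach's alpha. The following is a minimal sketch on simulated 0-4 ratings for a 10-item subscale; the respondent data are generated here, not the study's.

```python
# Cronbach's alpha for one VHI domain (respondents x items matrix).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                                  # number of items
    item_vars = items.var(axis=0, ddof=1).sum()         # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)           # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulate 30 respondents answering 10 items driven by a shared trait,
# so the items are internally consistent by construction.
rng = np.random.default_rng(1)
trait = rng.integers(0, 5, size=(30, 1))
noise = rng.integers(-1, 2, size=(30, 10))
responses = np.clip(trait + noise, 0, 4)

print(f"alpha = {cronbach_alpha(responses):.3f}")
```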
Procedia PDF Downloads 277
440 Thinking Historiographically in the 21st Century: The Case of Spanish Musicology, a History of Music without History
Authors: Carmen Noheda
Abstract:
This text reflects on ways of thinking about the study of the history of music by examining historiographical production in Spain at the turn of the century. Based on concepts developed by the historical theorist Jörn Rüsen, the article focuses on the following aspects: the theoretical artifacts that structure the interpretation of the limits of writing the history of music, the narrative patterns used to give meaning to the discourse of history, and the orientation context that functions as a source of criteria of significance for both interpretation and representation. This analysis intends to show that historical music theory is not only a means to abstractly explore the complex questions connected to the production of historical knowledge, but also a tool for obtaining concrete images of the intellectual practice of professional musicologists. Writing about the historiography of contemporary Spanish music is a task that requires both knowledge of the history that is being written and investigated and familiarity with current theoretical trends and methodologies that allow for the recognition and definition of the different tendencies that have arisen in recent decades. With these premises in view, this project takes as its point of departure the 'immediate historiography' of Spanish music at the beginning of the 21st century. The hesitation that Spanish musicology has shown in opening itself to new anthropological and sociological approaches, along with its rigidity in the face of the multiple shifts in dynamic forms of thinking about history, has produced a standstill whose consequences can be seen in the delayed reception of the historiographical revolutions that emerged in the last century. Methodologically, this essay is underpinned by Rüsen's notion of the disciplinary matrix, an important contribution to the understanding of historiography. Combined with his parallel conception of differing paradigms of historiography, it is useful for analyzing present-day forms of thinking about the history of music. Following these theories, the article first addresses the characteristics and identification of present historiographical currents in Spanish musicology and then carries out an analysis based on Rüsen's theories. Finally, it establishes some considerations for the future of musical historiography, whose atrophy has not only fostered the maintenance of an ingrained positivist tradition but has also implied, in the case of Spain, an absence of methodological schools and insufficient participation in international theoretical debates. An update of fundamental concepts has become necessary in order to understand that thinking historically about music demands that we remember that subjects are always linked by reciprocal interdependencies that structure and define what it is possible to create. In this sense, the fundamental aim of this research departs from the recognition that the history of music is embedded in the conditions that make it conceivable, communicable, and comprehensible within a society.
Keywords: historiography, Jörn Rüsen, Spanish musicology, theory of the history of music
Procedia PDF Downloads 190
439 A Policy Review on the Transitional Period from MDGs to SDGs: Experience from the Local Economy of Tigrai Regional State of Ethiopia
Authors: Tewele Gerlase Haile
Abstract:
Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs. The global development landscape underwent a transformative shift in 2015 as the international community pivoted from the MDGs to the more ambitious and comprehensive SDGs. The MDGs were a set of eight international development goals established by the United Nations in 2000 with the aim of improving the lives of people around the world by 2015; the SDGs are a continuation of the MDGs. Unlike the other development goals, progress on the eradication of extreme hunger and poverty (MDG 1) has been slow at the continental level, and the implementation of the MDGs was uneven: some countries achieved many of them, while others barely started any. Through its Poverty Reduction Strategy Papers (PRSPs), Ethiopia has given special attention to the first MDG since 1993. The Ethiopian government was actively engaged in an anti-poverty political campaign, leaving other agendas as secondary issues. Poverty in Ethiopia declined progressively over the years: it was 44.2% in 2000, 38.7% in 2007, and 29.6% in 2011, and it was projected to fall further to 16.7% by the end of 2020. This paper also considers the long-term impact of war on the sustainability and effectiveness of SDG-related initiatives in post-conflict regions, particularly how local governance and community resilience are affected. This involves exploring how war interrupts progress, which specific SDGs are most vulnerable, and what strategies might mitigate these impacts. The existing literature on development economics often neglects the importance of reviewing the transitional period between consecutive global development goals from a local or regional perspective, yet reviewing a transitional period enables policy makers to translate global or national development goals into local development goals with uninterrupted policy continuity. Using a Policy Coherence for Development (PCD) approach as an analytical tool, this paper retrospectively reviews what happened to the local economy of Tigrai Regional State during the transitional period from the MDGs (2000-2015) to the SDGs (2015-2030). Taking retrospective facts and observations into account, policy discontinuity is witnessed in Tigrai following the dissolution of the EPRDF and the terrible war that followed, which claimed about a million human lives and caused over a hundred billion US dollars in economic costs. The unhealthy political reform not only caused a terrible war but also broke the promise of the SDGs. Unlike other regional states, Tigrai was left without the opportunity to translate the ambitious SDGs into its local development policies.
Keywords: local development, political reform, war, MDGs, SDGs, Ethiopia, Tigrai
Procedia PDF Downloads 20
438 Life Cycle Datasets for the Ornamental Stone Sector
Authors: Isabella Bianco, Gian Andrea Blengini
Abstract:
The environmental impact of ornamental stones (such as marbles and granites) is widely debated. Starting from the industrial revolution, continuous improvements in machinery have led to higher exploitation of this natural resource and to greater international interaction between markets. As a consequence, the environmental impact of the extraction and processing of stones has increased. Nevertheless, compared with other building materials, ornamental stones are generally more durable, natural, and recyclable. From the scientific point of view, studies on stone life cycle sustainability have been carried out, but these are often partial or not very significant because of the high share of approximations and assumptions in the calculations. This is due to the lack, in life cycle databases (e.g., Ecoinvent, Thinkstep, and ELCD), of datasets on the specific technologies employed in the stone production chain. For example, the databases do not contain information about diamond wires, chains, or explosives, materials commonly used in quarries and transformation plants. The project presented in this paper aims to populate the life cycle databases with data on specific stone processes. To this end, the methodology follows the standardized approach of Life Cycle Assessment (LCA), according to the requirements of UNI 14040-14044 and the International Reference Life Cycle Data System (ILCD) Handbook guidelines of the European Commission. The study analyses the processes of the entire production chain (cradle-to-gate system boundaries), including the extraction of benches, the cutting of blocks into slabs/tiles, and the surface finishing. Primary data were collected in Italian quarries and transformation plants that use technologies representative of the current state of the art. Since the technologies vary according to the hardness of the stone, the case studies comprise both soft stones (marbles) and hard stones (gneiss). In particular, data on energy, materials, and emissions were collected in the marble basins of Carrara and in the Beola and Serizzo basins located in the province of Verbano Cusio Ossola. The data were then processed with dedicated software to build a life cycle model. The model was built with free parameters that allow easy adaptation to specific production setups. Through this model, the study aims to boost the direct participation of stone companies and encourage the use of the LCA tool to assess and improve the environmental sustainability of the stone sector. At the same time, the compilation of accurate Life Cycle Inventory data aims to make ILCD-compliant datasets of the most significant processes and technologies of the ornamental stone sector available to researchers and stone experts.
Keywords: life cycle assessment, LCA datasets, ornamental stone, stone environmental impact
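To illustrate what the free parameters of such a model might look like, here is a minimal sketch of a parameterized cradle-to-gate inventory function of the kind the abstract describes. The flows chosen and all coefficient values are placeholder assumptions for illustration, not the data collected in the study.

```python
# Parameterized cradle-to-gate inventory: defaults can be overridden to
# adapt the model to a specific quarry or transformation plant.

def stone_inventory(block_m3, slab_m2,
                    diesel_per_m3=2.5,       # L diesel per m^3 extracted (assumed)
                    wire_wear_per_m3=0.02,   # kg diamond wire per m^3 (assumed)
                    kwh_per_m2_cut=4.0,      # electricity per m^2 sawn (assumed)
                    kwh_per_m2_finish=1.5):  # electricity per m^2 finished (assumed)
    return {
        "diesel_L": block_m3 * diesel_per_m3,
        "diamond_wire_kg": block_m3 * wire_wear_per_m3,
        "electricity_kWh": slab_m2 * (kwh_per_m2_cut + kwh_per_m2_finish),
    }

# Example: one 10 m^3 marble block cut into 300 m^2 of finished slabs
inv = stone_inventory(block_m3=10, slab_m2=300)
for flow, amount in inv.items():
    print(f"{flow}: {amount:.1f}")
```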
Procedia PDF Downloads 233