Search results for: summary indices
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1426

256 Challenges to Safe and Effective Prescription Writing in the Environment Where Digital Prescribing is Absent

Authors: Prashant Neupane, Asmi Pandey, Mumna Ehsan, Katie Davies, Richard Lowsby

Abstract:

Introduction/Background & aims: Safe and effective prescribing in hospitals directly and indirectly impacts patients' health. Although digital prescribing in the UK National Health Service (NHS) has been adopted by many tertiary centres and district general hospitals, a significant number of NHS trusts still use paper prescribing. We came across many irregularities in our daily clinical practice with paper prescribing. The main aim of the study was to assess how safely and effectively we prescribe at our hospital, where there is no access to digital prescribing. Method/Summary of work: We conducted a prospective audit in the critical care department at Mid Cheshire Hospitals NHS Foundation Trust in which 20 prescription charts from different patients were randomly selected over a period of 1 month. We assessed 16 categories on each prescription chart and compared them against the trust's standard prescription guidelines. Results/Discussion: We collected data from 20 different prescription charts, evaluating 16 categories in each. The results showed an urgent need for improvement in 8 sections. In 85% of the prescription charts, not all of the prescribers who prescribed medications were identified: name, GMC number and signature were absent from the required prescriber identification section. In 70% of charts, either the indication or the review date of antimicrobials was absent. Units of medication were not documented correctly in 65% of charts, and the patient's allergy status was absent in 30%. The start date of medications was missing, and alterations to medications were not made properly, in 35% of charts. The patient's name was not recorded in all required sections of the chart in 50% of cases, and cancellations of medications were not done properly in 45% of the prescription charts.
Conclusion(s): From the audit and data analysis, we identified the areas in which prescription writing in the critical care department needed improvement. However, during meetings and conversations with experts from the pharmacy department, we realised this audit represents only a specialised department of the hospital, where prescribing is limited to a small number of prescribers. In larger departments of the hospital, where patient turnover is much higher, the results could be considerably worse. The findings were discussed in the critical care MDT meeting, where digital/electronic prescribing was suggested. A poster and presentation on safe and effective prescribing were delivered, and an awareness poster was placed beside every bed in critical care where it is visible to prescribers. We consider this a temporary measure to improve the quality of prescribing; however, we strongly believe digital prescribing would go much further in controlling the weak areas seen in paper prescribing.

Keywords: safe prescribing, NHS, digital prescribing, prescription chart

Procedia PDF Downloads 98
255 Coffee Consumption Has No Acute Effects on Glucose Metabolism in Healthy Men: A Randomized Crossover Clinical Trial

Authors: Caio E. G. Reis, Sara Wassell, Adriana L. Porto, Angélica A. Amato, Leslie J. C. Bluck, Teresa H. M. da Costa

Abstract:

Background: Multiple epidemiologic studies have consistently reported an association between increased coffee consumption and a lowered risk of Type 2 Diabetes Mellitus. However, the mechanisms behind this finding have not been fully elucidated. Objective: To investigate the effect of coffee (caffeinated and decaffeinated) on glucose effectiveness and insulin sensitivity using the stable isotope minimal model protocol with oral glucose administration in healthy men. Design: Fifteen healthy men underwent a 5-arm randomized crossover single-blind (researchers) clinical trial. They consumed decaffeinated coffee, caffeinated coffee (with and without sugar), or control beverages, water (with and without sugar), followed 1 hour later by an oral glucose tolerance test (75 g of available carbohydrate) with intravenous labeled dosing interpreted by the two-compartment minimal model (225 minutes). One-way ANOVA with Bonferroni adjustment was used to compare the effects of the tested beverages on glucose metabolism parameters. Results: Decaffeinated coffee resulted in 29% and 85% higher insulin sensitivity compared with caffeinated coffee and water, respectively, and caffeinated coffee showed 15% and 60% higher glucose effectiveness compared with decaffeinated coffee and water, respectively. However, these differences were not significant (p > 0.10). In the overall analysis (0-225 min), there were no significant differences in glucose effectiveness, insulin sensitivity, or glucose and insulin area under the curve between the groups. The beneficial effects of coffee do not seem to act in the short term (hours) on glucose metabolism parameters, particularly insulin sensitivity indices. The benefits of coffee consumption appear in the long term (years), as shown by the reduction of Type 2 Diabetes Mellitus risk in epidemiological studies. The clinical relevance of the present findings is that there is no need for healthy people to avoid coffee as their drink of choice.
Conclusions: The findings of this study demonstrate that the consumption of caffeinated and decaffeinated coffee, with or without sugar, has no acute effects on glucose metabolism in healthy men. Further research, including long-term interventional studies, is needed to fully elucidate the mechanisms behind the effects of coffee on reducing the risk of Type 2 Diabetes Mellitus.
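The glucose and insulin responses compared above are typically reduced to an area under the curve (AUC) before group testing. As a minimal sketch (the sampling times and values below are invented, not the study's data), the trapezoidal rule can be applied as:

```python
def auc_trapezoid(times, values):
    """Area under the curve by the trapezoidal rule, as commonly used to
    summarise glucose and insulin responses in OGTT studies."""
    if len(times) != len(values) or len(times) < 2:
        raise ValueError("need matching time/value series of length >= 2")
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]                  # minutes between samples
        total += dt * (values[i] + values[i - 1]) / 2.0
    return total

# Hypothetical sampling grid (min) and plasma glucose values (mmol/L)
t = [0, 30, 60, 90, 120, 180, 225]
g = [5.0, 7.8, 8.5, 7.2, 6.1, 5.4, 5.1]
glucose_auc = auc_trapezoid(t, g)
```

Per-beverage AUCs computed this way would then be compared, as in the abstract, with one-way ANOVA and a Bonferroni correction.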

Keywords: coffee, diabetes mellitus type 2, glucose, insulin

Procedia PDF Downloads 413
254 Bi-objective Network Optimization in Disaster Relief Logistics

Authors: Katharina Eberhardt, Florian Klaus Kaiser, Frank Schultmann

Abstract:

Last-mile distribution is one of the most critical parts of a disaster relief operation. Various uncertainties, such as infrastructure conditions, resource availability, and fluctuating beneficiary demand, render last-mile distribution challenging in disaster relief operations. The need to balance critical performance criteria such as response time, demand coverage, and cost-effectiveness further complicates the task. The occurrence of disasters cannot be controlled, and their magnitude is often challenging to assess. In summary, these uncertainties create a need for additional flexibility, agility, and preparedness in logistics operations. As a result, strategic planning and efficient network design are critical for an effective and efficient response. Furthermore, the increasing frequency of disasters and the rising cost of logistical operations amplify the need for robust and resilient solutions in this area. Therefore, we formulate a scenario-based bi-objective optimization model that integrates pre-positioning, allocation, and distribution of relief supplies, extending the general form of a covering location problem. The proposed model aims to minimize underlying logistics costs while maximizing demand coverage. Using a set of disruption scenarios, the model allows decision-makers to identify optimal network solutions that address the risk of disruptions. We provide an empirical case study of the public authorities' emergency food storage strategy in Germany to illustrate the potential applicability of the model and provide implications for decision-makers in a real-world setting. We also conduct a sensitivity analysis focusing on the impact of varying stockpile capacities, single-site outages, and limited transportation capacities on the objective value. The results show that the stockpiling strategy needs to be consistent with the optimal number of depots and inventory, based on minimizing costs and maximizing demand satisfaction.
The strategy has potential for optimization, as network coverage is insufficient and relies on very high transportation and personnel capacity levels. As such, the model provides decision support for public authorities in determining an efficient stockpiling strategy and distribution network, and offers recommendations for increased resilience. However, certain aspects have yet to be considered in this study and should be addressed in future work, such as additional network constraints and heuristic solution algorithms.
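The cost-coverage trade-off the model optimizes can be illustrated with a toy enumeration over depot subsets. All data below (opening costs, coverage sets) are invented, and a real instance of the scenario-based model would be solved with a MILP solver rather than brute force:

```python
from itertools import combinations

# Hypothetical depot opening costs and the demand zones each depot covers
depot_cost = {"A": 4, "B": 3, "C": 5}
covers = {"A": {1, 2}, "B": {2, 3}, "C": {1, 3, 4}}
zones = {1, 2, 3, 4}

def pareto_front():
    """Enumerate all depot subsets and keep the Pareto-optimal solutions
    for the pair of objectives (minimise cost, maximise coverage)."""
    candidates = []
    for r in range(len(depot_cost) + 1):
        for subset in combinations(depot_cost, r):
            cost = sum(depot_cost[d] for d in subset)
            covered = set()
            for d in subset:
                covered |= covers[d]
            candidates.append((cost, len(covered & zones), subset))
    # Keep a solution unless another one is at least as cheap, covers at
    # least as much, and differs in at least one objective.
    front = [c for c in candidates
             if not any(o[0] <= c[0] and o[1] >= c[1] and o[:2] != c[:2]
                        for o in candidates)]
    return sorted(front)
```

Each point on the resulting frontier is one cost/coverage compromise a decision-maker could pick, which is the kind of trade-off the sensitivity analysis above explores at scale.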

Keywords: humanitarian logistics, bi-objective optimization, pre-positioning, last mile distribution, decision support, disaster relief networks

Procedia PDF Downloads 55
253 3D Text Toys: Creative Approach to Experiential and Immersive Learning for World Literacy

Authors: Azyz Sharafy

Abstract:

3D Text Toys is an innovative and creative approach that utilizes 3D text objects to enhance creativity, literacy, and basic learning in an enjoyable and gamified manner. By using 3D Text Toys, children can develop their creativity, visually learn words and texts, and apply their artistic talents within their creative abilities. This process incorporates haptic engagement with 2D and 3D texts, word building, and mechanical construction of everyday objects, thereby facilitating better word and text retention. The concept involves constructing visual objects made entirely out of 3D text/words, where each component of the object represents a word or text element. For instance, a bird can be recreated using words or text shaped like its wings, beak, legs, head, and body, resulting in a 3D representation of the bird purely composed of text. This can serve as an art piece or a learning tool in the form of a 3D text toy. These 3D text objects or toys can be crafted using natural materials such as leaves, twigs, strings, or ropes, or they can be made from various physical materials using traditional crafting tools. Digital versions of these objects can be created using 2D or 3D software on devices like phones, laptops, iPads, or computers. To transform digital designs into physical objects, computerized machines such as CNC routers, laser cutters, and 3D printers can be utilized. Once the parts are printed or cut out, students can assemble the 3D texts by gluing them together, resulting in natural or everyday 3D text objects. These objects can be painted to create artistic pieces or text toys, and the addition of wheels can transform them into moving toys. One of the significant advantages of this visual and creative object-based learning process is that students not only learn words but also derive enjoyment from the process of creating, painting, and playing with these objects. The ownership and creation process further enhances comprehension and word retention. 
Moreover, for individuals with learning disabilities such as dyslexia, ADD (Attention Deficit Disorder), or other learning difficulties, the visual and haptic approach of 3D Text Toys can serve as an additional creative and personalized learning aid. The application of 3D Text Toys extends beyond English to any other written language. The adaptation and creative application may vary depending on the country, setting, and native written language. Furthermore, the implementation of this visual and haptic learning tool can be tailored to teach foreign languages according to age level and comprehension requirements. In summary, this creative, haptic, and visual approach has the potential to serve as a global literacy tool.

Keywords: 3D text toys, creative, artistic, visual learning for world literacy

Procedia PDF Downloads 39
252 Evaluation of the Conditions of Managed Aquifer Recharge in the West African Basement Area

Authors: Palingba Aimé Marie Doilkom, Mahamadou Koïta, Jean-Michel Vouillamoz, Angelbert Biaou

Abstract:

Most African populations in rural areas rely on groundwater for their consumption. Indeed, in the face of climate change and strong demographic growth, groundwater, particularly in basement areas, is increasingly in demand. The sustainability of water resources in this type of environment is therefore becoming a major issue. Groundwater recharge can be natural or artificial. Unlike natural recharge, which generally results from the natural infiltration of surface water (e.g., a share of rainfall), artificial recharge consists of inducing water infiltration through appropriate structures to artificially replenish the water stock of an aquifer. Artificial recharge is therefore one of the measures that can be implemented to secure water supply, combat the effects of climate change, and, more generally, help improve the quantitative status of groundwater bodies. It is in this context that the present research is conducted, with the aim of developing artificial recharge to contribute to the sustainability of basement aquifers in a context of climatic variability and constantly increasing water needs. To achieve the expected results, it is important to determine the characteristics of the infiltration basins and to identify areas suitable for their implementation. The geometry of the aquifer was reproduced, and the hydraulic properties of the aquifer were collected and characterized, including boundary conditions, hydraulic conductivity, effective porosity, recharge, Van Genuchten parameters, and saturation indices. The aquifer of the Sanon experimental site is made up of three layers, namely the saprolite, the fissured horizon, and the fresh basement; only the saprolite and the fissured horizon were considered for the simulations. The first results with the FEFLOW model show that the water table reacts continuously for the first 100 days before stabilizing.
The hydraulic head increases by an average of 1 m. The further from the basin, the less the water table reacts. However, if a variable hydraulic head is imposed on the basins, the response of the water table is not uniform over time: the lower the basin's hydraulic head, the less it affects the water table. These simulations will be continued, refining the basin characteristics to obtain a configuration suitable for effective recharge.
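The qualitative behaviour reported here (head rising near the basin, with the response decaying with distance) can be reproduced with a toy one-dimensional finite-difference head model. All parameters below are invented, and this sketch ignores the unsaturated zone and Van Genuchten behaviour that the FEFLOW model represents:

```python
def simulate_head(nx=50, dx=10.0, dt=0.02, days=100, T=50.0, S=0.05,
                  basin_head=1.0):
    """Toy 1-D confined-aquifer head diffusion, explicit finite differences.
    A fixed head is imposed under the basin (left boundary) and a fixed
    zero head far away (right boundary); all values are illustrative."""
    D = T / S                        # hydraulic diffusivity [m^2/day]
    assert D * dt / dx ** 2 < 0.5    # explicit-scheme stability condition
    h = [0.0] * nx
    for _ in range(int(days / dt)):
        h[0] = basin_head            # head imposed beneath the basin
        new = h[:]
        for i in range(1, nx - 1):
            new[i] = h[i] + D * dt / dx ** 2 * (h[i + 1] - 2 * h[i] + h[i - 1])
        h = new
    return h
```

Running the sketch shows the pattern described above: the head rise is largest next to the basin and fades toward the far boundary.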

Keywords: basement area, FEFLOW, infiltration basin, MAR

Procedia PDF Downloads 56
251 Solar Power Generation in a Mining Town: A Case Study for Australia

Authors: Ryan Chalk, G. M. Shafiullah

Abstract:

Climate change is a pertinent issue facing governments and societies around the world. The industrial revolution has resulted in a steady increase in the average global temperature. The mining and energy production industries have been significant contributors to this change, prompting governments to intervene by promoting low-emission technology within these sectors. This paper initially reviews the energy problem in Australia and the mining sector, with a focus on the energy requirements and production methods utilised in Western Australia (WA). Renewable energy in the form of utility-scale solar photovoltaics (PV) provides a solution to these problems by providing emission-free energy that can supplement the existing natural gas turbines in operation at the proposed site. This research presents a custom renewable solution for the mining site considering the specific township network, local weather conditions, and seasonal load profiles. The required PV output is sized to supply slightly over 50% of the town's power requirements during the peak (summer) period, resulting in close to full coverage in the trough (winter) period. DIgSILENT PowerFactory software has been used to simulate the characteristics of the existing infrastructure and the effects of integrating PV. Large-scale PV penetration in the network introduces technical challenges, including voltage deviation, increased harmonic distortion, increased available fault current, and reduced power factor. Results also show that cloud cover has a dramatic and unpredictable effect on the output of a PV system. The preliminary analyses conclude that mitigation strategies are needed to overcome voltage deviations, unacceptable levels of harmonics, excessive fault current, and low power factor. Mitigation strategies are proposed to control these issues, predominantly through the use of high-quality, made-for-purpose inverters.
Results show that the use of inverters with harmonic filtering reduces harmonic injections to an acceptable level according to Australian standards. Furthermore, configuring inverters to supply both active and reactive power assists in mitigating low power factor. The use of FACTS devices (SVC and STATCOM) also reduces harmonics and improves the power factor of the network; finally, energy storage helps to smooth the power supply.
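Two of the power-quality quantities at issue, harmonic distortion and the power factor it degrades, follow standard textbook formulas. The sketch below is a generic illustration, not the DIgSILENT PowerFactory computation used in the study:

```python
import math

def thd(harmonic_rms):
    """Total harmonic distortion of a waveform, given the RMS magnitudes
    [I1, I2, I3, ...] of its fundamental and harmonic components."""
    fundamental, harmonics = harmonic_rms[0], harmonic_rms[1:]
    return math.sqrt(sum(i * i for i in harmonics)) / fundamental

def true_power_factor(displacement_pf, current_thd):
    """True power factor of a distorted load: the displacement power factor
    cos(phi) reduced by current distortion, cos(phi) / sqrt(1 + THD^2).
    Illustrates why harmonic filtering in inverters also helps power factor."""
    return displacement_pf / math.sqrt(1.0 + current_thd ** 2)
```

For example, a load at unity displacement power factor but with 30% current THD has a true power factor below 0.96, so filtering the harmonics directly improves the network's power factor.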

Keywords: climate change, mitigation strategies, photovoltaic (PV), power quality

Procedia PDF Downloads 146
250 Structural Analysis of Archaeoseismic Records Linked to the 5 July 408 - 410 AD Utica Strong Earthquake (NE Tunisia)

Authors: Noureddine Ben Ayed, Abdelkader Soumaya, Saïd Maouche, Ali Kadri, Mongi Gueddiche, Hayet Khayati-Ammar, Ahmed Braham

Abstract:

The archaeological monument of Utica, located in north-eastern Tunisia, was founded in the 8th century BC by the Phoenicians as a port on the trade route connecting Phoenicia and the Straits of Gibraltar in the Mediterranean Sea. The flourishing of this city as an important settlement during the Roman period was followed by sudden abandonment, disuse, and progressive oblivion in the first half of the fifth century AD. This decline can be attributed to the destructive earthquake of 5 July 408 - 410 AD affecting this historic city, as documented in 1906 by the seismologist Fernand de Montessus de Ballore. The magnitude of the Utica earthquake was estimated at 6.8 by the Tunisian National Institute of Meteorology (INM). In order to highlight the damage caused by this earthquake, a field survey was carried out at the Utica ruins to detect and analyse the earthquake archaeological effects (EAEs) using structural geology methods. This approach allowed us to highlight several types of structural damage, including: (1) folded mortar pavements, (2) cracks affecting the mosaic and walls of a water basin in the "House of the Grand Oecus", (3) displaced columns, (4) block extrusion in masonry walls, (5) undulations in mosaic pavements, and (6) tilted walls. The structural analysis of these EAEs and the data measurements reveal a seismic cause for all evidence of deformation in the Utica monument. The maximum horizontal strain of the ground (SHmax) inferred from the building-oriented damage in Utica shows a NNW-SSE direction under a compressive tectonic regime. For the seismogenic source of this earthquake, we propose the active E-W to NE-SW trending Utique - Ghar El Melh reverse fault, passing through the Utica monument and extending towards the Ghar El Melh Lake, as the causative tectonic structure. The active fault trace is well supported by instrumental seismicity, geophysical data (e.g., gravity, seismic profiles), and geomorphological analyses.
In summary, we find that the archaeoseismic records detected at Utica are similar to those observed at many other archaeological sites affected by destructive ancient earthquakes around the world. Furthermore, the calculated orientation of the average maximum horizontal stress (SHmax) closely matches the present-day stress field, as highlighted by earthquake focal mechanisms in this region.

Keywords: Tunisia, Utica, seismogenic fault, archaeological earthquake effects

Procedia PDF Downloads 22
249 Geomatic Techniques to Filter Vegetation from Point Clouds

Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades

Abstract:

More and more frequently, geomatics techniques such as terrestrial laser scanning or digital photogrammetry, either terrestrial or from drones, are being used to obtain digital terrain models (DTMs) for the monitoring of geological phenomena that cause natural disasters, such as landslides, rockfalls, and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes on slopes and hillsides, whether caused by erosion, fall, or land movement in the source area, or by sedimentation in the deposition zone. To carry out this task, it is necessary to filter from the point clouds all those elements that do not belong to the slope itself. Among these elements, vegetation stands out, as it has the greatest presence and changes constantly, both seasonally and daily, being affected by factors such as wind. One of the best-known indices to detect vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels and therefore requires a multispectral camera. These cameras are generally of lower resolution than conventional RGB cameras, while their cost is much higher. Therefore, we have to look for alternative indices based on RGB alone. In this communication, we present the results obtained in the Georisk project (PID2019-103974RB-I00/MCIN/AEI/10.13039/501100011033) using the GLI (Green Leaf Index) and ExG (Excess Green), as well as a change to the Hue-Saturation-Value (HSV) color space, in which the H coordinate provides the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filtering, or to that obtained by TLS (Terrestrial Laser Scanning).
In this last case, we have also worked with a Riegl VZ400i sensor that allows the reception, as in aerial LiDAR, of several returns of the signal, information that can be used for classification of the point cloud. After applying all the techniques in different locations, the results show that the color-based filters allow correct filtering in those areas where the presence of shadows is not excessive and there is contrast between the color of the slope lithology and the vegetation. As anticipated, in the case of the HSV color space it is the H coordinate that responds best for this filtering. Finally, the use of the various returns of the TLS signal allows filtering, with some limitations.
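As a per-pixel sketch of the RGB-only indices mentioned, the GLI and a sum-normalised ExG can be written as below (standard formulas; the channel values are illustrative, and the project applies these to full images and point clouds):

```python
def gli(r, g, b):
    """Green Leaf Index: (2G - R - B) / (2G + R + B).
    Returns None when the denominator vanishes (a black pixel)."""
    denom = 2 * g + r + b
    return (2 * g - r - b) / denom if denom else None

def exg(r, g, b):
    """Excess Green index 2g - r - b computed on chromatic (sum-normalised)
    coordinates, so the result does not depend on overall brightness."""
    s = r + g + b
    if s == 0:
        return 0.0
    rn, gn, bn = r / s, g / s, b / s
    return 2 * gn - rn - bn

# Invented sample pixels: strongly green vegetation vs. greyish rock
vegetation, rock = (30, 180, 40), (120, 110, 115)
```

Thresholding these scores (vegetation pixels score well above zero, grey lithology near or below zero) yields the binary masks applied before the SfM step.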

Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud

Procedia PDF Downloads 112
248 Leuprolide Induced Scleroderma Renal Crisis: A Case Report

Authors: Nirali Sanghavi, Julia Ash, Amy Wasserman

Abstract:

Introduction: To the best of our knowledge, there is only one prior case report of an association between leuprolide and scleroderma renal crisis (SRC). Leuprolide has been noted to cause acute renal failure in some patients. Given the close timing of the leuprolide injection and the worsening renal function in our patient, leuprolide likely caused an exacerbation of lupus nephritis and SRC. Interestingly, our patient, on long-term hydroxychloroquine (HCQ) with normal baseline cardiac function, was found to have HCQ cardiomyopathy, highlighting the need for close monitoring for HCQ toxicity. Known risk factors for HCQ-induced cardiomyopathy include older age, female sex, higher doses, more than 10 years of HCQ use, and pre-existing cardiac and renal insufficiency. Case presentation: A 34-year-old African American woman with a history of overlapping systemic lupus erythematosus (SLE) and scleroderma features and class III lupus nephritis presented with severe headaches, elevated blood pressure (180/120 mmHg), and worsening creatinine (2.07 mg/dL). The headaches had started 1 month earlier, after she began leuprolide injections for fibroids. She was being treated with mycophenolate mofetil 1 g twice a day, belimumab weekly, HCQ 200 mg, and prednisone 5 mg daily; she had been on HCQ since her teenage years. The examination was unremarkable except for proximal interphalangeal joint contractures in the right hand and sclerodactyly of both hands, unchanged from baseline. Laboratory findings included a urinalysis showing 3+ protein, 1+ blood, 6 red blood cells, and 14 white blood cells; thrombotic microangiopathy was ruled out. C3 was 32 mg/dL, C4 was <5 mg/dL, and anti-dsDNA was elevated at >1000. She was started on captopril and discharged once creatinine and blood pressure were controlled.
She was readmitted with hypertension, hyperkalemia, worsening creatinine, nephrotic-range proteinuria, chest pressure, and shortness of breath with pleuritic chest pain. Physical examination and laboratory findings were unchanged. She was treated with pulse-dose methylprednisolone followed by a taper, and multiple anti-hypertensive agents including captopril, for presumed lupus nephritis flare versus SRC. Renal biopsy was consistent with SRC and class IV lupus nephritis, and she was started on cyclophosphamide. Cardiac biopsy showed borderline myocarditis without necrosis, with cytoplasmic vacuolization consistent with HCQ cardiomyopathy; hence, HCQ was discontinued. Summary: This case highlights a rare association of leuprolide with exacerbation of lupus nephritis or SRC. Although rare, it reinforces the importance of close monitoring for HCQ toxicity in patients with renal insufficiency.

Keywords: leuprolide, lupus nephritis, scleroderma, SLE

Procedia PDF Downloads 68
247 Reliability of Clinical Coding in Accurately Estimating the Actual Prevalence of Adverse Drug Event Admissions

Authors: Nisa Mohan

Abstract:

Adverse drug event (ADE) related hospital admissions are common among older people. The first step in prevention is accurately estimating the prevalence of ADE admissions, and clinical coding is an efficient method for doing so. The objectives of the study were to estimate the rate of under-coding of ADE admissions in older people in New Zealand and to explore how clinical coders decide whether or not to code an admission as an ADE. There has not been any research in New Zealand exploring these areas. This study used a mixed-methods approach. Two common and serious ADEs in older people, namely bleeding and hypoglycaemia, were selected for the study. In study 1, eight hundred medical records of people aged 65 years and above who were admitted to hospital due to bleeding or hypoglycaemia during 2015-2016 were selected for a quantitative retrospective medical records review, in order to estimate the proportion of ADE-related bleeding and hypoglycaemia admissions that were not coded as ADEs. These files were reviewed, and each admission was recorded as caused by an ADE or not. The hospital discharge data were then checked to establish whether all the ADE admissions identified in the records review were coded as ADEs, and the proportion of under-coded ADE admissions was estimated. In study 2, thirteen clinical coders were selected for qualitative semi-structured interviews using a general inductive approach. Participants were selected purposively based on their experience in clinical coding, and interview questions were designed to investigate the reasons for the under-coding of ADE admissions. The records review showed that 35% (CI 28%-44%) of the ADE-related bleeding admissions and 22% of the ADE-related hypoglycaemia admissions were not coded as ADEs. Although the quality of clinical coding is high across New Zealand, a substantial proportion of ADE admissions were under-coded.
This shows that clinical coding may under-estimate the actual prevalence of ADE-related hospital admissions in New Zealand. The interviews with the clinical coders added that lack of time to search for information confirming an ADE admission, inadequate communication with clinicians, and coders' belief that an ADE is a minor matter may be the reasons for the under-coding of ADE admissions. This study urges coding policymakers, auditors, and trainers to engage with the unconscious cognitive biases and short-cuts of clinical coders. These results highlight that further work is needed on interventions to improve the clinical coding of ADE admissions, such as educating coders about the importance of ADEs, educating clinicians about the importance of clear and confirmed medical record entries, making pharmacist services available to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary for external causes of disease.
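Interval estimates like the 35% (CI 28%-44%) figure can be reproduced for a sample proportion. Since the abstract does not state which interval method was used, the Wilson score interval below is an assumed choice for illustration:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval (approx. 95% for z = 1.96) for a
    proportion, such as the share of ADE admissions missed by coding."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half
```

For a hypothetical 70 under-coded admissions out of 200 reviewed, `wilson_ci(70, 200)` gives an interval around 0.35 of roughly the width reported above.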

Keywords: adverse drug events, bleeding, clinical coders, clinical coding, hypoglycemia

Procedia PDF Downloads 110
246 Sleep Ecology, Sleep Regulation and Behavior Problems in Maltreated Preschoolers: A Scoping Review

Authors: Sabrina Servot, Annick St-Amand, Michel Rousseau, Valerie Simard, Evelyne Touchette

Abstract:

Child maltreatment has a profound impact on children's development. Among its victims, internalizing and externalizing problems are highly prevalent, and sleep problems are common. Furthermore, the environment they live in is often disorganized, lacking routine and consistency. In non-maltreated children, several studies have documented the important roles of sleep regulation and sleep ecology. Poor sleep ecology (e.g., lack of sleep hygiene and bedtime routine, inappropriate sleeping location) may lead to sleep regulation problems (e.g., short sleep duration, nocturnal awakenings), and sleep regulation problems may increase the risk of behavior problems. Therefore, this scoping review aims to map the evidence about sleep ecology and sleep regulation, and the associations between sleep ecology, sleep regulation, and behavior problems, in maltreated preschoolers. Literature from 1993 onward was searched in PsycInfo, PubMed, Medline, ERIC, and ProQuest Dissertations and Theses. Articles and theses were comprehensively reviewed against inclusion/exclusion criteria: 1) the study concerns maltreated children aged 1-5 years, and 2) it addresses at least one of the following: sleep ecology, sleep regulation, and/or their associations with behavior problems in maltreated preschoolers. Of the 650 studies screened, nine were included. Data were charted according to study characteristics, nature of the variables documented, measures, analyses performed, and results of each study, then synthesized in a narrative summary. All included studies were quantitative. Foster children samples were used in four studies; children had experienced different types of maltreatment in six studies, while one was specifically about sexually abused children. Regarding sleep ecology, only one study describing maltreated preschoolers' sleep ecology was found, while seven studies documented sleep regulation.
Among these seven studies, 17 different sleep variables (e.g., parasomnia, dyssomnia, total 24-h sleep duration) were used, with each study documenting from one to nine of them. Actigraphic measures were employed in three studies; the others used parent-reported questionnaires or sleep diaries. Maltreated children's sleep was described and/or compared to that of non-maltreated children or an intervention group, showing mild differences. As for associations between sleep regulation and behavior problems, five studies investigated them, performing correlational or linear regression analyses between sleep and behavior problems and revealing some significant associations. No study was found on the associations between sleep ecology and sleep regulation, between sleep ecology and behavior problems, or among all three variables. In conclusion, the literature about sleep ecology, sleep regulation, and their associations with behavior problems is far scarcer for maltreated preschoolers than for non-maltreated ones. In particular, there is a paucity of research about sleep ecology and its association with sleep regulation in maltreated preschoolers, while studies of non-maltreated children have shown that sleep ecology plays a major role in sleep regulation. In addition, as sleep regulation is measured in many different ways across the studies, it is difficult to compare their findings. Finally, research should fill these gaps, so that recommendations can be made to clinicians working with maltreated preschoolers regarding the use of sleep ecology and sleep regulation as intervention tools.

Keywords: maltreated preschoolers, sleep ecology, sleep regulation, behavior problems

Procedia PDF Downloads 125
245 Growth and Bone Health in Children following Liver Transplantation

Authors: Faris Alkhalil, Rana Bitar, Amer Azaz, Hisham Natour, Noora Almeraikhi, Mohamad Miqdady

Abstract:

Background: Children with liver transplantation are achieving very good survival, so there is now a need to concentrate on achieving good health in these patients and preventing disease. Immunosuppressive medications have side effects that need to be monitored and, if possible, avoided. Glucocorticoids and calcineurin inhibitors are detrimental to bone and mineral homeostasis; in addition, steroids can also affect linear growth. Steroid-sparing regimens in renal transplant children have been shown to improve children’s height. Aim: We aim to review the growth and bone health of children post liver transplant by measuring bone mineral density (BMD) using dual energy X-ray absorptiometry (DEXA) scans and assessing whether there is a clear link between poor growth, impaired bone health, and use of long-term steroids. Subjects and Methods: This is a single-centre retrospective cohort study; we reviewed the medical notes of children (0-16 years) who underwent liver transplantation between November 2000 and November 2016 and are currently being followed at our centre. Results: 39 patients were identified (25 males and 14 females); the median transplant age was 2 years (range 9 months - 16 years), and the median follow-up was 6 years. Four patients received a combined transplant: 2 a kidney and liver transplant and 2 a liver and small bowel transplant. The indications for transplant included Biliary Atresia (31%), Acute Liver Failure (18%), Progressive Familial Intrahepatic Cholestasis (15%), transplantable metabolic disease (10%), TPN-related liver disease (8%), Primary Hyperoxaluria (5%), Hepatocellular Carcinoma (3%) and other causes (10%). 36 patients (95%) were on a calcineurin inhibitor (34 patients on Tacrolimus and 2 on Cyclosporin). The other three patients were on Sirolimus. Low-dose long-term steroids were used in 21% of the patients. A considerable proportion of the patients had poor growth.
15% were below the 3rd centile of weight for age and 21% were below the 3rd centile of height for age. Most of our patients with poor growth were not on long-term steroids. 49% of patients had a DEXA scan post transplantation. 21% of these children had low bone mineral density; one patient met osteoporosis criteria with a vertebral fracture. Most of our patients with impaired bone health were not on long-term steroids. 20% of the patients who did not undergo a DEXA scan developed long bone fractures, and 50% of them were on long-term steroids, which may suggest impaired bone health in these patients. Summary and Conclusion: The incidence of impaired bone health, although studied in a limited number of patients, was high. Early recognition and treatment should be instituted to avoid fractures and improve bone health. Many of the patients were below the 3rd centile for weight and height; however, there was no clear relationship between steroid use and impaired bone health, reduced weight, or reduced linear height.

Keywords: bone, growth, pediatric, liver, transplantation

Procedia PDF Downloads 261
244 Assessment of Households' Food Security and Hunger Level across Communities in Ile-Ife, Southwestern Nigeria

Authors: Adebayo-Victoria Tobi Dada, Dada Emmanuel

Abstract:

This study assessed households’ food security and hunger levels among different communities with varying educational and economic backgrounds in Ile-Ife, Nigeria, and its environs. It also examined the impacts of varying demography on household food security level in the area. This was with a view to providing information on the food security status of the subjects within the study area. Ten communities with varying demography (Parakin, Mokuro, Ilare, Obafemi Awolowo University (OAU) Staff Quarters, Ibadan Road, Aba-Iya Gani, Eleweran, Iraye, Boosa, and Eku-Isobo) were identified within the study area. Fieldwork was then carried out from 7th to 14th of March, 2016 in each of these communities through surveys of market prices of food stuff, diet and nutrition, social well-being, food accessibility and affordability, as well as price fluctuation and variation in households’ social background. Selection of households for the survey was done using the stratified random sampling method. Key informants included community heads, landlords, tenants, and household heads. Similarly, information on food security levels with respect to demographic background was obtained using a modified Food and Hunger Insecurity Module (FHIM) structured questionnaire. The questionnaire was administered to one percent of the household population per community. The results showed that communities such as Parakin and the OAU Senior Staff Quarters were dominated by civil servants, while communities such as Boosa were dominated by artisans. Respondents earning between ₦11,000 and ₦20,000 per month during the study period constituted the highest percentage across the selected communities. The household food security indices showed that about 41% of the investigated respondents could not guarantee their household food for a month, while 18% reduced or skipped meals.
There were positive significant relationships between monthly income (F-value = 132.04), educational status (F-value = 102.30), occupation (F-value = 104.05) and food budget (F-value = 122.09), all at p < 0.05. However, there was no significant relationship between monthly food budget and household size (t-value = -1.4074, p > 0.05). Food-secure households had household heads with higher levels of educational attainment. The study concluded that the large variations in socio-economic and educational background among the communities had significant effects on households’ food security levels in the study area.
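The F-values reported above come from one-way analyses of variance across groups. A minimal sketch of that computation in pure Python, using entirely hypothetical food-budget scores grouped by income bracket (the paper's raw data are not reproduced here):

```python
from itertools import chain

def one_way_anova_f(groups):
    """One-way ANOVA F-statistic: between-group vs. within-group variance."""
    all_vals = list(chain.from_iterable(groups))
    grand_mean = sum(all_vals) / len(all_vals)
    k = len(groups)            # number of groups
    n = len(all_vals)          # total observations
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical food-budget scores for three income brackets
groups = [[2.0, 2.5, 2.2], [3.1, 3.4, 3.0], [4.2, 4.5, 4.1]]
f = one_way_anova_f(groups)
```

A large F relative to the critical value at p < 0.05 indicates that group means differ more than within-group noise would explain, which is the logic behind the significance claims above.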

Keywords: food security, households, hunger level, market prices

Procedia PDF Downloads 187
243 Relationship between Readability of Paper-Based Braille and Character Spacing

Authors: T. Nishimura, K. Doi, H. Fujimoto, T. Wada

Abstract:

The number of people with acquired visual impairments has increased in recent years. In specialized courses at schools for the blind and in Braille lessons offered by social welfare organizations, many people with acquired visual impairments cannot learn to read Braille adequately. One reason is that the common Braille patterns, designed for people with visual impairments who already have mature Braille reading skills, are difficult for Braille reading beginners to read. In addition, Braille book manufacturing companies have scant knowledge of which Braille patterns are easy for beginners to read. It is therefore necessary to investigate Braille patterns that are easy for beginners to read. In order to obtain such knowledge, this study aimed to elucidate the relationship between the readability of paper-based Braille and its patterns. The study focused on character spacing, which readily affects Braille reading ability, to determine a suitable character spacing ratio (the ratio of character spacing to dot spacing) for beginners. Specifically, considering beginners with acquired visual impairments who are unfamiliar with reading Braille, we quantitatively evaluated the effect of the character spacing ratio on Braille readability through an evaluation experiment using sighted subjects with no experience of reading Braille. In this experiment, ten blindfolded sighted adults were asked to read a test piece (three Braille characters). The Braille used as the test piece was composed of five dots. Subjects were asked to touch the Braille by sliding their forefinger along the test piece immediately after the test examiner gave the signal to start, and to release their forefinger from the test piece when they had perceived the Braille characters.
Seven conditions varied the character spacing ratio (i.e., 1.2, 1.4, 1.5, 1.6, 1.8, 2.0, 2.2), and another four varied the dot spacing (i.e., 2.0, 2.5, 3.0, 3.5 [mm]). Ten trials were conducted for each condition. The test pieces were created using NISE Graphic, which can print Braille with arbitrary values of character spacing and dot spacing at high accuracy. We adopted correct rate, reading time, and subjective readability as evaluation indices to investigate how the character spacing ratio affects Braille readability. The results showed that Braille reading beginners could read Braille accurately and quickly when the character spacing ratio was more than 1.8 and the dot spacing was more than 3.0 mm. Furthermore, it was difficult for beginners to read Braille accurately and quickly when both character spacing and dot spacing were small. This study thus reveals a character spacing ratio that makes reading easy for Braille beginners.
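Two of the evaluation indices named above, correct rate and reading time, can be aggregated per spacing condition along these lines. This is a minimal sketch with hypothetical trial records, not the authors' analysis code:

```python
from collections import defaultdict

def summarize_trials(trials):
    """Aggregate correct rate and mean reading time per spacing condition.

    trials: list of (condition, correct: bool, reading_time_s) tuples.
    """
    by_cond = defaultdict(list)
    for cond, correct, t in trials:
        by_cond[cond].append((correct, t))
    summary = {}
    for cond, recs in by_cond.items():
        correct_rate = sum(1 for c, _ in recs if c) / len(recs)
        mean_time = sum(t for _, t in recs) / len(recs)
        summary[cond] = (correct_rate, mean_time)
    return summary

# Hypothetical trials: (character spacing ratio, correct?, seconds)
trials = [(1.2, False, 9.1), (1.2, True, 8.4), (1.8, True, 4.2), (1.8, True, 3.9)]
s = summarize_trials(trials)
```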

Keywords: Braille, character spacing, people with visual impairments, readability

Procedia PDF Downloads 262
242 Identification of Cocoa-Based Agroforestry Systems in Northern Madagascar: Pillar of Sustainable Management

Authors: Marizia Roberta Rasoanandrasana, Hery Lisy Tiana. Ranarijaona, Herintsitohaina Razakamanarivo, Eric Delaitre, Nandrianina Ramifehiarivo

Abstract:

Madagascar is one of the producer countries of the world’s fine cocoa. Cocoa-based agroforestry systems (CBAS) play a very important economic role for over 75% of the population in the north of Madagascar, the island’s main cocoa-producing area. They are also viewed as a key factor in the deforestation of local protected areas. It is therefore urgent to establish a compromise between cocoa production and forest conservation in this region, which is difficult due to a lack of accurate cocoa agro-system data. In order to fill these gaps and to respond to these socio-economic and environmental concerns, this study aims to describe CBAS by providing precise data on their characteristics and to establish a typology. To achieve this, 150 farms were surveyed and observed to characterize CBAS based on 11 agronomic and 6 socio-economic variables. In addition, 30 representative CBAS plots among the 150 farms were inventoried to provide accurate ecological data (6 variables) as additional input for the typology. The results showed that Madagascar’s CBAS are generally extensive and practiced by smallholders. Four types of cocoa-based agroforestry system were identified, with significant differences between the following variables: yield, planting age, cocoa density, density of associated trees, preceding crop, associated crops, Shannon-Wiener indices and species richness in the upper stratum. Type 1 is characterized by old systems (>45 years) with low crop density (425 cocoa trees/ha), installed after conversion of crops other than coffee (>50%) and giving low yields (427 kg/ha/year). Type 2 consists of simple agroforestry systems (no associated crop, 0%), fairly young (20 years), with a low density of associated trees (77 trees/ha) and low species diversity (H' = 1.17). Type 3 is characterized by high crop density (778 trees/ha and 175 trees/ha for cocoa and associated trees, respectively) and a medium level of species diversity (H' = 1.74, 8 species).
Type 4 is characterized in particular by an orchard regeneration method involving replanting and tree lopping (100%). Analysis of the potential of these four types identified Type 4 as a promising practice for sustainable agriculture.
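The Shannon-Wiener index H' used above to compare species diversity across types is defined as H' = -Σ pᵢ ln pᵢ over species proportions. A minimal sketch with hypothetical stem counts (not the study's inventory data):

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln p_i) over species counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical stem counts for three associated tree species on one plot
counts = [30, 20, 10]
h = shannon_wiener(counts)
```

With equal abundances over S species, H' reaches its maximum ln(S), so the H' = 1.17 vs. 1.74 contrast above reflects both species richness and evenness.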

Keywords: conservation, practices, productivity, protected areas, smallholder, trade-off, typology

Procedia PDF Downloads 72
241 Recycling Waste Product for Metal Removal from Water

Authors: Saidur R. Chowdhury, Mamme K. Addai, Ernest K. Yanful

Abstract:

The research was performed to assess the potential of nickel smelter slag, an industrial waste, as an adsorbent for the removal of metals from aqueous solution. An investigation was carried out for arsenic (As), copper (Cu), lead (Pb) and cadmium (Cd) adsorption from aqueous solution. The smelter slag was obtained from Ni ore at the Vale Inco Ni smelter in Sudbury, Ontario, Canada. Batch experimental studies were conducted to evaluate the removal efficiencies of the smelter slag. The slag was characterized by surface analytical techniques; it contained different iron oxide and iron silicate bearing compounds. In this study, the effects of pH, contact time, particle size, competition by other ions, slag dose and distribution coefficient were evaluated to determine the optimum adsorption conditions of the slag as an adsorbent for As, Cu, Pb and Cd. The results showed 95-99% removal of As, Cu and Pb, and almost 50-60% removal of Cd, when batch experiments were conducted at 5-10 mg/L initial metal concentration, 10 g/L slag dose, 10 hours of contact time, 170 rpm shaking speed and 25 °C. The maximum removal of As, Cu and Pb was achieved at pH 5, while the maximum removal of Cd was found above pH 7. A column experiment was also conducted to evaluate adsorption depth and service time for metal removal. This study also determined adsorption capacity, adsorption rate and mass transfer rate. The maximum adsorption capacity was found to be 3.84 mg/g for As, 4 mg/g for Pb, and 3.86 mg/g for Cu. The adsorption capacities of nickel slag for the four test metals were in the decreasing order Pb > Cu > As > Cd. Modelling of the experimental data with Visual MINTEQ revealed saturation indices of < 0 in all cases, suggesting that the metals at these pH values were undersaturated and thus in their aqueous forms. This confirms the absence of precipitation in the removal of these metals at these pH values.
The experimental results also showed that Fe and Ni leaching from the slag during the adsorption process was very minimal, ranging from 0.01 to 0.022 mg/L, indicating the slag’s potential as an adsorbent in the treatment industry. The study also revealed that the waste product (Ni smelter slag) can be reused about five times before disposal in a landfill or use as a stabilization material. It highlights recycled slags as a potential reactive adsorbent in the field of remediation engineering and explores the benefits of using renewable waste products for the water treatment industry.
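For reference, the adsorption capacities reported in mg/g follow from the standard batch mass balance qₑ = (C₀ − Cₑ)·V/m. A sketch with hypothetical batch values chosen for illustration (not the study's measurements):

```python
def adsorption_capacity(c0_mg_per_l, ce_mg_per_l, volume_l, mass_g):
    """Equilibrium adsorption capacity q_e = (C0 - Ce) * V / m, in mg/g."""
    return (c0_mg_per_l - ce_mg_per_l) * volume_l / mass_g

def removal_percent(c0, ce):
    """Percent of the metal removed from solution."""
    return 100.0 * (c0 - ce) / c0

# Hypothetical batch: 10 mg/L initial, 0.4 mg/L at equilibrium, 1 L solution, 10 g slag
q = adsorption_capacity(10.0, 0.4, 1.0, 10.0)
r = removal_percent(10.0, 0.4)
```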

Keywords: adsorption, industrial waste, recycling, slag, treatment

Procedia PDF Downloads 124
240 Optimization Principles of Eddy Current Separator for Mixtures with Different Particle Sizes

Authors: Cao Bin, Yuan Yi, Wang Qiang, Amor Abdelkader, Ali Reza Kamali, Diogo Montalvão

Abstract:

The study of the electrodynamic behavior of non-ferrous particles in time-varying magnetic fields is a promising area of research with wide applications, including recycling of non-ferrous metals, mechanical transmission, and space debris. The key technology for recovering non-ferrous metals is eddy current separation (ECS), which utilizes the eddy current force and torque to separate non-ferrous metals. ECS has several advantages, such as low energy consumption, large processing capacity, and no secondary pollution, making it suitable for processing various mixtures like electronic scrap, auto shredder residue, aluminum scrap, and incineration bottom ash. Improving the separation efficiency of mixtures with different particle sizes in ECS can create significant social and economic benefits. Our previous study investigated the influence of particle size on separation efficiency by combining numerical simulations and separation experiments. Pearson correlation analysis found a strong correlation between the eddy current force in simulations and the repulsion distance in experiments, which confirmed the effectiveness of our simulation model. The interaction effects between particle size and material type, rotational speed, and magnetic pole arrangement were examined. These results offer valuable insights for the design and optimization of eddy current separators. The underlying mechanism behind the effect of particle size on separation efficiency was uncovered by analyzing the eddy current and field gradient. The results showed that the magnitude and distribution heterogeneity of the eddy current and magnetic field gradient increase with particle size in eddy current separation. Based on this, we further found that increasing the curvature of magnetic field lines within particles can also increase the eddy current force, providing an optimized approach to improving the separation efficiency of fine particles.
By combining the results of these studies, a more systematic and comprehensive set of optimization guidelines can be proposed for mixtures with different particle size ranges. The separation efficiency of fine particles can be improved by increasing the rotational speed, the curvature of the magnetic field lines, and the electrical conductivity/density of the materials, as well as by utilizing the eddy current torque. When designing an ECS, the particle size range of the target mixture should be investigated in advance, and suitable parameters for separating the mixture can be fixed accordingly. In summary, these results can guide the design and optimization of ECS and also expand its application areas.
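The Pearson correlation used above to relate simulated eddy current force to measured repulsion distance can be sketched as follows; the force and distance values are hypothetical, chosen only to illustrate a near-linear relationship:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical: simulated eddy current force (mN) vs. measured repulsion distance (cm)
force = [1.0, 2.1, 3.0, 4.2]
dist = [5.0, 9.8, 15.1, 20.3]
r = pearson_r(force, dist)
```

An r close to 1 is what supports using the simulated force as a proxy for experimental separability.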

Keywords: eddy current separation, particle size, numerical simulation, metal recovery

Procedia PDF Downloads 57
239 Constructing Evaluation Indicators for the Supply of Urban-Friendly Shelters from the Perspective of the Needs of Elderly People in Taiwan

Authors: Chuan-Ming Tung, Tzu-Chiao Yuan

Abstract:

This research aims to construct supply indicators and weights for shelter space from the perspective of the needs of the elderly by means of a literature review, a systematic compilation of related regulations, and the Analytic Hierarchy Process method, with questionnaires on the indicators completed by 16 experts and scholars. The researchers then used 3 schools and 2 activity centers in Banqiao District, New Taipei City, as study cases to evaluate the degree of friendliness of the supply of shelters meeting the needs of elderly people. The supply evaluation indicators of friendly shelters meeting the needs of the elderly comprise "Administrative Operations and Service Needs" and "Residence-related and Living Needs". Under "Administrative Operations and Service Needs" are "Management Operations and Information Provision", "Shelter Space Preparedness and Logistics Support", "Medical Care and Social Support", and "Shelters and Medical Environment", a total of 17 assessment items in four indicators, while under "Residence-related and Living Needs" are "Dietary Needs", "Sleep Needs", "Hygiene and Sanitation Needs", "Accessibility and Convenience Needs", etc., a total of 18 assessment items in four indicators. The results show that "Residence-related and Living Needs" is the most important item at the main level of the supply indicators (weight value 0.5504), followed by "Administrative Operations and Service Needs" (0.4496). The order of importance of the supply indicators of friendly shelters for the needs of elderly people is as follows: "Hygiene and Sanitation Needs" (0.1721), "Dietary Needs" (0.1340), "Medical Care and Social Support" (0.1300), "Sleep Needs" (0.1277), "Accessibility and Convenience Needs" (0.1166), "Basic Environment of Shelters" (0.1145), "Shelter Space Preparedness and Logistics Support" (0.1115) and "Management Operations and Information Provision" (0.0936).
In addition, the case evaluations show that the provision of refuges and shelters, mainly in schools and activity centers, is extremely inadequate for the needs of the elderly. In a comprehensive comparison, the evaluation indicators of refuges and shelters most in need of improvement are "Medical Care and Social Support", "Hygiene and Sanitation Needs", "Sleep Needs", "Dietary Needs", and "Shelter Space Preparedness and Logistics Support".
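The AHP weights reported above are derived from expert pairwise comparison matrices. A minimal sketch of the normalized-column-mean approximation of the priority vector, using a hypothetical 2x2 comparison of the two main criteria (the 1.2 ratio is illustrative, not the experts' actual judgment):

```python
def ahp_weights(pairwise):
    """Approximate AHP priority vector by the normalized-column-mean method."""
    n = len(pairwise)
    col_sums = [sum(row[j] for row in pairwise) for j in range(n)]
    normalized = [[pairwise[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(normalized[i]) / n for i in range(n)]

# Hypothetical: criterion A judged 1.2 times as important as criterion B
# (reciprocal entry 1/1.2 keeps the matrix consistent)
pairwise = [[1.0, 1.2],
            [1 / 1.2, 1.0]]
w = ahp_weights(pairwise)
```

The weights sum to 1, so they can be read directly as relative importances, as in the 0.5504 / 0.4496 split reported above.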

Keywords: needs of elderly people, urban shelters, evaluation indicators/indices, Taiwan

Procedia PDF Downloads 56
238 The Development of a Group Counseling Program Based on Person-Centered Theory for Elderly People's Caregivers to Promote the Resilience Quotient in Elderly People

Authors: Jirapan Khruesarn, Wimwipa Boonklin

Abstract:

Background: Thailand currently has an aging population. In 2017, the elderly population was over 11.14 million, and the number of elderly people will continue to increase (8.39 million). Some elderly people grumble to themselves and have conflicts with their offspring or those close to them, which is a source of stress. Mental health promotion should therefore be provided to the elderly to help them cope with these changes. Given the family structure of Thai society, family members usually act as caregivers for the elderly. Therefore, a group counseling program based on Person-Centered Theory was developed for caregivers of the elderly, to promote the mental health of older people in Na Kaeo Municipality, Kau Ka District, Lampang Province, and to compare elderly care behavior before and after participation. Methods: This study involved 20 caregivers of elderly people and compared their behavior before and after use of the group program, by the following methods. Step 1: Establish a framework for evaluating elderly care behaviors and develop a group counseling program to promote mental health for the elderly in four areas: 1) body, 2) willpower, 3) social and community management, and 4) organizing the learning process. Step 2: Assess elderly care behaviors using "The behavior assessment on caring for the elderly", assess the mental health power level of the elderly, run the counseling program over 9 sessions, and compare elderly care behaviors before and after joining the group program, as well as the mental health level of the elderly before and after their caregivers attended the program. Results: This study developed a group counseling program to promote the resilience quotient in elderly people, and the results can be summarized as follows: 1) Before the elderly's caregivers joined the group counseling program, mental health promotion behaviors toward the elderly were at a high level (3.32); after, they were at a high level (3.44).
2) Before the elderly's caregivers attended the group counseling program, the mean mental health score of the elderly was 47.85 percent with a standard deviation of 0.21 percent; after, the elderly had a higher score of 51.45 percent. In summary, after the caregivers joined the group, the elderly scored higher in all aspects of mental health promotion, with statistical significance at the 0.05 level. This shows that the program fits personal and community conditions in promoting the mental health of the elderly, because the underlying theory holds that humans have the ability to use their intelligence to solve problems or make decisions effectively, and members of the group counseling program ventured to express grievances while the counselor acted as a facilitator focusing on personal development by building relationships among people. In other words, the factor contributing to higher levels of elderly care behaviors is group counseling, which is not a hypothetical process but focuses on building relationships based on mutual trust and unconditional acceptance.
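A before/after comparison of the kind reported at the 0.05 level is commonly tested with a paired t-test. A minimal sketch with hypothetical caregiver-rated scores (the study's raw data are not reproduced here):

```python
import math

def paired_t(before, after):
    """Paired t statistic for before/after scores on the same subjects."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical scores before and after the nine group sessions (5 subjects)
before = [46, 48, 47, 49, 48]
after = [50, 52, 51, 52, 51]
t = paired_t(before, after)
```

With 4 degrees of freedom here, any t above the two-sided critical value 2.776 is significant at the 0.05 level.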

Keywords: group counseling based on person-centered theory, elderly person, resilience quotient (RQ), caregiver

Procedia PDF Downloads 65
237 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper discusses recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft and onboard processing capabilities. Space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements of MPC are discussed: the prediction model, the constraint formulation and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities on inputs or outputs, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometrical constraints (i.e., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on solving constrained optimization problems in real time, computational aspects are also examined.
In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements of sensors and actuators on the HIL experiment outputs. HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Finally, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with a conjecture that the MPC paradigm is a promising framework at the crossroads of space applications, which could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
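As a compact reference for the generic constrained MPC formulation summarized above, one common linear form over a horizon N is the following; the weights Q, R, P and the constraint data H, h are generic placeholders, not values taken from the paper:

```latex
\begin{aligned}
\min_{u_0,\dots,u_{N-1}} \quad
  & \sum_{k=0}^{N-1} \left( x_k^\top Q\, x_k + u_k^\top R\, u_k \right)
    + x_N^\top P\, x_N \\
\text{s.t.} \quad
  & x_{k+1} = A_k x_k + B_k u_k, \qquad x_0 = x(t), \\
  & u_{\min} \le u_k \le u_{\max}, \qquad H x_k \le h .
\end{aligned}
```

The prediction matrices A_k, B_k are constant for a circular orbit (linear time-invariant) and time-varying for an elliptic one, matching the distinction drawn above; the inequality H x_k ≤ h is where convexified geometric constraints such as plume impingement or FOV cones enter.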

Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy

Procedia PDF Downloads 92
236 Changes in Geospatial Structure of Households in the Czech Republic: Findings from Population and Housing Census

Authors: Jaroslav Kraus

Abstract:

Spatial information about demographic processes is a standard part of statistical outputs in the Czech Republic. That was also the case for the Population and Housing Census held in 2011, which is the starting point for a follow-up study devoted to two basic types of households: single-person households and households of one complete family. Together they make up more than 80 percent of all households, but their share and spatial structure have been changing over the long term. The increase in single-person households results from the long-term decrease in fertility and increase in divorce, but also from the possibility of living separately. There are regions in the Czech Republic with traditional demographic behavior, and regions, like the capital Prague and some others, with changing patterns. The population census is based, according to international standards, on the concept of the currently living population. Three types of geospatial approaches will be used for the analysis: (i) measures of geographic distribution; (ii) mapping clusters to identify the locations of statistically significant hot spots, cold spots, spatial outliers, and similar features; and (iii) an analyzing-pattern approach as a starting point for more in-depth analyses (geospatial regression) in the future. For this type of data, the numbers of households by type should be treated as distinct objects, and all events in a meaningfully delimited study region (e.g., municipalities) will be included in the analysis. Commonly produced measures of central tendency and spread will include identification of the location of the center of the point set (at NUTS3 level) and identification of the median center; standard distance, weighted standard distance and standard deviational ellipses will also be used.
Identifying that clustering exists in the census household datasets does not provide a detailed picture of the nature and pattern of that clustering, but it will be helpful to apply simple hot-spot (and cold-spot) identification techniques to such datasets. Once the spatial structure of households is determined, a particular measure of autocorrelation can be constructed by defining a way of measuring the difference between location attribute values. The most widely used measure is Moran’s I, which will be applied to municipal units for which the numerical ratio is calculated. Local statistics arise naturally out of any of the methods for measuring spatial autocorrelation and will be applied to develop localized variants of almost any standard summary statistic. Local Moran’s I will give an indication of the homogeneity and diversity of the household data at the municipal level.
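Global Moran's I, mentioned above, can be computed directly from municipal attribute values and a spatial weights matrix. A minimal sketch with four hypothetical municipalities arranged on a line (rook contiguity weights), not the census data:

```python
def morans_i(values, weights):
    """Global Moran's I for values on n areal units with spatial weights w[i][j]."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(weights[i][j] for i in range(n) for j in range(n))
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

# Hypothetical share of single-person households in four municipalities on a line
values = [0.40, 0.42, 0.25, 0.23]
weights = [[0, 1, 0, 0],
           [1, 0, 1, 0],
           [0, 1, 0, 1],
           [0, 0, 1, 0]]
mi = morans_i(values, weights)
```

A positive value (neighbors resemble each other, as here) indicates spatial clustering; values near zero indicate spatial randomness.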

Keywords: census, geo-demography, households, the Czech Republic

Procedia PDF Downloads 83
235 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment

Authors: Ella Sèdé Maforikan

Abstract:

Accurate land cover mapping is essential for effective environmental monitoring and natural resource management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification was performed, categorizing the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen for its high-performance computing capabilities, which mitigate the computational burdens associated with traditional land cover classification methods. By eliminating the need to download individual satellite images and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise land cover maps expeditiously but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy.
As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
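The OA and Kappa pairs reported above are both derived from a validation confusion matrix. A minimal sketch of that derivation with hypothetical counts (not the study's validation set):

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix (rows = reference)."""
    n = sum(sum(row) for row in cm)
    k = len(cm)
    oa = sum(cm[i][i] for i in range(k)) / n
    # Expected chance agreement from row and column marginals
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(k)) / n ** 2
    return oa, (oa - pe) / (1 - pe)

# Hypothetical 3-class validation counts (e.g., forest / cropland / water)
cm = [[45, 3, 2],
      [4, 40, 1],
      [1, 2, 42]]
oa, kappa = accuracy_and_kappa(cm)
```

Kappa discounts chance agreement, which is why it is always somewhat lower than OA, as in the 91%/0.88 and 90%/0.87 pairs above.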

Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment

Procedia PDF Downloads 35
234 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)

Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg

Abstract:

One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride are responsible for the development of dental and crippling skeletal fluorosis. In poor urban and rural settings, the provision of drinking water free of geogenic contamination can be a major challenge. In order to efficiently apply limited resources in the testing of wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements using predictor variables of various soil, geological and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables that they wish to incorporate in their models. 
In addition, users can upload additional predictor variable datasets either as features or coverages. Such models can represent an improvement over the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, thereby resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data in a secure environment as well as the possibility to collaborate in a secure environment through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure.
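The hazard maps report the probability that a contaminant exceeds its WHO limit, as produced by the fitted logistic regression. A minimal sketch of how such a probability is computed from model coefficients (the intercept, weights and predictor values below are hypothetical, purely for illustration):

```python
import math

def exceedance_probability(intercept, coeffs, predictors):
    """Logistic-regression probability that a contaminant (e.g. arsenic)
    exceeds its limit, given soil/geology/climate predictor values.
    All numeric inputs in this sketch are hypothetical."""
    z = intercept + sum(b * x for b, x in zip(coeffs, predictors))
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) link

# Hypothetical model: weights for pH, aridity index and clay fraction
p = exceedance_probability(-2.0, [0.9, 1.2, -0.4], [1.0, 0.8, 0.5])
print(f"P(As > 10 ug/L) = {p:.3f}")
```

In the GAP workflow the coefficients come from regressing binary exceed/not-exceed well measurements on the predictor rasters; mapping this probability cell by cell yields the hazard surface.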

Keywords: arsenic, fluoride, groundwater contamination, logistic regression

Procedia PDF Downloads 320
233 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids

Authors: S. Gariani, I. Shyha

Abstract:

Cutting titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life and high machining costs. This is due to excessive heat generation at the cutting zone and difficulties in heat dissipation owing to the relatively low thermal conductivity of this metal. Cooling applications in machining processes are crucial, as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, and enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral oil-based, synthetic and semi-synthetic fluids are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial to industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants, due to a combination of biodegradability, good lubricating properties, low toxicity, high flash points, low volatility, high viscosity indices and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong and durable lubricant films, which give the vegetable oil base stock a greater capability to absorb pressure and a high load-carrying capacity. This paper details preliminary experimental results when turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials and working conditions was investigated. A full factorial experimental design involving 24 tests was employed to evaluate the influence of process variables on average surface roughness (Ra), tool wear and chip formation. In general, Ra varied between 0.5 and 1.56 µm; the Vasco1000 cutting fluid presented performance comparable with the other fluids in terms of surface roughness, while the uncoated coarse-grain WC carbide tool achieved lower flank wear at all cutting speeds.
On the other hand, all tool tips were subjected to uniform flank wear throughout the cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size when higher cutting speeds were used.
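A full factorial design runs every combination of factor levels, so 24 tests can arise, for example, from four fluids × three tools × two speeds. The abstract does not list the factors or levels, so those below are hypothetical; the sketch only illustrates how such a design is enumerated:

```python
from itertools import product

# Hypothetical factor levels; the abstract states only that the full
# factorial design comprised 24 tests.
fluids = ["Vasco1000", "VO-A", "VO-B", "VO-C"]            # 4 levels
tools = ["uncoated coarse WC", "coated WC", "cermet"]     # 3 levels
speeds_m_min = [60, 90]                                   # 2 levels

# Every combination of levels = one test condition
runs = list(product(fluids, tools, speeds_m_min))
print(f"{len(runs)} test conditions")  # 4 * 3 * 2 = 24
```

Running all combinations (rather than varying one factor at a time) is what lets the analysis separate main effects on Ra, tool wear and chip formation from factor interactions.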

Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions

Procedia PDF Downloads 254
232 Dynamic Conformal Arc versus Intensity Modulated Radiotherapy for Image Guided Stereotactic Radiotherapy of Cranial Lesion

Authors: Chor Yi Ng, Christine Kong, Loretta Teo, Stephen Yau, FC Cheung, TL Poon, Francis Lee

Abstract:

Purpose: Dynamic conformal arc (DCA) and intensity modulated radiotherapy (IMRT) are two treatment techniques commonly used for stereotactic radiosurgery/radiotherapy of cranial lesions. IMRT plans usually give better dose conformity, while DCA plans have better dose fall-off. Rapid dose fall-off is preferred for radiotherapy of cranial lesions, but dose conformity is also important. For certain lesions, DCA plans have good conformity, while for others the conformity of DCA plans is unacceptable and IMRT has to be used. The choice between the two may not be apparent until each plan is prepared and its dose indices compared. We describe a deviation index (DI), a measurement of the deviation of the target shape from a sphere, and test its usefulness for choosing between the two techniques. Method and Materials: From May 2015 to May 2017, our institute performed stereotactic radiotherapy for 105 patients, treating a total of 115 lesions (64 DCA plans and 51 IMRT plans). Patients were treated on the Varian Clinac iX with HDMLC. The Brainlab ExacTrac system was used for patient setup, and treatment planning was done with Brainlab iPlan RT Dose (Version 4.5.4). DCA plans were found to give better dose fall-off in terms of R50% (R50%(DCA) = 4.75 vs. R50%(IMRT) = 5.242), while IMRT plans had better conformity in terms of treatment volume ratio (TVR) (TVR(DCA) = 1.273 vs. TVR(IMRT) = 1.222). The deviation index (DI) is proposed to better facilitate the choice between the two techniques. DI is the ratio of the volume of a 1 mm shell of the PTV to the volume of a 1 mm shell of a sphere of identical volume. DI will be close to 1 for a near-spherical PTV, while a large DI implies a more irregular PTV. To study the functionality of DI, 23 cases were chosen with PTV volumes ranging from 1.149 cc to 29.83 cc and DI ranging from 1.059 to 3.202. For each case, we prepared a nine-field IMRT plan with one-pass optimization and a five-arc DCA plan.
The TVR and R50% of each case were then compared and correlated with the DI. Results: For the 23 cases, the TVRs and R50% values of the DCA and IMRT plans were examined. Conformity was better for the IMRT plans, with the majority of TVR(DCA)/TVR(IMRT) ratios > 1 (values ranging from 0.877 to 1.538), while dose fall-off was better for the DCA plans, with the majority of R50%(DCA)/R50%(IMRT) ratios < 1. Their correlations with DI were also studied. A strong positive correlation was found between the ratio of TVRs and DI (correlation coefficient = 0.839), while the correlation between the ratio of R50% values and DI was insignificant (correlation coefficient = -0.190). Conclusion: The results suggest DI can be used as a guide for choosing the planning technique. Above a certain DI, the conformity of DCA plans can be expected to become unacceptable, and IMRT will be the technique of choice.
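The deviation index can be sketched for an idealised ellipsoidal PTV: for a thin 1 mm shell, the index essentially compares the PTV's surface area with that of an equal-volume sphere. The sketch below is a geometric approximation (growing each semi-axis by 1 mm stands in for a true normal offset); a clinical DI would be computed from the contoured structure in the planning system:

```python
import math

T = 0.1  # shell thickness: 1 mm = 0.1 cm

def sphere_shell_volume(volume_cc):
    """Volume of a 1 mm shell grown outward from a sphere of given volume."""
    r = (3.0 * volume_cc / (4.0 * math.pi)) ** (1.0 / 3.0)
    return (4.0 / 3.0) * math.pi * ((r + T) ** 3 - r ** 3)

def ellipsoid_shell_volume(a, b, c):
    """Approximate 1 mm shell for an ellipsoid with semi-axes a, b, c (cm),
    obtained by growing each semi-axis by the shell thickness."""
    vol = lambda x, y, z: (4.0 / 3.0) * math.pi * x * y * z
    return vol(a + T, b + T, c + T) - vol(a, b, c)

def deviation_index(a, b, c):
    """DI = (1 mm shell of the PTV) / (1 mm shell of an equal-volume sphere).
    The PTV is idealised here as an ellipsoid."""
    ptv_volume = (4.0 / 3.0) * math.pi * a * b * c
    return ellipsoid_shell_volume(a, b, c) / sphere_shell_volume(ptv_volume)

print(round(deviation_index(1.0, 1.0, 1.0), 3))  # spherical PTV -> 1.0
print(round(deviation_index(2.0, 1.0, 0.5), 3))  # elongated PTV -> > 1
```

A spherical target gives DI = 1 by construction, and the more elongated or irregular the target, the larger its shell relative to the equal-volume sphere, matching the abstract's interpretation of DI.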

Keywords: cranial lesions, dynamic conformal arc, IMRT, image guided radiotherapy, stereotactic radiotherapy

Procedia PDF Downloads 219
231 An Exploration of the Experiences of Women in Polygamous Marriages: A Case Study of Matizha Village, Masvingo, Zimbabwe

Authors: Flora Takayindisa, Tsoaledi Thobejane, Thizwilondi Mudau

Abstract:

This study highlights what people in polygamous marriages face on a daily basis. It argues that women in polygamous marriages face more disadvantages than their counterparts in monogamous relationships. The study further suggests that the patriarchal power structure takes a powerful and effective role in polygamous marriages in our societies, particularly in Zimbabwe, where this study took place. The study explored the intricacies of polygamous marriages and how these dominances can be resolved. The research is therefore presented through the ‘lived realities’ of the affected women in polygamous marriages in Gutu District, located in Masvingo Province of Zimbabwe. Polygamous marriages are practised in different societies. Some women living a polygamous lifestyle are emotionally and physically abused in their relationships. Evidence also suggests that children from polygamous marriages suffer psychologically when their fathers take other wives. Relationships within the family are very difficult because of the husband’s seeming favouritism for one wife. Children are most affected by disputes between co-wives, and they often lack quality time with their fathers. There are mixed feelings about polygamous marriage: some people condemn it as inhumane, but consideration must also be given to what it might mean for people who have no choice of any other form of marriage. It should also be noted that polygamous marriages are not always negative; some positive things result from them. The study was conducted in a village called Matizha. A qualitative research approach was employed to stimulate awareness of the social, cultural, religious and economic factors at play in polygamous marriages. This approach facilitates a unique understanding of the experiences of women in polygamous marriages, both negative and positive.
The qualitative research method enabled the respondents to be open-minded when they were asked questions. The researcher utilised feminist theory in the study and employed guided interviews to acquire information from the participants. The study describes the participants who took part, how they were selected, ethical considerations, data collection, the interview process, the research instruments and a summary. The data were obtained using a guided interview with respondents of all ages who are in polygamous marriages. The researcher presented the demographic information of the participants, and thereafter other aspects of the data, such as social factors, economic factors and religious affiliation. The conclusions and recommendations are drawn from the four main themes that emerged from the discussions. Recommendations were made for the women, for the policies and laws affecting women, and finally for future research. It is believed that the overall objectives of the study have been met and that the research questions have been answered based on the findings discussed.

Keywords: co-wives, egalitarianism, experiences, polyandry, polygamy, woman

Procedia PDF Downloads 233
230 Development of a Risk Disclosure Index and Examination of Its Determinants: An Empirical Study in Indian Context

Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav

Abstract:

Worldwide, regulators, practitioners and researchers view risk disclosure as one of the most important steps that will promote corporate accountability and transparency. Recognizing this growing significance of risk disclosures, the paper first develops a risk disclosure index. Covering 69 risk items/themes, this index is developed by employing thematic content analysis and encompasses three attributes of disclosure: namely, nature (qualitative or quantitative), time horizon (backward-looking or forward-looking) and tone (no impact, positive impact or negative impact). As the focus of the study is on substantive rather than symbolic disclosure, content analysis has been carried out manually. The study is based on non-financial companies of the Nifty500 index and covers a ten-year period from April 1, 2005 to March 31, 2015, thus yielding 3,872 annual reports for analysis. The analysis reveals that (on average) only about 14% of risk items (i.e. about 10 of the 69 risk items studied) are being disclosed by Indian companies. Risk items that are frequently disclosed are mostly macroeconomic in nature, and their disclosures tend to be qualitative, forward-looking and conveying both positive and negative aspects of the concerned risk. The second objective of the paper is to gauge the factors that affect the level of disclosures in annual reports. Given the panel nature of the data, and possible endogeneity amongst variables, Diff-GMM regression has been applied. The results indicate that the age and size of firms have a significant positive impact on disclosure quality, whereas growth rate does not have a significant impact. Further, the post-recession period (2009-2015) has witnessed significant improvement in the quality of disclosures. In terms of corporate governance variables, board size, board independence, CEO duality, the presence of a CRO and the constitution of a risk management committee appear to be significant factors in determining the quality of risk disclosures.
It is noteworthy that the study contributes to the literature by putting forth a variant of existing disclosure indices that captures not only the quantity but also the quality of disclosures (in terms of semantic attributes). The study is also a first-of-its-kind attempt in a prominent emerging market, i.e. India. It is therefore expected to facilitate regulators in mandating and regulating risk disclosures, and companies in their endeavor to reduce information asymmetry.
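The index's basic quantity dimension, the share of the 69 items a report discloses, can be sketched as a simple coding structure; the item names and attribute codes below are hypothetical, and the real index additionally scores each item's semantic attributes:

```python
# Each disclosed risk item is coded on three attributes, following the
# index design: nature (qualitative/quantitative), time horizon
# (backward/forward-looking) and tone. Item names/codes are hypothetical.
TOTAL_ITEMS = 69

disclosures = {
    "exchange-rate risk":   {"nature": "qualitative",  "horizon": "forward",  "tone": "negative"},
    "interest-rate risk":   {"nature": "quantitative", "horizon": "backward", "tone": "negative"},
    "commodity-price risk": {"nature": "qualitative",  "horizon": "forward",  "tone": "positive"},
}

def disclosure_level(disclosed):
    """Fraction of the 69 risk items disclosed in one annual report."""
    return len(disclosed) / TOTAL_ITEMS

level = disclosure_level(disclosures)
print(f"{level:.1%} of risk items disclosed")
```

Averaging this fraction across the 3,872 reports is what yields the roughly 14% figure reported in the abstract.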

Keywords: risk disclosure, voluntary disclosures, corporate governance, Diff-GMM

Procedia PDF Downloads 142
229 Distributional and Developmental Analysis of PM2.5 in Beijing, China

Authors: Alexander K. Guo

Abstract:

PM2.5 poses a large threat to people’s health and the environment and is an issue of great concern in Beijing, brought to the government’s attention by the media. Both the United States Embassy in Beijing and the government of China have increased monitoring of PM2.5 in recent years and have made real-time data available to the public. This report utilizes hourly historical data (2008-2016) from the U.S. Embassy in Beijing for the first time. The first objective was to fit probability distributions to the data to better predict the number of days exceeding the standard; the second was to uncover any yearly, seasonal, monthly, daily and hourly patterns and trends that may inform air pollution control policy. In these data, 66,650 hours and 2,687 days provided valid measurements. Lognormal, gamma and Weibull distributions were fit to the data through parameter estimation, and the chi-squared test was employed to compare the actual data with the fitted distributions. The data were also used to uncover trends, patterns and improvements in PM2.5 concentration over the period with valid data, in addition to specific periods of time that received large amounts of media attention, analyzed to gain a better understanding of the causes of air pollution. The data show a clear indication that Beijing’s air quality is unhealthy, with an average of 94.07 µg/m³ across all 66,650 hours with valid data. No single distribution fit the entire dataset of 2,687 days well, but each of the three distribution types was optimal in at least one of the yearly datasets, with the lognormal distribution fitting recent years better. An improvement in air quality beginning in 2014 was discovered, with the first five months of 2016 reporting an average PM2.5 concentration 23.8% lower than the average of the same period across all years, perhaps the result of various new pollution-control policies.
It was also found that the winter and fall months contained more days in both good and extremely polluted categories, leading to a higher average but a comparable median in these months. Additionally, the evening hours, especially in the winter, reported much higher PM2.5 concentrations than the afternoon hours, possibly due to the prohibition of trucks in the city in the daytime and the increased use of coal for heating in the colder months when residents are home in the evening. Lastly, through analysis of special intervals that attracted media attention for either unnaturally good or bad air quality, the government’s temporary pollution control measures, such as more intensive road-space rationing and factory closures, are shown to be effective. In summary, air quality in Beijing is improving steadily and do follow standard probability distributions to an extent, but still needs improvement. Analysis will be updated when new data become available.
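Fitting a lognormal by matching moments on the log scale is a common first step for concentration data like these. The sketch below uses synthetic data (not the embassy measurements) and hypothetical parameters to show the estimation and the implied exceedance probability for a given limit:

```python
import math
import random
import statistics

random.seed(42)
# Synthetic hourly PM2.5 concentrations (ug/m3); NOT the embassy data.
sample = [random.lognormvariate(4.2, 0.8) for _ in range(10_000)]

# Method-of-moments fit on the log scale: mu and sigma of log-concentration
logs = [math.log(x) for x in sample]
mu, sigma = statistics.fmean(logs), statistics.stdev(logs)

def prob_exceed(limit, mu, sigma):
    """P(X > limit) for a lognormal, via the normal tail of log(limit)."""
    z = (math.log(limit) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p75 = prob_exceed(75.0, mu, sigma)
print(f"mu = {mu:.2f}, sigma = {sigma:.2f}, P(PM2.5 > 75) = {p75:.2f}")
```

Multiplying such an exceedance probability by the number of hours or days in a period gives the expected count of standard-exceeding intervals, which is the prediction goal stated in the abstract; a chi-squared comparison of binned counts then checks the fit.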

Keywords: Beijing, distribution, patterns, pm2.5, trends

Procedia PDF Downloads 223
228 Attention and Memory in the Music Learning Process in Individuals with Visual Impairments

Authors: Lana Burmistrova

Abstract:

Introduction: The influence of visual impairments on the cognitive processes used in music learning is an increasingly important area in special education and cognitive musicology. Many children have visual impairments due to refractive errors and irreversible conditions. However, based on compensatory neuroplasticity and functional reorganization, congenitally blind (CB) and early blind (EB) individuals use several areas of the occipital lobe to perceive and process auditory and tactile information. CB individuals have greater memory capacity and memory reliability, rely less on false-memory mechanisms while executing several tasks, and have better working memory (WM) and short-term memory (STM). Blind individuals use several strategies while executing tactile and working-memory n-back tasks: a verbalization strategy (mental recall), a tactile strategy (tactile recall) and combined strategies. Methods and design: The aim of the pilot study was to substantiate similar tendencies in blind and sighted individuals while executing the attention, memory and combined auditory tasks constructed for this study, and to investigate the attention, memory and combined mechanisms used in the music learning process. Eight (n=8) blind and eight (n=8) sighted individuals aged 13-20 were chosen for this study; all respondents had more than five years of music performance and music learning experience. In the attention task, all respondents had to identify pitch changes in tonal and randomized melodic pairs. The memory task was based on the mismatch negativity (MMN) proportion: 80 percent standard (unchanged) and 20 percent deviant (changed) stimuli (sequences). Every sequence was named (na-na, ra-ra, za-za), and several items (pencil, spoon, tealight) were assigned to each sequence. Respondents had to recall the sequences, associate them with the items and detect possible changes.
While executing the combined task, all respondents had to focus attention on the pitch changes and had to detect and describe them during recall. Results and conclusion: The results support specific features in CB and EB individuals, and similarities between late blind (LB) and sighted individuals. While executing the attention and memory tasks, CB and EB individuals tended to use more precise execution tactics and more advanced episodic memory while focusing on auditory and tactile stimuli. While executing the memory and combined tasks, CB and EB individuals used passive working memory to recall standard sequences, active working memory to recall deviant sequences, and combined strategies. Based on the observational results, the assessment of blind respondents and recording specifics, the following attention and memory correlations were identified: reflective attention and STM, reflective attention and episodic memory, auditory attention and WM, tactile attention and WM, auditory-tactile attention and STM. The results and summary of findings highlight the attention and memory features used in the music learning process in the context of blindness, and the tendency of several attention and memory types to correlate depending on the task, strategy and individual features.
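The memory task's 80/20 standard/deviant structure can be sketched as a stimulus-stream generator. The sequence names follow the abstract; the trial count, seed and everything else below are hypothetical:

```python
import random

def mmn_stream(n_trials, deviant_rate=0.2, seed=1):
    """Generate a stimulus stream in MMN proportions: ~80% standard
    (unchanged) and ~20% deviant (changed) presentations of the
    named sequences from the task."""
    rng = random.Random(seed)
    sequences = ["na-na", "ra-ra", "za-za"]
    stream = []
    for _ in range(n_trials):
        seq = rng.choice(sequences)
        kind = "deviant" if rng.random() < deviant_rate else "standard"
        stream.append((seq, kind))
    return stream

stream = mmn_stream(1000)
deviants = sum(kind == "deviant" for _, kind in stream)
print(f"{deviants / len(stream):.1%} deviant trials")
```

Keeping deviants rare is what makes a change detectable as a mismatch response: the standard builds a strong expectation that the 20% deviant presentations violate.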

Keywords: attention, blindness, memory, music learning, strategy

Procedia PDF Downloads 163
227 Prevalence of Behavioral and Emotional Problems in School Going Adolescents in India

Authors: Anshu Gupta, Charu Gupta

Abstract:

Background: Adolescence is the transitional period between puberty and adulthood. It is marked by immense turmoil in the emotional and behavioral spheres. Adolescents are at risk of an array of behavioral and emotional problems, resulting in social, academic and vocational impairments. Conflicts in the family and the inability of parents to cope with the changing demands of an adolescent have a negative impact on the overall development of the child. This augurs ill for the individual’s future, resulting in depression, delinquency and suicide, among other problems. Aim: The aim of the study was to compare the prevalence of behavioral and emotional problems in school-going adolescents aged 13 to 15 years residing in Ludhiana city. Method: A total of 1,380 school children in the age group of 13 to 15 years were assessed with the adolescent health screening questionnaire (FAPS) and the Youth Self-Report (2001) questionnaire. Statistical significance was ascertained by the t-test, chi-square test (χ²) and ANOVA, as appropriate. Results: A considerably high prevalence of behavioral and emotional problems was found in school-going adolescents (26.5%), more in girls (31.7%) than in boys (24.4%). Among boys, the problem rate was highest in the 13-year age group (28.2%), followed by a significant decline by age 14 (24.2%) and age 15 (19.6%). Among girls, the rate was likewise highest at age 13 (32.4%), followed by a marginal decline at age 14 (31.8%) and age 15 (30.2%). Demographic factors were non-contributory. Internalizing syndrome (22.4%) was the most common problem, followed by the neither-internalizing-nor-externalizing group (17.6%). In the internalizing group, most (26.5%) of the students were observed to be anxious/depressed. Social problems were the most frequent (10.6%) in the neither-internalizing-nor-externalizing group.
Aggressive behavior was the commonest (8.4%) problem in the externalizing group. Internalizing problems, mainly anxiety and depression, were commoner in females (30.6%) than in males (24.6%), while more boys (16%) than girls (13.4%) were reported to suffer from externalizing disorders. A critical review of the data showed that most of the adolescents had poor knowledge about reproductive health. Almost 36% reported that the source of their information on sexual and reproductive health was friends and the electronic media. A high percentage of adolescents reported being worried about sexual abuse (20.2%), the majority of them girls (93.6%), reflecting poorly on the social setup in the country. About 41% of adolescents reported being concerned about body weight, most of them girls (92.4%). Up to 14.5% reported having thoughts of using alcohol or drugs, perhaps due to the easy availability of substances of abuse in this part of the country, and 12.8% (mostly girls) reported suicidal thoughts. Summary/conclusion: There is a high prevalence of emotional and behavioral problems among school-going adolescents, and the resolution of these problems during adolescence is essential for attaining a healthy adulthood. The need of the hour is to spread awareness among caregivers and to formulate effective management strategies, including a school mental health programme.
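A boys-vs-girls prevalence comparison like the one above is a 2×2 chi-square test, which can be sketched with the standard shortcut formula. The counts below are hypothetical, reconstructed only to roughly match the reported rates and total sample; they are not the study's raw data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table
        [[a, b],    rows: boys / girls
         [c, d]]    columns: problems present / absent."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical split of the 1,380 children, roughly matching the
# reported prevalences (boys ~24.4% of 700; girls ~31.7% of 680).
boys_yes, boys_no = 171, 529
girls_yes, girls_no = 216, 464

chi2 = chi_square_2x2(boys_yes, boys_no, girls_yes, girls_no)
print(f"chi-square = {chi2:.2f}")  # df = 1; > 3.84 means p < 0.05
```

With 1 degree of freedom, a statistic above the 3.84 critical value indicates that the gender difference in prevalence is statistically significant at the 5% level.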

Keywords: adolescence, behavioral, emotional, internalizing problem

Procedia PDF Downloads 257