Search results for: software reuse
203 Solutions for Food-Safe 3D Printing
Authors: Geremew Geidare Kailo, Igor Gáspár, András Koris, Ivana Pajčin, Flóra Vitális, Vanja Vlajkov
Abstract:
Three-dimensional (3D) printing, a very popular additive manufacturing technology, has recently undergone rapid growth and has replaced conventional technology in applications ranging from prototyping to producing end-user parts and products. 3D printing technology involves a digital manufacturing machine that produces three-dimensional objects according to designs created by the user via 3D modeling or computer-aided design/manufacturing (CAD/CAM) software. The most popular 3D printing system is Fused Deposition Modeling (FDM), also called Fused Filament Fabrication (FFF). A 3D-printed object is considered food safe if it can have direct contact with food without any toxic effect, even after the object has been cleaned, stored, and reused. This work analyzes the processing timeline of the filament (the material for 3D printing) from unboxing to extrusion through the nozzle. An important task is to analyze the growth of bacteria on the 3D-printed surface and in the gaps between layers. By default, a 3D-printed object is not food safe after longer use and direct contact with food (even when printed from food-safe filaments), but there are solutions to this problem. The aim of this work was to evaluate 3D-printed objects from different food safety perspectives. The first was testing antimicrobial 3D printing filaments from a food safety standpoint, since 3D-printed objects in the food industry may have direct contact with food; accordingly, the main purpose of the work is to reduce the microbial load on the surface of a 3D-printed part. Coating with epoxy resin was investigated, too, to see its effect on mechanical strength, thermal resistance, surface smoothness, and food safety (cleanability). Another aim of this study was to test new temperature-resistant filaments and the effect of high temperature on 3D-printed materials, to see whether they can be cleaned with boiling or similar high-temperature treatment.
This work proved that all three methods could improve the food safety of a 3D-printed object, but the size of the effect varies. The best result was obtained by coating with epoxy resin: the object was as cleanable as any other injection-molded plastic object with a smooth surface. Boiling the objects also gave very good results, and it is encouraging that more and more special filaments now carry a food-safe certificate and can withstand boiling temperatures. Using antibacterial filaments reduced bacterial colonies to one fifth; the biggest advantage of this method is that it does not require any post-processing, as the object is ready straight out of the 3D printer. Acknowledgements: The research was supported by the Hungarian and Serbian bilateral scientific and technological cooperation project funded by the Hungarian National Office for Research, Development and Innovation (NKFI, 2019-2.1.11-TÉT-2020-00249) and the Ministry of Education, Science and Technological Development of the Republic of Serbia. The authors acknowledge the Hungarian University of Agriculture and Life Sciences' Doctoral School of Food Science for the support in this study.
Keywords: food safety, 3D printing, filaments, microbial, temperature
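The antibacterial-filament result above (colonies reduced to roughly a fifth) is often reported in microbiology as a log reduction. As a minimal illustrative sketch (the CFU counts below are hypothetical, not the study's plate counts), the conversion is:

```python
from math import log10

def log_reduction(cfu_before, cfu_after):
    """Log10 reduction between two colony-forming-unit (CFU) counts."""
    return log10(cfu_before / cfu_after)

# A five-fold drop in colonies corresponds to about a 0.7-log reduction
reduction = log_reduction(cfu_before=5000, cfu_after=1000)
```

For scale, a 1-log reduction is a ten-fold drop; sanitization of food-contact surfaces is typically judged against much larger reductions.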
Procedia PDF Downloads 143
202 Review of Concepts and Tools Applied to Assess Risks Associated with Food Imports
Authors: A. Falenski, A. Kaesbohrer, M. Filter
Abstract:
Introduction: Risk assessments can be performed in various ways and at different degrees of complexity. In order to assess risks associated with imported foods, additional information needs to be taken into account compared to a risk assessment on regional products. The present review is an overview of currently available best-practice approaches and data sources used for food import risk assessments (IRAs). Methods: A literature review was performed. PubMed was searched for articles about food IRAs published in the years 2004 to 2014 (English and German texts only, search string "(English [la] OR German [la]) (2004:2014 [dp]) import [ti] risk"). Titles and abstracts were screened for import risks in the context of IRAs. The finally selected publications were analysed according to a predefined questionnaire extracting the following information: risk assessment guidelines followed, modelling methods used, data and software applied, and the existence of an analysis of uncertainty and variability. IRAs cited in these publications were also included in the analysis. Results: The PubMed search returned 49 publications, 17 of which contained information about import risks and risk assessments. Within these, 19 cross-references were identified as being of interest for the present study. These included original articles, reviews and guidelines. At least one of the guidelines of the World Organisation for Animal Health (OIE) or the Codex Alimentarius Commission was referenced in each of the IRAs, for the import of animals or for imports concerning foods, respectively. Interestingly, a combination of both was also used to assess the risk associated with the import of live animals serving as the source of food. Methods ranged from fully quantitative IRAs using probabilistic models and dose-response models to qualitative IRAs in which decision trees or severity tables were set up using parameter estimates based on expert opinion. Calculations were done using @Risk, R or Excel.
Most heterogeneous was the type of data used, ranging from general information on imported goods (food, live animals) to pathogen prevalence in the country of origin. These data were either publicly available in databases or lists (e.g., OIE WAHID and Handystatus II, FAOSTAT, Eurostat, TRACES), accessible on a national level (e.g., herd information), or only open to a small group of people (flight passenger import data at the national airport customs office). In the IRAs, an uncertainty analysis was mentioned in some cases, but calculations were performed in only a few. Conclusion: The current state of the art in the assessment of risks of imported foods is characterized by great heterogeneity in the general methodology and data used. Often, information is gathered on a case-by-case basis and reformatted by hand in order to perform the IRA. This analysis therefore illustrates the need for a flexible, modular framework supporting the connection of existing data sources with data analysis and modelling tools. Such an infrastructure could pave the way to IRA workflows applicable ad hoc, e.g., in a crisis situation.
Keywords: import risk assessment, review, tools, food import
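The quantitative IRAs surveyed above often contain a probabilistic release-assessment step: given a prevalence in the country of origin and an inspection regime, how many contaminated lots enter undetected? A hedged sketch of such a building block (the function, parameters, and figures are illustrative assumptions, not taken from any cited IRA):

```python
import random

def simulate_contaminated_lots(n_lots, prevalence, detection_prob,
                               n_runs=10_000, seed=42):
    """Monte Carlo estimate of contaminated lots that escape border inspection.

    Each imported lot is contaminated with probability `prevalence`; a
    contaminated lot is caught at inspection with probability `detection_prob`.
    Returns the mean number of undetected contaminated lots per consignment.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        for _ in range(n_lots):
            if rng.random() < prevalence and rng.random() >= detection_prob:
                total += 1
    return total / n_runs

# Expected value: n_lots * prevalence * (1 - detection_prob) = 100 * 0.05 * 0.4 = 2.0
estimate = simulate_contaminated_lots(n_lots=100, prevalence=0.05, detection_prob=0.6)
```

Tools such as @Risk wrap exactly this kind of simulation; in R one would vectorize it with `rbinom`.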
Procedia PDF Downloads 302
201 Post Harvest Fungi Diversity and Level of Aflatoxin Contamination in Stored Maize: Cases of Kitui, Nakuru and Trans-Nzoia Counties in Kenya
Authors: Gachara Grace, Kebira Anthony, Harvey Jagger, Wainaina James
Abstract:
Aflatoxin contamination of maize in Africa poses a major threat to food security and the health of many African people. In Kenya, aflatoxin contamination of maize is high due to environmental, agricultural and socio-economic factors. Many studies have been conducted to understand the scope of the problem, especially at the pre-harvest level. This research was carried out to gather scientific information on the fungal population, diversity and aflatoxin levels during the post-harvest period. The study was conducted in three geographical locations: Kitui, Kitale and Nakuru. Samples were collected from farmers' storage structures and transported to the Biosciences eastern and central Africa (BecA) – International Livestock Research Institute (ILRI) hub laboratories. Mycoflora was recovered using the direct plating method. A total of five fungal genera (Aspergillus, Penicillium, Fusarium, Rhizopus and Byssochlamys spp.) were isolated from the stored maize samples. The most common fungal species isolated from the three study sites were A. flavus at 82.03%, followed by A. niger and F. solani at 49% and 26%, respectively. The aflatoxin-producing fungus A. flavus was recovered in 82.03% of the samples. Aflatoxin levels were analysed both in the maize samples and in vitro. Most of the A. flavus isolates recorded high aflatoxin levels when analysed for the presence of aflatoxin B1 using ELISA. In Kitui, all the samples (100%) had aflatoxin levels above 10 ppb, with a total aflatoxin mean of 219.2 ppb. In Kitale, only 3 samples (n = 39) had aflatoxin levels below 10 ppb, while in Nakuru, the total aflatoxin mean level was 239.7 ppb. When individual samples were analysed using the VICAM fluorometer method, aflatoxin analysis revealed that most of the samples (58.4%) had been contaminated. The means were significantly different (p = 0.00 < 0.05) in all three locations. Genetic relationships of A.
flavus isolates were determined using 13 Simple Sequence Repeat (SSR) markers. The results were used to generate a phylogenetic tree with the DARwin5 software program. A total of 5 distinct clusters were revealed among the genotypes, and the isolates appeared to cluster separately according to geographical location. Principal Coordinates Analysis (PCoA) of the genetic distances among the 91 A. flavus isolates explained over 50.3% of the total variation when two coordinates were used to cluster the isolates. Analysis of Molecular Variance (AMOVA) showed high variation of 87% within populations and 13% among populations. This research has shown that A. flavus is the main fungal species infecting maize grains in Kenya. The impact of aflatoxins on human populations in Kenya demonstrates a clear need for tools to manage contamination of locally produced maize. Food basket surveys for aflatoxin contamination should be conducted on a regular basis. This would assist in obtaining reliable data on aflatoxin incidence in different food crops and would go a long way toward defining control strategies for this menace.
Keywords: aflatoxin, Aspergillus flavus, genotyping, Kenya
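The threshold comparisons reported above (share of samples above 10 ppb, mean level per region) are simple summary statistics. A minimal sketch of that calculation; the readings below are hypothetical, not the study's ELISA or VICAM data:

```python
def summarize_aflatoxin(levels_ppb, threshold=10.0):
    """Summarize total aflatoxin results against a regulatory threshold (ppb)."""
    n = len(levels_ppb)
    above = [x for x in levels_ppb if x > threshold]
    return {
        "mean_ppb": sum(levels_ppb) / n,          # regional mean level
        "pct_above": 100.0 * len(above) / n,      # share exceeding the limit
    }

# Hypothetical readings for one site (not the study's data)
readings = [3.2, 15.8, 220.4, 9.9, 48.0, 310.5, 12.1, 7.4]
summary = summarize_aflatoxin(readings)
```

Because aflatoxin levels are strongly right-skewed (a few very contaminated samples dominate), the mean can sit far above the threshold even when many samples comply, as in the Kitale figures above.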
Procedia PDF Downloads 278
200 Edmonton Urban Growth Model as a Support Tool for the City Plan Growth Scenarios Development
Authors: Sinisa J. Vukicevic
Abstract:
Edmonton is currently one of the youngest North American cities and has achieved significant growth over the past 40 years. This strong urban shift requires a new approach to how the city is envisioned, planned, and built: evidence-based scenario development, in which an urban growth model was a key support tool for framing Edmonton's development strategies, developing urban policies, and assessing policy implications. The urban growth model was developed using the Metronamica software platform. The Metronamica land use model evaluated the dynamics of land use change under the influence of key development drivers (population and employment), zoning, land suitability, and land and activity accessibility. The model was designed following the Big City Moves ideas: become greener as we grow, develop a rebuildable city, ignite a community of communities, foster a healing city, and create a city of convergence. The Big City Moves were converted into three development scenarios: 'Strong Central City', 'Node City', and 'Corridor City'. Each scenario has a narrative story that expresses the scenario's high-level goal, its approach to residential and commercial activities, its transportation vision, and its employment and environmental principles. Land use demand was calculated for each scenario according to specific density targets. Spatial policies were analyzed according to their level of importance within the policy set defined for each scenario, as well as through the policy measures. The model was calibrated to reproduce the known historical land use pattern, using 2006 and 2011 land use data. The validation was done independently, using data not used for calibration: the model was validated against 2016 data.
In general, the modeling process contains three main phases: 'from qualitative storyline to quantitative modelling', 'model development and model run', and 'from quantitative modelling to qualitative storyline'. The model also incorporates five spatial indicators: distance from residential to work, distance from residential to recreation, distance to the river valley, urban expansion, and habitat fragmentation. The major findings of this research can be looked at from two perspectives: the planning perspective and the technology perspective. The planning perspective evaluates the model as a tool for scenario development. Using the model, we explored the land use dynamics influenced by different sets of policies. The model enables a direct comparison between the three scenarios: we explored their similarities and differences and their quantitative indicators, such as land use change, population change (and spatial allocation), job allocation, density (population, employment, and dwelling units), habitat connectivity, and proximity to objects of interest. From the technology perspective, the model showed one very important characteristic: flexibility. The direction of policy testing changed many times during the consultation process, and the model's flexibility in accommodating all these changes was highly appreciated. The model satisfied our needs as a scenario development and evaluation tool, but also as a communication tool during the consultation process.
Keywords: urban growth model, scenario development, spatial indicators, Metronamica
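Of the five spatial indicators listed above, urban expansion is the simplest: it reduces to counting cells converted to the urban class between two land-use maps. A minimal sketch, assuming flattened rasters of class labels (the labels and maps here are illustrative, not Metronamica outputs):

```python
def land_use_change(map_t0, map_t1, to_class="urban"):
    """Count cells converted to `to_class` between two land-use rasters,
    a simple 'urban expansion' indicator for comparing scenarios."""
    return sum(
        1 for a, b in zip(map_t0, map_t1) if a != to_class and b == to_class
    )

# Two tiny hypothetical rasters, flattened to 1-D lists of class labels
t0 = ["ag", "ag", "urban", "forest", "ag", "urban"]
t1 = ["urban", "ag", "urban", "forest", "urban", "urban"]
expansion = land_use_change(t0, t1, to_class="urban")
```

Running the same indicator over each scenario's end-state map gives the direct scenario-to-scenario comparison described above.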
Procedia PDF Downloads 95
199 The Role of Supply Chain Agility in Improving Manufacturing Resilience
Authors: Maryam Ziaee
Abstract:
This research proposes a new approach that gives manufacturing companies an opportunity to produce large quantities of products that meet their prospective customers' tastes, needs, and expectations, while simultaneously enabling manufacturers to increase their profit. Mass customization is the production of products or services that meet each individual customer's desires to the greatest possible extent, in high quantities and at reasonable prices. This process takes place at different levels, such as the customization of a good's design, assembly, sale, and delivery status, and is classified into several categories. The main focus of this study is on one class of mass customization, called optional customization, in which companies try to provide their customers with as many options as possible to customize their products. These options can range from the design phase to the manufacturing phase, or even to methods of delivery. Mass customization values customers' tastes, but it is only one side of client satisfaction; the other side is the company's fast, responsive delivery. This brings in the concept of agility: the ability of a company to respond rapidly to changes in volatile markets in terms of volume and variety. Indeed, mass customization is not effectively feasible without integrating the concept of agility. To gain customer satisfaction, companies need to respond quickly to their customers' demands, which highlights the significance of agility. This research offers a method that integrates mass customization and fast production in manufacturing industries. It is built upon the hypothesis that the key to being agile in mass customization is to forecast demand, cooperate with suppliers, and control inventory. Therefore, the significance of the supply chain (SC) is most pertinent at this stage.
Since SC behavior is dynamic and changes constantly, companies have to apply a predictive technique to identify the changes associated with SC behavior so that they can respond properly to any unwelcome event. System dynamics, utilized in this research, is a simulation approach that provides a mathematical model among different variables to understand, control, and forecast SC behavior. The final stage is delayed differentiation, the production strategy considered in this research. In this approach, the main platform of the products is produced and stocked; when the company receives an order from a customer, a specific customized feature is added to this platform and the customized product is created. The main research question is to what extent applying system dynamics to the prediction of SC behavior improves the agility of mass customization. This research adopts a qualitative approach to obtain richer, deeper, and more revealing results. The data were collected through interviews and analyzed with the NVivo software. The proposed model offers numerous benefits, such as a reduction in the number of product inventories and their storage costs, improvement in the resilience of companies' responses to their clients' needs and tastes, increased profits, and the optimization of productivity with a minimum level of lost sales.
Keywords: agility, manufacturing, resilience, supply chain
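A system dynamics model of the kind described above is a set of stocks and flows integrated over time. As a minimal illustrative sketch (not the study's model; the single-stock structure and parameter names are assumptions), consider a platform inventory drained by demand and replenished toward a target:

```python
def simulate_inventory(target, initial, adjustment_time, demand,
                       steps=40, dt=1.0):
    """Minimal system-dynamics stock-and-flow sketch of an inventory.

    The inventory (stock) is drained by constant demand and replenished by
    orders that close the gap to `target` over `adjustment_time` periods.
    Returns the inventory trajectory under Euler integration.
    """
    inventory = initial
    path = [inventory]
    for _ in range(steps):
        orders = demand + (target - inventory) / adjustment_time  # inflow
        inventory += dt * (orders - demand)                       # net flow
        path.append(inventory)
    return path

path = simulate_inventory(target=100.0, initial=40.0,
                          adjustment_time=4.0, demand=10.0)
```

The trajectory converges smoothly to the target; adding delays and feedback between demand forecasting, supplier orders, and inventory is what turns this toy into a genuine SC model.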
Procedia PDF Downloads 91
198 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality
Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo
Abstract:
Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance and giving them a scientific interpretation, without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating technology-supported teaching-learning processes that pay attention to model creation rather than only to executing mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision making, this work reports on a model eliciting activity (MEA). The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable to students while involving them in its solution. Furthermore, the activity should pose a decision-making challenge based on sample data and should consider the use of the computer. The activity was designed considering the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out over three school cycles in the Probability and Statistics class for Civil Engineering students at the University of Guadalajara. The way in which the students sought to solve the activity was analyzed using audio and video recordings, as well as the students' individual and team reports. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making).
With the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, the students' resistance to moving from the linear to the probabilistic model; second, the difficulty of visualizing (inferring) the behavior of the population from the sample data; third, viewing the sample as an isolated event and not as part of a random process that must be seen in the context of a probability distribution; and fourth, the difficulty of making decisions with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and that they do not change after an intervention, allows both the interventions and the MEA to be modified so that the students themselves may identify erroneous solutions while carrying out the MEA. The MEA also proved to be democratic, since several students who had participated little and earned low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, such as plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the Civil Engineering students improved their probabilistic knowledge and their understanding of fundamental concepts such as sample, population, and probability distribution.
Keywords: linear model, models and modeling, probability, randomness, sample
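The sample-based decision calculation at the heart of the MEA can be made concrete with the binomial distribution itself. A short sketch of the kind of computation students might reproduce (the acceptance-sampling numbers are illustrative, not from the activity; the study used RStudio, so the same lines could be written in R with `dbinom`):

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def prob_at_least(k, n, p):
    """P(X >= k): chance a sample of n shows at least k nonconforming items."""
    return sum(binomial_pmf(i, n, p) for i in range(k, n + 1))

# Illustrative decision rule: flag a concrete batch if a sample of 20
# specimens contains 3 or more nonconforming ones, assuming the process
# nonconformity rate is 5%.
p_flag = prob_at_least(3, 20, 0.05)
```

Seeing that the sample outcome sits inside a whole distribution of possible outcomes, rather than being an isolated event, is exactly the third obstacle identified above.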
Procedia PDF Downloads 119
197 Conceptualizing Health-Seeking Behavior among Adolescents and Youth with Substance Use Disorder in Urban Kwazulu-Natal. A Candidacy Framework Analysis
Authors: Siphesihle Hlongwane
Abstract:
Background: Globally, alcohol consumption, smoking, and the use of illicit drugs kill more than 11.8 million people each year. In sub-Saharan Africa, substance abuse is responsible for more than 6.4% of all deaths recorded and about 4.7% of all Disability-Adjusted Life Years (DALYs), with these numbers expected to grow if no drastic measures are taken to curb and address drug use. In a setting where substance use is rife, understanding the contextual factors that influence an individual's perceived eligibility to seek rehabilitation is paramount. Using the candidacy framework, we unpack how situational factors influence perceived eligibility for healthcare uptake among adolescents and youth with substance use disorder (SUD). Methods: The candidacy framework is concerned with how people consider their eligibility for accessing a health service. The study collected and analyzed primary qualitative data to answer the research question. Data were collected between January and July 2022 from participants aged between 18 and 35 for drug users and 18 to 60 for family members. Participants include 20 previous and current drug users and 20 family members who experience the effects of addiction. A pre-drafted semi-structured interview guide was administered to a conveniently sampled population, supplemented with a referral sampling method. Data were thematically analyzed, with the NVivo 12 Pro software used to manage the data. Findings: Our findings show that people with substance use disorders are aware of their drug use habits and acknowledge their candidacy for health services. This candidacy is also acknowledged by those around them, such as family members and peers. Information on navigating health services for drug users is shared by those who have attended health services and those affected by drug use, and includes health service research by family members to identify accessible health services.
While participants reported a willingness to quit drug use if assistance were provided, the permeability of healthcare services is hindered both by the individual's determination to quit after long-time use and by the availability of health services for drug users, such as rehabilitation centers. Our findings also show that drug users are conscious of and can articulate their ailments; however, the hunt for the next dose of drugs and long waiting queues for health services overshadow their claim to health services. Participants reported a mixture of prescribed treatments, some more gruesome than others, which served as both a facilitator of and a barrier to health service uptake. Despite some unorthodox forms of treatment prescribed in health care, the majority of those who enter treatment complete the process, although some are met with setbacks and sometimes relapse after treatment has finished. Conclusion: Drug users are able to ascertain their candidacy for health services; however, individual and environmental characteristics related to drug use hinder the use of those services. Drug use interventions need to encourage health service uptake as a way to improve candidacy for health services.
Keywords: substance use disorder, rehabilitation, drug use, relapse, South Africa, candidacy framework
Procedia PDF Downloads 98
196 Potential Assessment and Techno-Economic Evaluation of Photovoltaic Energy Conversion System: A Case of Ethiopia Light Rail Transit System
Authors: Asegid Belay Kebede, Getachew Biru Worku
Abstract:
The Earth and its inhabitants face an existential threat as a result of severe man-made actions. Global warming and climate change have been the most apparent manifestations of this threat throughout the world, with increasingly intense heat waves, temperature rises, flooding, sea-level rise, ice sheet melting, and so on. One of the major contributors to this disaster is the ever-increasing production and consumption of energy, which is still primarily fossil-based and emits billions of tons of hazardous greenhouse gases. The transportation industry is recognized as the biggest actor in terms of emissions, accounting for 24% of direct CO2 emissions and being one of the few worldwide sectors where CO2 emissions are still growing. Rail transportation, which includes everything from light rail transit to high-speed rail services, is regarded as one of the most efficient modes of transportation, accounting for 9% of total passenger travel and 7% of total freight transit. Nonetheless, there is still room for improvement in the transportation sector, which could be achieved by incorporating alternative and/or renewable energy sources. These rapidly changing global energy conditions and rapidly dwindling fossil fuel supplies drove us to analyze the potential of renewable energy sources for traction applications. Even a small achievement in energy conservation or harvesting might significantly influence the total railway system and has the potential to transform the railway sector like never before. The paper therefore begins by assessing the potential for photovoltaic (PV) power generation on train rooftops and existing infrastructure such as railway depots, passenger stations, traction substation rooftops, and accessible land along rail lines. To this end, a method based on Google Earth imagery (using the HelioScope software) is developed to assess the PV potential along rail lines and on train station roofs.
The Addis Ababa light rail transit system (AA-LRTS) is used as a case study, examining the electricity-generating potential and economic performance of photovoltaics installed on the AA-LRTS. The total daily energy yield of the solar systems on all stations, including train rooftops, reaches 72.6 MWh, with an annual power output of 10.6 GWh. Over a 25-year lifespan, the overall CO2 emission reduction and total profit from the PV-AA-LRTS can reach 180,000 tons and 892 million Ethiopian birr, respectively. The PV-AA-LRTS has a 200% return on investment. All PV stations have a payback time of less than 13 years, and the price of solar-generated power is less than $0.08/kWh, which can compete with the benchmark price of coal-fired electricity. Our findings indicate that the PV-AA-LRTS has tremendous potential, with both energy and economic advantages.
Keywords: sustainable development, global warming, energy crisis, photovoltaic energy conversion, techno-economic analysis, transportation system, light rail transit
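The payback time and energy price quoted above follow from standard techno-economic formulas. A hedged sketch with illustrative figures for a single hypothetical station (the capacities, costs, and tariff are assumptions, not the study's inputs):

```python
def simple_payback_years(capex, annual_energy_kwh, tariff_per_kwh, annual_opex=0.0):
    """Years to recover capital cost from energy revenue (no discounting)."""
    net_annual = annual_energy_kwh * tariff_per_kwh - annual_opex
    if net_annual <= 0:
        raise ValueError("project never pays back")
    return capex / net_annual

def lcoe(capex, annual_opex, annual_energy_kwh, lifetime_years):
    """Levelized cost of energy, undiscounted: lifetime cost / lifetime output."""
    total_cost = capex + annual_opex * lifetime_years
    return total_cost / (annual_energy_kwh * lifetime_years)

# Illustrative 1 MWp station: $900k capex, 1.5 GWh/yr yield, $0.08/kWh tariff
payback = simple_payback_years(capex=900_000,
                               annual_energy_kwh=1_500_000,
                               tariff_per_kwh=0.08)
cost = lcoe(capex=900_000, annual_opex=10_000,
            annual_energy_kwh=1_500_000, lifetime_years=25)
```

A full study would discount the cash flows (NPV-based LCOE) and include panel degradation; the undiscounted form here only shows where the numbers come from.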
Procedia PDF Downloads 76
195 Ethical Decision-Making by Healthcare Professionals during Disasters: Izmir Province Case
Authors: Gulhan Sen
Abstract:
Disasters can result in many deaths and injuries. In these difficult times, accessible resources are limited, the balance between demand and supply is distorted, and urgent interventions are needed. The disproportion between accessible resources and intervention capacity makes triage a necessity at every stage of disaster response. Healthcare professionals, who are in charge of triage, have to evaluate swiftly and make ethical decisions about which patients need priority and urgent intervention given the limited available resources. For such critical times in disaster triage, 'doing the greatest good for the greatest number of casualties' is adopted as a code of practice. However, there is no guide for healthcare professionals on ethical decision-making during disasters, and this study is expected to serve as a source for the preparation of such a guide. This study aimed to examine whether the disaster triage qualifications of healthcare professionals in Izmir were adequate and whether these qualifications influence their capacity to make ethical decisions. The researcher used a survey developed for data collection. The survey included two parts. In part one, 14 questions solicited information about the socio-demographic characteristics of the respondents and their knowledge of the ethical principles of disaster triage and the allocation of scarce resources. Part two included four disaster scenarios adapted from the existing literature, and respondents were asked to make ethical triage decisions based on the provided scenarios. The survey was completed by 215 healthcare professionals working in Emergency Medical Stations, National Medical Rescue Teams and Search-Rescue-Health Teams in Izmir. The data were analyzed with the SPSS software, using the Chi-Square Test, Mann-Whitney U Test, Kruskal-Wallis Test and Linear Regression Analysis.
According to the results, 51.2% of the participants had an inadequate level of knowledge of the ethical principles of disaster triage and the allocation of scarce resources. Participants also did not tend to make ethical decisions in the four disaster scenarios, which included ethical dilemmas: they remained caught in dilemmas about performing cardio-pulmonary resuscitation, managing limited resources, and making end-of-life decisions. Results also showed that participants with more experience in disaster triage teams were more likely to make ethical decisions on disaster triage than those with little or no such experience (p < 0.01). Moreover, as their knowledge of the ethical principles of disaster triage and the allocation of scarce resources increased, their tendency to make ethical decisions also increased (p < 0.001). In conclusion, inadequate knowledge of ethical principles and a lack of experience affect ethical decision-making during disasters. The results of this study therefore suggest that more training on disaster triage should be provided in the pre-impact phase of disasters. In addition, the ethical dimension of disaster triage should be included in the syllabi of ethics classes in the vocational training of healthcare professionals. Drills, simulations, and board exercises can be used to improve the ethical decision-making abilities of healthcare professionals, and disaster scenarios in which ethical dilemmas are faced should be prepared for such applied training programs.
Keywords: disaster triage, medical ethics, ethical principles of disaster triage, ethical decision-making
Procedia PDF Downloads 249
194 Pushover Analysis of a Typical Bridge Built in Central Zone of Mexico
Authors: Arturo Galvan, Jatziri Y. Moreno-Martinez, Daniel Arroyo-Montoya, Jose M. Gutierrez-Villalobos
Abstract:
Bridges are among the most seismically vulnerable structures in highway transportation systems. The general process for assessing the seismic vulnerability of a bridge involves the evaluation of its overall capacity and demand. One of the most common procedures to obtain this capacity is pushover analysis of the structure. Typically, the bridge capacity is assessed using non-linear static methods or non-linear dynamic analyses. The non-linear dynamic approaches use step-by-step numerical solutions to assess the capacity, with the inconvenience of long computing times. In this study, a non-linear static analysis ('pushover analysis') was performed to predict the collapse mechanism of a typical bridge built in the central zone of Mexico (Celaya, Guanajuato). The bridge superstructure consists of three simply supported spans with a total length of 76 m: 22 m for each of the end spans and 32 m for the central span. The deck is 14 m wide and the concrete slab is 18 cm deep. The substructure consists of frames of five piers with hollow box-shaped sections, each pier 7.05 m in height and 1.20 m in diameter. The numerical model was created using commercial software considering linear and non-linear elements. In all cases, the piers were represented by frame-type elements with geometrical properties obtained from the structural project and construction drawings of the bridge. The deck was modeled with a mesh of rectangular thin shell (plate bending and stretching) finite elements. A moment-curvature analysis was performed for the pier sections, considering in each pier the effect of confined concrete and its reinforcing steel. In this way, plastic hinges were defined at the base of the piers to carry out the pushover analysis. In addition, time history analyses were performed using 19 accelerograms of real earthquakes registered in Guanajuato.
In this way, the displacement demands on the bridge were determined. Finally, pushover analysis was applied through displacement control of the piers to obtain the overall capacity of the bridge before failure occurs. It was concluded that the lateral deformations of the piers under a critical earthquake for this zone are almost imperceptible, owing to the geometry and reinforcement demanded by current design standards; compared with these demands, the displacement capacities of the piers were excessive. According to the analysis, it was found that the frames built with five piers increase the rigidity in the transverse direction of the bridge. Hence, it is proposed to reduce these frames from five piers to three, maintaining the same geometrical characteristics and the same reinforcement in each pier. The mechanical properties of the materials (concrete and reinforcing steel) were also maintained. Once a pushover analysis was performed for this configuration, it was concluded that the bridge would continue to show a “correct” seismic behavior, at least for the 19 accelerograms considered in this study. In this way, material, construction, time, and labor costs would be reduced in this case study.
Keywords: collapse mechanism, moment-curvature analysis, overall capacity, push-over analysis
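The time-history step described above can be sketched as a Newmark average-acceleration integration of a single-degree-of-freedom pier idealization. This is a minimal illustration, not the study's finite element model: the unit-mass linear oscillator, the sine-pulse "accelerogram", and all numerical values below are assumptions.

```python
import numpy as np

def newmark_sdof(ag, dt, f_n, zeta, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark integration of a linear SDOF oscillator,
    u'' + c*u' + k*u = -ag(t), normalized to unit mass."""
    w = 2 * np.pi * f_n          # natural circular frequency (rad/s)
    k = w ** 2                   # stiffness for unit mass
    c = 2 * zeta * w             # damping coefficient for unit mass
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = -ag[0] - c * v[0] - k * u[0]
    # effective stiffness (constant for a linear system)
    keff = k + gamma / (beta * dt) * c + 1.0 / (beta * dt ** 2)
    for i in range(n - 1):
        # effective load from the next ground acceleration and current state
        p = (-ag[i + 1]
             + (1.0 / (beta * dt ** 2)) * u[i] + (1.0 / (beta * dt)) * v[i]
             + (1.0 / (2 * beta) - 1.0) * a[i]
             + c * (gamma / (beta * dt) * u[i]
                    + (gamma / beta - 1.0) * v[i]
                    + dt * (gamma / (2 * beta) - 1.0) * a[i]))
        u[i + 1] = p / keff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt ** 2)
                    - v[i] / (beta * dt) - (1.0 / (2 * beta) - 1.0) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u

# demo: peak pier displacement under a short sine pulse "accelerogram"
t = np.arange(0, 2.0, 0.005)
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * t) * (t < 0.5)
u = newmark_sdof(ag, dt=0.005, f_n=1.5, zeta=0.05)
peak_drift = float(np.max(np.abs(u)))
```

In the study itself, the demand side comes from 19 recorded accelerograms and the capacity side from the displacement-controlled pushover; the sketch only illustrates how a displacement history is obtained from one ground motion.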
Procedia PDF Downloads 153
193 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads
Authors: Raja Umer Sajjad, Chang Hee Lee
Abstract:
Stormwater runoff is the leading contributor to the pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors; the success of a monitoring program depends mainly on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012–2014) from a mixed land-use site within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed through the year. The investigation of a large number of water quality parameters is time-consuming and resource-intensive; in order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV), and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implication of sampling time for monitoring results, the number of samples required during a storm event, and the impact of the seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus, and heavy metals such as lead, chromium, and copper, whereas Chemical Oxygen Demand (COD) was identified as a surrogate for organic matter.
The CVs of the monitored water quality parameters were found to be high (ranging from 3.8 to 15.5), suggesting that a grab sampling design for estimating mass emission rates in the study area can lead to errors due to this large variability. The TSS discharge load calculation error was only 2% between two sample-size approaches: 17 samples per storm event and 6 equally distributed samples per storm event. Both the seasonal first flush and the event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of a storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters
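The surrogate-selection step can be sketched as a PCA on standardized parameters, flagging variables that load strongly on the same principal component as TSS. This is a hedged illustration: the study used SPSS 22.0, and the synthetic data below (with turbidity, total phosphorus, and lead deliberately constructed to co-vary with TSS) are assumptions, not the Geumhak monitoring data.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic storm-event samples (rows) for six hypothetical parameters (cols)
n = 60
tss = rng.lognormal(4, 0.5, n)
data = np.column_stack([
    tss,
    0.8 * tss + rng.normal(0, 5, n),       # turbidity (tied to TSS)
    0.01 * tss + rng.normal(0, 0.1, n),    # total phosphorus (tied to TSS)
    rng.lognormal(3, 0.4, n),              # COD (independent here)
    0.002 * tss + rng.normal(0, 0.05, n),  # lead (tied to TSS)
    rng.normal(7, 0.3, n),                 # pH (unrelated)
])
names = ["TSS", "turbidity", "TP", "COD", "Pb", "pH"]

# PCA on standardized data via SVD; loadings = |corr(variable, PC1 scores)|
z = (data - data.mean(0)) / data.std(0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
scores = z @ vt.T
pc1_load = np.array([abs(np.corrcoef(z[:, j], scores[:, 0])[0, 1])
                     for j in range(z.shape[1])])

# variables loading strongly on the same component as TSS are surrogate candidates
candidates = [nm for nm, load in zip(names, pc1_load) if load > 0.6]
```

The 0.6 loading cutoff is an arbitrary illustrative threshold; in practice the biplot and correlation matrix would be inspected, as the abstract describes.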
Procedia PDF Downloads 241
192 Information and Communication Technology Skills of Finnish Students in Particular by Gender
Authors: Antero J. S. Kivinen, Suvi-Sadetta Kaarakainen
Abstract:
Digitalization touches every aspect of contemporary society, changing the way we live our everyday lives. Contemporary society is sometimes described as a knowledge society, in which people face an unprecedented amount of information daily. The tools for managing this information flow are ICT skills: both the technical skills and the reflective skills needed to handle incoming information. Schools are therefore under constant pressure to revise their teaching. In the latest Programme for International Student Assessment (PISA), girls outperformed boys in all Organization for Economic Co-operation and Development (OECD) member countries, and the gender gap between girls and boys is widest in Finland. This paper presents results of the Comprehensive Schools in the Digital Age project of RUSE, University of Turku. The project is connected with the Finnish Government Analysis, Assessment and Research Activities. First, this paper examines gender differences in the ICT skills of Finnish upper comprehensive school students. Second, it explores how these differences change when students proceed to upper secondary and vocational education. ICT skills are measured using a performance-based ICT skill test. Data are collected in three phases: January–March 2017 (upper comprehensive schools, n = 5455), September–December 2017 (upper secondary and vocational schools, n ≈ 3500), and January–March 2018 (upper comprehensive schools). Upper comprehensive school students are aged 15-16, and upper secondary and vocational school students 16-18. The test is divided into six categories: basic operations, productivity software, social networking and communication, content creation and publishing, applications, and requirements for ICT study programs. Students also completed a survey about their ICT usage and the study materials they use at school and at home. Cronbach's alpha was used to estimate the reliability of the ICT skill test.
Statistical differences between genders were examined using a two-tailed independent-samples t-test. Results from the first data set (upper comprehensive schools) show no statistically significant difference between genders in total ICT-skill test scores (boys 10.24 and girls 10.64, out of a maximum of 36). Although there is no gender difference in total test scores, there are differences across the six categories mentioned above: girls score better on school-related and social networking test subjects, while boys perform better on more technically oriented subjects. Test scores on basic operations are quite low for both groups. Perhaps this can partly be explained by the fact that the test was taken on computers, while the majority of students' ICT usage involves smartphones and tablets. Against this background, it is important to analyze the reasons for these differences further. In the context of the ongoing digitalization of everyday life, and especially of working life, the main purpose of these analyses is to find out how to guarantee adequate ICT skills for all students.
Keywords: basic education, digitalization, gender differences, ICT-skills, upper comprehensive education, upper secondary education, vocational education
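The gender comparison above can be sketched with SciPy's two-tailed independent-samples t-test. The group means mirror the reported scores (boys 10.24, girls 10.64, of a 36-point maximum), but the sample sizes and standard deviations are assumptions for illustration, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical score samples shifted to match the reported group means exactly;
# n = 300 per group and sd = 8.0 are assumptions, not the study's values
boys = rng.normal(0, 8.0, 300)
boys += 10.24 - boys.mean()
girls = rng.normal(0, 8.0, 300)
girls += 10.64 - girls.mean()

# two-tailed by default; a large p-value means no significant gender difference
t_stat, p_value = stats.ttest_ind(boys, girls)
```

With a 0.4-point mean difference against this assumed spread, the test does not reject the null hypothesis, echoing the abstract's finding for total scores; with the study's much larger samples, the outcome depends on the actual score variance.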
Procedia PDF Downloads 135
191 An Exploratory Factor and Cluster Analysis of the Willingness to Pay for Last Mile Delivery
Authors: Maximilian Engelhardt, Stephan Seeck
Abstract:
The COVID-19 pandemic is accelerating the already growing field of e-commerce. The resulting urban freight transport volume leads to traffic and negative environmental impacts. Furthermore, the service level of parcel logistics providers lags far behind consumers' expectations. These challenges can be addressed by radically reorganizing the urban last-mile distribution structure: parcels could be consolidated in a micro hub within the inner city and delivered within time windows by cargo bike. This approach leads to a significant improvement in consumer satisfaction with the overall delivery experience; however, it also leads to significantly increased costs per parcel. While there is a relevant share of online shoppers who are willing to pay for such a delivery service, no deeper insights about this target group are available in the literature. Aware of how important knowing one's target groups is for businesses, the aim of this paper is to identify the most important factors that determine the willingness to pay for sustainable and service-oriented parcel delivery (factor analysis) and to derive customer segments (cluster analysis). To answer these questions, a data set is analyzed using quantitative methods of multivariate statistics. The data set was generated via an online survey in September and October 2020 in the five largest cities in Germany (n = 1,071). It contains socio-demographic, living-related, and value-related variables, e.g., age, income, city, living situation, and willingness to pay. In prior work by the authors, the data were analyzed using descriptive and inferential statistical methods, which provided only limited insights into the above-mentioned research questions. An exploratory analysis using factor and cluster analysis promises deeper insights into the relevant influencing factors and user segments for the parcel delivery concept described.
The analysis model is built and implemented with the help of the statistical software language R. The data analysis is currently being performed and will be completed in December 2021. It is expected that the results will show the most relevant factors determining user behavior toward sustainable and service-oriented parcel deliveries (e.g., age, current service experience, willingness to pay) and give deeper insights into the characteristics of the segments that are more or less willing to pay for a better parcel delivery service. Based on the expected results, relevant implications and conclusions can be derived for startups that are about to change the way parcels are delivered: more customer-oriented through time-window delivery and parcel consolidation, and more environmentally friendly through cargo-bike delivery. The results will give detailed insights into their target groups of parcel recipients. Further research can explore alternative revenue models (beyond the parcel recipient) that could compensate for the additional costs, e.g., online shops that increase their service level or municipalities that reduce traffic on their streets.
Keywords: customer segmentation, e-commerce, last mile delivery, parcel service, urban logistics, willingness-to-pay
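The segmentation step can be sketched as clustering on standardized survey variables. This is a hedged illustration in Python rather than the authors' R workflow: the two-segment structure, the variables, and all numbers below are assumptions, not the survey data.

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic respondents: columns = [age, income index, willingness-to-pay score];
# two hypothetical segments (younger/high-WTP vs. older/low-WTP) are assumed
seg_a = rng.normal([30, 0.7, 8.0], [5, 0.1, 1.0], size=(150, 3))
seg_b = rng.normal([55, 0.4, 2.0], [6, 0.1, 1.0], size=(150, 3))
data = np.vstack([seg_a, seg_b])
z = (data - data.mean(0)) / data.std(0)   # standardize before clustering

def kmeans(x, k, iters=50):
    """Plain Lloyd's k-means with deterministic seed points."""
    centers = x[[0, len(x) - 1]] if k == 2 else x[:k].copy()
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(z, k=2)
# mean willingness-to-pay per segment, back on the original scale
wtp_by_cluster = [data[labels == j, 2].mean() for j in range(2)]
```

In the actual study, a factor analysis would first reduce the socio-demographic and value-related variables before clustering; the sketch only illustrates the segment-derivation idea.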
Procedia PDF Downloads 108
190 Effect of Methoxy and Polyene Additional Functionalized Group on the Photocatalytic Properties of Polyene-Diphenylaniline Organic Chromophores for Solar Energy Applications
Authors: Ife Elegbeleye, Nnditshedzeni Eric, Regina Maphanga, Femi Elegbeleye, Femi Agunbiade
Abstract:
The global potential of other renewable energy sources such as wind, hydroelectric, biomass, and geothermal is estimated at approximately 13%, with hydroelectricity constituting the larger share. Sunlight provides by far the largest of all carbon-neutral energy sources: more energy from sunlight strikes the Earth in one hour (4.3 × 10²⁰ J) than all the energy consumed on the planet in a year (4.1 × 10²⁰ J). Hence, solar energy remains the most abundant clean, renewable energy resource available to mankind. Photovoltaic (PV) devices such as silicon solar cells and dye-sensitized solar cells are utilized for harnessing solar energy. Polyene-diphenylaniline organic molecules are an important class of molecules that has attracted much research interest as photosensitizers in TiO₂ semiconductor-based dye-sensitized solar cells (DSSCs). The advantages of organic dye molecules over metal-based complexes are a higher extinction coefficient, moderate cost, good environmental compatibility, and favorable electrochemical properties. Polyene-diphenylaniline organic dyes with the basic donor-π-acceptor configuration are affordable, easy to synthesize, and possess chemical structures that can easily be modified to optimize their photocatalytic and spectral properties. The great interest in polyene-diphenylaniline dyes as photosensitizers is due to their fascinating spectral properties, which include absorption from visible light to the near-infrared. In this work, a density functional theory approach, via the GPAW software together with Avogadro and ASE, was employed to study the effect of the methoxy functionalized group on the spectral properties of polyene-diphenylaniline dyes and their photon-absorbing characteristics in the visible to near-infrared region of the solar spectrum. Our results showed that the two phenyl-based complexes D5 and D7 exhibit maximum absorption peaks at 750 nm and 850 nm, while D9 and D11 with the methoxy group show maximum absorption peaks at 800 nm and 900 nm, respectively.
The highest absorption wavelengths are notable for D9 and D11, which contain the additional polyene and methoxy groups. The D9 and D11 chromophores with the methoxy group also show lower energy gaps, of 0.98 and 0.85 respectively, than the corresponding D5 and D7 dye complexes, whose energy gaps are 1.32 and 1.08. The analysis of their electron injection kinetics ∆Ginject into the band gap of TiO₂ shows that D9 and D11 with the methoxy group have higher ∆Ginject values, of -2.070 and -2.030, than the corresponding polyene-diphenylaniline complexes without the additional polyene group, whose ∆Ginject values are -2.820 and -2.130, respectively. Our findings suggest that the addition of a functionalized group as an extension of the organic complexes results in higher light-harvesting efficiencies and a bathochromic shift of the absorption spectra to longer wavelengths, which suggests higher current densities and open-circuit voltages in DSSCs. The study suggests that the photocatalytic properties of organic chromophores/complexes with a donor-π-acceptor configuration can be enhanced by the addition of functionalized groups.
Keywords: renewable energy resource, solar energy, dye sensitized solar cells, polyene-diphenylaniline organic chromophores
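The link between the reported bathochromic shift and the transition energy can be sketched with the Planck relation, and a ∆Ginject estimate under one common DSSC convention (potentials in V vs. NHE, with E_CB(TiO₂) taken as about -0.5 V). All dye-specific inputs below are hypothetical; the study's own ∆Ginject values come from the DFT calculations, not this formula with these numbers.

```python
def ev_from_nm(wavelength_nm):
    """Planck relation: E (eV) = hc / lambda = 1239.84 / lambda(nm)."""
    return 1239.84 / wavelength_nm

def delta_g_inject(e_ox, e_00, e_cb=-0.5):
    """dG_inject = E*_ox - E_CB, with E*_ox = E_ox(dye) - E_0-0.
    More negative values indicate a stronger injection driving force.
    Inputs and the E_CB value are illustrative assumptions."""
    return (e_ox - e_00) - e_cb

# the reported red shift from 750 nm (D5) to 800 nm (D9) lowers the
# transition energy, consistent with the smaller computed energy gaps
e_d5 = ev_from_nm(750)
e_d9 = ev_from_nm(800)
```

Under this convention a dye with a hypothetical ground-state oxidation potential of 1.0 V and a 2.0 eV excitation energy would give ∆Ginject = -0.5, i.e., thermodynamically favorable injection.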
Procedia PDF Downloads 115
189 DNA Barcoding for Identification of Dengue Vectors from Assam and Arunachal Pradesh: North-Eastern States in India
Authors: Monika Soni, Shovonlal Bhowmick, Chandra Bhattacharya, Jitendra Sharma, Prafulla Dutta, Jagadish Mahanta
Abstract:
Aedes aegypti and Aedes albopictus are considered the two major vectors transmitting dengue virus. In North-east India, two states, Assam and Arunachal Pradesh, are known to be highly endemic zones for dengue and Chikungunya viral infections. The taxonomic classification of medically important vectors is important for mapping actual evolutionary trends and for epidemiological studies. However, misidentification of species in field-collected mosquito specimens could have a negative impact on vector-borne disease control policy. DNA barcoding is a prominent method to record available species and to detect new additions and changes in population structure. In this study, a combined approach of morphological identification and the molecular technique of DNA barcoding was adopted to explore sequence variation in the mitochondrial cytochrome c oxidase subunit I (COI) gene within dengue vectors. The study mapped the distribution of dengue vectors across the two states of Assam and Arunachal Pradesh, India. Approximately five hundred mosquito specimens were collected from different parts of the two states, and their morphological features were compared with taxonomic keys. Detailed taxonomic analysis identified two species, Aedes aegypti and Aedes albopictus. Aedes aegypti comprised 66.6% of the specimens and was the dominant dengue vector species. The sequences obtained through the standard DNA barcoding protocol were compared with the public databases GenBank and BOLD. All Aedes albopictus sequences showed 100% similarity, whereas Aedes aegypti sequences showed 99.77-100% COI similarity with conspecifics from different geographical locations, based on BOLD database searches.
Fifty-nine sequences of the same and related taxa, from different dengue-prevalent geographical regions, were retrieved from the NCBI and BOLD databases to determine evolutionary distances based on phylogenetic analysis. Neighbor-Joining (NJ) and Maximum Likelihood (ML) phylogenetic trees were constructed in MEGA 6.06 software with 1000 bootstrap replicates using the Kimura 2-parameter model. Sequence divergence analysis found that intraspecific divergence ranged from 0.0 to 2.0% and interspecific divergence from 11.0 to 12.0%. Transitional and transversional substitutions were tested individually. The sequences were deposited in the NCBI GenBank database. This represents the first DNA barcoding analysis of Aedes mosquitoes from the North-eastern states of India and also confirms the range expansion of two important mosquito species. Overall, this study provides insight into the molecular ecology of the dengue vectors of North-eastern India, which will help improve the existing entomological surveillance and vector incrimination programs.
Keywords: COI, dengue vectors, DNA barcoding, molecular identification, North-east India, phylogenetics
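The distance model named above can be sketched directly: the Kimura 2-parameter distance corrects observed transition (P) and transversion (Q) proportions for multiple substitutions via d = -(1/2)·ln((1-2P-Q)·√(1-2Q)). A minimal implementation on aligned sequences (the toy sequences below are illustrative, not COI barcodes):

```python
import math

def k2p_distance(seq1, seq2):
    """Kimura 2-parameter distance between two aligned DNA sequences."""
    purines = {"A", "G"}
    n = transitions = transversions = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a not in "ACGT" or b not in "ACGT":
            continue  # skip gaps and ambiguous sites
        n += 1
        if a == b:
            continue
        if (a in purines) == (b in purines):
            transitions += 1      # A<->G or C<->T
        else:
            transversions += 1    # purine <-> pyrimidine
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

# one transition (C->T) among 10 aligned sites
d = k2p_distance("ACGTACGTAC", "ACGTACGTAT")
```

MEGA applies this same model across all sequence pairs before building the NJ tree; intraspecific pairs give small distances (the abstract's 0.0-2.0%) and interspecific pairs much larger ones (11.0-12.0%).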
Procedia PDF Downloads 304
188 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks
Authors: Andrew N. Saylor, James R. Peters
Abstract:
Scoliosis is a complex 3D deformity of the thoracic and lumbar spines, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. 
The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The best-performing network used ReLU neurons, three hidden layers, and 100 neurons per layer. Its average mean squared error was 222.28 ± 30 degrees², and its average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer, performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging
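The best-performing configuration (ReLU, three hidden layers, 100 neurons per layer, MSE cost) can be sketched as a plain NumPy forward pass. This is a hedged illustration of the architecture only, not the authors' TensorFlow 1.13.1 training pipeline; the input size and dummy data are assumptions standing in for the 500 × 187 X-ray images.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_dense(sizes, rng):
    """He-initialized weights and zero biases for a fully connected ReLU net."""
    return [(rng.normal(0, np.sqrt(2 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for w, b in params[:-1]:
        x = np.maximum(x @ w + b, 0.0)   # ReLU hidden layers
    w, b = params[-1]
    return x @ w + b                     # linear output: predicted Cobb angle

# best condition in the study: 3 hidden layers of 100 ReLU neurons;
# the 20-feature input is a placeholder, not the flattened 500x187 image
params = init_dense([20, 100, 100, 100, 1], rng)
x = rng.normal(size=(10, 20))            # 10 dummy "images"
y_true = rng.uniform(10, 60, size=(10, 1))   # dummy Cobb angle labels (deg)
mse = float(np.mean((forward(params, x) - y_true) ** 2))
```

Training would then minimize this MSE by stochastic gradient descent with early stopping, as the abstract describes.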
Procedia PDF Downloads 131
187 Multiparticulate SR Formulation of Dexketoprofen Trometamol by Wurster Coating Technique
Authors: Bhupendra G. Prajapati, Alpesh R. Patel
Abstract:
The aim of this research work is to develop a sustained-release multiparticulate dosage form of dexketoprofen trometamol, the pharmacologically active isomer of ketoprofen. With the objective of utilizing the active enantiomer at a minimal dose and administration frequency, an extended-release multiparticulate dosage form was explored for better patient compliance. Drug-loaded and sustained-release-coated pellets were prepared by the fluidized-bed coating principle in a Wurster coater. Microcrystalline cellulose was selected as the core pellet, povidone as the binder, and talc as the anti-tacking agent during drug loading, while Kollicoat SR 30D as the sustained-release polymer, triethyl citrate as the plasticizer, and micronized talc as the anti-adherent were used in sustained-release coating. Binder optimization trials during drug loading showed that process efficiency increased with binder concentration. Povidone K30 at 5 and 7.5% w/w with respect to the drug amount gave more than 90% process efficiency, while a higher amount of rejects (agglomerates) was observed for the drug-layering batch with 7.5% binder. The optimum povidone concentration for drug loading was therefore selected as 5% of the drug substance quantity, since this trial showed good process feasibility and good adhesion of the drug onto the MCC pellets. Talc at 2% w/w with respect to the total drug-layering solid mass showed good anti-tacking properties, reducing static charge and agglomerate formation during spraying. Optimized drug-loaded pellets were coated for sustained release at 16 to 28% w/w weight gain; the results suggested that a 22% w/w coating weight gain is necessary to achieve the required drug release profile.
Three critical process parameters of Wurster coating for sustained release were further statistically optimized for the desired quality target product profile attributes, such as agglomerate formation, process efficiency, and drug release profile, using a central composite design (CCD) in Minitab software. The derived design space, consisting of 1.0-1.2 bar atomization air pressure, 7.8-10.0 g/min spray rate, and 29-34 °C product bed temperature, gave the pre-defined drug product quality attributes. Scanning image microscopy also indicated that the optimized batch pellets had a very narrow particle size distribution and a smooth surface, ideal properties for a reproducible drug release profile. The study also confirmed that the optimized dexketoprofen trometamol pellet formulation retains its quality attributes when administered with a common vehicle, either a liquid (water) or a semisolid food (apple sauce). Conclusion: Sustained-release multiparticulates were successfully developed for dexketoprofen trometamol, which may be useful for improving the acceptability and palatability of the dosage form for better patient compliance.
Keywords: dexketoprofen trometamol, pellets, fluid bed technology, central composite design
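The CCD run plan can be sketched by generating the coded design points (2^k factorial corners, 2k axial points, and a center point) and mapping them onto the three coating factors. This is a hedged illustration: the midpoints and half-widths below are inferred from the reported design-space ranges, and the study's actual Minitab design (replicates, center points) is not reproduced.

```python
import numpy as np
from itertools import product

def central_composite(k, alpha=None):
    """Coded points for a k-factor circumscribed central composite design."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatable-design axial distance
    corners = np.array(list(product([-1, 1], repeat=k)), dtype=float)
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((1, k))
    return np.vstack([corners, axial, center])

# factors: atomization pressure (bar), spray rate (g/min), bed temperature (C)
design = central_composite(3)             # 8 corners + 6 axial + 1 center = 15 runs
mid = np.array([1.1, 8.9, 31.5])          # assumed midpoints of the reported ranges
half = np.array([0.1, 1.1, 2.5])          # assumed half-widths
real_runs = mid + design * half           # uncoded run settings
```

Each row of `real_runs` is one coating run; fitting a quadratic response-surface model to the measured attributes over these runs is what defines the design space.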
Procedia PDF Downloads 136
186 Hybrid Renewable Energy Systems for Electricity and Hydrogen Production in an Urban Environment
Authors: Same Noel Ngando, Yakub Abdulfatai Olatunji
Abstract:
Renewable energy micro-grids, such as those powered by solar or wind energy, are often intermittent in nature: the amount of energy generated can vary with weather conditions and other factors, which makes it difficult to ensure a steady supply of power. To address this issue, energy storage systems have been developed to increase the reliability of renewable energy micro-grids. Battery systems have been the dominant storage technology for such micro-grids. Batteries can store large amounts of energy in a relatively small and compact package, making them easy to install and maintain in a micro-grid setting, and they can be charged and discharged quickly, allowing them to respond rapidly to changes in energy demand. However, recycling batteries is costly and difficult. An alternative energy storage system that is gaining popularity is hydrogen storage. Hydrogen is a versatile energy carrier that can be produced from renewable energy sources such as solar or wind. It can be stored in large quantities at low cost, making it suitable for long-distance mass storage. Unlike batteries, hydrogen does not degrade over time, so it can be stored for extended periods without frequent maintenance or replacement, allowing it to serve as a backup power source when the micro-grid is not generating enough energy to meet demand. When hydrogen is needed, it can be converted back into electricity through a fuel cell. Energy consumption data were obtained from a residential area in Daegu, South Korea, and processed and analyzed. From this analysis, the total energy demand was calculated, and different hybrid energy system configurations were designed using HOMER Pro (Hybrid Optimization for Multiple Energy Resources) and MATLAB software.
A techno-economic and environmental comparison and a life cycle assessment (LCA) of the different configurations, using batteries and hydrogen as storage systems, were carried out. The scenarios included PV-hydrogen-grid, PV-hydrogen-grid-wind, PV-hydrogen-grid-biomass, PV-hydrogen-wind, PV-hydrogen-biomass, biomass-hydrogen, wind-hydrogen, PV-battery-grid-wind, PV-battery-grid-biomass, PV-battery-wind, PV-battery-biomass, and biomass-battery systems. From the analysis, the least-cost system for the location was the PV-hydrogen-grid system, with a net present cost of about USD 9,529,161. Although all scenarios were environmentally friendly, once the recycling cost and pollution associated with battery systems were taken into account, all systems with hydrogen as the storage medium produced better results. In conclusion, hydrogen is becoming a prominent energy storage solution for renewable energy micro-grids. It is easier to store than electric power, so it is suitable for long-distance mass storage. Hydrogen storage systems have several advantages over battery systems, including flexibility, long-term stability, and low environmental impact. The cost of hydrogen storage is still relatively high, but it is expected to decrease as more hydrogen production and storage infrastructure is built. With the growing focus on renewable energy and the need to reduce greenhouse gas emissions, hydrogen is expected to play an increasingly important role in the energy storage landscape.
Keywords: renewable energy systems, microgrid, hydrogen production, energy storage systems
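The net present cost (NPC) ranking used to compare configurations can be sketched as capital outlay plus discounted yearly net costs. This is a hedged illustration of the metric only: the discount rate, lifetime, and the two example configurations below are hypothetical, not the study's HOMER Pro inputs.

```python
def net_present_cost(capital, annual_cost, annual_revenue=0.0,
                     rate=0.06, years=25):
    """HOMER-style net present cost: capital plus the present value of
    yearly net costs over the project lifetime (rate/years are assumptions)."""
    pv = sum((annual_cost - annual_revenue) / (1 + rate) ** t
             for t in range(1, years + 1))
    return capital + pv

# hypothetical illustrative configurations (not the study's actual figures):
# hydrogen storage = higher capital, lower yearly operating cost
npc_h2 = net_present_cost(capital=6_000_000, annual_cost=300_000)
npc_batt = net_present_cost(capital=4_000_000, annual_cost=550_000)
cheaper = "hydrogen" if npc_h2 < npc_batt else "battery"
```

Ranking every scenario by this single discounted-cost figure is what identifies the least-cost system, here the PV-hydrogen-grid configuration in the study.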
Procedia PDF Downloads 94
185 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea
Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal
Abstract:
Tectonism-induced tsunamis, landslides, ground shaking leading to liquefaction, infrastructure collapse, and conflagration are common earthquake hazards experienced worldwide. Apart from human casualties, damage to built infrastructure such as roads, bridges, buildings, and other property is a collateral consequence. Appropriate planning, based on proper evaluation and assessment of the potential level of earthquake hazard at a site, must precede construction in order to safeguard people's welfare, infrastructure, and other property. The resulting information can assist in minimizing earthquake risk and can foster appropriate construction design and the formulation of building codes for a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world. For the present study, the potential of GIS and Remote Sensing was utilized to evaluate and assess the earthquake hazards of the study region. Subsurface geology and geomorphology were the common factors assessed and integrated within a GIS environment, coupled with seismicity data layers such as Peak Ground Acceleration (PGA), historical earthquake magnitude, and earthquake depth, to evaluate and prepare liquefaction potential zones (LPZ), culminating in an earthquake hazard zonation of the study sites. Liquefaction can eventuate in the aftermath of severe ground shaking where the site soil conditions, geology, and geomorphology are amenable. These site conditions, the wave propagation media, were assessed to identify the potential zones. The precept is that during any earthquake event, the seismic wave generated at the earthquake focus propagates to the surface.
As it propagates, it passes through certain geological or geomorphological and specific soil features; these features, according to their strength, stiffness, and moisture content, aggravate or attenuate the strength of wave propagation to the surface. Accordingly, the resulting intensity of shaking may or may not culminate in the collapse of built infrastructure. For the earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-criteria Evaluation (MCE) with Saaty's Analytic Hierarchy Process (AHP) was adopted for this study. It is a GIS technique that integrates several factors (thematic layers) that can potentially contribute to earthquake-triggered liquefaction. The factors were weighted and ranked in order of their contribution to earthquake-induced liquefaction, and the weights and rankings assigned to each factor were normalized with the AHP technique. The spatial analysis tools of ArcGIS 10 (raster calculator, reclassify, and overlay analysis) were mainly employed in the study. The final LPZ and earthquake hazard zone outputs were reclassified into 'Very High', 'High', 'Moderate', 'Low', and 'Very Low' to indicate levels of hazard within the study region.
Keywords: hazard micro-zonation, liquefaction, multi criteria evaluation, tectonism
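The AHP weighting and normalization step can be sketched from a pairwise comparison matrix: the principal eigenvector gives the normalized factor weights, and the consistency ratio (CR) checks the judgments. The matrix below and the choice of four factors are illustrative assumptions, not the study's actual judgments.

```python
import numpy as np

# pairwise comparison matrix (Saaty 1-9 scale) for four liquefaction factors,
# ordered by assumed importance: [geology, geomorphology, soil, PGA]
A = np.array([
    [1,     2,     3,     4],
    [1 / 2, 1,     2,     3],
    [1 / 3, 1 / 2, 1,     2],
    [1 / 4, 1 / 3, 1 / 2, 1],
], dtype=float)

# principal-eigenvector weights, normalized to sum to 1
vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
w = np.abs(vecs[:, i].real)
w /= w.sum()

# consistency ratio; random index RI = 0.90 for n = 4 (Saaty's table)
n = len(A)
lam_max = vals.real[i]
CI = (lam_max - n) / (n - 1)
CR = CI / 0.90       # judgments are acceptable when CR < 0.10
```

In the GIS workflow, each thematic layer is then reclassified to a common score scale and combined in the raster calculator as a weighted sum using `w`.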
Procedia PDF Downloads 267
184 Adaptation Measures as a Response to Climate Change Impacts and Associated Financial Implications for Construction Businesses by the Application of a Mixed Methods Approach
Authors: Luisa Kynast
Abstract:
It is obvious that buildings and infrastructure are highly impacted by climate change (CC). Both the design and the materials of buildings need to be resilient to weather events in order to shelter humans, animals, and goods. Just as buildings and infrastructure are exposed to weather events, the construction process itself is generally carried out outdoors, without protection from extreme temperatures, heavy rain, or storms. The production process is restricted by technical limitations on processing materials with machines and by the physical limitations of the human workforce ("outdoor workers"). Due to CC, average weather patterns are expected to change, and extreme weather events are expected to occur more frequently and more intensely, and therefore to have a greater impact on production processes and on construction businesses themselves. This research aims to examine this impact by analyzing the association between responses to CC and the financial performance of businesses within the construction industry. After embedding the field of research depicted above in resource dependency theory, a literature review was conducted to expound the state of research concerning a contingent relation between climate change adaptation measures (CCAM) and corporate financial performance for construction businesses. The examined studies show that this field is rarely investigated, especially for construction businesses. Therefore, reports submitted to the Carbon Disclosure Project (CDP) were analyzed by applying content analysis using the software tool MAXQDA; 58 construction companies located worldwide could be examined. To proceed more systematically, a coding scheme analogous to findings in the literature was adopted. The qualitative analysis was then quantified, and a regression analysis incorporating corporate financial data was conducted.
The results stress adaptation measures as a response to CC as a crucial means of handling climate change impacts (CCI) by mitigating risks and exploiting opportunities. In the CDP reports, the majority of answers stated increasing costs/expenses as a result of implemented measures; a link to sales/revenue was rarely drawn, though where it was, CCAM were connected to increasing sales/revenues. This presumption is supported by the results of the regression analysis, in which a positive effect of implemented CCAM on construction businesses' financial performance in the short run was ascertained. These findings refer to appropriate responses in terms of the number of implemented CCAM. Nevertheless, businesses still show a reluctant attitude towards implementing CCAM, which was confirmed by findings in the literature as well as in the CDP reports. Businesses mainly associate CCAM with costs and expenses rather than with an effect on their corporate financial performance. Most companies underrate the effect of CCI, overrate the costs and expenditures of implementing CCAM, and completely neglect the pay-off. Therefore, this research shall create a basis for bringing CC to the (financial) attention of corporate decision-makers, especially within the construction industry.
Keywords: climate change adaptation measures, construction businesses, financial implication, resource dependency theory
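The quantification step described above — regressing a financial-performance measure on the number of implemented CCAM — can be sketched as follows; the data and variable names are invented for illustration, not taken from the CDP reports.

```python
import numpy as np

# Hypothetical illustration: regress a financial-performance measure on the
# number of implemented CCAM. All figures are invented, not CDP data.
ccam_count = np.array([1, 2, 3, 4, 5, 6], dtype=float)
performance = 0.5 + 1.2 * ccam_count  # exactly linear, for a clear check

# Ordinary least squares via the design matrix: y = b0 + b1 * x
X = np.column_stack([np.ones_like(ccam_count), ccam_count])
b0, b1 = np.linalg.lstsq(X, performance, rcond=None)[0]
print(round(b0, 3), round(b1, 3))  # a positive b1 mirrors the reported short-run effect
```

In the study itself the regression would include controls and panel structure; this sketch only shows the direction of the estimated effect.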
Procedia PDF Downloads 145
183 Vibration Based Structural Health Monitoring of Connections in Offshore Wind Turbines
Authors: Cristobal García
Abstract:
The visual inspection of bolted joints in wind turbines is dangerous, expensive, and impractical, because the platform cannot be accessed by workboat in certain sea-state conditions and because of the high costs of transporting maintenance technicians to offshore platforms located far from the coast, especially if helicopters are involved. Consequently, wind turbine operators need simpler and less demanding techniques for the analysis of bolt tightening. Vibration-based structural health monitoring is one of the oldest and most widely used means of monitoring the health of onshore and offshore wind turbines. The core of this work is to find out whether the modal parameters can be used efficiently as key performance indicators (KPIs) for the assessment of joint bolts in a 1:50 scale tower of a floating offshore wind turbine (12 MW). A non-destructive vibration test is used to extract the vibration signals of towers with different damage statuses. The procedure can be summarized in three consecutive steps. First, an artificial excitation is introduced by means of a commercial shaker mounted on the top of the tower. Second, the vibration signals of the towers are recorded for 8 s at a sampling rate of 20 kHz using an array of commercial accelerometers (Endevco, 44A16-1032). Third, the natural frequencies, damping, and overall vibration mode shapes are calculated using the Siemens LMS 16A software. Experiments show that the natural frequencies, damping, and mode shapes of the tower depend directly on the fixing conditions of the towers, and therefore the variations of these parameters are a good indicator for estimating the static axial force acting in the bolt.
Thus, the proposed vibration-based structural method can potentially be used as a diagnostic tool to evaluate the tightening torques of bolted joints, with the advantages of being an economical, straightforward, and multidisciplinary approach that operation and maintenance technicians can apply to different typologies of connections. In conclusion, TSI, in collaboration with the consortium of the FIBREGY project, is conducting innovative research in which vibrations are utilized to estimate the tightening torque of a 1:50 scale steel tower prototype. The findings of this research, carried out in the context of FIBREGY, have multiple implications for assessing bolted joint integrity in many types of connections, such as tower-to-nacelle, modular, tower-to-column, and tube-to-tube. The EU-funded FIBREGY project (H2020, grant number 952966) will evaluate the feasibility of the design and construction of a new generation of marine renewable energy platforms using lightweight FRP materials in certain structural elements (e.g., the tower and the floating platform). The FIBREGY consortium is composed of 11 partners specialized in the offshore renewable energy sector and is funded partially by the H2020 programme of the European Commission, with an overall budget of 8 million euros.
Keywords: SHM, vibrations, connections, floating offshore platform
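The modal-extraction step described above can be illustrated with a minimal FFT peak-picking sketch; the 8 s record length and 20 kHz sampling rate follow the abstract, while the 50 Hz mode and the signal itself are invented for illustration (the study used commercial modal-analysis software, not this procedure).

```python
import numpy as np

# Recover the dominant natural frequency from an accelerometer trace by FFT
# peak-picking. 8 s at 20 kHz follows the abstract; the 50 Hz mode and the
# lightly damped signal are invented for illustration.
fs = 20_000                       # sampling rate, Hz
t = np.arange(0, 8.0, 1 / fs)
f_mode = 50.0                     # hypothetical first mode of the scaled tower
signal = np.sin(2 * np.pi * f_mode * t) * np.exp(-0.05 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)  # ≈ 50.0 Hz
```

A shift of this peak between tightened and loosened configurations is the kind of indicator the abstract proposes for bolt assessment.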
Procedia PDF Downloads 126
182 Development of Mesoporous Gel Based Nonwoven Structure for Thermal Barrier Application
Authors: R. P. Naik, A. K. Rakshit
Abstract:
In recent years, with the rapid development of science and technology, people have increasing requirements for clothing with new functions, which creates opportunities for the further development and incorporation of new technologies along with novel materials. In this context, textiles act as fast heat-absorbing or fast heat-radiating media as far as the comfort of textile articles is concerned. The microstructure and texture of textiles play a vital role in determining the heat-moisture comfort level of the human body, because clothing serves as a barrier to the outside environment and a transporter of heat and moisture from the body to the surrounding environment, keeping a thermal balance between the body heat produced and the body heat lost. The main bottlenecks that keep textile materials from succeeding as thermal insulation materials can be enumerated as follows. First, a high loft or bulkiness of the material is needed to provide a predetermined amount of insulation by ensuring sufficient trapping of air. Second, the insulation depends on forced convection; such convective heat loss cannot be prevented by a textile material. Third, a textile alone cannot reach a thermal conductivity below that of air, 0.025 W/m·K. Perhaps nano-fibers can do so, but mass production and cost-effectiveness remain a problem. Finally, such high-loft materials for thermal insulation become heavy and difficult to manage, especially when they must be carried on the body. The proposed work aims at developing lightweight, effective thermal insulation textiles in combination with nanoporous silica gel, which provides the fundamental basis for optimizing the material properties to achieve good performance of the clothing system. This flexible nonwoven silica-gel composite fabric, an intact monolith, was successfully developed by reinforcing SiO₂ gel in a thermally bonded nonwoven fabric via sol-gel processing.
The ambient pressure drying method was chosen for silica gel preparation for cost-effective manufacturing. The formed structure of the nonwoven/SiO₂-gel composites was analyzed, and the transfer properties were measured. The effects of structure and fibre on the thermal properties of the SiO₂-gel composites were evaluated. Samples were then tested against untreated samples of the same GSM in order to study the effect of the SiO₂-gel application on various properties of the nonwoven fabric. The nonwoven fabric composites reinforced with aerogel, which showed an intact monolith structure, were also analyzed for their surface structure, the functional groups present, and microscopic images. The developed product reveals a significant reduction in pore size and air permeability compared with the conventional nonwoven fabric. The composite made from polyester fibre with lower GSM shows the lowest thermal conductivity. The results obtained were statistically analyzed using the STATISTICA-6 software for their level of significance. Univariate tests of significance for various parameters were performed, giving the P values for analyzing the significance level; a regression summary for the dependent variable was also studied to obtain the correlation coefficient.
Keywords: silica-gel, heat insulation, nonwoven fabric, thermal barrier clothing
Procedia PDF Downloads 112
181 Oblique Radiative Solar Nano-Polymer Gel Coating Heat Transfer and Slip Flow: Manufacturing Simulation
Authors: Anwar Beg, Sireetorn Kuharat, Rashid Mehmood, Rabil Tabassum, Meisam Babaie
Abstract:
Nano-polymeric solar paints and sol-gels have emerged as a major new development in solar cell/collector coatings, offering significant improvements in durability, anti-corrosion performance, and thermal efficiency. They also exhibit substantial viscosity variation with temperature, which can be exploited in solar collector designs. Modern manufacturing processes for such nano-rheological materials frequently employ stagnation flow dynamics under high temperature, which invokes radiative heat transfer. Motivated by elaborating in further detail the nanoscale heat, mass, and momentum characteristics of such sol-gels, the present article presents a mathematical and computational study of the steady, two-dimensional, non-aligned thermo-fluid boundary layer transport of copper metal-doped, water-based nano-polymeric sol-gels under radiative heat flux. To simulate real nano-polymer boundary interface dynamics, thermal slip is analysed at the wall. A temperature-dependent viscosity is also considered. The Tiwari-Das nanofluid model is deployed, which features a volume fraction for the nanoparticle concentration, together with a Maxwell-Garnett model for the nanofluid thermal conductivity. The conservation equations for mass, normal and tangential momentum, and energy (heat) are normalized via appropriate transformations to generate a multi-degree, ordinary differential, non-linear, coupled boundary value problem. Numerical solutions are obtained via the stable, efficient Runge-Kutta-Fehlberg scheme with shooting quadrature in the MATLAB software. Validation of the solutions is achieved with a Variational Iterative Method (VIM) utilizing Lagrangian multipliers. The impact of the key emerging dimensionless parameters, i.e.
the obliqueness parameter, the radiation-conduction Rosseland number (Rd), the thermal slip parameter (α), the viscosity parameter (m), and the nanoparticle volume fraction (ϕ), on the non-dimensional normal and tangential velocity components, temperature, wall shear stress, local heat flux, and streamline distributions is visualized graphically. Shear stress and temperature are boosted with increasing radiative effect, whereas the local heat flux is reduced. Increasing the wall thermal slip parameter depletes temperatures. With a greater volume fraction of copper nanoparticles, the temperature and thermal boundary layer thickness are elevated. Streamlines are found to be skewed markedly towards the left with a positive obliqueness parameter.
Keywords: non-orthogonal stagnation-point heat transfer, solar nano-polymer coating, MATLAB numerical quadrature, Variational Iterative Method (VIM)
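The shooting procedure named above can be sketched on a toy boundary value problem; this is not the paper's nanofluid model, and scipy's 'RK45' is an embedded Runge-Kutta pair of the same family as Runge-Kutta-Fehlberg.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy boundary value problem y'' = 6x, y(0) = 0, y(1) = 1 (exact solution
# y = x^3, so the true initial slope is 0). The shooting method integrates
# with a trial slope s and adjusts s until the far boundary condition holds.
def residual(s):
    sol = solve_ivp(lambda x, y: [y[1], 6 * x], (0.0, 1.0), [0.0, s],
                    method="RK45", rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0   # miss distance at x = 1

slope0 = brentq(residual, -5.0, 5.0)
print(slope0)  # ≈ 0 for this toy problem
```

The paper's coupled momentum-energy system works the same way, only with several unknown wall gradients adjusted simultaneously.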
Procedia PDF Downloads 136
180 Expanding Entrepreneurial Capabilities through Business Incubators: A Case Study of Idea Hub Nigeria
Authors: Kenechukwu Ikebuaku
Abstract:
Entrepreneurship has long been offered as a panacea for poor economic growth and a high rate of unemployment. Business incubation is considered an effective means of enhancing entrepreneurial activities while engendering socio-economic development. The Information Technology Developers Entrepreneurship Accelerator (iDEA) is a software business incubation programme established by the Nigerian government as a means of boosting digital entrepreneurship activities and reducing unemployment in the country. This study assessed the contribution of iDEA Nigeria's entrepreneurship programmes towards enhancing the capabilities of its tenants. Using the capability approach and the sustainable livelihoods approach, the study analysed the iDEA programmes' contribution towards the expansion of participants' entrepreneurial capabilities. Apart from identifying a set of entrepreneurial capabilities from both the literature and empirical analysis, the study went further to ascertain how iDEA incubation has helped to enhance those capabilities for its tenants. It also examined digital entrepreneurship both as a valued functioning and as an intermediate functioning leading to other valuable functionings. Furthermore, the study examined gender as a conversion factor in digital entrepreneurship. Both qualitative and quantitative research methods were used, and measurements of key variables were made. While the entire population was targeted to collect data for the quantitative research, purposive sampling was used to select respondents for semi-structured interviews in the qualitative research. However, only 40 beneficiaries agreed to take part in the survey, while 10 respondents were interviewed for the study. Responses collected from the questionnaires administered were subjected to statistical analysis using SPSS. The study developed indexes to measure the respondents' perception of how the iDEA programmes have enhanced their entrepreneurial capabilities.
The Capabilities Enhancement Perception Index (CEPI) computed indicated that the respondents believed the iDEA programmes enhanced their entrepreneurial capabilities. While access to power supply and reliable internet had the highest positive deviations around the mean, negotiation skills and access to customers/clients had the highest negative deviations. These findings were well supported by the qualitative analysis, in which the participants unequivocally narrated how the resources provided by iDEA aided them in their entrepreneurial endeavours. It was also found that the iDEA programmes have a significant effect on the tenants' access to networking opportunities, both with other emerging entrepreneurs and with established entrepreneurs. In assessing gender as a conversion factor, it was discovered that female participation within the digital entrepreneurship ecosystem was very low. The root cause of this gender disparity was found in unquestioned cultural beliefs and social norms that relegate women to a subservient position and household duties. The findings also showed that many of the entrepreneurs could be considered opportunity-based rather than necessity entrepreneurs, and that digital entrepreneurship is a valued functioning for iDEA tenants. With regard to the challenges facing digital entrepreneurship in Nigeria, infrastructural/institutional inadequacies, lack of funding opportunities, and unfavourable government policies were considered inimical to entrepreneurial capabilities in the country.
Keywords: entrepreneurial capabilities, unemployment, business incubators, development
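A CEPI-style index of the kind described above can be sketched as an average of per-item perception ratings with item deviations around the mean; the item set echoes capabilities named in the abstract, but the scale and numbers are invented, not the survey data or the study's actual formula.

```python
import numpy as np

# Invented per-item mean ratings (e.g. on a 1-5 Likert scale); the item set
# echoes capabilities named in the abstract but the numbers are illustrative.
items = ["power supply", "reliable internet", "negotiation skills",
         "access to customers/clients"]
mean_ratings = np.array([4.6, 4.4, 3.1, 3.0])

cepi = mean_ratings.mean()            # the overall perception index
deviations = mean_ratings - cepi      # positive = above-average enhancement
print(round(cepi, 3), np.round(deviations, 3).tolist())
```

Under this construction, items with positive deviations (here power supply and internet) are perceived as better enhanced than the average capability, matching the pattern the abstract reports.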
Procedia PDF Downloads 239
179 Localized Recharge Modeling of a Coastal Aquifer from a Dam Reservoir (Korba, Tunisia)
Authors: Nejmeddine Ouhichi, Fethi Lachaal, Radhouane Hamdi, Olivier Grunberger
Abstract:
Located on the Cap Bon peninsula (Tunisia), the Lebna dam was built in 1987 to counter the local saltwater intrusion taking place in the coastal aquifer of Korba. The first intention was to reduce coastal groundwater over-pumping by supplying surface water to a large irrigation system. An unpredicted beneficial effect was recorded: a direct localized recharge of the coastal aquifer by leakage through the geological material of the southern bank of the lake. The hydrological balance of the dam reservoir gave an estimate of the annual leakage volume, but dynamic processes and a sound quantification of the recharge inputs are still required to understand the localized effect of the recharge in terms of piezometry and quality. The present work focused on simulating the recharge process to confirm the hypothesis, establish a sound quantification of the water supplied to the coastal aquifer, and extend it to multi-annual effects. A spatial frame of 30 km² was used for modeling. Intensive outcrop and geophysical surveys based on 68 electrical resistivity soundings were used to characterize the 3D geometry of the aquifer and the limit of the Plio-Quaternary geological material involved in the underground flow paths. Permeabilities were determined using 17 pumping tests on wells and piezometers. Six seasonal piezometric surveys of 71 wells around the southern banks of the dam reservoir were performed during the 2019-2021 period. Eight monitoring boreholes with high-frequency (15 min) piezometric data were used to examine dynamic aspects. Model boundary conditions were specified using the geophysical interpretations coupled with the piezometric maps. The dam-groundwater flow model was built using the Visual MODFLOW software. Firstly, a steady-state calibration based on the first piezometric map of February 2019 was established to estimate the permanent flow related to the different reservoir levels.
Secondly, piezometric data for the 2019-2021 period were used for transient-state calibration and to confirm the robustness of the model. Preliminary results confirmed the temporal link between the reservoir level and the localized recharge flow, with a strong threshold effect for levels below 16 m a.s.l. The good agreement between the computed flow through the recharge cells on the southern banks and the hydrological budget of the reservoir opens the path to future simulation scenarios of the dilution plume imposed by the localized recharge. The simulation results indicate a potential for storage of up to 17 mm/year in existing wells, under gravity-feed conditions during level increases in the reservoir, over the three years of operation. The Lebna dam groundwater flow model characterized a spatiotemporal relation between groundwater and surface water.
Keywords: leakage, MODFLOW, saltwater intrusion, surface water-groundwater interaction
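The finite-difference principle underlying a MODFLOW-type model can be sketched in one dimension; the grid, heads, and boundary values below are illustrative assumptions (a reservoir stage at the 16 m a.s.l. threshold mentioned above, sea level at the coast), not the Korba model.

```python
import numpy as np

# Steady-state 1-D confined flow (Laplace's equation h'' = 0) between a fixed
# reservoir stage and a fixed coastal head, solved by relaxation on a grid.
# Geometry and head values are illustrative assumptions.
n = 51
h = np.zeros(n)
h[0], h[-1] = 16.0, 0.0           # reservoir stage 16 m a.s.l.; sea level 0 m

for _ in range(20_000):           # Jacobi relaxation of the 3-point stencil
    h[1:-1] = 0.5 * (h[:-2] + h[2:])

print(round(h[n // 2], 3))        # midpoint head of the linear steady profile
```

MODFLOW solves the same balance on a 3-D grid with heterogeneous conductivities, recharge cells, and transient storage terms; this sketch only shows the head profile a fixed-head boundary pair imposes.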
Procedia PDF Downloads 138
178 Biosynthesis of Silver Nanoparticles Using Zataria multiflora Extract, and Study of Antibacterial Effects on UTI Bacteria (MDR)
Authors: Mohammad Hossein Pazandeh, Monir Doudi, Sona Rostampour Yasouri
Abstract:
Irregular consumption of current antibiotics increases antibiotic resistance among urinary pathogens all over the world; this study was undertaken in response to this major community health problem. The aim of this study was the biosynthesis of silver nanoparticles from Zataria multiflora extract and the subsequent investigation of their antibacterial effect on the gram-negative bacilli common in urinary tract infections (UTI) and multiple drug resistant (MDR). The plant used in the present research was Zataria multiflora, whose extract was prepared through the Soxhlet extraction method. The green synthesis conditions of the silver nanoparticles were investigated in terms of three parameters: the extract amount, the concentration of the silver nitrate salt, and the temperature. The sizes of the nanoparticles were determined by a Zetasizer. To identify the synthesized silver nanoparticles, transmission electron microscopy (TEM) and X-ray diffraction (XRD) methods were used. To evaluate the antibacterial effects of the biologically synthesized nanoparticles, different concentrations of silver nanoparticles were studied on 140 MDR bacterial strains causing urinary tract infections: Escherichia coli, Klebsiella pneumoniae, Enterobacter aerogenes, Proteus vulgaris, Citrobacter freundii, Acinetobacter baumannii, and Pseudomonas aeruginosa (20 samples of each genus). A polymerase chain reaction (PCR) test was used for identification of the bacteria, and laboratory methods (agar well diffusion and microdilution) were used to assess their sensitivity to the nanoparticles. The data were analyzed using SPSS software with the nonparametric Kruskal-Wallis and Mann-Whitney tests. Significant results were found for the effects of silver nitrate concentration, Zataria multiflora extract amount, and temperature on the nanoparticles; that is, by increasing the concentration of silver nitrate, the extract amount, and the temperature, the sizes of the synthesized nanoparticles declined.
However, the effect of the above-mentioned factors on the particle dispersion index was not significant. Based on the TEM results, the particles were mainly spherical, with a diameter range of 25 to 50 nm. The results of the XRD analysis indicated the formation of silver nanostructures and nanocrystals. According to the agar well diffusion and microdilution methods, at 1000 mg/ml the biologically synthesized nanoparticles showed the highest and lowest mean inhibition zone diameters, 23 mm in E. coli and 15 mm in Acinetobacter baumannii, respectively. The MIC was 125 mg/ml for all of the bacteria except Acinetobacter baumannii, for which it was 250 mg/ml. Comparing the growth-inhibitory effect of chemically and biologically synthesized nanoparticles showed that in the chemical method the highest growth inhibition belonged to the concentration of 62.5 mg/ml. An inhibitory effect on the growth of all of the MDR bacteria causing urinary infection was observed, and by increasing the silver ion concentration in the nanoparticles, the antibacterial activity increased. In general, biological synthesis can be considered an efficient way not only of making nanoparticles but also of obtaining antibacterial properties; it is more biocompatible and may possess less toxicity than chemically synthesized nanoparticles.
Keywords: biosynthesis, MDR bacteria, silver nanoparticles, UTI
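The nonparametric comparison described above can be sketched with the same tests from scipy; the inhibition-zone measurements below are invented for illustration, not the study's data.

```python
from scipy import stats

# Invented inhibition-zone diameters (mm) for two nanoparticle concentrations.
zone_low = [14, 15, 15, 16, 15]
zone_high = [22, 23, 23, 24, 23]

u_stat, p_mw = stats.mannwhitneyu(zone_low, zone_high, alternative="two-sided")
h_stat, p_kw = stats.kruskal(zone_low, zone_high)
print(p_mw < 0.05, p_kw < 0.05)  # both significant for these separated groups
```

These rank-based tests suit inhibition-zone data because they assume no particular distribution, which is why the study paired Kruskal-Wallis (several groups) with Mann-Whitney (pairwise follow-up).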
Procedia PDF Downloads 54
177 Productivity of Grain Sorghum-Cowpea Intercropping System: Climate-Smart Approach
Authors: Mogale T. E., Ayisi K. K., Munjonji L., Kifle Y. G.
Abstract:
Grain sorghum and cowpea are important staple crops in many areas of South Africa, particularly the Limpopo Province. The two crops are produced under a wide range of unsustainable conventional methods, which reduces productivity in the long run. Climate-smart traditional methods such as intercropping can be adopted to ensure the sustainable production of these two important crops in the province. A no-tillage field experiment was laid out in a randomised complete block design (RCBD) with four replications over two seasons in two distinct agro-ecological zones of the province, Syferkuil and Ofcolaco, to assess the productivity of sorghum-cowpea intercropped under two cowpea densities. An LCi Ultra compact photosynthesis machine was used to collect photosynthetic rate data biweekly between 11h00 and 13h00 until physiological maturity. The biomass and grain yield of the component crops in binary and sole cultures were determined at harvest maturity from middle rows of a 2.7 m² area. The biomass was oven dried in the laboratory at 65 °C to constant weight. To obtain grain yield, the harvested sorghum heads and cowpea pods were threshed, cleaned, and weighed. The harvest index (HI) and land equivalent ratio (LER) of the two crops were calculated to assess intercrop productivity relative to the sole cultures. Data were analysed using the Statistical Analysis System (SAS) software, version 9.4, followed by mean separation using the least significant difference method. The photosynthetic rate of the sorghum-cowpea intercrop was influenced by cowpea density and sorghum cultivar. The photosynthetic rate under low density was higher than under high density, but this was dependent on the growing conditions. Dry biomass accumulation, grain yield, and harvest index differed among the sorghum cultivars and cowpea in both binary and sole cultures at the two test locations during the 2018/19 and 2020/21 growing seasons.
Cowpea grain and dry biomass yields were more than 60% higher under high density than under low density in both binary and sole cultures. The results revealed that the grain yield of the sorghum cultivars was influenced by the density of the companion cowpea crop as well as by the production season. For instance, at Syferkuil, Enforcer and Ns5511 accumulated high yields under low density, whereas at Ofcolaco the higher yields were recorded under high density. Generally, under low cowpea density, the cultivar Enforcer produced a relatively higher grain yield, whereas under high density the Titan yield was superior. The partial and total LERs varied with the growing season and the treatments studied. The total LERs exceeded 1.0 at the two locations across seasons, ranging from 1.3 to 1.8. From the results, it can be concluded that resources were used more efficiently in the sorghum-cowpea intercrop at both Syferkuil and Ofcolaco. Furthermore, the intercropping system improved the photosynthetic rate, grain yield, and dry matter accumulation of sorghum and cowpea, depending on the growing conditions and cowpea density. Hence, the sorghum-cowpea intercropping system can be adopted as a climate-smart practice for sustainable production in the Limpopo Province.
Keywords: cowpea, climate-smart, grain sorghum, intercropping
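The LER used above follows the standard definition: the sum of each crop's intercrop yield divided by its sole-crop yield, with a total above 1.0 indicating an intercrop advantage. The yields below are invented, chosen so the total lands at the lower end of the 1.3-1.8 range reported in the abstract.

```python
# Invented yields (t/ha); partial LER = intercrop yield / sole-crop yield.
sorghum_intercrop, sorghum_sole = 2.8, 4.0
cowpea_intercrop, cowpea_sole = 0.9, 1.5

ler_sorghum = sorghum_intercrop / sorghum_sole   # 0.7
ler_cowpea = cowpea_intercrop / cowpea_sole      # 0.6
ler_total = ler_sorghum + ler_cowpea             # > 1.0 means intercrop advantage
print(round(ler_total, 3))  # 1.3
```

Read as: the intercrop would need 1.3 ha of sole-cropped land to match 1 ha of intercropped land, which is the resource-use-efficiency claim the abstract draws from LERs of 1.3-1.8.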
Procedia PDF Downloads 225
176 Modern Technology-Based Methods in Neurorehabilitation for Social Competence Deficit in Children with Acquired Brain Injury
Authors: M. Saard, A. Kolk, K. Sepp, L. Pertens, L. Reinart, C. Kööp
Abstract:
Introduction: Social competence is often impaired in children with acquired brain injury (ABI), but evidence-based rehabilitation for social skills has remained undeveloped. Modern technology-based methods create effective and safe learning environments for pediatric social skills remediation. The aim of the study was to implement our structured model of neurorehabilitation for socio-cognitive deficit using multitouch-multiuser tabletop (MMT) computer-based platforms and virtual reality (VR) technology. Methods: 40 children aged 8-13 years (yrs) participated in the pilot study: 30 with ABI (epilepsy, traumatic brain injury, and/or tic disorder) and 10 healthy age-matched controls. Of the patients, 12 have completed the training (M = 11.10 yrs, SD = 1.543) and 20 are still in training or in the waiting-list group (M = 10.69 yrs, SD = 1.704). All children performed the first individual and paired assessments. For patients, second evaluations were performed after the intervention period. Two interactive applications were implemented into the rehabilitation design: the Snowflake software on an MMT tabletop and NoProblem on a DiamondTouch Table (DTT), which allowed paired training (two children at once). Also, in individual training sessions, an HTC Vive VR device was used with VR metaphors of difficult social situations to treat social anxiety and train social skills. Results: At baseline (B) evaluations, patients had higher deficits in executive functions on the BRIEF parent questionnaire (M = 117, SD = 23.594) compared to healthy controls (M = 22, SD = 18.385). The most impaired components of social competence were emotion recognition, Theory of Mind (ToM) skills, cooperation, verbal/non-verbal communication, and pragmatics (Friendship Observation Scale scores of only 25-50% out of 100% for patients). In the Sentence Completion Task and the Spence Anxiety Scale, the patients reported a lack of friends, behavioral problems, bullying in school, and social anxiety.
Outcome evaluations: Snowflake on the MMT improved executive and cooperation skills, and the DTT developed communication skills, metacognitive skills, and coping. VR, video modelling, and role-plays improved social attention, emotional attitude, and gestural behaviors, and decreased social anxiety. NEPSY-II showed improvement in Affect Recognition [B = 7, SD = 5.01 vs outcome (O) = 10, SD = 5.85], Verbal ToM (B = 8, SD = 3.06 vs O = 10, SD = 4.08), and Contextual ToM (B = 8, SD = 3.15 vs O = 11, SD = 2.87). The ToM Stories test showed an improved understanding of Intentional Lying (B = 7, SD = 2.20 vs O = 10, SD = 0.50) and Sarcasm (B = 6, SD = 2.20 vs O = 7, SD = 2.50). Conclusion: Neurorehabilitation based on the structured model of neurorehabilitation for socio-cognitive deficit in children with ABI was effective for social skills remediation. The model helps in understanding the theoretical connections between components of social competence and modern interactive computerized platforms. We encourage therapists to implement these next-generation devices in the rehabilitation process, as MMT and VR interfaces are motivating for children, thus ensuring good compliance. Improving children's social skills is important for their and their families' quality of life and social capital.
Keywords: acquired brain injury, children, social skills deficit, technology-based neurorehabilitation
Procedia PDF Downloads 121
175 Nature of Forest Fragmentation Owing to Human Population along Elevation Gradient in Different Countries in Hindu Kush Himalaya Mountains
Authors: Pulakesh Das, Mukunda Dev Behera, Manchiraju Sri Ramachandra Murthy
Abstract:
Large numbers of people living in and around the Hindu Kush Himalaya (HKH) region depend on this diverse mountainous region for ecosystem services. Following the global trend, this region is also experiencing rapid population growth and demand for timber and agricultural land. The eight countries sharing the HKH region have different forest resource utilization and conservation policies that exert varying pressures on the forest ecosystem. This has created variable spatial and altitudinal gradients in the rate of deforestation and the corresponding forest patch fragmentation. The quantitative relationship between fragmentation and demography along the elevation gradient has not been established before for the HKH. The current study was carried out to attribute the overall and country-specific nature of landscape fragmentation along the altitudinal gradient to the demography of each sharing country. We used tree canopy cover data derived from Landsat imagery to analyze the deforestation and afforestation rates, and the corresponding landscape fragmentation, observed during 2000-2010. The area-weighted mean radius of gyration (AMN radius of gyration) was computed owing to its advantage as a spatial indicator of fragmentation over non-spatial fragmentation indices. Using the subtraction method, the change in fragmentation during 2000-2010 was computed. Using tree canopy cover as a surrogate for forest cover, the highest forest loss was observed in Myanmar, followed by China, India, Bangladesh, Nepal, Pakistan, Bhutan, and Afghanistan. The sequence for fragmentation was different, however: the maximum fragmentation was observed in Myanmar, followed by India, China, Bangladesh, and Bhutan, whereas an increase in fragmentation was seen in the sequence Nepal, Pakistan, and Afghanistan. Using the SRTM-derived DEM, we observed a higher rate of fragmentation up to 2400 m, which corroborated the high human population in the years 2000 and 2010.
To derive the nature of fragmentation along the altitudinal gradient, the Statistica software was used, with a user-defined function for regression applying the Gauss-Newton estimation method with 50 iterations. We observed an overall logarithmic decrease in fragmentation change (area-weighted mean radius of gyration), forest cover loss, and population growth during 2000-2010 along the elevation gradient, with very high R² values (0.889, 0.895, and 0.944, respectively). The observed negative logarithmic function, with the major contribution in the initial elevation range, suggests gap-filling afforestation at the lower altitudes to enhance forest patch connectivity. Our finding on the pattern of forest fragmentation and human population across the elevation gradient in the HKH region will have policy-level implications for the different nations and will help in characterizing hotspots of change. The availability of free satellite-derived data products on forest cover and DEM, gridded demographic data, and geospatial tools enabled a quick evaluation of forest fragmentation vis-à-vis the human impact pattern along the elevation gradient in the HKH.
Keywords: area-weighted mean radius of gyration, fragmentation, human impact, tree canopy cover
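The logarithmic regression described above can be sketched with a nonlinear least-squares fit; scipy's default algorithm here is a Levenberg-Marquardt variant rather than plain Gauss-Newton, and the data are generated for illustration, not the HKH measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Negative logarithmic model of fragmentation change versus elevation (m).
def log_model(elev, a, b):
    return a - b * np.log(elev)

elevation = np.array([200.0, 400.0, 800.0, 1600.0, 2400.0, 3200.0])
frag_change = log_model(elevation, 10.0, 1.5)    # noiseless, for a clear check

(a_hat, b_hat), _ = curve_fit(log_model, elevation, frag_change, p0=(1.0, 1.0))
print(round(a_hat, 3), round(b_hat, 3))
```

A positive fitted b reproduces the reported pattern: the largest fragmentation change sits in the lowest elevation belt and decays logarithmically upward.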
Procedia PDF Downloads 215
174 Awareness and Willingness of Signing 'Consent Form in Palliative Care' in Elderly Patients with End Stage Renal Disease
Authors: Hsueh Ping Peng
Abstract:
End-stage renal disease most commonly occurs in the elderly population. Elderly people are approaching the end of their lives, and when facing major life-threatening situations, they can choose, apart from aggressive medical treatment, options such as hospice care to improve their quality of life. The purpose of this study was to investigate factors associated with the awareness and willingness to sign hospice and palliative care consent forms among elderly patients with end-stage renal disease. This study used both quantitative (cross-sectional) and qualitative designs. In the quantitative section, 110 elderly patients (aged 65 or above) with end-stage renal disease receiving conventional hemodialysis were recruited from a medical center in Taipei City. Data were collected using structured questionnaires, including basic demographic data and questionnaires on the awareness and perception of hospice and palliative care. Data analysis was conducted using SPSS 20.0 statistical software, including descriptive statistics, the chi-square test, logistic regression, and other inferential statistics. The results showed that the average age of participants was 71.6 years; there were more males than females; the average duration of dialysis was 6.1 years; and most participants rated their self-perceived health status as fair. The results are summarized as follows: elderly patients with end-stage renal disease did not have sufficient knowledge and awareness of hospice and palliative care. Influencing factors included level of education, marital status, years of dialysis, and age. Demographic factors influencing the signing of consent forms included gender, marital status, and age, all of which showed significant effects.
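The kind of association test reported above can be sketched with a chi-square test of independence. The 2x2 counts below are synthetic, not the study's data; the sketch only illustrates the analysis step (SPSS's chi-square test has a standard equivalent in `scipy.stats`).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Synthetic contingency table: gender vs. signing the consent form
#                 signed  not signed
table = np.array([[30,    35],   # male
                  [25,    20]])  # female

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
```

A logistic regression on the same predictors (gender, marital status, age) would then estimate the direction and size of each effect, as in the study's inferential analysis.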
Factors taken into consideration when signing consent forms included awareness of hospice care, understanding the relevant definitions of hospice care, and understanding that consent may be modified or cancelled at any time; people who knew more about ways to receive hospice care or more of the related definitions were predicted to be more willing to sign the consent forms. In the qualitative section, 10 participants who had signed the consent form (five male and five female, aged 65-90) completed semi-structured interviews. Analysis of the interviews revealed six themes: (1) passing away peacefully, (2) autonomy over arrangements of life and death, (3) unwillingness to increase the burden on family and society, (4) friends' and relatives' experiences influencing the decision to give consent, (5) sharing information to facilitate the giving of consent, and (6) facing each day with ease, reflecting the experiences and considerations of elderly patients with end-stage renal disease when signing consent forms. The results of this study convey the awareness, thoughts, and feelings of elderly patients with end-stage renal disease on signing consent forms, and serve as a future reference for dialysis units to enhance the promotion of hospice and palliative care and related caregiving measures, thereby improving the quality of life and care for elderly people with end-stage renal disease.
Keywords: end-stage renal disease, hemodialysis, hospice and palliative care, awareness, willingness
Procedia PDF Downloads 168