Search results for: small technology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11738

1298 Determination of Non-CO2 Greenhouse Gas Emission in Electronics Industry

Authors: Bong Jae Lee, Jeong Il Lee, Hyo Su Kim

Abstract:

Both developed and developing countries adopted the decision to join the Paris Agreement to reduce greenhouse gas (GHG) emissions at the 21st Conference of the Parties (COP21) meeting in Paris. As a result, developed and developing countries have to submit their Intended Nationally Determined Contributions (INDCs) by 2020, and each country will be assessed on its performance in reducing GHG emissions. After that, each shall propose a reduction target higher than the previous target every five years. An accurate method for calculating greenhouse gas emissions is therefore essential as a rationale for implementing GHG reduction measures based on those targets. Non-CO2 GHGs (CF4, NF3, N2O, SF6, and so on) are widely used in the fabrication processes of semiconductor manufacturing and in the etching/deposition processes of display manufacturing. The Global Warming Potential (GWP) of the Non-CO2 gases is much higher than that of CO2, which means they have a greater effect on global warming per unit mass. GHG calculation methods for the electronics industry are therefore provided by the Intergovernmental Panel on Climate Change (IPCC) and the U.S. Environmental Protection Agency (EPA), and the topic is being discussed in ISO/TC 146. As discussed earlier, precision and accuracy in calculating Non-CO2 GHG emissions are becoming more important. This study therefore discusses the implications of the calculation methods by comparing those of the IPCC and the EPA. In conclusion, after analyzing the two methods, the EPA method is more detailed and also provides a calculation for N2O. In the case of the default emission factors, the IPCC gives more conservative results than the EPA; the IPCC factor was developed for calculating national GHG emissions, while the EPA factor was developed specifically for the U.S., to address the environmental conditions of the U.S. 
The semiconductor factory ‘A’ measured F-gases according to the EPA Destruction and Removal Efficiency (DRE) protocol and estimated its own DRE, and it was observed that its emission factor shows a higher DRE than the default DRE factors of the IPCC and EPA. Therefore, each country can improve its GHG emission calculation by developing its own emission factors (where possible) when reporting its Nationally Determined Contributions (NDCs). Acknowledgements: This work was supported by the Korea Evaluation Institute of Industrial Technology (No. 10053589).
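The calculation structure the abstract compares can be sketched in a few lines. This is a simplified, illustrative Tier-2-style computation, not the exact IPCC or EPA formula; the GWP values are approximate 100-year figures, and all process parameters in the example are assumptions.

```python
# Simplified sketch of a Non-CO2 GHG calculation for a fab process gas,
# following the general structure of the IPCC/EPA guidance: gas consumed,
# fraction destroyed in the process, and abatement DRE.
# GWP values below are approximate 100-year figures (assumptions here).

GWP_100YR = {"CF4": 6630, "NF3": 16100, "SF6": 23500}  # kg CO2e per kg gas

def co2e_emissions(gas, consumed_kg, process_use_fraction,
                   abated_fraction, dre):
    """CO2-equivalent emissions (kg) for one process gas.

    consumed_kg          -- mass of gas fed to the process
    process_use_fraction -- fraction destroyed/converted in the process
    abated_fraction      -- fraction of exhaust routed through abatement
    dre                  -- destruction/removal efficiency of the abater
    """
    released = consumed_kg * (1 - process_use_fraction)
    after_abatement = released * (1 - abated_fraction * dre)
    return after_abatement * GWP_100YR[gas]

# Example: 100 kg CF4, 20% used in process, 90% of exhaust abated at 95% DRE
print(round(co2e_emissions("CF4", 100, 0.20, 0.90, 0.95), 1))  # → 76908.0
```

The sketch makes the abstract's point concrete: the choice of default emission factor and DRE directly scales the final CO2e figure.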

Keywords: non-CO2 GHG, GHG emission, electronics industry, measuring method

Procedia PDF Downloads 274
1297 The Optimal Irrigation in the Mitidja Plain

Authors: Gherbi Khadidja

Abstract:

In the Mediterranean region, water resources are limited and very unevenly distributed in space and time. The main objective of this project is the development of a wireless network for the management of water resources in northern Algeria, the Mitidja plain, which helps farmers irrigate in the most optimized way and addresses the problem of water shortage in the region. We therefore develop an aid tool that can modernize and replace some traditional techniques, responding to the real needs of the crops according to the soil conditions as well as the climatic conditions (soil moisture, precipitation, characteristics of the unsaturated zone). These data are collected in real time by sensors, analyzed by an algorithm, and displayed on a mobile application and a website. The results are essential information and alerts, with recommended actions, for farmers to ensure the sustainability of the agricultural sector under water-shortage conditions. In the first part, we set up a wireless sensor network for precise management of water resources, using equipment that measures the water content of the soil, such as a Watermark probe connected via an acquisition card to an Arduino Uno, which collects the captured data and transmits it via a GSM module to a website, where it is stored in a database for later study. In the second part, we display the results on a website or mobile application that uses this database to remotely manage our smart irrigation system. This allows the farmer to use the technology and offers growers the possibility of accessing the field remotely via wireless communication to see field conditions and the irrigation operation, at home or at the office. The tool to be developed will also draw on satellite imagery for land use and soil moisture. 
These tools will make it possible to follow the evolution of crop needs over time and to predict the impact on water resources. According to the references consulted, such a tool can reduce irrigation volumes by up to 40%, which represents more than 100 million m3 of savings per year for the Mitidja, a volume equivalent to a medium-sized dam.
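The core of the decision algorithm described above can be sketched as follows. The function name and all threshold values are hypothetical illustrations, not the project's actual parameters; Watermark probes report soil water tension, where a higher reading means drier soil.

```python
# Hypothetical sketch of the irrigation decision logic: a Watermark probe
# reports soil water tension (kPa, higher = drier), and the controller
# opens the valve when a crop-specific dryness threshold is exceeded and
# no significant rain is forecast. Thresholds here are illustrative.

def should_irrigate(tension_kpa, rain_forecast_mm,
                    threshold_kpa=40, rain_cutoff_mm=5):
    """Return True when the soil is drier than the threshold and the
    forecast rain will not cover the deficit."""
    return tension_kpa >= threshold_kpa and rain_forecast_mm < rain_cutoff_mm

readings = [(25, 0), (55, 0), (60, 12)]  # (tension kPa, forecast mm)
print([should_irrigate(t, r) for t, r in readings])  # → [False, True, False]
```

In the described system, this logic would run server-side on the data uploaded via the GSM module, and the result would drive the alerts shown in the mobile application.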

Keywords: optimal irrigation, soil moisture, smart irrigation, water management

Procedia PDF Downloads 92
1296 Food Security and Utilization in Ethiopia

Authors: Tuji Jemal Ahmed

Abstract:

Food security and utilization are critical aspects of ensuring the well-being and prosperity of a nation. This paper examines the current state of food security and utilization in Ethiopia, focusing on the challenges, opportunities, and strategies employed to address the issue. Ethiopia, a country in East Africa, has made significant progress in recent years to improve food security and utilization for its population. However, persistent challenges such as recurrent droughts, limited access to resources, and low agricultural productivity continue to pose obstacles to achieving sustainable food security. The paper begins by providing an overview of the concept of food security, emphasizing its multidimensional nature and the importance of access, availability, utilization, and stability. It then explores the specific factors influencing food security and utilization in Ethiopia, including natural resources, climate variability, agricultural practices, infrastructure, and socio-economic factors. Furthermore, the paper highlights the initiatives and interventions implemented by the Ethiopian government, non-governmental organizations, and international partners to enhance food security and utilization. These efforts include agricultural extension programs, irrigation projects, investments in rural infrastructure, and social safety nets to protect vulnerable populations. The study also examines the role of technology and innovation in improving food security and utilization in Ethiopia. It explores the potential of sustainable agricultural practices, such as conservation agriculture, improved seed varieties, and precision farming techniques. Additionally, it discusses the role of digital technologies in enhancing access to market information, financial services, and agricultural inputs for smallholder farmers. 
Finally, the paper discusses the importance of collaboration and partnerships between stakeholders, including government agencies, development organizations, research institutions, and communities, in addressing food security and utilization challenges. It emphasizes the need for integrated and holistic approaches that consider both production and consumption aspects of the food system.

Keywords: food security, utilization, Ethiopia, challenges

Procedia PDF Downloads 83
1294 Impact of Primary Care Telemedicine Consultations On Health Care Resource Utilisation: A Systematic Review

Authors: Anastasia Constantinou, Stephen Morris

Abstract:

Background: The adoption of synchronous and asynchronous telemedicine modalities for primary care consultations has increased exponentially since the COVID-19 pandemic. However, there is limited understanding of how virtual consultations influence healthcare resource utilization and other quality measures, including safety, timeliness, efficiency, patient and provider satisfaction, cost-effectiveness, and environmental impact. Aim: To quantify the rates of follow-up visits, emergency department visits, hospitalizations, and requests for investigations and prescriptions, and to comment on the effect on different quality measures associated with the different telemedicine modalities used for primary care services and for primary care referrals to secondary care. Design and setting: Systematic review in primary care. Methods: A systematic search was carried out across three databases (Medline, PubMed, and Scopus) between August and November 2023, using terms related to telemedicine, general practice, electronic referrals, follow-up, use, and efficiency, supported by citation searching. This was followed by screening according to pre-defined criteria, data extraction, and critical appraisal. Narrative synthesis and meta-analysis of quantitative data were used to summarize the findings. Results: The search identified 2230 studies; 50 studies are included in this review. Asynchronous modalities predominated in both primary care services (68%) and referrals from primary care to secondary care (83%), and most study participants were female (63.3%), with a mean age of 48.2 years. The average follow-up rate for virtual consultations in primary care was 28.4% (eVisits: 36.8%, secure messages: 18.7%, videoconference: 23.5%), with no significant difference between them or compared with face-to-face (F2F) consultations. 
There was an average annual reduction in primary care visits of 0.09/patient, an increase in telephone visits of 0.20/patient, an increase in ED encounters of 0.011/patient, an increase in hospitalizations of 0.02/patient, and an increase in out-of-hours visits of 0.019/patient. Laboratory testing was requested on average for 10.9% of telemedicine patients, imaging or procedures for 5.6%, and prescriptions for 58.7% of patients. Looking at referrals to secondary care, on average 36.7% of virtual referrals required a follow-up visit, with the average follow-up rate for electronic referrals being higher than for videoconferencing (39.2% vs 23%, p=0.167). Technical failures were reported on average for 1.4% of virtual consultations in primary care. Using carbon footprint estimates, we calculate that the use of telemedicine in primary care services can potentially provide a net decrease in carbon footprint of 0.592 kg CO2/patient/year. When follow-up rates are taken into account, we estimate that virtual consultations reduce the carbon footprint of primary care services by a factor of 2.3, and of secondary care referrals by a factor of 2.2. No major concerns regarding quality of care or patient satisfaction were identified. 5/7 studies that addressed cost-effectiveness reported increased savings. Conclusions: Telemedicine provides quality, cost-effective, and environmentally sustainable care for patients in primary care, with inconclusive evidence regarding rates of subsequent healthcare utilization. The evidence is limited by heterogeneous, small-scale studies and a lack of prospective comparative studies. Further research to identify the most appropriate telemedicine modality for different patient populations, clinical presentations, and forms of service provision (e.g. follow-up rather than initial diagnosis), as well as further education for patients and providers alike on how to make best use of this service, is expected to improve outcomes and influence practice.
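The carbon-accounting logic behind such estimates can be sketched simply: a virtual visit avoids a patient journey, but a fraction of consultations generate a follow-up in-person visit that claws part of the saving back. All input figures below are invented for illustration and are not the review's data.

```python
# Illustrative sketch of per-consultation carbon accounting for
# telemedicine: travel avoided, minus ICT overhead, minus the travel
# re-incurred by follow-up in-person visits. Numbers are assumptions.

def net_co2_saving(journey_kg, ict_kg, followup_rate):
    """Net kg CO2e saved per virtual consultation."""
    avoided = journey_kg                              # travel avoided
    incurred = ict_kg + followup_rate * journey_kg    # devices + follow-ups
    return avoided - incurred

# e.g. 2.0 kg CO2e per avoided car journey, 0.05 kg ICT overhead,
# and the review's 28.4% average follow-up rate
print(round(net_co2_saving(2.0, 0.05, 0.284), 3))  # → 1.382
```

A higher follow-up rate erodes the saving linearly, which is why the review adjusts its footprint estimates by the observed follow-up rates.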

Keywords: telemedicine, healthcare utilisation, digital interventions, environmental impact, sustainable healthcare

Procedia PDF Downloads 46
1293 Computational Fluid Dynamics Simulations of Air Pollutant Dispersion: Validation of the Fire Dynamics Simulator against the CUTE Experiments of the COST ES1006 Action

Authors: Virginie Hergault, Siham Chebbah, Bertrand Frere

Abstract:

Following in-house objectives, the Central Laboratory of the Paris Police Prefecture (LCPP) conducted a general review of the models and Computational Fluid Dynamics (CFD) codes used to simulate pollutant dispersion in the atmosphere. Starting from that review, and considering the main features of Large Eddy Simulation, the LCPP postulated that the Fire Dynamics Simulator (FDS) model, from the National Institute of Standards and Technology (NIST), should be well suited to air pollutant dispersion modeling. This paper focuses on the implementation and evaluation of FDS in the frame of the European COST ES1006 Action, which aimed at quantifying the performance of modeling approaches. Here, the CUTE dataset, carried out in the city of Hamburg, and its mock-up have been used. We compare FDS results with wind tunnel measurements from the CUTE trials on the one hand, and with the results of the models involved in the COST Action on the other. The most time-consuming part of creating input data for the simulations is the transfer of obstacle geometry information to the format required by FDS. We have therefore developed Python code to convert building and topographic data automatically into the FDS input file. To evaluate the predictions of FDS against observations, statistical performance measures have been used. These metrics include the fractional bias (FB), the normalized mean square error (NMSE), and the fraction of predictions within a factor of two of observations (FAC2). Like the CFD models tested in the COST Action, FDS demonstrates good agreement with measured concentrations. Furthermore, the metrics assessment indicates that FB and NMSE meet the acceptable tolerances.
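The three validation metrics named above have standard definitions in the air-quality model-evaluation literature, which can be stated compactly in code. A perfect model gives FB = 0, NMSE = 0, and FAC2 = 1; the sample data below are invented.

```python
# Standard model-evaluation metrics for dispersion modeling:
# fractional bias (FB), normalized mean square error (NMSE), and
# fraction of predictions within a factor of two of observations (FAC2).
import numpy as np

def fb(obs, pred):
    """Fractional bias: (mean_obs - mean_pred) / (0.5 * (mean_obs + mean_pred))."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return (obs.mean() - pred.mean()) / (0.5 * (obs.mean() + pred.mean()))

def nmse(obs, pred):
    """Normalized mean square error: mean((obs - pred)^2) / (mean_obs * mean_pred)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return ((obs - pred) ** 2).mean() / (obs.mean() * pred.mean())

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of observations."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ratio = pred / obs
    return float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))

obs = [1.0, 2.0, 4.0]   # measured concentrations (invented)
pred = [1.2, 1.0, 3.5]  # modeled concentrations (invented)
print(round(fb(obs, pred), 3), round(fac2(obs, pred), 2))  # → 0.205 1.0
```

FB and NMSE are sensitive to systematic over- or under-prediction, while FAC2 is robust to outliers, which is why the three are typically reported together.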

Keywords: numerical simulations, atmospheric dispersion, COST ES1006 Action, CFD model, CUTE experiments, wind tunnel data, numerical results

Procedia PDF Downloads 121
1292 Thermally Stable Crystalline Triazine-Based Organic Polymeric Nanodendrites for Mercury(2+) Ion Sensing

Authors: Dimitra Das, Anuradha Mitra, Kalyan Kumar Chattopadhyay

Abstract:

Organic polymers, constructed from light elements like carbon, hydrogen, nitrogen, oxygen, sulphur, and boron, are an emerging class of non-toxic, metal-free, environmentally benign advanced materials. Covalent triazine-based polymers with a functional triazine group are a significant class of organic materials owing to their remarkable stability, which arises from strong covalent bonds. They can conventionally form hydrogen bonds and favour π–π contacts, and they were recently revealed to be involved in interesting anion–π interactions. The present work focuses on the development of a single-crystalline, highly cross-linked, triazine-based, nitrogen-rich organic polymer with nanodendritic morphology and significant thermal stability. The polymer has been synthesized through hydrothermal treatment of melamine and ethylene glycol, resulting in cross-polymerization via a condensation-polymerization reaction. The crystal structure of the polymer has been evaluated using the Rietveld whole-profile fitting method; the polymer was found to be composed of monoclinic melamine with space group P21/a. Detailed insight into the chemical structure of the as-synthesized polymer has been obtained by Fourier Transform Infrared (FTIR) and Raman spectroscopic analysis. X-ray Photoelectron Spectroscopy (XPS) analysis has also been carried out to further elucidate the different types of linkages that create the backbone of the polymer. The unique rod-like morphology of the triazine-based polymer is revealed in images obtained by Field Emission Scanning Electron Microscopy (FESEM) and Transmission Electron Microscopy (TEM). Interestingly, this polymer selectively detects mercury (Hg²⁺) ions at extremely low concentrations through fluorescence quenching, with a detection limit as low as 0.03 ppb. 
The high toxicity of mercury ions (Hg²⁺) arises from their strong affinity for the sulphur atoms of biological building blocks; even a trace quantity of this metal is dangerous to human health. Furthermore, owing to its small ionic radius and high solvation energy, the Hg²⁺ ion remains encapsulated by water molecules, making its detection a challenging task. There are some existing reports on fluorescence-based heavy metal ion sensors using covalent organic frameworks (COFs), but reports on mercury sensing using triazine-based polymers remain scarce. Thus, ultra-trace detection of Hg²⁺ ions with a high level of selectivity and sensitivity is of contemporary significance. A plausible sensing mechanism is proposed to explain the applicability of the material as a potential sensor. The sensitivity of the polymer sample towards Hg²⁺ constitutes, to our knowledge, the first report of mercury ion detection by the photoluminescence quenching technique in the field of highly crystalline triazine-based polymers, without the introduction of any sulphur groups or functionalization. Being cheap, non-toxic, and scalable, this crystalline metal-free organic polymer could be a promising candidate for Hg²⁺ ion sensing at the commercial level.
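A detection limit such as the 0.03 ppb quoted above is typically estimated from the calibration data as LOD = 3σ/S, where σ is the standard deviation of blank measurements and S the slope of the quenching calibration curve. The sketch below illustrates that standard calculation only; the blank readings and slope are invented, not this work's data.

```python
# 3-sigma detection-limit estimate from blank noise and calibration
# slope (LOD = 3 * sigma / S). All numbers below are invented for
# illustration; they are not the paper's measurements.
import statistics

def detection_limit(blank_readings, slope):
    """Return the 3-sigma limit of detection in the slope's
    concentration units (here: ppb per count of slope)."""
    sigma = statistics.stdev(blank_readings)  # sample std. dev. of blanks
    return 3 * sigma / slope

blanks = [100.2, 100.5, 99.8, 100.1, 100.4]  # fluorescence counts, no Hg2+
slope = 28.0                                 # counts per ppb of Hg2+
print(round(detection_limit(blanks, slope), 3))  # → 0.029
```

A steeper calibration slope (stronger quenching per unit concentration) or lower blank noise both push the detection limit down.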

Keywords: fluorescence quenching, mercury ion sensing, single-crystalline, triazine-based polymer

Procedia PDF Downloads 119
1291 Near-Peer Mentoring/Curriculum and Community Enterprise for Environmental Restoration Science

Authors: Lauren B. Birney

Abstract:

The BOP-CCERS (Billion Oyster Project - Curriculum and Community Enterprise for Restoration Science) Near-Peer Mentoring Program provides a long-term (five-year) support network to motivate and guide students toward restoration-science-based CTE pathways. Students are selected from middle schools with actively participating BOP-CCERS teachers. Teachers nominate students from grades 6-8 to join cohorts of 10 to 15 students each. Cohorts are comprised primarily of students from the same school, in order to facilitate mentors' travel logistics as well as to sustain connections with students and their families. Each cohort is matched with an exceptional undergraduate or graduate student, either a BOP research associate or a STEM mentor recruited from collaborating City University of New York (CUNY) partner programs. In rare cases, an exceptional high school junior or senior may be matched with a cohort in addition to a research associate or graduate student; in no case will a high school student or minor be placed individually with a cohort. Mentors meet with students at least once per month and provide at least one offsite field visit per month, either to a local STEM Hub or to a research lab. In keeping with its five-year trajectory, the near-peer mentoring program seeks to retain students in the same cohort with the same mentor for the full duration of middle school and for at least two additional years of high school. Upon the mentee reaching the final quarter of 8th grade, the mentor develops a meeting plan for that individual mentee. Mentee and mentor are required to meet individually or in small groups once per month. Once per quarter, individual meetings are replaced by full-cohort professional outings, in which the mentor organizes the entire cohort on a field visit or educational workshop with a museum or aquarium partner. 
In addition to the mentor-mentee relationship, each participating student is also asked to conduct and present his or her own BOP field research. This research is ideally carried out with the support of the student's regular high school STEM subject teacher; in cases where the teacher or school does not permit independent study, the student is asked to conduct the research on an extracurricular basis. Near-peer mentoring affects students' social identities and helps them connect to role models from similar groups, ultimately giving them a sense of belonging. Qualitative and quantitative analyses were performed throughout the study, and interviews and focus groups were conducted. Additionally, an external evaluator was engaged to ensure project efficacy, efficiency, and effectiveness throughout. The BOP-CCERS Near-Peer Mentoring Program is a peer support network in which high school students with interest or experience in BOP (Billion Oyster Project) topics and activities (such as classroom oyster tanks, STEM Hubs, or digital platform research) provide mentorship and support for middle school or high school freshman mentees. Peer mentoring not only empowers the students being taught but also increases the content knowledge and engagement of the mentors. This support provides the necessary resources, structure, and tools to assist students in finding success.

Keywords: STEM education, environmental science, citizen science, near peer mentoring

Procedia PDF Downloads 79
1290 Role of Higher Education Commission (HEC) in Strengthening the Academia and Industry Relationships: The Case of Pakistan

Authors: Shah Awan, Fahad Sultan, Shahid Jan Kakakhel

Abstract:

Higher education in the 21st century has faced game-changing developments impacting teaching and learning and strengthening the academia-industry relationship. The academia-industry relationship plays a key role in economic development in developed, developing, and emerging economies. The partnership not only fosters innovation but also provides real-world experience for theoretical knowledge. To this end, the paper assesses the role of the HEC in Pakistan and discusses the ways in which academia and industry contribute to improving the Pakistani economy. Successive studies have reported the importance of innovation and technology, of research and development initiatives in public sector universities, and of the role of the Higher Education Commission in strengthening the academia-industry relationship to improve performance and minimize failure. The paper presents the results of semi-structured interviews conducted with 26 staff members of two public sector universities, the Higher Education Commission, and managers from the corporate sector. The study shows that public sector universities in a developing economy like Pakistan face several barriers to establishing successful collaboration between universities and industry. According to the participants interviewed, the HEC provides an insufficient road map for improving organisational capabilities and enhancing performance. The results demonstrate that the HEC has to embrace and internalize support for industry and public sector universities in order to compete in the era of globalization. Publication of this research will help the higher education sector further strengthen research through industry-university collaboration. The research findings corroborate those of Dooley and Kirk, who highlight the features of university-industry collaboration. 
Enhanced communication has implications for the quality of the product and of human resources. Crucial for developing economies, a feasible organisational design and framework is essential for the university-industry relationship.

Keywords: higher education commission, role, academia and industry relationship, Pakistan

Procedia PDF Downloads 454
1289 The Risks of 'Techtopia': Reviewing the Negative Lessons of Smart City Development

Authors: Amanda Grace Ahl, Matthew Brummer

Abstract:

‘Smart cities’ are not always as ‘smart’ as the term suggests, a point not often covered in the associated academic and public policy literatures. In what has become known as the smart city approach to urban planning, governments around the world are seeking to harness the power of information and communications technology, with increasingly advanced data analytics, to address major social, economic, and environmental issues reshaping the ways people live. The definitional and theoretical boundaries of the smart city framework are broad and at times ambiguous, as is empirical treatment of the topic. For all the disparity, however, in investigating any number of institutional and policy prescriptions for the challenges faced by current and emerging metropoles, scholarly thought has hinged overwhelmingly on value-positive conceptions of informatics-centered design. From enhanced quality of services, to increased efficiency of resources, to improved communication between societal stakeholders, the smart city design is championed as a technological wellspring capable of providing answers to the systemic issues stymying a utopian image of the city. However, it is argued that this ‘techtopia’ has resulted in myopia within the discipline as to the value-negative implications of such planning, such as weaknesses in the practicality, scalability, social equity, and affordability of solutions. In order to examine this observation more carefully - that ‘stupid’ represents an omitted variable bias in the study of ‘smart’ - this paper reviews critical cases of unsuccessful smart city developments. It is argued that understanding the negative factors affiliated with the development processes is also imperative for advancing the theoretical foundations, policies, and strategies that further the smart city as an equitable, holistic urban innovation. 
What emerges from the process-tracing carried out in this study are distinctly negative lessons of smart city projects, the significance of which are vital for understanding how best to conceive smart urban planning in the 21st century.

Keywords: case study, city management, innovation system, negative lessons, smart city development

Procedia PDF Downloads 399
1288 Derivation of Human NK Cells from T Cell-Derived Induced Pluripotent Stem Cells Using Xenogeneic Serum-Free and Feeder Cell-Free Culture System

Authors: Aliya Sekenova, Vyacheslav Ogay

Abstract:

The derivation of human induced pluripotent stem cells (iPSCs) from somatic cells by direct reprogramming opens broad perspectives in regenerative medicine: it offers the possibility of developing personalized and thus immunologically compatible cells for applications in cell-based therapy. The purpose of our study was to develop a technology for the production of NK cells from T cell-derived induced pluripotent stem cells (TiPSCs) for subsequent application in adoptive cancer immunotherapy. Methods: iPSCs were derived from peripheral blood T cells using Sendai virus vectors expressing Oct4, Sox2, Klf4, and c-Myc. The pluripotent characteristics of the TiPSCs were examined and confirmed by alkaline phosphatase staining, immunocytochemistry, and RT-PCR analysis. For NK cell differentiation, embryoid bodies (EBs) formed from TiPSCs were cultured in xenogeneic serum-free medium containing human serum, IL-3, IL-7, IL-15, SCF, and FLT3L, without using M210-B4 and AFT-024 stromal feeder cells. After differentiation, the NK cells were characterized by immunofluorescence analysis, flow cytometry, and cytotoxicity assay. Results: Here we demonstrate for the first time that TiPSCs can effectively differentiate into functionally active NK cells without M210-B4 and AFT-024 xenogeneic stromal cells. Immunofluorescence and flow cytometry analysis showed that EB-derived cells can differentiate into a homogeneous population of NK cells expressing high levels of the specific markers CD56, CD45, and CD16. Moreover, these cells significantly express killer-activation receptors such as NKp44 and NKp46. In a comparative analysis, we observed that NK cells derived using the feeder-free culture system have higher killing activity against K-562 tumor cells than NK cells derived by the feeder-dependent method. We therefore think these data will be useful for the development of large-scale production of NK cells for translation into cancer immunotherapy.

Keywords: induced pluripotent stem cells, NK cells, T cells, cell differentiation, feeder cell-free culture system

Procedia PDF Downloads 314
1287 Finite Element Analysis of Shape Memory Alloy Stents in Coronary Arteries

Authors: Amatulraheem Al-Abassi, K. Khanafer, Ibrahim Deiab

Abstract:

The coronary artery stent is a promising technology that can treat various coronary diseases. Materials used for manufacturing medical stents should have high biocompatibility. Stent alloys, in particular, show remarkably promising clinical outcomes; however, there are threats of restenosis (recurrence of artery narrowing due to fatty plaque), stent recoil, or, in the long term, stent fracture. Stents made of nickel-titanium (Nitinol), however, can bear extensive deformation and resist restenosis. This shape memory alloy has outstanding mechanical properties: biocompatibility, super-elasticity, and recovery of its original shape under certain loads. Stent failure may cause complications in vascular diseases and possibly blockage of blood flow. Thus, studying the behavior of the stent under different medical conditions will help doctors and cardiologists predict when it is necessary to replace the stent in order to prevent severe morbidity outcomes. To the best of our knowledge, few published papers analyze stent behavior with regard to the contact surfaces of the plaque layer and the blood vessel. Stent material properties are therefore discussed in this investigation to highlight the mechanical and clinical differences between various stents. This research analyzes the performance of a Nitinol stent in a well-known stent design to determine how it bears stress and how it is displaced in blood vessels, in comparison to stents made of other biocompatible materials. Finite element analysis is the core of this study: a physically representative model is discussed to show the distribution of stress and strain along the interaction surface between the stent and the artery. 
The reaction of vascular tissue to the stent will be evaluated to predict the possibility of restenosis within the treated area.

Keywords: shape memory alloy, stent, coronary artery, finite element analysis

Procedia PDF Downloads 191
1286 A Review of Benefit-Risk Assessment over the Product Lifecycle

Authors: M. Miljkovic, A. Urakpo, M. Simic-Koumoutsaris

Abstract:

Benefit-risk assessment (BRA) is a valuable tool that takes place at multiple stages during a medicine's lifecycle, and this assessment can be conducted in a variety of ways. The aim was to summarize current BRA methods used during approval decisions and in post-approval settings and to identify possible future directions. Relevant reviews, recommendations, and guidelines published in the medical literature and through regulatory agencies over the past five years have been examined. BRA implies the review of two dimensions: the dimension of benefits (determined mainly by the therapeutic efficacy) and the dimension of risks (comprising the safety profile of a drug). Regulators, industry, and academia have developed various approaches, ranging from descriptive textual (qualitative) to decision-analytic (quantitative) models, to facilitate the BRA of medicines during the product lifecycle (from Phase I trials through the authorization procedure to post-marketing surveillance and health technology assessment for inclusion in public formularies). These approaches can be classified into the following categories: stepwise structured approaches (frameworks); measures for benefits and risks that are usually endpoint-specific (metrics); simulation techniques and meta-analysis (estimation techniques); and utility survey techniques to elicit stakeholders’ preferences (utilities). All these approaches share two common goals: to assist the analysis and to improve the communication of decisions, but each is subject to its own specific strengths and limitations. Before using any method, its utility, complexity, the extent to which it is established, and the ease of interpreting its results should be considered. Despite widespread and long-time use, BRA is subject to debate, suffers from a number of limitations, and is currently still under development. 
The use of formal, systematic structured approaches to BRA for regulatory decision-making and quantitative methods to support BRA during the product lifecycle is a standard practice in medicine that is subject to continuous improvement and modernization, not only in methodology but also in cooperation between organizations.
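The decision-analytic (quantitative) end of this spectrum can be illustrated with a minimal weighted-sum sketch. The criteria, weights, and normalized scores below are entirely hypothetical and are not drawn from any specific framework discussed above; real multi-criteria approaches elicit weights from stakeholders rather than assigning them ad hoc.

```python
# Minimal multi-criteria weighted-sum sketch of a quantitative
# benefit-risk score. Criteria, weights, and scores are hypothetical.

def benefit_risk_score(criteria):
    """criteria: list of (weight, normalized_score, is_benefit) tuples.
    Scores are normalized to [0, 1]; risks enter with a negative sign."""
    total = 0.0
    for weight, score, is_benefit in criteria:
        total += weight * score if is_benefit else -weight * score
    return total

# Hypothetical drug profile: two benefits, two risks.
profile = [
    (0.4, 0.8, True),   # symptom relief
    (0.2, 0.6, True),   # quality-of-life gain
    (0.3, 0.3, False),  # serious adverse events
    (0.1, 0.5, False),  # discontinuation rate
]
print(round(benefit_risk_score(profile), 3))
```

A positive total indicates benefits outweighing risks under the chosen weights; the sensitivity of the sign to the weights is exactly why such models are paired with stakeholder preference elicitation.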

Keywords: benefit-risk assessment, benefit-risk profile, product lifecycle, quantitative methods, structured approaches

Procedia PDF Downloads 141
1285 Variations in Spatial Learning and Memory across Natural Populations of Zebrafish, Danio rerio

Authors: Tamal Roy, Anuradha Bhat

Abstract:

Cognitive abilities aid fishes in foraging, avoiding predators, and locating mates. Factors like predation pressure and habitat complexity govern learning and memory in fishes. This study aims to compare spatial learning and memory across four natural populations of zebrafish. The zebrafish, a small cyprinid, inhabits a diverse range of freshwater habitats, which makes it amenable to studies investigating the role of native environment in spatial cognitive abilities. Four populations were collected across India from water bodies with contrasting ecological conditions. Habitat complexity of the water bodies was evaluated as a combination of channel substrate diversity and diversity of vegetation. Experiments were conducted on the populations under controlled laboratory conditions. A square-shaped spatial testing arena (maze) was constructed for testing the performance of adult zebrafish. The square tank consisted of an inner square-shaped layer whose edges were connected to the diagonal ends of the tank walls, thereby forming four separate chambers. Each of the four chambers had a main door in the centre, and each chamber had three sections separated by two windows. A removable coloured window pane (red, yellow, green, or blue) identified each main door. A food reward associated with an artificial plant was always placed inside the same section of the red-door chamber; the position of the food reward and plant within the red-door chamber was fixed. A test fish had to explore the maze by taking turns and locate the food in that section of the red-door chamber. Fishes were sorted from each population stock and kept individually in separate containers for identification. A test fish was released into the arena and allowed 20 minutes to explore and find the food reward. In this way, individual fishes were trained through the maze to locate the food reward for eight consecutive days. 
The position of the red door, with the plant and the reward, was shuffled every day. Following training, an intermission of four days was given, during which the fishes were not subjected to trials. Post-intermission, the fishes were re-tested on the 13th day following the same protocol for their ability to remember the learnt task. Exploratory tendencies and latency of individuals to explore on the 1st day of training, performance time across trials, and the number of mistakes made each day were recorded. Additionally, the mechanism used by individuals to solve the maze each day was analyzed across populations: fishes could be expected to use an algorithm (a sequence of turns) or associative cues to locate the food reward. Individuals of the populations did not differ significantly in latencies and tendencies to explore, and no relationship was found between exploration and learning across populations. High habitat-complexity populations had higher rates of learning and stronger memory, while low habitat-complexity populations had lower rates of learning and much reduced abilities to remember. High habitat-complexity populations used associative cues more than the algorithm for learning and remembering, while low habitat-complexity populations used both equally. The study therefore helped in understanding the role of natural ecology in explaining variations in spatial learning abilities across populations.

Keywords: algorithm, associative cue, habitat complexity, population, spatial learning

Procedia PDF Downloads 277
1284 Wind Turbine Scaling for the Investigation of Vortex Shedding and Wake Interactions

Authors: Sarah Fitzpatrick, Hossein Zare-Behtash, Konstantinos Kontis

Abstract:

Traditionally, the focus of horizontal axis wind turbine (HAWT) blade aerodynamic optimisation studies has been the outer working region of the blade. However, recent works seek to better understand, and thus improve upon, the performance of the inboard blade region to enhance power production, maximise load reduction and better control the wake behaviour. This paper presents the design considerations and characterisation of a wind turbine wind tunnel model devised to further the understanding and fundamental definition of horizontal axis wind turbine root vortex shedding and interactions. Additionally, the application of passive and active flow control mechanisms – vortex generators and plasma actuators – to allow for the manipulation and mitigation of unsteady aerodynamic behaviour at the blade inboard section is investigated. A static, modular blade wind turbine model has been developed for use in the University of Glasgow’s de Havilland closed return, low-speed wind tunnel. The model components, which comprise a half-span blade, hub, nacelle and tower, are scaled using the equivalent full span radius, R, for appropriate Mach and Strouhal numbers, and to achieve a Reynolds number in the range of 1.7 x 10^5 to 5.1 x 10^5 for operational speeds up to 55 m/s. The half blade is constructed to be modular and fully dielectric, allowing for the integration of flow control mechanisms with a focus on plasma actuators. Investigations of root vortex shedding and the subsequent wake characteristics using qualitative – smoke visualisation, tufts and china clay flow – and quantitative methods – including particle image velocimetry (PIV), hot wire anemometry (HWA), and laser Doppler anemometry (LDA) – were conducted over a range of blade pitch angles of 0 to 15 degrees, and a range of Reynolds numbers. 
This allowed for the identification of shed vortical structures from the maximum chord position, the transitional region where the blade aerofoil blends into a cylindrical joint, and the blade-nacelle connection. Analysis of the trailing vorticity interactions between the wake core and the freestream shows that the vortex meander and diffusion are notably affected by the Reynolds number. It is hypothesized that the shed vorticity from the blade root region directly influences and exacerbates the nacelle wake expansion in the downstream direction. As the design of the inboard blade region form is, by necessity, driven by function rather than aerodynamic optimisation, a study is undertaken on the application of flow control mechanisms to manipulate the observed vortex phenomenon. The designed model allows for the effective investigation of shed vorticity and wake interactions with a focus on the accurate geometry of a root region which is representative of small to medium power commercial HAWTs. The studies undertaken allow for an enhanced understanding of the interplay of shed vortices and their subsequent effect in the near and far wake. This highlights areas of interest within the inboard blade area for the potential use of passive and active flow control devices to produce a more desirable wake quality in this region.
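The Reynolds-number scaling described above can be sketched numerically. The air properties are standard sea-level values; the reference chord length is an assumed illustrative value, not a dimension taken from the actual model (though a chord near 0.14 m happens to reproduce the quoted Reynolds range at plausible tunnel speeds).

```python
# Chord Reynolds number for a scaled blade, Re = rho * V * c / mu.
# Air properties are standard sea-level values; the chord is a
# hypothetical reference length chosen for illustration only.

RHO = 1.225   # air density, kg/m^3
MU = 1.81e-5  # dynamic viscosity, Pa*s

def reynolds(velocity_ms, chord_m, rho=RHO, mu=MU):
    return rho * velocity_ms * chord_m / mu

chord = 0.14  # m (hypothetical reference chord)
for v in (18, 55):  # tunnel speeds, m/s
    print(f"V = {v:2d} m/s -> Re = {reynolds(v, chord):.2e}")
```

Matching Re, Mach and Strouhal numbers simultaneously is generally impossible at model scale, which is why the abstract reports a target Reynolds range rather than full similarity.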

Keywords: vortex shedding, wake interactions, wind tunnel model, wind turbine

Procedia PDF Downloads 221
1283 Development of a Novel Antibacterial to Block Growth of Pseudomonas Aeruginosa and Prevent Biofilm Formation

Authors: Clara Franch de la Cal, Christopher J Morris, Michael McArthur

Abstract:

Cystic fibrosis (CF) is an autosomal recessive genetic disorder characterized by abnormal transport of chloride and sodium across the lung epithelium, leading to thick and viscous secretions, within which CF patients suffer repeated bacterial pulmonary infections. Pseudomonas aeruginosa (PA) elicits the greatest inflammatory response, causing an irreversible loss of lung function that determines morbidity and mortality. The cell wall of PA is a permeability barrier to many antibacterials, and the rise of multi-drug resistant (MDR) strains is eroding the efficacy of the few remaining clinical options. In addition, when PA infection becomes established, it forms an antibiotic-resistant biofilm, embedded in which are slow-growing cells that are refractory to drug treatment. This makes the development of new antibacterials a major challenge. This work describes the development of a new type of nanoparticulate oligonucleotide antibacterial capable of tackling PA infections, including MDR strains. It is being developed both to block growth and to prevent biofilm formation. These oligonucleotide therapeutics, Transcription Factor Decoys (TFD), act on novel genomic targets by capturing key regulatory proteins to block essential bacterial genes and defeat infection. They have been successfully transfected into a wide range of pathogenic bacteria, both in vitro and in vivo, using a proprietary delivery technology. The surfactant used self-assembles with TFD to form a nanoparticle that is stable in biological fluids, protects the TFD from degradation, and preferentially transfects prokaryotic membranes. Key challenges are to adapt the nanoparticle so it is active against PA in the context of biofilms and to formulate it for administration by inhalation. This would allow the drug to be delivered to the respiratory tract, thereby achieving drug concentrations sufficient to eradicate the pathogenic organisms at the site of infection.

Keywords: antibacterials, transcriptional factor decoys (TFDs), pseudomonas aeruginosa

Procedia PDF Downloads 270
1282 Clinical and Radiographic Evaluation of Split-Crest Technique by Ultrasonic Bone Surgery Combined with Platelet Concentrates Prior to Dental Implant Placement

Authors: Ahmed Mohamed El-Shamy, Akram Abbas El-Awady, Mahmoud Taha Eldestawy

Abstract:

Background: The aim of the present study was to evaluate, clinically and radiographically, the combined effect of the split-crest technique by ultrasonic bone surgery and platelet concentrates in implant site development. Methods: Forty patients with a narrow ridge participated in this study. Patients were assigned randomly to one of the following four groups according to treatment: Group 1: patients received the split-crest technique by ultrasonic bone surgery with implant placement. Group 2: patients received the split-crest technique by ultrasonic bone surgery with implant placement and PRF. Group 3: patients received the split-crest technique by ultrasonic bone surgery with implant placement and PRP. Group 4: patients received the split-crest technique by ultrasonic bone surgery with implant placement and a collagen membrane. Modified plaque index, modified sulcus bleeding index, and implant stability were recorded at baseline and measured again at 3 and 6 months. CBCT scans were taken immediately after surgery completion and at 9 months to evaluate bone density at the bone-implant interface. Results: After 6 months, the collagen group showed a statistically significantly lower mean modified bleeding index than the other groups. After both 3 and 6 months, the PRF group showed statistically significantly higher mean implant stability (Osstell ISQ units) than the other groups. After 6 months, the PRF group showed a statistically significantly higher mean bone density than the collagen group. Conclusion: Ultrasonic bone surgery in the split-crest technique can be a successful option for increasing implant stability values throughout the healing period. The use of a combined technique of ultrasonic bone surgery with PRF and simultaneous implant placement potentially improves osseointegration (bone density). 
PRF membranes represent an advanced technology for stimulating and accelerating bone regeneration.

Keywords: dental implants, split-crest, PRF, PRP

Procedia PDF Downloads 147
1281 Off-Body Sub-GHz Wireless Channel Characterization for Dairy Cows in Barns

Authors: Said Benaissa, David Plets, Emmeric Tanghe, Jens Trogh, Luc Martens, Leen Vandaele, Annelies Van Nuffel, Frank A. M. Tuyttens, Bart Sonck, Wout Joseph

Abstract:

Herd monitoring and management, in particular the detection of ‘attention animals’ that require care, treatment or assistance, is crucial for tracking the reproduction status, health, and overall well-being of dairy cows. In large farms, traditional methods based on direct observation or analysis of video recordings become labour-intensive and time-consuming. Thus, automatic monitoring systems using sensors have become increasingly important to continuously and accurately track the health status of dairy cows. Wireless sensor networks (WSNs) and the internet of things (IoT) can be effectively used in health tracking of dairy cows to facilitate herd management and enhance cow welfare. Since on-cow measuring devices are energy-constrained, a proper characterization of the off-body wireless channel between the on-cow sensor nodes and the back-end base station is required for a power-optimized deployment of these networks in barns. The aim of this study was to characterize the off-body wireless channel in an indoor (barn) environment at 868 MHz using LoRa nodes. LoRa is an emerging wireless technology mainly targeted at WSNs and IoT networks. Both large-scale fading (i.e., path loss) and temporal fading were investigated. The obtained path loss values as a function of the transmitter-receiver separation were well fitted by a lognormal path loss model. The path loss showed an additional increase of 4 dB when the wireless node was actually worn by the cow. The temporal fading due to the movement of other cows was well described by Rician distributions with a K-factor of 8.5 dB. Based on this characterization, network planning and energy consumption optimization of the on-body wireless nodes can be performed, enabling the deployment of reliable dairy cow monitoring systems.
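The lognormal (log-distance) path loss model mentioned above, with the 4 dB on-cow offset, can be sketched as follows. The reference path loss PL0 and the path loss exponent n below are illustrative assumptions, not the fitted parameters from the study.

```python
import math

# Log-distance (lognormal) path loss sketch for an 868 MHz off-body
# channel: PL(d) = PL0 + 10 * n * log10(d / d0), plus the 4 dB extra
# attenuation reported when the node is worn by the cow. PL0 and n
# here are illustrative assumptions, not the paper's fitted values.

def path_loss_db(d_m, pl0_db=40.0, n=2.2, d0_m=1.0, worn_by_cow=False):
    pl = pl0_db + 10 * n * math.log10(d_m / d0_m)
    if worn_by_cow:
        pl += 4.0  # extra attenuation when the node is worn
    return pl

print(round(path_loss_db(50), 1))                     # -> 77.4
print(round(path_loss_db(50, worn_by_cow=True), 1))   # -> 81.4
```

With such a model, a link budget (transmit power minus path loss and fading margin versus receiver sensitivity) gives the maximum barn coverage per base station, which is the basis for the network planning the abstract refers to.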

Keywords: channel, channel modelling, cow monitoring, dairy cows, health monitoring, IoT, LoRa, off-body propagation, PLF, propagation

Procedia PDF Downloads 305
1280 Polymer-Layered Gold Nanoparticles: Preparation, Properties and Uses of a New Class of Materials

Authors: S. M. Chabane Sari, S. Zargou, A. R. Senoudi, F. Benmouna

Abstract:

Immobilization of nanoparticles (NPs) is the subject of numerous studies pertaining to the design of polymer nanocomposites, supported catalysts, bioactive colloidal crystals, inverse opals for novel optical materials, latex-templated hollow inorganic capsules, immunodiagnostic assays, “Pickering” emulsion polymerization for making latex particles and film-forming composites or Janus particles, chemo- and biosensors, tunable plasmonic nanostructures, hybrid porous monoliths for separation science and technology, biocidal polymer/metal nanoparticle composite coatings, and so on. Particularly in recent years, the literature has witnessed impressive progress in investigations of polymer coatings, grafts and particles as supports for anchoring nanoparticles. This is due to several factors: polymer chains are flexible and may contain a variety of functional groups that are able to efficiently immobilize nanoparticles and their precursors through dispersive (van der Waals), electrostatic, hydrogen or covalent bonds. We review methods to prepare polymer-immobilized nanoparticles through a plethora of strategies in view of developing systems for separation, sensing, extraction and catalysis. The emphasis is on methods that provide (i) polymer brushes and grafts; (ii) monoliths and porous polymer systems; (iii) natural polymers; and (iv) conjugated polymers as platforms for anchoring nanoparticles. The latter range from soft biomacromolecular species (proteins, DNA) to metallic, C60, semiconductor and oxide nanoparticles; they can be attached through electrostatic interactions or covalent bonding. It is very clear that the physicochemical properties of polymers (e.g., sensing and separation) are enhanced by anchored nanoparticles, while polymers provide excellent platforms for dispersing nanoparticles for, e.g., high catalytic performance. 
We thus anticipate that the synergistic role of polymeric supports and anchored particles will increasingly be exploited in view of designing unique hybrid systems with unprecedented properties.

Keywords: gold, layer, polymer, macromolecular

Procedia PDF Downloads 382
1279 Energy Efficiency Approach to Reduce Costs of Ownership of Air Jet Weaving

Authors: Corrado Grassi, Achim Schröter, Yves Gloy, Thomas Gries

Abstract:

Air jet weaving is the most productive, but also the most energy-consuming, weaving method. Increasing energy costs and environmental impact are a constant challenge for manufacturers of weaving machines. Current technological developments aim at low energy costs, low environmental impact, high productivity, and constant product quality. The high energy consumption of the method can be ascribed to its high demand for compressed air. An energy efficiency method is applied to air jet weaving technology. The method identifies and classifies the main relevant energy consumers and processes from the exergy point of view and leads to the identification of energy efficiency potentials during the weft insertion process. Starting from the design phase, energy efficiency is considered the central requirement to be satisfied. The initial phase of the method consists of an analysis of the state of the art of the main weft insertion components in order to prioritize the most energy-demanding components and processes. The identified major components are investigated to reduce the high energy demand of the weft insertion process. During the interaction of the flow field coming from the relay nozzles within the profiled reed, only a minor part of the stream actually accelerates the weft yarn, resulting in large energy inefficiency. Different tools such as FEM analysis, CFD simulation models and experimental analysis are used to design more energy-efficient versions of the components involved in the filling insertion. A different concept for the metal strip of the profiled reed is developed. The developed metal strip allows a reduction of the machine's energy consumption. Based on a parametric and aerodynamic study, the designed reed transmits higher values of flow power to the filling yarn. 
The innovative reed fulfills both the requirement of raising energy efficiency and compliance with the weaving constraints.
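The energy-inefficiency argument above rests on the kinetic power carried by the relay-nozzle jet versus the small fraction transferred to the yarn. A minimal sketch of that balance follows; the nozzle area, exit velocity, and 5% transfer fraction are all illustrative assumptions, not measurements from the study.

```python
# Kinetic power carried by a relay-nozzle air jet, P = 0.5 * m_dot * v^2,
# and the small fraction actually transferred to the weft yarn.
# All numbers are illustrative assumptions, not measured values.

RHO_AIR = 1.2  # kg/m^3 at the nozzle exit (rough assumption)

def jet_power_w(exit_area_m2, exit_velocity_ms, rho=RHO_AIR):
    m_dot = rho * exit_area_m2 * exit_velocity_ms  # mass flow rate, kg/s
    return 0.5 * m_dot * exit_velocity_ms ** 2

area = 3e-6       # m^2 (hypothetical nozzle exit area)
velocity = 300.0  # m/s (hypothetical exit velocity)
p_jet = jet_power_w(area, velocity)
p_yarn = 0.05 * p_jet  # assume only ~5 % of jet power reaches the yarn
print(f"jet power {p_jet:.1f} W, power to yarn {p_yarn:.2f} W")
```

Because jet power scales with the cube of exit velocity (mass flow times velocity squared), even small improvements in how the reed channel guides the flow onto the yarn translate into large compressed-air savings.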

Keywords: air jet weaving, aerodynamic simulation, energy efficiency, experimental validation, weft insertion

Procedia PDF Downloads 183
1278 Insecurity and Insurgency on Economic Development of Nigeria

Authors: Uche Lucy Onyekwelu, Uche B. Ugwuanyi

Abstract:

Suffice it to say that socio-economic disruption of any form is likely to affect the well-being of the citizenry, and the upsurge of social disequilibrium caused by the incessant disruptive tendencies exhibited by youths and others in Nigeria is not helping matters. In Nigeria, social unrest has caused different forms of setbacks to socio-economic development. This study empirically evaluates the impact of insecurity and insurgency on the economic development of Nigeria. The paper notes the different forms of insecurity in Nigeria: insurgency and banditry, as witnessed in northern Nigeria; militancy in the Niger Delta area; and self-determination groups pursuing various agendas, such as the sit-at-home syndrome in south-eastern Nigeria and other secessionist movements. All of these have in one way or another hampered economic development in Nigeria. Data for this study were collected through primary and secondary sources using a questionnaire and existing documentation. The cost of investment in different security outfits in Nigeria represents the independent variable, while differentials in Gross Domestic Product (GDP) and the Human Development Index (HDI) are the measures of the dependent variable. Descriptive statistics and simple linear regression were employed in the data analysis. The result revealed that insurgency and insecurity negatively affect the economic development of the different parts of Nigeria. Following the findings, a model to analyse the effect of insecurity and insurgency was developed, named INSECUREDEVNIG. It implies that the economic development of Nigeria will continue to deteriorate if insurgency and insecurity continue. The study therefore recommends that the government do all it can to nurture its human capital, adequately fund the state security apparatus, and employ individuals of high integrity to manage the various security outfits in Nigeria. 
The government should also, as a matter of urgency, train security personnel in intelligence and Information and Communications Technology to enable them to ensure the effective implementation of the security policies needed to sustain the Gross Domestic Product and Human Development Index of Nigeria.
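The study's method, regressing a development measure on security investment, can be sketched with ordinary least squares. The data below are entirely hypothetical, constructed only to illustrate the shape of the analysis, not the study's dataset.

```python
# Ordinary least-squares sketch of regressing an economic-development
# measure on security investment. All data below are hypothetical.

def ols(xs, ys):
    """Return (slope, intercept) of the least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

security_spend = [1.0, 1.5, 2.1, 2.8, 3.5]  # hypothetical spending index
gdp_growth = [4.2, 3.6, 3.1, 2.2, 1.5]      # hypothetical growth, %
slope, intercept = ols(security_spend, gdp_growth)
print(f"slope = {slope:.2f}")  # a negative slope mirrors the reported effect
```

A negative fitted slope is consistent with the study's finding that rising security costs coincide with weaker development outcomes; establishing causation would of course require more than a bivariate fit.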

Keywords: insecurity, insurgency, gross domestic product, human development index, Nigeria

Procedia PDF Downloads 87
1277 Diagnostic Clinical Skills in Cardiology: Improving Learning and Performance with Hybrid Simulation, Scripted Histories, Wearable Technology, and Quantitative Grading – The Assimilate Excellence Study

Authors: Daly M. J., Condron C., Mulhall C., Eppich W., O'Neill J.

Abstract:

Introduction: In contemporary clinical cardiology, comprehensive and holistic bedside evaluation, including accurate cardiac auscultation, is in decline despite having positive effects on patients and their outcomes. Methods: Scripted histories and scoring checklists for three clinical scenarios in cardiology were co-created and refined through iterative consensus by a panel of clinical experts; these were then paired with recordings of auscultatory findings from three actual patients with known valvular heart disease. A wearable vest with embedded pressure-sensitive panel speakers was developed to play these recordings when the wearer was examined at the standard auscultation points. RCSI medical students volunteered for a series of three formative long case examinations in cardiology (LC1 – LC3) using this hybrid simulation. Participants were randomised into two groups: Group 1 received individual teaching from an expert trainer between LC1 and LC2; Group 2 received the same intervention between LC2 and LC3. Each participant’s long case examination performance was recorded and blindly scored by two peer participants and two RCSI examiners. Results: Sixty-eight participants were included in the study (age 27.6 ± 0.1 years; 74% female); there were no significant differences in baseline characteristics between the groups. Overall, the median total faculty examiner score was 39.8% (35.8 – 44.6%) in LC1 and increased to 63.3% (56.9 – 66.4%) in LC3, with Group 1 showing a greater improvement in LC2 total score than that observed in Group 2 (p < .001). Using the novel checklist, intraclass correlation coefficients (ICC) were excellent between examiners in all cases: ICC .994 – .997 (p < .001); correlation between peers and examiners improved in LC2 following peer grading of LC1 performances: ICC .857 – .867 (p < .001). 
Conclusion: Hybrid simulation and quantitative grading improve learning and standardisation of assessment, and enable direct comparisons of both performance and acumen in clinical cardiology.

Keywords: cardiology, clinical skills, long case examination, hybrid simulation, checklist

Procedia PDF Downloads 97
1276 Association between Polygenic Risk of Alzheimer's Dementia, Brain MRI and Cognition in UK Biobank

Authors: Rachana Tank, Donald. M. Lyall, Kristin Flegal, Joey Ward, Jonathan Cavanagh

Abstract:

Alzheimer’s Research UK estimates that by 2050, 2 million individuals will be living with late-onset Alzheimer’s disease (LOAD). However, individuals experience considerable cognitive deficits and brain pathology for decades before reaching clinically diagnosable LOAD, and studies have used genome-wide association studies (GWAS) and polygenic risk (PGR) scores to identify high-risk individuals and potential pathways. This investigation aims to determine whether high genetic risk of LOAD is associated with worse brain MRI measures and cognitive performance in healthy older adults within the UK Biobank cohort. Previous studies investigating associations of PGR for LOAD with measures of MRI or cognitive functioning have focused on specific aspects of hippocampal structure, in relatively small samples and with poor controlling for confounders such as smoking. To our knowledge, both the sample size of this study and the discovery GWAS sample are larger than in previous studies. Genetic interactions between the loci showing the largest effects in GWAS have not been extensively studied, and it is known that APOE e4 poses the largest genetic risk of LOAD, with potential gene-gene and gene-environment interactions of e4; for this reason, we also analyse genetic interactions of PGR with the APOE e4 genotype. We hypothesize that high genetic loading, based on a 21-SNP polygenic risk score for LOAD, is associated with worse brain MRI and cognitive outcomes in healthy individuals within the UK Biobank cohort. Summary statistics from the Kunkle et al. GWAS meta-analyses (cases: n=30,344; controls: n=52,427) will be used to create polygenic risk scores based on 21 SNPs, and analyses will be carried out in N=37,000 participants in the UK Biobank. This will be the largest study to date investigating PGR of LOAD in relation to MRI. MRI outcome measures include white matter (WM) tracts and structural volumes. 
Cognitive function measures include reaction time, pairs matching, trail making, digit symbol substitution and prospective memory. The interaction of APOE e4 alleles and PGR will be analysed by including APOE status as an interaction term coded as 0, 1 or 2 e4 alleles. Models will be partially adjusted for age, BMI, sex, genotyping chip, smoking, depression and social deprivation. Preliminary results suggest the PGR score for LOAD is associated with decreased hippocampal volumes, including the hippocampal body (standardised beta = -0.04, P = 0.022) and tail (standardised beta = -0.037, P = 0.030), but not with the hippocampal head. There were also associations of genetic risk with decreased cognitive performance, including fluid intelligence (standardised beta = -0.08, P < 0.01) and slower reaction time (standardised beta = 2.04, P < 0.01). No genetic interactions were found between APOE e4 dose and PGR score for MRI or cognitive measures. The generalisability of these results is limited by selection bias within the UK Biobank, as participants are less likely to be obese, to smoke, or to be socioeconomically deprived, and have fewer self-reported health conditions than the general population. The lack of a unified approach or standardised method for calculating genetic risk scores may also be a limitation of these analyses. Further discussion and results are pending.
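A polygenic risk score of the kind described above is a weighted sum of risk-allele dosages, PRS = Σ βj · dosage_j. The sketch below uses hypothetical effect sizes; the study derives its 21-SNP weights from the Kunkle et al. GWAS summary statistics.

```python
# Polygenic risk score sketch: weighted sum of per-SNP risk-allele
# dosages. The effect sizes (betas) below are hypothetical, not the
# study's 21 GWAS-derived weights.

def polygenic_risk_score(dosages, betas):
    """dosages: per-SNP risk-allele counts (0, 1 or 2) for one person;
    betas: per-SNP effect sizes from GWAS summary statistics."""
    assert len(dosages) == len(betas)
    return sum(d * b for d, b in zip(dosages, betas))

betas = [0.12, -0.05, 0.08, 0.20]  # hypothetical log-odds effect sizes
person = [2, 1, 0, 1]              # one individual's risk-allele dosages
print(round(polygenic_risk_score(person, betas), 3))
```

In the analysis described above, such a continuous score becomes the predictor in regression models of MRI and cognitive outcomes, with APOE e4 dose entered separately as an interaction term.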

Keywords: Alzheimer's dementia, cognition, polygenic risk, MRI

Procedia PDF Downloads 102
1275 Characterization of Forest Fire Fuel in Shivalik Himalayas Using Hyperspectral Remote Sensing

Authors: Neha Devi, P. K. Joshi

Abstract:

A fire fuel map is one of the most critical inputs for planning and managing fire hazard and risk. Wildfire is one of the most significant forms of global disturbance, impacting community dynamics, biogeochemical cycles, and local and regional climate across a wide range of ecosystems, from boreal forests to tropical rainforests. Assessment of fire danger is a function of forest type, fuelwood stock volume, moisture content, degree of senescence, and the fire management strategy adopted on the ground. Remote sensing has the potential to reduce the uncertainty in mapping fuels, and hyperspectral remote sensing is emerging as a very promising technology for wildfire fuel characterization. Fine spectral information also facilitates mapping of biophysical and chemical information directly related to the quality of forest fire fuels, including above-ground live biomass, canopy moisture, etc. We used Hyperion imagery acquired in February 2016 and analysed four fuel characteristics using Hyperion sensor data on board the EO-1 satellite, acquired over the Shivalik Himalayas covering the area of Champawat, Uttarakhand state. The main objective of this study was to present an overview of methodologies for mapping fuel properties using hyperspectral remote sensing data. The fuel characteristics analysed include fuel biomass, fuel moisture, fuel condition and fuel type. Fuel moisture and fuel biomass were assessed through the expression of the liquid water bands. Fuel condition and type were assessed using green vegetation, non-photosynthetic vegetation and soil as endmembers for spectral mixture analysis. Linear spectral unmixing, a partial spectral unmixing algorithm, was used to identify the spectral abundance of green vegetation, non-photosynthetic vegetation and soil.
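Linear spectral unmixing models each pixel spectrum as a linear mix of the endmember spectra, and recovers the abundances by solving the mixing equations. The toy sketch below uses three bands and three endmembers so the system solves exactly; all reflectance values are hypothetical, and a real Hyperion analysis would use many bands with a least-squares (and usually constrained) solver.

```python
# Linear spectral unmixing sketch: solve E @ a = pixel for abundances a,
# where the columns of E are endmember spectra (green vegetation,
# non-photosynthetic vegetation, soil). Three bands and three endmembers
# give an exact 3x3 system, solved here by Cramer's rule.
# All reflectance values below are hypothetical.

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def unmix(pixel, endmembers):
    """endmembers: list of per-band spectra, one spectrum per endmember."""
    e = [[endmembers[j][i] for j in range(3)] for i in range(3)]
    d = det3(e)
    abundances = []
    for col in range(3):
        ec = [row[:] for row in e]
        for i in range(3):
            ec[i][col] = pixel[i]  # replace one column with the pixel
        abundances.append(det3(ec) / d)
    return abundances

gv = [0.05, 0.45, 0.30]    # green vegetation (hypothetical reflectances)
npv = [0.20, 0.25, 0.40]   # non-photosynthetic vegetation
soil = [0.30, 0.35, 0.45]  # bare soil
pixel = [0.6 * a + 0.3 * b + 0.1 * c for a, b, c in zip(gv, npv, soil)]
print([round(a, 2) for a in unmix(pixel, [gv, npv, soil])])
```

Because the pixel here is built as an exact 60/30/10 mixture, the recovered abundances reproduce those fractions; with real noisy spectra, abundance maps are obtained per pixel under sum-to-one and non-negativity constraints.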

Keywords: forest fire fuel, Hyperion, hyperspectral, linear spectral unmixing, spectral mixture analysis

Procedia PDF Downloads 148
1274 Dynamic Modeling of the Green Building Movement in the U.S.: Strategies to Reduce Carbon Footprint of Residential Building Stock

Authors: Nuri Onat, Omer Tatari, Gokhan Egilmez

Abstract:

U.S. buildings consume a significant amount of energy and natural resources and are responsible for approximately 40% of the greenhouse gases emitted in the United States. Awareness of these environmental impacts paved the way for the green building movement, a rapidly growing trend: the green construction market has generated $173 billion in GDP, supported over 2.4 million jobs, and provided $123 billion in labor earnings. LEED-certified buildings are projected to account for almost half of all new nonresidential buildings by 2015. The National Science and Technology Council (NSTC) aims to increase the number of net-zero energy buildings (NZBs), with the ultimate goal that all commercial buildings in the U.S. be net-zero energy by 2050 (NSTC 2008). The Green Building Initiative (GBI) became the first green building organization accredited by the American National Standards Institute (ANSI), which will also boost the number of buildings certified under Green Globes. However, much less attention has been paid to greening residential buildings, although the environmental impacts of the existing residential stock exceed those of commercial buildings. The current research therefore models the residential green building movement with a system dynamics approach and assesses possible strategies to stabilize the carbon footprint of the U.S. residential building stock. Three aspects of sustainable development are considered in policy making: high-performance green building (HPGB) construction, NZB construction, and building retrofitting. Nineteen different policy options are proposed and analyzed. The results show that increasing the construction rate of HPGBs or NZBs alone is not sufficient to stabilize the carbon footprint of residential buildings; energy-efficient building retrofitting is a more effective strategy than increasing HPGB and NZB construction. The significance of shifting to renewable energy sources for electricity generation is also stressed.
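The stock-and-flow logic behind a system dynamics model of the building stock can be sketched with a simple discrete-time simulation. All stocks, rates, and carbon intensities below are invented placeholders, not the paper's calibrated values; the point is only to show how retrofit and construction flows move dwellings between stocks and change the aggregate footprint.

```python
# Minimal stock-flow sketch of a residential building-stock model, in the
# spirit of the system dynamics approach described above. Every number is
# an illustrative assumption.
conventional, green, retrofitted = 100e6, 1e6, 0.0   # dwellings
construction_rate = 1.0e6      # new green dwellings per year (assumed)
retrofit_rate = 0.02           # fraction of conventional stock retrofitted/yr
demolition_rate = 0.005        # fraction of conventional stock removed/yr
ci = {"conv": 8.0, "green": 3.0, "retro": 5.0}       # tCO2e/dwelling/yr

footprint = []
for year in range(2015, 2051):
    retrofits = retrofit_rate * conventional         # flow: conv -> retro
    demolitions = demolition_rate * conventional     # flow: conv -> removed
    conventional += -retrofits - demolitions
    green += construction_rate                       # flow: new green stock
    retrofitted += retrofits
    footprint.append(ci["conv"] * conventional
                     + ci["green"] * green
                     + ci["retro"] * retrofitted)

print(f"footprint 2015: {footprint[0]/1e6:.0f} MtCO2e, "
      f"2050: {footprint[-1]/1e6:.0f} MtCO2e")
```

Under these assumed rates the footprint declines mainly through the retrofit flow, which echoes the paper's finding that retrofitting dominates new green construction as a lever.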

Keywords: green building movement, residential buildings, carbon footprint, system dynamics

Procedia PDF Downloads 406
1273 Medical Workforce Knowledge of Adrenaline (Epinephrine) Administration in Anaphylaxis in Adults Considerably Improved with Training in a UK Hospital from 2010 to 2017

Authors: Jan C. Droste, Justine Burns, Nithin Narayan

Abstract:

Introduction: The life-threatening effects of inappropriate adrenaline (epinephrine) administration, e.g., giving the wrong dose, in the management of anaphylaxis are well documented in the medical literature. Half of the fatal anaphylactic reactions in the UK are iatrogenic, and the median time to cardio-respiratory arrest can be as short as 5 minutes. It is therefore imperative that hospital doctors of all grades have active and accurate knowledge of the correct route, site, and dose of adrenaline administration. Given this time constraint and the potentially fatal outcome of inappropriately managed anaphylaxis, it is alarming that surveys over the last 15 years have repeatedly shown that only a minority of doctors have accurate knowledge of adrenaline administration as recommended by the UK Resuscitation Council guidelines (2008, updated 2012). This comparison of survey results of the medical workforce over several years in a small NHS District General Hospital was conducted to establish the effect of employing multiple educational methods regarding adrenaline administration in anaphylaxis in adults. Methods: Between 2010 and 2017, several educational methods and tools were used to repeatedly inform the medical workforce (doctors and advanced clinical practitioners) in a single district general hospital about the treatment of anaphylaxis in adults. Whilst the senior staff remained largely the same cohort, the junior staff had changed fully by every survey.
Examples included: (i) formal teaching in Grand Rounds, during the junior doctors' induction process, and on advanced life support courses; (ii) in-situ simulation training performed by the clinical skills simulation team, comprising several ad hoc sessions and one 3-day event in 2017 that visited 16 separate clinical areas performing an acute anaphylaxis scenario using actors, involving around 100 individuals from multi-disciplinary teams; (iii) hospital-wide dissemination of the simulation event via the Trust's Simulation Newsletter; (iv) laminated algorithms attached to the 'crash trolleys'; (v) a short email 'alert' sent to all medical staff 3 weeks prior to the survey detailing the emergency treatment of anaphylaxis; and (vi) the performance of the surveys themselves, which represented a teaching opportunity in which gaps in knowledge could be addressed. Face-to-face surveys were carried out in 2010 ('pre-intervention'), 2015, and 2017, on the latter two occasions also including advanced clinical practitioners (ACPs). All surveys consisted of convenience samples. If verbal consent to conduct the survey was obtained, the medical practitioners' answers were recorded immediately on a data collection sheet. Results: There was a sustained improvement in the knowledge of the medical workforce from 2010 to 2017. Correct answers improved regarding the correct drug by 11% (84%, 95%, and 95%); the correct route by 20% (76%, 90%, and 96%); the correct site by 40% (43%, 83%, and 83%); and the correct dose by 45% (27%, 54%, and 72%). Overall, knowledge of all components (correct drug, route, site, and dose) improved from 13% in 2010 to 62% in 2017. Conclusion: This survey comparison shows that the medical workforce's knowledge of adrenaline administration for the treatment of anaphylaxis in adults can be considerably improved by employing a variety of educational methods.

Keywords: adrenaline, anaphylaxis, epinephrine, medical education, patient safety

Procedia PDF Downloads 116
1272 Technological Transference Tools to Diffuse Low-Cost Earthquake Resistant Construction with Adobe in Rural Areas of the Peruvian Andes

Authors: Marcial Blondet, Malena Serrano, Álvaro Rubiños, Elin Mattsson

Abstract:

In Peru, there are more than two million houses made of adobe (sun-dried mud bricks) or rammed earth (35% of all houses), in which almost 9 million people live, mainly because they cannot afford industrialized construction materials. Although adobe houses are cheap to build and thermally comfortable, their seismic performance is very poor, and they usually suffer significant damage or collapse with tragic loss of life. Therefore, over the years, researchers at the Pontifical Catholic University of Peru and other institutions have developed many reinforcement techniques in an effort to improve the structural safety of earthen houses located in seismic areas. However, most rural communities still live under unacceptable seismic risk conditions because these techniques have not been adopted massively, mainly due to high cost and lack of diffusion. The nylon rope mesh reinforcement technique is simple and low-cost, and two technological transference tools have been developed to diffuse it among rural communities: 1) scale seismic simulations using a portable shaking table, designed to prove its effectiveness in protecting adobe houses; and 2) a step-by-step illustrated construction manual that guides the complete building process of a nylon rope mesh reinforced adobe house. The district of Pullo, a small rural community in the Peruvian Andes where more than 80% of the inhabitants live in adobe houses and more than 60% are considered to live in poverty or extreme poverty, was selected as the study case. The research team carried out a one-day workshop in May 2015 and a two-day workshop in September 2015. Results were positive. First, the nylon rope mesh reinforcement procedure proved simple enough to be replicated by adults, both young and senior, and participants handled ropes and knots easily, as they use them in daily livestock activity.
In addition, nylon ropes proved to be highly available in the study area, being found at two local stores in a variety of colors and sizes. Second, the portable shaking table demonstration successfully showed the effectiveness of the nylon rope mesh reinforcement and generated interest in learning about it. At the first workshop, more than 70% of the participants were willing to formally sign up for practical training lessons; at the second workshop, more than 80% of the participants returned on the second day to receive introductory practical training. Third, community members found the illustrations in the construction manual simple and friendly, but the roof system illustrations led to misinterpretation, so they were improved. The technological transference tools developed in this project can be used to train rural dwellers in earthquake-resistant self-construction with adobe, which is still very common in the Peruvian Andes. This approach would allow community members to develop the skills and capacities to improve the safety of their households on their own, thus mitigating their high seismic risk and preventing tragic losses. Furthermore, proper training in earthquake-resistant self-construction with adobe would keep rural dwellers from depending on external aid after an earthquake and let them become agents of their own development.

Keywords: adobe, Peruvian Andes, safe housing, technological transference

Procedia PDF Downloads 284
1271 Advanced Study on Hydrogen Evolution Reaction Based on Nickel Sulfide Catalyst

Authors: Kishor Kumar Sadasivuni, Mizaj Shabil Sha, Assim Alajali, Godlaveeti Sreenivasa Kumar, Aboubakr M. Abdullah, Bijandra Kumar, Mithra Geetha

Abstract:

A promising pathway for efficient hydrogen production is water-splitting electrolysis, in which electrocatalysis plays a crucial role in energy conversion and storage. Practical hydrogen generation by electrocatalytic water splitting requires the development of active, stable, and low-cost electrocatalysts. In this study, we evaluated a combination of 2D materials with NiS nanoparticle catalysts for the hydrogen evolution reaction. The photocatalytic H₂ production rate of this composite is high and exceeds that obtained on the components alone; the nanoparticles serve as electron collectors and transporters, which explains this improvement. Moreover, a current density of 0.393 mA was recorded at a reduced working potential. Density functional theory calculations indicate that the composite's catalytic activity for the hydrogen evolution reaction arises from the strong interaction between its components at the interface. The samples were analyzed compositionally by XPS and morphologically by FESEM. The nanocomposite demonstrated high electrocatalytic activity, with a low Tafel slope of 60 mV/dec. Additionally, after 1000 cycles of a durability test, the electrocatalyst still displays excellent stability with minimal current loss. Owing to its robust synthesis, the catalyst shows considerable potential for use in hydrogen evolution. These findings show that the combination of 2D materials with nickel sulfide functions as a good electrocatalyst for H₂ evolution. The ongoing research in this fascinating field will surely push nickel sulfide-based technology closer to industrial reality and help resolve existing energy issues in a sustainable and clean manner.
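A Tafel slope such as the 60 mV/dec reported above is conventionally obtained by linear regression of overpotential against the base-10 logarithm of current density over the Tafel region. The polarization data below are synthetic, generated from an assumed 60 mV/dec slope purely to illustrate the fitting procedure; they are not the study's measurements.

```python
import numpy as np

# Synthetic polarization data in the Tafel region (assumed values).
j = np.logspace(-1, 1, 20)                # current density, mA/cm^2
eta = 0.10 + 0.060 * np.log10(j)          # overpotential, V (60 mV/dec assumed)

# Tafel analysis: eta = a + b * log10(j); the slope b is reported in mV/dec.
slope_v_per_dec, intercept = np.polyfit(np.log10(j), eta, 1)
print(f"Tafel slope = {slope_v_per_dec * 1000:.0f} mV/dec")
```

On real data one would first restrict the fit to the linear (kinetically controlled) portion of the polarization curve before regressing.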

Keywords: electrochemical hydrogenation, nickel sulfide, electrocatalysts, energy conversion, catalyst

Procedia PDF Downloads 107
1270 Environmental Planning for Sustainable Utilization of Lake Chamo Biodiversity Resources: Geospatially Supported Approach, Ethiopia

Authors: Alemayehu Hailemicael Mezgebe, A. J. Solomon Raju

Abstract:

Context: Lake Chamo is a significant lake in the Ethiopian Rift Valley, known for its diversity of wildlife and vegetation. However, the lake is facing various threats from human activities and global change, and poor management of its resources could lead to food insecurity, ecological degradation, and loss of biodiversity. Research Aim: The aim of this study is to analyze the environmental implications of lake level changes using GIS and remote sensing. The research also examines the floristic composition of the lakeside vegetation and proposes spatially oriented environmental planning for the sustainable utilization of the biodiversity resources. Methodology: The study utilizes multi-temporal satellite images and aerial photographs to analyze changes in the lake area over the past 45 years. Geospatial analysis techniques are employed to assess land use and land cover changes and to build a change detection matrix. The composition of the lakeside vegetation and its role in ecological and hydrological functions are also examined. Findings: The analysis reveals that the lake has shrunk by 14.42% over the years, with significant modifications to its upstream segment. The study identifies various threats to the lake-wetland ecosystem, including changes in water chemistry, overfishing, and poor waste management. It also highlights the impact of human activities on the lake's limnology, with increases in conductivity, salinity, and alkalinity. Floristic composition analysis of the lake-wetland ecosystem showed a definite pattern of vegetation distribution. The vegetation can be broadly categorized into three belts: the herbaceous belt, the legume belt, and the bush-shrub-small tree belt. Collectively, the vegetation belts act as a system of different-sized sieve screens that slows the influx of incoming foreign matter.
This stratified vegetation provides vital information for deciding the management interventions needed for the sustainability of the lake-wetland ecosystem. Theoretical Importance: The study contributes to the understanding of the environmental changes and threats faced by Lake Chamo. It provides insights into the impact of human activities on the lake-wetland ecosystem and emphasizes the need for sustainable resource management. Data Collection and Analysis Procedures: The study draws on aerial photographs, satellite imagery, and field observations. Geospatial analysis techniques are employed to process and analyze the data, including land use/land cover changes and change detection matrices. Floristic composition analysis is conducted to assess the vegetation patterns. Question Addressed: The study addresses the question of how lake level changes and human activities affect the environmental health and biodiversity of Lake Chamo. It also explores the potential opportunities and threats related to water utilization and waste management. Conclusion: The study recommends the implementation of spatially oriented environmental planning to ensure the sustainable utilization and maintenance of Lake Chamo's biodiversity resources. It emphasizes the need for proper waste management, improved irrigation facilities, and a buffer zone with specific vegetation patterns to restore and protect the lake's outskirts.
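The change detection matrix mentioned in the methodology is, in essence, a cross-tabulation of two classified land-cover rasters: rows index the earlier date's classes, columns the later date's, and off-diagonal cells count class-to-class transitions. The tiny 4x4 rasters and class codes below are invented for illustration and do not represent the study's data.

```python
import numpy as np

# Two hypothetical classified rasters of the same area at two dates.
classes = ["water", "wetland", "cropland"]          # codes 0, 1, 2 (assumed)
lc_1975 = np.array([[0, 0, 1, 1],
                    [0, 1, 1, 2],
                    [1, 1, 2, 2],
                    [1, 2, 2, 2]])
lc_2020 = np.array([[0, 1, 1, 2],
                    [1, 1, 2, 2],
                    [1, 2, 2, 2],
                    [2, 2, 2, 2]])

# Build the change detection matrix: rows = 1975 class, cols = 2020 class.
n = len(classes)
change = np.zeros((n, n), dtype=int)
for a, b in zip(lc_1975.ravel(), lc_2020.ravel()):
    change[a, b] += 1

print(change)                        # off-diagonal cells are transitions
print(f"unchanged pixels: {np.trace(change)} of {lc_1975.size}")
```

Multiplying each cell count by the pixel area converts the matrix into the hectare-based transition tables typically reported in land-cover change studies.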

Keywords: buffer zone, geo-spatial, lake chamo, lake level changes, sustainable utilization

Procedia PDF Downloads 61
1269 Effect of Nanoparticles on Wheat Seed Germination and Seedling Growth

Authors: Pankaj Singh Rawat, Rajeew Kumar, Pradeep Ram, Priyanka Pandey

Abstract:

Wheat is an important cereal crop for food security, and boosting wheat production and productivity is a major challenge across the nation. Good quality seed is required to maintain an optimum plant stand, which ultimately increases grain yield. Ensuring good germination is one of the key steps toward a proper plant stand, and assured moisture during seed germination may help speed up germination. The tiny size of nanoparticles may help water enter the seed without disturbing its internal structure. Considering the above, a laboratory experiment was conducted during 2012-13 at G.B. Pant University of Agriculture and Technology, Pantnagar, India. A completely randomized design was used for statistical analysis. The experiment was conducted in two phases: in the first phase, the appropriate nanoparticle concentration for seed treatment was screened; in the second phase, the seed soaking duration for better germination was standardized. Wheat variety UP2526 was taken as the test crop, and four nanoparticles (TiO2, ZnO, nickel, and chitosan) were studied. The germination studies were done in petri dishes following standard procedures, and standard packages and practices were used to raise the seedlings. In the first phase, seeds were treated with 50 and 300 ppm of nanoparticles, and a control was maintained for comparison. In the second phase, seeds were soaked for 4, 6, and 8 hours with 50 ppm nanoparticles of TiO2, ZnO, nickel, and chitosan, along with a control treatment, to identify the soaking time for better seed germination. The experiment revealed that the application of nanoparticles helps enhance seed germination.
The study showed that seed treatment with nanoparticles at 50 ppm concentration increases root length, shoot length, seedling length, shoot dry weight, seedling dry weight, seedling vigour index I, and seedling vigour index II as compared to seed soaking at 300 ppm concentration. It also showed that soaking for 4 hours was better than soaking for 6 or 8 hours. Seed soaking with nanoparticles, especially TiO2, ZnO, and chitosan, proved to enhance the germination and seedling growth indices of the wheat crop.
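The seedling vigour indices cited above are not defined in the abstract; the sketch below uses the widely cited Abdul-Baki and Anderson formulation (vigour index I = germination % x seedling length; vigour index II = germination % x seedling dry weight), and all sample values are invented for illustration.

```python
# Hypothetical measurements for one treatment (all values assumed).
germination_pct = 92.0        # % of seeds germinated
seedling_length_cm = 14.5     # root length + shoot length, cm
seedling_dry_wt_g = 0.035     # dry weight per seedling, g

# Abdul-Baki & Anderson vigour indices (assumed to be the indices meant).
vigour_index_1 = germination_pct * seedling_length_cm
vigour_index_2 = germination_pct * seedling_dry_wt_g
print(f"SVI-I = {vigour_index_1:.0f}, SVI-II = {vigour_index_2:.2f}")
```

Comparing these two indices across treatments captures both elongation (SVI-I) and biomass accumulation (SVI-II) effects of a seed treatment.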

Keywords: nanoparticles, seed germination, seed soaking, wheat

Procedia PDF Downloads 209