Search results for: real estate prediction
1664 A Pilot Study on Integration of Simulation in the Nursing Educational Program: Hybrid Simulation
Authors: Vesile Unver, Tulay Basak, Hatice Ayhan, Ilknur Cinar, Emine Iyigun, Nuran Tosun
Abstract:
The aim of this study is to analyze the effects of hybrid simulation. In this type of simulation, standardized patients and task trainers are employed simultaneously; for instance, to teach IV interventions, standardized patients and IV arm models are used together. The study was designed as quasi-experimental research. Before the implementation, ethical approval was obtained from the local ethics committee and administrative permission was granted by the nursing school. The study population included second-grade nursing students (n=77). The participants were selected through simple random sampling, and a total of 39 nursing students were included. The views of the participants were collected through a 12-item feedback form developed by the authors and through the “Patient intervention self-confidence/competence scale”. Participants reported advantages of the hybrid simulation practice, including developing connections between the simulated scenario and real-life situations in clinical conditions, and recognition of the need to learn more about clinical practice. All stated that the implementation was very useful for them. They also reported three major gains: improvement of critical thinking skills (94.7%), improvement of decision-making skills (97.3%), and feeling like a nurse (92.1%). The total mean score of the participants on the patient intervention self-confidence/competence scale was 75.23±7.76. The findings suggest that hybrid simulation has positive effects on the integration of theoretical and practical activities before clinical activities for nursing students.
Keywords: hybrid simulation, clinical practice, nursing education, nursing students
Procedia PDF Downloads 293
1663 Software-Defined Architecture and Front-End Optimization for DO-178B Compliant Distance Measuring Equipment
Authors: Farzan Farhangian, Behnam Shakibafar, Bobda Cedric, Rene Jr. Landry
Abstract:
Among air navigation technologies, many are capable of increasing aviation sustainability as well as improving accuracy in Alternative Positioning, Navigation, and Timing (APNT), especially avionics Distance Measuring Equipment (DME), Very high-frequency Omni-directional Range (VOR), etc. The integration of these air navigation solutions could provide robust and efficient accuracy in air mobility, air traffic management, and autonomous operations. Designing a proper RF front-end, power amplifier, and software-defined transponder could pave the way for an optimized avionics navigation solution. In this article, the possibility of reaching an optimum front-end to be used with a single low-cost Software-Defined Radio (SDR) has been investigated in order to reach a software-defined DME architecture. Our software-defined approach uses firmware capabilities to design a real-time software architecture compatible with a Multiple Input Multiple Output (MIMO) BladeRF, estimating an accurate time delay between the transmission (Tx) and reception (Rx) channels using synchronous scheduled communication. We also designed a novel power amplifier for the transmission channel of the DME to meet the minimum transmission power requirement. This article further investigates designing proper pulse pairs based on the DO-178B avionics standard. Various guidelines have been tested, and the possibility of passing the certification process for each standard term has been analyzed. Finally, the performance of the DME was tested in the laboratory using an IFR6000, which showed that the proposed architecture reached an accuracy of less than 0.23 Nautical miles (Nmi) with 98% probability.
Keywords: avionics, DME, software defined radio, navigation
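As a hedged illustration of the timing principle behind DME ranging, the sketch below converts a measured round-trip delay into slant range. The 50 µs figure is the standard X-channel ground-transponder reply delay; the function name and example values are illustrative and not taken from the paper.

```python
# Hedged sketch: converting a measured DME round-trip delay to slant range.
# Assumption: standard X-channel reply delay of 50 microseconds.

C = 299_792_458.0          # speed of light, m/s
REPLY_DELAY_S = 50e-6      # fixed transponder reply delay (X channel)
METERS_PER_NMI = 1852.0

def slant_range_nmi(round_trip_s: float) -> float:
    """Slant range in nautical miles from total interrogation-to-reply time."""
    one_way_s = (round_trip_s - REPLY_DELAY_S) / 2.0
    return one_way_s * C / METERS_PER_NMI

# A target 10 Nmi away adds ~123.6 us of round-trip propagation
# on top of the fixed reply delay.
```

The sub-0.23 Nmi accuracy reported in the abstract corresponds to resolving the round-trip time to within roughly 2.8 µs.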
Procedia PDF Downloads 79
1662 Strategic Management Education: A Driver of Architectural Career Development in a Changing Environment
Authors: Rigved Chandrashekhar Nimkhedkar, Rajat Agrawal, Vinay Sharma
Abstract:
Architects face demand for an expanded skill set to effectively navigate a landscape of evolving opportunities and challenges in the dynamic realm of the architectural profession. This literature- and survey-based study investigates the reasons behind architects’ career choices, as well as the effects of the evolving architectural scenario. The traditional role of architects in construction projects is evolving as they explore diverse career motivations, face financial constraints due to an oversupply of professionals, and experience specialisation and upskilling trends. Architects participate in an increasing number of value chains as more and more disciplines are introduced into the design-construction-operation supply chain. This insight emphasises the importance of integrating management and entrepreneurial education into architectural education rather than keeping them as separate entities. The study reveals the complex nature of the entrepreneurially challenging architectural profession, including cash flow management, market competition, environmental sustainability, and innovation opportunities. Loyal to their professional identity, architects express dissatisfaction while envisioning a future in which they play a more significant role in shaping reputable brands and contributing to education. The study emphasises the importance of dovetailing management and entrepreneurial education in architecture education to prepare graduates for the industry’s changing nature, underlining the need for real-world skills. This research contributes insights into the architectural profession’s transformative trajectory, emphasising adaptability, upskilling, and educational enhancements as critical success factors.
Keywords: architects, career path, education, management, specialisation
Procedia PDF Downloads 66
1661 Buffer Allocation and Traffic Shaping Policies Implemented in Routers Based on a New Adaptive Intelligent Multi Agent Approach
Authors: M. Taheri Tehrani, H. Ajorloo
Abstract:
In this paper, an intelligent multi-agent framework is developed for each router in which agents, positioned at the ports of the routers, have two vital functionalities: traffic shaping and buffer allocation. With the traffic-shaping functionality, agents shape forwarded traffic by dynamic, real-time allocation of the token-generation rate in a Token Bucket algorithm; with the buffer-allocation functionality, agents share their buffer capacity with each other based on their needs and the conditions of the network. This dynamic and intelligent framework gives some ports the opportunity to work better under bursty and busier conditions. The agents act intelligently based on a Reinforcement Learning (RL) algorithm and consider the effective parameters in their decision process. As RL is limited in how many parameters it can consider due to the volume of calculations, we utilize our novel method, which applies Principal Component Analysis (PCA) to the RL and gives the algorithm the high-dimensional ability to consider as many parameters as needed in its decision process. When this implementation is compared to our previous work, where traffic shaping was done without any sharing or dynamic allocation of buffer size for each port, lower packet drop can be seen across the whole network, specifically at the source routers. These methods are implemented in our previously proposed intelligent simulation environment to allow a better comparison of the performance metrics. The results obtained from this simulation environment show an efficient and dynamic utilization of resources in terms of bandwidth and the buffer capacities pre-allocated to each port.
Keywords: principal component analysis, reinforcement learning, buffer allocation, multi-agent systems
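The Token Bucket mechanism that the agents tune can be sketched as follows. This is a minimal Python illustration with invented names; the RL-driven adaptation of the token-generation rate described in the abstract is omitted, leaving `rate` as the knob the agents would adjust.

```python
# Minimal token-bucket shaper sketch (illustrative; the paper's agents
# adjust `rate` dynamically via reinforcement learning, omitted here).

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens generated per second (the tuned knob)
        self.capacity = capacity  # bucket depth = maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, packet_size: float, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False

tb = TokenBucket(rate=100.0, capacity=200.0)
print(tb.allow(150, now=0.0))   # True: the initial burst fits the bucket
print(tb.allow(150, now=0.5))   # False: only 50 + 0.5*100 = 100 tokens available
print(tb.allow(150, now=1.5))   # True: another 100 tokens have refilled
```

Raising `rate` lets a busy port forward bursts sooner, which is exactly the degree of freedom the agents exploit.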
Procedia PDF Downloads 518
1660 Revisiting Ecotourism Development Strategy of Cuc Phuong National Park in Vietnam: Considering Residents’ Perception and Attitudes
Authors: Bui Duc Sinh
Abstract:
Ecotourism in national parks is seen as one of the options for conserving natural resources and improving the living conditions of local communities. However, ecotourism development will be useless if it lacks the perception and support of local communities and appropriate ecotourism strategies. The aims of this study were to measure residents’ perception of and satisfaction with ecotourism impacts and their attitudes toward ecotourism development in Cuc Phuong National Park; to assess the current ecotourism strategies based on ecotourism criteria; and then to provide recommendations on ecotourism development strategies. The primary data were collected through personal observations, in-depth interviews with residents and national park staff, and surveys of households in all five communes in Cuc Phuong National Park. The results showed that local communities were aware of ecotourism impacts, had positive attitudes toward ecotourism development, and were satisfied with it. However, a higher perception rate was found among specific groups, such as the young, the high-income and educated, and those with jobs related to ecotourism. The study revealed issues of concern about the current ecotourism development strategies in Cuc Phuong National Park. The major hindrances to ecotourism development were lack of local participation and unattractive ecotourism services. It was also suggested that Cuc Phuong National Park should use ecotourism criteria to implement ecotourism activities sustainably and to harmonize the sharing of benefits amongst the stakeholders. The approaches proposed were to create local employment through reengineering, improve ecotourism quality, apportion tourism benefits appropriately among the stakeholders, and carry out education and training programs. Furthermore, the results of the study helped tour operators and tourism promoters become aware of the real concerns and issues in current ecotourism activities in Cuc Phuong National Park.
Keywords: ecotourism, ecotourism impact, local community, national park
Procedia PDF Downloads 340
1659 Learners’ Violent Behaviour and Drug Abuse as Major Causes of Tobephobia in Schools
Authors: Prakash Singh
Abstract:
Many schools throughout the world face constant pressure to cope with the violence and drug abuse of learners who show little or no respect for acceptable and desirable social norms. These delinquent learners tend to harbour feelings of being beyond reproach because they strongly believe that it is well within their rights to engage in violent and destructive behaviour. Knives, guns, and other weapons appear to be more readily used by them on school premises than before. It is known that learners smoke, drink alcohol, and use drugs during school hours; hence, their ability to concentrate, work, and learn is affected. They become violent and display disruptive behaviour in their classrooms as well as on the school premises, and this atrocious behaviour makes it possible for drug dealers and gangsters to gain access to the school premises. The primary purpose of this exploratory quantitative study was therefore to establish how tobephobia (TBP), caused by school violence and drug abuse, affects teaching and learning in schools. The findings of this study affirmed that poor discipline results in poor-quality education. Most of the teachers in this study agreed that educating learners who consumed alcohol and other drugs on the school premises caused them to suffer from TBP. These learners are frequently abusive and disrespectful, and resort to violence to seek attention. As a result, teachers feel extremely demotivated and suffer from high levels of anxiety and stress. The word TBP will surely be regarded as a blessing by many teachers throughout the world because, finally, there is a word that will make people sit up and listen to the problems that cause real fear and anxiety in schools.
Keywords: aims and objectives of quality education, debilitating effects of tobephobia, fear of failure associated with education, learners' violent behaviour and drug abuse
Procedia PDF Downloads 278
1658 [Keynote Talk]: Uptake of Co(II) Ions from Aqueous Solutions by Low-Cost Biopolymers and Their Hybrid
Authors: Kateryna Zhdanova, Evelyn Szeinbaum, Michelle Lo, Yeonjae Jo, Abel E. Navarro
Abstract:
Alginate hydrogel beads (AB), spent peppermint leaf (PM), and a hybrid adsorbent of these two materials (ABPM) were studied as potential biosorbents of cobalt(II) ions from aqueous solutions. The cobalt ion is a commonly underestimated pollutant that is responsible for several health problems. Discontinuous batch experiments were conducted at room temperature to evaluate the effect of solution acidity and adsorbent mass on the adsorption of Co(II) ions. The interfering effects of salinity and of the presence of surfactants, an organic dye, and Pb(II) ions were also studied to resemble the application of these adsorbents in real wastewater. Equilibrium results indicate that Co(II) uptake is maximized at pH values higher than 5, with adsorbent doses of 200 mg, 200 mg, and 120 mg for AB, PM, and ABPM, respectively. Co(II) adsorption followed the trend AB > ABPM > PM, with adsorption percentages of 77%, 71%, and 64%, respectively. Salts had a strong negative effect on the adsorption due to the increase in ionic strength and the competition for adsorption sites. The presence of Pb(II) ions, surfactant, and the dye BY57 had a slightly negative effect on the adsorption, apparently due to their interaction with different adsorption sites that do not interfere with the removal of Co(II). A polar-electrostatic adsorption mechanism is proposed based on the experimental results. Scanning electron microscopy indicates that the adsorbents have appropriate morphological and textural properties, and also that ABPM encapsulated most of the PM inside the hydrogel beads. These experimental results reveal that AB, PM, and ABPM are promising adsorbents for the elimination of Co(II) ions from aqueous solutions under different experimental conditions. These biopolymers are proposed as eco-friendly alternatives for the removal of heavy metal ions at lower cost than conventional techniques.
Keywords: adsorption, Co(II) ions, alginate hydrogel beads, spent peppermint leaf, pH
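The removal percentages quoted above follow from the standard batch-adsorption formulas, which the abstract does not spell out. The sketch below is a hedged illustration: the formulas are conventional, but the concentration, volume, and mass values are assumed for the example and are not data from the study.

```python
# Standard batch-adsorption bookkeeping (conventional formulas;
# example numbers are assumptions, not measurements from the paper).

def adsorption_percent(c0: float, ce: float) -> float:
    """Removal efficiency (%) from initial and equilibrium concentrations (mg/L)."""
    return (c0 - ce) * 100.0 / c0

def uptake_mg_per_g(c0: float, ce: float, volume_l: float, mass_g: float) -> float:
    """Adsorption capacity q = (C0 - Ce) * V / m, in mg of metal per g of adsorbent."""
    return (c0 - ce) * volume_l / mass_g

# A hypothetical run where C0 = 100 mg/L drops to Ce = 23 mg/L
# reproduces the 77% removal reported for AB:
print(adsorption_percent(100.0, 23.0))  # 77.0
```

The capacity form `q` is what isotherm models are usually fitted to; the percentage form is what the abstract reports.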
Procedia PDF Downloads 128
1657 Entrepreneurship in Pakistan: Opportunities and Challenges
Authors: Bushra Jamil, Nudrat Baqri, Muhammad Hassan Saeed
Abstract:
Entrepreneurship is creating or setting up a business not only for the purpose of generating profit but also for providing job opportunities. Entrepreneurs are problem solvers and product developers. They use their financial assets to hire a professional team, and combining innovation, knowledge, and leadership leads to a successful startup or business. To be a successful entrepreneur, one should be people-oriented and have perseverance. One must have the ability to take risks, believe in one's own potential, and have the courage to move forward in all circumstances; most importantly, one must be able to assess those risks. For STEM students, entrepreneurship is of specific importance and relevance, as it helps them not just to solve existing real-life problems but to recognize and identify emerging needs and glitches. It is becoming increasingly apparent that in today’s world there is a need, as well as a desire, for STEM and entrepreneurship to work together. In Pakistan, entrepreneurship is slowly emerging, yet we are far behind. It is high time that we introduced modern teaching methods and inculcated entrepreneurial initiative in students. A course on entrepreneurship can be included in the syllabus, and we must invite businessmen and policy makers to motivate young minds toward entrepreneurship. This must be complemented by pitching competitions, opportunities to win seed funding, and the facilities of incubation centers. In Pakistan, there are many good public sector research institutes, yet there is a gap in the private sector: only a few research institutes are dedicated to research and development. BJ Micro Lab is one of them. It is an SECP-registered company working in academia to promote and facilitate research in STEM. BJ Micro Lab is a women-led initiative, and we are trying to promote research as a passion, not as an arduous burden. For this, we are continuously arranging training workshops and sessions; more than 100 students have been trained in ten different workshops arranged at BJ Micro Lab.
Keywords: entrepreneurship, STEM, challenges, opportunities
Procedia PDF Downloads 129
1656 An Agent-Based Model of Innovation Diffusion Using Heterogeneous Social Interaction and Preference
Authors: Jang kyun Cho, Jeong-dong Lee
Abstract:
The advent of the Internet, mobile communications, and social network services has stimulated social interactions among consumers, allowing people to affect one another’s innovation adoptions by exchanging information more frequently and more quickly. Previous diffusion models, such as the Bass model, however, face limitations in reflecting such recent phenomena in society. These models are weak in their ability to model interactions between agents; they model aggregate-level behaviors only. The agent-based model, which is an alternative to the aggregate model, is good for individual modeling, but it is still not grounded in an economic perspective on social interactions. This study assumes the presence of social utility from other consumers in the adoption of innovation and investigates the effect of individual interactions on innovation diffusion by developing a new model called the interaction-based diffusion model. By comparing this model with previous diffusion models, the study also examines how the proposed model explains innovation diffusion from the perspective of economics. In addition, the study recommends the use of a small-world network topology instead of cellular automata to describe innovation diffusion. This study develops a model based on individual preference and heterogeneous social interactions using a utility specification, which is expandable and thus able to encompass various issues in diffusion research, such as reservation price. Furthermore, the study proposes a new framework to forecast aggregate-level market demand from individual-level modeling. The model also exhibits a good fit to real market data. It is expected that the study will contribute to our understanding of the innovation diffusion process through its microeconomic theoretical approach.
Keywords: innovation diffusion, agent based model, small-world network, demand forecasting
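A minimal sketch of diffusion on a small-world network is shown below. It replaces the paper's utility specification with a simple adoption-threshold rule ("adopt once a fraction `theta` of neighbours has adopted"), so the rule, the parameters, and all names are assumptions for illustration only.

```python
import random

# Toy diffusion on a Watts-Strogatz-style small-world ring (illustrative;
# the paper derives adoption from social utility, simplified here to a
# fractional-threshold rule).

def small_world(n: int, k: int, p: float, rng: random.Random):
    """Ring lattice with k neighbours per side; each edge rewired with prob p."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            a, b = i, (i + j) % n
            if rng.random() < p:          # rewire the far end to a random node
                b = rng.randrange(n)
            if a != b:
                nbrs[a].add(b); nbrs[b].add(a)
    return nbrs

def diffuse(nbrs, seeds, theta=0.3, steps=20):
    """Iterate adoption until no agent crosses the threshold."""
    adopted = set(seeds)
    for _ in range(steps):
        new = {i for i in nbrs if i not in adopted and nbrs[i]
               and len(nbrs[i] & adopted) / len(nbrs[i]) >= theta}
        if not new:
            break
        adopted |= new
    return adopted

rng = random.Random(42)
g = small_world(100, k=3, p=0.1, rng=rng)
print(len(diffuse(g, seeds=range(5))))  # number of eventual adopters
```

The rewiring probability `p` controls how quickly adoption can jump across the ring, which is the structural property the study contrasts with cellular automata.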
Procedia PDF Downloads 341
1655 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines a method to develop a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering to define the membership functions, and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were subsequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of the ANFIS models depends highly on the range of cases to which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE. In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that can be used for prediction. In the PV case study, conventional FIS generated errors that are only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result than those generated by the Life Cycle Methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that can be used during the initial design of a system.
Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
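As a hedged, single-input illustration of the zero-order Sugeno inference the study compares against Mamdani-type models, the sketch below uses two triangular rules. The rule base, membership parameters, and output constants are invented for the example and are far simpler than the three-input models in the study.

```python
# Minimal zero-order Sugeno inference sketch with triangular memberships
# (illustrative; the paper's FIS/DAFIS/ANFIS models use three inputs --
# irradiation, module efficiency, performance ratio -- and tuned rule bases).

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def sugeno(x: float, rules) -> float:
    """Weighted average of constant rule outputs; rules = [((a, b, c), out), ...]."""
    weights = [tri(x, *params) for params, _ in rules]
    return sum(w * out for w, (_, out) in zip(weights, rules)) / sum(weights)

# e.g. mapping normalized solar irradiation to an LCOE-like score:
rules = [((0.0, 0.0, 1.0), 10.0),   # "low irradiation  -> high cost"
         ((0.0, 1.0, 1.0),  4.0)]   # "high irradiation -> low cost"
print(sugeno(0.5, rules))  # 7.0: both rules fire equally, outputs averaged
```

A Mamdani-type system would instead aggregate fuzzy output sets and defuzzify them; ANFIS keeps this Sugeno structure but learns the parameters from calibration data.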
Procedia PDF Downloads 164
1654 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
Solid particle distribution on an impingement surface has been simulated utilizing a graphics processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) models. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. Particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all external forces. Previous models distributed particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes the deficiencies of the previous LBM-CA models and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model in simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature. The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D = 2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
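The probabilistic particle-transport idea can be illustrated, very loosely, with a one-dimensional reduction: each node holds a particle count, and every particle hops downstream with a probability tied to the local velocity. The 1D setup, the hop rule, and all names below are assumptions for exposition only; the paper's model is 3D (D3Q27) and accounts for bulk density and external forces.

```python
import random

# Very loose 1D reduction of the LBM-CA particle step (illustrative only;
# the actual model is 3D with forces and density-dependent redistribution).

def step(counts, u, dx=1.0, dt=1.0, rng=random):
    """One probabilistic transport step on a periodic 1D lattice."""
    new = [0] * len(counts)
    for i, n in enumerate(counts):
        p_hop = min(1.0, abs(u[i]) * dt / dx)      # Courant-like hop probability
        moved = sum(1 for _ in range(n) if rng.random() < p_hop)
        j = (i + (1 if u[i] >= 0 else -1)) % len(counts)
        new[j] += moved                             # particles that hop
        new[i] += n - moved                         # particles that stay
    return new

counts = step([100, 0, 0, 0], [0.5, 0.5, 0.5, 0.5], rng=random.Random(0))
print(counts)  # particle count is conserved; roughly half hop downstream
```

The key contrast with the earlier LBM-CA variants described above is that the hop probability here depends on the local velocity rather than being a fixed redistribution.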
Procedia PDF Downloads 207
1653 Plasmonic Nanoshells Based Metabolite Detection for in-vitro Metabolic Diagnostics and Therapeutic Evaluation
Authors: Deepanjali Gurav, Kun Qian
Abstract:
In-vitro metabolic diagnosis relies on designed-materials-based analytical platforms for the detection of selected metabolites in biological samples, which plays a key role in disease detection and therapeutic evaluation in clinics. The basic challenge, however, is developing a simple approach for metabolic analysis in bio-samples with high sample complexity and low molecular abundance. In this work, we report a designer plasmonic-nanoshell-based platform for the direct detection of small metabolites in clinical samples for in-vitro metabolic diagnostics. We first synthesized a series of plasmonic core-shell particles with tunable nanoshell structures. The optimized plasmonic nanoshells, as new matrices, allowed fast, multiplex, sensitive, and selective LDI-MS (laser desorption/ionization mass spectrometry) detection of small metabolites in 0.5 μL of bio-fluids without enrichment or purification. Furthermore, coupled with isotopic quantification of selected metabolites, we demonstrated the use of these plasmonic nanoshells for disease detection and therapeutic evaluation in clinics. For disease detection, we identified patients with postoperative brain infection through glucose quantitation and daily monitoring by cerebrospinal fluid (CSF) analysis. For therapeutic evaluation, we investigated drug distribution in the blood and CSF systems and validated the function and permeability of the blood-brain/CSF barriers during therapeutic treatment of patients with cerebral edema in a pharmacokinetic study. Our work sheds light on the design of materials for high-performance metabolic analysis and precision diagnostics in real cases.
Keywords: plasmonic nanoparticles, metabolites, fingerprinting, mass spectrometry, in-vitro diagnostics
Procedia PDF Downloads 138
1652 Evaluation of Gene Expression after in Vitro Differentiation of Human Bone Marrow-Derived Stem Cells to Insulin-Producing Cells
Authors: Mahmoud M. Zakaria, Omnia F. Elmoursi, Mahmoud M. Gabr, Camelia A. AbdelMalak, Mohamed A. Ghoneim
Abstract:
Many protocols have been published for the differentiation of human mesenchymal stem cells (MSCs) into insulin-producing cells (IPCs) that secrete the insulin hormone, aiming to treat diabetes. Our aim is to evaluate relative gene expression for each independent protocol. Human bone marrow cells were derived from three volunteers who suffer from diabetes. After expansion of the mesenchymal stem cells, differentiation was performed by three different protocols (the one-step protocol used conophylline protein, the two-step protocol depended on trichostatin-A, and the three-step protocol started with beta-mercaptoethanol). Evaluation of gene expression was carried out by real-time PCR for pancreatic endocrine genes, transcription factors, glucose transporter, precursor markers, pancreatic enzymes, proteolytic cleavage, extracellular matrix, and cell surface protein. Insulin secretion was quantified by the immunofluorescence technique in 24-well plates. Most of the genes studied were up-regulated in the in vitro differentiated cells, and insulin production was observed in the three independent protocols. There were slight increases in the expression of endocrine mRNA in the two-step protocol and in its insulin production. The two-step protocol was therefore more efficient in expressing pancreatic endocrine genes and producing insulin than the other two protocols.
Keywords: mesenchymal stem cells, insulin producing cells, conophylline protein, trichostatin-A, beta-mercaptoethanol, gene expression, immunofluorescence technique
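Relative gene expression from real-time PCR is conventionally computed with the Livak 2^-ΔΔCt method. The sketch below illustrates that calculation with made-up Ct values; neither the numbers nor the function name come from this study.

```python
# Livak 2^-ddCt relative-expression sketch (standard real-time PCR analysis;
# the Ct values below are invented examples, not data from the study).

def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # compare treated vs control
    return 2.0 ** (-dd_ct)

# An insulin-gene Ct dropping from 30 to 27 cycles (reference gene steady
# at 20) corresponds to an 8-fold up-regulation:
print(fold_change(27.0, 20.0, 30.0, 20.0))  # 8.0
```

Each cycle of earlier threshold crossing doubles the implied template amount, which is why a 3-cycle shift maps to a factor of 8.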
Procedia PDF Downloads 215
1651 Cross-Sectoral Energy Demand Prediction for Germany with a 100% Renewable Energy Production in 2050
Authors: Ali Hashemifarzad, Jens Zum Hingst
Abstract:
The structure of the world’s energy systems has changed significantly over the past years. One of the most important challenges of the 21st century in Germany (and worldwide) is the energy transition. This transition aims to comply with the recent international climate agreements from the United Nations Climate Change Conference (COP21) to ensure a sustainable energy supply with minimal use of fossil fuels. Germany aims for complete decarbonization of the energy sector by 2050, according to the federal climate protection plan. One of the stipulations of the Renewable Energy Sources Act 2017 for the expansion of energy production from renewable sources in Germany is that renewables cover at least 80% of the electricity requirement in 2050; for gross final energy consumption, the target is at least 60%. This means that by 2050, the energy supply system would have to be almost completely converted to renewable energy. An essential basis for the development of such a sustainable energy supply from 100% renewable energies is to predict the energy requirement by 2050. This study presents two scenarios for the final energy demand in Germany in 2050. In the first scenario, the targets for energy efficiency increase and demand reduction are set very ambitiously. To provide a basis for comparison, the second scenario gives results under less ambitious assumptions. For this purpose, the relevant framework conditions (following CUTEC 2016) were first examined, such as the predicted population development and economic growth, which have been significant drivers of increasing energy demand in the past. The potential for demand reduction and efficiency increase (on the demand side) was also investigated. In particular, current and future technological developments in energy-consuming sectors and possible options for energy substitution (namely the electrification rate in the transport sector and the building renovation rate) were included. Here, in addition to the traditional electricity sector, heat and fuel-based consumption in sectors such as households, commerce, industry, and transport are taken into account, supporting the idea that for a 100% supply from renewable energies, the areas currently based on (fossil) fuels must be almost completely electricity-based by 2050. The results show that the very ambitious scenario requires a final energy demand of 1,362 TWh/a, composed of 818 TWh/a of electricity, 229 TWh/a of ambient heat for electric heat pumps, and approximately 315 TWh/a of non-electric energy (raw materials for non-electrifiable processes). In the less ambitious scenario, in which the targets are not fully achieved by 2050, the final energy demand will require a higher electricity share of almost 1,138 TWh/a (out of a total of 1,682 TWh/a). It has also been estimated that 50% of the electricity yield must be stored to compensate for fluctuations in daily and annual flows. Due to conversion and storage losses (about 50%), this means that the electricity requirement for the very ambitious scenario increases to 1,227 TWh/a.
Keywords: energy demand, energy transition, German Energiewende, 100% renewable energy production
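The scenario figures can be checked with simple arithmetic taken directly from the text; the storage split below follows the stated assumption that half of the electricity passes through storage with roughly 50% conversion and storage losses.

```python
# Arithmetic check of the ambitious-scenario figures (all values in TWh/a,
# taken from the text; the 50% storage share and ~50% losses are as stated).

electricity  = 818   # final electricity demand
ambient_heat = 229   # ambient heat for electric heat pumps
non_electric = 315   # raw materials for non-electrifiable processes

total_demand = electricity + ambient_heat + non_electric
print(total_demand)  # 1362

# Half of the electricity passes through storage; at ~50% round-trip
# efficiency, twice as much must be generated for that half:
direct      = electricity * 0.5
via_storage = (electricity * 0.5) / 0.5
print(direct + via_storage)  # 1227.0
```

Both printed values reproduce the 1,362 TWh/a total demand and the 1,227 TWh/a storage-adjusted electricity requirement quoted in the abstract.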
Procedia PDF Downloads 134
1650 Experimental and Numerical Performance Analysis for Steam Jet Ejectors
Authors: Abdellah Hanafi, G. M. Mostafa, Mohamed Mortada, Ahmed Hamed
Abstract:
The steam ejectors are the heart of most desalination systems that employ vacuum. Systems that employ low-grade thermal energy sources, like solar and geothermal energy, use the ejector to drive the system instead of high-grade electric energy. The jet ejector creates vacuum by employing the flow of steam or air and exploiting the severe pressure drop at the outlet of the main nozzle. The present work involves developing a one-dimensional mathematical model for designing jet ejectors and transforming it into computer code using the Engineering Equation Solver (EES) software. The model receives the required operating conditions at the inlets and outlet of the ejector as inputs and produces the corresponding dimensions required to reach these conditions. The one-dimensional model has been validated against an existing ejector operating at the Abu-Qir power station. A prototype has been designed according to the one-dimensional model and attached to a special test bench to be tested before use in the solar desalination pilot plant. The tested ejector will be responsible for the startup evacuation of the system and for adjusting the vacuum of the evaporating effects. The tested prototype has shown good agreement with the results of the code. In addition, a numerical analysis has been applied to one of the designed geometries to give, on the one hand, an image of the pressure and velocity distribution inside the ejector and, on the other, to show the difference in results between the two-dimensional ideal-gas model and the real prototype. The commercial edition of the ANSYS Fluent v.14 software is used to solve the two-dimensional axisymmetric case. Keywords: solar energy, jet ejector, vacuum, evaporating effects
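The kind of relation underlying such a one-dimensional ejector design model can be illustrated with the isentropic nozzle-exit velocity of an ideal gas (a sketch only: the constant-property ideal-gas treatment of steam and the numerical operating conditions below are illustrative assumptions, not the paper's actual EES model):

```python
from math import sqrt

def isentropic_exit_velocity(T0, p0, p_exit, gamma=1.3, R=461.5):
    """Ideal-gas isentropic exit velocity [m/s] from stagnation state
    (T0 [K], p0 [Pa]) expanding to p_exit [Pa]. gamma and R are rough
    values for steam treated as an ideal gas."""
    cp = gamma * R / (gamma - 1.0)                   # specific heat, J/(kg K)
    ratio = (p_exit / p0) ** ((gamma - 1.0) / gamma)  # isentropic T ratio
    return sqrt(2.0 * cp * T0 * (1.0 - ratio))

# Motive steam at ~2 bar / 393 K expanding into a ~0.1 bar suction chamber
v = isentropic_exit_velocity(T0=393.0, p0=200e3, p_exit=10e3)
```

The large exit velocity (several hundred m/s) is what entrains the suction flow and sustains the vacuum; a real design model adds mixing, shock and diffuser relations on top of this nozzle equation.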
Procedia PDF Downloads 621
1649 Ocean Planner: A Web-Based Decision Aid to Design Measures to Best Mitigate Underwater Noise
Authors: Thomas Folegot, Arnaud Levaufre, Léna Bourven, Nicolas Kermagoret, Alexis Caillard, Roger Gallou
Abstract:
Concern about the negative impacts of anthropogenic noise on the ocean’s ecosystems has increased over recent decades. This concern has led to a similarly increased willingness to regulate noise-generating activities, of which shipping is one of the most significant. Dealing with ship noise requires not only knowledge about the noise from individual ships, but also how the ship noise is distributed in time and space within the habitats of concern. Marine mammals, but also fish, sea turtles, larvae and invertebrates, depend largely on the sounds they use to hunt, feed, avoid predators, socialize and communicate during reproduction, or defend a territory. In the marine environment, sight is only useful up to a few tens of meters, whereas sound can propagate over hundreds or even thousands of kilometers. Directive 2008/56/EC of the European Parliament and of the Council of June 17, 2008, called the Marine Strategy Framework Directive (MSFD), requires the Member States of the European Union to take the necessary measures to reduce the impacts of maritime activities to achieve and maintain a good environmental status of the marine environment. Ocean-Planner is a web-based platform that provides regulators, managers of protected or sensitive areas and other stakeholders with a decision support tool that enables them to anticipate and quantify the effectiveness of management measures in terms of reducing or modifying the distribution of underwater noise, in response to Descriptor 11 of the MSFD and to the Marine Spatial Planning Directive. Based on the operational sound modelling tool Quonops Online Service, Ocean-Planner allows the user, via an intuitive geographical interface, to define management measures at local (Marine Protected Area, Natura 2000 sites, harbors, etc.) or global (Particularly Sensitive Sea Area) scales, seasonal (regulation over a period of time) or permanent, partial (focused on some maritime activities) or complete (all maritime activities), etc.
Speed limits, exclusion areas, traffic separation schemes (TSS) and vessel sound level limitations are among the measures supported by the tool. Ocean Planner helps decide on the most effective measures to apply to maintain or restore the biodiversity and the functioning of the ecosystems of the coastal seabed, maintain a good state of conservation of sensitive areas, and maintain or restore the populations of marine species. Keywords: underwater noise, marine biodiversity, marine spatial planning, mitigation measures, prediction
Procedia PDF Downloads 122
1648 Altered Expression of Ubiquitin Editing Complex in Ulcerative Colitis
Authors: Ishani Majumdar, Jaishree Paul
Abstract:
Introduction: Ulcerative Colitis (UC) is an inflammatory disease of the colon resulting from an autoimmune response towards the individual’s own microbiota. Excessive inflammation is characterized by hyper-activation of NFkB, a transcription factor regulating the expression of various pro-inflammatory genes. The ubiquitin editing complex, consisting of TNFAIP3, ITCH, RNF11 and TAX1BP1, maintains homeostatic levels of active NFkB through feedback inhibition and assembles in response to various stimuli that activate NFkB. TNFAIP3 deubiquitinates key signaling molecules involved in the NFkB activation pathway. ITCH, RNF11 and TAX1BP1 provide substrate specificity, acting as adaptors for TNFAIP3 function. Aim: This study aimed to determine the transcript-level expression of members of the ubiquitin editing complex in inflamed colon tissues of UC patients. Materials and Methods: Colonic biopsy samples were collected from 30 UC patients recruited at the Department of Gastroenterology, AIIMS (New Delhi). The control group (n=10) consisted of individuals undergoing examination for functional disorders. Real-time PCR was used to determine relative expression, with GAPDH as the housekeeping gene. Results: Expression of members of the ubiquitin editing complex was significantly altered during active disease. Expression of TNFAIP3 was upregulated, while a concomitant decrease in the expression of ITCH, RNF11 and TAX1BP1 was seen in UC patients. Discussion: This study reveals that the increase in TNFAIP3 expression was unable to control inflammation during active UC. Further, insufficient upregulation of ITCH, RNF11 and TAX1BP1 may limit the formation of the ubiquitin editing complex and contribute to the pathogenesis of UC. Keywords: altered expression, inflammation, ubiquitin editing complex, ulcerative colitis
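Relative expression from real-time PCR data normalized to a housekeeping gene is commonly quantified with the 2^(-ΔΔCt) (Livak) method; the sketch below assumes that method and uses made-up Ct values for illustration (the abstract does not state which quantification scheme was applied):

```python
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression (fold change) by the 2**(-ddCt) method,
    normalizing the target gene to a housekeeping gene such as GAPDH."""
    d_ct_case = ct_target_case - ct_ref_case   # normalize patient sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # normalize control sample
    dd_ct = d_ct_case - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: the target amplifies 2 cycles earlier (relative
# to GAPDH) in the UC sample than in the control, i.e. 4-fold upregulation.
fc = fold_change(ct_target_case=24.0, ct_ref_case=18.0,
                 ct_target_ctrl=26.0, ct_ref_ctrl=18.0)
```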
Procedia PDF Downloads 262
1647 Impact of Unusual Dust Event on Regional Climate in India
Authors: Kanika Taneja, V. K. Soni, Kafeel Ahmad, Shamshad Ahmad
Abstract:
A severe dust storm generated by a western disturbance over north Pakistan and adjoining Afghanistan affected the north-west region of India between May 28 and 31, 2014, resulting in significant reductions in air quality and visibility. The air quality of the affected region degraded drastically. The PM10 concentration peaked at a very high value of around 1018 μg m⁻³ during the dust storm hours of May 30, 2014 at New Delhi. The present study depicts aerosol optical properties monitored during the dust days using a ground-based multi-wavelength sky radiometer over the National Capital Region of India. A high Aerosol Optical Depth (AOD) at 500 nm of 1.356 ± 0.19 was observed at New Delhi, while the Angstrom exponent (alpha) dropped to 0.287 on May 30, 2014. The variation in the Single Scattering Albedo (SSA) and in the real n(λ) and imaginary k(λ) parts of the refractive index indicated that the dust event shifted the aerosol optical state towards more absorbing. The single scattering albedo, refractive index, volume size distribution and asymmetry parameter (ASY) values suggested that dust aerosols predominated over anthropogenic aerosols in the urban environment of New Delhi. The large reduction in the radiative flux at the surface level caused significant cooling at the surface. Direct Aerosol Radiative Forcing (DARF) was calculated using a radiative transfer model during the dust period. A consistent increase in surface cooling was evident, ranging from -31 Wm⁻² to -82 Wm⁻², together with an increase in atmospheric heating from 15 Wm⁻² to 92 Wm⁻² and a change from -2 Wm⁻² to 10 Wm⁻² at the top of the atmosphere. Keywords: aerosol optical properties, dust storm, radiative transfer model, sky radiometer
Procedia PDF Downloads 377
1646 Zeolite Supported Iron-Sensitized TiO₂ for Tetracycline Photocatalytic Degradation under Visible Light: A Comparison between Doping and Ion Exchange
Authors: Ghadeer Jalloul, Nour Hijazi, Cassia Boyadjian, Hussein Awala, Mohammad N. Ahmad, Ahmad Albadarin
Abstract:
In this study, we applied Fe-sensitized TiO₂ supported over embryonic Beta (BEA) zeolite for the photocatalytic degradation of the tetracycline (TC) antibiotic under visible light. Four different samples with TiO₂/BEA ratios of 20, 40, 60 and 100% w/w were prepared. The immobilization of sol-gel TiO₂ (33 m²/g) over BEA (390 m²/g) increased its surface area to 227 m²/g and enhanced its adsorption capacity from 8% to 19%. To extend the activity of the TiO₂ photocatalyst towards the visible-light region (λ>380 nm), we explored two different metal sensitization techniques with iron ions (Fe³⁺). In the ion-exchange method, the substitutional cations of the zeolite in TiO₂/BEA were exchanged with Fe³⁺ in an aqueous solution of FeCl₃. In the doping technique, sol-gel TiO₂ was doped with Fe³⁺ from the FeCl₃ precursor during its synthesis, before its immobilization over BEA. The Fe-TiO₂/BEA catalysts were characterized using SEM, XRD, BET, UV-VIS DRS and FTIR. After testing the performance of the various ion-exchanged catalysts under blue and white lights, only Fe-TiO₂/BEA 60% showed better activity than pure TiO₂ under white light, with 100 ppm initial catalyst concentration and 20 ppm TC concentration. Compared to ion-exchanged Fe-TiO₂/BEA, doped Fe-TiO₂/BEA resulted in higher photocatalytic efficiencies under blue and white lights. The 3%-Fe-doped TiO₂/BEA removed 92% of TC, compared to 54% by TiO₂ under white light. The catalysts were also tested under real solar irradiation. This improvement in the photocatalytic performance of TiO₂ was due to the higher adsorption capacity provided by the BEA support, combined with the presence of iron ions that enhance visible-light absorption and minimize the recombination of the charge carriers. Keywords: tetracycline, photocatalytic degradation, immobilized TiO₂, zeolite, iron-doped TiO₂, ion-exchange
Procedia PDF Downloads 108
1645 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review
Authors: Faisal Muhibuddin, Ani Dijah Rahajoe
Abstract:
This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research focuses on investigating various methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and its evolutionary dynamics, this study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including k-means clustering, naïve Bayes, k-nearest neighbour, and other clustering methods. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research. A detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with more profound insights. Furthermore, the study explores the temporal implications in crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also elucidated as a crucial element in enhancing responsiveness and accuracy in crime classification. Keywords: data mining, classification algorithm, naïve Bayes, k-means clustering, k-nearest neighbour, crime, data analysis, systematic literature review
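The k-nearest-neighbour technique reviewed above can be sketched in a few lines of plain Python (a toy example: the two-feature incident vectors and crime labels below are invented for illustration, not taken from any reviewed study):

```python
from collections import Counter
from math import dist

# Toy training set: (hour-of-day scaled to 0-1, distance-to-centre scaled 0-1)
incidents = [
    ((0.90, 0.20), "burglary"),
    ((0.80, 0.30), "burglary"),
    ((0.95, 0.25), "burglary"),
    ((0.20, 0.80), "assault"),
    ((0.30, 0.70), "assault"),
    ((0.25, 0.90), "assault"),
]

def knn_classify(query, data, k=3):
    """Majority vote among the k nearest labelled incidents (Euclidean)."""
    nearest = sorted(data, key=lambda item: dist(query, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

label = knn_classify((0.85, 0.28), incidents)  # near the "burglary" cluster
```

Real crime classifiers replace the two toy features with engineered attributes from criminal records, text and temporal data, but the voting mechanism is the same.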
Procedia PDF Downloads 66
1644 Comparison between Some of Robust Regression Methods with OLS Method with Application
Authors: Sizar Abed Mohammed, Zahraa Ghazi Sadeeq
Abstract:
The classic least squares (OLS) method for estimating linear regression parameters has good properties, such as unbiasedness, minimum variance and consistency, provided its assumptions hold. When the data are contaminated with outliers, alternative statistical techniques have been developed to estimate the parameters; these are the robust (or resistant) methods. In this paper, three robust methods are studied: the maximum likelihood type estimator (M-estimator), the modified maximum likelihood type estimator (MM-estimator) and the least trimmed squares estimator (LTS-estimator), and their results are compared with the OLS method. These methods are applied to real data taken from the Duhok company for manufacturing furniture, and the obtained results are compared using the criteria Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE) and Mean Sum of Absolute Errors (MSAE). Important conclusions of this study are the following: in the furniture line, the outlying values detected by the four methods are few and very close to the rest of the data, indicating that the distribution of the errors is close to normal; in the doors line, OLS detects fewer outliers than the robust methods, meaning that the error distribution departs from normality. Another important conclusion is that, for the doors line, the parameter estimates obtained by OLS are very far from those obtained by the robust methods; the LTS-estimator gave better results under the MSE criterion and the M-estimator under the MAPE criterion, while under the MSAE criterion the MM-estimator was better. The programs S-Plus (version 8.0, Professional 2007), Minitab (version 13.2) and SPSS (version 17) are used to analyze the data. Keywords: robust regression, LTS, M-estimator, MSE
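The effect the authors describe (OLS being dragged by outliers while robust fits are not) can be reproduced in a few lines of Python. As a simple stand-in for the M-, MM- and LTS-estimators studied in the paper, the sketch below uses the Theil-Sen estimator (median of pairwise slopes), which is likewise outlier-resistant; the data are synthetic:

```python
from statistics import median

def ols_slope(x, y):
    """Ordinary least squares slope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def theil_sen_slope(x, y):
    """Robust slope estimate: median of all pairwise slopes."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if x[j] != x[i]]
    return median(slopes)

# y = 2x + 1 with one gross outlier at the last point
x = list(range(10))
y = [2 * xi + 1 for xi in x]
y[-1] += 50

b_ols = ols_slope(x, y)           # pulled far above the true slope of 2
b_robust = theil_sen_slope(x, y)  # unaffected by the single outlier
```

The same contrast, evaluated with MSE/MAPE-style criteria, is what the paper reports for the doors-line data.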
Procedia PDF Downloads 232
1643 Technical Aspects of Closing the Loop in Depth-of-Anesthesia Control
Authors: Gorazd Karer
Abstract:
When performing a diagnostic procedure or surgery under general anesthesia (GA), proper introduction and dosing of anesthetic agents is one of the main tasks of the anesthesiologist. However, depth of anesthesia (DoA) also seems to be a suitable process for closed-loop control implementation. To implement such a system, one must be able to acquire the relevant signals online and in real-time, as well as stream the calculated control signal to the infusion pump. However, during a procedure, patient monitors and infusion pumps are purposely unable to connect to an external (possibly medically unapproved) device for safety reasons, thus preventing closed-loop control. The paper proposes a conceptual solution to the aforementioned problem. First, it presents some important aspects of contemporary clinical practice. Next, it introduces the closed-loop-control-system structure and the relevant information flow. Focusing on transferring the data from the patient to the computer, it presents a non-invasive image-based system for signal acquisition from a patient monitor for online depth-of-anesthesia assessment. Furthermore, it introduces a UDP-based communication method that can be used for transmitting the calculated anesthetic inflow to the infusion pump. The proposed system is independent of any medical device manufacturer and is implemented in Matlab-Simulink, which can be conveniently used for DoA control implementation. The proposed scheme has been tested in a simulated GA setting and is ready to be evaluated in an operating theatre. However, the proposed system is only a step towards a proper closed-loop control system for DoA that could routinely be used in clinical practice. Keywords: closed-loop control, depth of anesthesia (DoA), modeling, optical signal acquisition, patient state index (PSi), UDP communication protocol
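The UDP link to the infusion pump can be sketched as follows (an illustrative Python stand-in for the paper's Matlab-Simulink implementation; the loopback address, ephemeral port and JSON message format are assumptions for demonstration, not the pump's actual protocol):

```python
import json
import socket

# Stand-in "pump" endpoint listening on an ephemeral loopback port.
pump = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
pump.bind(("127.0.0.1", 0))
pump.settimeout(2.0)
pump_addr = pump.getsockname()

# Controller side: stream the calculated anesthetic inflow as a datagram.
controller = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
setpoint = {"propofol_ml_per_h": 12.5}  # hypothetical infusion-rate message
controller.sendto(json.dumps(setpoint).encode("utf-8"), pump_addr)

# Pump side: receive and decode the latest setpoint.
data, _ = pump.recvfrom(1024)
received = json.loads(data.decode("utf-8"))
controller.close()
pump.close()
```

UDP is a natural fit here because the controller only ever needs the latest setpoint: a lost or stale datagram is simply superseded by the next control update rather than retransmitted.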
Procedia PDF Downloads 217
1642 Metaphors of Love and Passion in Lithuanian Comics
Authors: Saulutė Juzelėnienė, Skirmantė Šarkauskienė
Abstract:
This paper analyses the multimodal representations of the concepts of LOVE and PASSION in the Lithuanian graphic novel “Gertrūda” by Gerda Jord. The research is based on earlier findings by Forceville (2005) and Eerden (2009), as well as insights by Shihara and Matsunaka (2009) and Kövecses (2000). The target and source domains of the LOVE and PASSION metaphors in comics are expressed by verbal and non-verbal cues. The analysis of non-verbal cues adopts the concepts of runes and indexes. A pictorial rune is a graphic representation of an object that does not exist in reality in comics, such as lines, dashes and text "balloons"; a pictorial index is a graphically represented object of reality, a real symptom expressing a certain emotion, such as a wide smile or furrowed eyebrows. Indexes are often hyperbolized in comics. The research revealed that the most frequent source domains are CLOSENESS/UNITY, NATURAL/PHYSICAL FORCE, VALUABLE OBJECT and PRESSURE. The target is the emotion of LOVE/PASSION, which belongs to the more abstract domain of psychological experience. In this kind of metaphor, the picture can be interpreted as representing the emotion of happiness. Data are taken from Lithuanian comic books and Internet sites where comics have been presented. The data and the analysis provided in this article aim to show that there are pictorial metaphors that manifest conceptual metaphors also expressed verbally, and that the methodological framework constructed for the analysis in the papers by Forceville et al. is applicable to other emotions and culture-specific pictorial manifestations. Keywords: multimodal metaphor, conceptual metaphor, comics, graphic novel, concept of love/passion
Procedia PDF Downloads 67
1641 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes they are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. the Dempster-Shafer and Dezert-Smarandache theories) in representing and managing uncertainty and conflict, to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Then mass functions are calculated, and related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors needed to combine, so that computational efficiency could be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes. Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data
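For Gaussian pre- and post-change models, the KL-divergence distance and a cumulative sum test of the kind described above can be sketched in plain Python (a simplified single-sensor sketch on synthetic data: the thresholds and distribution parameters are illustrative assumptions, and the evidence-combination layer of the paper is omitted):

```python
import random
from math import log

def kl_gauss(m0, s0, m1, s1):
    """Closed-form KL divergence KL(N(m0, s0^2) || N(m1, s1^2))."""
    return log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

def cusum_alarm(samples, m0, m1, sigma=1.0, h=10.0):
    """Page's CUSUM on the Gaussian log-likelihood ratio; returns the index
    where the statistic first exceeds threshold h, or None if it never does."""
    s = 0.0
    for k, x in enumerate(samples):
        llr = ((m1 - m0) * x - (m1**2 - m0**2) / 2.0) / sigma**2
        s = max(0.0, s + llr)  # reset at zero: ignore pre-change drift
        if s > h:
            return k
    return None

random.seed(1)
m0, m1 = 0.0, 2.0  # pre- and post-change means
data = [random.gauss(m0, 1.0) for _ in range(50)] + \
       [random.gauss(m1, 1.0) for _ in range(50)]
alarm = cusum_alarm(data, m0, m1)  # expected to fire shortly after index 50
```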
Procedia PDF Downloads 334
1640 Building Biodiversity Conservation Plans Robust to Human Land Use Uncertainty
Authors: Yingxiao Ye, Christopher Doehring, Angelos Georghiou, Hugh Robinson, Phebe Vayanos
Abstract:
Human development is a threat to biodiversity, and conservation organizations (COs) are purchasing land to protect areas for biodiversity preservation. However, COs have limited budgets and thus face hard prioritization decisions that are confounded by uncertainty in future human land use. This research proposes a data-driven sequential planning model to help COs choose land parcels that minimize the uncertain human impact on biodiversity. The proposed model is robust to uncertain development, and the sequential decision-making process is adaptive, allowing land purchase decisions to adapt to human land use as it unfolds. A cellular automata model is leveraged to simulate land use development based on climate data, land characteristics, and the development threat index from the NASA Socioeconomic Data and Applications Center. This simulation is used to model uncertainty in the problem. This research leverages state-of-the-art techniques from the robust optimization literature to propose a computationally tractable reformulation of the model, which can be solved routinely by off-the-shelf solvers like Gurobi or CPLEX. Numerical results based on real data on the jaguar in Central and South America show that the proposed method reduces conservation loss by 19.46% on average compared to standard approaches such as MARXAN, used in practice for biodiversity conservation. Our method may better guide the decision process in land acquisition and thereby allow conservation organizations to maximize the impact of limited resources. Keywords: data-driven robust optimization, biodiversity conservation, uncertainty simulation, adaptive sequential planning
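A drastically simplified, deterministic stand-in for the parcel-prioritization step can be sketched as a greedy budgeted selection (this deliberately ignores the paper's uncertainty simulation and robust reformulation; the parcel names, costs and impact scores are invented):

```python
# Each parcel: (name, cost in $M, expected biodiversity impact averted).
parcels = [
    ("A", 4.0, 10.0),
    ("B", 3.0, 9.0),
    ("C", 5.0, 6.0),
    ("D", 2.0, 5.0),
]

def greedy_select(parcels, budget):
    """Pick parcels by impact-per-cost ratio until the budget runs out."""
    chosen = []
    remaining = budget
    for name, cost, impact in sorted(parcels, key=lambda p: p[2] / p[1],
                                     reverse=True):
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen

selection = greedy_select(parcels, budget=9.0)
```

The paper's contribution is precisely what this sketch leaves out: the "impact averted" numbers are uncertain, so the robust model optimizes against an uncertainty set of development futures rather than point estimates, and revisits the selection sequentially as land use unfolds.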
Procedia PDF Downloads 210
1639 A Theoretical and Experimental Evaluation of a Solar-Powered Off-Grid Air Conditioning System for Residential Buildings
Authors: Adam Y. Sulaiman, Gerard I.Obasi, Roma Chang, Hussein Sayed Moghaieb, Ming J. Huang, Neil J. Hewitt
Abstract:
Residential air-conditioning units are essential for quality indoor comfort in hot-climate countries. Nevertheless, because of their non-renewable energy sources and their ecologically unfriendly working fluids, these units are a major source of CO2 emissions in these countries. The utilisation of sustainable technologies is now essential to reduce the adverse effects of CO2 emissions by replacing conventional technologies. This paper investigates the feasibility of running an off-grid solar-powered air-conditioning bed unit using three low-GWP refrigerants (R32, R290, and R600a) to supersede conventional refrigerants. A prototype air-conditioning unit was built and designed to distribute cold air to a canopy connected to it. The system is powered by two 400 W photovoltaic panels, with battery storage supplying power to the unit at night-time. The Engineering Equation Solver (EES) software is used to mathematically model the vapor compression cycle (VCC) and predict the unit's energetic and exergetic performance. The TRNSYS software was used to simulate the electricity storage performance of the batteries, whereas IES-VE was used to determine the amount of solar energy required to power the unit. The article provides an analytical design guideline as well as a comprehensible description of the process system. Powering a VCC-based air-conditioning unit from a renewable energy source provides an excellent solution to the real problem of high energy consumption in warm-climate countries. Keywords: air-conditioning, refrigerants, PV panel, energy storage, VCC, exergy
Procedia PDF Downloads 175
1638 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping
Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) is a technology that obtains information from the environment for self-positioning and mapping. It is widely used in computer vision, robotics and other fields. Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. However, in practice, the constant-velocity assumption is often not satisfied, which may lead to a large deviation between the obtained initial pose and the real value and to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration, which can be applied to most SLAM systems. In order to better describe the acceleration of the camera pose, we decoupled the pose transformation matrix and calculated the rotation matrix and the translation vector separately, with the rotation matrix represented by a rotation vector. We assume that, over a short period of time, the changes in the rotational angular velocity and in the translation vector remain constant. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model is analyzed theoretically. Finally, we applied the proposed approach to the ORB-SLAM3 system and evaluated two sets of sequences from the TUM dataset. The results show that the proposed method yields a more accurate initial pose estimate, and the accuracy of the ORB-SLAM3 system is improved by 6.61% and 6.46%, respectively, on the two test sequences. Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM
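With rotation and translation decoupled, the constant-acceleration prediction amounts to a second-order extrapolation per component (a 2-D sketch with a scalar heading angle, not the paper's full rotation-vector/SE(3) formulation):

```python
def predict_next(p0, p1, p2):
    """Constant-acceleration extrapolation of a decoupled pose (theta, x, y)
    from three consecutive frames: the per-frame velocity change between
    (p0 -> p1) and (p1 -> p2) is assumed to hold for the next frame."""
    return tuple(
        c2 + (c2 - c1) + ((c2 - c1) - (c1 - c0))  # c2 + velocity + acceleration
        for c0, c1, c2 in zip(p0, p1, p2)
    )

# Camera accelerating along x with a uniformly turning heading:
# x(t) = t**2, theta(t) = 0.1 * t  ->  the prediction at t = 3 is exact,
# whereas a constant-velocity model would predict x = 7 instead of 9.
pred = predict_next((0.0, 0.0, 0.0), (0.1, 1.0, 0.0), (0.2, 4.0, 0.0))
```

The predicted pose then seeds feature matching for the current frame; the subsequent nonlinear optimization corrects whatever error remains in the extrapolation.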
Procedia PDF Downloads 94
1637 Sea Surface Trend over the Arabian Sea and Its Influence on the South West Monsoon Rainfall Variability over Sri Lanka
Authors: Sherly Shelton, Zhaohui Lin
Abstract:
In recent decades, the inter-annual variability of summer precipitation over India and Sri Lanka has intensified significantly, with an increased frequency of both abnormally dry and wet summers. Prediction of the inter-annual variability of summer precipitation is therefore crucial and urgent for water management and local agricultural scheduling. However, none of the hypotheses put forward so far explains the monsoon variability and the related factors that affect the South West Monsoon (SWM) variability in Sri Lanka. This study focuses on identifying the spatial and temporal variability of SWM rainfall from June to September (JJAS) over Sri Lanka and the associated trend. Monthly rainfall records for 19 stations covering 1980-2013 are used to investigate long-term trends in SWM rainfall over Sri Lanka. The linear trends of atmospheric variables are calculated to understand the drivers behind the changes, based on observed precipitation, sea surface temperature and atmospheric reanalysis products for 34 years (1980-2013). Empirical orthogonal function (EOF) analysis is applied to understand the spatial and temporal behaviour of the seasonal SWM rainfall variability and to investigate whether the trend pattern is the dominant mode that explains SWM rainfall variability. The spatial and station-based precipitation trends over the country are decreasing but statistically insignificant, except at a few stations. The first two EOFs of the seasonal (JJAS) mean rainfall explain 52% and 23% of the total variance, and the first PC shows positive loadings of the SWM rainfall over the whole landmass, with the strongest positive loadings in the western/southwestern part of Sri Lanka.
There is a negative correlation (r ≤ -0.3) between the SWM rainfall index (SMRI) and SST in the Arabian Sea and the central Indian Ocean, which indicates that lower temperatures in the Arabian Sea and central Indian Ocean are associated with greater rainfall over the country. This study also shows consistent warming throughout the Indian Ocean. The results show that precipitable water over the country is decreasing with time, which contributes to the reduction of precipitation over the area by weakening convection. In addition, evaporation is getting weaker over the Arabian Sea, the Bay of Bengal and the Sri Lankan landmass, which reduces the moisture availability required for SWM rainfall over Sri Lanka. At the same time, the weakening of the SST gradient between the Arabian Sea and the Bay of Bengal can weaken the monsoon circulation, ultimately diminishing the SWM over Sri Lanka. The decreasing trends of moisture, moisture transport, zonal wind and moisture divergence, together with weakening evaporation over the Arabian Sea during the past decade, have an aggravating influence on the decreasing trend of monsoon rainfall over Sri Lanka. Keywords: Arabian Sea, moisture flux convergence, South West Monsoon, Sri Lanka, sea surface temperature
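The SST-rainfall relationship quoted above rests on the Pearson correlation coefficient, which can be computed directly (a sketch with synthetic standardized index values, not the study's actual SST and rainfall data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic example: warmer Arabian Sea SST anomalies paired with lower
# rainfall anomalies produce a strong negative correlation.
sst = [0.1, 0.4, -0.2, 0.8, -0.5, 0.3]
rain = [-0.2, -0.8, 0.4, -1.6, 1.0, -0.6]
r = pearson_r(sst, rain)
```

In the study, an |r| of at least 0.3 over 34 seasonal values is what qualifies a region's SST as a candidate driver of SWM rainfall variability.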
Procedia PDF Downloads 132
1636 Innovative Waste Management Practices in Remote Areas
Authors: Dolores Hidalgo, Jesús M. Martín-Marroquín, Francisco Corona
Abstract:
Municipal waste consists of a variety of items that are discarded every day by the population. It is usually collected by municipalities and includes waste generated by households, commercial activities (local shops) and public buildings. The composition of municipal waste varies greatly from place to place, being mostly related to levels and patterns of consumption, rates of urbanization, lifestyles, and local or national waste management practices. Each year, a huge amount of resources is consumed in the EU and, accordingly, a huge amount of waste is produced. The environmental problems derived from the management and processing of these waste streams are well known and include impacts on land, water and air. The situation in remote areas is even worse. Difficult access when climatic conditions are adverse, remoteness from centralized municipal treatment systems and dispersion of the population are all factors that make remote areas a real challenge for municipal waste treatment. Furthermore, the scope of the problem increases significantly because of the total lack of awareness of the existing risks in these areas, together with the poor implementation of an advanced culture of responsible waste minimization and recycling. The aim of this work is to analyze the existing situation in remote areas with reference to the production of municipal waste and to evaluate the efficiency of different management alternatives. Ideas for improving waste management in remote areas include, for example: implementing self-management systems for the organic fraction; establishing door-to-door collection models; promoting small-scale treatment facilities; or adjusting the corresponding rates of waste generation. Keywords: door to door collection, islands, isolated areas, municipal waste, remote areas, rural communities
Procedia PDF Downloads 260
1635 [Keynote Talk]: Some Underlying Factors and Partial Solutions to the Global Water Crisis
Authors: Emery Jr. Coppola
Abstract:
Water resources are being depleted and degraded at an alarming and non-sustainable rate worldwide. In some areas, this is progressing more slowly; in others, irreversible damage has already occurred, rendering regions largely unsuitable for human existence, with destruction of the environment and the economy. Today, 2.5 billion people, or 36 percent of the world population, live in water-stressed areas. The convergence of factors that created this global water crisis includes local, regional, and global failures. In this paper, a survey of some of these factors is presented. They include abuse of political power and regulatory acquiescence, improper planning and design, ignoring good science and models, systemic failures, and division between the powerful and the powerless. Increasing water demand imposed by exploding human populations and growing economies, with shortfalls exacerbated by climate change and continuing water quality degradation, will accelerate this growing water crisis in many areas. Without regional measures to improve water efficiency and protect dwindling and vulnerable water resources, environmental and economic displacement of populations and conflict over water resources will only grow. Perhaps more challenging, a global commitment is necessary to curtail, if not reverse, the devastating effects of climate change. The factors are illustrated by real-world examples, followed by some partial solutions offered by water experts to help mitigate the growing water crisis. These solutions include more water-efficient technologies, education and incentivization for water conservation, wastewater treatment for reuse, and improved data collection and utilization. Keywords: climate change, water conservation, water crisis, water technologies
Procedia PDF Downloads 235