Search results for: storage costs
3271 Evaluation of NoSQL in the Energy Marketplace with GraphQL Optimization
Authors: Michael Howard
Abstract:
The growing popularity of electric vehicles in the United States requires an ever-expanding infrastructure of commercial DC fast charging stations. The U.S. Department of Energy estimates 33,355 publicly available DC fast charging stations as of September 2023. By comparison, 115,370 gasoline stations were operating in the United States in 2017, making them far more ubiquitous than DC fast chargers. Range anxiety is an important impediment to the adoption of electric vehicles and is even more relevant in underserved regions of the country. The peer-to-peer energy marketplace helps fill the demand by allowing private home and small business owners to rent out their 240-Volt, level-2 charging facilities. The existing, publicly accessible outlets are wrapped with a Cloud-connected microcontroller managing security and charging sessions. These microcontrollers act as Edge devices communicating with a Cloud message broker, while both buyer and seller users interact with the framework via a web-based user interface. The database storage used by the marketplace framework is a key component in both the cost of development and the performance that contributes to the user experience. A traditional storage solution is the SQL database. The architecture and query language have been in existence since the 1970s and are well understood and documented. The Structured Query Language supported by the query engine provides fine granularity in user query conditions. However, difficulty in scaling across multiple nodes and the cost of its server-based compute have driven a trend over the last 20 years towards NoSQL, serverless approaches. In this study, we evaluate NoSQL vs. SQL solutions through a comparison of the Google Cloud Firestore and Cloud SQL MySQL offerings. The comparison pits Google's serverless, document-model, non-relational NoSQL service against its server-based, table-model, relational SQL service. The evaluation is based on query latency, flexibility/scalability, and cost criteria.
Through benchmarking and analysis of the architecture, we determine whether Firestore can support the energy marketplace storage needs and if the introduction of a GraphQL middleware layer can overcome its deficiencies.
Keywords: non-relational, relational, MySQL, mitigate, Firestore, SQL, NoSQL, serverless, database, GraphQL
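A query-latency comparison of the kind this abstract describes can be sketched as follows. The backend stand-in, the simulated latencies, and the helper names (`mock_query`, `benchmark`) are illustrative assumptions, not measurements or code from the study:

```python
import random
import statistics
import time

def mock_query(latency_ms):
    """Stand-in for a Firestore or Cloud SQL query call (hypothetical)."""
    time.sleep(latency_ms / 1000.0)

def benchmark(query_fn, runs=20):
    """Time repeated calls and summarize latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        query_fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return {
        "median_ms": statistics.median(samples),
        "p95_ms": sorted(samples)[int(0.95 * (len(samples) - 1))],
    }

if __name__ == "__main__":
    # Simulated backend: the latency numbers are illustrative, not measured.
    print(benchmark(lambda: mock_query(random.uniform(1, 3))))
```

In a real evaluation, `mock_query` would be replaced by actual Firestore and MySQL client calls, and warm-up runs would be discarded before summarizing.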
Procedia PDF Downloads 63
3270 Supply Chain Resource Optimization Model for E-Commerce Pure Players
Authors: Zair Firdaous, Fourka Mohamed, Elfelsoufi Zoubir
Abstract:
The arrival of e-commerce has changed supply chain management at the operational level as well as in the organizational, strategic and even tactical decisions of companies. The optimization of resources is an issue that arises at the strategic, tactical and operational levels. This work considers the allocation of resources in the case of pure players that have launched online sales. The aim is to improve the level of customer satisfaction while maintaining the benefits of the e-retailer and of its cooperators and reducing costs and risks. We first modeled the B2C chain with all the operations it integrates and the possible scenarios, since online retailers offer a wide selection of personalized services. The personalized services that online shopping companies offer to clients can be embodied in many aspects, such as the customization of payment, the distribution methods, and after-sales service choices. Every aspect of customized service has several modes. We then analyzed the optimization problems of supply chain resources in the customized online shopping service mode. Finally, we developed an optimization model and algorithm based on the analysis of the B2C supply chain resources. It is a multi-objective optimization that considers the collaboration of resources in operations, time and costs, but also the risks and the quality of services, as well as the dynamic and uncertain character of demand.
Keywords: supply chain resource, e-commerce, pure-players, optimization
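One simple way to realize the multi-objective trade-off the abstract describes (cost, time, risk, service quality) is weighted-sum scalarization. The sketch below is a minimal illustration with invented option data and weights, not the authors' model:

```python
def weighted_score(option, weights):
    """Scalarize the objectives: lower cost/time/risk is better,
    higher service quality is better."""
    return (weights["cost"] * option["cost"]
            + weights["time"] * option["time"]
            + weights["risk"] * option["risk"]
            - weights["quality"] * option["quality"])

def best_allocation(options, weights):
    """Pick the candidate resource allocation with the lowest score."""
    return min(options, key=lambda o: weighted_score(o, weights))

# Hypothetical service-mode combinations (payment x distribution x after-sales).
options = [
    {"name": "A", "cost": 100, "time": 5, "risk": 0.2, "quality": 8},
    {"name": "B", "cost": 120, "time": 3, "risk": 0.1, "quality": 9},
    {"name": "C", "cost": 90,  "time": 7, "risk": 0.4, "quality": 6},
]
weights = {"cost": 1.0, "time": 10.0, "risk": 100.0, "quality": 5.0}
print(best_allocation(options, weights)["name"])  # → B
```

A full model would also handle the dynamic and uncertain demand the abstract mentions, e.g. by optimizing over demand scenarios rather than fixed option data.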
Procedia PDF Downloads 248
3269 Reverse Logistics Network Optimization for E-Commerce
Authors: Albert W. K. Tan
Abstract:
This research consolidates a comprehensive array of publications from peer-reviewed journals, case studies, and seminar reports focused on reverse logistics and network design. By synthesizing this secondary knowledge, our objective is to identify and articulate key decision factors crucial to reverse logistics network design for e-commerce. Through this exploration, we aim to present a refined mathematical model that offers valuable insights for companies seeking to optimize their reverse logistics operations. The primary goal of this research endeavor is to develop a comprehensive framework tailored to advising organizations and companies on crafting effective networks for their reverse logistics operations, thereby facilitating the achievement of their organizational goals. This involves a thorough examination of various network configurations, weighing their advantages and disadvantages to ensure alignment with specific business objectives. The key objectives of this research include: (i) Identifying pivotal factors pertinent to network design decisions within the realm of reverse logistics across diverse supply chains. (ii) Formulating a structured framework designed to offer informed recommendations for sound network design decisions applicable to relevant industries and scenarios. (iii) Proposing a mathematical model to optimize the reverse logistics network. A conceptual framework for designing a reverse logistics network has been developed through a combination of insights from the literature review and information gathered from company websites. This framework encompasses four key stages in the selection of reverse logistics operations modes: (1) Collection, (2) Sorting and testing, (3) Processing, and (4) Storage. Key factors to consider in reverse logistics network design: I) Centralized vs. decentralized processing: Centralized processing, a long-standing practice in reverse logistics, has recently gained greater attention from manufacturing companies.
In this system, all products within the reverse logistics pipeline are brought to a central facility for sorting, processing, and subsequent shipment to their next destinations. Centralization offers the advantage of efficiently managing the reverse logistics flow, potentially leading to increased revenues from returned items. Moreover, it aids in determining the most appropriate reverse channel for handling returns. On the contrary, a decentralized system is more suitable when products are returned directly from consumers to retailers. In this scenario, individual sales outlets serve as gatekeepers for processing returns. Considerations encompass the product lifecycle, product value and cost, return volume, and the geographic distribution of returns. II) In-house vs. third-party logistics providers: The decision between insourcing and outsourcing in reverse logistics network design is pivotal. In insourcing, a company handles the entire reverse logistics process, including material reuse. In contrast, outsourcing involves third-party providers taking on various aspects of reverse logistics. Companies may choose outsourcing due to resource constraints or lack of expertise, with the extent of outsourcing varying based on factors such as personnel skills and cost considerations. Based on the conceptual framework, the authors have constructed a mathematical model that optimizes reverse logistics network design decisions. The model considers key factors identified in the framework, such as transportation costs, facility capacities, and lead times. The authors have employed mixed-integer linear programming to find the optimal solutions that minimize costs while meeting organizational objectives.
Keywords: reverse logistics, supply chain management, optimization, e-commerce
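The centralized-vs-decentralized and in-house-vs-third-party decisions described above can be illustrated by brute-force enumeration over a tiny cost model. All cost figures and per-unit rates below are hypothetical; a real formulation would be the mixed-integer program the abstract refers to:

```python
from itertools import product

# Illustrative fixed-cost components (all values hypothetical).
PROCESSING = {"centralized": 50_000, "decentralized": 70_000}
TRANSPORT = {"centralized": 30_000, "decentralized": 15_000}
OPERATOR = {"in-house": 40_000, "third-party": 25_000}

def total_cost(processing, operator, return_volume):
    """Fixed facility, transport, and operator costs plus a simple
    per-unit handling charge that differs by processing mode."""
    per_unit = 2.0 if processing == "centralized" else 3.5
    return (PROCESSING[processing] + TRANSPORT[processing]
            + OPERATOR[operator] + per_unit * return_volume)

def cheapest_design(return_volume):
    """Enumerate every (processing mode, operator) pair and keep the cheapest."""
    designs = product(PROCESSING, OPERATOR)
    return min(designs, key=lambda d: total_cost(d[0], d[1], return_volume))

print(cheapest_design(10_000))
```

Note how the answer depends on return volume, echoing the framework's point that return volume and geographic distribution drive the centralization decision.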
Procedia PDF Downloads 41
3268 Optimal Geothermal Borehole Design Guided By Dynamic Modeling
Authors: Hongshan Guo
Abstract:
Ground-source heat pumps provide stable and reliable heating and cooling when designed properly. The confounding effect of borehole depth on a GSHP system, however, is rarely taken into account in optimization: the borehole depth is usually determined before the corresponding system components are selected and before any optimization of the GSHP system. The depth of the borehole is important to any GSHP system because the shallower the borehole, the larger the fluctuation of the near-borehole soil temperature. This can lead to long-term fluctuations in the coefficient of performance (COP) of the GSHP system when the heating/cooling demand is large. Yet the deeper the boreholes are drilled, the higher the drilling cost and the operational expenses for circulation. A controller is developed that reads different building load profiles, optimizes for the smallest cost and temperature fluctuation at the borehole wall, and eventually provides the borehole depth as its output. Due to the nonlinear dynamic nature of the GSHP system, the model predictive control (MPC) formulation was found to be more feasible than the conventional optimal control formulation, because the history of both the trajectory during iteration and the final output could be computed and compared against. Aside from a few scenarios with different weighting factors, the resulting system costs were verified against literature and reports and found to be relatively accurate, while the temperature fluctuation at the borehole wall was also found to be within an acceptable range. It was therefore determined that MPC is adequate for optimizing both the investment and the system performance for various outputs.
Keywords: geothermal borehole, MPC, dynamic modeling, simulation
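A receding-horizon controller of the kind described can be sketched on a toy scalar soil-temperature model. The dynamics, cost terms, and parameter values below are invented for illustration and are far simpler than a real GSHP model:

```python
from itertools import product

def step(temp, load, depth):
    """Toy soil-temperature model: shallower boreholes (small depth)
    fluctuate more for the same extracted load. Entirely illustrative."""
    return temp + load / depth - 0.1 * (temp - 10.0)  # relax toward 10 C soil

def mpc_action(temp, demand, depth, horizon=3, actions=(0.0, 0.5, 1.0)):
    """Receding horizon: brute-force every control sequence over the horizon,
    score it by unmet demand plus squared temperature deviation, and apply
    only the first move of the best sequence."""
    best, best_cost = None, float("inf")
    for seq in product(actions, repeat=horizon):
        t, cost = temp, 0.0
        for u in seq:
            t = step(t, u * demand, depth)
            cost += (demand - u * demand) + (t - 10.0) ** 2
        if cost < best_cost:
            best, best_cost = seq[0], cost
    return best

print(mpc_action(temp=10.0, demand=5.0, depth=100.0))
```

With a deep borehole the temperature penalty is small, so the controller meets the full demand; shrinking `depth` makes the temperature term dominate, mimicking the depth/fluctuation trade-off the abstract describes.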
Procedia PDF Downloads 287
3267 Modeling of Gas Extraction from a Partially Gas-Saturated Porous Gas Hydrate Reservoir with Respect to Thermal Interactions with Surrounding Rocks
Authors: Angelina Chiglintseva, Vladislav Shagapov
Abstract:
Geological data show that substantial gas reserves are concentrated in hydrates occurring on land and on the ocean floor. Therefore, the development of these energy sources and the storage of large reserves of gas hydrates is an acute global problem. An advanced technology for utilizing gas is to store it in a gas-hydrate state. Under natural conditions, storage facilities can be established, e.g., in underground reservoirs, where considerably larger volumes of gas can be conserved than in reservoirs of pure gas. An analysis of the available experimental data on the kinetics and mechanism of the gas-hydrate formation process shows the self-conservation effect, which allows gas to be stored at sub-zero temperatures and low pressures of up to several atmospheres. A theoretical model has been constructed for the gas-hydrate reservoir, which represents a unique natural chemical reactor, and the principal possibility of the full extraction of gas from a hydrate due to the thermal reserves of the reservoirs themselves and the surrounding rocks has been analyzed. The influence exerted on the evolution of a gas hydrate reservoir by the reservoir thickness and by the parameters that determine its initial state (temperature, pressure, hydrate saturation) has been studied. It has been established that the shortest exploitation time required for total hydrate decomposition in reservoirs with a thickness of a few meters is achieved in the cyclic regime, in which gas extraction alternates with subsequent conservation of the gas hydrate deposit. The study was supported by a grant from the Russian Science Foundation (project No. 15-11-20022).
Keywords: conservation, equilibrium state, gas hydrate reservoir, rocks
Procedia PDF Downloads 301
3266 Effect of Modified Atmosphere Packaging and Storage Temperatures on Quality of Shelled Raw Walnuts
Authors: M. Javanmard
Abstract:
This study analyzed the effects of modified atmosphere packaging (MAP) and preservation conditions on the quality of packaged fresh walnut kernels. A central composite design was used to evaluate the effect of oxygen (0-10%), carbon dioxide (0-10%), and temperature (4-26 °C) on the qualitative characteristics of walnut kernels. Response surface methodology was used to find the optimal conditions for the interactive effects of the factors and to estimate the best process conditions with the least amount of testing. The measured qualitative parameters were: peroxide index, color, weight loss, mould and yeast counts, and sensory evaluation. The results showed that the fitted model for peroxide index, color, weight loss, and sensory evaluation is significant (p < 0.001): an increase in temperature causes the peroxide value, color variation, and weight loss to increase and reduces the overall acceptability of the walnut kernels. An increase in oxygen percentage caused the color variation and peroxide value to increase and resulted in lower overall acceptability of the walnuts. An increase in CO2 percentage caused the peroxide value to decrease but did not significantly affect the other indices (p ≥ 0.05). Mould and yeast were not found in any samples. The optimal packaging conditions to achieve maximum walnut quality are: 1.46% oxygen, 10% carbon dioxide, and a temperature of 4 °C.
Keywords: shelled walnut, MAP, quality, storage temperature
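A central composite design of the kind used here (three factors: O2, CO2, temperature) consists of factorial corner points, axial points, and center runs. The sketch below generates such a design in coded units, assuming the rotatable axial distance alpha = 1.682 for three factors; the actual run plan of the study may differ:

```python
from itertools import product

def central_composite(factors, alpha=1.682, center_runs=1):
    """Factorial (+/-1) corner points, axial (+/-alpha) points, and center
    runs for a rotatable central composite design, in coded units."""
    k = len(factors)
    corners = list(product((-1.0, 1.0), repeat=k))
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(tuple(pt))
    centers = [tuple([0.0] * k)] * center_runs
    return corners + axial + centers

design = central_composite(["O2", "CO2", "temperature"])
print(len(design))  # 2**3 corners + 2*3 axial + 1 center = 15 runs
```

Each coded point is then mapped back to real units (e.g. -1 → 0% O2, +1 → 10% O2) before running the experiment.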
Procedia PDF Downloads 389
3265 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model
Authors: Nicolae Bold, Daniel Nijloveanu
Abstract:
Cropping systems are a method used by farmers. It is an environmentally friendly method, protecting natural resources (soil, water, air, nutritive substances) while increasing production, taking into account crop particularities. Combining this powerful method with genetic algorithms makes it possible to generate sequences of crops that form a rotation. This type of algorithm has proved efficient at solving optimization problems, and its polynomial complexity allows it to be applied to more difficult and varied problems. In our case, the optimization consists in finding the most profitable rotation of crops. One of the expected results is to optimize the usage of resources in order to minimize costs and maximize profit. To achieve these goals, a genetic algorithm was designed. This algorithm finds several optimized cropping-system possibilities that have the highest profit and thus minimize the costs. The algorithm uses genetic-based methods (mutation, crossover) and structures (genes, chromosomes). A cropping-system possibility is treated as a chromosome, and a crop within the rotation is a gene within that chromosome. Results on the efficiency of this method will be presented in a dedicated section. The implementation of this method would benefit the activity of farmers by giving them hints and helping them use resources efficiently.
Keywords: chromosomes, cropping, genetic algorithm, genes
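The chromosome/gene encoding described above can be sketched as a minimal genetic algorithm. The crop list, profit table, and penalty for growing the same crop in consecutive years are invented placeholders, not the authors' data:

```python
import random

CROPS = ["wheat", "maize", "soy", "clover"]
PROFIT = {"wheat": 5, "maize": 7, "soy": 6, "clover": 3}  # hypothetical units

def fitness(rotation):
    """Total profit, penalized when the same crop is grown in
    consecutive years (a simple stand-in agronomic constraint)."""
    score = sum(PROFIT[c] for c in rotation)
    penalty = sum(8 for a, b in zip(rotation, rotation[1:]) if a == b)
    return score - penalty

def crossover(a, b):
    """Single-point crossover between two chromosomes (rotations)."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(rotation, rate=0.1):
    """Replace each gene (crop) with a random crop at the given rate."""
    return [random.choice(CROPS) if random.random() < rate else c
            for c in rotation]

def evolve(years=6, pop_size=40, generations=60):
    pop = [[random.choice(CROPS) for _ in range(years)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

A realistic model would add rotation-specific constraints (nitrogen fixation after legumes, pest cycles, resource budgets) to the fitness function.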
Procedia PDF Downloads 428
3264 Assessment of Routine Health Information System (RHIS) Quality Assurance Practices in Tarkwa Sub-Municipal Health Directorate, Ghana
Authors: Richard Okyere Boadu, Judith Obiri-Yeboah, Kwame Adu Okyere Boadu, Nathan Kumasenu Mensah, Grace Amoh-Agyei
Abstract:
Routine health information system (RHIS) quality assurance has become an important issue, not only because of its significance in promoting a high standard of patient care but also because of its impact on government budgets for the maintenance of health services. A routine health information system comprises healthcare data collection, compilation, storage, analysis, report generation, and dissemination on a routine basis in various healthcare settings. The data from RHIS give a representation of health status, health services, and health resources. The sources of RHIS data are normally individual health records, records of services delivered, and records of health resources. Using reliable information from routine health information systems is fundamental in the healthcare delivery system. Quality assurance practices are measures that are put in place to ensure that the health data collected meet required quality standards. Routine health information system quality assurance practices ensure that the data generated from the system are fit for use. This study considered quality assurance practices in the RHIS processes. Methods: A cross-sectional study was conducted in eight health facilities in the Tarkwa Sub-Municipal Health Service in the western region of Ghana. The study involved routine quality assurance practices among the 90 health staff and managers selected from facilities in the Tarkwa Sub-Municipality who collected or used data routinely, from 24th December 2019 to 20th January 2020. Results: Generally, the Tarkwa Sub-Municipal health service appears to practice quality assurance during data collection, compilation, storage, analysis and dissemination. The results show some achievement in quality control performance in report dissemination (77.6%), data analysis (68.0%), data compilation (67.4%), report compilation (66.3%), data storage (66.3%) and data collection (61.1%).
Conclusions: Even though the Tarkwa Sub-Municipal Health Directorate engages in some control measures to ensure data quality, the process needs to be strengthened to achieve the targeted performance level (90.0%). Quality assurance practice fell significantly short of the expected performance, especially during data collection.
Keywords: quality assurance practices, assessment of routine health information system quality, routine health information system, data quality
Procedia PDF Downloads 81
3263 Optimisation of B2C Supply Chain Resource Allocation
Authors: Firdaous Zair, Zoubir Elfelsoufi, Mohammed Fourka
Abstract:
The allocation of resources is an issue that arises at the strategic, tactical and operational levels. This work considers the allocation of resources in the case of pure players, manufacturers and click-and-mortar retailers that have launched online sales. The aim is to improve the level of customer satisfaction while maintaining the benefits of the e-retailer and of its cooperators and reducing costs and risks. Our contribution is a decision support system and tool for improving the allocation of resources in B2C e-commerce logistics chains. We first modeled the B2C chain with all the operations it integrates and the possible scenarios, since online retailers offer a wide selection of personalized services. The personalized services that online shopping companies offer to clients can be embodied in many aspects, such as the customization of payment, the distribution methods, and after-sales service choices. In addition, every aspect of customized service has several modes. We then analyzed the optimization problems of supply chain resource allocation in the customized online shopping service mode, which differ from supply chain resource allocation under traditional manufacturing or service circumstances. Finally, we developed an optimization model and algorithm based on the analysis of the allocation of the B2C supply chain resources. It is a multi-objective optimization that considers the collaboration of resources in operations, time and costs, but also the risks and the quality of services, as well as the dynamic and uncertain character of demand.
Keywords: e-commerce, supply chain, B2C, optimisation, resource allocation
Procedia PDF Downloads 274
3262 Going beyond the Traditional Offering in Modern Financial Services
Authors: Cam-Duc Au, Philippe Krahnhof, Lars Klingenberger
Abstract:
German banks are experiencing harsh times due to rising costs and declining profits. On the one hand, acquisition costs for new customers are increasing because of the rise of innovative FinTechs, which entered the market with one specific goal: disrupting the whole financial services industry by occupying parts of the value chain. On the other hand, the COVID-19 pandemic, as well as an overall low level of interest rates, continues to drain the traditional sources of bank income. Consequently, traditional banks must rethink their strategies, or indeed their identity, by going beyond their traditional offering of products and services. In doing so, banks may create new sources of income to stabilize their economic situation and replenish profits. This paper aims to research the opportunities of establishing an ecosystem model. It thereby contributes to the current literature debate and provides reference points for traditional banks to start from. First, a systematic literature review introduces a selection of research works the authors regard as significant. In the following step, quantitative data from an online survey of bank clients are analysed by means of descriptive statistics to show the perspective of Germans with regard to an ecosystem offering. The final research findings indicate that the surveyed retail banking clients express interest in the new offering, whereas non-financial products and services are of lower interest than their financial counterparts.
Keywords: banking, ecosystem, disruptive innovation, digital offering, open-banking-strategy, financial services industry
Procedia PDF Downloads 133
3261 Non-Invasive Viscosity Determination of Liquid Organic Hydrogen Carriers by Alteration of Temperature and Flow Velocity Using Cavity Based Permittivity Measurement
Authors: I. Wiemann, N. Weiß, E. Schlücker, M. Wensing, A. Kölpin
Abstract:
Chemical storage of hydrogen in liquid organic hydrogen carriers (LOHC) is a very promising alternative to compression or cryogenics. These carriers have a high energy density and at the same time allow efficient and safe storage of hydrogen under ambient conditions and without leakage losses. Another benefit of LOHC is that it can be transported using the infrastructure already available for fossil fuels. Efficient use of LOHC relies on precise process control, which requires a number of sensors to measure all relevant process parameters, for example, the level of hydrogen loading of the carrier. The degree of loading determines the energy content of the storage carrier and simultaneously reflects the modification in the chemical structure of the carrier molecules. This variation can be detected in different physical properties such as viscosity, permittivity or density; each degree of loading corresponds to a different viscosity value. Conventional approaches currently use invasive viscosity measurements or near-line measurements to obtain quantitative information. Avoiding invasive measurements has several significant advantages, and efforts are currently being made to provide a precise, non-invasive measurement method with equal or higher precision. This study investigates a method for determining the viscosity of LOHC. Since the viscosity can in turn be derived from the degree of loading, permittivity is a target parameter, as it is suitable for determining the degree of hydrogenation. This research analyses the influence of common physical properties on permittivity. The permittivity measurement system is based on a cavity resonator, an electromagnetic resonant structure whose resonant frequency depends on its dimensions as well as on the permittivity of the medium inside. For known resonator dimensions, the resonant frequency directly characterizes the permittivity.
In order to determine the dependency of the permittivity on temperature and flow velocity, an experimental setup with a heating device and a flow test bench was designed. By varying the temperature in the range of 293.15 K to 393.15 K and the flow velocity up to 140 mm/s, corresponding changes in the resonant frequency were measured in the hundredths-of-a-GHz range.
Keywords: liquid organic hydrogen carriers, measurement, permittivity, viscosity, temperature, flow process
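For a cavity fully filled with a non-magnetic, low-loss medium, the resonant frequency scales inversely with the square root of the relative permittivity, which is the relation such a measurement relies on. A minimal sketch, with hypothetical frequency readings (the study's actual cavity geometry and calibration are more involved):

```python
def relative_permittivity(f_empty_ghz, f_filled_ghz):
    """For a fully filled, non-magnetic cavity, f ~ 1/sqrt(eps_r),
    so eps_r = (f_empty / f_filled)**2."""
    return (f_empty_ghz / f_filled_ghz) ** 2

# Hypothetical readings: empty cavity at 2.450 GHz, LOHC-filled at 1.633 GHz.
print(round(relative_permittivity(2.450, 1.633), 2))  # → 2.25
```

In practice the cavity is only partially filled and a perturbation or calibration model maps the measured frequency shift to permittivity, which is then correlated with the hydrogenation degree and hence viscosity.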
Procedia PDF Downloads 100
3260 A Novel PfkB Gene Cloning and Characterization for Expression in Potato Plants
Authors: Arfan Ali, Idrees Ahmad Nasir
Abstract:
Potato (Solanum tuberosum) is an important cash crop and popular vegetable in Pakistan and throughout the world. Cold storage of potatoes accelerates the conversion of starch into reducing sugars (glucose and fructose). This process causes a dry texture and bitter taste in the potatoes that are not acceptable to end consumers. In the current study, the phosphofructokinase B gene was cloned into the pET-30 vector for protein expression and the pCambia-1301 vector for plant expression. Amplification of a 930 bp product from an E. coli strain confirmed the successful isolation of the phosphofructokinase B gene. Restriction digestion using NcoI and BglII, along with the amplification of the 930 bp product using gene-specific primers, confirmed the successful cloning of the PfkB gene in both vectors. The protein was expressed as a His-PfkB fusion protein. Western blot analysis confirmed the presence of the 35 kDa PfkB protein when hybridized with anti-His antibodies. The construct Fani-01 was evaluated transiently using a histochemical GUS assay. The appearance of blue color in the agroinfiltrated area of potato leaves confirmed the successful expression of construct Fani-01. Further, the area displaying GUS expression was evaluated for PfkB expression using ELISA. The transient evaluation thus demonstrated successful PfkB gene expression and highlighted its potential for stable expression in potato to reduce sweetening caused by long-term storage.
Keywords: potato, Solanum tuberosum, transformation, PfkB, anti-sweetening
Procedia PDF Downloads 473
3259 Artificial Intelligence-Based Thermal Management of Battery System for Electric Vehicles
Authors: Raghunandan Gurumurthy, Aricson Pereira, Sandeep Patil
Abstract:
The escalating adoption of electric vehicles (EVs) across the globe has underscored the critical importance of advancing battery system technologies. This has catalyzed a shift towards the design and development of battery systems that not only exhibit higher energy efficiency but also boast enhanced thermal performance and sophisticated multi-material enclosures. A significant leap in this domain has been the incorporation of simulation-based design optimization for battery packs and Battery Management Systems (BMS), a move further enriched by integrating artificial intelligence/machine learning (AI/ML) approaches. These strategies are pivotal in refining the design, manufacturing, and operational processes for electric vehicles and energy storage systems. By leveraging AI/ML, stakeholders can now predict battery performance metrics, such as State of Health, State of Charge, and State of Power, with unprecedented accuracy. Furthermore, as Li-ion batteries (LIBs) become more prevalent in urban settings, the imperative for bolstering thermal and fire resilience has intensified. This has propelled Battery Thermal Management Systems (BTMs) to the forefront of energy storage research, highlighting the role of machine learning and AI not just as tools for enhanced safety management through accurate temperature forecasts and diagnostics but also as indispensable allies in the early detection and warning of potential battery fires.
Keywords: electric vehicles, battery thermal management, industrial engineering, machine learning, artificial intelligence, manufacturing
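Data-driven State-of-Health prediction of the kind mentioned can be illustrated with a plain least-squares fit of capacity fade versus cycle count. The data points below are invented, and production systems would use far richer features (temperature, current profiles) and models than a straight line:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical capacity-fade data: (cycle count, State of Health in %).
cycles = [0, 100, 200, 300, 400]
soh = [100.0, 98.1, 96.0, 94.2, 92.1]
a, b = fit_line(cycles, soh)
print(f"predicted SoH at 500 cycles: {a * 500 + b:.1f}%")
```

The same extrapolation idea, with a model that captures nonlinear fade, underlies SoH-based warranty and thermal-safety decisions.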
Procedia PDF Downloads 97
3258 Appraisal of Transaction Cost in South African Construction Projects
Authors: Kenneth O. Otasowie, Matthew Ikuabe, Clinton Aigbavboa, Ayodeji Oke
Abstract:
Construction project costs are not made up of production costs alone. They comprise many other elements, such as the preparation of bidding documents, cost estimation, drafting contractual agreements, and monitoring that contractual obligations are met. Several studies have stressed the need for transaction costs (TC) to be defined in a way that covers all phases of a project and not only the pre-contract phase. Hence, this study aims to appraise transaction costs in South African (SA) construction projects by assessing what constitutes transaction costs, their influencing factors, and possible optimisation measures. A survey design was adopted. A total of eighty (80) questionnaires were administered to quantity surveyors, procurement managers and project managers in Gauteng Province, SA, and seventy-two (72) were returned and found suitable for analysis. The collected data were analysed using percentages, mean item scores, standard deviations, and one-sample t-tests. The findings show that external technical interaction, uncertainty, and human factors are the most significant constituents of TC in SA, while technical competency, experience with similar project types, and project characteristics are the leading influencing factors. Furthermore, understanding project characteristics, clear communication, and technically competent project teams are among the most significant measures for optimising TC in SA construction projects. This study therefore recommends that a competent project team and clear communication are fundamental to the proper management of TC in SA construction projects.
Keywords: construction projects, project cost, South Africa, transaction cost
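The mean-item-score and one-sample t-test analysis named in the methods can be sketched as follows. The ratings are hypothetical 5-point Likert responses for one TC factor, tested against the neutral midpoint of 3 (the study's actual data and test values are not reproduced here):

```python
import math

def mean_item_score(ratings):
    """Mean of the Likert responses for one survey item."""
    return sum(ratings) / len(ratings)

def one_sample_t(ratings, mu0=3.0):
    """t statistic for H0: mean == mu0 (the neutral Likert midpoint)."""
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)  # sample variance
    return (mean - mu0) / math.sqrt(var / n)

# Hypothetical responses for a factor such as "technical competency".
ratings = [4, 5, 4, 3, 5, 4, 4, 5]
print(round(mean_item_score(ratings), 2), round(one_sample_t(ratings), 2))
```

A t statistic well above the critical value for n-1 degrees of freedom indicates the factor is rated significantly above neutral, which is how "most significant constituents" are typically ranked in such surveys.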
Procedia PDF Downloads 99
3257 Controlling the Oxygen Vacancies in the Structure of Anode Materials for Improved Electrochemical Performance in Lithium-Ion Batteries
Authors: Moustafa M. S. Sanad
Abstract:
The worsening energy supply crisis and the exacerbation of climate change by environmental pollution have become the greatest threats to human life. One way to confront these problems is to rely on renewable energy and its storage systems. Nowadays, huge attention has been directed to the development of lithium-ion batteries (LIBs) as efficient tools for storing the clean energy produced by green sources like solar and wind. Accordingly, the demand for powerful electrode materials with excellent electrochemical characteristics has progressively increased to meet the fast and continuous growth of the energy storage market. The electronic and electrical properties of conversion anode materials for rechargeable lithium-ion batteries can be enhanced by introducing lattice defects and oxygen vacancies into the crystal structure. In this regard, the intended presentation will demonstrate new insights and effective ways to enhance the electrical conductivity and improve the electrochemical performance of different anode materials such as MgFe₂O₄, CdFe₂O₄, Fe₃O₄, LiNbO₃ and Nb₂O₅. The changes in the physicochemical and morphological properties have been deeply investigated via structural and spectroscopic analyses (e.g., XRD, FESEM, HRTEM, and XPS). Moreover, the enhancement in the electrochemical properties of these anode materials will be discussed through Galvanostatic Cycling (GC), Cyclic Voltammetry (CV) and Electrochemical Impedance Spectroscopy (EIS) techniques.
Keywords: structure modification, cationic substitution, non-stoichiometric synthesis, plasma treatment, lithium-ion batteries
Procedia PDF Downloads 62
3256 Valuing Public Urban Street Trees and Their Environmental Spillover Benefits
Authors: Sofia F. Franco, Jacob Macdonald
Abstract:
This paper estimates the value of urban public street trees and their complementary and substitution value with other, broader urban amenities and dis-amenities via the residential housing market. We estimate a lower-bound value of a city's tree amenities under instrumental variable and geographic regression discontinuity approaches, with an application to Lisbon, Portugal. For completeness, we also explore how urban trees, and in particular public street trees, impact house prices across the city. Finally, we jointly analyze the planting and maintenance costs and benefits of urban street trees. The estimated value of all public trees in Lisbon is €8.84M. When considering specifically trees planted alongside roads and in public squares, the value is €6.06M, or €126.64 per tree. This value is conditional on the distribution of trees in terms of their broader density, with higher effects coming from the overall greening of larger areas of the city compared to the greening of the direct neighborhood. Detrimental impacts are found when the number of trees is higher near street canyons, where they may exacerbate the stagnation of air pollution from traffic. Urban street trees also have important spillover benefits due to pollution mitigation, worth around €6.21 million, or an additional €129.93 per tree. There are added benefits of €26.32 and €28.58 per tree in terms of flooding and heat mitigation, respectively. With significant resources and policies aimed at urban greening, the value obtained is shown to be important for discussions of the benefits of urban trees as compared to the mitigation and abatement costs undertaken by a municipality.
Keywords: urban public goods, urban street trees, spatial boundary discontinuities, geospatial and remote sensing methods
Procedia PDF Downloads 178
3255 An Efficient Traceability Mechanism in the Audited Cloud Data Storage
Authors: Ramya P, Lino Abraham Varghese, S. Bose
Abstract:
Cloud storage services allow data to be stored in the cloud and shared across multiple users. However, unexpected hardware/software failures and human errors can easily cause data stored in the cloud to be lost or corrupted, compromising its integrity. Some mechanisms have been designed to allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire data set from the cloud server. But public auditing of the integrity of shared data with the existing mechanisms will unavoidably reveal confidential information, such as the identity of the signer, to public verifiers. Here, a privacy-preserving mechanism is proposed to support public auditing of shared data stored in the cloud. It uses group signatures to compute the verification metadata needed to audit the correctness of shared data. The identity of the signer of each block of shared data is kept confidential from public verifiers, who can easily verify shared data integrity without retrieving the entire file. On demand, however, the signer of each block can be revealed to the owner alone. In a static group, the group private key is generated once by the owner, whereas in a dynamic group the group private key changes whenever users are revoked from the group. When users leave the group, the blocks they have already signed are re-signed by the cloud service provider instead of the owner, which is handled efficiently by a proxy re-signature scheme.
Keywords: data integrity, dynamic group, group signature, public auditing
Procedia PDF Downloads 394
3254 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA
Authors: Marek Dosbaba
Abstract:
Within the mining sector, SEM-based Automated Mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often with a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using an SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or segment, respectively. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy dispersive spectroscopy. Re-evaluation of the existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because servers offer larger data storage capacity than local drives and allow multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models.
The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties.
Keywords: Tescan, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data
Procedia PDF Downloads 111
3253 Activities of Processors in Domestication/Conservation and Processing of Oil Bean (Pentaclethra macrophylla) in Enugu State, South East Nigeria
Authors: Iwuchukwu J. C., Mbah C.
Abstract:
There seems to be a dearth of information on how oil bean is exploited, processed and conserved locally. This gap stifles initiatives to evaluate the suitability of the methods used and to invent new and better ones. The study therefore assesses the activities of processors in the domestication/conservation and processing of oil bean (Pentaclethra macrophylla) in Enugu State, South East Nigeria. Three agricultural zones, three blocks, nine circles and seventy-two purposively selected respondents made up the sample for the study. Data were presented in percentages, charts and mean scores. The results show that processors of oil bean in the area were middle-aged and married, with relatively large household sizes and long years of experience in processing. They sourced the oil bean they processed from people’s farmland and obtained information on its processing from friends and relatives. Activities involved in the processing of oil bean were boiling, dehulling, washing, sieving, slicing and wrapping, although the sequence of these activities varied among processors. Little or nothing was done by the processors towards the conservation of the crop, while poor storage and processing facilities and lack of knowledge of modern preservation techniques were the major constraints to oil bean processing in the area. The study concluded that governments and processors, through cooperative groups, should provide processing and storage facilities for oil bean, while research institutes should conserve and develop improved varieties of the crop to arouse the interest of farmers and processors, which will invariably increase productivity.
Keywords: conservation, domestication, oil bean, processing
Procedia PDF Downloads 308
3252 Supply Chain Network Design for Perishable Products in Developing Countries
Authors: Abhishek Jain, Kavish Kejriwal, V. Balaji Rao, Abhigna Chavda
Abstract:
Increasing environmental and social concerns are forcing companies to take a fresh view of the impact of supply chain operations on environment and society when designing a supply chain. A challenging task in today’s food industry is the distribution of high-quality food items throughout the food supply chain. Improper storage and unwanted transportation are the major hurdles in the food supply chain and can be tackled by making dynamic storage facility location decisions within the distribution network. Since the food supply chain in India is one of the biggest in the world, companies should also consider the environmental impact caused by the supply chain. This project proposes a multi-objective optimization model, integrating sustainability into decision-making, for distribution in a food supply chain network (SCN). A Multi-Objective Mixed-Integer Linear Programming (MOMILP) model trading off overall cost against the environmental impact caused by the SCN is formulated for the problem. The goal of the MOMILP is to determine the Pareto solutions for overall cost and environmental impact caused by the supply chain. This is solved using GAMS with CPLEX as the third-party solver. The outcomes of the project are the Pareto solutions for overall cost and environmental impact, the facilities to be operated, and the amounts to be transferred to each warehouse during the time horizon.
Keywords: multi-objective mixed linear programming, food supply chain network, GAMS, multi-product, multi-period, environment
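A bi-objective MILP of this kind is commonly solved by weighted-sum scalarization, sweeping a weight to trace the Pareto front. A minimal sketch with made-up data (two candidate facilities, two markets; the numbers and the SciPy solver are our assumptions, the paper uses GAMS/CPLEX):

```python
from scipy.optimize import linprog

# Weighted-sum scalarization of a bi-objective facility-location MILP:
#   min  w * cost + (1 - w) * emissions
# Variables: y1, y2 (open facility, binary), x11, x12, x21, x22 (flows >= 0).
# All data below are illustrative, not from the paper.
fixed = [100.0, 120.0]                 # fixed opening cost per facility
cap = [80.0, 80.0]                     # facility capacities
demand = [50.0, 40.0]                  # market demands
cost = [[2.0, 4.0], [3.0, 1.0]]        # transport cost per unit (facility x market)
emis = [[5.0, 2.0], [1.0, 4.0]]        # emissions per unit shipped

w = 0.5
c = [w * fixed[0], w * fixed[1]] + [
    w * cost[i][j] + (1 - w) * emis[i][j] for i in range(2) for j in range(2)
]
# Demand must be met exactly: sum_i x_ij = d_j
A_eq = [[0, 0, 1, 0, 1, 0], [0, 0, 0, 1, 0, 1]]
# Flows only through open facilities: sum_j x_ij - cap_i * y_i <= 0
A_ub = [[-cap[0], 0, 1, 1, 0, 0], [0, -cap[1], 0, 0, 1, 1]]
res = linprog(c, A_ub=A_ub, b_ub=[0, 0], A_eq=A_eq, b_eq=demand,
              bounds=[(0, 1)] * 2 + [(0, None)] * 4,
              integrality=[1, 1, 0, 0, 0, 0])  # y binary, x continuous
print(res.status, res.fun)  # sweeping w over (0, 1) traces the Pareto front
```

Re-solving for a grid of `w` values yields the set of non-dominated (cost, emissions) pairs, which is the role GAMS/CPLEX plays in the project.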
Procedia PDF Downloads 321
3251 Fintech Credit and Bank Efficiency Two-way Relationship: A Comparison Study Across Country Groupings
Authors: Tan Swee Liang
Abstract:
This paper studies the two-way relationship between fintech credit and banking efficiency using Generalized Method of Moments (GMM) estimation in structural equation modeling (SEM). Banking system efficiency, defined as the ability to produce the existing level of outputs with minimal inputs, is measured using input-oriented data envelopment analysis (DEA), where the whole banking system of an economy is treated as a single DMU. Banks are considered intermediaries between depositors and borrowers, utilizing inputs (deposits and overhead costs) to provide outputs (credit to the private sector and earnings). The interrelationship between fintech credit and bank efficiency is analyzed to determine the impact in different country groupings (ASEAN, Asia and OECD), in particular the banking system's response to fintech credit platforms. Our preliminary results show that banks do respond to the greater pressure from fintech platforms by enhancing their efficiency, but differently across the groups. The author's earlier research, which identified ASEAN-5's high bank overhead costs (as a share of total assets) as a determinant of economic growth, suggests that expenses may not have been channeled efficiently to income-generating activities. One practical implication of the findings is that policymakers should enable alternative financing, such as fintech credit, as a warning or encouragement for banks to improve their efficiency.
Keywords: fintech lending, banking efficiency, data envelopment analysis, structural equation modeling
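The input-oriented DEA score described above solves a small linear program per decision-making unit. A minimal sketch under the standard CCR (constant-returns-to-scale) envelopment form, with made-up single-input/single-output bank data (the paper's actual inputs and outputs are deposits, overheads, credit and earnings):

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
    Solves: min theta  s.t.  X @ lam <= theta * X[:, o],
                             Y @ lam >= Y[:, o],  lam >= 0.
    Variables: [theta, lam_1, ..., lam_n]."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]            # minimize theta
    A_in = np.c_[-X[:, [o]], X]            # X lam - theta * x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]    # -Y lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# Illustrative data (not from the paper): 3 banking systems,
# one input (deposits + overhead), one output (credit extended).
X = [[2.0, 4.0, 3.0]]
Y = [[2.0, 3.0, 3.0]]
scores = [dea_input_efficiency(X, Y, o) for o in range(3)]
print(scores)  # DMU 1 is dominated (score 0.75); DMUs 0 and 2 lie on the frontier
```

A score of 0.75 says that DMU 1 could, in principle, produce its current output with 75% of its current input; the paper treats each economy's whole banking system as one such DMU.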
Procedia PDF Downloads 94
3250 Video Compression Using Contourlet Transform
Authors: Delara Kazempour, Mashallah Abasi Dezfuli, Reza Javidan
Abstract:
Video compression is used for channels with limited bandwidth and for storage devices with limited capacity. One of the most popular approaches in video compression is the use of transforms. The discrete cosine transform is a video compression method that suffers from problems such as blocking and noise artifacts and high distortion, which adversely affect the compression ratio. The wavelet transform is another approach that balances compression and quality better than cosine transforms, but its ability to capture curve curvature is quite limited. Because of the importance of compression and the problems of the cosine and wavelet transforms, the contourlet transform has become popular in video compression. In the newly proposed method, we use the contourlet transform for video image compression. The contourlet transform can preserve image details better than the previous transforms because it is multi-scale and oriented, and it can capture discontinuities such as edges; this approach loses less data than previous approaches. The contourlet transform captures the discrete spatial structure of an image and is well suited to representing two-dimensional smooth images; it produces compressed images with a high compression ratio along with texture and edge preservation. Finally, the results show that for the majority of images, the mean square error and peak signal-to-noise ratio of the new contourlet-based method are improved compared to the wavelet transform, although for most images the cosine transform still yields better mean square error and peak signal-to-noise ratio than the contourlet-based method.
Keywords: video compression, contourlet transform, discrete cosine transform, wavelet transform
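All three transforms follow the same transform-coding recipe: transform, discard small coefficients, inverse-transform, and measure MSE/PSNR. A minimal sketch using the DCT baseline the abstract compares against (a contourlet implementation is beyond a short sketch; the block data and 4:1 keep ratio are our assumptions):

```python
import numpy as np
from scipy.fft import dctn, idctn

# Transform coding in a nutshell: transform an image block, keep only the
# largest coefficients, inverse-transform, and measure MSE / PSNR.
rng = np.random.default_rng(0)
block = rng.uniform(0, 255, size=(8, 8))     # stand-in for an 8x8 image block

coeffs = dctn(block, norm="ortho")
keep = 16                                    # keep 16 of 64 coefficients (4:1)
thresh = np.sort(np.abs(coeffs).ravel())[-keep]
compressed = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
recon = idctn(compressed, norm="ortho")

mse = float(np.mean((block - recon) ** 2))
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(f"MSE = {mse:.2f}, PSNR = {psnr:.2f} dB")
```

The contourlet transform improves on this scheme by concentrating the energy of curved edges into fewer coefficients, so more can be discarded at the same PSNR.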
Procedia PDF Downloads 446
3249 A Review on Application of Phase Change Materials in Textiles Finishing
Authors: Mazyar Ahrari, Ramin Khajavi, Mehdi Kamali Dolatabadi, Tayebeh Toliyat, Abosaeed Rashidi
Abstract:
Fabric, as the first and most common layer in permanent contact with human skin, is a very good interface for providing coverage as well as heat and cold insulation. Phase change materials (PCMs) are organic and inorganic compounds that can absorb and release noticeable amounts of latent heat during transitions between solid and liquid phases over a low temperature range. Because PCMs undergo phase changes (liquid-solid and solid-liquid transitions) while absorbing and releasing thermal energy, long-term use requires encapsulating them in polymeric shells, so-called microcapsules. Microencapsulation and nanoencapsulation methods have been developed to reduce the reactivity of a PCM with the outside environment, promote ease of handling, and decrease diffusion and evaporation rates. Methods of incorporating PCMs into textiles, such as electrospinning, and of determining their thermal properties are summarized. Paraffin waxes attract considerable attention due to their high thermal storage density, repeatable phase change, thermal stability, small volume change during phase transition, chemical stability, non-toxicity, non-flammability, non-corrosiveness and low cost, and they seem to play a key role in confronting climate change and global warming. In this article, we review the research concentrating on the characteristics of PCMs and on new materials and methods of microencapsulation.
Keywords: thermoregulation, microencapsulation, phase change materials, thermal energy storage, nanoencapsulation
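The storage-density argument can be made concrete with the standard sensible-plus-latent heat balance, Q = m(c_s·ΔT₁ + L + c_l·ΔT₂). A minimal sketch with typical paraffin handbook values (our assumption, not figures from the review):

```python
# Heat stored by a PCM crossing its melting point:
#   Q = m * (c_solid * dT_below + L + c_liquid * dT_above)
# Property values are typical handbook figures for paraffin wax,
# not taken from the review.

def pcm_heat_stored(mass_kg, dT_below, dT_above,
                    c_solid=2.0, c_liquid=2.0, latent=200.0):
    """Return stored heat in kJ (c in kJ/(kg*K), latent heat in kJ/kg)."""
    return mass_kg * (c_solid * dT_below + latent + c_liquid * dT_above)

# 1 kg of paraffin warmed 5 K below and 5 K above its melting point:
q = pcm_heat_stored(1.0, 5.0, 5.0)
print(f"Q = {q:.0f} kJ")  # 220 kJ, ~10x the sensible heat alone (20 kJ)
```

The dominance of the latent term over the narrow melting range is exactly what makes microencapsulated paraffin attractive for thermoregulating textiles.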
Procedia PDF Downloads 388
3248 Climate Change Impact on Water Resources Management in Remote Islands Using Hybrid Renewable Energy Systems
Authors: Elissavet Feloni, Ioannis Kourtis, Konstantinos Kotsifakis, Evangelos Baltas
Abstract:
Water inadequacy on the small dry islands scattered across the Aegean Sea (Greece) is a major Water Resources Management (WRM) problem, especially during the summer period due to tourism. In the present work, various WRM schemes are designed and presented. The WRM schemes take into account current infrastructure and include rainwater harvesting tanks and reverse osmosis desalination units. The energy requirements are covered mainly by wind turbines and/or a seawater pumped storage system. Sizing is based on the available population and tourism data per island, after taking into account a slight increase in population (up to 1.5% per year), and it guarantees at least 80% reliability for the energy supply and 99.9% for potable water. Scenarios are evaluated from a financial perspective by calculating the Life Cycle Cost (LCC) of each investment over a lifespan of 30 years. The wind-powered desalination plant was found to be the most cost-effective practice from an economic point of view. Finally, in order to estimate the Climate Change (CC) impact, six different CC scenarios were investigated. The rate of on-grid versus off-grid energy required to ensure the targeted reliability was investigated per island for the baseline and for each climate scenario. The results revealed that under CC the on-grid energy required would increase and, as a result, the reduction in the reliability of the wind turbines and seawater pumped storage systems would be in the range of 4 to 44%. However, this percentage change does not exceed 22% per island across all examined CC scenarios. Overall, CC is proposed to be incorporated into the design process for WRM-related projects.
Acknowledgements: This research is co-financed by Greece and the European Union (European Social Fund - ESF) through the Operational Program «Human Resources Development, Education and Lifelong Learning 2014-2020» in the context of the project “Development of a combined rain harvesting and renewable energy-based system for covering domestic and agricultural water requirements in small dry Greek Islands” (MIS 5004775).
Keywords: small dry islands, water resources management, climate change, desalination, RES, seawater pumped storage system, rainwater harvesting
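The LCC comparison over a 30-year lifespan is, in its simplest form, a discounted sum of capital and recurring costs. A minimal sketch (the cost figures and discount rate are placeholders, not the study's data):

```python
# Simplest life-cycle cost: capital outlay plus discounted O&M over the lifespan.
#   LCC = capex + sum_{t=1..T} opex / (1 + r)^t
# The figures below are placeholders for illustration, not the study's data.

def life_cycle_cost(capex, annual_opex, rate, years=30):
    return capex + sum(annual_opex / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical comparison of two supply options for one island:
wind_desal = life_cycle_cost(capex=2_000_000, annual_opex=60_000, rate=0.04)
grid_desal = life_cycle_cost(capex=900_000, annual_opex=150_000, rate=0.04)
print(f"wind-powered: EUR {wind_desal:,.0f}  grid-powered: EUR {grid_desal:,.0f}")
```

With these placeholder numbers the capital-heavy, low-opex option wins over 30 years, which mirrors the study's finding that wind-powered desalination is the most cost-effective scheme.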
Procedia PDF Downloads 117
3247 Effects of Bipolar Plate Coating Layer on Performance Degradation of High-Temperature Proton Exchange Membrane Fuel Cell
Authors: Chen-Yu Chen, Ping-Hsueh We, Wei-Mon Yan
Abstract:
Over the past few centuries, human energy requirements have been met by burning fossil fuels. However, exploiting this resource has led to global warming and innumerable environmental issues. Thus, finding alternative solutions to the growing demand for energy has recently been driving the development of low-carbon and even zero-carbon energy sources. Wind power and solar energy are good options, but they suffer from unstable power output due to unpredictable weather conditions. To overcome this problem, a reliable and efficient energy storage sub-system is required in future distributed-power systems. Among all kinds of energy storage technologies, the fuel cell system with hydrogen storage is a promising option because it is suitable for large-scale and long-term energy storage. The high-temperature proton exchange membrane fuel cell (HT-PEMFC) with metallic bipolar plates is a promising fuel cell system because an HT-PEMFC can tolerate a higher CO concentration and the use of metallic bipolar plates can reduce the cost of the fuel cell stack. However, the operating life of metallic bipolar plates is a critical issue because of corrosion. As a result, in this work, we apply different coating layers to the metal surface and investigate the protective performance of these coatings. The tested bipolar plates include uncoated SS304 bipolar plates, titanium nitride (TiN) coated SS304 bipolar plates and chromium nitride (CrN) coated SS304 bipolar plates. The results show that the TiN coated SS304 bipolar plate has the lowest contact resistance and through-plane resistance and the best cell performance and operating life among all tested bipolar plates. The long-term in-situ fuel cell tests show that the HT-PEMFC with TiN coated SS304 bipolar plates has the lowest performance decay rate, followed by the CrN coated SS304 bipolar plate; the uncoated SS304 bipolar plate has the worst decay rate.
The performance decay rates with TiN coated SS304, CrN coated SS304 and uncoated SS304 bipolar plates are 5.324×10⁻³ % h⁻¹, 4.513×10⁻² % h⁻¹ and 7.870×10⁻² % h⁻¹, respectively. In addition, the EIS results indicate that the uncoated SS304 bipolar plate has the highest growth rate of ohmic resistance, whereas the ohmic resistance with the TiN coated SS304 bipolar plates increases only slightly with time. The growth rates of the ohmic resistance with TiN coated SS304, CrN coated SS304 and uncoated SS304 bipolar plates are 2.85×10⁻³ h⁻¹, 3.56×10⁻³ h⁻¹, and 4.33×10⁻³ h⁻¹, respectively. On the other hand, the charge transfer resistances with these three bipolar plates all increase with time, but at similar growth rates. In addition, the effective catalyst surface areas with all bipolar plates do not change significantly with time. Thus, it is inferred that the major reason for the performance degradation is the ohmic resistance rising with time, which is associated with the corrosion and oxidation phenomena on the surface of the stainless steel bipolar plates.
Keywords: coating layer, high-temperature proton exchange membrane fuel cell, metallic bipolar plate, performance degradation
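Assuming the decay stays linear over time (our extrapolation, not a claim made by the study), the reported rates translate directly into time to a given performance loss:

```python
# Time to reach a 10% performance loss, assuming the decay rates reported
# in the abstract stay linear over time (our extrapolation, not a claim
# made by the study).
decay_pct_per_h = {
    "TiN-coated SS304": 5.324e-3,
    "CrN-coated SS304": 4.513e-2,
    "uncoated SS304": 7.870e-2,
}
for plate, rate in decay_pct_per_h.items():
    hours = 10.0 / rate
    print(f"{plate}: ~{hours:,.0f} h to 10% loss")
```

Under this linear reading, the TiN coating buys roughly an order of magnitude more operating time than the uncoated plate, which is the practical significance of the decay-rate comparison.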
Procedia PDF Downloads 282
3246 Determinants of Extra Charges for Container Shipments: A Case Study of Nexus Zone Logistics
Authors: Zety Shakila Binti Mohd Yusof, Muhammad Adib Bin Ishak, Hajah Fatimah Binti Hussein
Abstract:
The international shipping business is subject to numerous controls and regulations on export and import shipments. It is costly and time-consuming, and when something goes wrong, or when the buyer or seller fails to comply with the regulations, the result can be penalties, delays and unexpected costs. For the focus of this study, the researchers selected a local forwarder that provides forwarding and clearance services, Nexus Zone Logistics. It was identified that this company currently pays many extra costs, including local and detention charges, which negatively impact its income flow and reduce its overall stability. Two variables were identified as factors behind the extra charges: loaded containers entering the port after the closing time, and late delivery of empty containers to the container yard. This study is qualitative in nature, and the secondary data collected were analyzed using self-administered observation. The findings of this study cover one selected case for each export and import shipment between July and December 2014. The data were analyzed using frequency analysis based on tables and graphs. The researchers recommend that Nexus Zone Logistics impose a 1% deposit payment per container for each shipment (export and import) on its customers.
Keywords: international shipping, export and import, detention charges, container shipment
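The frequency analysis of the two extra-charge factors can be sketched as a simple tally over shipment records (the records, field names and charge amounts below are hypothetical, not the company's figures):

```python
# Tallying the two extra-charge factors named in the study over shipment
# records. The records, field names and charge amounts are hypothetical.
shipments = [
    {"id": "EXP-01", "gate_in_late": True, "empty_return_late": False},
    {"id": "EXP-02", "gate_in_late": False, "empty_return_late": True},
    {"id": "IMP-01", "gate_in_late": False, "empty_return_late": False},
    {"id": "IMP-02", "gate_in_late": True, "empty_return_late": True},
]
CHARGES = {"gate_in_late": 150.0, "empty_return_late": 90.0}  # per container

freq = {factor: sum(s[factor] for s in shipments) for factor in CHARGES}
total = sum(CHARGES[f] * n for f, n in freq.items())
print(freq)                       # {'gate_in_late': 2, 'empty_return_late': 2}
print(f"extra charges: {total}")  # 480.0
```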
Procedia PDF Downloads 384
3245 Direct and Indirect Impacts of Predator Conflict in Kanha National Park, India
Authors: Diane H. Dotson, Shari L. Rodriguez
Abstract:
Habitat for predators is on the decline worldwide, which often brings humans and predators into conflict over remaining shared space and common resources. While the direct impacts of human-predator conflict on humans (i.e., attacks on livestock or humans resulting in injury or death) are well documented, the indirect impacts of conflict on humans (i.e., downstream effects such as fear, stress, opportunity costs, and PTSD) have not been addressed. We interviewed 437 people living in 54 villages on the periphery of Kanha National Park, India, to assess the amount and severity of direct and indirect impacts of predator conflict. While 58% of livestock owners believed that predator attacks on livestock guards occurred frequently, and 62% of those who collect forest products believed that predator attacks on collectors occurred frequently, less than 20% of all participants knew of someone who had experienced an attack. Data related to indirect impacts suggest that such impacts are common; 76% of participants indicated they were afraid a predator would physically injure them. Livestock owners reported that livestock guarding took time away from their primary job (61%) and from getting enough sleep (73%), and believed that it increased their vulnerability to illness (80%). These results suggest that perceptions of the risk of predator attack are likely inflated, yet the costs of human-predator conflict may be substantially higher than previously estimated, particularly in relation to human well-being, making the implementation of appropriate and effective conservation and conflict-mitigation strategies and policies increasingly urgent.
Keywords: direct impacts, indirect impacts, human-predator conflict, India
Procedia PDF Downloads 156
3244 Patient’s Knowledge and Use of Sublingual Glyceryl Trinitrate Therapy in Taiping Hospital, Malaysia
Authors: Wan Azuati Wan Omar, Selva Rani John Jasudass, Siti Rohaiza Md. Saad
Abstract:
Introduction & objective: The objectives of this study were to assess patients’ knowledge of appropriate sublingual glyceryl trinitrate (GTN) use and to investigate how patients commonly store and carry their sublingual GTN tablets. Methodology: This was a cross-sectional survey using a validated, researcher-administered questionnaire. The study involved cardiac patients receiving sublingual GTN who attended the outpatient and inpatient departments of Taiping Hospital, a non-academic public care hospital. The minimum calculated sample size was 92, but 100 patients were conveniently sampled. Respondents were interviewed in three areas: demographic data, knowledge, and use of sublingual GTN. Eight items were used to calculate each subject’s knowledge score and six items were used to calculate the use score. Results: Of the 96 patients who consented to participate, the majority (96.9%) were well aware of the indication of sublingual GTN. With regard to the mechanism of action of sublingual GTN, 73 (76%) patients did not know how the medication works. The majority of patients (66.7%) knew about the proper storage of the tablet. As for the maximum number of sublingual GTN tablets that can be taken during each angina episode, 36.5% did not know that up to three tablets can be taken per episode. Fifty-four (56.2%) patients were not aware that they need to replace sublingual GTN every 8 weeks after receiving the tablets. The majority (69.8%) of patients demonstrated a lack of knowledge with regard to the use of sublingual GTN to prevent chest pain. Conclusion: Overall, patients’ knowledge regarding the self-administration of sublingual GTN is still inadequate. The findings support the need for more frequent reinforcement of patient education, especially in the areas of preventive use, storage and drug stability.
Keywords: glyceryl trinitrate, knowledge, adherence, patient education
Procedia PDF Downloads 399
3243 A Data Driven Methodological Approach to Economic Pre-Evaluation of Reuse Projects of Ancient Urban Centers
Authors: Pietro D'Ambrosio, Roberta D'Ambrosio
Abstract:
The upgrading of the architectural and urban heritage of historic city centers almost always involves planning for the reuse and refunctionalization of existing structures. Such interventions are complex because they must take into account the urban and social context in which a structure is embedded, as well as its intrinsic characteristics such as historical and artistic value. To these, of course, must be added the need to make a preliminary estimate of the recovery costs and, more generally, to assess the economic and financial sustainability of the whole re-socialization project. Pre-assessment of costs is particularly difficult, since analytical surveys and structural tests are often impossible to perform, both because of the structures' condition and because of obvious cost and time constraints. The methodology proposed in this work, based on a multidisciplinary and data-driven approach, aims to obtain, at very low cost, reasonably accurate economic evaluations of the interventions to be carried out. In addition, the specific features of the approach, derived from predictive analysis techniques typically applied in complex IT domains (big data analytics), yield as a by-product of the evaluation process a shared database that can be used on a generalized basis to estimate other such projects. This makes the methodology particularly suitable for cases where massive interventions across entire areas of historic city centers are expected. The methodology has been partially tested in a study assessing the feasibility of a project for the reuse of the monumental complex of San Massimo, located in the historic center of Salerno, and is being further investigated.
Keywords: evaluation, methodology, restoration, reuse
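One way to read this predictive, database-driven pre-evaluation is as a similarity lookup against previously costed projects. A minimal sketch (the features, records and inverse-distance weighting are our illustrative assumptions, not the paper's actual model):

```python
import math

# Predict a restoration cost per square meter from a database of previously
# costed projects via inverse-distance-weighted nearest neighbours.
# Features, records and weighting are illustrative assumptions only.
past_projects = [
    # (area m2, decay score 0-10, heritage constraint 0-10) -> EUR/m2
    ((1200, 7, 9), 1850.0),
    ((800, 4, 6), 1100.0),
    ((2500, 8, 8), 2050.0),
    ((600, 3, 3), 700.0),
]

def estimate_cost(features, db, k=3):
    dists = sorted((math.dist(features, feat), cost) for feat, cost in db)
    neighbours = dists[:k]
    weights = [1.0 / (d + 1e-9) for d, _ in neighbours]
    return sum(w * c for w, (_, c) in zip(weights, neighbours)) / sum(weights)

# Hypothetical new complex: 1000 m2, decay 6, heritage constraint 8
print(f"estimated cost = EUR {estimate_cost((1000, 6, 8), past_projects):.0f}/m2")
```

As in the paper's approach, every newly costed project enriches the database, so the estimator improves as it is applied across more areas of the historic center.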
Procedia PDF Downloads 187
3242 Profile of Cross-Reactivity Allergens Highlighted by Multiplex Technology “Alex Microchip Technique” in the Diagnosis of Type I Hypersensitivity
Authors: Gadiri Sabiha
Abstract:
Introduction: Current allergy diagnostic tools using multiplex technology have made the search for specific IgE far more efficient. This opportunity is provided by the newly developed “Alex Biochip”, consisting of a panel of 282 allergens in native and molecular form, a CCD inhibitor, and the potential for detecting cross-reactive allergens. We evaluated the performance of this technology in detecting cross-reactivity in previously explored patients. Material/Method: The sera of 39 patients presenting sensitization and polysensitization profiles were explored. The search for specific IgE was carried out with the Alex® IgE biochip, and the results were analyzed by nature and by molecular family of allergens using specific software. Results/Discussion: The analysis gave a particular profile of cross-reactivity allergens: 33% for the Ole e 1 family, 31% for NPC2, 26% for storage proteins, 20% for tropomyosin, 10% for LTPs, 10% for arginine kinase and 10% for uteroglobin. CCDs were absent in all patients. The “Ole e 1” allergen is responsible for pollen-pollen cross-allergy. The storage proteins found, as well as the LTPs, are not species-specific and cause cross pollen-food allergy. nDer p 2, of the NPC2 family, is responsible for cross-reactivity between mite species. Conclusion: The cross-reactivities responsible for mixed syndromes at diagnosis in our patients were dominated by pollen-pollen and pollen-food syndromes. They allow the identification of severity factors linked to prognosis and of the best-adapted immunotherapy.
Keywords: specific IgE, allergy, cross reactivity, molecular allergens
Procedia PDF Downloads 67