Search results for: disposal costs
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2652

312 Ethicality of Algorithmic Pricing and Consumers’ Resistance

Authors: Zainab Atia, Hongwei He, Panagiotis Sarantopoulos

Abstract:

Over the past few years, firms have deployed increasingly sophisticated algorithms, which have become pervasive in modern society. With the wide availability of consumer data, the ability to track consumers using algorithmic pricing has become an integral option for online platforms. As more companies transform their businesses around massive technological advancement, pricing algorithms have attracted attention and wide adoption, bringing both benefits and challenges. With the overall aim of increasing profits, algorithmic pricing is becoming a sound option, enabling suppliers to cut costs, deliver better services, improve efficiency and product availability, and enhance the overall consumer experience. The adoption of algorithms in retail has been studied across varied fields, including marketing, computer science, engineering, economics, and public policy. What is more alarming today, however, is the limited understanding of this technology's ethical influence on consumers' perceptions and behaviours. Indeed, owing to ethical concerns about algorithms, consumers are in some instances reluctant to share their personal data with retailers, which reduces retention and leads to negative consumer outcomes. This in turn raises the question of whether firms can foster consumer acceptance of such technologies while minimizing the ethical transgressions that accompany their deployment. As research in this area of marketing and consumer behaviour remains modest, the current study advances the literature on algorithmic pricing, pricing ethics, consumers' perceptions, and price fairness.
With its empirical focus, this paper contributes to the literature by applying the distinction between the two common types of algorithmic pricing, dynamic and personalized, and measuring their relative effects on consumers' behavioural outcomes. From a managerial perspective, this research offers significant implications for providing a better human-machine interactive environment (whether online or offline) to improve both overall business performance and consumer wellbeing. By allowing more transparent pricing systems, businesses can harness ethical strategies that foster consumer loyalty and extend post-purchase behaviour. Thus, by striking the right balance of pricing measures, whether dynamic or personalized (or both), managers can approach consumers more ethically while treating their expectations and responses as critical.

Keywords: algorithmic pricing, dynamic pricing, personalized pricing, price ethicality

Procedia PDF Downloads 62
311 Five Years Analysis and Mitigation Plans on Adjustment Orders Impacts on Projects in Kuwait's Oil and Gas Sector

Authors: Rawan K. Al-Duaij, Salem A. Al-Salem

Abstract:

Projects, the unique and temporary endeavours of achieving a set of requirements, have always been challenging; planning the schedule and budget and managing the resources and risks are mostly driven by similar past experience or the technical consultations of experts in the matter. Given that complexity of projects in scope, time, and execution environment, Adjustment Orders are tools to reflect changes to the original project parameters after contract signature. Adjustment Orders are the official/legal amendments to the terms and conditions of a live contract. Reasons for issuing Adjustment Orders arise from changes in contract scope, technical requirements, and specifications, resulting in scope addition, deletion, or alteration; they can also combine several of these parameters, resulting in an increase or decrease in time and/or cost. Most business leaders (handling projects in the interest of the owner) refrain from using Adjustment Orders, given their main objectives of staying within budget and on schedule. Success in managing such changes results in uninterrupted execution and agreed project costs as well as schedule. Nevertheless, this is not always practically achievable. In this paper, a detailed study utilizing Industrial Engineering and Systems Management tools such as Six Sigma, data analysis, and quality control was carried out on the organization's five-year records of issued Adjustment Orders in order to investigate their prevalence and their time and cost impacts. The analysis identified and categorized the predominant causes with the highest impacts, which informed the corrective measures recommended to meet the objective of minimizing Adjustment Order impacts. The data showed no specific trend in Adjustment Order frequency over the past five years; however, the time impact is greater than the cost impact.
Although Adjustment Orders may never be avoidable, this analysis offers some insight into the procedural gaps and where they impact the organization most. Possible solutions are proposed, such as improving the project handling team's coordination and communication, utilizing a blanket service contract, and modifying the project gate-system procedures to minimize the possibility of similar struggles in future. Projects in the oil and gas sector are always evolving and demand a certain amount of flexibility to sustain the goals of the field. As will be demonstrated, the uncertainty of project parameters, inadequate project definition, operational constraints, and stringent procedures are the main factors creating the need for Adjustment Orders, and the recommendations accordingly address these challenges.
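The cause-categorization step described above can be sketched as a simple Pareto-style ranking; the adjustment-order records and impact figures here are hypothetical placeholders, not the organization's five-year data:

```python
from collections import defaultdict

# Hypothetical adjustment-order records:
# (cause category, cost impact in kUSD, schedule impact in days)
orders = [
    ("scope addition", 120, 45), ("spec change", 80, 30),
    ("scope deletion", -40, 10), ("operational constraint", 60, 90),
    ("scope addition", 150, 60), ("spec change", 30, 25),
    ("operational constraint", 45, 70),
]

def pareto(records, metric):
    """Aggregate absolute impact per cause; rank causes by share of the total."""
    totals = defaultdict(float)
    for cause, cost, days in records:
        totals[cause] += abs(metric(cost, days))
    grand = sum(totals.values())
    return sorted(((c, v, v / grand) for c, v in totals.items()),
                  key=lambda t: t[1], reverse=True)

cost_ranking = pareto(orders, lambda cost, days: cost)   # predominant cost drivers
time_ranking = pareto(orders, lambda cost, days: days)   # predominant schedule drivers
```

Ranking both impact dimensions separately mirrors the paper's observation that the dominant causes for time impact need not coincide with those for cost impact.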

Keywords: adjustment orders, data analysis, oil and gas sector, systems management

Procedia PDF Downloads 133
310 Production of Ferroboron by SHS-Metallurgy from Iron-Containing Rolled Production Wastes for Alloying of Cast Iron

Authors: G. Zakharov, Z. Aslamazashvili, M. Chikhradze, D. Kvaskhvadze, N. Khidasheli, S. Gvazava

Abstract:

Traditional technologies for processing iron-containing industrial waste, including steel-rolling waste, are associated with significant energy costs, long process durations, and the need for complex and expensive equipment. Waste generated during industrial processing negatively affects the environment, yet at the same time it is a valuable raw material and can be used to produce new marketable products. Studying the effectiveness of self-propagating high-temperature synthesis (SHS), which is characterized by simple equipment, purity of the final product, and high processing speed, is therefore of wide scientific and practical interest for solving this problem. The work presents technological aspects of producing ferroboron by SHS metallurgy from iron-containing rolled-production wastes for alloying cast iron, together with results on the effect of the alloying element on the degree of boron assimilation by liquid cast iron. The combustion features of the Fe-B system have been investigated, and the main parameters controlling the phase composition of the synthesis products have been experimentally established. The effect of overloads on the formation patterns of cast master alloys and on the structure-formation mechanisms of SHS products was studied. It has been shown that an increase in the hematite (Fe₂O₃) content of the iron-containing waste increases the content of the FeB phase and, accordingly, the amount of boron in the master alloy. The boron content of the master alloys is within 3-14%, and their phase composition consists of the Fe₂B and FeB phases. Depending on the initial composition of the wastes, the yield of the end product reaches 91-94%, and the extraction of boron is 70-88%. Combustion of highly exothermic mixtures thus allows a wide range of boron-containing master alloys to be obtained from industrial wastes.
In view of the relatively low melting point of the obtained SHS master alloy, positive dynamics of boron absorption by liquid iron were established. According to the obtained data, the degree of assimilation of the master alloy when alloying gray cast iron at 1450°C is 80-85%. When treatment of the liquid cast iron with magnesium is combined with subsequent alloying with the developed master alloy, boron losses are reduced by 5-7%, and a uniform distribution of the boron micro-additions throughout the treated liquid metal is obtained. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation of Georgia (SRNSFG) under the GENIE project (grant number CARYS-19-802).

Keywords: self-propagating high-temperature synthesis, cast iron, industrial waste, ductile iron, structure formation

Procedia PDF Downloads 99
309 Analysis of the Savings Behaviour of Rice Farmers in Tiaong, Quezon, Philippines

Authors: Angelika Kris D. Dalangin, Cesar B. Quicoy

Abstract:

Rice farming is a major source of livelihood and employment in the Philippines, but it requires a substantial amount of capital. Capital may come from income (farm, non-farm, and off-farm), savings, and credit. However, rice farmers suffer from lack of capital due to high input costs and low productivity. Capital insufficiency, coupled with low productivity, hinders them from meeting their basic household and production needs; hence, they resort to borrowing money, mostly from informal lenders who charge very high interest rates. As another source of capital, savings can help rice farmers meet the basic needs of both the household and the farm. However, information is inadequate on whether the farmers save or not, as well as on why they do not depend on savings to augment their lack of capital. Thus, it is worth analyzing how rice farmers save. Using actual savings, defined as the difference between household income and expenditure, the study revealed that about three-fourths (72%) of the farmers interviewed are savers. However, when asked whether they are savers or not, more than half of them considered themselves non-savers. This gap shows that many farmers think they have no savings at all; hence they continue to borrow money and do not depend on savings to augment their lack of capital. The study also identified the forms of savings, saving motives, and savings utilization among rice farmers. Results revealed that, over the past 12 months, most of the farmers saved cash at home for liquidity purposes, while others deposited cash in banks and/or saved in the form of livestock. Among the farmers' most important reasons for saving are daily household expenses, building a house, emergencies, retirement, and the next production cycle. Furthermore, the study assessed the factors affecting the rice farmers' savings behaviour using logistic regression.
Results showed that the significant factors were the presence of non-farm income, per capita net farm income, and per capita household expenses. The presence of non-farm income and per capita net farm income positively affect the farmers' savings behaviour, while per capita household expenses have a negative effect. The effects of per capita net farm income and household expenses, however, are very small in magnitude. Generally, income and expenditure proved to be significant factors affecting the savings behaviour of the rice farmers. However, most farmers could not save regularly due to low farm income and high household and farm expenditures. Thus, it is highly recommended that the government develop programs or implement policies that create more jobs for the farmers and their family members, and that programs and policies be implemented to increase farm productivity and income.
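A minimal sketch of the kind of binary logit estimation described above, fitted by gradient ascent; the household sample and coefficient values are synthetic illustrations, not the Tiaong survey data:

```python
import math, random

random.seed(42)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic households: [intercept, has non-farm income, per-capita farm income,
# per-capita household expense]; saver status drawn from an assumed true model
# in which both income terms raise the odds of saving and expenses lower them.
data = []
for _ in range(500):
    nonfarm = 1.0 if random.random() < 0.5 else 0.0
    income = random.gauss(0.0, 1.0)    # standardized per-capita net farm income
    expense = random.gauss(0.0, 1.0)   # standardized per-capita household expense
    p = sigmoid(0.3 + 1.2 * nonfarm + 0.8 * income - 0.9 * expense)
    y = 1 if random.random() < p else 0
    data.append(([1.0, nonfarm, income, expense], y))

def fit_logit(data, lr=0.1, epochs=300):
    """Maximum-likelihood logistic regression via batch gradient ascent."""
    w = [0.0] * 4
    n = len(data)
    for _ in range(epochs):
        grad = [0.0] * 4
        for x, y in data:
            err = y - sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            for j in range(4):
                grad[j] += err * x[j]
        w = [wi + lr * g / n for wi, g in zip(w, grad)]
    return w

w = fit_logit(data)  # fitted signs should match the paper's reported directions
```

The recovered coefficient signs (positive for non-farm income and farm income, negative for expenses) mirror the directions reported in the abstract.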

Keywords: agricultural economics, agricultural finance, binary logistic regression, logit, Philippines, Quezon, rice farmers, savings, savings behaviour

Procedia PDF Downloads 204
308 Unequal Traveling: How School District System and School District Housing Characteristics Shape the Duration of Families Commuting

Authors: Geyang Xia

Abstract:

In many countries, governments have responded to the growing demand for educational resources through school district systems, and there is substantial evidence that such systems have been effective in promoting inter-district and inter-school equity in educational resources. However, the scarcity of quality educational resources has produced varying levels of education among different school districts, making it a common choice for many parents to buy a house in the district of a quality school; they are even willing to bear huge commuting costs for this purpose. This is evidenced by the fact that parents in school districts with quality education resources have longer average commute times and distances than parents in average school districts. This "unequal traveling" under the influence of the school district system is most common in districts at the primary level of education. It further reinforces the differential hierarchy of educational resources and raises issues of inequitable public education services, education-led residential segregation, and gentrification of school district housing. Against this background, this paper takes Nanjing, a famous educational city in China, as a case study and selects the school districts of the top 10 public elementary schools. The study first builds a spatio-temporal behavioral trajectory dataset for households in these high-quality school districts using spatial vector data, decrypted cell phone signaling data, and census data. Then, by constructing a "house-school-work" (HSW) commuting pattern for the population of the districts with high-quality educational resources, and by classifying these HSW commuting patterns, school districts with long commuting durations were identified.
Ultimately, the mechanisms and patterns inherent in this unequal commuting are analyzed in terms of six aspects, including the centrality of school district location, functional diversity, and accessibility. The results reveal that the "unequal commuting" of Nanjing's high-quality school districts under the influence of the school district system occurs mainly in the peripheral areas of the city, and the schools matched with these high-quality districts are mostly branches of prestigious schools located in the city's core built-up areas. At the same time, the centrality of school district location and the diversity of functions are the most important factors influencing unequal commuting in high-quality school districts. Based on these results, the paper proposes strategies to optimize the spatial layout of high-quality educational resources, together with corresponding transportation policy measures.

Keywords: school-district system, high quality school district, commuting pattern, unequal traveling

Procedia PDF Downloads 66
307 Estimation of the Dynamic Fragility of Padre Jacinto Zamora Bridge Due to Traffic Loads

Authors: Kimuel Suyat, Francis Aldrine Uy, John Paul Carreon

Abstract:

The Philippines, an archipelago of many islands, is connected by approximately 8,030 bridges. Continuous evaluation of the structural condition of these bridges is needed to safeguard the safety of the general public. With most bridges reaching their design life, retrofitting and replacement may be needed, and concerned government agencies allocate huge costs to the periodic monitoring and maintenance of these structures. The rising volume of traffic and the aging of these infrastructures are challenging structural engineers to develop structural health monitoring techniques. Numerous techniques have already been proposed, and some are now being employed in other countries; vibration analysis is one of them. The natural frequency and vibration of a bridge are design criteria for ensuring the stability, safety, and economy of the structure. Its natural frequency must be neither so low as to cause discomfort nor so high that the structure becomes so stiff as to be both costly and heavy; it is well known that the stiffer a member is, the more load it attracts. The frequency must also not match the vibration caused by traffic loads; if it does, resonance occurs. Vibration that matches a system's natural frequency generates excitation, and when this exceeds the member's limit, structural failure follows. This study presents a method for calculating dynamic fragility through the use of a vibration-based monitoring system. Dynamic fragility is the probability that a structural system exceeds a limit state when subjected to dynamic loads. The bridge is modeled in SAP2000 based on the available construction drawings provided by the Department of Public Works and Highways, then verified and adjusted against the actual condition of the bridge. The bridge design specifications are also checked using nondestructive tests. The approach properly accounts for the uncertainty of observed values and of code-based structural assumptions.
The vibration response of the structure under actual loads is monitored using sensors installed on the bridge. From these measured dynamic characteristics, threshold criteria can be established and fragility curves estimated. This study, conducted as part of the research project between the Department of Science and Technology, Mapúa Institute of Technology, and the Department of Public Works and Highways, also known as the Mapúa-DOST Smart Bridge Project, deploys structural health monitoring sensors at Zamora Bridge. The bridge was selected in coordination with the Department of Public Works and Highways, and its structural plans are readily available.
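Fragility estimation of the kind described above can be illustrated with a toy Monte Carlo sketch; the load-response model, the lognormal uncertainty factor, and the limit-state capacity below are assumed placeholders, not the calibrated SAP2000 model:

```python
import math, random

CAPACITY = 1.0     # assumed normalized limit-state capacity
N_SAMPLES = 20000  # Monte Carlo samples per load level

def response(load, rng):
    """Peak structural response at a given traffic-load intensity.
    Model and material uncertainty are lumped into one lognormal factor."""
    model_factor = math.exp(rng.gauss(0.0, 0.3))
    return 0.6 * load * model_factor

def fragility(load):
    """P(response exceeds the limit state) at one load level, by Monte Carlo."""
    rng = random.Random(123)  # fixed draws so the curve is reproducible
    exceed = sum(response(load, rng) > CAPACITY for _ in range(N_SAMPLES))
    return exceed / N_SAMPLES

# Fragility curve: exceedance probability vs. normalized load intensity
curve = {round(q, 1): fragility(q) for q in (0.5, 1.0, 1.5, 2.0, 2.5)}
```

In practice the capacity and the response uncertainty would be threshold criteria derived from the monitored vibration data rather than the assumed constants used here.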

Keywords: structural health monitoring, dynamic characteristic, threshold criteria, traffic loads

Procedia PDF Downloads 240
306 Numerical Investigation of Phase Change Materials (PCM) Solidification in a Finned Rectangular Heat Exchanger

Authors: Mounir Baccar, Imen Jmal

Abstract:

Because of the rise in energy costs, thermal storage systems designed for the heating and cooling of buildings are becoming increasingly important. Energy storage not only reduces the time or rate mismatch between energy supply and demand but also plays an important role in energy conservation. One of the most attractive storage techniques is Latent Heat Thermal Energy Storage (LHTES) in Phase Change Materials (PCM), owing to its high energy storage density and isothermal storage process. This paper presents a numerical study of the solidification of a PCM (paraffin RT27) in a rectangular thermal storage exchanger for air-conditioning systems, taking into account the presence of natural convection. The continuity, momentum, and thermal energy equations are solved by the finite volume method. The main objective of this numerical approach is to study the effect of natural convection on the PCM solidification time and the impact of the number of fins on heat transfer enhancement. It also investigates the temporal evolution of PCM solidification, as well as the longitudinal temperature profiles of the heat transfer fluid (HTF) circulating in the duct. Two cases are studied: the first treats the solidification of PCM in a PCM-air heat exchanger without fins, while the second considers the same type of exchanger with fins added (3, 5, and 9 fins). Without fins, stratification of the PCM from colder to hotter regions during the heat transfer process was noted; this behavior prevents the formation of thermo-convective cells in the PCM region and thus makes the transfer almost purely conductive. In the presence of fins, energy extraction from the PCM to the airflow occurs at a faster rate, which reduces the discharging time and increases the outlet air (HTF) temperature.
However, for a large number of fins (9 fins), the enhancement of the solidification process is not significant, because the confinement of the liquid PCM spaces restricts the development of thermo-convective flow. Hence, the effect of natural convection is not very significant for a high number of fins. In the optimum case, using 3 fins, the temperature rise of the HTF exceeds approximately 10°C during the first 30 minutes. As solidification progresses from the surfaces of the PCM container and propagates toward the central liquid phase, an insulating layer is created in the vicinity of the container surfaces and the fins, causing a low heat exchange rate between PCM and air. As the solid PCM layer thickens, a progressive regression of the flow field is induced in the liquid phase, leading to the inhibition of the heat extraction process. After about 2 hours, 68% of the PCM had become solid, and heat transfer was dominated almost entirely by conduction.
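The conduction-dominated late stage described above can be illustrated with a one-dimensional explicit finite-volume enthalpy method; the slab geometry, property values, and boundary conditions below are simplified placeholders rather than the paper's RT27 exchanger model:

```python
# 1D explicit finite-volume enthalpy method for PCM solidification (conduction only).
# All property values and dimensions are illustrative, not the paper's RT27 data.
N = 50                              # finite-volume cells
L = 0.02                            # slab thickness [m]
dx = L / N
k, rho, cp = 0.2, 800.0, 2000.0     # conductivity [W/mK], density [kg/m3], cp [J/kgK]
Lf = 180e3                          # latent heat of fusion [J/kg]
Tm, T0, Tw = 27.0, 35.0, 10.0       # melting, initial, cooled-wall temperatures [degC]

def temperature(h):
    """Invert the enthalpy-temperature relation (isothermal phase change at Tm)."""
    if h < 0.0:                     # fully solid
        return Tm + h / (rho * cp)
    if h > rho * Lf:                # fully liquid
        return Tm + (h - rho * Lf) / (rho * cp)
    return Tm                       # mushy cell: latent-heat plateau

# Volumetric enthalpy field, starting fully liquid and superheated to T0
h = [rho * cp * (T0 - Tm) + rho * Lf] * N
dt = 0.4 * rho * cp * dx * dx / k   # explicit stability limit is 0.5
for _ in range(10000):              # roughly 1.4 h of physical time
    T = [temperature(hi) for hi in h]
    for i in range(N):
        Tl = Tw if i == 0 else T[i - 1]           # Dirichlet cooled wall (left)
        Tr = T[i] if i == N - 1 else T[i + 1]     # adiabatic far side (right)
        h[i] += dt * k * (Tl - 2.0 * T[i] + Tr) / (dx * dx)

# Solid fraction: 1 at h <= 0, 0 at h >= rho*Lf, linear across the mushy range
solid_fraction = sum(min(1.0, max(0.0, 1.0 - hi / (rho * Lf))) for hi in h) / N
```

The growing band of cells with negative enthalpy plays the role of the insulating solid layer described in the abstract: the solidification front slows as that layer thickens.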

Keywords: heat transfer enhancement, front solidification, PCM, natural convection

Procedia PDF Downloads 162
305 The Impact of Formulate and Implementation Strategy for an Organization to Better Financial Consequences in Malaysian Private Hospital

Authors: Naser Zouri

Abstract:

Purpose: This study measures how strategy formulation and implementation relate to market-based strategic management categories such as courtesy, competence, and compliance, with the goal of reaching high loyalty within the financial ecosystem and addressing marketplace errors in relation to fair trade organization. Findings: The findings show the ability of executive-level management to motivate and to make better decisions when solving problems in a business organization, and indicate an ideal level for each intervention policy for a hypothetical household. Methodology/design: A questionnaire was used for data collection in both the pilot test and the main study and was distributed among finance employees. Respondents were nominated based on non-probability sampling, namely convenience sampling. Administering the questionnaire among the respondents was feasible within the study's budget, although collecting data from the hospital proved very difficult. The survey items covered implementation strategy, environment, supply chain, and employees, reflecting the impact of implementation strategy on financial consequences, as well as strategy formulation, comprehensiveness of strategic design, and organizational performance, reflecting the impact of strategy formulation on financial consequences. Practical implications: A dynamic capability approach to strategy formulation and implementation focuses on the firm-specific processes through which firms integrate, build, or reconfigure resources, which is valuable for making a theoretical contribution. Originality/value: Going beyond the current discussion, we show that case studies have the potential to extend and refine theory.
We shed new light on how dynamic capability research can benefit from case studies by discovering the qualifications that shape the development of capabilities and determining the boundary conditions of the dynamic capabilities approach. Limitations of the study: The present study relies on a survey methodology for data collection, and responses from financial employees were difficult to obtain because of workplace constraints.

Keywords: financial ecosystem, loyalty, Malaysian market error, dynamic capability approach, rate-market, optimization intelligence strategy, courtesy, competence, compliance

Procedia PDF Downloads 272
304 Economic Impact and Benefits of Integrating Augmented Reality Technology in the Healthcare Industry: A Systematic Review

Authors: Brenda Thean I. Lim, Safurah Jaafar

Abstract:

Augmented reality (AR) in the healthcare industry has been gaining popularity in recent years, principally in the areas of medical education, patient care, and digital health solutions. One of the drivers of the decision to invest in AR technology is the potential economic benefit it could bring to patients and healthcare providers, including the pharmaceutical and medical technology sectors. The literature shows that AR technologies have a track record of improving medical education and patient health outcomes. However, little has been published on the economic impact of AR in healthcare, a very resource-intensive industry. This systematic review appraised studies on the benefits and impact of AR in healthcare against established quality criteria in order to identify relevant publications for an in-depth analysis of economic impact assessment. The literature search was conducted using multiple databases, including PubMed, Cochrane, ScienceDirect, and Nature. Inclusion criteria covered research papers on AR implementation in healthcare, from education to diagnosis and treatment; only papers written in English were selected, and studies on AR prototypes were excluded. Although many articles have addressed the benefits of AR in medical education, treatment and diagnosis, and dental medicine, very few publications identified the specific economic impact of the technology within the healthcare industry. Thirteen publications met the inclusion criteria, and none of them comprised a systematically comprehensive cost impact evaluation. An outline of a cost-effectiveness and cost-benefit framework was therefore drafted, using an AR article from another industry as a reference.
This systematic review found that while AR technology is advancing rapidly and industries are starting to adopt it in their respective sectors, its application in healthcare is still at an early stage, and there is plenty of room for further advancement and integration of AR across sectors within the healthcare industry. Future studies will require more comprehensive economic analyses and costing evaluations to enable economic decisions for or against implementing AR technology in healthcare. The review concludes that the current literature lacks detailed economic impact and benefit analyses. Future research should include the initial investment and operational costs of the AR infrastructure in healthcare settings and compare the intervention with its conventional counterparts or alternatives, so as to provide a comprehensive comparison of impact, benefit, and cost differences.

Keywords: augmented reality, benefit, economic impact, healthcare, patient care

Procedia PDF Downloads 177
303 Customized Temperature Sensors for Sustainable Home Appliances

Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy

Abstract:

Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data, such as the frequency of use of the machine and user preferences, and compile data critical to diagnostic processes for fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications: their operating range is -70°C to 850°C, while home appliance applications require only 23°C to 500°C. To ensure operation of commercial sensors over this wide range, a platinum coating of approximately 1 micron thickness is usually applied to the wafer; the use of platinum and the high coating thickness lengthen the sensor production process and therefore increase sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and the design parameters (length, width, trim points, and thin-film deposition thickness) were optimized using statistical methods to achieve the desired resistance value. A single-side-polished sapphire wafer was used to develop the thin-film resistive temperature sensors. To enhance adhesion and insulation, 100 nm of silicon dioxide was deposited by the inductively coupled plasma chemical vapor deposition technique. The lithography step was performed with a direct laser writer.
A lift-off process followed e-beam evaporation of 10 nm titanium and 280 nm platinum layers. Standard four-point-probe sheet resistance measurements were made at room temperature. Annealing at 600°C was then performed in a rapid thermal processing machine, and resistance was measured with a probe station before and after annealing. The temperature dependence between 25°C and 300°C was also tested. As a result of this study, a temperature sensor was developed that has a lower coating thickness than commercial sensors but can produce reliable data over the white goods application temperature range. A relatively simple but optimized production method was also developed to produce this sensor.
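For orientation, the link between the deposited film geometry and the sensor's resistance-temperature curve can be estimated from the sheet resistance and the standard Callendar-Van Dusen relation for platinum; the 0 °C target resistance and the bulk resistivity below are assumed for illustration, not the sensor's measured values:

```python
# Geometry-to-resistance estimate for a Pt thin-film RTD, plus its temperature curve.
# R0 and the platinum resistivity are assumed values for illustration only
# (sputtered/evaporated films typically run above the bulk resistivity).
RHO_PT = 1.06e-7            # bulk platinum resistivity [ohm*m]
THICKNESS = 280e-9          # deposited Pt thickness [m], as in the process above
R0 = 100.0                  # target resistance at 0 degC [ohm] (Pt100-style, assumed)

sheet_resistance = RHO_PT / THICKNESS    # ohms per square
n_squares = R0 / sheet_resistance        # meander length/width ratio to trim for

# IEC 60751 Callendar-Van Dusen coefficients for platinum, valid for 0..850 degC
A, B = 3.9083e-3, -5.775e-7

def resistance(t_c):
    """RTD resistance [ohm] at temperature t_c [degC], for t_c >= 0."""
    return R0 * (1.0 + A * t_c + B * t_c * t_c)

r_low = resistance(23.0)     # bottom of the appliance range cited in the abstract
r_high = resistance(500.0)   # top of the appliance range
```

The trim points mentioned in the abstract would adjust the effective number of squares so that the as-deposited resistance hits the R0 target despite process variation.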

Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency

Procedia PDF Downloads 48
302 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics

Authors: Maria Arechavaleta, Mark Halpin

Abstract:

In the United States, costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision question that drives the total system cost is how much unserved (or curtailed) energy is acceptable? Of course additional solar conversion equipment can be installed to provide greater peak energy production and extra energy storage capability can be added to mitigate longer lasting low solar energy production periods. Each option increases total cost and provides a benefit which is difficult to quantify accurately. An approach to quantify the cost-benefit of adding additional resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy, is presented in this paper. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics are in the forms of curves with each point on the curve representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper. 
These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other, and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (typically a reduction in energy curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and consistent with their available funds.
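The curve-overlay sizing procedure described above can be sketched in a few lines. The following is a hedged illustration, not the authors' code: the battery capacity, round-trip efficiency, and toy one-hour profiles are assumed values, and real use would feed in the week-long per-minute measurements.

```python
# Sketch: loss-of-energy probability (LOEP) and expected unserved energy
# (EUE) from paired per-minute production/consumption series (Wh/min).
# Battery capacity and efficiency below are illustrative assumptions.

def simulate(production_wh, consumption_wh, battery_capacity_wh, efficiency=0.9):
    """Walk the paired per-minute series once, tracking battery state of
    charge and any demand that cannot be served."""
    soc = battery_capacity_wh          # assume battery starts fully charged
    unserved_minutes = 0
    unserved_energy = 0.0
    n = len(production_wh)
    for p, c in zip(production_wh, consumption_wh):
        net = p - c
        if net >= 0:                   # surplus: charge (with losses)
            soc = min(battery_capacity_wh, soc + net * efficiency)
        else:                          # deficit: discharge, curtail if empty
            deficit = -net
            if soc >= deficit:
                soc -= deficit
            else:
                unserved_minutes += 1
                unserved_energy += deficit - soc
                soc = 0.0
    return unserved_minutes / n, unserved_energy   # (LOEP, EUE in Wh)

# Toy one-hour example: flat 50 Wh/min load, production drops out mid-hour.
prod = [60.0] * 30 + [0.0] * 30
cons = [50.0] * 60
loep, eue = simulate(prod, cons, battery_capacity_wh=600.0)
```

Re-running the simulation with incrementally larger `battery_capacity_wh` or scaled-up production gives exactly the incremental benefit-cost curve the abstract describes.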

Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems

Procedia PDF Downloads 211
301 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution

Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino

Abstract:

This paper presents a design methodology in which stakeholders are assisted in exploring a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for convergence and a more effective solution. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken in this stage carry delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in the initial definition. This can be obtained through a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals, who take decisions affecting one another. Effective coordination among these decision-makers is critical: finding a mutually agreed solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the maturation of the mission concept. This acceleration is obtained through a guided exploration of the negotiation space, which involves autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory models the negotiation process to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process in efficiently and rapidly searching for the Pareto equilibria among stakeholders. 
Finally, the concept of utility constitutes the mechanism to bridge the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs while guaranteeing the effectiveness of the selected mission concept thanks to its robustness to change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
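As an illustration of the negotiation-space concepts (not the authors' framework), the sketch below scores design alternatives with per-stakeholder utilities, keeps the Pareto-non-dominated set, and ranks it by a simple Nash social-welfare product; the alternatives and utility values are invented for demonstration.

```python
# Hedged sketch: Pareto filtering of a small negotiation space followed by
# a Nash-welfare ranking. Utility values are assumed, not elicited data.

def dominates(a, b):
    """a dominates b if a is >= in every utility and > in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(utilities):
    """Indices of alternatives not dominated by any other alternative."""
    return [i for i, u in enumerate(utilities)
            if not any(dominates(v, u) for j, v in enumerate(utilities) if j != i)]

def nash_welfare(u):
    """Product of stakeholder utilities: a simple group-welfare measure."""
    w = 1.0
    for x in u:
        w *= x
    return w

# Toy negotiation space: 4 alternatives scored by 2 stakeholders in [0, 1].
alts = [(0.9, 0.2), (0.6, 0.7), (0.7, 0.6), (0.5, 0.5)]
front = pareto_front(alts)                       # non-dominated alternatives
best = max(front, key=lambda i: nash_welfare(alts[i]))
```

In the full methodology an evolutionary algorithm would generate and refine the alternatives, with this kind of dominance test and welfare measure guiding the search toward the equilibria.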

Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization

Procedia PDF Downloads 108
300 Argos System: Improvements and Future of the Constellation

Authors: Sophie Baudel, Aline Duplaa, Jean Muller, Stephan Lauriol, Yann Bernard

Abstract:

Argos is the main satellite telemetry system used by the wildlife research community, since its creation in 1978, for animal tracking and scientific data collection all around the world, to analyze and understand animal migrations and behavior. Marine mammal biology is one of the major disciplines that has benefited from Argos telemetry, and conversely, the marine mammal biology community has contributed a lot to the growth and development of Argos use cases. The Argos constellation, with 6 satellites in orbit in 2017 (Argos 2 payloads on NOAA 15 and NOAA 18, Argos 3 payloads on NOAA 19, SARAL, METOP A and METOP B), is being extended in the following years with an Argos 3 payload on METOP C (launch in October 2018), and Argos 4 payloads on Oceansat 3 (launch in 2019), CDARS in December 2021 (to be confirmed), METOP SG B1 in December 2022, and METOP SG B2 in 2029. Argos 4 will allow more frequency bands (600 kHz for Argos4NG, instead of 110 kHz for Argos 3), a new modulation dedicated to animal (sea turtle) tracking allowing very low transmission power transmitters (50 to 100 mW) with very low data rates (124 bps), enhancement of high data rates (1200-4800 bps), and improved downlink performance, all contributing to enhanced system capacity (50,000 active beacons per month instead of 20,000 today). In parallel with this ‘institutional Argos’ constellation, in the context of a miniaturization trend in the space industry aimed at reducing costs and multiplying satellites to serve more and more societal needs, the French Space Agency CNES, which designs the Argos payloads, is innovating and launching the Argos ANGELS project (Argos NEO Generic Economic Light Satellites). ANGELS will lead to a nanosatellite prototype with an Argos NEO instrument (30 cm x 30 cm x 20 cm) that will be launched in 2019. In the meantime, the design of the renewal of the Argos constellation, called Argos for Next Generations (Argos4NG), is on track and will be operational in 2022. 
Based on Argos 4 and benefitting from the feedback of the ANGELS project, this constellation will allow revisit times of fewer than 20 minutes on average between two satellite passes and will also bring more frequency bands to improve the overall capacity of the system. The presentation will give an overview of the Argos system, present and future, and the new capacities coming with it. On top of that, use cases of two Argos hardware modules will be presented: the goniometer pathfinder, allowing recovery of Argos beacons at sea or on the ground within a 100 km radius horizon-free circle around the beacon location, and the new Argos 4 chipset called ‘Artic’, already available and tested by several manufacturers.

Keywords: Argos satellite telemetry, marine protected areas, oceanography, maritime services

Procedia PDF Downloads 146
299 Analytical Study of the Structural Response to Near-Field Earthquakes

Authors: Isidro Perez, Maryam Nazari

Abstract:

Numerous earthquakes across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe, and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures to further reduce damage and costs and, ultimately, to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. Earthquakes that occur near fault lines are categorized as near-field; in contrast, a far-field earthquake occurs when the region is farther from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response than a far-field earthquake ground motion. These larger responses may result in serious structural damage, posing a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect of near-field earthquakes on the response of structures. To fully examine this topic, a structure was designed following the current seismic building design specifications, e.g., ASCE 7-10 and ACI 318-14, and analytically modeled utilizing the SAP2000 software. Next, utilizing the FEMA P695 report, several near-field and far-field earthquake records were selected, and the near-field records were scaled to represent the design-level ground motions. Upon doing this, the prototype structural model created in SAP2000 was subjected to the scaled ground motions. 
A linear time history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to aid in properly defining the hinge formation in the structure when conducting the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions. Therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to near-field ground motions.
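For readers unfamiliar with linear time history analysis, the sketch below integrates a single-degree-of-freedom oscillator under a ground-acceleration record using the constant-average-acceleration Newmark method; the period, damping ratio, and the step-like record used in the example are illustrative assumptions, not the paper's prototype structure or its FEMA P695 records.

```python
# Hedged sketch: peak displacement of a unit-mass linear SDOF oscillator
# under a ground-acceleration record, via Newmark integration
# (constant average acceleration: beta = 1/4, gamma = 1/2).
import math

def newmark_peak_disp(ag, dt, period=1.0, zeta=0.05):
    """ag: ground acceleration samples (m/s^2) at spacing dt (s).
    Returns the peak absolute relative displacement (m)."""
    wn = 2.0 * math.pi / period
    m, k = 1.0, wn ** 2
    c = 2.0 * zeta * wn * m
    beta, gamma = 0.25, 0.5
    a1 = m / (beta * dt ** 2) + gamma * c / (beta * dt)
    a2 = m / (beta * dt) + (gamma / beta - 1.0) * c
    a3 = (1.0 / (2.0 * beta) - 1.0) * m + dt * (gamma / (2.0 * beta) - 1.0) * c
    khat = k + a1
    u, v = 0.0, 0.0
    acc = (-m * ag[0] - c * v - k * u) / m      # effective force p = -m*ag
    peak = abs(u)
    for agi in ag[1:]:
        phat = -m * agi + a1 * u + a2 * v + a3 * acc
        u_new = phat / khat
        v_new = (gamma / (beta * dt)) * (u_new - u) \
                + (1.0 - gamma / beta) * v \
                + dt * (1.0 - gamma / (2.0 * beta)) * acc
        acc = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) \
              - (1.0 / (2.0 * beta) - 1.0) * acc
        u, v = u_new, v_new
        peak = max(peak, abs(u))
    return peak

# Step "record" (sudden sustained 1 m/s^2): peak is about 1.85x the static
# displacement 1/wn^2 for 5% damping, as classical dynamics predicts.
peak = newmark_peak_disp([1.0] * 2001, 0.01)
```

Running the same integrator over scaled near-field and far-field records, for oscillators of varying period, is exactly how a response spectrum like the one the research targets is assembled.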

Keywords: near-field, pulse, pushover, time-history

Procedia PDF Downloads 114
298 Pharmacovigilance in Hospitals: Retrospective Study at the Pharmacovigilance Service of UHE-Oran, Algeria

Authors: Nadjet Mekaouche, Hanane Zitouni, Fatma Boudia, Habiba Fetati, A. Saleh, A. Lardjam, H. Geniaux, A. Coubret, H. Toumi

Abstract:

Medicines have undeniably played a major role in prolonging life expectancy and improving quality of life. While the efficacy of a drug remains a lever for innovation, its benefit/risk balance is not always assured, and it does not always have the expected effects. Prior to marketing, knowledge about adverse drug reactions is incomplete. Once on the market, phase IV drug studies begin. For years afterwards, the drug is prescribed with less care to a large number of very heterogeneous patients, often in combination with other drugs. It is at this point that previously unknown adverse effects may appear, hence the need for the implementation of a pharmacovigilance system. Pharmacovigilance comprises all methods for detecting, evaluating, reporting on and preventing the risks of adverse drug reactions. The most severe adverse events occur frequently in hospital, and a significant proportion of adverse events result in hospitalizations. In addition, the consequences of hospital adverse events in terms of length of stay, mortality and costs are considerable. It therefore appears necessary to develop ‘hospital pharmacovigilance’ aimed at reducing the incidence of adverse reactions in hospitals. The most widely used monitoring method in pharmacovigilance is spontaneous notification. However, underreporting of adverse drug reactions is common in many countries and is a major obstacle to pharmacovigilance assessment. It is in this context that this study describes the experience of the pharmacovigilance service at the University Hospital of Oran (EHUO). This is a retrospective study extending from 2011 to 2017, carried out on archived records of declarations collected by the EHUO pharmacovigilance department. Reports were collected by two methods: ‘spontaneous notification’ and ‘active pharmacovigilance’ targeting certain clinical services. We counted 217 declarations, involving 56% female patients and 46% male patients. 
Ages ranged from 5 to 78 years, with an average of 46 years. The most common adverse reaction was drug-induced toxiderma. The drugs in question were essentially, according to the ATC classification, anti-infectives, followed by anticancer drugs. As regards the evolution of declarations by year, a low rate of notification was noted in 2011. That is why we decided to set up an active approach in some services, where a reference resident attended the staff meetings every week. This resulted in an increase in the number of reports, which came essentially from the services where the active approach was installed. This highlights the need for ongoing communication among all relevant health actors to stimulate reporting and secure drug treatments.

Keywords: adverse drug reactions, hospital, pharmacovigilance, spontaneous notification

Procedia PDF Downloads 143
297 Stochastic Modelling for Mixed Mode Fatigue Delamination Growth of Wind Turbine Composite Blades

Authors: Chi Zhang, Hua-Peng Chen

Abstract:

With increasingly demanding resource needs in the world, renewable and clean energy has been considered as an alternative to traditional sources. One practical example of using wind energy is the wind turbine, which has gained more attention in recent research. Like most offshore structures, the blades, which are the most critical components of the wind turbine, will be subjected to millions of loading cycles during their service life. To operate safely in marine environments, the blades are typically made from fibre-reinforced composite materials to resist fatigue delamination and the harsh environment. The fatigue crack development of blades is uncertain because of indeterminate mechanical properties of composites and uncertainties of the offshore environment, such as wave loads, wind loads, and humidity. There are three main delamination failure modes for composite blades, and the most common failure type in practice is mixed mode loading, typically a combination of opening (mode 1) and shear (mode 2). However, fatigue crack development under mixed mode loading cannot be predicted deterministically because of the various uncertainties in realistic practical situations. Therefore, selecting an effective stochastic model to evaluate the mixed mode behaviour of wind turbine blades is a critical issue. In previous studies, the gamma process has been considered an appropriate stochastic approach, as it simulates a deterioration process that proceeds in one direction, matching the realistic fatigue damage accumulation of wind turbine blades. On the basis of existing studies, various Paris law equations are discussed to simulate the propagation of the fatigue crack. This paper develops a Paris model with stochastic deterioration modelling according to the gamma process for predicting fatigue crack performance over the design service life. 
A numerical example of wind turbine composite materials is investigated to predict the mixed mode crack depth by Paris law and the probability of fatigue failure by the gamma process. Probability of failure curves under different situations are obtained from the stochastic deterioration model for comparison. Compared with experimental results, the gamma process can take uncertain values into consideration for crack propagation under mixed mode, and the stochastic deterioration process agrees well with the realistic crack process for composite blades. Finally, according to the predicted results from the gamma stochastic model, assessment strategies for composite blades are developed to reduce total lifecycle costs and increase resistance to fatigue crack growth.
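A minimal sketch of the gamma-process failure-probability calculation is given below. The shape and scale parameters and the critical crack depth are illustrative assumptions, not calibrated blade data, and the exceedance probability is estimated by Monte Carlo rather than from the closed-form incomplete gamma function.

```python
# Hedged sketch: gamma-process deterioration model for fatigue crack depth.
# Crack depth at time t is X(t) ~ Gamma(shape=alpha*t, scale=theta), a
# monotonically increasing stochastic process; failure means X(t) > a_crit.
import random

def failure_probability(t, alpha=2.0, theta=0.05, a_crit=1.0, n=20000, seed=42):
    """Monte Carlo estimate of P(crack depth X(t) > a_crit).
    alpha, theta, a_crit are illustrative, not measured values."""
    rng = random.Random(seed)
    exceed = sum(rng.gammavariate(alpha * t, theta) > a_crit for _ in range(n))
    return exceed / n

# Probability of failure grows with service time as deterioration accumulates.
curve = [failure_probability(t) for t in (5, 10, 15, 20)]
```

Curves like `curve`, generated for different loading situations (different `alpha`, `theta`), are the probability-of-failure comparisons the abstract refers to, and they feed directly into inspection and maintenance scheduling.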

Keywords: reinforced fibre composite, wind turbine blades, fatigue delamination, mixed failure mode, stochastic process

Procedia PDF Downloads 384
296 Governance of Social Media Using the Principles of Community Radio

Authors: Ken Zakreski

Abstract:

This paper considers regulating Canadian Facebook Groups, of a certain size and type, when they reach a threshold of audio-video content. Consider the evolution of the Streaming Act, Parl GC Bill C-11 (44-1), and the regulations that will certainly follow. The Canadian Heritage Minister's office stipulates, "the Broadcasting Act only applies to audio and audiovisual content, not written journalism.” On governance: after 10 years, a community radio station for Gabriola Island, BC, approved by the Canadian Radio-television and Telecommunications Commission (“CRTC”) but never started, became a Facebook Group, “Community Bulletin Board - Life on Gabriola,“ referred to as CBBlog. After CBBlog started and began to gather real traction, a member of the group cloned the membership and ran a competing Facebook group under the banner of "free speech”. Here we see an inflection point (a change of cultural stewardship) with two different measurable results (engagement and membership growth). Canada's telecommunication history of “portability” and “interoperability” made the Facebook Group CBBlog a better option than broadcast FM radio for a community pandemic information-sharing service for Gabriola Island, BC. A culture of ignorance flourishes in social media. Often people do not understand their own experience, or the experience of others, because they do not have the concepts needed for understanding. It is thus important that they not be denied the concepts required for their full understanding; for example, legislators need to know something about gay culture before they can make any decisions about it. Community media policies and CRTC regulations are known, and regulators can use that history to forge forward with regulations for internet platforms of a size and content type that reach a threshold of audio/video content. Mostly volunteer-run media services provide order-of-magnitude lower costs than commercial media. Should Facebook Groups be treated as new media? 
Cathy Edwards, executive director of the Canadian Association of Community Television Users and Stations (“CACTUS”), calls them new media, in that the distribution platform is not the issue. What makes community groups community media? Cathy responded: "... it's bylaws, articles of incorporation that state they are community media; they have accessibility, commitments to skills training; any member of the community can be a member; and there is accountability to a board of directors". Eligibility for funding through CACTUS requires these same commitments. It is risky for a community to invest in a platform whose ownership has not been litigated. Is a Facebook Group an asset of a not-for-profit society? A memo from law student Jared Hubbard summarizes: “Rights and interests in a Facebook group could, in theory, be transferred as property... This theory is currently unconfirmed by Canadian courts.“

Keywords: social media, governance, community media, Canadian radio

Procedia PDF Downloads 41
295 Intelligent Control of Agricultural Farms, Gardens, Greenhouses, Livestock

Authors: Vahid Bairami Rad

Abstract:

Making agricultural fields intelligent allows the temperature, humidity, and other variables affecting the growth of agricultural products to be monitored and controlled online from a mobile phone or computer. Making agricultural fields and gardens smart is one of the best ways to optimize agricultural equipment and has a direct effect on the growth of plants, agricultural products, and farms. Smart farms, built on the Internet of Things and artificial intelligence, are the topic of this paper. Agriculture is becoming smarter every day: from large industrial operations to individuals growing organic produce locally, technology is at the forefront of reducing costs, improving results, and ensuring optimal delivery to market. A key element of smart agriculture is the use of useful data, and modern farmers have more tools to collect intelligent data than in previous years. Data on soil chemistry allows people to make informed decisions about fertilizing farmland. Moisture sensors and accurate irrigation controllers have optimized irrigation processes while reducing the cost of water consumption. Drones can apply pesticides precisely at the desired point. Automated harvesting machines navigate crop fields based on position and capacity sensors. The list goes on: almost any process related to agriculture can use sensors that collect data to optimize existing processes and support informed decisions. The Internet of Things (IoT) is at the center of this great transformation. IoT hardware has grown and developed rapidly to provide low-cost sensors for people's needs. These sensors are embedded in battery-powered IoT devices that can operate for years with access to a low-power, cost-effective mobile network. IoT device management platforms have also evolved rapidly and can now securely manage existing devices at scale. 
IoT cloud services also provide a set of application enablement services that can be easily used by developers, allowing them to focus on building their application business logic. These developments have created powerful new applications in the field of the Internet of Things, and these applications can be used in various industries, such as agriculture, to build smart farms. But the question is, what makes today's farms truly smart farms? Let us put this question another way: when will the technologies associated with smart farms reach the point where the intelligence they provide can exceed that of experienced and professional farmers?
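As a toy illustration of the sensor-driven control described above (thresholds are assumed values, not from the paper), a soil-moisture controller can decide irrigation with hysteresis so the pump does not toggle rapidly around a single set point:

```python
# Hedged sketch: a hysteresis irrigation decision of the kind a moisture
# sensor + controller pair implements. Thresholds are illustrative.

def irrigation_decision(moisture_pct, pump_on, low=30.0, high=45.0):
    """Turn the pump on below `low` % moisture and off above `high` %;
    between the two thresholds, keep the current state (hysteresis)."""
    if moisture_pct < low:
        return True
    if moisture_pct > high:
        return False
    return pump_on

# Simulated sensor readings arriving over time (percent soil moisture).
readings = [50, 40, 28, 33, 46, 44]
state = False
log = []
for r in readings:
    state = irrigation_decision(r, state)
    log.append(state)
# log: [False, False, True, True, False, False]
```

On a real deployment this logic would run on the device (e.g. an Arduino-class board) or in the IoT cloud service, with the decision log reported back to the management platform.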

Keywords: food security, IoT automation, wireless communication, hybrid lifestyle, Arduino Uno

Procedia PDF Downloads 24
294 International Students into the Irish Higher Education System: Supporting the Transition

Authors: Tom Farrelly, Yvonne Kavanagh, Tony Murphy

Abstract:

The sharp rise in international students coming to Ireland has provided colleges with a number of opportunities but also a number of challenges, at an institutional level, for individual lecturers, and of course for the incoming students. Previously, Ireland’s population, and particularly its higher education student population, was largely homogenous, drawn mostly from its own shores and thus reflecting the ethnic, cultural and religious demographics of the day. However, over the past twenty years Ireland witnessed considerable economic growth, downturn and subsequent growth, all of which resulted in an Ireland that has changed both culturally and demographically. Propelled by Ireland’s economic success up to the late 2000s, one of the defining features of this change was an unprecedented rise in the number of migrants, both academic and economic. In 2013, Ireland’s National Forum for the Enhancement of Teaching and Learning in Higher Education (hereafter the National Forum) invited proposals for inter-institutional collaborative projects aimed at different student groups transitioning into or out of higher education. Clearly, both as a country and as a higher education sector, we want incoming students to have a productive and enjoyable time in Ireland. One way the sector can help students make a successful transition is by developing strategies and policies that are well informed and student driven. This abstract outlines the research undertaken by the five colleges (the Institutes of Technology of Carlow, Cork, Tralee and Waterford, and University College Cork) that constitute the Southern cluster, aimed at helping international students transition into the Irish higher education system. The aim of the Southern cluster’s project was to develop a series of online learning units that can be accessed by prospective international students prior to coming to Ireland and by Irish-based lecturing staff. 
However, in order to make the units as relevant and well informed as possible, there was a strong research element to the project. As part of the Southern cluster’s research strategy, a large-scale online survey using SurveyMonkey was undertaken across the five colleges, drawn from their respective international student communities. In total, there were 573 responses from students coming from over twenty different countries. The results from the survey have provided some interesting insights into the way that international students interact with and understand the Irish higher education system. The research and results will act as a model for consistent practice applicable across institutional clusters, thereby allowing institutions to minimise costs and focus on the unique aspects of transitioning international students into their institution.

Keywords: digital, international, support, transitions

Procedia PDF Downloads 264
293 A New Approach for Preparation of Super Absorbent Polymers: In-Situ Surface Cross-Linking

Authors: Reyhan Özdoğan, Mithat Çelebi, Özgür Ceylan, Mehmet Arif Kaya

Abstract:

Super absorbent polymers (SAPs) are defined as materials that can absorb huge amounts of water or aqueous solution in comparison to their own mass and retain it in their lightly cross-linked structure. SAPs are produced from water-soluble monomers via polymerization followed by controlled crosslinking. SAPs are generally used in water-absorbing applications such as baby diapers, patient or elder pads, and other hygienic products. The crosslinking density (CD) of the SAP structure is an essential factor for water absorption capacity (WAC): low internal CD leads to high WAC values and vice versa. However, SAPs with low CD and high swelling capacities tend to disintegrate when pressure is applied to them, so such SAPs cannot absorb liquids effectively under load. In order to prevent this undesired situation and to obtain SAP structures having both high swelling capacity and the ability to work under load, surface crosslinking can be the answer. In industry, these superabsorbent gels are mostly produced via solution polymerization, after which they need to be dried, ground, sized, post-polymerized and finally surface crosslinked (which involves spraying a crosslinking solution onto the dried and ground SAP particles and then curing by heat). These steps are time-consuming and must be handled carefully to obtain the desired final product. If the desired final SAPs could be synthesized using fewer processes, it would help reduce time and production costs, which is important for any industry. In this study, the synthesis of SAPs was achieved successfully by inverse suspension (Pickering type) polymerization with subsequent in-situ surface crosslinking, using proper surfactants in high boiling point solvents. 
Our one-pot synthesis of surface-crosslinked SAPs involves only one preparation step; thus, this technique is more attractive for industry in comparison to conventional multi-step methods. The effects of different surface crosslinking agents on the properties of poly(acrylic acid-co-sodium acrylate) based SAPs are investigated. Surface crosslink degrees are evaluated by the swelling under load (SUL) test. It was determined that the water absorption capacities of the obtained SAPs decrease with increasing surface crosslink density, while their mechanical properties are improved.

Keywords: inverse suspension polymerization, polyacrylic acid, super absorbent polymers (SAPs), surface crosslinking, sodium polyacrylate

Procedia PDF Downloads 295
292 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate on each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including its special forms, the Feynman and Pauli X gates. 
The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
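A classical Python sketch (not the Q# circuit) of the oracle's three checks, plus the standard Grover iteration count of roughly (π/4)·√(2ⁿ), may help fix the idea; the example graph and function names are invented for illustration.

```python
# Hedged sketch: classical mirror of the oracle logic (edge-weight adder,
# uniqueness checker, comparator) and the Grover iteration count.
import math

def oracle_accepts(cycle, weights, best_cost):
    """A candidate cycle (node ordering) is marked iff it visits every
    node exactly once and costs strictly less than the incumbent."""
    n = len(weights)
    if sorted(cycle) != list(range(n)):              # uniqueness checker
        return False
    cost = sum(weights[cycle[i]][cycle[(i + 1) % n]] # edge-weight adder
               for i in range(n))
    return cost < best_cost                          # comparator

def grover_iterations(n_qubits, n_solutions=1):
    """Near-optimal number of oracle applications for Grover search over
    2**n_qubits states with n_solutions marked items."""
    return max(1, round((math.pi / 4)
                        * math.sqrt(2 ** n_qubits / n_solutions)))

# Illustrative 4-node weighted graph (symmetric adjacency matrix).
w = [[0, 2, 9, 1],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [1, 4, 3, 0]]
ok = oracle_accepts([0, 1, 2, 3], w, best_cost=13)   # cycle cost 2+6+3+1 = 12
its = grover_iterations(8)
```

In the quantum version these same checks run reversibly over a superposition of all encoded cycles, and the incumbent `best_cost` is lowered after each Grover round until no cheaper cycle is marked.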

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 149
291 Humic Acid and Azadirachtin Derivatives for the Management of Crop Pests

Authors: R. S. Giraddi, C. M. Poleshi

Abstract:

Organic cultivation of crops is gaining importance as consumer awareness of pesticide-residue-free foodstuffs increases globally. This is also because the high costs of synthetic fertilizers and pesticides are making conventional farming non-remunerative. In India, organic manures (such as vermicompost) are an important input in organic agriculture. Though vermicompost obtained through earthworm- and microbe-mediated processes is known to contain most of the crop nutrients, they are present in small amounts, necessitating enrichment so that crop nourishment is complete. Another characteristic of organic manures is that pest infestations are kept in check due to the induced resistance put up by the crop plants. In the present investigation, deoiled neem cake containing azadirachtin, copper ore tailings (COT) as a source of micro-nutrients, and microbial consortia were added for enrichment of vermicompost. Neem cake is a by-product obtained during oil extraction from neem plant seeds. Three enriched vermicompost blends were prepared using vermicompost (at 70, 65 and 60%), deoiled neem cake (25, 30 and 35%), microbial consortia and COT wastes (5%). Enriched vermicompost was thoroughly mixed, moistened (25±5%), packed and incubated for 15 days at room temperature. In the crop response studies, field trials on chili (Capsicum annum var. longum) and soybean (Glycine max cv JS 335) were conducted during Kharif 2015 at the Main Agricultural Research Station, UAS, Dharwad-Karnataka, India. The vermicompost blend enriched with neem cake (known to possess higher amounts of nutrients) and plain vermicompost were applied to the crops at two dosages and at two intervals of the crop cycle (at sowing and 30 days after sowing) as per the treatment plan, along with 50% of the recommended dose of fertilizer (RDF). Ten plants selected randomly in each plot were studied for pest density and plant damage. 
At maturity, the crops were harvested, the yields were recorded as per the treatments, and the data were analyzed using appropriate statistical tools and procedures. In both chili and soybean, crop nourishment with neem-enriched vermicompost reduced insect density and plant damage significantly compared to the other treatments. These treatments registered as much yield (16.7 to 19.9 q/ha) as conventional chemical control (18.2 q/ha) in soybean, while 72 to 77 q/ha of green chili was harvested in the same treatments, comparable to the chemical control (74 q/ha). The yield superiority of the treatments was of the order neem-enriched vermicompost > conventional chemical control > neem cake > vermicompost > untreated control. The significant feature of the result is that the approach reduces the use of inorganic fertilizers by 50% and synthetic chemical insecticides by 100%.
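The three blend recipes described above are simple mass fractions. As a minimal sketch of the mixing arithmetic (the 100 kg batch size is a hypothetical figure for illustration, not from the study), the component masses per batch can be computed as:

```python
# Component masses for the three enriched vermicompost blends described above.
# The fractions come from the abstract; the batch size is an assumption.
blends = {
    "Blend 1": {"vermicompost": 0.70, "neem cake": 0.25, "COT + consortia": 0.05},
    "Blend 2": {"vermicompost": 0.65, "neem cake": 0.30, "COT + consortia": 0.05},
    "Blend 3": {"vermicompost": 0.60, "neem cake": 0.35, "COT + consortia": 0.05},
}

batch_kg = 100  # hypothetical batch size in kg

for name, fractions in blends.items():
    assert abs(sum(fractions.values()) - 1.0) < 1e-9  # fractions must sum to 100%
    masses = {part: round(frac * batch_kg, 1) for part, frac in fractions.items()}
    print(name, masses)
```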

Keywords: humic acid, azadirachtin, vermicompost, insect-pest

Procedia PDF Downloads 251
290 Food Security in Germany: Inclusion of the Private Sector through Law Reform Faces Challenges

Authors: Agnetha Schuchardt, Jennifer Hartmann, Laura Schulte, Roman Peperhove, Lars Gerhold

Abstract:

If critical infrastructures fail, even for a short period of time, there can be significant negative consequences for the affected population. This is especially true for the food sector, which is strongly interlinked with other sectors like the power supply. A blackout could leave several cities without food supply for numerous days, simply because cash register systems no longer work properly. According to public opinion, securing the food supply in emergencies is a task of the state; in the German context, however, the key players are private enterprises and private households. Both are unaware of their responsibility, and neither can be forced to take preventive measures prior to an emergency. This problem became evident to officials and politicians, so the law covering food security was revised in order to include private stakeholders in mitigation processes. The paper will present a scientific review of governmental and regulatory literature. The focus is the inclusion of the food industry through a law reform and the challenges that still exist. Together with legal experts, an analysis of regulations will be presented that explains the development of the law reform concerning food security and emergency storage in Germany. The main findings are that the existing public food emergency storage is outdated, insufficient and too expensive. The state is required to protect food as a critical infrastructure but does not have the capacities to live up to this role. Through a law reform in 2017, new structures were to be established. The innovation was to include the private sector in the civil defense concept, since it has the required knowledge and experience. But the food industry is still reluctant: preventive measures do not serve economic purposes; on the contrary, they cost money.
The paper will discuss respective examples, such as equipping supermarkets with emergency power supply or self-sufficient cash register systems, and why neither the state nor the economy is willing to cover the costs of these measures. The biggest problem with the new law is that private enterprises can only be compelled to support food security once a state of emergency has already occurred, and not one minute earlier. The paper will cover two main results: the literature review and an expert workshop that will be conducted in summer 2018 with stakeholders from different parts of the food supply chain as well as officials of the public food emergency concept. The results from this participative process will be presented, and recommendations will be offered that show how the private economy could be better included in a modern food emergency concept (e.g., tax reductions for stockpiling).

Keywords: critical infrastructure, disaster control, emergency food storage, food security, private economy, resilience

Procedia PDF Downloads 154
289 Effect of Fresh Concrete Curing Methods on Its Compressive Strength

Authors: Xianghe Dai, Dennis Lam, Therese Sheehan, Naveed Rehman, Jie Yang

Abstract:

Concrete is one of the most widely used construction materials; it may be mixed on site as fresh concrete and then placed in formwork to produce the desired structural shapes. It is well recognized that the raw materials and mix proportions dominate the mechanical characteristics of hardened concrete, and that the curing method and environment applied in the early stages of hardening significantly influence concrete properties such as compressive strength, durability and permeability. In construction practice, various curing methods are used to maintain the presence of mixing water throughout the early stages of concrete hardening. These are also beneficial in hot weather conditions, as they provide cooling and prevent the evaporation of water. Such methods include ponding or immersion, spraying or fogging, and saturated wet coverings. There are also curing methods that reduce the loss of water from the concrete surface, such as covering the concrete with a layer of impervious paper, plastic sheeting or membrane. In the concrete materials laboratory, accelerated strength-gain methods supply the concrete with heat and additional moisture by applying live steam, heating coils or electrically warmed pads. Currently, when determining the mechanical parameters of a concrete, the concrete is usually sampled from fresh concrete on site and then cured and tested in laboratories where standardized curing procedures are adopted. However, in engineering practice, curing procedures on construction sites after the placing of concrete might differ considerably from the laboratory criteria, and some standard curing procedures adopted in the laboratory cannot be applied on site. Sometimes the contractor compromises on curing methods in order to reduce construction costs.
Obviously, the difference between curing procedures adopted in the laboratory and those used on construction sites might lead to over- or under-estimation of the real concrete quality. This paper presents the effect of three typical curing methods (air curing, water immersion curing and plastic film curing), and of maintaining concrete in steel moulds, on the compressive strength development of normal concrete. In this study, Portland cement with 30% fly ash was used, and curing periods of 7, 28 and 60 days were applied. The highest compressive strengths were observed in samples given 7-day water immersion curing and in samples kept in steel moulds up to the testing date. The results imply that concrete used as infill in steel tubular members might develop a higher strength than predicted by design assumptions based on air curing. Wrapping concrete with plastic film as a curing method might delay strength development in the early stages, while water immersion curing for 7 days might significantly increase the compressive strength.

Keywords: compressive strength, air curing, water immersion curing, plastic film curing, maintaining in steel mould, comparison

Procedia PDF Downloads 267
288 The GRIT Study: Getting Global Rare Disease Insights Through Technology Study

Authors: Aneal Khan, Elleine Allapitan, Desmond Koo, Katherine-Ann Piedalue, Shaneel Pathak, Utkarsh Subnis

Abstract:

Background: Disease management of metabolic, genetic disorders is long-term and can be cumbersome for patients and caregivers. Patient-Reported Outcome Measures (PROMs) have been a useful tool for capturing patient perspectives, helping to enhance treatment compliance and engagement with health care providers, reduce the utilization of emergency services, and increase satisfaction with treatment choices. Currently, however, PROMs are collected during infrequent and decontextualized clinic visits, which makes the translation of patient experiences over time challenging. The GRIT study aims to evaluate a digital health journal application called Zamplo that provides a personalized health diary for recording self-reported health outcomes accurately and efficiently in patients with metabolic, genetic disorders. Methods: This is a 1:1 randomized controlled trial (RCT) that assesses the efficacy of Zamplo in increasing patient activation (primary outcome), improving healthcare satisfaction and confidence to manage medications (secondary outcomes), and reducing costs to the healthcare system (exploratory outcome). Using standardized online surveys, assessments will be collected at baseline, 1 month, 3 months, 6 months, and 12 months. Outcomes will be compared between patients given access to the application and those without access. Results: Seventy-seven patients had been recruited as of November 30, 2021. Recruitment commenced in November 2020 with a target of n=150 patients. The accrual rate was 50% of those eligible and invited for the study, with the majority of patients having Fabry disease (n=48) and the remainder having Pompe disease or mitochondrial disease. Real-time clinical responses, such as pain, are being measured and correlated with disease-modifying therapies, supportive treatments like pain medications, and lifestyle interventions. Engagement with the application, along with compliance metrics for surveys and journal entries, is being analyzed.
An interim analysis of the engagement data, along with preliminary findings from this pilot RCT and qualitative patient feedback, will be presented. Conclusions: The digital self-care journal provides a unique approach to disease management, giving patients direct access to their progress and allowing them to participate actively in their care. Findings from the study can help serve the virtual care needs of patients with metabolic, genetic disorders in North America and worldwide.

Keywords: eHealth, mobile health, rare disease, patient outcomes, quality of life (QoL), pain, Fabry disease, Pompe disease

Procedia PDF Downloads 128
287 Benefits of Shaping a Balance on Environmental and Economic Sustainability for Population Health

Authors: Edna Negron-Martinez

Abstract:

The global challenges and trends of our time, such as those associated with climate change, demographic displacements, growing health inequalities, and the increasing burden of disease, have complex connections to the determinants of health. Information on the causes and prevention of the burden of disease is fundamental for public health actions, such as preparedness and response for disasters and recovery resources after the event. For instance, there is an increasing consensus about key findings on the effects and connections of the global burden of disease: it generates substantial healthcare costs, consumes essential resources and prevents the attainment of optimal health and well-being. The goal of this research endeavor is to promote a comprehensive understanding of the connections between social, environmental, and economic influences on health. These connections are illustrated by drawing on the core curricula of multidisciplinary areas, such as urban design, energy, housing, and economics, as well as on the health system itself. A systematic review of primary and secondary data covered a variety of issues, such as global health, natural disasters, and the impacts of critical pollution on people's health and ecosystems. Environmental health is challenged by unsustainable consumption patterns and the resulting contaminants that abound in many cities and urban settings around the world. Poverty, inadequate housing, and poor health are usually linked. The house is a primary environmental health context for any individual, and especially for more vulnerable groups such as children, older adults and those who are sick. Nevertheless, very few countries show a strong decoupling of environmental degradation from economic growth, as indicated by a recent 2017 report of the World Bank.
Worth noting, a 2016 World Health Organization (WHO) report on the environmental fraction of the global burden of disease estimated that 12.6 million deaths globally, accounting for 23% (95% CI: 13-34%) of all deaths, were attributable to the environment. Environmental contaminants include heavy metals, noise pollution, light pollution, and urban sprawl. These key findings underscore the urgency of adopting, on a global scale, the United Nations post-2015 Sustainable Development Goals (SDGs). The SDGs address the social, environmental, and economic factors that influence health and health inequalities, showing how these sectors, in turn, benefit from a healthy population. Consequently, more action is necessary from an inter-sectoral and systemic paradigm to enforce an integrated sustainability policy aimed at the environmental, social, and economic determinants of health.

Keywords: building capacity for workforce development, ecological and environmental health effects of pollution, public health education, sustainability

Procedia PDF Downloads 84
286 The Scenario Analysis of Shale Gas Development in China by Applying Natural Gas Pipeline Optimization Model

Authors: Meng Xu, Alexis K. H. Lau, Ming Xu, Bill Barron, Narges Shahraki

Abstract:

As an emerging unconventional energy source, shale gas has been an economically viable step towards a cleaner energy future in the U.S. China also has shale resources, estimated to be potentially the largest in the world. In addition, China has enormous unmet demand for a clean alternative to coal. Nonetheless, the geological complexity of China's shale basins and issues of water scarcity potentially impose serious constraints on shale gas development in China. Further, even if China could replicate the U.S. shale gas boom to a significant degree, it faces the problem of transporting the gas efficiently overland, given the limited throughput capacity and coverage of its pipeline network. The aim of this study is to identify potential bottlenecks in China's gas transmission network, and to examine how shale gas development affects particular supply locations and demand centers. We examine this through three scenarios of projected domestic shale gas supply by 2020: optimistic, medium and conservative, drawing on the International Energy Agency's (IEA's) projections and China's shale gas development plans. Separately, we project gas demand at the provincial level, since shale gas will have a more significant impact regionally than nationally. To quantitatively assess each shale gas development scenario, we formulated a gas pipeline optimization model. We used ArcGIS to generate the connectivity parameters and pipeline segment lengths; other parameters were collected from provincial "twelfth five-year" plans and the "China Oil and Gas Pipeline Atlas". The multi-objective optimization model, implemented in GAMS and MATLAB, aims to minimize the demand that cannot be met while simultaneously minimizing total gas supply and transmission costs.
The results indicate that, even when the primary objective is to meet the projected gas demand rather than to minimize cost, there is a shortfall of 9% in meeting total demand under the medium scenario. Comparing the optimistic and medium shale gas supply scenarios, almost half of the shale gas produced in Sichuan province and Chongqing cannot be transmitted out by pipeline. On the demand side, with increased shale gas supply, the gas demand gaps of Henan province and Shanghai could be filled by as much as 82% and 39%, respectively. To conclude, the pipeline network in China is currently insufficient to meet the projected natural gas demand in 2020 under the medium and optimistic scenarios, indicating the need for substantial capacity expansion of parts of the existing network and the importance of constructing new pipelines from particular supply sites to demand sites. If the pipeline constraint is overcome, the gas demand gaps of Beijing, Shanghai, Jiangsu and Henan could potentially be filled, and China could thereby reduce its dependency on LNG imports by almost 25% under the optimistic scenario.
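The study's model was built in GAMS and MATLAB on ArcGIS-derived network data, which are not reproduced here. As an illustration of the underlying formulation only, a toy version of such a model (an invented two-supply, two-demand network; all capacities, costs and the unmet-demand penalty are assumptions) that minimizes transmission cost plus a penalty on unmet demand can be sketched with `scipy.optimize.linprog`:

```python
# Toy pipeline-flow LP in the spirit of the model described above:
# minimize transmission cost plus a heavy penalty on unmet demand.
# The network, capacities, costs and penalty are invented for illustration.
from scipy.optimize import linprog

# Decision variables: flows f11, f12, f21, f22 (supply i -> demand j),
# plus slack variables s1, s2 (unmet demand at each demand center).
cost = [1.0, 2.0, 1.5, 1.0,   # unit transmission cost per arc
        100.0, 100.0]         # penalty per unit of unmet demand

# Demand balance (equality): delivered flow + slack = demand
A_eq = [[1, 0, 1, 0, 1, 0],   # demand center 1
        [0, 1, 0, 1, 0, 1]]   # demand center 2
b_eq = [8.0, 9.0]

# Supply capacity (inequality): total outflow <= field capacity
A_ub = [[1, 1, 0, 0, 0, 0],   # supply field 1
        [0, 0, 1, 1, 0, 0]]   # supply field 2
b_ub = [10.0, 5.0]

# Arc throughput limits as variable bounds; slacks are unbounded above.
bounds = [(0, 6), (0, 4), (0, 5), (0, 3), (0, None), (0, None)]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
unmet = res.x[4] + res.x[5]
print(f"objective = {res.fun:.1f}, unmet demand = {unmet:.1f}")
```

Even in this toy network, the arc capacity limits leave part of the demand unserved despite ample supply, mirroring the kind of transmission bottleneck the study identifies at the provincial scale.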

Keywords: energy policy, energy systematic analysis, scenario analysis, shale gas in China

Procedia PDF Downloads 256
285 Translation and Validation of the Pain Resilience Scale in a French Population Suffering from Chronic Pain

Authors: Angeliki Gkiouzeli, Christine Rotonda, Elise Eby, Claire Touchet, Marie-Jo Brennstuhl, Cyril Tarquinio

Abstract:

Resilience is a psychological concept of possible relevance to the development and maintenance of chronic pain (CP). It refers to the ability of individuals to maintain reasonably healthy levels of physical and psychological functioning when exposed to an isolated and potentially highly disruptive event. Extensive research in recent years has supported the importance of this concept in the CP literature: increased levels of resilience were associated with lower levels of perceived pain intensity and better mental health outcomes in adults with persistent pain. The ongoing project seeks to introduce the concept of pain-specific resilience into the French literature in order to provide more appropriate measures for assessing and understanding the complexities of CP in the near future. To the best of our knowledge, there is currently no validated version of the pain-specific resilience measure, the Pain Resilience Scale (PRS), for French-speaking populations. Therefore, the present work aims to address this gap, firstly by performing a linguistic and cultural translation of the scale into French, and secondly by studying the internal validity and reliability of the PRS for French CP populations. The forward-translation/back-translation methodology was used to achieve as faithful a cultural and linguistic translation as possible, following the recommendations of the COSMIN (Consensus-based Standards for the selection of health Measurement Instruments) group, and an online survey is currently being conducted among a representative sample of the French population suffering from CP. To date, the survey has involved one hundred respondents, with a total target of around three hundred participants at its completion. We further seek to study the metric properties of the French version of the PRS, ''L’Echelle de Résilience à la Douleur spécifique pour les Douleurs Chroniques'' (ERD-DC), in French patients suffering from CP, assessing the level of pain resilience in the context of CP.
Finally, we will explore the relationship between the level of pain resilience in the context of CP and other variables of interest commonly assessed in pain research and treatment (i.e., general resilience, self-efficacy, pain catastrophising, and quality of life). This study will provide an overview of the methodology used to address our research objectives. We will also present for the first time the main findings and further discuss the validity of the scale in the field of CP research and pain management. We hope that this tool will provide a better understanding of how CP-specific resilience processes can influence the development and maintenance of this disease. This could ultimately result in better treatment strategies specifically tailored to individual needs, thus leading to reduced healthcare costs and improved patient well-being.
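Internal consistency in scale validations of this kind is commonly summarised with Cronbach's alpha. As a minimal sketch (alpha is our assumed choice of statistic for illustration, and the item scores below are made up, not ERD-DC data), it can be computed as:

```python
# Cronbach's alpha, a common internal-consistency statistic in scale
# validation. Rows are respondents, columns are scale items; the data
# below are invented for illustration.
def cronbach_alpha(scores):
    """Return Cronbach's alpha for a list of per-respondent item-score rows."""
    n_items = len(scores[0])
    columns = list(zip(*scores))

    def variance(values):  # population variance
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / len(values)

    item_var_sum = sum(variance(col) for col in columns)
    total_var = variance([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1 - item_var_sum / total_var)

sample = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
]
print(f"alpha = {cronbach_alpha(sample):.3f}")
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency for a new scale.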

Keywords: chronic pain, pain measure, pain resilience, questionnaire adaptation

Procedia PDF Downloads 64
284 Natural Monopolies and Their Regulation in Georgia

Authors: Marina Chavleishvili

Abstract:

Introduction: Today, the study of monopolies, including natural monopolies, is topical. In practice, pure monopolies are natural monopolies. Natural monopolies are widespread and are regulated by the state; in particular, their prices and rates are regulated. The paper considers the problems associated with the operation of natural monopolies in Georgia, in particular their microeconomic analysis, pricing mechanisms, and the legal mechanisms of their operation. The analysis was carried out on the example of the power industry. The rates of natural monopolies in Georgia are controlled by the Georgian National Energy and Water Supply Regulation Commission. The paper analyzes the positive role and importance of the regulatory body and the issues of improving the legislative base that would support the efficient operation of the sector. Methodology: In order to highlight natural monopoly market tendencies, the domestic and international markets are studied. An analysis of monopolies is carried out based on the endogenous and exogenous factors that determine the condition of companies, as well as the strategies chosen by firms to increase market share. According to the productivity-based competitiveness assessment scheme, the segmentation opportunities, business environment, resources, and geographical location of monopolist companies are revealed. Main Findings: As a result of the analysis, certain assessments and conclusions were made. Natural monopolies are a complex and versatile economic element, and it is important to specify and duly control their framework conditions. It is important to determine the pricing policy of natural monopolies: rates should be transparent, should reflect the standard of living in the country, and should correspond to incomes. The analysis confirmed the significant role of the Antimonopoly Service in the efficient management of natural monopolies.
The law should adapt to reality and should be applied only to regulate the market. The present-day differential electricity tariffs, which vary depending on the consumed electrical power, need revision. The effects of electricity price discrimination are important, in particular segmentation across seasons. Consumers use more electricity in winter than in summer, which is associated with extra capacities and maintenance costs. If the price of electricity in winter were higher than in summer, electricity consumption would decrease in winter: consumers would start to use electricity more economically, which would allow extra capacities to be reduced. Conclusion: Thus, the practical realization of the views given in the paper will contribute to the efficient operation of natural monopolies. Consequently, their activity will be oriented not towards reducing but towards increasing the gains of consumers and producers. Overall, optimal management of the given fields will allow well-being to be improved throughout the country. In the article, conclusions are drawn and recommendations are developed for delivering effective policies and regulations for natural monopolies in Georgia.

Keywords: monopolies, natural monopolies, regulation, antimonopoly service

Procedia PDF Downloads 62
283 Pump-as-Turbine: Testing and Characterization as an Energy Recovery Device, for Use within the Water Distribution Network

Authors: T. Lydon, A. McNabola, P. Coughlan

Abstract:

Energy consumption in the water distribution network (WDN) is a well-established problem: the industry contributes heavily to carbon emissions, with 0.9 kg of CO2 emitted per m3 of water supplied. It is estimated that 85% of the energy wasted in the WDN can be recovered by installing turbines. The potential within existing networks lies at small-capacity sites (5-10 kW) that are numerous and dispersed across networks. However, traditional turbine technology cannot be scaled down to this size in an economically viable fashion, so alternative approaches are needed. This research aims to enable energy recovery within the WDN by exploring the potential of pumps-as-turbines (PATs). PATs are estimated to be ten times cheaper than traditional micro-hydro turbines, so they could contribute to an economically viable solution. However, a number of technical constraints currently prohibit their widespread use, including the inability of a PAT to control pressure, difficulty in PAT selection due to a lack of performance data, and a lack of understanding of how PATs can cater for flow fluctuations as extreme as +/- 50% of the average daily flow, which are characteristic of the WDN. A PAT prototype is undergoing testing in order to identify the capabilities of the technology. Preliminary testing measured the efficiency and power potential of the PAT under varying flow and pressure conditions, in order to develop characteristic and efficiency curves and a baseline understanding of the technology's capabilities. The results are presented here: • The limitations of existing selection methods, which convert the best efficiency point (BEP) in pump operation to the BEP in turbine operation, were highlighted by their failure to reflect the conditions of maximum efficiency of the PAT.
A generalised selection method for the WDN may need to be informed by an understanding of the impact of flow variations and pressure control on system power potential, capital cost, maintenance costs, and payback period. • A clear relationship between flow and the efficiency of the PAT has been established. The rate of efficiency reduction for flows +/- 50% of the BEP is significant, and more extreme for deviations in flow above the BEP than below, but not dissimilar to the efficiency behaviour of other turbines. • A PAT alone is not sufficient to regulate pressure, yet the relationship of pressure across the PAT is foundational in exploring ways in which PAT energy recovery systems can maintain the required pressure level within the WDN. The efficiencies of PAT energy recovery systems operating under conditions of pressure regulation, which have been conceptualised in the current literature, remain to be established. These initial results guide the focus of forthcoming testing and exploration of PAT technology towards how PATs can form part of an efficient energy recovery system.
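The finding that efficiency falls off faster above the BEP than below can be sketched with a simple asymmetric-parabola efficiency model around the best efficiency point. Every number and the curve shape below are illustrative assumptions, not the prototype's measured data:

```python
# Illustrative PAT shaft-power sketch: hydraulic power rho*g*Q*H scaled by
# an assumed efficiency curve that droops faster above the BEP than below,
# mirroring the qualitative finding reported above. All numbers are invented.
RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def pat_efficiency(q, q_bep, eta_bep=0.65, droop_below=0.8, droop_above=1.6):
    """Parabolic efficiency model centred on the best efficiency point (BEP)."""
    dev = (q - q_bep) / q_bep
    droop = droop_above if dev > 0 else droop_below
    return max(0.0, eta_bep * (1.0 - droop * dev ** 2))

def pat_power_kw(q, head, q_bep):
    """Shaft power (kW) at flow q (m^3/s) and net head (m)."""
    return RHO * G * q * head * pat_efficiency(q, q_bep) / 1000.0

q_bep = 0.02  # hypothetical BEP flow, m^3/s
head = 40.0   # hypothetical net head across the PAT, m

for q in (0.5 * q_bep, q_bep, 1.5 * q_bep):
    print(f"Q = {q:.3f} m^3/s -> {pat_power_kw(q, head, q_bep):.2f} kW")
```

At the assumed BEP this lands at about 5 kW, the lower end of the small-capacity sites the research targets.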

Keywords: energy recovery, pump-as-turbine, water distribution network

Procedia PDF Downloads 237