Search results for: stochastic volatility

69 An Integrated Approach to Handle Sour Gas Transportation Problems and Pipeline Failures

Authors: Venkata Madhusudana Rao Kapavarapu

Abstract:

The Intermediate Slug Catcher (ISC) facility was built to process nominally 234 MSCFD of export gas from the booster station on a day-to-day basis and to receive liquid slugs of up to 1600 m³ (10,000 BBLS) in volume when the incoming 24” gas pipelines are pigged following upsets or production of non-dew-pointed gas from the gathering centers. The maximum slug sizes expected are 812 m³ (5100 BBLS) in winter and 542 m³ (3400 BBLS) in summer after operating for a month or more at 100 MMSCFD of wet gas, comprising 60 MMSCFD of treated gas from the booster station combined with 40 MMSCFD of untreated gas from a gathering center. The water content is approximately 60% but may be higher if the line is not pigged for an extended period, owing to the relative volatility of the condensate compared to water. In addition to its primary function as a slug catcher, the ISC facility will receive pigged liquids from the upstream and downstream segments of the 14” condensate pipeline, returned liquids from the AGRP pigged through the 8” pipeline, and blown-down fluids from the 14” condensate pipeline prior to maintenance. These fluids will be received in the condensate flash vessel or the condensate separator, depending on the specific operation, for the separation of water and condensate and the settlement of solids scraped from the pipelines. Condensate meeting the colour and 200 ppm water specifications will be dispatched to the AGRP through the 14” pipeline, while off-spec material will be returned to BS-171 via the existing 10” condensate pipeline. When not in operation, the existing 24” export gas pipeline and the 10” condensate pipeline will be maintained under export gas pressure, ready for operation. The gas manifold area contains the interconnecting piping and valves needed to align the slug catcher with either of the 24” export gas pipelines from the booster station and to direct the gas to the downstream segment of either of these pipelines. The manifold enables the slug catcher to be bypassed if it needs to be maintained or if through-pigging of the gas pipelines is to be performed. All gas, whether bypassing the slug catcher or returning to the gas pipelines from it, passes through black powder filters to reduce the level of particulates in the stream; these filters are connected to the closed drain vessel to drain the collected liquid. Condensate from the booster station is transported to the AGRP through the 14” condensate pipeline. The existing 10” condensate pipeline will be used as a standby and for utility functions such as returning condensate from the AGRP to the ISC or booster station, or transporting off-spec fluids from the ISC back to the booster station. The manifold contains block valves that allow the two condensate export lines to be segmented at the ISC, thus facilitating independent bi-directional flow in the upstream and downstream segments, which ensures complete pipeline and facility integrity. Pipeline failures will be attended to with the latest technologies, such as remote techno-plug techniques, and repair activities will be carried out as needed. Pipeline integrity will be evaluated with ILI (in-line inspection) pigging to assess pipeline condition.

Keywords: integrity, oil & gas, innovation, new technology

Procedia PDF Downloads 67
68 Understanding the Influence of Fibre Meander on the Tensile Properties of Advanced Composite Laminates

Authors: Gaoyang Meng, Philip Harrison

Abstract:

When manufacturing composite laminates, the fibre directions within the laminate are never perfectly straight and inevitably contain some degree of stochastic in-plane waviness or ‘meandering’. In this work we aim to understand the relationship between the degree of meandering of the fibre paths and the resulting uncertainty in the laminate’s final mechanical properties. To do this, a numerical tool is developed to automatically generate meandering fibre paths in each of the laminate's 8 plies (using Matlab); after mapping this information into finite element simulations (using Abaqus), the statistical variability of the tensile mechanical properties of a [45°/90°/-45°/0°]s carbon/epoxy (IM7/8552) laminate is predicted. The stiffness, first-ply failure strength and ultimate failure strength are obtained. Results are generated by inputting the degree of variability in the fibre paths, and the laminate is then examined in all directions (from 0° to 359° in increments of 1°). The resulting predictions are output as flower (polar) plots for convenient analysis. The average fibre orientation of each ply in a given laminate is determined by the laminate layup code [45°/90°/-45°/0°]s; however, in each case, the plies contain increasingly large amounts of in-plane waviness (quantified by the standard deviation of the fibre direction in each ply across the laminate). Four different amounts of variability in the fibre direction are tested (2°, 4°, 6° and 8°). Results show that both the average tensile stiffness and the average tensile strength decrease, while the standard deviations increase, with an increasing degree of fibre meander. The variability in stiffness is found to be relatively insensitive to the rotation angle, but the variability in strength is sensitive. Specifically, the uncertainty in laminate strength is relatively low at orientations centred around multiples of 45° rotation angle, and relatively high between these rotation angles. To concisely represent all the information contained in the various polar plots, rotation-angle-dependent Weibull distribution equations are fitted to the data. The resulting equations can be used to quickly estimate the size of the error bars for the different mechanical properties resulting from the amount of fibre directional variability contained within the laminate. A longer-term goal is to use these equations to quickly introduce realistic variability at the component level.
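
As an illustration of the stochastic waviness fields and the Weibull fitting described above, a minimal Python sketch is given below. The smoothed-noise construction, correlation length and synthetic strength samples are illustrative assumptions on our part; the authors' actual tool is implemented in Matlab and coupled to Abaqus.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import weibull_min

rng = np.random.default_rng(0)

def meander_field(nx=100, ny=100, target_std=4.0, corr_length=10):
    """Smoothed Gaussian noise as a stand-in for stochastic in-plane fibre
    waviness: returns per-element angular deviations (degrees) with the
    requested standard deviation. The correlation length is a free choice."""
    noise = gaussian_filter(rng.standard_normal((nx, ny)), corr_length)
    return noise * target_std / noise.std()

# One deviation field per ply of the [45/90/-45/0]s layup (8 plies).
layup = [45, 90, -45, 0, 0, -45, 90, 45]
ply_angles = [nominal + meander_field() for nominal in layup]

# Hypothetical strength samples (MPa), standing in for repeated FE runs at
# one rotation angle, and a two-parameter Weibull fit to them.
strengths = rng.weibull(20.0, 50) * 800.0
shape, loc, scale = weibull_min.fit(strengths, floc=0.0)
print(f"Weibull shape = {shape:.1f}, scale = {scale:.0f} MPa")
```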

Keywords: advanced composite laminates, FE simulation, in-plane waviness, tensile properties, uncertainty quantification

Procedia PDF Downloads 84
67 Cascade Multilevel Inverter-Based Grid-Tie Single-Phase and Three-Phase-Photovoltaic Power System Controlling and Modeling

Authors: Syed Masood Hussain

Abstract:

An effective control method, including system-level control and pulse width modulation, for a quasi-Z-source cascade multilevel inverter (qZS-CMI) based grid-tie photovoltaic (PV) power system is proposed. The system-level control achieves grid-tie current injection, independent maximum power point tracking (MPPT) for separate PV panels, and dc-link voltage balance for all quasi-Z-source H-bridge inverter (qZS-HBI) modules. Photovoltaic power generation has seen a recent upsurge of interest, since PV panels convert solar radiation directly into electric power without harming the environment. However, the stochastic fluctuation of solar power is inconsistent with the desired stable power injected into the grid, owing to variations of solar irradiation and temperature. To fully exploit the solar energy, extracting the PV panels’ maximum power and feeding it into the grid at unity power factor becomes the most important objective. Important contributions have been made by the cascade multilevel inverter (CMI). Nevertheless, the H-bridge inverter (HBI) module lacks a boost function, so the inverter kVA rating requirement has to be doubled for a PV voltage range of 1:2, and the different PV panel output voltages result in imbalanced dc-link voltages. A dc–dc boost stage in front of each HBI can address this; however, each HBI module then becomes a two-stage inverter, and the many extra dc–dc converters not only increase the complexity of the power circuit, the control, and the system cost, but also decrease the efficiency. Recently, Z-source/quasi-Z-source cascade multilevel inverter (ZS/qZS-CMI) based PV systems were proposed; they possess the advantages of both the traditional CMI and Z-source topologies. In order to properly operate the ZS/qZS-CMI, power injection, independent control of the dc-link voltages, and pulse width modulation (PWM) are necessary. The main contributions of this paper include: 1) a novel multilevel space vector modulation (SVM) technique for the single-phase qZS-CMI, implemented without additional resources; and 2) a grid-connected control for the qZS-CMI based PV system, where all the PV panel voltage references from their independent MPPTs are used to control the grid-tie current, combined with a dual-loop dc-link peak voltage control.
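
The independent per-panel MPPT mentioned above is commonly realized with a perturb-and-observe scheme. The Python sketch below shows that generic textbook algorithm, not the specific controller of the qZS-CMI system; the voltage step size and calling convention are assumptions.

```python
def perturb_and_observe(v_prev, p_prev, v_now, p_now, step=0.5):
    """One perturb-and-observe MPPT iteration: keep perturbing the panel
    voltage reference in the direction that increased power, reverse
    otherwise. Returns the next voltage reference (volts)."""
    if p_now > p_prev:                      # last perturbation helped
        direction = 1.0 if v_now > v_prev else -1.0
    else:                                   # last perturbation hurt
        direction = -1.0 if v_now > v_prev else 1.0
    return v_now + direction * step

# Called once per control period with the measured panel voltage and power.
v_ref = perturb_and_observe(v_prev=30.0, p_prev=95.0, v_now=30.5, p_now=97.0)
print(v_ref)  # 31.0 -> power rose while voltage rose, keep increasing
```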

Keywords: quasi-Z-source inverter, photovoltaic power system, space vector modulation, cascade multilevel inverter

Procedia PDF Downloads 539
66 Adaption of the Design Thinking Method for Production Planning in the Meat Industry Using Machine Learning Algorithms

Authors: Alica Höpken, Hergen Pargmann

Abstract:

The resource-efficient planning of the complex production planning processes in the meat industry and the reduction of food waste is a permanent challenge. The complexity of the production planning process occurs in every part of the supply chain, from agriculture to the end consumer, and arises from long and uncertain planning phases. Uncertainties such as stochastic yields, fluctuations in demand, and resource variability are part of this process. In the meat industry, waste mainly relates to incorrect storage, technical causes in production, or overproduction. The high amount of food waste along the complex supply chain in the meat industry could not be reduced by simple solutions until now, and resource-efficient production planning by conventional methods is therefore currently only partially feasible. Intelligent, automated production planning becomes possible through the application of machine learning algorithms, such as those of reinforcement learning. By adapting the design thinking method to this application area, machine learning methods (especially reinforcement learning algorithms) are applied to the complex production planning process in the meat industry. This standardized approach makes a resource-efficient production planning process available and offers new possibilities for handling the complexity and the high time consumption of the planning task; it thus serves as a tool to support efficient production planning in the meat industry. This paper shows an elegant adaptation of the design thinking method that applies reinforcement learning to a resource-efficient production planning process in the meat industry. Subsequently, the steps necessary to introduce machine learning algorithms into the production planning of the food industry are determined. This is achieved on the basis of a case study which is part of the research project ”REIF - Resource Efficient, Economic and Intelligent Food Chain” supported by the German Federal Ministry for Economic Affairs and Climate Action and the German Aerospace Center. Through this structured approach, significantly better planning results are achieved than with conventional methods, for which the problem would be too complex or very time-consuming.
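
As a rough illustration of applying reinforcement learning to such a planning loop, the Python sketch below trains a tabular Q-learning agent on a toy stock-and-batch model with stochastic demand, penalizing both overproduction (waste) and shortage. All states, rewards and distributions are invented placeholders, not the REIF project's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy production-planning MDP: states are stock levels, actions are batch
# sizes. Demand is stochastic; unmet demand and surplus are penalised.
N_STOCK, N_ACTION = 20, 5
Q = np.zeros((N_STOCK, N_ACTION))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(stock, batch):
    demand = rng.poisson(3)
    produced = min(stock + batch, N_STOCK - 1)
    sold = min(produced, demand)
    waste = produced - sold              # overproduction -> food waste
    shortage = demand - sold
    reward = 2.0 * sold - 1.0 * waste - 3.0 * shortage
    return produced - sold, reward       # carry-over stock, reward

state = 0
for _ in range(50_000):
    action = rng.integers(N_ACTION) if rng.random() < eps else int(Q[state].argmax())
    nxt, r = step(state, action)
    Q[state, action] += alpha * (r + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print("Greedy batch size per stock level:", Q.argmax(axis=1))
```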

Keywords: change management, design thinking method, machine learning, meat industry, reinforcement learning, resource-efficient production planning

Procedia PDF Downloads 123
65 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life

Authors: Desplanches Maxime

Abstract:

Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
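
The pipeline described above (dimensionality reduction on the model-generated database followed by a regression function) can be sketched with scikit-learn as follows. The synthetic arrays stand in for the Newman-model aging database, and t-SNE is left out on the assumption that it serves exploratory visualization rather than the regression itself.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

# Stand-in for the model-generated database: each row is one simulated cell
# state (non-destructive aging indicators), the target is remaining useful
# life. All values are synthetic placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 40))                                # indicators
y = 100.0 - X[:, :5].sum(axis=1) * 3 + rng.normal(0, 2, 2000)  # RUL proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

pca = PCA(n_components=10).fit(X_tr)          # extract pivotal components
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(pca.transform(X_tr), y_tr)

err = mean_absolute_percentage_error(y_te, model.predict(pca.transform(X_te)))
print(f"mean absolute percentage error: {100 * err:.1f}%")
```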

Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression

Procedia PDF Downloads 63
64 Captive Insurance in Hong Kong and Singapore: A Promising Risk Management Solution for Asian Companies

Authors: Jin Sheng

Abstract:

This paper addresses a promising area of the insurance sector in Asia. Captive insurance, which provides risk-mitigation services for its parent company, has great potential in energy, infrastructure, agriculture, logistics, catastrophe, and alternative risk transfer (ART) applications, and will significantly affect the framework of the insurance industry. However, the Asian captive insurance market accounts for only a small proportion of the global market. The recent supply chain interruption case of Hanjin Shipping indicates the significance of risk management for an Asian company’s sustainability and resilience. China has substantial needs and great potential to develop captive insurance, on account of currency volatility, enterprises’ credit risks, and the legal and operational risks of the Belt and Road initiative. To date, Mainland Chinese enterprises have only four offshore captives (incorporated by CNOOC, Sinopec, Lenovo and CGN Power), three onshore captive insurance companies (incorporated by CNPC, China Railway, and COSCO), and one industrial captive insurance organization, the China Shipowners Mutual Assurance Association. The market has grown slowly, with one or two captive insurers licensed yearly since September 2011. As an international financial center, Hong Kong has comparative advantages in taxation, professionals, market access and well-established financial infrastructure for developing a functional captive insurance market. For example, Hong Kong’s income tax for an insurance company is 16.5%, while China's income tax for an insurance company is 25% plus a business tax of 5%. Furthermore, restrictions on the market entry and operations of China’s onshore captives make establishing offshore captives in international or regional captive insurance centers such as Singapore and Hong Kong, or in other overseas jurisdictions, an attractive option. Thus, there are abundant business opportunities in this area. Using comparative studies and case analysis, this paper discusses the incorporation, regulatory issues, taxation and prospects of the captive insurance markets in Hong Kong, China and Singapore. Hong Kong and Singapore are both international financial centers with prominent advantages in tax concessions, technology, implementation, professional services, and well-functioning legal systems. Singapore, as the domicile of 71 active captives, has been the largest captive insurance hub in Asia, as well as an established reinsurance hub. Hong Kong is an emerging captive insurance hub with 5 to 10 newly licensed captives each year, according to the Hong Kong Financial Services Development Council, and it is predicted that Hong Kong will become a domicile for 50 captive insurers by 2025. This paper also compares the formation of a captive in Singapore with that in other jurisdictions such as Bermuda and Vermont.

Keywords: Alternative Risk Transfer (ART), captive insurance company, offshore captives, risk management, reinsurance, self-insurance fund

Procedia PDF Downloads 221
63 Content Monetization as a Mark of Media Economy Quality

Authors: Bela Lebedeva

Abstract:

The characteristics of the Web as a channel of information dissemination (accessibility and openness, interactivity and multimedia news) are becoming broader and reach audiences quickly, positively affecting the perception of content but blurring the understanding of journalistic work. As a result, audiences and advertisers continue migrating to the Internet. Moreover, online targeting makes it possible to monetize not only the audience (as is customary for traditional media) but also the content and the traffic, and to do so more accurately. As users identify themselves with the qualitative characteristics of the new market, its actors take shape. A conflict of interests lies at the base of the economy of their relations, the problem of a traffic tax being one example. Meanwhile, content monetization also activates the fiscal interest of the state. The balance of supply and demand is often violated due to political risks, particularly under state capitalism, populism and authoritarian methods of governing such social institutions as the media. A unique example of access to journalistic material limited by content monetization is the television channel Dozhd' (Rain) in the Russian web space. Its liberal-minded audience has better opportunities for discussion, although the channel could have been much more successful under conditions of unlimited free speech. To avoid state pressure and censorship, its management decided to preserve at least its online performance by monetizing all of its content for the core audience. The study methodology was primarily based on the analysis of journalistic content and on qualitative and quantitative analysis of the audience. Reconstructing the main events and the relationships of market actors over the last six years, the researcher has reached several conclusions. First, under content monetization, the capitalization of content quality always tends towards the qualitative characteristics of the user, thereby identifying him; vice versa, user demand generates high-quality journalism. The second conclusion follows from the first: the growth of technology, information noise, new political challenges, economic volatility and the change of cultural paradigm all shape the paid-content model for the individual user. This model defines the user as a beneficiary of specific knowledge and indicates a constant balance of supply and demand, other conditions being equal. As a result, a new economic quality of information is created; this quality is an indicator of the market as a self-regulating system. Monetized quality information is less popular than that of public broadcasting services, but its audience is able to make decisions. These very users sustain the niche sectors that have greater potential for technological development, including new ways of monetizing content. The third point of the study opens it to the discourse of media space liberalization, a cultural phenomenon that may create opportunities for developing the architecture of social and economic relations both locally and regionally.

Keywords: content monetization, state capitalism, media liberalization, media economy, information quality

Procedia PDF Downloads 241
62 Integrating Multiple Types of Value in Natural Capital Accounting Systems: Environmental Value Functions

Authors: Pirta Palola, Richard Bailey, Lisa Wedding

Abstract:

Societies and economies worldwide fundamentally depend on natural capital. Alarmingly, natural capital assets are quickly depreciating, posing an existential challenge for humanity. The development of robust natural capital accounting systems is essential for transitioning towards sustainable economic systems and ensuring sound management of capital assets. However, the accurate, equitable and comprehensive estimation of natural capital asset stocks and their accounting values still faces multiple challenges. In particular, the representation of socio-cultural values held by groups or communities has arguably been limited, as to date, the valuation of natural capital assets has primarily been based on monetary valuation methods and assumptions of individual rationality. People relate to and value the natural environment in multiple ways, and no single valuation method can provide a sufficiently comprehensive image of the range of values associated with the environment. Indeed, calls have been made to improve the representation of multiple types of value (instrumental, intrinsic, and relational) and diverse ontological and epistemological perspectives in environmental valuation. This study addresses this need by establishing a novel valuation framework, Environmental Value Functions (EVF), that allows for the integration of multiple types of value in natural capital accounting systems. The EVF framework is based on the estimation and application of value functions, each of which describes the relationship between the value and quantity (or quality) of an ecosystem component of interest. In this framework, values are estimated in terms of change relative to the current level instead of calculating absolute values. Furthermore, EVF was developed to also support non-marginalist conceptualizations of value: it is likely that some environmental values cannot be conceptualized in terms of marginal changes. For example, ecological resilience value may, in some cases, be best understood as a binary: it either exists (1) or is lost (0). In such cases, a logistic value function may be used as the discriminator. Uncertainty in the value function parameterization can be considered through, for example, Monte Carlo sampling analysis. The use of EVF is illustrated with two conceptual examples. For the first time, EVF offers a clear framework and concrete methodology for the representation of multiple types of value in natural capital accounting systems, simultaneously enabling 1) the complementary use and integration of multiple valuation methods (monetary and non-monetary); 2) the synthesis of information from diverse knowledge systems; 3) the recognition of value incommensurability; 4) marginalist and non-marginalist value analysis. Furthermore, with this advancement, the coupling of EVF and ecosystem modeling can offer novel insights to the study of spatial-temporal dynamics in natural capital asset values. For example, value time series can be produced, allowing for the prediction and analysis of volatility, long-term trends, and temporal trade-offs. This approach can provide essential information to help guide the transition to a sustainable economy.
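
A minimal sketch of the logistic, non-marginalist value function and the Monte Carlo treatment of parameter uncertainty mentioned above; the threshold, steepness and their sampling distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_value(q, q_crit, k):
    """Non-marginalist value function: value switches from ~0 to ~1 as the
    ecosystem quantity q crosses a critical threshold q_crit (e.g. a
    resilience tipping point). Parameter names are illustrative."""
    return 1.0 / (1.0 + np.exp(-k * (q - q_crit)))

# Monte Carlo treatment of parameterization uncertainty: sample the
# threshold and steepness, then report the spread of value at the
# current quantity level.
q_now = 0.55
q_crit = rng.normal(0.5, 0.05, 10_000)
k = rng.uniform(10, 30, 10_000)
values = logistic_value(q_now, q_crit, k)
print(f"value estimate: {values.mean():.2f} "
      f"(90% interval {np.percentile(values, 5):.2f}"
      f"-{np.percentile(values, 95):.2f})")
```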

Keywords: economics of biodiversity, environmental valuation, natural capital, value function

Procedia PDF Downloads 190
61 Method for Requirements Analysis and Decision Making for Restructuring Projects in Factories

Authors: Rene Hellmuth

Abstract:

The requirements for factory planning and the buildings concerned have changed in recent years. Factory planning has the task of designing products, plants, processes, organization, areas, and the building of a factory. Regular restructuring is gaining importance as a way to maintain the competitiveness of a factory. Restrictions regarding new areas, shorter life cycles of products and production technology, as well as a VUCA (volatility, uncertainty, complexity and ambiguity) world cause rebuilding measures within a factory to occur more frequently. Restructuring is the most common planning case today, more common than new construction, revitalization or dismantling of factories. The increasing importance of restructuring processes shows that the ability to change was and is a promising concept for companies reacting to permanently changing conditions. The factory building is the basis for most changes within a factory. If an adaptation of a construction project (factory) is necessary, the inventory documents must be checked, and often time-consuming planning of the adaptation must take place to define the relevant components to be adapted so that they can finally be evaluated. The different requirements of the planning participants from the disciplines of factory planning (production planner, logistics planner, automation planner) and industrial construction planning (architect, civil engineer) come together during reconstruction and must be structured. This raises the research question: Which requirements do the disciplines involved in reconstruction planning place on a digital factory model? A subordinate research question is: How can model-based decision support be provided for a more efficient design of the conversion within a factory? Because of the high adaptation rate of factories and their buildings described above, a methodology for restructuring factories, based on the requirements engineering method from software development, is conceived and designed for practical application in factory restructuring projects. The explorative research procedure according to Kubicek is applied; explorative research is suitable if the practical usability of the research results has priority. Furthermore, it will be shown how best to use a digital factory model in practice. The focus will be on mobile applications to meet the needs of factory planners on site. An augmented reality (AR) application will be designed and created to provide decision support for planning variants. The aim is to contribute to a shortening of the planning process and to model-based decision support for more efficient change management. This requires the application of a methodology that reduces the deficits of the existing approaches. The time and cost expenditure are represented in the AR tablet solution based on a building information model (BIM). Overall, the requirements of those involved in the planning process for a digital factory model in the case of restructuring within a factory are thus first determined in a structured manner; the results are then applied and transferred to a construction site solution based on augmented reality.

Keywords: augmented reality, digital factory model, factory planning, restructuring

Procedia PDF Downloads 124
60 Digital Structural Monitoring Tools @ADaPT for Cracks Initiation and Growth due to Mechanical Damage Mechanism

Authors: Faizul Azly Abd Dzubir, Muhammad F. Othman

Abstract:

The conventional structural health monitoring approach for mechanical equipment uses inspection data from Non-Destructive Testing (NDT) during plant shutdown windows, together with fitness-for-service evaluation, to estimate the integrity of equipment that is prone to crack damage. Yet this forecast is fraught with uncertainty because it is often based on assumptions about future operational parameters, and the prediction is not continuous or online. Advanced Diagnostic and Prognostic Technology (ADaPT) uses Acoustic Emission (AE) technology and a stochastic prognostic model to provide real-time monitoring and prediction of mechanical defects or cracks. The forecast can help the plant authority handle cracked equipment before it ruptures and causes an unscheduled shutdown of the facility. ADaPT employs historical process data trending, finite element analysis, fitness-for-service assessment, and probabilistic statistical analysis to develop a prediction model for crack initiation and growth due to mechanical damage. The prediction model is combined with live equipment operating data for real-time prediction of the remaining life span before fracture. ADaPT was first deployed on a hot combined feed exchanger (HCFE) that had suffered creep crack damage. The tool predicted the initiation of a crack at the top weldment area by April 2019; during the shutdown window in April 2019, a crack was indeed discovered and repaired. Furthermore, ADaPT enabled the plant owner to run at full capacity and improve output by up to 7% up to April 2019. ADaPT was also used on a coke drum that had extensive fatigue cracking. The initial cracks were declared safe by ADaPT, with remaining crack lifetimes extended another five (5) months, just in time for another planned facility downtime to execute the repair. The prediction model, when combined with plant information data, allows plant operators to continuously monitor crack propagation caused by mechanical damage, improving maintenance planning and avoiding costly immediate shutdowns for repair.
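
The abstract does not disclose ADaPT's proprietary prognostic model; as a generic stand-in, a probabilistic crack-growth forecast of this kind can be built on the Paris law, as in the Python sketch below. All material constants, crack sizes and stress ranges are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def cycles_to_critical(a0, a_crit, C, m, dsigma, Y=1.12, da=1e-5):
    """Integrate the Paris law da/dN = C * (dK)^m from crack depth a0 (m)
    to a_crit. A generic fracture-mechanics growth law used here purely
    for illustration of a probabilistic remaining-life estimate."""
    a, n = a0, 0.0
    while a < a_crit:
        dK = Y * dsigma * np.sqrt(np.pi * a)   # stress intensity range, MPa*sqrt(m)
        n += da / (C * dK**m)                  # cycles consumed by step da
        a += da
    return n

# Probabilistic remaining life: sample uncertain material constants and the
# AE-estimated current crack depth (all numbers are placeholders).
samples = [cycles_to_critical(a0=rng.normal(2e-3, 2e-4),
                              a_crit=10e-3,
                              C=10 ** rng.normal(-11.5, 0.2),
                              m=3.0,
                              dsigma=80.0)
           for _ in range(200)]
print(f"remaining life, 5th percentile: {np.percentile(samples, 5):,.0f} cycles")
```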

Keywords: mechanical damage, cracks, continuous monitoring tool, remaining life, acoustic emission, prognostic model

Procedia PDF Downloads 71
59 Recycling the Lanthanides from Permanent Magnets by Electrochemistry in Ionic Liquid

Authors: Celine Bonnaud, Isabelle Billard, Nicolas Papaiconomou, Eric Chainet

Abstract:

Thanks to their high magnetization and low mass, permanent magnets (NdFeB and SmCo) have quickly become essential for new energy applications (wind turbines, electric vehicles…). They contain large quantities of neodymium, samarium and dysprosium, which have recently been classified as critical elements and therefore need to be recycled. Electrochemical processes, including electrodissolution followed by electrodeposition, are an elegant and environmentally friendly solution for recycling the lanthanides contained in permanent magnets. However, the electrochemistry of the lanthanides is a real challenge, as their standard potentials are highly negative (around -2.5 V vs. NHE); consequently, non-aqueous solvents are required. Ionic liquids (ILs) are novel electrolytes exhibiting physico-chemical properties that fulfill many requirements of the sustainable chemistry principles, such as extremely low volatility and non-flammability. Furthermore, their chemical and electrochemical properties (solvation of metallic ions, large electrochemical windows, etc.) render them very attractive media for implementing alternative and sustainable processes in view of integrated processes. All experiments presented here were carried out using butyl-methylpyrrolidinium bis(trifluoromethanesulfonyl)imide. Linear sweep voltammetry, cyclic voltammetry and potentiostatic electrochemical techniques were used. The reliability of electrochemical experiments performed without a glove box, using the classic three-electrode cell employed in this study, was assessed. Deposits were obtained by chronoamperometry and were characterized by scanning electron microscopy and energy-dispersive X-ray spectroscopy. The cathodic behavior of the IL under different constraints (argon, nitrogen or oxygen atmosphere, and water content) and with several electrode materials (Pt, Au, GC) shows that with an argon gas flow and gold as the working electrode, the cathodic potential can reach a maximum value of -3 V vs. Fc+/Fc, thus allowing a possible reduction of the lanthanides. On a gold working electrode, the reduction potential of samarium and neodymium was found to be -1.8 V vs. Fc+/Fc, while that of dysprosium was -2.1 V vs. Fc+/Fc. The individual deposits obtained were found to be porous and contained significant amounts of C, N, F, S and O atoms. Selective deposition of neodymium in the presence of dysprosium was also studied and will be discussed. Next, metallic Sm, Nd and Dy electrodes were used in place of Au, which induced changes in the reduction potential values and the deposit structures of the lanthanides. The individual corrosion potentials were also measured in order to determine the parameters influencing the electrodissolution of these metals. Finally, a full recycling process was investigated. The electrodissolution of a real permanent magnet sample was monitored kinetically, and then the sequential electrodeposition of all lanthanides contained in the IL was investigated. Yields, quality of the deposits and consumption of chemicals will be discussed in depth, in view of the industrial feasibility of this process for real permanent magnet recycling.

Keywords: electrodeposition, electrodissolution, ionic liquids, lanthanides, recycling

Procedia PDF Downloads 267
58 The Impact of Artificial Intelligence on Agricultural Machines and Plant Nutrition

Authors: Kirolos Gerges Yakoub Gerges

Abstract:

Autonomous agricultural machines act in stochastic environments and therefore should be capable of perceiving their surroundings in real time. This can be achieved using image sensors combined with advanced machine learning, particularly deep learning. Deep convolutional neural networks excel at labeling and perceiving colour images, and since the cost of RGB cameras is low, the hardware cost of accurate perception depends heavily on memory and computation power. This paper investigates the possibility of designing lightweight convolutional neural networks for semantic segmentation (pixel-wise classification) with reduced hardware requirements, to allow for embedded usage in autonomous agricultural machines. Using compression techniques, a lightweight convolutional neural network is designed to carry out real-time semantic segmentation on an embedded platform. The network is trained on two large datasets, ImageNet and Pascal Context, to recognize up to 400 individual classes. The 400 classes are remapped into agricultural superclasses (e.g. human, animal, sky, road, field, shelterbelt and obstacle), and the network's capacity to provide correct real-time perception of agricultural environments is studied. The network is applied to the case of autonomous grass mowing using the NVIDIA Tegra X1 embedded platform. Feeding case-specific images to the network results in a fully segmented map of the superclasses within the image. As the network is still being designed and optimized, only a qualitative analysis of the technique is complete at the abstract submission deadline. Following this deadline, the finalized design will be quantitatively evaluated on 20 annotated grass mowing images. Lightweight convolutional neural networks for semantic segmentation can be implemented on an embedded platform and show competitive performance in terms of accuracy and speed. It is thus feasible to provide cost-efficient perceptive capabilities based on semantic segmentation for autonomous agricultural machines.
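
A minimal PyTorch sketch of a lightweight encoder-decoder segmentation network of the kind described, with the seven agricultural superclasses as output channels. The layer sizes are assumptions; the actual compressed architecture deployed on the Tegra X1 is not specified in the abstract.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder for semantic segmentation, illustrating the
    kind of lightweight network the abstract describes."""
    def __init__(self, n_classes=7):   # human, animal, sky, road, field,
        super().__init__()             # shelterbelt, obstacle
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, n_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))   # per-pixel class logits

# A 480x360 RGB frame -> per-pixel logits over the agricultural superclasses.
logits = TinySegNet()(torch.randn(1, 3, 360, 480))
print(logits.shape)   # torch.Size([1, 7, 360, 480])
```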

Keywords: autonomous agricultural machines, deep learning, safety, visual perception

Procedia PDF Downloads 15
57 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)

Authors: Ahmed E. Hodaib, Mohamed A. Hashem

Abstract:

In engineering applications, a design has to be as close to optimal as possible for a defined case. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem; this process is called optimization. Generally, there is a function called the “objective function” that is to be maximized or minimized by choosing input parameters called “degrees of freedom” within an allowed domain called the “search space” and computing the values of the objective function for these input values. The problem becomes more complex when there is more than one objective for the design. As an example of a Multi-Objective Optimization Problem (MOP), consider a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used, which is a curve plotting the two objective functions for the best cases; at this point, the designer must make a decision and choose a point on the curve. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used for multi-objective optimization problems due to their robustness, simplicity, and suitability for coupling and parallelization. Evolutionary algorithms are developed to guarantee convergence to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, EAs belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic optimization character. The optimization is initialized by picking random solutions from the search space, and the solution then progresses towards the optimal point by using operators such as selection, combination, cross-over and/or mutation. These operators are applied to the old solutions (“parents”) so that new sets of design variables called “children” appear. The process is repeated until the optimal solution to the problem is reached. Reliable and robust computational fluid dynamics solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbo-machinery, and automotive vehicles. The coupling of Computational Fluid Dynamics (CFD) and Multi-Objective Evolutionary Algorithms (MOEA) has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbo-machinery design.
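
The selection/cross-over/mutation loop described above condenses to a few lines. The Python sketch below shows a single-objective genetic algorithm with a placeholder fitness function standing in for a CFD evaluation; a real MOEA would add Pareto ranking and crowding-distance selection.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    """Placeholder for a CFD evaluation of a candidate design (minimized)."""
    return np.sum((x - 0.3) ** 2)

pop = rng.uniform(0, 1, size=(40, 5))            # 40 candidates, 5 DOF
for generation in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)][:20]       # selection: keep best half
    kids = []
    for _ in range(20):
        p1, p2 = parents[rng.integers(20, size=2)]
        cut = rng.integers(1, 5)
        child = np.concatenate([p1[:cut], p2[cut:]])   # cross-over
        child += rng.normal(0, 0.02, size=5)           # mutation
        kids.append(np.clip(child, 0, 1))
    pop = np.vstack([parents, kids])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("best design variables:", best.round(3))
```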

Keywords: mathematical optimization, multi-objective evolutionary algorithms (MOEA), computational fluid dynamics (CFD), aerodynamic shape optimization

Procedia PDF Downloads 251
56 Ho-Doped Lithium Niobate Thin Films: Raman Spectroscopy, Structure and Luminescence

Authors: Edvard Kokanyan, Narine Babajanyan, Ninel Kokanyan, Marco Bazzan

Abstract:

Lithium niobate (LN) crystals, renowned for their exceptional nonlinear optical, electro-optical, piezoelectric, and photorefractive properties, stand as foundational materials in diverse fields of study and application. While they have long been utilized in frequency converters of laser radiation, electro-optical modulators, and holographic information recording media, LN crystals doped with rare earth ions represent a compelling frontier for modern compact devices. These materials exhibit immense potential as key components in infrared lasers, optical sensors, self-cooling systems, and radiation-balanced laser setups. In this study, we present the successful synthesis of Ho-doped lithium niobate (LN:Ho) thin films on sapphire substrates employing the Sol-Gel technique. The films exhibit a strong crystallographic orientation perpendicular to the substrate surface, with X-ray diffraction analysis confirming the predominant alignment of the film's "c" axis, notably evidenced by the intense (006) reflection peak. Further characterization through Raman spectroscopy, employing a confocal Raman microscope (LabRAM HR Evolution) with excitation wavelengths of 532 nm and 785 nm, revealed intriguing insights. Under excitation with the 785 nm laser, Raman scattering obeyed the selection rules, while the 532 nm laser revealed additional forbidden lines reminiscent of behaviors observed in bulk LN:Ho crystals. These supplementary lines were attributed to luminescence induced by excitation at 532 nm. Leveraging data from the anti-Stokes Raman lines facilitated the disentanglement of the luminescence spectra from the investigated samples. Surface scanning affirmed the uniformity of both structure and luminescence across the thin films. Notably, despite the robust orientation of the "c" axis perpendicular to the substrate surface, the Raman signals indicated a stochastic distribution of the "a" and "b" axes, validating the mosaic structure of the films along the mentioned axis. This study offers valuable insights into the structural properties of Ho-doped lithium niobate thin films, with the observed luminescence behavior holding significant promise for potential applications in optoelectronic devices.

Keywords: lithium niobate, Sol-Gel, luminescence, Raman spectroscopy

Procedia PDF Downloads 52
55 Q-Efficient Solutions of Vector Optimization via Algebraic Concepts

Authors: Elham Kiyani

Abstract:

In this paper, we first introduce the concept of Q-efficient solutions in a real linear space not necessarily endowed with a topology, where Q is some nonempty (not necessarily convex) set. We also use a scalarization technique based on the Gerstewitz function generated by a nonconvex set to characterize these Q-efficient solutions. The algebraic concepts of interior and closure are useful for studying optimization problems without topology; studying nonconvex vector optimization is valuable, since the topological interior is equal to the algebraic interior for a convex cone. So, we use the algebraic concepts of interior and closure to define Q-weak efficient solutions and Q-Henig proper efficient solutions of set-valued optimization problems, where Q is not a convex cone. Optimization problems with set-valued maps have a wide range of applications, so a useful analytical tool for set-valued maps is to be expected in optimization theory. These kinds of optimization problems are closely related to stochastic programming, control theory, and economic theory. The paper focuses on nonconvex problems; the results are obtained under generalized non-convexity assumptions on the data of the problem. In convex problems, the main mathematical tools are convex separation theorems, alternative theorems, and algebraic counterparts of some usual topological concepts, while in nonconvex problems a nonconvex separation function is needed. Thus, we consider the Gerstewitz function generated by a general set in a real linear space and re-examine its properties in this more general setting. A useful approach for solving a vector problem is to reduce it to a scalar problem; in general, scalarization means the replacement of a vector optimization problem by a suitable scalar problem, which tends to be an optimization problem with a real-valued objective function. The Gerstewitz function is well known and widely used in optimization as the basis of scalarization. The essential properties of the Gerstewitz function, which are well known in the topological framework, are studied here using algebraic counterparts rather than the topological concepts of interior and closure: properties of the Gerstewitz function when it takes values in a real linear space are established, and the function is used to characterize Q-efficient solutions of vector problems whose image space is not endowed with any particular topology. We therefore deal with a constrained vector optimization problem in a real linear space without assuming any topology, and Q-weak efficient and Q-proper efficient solutions in the sense of Henig are also defined. Moreover, by means of the Gerstewitz function, we provide some necessary and sufficient optimality conditions for set-valued vector optimization problems.
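
For reference, one common form of the Gerstewitz (Tammer) scalarizing function is reproduced below in our own notation; the paper's exact sign convention and assumptions on Q and the direction q may differ.

```latex
\[
  \varphi_{Q,q}(y) \;=\; \inf \{\, t \in \mathbb{R} \;:\; y \in tq - Q \,\},
  \qquad y \in Y .
\]
% Scalarization then replaces the vector problem \(\min F(x)\) by the scalar
% problem \(\min_x \varphi_{Q,q}(F(x))\), whose minimizers characterize
% Q-efficiency under suitable assumptions on Q and q.
```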

Keywords: algebraic interior, Gerstewitz function, vector closure, vector optimization

Procedia PDF Downloads 210
54 The Study of Intangible Assets at Various Firm States

Authors: Gulnara Galeeva, Yulia Kasperskaya

Abstract:

The study deals with the relevant problem of forming an efficient investment portfolio for an enterprise. The structure of the investment portfolio is connected to the degree of influence of intangible assets on the enterprise’s income, which determines the importance of research on the content of intangible assets. However, studies of intangible assets do not take into consideration how the state of the enterprise can affect the content and the importance of intangible assets for the enterprise's income, and this affects the accuracy of the calculations. In order to study this problem, the research was divided into several stages. In the first stage, intangible assets were classified, based on their synergies, into underlying intangibles and additional intangibles. In the second stage, this classification was applied; it showed that the lifecycle model and the theory of abrupt development of the enterprise, which are taken into account while designing investment projects, constitute limit cases of a more general theory of bifurcations. The research identified that the qualitative content of intangible assets significantly depends on how close the enterprise is to being in crisis. In the third stage, the author developed and applied the Wide Pairwise Comparison Matrix method. This made it possible to establish that the ratio of the standard deviation to the mean value of the elements of the priority vector of intangible assets can be used to estimate the probability of a full-blown crisis of the enterprise. The author has identified a criterion that allows fundamental decisions on investment feasibility to be made. The study also developed an additional rapid method of assessing the enterprise's overall status, based on a questionnaire survey of its director; the questionnaire consists of only two questions. The research specifically focused on the fundamental role of stochastic resonance in the emergence of bifurcation (crisis) in the economic development of the enterprise. The synergetic approach made it possible to describe the mechanism of crisis onset in detail and also to identify a range of universal ways of overcoming the crisis. It was shown that the structure of intangible assets transforms into a more organized state, with strengthened synchronization of all processes, as a result of the impact of sporadic (white) noise. The results obtained offer managers and business owners a simple and affordable method of investment portfolio optimization that takes into account how close the enterprise is to a full-blown crisis.
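
The authors' Wide Pairwise Comparison Matrix method is not published in the abstract, but the classical AHP step it extends, together with the std/mean crisis indicator described above, can be sketched as follows; the judgment matrix is an invented example.

```python
import numpy as np

# Principal-eigenvector priorities from a pairwise comparison matrix, the
# standard AHP computation underlying pairwise-comparison methods.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])    # illustrative judgments for 3 intangibles

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                    # priority vector of the intangible assets

crisis_indicator = w.std() / w.mean()   # ratio the authors use as a signal
print("priorities:", w.round(3), "std/mean:", round(crisis_indicator, 3))
```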

Keywords: analytic hierarchy process, bifurcation, investment portfolio, intangible assets, wide matrix

Procedia PDF Downloads 205
53 Nanoporous Metals Reinforced with Fullerenes

Authors: Deni̇z Ezgi̇ Gülmez, Mesut Kirca

Abstract:

Nanoporous (np) metals have attracted considerable attention owing to their cellular morphological features at the atomistic scale, which yield an ultra-high specific surface area and hence great potential for diverse applications such as catalysis, electrocatalysis, sensing, and mechanical and optical devices. Fullerenes, as carbon-based nanostructures, are another type of outstanding nanomaterial that has been extensively investigated due to its remarkable chemical, mechanical and optical properties. In this study, the idea of improving the mechanical behavior of nanoporous metals by the inclusion of fullerenes, which offers a new metal-carbon nanocomposite material, is examined and discussed. With this motivation, the tensile mechanical behavior of nanoporous metals reinforced with carbon fullerenes is investigated by classical molecular dynamics (MD) simulations. Atomistic models of the nanoporous metals with ultrathin ligaments are obtained through a stochastic process based on the intersection of spherical volumes, which has been used previously in the literature. According to this technique, the atoms within the ensemble of intersecting spherical volumes are removed from the pristine solid block of the selected metal, which results in porous structures with spherical cells. Following this, fullerene units are added into the cellular voids to obtain the final atomistic configurations for the numerical tensile tests. Several numerical specimens are prepared with different numbers of fullerenes per cell and with varied fullerene sizes. The LAMMPS code is used to perform classical MD simulations of uniaxial tension experiments on the fullerene-filled np models. The interactions between the metal atoms are modeled using the embedded atom method (EAM), while the adaptive intermolecular reactive empirical bond order (AIREBO) potential is employed for the interactions between carbon atoms. Furthermore, atomic interactions between the metal and carbon atoms are represented by a Lennard-Jones potential with appropriate parameters. In conclusion, the ultimate goal of the study is to present the effects of fullerenes embedded into the cellular structure of np metals on the tensile response of the porous metals. The results are believed to be informative and instructive for experimentalists synthesizing hybrid nanoporous materials with improved properties and multifunctional characteristics.
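
The intersecting-spheres carving step described above can be sketched in a few lines of NumPy. The lattice constant, sphere count and radii below are illustrative choices, and the subsequent fullerene insertion and LAMMPS tension runs are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an FCC block, then delete every atom falling inside any of a set of
# randomly placed, intersecting spheres: the stochastic carving step that
# produces the porous structure with spherical cells.
a = 4.078                                  # gold lattice constant, Angstrom
cells = 20
base = np.array([[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]])
grid = np.stack(np.meshgrid(*[np.arange(cells)] * 3, indexing="ij"),
                axis=-1).reshape(-1, 3)
atoms = (grid[:, None, :] + base[None, :, :]).reshape(-1, 3) * a

centers = rng.uniform(0, cells * a, size=(25, 3))       # 25 spherical voids
radii = rng.uniform(0.1, 0.2, size=25) * cells * a
inside = np.zeros(len(atoms), dtype=bool)
for c, r in zip(centers, radii):
    inside |= np.sum((atoms - c) ** 2, axis=1) < r ** 2

porous = atoms[~inside]
print(f"kept {len(porous)} of {len(atoms)} atoms "
      f"(porosity {(1 - len(porous) / len(atoms)):.0%})")
```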

Keywords: fullerene, intersecting spheres, molecular dynamics, nanoporous metals

Procedia PDF Downloads 237
52 A Socio-Spatial Analysis of Financialization and the Formation of Oligopolies in Brazilian Basic Education

Authors: Gleyce Assis Da Silva Barbosa

Abstract:

In recent years, we have witnessed the vertiginous growth of large education companies. Offspring of national and global capital, these companies expand both through consolidated physical networks, in the form of branches spread across the territory, and through institutional networks: business networks built through mergers, acquisitions, the creation of new companies, and influence. They do this by incorporating small, medium and large schools and universities, teaching systems, and other products and services. They are also able to weave their webs, directly or indirectly, into philanthropic circles, limited partnerships, family businesses and even public education, through various mechanisms of outsourcing, privatization and commercialization of products for the sector. Although the growth of these groups in basic education seems to be a recent phenomenon in peripheral countries such as Brazil, their diffusion is closely linked to the higher education conglomerates and other sectors of the economy forming oligopolies, which began to expand in the 1990s with strong state support and through political reforms that redefined the state's role, transforming it into a fundamental agent in the formation of guidelines that boosted the incorporation of neoliberal logic. This expansion occurred through the objectification and commodification of education, transforming students into consumer clients. Financial power combined with the neo-liberalization of state public policies has allowed a profusion of social exclusion, an increase in the number of individuals without access to basic services, deindustrialization, automation, capital volatility and the indetermination of the economy; in addition, this process causes capital to be valued and devalued at rates never seen before, which together generates various impacts, such as the precariousness of work. Understanding the connection between these processes, which engender the economy, allows us to see their consequences in labor relations and in the territory. In this sense, it is necessary to analyze the geographic-economic context and the role of the agents facilitating this process, which can give us clues about the ongoing transformations and the directions of education in the national and even international scenario, since this process is linked to the multiple scales of financial globalization. Therefore, the present research has the general objective of analyzing the socio-spatial impacts of financialization and the formation of oligopolies in Brazilian basic education. To this end, the methodology combined a survey of laws, data, and public policies on the subject; information on the global and national companies operating in Brazilian basic education, drawn from the data these companies publish on their investor-relations websites; and a mapping of the expansion of educational oligopolies using public data on the location of schools. With this, the research intends to provide information about the ongoing commodification process in the country and to discuss the consequences of the oligopolization of education, considering the impacts that financialization can have on teaching work.

Keywords: financialization, oligopolies, education, Brazil

Procedia PDF Downloads 59
51 The Usage of Bridge Estimator for Hegy Seasonal Unit Root Tests

Authors: Huseyin Guler, Cigdem Kosar

Abstract:

The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important feature of many economic time series: some variables contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. Therefore, it is very important to eliminate seasonality in seasonal macroeconomic data. There are several methods for eliminating the impacts of seasonality in time series. One of them is filtering the data; however, this method leads to undesirable consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method for eliminating seasonality is the use of seasonal dummy variables. Some seasonal patterns result from stationary seasonal processes, which can be modelled using seasonal dummies; but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it, and it is not suitable to use seasonal dummies for modeling such seasonally nonstationary series. Instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Different methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza and Fuller (DHF) and Hylleberg, Engle, Granger and Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual data). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary first to choose the lag length and determine any deterministic components (i.e., a constant and trend), and then use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and a lack of power in seasonal unit root tests. Recent studies show that Bridge estimators are good at selecting the optimal lag length while differentiating nonstationary from stationary models for nonseasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed for testing seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare its size and power with those of the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in terms of size and power over the HEGY test.
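
For reference, the quarterly HEGY auxiliary regression and a Bridge-type penalized objective take the following standard forms, written in our notation; the paper's exact specification (deterministic terms, frequency, penalty placement) may differ.

```latex
% Filtered series, with L the lag operator:
%   y_{1,t} = (1 + L + L^2 + L^3) y_t,  y_{2,t} = -(1 - L + L^2 - L^3) y_t,
%   y_{3,t} = -(1 - L^2) y_t .
\[
  \Delta_4 y_t = \pi_1 y_{1,t-1} + \pi_2 y_{2,t-1} + \pi_3 y_{3,t-2}
               + \pi_4 y_{3,t-1}
               + \sum_{j=1}^{p} \varphi_j \, \Delta_4 y_{t-j} + \varepsilon_t ,
\]
% with H0: pi_1 = 0 (nonseasonal unit root), pi_2 = 0 (semiannual root),
% pi_3 = pi_4 = 0 (annual roots). A Bridge-type estimator replaces the
% two-step lag selection by penalized estimation,
\[
  \hat{\beta} \;=\; \arg\min_{\beta}\; \sum_t \varepsilon_t(\beta)^2
  \;+\; \lambda \sum_{j} |\beta_j|^{\gamma}, \qquad 0 < \gamma < 1 ,
\]
% which can shrink redundant lag coefficients exactly to zero.
```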

Keywords: bridge estimators, HEGY test, model selection, seasonal unit root

Procedia PDF Downloads 333
50 Development and Adaptation of a LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During Covid-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)

Authors: Eric Pla Erra, Mariana Jimenez Martinez

Abstract:

While aggregated loads at a community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes that our behavior patterns have undergone due to the COVID-19 pandemic have modified our daily electrical consumption curves and therefore further complicated the forecasting methods used to predict short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of our electric grids, but it also plays a crucial role for individual domestic consumers who rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting leads to higher energy savings and better overall energy efficiency of the household when paired with a proper HEMS. In order to study how COVID-19 has affected the accuracy of forecasting methods, an evaluation of the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) will be conducted during the transition between the pre-pandemic and lockdown periods, considering day-ahead electric load forecasting. LGBM improves on standard decision tree models in both speed and memory consumption while still offering high accuracy. With its complex non-linear modelling capabilities, LGBM has proven to be a competitive method under challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model, called “resilient LGBM”, will also be tested; it incorporates a concept drift detection technique for time series analysis, with the purpose of evaluating its ability to improve the model’s accuracy during extreme events such as the COVID-19 lockdowns. The results for the LGBM and the resilient LGBM will be compared using the standard RMSE (Root Mean Squared Error) as the main performance metric. The models’ performance will be evaluated over a set of real households’ hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d'Ajuts Universitaris), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption and to increase the resilience of electrical systems through the use of tools such as HEMS and artificial intelligence.
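
A rough Python sketch of the day-ahead LGBM forecaster with a naive drift response, retraining on a recent window when the rolling error degrades. The features, thresholds and synthetic load series are placeholders; the paper's concept drift detector is more elaborate than this simple trigger.

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
load = np.abs(rng.normal(0.5, 0.2, 24 * 400))      # hourly kWh, synthetic

def features(t):
    """Lagged-consumption features for hour t (1, 2 and 7 days back)."""
    return [load[t - 24], load[t - 48], load[t - 168], t % 24]

window, errors, model = 24 * 120, [], None
for t in range(24 * 200, len(load)):
    if model is None:                              # (re)train the LGBM
        X = [features(s) for s in range(t - window, t)]
        model = lgb.LGBMRegressor(n_estimators=200).fit(X, load[t - window:t])
    errors.append(abs(model.predict([features(t)])[0] - load[t]))
    # naive drift check: recent daily error far above the long-run level
    if len(errors) > 24 * 30 and np.mean(errors[-24:]) > 2 * np.mean(errors):
        model, errors = None, []                   # drift detected: retrain

print(f"final mean absolute error: {np.mean(errors):.3f} kWh")
```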

Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)

Procedia PDF Downloads 100
49 Further Development of Offshore Floating Solar and Its Design Requirements

Authors: Madjid Karimirad

Abstract:

Floating solar was not well known in the renewable energy field a decade ago; however, it has grown tremendously worldwide, with a Compound Annual Growth Rate (CAGR) of nearly 30% in recent years. To reach the goal of global net-zero emissions by 2050, all renewable energy sources, including solar, should be used. Considering that 40% of the world's population lives within 100 kilometres of a coast, floating solar in coastal waters is an obvious energy solution; however, it requires more robust floating solar designs. This paper seeks to clarify the fundamental requirements in the design of floating solar for offshore installations from the hydrodynamic and offshore engineering points of view. In this regard, a closer look at the dynamic characteristics, stochastic behaviour and nonlinear phenomena appearing in this kind of structure is a major focus of the current article. Floating solar structures are attractive green energy installations offering (a) less strain on land use in densely populated areas; (b) a natural cooling effect with an efficiency gain; and (c) increased irradiance from the reflectivity of water. Floating solar in conjunction with hydroelectric plants can also optimise energy efficiency and improve system reliability. Co-locating floating solar units with other technologies such as offshore wind, wave energy and tidal turbines, as well as aquaculture (fish farming), can result in better use of ocean space and increased synergies. Floating solar technology has seen considerable growth in installed capacity over the past decade. Design standards and codes of practice for floating solar technologies deployed both on inland water bodies and offshore are required to ensure robust and reliable systems that do not have detrimental impacts on the hosting water body. Floating solar is projected to account for 17% of all PV energy produced worldwide by 2030. To support this development, further research in this area is needed. This paper aims to discuss the main critical design aspects in light of the loads and load effects to which floating solar platforms are subjected. Key considerations in hydrodynamics, aerodynamics and the simultaneous effects of wind and wave load actions will be discussed. The link between dynamic nonlinear loading, limit states and the design space under the relevant environmental conditions is set out to enable a better understanding of the design requirements of this fast-evolving floating solar technology.

Keywords: floating solar, offshore renewable energy, wind and wave loading, design space

Procedia PDF Downloads 70
48 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel

Authors: Hamed Kalhori, Lin Ye

Abstract:

In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Dynamic signals captured by piezoelectric (PZT) sensors installed on the panel, remote from the impact locations, were used to reconstruct the force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g., magnitude) of a stochastic force at a defined location, is extended here to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are exerted simultaneously at all potential locations but that the magnitude of every force except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from impact at each potential location. The problem can be categorized as under-determined (fewer sensors than impact locations), even-determined (as many sensors as impact locations), or over-determined (more sensors than impact locations). The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are applied independently to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different signal window widths on the reconstructed force is examined. It is observed that the impact force generated by the instrumented hammer is sensitive to the impact location on the structure, with shapes ranging from a simple half-sine to more complicated profiles. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed and the actual force. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match the actual forces well in terms of magnitude and duration.
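
The core reconstruction step can be illustrated as follows: the sensor response is modelled as y = H f, with H a lower-triangular Toeplitz matrix built from an impulse response, and the force f recovered by Tikhonov regularization or TSVD. This is a sketch under an assumed discretization; the function names and the zeroth-order penalty are illustrative choices, not the authors' exact formulation.

```python
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(h, n):
    """Lower-triangular Toeplitz matrix H such that y = H @ f."""
    col = np.r_[h, np.zeros(n - len(h))]
    return toeplitz(col, np.r_[col[0], np.zeros(n - 1)])

def tikhonov_force(y, h, alpha):
    """Tikhonov-regularized force estimate (zeroth-order penalty)."""
    H = conv_matrix(h, len(y))
    return np.linalg.solve(H.T @ H + alpha * np.eye(len(y)), H.T @ y)

def tsvd_force(y, h, k):
    """Truncated-SVD estimate keeping the k largest singular values."""
    H = conv_matrix(h, len(y))
    U, s, Vt = np.linalg.svd(H)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])
```

In practice alpha (or the truncation rank k) would be chosen by the L-curve or GCV criteria discussed in the abstract, and the extended formulation stacks one such H block per candidate impact location.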

Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction

Procedia PDF Downloads 531
47 Deep Learning Framework for Predicting Bus Travel Times with Multiple Bus Routes: A Single-Step Multi-Station Forecasting Approach

Authors: Muhammad Ahnaf Zahin, Yaw Adu-Gyamfi

Abstract:

Bus transit is a crucial component of transportation networks, especially in urban areas. Any intelligent transportation system must have accurate real-time information on bus travel times, since this minimizes passengers' waiting times at stations along a route, improves service reliability, and significantly optimizes travel patterns. Bus agencies must enhance the quality of their information services to serve passengers better and attract more travelers, since people waiting at bus stops are frequently anxious about when the bus will arrive at their starting point and when it will reach their destination. Various models for predicting bus travel times have been developed recently, but most focus on smaller road networks because of their relatively subpar performance on vast, high-density urban networks. This paper develops a deep learning architecture that uses a single-step multi-station forecasting approach to predict average bus travel times for numerous routes, stops, and trips on a large-scale network, using heterogeneous bus transit data collected from the GTFS database. Data were gathered from multiple bus routes in Saint Louis, Missouri, over one week. In this study, a Gated Recurrent Unit (GRU) neural network was employed to predict mean vehicle travel times at different hours of the day for multiple stations along multiple routes. The number of historical time steps and the prediction horizon were set to 5 and 1, respectively, meaning that five hours of historical average travel time data were used to predict the average travel time for the following hour. Spatial and temporal information and historical average travel times were extracted from the dataset as model inputs. The station distances and sequence numbers were used as adjacency matrices for the spatial inputs, and the time of day (hour) was used for the temporal inputs. Other inputs, including volatility information such as the standard deviation and variance of journey durations, were also included to make the model more robust. The model's performance was evaluated using the mean absolute percentage error (MAPE). The observed prediction errors for the various routes, trips, and stations remained consistent throughout the day. The results showed that the model predicts travel times more accurately during peak traffic hours, with a MAPE of around 14%, and less accurately during the latter part of the day. In the context of a complicated transportation network in a high-density urban area, the model demonstrated its applicability to real-time travel time prediction for public transportation and produced high-quality predictions.
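
A minimal single-step GRU sketch (PyTorch is assumed here) matching the stated setup of five historical steps and a one-step horizon; the feature dimension, hidden size, and MAPE helper are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TravelTimeGRU(nn.Module):
    """Single-step forecaster: 5 historical hourly steps -> next hour."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, 5, n_features)
        _, h = self.gru(x)           # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)

def mape(pred, actual):
    """Mean absolute percentage error, the paper's metric."""
    return (torch.abs((actual - pred) / actual)).mean() * 100
```

Each feature vector would concatenate the spatial inputs (station distance, sequence number), the temporal input (hour of day), and the volatility features (standard deviation and variance of journey durations) described in the abstract.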

Keywords: gated recurrent unit, mean absolute percentage error, single-step forecasting, travel time prediction

Procedia PDF Downloads 67
46 User Experience in Relation to Eye Tracking Behaviour in VR Gallery

Authors: Veslava Osinska, Adam Szalach, Dominik Piotrowski

Abstract:

Contemporary VR technologies allow users to explore virtual 3D spaces where they can work, socialize, learn, and play. Users' interaction with the GUI and the displayed pictures involves perceptual as well as cognitive processes, which can be monitored with neuroadaptive technologies. These modalities provide valuable information about users' intentions, situational interpretations, and emotional states, allowing an application or interface to be adapted accordingly. Virtual galleries outfitted with specialized assets were designed with the Unity engine within the BITSCOPE project, in the frame of the CHIST-ERA IV programme. Users' interaction with the gallery objects raises questions about their visual interest in artworks and styles. Moreover, attention, curiosity, and other emotional states can be monitored and analyzed. Natural gaze behaviour and eye position were recorded by the eye-tracking module built into an HTC Vive VR headset. The eye gaze results are grouped according to various user behaviour schemes, and the corresponding perceptual-cognitive styles are identified. In parallel, usability tests and surveys were administered to identify the basic features of a user-centred interface for virtual environments across most of the project timeline. A total of sixty participants were selected from different university faculties and secondary schools. Participants' prior knowledge of art was evaluated in a pretest, which characterized their level of art sensitivity. Data were collected over two months, and each participant gave written informed consent before participation. In the data analysis, nonlinear algorithms such as multidimensional scaling and t-distributed Stochastic Neighbor Embedding (t-SNE) were used to reduce the high-dimensional data to a relatively low-dimensional subspace. In this way, digital art objects can be classified by the multimodal temporal characteristics of the eye-tracking measures, revealing signatures that describe selected artworks. The research aims to establish the optimal position on the aesthetic-utility scale, because the interfaces of most contemporary applications must be designed in both functional and aesthetic terms. The study also analyzes the visual experience of subsamples of visitors differentiated, e.g., by frequency of museum visits or cultural interests. Eye-tracking data may also show how to better place artefacts and paintings or increase their visibility where possible.
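
The dimensionality-reduction step might look like the following sketch, in which per-participant gaze feature vectors are embedded with t-SNE; the feature construction is an assumption for illustration, and random data stand in for the recorded measures.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# 60 participants x 20 assumed gaze features (e.g., fixation counts
# and dwell times per artwork); random placeholder for real recordings.
features = rng.normal(size=(60, 20))

emb = TSNE(n_components=2, perplexity=15,
           random_state=0).fit_transform(features)
# emb[:, 0], emb[:, 1] can then be plotted and clustered to reveal
# groupings corresponding to perceptual-cognitive styles.
```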

Keywords: eye tracking, VR, UX, visual art, virtual gallery, visual communication

Procedia PDF Downloads 37
45 Economic Efficiency of Cassava Production in Nimba County, Liberia: An Output-Oriented Approach

Authors: Kollie B. Dogba, Willis Oluoch-Kosura, Chepchumba Chumo

Abstract:

In Liberia, many agricultural households cultivate cassava either for subsistence or to generate farm income. Many cassava farmers are concentrated in Nimba, a north-eastern county that borders two other economies: the Republics of Côte d'Ivoire and Guinea. With high demand for cassava output and products in emerging Asian markets, coupled with the objective of Liberian agricultural policy to increase the competitiveness of high-value crops, there is a need to examine the level of resource-use efficiency for many crops. However, information on the efficiency of many crops, including cassava, is scarce. Hence, this study applies an output-oriented method to assess the economic efficiency of cassava farmers in Nimba County, Liberia. A multi-stage sampling technique was employed to generate the study sample. Data on on-farm attributes and socio-economic and institutional factors were collected from 216 cassava farmers. Stochastic frontier models of production and revenue, using translog functional forms, were used to determine the level of revenue efficiency and its determinants. The results showed that most cassava farmers are male (60%). Most farmers are married, engaged, or living with a spouse (83%), with a mean household size of nine persons. Farmland is predominantly obtained by inheritance (95%), the average farm size is 1.34 hectares, and most cassava farmers have no access to agricultural credit (76%) or extension services (91%). The mean cassava output per hectare is 1,506.02 kg, corresponding to an average revenue of L$23,551.16 (Liberian dollars). Empirical results showed that the revenue efficiency of cassava farmers varies from 0.1% to 73.5%, with a mean of 12.9%. This indicates that, on average, there is a vast potential of 87.1% for increasing the economic efficiency of cassava farmers in Nimba by improving technical and allocative efficiency. Among the significant determinants of revenue efficiency, age and group membership had negative effects, while farming experience, access to extension, formal education, and the average wage rate had positive effects. The study recommends setting up and incentivizing farmer field schools for cassava farmers, primarily to share farming experience and to learn robust, sustainable cultivation techniques. Also, farm managers and farmers should consider a fixed wage rate in labour contracts for all stages of cassava farming.
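
As a sketch of the estimation machinery, the following code fits a stochastic production frontier with half-normal inefficiency by maximum likelihood; a translog specification is obtained by adding squared and cross terms of the log inputs to X. The parameterization and starting values are illustrative assumptions, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, X, y):
    """Negative log-likelihood of y = X beta + v - u,
    v ~ N(0, sigma_v^2), u ~ half-normal(sigma_u^2)."""
    k = X.shape[1]
    beta = theta[:k]
    sigma, lam = np.exp(theta[k]), np.exp(theta[k + 1])  # sigma, su/sv
    eps = y - X @ beta
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

def fit_frontier(X, y):
    """MLE starting from OLS; returns (beta, sigma, lambda)."""
    k = X.shape[1]
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
    res = minimize(neg_loglik, np.r_[beta0, 0.0, 0.0],
                   args=(X, y), method="BFGS")
    return res.x[:k], np.exp(res.x[k]), np.exp(res.x[k + 1])
```

With log revenue as y, farm-level efficiency scores analogous to those reported (0.1% to 73.5%) would then be obtained from the conditional expectation of the inefficiency term u given each residual.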

Keywords: economic efficiency, frontier production and revenue functions, Nimba County, Liberia, output-oriented approach, revenue efficiency, sustainable agriculture

Procedia PDF Downloads 122
44 Transforming Challenges of Urban and Peri-Urban Agriculture into Opportunities for Urban Food Security in India

Authors: G. Kiran Kumar, K. Padmaja

Abstract:

The rise of urban and peri-urban agriculture (UPA) is an important urban phenomenon that needs to be well understood before pronouncing a verdict on whether it is beneficial. Urban inhabitants face the challenge of securing a supply of safe and nutritious food. The definitions of urban and peri-urban vary from city to city, depending on local policies framed with a view to bringing regulated urban habitations under governance. The expansion of cities and the blurring of boundaries between urban and rural areas make peri-urban agriculture difficult to define, a problem further exacerbated by the fact that a definition adopted in one region may not fit another. Meanwhile, the urban share of the population continues to rise relative to the rural share. The rise of UPA does not promise that the food requirements of cities can be met entirely from this practice, since the space available on rooftops and vacant plots is far too limited for raising crops at scale. However, UPA reduces the impact of price volatility, particularly for vegetables, which have a relatively long shelf life. UPA improves the urban poor's access to fresh, nutritious and safe food and provides employment to food handlers and traders in the supply chain. On the other hand, UPA can pose environmental and health risks through inappropriate agricultural practices, increase competition for land, water and energy, and alter the ecological landscape, making it vulnerable to increased pollution. The present work is based on case studies of peri-urban agriculture in Hyderabad, India, and relies on secondary data. This paper analyzes the need for more intensive production technologies that do not harm the environment. An optimal solution in terms of urban-rural linkages has to be devised. There is a need to develop a spatial vision and to integrate UPA into urban planning in a harmonious manner. Zoning peri-urban areas for agriculture, milk and poultry production is an essential step towards preserving the traditional nurturing character of these areas. Urban local bodies, in conjunction with the Departments of Agriculture and Horticulture, can uplift existing UPA models; without such support, UPA can develop haphazardly and add to the growing list of urban challenges. Land diverted to peri-urban agriculture may render the concept of urban and peri-urban forestry ineffective. This paper suggests that UPA be practiced for high-value vegetables that can be cultivated under protected conditions and are more resilient to climate change. UPA can provide models of climate-resilient agriculture in urban areas that can be replicated in rural areas. Production of organic farm produce is another option for promoting UPA, owing to the proximity of informed consumers and access to markets within close range. Wastelands in peri-urban areas can be allotted to unemployed rural youth with the support of Urban Local Bodies (ULBs) and used for UPA. This would serve the purposes of putting wastelands to food production, enhancing employment opportunities, and improving urban consumers' access to fresh produce.

Keywords: environment, food security, urban and peri-urban agriculture, zoning

Procedia PDF Downloads 314
43 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior from the applied operating conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with numerical modelling of the complex reaction network of the LDPE polymerization, taking the actual reaction conditions into consideration. While this yields average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e., its melt flow behavior, is determined as a function of the previously calculated polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC with IR, viscometry and multi-angle light scattering detectors is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be excellent, especially considering that the multi-scale modelling approach involves no parameter fitting to the data. This validates the suggested approach and at the same time demonstrates its universality. As a next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach ultimately makes it possible to predict and design LDPE processing behavior simply from process conditions such as feed streams and inlet temperatures and pressures.
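
The microstructure step can be illustrated with a toy Monte Carlo sketch in which chains grow by propagation and occasionally undergo short-chain (backbiting) or long-chain (transfer-to-polymer) branching events; the event probabilities below are illustrative placeholders, not fitted kinetics from the paper.

```python
import random

def grow_chain(p_term=1e-3, p_scb=8e-3, p_lcb=5e-4,
               rng=random.Random(1)):
    """Grow one chain monomer by monomer; return (length, SCB, LCB)."""
    length = scb = lcb = 0
    while True:
        length += 1
        r = rng.random()
        if r < p_term:                       # termination / transfer out
            return length, scb, lcb
        if r < p_term + p_scb:               # backbiting -> short branch
            scb += 1
        elif r < p_term + p_scb + p_lcb:     # transfer to polymer -> LCB
            lcb += 1

# Sample an ensemble; its length and branching distributions are the
# kind of microstructural output fed to the rheological model.
chains = [grow_chain() for _ in range(10_000)]
mean_len = sum(c[0] for c in chains) / len(chains)
```

In the actual hybrid scheme the event probabilities would be derived from the kinetic model's reaction rates at the prevailing conditions, rather than fixed as here.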

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 120
42 Development of a Conceptual Framework for Supply Chain Management Strategies Maximizing Resilience in Volatile Business Environments: A Case of Ventilator Challenge UK

Authors: Elena Selezneva

Abstract:

Over the last two decades, unprecedented growth in uncertainty and volatility in all aspects of the business environment has caused major global supply chain disruptions and malfunctions. The effects of one failed company can ripple up and down a supply chain, causing several entities, or the entire chain, to collapse. The complicating factor is that an increasingly unstable and unpredictable business environment fuels the growing complexity of global supply chain networks, making supply chain operations extremely unpredictable and hard to manage with established methods and strategies. This has caused the premature demise of many companies around the globe that could not withstand or adapt to the storm of change. Solutions to this problem are not easy to come by: there is a lack of new, empirically tested theories and of practically viable supply chain resilience strategies. The mainstream organizational approach to managing supply chain resilience is rooted in well-established theories developed in the 1960s-1980s, but their effectiveness is questionable in today's extremely volatile business environments. The systems thinking approach offers an alternative view of supply chain resilience, though it is still very much in development. The aim of this exploratory research is to investigate supply chain management strategies that succeed in taming complexity in volatile business environments and creating resilience in supply chains. The research methodology was guided by an interpretivist paradigm, and a literature review informed the selection of the systems thinking approach to supply chain resilience. An exploratory single case study of Ventilator Challenge UK, an intensive care ventilator supply project for the NHS, was selected because of the extremely resilient performance of its supply chain during a period of national crisis. The project ran for 3.5 months and finished in 2020; its participants have since moved on, and most are no longer employed by the same organizations. The study data therefore include documents, historical interviews, live interviews with participants, and social media postings. The data analysis was accomplished in two stages: first, the data were analyzed thematically; second, pattern matching and pattern identification were used to identify the themes that form the findings of the research. The findings show that the supply management practices of Ventilator Challenge UK demonstrated all the features of an adaptive dynamic system: they cover all the elements of the supply chain and employ an entire arsenal of adaptive dynamic system strategies enabling supply chain resilience. Moreover, the system is not a simple sum of parts and strategies; the bonds and connections between the components of the supply chain and its environment amplified resilience in the form of systemic emergence. The enablers are categorized into three subsystems: supply chain central strategy, supply chain operations, and supply chain communications. Together, these subsystems and their interconnections form the resilient supply chain system framework conceptualized by the author.

Keywords: enablers of supply chain resilience, supply chain resilience strategies, systemic approach in supply chain management, resilient supply chain system framework, ventilator challenge UK

Procedia PDF Downloads 77
41 Forecasting Residential Water Consumption in Hamilton, New Zealand

Authors: Farnaz Farhangi

Abstract:

Many people in New Zealand believe that access to water is inexhaustible, a belief rooted in a history of virtually unrestricted access. For a region like Hamilton, one of New Zealand's fastest-growing cities, it is crucial for policy makers to anticipate future water consumption and to implement rules and regulations such as universal water metering. Hamilton residents use water freely and have little idea how much they use. Hence, one objective of this research is to forecast water consumption using different methods. The residential water consumption time series exhibits seasonal and trend variations. Seasonality is the pattern caused by repeating events such as summer and winter weather conditions, public holidays, etc. The problem with this seasonal fluctuation is that it dominates the other time series components and makes it difficult to identify other variations (such as the effects of educational campaigns, regulation, etc.). Besides seasonality, a stochastic trend is combined with the seasonal pattern and affects the forecasting results in different ways. Part of the forecasting literature holds that preprocessing (de-trending and de-seasonalization) is essential for good forecasting performance, while other researchers argue that seasonally non-adjusted data should be used. Hence, this study addresses the question: is preprocessing essential? A wide range of forecasting methods exists, each with its own pros and cons. In this research, double seasonal ARIMA and an Artificial Neural Network (ANN) are applied, considering elements such as seasonality and calendar effects (public and school holidays), and their results are combined to find the best predicted values. The hypothesis is tested by comparing the accuracy and robustness of the combined method (hybrid model) against the individual methods. To use ARIMA, the data must be stationary; ANN, in turn, has a record of successful applications in forecasting seasonal and trended time series. Using a hybrid model is one way to improve accuracy: since water demand is dominated by several forms of seasonality, combining different methods helps reveal their sensitivity to weather conditions, calendar effects, and other seasonal patterns. The advantage of this combination is the reduction of errors through averaging across the individual models. It is also useful when the accuracy of each forecasting model is uncertain, and it eases the problem of model selection. Using daily residential water consumption data from January 2000 to July 2015 in Hamilton, the study shows how predictions vary across methods. The ANN gives more accurate forecasts than the other methods, and preprocessing is essential when using seasonal time series. The hybrid model reduces the average forecasting error and improves overall performance.
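
A minimal sketch of the hybrid combination, assuming daily data and approximating the double-seasonal structure with a single weekly SARIMA cycle plus a lag-based neural network, with the two forecasts averaged; the orders, lag depth, and network size are illustrative assumptions, not the study's tuned models.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(y, horizon=7, lags=14):
    """Average a SARIMA forecast and an ANN forecast for `horizon` days."""
    y = np.asarray(y, float)
    # SARIMA with a weekly seasonal cycle on daily data
    sarima = SARIMAX(y, order=(1, 1, 1),
                     seasonal_order=(1, 0, 1, 7)).fit(disp=False)
    f_sarima = np.asarray(sarima.forecast(horizon))
    # ANN on lagged values, forecast recursively
    X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])
    ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                       random_state=0).fit(X, y[lags:])
    hist = list(y[-lags:])
    f_ann = []
    for _ in range(horizon):
        nxt = ann.predict(np.array(hist[-lags:]).reshape(1, -1))[0]
        f_ann.append(nxt)
        hist.append(nxt)
    return (f_sarima + np.asarray(f_ann)) / 2   # simple average
```

The simple average is the error-reduction device described in the abstract; weighted averages based on each model's recent accuracy would be a natural refinement.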

Keywords: artificial neural network (ANN), double seasonal ARIMA, forecasting, hybrid model

Procedia PDF Downloads 333
40 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss

Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin

Abstract:

Urban rail transit (URT) plays a significant role in relieving traffic congestion and environmental problems in cities. However, equipment failures and link obstructions often cause URT links to lose capacity in daily operation, seriously affecting the reliability and transport service quality of the network. In order to measure the influence of links' capacity loss on the reliability and transport service quality of a URT network, passengers are divided into three categories. Passengers in category 1 are only slightly affected by the loss of link capacity: their travel is reliable, since their travel quality is not significantly reduced. Passengers in category 2 are heavily affected: their travel is not reliable, since their travel quality is seriously reduced, although they can still travel on the URT. Passengers in category 3 cannot travel on the URT at all, because the passenger flow on their travel paths exceeds capacity; their travel is not reliable. Thus, the proportion of passengers in category 1, whose travel is reliable, is defined as the reliability indicator of the URT network. The transport service quality of the network is related to passengers' travel times, number of transfers, and the availability of seats. The generalized travel cost is a comprehensive reflection of travel time, transfer times, and travel comfort; therefore, passengers' average generalized travel cost is used as the transport service quality indicator. The impact of links' capacity loss on transport service quality is measured by passengers' relative average generalized travel cost with and without the capacity loss. The proportion of passengers affected by each link and the betweenness of links are used to identify the important links in the network. A stochastic user equilibrium assignment model based on an improved logit model is used to determine the passenger categories and calculate passengers' generalized travel costs in case of capacity loss; it is solved with the method of successive weighted averages. The reliability and transport service quality indicators of the network are then calculated from the solution. Taking the Wuhan Metro as a case, the reliability and transport service quality of the network are measured with the indicators and method proposed in this paper. The results show that the proportion of affected passengers effectively identifies the important links that strongly influence reliability and transport service quality; these important links are mostly connected to transfer stations and carry high passenger flows. As the number of failed links and the proportion of capacity loss increase, the reliability of the network keeps decreasing, the proportion of passengers in category 3 keeps increasing, and the proportion of passengers in category 2 first increases and then decreases. Once the number of failed links and the proportion of capacity loss reach a certain level, the decline in transport service quality slows.
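
The assignment step can be sketched as a logit-based stochastic loading averaged by the method of successive (weighted) averages; the two-path toy network and the BPR-style congestion function below are illustrative assumptions, not the paper's Wuhan Metro model.

```python
import numpy as np

def logit_split(costs, theta=0.5):
    """Logit path-choice probabilities from generalized costs."""
    u = np.exp(-theta * (costs - costs.min()))
    return u / u.sum()

def msa_assignment(demand, free_cost, cap, iters=100):
    """Stochastic loading stabilized by successive averages."""
    flows = demand * logit_split(free_cost)     # initial loading
    costs = free_cost
    for n in range(1, iters + 1):
        # BPR-style congested generalized costs (illustrative form)
        costs = free_cost * (1 + 0.15 * (flows / cap) ** 4)
        target = demand * logit_split(costs)
        flows += (target - flows) / (n + 1)     # averaging step
    return flows, costs

# Toy example: one OD pair, two alternative paths.
flows, costs = msa_assignment(demand=2000.0,
                              free_cost=np.array([10.0, 12.0]),
                              cap=np.array([1200.0, 1500.0]))
```

In the full model, reduced link capacities would enter the cost and capacity arrays, and the equilibrium flows and costs would yield the passenger categories and the average generalized travel cost used as indicators.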

Keywords: urban rail transit network, reliability, transport service quality, links’ capacities loss, important links

Procedia PDF Downloads 125