Search results for: optimal route
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3691

961 Technical and Economic Analysis of Smart Micro-Grid Renewable Energy Systems: An Applicable Case Study

Authors: M. A. Fouad, M. A. Badr, Z. S. Abd El-Rehim, Taher Halawa, Mahmoud Bayoumi, M. M. Ibrahim

Abstract:

Renewable energy-based micro-grids are presently attracting significant attention. The smart grid is now considered a reliable solution for the expected deficiency in the power required from future power systems. The purpose of this study is to determine the optimal component sizes of a micro-grid, investigating its technical and economic performance along with its environmental impacts. The micro-grid load consists of two small factories, and both on-grid and off-grid modes are considered. The micro-grid includes photovoltaic cells, a back-up diesel generator, wind turbines, and a battery bank. The estimated load pattern peaks at 76 kW. The system is modeled and simulated in MATLAB/Simulink to identify the technical issues of the renewable power generation units. To evaluate the system economy, two criteria are used: the net present cost and the cost of generated electricity. The most feasible system components for the selected application are obtained, based on the required parameters, using the HOMER simulation package. The results showed that a Wind/Photovoltaic (W/PV) on-grid system is more economical than a Wind/Photovoltaic/Diesel/Battery (W/PV/D/B) off-grid system, as the cost of generated electricity (COE) is 0.266 $/kWh and 0.316 $/kWh, respectively. Once the cost of carbon dioxide emissions is considered, the off-grid system becomes competitive with the on-grid system, as the COE is found to be 0.256 $/kWh and 0.266 $/kWh for the on-grid and off-grid systems, respectively.
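
The two economic criteria relate through a standard annualization step; the sketch below reproduces it with assumed figures (the lifetime, discount rate, and load served are illustrative, not the paper's data): the net present cost is annualized with the capital recovery factor and divided by the annual energy served to yield the COE.

```python
# Back-of-envelope COE calculation; all input values are assumed.
npc = 450_000.0        # net present cost ($), assumed
i, n = 0.06, 25        # real discount rate and project lifetime (yr), assumed
energy = 180_000.0     # energy served per year (kWh), assumed

crf = i * (1 + i) ** n / ((1 + i) ** n - 1)   # capital recovery factor
coe = npc * crf / energy                      # cost of energy ($/kWh)
print(f"CRF = {crf:.4f}, COE = {coe:.3f} $/kWh")
```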

Keywords: renewable energy sources, micro-grid system, modeling and simulation, on/off grid system, environmental impacts

Procedia PDF Downloads 259
960 Solving the Economic Load Dispatch Problem Using Differential Evolution

Authors: Alaa Sheta

Abstract:

Economic Load Dispatch (ELD) is one of the vital optimization problems in power system planning. Solving the ELD problem means finding the best mixture of power outputs across all generating units of the power system network such that the total fuel cost is minimized while operational limits remain satisfied across all dispatch phases. Many optimization techniques have been proposed to solve this problem. A well-known one is Quadratic Programming (QP). QP is a very simple and fast method, but, like other gradient-based methods, it can become trapped in local minima and cannot handle complex nonlinear functions. A number of metaheuristic algorithms have also been used to solve this problem, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). In this paper, another metaheuristic search algorithm, Differential Evolution (DE), is used to solve the ELD problem in power system planning. The practicality of the proposed DE-based algorithm is verified on three- and six-generator test cases. The results are compared with existing results based on QP, GAs, and PSO. They show that differential evolution is superior in obtaining a combination of power outputs that fulfills the problem constraints and minimizes the total fuel cost. DE was found to converge quickly to the optimal generation levels and to handle the nonlinearity of the ELD problem. The proposed DE solution is able to minimize the cost of generated power, minimize the total power loss in transmission, and maximize the reliability of the power provided to the customers.
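
As a minimal sketch of the approach (using SciPy's stock DE implementation rather than the authors' code, with textbook three-unit cost data and a penalty for the power-balance constraint, all assumed):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Quadratic fuel-cost coefficients F(P) = a + b*P + c*P^2 for three units
a = np.array([561.0, 310.0, 78.0])          # $/h, assumed test data
b = np.array([7.92, 7.85, 7.97])            # $/MWh
c = np.array([0.00156, 0.00194, 0.00482])   # $/MW^2 h
demand = 850.0                              # system load (MW), assumed

def total_cost(p):
    fuel = np.sum(a + b * p + c * p ** 2)
    imbalance = abs(np.sum(p) - demand)     # power-balance violation
    return fuel + 1e4 * imbalance           # penalized objective

bounds = [(100, 600), (100, 400), (50, 200)]      # unit limits (MW)
res = differential_evolution(total_cost, bounds, seed=1, tol=1e-9)
print("dispatch (MW):", res.x.round(1), " cost ($/h):", round(res.fun, 2))
```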

Keywords: economic load dispatch, power systems, optimization, differential evolution

Procedia PDF Downloads 275
959 Tensile Retention Properties of Thermoplastic Starch Based Biocomposites Modified with Glutaraldehyde

Authors: Jen-Taut Yeh, Yuan-jing Hou, Li Cheng, Ya Zhou Wang, Zhi Yu Zhang

Abstract:

Tensile retention properties of bacterial cellulose (BC) reinforced thermoplastic starch (TPS) resins were successfully improved by reacting them with glutaraldehyde (GA) in their gelatinization processes. Small amounts of poly(lactic acid) (PLA) were blended with GA-modified TPS resins to improve their processability. As evidenced by the newly developed ether (-C-O-C-) stretching bands in the FT-IR spectra of TPS100BC0.02GAx series specimens, hydroxyl groups of TPS100BC0.02 resins reacted successfully with the aldehyde groups of GA molecules during their modification processes. The retention values of tensile strength (σf) of TPS100BC0.02GAx and (TPS100BC0.02GAx)75PLA25 specimens improved significantly and reached a maximum as the GA content approached an optimal value of 0.5 parts per hundred parts of TPS resin (PHR). With 0.5 PHR GA in the biocomposite specimens, the initial tensile strength and elongation at break of the (TPS100BC0.02GA0.5)75PLA25 specimen improved to 24.6 MPa and 5.6%, respectively, slightly better than those of the (TPS100BC0.02)75PLA25 specimen. Moreover, the tensile strength retention of the (TPS100BC0.02GA0.5)75PLA25 specimen reached around 82.5% after placing the specimen at 20 °C/50% relative humidity for 56 days, significantly better than that of the (TPS100BC0.02)75PLA25 specimen. In order to understand these interesting tensile retention properties of the (TPS100BC0.02GAx)75PLA25 specimens, thermal analyses of initial and aged TPS100BC0.02, TPS100BC0.02GAx, and (TPS100BC0.02GAx)75PLA25 specimens were also performed in this investigation. Possible reasons accounting for the significantly improved tensile retention properties of TPS100BC0.02GAx and (TPS100BC0.02GAx)75PLA25 specimens are proposed.

Keywords: biocomposite, strength retention, thermoplastic starch, tensile retention

Procedia PDF Downloads 364
958 Load Balancing Technique for Energy Efficiency in Cloud Computing

Authors: Rani Danavath, V. B. Narsimha

Abstract:

Cloud computing is emerging as a new paradigm of large-scale distributed computing. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction; the cloud model is composed of five essential characteristics, three service models, and four deployment models. Load balancing is one of the main challenges in cloud computing: it is required to distribute the dynamic workload across multiple nodes to ensure that no single node is overloaded. It helps in the optimal utilization of resources, enhancing the performance of the system. The goal of load balancing is to minimize resource consumption and the carbon emission rate, which is a direct need of cloud computing. This motivates energy consumption and carbon emission as new metrics for energy-efficient load balancing techniques in cloud computing. Existing load balancing techniques mainly focus on reducing overhead, service response time, and improving performance, but none of them have considered energy consumption and carbon emission. Therefore, our proposed work goes towards energy efficiency: we introduce an energy-efficient load balancing technique that can improve the performance of cloud computing by balancing the workload across all the nodes in the cloud with minimum resource utilization, in turn reducing energy consumption and carbon emission to an extent, which will help to achieve green computing.
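
A minimal sketch of one plausible energy-aware policy consistent with this goal (an assumed design for illustration, not the paper's algorithm): consolidate tasks onto as few powered-on nodes as possible, so idle nodes can stay off.

```python
from dataclasses import dataclass

@dataclass
class Node:
    capacity: float      # e.g., normalized CPU capacity
    load: float = 0.0

def place(tasks, nodes):
    """Greedy consolidation: wake nodes lazily, fill active ones first."""
    active = [nodes[0]]
    for t in sorted(tasks, reverse=True):           # largest tasks first
        best = max(active, key=lambda n: n.capacity - n.load)
        if best.capacity - best.load < t and len(active) < len(nodes):
            active.append(nodes[len(active)])       # power on one more node
            best = active[-1]
        best.load += t                              # may slightly overload
    return active

nodes = [Node(100.0) for _ in range(8)]
used = place([30, 45, 10, 60, 25, 20], nodes)
print(f"{len(used)} of {len(nodes)} nodes powered on")  # fewer nodes -> less energy
```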

Keywords: cloud computing, distributed computing, energy efficiency, green computing, load balancing, energy consumption, carbon emission

Procedia PDF Downloads 439
957 Effect of Acids with Different Chain Lengths Modified by Methane Sulfonic Acid and Temperature on the Properties of Thermoplastic Starch/Glycerin Blends

Authors: Chi-Yuan Huang, Mei-Chuan Kuo, Ching-Yi Hsiao

Abstract:

In this study, acids with various chain lengths (C6, C8, C10 and C12), modified by methane sulfonic acid (MSA) and temperature, were used to modify tapioca starch (TPS); glycerol (GA) was then added to the modified starch to prepare new blends. The mechanical, thermal, and physical properties of the blends were studied. The investigation was divided into two parts. First, biodegradable materials such as starch and glycerol were combined with hexanedioic acid (HA), suberic acid (SBA), sebacic acid (SA), or decanedicarboxylic acid (DA) at different temperatures (90, 110 and 130 °C). The solution was then added to the modified starch to prepare the blends using a single-screw extruder. The FT-IR patterns showed the characteristic ester C=O peak at 1730 cm-1, proving that the different chain-length acids (C6, C8, C10 and C12) reacted with glycerol by esterification; these products plasticize the blends during extrusion. In addition, the blends showed improved hydrolysis resistance and thermal stability, and the water contact angle increased from 43.0° to 64.0°. Second, the HA (110 °C), SBA (110 °C), SA (110 °C), and DA (130 °C) blends were studied further because they possessed good mechanical properties, water resistance, and thermal stability. Various amounts (0, 0.005, 0.010, 0.020 g) of MSA were also used to modify the mechanical properties of the blends. After MSA was added to the blends, the FT-IR patterns again showed the ester C=O peak at 1730 cm-1; hydrophobic blends were thus produced. The water contact angle of the MSA blends increased from 55.0° to 71.0°. Although the elongation at break of the MSA blends decreased from the original 220% to 128%, the stress increased from 2.5 MPa to 5.1 MPa. Therefore, the optimal composition was the DA blend (130 °C) with 0.005 g of MSA added.

Keywords: chain length acids, methane sulfonic acid, Tapioca starch (TPS), tensile stress

Procedia PDF Downloads 236
956 Border Security: Implementing the “Memory Effect” Theory in Irregular Migration

Authors: Iliuta Cumpanasu, Veronica Oana Cumpanasu

Abstract:

This paper focuses on studying the conjunction between the newly emerged theory of the "Memory Effect" in Irregular Migration and Related Criminality and the notion of securitization, and its impact on border management. It brings a scientific advancement to the field by identifying, for the first time, the patterns corresponding to the linkage of the two concepts, and by developing a theoretical explanation of the effects of non-military threats on border security. Over recent years, irregular migration has increased significantly worldwide. The U.N.'s refugee agency reports that the number of displaced people is at its highest ever, surpassing even post-World War II numbers, when the world was struggling to come to terms with the most devastating event in history. This is also the current reality along the core region studied, the Balkan Route of Irregular Migration, which starts from Asia and Africa and continues through Turkey, Greece, North Macedonia or Bulgaria, and Serbia, and ends in Romania, where thousands of migrants find themselves in an irregular situation concerning their entry to the European Union, with important consequences concerning the related criminality. Data from the past six years were collected through semi-structured interviews with experts in the field of migration and desk research within organisations involved in border security, pursuing genuine insights from the field; this material was constantly checked against the existing literature and subsequently subjected to mixed methods of analysis, including the Vector Auto-Regression estimates model. The analysis of the data then followed the processes and outcomes of Grounded Theory, and a new Substantive Theory emerged, explaining how the phenomena of irregular migration and cross-border criminality are the decisive impetus for implementing the concept of securitization in border management using the proposed pattern. The findings of the study therefore capture an area that has not yet benefitted from a comprehensive approach in the scientific community, such as the seasonality, stationarity, dynamics, predictions, or pull and push factors in Irregular Migration, also highlighting how the recent pandemic interfered with border security. The research uses an inductive, revelatory theoretical approach which aims at offering a new theory to explain a phenomenon, providing a practically handy contribution for the scientific community, research institutes, and academia, as well as usefulness to practitioners in the field, among which the UN, IOM, UNHCR, Frontex, Interpol, Europol, or national agencies specialized in border security. The scientific outcomes of this study were validated on June 30, 2021, when the author defended his dissertation for the European Joint Master's in Strategic Border Management, a prestigious two-year program supported by the European Commission and the Frontex Agency together with a consortium of six European universities, and they are currently among the research objectives of his pending PhD research at the West University of Timisoara.

Keywords: migration, border, security, memory effect

Procedia PDF Downloads 80
955 Design and Development of Power Sources for Plasma Actuators to Control Flow Separation

Authors: Himanshu J. Bahirat, Apoorva S. Janawlekar

Abstract:

Plasma actuators are attractive for aerodynamic flow separation control due to their lack of mechanical parts, light weight, and high response frequency, and they have numerous applications in hypersonic and supersonic aircraft. These actuators work by forming a low-temperature plasma between a pair of parallel electrodes through the application of a high-voltage AC signal across the electrodes, whereby air molecules surrounding the electrodes are ionized and accelerated through the electric field. High-frequency operation is required in dielectric barrier discharges to ensure plasma stability. This paper presents the optimal design and construction of a power supply to generate dielectric barrier discharges for flow separation control in hypersonic flow. A simplified circuit topology is constructed to emulate the dielectric barrier discharge and study its frequency response. The power supply can generate high-voltage pulses up to 20 kV at repetition frequencies of 20-50 kHz with an input power of 500 W. It has been designed to be short-circuit proof and can endure variable plasma load conditions. Its general outline is to charge a capacitor through a half-bridge converter and then discharge it through a step-up transformer at high frequency in order to generate the high-voltage pulses. After simulating the circuit, the PCB design and, eventually, lab tests are carried out to study its effectiveness in controlling flow separation.
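
These ratings fix the per-pulse energy budget; a back-of-envelope check (my own arithmetic with an assumed bus voltage, not figures from the paper) sizes the storage capacitor from E = ½CV²:

```python
P_in = 500.0                         # input power (W), from the abstract
V_bus = 400.0                        # assumed capacitor bus voltage (V)
for f_rep in (20e3, 50e3):           # repetition frequencies (Hz)
    E_pulse = P_in / f_rep           # energy per pulse (J), losses ignored
    C = 2 * E_pulse / V_bus ** 2     # capacitance from E = 0.5 * C * V^2
    print(f"{f_rep/1e3:.0f} kHz: {E_pulse*1e3:.1f} mJ/pulse, "
          f"C ≈ {C*1e6:.2f} µF at {V_bus:.0f} V")
```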

Keywords: aircraft propulsion, dielectric barrier discharge, flow separation control, power source

Procedia PDF Downloads 119
954 An Optimization Model for the Arrangement of Assembly Areas Considering Time Dynamic Area Requirements

Authors: Michael Zenker, Henrik Prinzhorn, Christian Böning, Tom Strating

Abstract:

Large-scale products are often assembled according to the job-site principle, meaning that during assembly the product is located at a fixed position while its area requirements are constantly changing. On the one hand, the product itself grows with each assembly step; on the other, varying amounts of space for storage, machines, or working areas are temporarily required. This is an important factor when arranging products to be assembled within the factory. Currently, it is common to reserve a fixed area for each product to avoid overlaps or collisions with the other assemblies. Intended to be large enough to include the product and all adjacent areas, this reserved area corresponds to the superposition of the maximum extents of all areas required by the product. Under this procedure, the reserved area is usually poorly utilized over the course of the entire assembly process; a large part of it remains unused. If the available area is a limited resource, a systematic arrangement of the products that complies with the dynamic area requirements will lead to increased area utilization and productivity. This paper presents the results of a study on the arrangement of assembly objects under dynamic, competing area requirements. First, the problem situation is explained in detail, and existing research on associated topics is described and evaluated for the possibility of adaptation. Then, a newly developed mathematical optimization model is introduced. This model allows an optimal arrangement of dynamic areas, considering logical and practical constraints. Finally, in order to quantify the potential of the developed method, test series results are presented, showing the possible increase in area utilization.
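
A minimal sketch of the core idea (an assumed one-dimensional simplification, not the authors' model): objects whose occupancy periods overlap must not overlap in space in any shared period, while objects present at different times may share space, which is exactly what a fixed maximum-extent reservation gives away.

```python
import pulp

L = 36.0                                 # available axis length, assumed
# object -> {period: required width}; data is illustrative only
w = {0: {0: 10, 1: 14},
     1: {1: 12, 2: 20},
     2: {0: 8, 2: 10}}
M = L                                    # big-M for the disjunctions

prob = pulp.LpProblem("dynamic_area_placement", pulp.LpMinimize)
x = {i: pulp.LpVariable(f"x{i}", 0, L) for i in w}
span = pulp.LpVariable("span", 0, L)     # rightmost coordinate used
prob += span

for i, wi in w.items():
    for t, width in wi.items():
        prob += x[i] + width <= span

objs = sorted(w)
for a in range(len(objs)):
    for b in range(a + 1, len(objs)):
        i, j = objs[a], objs[b]
        shared = set(w[i]) & set(w[j])   # periods where both are present
        if not shared:
            continue                     # disjoint in time: may share space
        y = pulp.LpVariable(f"y{i}_{j}", cat="Binary")  # 1 if i left of j
        for t in shared:                 # no overlap in every shared period
            prob += x[i] + w[i][t] <= x[j] + M * (1 - y)
            prob += x[j] + w[j][t] <= x[i] + M * y

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({i: v.value() for i, v in x.items()}, "span:", span.value())
# A static reservation would need 14 + 20 + 10 = 44 length units;
# time-aware placement fits the same objects into less.
```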

Keywords: dynamic area requirements, facility layout problem, optimization model, product assembly

Procedia PDF Downloads 222
953 Optimal Image Representation for Linear Canonical Transform Multiplexing

Authors: Navdeep Goel, Salvador Gabarda

Abstract:

Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted over a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4x4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Keeping fewer than 4x4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR), and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients are later encoded and handled to generate chirps at a target rate of about two chirps per 4x4 pixel block, and then submitted to a transmission multiplexing operation in the time-frequency domain.
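
A minimal sketch of one of the evaluated block approximations (SVD truncation; the image and the rank choice are assumed stand-ins, not the paper's data): keeping only the leading singular triplet of each 4x4 block stores 8 numbers instead of 16, and PSNR measures the loss.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)   # stand-in image

rec = np.empty_like(img)
for r in range(0, img.shape[0], 4):
    for c in range(0, img.shape[1], 4):
        U, s, Vt = np.linalg.svd(img[r:r+4, c:c+4])
        rec[r:r+4, c:c+4] = s[0] * np.outer(U[:, 0], Vt[0])  # rank-1 block

mse = np.mean((img - rec) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)    # peak signal-to-noise ratio (dB)
print(f"PSNR = {psnr:.2f} dB")
```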

Keywords: chirp signals, image multiplexing, image transformation, linear canonical transform, polynomial approximation

Procedia PDF Downloads 405
952 A BIM-Based Approach to Assess COVID-19 Risk Management Regarding Indoor Air Ventilation and Pedestrian Dynamics

Authors: T. Delval, C. Sauvage, Q. Jullien, R. Viano, T. Diallo, B. Collignan, G. Picinbono

Abstract:

In the context of the international spread of COVID-19, the Centre Scientifique et Technique du Bâtiment (CSTB) has led joint research with the French local authorities of the Hauts-de-Seine department to analyse the risk in school spaces according to their configuration, ventilation system, and spatial segmentation strategy. This paper describes the main results of this joint research. A multidisciplinary team involving experts in indoor air quality/ventilation, pedestrian movement, and IT was established to develop a COVID risk analysis tool based on a Building Information Model. The work started with a specific analysis of two pilot schools in order to provide the local administration with specifications to minimize the spread of the virus. Several recommendations were published to optimize and validate the use of ventilation systems and the strategy of student occupancy and student flow segmentation within the building. This COVID expertise has been digitized to enable a quick risk analysis of an entire building, which can be used by the public administration through an easy user interface implemented in free BIM management software. One of the most interesting results is the ability to dynamically compare different ventilation system scenarios and space occupation strategies inside the BIM model. This concurrent engineering approach provides users with the optimal solution according to both the ventilation and pedestrian flow expertise.

Keywords: BIM, knowledge management, system expert, risk management, indoor ventilation, pedestrian movement, integrated design

Procedia PDF Downloads 98
951 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach

Authors: Jean Berger, Nassirou Lo, Martin Noel

Abstract:

Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agents' discrete actions over all possible moving directions. The problem model further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangian relaxation of the integrality constraints. Should a target be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
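
To make the objective concrete, here is a toy brute-force version for a single agent (an illustration of the cumulative-detection criterion only, not the paper's MIP; the grid size, prior, and glimpse probability are assumed):

```python
import itertools
import numpy as np

N, T, g = 4, 5, 0.6                   # grid size, horizon, glimpse prob.
rng = np.random.default_rng(2)
prior = rng.random((N, N))
prior /= prior.sum()                  # target location probabilities

moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
best, best_seq = -1.0, None
for seq in itertools.product(moves, repeat=T):
    r = c = 0
    visits = {}
    valid = True
    for dr, dc in seq:
        r, c = r + dr, c + dc
        if not (0 <= r < N and 0 <= c < N):
            valid = False
            break
        visits[(r, c)] = visits.get((r, c), 0) + 1
    if not valid:
        continue
    # cumulative detection prob.: sum over cells of p * (1 - (1-g)^visits)
    prob = sum(prior[cell] * (1 - (1 - g) ** k) for cell, k in visits.items())
    if prob > best:
        best, best_seq = prob, seq
print(f"best cumulative detection probability over {T} moves: {best:.3f}")
```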

Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization

Procedia PDF Downloads 362
950 Audit of Post-Caesarean Section Analgesia

Authors: Rachel Ashwell, Sally Millett

Abstract:

Introduction: Adequate post-operative pain relief is a key priority in the delivery of caesarean sections. This improves patient experience, reduces morbidity and enables optimal mother-infant interaction. Recommendations outlined in the NICE guidelines for caesarean section (CS) include offering peri-operative intrathecal/epidural diamorphine and post-operative opioid analgesics; offering non-steroidal anti-inflammatory drugs (NSAIDs) unless contraindicated and taking hourly observations for 12 hours following intrathecal diamorphine. Method: This audit assessed the provision of post-CS analgesia in 29 women over a two-week period. Indicators used were the use of intrathecal/epidural opioids, use of post-operative opioids and NSAIDs, frequency of observations and patient satisfaction with pain management on post-operative days 1 and 2. Results: All women received intrathecal/epidural diamorphine, 97% were prescribed post-operative opioids and all were prescribed NSAIDs unless contraindicated. Hourly observations were not maintained for 12 hours following intrathecal diamorphine. 97% of women were satisfied with their pain management on post-operative day 1 whereas only 75% were satisfied on day 2. Discussion: This service meets the proposed standards for the provision of post-operative analgesia, achieving high levels of patient satisfaction 1 day after CS. However, patient satisfaction levels are significantly lower on post-operative day 2, which may be due to reduced frequency of observations. The lack of an official audit standard for patient satisfaction on postoperative day 2 may result in reduced incentive to prioritise pain management at this stage.

Keywords: Caesarean section, analgesia, postoperative care, patient satisfaction

Procedia PDF Downloads 378
949 Does Pakistan Stock Exchange Offer Diversification Benefits to Regional and International Investors: A Time-Frequency (Wavelets) Analysis

Authors: Syed Jawad Hussain Shahzad, Muhammad Zakaria, Mobeen Ur Rehman, Saniya Khaild

Abstract:

This study examines the co-movement between the Pakistani, Indian, S&P 500, and Nikkei 225 stock markets using weekly data from 1998 to 2013. The time-frequency relationship between the selected stock markets is examined using the continuous wavelet power spectrum, the cross-wavelet transform, and cross (squared) wavelet coherency. The empirical evidence suggests strong dependence between the Pakistani and Indian stock markets. The co-movement of the Pakistani index with the U.S. and Japanese developed markets varies over time and frequency, with the long-run relationship dominant. The results of the cross-wavelet and wavelet coherence analyses indicate moderate covariance and correlation between the stock indexes, and the markets are in phase (i.e., cyclical in nature) over varying durations. The Pakistani stock market lagged the Indian stock market during the entire period, at the 8-32 and then the 64-256 week scales. Similar findings are evident for the S&P 500 and Nikkei 225 indexes; however, that relationship occurs in the later part of the study period. All three wavelet indicators show strong evidence of higher co-movement during the 2008-09 global financial crisis. The empirical analysis reveals strong evidence that portfolio diversification benefits vary across frequencies and time. This analysis is unique and has several practical implications for regional and international investors when assigning the optimal weights of different assets in portfolio formulation.
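
A minimal sketch of the cross-wavelet computation (an assumed implementation with PyWavelets on synthetic series; the paper's toolchain is not stated): the cross-wavelet transform is the product of one series' CWT with the conjugate of the other's, whose modulus gives co-movement strength and whose angle gives the lead-lag phase.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.arange(800)                                   # weekly stand-in data
x = np.sin(2 * np.pi * t / 64) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * t / 64 + 0.8) + 0.5 * rng.standard_normal(t.size)

scales = np.arange(2, 128)
Wx, _ = pywt.cwt(x, scales, "cmor1.5-1.0")           # complex Morlet CWT
Wy, _ = pywt.cwt(y, scales, "cmor1.5-1.0")

Wxy = Wx * np.conj(Wy)                               # cross-wavelet transform
power, phase = np.abs(Wxy), np.angle(Wxy)
k = power.mean(axis=1).argmax()
print(f"strongest co-movement near scale {scales[k]}, "
      f"mean phase {phase[k].mean():+.2f} rad")
```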

Keywords: co-movement, Pakistan stock exchange, S&P 500, Nikkei 225, wavelet analysis

Procedia PDF Downloads 351
948 Task Scheduling and Resource Allocation in the Cloud Based on the AHP Method

Authors: Zahra Ahmadi, Fazlollah Adibnia

Abstract:

Task scheduling and optimal resource allocation in the cloud must account for the dynamic nature of tasks and the heterogeneity of resources. Applications based on scientific workflows are among the most widely used in this field and are characterized by high processing and storage demands. To increase their efficiency, it is necessary to schedule the tasks properly and select the best virtual machine in the cloud. The system's goals are effective factors in task scheduling and resource selection, and they depend on various criteria such as time, cost, current workload, and processing power. Multi-criteria decision-making methods are a good choice in this field. In this research, a new method of task scheduling and resource allocation in a heterogeneous environment, based on a modified AHP algorithm, is proposed. In this method, input tasks are scheduled based on two criteria: execution time and size. Resource allocation combines the AHP algorithm with a first-come, first-served policy. Resources are prioritized using the criteria of main memory size, processor speed, and bandwidth. To modify the AHP algorithm, the Linear Max-Min and Linear Max normalization methods are used, as they are the best choices for this algorithm and have a great impact on the ranking. The simulation results show a decrease in the average response time, turnaround time, and execution time of input tasks in the proposed method compared to similar (baseline) methods.
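
A minimal sketch of the ranking step (assumed pairwise judgements and VM data for illustration; the paper's exact matrices are not given): derive criterion weights from an AHP comparison matrix, normalize the candidate resources with linear max-min scaling, and rank by weighted score.

```python
import numpy as np

# Pairwise comparison of criteria (memory, CPU speed, bandwidth) on
# Saaty's 1-9 scale; the judgements below are assumed.
A = np.array([[1.0, 1/3, 1/2],
              [3.0, 1.0, 2.0],
              [2.0, 1/2, 1.0]])
w = np.prod(A, axis=1) ** (1 / A.shape[0])   # geometric-mean priorities
w /= w.sum()

vms = np.array([[8.0, 2.4, 100.0],           # [memory GB, GHz, Mbps]
                [16.0, 2.0, 250.0],
                [4.0, 3.2, 500.0]])
# linear max-min normalization of each criterion column
norm = (vms - vms.min(axis=0)) / (vms.max(axis=0) - vms.min(axis=0))
scores = norm @ w
print("criterion weights:", w.round(3))
print("VM ranking (best first):", np.argsort(-scores))
```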

Keywords: hierarchical analytical process, work prioritization, normalization, heterogeneous resource allocation, scientific workflow

Procedia PDF Downloads 138
947 Sustainability of Photovoltaic Recycling Planning

Authors: Jun-Ki Choi

Abstract:

The use of valuable resources and the potential for waste generation at the end of the life cycle of photovoltaic (PV) technologies necessitate proactive planning for a PV recycling infrastructure. To ensure the sustainability of PV at large scales of deployment, it is vital to develop and institute low-cost recycling technologies and infrastructure for the emerging PV industry in parallel with the rapid commercialization of these new technologies. There are various issues involved in the economics of PV recycling, and this research examines them at the macro and micro levels, developing a holistic interpretation of the economic viability of PV recycling systems. The study developed mathematical models to analyze the profitability of recycling technologies and to guide tactical decisions for allocating optimal locations of PV take-back centers (PVTBC), necessary for the collection of end-of-life products. The economic decision is usually based on the marginal capital cost of each PVTBC, the cost of reverse logistics, the distance traveled, and the amount of PV waste collected from various locations. Results illustrated that reverse logistics costs comprise a major portion of the cost of a PVTBC; PV recycling centers should be constructed in optimally selected locations to minimize the total reverse logistics cost for transporting PV waste from the various collection facilities to the recycling center. At the micro-process level, automated recycling processes should be developed to handle the growing amount of PV waste economically. The market prices of the reclaimed materials are important factors in deciding the profitability of the recycling process, which illustrates the importance of recovering the glass and expensive metals from PV modules.
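
A minimal sketch of the siting decision (an assumed textbook facility-location formulation, not the paper's full model; all costs, waste volumes, and site names are illustrative): open the set of take-back centers that minimizes fixed cost plus distance-based reverse-logistics cost.

```python
import pulp

sites = ["A", "B", "C"]                   # candidate PVTBC locations
regions = ["r1", "r2", "r3", "r4"]        # waste collection regions
waste = {"r1": 120, "r2": 80, "r3": 150, "r4": 60}   # tonnes/period
fixed = {"A": 9000, "B": 7000, "C": 8000}            # $/period per centre
ship_cost = {                              # $/tonne, region -> site
    ("r1", "A"): 12, ("r1", "B"): 25, ("r1", "C"): 30,
    ("r2", "A"): 20, ("r2", "B"): 10, ("r2", "C"): 24,
    ("r3", "A"): 28, ("r3", "B"): 16, ("r3", "C"): 11,
    ("r4", "A"): 14, ("r4", "B"): 22, ("r4", "C"): 9,
}

prob = pulp.LpProblem("pvtbc_siting", pulp.LpMinimize)
open_ = pulp.LpVariable.dicts("open", sites, cat="Binary")
ship = pulp.LpVariable.dicts("ship", ship_cost.keys(), lowBound=0)

prob += (pulp.lpSum(fixed[s] * open_[s] for s in sites)
         + pulp.lpSum(ship_cost[k] * ship[k] for k in ship_cost))
for r in regions:                          # all waste must be collected
    prob += pulp.lpSum(ship[(r, s)] for s in sites) == waste[r]
for r, s in ship_cost:                     # ship only to open centres
    prob += ship[(r, s)] <= waste[r] * open_[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("open:", [s for s in sites if open_[s].value() == 1],
      "| total cost:", pulp.value(prob.objective))
```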

Keywords: photovoltaic, recycling, mathematical models, sustainability

Procedia PDF Downloads 246
946 Study on the Process of Detumbling Space Target by Laser

Authors: Zhang Pinliang, Chen Chuan, Song Guangming, Wu Qiang, Gong Zizheng, Li Ming

Abstract:

The active removal of space debris and asteroid defense are important issues in human space activities. Both need a detumbling process, for almost all space debris and asteroids are in a rotating state, and it is hard and dangerous to capture or remove a target with a relatively high tumbling rate. It is therefore necessary to find a method to reduce the angular rate first. Laser ablation is an efficient way to tackle this detumbling problem, as it is a contactless technique that can work at a safe distance. In existing research, a laser rotational control strategy based on the estimation of the instantaneous angular velocity of the target has been presented. However, its calculation of the control torque produced by the laser, which is very important in a detumbling operation, is not accurate enough: the method used is only suitable for flat or regularly shaped targets, and it does not consider the influence of irregular shape or the size of the spot. In this paper, based on a triangulation reconstruction of the target surface, we propose a new method to calculate the impulse on an irregularly shaped target under both covered irradiation and spot irradiation of the laser, and we verify its accuracy by theoretical calculation and impulse measurement experiments. We then use it to study the process of detumbling a cylinder and an asteroid by laser. The results show that the new method is universally practical and has high precision; it would take more than 13.9 hours to stop the rotation of Bennu with 1E+05 kJ of laser pulse energy; and the speed of the detumbling process depends on the distance between the spot and the centroid of the target, for which an optimal value can be found in each particular case.
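
A minimal sketch of the triangulation-based torque sum (an assumed simplification: each illuminated facet receives an ablation impulse along its inward normal, J = Cm·Φ·A_proj, and the net angular impulse is Σ r × J; the coupling coefficient, fluence, and mesh are illustrative):

```python
import numpy as np

Cm = 5e-5              # momentum coupling coefficient (N*s/J), assumed
fluence = 10.0         # delivered fluence per pulse (J/m^2), assumed
beam = np.array([0.0, 0.0, -1.0])         # beam propagation direction

# toy mesh: triangles as (3, 3) vertex arrays; body centroid at the origin
tris = [np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [-1.0, 0.0, 1.2]]),
        np.array([[1.0, 0.0, 1.0], [-1.0, 0.0, 1.2], [0.0, -1.0, 1.0]])]

torque = np.zeros(3)
for v in tris:
    area_vec = 0.5 * np.cross(v[1] - v[0], v[2] - v[0])  # outward A*n-hat
    a_proj = -area_vec @ beam             # area projected onto the beam
    if a_proj <= 0:
        continue                          # facet faces away: not ablated
    n_hat = area_vec / np.linalg.norm(area_vec)
    J = -Cm * fluence * a_proj * n_hat    # impulse along inward normal
    r = v.mean(axis=0)                    # lever arm to facet centroid
    torque += np.cross(r, J)              # angular impulse contribution

print("net angular impulse (N*m*s):", torque.round(8))
```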

Keywords: detumbling, laser ablation drive, space target, space debris remove

Procedia PDF Downloads 75
945 The Effect of Micro/Nano Structure of Poly (ε-caprolactone) (PCL) Film Using a Two-Step Process (Casting/Plasma) on Cellular Responses

Authors: JaeYoon Lee, Gi-Hoon Yang, JongHan Ha, MyungGu Yeo, SeungHyun Ahn, Hyeongjin Lee, HoJun Jeon, YongBok Kim, Minseong Kim, GeunHyung Kim

Abstract:

One of the important factors in tissue engineering is the design of optimal biomedical scaffolds, which is governed by topographical surface characteristics such as size, shape, and direction. Of these properties, we focused on the effects of a nano- to micro-sized hierarchical surface. To fabricate the hierarchical surface structure on poly(ε-caprolactone) (PCL) film, we employed a micro-casting technique, pressing the mold, and a nano-etching technique using a modified plasma process. The micro-sized topography of the PCL film was controlled by the sizes of the microstructures on a lotus leaf, while the nano-sized topography and the hydrophilicity of the PCL film were controlled by the modified plasma process. After the plasma treatment, the PCL film changed from hydrophobic to markedly hydrophilic, and the nano-sized structure was well developed. The surface properties of the modified PCL film were investigated in terms of initial cell morphology, attachment, and proliferation using osteoblast-like cells (MG63). In particular, initial cell attachment, proliferation, and osteogenic differentiation on the hierarchical structure were enhanced dramatically compared to those on the smooth surface. We believe these results reflect a synergistic effect between the hierarchical structure and the reactive functional groups introduced by the plasma process. Based on the results presented here, we propose a new biomimetic surface model that may be useful for effectively regenerating hard tissues.

Keywords: hierarchical surface, lotus leaf, nano-etching, plasma treatment

Procedia PDF Downloads 369
944 An Intelligent Transportation System for Safety and Integrated Management of Railway Crossings

Authors: M. Magrini, D. Moroni, G. Palazzese, G. Pieri, D. Azzarelli, A. Spada, L. Fanucci, O. Salvetti

Abstract:

Railway crossings are complex entities whose optimal management cannot be achieved without the help of an intelligent transportation system integrating information on both train and vehicular flows. In this paper, we propose an integrated system named SIMPLE (railway Safety and Infrastructure for Mobility applied at level crossings) that, while providing unparalleled safety at railway level crossings, collects data on rail and road traffic and provides value-added services to citizens and commuters. Such services include, for example, alerts to drivers via variable message signs and suggestions for alternative routes, towards a more sustainable, eco-friendly, and efficient urban mobility. To achieve these goals, SIMPLE is organized as a System of Systems (SoS), with a modular architecture whose components range from specially designed radar sensors for obstacle detection to smart, ETSI M2M-compliant camera networks for urban traffic monitoring. Computational units performing forecasts according to adaptive models of train and vehicular traffic are also included. The proposed system has been tested and validated during an extensive trial held in the mid-sized Italian town of Montecatini, a paradigmatic case where the rail network is inextricably linked with the fabric of the city. Results of the tests are reported and discussed.

Keywords: Intelligent Transportation Systems (ITS), railway, railroad crossing, smart camera networks, radar obstacle detection, real-time traffic optimization, IoT, ETSI M2M, transport safety

Procedia PDF Downloads 491
943 Designing Web Application to Simulate Agricultural Management for Smart Farmer: Land Development Department’s Integrated Management Farm

Authors: Panasbodee Thachaopas, Duangdorm Gamnerdsap, Waraporn Inthip, Arissara Pungpa

Abstract:

LDD's IM Farm, or the Land Development Department's Integrated Management Farm, is an agricultural simulation application developed by the Land Development Department that relies on actual data to simulate growing 12 cash crops: rice, corn, cassava, sugarcane, soybean, rubber tree, oil palm, pineapple, longan, rambutan, durian, and mangosteen. In the simulation, players select preferred cropping areas from a base map or an orthophoto map at 1:4,000 scale. Farm management is simulated from field preparation to harvesting. The system uses soil group and present land-use databases to show players which crops are suitable for each soil group, and it integrates LDD's data with data from other agencies: soil types, soil properties, soil problems, climate, cultivation costs, fertilizer use, fertilizer prices, socio-economic data, plant diseases, weeds, pests, the interest rate for loans from the Bank for Agriculture and Agricultural Cooperatives (BAAC), labor costs, and market prices. These data affect the cost and yield of each crop differently. On completion, the player knows the yield, income and expenses, and profit or loss, and can switch to other crops better suited to the soil groups for optimal yields and profits.

Keywords: agricultural simulation, smart farmer, web application, factors of agricultural production

Procedia PDF Downloads 193
942 hsa-miR-1204 and hsa-miR-639: Prominent Roles in Tamoxifen's Molecular Mechanisms on the EMT Phenomenon in Breast Cancer Patients

Authors: Mahsa Taghavi

Abstract:

Tamoxifen is a regularly prescribed medication in the treatment of breast cancer. In this study, the effect of tamoxifen on breast cancer patients' EMT pathways was examined, to see whether it had any effect on the cancer cells' resistance to tamoxifen and to look for specific miRNAs associated with EMT. We used continuous and integrated bioinformatics analysis to choose the optimal GEO datasets. Once we had sorted the gene expression profiles, we looked at the signaling mechanisms, the gene ontology, and the protein interactions of each gene; in the end, we used the GEPIA database to confirm the candidate genes, and we then investigated the critical miRNAs related to them. The gene expression profiles were categorized into two distinct groups: the first group was examined using the expression profile of genes that were downregulated in the EMT pathway, and the second group represented the polar opposite of the first. A total of 253 genes from the first group and 302 genes from the second group were found in common. Several genes in the first group were linked to cell death, focal adhesion, and cellular aging, while genes in the second group were linked to distinct cell cycle stages. Finally, proteins such as MYLK, SOCS3, and STAT5B from the first group and BIRC5, PLK1, and RAPGAP1 from the second group were selected as potential candidates linked to tamoxifen's influence on the EMT pathway. hsa-miR-1204 and hsa-miR-639 have a very close relationship with the candidate genes according to their node degrees and betweenness indices. With this, the action of tamoxifen on the EMT pathway is better understood; further work on how tamoxifen's target genes and proteins function is needed to understand the drug more fully.
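
A minimal sketch of the network-ranking step (an assumed implementation with networkx; the edge list is illustrative, built only from genes the abstract names): score nodes by degree and betweenness centrality to identify the hub miRNAs.

```python
import networkx as nx

edges = [("hsa-miR-1204", g) for g in ("MYLK", "SOCS3", "STAT5B", "BIRC5")]
edges += [("hsa-miR-639", g) for g in ("BIRC5", "PLK1", "RAPGAP1", "SOCS3")]
edges += [("MYLK", "STAT5B"), ("BIRC5", "PLK1")]   # gene-gene links, assumed

G = nx.Graph(edges)
deg = dict(G.degree())
btw = nx.betweenness_centrality(G)
ranked = sorted(G, key=lambda n: (deg[n], btw[n]), reverse=True)
for n in ranked[:4]:                 # top hubs by degree, then betweenness
    print(f"{n:14s} degree={deg[n]}  betweenness={btw[n]:.3f}")
```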

Keywords: tamoxifen, breast cancer, bioinformatics analysis, EMT, miRNAs

Procedia PDF Downloads 124
941 Cryptocurrency as a Payment Method in the Tourism Industry: A Comparison of Volatility, Correlation and Portfolio Performance

Authors: Shu-Han Hsu, Jiho Yoon, Chwen Sheu

Abstract:

With the rapid growth of blockchain technology and cryptocurrency, various industries, including tourism, have added cryptocurrency as a payment method for their transactions. More and more tourism companies accept payments in digital currency for flights, hotel reservations, transportation, and more. For travellers and tourists, using cryptocurrency as a payment method has become a way to circumvent costs and reduce risks. Understanding the volatility dynamics and interdependencies between standard currencies and cryptocurrencies is important for appropriate financial risk management and assists policy-makers and investors in making more informed decisions. The purpose of this paper is to understand and explain the risk spillover effects between six major cryptocurrencies and the ten most traded standard currencies. We use daily closing prices of the cryptocurrencies and currency exchange rates from 7 August 2015 to 10 December 2019, giving 1,133 observations. The diagonal BEKK model was used to analyze the co-volatility spillover effects between cryptocurrency returns and exchange rate returns, which measure how shocks to returns in different assets affect each other's subsequent volatility. The empirical results show co-volatility spillover effects between the cryptocurrency returns and the GBP/USD, CNY/USD, and MXN/USD exchange rate returns. Therefore, these currencies (British Pound, Chinese Yuan, and Mexican Peso) and cryptocurrencies (Bitcoin, Ethereum, Ripple, Tether, Litecoin, and Stellar) are suitable for constructing a financial portfolio from an optimal risk management perspective and also for dynamic hedging purposes.
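
For reference, a minimal sketch of the diagonal BEKK(1,1) conditional-covariance recursion for a two-asset case, H_t = CC' + A ε_{t-1} ε_{t-1}' A' + B H_{t-1} B' with diagonal A and B (the parameter values and shocks below are illustrative, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(4)
eps = 0.01 * rng.standard_normal((500, 2))   # stand-in return shocks

C = np.array([[2e-3, 0.0],                   # lower-triangular constant
              [5e-4, 2e-3]])
A = np.diag([0.30, 0.25])                    # diagonal ARCH loadings
B = np.diag([0.92, 0.94])                    # diagonal GARCH loadings

H = np.cov(eps.T)                            # initialise with sample cov
co_vol = []
for t in range(1, len(eps)):
    e = eps[t - 1][:, None]                  # shock at t-1, column vector
    H = C @ C.T + A @ (e @ e.T) @ A.T + B @ H @ B.T
    co_vol.append(H[0, 1])                   # conditional covariance path
print("mean conditional covariance:", float(np.mean(co_vol)))
```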

Keywords: blockchain, co-volatility effects, cryptocurrencies, diagonal BEKK model, exchange rates, risk spillovers

Procedia PDF Downloads 133
940 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is an important field in biometric technology, and more and more researchers are using EEG signals as a data source for biometrics; however, EEG-based biometrics also have some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distance from each data point in a cluster to every other point within the same cluster, and to all data points in the closest cluster, is determined. Silhouettes thus measure how well a data point was classified when it was assigned to a cluster and how well the clusters are separated, which makes them potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance of each EEG dataset. The main goals of this study were: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. The results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p<0.01); and (3) there was no significant difference in authentication performance among the feature sets (except for PE). In conclusion, the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
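
A minimal sketch of the clustering-plus-silhouette stage (an assumed implementation on simulated signals; the entropy feature shown is spectral entropy only, the simplest of the four):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def spectral_entropy(sig):
    psd = np.abs(np.fft.rfft(sig)) ** 2
    p = psd / psd.sum()                       # normalised power spectrum
    return -np.sum(p * np.log(p + 1e-12))     # Shannon entropy of spectrum

rng = np.random.default_rng(5)
# 22 subjects x 10 trials, 3 channels of 256-sample stand-in EEG each
X = np.array([[spectral_entropy(rng.standard_normal(256) + 0.05 * s)
               for _ in range(3)]
              for s in range(22) for _ in range(10)])

labels = KMeans(n_clusters=22, n_init=10, random_state=0).fit_predict(X)
print("silhouette score:", round(float(silhouette_score(X, labels)), 3))
```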

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 272
939 Optimization of Smart Beta Allocation by Momentum Exposure

Authors: J. B. Frisch, D. Evandiloff, P. Martin, N. Ouizille, F. Pires

Abstract:

Smart Beta strategies aim to be an asset management revolution with respect to classical cap-weighted indices. Indeed, these strategies allow better control of a portfolio's risk factors and an optimized asset allocation, by taking into account specific risks or by seeking to generate alpha by outperforming the indices called 'Beta'. Among the many strategies in independent use, this paper focuses on four: the Minimum Variance Portfolio, the Equal Risk Contribution Portfolio, the Maximum Diversification Portfolio, and the Equal-Weighted Portfolio. Their efficiency has been proven under constraints like momentum or market phenomena, suggesting a reconsideration of cap-weighting. To further increase strategy return efficiency, it is proposed here to compare their strengths and weaknesses inside time intervals corresponding to specific identifiable market phases, in order to define adapted strategies for pre-specified situations. Results are presented as performance curves of different combinations compared to a benchmark. If a combination outperforms the applicable benchmark in well-defined actual market conditions, it is preferred. It is mainly shown that such investment 'rules', based on both historical data and the evolution of the Smart Beta strategies, and implemented according to available specific market data, provide very interesting results, with higher return performance and lower risk. Such combinations have not been fully exploited yet, which justifies the present approach aimed at identifying the relevant elements that characterize them.
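
For concreteness, a minimal sketch of one of the four building blocks, the Minimum Variance Portfolio, in its unconstrained closed form w = Σ⁻¹1 / (1'Σ⁻¹1), compared with the Equal-Weighted Portfolio (the returns are simulated stand-ins, not market data):

```python
import numpy as np

rng = np.random.default_rng(6)
R = rng.multivariate_normal(                     # 4 assets, 1000 days
    mean=[4e-4, 3e-4, 5e-4, 2e-4],
    cov=np.diag([1e-4, 4e-4, 2.25e-4, 9e-4]),
    size=1000)

Sigma = np.cov(R.T)
ones = np.ones(Sigma.shape[0])
w_mv = np.linalg.solve(Sigma, ones)
w_mv /= w_mv.sum()                               # minimum-variance weights
w_ew = ones / ones.size                          # equal-weighted benchmark

for name, w in (("min-variance", w_mv), ("equal-weight", w_ew)):
    vol = np.sqrt(w @ Sigma @ w)                 # daily portfolio volatility
    print(f"{name}: vol = {vol:.5f}, weights = {w.round(3)}")
```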

Keywords: smart beta, minimum variance portfolio, equal risk contribution portfolio, maximum diversification portfolio, equal weighted portfolio, combinations

Procedia PDF Downloads 331
938 IoT and Deep Learning Approach for Growth Stage Segregation and Harvest Time Prediction of Aquaponic and Vermiponic Swiss Chards

Authors: Praveen Chandramenon, Andrew Gascoyne, Fideline Tchuenbou-Magaia

Abstract:

Aquaponics offers a simple, compelling solution to the world's food and environmental crises. This approach combines aquaculture (growing fish) with hydroponics (growing vegetables and plants without soil). Smart aquaponics explores the use of smart technology, including artificial intelligence and IoT, to assist farmers with better decision making and with online monitoring and control of the system. Identifying the growth stages of Swiss chard plants and predicting their harvest time are important for aquaponic yield management. This paper presents a comparative analysis of standard aquaponics and vermiponics (aquaponics with worms), grown in a controlled environment, implementing IoT and deep-learning-based growth stage segregation and harvest time prediction of Swiss chards before and after applying an optimal freshwater replenishment. Data collection, growth stage classification, and harvest time prediction were performed with and without water replenishment. The paper discusses the experimental design, the IoT and sensor communication architecture, the data collection process, image segmentation, the various regression and classification models, and the error estimation used in the project. It concludes with a comparison of results, including the best-performing models for growth stage segregation and harvest time prediction on the aquaponic and vermiponic testbeds with and without freshwater replenishment.

Keywords: aquaponics, deep learning, internet of things, vermiponics

Procedia PDF Downloads 58
937 A Constructivist Grounded Theory Study on the Impact of Automation on People and Gardening

Authors: Hamilton V. Niculescu

Abstract:

Following a three-year study of eighteen Irish people involved in growing vegetables in various community gardens around Dublin, Republic of Ireland, it was revealed that the addition of automated features aimed at improving agricultural practices was regarded as potentially beneficial and as a great tool for closely monitoring climate conditions inside the greenhouses. The participants were provided with a free custom-built mobile app through which they could remotely monitor and control features such as irrigation, air ventilation, and windows, to ensure optimal growing conditions for the vegetables inside purpose-built greenhouses. While initial interest was generally high, within weeks the participants' level of interaction with the enclosures slowly declined. By employing a constructivist grounded theory methodology, following focus group discussions, in-depth semi-structured interviews, and observations, it was revealed that participants' trust in newer technologies, and renewables in particular, was low. There are various reasons for this, but because the participants in this study consisted mainly of working-class people, it can be argued that lack of education and knowledge were the main barriers acting against the adoption of the innovations. Consequently, most participants eventually decided to "set and forget" the systems in automatic working mode, indicating that the immediate effect of introducing people to assistive technologies also introduced some unintended consequences into their lifestyle. It is argued that this occurrence also indicates that people initially "read" newer technologies and only adopt those features that they find useful and less intrusive with regard to their current lifestyle.

Keywords: automation, communication, greenhouse, sustainable

Procedia PDF Downloads 114
936 Analysis of the Influence of Frequency Variation on the Characterization of Nano-Particles in the Pretreatment of Bioethanol from Oil Palm Stem (Elaeis guineensis Jacq.) Using the Sonication Method with Alkaline Peroxide Activators to Improve Cellulose Content

Authors: Luristya Nur Mahfut, Nada Mawarda Rilek, Ameiga Cautsarina Putri, Mujaroh Khotimah

Abstract:

The use of bioethanol from lignocellulosic material has begun to be developed. In Indonesia, the most abundant lignocellulosic material is oil palm stem, which contains 32.22% cellulose; Indonesia produces approximately 300,375,000 tons of palm stem each year. To produce bioethanol from lignocellulosic material, the first process is pretreatment. However, current lignocellulosic pretreatment methods are ineffective. This is related to particle size and suboptimal pretreatment methods, which lead to insufficient breakdown of lignin; consequently, the increase in cellulose content is not significant, resulting in a low bioethanol yield. To solve this problem, this research applied an ultrasonication pretreatment process in order to produce pulp with nano-sized particles and thereby obtain a higher ethanol yield from palm stem. The research used a randomized block design (RAK) composed of one factor, the ultrasonic wave frequency, at three levels (30 kHz, 40 kHz, and 50 kHz), with the NaOH concentration held constant. The analysis examined the influence of the wave frequency on the increase in cellulose content and the change of particle size to the nanometer scale during pretreatment, using the Particle Size Analyzer (PSA) and Chesson methods. The results and the best treatment were analyzed using ANOVA and the BNT (least significant difference) test at a 5% significance level. The best treatment was obtained with combination X3 (sonication frequency of 50 kHz), giving 19.6% lignin, 59.49% cellulose, and 11.8% hemicellulose, with a particle size of 385.2 nm (18.8%).
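
A minimal sketch of the statistical step (one-way ANOVA across the three frequency levels; the replicate values below are invented placeholders, not the study's measurements):

```python
from scipy import stats

cellulose = {                     # % cellulose per replicate, assumed data
    "30 kHz": [52.1, 53.4, 51.8],
    "40 kHz": [55.6, 56.2, 54.9],
    "50 kHz": [59.1, 59.8, 59.5],
}
F, p = stats.f_oneway(*cellulose.values())
print(f"F = {F:.2f}, p = {p:.4g}")   # p < 0.05 -> frequency effect significant
```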

Keywords: bioethanol, pretreatment, oil palm stem, cellulose

Procedia PDF Downloads 320
935 Investigation of Yard Seam Workings for the Proposed Newcastle Light Rail Project

Authors: David L. Knott, Robert Kingsland, Alistair Hitchon

Abstract:

The proposed Newcastle Light Rail is a key part of the revitalisation of Newcastle, NSW, and will provide a frequent and reliable travel option throughout the city centre, running from Newcastle Interchange at Wickham to Pacific Park in Newcastle East, a total of 2.7 kilometres in length. Approximately one-third of the route, along Hunter and Scott Streets, is subject to potential shallow underground mine workings. The extent of mining and the seams mined are unclear. Convicts mined the Yard Seam and the overlying Dudley (Dirty) Seam in Newcastle sometime between 1800 and 1830, and the Australian Agricultural Company mined the Yard Seam from about 1831 to the 1860s in the alignment area. The seam was about 3 feet (0.9 m) thick and was therefore known as the Yard Seam. Mine maps do not exist for the workings in the area of interest, and it was unclear whether both seams or just one had been mined. Information from 1830s geological mapping and other data showing shaft locations was used along Scott Street, and information from the 1908 Royal Commission was used along Hunter Street, to develop an investigation program. In addition, mining had been encountered at several sites to the south of the alignment at depths of about 7 m to 25 m. Based on the anticipated depths of mining, it was considered prudent to assess the potential for sinkhole development beneath the proposed alignment and the realigned underground utilities, and to obtain approval for the work from Subsidence Advisory NSW (SA NSW). The assessment consisted of a desktop study followed by a subsurface investigation. Four boreholes were drilled along Scott Street and three along Hunter Street using HQ coring techniques in the rock. The placement of the boreholes was complicated by the presence of utilities in the roadway and by traffic constraints. All the boreholes encountered the Yard Seam, with conditions varying from unmined coal to an open void, indicating the presence of mining. The geotechnical information obtained from the boreholes was expanded using various downhole techniques, including a borehole camera, borehole sonar, and downhole geophysical logging. The camera provided views of the rock and helped to explain zones of no recovery; timber props were also observed within the void. Borehole sonar was performed in the void and provided an indication of room size as well as the presence of timber props within the room. Downhole geophysical logging was performed in the boreholes to measure density, natural gamma, and borehole deviation. These data helped confirm that all the mining was in the Yard Seam and that the overlying Dudley Seam had been eroded over much of the alignment in the past. In summary, the investigation allowed the potential for sinkhole subsidence to be evaluated and a mitigation approach to be developed, leading to conditional approval by SA NSW. It also confirmed the presence of mining in the Yard Seam, the depth to the seam, and the mining conditions, and indicated that subsidence did not appear to have occurred in the past.

Keywords: downhole investigation techniques, drilling, mine subsidence, yard seam

Procedia PDF Downloads 305
934 Distributed Generation Connection to the Network: Obtaining Stability Using Transient Behavior

Authors: A. Hadadi, M. Abdollahi, A. Dustmohammadi

Abstract:

The growing use of DGs (distributed generators) in distribution networks provides many advantages but also causes new problems, which should be anticipated and solved with appropriate measures. One of these problems is transient voltage drops and short circuits in the electrical network in the presence of distributed generation, which can lead to instability. The appearance of a short circuit can cause loss of generator synchronism; if the generator is able to recover synchronism after the faulted element is removed, the system remains stable. In order to increase system reliability and generator lifetime, strategies should be planned that apply even in situations where a fault prevents generators from being separated. In this paper, a fault current limiter is installed to prevent DG separation from the grid when a fault occurs. Furthermore, an innovative objective function is applied to determine the optimal impedance of the fault current limiter in order to improve the transient stability of the distributed generation. The fault current limiter can prevent the generator rotor's sudden acceleration after fault occurrence and thereby improve the network's transient stability by reducing the current flow in a fast and effective manner. In fact, by inserting the impedance created by the fault current limiter into the path of the current injected by the DG to the fault location when a short circuit happens, the critical fault clearing time improves remarkably. The protective relay therefore has more time to clear the fault and isolate the faulted zone without any instability. Finally, different transient scenarios for the connection of small-scale synchronous generators to the distribution network are presented.
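
The mechanism can be illustrated with the textbook single-machine swing equation (a deliberately simplified stand-in for the paper's network simulations; all parameter values are assumed): inserting extra reactance in the fault path keeps the electrical power transfer higher during the fault, so the rotor accelerates less before clearing.

```python
import numpy as np

H, f0 = 4.0, 50.0                  # inertia constant (s), frequency (Hz)
Pm, E, V = 0.8, 1.1, 1.0           # mech. power, internal & bus voltage (pu)
X1 = X2 = 0.25                     # reactances either side of the fault point

def peak_angle(Xf, t_clear=0.25, dt=1e-4, t_end=1.5):
    """Peak rotor angle after a midpoint fault through reactance Xf."""
    Pmax_pre = E * V / (X1 + X2)
    # delta-network reduction of a midpoint shunt fault through Xf
    Pmax_fault = E * V / (X1 + X2 + X1 * X2 / Xf) if Xf > 0 else 0.0
    delta = np.arcsin(Pm / Pmax_pre)        # initial operating angle
    omega, peak = 0.0, delta
    for t in np.arange(0.0, t_end, dt):
        Pmax = Pmax_fault if t < t_clear else Pmax_pre
        omega += np.pi * f0 / H * (Pm - Pmax * np.sin(delta)) * dt
        delta += omega * dt
        peak = max(peak, delta)
    return np.degrees(peak)

print("peak angle, bolted fault (no FCL):", round(peak_angle(1e-6), 1), "deg")
print("peak angle, fault through FCL    :", round(peak_angle(0.30), 1), "deg")
```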

Keywords: critical clearing time, fault current limiter, synchronous generator, transient stability, transient states

Procedia PDF Downloads 188
933 Physiological Normoxia and Cellular Adhesion of Diffuse Large B-Cell Lymphoma Primary Cells: Real-Time PCR and Immunohistochemistry Study

Authors: Kamila Duś-Szachniewicz, Kinga M. Walaszek, Paweł Skiba, Paweł Kołodziej, Piotr Ziółkowski

Abstract:

Cell adhesion is of fundamental importance in cell communication, signaling, and motility, and its dysfunction occurs prevalently during cancer progression. Knowledge of the molecular and cellular processes involved in the adhesion abnormalities of cancer cells has greatly increased, focused mainly on cellular adhesion molecules (CAMs) and the tumor microenvironment. Unfortunately, most of the data regarding CAM expression come from studies of cells maintained at a standard oxygen concentration of 21%, while emerging evidence suggests that culturing cells in ambient air is far from physiological: oxygen in human tissues in fact ranges from 1 to 11%. The aim of this study was to compare the effects of physiological lymph node normoxia (5% O2) and hyperoxia (21% O2) on the expression of cellular adhesion molecules in primary diffuse large B-cell lymphoma (DLBCL) cells isolated from 10 lymphoma patients. Quantitative RT-PCR and immunohistochemistry confirmed the differential expression of several CAMs, including ICAM, CD83, CD81, and CD44, depending on the oxygen level. Our findings also suggest that DLBCL cells maintained at ambient O2 (21%) exhibit a reduced growth rate and migration ability compared to cells growing under normoxic conditions. Taking all the observations into account, we emphasize the need to identify optimal human cell culture conditions that mimic the physiological aspects of tumor growth and differentiation.

Keywords: adhesion molecules, diffuse large B-cell lymphoma, physiological normoxia, quantitative RT-PCR

Procedia PDF Downloads 268
932 Impact of Drought on Farm Level Income in the United States

Authors: Anil Giri, Kyle Lovercamp, Sankalp Sharma

Abstract:

Farm-level incomes fluctuate significantly due to extreme weather events such as drought. In light of recent extreme weather events, it is important to understand the implications of such events, flood and drought, for farm-level incomes. This study examines the variation in farm-level incomes in the United States between drought and non-drought years. Factoring in the heterogeneity of different enterprises (crop, livestock) and geography, the paper analyzes the impact of drought on farm-level incomes at the state and national levels. Preliminary results show that the livestock industry is affected more, through the lag in production of its feed input, crops. Furthermore, preliminary results also show that while crop producers are not much affected by drought, as the price and quantity effects work in opposite directions with similar magnitudes, this was not the case for livestock and horticulture enterprises. Results also showed that even when the price effect was not as high, the crop insurance component helped absorb much of the shock for crop producers. Finally, the effect was heterogeneous across states, greater in the coastal states than in the Midwest region. This study should generate considerable interest from policy makers across the world, as some countries are actively seeking to increase subsidies in their agriculture sector, and it shows how subsidies absorb the shocks for some enterprises more than others. It should also give economists insight for designing or recommending policies that are optimal given the production levels of different enterprises in different countries.

Keywords: farm level income, United States, crop, livestock

Procedia PDF Downloads 273