Search results for: logistics network optimization
5280 Navigating Government Finance Statistics: Effortless Retrieval and Comparative Analysis through Data Science and Machine Learning
Authors: Kwaku Damoah
Abstract:
This paper presents a methodology and software application (App) designed to empower users to access, retrieve, and comparatively explore data within the hierarchical network framework of the Government Finance Statistics (GFS) system. It explores the ease of navigating the GFS system and identifies the gaps filled by the new methodology and App. The GFS embodies a complex Hierarchical Network Classification (HNC) structure, encapsulating institutional units, revenues, expenses, assets, liabilities, and economic activities. Navigating this structure demands specialized knowledge, experience, and skill, posing a significant challenge for effective analytics and fiscal policy decision-making. Many professionals encounter difficulties deciphering these classifications, hindering confident utilization of the system. This accessibility barrier obstructs a vast number of professionals, students, policymakers, and the public from leveraging the abundant data and information within the GFS. Leveraging the R programming language, data science analytics, and machine learning, an efficient methodology enabling users to access, navigate, and conduct exploratory comparisons was developed. The machine learning Fiscal Analytics App (FLOWZZ) democratizes access to advanced analytics through its user-friendly interface, breaking down expertise barriers.
Keywords: data science, data wrangling, drilldown analytics, government finance statistics, hierarchical network classification, machine learning, web application
Procedia PDF Downloads 70
5279 SISSLE in Consensus-Based Ripple: Some Improvements in Speed, Security, Last Mile Connectivity and Ease of Use
Authors: Mayank Mundhra, Chester Rebeiro
Abstract:
Cryptocurrencies are rapidly finding wide application in areas such as real-time gross settlement and payment systems. Ripple is a cryptocurrency that has gained prominence with banks and payment providers. It solves the Byzantine Generals' Problem with its Ripple Protocol Consensus Algorithm (RPCA), where each server maintains a list of servers, called the Unique Node List (UNL), that represents the network for that server and will not collectively defraud it. The server believes that the network has come to a consensus when the members of its UNL come to a consensus on a transaction. In this paper, we improve Ripple to achieve better speed, security, last-mile connectivity, and ease of use. We implement guidelines and automated systems for building and maintaining UNLs for resilience, robustness, improved security, and efficient information propagation. We enhance the system to ensure that each server receives information from across the whole network rather than just from its UNL members. We also introduce the paradigm of UNL overlap as a function of information propagation and the trust a server assigns to its own UNL. Our design not only reduces vulnerabilities such as eclipse attacks, but also makes it easier to identify malicious behaviour and entities attempting to fraudulently double-spend or stall the system. We provide experimental evidence of the benefits of our approach over the current Ripple scheme: we observe a ≥ 4.97x speedup and a 98.22x improvement in success rate for information propagation, and a ≥ 3.16x speedup and a 51.70x improvement in success rate for consensus.
Keywords: Ripple, Kelips, unique node list, consensus, information propagation
Procedia PDF Downloads 145
5278 Value Chain Analysis of the Seabass Industry in Doumen
Authors: Tiantian Ma
Abstract:
The district of Doumen, Zhuhai, has a sophisticated seabass value chain. However, unlike typical Global Value Chain (GVC) industries, the seabass value chain in Doumen is highly domestic in terms of both production and consumption. Still, since this highly industrialized and capital-intensive industry involves many off-farm segments both upstream and downstream, this paper utilizes the method of value chain analysis. Specifically, the paper concentrates on two research goals: 1) mapping the seabass value chain, identifying actors in the hatchery, fish feed, fishpond, processing, logistics, and distribution segments, and 2) a SWOT analysis of the seabass industry in Doumen, covering issues such as inadequate waste disposal, marketing strategy, and the supportive role of the government. In general, the seabass industry in Doumen is a sophisticated but not yet comprehensive value chain. It has achieved much in industrializing aqua-food products and fostering development, but improvements remain to be made, such as upholding environmental sustainability and promoting the brand better.
Keywords: agricultural value chain, fish farming, regional development, SWOT analysis, value chain mapping
Procedia PDF Downloads 152
5277 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data
Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora
Abstract:
Optimizing the drilling process for cost and efficiency requires the optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximizing the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model beforehand, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase, geological and historical drilling data are aggregated. Afterward, the top-rated wells, as a function of high-instance ROP, are distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data is then consolidated into a heat map as a function of ROP. A more optimal ROP performance is identified through the heat map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
Keywords: drilling optimization, geological formations, machine learning, rate of penetration
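The conditioned-mean step of phase one can be sketched as follows: each drilling parameter at a target ROP is an inverse-distance-weighted mean over offset-well records. The well records, target ROP, and distance power below are illustrative assumptions, not the paper's data.

```python
# Sketch of the phase-one IDW step: estimate drilling parameters (WOB, RPM, GPM)
# for a target ROP as an inverse-distance-weighted mean over offset-well records.
# Field names and sample values are invented for illustration.

def idw_parameters(records, target_rop, power=2, eps=1e-9):
    """Conditioned mean of each parameter, weighted by 1/distance^power in ROP."""
    weights = []
    for rec in records:
        d = abs(rec["rop"] - target_rop)
        if d < eps:                      # an exact ROP match dominates entirely
            return {k: rec[k] for k in ("wob", "rpm", "gpm")}
        weights.append(1.0 / d ** power)
    total = sum(weights)
    return {
        key: sum(w * rec[key] for w, rec in zip(weights, records)) / total
        for key in ("wob", "rpm", "gpm")
    }

offset_wells = [
    {"rop": 60.0, "wob": 20.0, "rpm": 120.0, "gpm": 600.0},
    {"rop": 80.0, "wob": 25.0, "rpm": 140.0, "gpm": 650.0},
    {"rop": 100.0, "wob": 30.0, "rpm": 160.0, "gpm": 700.0},
]

print(idw_parameters(offset_wells, 90.0))
```

Records closest to the target ROP dominate the mean, which is the behavior the phase-one cross-plot conditioning relies on.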
Procedia PDF Downloads 131
5276 Integrated Free Space Optical Communication and Optical Sensor Network System with Artificial Intelligence Techniques
Authors: Yibeltal Chanie Manie, Zebider Asire Munyelet
Abstract:
5G and 6G technology offers enhanced quality of service with high data transmission rates, which necessitates the implementation of the Internet of Things (IoT) in 5G/6G architecture. In this paper, we propose the integration of free-space optical communication (FSO) with fiber sensor networks for IoT applications. FSO is gaining popularity as an effective alternative technology given the limited availability of radio frequency (RF) spectrum, owing to its flexibility, high achievable optical bandwidth, and low power consumption in communication applications such as disaster recovery, last-mile connectivity, drones, surveillance, backhaul, and satellite communications. Hence, high-speed FSO is an optimal choice for wireless networks to satisfy the full potential of 5G/6G technology, offering speeds of 100 Gbit/s or more in IoT applications. Moreover, machine learning must be integrated into the design, planning, and optimization of future optical wireless communication networks in order to actualize this vision of intelligent processing and operation. In addition, fiber sensors are important to achieve real-time, accurate, and smart monitoring in IoT applications. Finally, we propose deep learning techniques to estimate the strain changes and peak wavelengths of multiple fiber Bragg grating (FBG) sensors using only the FBG spectra obtained from a real experiment.
Keywords: optical sensor, artificial intelligence, Internet of Things, free-space optics
Procedia PDF Downloads 63
5275 GIS-Based Identification of Overloaded Distribution Transformers and Calculation of Technical Electric Power Losses
Authors: Awais Ahmed, Javed Iqbal
Abstract:
Pakistan has for many years been facing extreme challenges from an energy deficit due to a shortage of power generation compared to increasing demand. Part of this energy deficit is contributed by the power lost in the transmission and distribution network. Unfortunately, distribution companies are not equipped with modern technologies and methods to identify and eliminate these losses. According to estimates, total energy lost in the early 2000s was between 20 and 26 percent. To address this issue, the present research study was designed with the objectives of developing a standalone GIS application for distribution companies with the capability of loss calculation as well as identification of overloaded transformers. For this purpose, the Hilal Road feeder in Faisalabad Electric Supply Company (FESCO) was selected as the study area. An extensive GPS survey was conducted to identify each consumer, linking it to the secondary pole of the transformer, geo-referencing equipment, and documenting conductor sizes. To identify overloaded transformers, the accumulated kWh readings of the consumers on a transformer were compared with a threshold kWh. Technical losses of the 11 kV and 220 V lines were calculated using data from the substation and the resistance of the network calculated from the geo-database. To automate the process, a standalone GIS application was developed using ArcObjects with engineering analysis capabilities. The application uses the GIS database developed for the 11 kV and 220 V lines to display and query spatial data and presents results in the form of graphs. The results show technical losses of about 14% on both the high-tension (HT) and low-tension (LT) networks, while 4 out of 15 general duty transformers were found to be overloaded. The study shows that GIS can be a very effective tool for distribution companies in the management and planning of their distribution network.
Keywords: geographical information system (GIS), power distribution, distribution transformers, technical losses, GPS, spatial decision support system (SDSS)
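The overload check described above reduces to summing each transformer's accumulated consumer kWh and comparing it against a threshold. A minimal sketch, with invented consumers, transformer assignments, and thresholds (the study does not publish these values):

```python
# Sketch of the overload check: the accumulated kWh of all consumers linked to
# a transformer is compared against that transformer's threshold kWh.
# Readings, assignments, and thresholds below are illustrative assumptions.

def overloaded_transformers(consumer_kwh, transformer_of, threshold_kwh):
    """Return ids of transformers whose summed consumer kWh exceeds their threshold."""
    totals = {}
    for consumer, kwh in consumer_kwh.items():
        tx = transformer_of[consumer]
        totals[tx] = totals.get(tx, 0.0) + kwh
    return sorted(tx for tx, total in totals.items() if total > threshold_kwh[tx])

consumer_kwh = {"c1": 420.0, "c2": 510.0, "c3": 380.0, "c4": 95.0}
transformer_of = {"c1": "T1", "c2": "T1", "c3": "T2", "c4": "T2"}
threshold_kwh = {"T1": 900.0, "T2": 600.0}   # per-transformer thresholds

print(overloaded_transformers(consumer_kwh, transformer_of, threshold_kwh))
# T1 total = 930.0 > 900.0 -> overloaded; T2 total = 475.0 -> within limits
```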
Procedia PDF Downloads 376
5274 Internet Optimization by Negotiating Traffic Times
Authors: Carlos Gonzalez
Abstract:
This paper describes a system to optimize the use of the internet by clients requiring video downloads at peak hours. The system consists of a web server belonging to a provider of video content, a provider of internet communications, and a software application running on a client's computer. The client, using the application software, communicates to the video provider a list of the client's future video demands. The video provider calculates which videos are going to be most in demand for download in the immediate future and proceeds to request from the internet provider the most suitable hours for the downloading. The download times are sent to the application software, which uses the pre-established hours negotiated between the video provider and the internet provider to download those videos. The videos are saved in a special protected section of the user's hard disk, accessed only by the application software on the client's computer. When the client is ready to watch a video, the application searches the list of videos currently stored in that area of the hard disk; if the video exists, it is used directly without the need for internet access. We found that the best way to optimize the download traffic of videos is by negotiation between the internet communication provider and the video content provider.
Keywords: internet optimization, video download, future demands, secure storage
Procedia PDF Downloads 136
5273 Implementation of the Interlock Protocol to Enhance Security in Unmanned Aerial Vehicles
Authors: Vikram Prabhu, Mohammad Shikh Bahaei
Abstract:
This paper depicts the implementation of a new infallible technique to protect an Unmanned Aerial Vehicle (UAV) from cyber-attacks. A UAV can be vulnerable to cyber-attacks because jammers or eavesdroppers on the network pose a threat to its security. In the field of network security, there are quite a few protocols that can be used to establish a secure connection between UAVs and their operators. In this paper, we discuss how the Interlock Protocol can be implemented to foil a man-in-the-middle attack; in this case, Wireshark is used as the sniffer (man-in-the-middle). This paper also presents a comparison between the Interlock Protocol and TCP-based protocols, using cryptcat and netcat, and at the same time highlights why the Interlock Protocol is the most efficient security protocol for preventing eavesdropping over the communication channel.
Keywords: interlock protocol, Diffie-Hellman algorithm, unmanned aerial vehicles, control station, man-in-the-middle attack, Wireshark
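The core of the interlock idea is that, after a key exchange, each side transmits only half of its ciphertext, waits for the peer's half, then sends the rest; an interceptor holding two different session keys cannot decrypt and re-encrypt a half-message in time. A toy sketch of the half-splitting follows; the XOR "cipher" is purely illustrative and is not the scheme used in the paper.

```python
# Toy interlock sketch: split a ciphertext into the two halves sent in
# separate rounds. The keystream cipher below is a stand-in for illustration
# only, not a secure construction and not the paper's implementation.
import hashlib

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR keystream derived by hash chaining; symmetric, so it also decrypts
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(stream).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

def interlock_halves(key: bytes, message: bytes):
    """Return the two ciphertext halves transmitted in separate protocol rounds."""
    ct = toy_encrypt(key, message)
    mid = len(ct) // 2
    return ct[:mid], ct[mid:]

key = b"session-key"
first, second = interlock_halves(key, b"UAV waypoint update")
print("round-1 half:", first.hex())
print("round-2 half:", second.hex())
```

Neither half alone decrypts to anything meaningful; only the reassembled ciphertext does, which is what denies the man-in-the-middle a usable window.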
Procedia PDF Downloads 301
5272 Optimized Brain Computer Interface System for Unspoken Speech Recognition: Role of Wernicke Area
Authors: Nassib Abdallah, Pierre Chauvet, Abd El Salam Hajjar, Bassam Daya
Abstract:
In this paper, we propose an optimized brain-computer interface (BCI) system for unspoken speech recognition, based on the fact that the construction of unspoken words relies strongly on the Wernicke area, situated in the temporal lobe. Our BCI system has four modules: (i) the EEG acquisition module, based on a non-invasive headset with 14 electrodes; (ii) the preprocessing module, which removes noise and artifacts using the Common Average Reference method; (iii) the feature extraction module, using the Wavelet Packet Transform (WPT); and (iv) the classification module, based on a one-hidden-layer artificial neural network. The present study compares the recognition accuracy of 5 Arabic words when using all the headset electrodes or only the 4 electrodes situated near the Wernicke area, as well as the selection effect of the subbands produced by the WPT module. After applying the artificial neural network to the produced database, we obtain, on the test dataset, an accuracy of 83.4% with all the electrodes and all the subbands of the 8-level WPT decomposition. However, by using only the 4 electrodes near the Wernicke area and the 6 middle subbands of the WPT, we obtain a large reduction of the dataset size, to approximately 19% of the total dataset, with an accuracy rate of 67.5%. This reduction appears particularly important for the design of a low-cost, simple-to-use BCI trained for several words.
Keywords: brain-computer interface, speech recognition, artificial neural network, electroencephalography, EEG, Wernicke area
Procedia PDF Downloads 271
5271 Feature Based Unsupervised Intrusion Detection
Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein
Abstract:
The goal of a network-based intrusion detection system is to classify network traffic activities into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps improve the efficiency, performance, and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we used the new NSL-KDD dataset, a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (Normal, Attack) was implemented. The Weka framework, a Java-based open-source package comprising a collection of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time in comparison with using the full feature set of the dataset with the same algorithm.
Keywords: information gain (IG), intrusion detection system (IDS), k-means clustering, Weka
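The information-gain step above ranks features by IG(Y; X) = H(Y) - H(Y|X) and keeps the top-ranked ones before clustering. A minimal sketch of that computation; the tiny dataset is made up for illustration (the paper uses NSL-KDD):

```python
# Sketch of information-gain feature ranking: a feature's score is the class
# entropy minus the weighted entropy of the class within each feature value.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """H(labels) minus the conditional entropy of labels given the feature."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

labels = ["normal", "normal", "attack", "attack"]
perfectly_informative = [0, 0, 1, 1]   # splits the classes exactly
uninformative = [0, 1, 0, 1]           # independent of the class

print(information_gain(perfectly_informative, labels))  # 1.0
print(information_gain(uninformative, labels))          # 0.0
```

Features scoring near zero contribute nothing to separating normal from attack traffic and are the ones dropped in the reduction step.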
Procedia PDF Downloads 296
5270 Off-Topic Text Detection System Using a Hybrid Model
Authors: Usama Shahid
Abstract:
Be it written documents, news columns, or students' essays, verifying the content can be a time-consuming task. Apart from spelling and grammar mistakes, the proofreader is also supposed to verify whether the content included in the essay or document is relevant. The irrelevant content in any document or essay is referred to as off-topic text, and in this paper, we address the problem of off-topic text detection in a document using machine learning techniques. Our study aims to identify the off-topic content of a document using an echo state network model, and we also compare it with other models. A previous study used Convolutional Neural Networks and TF-IDF to detect off-topic text. We rearrange the existing datasets and take new classifiers along with new word embeddings, implementing them on existing and new datasets in order to compare the results with the previously existing CNN model.
Keywords: off-topic, text detection, echo state network, machine learning
Procedia PDF Downloads 85
5269 Influence of Fermentation Conditions on Humic Acids Production by Trichoderma viride Using an Oil Palm Empty Fruit Bunch as the Substrate
Authors: F. L. Motta, M. H. A. Santana
Abstract:
Humic acids (HA) were produced by a Trichoderma viride strain under submerged fermentation in a medium based on the oil palm empty fruit bunch (EFB), and the main variables of the process were optimized using response surface methodology. A temperature of 40°C and concentrations of 50 g/L EFB, 5.7 g/L potato peptone, and 0.11 g/L (NH4)2SO4 were the optimum levels of the variables that maximize HA production within the physicochemical and biological limits of the process. The optimized conditions led to an experimental HA concentration of 428.4±17.5 mg/L, which validated the prediction of 412.0 mg/L from the statistical model. This optimization increased the HA production previously reported in the literature about 7-fold. Additionally, the time profiles of HA production and fungal growth confirmed our previous findings that HA production preferably occurs during fungal sporulation. The present study demonstrated that T. viride successfully produced HA via the submerged fermentation of EFB, and the process parameters were successfully optimized using a statistics-based response surface model. To the best of our knowledge, the present work is the first report on the optimization of HA production from EFB by a biotechnological process, whose feasibility was only pointed out in previous works.
Keywords: empty fruit bunch, humic acids, submerged fermentation, Trichoderma viride
Procedia PDF Downloads 306
5268 An Application of Fuzzy Analytical Network Process to Select a New Production Base: An AEC Perspective
Authors: Walailak Atthirawong
Abstract:
By the end of 2015, the Association of Southeast Asian Nations (ASEAN) countries proclaimed their transformation into the next stage of an economic era by having a single market and production base called the ASEAN Economic Community (AEC). One objective of the AEC is to establish ASEAN as a single market and one production base, making ASEAN a highly competitive economic region with new mechanisms. As a result, it will open more opportunities to enterprises in both trade and investment, offering a competitive market of US$2.6 trillion and over 622 million people. Location decisions play a key role in achieving corporate competitiveness. Hence, it may be necessary for enterprises to redesign their supply chains by establishing a new production base with low labor cost, high labor skill, and abundant available labor. This strategy will help companies, especially in the apparel industry, maintain a competitive position in the global market. Therefore, this paper proposes a generic model for location selection decisions for the Thai apparel industry using the Fuzzy Analytical Network Process (FANP). Myanmar, Vietnam, and Cambodia are considered as alternative locations, based on interviews with experts in this industry who have planned to enlarge their businesses into AEC countries. The contribution of this paper lies in proposing an approach model that is more practical and trustworthy for top management in making a decision on location selection.
Keywords: apparel industry, ASEAN Economic Community (AEC), Fuzzy Analytical Network Process (FANP), location decision
Procedia PDF Downloads 236
5267 Comparative Analysis of Hybrid and Non-hybrid Cooled 185 KW High-Speed Permanent Magnet Synchronous Machine for Air Suspension Blower
Authors: Usman Abubakar, Xiaoyuan Wang, Sayyed Haleem Shah, Sadiq Ur Rahman, Rabiu Saleh Zakariyya
Abstract:
High-speed permanent magnet synchronous machines (HSPMSMs) are used in different industrial applications, such as blowers and compressors, as a result of their superb performance. Nevertheless, the excessive temperature rise of both the winding and the permanent magnet (PM) is a substantial problem for a high-power HSPMSM, affecting its lifespan and performance. According to the literature, an HSPMSM with a hybrid cooling configuration has a much lower temperature rise than one with non-hybrid cooling. This paper presents the design of a 185 kW, 26,000 rpm machine with two different cooling configurations: a hybrid configuration (forced air and a housing spiral water jacket) and a non-hybrid configuration (forced air cooling assisted by the winding's potting material and the sleeve's material) to enhance the heat dissipation of the winding and PM, respectively. First, the machine's electromagnetic design is conducted by the finite element method to accurately account for machine losses. Then the machine's cooling configurations are introduced, and their effectiveness is validated by a lumped-parameter thermal network (LPTN). The investigation shows that using potting and sleeve materials to assist the non-hybrid cooling configuration brings the machine's winding and PM temperatures close to those of the hybrid cooling configuration. Therefore, the machine with non-hybrid cooling is prototyped and tested, due to its simplicity and lower energy consumption, while still maintaining the lifespan and performance of the HSPMSM.
Keywords: airflow network, axial ventilation, high-speed PMSM, thermal network
Procedia PDF Downloads 231
5266 Simulating Lean and Green Correlation in Supply Chain Context
Authors: Rachid Benmoussa, Fatima Ezzahra Essaber, Roland De Guio, Fatima Zahra Ben Moussa
Abstract:
Implementing green practices in supply chain management is a complex task, mainly because ecological, economic, and operational goals are usually in conflict. Green practices may thus face companies' reluctance, because managers can consider their implementation an obvious degradation of lean performance. To implement lean and green practices successfully, companies need relevant decision-making tools to highlight the correlation between them. To contribute to this issue, this work tries to answer the following research question: how can simulation be used to assess the correlation (antagonism or convergence) between lean and green goals? To answer this question, we propose in this paper a simulation-based process that measures the correlation between two variables in general. To demonstrate its relevance, an academic logistics case study is used to illustrate all its stages. It shows, for example, that the lean goal 'stock' and the green goal 'CO₂ emission' are not linearly correlated.
Keywords: simulation, lean, green, supply chain
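The correlation-assessment step reduces to running the simulation at several settings, recording a lean indicator (e.g., stock level) and a green indicator (e.g., CO₂ emissions) per run, and computing their correlation. A minimal sketch with invented simulation outputs; the paper's actual case-study numbers are not reproduced here:

```python
# Sketch of the correlation step: Pearson correlation between a lean indicator
# and a green indicator collected over simulation runs. The run data below are
# invented for illustration.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One row per simulation run: (stock level, CO2 emission)
runs = [(120, 14.1), (95, 15.2), (80, 13.8), (60, 14.9), (40, 14.3)]
stock = [r[0] for r in runs]
co2 = [r[1] for r in runs]

print(f"correlation(stock, CO2) = {pearson(stock, co2):.2f}")
# a value near zero supports the "not linearly correlated" conclusion
```

A coefficient near +1 or -1 would instead signal convergence or antagonism between the two goals.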
Procedia PDF Downloads 501
5265 Optimization of Cacao Fermentation in Davao Philippines Using Sustainable Method
Authors: Ian Marc G. Cabugsa, Kim Ryan Won, Kareem Mamac, Manuel Dee, Merlita Garcia
Abstract:
An optimized cacao fermentation technique was developed for the cacao farmers of Davao City, Philippines. Cacao samples with weights ranging from 150-250 kilograms were collected from various cacao farms in Davao City and Zamboanga City, Philippines. Different fermentation techniques were used, varying the design of the sweat box, pre-fermentation conditioning, the number of days of fermentation, and the number of turns. As the beans fermented, their temperature was regularly monitored using a digital thermometer. The resulting cacao beans were assessed by physical and chemical means. For the physical assessment, the bean cut test, bean count test, and sensory test were used. Quantification of theobromine, caffeine, and antioxidants (as quercetin equivalents) was used for the chemical assessment. Both theobromine and caffeine were analyzed by an HPLC method, while the antioxidants were analyzed spectrometrically. To arrive at the best fermentation procedure, the different assessments were given priority coefficients, wherein the physical tests (taste, cut, and bean count) were prioritized over the results of the chemical tests. The result of the study is an optimized fermentation protocol that is readily adaptable and transferable to any cacao cooperative or group in Mindanao, or even the Philippines as a whole.
Keywords: cacao, fermentation, HPLC, optimization, Philippines
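The priority-coefficient ranking above can be sketched as a weighted score per candidate protocol, with the physical assessments weighted above the chemical ones. The weights, scores, and protocol names below are invented for illustration; the abstract does not give the actual coefficients.

```python
# Sketch of the priority-coefficient step: each candidate fermentation protocol
# gets a weighted score; physical tests carry more weight than chemical tests.
# All numbers are illustrative assumptions.

weights = {"taste": 0.30, "cut": 0.25, "count": 0.20,          # physical (prioritized)
           "theobromine": 0.10, "caffeine": 0.05, "antioxidant": 0.10}

def protocol_score(scores):
    return sum(weights[k] * scores[k] for k in weights)

protocols = {
    "6 days, 2 turns": {"taste": 8, "cut": 7, "count": 9,
                        "theobromine": 6, "caffeine": 7, "antioxidant": 8},
    "5 days, 3 turns": {"taste": 7, "cut": 8, "count": 8,
                        "theobromine": 8, "caffeine": 6, "antioxidant": 7},
}

best = max(protocols, key=lambda p: protocol_score(protocols[p]))
print(best, round(protocol_score(protocols[best]), 2))  # 6 days, 2 turns 7.7
```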
Procedia PDF Downloads 452
5264 Reconfigurable Ubiquitous Computing Infrastructure for Load Balancing
Authors: Khaled Sellami, Lynda Sellami, Pierre F. Tiako
Abstract:
Ubiquitous computing helps make data and services available to users anytime and anywhere, which makes the cooperation of devices a crucial need. In return, such cooperation causes an overload of the devices and/or networks, resulting in network malfunction and suspension of activities. Our goal in this paper is to propose an approach to device reconfiguration that helps reduce energy consumption in ubiquitous environments. The idea is that when high energy consumption is detected, we proceed to a change in the distribution of components across the devices to reduce and/or balance the energy consumption. We also investigate the possibility of detecting high energy consumption of devices or the network based on device capabilities. As a result, our idea realizes a reconfiguration of devices aimed at reducing energy consumption and/or balancing load in ubiquitous environments.
Keywords: ubiquitous computing, load balancing, device energy consumption, reconfiguration
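One way to picture the reconfiguration idea is a greedy policy: when a device exceeds an energy budget, move its cheapest-to-relocate components onto the least-loaded device. The budgets, loads, and policy below are illustrative assumptions; the paper does not specify this exact scheme.

```python
# Greedy sketch of component redistribution: devices over an energy budget
# shed their cheapest components to the currently least-loaded device.
# Device names, component costs, and the budget are invented for illustration.

def rebalance(loads, budget):
    """loads: device -> list of component energy costs. Returns a new mapping."""
    loads = {d: list(cs) for d, cs in loads.items()}   # work on a copy
    for device, comps in loads.items():
        while sum(comps) > budget and comps:
            comp = min(comps)                          # cheapest component first
            target = min(loads, key=lambda d: sum(loads[d]))
            if target == device:
                break                                  # nowhere better to move it
            comps.remove(comp)
            loads[target].append(comp)
    return loads

before = {"sensor": [5.0, 4.0, 3.0], "phone": [2.0], "hub": [1.0]}
after = rebalance(before, budget=8.0)
print({d: sum(cs) for d, cs in after.items()})
# the 12.0-unit load on "sensor" is spread until every device is within budget
```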
Procedia PDF Downloads 275
5263 Fault Tolerant (n,k)-star Power Network Topology for Multi-Agent Communication in Automated Power Distribution Systems
Authors: Ning Gong, Michael Korostelev, Qiangguo Ren, Li Bai, Saroj K. Biswas, Frank Ferrese
Abstract:
This paper investigates the joint effect of the interconnected (n,k)-star network topology and Multi-Agent automated control on the restoration and reconfiguration of power systems. With the increasing development of Multi-Agent control technologies applied to power system reconfiguration in the presence of faulty components or nodes, fault tolerance is becoming an important challenge in the design of distributed power system topologies. Since the reconfiguration of a power system is performed through agent communication, the (n,k)-star interconnected network topology is studied and modeled in this paper to optimize the process of power reconfiguration. We discuss the recently proposed (n,k)-star topology and examine its properties and advantages as compared to traditional multi-bus power topologies. We design and simulate the topology model for distributed power system test cases. A related lemma based on the fault tolerance and conditional diagnosability properties is presented and proved both theoretically and practically. The conclusion is reached that the (n,k)-star topology model has measurable advantages compared to standard bus power systems, exhibiting fault tolerance properties in power restoration as well as efficiency when applied to power system route discovery.
Keywords: (n,k)-star topology, fault tolerance, conditional diagnosability, multi-agent system, automated power system
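For readers unfamiliar with the structure, the standard (n,k)-star graph can be sketched directly: vertices are k-permutations of {1..n}, and a vertex is adjacent to every permutation obtained by swapping its first symbol with another position or by replacing its first symbol with an unused symbol, giving every vertex degree n-1 — the regularity behind the topology's fault-tolerance properties. The small n, k below are illustrative.

```python
# Sketch of the (n,k)-star graph used as the agent-communication topology.
# Vertices: k-permutations of {1..n}. Edges: swap the first symbol with
# another position, or replace the first symbol with an unused symbol.
from itertools import permutations

def neighbors(vertex, n):
    v = list(vertex)
    out = []
    for i in range(1, len(v)):                 # swap position 0 with position i
        w = v[:]
        w[0], w[i] = w[i], w[0]
        out.append(tuple(w))
    for s in range(1, n + 1):                  # replace first symbol with an unused one
        if s not in v:
            out.append(tuple([s] + v[1:]))
    return out

n, k = 4, 2
vertices = list(permutations(range(1, n + 1), k))
print(len(vertices))                           # n!/(n-k)! = 12 vertices
print(sorted(neighbors((1, 2), n)))            # degree n-1 = 3
```

The uniform degree n-1 means up to n-2 faulty neighbors can be tolerated before a vertex is disconnected, which is the starting point for the diagnosability results the paper proves.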
Procedia PDF Downloads 5125262 Fault Tolerant (n, k)-Star Power Network Topology for Multi-Agent Communication in Automated Power Distribution Systems
Authors: Ning Gong, Michael Korostelev, Qiangguo Ren, Li Bai, Saroj Biswas, Frank Ferrese
Abstract:
This paper investigates the joint effect of the interconnected (n,k)-star network topology and Multi-Agent automated control on restoration and reconfiguration of power systems. With the increasing trend in development in Multi-Agent control technologies applied to power system reconfiguration in presence of faulty components or nodes. Fault tolerance is becoming an important challenge in the design processes of the distributed power system topology. Since the reconfiguration of a power system is performed by agent communication, the (n,k)-star interconnected network topology is studied and modeled in this paper to optimize the process of power reconfiguration. In this paper, we discuss the recently proposed (n,k)-star topology and examine its properties and advantages as compared to the traditional multi-bus power topologies. We design and simulate the topology model for distributed power system test cases. A related lemma based on the fault tolerance and conditional diagnosability properties is presented and proved both theoretically and practically. The conclusion is reached that (n,k)-star topology model has measurable advantages compared to standard bus power systems while exhibiting fault tolerance properties in power restoration, as well as showing efficiency when applied to power system route discovery.Keywords: (n, k)-star topology, fault tolerance, conditional diagnosability, multi-agent system, automated power system
Procedia PDF Downloads 465
5261 Particle Filter State Estimation Algorithm Based on Improved Artificial Bee Colony Algorithm
Authors: Guangyuan Zhao, Nan Huang, Xuesong Han, Xu Huang
Abstract:
In order to solve the problem of sample impoverishment in the traditional particle filter algorithm and achieve accurate state estimation in a nonlinear system, a particle filter method based on an improved artificial bee colony (ABC) algorithm is proposed. The algorithm simulates the foraging and optimization process of bees and moves particles toward the high-likelihood region of the posterior probability to improve the rationality of the particle distribution. The opposition-based learning (OBL) strategy is introduced to optimize the initial population of the artificial bee colony algorithm. A convergence factor is introduced into the neighborhood search strategy to limit the search range and improve the convergence speed. Finally, the crossover and mutation operations of the genetic algorithm are introduced into the search mechanism of the onlooker bees, which lets the algorithm quickly escape local extrema and continue searching for the global extremum, improving its optimization ability. Simulation results show that the improved method improves the estimation accuracy of the particle filter, ensures the diversity of particles, and improves the rationality of the particle distribution.
Keywords: particle filter, impoverishment, state estimation, artificial bee colony algorithm
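A minimal sketch of the opposition-based learning initialization step described above; the improved ABC search and the particle filter itself are omitted, and the objective function here is a stand-in, not the paper's state-estimation likelihood. OBL generates each candidate's "opposite" a + b - x within the bounds [a, b] and keeps the fitter half of the merged population.

```python
import random

def obl_init(pop_size, dim, bounds, fitness):
    """Opposition-based initialization: generate a random population,
    form each candidate's opposite a + b - x, and keep the fitter half."""
    a, b = bounds
    pop = [[random.uniform(a, b) for _ in range(dim)] for _ in range(pop_size)]
    opp = [[a + b - x for x in cand] for cand in pop]
    merged = sorted(pop + opp, key=fitness)        # lower fitness = better
    return merged[:pop_size]

random.seed(0)
sphere = lambda v: sum(x * x for x in v)           # toy objective
pop = obl_init(20, 3, (0.0, 10.0), sphere)
assert len(pop) == 20
assert sphere(pop[0]) <= sphere(pop[-1])           # sorted best-first
```

The intuition is that for a candidate far from the optimum, its opposite has a reasonable chance of being closer, so evaluating both at initialization costs one extra fitness call per candidate but starts the swarm from a better-covered search space.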
Procedia PDF Downloads 151
5260 A Comprehensive Evaluation of Supervised Machine Learning for the Phase Identification Problem
Authors: Brandon Foggo, Nanpeng Yu
Abstract:
Power distribution circuits undergo frequent network topology changes that are often left undocumented. As a result, the documentation of a circuit’s connectivity becomes inaccurate with time. The lack of reliable circuit connectivity information is one of the biggest obstacles to modeling, monitoring, and controlling modern distribution systems. To enhance the reliability and efficiency of electric power distribution systems, the circuit’s connectivity information must be updated periodically. This paper focuses on one critical component of a distribution circuit’s topology: the secondary-transformer-to-phase association. This component describes the set of phase lines that feed power to a given secondary transformer (and therefore a given group of power consumers). Determining this association is called Phase Identification, and it is typically performed with physical measurements. These measurements can take on the order of several months, but with supervised learning the time can be reduced significantly. This paper compares several such methods applied to Phase Identification for a large range of real distribution circuits, describes a method of training-data selection, describes preprocessing steps unique to the Phase Identification problem, and ultimately describes a method that obtains high accuracy (>96% in most cases, >92% in the worst case) using only 5% of the measurements typically used for Phase Identification.
Keywords: distribution network, machine learning, network topology, phase identification, smart grid
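The paper does not disclose its specific classifiers here, so as a hedged illustration of the supervised framing, the sketch below trains a toy nearest-centroid model on hypothetical smart-meter voltage features: meters on the same phase tend to see correlated voltage profiles, so a new meter can be assigned to the phase whose centroid it sits closest to.

```python
import math

def train_centroids(X, y):
    """Nearest-centroid classifier: one mean feature vector per phase label."""
    groups = {}
    for features, label in zip(X, y):
        groups.setdefault(label, []).append(features)
    return {label: [sum(col) / len(col) for col in zip(*rows)]
            for label, rows in groups.items()}

def predict_phase(centroids, features):
    """Assign the phase whose centroid is nearest in Euclidean distance."""
    return min(centroids,
               key=lambda lab: math.dist(centroids[lab], features))

# toy smart-meter voltage features (hypothetical values), phase labels A/B/C
X = [[239.8, 240.1], [239.9, 240.0], [230.2, 229.9],
     [230.0, 230.1], [244.9, 245.2], [245.1, 244.8]]
y = ["A", "A", "B", "B", "C", "C"]
cent = train_centroids(X, y)
assert predict_phase(cent, [239.7, 240.2]) == "A"
assert predict_phase(cent, [245.0, 245.0]) == "C"
```

In practice the feature vectors would be long voltage time series and the labeled subset would come from the 5% of physically measured transformers mentioned in the abstract.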
Procedia PDF Downloads 299
5259 Modeling Fertility and Production of Hazelnut Cultivars through the Artificial Neural Network under Climate Change of Karaj
Authors: Marziyeh Khavari
Abstract:
In recent decades, climate change, global warming, and the growing world population have posed challenges such as increasing food consumption and shortages of resources. Assessing how climate change could disturb crops, especially hazelnut production, is crucial for sustainable agricultural production. For hazelnut cultivation in mid-warm conditions, such as in Iran, we present an investigation of climate parameters and how strongly they affect the fertility and nut production of hazelnut trees. Climate data for the northern zones of Iran (1960-2017) were therefore investigated and showed an upward trend in temperature. Furthermore, a descriptive analysis performed on six cultivars over seven years shows how this small-scale survey can demonstrate the effects of climate change on hazelnut production and stability. Results showed that some climate parameters, such as solar radiation, soil temperature, relative humidity, and precipitation, have a stronger effect on nut production. Moreover, some cultivars, for instance Negret and Segorbe, produced more stable yields, while Mervill de Boliver recorded the most variation during the study. A further aim was to train a model to simulate nut production through a neural network and linear regression. The study developed the ANN model and estimated its generalization capability with criteria such as RMSE, SSE, and accuracy factors for the dependent and independent variables (environmental and yield traits). The models were trained and tested, and the model accuracy is adequate to predict hazelnut production under fluctuations in weather parameters.
Keywords: climate change, neural network, hazelnut, global warming
Procedia PDF Downloads 132
5258 Optimal Design of InGaP/GaAs Heterojonction Solar Cell
Authors: Djaafar F., Hadri B., Bachir G.
Abstract:
We studied mainly the influence of temperature, thickness, molar fraction, and doping of the various layers (emitter, base, BSF, and window) on the performance of a photovoltaic solar cell. In a first stage, we optimized the performance of the InGaP/GaAs dual-junction solar cell while varying its operating temperature from 275 K to 375 K in increments of 25 K, using the Silvaco TCAD virtual wafer fabrication tool. The optimization at 300 K led to the following results: Icc = 14.22 mA/cm2, Voc = 2.42 V, FF = 91.32%, η = 22.76%, which are close to those found in the literature. In a second stage, we varied the molar fraction of the different layers as well as their thickness and the doping of both emitters and bases, and we recorded the result of each variation until obtaining an optimal efficiency of the proposed solar cell at 300 K, which was Icc = 14.35 mA/cm2, Voc = 2.47 V, FF = 91.34%, and η = 23.33% for an In(1-x)Ga(x)P molar fraction of x = 0.5. Eliminating the BSF layer on the back face of our cell enabled a remarkable improvement of the short-circuit current (Icc = 14.70 mA/cm2) but a decrease in the open-circuit voltage Voc and the output η, which reached 1.46 V and 11.97% respectively. We could therefore determine the critical parameters of the cell and optimize its various technological parameters to obtain the best performance for a dual-junction solar cell. This work opens new prospects in the field of photovoltaics: such structures would simplify the manufacturing processes of the cells and reduce costs while producing high photovoltaic conversion outputs.
Keywords: modeling, simulation, multijunction, optimization, silvaco ATLAS
Procedia PDF Downloads 621
5257 Intriguing Modulations in the Excited State Intramolecular Proton Transfer Process of Chrysazine Governed by Host-Guest Interactions with Macrocyclic Molecules
Authors: Poojan Gharat, Haridas Pal, Sharmistha Dutta Choudhury
Abstract:
Tuning the photophysical properties of guest dyes through host-guest interactions involving macrocyclic hosts has been an attractive research area for the past few decades, as the resulting changes can be directly implemented in chemical sensing, molecular recognition, fluorescence imaging, and dye laser applications. Excited-state intramolecular proton transfer (ESIPT) is an intramolecular prototautomerization process displayed by some specific dyes, and it is quite amenable to tuning by different macrocyclic hosts. The present study explores the interesting effects of p-sulfonatocalix[n]arene (SCXn) and cyclodextrin (CD) hosts on the excited-state prototautomeric equilibrium of Chrysazine (CZ), a model antitumour drug. CZ exists exclusively in its normal form (N) in the ground state. In the excited state, however, the excited N* form undergoes ESIPT along its pre-existing intramolecular hydrogen bonds, giving the excited-state prototautomer (T*). Accordingly, CZ shows a single absorption band due to the N form but two emission bands due to the N* and T* forms. Facile prototautomerization of CZ is considerably inhibited when the dye binds to SCXn hosts. In spite of its lower binding affinity, however, the inhibition is more profound with the SCX6 host than with the SCX4 host. For the CD-CZ system, the prototautomerization process is hindered by the presence of βCD but remains unaffected in the presence of γCD. The reduction of the prototautomerization of CZ by the SCXn and βCD hosts is unusual: because the T* form is less dipolar than N*, binding of CZ within the relatively hydrophobic host cavities should have enhanced the prototautomerization process. At the same time, given the similar chemical nature of the two CD hosts, their effects on the prototautomerization of CZ would also have been expected to be similar.
The atypical effects of the studied hosts on the prototautomerization of CZ are suggested to arise from partial inclusion or external binding of CZ with the hosts. As a result, there is a strong possibility of intermolecular H-bonding between the CZ dye and the functional groups present at the portals of the SCXn and βCD hosts. Formation of these intermolecular H-bonds weakens the pre-existing intramolecular H-bonding network within the CZ molecule, and this consequently reduces the prototautomerization of the dye. Our results suggest that, rather than the binding affinity between dye and host, it is the orientation of CZ in the SCXn-CZ complexes and the binding stoichiometry in the CD-CZ complexes that play the predominant role in influencing the prototautomeric equilibrium of CZ. For the SCXn-CZ complexes, the experimental findings are well supported by quantum chemical calculations. Similarly, for the CD-CZ systems, the binding stoichiometries obtained through geometry optimization correlate nicely with the experimental results: geometry optimization reveals βCD-CZ complexes with 1:1 stoichiometry and γCD-CZ complexes with 1:1, 1:2, and 2:2 stoichiometries, in good accordance with the observed effects of the βCD and γCD hosts on the ESIPT process of the CZ dye.
Keywords: intermolecular proton transfer, macrocyclic hosts, quantum chemical studies, photophysical studies
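Binding constants and stoichiometries of the kind discussed above are typically extracted by fitting titration data. As a hedged sketch (the model, the signal values, and the constant below are illustrative assumptions, not the paper's data), this fits a 1:1 host-guest association constant K by grid search over a simple binding isotherm, F = (F0 + F∞·K·[H]) / (1 + K·[H]):

```python
def f_obs(h, f0, finf, k):
    """1:1 host-guest binding isotherm: observed signal at host conc. h (M)."""
    return (f0 + finf * k * h) / (1.0 + k * h)

def fit_k(hs, fs, f0, finf, k_grid):
    """Grid-search the association constant K minimizing squared error."""
    sse = lambda k: sum((f_obs(h, f0, finf, k) - f) ** 2
                        for h, f in zip(hs, fs))
    return min(k_grid, key=sse)

# synthetic titration: K_true = 2000 M^-1, F0 = 100, Finf = 400 (made-up)
k_true, f0, finf = 2000.0, 100.0, 400.0
hs = [0.0, 1e-4, 2e-4, 5e-4, 1e-3, 2e-3]
fs = [f_obs(h, f0, finf, k_true) for h in hs]
k_grid = [100.0 * i for i in range(1, 101)]      # candidates 100 ... 10000
assert fit_k(hs, fs, f0, finf, k_grid) == 2000.0
```

Higher stoichiometries (1:2, 2:2) require correspondingly extended isotherm expressions with more equilibrium constants, which is why the geometry-optimization evidence is a useful cross-check on the fitted model choice.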
Procedia PDF Downloads 121
5256 Improve Closed Loop Performance and Control Signal Using Evolutionary Algorithms Based PID Controller
Authors: Mehdi Shahbazian, Alireza Aarabi, Mohsen Hadiyan
Abstract:
Proportional-Integral-Derivative (PID) controllers are the most widely used controllers in industry because of their simplicity and robustness. Different values of the PID parameters produce different step responses, so an increasing amount of literature is devoted to proper tuning of PID controllers. The problem merits further investigation because traditional tuning methods produce large control signals that can damage the system, whereas evolutionary-algorithm-based tuning methods improve both the control signal and the closed-loop performance. In this paper, three tuning methods for PID controllers are studied: Ziegler-Nichols, which is a traditional tuning method, and two evolutionary-algorithm-based tuning methods, the genetic algorithm (GA) and particle swarm optimization (PSO). To examine the validity of the PSO and GA tuning methods, a comparative analysis on a DC motor plant is performed. Simulation results reveal that the evolutionary-algorithm-based tuning methods improve the control signal amplitude and the quality factors of the closed-loop system, such as rise time, integral absolute error (IAE), and maximum overshoot.
Keywords: evolutionary algorithm, genetic algorithm, particle swarm optimization, PID controller
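A hedged sketch of PSO-based PID tuning: the plant below is a toy first-order system, not the paper's DC motor, and the gain ranges and swarm coefficients are illustrative assumptions. The fitness being minimized is the integral absolute error (IAE) of the unit-step response, one of the quality factors named above.

```python
import math, random

def iae(gains, dt=0.05, steps=200):
    """IAE of a unit-step response: PID driving a toy first-order plant
    dy/dt = -y + u, Euler-discretized."""
    kp, ki, kd = gains
    y = integ = e_prev = 0.0
    total = 0.0
    for _ in range(steps):
        e = 1.0 - y                        # unit-step reference
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        y += dt * (-y + u)                 # plant update
        if not math.isfinite(y):
            return float("inf")            # unstable gains get worst fitness
        total += abs(e) * dt
    return total

BOUNDS = [(0.0, 10.0), (0.0, 5.0), (0.0, 0.5)]   # (Kp, Ki, Kd) search box

def pso_tune(n=15, iters=40, seed=1):
    """Bare-bones particle swarm search over the PID gain box."""
    random.seed(seed)
    pos = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n)]
    vel = [[0.0] * 3 for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=iae)[:]
    for _ in range(iters):
        for i in range(n):
            for d, (lo, hi) in enumerate(BOUNDS):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if iae(pos[i]) < iae(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=iae)[:]
    return gbest

tuned = pso_tune()
assert iae(tuned) < iae((1.0, 0.0, 0.0))   # beats a plain P controller
```

A GA variant would replace the velocity update with selection, crossover, and mutation over the same gain vectors; the fitness function stays identical, which is what makes the comparison in the abstract straightforward.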
Procedia PDF Downloads 483
5255 A Design of the Infrastructure and Computer Network for Distance Education, Online Learning via New Media, E-Learning and Blended Learning
Authors: Sumitra Nuanmeesri
Abstract:
The research focuses on studying, analyzing, and designing a model of the infrastructure and computer networks for distance education, online learning via new media, e-learning, and blended learning. The information collected from the study and analysis process was evaluated with the index of item-objective congruence (IOC) by 9 specialists in order to design the model. Evaluation of the model by the sample of 9 specialists yielded a mean score of 3.85. The results showed that the designed infrastructure and computer networks are appropriate to a great extent.
Keywords: blended learning, new media, infrastructure and computer network, tele-education, online learning
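The index of item-objective congruence used above is conventionally computed as the mean of expert ratings in {-1, 0, +1} per item, with items accepted above a cutoff (0.5 is a common choice). A small sketch with hypothetical scores, not the study's actual ratings:

```python
def ioc(ratings):
    """Index of item-objective congruence for one item: each expert scores
    +1 (congruent), 0 (unsure), or -1 (incongruent)."""
    return sum(ratings) / len(ratings)

def item_passes(ratings, threshold=0.5):
    """Accept the item if its IOC meets the cutoff (0.5 is customary)."""
    return ioc(ratings) >= threshold

# hypothetical scores from 9 specialists for one design-checklist item
scores = [1, 1, 1, 0, 1, 1, 0, 1, 1]
assert abs(ioc(scores) - 7 / 9) < 1e-12
assert item_passes(scores)
```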
Procedia PDF Downloads 402
5254 Optimization of Alkali Assisted Microwave Pretreatments of Sorghum Straw for Efficient Bioethanol Production
Authors: Bahiru Tsegaye, Chandrajit Balomajumder, Partha Roy
Abstract:
The limited supply and negative environmental consequences of fossil fuels are driving researchers to find sustainable sources of energy. Lignocellulosic biomass such as sorghum straw is considered a cheap, renewable, and abundantly available source of energy. However, the conversion of lignocellulosic biomass to bioenergy such as bioethanol is hindered by the recalcitrant nature of the lignin in the biomass. Removal of lignin is therefore a vital step in converting lignocellulose to renewable energy. The aim of this study is to optimize microwave pretreatment conditions using Design-Expert software to remove lignin and release the maximum possible polysaccharides from sorghum straw for efficient hydrolysis and fermentation. Sodium hydroxide concentrations of 0.5-1.5% v/v, pretreatment times of 5-25 minutes, and pretreatment temperatures of 120-200°C were considered for depolymerizing sorghum straw. The effect of pretreatment was studied by analyzing the compositional changes before and after pretreatment following the renewable energy laboratory procedure. Analysis of variance (ANOVA) was used to test the significance of the model used for optimization. About 32.8%-48.27% hemicellulose solubilization, 53%-82.62% cellulose release, and 49.25%-78.29% lignin solubilization were observed during microwave pretreatment. Pretreatment for 10 minutes at an alkali concentration of 1.5% and a temperature of 140°C released the maximum cellulose and lignin: at this optimal condition, a maximum of 82.62% cellulose release and 78.29% lignin removal was achieved. Sorghum straw pretreated at the optimal condition was subjected to enzymatic hydrolysis and fermentation. The efficiency of hydrolysis was measured by analyzing reducing sugars with the 3,5-dinitrosalicylic acid method. Reducing sugars of about 619 mg/g of sorghum straw were obtained after enzymatic hydrolysis. This study showed significant lignin removal and cellulose release at the optimal condition, which enhances the yield of reducing sugars as well as the ethanol yield, and demonstrates the potential of microwave pretreatment for enhancing bioethanol yield from sorghum straw.
Keywords: cellulose, hydrolysis, lignocellulose, optimization
Procedia PDF Downloads 271
5253 Finite Element Modeling of Mass Transfer Phenomenon and Optimization of Process Parameters for Drying of Paddy in a Hybrid Solar Dryer
Authors: Aprajeeta Jha, Punyadarshini P. Tripathy
Abstract:
Drying technologies for various food processing operations share an inevitable linkage with energy, cost, and environmental sustainability. Solar drying of food grains has therefore become an imperative choice to combat the dual challenges of meeting the high energy demand for drying and addressing the climate change scenario. However, the performance and reliability of solar dryers depend heavily on the sunshine period and climatic conditions; they therefore offer limited control over drying conditions and have lower efficiencies. Solar drying technology supported by a photovoltaic (PV) power plant and a hybrid-type solar air collector can potentially overcome these disadvantages. For the development of such robust hybrid dryers, optimization of the process parameters becomes extremely critical to ensure the quality and shelf-life of paddy grains. Investigation of the moisture distribution profile within the grains is necessary in order to avoid over-drying or under-drying of food grains in a hybrid solar dryer. Computational simulations based on finite element modeling can serve as a potential tool for providing better insight into moisture migration during the drying process. Hence, the present work aims at optimizing the process parameters and developing a 3-dimensional (3D) finite element model (FEM) for predicting the moisture profile in paddy during solar drying. COMSOL Multiphysics was employed to develop the 3D finite element model. Furthermore, optimization of the process parameters (power level, air velocity, and moisture content) was done using response surface methodology in Design-Expert software. A 3D finite element model predicting moisture migration in a single kernel at every time step was developed and validated with experimental data. The mean absolute error (MAE), mean relative error (MRE), and standard error (SE) were found to be 0.003, 0.0531, and 0.0007, respectively, indicating close agreement of the model with experimental results.
The optimized process parameters for drying paddy were found to be 700 W and 2.75 m/s at 13% (wb) moisture content, with an optimum temperature, milling yield, and drying time of 42˚C, 62%, and 86 min respectively, having a desirability of 0.905. These optimized conditions can be used to dry paddy in a PV-integrated solar dryer in order to attain maximum uniformity, quality, and yield of product. PV-integrated hybrid solar dryers can be employed as a potential, cutting-edge drying technology alternative for sustainable energy and food security.
Keywords: finite element modeling, moisture migration, paddy grain, process optimization, PV integrated hybrid solar dryer
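The agreement metrics reported above are mechanical to compute once observed and predicted moisture values are in hand. A sketch of MAE, MRE, and SE (the SE formula below is one common convention, and the moisture readings are made-up illustrative values, not the paper's data):

```python
import math

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def mre(obs, pred):
    """Mean relative error (observed values must be nonzero)."""
    return sum(abs(o - p) / o for o, p in zip(obs, pred)) / len(obs)

def se(obs, pred):
    """Standard error of prediction (one common convention: sqrt(SSE/(n-1)))."""
    n = len(obs)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / (n - 1))

# hypothetical moisture-content readings (dry basis) vs. model predictions
obs  = [0.20, 0.18, 0.15, 0.12, 0.10]
pred = [0.21, 0.18, 0.14, 0.12, 0.11]
assert abs(mae(obs, pred) - 0.006) < 1e-9
```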
Procedia PDF Downloads 150
5252 Application of Adaptive Neural Network Algorithms for Determination of Salt Composition of Waters Using Laser Spectroscopy
Authors: Tatiana A. Dolenko, Sergey A. Burikov, Alexander O. Efitorov, Sergey A. Dolenko
Abstract:
In this study, a comparative analysis is performed of approaches that use neural network algorithms for the effective solution of a complex inverse problem: identifying and determining the individual concentrations of inorganic salts in multicomponent aqueous solutions from their Raman scattering spectra. It is shown that artificial neural networks determine the concentration of each salt with an average accuracy no worse than 0.025 M. The results of a comparative analysis of input data compression methods are presented, demonstrating that uniform aggregation of input features decreases the error in determining the individual concentrations of the components by 16-18% on average.
Keywords: inverse problems, multi-component solutions, neural networks, Raman spectroscopy
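A minimal sketch of the uniform aggregation of input features described above: adjacent spectral channels are averaged into equal-width bins before being fed to the network, shrinking the input dimension while smoothing noise (the channel counts here are illustrative, not the paper's):

```python
def aggregate(spectrum, n_bins):
    """Uniform aggregation: average adjacent spectral channels into n_bins
    equal-width groups, compressing the network's input vector.
    Assumes len(spectrum) is divisible by n_bins."""
    size = len(spectrum) // n_bins
    return [sum(spectrum[i * size:(i + 1) * size]) / size
            for i in range(n_bins)]

spectrum = [float(i) for i in range(1000)]   # stand-in for a Raman spectrum
compressed = aggregate(spectrum, 50)
assert len(compressed) == 50
assert compressed[0] == sum(range(20)) / 20  # mean of the first 20 channels
```

Compressing a 1000-channel spectrum to 50 inputs reduces the first-layer weight count twentyfold, which is the practical motivation for comparing compression methods before network training.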
Procedia PDF Downloads 528
5251 Concrete Mix Design Using Neural Network
Authors: Rama Shanker, Anil Kumar Sachan
Abstract:
The basic ingredients of concrete are cement, fine aggregate, coarse aggregate, and water. To produce a concrete with certain specified properties, optimum proportions of these ingredients are mixed. The important factors that govern the mix design are the grade of concrete, the type of cement, and the size, shape, and grading of the aggregates. Conventional concrete mix design methods are based on experimentally evolved empirical relationships between these factors. Their basic drawbacks are that they do not reliably produce the desired strength, the calculations are cumbersome, and a number of tables must be consulted to arrive at a trial mix proportion; moreover, the attainment of the desired strength is uncertain, may fall below the target strength, and may even fail. To solve this problem, a large number of cubes of standard grades were prepared and their 28-day strengths determined for different combinations of cement, fine aggregate, coarse aggregate, and water. An artificial neural network (ANN) was built using these data. The inputs of the ANN were the grade of concrete, the type of cement, and the size, shape, and grading of the aggregates, and the outputs were the proportions of the various ingredients. With these inputs and outputs, the ANN was trained using a feed-forward back-propagation model. Finally, the trained ANN was validated; it reproduced results with a maximum error of 4 to 5%. Hence, a specific type of concrete can be prepared from given material properties, and the proportions of these materials can be quickly evaluated using the proposed ANN.
Keywords: aggregate proportions, artificial neural network, concrete grade, concrete mix design
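A hedged, minimal illustration of the feed-forward back-propagation scheme named above, in pure Python: the network size, learning rate, and scaled "mix descriptor" data below are made-up stand-ins, not the authors' dataset of grade, cement type, and aggregate properties mapped to ingredient proportions.

```python
import math, random

def sig(z):
    """Logistic activation."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    """One hidden layer, sigmoid units, no bias terms (a simplification)."""
    h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    o = [sig(sum(w * hi for w, hi in zip(row, h))) for row in w2]
    return h, o

def train_mlp(X, Y, hidden=4, epochs=2000, lr=0.5, seed=0):
    """Feed-forward net trained by per-sample back-propagation (chain rule)."""
    random.seed(seed)
    n_in, n_out = len(X[0]), len(Y[0])
    w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]
    w2 = [[random.uniform(-1, 1) for _ in range(hidden)] for _ in range(n_out)]
    for _ in range(epochs):
        for x, y in zip(X, Y):
            h, o = forward(x, w1, w2)
            # output-layer deltas, then hidden-layer deltas
            do = [(o[k] - y[k]) * o[k] * (1 - o[k]) for k in range(n_out)]
            dh = [h[j] * (1 - h[j]) * sum(do[k] * w2[k][j] for k in range(n_out))
                  for j in range(hidden)]
            for k in range(n_out):
                for j in range(hidden):
                    w2[k][j] -= lr * do[k] * h[j]
            for j in range(hidden):
                for i in range(n_in):
                    w1[j][i] -= lr * dh[j] * x[i]
    return lambda x: forward(x, w1, w2)[1]

# toy scaled data (made-up): two mix descriptors -> one ingredient proportion
X = [[0.2, 0.9], [0.8, 0.1], [0.5, 0.5], [0.9, 0.8]]
Y = [[0.3], [0.7], [0.5], [0.6]]
sse = lambda net: sum((net(x)[0] - y[0]) ** 2 for x, y in zip(X, Y))
assert sse(train_mlp(X, Y)) < sse(train_mlp(X, Y, epochs=0))  # training helps
```

In the paper's setting the inputs and outputs would each be multi-dimensional scaled vectors, and validation against held-out cube-strength data is what yields the reported 4-5% maximum error.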
Procedia PDF Downloads 389