Search results for: hybrid PSO-GA algorithm and mutual information
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15452

13652 Battery Grading Algorithm in 2nd-Life Repurposing LI-Ion Battery System

Authors: Ya L. V., Benjamin Ong Wei Lin, Wanli Niu, Benjamin Seah Chin Tat

Abstract:

This article introduces a methodology that improves the reliability and cyclability of 2nd-life Li-ion battery systems repurposed as energy storage systems (ESS). Most 2nd-life retired battery systems on the market carry only a module/pack-level state-of-health (SOH) indicator, which is used to guide the appropriate depth-of-discharge (DOD) when the system is applied as an ESS. Because cell-level SOH indication is lacking, the differing degradation behaviors of individual cells cannot be identified once the system reaches retired status; as a result, after accounting for end-of-life (EOL) loss and pack-level DOD, the repurposed ESS must be oversized by more than 1.5 times to meet the application requirements for reliability and cyclability. The proposed battery grading algorithm is non-invasive: it detects outlier cells from historical voltage data and estimates cell-level historical maximum temperatures using a semi-analytic methodology. In this way, each cell in the 2nd-life battery system can be graded in terms of SOH on the basis of its historical voltage fluctuation and estimated historical maximum temperature variation. These grades then map to corresponding DOD grades in the repurposed ESS to enhance system reliability and cyclability. In all, the introduced battery grading algorithm is non-invasive, compatible with any retired Li-ion battery system that lacks cell-level SOH indication, and can potentially be embedded into battery management software for preventive maintenance and real-time cyclability optimization.
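As an illustration of the kind of non-invasive, history-based outlier detection the abstract describes, the sketch below grades cells by how far each cell's voltage fluctuates from the pack mean over a logged history. The two-grade scheme, the z-score threshold, and the voltage model are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def grade_cells(voltage_history, z_threshold=2.0):
    """Grade cells from historical voltage data alone (non-invasive sketch).

    voltage_history: array of shape (n_samples, n_cells).
    A cell whose voltage fluctuates unusually far from the pack mean over
    the history is flagged as an outlier and given the lower grade 'B'.
    """
    v = np.asarray(voltage_history, dtype=float)
    # Deviation of each cell from the pack-average voltage at every sample
    dev = v - v.mean(axis=1, keepdims=True)
    # Spread of each cell's deviation over the whole logged history
    spread = dev.std(axis=0)
    # Z-score of each cell's spread relative to the rest of the pack
    score = (spread - spread.mean()) / (spread.std() + 1e-12)
    return ['B' if s > z_threshold else 'A' for s in score]
```

A real implementation would combine this voltage-based grade with the estimated historical maximum temperature before assigning DOD grades.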

Keywords: battery grading algorithm, 2nd-life repurposing battery system, semi-analytic methodology, reliability and cyclability

Procedia PDF Downloads 203
13651 Lithium and Sodium Ion Capacitors with High Energy and Power Densities based on Carbons from Recycled Olive Pits

Authors: Jon Ajuria, Edurne Redondo, Roman Mysyk, Eider Goikolea

Abstract:

Hybrid capacitor configurations are of increasing interest as a way to overcome the energy limitations of supercapacitors based entirely on non-Faradaic charge storage. Among them, Li-ion capacitors, which pair a negative battery-type lithium intercalation electrode with a positive capacitor-type electrode, have achieved tremendous progress and have reached commercialization. Inexpensive electrode materials from renewable sources have recently received increased attention, since cost remains a major criterion for making supercapacitors a more viable energy solution, and electrode materials are a major contributor to supercapacitor cost. Additionally, Na-ion battery chemistries are under development as a less expensive and more accessible alternative to Li-ion based battery electrodes. In this work, we present both a lithium and a sodium ion capacitor (LIC & NIC) entirely based on electrodes prepared from carbon materials derived from recycled olive pits. Around 1 million tons of olive pit waste is generated worldwide each year, of which a third originates in the Spanish olive oil industry. On the one hand, olive pits were pyrolyzed at different temperatures to obtain a low-specific-surface-area semigraphitic hard carbon to be used as the Li/Na ion intercalation (battery-type) negative electrode. The best hard carbon delivers a total capacity of 270 mAh/g vs. Na/Na+ in 1M NaPF6 and 350 mAh/g vs. Li/Li+ in 1M LiPF6. On the other hand, the same hard carbon is chemically activated with KOH to obtain a high-specific-surface-area (about 2000 m²/g) activated carbon that is used as the ion-adsorption (capacitor-type) positive electrode. In a voltage window of 1.5-4.2 V, the activated carbon delivers a specific capacity of 80 mAh/g vs. Na/Na+ and 95 mAh/g vs. Li/Li+ at 0.1 A/g. Both electrodes were assembled in the same hybrid cell to build a LIC/NIC.
For comparison purposes, a symmetric EDLC supercapacitor cell using the same activated carbon in 1.5M Et4NBF4 electrolyte was also built. Both the LIC and NIC demonstrate considerable improvements in energy density over their EDLC counterpart: the NIC delivers a maximum energy density of 110 Wh/kg at a power density of 30 W/kg AM and a maximum power density of 6200 W/kg at an energy density of 27 Wh/kg, while the LIC delivers a maximum energy density of 110 Wh/kg at a power density of 30 W/kg and a maximum power density of 18000 W/kg at an energy density of 22 Wh/kg. In conclusion, our work demonstrates that the same biomass waste can be adapted to offer a hybrid capacitor/battery storage device that overcomes the limited energy density of the corresponding double-layer capacitors.

Keywords: hybrid supercapacitor, Na-Ion capacitor, supercapacitor, Li-Ion capacitor, EDLC

Procedia PDF Downloads 201
13650 Developing a Systems Dynamics Model for Security Management

Authors: Kuan-Chou Chen

Abstract:

This paper demonstrates a simulation model of an information security system using the system dynamics approach. The relationships in the model are designed to be simple and functional and do not necessarily represent any particular information security environment. The aim is to develop a generic system dynamics model of an information security system, with implications for information security research. The interrelated and interdependent relationships of five primary sectors in the model are presented: (1) information security characteristics, (2) users, (3) technology, (4) business functions, and (5) policy and management. Environments, attacks, government, and social culture are defined as the external sector. The interactions within each of these sectors are depicted by system loop maps as well. The proposed model not only provides a conceptual framework for information security analysts and designers, but also allows information security managers to remove the incongruity between the management of risk incidents and the management of knowledge, and further offers managers and decision makers a foundation for managerial actions and policy decisions.

Keywords: system thinking, information security systems, security management, simulation

Procedia PDF Downloads 429
13649 Combined PV Cooling and Nighttime Power Generation through Smart Thermal Management of Photovoltaic–Thermoelectric Hybrid Systems

Authors: Abdulrahman M. Alajlan, Saichao Dang, Qiaoqiang Gan

Abstract:

Photovoltaic (PV) cells, while pivotal for solar energy harnessing, confront a challenge due to the presence of persistent residual heat. This thermal energy poses significant obstacles to the performance and longevity of PV cells. Mitigating this thermal issue is imperative, particularly in tropical regions where solar abundance coexists with elevated ambient temperatures. In response, a sustainable and economically viable solution has been devised, incorporating water-passive cooling within a Photovoltaic-Thermoelectric (PV-TEG) hybrid system to address PV cell overheating. The implemented system has significantly reduced the operating temperatures of PV cells, achieving a notable reduction of up to 15 °C below the temperature observed in standalone PV systems. In addition, a thermoelectric generator (TEG) integrated into the system significantly enhances power generation, particularly during nighttime operation. The developed hybrid system demonstrates its capability to generate power at a density of 0.5 Wm⁻² during nighttime, which is sufficient to concurrently power multiple light-emitting diodes, demonstrating practical applications for nighttime power generation. Key findings from this research include a consistent temperature reduction exceeding 10 °C for PV cells, translating to a 5% average enhancement in PV output power compared to standalone PV systems. Experimental demonstrations underscore nighttime power generation of 0.5 Wm⁻², with the potential to achieve 0.8 Wm⁻² through simple geometric optimizations. The optimal cooling of PV cells is determined by the volume of water in the heat storage unit, exhibiting an inverse relationship with the optimal performance for nighttime power generation. Furthermore, the TEG output effectively powers a lighting system with up to 5 LEDs during the night. 
This research not only proposes a practical solution for maximizing solar radiation utilization but also charts a course for future advancements in energy harvesting technologies.

Keywords: photovoltaic-thermoelectric systems, nighttime power generation, PV thermal management, PV cooling

Procedia PDF Downloads 84
13648 Hybrid Reusable Launch Vehicle for Space Application: A Naval Approach

Authors: Rajasekar Elangopandian, Anand Shanmugam

Abstract:

In order to reduce the cost of launching satellites and payloads to orbit, this project envisages a combination of technologies. This new approach to spaceflight rests on four concepts. The first is the flight mission profile, which describes how the mission is conducted. The conventional technique of magnetic levitation provides the initial thrust, and, as the name "reusable launch vehicle" states, the vehicle is designed to be reused. The flight system consists of a miniature rocket, which produces the required thrust, and two JATO (jet-assisted takeoff) boosters, which give the vehicle its initial boost. The vehicle resembles an airplane in design and is positioned on a superconducting rail track. When a high electric current is applied to the rail track, the vehicle begins to float, following the principle of magnetic levitation. Once the vehicle reaches the specified takeoff distance, the two boosters ignite, delivering 48 kN of thrust each, and the vehicle follows a vertical path to the edge of the atmosphere and the start of space. As soon as it attains sufficient speed, the two boosters cut off. Upon reaching space, the onboard spacecraft places the satellite into the desired orbit. When its work is finished, apogee motors give the vehicle an initial kick of 22 N of thrust to re-enter the Earth's atmosphere, after which it descends in free fall under gravitational force. After the re-entry phase, it enters a spiral flight mode and lands where the superconducting levitated rail track is located; the track catches the vehicle and holds it by switching the magnet poles and varying the current. The initial cost of building this vehicle may be high, but with frequent use it would reduce the launch cost to half that of present-day technology.
The incorporation of such a mechanism makes the vehicle "hybrid", its reusability makes it a "reusable launch vehicle", and together they yield the hybrid reusable launch vehicle.

Keywords: JATO (jet-assisted takeoff) boosters, magnetic levitation, reusable launch vehicle, hybrid propulsion

Procedia PDF Downloads 427
13647 Partially Knowing of Least Support Orthogonal Matching Pursuit (PKLS-OMP) for Recovering Signal

Authors: Israa Sh. Tawfic, Sema Koc Kayhan

Abstract:

Given a large sparse signal, the goal is to reconstruct the signal accurately from as few measurements as possible. Although theory suggests this is possible, the difficulty lies in building an algorithm that achieves both accuracy and efficiency in reconstruction. This paper proposes a new, provably correct method for reconstructing sparse signals that combines the Least Support Orthogonal Matching Pursuit (LS-OMP) method with the theory of partially known support, yielding a method called Partially Knowing of Least Support Orthogonal Matching Pursuit (PKLS-OMP). These methods rely on a greedy algorithm to compute the support, whose cost depends on the number of iterations; to make it faster, PKLS-OMP incorporates partial knowledge of the support into the algorithm. The method recovers the original signal simply, efficiently, and accurately provided the sampling matrix satisfies the Restricted Isometry Property (RIP). Simulation results also show that it outperforms many algorithms, especially for compressible signals.
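The core idea, greedy support building seeded with a partially known support, can be sketched as a generic OMP variant. This is an illustrative reading of the approach, not the authors' exact PKLS-OMP; the least-squares re-fit and correlation-based atom selection are standard OMP ingredients.

```python
import numpy as np

def pkls_omp(A, y, k, known_support=()):
    """OMP seeded with a partially known support (sketch of the PKLS-OMP idea).

    Atoms already known to be in the support are fixed first; greedy
    iterations then fill in the remaining positions up to sparsity k.
    """
    n = A.shape[1]
    support = list(known_support)
    residual = y.copy()
    coef = np.zeros(0)
    while True:
        if support:
            # Least-squares re-fit of the signal on the current support
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        if len(support) >= k:
            break
        # Greedy step: add the column most correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat
```

Seeding the support shortens the greedy loop, which is exactly the speed-up the abstract attributes to partial support knowledge.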

Keywords: compressed sensing, least support orthogonal matching pursuit, partial knowing support, restricted isometry property, signal reconstruction

Procedia PDF Downloads 241
13646 A Fast Algorithm for Electromagnetic Compatibility Estimation for Radio Communication Network Equipment in a Complex Electromagnetic Environment

Authors: C. Temaneh-Nyah

Abstract:

Electromagnetic compatibility (EMC) is the ability of radio communication equipment (RCE) to operate with a desired quality of service in a given electromagnetic environment (EME) without creating harmful interference with other RCE. This paper presents an algorithm that improves the simulation speed of estimating the EMC of RCE in a complex EME, based on stage-by-stage filtering using frequency-energy criteria. The algorithm considers different interference types, including blocking and intermodulation. It consists of the following steps: a simplified energy criterion, where filtering is based on comparing the free-space interference level to the industrial noise; a frequency criterion, which checks whether the interfering emission's characteristic overlaps with the receiver's channel characteristic; and lastly a detailed energy criterion, where the real channel interference level is compared to the noise level. At each stage, some interference cases are filtered out by the relevant criterion. This reduces the total number of pairwise combinations of RCE involved in the tedious detailed energy analysis and thus improves simulation speed.
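The three-stage filter can be sketched as a simple pipeline over candidate interference cases. The field names, dB levels, and band-edge representation below are hypothetical; only the staged structure mirrors the abstract.

```python
def emc_filter(cases, noise_floor_dbm, industrial_noise_dbm):
    """Stage-by-stage frequency-energy filtering of interference cases (sketch).

    Each case is a dict with hypothetical fields: free-space interference
    level, transmitter/receiver band edges, and in-channel interference
    level. Cases surviving all stages need the full (expensive) analysis.
    """
    # Stage 1 -- simplified energy criterion: free-space level vs industrial noise
    survivors = [c for c in cases if c["free_space_dbm"] > industrial_noise_dbm]
    # Stage 2 -- frequency criterion: do emission and receiver bands overlap?
    survivors = [c for c in survivors
                 if c["tx_lo_hz"] <= c["rx_hi_hz"] and c["rx_lo_hz"] <= c["tx_hi_hz"]]
    # Stage 3 -- detailed energy criterion: in-channel level vs receiver noise floor
    survivors = [c for c in survivors if c["channel_dbm"] > noise_floor_dbm]
    return survivors
```

Each stage is cheap relative to the detailed analysis, so pruning early shrinks the set of RCE pairs that the expensive final stage must examine.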

Keywords: electromagnetic compatibility, electromagnetic environment, simulation of communication network

Procedia PDF Downloads 218
13645 A Metaheuristic Approach for the Pollution-Routing Problem

Authors: P. Parthiban, Sonu Rajak, R. Dhanalakshmi

Abstract:

This paper presents an Ant Colony Optimization (ACO) approach combined with a Speed Optimization Algorithm (SOA) to solve the Vehicle Routing Problem (VRP) with environmental considerations, well known as the Pollution-Routing Problem (PRP). The problem consists of routing a number of vehicles to serve a set of customers and determining the fuel consumption, driver wages, and speed on each route segment, while respecting capacity constraints and time windows. Since the VRP is NP-hard, the PRP is NP-hard as well, which calls for metaheuristics. The proposed solution method consists of two stages: in the first stage, a Vehicle Routing Problem with Time Windows (VRPTW) is solved using ACO, and in the second stage, the SOA is run on the resulting VRPTW solution. Given a vehicle route, the SOA finds the optimal speed on each arc of the route to minimize an objective function comprising fuel consumption costs and driver wages. The proposed algorithm was tested on benchmark problems; the preliminary results show that it can provide good solutions within reasonable computational time.
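The second-stage idea, picking the speed on each arc that trades fuel burn against driver wages, can be sketched for a single arc. The convex fuel-rate model, constants, and grid search below are illustrative assumptions standing in for the paper's SOA, not its actual formulation.

```python
def optimal_arc_speed(distance_km, v_min=40.0, v_max=100.0,
                      fuel_price=1.4, wage=10.0, a=0.05, b=1.2e-5):
    """Pick the speed on one arc minimizing fuel cost plus driver wages (sketch).

    Hypothetical fuel-rate model: litres per km = a + b * v**2, so
    cost(v) = fuel_price * distance * (a + b v^2) + wage * distance / v.
    Slow speeds cost wages (more driving time); fast speeds cost fuel.
    """
    best_v, best_cost = None, float("inf")
    v = v_min
    while v <= v_max + 1e-9:
        cost = fuel_price * distance_km * (a + b * v * v) + wage * distance_km / v
        if cost < best_cost:
            best_v, best_cost = v, cost
        v += 0.5  # coarse grid search stands in for a closed-form SOA step
    return best_v, best_cost
```

Applying this per arc of an ACO-generated route is the essence of the two-stage decomposition the abstract describes.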

Keywords: ant colony optimization, CO2 emissions, speed optimization, vehicle routing

Procedia PDF Downloads 360
13644 Cluster-Based Multi-Path Routing Algorithm in Wireless Sensor Networks

Authors: Si-Gwan Kim

Abstract:

Small, low-power sensors with sensing, signal processing, and wireless communication capabilities are well suited to wireless sensor networks. Due to limited resources and battery constraints, the complex routing algorithms used in ad-hoc networks cannot be employed in sensor networks. In this paper, we propose node-disjoint, multi-path, hexagon-based routing algorithms for wireless sensor networks. We present the details of the algorithm and compare it with other work. Simulation results show that the proposed scheme achieves better performance in terms of efficiency and message delivery ratio.

Keywords: clustering, multi-path, routing protocol, sensor network

Procedia PDF Downloads 403
13643 Learning from Small Amount of Medical Data with Noisy Labels: A Meta-Learning Approach

Authors: Gorkem Algan, Ilkay Ulusoy, Saban Gonul, Banu Turgut, Berker Bakbak

Abstract:

Computer vision systems recently made a big leap thanks to deep neural networks. However, these systems require correctly labeled large datasets in order to be trained properly, which are very difficult to obtain for medical applications. Two main sources of label noise in medical applications are the high complexity of the data and conflicting expert opinions. Moreover, medical imaging datasets are commonly small, which makes every sample important for learning. As a result, if not handled properly, label noise significantly degrades performance. Therefore, a label-noise-robust learning algorithm based on the meta-learning paradigm is proposed in this article. The proposed solution is tested on a retinopathy of prematurity (ROP) dataset with a very high label noise of 68%. Results show that the proposed algorithm significantly improves classification performance in the presence of noisy labels.

Keywords: deep learning, label noise, robust learning, meta-learning, retinopathy of prematurity

Procedia PDF Downloads 161
13642 Comparative Study Using WEKA for Red Blood Cells Classification

Authors: Jameela Ali, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy

Abstract:

Red blood cells (RBCs) are the most common type of blood cell and the most intensively studied in cell biology. A lack of RBCs, in which the hemoglobin level is lower than normal, is referred to as anemia; abnormalities in RBCs affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA, an open-source suite of machine learning algorithms for data mining applications. The algorithms tested are the radial basis function neural network, the support vector machine, and the k-nearest neighbors algorithm. Two sets of combined features were used to classify the blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell has a spherical or non-spherical shape, while the second set, consisting mainly of textural features, was used to recognize the types of spherical cells. We evaluate these classification methods on our RBC image dataset, which was obtained from Serdang Hospital, Malaysia, measuring the accuracy of the test results. The best classification rates achieved are 97%, 98%, and 79% for the support vector machine, radial basis function neural network, and k-nearest neighbors algorithm, respectively.
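For illustration, the simplest of the compared classifiers, k-nearest neighbors, is easy to sketch directly in NumPy. The two synthetic "geometrical features" and class geometry below are hypothetical stand-ins for the paper's RBC features, not its data.

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=3):
    """Plain k-nearest-neighbours majority vote over Euclidean distance."""
    preds = []
    for x in test_X:
        d = np.linalg.norm(train_X - x, axis=1)   # distance to every training point
        votes = train_y[np.argsort(d)[:k]]        # labels of the k closest points
        preds.append(np.bincount(votes).argmax()) # majority vote
    return np.array(preds)
```

In WEKA the analogous classifier is IBk; a study like the one above would run each classifier on the same feature vectors and compare held-out accuracy.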

Keywords: K-nearest neighbors algorithm, radial basis function neural network, red blood cells, support vector machine

Procedia PDF Downloads 410
13641 Probabilistic Approach of Dealing with Uncertainties in Distributed Constraint Optimization Problems and Situation Awareness for Multi-agent Systems

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how Bayesian inferential reasoning contributes to obtaining well-satisfied predictions for Distributed Constraint Optimization Problems (DCOPs) with uncertainties. We also demonstrate how DCOPs can be merged with multi-agent knowledge understanding and prediction (i.e., situation awareness). The DCOP functions were merged with a Bayesian Belief Network (BBN) in the form of situation, awareness, and utility nodes. We describe how uncertainties can be represented in the BBN and how effective predictions can be made using the expectation-maximization algorithm or the conjugate gradient descent algorithm. The idea of variable prediction using Bayesian inference may reduce the number of variables in the agents' sampling domain and also allows estimation of missing variables. Experimental results show that the BBN produces more compelling predictions from samples containing uncertainties than from perfect samples. That is, Bayesian inference can help in handling the uncertainties and dynamism of DCOPs, which is a current issue in the DCOP community. We show how Bayesian inference can be formalized with Distributed Situation Awareness (DSA) using uncertain and missing agents' data. The whole framework was tested on a multi-UAV mission for forest fire search. Future work focuses on augmenting the existing architecture to deal with dynamic DCOP algorithms and multi-agent information merging.

Keywords: DCOP, multi-agent reasoning, Bayesian reasoning, swarm intelligence

Procedia PDF Downloads 119
13640 A New Internal Architecture Based On Feature Selection for Holonic Manufacturing System

Authors: Jihan Abdulazeez Ahmed, Adnan Mohsin Abdulazeez Brifcani

Abstract:

This paper suggests a new internal architecture for a holon based on a feature selection model that combines the Bees Algorithm (BA) with an Artificial Neural Network (ANN): BA is used to generate feature subsets, while the ANN serves as a classifier to evaluate them. The proposed system is applied to the Wine data set, and the statistical results show that it is effective and able to choose informative features with high accuracy.
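The generate-and-evaluate loop described above is a wrapper approach to feature selection; it can be sketched with a BA-flavoured search (random scouts plus neighbourhood flips around the best subset) and a cheap nearest-centroid classifier standing in for the ANN. The data, parameters, and fitness function are all illustrative assumptions.

```python
import numpy as np

def centroid_accuracy(X, y, mask):
    """Score a feature subset with a nearest-centroid classifier (ANN stand-in)."""
    Xs = X[:, mask]
    if Xs.shape[1] == 0:
        return 0.0
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float((pred == y).mean())

def bees_select(X, y, n_bees=20, n_iter=30, seed=0):
    """Bees-Algorithm-flavoured wrapper: flip one feature at a time around the best mask."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    best_mask = rng.random(n_feat) < 0.5          # random scout as starting subset
    best_acc = centroid_accuracy(X, y, best_mask)
    for _ in range(n_iter):
        for _ in range(n_bees):
            cand = best_mask.copy()
            cand[rng.integers(n_feat)] ^= True    # neighbourhood search: toggle one feature
            acc = centroid_accuracy(X, y, cand)
            if acc > best_acc:
                best_mask, best_acc = cand, acc
    return best_mask, best_acc
```

A full BA would also keep multiple elite sites and re-scout abandoned ones; the single-site loop above is the minimal version of the same generate/evaluate split.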

Keywords: artificial neural network, bees algorithm, feature selection, Holon

Procedia PDF Downloads 457
13639 Low Density Parity Check Codes

Authors: Kassoul Ilyes

Abstract:

The field of error-correcting codes has been revolutionized by the introduction of iteratively decoded codes. Among these, LDPC codes are now a preferred solution thanks to their remarkable performance and low complexity. The binary version of LDPC codes showed even better performance, although its decoding introduced greater complexity. This thesis studies the performance of binary LDPC codes using simplified weighted decisions. Information is transported between a transmitter and a receiver by digital transmission systems, either by propagating over a radio channel or by using a transmission medium such as a transmission line; the purpose of the transmission system is to carry the information from the transmitter to the receiver as reliably as possible. For a long time, these codes did not generate much interest within the coding theory community; this neglect lasted until the introduction of Turbo codes and the iterative decoding principle, after which it was proposed to adopt Pearl's Belief Propagation (BP) algorithm for decoding LDPC codes. Subsequently, Luby introduced irregular LDPC codes, characterized by their parity-check matrix. Finally, we study simplifications of binary LDPC decoding and propose a method that makes the exact computation of the a posteriori probabilities (APP) simpler, which in turn simplifies the implementation of the system.
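The iterative parity-check decoding idea can be illustrated with hard-decision bit flipping on a tiny parity-check matrix; here a Hamming-code H serves as a toy stand-in for a sparse LDPC matrix, and real LDPC decoding would propagate soft values (belief propagation) rather than flip hard bits.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (toy stand-in for a sparse LDPC H)
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def bit_flip_decode(r, H, max_iter=10):
    """Hard-decision bit-flipping decoding on a parity-check matrix."""
    c = r.copy()
    for _ in range(max_iter):
        syndrome = H @ c % 2
        if not syndrome.any():
            return c                       # all parity checks satisfied
        # Count, for each bit, how many unsatisfied checks it participates in
        unsat = H.T @ syndrome
        c[int(np.argmax(unsat))] ^= 1      # flip the most suspicious bit
    return c
```

BP replaces the unsatisfied-check count with exact APP messages along the same check/bit structure; the simplifications the abstract mentions aim to cheapen that APP computation.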

Keywords: LDPC, parity check matrix, 5G, BER, SNR

Procedia PDF Downloads 154
13638 Global Navigation Satellite System and Precise Point Positioning as Remote Sensing Tools for Monitoring Tropospheric Water Vapor

Authors: Panupong Makvichian

Abstract:

The Global Navigation Satellite System (GNSS) is nowadays a common technology that supports navigation functions in daily life. In addition, GNSS is increasingly being employed as an accurate atmospheric sensor; meteorology is thus a practical application of GNSS that goes unnoticed in the background of people's lives. GNSS Precise Point Positioning (PPP) is a positioning method that requires data from a single dual-frequency receiver together with precise information about satellite positions and satellite clocks; careful attention to mitigating various error sources is also required. All of these data are combined in a sophisticated mathematical algorithm. This research demonstrates how GNSS and the PPP method can provide high-precision estimates, such as 3D positions or zenith tropospheric delays (ZTDs). ZTDs, combined with pressure and temperature information, allow us to estimate the water vapor in the atmosphere as precipitable water vapor (PWV). If the process is replicated for a network of GNSS sensors, thematic maps can be created that allow water content information to be extracted at any location within the network area. All of the above is possible thanks to advances in GNSS data processing, so GNSS data can be used for climatic trend analysis and for acquiring further knowledge about atmospheric water content.
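The ZTD-to-PWV step mentioned above is a short, well-known chain: subtract a modelled hydrostatic delay (Saastamoinen) from the ZTD to get the wet delay, then scale by a dimensionless factor built from refractivity constants and a weighted mean temperature Tm. The sketch below assumes Bevis-style constants and a fixed Tm; an operational system would derive Tm from surface temperature or a model.

```python
import math

def ztd_to_pwv_mm(ztd_m, pressure_hpa, lat_deg, height_m, tm_k=270.0):
    """Convert a GNSS zenith total delay to precipitable water vapour (sketch)."""
    # Saastamoinen zenith hydrostatic delay (metres) from surface pressure
    zhd = 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 2.8e-7 * height_m)
    zwd = ztd_m - zhd                      # zenith wet delay
    # Dimensionless conversion factor Pi from the refractivity constants
    k2_prime = 0.221                       # K / Pa
    k3 = 3739.0                            # K^2 / Pa
    rv = 461.5                             # J / (kg K), water-vapour gas constant
    rho_w = 1000.0                         # kg / m^3, density of liquid water
    pi_factor = 1.0e6 / (rho_w * rv * (k2_prime + k3 / tm_k))   # roughly 0.15
    return 1000.0 * pi_factor * zwd        # PWV in millimetres
```

For typical mid-latitude conditions (ZTD near 2.4 m, pressure near 1013 hPa) this yields PWV on the order of 10-20 mm, consistent with the rule of thumb PWV roughly equal to 0.15 times ZWD.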

Keywords: GNSS, precise point positioning, Zenith tropospheric delays, precipitable water vapor

Procedia PDF Downloads 198
13637 The Role of Information Technology in the Supply Chain Management

Authors: Azar Alizadeh, Mohammad Reza Naserkhaki

Abstract:

The application of IT systems for collecting and analyzing data can have a significant effect on the performance of any company. In the recent decade, advancements and achievements in the field of information technology have changed industry considerably compared to the previous decade. The adoption and application of information technology is one way for companies and their supply chains to achieve a distinctive competitive identity. The acceptance of IT and its proper implementation can reinforce and improve cooperation between different parts of the supply chain through the rapid transfer and distribution of precise information, and the application of information systems leads to an increase in supply chain efficiency. The main objective of this research is to study the effects and applications of information technology in supply chain management and to introduce the factors that affect the acceptance of information technology in companies. Moreover, to frame the subject, we investigate the role and importance of information and electronic commerce in the supply chain and the characteristics of the supply chain from an information flow perspective.

Keywords: electronic commerce, industry, information technology, management, supply chain, system

Procedia PDF Downloads 485
13636 A Polynomial Time Clustering Algorithm for Solving the Assignment Problem in the Vehicle Routing Problem

Authors: Lydia Wahid, Mona F. Ahmed, Nevin Darwish

Abstract:

The vehicle routing problem (VRP) considers a group of customers that need to be served, each with a certain demand for goods. A central depot with a fleet of vehicles is responsible for supplying the customers with their demands. The problem is composed of two subproblems. The first is an assignment problem, in which the number of vehicles to be used and the customers assigned to each vehicle are determined. The second is the routing problem, in which, for each vehicle and its assigned customers, the order of visits is determined. Both the optimal number of vehicles and the optimal total distance should be achieved. In this paper, an approach for solving the first subproblem (the assignment problem) is presented. A clustering algorithm is proposed for finding the optimal number of vehicles by grouping the customers into clusters, where each cluster is visited by one vehicle. Finding the optimal number of clusters is NP-hard; this work presents a polynomial-time clustering algorithm for finding the optimal number of clusters and solving the assignment problem.
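The assignment subproblem can be sketched with capacity-constrained agglomerative clustering: merge the two nearest clusters as long as their combined demand still fits one vehicle, and the clusters left over are the vehicles used. This greedy sketch illustrates the cluster-first idea only; it is not the authors' algorithm and carries no optimality guarantee.

```python
import math

def cluster_customers(customers, capacity):
    """Greedy agglomerative clustering under a vehicle-capacity constraint.

    customers: list of (x, y, demand) tuples. Returns a list of clusters;
    each cluster is served by one vehicle, so len(result) = vehicles used.
    """
    clusters = [[c] for c in customers]

    def centroid(cl):
        return (sum(c[0] for c in cl) / len(cl), sum(c[1] for c in cl) / len(cl))

    def demand(cl):
        return sum(c[2] for c in cl)

    while True:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if demand(clusters[i]) + demand(clusters[j]) > capacity:
                    continue  # merged cluster would not fit one vehicle
                (xi, yi), (xj, yj) = centroid(clusters[i]), centroid(clusters[j])
                d = math.hypot(xi - xj, yi - yj)
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best is None:
            return clusters       # no feasible merge remains
        _, i, j = best
        clusters[i] += clusters.pop(j)
```

Each surviving cluster then becomes an input to the second (routing) subproblem.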

Keywords: vehicle routing problems, clustering algorithms, Clarke and Wright Saving Method, agglomerative hierarchical clustering

Procedia PDF Downloads 393
13635 Adapting the Chemical Reaction Optimization Algorithm to the Printed Circuit Board Drilling Problem

Authors: Taisir Eldos, Aws Kanan, Waleed Nazih, Ahmad Khatatbih

Abstract:

Chemical Reaction Optimization (CRO) is an optimization metaheuristic inspired by the nature of chemical reactions as a natural process of transforming substances from unstable to stable states: starting with unstable molecules of excessive energy, a sequence of interactions takes the set to a state of minimum energy. Researchers have reported successful applications of the algorithm to engineering problems such as the quadratic assignment problem, with superior performance compared with other optimization algorithms. We adapted this optimization algorithm to the Printed Circuit Board Drilling Problem (PCBDP) with the aim of reducing drilling time and hence improving PCB manufacturing throughput. Although the PCBDP can be viewed as an instance of the popular Traveling Salesman Problem (TSP), it has characteristics that require special attention to the transitions that explore the solution landscape. Experimental results using the standard CROToolBox are not promising for practically sized problems, although it could find optimal solutions for artificial problems and small benchmarks as a proof of concept.

Keywords: evolutionary algorithms, chemical reaction optimization, traveling salesman, board drilling

Procedia PDF Downloads 519
13634 Optimal Design of Linear Generator to Recharge the Smartphone Battery

Authors: Jin Ho Kim, Yujeong Shin, Seong-Jin Cho, Dong-Jin Kim, U-Syn Ha

Abstract:

Due to the development of the information industry and its technologies, cellular phones must not only provide communication but also functions such as Internet access, e-banking, and entertainment; such phones are called smartphones. As the functionality of smartphones has grown, their performance has improved and battery capacity has gradually increased. Recently, linear generators have been embedded in smartphones in order to recharge the smartphone's battery. In this study, optimization is performed and a change in the array of permanent magnets is examined in order to increase efficiency. We propose an optimal design using design of experiments (DOE) to maximize the generated induced voltage. The thickness of the pole shoe and permanent magnet (PM), the height of the pole shoe and PM, and the thickness of the coil are chosen as design variables. We generated 25 sampling points using an orthogonal array over the four design variables and performed electromagnetic finite element analysis to predict the generated induced voltage using the commercial electromagnetic analysis software ANSYS Maxwell. We then built an approximate model using the Kriging algorithm and derived optimal values of the design variables using an evolutionary algorithm; the commercial optimization software PIAnO (Process Integration, Automation, and Optimization) was used with these algorithms. The optimization results show that the generated induced voltage is improved.

Keywords: smartphone, linear generator, design of experiment, approximate model, optimal design

Procedia PDF Downloads 345
13633 Fiber-Reinforced Sandwich Structures Based on Selective Laser Sintering: A Technological View

Authors: T. Häfele, J. Kaspar, M. Vielhaber, W. Calles, J. Griebsch

Abstract:

The demand for an increasing diversification of the product spectrum, associated with the current strong desire for customization and the consequently decreasing unit quantities of each production lot, is gaining more and more importance within a great variety of industrial branches, e.g., the automotive industry. Nevertheless, traditional product development and production processes (molding, extrusion) are already reaching their limits, or fail to address these trends toward a flexible and digitized production with product variability down to lot size one. Upcoming innovative production concepts such as additive manufacturing create new opportunities, with extensive potential in product development (constructive optimization) and manufacturing (economic individualization), but they mostly suffer from insufficient strength when it comes to structural components. Therefore, this contribution presents an innovative technological and procedural conception of a hybrid additive manufacturing process (fiber-reinforced sandwich structures based on selective laser sintering technology) to overcome these structural weaknesses and consequently support the design of complex lightweight components.

Keywords: additive manufacturing, fiber-reinforced plastics (FRP), hybrid design, lightweight design

Procedia PDF Downloads 297
13632 Assessment of Mortgage Applications Using Fuzzy Logic

Authors: Swathi Sampath, V. Kalaichelvi

Abstract:

The assessment of the risk posed by a borrower to a lender is one of the common problems that financial institutions have to deal with. Consumers applying for a mortgage are generally compared to each other by means of a number called the Credit Score, which is generated by applying a mathematical algorithm to information in the applicant’s credit report. The higher the credit score, the lower the risk posed by the candidate, and the more favorably the lender views the application. The objective of the present work is to use fuzzy logic and linguistic rules to create a model that generates Credit Scores.
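
A minimal sketch of how linguistic rules and fuzzy membership functions can produce a crisp credit score. The inputs (annual income in thousands, debt-to-income ratio), membership bounds, rule consequents, and the Sugeno-style weighted-average defuzzification are all illustrative assumptions; the paper's actual variables and rule base are not reproduced here.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def credit_score(income_k, debt_ratio):
    # Fuzzify the inputs (membership bounds are illustrative assumptions)
    inc_low, inc_high = tri(income_k, -1, 0, 50), tri(income_k, 30, 100, 1e9)
    debt_low, debt_high = tri(debt_ratio, -1, 0, 0.5), tri(debt_ratio, 0.2, 1.0, 2.0)
    # Linguistic rules with crisp consequents, combined by weighted average
    rules = [
        (min(inc_high, debt_low), 800),   # high income AND low debt  -> excellent
        (min(inc_high, debt_high), 600),  # high income AND high debt -> fair
        (min(inc_low, debt_low), 650),    # low income AND low debt   -> good
        (min(inc_low, debt_high), 450),   # low income AND high debt  -> poor
    ]
    total = sum(w for w, _ in rules)
    return sum(w * s for w, s in rules) / total if total else 500.0
```

For example, a high-income, low-debt applicant activates the first rule most strongly and receives a score near its consequent, while a low-income, high-debt applicant is pulled toward the lowest consequent.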

Keywords: credit scoring, fuzzy logic, mortgage, risk assessment

Procedia PDF Downloads 405
13631 Computing Continuous Skyline Queries without Discriminating between Static and Dynamic Attributes

Authors: Ibrahim Gomaa, Hoda M. O. Mokhtar

Abstract:

Most existing skyline query algorithms focus on querying static points in static databases; however, with the expanding number of sensors, wireless communications, and mobile applications, the demand for continuous skyline queries has increased. Unlike traditional skyline queries, which only consider static attributes, continuous skyline queries include dynamic attributes as well as static ones. However, as skyline computation is based on checking the domination of skyline points over all dimensions, both the static and dynamic attributes must be considered together, without separation. In this paper, we present an efficient algorithm for computing continuous skyline queries without discriminating between static and dynamic attributes. In brief, our algorithm proceeds as follows. First, it excludes the points which cannot be in the initial skyline result; this pruning phase reduces the required number of comparisons. Second, the association between the spatial positions of data points is examined; this phase gives an idea of where changes in the result might occur and consequently enables us to efficiently update the skyline result (continuous update) rather than computing the skyline from scratch. Finally, an experimental evaluation is provided which demonstrates the accuracy, performance, and efficiency of our algorithm compared with other existing approaches.
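
The domination check at the heart of skyline computation, together with a simple pruning idea (sorting by attribute sum so that a point can only be dominated by points examined before it), can be sketched as follows. This is a generic minimization skyline, not the authors' exact continuous-update algorithm.

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly better in one (minimization)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    # Pruning idea: after sorting by attribute sum, a point can only be dominated
    # by points that appear before it, so comparing against the partial result suffices.
    result = []
    for p in sorted(points, key=sum):
        if not any(dominates(q, p) for q in result):
            result.append(p)
    return result
```

In the continuous setting, dynamic attributes (e.g., distance to a moving object) change over time, so a full recomputation would rerun this over all points at every instant; the paper's contribution is to update `result` incrementally instead.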

Keywords: continuous query processing, dynamic database, moving object, skyline queries

Procedia PDF Downloads 210
13630 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
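
The iterative select-then-label loop can be sketched as below. The deep novelty estimator is replaced by a stand-in that scores an image by the distance of a feature vector to its nearest labeled neighbour; the actual method uses a U-shaped network and also localizes the unseen data within each image. All names and parameters here are illustrative assumptions.

```python
def novelty_score(image, labeled):
    # Stand-in for the deep novelty estimator: squared distance of an image's
    # feature vector to its nearest already-labeled neighbour.
    return min(sum((a - b) ** 2 for a, b in zip(image, lab)) for lab in labeled)

def iterative_labeling(pool, initial_k=2, batch=2, rounds=1):
    """Label a seed set, then repeatedly label the most novel remaining images."""
    labeled, unlabeled = list(pool[:initial_k]), list(pool[initial_k:])
    for _ in range(rounds):
        unlabeled.sort(key=lambda im: novelty_score(im, labeled), reverse=True)
        labeled += unlabeled[:batch]       # the human annotator labels these next
        unlabeled = unlabeled[batch:]
    return labeled
```

The effect is that images far from anything already labeled (rare examples) are surfaced first, rather than more repeats of common examples.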

Keywords: computer vision, deep learning, object detection, semiconductor

Procedia PDF Downloads 136
13629 Deep Reinforcement Learning Approach for Optimal Control of Industrial Smart Grids

Authors: Niklas Panten, Eberhard Abele

Abstract:

This paper presents a novel approach for real-time and near-optimal control of industrial smart grids by deep reinforcement learning (DRL). To achieve highly energy-efficient factory systems, the energetic linkage of machines, technical building equipment, and the building itself is desirable. However, the increased complexity of the interacting sub-systems, multiple time-variant target values, and stochastic influences from the production environment, weather, and energy markets make it difficult to efficiently control the energy production, storage, and consumption in hybrid industrial smart grids. The studied deep reinforcement learning approach explores the solution space for control policies that minimize a cost function. The deep neural network of the DRL agent is based on a multilayer perceptron (MLP), Long Short-Term Memory (LSTM), and convolutional layers. The agent is trained within multiple Modelica-based factory simulation environments by the Advantage Actor-Critic algorithm (A2C). The DRL controller is evaluated by means of the simulation and then compared to a conventional, rule-based approach. Finally, the results indicate that the DRL approach is able to improve the control performance and significantly reduce the energy and operating costs of industrial smart grids.
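
At the core of A2C training is the advantage estimate: the discounted return minus the critic's value prediction. A minimal sketch of that computation follows; the networks, Modelica environments, and the actual cost function are omitted, and `gamma` and the bootstrap value are assumptions.

```python
def discounted_returns(rewards, gamma=0.99, bootstrap=0.0):
    """Backward pass computing G_t = r_t + gamma * G_{t+1}."""
    g, out = bootstrap, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return out[::-1]

def advantages(rewards, values, gamma=0.99, bootstrap=0.0):
    """A2C advantage estimate A_t = G_t - V(s_t); a positive value means the
    taken actions beat the critic's expectation."""
    returns = discounted_returns(rewards, gamma, bootstrap)
    return [g - v for g, v in zip(returns, values)]
```

In training, the actor's policy gradient is weighted by these advantages while the critic regresses its values toward the discounted returns.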

Keywords: industrial smart grids, energy efficiency, deep reinforcement learning, optimal control

Procedia PDF Downloads 195
13628 A Monocular Measurement for 3D Objects Based on Distance Area Number and New Minimize Projection Error Optimization Algorithms

Authors: Feixiang Zhao, Shuangcheng Jia, Qian Li

Abstract:

High-precision measurement of a target’s position and size is one of the hotspots in the field of vision inspection. This paper proposes a three-dimensional object positioning and measurement method using a monocular camera and GPS, namely Distance Area Number-New Minimize Projection Error (DAN-NMPE). Our algorithm contains two parts: DAN, a picture-sequence algorithm, and NMPE, a projection-error optimization algorithm; together they greatly improve the measurement accuracy of the target’s position and size. Comprehensive experiments validate the effectiveness of the proposed method on a self-made traffic sign dataset. The results show that, with a laser point cloud as the ground truth, the size and position errors of the traffic signs measured by this method are ±5% and 0.48 ± 0.3 m, respectively. In addition, we also compared it with the current mainstream method, which uses a monocular camera to locate and measure traffic signs. DAN-NMPE attains significant improvements compared to existing state-of-the-art methods, improving the measurement accuracy of size and position by 50% and 15.8%, respectively.
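
The "minimize projection error" idea can be illustrated with a pinhole camera model: project a candidate 3D position to pixels, compare with the observed detection, and adjust the position to shrink the error. The sketch below uses assumed intrinsics (`f`, `cx`, `cy`) and a toy coordinate-descent search; it is not the authors' NMPE formulation.

```python
def project(point3d, f=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    x, y, z = point3d
    return (f * x / z + cx, f * y / z + cy)

def projection_error(point3d, observed_px):
    u, v = project(point3d)
    ou, ov = observed_px
    return ((u - ou) ** 2 + (v - ov) ** 2) ** 0.5

def refine_position(guess, observed_px, step=0.01, iters=300):
    # Toy coordinate-descent stand-in for projection-error minimization:
    # nudge each coordinate in whichever direction reduces the pixel error.
    p = list(guess)
    for _ in range(iters):
        for i in range(3):
            for d in (step, -step):
                trial = p[:i] + [p[i] + d] + p[i + 1:]
                if projection_error(trial, observed_px) < projection_error(p, observed_px):
                    p = trial
    return p
```

Note that a single view only constrains the position up to the viewing ray; in the paper, the image sequence (DAN) and GPS provide the additional constraints that fix scale and depth.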

Keywords: monocular camera, GPS, positioning, measurement

Procedia PDF Downloads 144
13627 Light Weight Fly Ash Based Composite Material for Thermal Insulation Applications

Authors: Bharath Kenchappa, Kunigal Shivakumar

Abstract:

Lightweight, low-thermal-conductivity, high-temperature-resistant materials, or systems with moderate mechanical properties capable of withstanding high heating rates, are needed in both commercial and military applications. A single material with all of these attributes is very difficult to find, and one needs innovative ideas to build such a material system from what is available. To bring down the cost of the system, one has to be conscious of the cost of the basic materials. Such a material system can be called a thermal barrier system. This paper focuses on developing, testing, and characterizing a material system for thermal barrier applications. The material developed is porous, with low density, a low thermal conductivity of 0.1062 W/m·°C, and a glass transition temperature of about 310 °C. The thermal properties of the developed material were measured in both the longitudinal and thickness directions to highlight the fact that the material shows isotropic behavior. The material is called modified Eco-Core and uses less than 9% by weight of high-char resin in the composite. The filler (reinforcing material) is a component of fly ash called Cenospheres, which are hollow micro-bubbles made of ceramic materials. A special mixing technique is used to surface-coat the fillers with a thin layer of resin to develop point-to-point contact between particles. One could use commercial ceramic micro-bubbles instead of Cenospheres, but they are expensive. The bulk density of Cenospheres is about 0.35 g/cc, and a composite density of about 0.4 g/cc was accomplished. Standard drywall-grade fibers of 3 mm length, at one percent of the filler weight, were added for toughness. Both thermal and mechanical characterization was performed and the properties are documented. For higher-temperature applications (up to 1,000 °C), a hybrid system was developed using an aerogel mat. The properties of the combined material were characterized and documented. Thermal tests were conducted on both the bare modified Eco-Core and the hybrid material to assess the suitability of the material for a thermal barrier application. The hybrid material system was found to meet the requirements of the application.

Keywords: aerogel, fly ash, porous material, thermal barrier

Procedia PDF Downloads 111
13626 Tools for Transparency: The Role of Civic Technology in Increasing the Transparency of the State

Authors: Rebecca Rumbul

Abstract:

The operation of the state can often appear opaque to citizens wishing to access official information, who have to negotiate a path through numerous levels of bureaucracy, rationalized through institutional policy, to acquire the information they want. Even where individual states have 'Right to Information' legislation guaranteeing citizen access to information, public sector conformity to such laws varies between states and between state organizations. In response to such difficulties in bringing citizens and information together, many NGOs around the world have begun designing and hosting digital portals to facilitate the requesting and receiving of official information. How, then, are these 'civic technology' tools affecting the behavior of the state? Are they increasing the transparency of the state? This study looked at five Right to Information civic technology sites in Chile, Uruguay, Ukraine, Hungary, and the UK, and found that such sites provide a useful platform for publishing official information, but that states were still reluctant to comply with all requests. It concludes that civic technology can be an important tool in increasing the transparency of the state, but that the state must have an institutional commitment to information rights for this to be fully effective.

Keywords: digital, ICT, transparency, civic technology

Procedia PDF Downloads 662
13625 Templating Copper on Polymer/DNA Hybrid Nanowires

Authors: Mahdi Almaky, Reda Hassanin, Benjamin Horrocks, Andrew Houlton

Abstract:

DNA-templated poly(N-substituted pyrrole) bipyridinium nanowires were synthesised at room temperature using the chemical oxidation method. The resulting CPs/DNA hybrids have been characterised using electronic and vibrational spectroscopic methods, especially ultraviolet-visible (UV-Vis) spectroscopy and FTIR spectroscopy. The nanowire morphology was characterised using atomic force microscopy (AFM). The electrical properties of the prepared nanowires were characterised using electrostatic force microscopy (EFM) and measured using conductive AFM (c-AFM) and the two-terminal I/V technique, where the temperature dependence of the conductivity was probed. The conductivities of the prepared CPs/DNA nanowires are generally lower than those of PPy/DNA nanowires, showing the large effect of N-alkylation in decreasing the conductivity of the polymer, but they are higher than the conductivity of the corresponding bulk films. This enhancement in conductivity could be attributed to the ordering of the polymer chains on DNA during the templating process. The prepared CPs/DNA nanowires were used as templates for the growth of copper nanowires at room temperature, using an aqueous solution of Cu(NO3)2 as the source of Cu2+ and ascorbic acid as the reducing agent. AFM images showed that these nanowires were uniform and continuous compared to copper nanowires prepared by templating directly onto DNA. Electrical characterization of the nanowires by c-AFM revealed a slight improvement in the conductivity of these nanowires (Cu-CPs/DNA) compared to the CPs/DNA nanowires before metallisation.

Keywords: templating, copper nanowires, polymer/DNA hybrid, chemical oxidation method

Procedia PDF Downloads 363
13624 Control Algorithm for Home Automation Systems

Authors: Marek Długosz, Paweł Skruch

Abstract:

One of the purposes of home automation systems is to provide appropriate comfort to users through suitable air temperature control and stabilization inside the rooms. The control of the temperature level is not a simple task, and the basic difficulty results from the fact that the accurate parameters of the object of control, that is, a building, remain unknown. Whereas the structure of the model is known, the identification of the model parameters is a difficult task. In this paper, a control algorithm is presented which allows the preset temperature to be reached inside the building within a specified time, without the need to know the accurate parameters of the building itself.
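
As an illustration of reaching a preset temperature without knowing the building parameters, the sketch below runs a PI feedback loop against a hidden first-order thermal plant. All plant values and controller gains are assumptions for the sketch, not the algorithm proposed in the paper.

```python
def pi_temperature_control(setpoint, t0, steps=4000, dt=1.0, kp=2.0, ki=0.02):
    """PI loop driving a hidden first-order thermal plant to the setpoint.

    The controller only sees the measured temperature; the plant parameters
    below (time constant, outdoor temperature, heater gain) stay unknown to it.
    """
    tau, t_out, heater_gain = 600.0, 5.0, 0.001   # hidden, illustrative plant
    temp, integral, history = t0, 0.0, []
    for _ in range(steps):
        error = setpoint - temp
        integral += error * dt
        heat = max(0.0, kp * error + ki * integral)         # heater only, no cooling
        temp += dt * ((t_out - temp) / tau + heater_gain * heat)
        history.append(temp)
    return history
```

The integral term is what lets the loop settle on the setpoint despite never identifying `tau` or the heat losses, which mirrors the motivation stated in the abstract.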

Keywords: control, home automation system, wireless networking, automation engineering

Procedia PDF Downloads 618
13623 An Optimized Approach to Generate the Possible States of Football Tournaments Final Table

Authors: Mouslem Damkhi

Abstract:

This paper focuses on the possible states of a football tournament final table according to the number of participating teams. Each team holds a position in the table from which it is possible to determine the highest and lowest points attainable by that team. This paper proposes an optimized search space, based on the minimum and maximum number of points that can be gained by each team, to produce and enumerate the possible states of a football tournament final table. The proposed search space minimizes the production of invalid states which cannot occur during a football tournament. The generated states are filtered by a validity-checking algorithm which seeks to construct a tournament graph consistent with a generated state. Thus, the algorithm provides a way to determine which combinations of wins, draws, and losses guarantee a particular table position. The paper also presents and discusses experimental results of the approach on tournaments with up to eight teams. Compared with a blind search algorithm, our proposed approach reduces the generation of invalid states by up to 99.99%, which results in a considerable optimization in terms of execution time.
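
For reference, the full set of valid final tables for a small single round-robin (win = 3 points, draw = 1 point each, loss = 0) can be obtained by brute force over match outcomes, which is exactly what the proposed bounded search space is designed to avoid for larger tournaments:

```python
from itertools import combinations, product

def final_tables(n):
    """All valid final point tables (sorted descending) of a single round-robin
    with n teams: win = 3 points, draw = 1 point each, loss = 0."""
    matches = list(combinations(range(n), 2))
    tables = set()
    # 0 = first team wins, 1 = draw, 2 = second team wins
    for outcome in product((0, 1, 2), repeat=len(matches)):
        pts = [0] * n
        for (i, j), o in zip(matches, outcome):
            if o == 0:
                pts[i] += 3
            elif o == 2:
                pts[j] += 3
            else:
                pts[i] += 1
                pts[j] += 1
        tables.add(tuple(sorted(pts, reverse=True)))
    return tables
```

The blind search explores 3^(n(n-1)/2) outcomes, roughly 2.3 × 10^13 already at n = 8, which is why bounding each position by its minimum and maximum attainable points matters.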

Keywords: combinatorics, enumeration, graph, tournament

Procedia PDF Downloads 122