Search results for: algorithm techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9584

7544 Determination of Weathering at Kilistra Ancient City by Using Non-Destructive Techniques, Central Anatolia, Turkey

Authors: İsmail İnce, Osman Günaydin, Fatma Özer

Abstract:

Stones used in the construction of historical structures are exposed to various direct or indirect atmospheric effects depending on climatic conditions, and building stones deteriorate partially or fully as a result of this exposure. Historic structures are important symbols of cultural heritage; therefore, it is important to protect and restore them. The aim of this study is to determine the weathering conditions at the Kilistra ancient city, which is located southwest of Konya, Central Anatolia, and was built by carving into pyroclastic rocks during the Byzantine era. For this purpose, the petrographic and mechanical properties of the pyroclastic rocks were determined. In the assessment of weathering of structures in the ancient city, in-situ non-destructive testing methods (i.e., Schmidt hardness rebound value and relative humidity measurement) were applied.

Keywords: cultural heritage, Kilistra ancient city, non-destructive techniques, weathering

Procedia PDF Downloads 342
7543 Secure Automatic Key SMS Encryption Scheme Using Hybrid Cryptosystem: An Approach for One Time Password Security Enhancement

Authors: Pratama R. Yunia, Firmansyah, I., Ariani, Ulfa R. Maharani, Fikri M. Al

Abstract:

Although the role of SMS as a means of everyday communication has been largely replaced by online applications such as WhatsApp and Telegram, SMS is still used for certain important communication needs, among them the delivery of one-time passwords (OTPs) as an authentication medium for various online applications ranging from chatting and shopping to online banking. However, SMS by itself does not guarantee the security of transmitted messages: messages transmitted between base stations (BTS) are still in plaintext, making them extremely vulnerable to eavesdropping, especially when the message is confidential, as an OTP is. One solution to this problem is an SMS application that provides security services for each transmitted message. Responding to this problem, in this study an automatic-key SMS encryption scheme was designed as a means to secure SMS communication. The proposed scheme allows SMS sending that is automatically encrypted with keys that are constantly changing (automatic key update), together with automatic key exchange and automatic key generation. In terms of the security method, the proposed scheme applies cryptographic techniques with a hybrid cryptosystem mechanism. To validate the proposed scheme, a client-to-client SMS encryption application was developed on the Java platform with AES-256 as the encryption algorithm, RSA-768 as the public- and private-key generator, and SHA-256 as the message hashing function. The result of this study is a secure automatic-key SMS encryption scheme using a hybrid cryptosystem which can guarantee the security of every transmitted message and thus serve as a reliable solution for sending confidential messages through SMS, although it still has weaknesses in terms of processing time.
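
As a rough illustration of the hybrid pattern described above (a fresh symmetric session key protects the message, an asymmetric key pair protects the session key, and a hash provides integrity), the following Python sketch uses the cryptography package with AES-256-GCM, RSA-OAEP and SHA-256. It is not the paper's Java implementation, and RSA-2048 is used here instead of RSA-768 because modern libraries discourage such short keys.

```python
import hashlib
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Receiver's long-term RSA key pair (RSA-2048 for illustration).
receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_public = receiver_private.public_key()

def encrypt_sms(message: bytes):
    """Hybrid encryption: fresh AES-256 session key per message, wrapped with RSA."""
    session_key = AESGCM.generate_key(bit_length=256)   # automatic key generation
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, message, None)
    wrapped_key = receiver_public.encrypt(session_key, OAEP)
    digest = hashlib.sha256(message).hexdigest()         # integrity check value
    return wrapped_key, nonce, ciphertext, digest

def decrypt_sms(wrapped_key, nonce, ciphertext, digest):
    session_key = receiver_private.decrypt(wrapped_key, OAEP)
    message = AESGCM(session_key).decrypt(nonce, ciphertext, None)
    assert hashlib.sha256(message).hexdigest() == digest, "message tampered"
    return message

parts = encrypt_sms(b"Your OTP is 483920")
print(decrypt_sms(*parts))
```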

Keywords: encryption scheme, hybrid cryptosystem, one time password, SMS security

Procedia PDF Downloads 114
7542 Neutral Heavy Scalar Searches via Standard Model Gauge Boson Decays at the Large Hadron Electron Collider with Multivariate Techniques

Authors: Luigi Delle Rose, Oliver Fischer, Ahmed Hammad

Abstract:

In this article, we study the prospects of the proposed Large Hadron electron Collider (LHeC) in the search for heavy neutral scalar particles. We consider a minimal model with one additional complex scalar singlet that interacts with the Standard Model (SM) via mixing with the Higgs doublet, giving rise to an SM-like Higgs boson and a heavy scalar particle. Both scalar particles are produced via vector boson fusion and can be tested via their decays into pairs of SM particles, analogously to the SM Higgs boson. Using multivariate techniques, we show that the LHeC is sensitive to heavy scalars with masses between 200 and 800 GeV down to scalar mixing of order 0.01.

Keywords: beyond the standard model, large hadron electron collider, multivariate analysis, scalar singlet

Procedia PDF Downloads 123
7541 Attack Redirection and Detection Using Honeypots

Authors: Chowduru Ramachandra Sharma, Shatunjay Rawat

Abstract:

A false positive occurs when the IDS/IPS identifies an activity as an attack although the activity is acceptable behavior in the system. False positives in a Network Intrusion Detection System (NIDS) are an issue because they desensitize the administrator, and they waste computational power and valuable resources when rules are not tuned properly, which is the main issue with anomaly-based NIDS. Furthermore, most false-positive reduction techniques are not performed in real time during attempted intrusions; instead, they are applied afterward to collected traffic data to generate alerts. False-positive detection in 'offline mode' is certainly valuable, but there is room for improvement: automated techniques still need to reduce false positives in real time. This paper uses the Snort signature detection model to redirect alerted attacks to honeypots and verify the attacks.
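
A minimal sketch of the redirection step, assuming a simplified Snort fast-alert log format and an illustrative honeypot address: the offending source IP is extracted from each alert and all of its further TCP traffic is rewritten towards the honeypot with a standard netfilter/iptables DNAT rule. The log path, regex and addresses are assumptions, not the authors' implementation.

```python
import re
import subprocess

HONEYPOT_IP = "10.0.0.99"                      # assumed honeypot address (illustrative)
ALERT_LOG = "/var/log/snort/alert_fast.txt"    # assumed fast-alert log path

# Very simplified pattern: grab the source IP of an alerted flow.
# Snort fast alerts end with something like: "... {TCP} 192.168.1.50:4444 -> 10.0.0.5:22"
FLOW_RE = re.compile(r"\{(?:TCP|UDP)\}\s+(\d+\.\d+\.\d+\.\d+)")

def redirect_to_honeypot(attacker_ip: str) -> None:
    """Rewrite further TCP traffic from attacker_ip to the honeypot
    using a netfilter DNAT rule (standard iptables syntax)."""
    subprocess.run(
        ["iptables", "-t", "nat", "-A", "PREROUTING",
         "-s", attacker_ip, "-p", "tcp",
         "-j", "DNAT", "--to-destination", HONEYPOT_IP],
        check=True,
    )

def scan_alerts() -> None:
    """Single pass over the alert file; a real deployment would tail it."""
    seen = set()
    with open(ALERT_LOG) as log:
        for line in log:
            match = FLOW_RE.search(line)
            if match and match.group(1) not in seen:
                seen.add(match.group(1))
                redirect_to_honeypot(match.group(1))

if __name__ == "__main__":
    scan_alerts()
```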

Keywords: honeypot, TPOT, snort, NIDS, honeybird, iptables, netfilter, redirection, attack detection, docker, snare, tanner

Procedia PDF Downloads 140
7540 Study on Control Techniques for Adaptive Impact Mitigation

Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty

Abstract:

Progress in the field of sensors, electronics and computing results in increasingly frequent applications of adaptive techniques for dynamic response mitigation. When it comes to systems excited with mechanical impacts, the control system has to take into account the significant limitations of the actuators responsible for system adaptation. The paper provides a comprehensive discussion of the problem of appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezo-electric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second considered system is a safety air cushion applied for the evacuation of people from heights during a fire. For both systems, it is possible to ensure adaptive performance, but the realization of the system's adaptation is completely different. The reason for this is the technical limitations corresponding to specific types of shock-absorbing devices and their parameters. Impact mitigation using a pneumatic shock absorber corresponds to much higher pressures and small mass flow rates, which can be achieved with a minimal change of valve opening. In turn, mass flow rates in safety air cushions relate to gas release areas counted in thousands of sq. cm. Because of these facts, both shock-absorbing systems are controlled based on completely different approaches. The pneumatic shock absorber takes advantage of real-time control with the valve opening recalculated at least every millisecond. In contrast, the safety air cushion is controlled using a semi-passive technique, where adaptation is provided using prediction of the entire impact mitigation process. The similarities of both approaches, including the applied models, algorithms and equipment, are discussed. The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.

Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber

Procedia PDF Downloads 76
7539 Modeling of Sediment Yield and Streamflow of Watershed Basin in the Philippines Using the Soil Water Assessment Tool Model for Watershed Sustainability

Authors: Warda L. Panondi, Norihiro Izumi

Abstract:

Sedimentation is a significant threat to the sustainability of reservoirs and their watersheds. In the Philippines, the Pulangi watershed has experienced high sediment loss, mainly due to land conversions and plantations, showing critical erosion rates beyond the tolerable limit of 10 ton/ha/yr in all of its sub-basins. Given this, the prediction of runoff volume and sediment yield is essential for a realistic examination of the country's soil conservation techniques. In this research, the Pulangi watershed was modeled using the Soil and Water Assessment Tool (SWAT) to predict the watershed basin's annual runoff and sediment yield. For the calibration and validation of the model, SWAT-CUP was utilized. The model was calibrated with monthly discharge data for 1990-1993 and validated for 1994-1997. The sediment yield was calibrated for 2014 and validated for 2015 because of limited observed datasets. Uncertainty analysis and calculation of efficiency indexes were accomplished through the SUFI-2 algorithm. According to the coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE), Kling-Gupta efficiency (KGE), and PBIAS, the calculation of streamflow indicates good performance for both the calibration and validation periods, while the sediment yield shows satisfactory performance for both calibration and validation. This study was therefore able to identify the most critical sub-basins and the most severe needs for soil conservation. Furthermore, it will provide baseline information to prevent floods and landslides and serve as a useful reference for land-use policies and watershed management and sustainability in the Pulangi watershed.
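
For reference, the efficiency indexes named above have standard closed forms; the short numpy sketch below computes them for a toy observed/simulated discharge pair. The numbers are illustrative, and SUFI-2 may use slightly different variants of these formulas.

```python
import numpy as np

def efficiency_indexes(obs, sim):
    """Standard goodness-of-fit measures used to judge SWAT calibration."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    r2 = r ** 2                                       # coefficient of determination
    nse = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    alpha = sim.std() / obs.std()                     # variability ratio
    beta = sim.mean() / obs.mean()                    # bias ratio
    kge = 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
    pbias = 100.0 * np.sum(obs - sim) / np.sum(obs)   # percent bias
    return {"R2": r2, "NSE": nse, "KGE": kge, "PBIAS": pbias}

# toy monthly discharge series (illustrative numbers only)
observed  = [12.0, 18.5, 30.2, 25.1, 14.3, 9.8]
simulated = [11.2, 20.1, 27.9, 26.4, 13.0, 10.5]
print(efficiency_indexes(observed, simulated))
```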

Keywords: Pulangi watershed, sediment yield, streamflow, SWAT model

Procedia PDF Downloads 186
7538 Design of Low Latency Multiport Network Router on Chip

Authors: P. G. Kaviya, B. Muthupandian, R. Ganesan

Abstract:

On-chip routers typically have buffers at their input or output ports for temporarily storing packets, and these buffers consume a share of the router area and power. Multiple queues operate in parallel, as in a virtual-channel (VC) router. While running a traffic trace, not all input ports have incoming packets to transfer, so a large number of queues are empty while others are busy, and latency grows under heavy traffic. A RoShaQ architecture is therefore used to minimize buffer area and time: at low traffic, input packets travel through the shared queues, whereas at high load they bypass the shared queues, reducing power and area consumption. A parallel crossbar architecture is proposed in this project in order to reduce the power consumption, and a new adaptive weighted routing algorithm for an 8-port router architecture is proposed in order to decrease the delay of the network-on-chip router. The proposed system is simulated using ModelSim and synthesized using Xilinx Project Navigator.

Keywords: buffer, RoShaQ architecture, shared queue, VC router, weighted routing algorithm

Procedia PDF Downloads 529
7537 Locomotion Effects of Redundant Degrees of Freedom in Multi-Legged Quadruped Robots

Authors: Hossein Keshavarz, Alejandro Ramirez-Serrano

Abstract:

Energy efficiency and locomotion speed are two key parameters for legged robots; thus, finding ways to improve them is important. This paper proposes a locomotion framework to analyze the energy usage and speed of quadruped robots via a Genetic Algorithm (GA) optimization process. For this, a quadruped robot platform with joint redundancy in its hind legs, which we believe will help multi-legged robots improve their speed and energy consumption, is used. ContinuO, the quadruped robot of interest, has 14 active degrees of freedom (DoFs), including three DoFs for each front leg and, unlike previously developed quadruped robots, four DoFs for each hind leg. ContinuO aims to realize a cost-effective quadruped robot for real-world scenarios with high speeds and the ability to overcome large obstructions. The proposed framework is used to locomote the robot and analyze the energy consumed at diverse stride lengths and locomotion speeds. The analysis is performed by comparing the obtained results in two modes, with and without the joint redundancy on the robot's hind legs.
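
To make the GA step concrete, here is a compact sketch of a real-coded genetic algorithm searching over two gait parameters (stride length and speed) to minimize an energy proxy. The cost function and bounds are placeholders standing in for ContinuO's simulated dynamics, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Decision variables: [stride length (m), locomotion speed (m/s)] -- illustrative bounds
LOW, HIGH = np.array([0.1, 0.2]), np.array([0.6, 1.5])

def energy_cost(x):
    """Placeholder cost standing in for the simulated energy consumption of the
    robot; the real framework would evaluate the robot model instead."""
    stride, speed = x
    return (speed / stride) + 0.5 * stride ** 2 + 2.0 / (speed + 0.1)

def genetic_algorithm(pop_size=40, generations=60, mut_sigma=0.05, elite=4):
    pop = rng.uniform(LOW, HIGH, size=(pop_size, 2))
    for _ in range(generations):
        fitness = np.array([energy_cost(ind) for ind in pop])
        order = np.argsort(fitness)                 # lower cost = fitter
        parents = pop[order[:pop_size // 2]]
        children = []
        while len(children) < pop_size - elite:
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random()
            child = w * a + (1 - w) * b             # arithmetic crossover
            child += rng.normal(0, mut_sigma, 2)    # Gaussian mutation
            children.append(np.clip(child, LOW, HIGH))
        pop = np.vstack([pop[order[:elite]], children])   # elitism + offspring
    return pop[np.argmin([energy_cost(ind) for ind in pop])]

print("best [stride, speed]:", genetic_algorithm())
```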

Keywords: genetic algorithm optimization, locomotion path planning, quadruped robots, redundant legs

Procedia PDF Downloads 71
7536 Fault Diagnosis and Fault-Tolerant Control of Bilinear-Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings

Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun

Abstract:

Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation and air conditioning) systems in buildings can have a significant impact on the desired and expected energy performance of buildings, as well as on the user's comfort. Fault-Tolerant Control (FTC) is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system, and its application to HVAC systems has gained attention in the last two decades. The objective is to keep the variations in system performance due to faults within an acceptable range with respect to the desired nominal behavior. This paper considers the so-called active approach, which is based on a fault detection and identification scheme combined with a control reconfiguration algorithm that consists in determining a new set of control parameters so that the reconfigured performance is 'as close as possible', in some sense, to the nominal performance. Thermal models of buildings and their HVAC systems are described by non-linear (usually bilinear) equations. Most of the work carried out so far in FDI (fault diagnosis and isolation) or FTC considers a linearized model of the studied system; however, such a model is only valid in a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of an HVAC system failure. The main contribution of the proposed FD algorithm is that, instead of using specific linearized models, the algorithm inherits the structure of the actual bilinear model of the building thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, i.e., weather dynamics, outdoor air temperature, and zone occupancy profile. A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown-input bilinear observers are considered for actuator or system-component FDI. The proposed strategy for FTC works as follows: at the first level, FDI algorithms are implemented, making it also possible to estimate the magnitude of the fault; once the fault is detected, the fault estimation is then used to feed the second level and reconfigure the control law so that the expected performance is recovered. This paper is organized as follows. A general structure for fault-tolerant control of buildings is first presented, and the building model under consideration is introduced. Then, the observer-based design for fault diagnosis of bilinear systems is studied. The FTC approach is developed in Section IV. Finally, a simulation example is given in Section V to illustrate the proposed method.
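
As a toy illustration of the residual-based idea (not the paper's observer design), the sketch below simulates a two-state discrete-time bilinear system together with an observer that inherits the same bilinear structure plus output injection; the residual stays near zero until an additive actuator fault is introduced. All matrices and signals are invented for illustration, not identified from a real HVAC system.

```python
import numpy as np

# Toy bilinear thermal-like model: x[k+1] = A x + u*(N x) + B u,  y = C x
A = np.array([[0.90, 0.05], [0.00, 0.85]])
N = np.array([[0.01, 0.00], [0.00, 0.02]])   # bilinear coupling between input and state
B = np.array([[0.10], [0.05]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.40], [0.10]])               # observer gain (chosen so A - L C is stable)

steps, fault_at = 200, 120
x = np.zeros((2, 1)); xh = np.zeros((2, 1))
residual = []

for k in range(steps):
    u = 0.5 + 0.3 * np.sin(0.05 * k)         # bounded control input (e.g. a valve command)
    fault = 0.4 if k >= fault_at else 0.0    # additive actuator fault
    y = C @ x
    r = (y - C @ xh).item()                  # residual fed to the FDI logic
    residual.append(abs(r))
    # plant update (the fault enters through the actuator channel)
    x = A @ x + u * (N @ x) + B * (u + fault)
    # bilinear observer inherits the same structure plus output injection
    xh = A @ xh + u * (N @ xh) + B * u + L * (y - C @ xh)

print("mean |r| before fault:", np.mean(residual[:fault_at]))
print("mean |r| after  fault:", np.mean(residual[fault_at:]))
```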

Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zones building

Procedia PDF Downloads 159
7535 On the Influence of the Metric Space in the Critical Behavior of Magnetic Temperature

Authors: J. C. Riaño-Rojas, J. D. Alzate-Cardona, E. Restrepo-Parra

Abstract:

In this work, a study of generic magnetic nanoparticles with a varying metric space is presented. As the metric space is changed, the nanoparticle form and the inner product are also varied, since the energetic scale is not conserved. The study is carried out using Monte Carlo simulations combining the Wolff embedding and Metropolis algorithms. The Metropolis algorithm is used in high-temperature regions to reach equilibrium quickly, while the Wolff embedding algorithm is used in the low-temperature and critical regions in order to reduce the critical slowing-down phenomenon. The number of ions is kept constant for the different forms, and the critical temperatures are found using finite-size scaling. We observed that the critical temperatures do not exhibit significant changes when the metric space is varied. Additionally, the effective dimension according to the metric space was determined. A study of the static behavior was carried out to obtain the static critical exponents. The objective of this work is to observe the behavior of thermodynamic quantities such as energy, magnetization, specific heat, susceptibility and Binder's cumulants in the critical region, in order to determine whether the magnetic nanoparticles describe their magnetic interactions in Euclidean space or whether there is any correspondence in other metric spaces.
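
For readers unfamiliar with the sampling step, here is a minimal Metropolis sweep for a 2D Ising-like lattice with periodic boundaries, written in Python with numpy. It illustrates only the high-temperature update used in the paper; the Wolff embedding cluster move and the nanoparticle geometry are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_sweep(spins, beta):
    """One Metropolis sweep over a 2D Ising lattice with periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # energy change for flipping spin (i, j): dE = 2 * s_ij * sum(neighbours)
        nb = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] \
           + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

side = 16
spins = rng.choice([-1, 1], size=(side, side))
for sweep in range(500):
    metropolis_sweep(spins, beta=0.6)   # beta above the 2D critical value of about 0.4407
print("magnetisation per spin:", spins.mean())
```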

Keywords: nanoparticles, metric, Monte Carlo, critical behaviour

Procedia PDF Downloads 501
7534 Optimization Techniques for Microwave Structures

Authors: Malika Ourabia

Abstract:

A new and efficient method is presented for the analysis of arbitrarily shaped discontinuities. The discontinuities are characterized using a hybrid spectral/numerical technique. The structure presents an arbitrary number of ports, each one with a different orientation and dimensions. This article presents a hybrid method based on multimode contour integral and mode matching techniques. The process is based on segmentation, dividing the structure into key building blocks. We use the multimode contour integral method to analyze the blocks, including irregularly shaped discontinuities. Finally, the multimode scattering matrix of the whole structure can be found by cascading the blocks. The new method is therefore suitable for the analysis of a wide range of waveguide problems and can be applied easily to the analysis of any multiport junctions and cascaded blocks. The accuracy of the method is validated by comparison with results for several complex problems found in the literature. CPU times are also included to show the efficiency of the proposed method.

Keywords: segmentation, s parameters, simulation, optimization

Procedia PDF Downloads 514
7533 Quantitative Elemental Analysis of Cyperus rotundus Medicinal Plant by Particle Induced X-Ray Emission and ICP-MS Techniques

Authors: J. Chandrasekhar Rao, B. G. Naidu, G. J. Naga Raju, P. Sarita

Abstract:

Particle Induced X-ray Emission (PIXE) and Inductively Coupled Plasma Mass Spectrometry (ICP-MS) techniques have been employed in this work to determine the elements present in the root of the Cyperus rotundus medicinal plant used in the treatment of rheumatoid arthritis. The elements V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Rb, and Sr were commonly identified and quantified by both PIXE and ICP-MS, whereas the elements Li, Be, Al, As, Se, Ag, Cd, Ba, Tl, Pb and U were determined by ICP-MS, and Cl, K, Ca, Ti and Br were determined by PIXE. The regional variation of the elemental content has also been studied by analyzing the same plant collected from different geographical locations. Information on the elemental content of the medicinal plant would be helpful in correlating its ability in the treatment of rheumatoid arthritis and also in deciding the dosage of this herbal medicine from the metal toxicity point of view. Principal component analysis and cluster analysis were also applied to the data matrix to understand the correlation among the elements.
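
As a sketch of the chemometric step mentioned at the end (principal component analysis and cluster analysis of the element-concentration matrix), the following Python example standardizes an illustrative samples-by-elements table, projects it onto two principal components and groups the samples hierarchically. The concentration values are invented, not the paper's PIXE/ICP-MS data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative concentration matrix (samples x elements, arbitrary units).
elements = ["Fe", "Mn", "Zn", "Cu", "Sr", "Rb"]
X = np.array([
    [310.0, 45.0, 22.0, 8.0, 30.0, 12.0],   # location 1
    [290.0, 40.0, 25.0, 7.5, 28.0, 11.0],   # location 2
    [450.0, 60.0, 30.0, 9.0, 41.0, 15.0],   # location 3
    [430.0, 58.0, 28.0, 9.5, 39.0, 14.0],   # location 4
])

Xs = StandardScaler().fit_transform(X)       # elements span very different scales
pca = PCA(n_components=2).fit(Xs)
scores = pca.transform(Xs)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Hierarchical clustering of the samples to group geographical locations
labels = fcluster(linkage(Xs, method="ward"), t=2, criterion="maxclust")
print("cluster labels per location:", labels)
```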

Keywords: PIXE, ICP-MS, elements, Cyperus rotundus, rheumatoid arthritis

Procedia PDF Downloads 320
7532 A Review on Modeling and Optimization of Integration of Renewable Energy Resources (RER) for Minimum Energy Cost, Minimum CO₂ Emissions and Sustainable Development, in Recent Years

Authors: M. M. Wagh, V. V. Kulkarni

Abstract:

The rising economic activity, growing population and improving living standards of the world have led to steady growth in its appetite for both the quality and quantity of energy services. As the economy expands, electricity demand will grow further, increasing the challenge of additional generation and the stresses on utility grids. An appropriate energy model will help in the proper utilization of locally available renewable energy sources such as solar, wind, biomass and small hydro to integrate into the available grid, reducing investments in energy infrastructure. Furthermore, new technologies and practices such as smart grids, decentralized energy planning, energy management and energy efficiency are emerging. In this paper, an attempt has been made to study and review recent energy planning models, energy forecasting models, and renewable energy integration models. In addition, various modeling techniques and tools are reviewed and discussed.

Keywords: energy modeling, integration of renewable energy, energy modeling tools, energy modeling techniques

Procedia PDF Downloads 325
7531 Energy Efficient Clustering with Adaptive Particle Swarm Optimization

Authors: Kumar Shashvat, Arshpreet Kaur, Rajesh Kumar, Raman Chadha

Abstract:

Wireless sensor networks have the principal characteristic of restricted energy, with the limitation that the energy of the nodes cannot be replenished. To increase the lifetime in this scenario, the WSN route for data transmission is chosen such that the energy consumed along the selected route is minimal. For such an energy-efficient network, a sound infrastructure is needed because it affects the network lifespan. Clustering is a technique in which nodes are grouped into disjoint, non-overlapping sets, and data is collected at the cluster head. In this paper, an Adaptive PSO (A-PSO) algorithm is proposed which forms energy-aware clusters by minimizing the cost of locating the cluster head. The main concern is the suitability of the swarms, addressed by adjusting the learning parameters of PSO. Particle Swarm Optimization converges quickly at the beginning of the search, but over the course of time it becomes stable and may be trapped in local optima. In the suggested network model, swarms are given the intelligence of spiders, which makes them capable of avoiding premature convergence and also helps them escape from local optima. Comparative analysis with traditional PSO shows that the new algorithm considerably enhances performance when multi-dimensional functions are taken into consideration.
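
A compact sketch of the idea, assuming a toy 2-D sensor field: standard PSO with a linearly decreasing inertia weight searches for cluster-head positions that minimize total node-to-head distance (a crude energy proxy), and particles that stagnate too long are relocated, standing in very loosely for the spider-inspired escape behaviour described above. None of these details are taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sensor field: 40 nodes in a 100 x 100 m area (illustrative deployment)
nodes = rng.uniform(0, 100, size=(40, 2))
K = 2                                    # number of cluster heads to place

def cost(position):
    """Energy proxy: total distance from every node to its nearest cluster head."""
    heads = position.reshape(K, 2)
    d = np.linalg.norm(nodes[:, None, :] - heads[None, :, :], axis=2)
    return d.min(axis=1).sum()

def adaptive_pso(n_particles=30, iters=200, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    dim = 2 * K
    x = rng.uniform(0, 100, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    stagnation = np.zeros(n_particles, dtype=int)

    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters          # adaptive inertia weight
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 100)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        stagnation = np.where(improved, 0, stagnation + 1)
        # crude "escape" move: relocate particles that stagnate for too long
        stuck = stagnation > 20
        x[stuck] = rng.uniform(0, 100, size=(stuck.sum(), dim))
        stagnation[stuck] = 0
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest.reshape(K, 2), pbest_val.min()

heads, best_cost = adaptive_pso()
print("cluster-head positions:\n", heads, "\ntotal distance:", best_cost)
```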

Keywords: particle swarm optimization, adaptive PSO, comparison between PSO and A-PSO, energy efficient clustering

Procedia PDF Downloads 229
7530 Count of Trees in East Africa with Deep Learning

Authors: Nubwimana Rachel, Mugabowindekwe Maurice

Abstract:

Trees play a crucial role in maintaining biodiversity and providing various ecological services. Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques; deep learning makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research focuses on the application of deep learning techniques for automated tree detection and counting in both forest and non-forest areas using satellite imagery. The objective is to identify the most effective model for automated tree counting. We used different deep learning models such as YOLOv7, SSD, and UNET, along with Generative Adversarial Networks to generate synthetic samples for training, and other augmentation techniques including Random Resized Crop, AutoAugment, and Linear Contrast Enhancement. These models were trained and fine-tuned using satellite imagery to identify and count trees. The performance of the models was assessed through multiple trials; after training and fine-tuning, UNET demonstrated the best performance with a validation loss of 0.1211, validation accuracy of 0.9509, and validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the UNET model. It represents a significant contribution to the field by offering an efficient and precise alternative to conventional tree-counting methods.
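
One simple way to go from a segmentation model's output to a tree count is to threshold the predicted probability map and count connected components, as in the hedged sketch below. The probability map here is synthetic, and the thresholds are illustrative rather than values reported by the authors.

```python
import numpy as np
from scipy import ndimage

def count_trees(prob_map, threshold=0.5, min_pixels=4):
    """Turn a segmentation probability map (e.g. the output of a UNET-style
    model) into a tree count by thresholding and labelling connected blobs."""
    mask = prob_map > threshold
    labelled, n_blobs = ndimage.label(mask)
    # drop tiny blobs that are most likely noise rather than tree crowns
    sizes = ndimage.sum(mask, labelled, index=range(1, n_blobs + 1))
    return int(np.sum(np.asarray(sizes) >= min_pixels))

# synthetic probability map containing two "tree" blobs, for illustration
prob = np.zeros((20, 20))
prob[2:6, 2:6] = 0.9
prob[12:17, 10:15] = 0.8
print("estimated tree count:", count_trees(prob))
```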

Keywords: remote sensing, deep learning, tree counting, image segmentation, object detection, visualization

Procedia PDF Downloads 46
7529 Load Balancing Technique for Energy-Efficiency in Cloud Computing

Authors: Rani Danavath, V. B. Narsimha

Abstract:

Cloud computing is emerging as a new paradigm of large-scale distributed computing. It is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service-provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models. Load balancing is one of the main challenges in cloud computing; it is required to distribute the dynamic workload across multiple nodes to ensure that no single node is overloaded. It helps in the optimal utilization of resources, enhancing the performance of the system. The goal of load balancing is to minimize resource consumption and the carbon emission rate, which is the direct need of cloud computing. This determines the need for new metrics, energy consumption and carbon emission, for energy-efficient load balancing techniques in cloud computing. Existing load balancing techniques mainly focus on reducing overhead, service response time and improving performance, but none of them have considered energy consumption and carbon emission. Therefore, in this paper we introduce a technique oriented towards energy efficiency. This energy-efficient load balancing technique can be used to improve the performance of cloud computing by balancing the workload across all the nodes in the cloud with minimum resource utilization, in turn reducing energy consumption and carbon emission to an extent, which will help to achieve green computing.

Keywords: cloud computing, distributed computing, energy efficiency, green computing, load balancing, energy consumption, carbon emission

Procedia PDF Downloads 430
7528 Multi-Objective Optimization for Two-Sided Assembly Line Balancing

Authors: Srushti Bhatt, M. B. Kiran

Abstract:

The two-sided assembly line balancing problem has yet to be addressed in a simple way, although manufacturers must solve it to compete in the global market. Assigning tasks in an ordered sequence to obtain optimum system performance is known as the assembly line balancing problem, mainly classified as single-sided and two-sided. Balancing a two-sided assembly line, wherein task operations are performed on both sides of the line within a set of sequential workstations, is very challenging in manufacturing industries. The conflicting major objective in the two-sided assembly line balancing problem is either to maximize or minimize the performance parameters. The present study emphasizes combining different algorithms (ant colony, tabu search and the Petri net method) and compares their results for solving the two-sided assembly line balancing problem. The concept of multi-objective optimization of performance parameters is nowadays adopted to make a decision involving more than one objective function to be simultaneously optimized. The optimum result can be expected among the selected methods using multi-objective optimization. The performance parameters considered in the present study are the number of workstations, slickness and the smoothness index. The simulation of the assembly line balancing problem provides optimal results for classical and practical problems.

Keywords: ant colony, Petri net, tabu search, two-sided ALBP

Procedia PDF Downloads 264
7527 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the dangerous zone of marine accidents, has increased dramatically. Until the survivors (called 'targets') are found and saved, losses or damage may occur whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of people saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during the search-and-rescue missions after an accident at sea. We presume that in unknown environments, the AMR's search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered ('overlooked') by the AMR's sensors even though the AMR is in a close neighborhood of the target, and (ii) a 'false-positive' detection error, also known as a 'false alarm', in which a clean place or area is wrongly classified by the AMR's sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies. A specificity of the considered operational research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting any next location. We provide a fast approximation algorithm for finding the AMR route, adopting a greedy search strategy in which, at each step, the on-board computer computes a current search effectiveness value for each location in the zone and sequentially searches the location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
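
To illustrate the flavour of such a greedy strategy (not the authors' exact algorithm), the sketch below repeatedly searches the cell with the highest current detection probability per unit cost and applies a Bayesian update of the target-location probabilities after each unsuccessful look. False-negative ('overlook') probabilities are included, false alarms are omitted for brevity, and all numbers are illustrative.

```python
import numpy as np

# Illustrative 5-cell search zone: prior target probabilities, overlook
# ("false negative") probabilities and per-visit search costs.
prior    = np.array([0.10, 0.35, 0.20, 0.25, 0.10])
overlook = np.array([0.30, 0.20, 0.25, 0.10, 0.40])   # P(miss | target present, cell searched)
cost     = np.array([1.0, 2.0, 1.5, 2.5, 1.0])

def greedy_search(p, alpha, c, budget=20):
    """Greedy strategy: repeatedly look at the cell with the highest current
    'search effectiveness' = detection probability per unit cost, updating the
    posterior after every unsuccessful look."""
    p = p.copy()
    plan = []
    for _ in range(budget):
        effectiveness = p * (1 - alpha) / c
        j = int(effectiveness.argmax())
        plan.append(j)
        # Bayes update conditioned on the look at cell j being unsuccessful
        denom = 1 - p[j] * (1 - alpha[j])     # probability of an unsuccessful look
        new_p = p / denom
        new_p[j] = p[j] * alpha[j] / denom
        p = new_p
    return plan, p

plan, posterior = greedy_search(prior, overlook, cost)
print("visit order:", plan)
print("posterior after all unsuccessful looks:", np.round(posterior, 3))
```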

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 159
7526 Health of Riveted Joints with Active and Passive Structural Health Monitoring Techniques

Authors: Javad Yarmahmoudi, Alireza Mirzaee

Abstract:

Many active and passive structural health monitoring (SHM) techniques have been developed for detection of the defects of plates. Generally, riveted joints hold the plates together and their failure may create accidents. In this study, well known active and passive methods were modified for the evaluation of the health of the riveted joints between the plates. The active method generated Lamb waves and monitored their propagation by using lead zirconate titanate (PZT) disks. The signal was analyzed by using the wavelet transformations. The passive method used the Fiber Bragg Grating (FBG) sensors and evaluated the spectral characteristics of the signals by using Fast Fourier Transformation (FFT). The results indicated that the existing methods designed for the evaluation of the health of individual plates may be used for inspection of riveted joints with software modifications.
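
As a small sketch of the passive signal-processing path (an FBG time signal followed by an FFT), the following numpy snippet extracts the dominant frequency from a synthetic wavelength-shift signal; the sampling rate and signal content are invented for illustration only.

```python
import numpy as np

fs = 2000.0                       # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic FBG wavelength-shift signal: a 120 Hz structural response plus noise.
signal = 0.8 * np.sin(2 * np.pi * 120 * t) \
       + 0.05 * np.random.default_rng(0).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
print("dominant frequency (Hz):", freqs[spectrum.argmax()])
```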

Keywords: structural health monitoring, SHM, active SHM, passive SHM, fiber bragg grating sensor, lead zirconate titanate, PZT

Procedia PDF Downloads 309
7525 Simulation-Based Optimization of a Non-Uniform Piezoelectric Energy Harvester with Stack Boundary

Authors: Alireza Keshmiri, Shahriar Bagheri, Nan Wu

Abstract:

This research presents an analytical model for the development of an energy harvester with piezoelectric rings stacked at the boundary of the structure, based on the Adomian decomposition method. The model is applied to geometrically non-uniform beams to derive the steady-state dynamic response of the structure subjected to base-motion excitation and to efficiently harvest the subsequent vibrational energy. The in-plane polarization of the piezoelectric rings is employed to enhance the electrical power output. A parametric study of the proposed energy harvester with various design parameters is done to prepare the dataset required for optimization. Finally, a simulation-based optimization technique helps to find the optimum structural design with maximum efficiency. To solve the optimization problem, an artificial neural network is first trained to replace the simulation model, and then a genetic algorithm is employed to find the optimized design variables. Higher geometrical non-uniformity and a greater beam length lower the structure's natural frequency and generate a larger power output.
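
A hedged sketch of the surrogate-then-optimize idea: a small neural network is fitted to samples from a stand-in "simulator", and the optimum is then sought over the cheap surrogate, with random search standing in for the paper's genetic algorithm. The design variables, bounds and power function are all invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

def simulated_power(x):
    """Placeholder for the expensive harvester simulation: maps two design
    variables (taper ratio, beam length) to a power output. Purely illustrative."""
    taper, length = x[:, 0], x[:, 1]
    return np.sin(3 * taper) * length + 0.3 * length ** 2 - taper ** 2

# 1) Build a training set by sampling the "simulator".
X = rng.uniform([0.1, 0.5], [1.0, 2.0], size=(300, 2))
y = simulated_power(X)

# 2) Train an ANN surrogate to replace the simulation model.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, y)

# 3) Optimize over the cheap surrogate (random search stands in here for the GA).
candidates = rng.uniform([0.1, 0.5], [1.0, 2.0], size=(20000, 2))
pred = surrogate.predict(candidates)
best = candidates[pred.argmax()]
print("best design (taper, length):", best, "predicted power:", pred.max())
```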

Keywords: piezoelectricity, energy harvesting, simulation-based optimization, artificial neural network, genetic algorithm

Procedia PDF Downloads 107
7524 Heritage Buildings as an Inspiration for Energy Conservation under Solar Control: A Case Study of the Hadoti Region of India

Authors: Abhinav Chaturvedi, Joohi Chaturvedi, Renu Chaturvedi

Abstract:

With rapid urbanization and population growth, more buildings are required to meet the increasing demand for shelter. 80% of the world's population lives in developing countries, but adequate energy is supplied to only 30% of it. In India the situation is more difficult, as the majority of villages are still deprived of energy and one-third of Indian households do not have an energy supply, so there is a big gap between energy demand and supply. Moreover, India produces around 65% of its energy from non-renewable sources, 25% is imported in the form of oil and gas, and only 10% of the total is generated from other sources such as solar and wind power. Modern structures are big energy consumers: they use 40% of total energy to provide comfort conditions to users in the form of heating and cooling, 5% in building construction, 20% in transportation, 20% in industrial processes and 10% in other processes. If we minimize the heating, cooling and lighting loads of buildings, we can conserve a huge amount of energy for the future. Historically, buildings did not have artificial cooling or heating systems. These buildings, especially in the Hadoti region, which has semi-arid climatic conditions, employ solar passive design techniques, which is the reason for the comfort inside them. If we use appropriate elements of these heritage structures in present-day building design, we can find certain solutions to the energy crisis. The present paper describes various solar passive design techniques used in the past that could equally be used today to reduce energy consumption.

Keywords: energy conservation, Hadoti region, solar passive design techniques, semi-arid climatic condition

Procedia PDF Downloads 461
7523 Resource-Constrained Assembly Line Balancing Problems with Multi-Manned Workstations

Authors: Yin-Yann Chen, Jia-Ying Li

Abstract:

Assembly line balancing problems can be categorized into one-sided, two-sided, and multi-manned ones according to the number of operators deployed at the workstations. This study explores the balancing problem of a resource-constrained assembly line with multi-manned workstations. Resources include machines or tools in assembly lines, such as jigs, fixtures, and hand tools. A mathematical programming model was developed to carry out decision-making and planning in order to minimize the numbers of workstations, resources, and operators for achieving optimal production efficiency. To improve the solution-finding efficiency, a genetic algorithm (GA) and a simulated annealing algorithm (SA) were designed and developed in this study and applied to a practical case in car making. The results of the GA/SA and the mathematical programming were compared to verify their validity. Finally, analysis and comparison were conducted in terms of the target values, production efficiency, and deployment combinations provided by the algorithms, so that the results of this study can provide references for decision-making on production deployment.

Keywords: heuristic algorithms, line balancing, multi-manned workstation, resource-constrained

Procedia PDF Downloads 189
7522 Techniques of Construction Management in Civil Engineering

Authors: Mamoon M. Atout

Abstract:

The Middle East Gulf region has witnessed rapid growth and development in many areas over the last two decades. The development of the real-estate sector, the construction industry and infrastructure projects constitutes a major share of the development that has contributed to the advancement of the countries of the Gulf. Construction industry projects have been planned and managed by different types of experts who came from all over the world with different kinds of experience in construction management and the industry. Some of these projects were completed on time, while many were not, due to many accumulating factors. These accumulated factors are considered the principal reason for the problems experienced at the project construction stage, which reflect negatively on project success. Specific causes of delay have been identified by construction managers, and unexpected delays can be avoided through proper analysis and consideration of implications such as risk assessment and the analysis of many potential problems, to ensure that projects are delivered on time. Construction management implications are adopted and considered by project managers who have experience and knowledge in applying the techniques of engineering construction management. The aim of this research is to determine the benefits of these construction management practices for the construction team, and the level of consideration of the techniques and processes during the project development and construction phases needed to avoid any delay in the projects. It also aims to determine the factors that contribute to project completion delays when project managers are not well committed to their roles and responsibilities. The results of the analysis will determine the necessity of the applications required by the project team to avoid the causes of delays and help them deliver projects on time, e.g. verifying tender documents and quantities and preparing the construction method of the project.

Keywords: construction management, control process, cost control, planning and scheduling

Procedia PDF Downloads 230
7521 A Monopole Intravascular Antenna with Three Parasitic Elements Optimized for Higher Tesla MRI Systems

Authors: Mohammad Mohammadzadeh, Alireza Ghasempour

Abstract:

In this paper, a new design of monopole antenna is proposed that increases the contrast of intravascular magnetic resonance images by increasing the homogeneity of the intrinsic signal-to-noise ratio (ISNR) distribution around the antenna. The antenna is made of a coaxial cable with three parasitic elements. The lengths and positions of the elements are optimized by an improved genetic algorithm (IGA) for 1.5, 3, 4.7, and 7 Tesla MRI systems based on a defined cost function. Simulations were also conducted to verify the performance of the designed antenna. Our simulation results show that each time the IGA is executed, different values for the parasitic elements are obtained, such that the cost functions of those antennas are high. According to the obtained results, the IGA can also find the best values for the parasitic elements (regarding the cost function) in subsequent executions. Additionally, two-dimensional and one-dimensional maps of ISNR were drawn for the proposed antenna and compared to a previously published monopole antenna with one parasitic element at a frequency of 64 MHz inside a saline phantom. The results verified that, despite the decrease in ISNR, there is a considerable improvement in the homogeneity of the ISNR distribution of the proposed antenna, so that the product of the two increases.

Keywords: intravascular MR antenna, monopole antenna, parasitic elements, signal-to-noise ratio (SNR), genetic algorithm

Procedia PDF Downloads 284
7520 The Geometry of Natural Formation: An Application of Geometrical Analysis for the Complex Natural Order of Pomegranate

Authors: Anahita Aris

Abstract:

Geometry always plays a key role in natural structures, which can be a source of inspiration for architects and urban designers to create spaces. By understanding formative principles in nature, a variety of options can be provided that lead to freedom of formation. The main purpose of this paper is to analyze the geometrical order found in the pomegranate in order to find formative principles explaining its complex structure. The focus is on how the spherical arils of the pomegranate press together inside the fruit and fill the space as they expand during growth, creating a self-organized system in which each aril is unique in size, topology and shape. The main challenge of this paper is to use advanced architectural modeling techniques to discover these principles.

Keywords: advanced modeling techniques, architectural modeling, computational design, the geometry of natural formation, geometrical analysis, the natural order of pomegranate, voronoi diagrams

Procedia PDF Downloads 206
7519 Algorithms Used in Spatial Data Mining GIS

Authors: Vahid Bairami Rad

Abstract:

Extracting knowledge from spatial data such as GIS data is important for reducing the data and extracting information. Therefore, the development of new techniques and tools that support the human in transforming data into useful knowledge has been the focus of the relatively new and interdisciplinary research area 'knowledge discovery in databases'. Thus, we introduce a set of database primitives, or basic operations, for spatial data mining which are sufficient to express most of the spatial data mining algorithms from the literature. This approach has several advantages. Similar to the relational standard language SQL, the use of standard primitives will speed up the development of new data mining algorithms and will also make them more portable. We introduce a database-oriented framework for spatial data mining which is based on the concepts of neighborhood graphs and paths. A small set of basic operations on these graphs and paths is defined as database primitives for spatial data mining. Furthermore, techniques to efficiently support the database primitives with a commercial DBMS are presented.

Keywords: spatial data base, knowledge discovery database, data mining, spatial relationship, predictive data mining

Procedia PDF Downloads 442
7518 Graph Codes - 2D Projections of Multimedia Feature Graphs for Fast and Effective Retrieval

Authors: Stefan Wagenpfeil, Felix Engel, Paul McKevitt, Matthias Hemmje

Abstract:

Multimedia Indexing and Retrieval is generally designed and implemented by employing feature graphs. These graphs typically contain a significant number of nodes and edges to reflect the level of detail in feature detection. A higher level of detail increases the effectiveness of the results but also leads to more complex graph structures. However, graph-traversal-based algorithms for similarity are quite inefficient and computation intensive, especially for large data structures. To deliver fast and effective retrieval, an efficient similarity algorithm, particularly for large graphs, is mandatory. Hence, in this paper, we define a graph-projection into a 2D space (Graph Code) as well as the corresponding algorithms for indexing and retrieval. We show that calculations in this space can be performed more efficiently than graph-traversals due to a simpler processing model and a high level of parallelization. In consequence, we prove that the effectiveness of retrieval also increases substantially, as Graph Codes facilitate more levels of detail in feature fusion. Thus, Graph Codes provide a significant increase in efficiency and effectiveness (especially for Multimedia indexing and retrieval) and can be applied to images, videos, audio, and text information.

Keywords: indexing, retrieval, multimedia, graph algorithm, graph code

Procedia PDF Downloads 142
7517 An Optimal Algorithm for Finding (R, Q) Policy in a Price-Dependent Order Quantity Inventory System with Soft Budget Constraint

Authors: S. Hamid Mirmohammadi, Shahrazad Tamjidzad

Abstract:

This paper is concerned with a single-item continuous-review inventory system in which demand is stochastic and discrete. The budget consumed for purchasing the ordered items is not restricted, but an extra cost is incurred when it exceeds a specific value. The unit purchasing price depends on the quantity ordered under an all-units discount cost structure. In many actual systems, the budget, as a resource occupied by the purchased items, is limited, and the system is able to confront a resource shortage by incurring extra costs. Thus, considering the resource shortage costs as a part of the system costs, especially when the amount of the resource occupied by the purchased items is influenced by quantity discounts, is well motivated by practical concerns. In this paper, an optimization problem is formulated for finding the optimal (R, Q) policy when the system is influenced by a budget limitation and discount pricing simultaneously. Properties of the cost function are investigated, and then an algorithm based on a one-dimensional search procedure is proposed for finding an optimal (R, Q) policy which minimizes the expected system costs.
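
The paper's full model (stochastic demand, backorders, a reorder point R and a soft budget constraint) is not reproduced here, but as a much simplified illustration of how quantity discounts interact with a one-dimensional search over the order quantity, the sketch below runs the classical deterministic all-units-discount EOQ procedure on invented data: for each price tier, the economic order quantity is computed and pushed up to the tier's minimum if necessary, and the cheapest candidate is kept.

```python
import math

D = 1200.0      # annual demand (units/yr)          -- illustrative data
K = 50.0        # fixed ordering cost per order
i = 0.2         # holding-cost rate (fraction of unit price per year)

# all-units discount tiers: (minimum order quantity, unit price)
tiers = [(0, 10.0), (200, 9.5), (500, 9.0)]

def total_cost(Q, price):
    """Annual purchase + ordering + holding cost for order quantity Q."""
    holding = i * price
    return D * price + K * D / Q + holding * Q / 2.0

best = None
for lo, price in tiers:
    eoq = math.sqrt(2.0 * K * D / (i * price))
    Q = max(eoq, lo) if lo > 0 else eoq       # push EOQ up to the tier's minimum if needed
    cost = total_cost(Q, price)
    if best is None or cost < best[2]:
        best = (Q, price, cost)

print("best order quantity %.1f at unit price %.2f, annual cost %.2f" % best)
```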

Keywords: (R, Q) policy, stochastic demand, backorders, limited resource, quantity discounts

Procedia PDF Downloads 625
7516 A Comprehensive Study on Quality Assurance in Game Development

Authors: Maria Komal, Zaineb Khalil, Mehreen Sirshar

Abstract:

Due to recent technological advancements, games have become one of the most demanding applications. The gaming industry is growing rapidly, and the key to success in this industry is the development of good-quality games, which is a highly competitive issue. The ultimate goal of game developers is to provide player satisfaction by developing high-quality games. This research is a comprehensive survey of the techniques followed by game industries to ensure game quality. After an analysis of various techniques, it has been found that quality simulation according to ISO standards and play-test methods are used to ensure game quality. Because game development requires a cross-disciplinary team, an increasing trend towards distributed game development has been observed. This paper evaluates the strengths and weaknesses of the current methodologies used in the game industry and draws a conclusion. We have also proposed quality parameters which can be used as a heuristic framework to identify those attributes which have high testing priorities.

Keywords: game development, computer games, video games, gaming industry, quality assurance, playability, user experience

Procedia PDF Downloads 516
7515 Measuring Delay Using Software Defined Networks: Limitations, Challenges, and Suggestions for Openflow

Authors: Ahmed Alutaibi, Ganti Sudhakar

Abstract:

Providing better Quality of Service (QoS) to end users has been a challenging problem for researchers and service providers. Building applications that rely on best-effort network protocols hindered the adoption of guaranteed service parameters and, ultimately, of Quality of Service. The introduction of Software Defined Networking (SDN) opened the door for a new paradigm shift towards more controlled, programmable and configurable behavior. OpenFlow has been, and still is, the main implementation of the SDN vision. To facilitate better QoS for applications, the network must calculate and measure certain parameters. One of those parameters is the delay between the two ends of the connection. Using the power of SDN and knowledge of application and network behavior, SDN networks can adjust to different conditions and specifications. In this paper, we use the capabilities of SDN to implement multiple algorithms to measure end-to-end delay, not only inside the SDN network. The results of applying the algorithms in an emulated environment show that we can obtain measurements close to the emulated delay. The results also show that, depending on the algorithm, the load on the network and the controller can differ. In addition, the transport-layer handshake algorithm performs best among the tested algorithms. From the results and implementation, we show the limitations of OpenFlow and develop suggestions to solve them.
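
One widely used OpenFlow-based measurement (not necessarily identical to the algorithms evaluated in the paper) sends a timestamped probe as a PACKET_OUT through switch A, receives it back as a PACKET_IN from switch B, and subtracts half of each switch's ECHO round-trip time to isolate the link delay. The sketch below shows only that arithmetic, with invented numbers and no specific controller API assumed.

```python
def link_delay(probe_total, echo_rtt_a, echo_rtt_b):
    """Estimate the one-way delay of the link between switches A and B.

    probe_total : time from sending a PACKET_OUT via switch A until the matching
                  PACKET_IN arrives back from switch B (seconds)
    echo_rtt_a/b: ECHO_REQUEST/REPLY round-trip times between the controller
                  and each switch, used to subtract the control-channel latency
    """
    return probe_total - echo_rtt_a / 2.0 - echo_rtt_b / 2.0

# Illustrative numbers (seconds), e.g. as might be measured in a Mininet emulation
probe_total = 0.0132
echo_rtt_a, echo_rtt_b = 0.0040, 0.0036
print("estimated A-B link delay: %.1f ms"
      % (1000 * link_delay(probe_total, echo_rtt_a, echo_rtt_b)))
```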

Keywords: software defined networking, quality of service, delay measurement, openflow, mininet

Procedia PDF Downloads 150