Search results for: invasive weed optimization algorithm

5641 Clinical Validation of C-PDR Methodology for Accurate Non-Invasive Detection of Helicobacter pylori Infection

Authors: Suman Som, Abhijit Maity, Sunil B. Daschakraborty, Sujit Chaudhuri, Manik Pradhan

Abstract:

Background: Helicobacter pylori is a common and important human pathogen and the primary cause of peptic ulcer disease and gastric cancer. H. pylori infection is currently detected by both invasive and non-invasive methods, but their diagnostic accuracy is not satisfactory. Aim: To establish an optimal diagnostic cut-off value for the 13C-Urea Breath Test (13C-UBT) to detect H. pylori infection, and to evaluate a novel c-PDR methodology to overcome the inconclusive grey zone. Materials and Methods: All 83 subjects first underwent upper-gastrointestinal endoscopy followed by a rapid urease test and histopathology; on the basis of these results, 49 subjects were classified as H. pylori positive and 34 as negative. After an overnight fast, patients took 4 g of citric acid in 200 ml of water, and 10 minutes after ingestion of this test meal a baseline exhaled breath sample was collected. An oral dose of 75 mg of 13C-urea dissolved in 50 ml of water was then given, and breath samples were collected at 15-minute intervals for up to 90 minutes and analysed by laser-based, high-precision cavity-enhanced spectroscopy. Results: We studied the excretion kinetics of the 13C isotope enrichment (expressed as δDOB13C ‰) of the exhaled breath samples and found maximum enrichment around 30 minutes in H. pylori positive patients; this is attributed to acid-stimulated urease enzyme activity, with maximum acidification occurring within 30 minutes, whereas no significant isotopic enrichment was observed in H. pylori negative individuals. Using a Receiver Operating Characteristic (ROC) curve, an optimal diagnostic cut-off value of δDOB13C ‰ = 3.14 was determined at 30 minutes, exhibiting 89.16% accuracy. To overcome the grey-zone problem, we then explored the percentage dose of 13C recovered per hour, 13C-PDR (%/hr), and the cumulative percentage dose of 13C recovered, c-PDR (%), in the exhaled breath samples of the present 13C-UBT. We further explored the diagnostic accuracy of the 13C-UBT by constructing a ROC curve using c-PDR (%) values; an optimal cut-off value of c-PDR = 1.47% was estimated at 60 minutes, exhibiting 100% diagnostic sensitivity, 100% specificity and 100% accuracy for the detection of H. pylori infection. We also elucidated the gastric emptying process of the present 13C-UBT for H. pylori positive patients: the maximal emptying rate occurred at 36 minutes and the half-emptying time was found at 45 minutes. Conclusions: The present study demonstrates the value of the c-PDR methodology in overcoming the grey-zone problem of the 13C-UBT, allowing accurate determination of infection without risk of diagnostic error and making it a sufficiently robust and novel method for accurate and fast non-invasive diagnosis of H. pylori infection for large-scale screening purposes.
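
The cut-off values above come from standard ROC analysis of the breath-test measurements. As a rough illustration only, and not the authors' code, the sketch below computes a cumulative percentage dose recovered by trapezoidal integration of PDR values and picks the cut-off that maximizes Youden's J statistic, a common way of selecting a ROC operating point; the variable names and the Youden criterion are assumptions.

    import numpy as np

    def cumulative_pdr(times_min, pdr_per_hr):
        # c-PDR (%): trapezoidal integral of the PDR curve (%/hr) over time (hr).
        hours = np.asarray(times_min, dtype=float) / 60.0
        pdr = np.asarray(pdr_per_hr, dtype=float)
        return float(np.sum((pdr[1:] + pdr[:-1]) / 2.0 * np.diff(hours)))

    def optimal_cutoff(values, is_positive):
        # Cut-off maximizing Youden's J = sensitivity + specificity - 1.
        values = np.asarray(values, dtype=float)
        is_positive = np.asarray(is_positive, dtype=bool)
        best_j, best_cut = -1.0, None
        for cut in np.unique(values):
            pred = values >= cut
            sens = (pred & is_positive).sum() / is_positive.sum()
            spec = (~pred & ~is_positive).sum() / (~is_positive).sum()
            if sens + spec - 1 > best_j:
                best_j, best_cut = sens + spec - 1, cut
        return best_cut, best_j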

Keywords: 13C-Urea breath test, c-PDR methodology, grey zone, Helicobacter pylori

Procedia PDF Downloads 301
5640 Artificial Intelligence for Generative Modelling

Authors: Shryas Bhurat, Aryan Vashistha, Sampreet Dinakar Nayak, Ayush Gupta

Abstract:

As technology advances towards greater computational resources, there is a paradigm shift in how these resources are used to optimize the design process. This paper discusses the use of generative design with artificial intelligence to build better models that adapt operations such as selection, mutation, and crossover to generate results. The human mind tends toward the simplest approach when designing an object, whereas the intelligence learns from past designs and produces complex, optimized CAD models. Generative design takes the boundary conditions and iterates over multiple candidate solutions to arrive at a sturdy design with the most suitable parameters, saving a large amount of time and resources. The new production techniques at our disposal, such as additive manufacturing and 3D printing, allow resources to be saved and artistically engineered CAD models to be produced. The paper also discusses the genetic algorithm and the non-domination technique for choosing the right results, drawing on biomimicry, which has evolved over millions of years. The computer uses parametric models to generate new models in an iterative approach and uses cloud computing to store these iterative designs. The later part of the paper compares topology optimization, the technology previously used to generate CAD models, with generative design. Finally, the paper reports the performance of the algorithms and shows how they help in designing resource-efficient models.
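
The selection, mutation, and crossover operations mentioned above are the core of any genetic algorithm; a minimal sketch of such a loop is given below, with a toy objective standing in for a CAD evaluation. The population size, operators, and objective are illustrative assumptions, not the authors' implementation.

    import random

    def evolve(fitness, n_params, pop_size=30, generations=50,
               mutation_rate=0.1, mutation_scale=0.1):
        # Minimal generational GA: tournament selection, one-point crossover, Gaussian mutation.
        pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
        def pick(pop):
            # binary tournament selection (lower fitness = better design)
            a, b = random.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b
        for _ in range(generations):
            children = []
            while len(children) < pop_size:
                p1, p2 = pick(pop), pick(pop)
                cut = random.randrange(1, n_params)            # one-point crossover
                child = p1[:cut] + p2[cut:]
                child = [g + random.gauss(0, mutation_scale)   # Gaussian mutation
                         if random.random() < mutation_rate else g
                         for g in child]
                children.append(child)
            pop = children
        return min(pop, key=fitness)

    # hypothetical placeholder objective for a design evaluation:
    best = evolve(lambda x: sum((g - 0.7) ** 2 for g in x), n_params=5)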

Keywords: genetic algorithm, biomimicry, generative modeling, non-domination techniques

Procedia PDF Downloads 149
5639 Cylindrical Spacer Shape Optimization for Enhanced Inhalation Therapy

Authors: Shahab Azimi, Siamak Arzanpour, Anahita Sayyar

Abstract:

Asthma and chronic obstructive pulmonary disease (COPD) are common lung diseases with a significant global impact. Pressurized metered dose inhalers (pMDIs) are widely used for treatment, but they have limitations such as a high medication release speed, which results in drug deposition in the mouth or oral cavity, and difficulty achieving proper synchronization with the user's inhalation. Spacers are add-on devices that improve the efficiency of pMDIs by reducing the release speed and providing space for aerosol particles to break up into finer, medically effective particles. The aim of this study is to optimize the size and cylindrical shape of spacers to enhance their drug delivery performance. The study was based on fluid dynamics theory and employed Ansys software for simulation and optimization. The results showed that optimizing the spacer's geometry greatly influenced its performance and improved drug delivery. This study provides a foundation for future research on enhancing the efficiency of inhalation therapy for lung diseases.

Keywords: asthma, COPD, pressurized metered dose inhalers, spacers, CFD, shape optimization

Procedia PDF Downloads 97
5638 Two Stage Assembly Flowshop Scheduling Problem Minimizing Total Tardiness

Authors: Ali Allahverdi, Harun Aydilek, Asiye Aydilek

Abstract:

The two-stage assembly flowshop scheduling problem has many real-life applications. To the best of our knowledge, the two-stage assembly flowshop scheduling problem with a total tardiness performance measure and separate setup times has not been addressed so far, and hence it is addressed in this paper. Different dominance relations are developed and several algorithms are proposed. Extensive computational experiments are conducted to evaluate the proposed algorithms. The computational experiments show that one of the algorithms performs much better than the others. Moreover, the experiments show that the best performing algorithm also performs much better than the best existing algorithm for the case of zero setup times in the literature. Therefore, the proposed best performing algorithm can be used not only for problems with separate setup times but also for the case of zero setup times.
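
For readers unfamiliar with the objective, the sketch below evaluates the total tardiness of a given job sequence in a two-stage assembly flowshop: stage-1 machines produce the parts of each job in parallel, and a single assembly machine joins them, with optional anticipatory (separate) setup times. It is an illustrative evaluation of the problem, not one of the authors' algorithms, and the example data are made up.

    def total_tardiness(sequence, stage1_times, assembly_times, due_dates,
                        setup1=None, setup2=None):
        # stage1_times[j][m]: time of job j's part on stage-1 machine m (parts made in parallel);
        # assembly of job j starts once all its parts are ready and the assembly machine is set up.
        n_machines = len(stage1_times[0])
        setup1 = setup1 or [[0.0] * n_machines for _ in stage1_times]
        setup2 = setup2 or [0.0] * len(stage1_times)
        machine_free = [0.0] * n_machines
        assembly_free = 0.0
        tardiness = 0.0
        for j in sequence:
            parts_ready = 0.0
            for m in range(n_machines):
                machine_free[m] += setup1[j][m] + stage1_times[j][m]
                parts_ready = max(parts_ready, machine_free[m])
            start = max(parts_ready, assembly_free + setup2[j])   # setup may run before parts arrive
            assembly_free = start + assembly_times[j]
            tardiness += max(0.0, assembly_free - due_dates[j])
        return tardiness

    # three jobs on two stage-1 machines (illustrative numbers):
    t = total_tardiness([0, 2, 1], [[3, 2], [4, 1], [2, 2]], [2, 3, 1], [6, 12, 9])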

Keywords: scheduling, assembly flowshop, total tardiness, algorithm

Procedia PDF Downloads 344
5637 Structural Optimization of Shell and Arched Structures

Authors: Mitchell Gohnert, Ryan Bradley

Abstract:

This paper reviews some fundamental concepts of structural optimization, which are based on the type of material used in construction and the shape of the structure. The first step in structural optimization is to break down all internal forces in a structure into fundamental stresses, namely tensions and compressions. Knowing the stress patterns directs the selection of structural shapes and the most appropriate construction material. In selecting materials, it is essential to understand that all construction materials have flaws, or micro-cracks, which reduce the capacity of the material, especially when it is subjected to tension. Because of these material defects, many construction materials perform significantly better under compressive forces. Structures are also more efficient if bending moments are eliminated: bending produces high peak stresses at each face of the member, so substantially more material is required to resist it. The shape of the structure also has a profound effect on stress levels, and stress may be reduced dramatically simply by changing the shape. Catenary, triangular, and linear shapes are the fundamental structural forms for achieving optimal stress flow; when the natural flow of stress matches the shape of the structure, the optimal shape has been found.

Keywords: arches, economy of stresses, material strength, optimization, shells

Procedia PDF Downloads 116
5636 Optimal Voltage and Frequency Control of a Microgrid Using the Harmony Search Algorithm

Authors: Hossein Abbasi

Abstract:

Stability is as important a topic in planning and managing energy in microgrids as it is in conventional power systems, and voltage and frequency stability is one of the most important issues recently studied in microgrids. The objectives of this paper are the modelling and design of the components and optimal controllers for voltage and frequency control of an AC/DC hybrid microgrid under different disturbances. Since PI controllers have the advantages of a simple structure and easy implementation, they are designed and modeled in this paper. The harmony search (HS) algorithm is used to optimize the controllers' parameters. According to the results achieved, the PI controllers perform well in voltage and frequency control of the microgrid.
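
As a hedged illustration of how the HS algorithm can tune such gains, the sketch below implements a minimal harmony search; the bounds, parameter values, and the commented objective (an error index from a microgrid simulation) are assumptions for illustration, not the paper's setup.

    import random

    def harmony_search(objective, bounds, hm_size=20, iterations=500,
                       hmcr=0.9, par=0.3, bandwidth=0.05):
        # Improvise new vectors from harmony memory (HMCR), pitch-adjust them (PAR),
        # and replace the worst memory member whenever the new harmony is better.
        memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hm_size)]
        scores = [objective(h) for h in memory]
        for _ in range(iterations):
            new = []
            for d, (lo, hi) in enumerate(bounds):
                if random.random() < hmcr:                 # draw value from memory
                    x = random.choice(memory)[d]
                    if random.random() < par:              # pitch adjustment
                        x += random.uniform(-1, 1) * bandwidth * (hi - lo)
                else:                                      # random re-initialization
                    x = random.uniform(lo, hi)
                new.append(min(max(x, lo), hi))
            worst = max(range(hm_size), key=lambda i: scores[i])
            s = objective(new)
            if s < scores[worst]:
                memory[worst], scores[worst] = new, s
        best = min(range(hm_size), key=lambda i: scores[i])
        return memory[best], scores[best]

    # e.g. tuning (Kp, Ki) of a PI controller against a simulated error index (hypothetical):
    # gains, cost = harmony_search(simulate_microgrid_error_index, [(0.0, 10.0), (0.0, 5.0)])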

Keywords: frequency control, HS algorithm, microgrid, PI controller, voltage control

Procedia PDF Downloads 391
5635 Node Optimization in Wireless Sensor Network: An Energy Approach

Authors: Y. B. Kirankumar, J. D. Mallapur

Abstract:

Wireless Sensor Networks (WSNs) are an emerging technology with great potential for various low-cost applications, both for the general public and for defence. Wireless sensor communication allows nodes to join the network at random locations for a particular application, which often leaves much of the simulation area poorly covered, with few nodes located at far distances. The drawback of such a network is that extra energy is spent where nodes are densely clustered, since many nodes there cover only a short communication distance, while in sparsely covered regions the source node must spend additional energy relaying a packet through neighbours until it reaches the destination. The proposed work develops an Energy Efficient Node Placement Algorithm (EENPA) that places sensor nodes efficiently in the simulated area, with all nodes located equidistantly along radial paths so as to cover the maximum area. The total energy consumed by each node is lower than with random placement, because the burden is shared equally among fewer nodes at far locations and the nodes are distributed over the whole simulation area. The computed network lifetime is also longer than with random placement. Simulations were carried out in the QualNet simulator, and the results of the EENP algorithm were compared with those of random node placement.
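
To make the placement idea concrete, the sketch below generates equidistant node positions along evenly spaced radial paths around a central sink; the field radius, number of paths, and nodes per path are illustrative values, not parameters taken from the paper.

    import math

    def radial_placement(field_radius, n_paths, nodes_per_path):
        # Place nodes at equal spacing along evenly separated radial paths from a central sink.
        positions = []
        spacing = field_radius / nodes_per_path
        for p in range(n_paths):
            angle = 2 * math.pi * p / n_paths
            for k in range(1, nodes_per_path + 1):
                r = k * spacing                      # equidistant along the radius
                positions.append((r * math.cos(angle), r * math.sin(angle)))
        return positions

    nodes = radial_placement(field_radius=100.0, n_paths=8, nodes_per_path=5)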

Keywords: energy, WSN, wireless sensor network, energy approach

Procedia PDF Downloads 312
5634 Algorithms for Fast Computation of Pan Matrix Profiles of Time Series Under Unnormalized Euclidean Distances

Authors: Jing Zhang, Daniel Nikovski

Abstract:

We propose an approximation algorithm called LINKUMP to compute the Pan Matrix Profile (PMP) under the unnormalized l∞ distance (useful for value-based similarity search) using a double-ended queue and linear interpolation. The algorithm has time/space complexities comparable to those of the state-of-the-art algorithm for typical PMP computation under the normalized l₂ distance (useful for shape-based similarity search). We validate its efficiency and effectiveness through extensive numerical experiments and a real-world anomaly detection application.
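
The abstract does not spell out LINKUMP itself; the double-ended queue it relies on is presumably used along the lines of the classic monotonic-deque primitive sketched below, which returns sliding-window maxima in O(n) and underlies many l∞-style profile computations. This is background illustration only, not the published algorithm.

    from collections import deque

    def sliding_window_max(x, w):
        # O(n) sliding-window maximum with a monotonic double-ended queue of indices.
        dq, out = deque(), []
        for i, v in enumerate(x):
            while dq and x[dq[-1]] <= v:      # drop indices that can no longer be the max
                dq.pop()
            dq.append(i)
            if dq[0] <= i - w:                # drop indices that have left the window
                dq.popleft()
            if i >= w - 1:
                out.append(x[dq[0]])
        return out

    peaks = sliding_window_max([1, 3, 2, 5, 4, 1, 6], w=3)   # [3, 5, 5, 5, 6]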

Keywords: pan matrix profile, unnormalized Euclidean distance, double-ended queue, discord discovery, anomaly detection

Procedia PDF Downloads 247
5633 Pion/Muon Identification in a Nuclear Emulsion Cloud Chamber Using Neural Networks

Authors: Kais Manai

Abstract:

The main part of this work focuses on the study of pion/muon separation at low energy using a nuclear Emulsion Cloud Chamber (ECC) made of lead and nuclear emulsion films. The work consists of two parts: a particle reconstruction algorithm and a neural network that assigns to each reconstructed particle the probability of being a muon or a pion. The pion/muon separation algorithm has been optimized using a detailed Monte Carlo simulation of the ECC and tested on real data. The algorithm achieves a 60% muon identification efficiency with a pion misidentification rate smaller than 3%.

Keywords: nuclear emulsion, particle identification, tracking, neural network

Procedia PDF Downloads 506
5632 Optimization Design of Superposition Wave Form Automotive Exhaust Bellows Structure

Authors: Zhang Jianrun, He Tangling

Abstract:

The superposition wave form automotive exhaust bellows is a new type of bellows characterized by large compensation, good vibration isolation performance, and long life, and it is receiving increasing attention and application in automotive exhaust pipe systems. To address the lack of design methods for superposition wave form automotive exhaust bellows, this paper proposes a response surface parameter optimization method in which the fatigue life and vibration transmissibility of the bellows are set as objectives. Parametric modeling of the bellows structure is also adopted to make the design process efficient. The approach proposed in this paper provides a new way of designing superposition wave form automotive exhaust bellows and has good engineering application value.

Keywords: superposition wave form, exhaust bellows, optimization, vibration, fatigue life

Procedia PDF Downloads 96
5631 Optimization of Doubly Fed Induction Generator Equivalent Circuit Parameters by Direct Search Method

Authors: Mamidi Ramakrishna Rao

Abstract:

The doubly-fed induction generator (DFIG) is currently the machine of choice for many wind turbines. These generators, when connected to the grid through a converter, are subjected to varied power system conditions such as voltage variation, frequency variation, and short-circuit faults. Furthermore, many countries, including Canada, Germany, the UK, and Scotland, have distinct grid codes relating to wind turbines; accordingly, following network faults, wind turbines have to supply a definite reactive current. To satisfy these requirements, including the reactive current capability, an optimum electrical design is essential for the DFIG. This paper optimizes the equivalent circuit parameters of the electrical design for satisfactory DFIG performance. The direct search method has been used for the optimization. The variables selected include the electromagnetic core dimensions (diameters and stack length), slot dimensions, the radial air gap between stator and rotor, and the winding copper cross-section area. Optimization of a 2 MW DFIG has been executed separately for three objective functions: maximum reactive power capability (Case I), maximum efficiency (Case II), and minimum weight (Case III). The optimization analysis program considers voltage variations (10%), leading and lagging power factor (0.95), and speeds corresponding to slips from -0.3 to +0.3. The optimum designs obtained for the three objective functions were compared. It can be concluded that the direct search method of optimization helps in determining an optimum electrical design for each objective function, whether efficiency, reactive power capability, or weight minimization.

Keywords: direct search, DFIG, equivalent circuit parameters, optimization

Procedia PDF Downloads 256
5630 Differentiated Surgical Treatment of Patients With Nontraumatic Intracerebral Hematomas

Authors: Mansur Agzamov, Valery Bersnev, Natalia Ivanova, Istam Agzamov, Timur Khayrullaev, Yulduz Agzamova

Abstract:

Objectives. The treatment of hypertensive intracerebral hematoma (ICH) is controversial, and the advantage of one surgical method over another has not been established. Recent reports suggest a favorable effect of minimally invasive surgery. We conducted a small comparative study of different surgical methods. Methods. We analyzed the results of surgical treatment of 176 patients with intracerebral hematomas aged 41 to 78 years; 113 (64.2%) were men and 63 (35.8%) were women. Level of consciousness: conscious, 18; lethargy, 63; stupor, 55; moderate coma, 40. All patients underwent computed tomography (CT) of the brain on admission and during follow-up. The ICH was located in the putamen in 87 cases, the thalamus in 19, the mixed area in 50, and the lobar area in 20. Ninety-seven patients had an intraventricular hemorrhage component. The baseline volume of the ICH was measured according to a bedside method for measuring intracerebral hematoma volume on CT. Depending on the intervention, the patients were divided into three groups. Group 1 (90 patients) underwent open craniotomy. Level of consciousness: conscious, 11; lethargy, 33; stupor, 18; moderate coma, 18. The hemorrhage was located in the putamen in 51 cases, the thalamus in 3, the mixed area in 25, and the lobar area in 11. Group 2 (22 patients) underwent a smaller craniotomy with endoscopy-assisted evacuation. Level of consciousness: conscious, 4; lethargy, 9; stupor, 5; moderate coma, 4. The hemorrhage was located in the putamen in 5 cases, the thalamus in 15, and the mixed area in 2. Group 3 (64 patients) underwent minimally invasive removal of the intracerebral hematoma using an original device (Russian Federation patent No. 65382), a funnel cannula that is introduced into the hematoma cavity after special marking. Level of consciousness: conscious, 3; lethargy, 21; stupor, 22; moderate coma, 18. The hemorrhage was located in the putamen in 31 cases, the mixed area in 23, the thalamus in 1, and the lobar area in 9. Treatment results were evaluated with the Glasgow Outcome Scale. Results. The study showed that the results of surgical treatment in the three groups depended on the level of consciousness and the volume and localization of the hematoma. In group 1, good recovery was observed in 8 cases (8.9%), moderate disability in 22 (24.4%), severe disability in 17 (18.9%), and death in 43 (47.8%). In group 2, good recovery was observed in 7 cases (31.8%), moderate disability in 7 (31.8%), severe disability in 5 (29.7%), and death in 7 (31.8%). In group 3, good recovery was observed in 9 cases (14.1%), moderate disability in 17 (26.5%), severe disability in 19 (29.7%), and death in 19 (29.7%). Conclusions. The cannula method allowed open craniotomy to be avoided in the majority of patients with putaminal hematomas. The minimally invasive technique reduced postoperative mortality and improved the treatment outcomes of these patients.
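
The Methods refer to a bedside method for measuring hematoma volume on CT without naming it; the most widely used bedside estimate is the ABC/2 formula, which is assumed purely for illustration in the sketch below (the example measurements are made up).

    def abc_over_2(a_cm, b_cm, n_slices, slice_thickness_cm):
        # ABC/2 bedside estimate of hematoma volume (cm^3): A and B are the largest
        # perpendicular diameters on the reference CT slice, C = slice count x thickness.
        c_cm = n_slices * slice_thickness_cm
        return a_cm * b_cm * c_cm / 2.0

    volume = abc_over_2(a_cm=5.0, b_cm=4.0, n_slices=6, slice_thickness_cm=0.5)   # 30 cm^3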

Keywords: nontraumatic intracerebral hematoma, minimally invasive surgical technique, funnel cannula, differentiated surgical treatment

Procedia PDF Downloads 84
5629 Linear Frequency Modulation-Frequency Shift Keying Radar with Compressive Sensing

Authors: Ho Jeong Jin, Chang Won Seo, Choon Sik Cho, Bong Yong Choi, Kwang Kyun Na, Sang Rok Lee

Abstract:

In this paper, a radar signal processing technique using LFM-FSK (Linear Frequency Modulation-Frequency Shift Keying) is proposed to reduce the false alarm rate based on compressive sensing. The LFM-FSK method combines an FMCW (Frequency Modulated Continuous Wave) signal with FSK (Frequency Shift Keying), which has the advantage of suppressing the ghost phenomenon without a complicated CFAR (Constant False Alarm Rate) algorithm. Moreover, a parametric sparse algorithm applying compressive sensing, which restores signals efficiently from incomplete data samples, is also integrated, reducing the burden on the ADC in the radar receiver. A 24 GHz FMCW signal with FSK-modulated data is applied and tested in a real environment to verify the proposed algorithm together with compressive sensing.

Keywords: compressive sensing, LFM-FSK radar, radar signal processing, sparse algorithm

Procedia PDF Downloads 483
5628 Designing Floor Planning in 2D and 3D with an Efficient Topological Structure

Authors: V. Nagammai

Abstract:

Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors on a single chip. Advancing technology increases the complexity of IC manufacturing, which may alter power consumption and increase the size and latency of the design. The topology defines the number of connections in the network. In this project, the NoC topology is generated using the Atlas tool, which improves performance and, in turn, makes the determination of constraints effective. Routing is performed by the XY routing algorithm with wormhole flow control. In NoC topology generation, the values of power, area, and latency are predetermined. In previous work, placement, routing, and shortest path evaluation were performed using an algorithm called floor planning with cluster reconstruction and path allocation algorithm (FCRPA), using four 3x3 switches, six 4x4 switches, and two 5x5 switches. The use of 4x4 and 5x5 switches increases the power consumption and area of the block. To avoid this problem, this paper uses one 8x8 switch and four 3x3 switches. The paper uses IPRCA, which consists of three steps: placement, clustering, and shortest path evaluation. Placement is performed using min-cut placement, clustering is performed using an algorithm called cluster generation, and the shortest path is evaluated using Dijkstra's algorithm. The power consumption of each block is determined. The experimental results show that the area, power, and wire length are improved simultaneously.
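
Since the shortest paths are evaluated with Dijkstra's algorithm, a minimal heap-based version is sketched below; the example graph of switches and its hop weights are invented for illustration.

    import heapq

    def dijkstra(graph, source):
        # Shortest path lengths from source; graph maps node -> {neighbor: weight}.
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale heap entry
            for v, w in graph.get(u, {}).items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # hypothetical hop costs between switches in a generated NoC topology:
    hops = dijkstra({"s0": {"s1": 1, "s2": 2}, "s1": {"s3": 1}, "s2": {"s3": 1}}, "s0")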

Keywords: application-specific NoC, B*-tree representation, floor planning, T-tree representation

Procedia PDF Downloads 394
5627 Multi Tier Data Collection and Estimation, Utilizing Queue Model in Wireless Sensor Networks

Authors: Amirhossein Mohajerzadeh, Abolghasem Mohajerzadeh

Abstract:

In this paper, a target parameter is estimated with the desired precision in hierarchical wireless sensor networks (WSNs), while the proposed algorithm also tries to prolong the network lifetime as much as possible by using an efficient data collecting algorithm. The distribution function of the target parameter is considered unknown. Sensor nodes sense the environment and send the data to the base station, called the fusion center (FC), using a hierarchical data collecting algorithm, and the FC reconstructs the underlying phenomenon from the collected data. Considering the aggregation level x, the goal is to provide the essential infrastructure for finding the best value of the aggregation level in order to prolong the network lifetime as much as possible while the desired accuracy is guaranteed (the required sample size depends entirely on the desired precision). First, the sample size calculation algorithm is discussed; second, the average queue length based on the M/M[x]/1/K queue model is determined and used for the energy consumption calculation. Nodes can decrease the transmission cost by aggregating incoming data. Furthermore, the performance of the new algorithm is evaluated in terms of lifetime and estimation accuracy.
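
The abstract notes that the required sample size depends entirely on the desired precision but does not give the derivation; as a hedged illustration, the sketch below uses the textbook sample size for estimating a mean to within a given precision at a given confidence level, which may differ from the paper's formula.

    import math

    def required_sample_size(sigma, precision, z=1.96):
        # Textbook estimate n = (z * sigma / e)^2 for estimating a mean to within +/- e
        # at the confidence level implied by z (1.96 corresponds to roughly 95%).
        return math.ceil((z * sigma / precision) ** 2)

    n = required_sample_size(sigma=2.0, precision=0.5)   # 62 samples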

Keywords: aggregation, estimation, queuing, wireless sensor network

Procedia PDF Downloads 186
5626 Extraction of Road Edge Lines from High-Resolution Remote Sensing Images Based on Energy Function and Snake Model

Authors: Zuoji Huang, Haiming Qian, Chunlin Wang, Jinyan Sun, Nan Xu

Abstract:

In this paper, a strategy to extract double road edge lines from an acquired road stripe image is explored. The workflow is as follows: the road stripes are first acquired with a probabilistic boosting tree algorithm and a morphological algorithm, and the road centerlines are detected with a thinning algorithm, so that initial road edge lines can be obtained along the centerlines. We then refine the results where the local curvature of the centerlines varies strongly. Specifically, the energy function of an edge line is constructed from gradient features and spectral information, and Dijkstra's algorithm is used to optimize the initial road edge lines. A Snake model is constructed to solve the fracture problem at intersections, and a discrete dynamic programming algorithm is used to solve the model. After that, the final road network is obtained. Experimental results show that the strategy proposed in this paper can extract continuous and smooth road edge lines from high-resolution remote sensing images with an accuracy of 88% in our study area.

Keywords: road edge lines extraction, energy function, intersection fracture, Snake model

Procedia PDF Downloads 338
5624 Optimization of Operational Parameters and Design of an Electrochlorination System to Produce NaClO

Authors: Pablo Ignacio Hernández Arango, Niels Lindemeyer

Abstract:

Chlorine, as a sodium hypochlorite (NaClO) solution in water, is an effective, widely used, and economical substance for eliminating germs in water. The disinfection potential of chlorine lies in its ability to degrade the outer surfaces of bacterial cells and viruses. This contribution reports the main parameters of brine electrolysis for the production of NaClO, which is afterwards used for the disinfection of water for drinking or recreational purposes. The system design, based on titanium electrodes, was simulated, optimized, built, and tested. The process optimization considers the whole process, from the salt (NaCl) dilution tank, whose operating time is maximized, to the electrolysis itself, where chlorine production is maximized while energy and raw material (salt and water) consumption are reduced. One novel idea behind this optimization is the modification of the flow pattern inside the electrochemical reactors: the increased turbulence and residence time have a positive impact on the operating figures. The operational parameters defined in this study were compared and benchmarked against those of commercial systems in order to validate the relevance of the results.
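
As rough background rather than the paper's simulation model, the chlorine output of brine electrolysis can be bounded with Faraday's law, as in the sketch below; the current, run time, and current efficiency are illustrative numbers only.

    def chlorine_production_g(current_a, hours, current_efficiency=0.9):
        # Faraday's-law estimate of equivalent chlorine (as Cl2) generated by electrolysis:
        # m = eta * I * t * M / (z * F), with z = 2 electrons per Cl2 molecule.
        F = 96485.0      # C/mol
        M_CL2 = 70.9     # g/mol
        charge_c = current_a * hours * 3600.0
        return current_efficiency * charge_c * M_CL2 / (2.0 * F)

    grams = chlorine_production_g(current_a=20.0, hours=1.0)   # about 23.8 g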

Keywords: electrolysis, water disinfection, sodium hypochlorite, process optimization

Procedia PDF Downloads 128
5624 Optimal Design of Composite Cylindrical Shell Based on Nonlinear Finite Element Analysis

Authors: Haider M. Alsaeq

Abstract:

The present research attempts to determine the best configuration of composite cylindrical shells of the sandwich type, i.e., the lightest design of such shells required to sustain a certain load over a certain area. The optimization is based on elastic-plastic, geometrically nonlinear, incremental-iterative finite element analysis. The nine-node degenerated curved shell element is used, with five degrees of freedom specified at each nodal point and a layered model. The geometrical nonlinearity is formulated using the well-known total Lagrangian principle. The structural optimization problem, treated as a constrained nonlinear optimization, is solved with the so-called Modified Hooke and Jeeves method, taking the weight of the shell as the objective function with stress and geometrical constraints. It was concluded that the optimum design of a composite sandwich cylindrical shell with a rigid polyurethane foam core and steel facing occurs when the area covered by the shell becomes almost square, with a ratio of core thickness to facing thickness between 45 and 49, while the optimum height-to-length ratio varies from 0.03 to 0.08 depending on the aspect ratio of the shell and its boundary conditions.
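
For reference, the classic Hooke and Jeeves pattern search alternates exploratory moves along each variable with a pattern move in the successful direction, as in the sketch below; the paper's modified, constrained variant and its shell-weight objective are not reproduced here, and the toy objective is purely illustrative.

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
        # Classic (unconstrained) Hooke-Jeeves pattern search.
        def explore(base, s):
            x = list(base)
            for i in range(len(x)):
                for d in (+s, -s):
                    trial = list(x)
                    trial[i] += d
                    if f(trial) < f(x):
                        x = trial
                        break
            return x
        x = list(x0)
        for _ in range(max_iter):
            if step < tol:
                break
            new = explore(x, step)
            if f(new) < f(x):
                pattern = [2 * n - o for n, o in zip(new, x)]   # pattern move
                candidate = explore(pattern, step)
                x = candidate if f(candidate) < f(new) else new
            else:
                step *= shrink                                  # no improvement: refine mesh
        return x

    # toy stand-in for the shell weight evaluation (hypothetical):
    best = hooke_jeeves(lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2, [0.0, 0.0])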

Keywords: composite structure, cylindrical shell, optimization, non-linear analysis, finite element

Procedia PDF Downloads 391
5623 Brand Content Optimization: A Major Challenge for Sellers on Marketplaces

Authors: Richardson Ciguene, Bertrand Marron, Nicolas Habert

Abstract:

Today, more and more consumers purchase their products and services online. At the same time, the penetration rate of very small and medium-sized businesses on marketplaces continues to increase, which directly intensifies competition between sellers. Thus, only the best-optimized offers are ranked well by the algorithms and are visible to consumers. However, it is almost impossible to know all the brand content rules and criteria established by marketplaces, knowledge that is essential for optimizing product sheets, especially since these rules change constantly. In this paper, we detail this question of brand content optimization using the case of Amazon in order to capture the scientific dimension behind the subject. In a second step, we present the genesis of our research project, DEEPERFECT, which aims to develop original methods and effective tools to help sellers on marketplaces optimize their branded content.

Keywords: e-commerce, scoring, marketplace, Amazon, brand content, product sheets

Procedia PDF Downloads 123
5622 Pudhaiyal: A Maze-Based Treasure Hunt Game for Tamil Words

Authors: Aarthy Anandan, Anitha Narasimhan, Madhan Karky

Abstract:

Word-based games are popular for helping people improve their vocabulary. Games like word search and crosswords provide a smart way of building vocabulary; they are fun to play but also educational, and they genuinely help in learning a language. Finding the words in a word search puzzle helps the player remember words more easily and also helps in learning their spellings. In this paper, we present a tile distribution algorithm for 'Pudhaiyal', a maze-based treasure hunt game for Tamil words, which describes how words can be distributed horizontally, vertically, or diagonally in a 10 x 10 grid. Along with the tile distribution algorithm, we also present an algorithm for the scoring model of the game. The proposed game has been tested with 20,000 Tamil words.
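
The published tile distribution and scoring algorithms are not detailed in the abstract; the sketch below shows a generic way to place words horizontally, vertically, or diagonally in a 10 x 10 grid and fill the remaining tiles, offered only as an illustration (the filler alphabet, retry count, and skip-on-failure behaviour are assumptions).

    import random

    def place_words(words, size=10, tries=100, filler="abcdefghijklmnopqrstuvwxyz"):
        # Place each word left-to-right, top-to-bottom, or diagonally, reusing matching
        # letters; words that cannot be placed after `tries` attempts are skipped.
        grid = [[None] * size for _ in range(size)]
        directions = [(0, 1), (1, 0), (1, 1)]
        for word in words:
            for _ in range(tries):
                dr, dc = random.choice(directions)
                r = random.randrange(size - dr * (len(word) - 1))
                c = random.randrange(size - dc * (len(word) - 1))
                cells = [(r + dr * i, c + dc * i) for i in range(len(word))]
                if all(grid[y][x] in (None, word[i]) for i, (y, x) in enumerate(cells)):
                    for i, (y, x) in enumerate(cells):
                        grid[y][x] = word[i]
                    break
        # fill unused tiles with random letters (a Tamil letter set would be used in practice)
        return [[ch if ch else random.choice(filler) for ch in row] for row in grid]

    board = place_words(["maze", "tile", "hunt"])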

Keywords: Pudhaiyal, Tamil word game, word search, scoring, maze, algorithm

Procedia PDF Downloads 442
5621 Comparative Study of IC and Perturb and Observe Method of MPPT Algorithm for Grid Connected PV Module

Authors: Arvind Kumar, Manoj Kumar, Dattatraya H. Nagaraj, Amanpreet Singh, Jayanthi Prattapati

Abstract:

The purpose of this paper is to study and compare two maximum power point tracking (MPPT) algorithms in a photovoltaic simulation system, presenting a simulation study of MPPT for photovoltaic systems using the perturb and observe algorithm and the incremental conductance algorithm. MPPT plays an important role in photovoltaic systems because it maximizes the power output from a PV system for a given set of conditions, thereby maximizing array efficiency and minimizing the overall system cost. Since the maximum power point (MPP) varies with irradiation and cell temperature, appropriate algorithms must be used to track the MPP and keep the system operating at it. MATLAB/Simulink is used to establish a model of a photovoltaic system with the MPPT function; this system is developed by combining models of the solar PV module and a DC-DC boost converter, and it is simulated under different climate conditions. The simulation results show that the photovoltaic simulation system can track the maximum power point accurately.
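
The perturb and observe logic compared in the paper reduces to the decision rule sketched below: keep perturbing the operating voltage in the direction that increased power, and reverse otherwise (incremental conductance instead compares dI/dV with -I/V). The step size and interface are illustrative, not the authors' Simulink implementation.

    def perturb_and_observe(v, p, prev_v, prev_p, step=0.5):
        # One P&O iteration: returns the next voltage reference for the DC-DC converter.
        dv, dp = v - prev_v, p - prev_p
        if dp == 0:
            return v                                  # power unchanged: hold the operating point
        if dp > 0:
            return v + step if dv > 0 else v - step   # keep moving the same way
        return v - step if dv > 0 else v + step       # power dropped: reverse direction

    v_next = perturb_and_observe(v=30.5, p=180.0, prev_v=30.0, prev_p=175.0)   # 31.0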

Keywords: incremental conductance algorithm, perturb and observe algorithm, photovoltaic system, simulation results

Procedia PDF Downloads 556
5620 Relay-Augmented Bottleneck Throughput Maximization for Correlated Data Routing: A Game Theoretic Perspective

Authors: Isra Elfatih Salih Edrees, Mehmet Serdar Ufuk Türeli

Abstract:

In this paper, an energy-aware method is presented, integrating energy-efficient relay-augmented techniques for correlated data routing with the goal of optimizing bottleneck throughput in wireless sensor networks. The system tackles the dual challenge of throughput optimization while considering sensor network energy consumption. A unique routing metric has been developed to enable throughput maximization while minimizing energy consumption by utilizing data correlation patterns. The paper introduces a game theoretic framework to address the NP-complete optimization problem inherent in throughput-maximizing correlation-aware routing with energy limitations. By creating an algorithm that blends energy-aware route selection strategies with the best reaction dynamics, this framework provides a local solution. The suggested technique considerably raises the bottleneck throughput for each source in the network while reducing energy consumption by choosing the best routes that strike a compromise between throughput enhancement and energy efficiency. Extensive numerical analyses verify the efficiency of the method. The outcomes demonstrate the significant decrease in energy consumption attained by the energy-efficient relay-augmented bottleneck throughput maximization technique, in addition to confirming the anticipated throughput benefits.

Keywords: correlated data aggregation, energy efficiency, game theory, relay-augmented routing, throughput maximization, wireless sensor networks

Procedia PDF Downloads 82
5619 Engineering Optimization of Flexible Energy Absorbers

Authors: Reza Hedayati, Meysam Jahanbakhshi

Abstract:

Elastic energy absorbers consisting of a ring-like plate and springs can be a good choice for increasing the impact duration during an accident. In the current project, an energy absorber system is optimized using four optimization methods: Kuhn-Tucker, Sequential Linear Programming (SLP), Concurrent Subspace Design (CSD), and Pshenichny-Lim-Belegundu-Arora (PLBA). Solution time, convergence, program length, and accuracy of the results were considered to find the best solution algorithm. The results showed the superiority of PLBA over the other algorithms.

Keywords: Concurrent Subspace Design (CSD), Kuhn-Tucker, Pshenichny-Lim-Belegundu-Arora (PLBA), Sequential Linear Programming (SLP)

Procedia PDF Downloads 399
5618 Heart Murmurs and Heart Sounds Extraction Using an Algorithm Process Separation

Authors: Fatima Mokeddem

Abstract:

The phonocardiogram (PCG) is a physiological signal that reflects the mechanical activity of the heart and is a promising tool for researchers in this field because it is full of indications and useful information for medical diagnosis. PCG segmentation is a basic step in exploiting this signal. This paper therefore presents an algorithm for separating heart sounds and heart murmurs, when murmurs are present, so that they can be used in several applications and in heart sound analysis. The separation process presented here is founded on three essential steps: filtering, envelope detection, and heart sound segmentation. The algorithm separates the PCG signal into S1 and S2 and extracts cardiac murmurs.
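
The abstract names filtering, envelope detection, and segmentation without giving formulas; as a hedged illustration of the last two steps, the sketch below computes a normalized Shannon-energy envelope and thresholds it (the window length and threshold fraction are assumptions, and the filtering stage is omitted).

    import numpy as np

    def shannon_envelope(pcg, win=200):
        # Normalized Shannon-energy envelope smoothed by a moving average; its peaks
        # are a common starting point for locating S1 and S2.
        x = np.asarray(pcg, dtype=float)
        x = x / (np.max(np.abs(x)) + 1e-12)
        energy = -(x ** 2) * np.log(x ** 2 + 1e-12)
        kernel = np.ones(win) / win
        return np.convolve(energy, kernel, mode="same")

    def mark_heart_sounds(envelope, fraction=0.3):
        # Samples above a fraction of the envelope peak are candidate heart sounds;
        # the remaining intervals can then be inspected for murmurs.
        return envelope > fraction * envelope.max()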

Keywords: phonocardiogram signal, filtering, envelope detection, murmurs, heart sounds

Procedia PDF Downloads 141
5617 Association Rules Mining and NOSQL Oriented Document in Big Data

Authors: Sarra Senhadji, Imene Benzeguimi, Zohra Yagoub

Abstract:

Big Data refers to recent technologies for manipulating voluminous, unstructured data sets from multiple sources, and NoSQL databases have emerged to handle the problem of unstructured data. Association rule mining is one of the popular data mining techniques for extracting hidden relationships from transactional databases, and the algorithm for finding association dependencies maps well onto MapReduce. The goal of our work is to reduce the time needed to generate frequent itemsets by using MapReduce and a document-oriented NoSQL database. A comparative study is given to evaluate the performance of our algorithm against the classical Apriori algorithm.
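
For readers who need the baseline, a plain single-machine Apriori for frequent itemsets is sketched below; the MapReduce/MongoDB distribution that the paper actually studies is not reproduced, and the example transactions are made up.

    from itertools import combinations

    def apriori(transactions, min_support):
        # Frequent itemset mining with Apriori candidate generation and pruning.
        transactions = [frozenset(t) for t in transactions]
        n = len(transactions)
        support = lambda items: sum(items <= t for t in transactions) / n
        singletons = {frozenset([i]) for t in transactions for i in t}
        frequent = {}
        current = {i for i in singletons if support(i) >= min_support}
        k = 1
        while current:
            frequent.update({i: support(i) for i in current})
            k += 1
            candidates = {a | b for a in current for b in current if len(a | b) == k}
            # prune: every (k-1)-subset of a surviving candidate must already be frequent
            current = {c for c in candidates
                       if all(frozenset(s) in frequent for s in combinations(c, k - 1))
                       and support(c) >= min_support}
        return frequent

    freq = apriori([{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}], min_support=0.5)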

Keywords: Apriori, Association rules mining, Big Data, Data Mining, Hadoop, MapReduce, MongoDB, NoSQL

Procedia PDF Downloads 162
5616 Optimizing of Machining Parameters of Plastic Material Using Taguchi Method

Authors: Jumazulhisham Abdul Shukor, Mohd. Sazali Said, Roshanizah Harun, Shuib Husin, Ahmad Razlee Ab Kadir

Abstract:

This paper applies the Taguchi optimization method to determine the best machining parameters for a pocket milling process on polypropylene (PP) on a CNC milling machine, with surface roughness as the response and carbide inserts as the cutting tool. Three machining parameters, namely speed, feed rate, and depth of cut, are investigated at three levels each (low, medium, and high) using a Taguchi orthogonal array. The settings of the machining parameters were determined using the Taguchi method, and the signal-to-noise (S/N) ratio was assessed to define the optimal levels and to predict the effect of the assigned parameters on surface roughness based on the L9 array. The final experimental outcomes are presented to verify that the optimization parameters recommended by the manufacturer are accurate.
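
For a smaller-the-better response such as surface roughness, the Taguchi S/N ratio and the per-factor level selection can be computed as in the sketch below; the L9 level assignments and measurements passed in would be placeholders, not the paper's data.

    import math

    def sn_smaller_is_better(measurements):
        # Taguchi S/N ratio for a smaller-the-better response: -10 * log10(mean(y^2)).
        return -10.0 * math.log10(sum(y * y for y in measurements) / len(measurements))

    def best_levels(run_levels, sn_ratios):
        # Average the S/N ratio per factor level over the L9 runs and pick, for each
        # factor (e.g. speed, feed rate, depth of cut), the level with the highest mean S/N.
        n_factors = len(run_levels[0])
        best = []
        for f in range(n_factors):
            per_level = {}
            for run, sn in zip(run_levels, sn_ratios):
                per_level.setdefault(run[f], []).append(sn)
            best.append(max(per_level, key=lambda lvl: sum(per_level[lvl]) / len(per_level[lvl])))
        return best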

Keywords: inserts, milling process, signal-to-noise (S/N) ratio, surface roughness, Taguchi Optimization Method

Procedia PDF Downloads 637
5615 Stochastic Programming and C-Somga: Animal Ration Formulation

Authors: Pratiksha Saxena, Dipti Singh, Neha Khanna

Abstract:

A self-organizing migrating genetic algorithm (C-SOMGA) is developed for animal diet formulation. This paper presents animal diet formulation using stochastic programming and a genetic algorithm. Tri-objective models for cost minimization, shelf-life maximization, and nutrient maximization are developed; these objectives are achieved by a combination of stochastic programming and C-SOMGA. Stochastic programming is used to introduce nutrient variability into the animal diet, while the self-organizing migrating genetic algorithm provides an exact and quick solution and presents an innovative approach to the successful application of soft computing techniques in the area of animal diet formulation.

Keywords: animal feed ration, feed formulation, linear programming, stochastic programming, self-migrating genetic algorithm, C-SOMGA technique, shelf life maximization, cost minimization, nutrient maximization

Procedia PDF Downloads 443
5614 Application of Groundwater Model for Optimization of Denitrification Strategies to Minimize Public Health Risk

Authors: Mukesh A. Modi

Abstract:

High nitrate concentrations in the groundwater of unconfined aquifers have become a serious public health risk at the global scale. Various anthropogenic activities on agricultural and urban land over alluvial soil have been found responsible for the increase of nitrate in groundwater. The present study was designed to identify suitable denitrification strategies to minimize the effects of high nitrate in groundwater near the Mahi River in the Vadodara block, Gujarat. Eleven wells of the Jal Jeevan Mission, Ministry of Jal Shakti, along with three observation wells of the Gujarat Water Resources Development Corporation, were used, covering a duration of 21 years. The MODFLOW and MT3DMS codes were used to simulate the solute transport phenomena and were then applied to the optimization. The current research goes one step further by optimizing various denitrification strategies through simulation with the model. The in-situ and ex-situ denitrification strategies NAS (No Action Scenario), CAS (Crop Alternation Scenario), PS (Phytoremediation Scenario), and CAS + PS (Crop Alternation Scenario + Phytoremediation Scenario) were selected for the optimization. The groundwater model simulates the most suitable denitrification strategy considering the hydrogeological characteristics at the targeted well.

Keywords: groundwater, high nitrate, MODFLOW, MT3DMS, optimization, denitrification strategy

Procedia PDF Downloads 31
5613 Simulation of Obstacle Avoidance for Multiple Autonomous Vehicles in a Dynamic Environment Using Q-Learning

Authors: Andreas D. Jansson

Abstract:

The availability of inexpensive yet competent hardware allows for an increased level of automation and self-optimization in the context of Industry 4.0. However, such agents require high-quality information about their surroundings along with a robust strategy for collision avoidance, as they may otherwise cause expensive damage to equipment or other agents. Manually defining a strategy that covers all possibilities is both time-consuming and counter-productive given the capabilities of modern hardware. This paper explores the idea of a model-free, self-optimizing obstacle avoidance strategy for multiple autonomous agents in a simulated dynamic environment using the Q-learning algorithm.
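
As a reminder of the underlying update rule rather than the paper's simulation, tabular Q-learning with an epsilon-greedy policy looks like the sketch below; the state encoding, action set, and learning parameters named here are assumptions for illustration.

    import random
    from collections import defaultdict

    def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.95):
        # One tabular update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

    def epsilon_greedy(Q, state, actions, epsilon=0.1):
        # Explore with probability epsilon, otherwise take the best known action.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    Q = defaultdict(float)
    actions = ["up", "down", "left", "right", "stay"]   # hypothetical action set per agent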

Keywords: autonomous vehicles, industry 4.0, multi-agent system, obstacle avoidance, Q-learning, simulation

Procedia PDF Downloads 138
5612 Execution Time Optimization of Workflow Network with Activity Lead-Time

Authors: Xiaoping Qiu, Binci You, Yue Hu

Abstract:

The execution time of a workflow network has an important effect on the efficiency of the business process. In this paper, the execution time of an activity is divided into service time and waiting time, and the lead time is then extracted from the waiting time. The execution time formulas of the three basic structures in a workflow network are deduced on the basis of the activity lead time. Taking an e-commerce logistics process as an example, appropriate lead times are inserted for key activities using a Petri net, and an execution time optimization model is built to minimize the waiting time under time-cost constraints. A solution program written in VC++ 6.0 is then compiled to obtain the optimal solution, which reduces the waiting time of the key activities in the workflow and verifies the role of lead time in the timeliness of e-commerce logistics.

Keywords: electronic business, execution time, lead time, optimization model, Petri net, time workflow network

Procedia PDF Downloads 176