Search results for: computational neuroscience
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2138

1538 A Computational Approach for the Prediction of Relevant Olfactory Receptors in Insects

Authors: Zaide Montes Ortiz, Jorge Alberto Molina, Alejandro Reyes

Abstract:

Insects are extremely successful organisms. A sophisticated olfactory system is in part responsible for their survival and reproduction. The detection of volatile organic compounds can positively or negatively affect many behaviors in insects. Compounds such as carbon dioxide (CO2), ammonium, indole, and lactic acid are essential for many species of mosquitoes like Anopheles gambiae in order to locate vertebrate hosts. For instance, in A. gambiae, the olfactory receptor AgOR2 is strongly activated by indole, which accounts for almost 30% of human sweat. On the other hand, in some insects of agricultural importance, the detection and identification of pheromone receptors (PRs) in lepidopteran species has become a promising field for integrated pest management. For example, with the disruption of the pheromone receptor BmOR1, mediated by transcription activator-like effector nucleases (TALENs), the sensitivity to bombykol was completely removed, affecting the pheromone-source searching behavior in male moths. Thus, the detection and identification of olfactory receptors in the genomes of insects is fundamental to improving our understanding of ecological interactions and to providing alternatives for integrated pest and vector management. Hence, the objective of this study is to propose a bioinformatic workflow to enhance the detection and identification of potential olfactory receptors in the genomes of relevant insects. Applying hidden Markov models (HMMs) and different computational tools, potential candidates for pheromone receptors in Tuta absoluta were obtained, as well as potential carbon dioxide receptors in Rhodnius prolixus, the main vector of Chagas disease. This study showed the validity of a bioinformatic workflow with the potential to improve the identification of certain olfactory receptors in different orders of insects.
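
As a concrete illustration of the kind of step such a workflow relies on, the sketch below scans a predicted insect proteome with a profile HMM using the HMMER command-line tool and keeps the significant hits. The file names, the choice of profile (e.g. the Pfam 7tm_6 odorant-receptor family), and the E-value cutoff are illustrative assumptions, not the authors' exact settings.

```python
# Hedged sketch: profile-HMM scan of a predicted proteome with HMMER's
# hmmsearch, followed by a simple parse of the tabular output.
import subprocess

def hmm_scan(hmm_profile, proteome_fasta, tbl_out="hits.tbl", evalue=1e-5):
    # hmmsearch writes one line per target sequence to the --tblout file
    subprocess.run(["hmmsearch", "--tblout", tbl_out, "-E", str(evalue),
                    hmm_profile, proteome_fasta], check=True)
    hits = []
    with open(tbl_out) as fh:
        for line in fh:
            if line.startswith("#"):          # skip comment/header lines
                continue
            fields = line.split()
            hits.append((fields[0], float(fields[4])))  # (target id, full-sequence E-value)
    return hits

# Example (hypothetical file names):
# candidates = hmm_scan("7tm_6.hmm", "Rhodnius_prolixus_proteins.faa")
```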

Keywords: bioinformatic workflow, insects, olfactory receptors, protein prediction

Procedia PDF Downloads 150
1537 Fast and Non-Invasive Patient-Specific Optimization of Left Ventricle Assist Device Implantation

Authors: Huidan Yu, Anurag Deb, Rou Chen, I-Wen Wang

Abstract:

The use of left ventricle assist devices (LVADs) has been a proven and effective therapy for patients with severe end-stage heart failure. Due to the limited availability of suitable donor hearts, LVADs will probably become the alternative solution for patients with heart failure in the near future. While the LVAD is being continuously improved toward enhanced performance, increased device durability, and reduced size, a better understanding of implantation management becomes critical in order to achieve better long-term blood supply and fewer post-surgical complications such as thrombus formation. Important issues related to LVAD implantation include the location of the outflow grafting (OG), the angle of the OG, the combination of LVAD and native heart pumping, uniform or pulsatile flow at the OG, etc. We have hypothesized that an optimal implantation of the LVAD is patient-specific. To test this hypothesis, we employ a novel in-house computational modeling technique, named InVascular, to conduct a systematic evaluation of cardiac output at the aortic arch together with other pertinent hemodynamic quantities for each patient under various implantation scenarios, aiming to identify an optimal implantation strategy. InVascular is a powerful computational modeling technique that integrates unified mesoscale modeling for both image segmentation and fluid dynamics with cutting-edge GPU parallel computing. It first segments the aortic artery from the patient’s CT image, then seamlessly feeds the extracted morphology, together with the velocity waveform from the echo ultrasound image of the same patient, to the computational model to quantify the 4-D (time + space) velocity and pressure fields. Using one NVIDIA Tesla K40 GPU card, InVascular completes a computation from CT image to 4-D hemodynamics within 30 minutes, so it has great potential for massive numerical simulation and analysis. The systematic evaluation for one patient includes three OG anastomosis sites (ascending aorta, descending thoracic aorta, and subclavian artery), three combinations of LVAD and native heart pumping (1:1, 1:2, and 1:3), three angles of OG anastomosis (inclined upward, perpendicular, and inclined downward), and two LVAD inflow conditions (uniform and pulsatile). The optimal LVAD implantation is suggested through a comprehensive analysis of the cardiac output and related hemodynamics from the simulations over the fifty-four scenarios. To confirm the hypothesis, five random patient cases will be evaluated.
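
The fifty-four scenarios follow directly from the factor combinations listed above (3 x 3 x 3 x 2 = 54). A minimal sketch of how such a batch study can be enumerated is shown below; the run_invascular call is a hypothetical placeholder, since the InVascular solver interface is not described in the abstract.

```python
# Hedged sketch: enumerate the implantation scenarios for batch simulation.
from itertools import product

og_sites = ["ascending aorta", "descending thoracic aorta", "subclavian artery"]
pump_ratios = ["1:1", "1:2", "1:3"]                    # LVAD : native heart pumping
og_angles = ["inclined upward", "perpendicular", "inclined downward"]
inflow_conditions = ["uniform", "pulsatile"]

scenarios = list(product(og_sites, pump_ratios, og_angles, inflow_conditions))
assert len(scenarios) == 54

for site, ratio, angle, inflow in scenarios:
    # run_invascular(site, ratio, angle, inflow)  # -> cardiac output, pressure field
    pass
```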

Keywords: graphic processing unit (GPU) parallel computing, left ventricle assist device (LVAD), lumped-parameter model, patient-specific computational hemodynamics

Procedia PDF Downloads 133
1536 Solving LWE by Progressive Pumps and Its Optimization

Authors: Leizhang Wang, Baocang Wang

Abstract:

The General Sieve Kernel (G6K) is currently considered the fastest algorithm for the shortest vector problem (SVP) and is the record holder of the open SVP challenge. We study the lattice basis quality improvement effects of the Workout proposed in G6K, which is composed of a series of pumps to solve SVP. Firstly, we use low-dimensional pump output bases to propose a predictor for the quality of high-dimensional pump output bases. Both theoretical analysis and experimental tests are performed to illustrate that it is more computationally expensive to solve LWE problems by using the G6K default SVP solving strategy (Workout) than by using lattice reduction algorithms (e.g., BKZ 2.0, progressive BKZ, and pump-and-jump BKZ) with sieving as their SVP oracle. Secondly, the default Workout in G6K is optimized to achieve a stronger reduction and lower computational cost. Thirdly, we combine the optimized Workout and the pump output basis quality predictor to further reduce the computational cost by optimizing the LWE instance selection strategy. In fact, we can solve the TU LWE challenge (n = 65, q = 4225, α = 0.005) 13.6 times faster than the G6K default Workout. Fourthly, we consider a combined two-stage (preprocessing by BKZ-β and a big Pump) LWE solving strategy. Both stages use the dimension-for-free technique to give new theoretical security estimations of several LWE-based cryptographic schemes. The security estimations show that the security of these schemes under the conservative NewHope core-SVP model is somewhat overestimated. In addition, in the case of the LAC scheme, the LWE instance selection strategy can be optimized to further improve the LWE-solving efficiency by 15% and 57%. Finally, some experiments are implemented to examine the effects of our strategies on normal-form LWE problems, and the results demonstrate that the combined strategy is four times faster than that of NewHope.
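
For context, the "conservative NewHope core-SVP model" mentioned above assigns an attack cost purely from the sieving blocksize β required to solve the underlying lattice problem; the commonly quoted exponents (standard convention, stated here as background rather than taken from this abstract) are:

```latex
\mathrm{cost}_{\text{classical}} \;\approx\; 2^{0.292\,\beta},
\qquad
\mathrm{cost}_{\text{quantum}} \;\approx\; 2^{0.265\,\beta}.
```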

Keywords: LWE, G6K, pump estimator, LWE instances selection strategy, dimension for free

Procedia PDF Downloads 60
1535 Modelling of Heat Generation in an 18650 Lithium-Ion Battery Cell under Varying Discharge Rates

Authors: Foo Shen Hwang, Thomas Confrey, Stephen Scully, Barry Flannery

Abstract:

Thermal characterization plays an important role in battery pack design. Lithium-ion batteries have to be maintained between 15 and 35 °C to operate optimally. Heat is generated (Q) internally within the batteries during both the charging and discharging phases. This can be quantified using several standard methods. The most common method of calculating a battery's heat generation is through the addition of both the Joule heating effects and the entropic changes across the battery. Such values can be derived by identifying the open-circuit voltage (OCV), nominal voltage (V), operating current (I), battery temperature (T), and the rate of change of the open-circuit voltage with respect to temperature (dOCV/dT). This paper focuses on experimental characterization and comparative modelling of the heat generation rate (Q) across several current discharge rates (0.5C, 1C, and 1.5C) of an 18650 cell. The analysis is conducted utilizing several non-linear mathematical function types, including polynomial, exponential, and power models. Parameter fitting is carried out over the respective function orders: polynomial (n = 3-7), exponential (n = 2), and power functions. The generated parameter-fitting functions are then used as heat source functions in a 3-D computational fluid dynamics (CFD) solver under natural convection conditions. Generated temperature profiles are analyzed for errors based on experimental discharge tests conducted at standard room temperature (25 °C). Initial experimental results display low deviation between the experimental and CFD temperature plots. As such, the heat generation function formulated could be more easily utilized for larger battery applications than other available methods.
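
For reference, a common simplified form of the heat-generation rate built from exactly the quantities listed above (often attributed to Bernardi et al.; taking the discharge current as positive is an assumption, since the abstract does not state a sign convention) is:

```latex
Q \;=\; \underbrace{I\,\bigl(V_{\mathrm{OC}} - V\bigr)}_{\text{Joule (irreversible)}}
\;-\; \underbrace{I\,T\,\frac{\mathrm{d}V_{\mathrm{OC}}}{\mathrm{d}T}}_{\text{entropic (reversible)}}.
```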

Keywords: computational fluid dynamics, curve fitting, lithium-ion battery, voltage drop

Procedia PDF Downloads 96
1534 Cognitive Rehabilitation in Schizophrenia: A Review of the Indian Scenario

Authors: Garima Joshi, Pratap Sharan, V. Sreenivas, Nand Kumar, Kameshwar Prasad, Ashima N. Wadhawan

Abstract:

Schizophrenia is a debilitating disorder and is marked by cognitive impairment, which deleteriously impacts social and professional functioning along with the quality of life of patients and caregivers. The cognitive symptoms are often present in the prodromal state and worsen as the illness progresses; they have proven to have good predictive value for the prognosis of the illness. It has been shown that intensive cognitive rehabilitation (CR) leads to improvements in healthy as well as cognitively impaired subjects. As the majority of the population in India falls in the lower-to-middle socio-economic strata and has low education levels, using the existing packages, a majority of which were developed in the West, for cognitive rehabilitation becomes difficult. The use of technology is also restricted due to the high costs involved and the limited availability of and familiarity with computers and other devices, which poses an impediment to continued therapy. Cognitive rehabilitation in India uses a plethora of retraining methods for patients with schizophrenia, targeting the functions of attention, information processing, executive functions, learning and memory, and comprehension, along with social cognition. Psychologists often have to follow an integrative therapy approach involving social skills training, family therapy, and psychoeducation in order to maintain the gains from cognitive rehabilitation in the long run. This paper reviews the methodologies and cognitive retraining programs used in India. It attempts to elucidate the evolution and development of the methodologies used, from traditional paper-and-pencil retraining to more sophisticated neuroscience-informed techniques, delivered as home-based or as supervised and guided programs, for the rehabilitation of cognitive deficits in schizophrenia.

Keywords: schizophrenia, cognitive rehabilitation, neuropsychological interventions, integrated approach to rehabilitation

Procedia PDF Downloads 363
1533 Finite Element Analysis for Earing Prediction Incorporating the BBC2003 Material Model with Fully Implicit Integration Method: Derivation and Numerical Algorithm

Authors: Sajjad Izadpanah, Seyed Hadi Ghaderi, Morteza Sayah Irani, Mahdi Gerdooei

Abstract:

In this research work, a sophisticated yield criterion known as BBC2003, capable of describing the planar anisotropic behavior of aluminum alloy sheets, was integrated into the commercial finite element code ABAQUS/Standard via a user subroutine. The complete formulation of the implementation process using a fully implicit integration scheme, i.e., the classic backward Euler method, is presented, and relevant aspects of the yield criterion are introduced. In order to solve the nonlinear differential and algebraic equations, a line-search algorithm was adopted in the user-defined material subroutine (UMAT) to expand the convergence domain of the iterative Newton-Raphson method. The developed subroutine was used to simulate a challenging computational problem with complex stress states, i.e., deep drawing of the anisotropic aluminum alloy AA3105. The accuracy and stability of the developed subroutine were confirmed by comparing the numerically predicted earing and thickness variation profiles with the experimental results, which were found to be in excellent agreement. The integration of the BBC2003 yield criterion into ABAQUS/Standard represents a significant contribution to the field of computational mechanics and provides a useful tool for analyzing the mechanical behavior of anisotropic materials subjected to complex loading conditions.
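
To make the fully implicit integration plus line-searched Newton idea concrete, the sketch below solves a scalar backward-Euler return-mapping (consistency) equation. Von Mises plasticity with power-law isotropic hardening is used as a deliberately simplified stand-in for the full BBC2003 multi-equation system, and all material constants are illustrative rather than taken from the paper.

```python
# Hedged sketch: backward-Euler return mapping solved with a Newton iteration
# that uses simple backtracking line search to widen the convergence domain.
import numpy as np

G = 26.0e3                                 # shear modulus [MPa], illustrative
sigma0, K, n = 120.0, 300.0, 0.25          # power-law hardening parameters

def yield_stress(eps_p):
    return sigma0 + K * eps_p**n

def residual(dgamma, q_trial, eps_p_old):
    # Consistency condition evaluated at the end of the step (backward Euler)
    return q_trial - 3.0 * G * dgamma - yield_stress(eps_p_old + dgamma)

def return_mapping(q_trial, eps_p_old, tol=1e-8, max_iter=50):
    f = residual(0.0, q_trial, eps_p_old)
    if f <= 0.0:
        return 0.0                         # elastic trial state, no plastic flow
    dgamma = 0.0
    for _ in range(max_iter):
        h = 1e-8                           # numerical derivative of the residual
        dres = (residual(dgamma + h, q_trial, eps_p_old) - f) / h
        step = -f / dres
        alpha = 1.0                        # backtracking line search on |residual|
        while alpha > 1e-6:
            if abs(residual(dgamma + alpha * step, q_trial, eps_p_old)) < abs(f):
                break
            alpha *= 0.5
        dgamma += alpha * step
        f = residual(dgamma, q_trial, eps_p_old)
        if abs(f) < tol:
            return dgamma
    raise RuntimeError("return mapping did not converge")

print(return_mapping(q_trial=400.0, eps_p_old=0.0))   # plastic multiplier for one step
```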

Keywords: BBC2003 yield function, plastic anisotropy, fully implicit integration scheme, line search algorithm, explicit and implicit integration schemes

Procedia PDF Downloads 75
1532 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., the Kolmogorov scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases with on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially with a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information that is required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in such a way that each MPI rank acquires its information from the file in parallel. In the case of GPFS, on each computational node a single MPI rank reads data from the file, which is specifically generated for that computational node, and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory’s Mira (GPFS), the National Center for Supercomputing Applications’ Blue Waters (Lustre), the San Diego Supercomputer Center’s Comet (Lustre), and UIC’s Extreme (Lustre). The tests showed that one file per node is suited to GPFS, while parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for calculation of the solution in every time step. For this, the code can make use of either its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing.
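
The Lustre reading strategy described above can be illustrated with a minimal mpi4py sketch (an illustrative assumption, not the authors' Fortran implementation): every rank performs a collective read of its own fixed-size slice of a shared binary pre-file. The file name, record length, and data type are placeholders.

```python
# Hedged sketch: collective parallel MPI I/O read, one slice per rank.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

ints_per_rank = 1024                                  # illustrative record length
buf = np.empty(ints_per_rank, dtype=np.int32)

fh = MPI.File.Open(comm, "startup.pre", MPI.MODE_RDONLY)
offset = rank * ints_per_rank * buf.itemsize          # byte offset of this rank's slice
fh.Read_at_all(offset, buf)                           # collective read
fh.Close()
# `buf` now holds this rank's partitioning/connectivity integers. For the GPFS
# variant described above, one rank per node would read and then forward the
# data with non-blocking point-to-point messages instead.
```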

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 134
1531 Establishing Combustion Behaviour for Refuse Derived Fuel Firing at Kiln Inlet through Computational Fluid Dynamics at a Cement Plant in India

Authors: Prateek Sharma, Venkata Ramachandrarao Maddali, Kapil Kukreja, B. N. Mohapatra

Abstract:

Waste management is one of the pressing issues of India. Several initiatives by the Indian Government, including the recent “Swachhata hi Seva” campaign launched by the Prime Minister on 15th August 2018, can be game changers for waste disposal. Under this initiative, the government, the cement industry, and other stakeholders are working hand in hand to dispose of single-use plastics in the rotary kilns of cement plants. This is an exemplary effort and a move that establishes the Indian cement industry as one of the key players in a circular economy. One of the cement plants in Southern India has been mandated by the state government to co-process shredded plastic and refuse-derived fuel (RDF) available in nearby regions as an alternative fuel. The plant has set a target of 25% thermal substitution rate (TSR) by RDF in the next five years. Most cement plants in India and abroad have achieved high TSR through precalciner firing. However, this plant does not have a precalciner and has to achieve the daunting target of 25% TSR by firing through the main kiln burner. Since RDF is a heterogeneous waste with varying fuel quality, this is difficult to achieve; hence, the plant has to resort to firing some portion of the RDF/plastics at the kiln inlet. However, the kiln inlet has reducing conditions (as observed during measurements under baseline conditions). The combustion behavior of RDF of different sizes at different firing locations in the riser was studied with the help of a computational fluid dynamics tool. It has been concluded that RDF above 50 mm in size results in incomplete combustion, leading to CO formation. Moreover, the best firing location appears to be in the bottom portion of the kiln riser.

Keywords: kiln inlet, plastics, refuse derived fuel, thermal substitution rate

Procedia PDF Downloads 129
1530 Effects on Cortical Thickness due to Musical Training in Elementary School Children: The Importance of Manual Structural Analysis

Authors: Saba Daneshmand, Assal Habibi

Abstract:

Studying musicians has become a prominent approach in macrostructural neuroscience research aimed at exploring the influence of environmental factors on brain development, owing to the significant impact of musical training on the brain. Although longitudinal studies can establish a direct causal relationship between musical training and brain development, only a limited number of studies have been conducted for a long enough duration. We recruited children for the experimental music group to participate in an after-school music program, which was compared to a control group that had no such after-school program or enrichment activities. We ultimately calculated cortical thickness, a distinct measure of development. When a task such as playing an instrument occurs frequently, the associated neural processes become quicker and more refined over time, causing only the necessary pathways to remain; this, therefore, results in cortical thinning. The Brain and Music Lab has identified the anterior and posterior superior temporal gyrus, Heschl's gyrus, and the inferior regions to be involved with musicianship. A past study found only that the posterior superior temporal gyrus experienced larger thinning in the music group compared to the control; however, we expect our ongoing study to produce similar but stronger results, including thinning in the other regions associated with musicianship. We believe the limited results of the previous study are due to its short duration, which is why this ongoing and longer longitudinal study is a significant and indispensable contribution in helping us discover the important developmental aspects of musical training.

Keywords: cortical thickness, music, neuroimaging, child development

Procedia PDF Downloads 20
1529 Compressive Stresses near Crack Tip Induced by Thermo-Electric Field

Authors: Thomas Jin-Chee Liu

Abstract:

In this paper, the thermo-electro-structural coupled field in a cracked metal plate is studied using finite element analysis. From the computational results, compressive stresses appear near the crack tip. This conclusion agrees with a previous reference. Furthermore, the compressive condition can retard and stop crack growth during the Joule heating process.

Keywords: compressive stress, crack tip, Joule heating, finite element

Procedia PDF Downloads 408
1528 Globally Convergent Sequential Linear Programming for Multi-Material Topology Optimization Using Ordered Solid Isotropic Material with Penalization Interpolation

Authors: Darwin Castillo Huamaní, Francisco A. M. Gomes

Abstract:

The aim of multi-material topology optimization (MTO) is to obtain the optimal topology of structures composed of many materials, according to a given set of constraints and cost criteria. In this work, we seek the optimal distribution of materials in a domain, such that the flexibility of the structure is minimized, under certain boundary conditions and the intervention of external forces. In the case where we have only one material, each point of the discretized domain is represented by a function taking one of two values: the value of the function is 1 if the element belongs to the structure and 0 if the element is empty. A common way to avoid the high computational cost of solving integer-variable optimization problems is to adopt the Solid Isotropic Material with Penalization (SIMP) method. This method relies on a continuous interpolation (power) function, whose base variable represents a pseudo-density at each point of the domain. For proper exponent values, the SIMP method reduces intermediate densities, since values other than 0 or 1 usually do not have a physical meaning for the problem. Several extensions of the SIMP method have been proposed for the multi-material case. The one that we explore here is the ordered SIMP method, which has the advantage of not being based on the addition of variables to represent material selection, so the computational cost is independent of the number of materials considered. Although the number of variables is not increased by this algorithm, the optimization subproblems that are generated at each iteration cannot be solved by methods that rely on second derivatives, due to the cost of calculating them. To overcome this, we apply a globally convergent version of the sequential linear programming method, which solves a sequence of linear approximations of the optimization problem.
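
For reference, the single-material power-law interpolation referred to above is commonly written as follows (a standard textbook form; the ordered SIMP extension applies analogous piecewise scaling and translation coefficients per material interval, whose exact values are not given in this abstract):

```latex
E_e(\rho_e) \;=\; E_{\min} + \rho_e^{\,p}\,\bigl(E_0 - E_{\min}\bigr),
\qquad 0 < \rho_{\min} \le \rho_e \le 1,\quad p > 1.
```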

Keywords: global convergence, multi-material design, ordered SIMP, sequential linear programming, topology optimization

Procedia PDF Downloads 315
1527 Reduction of Cooling Demands in a Subtropical Humid Climate Zone: A Study on Roofs of Existing Residential Buildings Using Passive Cooling

Authors: Megha Jain, K. K. Pathak

Abstract:

In sub-tropical humid climates, it is estimated that most of the urban peak energy load is used to satisfy the cooling demand of air conditioners or air coolers in summer. As the urbanization rate in developing nations, such as India, is rising rapidly, the pressure placed on energy resources to satisfy inhabitants’ indoor comfort requirements is consequently increasing too. This paper introduces passive cooling through the roof as a means of reducing cooling energy loads for satisfying human comfort requirements in a sub-tropical climate. Experiments were performed by applying different insulators, which are locally available solar-reflective materials, to the roofs of five rooms in four case buildings: three rooms having reinforced cement concrete (RCC) roofs and two having asbestos sheet roofs, all in existing buildings. The results are verified by computer simulation using computational fluid dynamics tools with the FLUENT software. The use of solar-reflective paint with a high-albedo coating shows a fall of 4.8 °C in peak hours and saves 303 kWh of air-conditioning energy load during the summer season, in comparison to the energy load of a non-insulated flat roof of residential buildings in Bhopal. An optimum insulator solution for both types of roofs is presented. It is recommended that the selected cool-roof solution be combined with insulation on other elements of the envelope to increase indoor thermal comfort. The application is intended for low-cost residential buildings in composite and warm climates like that of Bhopal.

Keywords: cool roof, computational fluid dynamics, energy loads, insulators, passive cooling, subtropical climate, thermal performance

Procedia PDF Downloads 170
1526 Nonlinear Waves in Two-Layer Systems with Heat Release/Consumption at the Interface

Authors: Ilya Simanovskii

Abstract:

Nonlinear convective flows developed under the joint action of buoyancy and thermocapillary effects in a two-layer system with periodic boundary conditions on the lateral walls have been investigated. The influence of an interfacial heat release on oscillatory regimes has been studied. Computational regions with different lengths have been considered. It is shown that the development of oscillatory instability can lead to the appearance of different non-steady flows.

Keywords: interface, instabilities, two-layer systems, bioinformatics, biomedicine

Procedia PDF Downloads 402
1525 Numerical Simulation on Two Components Particles Flow in Fluidized Bed

Authors: Wang Heng, Zhong Zhaoping, Guo Feihong, Wang Jia, Wang Xiaoyi

Abstract:

Flow of gas and particles in fluidized beds is complex and chaotic, which makes it difficult to measure and analyze by experiments. Some bed materials with poor fluidization performance are always fluidized together with a fluidizing medium. The material and the fluidizing medium differ in many properties, such as density, size, and shape. These factors make the dynamic process more complex and the experimental research more limited. Numerical simulation is an efficient way to describe the process of gas-solid flow in a fluidized bed. One of the most popular numerical simulation methods is CFD-DEM, i.e., the coupled computational fluid dynamics-discrete element method. The shapes of particles are usually simplified as spheres in most studies. Although sphere-shaped particles make the particle calculations uncomplicated, the effects of different shapes are disregarded. However, in practical applications, two-component systems in fluidized beds also contain both sphere-shaped and non-sphere-shaped particles. Therefore, the two-component flow of sphere-shaped and non-sphere-shaped particles needs to be studied. In this paper, the mixing flow was simulated as the flow of molded biomass particles and quartz in a fluidized bed. The integrated model was built on an Eulerian–Lagrangian approach, which was improved to suit the non-sphere particles. The construction of the cylinder-shaped particles differed between the two numerical methods. In the CFD part, each cylinder-shaped particle was constructed as an agglomerate of fictitious small particles, which means the small fictitious particles are gathered but not combined with each other. The diameter of a fictitious particle, d_fic, and its solid volume fraction inside a cylinder-shaped particle, α_fic, which is called the fictitious volume fraction, are introduced to modify the drag coefficient β, together with the volume fractions of the cylinder-shaped particles, α_cld, and of the sphere-shaped particles, α_sph. In a computational cell, the void fraction ε can then be expressed as ε = 1 − α_cld·α_fic − α_sph. The Ergun equation and the Wen and Yu equation were used to calculate β. In the DEM method, cylinder-shaped particles were built by the multi-sphere method, in which small sphere elements are merged with each other. A soft-sphere model was used to obtain the contact forces between particles. The total contact force on a cylinder-shaped particle was calculated as the sum of the forces on its small sphere particles. The model (size = 1 × 0.15 × 0.032 mm3) contained 420000 sphere-shaped particles (diameter = 0.8 mm, density = 1350 kg/m3) and 60 cylinder-shaped particles (diameter = 10 mm, length = 10 mm, density = 2650 kg/m3). Each cylinder-shaped particle was constructed by 2072 small sphere-shaped particles (d = 0.8 mm) in the CFD mesh and 768 sphere-shaped particles (d = 3 mm) in the DEM mesh. The lengths of the CFD and DEM cells are 1 mm and 2 mm, respectively. The superficial gas velocity was varied across the models as 1.0 m/s, 1.5 m/s, and 2.0 m/s. The simulation results were compared with the experimental results. The particles moved regularly in a fountain pattern. The effect of superficial gas velocity on the cylinder-shaped particles was stronger than that on the sphere-shaped particles. The results prove that the present work provides an effective approach to simulating the flow of two-component particles.
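
As background on the two drag correlations named above, the sketch below shows commonly used forms of the Ergun and Wen & Yu momentum-exchange coefficient with the conventional switch at a void fraction of 0.8; the exact constants and Reynolds-number convention in the authors' modified (fictitious-particle) drag model may differ.

```python
# Hedged sketch: interphase momentum-exchange coefficient beta [kg/(m^3 s)].
# eps = void fraction, rho_g/mu_g = gas density/viscosity, d_p = particle
# diameter, u_slip = |gas velocity - particle velocity|.
def drag_beta(eps, rho_g, mu_g, d_p, u_slip):
    if eps < 0.8:                                     # dense regime: Ergun
        return (150.0 * (1.0 - eps) ** 2 * mu_g / (eps * d_p ** 2)
                + 1.75 * (1.0 - eps) * rho_g * u_slip / d_p)
    # dilute regime: Wen & Yu
    Re = max(eps * rho_g * u_slip * d_p / mu_g, 1e-12)
    Cd = 24.0 / Re * (1.0 + 0.15 * Re ** 0.687) if Re < 1000.0 else 0.44
    return 0.75 * Cd * eps * (1.0 - eps) * rho_g * u_slip / d_p * eps ** (-2.65)
```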

Keywords: computational fluid dynamics, discrete element method, fluidized bed, multiphase flow

Procedia PDF Downloads 327
1524 Finite Element Modeling of Aortic Intramural Haematoma Shows Size Matters

Authors: Aihong Zhao, Priya Sastry, Mark L Field, Mohamad Bashir, Arvind Singh, David Richens

Abstract:

Objectives: Intramural haematoma (IMH) is one of the pathologies, along with acute aortic dissection, that present as acute aortic syndrome (AAS). Evidence suggests that, unlike aortic dissection, some intramural haematomas may regress with medical management. However, intramural haematomas have traditionally been managed like acute aortic dissections. Given that some of these pathologies may regress with conservative management, it would be useful to be able to identify which of them may not need high-risk emergency intervention. A computational aortic model was used in this study to try to identify intramural haematomas at risk of progression to aortic dissection. Methods: We created a computational model of the aorta with luminal blood flow. Reports in the literature have identified 11 mm as the radial clot thickness that is associated with a heightened risk of progression of intramural haematoma. Accordingly, haematomas of varying sizes were implanted in the modelled aortic wall to test this hypothesis. The model was exposed to physiological blood flows, and the stresses and strains in each layer of the aortic wall were recorded. Results: The size and shape of the clot were seen to affect the magnitude of aortic stresses. The greatest stresses and strains were recorded in the intima of the model. When the haematoma exceeded 10 mm in all dimensions, the stress on the intima reached breaking point. Conclusion: Intramural clot size appears to be a contributory factor affecting aortic wall stress. Our computer simulation corroborates clinical evidence in the literature proposing that an IMH diameter greater than 11 mm may be predictive of progression. This preliminary report suggests that finite element modelling of the aortic wall may be a useful process by which to examine putative variables important in predicting progression or regression of intramural haematoma.

Keywords: intramural haematoma, acute aortic syndrome, finite element analysis

Procedia PDF Downloads 432
1523 Preliminary Study of Hand Gesture Classification in Upper-Limb Prosthetics Using Machine Learning with EMG Signals

Authors: Linghui Meng, James Atlas, Deborah Munro

Abstract:

There is an increasing demand for prosthetics capable of mimicking natural limb movements and hand gestures, but precise movement control of prosthetics using only electrode signals continues to be challenging. This study considers the implementation of machine learning as a means of improving accuracy and presents an initial investigation into hand gesture recognition using models based on electromyographic (EMG) signals. EMG signals, which capture muscle activity, are used as inputs to machine learning algorithms to improve prosthetic control accuracy, functionality, and adaptivity. Using logistic regression, a machine learning classifier, this study evaluates the accuracy of classifying two hand gestures from the publicly available Ninapro dataset using two time-series feature extraction algorithms: Time Series Feature Extraction (TSFE) and Convolutional Neural Networks (CNNs). Trials were conducted using varying numbers of EMG channels, from one to eight, to determine the impact of channel quantity on classification accuracy. The results suggest that although both algorithms can successfully distinguish between hand gesture EMG signals, CNNs outperform TSFE in extracting useful information in terms of both accuracy and computational efficiency. In addition, although more channels of EMG signals provide more useful information, they also require more complex and computationally intensive feature extractors and consequently do not perform as well as lower numbers of channels. The findings also underscore the potential of machine learning techniques in developing more effective and adaptive prosthetic control systems.
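
A minimal sketch of the experiment shape described above is given below: a logistic-regression classifier trained on an increasing number of EMG channels for a two-gesture problem. The feature matrix is a random placeholder; loading the Ninapro recordings and the TSFE/CNN feature extractors named in the abstract are not reproduced here.

```python
# Hedged sketch: two-gesture classification accuracy vs. number of EMG channels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_windows, n_channels = 400, 8
X = rng.normal(size=(n_windows, n_channels))   # placeholder features, one per channel
y = rng.integers(0, 2, size=n_windows)         # two gesture labels

for k in range(1, n_channels + 1):
    Xk = X[:, :k]                              # use the first k channels only
    X_tr, X_te, y_tr, y_te = train_test_split(Xk, y, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"{k} channel(s): accuracy = {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```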

Keywords: EMG, machine learning, prosthetic control, electromyographic prosthetics, hand gesture classification, CNN, computational neural networks, TSFE, time series feature extraction, channel count, logistic regression, ninapro, classifiers

Procedia PDF Downloads 38
1522 Information Visualization Methods Applied to Nanostructured Biosensors

Authors: Osvaldo N. Oliveira Jr.

Abstract:

The control of molecular architecture inherent in some experimental methods to produce nanostructured films has had a great impact on devices of various types, including sensors and biosensors. The self-assembled monolayer (SAM) and electrostatic layer-by-layer (LbL) techniques, for example, are now routinely used to produce tailored architectures for biosensing where biomolecules are immobilized with long-lasting preserved activity. Enzymes, antigens, antibodies, peptides, and many other molecules serve as the molecular recognition elements for detecting an equally wide variety of analytes. The principles of detection are also varied, including electrochemical methods, fluorescence spectroscopy, and impedance spectroscopy. In this presentation, an overview will be provided of biosensors made with nanostructured films to detect antibodies associated with tropical diseases and HIV, in addition to the detection of analytes of medical interest such as cholesterol and triglycerides. Because large amounts of data are generated in the biosensing experiments, use has been made of computational and statistical methods to optimize performance. Multidimensional projection techniques such as Sammon's mapping have been shown to be more efficient than traditional multivariate statistical analysis in identifying small concentrations of anti-HIV antibodies and in distinguishing between blood serum samples of animals infected with two tropical diseases, namely Chagas disease and leishmaniasis. Optimization of biosensing may include a combination of another information visualization method, the parallel coordinates technique, with artificial intelligence methods in order to identify the most suitable frequencies for reaching higher sensitivity using impedance spectroscopy. Also discussed will be the possible convergence of technologies, through which machine learning and other computational methods may be used to treat data from biosensors within an expert system for clinical diagnosis.

Keywords: clinical diagnosis, information visualization, nanostructured films, layer-by-layer technique

Procedia PDF Downloads 337
1521 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability

Authors: Chin-Chia Jane

Abstract:

In a transportation network, travel time refers to the transmission time from the source node to the destination node, whereas reliability refers to the probability of a successful connection from the source node to the destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time stays under the travel time limitation. This work is pioneering since, whereas the existing literature evaluates travel time reliability via a single optimization path, the proposed QoS focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each arc has a new travel time weight, which takes the value 0. Each intermediate node is replaced by two nodes u and v, and an arc directed from u to v. The newly generated nodes u and v are perfect nodes. The new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network which has 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. To make a comparison, we test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than the complete enumeration method. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
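
The node-splitting step described above can be sketched as follows: each intermediate node with a time weight becomes an internal arc, so node travel times are handled like ordinary arc weights by a standard min-cost max-flow routine. The toy network, capacities, and time weights are illustrative, not the 11-node/21-arc benchmark, and networkx's solver merely stands in for the authors' algorithm.

```python
# Hedged sketch: split each time-weighted intermediate node i into (i_in, i_out).
import networkx as nx

arcs = [("s", "a", 5), ("s", "b", 4), ("a", "t", 3), ("b", "t", 6), ("a", "b", 2)]
node_time = {"a": 2, "b": 1}                   # travel time per unit at intermediate nodes

G = nx.DiGraph()
for u, v, cap in arcs:
    uu = f"{u}_out" if u in node_time else u
    vv = f"{v}_in" if v in node_time else v
    G.add_edge(uu, vv, capacity=cap, weight=0)  # original arcs carry zero time weight
for i, t in node_time.items():
    G.add_edge(f"{i}_in", f"{i}_out", capacity=10**9, weight=t)

flow = nx.max_flow_min_cost(G, "s", "t")
print("total travel-time cost of the max flow:", nx.cost_of_flow(G, flow))
```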

Keywords: quality of service, reliability, transportation network, travel time

Procedia PDF Downloads 222
1520 Hydrodynamic Analysis of Fish Fin Kinematics of Oreochromis Niloticus Using Machine Learning and Image Processing

Authors: Paramvir Singh

Abstract:

The locomotion of aquatic organisms has long fascinated biologists and engineers alike, with fish fins serving as a prime example of nature's remarkable adaptations for efficient underwater propulsion. This paper presents a comprehensive study focused on the hydrodynamic analysis of fish fin kinematics, employing an innovative approach that combines machine learning and image processing techniques. Through high-speed videography and advanced computational tools, we gain insights into the complex and dynamic motion of the fins of a tilapia (Oreochromis niloticus). The study began by experimentally capturing videos of the various motions of a tilapia in a custom-made setup. Using deep learning and image processing on the videos, the motion of the caudal and pectoral fins was extracted. This motion included the fin configuration (i.e., the angle of deviation from the mean position) with respect to time. Numerical investigations of the flapping fins are then performed using a computational fluid dynamics (CFD) solver. 3D models of the fins were created, mimicking the real-life geometry of the fins. The thrust characteristics of the fins separately (caudal and pectoral) and of the fins acting together were studied. The relationship and the phase between caudal and pectoral fin motion were also discussed. The key objectives include mathematical modeling of the motion of a flapping fin at different naturally occurring frequencies and amplitudes. The interactions between the two fins (caudal and pectoral) were also an area of keen interest. This work aims to improve on research that has been done in the past on similar topics. These results can also help in the design of better and more efficient propulsion systems for biomimetic underwater vehicles that are used to study aquatic ecosystems, explore uncharted or challenging underwater regions, perform ocean bed modeling, etc.

Keywords: biomimetics, fish fin kinematics, image processing, fish tracking, underwater vehicles

Procedia PDF Downloads 90
1519 Chemical Kinetics and Computational Fluid-Dynamics Analysis of H2/CO/CO2/CH4 Syngas Combustion and NOx Formation in a Micro-Pilot-Ignited Supercharged Dual Fuel Engine

Authors: Ulugbek Azimov, Nearchos Stylianidis, Nobuyuki Kawahara, Eiji Tomita

Abstract:

A chemical kinetics and computational fluid dynamics (CFD) analysis was performed to evaluate the combustion of syngas derived from biomass and coke-oven solid feedstock in a micro-pilot-ignited supercharged dual-fuel engine under lean conditions. For this analysis, a new reduced syngas chemical kinetics mechanism was constructed and validated by comparing the ignition delay and laminar flame speed data with those obtained from experiments and from other detailed chemical kinetics mechanisms available in the literature. A reaction sensitivity analysis was conducted for ignition delay at elevated pressures in order to identify the important chemical reactions that govern the combustion process. The chemical kinetics of NOx formation was analyzed for H2/CO/CO2/CH4 syngas mixtures by using counterflow burner and premixed laminar flame speed reactor models. The new mechanism showed very good agreement with experimental measurements and accurately reproduced the effects of pressure, temperature, and equivalence ratio on NOx formation. In order to identify the species important for NOx formation, a sensitivity analysis was conducted for pressures of 4 bar, 10 bar, and 16 bar and a preheat temperature of 300 K. The results show that NOx formation is driven mostly by hydrogen-based species, while other species, such as N2, CO2, and CH4, also have important effects on combustion. Finally, the new mechanism was used in a multidimensional CFD simulation to predict the combustion of syngas in a micro-pilot-ignited supercharged dual-fuel engine, and the results were compared with experiments. The mechanism showed the closest prediction of the in-cylinder pressure and the rate of heat release (ROHR).
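
As an illustration of the kind of validation target mentioned above, the sketch below computes a single laminar flame speed point for a lean H2/CO/CO2/CH4 blend with Cantera (recent versions). GRI-Mech 3.0 is only a stand-in for the authors' reduced syngas mechanism, and the blend, pressure, and equivalence ratio are illustrative.

```python
# Hedged sketch: one freely propagating premixed flame, returning the flame speed.
import cantera as ct

gas = ct.Solution("gri30.yaml")                      # stand-in mechanism
gas.TP = 300.0, 4.0 * ct.one_atm                     # 300 K preheat, 4 bar
gas.set_equivalence_ratio(0.6,
                          fuel="H2:0.3, CO:0.3, CO2:0.2, CH4:0.2",
                          oxidizer="O2:1.0, N2:3.76")

flame = ct.FreeFlame(gas, width=0.03)                # 3 cm domain
flame.set_refine_criteria(ratio=3, slope=0.07, curve=0.14)
flame.solve(loglevel=0, auto=True)
print("laminar flame speed [m/s]:", flame.velocity[0])
```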

Keywords: syngas, chemical kinetics mechanism, internal combustion engine, NOx formation

Procedia PDF Downloads 410
1518 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure

Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer

Abstract:

The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed based on video data, aimed at uncovering so-called “multimodal gestalts”, patterns of linguistic and embodied conduct that recur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using videos) have so far depended on time- and resource-intensive processes of manually transcribing each component from video materials. Automating these tasks requires advanced programming skills, which are often not within the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated in one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data. The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures, in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format and enable the import of the main existing formats of annotated video data and the export to other formats used in the field, while integrating different data source formats in a way that allows them to be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable parallel search across many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (which can also extend to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented, combining them with the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken, and grammatical information from videos, to correlate those different levels, and to perform queries and analyses.

Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition

Procedia PDF Downloads 110
1517 A Neuroscience-Based Learning Technique: Framework and Application to STEM

Authors: Dante J. Dorantes-González, Aldrin Balsa-Yepes

Abstract:

Existing learning techniques such as problem-based learning, project-based learning, or case-study learning focus mainly on technical details but give no specific guidelines on the learner’s experience and emotional learning aspects such as arousal, salience, and valence, even though emotional states are important factors affecting engagement and retention. Some approaches involving emotion in educational settings, such as social and emotional learning, lack neuroscientific rigour and the use of specific neurobiological mechanisms. On the other hand, neurobiology approaches lack educational applicability, and educational approaches mainly focus on cognitive aspects and disregard conditioning-based learning. First, the authors explain the reasons why it is hard to learn thoughtfully; then they use the method of neurobiological mapping to track the main limbic system functions, such as the reward circuit, and its relations with perception, memories, motivations, sympathetic and parasympathetic reactions, and sensations, as well as the brain cortex. The authors conclude by explaining the major finding: the mechanisms of nonconscious learning and the triggers that guarantee long-term memory potentiation. Afterward, the educational framework for practical application and the instructors’ guidelines are established. An implementation example in engineering education is given, namely, the study of tuned-mass dampers for the attenuation of earthquake oscillations in skyscrapers. This work represents an original learning technique based on nonconscious learning mechanisms to enhance long-term memories, complementing existing cognitive learning methods.

Keywords: emotion, emotion-enhanced memory, learning technique, STEM

Procedia PDF Downloads 92
1516 Analysis and Modeling of the Building’s Facades in Terms of Different Convection Coefficients

Authors: Enes Yasa, Guven Fidan

Abstract:

Building simulation tools need to better evaluate convective heat exchanges between the external air and wall surfaces. Previous analyses have demonstrated the significant effect of convective heat transfer coefficient values on the room energy balance. Some authors have pointed out that the large discrepancies observed between widely used building thermal models can be attributed to the different correlations used to calculate or impose the value of the convective heat transfer coefficients. Moreover, numerous researchers have made sensitivity calculations and shown that the choice of convective heat transfer coefficient values can lead to differences of 20% to 40% in energy demands. The thermal losses to the ambient from a building surface or a roof-mounted solar collector represent an important portion of the overall energy balance and depend heavily on wind-induced convection. In an effort to help designers make better use of the correlations available in the literature for the external convection coefficients due to wind, a critical discussion and a suitable tabulation are presented, on the basis of the algebraic form of the coefficients and their dependence upon characteristic length and wind direction, in addition to wind speed. Many research works conducted since the early eighties have focused on convection heat transfer problems inside buildings. In this context, a computational fluid dynamics (CFD) program has been used to predict external convective heat transfer coefficients at external building surfaces. For the building facades model, the effects of wind speed and of temperature differences between the surfaces and the external air have been analyzed, showing different heat transfer conditions and coefficients. In order to provide further information on external convective heat transfer coefficients, a numerical work is presented in this paper, using a commercial computational fluid dynamics (CFD) package (CFX) to predict convective heat transfer coefficients at external building surfaces.
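
As one example of the kind of wind-driven correlation discussed above (a classic smooth-surface relation often attributed to Jürges/McAdams, quoted here as background and not taken from this abstract; it is usually considered valid only for wind speeds up to about 5 m/s):

```latex
h_c \;\approx\; 5.7 + 3.8\,V_w \qquad \bigl[\mathrm{W\,m^{-2}\,K^{-1}},\; V_w \text{ in m/s}\bigr].
```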

Keywords: CFD in buildings, external convective heat transfer coefficients, building facades, thermal modelling

Procedia PDF Downloads 421
1515 The Impact of Intelligent Control Systems on Biomedical Engineering and Research

Authors: Melkamu Tadesse Getachew

Abstract:

Intelligent control systems have revolutionized biomedical engineering, advancing research and enhancing medical practice. This review paper examines the impact of intelligent control on various aspects of biomedical engineering. It analyzes how these systems enhance precision and accuracy in biomedical instrumentation, improving diagnostics, monitoring, and treatment. Integration challenges are addressed, and potential solutions are proposed. The paper also investigates the optimization of drug delivery systems through intelligent control. It explores how intelligent systems contribute to precise dosing, targeted drug release, and personalized medicine. Challenges related to controlled drug release and patient variability are discussed, along with potential avenues for overcoming them. Algorithms used in intelligent control systems for biomedical control are also compared. The implications of intelligent control in computational and systems biology are explored, showcasing how these systems enable enhanced analysis and prediction of complex biological processes. Challenges such as interpretability, human-machine interaction, and machine reliability are examined, along with potential solutions. Intelligent control in biomedical engineering also plays a crucial role in risk management during surgical operations. This section demonstrates how intelligent systems improve patient safety and surgical outcomes when integrated into surgical robots, augmented reality, and preoperative planning. The challenges associated with these implementations and potential solutions are discussed in detail. In summary, this review paper comprehensively explores the widespread impact of intelligent control on biomedical engineering, pointing to a promising future for addressing human health issues. It discusses application areas, challenges, and potential solutions, highlighting the transformative potential of these systems in advancing research and improving medical practice.

Keywords: Intelligent control systems, biomedical instrumentation, drug delivery systems, robotic surgical instruments, Computational monitoring and modeling

Procedia PDF Downloads 46
1514 DeepLig: A de-novo Computational Drug Design Approach to Generate Multi-Targeted Drugs

Authors: Anika Chebrolu

Abstract:

Mono-targeted drugs can be of limited efficacy against complex diseases. Recently, multi-target drug design has been approached as a promising tool to fight against these challenging diseases. However, the scope of current computational approaches for multi-target drug design is limited. DeepLig presents a de-novo drug discovery platform that uses reinforcement learning to generate and optimize novel, potent, and multi-targeted drug candidates against protein targets. DeepLig’s model consists of two networks in interplay: a generative network and a predictive network. The generative network, a stack-augmented recurrent neural network, utilizes a stack memory unit to remember and recognize molecular patterns when generating novel ligands from scratch. The generative network passes each newly created ligand to the predictive network, which then uses multiple graph attention networks simultaneously to forecast the average binding affinity of the generated ligand towards multiple target proteins. With each iteration, given feedback from the predictive network, the generative network learns to optimize itself to create molecules with a higher average binding affinity towards multiple proteins. DeepLig was evaluated based on its ability to generate multi-target ligands against two distinct proteins, multi-target ligands against three distinct proteins, and multi-target ligands against two distinct binding pockets on the same protein. With each test case, DeepLig was able to create a library of valid, synthetically accessible, and novel molecules with optimal and equipotent binding energies. We propose that DeepLig provides an effective approach to designing multi-targeted drug therapies that can potentially show higher success rates during in-vitro trials.

Keywords: drug design, multitargeticity, de-novo, reinforcement learning

Procedia PDF Downloads 99
1513 Computational Modeling of Load Limits of Carbon Fibre Composite Laminates Subjected to Low-Velocity Impact Utilizing Convolution-Based Fast Fourier Data Filtering Algorithms

Authors: Farhat Imtiaz, Umar Farooq

Abstract:

In this work, we developed a computational model to predict ply-level failure in impacted composite laminates. Data obtained from physical testing of flat- and round-nose impacts on 8-, 16-, and 24-ply laminates were considered. Routine inspections of the tested laminates were carried out to approximate the ply-by-ply damage incurred. Plots of the load-time, load-deflection, and energy-time histories were drawn to approximate the inflicted damage. Unwanted data logged during the impact tests, due to restrictions of the testing and logging systems, were also filtered. Conventional filters (built-in, statistical, and numerical) reliably predicted load thresholds for relatively thin laminates such as eight- and sixteen-ply panels. However, relatively thick laminates, such as twenty-four-ply laminates subjected to flat-nose impact, generated clipped data which can only be de-noised using oscillatory algorithms. The literature search reveals that modern oscillatory data filtering and extrapolation algorithms have scarcely been utilized. This investigation reports applications of filtering and extrapolation of the clipped data utilising a fast Fourier convolution algorithm to predict load thresholds. Some of the results were related to the impact-induced damage areas identified with ultrasonic C-scans and found to be in acceptable agreement. Based on these consistent findings, applying modern data filtering and extrapolation algorithms to data logged by existing machines has efficiently enhanced data interpretation without resorting to extra resources. The algorithms could be useful for impact-induced damage approximations of similar cases.
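
A minimal sketch of FFT-based convolution smoothing of a noisy, clipped load-time trace is shown below, in the spirit of the fast Fourier convolution filtering named above; the synthetic signal, the window choice, and the omission of the extrapolation (de-clipping) step are simplifying assumptions.

```python
# Hedged sketch: smooth a clipped, noisy load-time record by FFT convolution.
import numpy as np
from scipy.signal import fftconvolve

t = np.linspace(0.0, 8e-3, 2000)                                      # 8 ms impact event
load = np.clip(6.0 * np.exp(-((t - 4e-3) / 1.5e-3) ** 2), None, 4.5)  # clipped pulse [kN]
load += 0.2 * np.random.default_rng(1).normal(size=t.size)            # logging noise

kernel = np.hanning(101)
kernel /= kernel.sum()                              # unit-gain smoothing window
smoothed = fftconvolve(load, kernel, mode="same")   # O(N log N) convolution
print("smoothed peak load estimate [kN]:", smoothed.max())
```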

Keywords: fibre reinforced laminates, fast Fourier algorithms, mechanical testing, data filtering and extrapolation

Procedia PDF Downloads 135
1512 Contextual Distribution for Textual Alignment

Authors: Yuri Bizzoni, Marianne Reboul

Abstract:

Our program compares French and Italian translations of Homer’s Odyssey, from the 16th to the 20th century. We show how distributional semantics systems can be used both to improve alignment between different French translations and to align the Greek text with a French translation. Although we focus on French examples, the techniques we present are completely language independent.
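
A minimal sketch of similarity-based alignment between two translations is given below. It is purely illustrative: TF-IDF vectors stand in for a proper distributional-semantics model, and the sample sentences are invented, not taken from the corpus.

# Pair each segment of translation A with its most similar segment of translation B.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

translation_a = ["Tell me, Muse, of the man of many ways",
                 "who wandered far after he sacked the sacred city of Troy"]
translation_b = ["who was driven to wander far and wide after he had sacked Troy's sacred citadel",
                 "Sing to me, o Muse, of that resourceful man"]

vectorizer = TfidfVectorizer().fit(translation_a + translation_b)
sims = cosine_similarity(vectorizer.transform(translation_a),
                         vectorizer.transform(translation_b))

for i, row in enumerate(sims):                 # greedy one-to-one alignment by similarity
    j = row.argmax()
    print(f"A[{i}] <-> B[{j}]  (similarity {row[j]:.2f})")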

Keywords: classical receptions, computational linguistics, distributional semantics, Homeric poems, machine translation, translation studies, text alignment

Procedia PDF Downloads 435
1511 Ramp Rate and Constriction Factor Based Dual Objective Economic Load Dispatch Using Particle Swarm Optimization

Authors: Himanshu Shekhar Maharana, S. K .Dash

Abstract:

Economic Load Dispatch (ELD) is a vital optimization process in electric power systems for allocating generation amongst various units so as to minimize the cost of generation and the cost of emission of gases such as sulphur dioxide, nitrous oxide, and carbon monoxide. In this paper, we employ ramp rate and constriction factor based particle swarm optimization (RRCPSO) to analyze several performance objectives, namely the cost of generation, the cost of emission, and a dual objective function combining both, through simulated experimental results. A 6-unit, 30-bus IEEE test case system has been utilized to simulate the results, incorporating improved weight factors and ramp rate limit constraints for optimizing the total cost of generation and emission. The method increases the tendency of particles to venture into the solution space, improving their convergence rates. Earlier approaches based on dispersed PSO (DPSO) and constriction factor based PSO (CPSO) incur comparatively higher computational time and yield poorer optimal solutions than the method presented here. This paper applies the well-defined ramp rate and constriction factor based PSO to the cost, emission, and combined objectives, and compares the results with the DPSO and weight improved PSO (WIPSO) techniques, demonstrating lower computational time and better optimal solutions.
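
The following is a minimal sketch of constriction-factor PSO for economic dispatch with ramp-rate limits. The quadratic cost coefficients, unit limits, ramp rates, and demand are illustrative two-unit values, not the IEEE 30-bus data used in the paper.

# Constriction-factor PSO; ramp-rate limits tighten each unit's feasible band.
import numpy as np

rng = np.random.default_rng(0)
a, b, c = np.array([0.01, 0.012]), np.array([2.0, 1.8]), np.array([100.0, 120.0])  # $/MW^2, $/MW, $
p_min, p_max = np.array([50.0, 40.0]), np.array([200.0, 180.0])    # unit limits (MW)
p_prev = np.array([120.0, 100.0])                                   # previous-hour output (MW)
ramp = np.array([30.0, 25.0])                                       # ramp-rate limits (MW/h)
demand = 260.0                                                       # total demand (MW)

def cost(p):
    fuel = np.sum(a * p**2 + b * p + c)
    return fuel + 1e4 * abs(np.sum(p) - demand)     # penalize power-balance violation

def clamp(p):
    lo = np.maximum(p_min, p_prev - ramp)           # ramp-rate-tightened lower bound
    hi = np.minimum(p_max, p_prev + ramp)           # ramp-rate-tightened upper bound
    return np.clip(p, lo, hi)

c1 = c2 = 2.05                                      # Clerc's constriction factor setup
phi = c1 + c2
chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))

n_particles, n_units = 20, 2
x = clamp(rng.uniform(p_min, p_max, size=(n_particles, n_units)))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_cost.argmin()]

for it in range(200):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    x = clamp(x + v)
    costs = np.array([cost(p) for p in x])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()]

print("dispatch (MW):", np.round(gbest, 1), " cost ($/h): %.1f" % cost(gbest))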

Keywords: economic load dispatch (ELD), constriction factor based particle swarm optimization (CPSO), dispersed particle swarm optimization (DPSO), weight improved particle swarm optimization (WIPSO), ramp rate and constriction factor based particle swarm optimization (RRCPSO)

Procedia PDF Downloads 382
1510 Fires in Historic Buildings: Assessment of Evacuation of People by Computational Simulation

Authors: Ivana R. Moser, Joao C. Souza

Abstract:

Building fires are random phenomena that can be extremely violent, and the safe evacuation of people is the surest tactic for saving lives. The correct evacuation of buildings, and of other spaces occupied by people, means leaving the place in a short time and by an appropriate route. It depends on the individual's perception of the spaces, the architectural layout, and the presence of appropriate wayfinding systems. As historical buildings were constructed in times when current safety requirements did not yet exist, it is necessary to adapt these spaces to make them safe. Computer models of evacuation simulation are widely used tools for assessing the safety of people in buildings and other crowded sites, and combining them with the analysis of human behaviour makes the results of emergency evacuation studies more accurate and conclusive. The objective of this research is the performance evaluation of buildings of historical interest with regard to the safe evacuation of people, through computer simulation using the PTV Viswalk software. The building studied is the Colégio Catarinense, a centennial building located in the city of Florianópolis, Santa Catarina, Brazil. The software models variables of human behaviour, such as avoiding collisions with other pedestrians and avoiding obstacles. Scenarios were run on the three-dimensional models, and the contribution to safety in risk situations was verified as an alternative measure, especially where the measures foreseen by the current fire safety codes in Brazil cannot be applied. The simulations measured evacuation times under normal and emergency conditions and indicated the bottlenecks and critical points of the studied building, in order to seek solutions that prevent and correct these undesirable events. Adopting an advanced, performance-based computational approach promotes greater knowledge of the building and of how people behave in these specific environments in emergency situations.

Keywords: computer simulation, escape routes, fire safety, historic buildings, human behavior

Procedia PDF Downloads 188
1509 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Although diverse antenna structures with a wide range of feeds exist, there are many geometries to be tried that cannot be fitted into predefined computational methods. Antenna design and optimization lend themselves to an evolutionary algorithmic approach, since the antenna parameters depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but the antenna parameters and geometries are too broad to fit into a single function. Therefore, weight coefficients are obtained for all possible antenna electrical parameters and geometries, and their variation is learnt by mining the data obtained, in order to optimize the algorithm. The weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to gather the requirements for studying and methodizing the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as the test candidate. Antenna parameters such as gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain maxima and minima for the given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities that arise during simulation. HFSS is chosen for simulations and results. MATLAB is used to generate the computations and combinations and to log the data; it is also used to apply machine learning algorithms and to plot the data used to design the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself, and the MATLAB Parallel Computing Toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in for antenna design software such as HFSS or CST, or a standalone application, to optimize pre-identified common parameters of a wide range of available antennas. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as slot line characteristic impedance, stripline impedance, slot line width, flare aperture size, and dielectric properties; K-means and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data is logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach for automated optimization of the Vivaldi antenna.
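
A minimal sketch of the evolutionary loop described above is shown here, written in Python rather than MATLAB for illustration. The function evaluate_fitness() stands in for the HFSS simulation call, and the geometry parameters and bounds (flare aperture, slot width, taper length) are illustrative assumptions rather than the values used in the study.

# Genetic-algorithm loop: evaluate fitness, keep the fitter half, recombine, mutate.
import numpy as np

rng = np.random.default_rng(1)
# Each candidate is a geometry vector: [flare_aperture_mm, slot_width_mm, taper_length_mm]
bounds_lo = np.array([10.0, 0.2, 40.0])
bounds_hi = np.array([60.0, 2.0, 120.0])

def evaluate_fitness(candidate):
    """Placeholder for the electromagnetic simulation; in the described workflow this
    would call HFSS through its API and return a figure of merit such as gain."""
    target = np.array([35.0, 0.8, 80.0])             # dummy optimum for illustration only
    return -np.sum(((candidate - target) / (bounds_hi - bounds_lo)) ** 2)

pop_size, n_genes, n_generations = 30, 3, 50
population = rng.uniform(bounds_lo, bounds_hi, size=(pop_size, n_genes))

for gen in range(n_generations):
    fitness = np.array([evaluate_fitness(ind) for ind in population])
    parents = population[np.argsort(fitness)[-pop_size // 2:]]        # keep the fitter half
    children = []
    while len(children) < pop_size - len(parents):
        pa, pb = parents[rng.integers(len(parents), size=2)]
        alpha = rng.random(n_genes)
        child = alpha * pa + (1 - alpha) * pb                          # blend recombination
        child += rng.normal(0.0, 0.05, n_genes) * (bounds_hi - bounds_lo)  # mutation
        children.append(np.clip(child, bounds_lo, bounds_hi))
    population = np.vstack([parents, np.array(children)])

best = population[np.argmax([evaluate_fitness(ind) for ind in population])]
print("best geometry [aperture, slot width, taper length] (mm):", np.round(best, 2))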

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 111