Search results for: multi layer
2916 Adaptive Dehazing Using Fusion Strategy
Authors: M. Ramesh Kanthan, S. Naga Nandini Sujatha
Abstract:
The goal of haze removal algorithms is to enhance and recover the details of a scene from a foggy image. For enhancement, the proposed method focuses on two main components: (i) image enhancement based on adaptive contrast histogram equalization, and (ii) image edge strengthening based on a gradient model. Accurate haze removal algorithms are needed in many circumstances. The de-fog feature works through a complex algorithm that first determines the fog density of the scene and then analyses the obscured image before applying contrast and sharpness adjustments to the video in real time. The fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights; the output haze-free image is then reconstructed using this fusion methodology. To increase the accuracy, an interpolation method is used in the output reconstruction. A promising retrieval performance is achieved, especially in particular examples. Keywords: single image, fusion, dehazing, multi-scale fusion, per-pixel, weight map
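A minimal per-pixel fusion sketch is given below; it assumes the two derived inputs (the contrast-enhanced image and the edge-strengthened image) are already computed as float arrays in [0, 1], and the Laplacian-based weight cue is an illustrative choice rather than the authors' exact weight maps.

```python
# Sketch of per-pixel weighted fusion of two derived inputs (assumed given).
import numpy as np
from scipy.ndimage import laplace

def fuse(inputs, eps=1e-6):
    """Fuse derived inputs using normalized per-pixel weight maps."""
    weights = []
    for img in inputs:
        w = np.abs(laplace(img)) + eps          # local contrast as a simple weight cue
        weights.append(w)
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)   # weights sum to 1 at every pixel
    fused = (weights * np.stack(inputs)).sum(axis=0)
    return np.clip(fused, 0.0, 1.0)

# usage: fused = fuse([contrast_enhanced, edge_strengthened])
```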
Procedia PDF Downloads 466
2915 Vibration of a Beam on an Elastic Foundation Using the Variational Iteration Method
Authors: Desmond Adair, Kairat Ismailov, Martin Jaeger
Abstract:
Modelling of Timoshenko beams on elastic foundations has been widely used in the analysis of buildings, geotechnical problems, and, railway and aerospace structures. For the elastic foundation, the most widely used models are one-parameter mechanical models or two-parameter models to include continuity and cohesion of typical foundations, with the two-parameter usually considered the better of the two. Knowledge of free vibration characteristics of beams on an elastic foundation is considered necessary for optimal design solutions in many engineering applications, and in this work, the efficient and accurate variational iteration method is developed and used to calculate natural frequencies of a Timoshenko beam on a two-parameter foundation. The variational iteration method is a technique capable of dealing with some linear and non-linear problems in an easy and efficient way. The calculations are compared with those using a finite-element method and other analytical solutions, and it is shown that the results are accurate and are obtained efficiently. It is found that the effect of the presence of the two-parameter foundation is to increase the beam’s natural frequencies and this is thought to be because of the shear-layer stiffness, which has an effect on the elastic stiffness. By setting the two-parameter model’s stiffness parameter to zero, it is possible to obtain a one-parameter foundation model, and so, comparison between the two foundation models is also made.Keywords: Timoshenko beam, variational iteration method, two-parameter elastic foundation model
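For reference, the generic correction functional of the variational iteration method (standard form, not the paper's specific Timoshenko-beam formulation) reads:

```latex
u_{n+1}(x) = u_n(x) + \int_0^{x} \lambda(s)\left[ L\,u_n(s) + N\,\tilde{u}_n(s) - g(s) \right]\mathrm{d}s
```

where L and N are the linear and nonlinear operators, g the source term, λ(s) a Lagrange multiplier identified via variational theory, and ũ_n the restricted variation.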
Procedia PDF Downloads 197
2914 Stability Analysis of Stagnation-Point Flow past a Shrinking Sheet in a Nanofluid
Authors: Amin Noor, Roslinda Nazar, Norihan Md. Arifin
Abstract:
In this paper, a numerical and theoretical study has been performed for the stagnation-point boundary layer flow and heat transfer towards a shrinking sheet in a nanofluid. The mathematical nanofluid model in which the effect of the nanoparticle volume fraction is taken into account is considered. The governing nonlinear partial differential equations are transformed into a system of nonlinear ordinary differential equations using a similarity transformation which is then solved numerically using the function bvp4c from Matlab. Numerical results are obtained for the skin friction coefficient, the local Nusselt number as well as the velocity and temperature profiles for some values of the governing parameters, namely the nanoparticle volume fraction Φ, the shrinking parameter λ and the Prandtl number Pr. Three different types of nanoparticles are considered, namely Cu, Al2O3 and TiO2. It is found that solutions do not exist for larger shrinking rates and dual (upper and lower branch) solutions exist when λ < -1.0. A stability analysis has been performed to show which branch solutions are stable and physically realizable. It is also found that the upper branch solutions are stable while the lower branch solutions are unstable.Keywords: heat transfer, nanofluid, shrinking sheet, stability analysis, stagnation-point flow
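A minimal sketch of the numerical approach follows, assuming the classical clear-fluid stagnation-point similarity equation f''' + f f'' + 1 − f'² = 0 with f(0) = 0, f'(0) = λ, f'(∞) = 1 as a stand-in for the full nanofluid model (which is not reproduced here); SciPy's solve_bvp plays the role MATLAB's bvp4c plays in the abstract.

```python
import numpy as np
from scipy.integrate import solve_bvp

lam = -0.5          # shrinking parameter (negative: shrinking sheet)
eta_inf = 10.0      # numerical "infinity"

def rhs(eta, y):
    # y = [f, f', f'']
    f, fp, fpp = y
    return np.vstack([fp, fpp, -f * fpp - 1.0 + fp**2])

def bc(y0, yinf):
    # f(0) = 0, f'(0) = lam, f'(inf) = 1
    return np.array([y0[0], y0[1] - lam, yinf[1] - 1.0])

eta = np.linspace(0.0, eta_inf, 200)
y_guess = np.zeros((3, eta.size))
y_guess[1] = 1.0 - (1.0 - lam) * np.exp(-eta)           # guess satisfying both f' conditions
y_guess[0] = np.cumsum(y_guess[1]) * (eta[1] - eta[0])  # rough integral of the guess
sol = solve_bvp(rhs, bc, eta, y_guess)
print("f''(0) =", sol.y[2, 0])   # related to the skin-friction coefficient
```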
Procedia PDF Downloads 383
2913 How to Perform Proper Indexing?
Authors: Watheq Mansour, Waleed Bin Owais, Mohammad Basheer Kotit, Khaled Khan
Abstract:
Efficient query processing is one of the foremost requisites in any business environment to satisfy consumer needs. This paper investigates the various types of indexing models, viz. primary, secondary, and multi-level, and examines the types of queries for which each indexing model performs effectively. This study also discusses the inherent advantages and disadvantages of each indexing model and how an indexing model can be chosen for a particular environment. The paper draws parallels between the various indexing models and provides recommendations that would help a database administrator zero in on the indexing model best suited to the needs and requirements of the production environment. In addition, to cope with the colossal volumes of data generated today, the study proposes two novel indexing techniques that can be used to index highly unstructured and structured Big Data effectively. The study also briefly discusses some best practices that the industry should follow in order to choose an indexing model appropriate to its requirements. Keywords: indexing, hashing, latent semantic indexing, B-tree
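The point that the query mix drives the choice of index can be illustrated with a small, self-contained sketch (not from the paper): a hash index answers exact-match lookups in O(1) on average but cannot serve range queries, while an ordered, B-tree-like index, emulated here with a sorted list and binary search, serves both point and range queries in O(log n).

```python
import bisect

records = [(17, "row-a"), (3, "row-b"), (42, "row-c"), (8, "row-d")]

# "Hash index": key -> row, good for WHERE key = ?
hash_index = {key: row for key, row in records}
print(hash_index.get(42))                        # exact-match lookup

# "Ordered index": sorted keys, good for WHERE key BETWEEN lo AND hi
ordered = sorted(records)                        # sorted by key
keys = [k for k, _ in ordered]
lo, hi = 5, 20
left = bisect.bisect_left(keys, lo)
right = bisect.bisect_right(keys, hi)
print([ordered[i] for i in range(left, right)])  # range scan
```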
Procedia PDF Downloads 161
2912 Low Power Glitch Free Dual Output Coarse Digitally Controlled Delay Lines
Authors: K. Shaji Mon, P. R. John Sreenidhi
Abstract:
In deep-submicrometer CMOS processes, the time-domain resolution of a digital signal is becoming higher than the voltage resolution of analog signals. This trend is pushing toward a new circuit design paradigm in which traditional analog signal processing is expected to be progressively replaced by the processing of time in the digital domain. Within this paradigm, digitally controlled delay lines (DCDL) play the role of digital-to-analog converters in traditional, analog-intensive circuits, and digital delay-locked loops are highly prevalent in integrated systems. This paper addresses the glitches present in delay circuits along with area, power dissipation, and signal integrity. The DCDLs under study have been designed in a 90 nm, six-metal-layer copper, strained-SiGe, low-k dielectric CMOS technology. Simulation and synthesis results show that the novel dual-output coarse DCDL exhibits no glitches, dissipates less power, and consumes less area compared to the glitch-free NAND-based DCDL. Keywords: glitch free, NAND-based DCDL, CMOS, deep-submicrometer
Procedia PDF Downloads 246
2911 Three-Dimensional Carbon Foams for the Application as Electrode Material in Energy Storage Systems
Authors: H. Beisch, J. Marx, S. Garlof, R. Shvets, I. I. Grygorchak, A. Kityk, B. Fiedler
Abstract:
Carbon materials, especially three-dimensional carbon foams, show very high potential as electrode materials for energy storage systems such as batteries and supercapacitors with uniquely fast charging and discharging times. Owing to their high specific surface areas (SSA), high specific capacities can be reached. Globugraphite is a newly developed carbon foam with an interconnected globular carbon morphology. In particular, this foam has a statistically distributed hierarchical pore structure resulting from the manufacturing process, which is based on sintered ceramic templates and a final chemical vapor deposition (CVD) step. For morphology characterization, scanning electron microscopy (SEM) and transmission electron microscopy (TEM) are used. In addition, the SSA is determined by nitrogen adsorption combined with the Brunauer–Emmett–Teller (BET) theory. Electrochemical measurements in organic and inorganic electrolytes provide high energy densities and power densities resulting from ion adsorption and the formation of an electrochemical double layer. All values are summarized in a Ragone diagram. Finally, power densities up to 833 W/kg and energy densities up to 48 Wh/kg could be achieved. The corresponding SSA is between 376 m²/g and 859 m²/g. For the organic electrolyte, a specific capacitance of 71 F/g at a density of 20 mg/cm³ was achieved. Keywords: BET, CVD process, electron microscopy, Ragone diagram
Procedia PDF Downloads 176
2910 Sum Capacity with Regularized Channel Inversion in Multi-Antenna Downlink Systems under Equal Power Constraint
Authors: Attaullah Khawaja, Amna Shabbir
Abstract:
Channel inversion is one of the simplest techniques for multiuser downlink systems with single-antenna users. In this paper, regularized channel inversion under an equal power constraint in the multiuser multiple-input multiple-output (MU-MIMO) broadcast channel is considered. Sum capacity with plain channel inversion, also known as zero-forcing beamforming (ZFBF), and the optimum sum capacity using dirty paper coding (DPC) have also been investigated. Analysis and simulations show that regularization enhances the system performance, enables linear growth in sum capacity, and works especially well in the low signal-to-noise ratio (SNR) regime. Keywords: broadcast channel, channel inversion, multiple antenna multiple-user wireless, multiple-input multiple-output (MIMO), regularization, dirty paper coding (DPC), sum capacity
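A small numerical sketch of the two linear precoders compared above is shown below; the channel statistics, SNR, regularization choice, and power normalization are illustrative assumptions, not the paper's exact simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 4, 4                        # single-antenna users, transmit antennas
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
snr = 10 ** (10 / 10)              # 10 dB
alpha = K / snr                    # classical regularization level for RZF

def precoder(H, alpha=0.0):
    # W = H^H (H H^H + alpha I)^(-1), scaled to a unit total-power constraint
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    return W / np.linalg.norm(W, "fro")

def sum_rate(H, W, noise=1.0 / snr):
    G = H @ W                                   # effective channel after precoding
    sig = np.abs(np.diag(G)) ** 2
    interf = np.sum(np.abs(G) ** 2, axis=1) - sig
    return np.sum(np.log2(1 + sig / (interf + noise)))

print("ZF  (plain inversion) sum rate:", sum_rate(H, precoder(H, 0.0)))
print("RZF (regularized)     sum rate:", sum_rate(H, precoder(H, alpha)))
```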
Procedia PDF Downloads 528
2909 The Multi-Lingual Acquisition Patterns of Elementary, High School and College Students in Angeles City, Philippines
Authors: Dennis Infante, Leonora Yambao
Abstract:
The Philippines is a multilingual community. A Filipino learns at least three languages over a lifetime. Since languages are learned and picked up simultaneously from the environment, a student naturally develops a language system that combines features of at least three languages: the local language, English, and Filipino. This study investigates this particular phenomenon and aims to propose a theoretical framework of language acquisition among elementary, high school, and college students in the three languages spoken and used in media, community, business, and school: Kapampangan, the local language; Filipino, the national language; and English. The study randomly selected five students from three participating schools in order to acquire language samples. The samples were analyzed at the subsentential, sentential, and suprasentential levels using grammatical theories. The data are classified to map out the pattern of substitution or shifting from one language to another. Keywords: language acquisition, mother tongue, multiculturalism, multilingual education
Procedia PDF Downloads 383
2908 Artificial Intelligence Methods in Estimating the Minimum Miscibility Pressure Required for Gas Flooding
Authors: Emad A. Mohammed
Abstract:
Utilizing the capabilities of data mining and artificial intelligence, namely fuzzy logic models and artificial neural network models, in the prediction of the minimum miscibility pressure (MMP) required for multi-contact miscible (MCM) displacement of reservoir petroleum by hydrocarbon gas flooding helps greatly in obtaining accurate results. The factors affecting the MMP, as established in the literature and in the dataset, are as follows: XC2-6: intermediate components in the oil (C2–C6, CO2, and H2S), in mole %; XC1: amount of methane in the oil (%); T: temperature (°C); MwC7+: molecular weight of C7+ (g/mol); YC2+: mole percent of the C2+ composition in the injected gas (%); MwC2+: molecular weight of C2+ in the injected gas. Fuzzy logic and neural networks have been used widely in prediction and classification, with relatively high accuracy, in different fields of study. It is well known that a fuzzy inference system can handle uncertainty within the inputs, as in our case. The results of this work showed that the proposed models perform better, with higher performance indices, than other empirical correlations. Keywords: MMP, gas flooding, artificial intelligence, correlation
Procedia PDF Downloads 147
2907 Coupling Large Language Models with Disaster Knowledge Graphs for Intelligent Construction
Authors: Zhengrong Wu, Haibo Yang
Abstract:
In the context of escalating global climate change and environmental degradation, the complexity and frequency of natural disasters are continually increasing. Confronted with an abundance of information regarding natural disasters, traditional knowledge graph construction methods, which rely heavily on grammatical rules and prior knowledge, demonstrate suboptimal performance in processing complex, multi-source disaster information. This study, drawing upon past natural disaster reports, disaster-related literature in both English and Chinese, and data from various disaster monitoring stations, constructs question-answer templates based on large language models. Using the P-Tune method, the ChatGLM2-6B model is fine-tuned, leading to the development of a disaster knowledge graph based on large language models, which serves as a knowledge base supporting disaster emergency response. Keywords: large language model, knowledge graph, disaster, deep learning
Procedia PDF Downloads 58
2906 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test
Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston
Abstract:
The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild TBI (traumatic brain injury) and a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem; an estimated 69 million people globally experience a TBI annually [1]. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility in predicting acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study, and testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of these 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall, within-laboratory imprecision (20 day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1, including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting its utility to assist in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury. Keywords: biomarker, diagnostic, neurology, TBI
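The reported diagnostic metrics can be checked directly from the quoted counts; the short sketch below recomputes them (the false-positive count is inferred as 1779 − 713 = 1066).

```python
# Worked check of the reported metrics from the pivotal-study counts above.
tp, fn = 116, 4          # CT-positive subjects with positive / negative TBI result
tn = 713                 # CT-negative subjects with negative TBI result
fp = 1779 - tn           # remaining CT-negative subjects

sensitivity = tp / (tp + fn)          # 116/120  ~ 96.7 %
specificity = tn / (tn + fp)          # 713/1779 ~ 40.1 %
npv = tn / (tn + fn)                  # 713/717  ~ 99.4 %
print(f"Sensitivity {sensitivity:.1%}, Specificity {specificity:.1%}, NPV {npv:.1%}")
```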
Procedia PDF Downloads 70
2905 Core Number Optimization Based Scheduler to Order/Map Simulink Application
Authors: Asma Rebaya, Imen Amari, Kaouther Gasmi, Salem Hasnaoui
Abstract:
In recent years, the number of cores in digital signal processors and general-purpose processors has increased spectacularly. Concurrently, significant research has been carried out to benefit from this high degree of parallelism; in particular, this research focuses on providing efficient scheduling of hardware/software systems onto multicore architectures. The scheduling process consists of statically choosing one core to execute each task and specifying an execution order for the application tasks. In this paper, we describe an efficient scheduler that calculates the optimal number of cores required to schedule an application, gives a heuristic scheduling solution, and evaluates its cost. Our results are evaluated and compared with those of the Preesm scheduler, and we show that ours allows better scheduling in terms of latency, computation time, and number of cores. Keywords: computation time, hardware/software system, latency, optimization, multi-cores platform, scheduling
Procedia PDF Downloads 284
2904 Settlement Analysis of Axially Loaded Bored Piles: A Case History
Authors: M. Mert, M. T. Ozkan
Abstract:
Pile load tests should be applied to check bearing capacity calculations and to determine the settlement of the pile corresponding to the test load. Strain gauges can be installed in the pile in order to determine the shaft resistance of the pile for each soil layer. Detailed results can be obtained by means of strain gauges placed at certain levels in the test piles. In the scope of this study, pile load test data obtained from two different projects are examined. Instrumented static pile load tests were applied on a total of 7 test bored piles of different diameters (80 cm, 150 cm, and 200 cm) and different lengths (between 30-76 m) at two different project sites. Settlement analysis of the test piles is done using several load transfer methods and the finite element method. Plaxis 3D, a three-dimensional finite element program, is also used for the settlement analysis of the test piles. In this study, the bearing capacities of the test piles are first determined and compared with the strain gauge data, which are required for the settlement analysis. Then, settlement values of the test piles are estimated using load transfer methods developed in recent years and the finite element method. The aim of this study is to show the similarities and differences between the results obtained from settlement analysis methods and instrumented pile load tests. Keywords: failure, finite element method, monitoring and instrumentation, pile, settlement
Procedia PDF Downloads 172
2903 Multi-Objective Genetic Algorithm for Optimizing Machining Process Parameters
Authors: Dylan Santos De Pinho, Nabil Ouerhani
Abstract:
Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users because of economic, ecological and legislation-related reasons. Many machine-tool builders are seeking solutions that allow the reduction of energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-type lathe. We employ genetic algorithms to find optimal machining parameters – the set of parameters that lead to the best trade-off between energy consumption, part quality and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions, which are objective functions that permit to evaluate a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the material removal rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions. One fitness function uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data. Another fitness function uses Lasso regression to determine the same relation. The goal is, then, to find out which fitness functions best predict the energy consumption of a Swiss-type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes – to determine the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-type lathe has been used to carry out the experiments. A mechanical part including various Swiss-type machining operations has been selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand. Each CNC program considers a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured. All collected data are assigned to the appropriate CNC program and thus to the set of machining process parameters. The evaluation approach consists in calculating the correlation between the normalized measured power consumption and the normalized power consumption prediction for each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficient, at 97%. The "Material Removal Rate" (MRR) fitness function has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%. Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization
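As an illustration of how the two deterministic fitness functions could be scored against measured spindle power, the sketch below computes a material-removal-rate predictor and a simplified Kienzle-type cutting-power predictor and correlates each with a placeholder measured-power series; the Kienzle coefficients and all data here are assumptions, not the project's values.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
ap = rng.uniform(0.2, 2.0, n)       # depth of cut [mm]
f = rng.uniform(0.02, 0.2, n)       # feed [mm/rev]
vc = rng.uniform(40, 160, n)        # cutting speed [m/min]

# Candidate predictors
mrr = ap * f * vc                                   # material removal rate (up to a constant)
kc11, mc = 2100.0, 0.25                             # placeholder Kienzle coefficients [N/mm^2], exponent
Fc = kc11 * ap * f ** (1.0 - mc)                    # simplified Kienzle cutting force [N]
p_kienzle = Fc * vc / 60.0 / 1000.0                 # cutting power [kW]

# Placeholder "measured" spindle power, for illustration only
p_measured = 0.9 * p_kienzle + rng.normal(0, 0.05, n)

for name, pred in [("MRR", mrr), ("Kienzle", p_kienzle)]:
    r = np.corrcoef(pred, p_measured)[0, 1]         # Pearson correlation, as in the evaluation
    print(f"{name:8s} correlation with measured power: {r:.2f}")
```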
Procedia PDF Downloads 149
2902 Haemocompatibility of Surface Modified AISI 316L Austenitic Stainless Steel Tested in Artificial Plasma
Authors: W. Walke, J. Przondziono, K. Nowińska
Abstract:
The study comprises an evaluation of the suitability of the passive layer created on the surface of AISI 316L stainless steel for products intended to be in contact with blood. For that purpose, prior to and after chemical passivation, samples were subjected to 7 days of exposure in artificial plasma at a temperature of T = 37°C. Next, tests of metallic ion release from the surface into the solution were performed. The tests were performed with a JY 2000 spectrometer by Jobin Yvon, employing inductively coupled plasma atomic emission spectrometry (ICP-AES). In order to characterize the physical and chemical features of the electrochemical processes taking place during exposure of the samples to artificial plasma, tests with electrochemical impedance spectroscopy were proposed. These tests were performed with a measuring unit equipped with a PGSTAT 302N potentiostat with an FRA2 attachment for impedance tests. Measurements were made in an environment simulating human blood at a temperature of T = 37°C. The performed tests proved that application of the chemical passivation process to AISI 316L stainless steel used for the production of goods intended to be in contact with blood is well-grounded and useful for improving the safety of their usage. Keywords: AISI 316L stainless steel, chemical passivation, artificial plasma, ions infiltration, EIS
Procedia PDF Downloads 267
2901 Relay Node Selection Algorithm for Cooperative Communications in Wireless Networks
Authors: Sunmyeng Kim
Abstract:
IEEE 802.11a/b/g standards support multiple transmission rates. Even though the use of multiple transmission rates increases the WLAN capacity, this feature leads to the performance anomaly problem. Cooperative communication was introduced to relieve the performance anomaly problem: data packets are delivered to the destination much faster through a relay node at a high rate than through direct transmission to the destination at a low rate. In legacy cooperative protocols, a source node chooses a relay node based only on the transmission rate. Therefore, they are not well suited to multi-flow environments, since they do not consider the effect of other flows. To alleviate this effect, we propose a new relay node selection algorithm based on the transmission rate and the channel contention level. Performance evaluation is conducted using simulation and shows that the proposed protocol significantly outperforms the previous protocol in terms of throughput and delay. Keywords: cooperative communications, MAC protocol, relay node, WLAN
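An illustrative selection rule combining link rates with the relay's contention level might look like the sketch below; the harmonic-mean two-hop rate and the busy-fraction discount are assumptions made for the example, not the paper's exact metric.

```python
def two_hop_rate(r_sr, r_rd):
    """Effective rate when the same payload is sent source -> relay -> destination."""
    return 1.0 / (1.0 / r_sr + 1.0 / r_rd)

def select_relay(candidates, direct_rate):
    """candidates: list of (name, R_sr, R_rd, busy_fraction); falls back to direct transmission."""
    best_name, best_rate = None, direct_rate
    for name, r_sr, r_rd, busy in candidates:
        rate = two_hop_rate(r_sr, r_rd) * (1.0 - busy)   # contention-discounted two-hop rate
        if rate > best_rate:
            best_name, best_rate = name, rate
    return best_name, best_rate

relays = [("A", 54.0, 54.0, 0.6),    # fast links but heavily contended channel
          ("B", 36.0, 48.0, 0.1)]    # slower links, lightly contended channel
print(select_relay(relays, direct_rate=11.0))   # -> relay B in this example
```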
Procedia PDF Downloads 334
2900 Polyaniline/CMK-3/Hydroquinone Composite Electrode for Supercapacitor Application
Authors: Hu-Cheng Weng, Jhen-Ting Huang, Chia-Chia Chang, An-Ya Lo
Abstract:
In this study, the carbon mesoporous material CMK-3 was adopted as a supporting material for the electroactive polymer polyaniline (PANI) for supercapacitor application, where hydroquinone (HQ) was integrated to enhance the redox reaction of PANI. The results show that the addition of PANI improves the capacitance of the electrode from 89 F/g (CMK-3) to 337 F/g (PANI/CMK-3), and the addition of HQ further improves the capacitance to 463 F/g (PANI/CMK-3/HQ). The PANI provides higher energy density and also acts as the binder of the electrode; the CMK-3 provides higher electric double-layer capacitance (EDLC) and stabilizes the polyaniline through its high porosity. With the addition of HQ, the capacitance of PANI/CMK-3 was further enhanced. In-situ analyses including cyclic voltammetry (CV), chronopotentiometry (CP), and electrochemical impedance spectroscopy (EIS) were applied for electrode performance examination. For materials characterization, the crystal structure, morphology, microstructure, and porosity were examined by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), and 77 K N2 adsorption/desorption analyses, respectively. The effects of electrolyte pH value, PANI polymerization time, HQ concentration, and PANI/CMK-3 ratio on capacitance are discussed. The durability was also studied by a long-term operation test. The results show that PANI/CMK-3/HQ has great potential for supercapacitor application. Finally, the potential of an all-solid PANI/CMK-3/HQ based supercapacitor was successfully demonstrated. Keywords: CMK3, PANI, redox electrolyte, solid supercapacitor
Procedia PDF Downloads 139
2899 Comparison of Parallel CUDA and OpenMP Implementations of Memetic Algorithms for Solving Optimization Problems
Authors: Jason Digalakis, John Cotronis
Abstract:
Memetic algorithms (MAs) are useful for solving optimization problems, but it is quite difficult to search the solution space of an optimization problem with large dimensions, and it is a challenge to use all the cores of the system. In this study, a sequential implementation of a memetic algorithm is converted into a concurrent version, which is executed on the cores of both the CPU and the GPU. For this reason, the CUDA and OpenMP libraries are applied to the parallel algorithm to obtain concurrent execution on the GPU and CPU, respectively. The aim of this study is to compare the CPU and GPU implementations of the memetic algorithm. For this purpose, fourteen benchmark functions are selected as test problems. The obtained results indicate that our approach leads to speedups of up to five thousand times compared to one CPU thread while maintaining reasonable result quality. This clearly shows that GPUs have the potential to accelerate MAs and allow them to solve much more complex tasks. Keywords: memetic algorithm, CUDA, GPU-based memetic algorithm, open multi processing, multimodal functions, unimodal functions, non-linear optimization problems
Procedia PDF Downloads 105
2898 Validation of Asymptotic Techniques to Predict Bistatic Radar Cross Section
Authors: M. Pienaar, J. W. Odendaal, J. C. Smit, J. Joubert
Abstract:
Simulations are commonly used to predict the bistatic radar cross section (RCS) of military targets since characterization measurements can be expensive and time consuming. It is thus important to accurately predict the bistatic RCS of targets. Computational electromagnetic (CEM) methods can be used for bistatic RCS prediction. CEM methods are divided into full-wave and asymptotic methods. Full-wave methods are numerical approximations to the exact solution of Maxwell’s equations. These methods are very accurate but are computationally very intensive and time consuming. Asymptotic techniques make simplifying assumptions in solving Maxwell's equations and are thus less accurate but require less computational resources and time. Asymptotic techniques can thus be very valuable for the prediction of bistatic RCS of electrically large targets, due to the decreased computational requirements. This study extends previous work by validating the accuracy of asymptotic techniques to predict bistatic RCS through comparison with full-wave simulations as well as measurements. Validation is done with canonical structures as well as complex realistic aircraft models instead of only looking at a complex slicy structure. The slicy structure is a combination of canonical structures, including cylinders, corner reflectors and cubes. Validation is done over large bistatic angles and at different polarizations. Bistatic RCS measurements were conducted in a compact range, at the University of Pretoria, South Africa. The measurements were performed at different polarizations from 2 GHz to 6 GHz. Fixed bistatic angles of β = 30.8°, 45° and 90° were used. The measurements were calibrated with an active calibration target. The EM simulation tool FEKO was used to generate simulated results. The full-wave multi-level fast multipole method (MLFMM) simulated results together with the measured data were used as reference for validation. The accuracy of physical optics (PO) and geometrical optics (GO) was investigated. Differences relating to amplitude, lobing structure and null positions were observed between the asymptotic, full-wave and measured data. PO and GO were more accurate at angles close to the specular scattering directions and the accuracy seemed to decrease as the bistatic angle increased. At large bistatic angles PO did not perform well due to the shadow regions not being treated appropriately. PO also did not perform well for canonical structures where multi-bounce was the main scattering mechanism. PO and GO do not account for diffraction but these inaccuracies tended to decrease as the electrical size of objects increased. It was evident that both asymptotic techniques do not properly account for bistatic structural shadowing. Specular scattering was calculated accurately even if targets did not meet the electrically large criteria. It was evident that the bistatic RCS prediction performance of PO and GO depends on incident angle, frequency, target shape and observation angle. The improved computational efficiency of the asymptotic solvers yields a major advantage over full-wave solvers and measurements; however, there is still much room for improvement of the accuracy of these asymptotic techniques.Keywords: asymptotic techniques, bistatic RCS, geometrical optics, physical optics
Procedia PDF Downloads 261
2897 Planning a Supply Chain with Risk and Environmental Objectives
Authors: Ghanima Al-Sharrah, Haitham M. Lababidi, Yusuf I. Ali
Abstract:
The main objective of the current work is to introduce sustainability factors into the optimization of the supply chain model for process industries. Supply chain models are normally based on purely economic considerations related to costs and profits. To account for sustainability, two additional factors have been introduced: environment and risk. A supply chain for an entire petroleum organization has been considered for implementing and testing the proposed optimization models. The environmental and risk factors were introduced as indicators reflecting the anticipated impact of the optimal production scenarios on sustainability. The aggregation method used in extending the single-objective function to a multi-objective function proved quite effective in balancing the contribution of each objective term. The results indicate that introducing the sustainability factors slightly reduces the economic benefit while improving the environmental and risk-reduction performance of the process industries. Keywords: environmental indicators, optimization, risk, supply chain
Procedia PDF Downloads 353
2896 A Generic Middleware to Instantly Sync Intensive Writes of Heterogeneous Massive Data via Internet
Authors: Haitao Yang, Zhenjiang Ruan, Fei Xu, Lanting Xia
Abstract:
Industry data centers often need to sync data changes reliably and instantly from a large number of heterogeneous autonomous relational databases accessed via the not-so-reliable Internet, for which a practical universal sync middleware of low maintenance and operation cost is much wanted; however, developing such a product and adapting it to various scenarios is a very sophisticated and continuous practice. The authors have been devising, applying, and optimizing a generic sync middleware system, named GSMS, since 2006, holding the principles, or advantages, that the middleware must be SyncML-compliant and transparent to the data application layer logic, need not refer to implementation details of the databases synced, must not rely on the host operating systems deployed, and must be light-weight in construction and hence of low cost. A series of ultimate experiments with GSMS sync performance was conducted for a persuasive example of a source relational database that underwent a broad range of write loads, say, from one thousand to one million intensive writes within a few minutes. The tests proved that GSMS has achieved an instant sync level of well below a fraction of a millisecond per record synced, and GSMS' smooth performance under ultimate write loads also showed that it is feasible and competent. Keywords: heterogeneous massive data, instantly sync intensive writes, Internet generic middleware design, optimization
Procedia PDF Downloads 123
2895 From Single to Multilayer Polyvinylidene Fluoride Based Polymer for Electro-Caloric Cooling
Authors: Nouh Zeggai, Lucas Debrux, Fabien Parrain, Brahim Dkhil, Martino Lobue, Morgan Almanza
Abstract:
Refrigeration and air conditioning, especially vapor-compression refrigeration, account for some of the largest energy uses in our daily life. Electrocaloric materials might appear as an alternative toward solid-state cooling. Polyvinylidene fluoride (PVDF) based polymers have shown promising adiabatic temperature change (∆T) and entropy change (∆S). There is practically no limit to the electric field that can be applied, except the one that the material can withstand. However, when working with the large surface required in a device, the chance of having a defect is larger, which can drastically reduce the breakdown voltage and thus the electrocaloric properties. In this work, we propose to study how the characteristics of a single film transpose when going to a multilayer. The laminator and the hot press appear as two interesting processes that have been investigated to achieve a multilayer film. The study is mainly focused on the breakdown field and the adiabatic temperature change, but the phase and crystallinity have also been measured. We process single-layer PVDF-based films and assemble them to obtain a multilayer. The hot-pressing method and lamination were used for the production of the thin films. The multilayer film shows higher breakdown strength, temperature change, and crystallinity (beta phase) using the hot-press technique. Keywords: PVDF-TrFE-CFE, multilayer, electrocaloric effect, hot press, cooling device
Procedia PDF Downloads 172
2894 A CFD Study of the Performance Characteristics of Vented Cylinders as Vortex Generators
Authors: R. Kishan, R. M. Sumant, S. Suhas, Arun Mahalingam
Abstract:
This paper mainly researches the influence of a vortex generator on the lift and drag coefficients when the vortex generator is mounted on a flat plate. Vented cylinders were used as vortex generators, which intensify vortex shedding in the wake of the vented cylinder compared to the baseline circular cylinder; this ensures a more attached flow and increases the lift force of the system. First, vented cylinders were analyzed in commercial CFD software and compared with baseline cylinders for different angles of attack, and the variation of lift and drag forces was further studied by varying the Reynolds number to account for the influence of turbulence and the boundary layer on the flow. Later, vented cylinders were mounted on a flat plate, and the variation of the lift and drag coefficients was studied by varying the angle of attack and by studying the dependence of the coefficients on the Reynolds number and the dimensions of the vortex generator. Mesh grid sensitivity is studied to check the convergence of the results obtained. It was found that the use of vented cylinders as vortex generators increased the lift forces with only a small variation in drag forces as the angle of attack was varied. Keywords: CFD analysis, drag coefficient, FVM, lift coefficient, modeling, Reynolds number, simulation, vortex generators, vortex shedding
Procedia PDF Downloads 432
2893 Formex Algebra Adaptation into Parametric Design Tools: Dome Structures
Authors: Réka Sárközi, Péter Iványi, Attila B. Széll
Abstract:
The aim of this paper is to present the adaptation of the dome construction tool of formex algebra to the parametric design software Grasshopper. Formex algebra is a mathematical system, primarily used for planning structural systems such as truss-grid domes and vaults, together with the programming language Formian. The goal of the research is to allow architects to plan truss-grid structures easily with parametric design tools based on the versatile formex algebra mathematical system. To produce regular structures, coordinate-system transformations are used, and the dome structures are defined in a spherical coordinate system. Owing to the abilities of the parametric design software, it is possible to apply further modifications to the structures and obtain special forms. The paper covers the basic dome types and also additional dome-based structures using special coordinate-system solutions based on spherical coordinate systems. It also covers additional structural possibilities, such as making double-layer grids in all geometric forms. The adaptation of formex algebra and the parametric workflow of Grasshopper together give the possibility of quick and easy design and optimization of special truss-grid domes. Keywords: parametric design, structural morphology, space structures, spherical coordinate system
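The underlying coordinate-system idea can be sketched in a few lines: a dome node grid is generated in spherical coordinates (constant radius, stepped polar and azimuth angles) and converted to Cartesian points. The function below is a simplified stand-in for what the Grasshopper adaptation of the formex dome functions produces, not the actual tool.

```python
import math

def dome_nodes(radius, n_rings, n_segments, sweep_deg=70.0):
    """Node grid of a spherical dome cap; returns a ring-major list of (x, y, z)."""
    nodes = []
    for i in range(n_rings + 1):
        theta = math.radians(sweep_deg) * i / n_rings    # polar angle measured from the apex
        for j in range(n_segments):
            phi = 2.0 * math.pi * j / n_segments          # azimuth angle
            x = radius * math.sin(theta) * math.cos(phi)
            y = radius * math.sin(theta) * math.sin(phi)
            z = radius * math.cos(theta)
            nodes.append((x, y, z))
    return nodes

print(len(dome_nodes(10.0, n_rings=6, n_segments=24)))   # 7 rings x 24 segments = 168 nodes
```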
Procedia PDF Downloads 257
2892 An Assessment of Unconventional Hydrocarbon Potential of the Silurian Dadaş Shales in Diyarbakır Basin, Türkiye
Authors: Ceren Sevimli, Sedat İnan
Abstract:
The Silurian Dadaş Formation within the Diyarbakır Basin in SE Türkiye, like other Silurian shales in North Africa and the Middle East, represents a significant prospect for conventional and unconventional hydrocarbon exploration. The Diyarbakır Basin remains relatively underexplored, presenting untapped potential that warrants further investigation. This study focuses on the thermal maturity and hydrocarbon generation histories of the Silurian Dadaş shales, utilizing a basin modeling approach. The Dadaş shales are organic-rich and contain mainly Type II kerogen; in particular, the basal layer contains up to 10 wt. % TOC and is thus named the "hot shale". The research integrates geological, geochemical, and basin modeling data to elucidate the unconventional hydrocarbon potential of this formation, which is crucial given the global demand for energy and the need for new resources. Data obtained from previous studies were used to calibrate the basin model, which was established using the PetroMod software (Schlumberger). The calibrated model results suggest that the Dadaş shales are in the oil generation window and that the major episode of thermal maturation and hydrocarbon generation took place prior to the Alpine orogeny (uplift and erosion). The modeling results elucidate the burial history, maturity history, and hydrocarbon production history of the Silurian-aged Dadaş shales, as well as their hydrocarbon content in the area. Keywords: dadaş formation, diyarbakır basin, silurian hot shale, unconventional hydrocarbon
Procedia PDF Downloads 37
2891 Production Plan and Technological Variants Optimization by Goal Programming Methods
Authors: Tunjo Perić, Franjo Bratić
Abstract:
In this paper, the goal programming methodology is applied to solve the multiple-objective problem of technological variant and production plan optimization. The optimization criteria are determined, and a multiple-objective linear programming model for solving the problem of technological variant and production plan optimization is formulated and solved. The obtained results are then analysed. They point to the possibility of efficiently applying the goal programming methodology to the problem of technological variant and production plan optimization. The paper also points out the advantages of applying the goal programming methodology compared to the Surrogate Worth Trade-off method for this problem. Keywords: goal programming, multi objective programming, production plan, SWT method, technological variants
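A minimal weighted goal-programming sketch, solved as a linear program with SciPy, is given below; the two goals, targets, and weights are illustrative assumptions, not the paper's production-planning data.

```python
# Decision vector: x = [x1, x2, d1-, d1+, d2-, d2+], where d-/d+ are the
# under- and over-achievement deviations of each goal.
import numpy as np
from scipy.optimize import linprog

# Goal 1: profit 3*x1 + 2*x2 should reach 12  -> penalise under-achievement d1-
# Goal 2: labour x1 + 2*x2 should not exceed 10 -> penalise over-achievement d2+
A_eq = np.array([[3, 2, 1, -1, 0, 0],
                 [1, 2, 0, 0, 1, -1]], dtype=float)
b_eq = np.array([12.0, 10.0])
c = np.array([0, 0, 2.0, 0, 0, 1.0])      # weights on the penalised deviations only

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6, method="highs")
x1, x2, d1m, d1p, d2m, d2p = res.x
print(f"x1={x1:.2f}, x2={x2:.2f}, unmet profit={d1m:.2f}, excess labour={d2p:.2f}")
```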
Procedia PDF Downloads 382
2890 An Approach of Node Model TCnNet: Trellis Coded Nanonetworks on Graphene Composite Substrate
Authors: Diogo Ferreira Lima Filho, José Roberto Amazonas
Abstract:
Nanotechnology opens the door to new paradigms and introduces a variety of novel tools enabling a plethora of potential applications in the biomedical, industrial, environmental, and military fields. This work proposes an integrated node model that applies the concepts of TCNet to networks of nanodevices, where the nodes are cooperatively interconnected with a low-complexity Mealy machine (MM) topology, integrating in the same electronic system the modules necessary for independent operation in wireless sensor networks (WSNs): rectennas (RF-to-DC power converters), code generators based on a finite state machine (FSM) and trellis decoder, and on-chip transmit/receive, with autonomy in terms of energy sources obtained by applying the energy harvesting technique. This approach considers the use of a graphene composite substrate (GCS) for the integrated electronic circuits, meeting the following characteristics: mechanical flexibility, miniaturization, and optical transparency, besides being ecological. In addition, graphene consists of a layer of carbon atoms arranged in a honeycomb crystal lattice, which has attracted the attention of the scientific community due to its unique electrical characteristics. Keywords: composite substrate, energy harvesting, finite state machine, graphene, nanotechnology, rectennas, wireless sensor networks
Procedia PDF Downloads 108
2889 Experimental Investigation of Interfacial Bond Strength of Concrete Layers
Authors: Rajkamal Kumar, Sudhir Mishra
Abstract:
The connections between various elements of concrete structures play a vital role in determining the durability of the structures. These connections produce discontinuities, and to ensure the monolithic behavior of structures, they should be carefully designed. Connections between concrete layers may occur in various situations, such as structural repair and rehabilitation or the construction of large structures with cast-in-situ or pre-cast elements. The bond strength at the interface of these concrete layers should be able to prevent progressive slip from taking place, and it should also ensure satisfactory performance of the structure. Different approaches to enhancing the bond strength at the interface have been a major area of research, and nowadays micro-concrete is gaining popularity as a repair material. In this context, this paper presents the experimental results of connections between concrete layers of different ages with artificial indentations at the interface, using two types of repair material: concrete with the same composition as the parent concrete, and ready-mix mortar (micro-concrete). Artificial indentations (grooves and holes) were made on the old layer of concrete to increase the bond strength. Curing plays an important role in determining the bond strength, and the optimum curing duration is also discussed for each type of repair material. Different types of failure patterns are also described. Keywords: adhesion, cohesion, compressive stress, micro-concrete, shear stress, slant shear test
Procedia PDF Downloads 335
2888 Optimization of Surface Roughness in Additive Manufacturing Processes via Taguchi Methodology
Authors: Anjian Chen, Joseph C. Chen
Abstract:
This paper studies a case in which the targeted surface roughness of a fused deposition modeling (FDM) additive manufacturing process is improved. The process is designed to reduce or eliminate defects and to improve the process capability indices Cp and Cpk of the FDM additive manufacturing process. The baseline Cp is 0.274 and the baseline Cpk is 0.654. This research utilizes the Taguchi methodology to eliminate defects and improve the process. The Taguchi method is used to optimize the additive manufacturing process and the printing parameters that affect the targeted surface roughness of FDM additive manufacturing. The Taguchi L9 orthogonal array is used to organize the study of the parameters' effectiveness (four controllable parameters and one non-controllable parameter) on the FDM additive manufacturing process. The four controllable parameters are nozzle temperature [°C], layer thickness [mm], nozzle speed [mm/s], and extruder speed [%]. The non-controllable parameter is the environmental temperature [°C]. After the optimization of the parameters, a confirmation print was produced to prove that the results can reduce the number of defects and improve the process capability index Cp from 0.274 to 1.605 and Cpk from 0.654 to 1.233 for the FDM additive manufacturing process. The final results confirmed that the Taguchi methodology is sufficient to improve the surface roughness of the FDM additive manufacturing process. Keywords: additive manufacturing, fused deposition modeling, surface roughness, six-sigma, Taguchi method, 3D printing
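For reference, the capability indices quoted above follow the usual definitions Cp = (USL − LSL)/(6σ) and Cpk = min(USL − μ, μ − LSL)/(3σ); the sketch below applies them to made-up roughness numbers, since the study's specification limits are not given here.

```python
def capability(mu, sigma, lsl, usl):
    """Process capability indices from the mean, standard deviation, and spec limits."""
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
    return cp, cpk

# Example (assumed numbers): roughness spec 2.0 +/- 1.0 um, measured mean 2.4 um, sigma 0.45 um
cp, cpk = capability(mu=2.4, sigma=0.45, lsl=1.0, usl=3.0)
print(f"Cp = {cp:.3f}, Cpk = {cpk:.3f}")   # an off-centre process pulls Cpk below Cp
```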
Procedia PDF Downloads 395
2887 Utilizing Grid Computing to Enhance Power Systems Performance
Authors: Rafid A. Al-Khannak, Fawzi M. Al-Naima
Abstract:
Power load is one of the most important controlling factors that determine power demand and illustrate power usage, thereby shaping the power market; hence, power load forecasting is the parameter that facilitates understanding and analyzing all these aspects. In this paper, power load forecasting is solved in the MATLAB environment by constructing a neural network for the power load to find an accurate simulated solution with minimum error. The aim of this paper is a developed algorithm that makes the load forecasting application faster. The algorithm is used to enable the MATLAB power application to be executed by multiple machines in a Grid computing system and to accomplish it in much less time, at lower cost, and with high accuracy and quality. Grid computing, the modern distributed computing technology, has been used to enhance the performance of power applications by utilizing idle and desired Grid contributor(s) and by sharing computational power resources. Keywords: DeskGrid, Grid Server, idle contributor(s), grid computing, load forecasting
Procedia PDF Downloads 477