Search results for: random routing optimization technique

8860 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem

Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly

Abstract:

We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits than other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore the lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) whether the problem is possible to solve using AQO, 2) whether it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation, a D-Wave 2X device was used, as well as QxBranch’s QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but it does not scale well, and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints onto those architectures is needed to realize those commercial benefits.
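
To make the 1-hot encoding concrete, the sketch below (illustrative only, not the authors' exact formulation) encodes a small integer variable as QUBO bits with a penalty enforcing the one-hot constraint, then brute-forces the ground state in place of an annealer:

```python
import itertools
import numpy as np

# One-hot encoding of an integer x in {0,1,2,3}: bits b_0..b_3 represent
# x = sum(i * b_i); the penalty A*(sum(b_i) - 1)^2 forces exactly one bit on.
n, A = 4, 10.0
Q = np.zeros((n, n))
for i in range(n):
    Q[i, i] += -A            # expanding the penalty: since b^2 = b, diagonal gets -A
    Q[i, i] += (i - 2) ** 2  # toy objective f(x) = (x - 2)^2 placed on the diagonal
    for j in range(i + 1, n):
        Q[i, j] += 2 * A     # pairwise penalty term 2*A*b_i*b_j

# Brute-force the QUBO ground state (a stand-in for an annealer on 4 qubits).
best = min(itertools.product([0, 1], repeat=n),
           key=lambda b: np.array(b) @ Q @ np.array(b))
print(best, "-> x =", sum(i * bi for i, bi in enumerate(best)))  # (0, 0, 1, 0) -> x = 2
```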

Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard

Procedia PDF Downloads 515
8859 Determination of Klebsiella Pneumoniae Susceptibility to Antibiotics Using Infrared Spectroscopy and Machine Learning Algorithms

Authors: Manal Suleiman, George Abu-Aqil, Uraib Sharaha, Klaris Riesenberg, Itshak Lapidot, Ahmad Salman, Mahmoud Huleihel

Abstract:

Klebsiella pneumoniae is one of the most aggressive multidrug-resistant bacteria associated with human infections, resulting in high mortality and morbidity. Thus, for effective treatment, it is important to diagnose both the species of the infecting bacteria and their susceptibility to antibiotics. Currently used methods for diagnosing bacterial susceptibility to antibiotics are time-consuming (about 24 h after the first culture), so there is a clear need for rapid alternatives. Infrared spectroscopy is a well-known, sensitive, and simple method that can detect minor biomolecular changes in biological samples associated with developing abnormalities. The main goal of this study is to evaluate the potential of infrared spectroscopy, in tandem with the Random Forest and XGBoost machine learning algorithms, to diagnose the susceptibility of Klebsiella pneumoniae to antibiotics within approximately 20 minutes after the first culture. In this study, 1190 Klebsiella pneumoniae isolates were obtained from different patients with urinary tract infections. The isolates were measured with an infrared spectrometer, and the spectra were analyzed by the Random Forest and XGBoost algorithms to determine their susceptibility to nine specific antibiotics. Our results confirm that it was possible to classify the isolates as sensitive or resistant to specific antibiotics with success rates of 80%-85% across the tested antibiotics. These results demonstrate the promising potential of infrared spectroscopy as a powerful diagnostic method for determining Klebsiella pneumoniae susceptibility to antibiotics.
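
A minimal sketch of the classification step, using scikit-learn's Random Forest on synthetic stand-in spectra (the real inputs are preprocessed IR spectra with lab-confirmed susceptibility labels; the matrix width and hyperparameters are assumptions). The XGBoost model follows the same pattern via xgboost.XGBClassifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: rows are preprocessed IR absorption spectra of
# isolates; labels are susceptible (0) / resistant (1) for one antibiotic.
rng = np.random.default_rng(0)
X = rng.normal(size=(1190, 900))    # 1190 isolates x 900 wavenumber bins (assumed width)
y = rng.integers(0, 2, size=1190)   # real labels come from lab susceptibility tests

clf = RandomForestClassifier(n_estimators=500, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy: {acc:.2f}")  # the paper reports 80-85% on real spectra
```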

Keywords: urinary tract infection (UTI), Klebsiella pneumoniae, bacterial susceptibility, infrared spectroscopy, machine learning

Procedia PDF Downloads 154
8858 The Role of Urban Development Patterns for Mitigating Extreme Urban Heat: The Case Study of Doha, Qatar

Authors: Yasuyo Makido, Vivek Shandas, David J. Sailor, M. Salim Ferwati

Abstract:

Mitigating extreme urban heat is challenging in a desert climate such as Doha, Qatar, since outdoor daytime temperatures are often too high for the human body to tolerate. Recent studies demonstrate that cities in arid and semiarid areas can exhibit ‘urban cool islands’ - urban areas that are cooler than the surrounding desert. However, the variation of temperatures over the course of the day and the factors leading to temperature change remain in question. To address these questions, we examined the spatial and temporal variation of air temperature in Doha, Qatar by conducting multiple vehicle-based local temperature observations. We also employed three statistical approaches to model surface temperatures using relevant predictors: (1) Ordinary Least Squares, (2) Regression Tree Analysis, and (3) Random Forest, for three time periods. Although the most important determinant factors varied by day and time, distance to the coast was the most significant determinant at midday. A 70%/30% holdout method was used to create a testing dataset, and the results were validated through Pearson’s correlation coefficient. The Pearson’s analysis suggests that the Random Forest model predicts the surface temperatures more accurately than the other methods. We conclude with recommendations about the types of development patterns that show the greatest potential for reducing extreme heat in arid climates.
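
The three-model comparison with a 70%/30% holdout and Pearson validation can be sketched as follows (synthetic stand-in data; the predictor names and model settings are assumptions):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in data: three predictors (e.g. distance to coast,
# vegetation fraction, built fraction) and a temperature response.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 30 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=500)  # deg C

# 70%/30% holdout, as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=1)
models = {
    "OLS": LinearRegression(),
    "regression tree": DecisionTreeRegressor(random_state=1),
    "random forest": RandomForestRegressor(n_estimators=300, random_state=1),
}
for name, model in models.items():
    r, _ = pearsonr(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: Pearson r = {r:.3f}")
```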

Keywords: desert cities, tree-structure regression model, urban cool island, vehicle temperature traverse

Procedia PDF Downloads 383
8857 Influence of Thickness on Electrical and Structural Properties of Zinc Oxide (ZnO) Thin Films Prepared by RF Sputtering Technique

Authors: M. Momoh, S. Abdullahi, A. U. Moreh

Abstract:

Zinc oxide (ZnO) thin films were prepared on Corning (7059) glass substrates at thicknesses of 75.5 and 130.5 nm by the RF sputtering technique. The deposition was carried out at room temperature, after which the samples were annealed in open air at 150°C. The electrical and structural properties of these films were studied. The electrical properties of the films were monitored by the four-point probe method, while the structural properties were studied by X-ray diffraction (XRD). It was found that the electrical resistance of the films decreases with increasing film thickness. The XRD analysis showed that the films have a peak located at 34.31°-34.35°, corresponding to the (002) plane. Other parameters calculated include the stress (σ) and the grain size (D).
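
Grain size and stress of this kind are conventionally obtained from the (002) peak via the Scherrer equation and the biaxial strain model commonly used for ZnO films; the sketch below uses illustrative numbers, since the paper's FWHM values and constants are not given:

```python
import numpy as np

lam = 1.5406e-10                 # Cu K-alpha wavelength in m (assumed X-ray source)
two_theta = np.radians(34.33)    # (002) peak position from the paper
beta = np.radians(0.25)          # FWHM of the peak in radians (illustrative value)

# Scherrer grain size: D = K * lambda / (beta * cos(theta)), with K ~ 0.9
D = 0.9 * lam / (beta * np.cos(two_theta / 2))

# Biaxial film stress from the c-axis strain of (002) ZnO (common literature form):
c_bulk = 5.2066e-10                          # bulk ZnO c lattice constant, m
c_film = lam / np.sin(two_theta / 2)         # Bragg's law: c = 2*d(002) = lambda/sin(theta)
sigma = -233e9 * (c_film - c_bulk) / c_bulk  # stress in Pa (negative = compressive)

print(f"D = {D * 1e9:.1f} nm, stress = {sigma / 1e9:.2f} GPa")
```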

Keywords: electrical properties, film thickness, structural properties, zinc oxide

Procedia PDF Downloads 368
8856 Optimization and Evaluation of Different Pathways to Produce Biofuel from Biomass

Authors: Xiang Zheng, Zhaoping Zhong

Abstract:

In this study, Aspen Plus was used to simulate the whole process of biomass conversion to liquid fuel via different routes, and the main material- and energy-flow results were obtained. Process optimization and evaluation were carried out for four routes: cellulosic biomass pyrolysis/gasification followed by low-carbon olefin synthesis and olefin oligomerization; biomass hydrothermal depolymerization and polymerization to jet fuel; biomass fermentation to ethanol; and biomass pyrolysis to liquid fuel. The environmental impacts of three biomass feedstocks (poplar wood, corn stover, and rice husk) were compared for the gasification-synthesis pathway. The global warming potential, acidification potential, and eutrophication potential of the three biomasses followed the same order: rice husk > poplar wood > corn stover. In terms of human health hazard potential and solid waste potential, the order was poplar wood > rice husk > corn stover. In the poplar pathway, 100 kg of poplar biomass yielded 11.9 kg of an aviation kerosene fraction and 6.3 kg of a gasoline fraction. The energy conversion rate of the system was 31.6% when the output energy included only the aviation kerosene product. In the base case of the hydrothermal depolymerization process, 14.41 kg of aviation kerosene was produced per 100 kg of biomass. The energy conversion rate of the base case was 33.09%, which increased to 38.47% after the optimal utilization of lignin gasification and steam reforming for hydrogen production. The total exergy efficiency of the system increased from 30.48% to 34.43% after optimization; the exergy loss came mainly from concentrating the dilute precursor solution. Among the environmental impacts, the global warming potential is affected mostly by the production process. Poplar wood was used as the raw material in the process of ethanol production from cellulosic biomass. The simulation results showed that 827.4 kg of pretreatment mixture, 450.6 kg of fermentation broth, and 24.8 kg of ethanol were produced per 100 kg of biomass. The energy output of boiler combustion reached 94.1 MJ, the unit energy consumption of the process was 174.9 MJ, and the energy conversion rate was 33.5%. The environmental impact was concentrated mainly in the production and agricultural processes. Building on the original biomass pyrolysis-to-liquid-fuel route, the enzymatic hydrolysis lignin residue from cellulose fermentation to ethanol was used as the pyrolysis feedstock, coupling the fermentation and pyrolysis processes. In the coupled process, 24.8 kg of ethanol and 4.78 kg of upgraded liquid fuel were produced per 100 kg of biomass, with an energy conversion rate of 35.13%.
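
The energy conversion rates quoted above are ratios of product to feedstock heating values; a sketch of that bookkeeping using typical literature lower heating values (assumed values, so the numbers differ somewhat from the paper's 31.6%, which depends on its actual heating values and system boundaries):

```python
# Assumed lower heating values, MJ/kg (typical literature figures, not the paper's)
LHV = {"poplar": 18.5, "jet_fuel": 43.0, "gasoline": 44.0}

feed_MJ = 100 * LHV["poplar"]        # 100 kg poplar input
jet_MJ = 11.9 * LHV["jet_fuel"]      # 11.9 kg aviation kerosene fraction
gasoline_MJ = 6.3 * LHV["gasoline"]  # 6.3 kg gasoline fraction

print(f"jet only: {jet_MJ / feed_MJ:.1%}")                   # ~28% with these LHVs
print(f"jet + gasoline: {(jet_MJ + gasoline_MJ) / feed_MJ:.1%}")
```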

Keywords: biomass conversion, biofuel, process optimization, life cycle assessment

Procedia PDF Downloads 64
8855 Characterization of the Microbial Induced Carbonate Precipitation Technique as a Biological Cementing Agent for Sand Deposits

Authors: Sameh Abu El-Soud, Zahra Zayed, Safwan Khedr, Adel M. Belal

Abstract:

The population increase in Egypt is driving demand for horizontal land development to make use of different natural resources and to expand beyond the narrow Nile valley. However, this development faces challenges that hinder land and agricultural development. Desertification and moving sand dunes in the western sector of Egypt are considered the major obstacles blocking ideal land use and development. In the proposed research, sandy soil is treated biologically using Bacillus pasteurii bacteria, as these bacteria have the ability to bond sand particles, changing the soil from a loose to a cemented state and thereby reducing the mobility of the sand dunes. The procedure for implementing the Microbial Induced Carbonate Precipitation (MICP) technique is examined, and the different factors affecting this process, such as the medium of bacterial sample preparation, the optical density (OD600), the reactant concentration, and the injection rates and intervals, are highlighted. Based on the findings of the MICP treatment of sandy soil, conclusions and future recommendations are reached.

Keywords: soil stabilization, biological treatment, microbial induced carbonate precipitation (MICP), sand cementation

Procedia PDF Downloads 233
8854 Object-Based Image Analysis for Gully-Affected Area Detection in the Hilly Loess Plateau Region of China Using Unmanned Aerial Vehicle

Authors: Hu Ding, Kai Liu, Guoan Tang

Abstract:

The Chinese Loess Plateau suffers from serious gully erosion induced by natural and human causes. Detection of gully features, including the gully-affected area and its two-dimensional parameters (length, width, area, etc.), is a significant task not only for researchers but also for policy-makers. This study aims at gully-affected area detection in three catchments of the Chinese Loess Plateau, selected in Changwu, Ansai, and Suide, using an unmanned aerial vehicle (UAV). The methodology comprises a sequence of UAV data generation, image segmentation, feature calculation and selection, and random forest classification. Two experiments were conducted to investigate the influence of the segmentation strategy and feature selection. Results showed that the vertical and horizontal root-mean-square errors were below 0.5 m and 0.2 m, respectively, which is ideal for the Loess Plateau region. The segmentation strategy adopted in this paper, which considers topographic information, together with an optimal parameter combination, can improve the segmentation results. Moreover, the overall extraction accuracies achieved in Changwu, Ansai, and Suide were 84.62%, 86.46%, and 93.06%, respectively, indicating that the proposed method for detecting gully-affected areas is more objective and effective than traditional methods. This study demonstrates that UAVs can bridge the gap between field measurement and satellite-based remote sensing, striking a balance between resolution and efficiency for catchment-scale gully erosion research.

Keywords: unmanned aerial vehicle (UAV), object-based image analysis, gully erosion, gully-affected area, Loess Plateau, random forest

Procedia PDF Downloads 205
8853 Preparation and Characterization of Phosphate-Nickel-Titanium Composite Coating Obtained by Sol Gel Process for Corrosion Protection

Authors: Khalidou Ba, Abdelkrim Chahine, Mohamed Ebn Touhami

Abstract:

Strong industrial interest is focused on the development of coatings for anticorrosion protection. In this context, phosphate composite materials are expanding strongly owing to their chemical characteristics and interesting physicochemical properties. Sol-gel coatings offer high homogeneity and purity, which may lead to coatings with good adhesion to the metal surface. The goal of this work is to develop efficient coatings for the corrosion protection of steel to extend its service life. A sol-gel process yielding thin-film coatings on carbon steel with high corrosion resistance has been developed. Several experimental parameters, such as the hydrolysis time, the temperature, the coating technique, the molar ratio between precursors, the number of layers, and the drying mode, were optimized to obtain the coating with the best anticorrosion properties. The effect of these parameters on the microstructure and anticorrosion performance of the sol-gel coatings was investigated using different characterization methods (FTIR, XRD, Raman, XPS, SEM, profilometry, salt spray test, etc.). An optimized coating was obtained that presents good adhesion and very stable anticorrosion properties in the salt spray test, which consists of a corrosive attack accelerated by an artificial spray of a neutral-pH 5% NaCl solution under precise conditions of temperature (35 °C) and pressure.

Keywords: sol gel, coating, corrosion, XPS

Procedia PDF Downloads 121
8852 Market Acceptance of Irradiated Food in the City of Piracicaba, Brazil

Authors: Vanessa de Cillos Silva, Fabrício José Piacente, Sônia Maria De Stefano Piedade, Valter Arthur

Abstract:

Increasing concern about the safety and hygiene of food consumption has motivated the study of food preservation. Food irradiation is a technique used for preservation, but many consumers associate it with dangers such as environmental contamination and the development of diseases. This research aimed to evaluate the acceptance of irradiated products by the consumer market in the city of Piracicaba (SP, Brazil). The methodology adopted was the application of a questionnaire in the city’s supermarkets. After the application, the data were tabulated and analyzed. It was observed that the majority of interviewees would not eat irradiated food. Unfamiliarity and questions about the safety of irradiated food were the main causes of its rejection.

Keywords: irradiation, questionnaire, storage, market acceptance

Procedia PDF Downloads 396
8851 Daylightophil Approach towards High-Performance Architecture for Hybrid-Optimization of Visual Comfort and Daylight Factor in BSk

Authors: Mohammadjavad Mahdavinejad, Hadi Yazdi

Abstract:

The greatest influence we receive from the world is shaped through visual form; thus, light is an inseparable element of human life. The use of daylight in visual perception and environment readability is an important issue for users. With regard to the hazards of greenhouse gas emissions from fossil fuels, and in line with attitudes toward reducing energy consumption, the correct use of daylight results in lower energy consumption by artificial lighting, heating, and cooling systems. Windows are usually the starting points for analyses and simulations aimed at visual comfort and energy optimization; therefore, attention should be paid to the orientation of buildings to minimize electrical energy use and maximize the use of daylight. In this paper, using the DesignBuilder software, the effect of the orientation of an 18 m² (3 m × 6 m) room with a 3 m height in the city of Tehran was investigated, considering the design constraints. In these simulations, the orientation of the building was changed in one-degree increments, with the window located on the smaller face (3 m × 3 m) of the building at an 80% window-to-wall ratio. The results indicate that building orientation has a great deal to do with energy efficiency in meeting high-performance architecture and planning goals and objectives.

Keywords: daylight, window, orientation, energy consumption, design builder

Procedia PDF Downloads 219
8850 Availability Analysis of Process Management in the Equipment Maintenance and Repair Implementation

Authors: Onur Ozveri, Korkut Karabag, Cagri Keles

Abstract:

Production downtime and repair costs arising from machine failures are an important issue in machine-intensive production industries. When more than one machine fails at the same time, the key questions are which machines should have priority for repair, how to determine the optimal repair time to be allotted to these machines, and how to plan the resources needed for the repairs. In recent years, the Business Process Management (BPM) technique has brought effective solutions to various problems in business. The main feature of this technique is that it can improve the way a job is done by examining the work of interest in detail. In industry, maintenance and repair works operate as processes, and when a breakdown occurs, the repair work is carried out as a series of processes. The maintenance main process and the repair sub-process were evaluated with the process management technique, in the expectation that this structure could provide a solution. For this reason, the issue was examined in an international manufacturing company, and a solution proposal was developed. The purpose of this study is the implementation of maintenance and repair works integrated with the process management technique and, at the end of the implementation, the analysis of maintenance-related parameters such as quality, cost, time, safety, and spare parts. The international firm that carried out the application operates in a free zone in Turkey, and its core business is producing original equipment technologies, vehicle electrical construction, electronics, and safety and thermal systems for the world's leading light and heavy vehicle manufacturers. In the firm, a project team was first established. The team examined the current maintenance process and revised it using process management techniques; the repair process, a sub-process of the maintenance process, was also reconsidered. In the improved processes, the ABC equipment classification technique was used to decide which machine or machines would be given priority in case of failure. This technique prioritizes malfunctioning machines based on their effect on production, product quality, maintenance costs, and job safety. The improved maintenance and repair processes were implemented in the company for three months, and the data obtained were compared with the previous year's data. In conclusion, breakdown maintenance was found to occur in a shorter time, at lower cost, and with a lower spare parts inventory.
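
An ABC ranking of the kind described can be sketched as a weighted scoring over the four criteria; the machine names, weights, scores, and class cut-offs below are illustrative assumptions, not the firm's actual values:

```python
# Illustrative ABC equipment classification: score each machine on the four
# criteria named in the abstract, then classify by cumulative share of the
# total score (A = most critical).
machines = {
    "press_1": {"production": 9, "quality": 8, "cost": 7, "safety": 9},
    "oven_2":  {"production": 6, "quality": 5, "cost": 4, "safety": 3},
    "mixer_3": {"production": 3, "quality": 2, "cost": 2, "safety": 2},
}
weights = {"production": 0.4, "quality": 0.2, "cost": 0.2, "safety": 0.2}

scores = {m: sum(weights[k] * v for k, v in crit.items())
          for m, crit in machines.items()}
total = sum(scores.values())
cum = 0.0
for m in sorted(scores, key=scores.get, reverse=True):
    cum += scores[m] / total
    cls = "A" if cum <= 0.6 else ("B" if cum <= 0.85 else "C")  # illustrative cut-offs
    print(f"{m}: score {scores[m]:.1f} -> class {cls}")
```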

Keywords: ABC equipment classification, business process management (BPM), maintenance, repair performance

Procedia PDF Downloads 185
8849 Artificial Intelligence for Generative Modelling

Authors: Shryas Bhurat, Aryan Vashistha, Sampreet Dinakar Nayak, Ayush Gupta

Abstract:

As technology advances toward greater computational resources, there is a paradigm shift in the usage of these resources to optimize the design process. This paper discusses the use of generative design with artificial intelligence to build better models that adapt operations such as selection, mutation, and crossover to generate results. The human mind thinks of the simplest approach while designing an object, but the intelligence learns from the past and designs complex, optimized CAD models. Generative design takes the boundary conditions and comes up with multiple solutions, iterating toward a sturdy design with the most optimal values of the given parameters, saving huge amounts of time and resources. The new production techniques at our disposal allow us to use additive manufacturing, 3D printing, and other innovative manufacturing techniques to save resources and design artistically engineered CAD models. This paper also discusses the genetic algorithm and the non-domination technique for choosing the right results, using biomimicry that has evolved over millions of years of adaptation. The computer uses parametric models to generate new models through an iterative approach and uses cloud computing to store these iterative designs. The later part of the paper compares topology optimization, previously used to generate CAD models, with generative design. Finally, the paper shows the performance of the algorithms and how they help in designing resource-efficient models.
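
A minimal genetic algorithm with the selection, mutation, and crossover operations mentioned above might look like the sketch below; the fitness function is a toy stand-in for a CAD solver's objective (e.g. stiffness per unit mass under given boundary conditions):

```python
import random

def fitness(genome):
    # Toy objective: peak when all genes equal 0.7 (stand-in for a solver score).
    return -sum((g - 0.7) ** 2 for g in genome)

def evolve(pop_size=40, genes=8, generations=60, mut_rate=0.1):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:        # mutation: perturb one gene
                child[random.randrange(genes)] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([round(g, 2) for g in best])                # genes converge toward 0.7
```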

Keywords: genetic algorithm, biomimicry, generative modeling, non-domination techniques

Procedia PDF Downloads 138
8848 Suppression Subtractive Hybridization Technique for Identification of the Differentially Expressed Genes

Authors: Tuhina-khatun, Mohamed Hanafi Musa, Mohd Rafii Yosup, Wong Mui Yun, Aktar-uz-Zaman, Mahbod Sahebi

Abstract:

The suppression subtractive hybridization (SSH) method is a valuable tool for identifying differentially regulated genes, such as disease-specific or tissue-specific genes important for cellular growth and differentiation. It is a widely used method for separating DNA molecules that distinguish two closely related DNA samples. SSH is one of the most powerful and popular methods for generating subtracted cDNA or genomic DNA libraries. It is based primarily on a suppression polymerase chain reaction (PCR) technique and combines normalization and subtraction in a single procedure. The normalization step equalizes the abundance of DNA fragments within the target population, and the subtraction step excludes sequences that are common to the populations being compared. This dramatically increases the probability of obtaining low-abundance differentially expressed cDNAs or genomic DNA fragments and simplifies the analysis of the subtracted library. The SSH technique is applicable to many comparative and functional genetic studies for the identification of disease-related, developmental, tissue-specific, or otherwise differentially expressed genes, as well as for the recovery of genomic DNA fragments distinguishing the samples under comparison.

Keywords: suppression subtractive hybridization, differentially expressed genes, disease specific genes, tissue specific genes

Procedia PDF Downloads 421
8847 Intrusion Detection in Cloud Computing Using Machine Learning

Authors: Faiza Babur Khan, Sohail Asghar

Abstract:

With the emergence of distributed environments, cloud computing is proving to be the most stimulating paradigm shift in computer technology, resulting in spectacular expansion of the IT industry. Many companies have augmented their technical infrastructure by adopting cloud resource-sharing architectures. Cloud computing has opened doors to unlimited opportunities, from application and platform availability to expandable storage and the provision of computing environments. From a security viewpoint, however, clouds introduce an added level of risk, weakening protection mechanisms and complicating the assurance of privacy, data security, and on-demand service. Issues of trust, confidentiality, and integrity are elevated due to the multitenant resource-sharing architecture of the cloud. Trust, or reliability, of the cloud refers to its capability to provide the needed services precisely and unfailingly. Confidentiality is the ability of the architecture to ensure that only authorized parties access private data, and integrity guarantees that the data are protected from fabrication by unauthorized users. Therefore, to assure the provision of a secure cloud, a roadmap or model is needed to analyze a security problem, design mitigation strategies, and evaluate solutions. The aim of this paper is twofold: first, to highlight the factors that make cloud security critical, along with alleviation strategies, and second, to propose an intrusion detection model that identifies attackers in a preventive way using a machine learning Random Forest classifier with an accuracy of 99.8%. The model uses a small number of features, and a comparison with other classifiers is also presented.

Keywords: cloud security, threats, machine learning, random forest, classification

Procedia PDF Downloads 312
8846 A Constrained Neural Network Based Variable Neighborhood Search for the Multi-Objective Dynamic Flexible Job Shop Scheduling Problems

Authors: Aydin Teymourifar, Gurkan Ozturk, Ozan Bahadir

Abstract:

In this paper, a new neural-network-based variable neighborhood search is proposed for multi-objective dynamic flexible job shop scheduling problems. The neural network controls the problems' constraints to prevent infeasible solutions, while the variable neighborhood search (VNS) applies moves, based on the critical-block concept, to improve the solutions. Two approaches are used for managing the constraints: in the first, infeasible solutions are modified according to the constraints after the moves are applied, while in the second, infeasible moves are prevented. Several neighborhood structures from the literature, with some modifications, as well as new structures, are used in the VNS. The suggested neighborhoods are defined more systematically and are easy to implement. The comparison is based on a multi-objective flexible job shop scheduling problem that is dynamic because of the jobs' different release times and machine breakdowns. The results show that the presented method performs better than the VNS variants selected from the literature for comparison.
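
The VNS skeleton the paper builds on can be sketched as follows; the scheduling-specific moves and the neural constraint checker are abstracted into plain functions, so this is illustrative rather than the authors' implementation:

```python
import random

def vns(initial, neighborhoods, objective, max_iter=1000):
    """Basic VNS descent: cycle through neighborhood structures, accept
    improving moves, and restart from the first structure on improvement.
    Feasibility is assumed to be enforced by the move generators."""
    best, it = initial, 0
    while it < max_iter:
        k = 0
        while k < len(neighborhoods) and it < max_iter:
            candidate = neighborhoods[k](best)      # move in the k-th structure
            if objective(candidate) < objective(best):
                best, k = candidate, 0              # improvement: back to first structure
            else:
                k += 1                              # try the next neighborhood
            it += 1
    return best

# Toy usage: minimise a quadratic with two perturbation neighborhoods.
obj = lambda x: (x - 3.0) ** 2
moves = [lambda x: x + random.uniform(-1, 1),
         lambda x: x + random.uniform(-0.1, 0.1)]
print(round(vns(0.0, moves, obj), 3))               # converges near 3.0
```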

Keywords: constrained optimization, neural network, variable neighborhood search, flexible job shop scheduling, dynamic multi-objective optimization

Procedia PDF Downloads 335
8845 Design and Study of a Low Power High Speed Full Adder Using GDI Multiplexer

Authors: Biswarup Mukherjee, Aniruddha Ghosal

Abstract:

In this paper, we propose a new technique for implementing a low-power full adder using a set of GDI multiplexers. Full adder circuits are used extensively in application-specific integrated circuits (ASICs); thus, low-power operation is desirable for their subcomponents. The explored method of implementation achieves a low-power design for the full adder. Simulation results using the state-of-the-art Tanner tool indicate the superior performance of the proposed technique over a conventional CMOS full adder. A detailed comparison of the simulated results for the conventional and proposed implementations is presented.
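
At the logic level, a multiplexer-based full adder of the kind a GDI implementation realizes can be verified against the arithmetic truth table; this is a behavioural sketch only, and transistor-level GDI details are not modelled:

```python
from itertools import product

def mux(sel, a, b):
    # 2:1 multiplexer: output = a if sel else b (the GDI cell's logical function)
    return a if sel else b

def full_adder(a, b, cin):
    p = mux(a, 1 - b, b)      # propagate = a XOR b, built from a mux
    s = mux(p, 1 - cin, cin)  # sum = p XOR cin
    cout = mux(p, cin, a)     # carry = cin when p = 1, else a (the a = b case)
    return s, cout

# Verify against the arithmetic truth table for all 8 input combinations.
for a, b, c in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, c)
    assert 2 * cout + s == a + b + c
print("full adder verified for all 8 input combinations")
```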

Keywords: low power full adder, 2-T GDI MUX, ASIC (application specific integrated circuit), 12-T FA, CMOS (complementary metal oxide semiconductor)

Procedia PDF Downloads 339
8844 High Power Low Loss CMOS SPDT Antenna Switch for LTE-A Front End Module

Authors: Ki-Jin Kim, Suk-Hui LEE, Sanghoon Park, K. H. Ahn

Abstract:

A high-power, low-loss asymmetric single-pole double-throw (SPDT) antenna switch for an LTE-A front-end module (FEM), implemented in CMOS technology, is presented in this paper. For LTE-A applications, low loss and high linearity are the key features, and they are very challenging to achieve in a CMOS process. To improve the insertion loss (IL) and power-handling capability, this paper adopts an asymmetric transmitter (TX) and receiver (RX) structure, a floating-body technique, a multi-stacked structure, and a feed-forward capacitor technique. The designed SPDT switch shows a TX IL of 0.34 dB, an RX IL of 0.73 dB, and a P1dB of 38.9 dBm at 0.9 GHz, and a TX IL of 0.37 dB, an RX IL of 0.95 dB, and a P1dB of 39.1 dBm at 2.5 GHz.

Keywords: CMOS switch, SPDT switch, high power CMOS switch, LTE-A FEM

Procedia PDF Downloads 358
8843 Comparison of Analgesic Efficacy of Ropivacaine and Levobupivacaine in Labour Analgesia by Dural Puncture Epidural Technique – A Prospective Double-blinded Randomized Trial

Authors: J. Punj, R. K. Pandey, V. Darlong, K. Thangavel

Abstract:

Background: The dural puncture epidural (DPE) technique has been introduced recently for labour analgesia; however, no study has compared ropivacaine and levobupivacaine for this technique. Methods: The primary aim of the study was to compare the time to onset of a Numerical Pain Rating Score (NPRS) ≤ 1 in labour analgesia with both drugs. After obtaining ethics approval and patient consent, ASA I and ASA II parturients with a single foetus in vertex presentation and cervical dilatation <5.0 cm were included. DPE was performed with a 16 G/26 G combined spinal epidural (CSE) technique, and parturients were randomized into two groups. In Group R (ropivacaine), 20 ml of 0.125% ropivacaine + fentanyl 2 µg/ml was injected to a maximum of 20 ml in 20 minutes, and in Group L (levobupivacaine), 20 ml of 0.125% levobupivacaine + fentanyl 2 µg/ml was injected. Outcomes were assessed at 0.5, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, and 30 minutes, then every 90 minutes until delivery. Appropriate statistical analysis was done, and a p value of <0.05 was considered statistically significant. Results: The median time to onset of NPRS ≤ 1 was comparable between the groups (Group R = 16 minutes vs Group L = 18 minutes; p = 0.076). The volume of drug required for NPRS ≤ 1 was also comparable (Group R 15.95 ± 2.03 ml vs Group L 16.35 ± 1.34 ml; p = 0.47). Conclusion: DPE with a 16 G epidural needle and a 26 G spinal needle using either 0.125% ropivacaine or 0.125% levobupivacaine results in similar labour analgesia efficacy.

Keywords: dural puncture epidural, labour analgesia, obstetric analgesia, hypotension

Procedia PDF Downloads 72
8842 Non-Contact Measurement of Soil Deformation in a Cyclic Triaxial Test

Authors: Erica Elice Uy, Toshihiro Noda, Kentaro Nakai, Jonathan Dungca

Abstract:

Deformation in a conventional cyclic triaxial test is normally measured using a point-wise measuring device. In this study, a non-contact measurement technique was applied to monitor and measure the non-homogeneous behavior of soil under cyclic loading. Non-contact measurement is executed through image processing. Two-dimensional measurements were performed using the Lucas-Kanade optical flow algorithm, implemented in LabVIEW. In this technique, the non-homogeneous deformation was monitored using a mirrorless camera, chosen because it is economical and can take pictures at a fast rate. The camera was first calibrated to remove the distortion introduced by the lens as well as by the testing environment. Calibration was divided into two phases: the first was the calibration of the camera parameters and the distortion caused by the lens; the second eliminated the distortion introduced by the triaxial plexiglass cell, from which a correction factor was established. A series of consolidated undrained cyclic triaxial tests was performed using a coarse soil. The results from the non-contact measurement technique were compared with the deformation measured by the linear variable displacement transducer. It was observed that deformation was higher in the area where failure occurs.
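
The Lucas-Kanade tracking step is sketched below with OpenCV in Python rather than the paper's LabVIEW implementation; the file names, parameters, and pixel scale are illustrative, and lens/plexiglass distortion is assumed to be already corrected:

```python
import cv2
import numpy as np

# Two consecutive frames of the specimen from the camera (illustrative files).
prev_img = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Pick trackable texture points on the specimen membrane.
p0 = cv2.goodFeaturesToTrack(prev_img, maxCorners=200,
                             qualityLevel=0.01, minDistance=7)

# Pyramidal Lucas-Kanade optical flow between the two frames.
p1, status, err = cv2.calcOpticalFlowPyrLK(prev_img, next_img, p0, None,
                                           winSize=(21, 21), maxLevel=3)

disp = (p1 - p0).reshape(-1, 2)[status.ravel() == 1]  # pixel displacements
scale_mm_per_px = 0.05                                # from calibration (assumed)
print("mean vertical displacement:",
      disp[:, 1].mean() * scale_mm_per_px, "mm")
```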

Keywords: cyclic loading, non-contact measurement, non-homogeneous, optical flow

Procedia PDF Downloads 291
8841 Damping Optimal Design of Sandwich Beams Partially Covered with Damping Patches

Authors: Guerich Mohamed, Assaf Samir

Abstract:

The application of viscoelastic materials in the form of constrained layers in mechanical structures is an efficient and cost-effective technique for solving noise and vibration problems. This technique requires a design tool to select the best location, type, and thickness of the damping treatment. This paper presents a finite element model for the vibration of beams partially or fully covered with a constrained viscoelastic damping material. The model is based on Bernoulli-Euler theory for the faces and Timoshenko beam theory for the core. It uses four variables: the through-thickness constant deflection, the axial displacements of the two faces, and the bending rotation of the beam. The sandwich beam finite element is compatible with the conventional C1 finite element for homogeneous beams. To validate the proposed model, several free vibration analyses of fully or partially covered beams, with different locations of the damping patches and different percentages of coverage, are studied. The results show that the proposed approach can be used as an effective tool to study the influence of the location and size of the treatment on the natural frequencies and the associated modal loss factors. A parametric study of the damping characteristics of partially covered beams was then conducted, considering the effects of the core shear modulus, the patch size, the thicknesses of the constraining layer and the core, and the locations of the patches. In partial coverage, the spatial distribution of the additive viscoelastic damping is as important as the thickness and material properties of the viscoelastic and constraining layers. Indeed, to limit the added mass and attain maximum damping, the damping patches should be placed at optimum locations. These locations are often selected using the modal strain energy indicator; following this approach, the damping patches are applied over the regions of the base structure with the highest modal strain energy to target specific modes of vibration. In the present study, a more efficient indicator is proposed, which consists of placing the damping patches over the regions of highest energy dissipation through the viscoelastic layer of the fully covered sandwich beam. The presented approach is used in an optimization method to select the best locations for the damping patches, as well as the material thicknesses and material properties of the layers, that yield optimal damping with the minimum area of coverage.

Keywords: finite element model, damping treatment, viscoelastic materials, sandwich beam

Procedia PDF Downloads 138
8840 Optimization in the Compressive Strength of Iron Slag Self-Compacting Concrete

Authors: Luis E. Zapata, Sergio Ruiz, María F. Mantilla, Jhon A. Villamizar

Abstract:

Sand as fine aggregate for concrete production needs a feasible substitute owing to several environmental issues. In this work, a study of the behavior of self-compacting concrete mixtures with replacement of sand by iron slag from 0.0% to 50.0% by weight and variation of the water/cementitious-material ratio between 0.3 and 0.5 is presented. Fresh-state control tests of slump flow, T500, J-ring, and L-box were performed. In the hardened state, the compressive strength was determined, and an optimization based on response surface analysis was performed. The study of the variables in the hardened state was based on inferential statistical analyses using the central composite design methodology and subsequent analysis of variance (ANOVA). An increase in compressive strength of up to 50% over the control mixtures at 7, 14, and 28 days of maturity was the most relevant result regarding the presence of iron slag as a replacement for natural sand. Considering this result, it is possible to infer that iron slag is an acceptable alternative replacement for the natural fine aggregate in structural concrete.
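
The response-surface step can be sketched as a quadratic least-squares fit over the two factors; the data points below are made-up stand-ins, not the measured strengths:

```python
import numpy as np

# Illustrative design points: slag replacement x1 (%) and w/cm ratio x2,
# with made-up 28-day compressive strengths y (MPa).
x1 = np.array([0, 0, 25, 25, 50, 50, 25, 0, 50], dtype=float)
x2 = np.array([0.3, 0.5, 0.3, 0.5, 0.3, 0.5, 0.4, 0.4, 0.4])
y = np.array([42, 30, 55, 38, 60, 41, 50, 35, 52], dtype=float)

# Design matrix for y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate the fitted surface on a grid and locate the predicted optimum.
g1, g2 = np.meshgrid(np.linspace(0, 50, 51), np.linspace(0.3, 0.5, 21))
G = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                     g1.ravel()**2, g2.ravel()**2, (g1 * g2).ravel()])
pred = (G @ beta).reshape(g1.shape)
i, j = np.unravel_index(pred.argmax(), pred.shape)
print(f"predicted optimum: {g1[i, j]:.0f}% slag, w/cm {g2[i, j]:.2f}")
```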

Keywords: ANOVA, iron slag, response surface analysis, self-compacting concrete

Procedia PDF Downloads 133
8839 Aspects of Tone in the Educated Nigerian Accent of English

Authors: Nkereke Essien

Abstract:

The study seeks to analyze tone in the Educated Nigerian Accent of English (ENAE) using three tones: low (L), high (H), and low-high (LH). The aim is to find out whether there are any differences or similarities between the performance of the experimental group and that of the control. To achieve this, twenty educated Nigerian speakers of English were selected by a Stratified Random Sampling (SRS) technique from two federal universities in Nigeria. They were given a passage to read, and their intonation patterns were compared with those of a native speaker (the control). The data were analyzed using Pierrehumbert’s (1980) intonation system. Three different approaches were employed in the analysis of the intonation phrase (IP) as used by Pierrehumbert: perceptual, statistical, and acoustic. We first analyzed the data from the passage and utterances using the Wilcoxon matched-pairs signed-ranks test to establish the differences between the performance of the experimental group and the control. Then, one-way analysis of variance (ANOVA) and Tukey-Kramer post hoc tests were used to test for any significant difference in the performances of the twenty subjects. The acoustic data were presented to corroborate both the perceptual and statistical findings. Finally, the tonal patterns of the selected subjects in the three categories, A, B, and C, were compared with those of the control. Our findings revealed that the tonal pattern of the Educated Nigerian Accent of English (ENAE) is significantly different from that of the Standard British Accent of English (SBAE) as represented by the control. A high preference for unidirectional tones, especially high tones, was observed in the performance of the experimental group. Also, high tones do not necessarily correspond to stressed syllables, nor low tones to unstressed syllables.

Keywords: accent, intonation phrase (IP), tonal patterns, tone

Procedia PDF Downloads 213
8838 The Power of the Proper Orthogonal Decomposition Method

Authors: Charles Lee

Abstract:

The Proper Orthogonal Decomposition (POD) technique has been used as a model reduction tool for many applications in engineering and science. In principle, one begins with an ensemble of data, called snapshots, collected from an experiment or from laboratory results. The beauty of the POD technique is that, when applied, the entire data set can be represented by the smallest number of orthogonal basis elements. It is this capability that allows us to reduce the complexity and dimensions of many physical applications. Mathematical formulations and numerical schemes for the POD method are discussed, along with applications in NASA’s deep space large antenna arrays, satellite image reconstruction, cancer detection with DNA microarray data, stock return maximization, and medical imaging.
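
A minimal POD sketch via the singular value decomposition, on a synthetic snapshot ensemble (illustrative; the number of retained modes depends on the data):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
x = np.linspace(0, 1, 500)

# Synthetic ensemble: two coherent structures plus noise, snapshots in columns.
snapshots = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
             + 0.3 * np.outer(np.sin(3 * np.pi * x), np.sin(6 * np.pi * t))
             + 0.01 * rng.normal(size=(500, 200)))

# POD basis from the SVD; singular values give the energy of each mode.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1        # modes for 99% of the energy
reconstruction = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
print(f"{r} modes capture {energy[r - 1]:.4f} of the energy")  # r = 2 here
```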

Keywords: reduced-order methods, principal component analysis, cancer detection, image reconstruction, stock portfolios

Procedia PDF Downloads 71
8837 Early Detection of Instability in Emulsions via Diffusing Wave Spectroscopy

Authors: Coline Bretz, Andrea Vaccaro, Dario Leumann

Abstract:

The food, personal care, and cosmetic industries are seeing increased consumer demand for more sustainable and innovative ingredients. When developing new formulations incorporating such ingredients, stability is one of the first criteria that must be assessed, and it is thus of great importance to have a method that can detect instabilities early and quickly. Diffusing wave spectroscopy (DWS) is a light scattering technique that probes the motion, i.e., the mean square displacement (MSD), of colloids, such as nanoparticles in a suspension or droplets in an emulsion. From the MSD, the rheological properties of the surrounding medium can be determined via the so-called microrheology approach. In the case of purely viscous media, it is also possible to obtain information about particle size. DWS can thus be used to monitor the size evolution of particles, droplets, or bubbles in aging dispersions, emulsions, or foams. In the context of early instability detection in emulsions, DWS offers considerable advantages, as the samples are measured in a contact-free manner using only small quantities loaded into a sealable cuvette. The sensitivity and rapidity of the technique are key to detecting and following the aging of emulsions reliably. We present applications of DWS focused on the characterization of emulsions. In particular, we demonstrate the ability to record very subtle changes in structural properties early on. We also discuss the various mechanisms at play in the destabilization of emulsions, such as coalescence and Ostwald ripening, and how to identify them with this technique.
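
For a purely viscous sample, the sizing step reduces to fitting a diffusive MSD and applying the Stokes-Einstein relation; the sketch below uses illustrative values (the solvent viscosity and droplet radius are assumptions):

```python
import numpy as np

kB, T, eta = 1.380649e-23, 298.15, 1.0e-3  # J/K, K, Pa*s (water at 25 C, assumed)

t = np.logspace(-6, -2, 50)                # lag times, s
r_true = 100e-9                            # 100 nm droplet radius (assumed)
D_true = kB * T / (6 * np.pi * eta * r_true)
msd = 6 * D_true * t                       # diffusive MSD a DWS fit would extract

# Recover the diffusion coefficient from the MSD slope, then the radius
# from the Stokes-Einstein relation D = kB*T / (6*pi*eta*r).
D_fit = np.polyfit(t, msd, 1)[0] / 6
r_fit = kB * T / (6 * np.pi * eta * D_fit)
print(f"recovered radius: {r_fit * 1e9:.1f} nm")
```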

Keywords: instrumentation, emulsions, stability, DWS

Procedia PDF Downloads 56
8836 Mining Educational Data to Support Students’ Major Selection

Authors: Kunyanuth Kularbphettong, Cholticha Tongsiri

Abstract:

This paper aims to create a model to help students choose an emphasis track when majoring in computer science at Suan Sunandha Rajabhat University. The objective of this research is to develop a recommendation system that uses data mining techniques to analyze knowledge and derive decision rules. Such rules can be used to demonstrate the reasonableness of a student's track choice as well as to support his or her decision, and the system was verified by experts in the field. The sample consisted of computer science students who used the system and completed a satisfaction questionnaire. The system was found to be satisfactory by both the experts and the students.

Keywords: data mining technique, the decision support system, knowledge and decision rules, education

Procedia PDF Downloads 416
8835 Optimization of Monitoring Networks for Air Quality Management in Urban Hotspots

Authors: Vethathirri Ramanujam Srinivasan, S. M. Shiva Nagendra

Abstract:

Air quality management in urban areas is a serious concern in both developed and developing countries. In this regard, more air quality monitoring stations are planned to mitigate air pollution in urban areas. In India, the Central Pollution Control Board has set up 574 air quality monitoring stations across the country and has proposed to set up another 500 stations in the next few years. The number of monitoring stations for each city has been decided based on population data. Setting up ambient air quality monitoring stations and their operation and maintenance are highly expensive; therefore, there is a need to optimize monitoring networks for air quality management. The present paper discusses various methods, such as the Indian Standards (IS) method, the US EPA method, and the European Union (EU) method, to arrive at the minimum number of air quality monitoring stations. In addition, optimization with the rain-gauge method and the Inverse Distance Weighted (IDW) method using a Geographical Information System (GIS) is explored for the design of the air quality network in Chennai city. In summary, an additional 18 stations are required for Chennai city, and the potential monitoring locations, with their corresponding land use patterns, are ranked and identified from 1 km × 1 km grids.
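
The IDW step can be sketched as follows; the station coordinates and concentrations are made up for illustration:

```python
import numpy as np

# Existing stations (km coordinates) and a pollutant concentration, e.g. PM2.5.
stations = np.array([[2.0, 3.0], [8.0, 1.5], [5.5, 7.0], [1.0, 9.0]])
conc = np.array([62.0, 85.0, 40.0, 55.0])  # ug/m3 (illustrative)

def idw(pt, xy, z, power=2.0):
    """Inverse Distance Weighted estimate at point pt from stations xy."""
    d = np.linalg.norm(xy - pt, axis=1)
    if d.min() < 1e-9:                     # query point coincides with a station
        return float(z[d.argmin()])
    w = 1.0 / d**power
    return float(w @ z / w.sum())

# Interpolate onto the centres of a 1 km x 1 km grid.
gx, gy = np.meshgrid(np.arange(0.5, 10.0), np.arange(0.5, 10.0))
grid = np.array([[idw(np.array([x, y]), stations, conc) for x in gx[0]]
                 for y in gy[:, 0]])
print(grid.round(1))  # candidate sites can then be ranked from this surface
```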

Keywords: air quality monitoring network, inverse distance weighted method, population based method, spatial variation

Procedia PDF Downloads 176
8834 Least-Square Support Vector Machine for Characterization of Clusters of Microcalcifications

Authors: Baljit Singh Khehra, Amar Partap Singh Pharwaha

Abstract:

Clusters of microcalcifications (MCCs) are among the most frequent signs of ductal carcinoma in situ (DCIS) recognized by mammography. The least-square support vector machine (LS-SVM) is a variant of the standard SVM. In this paper, LS-SVM is proposed as a classifier for classifying MCCs as benign or malignant based on relevant features extracted from enhanced mammograms. To establish the credibility of the LS-SVM classifier for classifying MCCs, a comparative evaluation of its relative performance with different kernel functions is made, using the confusion matrix and ROC analysis. Experiments were performed on data extracted from mammogram images of the DDSM database. A total of 380 suspicious areas were collected from these images, containing 235 malignant and 145 benign samples. A set of 50 features was calculated for each suspicious area, after which an optimal subset of the 23 most suitable features was selected by Particle Swarm Optimization (PSO). The results of the proposed study are quite promising.
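
A compact LS-SVM sketch in its regression form on ±1 labels, showing how training reduces to a single linear system rather than a quadratic program; the kernel width, regularisation, and the random stand-in data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(380, 23))                    # 380 suspicious areas, 23 features
y = np.where(rng.random(380) < 0.62, 1.0, -1.0)   # malignant = +1, benign = -1

def rbf(A, B, sigma=3.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# LS-SVM (regression form): solve [0, 1^T; 1, K + I/gamma] [b; alpha] = [0; y]
gamma = 10.0
n = len(y)
K = rbf(X, X)
A = np.zeros((n + 1, n + 1))
A[0, 1:], A[1:, 0] = 1.0, 1.0
A[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

pred = np.sign(K @ alpha + b)                     # training-set predictions only
print("training accuracy:", (pred == y).mean())
```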

Keywords: clusters of microcalcifications, ductal carcinoma in situ, least-square support vector machine, particle swarm optimization

Procedia PDF Downloads 347
8833 Heavy Metal Contamination in Sediments of North East Coast of Tamilnadu by EDXRF Technique

Authors: R. Ravisankar, Tholkappian A. Chandrasekaran, Y. Raghu, K. K. Satapathy, M. V. R. Prasad, K. V. Kanagasabapathy

Abstract:

The coastal areas of Tamilnadu are assuming greater importance owing to the increasing human population, urbanization, and accelerated industrial activities. In the present study, sediment samples were collected along the east coast of Tamilnadu for the assessment of heavy metal pollution. The concentrations of 13 selected elements, namely Mg, Al, Si, K, Ca, Ti, Fe, V, Cr, Mn, Co, Ni, and Zn, were determined by the energy-dispersive X-ray fluorescence (EDXRF) technique. To describe the pollution status, the contamination factor and the pollution load index were calculated and are reported. The results suggest that the sources of metal contamination are mainly attributed to natural inputs from the surrounding environment.
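
The two indices are simple ratios: the contamination factor is CF = C_sample / C_background per metal, and the pollution load index is the geometric mean of the CFs. A sketch with illustrative concentrations (the background values are assumed crustal averages, not the paper's references):

```python
import numpy as np

sample = {"Cr": 92.0, "Mn": 610.0, "Ni": 41.0, "Zn": 118.0}      # mg/kg, measured
background = {"Cr": 90.0, "Mn": 850.0, "Ni": 68.0, "Zn": 95.0}   # assumed background

cf = {m: sample[m] / background[m] for m in sample}              # contamination factors
pli = np.prod(list(cf.values())) ** (1 / len(cf))                # pollution load index

for m, v in cf.items():
    print(f"CF({m}) = {v:.2f}")   # CF > 1 indicates enrichment over background
print(f"PLI = {pli:.2f}")         # PLI > 1 suggests overall pollution
```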

Keywords: sediments, heavy metals, EDXRF, pollution, contamination factors

Procedia PDF Downloads 323
8832 Improved Blood Glucose-Insulin Monitoring with Dual-Layer Predictive Control Design

Authors: Vahid Nademi

Abstract:

Wearable medical devices equipped with a continuous glucose monitor (CGM) and an insulin pump are now widely used, but advanced control methods are still needed to reap the full benefit of these devices. Unlike costly clinical trials, implementing effective insulin-glucose control strategies can provide significant benefits to patients suffering from chronic diseases such as diabetes. This study addresses the key role of a two-layer insulin-glucose regulator based on a model predictive control (MPC) scheme, so that the patient’s predicted glucose profile is kept in compliance with the insulin automatically injected through the pump. This is achieved by an iterative optimization algorithm, an integrated perturbation analysis and sequential quadratic programming (IPA-SQP) solver, which handles uncertainties due to unexpected variations in glucose-insulin values and the body’s characteristics. The feasibility of the discussed control approach is also studied by means of numerical simulations of two case scenarios using measured data. The obtained results verify the superior and reliable performance of the proposed control scheme, with no negative impact on patient safety.
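
A schematic receding-horizon loop, with a toy linear glucose-insulin model and a generic solver standing in for the IPA-SQP algorithm; all model coefficients below are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear model: g[k+1] = a*g[k] + d - b_eff*u[k] (all coefficients assumed)
a, b_eff, d = 0.98, 0.5, 2.0   # persistence, insulin effect, endogenous rise
g_target, horizon = 100.0, 6   # mg/dL target, prediction horizon in steps

def predict(g0, doses):
    g, traj = g0, []
    for u in doses:
        g = a * g + d - b_eff * u
        traj.append(g)
    return np.array(traj)

def mpc_step(g0):
    # Penalise deviation from target plus insulin usage over the horizon.
    cost = lambda u: ((predict(g0, u) - g_target) ** 2).sum() + 0.1 * (u**2).sum()
    res = minimize(cost, np.zeros(horizon), bounds=[(0.0, 10.0)] * horizon)
    return res.x[0]            # receding horizon: apply only the first dose

g = 180.0                      # hyperglycaemic start
for _ in range(20):
    g = a * g + d - b_eff * mpc_step(g)
print(f"glucose after 20 steps: {g:.1f} mg/dL")
```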

Keywords: blood glucose monitoring, insulin pump, predictive control, optimization

Procedia PDF Downloads 130
8831 Application of the Best Technique for Estimating the Rest-Activity Rhythm Period in Shift Workers

Authors: Rakesh Kumar Soni

Abstract:

Under free-living conditions, human biological clocks show a periodicity of 24 hours for numerous physiological, behavioral, and biochemical variables. However, this period is not the intrinsic period; rather, it merely exhibits synchronization with the solar clock. It is, therefore, important to investigate the characteristics of the human circadian clock, especially in shift workers, who routinely confront conflicting social clocks. The aim of the present study was to investigate the rest-activity rhythm and to identify the best technique for computing its period in subjects randomly selected from different groups of shift workers. The rest-activity rhythm was studied in forty-eight shift workers from three different organizations, namely a newspaper printing press (NPP), the Chhattisgarh State Electricity Board (CSEB), and Raipur Alloys (RA). Shift workers of the NPP (N = 20) worked a permanent night shift schedule (NS; 20:00-04:00), whereas in the CSEB (N = 14) and RA (N = 14), shift workers worked a 3-shift system comprising rotations from night (NS; 22:00-06:00) to afternoon (AS; 14:00-22:00) to morning shift (MS; 06:00-14:00). Each subject wore an Actiwatch (AW64, Mini Mitter Co. Inc., USA) for 7 and/or 21 consecutive days after providing informed consent. A one-minute epoch length was chosen for the collection of wrist activity data. The period was determined using the Actiware sleep software (periodogram), the Lomb-Scargle periodogram (LSP), and spectral analysis software (Spectre). Other statistical techniques, such as ANOVA and Duncan’s multiple-range test, were used wherever required. A statistically significant circadian rest-activity rhythm, gauged by cosinor analysis, was documented in all shift workers, irrespective of shift schedule. The results indicate that the efficiency of a technique in determining the period (τ) depends upon the clipping limits of the τs. It appears that the Spectre technique is more reliable.
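
The Lomb-Scargle step can be sketched on synthetic one-minute actigraphy using the SciPy implementation (the actual analysis used the packages named above; the period and noise level here are illustrative):

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic actigraphy: one-minute epochs over 7 days with a circadian component.
minutes = np.arange(7 * 24 * 60, dtype=float)
period_true = 24.3 * 60  # minutes (illustrative period)
activity = 50 + 30 * np.sin(2 * np.pi * minutes / period_true)
activity += np.random.default_rng(0).normal(scale=10, size=minutes.size)

# Scan candidate periods from 20 h to 28 h; lombscargle expects angular frequencies.
periods = np.linspace(20 * 60, 28 * 60, 2000)
ang_freqs = 2 * np.pi / periods
power = lombscargle(minutes, activity - activity.mean(), ang_freqs)

tau = periods[power.argmax()]
print(f"estimated rest-activity period: {tau / 60:.2f} h")  # ~24.3 h
```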

Keywords: biological clock, rest activity rhythm, spectre, periodogram

Procedia PDF Downloads 153