Search results for: Neural Processing Element (NPE)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7894

1444 Nanobiosensor System for Aptamer Based Pathogen Detection in Environmental Waters

Authors: Nimet Yildirim Tirgil, Ahmed Busnaina, April Z. Gu

Abstract:

Environmental waters are monitored worldwide to protect people from infectious diseases primarily caused by enteric pathogens. Escherichia coli (E. coli) has long been a good indicator of potential enteric pathogens in waters; thus, a rapid and simple detection method for E. coli is very important for predicting pathogen contamination. In this study, to the best of our knowledge for the first time, we developed a rapid, direct and reusable SWCNT (single-walled carbon nanotube) based biosensor system for sensitive and selective E. coli detection in water samples. We used a novel, newly developed flexible biosensor device fabricated by a high-rate nanoscale offset printing process using directed assembly and transfer of SWCNTs. By simple directed assembly and non-covalent functionalization, an aptamer-based SWCNT biosensor system was designed (the aptamer being a biorecognition element that specifically distinguishes the E. coli O157:H7 strain from other pathogens) and further evaluated for environmental applications with simple and cost-effective steps. The two gold electrode terminals and the SWCNT bridge between them allow continuous resistance monitoring for E. coli detection. The detection procedure is based on a competitive detection mode: a known concentration of aptamer and E. coli cells were mixed and, after a certain time, filtered, and the remaining free aptamers were injected into the system. Through hybridization of the free aptamers with the probe DNA immobilized on the SWCNT surface (complementary DNA for the E. coli aptamer), we can monitor the resistance difference, which is proportional to the amount of E. coli. Thus, we can detect E. coli without injecting it directly onto the sensing surface, protecting the electrode surface from the aggregation of target bacteria or other pollutants that may come with real wastewater samples. After optimization experiments, the linear detection range was determined to be from 2 cfu/ml to 10⁵ cfu/ml, with an R² value higher than 0.98. The system was regenerated successfully with 5% SDS solution over 100 times without any significant deterioration of sensor performance. The developed system had high specificity towards E. coli (less than 20% signal with other pathogens), and it could be applied to real water samples with 86 to 101% recovery and CV values of 3 to 18% (n = 3).
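
For illustration, a calibration of the kind implied by the reported linear range and R² can be sketched as a straight-line fit of sensor response against log concentration; all numbers below are hypothetical, not measurements from this work:

```python
import numpy as np

# Hypothetical calibration data: E. coli concentrations spanning the reported
# linear range (2 to 1e5 cfu/ml) and illustrative resistance changes.
concentrations = np.array([2, 10, 100, 1e3, 1e4, 1e5])  # cfu/ml
delta_r = np.array([0.8, 1.4, 2.1, 2.9, 3.8, 4.6])      # made-up sensor response

# Competitive assays are typically linear against log-concentration.
log_c = np.log10(concentrations)
slope, intercept = np.polyfit(log_c, delta_r, 1)

# Coefficient of determination, analogous to the R^2 > 0.98 reported above.
pred = slope * log_c + intercept
r_squared = 1 - np.sum((delta_r - pred) ** 2) / np.sum((delta_r - delta_r.mean()) ** 2)

# Invert the fit to estimate an unknown sample from its measured response.
unknown_response = 3.2
estimated_cfu = 10 ** ((unknown_response - intercept) / slope)
print(f"R^2 = {r_squared:.3f}, estimated concentration = {estimated_cfu:.0f} cfu/ml")
```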

Keywords: aptamer, E. coli, environmental detection, nanobiosensor, SWCNTs

Procedia PDF Downloads 176
1443 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation

Authors: Ekin Nurbaş

Abstract:

One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DoA) of plane waves impinging on an array of sensors. In recent years, Compressive Sensing (CS) based DoA estimation methods have been proposed, and it has been found that CS-based algorithms achieve significant performance for DoA estimation even in scenarios with multiple coherent sources. On the other hand, the Genetic Algorithm (GA), a method that provides a solution strategy inspired by natural selection, has been applied to sparse representation problems in recent years and yields significant performance improvements. With all of this in consideration, this paper proposes a method that combines the Genetic Algorithm and Multi-Criteria Decision Making (MCDM) approaches for DoA estimation in the CS framework. In this method, we formulate a multi-objective optimization problem by splitting the Compressive Sensing algorithm into its norm minimization and reconstruction loss minimization parts. With the help of the Genetic Algorithm, multiple non-dominated solutions are obtained for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is selected using multiple MCDM methods. Moreover, the performance of the proposed method is compared with the CS-based methods in the literature.
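
The split into two competing objectives can be sketched in a few lines. The following Python sketch is not the authors' implementation; the random dictionary, population sizes, and the equal-weight scoring step are illustrative assumptions. It evolves a population against the l1-norm and reconstruction-loss objectives, keeps the non-dominated set, and picks a compromise solution with a simple MCDM-style score:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressive-sensing setup (dimensions and values are illustrative):
# y = A @ x_true with sparse x_true. CS-based DoA estimation has the same
# y = Ax form, with A an overcomplete steering dictionary over angles.
m, n = 20, 60
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[7, 23, 41]] = [1.0, -0.8, 0.6]
y = A @ x_true

def objectives(x):
    # The CS objective split into its two competing parts.
    return np.array([np.sum(np.abs(x)),           # sparsity (l1 norm)
                     np.linalg.norm(y - A @ x)])  # reconstruction loss

def pareto_mask(F):
    # Boolean mask of non-dominated rows of the objective matrix F.
    keep = np.ones(len(F), dtype=bool)
    for i, fi in enumerate(F):
        dominates = np.all(F <= fi, axis=1) & np.any(F < fi, axis=1)
        keep[i] = not dominates.any()
    return keep

# Plain (mu + lambda) evolution with Gaussian mutation and a sparsity-biased
# reset -- a compact stand-in for a full multi-objective GA such as NSGA-II.
pop = rng.standard_normal((40, n)) * 0.1
for _ in range(300):
    children = pop + rng.standard_normal(pop.shape) * 0.05
    children[rng.random(children.shape) < 0.7] = 0.0   # push toward sparsity
    merged = np.vstack([pop, children])
    F = np.array([objectives(x) for x in merged])
    front = merged[pareto_mask(F)]
    pop = front[rng.integers(0, len(front), 40)] \
        + rng.standard_normal((40, n)) * 0.01

# MCDM step: normalise both objectives and pick one compromise solution
# from the final Pareto front (equal-weight scoring as the simplest MCDM).
F = np.array([objectives(x) for x in pop])
Fn = (F - F.min(0)) / (F.max(0) - F.min(0) + 1e-12)
best = pop[np.argmin(Fn.sum(axis=1))]
print("support of chosen solution:", np.nonzero(np.abs(best) > 0.1)[0])
```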

Keywords: genetic algorithm, direction of arrival estimation, multi criteria decision making, compressive sensing

Procedia PDF Downloads 135
1442 The Functional Rehabilitation of Peri-Implant Tissue Defects: A Case Report

Authors: Özgür Öztürk, Cumhur Sipahi, Hande Yeşil

Abstract:

Implant-retained restorations commonly consist of a metal framework veneered with ceramic or composite facings. The increasing and expanding use of indirect resin composites in dentistry is a result of innovations in materials and processing techniques. Of special interest to the implant restorative field is that composites transmit significantly lower peak vertical and transverse forces at the peri-implant level compared to metal-ceramic superstructures in implant-supported restorations. A 43-year-old male patient was referred to the department of prosthodontics for an implant-retained fixed prosthesis. The clinical and radiographic examination of the patient demonstrated the presence of an implant in the right mandibular first molar region. A considerable amount of marginal bone loss around the implant was detected in radiographic examinations, combined with a remarkable peri-implant soft tissue deficiency. To minimize the chewing loads transmitted to the implant-bone interface, it was decided to fabricate an indirect composite resin veneered single metal crown over a screw-retained abutment. At the end of the treatment, the functional and aesthetic deficiencies were fully compensated. After a 6-month clinical and radiographic follow-up period, no additional pathologic changes were detected at the implant-bone interface, and the implant-retained restoration did not reveal any serious complication.

Keywords: dental implant, fixed partial dentures, indirect composite resin, peri-implant defects

Procedia PDF Downloads 249
1441 Gas Phase Extraction: An Environmentally Sustainable and Effective Method for the Extraction and Recovery of Metals from Ores

Authors: Kolela J Nyembwe, Darlington C. Ashiegbu, Herman J. Potgieter

Abstract:

Over the past few decades, the demand for metals has increased significantly. This has led to a decline in high-grade ores over time and an increase in mineral complexity and matrix heterogeneity. In addition, there are rising concerns about greener processes and a sustainable environment. Due to these challenges, the mining and metal industry has been forced to develop new technologies that can economically process and recover metallic values from low-grade ores, materials with metal content locked up in industrially processed residues (tailings and slag), and complex-matrix mineral deposits. Several methods to address these issues have been developed, among which are ionic liquids (IL), heap leaching, and bioleaching. Recently, the gas phase extraction technique has been gaining interest because it eliminates many of the problems encountered in conventional mineral processing methods. The technique relies on the formation of volatile metal complexes, which can be removed from the residual solids by a carrier gas. The complexes can then be reduced using the appropriate method to obtain the metal and to regenerate and recover the organic extractant. Laboratory work on gas phase extraction has been conducted for the extraction and recovery of aluminium (Al), iron (Fe), copper (Cu), chromium (Cr), nickel (Ni), lead (Pb), and vanadium (V). In all cases, extraction was found to depend on temperature and mineral surface area. The process technology appears very promising, offers the feasibility of recirculation and organic reagent regeneration, and has the potential to deliver on all promises of a “greener” process.

Keywords: gas-phase extraction, hydrometallurgy, low-grade ore, sustainable environment

Procedia PDF Downloads 114
1440 Comparison of Central Light Reflex Width-to-Retinal Vessel Diameter Ratio between Glaucoma and Normal Eyes by Using Edge Detection Technique

Authors: P. Siriarchawatana, K. Leungchavaphongse, N. Covavisaruch, K. Rojananuangnit, P. Boondaeng, N. Panyayingyong

Abstract:

Glaucoma is a disease that causes visual loss in adults. Glaucoma causes damage to the optic nerve, and its overall pathophysiology is still not fully understood. Vasculopathy may be one of the possible causes of nerve damage. Photographic imaging of retinal vessels by fundus camera during eye examination may complement clinical management. This paper presents an innovation for measuring the central light reflex width-to-retinal vessel diameter ratio (CRR) from digital retinal photographs. Using our edge detection technique, CRRs from glaucoma and normal eyes were compared to examine differences and associations. CRRs were evaluated on fundus photographs of participants from Mettapracharak (Wat Raikhing) Hospital in Nakhon Pathom, Thailand. Fifty-five photographs from normal eyes and twenty-one photographs from glaucoma eyes were included. Participants with hypertension were excluded. In each photograph, CRRs from four retinal vessels, including arteries and veins in the inferotemporal and superotemporal regions, were quantified using the edge detection technique. We found that the mean CRRs of all four retinal arteries and veins were significantly higher in persons with glaucoma than in those without glaucoma (0.34 vs. 0.32, p < 0.05 for the inferotemporal vein; 0.33 vs. 0.30, p < 0.01 for the inferotemporal artery; 0.34 vs. 0.31, p < 0.01 for the superotemporal vein; and 0.33 vs. 0.30, p < 0.05 for the superotemporal artery). From these results, an increase in the CRRs of retinal vessels, as quantitatively measured from fundus photographs, could be associated with glaucoma.
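
The CRR measurement can be illustrated on a single synthetic intensity profile taken across a vessel; the profile shape and the simple half-depth/half-height thresholding below are simplifying assumptions, not the authors' edge detection algorithm:

```python
import numpy as np

# Synthetic 1-D intensity profile across a retinal vessel (values are
# illustrative): a dark vessel on a brighter background, with a bright
# central light reflex riding on top of it.
x = np.arange(100)
background = 120.0
profile = background - 60 * np.exp(-((x - 50) / 12.0) ** 2)  # vessel dip
profile += 35 * np.exp(-((x - 50) / 3.0) ** 2)               # central reflex

# Vessel edges: half-depth crossing between background and the vessel floor.
vessel_floor = profile.min()
vessel_level = (background + vessel_floor) / 2
vessel_idx = np.where(profile <= vessel_level)[0]
vessel_diameter = vessel_idx[-1] - vessel_idx[0] + 1

# Central reflex width: half-height crossing inside the vessel's central core.
lo, hi = vessel_idx[0], vessel_idx[-1]
third = (hi - lo) // 3
core = profile[lo + third : hi - third + 1]
reflex_level = (core.max() + vessel_floor) / 2
reflex_width = int(np.sum(core >= reflex_level))

crr = reflex_width / vessel_diameter
print(f"diameter={vessel_diameter}px reflex={reflex_width}px CRR={crr:.2f}")
```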

Keywords: glaucoma, retinal vessel, central light reflex, image processing, fundus photograph, edge detection

Procedia PDF Downloads 309
1439 Effect of Cellular Water Transport on Deformation of Food Material during Drying

Authors: M. Imran Hossen Khan, M. Mahiuddin, M. A. Karim

Abstract:

Drying is a food processing technique in which simultaneous heat and mass transfer take place from the surface to the center of the sample. Deformation of food materials during drying is a common physical phenomenon that affects the textural quality and taste of the dried product. Most plant-based food materials are porous and hygroscopic in nature and contain about 80-90% water in different cellular environments: the intercellular environment and the intracellular environment. Transport of this cellular water has a significant effect on material deformation during drying. However, understanding the scale of deformation is very complex due to the diverse nature and structural heterogeneity of food materials. Knowledge about the effect of cellular water transport on the deformation of material during drying is crucial for increasing energy efficiency and obtaining better-quality dried foods. Therefore, the primary aim of this work is to investigate the effect of intracellular water transport on material deformation during drying. In this study, apple tissue was taken for the investigation. The experiment was carried out using 1H-NMR T2 relaxometry with a conventional dryer. The experimental results are consistent with the understanding that transport of intracellular water causes cellular shrinkage associated with anisotropic deformation of the whole apple tissue. Interestingly, it is found that the deformation of apple tissue takes place at different stages of drying rather than all at one time. Moreover, it is found that the penetration rate of heat energy, together with the pressure gradient between the intracellular and intercellular environments, is the force responsible for rupturing the cell membrane.

Keywords: heat and mass transfer, food material, intracellular water, cell rupture, deformation

Procedia PDF Downloads 208
1438 Linux Security Management: Research and Discussion on Problems Caused by Different Aspects

Authors: Ma Yuzhe, Burra Venkata Durga Kumar

Abstract:

The computer is a great invention. As people use computers more and more frequently, the demand for PCs is growing, and the performance of computer hardware is also rising to handle more complex processing and operation. However, the operating system, which provides the soul of the computer, stalled in its development for a time. Faced with the high price of UNIX (Uniplexed Information and Computing System), batch after batch of personal computer owners could only give up. The Disk Operating System was too simple and left little room for innovation, so it was not a good choice. And MacOS is a special operating system for Apple computers that cannot be widely used on other personal computers. In this environment, Linux, based on the UNIX system, was born. Linux combines the advantages of earlier operating systems; its modular kernel design makes the core architecture relatively powerful. The Linux system supports all Internet protocols, so it has very good network functionality. Linux supports multiple users, and each user's files are kept independent of the others'. Linux can also multitask, running different programs independently at the same time. Linux is a completely open-source operating system: users can obtain and modify the source code for free. Because of these advantages, Linux has attracted a large number of users and programmers. The Linux system is also constantly upgraded and improved, and many different versions have been issued, suitable for community use and commercial use. The Linux system has good security because it relies on its file permission system. However, as vulnerabilities and hazards are constantly discovered, the security of using the operating system also needs more attention. This article focuses on the analysis and discussion of Linux security issues.

Keywords: Linux, operating system, system management, security

Procedia PDF Downloads 93
1437 Classifier for Liver Ultrasound Images

Authors: Soumya Sajjan

Abstract:

Liver cancer is the most common cancer worldwide in men and women and is one of the few cancers still on the rise. Liver disease is the 4th leading cause of death. According to new NHS (National Health Service) figures, deaths from liver diseases have reached record levels, rising by 25% in less than a decade; heavy drinking, obesity, and hepatitis are believed to be behind the rise. In this study, we focus on the development of a diagnostic classifier for ultrasound liver lesions. Ultrasound (US) sonography is an easy-to-use and widely popular imaging modality because of its ability to visualize many human soft tissues/organs without any harmful effect. This paper provides an overview of the underlying concepts, along with algorithms for processing liver ultrasound images. Naturally, ultrasound liver lesion images contain considerable speckle noise, so developing a classifier for them is a challenging task. We take a fully automatic machine learning approach to developing this classifier. First, we segment the liver image and calculate textural features from the co-occurrence matrix and the run-length method. For classification, a Support Vector Machine (SVM) is used, based on the risk bounds of statistical learning theory. The textural features from the different feature methods are given as input to the SVM individually. Performance analysis on training and test datasets is carried out separately using the SVM model. Whenever an ultrasonic liver lesion image is given to the SVM classifier system, its features are calculated and the lesion is classified as a normal or diseased liver lesion. We hope the result will help physicians identify liver cancer non-invasively.
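
As an illustration of the texture-feature pipeline described above, the following sketch computes co-occurrence-matrix features and trains an SVM. It is a minimal stand-in, not the authors' system: synthetic smooth/noisy patches play the role of segmented liver regions, and the scikit-image/scikit-learn APIs and hyperparameters are assumptions:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def glcm_features(img):
    """Haralick-style texture features from the grey-level co-occurrence matrix."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=64, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def make_patch(noisy):
    # Smooth vs. noisy 64-level patches standing in for normal vs. diseased tissue.
    base = rng.integers(20, 40, size=(64, 64))
    if noisy:
        base = base + rng.integers(0, 24, size=(64, 64))
    return np.clip(base, 0, 63).astype(np.uint8)

X = np.array([glcm_features(make_patch(noisy=i % 2 == 1)) for i in range(120)])
y = np.array([i % 2 for i in range(120)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```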

Keywords: segmentation, Support Vector Machine, ultrasound liver lesion, co-occurrence matrix

Procedia PDF Downloads 393
1436 Aerothermal Analysis of the Brazilian 14-X Hypersonic Aerospace Vehicle at Mach Number 7

Authors: Felipe J. Costa, João F. A. Martos, Ronaldo L. Cardoso, Israel S. Rêgo, Marco A. S. Minucci, Antonio C. Oliveira, Paulo G. P. Toro

Abstract:

The Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics at the Institute for Advanced Studies designed the Brazilian 14-X Hypersonic Aerospace Vehicle, a technological demonstrator endowed with two innovative technologies: waverider technology, to obtain lift from the conical shockwave during hypersonic flight; and a hypersonic airbreathing propulsion system called a scramjet, based on supersonic combustion, to perform flights in Earth's atmosphere at 30 km altitude at Mach numbers 7 and 10. The scramjet is an aeronautical engine without moving parts that promotes compression and deceleration of freestream atmospheric air at the inlet through the conical/oblique shockwaves generated during hypersonic flight. During high-speed flight, the shock waves and viscous forces yield the phenomenon called aerodynamic heating: physically, friction between the fluid filaments and the body, or compression at the stagnation regions of the leading edge, converts kinetic energy into heat within a thin layer of air which blankets the body. The temperature of this layer increases with the square of the speed. This high temperature is concentrated in the boundary layer, from which heat flows readily into the structure of the hypersonic aerospace vehicle. The Fay-Riddell and Eckert methods are applied to the stagnation point and to the flat plate segments in order to calculate the aerodynamic heating. Building on the aerodynamic heating results, it is important to analyze the heat conduction transfer to the 14-X waverider internal structure. ANSYS Workbench software provides the thermal numerical analysis, using the Finite Element Method, of the 14-X waverider unpowered scramjet at 30 km altitude at Mach numbers 7 and 10, in terms of temperature and heat flux. Finally, it is possible to verify whether the internal temperature complies with the requirements for embedded systems, and whether it is necessary to modify the structure in terms of wall thickness and materials.
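
For a rough sense of the stagnation-point heating levels that the Fay-Riddell method computes, a simpler engineering correlation (Sutton-Graves) can be evaluated in a few lines; the atmosphere values and nose radius below are illustrative assumptions, not the paper's inputs:

```python
import math

# Sutton-Graves engineering correlation for stagnation-point heat flux,
# q_s = k * sqrt(rho / r_n) * V**3 (SI units, k ~= 1.7415e-4 for air).
# This is a simpler stand-in for the Fay-Riddell method cited above.
K_AIR = 1.7415e-4          # kg^0.5 / m

rho_30km = 0.0184          # kg/m^3, US Standard Atmosphere near 30 km
a_30km = 301.7             # m/s, speed of sound near 30 km
mach = 7.0
velocity = mach * a_30km   # ~2112 m/s

nose_radius = 0.005        # m, assumed sharp leading edge (illustrative)

q_stag = K_AIR * math.sqrt(rho_30km / nose_radius) * velocity ** 3
print(f"stagnation-point heat flux ~ {q_stag / 1e6:.2f} MW/m^2")
```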

Keywords: aerodynamic heating, hypersonic, scramjet, thermal analysis

Procedia PDF Downloads 431
1435 Histone Deacetylase Inhibitor Valproic Acid Sensitizes Human Melanoma Cells to an Alkylating Agent and a PARP Inhibitor

Authors: Małgorzata Drzewiecka, Tomasz Śliwiński, Maciej Radek

Abstract:

The inhibition of histone deacetylases (HDACs) holds promise as a potential anti-cancer therapy because histone and non-histone protein acetylation is frequently disrupted in cancer, leading to cancer initiation and progression. Additionally, histone deacetylase inhibitors (HDACi), such as the class I HDAC inhibitor valproic acid (VPA), have been shown to enhance the effectiveness of DNA-damaging factors such as cisplatin or radiation. In this study, we found that using VPA in combination with talazoparib (BMN-673, a PARP1 inhibitor, PARPi) and/or dacarbazine (DTIC, an alkylating agent) resulted in increased DNA double-strand breaks (DSBs) and reduced survival and proliferation of melanoma cells, while not affecting primary melanocytes. Furthermore, pharmacologic inhibition of class I HDACs sensitizes melanoma cells to apoptosis following exposure to DTIC and BMN-673. In addition, inhibition of HDACs sensitized melanoma cells to dacarbazine and BMN-673 in melanoma xenografts in vivo. At the mRNA and protein level, the histone deacetylase inhibitor downregulated RAD51 and FANCD2. This study shows that combining an HDACi, an alkylating agent, and a PARPi could potentially enhance the treatment of melanoma, which is known as one of the most aggressive malignant tumors. The findings presented here point to a scenario in which HDACs, by enhancing the HR-dependent repair of DSBs created during the processing of DNA lesions, are essential nodes in the resistance of malignant melanoma cells to methylating agent-based therapies.

Keywords: melanoma, HDAC, PARP inhibitor, valproic acid

Procedia PDF Downloads 62
1434 Assessment of Heavy Metals and Radionuclide Concentrations in Mafikeng Waste Water Treatment Plant

Authors: M. Mathuthu, N. N. Gaxela, R. Y. Olobatoke

Abstract:

A study was carried out to assess the heavy metal and radionuclide concentrations of water from the wastewater treatment plant in Mafikeng Local Municipality to evaluate treatment efficiency. Ten water samples were collected from various stages of water treatment, including sewage delivered to the plant, the two treatment stages, the effluent, and the community. The samples were analyzed for heavy metal content using Inductively Coupled Plasma Mass Spectrometry. Gross α/β activity concentration in the water samples was evaluated by liquid scintillation counting, whereas the concentrations of individual radionuclides were measured by gamma spectroscopy. The results showed a marked reduction in heavy metal concentrations, from 3 µg/L (As) to 670 µg/L (Na) in the sewage entering the plant to 2 µg/L (As) to 170 µg/L (Fe) in the effluent. Beta activity was not detected in the water samples except in the incoming sewage, in which the concentration was within reference limits. However, the gross α activity in all the water samples (7.7-8.02 Bq/L) exceeded the 0.1 Bq/L limit set by the World Health Organization (WHO). Gamma spectroscopy analysis revealed very high concentrations of 235U and 226Ra in the water samples, with the lowest concentrations (9.35 and 5.44 Bq/L, respectively) in the incoming sewage and the highest concentrations (73.8 and 47 Bq/L, respectively) in the community water, suggesting contamination along the water processing line. All the values were considerably higher than the limits of the South African Target Water Quality Range and the WHO. However, the estimated total doses of the two radionuclides for the analyzed water samples (10.62-45.40 µSv yr⁻¹) were all well below the reference level of committed effective dose of 100 µSv yr⁻¹ recommended by the WHO.

Keywords: gross α/β activity, heavy metals, radionuclides, 235U, 226Ra, water sample

Procedia PDF Downloads 430
1433 Thermal Evaluation of Printed Circuit Board Design Options and Voids in Solder Interface by a Simulation Tool

Authors: B. Arzhanov, A. Correia, P. Delgado, J. Meireles

Abstract:

Quad Flat No-Lead (QFN) packages have become very popular for tuners, converters, and audio amplifiers, among other applications needing efficient power dissipation in small footprints. Since the semiconductor junction temperature (TJ) is a critical parameter for product quality, and to ensure that the die temperature does not exceed the maximum allowable TJ, a thermal analysis conducted in an early development phase is essential to avoid repeated re-design processes with huge losses in cost and time. A simulation tool capable of estimating the die temperature of components with QFN packages was developed. It allows establishing a non-empirical way to define an acceptance criterion for the amount of voids in the solder interface between the exposed pad and the Printed Circuit Board (PCB), to be applied during the industrialization process, and to evaluate the impact of PCB design parameters. Targeting the PCB layout designer as an end user of the application, a user-friendly graphical interface (GUI) was implemented, allowing the user to introduce design parameters in a convenient and secure way while hiding all the complexity of the finite element simulation process. This cost-effective tool makes the simulation process transparent and provides useful outputs within acceptable time; it can be adopted by PCB designers, preventing potential risks during the design stage and making the product economically efficient by not oversizing it. This article gathers relevant information related to the design and implementation of the developed tool, presenting a parametric study conducted with it. The simulation tool was experimentally validated using a Thermal Test Chip (TTC) in a QFN open cavity, in order to measure the junction temperature (TJ) directly on the die under controlled, known conditions. The paper provides a short overview of standard thermal solutions and their impact on exposed-pad packages (i.e., QFN), accurately describes the methods and techniques that the system designer should use to achieve optimum thermal performance, and demonstrates the effect of system-level constraints on the thermal performance of the design.
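
A first-order sanity check of the junction-temperature problem the tool addresses uses the standard estimate TJ = TA + P · θJA; the sketch below (all values assumed, not taken from the paper's FEM results) shows how a solder-void-induced increase in θJA raises TJ:

```python
# First-order junction-temperature estimate commonly used for QFN packages:
# TJ = TA + P * theta_JA. All numbers below are illustrative assumptions.
t_ambient = 85.0        # degC, worst-case ambient
power = 2.0             # W dissipated in the die
theta_ja_good = 30.0    # K/W, well-soldered exposed pad on a multilayer PCB
theta_ja_voided = 38.0  # K/W, same design with significant solder voiding

for label, theta in [("low voiding", theta_ja_good),
                     ("high voiding", theta_ja_voided)]:
    tj = t_ambient + power * theta
    print(f"{label}: TJ = {tj:.1f} degC")
```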

Keywords: QFN packages, exposed pads, junction temperature, thermal management and measurements

Procedia PDF Downloads 242
1432 Model Order Reduction of Complex Airframes Using Component Mode Synthesis for Dynamic Aeroelasticity Load Analysis

Authors: Paul V. Thomas, Mostafa S. A. Elsayed, Denis Walch

Abstract:

Airframe structural optimization at different design stages results in new mass and stiffness distributions which modify the critical design loads envelope. Determination of aircraft critical loads is an extensive analysis procedure which involves simulating the aircraft at thousands of load cases as defined in the certification requirements. It is computationally prohibitive to use a Global Finite Element Model (GFEM) for the load analysis; hence, reduced-order structural models are required which closely represent the dynamic characteristics of the GFEM. This paper presents the implementation of the Component Mode Synthesis (CMS) method for the generation of a high-fidelity Reduced Order Model (ROM) of complex airframes. Here, a sub-structuring technique is used to divide the complex higher-order airframe dynamical system into a set of subsystems. Each subsystem is reduced to fewer degrees of freedom using matrix projection onto a carefully chosen reduced-order basis subspace. The reduced structural matrices are assembled for all the subsystems through interface coupling, and the dynamic response of the total system is solved. The CMS method is employed to develop the ROM of a Bombardier Aerospace business jet, which is coupled with an aerodynamic model for dynamic aeroelasticity load analysis under gust turbulence. Another set of dynamic aeroelastic loads is also generated employing a stick model of the same aircraft. The stick model is the reduced-order modelling methodology commonly used in the aerospace industry, based on stiffness generation by unitary load application. The extracted aeroelastic loads from both models are compared against those generated employing the GFEM. Critical loads, modal participation factors, and modal characteristics of the different ROMs are investigated and compared against those of the GFEM. Results obtained show that the ROM generated using the Craig-Bampton CMS reduction process has superior dynamic characteristics compared to the stick model.
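
A minimal numerical sketch of the Craig-Bampton reduction named in the results is given below, applied to a toy spring-mass chain rather than an airframe GFEM (system size, stiffness values, and the number of retained fixed-interface modes are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import eigh

# Toy fixed-fixed spring-mass chain standing in for a substructure's FE model.
n, k = 10, 1e4
K = 2 * k * np.eye(n)
for j in range(n - 1):
    K[j, j + 1] = K[j + 1, j] = -k
M = np.eye(n)  # unit masses

b = [0, n - 1]                              # boundary (interface) DOFs, kept physically
i = [d for d in range(n) if d not in b]     # interior DOFs, to be condensed

Kib, Kii = K[np.ix_(i, b)], K[np.ix_(i, i)]
Mii = M[np.ix_(i, i)]

# Static constraint modes: interior response to unit boundary displacements.
Psi = -np.linalg.solve(Kii, Kib)

# Fixed-interface normal modes: keep only the first few interior eigenvectors.
n_modes = 3
_, Phi = eigh(Kii, Mii)
Phi = Phi[:, :n_modes]

# Assemble the Craig-Bampton transformation and project the full matrices.
T = np.zeros((n, len(b) + n_modes))
T[b, :len(b)] = np.eye(len(b))
T[np.ix_(i, range(len(b)))] = Psi
T[np.ix_(i, range(len(b), len(b) + n_modes))] = Phi
K_red, M_red = T.T @ K @ T, T.T @ M @ T

# The 5-DOF ROM reproduces the lowest natural frequencies of the full model.
print(np.sqrt(eigh(K, M, eigvals_only=True)[:4]))
print(np.sqrt(eigh(K_red, M_red, eigvals_only=True)[:4]))
```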

Keywords: component mode synthesis, craig bampton reduction method, dynamic aeroelasticity analysis, model order reduction

Procedia PDF Downloads 194
1431 Energy Efficient Massive Data Dissemination Through Vehicle Mobility in Smart Cities

Authors: Salman Naseer

Abstract:

One of the main challenges of operating a smart city (SC) is collecting the massive data generated from multiple data sources (DS) and transmitting them to the control units (CU) for further data processing and analysis. These ever-increasing data demands not only require more and more capacity in the transmission channels but also result in resource over-provisioning to meet resilience requirements, and thus unavoidable waste because of data fluctuations throughout the day. In addition, the high energy consumption (EC) and carbon discharges of these data transmissions pose serious issues to the environment we live in. Therefore, to overcome the issues of intensive EC and carbon emissions (CE) of massive data dissemination in smart cities, we propose an energy-efficient and carbon-reducing approach that utilizes the daily mobility of existing vehicles as an alternative communication channel to accommodate data dissemination in smart cities. To illustrate the effectiveness and efficiency of our approach, we take Auckland City in New Zealand as an example, assuming massive data generated by various sources geographically scattered throughout the Auckland region must reach the control centres located in the city centre. The numerical results show that our proposed approach can provide up to 5 times lower delay when transferring large volumes of data by utilizing existing daily vehicle mobility than the conventional transmission network. Moreover, our proposed approach offers about 30% less EC and CE than the conventional network transmission approach.

Keywords: smart city, delay tolerant network, infrastructure offloading, opportunistic network, vehicular mobility, energy consumption, carbon emission

Procedia PDF Downloads 127
1430 Photocatalytic Active Surface of LWSCC Architectural Concretes

Authors: P. Novosad, L. Osuska, M. Tazky, T. Tazky

Abstract:

Current trends in the building industry are oriented towards the reduction of maintenance costs and the ecological benefits of buildings and building materials. Surface treatment of building materials with photocatalytically active titanium dioxide added into the concrete can offer a good solution in this context. Architectural concrete has one disadvantage: dust and fouling keep settling on its surface, diminishing its aesthetic value and increasing maintenance costs. The concrete surface, a silicate material with open porosity, fulfils the conditions for effective photocatalysis, in particular the self-cleaning properties of surfaces. This modern material is advantageous in particular for direct finishing and architectural concrete applications. If photoactive titanium dioxide is part of the top layers of road concrete on busy roads and of the facades of the buildings surrounding these roads, exhaust fumes can be degraded with the aid of sunshine; hence, the environmental load will decrease. It is clear that options for removing pollutants like nitrogen oxides (NOx) must be found. Not only do these gases present a health risk, they also cause the degradation of the surfaces of concrete structures. The photocatalytic properties of titanium dioxide can, in the long term, contribute to the enhanced appearance of surface layers, eliminate harmful pollutants dispersed in the air, and facilitate the conversion of pollutants into less toxic forms (e.g., NOx to HNO3). This paper describes verification of the photocatalytic properties of titanium dioxide and presents the results of mechanical and physical tests on samples of architectural lightweight self-compacting concretes (LWSCC). The very essence of the use of LWSCCs is their rheological ability to flow into otherwise hard-to-access or inaccessible construction areas, or sections thereof where concrete compaction would be a problem or where vibration is completely excluded. They are also able to create solid monolithic elements with a large variety of shapes, while the concrete at the same time meets the requirements posed by chemical aggression and the influences of the surrounding environment. Due to their viscosity, LWSCCs are able to imprint the formwork elements into their structure and thus create high-quality lightweight architectural concretes.

Keywords: photocatalytic concretes, titanium dioxide, architectural concretes, Lightweight Self-Compacting Concretes (LWSCC)

Procedia PDF Downloads 280
1429 Self-Energy Sufficiency Assessment of the Biorefinery Annexed to a Typical South African Sugar Mill

Authors: M. Ali Mandegari, S. Farzad, J. F. Görgens

Abstract:

Sugar is one of the main agricultural industries in South Africa, and the livelihoods of approximately one million South Africans are indirectly dependent on it. The industry is struggling economically and should reinvent itself in order to ensure long-term sustainability. A second-generation biorefinery is defined as a process that uses waste fibrous material for the production of biofuel, chemicals, animal feed, and electricity. Bioethanol is by far the most widely used biofuel for transportation worldwide, and many of the challenges facing bioethanol production have been solved. A biorefinery annexed to the existing sugar mill for the production of bioethanol and electricity is proposed to the sugar industry and is addressed in this study. Since flowsheet development is the key element of the bioethanol process, in this work a biorefinery (bioethanol and electricity production) annexed to a typical South African sugar mill, considering 65 ton/h of dry sugarcane bagasse and tops/trash as feedstock, was simulated. Aspen Plus™ V8.6 was used as the simulator, and a realistic simulation development approach was followed to reflect the practical behaviour of the plant. The latest results of other research covering pretreatment, hydrolysis, fermentation, enzyme production, bioethanol production, and other supplementary units such as evaporation, water treatment, boiler, and steam/electricity generation were adopted to establish a comprehensive biorefinery simulation. Steam explosion with SO2 was selected for pretreatment due to minimal inhibitor production, and a simultaneous saccharification and fermentation (SSF) configuration was adopted for the enzymatic hydrolysis and fermentation of the cellulose hydrolysate. Bioethanol purification was simulated with two distillation columns with side streams, and fuel-grade bioethanol (99.5%) was achieved using molecular sieves in order to minimize capital and operating costs. The boiler and steam/power generation were also modelled using industrial design data. The results indicate that the annexed biorefinery can be self-energy sufficient when 35% of the feedstock (tops/trash) bypasses the biorefinery process and is loaded directly into the boiler to produce sufficient steam and power for the sugar mill and the biorefinery plant.

Keywords: biorefinery, self-energy sufficiency, tops/trash, bioethanol, electricity

Procedia PDF Downloads 524
1428 Geographical Indication Protection for Agricultural Products: Contribution for Achieving Food Security in Indonesia

Authors: Mas Rahmah

Abstract:

Indonesia is the most populous Southeast Asian nation, and as Indonesia's population is constantly growing, food security has become a crucial trending issue. Although Indonesia has more than enough natural resources and agricultural products to ensure food security for all, Indonesia still faces food security problems because of adverse weather conditions, an increasing population, political instability, economic factors (unemployment, rising food prices), and a dependent system of agriculture. This paper will analyze how Geographical Indication (GI) can aid in transforming the Indonesian agriculture-dependent system by tapping the unique attributes of its quality products, since Indonesia has many agricultural products with unique quality and special characteristics associated with geographical factors, such as Toraja Coffee, Alor Vanilla, Banda Nutmeg, Java Tea, Deli Tobacco, Cianjur Rice, etc. This paper argues that the reputation of agricultural products and their intrinsic quality should be protected under GI, because GI will provide benefits supporting the food security program. Therefore, this paper will set out the benefits of GI protection, such as increasing productivity, improving the export of GI products, creating employment, adding economic value to products, and increasing the diversity of supply of natural and unique quality products, that can contribute to food security. The analysis will finally conclude that promoting GI may indirectly contribute to food security by adding value through incorporating territory-specific cultural, environmental and social qualities into the production, processing and development of unique local, niche and special agricultural products.

Keywords: geographical indication, food security, agricultural product, Indonesia

Procedia PDF Downloads 357
1427 Evaluation of the Safety Status of Beef Meat During Processing at a Slaughterhouse in Bouira, Algeria

Authors: A. Ameur Ameur, H. Boukherrouba

Abstract:

In red meat slaughterhouses, a significant number of organs and carcasses are seized because of the presence of lesions of various origins. The objective of this study is to characterize and evaluate the frequency of these lesions in the slaughterhouse of the Wilaya of Bouira. Of the 2646 cattle slaughtered and inspected, 72% of the carcasses underwent no seizure, while 28% underwent at least one seizure. 325 lungs (44%), 164 livers (22%), and 149 hearts (21%) were the main organs seized; 38 kidneys (5%), 33 udders (4%), and 16 whole carcasses (2%) were seized less frequently. The main reason for seizure was hydatid cyst for most of the seized organs, such as the lungs (64.5%), livers (51.8%), and hearts (23.2%), with hydronephrosis for the kidneys (39.4%) and chronic mastitis (54%) for the udders. In second place, we recorded pneumonia (16%) for the lungs and chronic fascioliasis (25%) for the livers. A significant difference was observed (p < 0.0001) by sex, breed, origin, and age across all cattle subject to seizure, and specific seizure patterns and pathologies were recorded by breed. The local breed presented 75.2% of the hydatid cysts, 95% of the chronic fascioliasis, and 60% of the pyelonephritis, whereas the improved breed presented the majority of the respiratory lesions, including pneumonia (64%), chronic tuberculosis (64%), and mastitis (76%). These results are an important step in the implementation of the concept of risk assessment as the scientific basis of food legislation, through the identification and characterization of the macroscopic lesions leading to withdrawals of meat, and in establishing the place of these lesions within recommended risk assessment systems (HACCP).

Keywords: slaughterhouses, meat safety, seizure patterns, HACCP

Procedia PDF Downloads 439
1426 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on crop mechanistic modeling: they describe crop growth in interaction with the environment as dynamical systems. But the calibration process of the dynamical system presents much difficulty, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data, final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression or Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbors, Artificial Neural Networks, and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate crop prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method for calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to highlight the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
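
The 5-fold cross-validation comparison of the data-driven methods can be sketched with scikit-learn; the snippet below uses a synthetic stand-in for the USDA dataset and untuned hyperparameters, so the numbers it prints are illustrative only:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.linear_model import Ridge, Lasso
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Synthetic stand-in for the 720-record county-scale yield/climate dataset.
X, y = make_regression(n_samples=720, n_features=15, noise=20.0, random_state=0)

models = {
    "Ridge": Ridge(alpha=1.0),
    "Lasso": Lasso(alpha=0.5),
    "PLS": PLSRegression(n_components=5),
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
    "kNN": KNeighborsRegressor(n_neighbors=7),
    "SVR": SVR(C=100.0),   # hyperparameters untuned, for illustration only
}

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=cv)
    rmsep = np.sqrt(mean_squared_error(y, pred))
    maep = mean_absolute_error(y, pred)
    print(f"{name:>12}: RMSEP={rmsep:7.2f}  MAEP={maep:7.2f}")
```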

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 217
1425 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry

Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich

Abstract:

The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. For assessing the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, relying only on the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. These interferograms were obtained by means of a Mach-Zehnder interferometer using a beam of light from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured and analyzed, and the full width at half maximum (FWHM) was measured using the Amcap and ImageJ software. The FWHM measurements were organized into three groups. It was observed that the average of the FWHMs of group A shows almost linear behavior; therefore, it is probable that the exposure time is not relevant when the oil is kept at a constant temperature. Group B exhibits a slightly exponential trend when the temperature rises between 373 K and 393 K. Results of Student's t-test show, at the 95% level (α = 0.05), the existence of variation in the molecular composition of both samples. Furthermore, we found a correlation between the iodine indexes (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
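
As an illustration of the FWHM measurement underlying the analysis, the following sketch computes the full width at half maximum of a synthetic Gaussian-shaped fringe profile (the profile itself is an assumption for demonstration, not data from this work):

```python
import numpy as np

# Minimal FWHM measurement on a synthetic interference-fringe intensity profile.
x = np.linspace(-5, 5, 1001)
sigma = 0.8
intensity = np.exp(-x**2 / (2 * sigma**2))   # Gaussian-shaped fringe (assumed)

half_max = intensity.max() / 2.0
above = np.where(intensity >= half_max)[0]   # indices above half maximum
fwhm = x[above[-1]] - x[above[0]]

# For a Gaussian, FWHM = 2*sqrt(2*ln 2)*sigma ~ 2.3548*sigma; sanity check:
print(f"measured FWHM = {fwhm:.3f}, expected = {2.3548 * sigma:.3f}")
```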

Keywords: food industry, interferometric, oils, quality control

Procedia PDF Downloads 359
1424 Q-Map: Clinical Concept Mining from Clinical Documents

Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala

Abstract:

Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, image analytics, etc. Most of the data in the field are well-structured and available in numerical or categorical formats which can be used for experiments directly. But at the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature; it can be found in the form of discharge summaries, clinical notes, and procedural notes, which are in human-written narrative format and have neither a relational model nor a standard grammatical structure. An important step in utilizing these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map in this paper, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
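
The flavor of dictionary-indexed string matching described above can be sketched as follows; this is not the actual Q-Map implementation, and the three-entry vocabulary of UMLS-style concepts is a hypothetical fragment standing in for curated knowledge sources:

```python
from collections import defaultdict

# Hypothetical vocabulary standing in for curated knowledge sources
# (the real Q-Map index and matching rules are more elaborate).
VOCAB = {
    "myocardial infarction": "C0027051",
    "diabetes mellitus": "C0011849",
    "hypertension": "C0020538",
}

# Index phrases by their first token so matching only scans candidates.
index = defaultdict(list)
for phrase, cui in VOCAB.items():
    toks = phrase.split()
    index[toks[0]].append((toks, cui))

def extract(text):
    """Return (concept, code) pairs found by exact token-sequence matching."""
    tokens = [t.strip(".,;:()") for t in text.lower().split()]
    hits = []
    for pos, tok in enumerate(tokens):
        for phrase_toks, cui in index.get(tok, []):
            if tokens[pos:pos + len(phrase_toks)] == phrase_toks:
                hits.append((" ".join(phrase_toks), cui))
    return hits

note = "Patient with known Diabetes Mellitus and hypertension, rule out myocardial infarction."
print(extract(note))
```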

Keywords: information retrieval, unified medical language system, syntax based analysis, natural language processing, medical informatics

Procedia PDF Downloads 117
1423 Using Non-Negative Matrix Factorization Based on Satellite Imagery for the Collection of Agricultural Statistics

Authors: Benyelles Zakaria, Yousfi Djaafar, Karoui Moussa Sofiane

Abstract:

Agriculture is fundamental and remains an important sector of the Algerian economy; based on traditional techniques and structures, it generally serves domestic consumption. The collection of agricultural statistics in Algeria is done using traditional methods, which consist of investigating land use through surveys and field investigations. These statistics suffer from problems such as poor data quality, the long delay between collection and final availability, and high cost compared to their limited use. The objective of this work is to develop a processing chain for a reliable inventory of agricultural land by developing and implementing a new method of information extraction. Indeed, this methodology allowed us to combine remote sensing data and field data to collect statistics on the areas of different land types. The contribution of remote sensing to the improvement of agricultural statistics, in terms of area, was studied in the wilaya of Sidi Bel Abbes. It is in this context that we applied a method for extracting information from satellite images called non-negative matrix factorization (NMF), which does not consider the pixel as a single entity but instead looks for the components within the pixel itself. The results obtained by applying NMF were compared with field data and with the results obtained by the maximum likelihood method. We observed close agreement between the main NMF results and the field data. We believe that this method of extracting information from satellite data leads to interesting results for different types of land use.
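
The unmixing idea, that NMF looks for components within each pixel rather than treating the pixel as a single entity, can be sketched with scikit-learn on simulated mixed spectra (the endmember count, band count, and noise level are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Toy hyperspectral unmixing: each pixel spectrum is a non-negative mix of a
# few endmember spectra (e.g., crop, bare soil, water). Dimensions are made up.
n_bands, n_endmembers, n_pixels = 50, 3, 2000
endmembers = np.abs(rng.standard_normal((n_endmembers, n_bands))) + 0.1
abundances = rng.dirichlet(np.ones(n_endmembers), size=n_pixels)  # sum to 1
pixels = abundances @ endmembers + 0.01 * rng.random((n_pixels, n_bands))

# NMF factorises pixels ~ W @ H: W holds per-pixel abundances and H the
# endmember spectra, i.e. it looks inside each mixed pixel.
model = NMF(n_components=n_endmembers, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(pixels)          # (n_pixels, n_endmembers) abundances
H = model.components_                    # (n_endmembers, n_bands) spectra

# Per-pixel dominant endmember gives a crude land-use map for area statistics.
dominant = W.argmax(axis=1)
print(np.bincount(dominant) / n_pixels)  # fraction of pixels per land-cover type
```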

Keywords: blind source separation, hyper-spectral image, non-negative matrix factorization, remote sensing

Procedia PDF Downloads 405
1422 Integrated Free Space Optical Communication and Optical Sensor Network System with Artificial Intelligence Techniques

Authors: Yibeltal Chanie Manie, Zebider Asire Munyelet

Abstract:

5G and 6G technologies offer enhanced quality of service with high data transmission rates, which necessitates the implementation of the Internet of Things (IoT) in the 5G/6G architecture. In this paper, we propose the integration of free space optical communication (FSO) with fiber sensor networks for IoT applications. Recently, free-space optical communications have been gaining popularity as an effective alternative technology to the limited availability of radio frequency (RF) spectrum. FSO is gaining popularity due to its flexibility, high achievable optical bandwidth, and low power consumption in several communication applications, such as disaster recovery, last-mile connectivity, drones, surveillance, backhaul, and satellite communications. Hence, high-speed FSO is an optimal choice for wireless networks to satisfy the full potential of 5G/6G technology, offering speeds of 100 Gbit/s or more in IoT applications. Moreover, machine learning must be integrated into the design, planning, and optimization of future optical wireless communication networks in order to actualize this vision of intelligent processing and operation. In addition, fiber sensors are important for achieving real-time, accurate, and smart monitoring in IoT applications. Accordingly, we propose deep learning techniques to estimate the strain changes and peak wavelengths of multiple fiber Bragg grating (FBG) sensors using only the FBG spectra obtained from a real experiment.
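
A minimal sketch of the regression task described above, assuming simulated two-peak FBG spectra in place of the paper's experimental data and a small scikit-learn MLP rather than the authors' network (grid, peak ranges, and noise level are made-up assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated stand-in: spectra of two FBG sensors on a common wavelength grid,
# with the network regressing both Bragg peak wavelengths (nm).
wl = np.linspace(1548, 1552, 200)

def spectrum(peaks, width=0.2):
    # Sum of Gaussian-shaped reflection peaks (idealised FBG response).
    return sum(np.exp(-((wl - p) / width) ** 2) for p in peaks)

n = 3000
targets = np.column_stack([rng.uniform(1548.5, 1549.8, n),
                           rng.uniform(1550.2, 1551.5, n)])
X = np.array([spectrum(t) for t in targets]) + 0.02 * rng.standard_normal((n, wl.size))

X_tr, X_te, y_tr, y_te = train_test_split(X, targets, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)

err = np.abs(net.predict(X_te) - y_te)
print(f"mean absolute peak-wavelength error: {err.mean() * 1000:.1f} pm")
```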

Keywords: optical sensor, artificial intelligence, Internet of Things, free-space optics

Procedia PDF Downloads 43
1421 Fast Return Path Planning for Agricultural Autonomous Terrestrial Robot in a Known Field

Authors: Carlo Cernicchiaro, Pedro D. Gaspar, Martim L. Aguiar

Abstract:

The agricultural sector is becoming more critical than ever in view of the expected overpopulation of the Earth. The introduction of robotic solutions in this field is an increasingly researched topic, to make the most of the Earth's resources, to avoid the wear and tear on the human body caused by harsh agricultural work, and to open the possibility of constant, careful processing 24 hours a day. This project is realized for a terrestrial autonomous robot intended to navigate in an orchard, collecting fallen peaches below the trees. When it receives the signal indicating low battery, it has to return to the docking station, where it replaces its battery, and then return to the last work point and resume its routine. A preset path is considered, in orchards with tree rows of variable length, which the robot follows iteratively using the D* algorithm. In case of low battery, the D* algorithm is also used to determine the fastest return path to the docking station, as well as to come back from the docking station to the last work point. MATLAB simulations were performed to analyze the flexibility and adaptability of the developed algorithm. The simulation results show enormous potential for adaptability, particularly in view of the irregularity of the orchard field, since it is not flat and undergoes modifications over time from fallen branches as well as from other obstacles and constraints. The D* algorithm determines the best route in spite of the irregularity of the terrain. Moreover, this work shows a possible solution to improve the tracking of initial points and reduce the time between movements.
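
The project uses D*; as a compact illustration of the underlying grid-based search for a fastest return path, the sketch below runs plain A* on a made-up orchard-like occupancy grid (D* adds incremental replanning on top of the same search idea):

```python
import heapq

# Made-up occupancy grid: '#' marks tree rows/obstacles, '.' is free ground.
GRID = [
    "........",
    ".##.###.",
    ".##.###.",
    "........",
    ".###.##.",
    "........",
]

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]          # (f-score, cell) priority queue
    g = {start: 0}
    parent = {}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:              # reconstruct path back to start
            path = [cur]
            while cur in parent:
                cur = parent[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    parent[(nr, nc)] = cur
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan heuristic
                    heapq.heappush(open_set, (ng + h, (nr, nc)))
    return None

# Last work point -> docking station (coordinates are illustrative).
print(astar(GRID, start=(0, 7), goal=(5, 0)))
```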

Keywords: path planning, fastest return path, agricultural autonomous terrestrial robot, docking station

Procedia PDF Downloads 123
1420 Comparative Study of the Effects of Process Parameters on the Yield of Oil from Melon Seed (Cococynthis citrullus) and Coconut Fruit (Cocos nucifera)

Authors: Ndidi F. Amulu, Patrick E. Amulu, Gordian O. Mbah, Callistus N. Ude

Abstract:

A comparative analysis of the properties of melon seed and coconut fruit and their oil yields was carried out in this work using standard AOAC analytical techniques. The results of the analysis revealed that the moisture contents of the samples studied are 11.15% (melon) and 7.59% (coconut), and the crude lipid contents are 46.10% (melon) and 55.15% (coconut). The treatment combinations used (leaching time, leaching temperature, and solute:solvent ratio) showed a significant difference (p < 0.05) in yield between the samples, with melon seed flour having a higher percentage range of oil yield (41.30-52.90%) than coconut (36.25-49.83%). Physical characterization of the extracted oils was also carried out. The values obtained for refractive index are 1.487 (melon seed oil) and 1.361 (coconut oil), and the viscosities are 0.008 (melon seed oil) and 0.002 (coconut oil). Chemical analysis of the extracted oils shows acid values of 1.00 mg NaOH/g oil (melon oil) and 10.050 mg NaOH/g oil (coconut oil), and saponification values of 187.00 mg KOH/g (melon oil) and 183.26 mg KOH/g (coconut oil). The iodine value of the melon oil was 75.00 mg I₂/g, and that of the coconut oil was 81.00 mg I₂/g. The standard statistical package Minitab version 16.0 was used for the regression analysis and analysis of variance (ANOVA); the same software was also used to optimize the leaching process. Both samples gave their highest oil yields at the same optimal conditions: a solute-solvent ratio of 40 g/ml, a leaching time of 2 hours, and a leaching temperature of 50 °C, giving yields of ≥ 52% (melon seed) and ≥ 48% (coconut). Both samples studied have potential for oil production, with melon seed giving the higher yield.

Keywords: coconut, melon, optimization, processing

Procedia PDF Downloads 426
1419 High Temperature Oxidation of Additively Manufactured Silicon Carbide/Carbon Fiber Nanocomposites

Authors: Saja M. Nabat Al-Ajrash, Charles Browning, Rose Eckerle, Li Cao, Robyn L. Bradford, Donald Klosterman

Abstract:

An additive manufacturing process and subsequent pyrolysis cycle were used to fabricate SiC matrix/carbon fiber hybrid composites. The matrix was fabricated using a mixture of a preceramic polymer and acrylate monomers, while a polyacrylonitrile (PAN) precursor was used to fabricate fibers via electrospinning. The precursor matrix with reinforcing fibers at 0, 2, 5, or 10 wt% was printed using digital light processing, and both were simultaneously pyrolyzed to yield the final ceramic matrix composite structure. After pyrolysis, XRD and SAED analysis proved the existence of SiC nanocrystals and a turbostratic carbon structure in the matrix, while the reinforcement phase was shown to have a turbostratic carbon structure similar to commercial carbon fibers. Thermogravimetric analysis (TGA) in air up to 1400 °C was used to evaluate the oxidation resistance of this material. TGA results showed some weight loss due to oxidation of SiC and/or carbon up to about 900 °C, followed by weight gain up to about 1200 °C due to the formation of a protective SiO2 layer. Although increasing the carbon fiber content negatively impacted the total mass loss for the first heating cycle, exposure of the composite to a second heating run in air revealed negligible weight change. This is explained by the formation of the SiO2 layer, which acts as a protective film that prevents oxygen diffusion. Oxidation of SiC and the formation of a glassy layer have been shown to protect the sample from further oxidation, as well as to provide healing of surface cracks and defects, as revealed by SEM analysis.

Keywords: silicon carbide, carbon fibers, additive manufacturing, composite

Procedia PDF Downloads 60
1418 Identification of High-Rise Buildings Using Object Based Classification and Shadow Extraction Techniques

Authors: Subham Kharel, Sudha Ravindranath, A. Vidya, B. Chandrasekaran, K. Ganesha Raj, T. Shesadri

Abstract:

Digitization of urban features is a tedious and time-consuming process when done manually. In addition to this problem, Indian cities have complex habitat patterns and convoluted clustering patterns, which make it even more difficult to map features. This paper makes an attempt to classify urban objects in satellite images using object-oriented classification techniques, in which various classes such as vegetation, water bodies, buildings, and shadows adjacent to the buildings were mapped semi-automatically. The building layer obtained as a result of object-oriented classification was used along with already available building layers. The main focus, however, lay in the extraction of high-rise buildings using spatial technology, digital image processing, and modeling, which would otherwise be a very difficult task to carry out manually. Results indicated a considerable rise in the total number of buildings in the city. High-rise buildings were successfully mapped using satellite imagery and spatial technology, along with logical reasoning and mathematical considerations. The results clearly depict the ability of remote sensing and GIS to solve complex problems in urban scenarios, such as studying urban sprawl and identifying more complex features in an urban area, like high-rise buildings and multi-dwelling units. The object-oriented technique has proven to be effective and yielded an overall efficiency of 80 percent in the classification of high-rise buildings.
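
The geometric core of shadow-based height estimation can be stated in one formula: building height equals shadow length times the tangent of the sun elevation angle. A small worked example (all values assumed, not taken from this study) is given below:

```python
import math

# Building height from shadow length and sun elevation -- the geometric basis
# of shadow-based high-rise detection. All values below are illustrative.
shadow_length_m = 42.0     # measured from imagery after shadow extraction
sun_elevation_deg = 55.0   # from the scene's acquisition metadata

height_m = shadow_length_m * math.tan(math.radians(sun_elevation_deg))
floors = height_m / 3.0    # assuming roughly 3 m per storey
print(f"estimated height: {height_m:.1f} m (~{floors:.0f} floors)")
```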

Keywords: object oriented classification, shadow extraction, high-rise buildings, satellite imagery, spatial technology

Procedia PDF Downloads 133
1417 Knowledge Co-Production on Future Climate-Change-Induced Mass-Movement Risks in Alpine Regions

Authors: Elisabeth Maidl

Abstract:

The interdependence of climate change and natural hazards entails large uncertainties regarding future risks. Regional stakeholders, experts in natural hazard management, and scientists each have specific knowledge of, and mental models about, such risks. This diversity of views makes it difficult to find common and broadly accepted prevention measures. If the specific knowledge of these types of actors is shared in an interactive knowledge production process, a broader and common understanding of complex risks becomes possible, allowing agreement on long-term solution strategies. Previous studies on mental models confirm that actors with specific vulnerabilities perceive different aspects of a topic and accordingly prefer different measures. In bringing these perspectives together, there is potential to reduce uncertainty and to close blind spots in the search for solutions. However, studies that examine the mental models of regional actors regarding concrete future mass-movement risks have been lacking so far. This project tests and evaluates the feasibility of knowledge co-creation for the anticipatory prevention of climate-change-induced mass-movement risks in the Alps. As a key element, the mental models of the three included groups of actors are compared. Integrated into the research program Climate Change Impacts on Alpine Mass Movements (CCAMM2), this project is carried out in two Swiss mountain regions. The project is structured in four phases: 1) the preparatory phase, in which the participants are identified; 2) the baseline phase, in which qualitative interviews and a quantitative pre-survey are conducted with actors; 3) the knowledge co-creation phase, in which actors have a moderated exchange meeting and a participatory modelling workshop on specific risks in the region; and 4) finally, a public information event. Results show that participants' mental models are shaped by place of origin, profession, beliefs, and values, which results in distinct narratives on climate change and hazard risks. Further, the more intensively participants interact with each other, the more likely they are to change their views. This provides empirical evidence on how changes in opinions and mindsets can be induced and fostered.

Keywords: climate change, knowledge-co-creation, participatory process, natural hazard risks

Procedia PDF Downloads 52
1416 Understanding the Influence of Fibre Meander on the Tensile Properties of Advanced Composite Laminates

Authors: Gaoyang Meng, Philip Harrison

Abstract:

When manufacturing composite laminates, the fibre directions within the laminate are never perfectly straight and inevitably contain some degree of stochastic in-plane waviness or ‘meandering’. In this work, we aim to understand the relationship between the degree of meandering of the fibre paths and the resulting uncertainty in the laminate’s final mechanical properties. To do this, a numerical tool is developed to automatically generate meandering fibre paths in each of the laminate's 8 plies (using Matlab); after mapping this information into finite element simulations (using Abaqus), the statistical variability of the tensile mechanical properties of a [45°/90°/-45°/0°]s carbon/epoxy (IM7/8552) laminate is predicted. The stiffness, first-ply-failure strength, and ultimate failure strength are obtained. Results are generated by specifying the degree of variability in the fibre paths, and the laminate is then examined in all directions (from 0° to 359° in increments of 1°). The resulting predictions are output as flower (polar) plots for convenient analysis. The average fibre orientation of each ply is determined by the laminate layup code [45°/90°/-45°/0°]s; however, in each case, the plies contain increasingly large amounts of in-plane waviness (quantified by the standard deviation of the fibre direction in each ply across the laminate). Four levels of fibre-direction variability are tested (2°, 4°, 6° and 8°). Results show that both the average tensile stiffness and the average tensile strength decrease, while their standard deviations increase, with an increasing degree of fibre meander. The variability in stiffness is found to be relatively insensitive to the rotation angle, but the variability in strength is sensitive: the uncertainty in laminate strength is relatively low at orientations centred around multiples of the 45° rotation angle and relatively high between these rotation angles. To concisely represent all the information contained in the various polar plots, rotation-angle-dependent Weibull distribution equations are fitted to the data. The resulting equations can be used to quickly estimate the size of the error bars on the different mechanical properties arising from the fibre directional variability contained within the laminate. A longer-term goal is to use these equations to quickly introduce realistic variability at the component level.
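As a rough illustration of how such a variability study can be framed, the sketch below collapses the spatial fibre meander to a single Gaussian angle perturbation per ply (a far coarser model than the paper's spatially meandering ply maps and finite element solution) and propagates it through classical laminate theory to the effective tensile modulus over a sweep of loading directions. The IM7/8552 constants, ply thickness, Monte Carlo sample count, and 15° sweep step are assumed values for demonstration only.

```python
# Monte Carlo sketch: per-ply Gaussian angle scatter as a crude stand-in
# for in-plane fibre meander, propagated through classical laminate theory
# (CLT) to polar stiffness statistics.  Material data are indicative
# IM7/8552 values, not those used in the study.
import numpy as np

E1, E2, G12, NU12 = 161e9, 11.4e9, 5.17e9, 0.32   # Pa, assumed
LAYUP = [45, 90, -45, 0, 0, -45, 90, 45]          # [45/90/-45/0]s

def Qbar(theta_deg):
    """Reduced stiffness of one ply rotated by theta (plane stress)."""
    nu21 = NU12 * E2 / E1
    d = 1.0 - NU12 * nu21
    Q = np.array([[E1 / d, NU12 * E2 / d, 0.0],
                  [NU12 * E2 / d, E2 / d, 0.0],
                  [0.0, 0.0, G12]])
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    T = np.array([[c * c, s * s, 2 * c * s],
                  [s * s, c * c, -2 * c * s],
                  [-c * s, c * s, c * c - s * s]])
    R = np.diag([1.0, 1.0, 2.0])                  # Reuter matrix
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

def Ex(layup, t_ply=0.125e-3):
    """Effective tensile modulus of a symmetric laminate along x."""
    A = sum(Qbar(th) * t_ply for th in layup)     # in-plane stiffness matrix
    return 1.0 / (np.linalg.inv(A)[0, 0] * t_ply * len(layup))

rng = np.random.default_rng(0)
for sigma in (2, 4, 6, 8):                        # degrees of fibre scatter
    draws = []
    for _ in range(500):
        noisy = [th + rng.normal(0.0, sigma) for th in LAYUP]
        # Sweep the loading direction by rotating the whole layup.
        draws.append([Ex([th - rot for th in noisy])
                      for rot in range(0, 360, 15)])
    draws = np.asarray(draws) / 1e9               # GPa
    print(f"sigma={sigma} deg: Ex(0 deg) mean={draws[:, 0].mean():.1f} GPa, "
          f"std={draws[:, 0].std():.2f} GPa")
```

From such Monte Carlo output, a rotation-angle-dependent Weibull distribution could then be fitted to each property (for instance with scipy.stats.weibull_min.fit) to reproduce the kind of compact error-bar equations the abstract describes.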

Keywords: advanced composite laminates, FE simulation, in-plane waviness, tensile properties, uncertainty quantification

Procedia PDF Downloads 73
1415 A Systematic Review of Efficacy and Safety of Radiofrequency Ablation in Patients with Spinal Metastases

Authors: Pascale Brasseur, Binu Gurung, Nicholas Halfpenny, James Eaton

Abstract:

The development of minimally invasive treatments in recent years provides a potential alternative to invasive surgical interventions, which are of limited value to patients with spinal metastases owing to their short life expectancy. A systematic review was conducted to explore the efficacy and safety of radiofrequency ablation (RFA), a minimally invasive treatment, in patients with spinal metastases. EMBASE, Medline and CENTRAL were searched from database inception to March 2017 for randomised controlled trials (RCTs) and non-randomised studies. Conference proceedings for ASCO and ESMO published in 2015 and 2016 were also searched. Fourteen studies were included: three prospective interventional studies, four prospective case series, and seven retrospective case series. No RCTs or studies comparing RFA with another treatment were identified. RFA was followed by cement augmentation in all patients in seven studies and in some patients (40-96%) in the remaining seven studies. Efficacy was assessed as pain relief in 13 of the 14 studies, using a numerical rating scale (NRS) or a visual analogue scale (VAS) at various time points. Ten of the 13 studies reported a significant decrease in pain post-RFA compared with baseline. NRS scores improved significantly at 1 week (5.9 to 3.5, p < 0.0001; 8 to 4.3, p < 0.02; and 8 to 3.9, p < 0.0001), and this improvement was maintained at 1 month post-RFA (5.9 to 2.6, p < 0.0001; 8 to 2.9, p < 0.0003; 8 to 2.9, p < 0.0001). Similarly, VAS scores decreased significantly at 1 week (7.5 to 2.7, p=0.00005; 7.51 to 1.73, p < 0.0001; 7.82 to 2.82, p < 0.001), and this pattern was maintained at 1 month post-RFA (7.51 to 2.25, p < 0.0001; 7.82 to 3.3, p < 0.001). In the two studies assessing the impact of RFA with or without cement augmentation on VAS pain scores, significant pain relief was achieved regardless of whether patients had cement augmentation: significant decreases were reported for patients receiving RFA alone and RFA plus cement at 1 week (4.3 to 1.7, p=0.0004 and 6.6 to 1.7, p=0.003, respectively) and at 15-36 months (7.9 to 4, p=0.008 and 7.6 to 3.5, p=0.005, respectively) after therapy. Few minor complications were reported, including neural damage, radicular pain, vertebroplasty leakage, and lower-limb pain/numbness. In conclusion, findings on the efficacy and safety of RFA were consistently positive across prospective and retrospective studies, with reductions in pain and few procedural complications. However, the lack of control groups in the identified studies indicates the possibility of the selection bias inherent in single-arm studies. Controlled trials exploring the efficacy and safety of RFA in patients with spinal metastases are warranted to provide robust evidence; the identified studies provide an initial foundation for such future trials.

Keywords: pain relief, radiofrequency ablation, spinal metastases, systematic review

Procedia PDF Downloads 155