Search results for: high gain
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6213

363 Experimental Studies of Sigma Thin-Walled Beams Strengthened with CFRP Tapes

Authors: Katarzyna Rzeszut, Ilona Szewczak

Abstract:

This paper reviews selected methods of strengthening steel structures with carbon fiber reinforced polymer (CFRP) tapes and analyses the influence of composite materials on steel thin-walled elements. The study also addresses the problem of applying fast and effective strengthening methods to steel structures made of thin-walled profiles. It is worth noting that strengthening thin-walled structures is a complex issue, because welded joints cannot be made in this type of element and the ability to apply mechanical fasteners is limited. Moreover, structures with thin-walled cross-sections are highly sensitive to imperfections and prone to interactive buckling, which may substantially reduce the critical load capacity. Because no modern method of strengthening thin-walled steel structures is in common, recognized use, the authors performed experimental studies of thin-walled sigma profiles strengthened with CFRP tapes. The paper presents the experimental stand and the preliminary results of laboratory tests on the effectiveness of strengthening steel beams made of thin-walled sigma profiles with CFRP tapes. The study includes six beams made of cold-rolled sigma profiles with a height of 140 mm, a wall thickness of 2.5 mm, and a length of 3 m, subjected to a uniformly distributed load. Four beams were strengthened with Sika CarboDur S carbon fiber tape, while the other two were tested without strengthening to provide reference results. Based on the obtained results, the suitability of the applied composite materials for strengthening thin-walled structures was evaluated.

Keywords: CFRP tapes, sigma profiles, steel thin-walled structures, strengthening.

PDF Downloads: 864
362 Hands-off Parking: Deep Learning Gesture-Based System for Individuals with Mobility Needs

Authors: Javier Romera, Alberto Justo, Ignacio Fidalgo, Javier Araluce, Joshué Pérez

Abstract:

Nowadays, individuals with mobility needs face a significant challenge when parking vehicles. In many cases, after parking, they find insufficient space to exit, leading to two undesired outcomes: either avoiding that parking spot or accepting an improperly placed vehicle. To address this issue, this paper presents a parking control system based on gestural teleoperation. The system comprises three main phases: capturing body markers, interpreting gestures, and transmitting orders to the vehicle. The first phase is built around the MediaPipe framework, a versatile tool optimized for real-time gesture recognition. MediaPipe excels at detecting and tracking body markers, with a special emphasis on hand gestures; hand detection generates 21 landmark points for each hand. After data capture, the system employs a Multilayer Perceptron (MLP) for in-depth gesture classification. This combination of MediaPipe's extraction capability and the MLP's analytical capability ensures that human gestures are translated into actionable commands with high precision. Furthermore, the system has been trained and validated on a purpose-built dataset. To demonstrate domain adaptation, a framework based on the Robot Operating System 2 (ROS 2) as the communication backbone, together with the CARLA simulator, is used. Following successful simulations, the system was transferred to a real-world platform, marking a significant milestone in the project. This real-vehicle implementation verifies the practicality and efficiency of the system beyond theoretical constructs.
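
A minimal sketch of the capture-and-classify pipeline described above, assuming a webcam, a pre-collected landmark dataset and gesture labels that are not given in the abstract: MediaPipe extracts the 21 hand landmarks from a frame and a scikit-learn multilayer perceptron predicts the gesture.

```python
# Minimal sketch (not the authors' code): MediaPipe extracts 21 hand landmarks per
# frame and a scikit-learn multilayer perceptron classifies the gesture. The dataset
# files, gesture labels and camera index are assumptions.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neural_network import MLPClassifier

mp_hands = mp.solutions.hands

def landmarks_from_frame(frame_bgr, hands):
    """Return a flat (63,) vector of x, y, z for 21 landmarks, or None if no hand."""
    result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark          # first detected hand
    return np.array([[p.x, p.y, p.z] for p in lm]).flatten()

# Hypothetical training data: rows of 63 landmark coordinates with gesture labels.
X_train = np.load("gesture_features.npy")                 # shape (n_samples, 63)
y_train = np.load("gesture_labels.npy")                   # e.g. "forward", "stop", "left"
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_train, y_train)

with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        feat = landmarks_from_frame(frame, hands)
        if feat is not None:
            print("predicted gesture:", clf.predict(feat.reshape(1, -1))[0])
    cap.release()
```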

Keywords: Gesture detection, MediaPipe, multilayer perceptron, Robot Operating System.

PDF Downloads: 136
361 A Modelling Study of the Photochemical and Particulate Pollution Characteristics above a Typical Southeast Mediterranean Urban Area

Authors: Kiriaki-Maria Fameli, Vasiliki D. Assimakopoulos, Vasiliki Kotroni

Abstract:

The Greater Athens Area (GAA) faces photochemical and particulate pollution episodes as a result of the combined effects of local pollutant emissions, regional pollution transport, synoptic circulation and topographic characteristics. The area has undergone significant changes since the Athens 2004 Olympic Games because of large-scale infrastructure works that led to the shift of population to areas previously characterized as rural, the increase of the traffic fleet and the operation of highways. However, few recent modelling studies have been performed due to the lack of an accurate, updated emission inventory. The photochemical modelling system MM5/CAMx was applied in order to study the photochemical and particulate pollution characteristics above the GAA for two distinct ten-day periods in the summers of 2006 and 2010, during which air pollution episodes occurred. A new, updated emission inventory based on official data was used. Comparison of modelled results with measurements revealed the importance and accuracy of the new Athens emission inventory compared with previous modelling studies. The model managed to reproduce the local meteorological conditions and the daily fluctuations of ozone and particulate concentrations at different locations across the GAA. Higher ozone levels were found at suburban and rural areas as well as over the sea to the south of the basin. Concerning PM10, high concentrations were computed at the city centre and the southeastern suburbs, in agreement with measured data. Source apportionment analysis showed that different sources contribute to the ozone levels, with local sources (traffic, port activities) affecting its formation.

Keywords: Photochemical modelling, urban pollution, greater Athens area, MM5/CAMx.

PDF Downloads: 1367
360 Factors Determining Intention to Pursue Genetic Testing for People in Taiwan

Authors: Ju-Chun Chien

Abstract:

The Ottawa Charter for Health Promotion proposed that the role of health services should shift its focus from cure to prevention. Nowadays, besides having physical examinations, people can also undergo genetic tests that provide important information for diagnosing, treating, and/or preventing illnesses. However, because the Chinese Genetic Database is still incomplete, people in Taiwan remain unfamiliar with genetic testing. The purposes of the present study were to: (1) determine people's attitudes towards genetic testing, and (2) examine factors that influence people's intention to pursue genetic testing by means of the Health Belief Model (HBM). A pilot study was conducted on 249 Taiwanese in 2017 to test the feasibility of the self-developed instrument. The reliability and construct validity of scores on the self-developed questionnaire indicated that this HBM-based questionnaire with 40 items was a well-developed instrument. A total of 542 participants were recruited, of whom 535 (99%) between the ages of 20 and 86 provided valid responses. Descriptive statistics, one-way ANOVA, two-way contingency table analysis, Pearson's correlation, and stepwise multiple regression analysis were used in this study. The main results were that only 32 participants (6%) had already undergone genetic testing; moreover, their attitude towards genetic testing was more positive than that of those without the experience. Compared with people who had never undergone genetic tests, those who had were characterized by higher self-efficacy, greater intention to pursue genetic testing, academic majors in health-related fields, chronic and genetic diseases, and possession of Catastrophic Illness Cards, and all of them had heard about genetic testing. The variables that best predicted people's intention to pursue genetic testing were cues to action, self-efficacy, and perceived benefits (the three variables all correlated positively with one another at high magnitudes). To sum up, the HBM could be effective in identifying and designing for the needs and priorities of the target population to pursue genetic testing.
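
A rough illustration of the kind of stepwise predictor selection mentioned above, run on synthetic data: the predictor names echo the HBM constructs, but the values, effect sizes and stopping rule are assumptions, not the survey data.

```python
# Forward stepwise selection on synthetic data (not the survey data); predictors and
# their effect sizes are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 535
predictors = {
    "cues_to_action":     rng.normal(size=n),
    "self_efficacy":      rng.normal(size=n),
    "perceived_benefits": rng.normal(size=n),
    "perceived_barriers": rng.normal(size=n),
}
intention = (0.5 * predictors["cues_to_action"] + 0.4 * predictors["self_efficacy"]
             + 0.3 * predictors["perceived_benefits"] + rng.normal(scale=0.5, size=n))

def r_squared(columns, y):
    X = np.column_stack([np.ones(len(y))] + columns)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

selected, remaining = [], list(predictors)
while remaining:
    # Enter the variable that most improves R^2; stop when the improvement is negligible.
    current = r_squared([predictors[k] for k in selected], intention) if selected else 0.0
    best = max(remaining,
               key=lambda v: r_squared([predictors[k] for k in selected + [v]], intention))
    if r_squared([predictors[k] for k in selected + [best]], intention) - current < 0.01:
        break
    selected.append(best)
    remaining.remove(best)

print("variables entering the model, in order:", selected)
```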

Keywords: Genetic testing, intention to pursue genetic testing, Taiwan, health belief model.

PDF Downloads: 701
359 Qualitative Parametric Comparison of Load Balancing Algorithms in Parallel and Distributed Computing Environment

Authors: Amit Chhabra, Gurvinder Singh, Sandeep Singh Waraich, Bhavneet Sidhu, Gaurav Kumar

Abstract:

Decreases in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. One of the biggest issues in such systems is the development of effective techniques/algorithms for distributing the processes/load of a parallel program over multiple hosts to achieve goals such as minimizing execution time, minimizing communication delays, maximizing resource utilization and maximizing throughput. Substantive research using queuing analysis, and assuming job arrivals following a Poisson pattern, has shown that in a multi-host system the probability of one host being idle while another host has multiple jobs queued up can be very high. Such imbalances in system load suggest that performance can be improved either by transferring jobs from the currently heavily loaded hosts to the lightly loaded ones or by distributing the load evenly/fairly among the hosts. The algorithms that achieve this, known as load balancing algorithms, help to achieve the above-mentioned goals. These algorithms fall into two basic categories: static and dynamic. Whereas static load balancing (SLB) algorithms take decisions regarding the assignment of tasks to processors based on average estimated values of process execution times and communication delays at compile time, dynamic load balancing (DLB) algorithms are adaptive to changing situations and take decisions at run time. The objective of this paper is to identify qualitative parameters for the comparison of the above-mentioned algorithms. In future, this work can be extended to develop an experimental environment in which these load balancing algorithms are studied quantitatively based on the comparative parameters.
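
The contrast between the two categories can be illustrated with a toy sketch (not from the paper): a static round-robin assignment fixed in advance versus a dynamic policy that consults current host loads at run time.

```python
# Toy sketch: static round-robin assignment, which ignores run-time state, versus a
# dynamic policy that assigns each job to the currently least-loaded host. Job costs
# are synthetic.
import random

def round_robin(jobs, n_hosts):
    """Static policy: the host depends only on the job index."""
    loads = [0.0] * n_hosts
    for i, cost in enumerate(jobs):
        loads[i % n_hosts] += cost
    return loads

def least_loaded(jobs, n_hosts):
    """Dynamic policy: the host is chosen from current load information."""
    loads = [0.0] * n_hosts
    for cost in jobs:
        loads[loads.index(min(loads))] += cost
    return loads

random.seed(1)
jobs = [random.expovariate(1.0) for _ in range(1000)]      # job service demands
for name, policy in [("static round-robin", round_robin), ("dynamic least-loaded", least_loaded)]:
    loads = policy(jobs, n_hosts=4)
    print(f"{name}: busiest host load {max(loads):.1f}, idlest host load {min(loads):.1f}")
```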

Keywords: SLB, DLB, Host, Algorithm and Load.

PDF Downloads: 1657
358 Necessary Condition to Utilize Adaptive Control in Wind Turbine Systems to Improve Power System Stability

Authors: Javad Taherahmadi, Mohammad Jafarian, Mohammad Naser Asefi

Abstract:

The global capacity of wind power has increased dramatically in recent years. Therefore, improving wind turbine technology to take advantage of this enormous potential in the power grid is an interesting subject for scientists. The doubly-fed induction generator (DFIG) wind turbine is a popular system due to its many advantages, such as improved power quality, high energy efficiency and controllability. With the increase in wind power penetration in the network, and given the flexible control of wind turbines, the use of wind turbine systems to improve the dynamic stability of power systems has become of significant importance to researchers. Subsynchronous oscillations are one of the important issues in the stability of power systems. Damping subsynchronous oscillations by using wind turbines has been studied in various research efforts, mainly by adding an auxiliary control loop to the control structure of the wind turbine. In most of these studies, the control loop is composed of linear blocks. In this paper, simple adaptive control is used for this purpose. In order to use an adaptive controller, the convergence of the controller should be verified. Since the adaptive control parameters tend towards optimum values in order to obtain optimum control performance, using this controller will help the wind turbines to contribute positively to damping the network subsynchronous oscillations at different wind speeds and system operating points. In this paper, the application of simple adaptive control in DFIG wind turbine systems to improve the dynamic stability of power systems is studied and the essential condition for using this controller is considered. It is also shown that this controller has an insignificant effect on the dynamic stability of the wind turbine itself.
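
As a purely illustrative sketch of the simple-adaptive-control idea (not the paper's DFIG controller or its convergence condition), the loop below damps a lightly damped oscillator with an adaptive output-feedback gain; all parameter values are assumptions.

```python
# Illustrative simple-adaptive-control style damping loop, u = -k(t)*y with
# k_dot = gamma*y^2, applied to a lightly damped oscillator standing in for a
# subsynchronous mode. All values are assumptions.
import numpy as np

omega, xi, gamma = 2 * np.pi * 20.0, 0.002, 50.0   # assumed mode frequency, damping, adaptation gain
dt, t_end = 1e-4, 2.0
x, v, k = 1.0, 0.0, 0.0                            # state (position, velocity) and adaptive gain

amplitude = []
for _ in range(int(t_end / dt)):
    y = v                              # measured output (a rate signal keeps the loop passive)
    u = -k * y                         # adaptive output feedback
    a = u - 2 * xi * omega * v - omega**2 * x
    x += v * dt
    v += a * dt
    k += gamma * y * y * dt            # the gain grows only while the oscillation persists
    amplitude.append(abs(x))

print(f"|x| at start: {amplitude[0]:.3f}, |x| at end: {amplitude[-1]:.3e}, final gain k = {k:.1f}")
```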

Keywords: Almost strictly positive real, doubly-fed induction generator, simple adaptive control, subsynchronous oscillations, wind turbine.

PDF Downloads: 1126
357 Identification of Promiscuous Epitopes for Cellular Immune Responses in the Major Antigenic Protein Rv3873 Encoded by Region of Difference 1 of Mycobacterium tuberculosis

Authors: Abu Salim Mustafa

Abstract:

Rv3873 is a relatively large protein (371 amino acids in length) and its gene is located in the immunodominant genomic region of difference (RD)1, which is present in the genome of Mycobacterium tuberculosis but deleted from the genomes of all the vaccine strains of Bacillus Calmette-Guerin (BCG) and most other mycobacteria. However, when tested for cellular immune responses using peripheral blood mononuclear cells from tuberculosis patients and BCG-vaccinated healthy subjects, this protein was found to be a major stimulator of cell-mediated immune responses in both groups of subjects. In order to further identify the sequence of immunodominant epitopes and explore their Human Leukocyte Antigen (HLA) restriction in epitope recognition, 24 peptides (25-mers overlapping with the neighboring peptides by 10 residues) covering the sequence of Rv3873 were synthesized chemically using fluorenylmethyloxycarbonyl chemistry and tested in cell-mediated immune responses. The results of these experiments helped in the identification of an immunodominant peptide, P9, that was recognized by people expressing varying HLA-DR types. Furthermore, it was also predicted to be a promiscuous binder with multiple epitopes for binding to HLA-DR, HLA-DP and HLA-DQ alleles of HLA class II molecules, which present antigens to T helper cells, and to HLA class I molecules, which present antigens to T cytotoxic cells. In addition, the evaluation of peptide P9 using an immunogenicity predictor server yielded a high score (0.94), indicating a greater probability of this peptide eliciting a protective cellular immune response. In conclusion, P9, a peptide with multiple epitopes and the ability to bind several HLA class I and class II molecules for presentation to cells of the cellular immune response, may be useful as a peptide-based vaccine against tuberculosis.

Keywords: Mycobacterium tuberculosis, Rv3873, peptides, vaccine

PDF Downloads: 845
356 Investigation of the Properties of Epoxy Modified Binders Based on Epoxy Oligomer with Improved Deformation and Strength Properties

Authors: Hlaing Zaw Oo, N. Kostromina, V. Osipchik, T. Kravchenko, K. Yakovleva

Abstract:

The modification of ED-20 epoxy resin with vinyl-containing compounds is considered. It is shown that the introduction of vinyl-containing compounds into a composition based on epoxy resin ED-20 allows the technological and operational characteristics of the binder to be adjusted. To improve the properties of the epoxy resin, the following modifiers were selected: polyvinylformalethyl and polyvinyl butyral, with a composition of linear and aromatic amines (Aramine) as a hardener. A wide range of epoxy resin hardeners now exists, which allows the technological properties of compositions, as well as their thermophysical and strength characteristics, to be varied. The nature of the Aramine-type hardener has a significant impact on the spatial parameters of the network, the glass transition temperature, and the strength characteristics. Epoxy composite materials based on ED-20 modified with polyvinyl butyral were obtained and investigated. It is shown that compositions of resins based on polyvinyl butyral derivatives and ED-20 allow composite materials to be obtained with a better combination of deformation-strength, adhesion and thermal properties, water resistance, frost resistance, chemical resistance, and impact strength. The magnitude of the effect depends on the chemical structure, temperature and curing time. In the concentration range where the effect of composite synergy appears, the values of strength and stiffness significantly exceed the corresponding parameters of the individual components of the mixture. Polymer-polymer compositions form their own class of materials with diverse specific properties that ensure their competitive application. Coatings with high performance under cyclic loading have been obtained based on epoxy oligomers modified with vinyl-containing compounds.

Keywords: Epoxy resins, modification, vinyl-containing compounds, deformation and strength properties.

PDF Downloads: 586
355 Potential Climate Change Impacts on the Hydrological System of the Harvey River Catchment

Authors: Hashim Isam Jameel Al-Safi, P. Ranjan Sarukkalige

Abstract:

Climate change is likely to impact the Australian continent by changing rainfall trends, increasing temperature, and affecting the availability and quality of water. This study investigates the possible impacts of future climate change on the hydrological system of the Harvey River catchment in Western Australia using a conceptual modelling approach (the HBV model). Daily observations of rainfall and temperature and the long-term monthly mean potential evapotranspiration from six weather stations were available for the period 1961-2015. The observed streamflow data at the Clifton Park gauging station for 33 years (1983-2015), together with the observed climate variables, were used to run, calibrate and validate the HBV model prior to the simulation process. The calibrated model was then forced with the downscaled future climate signals from a multi-model ensemble of fifteen CMIP3 GCMs under three emission scenarios (A2, A1B and B1) to simulate the future runoff at the catchment outlet. Two periods were selected to represent future climate conditions: the middle (2046-2065) and the end (2080-2099) of the 21st century. A control run with the reference climate period (1981-2000) was used to represent the current climate status. The modelling outcomes show an evident reduction in the mean annual streamflow by the middle of this century, particularly for the A1B scenario, relative to the control run. Towards the end of the century, all scenarios show relatively strong reductions in the mean annual streamflow, especially the A1B scenario, compared with the control run. The decline in the mean annual streamflow ranged between 4% and 15% by the middle of the current century and between 9% and 42% by the end of the century.
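
For readers unfamiliar with conceptual rainfall-runoff models of the HBV type, the sketch below shows the general flavour (soil-moisture accounting feeding two linear reservoirs) on synthetic forcing; the structure is heavily simplified and the parameters are assumptions, not the calibrated Harvey catchment values.

```python
# Highly simplified conceptual runoff sketch in the spirit of an HBV-type model
# (soil-moisture accounting plus two linear reservoirs). Structure, parameter names and
# values are illustrative assumptions only.
import numpy as np

def simple_bucket_model(precip, pet, fc=250.0, beta=2.0, k0=0.1, k1=0.05, perc=2.0):
    """Daily rainfall/PET in, daily runoff out; all storages in mm."""
    sm, s_upper, s_lower = 0.5 * fc, 0.0, 0.0
    runoff = []
    for p, e in zip(precip, pet):
        recharge = p * min((sm / fc) ** beta, 1.0)        # wetter soil sheds more rain
        sm = max(sm + p - recharge - e * min(sm / fc, 1.0), 0.0)
        s_upper += recharge
        q_perc = min(perc, s_upper)                       # percolation to the lower store
        s_upper -= q_perc
        s_lower += q_perc
        q0, q1 = k0 * s_upper, k1 * s_lower               # quick and slow outflow
        s_upper -= q0
        s_lower -= q1
        runoff.append(q0 + q1)
    return np.array(runoff)

rng = np.random.default_rng(42)
precip = rng.gamma(shape=0.4, scale=8.0, size=365)        # synthetic daily rainfall (mm)
pet = np.full(365, 3.0)                                   # synthetic daily PET (mm)
q = simple_bucket_model(precip, pet)
print(f"runoff total for the synthetic year: {q.sum():.0f} mm")
```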

Keywords: Climate change impact, Harvey catchment, HBV model, hydrological modelling, GCMs, LARS-WG, Australia.

PDF Downloads: 1434
354 Game-Tree Simplification by Pattern Matching and Its Acceleration Approach using an FPGA

Authors: Suguru Ochiai, Toru Yabuki, Yoshiki Yamaguchi, Yuetsu Kodama

Abstract:

In this paper, we propose a Connect6 solver that adopts a hybrid approach based on a tree-search algorithm and image processing techniques. The solver must deal with complicated computation and provide high performance in order to make real-time decisions. The proposed approach enables the solver to be implemented on a single Spartan-6 XC6SLX45 FPGA produced by XILINX without using any external devices. The compact implementation is achieved through image processing techniques that optimize the tree-search algorithm of the Connect6 game. Tree search is widely used in computer games, and an optimal search yields the best move in every turn of a computer game. Thus, many tree-search algorithms, such as the minimax algorithm, and artificial intelligence approaches have been proposed in this field. However, there is one fundamental problem in this area: the computation time increases rapidly with the growth of the game tree. For highly parallel hardware, this means that the larger the game tree is, the bigger the circuit size becomes. This paper therefore aims to reduce the size of the Connect6 game tree using image processing techniques and the positional symmetry of the game. The proposed solver is composed of four computational modules: a two-dimensional checkmate strategy checker, a template matching module, a skilful-line predictor, and a next-move selector. These modules work together to select the next move from a set of candidates, and the total size of their circuits is small. The details of the hardware design for the FPGA implementation are described and the performance of this design is also shown in this paper.
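
The tree-reduction idea based on positional symmetry can be illustrated with a toy example (a 3x3 game rather than Connect6, and software rather than the authors' FPGA design): every position is mapped to a canonical form under the board's eight symmetries before it is searched or cached, so symmetric positions share one entry.

```python
# Toy illustration of game-tree reduction by symmetry: positions are canonicalized
# under the 8 board symmetries before caching. A 3x3 game stands in for Connect6.
from functools import lru_cache

N = 3
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]  # 3x3 winning lines

def rotate(b):
    return tuple(b[N * (N - 1 - c) + r] for r in range(N) for c in range(N))

def reflect(b):
    return tuple(b[N * r + (N - 1 - c)] for r in range(N) for c in range(N))

def canonical(board):
    """Smallest tuple among the 8 rotations/reflections of the board."""
    forms, b = [], board
    for _ in range(4):
        b = rotate(b)
        forms += [b, reflect(b)]
    return min(forms)

def winner(b):
    for i, j, k in LINES:
        if b[i] != 0 and b[i] == b[j] == b[k]:
            return b[i]
    return 0

@lru_cache(maxsize=None)
def minimax(board, player):
    w = winner(board)
    if w:
        return w
    moves = [i for i, v in enumerate(board) if v == 0]
    if not moves:
        return 0
    scores = [minimax(canonical(board[:i] + (player,) + board[i + 1:]), -player) for i in moves]
    return max(scores) if player == 1 else min(scores)

empty = (0,) * (N * N)
print("game value with perfect play:", minimax(canonical(empty), 1))
print("distinct positions searched:", minimax.cache_info().currsize)
```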

Keywords: Connect6, pattern matching, game-tree reduction, hardware direct computation

PDF Downloads: 1973
353 Applying Resilience Engineering to Improve Safety Management in a Construction Site: Design and Validation of a Questionnaire

Authors: M. C. Pardo-Ferreira, J. C. Rubio-Romero, M. Martínez-Rojas

Abstract:

Resilience Engineering is a new paradigm of safety management that proposes to change the way safety is managed, focusing on the things that go well instead of the things that go wrong. Many complex and high-risk sectors, such as air traffic control, health care, nuclear power plants, railways and emergency services, have applied this new vision of safety and have obtained very positive results. In the construction sector, safety management continues to be a problem, as indicated by the statistics of occupational injuries worldwide. Therefore, it is important to improve safety management in this sector, and for this reason it is proposed to apply Resilience Engineering to it. The Construction Phase Health and Safety Plan emerges as a key element for the planning of safety management. One of the key tools of Resilience Engineering is the Resilience Assessment Grid, which allows the four abilities essential for resilient performance (respond, monitor, learn and anticipate) to be measured. The purpose of this paper is to develop a questionnaire based on the Resilience Assessment Grid, specifically on the ability to learn, to assess whether a Construction Phase Health and Safety Plan helps companies on a construction site to implement this ability. The research process was divided into four stages: (i) initial design of a questionnaire, (ii) validation of the content of the questionnaire, (iii) redesign of the questionnaire and (iv) application of the Delphi method. The questionnaire obtained could be used as a tool to help construction companies evolve from Safety-I to Safety-II. In this way, companies could begin to develop the ability to learn, which will serve as a basis for the development of the other abilities necessary for resilient performance. The next steps in this research are intended to develop further questions for evaluating the remaining abilities for resilient performance, namely responding, monitoring and anticipating.

Keywords: Resilience engineering, construction sector, resilience assessment grid, construction phase health and safety plan.

PDF Downloads: 1002
352 Studies on the Characterization and Machinability of Duplex Stainless Steel 2205 during Dry Turning

Authors: Gaurav D. Sonawane, Vikas G. Sargade

Abstract:

The present investigation studies the effect of advanced Physical Vapor Deposition (PVD) coatings on cutting temperature, residual stresses and surface roughness during turning of Duplex Stainless Steel (DSS) 2205. Austenite stabilizers like nickel, manganese, and molybdenum reduced the cost of DSS. Surface Integrity (SI) plays an important role in determining corrosion resistance and fatigue life. Resistance to various types of corrosion makes DSS suitable for applications in critical environments such as heat exchangers, desalination plants, seawater pipes and marine components. However, lower thermal conductivity, poor chip control and non-uniform tool wear make DSS very difficult to machine. Cemented carbide tools (M grade) were used to turn DSS in a dry environment. AlTiN and AlTiCrN coatings were deposited using the advanced PVD High Power Impulse Magnetron Sputtering (HiPIMS) technique. Experiments were conducted at cutting speeds of 100 m/min, 140 m/min and 180 m/min, with a constant feed of 0.18 mm/rev and a depth of cut of 0.8 mm. AlTiCrN coated tools, followed by AlTiN coated tools, outperformed uncoated tools due to properties like lower thermal conductivity, higher adhesion strength and hardness. Residual stresses were found to be compressive for all the tools used for dry turning, increasing the fatigue life of the machined component. Higher cutting temperatures were observed for coated tools due to their lower thermal conductivity, which results in much less tool wear than for uncoated tools. Surface roughness with uncoated tools was found to be three times higher than with coated tools, due to the lower coefficient of friction of the coatings used.

Keywords: Cutting temperatures, DSS2205, dry turning, HiPIMS, surface integrity.

PDF Downloads: 886
351 Elaboration and Validation of a Survey about Research on the Characteristics of Mentoring of University Professors’ Lifelong Learning

Authors: Nagore Guerra Bilbao, Clemente Lobato Fraile

Abstract:

This paper outlines the design and development of the MENDEPRO questionnaire, intended to analyze mentoring performance within a professional development process carried out with professors at the University of the Basque Country, Spain. The study took into account the international research carried out over the past two decades into teachers' professional development, and was also based on a thorough review of the most common instruments used to identify and analyze mentoring styles, many of which fail to provide sufficient psychometric guarantees. The present study aimed to gather empirical data in order to verify the metric quality of the questionnaire developed. To this end, the process followed to validate the theoretical construct was as follows: formulation of the items and indicators in accordance with the study variables; analysis of the validity and reliability of the initial questionnaire; review of the second version of the questionnaire; and the definitive measurement instrument. Content was validated through the formal agreement and consensus of 12 experts in university professor training. A small sample of professors who had participated in a lifelong learning program was then selected for a trial evaluation of the instrument developed. After the trial, 18 items were removed from the initial questionnaire. The final version of the instrument, comprising 33 items, was then administered to a sample group of 99 participants. The results revealed a five-dimensional structure matching theoretical expectations. Also, the reliability data for both the instrument as a whole (.98) and its various dimensions (between .91 and .97) were very high. The questionnaire was thus found to have satisfactory psychometric properties and can therefore be considered apt for studying the performance of mentoring in both induction programs for young professors and lifelong learning programs for senior faculty members.

Keywords: Higher education, mentoring, professional development, university teachers.

PDF Downloads: 842
350 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique

Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki

Abstract:

Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From these data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which the liver accumulation dominates (0.5-2.5 minute SPECT image minus 5-10 minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5-10 minute SPECT image minus liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, myocardial uptake overlapped by high liver accumulation could not be diagnosed. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
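
The time-subtraction step itself is simple image arithmetic, as the sketch below shows on synthetic volumes; the frame timings follow the abstract, but the arrays are random stand-ins rather than the clinical reconstruction chain.

```python
# Minimal sketch of the time-subtraction arithmetic on synthetic volumes (the frame
# timings follow the abstract; the arrays here are random stand-ins, not patient data).
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64, 64)
early = rng.poisson(lam=5.0, size=shape).astype(float)    # 0.5-2.5 minute reconstruction
late = rng.poisson(lam=4.0, size=shape).astype(float)     # 5-10 minute reconstruction

# Liver-only estimate: early frame minus late frame (liver accumulation dominates early).
liver_only = np.clip(early - late, 0, None)

# Corrected myocardial image: late frame minus the extracted liver component.
corrected = np.clip(late - liver_only, 0, None)

print("mean voxel value before correction:", round(late.mean(), 3),
      "after:", round(corrected.mean(), 3))
```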

Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector.

PDF Downloads: 1033
349 A Quantitative Model for Determining the Area of the “Core and Structural System Elements” of Tall Office Buildings

Authors: Görkem Arslan Kılınç

Abstract:

Due to the high construction, operation, and maintenance costs of tall buildings, quantification of the area in the plan layout that provides a financial return is an important design criterion. The area of the “core and structural system elements” does not provide a financial return but must exist in the plan layout, and some characteristic items of tall office buildings affect the size of this area. From this point of view, 15 tall office buildings were systematically investigated. The typical office floor plans of these buildings were reproduced digitally, and the area of the “core and structural system elements” in each building and the characteristic items of each building were calculated. These characteristic items are the size of the long and short plan edges, the plan length/width ratio, the size of the core long and short edges, the core length/width ratio, the core area, slenderness, building height, number of floors, and floor height. These items were analyzed by correlation and regression analyses. The results of this paper put forward that the characteristic items which affect the area of the “core and structural system elements” are the plan long and short edge sizes, the core short edge size, the building height, and the number of floors. A one-unit increase in the plan short edge size increases the area of the “core and structural system elements” in the plan by 12.378 m², and an increase in the core short edge size increases it by 25.650 m². Subsequent studies can be conducted by expanding the sample of the study and considering the geographical location of the building.

Keywords: Core area, correlation analysis, floor area, regression analysis, space efficiency, tall office buildings.

PDF Downloads: 506
348 Chatter Stability Characterization of Full-Immersion End-Milling Using a Generalized Modified Map of the Full-Discretization Method, Part 1: Validation of Results and Study of Stability Lobes by Numerical Simulation

Authors: Chigbogu G. Ozoegwu, Sam N. Omenyi

Abstract:

The objective of this work is to generate and discuss the stability results of the fully-immersed end-milling process with the parameters: tool mass m = 0.0431 kg, tool natural frequency ωn = 5700 rad/s, damping factor ξ = 0.002 and workpiece cutting coefficient C = 3.5×10^7 N·m^(-7/4). Different numbers of teeth are considered for the end-milling. Both 1-DOF and 2-DOF chatter models of the system are generated on the basis of a non-linear force law. Chatter stability analysis is carried out using a modified form (generalized for both the 1-DOF and 2-DOF models) of the recently developed method called full-discretization. Full-immersion three-tooth end-milling, together with end-milling with more teeth, exhibits secondary Hopf bifurcation lobes (SHBLs) that each have one turning (minimum) point. Each such SHBL is demarcated by its minimum point into two portions: (i) the Lower Spindle Speed Portion (LSSP), in which bifurcations occur in the right half of the unit circle centred at the origin of the complex plane, and (ii) the Higher Spindle Speed Portion (HSSP), in which bifurcations occur in the left half of the unit circle. Comments are made regarding why bifurcation lobes should generally become bigger and more visible with increasing spindle speed and why flip bifurcation lobes (FBLs) can be invisible in the low-speed stability chart but visible in the high-speed stability chart of the fully-immersed three-tooth miller.
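
A generic sketch of how such stability charts are classified (not the authors' full-discretization code): the characteristic multipliers of the monodromy matrix over one tooth period decide stability, and the way a multiplier leaves the unit circle distinguishes secondary Hopf from flip bifurcations. The matrix below is a placeholder, not one produced by the paper's map.

```python
# Generic stability/bifurcation classification from characteristic multipliers of a
# monodromy matrix; the matrix below is a placeholder, not from the paper's map.
import numpy as np

def classify_stability(monodromy):
    mu = np.linalg.eigvals(monodromy)
    worst = mu[np.argmax(np.abs(mu))]
    if abs(worst) < 1.0:
        return "stable: all multipliers inside the unit circle"
    if abs(worst.imag) > 1e-9:
        return "secondary Hopf bifurcation: a complex pair leaves the unit circle"
    if worst.real < 0:
        return "flip bifurcation: a real multiplier crosses -1"
    return "saddle-node type instability: a real multiplier crosses +1"

Phi = np.array([[0.3, -0.9],
                [0.9,  0.3]])            # placeholder monodromy matrix
print(classify_stability(Phi))
```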

Keywords: Chatter, flip bifurcation, modified full-discretization map stability lobe, secondary Hopf bifurcation.

PDF Downloads: 1832
347 Numerical Investigation of Multiphase Flow in Pipelines

Authors: Gozel Judakova, Markus Bause

Abstract:

We present and analyze reliable numerical techniques for simulating complex flow and transport phenomena related to natural gas transportation in pipelines. Such problems are of high interest in the fields of petroleum and environmental engineering. Modelling and understanding natural gas flow and transformation processes during transportation is important for the sake of physical realism and for the design and operation of pipeline systems. In our approach, a two-fluid flow model based on a system of coupled hyperbolic conservation laws is considered for describing natural gas flow undergoing hydratization. The accurate numerical approximation of two-phase gas flow remains a subject of strong interest in the scientific community. Such hyperbolic problems are characterized by solutions with steep gradients or discontinuities, and their approximation by standard finite element techniques typically gives rise to spurious oscillations and numerical artefacts. Recently, stabilized and discontinuous Galerkin finite element techniques have attracted researchers' interest; they are highly adapted to the hyperbolic nature of our two-phase flow model. In the presentation, a streamline upwind Petrov-Galerkin approach and a discontinuous Galerkin finite element method are presented for the numerical approximation of our flow model of two coupled systems of Euler equations. The efficiency and reliability of stabilized continuous and discontinuous finite element methods for this approximation are then carefully analyzed, and the potential of either class of numerical schemes is investigated. In particular, standard benchmark problems of two-phase flow, such as the shock tube problem, are used for the comparative numerical study.

Keywords: Discontinuous Galerkin method, Euler system, inviscid two-fluid model, streamline upwind Petrov-Galerkin method, two-phase flow.

PDF Downloads: 790
346 Modeling the Fischer-Tropsch Reaction In a Slurry Bubble Column Reactor

Authors: F. Gholami, M. Torabi Angaji, Z. Gholami

Abstract:

Fischer-Tropsch synthesis is one of the most important catalytic reactions for converting synthesis gas to light and heavy hydrocarbons. One of the main issues is selecting the type of reactor. The slurry bubble column reactor is a suitable choice for Fischer-Tropsch synthesis because of its good heat and mass transfer, high catalyst durability, and low-cost maintenance and repair. The most common catalysts for Fischer-Tropsch synthesis are iron-based and cobalt-based catalysts; the advantage of one over the other depends on which type of hydrocarbons is to be produced. In this study, Fischer-Tropsch synthesis is modelled with iron and cobalt catalysts in a slurry bubble column reactor, considering mass and momentum balances and the effect of hydrodynamic relations on reactor behaviour. Profiles of reactant conversion and reactant concentration in the gas and liquid phases were determined as functions of residence time in the reactor. The effects of temperature, pressure, liquid velocity, reactor diameter, catalyst diameter, gas-liquid and liquid-solid mass transfer coefficients and kinetic coefficients on reactant conversion have been studied. With a 5% increase in liquid velocity (iron catalyst), H2 conversion increases by about 6% and CO conversion by about 4%; with an 8% increase in liquid velocity (cobalt catalyst), H2 conversion increases by about 26% and CO conversion by about 4%. With a 20% increase in the gas-liquid mass transfer coefficient, H2 conversion increases by about 12% and CO conversion by about 10% with the iron catalyst, while with the cobalt catalyst H2 conversion increases by about 10% and CO conversion by about 6%. The results show that the process is sensitive to the gas-liquid mass transfer coefficient and that the optimum operating condition occurs at the maximum possible liquid velocity. This velocity must be higher than the minimum fluidization velocity and lower than the terminal velocity, so that catalyst particles are prevented from leaving the fluidized bed.

Keywords: Modeling, Fischer-Tropsch Synthesis, Slurry Bubble Column Reactor.

PDF Downloads: 3020
345 Highly Secure Cover File for Hidden Data Using a Statistical Technique and AES Encryption Algorithm

Authors: A. A. Zaidan, Anas Majeed, B. B. Zaidan

Abstract:

Nowadays, the rapid development of multimedia and the internet allows for wide distribution of digital media data. It has become much easier to edit, modify and duplicate digital information; besides that, digital documents are easy to copy and distribute, and therefore face many threats. With the large flood of information and the development of digital formats, security and privacy have become major issues, and it is necessary to find appropriate protection because of the significance, accuracy and sensitivity of the information. Protection systems can be classified more specifically into information hiding, information encryption, and combinations of hiding and encryption that increase information security. The strength of information hiding lies in the non-existence of standard algorithms for hiding secret messages, and in the randomness of hiding methods, such as combining several media (covers) with different methods to pass a secret message. In addition, there are no formal methods to be followed to discover hidden data; for this reason, the task of this research is difficult. In this paper, a new information hiding system is presented. The proposed system aims to hide information (a data file) in any executable (EXE) file and to detect the hidden file; an implementation of a steganography system which embeds information in an executable file is described. The system tries to find a solution to the size of the cover file and to make the hidden data undetectable by anti-virus software. The system includes two main functions: the first is the hiding of the information in a Portable Executable (EXE) file, through the execution of four processes (specify the cover file, specify the information file, encrypt the information, and hide the information); the second is the extraction of the hidden information through three processes (specify the stego file, extract the information, and decrypt the information). The system has achieved its main goals, such as making the size of the cover file independent of the size of the information, while the resulting file does not cause any conflict with anti-virus software.
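
A much-simplified sketch of the hide/extract sequence described above: it uses an AES-based Fernet token and simply appends the encrypted payload to the executable behind a marker, which is not the paper's statistical embedding technique, and the file names are hypothetical.

```python
# Simplified hide/extract flow (encrypt, then embed; extract, then decrypt). The payload
# is AES-encrypted via Fernet and appended to the executable behind a marker; this is
# NOT the paper's statistical embedding technique, and the file names are hypothetical.
from cryptography.fernet import Fernet

MARKER = b"--HIDDEN-PAYLOAD--"

def hide(cover_exe, info_file, stego_exe, key):
    with open(info_file, "rb") as f:
        secret = Fernet(key).encrypt(f.read())        # encrypt the information file
    with open(cover_exe, "rb") as f:
        cover = f.read()                              # read the chosen cover EXE
    with open(stego_exe, "wb") as f:
        f.write(cover + MARKER + secret)              # hide: append payload after the image

def extract(stego_exe, out_file, key):
    with open(stego_exe, "rb") as f:
        data = f.read()                               # read the stego file
    secret = data.split(MARKER, 1)[1]                 # extract the embedded payload
    with open(out_file, "wb") as f:
        f.write(Fernet(key).decrypt(secret))          # decrypt the information

key = Fernet.generate_key()
hide("cover.exe", "secret.txt", "stego.exe", key)
extract("stego.exe", "recovered.txt", key)
```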

Keywords: Cryptography, Steganography, Portable Executable File.

PDF Downloads: 1802
344 Drone On-time Obstacle Avoidance for Static and Dynamic Obstacles

Authors: Herath MPC Jayaweera, Samer Hanoun

Abstract:

Path planning for on-time obstacle avoidance is an essential and challenging task that enables drones to operate safely in any application domain. The challenge increases significantly when the drone is following a ground mobile entity (GME), mainly due to changes in the direction and magnitude of the GME's velocity in dynamic and unstructured environments. Force field techniques are the most widely used obstacle avoidance methods due to their simplicity, ease of use and potential to be adopted for three-dimensional dynamic environments. However, the existing force field obstacle avoidance techniques suffer from many drawbacks, including their tendency to generate longer routes when the obstacles are to the side of the drone's route, poor ability to find the shortest flyable path, a propensity to fall into local minima, production of non-smooth paths, and a high failure rate in the presence of symmetrical obstacles. To overcome these shortcomings, this paper proposes an on-time three-dimensional obstacle avoidance method for drones to effectively and efficiently avoid dynamic and static obstacles in unknown environments while pursuing a GME. This on-time obstacle avoidance technique generates velocity waypoints for an obstacle-free and efficient path based on the shape of the encountered obstacles. The method can be utilized on most types of drones that have basic distance measurement sensors and autopilot-supported flight controllers. The proposed obstacle avoidance technique is validated and evaluated against existing force field methods for different simulation scenarios in Gazebo and ROS-supported PX4 SITL. The simulation results show that the proposed obstacle avoidance technique outperforms the existing force field techniques and is better suited for real-world applications.
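
For context, the classical force-field baseline that the paper improves upon can be sketched as follows (a generic artificial potential field, not the proposed method; gains and positions are illustrative assumptions).

```python
# Generic artificial potential field baseline: the commanded velocity is an attraction
# toward the ground mobile entity plus a repulsion away from obstacles inside an
# influence radius. Gains and positions are illustrative assumptions.
import numpy as np

def velocity_command(drone_pos, target_pos, obstacles,
                     k_att=1.0, k_rep=2.0, influence=4.0, v_max=3.0):
    """Return a 3-D velocity setpoint for one control step."""
    v = k_att * (target_pos - drone_pos)                 # attractive term toward the GME
    for obs in obstacles:
        diff = drone_pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:                         # repulsion only near the obstacle
            v += k_rep * (1.0 / d - 1.0 / influence) * diff / d**2
    speed = np.linalg.norm(v)
    return v if speed <= v_max else v * (v_max / speed)  # respect the drone's speed limit

drone = np.array([0.0, 0.0, 2.0])
gme = np.array([10.0, 0.0, 2.0])
obstacles = [np.array([5.0, 0.5, 2.0])]
print("velocity setpoint:", velocity_command(drone, gme, obstacles))
```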

Keywords: Drones, force field methods, obstacle avoidance, path planning.

PDF Downloads: 78
343 Towards a Framework for Embedded Weight Comparison Algorithm with Business Intelligence in the Plantation Domain

Authors: M. Pushparani, A. Sagaya

Abstract:

Embedded systems have emerged as important elements in various domains, with extensive applications in the automotive, commercial, consumer, healthcare and transportation markets, as there is increasing emphasis on intelligent devices. On the other hand, Business Intelligence (BI) has also been used extensively in a range of applications, especially in the agriculture domain, which is the area of this research. The aim of this research is to create a framework for an Embedded Weight Comparison Algorithm with Business Intelligence (EWCA-BI). The weight comparison algorithm will be embedded within the plantation management system and the weighbridge system; it will estimate the weight at the site, which will then be compared with the actual weight at the plantation. The algorithm will be used to raise the necessary alerts when there is a discrepancy in the weight, thus enabling better decision making. In current practice, data are collected from various locations in various forms, and it is a challenge to consolidate these data to obtain timely and accurate information for effective decision making. In addition, unstable network connections lead to difficulty in obtaining timely, accurate information. To overcome these challenges, the algorithm is embedded on a portable device that also assists in data capture and synchronizes data across locations, overcoming the network shortcomings at collection points. The EWCA-BI will provide real-time information at any given point in time, thus enabling low-latency BI reports that provide crucial information for efficient operational decision making. This research has high potential for bringing embedded systems into the agriculture industry. EWCA-BI will provide BI reports with accurate information from uncompromised data using an embedded system and will provide alerts, therefore enabling effective operational management decision-making at the site.
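
The core weight-comparison rule can be sketched in a few lines; the record fields and the tolerance are assumptions, not values from the paper.

```python
# Minimal sketch of the weight-comparison idea: flag loads whose estimated site weight
# and actual weighbridge weight disagree by more than a tolerance, so an alert can be
# raised. Field names and the tolerance are assumptions.
def weight_alerts(records, tolerance_pct=5.0):
    """records: iterable of dicts with 'load_id', 'estimated_kg', 'weighbridge_kg'."""
    alerts = []
    for r in records:
        deviation = abs(r["estimated_kg"] - r["weighbridge_kg"]) / r["weighbridge_kg"] * 100
        if deviation > tolerance_pct:
            alerts.append((r["load_id"], round(deviation, 1)))
    return alerts

records = [
    {"load_id": "LOT-001", "estimated_kg": 10200, "weighbridge_kg": 10150},
    {"load_id": "LOT-002", "estimated_kg": 9100,  "weighbridge_kg": 9900},
]
print("loads needing attention (id, % deviation):", weight_alerts(records))
```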

Keywords: Embedded business intelligence, weight comparison algorithm, oil palm plantation, embedded systems.

PDF Downloads: 1182
342 Preparation, Characterisation, and Measurement of the in vitro Cytotoxicity of Mesoporous Silica Nanoparticles Loaded with Cytotoxic Pt(II) Oxadiazoline Complexes

Authors: G. Wagner, R. Herrmann

Abstract:

Cytotoxic platinum compounds play a major role in the chemotherapy of a large number of human cancers. However, due to the severe side effects for the patient and other problems associated with their use, there is a need for the development of more efficient drugs and new methods for their selective delivery to tumours. One way to achieve the latter could be the use of nanoparticulate substrates that can adsorb or chemically bind the drug. In the cell, the drug is supposed to be released slowly, either by physical desorption or by dissolution of the particle framework. Ideally, the cytotoxic properties of the platinum drug unfold only then, in the cancer cell and over a longer period of time, due to the gradual release. In this paper, we report on our first steps in this direction. The binding properties of a series of cytotoxic Pt(II) oxadiazoline compounds to mesoporous silica particles have been studied by NMR and UV/vis spectroscopy. High loadings were achieved when the Pt(II) compound was relatively polar and was dissolved in a relatively nonpolar solvent before the silica was added. Typically, 6-10 hours were required for complete equilibration, suggesting that adsorption occurred not only on the outer surface but also in the interior of the pores. The untreated and Pt(II)-loaded particles were characterised by C, H, N combustion analysis, BET/BJH nitrogen sorption, electron microscopy (SEM and TEM) and EDX. With the latter methods we were able to demonstrate the homogeneous distribution of the Pt(II) compound on and in the silica particles, and that no Pt(II) bulk precipitate had formed. The in vitro cytotoxicity in a human cancer cell line (HeLa) has been determined for one of the new platinum compounds adsorbed to mesoporous silica particles of different sizes, and compared with that of the corresponding compound in solution. The IC50 data are similar in all cases, suggesting that the release of the Pt(II) compound was relatively fast and possibly occurred before the particles reached the cells. Overall, the platinum drug is chemically stable on silica and retains its activity upon prolonged storage.

Keywords: Cytotoxicity, mesoporous silica, nanoparticles, platinum compounds.

PDF Downloads: 1643
341 An Intelligent Controller Augmented with Variable Zero Lag Compensation for Antilock Braking System

Authors: Benjamin C. Agwah, Paulinus C. Eze

Abstract:

The antilock braking system (ABS) is one of the important contributions of the automobile industry, designed to ensure road safety by keeping vehicles steerable and stable during emergency braking. This paper presents a wheel-slip-based intelligent controller with variable zero lag compensation for ABS. The controller is required to achieve very fast, accurate wheel slip tracking during hard braking, eliminate chattering, and improve transient and steady-state performance, while shortening the stopping distance using an effective braking torque smaller than the maximum allowable torque. The dynamics of a vehicle braking from a velocity of 30 m/s on a straight line were determined and modelled in the MATLAB/Simulink environment to represent a conventional ABS system without a controller. Simulation results indicated that the system without a controller was not able to track the desired wheel slip, and the stopping distance was 135.2 m. Hence, an intelligent controller based on a fuzzy logic controller (FLC) was designed, with a variable zero lag compensator (VZLC) added to enhance the FLC control variable by eliminating steady-state error and providing improved bandwidth to eliminate the effect of high-frequency noise such as chattering during braking. The simulation results showed that the FLC-VZLC provided fast tracking of the desired wheel slip, eliminated chattering, and reduced the stopping distance by 70.5% (39.92 m), 63.3% (49.59 m), 57.6% (57.35 m) and 50% (69.13 m) on dry, wet, cobblestone and snow road surface conditions, respectively. In general, the proposed system used an effective braking torque that is less than the maximum allowable braking torque to achieve efficient wheel slip tracking and overall robust control performance on different road surfaces.
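
As background, the wheel slip definition and a crude slip-tracking braking loop can be sketched as below (a plain integral-type torque adjustment on a quarter-car model, not the paper's FLC-VZLC controller; all parameters are assumptions).

```python
# Illustrative quarter-car braking sketch: wheel slip lambda = (v - omega*r)/v is
# steered toward a target by an integral-type adjustment of the braking torque, capped
# at a maximum allowable value. Not the paper's controller; all parameters are assumed.
import numpy as np

m, J, r, g = 350.0, 1.0, 0.3, 9.81          # quarter-car mass, wheel inertia, wheel radius
lam_target, dt = 0.2, 1e-3
v, omega, Tb = 30.0, 30.0 / 0.3, 0.0        # 30 m/s initial speed, free-rolling wheel

def mu(lam):                                # crude friction-slip curve (dry-road shape)
    if lam < lam_target:
        return lam / lam_target
    return max(1.0 - 0.6 * (lam - lam_target), 0.3)

t, distance = 0.0, 0.0
while v > 0.5:
    lam = max((v - omega * r) / v, 0.0)
    Tb = float(np.clip(Tb + 2000.0 * (lam_target - lam) * dt, 0.0, 1200.0))
    Fx = mu(lam) * m * g                    # longitudinal tyre force
    v -= (Fx / m) * dt                      # vehicle deceleration
    omega = max(omega + ((Fx * r - Tb) / J) * dt, 0.0)
    distance += v * dt
    t += dt

print(f"vehicle stopped in {distance:.1f} m after {t:.2f} s")
```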

Keywords: ABS, Fuzzy Logic Controller, Variable Zero Lag Compensator, Wheel Slip Tracking.

PDF Downloads: 342
340 Assessment of Wastewater Reuse Potential for an Enamel Coating Industry

Authors: Guclu Insel, Efe Gumuslu, Gulten Yuksek, Nilay Sayi Ucar, Emine Ubay Cokgor, Tugba Olmez Hanci, Didem Okutman Tas, Fatos Germirli Babuna, Derya Firat Ertem, Okmen Yildirim, Ozge Erturan, Betul Kirci

Abstract:

In order to eliminate water scarcity problems, effective precautions must be taken. Growing competition for water is increasingly forcing facilities to tackle their own water scarcity problems, and at this point the application of wastewater reclamation and reuse brings considerable economic advantages. In this study, an enamel coating facility, one of the facilities with high water consumption, is evaluated in terms of its wastewater reuse potential. Wastewater reclamation and reuse can be defined as one of the best available techniques for this sector. Hence, process and pollution profiles, together with detailed characterization of segregated wastewater sources, are appraised in order to identify the recoverable effluent streams arising from enamel coating operations. Daily, 170 m³ of process water is required and 160 m³ of wastewater is generated. The segregated streams generated by the two enamel coating processes are characterized in terms of conventional parameters. Relatively clean segregated wastewater streams (reusable wastewaters) are collected separately, and experimental treatability studies are conducted on them. The results show that the reusable wastewater fraction amounts to approximately 110 m³/day, accounting for 68% of the total wastewater. The treatment needed for the reusable wastewaters is determined by considering the water quality requirements of the various operations and the characterization of the reusable wastewater streams. Ultrafiltration (UF), nanofiltration (NF) and reverse osmosis (RO) membranes are subsequently applied to the reusable effluent fraction. Adequate organic matter removal is not obtained with the mentioned treatment sequence.

Keywords: enamel coating, membrane, reuse, wastewater

PDF Downloads: 1490
339 Countering Radicalization to Violent Extremism: A Comparative Study of Canada, the UK and South East Asia

Authors: Daniel Alati

Abstract:

Recent high-profile terrorist events in Canada, the United Kingdom and Europe – the London Bridge attacks, the terrorist attacks in Nice, France and Barcelona, Spain, the 2014 Ottawa Parliament attacks and the 2017 attacks in Edmonton – have all raised levels of public and academic concern with so-called “lone-wolf” and “radicalized” terrorism. Similarly, several countries outside of the “Western” world have been dealing with radicalization to violent extremism for several years. Many South East Asian countries, including Indonesia, Malaysia, Singapore and the Philippines have all had experience with what might be described as ISIS or extremist-inspired acts of terrorism. Indeed, it appears the greatest strength of groups such as ISIS has been their ability to spread a global message of violent extremism that has led to radicalization in markedly different jurisdictions throughout the world. These markedly different jurisdictions have responded with counter-radicalization strategies that warrant further comparative analysis. This paper utilizes an inter-disciplinary legal methodology. In doing so, it compares legal, political, cultural and historical aspects of the counter-radicalization strategies employed by Canada, the United Kingdom and several South East Asian countries (Indonesia, Malaysia, Singapore and the Philippines). Whilst acknowledging significant legal and political differences between these jurisdictions, the paper engages in these analyses with an eye towards understanding which best practices might be shared between the jurisdictions. In doing so, it presents valuable findings of a comparative nature that are useful to both academic and practitioner audiences in several jurisdictions.

Keywords: Canada, United Kingdom, South East Asia, comparative law and politics, radicalization to violent extremism, terrorism.

PDF Downloads: 1748
338 A Face-to-Face Education Support System Capable of Lecture Adaptation and Q&A Assistance Based On Probabilistic Inference

Authors: Yoshitaka Fujiwara, Jun-ichirou Fukushima, Yasunari Maeda

Abstract:

Keys to high-quality face-to-face education are ensuring flexibility in the way lectures are given, and providing care and responsiveness to learners. This paper describes a face-to-face education support system that is designed to raise the satisfaction of learners and reduce the workload on instructors. The system consists of a lecture adaptation assistance part, which assists instructors in adapting teaching content and strategy, and a Q&A assistance part, which provides learners with answers to their questions. The core component of the former part is a "learning achievement map", which is composed of a Bayesian network (BN). From learners' performance in exercises on relevant past lectures, the lecture adaptation assistance part obtains the information required to adapt the presentation of the next lecture appropriately. The core component of the Q&A assistance part is a case base, which accumulates cases consisting of questions expected from learners and answers to them. The Q&A assistance part is a case-based search system equipped with a search index that performs probabilistic inference. A prototype face-to-face education support system intended for the teaching of Java programming has been built, and this approach was evaluated using this system. The expected degree of understanding of each learner for a future lecture was derived from his or her performance in exercises on past lectures, and this expected degree of understanding was used to select one of three adaptation levels. A model for determining the adaptation level most suitable for the individual learner has been identified. An experimental case base was built to examine the search performance of the Q&A assistance part, and it was found that the rate of successfully finding an appropriate case was 56%.
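
A toy sketch of the learning-achievement-map idea, with the Bayesian network collapsed to a single invented conditional probability table (the paper's network structure and probabilities are not given here): exercise results yield an expected degree of understanding, which selects one of three adaptation levels.

```python
# Toy sketch: an invented conditional probability table stands in for the Bayesian
# network; exercise results give P(understands next topic), which picks one of three
# adaptation levels. All numbers and level descriptions are assumptions.
def p_understand_next(passed_ex1, passed_ex2):
    # Assumed table: P(understand | result of exercise 1, result of exercise 2).
    cpt = {(True, True): 0.90, (True, False): 0.60,
           (False, True): 0.55, (False, False): 0.20}
    return cpt[(passed_ex1, passed_ex2)]

def adaptation_level(p):
    if p >= 0.75:
        return "level 1: standard presentation"
    if p >= 0.45:
        return "level 2: add worked examples"
    return "level 3: re-teach prerequisites"

for results in [(True, True), (True, False), (False, False)]:
    p = p_understand_next(*results)
    print(results, f"P(understands) = {p:.2f} ->", adaptation_level(p))
```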

Keywords: Bayesian network, face-to-face education, lecture adaptation, Q&A assistance.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1358
337 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods

Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin

Abstract:

Nowadays, the data center industry faces strong challenges to increase speed and data-processing capacity while keeping its devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of these facilities use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques or perfecting existing ones would be a great advance for this industry. Installing a matrix of temperature sensors distributed through the structure of each server would provide the data required to obtain an instantaneous temperature profile inside it. However, the number of temperature probes required to obtain the temperature profiles with sufficient accuracy is very high and expensive. Therefore, less intrusive techniques are employed in which each point that characterizes the server temperature profile is obtained by solving differential equations through simulation, which simplifies data collection but increases the time needed to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers’ equations using backward, forward and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed to approximate the first- and second-order derivatives of the resulting Burgers’ equation are the key to obtaining results of greater or lesser accuracy, each scheme carrying its own characteristic truncation error.
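
As an illustration of the discretization step described above, the following sketch applies a backward (upwind) difference to the convective term and a central difference to the diffusive term of a one-dimensional viscous Burgers’ equation; the grid, viscosity and initial profile are illustrative assumptions and do not reproduce the paper’s FEM formulation or data center geometry.

```python
import numpy as np

# Illustrative sketch (not the paper's model): 1-D viscous Burgers' equation
#   du/dt + u du/dx = nu d2u/dx2
# solved with a backward (upwind) difference for the convective term and a
# central difference for the diffusive term, explicit in time.
nx, nt = 101, 500
L, nu = 1.0, 0.01
dx = L / (nx - 1)
dt = 0.2 * dx**2 / nu          # small step chosen for stability (assumed)
x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)          # illustrative initial "temperature-like" profile

for _ in range(nt):
    un = u.copy()
    # interior points: backward difference for u*du/dx, central for d2u/dx2
    u[1:-1] = (un[1:-1]
               - dt / dx * un[1:-1] * (un[1:-1] - un[:-2])
               + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
    u[0], u[-1] = 0.0, 0.0     # fixed boundary values

print(u.max())  # peak of the simulated profile after nt time steps
```

Swapping the backward difference for a forward or central one changes only the convective stencil, which is where the differences in truncation error mentioned above come from.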

Keywords: Burgers’ equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 516
336 Destination Decision Model for Cruising Taxis Based on Embedding Model

Authors: Kazuki Kamada, Haruka Yamashita

Abstract:

In Japan, taxis are a popular means of transportation and the taxi industry is a large business. In recent years, however, the industry has faced the difficult problem of a declining number of taxi drivers. In the taxi business, three main methods of finding passengers are used. The first is "cruising", in which drivers pick up passengers while driving along a road. The second is "waiting", in which drivers wait for passengers near places with high demand for taxis, such as the entrances of hospitals and train stations. The third is "dispatching", in which a taxi is allocated based on a request made through the taxi company. Of these, cruising requires experience and intuition to find passengers, and it is difficult for drivers to decide the destination for cruising. A strong recommendation system for cruising taxis would support new drivers in finding passengers and could be a solution to the decreasing number of drivers in the taxi industry. In this research, we propose a method of recommending a destination to cruising taxi drivers. As a machine learning technique, embedding models, which map high-dimensional data into a low-dimensional space, are widely used for data analysis because they clearly represent the semantic relationships within the data. Taxi drivers have their favorite courses based on their experience, and these courses differ from driver to driver. We assume that a cruising course carries meaning, such as a course for finding business passengers (circling the business districts of the city or going to main stations) or a course for finding tourist passengers (circling sightseeing spots or large hotels), and we extract the meaning of these destinations. We analyze the cruising history data of taxis based on the embedding model and propose a destination recommendation system. Finally, we demonstrate the recommendation of destinations for cruising taxi drivers through an analysis of real-world data using the proposed method.
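
As an illustration of the embedding idea, the sketch below treats each driver's sequence of visited zones as a "sentence" and learns zone embeddings with gensim's Word2Vec; the zone identifiers and histories are hypothetical placeholders, and the abstract does not specify which embedding model or data the authors actually used.

```python
from gensim.models import Word2Vec

# Hypothetical cruising histories: each list is the sequence of city zones one
# taxi visited during a shift (zone IDs are made-up placeholders).
histories = [
    ["station_A", "biz_district", "biz_district", "station_B"],
    ["hotel_row", "sightseeing_1", "sightseeing_2", "hotel_row"],
    ["station_A", "biz_district", "station_B", "biz_district"],
    ["sightseeing_1", "hotel_row", "sightseeing_2", "sightseeing_1"],
]

# Learn low-dimensional zone embeddings from the visit sequences (skip-gram).
model = Word2Vec(sentences=histories, vector_size=16, window=2,
                 min_count=1, sg=1, epochs=200, seed=1)

# Recommend next destinations: zones whose embeddings are closest to the
# zones the driver has just cruised through.
recent = ["station_A", "biz_district"]
print(model.wv.most_similar(positive=recent, topn=3))
```

Ranking candidate zones by cosine similarity to the zones just visited is one simple way to turn the learned embedding into a destination recommendation.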

Keywords: Taxi industry, decision making, recommendation system, embedding model.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 423
335 Evaluation of the Heating Capability and in vitro Hemolysis of Nanosized MgxMn1-xFe2O4 (x = 0.3 and 0.4) Ferrites Prepared by Sol-gel Method

Authors: Laura Elena De León Prado, Dora Alicia Cortés Hernández, Javier Sánchez

Abstract:

Among the different cancer treatments currently used, hyperthermia has promising potential due to the multiple benefits obtained with this technique. In general terms, hyperthermia is a method that takes advantage of the sensitivity of cancer cells to heat in order to damage or destroy them. Among the different ways of supplying heat to cancer cells to achieve their destruction or damage, the use of magnetic nanoparticles has attracted attention due to the capability of these particles to generate heat under the influence of an external magnetic field. In addition, these nanoparticles have a high surface area and sizes similar to or even smaller than those of biological entities, which allows them to approach and interact with a specific region of interest. The magnetic nanoparticles most used for hyperthermia treatment are those based on iron oxides, mainly magnetite and maghemite, due to their biocompatibility, good magnetic properties and chemical stability. However, in order to fulfill more efficiently the requirements of magnetic hyperthermia treatment, ferrites incorporating different metallic ions, such as Mg, Mn, Co, Ca, Ni, Cu, Li and Gd, into their structure have been investigated. This paper reports the synthesis of nanosized MgxMn1-xFe2O4 (x = 0.3 and 0.4) ferrites by the sol-gel method and their evaluation in terms of heating capability and in vitro hemolysis, to determine the potential use of these nanoparticles as thermoseeds for the treatment of cancer by magnetic hyperthermia. It was possible to obtain ferrites with nanometric sizes, a single crystalline phase with an inverse spinel structure, and a behavior close to that of superparamagnetic materials. Additionally, at a concentration of 10 mg of magnetic material per mL of water, it was possible to reach a temperature of approximately 45°C, which is within the range of temperatures used for hyperthermia treatment. The results of the in vitro hemolysis assay showed that, at the concentrations tested, these nanoparticles are non-hemolytic, as their percentage of hemolysis is close to zero. Therefore, these materials can be used as thermoseeds for the treatment of cancer by magnetic hyperthermia.

Keywords: Ferrites, heating capability, hemolysis, nanoparticles, sol-gel.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 903
334 A Novel Strategy for Oriented Protein Immobilization

Authors: Ching-Wei Tsai, Chih-I Liu, Ruoh-Chyu Ruaana

Abstract:

A new strategy for the oriented immobilization of proteins is proposed. The strategy contains two steps. The first step is to search for a docking site away from the active site on the protein surface. The second step is to find a ligand that is able to grasp the targeted site of the protein. To avoid ligand binding to the active site of the protein, the targeted docking site is selected to carry charges opposite to those near the active site. To enhance ligand-protein binding, both hydrophobic and electrostatic interactions need to be included; the targeted docking site should therefore contain hydrophobic amino acids. The ligand is then selected with the help of molecular docking simulations. The enzyme α-amylase derived from Aspergillus oryzae (TAKA) was taken as an example for oriented immobilization. The active site of TAKA is surrounded by negatively charged amino acids. All the possible hydrophobic sites on the surface of TAKA were evaluated by free-energy estimation through benzene docking. A hydrophobic site on the opposite side of TAKA's active site was found to be positive in net charge. A possible ligand, 3,3',4,4'-biphenyltetracarboxylic acid (BPTA), was found to catch TAKA at the designated docking site. The BPTA molecules were then grafted onto silica gel, and the affinity of TAKA adsorption and the specific activity of the immobilized enzyme were measured. It was found that TAKA had a dissociation constant as low as 7.0×10⁻⁶ M toward the ligand BPTA on silica gel. An increase in ionic strength had little effect on the adsorption of TAKA, which indicates the existence of hydrophobic interaction between the ligand and the protein. The specific activity of the immobilized TAKA was compared with that of TAKA randomly adsorbed on primary-amine-containing silica gel. It was found that the TAKA immobilized in an oriented manner exhibits a specific activity twice as high as that of the TAKA randomly adsorbed by ionic interaction.
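
A rough sketch of the first step of the strategy is given below: candidate surface patches are ranked by hydrophobicity while requiring a net charge opposite in sign to that surrounding the active site. The residue patches are hypothetical, the hydrophobicity values follow the Kyte-Doolittle scale, and the actual benzene-docking and free-energy calculations used in the study are not reproduced.

```python
# Rough sketch of the site-selection idea (not the authors' workflow): rank
# candidate surface patches by hydrophobicity, keeping only patches whose net
# charge is opposite in sign to the charge around the active site.
# Hydrophobicity values follow the Kyte-Doolittle scale; patches are made up.

KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "E": -3.5,
      "Q": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}
CHARGE = {"D": -1, "E": -1, "K": +1, "R": +1}

def net_charge(residues):
    return sum(CHARGE.get(r, 0) for r in residues)

def hydrophobicity(residues):
    return sum(KD[r] for r in residues) / len(residues)

# Hypothetical surface patches (one-letter residue codes) and active-site surroundings.
active_site_env = ["D", "E", "D", "N"]            # negatively charged, as for TAKA
patches = {"patch_1": ["L", "I", "K", "V", "F"],  # hydrophobic, net positive
           "patch_2": ["E", "D", "L", "S", "G"],  # negatively charged, rejected
           "patch_3": ["V", "A", "R", "F", "T"]}  # hydrophobic, net positive

wanted_sign = -1 if net_charge(active_site_env) > 0 else +1
candidates = {name: hydrophobicity(res) for name, res in patches.items()
              if net_charge(res) * wanted_sign > 0}
print(max(candidates, key=candidates.get))        # best docking-site candidate
```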

Keywords: Oriented protein immobilization, molecular docking, ligand design, surface modification.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1768