Search results for: electrical networks.
43 Security Model of a Unified Communications and Integrated Collaborations System in the Health Sector Environment of Developing Countries: A Case of Uganda
Authors: Excellence Favor, Bakari M. M. Mwinyiwiwa
Abstract:
Access to information holds the key to the empowerment of everybody, regardless of where they live. This research was carried out on behalf of people living in developing countries, considering the complex geographical, demographic and socio-economic conditions of the areas they live in, which hinder both access to information and access to professionals providing services, such as medical workers, and have contributed to high death rates and development stagnation. Research on a Unified Communications and Integrated Collaborations (UCIC) system in the health sector of developing countries aims at creating a possible solution for bridging the digital canyon among these communities. The system is meant to deliver services in a seamless manner so that health workers situated anywhere can be reached easily and can access information that enhances service delivery. The proposed UCIC provides an immersive telepresence experience for one-to-one or many-to-many meetings. Extending to locations anywhere in the world, the platform delivers ultra-low operating costs through the use of general-purpose networks, special lenses and tracking systems. The essence of this study is to create a security model for the deployment of the UCIC system in the health sector of developing countries. The modelling approach used for building the UCIC system security carefully considers the specific requirements of the health sector environment organization, such as the data centre, national, regional and district hospitals, and health centres IV, III, II and I, and then builds the single best possible secure network to meet their needs. The security model demonstrates how the components of the UCIC system will be protected physically and logically in the health sector environment. The UCIC system, once adopted and implemented correctly, will enhance the speed and quality of services offered by health workers. The capacities of UCIC will help health workers shorten decision cycles, accelerate service delivery and save lives by speeding up access to information and by making it possible for all health workers and patients to collaborate ubiquitously.
Keywords: Developing Countries, Health Sector Environment, Security, Unified Communications and Integrated Collaborations.
42 A Multi-Level WEB Based Parallel Processing System: A Hierarchical Volunteer Computing Approach
Authors: Abdelrahman Ahmed Mohamed Osman
Abstract:
Over the past few years, a number of efforts have been made to build parallel processing systems that utilize the idle power of LANs and PCs available in many homes and corporations. The main advantage of these approaches is that they provide cheap parallel processing environments for those who cannot afford the expense of supercomputers and parallel processing hardware. However, most of the solutions provided are not very flexible in the use of available resources and are very difficult to install and set up. In this paper, a multi-level web-based parallel processing system (MWPS) is designed (see appendix). MWPS is based on the idea of volunteer computing; it is very flexible, easy to set up and easy to use. MWPS allows three types of subscribers: simple volunteers (single computers), super volunteers (full networks) and end users. All of these entities are coordinated transparently through a secure web site. Volunteer nodes provide the processing power needed by the system's end users. There is no limit on the number of volunteer nodes, so the system can grow indefinitely. Both volunteers and system users must register and subscribe. Once they subscribe, each entity is provided with the appropriate MWPS components, which are very easy to install. Super volunteer nodes are provided with special components that make it possible to delegate some of the load to their inner nodes. These inner nodes may in turn delegate some of the load to other, lower-level inner nodes, and so on. It is the responsibility of the parent super nodes to coordinate the delegation process and deliver the results back to the user. MWPS uses a simple behavior-based scheduler that takes into consideration the current load and previous behavior of processing nodes. Nodes that fulfill their contracts within the expected time gain a high degree of trust; nodes that fail to satisfy their contracts receive a lower degree of trust. MWPS is based on the .NET framework and provides the minimal level of security expected in distributed processing environments. Users and processing nodes are fully authenticated, and communications and messages between nodes are secured. The system has been implemented using C#. MWPS may be used by any group of people or companies to establish a parallel processing or grid environment.
Keywords: Volunteer computing, Parallel Processing, XMLWebServices, .NET Remoting, Tuplespace.
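To make the behavior-based scheduling idea concrete, the sketch below shows one way a trust score could be updated from contract outcomes and used to pick a node. It is an illustrative Python sketch only: the class and function names are hypothetical, and the actual MWPS is a C#/.NET system whose exact scoring rules are not given in the abstract.

```python
# Illustrative sketch of a behavior-based trust scheduler (hypothetical names;
# the real MWPS is implemented in C# on the .NET framework).
class VolunteerNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.trust = 0.5                      # start every node at neutral trust

    def report_contract(self, finished_on_time):
        # raise trust for contracts fulfilled within the expected time,
        # lower it for contracts that were not satisfied
        self.trust += 0.1 if finished_on_time else -0.2
        self.trust = min(1.0, max(0.0, self.trust))

def pick_node(nodes, current_load):
    """Prefer trusted nodes that are not already heavily loaded."""
    return max(nodes, key=lambda n: n.trust - current_load.get(n.node_id, 0.0))

nodes = [VolunteerNode("v1"), VolunteerNode("v2")]
nodes[0].report_contract(True)
print(pick_node(nodes, {"v2": 0.3}).node_id)   # -> 'v1'
```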
41 Electronics Thermal Management Driven Design of an IP65-Rated Motor Inverter
Authors: Sachin Kamble, Raghothama Anekal, Shivakumar Bhavi
Abstract:
Thermal management of electronic components packaged inside an IP65 rated enclosure is of prime importance in industrial applications. Electrical enclosure protects the multiple board configurations such as inverter, power, controller board components, busbars, and various power dissipating components from harsh environments. Industrial environments often experience relatively warm ambient conditions, and the electronic components housed in the enclosure dissipate heat, due to which the enclosures and the components require thermal management as well as reduction of internal ambient temperatures. Design of Experiments based thermal simulation approach with MOSFET arrangement, Heat sink design, Enclosure Volume, Copper and Aluminum Spreader, Power density, and Printed Circuit Board (PCB) type were considered to optimize air temperature inside the IP65 enclosure to ensure conducive operating temperature for controller board and electronic components through the different modes of heat transfer viz. conduction, natural convection and radiation using Ansys ICEPAK. MOSFET’s with the parallel arrangement, IP65 enclosure molded heat sink with rectangular fins on both enclosures, specific enclosure volume to satisfy the power density, Copper spreader to conduct heat to the enclosure, optimized power density value and selecting Aluminum clad PCB which improves the heat transfer were the contributors towards achieving a conducive operating temperature inside the IP-65 rated Motor Inverter enclosure. A reduction of 52 ℃ was achieved in internal ambient temperature inside the IP65 enclosure between baseline and final design parameters, which met the operative temperature requirements of the electronic components inside the IP-65 rated Motor Inverter.
Keywords: Ansys ICEPAK, Aluminum Clad PCB, IP 65 enclosure, motor inverter, thermal simulation.
40 Influence of Thermo-fluid-dynamic Parameters on Fluidics in an Expanding Thermal Plasma Deposition Chamber
Authors: G. Zuppardi, F. Romano
Abstract:
Technology of thin film deposition is of interest in many engineering fields, from electronic manufacturing to corrosion-protective coating. A typical deposition process, like that developed at the University of Eindhoven, considers the deposition of a thin, amorphous film of C:H or of Si:H on the substrate, using the Expanding Thermal arc Plasma technique. In this paper, a computing procedure is proposed to simulate the flow field in a deposition chamber similar to that at the University of Eindhoven, and a sensitivity analysis is carried out in terms of precursor mass flow rate, electrical power supplied to the torch, and the fluid-dynamic characteristics of the plasma jet, using different nozzles. To this purpose, a deposition chamber similar in shape, dimensions and operating parameters to the above-mentioned chamber is considered. Furthermore, a method is proposed for a very preliminary evaluation of the film thickness distribution on the substrate. The computing procedure relies on two codes working in tandem; the output from the first code is the input to the second one. The first code simulates the flow field in the torch, where Argon is ionized according to Saha's equation, and in the nozzle. The second code simulates the flow field in the chamber; due to the high rarefaction level, this is a (commercial) Direct Simulation Monte Carlo code. The gas is a mixture of 21 chemical species, and 24 chemical reactions involving Argon plasma and Acetylene are implemented in both codes. The effects of the above-mentioned operating parameters are evaluated and discussed by means of 2-D maps and profiles of some important thermo-fluid-dynamic parameters, such as Mach number, velocity and temperature. The intensity, position and extension of the shock wave are evaluated, and the influence of the above-mentioned test conditions on the film thickness and uniformity of distribution is also assessed.
Keywords: Deposition chamber, Direct Simulation Monte Carlo method (DSMC), Plasma chemistry, Rarefied gas dynamics.
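As a side note on the ionization model mentioned above, the following sketch evaluates the textbook Saha equation for singly ionized argon. It is only an illustration of the standard formula, not the authors' torch code; the statistical-weight ratio and the example temperature and density are assumed values.

```python
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_E = 9.1093837e-31     # electron mass, kg
H   = 6.62607015e-34    # Planck constant, J s
EV  = 1.602176634e-19   # 1 eV in J

def saha_rhs(T, chi_ev=15.76, g_ratio=8.0):
    """Right-hand side S = n_e*n_i/n_0 of the Saha equation for argon [m^-3].

    chi_ev  : first ionization energy of Ar (eV)
    g_ratio : 2*g_ion/g_neutral (illustrative statistical weights)
    """
    lam = H / np.sqrt(2.0 * np.pi * M_E * K_B * T)   # thermal de Broglie wavelength
    return g_ratio / lam**3 * np.exp(-chi_ev * EV / (K_B * T))

def ionization_fraction(T, n_heavy):
    """Solve x^2/(1-x) = S/n_heavy for the ionization fraction x."""
    s = saha_rhs(T) / n_heavy
    return 0.5 * (-s + np.sqrt(s * s + 4.0 * s))

print(ionization_fraction(12000.0, 1.0e22))   # assumed torch conditions
```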
39 A POX Controller Module to Collect Web Traffic Statistics in SDN Environment
Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin
Abstract:
Software Defined Networking (SDN) is a new networking paradigm. It is designed to facilitate the way the network is managed, measured, debugged and controlled dynamically, and to make it suitable for modern applications. Generally, measurement methods can be divided into two categories: active and passive. An active measurement method injects test packets into the network in order to monitor their behaviour (the ping tool is an example), while a passive measurement method monitors the traffic for the purpose of deriving measurement values. Both active and passive measurement methods are useful for collecting traffic statistics and monitoring network traffic. Although there has been work focusing on measuring traffic statistics in an SDN environment, it was only meant for measuring packet and byte rates for non-web traffic. In this study, a feasible method is designed to measure the number of packets and bytes over a given time and to facilitate obtaining statistics for both web traffic and non-web traffic. Web traffic refers to HTTP requests at the application layer, while non-web traffic refers to ICMP and TCP requests; this work is therefore more comprehensive than previous works. With a module developed on the POX OpenFlow controller, information is collected from each active flow in the OpenFlow switch and presented on the Command Line Interface (CLI) and in the Wireshark interface. The statistics displayed on the CLI and in Wireshark include, among others, the type of protocol, the number of bytes and the number of packets. Besides, this module shows, in the same statistics list, the number of flows added to the switch whenever traffic is generated from and to hosts. In order to carry out this work effectively, our Python module sends a statistics request message to the switch every five seconds, requesting its current port and flow statistics, and the switch replies with the required information in a statistics reply message. Thus, the POX controller is notified and updated with any changes that may happen in the entire network within a very short time. The aim of this study is therefore to prepare a list of the important statistics elements collected from the whole network, to be used in further research, particularly research dealing with the detection of network attacks that cause a sudden rise in the number of packets and bytes, such as Distributed Denial of Service (DDoS).
Keywords: Mininet, OpenFlow, POX controller, SDN.
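The periodic statistics polling described above can be illustrated with a minimal POX component. The sketch below follows the standard POX flow-statistics pattern; the port-80 test used to separate web from non-web traffic and all names are our own simplification, not the authors' module.

```python
# Minimal POX sketch: poll flow statistics every 5 s and split web / non-web.
from pox.core import core
import pox.openflow.libopenflow_01 as of
from pox.lib.recoco import Timer

log = core.getLogger()

def _request_stats():
    # ask every connected switch for its current flow statistics
    for connection in core.openflow._connections.values():
        connection.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))

def _handle_flow_stats(event):
    web = {"packets": 0, "bytes": 0}
    other = {"packets": 0, "bytes": 0}
    for f in event.stats:
        # crude heuristic: TCP port 80 traffic counted as web (HTTP)
        bucket = web if 80 in (f.match.tp_src, f.match.tp_dst) else other
        bucket["packets"] += f.packet_count
        bucket["bytes"] += f.byte_count
    log.info("web=%s  non-web=%s  flows=%d", web, other, len(event.stats))

def launch():
    core.openflow.addListenerByName("FlowStatsReceived", _handle_flow_stats)
    Timer(5, _request_stats, recurring=True)   # statistics request every 5 s
```

Started alongside a forwarding component via ./pox.py, a module like this would log the per-protocol packet and byte counters that the abstract describes.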
38 A Survey Proposal towards Holistic Management of Schizophrenia
Authors: Pronab Ganguly, Ahmed A. Moustafa
Abstract:
Holistic management of schizophrenia involves mainstream pharmacological intervention, complementary medicine intervention, therapeutic intervention and other psychosocial factors such as accommodation, education, job training, employment, relationships, friendship, exercise, overall well-being, smoking, substance abuse, suicide prevention, stigmatisation, recreation, entertainment, violent behaviour, arrangement of public trusteeship and guardianship, day-to-day living skills, integration with the community, and management of overweight due to medications and other medication-related health complications, amongst others. Our review shows that there is no integrated survey combining all these factors. An international web-based survey will be conducted to evaluate the significance of all these factors and present them in a unified manner. It is believed this investigation will contribute positively towards the holistic management of schizophrenia. There will be two surveys. In the pharmacological intervention survey, five popular drugs for schizophrenia will be chosen and their efficacy as well as harmful side effects will be evaluated on a scale of 0 to 10. This survey will be done by psychiatrists. In the second survey, each element of therapeutic intervention and each psychosocial factor will be evaluated according to its significance on a scale of 0 to 10. This survey will be done by caregivers, psychologists, case managers and case workers. For the first survey, professional bodies of psychiatrists in English-speaking countries will be contacted and asked to invite their members to participate in the survey. For the second survey, professional bodies of clinical psychologists and caregivers in English-speaking countries will be contacted and asked to invite their members to participate. Additionally, for both surveys, relevant professionals will be contacted through personal contact networks. For both surveys, the mean, mode, median, standard deviation and net promoter score will be calculated for each factor and then presented in a statistically significant manner. Subsequently, each factor will be ranked according to its statistical significance. Additionally, country-specific variation will be highlighted to identify variation patterns. The results of these surveys will identify the relative significance of each type of pharmacological intervention, each type of therapeutic intervention and each type of psychosocial factor. The determination of this relative importance will contribute to the improvement in quality of life for individuals with schizophrenia.
Keywords: Schizophrenia, holistic management, antipsychotics, quality of life.
37 Body Composition Analysis of University Students by Anthropometry and Bioelectrical Impedance Analysis
Authors: Vinti Davar
Abstract:
Background: Worldwide, at least 2.8 million people die each year as a result of being overweight or obese, and 35.8 million (2.3%) of global DALYs are caused by overweight or obesity. Obesity is acknowledged as one of the burning public health problems, reducing life expectancy and quality of life. Body composition analysis of the university population is essential in assessing nutritional status, as well as the risk of developing diseases associated with abnormal body fat content, so as to make nutritional recommendations. Objectives: The main aim was to determine the prevalence of obesity and overweight in university students using anthropometric analysis and BIA methods. Material and Methods: In this cross-sectional study, 283 university students participated. The body composition analysis was undertaken mainly by using: i) anthropometric measurement: height, weight, BMI, waist circumference, hip circumference and skinfold thickness; ii) bioelectrical impedance analysis of body fat mass, fat percentage and visceral fat, measured with a Tanita SC-330P Professional Body Composition Analyzer. The data so collected were compiled in MS Excel and analyzed for males and females using SPSS 16. Results and Discussion: The mean age of the male subjects (n = 153) was 25.37 ± 2.39 years and of the female subjects (n = 130) was 22.53 ± 2.31 years. The BIA data revealed a very high mean fat percentage among the female subjects, i.e. 30.3 ± 6.5 per cent, whereas the mean fat percentage of the male subjects was 15.60 ± 6.02 per cent, indicating a normal body fat range. The findings showed high visceral fat in both males (12.92 ± 3.02) and females (16.86 ± 4.98). BF% and WHR were higher among females, whereas BMI was higher among males. The most evident correlation was verified between BF% and WHR for female students (r = 0.902; p < 0.001). The correlations of BFM and BF% with the thickness of the triceps, subscapular and abdominal skinfolds and with BMI were significant (p < 0.001). Conclusion: The studied data made it obvious that there is a need to initiate lifestyle-changing strategies, especially for adult females, and to encourage them to improve their dietary intake to prevent the incidence of noncommunicable diseases due to obesity and a high fat percentage.
Keywords: Anthropometry, bioelectrical impedance, body fat percentage, obesity.
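The indices reported above are simple ratios, so a short worked example may help; the subject data in this sketch are hypothetical and only illustrate how BMI, WHR and the BF%-WHR Pearson correlation would be computed.

```python
import numpy as np
from scipy import stats

def bmi(weight_kg, height_m):
    # body mass index = weight / height^2
    return weight_kg / height_m ** 2

def whr(waist_cm, hip_cm):
    # waist-to-hip ratio
    return waist_cm / hip_cm

print(round(bmi(70.0, 1.72), 1))          # 23.7 kg/m^2
print(round(whr(78.0, 98.0), 2))          # 0.8

# hypothetical BF% and WHR values for a few subjects
bf_pct   = np.array([28.4, 33.1, 25.9, 36.2, 30.5])
whr_vals = np.array([0.78, 0.86, 0.74, 0.90, 0.81])
r, p = stats.pearsonr(bf_pct, whr_vals)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```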
36 Early-Warning Lights Classification Management System for Industrial Parks in Taiwan
Authors: Yu-Min Chang, Kuo-Sheng Tsai, Hung-Te Tsai, Chia-Hsin Li
Abstract:
This paper presents the early-warning lights classification management system for industrial parks promoted by the Taiwan Environmental Protection Administration (EPA) since 2011, including the definition of each early-warning light, objectives, action program and accomplishments. All of the 151 industrial parks in Taiwan were classified into four early-warning lights, including red, orange, yellow and green, for carrying out respective pollution management according to the monitoring data of soil and groundwater quality, regulatory compliance, and regulatory listing of control site or remediation site. The Taiwan EPA set up a priority list for high potential polluted industrial parks and investigated their soil and groundwater qualities based on the results of the light classification and pollution potential assessment. In 2011-2013, there were 44 industrial parks selected and carried out different investigation, such as the early warning groundwater well networks establishment and pollution investigation/verification for the red and orange-light industrial parks and the environmental background survey for the yellow-light industrial parks. Among them, 22 industrial parks were newly or continuously confirmed that the concentrations of pollutants exceeded those in soil or groundwater pollution control standards. Thus, the further investigation, groundwater use restriction, listing of pollution control site or remediation site, and pollutant isolation measures were implemented by the local environmental protection and industry competent authorities; the early warning lights of those industrial parks were proposed to adjust up to orange or red-light. Up to the present, the preliminary positive effect of the soil and groundwater quality management system for industrial parks has been noticed in several aspects, such as environmental background information collection, early warning of pollution risk, pollution investigation and control, information integration and application, and inter-agency collaboration. Finally, the work and goal of self-initiated quality management of industrial parks will be carried out on the basis of the inter-agency collaboration by the classified lights system of early warning and management as well as the regular announcement of the status of each industrial park.
Keywords: Industrial park, soil and groundwater quality management, early-warning lights classification, SOP for reporting and treatment of monitored abnormal events.
35 Using GIS and Map Data for the Analysis of the Relationship between Soil and Groundwater Quality at Saline Soil Area of Kham Sakaesaeng District, Nakhon Ratchasima, Thailand
Authors: W. Thongwat, B. Terakulsatit
Abstract:
The study area is Kham Sakaesaeng District in Nakhon Ratchasima Province, in the southern section of Northeastern Thailand, located in the Lower Khorat-Ubol Basin. This region is one of the saline soil areas, located on a dry plateau, and regularly experiences periods of flood alternating with periods of drought. In particular, the drought in the summer season causes the major saline soil and saline water problems of this region. The general causes of dry land salting are salting on irrigated land and an excess of water leading to a rising water table in the aquifer. The purpose of this study is to determine the relationship between the physical and chemical properties of the soil and the groundwater. Soil and groundwater samples were collected in both the rainy and the summer seasons. The pH, electrical conductivity (EC), total dissolved solids (TDS), chloride content and salinity were investigated. The experimental results of the soil and groundwater samples show pH slightly less than 7, EC (186 to 8,156 µS/cm and 960 to 10,712 µS/cm), TDS (93 to 3,940 ppm and 480 to 5,356 ppm), chloride content (45.58 to 4,177,015 mg/l and 227.90 to 9,216,736 mg/l), and salinity (0.07 to 4.82 ppt and 0.24 to 14.46 ppt) in the rainy and summer seasons, respectively. The distributions of chloride content and salinity were interpolated and displayed as maps for each season using the ArcMap 10.3 program. The saline soil and brined groundwater in the study area are related to the low-lying topography, the drought-prone conditions, and the exposure of salt sources; in particular, the Rock Salt Member of the Maha Sarakham Formation is exposed or lies near the ground surface in this study area. During the rainy season, salt is eroded or weathered from the salt-source rock formation and transported by surface flow or leached into the groundwater. In the dry season, the ground surface is dry enough that salt precipitates from the brined surface water or rises from the brined groundwater, increasing the chloride and salinity content at the ground surface and in the groundwater.
Keywords: Environmental geology, soil salinity, geochemistry, groundwater hydrology.
34 C-LNRD: A Cross-Layered Neighbor Route Discovery for Effective Packet Communication in Wireless Sensor Network
Authors: K. Kalaikumar, E. Baburaj
Abstract:
One of the problems to be addressed in wireless sensor networks is the set of issues related to cross-layer communication. A cross-layer architecture shares information across the layers, ensuring Quality of Service (QoS). With this shared information, the MAC protocol adapts its functionality, such as route selection, to a changing sensor network environment. However, time slot assignment and the neighbour route selection time duration for the cross layer have not been addressed. Time-varying physical layer communication over the cross layer causes a high traffic load in the sensor network. Although the traffic load can be reduced using a cross-layer optimization procedure, the computational cost is high. To improve communication efficacy in the sensor network, a self-determined time slot based Cross-Layered Neighbour Route Discovery (C-LNRD) method is presented in this paper. In the presented work, the initial process is to discover the route in the sensor network using Dynamic Source Routing based Medium Access Control (MAC) sub layers. This process considers MAC layer operation with dynamic route neighbour table discovery. Then, the discovered route path for packet communication employs a Broad Route Distributed Time Slot Assignment method on the Cross-Layered Sensor Network system. Broad Route means time slotting over varying lengths of the route paths. During packet communication in this sensor network, the transmission of packets is adjusted over different times with varying ranges to control the traffic rate. Finally, a Rayleigh fading model is developed in C-LNRD to identify the performance of the sensor network communication structure. The main task of Rayleigh fading is to measure the power level of each communication under the MAC sub layer. The minimized power level helps to reduce the computational cost of packet communication in the sensor network. Experiments are conducted on factors such as power factor on packet communication, neighbour route discovery time, and information (i.e., packet) propagation speed.
Keywords: Medium access control, neighbour route discovery, wireless sensor network, Rayleigh fading, distributed time slot assignment
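To give a feel for the Rayleigh fading power measurement mentioned in the abstract, the sketch below draws received-power samples for a flat Rayleigh channel; the transmit power and mean path gain are assumed values, and this is not the authors' simulation code.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def rayleigh_received_power(tx_power_w, mean_path_gain, n_samples=10_000):
    """Received-power samples under flat Rayleigh fading.

    The channel amplitude is Rayleigh distributed, so the power gain |h|^2
    follows an exponential distribution with mean `mean_path_gain`."""
    power_gain = rng.exponential(mean_path_gain, n_samples)
    return tx_power_w * power_gain

samples = rayleigh_received_power(tx_power_w=1e-3, mean_path_gain=1e-6)
print("mean received power : %.2e W" % samples.mean())
print("10th percentile     : %.2e W" % np.percentile(samples, 10))
```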
33 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Geryes Aoun, Loic Lagadec
Abstract:
A Sensor Network (SN) is considered as an operation of two phases: (1) the observation/measuring, which means the accumulation of the gathered data at each sensor node; (2) transferring the collected data to some processing center (e.g. Fusion Servers) within the SN. Therefore, an underwater sensor network can be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components perfectly defines the Marine Observatory (MO) concept which provides information on ocean state, phenomena and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (Marine Cables, Smart Sensors, Data Fusion Server, etc). The logical and physical components that are used in these observatories perform some critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g. military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose constraints to be taken into consideration at design time. We illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach in relying on Enterprise Architecture (EA) framework that respects: multiple-views, perspectives of stakeholders, and domain specificity. On the other hand, it helps reducing both complexity and time spent in design activity, while preventing from design modeling errors during porting this activity in the MO domain. As conclusion, this work aims to demonstrate that we can improve the design activity of complex system based on the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step via models and simulation approach to consolidate the system design.
Keywords: Smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS.
32 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA sequences and are stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time consuming and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which create a meaningful and numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life data sets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with the state-of-the-art methods applied directly on data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, the DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: Metagenomics, phenotype prediction, deep learning, embeddings, multiple instance learning.
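The first two steps of the pipeline, building a k-mer vocabulary and turning each read into a vector, can be sketched in a few lines. This toy example uses gensim's Word2Vec and a simple mean of k-mer vectors; the real metagenome2vec embeddings and parameters are not specified here, so k, the vector size and the averaging step are illustrative assumptions.

```python
import numpy as np
from gensim.models import Word2Vec

def kmers(read, k=4):
    """Split a DNA read into overlapping k-mers (the 'words' of the vocabulary)."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# toy reads standing in for sequences parsed from fastq files
reads = ["ACGTACGGTTAC", "TTACGGACGTAC", "GGTTACACGTAC"]

# step (i): learn numerical embeddings for the k-mer vocabulary
model = Word2Vec(sentences=[kmers(r) for r in reads],
                 vector_size=16, window=3, min_count=1, sg=1)

# step (ii): a simple read embedding as the mean of its k-mer vectors
def read_embedding(read, k=4):
    return np.mean([model.wv[m] for m in kmers(read, k)], axis=0)

print(read_embedding(reads[0]).shape)   # (16,)
```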
31 An Exploratory Approach of the Latin American Migrants’ Urban Space Transformation of Antofagasta City, Chile
Authors: Carolina Arriagada, Yasna Contreras
Abstract:
Since mid-2000, the migratory flows of Latin American migrants to Chile have been increasing constantly. There are two reasons that would explain why Chile is presented as an attractive country for the migrants. On the one hand, traditional centres of migrants’ attraction such as the United States and Europe have begun to close their borders. On the other hand, Chile exhibits relative economic and political stability, which offers greater job opportunities and better standard of living when compared to the migrants’ origin country. At the same time, the neoliberal economic model of Chile, developed under an extractive production of the natural resources, has privatized the urban space. The market regulates the growth of the fragmented and segregated cities. Then, the vulnerable population, most of the time, is located in the periphery and in the marginal areas of the urban space. In this aspect, the migrants have begun to occupy those degraded and depressed areas of the city. The problem raised is that the increase of the social spatial segregation could be also attributed to the migrants´ occupation of the marginal urban places of the city. The aim of this investigation is to carry out an analysis of the migrants’ housing strategies, which are transforming the marginal areas of the city. The methodology focused on the urban experience of the migrants, through the observation of spatial practices, ways of living and networks configuration in order to transform the marginal territory. The techniques applied in this study are semi–structured interviews in-depth interviews. The study reveals that the migrants housing strategies for living in the marginal areas of the city are built on a paradox way. On the one hand, the migrants choose proximity to their place of origin, maintaining their identity and customs. On the other hand, the migrants choose proximity to their social and familiar places, generating sense of belonging. In conclusion, the migration as international displacements under a globalized economic model increasing socio spatial segregation in cities is evidenced, but the transformation of the marginal areas is a fundamental resource of their integration migratory process. The importance of this research is that it is everybody´s responsibility not only the right to live in a city without any discrimination but also to integrate the citizens within the social urban space of a city.
Keywords: Inhabit, migrations, social spatial segregation.
30 Development and Validation of Cylindrical Linear Oscillating Generator
Authors: Sungin Jeong
Abstract:
This paper presents a linear oscillating generator of cylindrical type for hybrid electric vehicle application. The focus of the study is the suggestion of the optimal model and the design rule of the cylindrical linear oscillating generator with permanent magnet in the back-iron translator. The cylindrical topology is achieved using equivalent magnetic circuit considering leakage elements as initial modeling. This topology with permanent magnet in the back-iron translator is described by number of phases and displacement of stroke. For more accurate analysis of an oscillating machine, it will be compared by moving just one-pole pitch forward and backward the thrust of single-phase system and three-phase system. Through the analysis and comparison, a single-phase system of cylindrical topology as the optimal topology is selected. Finally, the detailed design of the optimal topology takes the magnetic saturation effects into account by finite element analysis. Besides, the losses are examined to obtain more accurate results; copper loss in the conductors of machine windings, eddy-current loss of permanent magnet, and iron-loss of specific material of electrical steel. The considerations of thermal performances and mechanical robustness are essential, because they have an effect on the entire efficiency and the insulations of the machine due to the losses of the high temperature generated in each region of the generator. Besides electric machine with linear oscillating movement requires a support system that can resist dynamic forces and mechanical masses. As a result, the fatigue analysis of shaft is achieved by the kinetic equations. Also, the thermal characteristics are analyzed by the operating frequency in each region. The results of this study will give a very important design rule in the design of linear oscillating machines. It enables us to more accurate machine design and more accurate prediction of machine performances.
Keywords: Equivalent magnetic circuit, finite element analysis, hybrid electric vehicle, free piston engine, cylindrical linear oscillating generator
29 Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample; it carries out biological detection via a linked transducer and converts the biological response into an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique, and the effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate rGO. The morphological structures of rGO and GO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process. The parameters assessed include the layer thickness and the continuous environment. The results presented show high accuracy and repeatability, achieving low-cost production.
Keywords: Laser scribing, LightScribe DVD, graphene oxide, scanning electron microscopy.
28 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review
Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha
Abstract:
Online social media networks have long served as a primary arena for group conversations, gossip, text-based information sharing and distribution. The use of natural language processing techniques for text classification and unbiased decision making has not been far-fetched. Proper classification of these textual information in a given context has also been very difficult. As a result, a systematic review was conducted from previous literature on sentiment classification and AI-based techniques. The study was done in order to gain a better understanding of the process of designing and developing a robust and more accurate sentiment classifier that could correctly classify social media textual information of a given context between hate speech and inverted compliments with a high level of accuracy using the knowledge gain from the evaluation of different artificial intelligence techniques reviewed. The study evaluated over 250 articles from digital sources like ACM digital library, Google Scholar, and IEEE Xplore; and whittled down the number of research to 52 articles. Findings revealed that deep learning approaches such as Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Bidirectional Encoder Representations from Transformer (BERT), and Long Short-Term Memory (LSTM) outperformed various machine learning techniques in terms of performance accuracy. A large dataset is also required to develop a robust sentiment classifier. Results also revealed that data can be obtained from places like Twitter, movie reviews, Kaggle, Stanford Sentiment Treebank (SST), and SemEval Task4 based on the required domain. The hybrid deep learning techniques like CNN+LSTM, CNN+ Gated Recurrent Unit (GRU), CNN+BERT outperformed single deep learning techniques and machine learning techniques. Python programming language outperformed Java programming language in terms of development simplicity and AI-based library functionalities. Finally, the study recommended the findings obtained for building robust sentiment classifier in the future.
Keywords: Artificial Intelligence, Natural Language Processing, Sentiment Analysis, Social Network, Text.
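As an illustration of the hybrid architectures the review found to perform best, the sketch below stacks a 1-D convolution in front of an LSTM in Keras. The vocabulary size, sequence length and layer sizes are assumed values and the model is untrained; it only shows the CNN+LSTM wiring, not any specific classifier from the surveyed papers.

```python
# Hedged Keras sketch of a CNN+LSTM sentiment classifier (assumed hyperparameters).
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000   # assumed tokenizer vocabulary
SEQ_LEN = 100         # assumed tokens per post (after padding)

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),            # token embeddings
    layers.Conv1D(64, 5, activation="relu"),      # local n-gram features
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                              # longer-range dependencies
    layers.Dense(1, activation="sigmoid"),        # e.g. hate speech vs. compliment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```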
27 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs based on Machine Learning Algorithms
Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios
Abstract:
Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus spp. that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as population, pathogenicity and aflatoxinogenic capacity of the strains, topography, soil and climate parameters of the fig orchards are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC), and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for the contamination of aflatoxin on dried figs, based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), concentration of the exchangeable cations (Ca, Mg, K, Na), extractable P and trace of elements (B, Fe, Mn, Zn and Cu), by employing machine learning methods. In particular, our proposed method integrates three machine learning techniques i.e., dimensionality reduction on the original dataset (Principal Component Analysis), metric learning (Mahalanobis Metric for Clustering) and K-nearest Neighbors learning algorithm (KNN), into an enhanced model, with mean performance equal to 85% by terms of the Pearson Correlation Coefficient (PCC) between observed and predicted values.
Keywords: aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction
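A minimal version of the three-stage model described above can be sketched with scikit-learn: PCA for dimensionality reduction feeding a KNN classifier, with a learned Mahalanobis metric (e.g. from the metric-learn package) optionally replacing the default Euclidean distance. The feature matrix and labels below are random placeholders, not the fig-orchard data, and the component and neighbour counts are assumed.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# placeholder data: rows = orchards, columns = soil/topography features
# (pH, SP, EC, organic matter, sand/silt/clay, cations, trace elements, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))
y = rng.integers(0, 2, size=120)        # 0 = low risk, 1 = high aflatoxin risk

# dimensionality reduction -> KNN; a Mahalanobis metric learned with
# metric_learn.MMC could be inserted between the two steps.
model = make_pipeline(PCA(n_components=5), KNeighborsClassifier(n_neighbors=5))
print("CV accuracy: %.2f" % cross_val_score(model, X, y, cv=5).mean())
```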
26 Design and Modeling of Human Middle Ear for Harmonic Response Analysis
Authors: Shende Suraj Balu, A. B. Deoghare, K. M. Pandey
Abstract:
The human middle ear (ME) is a delicate and vital organ. It has a complex structure that performs various functions such as receiving sound pressure and producing vibrations of eardrum and propagating it to inner ear. It consists of Tympanic Membrane (TM), three auditory ossicles, various ligament structures and muscles. Incidents such as traumata, infections, ossification of ossicular structures and other pathologies may damage the ME organs. The conditions can be surgically treated by employing prosthesis. However, the suitability of the prosthesis needs to be examined in advance prior to the surgery. Few decades ago, this issue was addressed and analyzed by developing an equivalent representation either in the form of spring mass system, electrical system using R-L-C circuit or developing an approximated CAD model. But, nowadays a three-dimensional ME model can be constructed using micro X-Ray Computed Tomography (μCT) scan data. Moreover, the concern about patient specific integrity pertaining to the disease can be examined well in advance. The current research work emphasizes to develop the ME model from the stacks of μCT images which are used as input file to MIMICS Research 19.0 (Materialise Interactive Medical Image Control System) software. A stack of CT images is converted into geometrical surface model to build accurate morphology of ME. The work is further extended to understand the dynamic behaviour of Harmonic response of the stapes footplate and umbo for different sound pressure levels applied at lateral side of eardrum using finite element approach. The pathological condition Cholesteatoma of ME is investigated to obtain peak to peak displacement of stapes footplate and umbo. Apart from this condition, other pathologies, mainly, changes in the stiffness of stapedial ligament, TM thickness and ossicular chain separation and fixation are also explored. The developed model of ME for pathologies is validated by comparing the results available in the literatures and also with the results of a normal ME to calculate the percentage loss in hearing capability.
Keywords: Computed tomography, human middle ear, harmonic response, pathologies, tympanic membrane.
25 Engineering Topology of Construction Ecology for Dynamic Integration of Sustainability Outcomes to Functions in Urban Environments: Spatial Modeling
Authors: Moustafa Osman Mohammed
Abstract:
Integration sustainability outcomes give attention to construction ecology in the design review of urban environments to comply with Earth’s System that is composed of integral parts of the (i.e., physical, chemical and biological components). Naturally, exchange patterns of industrial ecology have consistent and periodic cycles to preserve energy flows and materials in Earth’s System. When engineering topology is affecting internal and external processes in system networks, it postulated the valence of the first-level spatial outcome (i.e., project compatibility success). These instrumentalities are dependent on relating the second-level outcome (i.e., participant security satisfaction). The construction ecology-based topology (i.e., as feedback energy system) flows from biotic and abiotic resources in the entire Earth’s ecosystems. These spatial outcomes are providing an innovation, as entails a wide range of interactions to state, regulate and feedback “topology” to flow as “interdisciplinary equilibrium” of ecosystems. The interrelation dynamics of ecosystems are performing a process in a certain location within an appropriate time for characterizing their unique structure in “equilibrium patterns”, such as biosphere and collecting a composite structure of many distributed feedback flows. These interdisciplinary systems regulate their dynamics within complex structures. These dynamic mechanisms of the ecosystem regulate physical and chemical properties to enable a gradual and prolonged incremental pattern to develop a stable structure. The engineering topology of construction ecology for integration sustainability outcomes offers an interesting tool for ecologists and engineers in the simulation paradigm as an initial form of development structure within compatible computer software. This approach argues from ecology, resource savings, static load design, financial other pragmatic reasons, while an artistic/architectural perspective, these are not decisive. The paper described an attempt to unify analytic and analogical spatial modeling in developing urban environments as a relational setting, using optimization software and applied as an example of integrated industrial ecology where the construction process is based on a topology optimization approach.
Keywords: Construction ecology, industrial ecology, urban topology, environmental planning.
24 A Look at the Gezi Park Protests through the Lens of Media
Authors: Süleyman Hakan Yılmaz, Yasemin Gülşen Yılmaz
Abstract:
The Gezi Park protests of 2013 have significantly changed the Turkish agenda and its effects have been felt historically. The protests, which rapidly spread throughout the country, were triggered by the proposal to recreate the Ottoman Army Barracks to function as a shopping mall on Gezi Park located in Istanbul’s Taksim neighbourhood despite the oppositions of several NGOs and when trees were cut in the park for this purpose. Once the news that the construction vehicles entered the park on May 27 spread on social media, activists moved into the park to stop the demolition, against whom the police used disproportioned force. With this police intervention and the then prime-minister Tayyip Erdoğan's insistent statements about the construction plans, the protests turned into anti- government demonstrations, which then spread to the rest of the country, mainly in big cities like Ankara and Izmir. According to the Ministry of Internal Affairs’ June 23rd reports, 2.5 million people joined the demonstrations in 79 provinces, that is all of them, except for the provinces of Bayburt and Bingöl, while even more people shared their opinions via social networks. As a result of these events, 8 civilians and 2 security personnel lost their lives, namely police chief Mustafa Sarı, police officer Ahmet Küçükdağ, citizens Mehmet Ayvalıtaş, Abdullah Cömert, Ethem Sarısülük, Ali İsmail Korkmaz, Ahmet Atakan, Berkin Elvan, Burak Can Karamanoğlu, Mehmet İstif, and Elif Çermik, and 8163 more were injured. Besides being a turning point in Turkish history, the Gezi Park protests also had broad repercussions in both in Turkish and in global media, which focused on Turkey throughout the events. Our study conducts content analysis of three Turkish reporting newspapers with varying ideological standpoints, Hürriyet, Cumhuriyet ve Yeni Şafak, in order to reveal their basic approach to news casting in context of the Gezi Park protests. Headlines, news segments, and news content relating to the Gezi protests were treated and analysed for this purpose. The aim of this study is to understand the social effects of the Gezi Park protests through media samples with varying political attitudes towards news casting.
Keywords: Gezi Park, media, news casting, tree.
23 Deep Injection Wells for Flood Prevention and Groundwater Management
Authors: Mohammad R. Jafari, Francois G. Bernardeau
Abstract:
With its arid climate, Qatar experiences low annual rainfall, intense storms, and high evaporation rates. However, the fast-paced rate of infrastructure development in the capital city of Doha has led to recurring instances of surface water flooding as well as rising groundwater levels. The Public Work Authority (PWA/ASHGHAL) has implemented an approach to collect the flood water and discharge it into a) positive gravity systems; b) Emergency Flooding Areas (EFA), through evaporation, infiltration or off-site storage using tankers; and c) deep injection wells. As part of the flood prevention scheme, 21 deep injection wells have been constructed to discharge the collected surface water and groundwater in Doha city. These injection wells function as an alternative in localities that do not possess either positive gravity systems or downstream networks that can accommodate additional loads. The wells are 400 m deep and are constructed in complex karstic subsurface conditions with large cavities. The injection well system discharges the collected groundwater and storm surface runoff into the permeable Umm Er Radhuma Formation, an aquifer present throughout the Persian Gulf Region. The Umm Er Radhuma Formation contains saline water that is not used for water supply. The injection zone is separated by an impervious gypsum formation which acts as a barrier between the upper and lower aquifers. State-of-the-art drilling, grouting, and geophysical techniques have been implemented in the construction of the wells to ensure that the shallow aquifer is not contaminated or impacted by the injected water. Injection and pumping tests were performed to evaluate injection well functionality (injectability). The results of these tests indicated that the majority of the wells can accept an injection rate of 200 to 300 m³/h (56 to 83 l/s) under gravity, with an average value of 250 m³/h (70 l/s) compared to the design value of 50 l/s. This paper presents the design and construction process and issues associated with these injection wells, the injection/pumping tests performed to determine the capacity and effectiveness of the injection wells, the detailed design of the collection system and the conveyance system into the injection wells, and the operation and maintenance process. The system is now complete and in operation; construction of injection wells is therefore an effective option for flood control.
Keywords: Deep injection well, wellhead assembly system, emergency flood area, flood prevention scheme, geophysical tests, pumping and injection tests, Qatar geology.
22 Adaptive WiFi Fingerprinting for Location Approximation
Authors: Mohd Fikri Azli bin Abdullah, Khairul Anwar bin Kamarul Hatta, Esther Jeganathan
Abstract:
WiFi has become an essential technology that is widely used nowadays. It is popular due to its convenience for use with mobile devices, especially for Internet users worldwide who rely on WiFi connections. There are many location-based services available nowadays which use Wireless Fidelity (WiFi) signal fingerprinting; a common example that is gaining popularity is Foursquare. In this work, the WiFi signal is used to estimate the user's or client's location. Similar to GPS, the fingerprinting method needs a floor plan to increase the accuracy of location estimation. Still, the inconsistency of the WiFi signal makes the estimation differ at different time intervals. Therefore, an adaptive method is needed to obtain the most accurate signal at all times. WiFi signals are heavily distorted by external factors such as physical objects, radio frequency interference, electrical interference, and environmental factors, to name a few. Due to these factors, this work reduces the signal noise and estimates the location using the Nearest Neighbour method based on past signal activity, increasing the accuracy to more than 80%. The repository further increases the accuracy by using Artificial Neural Network (ANN) pattern matching, and acts as the server supporting the client-side application's decisions. Numerous previous works have adopted methods of collecting signal strengths into a repository over the years, but these were mostly static. In this work, the proposed solutions for how the adaptive method matches the received signal to the data in the repository are highlighted. With the said approach, location estimation can be done more accurately. Adaptive updating allows the latest location fingerprint to be stored in the repository; furthermore, any redundant location fingerprints are removed and only the updated version of the fingerprint is kept. How the user's location can be estimated is explained further in the proposed solution section. After studying previous works, it was found that the Artificial Neural Network is the most feasible method to deploy for updating the repository and making it adaptive; its function is to perform pattern matching of the WiFi signal against the existing data in the repository.
Keywords: Adaptive Repository, Artificial Neural Network, Location Estimation, Nearest Neighbour Euclidean Distance, WiFi RSSI Fingerprinting.
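The nearest-neighbour step of the fingerprinting method can be shown in a few lines: the observed RSSI vector is compared against stored fingerprints by Euclidean distance and the closest reference point wins. The access-point values below are hypothetical; the adaptive/ANN repository update described in the abstract is not shown.

```python
import numpy as np

# fingerprint repository: reference point -> mean RSSI (dBm) from three APs
fingerprints = {
    "room_101": np.array([-45.0, -60.0, -72.0]),
    "room_102": np.array([-58.0, -47.0, -70.0]),
    "corridor": np.array([-65.0, -63.0, -50.0]),
}

def estimate_location(observed_rssi):
    """Return the reference point whose stored fingerprint is closest
    (Euclidean distance) to the observed RSSI vector."""
    observed = np.asarray(observed_rssi, dtype=float)
    return min(fingerprints,
               key=lambda loc: np.linalg.norm(fingerprints[loc] - observed))

print(estimate_location([-50.0, -59.0, -71.0]))   # -> 'room_101'
```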
21 A Geographical Spatial Analysis on the Benefits of Using Wind Energy in Kuwait
Authors: Obaid AlOtaibi, Salman Hussain
Abstract:
Wind energy is associated with many geographical factors, including wind speed, climate change, surface topography and environmental impacts, and with several economic factors, most notably the advancement of wind technology and energy prices. It is the fastest-growing and least expensive method for generating electricity. Wind energy generation is directly related to the characteristics of spatial wind. Therefore, the feasibility study for a wind energy conversion system is based on the value of the energy obtained relative to the initial investment and the cost of operation and maintenance. In Kuwait, wind energy is an appropriate choice as a source of energy generation. It can be used for groundwater extraction in agricultural areas such as Al-Abdali in the north and Al-Wafra in the south, in fresh and brackish groundwater fields, or in remote and isolated locations such as border areas and projects far from conventional electricity services, in order to take advantage of alternative energy, reduce pollutants, and reduce energy production costs. The study covers the State of Kuwait with the exception of the metropolitan area. Climatic data were obtained from the readings of eight distributed monitoring stations affiliated with the Kuwait Institute for Scientific Research (KISR). The data were used to assess the daily, monthly, quarterly, and annual available wind energy accessible for utilization. The researchers applied a suitability model, a spatial analysis model that compares more than one location based on grading weights in order to choose the most suitable one, using the ArcGIS program. The study criteria are: average annual wind speed, land use, topography, distance from the main road networks, and urban areas. According to these criteria, four proposed locations for establishing wind farm projects were selected based on the weights of their degree of suitability (excellent, good, average, and poor). The areas representing the most suitable locations, with an excellent rank (4), amount to 8% of Kuwait's area, distributed as follows: Al-Shqaya, Al-Dabdeba and Al-Salmi (5.22%), Al-Abdali (1.22%), Umm al-Hayman (0.70%), and North Wafra and Al-Shaqeeq (0.86%). The study recommends that decision-makers consider proposed location No. 1 (Al-Shqaya, Al-Dabdaba, and Al-Salmi) as the most suitable location for future development of wind farms in Kuwait, as this location is economically feasible.
Keywords: Kuwait, renewable energy, spatial analysis, wind energy.
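The suitability model is essentially a weighted overlay of reclassified criterion rasters. The tiny numpy sketch below illustrates the arithmetic on 3x3 grids; the criterion weights and cell scores are assumed for illustration and are not the values used in the Kuwait study.

```python
import numpy as np

# reclassified criterion rasters (1 = poor ... 4 = excellent), toy 3x3 grids
wind_speed = np.array([[4, 3, 2], [3, 4, 1], [2, 2, 4]])
land_use   = np.array([[4, 4, 1], [3, 3, 2], [1, 2, 4]])
topography = np.array([[3, 4, 2], [4, 3, 1], [2, 1, 4]])
road_dist  = np.array([[4, 2, 3], [3, 4, 2], [1, 3, 4]])

# assumed criterion weights (must sum to 1)
w = {"wind": 0.40, "land": 0.25, "topo": 0.20, "road": 0.15}

suitability = (w["wind"] * wind_speed + w["land"] * land_use +
               w["topo"] * topography + w["road"] * road_dist)

print(np.round(suitability, 2))                     # weighted overlay score per cell
print("best cell:", np.unravel_index(suitability.argmax(), suitability.shape))
```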
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 900
20 Influence of Crystal Orientation on Electromechanical Behaviors of Relaxor Ferroelectric P(VDF-TrFE-CTFE) Terpolymer
Authors: Qing Liu, Jean-Fabien Capsal, Claude Richard
Abstract:
In this contribution, the authors investigate the influence of crystal lamellae orientation on the electromechanical behavior of relaxor ferroelectric poly(vinylidene fluoride-trifluoroethylene-chlorotrifluoroethylene) (P(VDF-TrFE-CTFE)) films by controlling the polymer microstructure, aiming to map the full structure-property relationship. To define their crystal orientation, terpolymer films were fabricated by solution-casting, stretching, and hot-pressing. Differential scanning calorimetry, impedance analysis, and tensile testing were employed to characterize the crystallographic parameters, dielectric permittivity, and elastic Young's modulus, respectively. In addition, the large electrically induced out-of-plane electrostrictive strain was measured in cantilever beam mode. As-cast pristine films exhibited a surprisingly high electrostrictive strain of 0.1774% owing to a considerably small elastic Young's modulus, despite a relatively low dielectric permittivity; this combination yielded a large mechanical elastic energy density. In contrast, owing to a twofold increase in elastic Young's modulus and a less than 50% increase in dielectric constant, the fully crystallized film showed weak electrostrictive behavior and a low mechanical energy density. Subjected to mechanical stretching, Film C exhibited a higher dielectric constant and outperformed Film B in electrostrictive strain because of the edge-on crystal lamellae orientation induced by uniaxial stretching. Hot-pressed films were compared in terms of cooling rate. A rather large electrostrictive strain of 0.2788% was observed for hot-pressed Film D under quenching, although its dielectric permittivity was equivalent to that of the pristine as-cast Film A, giving the highest mechanical elastic energy density of 359.5 J/m3. In the hot-press cooling process, the dielectric permittivity of Film E reached 48.8, concomitant with an approximately 100% increase in Young's modulus, and films with intermediate mechanical energy density were obtained.
Keywords: Crystal orientation, electrostrictive strain, mechanical energy density, permittivity, relaxor ferroelectric.
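A short sketch of the mechanical elastic energy density figure of merit commonly used for electrostrictive polymers, u = (1/2)·Y·S², is given below in Python. The Young's modulus value is an assumption chosen only to illustrate the calculation (with roughly 92.5 MPa and the reported 0.2788% strain the formula happens to return about 359.5 J/m3); it is not a value stated in the abstract.

def elastic_energy_density(youngs_modulus_pa, strain):
    """Mechanical elastic energy density u = 0.5 * Y * S^2 (J/m^3), a common figure of merit."""
    return 0.5 * youngs_modulus_pa * strain ** 2

# Illustrative numbers: strain of 0.2788% with an assumed modulus of ~92.5 MPa
print(f"{elastic_energy_density(92.5e6, 0.2788e-2):.1f} J/m^3")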
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1664
19 Green Synthesis of Nanosilver-Loaded Hydrogel Nanocomposites for Antibacterial Application
Authors: D. Berdous, H. Ferfera-Harrar
Abstract:
Superabsorbent polymers (SAPs), or hydrogels with a three-dimensional hydrophilic network structure, are high-performance water-absorption and retention materials. The in situ synthesis of metal nanoparticles within a polymeric network as antibacterial agents for bio-applications is an approach that takes advantage of the free space existing within the network, which not only acts as a template for the nucleation of nanoparticles but also provides long-term stability and reduces their toxicity by delaying their oxidation and release. In this work, SAP/nanosilver nanocomposites were successfully developed by a unique green process at room temperature, which involves the in situ formation of silver nanoparticles (AgNPs) within hydrogels used as a template. The aim of this study is to investigate whether these AgNP-loaded hydrogels are potential candidates for antimicrobial applications. First, the superabsorbents were prepared through radical copolymerization via grafting and crosslinking of acrylamide (AAm) onto a chitosan backbone (Cs), using potassium persulfate as the initiator and N,N'-methylenebisacrylamide as the crosslinker. They were then hydrolyzed to achieve superabsorbents with ampholytic properties and the highest swelling capacity. Finally, the AgNPs were biosynthesized and entrapped into the hydrogels through a simple, eco-friendly, and cost-effective method using aqueous silver nitrate as the silver precursor and Curcuma longa tuber-powder extract as both reducing and stabilizing agent. The formed superabsorbent nanocomposites (Cs-g-PAAm)/AgNPs were characterized by X-ray Diffraction (XRD), UV-visible Spectroscopy, Attenuated Total Reflectance Fourier Transform Infrared Spectroscopy (ATR-FTIR), Inductively Coupled Plasma (ICP), and Thermogravimetric Analysis (TGA). Microscopic surface structure analysis by Transmission Electron Microscopy (TEM) showed spherical AgNPs with sizes in the range of 3-15 nm. The extent of nanosilver loading decreased with increasing Cs content in the network. The silver-loaded hydrogel was thermally more stable than its unloaded dry hydrogel counterpart. The swelling equilibrium degree (Q) and centrifuge retention capacity (CRC) in deionized water were affected by both the Cs content and the entrapped AgNPs. The nanosilver-embedded hydrogels exhibited antibacterial activity against Escherichia coli and Staphylococcus aureus bacteria. These comprehensive results suggest that the elaborated AgNP-loaded nanomaterials could be used to produce valuable wound dressings.
Keywords: Antibacterial activity, nanocomposites, silver nanoparticles, superabsorbent hydrogel.
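A minimal Python sketch of the gravimetric definitions commonly used for the swelling equilibrium degree (Q) and centrifuge retention capacity (CRC) is shown below; the sample masses are illustrative, and the exact protocol is an assumption rather than the authors' measured procedure.

def swelling_degree(dry_mass_g, swollen_mass_g):
    """Equilibrium swelling degree Q, in grams of absorbed water per gram of dry hydrogel."""
    return (swollen_mass_g - dry_mass_g) / dry_mass_g

def centrifuge_retention_capacity(dry_mass_g, mass_after_centrifuge_g):
    """CRC: water retained per gram of dry gel after centrifugation of the swollen sample."""
    return (mass_after_centrifuge_g - dry_mass_g) / dry_mass_g

# Illustrative sample: 0.10 g dry gel swelling to 25.0 g, retaining 18.0 g after centrifugation
print(swelling_degree(0.10, 25.0))                 # ~249 g/g
print(centrifuge_retention_capacity(0.10, 18.0))   # ~179 g/g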
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1705
18 Gassing Tendency of Natural Ester Based Transformer Oils: Low Ethane Generation in Stray Gassing Behavior
Authors: Banti Sidhiwala, T. C. S. M. Gupta
Abstract:
Mineral oils of naphthenic and paraffinic types are used as insulating liquids in transformer applications to protect the solid insulation from moisture and to ensure effective heat transfer and cooling. The performance of these oils has been proven in the field over many decades, and transformer condition has been successfully monitored and diagnosed through oil properties and dissolved gas analysis methods. Different types of gases can represent various types of faults that may occur due to faulty components or unfavorable operating conditions. A large database has been generated in the industry for dissolved gas analysis in mineral-oil-based transformer oils, and various models have been developed to predict faults and analyze data. Additionally, oil specifications and standards have been updated to include stray gassing limits that cover low-temperature faults. This modification has become an effective preventative maintenance tool that can help greatly in understanding the reasons for breakdowns of electrical insulating materials and related components. Natural esters have seen a rise in popularity in recent years due to their "green" credentials. Their benefits include biodegradability, a higher fire point, improved transformer load capability, and longer solid insulation life compared with mineral oils. However, stray gassing tests show that hydrogen and hydrocarbons such as methane (CH4) and ethane (C2H6) reach values much higher than the limits of mineral oil standards. Although standards for these types of esters are yet to evolve, the high hydrocarbon gas values of products available in the market are of concern, as they might be interpreted as faults in transformer operation. The current paper focuses on developing a class of natural esters with low levels of stray gassing, as measured by American Society for Testing and Materials (ASTM) and International Electrotechnical Commission (IEC) methods, with values much lower than those of the natural ester-based products reported in the literature. The experimental results of these products are explained.
Keywords: Biodegradability, fire point, dissolved gas analysis, natural ester, stray gassing.
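A small Python sketch of how dissolved-gas results from a stray gassing test might be screened against chosen limits is shown below; the gas concentrations and thresholds are placeholders for illustration, not limits taken from the ASTM or IEC standards.

# Dissolved gas concentrations from a stray gassing test, in ppm (illustrative values)
sample = {"H2": 120.0, "CH4": 45.0, "C2H6": 300.0}

# User-supplied screening limits in ppm (placeholders, not the standards' actual limits)
limits = {"H2": 100.0, "CH4": 50.0, "C2H6": 150.0}

def screen_stray_gassing(gases, screening_limits):
    """Return the gases whose measured concentration exceeds its screening limit."""
    return {g: c for g, c in gases.items() if c > screening_limits.get(g, float("inf"))}

exceeded = screen_stray_gassing(sample, limits)
print(exceeded or "All gases within the chosen limits")   # -> {'H2': 120.0, 'C2H6': 300.0}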
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 193
17 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential
Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen
Abstract:
Brain information transmission in the neuronal network occurs in the form of electrical signals. The neural network transmits information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of four different animals were analyzed with a one-dimensional cable model with N = 6 identical dendritic trees and M = 3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law in an infinitely long cylinder under the usual core conductor assumptions, where the membrane potential is conserved in the core conductor at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of equivalent cylinders with electrotonic length (L) ranging from 0.1 to 1.5, for four different dendritic branches: the input branch (BI), the sister branch (BS), and two cousin branches (BC-1 and BC-2). Thermodynamic analysis of the data from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twice as much exergy as the cat models, and the exergy loss and entropy generation of the squid were nearly tenfold those of the guinea pig vagal motoneuron model. The thermodynamic analysis shows that the energy dissipated in the dendritic trees is directly proportional to the electrotonic length, exergy loss, and entropy generation. Entropy generation and exergy loss show variability not only between vertebrates and invertebrates but also within the same class. Concurrently, the single-action-potential Na+ ion load, metabolic energy utilization, and their thermodynamic aspects are considered for the squid giant axon and a mammalian motoneuron model. Energy demand is supplied to the neurons in the form of adenosine triphosphate (ATP). Exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction, and entropy generation showed differences in each model depending on the variations in ion transport along the channels.
Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance.
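A brief Python sketch of the two geometric relations underlying the cable model used here is given below: Rall's 3/2 power rule at a branch point and the electrotonic length L = l/lambda of an equivalent cylinder. The membrane and axial resistivities and the diameters are illustrative assumptions, not the values used for the four animal models.

import math

def obeys_rall_rule(parent_d_um, child_ds_um, tol=1e-6):
    """Check the 3/2 power law: d_parent^(3/2) equals the sum of d_child^(3/2) at a branch point."""
    return abs(parent_d_um ** 1.5 - sum(d ** 1.5 for d in child_ds_um)) < tol

def electrotonic_length(length_cm, diameter_cm, Rm_ohm_cm2, Ri_ohm_cm):
    """L = l / lambda, with the length constant lambda = sqrt((Rm/Ri) * d/4)."""
    lam = math.sqrt((Rm_ohm_cm2 / Ri_ohm_cm) * diameter_cm / 4.0)
    return length_cm / lam

# Symmetric bifurcation: each daughter diameter equals the parent diameter times 2^(-2/3)
print(obeys_rall_rule(2.0, [2.0 * 2 ** (-2 / 3)] * 2))    # True
print(electrotonic_length(0.05, 2.0e-4, 2000.0, 70.0))    # illustrative L of about 1.3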
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1014
16 Experimental Analysis of the Influence of Water Mass Flow Rate on the Performance of a CO2 Direct-Expansion Solar Assisted Heat Pump
Authors: Sabrina N. Rabelo, Tiago de F. Paulino, Willian M. Duarte, Samer Sawalha, Luiz Machado
Abstract:
Energy use is one of the main indicators of the economic and social development of a country, reflecting directly on the quality of life of the population. The expansion of energy use, together with the depletion of fossil resources and the poor efficiency of energy systems, has led many countries in recent years to invest in renewable energy sources. In this context, the solar-assisted heat pump has become very important in the energy industry, since it can transfer heat energy from the sun to water or another absorbing source. The direct-expansion solar-assisted heat pump (DX-SAHP) water heater operates by receiving solar energy incident on a solar collector, which serves as the evaporator in a refrigeration cycle, while the energy rejected by the condenser is used for water heating. In this paper, a DX-SAHP using carbon dioxide as the refrigerant (R744) was assembled, and the influence of varying the water mass flow rate on the system was analyzed. Parameters such as the high pressure, water outlet temperature, gas cooler outlet temperature, evaporator temperature, and coefficient of performance were studied. The main components used to assemble the heat pump were a reciprocating compressor, a gas cooler consisting of a counter-current concentric-tube heat exchanger, a needle valve, and an evaporator consisting of a copper bare flat-plate solar collector designed to capture direct and diffuse radiation. Routines were developed in LabVIEW to collect data and in CoolProp, through MATLAB, to calculate the thermodynamic properties. The measured coefficient of performance ranged from 3.2 to 5.34. It was noticed that, at higher water mass flow rates, the water outlet temperature decreased and, consequently, the coefficient of performance of the system increased, since the heat transfer in the gas cooler was higher. In addition, the high pressure of the system and the CO2 gas cooler outlet temperature decreased. The heat pump using carbon dioxide as a refrigerant, especially when operating with solar radiation, proved to be an efficient renewable-energy system for heating residential water to temperatures between 40 °C and 80 °C compared with electrical heaters.
Keywords: Water mass flow rate, R-744, heat pump, solar evaporator, water heater.
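A minimal idealised sketch of how the thermodynamic states and heating COP of such a transcritical CO2 cycle can be evaluated with CoolProp is shown below (here called directly from Python rather than through MATLAB); the state-point temperatures, the gas cooler pressure, and the isentropic-compression assumption are illustrative choices, not measurements from the test rig.

from CoolProp.CoolProp import PropsSI

T_evap = 10 + 273.15      # evaporator (collector) temperature, K  (assumed)
p_high = 9e6              # gas cooler pressure, Pa                (assumed)
T_gc_out = 35 + 273.15    # CO2 temperature leaving the gas cooler (assumed)

# State 1: saturated vapour leaving the evaporator
h1 = PropsSI('H', 'T', T_evap, 'Q', 1, 'CO2')
s1 = PropsSI('S', 'T', T_evap, 'Q', 1, 'CO2')

# State 2: idealised isentropic compression to the gas cooler pressure
h2 = PropsSI('H', 'P', p_high, 'S', s1, 'CO2')

# State 3: CO2 leaving the gas cooler
h3 = PropsSI('H', 'P', p_high, 'T', T_gc_out, 'CO2')

# Heating COP: heat rejected in the gas cooler divided by compressor work
cop_heating = (h2 - h3) / (h2 - h1)
print(f"Idealised heating COP: {cop_heating:.2f}")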
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1112
15 Long-Term Economic-Ecological Assessment of Optimal Local Heat-Generating Technologies for the German Unrefurbished Residential Building Stock on the Quarter Level
Authors: M. A. Spielmann, L. Schebek
Abstract:
In order to reach the long-term national climate goals of the German government for the building sector, substantial energetic measures have to be executed. Historically, those measures were primarily efficiency measures at the buildings' shells. Advanced technologies for the on-site generation of heat (or other types of energy) are often not feasible at the small spatial scale of a single building. Therefore, the present approach uses the spatially larger dimension of a quarter. The main focus of the present paper is the long-term economic-ecological assessment of available decentralized heat-generating technologies (CHP plants and electrical heat pumps) at the quarter level for the German unrefurbished residential building stock. Three distinct terms have to be described methodologically: i) the quarter approach, ii) the economic assessment, and iii) the ecological assessment. The quarter approach is used to enable synergies and scaling effects beyond a single building. For the present study, generic quarters are used that are differentiated according to significant parameters concerning their heat demand; the core differentiation of those quarters is made by the construction period of the buildings. The economic assessment, as the second crucial element, is executed with the following structure: full costs are quantified for each technology combination and quarter. The investment costs are analyzed on an annual basis and are modeled with debt financing; annuity loans are assumed. Consequently, for each generic quarter, an optimal technology combination for decentralized heat generation is provided for each year within the temporal boundaries (2016-2050). The ecological assessment elaborates a life cycle assessment for each technology combination and each quarter; the impact category measured is GWP 100. The technology combinations for heat production can therefore be compared against each other concerning their long-term climatic impacts. Core results of the approach can be differentiated into an economic and an ecological dimension. With annual resolution, the investment and running costs of the different energetic technology combinations are quantified. For each quarter, an optimal technology combination for local heat supply and/or energetic refurbishment of the buildings within the quarter is provided. Coherently with the economic assessment, the climatic impacts of the technology combinations are quantified and compared against each other.
Keywords: Building sector, heat, LCA, quarter level, systemic approach.
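A short Python sketch of the annuity-loan annualisation assumed for the investment costs is given below; the interest rate, lifetime, and investment sum are placeholder values, not figures from the study.

def annuity_payment(investment_eur, interest_rate, years):
    """Constant annual payment of an annuity loan: A = I * i / (1 - (1 + i)**-n)."""
    if interest_rate == 0:
        return investment_eur / years
    return investment_eur * interest_rate / (1 - (1 + interest_rate) ** -years)

# Placeholder example: 500,000 EUR invested in a quarter-level CHP plant, 3% interest, 20 years
print(f"{annuity_payment(500_000, 0.03, 20):,.0f} EUR per year")   # ~33,608 EUR per year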
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 945
14 Recycling of Sintered NdFeB Magnet Waste via Oxidative Roasting and Selective Leaching
Authors: W. Kritsarikan, T. Patcharawit, T. Yingnakorn, S. Khumkoa
Abstract:
Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in various applications such as automotive, electrical, and medical devices. Because significant amounts of rare earth metals will be subject to shortages in the future, domestic NdFeB magnet waste recycling should be developed in order to reduce social and environmental impacts and move towards a circular economy. Each type of waste has different characteristics and compositions, which directly affect recycling efficiency as well as the types and purity of the recyclable products. This research therefore focused on the recycling of manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production, containing 23.6% Nd, 60.3% Fe, and 0.261% B, in order to recover high-purity neodymium oxide (Nd2O3) using a hybrid metallurgical process based on oxidative roasting and selective leaching. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550–800 °C to enable selective leaching of neodymium in the subsequent leaching step using 2.5 M H2SO4 over 24 h. The leachate was then dried and roasted at 700–800 °C prior to precipitation with oxalic acid and calcination to obtain Nd2O3 as the recycling product. According to XRD analyses, increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe2O3) as the main phase, with a smaller amount of magnetite (Fe3O4). Peaks of Nd2O3 were also observed in a lesser amount. Furthermore, neodymium iron oxide (NdFeO3) was present, and its XRD peaks were more pronounced at higher oxidative roasting temperatures. After acid leaching and drying, mainly iron sulfate and neodymium sulfate were obtained. After the roasting step prior to water leaching, iron sulfate was converted to Fe2O3 as the main compound, while neodymium sulfate remained in the mixture; however, a small amount of Fe3O4 was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe2O3 to Nd2(SO4)3 ratio, indicating a more effective roasting temperature. The iron oxides were subsequently removed by water leaching and filtration, while the solution contained mainly neodymium sulfate. Therefore, an oxidative roasting temperature not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the subsequent precipitation and calcination steps to finally achieve Nd2O3.
Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching.
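A simple Python mass-balance sketch of the theoretical Nd2O3 that could be recovered from the reported waste composition is shown below, assuming every gram of Nd ends up as Nd2O3; this is an upper bound for illustration, not the authors' measured recovery.

M_ND = 144.242                   # molar mass of Nd, g/mol
M_O = 15.999                     # molar mass of O, g/mol
M_ND2O3 = 2 * M_ND + 3 * M_O     # ~336.48 g/mol

def max_nd2o3_per_kg_waste(nd_mass_fraction):
    """Theoretical Nd2O3 yield (g) per kg of waste if all Nd is converted to Nd2O3."""
    nd_grams = 1000.0 * nd_mass_fraction
    return nd_grams * M_ND2O3 / (2 * M_ND)

print(f"{max_nd2o3_per_kg_waste(0.236):.0f} g Nd2O3 per kg of sintered NdFeB waste")  # ~275 g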
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 703