Search results for: back propagation neural network model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21818

18218 Mobility Management via Software Defined Networks (SDN) in Vehicular Ad Hoc Networks (VANETs)

Authors: Bilal Haider, Farhan Aadil

Abstract:

A Vehicular Ad hoc Network (VANET) provides various services to end-users traveling on the road at high speeds. However, this high-speed mobility of mobile nodes can cause frequent service disruptions. Various mobility management protocols exist for managing node mobility, but due to their centralized nature, they tend to suffer in the VANET environment. In this research, we propose a distributed mobility management protocol using software-defined networks (SDN) for VANETs. Instead of relying on a centralized mobility anchor, the mobility functionality is distributed across multiple infrastructural nodes. The protocol is based on the classical Proxy Mobile IP version 6 (PMIPv6). Simulation results show that this work improves network performance in terms of node throughput, delay, and packet loss.

Keywords: SDN, VANET, mobility management, optimization

Procedia PDF Downloads 170
18217 Multi-Agent System Based Distributed Voltage Control in Distribution Systems

Authors: A. Arshad, M. Lehtonen, M. Humayun

Abstract:

With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology to tackle the voltage control problem in a distributed manner with the least latency. This paper proposes a multi-agent-based distributed voltage control. The method uses a flat architecture of agents; the agents involved in the control procedure are the On-Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining the voltage within statutory limits, as close as possible to the nominal value. The total loss cost is the sum of the network losses cost, the DG curtailment costs, and the voltage damage cost (based on a penalty function). The total cost is iteratively calculated for progressively stricter limits by plotting the voltage damage cost and the losses cost against a varying voltage limit band; the method provides the optimal limits, closest to the nominal value, with the minimum total loss cost. To achieve the voltage control objective, the network is divided into multiple control regions, each downstream from a controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. At each time step, a token is generated by the OLTCA and transferred from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA contacts the respective controlling devices depending on the vicinity of the violated node. If the violated node does not lie in the vicinity of a controller, or if the controlling capabilities of all the downstream control devices are at their limits, the OLTC is used as a last resort.
For a realistic study, simulations are performed for a typical Finnish residential medium-voltage distribution system using Matlab®. These simulations are executed for two cases: simple Distributed Voltage Control (DVC), and DVC with optimized loss cost (DVC + Penalty Function). A sensitivity analysis is performed based on DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to the DG penetration, while in case 2 there is a significant reduction in total loss. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration the loss reduction is not very significant. Another observation is that the new, stricter limits calculated by cost optimization move towards the statutory limits of ±10% of the nominal value as DG penetration increases: for 25%, 45% and 65% penetration, the calculated limits are ±5%, ±6.25% and ±8.75%, respectively. The observed results show that the voltage control algorithm proposed in case 1 deals with the voltage control problem instantly but with higher losses, whereas case 2 gradually reduces the network losses over time through the proposed iterative loss-cost optimization by the OLTCA.
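The token-passing detection and controller selection described above can be sketched in a few lines. The data structures and names below (per-unit node voltages, controller regions, headroom, and distances) are illustrative assumptions, not the paper's implementation.

```python
NOMINAL = 1.0  # nominal voltage in per-unit

def find_violation(voltages, band=0.05):
    """Token pass: visit nodes in order and return the first node whose
    per-unit voltage leaves the allowed band around nominal, else None."""
    for node, v in voltages.items():
        if abs(v - NOMINAL) > band:
            return node
    return None

def remedial_controller(violated_node, controllers):
    """The Load Agent picks the nearest controller whose region covers the
    violated node and which still has control headroom; the OLTC is the
    supervisory fallback of last resort."""
    usable = [c for c in controllers
              if violated_node in c["region"] and c["headroom"] > 0]
    if usable:
        return min(usable, key=lambda c: c["distance"][violated_node])["name"]
    return "OLTC"
```

For example, a node at 1.08 p.u. violates a ±5% band and is routed to the nearest in-region compensator, or to the OLTC when all local devices are exhausted.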

Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids

Procedia PDF Downloads 312
18216 Participation of Juvenile with Driven of Tobacco Control in Education Institute: Case Study of Suan Sunandha Rajabhat University

Authors: Sakapas Saengchai

Abstract:

This paper studied the participation of juveniles in driving tobacco control in an educational institute, taking Suan Sunandha Rajabhat University as a case study. This qualitative research had the objective of studying the participation of juveniles in driving tobacco control in the university, as guidance for developing such participation in educational institutes, the university also being a cigarette-free university. Qualitative data were collected through participant observation, in-depth interviews, group conversations with student representatives of each faculty and college, and exchanges of student opinions. The results showed that participation in tobacco control has three parts: 1) participation in tobacco control campaigns, 2) academic training and cigarette-free university activities, and 3) acting as juvenile role models in tobacco control. As guidelines for youth involvement in driving tobacco control, universities should promote tobacco control activities, continue the smoking-reduction campaign, provide a clearly signposted designated smoking area in each faculty and college, and develop a network of non-smoking model students. This plays a key role in coordinating university students in driving towards a cigarette-free university, and in strengthening the community inside and outside the area for a good society and quality of life in the country.

Keywords: participation, juvenile, tobacco control, institute

Procedia PDF Downloads 272
18215 Cost of Outpatient Procedures for Ostomized Patients Treated in the Public Health Network in Brazil and Its Impact on the Budget of the Unified Health System

Authors: Karina Guimaraes, Lilian Santos

Abstract:

This study has the purpose of planning and instituting monitoring actions as a way of understanding the scenario of care for patients with a stoma treated in the public health network in Brazil, from January to November 2016, through the elaboration of a technical document surveying the number of procedures offered and the cost of the ostomy services accredited in the Unified Health System (SUS). The purpose of this document is to improve the quality of these services through the efficient management of available financial resources, making it indispensable for the creation of strategies for the implementation of care services for people with stomas, as a strategic tool for promotion, prevention, qualification, and efficiency in health care.

Keywords: health economics, management, ostomy, unified health system

Procedia PDF Downloads 311
18214 Jointly Optimal Statistical Process Control and Maintenance Policy for Deteriorating Processes

Authors: Lucas Paganin, Viliam Makis

Abstract:

With the advent of globalization, market competition has become a major issue for most companies. One of the main strategies to overcome this situation is improving product quality at a lower cost to meet customers’ expectations. In order to achieve the desired quality of products, it is important to control the process to meet the specifications and to implement the optimal maintenance policy for the machines and production lines. Thus, the overall objective is to reduce process variation and the production and maintenance costs. In this paper, an integrated model involving Statistical Process Control (SPC) and maintenance is developed to achieve this goal. The main focus of this paper is therefore to develop the jointly optimal maintenance and statistical process control policy minimizing the total long-run expected average cost per unit time. In our model, the production process can go out of control due to either the deterioration of equipment or other assignable causes. The equipment is also subject to failures in any of the operating states due to deterioration and aging. The process mean is controlled by an Xbar control chart using equidistant sampling epochs. We assume that the machine inspection epochs are the times when the control chart signals an out-of-control condition, considering both true and false alarms. At these times, the production process is stopped, and an investigation is conducted not only to determine whether it is a true or false alarm, but also to identify the cause of a true alarm: a change in the machine setting, other assignable causes, or both. If the system is out of control, the proper actions are taken to bring it back to the in-control state. At these epochs, a maintenance action can be taken, which can be no action or preventive replacement of the unit.
When the equipment is in the failure state, a corrective maintenance action is performed, which can be minimal repair or replacement of the machine, and the process is brought back to the in-control state. A semi-Markov decision process (SMDP) framework is used to formulate and solve the joint control problem. A numerical example is developed to demonstrate the effectiveness of the control policy.
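As a minimal illustration of the Xbar chart logic described above (equidistant sampling, a signal triggering machine inspection), the sketch below uses textbook three-sigma limits; the subgroup values and parameters are hypothetical, not from the paper's model.

```python
from statistics import mean

def xbar_limits(mu0, sigma, n, L=3.0):
    """Lower and upper control limits for an Xbar chart monitoring a
    process with in-control mean mu0 and standard deviation sigma,
    using subgroups of size n and L-sigma limits."""
    half_width = L * sigma / n ** 0.5
    return mu0 - half_width, mu0 + half_width

def chart_signal(subgroup, mu0, sigma, L=3.0):
    """Return True when the subgroup mean falls outside the control limits,
    i.e. the chart signals and the machine is stopped for inspection."""
    lcl, ucl = xbar_limits(mu0, sigma, len(subgroup), L)
    xbar = mean(subgroup)
    return xbar < lcl or xbar > ucl
```

A subgroup mean inside the limits lets production continue; a mean outside them triggers the true/false alarm investigation described in the abstract.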

Keywords: maintenance, semi-Markov decision process, statistical process control, Xbar control chart

Procedia PDF Downloads 91
18213 Applying the Crystal Model Approach on Light Nuclei for Calculating Radii and Density Distribution

Authors: A. Amar

Abstract:

A new model, namely the crystal model, has been developed to calculate the radius and density distribution of light nuclei up to ⁸Be. The crystal model is adapted from solid-state physics, using the analogy between the distribution of nucleons and the distribution of atoms in a crystal. The model provides an analytical treatment for calculating the radius, while the density distribution of light nuclei is obtained from the analogy with the crystal lattice. The distribution of nucleons over the crystal is discussed in a general form. The equation used to calculate the binding energy was taken from the solid-state model of repulsive and attractive forces; the number of protons controls the repulsive force, while the atomic number is responsible for the attractive force. The parameter calculated from the crystal model was found to be proportional to the radius of the nucleus. The density distribution of light nuclei was taken as a summation of two cluster distributions, as in the ⁶Li = alpha + deuteron configuration. The radii and density distributions obtained were tested with double-folding calculations for d+⁶,⁷Li using the M3Y nucleon-nucleon interaction. Good agreement was obtained for both the radii and density distributions of light nuclei. The model failed to calculate the radius of ⁹Be, so modifications should be made to overcome this discrepancy.

Keywords: nuclear physics, nuclear lattice, nucleus as a crystal, light nuclei up to ⁸Be

Procedia PDF Downloads 176
18212 RAPDAC: Role Centric Attribute Based Policy Driven Access Control Model

Authors: Jamil Ahmed

Abstract:

Access control models aim to decide whether a user should be denied or granted access to the user's requested activity. Various access control models have been established and proposed. The most prominent of these include the role-based, attribute-based, and policy-based access control models, as well as the role-centric attribute-based access control model. In this paper, a novel access control model called the Role-centric Attribute-based Policy Driven Access Control (RAPDAC) model is presented. RAPDAC incorporates the concept of "policy" into the role-centric attribute-based access control model. It leverages the concept of policy by precisely combining the evaluation of conditions, attributes, permissions, and roles in order to authorize access. This approach allows capturing the access control policy of a real-time application in a well-defined manner. The RAPDAC model allows making access decisions at a much finer granularity, as illustrated by the case study of a real-time library information system.
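A toy sketch of the combined evaluation, with a library-information-system flavour matching the case study: the policy structure, field names, and the no-outstanding-fines condition below are invented for illustration and are not taken from the RAPDAC specification.

```python
def authorize(user, action, resource, policies):
    """Grant access only when some policy (i) names one of the user's
    roles, (ii) grants the requested permission on the resource type, and
    (iii) has every attribute condition evaluate to true -- i.e. roles,
    permissions, attributes and policy conditions combined in one decision."""
    for policy in policies:
        if policy["role"] not in user["roles"]:
            continue
        if (action, resource["type"]) not in policy["permissions"]:
            continue
        if all(cond(user, resource) for cond in policy["conditions"]):
            return True
    return False
```

For example, a "member" may borrow a book only while an attribute condition (no outstanding fines, in this hypothetical policy) holds, giving a finer-grained decision than the role alone.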

Keywords: authorization, access control model, role based access control, attribute based access control

Procedia PDF Downloads 159
18211 Optimized Cluster Head Selection Algorithm Based on LEACH Protocol for Wireless Sensor Networks

Authors: Wided Abidi, Tahar Ezzedine

Abstract:

Low-Energy Adaptive Clustering Hierarchy (LEACH) has been considered one of the effective hierarchical routing algorithms that optimize energy and prolong the lifetime of the network. Since the selection of the Cluster Head (CH) in LEACH is carried out randomly, in this paper we propose an approach for electing the CH based on the LEACH protocol. In other words, we present a formula for calculating the threshold responsible for CH election. We adopt three principal criteria: the remaining energy of the node, the number of neighbors within cluster range, and the distance between the node and the CH. Simulation results show that our proposed approach outperforms the LEACH protocol in terms of prolonging the lifetime of the network and saving residual energy.
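The abstract does not give the modified threshold formula itself; the sketch below shows one plausible way to weight the classical LEACH threshold by the three stated criteria. The weights and the multiplicative form are assumptions for illustration, not the authors' formula.

```python
def leach_threshold(p, r):
    """Classical LEACH threshold T(n) = p / (1 - p * (r mod 1/p)) for a
    node not yet elected in the current epoch; p is the desired CH
    fraction, r the current round."""
    return p / (1 - p * (r % round(1 / p)))

def modified_threshold(p, r, e_res, e_init, n_neighbors, n_max,
                       d_to_bs, d_max, w_e=0.5, w_n=0.3, w_d=0.2):
    """Scale the classical threshold to favour nodes with more residual
    energy, more neighbours within cluster range, and shorter distance.
    The weights w_e, w_n, w_d are illustrative, not from the paper."""
    factor = (w_e * (e_res / e_init)
              + w_n * (n_neighbors / n_max)
              + w_d * (1 - d_to_bs / d_max))
    return leach_threshold(p, r) * factor
```

A node then becomes CH when its random draw falls below its (now node-specific) threshold, so energy-rich, well-placed nodes are elected more often.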

Keywords: wireless sensors networks, LEACH protocol, cluster head election, energy efficiency

Procedia PDF Downloads 329
18210 A Method Development for Improving the Efficiency of Solid Waste Collection System Using Network Analyst

Authors: Dhvanidevi N. Jadeja, Daya S. Kaul, Anurag A. Kandya

Abstract:

Municipal Solid Waste (MSW) collection in a city is often performed in an ineffective manner, which results in poor management of the environment and natural resources. Municipal corporations frequently lack efficient waste management and recycling programs because of the complexity of the task, which involves many factors. A solid waste collection system depends upon various factors such as manpower, the number and size of vehicles, transfer station size, dustbin size and weight, on-road traffic, and many others. These factors affect the collection cost, energy use, and the overall municipal tax for the city. Generally, different types of waste are scattered throughout the city in a heterogeneous way, which poses challenges for the efficient collection of solid waste. An efficient waste collection and transportation strategy must be undertaken, which includes optimization of routes, waste volume, and manpower. Once these are optimized, the overall cost can be reduced, as the fuel and energy requirements would be lower and the municipal waste taxes levied would be less. To carry out the optimization study of the collection system, various data need to be collected from the Ahmedabad municipal corporation, such as the amount of waste generated per day, the number of workers, the collection schedule, road maps, the number and locations of transfer stations, the number of equipment items (tractors, machinery), the number of zones, the collection routes, etc. ArcGIS Network Analyst is introduced to identify the best routing for municipal waste collection. The simulation consists of scenarios of visiting loading spots in the municipality of Ahmedabad, considering dynamic factors like network traffic changes and roads closed due to natural or technical causes. Different routes were selected in a particular area of Ahmedabad city, and the present routes were optimized using ArcGIS Network Analyst to reduce their length. The result indicates up to 35% length reduction in the routes.
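The route-shortening step can be illustrated with a toy stand-in for the Network Analyst solver: a nearest-neighbour re-ordering of collection stops on straight-line distances. Real runs use the road network, live traffic, and closures, so the coordinates and the greedy heuristic here are purely illustrative.

```python
import math

def route_length(points, order):
    """Total straight-line length of visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[i + 1]])
               for i in range(len(order) - 1))

def nearest_neighbour(points, start=0):
    """Greedy re-ordering of collection stops: from each stop, go to the
    closest unvisited one. A crude stand-in for a proper route solver."""
    unvisited = set(range(len(points))) - {start}
    order = [start]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order
```

Even this simple re-ordering shortens a poorly sequenced route, which is the kind of length reduction the study measures on real road-network routes.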

Keywords: collection routes, efficiency, municipal solid waste, optimization

Procedia PDF Downloads 136
18209 Efficient Bargaining versus Right to Manage in the Era of Liberalization

Authors: Panagiota Koliousi, Natasha Miaouli

Abstract:

We compare product and labour market liberalization under the two trade union bargaining models: the Right-to-Manage (RTM) model and the Efficient Bargaining (EB) model. The vehicle is a dynamic general equilibrium (DGE) model that incorporates two types of agents (capitalists and workers) and imperfectly competitive product and labour markets. The model is solved numerically employing common parameter values and data from the euro area. A key message is that product market deregulation is favourable under any labour market structure, while, when opting for labour market deregulation, special attention should be paid to the structure of the labour market, such as the unions' bargaining system. If the prevailing form of bargaining is the RTM model, then restructuring both markets is beneficial for all agents.

Keywords: market structure, structural reforms, trade unions, unemployment

Procedia PDF Downloads 196
18208 Visualization of Taiwan's Religious Social Networking Sites

Authors: Jia-Jane Shuai

Abstract:

This research aims to improve understanding of the nature of online religion by examining religious social websites: what motivates individual users to use them, and which factors affect those motivations. We survey various online religious social websites provided by different religions, especially the Taiwanese folk religion. Based on content analysis and social network analysis, religious social websites and religious web activities are examined. This research examined folk religion websites' presentation and contents that promote the religious use of the Internet in Taiwan; the differences among religions and religious websites are also compared. First, this study used keywords to examine what types of messages gained the most clicks of "Like", "Share", and comments on Facebook. Dividing the messages into four media types, namely text, link, video, and photo, reveals which categories receive more likes and comments than the others. Meanwhile, this study analyzed the five dialogic principles of religious websites accessed from mobile phones and assessed their mobile readiness, conducting a general survey of websites with elements of online religion using the five principles of dialogic theory as a basis. Second, the project analyzed the characteristics of Taiwanese participants in online religious activities. Grounded in social network analysis and text mining, this study comparatively explores the network structure, interaction patterns, and geographic distribution of users involved in communication networks of the folk religion on social websites and mobile sites. We studied the linkage preferences of different religious groups and examined the reasons for the success of these websites, as well as the reasons why young users accept new religious media.
The outcome of the research will be useful for online religious service providers and non-profit organizations in managing social websites and internet marketing.

Keywords: content analysis, online religion, social network analysis, social websites

Procedia PDF Downloads 167
18207 Non-Autonomous Seasonal Variation Model for Vector-Borne Disease Transferral in Kampala of Uganda

Authors: Benjamin Aina Peter, Amos Wale Ogunsola

Abstract:

In this paper, a mathematical model of malaria transmission is presented with the effect of seasonal shift, due to global fluctuation in temperature, on the increase of the vectors of the infectious disease, which probably alters the regional transmission potential of malaria. A deterministic compartmental model is proposed and analyzed qualitatively; both qualitative and quantitative approaches to the model are considered. The next-generation matrix is employed to determine the basic reproduction number of the model. Equilibrium points of the model are determined and analyzed. The numerical simulation is carried out using Microsoft Excel to validate and support the qualitative results. From the analysis of the results, the optimal temperature for the transmission of malaria is between and . The results also show that an increase in temperature due to seasonal shift gives rise to the development of parasites, which consequently leads to an increase in the spread of malaria in Kampala. It is also seen from the results that an increase in temperature leads to an increase in the number of infectious human hosts and mosquitoes.
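The temperature dependence can be illustrated with a classical Ross-Macdonald-style basic reproduction number in which the mosquito biting rate follows a Briere-type thermal response. The functional form and all parameter values below are textbook-style assumptions, not the paper's fitted model.

```python
import math

def mosquito_biting_rate(T):
    """Illustrative Briere-type thermal response a(T): zero below ~11.7 C
    and above ~42.3 C, peaking in between. Coefficients are hypothetical."""
    return max(0.0, 0.000203 * T * (T - 11.7) * math.sqrt(max(0.0, 42.3 - T)))

def basic_reproduction_number(T, m=10.0, b=0.5, c=0.5, r=0.05,
                              mu=0.12, tau=10.0):
    """Ross-Macdonald R0 with a temperature-dependent biting rate a(T):
    R0 = sqrt(m * a^2 * b * c * exp(-mu * tau) / (r * mu)), where m is the
    mosquito-to-human ratio, b and c the transmission probabilities, r the
    human recovery rate, mu the mosquito death rate, tau the incubation
    period. All values are illustrative defaults."""
    a = mosquito_biting_rate(T)
    return math.sqrt(m * a * a * b * c * math.exp(-mu * tau) / (r * mu))
```

With such a response, R0 vanishes at low temperatures and rises with warming over the transmission-permissive range, mirroring the seasonal-shift effect the abstract reports for Kampala.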

Keywords: seasonal variation, indoor residual spray, efficacy of spray, temperature-dependent model

Procedia PDF Downloads 169
18206 Enhancing Urban Sustainability through Integrated Green Spaces: A Focus on Tehran

Authors: Azadeh Mohajer Milani

Abstract:

Urbanization constitutes an irreversible global trend, presenting myriad challenges such as heightened energy consumption, pollution, congestion, and the depletion of natural resources. Today's urban landscapes have emerged as focal points for economic, social, and environmental challenges, underscoring the pressing need for sustainable development. This article delves into the realm of sustainable urban development, concentrating on the pivotal role played by integrated green spaces as an optimal solution to address environmental concerns within cities. The study utilizes Tehran as a case study. Our findings underscore the imperative of preserving and expanding green spaces in urban areas, coupled with the establishment of well-designed ecological networks, to enhance environmental quality and elevate the sustainability of cities. Notably, Tehran's urban green spaces exhibit a disjointed design, lacking a cohesive network to connect various patches and corridors, resulting in significant environmental impacts. The results emphasize the necessity of a balanced and proportional distribution of urban green spaces and the creation of a cohesive patch-corridor-matrix network tailored to the ecological and social needs of residents. This approach is crucial for fostering a more sustainable and livable urban environment for all species, with a specific focus on humans.

Keywords: ecology, sustainable urban development, sustainable landscape, urban green space network

Procedia PDF Downloads 83
18205 Assessment of Environmental Risk Factors of Railway Using Integrated ANP-DEMATEL Approach in Fuzzy Conditions

Authors: Mehrdad Abkenari, Mehmet Kunt, Mahdi Nourollahi

Abstract:

Evaluating environmental risk factors is part of the analysis of transportation effects. Various definitions of risk can be found in different scientific sources; each depends on a particular perspective or dimension. The effects of potential risks present along the newly proposed routes and existing infrastructures of large transportation projects like railways should be studied under comprehensive engineering frameworks. Despite the various definitions provided for 'risk', all share a common concept: two aspects, loss and unreliability, are pointed out in all definitions of the term, while selection, as the third aspect, is usually implied and concerns how one perceives the risk. Currently, conducting engineering studies on the environmental effects of railway projects has become obligatory according to the Environmental Assessment Act in developing countries. Considering the longitudinal nature of these projects and the probable passage of railways through various ecosystems, scientific research on the environmental risk of these projects has become of great interest. Although many areas of expertise, such as road construction in developing countries, have not seriously committed to these studies yet, attention to these subjects has become an inseparable part of this wave of research. The present study used the environmental risks identified in previous studies as input for the next step, which proposes a new hybrid approach of the Analytic Network Process (ANP) and DEMATEL under fuzzy conditions for the assessment of the determined risks. Since the evaluation of the identified risks was not straightforward, a network structure, appropriate for analyzing complex systems, was employed for problem description and modeling.
The researchers faced a shortage of real data, and due to the ambiguity of experts' opinions and judgments, these were expressed in linguistic variables instead of numerical ones. Since fuzzy logic is appropriate for ambiguity and uncertainty, formulating the experts' opinions as fuzzy numbers was a suitable approach. The fuzzy DEMATEL method was used to extract the relations between major and minor risk factors. Considering the internal relations of the major risk factors and their sub-factors in the fuzzy network analysis, the weights of the major risk factors and sub-factors were determined. In general, the findings of the present study, in which effective railway environmental risk indicators were identified and rated through the first use of a combined model of DEMATEL and fuzzy network analysis, indicate that environmental risks can be evaluated more accurately and employed in railway projects.
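The core DEMATEL computation, defuzzifying the fuzzy judgments and deriving the total-relation matrix T = N(I - N)^-1 from the normalised direct-relation matrix N, can be sketched for a two-factor case. The triangular-fuzzy centroid defuzzification, normalisation by the largest row sum, and the closed-form 2x2 inverse are simplifications for illustration.

```python
def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3.0

def dematel_total_relation(direct):
    """Total-relation matrix T = N (I - N)^-1 for a 2x2 crisp
    direct-relation matrix, with N normalised by the largest row sum.
    The 2x2 closed-form inverse keeps the sketch dependency-free."""
    s = max(sum(row) for row in direct)
    n = [[x / s for x in row] for row in direct]
    # entries of I - N, then its inverse via the 2x2 adjugate formula
    a, b = 1 - n[0][0], -n[0][1]
    c, d = -n[1][0], 1 - n[1][1]
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    # T = N * (I - N)^-1
    return [[sum(n[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

Row and column sums of T then give each factor's prominence and net cause/effect role, which is how the major and minor risk factors are weighted.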

Keywords: DEMATEL, ANP, fuzzy, risk

Procedia PDF Downloads 413
18204 Evaluation of Urban-Rural Integration of Characteristic Towns in Yunnan Province

Authors: Huang Yong, Chen Qianting, Zhao Shurong

Abstract:

In order to identify the role and effect of Characteristic Towns as an important means of promoting urban-rural integration, this paper uses Flow Theory and complex network analysis methods to jointly construct a path for identifying the urban-rural integration capabilities of Characteristic Towns, taking the National Characteristic Towns of Yunnan Province as empirical objects. The study found that, in the implementation of the National Characteristic Town Project in Yunnan Province, (1) the population is more susceptible to the impact of the Characteristic Town Project than the technical elements, but its stability is poor; (2) the flow capacity of urban and rural technical elements is weak, and the quality of the enterprise cooperation network is generally low; (3) compared with the 2016 batch of Characteristic Towns, the 2017 batch shows a higher ability to promote urban-rural integration; (4) the effect of the Characteristic Town Project on urban-rural integration centres on increasing the number of urban-rural flow elements. This paper analyzes the mode of action of Characteristic Towns on urban-rural integration from the perspective of 'flow', establishes a research paradigm for evaluating the urban-rural integration capabilities of Characteristic Towns, and builds a path for applying Characteristic Towns to support the realization of urban-rural integration goals.

Keywords: characteristic town, urban-rural integration, flow theory, complex network analysis

Procedia PDF Downloads 139
18203 A Two Stage Stochastic Mathematical Model for the Tramp Ship Routing with Time Windows Problem

Authors: Amin Jamili

Abstract:

Nowadays, the majority of international trade in goods is carried by sea, especially by ships deployed in the industrial and tramp segments. This paper addresses routing tramp ships and determining their schedules, including the arrival times at the ports, berthing times at the ports, and departure times, at an operational planning level. At the operational planning level, the weather can be forecast almost exactly; however, on some routes some uncertainties may remain. In this paper, the voyage times between some of the ports are considered to be uncertain. To that end, a two-stage stochastic mathematical model is proposed, and a case study is tested with the presented model. The computational results show that this mathematical model is promising and can produce acceptable solutions.
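The two-stage structure can be illustrated with a tiny scenario model: the first-stage decision is a schedule buffer before a port time window, and the second-stage recourse pays for lateness or idle time once the uncertain voyage time is realised. The scenario data, penalty rates, and buffer candidates below are invented for illustration, not the paper's formulation.

```python
def expected_cost(buffer_hours, scenarios,
                  lateness_penalty=100.0, idle_cost=20.0):
    """Expected recourse cost of a first-stage departure buffer. Each
    scenario is (probability, extra_voyage_hours): lateness beyond the
    buffer is penalised per hour; unused buffer is paid as idle time."""
    cost = 0.0
    for prob, extra in scenarios:
        if extra > buffer_hours:
            cost += prob * lateness_penalty * (extra - buffer_hours)
        else:
            cost += prob * idle_cost * (buffer_hours - extra)
    return cost

def best_buffer(candidates, scenarios):
    """First-stage decision: the candidate buffer minimising expected cost."""
    return min(candidates, key=lambda b: expected_cost(b, scenarios))
```

With a high lateness penalty relative to idling, the optimal first-stage buffer hedges against the slow-voyage scenarios, which is exactly the trade-off a two-stage stochastic program resolves.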

Keywords: routing, scheduling, tramp ships, two-stage stochastic model, uncertainty

Procedia PDF Downloads 436
18202 Using Industrial Service Quality to Assess Service Quality Perception in Television Advertisement: A Case Study

Authors: Ana L. Martins, Rita S. Saraiva, João C. Ferreira

Abstract:

Much effort has been placed on the assessment of perceived service quality. Several models can be found in the literature, but these are mainly focused on business-to-consumer (B2C) relationships. Literature on how to assess perceived quality in business-to-business (B2B) contexts is scarce, both conceptually and in terms of application. This research aims at filling this gap in the literature by applying INDSERV to a case study. Under this scope, it analyzes the adequacy of the proposed assessment tool to a context other than the one where it was developed and, by doing so, analyzes the perceived quality of the advertisement service provided by a specific television network to its B2B customers. The INDSERV scale was adopted and applied to a sample of 33 clients via questionnaires adapted to interviews. Data were collected in person or by phone, and both quantitative and qualitative data were gathered. Qualitative data analysis followed a content analysis protocol; quantitative analysis used hypothesis testing. The findings allow the conclusion that the perceived quality of the television service provided by the network is very positive, with Soft Process Quality being the parameter that reveals the highest perceived quality of the service, as opposed to Potential Quality. To this end, some comments and suggestions were made by the clients regarding each of these service quality parameters. Based on the hypothesis testing, it was noticed that only advertisement clients that have maintained a connection to the television network for 5 to 10 years show a significantly different perception of the TV advertisement service provided by the company as far as the Hard Process Quality parameter is concerned. Through content analysis of the collected data, it was possible to obtain the percentage of clients who share the same opinions and suggestions for improvement.
Finally, based on the four service quality parameters in a B2B context, managerial suggestions were developed aiming at improving the perceived quality of the television network advertisement service.

Keywords: B2B, case study, INDSERV, perceived service quality

Procedia PDF Downloads 206
18201 Microstructural Evolution of an Interface Region in a Nickel-Based Superalloy Joint Produced by Direct Energy Deposition

Authors: Matthew Ferguson, Tatyana Konkova, Ioannis Violatos

Abstract:

Microstructure analysis of additively manufactured (AM) materials is an important step in understanding the interrelationship between mechanical properties and materials performance. Literature on the effect of laser-based AM process parameters on the microstructure in the substrate-deposit interface is limited. The interface region, the adjoining area of substrate and deposit, is characterized by the presence of the fusion zone (FZ) and heat-affected zone (HAZ), which experience rapid thermal cycling resulting in thermally induced transformations. Inconel 718 was utilized as the work material for both the substrate and deposit. Three blocks of Inconel 718 material were deposited by Direct Energy Deposition (DED) using three different laser powers: 550 W, 750 W and 950 W, respectively. A coupled thermo-mechanical transient approach was utilized to correlate the temperature history with the evolution of microstructure. The thermal history of the deposition process was monitored with thermocouples installed inside the substrate material. The interface region of the blocks was analyzed with Optical Microscopy (OM) and Scanning Electron Microscopy (SEM), including the electron back-scattered diffraction (EBSD) technique. Laser power was found to influence the dissolution of intermetallic precipitated phases in the substrate and grain growth in the interface region. Microstructure and thermal history data were utilized to draw conclusive comparisons between the investigated process parameters.

Keywords: additive manufacturing, direct energy deposition, electron back-scattered diffraction, finite element analysis, inconel 718, microstructure, optical microscopy, scanning electron microscopy, substrate-deposit interface region

Procedia PDF Downloads 203
18200 Earthquake Relocations and Constraints on the Lateral Velocity Variations along the Gulf of Suez, Using the Modified Joint Hypocenter Method Determination

Authors: Abu Bakr Ahmed Shater

Abstract:

Hypocenters of 250 earthquakes recorded by more than 5 stations of the Egyptian seismic network around the Gulf of Suez were relocated, and the station corrections for the P-wave were estimated using the modified joint hypocenter determination method. Five stations, TR1, SHR, GRB, ZAF, and ZET, have negative P-wave travel-time corrections of -0.235, -0.366, -0.288, -0.366, and -0.058, respectively, suggesting a high-velocity structure beneath these stations. The other stations, TR2, RDS, SUZ, HRG, and ZNM, have positive corrections of 0.024, 0.187, 0.314, 0.645, and 0.145, respectively, suggesting a low-velocity structure. The hypocentral locations determined by the modified joint hypocenter method are more precise than those determined by the routine location program, since this method solves simultaneously for the earthquake locations and the station corrections. The station corrections reflect not only the different crustal conditions in the vicinity of the stations, but also the difference between the actual and modeled seismic velocities along each earthquake-station ray path. The corrections obtained correlate with the major surface geological features of the study area. As a result of the relocation, a low-velocity area appears on the northeastern and southwestern sides of the Gulf of Suez, while the southeastern and northwestern parts are high-velocity areas.
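The sign convention for station corrections can be illustrated with a minimal sketch. In a full joint hypocenter determination the corrections are solved simultaneously with the locations; as a first-order stand-in, the mean travel-time residual per station captures the idea that a consistently early arrival (negative residual) implies a fast path beneath the station. The residual values below are hypothetical, not the paper's data.

```python
import statistics

# Hypothetical P-wave travel-time residuals (observed - predicted, s)
# at each station over several well-located events.
residuals = {
    "SHR": [-0.40, -0.33, -0.37],   # consistently early -> fast (high-velocity) path
    "SUZ": [0.30, 0.35, 0.29],      # consistently late  -> slow (low-velocity) path
}

# First-order estimate of each station correction: the mean residual.
corrections = {s: statistics.mean(r) for s, r in residuals.items()}
```

A negative correction, as for SHR above, matches the high-velocity interpretation given in the abstract for stations TR1, SHR, GRB, ZAF, and ZET.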

Keywords: gulf of Suez, seismicity, relocation of hypocenter, joint hypocenter determination

Procedia PDF Downloads 358
18199 Conduction Model Compatible for Multi-Physical Domain Dynamic Investigations: Bond Graph Approach

Authors: A. Zanj, F. He

Abstract:

In this paper, a domain-independent conduction model suitable for multi-physical system dynamic investigations is proposed. By means of a port-based approach, a classical nonlinear conduction model containing physical states is first represented. A discrete configuration of the thermal domain, compatible with the elastic domain, is then generated by enhancing the configuration of the conventional thermal element. Simulation results for a sample structure indicate that the suggested conduction model can cover a wide range of dynamic behavior of the thermal domain.
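The lumped, port-based discretization of conduction can be sketched as two thermal capacitances exchanging heat through a conductance, the thermal analogue of an RC network. The parameter values below are illustrative assumptions, not taken from the paper.

```python
# Minimal lumped-parameter conduction sketch: two thermal capacitances
# C1, C2 (J/K) coupled by a conductance G (W/K), integrated explicitly.
C1, C2, G, dt = 100.0, 100.0, 5.0, 0.1
T1, T2 = 400.0, 300.0                # initial node temperatures (K)
for _ in range(10000):               # 1000 s of simulated time
    q = G * (T1 - T2)                # heat flow from hot to cold node (W)
    T1 -= q * dt / C1
    T2 += q * dt / C2
```

Because the same heat flow `q` leaves one node and enters the other, energy is conserved by construction, and both temperatures relax toward the common equilibrium of 350 K.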

Keywords: multi-physical domain, conduction model, port based modeling, dynamic interaction, physical modeling

Procedia PDF Downloads 273
18198 Enhancement of Indexing Model for Heterogeneous Multimedia Documents: User Profile Based Approach

Authors: Aicha Aggoune, Abdelkrim Bouramoul, Mohamed Khiereddine Kholladi

Abstract:

Recent research shows that the user profile, as an important contextual element, can improve heterogeneous information retrieval. In this context, we present an indexing model for heterogeneous multimedia documents that incorporates the user profile into the indexing process. The general idea of our proposal is to exploit the concepts shared between the representation of a document and the definition of a user through his profile. These two elements are added as additional indexing entities to enrich the indexes of the heterogeneous document corpus. We have developed the IRONTO domain ontology for annotating documents, and we also present the tool developed to validate the proposed model.
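The core idea of enriching an index with concepts shared between a document and a user profile can be sketched as follows. The weighting scheme and data are hypothetical, chosen only to illustrate the intersection step.

```python
# Hedged sketch: boost the index weight of concepts a document shares
# with a user profile. The double weight for shared concepts is an
# assumption for illustration, not the paper's actual scheme.
def enrich_index(doc_terms, profile_concepts):
    shared = set(doc_terms) & set(profile_concepts)
    return {t: (2.0 if t in shared else 1.0) for t in doc_terms}

index = enrich_index(["video", "sport", "goal"], ["sport", "music"])
```

Here "sport" appears in both the document and the profile, so its index entry is promoted relative to terms found only in the document.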

Keywords: indexing model, user profile, multimedia document, heterogeneous of sources, ontology

Procedia PDF Downloads 348
18197 An Improved C-Means Model for MRI Segmentation

Authors: Ying Shen, Weihua Zhu

Abstract:

Medical images are important for identifying different diseases; for example, Magnetic Resonance Imaging (MRI) can be used to investigate the brain, spinal cord, bones, joints, breasts, blood vessels, and heart. In medical image analysis, image segmentation is usually the first step: it finds regions with similar color, intensity, or texture so that diagnosis can be carried out based on these features. This paper introduces an improved C-means model to segment MRI images. The model uses information entropy to evaluate segmentation results while achieving global optimization. The contributions are significant in two respects. First, a Genetic Algorithm (GA) is used to achieve global optimization, which the fuzzy C-means clustering algorithm (FCMA) alone cannot provide. Second, the information entropy after segmentation is used to measure the effectiveness of the MRI image processing. Experimental results show that the proposed model outperforms traditional approaches.
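The two ingredients named above, fuzzy C-means clustering and an entropy score for the resulting segmentation, can be sketched on toy one-dimensional intensities. This is plain FCM without the paper's GA wrapper, under illustrative parameter choices.

```python
import numpy as np

def fcm_1d(x, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means on 1-D intensities (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # fuzzy memberships, columns sum to 1
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)  # membership-weighted centroids
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

def entropy(labels):
    """Shannon entropy of the hard segmentation, used as a quality score."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

x = np.array([0.0, 0.1, 0.05, 0.9, 1.0, 0.95])  # toy "intensities"
centers, u = fcm_1d(x)
labels = u.argmax(axis=0)                        # hard segmentation
```

In the paper's scheme, a GA would search over initializations/parameters and the entropy of the segmentation would serve as part of the global evaluation.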

Keywords: magnetic resonance image (MRI), c-means model, image segmentation, information entropy

Procedia PDF Downloads 226
18196 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with artificial neural networks (ANN) was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model to calculate: (i) the number of neurons in the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the Root Mean Squared Error (RMSE), used to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions of sixteen well-known, previously developed gas geothermometers, was statistically evaluated using an external database to avoid bias.
The statistical evaluation was performed by analyzing the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used to predict subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (Acoculco, Puebla). The latest results of the gas geothermometers, inferred from the gas-phase compositions of soil-gas bubble emissions, compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
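The calibration-and-scoring workflow described above can be illustrated in miniature. The paper trains ANNs with Levenberg-Marquardt; as a simpler stand-in, the sketch below fits a log-linear geothermometer of the assumed form T = a + b·ln(CO₂/H₂) by ordinary least squares on synthetic data and scores it with RMSE, the same performance metric the authors use.

```python
import numpy as np

# Synthetic calibration data: ln(CO2/H2) ratios and "measured" bottomhole
# temperatures generated from a known law with small noise. Illustrative only.
rng = np.random.default_rng(1)
ln_ratio = rng.uniform(1.0, 5.0, 40)
BHT = 50.0 * ln_ratio + 100.0 + rng.normal(0.0, 2.0, 40)

# Least-squares fit of T = a + b * ln(CO2/H2).
A = np.column_stack([np.ones_like(ln_ratio), ln_ratio])
coef, *_ = np.linalg.lstsq(A, BHT, rcond=None)

# RMSE between measured and predicted temperatures, as in the paper.
T_pred = A @ coef
rmse = float(np.sqrt(np.mean((BHT - T_pred) ** 2)))
```

In the actual work, the ANN replaces the linear form and an external database is held out so the RMSE comparison across geothermometers is unbiased.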

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 351
18195 Improving the Dielectric Strength of Transformer Oil for High Health Index: An FEM Based Approach Using Nanofluids

Authors: Fatima Khurshid, Noor Ul Ain, Syed Abdul Rehman Kashif, Zainab Riaz, Abdullah Usman Khan, Muhammad Imran

Abstract:

As the world moves towards extra-high voltage (EHV) and ultra-high voltage (UHV) power systems, the performance requirements of power transformers are becoming crucial to system reliability and security. With transformers being an essential component of a power system, a low transformer health index poses greater risks to safe and reliable operation. Therefore, to meet the rising demands on power system and transformer performance, researchers are prompted to provide solutions for enhanced thermal and electrical properties of transformers. This paper proposes an approach to improve the health index of a transformer by using nanotechnology in conjunction with biodegradable oils. Vegetable oils can serve as potential dielectric fluid alternatives to conventional mineral oils owing to their inherent benefits, namely higher fire and flash points and an environment-friendly nature. Moreover, the addition of nanoparticles to the dielectric fluid further improves the dielectric strength of the insulation medium. In this research, using the finite element method (FEM) in the COMSOL Multiphysics environment with a 2D space dimension, three different oil samples have been modelled, and the electric field distribution is computed for each sample at various electric potentials: 90 kV, 100 kV, 150 kV, and 200 kV. Furthermore, each sample has been modified with the addition of nanoparticles of different radii (50 nm and 100 nm) and at different interparticle distances (5 mm and 10 mm), considering an instant of time. The nanoparticles used are non-conductive and have been modelled as alumina (Al₂O₃). The geometry has been modelled according to IEC standard 60897, with a standard electrode gap distance of 25 mm.
For an input supply voltage of 100 kV, the maximum electric field stresses obtained for the samples of synthetic vegetable oil, olive oil, and mineral oil are 5.08×10⁶ V/m, 5.11×10⁶ V/m, and 5.62×10⁶ V/m, respectively. It is observed that, among the unmodified samples, the vegetable oils have a greater dielectric strength than the conventionally used mineral oil because of their higher flash points and higher relative permittivity. For the modified samples, the addition of nanoparticles inhibits streamer propagation inside the dielectric medium and hence improves the dielectric properties of the medium.
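The reported FEM maxima can be sanity-checked against the uniform-field baseline E = V/d for the 25 mm IEC 60897 gap, which also yields a field-enhancement factor for each oil. This is a back-of-the-envelope check, not part of the paper's method.

```python
# Uniform-field baseline for the standard IEC 60897 geometry.
V = 100e3           # applied voltage (V)
d = 25e-3           # electrode gap (m)
E_uniform = V / d   # ~4.0e6 V/m

# Maximum field stresses reported in the abstract at 100 kV.
reported = {"vegetable": 5.08e6, "olive": 5.11e6, "mineral": 5.62e6}

# Enhancement factor: how much the non-uniform geometry concentrates
# the field relative to a parallel-plate estimate.
enhancement = {k: e / E_uniform for k, e in reported.items()}
```

The mineral oil sample shows the largest enhancement (about 1.4), consistent with the abstract's observation that it sustains the highest local field stress of the three unmodified oils.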

Keywords: dielectric strength, finite element method, health index, nanotechnology, streamer propagation

Procedia PDF Downloads 141
18194 Learning to Translate by Learning to Communicate to an Entailment Classifier

Authors: Szymon Rutkowski, Tomasz Korbak

Abstract:

We present a reinforcement-learning-based method for training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, it lacks the psychological plausibility of a learning procedure: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language's 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and the premise, which are then passed to the classifier. The translator is rewarded according to the classifier's performance in determining entailment between the translated sentences. The translator's performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, we introduce a number of improvements. Whereas prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality.
It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as its analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) for optimizing the translator's policy and found that our attempts yield promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation.
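The policy-gradient training loop described above can be reduced to a toy sketch: a "translator" policy chooses between candidate outputs, a frozen "classifier" pays reward when the chosen output preserves the entailment label, and REINFORCE updates push probability mass toward rewarded choices. Everything below (two candidates, reward scheme, learning rate) is an illustrative assumption, far from the paper's full sequence model.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(2)              # policy over 2 candidate "translations"
GOOD = 1                          # index the classifier rewards

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.5
for _ in range(200):
    p = softmax(logits)
    a = rng.choice(2, p=p)        # translator samples an output
    r = 1.0 if a == GOOD else 0.0 # classifier-based reward signal
    grad = -p
    grad[a] += 1.0                # gradient of log pi(a)
    logits += lr * r * grad       # REINFORCE update

p_final = softmax(logits)
```

After training, the policy concentrates on the candidate the classifier rewards, which is the mechanism by which classifier accuracy stands in for translation quality.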

Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning

Procedia PDF Downloads 128
18193 Hybrid Hierarchical Routing Protocol for WSN Lifetime Maximization

Authors: H. Aoudia, Y. Touati, E. H. Teguig, A. Ali Cherif

Abstract:

Conceiving and developing routing protocols for wireless sensor networks requires consideration of constraints such as network lifetime and energy consumption. In this paper, we propose a hybrid hierarchical routing protocol named HHRP, combining a clustering mechanism with multipath optimization that takes residual energy and RSSI measures into account. HHRP dynamically classifies nodes into clusters in which coordinator nodes with extra privileges can manipulate messages, aggregate data, and ensure transmission between nodes according to TDMA and CDMA schedules. The network is reconfigured dynamically based on a threshold value associated with the number of nodes belonging to the smallest cluster. To show the effectiveness of the proposed approach, a simulation-based comparison of HHRP with the LEACH protocol is presented.
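The coordinator election implied by "residual energy and RSSI measures" can be sketched as a weighted score over the nodes of a cluster. The weights, the RSSI offset, and the node data below are assumptions for illustration, not HHRP's actual formula.

```python
# Illustrative cluster-coordinator election: pick the node with the best
# weighted combination of residual energy (%) and link quality (RSSI, dBm).
def elect_coordinator(nodes, w_energy=0.6, w_rssi=0.4):
    # RSSI is negative dBm; offsetting by +100 maps typical values to a
    # positive "bigger is better" scale comparable with energy.
    score = lambda n: w_energy * n["energy"] + w_rssi * (n["rssi"] + 100)
    return max(nodes, key=score)["id"]

cluster = [
    {"id": "A", "energy": 80.0, "rssi": -60.0},
    {"id": "B", "energy": 95.0, "rssi": -55.0},
    {"id": "C", "energy": 40.0, "rssi": -50.0},
]
head = elect_coordinator(cluster)
```

Node C has the best link but little energy left, so the election favors B, illustrating how the energy weight extends network lifetime by sparing depleted nodes.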

Keywords: routing protocol, optimization, clustering, WSN

Procedia PDF Downloads 469
18192 Hybrid Multipath Congestion Control

Authors: Akshit Singhal, Xuan Wang, Zhijun Wang, Hao Che, Hong Jiang

Abstract:

Multipath Transmission Control Protocols (MPTCPs) allow flows to exploit path diversity to improve throughput, reliability, and network resource utilization. However, existing solutions may discourage adoption in multipath scenarios where different paths are charged under different pricing structures, e.g., WiFi vs. cellular connections, both widely available on mobile phones. In this paper, we propose a Hybrid MPTCP (H-MPTCP) with a built-in mechanism that incentivizes users to use multiple paths with different pricing structures, while preserving the nice properties enjoyed by state-of-the-art MPTCP solutions. Extensive results from a real Linux implementation verify that H-MPTCP indeed achieves the design objectives.
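The pricing-aware incentive can be illustrated with a greedy split of a flow's demand across paths, filling the cheapest path first. The path names, capacities, and prices are hypothetical; H-MPTCP's actual congestion-control mechanism is considerably more involved.

```python
# Hedged sketch: meet a throughput target (Mbps) at minimum cost by
# filling cheaper paths before more expensive ones.
def allocate(paths, demand):
    plan, left = {}, demand
    for p in sorted(paths, key=lambda p: p["price"]):
        take = min(p["capacity"], left)
        plan[p["name"]] = take
        left -= take
    return plan

paths = [
    {"name": "wifi", "capacity": 30.0, "price": 0.0},   # flat-rate WiFi
    {"name": "cellular", "capacity": 50.0, "price": 2.0},  # metered cellular
]
plan = allocate(paths, demand=40.0)
```

The free WiFi path is saturated first and only the remaining 10 Mbps spills onto cellular, which is the user-facing behavior a price-aware multipath scheduler needs to encourage adoption.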

Keywords: network, TCP, WiFi, cellular, congestion control

Procedia PDF Downloads 718
18191 Addressing Public Concerns about Radiation Impacts by Looking Back in Nuclear Accidents Worldwide

Authors: Du Kim, Nelson Baro

Abstract:

According to a report of the International Atomic Energy Agency (IAEA), approximately 437 nuclear power stations are currently in operation around the world to meet increasing energy demands. Nearly a third of the world's energy demand is met through nuclear power, as it is one of the most efficient and long-lasting sources of energy. However, there are also consequences when a major event takes place at a nuclear power station. Over the past decades, a few major nuclear accidents have occurred around the world. According to the International Nuclear and Radiological Event Scale (INES), six nuclear accidents are considered high-level (high-risk) events: Fukushima Dai-ichi (Level 7), Chernobyl (Level 7), Three Mile Island (Level 5), Windscale (Level 5), Kyshtym (Level 6), and Chalk River (Level 5). Today, many people still have doubts about using nuclear power, and the number of people opposed to it has grown since the serious accident at the Fukushima Dai-ichi nuclear power plant in Japan. In other words, there are public concerns about radiation impacts, centering on Linear No-Threshold (LNT) issues, radiation health effects, radiation protection, and social impacts. This paper addresses these topics by looking back at the history of the major nuclear accidents worldwide, based on INES. The paper concludes that the major nuclear accidents were largely preventable, since most of them were caused by human error; in other words, the human factor played a huge role in the malfunctions and occurrence of most of these events. Correct handling of a crisis, underpinned by a good radiation protection program, has a major impact on society and determines how accepting people are of nuclear power.

Keywords: linear-no-threshold (LNT) issues, radiation health effects, radiation protection, social impacts

Procedia PDF Downloads 243
18190 A Model for Solid Transportation Problem with Three Hierarchical Objectives under Uncertain Environment

Authors: Wajahat Ali, Shakeel Javaid

Abstract:

In this study, we develop a mathematical programming model for a solid transportation problem with three objective functions arranged in hierarchical order. A mathematical programming model with more than one objective function solved in hierarchical order is termed a multi-level programming model. Our study explores a Multi-Level Solid Transportation Problem with Uncertain Parameters (MLSTPWU). The proposed MLSTPWU model consists of three objective functions, viz. minimization of transportation cost, minimization of total transportation time, and minimization of deterioration during transportation. These three objective functions are solved by decision-makers at three consecutive levels. Three constraint functions are added to the model, restricting the total availability, the total demand, and the capacity of the modes of transportation. All parameters involved in the model are assumed to be uncertain in nature. A solution method based on fuzzy logic is discussed to obtain a compromise solution for the proposed model, and a simulated numerical example is presented to establish its efficiency and applicability.
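The fuzzy compromise step mentioned above can be sketched with a standard max-min formulation: each candidate solution gets a linear satisfaction degree per objective (1 at the best observed value, 0 at the worst), and the compromise solution maximizes the minimum satisfaction. The candidate plans and objective values are hypothetical; the paper's actual model also handles the uncertain parameters.

```python
def membership(val, best, worst):
    """Linear satisfaction: 1 at the ideal value, 0 at the worst."""
    return (worst - val) / (worst - best) if worst != best else 1.0

def compromise(candidates):
    k = len(candidates[0])
    best = [min(c[i] for c in candidates) for i in range(k)]
    worst = [max(c[i] for c in candidates) for i in range(k)]
    sat = lambda c: min(membership(c[i], best[i], worst[i]) for i in range(k))
    return max(candidates, key=sat)

# (cost, time, deterioration) for three hypothetical shipping plans,
# all objectives to be minimized.
plans = [(100, 12, 5.0), (120, 8, 4.0), (110, 10, 4.5)]
chosen = compromise(plans)
```

Each extreme plan is perfect on one objective and worst on another, so the balanced middle plan wins the max-min criterion, which is the essence of a fuzzy compromise across hierarchical objectives.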

Keywords: solid transportation problem, multi-level programming, uncertain variable, uncertain environment

Procedia PDF Downloads 83
18189 Experimental and Numerical Analyses of Tehran Research Reactor

Authors: A. Lashkari, H. Khalafi, H. Khazeminejad, S. Khakshourniya

Abstract:

In this paper, a numerical model is presented and used to analyze a steady-state thermo-hydraulic case and a reactivity insertion transient in TRR reference cores. The model predictions are compared with experiments and with PARET code results. The model uses the piecewise constant method for the coupled point kinetics module and the lumped parameter method for the thermal-hydraulics module. The advantages of the piecewise constant method are simplicity, efficiency, and accuracy. The main criterion for the applicability of this model is that the exit coolant temperature remains below the saturation temperature, i.e., no bulk boiling occurs in the core. The calculated values of power and coolant temperature, in the steady state and in the positive reactivity insertion scenario, are in good agreement with the experimental values. The model is thus a useful tool for the transient analysis of most research reactors encountered in practice. The main objective of this work is to use simple calculation methods and benchmark them against experimental data. This model can also be used for training purposes.
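The point kinetics part of such a model can be sketched with a single delayed-neutron group and a piecewise-constant positive reactivity step, the simplest form of the transient analyzed above. The kinetics parameters and step size below are illustrative, not TRR data, and the integration is plain explicit Euler rather than the paper's piecewise constant scheme.

```python
# Minimal one-delayed-group point kinetics with a piecewise-constant
# +0.2 $ reactivity step at t = 1 s. P is normalized power, C the
# precursor concentration; parameters are illustrative assumptions.
beta, lam, Lambda = 0.0065, 0.08, 1e-4   # delayed fraction, decay const (1/s), generation time (s)
rho_step = 0.2 * beta                    # inserted reactivity (absolute)
dt, t_end = 1e-4, 5.0

P = 1.0
C = beta / (lam * Lambda)                # precursor level at equilibrium
t = 0.0
while t < t_end:
    rho = rho_step if t >= 1.0 else 0.0  # piecewise-constant reactivity
    dP = ((rho - beta) / Lambda) * P + lam * C
    dC = (beta / Lambda) * P - lam * C
    P += dP * dt
    C += dC * dt
    t += dt
```

For a sub-prompt-critical insertion like this, the power shows a prompt jump of roughly beta/(beta - rho) = 1.25 followed by a slow exponential rise on the stable reactor period, so P a few seconds after the step sits modestly above 1.25.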

Keywords: thermal-hydraulic, research reactor, reactivity insertion, numerical modeling

Procedia PDF Downloads 401