Search results for: packet loss probability estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6410

5300 A Mathematical Analysis of a Model in Capillary Formation: The Roles of Endothelial Cells, Pericytes and Macrophages in the Initiation of Angiogenesis

Authors: Serdal Pamuk, Irem Cay

Abstract:

Our model is based on the theory of reinforced random walks coupled with Michaelis-Menten mechanisms, which view endothelial cell receptors as the catalysts for transforming both tumor- and macrophage-derived tumor angiogenesis factor (TAF) into proteolytic enzyme, which in turn degrades the basal lamina. The model consists of two main parts. The first part has seven differential equations (DEs) in one space dimension over the capillary, whereas the second part has the same number of DEs in two space dimensions in the extracellular matrix (ECM). We connect these two parts via boundary conditions that move the cells into the ECM in order to initiate capillary formation. When does this movement begin? To address this question, we estimate the thresholds that activate the transport equations in the capillary, using a steady-state analysis of the TAF equation under some assumptions. Once these equations are activated, endothelial, pericyte and macrophage cells begin to move into the ECM, initiating angiogenesis. We believe our results are important for understanding the mechanisms of cell migration that are crucial for tumor angiogenesis. Furthermore, we estimate the long-time behavior of these three cell populations and find that they tend to the transition probability functions as time evolves. We provide numerical solutions that are in good agreement with our theoretical results.

Keywords: angiogenesis, capillary formation, mathematical analysis, steady-state, transition probability function

Procedia PDF Downloads 156
5299 Missing Link Data Estimation with Recurrent Neural Network: An Application Using Speed Data of Daegu Metropolitan Area

Authors: JaeHwan Yang, Da-Woon Jeong, Seung-Young Kho, Dong-Kyu Kim

Abstract:

In intelligent transportation systems (ITS), information on link characteristics is essential for planning and operation. In practice, however, not every link is instrumented with sensors; a link without data is called a "missing link". The purpose of this study is to impute data for these missing links. To obtain these data, this study applies machine learning, and in particular deep learning, so that missing link data can be estimated from the data of instrumented links. For the deep learning model, this study uses a recurrent neural network (RNN) to handle time-series road data. As input data, Dedicated Short-Range Communications (DSRC) data from Dalgubeol-daero in the Daegu Metropolitan Area were fed into the learning process. The network takes 17 links with available data as input and, through 2 hidden layers, estimates the data of 1 missing link. As a result, the forecasted data for the target link show about 94% accuracy compared with the actual data.
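
For readers unfamiliar with the setup, the sketch below shows the general shape of such a model: an LSTM regressor mapping time windows of 17 instrumented links' speeds to one missing link's speed. The window length, layer sizes and random stand-in data are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' code): estimating one missing link's speed
# from 17 instrumented links using an LSTM over sliding time windows.
import numpy as np
import tensorflow as tf

T, N_IN = 12, 17                      # window length, instrumented links (assumed)
X = np.random.rand(1000, T, N_IN)     # stand-in for DSRC speed windows
y = np.random.rand(1000, 1)           # stand-in for the missing link's speed

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(T, N_IN)),  # recurrent layer over time
    tf.keras.layers.Dense(16, activation="relu"),     # hidden layer
    tf.keras.layers.Dense(1),                         # estimated missing-link speed
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2, verbose=0)
```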

Keywords: data estimation, link data, machine learning, road network

Procedia PDF Downloads 508
5298 Formulation of Extended-Release Gliclazide Tablet Using a Mathematical Model for Estimation of Hypromellose

Authors: Farzad Khajavi, Farzaneh Jalilfar, Faranak Jafari, Leila Shokrani

Abstract:

Gliclazide was formulated as an extended-release tablet in 30 and 60 mg dosage forms using hypromellose (HPMC K4M) as a retarding agent. Drug-release profiles were investigated in comparison with the reference products, Diamicron MR 30 and 60 mg tablets. The effects of powder particle size, hypromellose content, tablet hardness, and tablet halving on the drug-release profile were investigated. A mathematical model describing hypromellose behavior during the initial stage of drug release was proposed for estimating the hypromellose content of modified-release gliclazide 60 mg tablets. The model is based on the erosion of hypromellose in the dissolution medium and is applicable to the release profiles of insoluble drugs. Therefore, using the amount of drug dissolved at early time points together with the model, the hypromellose content of a formulation can be predicted. The model was used to predict the HPMC K4M content of modified-release gliclazide 30 mg and extended-release quetiapine 200 mg tablets.
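
The paper's exact erosion equation is not reproduced here, but the following sketch illustrates the general approach: fit a generic Hopfenberg-type surface-erosion law to early release data, whose fitted rate constant could then be calibrated against HPMC content. All numbers are hypothetical.

```python
# Illustrative sketch only: a generic surface-erosion release law fitted to
# early dissolution data; the fitted constant k would, under the paper's idea,
# be related to the hypromellose content via a calibration curve.
import numpy as np
from scipy.optimize import curve_fit

def hopfenberg(t, k, n):
    # Mt/Minf = 1 - (1 - k*t)^n  (surface erosion of a matrix tablet)
    return 1.0 - np.clip(1.0 - k * t, 0.0, None) ** n

t = np.array([5, 10, 15, 20, 30], dtype=float)        # min, early time points
frac = np.array([0.08, 0.15, 0.22, 0.28, 0.40])       # hypothetical fraction released

(k, n), _ = curve_fit(hopfenberg, t, frac, p0=[0.01, 3.0])
print(f"erosion rate constant k = {k:.4f} 1/min, shape n = {n:.2f}")
# A calibration of k versus HPMC content (from known formulations) would then
# let one read off the hypromellose level of an unknown tablet.
```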

Keywords: gliclazide, hypromellose, drug release, modified-release tablet, mathematical model

Procedia PDF Downloads 220
5297 The Integrated Strategy of Maintenance with a Scientific Analysis

Authors: Mahmoud Meckawey

Abstract:

This research deals with one of the most important aspects of the maintenance field: maintenance strategy, the branch concerned with the concepts and schematic thinking behind managing maintenance and dealing with defects in engineering products (buildings, machines, etc.) in general. The paper addresses the following points. i) The engineering product and its technical systems: from a strategic viewpoint, maintenance acts on an engineering product that consists of multiple integrated systems; in fact, no engineering product consists of only one system. We discuss and explain this topic, and from it derive a refined definition of the maintenance process. ii) The factors underlying functional efficiency: the main factors that affect the functional efficiency of systems and engineering products; on this basis we give a technical definition of defects and how they occur. iii) The legality of defect occurrence (legal and illegal defects): assuming that all the factors of functional efficiency have been applied, we discuss the resulting defect behavior. iv) The guarantee, functional span age and technical surplus concepts: complementing the above, and in connection with reliability theory, the probability of failure is mainly of interest at the design stage, where it is used to check and adapt the design of elements. Maintainability works differently, dealing with the actual state of the systems; we therefore work with the complement of the probability-of-failure term, which refers to the actual functional surplus of the systems.
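
The complement idea in point iv) can be made concrete with a one-line calculation; the sketch below assumes, purely for illustration, a Weibull failure law for a single system.

```python
# A minimal numerical illustration of the complement idea described above,
# assuming (purely for illustration) a Weibull failure law and generic values.
import math

def failure_probability(t, beta=1.8, eta=10_000.0):
    """Weibull CDF F(t): probability the system has failed by age t (hours)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

t = 6_000.0                                   # actual age of the system (assumed)
F = failure_probability(t)                    # design-stage quantity
R = 1.0 - F                                   # complement: the "functional surplus"
print(f"F({t:.0f} h) = {F:.3f}  ->  functional surplus R = {R:.3f}")
```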

Keywords: engineering product and technical systems, functional span age, legal and illegal defects, technical and functional surplus

Procedia PDF Downloads 474
5296 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College

Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa

Abstract:

This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by the work on small wind turbines' design by David Wood. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data, obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient-of-power profile of the turbine. Our approach improves upon traditional blade design methods in that it lets us dispense with the assumptions necessary to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations. Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high-glide-ratio airfoil designs, without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
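
As an illustration of the wind-modeling step, the sketch below fits a Weibull distribution, one common candidate for wind-speed data, by maximum likelihood and checks the fit. The speeds are simulated stand-ins for the rooftop measurements; the project's full protocol compares several distributions and estimation methods not reproduced here.

```python
# Sketch of one candidate wind-speed model: a maximum-likelihood Weibull fit
# with a goodness-of-fit check. Data are random stand-ins, not measurements.
import numpy as np
from scipy import stats

wind = np.random.weibull(2.0, size=5000) * 6.0   # stand-in rooftop speeds (m/s)

# ML fit with the location pinned at zero, as is usual for wind data
shape, loc, scale = stats.weibull_min.fit(wind, floc=0)
print(f"Weibull k = {shape:.2f}, c = {scale:.2f} m/s")

# Kolmogorov-Smirnov test against the fitted distribution
D, p = stats.kstest(wind, "weibull_min", args=(shape, loc, scale))
print(f"KS statistic = {D:.3f}, p-value = {p:.3f}")
```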

Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling

Procedia PDF Downloads 231
5295 Predicting Loss of Containment in Surface Pipelines Using Computational Fluid Dynamics and a Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations

Authors: Muhammad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso

Abstract:

Loss of containment is the primary hazard with which process safety management is concerned in the oil and gas industry. Escalation to more serious consequences begins with loss of containment: oil and gas released through leakage or spillage from primary containment can result in pool fires, jet fires and even explosions when it meets the various ignition sources present in operations. The heart of process safety management is therefore avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action before a potential loss of containment. The value of such a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, detecting loss of containment accurately in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting among traditional detection methods. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow and temperature (PVT) points along the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, hence generally not viable from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in pipelines. Mathematical modeling is used to generate simulation data, which are then used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy. While a supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics to model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for the use of data analytics tools and mathematical modeling in developing a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
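
The modeling chain described above can be sketched as follows: features that a CFD simulation would produce (here replaced by random stand-ins) train a supervised classifier for leak detection. The feature set, model choice and thresholds are assumptions for illustration only.

```python
# Conceptual sketch of the simulation-to-ML pipeline: synthetic "simulation"
# features train a supervised leak/no-leak classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Assumed features per snapshot: pressure drop, flow imbalance, temperature shift
X = rng.normal(size=(4000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=4000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["no leak", "leak"]))
```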

Keywords: pipeline, leakage, detection, AI

Procedia PDF Downloads 189
5294 The Probability of Smallholder Broiler Chicken Farmers' Participation in the Mainstream Market within Maseru District in Lesotho

Authors: L. E. Mphahama, A. Mushunje, A. Taruvinga

Abstract:

Although broiler production does not generate large incomes in the smallholder community, it represents a main source of livelihood and part of household nutritional requirements. The market for broiler meat is growing faster than that of any other meat product and is projected to continue growing in the coming decades. However, a multitude of factors shapes whether smallholder broiler farmers participate in mainstream markets. Using data from 217 smallholder broiler farmers, socio-economic and institutional factors in broiler farming were incorporated into a binary model to estimate the probability of broiler farmers' participation in the mainstream markets of the Maseru district in Lesotho. Of the thirteen (13) predictor variables fitted into the model, six (6) variables (household size, number of years in the broiler business, stock size, access to transport, access to extension services and access to market information) had significant coefficients, while seven (7) variables (level of education, marital status, price of broilers, poultry association, access to contracts, access to credit and access to storage) did not have a significant impact. It is recommended that smallholder broiler farmers organize themselves into cooperatives, which would act as a vehicle through which they can access contracts and formal markets. These cooperatives would also facilitate training and workshops on broiler rearing and marketing through extension visits.
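
A minimal sketch of such a binary participation model is given below, assuming a logit link (the abstract does not say whether logit or probit was used) and hypothetical data with a few of the listed predictors.

```python
# Minimal sketch of a binary market-participation model (logit assumed).
# Variable names mirror a subset of the abstract's predictors; data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 217
df = pd.DataFrame({
    "household_size": rng.integers(1, 10, n),
    "years_in_business": rng.integers(0, 20, n),
    "stock_size": rng.integers(50, 2000, n),
    "access_transport": rng.integers(0, 2, n),
})
# Hypothetical outcome: 1 = participates in the mainstream market
z = 0.02 * df["stock_size"] / 100 + 0.8 * df["access_transport"] - 1.0
df["participate"] = (z + rng.normal(scale=1.0, size=n) > 0).astype(int)

X = sm.add_constant(df.drop(columns="participate"))
result = sm.Logit(df["participate"], X).fit(disp=0)
print(result.summary())   # significant coefficients flag the relevant predictors
```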

Keywords: broiler chicken, mainstream market, Maseru district, participation, smallholder farmers

Procedia PDF Downloads 148
5293 A New Microstrip Diplexer Using Coupled Stepped Impedance Resonators

Authors: A. Chinig, J. Zbitou, A. Errkik, L. Elabdellaoui, A. Tajmouati, A. Tribak, M. Latrach

Abstract:

This paper presents a new structure of microstrip band-pass filter (BPF) based on coupled stepped impedance resonators. Each filter consists of two coupled stepped impedance resonators connected to microstrip feed lines. A coupled junction is utilized to connect the two BPFs to the antenna. The two band-pass filters are designed and simulated to operate in the Digital Communication System (DCS) and Industrial, Scientific and Medical (ISM) bands at 1.8 GHz and 2.45 GHz, respectively. The proposed circuit presents good performance, with an insertion loss lower than 2.3 dB and isolation between the two channels greater than 21 dB. The prototype of the optimized diplexer has been investigated numerically using Agilent ADS and verified with CST Microwave Studio software.

Keywords: band pass filter, coupled junction, coupled stepped impedance resonators, diplexer, insertion loss, isolation

Procedia PDF Downloads 430
5292 Downtime Estimation of Building Structures Using Fuzzy Logic

Authors: M. De Iuliis, O. Kammouh, G. P. Cimellaro, S. Tesfamariam

Abstract:

Community resilience has gained significant attention due to recent unexpected natural and man-made disasters. Resilience is the process of maintaining livable conditions in the event of interruptions to normally available services. Estimating the resilience of systems, ranging from individuals to communities, is a formidable task due to the complexity involved. The most challenging parameter in resilience assessment is the 'downtime', the time needed for a system to recover its services following a disaster event. Estimating the exact downtime of a system requires many inputs and resources that are not always obtainable. The uncertainties in downtime estimation are usually handled using probabilistic methods, which necessitate large amounts of historical data. The estimation process also involves ignorance, imprecision, vagueness, and subjective judgment. In this paper, a fuzzy-based approach to estimate the downtime of building structures following earthquake events is proposed. Fuzzy logic can integrate descriptive (linguistic) knowledge and numerical data into the fuzzy system. This ability allows the use of walk-down surveys, which collect data in linguistic or numerical form. The use of fuzzy logic permits a fast and economical estimation of parameters that involve uncertainties. The first step of the method is to determine the building's vulnerability. A rapid visual screening is designed to acquire information about the analyzed building (e.g., year of construction, structural system, site seismicity, etc.). Then, fuzzy logic is implemented using a hierarchical scheme to determine the building damageability, which is the main ingredient for estimating the downtime. Generally, the downtime can be divided into three main components: downtime due to the actual damage (DT1); downtime caused by rational and irrational delays (DT2); and downtime due to utilities disruption (DT3). In this work, DT1 is computed by relating the building damageability results obtained from the visual screening to already-defined component repair times available in the literature. DT2 and DT3 are estimated using the REDi™ Guidelines. The downtime of the building is finally obtained by combining the three components, as sketched below. The proposed method also identifies the downtime corresponding to each of the three recovery states: re-occupancy, functional recovery, and full recovery. Future work is aimed at extending the current methodology from downtime to the resilience of buildings, providing a simple tool that can be used by the authorities for decision making.
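
A toy version of the two ideas just described, fuzzy membership of a screening score in damage levels and the combination of the three downtime components, might look like this (the membership functions, repair times and additive combination are simplifying assumptions, not the paper's hierarchical scheme):

```python
# Toy sketch: triangular fuzzy memberships turn a screening score into
# damageability, which is defuzzified into DT1; DT2 and DT3 are then added.
def triangular(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

score = 0.55                                   # normalized screening score (assumed)
damage = {lvl: triangular(score, *abc) for lvl, abc in
          {"slight": (0.0, 0.2, 0.45),
           "moderate": (0.3, 0.5, 0.7),
           "severe": (0.55, 0.8, 1.0)}.items()}

# Defuzzify via weighted repair times (days, assumed placeholder values)
repair_days = {"slight": 10, "moderate": 60, "severe": 240}
dt1 = sum(damage[l] * repair_days[l] for l in damage) / max(sum(damage.values()), 1e-9)
dt2, dt3 = 30.0, 14.0                          # delays and utilities (assumed)
print(f"DT1 = {dt1:.0f} d, total downtime = {dt1 + dt2 + dt3:.0f} d")
```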

Keywords: resilience, restoration, downtime, community resilience, fuzzy logic, recovery, damage, built environment

Procedia PDF Downloads 158
5291 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition

Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman

Abstract:

Numerous models are used in prediction and decision-making, but most of them are linear, and linear models reach their limits when the data are non-linear, making accurate estimation difficult. Artificial Neural Networks (ANNs) have found extensive acceptance in modeling the complex, non-linear real world, since they offer more general and flexible functional forms than traditional statistical methods. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of a crop's response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore of crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management by detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery to predict wheat chlorophyll in real time. Cloud-free LANDSAT 8 scenes were acquired (February-March 2016-17) at the same time as a ground-truthing campaign in which chlorophyll was measured with a SPAD-502 meter. Several vegetation indices were derived from the LANDSAT 8 imagery using ERDAS Imagine (v. 2014) software for chlorophyll determination: the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI) and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used; the Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. Of the data, 61.7% were used for training the MLP, 28.3% for validation, and the remaining 10% to evaluate and validate the ANN model results. For error evaluation, the sum of squares error and the relative error were used; the model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. MCARI and NDVI were revealed to be the most sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination, R² = 0.93 and 0.90, respectively. The results suggest that the retrieval of crop chlorophyll content from high-spatial-resolution satellite imagery with an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
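
The index formulas (e.g., NDVI = (NIR - Red)/(NIR + Red)) are standard; the sketch below computes two of them from stand-in band reflectances and trains a small MLP regressor against hypothetical SPAD readings, using roughly the paper's train/validation split. It is illustrative, not the paper's MATLAB/SPSS pipeline.

```python
# Sketch of the index-to-chlorophyll regression: standard NDVI/GNDVI formulas
# on synthetic band reflectances, regressed against synthetic SPAD values.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
nir, red, green = (rng.uniform(0.05, 0.6, 300) for _ in range(3))

ndvi = (nir - red) / (nir + red)
gndvi = (nir - green) / (nir + green)
X = np.column_stack([ndvi, gndvi])
spad = 20 + 40 * ndvi + rng.normal(scale=2.0, size=300)   # hypothetical SPAD values

X_tr, X_te, y_tr, y_te = train_test_split(X, spad, test_size=0.283, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {mlp.score(X_te, y_te):.2f}")
```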

Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat

Procedia PDF Downloads 146
5290 Analysis of the Predictive Performance of Value at Risk Estimations in Times of Financial Crisis

Authors: Alexander Marx

Abstract:

Measuring and mitigating market risk is essential for the stability of enterprises, especially for major banking corporations and investment banks. The Value at Risk (VaR) is the risk metric most commonly used by practitioners for this purpose. In past years, the VaR has shown significant weaknesses in predictive performance during times of financial market crisis. To address this issue, the purpose of this study is to investigate VaR estimation models and their predictive performance by applying a series of backtesting methods to the stock market indices of the G7 countries (Canada, France, Germany, Italy, Japan, UK, US) and Europe. The study employs parametric, non-parametric, and semi-parametric VaR estimation models and is conducted over three different periods covering the most recent financial market crises: the overall period (2006–2022), the global financial crisis period (2008–2009), and the COVID-19 period (2020–2022). Since the regulatory authorities have introduced and mandated the Conditional Value at Risk (Expected Shortfall) as an additional regulatory risk management metric, the study also analyzes and compares both risk metrics on their predictive performance.
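
As a concrete example of one non-parametric estimator and one backtest from the families named above, the sketch below computes a historical-simulation 99% VaR and applies Kupiec's proportion-of-failures (POF) test to simulated heavy-tailed returns.

```python
# Sketch: historical-simulation VaR plus the Kupiec POF backtest on
# simulated returns (stand-ins for an index series).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
returns = rng.standard_t(df=4, size=2500) * 0.01     # heavy-tailed daily returns
p = 0.01                                             # 99% VaR

var_99 = -np.quantile(returns[:1500], p)             # estimation window
test = returns[1500:]                                # backtest window
x = int(np.sum(test < -var_99))                      # number of VaR violations
T = len(test)

# Kupiec POF likelihood-ratio statistic, asymptotically chi-square with 1 dof
pi_hat = x / T
lr = -2 * ((T - x) * np.log(1 - p) + x * np.log(p)
           - (T - x) * np.log(1 - pi_hat) - x * np.log(pi_hat))
print(f"VaR(99%) = {var_99:.4f}, violations = {x}/{T}, "
      f"Kupiec p-value = {1 - chi2.cdf(lr, 1):.3f}")
```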

Keywords: value at risk, financial market risk, banking, quantitative risk management

Procedia PDF Downloads 92
5289 Environmental and Economic Impact of Mangrove Deforestation: Case Study of Vadamaradchy East, Sri Lanka

Authors: Kumaraamy Sasikumar

Abstract:

The study was conducted in Vadamarachchi-East in Sri Lanka. Data collection was done over a period of two months, from June to July 2011. The main focus of this study was to examine factors contributing to mangrove deforestation within the study area and the resulting impacts of that deforestation. The study found that the main factors contributing to deforestation include: the long civil war in the region; poverty, which pushed people to clear the forest to earn income through the sale of firewood and timber, among other products; industrial development; increasing demand for farm and settlement land; limited knowledge within the local community; weak government policies and implementation strategies; and natural disasters, especially the destruction caused by the 2004 tsunami. The impacts presented affect both the environment and the economy, including loss of income sources, loss of biodiversity, climate change, desertification, conflicts over the use of forest products, and loss of land productivity due to reduced fertility caused by soil erosion. A few strategies have been put in place by the government to ensure the sustainable use of mangrove forest products, though these have not proved successful in reducing deforestation. The recommendations suggest that the government and other stakeholders work together to ensure the sustainable use of natural resources, for example by implementing laws and regulations aimed at controlling deforestation.

Keywords: deforestation, impacts, actors, environment, economic, sustainable development

Procedia PDF Downloads 352
5288 Repeatable Scalable Business Models: Can Innovation Drive an Entrepreneur's Un-Validated Business Model?

Authors: Paul Ojeaga

Abstract:

Can the level of innovation use drive un-validated business models across regions? To what extent does industrial sector attractiveness drive firms' success across regions at the time of start-up? This study examines the role of innovation in start-up success in six regions of the world (namely Sub-Saharan Africa, the Middle East and North Africa, Latin America, South East Asia Pacific, the European Union and the United States, representing North America) using macroeconomic variables. While there have been studies using firm-level data, results from such studies are not suitable for national policy decisions. The need to inform regional innovation policy also begs for an answer, providing room for this study. Results using dynamic panel estimation show that innovation counts in the early infancy stage of the new-business life cycle. The results are robust even after controlling for time fixed effects, and the study presents variance-covariance-robust standard errors.

Keywords: industrial economics, un-validated business models, scalable models, entrepreneurship

Procedia PDF Downloads 280
5287 Ultra-High Frequency Passive Radar Coverage for Car Detection in Semi-Urban Scenarios

Authors: Pedro Gómez-del-Hoyo, Jose-Luis Bárcena-Humanes, Nerea del-Rey-Maestre, María-Pilar Jarabo-Amores, David Mata-Moya

Abstract:

A study of the coverage achievable by passive radar systems in terrestrial traffic monitoring applications is presented. The study includes the estimation of the bistatic radar cross section of different commercial vehicle models, which exhibit challengingly low values that make detection difficult. A semi-urban scenario is selected to evaluate the impact of the excess propagation losses generated by irregular relief. A bistatic passive radar exploiting the UHF frequencies radiated by digital video broadcasting transmitters is assumed. A general method of coverage estimation using electromagnetic simulators, in combination with the estimated average bistatic radar cross section of cars, is applied. To reduce the computational cost, a hybrid solution is implemented, assuming free space for the target-receiver path but estimating the excess propagation losses for the transmitter-target path.
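
A back-of-the-envelope version of the coverage logic reads as follows: evaluate the bistatic radar equation with a low car RCS, free space on the target-receiver path, and an excess-loss term on the transmitter-target path. All numbers are illustrative assumptions, not the paper's values.

```python
# Sketch: bistatic radar equation with an excess-propagation-loss term.
import math

def received_power_dbm(pt_w, gt, gr, f_hz, rcs_m2, r_tx, r_rx, excess_loss_db):
    lam = 3e8 / f_hz
    pr = (pt_w * gt * gr * lam**2 * rcs_m2) / ((4 * math.pi) ** 3 * r_tx**2 * r_rx**2)
    return 10 * math.log10(pr * 1e3) - excess_loss_db

pr_dbm = received_power_dbm(
    pt_w=5e3, gt=10.0, gr=4.0,       # DVB transmitter/receiver gain terms (assumed)
    f_hz=600e6,                      # UHF broadcast channel (assumed)
    rcs_m2=0.5,                      # low car bistatic RCS (assumed)
    r_tx=20e3, r_rx=2e3,             # transmitter-target / target-receiver ranges
    excess_loss_db=12.0,             # terrain excess propagation loss (assumed)
)
print(f"received echo power ~ {pr_dbm:.1f} dBm")   # compare against sensitivity
```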

Keywords: bistatic radar cross section, passive radar, propagation losses, radar coverage

Procedia PDF Downloads 334
5286 Survey of Prevalence of Noise Induced Hearing Loss in Hawkers and Shopkeepers in Noisy Areas of Mumbai City

Authors: Hitesh Kshayap, Shantanu Arya, Ajay Basod, Sachin Sakhuja

Abstract:

This study was undertaken to measure overall noise levels in different locations/zones and to estimate the prevalence of noise-induced hearing loss among hawkers and shopkeepers in Mumbai, India. A hearing questionnaire developed by the American Academy of Otolaryngology, translated from English into Hindi and validated, was employed as a screening tool for hearing sensitivity. The tool has 14 items, each scored on a scale of 0, 1, 2 and 3; a score of 6 or above indicated some or definite difficulty in hearing in daily activities, and a lower score indicated lesser difficulty or normal hearing. Subjects who scored 6 or above, or who had tinnitus, underwent hearing evaluation by pure-tone audiometry. Environmental noise levels were also measured from morning to evening at the roadside in different locations/hawking zones of Mumbai using a digital sound level meter (SLM9 Agronic 8928B & K type), in dB(A). A maximum noise level of 100.0 dB(A) was recorded during evening hours from Chhatrapati Shivaji Terminus to Colaba, with an overall noise level of 79.0 dB(A); the minimum noise level in this area was 72.6 dB(A) at any given time. A minimum of 54.6 dB(A) was recorded during 8-9 am at Sion Circle. The construction of two-tier flyovers, skywalks, increasing vehicular traffic, high-rise buildings, and other commercial and urbanization activities in Mumbai have most probably increased the overall environmental noise levels, while trees that acted as noise absorbers have been cut owing to rapid construction. The study involved 100 participants in the age range of 18 to 40 years, with a mean age of 29 years (S.D. = 6.49). The 46 participants who had tinnitus or scored 6 or above underwent pure-tone audiometry, and the prevalence of hearing loss in hawkers and shopkeepers was found to be 19% (10% hawkers and 9% shopkeepers). Of those who underwent PTA, 29 of 64 hawkers (42.6%) and 17 of 36 shopkeepers (47.2%), there was no significant difference between the groups in the percentage of noise-induced hearing loss. The results also reveal that 19 (41.3%) of the 46 participants who exhibited tinnitus had mild to moderate sensorineural hearing loss between 3000 Hz and 6000 Hz. The pure-tone audiogram pattern revealed hearing loss at 4000 Hz and 6000 Hz, while hearing at adjacent frequencies was nearly normal; 7 hawkers and 8 shopkeepers had a mild notch, while 3 hawkers and 1 shopkeeper had a moderate notch. It is thus inferred that tinnitus is a strong indicator of the presence of hearing loss and that a 4/6 kHz notch is a strong marker for road/traffic/environmental noise as an occupational hazard for hawkers and shopkeepers. Mass awareness of these occupational hazards, regular hearing check-ups and early intervention, together with sustainable development juxtaposed with social and urban forestry, can help in this regard.

Keywords: NIHL, noise, sound level meter, tinnitus

Procedia PDF Downloads 198
5285 Eye Diagram for a System of Highly Mode Coupled PMD/PDL Fiber

Authors: Suad M. Abuzariba, Liang Chen, Saeed Hadjifaradji

Abstract:

We present an analytical model to evaluate the optical eye diagram, accounting for polarization-mode dispersion (PMD), polarization-dependent loss (PDL), and chromatic dispersion (CD), for a system of highly mode-coupled fiber with a lumped section, for any given optical pulse sequence. We find that, when PDL and the polarization-direction correlation between PMD and PDL are considered, a system of highly mode-coupled fiber with a lumped section can have either a higher or a lower Q-factor than a highly mode-coupled system with the same root-mean-square PMD/PDL values. We also observe that a system of two highly mode-coupled fibers connected together is not equivalent to a single highly mode-coupled fiber when fluctuation is considered.
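
For reference, the Q-factor is typically read off the eye diagram at the decision instant as Q = (μ₁ − μ₀)/(σ₁ + σ₀); the sketch below applies this to stand-in rail samples and converts Q to a Gaussian-noise BER estimate. The noisy levels are placeholders for the model's PMD/PDL-distorted pulses.

```python
# Minimal sketch: Q-factor from sampled eye-diagram rails and the standard
# Gaussian-noise BER approximation.
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(4)
ones = rng.normal(1.0, 0.08, 10_000)    # sampled "1" levels at decision time
zeros = rng.normal(0.0, 0.05, 10_000)   # sampled "0" levels

q = (ones.mean() - zeros.mean()) / (ones.std() + zeros.std())
ber = 0.5 * erfc(q / sqrt(2))           # Gaussian-noise BER estimate
print(f"Q = {q:.2f}, estimated BER = {ber:.2e}")
```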

Keywords: polarization mode dispersion, polarization dependent loss, chromatic dispersion, optical eye diagram

Procedia PDF Downloads 863
5284 Electro-Fenton Degradation of Erythrosine B Using Carbon Felt as a Cathode: Doehlert Design as an Optimization Technique

Authors: Sourour Chaabane, Davide Clematis, Marco Panizza

Abstract:

This study investigates the oxidation of the food dye Erythrosine B (EB) by a homogeneous electro-Fenton process using iron(II) sulfate heptahydrate as the catalyst, carbon felt as the cathode, and Ti/RuO2 as the anode. The treated synthetic wastewater contained 100 mg L⁻¹ of EB at pH 3. The effects of three independent variables were considered for process optimization: applied current intensity (0.1–0.5 A), iron concentration (1–10 mM), and stirring rate (100–1000 rpm). Their interactions were investigated by response surface methodology (RSM) based on a Doehlert design. EB removal efficiency and energy consumption after 30 minutes of electrolysis were taken as the model responses. Analysis of variance (ANOVA) revealed that the quadratic model fitted the experimental data adequately, with R² = 0.9819, adjusted R² = 0.9276 and a low Fisher probability (< 0.0181) for the EB removal model, and R² = 0.9968, adjusted R² = 0.9872 and a low Fisher probability (< 0.0014) for the energy consumption model, reflecting robust statistical significance. The energy consumption model depends significantly on current density, as expected. The foregoing RSM results led to the following optimal conditions for EB degradation: a current intensity of 0.2 A, an iron concentration of 9.397 mM, and a stirring rate of 500 rpm, which gave a maximum decolorization rate of 98.15% with a minimum energy consumption of 0.74 kWh m⁻³ at 30 min of electrolysis.
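
The RSM step can be sketched as an ordinary-least-squares fit of the full second-order (quadratic plus interaction) model commonly used with Doehlert designs; the design points and responses below are synthetic stand-ins, not the study's measurements.

```python
# Sketch: fitting a full second-order response-surface model by least squares.
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(20, 3))          # coded current, [Fe2+], stirring
y = (90 + 5*X[:, 0] + 3*X[:, 1] - 4*X[:, 0]**2 - 2*X[:, 1]**2
     + 2*X[:, 0]*X[:, 1] + rng.normal(scale=1.0, size=20))   # removal %, synthetic

def quadratic_design(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

A = quadratic_design(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ beta
r2 = 1 - np.sum((y - pred)**2) / np.sum((y - y.mean())**2)
print(f"R^2 = {r2:.4f}; coefficients:", np.round(beta, 2))
```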

Keywords: electro-Fenton, erythrosine B, dye, response surface methodology, carbon felt

Procedia PDF Downloads 70
5283 Seismic Hazard Assessment of Tehran

Authors: Dorna Kargar, Mehrasa Masih

Abstract:

Due to its special geological and geographical conditions, Iran has always been exposed to various natural hazards. Earthquakes are natural hazards of a random nature that can cause significant financial damage and casualties; they are a serious threat, especially in areas with active faults. Therefore, considering the population density in some parts of the country, locating and zoning high-risk areas is necessary and significant. In the present study, a seismic hazard assessment using both probabilistic and deterministic methods was carried out for Tehran, the capital of Iran, which lies in the Alborz-Azerbaijan seismotectonic province. The seismicity study covers a radius of 200 km around northern Tehran (35.74° N, 51.37° E) to identify the seismic sources and seismicity parameters of the study region. Geological maps at a scale of 1:250,000 were used to identify the seismic sources. We used the Kijko-Sellevoll method (1992) to estimate the seismicity parameters: the maximum likelihood estimation of the earthquake hazard parameters (maximum regional magnitude Mmax, activity rate λ, and the Gutenberg-Richter parameter b) from incomplete data files, extended to the case of uncertain magnitude values. By combining the seismicity and seismotectonic studies of the site, the acceleration that may occur with a given exceedance probability during the useful life of a structure is calculated with probabilistic and deterministic methods. Applying the results of the seismicity and seismotectonic studies in the project, and applying proper weights to the attenuation relationships used, the maximum horizontal and vertical accelerations for return periods of 50, 475, 950 and 2475 years are calculated. The horizontal peak ground accelerations on the seismic bedrock for the 50-, 475-, 950- and 2475-year return periods are 0.12g, 0.30g, 0.37g and 0.50g, and the vertical peak ground accelerations are 0.08g, 0.21g, 0.27g and 0.36g, respectively.
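
One building block of such seismicity studies is the maximum-likelihood (Aki-Utsu) estimate of the Gutenberg-Richter b-value; the sketch below applies it to a synthetic catalogue (the full Kijko-Sellevoll treatment of uncertain magnitudes and Mmax is not reproduced).

```python
# Sketch: Aki-Utsu maximum-likelihood b-value from a catalogue complete
# above Mmin, with Aki's standard error. Magnitudes are synthetic.
import numpy as np

rng = np.random.default_rng(6)
m_min, b_true = 4.0, 1.0
beta = b_true * np.log(10)
mags = m_min + rng.exponential(1 / beta, size=400)    # G-R magnitudes above Mmin

dm = 0.1                                              # magnitude binning width
b_hat = np.log10(np.e) / (mags.mean() - (m_min - dm / 2))
b_err = b_hat / np.sqrt(len(mags))                    # Aki's standard error
print(f"b = {b_hat:.2f} +/- {b_err:.2f}")
```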

Keywords: peak ground acceleration, probabilistic and deterministic, seismic hazard assessment, seismicity parameters

Procedia PDF Downloads 68
5282 Estimating the Mean Parameter of the Normal Distribution by Maximum Likelihood, Bayes, and Markov Chain Monte Carlo Methods

Authors: Autcha Araveeporn

Abstract:

This paper compares estimators of the mean of a normal distribution obtained by the Maximum Likelihood (ML), Bayes, and Markov Chain Monte Carlo (MCMC) methods. The ML estimator is the average of the data; the Bayes estimator is derived from the prior distribution; and the MCMC estimator is approximated by Gibbs sampling from the posterior distribution. Hypothesis testing is then used to check the robustness of the estimators. Data are simulated from a normal distribution with true mean 2 and variance 4, 9, or 16, with sample sizes of 10, 20, 30, and 50. The results show that the ML and MCMC estimates are perceptibly different from the true parameter when the sample size is 10 or 20 with variance 16. Furthermore, the Bayes estimator, computed from a prior distribution with mean 1 and variance 12, showed a significant difference in the mean with variance 9 at sample sizes 10 and 20.
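
The three estimators can be sketched side by side as follows. The prior follows the paper (mean 1, variance 12), while the known-variance simplification in the Bayes step and the vague conjugate priors in the Gibbs sampler are assumptions for illustration.

```python
# Sketch: ML average, conjugate-prior Bayes posterior mean, and a Gibbs
# sampler for the normal model, on data simulated from N(2, 4).
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(2.0, 2.0, size=30)                 # true mean 2, variance 4
n, xbar = len(x), x.mean()

mu_ml = xbar                                      # ML estimator

# Bayes posterior mean with prior N(1, 12) and variance treated as known (= 4)
mu0, tau0_sq, sigma_sq = 1.0, 12.0, 4.0
w = (n / sigma_sq) / (n / sigma_sq + 1 / tau0_sq)
mu_bayes = w * xbar + (1 - w) * mu0

# Gibbs sampling for (mu, sigma^2) with vague conjugate priors (assumed)
mu, s2, draws = 0.0, 1.0, []
for it in range(3000):
    prec = n / s2 + 1 / 100                       # N(0, 100) prior on mu
    mu = rng.normal((n * xbar / s2) / prec, np.sqrt(1 / prec))
    s2 = 1 / rng.gamma(n / 2 + 0.01, 1 / (0.5 * np.sum((x - mu) ** 2) + 0.01))
    if it >= 500:                                 # discard burn-in
        draws.append(mu)

print(f"ML {mu_ml:.3f} | Bayes {mu_bayes:.3f} | MCMC {np.mean(draws):.3f}")
```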

Keywords: Bayes method, Markov chain Monte Carlo method, maximum likelihood method, normal distribution

Procedia PDF Downloads 355
5281 FPGA Implementation of Adaptive Clock Recovery for TDMoIP Systems

Authors: Semih Demir, Anil Celebi

Abstract:

Circuit-switched networks, widely used until the end of the 20th century, have been transformed into packet-switched networks. Time Division Multiplexing over Internet Protocol (TDMoIP) is a system that enables Time Division Multiplexing (TDM) traffic to be carried over packet-switched networks (PSN). In TDMoIP systems, the devices that send TDM data to the PSN and those that receive it from the network must operate at the same clock frequency. This study aims to implement the clock synchronization process in Field Programmable Gate Array (FPGA) chips using the time information attached to the packets received from the PSN. The designed hardware is verified using datasets obtained for different carrier types, comparing the results with a software model. Field tests are also performed using a real-time TDMoIP system.
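
A conceptual software model of adaptive clock recovery (not the FPGA design itself) is sketched below: the receiver compares timestamp increments with its own oscillator count over a window, averages out packet-delay jitter, and steers the local frequency accordingly. Rates, window sizes and gains are illustrative assumptions.

```python
# Conceptual sketch of timestamp-based adaptive clock recovery.
import numpy as np

rng = np.random.default_rng(8)
nominal_hz = 2_048_000.0                 # E1-rate service clock (illustrative)
dco_hz = nominal_hz * (1 + 30e-6)        # local oscillator starts 30 ppm fast
pkt_period = 0.001                       # one packet per 1 ms of sender time

for window in range(6):
    # local cycle count across 500 packets, with packet-delay jitter (+/- 20 us)
    jitter = rng.uniform(-20e-6, 20e-6, size=2)
    span = 500 * pkt_period + (jitter[1] - jitter[0])
    local_cycles = dco_hz * span
    ts_cycles = 500 * pkt_period * nominal_hz       # timestamp increments
    ratio = ts_cycles / local_cycles
    dco_hz *= (1 + 0.5 * (ratio - 1))               # damped correction step
    print(f"window {window}: offset {(dco_hz/nominal_hz - 1)*1e6:+.3f} ppm")
```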

Keywords: clock recovery on TDMoIP, FPGA, MATLAB reference model, clock synchronization

Procedia PDF Downloads 277
5280 Diffusion Mechanism of Aroma Compound (2-Acetyl-1-Pyrroline) in Rice During Storage

Authors: Mary Ann U. Baradi, Arnold R. Elepaño, Manuel Jose C. Regalado

Abstract:

Aromatic rice has become popular and continues to command a higher price than ordinary rice because of its distinctive scent. Freshly harvested aromatic rice exhibits a strong aromatic scent, which decreases with time and storage conditions. Of the many volatile compounds in aromatic rice, 2-acetyl-1-pyrroline (2AP) is the major compound that gives rice its popcorn-like aroma. The diffusion mechanism of 2AP in rice was investigated, and semi-empirical models explaining 2AP diffusion as affected by temperature and duration were developed. Storage time and temperature affected 2AP loss via diffusion: the amount of 2AP in rice decreased with time, as free 2AP, being volatile, is lost by diffusion. The storage experiment indicated rapid 2AP loss during the first five weeks, which subsequently leveled off, attaining the level of starch-bound 2AP. The decline of 2AP during storage followed an exponential equation and exhibited four stages: initial, second, third and final. Free 2AP is easily lost, while bound 2AP remains, only to be released upon exposure to high temperature, such as during cooking. Both free and bound 2AP are found in the endosperm, while free 2AP is in the bran. Around 63–67% of total 2AP was lost in brown and milled rice of MS 6 paddy kept at ambient conditions. Samples stored at a higher temperature (27°C) recorded higher 2AP loss than those kept at a lower temperature (15°C). The study should help processors understand and control storage parameters to produce high-quality rice.
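
The leveling-off behavior described above suggests a free fraction decaying exponentially toward the bound-2AP floor; the sketch below fits such a curve to hypothetical storage data (the paper's measured concentrations and rate constants are not reproduced).

```python
# Sketch: exponential decay of free 2AP toward the bound-2AP level.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, c_bound, c_free0, k):
    # total 2AP = bound level + free fraction decaying at rate k (1/week)
    return c_bound + c_free0 * np.exp(-k * t)

weeks = np.array([0, 1, 2, 3, 5, 8, 12, 16], dtype=float)
c2ap = np.array([3.0, 2.4, 2.0, 1.7, 1.35, 1.15, 1.08, 1.05])  # ppm, hypothetical

(c_b, c_f0, k), _ = curve_fit(decay, weeks, c2ap, p0=[1.0, 2.0, 0.5])
print(f"bound 2AP ~ {c_b:.2f} ppm, free 2AP ~ {c_f0:.2f} ppm, k ~ {k:.2f}/week")
```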

Keywords: 2-acetyl-1-pyrroline, aromatic rice, diffusion mechanism, storage

Procedia PDF Downloads 336
5279 Weight Loss and Symptom Improvement in Women with Secondary Lymphedema Using Semaglutide

Authors: Shivani Thakur, Jasmin Dominguez Cervantes, Ahmed Zabiba, Fatima Zabiba, Sandhini Agarwal, Kamalpreet Kaur, Hussein Maatouk, Shae Chand, Omar Madriz, Tiffany Huang, Saloni Bansal

Abstract:

The prevalence of lymphedema among women in rural communities highlights the importance of developing effective treatment and prevention methods. Subjects with secondary lymphedema in California's Central Valley were surveyed at 6 surgical clinics to assess demographics and symptoms of lymphedema. Additionally, subjects on semaglutide treatment for obesity and/or T2DM were monitored for their diabetes management, weight-loss progress, and lymphedema symptoms, compared with subjects not treated with semaglutide. Subjects were followed for 12 months. Those treated with semaglutide completed pre-treatment questionnaires and follow-up post-treatment questionnaires at 3, 6, 9 and 12 months, along with medical assessment; untreated subjects completed similar questionnaires. The questionnaires investigated subjective feelings regarding lymphedema symptoms and management using a Likert scale; quantitative leg measurements were collected and blood work reviewed at these appointments. Paired-difference t-tests, chi-squared tests, and independent-sample t-tests were performed. 50 subjects, aged 18-75 years, completed the surveys evaluating secondary lymphedema: 90% female, 69% Hispanic, 45% Spanish-speaking, 42% disabled, 57% employed, 54% with income below 30 thousand dollars, and an average BMI of 40. In both the treatment and non-treatment groups, the most common symptoms were leg swelling (x̄ = 3.2, SD = 1.3), leg pain (x̄ = 3.2, SD = 1.6), loss of daily function (x̄ = 3, SD = 1.4), and negative body image (x̄ = 4.4, SD = 0.54). After more than 3 months of treatment, the semaglutide group compared with the untreated group showed: 55% of treated subjects achieved a 10% weight loss vs. 3% of untreated subjects (average BMI reduction of 11% vs. 2.5%, p < 0.05), and improved subjective feelings about their lymphedema symptoms: leg swelling (x̄ = 2.4, SD = 0.45 vs. x̄ = 3.2, SD = 1.3, p < 0.05), leg pain (x̄ = 2.2, SD = 0.45 vs. x̄ = 3.2, SD = 1.6, p < 0.05), and heaviness (x̄ = 2.2, SD = 0.45 vs. x̄ = 3, SD = 1.56, p < 0.05). Improvement in diabetes management was demonstrated by an average 0.9% decrease in A1C values, compared with 0.1% in the untreated group, p < 0.05. Treated subjects also showed a 6 cm decrease in the circumference of the leg, knee, calf, and ankle, compared with 2 cm in untreated subjects, p < 0.05. Semaglutide was shown to significantly improve weight loss, T2DM management, leg circumference, and the functional, physical and psychosocial symptoms of secondary lymphedema.

Keywords: diabetes, secondary lymphedema, semaglutide, obesity

Procedia PDF Downloads 59
5278 Tranexamic Acid in Orthopedic Surgery in Children

Authors: K. Amanzoui, A. Erragh, M. Elharit, A. Afif, K. Elfakhr, S. Kalouch, A. Chlilek

Abstract:

Orthopedic surgery is a source of perioperative and postoperative bleeding, exposing patients to several risks; different transfusion-sparing measures have been proposed to reduce bleeding during surgery, including tranexamic acid, whose effectiveness has been shown in numerous studies. A prospective analytical study of 50 children was carried out in the orthopedic traumatology operating room of the El Harouchi hospital of the CHU Ibn Rochd in Casablanca over a period of six months (April to October 2022). Patients were randomized into two groups: one receiving tranexamic acid (Group A) and a control group not receiving it (Group B). The average age was 10.3 years, and 58.8% of patients were female. The most frequent type of surgery was for thoracolumbar scoliosis (52%). The average preoperative hemoglobin was 12.28 g/dl in Group A versus 12.67 g/dl in the control group, with no significant difference between the two groups (p = 0.148). Mean intraoperative bleeding was 396.29 ml in Group A versus 412 ml in the control group; no significant difference was observed for this parameter (p = 0.632). The average hemoglobin level in the immediate postoperative period was 10.2 g/dl overall: 10.95 g/dl in Group A versus 10.93 g/dl in Group B. At 24 hours postoperatively, the mean hemoglobin was 10.29 g/dl in Group A against 9.5 g/dl in Group B. Blood loss recorded during the first 24 hours was 209.43 ml in Group A against 372 ml in Group B, a significant difference between the two groups (p = 0.001). There was no statistically significant difference between the two groups in the use of fillers, ephedrine or intraoperative transfusion, whereas for postoperative transfusion a statistically significant difference was found between Group A and Group B. These results suggest that tranexamic acid is an effective, simple, and low-cost way to limit postoperative blood loss and the need for transfusion.

Keywords: tranexamic acid, blood loss, orthopedic surgery, children

Procedia PDF Downloads 65
5277 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once documents have been digitized through a scanning system and binarized, skew correction is required before further image analysis. Research effort has been devoted to this area, with algorithms developed to eliminate document skew. Skew-angle correction algorithms can be compared on performance criteria, the most important being the accuracy of skew-angle detection, the range of detectable skew angles, the processing speed, the computational complexity and, consequently, the memory space used. The standard Hough Transform has successfully been applied to text document skew-angle estimation. However, the accuracy of the standard Hough Transform depends largely on how fine the angular step size is; increasing the accuracy consequently consumes more time and memory, especially where the number of pixels is considerably large. Whenever the Hough Transform is used, there is always a trade-off between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified algorithm resolves the tension between memory space, running time and accuracy. Our algorithm first estimates the angle to zero decimal places using the standard Hough Transform, achieving minimal running time and space but limited accuracy. Then, to increase the accuracy, if the estimated angle is x degrees, the basic algorithm is rerun over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction algorithm for text images is implemented in MATLAB; the memory space and processing time are also tabulated, assuming skew angles between 0° and 45°. The simulation results demonstrated in MATLAB show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
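
The coarse-to-fine idea can be sketched with scikit-image's line Hough transform (the paper's implementation is in MATLAB): a 1-degree sweep fixes the integer angle, then a 0.1-degree sweep around that estimate refines it. The synthetic one-line "document" below is for illustration only.

```python
# Sketch: two-stage (coarse then fine) Hough skew estimation.
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def dominant_angle(binary_img, thetas):
    h, angles, dists = hough_line(binary_img, theta=thetas)
    _, peak_angles, _ = hough_line_peaks(h, angles, dists, num_peaks=1)
    return peak_angles[0]

# Synthetic "text line": a stripe of foreground pixels at a 3.2-degree skew
img = np.zeros((200, 400), dtype=bool)
rows = (100 + np.tan(np.deg2rad(3.2)) * np.arange(400)).astype(int)
img[rows, np.arange(400)] = True

# The Hough angle is the line normal, so a near-horizontal text line peaks
# near 90 degrees; skew = peak angle - 90 degrees.
coarse = np.deg2rad(np.arange(45.0, 135.5, 1.0))        # 1-degree steps
a0 = dominant_angle(img, coarse)
fine = np.linspace(a0 - np.deg2rad(1.0), a0 + np.deg2rad(1.0), 21)  # 0.1-degree
a1 = dominant_angle(img, fine)
print(f"coarse skew: {np.rad2deg(a0) - 90:.0f} deg, "
      f"refined skew: {np.rad2deg(a1) - 90:.1f} deg")
```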

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 154
5276 Adopting Collaborative Business Processes to Prevent the Loss of Information in Public Administration Organisations

Authors: A. Capodieci, G. Del Fiore, L. Mainetti

Abstract:

Recently, the use of Web 2.0 tools has increased in companies and public administration organizations. This phenomenon, known as "Enterprise 2.0", has de facto modified common organizational and operative practices, leading "knowledge workers" to change their working practices through the use of Web 2.0 communication tools. Unfortunately, these tools have not been integrated with existing enterprise information systems, a situation that could potentially lead to a loss of information. This is an important problem in an organizational context, because knowledge of the information exchanged within the organization is needed to increase its efficiency and competitiveness. In this article we demonstrate that it is possible to capture this knowledge using collaboration processes, which are processes of abstraction created in accordance with design patterns and applied to new organizational operative practices.

Keywords: business practices, business process patterns, collaboration tools, enterprise 2.0, knowledge workers

Procedia PDF Downloads 357
5275 The MTHFR C677T Polymorphism Screening: A Challenge in Recurrent Pregnancy Loss

Authors: Rim Frikha, Nouha Bouayed, Afifa Sellami, Nozha Chakroun, Salima Daoud, Leila Keskes, Tarek Rebai

Abstract:

Introduction: Recurrent pregnancy loss (RPL), defined as two or more pregnancy losses, is a serious clinical problem. Methylenetetrahydrofolate reductase (MTHFR) polymorphisms, most commonly the C677T variant, are recognized as an inherited thrombophilia that might affect embryonic development and pregnancy success and cause pregnancy complications such as RPL. Materials and Methods: DNA was extracted from peripheral blood samples, and PCR-RFLP was performed for the molecular diagnosis of the C677T MTHFR polymorphism in 70 patients (35 couples) with more than 2 fetal losses. Aims and Objective: The aim of this study is to determine the frequency of MTHFR C677T among Tunisian couples with RPL and to critically analyze the available literature on the importance of MTHFR polymorphism testing in the management of RPL. Results and Comments: No C677T mutation was detected in the RPL patients; this result may be related to the sample size and to the inclusion criteria (number of abortions). The association between MTHFR polymorphisms and pregnancy complications has been reported, but with controversial results, and there is a lack of evidence for the MTHFR polymorphism testing previously recommended by the ACMG (American College of Medical Genetics). Our study nonetheless highlights the interest of screening for the MTHFR polymorphism, given the potential impact of such a thrombotic molecular defect on pregnancy outcome: folic acid supplementation of these patients during pregnancy can prevent such complications and lead to a successful pregnancy outcome.

Keywords: methylenetetrahydrofolate reductase, C677T, recurrent pregnancy loss, genetic testing

Procedia PDF Downloads 304
5274 Learning a Bayesian Network for Situation-Aware Smart Home Service: A Case Study with a Robot Vacuum Cleaner

Authors: Eu Tteum Ha, Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

The smart home environment, backed by IoT (Internet of Things) technologies, enables intelligent services based on awareness of the situation a user is currently in. One convenient sensor for recognizing situations within a home is the smart meter, which can monitor the status of each electrical appliance in real time. This paper aims at learning a Bayesian network that models the causal relationship between the user's situations and the status of the electrical appliances; using such a network, the current situation can be inferred from the observed status of the appliances. However, learning the conditional probability tables (CPTs) of the network requires many training examples, which cannot be obtained unless the user's situations are closely monitored by some means. This paper proposes a method for learning the CPT entries of the network relying only on user feedback generated occasionally. In our case study with a robot vacuum cleaner, feedback comes in whenever the user gives the robot an order that deviates from its preprogrammed setting. Given a network with randomly initialized CPT entries, the proposed method uses this feedback to adjust the relevant CPT entries in the direction of increasing the probability of recognizing the desired situations. Simulation experiments show that the method can rapidly improve the recognition performance of the Bayesian network using a relatively small number of feedbacks.
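
A toy version of the feedback-driven adjustment might look like this: when the user countermands the robot, the CPT entry linking the observed appliance evidence to the desired situation is nudged upward and the row renormalized. The situations, evidence configurations and step size are assumptions, not the paper's scheme.

```python
# Toy sketch: nudging CPT entries from occasional user feedback.
import numpy as np

situations = ["away", "sleeping", "watching_tv"]
rng = np.random.default_rng(9)
# Rows: P(situation | appliance evidence), one row per evidence configuration
cpt = rng.dirichlet(np.ones(3), size=4)

def apply_feedback(cpt, evidence_idx, desired_situation, lr=0.2):
    """Nudge P(desired | evidence) upward and renormalize the row."""
    row = cpt[evidence_idx].copy()
    row[situations.index(desired_situation)] += lr
    cpt[evidence_idx] = row / row.sum()

# User stops the vacuum while the TV is on -> desired situation: watching_tv
for _ in range(10):                            # repeated similar feedbacks
    apply_feedback(cpt, evidence_idx=2, desired_situation="watching_tv")
print(np.round(cpt[2], 2))                     # mass shifts toward watching_tv
```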

Keywords: Bayesian network, IoT, learning, situation-awareness, smart home

Procedia PDF Downloads 521
5273 Case Study Hyperbaric Oxygen Therapy for Idiopathic Sudden Sensorineural Hearing Loss

Authors: Magdy I. A. Alshourbagi

Abstract:

Background: The National Institute on Deafness and Other Communication Disorders defines idiopathic sudden sensorineural hearing loss (ISSNHL) as an idiopathic loss of hearing of at least 30 dB across 3 contiguous frequencies occurring within 3 days. The most common clinical presentation involves sudden unilateral hearing loss, tinnitus, a sensation of aural fullness and vertigo. The etiologies and pathologies of ISSNHL remain unclear. Several pathophysiological mechanisms have been described, including vascular occlusion, viral infections, labyrinthine membrane breaks, immune-associated disease, abnormal cochlear stress response, trauma, abnormal tissue growth, toxins, ototoxic drugs and cochlear membrane damage. The rationale for using hyperbaric oxygen to treat ISSNHL is supported by an understanding of the high metabolism and paucity of vascularity of the cochlea. The cochlea and the structures within it require a high oxygen supply, yet the direct vascular supply, particularly to the organ of Corti, is minimal. Tissue oxygenation of the intracochlear structures occurs via oxygen diffusion from cochlear capillary networks into the perilymph and the cortilymph, the perilymph being their primary oxygen source. Unfortunately, perilymph oxygen tension is decreased significantly in patients with ISSNHL. To achieve a consistent rise in perilymph oxygen content, the arterial-perilymphatic oxygen concentration difference must be extremely high; this can be restored with hyperbaric oxygen therapy. Subject and Methods: A 37-year-old man presented at the clinic with a five-day history of muffled hearing and tinnitus of the right ear. Symptoms were of sudden onset, with no associated pain, dizziness or otorrhea and no past history of hearing problems or medical illness. Family history was negative, and physical examination was normal. Otologic examination revealed normal tympanic membranes bilaterally, with no evidence of cerumen or middle ear effusion. Tuning-fork examination showed a positive Rinne test bilaterally but lateralization of the Weber test to the left side, indicating right-ear sensorineural hearing loss. Audiometric analysis confirmed sensorineural hearing loss of about 70 dB across all frequencies in the right ear. Routine lab work was within normal limits. A clinical diagnosis of idiopathic sudden sensorineural hearing loss of the right ear was made, and the patient began medical treatment (corticosteroid, vasodilator and HBO therapy). The recommended treatment profile consists of 100% O2 at 2.5 atmospheres absolute for 60 minutes daily (six days per week) for 40 treatments; the optimal number of HBOT treatments varies, depending on the severity and duration of symptoms and the response to treatment. Results: As HBOT is not yet a standard treatment for idiopathic sudden sensorineural hearing loss, it was introduced to this patient as an adjuvant therapy. The HBOT program was scheduled for 40 sessions in a 12-seat multiplace chamber and was started on day seven after the onset of hearing loss. After the tenth session, improvement of both hearing (by audiogram) and tinnitus was obtained in the affected (right) ear. Conclusions: HBOT may be used for idiopathic sudden sensorineural hearing loss as an adjuvant therapy; it may promote oxygenation of the inner ear apparatus and revive hearing ability. Patients who fail to respond to oral and intratympanic steroids may benefit from this treatment. Further investigation is warranted, including animal studies to understand the molecular and histopathological aspects of HBOT, and randomized controlled clinical studies.

Keywords: idiopathic sudden sensorineural hearing loss (ISSNHL), hyperbaric oxygen therapy (HBOT), decibel (dB), oxygen (O2)

Procedia PDF Downloads 431
5272 Fatigue Life Estimation Using N-Code for Drive Shaft of Passenger Vehicle

Authors: Tae An Kim, Hyo Lim Kang, Hye Won Han, Seung Ho Han

Abstract:

The drive shaft of a passenger vehicle transmits the engine torque from the gearbox and differential gears to the wheels. It must also compensate for all variations in angle or length resulting from manoeuvring and deflection, to keep the joints perfectly synchronized. Torsional fatigue failures occur frequently at the connection parts of the spline joints at the end of the drive shaft. In this study, the fatigue life of a passenger-vehicle drive shaft was estimated using finite element analysis. The commercial software n-Code was applied under twisting load conditions, i.e., 0-134 kgf·m and 0-188 kgf·m, in which the shear strain range-fatigue life relationship was taken into account, considering the signed-shear method, the Smith-Watson-Topper equation, the Neuber-Hoffmann-Seeger method, a size sensitivity factor and the surface roughness effect. The estimated fatigue life was verified by a twisting load test of the real drive shaft in a test rig. (Human Resource Training Project for Industry Matched R&D, KIAT, N036200004)
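
As a reference point for the strain-life machinery named above, the sketch below solves the Smith-Watson-Topper relation for the number of reversals, using generic steel constants rather than the drive shaft's actual material data.

```python
# Sketch: solving the Smith-Watson-Topper (SWT) strain-life relation
#   sigma_max * eps_a = (sf'^2 / E) (2N)^(2b) + sf' ef' (2N)^(b+c)
# for the reversals 2N. Material constants are generic steel values (assumed).
from scipy.optimize import brentq

E, sf, b = 206_000.0, 900.0, -0.09     # MPa; fatigue strength coeff./exponent
ef, c = 0.6, -0.56                     # fatigue ductility coeff./exponent

def swt_residual(two_n, sigma_max, eps_a):
    rhs = (sf**2 / E) * two_n**(2 * b) + sf * ef * two_n**(b + c)
    return sigma_max * eps_a - rhs

sigma_max, eps_a = 400.0, 0.002        # MPa, strain amplitude (assumed cycle)
two_nf = brentq(swt_residual, 1e0, 1e9, args=(sigma_max, eps_a))
print(f"estimated life ~ {two_nf / 2:.3e} cycles")
```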

Keywords: drive shaft, fatigue life estimation, passenger vehicle, shear strain range-fatigue life relationship, torsional fatigue failure

Procedia PDF Downloads 274
5271 Studies of the Corrosion Kinetics of Metal Alloys in Stagnant Simulated Seawater Environment

Authors: G. Kabir, A. M. Mohammed, M. A. Bawa

Abstract:

The paper presents the corrosion behavior of Naval Brass, an aluminum alloy and carbon steel in simulated seawater under stagnant conditions. The behavior was characterized in terms of chloride ion concentration, varied between 3.0 wt% and 3.5 wt%, and exposure time. The weight-loss coupon immersion technique was employed: the weight loss of the various alloys was measured and the corrosion rate was determined from it. It was found that the corrosion rates of the alloys are related to the chloride ion concentration, the exposure time and the kinetics of passive-film formation of each alloy. Carbon steel suffers corrosion many times more severely than Naval Brass, indicating that Naval Brass exhibits relatively strong resistance to corrosion in the seawater exposure environment, while the aluminum alloy exhibited even better corrosion resistance than the Naval Brass studied. Despite their prohibitive cost, Naval Brass and the aluminum alloy show beneficial corrosion behavior that can offer a wide range of applications in seashore operations. The corrosion kinetics parameters indicated that the corrosion reaction is limited by diffusive mass transfer of the reacting species rather than being reaction-controlled.
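
The coupon calculation behind such studies is standard (ASTM G31 style): the corrosion rate follows from the measured weight loss, exposed area, immersion time and alloy density. The coupon values below are assumptions for illustration, not the study's measurements.

```python
# Sketch: weight-loss corrosion rate, CR = (8.76e4 * W) / (A * t * rho),
# with W in g, A in cm^2, t in hours, rho in g/cm^3, CR in mm/year.
def corrosion_rate_mm_per_year(weight_loss_g, area_cm2, hours, density_g_cm3):
    return (8.76e4 * weight_loss_g) / (area_cm2 * hours * density_g_cm3)

coupons = {
    "carbon steel": (0.0850, 7.86),   # (weight loss g, density g/cm^3), assumed
    "Naval Brass":  (0.0100, 8.41),
    "Al alloy":     (0.0042, 2.70),
}
for alloy, (w, rho) in coupons.items():
    cr = corrosion_rate_mm_per_year(w, area_cm2=10.0, hours=720, density_g_cm3=rho)
    print(f"{alloy:12s}: {cr:.4f} mm/y")
```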

Keywords: alloys, chloride ions concentration, corrosion kinetics, corrosion rate, diffusion mass transfer, exposure time, seawater, weight loss

Procedia PDF Downloads 301