Search results for: automatic linear modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7588

5578 Variations of Total Electron Content over High Latitude Region during the 24th Solar Cycle

Authors: Arun Kumar Singh, Rupesh M. Das, Shailendra Saini

Abstract:

The effect of the solar cycle and seasons on the total electron content (TEC) has been investigated over the high-latitude region during the 24th solar cycle (2010-2014). The TEC data were obtained with a Global Ionospheric Scintillation and TEC Monitoring (GISTM) system installed at the Indian permanent scientific station 'Maitri' [70˚46’00”S 11˚43’56” E]. The dependence of TEC on the solar cycle has been examined by performing linear regression analysis between the vertical total electron content (VTEC) and the daily total sunspot number (SSN). It has been found that the season and the level of geomagnetic activity have a considerable effect on the VTEC. The VTEC and SSN show better agreement during the summer season than during the winter and equinox seasons, and exceptionally good agreement during the minimum phase of the solar cycle (the year 2010). The correlation between VTEC and SSN is stronger on quiet days than over all days of the years 2010-2014. Further, a saturation effect has been observed during the maximum phase of the 24th solar cycle (the year 2014). It is also found that the Ap index and SSN have a linear correlation (R=0.37) and that most of the geomagnetic activity occurs during the declining phase of the solar cycle.
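For readers who wish to reproduce this kind of VTEC-SSN regression, the following minimal Python sketch illustrates the step; the daily VTEC and sunspot-number arrays are synthetic placeholders, not the Maitri observations.

```python
import numpy as np
from scipy import stats

# Hypothetical daily series standing in for the observed VTEC (TECU) and
# the daily total sunspot number; replace with the real measurements.
rng = np.random.default_rng(0)
ssn = rng.uniform(0.0, 150.0, 365)                       # daily total sunspot number
vtec = 5.0 + 0.12 * ssn + rng.normal(0.0, 2.0, 365)      # synthetic VTEC (TECU)

# Ordinary least-squares regression VTEC = slope * SSN + intercept.
fit = stats.linregress(ssn, vtec)
print(f"slope = {fit.slope:.3f} TECU per sunspot, intercept = {fit.intercept:.2f} TECU")
print(f"correlation coefficient R = {fit.rvalue:.2f}")
```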

Keywords: high latitude ionosphere, sunspot number, correlation, vertical total electron content

Procedia PDF Downloads 187
5577 Multiscale Model of Blast Explosion Human Injury Biomechanics

Authors: Raj K. Gupta, X. Gary Tan, Andrzej Przekwas

Abstract:

Bomb blasts from Improvised Explosive Devices (IEDs) account for the vast majority of terrorist attacks worldwide. Injuries caused by IEDs result from a combination of the primary blast wave, penetrating fragments, and human body accelerations and impacts. This paper presents a multiscale computational model of coupled blast physics, whole human body biodynamics and injury biomechanics of sensitive organs. The disparity of the involved space- and time-scales is used to conduct sequential modeling of an IED explosion event, CFD simulation of blast loads on the human body and FEM modeling of body biodynamics and injury biomechanics. The paper presents simulation results for blast-induced brain injury, coupling macro-scale brain biomechanics and the micro-scale response of sensitive neuro-axonal structures. Validation results on animal models and physical surrogates are discussed. Results of our model can be used to 'replicate' field blast loadings in laboratory-controlled experiments using animal models and in vitro neuro-cultures.

Keywords: blast waves, improvised explosive devices, injury biomechanics, mathematical models, traumatic brain injury

Procedia PDF Downloads 243
5576 Identification of Switched Reluctance Motor Parameters Using Exponential Swept-Sine Signal

Authors: Abdelmalek Ouannou, Adil Brouri, Laila Kadi, Tarik

Abstract:

The switched reluctance motor (SRM) is of major interest in a wide range of applications, such as electric vehicle drives, because of its wide speed range, high performance, low cost, and robustness under degraded operating conditions. The purpose of the paper is to develop a new analytical approach for modeling SRM parameters; an identification scheme is then proposed to obtain the SRM parameters. Since the SRM exhibits highly nonlinear behavior, modeling these devices is difficult, and it is therefore important to develop an accurate model describing the SRM. Furthermore, the SRM is always operated in the magnetically saturated mode to maximize the energy transfer. Accordingly, it is shown that the SRM can be accurately described by a generalized polynomial Hammerstein model, i.e., the parallel connection of several Hammerstein models having polynomial nonlinearity. An analytical identification method is developed using a chirp (exponential swept-sine) excitation signal. Afterward, the parameters of the obtained model have been determined using Finite Element Method analysis. Finally, in order to show the effectiveness of the proposed method, a comparison between the true and estimated models has been performed. The obtained results show that the output responses are very close.
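The identification idea can be illustrated with a short numerical sketch: an exponential swept-sine excites a toy parallel polynomial Hammerstein system, and the FIR coefficients of each polynomial branch are recovered by linear least squares. All signal parameters and filter values below are hypothetical and are not taken from the paper.

```python
import numpy as np
from scipy.signal import chirp

# Hypothetical excitation parameters (not the paper's values).
fs, T = 2000.0, 2.0
t = np.arange(0.0, T, 1.0 / fs)
x = chirp(t, f0=10.0, t1=T, f1=500.0, method='logarithmic')  # exponential swept-sine

def lagged(sig, lag):
    """Signal delayed by `lag` samples, zero-padded at the start."""
    return np.concatenate([np.zeros(lag), sig[:sig.size - lag]])

# Toy parallel polynomial Hammerstein system: y = sum over k of FIR_k applied to x**k.
true_filters = {1: [0.8, -0.3], 2: [0.2, 0.1], 3: [0.05]}
y = np.zeros_like(x)
for k, h in true_filters.items():
    for lag, coef in enumerate(h):
        y += coef * lagged(x**k, lag)

# Least-squares identification of the FIR coefficients of each branch.
order, memory = 3, 3
Phi = np.column_stack([lagged(x**k, m) for k in range(1, order + 1) for m in range(memory)])
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(theta.reshape(order, memory).round(3))   # rows: branch k = 1..3, columns: FIR taps
```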

Keywords: switched reluctance motor, swept-sine signal, generalized Hammerstein model, nonlinear system

Procedia PDF Downloads 230
5575 Convergence and Stability in Federated Learning with Adaptive Differential Privacy Preservation

Authors: Rizwan Rizwan

Abstract:

This paper provides an overview of Federated Learning (FL) and its application in enhancing data security, privacy, and efficiency. FL utilizes three distinct architectures to ensure privacy is never compromised. It involves training individual edge devices and aggregating their models on a server without sharing raw data. This approach not only provides secure models without data sharing but also offers a highly efficient privacy-preserving solution with improved security and data access. We also discuss various frameworks used in FL and its integration with machine learning, deep learning, and data mining. To address the challenges of multi-party collaborative modeling scenarios, we briefly review an FL scheme combined with an adaptive gradient descent strategy and a differential privacy mechanism. The adaptive learning rate algorithm adjusts the gradient descent process to avoid issues such as model overfitting and fluctuations, thereby enhancing modeling efficiency and performance in multi-party computation scenarios. Additionally, to cater to ultra-large-scale distributed secure computing, the research introduces a differential privacy mechanism that defends against various background knowledge attacks.
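A minimal sketch of the kind of scheme reviewed above is given below: federated averaging on a toy linear model with per-client gradient clipping, Gaussian noise in the differential-privacy style, and a simple decaying (adaptive) learning rate. The data, clipping norm and noise scale are illustrative placeholders, not the settings of the reviewed scheme.

```python
import numpy as np

# Toy federated-averaging loop with clipped, noised gradients (DP-style).
rng = np.random.default_rng(1)
n_clients, d = 5, 3
w_true = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(100, d))
    y = X @ w_true + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

def local_update(w, X, y, lr, clip=1.0, sigma=0.05):
    grad = 2.0 * X.T @ (X @ w - y) / len(y)          # mean-squared-error gradient
    norm = np.linalg.norm(grad)
    if norm > clip:                                  # clip to bound sensitivity
        grad *= clip / norm
    grad += rng.normal(scale=sigma * clip, size=d)   # Gaussian noise
    return w - lr * grad

w = np.zeros(d)
for rnd in range(200):
    lr = 0.1 / np.sqrt(rnd + 1)                      # decaying (adaptive) step size
    updates = [local_update(w, X, y, lr) for X, y in clients]
    w = np.mean(updates, axis=0)                     # FedAvg aggregation
print("estimated weights:", w.round(2))
```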

Keywords: federated learning, differential privacy, gradient descent strategy, convergence, stability, threats

Procedia PDF Downloads 15
5574 Automatic MC/DC Test Data Generation from Software Module Description

Authors: Sekou Kangoye, Alexis Todoskoff, Mihaela Barreau

Abstract:

Modified Condition/Decision Coverage (MC/DC) is a structural coverage criterion that is highly recommended or required for safety-critical software. Therefore, many testing standards include this criterion and require it to be satisfied at a particular level of testing (e.g. validation and unit levels). However, a significant amount of time is needed to meet those requirements. In this paper, we propose to automate MC/DC test data generation. Thus, we present an approach to automatically generate MC/DC test data from a software module description written in a dedicated language. We introduce a new merging approach that provides high MC/DC coverage for the description with only a small number of test cases.
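The notion of MC/DC can be made concrete with a brute-force search for independence pairs, i.e., pairs of test vectors that differ only in one condition and flip the decision outcome. The decision used below is a hypothetical example, not one of the paper's module descriptions.

```python
from itertools import product

# Brute-force search for MC/DC independence pairs of a small decision.
conditions = ("A", "B", "C")
decision = lambda A, B, C: A and (B or C)          # hypothetical decision

vectors = list(product([False, True], repeat=len(conditions)))

def independence_pairs(cond_index):
    """Pairs of vectors differing only in one condition and flipping the decision."""
    pairs = []
    for v1 in vectors:
        for v2 in vectors:
            differs_only_here = all((a == b) != (i == cond_index)
                                    for i, (a, b) in enumerate(zip(v1, v2)))
            if differs_only_here and decision(*v1) != decision(*v2):
                pairs.append((v1, v2))
    return pairs

for i, name in enumerate(conditions):
    p = independence_pairs(i)
    print(name, "->", p[0] if p else "no independence pair found")
```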

Keywords: domain-specific language, MC/DC, test data generation, safety-critical software coverage

Procedia PDF Downloads 429
5573 The French Ekang Ethnographic Dictionary. The Quantum Approach

Authors: Henda Gnakate Biba, Ndassa Mouafon Issa

Abstract:

Dictionaries modeled on the Western model [for languages with a tonic accent] are not suitable for tonal languages and do not account for them phonologically, which is why the [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker of this language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing'. Each word in the French dictionary finds its corresponding word in the Ekaη language, and each Ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation and the question of modeling in the human sciences: mathematics, computer science, automated translation and artificial intelligence. When you apply this theory to any text of a folk song in a tonal language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance but also the exact speech of this language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. The experimentation confirming the theory led to a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you request from the machine a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.

Keywords: music, language, entanglement, science, research

Procedia PDF Downloads 57
5572 Feasibility and Obstacles of Air Quality Attainment in Hong Kong from 2019 to 2025

Authors: Xuguo Zhang, Jimmy Fung, Kenneth Leung, Alexis Lau

Abstract:

Fine particulate matter concentrations have been decreasing in the past few years, while ozone concentrations show an increasing trend in the Greater Bay Area (GBA) of China. A series of control policies has been released to mitigate country-wide air pollution; however, the means to effectively evaluate the implemented control measures and to efficiently identify potential mitigation pathways are still limited. By refining an enhanced air-quality-modeling system, this study provides an account of air quality assessments from 2019 to 2025 to appraise the air quality results and improvements under designed scenarios and to assess the optimum scope for tightening the Air Quality Objectives (AQOs). The results show that it is feasible to tighten the 24-hour AQO for SO2 from the World Health Organization air quality guidelines Interim Target-1 (IT-1) level (125 μg/m3) to the IT-2 level (50 μg/m3) with the current number of exceedances allowed (three) remaining unchanged. It is also possible to tighten the annual AQO for PM2.5 from IT-1 (35 μg/m3) to IT-2 (25 μg/m3), and its 24-hour AQO from IT-1 (75 μg/m3) to IT-2 (50 μg/m3), with the number of exceedances allowed increased from the current nine to 35. Regional cooperation under the development of the GBA still needs to be emphasized and strengthened due to the cross-boundary transport characteristics of air pollution.

Keywords: air quality attainment, Hong Kong, mitigation policy, chemical transport modeling, sensitivity analysis

Procedia PDF Downloads 73
5571 Scheduling Building Projects: The Chronographical Modeling Concept

Authors: Adel Francis

Abstract:

Most scheduling methods and software apply critical path logic. This logic schedules activities, applies constraints between these activities, and tries to optimize and level the allocated resources. The extensive use of this logic produces a complex and error-prone network that is hard to present, follow and update. Planning and managing building projects should tackle the coordination of works and the management of limited spaces, traffic, and supplies. Activities cannot be performed without the resources available, and resources cannot be used beyond the capacity of workplaces; otherwise, workspace congestion will negatively affect the flow of works. The objective of space planning is to link the spatial and temporal aspects, promote efficient use of the site, define optimal site occupancy rates, and ensure suitable rotation of the workforce in the different spaces. Chronographic scheduling modelling belongs to this category and models construction operations as well as their processes, logical constraints, association and organizational models, which help to better illustrate the schedule information using multiple flexible approaches. The model defines three categories of areas (punctual, surface and linear) and different layers (space creation, systems, closing off space, finishing, and reduction of space). Chronographical modelling is a more complete communication method, having the ability to alternate from one visual approach to another by manipulating graphics via a set of parameters and their associated values. Each individual approach can help to schedule a certain project type or specialty. Visual communication can also be improved through layering, sheeting, juxtaposition, alterations, and permutations, allowing for groupings, hierarchies, and classification of project information. In this way, graphic representation becomes a living, transformable image, showing valuable information in a clear and comprehensible manner, simplifying site management while utilizing the visual space as efficiently as possible.
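The critical path logic referred to above can be summarised in a few lines: a forward pass computes the earliest dates, a backward pass the latest dates, and activities with zero total float form the critical path. The activity network below is a hypothetical example and does not reproduce the chronographical model itself.

```python
# Minimal forward/backward pass of the critical path method (CPM).
activities = {            # name: (duration, predecessors), listed in topological order
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

early = {}
for name in activities:                                   # forward pass: earliest start/finish
    dur, preds = activities[name]
    start = max((early[p][1] for p in preds), default=0)
    early[name] = (start, start + dur)

project_end = max(finish for _, finish in early.values())
late, critical = {}, []
for name in reversed(list(activities)):                   # backward pass: latest start/finish
    dur, _ = activities[name]
    succs = [n for n, (_, p) in activities.items() if name in p]
    finish = min((late[s][0] for s in succs), default=project_end)
    late[name] = (finish - dur, finish)
    if late[name][0] == early[name][0]:                   # zero total float => critical
        critical.append(name)

print("project duration:", project_end)
print("critical path:", list(reversed(critical)))
```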

Keywords: building projects, chronographic modelling, CPM, critical path, precedence diagram, scheduling

Procedia PDF Downloads 139
5570 MCERTL: Mutation-Based Correction Engine for Register-Transfer Level Designs

Authors: Khaled Salah

Abstract:

In this paper, we present MCERTL (mutation-based correction engine for RTL designs), an automatic error correction technique based on mutation analysis. A mutation-based correction methodology is proposed to automatically fix erroneous RTL designs. The proposed strategy combines the processes of mutation and assertion-based localization. The erroneous statements are mutated to produce possible fixes for the failed RTL code. A concurrent mutation engine is proposed to mitigate the computational cost of running sequential mutant operators. The proposed methodology is evaluated against some benchmarks. The experimental results demonstrate that our proposed method enables us to automatically locate and correct multiple bugs in reasonable time.

Keywords: bug localization, error correction, mutation, mutants

Procedia PDF Downloads 272
5569 Modeling of a UAV Longitudinal Dynamics through System Identification Technique

Authors: Asadullah I. Qazi, Mansoor Ahsan, Zahir Ashraf, Uzair Ahmad

Abstract:

System identification of an Unmanned Aerial Vehicle (UAV), to acquire its mathematical model, is a significant step in the process of aircraft flight automation. The need for a reliable mathematical model is an established requirement for autopilot design, flight simulator development, aircraft performance appraisal, analysis of aircraft modifications, preflight testing of prototype aircraft and investigation of fatigue life and stress distribution, etc. This research is aimed at system identification of a fixed-wing UAV by means of specifically designed flight experiments. The purposely designed flight maneuvers were performed on the UAV, and the aircraft states were recorded during these flights. The acquired data were preprocessed for noise filtering and bias removal, followed by parameter estimation of the longitudinal dynamics transfer functions using the MATLAB system identification toolbox. Black-box transfer function models, in response to elevator and throttle inputs, were estimated using the least-squares error technique. The identification results show a high confidence level and goodness of fit between the estimated model and the actual aircraft response.
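The least-squares estimation step can be sketched as follows for a first-order discrete-time (ARX-type) model; the "true" system, the doublet-like input and the noise level are synthetic stand-ins for the recorded flight data, and the actual work described above was done with the MATLAB toolbox rather than this code.

```python
import numpy as np

# Black-box identification sketch: fit y[k] = a*y[k-1] + b*u[k-1] by least squares.
rng = np.random.default_rng(0)
n = 500
u = np.sign(np.sin(0.05 * np.arange(n)))          # doublet-like excitation
a_true, b_true = 0.9, 0.5
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + rng.normal(scale=0.01)

# Regression matrix built from lagged outputs and inputs.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(f"estimated a = {a_hat:.3f}, b = {b_hat:.3f}  (true: {a_true}, {b_true})")

# Goodness of fit (coefficient of determination) of the one-step prediction.
y_pred = Phi @ theta
ss_res = np.sum((y[1:] - y_pred) ** 2)
ss_tot = np.sum((y[1:] - y[1:].mean()) ** 2)
print("fit R^2:", round(1.0 - ss_res / ss_tot, 4))
```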

Keywords: fixed wing UAV, system identification, black box modeling, longitudinal dynamics, least square error

Procedia PDF Downloads 318
5568 Compact Optical Sensors for Harsh Environments

Authors: Branislav Timotijevic, Yves Petremand, Markus Luetzelschwab, Dara Bayat, Laurent Aebi

Abstract:

Miniaturized optical sensors with remote readout are required for monitoring in harsh electromagnetic environments. For example, in turbo and hydro generators, excessively high vibrations of the end-windings can lead to severe damage, imposing very high additional service costs. A significant change in the generator temperature can also be an indicator of system failure. Continuous monitoring of vibrations, temperature, humidity, and gases is therefore mandatory. The high electromagnetic fields in the generators impose the use of non-conductive devices in order to prevent electromagnetic interference and to electrically isolate the sensing element from the electronic readout. Metal-free sensors are good candidates for such systems since they are non-conductive and immune to very strong electromagnetic fields. We have realized miniature optical accelerometer and temperature sensors for remote sensing of harsh environments using the common, inexpensive silicon Micro Electro-Mechanical System (MEMS) platform. Both devices show a highly linear response. The accelerometer has a deviation within 1% of the linear fit when tested in the range 0 – 40 g. The temperature sensor can provide a measurement accuracy better than 1 °C in the range 20 – 150 °C. The design of other types of sensors for environments with high electromagnetic interference is also discussed.

Keywords: optical MEMS, temperature sensor, accelerometer, remote sensing, harsh environment

Procedia PDF Downloads 355
5567 A Digital Representation of a Microstructure and Determining Its Mechanical Behavior

Authors: Burak Bal

Abstract:

Mechanical characterization tests can come at a considerable cost in time and money for both companies and academics. Transferring laboratory experiments to the computational domain is becoming a trend; accordingly, the literature offers many analytical ways to describe the mechanics of deformation. In our work, we focused on crystal plasticity finite element modeling (CPFEM) analysis of various materials with various crystal structures to predict the stress-strain curve without tensile tests. For the FEM analysis, performed in ABAQUS, a user-defined material subroutine (UMAT) was prepared. The geometry of a specimen was created via the DREAM 3D software, with Euler angles obtained by the Electron Backscatter Diffraction (EBSD) technique as orientation or misorientation inputs. The synthetic crystal created with DREAM 3D is meshed in such a way that the grains inside the crystal are meshed separately, so that the solver can capture the interaction of inter- and intra-granular structures. The mechanical deformation parameters obtained from the literature were put into the Fortran-based UMAT code to describe how the material responds to a load applied from a specific direction. The mechanical response of the synthetic crystal created with DREAM 3D agrees well with the material response reported in the literature.

Keywords: crystal plasticity finite element modeling, ABAQUS, Dream.3D, microstructure

Procedia PDF Downloads 145
5566 Correlations between Wear Rate and Energy Dissipation Mechanisms in a Ti6Al4V–WC/Co Sliding Pair

Authors: J. S. Rudas, J. M. Gutiérrez Cabeza, A. Corz Rodríguez, L. M. Gómez, A. O. Toro

Abstract:

The prediction of the wear rate of rubbing pairs has attracted the interest of many researchers for years. It has recently been proposed that the sliding wear rate can be inferred from the calculation of the energy rate dissipated by the tribological pair. In this paper, some of the dissipative mechanisms present in a pin-on-disc configuration are discussed, and both analytical and numerical calculations are carried out. Three dissipative mechanisms were studied: first, the energy release due to temperature gradients within the solid; second, the heat flow from the solid to the environment; and third, the energy loss due to abrasive damage of the surface. The Finite Element Method was used to calculate the dynamics of heat transfer within the solid, with the aid of commercial software. Validation of the FEM model was assisted by virtual and laboratory experiments using different operating points (sliding velocity and contact geometry). The materials for the experiments were the Ti6Al4V alloy and Tungsten Carbide (WC-Co). The results showed that the sliding wear rate has a linear relationship with the energy dissipation flow. It was also found that energy loss due to micro-cutting is relevant for the system. This mechanism changes if the sliding velocity and pin geometry are modified, although the degradation coefficient continues to exhibit linear behavior. We found that the least relevant dissipation mechanism for all the cases studied is the energy release by temperature gradients in the solid.

Keywords: degradation, dissipative mechanism, dry sliding, entropy, friction, wear

Procedia PDF Downloads 493
5565 Modeling and Simulation of Multiphase Evaporation in High Torque Low Speed Diesel Engine

Authors: Ali Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi

Abstract:

Diesel engines are preferred for their efficiency, reliability, and adaptability. Most research and development until now has been directed towards high-speed diesel engines for commercial use. In these engines, the objective is to optimize acceleration while reducing exhaust emissions to meet international standards. In high torque low speed engines, the requirements are altogether different. These types of engines are mostly used in the maritime industry, the agriculture industry, static engines, compressor engines, etc. High torque low speed engines, on the contrary, are quite often neglected and are known for low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder. Fuel spray dynamics play a vital role in defining mixture formation, fuel consumption, combustion efficiency and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and atomization process in high torque low speed diesel engines is of great importance. Evaporation in the combustion chamber has a strong effect on the efficiency of the engine. In this paper, multiphase evaporation of fuel is modeled for a high torque low speed engine using CFD (computational fluid dynamics) codes. Two distinct phases of evaporation are modeled using modeling software. The basic model equations are derived from the energy conservation and Navier-Stokes equations. The O’Rourke model is used to model the evaporation phases. The results obtained showed a considerable effect on the efficiency of the engine. The evaporation rate of the fuel droplets increases with the increase in vapor pressure. An appreciable reduction in droplet size is achieved by adding convective heat effects in the combustion chamber. By and large, an overall increase in efficiency is observed by modeling the distinct evaporation phases. This increase in efficiency is due to the fact that the droplet size is reduced and the vapor pressure is increased in the engine cylinder.

Keywords: diesel fuel, CFD, evaporation, multiphase

Procedia PDF Downloads 332
5564 Mapping Man-Induced Soil Degradation in Armenia's High Mountain Pastures through Remote Sensing Methods: A Case Study

Authors: A. Saghatelyan, Sh. Asmaryan, G. Tepanosyan, V. Muradyan

Abstract:

One major concern for Armenia has been soil degradation that has emerged as a result of unsustainable management and use of grasslands, which in turn largely impacts the environment, agriculture and, ultimately, human health. Hence, assessment of soil degradation is an essential and urgent objective, set out to measure its possible consequences and to develop a potential management strategy. In recent years, remote sensing (RS) technologies have become an essential tool for assessing pasture degradation. This research was carried out to measure the precision of Linear Spectral Unmixing (LSU) and NDVI-SMA methods in estimating the soil surface components related to degradation (fractional vegetation cover (FVC), bare soil fractions, surface rock cover) and to determine the appropriateness of these methods for mapping man-induced soil degradation in high mountain pastures. Taking into consideration the spatially complex and heterogeneous biogeophysical structure of the studied site, we used high-resolution multispectral QuickBird imagery of a pasture site in one of Armenia’s rural communities - Nerkin Sasoonashen. The accuracy assessment was done by comparing the land cover abundance data derived through the RS methods with the ground truth land cover abundance data. A significant regression was established between the ground truth FVC estimate and both the NDVI-LSU- and LSU-produced vegetation abundance data (R2=0.636 and R2=0.625, respectively). For bare soil fractions, linear regression produced a coefficient of determination of R2=0.708. Because of the limited spectral resolution of the QuickBird imagery, LSU failed in the assessment of surface rock abundance (R2=0.015). This research documents that a reduction in vegetation cover runs in parallel with an increase in man-induced soil degradation, whereas in the absence of man-induced soil degradation the bare soil fraction does not exceed a certain level. The outcomes show that the proposed method of man-induced soil degradation assessment through FVC, bare soil fractions and field data adequately reflects the current status of soil degradation throughout the studied pasture site and may be employed as an alternative to more complicated models for soil degradation assessment.
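The per-pixel principle behind LSU can be shown in a few lines: each pixel spectrum is expressed as a non-negative, sum-to-one mixture of endmember spectra, solved here by non-negative least squares. The 4-band endmember and pixel values below are invented for illustration, not QuickBird measurements.

```python
import numpy as np
from scipy.optimize import nnls

# Minimal per-pixel linear spectral unmixing sketch.
endmembers = np.array([          # rows: bands, columns: vegetation, bare soil, rock
    [0.05, 0.20, 0.30],
    [0.08, 0.25, 0.32],
    [0.06, 0.30, 0.33],
    [0.45, 0.35, 0.34],
])
# A synthetic mixed pixel: 60% vegetation, 30% soil, 10% rock.
pixel = 0.6 * endmembers[:, 0] + 0.3 * endmembers[:, 1] + 0.1 * endmembers[:, 2]

# Non-negative least squares, then renormalisation to enforce sum-to-one.
fractions, _ = nnls(endmembers, pixel)
fractions /= fractions.sum()
print("vegetation, soil, rock fractions:", fractions.round(3))
```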

Keywords: Armenia, linear spectral unmixing, remote sensing, soil degradation

Procedia PDF Downloads 321
5563 X-Ray Dosimetry by a Low-Cost Current Mode Ion Chamber

Authors: Ava Zarif Sanayei, Mustafa Farjad-Fard, Mohammad-Reza Mohammadian-Behbahani, Leyli Ebrahimi, Sedigheh Sina

Abstract:

The fabrication and testing of a low-cost air-filled ion chamber for X-ray dosimetry are studied. The chamber is made of a metal cylinder, a central wire, a BC517 Darlington transistor, a 9 V DC battery, and a voltmeter in order to have a cost-effective means to measure the dose. The output current of the dosimeter is amplified by the transistor and then fed to the large internal resistance of the voltmeter, producing a readable voltage signal. The dose-response linearity of the ion chamber is evaluated for different exposure scenarios of the X-ray tube. kVp values of 70, 90, and 120, and mAs values up to 20 are considered. In all experiments, a solid-state dosimeter (Solidose 400, Elimpex Medizintechnik) is used as a reference device for chamber calibration. Each exposure case is repeated three times, the voltmeter and Solidose readings are recorded, and the mean and standard deviation values are calculated. The calibration curve, derived by plotting voltmeter readings against Solidose readings, provided a linear fit for all tube kVps of 70, 90, and 120, with linear relationships of 99, 98, and 100%, respectively. The study shows the feasibility of achieving acceptable dose measurements with a simplified setup. Further enhancements to the proposed setup include solutions for limiting the leakage current, optimizing the chamber dimensions, utilizing electronic microcontrollers for dedicated data readout, and minimizing the impact of stray electromagnetic fields on the system.

Keywords: dosimetry, ion chamber, radiation detection, X-ray

Procedia PDF Downloads 59
5562 Rule-Based Mamdani Type Fuzzy Modeling of Performances of Anode Side of Proton Exchange Membrane Fuel Cell Spin-Coated with Yttria-Stabilized Zirconia

Authors: Sadık Ata, Kevser Dincer

Abstract:

In this study, the performance of a proton exchange membrane (PEM) fuel cell was experimentally investigated and modelled with the Rule-Based Mamdani-Type Fuzzy (RBMTF) modelling technique. Coating on the anode side of the PEM fuel cell was accomplished by the spin method using Yttria-stabilized zirconia (YSZ). The input parameters voltage density (V/cm2), current density (A/cm2), temperature (°C) and time (s), and the output parameter power density (W/cm2) were described by RBMTF if-then rules. The numerical values of the input and output variables were fuzzified into linguistic variables: Very Very Low (L1), Very Low (L2), Low (L3), Negative Medium (L4), Medium (L5), Positive Medium (L6), High (L7), Very High (L8) and Very Very High (L9) linguistic classes. The comparison between the experimental data and RBMTF was done by using statistical measures such as the absolute fraction of variance (R2). The actual values and RBMTF results indicated that RBMTF can be successfully used for the analysis of the performance of the PEM fuel cell.
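A stripped-down Mamdani-type inference loop is sketched below for one input and one output, using triangular membership functions, min implication, max aggregation and centroid defuzzification; the membership functions and the two rules are illustrative placeholders, not the paper's nine-class (L1-L9) rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

out_universe = np.linspace(0.0, 1.0, 501)        # power density axis (W/cm^2)
out_low = tri(out_universe, 0.0, 0.2, 0.4)
out_high = tri(out_universe, 0.4, 0.7, 1.0)

def infer(current_density):
    # Fuzzification of the crisp input (A/cm^2).
    mu_low = tri(current_density, 0.0, 0.2, 0.5)
    mu_high = tri(current_density, 0.3, 0.7, 1.0)
    # Rules with min implication: IF current LOW THEN power LOW; IF current HIGH THEN power HIGH.
    agg = np.maximum(np.minimum(mu_low, out_low),
                     np.minimum(mu_high, out_high))   # max aggregation
    # Centroid defuzzification.
    return np.sum(agg * out_universe) / np.sum(agg)

print("power density estimate:", round(infer(0.45), 3), "W/cm^2")
```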

Keywords: proton exchange membrane (PEM), fuel cell, rule-based Mamdani-type fuzzy (RBMTF) modeling, yttria-stabilized zirconia (YSZ)

Procedia PDF Downloads 352
5561 Arabic Text Classification: Review Study

Authors: M. Hijazi, A. Zeki, A. Ismail

Abstract:

An enormous amount of valuable human knowledge is preserved in documents. The rapid growth in the number of machine-readable documents for public or private access requires the use of automatic text classification. Text classification can be defined as assigning or structuring documents into a defined set of classes known in advance. Arabic text classification methods have emerged as a natural result of the existence of a massive amount of varied textual information written in the Arabic language on the web. This paper presents a review of the published research on Arabic text classification using the classical data representation, bag of words (BoW), and using conceptual data representation based on semantic resources such as Arabic WordNet and Wikipedia.

Keywords: Arabic text classification, Arabic WordNet, bag of words, conceptual representation, semantic relations

Procedia PDF Downloads 415
5560 A Comparative Study on Automatic Feature Classification Methods of Remote Sensing Images

Authors: Lee Jeong Min, Lee Mi Hee, Eo Yang Dam

Abstract:

Geospatial feature extraction is a very important issue in remote sensing research. Image classification has traditionally been based on statistical techniques, but in recent years data mining and machine learning techniques for automated image processing have been applied to remote sensing, with a focus on the possibility of generating improved results. In this study, artificial neural network and decision tree techniques are applied to classify high-resolution satellite images and are compared with the result of maximum likelihood classification (MLC), a statistical technique, together with an analysis of the pros and cons of each of the techniques.

Keywords: remote sensing, artificial neural network, decision tree, maximum likelihood classification

Procedia PDF Downloads 341
5559 Nonlinear Finite Element Analysis of Concrete Filled Steel I-Girder Bridge

Authors: Waheed Ahmad Safi, Shunichi Nakamura

Abstract:

A concrete-filled steel I-girder (CFIG) bridge was proposed, and its bending and shear strength were confirmed by experiments. In the CFIG, the area surrounded by the upper and lower flanges and the web is filled with concrete, and the section is used at the intermediate supports of a continuous girder. Three-dimensional finite element models were established to simulate the bending and shear behaviors of the CFIG and to clarify the load transfer mechanism. The steel plates and the filled concrete were modeled as three-dimensional 8-node solid elements and the steel reinforcement bars as three-dimensional 2-node truss elements. The elements were mostly divided into a 50 x 50 mm mesh size. A non-linear stress-strain relation is assumed for concrete in compression, including the softening effect after the peak; for concrete in tension, the stress increases linearly until concrete cracking but then decreases due to the tension stiffening effect. The stress-strain relation for the steel plates was tri-linear and that for the reinforcements was bi-linear. The concrete and the steel plates were rigidly connected. The developed FEM model was applied to simulate and analyze the bending behavior of the CFIG specimens. The vertical displacements and the strains of the steel plates and the filled concrete obtained by FEM agreed very well with the test results until the yield load. The specimens collapsed when the upper flange buckled or the concrete spalled off. These phenomena cannot be properly analyzed by FEM, which produces a small discrepancy at the ultimate states. The FEM model was also applied to simulate and analyze the shear tests of the CFIG specimens. The vertical displacements and strains of the steel and concrete calculated by the FEM model agreed well with the test results. A truss action was confirmed by the FEM and the experiment, clarifying that shear forces were mainly resisted by the tension strut of the steel plate and the compression strut of the filled concrete acting in the diagonal direction. A trial design with the CFIG was carried out for a four-span continuous highway bridge, and the design method was established. The construction cost was estimated to be about 12% lower than that of a conventional steel I-section girder.

Keywords: concrete filled steel I-girder, bending strength, FEM, limit states design, steel I-girder, shear strength

Procedia PDF Downloads 209
5558 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Radioanalytical Chemistry Process through Titration-on-a-Chip

Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas

Abstract:

A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV), without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation having an essential role in the control of the nuclear fuel recycling process. The main objective behind the technical optimization of the current ‘beaker’ method was to reduce the amount of radioactive substance to be handled by the laboratory personnel, to ease the instrumentation adjustability within a glove-box environment and to allow a high-throughput analysis for conducting more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion in order to create, inside a 200 μm x 5 cm circular cylindrical micro-channel, a linear concentration gradient in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500-600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, its generation can be fully achieved in less than one second, making it a more time-efficient gradient generation process than other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device represents, therefore, a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.

Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration

Procedia PDF Downloads 382
5557 Fault Analysis of Ship Power System Comprising of Parallel Generators and Variable Frequency Drive

Authors: Umair Ashraf, Kjetil Uhlen, Sverre Eriksen, Nadeem Jelani

Abstract:

Although advancements in technology have increased the reliability and ease of operation of ship power systems, these advancements also add complexity. Ever-increasing nonlinear loads, such as power electronics (PE) devices, affect the stability of the system. Frequent load variations and complex load dynamics are due to the frequency converters and motor drives; these problems are more prominent when the system is connected to a weak grid. In the ship power system, the major consumers are thruster motors for propulsion. Variable frequency drives (VFDs) are used for the control of these motors; mostly, VFDs operate at the nominal voltage of the system. Some of the consumers in the ship operate at a lower voltage than nominal; these consumers are supplied through step-down transformers. In this paper, a vector control scheme is used for the control of both the rectifier and the inverter, and parallel operation of the synchronous generators is also demonstrated. The simulations have been performed with an induction motor as the load on the VFD and a parallel RLC load. Fault analysis has been performed first for the system without the VFD and then for the system with the VFD. Three-phase-to-ground and single-phase-to-ground faults were implemented, and the behavior of the system in both cases was observed.

Keywords: non-linear load, power electronics, parallel operating generators, pulse width modulation, variable frequency drives, voltage source converters, weak grid

Procedia PDF Downloads 565
5556 Railway Ballast Volumes Automated Estimation Based on LiDAR Data

Authors: Bahar Salavati Vie Le Sage, Ismaïl Ben Hariz, Flavien Viguier, Sirine Noura Kahil, Audrey Jacquin, Maxime Convert

Abstract:

The ballast layer plays a key role in railroad maintenance and in the geometry of the track structure. Ballast also holds the track in place as the trains roll over it. Track ballast is packed between the sleepers and on the sides of railway tracks. An imbalance in ballast volume on the tracks can lead to safety issues as well as a quick degradation of the overall quality of the railway segment. If there is a lack of ballast in the track bed during the summer, there is a risk that the rails will expand and buckle slightly due to the high temperatures. Furthermore, knowledge of the ballast quantities that will be excavated during renewal works is important for efficient ballast management. The volume of excavated ballast per meter of track can be calculated based on the excavation depth, the excavation width, the volume of the track skeleton (sleepers and rails) and the sleeper spacing. Since 2012, SNCF has been collecting 3D point cloud data covering its entire railway network by using 3D laser scanning technology (LiDAR). This vast amount of data represents a model of the entire railway infrastructure, allowing various simulations to be conducted for maintenance purposes. This paper aims to present an automated method for ballast volume estimation based on the processing of LiDAR data. The estimation of abnormal ballast volumes on the tracks is performed by analyzing the cross-section of the track. Further, since the amount of ballast required varies depending on the track configuration, knowledge of the ballast profile is required. Prior to track rehabilitation, excess ballast is often present in the ballast shoulders. Based on the 3D laser scans, a Digital Terrain Model (DTM) was generated, and automatic extraction of the ballast profiles from these data is carried out. The surplus in ballast is then estimated by comparing this empirically obtained ballast profile with a geometric model of the theoretical ballast profile thresholds dictated by maintenance standards. Ideally, this excess should be removed prior to renewal works and recycled to optimize the output of the ballast renewal machine. Based on these parameters, an application has been developed to allow the automatic measurement of ballast profiles. We evaluated the method on a 108-kilometer segment of railroad LiDAR scans, and the results show that the proposed algorithm detects ballast surplus amounts close to the total quantities of spoil ballast excavated.
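The surplus estimation step can be sketched numerically: the positive gap between a measured cross-section profile and the theoretical maintenance profile is integrated with the trapezoidal rule and extrapolated along the track. The profiles and segment length below are fabricated, not values extracted from the SNCF LiDAR data.

```python
import numpy as np

# Cross-section surplus sketch: area between measured and theoretical profiles.
offset = np.linspace(-3.0, 3.0, 61)                                   # lateral distance from track axis (m)
theoretical = np.clip(0.5 - 0.25 * np.abs(offset), 0.0, None)         # design profile height (m)
measured = theoretical + np.where(np.abs(offset) > 1.5, 0.08, 0.0)    # excess on the shoulders (m)

surplus = np.clip(measured - theoretical, 0.0, None)
# Trapezoidal integration of the surplus height over the lateral offset -> area in m^2.
surplus_area = np.sum(0.5 * (surplus[1:] + surplus[:-1]) * np.diff(offset))

segment_length = 100.0                                                # metres of track represented
print(f"surplus ballast ~ {surplus_area * segment_length:.1f} m^3 over {segment_length:.0f} m of track")
```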

Keywords: ballast, railroad, LiDAR, point cloud, track ballast, 3D point

Procedia PDF Downloads 97
5555 Speeding up Nonlinear Time History Analysis of Base-Isolated Structures Using a Nonlinear Exponential Model

Authors: Nicolò Vaiana, Giorgio Serino

Abstract:

The nonlinear time history analysis of seismically base-isolated structures can require a significant computational effort when the behavior of each seismic isolator is predicted by adopting the widely used Bouc-Wen differential-equation model. In this paper, a nonlinear exponential model, able to simulate the response of seismic isolation bearings within a relatively large displacement range, is described and adopted in order to reduce the numerical computations and speed up the nonlinear dynamic analysis. Compared to the Bouc-Wen model, the proposed one does not require the numerical solution of a nonlinear differential equation for each time step of the analysis. The seismic response of a 3D base-isolated structure with a lead rubber bearing system subjected to harmonic earthquake excitation is simulated by modeling each isolator using the proposed analytical model. The comparison of the numerical results and computational time with those obtained by modeling the lead rubber bearings using the Bouc-Wen model demonstrates the good accuracy of the proposed model and its capability to significantly reduce the computational effort of the analysis.
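The per-step cost that the proposed model avoids can be made concrete by integrating a single Bouc-Wen element: at every time step the hysteretic variable z requires the solution of a nonlinear differential equation, here advanced with an explicit Euler step. The parameters are generic textbook-style values, not those of the lead rubber bearings studied in the paper, and the exponential model itself is not reproduced since its closed form is not given in the abstract.

```python
import numpy as np

# Baseline Bouc-Wen element under an imposed harmonic displacement.
A, beta, gamma, n = 1.0, 0.5, 0.5, 2.0         # Bouc-Wen shape parameters (generic values)
k, alpha = 1.0e6, 0.1                           # elastic stiffness (N/m), post-/pre-yield ratio

dt = 0.001
t = np.arange(0.0, 5.0, dt)
x = 0.1 * np.sin(2.0 * np.pi * 0.5 * t)         # imposed harmonic displacement (m)
xdot = np.gradient(x, dt)

z = np.zeros_like(t)
for i in range(len(t) - 1):                     # explicit Euler on the hysteretic ODE at each step
    zdot = (A * xdot[i]
            - beta * abs(xdot[i]) * abs(z[i]) ** (n - 1) * z[i]
            - gamma * xdot[i] * abs(z[i]) ** n)
    z[i + 1] = z[i] + dt * zdot

force = alpha * k * x + (1.0 - alpha) * k * z   # isolator restoring force (N)
print("peak restoring force ~", round(force.max() / 1e3, 1), "kN")
```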

Keywords: base isolation, computational efficiency, nonlinear exponential model, nonlinear time history analysis

Procedia PDF Downloads 375
5554 Spatial REE Geochemical Modeling at Lake Acıgöl, Denizli, Turkey: Analytical Approaches on Spatial Interpolation and Spatial Correlation

Authors: M. Budakoglu, M. Karaman, A. Abdelnasser, M. Kumral

Abstract:

The spatial interpolation and spatial correlation of the rare earth elements (REE) of the lake surface sediments of Lake Acıgöl and its surrounding lithological units are carried out by using GIS techniques such as Inverse Distance Weighted (IDW) and Geographically Weighted Regression (GWR). The IDW technique, which performs the spatial interpolation, shows that lithological units such as the Hayrettin Formation north of Lake Acıgöl have higher REE contents, as well as higher ∑LREE and ∑HREE contents, than the lake sediments. However, Eu/Eu* values (based on the chondrite-normalized REE pattern) are higher in some lake surface sediments than in the lithological units, which indicates a negative Eu anomaly. Also, the spatial interpolation of the V/Cr ratio indicated that the Acıgöl lithological units and lake sediments were deposited under oxic and dysoxic conditions. The spatial correlation, in turn, is carried out by the GWR technique. This technique shows a high spatial correlation coefficient between ∑LREE and ∑HREE, which is higher in the lithological units (Hayrettin Formation and Cameli Formation) than in the other lithological units and the lake surface sediments. Also, the correspondence between the REEs and Sc and Al indicates that the REE abundances of the Lake Acıgöl sediments were weathered from the local bedrock around the lake.
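The IDW interpolation used for such maps can be sketched in a few lines: each unsampled location receives a weighted average of the sample values, with weights proportional to an inverse power of distance. The coordinates and concentrations below are invented, not Lake Acıgöl measurements.

```python
import numpy as np

# Minimal inverse distance weighted (IDW) interpolation sketch.
samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # x, y coordinates
values = np.array([120.0, 95.0, 140.0, 80.0])                          # e.g. sum-LREE (ppm)

def idw(point, power=2.0, eps=1e-12):
    d = np.linalg.norm(samples - point, axis=1)
    if np.any(d < eps):                      # query point coincides with a sample
        return values[np.argmin(d)]
    w = 1.0 / d**power                       # inverse-distance weights
    return np.sum(w * values) / np.sum(w)

print("interpolated value at (0.25, 0.50):", round(idw(np.array([0.25, 0.5])), 1))
```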

Keywords: spatial geochemical modeling, IDW, GWR techniques, REE, lake sediments, Lake Acıgöl, Turkey

Procedia PDF Downloads 544
5553 Quality Parameters of Offset Printing Wastewater

Authors: Kiurski S. Jelena, Kecić S. Vesna, Aksentijević M. Snežana

Abstract:

Samples of tap water and wastewater were collected in three offset printing facilities in Novi Sad, Serbia. Ten physicochemical parameters were analyzed in all collected samples: pH, conductivity, m-alkalinity, p-alkalinity, acidity, carbonate concentration, hydrogen carbonate concentration, active oxygen content, chloride concentration and total alkali content. All measurements were conducted using standard analytical and instrumental methods. Comparing the obtained results for tap water and wastewater, a clear quality difference was noticeable, since all physicochemical parameters were significantly higher in the wastewater samples. The study also involves the application of simple linear regression analysis to the obtained dataset. Using the software package ORIGIN 5, the pH value was correlated with the other physicochemical parameters. Based on the obtained values of the Pearson correlation coefficient, a strong correlation between chloride concentration and pH (r = -0.943), as well as between acidity and pH (r = -0.855), was determined. In addition, a statistically significant relationship with pH was obtained only for acidity and chloride concentration, since the values of the F parameter (247.634 and 182.536) were higher than Fcritical (5.59). In this way, the results of the statistical analysis highlighted the most influential parameters of water contamination in offset printing: acidity and chloride concentration. The results showed that the variable dependence could be represented by the general regression model y = a0 + a1x + k, which further resulted in matching graphic regressions.

Keywords: pollution, printing industry, simple linear regression analysis, wastewater

Procedia PDF Downloads 228
5552 Application of Transportation Models for Analysing Future Intercity and Intracity Travel Patterns in Kuwait

Authors: Srikanth Pandurangi, Basheer Mohammed, Nezar Al Sayegh

Abstract:

In order to meet the increasing demand for housing for Kuwaiti citizens, the government authorities in Kuwait are undertaking a series of projects in the form of new large cities outside the current urban area. Al Mutlaa City, located to the north-west of the Kuwait Metropolitan Area, is one such project out of the 15 planned new cities. The city accommodates a wide variety of residential developments, employment opportunities, and commercial, recreational, health care and institutional uses. This paper examines the application of comprehensive transportation demand modeling work undertaken on the VISUM platform to understand the future intracity and intercity travel distribution patterns in Kuwait. The models developed varied in their level of detail: a strategic model update, sub-area models representing the future demand of Al Mutlaa City, and sub-area models built to estimate the demand in the residential neighborhoods of the city. This paper aims at offering a model update framework that facilitates easy integration between sub-area models and strategic national models for unified traffic forecasts. This paper presents the transportation demand modeling results utilized in informing the planning of the multi-modal transportation system for Al Mutlaa City. It also presents the household survey data collection efforts undertaken using GPS devices (for the first time in Kuwait) and notebook-computer-based digital survey forms for interviewing a representative sample of citizens and residents. The survey results formed the basis for estimating the trip generation rates and trip distribution coefficients used in the strategic base year model calibration and validation process.
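The trip distribution stage of such a demand model is often a doubly-constrained gravity model balanced with the Furness (bi-proportional) procedure, as sketched below; the zonal productions, attractions, costs and the deterrence parameter are invented values, not results of the Kuwait surveys.

```python
import numpy as np

# Doubly-constrained gravity model with Furness balancing (toy 3-zone example).
productions = np.array([1000.0, 500.0, 800.0])      # trips produced per zone
attractions = np.array([900.0, 700.0, 700.0])       # trips attracted per zone
cost = np.array([[2.0, 5.0, 9.0],
                 [5.0, 3.0, 6.0],
                 [9.0, 6.0, 2.0]])                   # generalised zone-to-zone travel cost

beta = 0.3
T = np.exp(-beta * cost)                             # negative-exponential deterrence function

for _ in range(50):                                  # Furness (bi-proportional) balancing
    T *= (productions / T.sum(axis=1))[:, None]      # match row (production) totals
    T *= (attractions / T.sum(axis=0))[None, :]      # match column (attraction) totals

print(np.round(T))                                   # origin-destination trip matrix
print("row sums:", np.round(T.sum(axis=1)), "col sums:", np.round(T.sum(axis=0)))
```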

Keywords: innovative methods in transportation data collection, integrated public transportation system, traffic forecasts, transportation modeling, travel behavior

Procedia PDF Downloads 213
5551 Stochastic Modeling and Productivity Analysis of a Flexible Manufacturing System

Authors: Mehmet Savsar, Majid Aldaihani

Abstract:

Flexible Manufacturing Systems (FMS) are used to produce a variety of parts on the same equipment. Therefore, their utilization is higher than that of traditional machining systems. Higher utilization, on the other hand, results in more frequent equipment failures and an additional need for maintenance. Therefore, it is necessary to carefully analyze the operational characteristics and productivity of FMS, or of Flexible Manufacturing Cells (FMC), which are smaller configurations of FMS, before installation or during their operation. Appropriate models should be developed to determine production rates based on operational conditions, including equipment reliability, availability, and repair capacity. In this paper, a stochastic model is developed for an automated FMC system, which consists of two machines served by two robots and a single repairman. The model is used to determine system productivity and equipment utilization under different operational conditions, including random machine failures, random repairs, and limited repair capacity. The results are compared to previous study results for an FMC system with sufficient repair capacity assigned to each machine. The results show that the model will be useful for design engineers and operational managers to analyze the performance of manufacturing systems at the design or operational stage.

Keywords: flexible manufacturing, FMS, FMC, stochastic modeling, production rate, reliability, availability

Procedia PDF Downloads 511
5550 Logical-Probabilistic Modeling of the Reliability of Complex Systems

Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia

Abstract:

The paper presents logical-probabilistic methods, models and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment of systems was created. The reliability assessment process included the following stages, which were reflected in the application: 1) Construction of a graphical scheme of the structural reliability of the system; 2) Transformation of the graphic scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) Description of the system operability condition with a logical function in the form of a disjunctive normal form (DNF); 4) Transformation of the DNF into an orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) Replacement of the logical elements with probabilistic elements in the ODNF, obtaining a reliability estimation polynomial and quantifying reliability; 6) Calculation of the weights of the elements. Using the logical-probabilistic methods, models and algorithms discussed in the paper, special software was created, by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems and the research and design of systems with optimal structure are carried out.
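The quantity produced by stages 3-5 can be checked on a small example by brute force: the operability DNF below is evaluated over all element states, and the probabilities of the operable states are summed, which is the same system reliability the orthogonalised polynomial yields. The DNF and the element reliabilities are hypothetical, not a system from the paper.

```python
from itertools import product

# Brute-force check of a logical-probabilistic reliability computation.
p = {"x1": 0.95, "x2": 0.90, "x3": 0.85}                 # element reliabilities

def operable(x1, x2, x3):
    """DNF of shortest successful paths: (x1 and x2) or (x1 and x3) or (x2 and x3)."""
    return (x1 and x2) or (x1 and x3) or (x2 and x3)

reliability = 0.0
for states in product([0, 1], repeat=3):                 # enumerate all element states
    prob = 1.0
    for name, s in zip(p, states):
        prob *= p[name] if s else 1.0 - p[name]
    if operable(*states):
        reliability += prob

print("system reliability:", round(reliability, 5))
```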

Keywords: Complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability, weight of element

Procedia PDF Downloads 57
5549 Cross-Comparison between Land Surface Temperature from Polar and Geostationary Satellite over Heterogenous Landscape: A Case Study in Hong Kong

Authors: Ibrahim A. Adeniran, Rui F. Zhu, Man S. Wong

Abstract:

Owing to the insufficient spatial representativeness and continuity of in situ temperature measurements from weather stations (WS), the use of temperature measurements from WS for large-range diurnal analysis in heterogeneous landscapes has been limited. This has made the accurate estimation of land surface temperature (LST) from remotely sensed data more crucial. Moreover, the study of the dynamic interaction between the atmosphere and the physical surface of the Earth could be enhanced at both annual and diurnal scales by using optimal LST data derived from satellite sensors. The tradeoff between the spatial and temporal resolution of LSTs from satellite thermal infrared sensors (TIRS) has, however, been a major challenge, especially when high spatiotemporal LST data are recommended. It is well known from the existing literature that polar satellites have the advantage of high spatial resolution, while geostationary satellites have a high temporal resolution. Hence, this study is aimed at designing a framework for the cross-comparison of LST data from polar and geostationary satellites in a heterogeneous landscape. This could help to understand the relationship between the LST estimates from the two satellites and, consequently, their integration in diurnal LST analysis. Landsat-8 satellite data will be used as the representative of the polar satellites due to the availability of its long-term series, while the Himawari-8 satellite will be used as the data source for the geostationary satellites because of its improved TIRS. The Hong Kong Special Administrative Region (HK SAR) will be selected as the study area due to the heterogeneity of the landscape of the region. LST data will be retrieved from both satellites using the split-window algorithm (SWA), and the resulting data will be validated by comparing the satellite-derived LST data with temperature data from automatic WS in HK SAR. The LST data will then be separated based on the land use classification in HK SAR using the Global Land Cover by National Mapping Organizations version 3 (GLCNMO 2013) data. The relationship between the LST data from Landsat-8 and Himawari-8 will then be investigated based on the land-use class and over different seasons of the year in order to account for seasonal variation in their relationship. The resulting relationship will be spatially and statistically analyzed and graphically visualized for detailed interpretation. Findings from this study will reveal the relationship between the two satellite datasets based on the land use classification within the study area and the seasons of the year. While the information provided by this study will help in the optimal combination of LST data from polar (Landsat-8) and geostationary (Himawari-8) satellites, it will also serve as a roadmap for annual and diurnal urban heat island (UHI) analysis in Hong Kong SAR.
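A generalized split-window retrieval of the kind referred to above combines the brightness temperatures of two adjacent thermal bands with the band emissivities, as in the sketch below. The coefficients b0..b6 are placeholders that would normally come from radiative-transfer simulations for the specific sensor, water-vapour class and view angle; the input temperatures and emissivities are likewise invented.

```python
# Sketch of a generalized split-window LST retrieval from two thermal bands.
b = [0.0, 1.0, 0.2, -0.3, 4.0, 10.0, -20.0]      # hypothetical placeholder coefficients

def split_window_lst(t_i, t_j, eps_i, eps_j):
    """LST (K) from two brightness temperatures (K) and band emissivities."""
    eps = 0.5 * (eps_i + eps_j)                  # mean emissivity
    d_eps = eps_i - eps_j                        # emissivity difference
    mean_t = 0.5 * (t_i + t_j)
    diff_t = 0.5 * (t_i - t_j)
    return (b[0]
            + (b[1] + b[2] * (1 - eps) / eps + b[3] * d_eps / eps**2) * mean_t
            + (b[4] + b[5] * (1 - eps) / eps + b[6] * d_eps / eps**2) * diff_t)

print("LST ~", round(split_window_lst(295.2, 294.1, 0.975, 0.972), 2), "K")
```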

Keywords: automatic weather station, Himawari-8, Landsat-8, land surface temperature, land use classification, split window algorithm, urban heat island

Procedia PDF Downloads 63