Search results for: 6D posture estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2037

807 Evaluation of Geomechanical and Geometrical Parameters’ Effects on Hydro-Mechanical Estimation of Water Inflow into Underground Excavations

Authors: M. Mazraehli, F. Mehrabani, S. Zare

Abstract:

In general, mechanical and hydraulic processes are not independent of each other in jointed rock masses. Therefore, the study of hydro-mechanical coupling of geomaterials should be a center of attention in rock mechanics. Rocks naturally contain discontinuities whose presence strongly influences the mechanical and hydraulic characteristics of the medium. Because of this effect, experimental investigations on intact rock alone cannot characterize jointed rock mass behavior. Hence, numerical methods are used for this purpose. In this paper, water inflow into a tunnel beneath a significant water table has been estimated using the hydro-mechanical discrete element method (HM-DEM). In addition, the effects of geomechanical and geometrical parameters, including the constitutive model, friction angle, joint spacing, dip of joint sets, and stress factor, on the estimated inflow rate have been studied. The results demonstrate that the inflow rates obtained from different constitutive models are not identical, and that the inflow rate decreases with increasing joint spacing and stress factor.

Keywords: distinct element method, fluid flow, hydro-mechanical coupling, jointed rock mass, underground excavations

Procedia PDF Downloads 148
806 Pseudo Modal Operating Deflection Shape Based Estimation Technique of Mode Shape Using Time History Modal Assurance Criterion

Authors: Doyoung Kim, Hyo Seon Park

Abstract:

Studies of System Identification (SI) based on Structural Health Monitoring (SHM) have been actively conducted for structural safety. Recently, SI techniques have developed rapidly under the output-only SI paradigm for estimating modal parameters. Representative output-only SI methods, such as Frequency Domain Decomposition (FDD) and Stochastic Subspace Identification (SSI), rely on algorithms based on orthogonal decomposition, such as the singular value decomposition (SVD). However, the SVD leads to a high computational cost when estimating modal parameters. This paper proposes a technique to estimate mode shapes at lower computational cost. The technique extracts pseudo modal Operating Deflection Shapes (ODS) through bandpass filtering and introduces a time history Modal Assurance Criterion (MAC). Finally, the mode shape can be estimated from the pseudo modal ODS and the time history MAC. Analytical simulations of vibration measurement were performed, and the mode shapes and computation times of a representative SI method and the proposed method were compared.
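
The abstract does not give the time history MAC in closed form, but the classical MAC it builds on is standard. A minimal Python sketch comparing two mode shape vectors (the example vectors are placeholders):

    import numpy as np

    def mac(phi_i: np.ndarray, phi_j: np.ndarray) -> float:
        """Modal Assurance Criterion between two mode shape vectors:
        MAC = |phi_i^H phi_j|^2 / ((phi_i^H phi_i)(phi_j^H phi_j)),
        1 for perfectly correlated shapes, 0 for orthogonal ones."""
        num = np.abs(np.vdot(phi_i, phi_j)) ** 2
        den = np.real(np.vdot(phi_i, phi_i)) * np.real(np.vdot(phi_j, phi_j))
        return float(num / den)

    # A shape compared with a scaled copy of itself yields MAC = 1.
    phi = np.array([0.2, 0.5, 0.9, 1.0])
    print(mac(phi, 3.0 * phi))                          # ~1.0
    print(mac(phi, np.array([1.0, -0.4, 0.1, -0.8])))   # < 1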

Keywords: modal assurance criterion, mode shape, operating deflection shape, system identification

Procedia PDF Downloads 391
805 Artificial Neural Network Based Approach for Estimation of Individual Vehicle Speed under Mixed Traffic Condition

Authors: Subhadip Biswas, Shivendra Maurya, Satish Chandra, Indrajit Ghosh

Abstract:

Developing speed model is a challenging task particularly under mixed traffic condition where the traffic composition plays a significant role in determining vehicular speed. The present research has been conducted to model individual vehicular speed in the context of mixed traffic on an urban arterial. Traffic speed and volume data have been collected from three midblock arterial road sections in New Delhi. Using the field data, a volume based speed prediction model has been developed adopting the methodology of Artificial Neural Network (ANN). The model developed in this work is capable of estimating speed for individual vehicle category. Validation results show a great deal of agreement between the observed speeds and the predicted values by the model developed. Also, it has been observed that the ANN based model performs better compared to other existing models in terms of accuracy. Finally, the sensitivity analysis has been performed utilizing the model in order to examine the effects of traffic volume and its composition on individual speeds.
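
As a rough illustration of a volume-based ANN speed model of the kind described above, the sketch below fits a small feed-forward network on synthetic data; the four classified-volume inputs and the single-category speed target are assumptions for illustration, not the paper's actual variables:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Hypothetical inputs: classified counts (veh/5 min) of cars, two-wheelers,
    # buses/trucks and autorickshaws; target: mean speed of one category (km/h).
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 200, size=(500, 4))
    y = 55 - 0.05 * X.sum(axis=1) + rng.normal(0, 2, 500)  # synthetic stand-in

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,),
                                       max_iter=5000, random_state=0))
    model.fit(X, y)
    print(model.predict(X[:3]))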

Keywords: speed model, artificial neural network, arterial, mixed traffic

Procedia PDF Downloads 369
804 Statistical Inferences for GQARCH-Itô-Jumps Model Based on the Realized Range Volatility

Authors: Fu Jinyu, Lin Jinguan

Abstract:

This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the “GQARCH-Itô-Jumps model.” We adopt realized range-based threshold estimation for the high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimates. The asymptotic theory is mainly established for the proposed estimators in the case of finite activity jumps. Moreover, simulation studies are implemented to check the finite sample performance of the proposed methodology. Specifically, we demonstrate how the proposed approach can be applied in practice to financial data.
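
In its plain (un-thresholded) Parkinson form, the realized range estimator the abstract favours is straightforward to compute; the threshold correction for jumps used in the paper is omitted in this sketch, and the high/low values are placeholders:

    import numpy as np

    def realized_range(high: np.ndarray, low: np.ndarray) -> float:
        """Realized range volatility over one day from intra-day interval
        highs/lows: sum of squared log ranges scaled by 1/(4 ln 2), the
        Parkinson constant that makes it unbiased for Brownian motion."""
        log_range = np.log(high) - np.log(low)
        return float(np.sum(log_range ** 2) / (4.0 * np.log(2.0)))

    # High/low of intra-day bins over one trading day (placeholder values).
    h = np.array([100.2, 100.5, 100.4, 100.9])
    l = np.array([99.8, 100.0, 100.1, 100.3])
    print(realized_range(h, l))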

Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate

Procedia PDF Downloads 136
803 Absorbed Dose Estimation of ⁶⁸Ga-EDTMP in Human Organs

Authors: S. Zolghadri, H. Yousefnia, A. R. Jalilian

Abstract:

Bone metastases are observed in a wide range of cancers, leading to intolerable pain. Since early detection can help physicians decide on the type of treatment, various radiopharmaceuticals using phosphonates, such as ⁶⁸Ga-EDTMP, have been developed. In this work, given the importance of absorbed dose, the human absorbed dose of this new agent was calculated for the first time based on biodistribution data in wild-type rats. ⁶⁸Ga was obtained from a ⁶⁸Ge/⁶⁸Ga generator with radionuclidic and radiochemical purity higher than 99%. The radiolabeled complex was prepared under optimized conditions. Its radiochemical purity was checked by the instant thin layer chromatography (ITLC) method using Whatman No. 2 paper and saline, and the results confirmed a radiochemical purity higher than 99%. The radiolabeled complex was injected into the wild-type rats, and its biodistribution was studied for up to 120 min. As expected, major accumulation was observed in the bone. The absorbed dose in each human organ was calculated from the rat biodistribution data using the RADAR method. Bone surface and bone marrow received the highest absorbed doses, 0.112 and 0.053 mSv/MBq, respectively. According to these results, the radiolabeled complex is a suitable and safe option for PET bone imaging.
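
A schematic of the RADAR-style calculation described above: integrate the organ time-activity curve to obtain the time-integrated activity coefficient, then multiply by a tabulated dose factor. All numbers below are placeholders rather than the study's data; only the roughly 68-minute physical half-life of ⁶⁸Ga is factual:

    import numpy as np

    # Times (h) and fraction of injected activity in an organ, as measured in rats.
    t = np.array([0.25, 0.5, 1.0, 2.0])          # hypothetical sampling times
    frac = np.array([0.30, 0.34, 0.31, 0.25])    # hypothetical bone uptake

    # Physical decay of Ga-68 (T1/2 ~ 68 min) bounds the integral tail.
    lam = np.log(2) / (68.0 / 60.0)              # 1/h

    # Time-integrated activity coefficient: trapezoid over the samples plus a
    # tail assuming only physical decay after the last time point.
    a_tilde = np.trapz(frac, t) + frac[-1] / lam

    # RADAR-style dose D = N * DF, with DF a tabulated dose factor
    # (mSv per MBq-h); the value below is a placeholder, not a published factor.
    DF_bone_surface = 0.05
    print("absorbed dose ~", a_tilde * DF_bone_surface, "mSv/MBq")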

Keywords: absorbed dose, EDTMP, ⁶⁸Ga, rats

Procedia PDF Downloads 183
802 Introduction to Various Innovative Techniques Suggested for Seismic Hazard Assessment

Authors: Deepshikha Shukla, C. H. Solanki, Mayank K. Desai

Abstract:

Amongst all natural hazards, earthquakes have the potential to cause the greatest damage. Since earthquake forces are random in nature and unpredictable, their quantification is essential for hazard assessment. The time and place of a future earthquake are both uncertain. Since earthquakes can neither be prevented nor predicted, engineers have to design and construct in such a way that the damage to life and property is minimized. Seismic hazard analysis plays an important role in the design of earthquake-resistant structures by providing rational values of the input parameters. This paper discusses both mathematical and computational methods adopted by researchers globally over the past five years. Mathematical approaches involving the concepts of Poisson’s ratio, convex set theory, empirical Green’s functions, Bayesian probability estimation applied to seismic hazard, and FOSM (first-order second-moment) algorithms are discussed. Computational approaches, including the numerical model SSIFiBo developed in MATLAB to study dynamic soil-structure interaction, are covered as well, together with GIS-based tools, which are predominantly used in the assessment of seismic hazards.

Keywords: computational methods, MATLAB, seismic hazard, seismic measurements

Procedia PDF Downloads 316
801 On Estimating the Low Income Proportion with Several Auxiliary Variables

Authors: Juan F. Muñoz-Rosas, Rosa M. García-Fernández, Encarnación Álvarez-Verdejo, Pablo J. Moya-Fernández

Abstract:

Poverty measurement is a very important topic in many studies in the social sciences. One of the most important indicators when measuring poverty is the low income proportion, which gives the proportion of people in a population classified as poor. This indicator is generally unknown, and for this reason it is estimated using survey data obtained from official surveys carried out by statistical agencies such as Eurostat. The main feature of such survey data is that they contain several variables. The variable used to estimate the low income proportion is called the variable of interest. The survey data may also contain additional variables, known as auxiliary variables, related to the variable of interest, and when this is the case, they can be used to improve the estimation of the low income proportion. In this paper, we use Monte Carlo simulation studies to analyze numerically the performance of estimators based on several auxiliary variables. In this simulation study, we considered real data sets obtained from the 2011 European Union Statistics on Income and Living Conditions (EU-SILC). The results indicate that the estimators based on auxiliary variables are more accurate than the naive estimator.
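
For context, the baseline (naive) design-based estimator of the low income proportion can be sketched as follows; the paper's auxiliary-variable estimators refine this starting point. The 60%-of-median poverty line follows the usual Eurostat convention, and the sample values are hypothetical:

    import numpy as np

    def low_income_proportion(y, pi, poverty_line=None):
        """Horvitz-Thompson estimate of the low income proportion.

        y  : sampled incomes
        pi : first-order inclusion probabilities of the sampled units
        The poverty line defaults to 60% of the weighted median income.
        """
        y, w = np.asarray(y, float), 1.0 / np.asarray(pi, float)
        if poverty_line is None:
            order = np.argsort(y)
            cumw = np.cumsum(w[order]) / w.sum()
            poverty_line = 0.6 * y[order][np.searchsorted(cumw, 0.5)]
        return np.sum(w * (y < poverty_line)) / np.sum(w), poverty_line

    y = [5_000, 12_500, 15_000, 22_000, 30_000, 41_000]   # hypothetical incomes
    pi = [0.01, 0.01, 0.02, 0.02, 0.03, 0.03]             # hypothetical design
    print(low_income_proportion(y, pi))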

Keywords: inclusion probability, poverty, poverty line, survey sampling

Procedia PDF Downloads 431
800 Chemical Characterization and Antioxidant Capacity of Flour From Two Soya Bean Cultivars (Glycine Max)

Authors: Meziani Samira, Menadi Noreddine, Labga Lahouaria, Chenni Fatima Zohra, Toumi Asma

Abstract:

A comparative study between two varieties of soya bean was carried out in this work. It consisted of preparing a by-product (flour) from two varieties of soybean, including a Chinese variety imported and marketed in Algeria. The chemical composition in terms of ash, protein and fat was determined. The minerals potassium and sodium were measured by flame spectrophotometry. In addition, the polyphenol content was estimated and the antioxidant activity of the methanolic flour extracts was evaluated with the Ferric Reducing Antioxidant Power (FRAP) assay. The results revealed that the soy flours from the two cultivars contained, on average, 8% moisture, more than 50% protein, 1.58-1.87 g fat, and 0.28-0.30 g ash. Only slight differences were found in the contents of K⁺ (489 mg/ml) and Na⁺ (20 mg/ml). In addition, the phenolic content of the methanolic extracts was almost 37 mg GAE/g for both soy flour cultivars. The estimated ferric reducing antioxidant power (FRAP) of the soy flours may be related to their polyphenol richness and was similar for the two cultivars. The soya flour varieties tested contained a significant amount of protein and phenolic compounds with good antioxidant properties.

Keywords: soya beans, soya flour, protein, total polyphenols

Procedia PDF Downloads 71
799 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model

Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis

Abstract:

In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value; no assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures; this case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.
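
A minimal simulation of the tampered failure rate idea for a single failure mode, assuming a constant baseline hazard purely for illustration (the paper itself makes no distributional assumption on failure times):

    import numpy as np

    rng = np.random.default_rng(1)

    def tfr_sample(h0, alpha, tau, n):
        """Simulate failure times under a simple step-stress test with a
        tampered failure rate: hazard h0 before stress-change time tau,
        alpha*h0 after (alpha > 1 accelerates failure)."""
        u = rng.exponential(1.0, n)      # unit-rate exponential draws
        t = u / h0                       # failure times if stress never changed
        late = t > tau                   # survivors at tau: rescale the excess
        t[late] = tau + (u[late] - h0 * tau) / (alpha * h0)
        return t

    times = tfr_sample(h0=0.05, alpha=3.0, tau=10.0, n=1000)
    print(times.mean())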

Keywords: cause of failure, linear degradation path, reliability function, expectation-maximization algorithm, intensity, masked data

Procedia PDF Downloads 314
798 Assessment of Exploitation Vulnerability of Quantum Communication Systems with Phase Encryption

Authors: Vladimir V. Nikulin, Bekmurza H. Aitchanov, Olimzhon A. Baimuratov

Abstract:

Quantum communication technology takes advantage of the intrinsic properties of laser carriers, such as very high data rates and low power requirements, to offer unprecedented data security. Quantum processes at the physical layer of encryption are used for signal encryption with very competitive performance characteristics. The range of applications for QC systems spans from fiber-based to free-space links and from secure banking operations to mobile airborne and space-borne networking, where they are subjected to channel distortions. Under practical conditions, the channel can alter the optical wave front characteristics, including its phase. In addition, phase noise of the communication source and photo-detection noise bring additional ambiguity into the measurement process. If quantized values of photons are used to encrypt the signal, exploitation of quantum communication links becomes extremely difficult. In this paper, we present the results of analysis and simulation studies of the effects of noise on phase estimation for quantum systems with different numbers of encryption bases operating at different power levels.
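
A toy classical simulation of how additive noise and source phase noise inflate the spread of a phase estimate; it ignores quantization and the number of encryption bases, which the paper analyzes, and all numbers are illustrative:

    import numpy as np

    rng = np.random.default_rng(2)

    def phase_error_std(snr_db, n=100_000, phase_noise_rad=0.05):
        """Std of the phase estimate for a unit-amplitude carrier corrupted
        by source phase noise and additive complex Gaussian noise."""
        sigma = 10 ** (-snr_db / 20)
        s = np.exp(1j * rng.normal(0.0, phase_noise_rad, n))  # noisy source phase
        noise = (rng.normal(0, sigma / np.sqrt(2), n)
                 + 1j * rng.normal(0, sigma / np.sqrt(2), n))
        return np.angle(s + noise).std()

    for snr in (10, 20, 30):
        print(snr, "dB ->", phase_error_std(snr), "rad")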

Keywords: encryption, phase distortion, quantum communication, quantum noise

Procedia PDF Downloads 534
797 Numerical Study for the Estimation of Hydrodynamic Current Drag Coefficients for the Colombian Navy Frigates Using Computational Fluid Dynamics

Authors: Mauricio Gracia, Luis Leal, Bharat Verma

Abstract:

Computational fluid dynamics (CFD) has become an important tool in the hydrodynamic design of modern ships. CFD is used to model phenomena related to fluid flow around a control volume such as a ship or any offshore structure at sea. In the present study, the current force drag coefficients for a Colombian Navy frigate in deep and shallow water are estimated through the application of CFD. The study describes the process of simulating the ship current drag coefficients, conducted using the STAR-CCM+ software package, on a scale model of the Almirante Padilla class frigate. The results show the ship current drag coefficient calculated for the full-scale ship considering a current speed of 1 knot with a 90° drift angle. The predicted results were compared against the current drag coefficients published in the Lloyd's Register OCIMF report. It is shown that the simulation results agree fairly well with the published results and that the STAR-CCM+ code can predict current drag coefficients.
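
The non-dimensionalisation step behind a current drag coefficient is standard; a sketch in which the force, speed conversion and reference area are placeholders rather than the frigate's data:

    RHO_SEAWATER = 1025.0          # kg/m^3

    def current_drag_coefficient(force_n, speed_ms, ref_area_m2):
        """Non-dimensionalise a computed current force:
        Cd = F / (0.5 * rho * V^2 * A_ref).
        For a 90 deg drift the reference area is typically the underwater
        lateral area (roughly length between perpendiculars x draft)."""
        return force_n / (0.5 * RHO_SEAWATER * speed_ms ** 2 * ref_area_m2)

    # Example: 1 knot = 0.5144 m/s; force and area values are placeholders.
    print(current_drag_coefficient(force_n=2.0e4, speed_ms=0.5144,
                                   ref_area_m2=90.0 * 4.0))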

Keywords: CFD, current drag coefficient, STAR-CCM+, OCIMF, bollard pull

Procedia PDF Downloads 143
796 End-to-End Pyramid Based Method for Magnetic Resonance Imaging Reconstruction

Authors: Omer Cahana, Ofer Levi, Maya Herman

Abstract:

Magnetic Resonance Imaging (MRI) is a lengthy medical scan, owing to its long acquisition time. This length is mainly due to the traditional sampling theorem, which defines a lower boundary for sampling. However, it is still possible to accelerate the scan by using a different approach such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While the rapid advances in Deep Learning (DL) have brought tremendous successes in various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that can easily be extended to a variable number of coils. The second is reconstruction, a top-down architecture with lateral connections developed for building high-level refinement at all scales. Our method achieves state-of-the-art results on the fastMRI benchmark, the largest and most diverse benchmark for MRI reconstruction.
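
To make the undersampling setting concrete, the sketch below builds an incoherently masked k-space for a toy single-coil image and shows the zero-filled baseline that CS or learned reconstructions (such as the pyramid network described above) improve upon; the sampling rate and image are invented:

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy image standing in for one coil image; fastMRI data would come from
    # multi-coil k-space instead.
    img = np.zeros((128, 128)); img[40:90, 50:80] = 1.0
    kspace = np.fft.fftshift(np.fft.fft2(img))

    # Incoherent (random) column sampling at ~25%, always keeping the center
    # lines that hold most energy -- the usual CS-MRI masking pattern.
    mask = rng.random(128) < 0.25
    mask[60:68] = True
    undersampled = kspace * mask[None, :]

    # Zero-filled baseline reconstruction; a CS or learned reconstruction
    # would replace this step.
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
    print("aliasing energy:", float(np.mean((zero_filled - img) ** 2)))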

Keywords: magnetic resonance imaging, image reconstruction, pyramid network, deep learning

Procedia PDF Downloads 75
795 Estimation of the Parameters of Muskingum Methods for the Prediction of the Flood Depth in the Moudjar River Catchment

Authors: Fares Laouacheria, Said Kechida, Moncef Chabi

Abstract:

The objective of the study was hydrological routing modelling for continuous monitoring of the hydrological situation in the Moudjar river catchment, especially during floods, with the Hydrologic Engineering Center-Hydrologic Modelling System (HEC-HMS). HEC-GeoHMS was used to transfer data from a geographic information system (GIS) to HEC-HMS for delineating and modelling the catchment, in order to estimate the runoff volume used as input to the hydrological routing model. Two hydrological routing models, namely the Muskingum and Muskingum-Cunge routing models, were used in this study. A comparison between the parameters of the Muskingum and Muskingum-Cunge routing models in HEC-HMS was used for modelling flood routing in the Moudjar river catchment and for determining the relationship between these parameters and the physical characteristics of the river. The results indicate that the effects of input parameters such as the weighting factor X and travel time K on the output are significant, and that the Muskingum routing model is more sensitive to its input parameters than the Muskingum-Cunge routing model. This study can contribute to understanding and improving knowledge of the mechanisms of river floods, especially in ungauged river catchments.
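
The classical Muskingum recursion that HEC-HMS parameterises with K and X can be sketched directly; the inflow hydrograph below is illustrative:

    def muskingum_route(inflow, K, X, dt, outflow0=None):
        """Route an inflow hydrograph with the Muskingum method.

        Storage S = K*(X*I + (1-X)*O); discretising the continuity equation
        gives O2 = C0*I2 + C1*I1 + C2*O1 with coefficients summing to 1.
        K (travel time) and dt share the same time unit; 0 <= X <= 0.5.
        """
        D = 2 * K * (1 - X) + dt
        c0 = (dt - 2 * K * X) / D
        c1 = (dt + 2 * K * X) / D
        c2 = (2 * K * (1 - X) - dt) / D
        out = [inflow[0] if outflow0 is None else outflow0]
        for i1, i2 in zip(inflow[:-1], inflow[1:]):
            out.append(c0 * i2 + c1 * i1 + c2 * out[-1])
        return out

    print(muskingum_route([10, 40, 80, 60, 30, 15, 10], K=2.0, X=0.2, dt=1.0))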

Keywords: HEC-HMS, hydrological modelling, Muskingum routing model, Muskingum-Cunge routing model

Procedia PDF Downloads 250
794 Risk Based Building Information Modeling (BIM) for Urban Infrastructure Transportation Project

Authors: Debasis Sarkar

Abstract:

Building Information Modeling (BIM) is a holistic documentation process for operational visualization, design coordination, estimation and project scheduling. BIM software defines objects parametrically and is a tool for virtual reality. The primary advantage of implementing BIM is the visual coordination of the building structure and systems such as Mechanical, Electrical and Plumbing (MEP), and it also identifies possible conflicts between the building systems. This paper is an attempt to develop a risk-based BIM model which highlights the primary advantages of applying BIM to an urban infrastructure transportation project. It has been observed that about 40% of Architecture, Engineering and Construction (AEC) companies use BIM, but primarily for their outsourced projects. Also, 65% of the respondents agree that BIM will be used quite strongly for future construction projects in India. The 3D models developed with Revit 2015 software would reduce coordination problems amongst the architects, structural engineers, contractors and building service providers (MEP). Integration of risk management with BIM would provide enhanced coordination and collaboration and a high probability of successful completion of a complex infrastructure transportation project within the stipulated time and cost frame.

Keywords: building information modeling (BIM), infrastructure transportation, project risk management, underground metro rail

Procedia PDF Downloads 289
793 Model Based Fault Diagnostic Approach for Limit Switches

Authors: Zafar Mahmood, Surayya Naz, Nazir Shah Khattak

Abstract:

The degree of freedom relates to our capability to observe or model the energy paths within a system. The more energy paths that are modeled, the higher the degree of freedom, but also the greater the time and modeling complexity, rendering the model impractical for today's need for minimum time to market. Since the number of residuals that can be uniquely isolated depends on the number of independent outputs of the system, more sensors are required. Examples of discrete position sensors that may be used to form an array include limit switches, Hall effect sensors, optical sensors, magnetic sensors, etc. Their mechanical design can usually be tailored to fit in the transitional path of an STME in a variety of mechanical configurations. Case studies of multi-sensor systems were carried out, and actual sensor data are used to test this generic framework. It is investigated how proper modeling of limit switches as timing sensors could lead to a unified and neutral residual space while keeping the implementation cost reasonably low.

Keywords: low-cost limit sensors, fault diagnostics, Single Throw Mechanical Equipment (STME), parameter estimation, parity-space

Procedia PDF Downloads 588
792 Digital Material Characterization Using the Quantum Fourier Transform

Authors: Felix Givois, Nicolas R. Gauger, Matthias Kabel

Abstract:

Efficient digital material characterization is of great interest in many fields of application. It consists of the following three steps. First, a 3D reconstruction of 2D scans must be performed. Then, the resulting gray-value image of the material sample is enhanced by image processing methods. Finally, partial differential equations (PDE) are solved on the segmented image, and by averaging the resulting solution fields, effective properties like stiffness or conductivity can be computed. Due to the high resolution of current CT images, the latter is typically performed with matrix-free solvers. Among them, a solver that uses the explicit formula of the Green-Eshelby operator in Fourier space has been proposed by Moulinec and Suquet. Its algorithmically most complex part is the Fast Fourier Transform (FFT). In our talk, we will discuss the potential quantum advantage that can be obtained by replacing the FFT with the Quantum Fourier Transform (QFT). We will especially show that the data transfer for noisy intermediate-scale quantum (NISQ) devices can be improved by using appropriate boundary conditions for the PDE, which also allows using semi-classical versions of the QFT. In the end, we will compare the results of the QFT-based algorithm for simple geometries with the results of the FFT-based homogenization method.
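
Classically, the QFT is just the unitary DFT matrix, which is what lets it stand in for the FFT step of the Moulinec-Suquet scheme; a small sketch verifying unitarity and the sign convention:

    import numpy as np

    def qft_matrix(n_qubits: int) -> np.ndarray:
        """Unitary of the Quantum Fourier Transform on n qubits:
        F[j, k] = omega^(j*k) / sqrt(N), omega = exp(2*pi*i/N), N = 2^n."""
        N = 2 ** n_qubits
        j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

    F = qft_matrix(3)
    print(np.allclose(F.conj().T @ F, np.eye(8)))             # unitarity
    x = np.arange(8.0)
    print(np.allclose(F @ x, np.fft.ifft(x, norm="ortho")))   # sign convention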

Keywords: maximum likelihood quantum amplitude estimation (MLQAE), numerical homogenization, quantum Fourier transform (QFT), NISQ devices

Procedia PDF Downloads 54
791 The Effectiveness of Environmental Policy Instruments for Promoting Renewable Energy Consumption: Command-and-Control Policies versus Market-Based Policies

Authors: Mahmoud Hassan

Abstract:

Understanding the impact of market- and non-market-based environmental policy instruments on renewable energy consumption (REC) is crucial for the design and choice of policy packages. This study empirically investigates the effect of the environmental policy stringency index (EPS) and its components on REC in 27 OECD countries over the period from 1990 to 2015, and then uses the results to identify what an appropriate environmental policy mix should look like. Relying on the two-step system GMM estimator, we provide evidence that increasing environmental policy stringency as a whole promotes renewable energy consumption in these 27 developed economies. Moreover, policymakers are able, through market- and non-market-based environmental policy instruments, to increase the use of renewable energy. However, not all of these instruments are effective in achieving this goal. The results indicate that R&D subsidies and trading schemes have a positive and significant impact on REC, while taxes, feed-in tariffs and emission standards do not have a significant effect. Furthermore, R&D subsidies are more effective than trading schemes in stimulating the use of clean energy. These findings proved to be robust across the three alternative panel techniques used.

Keywords: environmental policy stringency, renewable energy consumption, two-step system-GMM estimation, linear dynamic panel data model

Procedia PDF Downloads 167
790 F-VarNet: Fast Variational Network for MRI Reconstruction

Authors: Omer Cahana, Maya Herman, Ofer Levi

Abstract:

Magnetic resonance imaging (MRI) is a lengthy medical scan, owing to its long acquisition time. This length is mainly due to the traditional sampling theorem, which defines a lower boundary for sampling. However, it is still possible to accelerate the scan by using a different approach, such as compressed sensing (CS) or parallel imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. In order to achieve that, two properties have to hold: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm needs to be applied to recover the signal. While rapid advances in the deep learning (DL) field have demonstrated tremendous successes in various computer vision tasks, the field of MRI reconstruction is still at an early stage. In this paper, we present an extension of the state-of-the-art model in MRI reconstruction, VarNet. We extend VarNet with dilated convolutions at different scales, which enlarges the receptive field to capture more contextual information. Moreover, we simplify the sensitivity map estimation (SME) module, which contains many layers unnecessary for this task. These improvements yield significantly lower computation costs as well as higher accuracy.
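
A sketch of the multi-scale dilated convolution idea in PyTorch; the channel counts and dilation rates are illustrative and not taken from F-VarNet:

    import torch
    import torch.nn as nn

    class MultiScaleDilatedBlock(nn.Module):
        """Parallel dilated convolutions at several scales; concatenating
        their outputs enlarges the receptive field without downsampling."""
        def __init__(self, ch: int, dilations=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv2d(ch, ch, kernel_size=3, padding=d, dilation=d)
                for d in dilations
            )
            self.fuse = nn.Conv2d(ch * len(dilations), ch, kernel_size=1)

        def forward(self, x):
            return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

    x = torch.randn(1, 16, 64, 64)
    print(MultiScaleDilatedBlock(16)(x).shape)   # torch.Size([1, 16, 64, 64])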

Keywords: MRI, deep learning, variational network, computer vision, compress sensing

Procedia PDF Downloads 130
789 Analysis of an Alternative Data Base for the Estimation of Solar Radiation

Authors: Graciela Soares Marcelli, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Claudineia Brazil, Rafael Haag

Abstract:

The sun is a source of renewable energy, and its use as both a source of heat and light is one of the most promising energy alternatives for the future. To size thermal or photovoltaic systems, a solar irradiation database is necessary. Brazil still has a limited number of meteorological stations providing such measurements, which makes reanalysis systems a significant alternative to observed data platforms. ERA-Interim is a global atmospheric reanalysis produced by the European Centre for Medium-Range Weather Forecasts (ECMWF). The data assimilation system used for the production of ERA-Interim is based on a 2006 version of the IFS (Cy31r2) and includes a 4-dimensional variational analysis (4D-Var) with a 12-hour analysis window. The spatial resolution of the dataset is approximately 80 km, with 60 vertical levels from the surface up to 0.1 hPa. This work makes a comparative analysis between the ERA-Interim data and the observed data in the Solarimetric Atlas of the State of Rio Grande do Sul, to verify its applicability in the absence of an observed data network. The results are analyzed for a study region to assess reanalysis data as an alternative for estimating the energy potential of a given region.
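
The comparison itself reduces to standard error statistics between the two series; a sketch with invented monthly values, not the atlas data:

    import numpy as np

    # Hypothetical monthly mean irradiation (kWh/m^2/day): reanalysis vs. atlas.
    era = np.array([6.1, 5.6, 4.8, 3.9, 3.0, 2.6, 2.8, 3.4, 3.9, 4.8, 5.7, 6.2])
    obs = np.array([6.0, 5.8, 4.6, 3.7, 3.1, 2.5, 2.9, 3.3, 4.1, 4.7, 5.9, 6.3])

    mbe = np.mean(era - obs)                      # mean bias error
    rmse = np.sqrt(np.mean((era - obs) ** 2))     # root mean square error
    print(f"MBE = {mbe:.3f}, RMSE = {rmse:.3f} kWh/m^2/day")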

Keywords: energy potential, reanalyses, renewable energy, solar radiation

Procedia PDF Downloads 140
788 Evaluation of Settlement of Coastal Embankments Using Finite Elements Method

Authors: Sina Fadaie, Seyed Abolhassan Naeini

Abstract:

Coastal embankments play an important role in coastal structures by reducing the effect of wave forces and controlling the movement of sediments. Many coastal areas are underlain by weak and compressible soils. Estimation of settlement during construction of coastal embankments is highly important in the design and safety control of embankments and appurtenant structures. Accordingly, selecting and establishing an appropriate model with a reasonable level of complexity is one of the challenges for engineers. Although there are advanced models in the literature regarding the design of embankments, there is not enough information on the prediction of their associated settlement, particularly in coastal areas with considerable soft soils. Marine engineering study in Iran is important due to the existence of two important coastal areas located in the northern and southern parts of the country. In the present study, the validity of Terzaghi's consolidation theory has been investigated. In addition, the settlement of these coastal embankments during construction is predicted using dedicated methods in the PLAXIS software with the help of appropriate boundary conditions and soil layers. The results indicate that, for the existing soil conditions at the site, some parameters are important to consider in the analysis. Consequently, a model is introduced to estimate the settlement of embankments in such geotechnical conditions.
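
As a reference point for such numerical results, Terzaghi's one-dimensional consolidation estimates can be sketched as follows; the soil parameters are placeholders:

    import numpy as np

    def consolidation_settlement(Cc, e0, H, sigma0, dsigma):
        """Terzaghi primary consolidation settlement of a normally
        consolidated clay layer:
        S = Cc/(1+e0) * H * log10((sigma0 + dsigma)/sigma0)."""
        return Cc / (1 + e0) * H * np.log10((sigma0 + dsigma) / sigma0)

    def degree_of_consolidation(cv, t, H_dr):
        """Average degree of consolidation from the time factor
        Tv = cv*t/H_dr^2, using the common approximation
        U = sqrt(4*Tv/pi), valid for U <= ~60%."""
        Tv = cv * t / H_dr ** 2
        return min(np.sqrt(4 * Tv / np.pi), 1.0)

    # Example with placeholder soil parameters (SI units).
    S_final = consolidation_settlement(Cc=0.3, e0=1.1, H=6.0,
                                       sigma0=50e3, dsigma=80e3)
    print(S_final, degree_of_consolidation(cv=2e-7, t=180 * 86400, H_dr=3.0))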

Keywords: consolidation, settlement, coastal embankments, numerical methods, finite elements method

Procedia PDF Downloads 138
787 Modelling the Long Run of Aggregate Import Demand in Libya

Authors: Said Yousif Khairi

Abstract:

Being a developing economy, imports of capital, raw materials and manufactured goods are vital for Libya's sustainable economic growth. In 2006, Libya imported LD 8 billion (US$ 6.25 billion) worth of goods, composed mainly of machinery and transport equipment (49.3%), raw materials (18%), and food products and live animals (13%); this represented about 10% of GDP. It is therefore pertinent to investigate the factors affecting the volume of Libyan imports. An econometric model representing aggregate import demand for Libya was developed and estimated using the bounds test procedure, which is based on an unrestricted error correction model (UECM), with data from 1970–2010. The results of the bounds test revealed that the volume of imports and its determinants, namely real income, the consumer price index and the exchange rate, are co-integrated. The findings indicate that, in the short run, the demand for imports is inelastic with respect to income and the price level, while the exchange rate variable is statistically significant. In the long run, the income elasticity is elastic while the price and exchange rate elasticities remain inelastic. This indicates that imports are important elements for Libyan economic growth in the long run.
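
A schematic of the UECM regression behind the bounds test, using synthetic stand-in series; lag orders are truncated for brevity, and the Pesaran-Shin-Smith critical bounds against which the F-statistic is judged are not computed here:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # df holds annual series 1970-2010: log imports (m), log real income (y),
    # log CPI (p), log exchange rate (e). Synthetic random walks stand in.
    rng = np.random.default_rng(4)
    df = pd.DataFrame(rng.normal(size=(41, 4)).cumsum(axis=0),
                      columns=["m", "y", "p", "e"])

    # Unrestricted error correction model (one lag of differences for brevity):
    # dm_t = a + sum(b_i * dX_{t-1}) + c1*m_{t-1} + c2*y_{t-1}
    #        + c3*p_{t-1} + c4*e_{t-1} + u_t
    d = df.diff()
    data = pd.concat([d.add_prefix("d_"),
                      d.shift(1).add_prefix("dlag_"),
                      df.shift(1).add_prefix("lev_")], axis=1).dropna()
    X = sm.add_constant(
        data[[c for c in data if c.startswith(("dlag_", "lev_"))]])
    res = sm.OLS(data["d_m"], X).fit()

    # Bounds test: joint significance of the lagged level terms.
    print(res.f_test("lev_m = 0, lev_y = 0, lev_p = 0, lev_e = 0"))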

Keywords: import demand, UECM, bounds test, Libya

Procedia PDF Downloads 342
786 Economic Evaluation Offshore Wind Project under Uncertainly and Risk Circumstances

Authors: Sayed Amir Hamzeh Mirkheshti

Abstract:

Offshore wind energy, as a strategic renewable energy, has been growing rapidly due to its availability, abundance and clean nature. On the other hand, the budget of such a project is considerably higher than that of other renewable energies, and it takes longer to complete. Accordingly, precise estimation of time and cost is needed in order to promote awareness among developers and society and to convince them to develop this kind of energy despite its difficulties. Risks occurring during the project cause its duration and cost to change constantly. Therefore, to develop offshore wind power, it is critical to consider all potential risks impacting the project and to simulate their effects. Knowing these risks is useful for selecting the most effective response strategies, such as avoidance, transfer and acceptance, in order to decrease their probability and impact. This paper presents an evaluation of the feasibility of a 500 MW offshore wind project in the Persian Gulf under uncertain resources and risk. The purpose of this study is to evaluate the time and cost of an offshore wind project under risk circumstances and uncertain resources by using Monte Carlo simulation. We analyzed each risk and activity along with their distribution functions and their effects on the project.
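
A minimal Monte Carlo sketch of the cost side of such an evaluation; the cost items, distributions and risk event below are invented for illustration, not the study's inputs:

    import numpy as np

    rng = np.random.default_rng(5)
    N = 100_000

    # Hypothetical cost items (million $), each modelled as a triangular
    # (optimistic, most likely, pessimistic) distribution.
    items = {
        "turbines":     (600, 700, 850),
        "foundations":  (200, 260, 380),
        "installation": (150, 200, 350),
        "grid":         (100, 130, 200),
    }
    total = sum(rng.triangular(*tri, N) for tri in items.values())

    # Example risk: weather-delay event, 30% chance, 40-120 M$ extra cost.
    hit = rng.random(N) < 0.30
    total += np.where(hit, rng.uniform(40, 120, N), 0.0)

    print("P50:", np.percentile(total, 50), "P90:", np.percentile(total, 90))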

Keywords: wind energy project, uncertain resources, risks, Monte Carlo simulation

Procedia PDF Downloads 336
785 Method Validation for Determining Platinum and Palladium in Catalysts Using Inductively Coupled Plasma Optical Emission Spectrometry

Authors: Marin Senila, Oana Cadar, Thorsten Janisch, Patrick Lacroix-Desmazes

Abstract:

The study presents the analytical capability and validation of a method based on microwave-assisted acid digestion for the quantitative determination of platinum and palladium in catalysts using inductively coupled plasma optical emission spectrometry (ICP-OES). In order to validate the method, the main figures of merit, such as limit of detection, limit of quantification, precision and accuracy, were considered, and the measurement uncertainty was estimated based on the bottom-up approach according to the international guidelines of ISO/IEC 17025. Limits of detection, estimated from the blank signal using the 3s criterion, were 3.0 mg/kg for Pt and 3.6 mg/kg for Pd, while the limits of quantification were 9.0 mg/kg for Pt and 10.8 mg/kg for Pd. Precision, evaluated as the standard deviation of repeatability (n=5 parallel samples), was less than 10% for both precious metals. The accuracy of the method, verified by recovery estimation using the certified reference material NIST SRM 2557 (pulverized recycled monolith), was 99.4% for Pt and 101% for Pd. The obtained limits of quantification and accuracy were satisfactory for the intended purpose. The paper presents all the steps necessary to validate the method for determining Pt and Pd in catalysts using inductively coupled plasma optical emission spectrometry.
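
The figures of merit follow textbook formulas; a sketch using the common 3s/10s convention (note the paper's LOQs are reported as three times its LODs), with placeholder blank readings and calibration slope:

    import numpy as np

    def lod_loq(blank_signals, slope):
        """Limits of detection/quantification from replicate blank signals
        and the calibration slope: LOD = 3*s_blank/slope,
        LOQ = 10*s_blank/slope (concentration units follow the calibration)."""
        s = np.std(blank_signals, ddof=1)
        return 3 * s / slope, 10 * s / slope

    # Placeholder blank intensities (counts) and slope (counts per mg/kg).
    blanks = [102.0, 98.5, 101.2, 99.8, 100.4, 100.9]
    print(lod_loq(blanks, slope=1.25))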

Keywords: catalyst analysis, ICP-OES, method validation, platinum, palladium

Procedia PDF Downloads 152
784 Performance Estimation of Two Port Multiple-Input and Multiple-Output Antenna for Wireless Local Area Network Applications

Authors: Radha Tomar, Satish K. Jain, Manish Panchal, P. S. Rathore

Abstract:

In the presented work, an inset fed microstrip patch antenna (IFMPA) based two-port MIMO antenna system is proposed, which is suitable for wireless local area network (WLAN) applications. The IFMPA was designed and optimized for 2.4 GHz and applied for MIMO formation. The optimized parameters of the proposed IFMPA were used for fabrication of the antenna and the two-port MIMO in a laboratory. The fabricated MIMO antenna was tested experimentally for performance parameters such as the Envelope Correlation Coefficient (ECC), Mean Effective Gain (MEG), Directive Gain (DG), Channel Capacity Loss (CCL) and Multiplexing Efficiency (ME), and the results were compared with the parameters extracted from the simulated S-parameters to validate them. The simulated and experimentally measured plots and numerical values of these MIMO performance parameters agree closely, which demonstrates the success of the MIMO antenna design methodology.
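
One of the listed figures of merit, the ECC, has a standard closed form from S-parameters (valid for lossless antennas); a sketch with placeholder values:

    import numpy as np

    def ecc_from_s_params(S11, S21, S12, S22):
        """Envelope correlation coefficient of a two-port MIMO antenna:
        ECC = |S11* S12 + S21* S22|^2 /
              ((1 - |S11|^2 - |S21|^2) * (1 - |S22|^2 - |S12|^2))."""
        num = np.abs(np.conj(S11) * S12 + np.conj(S21) * S22) ** 2
        den = ((1 - abs(S11) ** 2 - abs(S21) ** 2)
               * (1 - abs(S22) ** 2 - abs(S12) ** 2))
        return num / den

    # Placeholder S-parameters at 2.4 GHz (linear complex values, not dB).
    print(ecc_from_s_params(0.1 + 0.05j, 0.15 - 0.02j,
                            0.15 - 0.02j, 0.1 + 0.04j))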

Keywords: multiple-input and multiple-output, wireless local area network, vector network analyzer, envelope correlation coefficient

Procedia PDF Downloads 38
783 Groundwater Utilization and Sustainability: A Case Study of Pydibheemavaram Industrial Area, India

Authors: G. Venkata Rao, R. Srinivasa Rao, B. Neelima Sri Priya

Abstract:

The over-extraction of groundwater from coastal aquifers results in a reduction of the groundwater resource and a lowering of the water level, and, in general, the depletion of the groundwater level enhances the landward migration of the saltwater wedge. Groundwater extraction increases year by year because of the growing population and industrialization. Groundwater is the only source for irrigation, domestic and industrial purposes in the Pydibheemavaram industrial area, which is located in the coastal belt of Srikakulam district, India, between coordinates 18.145N 83.627E and 18.099N 83.674E. The present study attempts to calculate the amount of water recharged into this aquifer and the status of the rainfall pattern for the past two decades; the runoff is calculated using Khosla's formula with the available rainfall and temperature data for the study area. A decision support model has been developed on the basis of the monthly extractions of water from the ground through bore wells and the net recharge of the aquifer. It is concluded that the amount extracted exceeds the amount recharged from May to October in a given year, which will in turn damage the water balance in the subsurface layers.
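
A sketch of the monthly runoff step via Khosla's empirical formula, as named in the abstract; the rainfall and temperature series are hypothetical, and the loss coefficient used is the commonly quoted one, stated here as an assumption:

    def khosla_monthly_runoff(rain_cm, temp_c):
        """Khosla's empirical monthly runoff (cm): Rm = Pm - Lm with monthly
        loss Lm = 0.48*Tm for mean monthly temperature Tm above ~4.5 C,
        capped so losses never exceed rainfall."""
        runoff = []
        for p, t in zip(rain_cm, temp_c):
            loss = 0.48 * t if t > 4.5 else 0.0  # cold-month losses tabulated
            runoff.append(max(p - loss, 0.0))
        return runoff

    # Hypothetical monsoon-season rainfall (cm) and temperature (deg C).
    print(khosla_monthly_runoff([2, 5, 18, 25, 22, 9],
                                [28, 30, 29, 28, 28, 27]))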

Keywords: aquifer, decision support model, groundwater extraction, run off estimation and rainfall

Procedia PDF Downloads 277
782 Estimation of Implicit Colebrook White Equation by Preferable Explicit Approximations in the Practical Turbulent Pipe Flow

Authors: Itissam Abuiziah

Abstract:

In several hydraulic systems, it is necessary to calculate the head losses, which depend on the flow resistance friction factor in the Darcy equation. Computing the friction factor is based on the implicit Colebrook-White equation, which is considered the standard for friction calculation, but it carries a high computational cost; therefore, several explicit approximation methods are used to solve the implicit equation and overcome this issue. The relative error is then used to determine the most accurate method among those considered. Steel, cast iron and polyethylene pipe materials were investigated, with practical diameters ranging from 0.1 m to 2.5 m and velocities between 0.6 m/s and 3 m/s. In short, the results show that a method suitable for some cases may not be accurate for others. For example, for steel pipes, the Zigrang and Sylvester method proved the most precise at low velocities (0.6 m/s to 1.3 m/s), while the Haaland method showed a lower relative error as the velocity gradually increased. Accordingly, the simulation results of this study may be employed by hydraulic engineers to decide which method is most applicable to their practical pipe system expectations.
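
For concreteness, the implicit standard and one of the compared explicit approximations can be sketched side by side; the Reynolds number and relative roughness are example values:

    import math

    def colebrook_white(Re, rel_rough, tol=1e-12):
        """Iterate the implicit Colebrook-White equation
        1/sqrt(f) = -2*log10(eps/(3.7*D) + 2.51/(Re*sqrt(f)))."""
        x = 8.0                     # initial guess for 1/sqrt(f)
        for _ in range(100):
            x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / Re)
            if abs(x_new - x) < tol:
                break
            x = x_new
        return 1.0 / x_new ** 2

    def haaland(Re, rel_rough):
        """Haaland's explicit approximation:
        1/sqrt(f) = -1.8*log10((eps/(3.7*D))**1.11 + 6.9/Re)."""
        return (-1.8 * math.log10((rel_rough / 3.7) ** 1.11 + 6.9 / Re)) ** -2

    Re, rr = 2.0e5, 1e-4            # example turbulent flow, steel-like roughness
    f_ref = colebrook_white(Re, rr)
    f_h = haaland(Re, rr)
    print(f_ref, f_h, abs(f_h - f_ref) / f_ref * 100, "% relative error")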

Keywords: Colebrook–White, explicit equation, friction factor, hydraulic resistance, implicit equation, Reynolds numbers

Procedia PDF Downloads 166
781 Towards Reliable Mobile Cloud Computing

Authors: Khaled Darwish, Islam El Madahh, Hoda Mohamed, Hadia El Hennawy

Abstract:

Cloud computing has been one of the fastest growing parts of the IT industry, mainly in the context of the future of the web, where computing, communication and storage are the main services provided to Internet users. Mobile Cloud Computing (MCC) is gaining momentum; it can be used to extend cloud computing functions, services and results to the world of future mobile applications, and it enables the delivery of a large variety of cloud applications to billions of smartphones and wearable devices. This paper addresses reliability for MCC by determining the ability of a system or component to function correctly under stated conditions for a specified period of time, so as to deal with the estimation and management of high levels of lifetime engineering uncertainty and risk of failure. The assessment procedure consists of determining the Mean Time Between Failures (MTBF), Mean Time To Failure (MTTF) and availability percentages for the main components in both cloud computing and MCC structures, applied to a single-node OpenStack installation to analyze its performance with different settings governing the behavior of participants. Additionally, we present several factors that have a significant impact on overall cloud system reliability and should be taken into account in order to deliver highly available cloud computing services to mobile consumers.
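
The reliability figures named above follow standard definitions; a sketch in which the service composition and MTBF/MTTR figures are placeholders rather than measured OpenStack values:

    def steady_state_availability(mtbf_h: float, mttr_h: float) -> float:
        """Steady-state availability of a repairable component:
        A = MTBF / (MTBF + MTTR)."""
        return mtbf_h / (mtbf_h + mttr_h)

    def series_availability(avails):
        """A chain of components (e.g. API -> scheduler -> compute in a
        single-node deployment) is up only if all parts are up."""
        a = 1.0
        for x in avails:
            a *= x
        return a

    # Placeholder figures: three services with individual MTBF/MTTR (hours).
    parts = [(2000, 4), (1500, 2), (1000, 6)]
    print(series_availability(steady_state_availability(b, r)
                              for b, r in parts))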

Keywords: cloud computing, mobile cloud computing, reliability, availability, OpenStack

Procedia PDF Downloads 379
780 Wear Measuring and Wear Modelling Based On Archard, ASTM, and Neural Network Models

Authors: A. Shebani, C. Pislaru

Abstract:

Wear of materials is an everyday experience and has been observed and studied for a long time. The prediction of wear is a fundamental problem in industry, mainly related to the planning of maintenance interventions and to economy. The pin-on-disc test is the most common test used to study wear behaviour. In this paper, a pin-on-disc rig (AEROTECH UNIDEX 11) is used to investigate the effects of normal load and material hardness on wear under dry sliding conditions. In the rig, two specimens were used: a steel pin with a tip, positioned perpendicular to a disc made of aluminium. The pin wear and disc wear were measured using the following instruments: the Talysurf profilometer, a digital microscope, and the Alicona instrument; the Talysurf profilometer was used to measure the pin/disc wear scar depth, and the Alicona was used to measure the volume loss of the pin and disc. The Archard model, the American Society for Testing and Materials (ASTM) model, and a neural network model were then used for pin/disc wear modelling, and the simulations are implemented in MATLAB. This paper focuses on how the Alicona can be considered a powerful tool for wear measurement and how the neural network is an effective algorithm for wear estimation.
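
Of the three wear models compared, the Archard law has the simplest closed form; a sketch with placeholder values for a steel pin on an aluminium disc:

    def archard_wear_volume(K, load_n, sliding_dist_m, hardness_pa):
        """Archard wear law: V = K * W * s / H, with V the wear volume (m^3),
        W the normal load (N), s the sliding distance (m), H the hardness of
        the softer surface (Pa) and K the dimensionless wear coefficient."""
        return K * load_n * sliding_dist_m / hardness_pa

    # Placeholder values; K would be fitted from pin-on-disc measurements.
    V = archard_wear_volume(K=1e-4, load_n=20.0, sliding_dist_m=500.0,
                            hardness_pa=0.3e9)
    print(V * 1e9, "mm^3")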

Keywords: wear modelling, Archard model, ASTM model, neural network model, pin-on-disc test, Talysurf, digital microscope, Alicona

Procedia PDF Downloads 431
779 Nafion Multiwalled Carbon Nanotubes Composite Film Modified Glassy Carbon Sensor for the Voltammetric Estimation of Dianabol Steroid in Pharmaceuticals and Biological Fluids

Authors: Nouf M. Al-Ourfi, A. S. Bashammakh, M. S. El-Shahawi

Abstract:

The redox behavior of dianabol steroid (DS) on a Nafion multiwalled carbon nanotube (MWCNT) composite film modified glassy carbon electrode (GCE) in various buffer solutions was studied using cyclic voltammetry (CV) and differential pulse adsorptive cathodic stripping voltammetry (DP-CSV), and the results were compared with those at the non-modified bare GCE. The Nafion-MWCNT composite film modified GCE exhibited the better electrochemical response of the two electrodes for the electroreduction of DS, as inferred from EIS, CV and DP-CSV. The modified sensor showed a sensitive, stable and linear response in the concentration range 5-100 nM with a detection limit of 0.08 nM. The selectivity of the proposed sensor was assessed in the presence of high concentrations of major interfering species. The analytical application of the sensor to the quantification of DS in pharmaceutical formulations and biological fluids (urine) was demonstrated, with acceptable recovery and an RSD of 5%. Statistical treatment of the results of the proposed method revealed no significant differences in accuracy and precision. The relative standard deviations for five measurements of 50 and 300 ng mL−1 of DS were 3.9% and 1.0%, respectively.

Keywords: dianabol steroid, determination, modified GCE, urine

Procedia PDF Downloads 265
778 Progressive Type-I Interval Censoring with Binomial Removal-Estimation and Its Properties

Authors: Sonal Budhiraja, Biswabrata Pradhan

Abstract:

This work considers statistical inference based on progressive Type-I interval censored data with random removal. The scheme can be described as follows. Suppose n identical items are placed on test at time T0 = 0 with k pre-fixed inspection times T1 < T2 < . . . < Tk, where Tk is the scheduled termination time of the experiment. At inspection time Ti, Ri of the remaining surviving units Si are randomly removed from the experiment. The removal follows a binomial distribution with parameters Si and pi for i = 1, . . . , k, with pk = 1. Under this censoring scheme, the number of failures in the different inspection intervals and the number of randomly removed items at the pre-specified inspection times are observed. Asymptotic properties of the maximum likelihood estimators (MLEs) are established under some regularity conditions. A β-content γ-level tolerance interval (TI) is determined for the two-parameter Weibull lifetime model using the asymptotic properties of the MLEs, and the minimum sample size required to achieve the desired β-content γ-level TI is determined. The performance of the MLEs and the TI is studied via simulation.
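
The censoring scheme described above is easy to simulate, which is how the MLE and TI performance would be studied; a sketch for Weibull lifetimes with illustrative inspection times and removal probabilities:

    import numpy as np

    rng = np.random.default_rng(6)

    def simulate_scheme(n, shape, scale, times, p):
        """One run of progressive Type-I interval censoring with binomial
        removal: at each inspection time T_i, count failures in
        (T_{i-1}, T_i], then remove Binomial(S_i, p_i) survivors; all
        remaining units are removed at T_k (p_k = 1)."""
        alive = rng.weibull(shape, n) * scale      # latent Weibull lifetimes
        failures, removals, prev = [], [], 0.0
        for Ti, pi in zip(times, p):
            fail = int(np.sum((alive > prev) & (alive <= Ti)))
            surv = int(np.sum(alive > Ti))
            rem = surv if pi == 1.0 else int(rng.binomial(surv, pi))
            if rem:
                idx = np.flatnonzero(alive > Ti)
                # mark removed survivors so they are excluded downstream
                alive[rng.choice(idx, rem, replace=False)] = -1.0
            failures.append(fail); removals.append(rem); prev = Ti
        return failures, removals

    print(simulate_scheme(100, shape=1.5, scale=10.0,
                          times=[2.0, 4.0, 6.0, 8.0], p=[0.1, 0.1, 0.1, 1.0]))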

Keywords: asymptotic normality, consistency, regularity conditions, simulation study, tolerance interval

Procedia PDF Downloads 226