Search results for: queueing calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1258

1018 Study on the Model Predicting Post-Construction Settlement of Soft Ground

Authors: Pingshan Chen, Zhiliang Dong

Abstract:

In order to estimate post-construction settlement more objectively, a power-polynomial model is proposed that reflects the trend of settlement development based on observed settlement data. The model was demonstrated on an actual embankment case history. Compared with three other prediction models, the power-polynomial model estimates the post-construction settlement more accurately and with a simpler calculation.
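
A fit of this kind can be sketched as an ordinary least-squares problem. The abstract does not give the paper's exact basis, so the exponent set below (powers 0, 1/4, 1/2 of time) and the synthetic settlement record are illustrative assumptions only:

```python
import numpy as np

def fit_power_polynomial(t, s, powers=(0.0, 0.25, 0.5)):
    """Least-squares fit of S(t) = sum_j a_j * t**p_j to observed settlements.
    The exponent set is an assumption for illustration."""
    A = np.column_stack([t ** p for p in powers])
    coef, *_ = np.linalg.lstsq(A, s, rcond=None)
    return coef

def predict(coef, t, powers=(0.0, 0.25, 0.5)):
    """Extrapolate the fitted model to a later time t."""
    return sum(c * t ** p for c, p in zip(coef, powers))

t_obs = np.linspace(1.0, 100.0, 50)                      # observation times
s_obs = 2.0 + 3.0 * t_obs ** 0.25 + 0.5 * t_obs ** 0.5   # synthetic record
coef = fit_power_polynomial(t_obs, s_obs)
```

With a clean synthetic record the fit recovers the generating coefficients exactly, and `predict` then extrapolates the settlement beyond the observation window.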

Keywords: prediction, model, post-construction settlement, soft ground

Procedia PDF Downloads 398
1017 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique with many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impacts of the granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment; therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. Many papers have shown experimentally that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are treated as rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the behavior of the granular particles is the same in each divided area of the damper container, the contact force of the primary system with all particles can be taken to be equal to the product of the number of divided areas and the contact force of the primary system with the granular materials per divided area. This makes it possible to reduce the calculation time considerably.
The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and the particle material influence the damper performance.
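
The spring-dashpot contact model mentioned above can be illustrated in one dimension. This is a minimal sketch, not the paper's algorithm: a single particle bounces between two cavity walls, contact is a linear spring plus dashpot, and all parameter values are assumptions chosen for illustration:

```python
def bounce(m=0.01, k=1.0e4, c=5.0, cavity=0.05, x0=0.02, v0=1.0,
           dt=1e-5, steps=20000):
    """1-D DEM sketch: one rigid particle between two walls; wall contacts
    are modeled by a linear spring (k) and dashpot (c). Values are assumed."""
    x, v = x0, v0
    for _ in range(steps):
        f = 0.0
        if x < 0.0:                       # overlap with the left wall
            f = -k * x - c * v
        elif x > cavity:                  # overlap with the right wall
            f = -k * (x - cavity) - c * v
        v += f / m * dt                   # semi-implicit Euler integration
        x += v * dt
    return x, v

x_end, v_end = bounce()
```

The dashpot removes kinetic energy at every impact, which is the momentum-exchange damping mechanism the abstract describes; a full DEM repeats this force evaluation for every particle pair at every time step, hence the cost the authors set out to reduce.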

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 435
1016 Monte Carlo Simulations of LSO/YSO for Dose Evaluation in Photon Beam Radiotherapy

Authors: H. Donya

Abstract:

Monte Carlo (MC) techniques play a fundamental role in radiotherapy. Two non-water-equivalent media were used to evaluate the dose in water. For this purpose, the Lu2SiO5 (LSO) and Y2SiO5 (YSO) orthosilicate scintillators were chosen for MC simulation using the PENELOPE code. To achieve higher efficiency in the dose calculation, variance reduction techniques are discussed. The overall results of this investigation confirm that the LSO/YSO bi-media pair is a good combination for tackling the over-response issue in dynamic photon radiotherapy.

Keywords: Lu2SiO5 (LSO) and Y2SiO5 (YSO) orthosilicates, Monte Carlo, correlated sampling, radiotherapy

Procedia PDF Downloads 376
1015 Characteristic Function in Estimation of Probability Distribution Moments

Authors: Vladimir S. Timofeev

Abstract:

In this article, the problem of estimating distributional moments is considered. A new approach to moment estimation based on the characteristic function is proposed. Using a statistical simulation technique, the author shows that the new approach has certain robustness properties. The derivatives of the characteristic function are calculated by numerical differentiation. The obtained results confirm that the author's idea has a certain working efficiency, and it can be recommended for statistical applications.
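
The idea of combining the empirical characteristic function with numerical differentiation can be sketched as follows (the paper's exact estimator is not given in the abstract, so this uses plain central differences at t = 0, where E[X] = Im φ'(0) and E[X²] = −Re φ''(0)):

```python
import numpy as np

def ecf(sample, t):
    """Empirical characteristic function phi(t) = mean of exp(i*t*X)."""
    return np.mean(np.exp(1j * t * np.asarray(sample)))

def moments_via_cf(sample, h=1e-3):
    """First two raw moments from central-difference derivatives of the
    empirical characteristic function at t = 0 (phi(0) = 1 exactly)."""
    ph, pmh = ecf(sample, h), ecf(sample, -h)
    m1 = ((ph - pmh) / (2.0 * h)).imag    # E[X]  =  Im phi'(0)
    m2 = -((ph - 2.0 + pmh) / h**2).real  # E[X^2] = -Re phi''(0)
    return m1, m2
```

For the sample [1, 2, 3, 4] this reproduces the direct moments 2.5 and 7.5 up to the O(h²) truncation error of the differences.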

Keywords: characteristic function, distributional moments, robustness, outlier, statistical estimation problem, statistical simulation

Procedia PDF Downloads 464
1014 Economic Evaluation of Cataract Eye Surgery by Health Attendants of Doctor and Nurse through the Social Insurance Board Card at General Hospital Anutapura Palu, Central Sulawesi, Indonesia

Authors: Sitti Rahmawati

Abstract:

The payment system for cataract surgery performed by professional doctors and nurses has increasingly been implemented through the health insurance program, and this has become one of the factors affecting the government budget. The system is implemented for quality and expenditure control, i.e., to limit overpayment for health services that benefits (moral hazard) the insurance user or the health service provider. Rising health costs are the main issue that hampers society from receiving required health services under a cash-payment system. One of the efforts the government should take in health payment is to secure health insurance through society's health insurance. The objective of the study was to assess the capability of elderly patients to pay for cataract eye surgery. Method: the sample population consisted of patients who held the social health insurance board card, starting in the first trimester (January-March) of 2015 and claimed in the Indonesian Case-Based Groups software, selected by purposive sampling (40 patients). The results show that the total unit cost of the surgery service unit was $75, excluding AFC and the salaries of nurse and doctor. The surgery tariff currently implemented in the eye department of Anutapura hospital, likewise excluding AFC and employee salaries, is $80; the unit cost calculated with the double distribution model is $65. In conclusion, the much greater actual unit cost results in an incentive distribution of $37 to the ophthalmologist and $20 to the nurse per operation. The surgery service tariff is still low; consequently, the hospital receives low revenue, and the quality of insured care in the eye surgery department is relatively low.
To increase the service quality, adequately high funding is required to provide medical equipment and to increase the number of professional health attendants serving cataract surgery patients at the hospital.

Keywords: economic evaluation, cataract operation, health attendant, health insurance system

Procedia PDF Downloads 141
1013 Self-Disclosure and Privacy Management Behavior in Social Media: Privacy Calculus Perspective

Authors: Chien-Wen Chen, Nguyen Duong Thuy Trang, Yu-Hsuan Chang

Abstract:

With the development of information technology, social networking sites have become inseparable from daily life and an important way for people to communicate. Nonetheless, the presence of personal information on social networking sites raises privacy issues. Users benefit from the functions of these sites, yet they worry about the leakage of personal information while failing to adopt corresponding privacy protection behaviors, a phenomenon called the privacy paradox. Previous studies have questioned the privacy paradox, however, arguing that users are not so naive and that people with privacy concerns do conduct privacy management. Consequently, this study adopts the privacy calculus perspective to investigate the privacy behavior of users on social networking sites, taking social benefits and privacy concerns as the expected benefits and costs of the calculus. The study also explores the antecedents, including positive feedback, self-presentation, privacy policy, and information sensitivity, and the consequences of weighing benefits and costs, namely self-disclosure and three privacy management strategies based on interpersonal boundaries (preventive, censorship, and corrective). The respondents' characteristics and prior experience with social networking sites were analyzed. An online survey of 596 social network users was conducted to validate the research framework. The results show that social benefit has the greatest influence on privacy behavior. The most important external factors affecting privacy behavior are positive feedback, followed by the privacy policy and information sensitivity. An important finding of this study is that social benefits positively affect privacy management.
This shows that users obtain satisfaction from interacting with others through social networking sites: after weighing social benefits, they not only disclose themselves but also manage their privacy on these sites, which extends the adoption of the privacy calculus framework beyond prior research. It is therefore suggested that, as the functions of social networking sites increase and develop, users' needs should be understood and addressed in order to ensure the sustainable operation of social networking.

Keywords: privacy calculus perspective, self-disclosure, privacy management, social benefit, privacy concern

Procedia PDF Downloads 59
1012 Determination of Anchor Lengths by Retaining Walls

Authors: Belabed Lazhar

Abstract:

The dimensioning of anchored retaining screens always involves an analysis of their stability. The calculation of anchor lengths is usually carried out according to the mechanical model suggested by Kranz, which is often criticized. Safety is evaluated through the comparison of internal and external forces. The anchor force over the length beyond the failure wedge is neglected, and the failure surface cuts the anchor at the middle of its bonded length. This article proposes a new mechanical model that overcomes these simplifications and gives interesting results.

Keywords: retaining walls, anchoring, stability, mechanical modeling, safety

Procedia PDF Downloads 326
1011 Feigenbaum Universality, Chaos and Fractal Dimensions in Discrete Dynamical Systems

Authors: T. K. Dutta, K. K. Das, N. Dutta

Abstract:

The salient feature of this paper is Ricker's population model, f(x) = x e^(r(1-x/k)), where r is the control parameter and k is the carrying capacity. Fruitful results are obtained with the following objectives: 1) determination of the bifurcation values leading to a chaotic region, 2) development of the statistical methods and analysis required for the measurement of fractal dimensions, and 3) calculation of various fractal dimensions. These results also show that the invariant probability distribution on the attractor, when it exists, provides detailed information about the long-term behavior of the dynamical system. At the end, some open problems are posed for further research.
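
The route to chaos in the Ricker map can be probed numerically with the largest Lyapunov exponent, λ = lim (1/n) Σ log|f'(xᵢ)|, where f'(x) = e^(r(1−x/k))(1 − rx/k). This is a standard diagnostic rather than the paper's specific method; parameter values below are illustrative:

```python
import math

def ricker(x, r, k=1.0):
    """One step of the Ricker map f(x) = x * exp(r * (1 - x/k))."""
    return x * math.exp(r * (1.0 - x / k))

def lyapunov(r, k=1.0, x0=0.5, transient=1000, n=5000):
    """Lyapunov exponent by averaging log|f'(x)| along an orbit,
    after discarding a transient."""
    x = x0
    for _ in range(transient):
        x = ricker(x, r, k)
    acc = 0.0
    for _ in range(n):
        deriv = math.exp(r * (1.0 - x / k)) * (1.0 - r * x / k)
        acc += math.log(abs(deriv))
        x = ricker(x, r, k)
    return acc / n
```

For r = 1.5 the orbit settles on the stable fixed point x* = k, where f'(x*) = 1 − r = −0.5, so λ = ln 0.5 < 0; for r = 3.0, beyond the accumulation of period doublings near r ≈ 2.69, λ turns positive, signalling chaos.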

Keywords: Feigenbaum universality, chaos, Lyapunov exponent, fractal dimensions

Procedia PDF Downloads 271
1010 Study of the Energy Levels in the Structure of the Laser Diode GaInP

Authors: Abdelali Laid, Abid Hamza, Zeroukhi Houari, Sayah Naimi

Abstract:

This work concerns the study of the energy levels and the optimization of intrinsic parameters (number of wells and their widths, width of the potential barrier, refractive index, etc.) and extrinsic parameters (temperature, pressure) in a laser diode structure based on GaInP. The calculation methods used are the empirical pseudopotential method, to determine the electronic band structure, and a graphical method for the optimization. The results obtained agree with experiment and theory.

Keywords: semiconductor, GaInP/AlGaInP, pseudopotential, energy, alloys

Procedia PDF Downloads 458
1009 Changes in Kidney Tissue at Postmortem Magnetic Resonance Imaging Depending on the Time of Fetal Death

Authors: Uliana N. Tumanova, Viacheslav M. Lyapin, Vladimir G. Bychenko, Alexandr I. Shchegolev, Gennady T. Sukhikh

Abstract:

All cases of stillbirth are subject to postmortem examination, since it is necessary to establish the cause of the stillbirth as well as a prognosis for future pregnancies and their outcomes. Determining the time of death, i.e., the period from the time of death until the birth of the fetus, is an important issue addressed during the examination of the body of a stillborn, and it is based on assessing the severity of the processes of maceration. The aim was to study the possibilities of postmortem magnetic resonance imaging (MRI) for determining the time of intrauterine fetal death based on the evaluation of maceration in the kidney. We conducted MRI-morphological comparisons of 7 dead fetuses (18-21 gestational weeks), 26 stillbirths (22-39 gestational weeks), and 15 bodies of newborns who died at the age of 2 hours to 36 days. Postmortem 3T MRI was performed before the autopsy. The signal intensity of the kidney tissue (SIK), pleural fluid (SIF), and external air (SIA) was determined on T1-WI and T2-WI. Macroscopic and histological signs of maceration severity and time of death were evaluated at autopsy. Based on the results of the morphological study, the degree of maceration varied from 0 to 4. In 13 cases, the time of intrauterine death was up to 6 hours; in 2 cases, 6-12 hours; in 4, 12-24 hours; in 9, 2-3 days; in 3, 1 week; and in 2, 1.5-2 weeks. In the 15 dead newborns, signs of maceration were naturally absent. From the SIK, SIF, and SIA values on MR tomograms, we calculated the coefficient of MR maceration (M). The time of intrauterine death (MR-t, in hours) was calculated by our formula: MR-t = 16.87 + 95.38×M² − 75.32×M. A direct positive correlation between MR-t and autopsy data was obtained for fetuses who died at gestational ages of 22-40 weeks with a death time of no more than 1 week.
Maceration after antenatal fetal death is characterized by changes in the T1-WI and T2-WI signals at postmortem MRI. The calculation of MR-t allows the time of intrauterine death to be defined accurately, within one week, in stillbirths who died at 22-40 gestational weeks. Thus, our study convincingly demonstrates that radiological methods can be used for the postmortem study of bodies, in particular of stillborns, to determine the time of intrauterine death. Postmortem MRI allows an objective and sufficiently accurate analysis of pathological processes, with the possibility of documentation, storage, and analysis after the burial of the body.
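
The regression given in the abstract is a plain quadratic in the MR maceration coefficient M and is trivial to evaluate (how M itself is derived from SIK, SIF, and SIA is not specified in the abstract):

```python
def mr_t(m):
    """Estimated time of intrauterine death in hours from the MR maceration
    coefficient M, using the regression quoted in the abstract."""
    return 16.87 + 95.38 * m ** 2 - 75.32 * m
```

At M = 0 the formula returns its intercept of 16.87 hours, and it decreases over small M before the quadratic term dominates.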

Keywords: intrauterine death, maceration, postmortem MRI, stillborn

Procedia PDF Downloads 104
1008 Lumped Parameter Models for Numerical Simulation of The Dynamic Response of Hoisting Appliances

Authors: Candida Petrogalli, Giovanni Incerti, Luigi Solazzi

Abstract:

This paper describes three lumped-parameter models for the study of the dynamic behaviour of a boom crane. The models proposed here allow evaluating the fluctuations of the load arising from the rope and structure elasticity and from the type of motion command imposed by the winch. Calculation software was developed in order to determine the actual acceleration of the lifted mass and the dynamic overload during the lifting phase. Some application examples are presented, with the aim of showing the correlation between the magnitude of the stress and the type of motion command employed.
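
The simplest member of this family of models is a single degree of freedom: the payload hangs on an elastic rope whose upper end follows the imposed winch motion. The sketch below is an assumption-laden illustration (mass, stiffness, and motion law are invented values, with no structural elasticity or damping), not one of the paper's three models:

```python
def lifting_overload(m=1000.0, k=1.0e6, a=1.0, t_acc=1.0, dt=1e-4, t_end=3.0):
    """Single-DOF lumped sketch (all values assumed): payload mass m hangs on
    a rope of stiffness k; the winch end rises at constant acceleration a for
    t_acc seconds, then at constant speed. Returns max rope force / (m*g)."""
    g = 9.81
    x = -m * g / k                     # start from the static elongation
    v = 0.0
    f_max = 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        if t < t_acc:                  # imposed winch-end motion law
            xw = 0.5 * a * t * t
        else:
            xw = 0.5 * a * t_acc ** 2 + a * t_acc * (t - t_acc)
        f_rope = k * (xw - x)          # rope tension
        v += (f_rope / m - g) * dt     # semi-implicit Euler step
        x += v * dt
        f_max = max(f_max, f_rope)
    return f_max / (m * g)

overload = lifting_overload()
```

For a sudden constant acceleration the undamped model predicts a peak rope force near m(g + 2a), i.e., an overload factor of about 1 + 2a/g ≈ 1.20 here, which is exactly the kind of command-dependent dynamic overload the paper quantifies.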

Keywords: crane, dynamic model, overloading condition, vibration

Procedia PDF Downloads 546
1007 Websites for Hypothesis Testing

Authors: Frantisek Mosna

Abstract:

E-learning has become an efficient and widespread means of education in all branches of human activity, and statistics is no exception. Unfortunately, the main focus in statistics teaching is usually on substitution into formulas. Suitable websites can simplify and automate the calculations, leaving more attention and time for the basic principles of statistics, the mathematization of real-life situations, and the subsequent interpretation of results. We introduce our own websites for hypothesis testing. Their didactic aspects, the technical possibilities of the individual tools used to create them, our experience with them, and their advantages and disadvantages are discussed in this paper. These websites do not substitute for common statistical software, but they significantly improve the teaching of statistics at universities.
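
The kind of calculation such sites automate can be sketched with a one-sample t test (the concrete test, data, and significance level below are illustrative assumptions, not taken from the paper's websites):

```python
import math

def one_sample_t(data, mu0):
    """t statistic for H0: population mean == mu0,
    t = (xbar - mu0) / (s / sqrt(n)) with the sample standard deviation s."""
    n = len(data)
    mean = sum(data) / n
    var = sum((d - mean) ** 2 for d in data) / (n - 1)
    return (mean - mu0) / math.sqrt(var / n)

# illustrative data; df = 4, two-sided alpha = 0.05 -> critical value 2.776
t = one_sample_t([4.8, 5.1, 4.9, 5.2, 5.0], 5.0)
reject = abs(t) > 2.776
```

Automating this arithmetic frees classroom time for the part the abstract emphasizes: formulating H0 and interpreting whether |t| exceeds the critical value.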

Keywords: e-learning, hypothesis testing, PHP, web-sites

Procedia PDF Downloads 393
1006 Climate Related Financial Risk on Automobile Industry and the Impact to the Financial Institutions

Authors: Mahalakshmi Vivekanandan S.

Abstract:

As per recent changes in global policies, climate-related changes and the impact they cause across every sector are viewed as green swan events – in essence, climate-related changes can often happen and lead to risk and a great deal of uncertainty, but they need to be mitigated rather than treated as black swan events. This raises the question of how this risk can be computed so that financial institutions can plan to mitigate it. Climate-related changes impact all risk types – credit risk, market risk, operational risk, liquidity risk, reputational risk, and others. The models required to compute this have to consider the different industrial needs of the counterparty, as well as the contributing factors – be it in the form of different risk drivers, different transmission channels, different approaches, or the granularity of available data. This suggests that climate-related change, though it affects Pillar I risks, will be a Pillar II risk. It has to be modeled specifically on the financial institution's actual exposure to different industries instead of generalizing the risk charge, and it will have to be treated as additional capital to be held by the financial institution on top of its Pillar I risks and its existing Pillar II risks. In this paper, the author presents a risk assessment framework to model and assess climate change risks, for both credit and market risk. This framework helps in assessing the different scenarios and how the different transition risks affect the risk associated with the different parties. The paper delves into the increase in the concentration of greenhouse gases that in turn causes global warming.
It then considers various scenarios in which the different risk drivers impact the credit and market risk of an institution, by understanding the transmission channels and also considering transition risk. The paper then focuses on an industry that is rapidly being disrupted: the automobile industry. It uses the framework to show how climate changes and changes to the relevant policies have impacted the entire financial institution. Appropriate statistical models for forecasting, anomaly detection, and scenario modeling are built to demonstrate how the framework can be used by the relevant agencies to understand their financial risks. The paper also covers the climate risk component of the Pillar II capital calculation and why it makes sense for a bank to maintain it in addition to its regular Pillar I and Pillar II capital.

Keywords: capital calculation, climate risk, credit risk, pillar ii risk, scenario modeling

Procedia PDF Downloads 101
1005 Two-Dimensional Analysis and Numerical Simulation of the Navier-Stokes Equations for Principles of Turbulence around Isothermal Bodies Immersed in Incompressible Newtonian Fluids

Authors: Romulo D. C. Santos, Silvio M. A. Gama, Ramiro G. R. Camacho

Abstract:

In this paper, thermo-fluid dynamics considering mixed convection (natural and forced) and the principles of turbulent flow around complex geometries have been studied. In these applications, it was necessary to analyze the interaction between the flow field and a heated immersed body with constant surface temperature. The paper presents a study of incompressible, two-dimensional Newtonian fluid flow around an isothermal geometry using the immersed boundary method (IBM) with the virtual physical model (VPM). The numerical code proposed for all simulations computes the temperature with Dirichlet boundary conditions. Important dimensionless numbers are calculated, such as the Strouhal number, obtained using the Fast Fourier Transform (FFT), and the Nusselt number, together with the drag and lift coefficients, velocity, and pressure. Streamlines and isothermal lines are presented for each simulation, showing the flow dynamics and patterns. The Navier-Stokes and energy equations for mixed convection were discretized using the finite difference method in space and second-order Adams-Bashforth and fourth-order Runge-Kutta methods in time, with the fractional step method coupling the calculation of pressure, velocity, and temperature. For the simulation of turbulence, this work used the Smagorinsky and Spalart-Allmaras models. The first model is based on the local equilibrium hypothesis for small scales and on the Boussinesq hypothesis, such that the energy injected into the turbulence spectrum equals the energy dissipated by convective effects. The Spalart-Allmaras model uses only one transport equation for the turbulent viscosity. The results were compared with numerical data, validating the effect of heat transfer together with the turbulence models. The IBM/VPM is a powerful tool for simulating flow around complex geometries. The results showed good numerical convergence in relation to the references adopted.
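
The Smagorinsky closure mentioned above reduces to a local algebraic formula, ν_t = (C_s Δ)²|S| with |S| = √(2 S_ij S_ij). A minimal sketch on a 2-D finite-difference grid (the Smagorinsky constant and grid values are assumptions, not the paper's settings):

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S| on a 2-D grid,
    with |S| = sqrt(2 * Sij * Sij); Cs = 0.17 is an assumed constant."""
    dudy, dudx = np.gradient(u, dy, dx)   # axis 0 = y, axis 1 = x
    dvdy, dvdx = np.gradient(v, dy, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11 ** 2 + s22 ** 2 + 2.0 * s12 ** 2))
    delta = np.sqrt(dx * dy)              # filter width from the cell size
    return (cs * delta) ** 2 * s_mag

# uniform shear u = y, v = 0  ->  |S| = 1 everywhere
y = np.arange(8)[:, None] * 0.1
u = np.tile(y, (1, 8))
v = np.zeros((8, 8))
nu_t = smagorinsky_nu_t(u, v, 0.1, 0.1)
```

For the uniform shear field, ν_t is constant and equal to (C_s Δ)², which makes the formula easy to sanity-check before inserting it into a full solver.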

Keywords: immersed boundary method, mixed convection, turbulence methods, virtual physical model

Procedia PDF Downloads 91
1004 Study of Corrosion in Structures due to Chloride Infiltration

Authors: Sukrit Ghorai, Akku Aby Mathews

Abstract:

Corrosion of reinforcing steel is the leading cause of deterioration in concrete structures. It is an electrochemical process which leads to volumetric change in the concrete and causes cracking, delamination, and spalling. The objective of the study is to provide a rational method to estimate the probable chloride concentration at the reinforcement level for a known surface chloride concentration. The paper derives design charts to aid engineers in the quick calculation of the chloride concentration. Furthermore, the paper compares durability design against corrosion under the American, European, and Indian design standards.
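
The abstract does not state which diffusion model the charts are based on; a commonly used sketch is the error-function solution of Fick's second law for a constant surface concentration, C(x,t) = C_s(1 − erf(x / 2√(Dt))). The surface concentration and diffusion coefficient below are hypothetical values for illustration:

```python
import math

def chloride(x, t, cs, d):
    """Fick's-second-law profile C(x, t) = Cs * (1 - erf(x / (2*sqrt(D*t)))):
    chloride content at depth x (m) after time t (s), surface level Cs,
    apparent diffusion coefficient D (m^2/s)."""
    return cs * (1.0 - math.erf(x / (2.0 * math.sqrt(d * t))))

T50 = 50 * 365.25 * 24 * 3600    # 50 years in seconds
c_surface = 0.8                  # % of binder mass (hypothetical)
D = 1.0e-12                      # m^2/s (hypothetical)
c_cover = chloride(0.05, T50, c_surface, D)   # at a 50 mm cover depth
```

Comparing `c_cover` with a chloride threshold for depassivation is exactly the check a design chart encodes: the profile equals C_s at the surface and decays monotonically with depth.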

Keywords: chloride infiltration, concrete, corrosion, design charts

Procedia PDF Downloads 379
1003 Mechanical Alloying for Confirmation of Predicted Glass Forming Composition Range in Cr-Zn System

Authors: Foad Saadi

Abstract:

The aim of this work was to determine the approximate glass forming composition range of the Cr-Zn system for composites produced by mechanical alloying. The Miedema semi-empirical model predicted that the composition had to be in the range of 30-60 wt.% Zn, with Cr-32Zn being the most susceptible to producing an amorphous composite. In the next stage, several different Cr-Zn compositions were mechanically alloyed, one of which had the predicted composition. Products were characterized by XRD analysis. There was good agreement between calculation and experiment, in which the Cr-32Zn composite had the highest degree of amorphization.

Keywords: Cr-Zn system, mechanical alloying, amorphous composite, Miedema model

Procedia PDF Downloads 266
1002 BER Estimate of WCDMA Systems with MATLAB Simulation Model

Authors: Suyeb Ahmed Khan, Mahmood Mian

Abstract:

Simulation plays an important role during all phases of the design and engineering of communication systems, from the early stages of conceptual design through the various stages of implementation, testing, and fielding. In the present paper, a simulation model has been constructed for the WCDMA system in order to evaluate its performance. The model describes multi-user effects and the calculation of the BER (Bit Error Rate) in 3G mobile systems using Simulink in MATLAB 7.1. A Gaussian approximation defines the multi-user effect on system performance. The BER has been analyzed by comparing the transmitted and received data.
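
The standard Gaussian approximation treats the multi-user interference of K asynchronous users with processing gain N as additional noise; in its simplest noise-free form, BER ≈ Q(√(3N/(K−1))). This is a textbook approximation used here as a hedged sketch, not the paper's Simulink model:

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_gaussian_approx(n_users, proc_gain):
    """Standard Gaussian approximation for asynchronous DS-CDMA multi-user
    interference, ignoring thermal noise: BER = Q(sqrt(3*N / (K - 1)))."""
    return q_function(math.sqrt(3.0 * proc_gain / (n_users - 1)))
```

As expected, the approximation predicts that the BER worsens as users are added for a fixed processing gain, which is the multi-user effect the simulation model quantifies.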

Keywords: WCDMA, simulations, BER, MATLAB

Procedia PDF Downloads 553
1001 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is, in fact, more efficient to calculate the transformation of the distribution function in the Fourier domain instead and inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills the niche in literature, to the best of our knowledge, of accurate numerical methods for risk allocation but may also serve as a much faster alternative to the Monte Carlo simulation method for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to the MC simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. 
The limitation of this method lies in the "curse of dimension" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential application of this method has a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even to other risk types than credit risk.
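
The Fourier-inversion step at the heart of the method can be illustrated with the COS expansion on a truncated interval [a, b]: F_k = (2/(b−a)) Re{φ(kπ/(b−a)) e^(−ikπa/(b−a))}, with the k = 0 term halved. The sketch below recovers a density (a standard normal, chosen because its characteristic function is known in closed form); the paper applies the same machinery to the portfolio-loss distribution:

```python
import numpy as np

def cos_density(phi, a, b, n_terms, x):
    """Recover a density on [a, b] from its characteristic function phi via
    the COS method: f(x) ~ sum_k' F_k * cos(k*pi*(x - a)/(b - a))."""
    k = np.arange(n_terms)
    u = k * np.pi / (b - a)
    F = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))
    F[0] *= 0.5                    # first cosine-series term is halved
    return F @ np.cos(np.outer(u, np.asarray(x) - a))

# standard normal: phi(u) = exp(-u^2 / 2), support truncated to [-10, 10]
f_vals = cos_density(lambda u: np.exp(-0.5 * u ** 2), -10.0, 10.0, 128, [0.0, 1.0])
```

With 128 terms the recovered values match the normal density to near machine precision, illustrating the fast error convergence the abstract claims for smooth distributions.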

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 118
1000 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints on extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground-plane projection, with or without terrain as a component, to better view SAR data in an image domain comparable to what a human would see and to ease interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.); the data is then range compressed; and lastly, the contribution from each pulse is determined for each specific point in space by searching the time-history data for the reflectivity values for each pulse, summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing have finally allowed this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D point cloud size. Backprojection algorithms are embarrassingly parallel, since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for accurate reflectivity representation of a scene.
Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes any interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow for SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
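
The per-point accumulation described above can be sketched as delay-and-sum backprojection. This toy version is a CPU illustration of the parallel structure, with an invented 2-D geometry and without the phase correction a real SAR backprojector applies:

```python
import numpy as np

def backproject(sensor_pos, r_axis, profiles, grid_pts):
    """Delay-and-sum sketch: each candidate point accumulates, over all
    pulses, the range-compressed profile value at that point's range.
    Each grid point is independent, hence the embarrassing parallelism."""
    img = np.zeros(len(grid_pts))
    for pos, prof in zip(sensor_pos, profiles):
        r = np.linalg.norm(grid_pts - pos, axis=1)
        img += np.interp(r, r_axis, prof)
    return img

# toy collection: platform positions along x, point target at (0, 50)
sensors = np.column_stack([np.linspace(-20.0, 20.0, 41), np.zeros(41)])
r_axis = np.linspace(30.0, 80.0, 2001)
target = np.array([0.0, 50.0])
profiles = [np.exp(-((r_axis - np.linalg.norm(target - s)) / 0.2) ** 2)
            for s in sensors]                     # matched-filter peaks
grid = np.column_stack([np.linspace(-10.0, 10.0, 81), np.full(81, 50.0)])
image = backproject(sensors, r_axis, profiles, grid)
```

Only at the true target location do the per-pulse range profiles add coherently, so the image peaks at the candidate point coinciding with the target; on a GPU, each grid point would simply become one thread.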

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 41
999 Traverse Surveying Table Simple and Sure

Authors: Hamid Fallah

Abstract:

Creating surveying stations is the first thing that a surveyor learns; stations are used for control and setting-out in projects such as buildings, roads, tunnels, and monitoring – whatever is related to the preparation of maps. In this article, we present the method of calculation through the traverse table and, by checking several examples of errors in the traverse-table calculations of several surveying textbooks, we also verify the results of several software packages in a simple way. Surveyors measure angles and lengths when creating surveying stations, so the most important task of a surveyor is to correctly remove the angle and length errors from the calculations and to determine whether the amount of error is within the permissible limit so that it can be eliminated, or not.
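
The core checks of a traverse table can be sketched directly: the interior angles of a closed traverse must sum to (n−2)·180°, any misclosure is distributed over the angles, and the sums of departures and latitudes give the linear misclosure. The example figures below are invented for illustration:

```python
import math

def angular_misclosure(interior_angles_deg):
    """Closed-traverse check: interior angles should sum to (n - 2) * 180."""
    n = len(interior_angles_deg)
    return sum(interior_angles_deg) - (n - 2) * 180.0

def adjust_angles(interior_angles_deg):
    """Distribute the angular misclosure equally over the measured angles."""
    w = angular_misclosure(interior_angles_deg) / len(interior_angles_deg)
    return [a - w for a in interior_angles_deg]

def linear_misclosure(legs):
    """legs: (bearing_deg, distance) pairs. Departure = d*sin(B),
    latitude = d*cos(B); for a closed traverse both sums should vanish."""
    dep = sum(d * math.sin(math.radians(b)) for b, d in legs)
    lat = sum(d * math.cos(math.radians(b)) for b, d in legs)
    return math.hypot(dep, lat)
```

A perfect rectangular loop closes exactly, so any nonzero linear misclosure measures the combined angle and length errors the surveyor must judge against the permissible limit.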

Keywords: UTM, localization, scale factor, cartesian, traverse

Procedia PDF Downloads 52
998 Motion Planning of SCARA Robots for Trajectory Tracking

Authors: Giovanni Incerti

Abstract:

The paper presents a method for simple and immediate motion planning of a SCARA robot whose end-effector has to move along a given trajectory; the calculation procedure requires the user to define the trajectory, in analytical form or by points, and to assign the curvilinear abscissa as a function of time. On the basis of the geometrical characteristics of the robot, a specifically developed program determines the motion laws of the actuators that enable the robot to generate the required movement; this software can be used in all industrial applications in which a SCARA robot has to be frequently reprogrammed in order to generate various types of trajectories with different motion times.
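
The positional core of such a planner is the planar two-link inverse kinematics of the SCARA arm, evaluated at trajectory points sampled by the curvilinear abscissa. The link lengths, path, and abscissa law below are assumptions for illustration, not the paper's robot:

```python
import math

def scara_ik(x, y, l1, l2, elbow=1):
    """Planar 2-link inverse kinematics: joint angles (th1, th2) reaching
    (x, y); `elbow` selects the elbow-up/down branch."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))          # clamp numerical noise
    s2 = elbow * math.sqrt(1.0 - c2 * c2)
    th2 = math.atan2(s2, c2)
    th1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
    return th1, th2

def scara_fk(th1, th2, l1, l2):
    """Forward kinematics, used here to check the IK round trip."""
    return (l1 * math.cos(th1) + l2 * math.cos(th1 + th2),
            l1 * math.sin(th1) + l2 * math.sin(th1 + th2))

# follow a straight segment, sampled by the curvilinear abscissa s in [0, 1]
l1 = l2 = 0.3
path = [(0.2 + 0.3 * s, 0.1) for s in (0.0, 0.25, 0.5, 0.75, 1.0)]
joints = [scara_ik(px, py, l1, l2) for px, py in path]
```

Assigning s(t) (e.g., a trapezoidal law) then turns the sampled joint angles into the actuator motion laws with the desired motion time; the forward-kinematics round trip verifies each sample.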

Keywords: motion planning, SCARA robot, trajectory tracking, analytical form

Procedia PDF Downloads 288
997 Improved Mutual Inductance of Rogowski Coil Using Hexagonal Core

Authors: S. Al-Sowayan

Abstract:

Rogowski coils are increasingly used for the measurement of AC and transient electric currents. Most Rogowski coils in use today have circular or rectangular cores. In order to increase the measurement sensitivity of the Rogowski coil and permit smooth wire winding, this paper studies the effect of the core shape on the mutual inductance: the mutual inductance of a coil with an equilateral hexagonal core is calculated and simulated, and the result is compared with that of the commonly used core shapes.
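The kind of comparison the abstract describes can be sketched numerically. For a toroidal winding around a central conductor, the mutual inductance is M = (mu0 * N / (2*pi)) times the integral of h(r)/r over the core cross-section, where h(r) is the core height at radius r. The code below, with illustrative parameters not taken from the paper, checks a numeric integration against the closed-form rectangular-core result; an arbitrary cross-section (such as a hexagon) only changes the h(r) function:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def mutual_inductance_rect(N, a, b, h):
    """Closed-form mutual inductance of a Rogowski coil with a
    rectangular core: inner radius a, outer radius b, height h."""
    return MU0 * N * h / (2 * math.pi) * math.log(b / a)

def mutual_inductance_numeric(N, a, b, height_at, steps=2000):
    """Numeric M = (mu0 * N / (2*pi)) * integral of height(r)/r dr,
    valid for any core cross-section described by height_at(r)."""
    total = 0.0
    dr = (b - a) / steps
    for i in range(steps):
        r = a + (i + 0.5) * dr  # midpoint rule
        total += height_at(r) / r * dr
    return MU0 * N / (2 * math.pi) * total

# Sanity check: a constant height reproduces the rectangular formula.
N, a, b, h = 1000, 0.02, 0.04, 0.01
m_rect = mutual_inductance_rect(N, a, b, h)
m_num = mutual_inductance_numeric(N, a, b, lambda r: h)
print(m_rect, m_num)
```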

Keywords: Rogowski coil, mutual inductance, magnetic flux density, communication engineering

Procedia PDF Downloads 339
996 Using Mechanical Alloying for Verification of Predicted Glass Forming Composition Range

Authors: F. Saadi, M. Fatahi, M. Heidari

Abstract:

The aim of this work was to determine the approximate glass-forming composition range of the Ni-Sn system for alloys produced by mechanical alloying. The semi-empirical Miedema model predicted that the composition had to lie in the range of 30-60 wt.% tin, with Ni-40Sn the most susceptible to forming an amorphous alloy. In the next stage, several different Ni-Sn compositions, one of which had the predicted optimum composition, were mechanically alloyed. The products were characterized by XRD analysis. There was good agreement between calculation and experiment: the Ni-40Sn alloy showed the highest degree of amorphization.

Keywords: Ni-Sn system, mechanical alloying, amorphous alloy, Miedema model

Procedia PDF Downloads 403
995 Atomistic Study of Structural Properties and Phase Transition of TmAs Semiconductor Using the FP-LMTO Method

Authors: Rekab Djabri Hamza, Daoud Salah

Abstract:

We report first-principles calculations of the structural and magnetic properties of the TmAs compound in the NaCl (B1) and CsCl (B2) structures, employing density functional theory (DFT) within the local density approximation (LDA). We use the full-potential linear muffin-tin orbital (FP-LMTO) method as implemented in the LMTART (MindLab) code. Results are given for the lattice parameter (a), the bulk modulus (B), and its first pressure derivative (B') in the two structures, NaCl (B1) and CsCl (B2). The most important result of this work is the prediction of a possible phase transition from cubic rocksalt NaCl (B1) to CsCl (B2) at 32.96 GPa for TmAs. All results were obtained within the LDA.

Keywords: LDA, phase transition, properties, DFT

Procedia PDF Downloads 81
994 Calculation of Energy Gap of (Ga,Mn)As Diluted Magnetic Semiconductor from the Eight-Band k.p Model

Authors: Khawlh A. Alzubaidi, Khadijah B. Alziyadi, Amor M. Alsayari

Abstract:

Nowadays, (Ga,Mn)As is one of the most extensively studied and best-understood diluted magnetic semiconductors. The study of (Ga,Mn)As is also a fervent research area, since it allows the exploration of a variety of novel functionalities and spintronics concepts that could be implemented in the future. In this work, we calculate the energy gap of (Ga,Mn)As using the eight-band k.p model. The Hamiltonian includes the effects of spin-orbit coupling, spin splitting, and strain. We study the dependence of the energy gap on the Mn content, as well as the effect of strain, which is varied continuously from tensile to compressive. Finally, we provide analytical expressions for the (Ga,Mn)As energy band gap that take both parameters (Mn concentration and strain) into account.

Keywords: energy gap, diluted magnetic semiconductors, k.p method, strain

Procedia PDF Downloads 91
993 On the Application of Heuristics of the Traveling Salesman Problem for the Task of Restoring the DNA Matrix

Authors: Boris Melnikov, Dmitrii Chaikovskii, Elena Melnikova

Abstract:

The traveling salesman problem (TSP) is a well-known optimization problem that seeks the shortest possible route that visits a set of points and returns to the starting point. In this paper, we apply some heuristics for the TSP to the task of restoring the DNA matrix. This restoration problem is often considered in biocybernetics: the matrix of distances between DNA sequences must be recovered when not all of its elements are known at the input. We consider the possibility of using this method in testing algorithms that calculate the distance between a pair of DNA sequences, in order to restore the partially filled matrix.
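As one example of the class of TSP heuristics mentioned, a minimal nearest-neighbor sketch is shown below. The distance matrix is illustrative, not DNA data from the paper; in the matrix-restoration setting, unknown entries could be set to float('inf') so the heuristic avoids them:

```python
def nearest_neighbor_tour(dist):
    """Greedy nearest-neighbor heuristic for the symmetric TSP.

    `dist` is a full n x n distance matrix.  Returns a tour as a list
    of indices starting (and implicitly ending) at city 0.
    """
    n = len(dist)
    tour = [0]
    unvisited = set(range(1, n))
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(dist, tour):
    """Total length of the closed tour, including the return edge."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

# A small symmetric instance:
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
t = nearest_neighbor_tour(d)
print(t, tour_length(d, t))  # [0, 1, 3, 2] 23
```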

Keywords: optimization problems, DNA matrix, partially filled matrix, traveling salesman problem, heuristic algorithms

Procedia PDF Downloads 120
992 Identification of Accumulated Hydrocarbon Based on Heat Propagation Analysis in Order to Develop Mature Field: Case Study in South Sumatra Basin, Indonesia

Authors: Kukuh Suprayogi, Muhamad Natsir, Olif Kurniawan, Hot Parulian, Bayu Fitriana, Fery Mustofa

Abstract:

This new approach utilizes heat-propagation analysis, studying and evaluating the effect of the presence of hydrocarbons on the flow of heat from the subsurface to the surface. Heat propagation is governed by the thermal conductivity of the rocks. Thermal conductivity is the quantity that describes the ability of a rock to conduct heat; it depends on the constituent lithology, the porosity, and the pore-filling fluid. The higher the thermal conductivity of a rock, the more easily heat flows through it. By the same token, heat flows more easily through rock whose pores are filled with water than through rock filled with hydrocarbons, since hydrocarbons act as thermal insulators. The main objective of this research is to model the calculated propagation of heat, in degrees Celsius, from the subsurface to the surface, and to compare the result with the surface temperature measured directly at the location. To calculate heat propagation, we first determine the thermal conductivity of the rocks; since the rocks at the calculation point are not homogeneous but consist of strata, we determine the mineral constituents and porosity of each stratum. For the pore-fluid parameter, we assume that all pores are filled with water. Once a thermal conductivity value is obtained for each rock unit, we model the heat-propagation profile from the bottom of the well to the surface. The initial temperature comes from bottom-hole temperature (BHT) data obtained during drilling. The calculated temperature at each depth is displayed as a temperature-versus-depth plot describing heat propagation from the bottom of the well to the surface, with water as the assumed pore fluid.
In practice, the contribution of hydrocarbons to reducing the heat that reaches the surface can be identified by comparing the calculated heat propagation at a given point with the surface temperature measured at that point, assuming that the measured surface temperature originates from the asthenosphere. This publication demonstrates that hydrocarbon accumulations can be identified by analysis of the heat-propagation profile, which could serve as a method for identifying the presence of hydrocarbons.
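The layered calculation described above can be sketched with steady-state one-dimensional Fourier conduction: a constant heat flux q produces a temperature drop of q * thickness / k across each stratum. All numbers below (BHT, heat flux, conductivities) are illustrative assumptions, not field data from this study:

```python
def temperature_profile(t_bottom, heat_flux, layers):
    """Steady-state 1-D conductive temperature profile through strata.

    `layers` is a list of (thickness_m, conductivity_W_per_mK) tuples,
    ordered from the bottom of the well to the surface.  With a
    constant heat flux q (W/m^2), Fourier's law gives a temperature
    drop of q * thickness / k across each layer, so a more insulating
    (lower-k, e.g. hydrocarbon-filled) layer produces a larger drop.
    Returns the temperature at each layer boundary, bottom first.
    """
    temps = [t_bottom]
    for thickness, k in layers:
        temps.append(temps[-1] - heat_flux * thickness / k)
    return temps

# Illustrative values: BHT of 120 C at depth, q = 65 mW/m^2.
water_filled = [(500, 2.5), (500, 2.0), (1000, 1.8)]
hydrocarbon = [(500, 2.5), (500, 0.8), (1000, 1.8)]  # middle layer oil-filled
t_water = temperature_profile(120.0, 0.065, water_filled)
t_hc = temperature_profile(120.0, 0.065, hydrocarbon)
print(t_water[-1], t_hc[-1])  # the hydrocarbon case reaches the surface cooler
```

The gap between the water-filled prediction and the actually measured surface temperature is the anomaly the method interprets as an indication of accumulated hydrocarbons.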

Keywords: thermal conductivity, rock, pore fluid, heat propagation

Procedia PDF Downloads 88
991 FEM Analysis of an Occluded Ear Simulator with Narrow Slit Pathway

Authors: Manabu Sasajima, Takao Yamaguchi, Yoshio Koike, Mitsuharu Watanabe

Abstract:

This paper discusses the propagation of sound waves in air, specifically in the narrow rectangular pathways of an occluded-ear simulator for acoustic measurements. In narrow pathways, both the speed of sound and the phase of the sound waves are affected by damping due to air viscosity. Herein, we propose a new finite-element method (FEM) that considers the effects of air viscosity. The method was developed as an extension of existing FEMs for porous sound-absorbing materials. The results of a numerical calculation for a three-dimensional ear-simulator model using the proposed FEM were validated by comparison with a theoretical lumped-parameter analysis and with standard values.

Keywords: ear simulator, FEM, simulation, viscosity

Procedia PDF Downloads 413
990 Design and Development of a Prototype Vehicle for Shell Eco-Marathon

Authors: S. S. Dol

Abstract:

Improvements in vehicle efficiency can reduce global fossil-fuel consumption. For that reason, Shell Global Corporation organizes the Shell Eco-marathon, in which student teams are required to design, build, and test energy-efficient vehicles. This paper focuses on the design process and development of a fuel-efficient vehicle satisfying the requirements of the competition. In this project, three components are designed and analyzed: the body, the chassis, and the powertrain of the vehicle. An optimum design for each component is produced through simulation analysis and theoretical calculation, with improvements made as the project progresses.

Keywords: energy efficient, drag force, chassis, powertrain

Procedia PDF Downloads 297
989 A Method for Calculating Dew Point Temperature in the Humidity Test

Authors: Wu Sa, Zhang Qian, Li Qi, Wang Ye

Abstract:

Humidity tests currently do not use the dew-point temperature as a control parameter. This paper uses a wet- and dry-bulb thermometer to measure the vapor pressure and introduces several saturation-vapor-pressure formulas that are easy to evaluate on the controller. A dew-point calculation model is then established to obtain the relationship between the dew-point temperature and the vapor pressure. Finally, a check against 100 sample groups in the range 0-100 °C from the Psychrometric Handbook shows that the average error is small. The formula can therefore be applied to calculate the dew-point temperature in humidity tests.
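One widely used saturation-vapor-pressure formula of the kind the abstract refers to is the Magnus approximation, which can be inverted in closed form for the dew point. The coefficients below (6.112 hPa, 17.62, 243.12 °C) are a standard choice over water and not necessarily the formulas selected in this paper:

```python
import math

def saturation_vapor_pressure(t_c):
    """Magnus approximation of saturation vapor pressure (hPa) over
    water at temperature t_c in degrees Celsius."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def dew_point(t_c, rh_percent):
    """Invert the Magnus formula: the dew point is the temperature at
    which the actual vapor pressure equals the saturation pressure."""
    e = saturation_vapor_pressure(t_c) * rh_percent / 100.0
    gamma = math.log(e / 6.112)
    return 243.12 * gamma / (17.62 - gamma)

# At 25 C and 60 % relative humidity:
td = dew_point(25.0, 60.0)
print(round(td, 1))  # 16.7
```

With a wet- and dry-bulb pair, the actual vapor pressure e would instead come from the psychrometric relation before the same inversion step is applied.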

Keywords: dew point temperature, psychrometric handbook, saturation vapor pressure, wet and dry bulb thermometer

Procedia PDF Downloads 455