Search results for: position error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3974

3644 Feature Location Restoration for Under-Sampled Photoplethysmogram Using Spline Interpolation

Authors: Hangsik Shin

Abstract:

The purpose of this research is to restore the feature locations of an under-sampled photoplethysmogram using spline interpolation and to investigate the feasibility of feature shape restoration. We obtained a 10 kHz-sampled photoplethysmogram and decimated it to generate under-sampled datasets with sampling frequencies of 5 kHz, 2.5 kHz, 1 kHz, 500 Hz, 250 Hz, 25 Hz and 10 Hz. To evaluate restoration performance, we interpolated the under-sampled signals back to 10 kHz and compared the feature locations with those of the original 10 kHz-sampled photoplethysmogram. The features were the upper and lower peaks of the photoplethysmography waveform. The results showed that the time differences were dramatically decreased by interpolation: the location error was less than 1 ms for both feature types. In the 10 Hz-sampled case, the location error was also greatly reduced; however, it remained above 10 ms.
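
As a concrete illustration of the decimation-and-interpolation procedure described in this abstract, a minimal Python sketch follows; the synthetic waveform, the 250 Hz example rate, and the peak-matching step are illustrative assumptions rather than the study's actual data or code.

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import find_peaks

    fs_ref, fs_low = 10_000, 250          # reference and under-sampled rates (Hz)
    t_ref = np.arange(0, 10, 1 / fs_ref)  # 10 s of synthetic "PPG"
    ppg = np.sin(2 * np.pi * 1.2 * t_ref) + 0.3 * np.sin(2 * np.pi * 2.4 * t_ref)

    # decimate by simple sub-sampling, then restore with a cubic spline at 10 kHz
    step = fs_ref // fs_low
    t_low, ppg_low = t_ref[::step], ppg[::step]
    ppg_rec = CubicSpline(t_low, ppg_low)(t_ref)

    # compare upper-peak locations of the reference and the reconstructed signal
    p_ref, _ = find_peaks(ppg)
    p_rec, _ = find_peaks(ppg_rec)
    n = min(len(p_ref), len(p_rec))
    err_ms = 1000 * np.abs(t_ref[p_ref[:n]] - t_ref[p_rec[:n]])
    print(f"mean peak-location error: {err_ms.mean():.3f} ms")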

Keywords: peak detection, photoplethysmography, sampling, signal reconstruction

Procedia PDF Downloads 342
3643 Maximum Initial Input Allowed to Iterative Learning Control Set-up Using Singular Values

Authors: Naser Alajmi, Ali Alobaidly, Mubarak Alhajri, Salem Salamah, Muhammad Alsubaie

Abstract:

Iterative Learning Control (ILC) is known as a control tool for overcoming periodic disturbances in repetitive systems. The technique drives the error signal toward zero as the number of operations increases. The learning process in this context depends strongly on the initial input: if it is selected properly, learning is more effective than when the system starts blind. ILC uses data recorded during previous executions to update the input of the following execution/trial so that a reference trajectory is followed with high accuracy. Error convergence in ILC generally depends heavily on the input applied to the plant at trial 1; a good choice of initial input therefore makes learning faster and, as a consequence, drives the error to zero faster as well. In the work presented here, an upper limit based on singular values (SV) is derived for the initial input applied at trial 1, such that the system follows the reference in fewer trials without responding aggressively or exceeding the working envelope within which the system is required to move (in a robot arm, for example). Simulation results illustrate the theory introduced in this paper.
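
A minimal sketch of a P-type ILC update with a singular-value-based cap on the trial-1 input is given below; the lifted plant matrix, learning gain, and the particular bound ||u0|| <= ||r||/sigma_max(P) are illustrative assumptions, not the paper's derivation.

    import numpy as np

    # Minimal P-type ILC sketch. P is a lifted plant matrix mapping one trial's
    # input sequence to its output sequence; the random lower-triangular P,
    # reference and learning gain below are illustrative stand-ins.
    rng = np.random.default_rng(0)
    N = 50
    P = np.tril(rng.normal(0.0, 0.1, (N, N))) + np.eye(N)   # causal lifted plant
    r = np.sin(np.linspace(0, 2 * np.pi, N))                 # reference trajectory
    L = 0.5 * np.eye(N)                                      # learning gain matrix

    # Since ||P u|| <= sigma_max(P) ||u||, keeping ||u0|| <= ||r|| / sigma_max(P)
    # prevents the first-trial output norm from exceeding the reference norm.
    sigma_max = np.linalg.svd(P, compute_uv=False).max()
    u = r.copy()
    bound = np.linalg.norm(r) / sigma_max
    if np.linalg.norm(u) > bound:
        u *= bound / np.linalg.norm(u)

    for trial in range(30):
        e = r - P @ u            # tracking error at the current trial
        u = u + L @ e            # P-type update applied to the next trial's input
    print("final tracking-error norm:", np.linalg.norm(r - P @ u))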

Keywords: initial input, iterative learning control, maximum input, singular values

Procedia PDF Downloads 215
3642 The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z

Authors: Catarina Cruz, Ana Breda

Abstract:

Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer, these tilings being related to error correcting codes for the transmission of information over a noisy channel. We focus our attention on the question ‘for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?’. It seems that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, the most difficult cases being considered those in which the radius of the Lee spheres is equal to 2. The relation between these tilings and error correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded by the central codeword. When the Lee spheres of radius r centered at elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on the assumption that such a code M exists. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). In this sense, and taking into account that we are dealing with Lee spheres of radius 2, O covers all words which are at distance two or less from it. By the definition of a PL(7, 2) code, each word at distance three from O must be covered by a unique codeword of M. These words have to be covered by codewords at distance five from O. We prove the non-existence of PL(7, 2) codes by showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are at distance five from O, contradicting the definition of a PL(7, 2) code. We reach this contradiction by combining the cardinalities of particular subsets of codewords at distance five from O. There exists an extensive literature on codes in the Lee metric. Here, we present a new approach to prove the non-existence of PL(7, 2) codes.
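
For reference, the cardinality of the n-dimensional Lee sphere of radius r can be evaluated with the short sketch below; the closed-form sum is the standard one for Lee balls, and the PL(7, 2) check is purely illustrative.

    from math import comb

    def lee_ball_size(n: int, r: int) -> int:
        # Number of points of Z^n within Lee (L1) distance r of a fixed center:
        # choose i coordinates to be nonzero (each with 2 possible signs) and
        # distribute the remaining distance among them.
        return sum(2**i * comb(n, i) * comb(r, i) for i in range(min(n, r) + 1))

    print(lee_ball_size(7, 2))   # 113 words covered by each codeword of a PL(7, 2) code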

Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings

Procedia PDF Downloads 134
3641 Assessment of Time-variant Work Stress for Human Error Prevention

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

For an operator in a nuclear power plant, human error is one of the most dreaded factors that may result in unexpected accidents. The probability of human errors may be low, but their risk would be unimaginably enormous. Thus, for accident prevention, it is indispensable to analyze the influence of any factor which may raise the possibility of human errors. Over the past decades, many research results have shown that the performance of human operators may vary over time due to a variety of factors. Among them, stress is known to be an indirect factor that may cause human errors and result in mental illness. Quite a few assessment tools have been developed to assess the stress level of human workers. However, their use for anticipating human performance, which is related to human error probability, remains questionable, because they were mainly developed from the viewpoint of mental health rather than industrial safety. The stress level of a person may go up or down with work time. In that sense, if these tools are to be applicable in the safety domain, they should at least be able to assess the variation caused by work time. Therefore, this study aimed to compare their applicability for safety purposes. More than 10 kinds of work stress tools were analyzed with reference to assessment items, assessment and analysis methods, and follow-up measures, which are known to be factors closely related to work stress. The results showed that most tools concentrate their weights on a few common organizational factors such as demands, supports, and relationships, in that order, and their weights were broadly similar. However, they failed to recommend practical solutions; instead, they merely advised setting up general counterplans within a PDCA cycle or risk management activities, which are far from practical human error prevention. Thus, it was concluded that applying stress assessment tools developed mainly for mental health is impractical for safety purposes with respect to anticipating human performance, and that development of a new assessment tool is inevitable if stress level is to be assessed in terms of human performance variation and accident prevention. As a practical counterplan, this study proposes a new scheme for assessing the work stress level of a human operator that may vary over work time, which is closely related to the possibility of human errors.

Keywords: human error, human performance, work stress, assessment tool, time-variant, accident prevention

Procedia PDF Downloads 647
3640 Banking Sector Development and Economic Growth: Evidence from the State of Qatar

Authors: Fekri Shawtari

Abstract:

The banking sector plays a crucial role in the economic development of a country. As a financial intermediary, it is assigned a great role in economic growth and stability. This paper aims to examine empirically the relationship between the banking industry and economic growth in the State of Qatar. We adopt a vector error correction model (VECM) along with Granger causality to address the long-run and short-run relationships between the banking sector and economic growth. It is expected that the results will give policy direction to policymakers for designing strategies conducive to boosting development and achieving the targeted economic growth in the current situation.
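
A sketch of how such a VECM and Granger-causality analysis might be set up in Python with statsmodels is shown below; the column names, lag order, and data file are placeholders, not the paper's actual dataset.

    import pandas as pd
    from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank
    from statsmodels.tsa.stattools import grangercausalitytests

    # df is assumed to hold annual observations with columns 'gdp' and 'bank_credit'
    # (real GDP and a banking-development proxy); the file name is hypothetical.
    df = pd.read_csv("qatar_banking_gdp.csv", index_col=0)

    rank = select_coint_rank(df, det_order=0, k_ar_diff=1)    # Johansen-based rank choice
    vecm = VECM(df, k_ar_diff=1, coint_rank=rank.rank, deterministic="ci").fit()
    print(vecm.summary())                                     # alpha = speed of adjustment

    # short-run causality checks in both directions
    grangercausalitytests(df[["gdp", "bank_credit"]], maxlag=2)
    grangercausalitytests(df[["bank_credit", "gdp"]], maxlag=2)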

Keywords: economic growth, banking sector, Qatar, vector error correction model, VECM

Procedia PDF Downloads 147
3639 China's BRI and Germany's Baghdad Railroad – a Realist Analysis of Hegemonic Conflict and the Circumvention of Maritime Power

Authors: Kamen Kirov

Abstract:

In the late 19th and early 20th centuries, Britain dominated global trade and finance in large part due to its maritime superiority. Germany, a land power, sought to undermine Britain’s position as the primary hegemon but ultimately could not challenge Britain’s maritime position or capabilities. This drove Germany to seek alternative strategies to weaken Britain’s position. Notably, it pushed Germany to create a reliable overland link through the Balkans to the Middle East via railroad. This article seeks to draw parallels between the German-British hegemonic conflict of the early 20th century and the Chinese-American hegemonic conflict taking place today, using both secondary historical sources and current scholarly discussions of the changing international sphere. In doing so, it provides useful insights into how China might attempt to outflank American power. The article demonstrates that, in many ways, the strategic positions and approaches of early-20th-century Germany and modern China are similar. Both countries were faced with a vastly superior foe with respect to maritime and economic power, and in both cases, their response was to undermine their rival hegemon by creating new overland infrastructure. Furthermore, in both cases, a major goal of creating new overland links was to gain further access to and control over Middle Eastern energy markets. It seems that, in the modern day, China is conducting such a policy on a much grander scale than Germany did in the early 20th century, which may have negative consequences for the US strategic position.

Keywords: Belt and Road Initiative, hegemonic conflict, maritime power, realism

Procedia PDF Downloads 164
3638 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, attempting to create a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate that there is a strong correlation between the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more distinctive features, which challenge the autoencoder's compressive capacities, are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because their features are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
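
The reconstruction-error and distinctiveness measurements described here can be sketched as follows; the encode/decode callables, array shapes, and variable names are placeholders and not the study's released code.

    import numpy as np
    from scipy.stats import spearmanr

    def analyse(images, memorability, encode, decode):
        """Correlate reconstruction error and latent distinctiveness with memorability.
        `images` is an (N x ...) array, `memorability` an N-vector of scores, and
        `encode`/`decode` are assumed trained autoencoder halves (placeholders)."""
        latents = np.stack([encode(x) for x in images])
        recons = np.stack([decode(z) for z in latents])

        # reconstruction error: mean squared difference between input and output
        rec_err = np.mean((images - recons) ** 2, axis=tuple(range(1, images.ndim)))

        # distinctiveness: Euclidean distance to the nearest other latent vector
        d = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        distinctiveness = d.min(axis=1)

        print("rho(rec. error, memorability):     ", spearmanr(rec_err, memorability)[0])
        print("rho(distinctiveness, memorability):", spearmanr(distinctiveness, memorability)[0])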

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 47
3637 Left Ventricular Adaptations of Elite Volleyball Players Based on the Playing Position

Authors: Shihab Aldin Al Riyami, Khosrow Ebrahim, Sajad Ahmadizad

Abstract:

Hemodynamic changes and ventricular loading during exercise lead to left ventricular (LV) hypertrophy. In athletes, volume load induces enlargement of the LV internal diameter and a proportional increase in wall thickness, while pressure load induces thickening of the ventricular wall. These adaptations are not identical in all athletes and are related to the type of sport. Volleyball players have different types of activity and roles based on their playing position; therefore, their physiological adaptations and requirements differ. The aim of the current study was to investigate LV adaptations in elite volleyball players based on their playing position. Sixty male elite volleyball players (age 30.55±3.64 years) from Brazil, Serbia, Poland, Iran, Colombia, Cameroon, Japan, Egypt, Qatar, and Tunisia, covering all five volleyball playing positions, were investigated. All participants had at least 3 years of experience at a professional level and in international tournaments. LV characteristics were evaluated and measured using echocardiography. Statistical analyses revealed significant differences (P<0.05) among the five groups of players for LV internal dimension (LVID), posterior wall thickness (PWT), and interventricular septum (IVS). Post-hoc analysis showed that opposite players had significantly higher values of LVID, PWT, and IVS when compared with the other players, including outside hitters, middle blockers, setters, and liberos (p<0.05). Additionally, in libero players, PWT was significantly lower when compared with the other players (p<0.05). Based on these findings, it is concluded that LV adaptations in volleyball players are related to their playing position and that opposite players show the greatest LV adaptations compared to other positions.

Keywords: athletes, cardiac adaptations, echocardiography, heart, sport

Procedia PDF Downloads 224
3636 Financial Inclusion for Inclusive Growth in an Emerging Economy

Authors: Godwin Chigozie Okpara, William Chimee Nwaoha

Abstract:

The paper sets out to show how a financial inclusion index can be calculated and also investigates the impact of inclusive finance on inclusive growth in an emerging economy. In the light of these objectives, the chi-wins method was used to calculate indexes of financial inclusion, while co-integration and an error correction model were used to evaluate the impact of financial inclusion on inclusive growth. The results of the analysis revealed that financial inclusion, while having a long-run relationship with GDP growth, is an insignificant function of the growth of the economy. The speed of adjustment is correctly signed and significant. On the basis of these results, the researchers call for tireless efforts by government and the banking sector to promote financial inclusion in developing countries.

Keywords: chi-wins index, co-integration, error correction model, financial inclusion

Procedia PDF Downloads 629
3635 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data

Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini

Abstract:

A considerable part of the rainfall data used in hydrological practice is available in aggregated form over constant time intervals. This can produce undesirable effects, such as the underestimation of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d=ta, the estimate of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) every very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations of Central Italy, may overcome this issue; 5) these relationships should make it possible to improve the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
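
The aggregation effect discussed above can be illustrated with the short sketch below, which compares the annual maximum depth obtained from a sliding window with that obtained from fixed windows of width ta = d; the synthetic 5-minute rainfall series is an assumption used only for demonstration.

    import numpy as np

    rng = np.random.default_rng(1)
    rain_5min = rng.exponential(0.05, size=365 * 24 * 12)      # 5-min rainfall depths (mm)

    d_steps = 12                                               # duration d = 1 hour

    # "true" Hd: maximum over every possible 1-hour window
    windows = np.lib.stride_tricks.sliding_window_view(rain_5min, d_steps)
    hd_true = windows.sum(axis=1).max()

    # aggregated Hd: maximum over fixed clock-hour totals (ta = d)
    trimmed = rain_5min[: len(rain_5min) // d_steps * d_steps]
    hd_coarse = trimmed.reshape(-1, d_steps).sum(axis=1).max()

    print(f"underestimation: {100 * (hd_true - hd_coarse) / hd_true:.1f} %")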

Keywords: central Italy, extreme events, rainfall data, underestimation errors

Procedia PDF Downloads 165
3634 Capability Prediction of Machining Processes Based on Uncertainty Analysis

Authors: Hamed Afrasiab, Saeed Khodaygan

Abstract:

Prediction of machining process capability at the design stage plays a key role in achieving precision design and manufacturing of mechanical products. Inaccuracies in the machining process lead to errors in the position and orientation of machined features on the part and strongly affect the process capability reflected in the final quality of the product. In this paper, an efficient systematic approach is presented to investigate machining errors, predict the manufacturing errors of the parts, and predict the capability of the corresponding machining processes. A mathematical formulation of fixture locator modeling is presented to establish the relationship between the part errors and their sources. Based on this method, the final machining errors of the part can be accurately estimated by relating them to the combined dimensional and geometric tolerances of the workpiece-fixture system. The method is developed for uncertainty analysis based on both worst-case and statistical approaches. The application of the presented method is illustrated with an example, and the computational results are compared with Monte Carlo simulation results.
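
A toy comparison of the worst-case and statistical (Monte Carlo) treatments mentioned above is sketched below; the linear error model, sensitivities, and tolerance values are illustrative assumptions, not the paper's fixture formulation.

    import numpy as np

    sens = np.array([1.0, -0.7, 0.4])        # sensitivity of feature position to each locator
    tol = np.array([0.02, 0.03, 0.01])       # symmetric locator tolerances (mm)

    # worst case: every locator sits at its unfavourable limit simultaneously
    worst_case = np.sum(np.abs(sens) * tol)

    # statistical: locators vary independently, tolerance taken as +/- 3 sigma
    rng = np.random.default_rng(0)
    samples = rng.normal(0.0, tol / 3.0, size=(100_000, 3))
    feature_err = samples @ sens
    statistical = 3.0 * feature_err.std()

    print(f"worst-case feature error : +/- {worst_case:.4f} mm")
    print(f"3-sigma (Monte Carlo)    : +/- {statistical:.4f} mm")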

Keywords: process capability, machining error, dimensional and geometrical tolerances, uncertainty analysis

Procedia PDF Downloads 278
3633 dsPIC30F6010A Control for 12/8 Switched Reluctance Motor

Authors: Yang Zhou, Chen Hao, Ma Xiaoping

Abstract:

This paper briefly introduces the microcontroller unit and then goes into detail about the control strategies for the switched reluctance motor (SRM). First, it describes the main driving state control for the motor and the importance of the motor position sensor. For different speeds, the controller chooses among control styles such as voltage chopper control, angle position control, and current chopper control, each of which has its own advantages and disadvantages. Combining the strengths of the three methods, the main control chip intelligently selects the best-performing control depending on the load and speed demand. The detailed flow diagram is shown in the paper. Finally, an experimental platform is established to verify the correctness of the proposed theory.

Keywords: switched reluctance motor, dsPIC microcontroller, current chopper

Procedia PDF Downloads 400
3632 MCERTL: Mutation-Based Correction Engine for Register-Transfer Level Designs

Authors: Khaled Salah

Abstract:

In this paper, we present MCERTL (a mutation-based correction engine for RTL designs), an automatic error correction technique based on mutation analysis. A mutation-based correction methodology is proposed to automatically fix erroneous RTL designs. The proposed strategy combines mutation with assertion-based localization. The erroneous statements are mutated to produce candidate fixes for the failing RTL code. A concurrent mutation engine is proposed to mitigate the computational cost of running mutation operators sequentially. The proposed methodology is evaluated against several benchmarks. The experimental results demonstrate that the proposed method can automatically locate and correct multiple bugs in reasonable time.
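
A generic sketch of the mutate-and-test loop underlying this kind of engine is given below; it is Python pseudocode of the idea, not the MCERTL tool itself, and the mutation operators and test oracle are placeholders.

    # Generic mutate-and-test repair loop: not MCERTL, just the underlying idea.
    MUTATIONS = [("&", "|"), ("|", "&"), ("+", "-"), ("-", "+"), ("<", ">")]

    def mutants(stmt: str):
        """Yield single-operator mutants of a suspicious statement."""
        for old, new in MUTATIONS:
            if old in stmt:
                yield stmt.replace(old, new, 1)

    def repair(design_lines, suspicious_idx, passes_tests):
        """Try mutants of the localized statement until the testbench passes."""
        for candidate in mutants(design_lines[suspicious_idx]):
            patched = list(design_lines)
            patched[suspicious_idx] = candidate
            if passes_tests(patched):      # assertion/testbench oracle (placeholder)
                return patched
        return None                        # no single-statement fix found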

Keywords: bug localization, error correction, mutation, mutants

Procedia PDF Downloads 253
3631 Autonomous Landing of UAV on Moving Platform: A Mathematical Approach

Authors: Mortez Alijani, Anas Osman

Abstract:

Recently, the popularity of unmanned aerial vehicles (UAVs) has skyrocketed amidst unprecedented events and the global pandemic, as they play a key role in both the security and health sectors through surveillance, taking test samples, transporting crucial goods, and spreading awareness among civilians. However, the process of designing and producing such aerial robots is constrained by internal and external factors that pose serious challenges. Landing is one of the key operations during flight; in particular, the autonomous landing of a UAV on a moving platform is a scientifically complex engineering problem. A successful automatic landing of a UAV on a moving platform typically requires accurate localization of the landing, fast trajectory planning, and robust control planning. To achieve these goals, information about the autonomous landing process, such as the intersection point, the positions of the platform and the UAV, and the inclination angle, is necessary. In this study, a mathematical approach to this problem in the X-Y plane, based on the inclination angle and the position of the UAV during the landing process, is presented. The experimental results depict the accurate position of the UAV, the intersection between the UAV and the moving platform, and the inclination angle during the landing process, allowing prediction of the intersection point.
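
A minimal planar sketch of the intersection computation discussed above follows; the straight-line descent at a fixed inclination angle and the constant-velocity platform are simplifying assumptions, not the paper's full model.

    import numpy as np

    def landing_intersection(uav_xy, incl_deg, plat_xy, plat_vel, descent_speed):
        """Point in the X-Y plane where a UAV moving along a fixed inclination line
        meets a platform moving at constant velocity (None if they never meet)."""
        heading = np.array([np.cos(np.radians(incl_deg)), np.sin(np.radians(incl_deg))])
        uav_vel = descent_speed * heading
        rel_pos = np.asarray(plat_xy, float) - np.asarray(uav_xy, float)
        rel_vel = uav_vel - np.asarray(plat_vel, float)
        denom = rel_vel @ rel_vel
        if denom == 0.0:
            return None
        t = (rel_pos @ rel_vel) / denom                     # meeting time, least squares
        if t < 0 or np.linalg.norm(rel_pos - t * rel_vel) > 1e-6:
            return None                                     # paths do not actually cross
        return np.asarray(uav_xy, float) + t * uav_vel

    print(landing_intersection(uav_xy=(0, 0), incl_deg=45.0,
                               plat_xy=(10, 0), plat_vel=(0, 1),
                               descent_speed=np.sqrt(2)))    # -> [10. 10.]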

Keywords: autonomous landing, inclination angle, unmanned aerial vehicles, moving platform, X-Y axis, intersection point

Procedia PDF Downloads 138
3630 An Application of Modified M-out-of-N Bootstrap Method to Heavy-Tailed Distributions

Authors: Hannah F. Opayinka, Adedayo A. Adepoju

Abstract:

This study is an extension of prior work on the modification of the existing m-out-of-n (moon) bootstrap method for heavy-tailed distributions, in which a modified m-out-of-n (mmoon) bootstrap was proposed as an alternative to the existing moon technique. In this study, both the moon and mmoon techniques were applied to two real income datasets that follow lognormal and Pareto distributions, respectively, with finite variances. The performances of the two techniques were compared using the Standard Error (SE) and the Root Mean Square Error (RMSE). The findings showed that mmoon outperformed the moon bootstrap, with smaller SEs and RMSEs for all sample sizes considered in the two datasets.
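
For context, the standard m-out-of-n bootstrap (the baseline the study modifies) can be sketched as follows; the synthetic Pareto incomes, the choice m = n^(2/3), and the mean as the statistic are illustrative assumptions, not the modified (mmoon) scheme.

    import numpy as np

    rng = np.random.default_rng(0)
    income = (rng.pareto(a=2.5, size=500) + 1.0) * 1_000     # synthetic Pareto incomes

    n = len(income)
    m = int(n ** (2 / 3))                                    # subsample size, m < n
    B = 2_000

    # resample m observations (with replacement) B times and record the statistic
    boot_means = np.array([rng.choice(income, size=m, replace=True).mean()
                           for _ in range(B)])

    # rescale the spread from sample size m back to sample size n
    se_moon = boot_means.std(ddof=1) * np.sqrt(m / n)
    print(f"moon bootstrap SE of the mean (m={m}, n={n}): {se_moon:.2f}")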

Keywords: Bootstrap, income data, lognormal distribution, Pareto distribution

Procedia PDF Downloads 161
3629 A New Type Safety-Door for Earthquake Disaster Prevention: Part I

Authors: Daniel Y. Abebe, Jaehyouk Choi

Abstract:

In past earthquake events, many people were hurt at exits while trying to get out of buildings because the exit doors could not be opened. A door cannot be opened when its frame deviates from its original position. The aim of this research is to develop and evaluate a new type of safety door that keeps the door frame in its original position, or keeps its edge angles perpendicular, during and after an earthquake. The proposed door is composed of three components: an outer frame joined to the wall, an inner frame (door frame), and a circular hollow section connecting the inner and outer frames, which is used as a seismic energy dissipating device.

Keywords: safety-door, earthquake disaster, low yield point steel, passive energy dissipating device, FE analysis

Procedia PDF Downloads 502
3628 Compensatory Neuro-Fuzzy Inference (CNFI) Controller for Bilateral Teleoperation

Authors: R. Mellah, R. Toumi

Abstract:

This paper presents a new adaptive neuro-fuzzy controller equipped with compensatory fuzzy control (CNFI) in order not only to adjust membership functions but also to optimize the adaptive reasoning by using a compensatory learning algorithm. The proposed control structure includes two CNFI controllers: one controls the master robot in force, and the other controls the slave robot in position. The experimental results obtained show fairly high accuracy in terms of position and force tracking under free-space motion and hard-contact motion, which highlights the effectiveness of the proposed controllers.

Keywords: compensatory fuzzy control, neuro-fuzzy, adaptive control, teleoperation

Procedia PDF Downloads 299
3627 Comparison between Separable and Irreducible Goppa Code in McEliece Cryptosystem

Authors: Newroz Nooralddin Abdulrazaq, Thuraya Mahmood Qaradaghi

Abstract:

The McEliece cryptosystem is an asymmetric type of cryptography based on error correcting codes. The classical McEliece scheme uses an irreducible binary Goppa code, which has been considered unbreakable until now, especially with the parameters [1024, 524, 101], but it suffers from a large public key matrix, which makes it difficult to use practically. In this work, irreducible and separable Goppa codes are introduced. The irreducible and separable Goppa codes used have flexible parameters and dynamic error vectors. A comparison between separable and irreducible Goppa codes in the McEliece cryptosystem has been carried out. For the encryption stage, to obtain a better basis for comparison, two types of testing were chosen: in the first, the random message is kept constant while the parameters of the Goppa code are changed; in the second, the parameters of the Goppa code are kept constant (m=8 and t=10) while the random message is changed. The results show that the time needed to calculate the parity check matrix in the separable case is higher than in the irreducible McEliece cryptosystem, which is expected because an extra parity check matrix for g²(z) must be calculated in the decryption process for the separable type, whereas the time needed to execute the error locator in the decryption stage is better for the separable type than for the irreducible type. The proposed implementation was done in Visual Studio using C#.

Keywords: McEliece cryptosystem, Goppa code, separable, irreducible

Procedia PDF Downloads 239
3626 Exploration of Abuse of Position for Sexual Gain by UK Police

Authors: Terri Cole, Fay Sweeting

Abstract:

Abuse of position for sexual gain by police is defined as behavior in which individuals take advantage of their role to pursue a sexual or otherwise improper relationship. Previous research has considered whether it involves ‘bad apples’ (individuals with a poor moral ethos) or ‘bad barrels’ (broader organizational flaws which may unconsciously allow, minimize, or fail to deal effectively with such behavior). Low-level sexual misconduct (e.g., consensual sex on duty) is more common than more serious offences (e.g., rape), yet the impact of such behavior can have severe implications not only for those involved but can also undermine public confidence in the police. This ongoing, collaborative research project has identified variables from 514 historic case files from 35 UK police forces in order to identify potential risk indicators which may lead to such behavior. Quantitative analysis using logistic regression and the Cox proportional hazards model has resulted in the identification of specific risk factors of significance in prediction. Factors relating to the perpetrator's background, such as a history of intimate partner violence, debt, and substance misuse, coupled with in-work behavior such as misusing police systems, increase the risk. The findings provide pragmatic recommendations for those tasked with identifying potential perpetrators or investigating suspected perpetrators of misconduct.

Keywords: abuse of position, forensic psychology, misconduct, sexual abuse

Procedia PDF Downloads 170
3625 Development of Materials Based on Phosphates of NaZr2(PO4)3 with Low Thermal Expansion

Authors: V. Yu. Volgutov, A. I. Orlova, S. A. Khainakov

Abstract:

NaZr2(PO4)3 (NZP) and its structural analogues are characterized by a peculiar behavior on heating: they expand and contract differently along different crystallographic directions due to the specific arrangement of the crystal structure in these compounds. An important feature of such structures is the ability to incorporate into their structural analogues a wide variety of metal cations having different sizes and oxidation states, in different combinations and concentrations. These cations are located in different crystallographically non-equivalent positions of the octahedral-tetrahedral crystal framework as well as in the inter-framework cavities. Thus, owing to iso- and hetero-valent isomorphism of the cations (and the anions) in NZP, it becomes possible to tune the compositions and to obtain compounds with properties designed ‘on a plan’. For the design of compounds with low and ultra-low thermal expansion, including those with tailored thermal expansion properties, the following crystal-chemical principles appear promising: 1) insertion into the crystal M1 position of cations having different sizes, and 2) variation of the composition of the compounds, providing different occupation of the crystal M1 position. Following these principles, we have designed and synthesized the following NZP-type phosphate series: a) series in which the radii of the cations in the M1 crystal position were varied: Zr1/4Zr2(PO4)3 - Th1/4Zr2(PO4)3 (series I); R1/3Zr2(PO4)3, where R = Nd, Eu, Er (series II); b) series in which the occupation of the M1 crystal position was varied: Zr1/4Zr2(PO4)3 - Er1/3Zr2(PO4)3 (series III) and Zr1/4Zr2(PO4)3 - Sr1/2Zr2(PO4)3 (series IV). The thermal expansion parameters were determined over the range 25-800°C. For each series, the minimum axial coefficients of thermal expansion αa = αb and αc, and their anisotropy Δα = |αa - αc|, in 10⁻⁶ K⁻¹, were found to be as follows: -1.51, 1.07, 2.58 for Th1/4Zr2(PO4)3 (series I); -0.72, 0.10, 0.81 for Nd1/3Zr2(PO4)3 (series II); -2.78, 1.35, 4.12 for Er1/6Zr1/8Zr2(PO4)3 (series III); 2.23, 1.32, 0.91 for Sr1/2Zr2(PO4)3 (series IV). The measured tendencies of the thermal expansion of the crystals were in good agreement with the predicted ones. For one of the members of the studied phosphates, namely Th1/16Zr3/16Zr2(PO4)3, structural refinements were carried out at 25, 200, 600, and 800°C, and the dependencies of the structural parameters on temperature were determined.

Keywords: high-temperature crystallography, NaZr2(PO4)3 (NZP) analogs, structural-chemical principles, tuning thermal expansion

Procedia PDF Downloads 212
3624 Selection of Rayleigh Damping Coefficients for Seismic Response Analysis of Soil Layers

Authors: Huai-Feng Wang, Meng-Lin Lou, Ru-Lin Zhang

Abstract:

One good analysis method in seismic response analysis is direct time integration, which widely adopts Rayleigh damping. An approach is presented for the selection of Rayleigh damping coefficients to be used in seismic analyses to produce a response that is consistent with the modal damping response. In the presented approach, an expression for the error of the peak response, acquired through the complete quadratic combination (CQC) method, as a function of the Rayleigh damping coefficients is set up, and the coefficients are then obtained by minimizing this error. Two finite element models of soil layers, excited by 28 seismic waves, were used to demonstrate the feasibility and validity of the approach.
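
A simpler, classical alternative to the CQC-error minimization described above is a least-squares fit of the Rayleigh damping curve to a target modal damping ratio, sketched below; the modal frequencies and the 5% target are examples, not the paper's soil models.

    import numpy as np

    # Fit alpha, beta so that zeta(w) = alpha/(2w) + beta*w/2 best matches a
    # uniform target modal damping ratio over the frequencies of interest.
    freqs_hz = np.array([1.2, 2.8, 4.5, 7.0, 11.0])     # assumed modal frequencies
    w = 2 * np.pi * freqs_hz
    zeta_target = 0.05

    A = np.column_stack([1.0 / (2.0 * w), w / 2.0])
    alpha, beta = np.linalg.lstsq(A, np.full_like(w, zeta_target), rcond=None)[0]

    print(f"alpha = {alpha:.4f} 1/s, beta = {beta:.6f} s")
    print("resulting damping ratios:", np.round(A @ np.array([alpha, beta]), 4))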

Keywords: Rayleigh damping, modal damping, damping coefficients, seismic response analysis

Procedia PDF Downloads 416
3623 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria

Authors: Isaac Kayode Ogunlade

Abstract:

Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ is designed using a PIC18F4550 microcontroller communicating with a personal computer (PC) through USB (Universal Serial Bus). The research applied knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device that uses an LM35 sensor to measure weather parameters, together with an artificial intelligence approach (Artificial Neural Network, ANN) and a statistical approach (Autoregressive Integrated Moving Average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) to evaluate its performance. Both devices (standard and designed) were operated for 180 days under the same atmospheric conditions to collect data (temperature, relative humidity, and pressure). The acquired data were used to train models in the MATLAB R2012b environment using ANN and ARIMA to predict precipitation (rainfall). The Root Mean Square Error (RMSE), Mean Absolute Error (MAE), R-squared (R²), and Mean Percentage Error (MPE) were used as standard metrics to evaluate the performance of the models in predicting precipitation. The results show that the developed device has an efficiency of 96% and is compatible with personal computers and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) had an error of 1.59%, while that of ARIMA was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.
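
The ARIMA half of the prediction step could be sketched as follows in Python; the (1, 1, 1) order, file name, column name, and 30-day hold-out are illustrative assumptions, not the study's MATLAB workflow.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # assumed daily series collected by the DAQ device; the file name is hypothetical
    rainfall = pd.read_csv("futa_weather.csv", index_col=0, parse_dates=True)["rainfall"]
    train, test = rainfall[:-30], rainfall[-30:]

    model = ARIMA(train, order=(1, 1, 1)).fit()
    forecast = model.forecast(steps=len(test))

    rmse = np.sqrt(np.mean((test.values - forecast.values) ** 2))
    mae = np.mean(np.abs(test.values - forecast.values))
    print(f"RMSE = {rmse:.3f}, MAE = {mae:.3f}")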

Keywords: data acquisition system, device design, weather data, precipitation prediction, FUTA standard device

Procedia PDF Downloads 66
3622 Body Shape Control of Magnetic Soft Continuum Robots with PID Controller

Authors: M. H. Korayem, N. Sangsefidi

Abstract:

Magnetically guided soft robots have emerged as a promising technology in minimally invasive surgery due to their ability to adapt to complex environments. However, one of the main challenges in this field is damage to the vascular structure caused by unwanted stress on the vessel wall and deformation of the vessel due to improper control of the shape of the robot body during surgery. Therefore, this article proposes an approach for controlling the shape of a magnetic soft continuum robot body using a PID controller. The magnetic soft continuum robot is modeled using Cosserat theory in the static regime and solved numerically. The designed controller adjusts the position of each part of the robot to match the desired shape. The PID controller is used to minimize the robot's contact with the vessel wall and prevent unwanted vessel deformation. The simulation results confirmed the accuracy of the numerical solution of the static Cosserat model. They also showed the effectiveness of the proposed shape-control method in achieving the desired shape, with a maximum error of about 0.3 millimeters.
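
A minimal discrete PID sketch of the shape-control idea is given below; the gains, time step, and the first-order surrogate used in place of the static Cosserat model are illustrative assumptions only.

    import numpy as np

    kp, ki, kd, dt = 2.0, 0.5, 0.1, 0.01
    desired = np.linspace(0.0, 10.0, 20)          # desired shape (mm) along the body
    position = np.zeros_like(desired)             # current position of each segment
    integral = np.zeros_like(desired)
    prev_err = desired - position

    for _ in range(500):
        err = desired - position
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * derivative    # per-segment actuation command
        position += u * dt                                # surrogate first-order response
        prev_err = err

    print(f"max shape error: {np.abs(desired - position).max():.4f} mm")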

Keywords: PID, magnetic soft continuum robot, soft robot shape control, Cosserat theory, minimally invasive surgery

Procedia PDF Downloads 56
3621 Hybrid Localization Schemes for Wireless Sensor Networks

Authors: Fatima Babar, Majid I. Khan, Malik Najmus Saqib, Muhammad Tahir

Abstract:

This article provides range-based improvements over a well-known single-hop range-free localization scheme, Approximate Point in Triangulation (APIT), by proposing an energy-efficient barycentric-coordinate-based Point-In-Triangulation (PIT) test along with PIT-based trilateration. These improvements result in energy efficiency, reduced localization error, and improved localization coverage compared to APIT and its variants. Moreover, we propose embedding Received Signal Strength Indication (RSSI) based distance estimation in DV-Hop, which is a multi-hop localization scheme. The proposed localization algorithm achieves energy efficiency and reduced localization error compared to DV-Hop and its available improvements. Furthermore, a hybrid multi-hop localization scheme is also proposed that utilizes the barycentric-coordinate-based PIT test and both range-based (received signal strength indicator) and range-free (hop count) techniques for distance estimation. Our experimental results provide evidence that the proposed hybrid multi-hop localization scheme results in a two- to five-fold reduction in localization error compared to DV-Hop and its variants, with reduced energy requirements.
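
For reference, the barycentric-coordinate point-in-triangle test used as a building block above can be written as follows; the anchor coordinates in the example are arbitrary.

    import numpy as np

    def in_triangle(p, a, b, c, eps=1e-12):
        """Barycentric test: p lies inside (or on the border of) triangle abc
        iff all three barycentric coordinates are non-negative."""
        p, a, b, c = (np.asarray(v, float) for v in (p, a, b, c))
        v0, v1, v2 = b - a, c - a, p - a
        d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
        d20, d21 = v2 @ v0, v2 @ v1
        denom = d00 * d11 - d01 * d01
        if abs(denom) < eps:
            return False                   # degenerate triangle
        v = (d11 * d20 - d01 * d21) / denom
        w = (d00 * d21 - d01 * d20) / denom
        u = 1.0 - v - w
        return u >= -eps and v >= -eps and w >= -eps

    # example: an unknown node tested against a triangle of anchor nodes
    print(in_triangle((2.0, 1.0), (0.0, 0.0), (5.0, 0.0), (0.0, 4.0)))   # True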

Keywords: localization, trilateration, triangulation, wireless sensor networks

Procedia PDF Downloads 443
3620 New HCI Design Process Education

Authors: Jongwan Kim

Abstract:

Human Computer Interaction (HCI) is a subject covering the study, planning, and design of interactions between humans and computers. The prevalent use of digital mobile devices is increasing the need for education and research on HCI. This work focuses on a new education method geared toward reducing errors while developing application programs, incorporating role-changing brainstorming techniques during the HCI design process. The proposed method was applied to a capstone design course in the last spring semester. Students discovered examples of UI design improvements, and their ability to discover and reduce errors was promoted. A UI design improvement, PC voice control for people with disabilities as an assistive technology example, will be presented. The improvement of these students' design ability will be helpful in real field work.

Keywords: HCI, design process, error reducing education, role-changing brainstorming, assistive technology

Procedia PDF Downloads 469
3619 Income-Consumption Relationships in Pakistan (1980-2011): A Cointegration Approach

Authors: Himayatullah Khan, Alena Fedorova

Abstract:

The present paper analyses the income-consumption relationship in Pakistan using annual time series data from 1980-81 to 2010-11. The paper uses the Augmented Dickey-Fuller test to check for unit roots and stationarity in the two time series. The paper finds that the two series are nonstationary but stationary at their first differences. The Augmented Engle-Granger test and the Cointegrating Regression Durbin-Watson test imply that the consumption and income series are cointegrated and that the long-run marginal propensity to consume is 0.88, as given by the estimated (static) equilibrium relation. The paper also uses an error correction mechanism (ECM) to model the dynamic relationship. The purpose of the ECM is to indicate the speed of adjustment from the short-run equilibrium to the long-run equilibrium state. The results show that the MPC is equal to 0.93 and is highly significant. The coefficient of the Engle-Granger residuals is negative but insignificant. Statistically, the equilibrium error term is zero, which suggests that consumption adjusts to changes in GDP within the same period. Short-run changes in GDP have a positive impact on short-run changes in consumption. The paper concludes that 0.93 may be interpreted as the short-run MPC. The pairwise Granger causality test shows that GDP and consumption Granger-cause each other.
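
The unit-root, cointegration, and ECM sequence described above might be reproduced along the following lines; the file and column names are placeholders for the annual consumption and GDP series.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import adfuller, coint

    df = pd.read_csv("pakistan_income_consumption.csv", index_col=0)
    cons, gdp = df["consumption"], df["gdp"]

    print("ADF p-values (levels):   ", adfuller(cons)[1], adfuller(gdp)[1])
    print("ADF p-values (1st diff): ", adfuller(cons.diff().dropna())[1],
          adfuller(gdp.diff().dropna())[1])
    print("Engle-Granger coint p-value:", coint(cons, gdp)[1])

    # static long-run relation; its slope is the long-run MPC
    long_run = sm.OLS(cons, sm.add_constant(gdp)).fit()
    resid = long_run.resid

    # ECM: short-run dynamics with the lagged equilibrium error as a regressor
    X = sm.add_constant(pd.DataFrame({"d_gdp": gdp.diff(), "ect": resid.shift(1)})).dropna()
    ecm = sm.OLS(cons.diff().loc[X.index], X).fit()
    print(ecm.summary())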

Keywords: cointegrating regression, Augmented Dickey Fuller test, Augmented Engle-Granger test, Granger causality, error correction mechanism

Procedia PDF Downloads 390
3618 Effects of Canned Cycles and Cutting Parameters on Hole Quality in Cryogenic Drilling of Aluminum 6061-T6

Authors: M. N. Islam, B. Boswell, Y. R. Ginting

Abstract:

The influence of canned cycles and cutting parameters on hole quality in cryogenic drilling has been investigated experimentally and analytically. A three-level, three-parameter experiment was conducted using the design-of-experiments methodology. The three levels of the independent input parameters were as follows: for canned cycles, a chip-breaking canned cycle (G73), a spot drilling canned cycle (G81), and a deep hole canned cycle (G83); for feed rate, 0.2, 0.3, and 0.4 mm/rev; and for cutting speed, 60, 75, and 100 m/min. The selected work and tool materials were aluminum 6061-T6 and high-speed steel (HSS), respectively. For cryogenic cooling, liquid nitrogen (LN2) was used and applied externally. The measured output parameters were three widely used quality characteristics of drilled holes: diameter error, circularity, and surface roughness. Pareto ANOVA was applied to analyze the results. The findings revealed that the canned cycle has a significant effect on diameter error (contribution ratio 44.09%) and small effects on circularity and surface finish (contribution ratios 7.25% and 6.60%, respectively). The best results for dimensional accuracy and surface roughness were achieved with G81. G73 produced the best circularity results; however, it was the worst level for dimensional accuracy.

Keywords: circularity, diameter error, drilling canned cycle, Pareto ANOVA, surface roughness

Procedia PDF Downloads 261
3617 Core Stability Training and the Young Para-Swimmers’ Results on 50 Meters and 100 Meters Freestyle

Authors: Ninomyslaw Jakubczyk, Anna Zwierzchowska, Adam Maszczyk

Abstract:

Background: Central stabilisation training aims to improve neuromuscular coordination. It is used for injury prevention and as a complement to the swimmers' training process. The aim of the study was to assess the impact of this training on the results achieved by disabled swimmers over 50 and 100 meters freestyle. Material/Method: 20 competitors with similar dysfunctions of the musculoskeletal system, randomly assigned to the experimental and control groups, participated in the study. Each group consisted of 7 swimmers who started competitions from the standing start position and 3 who started from the water. The study included a 4-week set of stabilization exercises, performed 4 times a week in place of the pulling-by-legs sets. Exercises were performed in specialist swimming conditions and involved controlled circuit muscle movements while maintaining a stable floating position in the water. Results: All groups improved their 'best times' except the swimmers in the control group who started from the standing position. There were no significant inter-group or intra-group differences in results, at either 50 or 100 meters freestyle. Conclusions: Greater improvements were noted in the experimental group, but this effect cannot be attributed conclusively to the 4-week stabilisation training. However, this investigation might suggest that this type of training could be beneficial for junior disabled swimmers.

Keywords: athletes, swimming, trunk exercises, youth

Procedia PDF Downloads 137
3616 Estimation of Residual Stresses in Thick Walled Cylinder by Radial Basis Artificial Neural Network

Authors: Mohammad Heidari

Abstract:

In this paper, a method for estimating residual stresses in autofrettaged high-strength steel tubes using artificial neural networks is presented. Many different thick-walled cylinders subjected to different conditions were studied. First, the residual stress is calculated by an analytical solution. Then, by varying the parameters that influence the residual stresses, such as the percentage of autofrettage, internal pressure, wall ratio of the cylinder, material properties of the cylinder, and the Bauschinger and hardening effect factors, a neural network is created. These parameters are the inputs of the network, and the output of the network is the residual stress. Numerical data were employed for training the network, and the capability of the model in predicting the residual stress has been verified. The output obtained from the neural network model is compared with the numerical results, and the relative error has been calculated. Based on this verification error, it is shown that the radial basis function neural network has an average error of 2.75% in predicting the residual stress of a thick-walled cylinder. Further analysis of the residual stress of thick-walled cylinders under different input conditions has been carried out, and the comparison of the modeling results with the numerical ones shows good agreement, which also proves the feasibility and effectiveness of the adopted approach.
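
A minimal radial-basis-function regression sketch of the surrogate idea is given below; the synthetic one-dimensional autofrettage-to-stress mapping, the Gaussian width, and the centre placement are illustrative assumptions, not the paper's trained model.

    import numpy as np

    # Gaussian RBF regression: basis functions centred on the training points,
    # output weights fitted by linear least squares.
    rng = np.random.default_rng(0)
    x_train = np.linspace(20, 80, 25)                       # autofrettage percentage
    y_train = -300 + 4 * x_train - 0.03 * x_train**2        # stand-in residual stress (MPa)

    centers, sigma = x_train, 6.0

    def design(x):
        # one Gaussian RBF feature column per centre
        return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma**2))

    weights, *_ = np.linalg.lstsq(design(x_train), y_train, rcond=None)

    x_test = np.array([35.0, 55.0, 72.5])
    print(np.round(design(x_test) @ weights, 1))            # predicted residual stresses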

Keywords: thick walled cylinder, residual stress, radial basis, artificial neural network

Procedia PDF Downloads 389
3615 The Relationship between Public Relations and Media Relations: The Case of Hotel Enterprises

Authors: Burcu Oksuz, Volkan Altıintas, Zulfiye Acar Senturk

Abstract:

Though the academic literature emphasizes that Public Relations (PR) should not be seen only as media relations, in practice media relations holds a very dominant position in the communication activities carried out by many companies. There are many PR practitioners with a journalism background; however, the number of practitioners who started working in the sector after receiving PR education at universities has been increasing rapidly. Therefore, it can be said that the former dominance of journalists in the public relations sector in Turkey has diminished. However, because some companies and practitioners consider media coverage the first priority of PR, it is certain that the dominant position of media relations continues. On the other hand, many companies still measure the success of their PR by how much media coverage their companies have received. This situation creates major pressure on PR practitioners to maintain close relations with media members and to persuade them to write articles about their companies. Consequently, PR practitioners have to devote time to media relations, and media relations comes to the fore more than other PR functions. The aim of this study is to reveal the PR functions at companies and to evaluate the position of media relations in PR work. It is therefore aimed to find out to what extent the discourse that 'public relations is not media relations' is accepted and actualized in practice. Accordingly, research on 15 hotel enterprises located in the city of İzmir will be carried out. İzmir, as one of the most important tourism destinations, has many hotels. The PR/corporate communications managers will be interviewed in depth within the scope of this study, and the PR functions performed by the hotels will be discussed in detail in light of the data obtained.

Keywords: media relations, public relations, public relations practitioners, Turkey

Procedia PDF Downloads 359