Search results for: distance measurement error
5868 User Experience Measurement of User Interfaces
Authors: Mohammad Hashemi, John Herbert
Abstract:
Quantifying and measuring Quality of Experience (QoE) are important and difficult concerns in Human Computer Interaction (HCI). Quality of Service (QoS) and the actual User Interface (UI) of the application are both important contributors to the QoE of a user. This paper describes a framework that accurately measures the way a user uses the UI in order to model users' behaviours and profiles. It monitors the use of the mouse and of UI elements with accurate time measurement. It does this in real time, unobtrusively and efficiently, allowing the user to work as normal with the application. This accurate real-time measurement of the user's interaction provides valuable data and insight into the use of the UI, and is also the basis for analysis of the user's QoE.
Keywords: user modelling, user interface experience, quality of experience, user experience, human and computer interaction
Procedia PDF Downloads 503
5867 Direct Current Electric Field Stimulation against PC12 Cells in 3D Bio-Reactor to Enhance Axonal Extension
Authors: E. Nakamachi, S. Tanaka, K. Yamamoto, Y. Morita
Abstract:
In this study, we developed a three-dimensional (3D) direct current electric field (DCEF) stimulation bio-reactor for axonal outgrowth enhancement to generate the neural network of the central nervous system (CNS). Using our newly developed 3D DCEF stimulation bio-reactor, we cultured rat pheochromocytoma (PC12) cells and investigated the effects on axonal extension enhancement and network generation. Firstly, we designed and fabricated a 3D bio-reactor that can apply DCEF stimulation to PC12 cells embedded in collagen gel as the extracellular environment. The electrolyte and the medium were connected through salt bridges for DCEF stimulation to avoid cell death caused by metal ion toxicity. The distance between the salt bridges was adopted as the design variable to optimize the structure for uniform DCEF stimulation, using finite element (FE) analysis results. Uniform DCEF strength and electric flux vector direction in the PC12 cells embedded in collagen gel were examined through measurements of the fabricated 3D bio-reactor chamber. The measured DCEF strength in the bio-reactor showed good agreement with the FE results. In addition, a perfusion system was attached to maintain the medium at pH 7.2-7.6, because DCEF stimulation loading causes pH changes. Secondly, we disseminated PC12 cells in collagen gel and carried out 3D culture. Finally, we measured the morphology of PC12 cell bodies and neurites with a multiphoton excitation fluorescence microscope (MPM), and the effectiveness of DCEF stimulation in enhancing axonal outgrowth and neural network generation was investigated. We confirmed both an increase in the mean axonal length and in the axogenesis rate of PC12 cells exposed to 5 mV/mm for 6 hours a day for 4 days in the bio-reactor. We reached the following conclusions. 1) Design and fabrication of a DCEF stimulation bio-reactor capable of 3D nerve cell culture were completed. A uniform electric field strength with an average value of 17 mV/mm within a 1.2% error range was confirmed using FE analyses, after the structure was determined through the optimization process. In addition, we attached a perfusion system capable of suppressing the pH change of the culture solution due to DCEF stimulation loading. 2) Evaluation of DCEF stimulation effects on PC12 cell activity was executed. The 3D culture of PC12 cells was carried out using the embedding culture method with collagen gel as a scaffold for four days under conditions of 5.0 mV/mm and 10 mV/mm. There was a significant effect on the enhancement of axonal extension, with an 11.3% increase in average length, and an increase in the axogenesis rate. On the other hand, no effect on the orientation of axons with respect to the DCEF flux direction was observed. Further, network generation was enhanced, connecting neighbouring target cells over longer distances under DCEF stimulation.
Keywords: PC12, DCEF stimulation, 3D bio-reactor, axonal extension, neural network generation
Procedia PDF Downloads 184
5866 Modelling Volatility of Cryptocurrencies: Evidence from GARCH Family of Models with Skewed Error Innovation Distributions
Authors: Timothy Kayode Samson, Adedoyin Isola Lawal
Abstract:
The past five years have shown a sharp increase in public interest in the crypto market, with its market capitalization growing from $100 billion in June 2017 to $2158.42 billion on April 5, 2022. Despite the outrageous volatility of cryptocurrencies, the use of skewed error innovation distributions in modelling the volatility behaviour of these digital currencies has not been given much research attention. Hence, this study models the volatility of the 5 largest cryptocurrencies by market capitalization (Bitcoin, Ethereum, Tether, Binance coin, and USD Coin) using four variants of GARCH models (GJR-GARCH, sGARCH, EGARCH, and APARCH) estimated with three skewed error innovation distributions (skewed normal, skewed Student-t, and skewed generalized error distributions). Daily closing prices of these currencies were obtained from the Yahoo Finance website. Findings reveal that Binance coin reported higher mean returns compared to the other digital currencies, while the skewness indicates that Binance coin, Tether, and USD coin increased more than they decreased in value within the period of study. For both Bitcoin and Ethereum, negative skewness was obtained, meaning that within the period of study, the returns of these currencies decreased more than they increased in value. Returns from these cryptocurrencies were found to be stationary but not normally distributed, with evidence of the ARCH effect. The skewness parameters in all best forecasting models were significant (p<.05), justifying the use of skewed error innovation distributions with fatter tails than the normal, Student-t, and generalized error innovation distributions. For Binance coin, EGARCH-sstd outperformed the other volatility models, while for Bitcoin, Ethereum, Tether, and USD coin, the best forecasting models were EGARCH-sstd, APARCH-sstd, EGARCH-sged, and GJR-GARCH-sstd, respectively. This suggests the superiority of the skewed Student-t distribution and the skewed generalized error distribution over the skewed normal distribution.
Keywords: skewed generalized error distribution, skewed normal distribution, skewed Student-t distribution, APARCH, EGARCH, sGARCH, GJR-GARCH
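As an illustration of the modelling approach described above, the following is a minimal sketch of fitting one of the compared configurations (EGARCH with skewed Student-t innovations) in Python with the arch package. The package choice, the input file name and all settings are assumptions for illustration; the abstract does not state which software the authors used.

```python
# Minimal sketch (not the authors' code): fitting an EGARCH model with a skewed
# Student-t error distribution to daily log returns, one configuration from the
# GARCH family compared in the abstract.
import numpy as np
import pandas as pd
from arch import arch_model

# Hypothetical input: a CSV of daily closing prices with a 'Close' column.
prices = pd.read_csv("btc_daily_close.csv", index_col=0, parse_dates=True)["Close"]
returns = 100 * np.log(prices).diff().dropna()   # per cent log returns

# EGARCH(1,1) with skewed Student-t innovations ('skewt' in the arch package).
model = arch_model(returns, mean="Constant", vol="EGARCH", p=1, o=1, q=1, dist="skewt")
result = model.fit(disp="off")

print(result.summary())                  # includes the skew and tail parameters
forecast = result.forecast(horizon=5)    # 5-day-ahead conditional variance forecast
print(forecast.variance.iloc[-1])
```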
Procedia PDF Downloads 119
5865 Comparison of Spiral Circular Coil and Helical Coil Structures for Wireless Power Transfer System
Authors: Zhang Kehan, Du Luona
Abstract:
Wireless power transfer (WPT) systems have been widely investigated for their advantages of convenience and safety compared to traditional plug-in charging systems. Research topics include impedance matching, circuit topology, and transfer distance, among others, all aimed at improving the efficiency of the WPT system, which is a decisive factor in practical applications. Moreover, coil structures such as spiral circular coils and helical coils with variable distance between two turns also have an indispensable effect on the efficiency of WPT systems. This paper compares the efficiency of WPT systems utilizing spiral or helical coils with variable distance between two turns, and experimental results show that the efficiency of a spiral circular coil with an optimum distance between two turns is the highest. Following the efficiency formula of a resonant WPT system with series-series topology, we introduce M²/R₁ to measure the efficiency of the spiral circular coil and helical coil WPT systems. If the distance between two turns s is too small, proximity effect theory shows that the current induced in the conductor, caused by the variable flux created by the current flowing in the skin of the neighbouring conductor, is opposite in direction to the source current and has an appreciable impact on coil resistance. Thus, in both coil structures, s affects the coil resistance. At the same time, when the distance between the primary and secondary coils is fixed, s also influences M to some degree. The aforementioned considerations show that s plays an indispensable role in changing M²/R₁ and can therefore be adjusted to find the optimum value at which the WPT system achieves the highest efficiency. In practical applications of WPT systems, especially in underwater vehicles, miniaturization is a vital issue in designing WPT system structures. Limited by the system size, the largest external radius of the spiral circular coil is 100 mm, and the largest height of the helical coil is 40 mm. In other words, the number of turns N changes with s. In both the spiral circular and helical structures, the distance between each two turns in the secondary coil is set to a constant value of 1 mm to guarantee that R₂ is not variable. Based on the analysis above, we set up spiral circular coil and helical coil models using COMSOL to analyze the value of M²/R₁ when the distance between each two turns in the primary coil, sₚ, varies from 0 mm to 10 mm. In the two structure models, the distance between the primary and secondary coils is 50 mm and the wire diameter is chosen as 1.5 mm. The number of turns in the secondary coil is 27 in the helical coil model and 20 in the spiral circular coil model. The best values of s in the helical coil structure and the spiral circular coil structure are 1 mm and 2 mm respectively, at which the value of M²/R₁ is the largest. The spiral circular coil is therefore clearly the first choice when designing the WPT system, since the value of M²/R₁ for the spiral circular coil is larger than that for the helical coil under the same conditions.
Keywords: distance between two turns, helical coil, spiral circular coil, wireless power transfer
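To illustrate how the figure of merit M²/R₁ can be used to select the turn spacing, here is a minimal numeric sketch. The (M, R₁) values are hypothetical placeholders for results that would come from the COMSOL models or measurements described above.

```python
# Minimal sketch: selecting the turn spacing s that maximizes the figure of merit
# M^2 / R1 discussed in the abstract. The (M, R1) pairs below are hypothetical
# placeholders for values that would come from COMSOL simulation or measurement.
candidates = {
    # s [mm]: (mutual inductance M [uH], primary-coil AC resistance R1 [ohm])
    1.0: (2.10, 0.52),
    2.0: (2.05, 0.44),
    3.0: (1.95, 0.41),
    4.0: (1.82, 0.40),
}

def figure_of_merit(M_uH: float, R1_ohm: float) -> float:
    """M^2 / R1, proportional to link efficiency in a series-series resonant WPT system."""
    return (M_uH ** 2) / R1_ohm

best_s = max(candidates, key=lambda s: figure_of_merit(*candidates[s]))
for s, (M, R1) in sorted(candidates.items()):
    print(f"s = {s:.1f} mm -> M^2/R1 = {figure_of_merit(M, R1):.3f}")
print(f"optimum spacing: {best_s:.1f} mm")
```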
Procedia PDF Downloads 345
5864 Influence of the Seat Arrangement in Public Reading Spaces on Individual Subjective Perceptions
Authors: Jo-Han Chang, Chung-Jung Wu
Abstract:
This study involves a design proposal. The objective is to create a seat arrangement model for public reading spaces that enables free arrangement without disturbing the users. Through a subjective perception scale, this study explored whether the distance between seats and the direction of seats influence individual subjective perceptions in a public reading space. This study also involves analysis of user subjective perceptions when reading in settings with 3 seat directions and 5 distances between seats. The results may be applied to public chair design. This study investigated (a) whether different directions of seats and distances between seats influence individual subjective perceptions and (b) the acceptable personal space between 2 strangers in a public reading space. The results are as follows: (a) the directions of seats and distances between seats influenced individual subjective perceptions. (b) Subjective evaluation scores were higher for back-to-back seat directions with Distances A (10 cm) and B (62 cm) compared with face-to-face and side-by-side seat directions; however, when the seat distance exceeded 114 cm (Distance C), no difference existed among the directions of seats. (c) Regarding reading in public spaces, when the distance between seats is only 10 cm, we recommend arranging the seats in a back-to-back fashion to increase user comfort, and face-to-face and side-by-side seat directions should be avoided. When the seat arrangement is limited to a face-to-face design, the distance between seats should be increased to at least 62 cm. Moreover, the distance between seats should be increased to at least 114 cm for side-by-side seats to elevate user comfort.
Keywords: individual subjective perceptions, personal space, seat arrangement, direction, distances
Procedia PDF Downloads 427
5863 Distance and Coverage: An Assessment of Location-Allocation Models for Fire Stations in Kuwait City, Kuwait
Authors: Saad M. Algharib
Abstract:
The major concern of planners when placing fire stations is finding their optimal locations such that the fire companies can reach fire locations within a reasonable response time or distance. Planners are also concerned with the number of fire stations needed to cover all service areas and the fires, as demands, within a standard response time or distance. One of the tools for such analysis is location-allocation modelling. Location-allocation models enable planners to determine the optimal locations of facilities in an area in order to serve regional demands in the most efficient way. The purpose of this study is to examine the geographic distribution of the existing fire stations in Kuwait City. This study utilized location-allocation models within a Geographic Information System (GIS) environment and a number of statistical functions to assess the current locations of fire stations in Kuwait City. Further, this study investigated how well all service areas are covered and how many additional fire stations are needed and where. Four different location-allocation models were compared to find which models cover more demand than the others, given the same number of fire stations. This study tests ways of combining variables, instead of using one variable at a time, when applying these models, in order to create a new measure that influences the optimal locations for fire stations. This study also tests how sensitive location-allocation models are to different levels of spatial dependency. The results indicate that some districts in Kuwait City are not covered by the existing fire stations, and these uncovered districts are clustered together. This study also identifies where to locate the new fire stations. This study provides users of these models with a new variable that can assist them in selecting the best locations for fire stations. The results include information about how the location-allocation models behave in response to different levels of spatial dependency of demands. The results show that these models perform better with clustered demands. From the additional analysis carried out in this study, it can be concluded that these models behave differently under different spatial patterns.
Keywords: geographic information science, GIS, location-allocation models, geography
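As an illustration of the kind of model being compared, the following is a minimal sketch of a maximal covering location problem (one common member of the location-allocation family) solved with the PuLP library. The demand weights, candidate sites, coverage sets and station budget are all hypothetical; the study itself applies these models within a GIS environment.

```python
# Minimal sketch of a maximal covering location problem (MCLP), one member of the
# location-allocation family compared in the abstract. Demand points, candidate
# sites, coverage sets and the p=2 station budget are hypothetical.
import pulp

demands = {"D1": 40, "D2": 25, "D3": 60, "D4": 15}          # demand weights (e.g., incidents)
sites = ["S1", "S2", "S3"]                                   # candidate fire-station sites
covers = {                                                    # sites within the response-time standard
    "D1": ["S1"], "D2": ["S1", "S2"], "D3": ["S2", "S3"], "D4": ["S3"],
}
p = 2                                                         # number of stations to open

prob = pulp.LpProblem("MCLP", pulp.LpMaximize)
x = pulp.LpVariable.dicts("open", sites, cat="Binary")        # 1 if a station is built at site j
y = pulp.LpVariable.dicts("covered", demands, cat="Binary")   # 1 if demand i is covered

prob += pulp.lpSum(demands[i] * y[i] for i in demands)        # maximize covered demand
prob += pulp.lpSum(x[j] for j in sites) == p                  # open exactly p stations
for i in demands:                                             # a demand is covered only by an open nearby site
    prob += y[i] <= pulp.lpSum(x[j] for j in covers[i])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("open:", [j for j in sites if x[j].value() == 1])
print("covered demand:", pulp.value(prob.objective))
```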
Procedia PDF Downloads 177
5862 On Constructing a Cubically Convergent Numerical Method for Multiple Roots
Authors: Young Hee Geum
Abstract:
We propose the numerical method defined by xₙ₊₁ = xₙ − λ f(xₙ − μh(xₙ)) / f'(xₙ), n ∈ N, and determine the control parameters λ and μ so that it converges cubically. In addition, we derive the asymptotic error constant. Applying the proposed scheme to various test functions, the numerical results show good agreement with the theory analyzed in this paper and are verified using Mathematica with its high-precision computation capability.
Keywords: asymptotic error constant, iterative method, multiple root, root-finding
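A hedged sketch of the scheme is given below. The abstract does not specify h(x) or the optimal λ and μ; the choice h(x) = f(x)/f'(x) and the parameter values used here are assumptions for illustration only, and the snippet simply shows how the iteration and a computational order of convergence could be checked on a test function with a multiple root.

```python
# Minimal sketch (not the authors' scheme verbatim): iterating
#   x_{n+1} = x_n - lambda * f(x_n - mu * h(x_n)) / f'(x_n)
# and estimating the computational order of convergence on a test function with a
# multiple root. The choice h(x) = f(x)/f'(x) and the values of lambda and mu below
# are illustrative guesses, not the optimal values derived in the paper.
import math

def f(x):  return (x - 1.0) ** 3 * math.exp(x)      # root of multiplicity 3 at x = 1
def df(x): return (x - 1.0) ** 2 * math.exp(x) * (x + 2.0)

def iterate(x0, lam, mu, tol=1e-14, max_iter=50):
    xs = [x0]
    for _ in range(max_iter):
        x = xs[-1]
        h = f(x) / df(x)                             # assumed form of h(x)
        x_new = x - lam * f(x - mu * h) / df(x)
        xs.append(x_new)
        if abs(x_new - x) < tol:
            break
    return xs

xs = iterate(x0=1.5, lam=3.0, mu=1.0)                # illustrative parameter guesses
root = 1.0
errs = [abs(x - root) for x in xs if abs(x - root) > 0]
# computational order of convergence: p ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1})
for n in range(2, len(errs)):
    p = math.log(errs[n] / errs[n - 1]) / math.log(errs[n - 1] / errs[n - 2])
    print(f"iter {n}: error = {errs[n]:.3e}, estimated order ~ {p:.2f}")
```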
Procedia PDF Downloads 220
5861 SSRUIC Students’ Attitude and Preference toward Error Corrections
Authors: Papitchaya Papangkorn
Abstract:
Matching the expectations of teachers and learners is significant for successful language learning. Moreover, teachers should discover what their learners think and feel about what and how they want to learn. Therefore, this study investigates the preferences of International College, Suan Sunandha Rajabhat University (SSRUIC) students toward error correction in order to help SSRUIC teachers match their expectations with those of their learners, because this is important for successful language learning. This study examined learners' attitudes and preferences toward error correction through 50 first-year SSRUIC students, 25 male and 25 female, in Bangkok, Thailand. The data were collected from a questionnaire and interviews to investigate the necessity and frequency, timing, type of errors, method of corrective feedback, and the person who gives the error correction, in order to answer the overall research question and sub-questions. The findings indicate five suggestions regarding the overall research question. Firstly, errors should be treated, and always be treated. Secondly, treating errors after the speaker finishes speaking is the most appropriate time. Thirdly, "errors that may cause problems in the listener's understanding" and "frequent spoken errors" should be treated. Fourthly, repetition and explicit feedback were the most popular types of feedback among males, whereas metalinguistic feedback was the most favoured type amongst females. Finally, teachers were the most preferred persons to deliver corrective feedback to the learners. Although the results of the study are difficult to generalize to a larger population, namely Thai EFL learners, because of the small sample, the findings provide useful information that may contribute to an understanding of SSRUIC learners' preferences toward error correction, and they might reduce the gap between what teachers employ and what students expect when receiving corrective feedback. The reduction of this gap may be useful for the learning process and could enhance the efforts of both teachers and learners in a Thai context.
Keywords: attitude, corrective feedback, error, preference
Procedia PDF Downloads 357
5860 Pavement Management for a Metropolitan Area: A Case Study of Montreal
Authors: Luis Amador Jimenez, Md. Shohel Amin
Abstract:
Pavement performance models are based on projections of observed traffic loads, which makes it uncertain to study funding strategies in the long run if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic developments could change traffic flows. This study addresses both issues through a case study for the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A back-propagation neural network (BPN) with a Generalized Delta Rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming for lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to keep arterial and local roads in Montreal in good condition. Montreal drivers prefer the use of public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, while ESALs are expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years; a steady state seems to be reached thereafter.
Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization
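As an illustration of the Generalized Delta Rule mentioned above (a gradient step plus a momentum term), the following is a minimal sketch of a one-hidden-layer back-propagation network. The network size, learning rate, momentum and synthetic data are assumptions; they are not the values or data calibrated in the study.

```python
# Minimal sketch of a back-propagation network trained with the Generalized Delta
# Rule (gradient step plus momentum). All sizes, rates and the synthetic data are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: three scaled inputs (e.g., pavement age, accumulated
# ESALs, structural number) and a scaled condition index as target.
X = rng.random((200, 3))
y = (1.0 - 0.5 * X[:, 1] - 0.3 * X[:, 0] + 0.05 * rng.standard_normal(200)).reshape(-1, 1)

n_in, n_hidden, n_out = 3, 6, 1
W1 = 0.5 * rng.standard_normal((n_in, n_hidden))
W2 = 0.5 * rng.standard_normal((n_hidden, n_out))
dW1_prev, dW2_prev = np.zeros_like(W1), np.zeros_like(W2)

eta, alpha = 0.3, 0.8                          # learning rate and momentum (illustrative)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    h = sigmoid(X @ W1)                        # forward pass
    out = sigmoid(h @ W2)
    err = y - out
    delta_out = err * out * (1.0 - out)        # backward pass (delta rule)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
    # generalized delta rule: weight change = eta * mean gradient + alpha * previous change
    dW2 = eta * (h.T @ delta_out) / len(X) + alpha * dW2_prev
    dW1 = eta * (X.T @ delta_hid) / len(X) + alpha * dW1_prev
    W2 += dW2
    W1 += dW1
    dW1_prev, dW2_prev = dW1, dW2

print("final training MSE:", float(np.mean(err ** 2)))
```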
Procedia PDF Downloads 460
5859 Non-Contact Measurement of Soil Deformation in a Cyclic Triaxial Test
Authors: Erica Elice Uy, Toshihiro Noda, Kentaro Nakai, Jonathan Dungca
Abstract:
Deformation in a conventional cyclic triaxial test is normally measured using a point-wise measuring device. In this study, a non-contact measurement technique was applied in order to monitor and measure the occurrence of non-homogeneous behavior of the soil under cyclic loading. Non-contact measurement is executed through image processing. Two-dimensional measurements were performed using the Lucas-Kanade optical flow algorithm, implemented in LabVIEW. In this technique, the non-homogeneous deformation was monitored using a mirrorless camera. A mirrorless camera was used because it is economical and has the capacity to take pictures at a fast rate. The camera was first calibrated to remove the distortion brought about by the lens as well as by the testing environment. Calibration was divided into 2 phases. The first phase was the calibration of the camera parameters and the distortion caused by the lens. The second phase was for eliminating the distortion brought about by the triaxial plexiglass; a correction factor was established from this phase. A series of consolidated undrained cyclic triaxial tests was performed using a coarse soil. The results from the non-contact measurement technique were compared to the deformation measured with a linear variable displacement transducer. It was observed that deformation was higher in the area where failure occurs.
Keywords: cyclic loading, non-contact measurement, non-homogeneous, optical flow
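For readers unfamiliar with the tracking step, the following is a minimal sketch of Lucas-Kanade optical flow between two frames using OpenCV; the study's own implementation is in LabVIEW. File names, feature-detection settings and the pixel-to-millimetre factor are hypothetical.

```python
# Minimal sketch of the Lucas-Kanade tracking step described in the abstract, written
# with OpenCV rather than the LabVIEW implementation used in the study. Image file
# names, corner-detection settings and the pixel-to-mm scale are hypothetical.
import cv2
import numpy as np

prev_img = cv2.imread("specimen_frame_000.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("specimen_frame_001.png", cv2.IMREAD_GRAYSCALE)

# Points to track on the specimen surface (e.g., a speckle or marker pattern).
p0 = cv2.goodFeaturesToTrack(prev_img, maxCorners=200, qualityLevel=0.01, minDistance=7)

# Pyramidal Lucas-Kanade optical flow between the two frames.
p1, status, err = cv2.calcOpticalFlowPyrLK(
    prev_img, next_img, p0, None,
    winSize=(21, 21), maxLevel=3,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01),
)

good_new = p1[status.flatten() == 1].reshape(-1, 2)
good_old = p0[status.flatten() == 1].reshape(-1, 2)

pixel_to_mm = 0.05                       # hypothetical calibration factor
displacement = (good_new - good_old) * pixel_to_mm
print("mean displacement [mm]:", displacement.mean(axis=0))
print("std of displacement [mm] (non-homogeneity indicator):", displacement.std(axis=0))
```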
Procedia PDF Downloads 301
5858 Heat Transfer Characteristics of Aluminum Foam Heat Sinks Subject to an Impinging Jet
Authors: So-Ra Jeon, Chan Byon
Abstract:
This study investigates the heat transfer characteristics of an aluminum foam heat sink and a pin fin heat sink subjected to an impinging air jet under a fixed pumping power condition as well as a fixed flow rate condition. The effects of the dimensionless pumping power or the Reynolds number and the impinging distance ratio on the Nusselt number are considered. The results show that the effect of the impinging distance on the Nusselt number is negligible under a fixed pumping power condition, while the Nusselt number increases with decreasing impinging distance under a fixed flow rate condition. A correlation for the pressure drop is obtained as a function of the flow rate and the impinging distance ratio, and correlations for the stagnation Nusselt number of the impinging jet are developed as a function of the pumping power. The aluminum foam heat sinks did not show higher thermal performance compared to a conventional pin fin heat sink under a fixed pumping power condition.
Keywords: aluminum foam, heat sinks, impinging jet, pumping power
Procedia PDF Downloads 305
5857 Channel Estimation for LTE Downlink
Authors: Rashi Jain
Abstract:
LTE systems employ Orthogonal Frequency Division Multiplexing (OFDM) as the multiple access technology for the downlink channels. For enhanced performance, accurate channel estimation is required. Various algorithms such as Least Squares (LS), Minimum Mean Square Error (MMSE) and Recursive Least Squares (RLS) can be employed for this purpose. This paper proposes a channel estimation algorithm based on the Kalman filter for the LTE downlink system. Using the frequency-domain pilots, the initial channel response is obtained using the LS criterion. The Kalman filter is then employed to track the channel variations in the time domain. To suppress the noise within a symbol, threshold processing is employed. The paper draws a comparison between LS, MMSE, RLS and the Kalman filter for channel estimation. The parameters for evaluation are Bit Error Rate (BER), Mean Square Error (MSE) and run time.
Keywords: LTE, channel estimation, OFDM, RLS, Kalman filter, threshold
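A minimal sketch of the two-stage idea (an LS estimate at the pilots, refined by a Kalman filter that tracks the channel over OFDM symbols) is given below. The first-order Gauss-Markov channel model and all numeric values are illustrative assumptions, not the paper's simulation setup.

```python
# Minimal sketch: least-squares (LS) channel estimation at pilot subcarriers, refined
# over time by a scalar Kalman filter per pilot. The random-walk channel model and
# all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_pilots, n_symbols = 8, 100
pilot = np.ones(n_pilots, dtype=complex)        # known unit-amplitude pilots

# Simulated slowly varying channel at the pilot positions (first-order Gauss-Markov).
h_true = np.zeros((n_symbols, n_pilots), dtype=complex)
h_true[0] = (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots)) / np.sqrt(2)
a, q, noise_var = 0.995, 1e-3, 0.01
for t in range(1, n_symbols):
    w = np.sqrt(q / 2) * (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots))
    h_true[t] = a * h_true[t - 1] + w

h_hat = np.zeros_like(h_true)
P = np.ones(n_pilots)                           # state error variance per pilot
h_hat[0] = h_true[0]                            # perfect initialization for simplicity
mse_ls, mse_kf = [], []
for t in range(1, n_symbols):
    v = np.sqrt(noise_var / 2) * (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots))
    y = h_true[t] * pilot + v                   # received pilots
    h_ls = y / pilot                            # LS estimate at the pilot positions
    h_pred = a * h_hat[t - 1]                   # Kalman predict
    P = a * a * P + q
    K = P / (P + noise_var)                     # Kalman gain and update
    h_hat[t] = h_pred + K * (h_ls - h_pred)
    P = (1 - K) * P
    mse_ls.append(np.mean(np.abs(h_ls - h_true[t]) ** 2))
    mse_kf.append(np.mean(np.abs(h_hat[t] - h_true[t]) ** 2))

print(f"mean MSE, LS only : {np.mean(mse_ls):.5f}")
print(f"mean MSE, LS + KF : {np.mean(mse_kf):.5f}")
```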
Procedia PDF Downloads 356
5856 Image Distortion Correction Method of 2-MHz Side Scan Sonar for Underwater Structure Inspection
Authors: Youngseok Kim, Chul Park, Jonghwa Yi, Sangsik Choi
Abstract:
The 2-MHz Side Scan SONAR (SSS) attached to a boat for the inspection of underwater structures is affected by shaking, which makes it difficult to determine the exact scale of damage to a structure. In this study, a motion sensor was attached to the inside of the 2-MHz SSS to obtain roll, pitch, and yaw direction data, and an image stabilization tool was developed to correct the sonar image. Experiments confirmed that reliable data can be obtained, with an average error rate of 1.99% between the measured value and the actual distance. It is thus possible to obtain accurate sonar data to inspect damage in underwater structures.
Keywords: image stabilization, motion sensor, safety inspection, sonar image, underwater structure
Procedia PDF Downloads 280
5855 A Novel Approach to Design of EDDR Architecture for High Speed Motion Estimation Testing Applications
Authors: T. Gangadhararao, K. Krishna Kishore
Abstract:
Motion Estimation (ME) plays a critical role in a video coder, so testing such a module is of priority concern. Focusing on the testing of ME in a video coding system, this work presents an error detection and data recovery (EDDR) design, based on the residue-and-quotient (RQ) code, to embed into ME for video coding testing applications. An error in the processing elements (PEs), i.e., the key components of a ME, can be detected and recovered effectively by using the proposed EDDR design. The proposed EDDR design for ME testing can detect errors and recover data with an acceptable area overhead and timing penalty.
Keywords: area overhead, data recovery, error detection, motion estimation, reliability, residue-and-quotient (RQ) code
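A minimal behavioural sketch of the residue-and-quotient idea is shown below: each processing-element output is accompanied by its quotient and residue with respect to a check modulus, so a corrupted output can be detected and the value recovered from the check part. The modulus and the toy sum-of-absolute-differences "PE" are assumptions; the paper's hardware architecture is not reproduced here.

```python
# Behavioural sketch of the residue-and-quotient (RQ) code idea: a data word N is
# accompanied by q = N // m and r = N % m for a check modulus m, so a faulty
# processing-element output can be detected and the value recovered from the check
# part. The modulus and the simple SAD "processing element" are illustrative.
M = 64  # check modulus (a power of two keeps the check arithmetic cheap)

def rq_encode(n: int) -> tuple[int, int]:
    """Return (quotient, residue) of n with respect to the modulus M."""
    return n // M, n % M

def rq_check(n: int, q: int, r: int) -> bool:
    """True if the data word n is consistent with its RQ check part."""
    return n == q * M + r

def sad_pe(block_a, block_b) -> int:
    """Toy 'processing element': sum of absolute differences used in motion estimation."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

# Normal operation: compute the PE result and its RQ code in parallel.
a = [10, 12, 9, 7]
b = [11, 10, 9, 5]
result = sad_pe(a, b)
q, r = rq_encode(result)

# Fault injection: flip a bit in the PE output.
faulty = result ^ 0b1000
print("fault detected:", not rq_check(faulty, q, r))   # True -> error detected
print("recovered value:", q * M + r)                   # data recovery from the check part
```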
Procedia PDF Downloads 432
5854 An Error Analysis of English Communication of Suan Sunandha Rajabhat University Students
Authors: Chantima Wangsomchok
Abstract:
The main purposes of this study are (1) to test the students' communicative competence within six main functions: greeting, parting, thanking, offering, requesting and suggesting, (2) to employ error analysis of the students' communicative competence within those functions, and (3) to compare the characteristics of the errors found in the investigation. The subjects of the study are 328 first-year undergraduates taking the Foundation English course in the first semester of the 2008 academic year at Suan Sunandha Rajabhat University. This study found that while the subjects showed high communicative competence in the use of the following three functions: greeting, thanking, and offering, they showed poor communicative competence in suggesting, requesting and parting. In addition, this study found that grammatical errors were most frequently found in the parting function, whereas they were less frequently found in the thanking and requesting functions respectively. In contrast, the students tended to show high pragmatic failure in the use of the greeting and suggesting functions.
Keywords: error analysis, functions of English language, communicative competence, cognitive science
Procedia PDF Downloads 431
5853 Blood Oxygen Saturation Measurement System Using Broad-Band Light Source with LabVIEW Program
Authors: Myoung Ah Kim, Dong Ho Sin, Chul Gyu Song
Abstract:
Blood oxygen saturation measurement is a well-established, noninvasive photoplethysmographic method to monitor vital signs. Conventional blood oxygen saturation measurement with a two-LED light source suffers from ambiguity in the oxygen saturation measurement principle, and the measurement results are greatly influenced by heat and motion artifacts. To solve these problems, a high-accuracy blood oxygen saturation measurement method using a broadband light source, whose algorithm can be easily understood, has been proposed. Measurement of blood oxygen saturation based on a broadband light source has the advantages of a simple test setup and easy interpretation. The broadband light source based blood oxygen saturation measurement program proposed in this paper is a combination of LabVIEW and MATLAB. The light absorption of oxyhemoglobin and deoxyhemoglobin over the wavelength range of 450 nm-750 nm is used to measure the blood oxygen saturation. To prevent hand movement from affecting the measurement, the probe is fixed to a motor stage so that the interval between the sample and the probe is kept constant. Experimental results show that the proposed method noticeably increases the accuracy and saves time compared with conventional methods.
Keywords: oxygen saturation, broad-band light source, CCD, light reflectance theory
Procedia PDF Downloads 459
5852 Uncertainty Assessment in Building Energy Performance
Authors: Fally Titikpina, Abderafi Charki, Antoine Caucheteux, David Bigaud
Abstract:
The building sector is one of the largest energy consumers, accounting for about 40% of the final energy consumption in the European Union. Ensuring building energy performance is a scientific, technological and sociological matter. To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared with the measured consumption when the building is operational. When evaluating this performance, many buildings show significant differences between the calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved, not only in measurement but also those induced by the propagation of dynamic and static input data through the model being used. The evaluation of measurement uncertainty is based on both the knowledge about the measurement process and the input quantities which influence the result of measurement. Measurement uncertainty can be evaluated within the framework of conventional statistics presented in the Guide to the Expression of Uncertainty in Measurement (GUM) as well as by Bayesian Statistical Theory (BST). Another choice is the use of numerical methods like Monte Carlo Simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for the estimation of the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS and BST) is given. An office building has therefore been monitored and multiple sensors have been mounted on candidate locations to get the required data. The monitored zone is composed of six offices and has an overall surface of 102 m². Temperature data, electrical and heating consumption, window opening and occupancy rate are the features for our research work.
Keywords: building energy performance, uncertainty evaluation, GUM, Bayesian approach, Monte Carlo method
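As an illustration of the Monte Carlo route (MCS) mentioned above, the following is a minimal sketch that propagates assumed input distributions through a deliberately simplified heat-loss model to obtain a standard uncertainty and a coverage interval for the estimated consumption. The model and every numeric value are illustrative assumptions, not the monitored building's data.

```python
# Minimal sketch of Monte Carlo uncertainty propagation: assumed probability
# distributions for the inputs of a simplified heat-loss model are sampled and
# propagated to obtain the distribution of the estimated consumption. All values
# and the model itself are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                    # number of Monte Carlo trials

# Simplified model: Q = U * A * (T_in - T_out) * hours
U = rng.normal(1.8, 0.15, N)                   # overall heat-transfer coefficient [W/m^2.K]
A = rng.normal(102.0, 1.0, N)                  # floor area [m^2]
T_in = rng.normal(21.0, 0.5, N)                # indoor temperature [degC]
T_out = rng.normal(8.0, 1.5, N)                # outdoor temperature [degC]
hours = 24 * 30                                # one month of operation

Q = U * A * (T_in - T_out) * hours / 1000.0    # kWh

mean = Q.mean()
std = Q.std(ddof=1)                            # standard uncertainty of the estimate
lo, hi = np.percentile(Q, [2.5, 97.5])         # 95 % coverage interval
print(f"estimated consumption: {mean:.0f} kWh, u = {std:.0f} kWh")
print(f"95% coverage interval: [{lo:.0f}, {hi:.0f}] kWh")
```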
Procedia PDF Downloads 459
5851 Simultaneous Measurement of Displacement and Roll Angle of Object
Authors: R. Furutani, K. Ishii
Abstract:
Laser interferometers are now widely used for length and displacement measurement. In conventional methods, the optical path difference between two mirrors, one of which is a reference mirror and the other a target mirror, is measured, as in Michelson interferometry, or two target mirrors are set up and the optical path difference between the two targets is measured, as in differential interferometry. In these interferometers, the two laser beams pass through different optical elements, so the measurement result is affected by vibration and other effects in the optical paths. In addition, it is difficult to measure the roll angle around the optical axis. The proposed interferometer simultaneously measures both the translational motion along the optical axis and the roll motion around it by combining the retroreflective principle of a ball lens (BL) with polarization. This interferometer detects the interferogram of two beams traveling along an identical optical path from the beam source to the BL. This principle is expected to reduce external influences by using the interferogram between two lasers in an identical optical path. The proposed interferometer uses a BL so that the light reflected from the lens travels on the identical optical path as the incident light. After reaching the aperture of the He-Ne laser oscillator, the reflected light is reflected by a mirror with a very high reflectivity installed in the aperture and is irradiated back toward the BL. The first laser beam that enters the BL and the second laser beam that enters the BL after the round trip interfere with each other, enabling the measurement of displacement along the optical axis. In addition, for the measurement of the roll motion, a quarter-wave plate is installed in the optical path to change the polarization state of the laser. The polarization states of the first and second laser beams differ according to the roll angle of the target. As a result, this system can measure the displacement and the roll angle of the BL simultaneously. It was verified by simulation and experiment that the proposed optical system could measure the displacement and the roll angle simultaneously.
Keywords: common path interferometer, displacement measurement, laser interferometer, simultaneous measurement, roll angle measurement
Procedia PDF Downloads 89
5850 Weighted-Distance Sliding Windows and Cooccurrence Graphs for Supporting Entity-Relationship Discovery in Unstructured Text
Authors: Paolo Fantozzi, Luigi Laura, Umberto Nanni
Abstract:
The problem of entity relation discovery in unstructured data, a well-covered topic in the literature, consists in searching within unstructured sources (typically, text) in order to find connections among entities. These can be a whole dictionary, or a specific collection of named items. In many cases machine learning and/or text mining techniques are used for this goal. These approaches might be unfeasible in computationally challenging problems, such as processing massive data streams. A faster approach consists in collecting the cooccurrences of any two words (entities) in order to create a graph of relations - a cooccurrence graph. Indeed, each cooccurrence highlights some degree of semantic correlation between the words, because it is more common to have related words close to each other than to have them at opposite ends of the text. Some authors have used sliding windows for this problem: they count all the cooccurrences within a sliding window running over the whole text. In this paper we generalise this technique, arriving at a Weighted-Distance Sliding Window, where each occurrence of two named items within the window is counted with a weight depending on the distance between the items: a closer distance implies stronger evidence of a relationship. We developed an experiment in order to support this intuition, by applying the technique to a data set consisting of the text of the Bible, split into verses.
Keywords: cooccurrence graph, entity relation graph, unstructured text, weighted distance
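A minimal sketch of the Weighted-Distance Sliding Window is given below: every pair of named items inside the window contributes an edge weight that decreases with the token distance between them. The 1/d weight function, the window size and the toy corpus are assumptions for illustration.

```python
# Minimal sketch of a Weighted-Distance Sliding Window cooccurrence graph: each
# cooccurrence of two named items within the window contributes a weight that
# decays with the token distance between them (1/d is an assumed weight function).
from collections import defaultdict
from itertools import combinations

def weighted_cooccurrence_graph(tokens, entities, window=10):
    """Return {(a, b): weight} for entity pairs, weighted by 1/distance."""
    graph = defaultdict(float)
    positions = [(i, t) for i, t in enumerate(tokens) if t in entities]
    for (i, a), (j, b) in combinations(positions, 2):
        d = j - i
        if a != b and d <= window:
            graph[tuple(sorted((a, b)))] += 1.0 / d      # closer pairs count more
    return dict(graph)

tokens = ("moses led the people of israel out of egypt and moses "
          "spoke to aaron in the land of egypt").split()
entities = {"moses", "israel", "egypt", "aaron"}

for pair, w in sorted(weighted_cooccurrence_graph(tokens, entities).items(),
                      key=lambda kv: -kv[1]):
    print(pair, round(w, 3))
```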
Procedia PDF Downloads 153
5849 Determination of Measurement Uncertainty of the Diagnostic Meteorological Model CALMET
Authors: Nina Miklavčič, Urška Kugovnik, Natalia Galkina, Primož Ribarič, Rudi Vončina
Abstract:
Today, the need for weather predictions is deeply rooted in the everyday life of people as well as in industry. Forecasts influence final decision-making processes in multiple areas, from agriculture and the prevention of natural disasters to air traffic regulations and national-level solutions for health, security, and economic problems. In Slovenia, alongside other existing applications, weather forecasts are used for the prognosis of electrical current transmission through power lines. Meteorological parameters are one of the key factors which need to be considered in estimations of a reliable supply of electrical energy to consumers. And as for any other measured value, knowledge about measurement uncertainty is also critical for a secure and reliable supply of energy. The estimation of measurement uncertainty grants us a more accurate interpretation of data, a better quality of the end results, and even the possibility of improving weather forecast models. In this article, we focus on the estimation of the measurement uncertainty of the diagnostic microscale meteorological model CALMET. For the purposes of our research, we used a network of meteorological stations spread over the area of interest, which enables a side-by-side comparison of measured meteorological values with the values calculated with the help of CALMET, and the measurement uncertainty estimation as a final result.
Keywords: uncertainty, meteorological model, meteorological measurement, CALMET
Procedia PDF Downloads 81
5848 Exploring Bidirectional Encoder Representations from the Transformers’ Capabilities to Detect English Preposition Errors
Authors: Dylan Elliott, Katya Pertsova
Abstract:
Preposition errors are some of the most common errors made by L2 speakers. In addition, improving error correction and detection methods remains an open issue in the realm of Natural Language Processing (NLP). This research investigates whether the Bidirectional Encoder Representations from Transformers model (BERT) has the potential to correct preposition errors accurately enough to be useful in error correction software. This research finds that BERT performs strongly when the scope of its error correction is limited to preposition choice. The researchers used an open-source BERT model and over three hundred thousand edited sentences from Wikipedia, tagged for part of speech, where only a preposition edit had occurred. To test BERT's ability to detect errors, a technique known as multi-level masking was used to generate suggestions based on sentence context for every prepositional environment in the test data. These suggestions were compared with the original errors in the data and their known corrections to evaluate BERT's performance. The suggestions were further analyzed to determine whether BERT more often agreed with the judgements of the Wikipedia editors. Both the untrained and fine-tuned models were compared. Fine-tuning led to a greater rate of error detection, which significantly improved recall but lowered precision due to an increase in false positives, or falsely flagged errors. However, in most cases, these false positives were not errors in preposition usage but merely cases where more than one preposition was possible. Furthermore, when BERT correctly identified an error, the model largely agreed with the Wikipedia editors, suggesting that BERT's ability to detect misused prepositions is better than previously believed. To evaluate to what extent BERT's false positives were grammatical suggestions, we plan to do a further crowd-sourcing study to test the grammaticality of BERT's suggested sentence corrections against native speakers' judgments.
Keywords: BERT, grammatical error correction, preposition error detection, prepositions
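The following is a minimal sketch of the masking idea using the Hugging Face fill-mask pipeline with an off-the-shelf bert-base-uncased model: a prepositional slot is masked, BERT proposes fillers, and the original preposition is flagged if it is not among the top suggestions. This is a single-mask illustration, not the paper's multi-level masking procedure; the preposition list, model choice and top-k threshold are assumptions.

```python
# Minimal sketch of masked preposition checking with an off-the-shelf BERT model.
# The preposition list, the top-k threshold and the example sentence are assumptions;
# the paper's multi-level masking procedure is not reproduced here.
from transformers import pipeline

PREPOSITIONS = {"in", "on", "at", "to", "of", "for", "with", "by", "about", "from"}
fill = pipeline("fill-mask", model="bert-base-uncased")

def check_prepositions(sentence: str, top_k: int = 5):
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        word = tok.lower().strip(".,")
        if word not in PREPOSITIONS:
            continue
        masked = " ".join(tokens[:i] + [fill.tokenizer.mask_token] + tokens[i + 1:])
        suggestions = [p["token_str"].strip() for p in fill(masked, top_k=top_k)]
        prepositional = [s for s in suggestions if s in PREPOSITIONS]
        flagged = word not in suggestions
        print(f"'{word}' in: {masked}")
        print(f"  BERT suggestions: {suggestions} -> flagged: {flagged}, alternatives: {prepositional}")

check_prepositions("She is interested on learning new languages .")
```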
Procedia PDF Downloads 147
5847 The Impact of COVID-19 Pandemic on Educators in South Africa: Self-Efficacy and Anxiety
Authors: Mostert Jacques, Gulseven Osman, Williams Courtney
Abstract:
The Covid-19 pandemic caused unparalleled disruption in the lives of the majority of the world's population. This included school closures and the introduction of online learning. In this article, we investigated the impact of distance learning on the self-efficacy and anxiety levels experienced by educators in South Africa. We surveyed 60 respondents from independent schools using a Likert scale rating of 0 to 4. The results suggest that despite experiencing moderate anxiety, educators showed a sense of high self-efficacy during distance learning. This was especially true for those with underlying health concerns. There was no significant difference between how the genders experienced anxiety and self-efficacy. Further research into the impact on learners' anxiety levels during distance learning will provide policymakers and educators with a better understanding of how the use of technology is influencing the effectiveness of teaching, learning, and assessment.
Keywords: COVID-19, education, self-efficacy, anxiety
Procedia PDF Downloads 205
5846 Comparison between Bernardi’s Equation and Heat Flux Sensor Measurement as Battery Heat Generation Estimation Method
Authors: Marlon Gallo, Eduardo Miguel, Laura Oca, Eneko Gonzalez, Unai Iraola
Abstract:
The heat generation of an energy storage system is an essential topic when designing a battery pack and its cooling system. Heat generation estimation is used together with thermal models to predict battery temperature in operation and to adapt the design of the battery pack and the cooling system to these thermal needs, guaranteeing its safety and correct operation. In the present work, a comparison is presented between the use of a heat flux sensor (HFS) for indirect measurement of heat losses in a cell and the widely used simplified version of Bernardi's equation for estimation. First, a Li-ion cell is thermally characterized with an HFS to measure the thermal parameters that are used in a first-order lumped thermal model. These parameters are the equivalent thermal capacity and the equivalent thermal resistance of a single Li-ion cell. Static (no current flowing through the cell) and dynamic (current flowing through the cell) tests are conducted, in which the HFS is used to measure the heat exchanged between the cell and the ambient, so that the thermal capacity and resistance, respectively, can be calculated. An experimental platform records current, voltage, ambient temperature, surface temperature, and HFS output voltage. Second, an equivalent circuit model is built in a Matlab-Simulink environment. This allows the comparison between the generated heat predicted by Bernardi's equation and the HFS measurements. Data post-processing is required to extrapolate the heat generation from the HFS measurements, as the sensor records the heat released to the ambient and not the heat generated within the cell. Finally, the cell temperature evolution is estimated with the lumped thermal model (using both the HFS and Bernardi's equation total heat generation) and compared with experimental temperature data (measured with a T-type thermocouple). At the end of this work, a critical review of the results obtained and the possible reasons for mismatch is reported. The results show that indirectly measuring the heat generation with the HFS gives a more precise estimation than Bernardi's simplified equation. On the one hand, when using Bernardi's simplified equation, the estimated heat generation differs from the cell temperature measurements during charging at high current rates. Additionally, for low-capacity cells, where a small change in capacity has a great influence on the terminal voltage, the estimated heat generation shows a high dependency on the State of Charge (SoC) estimation, and therefore on the open circuit voltage calculation (as it is SoC dependent). On the other hand, when indirectly measuring the heat generation with the HFS, the resulting error is a maximum of 0.28 °C in the temperature prediction, in contrast with 1.38 °C with Bernardi's simplified equation. This illustrates the limitations of Bernardi's simplified equation for applications where precise heat monitoring is required. For higher current rates, Bernardi's equation estimates more heat generation and, consequently, a higher predicted temperature. Bernardi's equation accounts for no losses after cutting the charging or discharging current. However, the HFS measurement shows that after cutting the current the cell continues generating heat for some time, increasing the error of Bernardi's equation.
Keywords: lithium-ion battery, heat flux sensor, heat generation, thermal characterization
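A minimal sketch of the comparison pipeline is given below: the simplified Bernardi heat term I(OCV − V) is computed from logged current and voltage, then fed into a first-order lumped thermal model to predict the cell temperature. The CSV layout, the OCV(SoC) table, the sign convention and the thermal parameters are illustrative assumptions, not the values identified in the study.

```python
# Minimal sketch: simplified Bernardi heat term Q = I * (OCV - V) from logged data,
# fed into a first-order lumped thermal model (thermal capacity C_th, cell-to-ambient
# resistance R_th) to predict cell temperature. All file names, tables and parameter
# values are illustrative assumptions.
import numpy as np
import pandas as pd

# Hypothetical log: time [s], current [A] (discharge positive), terminal voltage [V],
# state of charge [-], ambient and measured surface temperature [degC].
log = pd.read_csv("cell_cycle_log.csv")

# Assumed OCV(SoC) lookup (would normally come from a low-current characterization test).
soc_pts = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
ocv_pts = np.array([3.00, 3.45, 3.60, 3.75, 3.95, 4.15])
ocv = np.interp(log["soc"], soc_pts, ocv_pts)

# Simplified Bernardi equation (irreversible term only, entropic term neglected).
q_bernardi = log["current"] * (ocv - log["voltage"])          # W

# First-order lumped thermal model: C_th * dT/dt = Q - (T - T_amb) / R_th
C_th, R_th = 45.0, 3.5                                        # J/K and K/W (illustrative)
dt = np.diff(log["time"], prepend=log["time"].iloc[0])
T_pred = np.empty(len(log))
T_pred[0] = log["t_surface"].iloc[0]
for k in range(1, len(log)):
    dTdt = (q_bernardi.iloc[k] - (T_pred[k - 1] - log["t_ambient"].iloc[k]) / R_th) / C_th
    T_pred[k] = T_pred[k - 1] + dTdt * dt[k]

err = np.max(np.abs(T_pred - log["t_surface"]))
print(f"maximum temperature prediction error: {err:.2f} degC")
```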
Procedia PDF Downloads 389
5845 Knowledge Required for Avoiding Lexical Errors at Machine Translation
Authors: Yukiko Sasaki Alam
Abstract:
This research aims at finding the causes that lead to wrong lexical selections in machine translation (MT), rather than categorizing lexical errors, which has been the main practice in error analysis. By manually examining and analyzing lexical errors output by an MT system, it suggests what knowledge would help the system reduce lexical errors.
Keywords: machine translation, error analysis, lexical errors, evaluation
Procedia PDF Downloads 338
5844 Pattern of Refractive Error, Knowledge, Attitude and Practice about Eye Health among the Primary School Children in Bangladesh
Authors: Husain Rajib, K. S. Kishor, D. G. Jewel
Abstract:
Background: Uncorrected refractive error is a common cause of preventable visual impairment in the pediatric age group and can lead to blindness, but early detection of visual impairment can reduce the problem, with positive effects on education and greater involvement in social activities. Glasses are the cheapest and commonest form of correction of refractive errors. To achieve this, patients must exhibit good compliance with spectacle wear. Patients' attitudes and perceptions of glasses and eye health could affect compliance. Material and method: A prospective community-based cross-sectional study was designed in order to evaluate the knowledge, attitude and practices about refractive errors and eye health amongst primary school-going children. Result: Among 140 respondents, 72 were males and 68 were females. We found that 50 children were myopic, of whom 26 were male and 24 were female, and 27 children were hyperopic, of whom 14 were male and 13 were female. About 63 children were astigmatic, of whom 32 were male and 31 were female. The level of knowledge and attitude was satisfactory. The attitude of the students, teachers and parents was cooperative, which helped in performing cycloplegic refraction. Practice was not satisfactory due to social stigma and an information gap. Conclusion: Knowledge of refractive error and acceptance of glasses are essential for the correction of uncorrected refractive error. Public awareness programs such as vision screening programs, eye camps, and teacher training programs are beneficial for prescribing spectacles and encouraging their wear.
Keywords: refractive error, stigma, knowledge, attitude, practice
Procedia PDF Downloads 264
5843 Application of Balance Score Card (BSc) in Education: Case of the International University
Authors: Hieu Nguyen
Abstract:
Performance management is a concern of any organization in the context of increasing demand and fierce competition between education institutions. This paper draws together performance management concepts and focuses specifically on the Balanced Scorecard in the context of education. The study employs semi-structured in-depth interviews to explore the measurement items for each of the sub-objectives in the four perspectives. The role and influence of each perspective's measurement items, and how the measurements can be improved for better performance management, are then discussed. Finally, the measurements are put together as a suggested balanced scorecard framework for the case of the International University.
Keywords: performance management, education institution, balance scorecard, measurement items, four perspectives, international university
Procedia PDF Downloads 411
5842 Arterial Compliance Measurement Using Split Cylinder Sensor/Actuator
Authors: Swati Swati, Yuhang Chen, Robert Reuben
Abstract:
Coronary stents are tube-shaped devices which are placed in coronary arteries to keep the arteries open in the treatment of coronary arterial diseases. Coronary stents are routinely deployed to clear atheromatous plaque. The stent essentially applies an internal pressure to the artery because its structure is cylindrically symmetrical, and this may introduce some abnormalities in the final arterial shape. The goal of the project is to develop segmented circumferential arterial compliance measuring devices which can (eventually) be deployed in vivo. The segmentation of the device will allow the mechanical asymmetry of any stenosis to be assessed. The purpose will be to assess the quality of arterial tissue for applications in tailored stents and in the assessment of aortic aneurysm. Arterial distensibility measurement is of utmost importance for diagnosing cardiovascular diseases and for the prediction of future cardiac events or coronary artery disease. In order to arrive at some generic outcomes, a preliminary experimental set-up has been devised to establish the measurement principles for the device at macro scale. The measurement methodology consists of a strain gauge system monitored by LabVIEW software in a real-time fashion. This virtual instrument employs a balloon within a gelatine model contained in a split cylinder with strain gauges fixed on it. The instrument allows automated measurement of the effect of air pressure on the gelatine and measurement of strain with respect to time and pressure during inflation. A simple creep compliance model has been applied to the results for the purpose of extracting some measures of arterial compliance. The results obtained from the experiments have been used to study the effect of air pressure on strain at varying time intervals. The results clearly demonstrate that with a decrease in arterial volume and an increase in arterial pressure, arterial strain increases, thereby decreasing the arterial compliance. The measurement system could lead to the development of portable, inexpensive and small equipment and could prove to be an efficient automated compliance measurement device.
Keywords: arterial compliance, atheromatous plaque, mechanical symmetry, strain measurement
Procedia PDF Downloads 279
5841 Cross-Sectional Study Investigating the Prevalence of Uncorrected Refractive Error and Visual Acuity through Mobile Vision Screening in the Homeless in Wales
Authors: Pakinee Pooprasert, Wanxin Wang, Tina Parmar, Dana Ahnood, Tafadzwa Young-Zvandasara, James Morgan
Abstract:
Homelessness has been shown to be correlated with poor health outcomes, including increased visual health morbidity. Despite this, there are relatively few studies regarding visual health in the homeless population, especially in the UK. This research aims to investigate the visual disability and access barriers prevalent in the homeless population in Cardiff, South Wales. Data was collected from 100 homeless participants in three different shelters. Visual outcomes included near and distance visual acuity as well as non-cycloplegic refraction. Qualitative data was collected via a questionnaire and included socio-demographic profile, ocular history, subjective visual acuity and level of access to healthcare facilities. Based on the participants' presenting visual acuity, the total prevalence of myopia and hyperopia was 17.0% and 19.0% respectively, based on the spherical equivalent of the eye with the greatest absolute value. The prevalence of astigmatism was 8.0%. The mean absolute spherical equivalent was 0.841 D and 0.853 D for the right and left eye respectively. The proportion of participants with sight loss (as defined by VA = 6/12-6/60 in the better-seeing eye) was 27.0%, in comparison to 0.89% and 1.1% in the general Cardiff and Wales populations respectively (p < 0.05). Additionally, 1.0% of the homeless subjects were registered blind (VA less than 3/60), in comparison to 0.17% for the national census after age standardization. Most participants had good knowledge regarding access to prescription glasses and eye examination services. Despite this, 85.0% had never had their eyes examined by a doctor and 73.0% had their last optometrist appointment more than 5 years ago. These findings suggest that there is a significant disparity in ocular health, including visual acuity and refractive error, between the homeless and the general population. Further, the homeless were less likely to receive the same level of support and continued care in the community due to access barriers. These included a number of socio-economic factors, such as travel expenses and regional availability of services, as well as administrative shortcomings. In conclusion, this research demonstrated unmet visual health needs within the homeless, and that inclusive policy changes may need to be implemented for better healthcare outcomes within this marginalized community.
Keywords: homelessness, refractive error, visual disability, Wales
Procedia PDF Downloads 172
5840 Phasor Measurement Unit Based on Particle Filtering
Authors: Rithvik Reddy Adapa, Xin Wang
Abstract:
Phasor Measurement Units (PMUs) are very sophisticated measuring devices that find the amplitude, phase and frequency of various voltages and currents in a power system. The particle filter is a state estimation technique that uses Bayesian inference. Particle filters are widely used in pose estimation and indoor navigation and are very reliable. This paper studies and compares four different particle filters as PMUs, namely the generic particle filter (GPF), the genetic algorithm particle filter (GAPF), the particle swarm optimization particle filter (PSOPF) and the adaptive particle filter (APF). Two different test signals are used to test the performance of the filters in terms of responsiveness and correctness of the estimates.
Keywords: phasor measurement unit, particle filter, genetic algorithm, particle swarm optimisation, state estimation
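As an illustration of the generic particle filter (GPF) variant, the following is a minimal sketch in which each particle carries a hypothesis of (amplitude, phase, frequency), is weighted by how well it explains noisy voltage samples, and is resampled when the effective sample size drops. Signal parameters, noise levels and the particle count are illustrative assumptions.

```python
# Minimal sketch of a generic particle filter estimating the amplitude, phase and
# frequency of a noisy sinusoidal voltage, the quantities a PMU reports. All numeric
# values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
fs, f0, A0, phi0 = 4000.0, 50.2, 1.0, 0.6        # sampling rate and true phasor
n_samples, n_particles = 400, 2000
t = np.arange(n_samples) / fs
meas = A0 * np.cos(2 * np.pi * f0 * t + phi0) + 0.05 * rng.standard_normal(n_samples)

# Particle state: [amplitude, phase, frequency]; broad initial prior.
particles = np.column_stack([
    rng.uniform(0.5, 1.5, n_particles),
    rng.uniform(-np.pi, np.pi, n_particles),
    rng.uniform(49.0, 51.0, n_particles),
])
weights = np.full(n_particles, 1.0 / n_particles)
meas_std, jitter = 0.05, np.array([0.002, 0.005, 0.01])

for k, z in enumerate(meas):
    # propagate: random-walk model for slowly varying phasor parameters
    particles += jitter * rng.standard_normal(particles.shape)
    # weight by the measurement likelihood of the current sample
    pred = particles[:, 0] * np.cos(2 * np.pi * particles[:, 2] * t[k] + particles[:, 1])
    weights *= np.exp(-0.5 * ((z - pred) / meas_std) ** 2)
    weights += 1e-300
    weights /= weights.sum()
    # systematic resampling when the effective sample size drops
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n_particles - 1)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

estimate = weights @ particles
print(f"amplitude ~ {estimate[0]:.3f}, phase ~ {estimate[1]:.3f} rad, frequency ~ {estimate[2]:.3f} Hz")
```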
Procedia PDF Downloads 9
5839 Ant Lion Optimization in a Fuzzy System for Benchmark Control Problem
Authors: Leticia Cervantes, Edith Garcia, Oscar Castillo
Abstract:
Today, there are several control problems where the main objective is to obtain the best control in order to decrease the error in the application. Many techniques can be used to address these problems, such as neural networks, PID control, fuzzy logic, optimization techniques and many more. In this case, fuzzy logic with a fuzzy system and an optimization technique are used to control the case study. Ant Lion Optimization (ALO) is used to optimize a fuzzy system to control the velocity of a simple treadmill. The main objective is to achieve control of the velocity in the control problem using ALO. First, a simple fuzzy system was used to control the velocity of the treadmill; it has two inputs (error and error change) and one output (desired speed). Results were obtained, but to decrease the error, ALO was applied to optimize the fuzzy system of the treadmill. With the optimized system, the simulation was performed, and the results prove that using ALO the control of the velocity was better than with a conventional fuzzy system. This paper describes some basic concepts to help understand the idea of this work, the methodology of the investigation (control problem, fuzzy system design, optimization), and the results obtained with the optimized fuzzy system. A comparison between the simple fuzzy system and the optimized fuzzy system is presented, proving that the optimization improved the control with good results. The major finding of the study is that ALO is a good alternative to improve control, because it helped to decrease the error in control applications, whichever control technique is being optimized. As a final statement, it is important to mention that the selected methodology was good because the control of the treadmill was improved using the optimization technique.
Keywords: ant lion optimization, control problem, fuzzy control, fuzzy system
Procedia PDF Downloads 399