Search results for: measurement models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9157

9127 The Use of AI to Measure Gross National Happiness

Authors: Riona Dighe

Abstract:

This research attempts to identify an alternative approach to the measurement of Gross National Happiness (GNH). It uses artificial intelligence (AI), incorporating natural language processing (NLP) and sentiment analysis, to measure GNH. We use off-the-shelf NLP models that perform sentence-level sentiment analysis as the building block for this research, and we constructed an algorithm that uses these models to derive a sentiment analysis score for each sentence. The algorithm was then tested against a sample of 20 respondents, and the scores it generated resembled the human responses. Using an MLP classifier, a decision tree, a linear model, and K-nearest neighbors, we obtained test accuracies of 89.97%, 54.63%, 52.13%, and 47.9%, respectively. This gave us the confidence to apply the NLP models to sentences from websites to measure the GNH of a country.
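
As a minimal sketch of the sentence-level building block described above (not the authors' exact pipeline), an off-the-shelf sentiment model such as NLTK's VADER can score sentences, and the scores can be averaged into a crude index; the aggregation into a "GNH-style" 0-100 score below is an illustrative assumption:

```python
# A hedged sketch: score sentences with an off-the-shelf sentiment model
# (NLTK's VADER) and average the compound scores into a crude index.
# The aggregation into a "GNH-style" score is an illustrative assumption,
# not the authors' published method.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

sentences = [
    "The new public parks have made weekends wonderful.",
    "Rising living costs are making daily life stressful.",
]

sia = SentimentIntensityAnalyzer()
scores = [sia.polarity_scores(s)["compound"] for s in sentences]  # each in [-1, 1]

# Rescale the mean compound score from [-1, 1] to a 0-100 index.
gnh_index = 50.0 * (1.0 + sum(scores) / len(scores))
print(f"Sentence scores: {scores}")
print(f"Illustrative GNH-style index: {gnh_index:.1f}")
```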

Keywords: artificial intelligence, NLP, sentiment analysis, gross national happiness

Procedia PDF Downloads 119
9126 Text Similarity in Vector Space Models: A Comparative Study

Authors: Omid Shahmirzadi, Adam Lugowski, Kenneth Younge

Abstract:

Automatic measurement of semantic text similarity is an important task in natural language processing. In this paper, we evaluate the performance of different vector space models at this task. We address the real-world problem of modeling patent-to-patent similarity and compare TFIDF (and related extensions), topic models (e.g., latent semantic indexing), and neural models (e.g., paragraph vectors). Contrary to expectations, the added computational cost of text embedding methods is justified only when: 1) the target text is condensed; and 2) the similarity comparison is trivial. Otherwise, TFIDF performs surprisingly well, in particular for longer and more technical texts and for making finer-grained distinctions between nearest neighbors. Unexpectedly, extensions to the TFIDF method, such as adding noun phrases or calculating term weights incrementally, were not helpful in our context.
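
A minimal sketch of the TFIDF baseline the study found hard to beat is shown below: vectorise documents and rank nearest neighbours by cosine similarity. The toy "patent" texts are illustrative placeholders, not the study's corpus:

```python
# A hedged sketch of a TFIDF similarity baseline: vectorise documents and
# rank nearest neighbours by cosine similarity. Toy texts are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

patents = [
    "A lithium-ion battery cell with a ceramic-coated separator.",
    "Battery separator coatings comprising ceramic particles.",
    "A method for routing packets in a wireless mesh network.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(patents)          # sparse (n_docs, n_terms)

sims = cosine_similarity(tfidf[0], tfidf).ravel()  # similarity of doc 0 to all
ranking = sims.argsort()[::-1]                     # most similar first
print(ranking, sims[ranking])                      # doc 0 itself, then doc 1
```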

Keywords: big data, patent, text embedding, text similarity, vector space model

Procedia PDF Downloads 175
9125 Software Improvements of the Accuracy in the Air-Electronic Measurement Systems for Geometrical Dimensions

Authors: Miroslav H. Hristov, Velizar A. Vassilev, Georgi K. Dukendjiev

Abstract:

Due to the constant development of measurement systems and the drive toward computerization, unavoidable improvements are made to address the main disadvantages of air gauges. With the appearance of air-electronic measuring devices, some of these disadvantages have been solved. Their electrical output signal allows them to be included in modern systems for measurement information processing and process management. Producers' efforts are aimed at reducing the influence of supply pressure and of measurement system setup errors. The increased accuracy requirements and preventive error measures stem from the main use of air-electronic systems: the measurement of geometric dimensions in the automotive industry, where they are applied as modules in measuring systems for geometric parameters, form, orientation and location of elements.

Keywords: air-electronic, geometrical parameters, improvement, measurement systems

Procedia PDF Downloads 226
9124 Relative Navigation with Laser-Based Intermittent Measurement for Formation Flying Satellites

Authors: Jongwoo Lee, Dae-Eun Kang, Sang-Young Park

Abstract:

This study presents a precise relative navigation method for satellites flying in formation, using laser-based intermittent measurement data. The measurement data for the relative navigation between two satellites consist of a relative distance measured by a laser instrument and relative attitude angles obtained from attitude determination. The relative navigation solutions are estimated by both the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). The solutions estimated by the EKF may become inaccurate or even diverge as the measurement outage time gets longer, because the EKF relies on linearization. However, this study shows that the UKF with appropriate scaling parameters provides stable and accurate relative navigation solutions despite long measurement outage times and large initial errors, as compared to the EKF. Various navigation results are analyzed by adjusting the scaling parameters of the UKF.
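
The role of the UKF scaling parameters can be sketched with the filterpy library; the one-dimensional range-only model below is an illustrative stand-in for the paper's relative orbit dynamics, not the authors' filter:

```python
# A hedged sketch of a UKF with explicit scaling parameters (alpha, beta,
# kappa), using filterpy. The 1-D constant-velocity range model stands in
# for the paper's relative orbit dynamics and is purely illustrative.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):
    # State transition: [relative range, range rate] with constant velocity.
    return np.array([x[0] + x[1] * dt, x[1]])

def hx(x):
    # Measurement: the laser instrument gives the relative range only.
    return np.array([x[0]])

# Scaling parameters control how far sigma points spread around the mean.
points = MerweScaledSigmaPoints(n=2, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=1.0, hx=hx, fx=fx,
                            points=points)
ukf.x = np.array([100.0, -0.5])        # initial range [m] and rate [m/s]
ukf.P *= 10.0                          # inflated initial uncertainty
ukf.R = np.array([[0.01]])             # laser range noise variance
ukf.Q = np.eye(2) * 1e-4               # process noise

for z in [99.4, 98.9, 98.5]:           # intermittent range measurements
    ukf.predict()
    ukf.update(np.array([z]))
print(ukf.x)                           # estimated range and range rate
```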

Keywords: satellite relative navigation, laser-based measurement, intermittent measurement, unscented Kalman filter

Procedia PDF Downloads 357
9123 Influence of Measurement System on Negative Bias Temperature Instability Characterization: Fast BTI vs Conventional BTI vs Fast Wafer Level Reliability

Authors: Vincent King Soon Wong, Hong Seng Ng, Florinna Sim

Abstract:

Negative bias temperature instability (NBTI) is one of the critical degradation mechanisms in semiconductor device reliability, causing a shift in the threshold voltage (Vth). However, a thorough understanding of this reliability failure mechanism is still out of reach due to a recovery characteristic known as NBTI recovery. This paper demonstrates the severity of NBTI recovery as well as one of the effective methods used to mitigate it: minimizing measurement system delays. A comparison between two measurement systems with significantly different measurement delays shows how NBTI recovery causes deviations in the results and how fast measurement systems can mitigate it. Another method of minimizing NBTI recovery without the influence of the measurement system, known as fast wafer level reliability (FWLR) NBTI, was also performed to serve as a reference.

Keywords: fast vs slow BTI, fast wafer level reliability (FWLR), negative bias temperature instability (NBTI), NBTI measurement system, metal-oxide-semiconductor field-effect transistor (MOSFET), NBTI recovery, reliability

Procedia PDF Downloads 426
9122 Effect of Traffic Volume and Its Composition on Vehicular Speed under Mixed Traffic Conditions: A Kriging Based Approach

Authors: Subhadip Biswas, Shivendra Maurya, Satish Chandra, Indrajit Ghosh

Abstract:

The use of speed prediction models sometimes appears to be a feasible alternative to laborious field measurement, particularly when field data cannot fulfill the designer's requirements. However, developing speed models is a challenging task, specifically in the context of developing countries like India, where vehicles with diverse static and dynamic characteristics use the same right of way without any segregation. Here, traffic composition plays a significant role in determining vehicular speed. The present research examines the effects of traffic volume and its composition on vehicular speed under mixed traffic conditions. Classified traffic volume and speed data were collected from geometrically identical six-lane divided arterials in New Delhi. Based on these field data, speed prediction models were developed for each vehicle category by adopting the Kriging approximation technique, an alternative to commonly used regression. The models were validated against a data set kept aside for that purpose. The predicted speeds showed a great deal of agreement with the observed values, and the models outperform existing speed models. Finally, the proposed models were used to evaluate the effect of traffic volume and its composition on speed.
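
Kriging as used here is equivalent to Gaussian-process regression; a minimal sketch with scikit-learn is shown below. The two predictors (volume and a heavy-vehicle share) and the toy data are illustrative assumptions, not the study's variables or measurements:

```python
# A hedged sketch of Kriging (Gaussian-process regression) for speed
# prediction. The predictors (traffic volume, share of heavy vehicles)
# and the toy observations are illustrative, not the study's data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Columns: traffic volume [veh/h], heavy-vehicle share [0-1].
X = np.array([[1200, 0.10], [2400, 0.15], [3600, 0.25], [4800, 0.30]])
y = np.array([62.0, 54.0, 45.0, 38.0])   # mean car speed [km/h]

kernel = ConstantKernel(1.0) * RBF(length_scale=[1000.0, 0.1])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X, y)

# Predict speed (with uncertainty) for an unobserved volume/composition.
mean, std = gp.predict(np.array([[3000, 0.20]]), return_std=True)
print(f"predicted speed: {mean[0]:.1f} km/h (+/- {std[0]:.1f})")
```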

Keywords: speed, Kriging, arterial, traffic volume

Procedia PDF Downloads 353
9121 Creation and Management of Knowledge for Organization Sustainability and Learning

Authors: Deepa Kapoor, Rajshree Singh

Abstract:

This paper appreciates the emergence and growing importance of knowledge as a new production factor, which makes the development of technologies, methodologies and strategies for its measurement, creation, and diffusion one of the main priorities of organizations in the knowledge society. There are many models for the creation and management of knowledge, and diverse and varied perspectives for their study, analysis, and understanding. In this article, we take a theoretical approach to the types of models for the creation and management of knowledge; we discuss some of them and examine some of the difficulties and the key factors that determine the success of knowledge creation and management processes.

Keywords: knowledge creation, knowledge management, organizational development, organization learning

Procedia PDF Downloads 345
9120 Validation of the Formula for Air Attenuation Coefficient for Acoustic Scale Models

Authors: Katarzyna Baruch, Agata Szelag, Aleksandra Majchrzak, Tadeusz Kamisinski

Abstract:

The methodology for measuring the sound absorption coefficient in scale models is based on the ISO 354 standard. The measurement is realised indirectly: the coefficient is calculated from the reverberation time of an empty chamber as well as of the chamber with an inserted sample. It is crucial to keep the atmospheric conditions stable during both measurements. Possible differences may be amended based on the formulas for the atmospheric attenuation coefficient α given in ISO 9613-1. Model studies require scaling particular factors in compliance with specified characteristic numbers. For absorption coefficient measurement, these are, for example, the frequency range and the value of the attenuation coefficient m. Thanks to the capabilities of modern electroacoustic transducers, it is no longer a problem to scale the frequencies, which have to be proportionally higher. However, it may be problematic to reduce the values of the attenuation coefficient, which in practice is achieved by drying the air down to a defined relative humidity. Despite the change of frequency range and relative humidity of the air, the ISO 9613-1 standard still allows the calculation of an amendment for small differences in the atmospheric conditions in the chamber during measurements. The paper discusses a number of theoretical analyses and experimental measurements performed in order to obtain consistency between the values of the attenuation coefficient calculated from the formulas given in the standard and those obtained by measurement. The authors performed measurements of reverberation time in a chamber made at 1/8 scale, in the corresponding frequency range of 800 Hz - 40 kHz, and at different values of relative air humidity (from 40% down to 5%). Based on the measurements, empirical values of the attenuation coefficient were calculated and compared with theoretical ones. In general, the values correspond with each other, but for high frequencies and low relative air humidity the differences are significant. Those discrepancies may directly influence the measured sound absorption coefficient and cause errors. Therefore, the authors made an effort to determine an amendment minimizing the described inaccuracy.
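
For reference, the pure-tone atmospheric attenuation coefficient of ISO 9613-1 can be computed as sketched below; the coefficients are transcribed from the published formula and should be verified against the standard before use:

```python
# A hedged sketch of the ISO 9613-1 pure-tone atmospheric attenuation
# coefficient alpha [dB/m]. Coefficients are transcribed from the
# published formula; verify against the standard before relying on it.
import math

def iso9613_alpha(f, temp_c=20.0, rel_hum=40.0, pressure_kpa=101.325):
    T = temp_c + 273.15          # temperature [K]
    T0 = 293.15                  # reference temperature [K]
    pr = 101.325                 # reference pressure [kPa]
    pa = pressure_kpa

    # Molar concentration of water vapour h [%], via saturation pressure.
    psat = pr * 10.0 ** (-6.8346 * (273.16 / T) ** 1.261 + 4.6151)
    h = rel_hum * (psat / pr) / (pa / pr)

    # Relaxation frequencies of oxygen and nitrogen [Hz].
    fr_o = (pa / pr) * (24.0 + 4.04e4 * h * (0.02 + h) / (0.391 + h))
    fr_n = (pa / pr) * (T / T0) ** -0.5 * (
        9.0 + 280.0 * h * math.exp(-4.170 * ((T / T0) ** (-1.0 / 3.0) - 1.0)))

    # Attenuation coefficient [dB/m].
    return 8.686 * f ** 2 * (
        1.84e-11 * (pa / pr) ** -1 * (T / T0) ** 0.5
        + (T / T0) ** -2.5 * (
            0.01275 * math.exp(-2239.1 / T) / (fr_o + f ** 2 / fr_o)
            + 0.1068 * math.exp(-3352.0 / T) / (fr_n + f ** 2 / fr_n)))

# Attenuation at a scaled-up frequency in dried air, as in the 1/8 model.
print(f"alpha(40 kHz, 5% RH) = {iso9613_alpha(40e3, rel_hum=5.0):.3f} dB/m")
```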

Keywords: air absorption correction, attenuation coefficient, dimensional analysis, model study, scaled modelling

Procedia PDF Downloads 421
9119 Measurement Tools of the Maturity Model for IT Service Outsourcing in Higher Education Institutions

Authors: Victoriano Valencia García, Luis Usero Aragonés, Eugenio J. Fernández Vicente

Abstract:

Nowadays, the successful implementation of ICTs is vital for almost any kind of organization. Good governance and ICT management are essential for delivering value, managing technological risks, managing resources, and measuring performance. In addition, outsourcing is a strategic IT service solution that complements the IT services provided internally in organizations. This paper proposes the measurement tools of a new holistic maturity model based on the ISO/IEC 20000 and ISO/IEC 38500 standards and on the frameworks and best practices of ITIL and COBIT, with a specific focus on IT outsourcing. These measurement tools allow independent validation and practical application in the field of higher education, using a questionnaire, metrics tables, and continuous improvement plan tables as part of the measurement process. Guidelines and standards are proposed in the model to facilitate adaptation to universities and to achieve excellence in the outsourcing of IT services.

Keywords: IT governance, IT management, IT services, outsourcing, maturity model, measurement tools

Procedia PDF Downloads 591
9118 Inter Laboratory Comparison with Coordinate Measuring Machine and Uncertainty Analysis

Authors: Tugrul Torun, Ihsan A. Yuksel, Sinem On Aktan, Taha K. Veziroglu

Abstract:

In the quality control processes of some industries, the usage of CMMs has increased in recent years. Consequently, CMMs play important roles in the acceptance or rejection of manufactured parts, and it is important to be able to make decisions by performing fast measurements. According to the related technical drawing and its tolerances, measurement uncertainty should also be considered during assessment. Since uncertainty calculation is difficult and time-consuming, most companies ignore the uncertainty value in their routine inspection methods. Although studies on measurement uncertainty for CMMs have been carried out in recent years, there is still no generally applicable method for analyzing task-specific measurement uncertainty. A standard series for calculating measurement uncertainty exists (ISO 15530), but it is not practical for routine industrial measurement. In this study, an inter-laboratory comparison test was carried out at ROKETSAN A.Ş. among all dimensional inspection units. The reference part used is traceable to the national metrology institute TUBİTAK UME. Each unit measured the reference part according to the related technical drawings, and the task-specific measurement uncertainty was calculated with the related parameters. From the measurement results and uncertainty values, the En values were calculated.
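
The En (normalised error) statistic used in such comparisons is standard (e.g., ISO 13528): En = (x_lab − x_ref) / sqrt(U_lab² + U_ref²), with |En| ≤ 1 considered satisfactory. A minimal sketch with illustrative numbers follows:

```python
# A hedged sketch of the standard En (normalised error) statistic used in
# inter-laboratory comparisons: En = (x_lab - x_ref) / sqrt(U_lab^2 + U_ref^2),
# with |En| <= 1 taken as satisfactory. The numbers are illustrative.
import math

def en_value(x_lab, u_lab, x_ref, u_ref):
    """U values are expanded uncertainties (k=2) in the same units as x."""
    return (x_lab - x_ref) / math.sqrt(u_lab ** 2 + u_ref ** 2)

# Diameter measurements [mm] of the same reference part by three units.
labs = {"unit A": (25.0031, 0.0020),
        "unit B": (25.0048, 0.0025),
        "unit C": (24.9985, 0.0018)}
x_ref, u_ref = 25.0029, 0.0012   # reference value and its uncertainty

for name, (x, u) in labs.items():
    en = en_value(x, u, x_ref, u_ref)
    print(f"{name}: En = {en:+.2f} -> {'OK' if abs(en) <= 1 else 'check'}")
```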

Keywords: coordinate measurement, CMM, comparison, uncertainty

Procedia PDF Downloads 211
9117 Dual-Task: Immersion in the Interactions of Simultaneously Performed Tasks

Authors: M. Liebherr, P. Schubert, S. Kersten, C. Dietz, L. Franz, C. T. Haas

Abstract:

With its long history, the dual-task paradigm has become one of the most intriguing research fields regarding human brain functioning and cognition. However, findings concerning the effects of task interrelations are limited, especially for combined motor and cognitive tasks. We therefore aimed at developing a measurement system for analysing the interrelation effects of cognitive and motor tasks. The present study demonstrates the applicability of the measurement system and presents first results towards a systematization of different task combinations. Future investigations should combine imaging technologies with this newly developed measurement system.

Keywords: dual-task, interference, cognition, measurement

Procedia PDF Downloads 534
9116 Investigation of Learning Challenges in Building Measurement Unit

Authors: Argaw T. Gurmu, Muhammad N. Mahmood

Abstract:

The objective of this research is to identify architecture and construction management students' learning challenges in building measurement. This research used survey data collected from students who had completed the building measurement unit. The NVivo qualitative data analysis software was used to identify relevant themes. The analysis of the qualitative data revealed major learning difficulties such as the inadequacy of practice questions for the examination, inability to work as a team, lack of detailed understanding of the prerequisite units, insufficiency of the time allocated for tutorials, and incompatibility of lecture and tutorial schedules. The output of this research can be used as a basis for improving teaching and learning activities in construction measurement units.

Keywords: building measurement, construction management, learning challenges, evaluation survey

Procedia PDF Downloads 138
9115 Enhancing Signal Reception in a Mobile Radio Network Using Adaptive Beamforming Antenna Arrays Technology

Authors: Ugwu O. C., Mamah R. O., Awudu W. S.

Abstract:

This work aims at enhancing signal reception in a mobile radio network and minimizing its outage probability using adaptive beamforming antenna arrays. An empirical real-time drive measurement was carried out in a cellular network of Globalcom Nigeria Limited located at Ikeja, the capital of Lagos State, Nigeria, with reference base station number KJA 004. The empirical measurement covered the received signal strength and bit error rate, which were recorded for exact prediction of the signal strength of the network at the time of this research. The received signal strength and bit error rate were measured with a spectrum monitoring van with the help of a ray tracer, at intervals of 100 meters up to 700 meters from the transmitting base station. The distance and angular location measurements from the reference network were made with the help of the Global Positioning System (GPS). The other equipment used included transmitting equipment measurement software (TEMS software), laptops, and log files, which recorded received signal strength against distance from the base station. Results of about 11% signal failure were obtained from the real-time experiment, showing that mobile radio networks are prone to signal failure, which can be minimized using adaptive beamforming antenna arrays through a significant reduction in bit error rate, implying improved performance of the mobile radio network. In addition to the empirical measurements, enhanced mathematical models were developed and implemented as a reference for accurate prediction. The proposed signal models were based on the analysis of continuous time and discrete space, among other assumptions. These enhanced models were validated using MATLAB (version 7.6.3.35) and compared with a conventional antenna for accuracy. The outage models were used to manage the blocked-call experience in the mobile radio network. A 20% improvement was obtained when the adaptive beamforming antenna arrays were implemented in the wireless mobile radio network.
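
As a generic illustration of adaptive beamforming (not the authors' algorithm), a complex LMS beamformer for a uniform linear array can be sketched as follows; the array geometry, step size, and signals are illustrative assumptions:

```python
# A hedged sketch of a complex LMS adaptive beamformer for a uniform
# linear array (half-wavelength spacing). Array size, step size, and the
# training signal are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n_elem, n_snap, mu = 8, 2000, 0.01

def steering(theta_deg):
    # ULA steering vector for half-wavelength element spacing.
    theta = np.deg2rad(theta_deg)
    return np.exp(-1j * np.pi * np.arange(n_elem) * np.sin(theta))

a_sig = steering(20.0)                   # desired user at +20 degrees
a_int = steering(-45.0)                  # interferer at -45 degrees
s = np.exp(1j * 2 * np.pi * rng.random(n_snap))      # unit-modulus symbols
i = np.exp(1j * 2 * np.pi * rng.random(n_snap))
noise = 0.1 * (rng.standard_normal((n_elem, n_snap))
               + 1j * rng.standard_normal((n_elem, n_snap)))

X = np.outer(a_sig, s) + np.outer(a_int, i) + noise  # array snapshots

w = np.zeros(n_elem, dtype=complex)      # adaptive weights
for k in range(n_snap):                  # LMS: w <- w + mu * x * conj(e)
    y = np.vdot(w, X[:, k])              # beamformer output w^H x
    e = s[k] - y                         # error vs known training symbol
    w += mu * X[:, k] * np.conj(e)

print(f"gain toward user: {abs(np.vdot(w, a_sig)):.2f}, "
      f"toward interferer: {abs(np.vdot(w, a_int)):.2f}")
```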

Keywords: beamforming algorithm, adaptive beamforming, Simulink, reception

Procedia PDF Downloads 41
9114 Multiple Linear Regression for Rapid Estimation of Subsurface Resistivity from Apparent Resistivity Measurements

Authors: Sabiu Bala Muhammad, Rosli Saad

Abstract:

Multiple linear regression (MLR) models for fast estimation of true subsurface resistivity from apparent resistivity field measurements are developed and assessed in this study. The parameters investigated were apparent resistivity (ρₐ), horizontal location (X) and depth (Z) of measurement as the independent variables, and true resistivity (ρₜ) as the dependent variable. To achieve linearity in both resistivity variables, the datasets were first transformed into the logarithmic domain, following diagnostic checks of the normality of the dependent variable and of heteroscedasticity to ensure accurate models. Four MLR models were developed based on hierarchical combinations of the independent variables. The generated MLR coefficients were applied to another data set to estimate ρₜ values for validation. Contours of the estimated ρₜ values were plotted and compared to the observed data plots at the same colour scale and blanking for visual assessment. The accuracy of the models was assessed using the coefficient of determination (R²), standard error (SE) and weighted mean absolute percentage error (wMAPE). It is concluded that the MLR models can estimate ρₜ with a high level of accuracy.
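
The log-domain regression step can be sketched as below; the synthetic data and the particular relation tying ρₜ to the predictors are illustrative assumptions, not the study's field datasets:

```python
# A hedged sketch of a log-domain multiple linear regression of true
# resistivity on apparent resistivity, location, and depth. The synthetic
# data are illustrative; the study used field and inverted datasets.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
rho_a = 10 ** rng.uniform(0.5, 3.0, n)      # apparent resistivity [ohm-m]
x_loc = rng.uniform(0.0, 400.0, n)          # horizontal location [m]
depth = rng.uniform(1.0, 50.0, n)           # depth [m]
# Synthetic "true" resistivity loosely tied to the predictors.
rho_t = rho_a * (1.0 + 0.002 * depth) * 10 ** rng.normal(0.0, 0.05, n)

# Log-transform both resistivity variables to achieve linearity.
X = np.column_stack([np.log10(rho_a), x_loc, depth])
y = np.log10(rho_t)

model = LinearRegression().fit(X, y)
print("R^2 =", round(model.score(X, y), 3))
pred = 10 ** model.predict(np.array([[np.log10(150.0), 120.0, 10.0]]))
print(f"estimated rho_t: {pred[0]:.1f} ohm-m")
```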

Keywords: apparent resistivity, depth, horizontal location, multiple linear regression, true resistivity

Procedia PDF Downloads 276
9113 Improving the Analytical Power of Dynamic DEA Models, by the Consideration of the Shape of the Distribution of Inputs/Outputs Data: A Linear Piecewise Decomposition Approach

Authors: Elias K. Maragos, Petros E. Maravelakis

Abstract:

In dynamic data envelopment analysis (DDEA), a subfield of data envelopment analysis (DEA), the productivity of decision-making units (DMUs) is considered in relation to time. In this case, as most researchers accept, there are outputs produced by a DMU in one period to be used as inputs in a future period; those outputs are known as intermediates. The common DDEA models do not take into account the shape of the distribution of the inputs, outputs or intermediates data, assuming that the distribution of their virtual values does not deviate from linearity. This weakness limits the accuracy and analytical power of the traditional DDEA models. In this paper, the authors, using the concept of piecewise linear inputs and outputs, propose an extended DDEA model. The proposed model increases the flexibility of the traditional DDEA models and improves the measurement of the dynamic performance of DMUs.
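
For orientation, the static input-oriented CCR model that DEA variants build on can be sketched as a small linear program; this is the standard baseline, not the authors' piecewise-linear dynamic model, and the data are illustrative:

```python
# A hedged sketch of the static input-oriented CCR DEA model (the standard
# baseline such papers extend; NOT the authors' piecewise-linear dynamic
# model). Efficiency of DMU o is min theta subject to the envelopment
# constraints. The input/output data are illustrative.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 6.0]])   # (DMUs x inputs)
Y = np.array([[1.0], [1.0], [1.5]])                  # (DMUs x outputs)
n, m = X.shape
s = Y.shape[1]

for o in range(n):
    c = np.r_[1.0, np.zeros(n)]                      # minimise theta
    # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o, :].reshape(m, 1), X.T]
    # Outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[o, :]]
    bounds = [(None, None)] + [(0, None)] * n        # theta free, lambda >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    print(f"DMU {o}: efficiency = {res.fun:.3f}")
```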

Keywords: Dynamic Data Envelopment Analysis, DDEA, piecewise linear inputs, piecewise linear outputs

Procedia PDF Downloads 160
9112 Pavement Management for a Metropolitan Area: A Case Study of Montreal

Authors: Luis Amador Jimenez, Md. Shohel Amin

Abstract:

Pavement performance models are based on projections of observed traffic loads, which makes it uncertain to study funding strategies in the long run if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic developments could change traffic flows. This study addresses both issues through a case study for the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate the annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A back-propagation neural network (BPN) with a generalized delta rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming of lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to keep arterial and local roads in Montreal in good condition. Montreal drivers prefer public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, and ESALs are expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state seems to be reached.
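
The ingredients named in the abstract (back-propagation with a generalized delta rule, i.e. a gradient step plus momentum) can be sketched as follows; the architecture, data, and rates are illustrative assumptions, not the study's calibrated model:

```python
# A hedged sketch of a back-propagation network trained with the
# generalized delta rule (gradient step plus momentum). Architecture,
# synthetic data, and rates are illustrative, not the study's model.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, (200, 2))            # e.g., normalized age, ESALs
y = 1.0 - 0.6 * X[:, [0]] - 0.3 * X[:, [1]]    # toy pavement condition index

lr, momentum, hidden = 0.1, 0.9, 8             # learning rate, momentum (GDR)
W1 = rng.normal(0.0, 0.5, (2, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
vW1, vW2 = np.zeros_like(W1), np.zeros_like(W2)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)                   # forward pass
    out = h @ W2 + b2                          # linear output unit
    err = out - y                              # dE/d(out) for squared error
    delta1 = (err @ W2.T) * h * (1.0 - h)      # backpropagated hidden delta
    gW2, gb2 = h.T @ err / len(X), err.mean(0)
    gW1, gb1 = X.T @ delta1 / len(X), delta1.mean(0)
    vW2 = momentum * vW2 - lr * gW2            # generalized delta rule:
    vW1 = momentum * vW1 - lr * gW1            # gradient step plus momentum
    W2 += vW2; b2 -= lr * gb2
    W1 += vW1; b1 -= lr * gb1

h = sigmoid(X @ W1 + b1)
print("final MSE:", float(np.mean((h @ W2 + b2 - y) ** 2)))
```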

Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization

Procedia PDF Downloads 460
9111 Measurement of Intellectual Capital in an Algerian Company

Authors: S. Brahmi, S. Aitouche, M. D. Mouss

Abstract:

Every modern company should measure the value of its intellectual capital and report it to complement the traditional annual balance sheets. The purpose of this work is to measure the intellectual capital of an Algerian company (or production system) using the Weightless Wealth Tool Kit (WWTK). The results of the measurement of intellectual capital are supplemented by traditional financial ratios. The measurement was applied to the National Company of Well Services (ENSP) in Hassi Messaoud, in the south of Algeria. We calculated the intellectual capital (intangible resources) of the ENSP to help the organization better capitalize on the potential of its workers and their know-how. The intangible value of the ENSP was evaluated at 16,936,173,345 DA in 2015.

Keywords: financial valuation, intangible capital, intellectual capital, intellectual capital measurement

Procedia PDF Downloads 286
9110 Distance and Coverage: An Assessment of Location-Allocation Models for Fire Stations in Kuwait City, Kuwait

Authors: Saad M. Algharib

Abstract:

The major concern of planners when placing fire stations is finding optimal locations such that fire companies can reach fire locations within a reasonable response time or distance. Planners are also concerned with the number of fire stations needed to cover all service areas and fires, as demands, within a standard response time or distance. One tool for such analysis is location-allocation modelling. Location-allocation models enable planners to determine the optimal locations of facilities in an area so as to serve regional demands in the most efficient way. The purpose of this study is to examine the geographic distribution of the existing fire stations in Kuwait City. The study utilized location-allocation models within a geographic information system (GIS) environment, together with a number of statistical functions, to assess the current locations of fire stations, to investigate how well all service areas are covered, and to determine how many additional fire stations are needed and where. Four different location-allocation models were compared to find which cover more demand than the others given the same number of fire stations. The study tests ways of combining variables, instead of using one variable at a time, to create a new measurement that influences the optimal station locations, and it also tests how sensitive location-allocation models are to different levels of spatial dependency. The results indicate that some districts in Kuwait City are not covered by the existing fire stations and that these uncovered districts cluster together. The study also identifies where to locate the new fire stations, and it provides users of these models with a new variable that can assist them in selecting the best locations. The results include information about how the location-allocation models behave in response to different levels of spatial dependency of demands: the models perform better with clustered demands. From the additional analysis carried out in this study, it can be concluded that these models perform differently under different spatial patterns.
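
A minimal sketch of one classic location-allocation formulation (maximal covering, solved here with a greedy heuristic) is shown below; the coordinates, coverage radius, and solver are illustrative assumptions, not the study's GIS models:

```python
# A hedged sketch of a maximal-covering location model solved greedily:
# pick p station sites so as to cover the most demand within a response
# distance. Coordinates, radius, and the greedy heuristic are illustrative.
import numpy as np

rng = np.random.default_rng(3)
demands = rng.uniform(0, 10, (60, 2))     # demand points (e.g., districts)
weights = rng.integers(1, 20, 60)         # demand weight per point
sites = rng.uniform(0, 10, (15, 2))       # candidate station sites
radius, p = 2.0, 4                        # coverage distance, stations to open

# cover[i, j] is True if site i covers demand j.
dist = np.linalg.norm(sites[:, None, :] - demands[None, :, :], axis=2)
cover = dist <= radius

chosen, covered = [], np.zeros(len(demands), dtype=bool)
for _ in range(p):                        # greedily add best marginal site
    gains = [weights[cover[i] & ~covered].sum() for i in range(len(sites))]
    best = int(np.argmax(gains))
    chosen.append(best)
    covered |= cover[best]

print(f"sites chosen: {chosen}")
print(f"demand covered: {weights[covered].sum()} of {weights.sum()}")
```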

Keywords: geographic information science, GIS, location-allocation models, geography

Procedia PDF Downloads 177
9109 An Autopilot System for Static Zone Detection

Authors: Yanchun Zuo, Yingao Liu, Wei Liu, Le Yu, Run Huang, Lixin Guo

Abstract:

Electric field detection is important in many application scenarios. The traditional strategy is to measure the electric field with a person walking around the area under test, but this cannot provide satisfactory measurement accuracy. To solve this problem, an autopilot measurement system was devised. A mini-car was produced that can travel in the area under test according to the program in its CPU. The electric field measurement platform (EFMP) carries a central computer, two horn antennas, and a vector network analyzer. The mini-car stops at the sampling points according to the preset plan. When the car stops, the EFMP probes the electric field and stores the data on the hard disk. After all the sampling points have been traversed, an electric field map can be plotted. The proposed system can give an accurate description of the field distribution in the chamber.

Keywords: autopilot mini-car measurement system, electric field detection, field map, static zone measurement

Procedia PDF Downloads 101
9108 A Dual-Polarized Wideband Probe for Near-Field Antenna Measurement

Authors: K. S. Sruthi

Abstract:

Antennas are one of the most important parts of a communication chain. They are used for both communication and calibration purposes. New developments in probe technologies have enabled near-field probes with much larger bandwidth. The objective of this paper is to design, simulate and fabricate a dual-polarized, wideband, inverted quad-ridged horn antenna which can be used as a measurement probe for near-field measurements. The inverted quad-ridged horn probe not only provides measurement over a much wider range but also provides dual-polarization measurement, thus enabling antenna developers to measure UWB, UHF, and VHF antennas more precisely and at lower cost. The antenna is designed to meet characteristics such as high gain, light weight, and linear polarization with suppressed side lobes, for near-field measurement applications. The proposed antenna is simulated with commercially available packages such as Ansoft HFSS. The antenna gives a moderate gain over the operating range while delivering a wide bandwidth.

Keywords: near-field antenna measurement, inverted quad-ridge horn antenna, wideband antennas, dual polarized antennas, Ansoft HFSS

Procedia PDF Downloads 425
9107 Measurement Errors and Misclassifications in Covariates in Logistic Regression: Bayesian Adjustment of Main and Interaction Effects and the Sample Size Implications

Authors: Shahadut Hossain

Abstract:

Measurement errors in continuous covariates and/or misclassifications in categorical covariates are common in epidemiological studies. Regression analysis that ignores such mismeasurements seriously biases the estimated main and interaction effects of covariates on the outcome of interest; thus, adjustments for them are necessary. In this research, we propose a Bayesian parametric framework for eliminating the deleterious impacts of covariate mismeasurement in logistic regression. The proposed adjustment method is unified and can therefore be applied to any generalized linear or non-linear regression model. Furthermore, adjustment for covariate mismeasurement requires validation data, usually in the form of either gold-standard measurements or replicates of the mismeasured covariates on a subset of the study population. Initial investigation shows that the adequacy of such adjustment depends on the sizes of the main and validation samples, especially when the prevalences of the categorical covariates are low. We therefore investigate the impact of main and validation sample sizes on the adjusted estimates, and provide general guidelines about these sample sizes based on simulation studies.
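
A minimal sketch of a Bayesian adjustment for a continuous covariate measured with classical error in logistic regression can be written with PyMC; this is a generic latent-variable model, not the authors' exact specification, and the priors and synthetic data are illustrative assumptions (in practice, validation data or replicates would inform the measurement-error SD):

```python
# A hedged sketch of Bayesian adjustment for a continuous covariate
# measured with classical error in logistic regression (a generic model,
# not the authors' exact specification). Priors and data are illustrative;
# in practice validation data or replicates would inform sigma_me.
import numpy as np
import pymc as pm

rng = np.random.default_rng(4)
n = 300
x_true = rng.normal(0.0, 1.0, n)                  # unobserved true covariate
w = x_true + rng.normal(0.0, 0.5, n)              # mismeasured surrogate
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.0 * x_true))))

with pm.Model():
    alpha = pm.Normal("alpha", 0.0, 2.0)
    beta = pm.Normal("beta", 0.0, 2.0)
    sigma_me = pm.HalfNormal("sigma_me", 1.0)     # measurement-error SD
    x = pm.Normal("x", 0.0, 1.0, shape=n)         # latent true covariate
    pm.Normal("w_obs", mu=x, sigma=sigma_me, observed=w)
    pm.Bernoulli("y_obs", logit_p=alpha + beta * x, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(idata.posterior["beta"].mean().item())      # adjusted effect estimate
```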

Keywords: measurement errors, misclassification, mismeasurement, validation sample, Bayesian adjustment

Procedia PDF Downloads 408
9106 Forced-Choice Measurement Models of Behavioural, Social, and Emotional Skills: Theory, Research, and Development

Authors: Richard Roberts, Anna Kravtcova

Abstract:

Introduction: The realisation that personality can change over the course of a lifetime has led to a new companion model to the Big Five: the behavioural, emotional, and social skills approach (BESSA). BESSA hypothesizes that this set of skills represents how an individual is thinking, feeling, and behaving when the situation calls for it, as opposed to traits, which represent how someone tends to think, feel, and behave averaged across situations. The five major skill domains share parallels with the Big Five factor (BFF) model: creativity and innovation (openness), self-management (conscientiousness), social engagement (extraversion), cooperation (agreeableness), and emotional resilience (emotional stability) skills. We point to noteworthy limitations in the current operationalisation of BESSA skills (i.e., via Likert-type items) and offer a different measurement approach: forced choice. Method: In this forced-choice paradigm, individuals were given three skill items (e.g., managing my time) and asked to select the one they believed they were "best at" and the one they were "worst at". Thurstonian IRT models allow these responses to be placed on a normative scale. Two multivariate studies (N = 1178) were conducted with a 22-item forced-choice version of the BESSA, a published measure of the BFF, and various criteria. Findings: Confirmatory factor analysis of the forced-choice assessment showed acceptable model fit (RMSEA < 0.06), while reliability estimates were reasonable (around 0.70 for each construct). Convergent validity evidence was as predicted (correlations between 0.40 and 0.60 for corresponding BFF and BESSA constructs). Notable was the extent to which the forced-choice BESSA assessment improved test-criterion relationships over and above the BFF. For example, typical regression models find BFF personality accounting for 25% of the variance in life satisfaction scores; both studies showed incremental gains over the BFF exceeding 6% (i.e., BFF and BESSA together accounted for over 31% of the variance in both studies). Discussion: Forced-choice measurement models offer the promise of creating equated test forms that may unequivocally measure skill gains and are less prone to faking and reference bias effects. Implications for practitioners are discussed, especially those interested in selection, succession planning, and training and development. We also discuss how the forced-choice method can be applied to other constructs such as emotional immunity, cross-cultural competence, and self-estimates of cognitive ability.
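
The core data step in Thurstonian IRT modelling of such blocks is expanding each best/worst choice into the pairwise comparisons it implies; a sketch of that coding follows (the item labels are illustrative):

```python
# A hedged sketch of the standard data-coding step for Thurstonian IRT:
# a best/worst choice within a block of three items implies three binary
# pairwise outcomes. Item labels are illustrative.
from itertools import combinations

def block_to_pairs(items, best, worst):
    """Return {(i, j): 1 if i is preferred over j} implied by best/worst."""
    outcomes = {}
    for i, j in combinations(items, 2):
        if best in (i, j):                 # "best" beats everything
            outcomes[(i, j)] = 1 if i == best else 0
        elif worst in (i, j):              # everything beats "worst"
            outcomes[(i, j)] = 0 if i == worst else 1
        # In a block of three, every pair involves the best or worst item,
        # so all three pairwise outcomes are determined.
    return outcomes

block = ("managing my time", "staying calm", "leading a discussion")
print(block_to_pairs(block, best="managing my time", worst="staying calm"))
```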

Keywords: Big Five, forced-choice method, BFF, methods of measurements

Procedia PDF Downloads 94
9105 Development of Automatic Laser Scanning Measurement Instrument

Authors: Chien-Hung Liu, Yu-Fen Chen

Abstract:

This study used a triangulation laser probe and a three-axis mobile platform for surface measurement, programmed the system, and applied it to real-time analysis of the measured data. The system integration works as follows: the triangulation laser probe performs non-contact measurement by scattering or reflection, the captured signals are transferred to the computer through RS-232, and RS-485 is used to control the three-axis platform for wide-range measurement. The data captured by the laser probe are formed into a 3D surface. This study constructed the optical measurement application in a visual programming language. First, the signals are transmitted to the computer through RS-232/RS-485; then they are stored and recorded in a graphical interface in real time. The program analyzes the various messages, presents appropriate graphs, and processes the data to provide users with friendly graphical interfaces and monitoring of the data processing state, indicating graphically whether the current data are normal. The major functions of the measurement system developed by this study are thickness measurement, statistical process control (SPC), surface smoothness analysis, and trend line calculation; a result report can be made and printed promptly. This study measured different heights and surfaces successfully, performed on-line data analysis and processing effectively, and developed a man-machine interface for users to operate.
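
The acquisition loop can be sketched in Python with the pyserial library (the original system was built in a visual programming language); the port names and the probe's line protocol below are hypothetical:

```python
# A hedged sketch of the acquisition loop: read height values from a
# triangulation probe over RS-232 while stepping a 3-axis stage over
# RS-485. Port names and the ASCII line protocol are hypothetical.
import serial  # pyserial

probe = serial.Serial(port="COM3", baudrate=9600, timeout=1)   # RS-232
stage = serial.Serial(port="COM4", baudrate=9600, timeout=1)   # RS-485 adapter

surface = []                                  # (x, y, z) samples
for ix in range(10):                          # raster scan, 10 x 10 grid
    for iy in range(10):
        x_mm, y_mm = ix * 1.0, iy * 1.0
        # Hypothetical move command understood by the stage controller.
        stage.write(f"MOVE {x_mm:.2f} {y_mm:.2f}\r\n".encode())
        line = probe.readline().decode().strip()   # e.g. "Z=12.345"
        if line.startswith("Z="):
            surface.append((x_mm, y_mm, float(line[2:])))

probe.close(); stage.close()
print(f"collected {len(surface)} surface points")
```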

Keywords: laser probe, non-contact measurement, triangulation measurement principle, statistical process control, LabVIEW

Procedia PDF Downloads 360
9104 A Review on Water Models of Surface Water Environment

Authors: Shahbaz G. Hassan

Abstract:

Water quality models are very important for predicting changes in surface water quality for environmental management. The aim of this paper is to give an overview of water quality models and to provide directions for selecting a model in a specific situation. Water quality models include models based on a mechanistic approach and models that simulate water quality without considering mechanisms. Mechanistic models can be widely applied and are capable of long-term simulation, though at the cost of high complexity; therefore, more space is devoted to explaining the principles of and application experience with mechanistic models. Mechanistic models make certain assumptions about rivers, lakes and estuaries, which limits their range of application, and this paper introduces their principles and applications for these three scenarios. Empirical models, on the other hand, are easier to compute and are not limited by geographical conditions, but they cannot be used with confidence to simulate long-term changes. This paper divides the empirical models into two broad categories according to their mathematical algorithms: models based on artificial intelligence and models based on statistical methods.

Keywords: empirical models, mathematical, statistical, water quality

Procedia PDF Downloads 264
9103 Mathematical Modeling of the Working Principle of Gravity Gradient Instrument

Authors: Danni Cong, Meiping Wu, Hua Mu, Xiaofeng He, Junxiang Lian, Juliang Cao, Shaokun Cai, Hao Qin

Abstract:

The gravity field is of great significance in geoscience, the national economy and national security, and gravity gradient measurement has been extensively studied due to its higher accuracy than gravity measurement. The gravity gradient sensor, one of the core devices of the gravity gradient instrument, plays a key role in measurement accuracy. Therefore, this paper starts by analyzing the working principle of the gravity gradient sensor via Newton's law, and then considers the relative motion between inertial and non-inertial systems to build an adequate mathematical model, laying a foundation for measurement error calibration and measurement accuracy improvement.
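
For reference, the quantity such instruments measure is the gravity gradient tensor, the spatial derivative of the gravity vector; under the common sign convention g = ∇V, where V is the gravitational potential (some authors use the opposite sign), it reads:

```latex
% Gravity gradient tensor: second derivatives of the potential V,
% with components conventionally reported in eotvos (1 E = 1e-9 s^-2).
\Gamma_{ij} = \frac{\partial g_i}{\partial x_j}
            = \frac{\partial^2 V}{\partial x_i \, \partial x_j},
\qquad i, j \in \{x, y, z\}
```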

Keywords: gravity gradient, gravity gradient sensor, accelerometer, single-axis rotation modulation

Procedia PDF Downloads 326
9102 A Comparison of Bias Among Relaxed Divisor Methods Using 3 Bias Measurements

Authors: Sumachaya Harnsukworapanich, Tetsuo Ichimori

Abstract:

Apportionment methods are used in many countries to calculate the distribution of seats in political bodies. For example, in the United States (U.S.), such a method is used to distribute House seats proportionally based on the populations of the states. Famous apportionment methods include the divisor methods known as the Adams, Dean, Hill, Jefferson and Webster methods. Sometimes the results of these divisor methods are unfair or biased, so it is important to examine their optimization using bias measurements to obtain precise and fair results. In this research, we investigate the bias of divisor methods in the U.S. House of Representatives toward large and small states by applying the Stolarsky mean method. We compare the bias of the apportionment methods using two famous bias measurements, the Balinski and Young measurement and the Ernst measurement, both of which have formulas distinguishing large and small states. The third measurement, created by the researchers, does not factor the distinction between large and small states into its formula. All three measurements are compared, and the results show that our measurement produces results similar to the other two famous measurements.
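
The classical divisor (highest-averages) methods named above differ only in their divisor sequence d(k); a compact sketch, with illustrative state populations, follows:

```python
# A hedged sketch of the classical divisor (highest-averages) methods
# named above. Each method is defined by its divisor sequence d(k); seats
# go one at a time to the state with the highest priority pop / d(seats).
import math

DIVISORS = {
    "Adams":     lambda k: float(k),                 # d(k) = k
    "Dean":      lambda k: k * (k + 1) / (k + 0.5),  # harmonic mean
    "Hill":      lambda k: math.sqrt(k * (k + 1)),   # geometric mean
    "Webster":   lambda k: k + 0.5,                  # arithmetic mean
    "Jefferson": lambda k: float(k + 1),             # d(k) = k + 1
}

def apportion(populations, house_size, d):
    seats = [0] * len(populations)
    for _ in range(house_size):
        # Priority is infinite while d(seats) == 0 (Adams, Dean, Hill),
        # so every state receives a first seat before any state gets two.
        pri = [p / d(s) if d(s) > 0 else math.inf
               for p, s in zip(populations, seats)]
        seats[pri.index(max(pri))] += 1
    return seats

pops = [9_300_000, 3_200_000, 1_100_000, 400_000]    # illustrative states
for name, d in DIVISORS.items():
    print(f"{name:9s} {apportion(pops, 14, d)}")
```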

Keywords: apportionment, bias, divisor, fair, measurement

Procedia PDF Downloads 366
9101 Management and Marketing Implications of Tourism Gravity Models

Authors: Clive L. Morley

Abstract:

Gravity models and panel data modelling of tourism flows are receiving renewed attention after decades of general neglect. Such models have quite different underpinnings from conventional demand models derived from micro-economic theory: they operate at a different level of data and with different theoretical bases. These differences have important consequences for the interpretation of the results and for their policy and managerial implications. This review compares and contrasts the two model forms, clarifying the distinguishing features and the estimation requirements of each. In general, gravity models are not recommended for addressing specific management and marketing purposes.

Keywords: gravity models, micro-economics, demand models, marketing

Procedia PDF Downloads 438
9100 Cracks Detection and Measurement Using VLP-16 LiDAR and Intel Depth Camera D435 in Real-Time

Authors: Xinwen Zhu, Xingguang Li, Sun Yi

Abstract:

Cracks are among the most common kinds of damage in buildings, bridges, roads and similar structures, and they may pose safety hazards; they occur in structures of many materials. Traditional methods of manual detection and measurement are subjective, time-consuming, and labor-intensive, and are gradually becoming unable to meet the needs of modern development. In addition, crack detection and measurement must be safe given space limitations and hazards, so intelligent crack detection has become a necessary line of research. In this paper, an efficient method for crack detection and quantification using a 3D sensor (LiDAR) and a depth camera is proposed. The method works even in a dark environment, which is common in real-world applications. The LiDAR spins rapidly to scan the surrounding environment, firing thousands of laser pulses per second and providing a rich 3D point cloud in real time. The LiDAR provides quite accurate depth information: the distance of each point is determined to within about ±3 cm, and top-range models can see beyond 100 m. This accuracy, however, is still too coarse for some high-precision structures, so a depth camera is used to make the crack depth measurement much more accurate; the cracks are scanned by the depth camera at the same time. Finally, the data from the LiDAR and depth camera are analyzed together, and the sizes of the cracks are quantified successfully. The comparison shows that the minimum and mean absolute percentage errors between measured and calculated widths are about 2.22% and 6.27%, respectively. The experiments and results are presented in this paper.

Keywords: LiDAR, depth camera, real-time, detection and measurement

Procedia PDF Downloads 224
9099 Infusion Pump Historical Development, Measurement and Parts of Infusion Pump

Authors: Samuel Asrat

Abstract:

Infusion pumps have become indispensable tools in modern healthcare, allowing for precise and controlled delivery of fluids, medications, and nutrients to patients. This paper provides an overview of the historical development, measurement, and parts of infusion pumps. The historical development of infusion pumps can be traced back to the early 1960s when the first rudimentary models were introduced. These early pumps were large, cumbersome, and often unreliable. However, advancements in technology and engineering over the years have led to the development of smaller, more accurate, and user-friendly infusion pumps. Measurement of infusion pumps involves assessing various parameters such as flow rate, volume delivered, and infusion duration. Flow rate, typically measured in milliliters per hour (mL/hr), is a critical parameter that determines the rate at which fluids or medications are delivered to the patient. Accurate measurement of flow rate is essential to ensure the proper administration of therapy and prevent adverse effects. Infusion pumps consist of several key parts, including the pump mechanism, fluid reservoir, tubing, and control interface. The pump mechanism is responsible for generating the necessary pressure to push fluids through the tubing and into the patient's bloodstream. The fluid reservoir holds the medication or solution to be infused, while the tubing serves as the conduit through which the fluid travels from the reservoir to the patient. The control interface allows healthcare providers to program and adjust the infusion parameters, such as flow rate and volume. In conclusion, infusion pumps have evolved significantly since their inception, offering healthcare providers unprecedented control and precision in delivering fluids and medications to patients. Understanding the historical development, measurement, and parts of infusion pumps is essential for ensuring their safe and effective use in clinical practice.

Keywords: dip, ip, sp, is

Procedia PDF Downloads 67
9098 Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques

Authors: F. H. Abanda, B. Kamsu-Foguem, J. H. M. Tah

Abstract:

Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming and error-prone, which has partly led to most cost estimates being unclear and riddled with inaccuracies that at times lead to over- or under-estimation of construction cost. The development of standard sets of measurement rules that are understandable by all those involved in a construction project has not fully resolved these challenges. Emerging building information modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological, machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based in textbooks and require manual editing into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges in this exploratory study are also reported and recommendations for future studies proposed.
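
To illustrate what a machine-readable measurement rule might look like, a measurement rule can be encoded as RDF triples with the rdflib library; the namespace, classes, and the example rule below are hypothetical illustrations, not the NRM ontology developed in the paper:

```python
# A hedged sketch of encoding a measurement rule as machine-readable RDF
# triples with rdflib. The namespace, classes, and the example rule are
# hypothetical illustrations, not the NRM ontology developed in the paper.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

NRM = Namespace("http://example.org/nrm#")   # hypothetical namespace
g = Graph()
g.bind("nrm", NRM)

# A (hypothetical) class of measurement rules and one example instance.
g.add((NRM.MeasurementRule, RDF.type, RDFS.Class))
g.add((NRM.WallAreaRule, RDF.type, NRM.MeasurementRule))
g.add((NRM.WallAreaRule, RDFS.label, Literal("Measure walls in m2")))
g.add((NRM.WallAreaRule, NRM.appliesTo, NRM.Wall))
g.add((NRM.WallAreaRule, NRM.unit, Literal("m2")))
g.add((NRM.WallAreaRule, NRM.deductOpeningsOver, Literal(0.5)))  # m2

print(g.serialize(format="turtle"))          # machine-readable output
```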

Keywords: BIM, construction projects, cost estimation, NRM, ontology

Procedia PDF Downloads 551