Search results for: measurement models
9007 Development of a Real Time Axial Force Measurement System and IoT-Based Monitoring for Smart Bearing
Authors: Hassam Ahmed, Yuanzhi Liu, Yassine Selami, Wei Tao, Hui Zhao
Abstract:
The purpose of this research is to develop a real-time axial force measurement system for a smart bearing using strain gauges, with data acquisition performed by an Arduino microcontroller chosen for its ease of use and low cost. The measured signal is conditioned by a Wheatstone bridge and then discretized by an analog-to-digital converter (ADC). For bearing monitoring, a real-time monitoring system based on the Internet of Things (IoT) and Bluetooth was developed. Experimental tests were performed on a bearing over a force range of up to 600 kN. The experimental results show a proportional linear relationship between the applied force and the output voltage, with a coefficient of determination (R²) of 0.9878 based on the regression analysis. Keywords: bearing, force measurement, IoT, strain gauge
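For illustration, a minimal sketch of the regression step described in this abstract (fitting a linear force-voltage relationship and reporting R²) could look as follows; the sample readings and variable names are hypothetical placeholders, not the authors' data.

```python
# Minimal sketch of the regression analysis described above: a linear
# force-voltage fit and its coefficient of determination (R^2).
# The sample readings below are hypothetical, not the authors' measurements.
import numpy as np

force_kN = np.array([0, 100, 200, 300, 400, 500, 600], dtype=float)   # applied axial force
voltage_mV = np.array([0.02, 1.05, 2.11, 3.08, 4.20, 5.15, 6.22])     # bridge output after ADC scaling

# Least-squares fit: voltage = slope * force + intercept
slope, intercept = np.polyfit(force_kN, voltage_mV, deg=1)

# Coefficient of determination (R^2) of the linear model
predicted = slope * force_kN + intercept
ss_res = np.sum((voltage_mV - predicted) ** 2)
ss_tot = np.sum((voltage_mV - voltage_mV.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"sensitivity: {slope:.4f} mV/kN, offset: {intercept:.4f} mV, R^2: {r_squared:.4f}")
```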
Procedia PDF Downloads 142
9006 Contribution to the Evaluation of Uncertainties of Measurement to the Data Processing Sequences of a CMM
Authors: Hassina Gheribi, Salim Boukebbab
Abstract:
The measurement of parts manufactured on a CMM (coordinate measuring machine) is based on associating a surface of perfect geometry with the set of probed points, via a mathematical calculation of the distances between the probed points and that surface. Since real surfaces are never perfect, they are measured with a number of points higher than the minimum necessary to define them mathematically. The central problem of three-dimensional metrology is therefore the estimation of the orientation, location and intrinsic parameters of this surface. Knowing the numerical uncertainties attached to these parameters helps the metrologist make decisions and declare the conformity of the part to the specifications fixed on the design drawing. In this paper, we present a data-processing model in Visual Basic 6 which makes it possible to determine all of these parameters, and their uncertainties, automatically. Keywords: coordinate measuring machines (CMM), associated surface, uncertainties of measurement, acquisition and modeling
Procedia PDF Downloads 327
9005 Blood Volume Pulse Extraction for Non-Contact Photoplethysmography Measurement from Facial Images
Authors: Ki Moo Lim, Iman R. Tayibnapis
Abstract:
According to WHO estimates, 38 out of 56 million (68%) global deaths in 2012 were due to noncommunicable diseases (NCDs). One way to avert NCDs is early detection of disease. To that end, we developed the 'U-Healthcare Mirror', which is able to measure vital signs such as heart rate (HR) and respiration rate without any physical contact or user awareness. To measure HR in the mirror, we utilized a digital camera. The camera records red, green, and blue (RGB) discoloration from the user's facial image sequences. We extracted the blood volume pulse (BVP) from the RGB discoloration because the discoloration of the facial skin follows the BVP. We used blind source separation (BSS) to extract the BVP from the RGB discoloration and adaptive filters to remove noise. Both the BSS and the adaptive filters were implemented with the singular value decomposition (SVD) method. HR was estimated from the obtained BVP. We compared HR measurements obtained with our method and with a previous method based on independent component analysis (ICA) against a commercial oximeter. The experiment was conducted at distances between 30 and 110 cm and light intensities between 5 and 2000 lux, with seven measurements for each condition. The estimated HR showed a mean error of 2.25 bpm and a Pearson correlation coefficient of 0.73. The accuracy has improved compared to previous work. The optimal distance between the mirror and the user for HR measurement was 50 cm at medium light intensity, around 550 lux. Keywords: blood volume pulse, heart rate, photoplethysmography, independent component analysis
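A minimal sketch of the processing chain described here (normalized RGB traces, SVD-based blind source separation, and HR from the dominant spectral peak) is given below; the synthetic input signal, frame rate and component selection are simplifying assumptions rather than the authors' pipeline.

```python
# Sketch of the chain described above: detrended RGB traces, SVD-based source
# separation, and HR from the dominant spectral peak. The synthetic input and
# the choice of component are simplifying assumptions.
import numpy as np

fs = 30.0                                   # camera frame rate (assumed), Hz
t = np.arange(0, 30, 1 / fs)                # 30 s of frames
true_hr_hz = 1.2                            # synthetic pulse at 72 bpm
rgb = np.vstack([                           # 3 x N matrix of mean facial RGB values
    0.02 * np.sin(2 * np.pi * true_hr_hz * t) + 0.5 + 0.01 * np.random.randn(t.size),
    0.04 * np.sin(2 * np.pi * true_hr_hz * t) + 0.6 + 0.01 * np.random.randn(t.size),
    0.01 * np.sin(2 * np.pi * true_hr_hz * t) + 0.4 + 0.01 * np.random.randn(t.size),
])

# Normalize each channel (zero mean, unit variance) before separation
rgb_norm = (rgb - rgb.mean(axis=1, keepdims=True)) / rgb.std(axis=1, keepdims=True)

# SVD-based blind source separation: rows of Vt are candidate source signals
U, s, Vt = np.linalg.svd(rgb_norm, full_matrices=False)
bvp = Vt[0]                                 # assume the strongest component carries the BVP

# Heart rate from the spectral peak within a plausible band (0.7-3 Hz ~ 42-180 bpm)
freqs = np.fft.rfftfreq(bvp.size, d=1 / fs)
spectrum = np.abs(np.fft.rfft(bvp))
band = (freqs >= 0.7) & (freqs <= 3.0)
hr_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated HR: {hr_bpm:.1f} bpm")
```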
Procedia PDF Downloads 329
9004 Flow Measurement Using Magnetic Meters in Large Underground Cooling Water Pipelines
Authors: Humanyun Zahir, Irtsam Ghazi
Abstract:
This report outlines the basic installation and operation of magnetic inductive flow velocity sensors on large underground cooling water pipelines. Research on the effects of cathodic protection, as well as on other factors that might influence the overall performance of the meter, is presented in this paper. The experiments were carried out on an immersion-type magnetic meter used for flow measurement in a cooling water pipeline. An attempt has been made in this paper to outline guidelines that can ensure accurate measurement with immersion-type magnetic meters on underground pipelines. Keywords: magnetic induction, flow meter, Faraday's law, immersion, cathodic protection, anode, cathode, flange, grounding, plant information management system, electrodes
Procedia PDF Downloads 418
9003 Reliability Estimation of Bridge Structures with Updated Finite Element Models
Authors: Ekin Ozer
Abstract:
Assessment of structural reliability is essential for the efficient use of civil infrastructure that is subjected to hazardous events. Dynamic analysis of finite element models is a commonly used tool to simulate structural behavior and estimate its performance accordingly. However, theoretical models based purely on preliminary assumptions and design drawings may deviate from the actual behavior of the structure. This study proposes an up-to-date reliability estimation procedure that uses actual bridge vibration data to update finite element models and then performs reliability estimation on the updated models. The proposed method utilizes vibration response measurements of bridge structures to identify modal parameters, then uses these parameters to calibrate finite element models which are originally based on design drawings. The proposed method not only shows that reliability estimates based on updated models differ from those based on the original models, but also indicates that non-updated models may overestimate the structural capacity. Keywords: earthquake engineering, engineering vibrations, reliability estimation, structural health monitoring
Procedia PDF Downloads 223
9002 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters
Authors: Trevor C. Brown, David J. Miron
Abstract:
Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size and porosity for applications such as heterogeneous catalysis, and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants and Gibbs free energy are dependent on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to provide information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high surface area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to a monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers. These additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, and so the calculated monolayer uptake and equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on the variations in adsorbent (accessible surface area and micropore volume), adsorbate (molecular areas and volumes) and thermodynamic (Gibbs free energies) properties of the adsorption sites. Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics
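For reference, the constant-parameter forms of the two isotherms discussed above can be written as follows (notation assumed: q is the uptake, q_m the monolayer capacity, K the Langmuir equilibrium constant, c the BET constant, P the pressure and P_0 the saturation pressure); in the pressure-varying treatment described in the abstract, q_m and the equilibrium constants are allowed to drift smoothly with P instead of being held fixed.

```latex
% Constant-parameter forms of the Langmuir and BET isotherms referenced above.
\begin{align}
  \text{Langmuir:} \quad q &= \frac{q_m K P}{1 + K P} \\[4pt]
  \text{BET:} \quad \frac{q}{q_m} &= \frac{c\,(P/P_0)}{\bigl(1 - P/P_0\bigr)\bigl(1 - P/P_0 + c\,P/P_0\bigr)}
\end{align}
```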
Procedia PDF Downloads 233
9001 Detection of Chaos in General Parametric Model of Infectious Disease
Authors: Javad Khaligh, Aghileh Heydari, Ali Akbar Heydari
Abstract:
Mathematical epidemiological models for the spread of disease through a population are used to predict the prevalence of a disease or to study the impacts of treatment or prevention measures. Initial conditions for these models are measured from statistical data collected from a population. Since these initial conditions can never be exact, the presence of chaos in mathematical models has serious implications for the accuracy of the models as well as for how epidemiologists interpret their findings. This paper confirms the chaotic behavior of a model for dengue fever and an SI model by investigating sensitive dependence, bifurcation, and the 0-1 test under a variety of initial conditions. Keywords: epidemiological models, SEIR disease model, bifurcation, chaotic behavior, 0-1 test
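As an illustration of the 0-1 test mentioned above, a compact sketch of the correlation-method version applied to a generic scalar time series is shown below; the logistic map is used only as a stand-in signal and is not the dengue/SI model studied in the paper.

```python
# Compact sketch of the 0-1 test for chaos (correlation method) on a scalar
# time series. The logistic map is an illustrative signal only.
import numpy as np

def zero_one_test(x, n_c=50, rng=np.random.default_rng(0)):
    N = len(x)
    n_cut = N // 10                                   # lags used for the mean-square displacement
    ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        j = np.arange(1, N + 1)
        p = np.cumsum(x * np.cos(j * c))              # translation variables p_c, q_c
        q = np.cumsum(x * np.sin(j * c))
        n = np.arange(1, n_cut + 1)
        msd = np.array([np.mean((p[m:] - p[:-m]) ** 2 + (q[m:] - q[:-m]) ** 2)
                        for m in n])                  # mean-square displacement M_c(n)
        ks.append(np.corrcoef(n, msd)[0, 1])          # K_c: linear-growth indicator
    return np.median(ks)                              # ~1 chaotic, ~0 regular

# Example: the logistic map in its chaotic regime (r = 4)
x = np.empty(2000); x[0] = 0.3
for i in range(1, x.size):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
print(f"K = {zero_one_test(x):.2f}")   # a value close to 1 indicates chaos
```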
Procedia PDF Downloads 324
9000 Bi-Lateral Comparison between NIS-Egypt and NMISA-South Africa for the Calibration of an Optical Spectrum Analyzer
Authors: Osama Terra, Hatem Hussein, Adriaan Van Brakel
Abstract:
Dense wavelength division multiplexing (DWDM) technology requires tight specification, and therefore measurement, of the wavelength accuracy and stability of telecommunication lasers. Thus, calibration of the optical spectrum analyzers (OSAs) used to measure wavelength is of great importance. Proficiency testing must be performed on such measuring activities to ensure the accuracy of the measurement results. In this paper, a new comparison scheme is introduced to test the performance of such calibrations. This comparison scheme is implemented between NIS-Egypt and NMISA-South Africa for the calibration of the wavelength scale of an OSA. Both institutes employ a reference gas cell to calibrate the OSA according to the standard IEC/BS EN 62129 (2006). The results of this comparison are compiled in this paper. Keywords: OSA calibration, HCN gas cell, DWDM technology, wavelength measurement
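Bilateral comparisons of this kind are commonly judged with the normalized error (En) criterion; a small sketch is given below, where the wavelength corrections and expanded uncertainties are hypothetical placeholders rather than the NIS or NMISA results.

```python
# Sketch of the normalized-error (E_n) evaluation commonly used to judge
# agreement in bilateral comparisons. The values below are hypothetical.
def normalized_error(x1, u1, x2, u2):
    """E_n = (x1 - x2) / sqrt(U1^2 + U2^2), with U the expanded (k=2) uncertainties."""
    return (x1 - x2) / (u1 ** 2 + u2 ** 2) ** 0.5

# Hypothetical wavelength-scale corrections (in pm) reported by the two institutes
en = normalized_error(x1=3.2, u1=5.0, x2=-1.1, u2=6.0)
print(f"E_n = {en:.2f} -> {'consistent' if abs(en) <= 1 else 'inconsistent'}")
```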
Procedia PDF Downloads 303
8999 Calibration of the Radial Installation Limit Error of the Accelerometer in the Gravity Gradient Instrument
Authors: Danni Cong, Meiping Wu, Xiaofeng He, Junxiang Lian, Juliang Cao, Shaokun Cai, Hao Qin
Abstract:
The gravity gradient instrument (GGI) is the core of the gravity gradiometer, so the structural error of the sensor has a great impact on the measurement results. In order not to affect the aimed measurement accuracy, a limit error is required for the installation of the accelerometer. In this paper, based on the established measuring principle model, the radial installation limit error is calibrated; it is taken as an example to provide a method for calculating the other installation limit errors under the premise of ensuring the accuracy of the measurement result. This method provides the idea for deriving the limit errors of the geometric structure of the sensor, laying the foundation for the mechanical precision design and the physical design. Keywords: gravity gradient sensor, radial installation limit error, accelerometer, uniaxial rotational modulation
Procedia PDF Downloads 422
8998 Innovative Methods of Improving Train Formation in Freight Transport
Authors: Jaroslav Masek, Juraj Camaj, Eva Nedeliakova
Abstract:
The paper focuses on an operational model for transporting single-wagon consignments on a railway network using two different models of train formation. The paper gives an overview of possibilities for improving the quality of transport services. It deals with two models used in train formation: time-continuous and time-discrete. By applying these models in practice, the transport company can guarantee a higher quality of service and expect an increase in transport performance. The models are also applicable to other transport networks. They supplement the theoretical problem of train formation with new ways of influencing the organization of wagon flows. Keywords: train formation, wagon flows, marshalling yard, railway technology
Procedia PDF Downloads 437
8997 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana
Authors: Gautier Viaud, Paul-Henry Cournède
Abstract:
Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters, the second level describes how these individual parameters are distributed within a plant population, and the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters permits the derivation of analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax and metaprogramming capabilities with high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale Greenlab model for the latter is thus presented, where the surface areas of each individual leaf can be simulated. It is assumed that the error made on the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noise for the observations is therefore used. Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, the data for a single individual need not be available at all times, nor do the times at which data are available have to be the same for all the different individuals. This makes it possible to discard data from image analysis when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana's growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects. Keywords: Bayesian, genotypic differentiation, hierarchical models, plant growth models
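A minimal sketch of the hybrid Gibbs-Metropolis scheme described above, applied to a deliberately simplified two-level hierarchical growth model, is shown below; the toy growth curve, priors and synthetic data are illustrative assumptions and not the organ-scale Greenlab model used in the paper.

```python
# Minimal sketch of a hybrid Gibbs-Metropolis sampler for a two-level
# hierarchical model: individual parameters theta_i ~ N(mu, tau^2) and
# observations y_ij = f(t_j, theta_i) + noise, with f a toy logistic curve.
import numpy as np

rng = np.random.default_rng(1)

def f(t, theta):
    # toy nonlinear growth curve: saturating area with growth rate exp(theta)
    return 10.0 / (1.0 + np.exp(-np.exp(theta) * (t - 10.0)))

# Synthetic data: 5 individuals observed over 21 days
t = np.arange(1, 22, dtype=float)
true_theta = rng.normal(-1.0, 0.3, size=5)
y = np.array([f(t, th) + rng.normal(0, 0.3, t.size) for th in true_theta])

sigma = 0.3          # observation s.d. (assumed known for simplicity)
tau = 0.3            # population s.d. (assumed known for simplicity)
n_ind = y.shape[0]

theta = np.zeros(n_ind)      # individual parameters
mu = 0.0                     # population mean
samples = []

for it in range(2000):
    # Gibbs step: mu | theta is conjugate normal under a flat prior on mu
    mu = rng.normal(theta.mean(), tau / np.sqrt(n_ind))

    # Metropolis step for each individual parameter (random-walk proposal)
    for i in range(n_ind):
        def log_post(th):
            return (-0.5 * np.sum((y[i] - f(t, th)) ** 2) / sigma ** 2
                    - 0.5 * (th - mu) ** 2 / tau ** 2)
        prop = theta[i] + rng.normal(0, 0.1)
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta[i]):
            theta[i] = prop

    samples.append(mu)

print(f"posterior mean of mu: {np.mean(samples[500:]):.3f}")
```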
Procedia PDF Downloads 303
8996 Sustainability as a Platform in Microfinance Industry for Developing Countries
Authors: Nor Azlina Ab.Rahman, Salwana Hassan, Zuraeda Ibrahim, Normah Omar, Jamaliah Said
Abstract:
Revolution in the business environment has brought crucial and growing changes to most globalized markets. Numerous organizations need to produce more proactive entrepreneurs with dynamic teams who can run and steer their business to success. Revolutionizing business strategy and entrepreneurial skills, and implementing innovation and practices to enhance performance, is necessary for these organizations to become more cost-efficient and to increase their efficiency. The study aims to clarify whether measurement has a positive effect on different aspects of innovation and best practices. The study contributes to the current understanding in three ways: first, by presenting the important aspects of organizational innovation and best practices; second, by showing the importance of measurement in promoting different aspects of innovation and best practices; and third, by examining the link between innovation, best practices and sustainability in microfinance. The study was executed as a qualitative study of the microfinance industry. Representatives of management and employees in each company were selected through an invitation to participate and provide information for data collection. The study contains a comprehensive description of the impacts of measurement on different aspects of innovation and best practices towards sustainability in both the microfinance industry and SMEs. Findings from this study show that performance measurement has positive effects on issues related to innovation and best practices. Measurement of several aspects of innovation and best practices shows good potential in the microfinance industry. Additionally, measurement of innovation and best practices is positively related to enhanced organizational performance. The study suggests that both academics and practitioners should focus on the development of new methods and practices for describing and scrutinizing measurement issues related to innovation and best practices, in order to better develop innovation and best practices towards sustainability. This effort would not only contribute to firms' success, but also to the development of the nation in developing countries. Keywords: best practices, innovation, microfinance, sustainability
Procedia PDF Downloads 522
8995 SOM Map vs Hopfield Neural Network: A Comparative Study in Microscopic Evacuation Application
Authors: Zouhour Neji Ben Salem
Abstract:
Microscopic evacuation focuses on evacuee behavior and the way evacuees search for a safe place in an egress situation. In recent years, several models have handled the microscopic evacuation problem. Among them, we have proposed Artificial Neural Networks (ANNs) as an alternative to mathematical models that can deal with such a problem. In this paper, we present two ANN models, a SOM map and a Hopfield network, used to predict evacuee behavior in a disaster situation. These models are tested in a real case: the evacuation of the second floor of a Tunisian children's hospital in case of fire. The two models are studied and compared in order to evaluate their performance. Keywords: artificial neural networks, self-organizing map, Hopfield network, microscopic evacuation, fire building evacuation
Procedia PDF Downloads 404
8994 A Comparative Study of the Proposed Models for the Components of the National Health Information System
Authors: M. Ahmadi, Sh. Damanabi, F. Sadoughi
Abstract:
The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, by using the national health information system, one can improve the quality of the health data, information and knowledge used to support decision making at all levels and in all areas of the health sector. Since full identification of the components of this system seems necessary for better planning and for managing the factors that influence its performance, in this study different perspectives on the components of this system are explored comparatively. Methods: This is a descriptive, comparative study. The study population includes printed and electronic documents containing components of the national health information system in three parts: input, process, and output. In this context, searches using library resources and the internet were conducted, and the data analysis is expressed using comparative tables and qualitative data. Results: The findings showed that there are three different perspectives presenting the components of the national health information system: the Lippeveld, Sauerborn, and Bodart model (2000), the Health Metrics Network (HMN) model from the World Health Organization (2008), and Gattini's model (2009). In the input section (resources and structure), all three models require components of management and leadership, planning and design of programs, supply of staff, software and hardware facilities, and equipment. In addition, in the process section, the three models point to actions ensuring the quality of the health information system, and in the output section, all but the Lippeveld model consider information products and the usage and distribution of information as components of the national health information system. Conclusion: The results showed that all three models give only a brief discussion of the components of health information in the input section. However, the Lippeveld model overlooks the components of the national health information system in the process and output sections. Therefore, it seems that the Health Metrics Network model offers a comprehensive presentation of the components of the health system in all three sections: input, process, and output. Keywords: National Health Information System, components of the NHIS, Lippeveld Model
Procedia PDF Downloads 420
8993 Possibility of Making Ceramic Models from Condemned Plaster of Paris (POP) Moulds for Ceramics Production in Edo State Nigeria
Authors: Osariyekemwen, Daniel Nosakhare
Abstract:
Some ceramic wastes, such as discarded (condemned) Plaster of Paris (POP) moulds at Auchi Polytechnic, Edo State, constitute environmental hazards. This study therefore bridges the foregoing gap by using these discarded POP moulds to produce ceramic models for making casting moulds for mass production. This is in line with the possibility of using this medium to properly manage the discarded POP that litters the immediate environment; at present, these moulds are a major waste-disposal problem in the department. Hence, the study fabricated sanitary miniature models and contract fuse models, respectively. Findings arising from this study show that discarded POP can be carved and, when set, neither shrinks nor expands; hence warping is quite unusual. Above all, it also gives a good finish with little deterioration over time when compared to clay models. Keywords: Plaster of Paris, condemned, moulds, models, production
Procedia PDF Downloads 189
8992 Short Review on Models to Estimate the Risk in the Financial Area
Authors: Tiberiu Socaciu, Tudor Colomeischi, Eugenia Iancu
Abstract:
Business failure affects, in various proportions, shareholders, managers, lenders (banks), suppliers, customers, the financial community, government and society as a whole. In an era of telecommunications networks and interdependent markets, the effect of a company's failure is felt almost instantly. Effectively managing risk exposure thus requires sophisticated support systems, backed by analytical tools to measure, monitor, manage and control the operational risks that may arise. As we know, bankruptcy is a phenomenon that managers wish to avoid no matter what stage of life the company they direct is in. In our analysis, given the nature of the economic models reviewed (Altman, Conan-Holder, etc.), estimating the bankruptcy risk of a company corresponds to some extent to tracing the company's own business cycle. Various models for predicting bankruptcy take into account direct and indirect aspects such as market position, company growth trend, competition structure, customer characteristics and retention, organization and distribution, location, etc. From the perspective of our research, we review the economic models known in theory and practice for estimating the risk of bankruptcy; such models are based on indicators drawn from the major accounting statements. Keywords: Anglo-Saxon models, continental models, national models, statistical models
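As an example of the indicator-based models reviewed here, the original Altman (1968) Z-score for publicly traded manufacturing firms combines five accounting ratios:

```latex
% Original Altman (1968) Z-score, as an example of the indicator-based models reviewed above.
\[
  Z = 1.2\,X_1 + 1.4\,X_2 + 3.3\,X_3 + 0.6\,X_4 + 1.0\,X_5
\]
% X_1 = working capital / total assets;   X_2 = retained earnings / total assets;
% X_3 = EBIT / total assets;              X_4 = market value of equity / book value of total liabilities;
% X_5 = sales / total assets.
% Commonly cited cut-offs: Z < 1.81 distress zone, Z > 2.99 safe zone.
```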
Procedia PDF Downloads 405
8991 Improve Safety Performance of Un-Signalized Intersections in Oman
Authors: Siham G. Farag
Abstract:
The main objective of this paper is to provide a new methodology for road safety assessment in Oman through the development of suitable accident prediction models. The generalized linear modeling (GLM) technique with Poisson or negative binomial regression (NBR), implemented in the SAS package, was used to develop these models. The paper utilized accident data from 31 un-signalized T-intersections over three years. Five goodness-of-fit measures were used to assess the overall quality of the developed models. Two types of models were developed separately: flow-based models including only traffic exposure functions, and full models containing both exposure functions and other significant geometry and traffic variables. The results show that traffic exposure functions produced a much better fit to the accident data. The most effective geometric variables were major-road mean speed, minor-road 85th percentile speed, major-road lane width, distance to the nearest junction, and right-turn curb radius. The developed models can be used for intersection treatment or upgrading and to specify the appropriate design parameters of T-intersections. Finally, the models presented here reflect the intersection conditions in Oman and could represent the typical conditions in several countries in the Middle East, especially the Gulf countries. Keywords: accident prediction models (APMs), generalized linear model (GLM), T-intersections, Oman
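A sketch of how such flow-based and full accident prediction models can be fitted with Poisson and negative binomial GLMs is shown below (in Python with statsmodels rather than SAS); the synthetic data frame and variable names are assumptions, not the Omani dataset.

```python
# Sketch of flow-based and full accident prediction models: Poisson and
# negative binomial GLMs linking crash counts at T-intersections to traffic
# exposure and geometry. Column names and synthetic data are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 31
data = pd.DataFrame({
    "crashes": rng.poisson(3, n),                  # 3-year crash counts per intersection
    "major_aadt": rng.uniform(2000, 20000, n),     # major-road traffic flow
    "minor_aadt": rng.uniform(200, 4000, n),       # minor-road traffic flow
    "major_speed": rng.uniform(40, 90, n),         # major-road mean speed (km/h)
})

# Flow-based model: ln(E[crashes]) = b0 + b1*ln(major flow) + b2*ln(minor flow)
flow_model = smf.glm("crashes ~ np.log(major_aadt) + np.log(minor_aadt)",
                     data=data, family=sm.families.Poisson()).fit()

# Full model adds a geometry/speed variable; the NB family handles overdispersion
full_model = smf.glm("crashes ~ np.log(major_aadt) + np.log(minor_aadt) + major_speed",
                     data=data, family=sm.families.NegativeBinomial()).fit()

print(flow_model.summary().tables[1])
print("AIC (flow vs full):", flow_model.aic, full_model.aic)
```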
Procedia PDF Downloads 273
8990 Filter for the Measurement of Supraharmonics in Distribution Networks
Authors: Sivaraman Karthikeyan
Abstract:
Due to rapidly developing power electronics devices and technologies, such as power line communication and self-commutating converters, voltage and current distortion, as well as interference, have increased in the frequency range of 2 kHz to 150 kHz; there is an urgent need to regulate electromagnetic compatibility (EMC) standards in this frequency range. Measuring or testing compliance with emission and immunity limits necessitates precise, repeatable measuring methods. Appropriate filters that minimize the fundamental component and its harmonics below 2 kHz in the measured signal would improve the measurement accuracy in this frequency range, leading to better analysis. This paper discusses the filter suggestions in the current measurement standard and proposes an infinite impulse response (IIR) filter design that is optimized for a low number of poles, strong fundamental damping, and high accuracy above 2 kHz. The new filter's transfer function is delivered as a result. An analog implementation is derived from the overall design. Keywords: supraharmonics, 2 kHz, 150 kHz, filter, analog filter
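A sketch of the kind of high-pass IIR pre-filter discussed above, attenuating the fundamental and its harmonics below 2 kHz before supraharmonic analysis, is given below; the filter type, order, cut-off and sampling rate are illustrative assumptions rather than the optimized design proposed in the paper.

```python
# Sketch of a high-pass IIR filter that attenuates the 50/60 Hz fundamental
# and its harmonics below 2 kHz before supraharmonic analysis. Filter type,
# order, cut-off and sampling rate are illustrative assumptions.
import numpy as np
from scipy import signal

fs = 1_000_000          # sampling rate in Hz (assumed), well above 150 kHz
f_cut = 2_000           # pass everything above 2 kHz

# Elliptic high-pass: steep transition with a modest number of poles
sos = signal.ellip(N=6, rp=0.5, rs=80, Wn=f_cut, btype="highpass",
                   fs=fs, output="sos")

# Check the attenuation at the fundamental and the gain in the supraharmonic band
w, h = signal.sosfreqz(sos, worN=2 ** 16, fs=fs)
for f in (50, 2_000, 10_000):
    idx = np.argmin(np.abs(w - f))
    print(f"{f:>6} Hz: {20 * np.log10(np.abs(h[idx])):.1f} dB")

# To apply it to a measured current waveform `i_meas` (zero-phase filtering):
# i_supra = signal.sosfiltfilt(sos, i_meas)
```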
Procedia PDF Downloads 146
8989 Efficiency Improvement of REV-Method for Calibration of Phased Array Antennas
Authors: Daniel Hristov
Abstract:
The paper describes the principle of operation, simulation and physical validation of a method for the simultaneous acquisition of the gain and phase states of multiple antenna elements and the corresponding feed lines across a phased array antenna (PAA). The derived values for gain and phase are used for PAA calibration. The method utilizes the Rotating-Element Electric-Field Vector (REV) principle, currently used for gain and phase state estimation of a single antenna element across an active antenna aperture. A significant reduction in procedure execution time is achieved by simultaneously setting different phase delays on multiple phase shifters, followed by a single power measurement. The initial gain and phase states are calculated using spectral and correlation analysis of the measured power series. Keywords: antenna, antenna arrays, calibration, phase measurement, power measurement
Procedia PDF Downloads 137
8988 Kinect Station: Using Microsoft Kinect V2 as a Total Station Theodolite for Distance and Angle Determination in a 3D Cartesian Environment
Authors: Amin Amini
Abstract:
A Kinect sensor has been utilized as a cheap and accurate alternative to 3D laser scanners and electronic distance measurement (EDM) systems. This research presents an inexpensive and easy-to-set-up system that utilizes the Microsoft Kinect v2 sensor as a surveying and measurement tool and investigates the possibility of using such a device as a replacement for conventional theodolite systems. The system was tested in an indoor environment, where its accuracy in distance and angle measurements was evaluated using virtual markers in a 3D Cartesian environment. The system has shown an average accuracy of 97.94% in measuring distances, and 99.11% and 98.84% accuracy for area and perimeter, respectively, within the Kinect's surveying range of 1.5 to 6 meters. The research also tested the system's competency for relative angle determination between two objects. Keywords: Kinect v2, 3D measurement, depth map, ToF
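A minimal sketch of the distance and relative-angle computations implied above, given two virtual markers in the Kinect's 3D camera-space coordinates, could look as follows; the marker coordinates are hypothetical.

```python
# Sketch of distance and relative-angle determination from two virtual markers
# expressed in the Kinect's 3D Cartesian (camera-space) coordinates.
# The marker coordinates below are hypothetical.
import numpy as np

origin = np.array([0.0, 0.0, 0.0])       # sensor position (camera space, metres)
marker_a = np.array([0.8, 0.1, 2.5])     # hypothetical virtual marker A
marker_b = np.array([-0.6, 0.0, 3.1])    # hypothetical virtual marker B

# Slope distance between the two markers
distance_ab = np.linalg.norm(marker_a - marker_b)

# Angle subtended at the sensor between the two markers
v_a, v_b = marker_a - origin, marker_b - origin
cos_angle = np.dot(v_a, v_b) / (np.linalg.norm(v_a) * np.linalg.norm(v_b))
angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

print(f"distance A-B: {distance_ab:.3f} m, angle at sensor: {angle_deg:.2f} deg")
```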
Procedia PDF Downloads 67
8987 Performance of Reinforced Concrete Beams under Different Fire Durations
Authors: Arifuzzaman Nayeem, Tafannum Torsha, Tanvir Manzur, Shaurav Alam
Abstract:
Performance evaluation of reinforced concrete (RC) beams subjected to accidental fire is significant for post-fire capacity measurement. The mechanical properties of any RC beam degrade due to heating, since the strength and modulus of concrete and reinforcement suffer considerable reduction at elevated temperatures. Moreover, fire-induced thermal dilation and shrinkage cause internal stresses within the concrete and eventually result in cracking, spalling, and loss of stiffness, which ultimately leads to a lower service life. However, conducting a full-scale comprehensive experimental investigation of RC beams exposed to fire is difficult and cost-intensive, whereas a finite element (FE) based numerical study can provide an economical alternative for evaluating the post-fire capacity of RC beams. In this study, an attempt has been made to study the fire behavior of RC beams under different durations of fire using the FE software package ABAQUS. The concrete damaged plasticity model in ABAQUS was used to simulate the behavior of RC beams. The effect of temperature on the strength and modulus of concrete and steel was simulated following the relevant Eurocodes. Initially, the results of the FE models were validated using several experimental results from available scholarly articles. It was found that the response of the developed FE models matched the experimental outcomes for beams without heat quite well. The FE analysis of beams subjected to fire showed some deviation from the experimental results, particularly in terms of stiffness degradation. However, the ultimate strength and deflection of the FE models were similar to the experimental values. The developed FE models thus exhibited good potential to predict the fire behavior of RC beams. Once validated, the FE models were used to analyze several RC beams with different strengths (ranging between 20 MPa and 50 MPa) exposed to the standard fire curve (ASTM E119) for different durations. The post-fire performance of RC beams was investigated in terms of load-deflection behavior, flexural strength, and deflection characteristics. Keywords: fire durations, flexural strength, post-fire capacity, reinforced concrete beam, standard fire
Procedia PDF Downloads 141
8986 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete
Authors: Farzad Danaei, Yilmaz Akkaya
Abstract:
In many civil engineering applications, especially in the construction of large concrete structures, the early-age behavior of concrete has been shown to be a crucial problem. The uneven rise in temperature within the concrete in these constructions is the fundamental issue for quality control. Therefore, developing accurate and fast temperature prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but taking all of these fluctuations into account makes numerical models more complex. Experimental measurement of the thermal properties under laboratory conditions also cannot accurately predict the variation of these properties under site conditions. Therefore, the specific heat capacity and the thermal conductivity coefficient are two variables that are considered constant in many of the previously recommended models. The proposed equations demonstrate that these two quantities decrease linearly as cement hydrates, and their values are related to the degree of hydration. The effects of changing the thermal conductivity and specific heat capacity values on the maximum temperature, and on the time it takes for the concrete to reach that temperature, are examined in this study using numerical sensitivity analysis, and the results are compared to models that take a fixed value for these two thermal properties. The current study is conducted on 7 different concrete mix designs with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag). It is concluded that the maximum temperature does not change as a result of assuming a constant conductivity coefficient, but a variable specific heat capacity must be taken into account; likewise, regarding the time at which the concrete's central node reaches its maximum temperature, a variable specific heat capacity can have a considerable effect on the final result. Also, the use of GGBFS has more influence than fly ash. Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient
Procedia PDF Downloads 77
8985 Developing a Third Degree of Freedom for Opinion Dynamics Models Using Scales
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Opinion dynamics models use an agent-based modeling approach to model people's opinions. A model's properties are usually explored by testing its two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule, instead, determines how agents select each other and update their own opinions. Here we show the existence of a third degree of freedom. This can be used to turn one model into another or to change a model's output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously. Thus, it is fundamental to know how a real-world opinion (e.g., supporting a candidate) is turned into a number. Specifically, we want to know whether, by choosing a different opinion-to-number transformation, the model's dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In this field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted one into the other, in the same way as we convert meters to feet. Thus, in our work, we analyze how this scale transformation may affect opinion dynamics models. We perform our analysis both using mathematical modeling and validating it via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change the model's dynamics up to a qualitative level, meaning that a researcher may reach a totally different conclusion, even using the same dataset, just by slightly changing the way the data are pre-processed. Indeed, we quantify that this effect may alter the model's output by 100%. By using two models from the standard literature, we show that a scale transformation can transform one model into the other. This transformation is exact, and it holds for every result. Lastly, we also test the case of using real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we think that scale transformation should be considered as a third degree of freedom for opinion dynamics. Indeed, its properties have a strong impact both on theoretical models and on their application to real-world data. Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
Procedia PDF Downloads 155
8984 A Weighted Group EI Incorporating Role Information for More Representative Group EI Measurement
Authors: Siyu Wang, Anthony Ward
Abstract:
Emotional intelligence (EI) is a well-established personal characteristic. It has been viewed as a critical factor which can influence an individual's academic achievement, ability to work and potential to succeed. When working in a group, EI is fundamentally connected to the group members' interaction and ability to work as a team. The ability of a group member to intelligently perceive and understand their own emotions (intrapersonal EI), to intelligently perceive and understand other members' emotions (interpersonal EI), and to intelligently perceive and understand emotions between different groups (cross-boundary EI) can be considered as group emotional intelligence (Group EI). In this research, a more representative Group EI measurement approach, which incorporates information about the composition of a group and an individual's role in that group, is proposed. To demonstrate the claim of being a more representative Group EI measurement approach, this study adopts a multi-method research design, involving a combination of both qualitative and quantitative techniques to establish a metric of Group EI. From the results, it can be concluded that by introducing the weight coefficient of each group member's contribution to group work into the measurement of Group EI, Group EI becomes more representative and more capable of capturing what happens during teamwork than previous approaches. Keywords: case study, emotional intelligence, group EI, multi-method research
Procedia PDF Downloads 125
8983 Estimations of Spectral Dependence of Tropospheric Aerosol Single Scattering Albedo in Sukhothai, Thailand
Authors: Siriluk Ruangrungrote
Abstract:
Analyses of available data from MFR-7 measurements were performed and discussed in a study of tropospheric aerosol and its consequences in Thailand. The aerosol single scattering albedo (ω) is one of the most important parameters for determining the aerosol effect on radiative forcing. Here, ω was determined directly as the ratio of the aerosol scattering optical depth to the aerosol extinction optical depth (τscat/τext), without any use of aerosol computer-code models. This has the benefit of eliminating the uncertainty caused by modeling assumptions and by the estimation of the actual aerosol input data. Diurnal ω on 5 cloudless days in winter and early summer was investigated at 5 distinct wavelengths (415, 500, 615, 673 and 870 nm), taking into consideration Rayleigh scattering and the atmospheric column NO2 and ozone contents. In addition, the tendency of the spectral dependence of ω in the two seasons was observed. The spectral results reveal that during wintertime the atmosphere of this inland rural vicinity was, for the period of measurement, possibly dominated by a smaller soil dust aerosol loading than in early summer. Hence, the major aerosol loading, particularly in summer, was a mixture of both soil dust and biomass burning aerosols. Keywords: aerosol scattering optical depth, aerosol extinction optical depth, biomass burning aerosol, soil dust aerosol
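The direct estimate described above corresponds to the standard definition of the single scattering albedo at each wavelength:

```latex
% Definition used above for the aerosol single scattering albedo at each
% wavelength lambda (tau denotes aerosol optical depth).
\[
  \omega(\lambda) \;=\; \frac{\tau_{\mathrm{scat}}(\lambda)}{\tau_{\mathrm{ext}}(\lambda)}
  \;=\; \frac{\tau_{\mathrm{scat}}(\lambda)}{\tau_{\mathrm{scat}}(\lambda) + \tau_{\mathrm{abs}}(\lambda)}
\]
```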
Procedia PDF Downloads 405
8982 Impact of Integrated Signals for Doing Human Activity Recognition Using Deep Learning Models
Authors: Milagros Jaén-Vargas, Javier García Martínez, Karla Miriam Reyes Leiva, María Fernanda Trujillo-Guerrero, Francisco Fernandes, Sérgio Barroso Gonçalves, Miguel Tavares Silva, Daniel Simões Lopes, José Javier Serrano Olmedo
Abstract:
Human Activity Recognition (HAR) has a growing impact on the creation of new applications and is responsible for emerging new technologies. Also, the use of wearable sensors is an important key to exploring the human body's behavior when performing activities. Such devices are less invasive and more comfortable for the person. In this study, a database that includes three activities is used. The activities were acquired from inertial measurement unit (IMU) sensors and motion capture (MOCAP) systems. The main objective is to differentiate the performance of four deep learning (DL) models: a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN) and a hybrid convolutional neural network-long short-term memory (CNN-LSTM) model, when considering acceleration, velocity and position, and to evaluate whether integrating the IMU acceleration to obtain velocity and position yields an increase in performance when used as input to the DL models, compared with the same type of data provided by the MOCAP system. Although the acceleration data are cleaned before integrating, the results show only a minimal increase in accuracy for the integrated signals. Keywords: HAR, IMU, MOCAP, acceleration, velocity, position, feature maps
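A sketch of the integration step described above (deriving velocity and position from IMU acceleration by cumulative trapezoidal integration, with simple detrending to limit drift) is shown below; the synthetic signal and sampling rate are assumptions, and the paper's own pre-processing may differ.

```python
# Sketch of the signal-integration step: velocity and position derived from IMU
# acceleration by cumulative trapezoidal integration, with detrending to limit
# integration drift. The synthetic signal and sampling rate are assumptions.
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.signal import detrend

fs = 100.0                                   # IMU sampling rate (assumed), Hz
t = np.arange(0, 10, 1 / fs)
acc = np.sin(2 * np.pi * 1.0 * t) + 0.02 * np.random.randn(t.size)  # one axis, m/s^2

acc = detrend(acc)                                            # remove bias before integrating
vel = cumulative_trapezoid(acc, t, initial=0.0)               # m/s
vel = detrend(vel)                                            # limit drift in the velocity
pos = cumulative_trapezoid(vel, t, initial=0.0)               # m

# acc, vel and pos can now be stacked as input channels for the DL models
features = np.stack([acc, vel, pos], axis=-1)
print(features.shape)   # (n_samples, 3)
```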
Procedia PDF Downloads 98
8981 Impact of Contemporary Performance Measurement System and Organization Justice on Academic Staff Work Performance
Authors: Amizawati Mohd Amir, Ruhanita Maelah, Zaidi Mohd Noor
Abstract:
As part of the Malaysian higher education institutions' strategic plan to promote high-quality research and education, the Ministry of Higher Education has introduced various instruments to assess university performance. The aim is that universities will produce more commercially oriented research and continue to contribute to producing a professional workforce for domestic and foreign needs. Yet the spirit of this success lies in the commitment of the university, particularly the academic staff, to translate the vision into reality. For that reason, the element of fairness and justice in assessing individual academic staff performance is crucial to promote a direct link between university and individual work goals. Focusing on public research universities (RUs) in Malaysia, this study examines the issue through the practice of the universities' contemporary performance measurement systems. Management control theory conceptualizes contemporary performance measurement as consisting of three dimensions, namely strategic, comprehensive and dynamic. Building upon equity theory, the relationships between the contemporary performance measurement system and organizational justice, and in turn their effect on academic staff work performance, are tested based on online survey data administered to 365 academic staff from public RUs, which were analyzed using SPSS and structural equation modeling. The findings validated the presence of the strategic, comprehensive and dynamic dimensions in the contemporary performance measurement system. The empirical evidence also indicated that contemporary performance measurement and procedural justice are significantly associated with work performance, but distributive justice is not. Furthermore, procedural justice does mediate the relationship between contemporary performance measurement and academic staff work performance. This study thus provides evidence of the importance of perceptions of justice in influencing academic staff work performance. This finding may be a fruitful input when setting up academic staff performance assessment policy. Keywords: comprehensive, dynamic, distributive justice, contemporary performance measurement system, strategic, procedural justice, work performance
Procedia PDF Downloads 408
8980 Modelling and Mapping Malnutrition Toddlers in Bojonegoro Regency with Mixed Geographically Weighted Regression Approach
Authors: Elvira Mustikawati P.H., Iis Dewi Ratih, Dita Amelia
Abstract:
Bojonegoro has proclaimed a policy of zero malnutrition. Therefore, as an effort to solve the cases of malnourished children in Bojonegoro, this study used the mixed geographically weighted regression (MGWR) approach to determine the factors that influence the percentage of malnourished children under five; these factors can be divided into locally influential factors in each district and global factors that influence the whole regency. Based on the goodness-of-fit tests, the R² and AIC values of the GWR model are better than those of the MGWR model: R² and AIC for the MGWR model are 84.37% and 14.28, while for the GWR model they are 91.04% and -62.04, respectively. Based on the analysis with the GWR model, the districts of Sekar, Bubulan, Gondang, and Dander are districts in which three predictor variables (the percentage of vitamin A, the percentage of births assisted by health personnel, and the percentage of clean water) significantly influence the percentage of malnourished children under five.
Procedia PDF Downloads 296
8979 Beyond the Effect on Children: Investigation on the Longitudinal Effect of Parental Perfectionism on Child Maltreatment
Authors: Alice Schittek, Isabelle Roskam, Moira Mikolajczak
Abstract:
Background: Perfectionistic strivings (PS) and perfectionistic concerns (PC) are associated with an increase in parental burnout (PB), and PB causally increases violence towards the offspring. Objective: To the best of our knowledge, no study has ever investigated whether perfectionism (PS and PC) predicts violence towards the offspring and whether PB could explain this link. We hypothesized that an increase in PS and PC would lead to an increase in violence via an increase in PB. Method: 228 participants responded to an online survey, with three measurement occasions spaced two months apart. Results: Contrary to expectations, cross-lagged path models revealed that violence towards the offspring prospectively predicts an increase in PS and PC. Mediation models showed that PB is not a significant mediator. The results of all models did not change when controlling for social desirability. Conclusion: The present study shows that violence towards the offspring increases the risk of PS and PC in parents, which highlights the importance of understanding the effect of child maltreatment on the whole family system and not just on children. Results are discussed in light of the feeling of guilt experienced by parents. Considering the non-significant mediation effect, PB research should gradually shift towards more (quasi-)causal designs, allowing researchers to identify which significant correlations translate into causal effects. Implications: Clinicians should focus on preventing child maltreatment as well as treating parental perfectionism. Researchers should unravel the effects of child maltreatment on the family system. Keywords: maltreatment, parental burnout, perfectionistic strivings, perfectionistic concerns, perfectionism, violence
Procedia PDF Downloads 71
8978 A Comparative Analysis of E-Government Quality Models
Authors: Abdoullah Fath-Allah, Laila Cheikhi, Rafa E. Al-Qutaish, Ali Idri
Abstract:
Many quality models have been used to measure the quality of e-government portals. However, the absence of an international consensus on e-government portal quality models results in many differences in terms of quality attributes and measures. The aim of this paper is to compare and analyze the existing e-government quality models proposed in the literature (those that are based on ISO standards and those that are not) in order to propose guidelines for building a good and useful e-government portal quality model. Our findings show that there is no e-government portal quality model based on the new international standard ISO 25010. Besides that, the quality models are not based on a best-practice model that would allow agencies both to measure e-government portal quality and to identify missing best practices for those portals. Keywords: e-government, portal, best practices, quality model, ISO, standard, ISO 25010, ISO 9126
Procedia PDF Downloads 560