Search results for: complexity measurement
3883 The Development of a Residual Stress Measurement Method for Roll Formed Products
Authors: Yong Sun, Vladimir Luzin, Zhen Qian, William J. T. Daniel, Mingxing Zhang, Shichao Ding
Abstract:
The residual stresses in roll formed products are generally very high and unpredictable. This is due to the redundant plastic deformation that occurs in the roll forming process, and it can cause various product defects. Although the residual stress state of a roll formed product consists of longitudinal and transverse components, the longitudinal component plays the key role in product defects, and therefore only the longitudinal residual stresses concern roll forming researchers and engineers. However, how to inspect the residual stresses of a product quickly and economically, as a routine operation, is still a challenge. This paper introduces a residual stress measurement method, called the slope cutting method, to study the longitudinal residual stresses layer by layer in a roll formed product, or in a product made by a similar process such as a rolled sheet. The detailed measuring procedure is given and discussed. The variation of residual stress through the layers can be derived from the variation of curvature across the different layers and cutting steps. The slope cutting method has been explored and validated by an experimental study on a roll-formed square tube, and the neutron diffraction method was applied to verify the accuracy of the newly proposed layer removal results. The two sets of results agree with each other very well; the method is therefore expected to become a routine test for monitoring the quality of formed products, which would have a great impact on the roll forming industry.
Keywords: roll forming, residual stress, measurement method, neutron diffraction
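The curvature-to-stress step can be sketched with simple beam bending, where the stress released by removing a layer is roughly sigma ≈ E·z·Δκ. This is an illustrative model only, not the authors' formulation; all values below (modulus, thickness, curvatures, the choice of z) are invented assumptions.

```python
# Hypothetical sketch: estimating released longitudinal stress per removed
# layer from the measured change in curvature, using simple beam bending
# (sigma ~ E * z * delta_kappa). Values are illustrative, not from the paper.

E = 210e9           # Young's modulus of steel, Pa (assumed)
thickness = 2.0e-3  # sheet thickness, m (assumed)

# Curvature (1/m) measured after each successive layer-removal step
curvatures = [0.00, 0.08, 0.15, 0.19]

def released_stress_per_step(curvatures, E, thickness):
    """Approximate stress released at each step from the curvature change.

    z is taken as the distance from the mid-plane to the removed layer;
    here we crudely use half the remaining thickness at each step.
    """
    stresses = []
    n_steps = len(curvatures) - 1
    layer = thickness / n_steps
    for i in range(n_steps):
        remaining = thickness - i * layer
        z = remaining / 2.0
        d_kappa = curvatures[i + 1] - curvatures[i]
        stresses.append(E * z * d_kappa)  # Pa
    return stresses

sigma = released_stress_per_step(curvatures, E, thickness)
print([round(s / 1e6, 1) for s in sigma])  # MPa per step
```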
Procedia PDF Downloads 365
3882 A Paradigm for Characterization and Checking of a Human Noise Behavior
Authors: Himanshu Dehra
Abstract:
This paper presents a paradigm for the characterization and checking of human noise behavior. Definitions of 'noise' and 'noise behavior' are devised, and the concept of characterizing and examining noise behavior is derived from the proposed paradigm of psychoacoustics. The measurement of human noise behavior is discussed through definitions of noise sources and noise measurements. The noise sources, noise measurement equations and noise filters are further illustrated through examples. The theory and significance of solar energy acoustics are presented for life and its activities. Human comfort and health are correlated with the human brain through physiological responses and noise protection. Examples of heat stress, intense heat, sweating and evaporation are also enumerated.
Keywords: human brain, noise behavior, noise characterization, noise filters, physiological responses, psychoacoustics
Procedia PDF Downloads 508
3881 Impact of Changes of the Conceptual Framework for Financial Reporting on the Indicators of the Financial Statement
Authors: Nadezhda Kvatashidze
Abstract:
The International Accounting Standards Board has updated the conceptual framework for financial reporting. The main reason is to resolve accounting tasks that arise from market development and from business transactions with new economic content. Investors are also calling for greater transparency of information and responsibility for results, so that they can make more accurate risk assessments and forecasts. All of this makes it necessary to develop the conceptual framework further so that users receive useful information. Market development and certain shortcomings of the conceptual framework revealed in practice require its reconsideration and new solutions. Some issues and concepts, such as disclosure and supply of information, its qualitative characteristics, assessment, and measurement uncertainty, had to be supplemented and refined. The recognition criteria for certain elements of reporting (assets and liabilities) also had to be updated. All of this is set out in the updated edition of the conceptual framework for financial reporting, a comprehensive collection of concepts underlying preparation of the financial statement. The main objective of the revision is to improve financial reporting through a package of clear concepts. This will help the International Accounting Standards Board (IASB) set a common “Approach & Reflection” for similar transactions on the basis of mutually accepted concepts. As a result, companies will be able to develop coherent accounting policies for transactions or events to which no standard applies, or where a standard allows a choice of accounting policy.
Keywords: conceptual framework, measurement basis, measurement uncertainty, neutrality, prudence, stewardship
Procedia PDF Downloads 126
3880 An AK-Chart for the Non-Normal Data
Authors: Chia-Hau Liu, Tai-Yue Wang
Abstract:
Traditional multivariate control charts assume that measurements from manufacturing processes follow a multivariate normal distribution. However, this assumption may not hold, or may be difficult to verify, because in practice not all measurements from manufacturing processes are normally distributed. This study develops a new multivariate control chart for monitoring processes with non-normal data. We propose a mechanism that integrates a one-class classification method with an adaptive technique; the adaptive technique improves the sensitivity of one-class classification to small shifts in statistical process control. In addition, the design provides an easy way to allocate the type I error, so it is easier to implement. Finally, a simulation study and real data from industry are used to demonstrate the effectiveness of the proposed control chart.
Keywords: multivariate control chart, statistical process control, one-class classification method, non-normal data
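The general idea of one-class monitoring for non-normal data can be sketched without the paper's specific AK-chart: score each new observation by its distance to the nearest in-control point, and set the control limit as the empirical (1 − alpha) quantile of in-control scores, which allocates the type I error directly. The data, scoring rule, and threshold choice below are invented for illustration.

```python
import math, random

# Minimal sketch of one-class monitoring for non-normal data (not the
# paper's exact AK-chart): score new observations by distance to their
# nearest in-control point, and set the control limit as the empirical
# (1 - alpha) quantile of in-control scores, which sets the type I error.

random.seed(0)

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# In-control training data: skewed (non-normal) bivariate observations
train = [(random.expovariate(1.0), random.expovariate(0.5)) for _ in range(200)]

def score(x, reference):
    """Distance from x to its nearest neighbour in the reference set."""
    return min(dist(x, r) for r in reference)

alpha = 0.05
# Leave-one-out scores of the in-control data
scores = sorted(score(p, [q for q in train if q is not p]) for p in train)
limit = scores[int((1 - alpha) * len(scores))]  # empirical (1-alpha) quantile

def out_of_control(x):
    return score(x, train) > limit

print(out_of_control((0.5, 1.0)), out_of_control((15.0, 20.0)))
```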
Procedia PDF Downloads 422
3879 Examining the Effects of College Education on Democratic Attitudes in China: A Regression Discontinuity Analysis
Authors: Gang Wang
Abstract:
Education is widely believed to be a prerequisite for democracy and civil society, but the causal link between education and outcome variables is usually hard to identify. This study applies a fuzzy regression discontinuity design to examine the effects of college education on democratic attitudes in the Chinese context. In the analysis, treatment assignment is determined by students' college entry years and is thus naturally selected by subjects' ages. Using a sample of Chinese college students collected in Beijing in 2009, this study finds that college education actually reduces undergraduates' motivation for political development in China but promotes political loyalty to the authoritarian government. Further hypothesis tests explain these interesting findings from two perspectives. The first relates to the complexity of politics: as college students progress over time, they increasingly realize the complexity of political reform in China's authoritarian regime and prefer to stay away from politics. The second relates to students' career opportunities: as students approach graduation, they are immersed in job hunting and have a reduced interest in political freedom.
Keywords: China, college education, democratic attitudes, regression discontinuity
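The core of a fuzzy regression discontinuity estimate is a local Wald ratio: the jump in the outcome at the cutoff divided by the jump in treatment probability. A minimal numeric sketch, with entirely invented data (not the study's sample), is:

```python
# Illustrative fuzzy regression discontinuity (a sketch, not the study's
# code): the local Wald estimate is the jump in the outcome at the cutoff
# divided by the jump in treatment probability. All data are made up.

# (running variable: age, treated indicator, outcome score)
data = [
    (17, 0, 2.0), (17, 0, 2.2), (17, 1, 2.5),  # below cutoff, rarely treated
    (18, 1, 3.0), (18, 1, 3.2), (18, 0, 2.4),  # at/above cutoff, mostly treated
    (19, 1, 3.1), (19, 1, 3.3), (16, 0, 1.9),
]

cutoff = 18

def mean(xs):
    return sum(xs) / len(xs)

below = [(d, y) for a, d, y in data if a < cutoff]
above = [(d, y) for a, d, y in data if a >= cutoff]

jump_y = mean([y for _, y in above]) - mean([y for _, y in below])
jump_d = mean([d for d, _ in above]) - mean([d for d, _ in below])

wald = jump_y / jump_d  # local average treatment effect for compliers
print(round(wald, 3))
```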
Procedia PDF Downloads 351
3878 Convex Restrictions for Outage Constrained MU-MISO Downlink under Imperfect Channel State Information
Authors: A. Preetha Priyadharshini, S. B. M. Priya
Abstract:
In this paper, we consider the MU-MISO downlink scenario under imperfect channel state information (CSI). The main issue under imperfect CSI is to keep the outage probability of each user's achievable rate below a given threshold. Such rate outage constraints present significant analytical challenges. Many probabilistic methods have been used for the transmit optimization problem under imperfect CSI. Here, two convex restriction methods, one based on a decomposition-based large deviation inequality and one based on a Bernstein-type inequality, are used to solve the optimization problem under imperfect CSI. These methods achieve improved output quality at lower complexity, and they provide safe, tractable approximations of the original rate outage constraints. Based on implementations of these methods, performance has been evaluated in terms of feasible rate and average transmission power. The simulation results show that both methods offer significantly improved outage quality and lower computational complexity.
Keywords: imperfect channel state information, outage probability, multiuser multi-input single-output, channel state information
Procedia PDF Downloads 813
3877 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector
Authors: Mariam Vardiashvili
Abstract:
The economic significance of the asset impairment process is considerable. Impairment reflects a reduction of the future economic benefits or service potential embodied in an asset. The assets owned by public sector entities either bring economic benefits or are used to deliver free-of-charge services; consequently, they are classified as cash-generating and non-cash-generating assets. IPSAS 21, Impairment of Non-Cash-Generating Assets, and IPSAS 26, Impairment of Cash-Generating Assets, have been designed with this specificity in mind. When measuring impairment of assets, it is important to select the relevant methods. For measuring the impairment of non-cash-generating assets, IPSAS 21 recommends three methods: the depreciated replacement cost approach, the restoration cost approach, and the service units approach. The value in use of cash-generating assets, as per IPSAS 26, is measured by the discounted value of the cash flows to be received in the future. The article classifies public sector assets as non-cash-generating or cash-generating and also deals with the factors that should be considered when evaluating the impairment of assets. The essence of impairment of non-financial assets and the methods of measuring it are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is put on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and on how to select among them. The traditional and expected cash flow approaches for calculating the discounted value are reviewed. The article also discusses the recognition of impairment loss and its reflection in the financial reporting.
The article concludes that, whatever the functional purpose of the impaired asset and whichever method is used to measure it, the financial reporting should present realistic information regarding the value of the assets. In the theoretical development of the issue, the methods of scientific abstraction, analysis and synthesis were used, and the research was carried out with a systemic approach. The research draws on international accounting standards and on theoretical research and publications by Georgian and foreign scientists.
Keywords: cash-generating assets, non-cash-generating assets, recoverable (usable restorative) value, value in use
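The depreciated replacement cost approach mentioned above reduces, in its simplest form, to straight-line arithmetic: depreciate the replacement cost in proportion to the service potential already consumed, then compare with the carrying amount. The figures below are invented for illustration.

```python
# Hedged numeric illustration of the IPSAS 21 depreciated replacement cost
# approach for a non-cash-generating asset; all figures are invented.

carrying_amount = 800_000.0     # current book value
replacement_cost = 1_000_000.0  # cost to replace the remaining service potential
useful_life = 20                # years
age = 8                         # years of service potential already consumed

# Recoverable service amount: replacement cost depreciated (straight-line
# here) to reflect the service potential already consumed.
recoverable_service_amount = replacement_cost * (useful_life - age) / useful_life

impairment_loss = max(0.0, carrying_amount - recoverable_service_amount)
print(recoverable_service_amount, impairment_loss)
```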
Procedia PDF Downloads 143
3876 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing
Authors: S. Bouhouche, R. Drai, J. Bast
Abstract:
This paper is concerned with a method for evaluating the uncertainty of steel sample content measured by X-ray fluorescence. The considered method of analysis is a comparative technique based on X-ray fluorescence; the calibration step assumes adequate knowledge of the chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov chain Monte Carlo (MCMC) for uncertainty estimation of steel content analysis. The Kalman filter algorithm is extended to identify a model of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states reduce to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability distribution function (PDF), from the initial state and the target distribution using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; the uncertainty budget is established for the steel Mn (wt%), Cr (wt%), Ni (wt%) and Mo (wt%) content, respectively. A comparative study between the conventional procedure and the proposed method is given. Such approaches can be applied to construct an accurate computing procedure for uncertainty measurement.
Keywords: Kalman filter, Markov chain Monte Carlo, X-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement
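The Kalman-filter half of such an approach can be sketched in its simplest form: a scalar filter estimating a constant content value (say, Mn in wt%) from noisy XRF readings, with the posterior variance giving a standard uncertainty. This is a generic sketch, not the paper's extended filter, and the readings and noise levels are invented.

```python
# Sketch of the Kalman-filter side only (the paper couples it with MCMC):
# a scalar filter estimating a constant Mn content (wt%) from noisy XRF
# readings. The readings and noise levels below are invented.

readings = [0.52, 0.48, 0.51, 0.50, 0.49, 0.53]  # simulated XRF Mn wt%

x = 0.0      # state estimate (Mn wt%)
P = 1.0      # estimate variance (large: no prior knowledge)
R = 0.01**2  # measurement noise variance
Q = 0.0      # no process noise: the true content is constant

for z in readings:
    P = P + Q            # predict (state is constant)
    K = P / (P + R)      # Kalman gain
    x = x + K * (z - x)  # update estimate with the new reading
    P = (1 - K) * P      # update variance

print(round(x, 4), P ** 0.5)  # estimate and its standard uncertainty
```

With a large initial variance and no process noise, the estimate converges to (approximately) the running mean of the readings, which is a useful sanity check.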
Procedia PDF Downloads 283
3875 Collaboration During Planning and Reviewing in Writing: Effects on L2 Writing
Authors: Amal Sellami, Ahlem Ammar
Abstract:
Writing is acknowledged to be a cognitively demanding and complex task. Indeed, the writing process is composed of three iterative sub-processes, namely planning, translating (writing), and reviewing. Not only do second or foreign language learners need to write according to this process, but they also need to respect the norms and rules of language and writing in the text to be produced. Accordingly, researchers have suggested approaching writing as a collaborative task in order to alleviate its complexity. Collaboration has consequently been implemented during the whole writing process or only during planning or reviewing. Researchers report that implementing collaboration during the whole process can be demanding in terms of time compared to individual writing tasks, and because of these time constraints, teachers may avoid it. For this reason, it may be pedagogically more realistic to limit collaboration to one of the writing sub-processes (i.e., planning or reviewing). However, previous research implementing collaboration in planning or reviewing is limited and fails to explore the effects of these conditions on the written text. Consequently, the present study examines the effects of collaboration in planning and collaboration in reviewing on the written text. To reach this objective, quantitative as well as qualitative methods are deployed to examine the written texts holistically and in terms of fluency, complexity, and accuracy. The participants include 4 pairs in each group (n=8). They participated in two experimental conditions: (1) collaborative planning followed by individual writing and individual reviewing, and (2) individual planning followed by individual writing and collaborative reviewing.
The comparative findings indicate that while collaborative planning resulted in better overall text quality (specifically, better content and organization ratings), better fluency, better complexity, and fewer lexical errors, collaborative reviewing produced better accuracy and fewer syntactic and mechanical errors. The discussion of the findings suggests the need for more comparative research to further explore the effects of collaboration in planning or in reviewing. A pedagogical implication of the study is that teachers may choose between implementing collaboration in planning or in reviewing depending on their students' needs and on what the students most need to improve.
Keywords: collaboration, writing, collaborative planning, collaborative reviewing
Procedia PDF Downloads 99
3874 A New and Simple Method of Plotting Binocular Single Vision Field (BSVF) using the Cervical Range of Motion - CROM - Device
Authors: Mihir Kothari, Heena Khan, Vivek Rathod
Abstract:
Assessment of the binocular single vision field (BSVF) is traditionally done using a Goldmann perimeter. The measurement of the BSVF is important for the management of incomitant strabismus, viz. orbital fractures, thyroid orbitopathy, oculomotor cranial nerve palsies, Duane syndrome, etc. In this paper, we describe a new technique for measuring the BSVF using a CROM device. The Goldmann perimeter is a bulky and expensive (5000.00 Euro or more) instrument which is almost obsolete in contemporary ophthalmology practice, whereas a CROM device can easily be made in a DIY (do it yourself) manner for a fraction of the price of the perimeter (only 15.00 Euro). Moreover, the CROM device is useful for the accurate measurement of ocular torticollis, viz. nystagmus, paralytic or incomitant squint, etc., and it is highly portable.
Keywords: binocular single vision, perimetry, cervical range of motion, visual field, binocular single vision field
Procedia PDF Downloads 66
3873 Application of Fuzzy Analytical Hierarchical Process in Evaluation Supply Chain Performance Measurement
Authors: Riyadh Jamegh, AllaEldin Kassam, Sawsan Sabih
Abstract:
In modern market conditions, organizations face a high-pressure environment characterized by globalization, intense competition, and customer orientation, so it is crucial to know and control the weak and strong points of the supply chain in order to improve performance. Performance measurement is therefore presented as an important tool of supply chain management, because it enables organizations to control, understand, and improve their efficiency. This paper aims to identify supply chain performance measurement (SCPM) by using the Fuzzy Analytical Hierarchical Process (FAHP). In our application, the performance of organizations is estimated based on four parameters: the cost parameter indicator (CPI), the inventory turnover parameter indicator (INPI), the raw material parameter indicator (RMPI), and the safety stock level parameter indicator (SSPI). These indicators vary in their impact on performance depending on the policies and strategies of the organization. In this research, the FAHP technique is used to identify the importance of these parameters; a first fuzzy inference step (FIR1) is then applied to compute a performance indicator for each factor based on the factor's importance and its value. A second fuzzy inference step (FIR2) is applied to integrate the effect of these indicators and identify the SCPM, which represents the required output. The developed approach provides an effective tool for evaluating supply chain performance measurement.
Keywords: fuzzy performance measurements, supply chain, fuzzy logic, key performance indicator
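The FAHP weighting step can be sketched with triangular fuzzy pairwise comparisons and Buckley's geometric-mean method, followed by centroid defuzzification. The pairwise judgements below are invented, not those of the study; only the four indicator names come from the abstract.

```python
# Hedged sketch of deriving indicator weights with fuzzy AHP (Buckley's
# geometric-mean method) for the four indicators CPI, INPI, RMPI, SSPI.
# The pairwise judgements below are invented, not those of the study.

# Triangular fuzzy numbers (l, m, u); M[i][j] compares indicator i to j.
M = [
    [(1, 1, 1), (1, 2, 3), (2, 3, 4), (3, 4, 5)],
    [(1/3, 1/2, 1), (1, 1, 1), (1, 2, 3), (2, 3, 4)],
    [(1/4, 1/3, 1/2), (1/3, 1/2, 1), (1, 1, 1), (1, 2, 3)],
    [(1/5, 1/4, 1/3), (1/4, 1/3, 1/2), (1/3, 1/2, 1), (1, 1, 1)],
]

def geo_mean(row, k):
    """Geometric mean of the k-th component (l, m or u) across a row."""
    prod = 1.0
    for t in row:
        prod *= t[k]
    return prod ** (1.0 / len(row))

# Fuzzy weight of each row, then centroid defuzzification and normalisation
r = [tuple(geo_mean(row, k) for k in range(3)) for row in M]
total = tuple(sum(ri[k] for ri in r) for k in range(3))
# Fuzzy division: (l, m, u) / (L, Mm, U) = (l/U, m/Mm, u/L)
fuzzy_w = [(ri[0] / total[2], ri[1] / total[1], ri[2] / total[0]) for ri in r]
crisp = [sum(w) / 3.0 for w in fuzzy_w]    # centroid defuzzification
weights = [c / sum(crisp) for c in crisp]  # normalise to sum to 1

print([round(w, 3) for w in weights])  # CPI, INPI, RMPI, SSPI
```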
Procedia PDF Downloads 141
3872 Service-Oriented Enterprise Architecture (SoEA) Adoption and Maturity Measurement Model: A Systematic Review
Authors: Nur Azaliah Abu Bakar, Harihodin Selamat, Mohd Nazri Kama
Abstract:
This article provides a systematic review of existing research related to Service-oriented Enterprise Architecture (SoEA) adoption and maturity measurement models. The review's main goals are to support research, to facilitate other researchers' searches for relevant studies, and to propose areas for future study within this area. In addition, this article provides useful, research-based information on SoEA adoption issues and the related maturity models. The review results suggest that motives, critical success factors (CSFs), implementation status and benefits are the most frequently studied areas, and that each of these areas would benefit from further exposure.
Keywords: systematic literature review, service-oriented architecture, adoption, maturity model
Procedia PDF Downloads 324
3871 The Impact of Direct and Indirect Pressure Measuring Systems on the Pressure Mapping for the Medical Compression Garments
Authors: Arash M. Shahidi, Tilak Dias, Gayani K. Nandasiri
Abstract:
While graduated compression is the foundation of the treatment and management of many medical complications, such as leg ulcers, varicose veins, and lymphedema, interface pressure has been monitored using sensors that operate on diverse principles. The variation among pressure readings collected using different interface pressure measurement systems makes it difficult to take decisions regarding compression therapy. It is therefore crucial to acknowledge the differences between direct and indirect pressure measurement systems among the commercially available devices: AMI, PicoPress and OPM are direct measurement systems, while HATRA (BSI), HOSY (RAL-GZ) and FlexiForce are indirect ones. Piezo-resistive sensors such as FlexiForce measure the change in resistance corresponding to the force applied to the sensing area. Direct pressure measurement systems are capable of measuring interface pressure in the three-dimensional state, whereas indirect pressure measurement systems stretch the fabric in the two-dimensional direction and extrapolate pressure from the surface tension measured on the device, neglecting a vital factor: the radius of curvature. In this study, a leg mannequin of known dimensions was fitted with a knitted class 3 compression stocking, and the data collected from the different available systems (AMI, PicoPress, FlexiForce, and HATRA) were evaluated and compared. The results showed a discrepancy between the HATRA, AMI, PicoPress, and FlexiForce readings against the pressure standard used to generate the class 3 compression stocking. As predicted, a higher pressure value was monitored with the direct interface measurement systems than with HATRA, owing to the effect of the radius of curvature.
Keywords: AMI, FlexiForce, graduated compression, HATRA, interface pressure, PicoPress
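Why the radius of curvature matters can be seen through Laplace's law, pressure = tension / radius: the same fabric tension produces a different interface pressure at the ankle than at the calf, which an indirect system measuring tension on a flat fabric cannot capture without assuming a radius. The tension and radii below are illustrative, not the study's measurements.

```python
# Sketch of why the radius of curvature matters (Laplace's law, P = T / r):
# a direct sensor reads interface pressure on the 3-D limb, while an
# indirect system measures fabric tension flat and must assume a radius.
# The tension and radii below are illustrative.

tension = 240.0  # fabric tension per unit width, N/m (assumed)
mmHg = 133.322   # pascals per mmHg

def interface_pressure(tension, radius):
    """Laplace's law: pressure (Pa) = tension (N/m) / radius of curvature (m)."""
    return tension / radius

ankle = interface_pressure(tension, 0.045)  # ~4.5 cm radius of curvature
calf = interface_pressure(tension, 0.065)   # ~6.5 cm radius of curvature

print(round(ankle / mmHg, 1), round(calf / mmHg, 1))  # pressures in mmHg
```

The same tension gives a markedly higher pressure at the smaller ankle radius, which is the graduated-compression effect, and also why neglecting curvature skews indirect readings.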
Procedia PDF Downloads 352
3870 Increase Productivity by Using Work Measurement Technique
Authors: Mohammed Al Awadh
Abstract:
For businesses to take advantage of the opportunities for expanded production and trade that have arisen from globalization and increased competition, productivity growth is required. The number of available resources decreases with each passing day while demand keeps increasing; in response, firms face growing pressure to improve the efficiency with which they use their resources. As scientific methods, work and time study techniques have been employed in all manufacturing and service industries to raise the efficiency with which the factors of production are used. The goal of this research is to improve the productivity of a manufacturing company's production system through work measurement. The work cycles were broken down into more manageable and quantifiable elements, which were noted on the observation sheet. The operation was analysed to identify value-added and non-value-added components, and observations were recorded for each of the different trials.
Keywords: time study, work measurement, work study, efficiency
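The standard time-study arithmetic behind such a work measurement exercise is short: average the observed cycle times, apply a performance rating, then add allowances. The observation data, rating, and allowance below are invented for illustration.

```python
# Minimal time-study calculation (a sketch of the standard approach, with
# invented observation data):
#   standard time = observed time * performance rating * (1 + allowances)

observed_cycles = [1.9, 2.1, 2.0, 2.2, 1.8]  # minutes per cycle, from the sheet
rating = 1.10      # operator judged 10% faster than normal pace (assumed)
allowances = 0.15  # 15% for fatigue, personal needs and delays (assumed)

observed_time = sum(observed_cycles) / len(observed_cycles)
normal_time = observed_time * rating
standard_time = normal_time * (1 + allowances)

print(round(standard_time, 3))  # minutes per cycle
```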
Procedia PDF Downloads 69
3869 Effect of Filter Paper Technique in Measuring Hydraulic Capacity of Unsaturated Expansive Soil
Authors: Kenechi Kurtis Onochie
Abstract:
This paper shows the use of the filter paper technique in the measurement of matric suction of unsaturated expansive soil around the Haspolat region of Lefkosa, North Cyprus, in order to establish the soil water characteristic curve (SWCC), also called the soil water retention curve (SWRC). The dry filter paper approach, standardized in ASTM D5298-03 (2003), in which the filter paper is initially dry, was adopted, using Whatman No. 42 filter paper for the matric suction measurement. The maximum dry density of the soil was obtained as 2.66 g/cm³ and the optimum moisture content as 21%. The soil was found to have a high air entry value of 1847.46 kPa, indicating finer particles, and a 25% hydraulic capacity using the filter paper technique. The filter paper technique proved to be very useful for measuring the hydraulic capacity of unsaturated expansive soil.
Keywords: SWCC, matric suction, filter paper, expansive soil
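The conversion at the heart of the technique is a calibration from filter paper water content to matric suction. The bilinear Whatman No. 42 calibration commonly cited from ASTM D5298 is sketched below; the coefficients should be verified against the standard before use, and the water contents are invented.

```python
# Sketch of turning filter paper water content into matric suction using the
# bilinear Whatman No. 42 calibration commonly cited from ASTM D5298
# (verify the coefficients against the standard before relying on them):
#   log10(suction, kPa) = 5.327 - 0.0779 * w   for w <= 45.3 %
#   log10(suction, kPa) = 2.412 - 0.0135 * w   for w >  45.3 %

def suction_kpa(w_percent):
    """Matric suction (kPa) from filter paper water content w (%)."""
    if w_percent <= 45.3:
        return 10 ** (5.327 - 0.0779 * w_percent)
    return 10 ** (2.412 - 0.0135 * w_percent)

# Paired (water content %, suction kPa) points that would build an SWCC
for w in (20.0, 40.0, 60.0):
    print(w, round(suction_kpa(w), 1))
```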
Procedia PDF Downloads 178
3868 Development of an Artificial Ear for Bone-Conducted Objective Occlusion Measurement
Authors: Yu Luan
Abstract:
The bone-conducted occlusion effect (OE) is characterized by a discomforting sensation of fullness experienced in an occluded ear. This phenomenon arises from various stimuli, such as the wearer's own speech, chewing, and walking, which generate vibrations transmitted through the body to the ear canal walls. The bone-conducted OE occurs because of the pressure build-up inside the occluded ear caused by sound radiating into the ear canal cavity from its walls. In the hearing aid industry, artificial ears are used as a tool for developing hearing aids. However, the currently available commercial artificial ears focus primarily on purely acoustic measurements, neglecting bone-conducted vibration. This research endeavors to develop an artificial ear specifically designed for bone-conducted occlusion measurements. Finite element analysis (FEA) modeling has been employed to gain insight into the behavior of the artificial ear.
Keywords: artificial ear, bone conducted vibration, occlusion measurement, finite element modeling
Procedia PDF Downloads 88
3867 Second-Order Complex Systems: Case Studies of Autonomy and Free Will
Authors: Eric Sanchis
Abstract:
Although there is no definitive consensus on a precise definition of a complex system, it is generally considered that a system is complex by nature. The work presented here illustrates a different point of view: a system becomes complex only with regard to the question posed to it, i.e., with regard to the problem which has to be solved. A complex system is a couple (question, object). Because the number of questions that can be posed to a given object is potentially substantial, complexity does not present a uniform face. Two types of complex systems are clearly identified: first-order complex systems and second-order complex systems. First-order complex systems physically exist. They are well known because they have been studied by the scientific community for a long time. In second-order complex systems, complexity results from the system's composition and articulation, which are partially unknown; for some of these systems, there is no evidence of their existence. Vagueness is the keyword characterizing this kind of system. Autonomy and free will, two mental productions of the human cognitive system, can be identified as second-order complex systems. A classification based on the structure of properties makes it possible to discriminate complex properties from the others and to model this kind of second-order complex system. The final outcome is an implementable synthetic property that distinguishes the solid aspects of the actual property from those that are uncertain.
Keywords: autonomy, free will, synthetic property, vaporous complex systems
Procedia PDF Downloads 205
3866 Modeling of Wind Loads on Heliostats Installed in South Algeria of Various Pylon Height
Authors: Hakim Merarda, Mounir Aksas, Toufik Arrif, Abd Elfateh Belaid, Amor Gama, Reski Khelifi
Abstract:
Knowledge of wind loads is important for developing a heliostat with good performance. These loads can be calculated from mathematical equations based on several parameters: the air density, the wind velocity, the aspect ratio of the mirror (height/width), and a coefficient depending on the height of the tower. Measured data for the wind velocity and the air density were used in a numerical simulation of the wind profile, performed on heliostats with different pylon heights, 1 m² mirror areas, and a mirror aspect ratio equal to 1. The measurement data are taken from the meteorological station installed in Ghardaia, Algeria. The main aim of this work is to find a mathematical correlation between the wind loads and the height of the tower.
Keywords: heliostat, solar tower power, wind loads simulation, South Algeria
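A generic drag-type estimate of how the wind load grows with pylon height can be sketched as F = 0.5·rho·v²·A·c_f, with the wind speed scaled to the mirror elevation by a power-law profile. This is not the study's model; the density, reference speed, exponent, and force coefficient are all assumed for illustration.

```python
# Hedged sketch of a wind-load estimate versus pylon height: drag-type load
# F = 0.5 * rho * v^2 * A * c_f, with the wind speed scaled to the mirror
# elevation by a power-law profile. All coefficients are illustrative.

rho = 1.06    # air density, kg/m^3 (assumed site value)
v_ref = 10.0  # wind speed at the reference height, m/s (assumed)
h_ref = 10.0  # anemometer reference height, m (assumed)
alpha = 0.14  # power-law exponent for open terrain (assumed)
area = 1.0    # mirror area, m^2 (as in the study)
c_f = 2.0     # assumed peak force coefficient for a flat plate

def wind_speed(h):
    """Power-law wind profile: speed at height h above ground."""
    return v_ref * (h / h_ref) ** alpha

def wind_load(pylon_height):
    v = wind_speed(pylon_height)
    return 0.5 * rho * v ** 2 * area * c_f  # N

for h in (2.0, 4.0, 6.0):
    print(h, round(wind_load(h), 1))
```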
Procedia PDF Downloads 561
3865 An Assessment of Different Blade Tip Timing (BTT) Algorithms Using an Experimentally Validated Finite Element Model Simulator
Authors: Mohamed Mohamed, Philip Bonello, Peter Russhard
Abstract:
Blade Tip Timing (BTT) is a technology concerned with the estimation of both the frequency and the amplitude of rotating blade vibration. A BTT system comprises two main parts: (a) the arrival time measurement system, and (b) the analysis algorithms. Simulators play an important role in the development of the analysis algorithms, since they generate blade tip displacement data from simulated blade vibration under controlled conditions. This enables an assessment of the performance of the different algorithms with respect to their ability to accurately reproduce the original simulated vibration. Such an assessment is usually not possible with real engine data, since there is no practical alternative to BTT for blade vibration measurement. Most simulators used in the literature are based on a simple spring-mass-damper model of the vibration. In this work, a more realistic, experimentally validated simulator based on a finite element (FE) model of a bladed disc (blisk) is first presented. It is then used to generate the data needed to assess different BTT algorithms. The FE model is validated using both a hammer test and two FireWire cameras for the mode shapes. A number of autoregressive methods, fitting methods and state-of-the-art inverse methods (i.e., the Russhard method) are compared. All methods are compared with respect to both synchronous and asynchronous excitations, with both single and simultaneous frequencies. The study assesses the applicability of each method to different conditions of vibration, amounts of sampling data, and testing facilities, according to its performance and efficiency under these conditions.
Keywords: blade tip timing, blisk, finite element, vibration measurement
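The front end shared by all BTT analysis algorithms is the conversion of measured arrival times into tip deflections: a vibrating blade arrives at a probe early or late by Δt, so its tip has moved by d = R·Ω·Δt. The rotor geometry, speed, and timing offsets below are invented for illustration.

```python
import math

# Minimal sketch of the front end of any BTT analysis: converting measured
# blade arrival times into tip deflections, d = R * Omega * (t_expected -
# t_actual). All numbers are invented for illustration.

R = 0.25    # blade tip radius, m (assumed)
rpm = 3000.0
omega = 2 * math.pi * rpm / 60.0  # shaft speed, rad/s

# Expected arrival times of one blade at one probe over successive revs
rev_period = 60.0 / rpm
t_expected = [k * rev_period for k in range(4)]
# Measured arrival times (blade arriving early => positive deflection here)
t_actual = [t - dt for t, dt in zip(t_expected, (0.0, 2e-6, 5e-6, 3e-6))]

deflection_um = [R * omega * (te - ta) * 1e6
                 for te, ta in zip(t_expected, t_actual)]
print([round(d, 1) for d in deflection_um])  # micrometres per revolution
```

The analysis algorithms being compared (autoregressive, fitting, inverse) all start from a deflection sequence like this one and differ in how they recover frequency and amplitude from it.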
Procedia PDF Downloads 311
3864 Optimization of Heat Insulation Structure and Heat Flux Calculation Method of Slug Calorimeter
Authors: Zhu Xinxin, Wang Hui, Yang Kai
Abstract:
Heat flux is one of the most important test parameters in ground thermal protection tests. The slug calorimeter is selected as the main sensor for measuring heat flux in arc wind tunnel tests because of its convenience and low cost. However, because of excessive lateral heat transfer and shortcomings of the calculation method, the heat flux measurement error of the slug calorimeter is large. To enhance measurement accuracy, the heat insulation structure and the heat flux calculation method of the slug calorimeter were improved. A heat transfer model of the slug calorimeter was built according to the energy conservation principle. Based on this model, an insulating sleeve with a hollow structure was designed, which greatly decreased lateral heat transfer, and the slug with its hollow insulating sleeve was encapsulated in a package shell. The improved insulation structure reduced heat loss and ensured that the heat transfer characteristics were almost the same during calibration and testing. A heat flux calibration test was carried out in an arc lamp system for heat flux sensor calibration, and the results show that the accuracy and precision of the slug calorimeter are greatly improved. In addition, a simulation model of the slug calorimeter was built, and the heat flux values in different temperature-rise time periods were calculated with it. The results show that extracting the temperature-rise rate as early as possible yields a smaller heat flux calculation error. The influence of different thermal contact resistances on the calculation error was then analyzed with the simulation model, and the contact resistance between the slug and the insulating sleeve was identified as the main influencing factor. A direct-comparison calibration correction method was proposed based on the heat flux calibration alone.
A numerical calculation correction method was also proposed, based on the heat flux calibration and the simulation model of the slug calorimeter after solving for the contact resistance between the slug and the insulating sleeve. Simulation and test results show that both methods greatly reduce the heat flux measurement error. Finally, the improved slug calorimeter was tested in the arc wind tunnel. The test results show that the repeatability error of the improved slug calorimeter is less than 3%, that the deviation between measurements from different slug calorimeters is less than 3% in the same flow field, and that the deviation between the slug calorimeter and a Gardon gauge is less than 4% in the same flow field.Keywords: correction method, heat flux calculation, heat insulation structure, heat transfer model, slug calorimeter
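The energy balance the abstract describes reduces to the classical slug-calorimeter relation q = ρ·c·L·dT/dt, where the temperature-rise rate is fitted from the slug's recorded trace. A minimal sketch, with illustrative copper properties and a synthetic temperature trace (not the authors' data):

```python
# Slug-calorimeter energy balance: q = rho * c * L * dT/dt.
# rho, c are for oxygen-free copper (illustrative); the 5 mm thickness
# and the 50 K/s synthetic trace below are assumptions for the example.

def slug_heat_flux(times_s, temps_K, rho=8930.0, c=385.0, thickness=0.005):
    """Estimate heat flux (W/m^2) from a least-squares fit of the
    temperature-rise rate over the supplied time window."""
    n = len(times_s)
    t_mean = sum(times_s) / n
    T_mean = sum(temps_K) / n
    num = sum((t - t_mean) * (T - T_mean) for t, T in zip(times_s, temps_K))
    den = sum((t - t_mean) ** 2 for t in times_s)
    dTdt = num / den                      # K/s, slope of the linear fit
    return rho * c * thickness * dTdt     # W/m^2

# Synthetic trace: 50 K/s rise sampled at 100 Hz for one second
times = [i * 0.01 for i in range(100)]
temps = [300.0 + 50.0 * t for t in times]
print(slug_heat_flux(times, temps))  # ~8.6e5 W/m^2 for these values
```

Fitting the slope over an early window, as the abstract recommends, reduces the error introduced once lateral heat losses become significant.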
Procedia PDF Downloads 1183863 Highly Linear and Low Noise AMR Sensor Using Closed Loop and Signal-Chopped Architecture
Authors: N. Hadjigeorgiou, A. C. Tsalikidou, E. Hristoforou, P. P. Sotiriadis
Abstract:
During the last few decades, the continuously increasing demand for accurate and reliable magnetic measurements has paved the way for the development of different types of magnetic sensing systems as well as different measurement techniques. Sensor sensitivity and linearity, signal-to-noise ratio, measurement range, and cross-talk between sensors in multi-sensor applications are only some of the aspects that have been examined in the past. In this paper, a fully analog closed-loop system has been developed to optimize the performance of AMR sensors. The operation of the proposed system has been tested using a Helmholtz coil calibration setup that controls both the amplitude and direction of the magnetic field in the vicinity of the AMR sensor. Experimental testing indicated that improved linearity of the sensor response, as well as low noise levels, can be achieved when the system is employed.Keywords: AMR sensor, closed loop, memory effects, chopper, linearity improvement, sensitivity improvement, magnetic noise, electronic noise
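The linearity gain from a closed loop comes from field nulling: feedback drives a compensation coil so the net field at the AMR element stays near zero, and the coil current becomes the output, independent of the sensor's own nonlinearity. A toy numerical model of that principle (all parameters hypothetical, not the authors' loop design):

```python
# Field-nulling sketch: an integral controller drives a compensation
# coil until the residual field at the sensor vanishes; the coil
# current then tracks the applied field linearly even though the
# sensor transfer curve is nonlinear. Gains and field values are
# illustrative assumptions only.

import math

def sensor_response(b_tesla):
    """Deliberately nonlinear AMR-like transfer curve (arbitrary units)."""
    return math.tanh(b_tesla / 50e-6)

def closed_loop_output(b_ext, coil_gain=1e-4, ki=1e-3, steps=5000):
    """Steady-state coil current (A) that nulls the applied field b_ext (T)."""
    i_coil = 0.0
    for _ in range(steps):
        net = b_ext - coil_gain * i_coil      # residual field at sensor (T)
        i_coil += ki * sensor_response(net)   # integral feedback
    return i_coil

# Output approaches I = B / coil_gain, i.e. 0.1, 0.2, 0.4 A
for b in (10e-6, 20e-6, 40e-6):
    print(closed_loop_output(b))
```

Because the sensor only ever sees a near-zero residual field, its nonlinearity and memory effects contribute far less to the output than in open-loop operation.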
Procedia PDF Downloads 3623862 Effectiveness of Earthing System in Vertical Configurations
Authors: S. Yunus, A. Suratman, N. Mohamad Nor, M. Othman
Abstract:
This paper presents measurement and simulation results obtained by the Finite Element Method (FEM) for the earth resistance (RDC) of interconnected vertical ground rod configurations. The soil resistivity was measured using the Wenner four-pin method, and RDC was measured using the Fall of Potential (FOP) method, as outlined in the standard. A Genetic Algorithm (GA) is employed to interpret the measured soil resistivity in terms of a two-layer soil model. The same soil resistivity data obtained by the Wenner four-pin method were used in the FEM simulation. This paper compares the RDC results obtained by FEM simulation with real measurements at a field site. Good agreement was seen between the RDC values obtained by measurement and by FEM, which shows that FEM is a reliable tool for the design of earthing systems. It was also found that the parallel rod system performs better than a similar setup using a grid layout.Keywords: earthing system, earth electrodes, finite element method, genetic algorithm, earth resistances
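The Wenner four-pin measurement mentioned above converts a measured resistance into an apparent soil resistivity via ρ = 2πaR, valid when the probe depth is small relative to the spacing a. A minimal sketch with illustrative figures (not the paper's field data):

```python
# Wenner four-pin apparent resistivity: rho = 2 * pi * a * R,
# with probe spacing a (m) and measured resistance R (ohm).
# The spacing/resistance pairs below are illustrative only.

import math

def wenner_resistivity(spacing_m, resistance_ohm):
    """Apparent soil resistivity (ohm-m) from a Wenner measurement."""
    return 2.0 * math.pi * spacing_m * resistance_ohm

# A spacing sweep like this is the input a genetic algorithm would fit
# to a two-layer soil model (upper/lower resistivity plus layer depth).
for a, r in [(1.0, 25.0), (2.0, 11.0), (4.0, 4.8)]:
    print(f"a = {a} m -> rho = {wenner_resistivity(a, r):.1f} ohm-m")
```

Larger spacings probe deeper soil, which is why apparent resistivity versus spacing carries the layering information the GA recovers.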
Procedia PDF Downloads 1103861 Specific Biomarker Level and Function Outcome Changes in Treatment of Patients with Frozen Shoulder Using Dextrose Prolotherapy Injection
Authors: Nuralam Sam, Irawan Yusuf, Irfan Idris, Endi Adnan
Abstract:
Frozen shoulder is among the most common shoulder conditions in adults. It causes an uncomfortable sensation that disturbs daily activity, and studies of frozen shoulder are still limited. This study used a true experimental pre- and post-test group design. Participants underwent dextrose prolotherapy injection of the rotator cuff, the intraarticular glenohumeral joint, the long head tendon of the biceps, and the acromioclavicular joint with 15% dextrose at week 2, week 4, and week 6, and were followed for 12 weeks. The specific biomarkers MMP-1 and TIMP-1, ROM, and the DASH score were measured at baseline, at week 6, and at week 12. The data were analyzed by multivariate analysis (repeated-measures ANOVA, paired t-test, and Wilcoxon) to determine the effect of the intervention. The results showed a significant decrease in The Disability of the Arm, Shoulder, and Hand (DASH) score in prolotherapy injection patients at each measurement week (p < 0.05), while in the measurement of Range of Motion (ROM), each direction of shoulder motion showed a significant difference in the average each week, from week 0 to week 6 (p < 0.05). Dextrose prolotherapy injection gave a significant improvement in the functional outcome of the shoulder joint and in ROM, but did not show significant results in assessing the specific biomarkers MMP-1 and TIMP-1 in tissue repair. This study suggests injection prolotherapy as an alternative for frozen shoulder patients, with fewer side effects and better effectiveness than corticosteroid injections.Keywords: frozen shoulder, ROM, DASH score, prolotherapy, MMP-1, TIMP-1
Procedia PDF Downloads 1133860 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements
Authors: Denis A. Sokolov, Andrey V. Mazurkevich
Abstract:
In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft, shipbuilding, and rocket engineering. This has resulted in the development of appropriate measuring instruments that are capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing the measurement results from these devices to a reference measurement based on a linear or spatial basis. The reference used in such measurements could be a reference base or a reference range finder with the capability to measure angle increments (EDM). The base would serve as a set of reference points for this purpose. The concept of the EDM for replicating the unit of measurement has been implemented on a mobile platform, which allows for angular changes in the direction of laser radiation in two planes. To determine the distance to an object, a high-precision interferometer with its own design is employed. The laser radiation travels to the corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by interference signals is calculated in accordance with recommendations from the International Bureau of Weights and Measures for the indirect measurement of time of light passage according to the definition of a meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in a vacuum, n is the refractive index of the medium, and F is the frequency of femtosecond pulse repetition.
The achieved Type A measurement uncertainty of the distance to reflectors 64 m away (N•D/2, where N is an integer), spaced apart relative to each other at a distance of 1 m, does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used and are not a focus of research in this study. The Type B uncertainty components are not taken into account either, as the components that contribute most do not depend on the selected coordinate measuring method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where it is possible to achieve an advantage in terms of accuracy. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the Type A uncertainty of distance measurements to reflectors can be less than 1 micrometer. The results of this research will be utilized to develop a highly accurate mobile absolute range finder designed for the calibration of high-precision laser trackers and laser rangefinders, as well as other equipment, using a 64-meter laboratory comparator as a reference.Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement
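The pulse-separation relation quoted above, D/2 = c/(2nF), can be checked directly. A short sketch; the 60 MHz repetition rate and the refractive index of air are assumptions chosen to reproduce the ~2.5 m figure, not values taken from the paper:

```python
# D/2 = c / (2 * n * F): spacing between successive interference
# maxima along the measurement arm for pulse repetition rate F.
# The 60 MHz rate and n_air are illustrative assumptions.

C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def half_pulse_separation(rep_rate_hz, n_medium=1.000268):
    """Distance between interference maxima (m) for repetition rate F."""
    return C_VACUUM / (2.0 * n_medium * rep_rate_hz)

print(half_pulse_separation(60e6))  # ~2.4976 m in air
```

Any reference-point distance is then an integer multiple N of this half-separation, which is how the 64 m comparator span (N•D/2) is realized.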
Procedia PDF Downloads 593859 Effects of Tomato-Crispy Salad Intercropping on Diameter of Tomato Fruits under Greenhouse Conditions
Authors: Halil Demir, Ersin Polat
Abstract:
This study, in which crispy salad plants were cultivated between the rows of tomato, was conducted in spring 2007 in a research glasshouse at Akdeniz University. Crispy salad (Lactuca sativa var. crispa cv. Bohemia) plants were intercropped with tomato (Solanum lycopersicon cv. Selin F1) plants as the main crop. Tomato seedlings were planted in a double-line system with 100 cm wide spacing, 50 cm narrow spacing, and 50 cm within-row plant spacing. In both the control and the intercropping applications, each plot was 9.75 m2 according to the plantation distances, and there were 26 tomato plants per plot. Crispy salad seedlings were planted with 30 cm spacing as one row in the middle of the tomato plants, and with 30x30 cm spacing as two rows between the plant rows. Moreover, salad seedlings were transplanted between the tomato plants above the tomato rows planted in two rows at intervals of 50 cm, and also with 25x25 cm spacing as a third row in the middle of the tomato rows. While the tomato plants were growing, fruit width and height were measured in the tomato fruits of the third cluster at 15-day intervals, from fruit formation to fruit ripening. According to the results, in the first measurement there were no differences between cropping systems in fruit width, while the greatest fruit height was found in the Control. In the second measurement, the greatest fruit width, 64.39 mm, was determined in the Control, while the cropping systems otherwise showed no differences. In the third measurement, the greatest fruit width and height were obtained from the Control, with 68.47 mm and 55.52 mm, respectively. In conclusion, the treatment in which crispy salad seedlings were planted with 30x30 cm spacing as two rows between the tomato plant rows was determined to be the best intercropping application.Keywords: crispy salad, glasshouse, intercropping, tomato
Procedia PDF Downloads 3213858 Improving the Uniformity of Electrostatic Meter’s Spatial Sensitivity
Authors: Mohamed Abdalla, Ruixue Cheng, Jianyong Zhang
Abstract:
In pneumatic conveying, the solids are mixed with air or gas. In industries such as coal-fired power stations, blast furnaces for iron making, and cement and flour processing, the mass flow rate of solids needs to be monitored or controlled. However, the current gas-solids two-phase flow measurement techniques are not as accurate as the flow meters available for single-phase flow. One of the problems that multi-phase flow meters face is that flow profiles vary with measurement location and with the conditions of pipe routing, bends, elbows and other restriction devices in the conveying system, as well as with conveying velocity and concentration. To measure solids flow rate or concentration with a non-even distribution of solids in the gas, a uniform spatial sensitivity is required of a multi-phase flow meter. However, few meters inherently have such a property. The circular electrostatic meter is a popular choice for gas-solids flow measurement, with its high sensitivity to flow, robust construction, low installation cost and non-intrusive nature. However, such meters have an inherently non-uniform spatial sensitivity. This paper first analyses the spatial sensitivity of the circular electrostatic meter in general and then, by combining the effect of the sensitivity to a single particle with the sensing volume for a given electrode geometry, reveals for the first time how a circular electrostatic meter responds to a roping flow stream, which is much more complex than presently believed. The paper will provide recent research findings on the spatial sensitivity investigation at Teesside University based on finite element analysis using Ansys Fluent software, including time and frequency domain characteristics and the effect of electrode geometry. The simulation results will be compared to the experimental results obtained on a large-scale (14” diameter) rig.
The purpose of this research is to pave the way towards a uniform spatial sensitivity for the circular electrostatic sensor by means of compensation, so as to improve the overall accuracy of gas-solids flow measurement.Keywords: spatial sensitivity, electrostatic sensor, pneumatic conveying, Ansys Fluent software
Procedia PDF Downloads 3673857 The Early Stages of the Standardisation of Finnish Building Sector
Authors: Anu Soikkeli
Abstract:
Early 20th century functionalism aimed at generalising living and rationalising construction, thus laying the foundation for the standardisation of construction components and products. From the 1930s onwards, all measurement and quality instructions for building products, different types of building components, descriptions of working methods complying with advisable building practices, planning, measurement and calculation guidelines, terminology, etc. were called standards. Standardisation was regarded as a necessary prerequisite for the mass production of housing. This article examines the early stages of standardisation in Finland in the 1940s and 1950s, as reflected in the working history of an individual architect, Erkki Koiso-Kanttila (1914-2006). In 1950 Koiso-Kanttila was appointed Head of Design of the Finnish Association of Architects’ Building Standards Committee, a position he held until 1958. His main responsibilities were the development of the RT Building Information File and the compiling of the files.Keywords: architecture, post WWII period, reconstruction, standardisation
Procedia PDF Downloads 4153856 Established Novel Approach for Chemical Oxygen Demand Concentrations Measurement Based Mach-Zehner Interferometer Sensor
Authors: Su Sin Chong, Abdul Aziz Abdul Raman, Sulaiman Wadi Harun, Hamzah Arof
Abstract:
Chemical Oxygen Demand (COD) plays a vital role in determining an appropriate strategy for wastewater treatment, including control of the quality of an effluent. In this study, a new sensing method was introduced for the first time and developed to investigate chemical oxygen demand (COD) using a Mach-Zehnder interferometer (MZI)-based dye sensor. The sensor is constructed by bridging two single-mode fibres (SMF1 and SMF2) with a short section (~20 mm) of multimode fibre (MMF), which is tapered to generate an evanescent field that is sensitive to perturbation of the sensing medium. An increase in COD concentration induces changes in the output intensity and in the effective refractive index between the microfibre and the sensing medium. The adequacy of decisions based on COD values relies on the quality of the measurements. Therefore, the dual output response can be applied to the analytical procedure to enhance measurement quality. This work presents a detailed assessment of the determination of COD values in synthetic wastewaters. Detailed models of the measurement performance, including sensitivity, reversibility, stability, and uncertainty, were successfully validated by proficiency tests supported by sound and objective criteria. The proposed method was also compared with the standard method. The proposed sensor is compact, reliable and feasible for investigating the COD value.Keywords: chemical oxygen demand, environmental sensing, Mach-Zehnder interferometer sensor, online monitoring
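The transduction principle is standard two-beam interference: a change in the effective refractive index of the evanescent-field path shifts the phase difference between the arms and modulates the output intensity. A minimal sketch of that relation; the interaction length, wavelength and index values are illustrative assumptions, not the sensor's calibrated parameters:

```python
# Two-beam MZI model: I = I0 * (1 + cos(phi)) / 2 with
# phi = 2 * pi * L * dn_eff / lambda. L matches the ~20 mm MMF
# section mentioned in the abstract; lambda and dn_eff are assumed.

import math

def mzi_intensity(dn_eff, length_m=20e-3, wavelength_m=1550e-9, i0=1.0):
    """Normalized output intensity for effective-index change dn_eff."""
    phi = 2.0 * math.pi * length_m * dn_eff / wavelength_m
    return i0 * (1.0 + math.cos(phi)) / 2.0

# Sweep a small effective-index change, e.g. as COD concentration rises
for dn in (0.0, 1e-5, 2e-5, 4e-5):
    print(f"dn_eff = {dn:.0e} -> I/I0 = {mzi_intensity(dn):.3f}")
```

Because both intensity and phase respond to the index change, reading the two together (the "dual output response") gives a cross-check that improves measurement quality.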
Procedia PDF Downloads 4943855 Long-Term Sitting Posture Identifier Connected with Cloud Service
Authors: Manikandan S. P., Sharmila N.
Abstract:
Pain may occur in one or more locations in the neck and in the middle and lower back. Numerous factors can lead to back discomfort, which can manifest as sensations in other parts of the body. Up to 80% of people will have low back problems at some stage of their lives, making spine-related pain a highly prevalent ailment. Low back discomfort occurs roughly twice as often as neck pain and about as often as knee pain. According to current studies, using digital devices for extended periods of time and poor sitting posture are the main causes of neck and low back pain. Numerous monitoring techniques have been proposed to improve sitting posture and address these problems. Based on this problem, a sophisticated technique to monitor prolonged sitting posture is suggested in this research. The system is made up of an inertial measurement unit (IMU), a T-shirt, an Arduino board, a buzzer, and a mobile app with cloud services. Based on the anatomical position of the spinal cord, the inertial measurement unit is positioned on the inner back side of the T-shirt. The IMU sensor evaluates hip position, shoulder imbalance, and bending angle. Based on the output provided by the IMU, the data are analyzed by the Arduino, supplied through the cloud, and shared with a mobile app for continuous monitoring. The buzzer sounds if the measured data deviate from the natural position of the human body. The implementation, design, and data prediction for identifying balanced and unbalanced posture using the posture-monitoring T-shirt are discussed further in this article.Keywords: IMU, posture, IOT, textile
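The bending-angle check such a system performs can be sketched from the IMU's accelerometer alone: with the device mounted on the back, the tilt of the gravity vector from the sensor's upright axis gives the forward-bend angle. The 20-degree threshold and axis convention below are hypothetical, not taken from the authors' implementation:

```python
# Hypothetical posture check: derive a bend angle from accelerometer
# axes (g-units) and flag slouching past an assumed 20-degree limit.
# Assumes z points along the spine when the wearer sits upright.

import math

def bend_angle_deg(ax, ay, az):
    """Tilt of the spine-mounted IMU from vertical, in degrees."""
    return math.degrees(math.atan2(math.sqrt(ax**2 + ay**2), az))

def posture_alarm(ax, ay, az, limit_deg=20.0):
    """True -> sound the buzzer / push a cloud notification."""
    return bend_angle_deg(ax, ay, az) > limit_deg

print(posture_alarm(0.0, 0.0, 1.0))   # upright: False
print(posture_alarm(0.5, 0.0, 0.87))  # ~30 deg forward bend: True
```

In the described system this decision would run on the Arduino, with the angle stream forwarded to the cloud service for long-term trend analysis.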
Procedia PDF Downloads 893854 A Gauge Repeatability and Reproducibility Study for Multivariate Measurement Systems
Authors: Jeh-Nan Pan, Chung-I Li
Abstract:
Measurement system analysis (MSA) plays an important role in helping organizations improve their product quality. Generally speaking, the gauge repeatability and reproducibility (GRR) study is performed according to the MSA handbook stated in the QS9000 standards. Usually, a GRR study for assessing the adequacy of gauge variation needs to be conducted prior to process capability analysis. Traditional MSA considers only a single quality characteristic. With the advent of modern technology, industrial products have become very sophisticated, with more than one quality characteristic. Thus, it becomes necessary to perform multivariate GRR analysis for a measurement system when collecting data with multiple responses. In this paper, we take the correlation coefficients among tolerances into account to revise the multivariate precision-to-tolerance (P/T) ratio proposed by Majeske (2008). We then compare the performance of our revised P/T ratio with that of the existing ratios. The simulation results show that our revised P/T ratio outperforms the others in terms of robustness and proximity to the actual value. Moreover, the optimal allocation of several parameters, such as the number of quality characteristics (v), the sample size of parts (p), the number of operators (o), and the number of replicate measurements (r), is discussed using the confidence interval of the revised P/T ratio. Finally, a standard operating procedure (S.O.P.) for performing the GRR study for multivariate measurement systems is proposed based on the research results. Hopefully, it can serve as a useful reference for quality practitioners conducting such studies in industry.Keywords: gauge repeatability and reproducibility, multivariate measurement system analysis, precision-to-tolerance ratio
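The univariate precision-to-tolerance ratio that the multivariate version generalizes is P/T = k·σ_gauge/(USL − LSL), with k = 6 by convention. A minimal sketch with illustrative numbers (the paper's revised multivariate ratio additionally accounts for correlation among tolerances):

```python
# Univariate P/T ratio: the fraction of the tolerance band consumed
# by measurement-system variation. k = 6 follows the usual MSA
# convention; the gauge SD and spec limits below are illustrative.

def pt_ratio(sigma_gauge, lsl, usl, k=6.0):
    """Precision-to-tolerance ratio for a single quality characteristic."""
    return k * sigma_gauge / (usl - lsl)

# Gauge SD of 0.02 on a spec of 10.0 +/- 0.5
ratio = pt_ratio(0.02, 9.5, 10.5)
print(f"P/T = {ratio:.2f}")  # 0.12 -> acceptable by the common < 0.3 rule
```

In a GRR study, σ_gauge itself is estimated from the repeatability and reproducibility variance components of the parts-by-operators ANOVA before being plugged into this ratio.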
Procedia PDF Downloads 262