Search results for: measurement errors
3131 Investigation of the Effects of the Whey Addition on the Biogas Production of a Reactor Using Cattle Manure for Biogas Production
Authors: Behnam Mahdiyan Nasl
Abstract:
In a lab-scale study, the effects of feeding whey into a biogas system, and ways to solve the problems that arose, were analysed. A semi-continuous glass reactor with a total capacity of 13 liters and a working capacity of 10 liters was placed in an incubator, and the temperature was held at 38 °C. Initially, the reactor was charged with 5 liters of animal manure and water at a ratio of 1:1. Over time the gas production rate fell sharply, so that by the fourth day no gas was produced and the system stopped working. At that point the pH was adjusted by adding NaOH, raising it from 5.4 to 7. On the 48th day, the first gas measurement was made and a CH₄ content of 12.07% was detected. After the medium was buffered, the number of gas-producing bacteria in the cattle manure was judged inadequate, and 2 liters of anaerobic mud, up to 20% of the reactor volume, were added. Seven days after the anaerobic mud was added, a second gas measurement was carried out, and biogas containing 43% CH₄ was obtained. From the 61st day of the study, cheese whey was added with the animal manure at 40 mL per day. Over time, however, as the microorganisms in the whey multiplied and trace elements (especially Ni and Co) became limiting, the methane fraction of the biogas decreased. Two weeks after adding PAS, a gas measurement was made and 36.97% CH₄ was detected. Then 0.06 mL of a Ni-Co solution (to reach a concentration of 0.05 mg/L in the reactor mixture) was added to the system for 15 days. To determine the effect of the solution on the archaea, a measurement taken 7 days after the addition was stopped showed that the methane content had increased by 9.03 percentage points, reaching 46%. Lastly, the effect of adding molasses to the reactor was investigated; its effect on bacterial activity was analysed by adding 4 grams to the system. Ten days after the molasses was added, the final measurement showed that the methane content had reached 49%.
Keywords: biogas, cheese whey, cattle manure, energy
Procedia PDF Downloads 334
3130 Continuous Blood Pressure Measurement from Pulse Transit Time Techniques
Authors: Chien-Lin Wang, Cha-Ling Ko, Tainsong Chen
Abstract:
Blood pressure (BP) is one of the vital signs and an index that helps determine the stability of life. Some spinal cord injury patients need to take the tilt table test, during which posture changes abruptly and may cause a patient's BP to change abnormally. This may cause patients to feel discomfort, and even feel as though their life is threatened. A continuous non-invasive BP assessment system could therefore alert health care professionals during rehabilitation when the BP value goes out of range. In our research, a BP assessment system based on the pulse transit time (PTT) technique was developed. The system uses a self-made photoplethysmograph (PPG) sensor and filter circuit to detect two PPG signals and to calculate the time difference between them, so that BP can be assessed immediately from a trend line. According to the results of this study, systolic BP and PTT have a highly negative linear correlation (R² = 0.8). We then used the trend line to assess BP and compared the result with a commercial sphygmomanometer (Omron MX3); the error of the system was within ±10%, which is inside the permissible error range of a commercial sphygmomanometer. Continuous blood pressure measurement by the pulse transit time technique may therefore have the potential to become a convenient method for clinical rehabilitation.
Keywords: continuous blood pressure measurement, PPG, pulse transit time, transit velocity
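The mapping from PTT to systolic pressure described above is a simple linear trend line. A minimal sketch of that idea is shown below, with made-up calibration numbers standing in for the authors' data; the PTT is estimated from the lag between the two PPG channels via cross-correlation.

```python
import numpy as np

# Hypothetical calibration pairs: PTT (ms) vs. cuff systolic BP (mmHg),
# chosen to show the negative linear relationship reported in the abstract.
ptt_ms = np.array([210.0, 225.0, 240.0, 255.0, 270.0])
sbp = np.array([138.0, 128.0, 119.0, 111.0, 102.0])

slope, intercept = np.polyfit(ptt_ms, sbp, deg=1)     # the "trend line"

def estimate_sbp(ppg_proximal: np.ndarray, ppg_distal: np.ndarray, fs: float) -> float:
    """Estimate systolic BP from two PPG channels sampled at fs (Hz):
    PTT is taken as the lag maximizing the cross-correlation."""
    a = ppg_proximal - np.mean(ppg_proximal)
    b = ppg_distal - np.mean(ppg_distal)
    xcorr = np.correlate(b, a, mode="full")
    lag = np.argmax(xcorr) - (len(a) - 1)             # samples by which b lags a
    ptt = 1000.0 * lag / fs                           # convert to ms
    return slope * ptt + intercept
```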
Procedia PDF Downloads 354
3129 A Study on the Acquisition of Chinese Classifiers by Vietnamese Learners
Authors: Quoc Hung Le Pham
Abstract:
In the field of language study, the classifier is an interesting research feature. Among the world's languages, some have classifier systems and some do not. Mandarin Chinese and Vietnamese both have rich classifier systems, but because of differences in language system, cognition, and culture, the syntactic structures of their classifiers also differ. Mandarin Chinese classifiers must collocate with nouns or verbs; as a lexical category they are unlike nouns or verbs, which belong to open classes. Some scholars, however, hold that Mandarin Chinese measure words resemble those of English and other Indo-European languages: words that depend on structure and word formation (suffixes) and form a closed class. Chinese, Vietnamese, Thai, and other Asian languages belong to the second type of classifier language, in which a classifier must appear with most quantity expressions and occurs together with deictic, anaphoric, or quantity elements, not separated from the noun it modifies; such languages are also known as numeral classifier languages. The main syntactic structures of Chinese classifiers are: 'quantity + measure + noun', 'pronoun + measure + noun', 'pronoun + quantity + measure + noun', 'prefix + quantity + measure + noun', 'quantity + adjective + measure + noun', 'quantity (whole number above 10) + duo (多) + measure + noun', 'quantity (around 10) + measure + duo (多) + noun'. The main syntactic structures of Vietnamese classifiers are: 'quantity + measure + noun', 'measure + noun + pronoun', 'quantity + measure + noun + pronoun', 'measure + noun + prefix + quantity', 'quantity + measure + noun + adjective', 'duo (多) + quantity + measure + noun', 'quantity + measure + adjective + pronoun (the quantity word cannot be 1)', 'measure + adjective + pronoun', 'measure + pronoun'. Classifiers are commonly used in daily life, and if Chinese learners fail to use this category correctly, their verbal communication may be negatively affected. The richness of the Chinese classifier system adds to the difficulty foreign learners have in mastering it, especially in the interlanguage of Vietnamese learners. As mentioned above, Vietnamese also has a rich classifier system; the basic structural orders of the two languages are similar, yet differences remain. These similarities and dissimilarities between the Chinese and Vietnamese classifier systems contribute significantly to the characteristic errors made by Vietnamese students acquiring Chinese, which are distinct from the errors made by students from other language backgrounds. From a comparative linguistic perspective, this article examines the commonly used classifiers of Chinese and Vietnamese in two respects: semantics and structural form. The comparison aims to identify the negative transfer from the mother tongue that Vietnamese students may face while learning Chinese classifiers and, through analysis of a classifier questionnaire, to find the causes and patterns of the errors they make. Preliminary analysis shows that Vietnamese students learning Chinese classifiers make errors such as: overuse of the classifier 'ge' (个); misuse of other classifiers, e.g. '*yi zhang ri ji' for 'yi pian ri ji', '*yi zuo fang zi' for 'yi jian fang zi', '*si zhang jin pai' for 'si mei jin pai'; and confusion among the near-synonymous classifiers 'dui, shuang, fu, tao' (对、双、副、套) and 'ke, li' (颗、粒).
Keywords: acquisition, classifiers, negative transfer, Vietnamese learners
Procedia PDF Downloads 452
3128 A Comparative Case Study on Teaching Romanian Language to Foreign Students: Swedes in Lund versus Arabs in Alba Iulia
Authors: Lucian Vasile Bagiu, Paraschiva Bagiu
Abstract:
The study is a contrastive essay on language acquisition and learning, following the outcomes of teaching the Romanian language to foreign students both at Lund University, Sweden (from 2014 to 2017) and at '1 Decembrie 1918' University in Alba Iulia, Romania (2017-2018). Having employed the same teaching methodology (on campus, same curricula) for the same level of study (beginners' level: A1-A2), the essay focuses on the written exam at the end of the semester. The study examines grammar exercises concerned with: the indefinite and the definite article; the conjugation of verbs in the present indicative; the possessive; verbs in the past tense; the subjunctive; and the degrees of comparison of adjectives. Identifying similar errors in identical grammar exercises solved by different groups of foreign students is an opportunity to emphasize the major challenges any foreigner has to face and overcome when trying to acquire the Romanian language. The conclusion draws attention to the complexity of Romanian morphology in several key elements, which may be insurmountable for a foreign speaker regardless of whether language acquisition takes place in a foreign country or at a Romanian university.
Keywords: Arab students, morphological errors, Romanian language, Swedish students, written exam
Procedia PDF Downloads 258
3127 Indoor Real-Time Positioning and Mapping Based on Manhattan Hypothesis Optimization
Authors: Linhang Zhu, Hongyu Zhu, Jiahe Liu
Abstract:
This paper investigates a method of indoor real-time positioning and mapping based on the Manhattan world assumption. In indoor environments, relying solely on feature matching techniques or other geometric algorithms for sensor pose estimation inevitably results in cumulative errors, posing a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan world hypothesis to optimize a feature-matching-based camera pose algorithm, which improves the accuracy of camera pose estimation. A special processing step is applied to image frames that conform to the Manhattan world assumption; when similar frames appear later, they can be used to eliminate drift in the sensor pose estimate, thereby reducing cumulative estimation errors and improving mapping and positioning. Experimental verification shows that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of indoor environments, providing effective technical support for applications such as indoor navigation and robot control.
Keywords: Manhattan world hypothesis, real-time positioning and mapping, feature matching, loopback detection
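To make the drift-correction idea concrete, the sketch below (our illustration, not the authors' code) estimates the absolute camera rotation from surface normals assumed to cluster around the three orthogonal Manhattan directions, by solving an orthogonal Procrustes problem.

```python
import numpy as np

def manhattan_rotation(normals: np.ndarray) -> np.ndarray:
    """Estimate the camera-to-world rotation from surface normals assumed to
    cluster around the three orthogonal Manhattan directions.
    normals: (N, 3) array of unit vectors in the camera frame."""
    # Assign each normal to the closest canonical direction (+/- x, y, z)
    axes = np.vstack([np.eye(3), -np.eye(3)])       # 6 world directions
    targets = axes[np.argmax(normals @ axes.T, axis=1)]
    # Kabsch / orthogonal Procrustes: R minimizing sum ||R n_i - t_i||^2
    h = normals.T @ targets                         # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T      # proper rotation, det = +1

# On a frame satisfying the assumption, this absolute rotation can replace
# the drifting tracked rotation, removing accumulated rotational error.
```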
Procedia PDF Downloads 61
3126 Study on Intensity Modulated Non-Contact Optical Fiber Vibration Sensors of Different Configurations
Authors: Dinkar Dantala, Kishore Putha, Padmavathi Manchineelu
Abstract:
Optical fibers are widely used in the measurement of several physical parameters such as temperature, pressure, and vibration. Measurement of vibration plays a vital role in machinery. In this paper, three fiber optic non-contact vibration sensors, designed on the principle of light intensity modulation, are discussed. The dual plastic optical fiber, fiber optic fused 1x2 coupler, and fiber optic fused 2x2 coupler vibration sensors are compared in terms of frequency range, resolution, and sensitivity. It is concluded that the 2x2 coupler configuration shows a better response than the other two sensors.
Keywords: fiber optic, PMMA, vibration sensor, intensity-modulated
Procedia PDF Downloads 370
3125 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method
Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David
Abstract:
The deflection of the vertical is a quantity used in reducing geodetic measurements related to geoidal networks to the ellipsoidal plane, and it is essential in geoid modeling. Computing the deflection-of-the-vertical components of points in a given area makes it possible to evaluate the standard errors along the north-south and east-west directions. A combined approach to determining the components provides improved results but is labor intensive without an appropriate method. Least squares is a method that uses redundant observations to model a set of measurements that obey certain geometric conditions. This work computes the deflection-of-the-vertical components for the Owerri West local government area of Imo State using the geometric method as the field technique: static-mode GPS observations were combined with precise leveling, the geodetic coordinates of points established within the study area being determined by GPS and the orthometric heights by precise leveling. By least squares, using a MATLAB program, the estimated deflection-of-the-vertical components at the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west components, respectively. The associated standard errors of the processed network vectors were computed as 5.5911e-005 arc seconds for the north-south component and 1.4965e-004 arc seconds for the east-west component. Including the derived deflection-of-the-vertical components in the ellipsoidal model will therefore yield higher observational accuracy, since an uncorrected ellipsoidal model carries observational errors too large for high-quality work. It is thus important to apply the determined deflection-of-the-vertical components for Owerri West Local Government Area, Imo State, Nigeria.
Keywords: deflection of vertical, ellipsoidal height, least square, orthometric height
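As an illustration of the least-squares step, the sketch below uses the astro-geodetic levelling relation ΔN ≈ -(ξ cos α + η sin α)·s with assumed, not measured, baseline values; the study itself used a MATLAB program, so this Python version is only a schematic equivalent.

```python
import numpy as np

# Hypothetical baselines: geoid height change dN_i = (GPS ellipsoidal minus
# leveled orthometric) height differences, azimuth alpha_i, length s_i.
alpha = np.radians([30.0, 75.0, 140.0, 210.0, 300.0])   # assumed azimuths
s = np.array([1200.0, 950.0, 1500.0, 800.0, 1100.0])    # assumed lengths (m)
dN = np.array([-0.15, -0.11, 0.04, 0.17, 0.02])         # assumed dN (m)

# Observation model: dN = -(xi*cos(alpha) + eta*sin(alpha)) * s + e
# Design matrix for the unknowns x = [xi, eta] (radians)
A = -np.column_stack([np.cos(alpha) * s, np.sin(alpha) * s])

x, res, *_ = np.linalg.lstsq(A, dN, rcond=None)
dof = len(dN) - 2
sigma0_sq = res[0] / dof                     # a posteriori variance factor
cov = sigma0_sq * np.linalg.inv(A.T @ A)     # covariance of [xi, eta]

to_arcsec = 206264.806
print("xi  = %.4f arcsec" % (x[0] * to_arcsec))
print("eta = %.4f arcsec" % (x[1] * to_arcsec))
print("std errors:", np.sqrt(np.diag(cov)) * to_arcsec, "arcsec")
```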
Procedia PDF Downloads 209
3124 Study on Concentration and Temperature Measurement with 760 nm Diode Laser in Combustion System Using Tunable Diode Laser Absorption Spectroscopy
Authors: Miyeon Yoo, Sewon Kim, Changyeop Lee
Abstract:
It is important to measure the internal temperature or temperature distribution precisely in combustion systems to increase energy efficiency and reduce pollutants. In large combustion systems especially, such as power plant boilers and the reheating furnaces of steelmaking processes, it is very difficult to measure these physical properties in detail. Tunable diode laser absorption spectroscopy (TDLAS) measurement and analysis is an attractive method for overcoming this difficulty. In this paper, TDLAS methods are used to measure the oxygen concentration and temperature distribution under various experimental conditions.
Keywords: tunable diode laser absorption spectroscopy, temperature distribution, gas concentration
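Temperature from TDLAS is commonly inferred by two-line thermometry: the ratio of the integrated absorbances of two transitions depends only on temperature. A minimal sketch follows, neglecting partition-function and stimulated-emission corrections (an assumption made for illustration; the paper's actual line pairs and data are not reproduced here).

```python
import numpy as np

H_C_OVER_K = 1.4388  # cm*K, second radiation constant hc/k

def two_line_temperature(ratio: float, s1_t0: float, s2_t0: float,
                         e1: float, e2: float, t0: float = 296.0) -> float:
    """Infer gas temperature (K) from the measured absorbance ratio A1/A2 of
    two transitions with lower-state energies e1, e2 (cm^-1) and line
    strengths s1_t0, s2_t0 at reference temperature t0 (K).  Path length and
    mole fraction cancel in the ratio."""
    c = H_C_OVER_K * (e2 - e1)
    return c / (np.log(ratio) - np.log(s1_t0 / s2_t0) + c / t0)
```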
Procedia PDF Downloads 386
3123 Energy Detection Based Sensing and Primary User Traffic Classification for Cognitive Radio
Authors: Urvee B. Trivedi, U. D. Dalal
Abstract:
As wireless communication services grow quickly, the seriousness of spectrum utilization has been rising gradually. Cognitive radio, an emerging technology, has come out to solve today's spectrum scarcity problem. To support the spectrum reuse functionality, secondary users are required to sense the radio frequency environment; once the primary users are found to be active, the secondary users must vacate the channel within a certain amount of time. Spectrum sensing is therefore of significant importance. Once sensing is done, different prediction rules apply to classify the traffic pattern of the primary user. Primary users follow two types of traffic patterns: periodic and stochastic ON-OFF patterns. A cognitive radio can learn the patterns in different channels over time. Two types of classification methods are discussed in this paper: classification by edge detection and classification using the autocorrelation function. The edge detection method has high accuracy but cannot tolerate sensing errors. Autocorrelation-based classification is applicable in the real environment, as it can tolerate some amount of sensing errors.
Keywords: cognitive radio (CR), probability of detection (PD), probability of false alarm (PF), primary user (PU), secondary user (SU), fast Fourier transform (FFT), signal to noise ratio (SNR)
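A minimal sketch of the energy detection step, with the threshold set from a target false-alarm probability under the usual large-sample Gaussian approximation (our illustration; the paper's exact detector parameters are not specified here):

```python
import numpy as np
from scipy.stats import norm

def energy_detect(samples: np.ndarray, noise_var: float, pfa: float) -> bool:
    """Energy detector: decide H1 (primary user present) when the measured
    energy of N complex baseband samples exceeds a threshold chosen for a
    target probability of false alarm (Gaussian approximation, large N)."""
    n = len(samples)
    energy = np.sum(np.abs(samples) ** 2)
    # Under H0 (noise only): energy ~ approx Normal(n*sigma^2, n*sigma^4)
    threshold = noise_var * (n + np.sqrt(n) * norm.isf(pfa))
    return bool(energy > threshold)
```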
Procedia PDF Downloads 345
3122 A Study of Stress and Coping Strategies of School Teachers
Authors: G.S. Patel
Abstract:
This research paper discusses teachers' work-related mental stress and coping strategies. A stress measurement scale was developed for school teachers, following all the scientific steps of test construction. Factors such as the teacher's workplace, residential area, family life, abilities and skills, economic situation, and other factors were used to construct the scale. The research tool consists of situational statements, and teachers respond to each statement on a five-point rating scale according to what they experience in their daily lives. Special features of the test, such as validity and reliability, were established, and norms were computed for its interpretation. A sample of 320 school teachers of Gujarat state was selected by the cluster sampling technique, and t-tests were computed to test the null hypotheses. The main findings of the study are that urban teachers feel more stress than rural teachers, and that teachers living in a joint family feel less stress than teachers living in a nuclear family. This work is useful for preparing a list of activities to reduce teachers' mental stress.
Keywords: stress measurement scale, level of stress, validity, reliability, norms
Procedia PDF Downloads 195
3121 Using the Structural Equation Model to Explain the Effect of Supervisory Practices on Regulatory Density
Authors: Jill Round
Abstract:
In the economic system, the financial sector plays a crucial role as an intermediary between market participants, other financial institutions, and customers. Financial institutions such as banks have to make decisions that satisfy the demands of all participants while keeping abreast of regulatory change. In recent years, progress has been made on frameworks and on the development of rules, standards, and processes to manage risks in the banking sector. The increasing focus of regulators and policymakers on risk management, corporate governance, and organizational culture is of special interest, as it requires well-resourced risk controlling, compliance, and internal audit functions. In the past years, the relevance of these functions, which make up the so-called Three Lines of Defense, has moved from the backroom to the boardroom. The approach of the model can vary with organizational characteristics: owing to intense regulatory requirements, organizations operating in the financial sector have more mature models, while in less regulated industries there is more cloudiness about where tasks are allocated. All parties strive to achieve their objectives through the effective management of risks and serve the same stakeholders. Today, the Three Lines of Defense model is used throughout the world. This research looks at trends and emerging issues in the professions of the Three Lines of Defense within the banking sector; the answers are believed to help explain the increasing regulatory requirements for the banking sector. As the number of supervisory practices increases, risk management requirements intensify and demand more regulatory compliance at the same time. Structural equation modeling (SEM) is applied to surveys conducted in the research field. It aims to describe (i) the theoretical model regarding the applicable linear relationships, (ii) the causal relationships between multiple predictors (exogenous variables) and multiple dependent variables (endogenous variables), (iii) the unobservable latent variables, and (iv) the measurement errors. The surveys conducted in the research field suggest that the observable variables are caused by various latent variables. The SEM consists of (1) the measurement model and (2) the structural model. There is a detectable correlation in the cause-effect relationship between the performed supervisory practices and the increasing scope of regulation: supervisory practices reinforce regulatory density. In the past, controls were put in place after supervisory practices were conducted or incidents occurred. Further research should examine whether risk management is proactive, reactive to incidents and supervisory practices, or both at the same time.
Keywords: risk management, structural equation model, supervisory practice, three lines of defense
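As a schematic of how such a structural equation model can be specified, the sketch below uses the `semopy` package with entirely hypothetical latent variables, indicator names, and data file; it is not the author's model, only an illustration of a measurement model plus a structural path.

```python
import pandas as pd
from semopy import Model

# Entirely hypothetical specification: a latent supervisory-practice factor
# (survey items sp1-sp3) predicting a latent regulatory-density factor
# (items rd1-rd3), written in lavaan-style syntax.
DESC = """
supervision =~ sp1 + sp2 + sp3
reg_density =~ rd1 + rd2 + rd3
reg_density ~ supervision
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical survey data
model = Model(DESC)
model.fit(data)
print(model.inspect())  # factor loadings, structural path, standard errors
```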
Procedia PDF Downloads 224
3120 Reliability of Social Support Measurement Modification of the BC-SSAS among Women with Breast Cancer Who Undergone Chemotherapy in Selected Hospital, Central Java, Indonesia
Authors: R. R. Dewi Rahmawaty Aktyani Putri, Earmporn Thongkrajai, Dedy Purwito
Abstract:
Many instruments with different dimensions have been developed to assess social support in breast cancer patients. Measurement is challenging: the components of the multidimensional concept must be determined, the unit of measurement defined, and the validity and reliability of the measurement established. Instruments are needed that capture how much support is obtained and perceived among women with breast cancer undergoing chemotherapy, since this can help nurses prevent non-adherence to chemotherapy. This study aimed to measure the reliability of the BC-SSAS instrument among 30 Indonesian women with breast cancer, aged 18 years and above, who had undergone six cycles of chemotherapy in the oncology unit of the Outpatient Department (OPD), Margono Soekardjo Hospital, Central Java, Indonesia. Data were collected from October to December 2015 using the modified Breast Cancer Social Support Assessment (BC-SSAS). Cronbach's alpha analysis was carried out to measure internal consistency for the reliability test of the BC-SSAS, and five experts assessed content validity. The results showed that for content validity the I-CVI was 0.98 and the S-CVI was 0.98; Cronbach's alpha was 0.971, and the subscale coefficients were high: 0.903 for emotional support, 0.865 for informational support, 0.901 for tangible support, 0.897 for appraisal support, and 0.884 for positive interaction support. The results confirm that the BC-SSAS instrument has high reliability and can be used in health care services to measure the social support received and perceived among women with breast cancer undergoing chemotherapy, so that preventive interventions can be developed and the quality of health services improved.
Keywords: BC-SSAS, women with breast cancer, chemotherapy, Indonesia
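Cronbach's alpha, used here for internal consistency, is straightforward to compute from an item-score matrix; a minimal sketch:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)
```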
Procedia PDF Downloads 362
3119 Gas Pressure Evaluation through Radial Velocity Measurement of Fluid Flow Modeled by Drift Flux Model
Authors: Aicha Rima Cheniti, Hatem Besbes, Joseph Haggege, Christophe Sintes
Abstract:
In this paper, we consider a drift flux mixture model of blood flow. The mixture consists of a gas phase, carbon dioxide, and a liquid phase, an aqueous carbon dioxide solution. The model was used to determine the distributions of the mixture velocity, the mixture pressure, and the carbon dioxide pressure. These theoretical data are then used to derive a method for measuring the mean gas pressure through the determination of the radial velocity distribution. The method is applicable in the experimental domain.
Keywords: mean carbon dioxide pressure, mean mixture pressure, mixture velocity, radial velocity
Procedia PDF Downloads 324
3118 Blood Glucose Level Measurement from Breath Analysis
Authors: Tayyab Hassan, Talha Rehman, Qasim Abdul Aziz, Ahmad Salman
Abstract:
Constant monitoring of the blood glucose level is necessary for maintaining patients' health and for alerting medical specialists to take preemptive measures before the onset of complications resulting from diabetes. Current clinical monitoring of blood glucose relies on repeated invasive measurements, which are uncomfortable and may cause infections in diabetic patients. Several attempts have been made to develop non-invasive techniques for blood glucose measurement; the existing methods, however, are unreliable and less accurate, while other approaches claiming high accuracy have not been tested on extended datasets, so their results are not statistically significant. It is well known that the acetone concentration in breath is directly related to the blood glucose level. In this paper, we develop what we believe is the first reliable, high-accuracy breath analyzer of its kind for non-invasive blood glucose measurement. The acetone concentration in breath was measured using an MQ 138 sensor in samples collected from local hospitals in Pakistan, involving one hundred patients whose blood glucose levels were determined using the conventional invasive clinical method. We propose a linear regression model trained to map breath acetone level to the measured blood glucose level, achieving high accuracy.
Keywords: blood glucose level, breath acetone concentration, diabetes, linear regression
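A minimal sketch of the proposed mapping, with made-up calibration pairs standing in for the hospital data:

```python
import numpy as np

# Hypothetical calibration data: MQ-138 sensor output (ppm acetone) paired
# with reference blood glucose (mg/dL) from the invasive clinical method.
acetone = np.array([0.4, 0.7, 0.9, 1.3, 1.8, 2.4])   # assumed readings
glucose = np.array([82.0, 105.0, 118.0, 160.0, 210.0, 265.0])

# Ordinary least squares fit: glucose ~ a * acetone + b
a, b = np.polyfit(acetone, glucose, deg=1)

def estimate_glucose(acetone_ppm: float) -> float:
    """Map a breath acetone reading to an estimated blood glucose level."""
    return a * acetone_ppm + b

print(estimate_glucose(1.0))
```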
Procedia PDF Downloads 171
3117 An Efficiency Measurement of E-Government Performance for United Nation Ranking Index
Authors: Yassine Jadi, Lin Jie
Abstract:
In order to serve society in an electronic manner, many developing countries have launched major e-government projects. Strategies for developing and implementing e-government systems have reached different levels, and to ensure consistent development, governments need to evaluate e-government performance. The United Nations has designed the e-government development index (EGDI), which relies on three indexes: the online service index (OSI), the telecommunication infrastructure index (TII), and the human capital index (HCI); none of these reflects the interaction between a government and its citizens. Based on the data envelopment analysis (DEA) technique, we use the e-participation index (EPI) as an output of government effort to evaluate the performance of e-government systems. The ranking index can thereby be computed in an efficiency-oriented manner.
Keywords: e-government, DEA, efficiency measurement, EGDI
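A sketch of the DEA step: the input-oriented CCR efficiency of each decision-making unit (here, a country) solved as a linear program, with TII and HCI as assumed inputs and EPI as the output; the data below are illustrative, not UN figures.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X: np.ndarray, Y: np.ndarray, o: int) -> float:
    """Input-oriented CCR efficiency of DMU `o`.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
    n, m = X.shape
    _, s = Y.shape
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimize theta
    # inputs:  sum_j lam_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # outputs: -sum_j lam_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# Illustrative panel: inputs (TII, HCI) and output (EPI) for five countries
X = np.array([[0.6, 0.7], [0.4, 0.8], [0.9, 0.5], [0.5, 0.6], [0.7, 0.9]])
Y = np.array([[0.55], [0.50], [0.60], [0.45], [0.80]])
print([round(dea_ccr_efficiency(X, Y, o), 3) for o in range(len(X))])
```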
Procedia PDF Downloads 376
3116 Alternate Approaches to Quality Measurement: An Exploratory Study in Differentiation of “Quality” Characteristics in Services and Supports
Authors: Caitlin Bailey, Marian Frattarola Saulino, Beth Steinberg
Abstract:
Today, virtually all programs offered to people with intellectual and developmental disabilities describe themselves as person-centered, community-based, and inclusive, yet there is a vast range in the type and quality of services that use these similar descriptors. The issue is exacerbated by the field's measurement practices around quality, inclusion, independent living, choice, and person-centered outcomes. For instance, community inclusion for people with disabilities is often measured by the number of times a person steps into his or her community. Such measurement approaches set the standard for quality so low that an agency supporting group home residents to go bowling every week can report the same outcomes as an agency that supports one person to join a book club that includes people based on their literary interests rather than disability labels. Ultimately, this lack of delineation in measurement contributes to the confusion between face-value "quality" and truly high-quality services and supports for many people with disabilities and their families. This exploratory study adopts alternative approaches to quality measurement, including co-production methods and a systems theoretical framework, in order to identify the factors that 1) lead to high-quality supports and 2) differentiate high-quality services. The project researchers partnered with community practitioners who are all committed to providing quality services and supports but vary in the degree to which they are actually able to provide them. The study includes two parts: first, an online survey distributed to more than 500 agencies that have demonstrated a commitment to providing high-quality services; and second, four in-depth case studies with agencies in the United States and Israel providing a variety of supports to children and adults with disabilities. Results from both the survey and the in-depth case studies were thematically analyzed and coded. The results show that specific factors differentiate service quality; however, meaningful quality measurement also requires that researchers explore the contextual factors that contribute to quality. These include not only direct services and interactions, but also characteristics of service users and their environments, as well as of the organizations providing services, such as management and funding structures, culture, and leadership. The findings challenge researchers, policy makers, and practitioners to examine existing quality service standards and measurements and to adopt alternative methodologies and solutions to differentiate and scale up evidence-based quality practices, so that all people with disabilities have access to services that support them to live, work, and enjoy where and with whom they choose.
Keywords: co-production, inclusion, independent living, quality measurement, quality supports
Procedia PDF Downloads 399
3115 On-Chip Aging Sensor Circuit Based on Phase Locked Loop Circuit
Authors: Ararat Khachatryan, Davit Mirzoyan
Abstract:
In sub-micrometer technologies, aging starts to have a significant impact on the reliability of integrated circuits by causing performance degradation. For that reason, it is important to be able to evaluate aging effects accurately. This paper presents an accurate aging measurement approach based on a phase-locked loop (PLL) and a voltage-controlled oscillator (VCO). The architecture rejects the circuit's self-aging effect by exploiting the characteristics of the PLL, which generates the reference frequency without being affected by aging phenomena. The aging monitor is implemented in a low-power 32 nm CMOS technology and occupies a rather small area. Aging simulation results show that the proposed aging measurement circuit improves accuracy by about 2.8% at high temperature and 19.6% at high voltage.
Keywords: aging effect, HCI, NBTI, nanoscale
Procedia PDF Downloads 359
3114 Virtual Assessment of Measurement Error in the Fractional Flow Reserve
Authors: Keltoum Chahour, Mickael Binois
Abstract:
Due to a lack of standardization during the invasive fractional flow reserve (FFR) procedure, the index is subject to many sources of uncertainty. In this paper, we investigate, through simulation, the effect of the FFR device position and configuration on the obtained FFR value. For this purpose, we use computational fluid dynamics (CFD) in a 3D domain corresponding to a diseased arterial segment; the FFR pressure sensor is introduced into the domain with a given length and bending coefficient to capture the FFR value. To get around the computational cost (roughly 2 h 15 min of simulation per FFR value), we generate a Gaussian process (GP) model for FFR prediction. The GP model shows good accuracy and captures the effective measurement error created by the random configuration of the pressure sensor.
Keywords: fractional flow reserve, Gaussian processes, computational fluid dynamics, drift
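A minimal sketch of such a GP surrogate using scikit-learn, with invented (position, bending) → FFR training points standing in for the CFD runs:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical CFD results: sensor position along the vessel (mm) and
# bending coefficient, each paired with the FFR value the simulation gave.
X = np.array([[2.0, 0.1], [5.0, 0.3], [8.0, 0.2],
              [11.0, 0.5], [14.0, 0.4], [17.0, 0.6]])
y = np.array([0.92, 0.88, 0.84, 0.80, 0.78, 0.75])   # assumed FFR values

kernel = ConstantKernel(1.0) * RBF(length_scale=[5.0, 0.3])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict FFR (with uncertainty) for an untried sensor configuration
mean, std = gp.predict(np.array([[9.5, 0.35]]), return_std=True)
print(f"predicted FFR = {mean[0]:.3f} +/- {2 * std[0]:.3f}")
```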
Procedia PDF Downloads 134
3113 Online Measurement of Fuel Stack Elongation
Authors: Sung Ho Ahn, Jintae Hong, Chang Young Joung, Tae Ho Yang, Sung Ho Heo, Seo Yun Jang
Abstract:
The performance of nuclear fuels and materials is qualified in an irradiation system in research reactors operating under commercial nuclear power plant conditions. Fuel centerline temperature, coolant temperature, neutron flux, fuel stack deformation, and swelling are important parameters needed to analyze nuclear fuel performance; the dimensional stability of nuclear fuel is a key parameter for measuring fuel densification and swelling. In this study, the fuel stack elongation is measured using an LVDT. A mockup LVDT-instrumented fuel rod was developed, and its performance was evaluated experimentally.
Keywords: axial deformation, elongation measurement, in-pile instrumentation, LVDT
Procedia PDF Downloads 534
3112 Computer Assisted Strategies Help to Pharmacist
Authors: Komal Fizza
Abstract:
All around the world, professionals in every field take great support from their computers. Computer-assisted strategies not only increase professionals' efficiency but, in health care, also enable life-saving interventions. This research is aimed at two things: first, to find out whether computer-assisted strategies are useful for pharmacists, and second, how much they help a pharmacist make quality interventions. Shifa International Hospital is a 500-bed hospital running an antimicrobial stewardship program. During stewardship rounds, pharmacists observed that many wrong antibiotic doses were being ordered, which were at times overlooked even by other pharmacists. So, with the help of the MIS team, patients were categorized as adult or pediatric depending on their age, and the minimum and maximum dose of every antibiotic in the pharmacy that could be dispensed to a patient was defined. These limits were linked to the order entry window, so that whenever a pharmacist typed an order whose dose was below or above the therapeutic limit, an alert was shown. Each alert was recorded at the back end along with the antibiotic name, pharmacist ID, date, and time. From 14 January 2015 to 14 March 2015, the software stopped different users 350 times. Of these, 300 were major errors which, had they reached the patient, could have caused serious harm, while 50 were due to typing errors and minor deviations. The pilot study showed that computer-assisted strategies can be of great help to pharmacists, improving the efficacy and quality of interventions.
Keywords: antibiotics, computer assisted strategies, pharmacist, stewardship
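The alert logic amounts to a dose-range lookup at order entry. A minimal sketch with hypothetical drug names and limits (illustrative only, not clinical guidance):

```python
from dataclasses import dataclass

@dataclass
class DoseRange:
    min_mg: float
    max_mg: float

# Hypothetical daily-dose limits table; names and values are illustrative.
LIMITS = {
    ("amoxicillin", "adult"): DoseRange(750, 3000),
    ("amoxicillin", "peds"): DoseRange(125, 1500),
}

def check_order(drug: str, group: str, dose_mg: float) -> str | None:
    """Return an alert message if the ordered daily dose falls outside the
    configured therapeutic range, otherwise None."""
    limits = LIMITS.get((drug, group))
    if limits is None:
        return f"no range configured for {drug} ({group})"
    if dose_mg < limits.min_mg:
        return f"{drug}: {dose_mg} mg/day below minimum {limits.min_mg} mg"
    if dose_mg > limits.max_mg:
        return f"{drug}: {dose_mg} mg/day above maximum {limits.max_mg} mg"
    return None

print(check_order("amoxicillin", "adult", 4000))  # triggers an alert
```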
Procedia PDF Downloads 490
3111 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data
Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini
Abstract:
A considerable part of the rainfall data used in hydrological practice is available in aggregated form over constant time intervals. This can produce undesirable effects, such as the underestimation of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d = ta, the estimate of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation errors follow an exponential probability density function; 3) every very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; 5) these relationships should improve the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
Keywords: central Italy, extreme events, rainfall data, underestimation errors
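The aggregation effect is easy to reproduce: compare the true maximum d-duration depth (a sliding window over fine-resolution data) with the maximum over fixed aggregation blocks. A sketch with synthetic 5-minute depths, for the worst case ta = d:

```python
import numpy as np

rng = np.random.default_rng(0)
rain_5min = rng.exponential(0.05, size=12 * 24 * 365)  # synthetic 5-min depths (mm)

def max_depth_sliding(x: np.ndarray, w: int) -> float:
    """True maximum depth over any window of w samples (continuous data)."""
    return np.convolve(x, np.ones(w), mode="valid").max()

def max_depth_aggregated(x: np.ndarray, w: int) -> float:
    """Maximum depth when data are only available as fixed w-sample blocks."""
    n = len(x) // w * w
    return x[:n].reshape(-1, w).sum(axis=1).max()

w = 12  # 1-hour duration from 5-min data, with ta = d (worst case)
true_h = max_depth_sliding(rain_5min, w)
coarse_h = max_depth_aggregated(rain_5min, w)
print(f"underestimate: {100 * (1 - coarse_h / true_h):.1f}%")
```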
Procedia PDF Downloads 191
3110 Neural Synchronization - The Brain’s Transfer of Sensory Data
Authors: David Edgar
Abstract:
To understand how the brain's subconscious and conscious functions work, we must conquer the physics of unity, which leads to duality's algorithm, where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like 'time is relative,' but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers, and these processes experience time at different rates. A sensory system such as the eyes cycles its measurements around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds: three different observers experiencing time differently. To bridge the observers, the thalamus, the fastest of the processes, maintains a synchronous state and entangles the different components of the brain's physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain's linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components: only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain's synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds, the eyes dump their sensory data into the thalamus, and the thalamus performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation: basically, a frozen moment in time (flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process, and other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation, so that its own process time does not alter the experience. To the original observer, the measurement appears instantaneous, but in the thalamus a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available, because other observation times are slower than the thalamic measurement time. Life in the physical universe requires a linear measurement process; it merely hides by operating at a faster time relativity. What is interesting is that time dilation is not the problem; it is the solution. Einstein said there was no universal time.
Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)
Procedia PDF Downloads 126
3109 Measurement of Temperature, Humidity and Strain Variation Using Bragg Sensor
Authors: Amira Zrelli, Tahar Ezzeddine
Abstract:
Measurement and monitoring of temperature, humidity, and strain variation are in great demand in many fields and areas, such as structural health monitoring (SHM) systems. The use of fiber Bragg grating sensors (FBGS) is widely recommended in SHM systems due to the characteristics of these sensors. In this paper, we present the theory of the Bragg sensor and measure the effective variation of strain, temperature, and humidity (SV, ST, SH) using a Bragg sensor, from which we deduce the fundamental relation between these parameters and the Bragg wavelength of the sensor.
Keywords: fiber Bragg grating sensors (FBGS), strain, temperature, humidity, structural health monitoring (SHM)
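A minimal sketch of the standard FBG response model, Δλ/λ_B = (1 − p_e)·ε + (α + ξ)·ΔT, with typical textbook coefficients for silica fiber assumed for illustration:

```python
# Standard FBG response model; coefficient values are typical textbook
# numbers for silica fiber, assumed here for illustration only.
LAMBDA_B = 1550.0e-9   # nominal Bragg wavelength (m): lambda_B = 2 * n_eff * Lambda
P_E = 0.22             # effective photo-elastic coefficient
ALPHA = 0.55e-6        # thermal expansion coefficient (1/K)
XI = 8.6e-6            # thermo-optic coefficient (1/K)

def bragg_shift(strain: float, delta_t: float) -> float:
    """Wavelength shift (m) for a given strain (dimensionless) and
    temperature change (K)."""
    return LAMBDA_B * ((1 - P_E) * strain + (ALPHA + XI) * delta_t)

# e.g., 100 microstrain and a +5 K temperature change
print(f"{bragg_shift(100e-6, 5.0) * 1e12:.1f} pm")
```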
Procedia PDF Downloads 315
3108 Efficient Principal Components Estimation of Large Factor Models
Authors: Rachida Ouysse
Abstract:
This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when the errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild's (1983) approximate factor structure, as an explicit constraint and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum-likelihood-type methods, the CnPC method does not require inverting a large covariance matrix and is thus valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than the generalized PC-type estimators, especially for panels with N almost as large as T.
Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting
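For reference, the baseline PC estimator that CnPC improves upon can be sketched as follows; the regularization of the covariance matrix that defines CnPC is not reproduced here.

```python
import numpy as np

def pc_factors(X: np.ndarray, r: int):
    """Standard principal-components estimator of a large factor model
    X = F L' + e, under the usual normalization F'F/T = I_r.
    X: (T, N) demeaned panel; returns factors (T, r) and loadings (N, r)."""
    T, N = X.shape
    # eigen-decomposition of the T x T matrix XX'/(TN)
    vals, vecs = np.linalg.eigh(X @ X.T / (T * N))
    idx = np.argsort(vals)[::-1][:r]         # r largest eigenvalues
    F = np.sqrt(T) * vecs[:, idx]            # estimated factors
    L = X.T @ F / T                          # estimated loadings
    return F, L
```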
Procedia PDF Downloads 150
3107 Experimental and Finite Element Forming Limit Diagrams for Interstitial Free Steels
Authors: Basavaraj Vadavadagi, Satishkumar Shekhawat
Abstract:
Interstitial free steels possess better formability and have many applications in the automotive industry. Forming limit diagrams (FLDs) indicate the formability of materials and can be determined by experiments and by finite element (FE) simulations. In this study, FLDs were determined experimentally by the LDH test, using an optical strain measurement system to measure the strains in specimens of different widths, and by FE simulations, for interstitial free (IF) and interstitial free high strength (IFHS) steels. The experimental and FE-simulated FLDs are compared, and stress-based FLDs are also investigated.
Keywords: forming limit diagram, limiting dome height, optical strain measurement, interstitial
Procedia PDF Downloads 232
3106 Memristive Properties of Nanostructured Porous Silicon
Authors: Madina Alimova, Margulan Ibraimov, Ayan Tileu
Abstract:
The paper describes methods for obtaining porous structures with the properties of a silicon-based memristor and explains the electrical properties of porous silicon films. The results show a positive shift in the current-voltage characteristic (CVC) after each measurement; i.e., the electrical properties depend not only on the applied voltage but also on the previous state. After 3 minutes of rest, the film returns to its original state (reset). The method for obtaining a porous silicon nanofilm with memristor properties is simple and does not require additional effort. Based on the measurement results, the typical memristive behavior of the porous silicon nanofilm is analyzed.
Keywords: porous silicon, current-voltage characteristics, memristor, nanofilms
Procedia PDF Downloads 130
3105 Reliability and Validity for Measurement of Body Composition: A Field Method
Authors: Ahmad Hashim, Zarizi Ab Rahman
Abstract:
Field methods for the measurement of body composition include the most popular instruments used to estimate the percentage of body fat: the body mass index (BMI), bioimpedance analysis (BIA), and the skinfold test. None of these three instruments involves high cost or requires high technical skill; they are mobile, save time, and are suitable for use in large populations. Because all three estimate the percentage of body fat, it is important to identify the most appropriate instrument with the highest reliability. Hence, this study was conducted to determine the reliability and convergent validity of the instruments. A total of 40 students, male and female, aged between 13 and 14 years, participated. The study found that the test-retest Pearson correlation coefficients of reliability for the three instruments are very high, r = .99. The interclass reliability is also high, with r = .99 for BMI and BIA and r = .96 for the skinfold test, and the intraclass reliability coefficients are likewise high: r = .99 for BMI, r = .97 for BIA, and r = .90 for the skinfold test. However, the standard error of measurement values indicate that BMI is the most appropriate instrument, with a mean value of .000672, compared with the other instruments. The findings show that BMI is the most accurate and reliable instrument for estimating body fat percentage in the population studied.
Keywords: reliability, validity, body mass index, bio impedance analysis and skinfold test
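The standard error of measurement used to compare the instruments follows from the reliability coefficient, SEM = SD·√(1 − r); a minimal sketch with an assumed score standard deviation:

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """SEM = SD * sqrt(1 - r): expected spread of observed scores around a
    person's true score, given the test's reliability coefficient r."""
    return sd * math.sqrt(1.0 - reliability)

# e.g., test-retest r = .99 and an assumed score SD (illustrative value)
print(standard_error_of_measurement(sd=0.0067, reliability=0.99))
```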
Procedia PDF Downloads 335
3104 Formulation of a Stress Management Program for Human Error Prevention in Nuclear Power Plants
Authors: Hyeon-Kyo Lim, Tong-il Jang, Yong-Hee Lee
Abstract:
In any nuclear power plant, human error is one of the most dreaded factors, since it may result in unexpected accidents. Thus, for accident prevention, it is indispensable to analyze and manage the influence of any factor that may raise the possibility of human error. Among many factors, stress has been reported to have a significant influence on human performance, and a person's stress level may fluctuate over time. To handle this variability, a robust stress management program is required, especially in nuclear power plants. This study therefore aimed to develop a stress management program as a part of a Fitness-for-Duty (FFD) program for workers in nuclear power plants. Since the meaning of FFD may differ with research objectives, an appropriate definition of FFD was developed in this study with special reference to human error prevention, and diverse stress factors were elicited for the management of human error susceptibility. In addition, with consideration of conventional FFD management programs, appropriate tests and interventions were introduced over the whole employment cycle, including selection and screening of workers, job allocation, job rotation, and disemployment, as well as an Employee Assistance Program (EAP). The results showed that most tools concentrated their weights on common organizational factors such as demands, supports, and relationships, in that order.
Keywords: human error, accident prevention, work performance, stress, fatigue
Procedia PDF Downloads 326
3103 Comparison of Intraocular Pressure Measurement Prior and Following Full Intracorneal Ring Implantation in Patient with Keratoconus by Three Different Instruments
Authors: Seyed Aliasghar Mosavi, Mostafa Naderi, Khosrow Jadidi, Amir Hashem Mohammadi
Abstract:
To study the measurement of intraocular pressure (IOP) before and after implantation of an intrastromal corneal ring (MyoRing) in patients with keratoconus. Setting: Baqiyatallah University of Medical Sciences, Tehran, Iran. Methods: We compared the IOP of 13 eyes that underwent MyoRing implantation, before and six months after the operation, using Goldmann applanation tonometry (as the gold standard), Icare, and Corvis ST (uncorrected, corrected, and corrected with corneal biomechanics). Results: Before surgery, Icare and Corvis (corrected with corneal biomechanics) overestimated the IOP, whereas uncorrected Corvis measurements underestimated it; the same pattern was observed after surgery. Conclusion: Consistent intraocular pressure measurements in keratoconus eyes with a MyoRing can be obtained with the Goldmann applanation tonometer as the gold standard. We were not able to obtain consistent results when measuring the IOP with Icare and Corvis before and after surgery.
Keywords: intraocular pressure, MyoRing, keratoconus, Goldmann applanation, Icare, Corvis ST
Procedia PDF Downloads 243
3102 Image Multi-Feature Analysis by Principal Component Analysis for Visual Surface Roughness Measurement
Authors: Wei Zhang, Yan He, Yan Wang, Yufeng Li, Chuanpeng Hao
Abstract:
Surface roughness is an important index for evaluating surface quality and needs to be measured accurately to ensure workpiece performance. Roughness measurement based on machine vision involves various image features, some of which are redundant, and these redundant features affect the accuracy and speed of the visual approach. Previous research used correlation analysis to select appropriate features; however, such feature analysis treats each feature independently and cannot fully utilize the information in the data. Moreover, blindly reducing features discards much useful information and produces unreliable results. The focus of this paper is therefore on providing a redundant-feature-removal approach for visual roughness measurement. Statistical methods and the gray-level co-occurrence matrix (GLCM) are employed to extract the texture features of machined-surface images effectively. Principal component analysis (PCA) is then used to fuse all the extracted features into a new representation, which reduces the feature dimension while maintaining the integrity of the original information. Finally, the relationship between the new features and roughness is established by a support vector machine (SVM). The experimental results show that the approach effectively resolves the multi-feature information redundancy of machined-surface images and provides a new idea for the visual evaluation of surface roughness.
Keywords: feature analysis, machine vision, PCA, surface roughness, SVM
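A schematic of the pipeline with scikit-image and scikit-learn, using random placeholder images and roughness labels in place of the machined-surface dataset, and SVR as the regression form of the SVM:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def glcm_features(img: np.ndarray) -> np.ndarray:
    """Texture features from the gray-level co-occurrence matrix of an
    8-bit grayscale image."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Placeholder training set: machined-surface images with measured Ra (um)
images = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
ra = np.random.uniform(0.4, 3.2, 20)          # placeholder roughness labels

X = np.array([glcm_features(im) for im in images])
model = make_pipeline(StandardScaler(), PCA(n_components=3), SVR(kernel="rbf"))
model.fit(X, ra)
print(model.predict(X[:3]))
```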
Procedia PDF Downloads 212