Search results for: measure and calibration
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3638

3218 Parameters of Validation Method of Determining Polycyclic Aromatic Hydrocarbons in Drinking Water by High Performance Liquid Chromatography

Authors: Jonida Canaj

Abstract:

A simple method for the extraction and determination of fifteen priority polycyclic aromatic hydrocarbons (PAHs) from drinking water using high performance liquid chromatography (HPLC) has been validated in terms of limits of detection (LOD), limits of quantification (LOQ), method recovery and reproducibility, and other factors. HPLC parameters, such as mobile phase composition and flow rate, were standardized for the determination of PAHs using a fluorescence detector (FLD). Extraction of the PAHs was carried out by liquid-liquid extraction using dichloromethane. Linearity of the calibration curves was good for all PAHs (R² = 0.9954-1.0000) in the concentration range 0.1-100 ppb. Analysis of standard-spiked water samples resulted in good recoveries between 78.5-150% (0.1 ppb) and 93.04-137.47% (10 ppb). The estimated LOD and LOQ ranged between 0.0018 and 0.98 ppb. The method described has been used to determine the content of the fifteen PAHs in drinking water samples.
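For illustration only, the sketch below fits a calibration line and derives LOD and LOQ using the common 3.3σ/S and 10σ/S convention; the concentrations, detector responses and the use of this particular convention are assumptions of the sketch, not values or choices taken from the paper.

```python
import numpy as np

# Illustrative calibration data (ppb vs. FLD response) -- not the paper's values.
conc = np.array([0.1, 1, 5, 10, 25, 50, 100])                 # standard concentrations, ppb
area = np.array([0.8, 8.1, 40.3, 81.0, 202.4, 403.9, 809.5])  # detector response (arbitrary units)

# Least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept

# Coefficient of determination (linearity check, cf. R^2 = 0.9954-1.0000 above)
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Residual standard deviation of the regression, then the common 3.3/10 sigma estimates
sigma = np.sqrt(ss_res / (len(conc) - 2))
lod = 3.3 * sigma / slope    # limit of detection, ppb
loq = 10.0 * sigma / slope   # limit of quantification, ppb

print(f"R2 = {r2:.4f}, LOD = {lod:.4f} ppb, LOQ = {loq:.4f} ppb")
```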

Keywords: high performance liquid chromatography, HPLC, method validation, polycyclic aromatic hydrocarbons, PAHs, water

Procedia PDF Downloads 99
3217 Numerical Modeling of Timber Structures under Varying Humidity Conditions

Authors: Sabina Huč, Staffan Svensson, Tomaž Hozjan

Abstract:

Timber structures may be exposed to various environmental conditions during their service life. Often, the structures have to resist extreme changes in the relative humidity of the surrounding air while simultaneously carrying loads. The wood material response for this load case is seen as increasing deformation of the timber structure. Relative humidity variations cause moisture changes in timber and, consequently, shrinkage and swelling of the material. Moisture changes and loads acting together result in mechano-sorptive creep, while sustained load gives viscoelastic creep. In some cases, the magnitude of the mechano-sorptive strain can be about five times the elastic strain already at low stress levels. Therefore, analyzing mechano-sorptive creep and its influence on the long-term behavior of timber structures is of high importance. Relatively many one-dimensional rheological models for wood can be found in the literature, while the number of models coupling the creep response in each material direction is limited. In this study, the mathematical formulation of a coupled two-dimensional mechano-sorptive model and its application to experimental results are presented. The mechano-sorptive model consists of a moisture transport model and a mechanical model. Variation of the moisture content in wood is modelled by a multi-Fickian moisture transport model. The model accounts for the processes of bound-water and water-vapor diffusion in wood, which are coupled through sorption hysteresis. Sorption defines a nonlinear relation between moisture content and relative humidity. The multi-Fickian moisture transport model is able to accurately predict the unique, non-uniform moisture content field within the timber member over time. The calculated moisture content in timber members is used as an input to the mechanical analysis. In the mechanical analysis, the total strain is assumed to be the sum of the elastic strain, viscoelastic strain, mechano-sorptive strain, and strain due to shrinkage and swelling. The mechano-sorptive response is modelled by a so-called spring-dashpot type of model, which has proved suitable for describing the creep of wood. The mechano-sorptive strain depends on the change of moisture content. The model includes mechano-sorptive material parameters that have to be calibrated to the experimental results. The calibration is made to experiments carried out on wooden blocks subjected to uniaxial compressive loading in the tangential direction and varying humidity conditions. The moisture and mechanical models are implemented in finite element software. The calibration procedure gives the required, distinctive set of mechano-sorptive material parameters. The analysis shows that mechano-sorptive strain in the transverse direction is present, though its magnitude and variation are substantially lower than those of the mechano-sorptive strain in the direction of loading. The presented mechano-sorptive model enables observing the real temporal and spatial distribution of the moisture-induced strains and stresses in timber members. Since the model's suitability for predicting mechano-sorptive strains is shown and the required material parameters are obtained, a comprehensive advanced analysis of the stress-strain state in timber structures, including connections subjected to constant load and varying humidity, is possible.
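As a compact restatement of the strain decomposition described above, the LaTeX sketch below writes the total strain as the sum named in the abstract; the symbols, and the commonly used rate form for the mechano-sorptive part, are our notation and assumptions, not formulas taken from the paper.

```latex
% Total strain as the sum of elastic, viscoelastic, mechano-sorptive and
% shrinkage/swelling parts (notation ours).
\varepsilon_{\mathrm{tot}} = \varepsilon_{\mathrm{e}} + \varepsilon_{\mathrm{ve}}
  + \varepsilon_{\mathrm{ms}} + \varepsilon_{\mathrm{s}},
\qquad \dot{\varepsilon}_{\mathrm{ms}} \propto |\dot{u}|\,\sigma
% where u is the moisture content and \sigma the stress
% (a common spring-dashpot-type assumption, not necessarily the authors' exact law).
```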

Keywords: mechanical analysis, mechano-sorptive creep, moisture transport model, timber

Procedia PDF Downloads 243
3216 Mathematical Modelling of Wastewater Collection System in Cha-Am Municipality Using PCSWMM

Authors: Thawtar Htun, Kim N. Irvine, Ranjna Jindal

Abstract:

This study aimed at modelling the wastewater collection system in Cha-Am Municipality using PCSWMM to investigate the quantity of combined sewage delivered to the aeration lagoon treatment system (ALTS). Cha-Am is a small seaside resort town in Petchaburi Province located about 175 km southwest of Bangkok and is facing increasing development, so it is important to understand current system performance and plan for future build-out. PCSWMM was calibrated using observed ALTS inflow data for the period 15 June to 20 July 2015. The model was validated using observed ALTS inflow data for the periods 19 July to 20 October 2015 and 1 October to 31 December 2015. The 1:1 lines between modeled and observed peak flow and event volume for the calibration events qualitatively showed good correspondence. The r² values between modeled and observed peak flow (99%) and event volume (89%) were also strong.

Keywords: combined sewer system, mathematical modelling, PCSWMM, wastewater collection system

Procedia PDF Downloads 209
3215 Study on an Integrated Real-Time Sensor in Droplet-Based Microfluidics

Authors: Tien-Li Chang, Huang-Chi Huang, Zhao-Chi Chen, Wun-Yi Chen

Abstract:

Droplet-based microfluidic devices are used as micro-reactors for chemical and biological assays. Hence, the precise addition of reagents into the droplets is essential for this function in the scope of lab-on-a-chip applications. To obtain the characteristics (size, velocity, pressure, and frequency of production) of droplets, this study describes an integrated on-chip method of real-time signal detection. By controlling and manipulating the fluids, the flow behavior can be obtained in the droplet-based microfluidics. The detection method uses a type of infrared sensor. As droplets of varying sizes pass through the microfluidic devices, the real-time velocity and pressure conditions are obtained from the sensors. Here the microfluidic devices are fabricated from polydimethylsiloxane (PDMS). To measure the droplets, sensor signal acquisition and LabVIEW program control were established for the microchannel devices. The devices can generate droplets of different sizes, with the oil-phase flow rate fixed at 30 μl/hr and the water-phase flow rate ranging from 20 μl/hr to 80 μl/hr. The experimental results demonstrate that the sensors are able to measure the transit-time difference of droplets at different velocities over an output voltage range of 0 V to 2 V. Consequently, droplet speeds of up to 1.6 mm/s and the related flow behaviors were measured, which can be helpful in developing and integrating practical microfluidic applications.

Keywords: microfluidic, droplets, sensors, signal detection

Procedia PDF Downloads 485
3214 South Korean Tourists' Expectation, Satisfaction and Loyalty Relationship

Authors: Tolga Gok, Kursad Sayin

Abstract:

The aim of this study is to investigate the relationship between the expectation, satisfaction and loyalty of South Korean tourists visiting Turkey. In the research, a questionnaire was used as the data collection tool. The questionnaires were filled in by South Korean tourists coming to Turkey on package tours and individually. The survey was conducted in 2014 in Nevsehir (Cappadocia Region) and Istanbul. Tourist guides and agency staff helped with the implementation of the surveys. The survey questions are composed of four parts: “demographic characteristics of tourists”, “travel behavior characteristics”, “perception of expectations on destination attributes” and “perception of destination loyalty”. A 5-point Likert-type scale including 28 destination attributes was used to measure the expectations of South Korean tourists coming to Turkey. Questions were directed to the tourists to measure destination loyalty. The questions relating to destination loyalty are “talking about Turkey to others”, “recommending Turkey to others” and “tourists’ intentions to revisit Turkey”. The basic hypothesis of the research is that there is a statistically significant relationship among the expectations, satisfaction and destination loyalty of South Korean tourists coming to Turkey. The results indicated that expectation had a significant effect on overall satisfaction. In addition, a significant relationship was found between tourists’ overall satisfaction and destination loyalty. Based on the findings, some suggestions for tour operators and travel agencies were made.

Keywords: tourist expectation, tourist satisfaction, destination loyalty, destination attributes

Procedia PDF Downloads 465
3213 The Volume–Volatility Relationship Conditional to Market Efficiency

Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese

Abstract:

The relation between stock price volatility and trading volume represents a controversial issue which has received remarkable attention over the past decades. In fact, an extensive literature shows a positive relation between price volatility and trading volume in financial markets, but the causal relationship that originates such an association is an open question, from both a theoretical and an empirical point of view. In this regard, various models, which can be considered as complementary rather than competitive, have been introduced to explain this relationship. They include the long-debated Mixture of Distributions Hypothesis (MDH), the Sequential Arrival of Information Hypothesis (SAIH), the Dispersion of Beliefs Hypothesis (DBH), and the Noise Trader Hypothesis (NTH). In this work, we analyze whether stock market efficiency can explain the diversity of results achieved over the years. For this purpose, we propose an alternative measure of market efficiency, based on the pointwise regularity of a stochastic process, namely the Hurst–Hölder dynamic exponent. In particular, we model the stock market by means of the multifractional Brownian motion (mBm), which displays the property of a time-changing regularity. Such models have in common the fact that they locally behave as a fractional Brownian motion, in the sense that their local regularity at time t₀ (measured by the local Hurst–Hölder exponent in a neighborhood of t₀) equals the exponent of a fractional Brownian motion of parameter H(t₀). Assuming that the stock price follows an mBm, we introduce and theoretically justify the Hurst–Hölder dynamical exponent as a measure of market efficiency. This allows measuring, at any time t, markets’ departures from the martingale property, i.e. from efficiency as stated by the Efficient Market Hypothesis. This approach is applied to financial markets; using data for the S&P 500 index from 1978 to 2017, we find that when efficiency is not accounted for, a positive contemporaneous relationship emerges and is stable over time. Conversely, it disappears as soon as efficiency is taken into account. In particular, this association is more pronounced during time frames of high volatility and tends to disappear when the market becomes fully efficient.

Keywords: volume–volatility relationship, efficient market hypothesis, martingale model, Hurst–Hölder exponent

Procedia PDF Downloads 76
3212 A Literature Review of Ergonomics Sitting Studies to Characterize Safe and Unsafe Sitting Behaviors

Authors: Yoonjin Lee, Dongwook Hwang, Juhee Park, Woojin Park

Abstract:

As undesirable sitting posture is known to be a major cause of musculoskeletal disorders in office workers, sitting has attracted attention in occupational health. However, there seems to be no consensus on what safe and unsafe sitting behaviors are. The purpose of this study was to characterize safe and unsafe behaviors based on the scientific findings on sitting behavior. The three objectives were as follows: to identify the different sitting behavior measures used in ergonomics studies on safe sitting; for each measure identified, to find available findings or recommendations on safe and unsafe sitting behaviors along with the relevant empirical grounds; and to synthesize the findings or recommendations to provide characterizations of safe and unsafe behaviors. A systematic review of electronic databases (Google Scholar, PubMed, Web of Science) was conducted for an extensive search of sitting behavior. Key terms included awkward sitting position, sedentary sitting, dynamic sitting, sitting posture, and sitting biomechanics. Each article was systematically abstracted to extract a list of the studied sitting behaviors, the measures used to study the sitting behavior, and the presence of empirical evidence on the safety of the sitting behaviors. Finally, the characterization of safe and unsafe sitting behavior was conducted based on the knowledge with empirical evidence. This characterization is expected to provide useful knowledge for the evaluation of sitting behavior and about the postures to be measured in the development of sensing chairs.

Keywords: sitting position, sitting biomechanics, sitting behavior, unsafe sitting

Procedia PDF Downloads 296
3211 Measuring Learning Independence and Transition through the First Year in Architecture

Authors: Duaa Al Maani, Andrew Roberts

Abstract:

Students in higher education are expected to learn actively and independently. Whilst considerable work has been done to understand students’ perceptions of the learning transition with regard to independent learning, to the authors’ best knowledge relatively little research has been published on independent learning in studio-based subjects such as architecture. Another major issue in independent learning research concerns the inconsistency in terminology; there appears to be a paucity of research on its definition, challenges, and tools within the UK university sector. It is not always clear how independent learning works in practice, or what challenges students face in becoming independent learners. Accordingly, this paper seeks to highlight these problems by analyzing the previous and current literature on independent learning and, in addition, to measure students’ independence at the very beginning of their first academic year and compare it with their level of learning independence at the end of the same year. Eighty-seven students enrolled in 2017/2018 at Cardiff University completed the Autonomous Learning Questionnaire in order to measure their level of learning independence. Students’ initial responses were very positive and showed a high level of learning independence. Interestingly, these responses significantly decreased at the end of the year. Time management was the most obvious challenge facing students’ transition into higher education, and, contrary to expectations, we found no effect of student maturity on their level of independence. Moreover, we found no significant differences by gender, but we did find differences among nationalities.

Keywords: autonomous learning, first year, learning independence, transition

Procedia PDF Downloads 142
3210 Explore Urban Spatial Density with Boltzmann Statistical Distribution

Authors: Jianjia Wang, Tong Yu, Haoran Zhu, Kun Liu, Jinwei Hao

Abstract:

The underlying pattern in the modern city is agglomeration. To some degree, the distribution of urban spatial density can be used to describe the status of this assemblage. There are three intrinsic characteristics used to measure urban spatial density, namely, Floor Area Ratio (FAR), Building Coverage Ratio (BCR), and Average Storeys (AS). But the underlying mechanism that contributes to these quantities is still vague in statistical urban studies. In this paper, we explore the corresponding extrinsic factors related to spatial density. These factors can further reveal the potential influence on the intrinsic quantities. Here, we take the Shanghai Inner Ring Area and Manhattan in New York as examples to analyse the potential impacts on urban spatial density with six selected extrinsic elements. Every single factor presents a correlation with the spatial distribution, but the overall global impact of all of them together is still implicit. To handle this issue, we attempt to develop a Boltzmann statistical model to explicitly explain the mechanism behind this. We derive a corresponding novel quantity, called capacity, to measure the global effects of all the other extrinsic factors on the three intrinsic characteristics. The distribution of capacity presents a similar pattern to the real measurements. This reveals the nonlinear influence of the multi-factor relations on urban spatial density under agglomeration.
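For orientation, the general Boltzmann weight that such a statistical model builds on is sketched below in LaTeX; reading the derived "capacity" as a temperature-like normalizing parameter is our interpretation of the abstract, not a formula taken from the paper.

```latex
% General Boltzmann-type weight: the probability of a grid cell being in state i
% (e.g., a given FAR/BCR/AS level) decays exponentially with an effective cost E_i,
% scaled by a temperature-like parameter T (here read, as an assumption, as "capacity").
P(i) = \frac{e^{-E_i/T}}{Z}, \qquad Z = \sum_{j} e^{-E_j/T}
```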

Keywords: urban spatial density, Boltzmann statistics, multi-factor correlation, spatial distribution

Procedia PDF Downloads 138
3209 Grid Based Traffic Vulnerability Model Using Betweenness Centrality for Urban Disaster Management Information

Authors: Okyu Kwon, Dongho Kang, Byungsik Kim, Seungkwon Jung

Abstract:

We propose a technique to measure the impact of the loss of traffic function in a particular area on the surrounding areas. The proposed method is applied to the city of Seoul, the capital of South Korea, with a population of about ten million. Based on the actual road network in Seoul, we construct an abstract road network between 1 km × 1 km grid cells. The link weights of the abstract road network are re-adjusted considering the traffic volume measured at several survey points. On the modified abstract road network, we evaluate traffic vulnerability by calculating the network measure of betweenness centrality (BC) for every single grid cell. This study analyzes traffic impacts caused by road dysfunction due to heavy rainfall in urban areas. We can observe the change in the BC value of all other grid cells by calculating the BC value once again when a specific grid cell loses its traffic function, that is, when the node disappears from the grid-based road network. The results show that it is appropriate to use the sum of the BC variation of the other cells as the index of each grid cell's influence on traffic. This research was supported by a grant (2017-MOIS31-004) from the Fundamental Technology Development Program for Extreme Disaster Response funded by the Korean Ministry of Interior and Safety (MOIS).
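A minimal sketch of the node-removal procedure described above, assuming a toy grid graph with unit link weights (the study re-weights links using surveyed traffic volumes); it uses networkx for illustration and is not the authors' code.

```python
import networkx as nx

# Toy 5x5 grid standing in for the 1 km x 1 km abstract road network;
# edge weights would normally be re-adjusted with surveyed traffic volumes.
G = nx.grid_2d_graph(5, 5)
nx.set_edge_attributes(G, 1.0, "weight")

# Baseline betweenness centrality of every grid cell (node).
bc_base = nx.betweenness_centrality(G, weight="weight")

# Simulate loss of traffic function in one cell (e.g., flooding) by removing its node,
# then recompute BC and sum the variation over all remaining cells.
lost_cell = (2, 2)  # hypothetical flooded cell
H = G.copy()
H.remove_node(lost_cell)
bc_after = nx.betweenness_centrality(H, weight="weight")

impact = sum(abs(bc_after[n] - bc_base[n]) for n in H.nodes)
print(f"Traffic influence index of cell {lost_cell}: {impact:.4f}")
```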

Keywords: vulnerability, road network, betweenness centrality, heavy rainfall, road impact

Procedia PDF Downloads 91
3208 Calibration of Discrete Element Method Parameters for Modelling DRI Pellets Flow

Authors: A. Hossein Madadi-Najafabadi, Masoud Nasiri

Abstract:

The discrete element method (DEM) is a powerful technique for numerically modeling the flow of granular materials such as direct reduced iron (DRI). It enables the study of processes and equipment related to the production and handling of the material. However, the characteristics and properties of the granules have to be adjusted precisely to achieve reliable results in a DEM simulation. The main properties for DEM simulation are size distribution, density, Young's modulus, Poisson's ratio and the contact coefficients of restitution, rolling friction and sliding friction. In the present paper, the mentioned properties are determined for DEM simulation of DRI pellets. A reliable DEM simulation would contribute to optimizing the handling system of DRI in an iron-making plant. Among the mentioned properties, Young's modulus is the most important parameter, which is usually hard to obtain for particulate solids. Here, a special method is utilized to precisely determine this parameter for DRI.

Keywords: discrete element method, direct reduced iron, simulation parameters, granular material

Procedia PDF Downloads 175
3207 Numerical Simulation of Kangimi Reservoir Sedimentation, Kaduna State, Nigeria

Authors: Abdurrasheed Sa'id, Abubakar Isma'il, Waheed Alayande

Abstract:

This study focused on carrying out numerical simulations of Kangimi reservoir sedimentation; after reviewing different numerical sediment transport models, GSTARS3 was selected. The model was developed using the 1977 data. It was calibrated by simulating the 2012 profile and sediment deposition and comparing them with the 2012 hydrographic survey results of NWRI. The model was validated by simulating the 2016 deposition and comparing the results with NWRI estimates. Also, the performance of the proposed model was tested using statistical parameters such as MSE (Mean Square Error), MAPE (Mean Absolute Percentage Error) and R² (coefficient of determination), with values of 1.32 m, 0.17% and 0.914, respectively, which show strong agreement. After the calibration, validation and performance testing, the model was used to simulate the 2032 and 2062 profiles and deposition. The results showed that by 2032 the reservoir will be silted by 25.34 MCM, or 43.3% of the design capacity, and by 60.7% of the capacity by the year 2062. A number of sedimentation mitigation measures were recommended.
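The fit statistics quoted above can be reproduced, in form, with a short routine; the bed-elevation numbers below are invented for illustration and are not the NWRI survey data.

```python
import numpy as np

def fit_statistics(observed, simulated):
    """MSE, MAPE (%) and R^2 between observed and simulated series."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    mse = np.mean((observed - simulated) ** 2)
    mape = 100.0 * np.mean(np.abs((observed - simulated) / observed))
    ss_res = np.sum((observed - simulated) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mse, mape, r2

# Illustrative bed-elevation profiles (m) -- not the survey data.
observed = [612.4, 611.8, 610.9, 610.1, 609.5]
simulated = [612.1, 611.6, 611.2, 610.0, 609.8]
print("MSE = %.2f, MAPE = %.2f%%, R2 = %.3f" % fit_statistics(observed, simulated))
```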

Keywords: NWRI (National Water Resources Institute), sedimentation, GSTARS3, model

Procedia PDF Downloads 216
3206 The Occurrence of Depression with Chronic Liver Disease

Authors: Roop Kiran, Muhammad Shoaib Zafar, Nazish Idrees Chaudhary

Abstract:

Depression is known to be the second most frequently occurring comorbid mental illness among patients suffering from chronic physical conditions. Around the world, depression is associated with chronic liver disease (CLD) as one of the dominant symptoms. This evidence brings attention to research on the various predictors of short life expectancy and poor quality of life in patients suffering from depression comorbid with CLD. The objectives of this study were to i) measure the occurrence rate of comorbid depression among patients with CLD and ii) find the frequency of risk factors between patients with and without depression comorbid with CLD. This is a quantitative study with a cross-sectional design. The research data were collected using the Hamilton Depression Rating Scale (HDRS) together with a demographic proforma from 100 patients with CLD diagnosed within the last four years who visited the Department of Psychiatry for consultation at Mayo Hospital Lahore. Forty-two percent of the patients with CLD had comorbid depression. Between depressed and non-depressed patients, significant differences were found (p<0.05) for unemployment in 25 (59.5%) male and 20 (34.5%) female patients, for co-morbidity in 25 (59.5%) male and 18 (31.0%) female patients, for illiteracy in 18 (42.9%) male and 13 (22.4%) female patients, for a history of CLD of more than the last 2 years in 41 (97.6%) male and 47 (81.0%) female patients, and for severity of CLD in 26 (61.9%) male and 20 (34.5%) female patients. It is concluded that depression frequently occurs among patients with CLD. This study recommends considerable attention to planning preventative measures in the future and developing intervention protocols that consider the management of the risk factors that significantly influence depression comorbid with CLD.

Keywords: psychiatry, comorbid, health, quality of life

Procedia PDF Downloads 196
3205 Psychometric Properties of the Sensory Processing Measure Preschool-Home among Children with Autism in Saudi Arabia

Authors: Shahad Alkhalifah, Jonh Wright

Abstract:

Autism spectrum disorder (ASD) is a pervasive developmental disorder associated, for 42% to 88% of people with ASD, with sensory processing disorders. Sensory processing disorders (SPD) impact daily functioning, and it is, therefore, essential to be able to diagnose them accurately. Currently, however, there is no assessment tool available for the Saudi Arabian (SA) population that covers a wide enough age range. Therefore, this study aimed to assess the psychometric properties of the Sensory Processing Measure Preschool-Home Form (SPM-P) when used in English with a population of English-speaking Saudi participants. This was chosen due to time limitations and the urgency of providing practitioners with appropriate tools. Using a convenience sampling approach, a group of caregivers of typically developing (TD) children and a group of caregivers of children with ASD were recruited (N = 40 and N = 16, respectively), and completed the SPM-P Home Form. Participants were also invited to complete it again after two weeks for test-retest reliability, and nine and five, respectively, agreed. Reliability analyses suggested some issues with a few items when used in the Saudi culture and, along with the interscale correlations, highlighted concerns with the factor structure. However, it was also found that the SPM-P Home has good criterion-based validity, and it is, therefore, suggested that it can be used until a tool is developed through translation and cultural adaptation. It is also suggested that the current factor structure of the SPM-P Home be reassessed using a larger sample.

Keywords: autism, sensory, assessment, reliability, sensory processing dysfunction, preschool, validity

Procedia PDF Downloads 228
3204 Public Health Emergency Management (PHEM) to COVID-19 Pandemic in North-Eastern Part of Thailand

Authors: Orathai Srithongtham, Ploypailin Mekathepakorn, Tossaphong Buraman, Pontida Moonpradap, Rungrueng Kitpati, Chulapon Kratet, Worayuth Nak-ai, Suwaree Charoenmukkayanan, Peeranuch Keawkanya

Abstract:

The COVID-19 pandemic affected the health security of the Thai people. The PHEM principle was essential to the surveillance, prevention, and control of COVID-19. This study aimed to present the process of prevention and control of COVID-19 from February 29, 2021 to April 30, 2022, and the factors and conditions that influenced the successful outcome. The study areas were three provinces. The target group was 37 people, composed of public health personnel. The data were collected through in-depth and group interviews following a non-structured interview guide and were analyzed by content analysis. The components of COVID-19 prevention and control found in the PHEM process were as follows: 1) an Emergency Operation Center (EOC) with an incident command system (ICS) from the district to the provincial level to propose provincial measures, 2) the Provincial Communicable Disease Committee (PCDC) to decide on provincial measures, 3) measures for the surveillance, prevention, control, and treatment of COVID-19, and 4) outcomes and best practices for the surveillance and control of COVID-19. The success factors, 4S and EC, were as follows: Space - preparing quarantine facilities (HQ, LQ), cohort wards (CW), field hospitals, and community and home isolation for patients and at-risk groups; Staff - a network from various organizations and groups, covering community leaders and Health Volunteers (HV); Stuff - the management and sharing of medical and non-medical equipment; System - the COVID-19 response systems of the EOC, ICS, Joint Investigation Team (JIT), and Communicable Disease Control Unit (CDCU) for real-time monitoring of surveillance and control outputs; Environment - management in hospitals and the community according to Infection Control (IC) principles; and Culture - the social capital of "the relationship of Isan people", which supported good care and support for patients. The structure of PHEM, Isan culture, and good preparation were significant factors in the three provinces.

Keywords: public health, emergency management, covid-19, pandemic

Procedia PDF Downloads 80
3203 Progress in Accuracy, Reliability and Safety in Firedamp Detection

Authors: José Luis Lorenzo Bayona, Ljiljana Medic-Pejic, Isabel Amez Arenillas, Blanca Castells Somoza

Abstract:

This communication presents the results of a study carried out by the Official Laboratory J. M. Madariaga (LOM) of the Polytechnic University of Madrid to analyze the reliability of the methane detection systems used in underground mining. Poor firedamp control at work can cause anything from production stoppages to fatal accidents, and since there is currently a great variety of equipment with different functional characteristics, a study is needed to indicate which measurement principles offer the highest degree of confidence. For the development of the project, a series of fixed, transportable and portable methane detectors with different measurement principles were selected and subjected to laboratory tests following the methods described in the applicable regulations. The test equipment was that usually used in the certification and calibration of these devices, subject to the LOM quality system, and the tests were carried out on detectors available on the market. The conclusions establish the main advantages and disadvantages of the equipment according to the measurement principle used: catalytic combustion, interferometry and infrared absorption.

Keywords: ATEX standards, gas detector, methane meter, mining safety

Procedia PDF Downloads 135
3202 Astronomical Panels of Measuring and Dividing Time in Ancient Egypt

Authors: Mohamed Saeed Ahmed Salman

Abstract:

The ancient Egyptians used the stars to measure time or, in a more precise sense, as one of the astronomical means of measuring time. These methods differed throughout the historical ages. They began with simple observations of astronomical phenomena, such as watching the movements of the stars in the sky to mark the year and to know the days and nights, alongside other means used to help set the time when the sky was overcast. Through archaeological evidence, the researcher therefore tries to demonstrate the ancient Egyptians' knowledge of the stars of heaven and their movements, beginning in early prehistory. The astronomical information possessed by the Egyptians is not believed to have been limited or simple; it reached a nearly optimal level in terms of its importance and of the goals the ancient Egyptians wanted to achieve, helping them to know the time and the passage of time, and ultimately leading them to seek a system for timing and the calculation of time. There are signs that the stellar creed was known and flourishing, especially since the predynastic ages, as is evident in the inscriptions that date back to that period. The Egyptians realized that some of the stars remain visible at night, and they were familiar with the daily journey of the stars. This is reflected in many passages of the Pyramid Texts and their references to the ascent of the deceased king to the heavenly world among the stars of the eternal sky. The ancient Egyptians also linked the stellar doctrine to their calendars: the lunar calendar was known to them, as was the stellar-solar calendar, which was based on the appearance of the star Sirius; this was among the first means used to measure time and establish the calendar.

Keywords: ancient Egyptian, astronomical panels, Egyptian, astronomical

Procedia PDF Downloads 12
3201 The Properties of Risk-based Approaches to Asset Allocation Using Combined Metrics of Portfolio Volatility and Kurtosis: Theoretical and Empirical Analysis

Authors: Maria Debora Braga, Luigi Riso, Maria Grazia Zoia

Abstract:

Risk-based approaches to asset allocation are portfolio construction methods that do not rely on the input of expected returns for the asset classes in the investment universe and only use risk information. They include the Minimum Variance Strategy (MV strategy), the traditional (volatility-based) Risk Parity Strategy (SRP strategy), the Most Diversified Portfolio Strategy (MDP strategy) and, for many, the Equally Weighted Strategy (EW strategy). All the mentioned approaches were based on portfolio volatility as the reference risk measure, but in 2023, the Kurtosis-based Risk Parity strategy (KRP strategy) and the Minimum Kurtosis strategy (MK strategy) were introduced. Understandably, they used the fourth root of the portfolio fourth moment as a proxy for portfolio kurtosis in order to work with a homogeneous function of degree one. This paper contributes mainly theoretically and methodologically to the framework of risk-based asset allocation approaches with two steps forward. First, a new and more flexible objective function considering a linear combination (with positive coefficients that sum to one) of portfolio volatility and portfolio kurtosis is used to serve alternatively a risk minimization goal or a homogeneous risk distribution goal. Hence, the new basic idea consists in extending the achievement of the typical goals of risk-based approaches to a combined risk measure. To give the rationale behind operating with such a risk measure, it is worth remembering that volatility and kurtosis are both expressions of uncertainty, to be read as dispersion of returns around the mean, and that both preserve adherence to a symmetric framework and consideration of the entire returns distribution; they differ from each other in that the former captures the "normal"/"ordinary" dispersion of returns, while the latter is able to catch the huge dispersion. Therefore, the combined risk metric, which uses two individual metrics focused on the same phenomenon but differently sensitive to its intensity, allows the asset manager to express, in the context of an objective function and by varying the "relevance coefficient" associated with the individual metrics, a wide set of plausible investment goals for the portfolio construction process, while serving investors differently concerned with tail risk and traditional risk. Since this is the first study that implements risk-based approaches using a combined risk measure, it becomes of fundamental importance to investigate the portfolio effects triggered by this innovation. The paper also offers a second contribution. Until the recent advent of the MK strategy and the KRP strategy, efforts to highlight interesting properties of risk-based approaches were inevitably directed towards the traditional MV strategy and SRP strategy. Previous literature established an increasing order in terms of portfolio volatility, starting from the MV strategy, through the SRP strategy, and arriving at the EW strategy, and provided the mathematical proof of the "equalization effect" concerning marginal risks when the MV strategy is considered, and concerning risk contributions when the SRP strategy is considered. Regarding the validity of similar conclusions when referring to the MK strategy and the KRP strategy, a theoretical demonstration is still pending. This paper fills this gap.
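A hedged sketch of the kind of combined objective the abstract describes is given below in LaTeX: a convex combination of portfolio volatility and the fourth root of the portfolio fourth moment, with λ playing the role of the "relevance coefficient". The notation is ours, not the authors'.

```latex
% Combined risk measure: convex combination of portfolio volatility and the
% fourth root of the portfolio fourth moment (kurtosis proxy), with lambda in [0,1].
\mathcal{R}_{\lambda}(w) \;=\; \lambda\,\sigma_P(w) \;+\; (1-\lambda)\,\big(m_4^P(w)\big)^{1/4},
\qquad \sigma_P(w) = \sqrt{w^{\top}\Sigma\,w}
% Risk-minimization variant (sketch): \min_{w}\ \mathcal{R}_{\lambda}(w)
% subject to w \ge 0 and \sum_i w_i = 1.
```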

Keywords: risk parity, portfolio kurtosis, risk diversification, asset allocation

Procedia PDF Downloads 59
3200 Combination of Electrochemical Impedance Spectroscopy and Electromembrane Extraction for the Determination of Zolpidem Using Modified Screen-Printed Electrode

Authors: Ali Naeemy, Mir Ghasem Hoseini

Abstract:

In this study, for the first time, an analytical method was developed and validated by combining electrochemical impedance spectroscopy and electromembrane extraction (EIS-EME) on a Vulcan/polypyrrole nanocomposite-modified screen-printed electrode (PPY–VU/SPE) for accurately quantifying zolpidem. EME parameters were optimized, including solvent composition, voltage, pH adjustments and extraction time. Zolpidem was transferred from a donor solution (pH 5) to an acceptor solution (pH 13) using a hollow fiber with 1-octanol as the membrane, driven by a 60 V voltage for 25 minutes, ensuring precise and selective extraction. In comparison with the SPE, VU/SPE and PPY/SPE, the PPY–VU/SPE was much more efficient for ZP oxidation. Calibration curves with good linearity were obtained in the concentration range of 2-75 µmol L⁻¹ using EIS-EME, with a detection limit of 0.5 µmol L⁻¹. Finally, EIS-EME with the PPY–VU/SPE was successfully used to determine ZP in tablet dosage form, urine and plasma samples.

Keywords: electrochemical impedance spectroscopy, electromembrane extraction, screen printed electrode, zolpidem

Procedia PDF Downloads 34
3199 A Fuzzy-Rough Feature Selection Based on Binary Shuffled Frog Leaping Algorithm

Authors: Javad Rahimipour Anaraki, Saeed Samet, Mahdi Eftekhari, Chang Wook Ahn

Abstract:

Feature selection and attribute reduction are crucial problems and widely used techniques in the fields of machine learning, data mining and pattern recognition to overcome the well-known phenomenon of the curse of dimensionality. This paper presents a feature selection method that efficiently carries out attribute reduction, thereby selecting the most informative features of a dataset. It consists of two components: 1) a measure for feature subset evaluation, and 2) a search strategy. For the evaluation measure, we have employed the fuzzy-rough dependency degree (FRFDD) of the lower approximation-based fuzzy-rough feature selection (L-FRFS) due to its effectiveness in feature selection. As for the search strategy, a modified version of the binary shuffled frog leaping algorithm (B-SFLA) is proposed. The proposed feature selection method is obtained by hybridizing the B-SFLA with the FRFDD. Nine classifiers have been employed to compare the proposed approach with several existing methods over twenty-two datasets, including nine high-dimensional and large ones, from the UCI repository. The experimental results demonstrate that the B-SFLA approach significantly outperforms other metaheuristic methods in terms of the number of selected features and the classification accuracy.

Keywords: binary shuffled frog leaping algorithm, feature selection, fuzzy-rough set, minimal reduct

Procedia PDF Downloads 222
3198 Designing Automated Embedded Assessment to Assess Student Learning in a 3D Educational Video Game

Authors: Mehmet Oren, Susan Pedersen, Sevket C. Cetin

Abstract:

Despite the frequently criticized disadvantages of traditionally used paper-and-pencil assessment, it is the most frequently used method in our schools. Although such assessments provide an acceptable measurement, they are not capable of measuring all the aspects and the richness of learning and knowledge. Also, many assessments used in schools decontextualize the assessment from the learning, and they focus on learners' standing on a particular topic but do not consider how student learning changes over time. For these reasons, many scholars advocate that using simulations and games (S&G) as a tool for assessment has significant potential to overcome the problems in traditionally used methods. S&G can benefit from the change in technology and provide a contextualized medium for assessment and teaching. Furthermore, S&G can serve as an instructional tool rather than a method to test students' learning at a particular time point. To investigate the potential of using educational games as an assessment and teaching tool, this study presents the implementation and validation of an automated embedded assessment (AEA), which can constantly monitor student learning in the game and assess their performance without intervening in their learning. The experiment was conducted on an undergraduate-level engineering course (Digital Circuit Design) with 99 participating students over a period of five weeks in the Spring 2016 semester. The purpose of this research study is to examine whether the proposed method of AEA is valid for assessing student learning in a 3D educational game and to present the implementation steps. To address this question, this study inspects three aspects of the AEA for validation. First, the evidence-centered design model was used to lay out the design and measurement steps of the assessment. Then, a confirmatory factor analysis was conducted to test whether the assessment can measure the targeted latent constructs. Finally, the scores of the assessment were compared with an external measure (a validated test measuring student learning on digital circuit design) to evaluate the convergent validity of the assessment. The results of the confirmatory factor analysis showed that the fit of the model with three latent factors and one higher-order factor was acceptable (RMSEA < 0.00, CFI = 1, TLI = 1.013, WRMR = 0.390). All of the observed variables loaded significantly onto the latent factors in the latent factor model. In the second analysis, a multiple regression analysis was used to test whether the external measure significantly predicts students' performance in the game. The results of the regression indicated the two predictors explained 36.3% of the variance (R² = .36, F(2,96) = 27.42.56, p < .00). It was found that students' posttest scores significantly predicted game performance (β = .60, p < .000). The statistical results of the analyses show that the AEA can distinctly measure three major components of the digital circuit design course. It is hoped that this study can help researchers understand how to design an AEA and showcases an implementation by providing an example methodology to validate this type of assessment.

Keywords: educational video games, automated embedded assessment, assessment validation, game-based assessment, assessment design

Procedia PDF Downloads 416
3197 Visualization and Performance Measure to Determine Number of Topics in Twitter Data Clustering Using Hybrid Topic Modeling

Authors: Moulana Mohammed

Abstract:

Topic models have been widely used in building clusters of documents for more than a decade, yet problems occur in choosing the optimal number of topics. The main problem is the lack of a stable metric of the quality of the topics obtained during the construction of topic models. From previous works, the authors observed that most of the models used in determining the number of topics are non-parametric, with the quality of topics determined using perplexity and coherence measures, and concluded that these are not applicable to solving this problem. In this paper, we used a parametric method, which is an extension of the traditional topic model with visual access tendency, for visualization of the number of topics (clusters) to complement clustering and to choose the optimal number of topics based on the results of cluster validity indices. The developed hybrid topic models are demonstrated on different Twitter datasets on various topics for obtaining the optimal number of topics and for measuring the quality of clusters. The experimental results showed that the Visual Non-negative Matrix Factorization (VNMF) topic model performs well in determining the optimal number of topics with interactive visualization and in the performance measure of the quality of clusters with validity indices.
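A minimal sketch of the general idea, choosing the number of topics by a cluster validity index on top of plain NMF, is given below; the tiny corpus is invented, scikit-learn is used for convenience, and the visual-access-tendency component of VNMF is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.metrics import silhouette_score

# Tiny illustrative corpus standing in for tweets (not the paper's datasets).
tweets = [
    "heavy rain floods the city roads", "traffic jam after the flood downtown",
    "new phone camera review", "smartphone battery life review",
    "election results announced tonight", "voters head to the polls today",
]
X = TfidfVectorizer(stop_words="english").fit_transform(tweets)

best_k, best_score = None, -1.0
for k in range(2, 5):                       # candidate numbers of topics
    W = NMF(n_components=k, init="nndsvda", random_state=0).fit_transform(X)
    labels = W.argmax(axis=1)               # assign each tweet to its dominant topic
    if len(set(labels)) < 2:
        continue                            # silhouette needs at least two clusters
    score = silhouette_score(X, labels)     # cluster validity index
    if score > best_score:
        best_k, best_score = k, score

print(f"Optimal number of topics by silhouette: {best_k} (score {best_score:.3f})")
```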

Keywords: interactive visualization, visual non-negative matrix factorization model, optimal number of topics, cluster validity indices, Twitter data clustering

Procedia PDF Downloads 131
3196 Blood Volume Pulse Extraction for Non-Contact Photoplethysmography Measurement from Facial Images

Authors: Ki Moo Lim, Iman R. Tayibnapis

Abstract:

According to WHO estimates, 38 out of 56 million (68%) global deaths in 2012 were due to noncommunicable diseases (NCDs). One of the solutions to avert NCDs is early detection of diseases. To do this, we developed the 'U-Healthcare Mirror', which is able to measure vital signs such as heart rate (HR) and respiration rate without any physical contact or conscious effort. To measure HR with the mirror, we utilized a digital camera. The camera records the red, green, and blue (RGB) discoloration in sequences of the user's facial images. We extracted the blood volume pulse (BVP) from the RGB discoloration because the discoloration of the facial skin is in accordance with the BVP. We used blind source separation (BSS) to extract the BVP from the RGB discoloration and adaptive filters to remove noise. We utilized the singular value decomposition (SVD) method to implement the BSS and the adaptive filters. HR was estimated from the obtained BVP. We carried out HR measurement experiments using our method and a previous method based on independent component analysis (ICA). We compared both of them with HR measurements from a commercial oximeter. The experiment was conducted at various distances between 30 and 110 cm and light intensities between 5 and 2000 lux. For each condition, we took measurements 7 times. The estimated HR showed a mean error of 2.25 bpm and a Pearson correlation coefficient of 0.73. The accuracy has improved compared to previous work. The optimal distance between the mirror and the user for HR measurement was 50 cm with medium light intensity, around 550 lux.
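A simplified stand-in for the pipeline described above is sketched below: synthetic RGB traces are separated with SVD and the heart rate is read from the dominant spectral peak in a plausible band. The signal, noise levels and band limits are assumptions of the sketch, and the adaptive filtering of the actual method is omitted.

```python
import numpy as np

# Synthetic 30 s of RGB channel means at 30 fps: a 1.2 Hz (72 bpm) pulse plus noise.
fs = 30.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
pulse = np.sin(2 * np.pi * 1.2 * t)
rgb = np.vstack([g * pulse + 0.2 * rng.standard_normal(t.size) for g in (0.3, 1.0, 0.6)])

# Normalize each channel, then separate sources with SVD (a simple BSS stand-in).
Xn = (rgb - rgb.mean(axis=1, keepdims=True)) / rgb.std(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xn, full_matrices=False)   # rows of Vt are candidate sources

def hr_from_component(x):
    """Dominant spectral peak inside a plausible HR band (0.75-4 Hz), in bpm."""
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    spec = np.abs(np.fft.rfft(x - x.mean()))
    band = (freqs >= 0.75) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spec[band])], np.max(spec[band])

# Pick the component whose in-band peak is strongest -- taken as the BVP source.
hr, _ = max((hr_from_component(v) for v in Vt), key=lambda p: p[1])
print(f"Estimated heart rate: {hr:.1f} bpm")
```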

Keywords: blood volume pulse, heart rate, photoplethysmography, independent component analysis

Procedia PDF Downloads 325
3195 Integration GIS–SCADA Power Systems to Enclosure Air Dispersion Model

Authors: Ibrahim Shaker, Amr El Hossany, Moustafa Osman, Mohamed El Raey

Abstract:

This paper explores an integration model between a GIS-SCADA system and an enclosure quantification model to assess the impact of a failure-safe event. There are real demands to identify spatial objects and improve control system performance. The employed methodology predicts electro-mechanical operations and the corresponding time with respect to environmental incident variations. Open processing, as an object systems technology, is presented for integrating the enclosure database with minimal memory size and computation time via connectivity drivers such as ODBC:JDBC during the main stages of the GIS-SCADA connection. The function of the Geographic Information System is to manage power distribution in response to developing issues. In other words, GIS-SCADA systems integration will require numerical process objects to enable system model calibration and estimation demands, determination of past events for analysis, and prediction of emergency situations for response training.

Keywords: air dispersion model, environmental management, SCADA systems, GIS system, integration power system

Procedia PDF Downloads 361
3194 The Automated Soil Erosion Monitoring System (ASEMS)

Authors: George N. Zaimes, Valasia Iakovoglou, Paschalis Koutalakis, Konstantinos Ioannou, Ioannis Kosmadakis, Panagiotis Tsardaklis, Theodoros Laopoulos

Abstract:

Advancements in technology allow the development of a new system that can continuously measure surface soil erosion. Continuous soil erosion measurements are required in order to comprehend the erosional processes and propose effective and efficient conservation measures to mitigate surface erosion. Mitigating soil erosion, especially in Mediterranean countries such as Greece, is essential in order to maintain environmental and agricultural sustainability. In this paper, we present the Automated Soil Erosion Monitoring System (ASEMS), which measures surface soil erosion along with other factors that impact the erosional process. Specifically, this system measures ground level changes (surface soil erosion), rainfall, air temperature, soil temperature and soil moisture. Another important innovation is that the data are collected by remote communication. In addition, stakeholders' awareness is a key factor in helping to reduce any environmental problem, and the different dissemination activities that were utilized are described. The overall outcome was the development of an innovative system that can measure erosion very accurately. The data from the system help in studying the process of erosion and finding the best possible methods to reduce it. The dissemination activities enhance stakeholders' and the public's awareness of surface soil erosion problems and will lead to the adoption of more effective soil erosion conservation practices in Greece.

Keywords: soil management, climate change, new technologies, conservation practices

Procedia PDF Downloads 337
3193 Analysis of Erosion Quantity on Application of Conservation Techniques in Ci Liwung Hulu Watershed

Authors: Zaenal Mutaqin

Abstract:

The level of erosion that occurs in the upstream watershed will lead to limited infiltration, land degradation and the silting of rivers and estuaries. One of the watersheds that has been degraded by land use is the upstream Ci Liwung watershed. The high degradation that occurs in the upstream Ci Liwung watershed is indicated by the higher rate of erosion in the region, especially in agricultural areas. In this case, agricultural cultivation refers to agricultural land to which conservation techniques have been applied. This study was carried out to determine the quantity of erosion by reviewing the Hydrologic Response Units (HRU) of agricultural cultivation land contained in the upstream Ci Liwung watershed using the Soil and Water Assessment Tool (SWAT). The conservation techniques applied are terracing, agroforestry and gulud terraces. It was concluded that the agroforestry conservation technique shows the best (lowest) erosion value compared with the other conservation techniques, with an erosion contribution of 25.22 tonnes/ha/year. The calibration results between the modeled and observed discharge (R² = 0.9014 and NS = 0.79) indicate that this model is acceptable and feasible to apply to the Ci Liwung Hulu watershed.

Keywords: conservation, erosion, SWAT analysis, watershed

Procedia PDF Downloads 287
3192 Intra and International Collaborations as Important Factors of Organisational Innovation of Government Agencies in STI Ecosystem in ASEAN

Authors: Salinthip Thipayang, Achara Chandrachai, Rath Pichyangkura, Sukree Sinthupinyo

Abstract:

Most of the well-known frameworks and tools to measure and compare the organisational innovation of public or government agencies have been designed and used in developed economies such as the EU, the Nordic region, Australia, and South Korea. This project is one of the very first attempts to develop a measurement tool to adequately measure the organisational (administrative) innovation of government agencies in the developing economies of ASEAN. A new measurement framework was developed, adding components covering the intra and international collaborations of these government agencies with the private, public and academic sectors. Questionnaires and in-depth interviews with experts and with middle to top executives of the participating public agencies in the ASEAN member states were conducted to determine the suitability of, and to develop, the indicators that should be included in the measurement model. The results showed that the intra and international collaborations of these government organisations with other agencies in the public, private and academic sectors can lead to new changes and greatly impact the way these government agencies in the ASEAN STI ecosystem are operated and administered. Government organisations in the less developed countries of ASEAN are ready and willing to learn from their counterparts in other more advanced countries and to adjust their internal management to be more innovative and to better handle international collaborative projects and commitments.

Keywords: organisational innovation, administrative innovation, government agencies, public agencies, ASEAN science technology and innovation ecosystem, international collaborations

Procedia PDF Downloads 381
3191 Performance Evaluation and Planning for Road Safety Measures Using Data Envelopment Analysis and Fuzzy Decision Making

Authors: Hamid Reza Behnood, Esmaeel Ayati, Tom Brijs, Mohammadali Pirayesh Neghab

Abstract:

Investment projects in road safety planning can benefit from an effectiveness evaluation of their expected safety outcomes. The objective of this study is to develop a decision support system (DSS) to help policymakers make the right choices in road safety planning based on the efficiency of previously implemented safety measures in a set of regions in Iran. The measures considered for each region include performance indicators about (1) police operations, (2) treated black spots, (3) freeway and highway facility supplies, (4) speed control cameras, (5) emergency medical services, and (6) road lighting projects. To this end, an inefficiency measure is calculated, defined as the proportion of fatality rates in relation to the combined measure of road safety performance indicators (i.e., road safety measures), which should be minimized. The relative inefficiency of each region is modeled with the Data Envelopment Analysis (DEA) technique. In a next step, a fuzzy decision-making system is constructed to convert the information obtained from the DEA analysis into a rule-based system that can be used by policymakers to evaluate the expected outcomes of alternative investment strategies in road safety.
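As a hedged illustration of the DEA step, the sketch below solves a generic input-oriented CCR model with scipy; treating the fatality rate as the single input and the safety performance indicators as outputs is our reading of the abstract rather than the authors' exact formulation, and all numbers are invented.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 (columns of X and Y are DMUs)."""
    m, n = X.shape            # m inputs, n DMUs
    s = Y.shape[0]            # s outputs
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    b_in = np.zeros(m)
    # Outputs: -sum_j lambda_j * y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

# Illustrative data: 1 input (fatality rate) and 3 outputs (safety indicators) for 4 regions.
X = np.array([[8.0, 5.0, 6.5, 4.0]])                 # fatality rate: lower is better
Y = np.array([[30, 42, 35, 50],                      # e.g., police operations index
              [12, 20, 15, 25],                      # treated black spots
              [ 5,  9,  6, 11]])                     # speed-control cameras
for j in range(X.shape[1]):
    print(f"Region {j}: CCR efficiency = {ccr_efficiency(X, Y, j):.3f}")
```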

Keywords: performance indicators, road safety, decision support system, data envelopment analysis, fuzzy reasoning

Procedia PDF Downloads 348
3190 MAOD Is Estimated by Sum of Contributions

Authors: David W. Hill, Linda W. Glass, Jakob L. Vingren

Abstract:

Maximal accumulated oxygen deficit (MAOD), the gold-standard measure of anaerobic capacity, is the difference between the oxygen cost of exhaustive severe-intensity exercise and the accumulated oxygen consumption (O2; mL·kg–1). In theory, MAOD can be estimated as the sum of independent estimates of the phosphocreatine and glycolysis contributions, which we refer to as PCr+glycolysis. Purpose: The purpose was to test the hypothesis that PCr+glycolysis provides a valid measure of anaerobic capacity in cycling and running. Methods: The participants were 27 women (mean ± SD, age 22 ± 1 y, height 165 ± 7 cm, weight 63.4 ± 9.7 kg) and 25 men (age 22 ± 1 y, height 179 ± 6 cm, weight 80.8 ± 14.8 kg). They performed two exhaustive cycling and running tests, at speeds and work rates that were tolerable for ~5 min. The rate of oxygen consumption (VO2; mL·kg–1·min–1) was measured in warmups, in the tests, and during 7 min of recovery. Fingerprick blood samples obtained after exercise were analysed to determine the peak blood lactate concentration (PeakLac). The VO2 response in exercise was fitted to a model with a fast 'primary' phase followed by a delayed 'slow' component, from which the accumulated O2 and the excess O2 attributable to the slow component were calculated. The VO2 response in recovery was fitted to a model with a fast phase and a slow component sharing a common time delay. Oxygen demand (in mL·kg–1·min–1) was determined by extrapolation from the steady-state VO2 in warmups; the total oxygen cost (in mL·kg–1) was determined by multiplying this demand by the time to exhaustion and adding the excess O2; then, MAOD was calculated as the total oxygen cost minus the accumulated O2. The phosphocreatine contribution (the area under the fast phase of the post-exercise VO2) and the glycolytic contribution (converted from PeakLac) were summed to give PCr+glycolysis. There was no interaction effect involving sex, so values for anaerobic capacity were examined using a two-way ANOVA, with repeated measures across method (PCr+glycolysis vs MAOD) and mode (cycling vs running). Results: There was a significant effect only for exercise mode. There was no difference between MAOD and PCr+glycolysis: values were 59 ± 6 mL·kg–1 and 61 ± 8 mL·kg–1 in cycling and 78 ± 7 mL·kg–1 and 75 ± 8 mL·kg–1 in running. Discussion: PCr+glycolysis is a valid measure of anaerobic capacity in cycling and running, and it is as valid for women as for men.
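The two estimates compared above can be illustrated with simple arithmetic; every number below is invented, and the ~3 mL·kg–1 per mmol·L–1 lactate-to-oxygen conversion is a commonly used value assumed here, not one stated in the abstract.

```python
# Illustrative numbers only -- not the participants' data.
demand = 55.0          # extrapolated O2 demand, mL.kg-1.min-1
t_exh = 5.0            # time to exhaustion, min
excess_o2 = 3.0        # excess O2 from the VO2 slow component, mL.kg-1
accum_o2 = 215.0       # accumulated O2 uptake during exercise, mL.kg-1

# MAOD = total O2 cost (demand x time + slow-component excess) - accumulated VO2
total_cost = demand * t_exh + excess_o2
maod = total_cost - accum_o2                        # mL.kg-1

# Sum-of-contributions estimate:
pcr = 28.0             # area under fast phase of post-exercise VO2, mL.kg-1
peak_lactate = 11.0    # peak blood lactate above baseline, mmol.L-1
glycolysis = 3.0 * peak_lactate                     # assumed ~3 mL.kg-1 per mmol.L-1
pcr_plus_glycolysis = pcr + glycolysis

print(f"MAOD = {maod:.1f} mL/kg, PCr+glycolysis = {pcr_plus_glycolysis:.1f} mL/kg")
```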

Keywords: alactic, anaerobic, cycling, ergometer, glycolysis, lactic, lactate, oxygen deficit, phosphocreatine, running, treadmill

Procedia PDF Downloads 132
3189 The Mediator Role of Social Competence in the Relation between Effortful Control and Maths Achievement

Authors: M. A. Fernández-Vilar, M. D. Galián, E. Ato

Abstract:

The aim of this work was to analyze the relation between children's effortful control and maths achievement in a sample of 447 Spanish children aged between 6 and 8 years. Traditionally, the literature confirms that a higher level of effortful control is associated with higher academic achievement, but there are few studies that include the effect that children's social competence exerts on this relation. To measure children's effortful control, parents were given the TMCQ (Temperament in Middle Childhood Questionnaire), and maths achievement was taken from teachers' ratings. To measure social competence, we used the nominations method in the classroom context. The results confirmed that higher effortful control predicted better maths achievement, whereas lower effortful control scores predicted lower maths scores. Using a statistical modeling approach, we tested a mediation model that revealed the mediating role of social competence (popularity and rejection) in the relation between effortful control and maths achievement. Concretely, higher social competence (higher popularity and lower rejection) seems to mediate the better maths achievement shown by better self-regulated children. Therefore, adequate social competence mediates the positive effect that self-regulatory capacity exerts on academic achievement. The clinical implications of the present findings should be considered. Specifically, rejected children must be detected and evaluated in community settings, such as school or community programs, due to the relevant role of social competence in the relation between temperament and academic achievement.
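A minimal sketch of the product-of-coefficients logic behind such a mediation model is given below, run on simulated data; it is not the authors' analysis or software, and the effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 447                                    # same sample size as the study, but simulated data
effortful_control = rng.normal(size=n)
social_competence = 0.5 * effortful_control + rng.normal(scale=0.8, size=n)      # path a
maths = 0.3 * social_competence + 0.2 * effortful_control + rng.normal(size=n)   # paths b, c'

def ols(y, *xs):
    """Return OLS slopes (excluding intercept) of y on the given predictors."""
    X = np.column_stack([np.ones_like(y)] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols(social_competence, effortful_control)[0]               # EC -> social competence
b, c_prime = ols(maths, social_competence, effortful_control)  # SC -> maths, direct EC -> maths
c = ols(maths, effortful_control)[0]                           # total effect

print(f"total c = {c:.3f}, direct c' = {c_prime:.3f}, indirect a*b = {a * b:.3f}")
```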

Keywords: effortful control, maths achievement, social competence, mediation

Procedia PDF Downloads 385