Search results for: distance calibration

1623 Rainfall–Runoff Simulation Using WetSpa Model in Golestan Dam Basin, Iran

Authors: M. R. Dahmardeh Ghaleno, M. Nohtani, S. Khaledi

Abstract:

Flood simulation and prediction is one of the most active research areas in surface water management. WetSpa is a distributed, continuous, physically based model with a daily or hourly time step that describes precipitation, runoff, and evapotranspiration processes for both simple and complex contexts. The model uses a modified rational method for runoff calculation, and runoff is routed along the flow path using the diffusion-wave equation, which depends on slope, velocity, and flow-route characteristics. The Golestan Dam Basin is located in Golestan province, Iran, between coordinates 55° 16′ 50″ to 56° 4′ 25″ E and 37° 19′ 39″ to 37° 49′ 28″ N. The catchment area is about 224 km², elevations range from 414 m at the outlet to 2856 m, and the average slope is 29.78%. Results of the simulations show good agreement between calculated and measured hydrographs at the outlet of the basin. Based on the Nash-Sutcliffe model efficiency coefficient for the calibration period, the model estimated daily hydrographs and the maximum flow rate with accuracies of up to 59% and 80.18%, respectively.
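
The calibration accuracy quoted above is the Nash-Sutcliffe efficiency (NSE), which scores model error against the variance of the observations. A minimal sketch of the standard NSE computation (the discharge values are invented for illustration, not data from the paper):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; values <= 0 mean the model predicts no
    better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Hypothetical daily discharge values (m^3/s), for illustration only
obs = [12.1, 15.3, 30.2, 22.8, 18.4]
sim = [11.5, 16.0, 27.9, 24.1, 17.6]
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
```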

Keywords: watershed simulation, WetSpa, stream flow, flood prediction

Procedia PDF Downloads 229
1622 Open Source, Open Hardware Ground Truth for Visual Odometry and Simultaneous Localization and Mapping Applications

Authors: Janusz Bedkowski, Grzegorz Kisala, Michal Wlasiuk, Piotr Pokorski

Abstract:

Ground-truth data is essential for the quantitative evaluation of VO (Visual Odometry) and SLAM (Simultaneous Localization and Mapping) using metrics such as ATE (Absolute Trajectory Error) and RPE (Relative Pose Error). Many open-access data sets provide raw and ground-truth data for benchmark purposes. The issue appears when one would like to validate Visual Odometry and/or SLAM approaches on data captured with the device for which the algorithm is targeted (for example, a mobile phone) and disseminate the data to other researchers. For this reason, we propose an open-source, open-hardware ground-truth system that provides an accurate and precise trajectory together with a 3D point cloud. It is based on the Livox Mid-360 LiDAR with a non-repetitive scanning pattern, an on-board Raspberry Pi 4B computer, a battery, and software for off-line calculations (camera-to-LiDAR calibration, LiDAR odometry, SLAM, georeferencing). We show how this system can be used to evaluate various state-of-the-art algorithms (Stella SLAM, ORB-SLAM3, DSO) in typical indoor monocular VO/SLAM scenarios.
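
ATE, one of the metrics named above, is typically computed by rigidly aligning the estimated trajectory to the ground truth and taking the RMSE of the translational residuals. A minimal sketch assuming two already time-synchronized Nx3 position arrays; the alignment uses the standard Horn/Umeyama closed form via SVD:

```python
import numpy as np

def ate_rmse(gt, est):
    """Align est to gt with a rigid transform (no scale), then return
    the RMSE of the remaining translational errors."""
    gt, est = np.asarray(gt, float), np.asarray(est, float)
    gc, ec = gt - gt.mean(0), est - est.mean(0)
    U, _, Vt = np.linalg.svd(ec.T @ gc)          # Kabsch/Umeyama step
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflection
    R = (U @ S @ Vt).T
    t = gt.mean(0) - R @ est.mean(0)
    aligned = est @ R.T + t
    return np.sqrt(np.mean(np.sum((gt - aligned) ** 2, axis=1)))
```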

Keywords: SLAM, ground truth, navigation, LiDAR, visual odometry, mapping

Procedia PDF Downloads 13
1621 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest

Authors: Peter Baji

Abstract:

In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although the public sector has different responses for decreasing traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic asset, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in transport science, but until recently there were not sufficient data for evaluating road traffic flow patterns at the scale of the entire road system of a larger urban area. European cities in which congestion charges have already been introduced (e.g., London, Stockholm, Milan) designated a particular downtown zone for payment, but this protects only the users and inhabitants of the CBD (Central Business District). Using Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown, which is affected by the city's congestion charge plans. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data were collected with Google's Distance Matrix API (Application Programming Interface), which provides estimated traffic data as travel times between freely chosen coordinate pairs. From the difference between free-flow and congested travel times, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas that lie outside the downtown, and their inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time, place-based congestion charge system that encourages car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area.
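
The congestion indicator used here, the gap between free-flow and in-traffic travel times for a fixed origin-destination pair, can be queried roughly as sketched below. The endpoint and the duration/duration_in_traffic fields belong to Google's public Distance Matrix API; the coordinates and key are placeholders, and duration_in_traffic is only returned when a departure_time is supplied:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = "https://maps.googleapis.com/maps/api/distancematrix/json"

params = {
    "origins": "47.5600,19.0550",       # hypothetical point, District 13
    "destinations": "47.5000,19.0540",  # hypothetical point, District 5
    "departure_time": "now",            # needed for traffic-based times
    "key": API_KEY,
}
element = requests.get(URL, params=params).json()["rows"][0]["elements"][0]

free_flow = element["duration"]["value"]             # seconds, no traffic
congested = element["duration_in_traffic"]["value"]  # seconds, with traffic
print(f"congestion delay: {congested - free_flow} s")
```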

Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study

Procedia PDF Downloads 180
1620 Synthesis, Characterization, Optical and Photophysical Properties of Pyrene-Labeled Ruthenium(II) Trisbipyridine Complex Cored Dendrimers

Authors: Mireille Vonlanthen, Pasquale Porcu, Ernesto Rivera

Abstract:

Dendritic macromolecules present unique physical and chemical properties. One of them is the ability to transfer energy from donor moieties introduced at the periphery to an acceptor moiety at the core, mimicking the antenna effect of photosynthesis. The mechanism of energy transfer is based on Förster resonance energy transfer (FRET) and requires some overlap between the emission spectrum of the donor and the absorption spectrum of the acceptor. Since it requires coupling of transition dipoles but no overlap of the physical wavefunctions, energy transfer by the Förster mechanism can occur over quite long distances, from 1 nm up to a maximum of about 10 nm; however, the efficiency of the transfer depends strongly on distance. The Förster radius is the distance at which 50% of the donor's emission is deactivated by FRET. In this work, we synthesized and characterized a novel series of dendrimers bearing pyrene moieties at the periphery and a Ru(II) complex at the core. The optical and photophysical properties of these compounds were studied by absorption and fluorescence spectroscopy. Pyrene is a well-studied chromophore that has the particularity of presenting monomer as well as excimer fluorescence emission. The coordination compounds of Ru(II) are red emitters with low quantum yield and long excited-state lifetime. We observed an efficient singlet-to-singlet energy transfer in such constructs. Moreover, it is known that the energy of the MLCT emitting state of Ru(II) can be tuned to become almost isoenergetic with respect to the triplet state of pyrene, leading to an extended phosphorescence lifetime. Using dendrimers bearing pyrene moieties as ligands for Ru(II), we could combine the antenna effect of dendrimers, as well as their protective effect against quenching by dioxygen, with the lifetime increase due to triplet-triplet equilibrium.
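
The distance dependence invoked above is the standard Förster relation: with $R_0$ the Förster radius and $r$ the donor-acceptor distance, the transfer efficiency is

$$E = \frac{1}{1 + (r/R_0)^6},$$

so that $E = 0.5$ exactly at $r = R_0$; the sixth-power falloff is why transfer is efficient only over a few nanometres. (This is textbook photophysics, not a formula quoted from the abstract.)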

Keywords: dendritic molecules, energy transfer, pyrene, Ru-trisbipyridine complex

Procedia PDF Downloads 261
1619 The Role of Institutions in Community Wildlife Conservation in Zimbabwe

Authors: Herbert Ntuli, Edwin Muchapondwa

Abstract:

This study used a sample of 336 households and community-level data from 30 communities around the Gonarezhou National Park in Zimbabwe to analyse the association between the ability to self-organize (cooperation) and institutions on the one hand, and the relationship between the success of biodiversity outcomes and cooperation on the other. Using both ordinary least squares and instrumental-variables estimation with heteroskedasticity-based instruments, our results confirm that sound institutions are indeed an important ingredient for cooperation in the respective communities and that cooperation positively and significantly affects biodiversity outcomes. Group size, community-level trust, the number of stakeholders, and punishment were found to be important variables explaining cooperation. From a policy perspective, our results show that external enforcement of rules and regulations does not necessarily translate into sound ecological outcomes; better outcomes are attainable when punishment is endogenized by local communities. This suggests that communities should be supported in such a way that robust institutions, tailor-made to local conditions, emerge and in turn facilitate good environmental husbandry. Cooperation, training, benefits, distance from the nearest urban centre, distance from the fence, social capital, average age of household head, fencing, and information sharing were found to be very important variables explaining the success of biodiversity outcomes, ceteris paribus. Government programmes should target capacity building in terms of institutional capacity and skills development in order to have a positive impact on biodiversity. Hence, the roles of stakeholders (e.g., NGOs) and government in capacity building should complement each other to ensure that the necessary resources are mobilized and all communities receive the necessary training and resources.
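
Heteroskedasticity-based instruments of the kind named above (in the spirit of Lewbel, 2012) are built, when no external instrument is available, by interacting demeaned exogenous regressors with first-stage residuals. A rough sketch of the idea with a manual two-stage least squares pass; all variables are synthetic placeholders, since the paper's actual specification is not given in the abstract:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 336
z = rng.normal(size=n)                      # exogenous covariate
coop = 0.8 * z + rng.normal(size=n) * (1 + 0.5 * np.abs(z))  # endogenous
bio = 1.5 * coop + 0.3 * z + rng.normal(size=n)              # outcome

# Step 1: regress the endogenous regressor on the exogenous covariates
resid = sm.OLS(coop, sm.add_constant(z)).fit().resid

# Step 2: Lewbel-style instrument = (z - mean(z)) * residuals;
# identification relies on heteroskedastic first-stage errors
iv = (z - z.mean()) * resid

# Step 3: manual 2SLS -- first stage, then substitute fitted values
stage1 = sm.OLS(coop, sm.add_constant(np.column_stack([z, iv]))).fit()
X2 = sm.add_constant(np.column_stack([z, stage1.fittedvalues]))
print(sm.OLS(bio, X2).fit().params)  # point estimates only; 2SLS standard
                                     # errors need the usual correction
```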

Keywords: institutions, self-organize, common pool resources, wildlife, conservation, Zimbabwe

Procedia PDF Downloads 263
1618 Study on Practice of Improving Water Quality in Urban Rivers by Diverting Clean Water

Authors: Manjie Li, Xiangju Cheng, Yongcan Chen

Abstract:

With the rapid development of industrialization and urbanization, water environmental deterioration is widespread in the majority of urban rivers, which seriously affects city image and the life satisfaction of residents. As an emergency measure to improve water quality, clean water diversion is introduced for water environmental management. Lubao River and Southwest River, two urban rivers in a typical plain tidal river network, are identified as technically and economically feasible sites for the application of clean water diversion. A one-dimensional hydrodynamic-water quality model is developed to simulate temporal and spatial variations of water level and water quality, with satisfactory accuracy. The calibrated mathematical model is applied to investigate hydrodynamic and water quality variations in the rivers as well as to determine the optimum operation scheme of water diversion. An assessment system is developed for the evaluation of the positive and negative effects of water diversion, demonstrating the effectiveness of clean water diversion and the necessity of pollution reduction.
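
One-dimensional water quality routing of the kind described is, at its core, a discretized advection-reaction balance along the channel. A toy explicit upwind sketch for a single first-order-decaying pollutant (grid, velocity, and decay rate are illustrative values, not the paper's calibrated ones):

```python
import numpy as np

L, nx = 10_000.0, 200          # reach length (m), number of grid cells
dx = L / nx
u, k = 0.3, 1e-5               # flow velocity (m/s), decay rate (1/s)
dt = 0.8 * dx / u              # respect the CFL condition u*dt/dx <= 1

c = np.zeros(nx)               # concentration along the reach (mg/L)
c_upstream = 5.0               # boundary: diverted clean-water inflow

for _ in range(2000):          # march to a near-steady longitudinal profile
    upwind = np.concatenate(([c_upstream], c[:-1]))
    c = c - u * dt / dx * (c - upwind) - k * dt * c

print(f"concentration at the outlet: {c[-1]:.3f} mg/L")
```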

Keywords: assessment system, clean water diversion, hydrodynamic-water quality model, tidal river network, urban rivers, water environment improvement

Procedia PDF Downloads 262
1617 Analysis and Identification of Different Factors Affecting Students’ Performance Using a Correlation-Based Network Approach

Authors: Jeff Chak-Fu Wong, Tony Chun Yin Yip

Abstract:

The transition from secondary school to university seems exciting for many first-year students but can be more challenging than expected. Enabling instructors to know students' learning habits and styles enhances their understanding of the students' learning backgrounds and allows teachers to provide better support for their students; it therefore has high potential to improve teaching quality and learning, especially in mathematics-related courses. The aim of this research is to collect students' data using online surveys, to analyze student factors using learning analytics and educational data mining, and to discover the characteristics of students at risk of falling behind in their studies based on their previous academic backgrounds and the collected data. In this paper, we use correlation-based distance methods and mutual information for measuring relationships between student factors. We then build a factor network using the minimum spanning tree method and consider, as further study, analyzing the topological properties of these networks using social network analysis tools. Under the framework of mutual information, two graph-based feature filtering methods, i.e., unsupervised and supervised infinite feature selection algorithms, are used to rank and select appropriate subsets of features, yielding effective results in identifying the factors affecting students at risk of failing. This discovered knowledge may help students as well as instructors enhance educational quality by identifying possible under-performers at the beginning of the first semester and giving them special attention in order to support their learning process and improve their learning outcomes.
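
A common way to turn a correlation matrix into the distances an MST needs is d = sqrt(2(1 - rho)), which maps rho = 1 to distance 0 and rho = -1 to distance 2. A minimal sketch with networkx; the factor names and data are hypothetical:

```python
import numpy as np
import pandas as pd
import networkx as nx

# Hypothetical student-factor data; in practice these come from surveys
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(100, 4)),
                  columns=["study_hours", "attendance", "sleep", "gpa"])

rho = df.corr()
dist = np.sqrt(2 * (1 - rho))          # correlation-based distance

G = nx.Graph()
cols = list(df.columns)
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        G.add_edge(a, b, weight=dist.loc[a, b])

mst = nx.minimum_spanning_tree(G)      # Kruskal's algorithm by default
print(sorted(mst.edges(data="weight")))
```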

Keywords: students' academic performance, correlation-based distance method, social network analysis, feature selection, graph-based feature filtering method

Procedia PDF Downloads 112
1616 Experimental Measurement of Equatorial Ring Current Generated by Magnetoplasma Sail in Three-Dimensional Spatial Coordinate

Authors: Masato Koizumi, Yuya Oshio, Ikkoh Funaki

Abstract:

Magnetoplasma Sail (MPS) is a future spacecraft propulsion concept that generates high levels of thrust by inducing an artificial magnetosphere to capture and deflect solar wind charged particles in order to transfer momentum to the spacecraft. When plasma is injected into the spacecraft's magnetic field region, a ring current drifts azimuthally on the equatorial plane about the dipole magnetic field generated by the current flowing through the solenoid on board the spacecraft. This ring current results in magnetosphere inflation, which improves the thrust performance of the MPS spacecraft. In the present study, the ring current was experimentally measured using three Rogowski current probes positioned in a circular array about a laboratory model of an MPS spacecraft. The investigation aims to determine the detailed structure of the ring current through experiments performed under two different magnetic field strengths, obtained by varying the voltage applied to the solenoid (300 V and 600 V). The expected outcome was that the three current probes would detect the same current, since all three probes were positioned at an equal radial distance of 63 mm from the center of the solenoid. Although the experimental results were numerically implausible, probably due to procedural error, their trends revealed three aspects of ring current behavior. First, the drift direction of the ring current depended on the strength of the applied magnetic field. Second, in the presence of solar wind, the diamagnetic current developed at a radial distance not occupied by the three current probes. Third, the ring current distribution varied along the circumferential path about the spacecraft's magnetic field. Although this study yielded experimental evidence that differed from the original hypothesis, these three findings informed two design solutions that will potentially improve thrust performance. The first design solution is the positioning of the plasma injection point: based on the first aspect of ring current behavior, the plasma injection point must be located at a distance from, rather than in close proximity to, the MPS solenoid for the ring current to drift in the direction that results in magnetosphere inflation. The second design solution, predicated on the third aspect of ring current behavior, is a symmetrical configuration of plasma injection points. In this study, an asymmetrical configuration using one plasma source resulted in a non-uniform distribution of ring current along the azimuthal path. This distorts the geometry of the inflated magnetosphere, which minimizes the deflection area for the solar wind. Therefore, to realize a ring current that provides the maximum possible inflated magnetosphere, multiple plasma sources must be spaced evenly along the azimuthal path.

Keywords: Magnetoplasma Sail, magnetosphere inflation, ring current, spacecraft propulsion

Procedia PDF Downloads 298
1615 The Current Practices of Analysis of Reinforced Concrete Panels Subjected to Blast Loading

Authors: Palak J. Shukla, Atul K. Desai, Chentankumar D. Modhera

Abstract:

For any country in the world, it has become a priority to protect critical infrastructure from the looming risks of terrorism. In any infrastructure system, structural elements such as lower floors, exterior columns, and walls are the key elements most susceptible to damage due to blast load. The present study revisits the state of the art in the design and analysis of reinforced concrete panels subjected to blast loading. Various aspects associated with blast loading on structures, i.e., estimation of blast load, previous experimental works, numerical simulation tools, various material models, etc., are considered to explore the current practices adopted worldwide. Various parametric studies investigating the effect of reinforcement ratio, slab thickness, charge weight, and standoff distance are also discussed. It was observed that for the simulation of blast load, the CONWEP blast function or equivalent numerical equations were successfully employed by many researchers. The literature indicates that research was carried out using experimental work and numerical simulation with well-known general-purpose finite element codes, i.e., LS-DYNA, ABAQUS, and AUTODYN. Many researchers recommended using a concrete damage model to represent concrete and a plastic-kinematic material model to represent steel under blast loads for most numerical simulations. Most studies reveal that increases in reinforcement ratio, slab thickness, and standoff distance resulted in better blast resistance of reinforced concrete panels. The study summarizes the various research results and appends the present state of knowledge for structures exposed to blast loading.
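
Empirical blast-load descriptions such as the CONWEP function mentioned above are built on the Friedlander form of the free-field overpressure history; a common statement of it (a standard relation from the blast literature, not a formula reproduced from this abstract) is

$$P(t) = P_s\left(1 - \frac{t}{t_d}\right)e^{-b\,t/t_d},$$

where $P_s$ is the peak overpressure, $t_d$ the positive-phase duration, and $b$ a decay coefficient, these parameters being obtained in practice from correlations in the scaled distance $Z = R/W^{1/3}$.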

Keywords: blast phenomenon, experimental methods, material models, numerical methods

Procedia PDF Downloads 140
1614 Model for Calculating Traffic Mass and Deceleration Delays Based on Traffic Field Theory

Authors: Liu Canqi, Zeng Junsheng

Abstract:

This study identifies two typical bottlenecks that occur when a vehicle cannot change lanes: car following and car stopping. The ideas of a traffic field and traffic mass are presented. When there are other vehicles in front of the target vehicle within a particular distance, a force is created that affects the target vehicle's driving speed. The characteristics of the driver and the vehicle collectively determine the traffic mass; the driving speed of the vehicle and external variables have no bearing on it. At the physical level, this study examines the car-following bottleneck, identifies the outside factors that influence driving, considers that the vehicle transforms kinetic energy into potential energy during deceleration, and builds a calculation model for traffic mass. From an economic standpoint, the energy-time conversion coefficient is derived from the social average wage level and the average cost of motor fuel. The Vissim simulation program is used to measure the vehicle's deceleration distance and delays under the Wiedemann car-following model. The measured deceleration delay obtained by simulation is compared with the theoretical value calculated by the conversion model between traffic mass and deceleration delay. The experimental data demonstrate that the model is reliable, since the error between the theoretical deceleration delay and the simulated value is less than 10%. The article concludes that the traffic field has an impact on moving cars on the road and that physical and socioeconomic factors should be taken into account when studying car-following behavior. The socioeconomic relationship between a vehicle's deceleration delay and traffic mass can be used to calculate the energy-time conversion coefficient when dealing with the bottleneck of cars stopping and starting.

Keywords: traffic field, social economics, traffic mass, bottleneck, deceleration delay

Procedia PDF Downloads 48
1613 Numerical Modeling of Phase Change Materials Walls under Reunion Island's Tropical Weather

Authors: Lionel Trovalet, Lisa Liu, Dimitri Bigot, Nadia Hammami, Jean-Pierre Habas, Bruno Malet-Damour

Abstract:

The MCP-iBAT¹ project studies the behavior of Phase Change Materials (PCM) integrated into building envelopes in a tropical environment. Through the phase transitions (melting and freezing) of the material, thermal energy can be absorbed or released. This process enables the regulation of indoor temperatures and the improvement of thermal comfort for the occupants. Most commercially available PCMs are more suitable for temperate climates than for tropical ones. The case of Reunion Island is noteworthy, as it hosts multiple micro-climates. This leads to our key question: developing one or several bio-based PCMs that cover the thermal needs of the different locations on the island. The present paper focuses on the numerical approach to selecting PCM properties relevant to tropical areas. Numerical simulations have been carried out with two software packages: EnergyPlus and Isolab. The latter has been developed in the laboratory, using the implicit finite difference method, in order to evaluate different physical models. Both are thermal dynamic simulation (TDS) tools that predict a building's thermal behavior with one-dimensional heat transfers. The parameters used in this study are the construction's characteristics (dimensions and materials) and the environment's description (meteorological data and building surroundings). The building is modeled in accordance with the experimental setup. It is divided into two rooms, cells A and B, with the same dimensions. Cell A is the reference, while in cell B a layer of commercial PCM (Thermo Confort from MCI Technologies) has been applied to the inner surface of the north wall. Sensors are installed in each room to record temperatures, heat flows, and humidity rates. The collected data are used for comparison with the numerical results. Our strategy is to implement two similar buildings at different altitudes (Saint-Pierre: 70 m and Le Tampon: 520 m) to measure different temperature ranges; we are therefore able to collect data for various seasons during a condensed time period. The following methodology is used to validate the numerical models: calibration of the thermal and PCM models in EnergyPlus and Isolab based on experimental measurements, then numerical testing with a sensitivity analysis of the parameters to reach the targeted indoor temperatures. The calibration relies on the past ten months' measurements (from September 2020 to June 2021), with a focus on a one-week study in November (beginning of summer), when the effect of PCM on inner surface temperatures is more visible. A first simulation with the PCM model of EnergyPlus gave results approaching the measurements, with a mean error of 5%. The property studied in this paper is the melting temperature of the PCM. By determining representative temperatures for winter, summer, and the inter-seasons from past annual weather data, it is possible to build a numerical model of multi-layered PCM. The combined properties of the materials will thus provide an optimal scenario for the application of PCM in tropical areas. Future work will focus on the development of bio-based PCMs with the selected properties, followed by experimental and numerical validation of the materials. ¹Matériaux à Changement de Phase, une innovation pour le Bâti Tropical (Phase Change Materials, an innovation for tropical buildings)

Keywords: energyplus, multi-layer of PCM, phase changing materials, tropical area

Procedia PDF Downloads 81
1612 Building an Arithmetic Model to Assess Visual Consistency in Townscape

Authors: Dheyaa Hussein, Peter Armstrong

Abstract:

The phenomenon of visual disorder is prominent in contemporary townscapes. This paper provides a theoretical framework for the assessment of visual consistency in townscape in order to achieve more favourable outcomes for users. In this paper, visual consistency refers to the amount of similarity between adjacent components of townscape. The paper investigates parameters related to visual consistency in townscape, explores the relationships between them, and highlights their significance. It uses arithmetic methods from outside the domain of urban design to enable an objective approach to assessment that considers subjective indicators, including users' preferences. These methods involve the standard deviation, colour distance, and the distance between points. The paper identifies urban space as a key representative of the visual parameters of townscape and focuses on its two components, geometry and colour, in evaluating the visual consistency of townscape. Accordingly, this article proposes four measurements. The first quantifies the number of vertices, which are points in three-dimensional space connected by lines to represent the appearance of elements. The second evaluates the visual surroundings of urban space by assessing the location of these vertices. The last two measurements calculate the visual similarity in both vertices and colour in townscape by computing their variation using methods including the standard deviation and colour difference. The proposed quantitative assessment is based on users' preferences regarding these measurements. The paper offers a theoretical basis for a practical tool, currently under development, which can alter the current understanding of architectural form and its application in urban space. The proposed method underpins expert subjective assessment and permits the establishment of a unified framework which adds to creativity through the achievement of a higher level of consistency and satisfaction among the citizens of evolving townscapes.
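
The two variation measures named above, dispersion of vertex counts and colour difference between adjacent components, can be sketched as follows. Euclidean distance in CIELAB is used here as the colour-difference measure, and all facade data are invented for illustration:

```python
import numpy as np

# Hypothetical adjacent facades: vertex counts and mean CIELAB colours
vertex_counts = np.array([120, 134, 95, 410, 128])     # one outlier facade
lab_colours = np.array([[62.1, 8.4, 21.0],
                        [60.5, 9.1, 19.7],
                        [23.0, 4.2, -30.5],            # strongly deviating
                        [61.2, 7.9, 20.3],
                        [59.8, 8.8, 22.1]])

# Geometric consistency: dispersion of vertex counts along the street
geometry_spread = vertex_counts.std(ddof=1)

# Colour consistency: Delta-E (Euclidean in Lab) between neighbours
delta_e = np.linalg.norm(np.diff(lab_colours, axis=0), axis=1)

print(f"vertex-count standard deviation: {geometry_spread:.1f}")
print("pairwise Delta-E:", np.round(delta_e, 1))
```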

Keywords: townscape, urban design, visual assessment, visual consistency

Procedia PDF Downloads 296
1611 An Internet of Things-Based Weight Monitoring System for Honey

Authors: Zheng-Yan Ruan, Chien-Hao Wang, Hong-Jen Lin, Chien-Peng Huang, Ying-Hao Chen, En-Cheng Yang, Chwan-Lu Tseng, Joe-Air Jiang

Abstract:

Bees play a vital role in pollination. This paper focuses on the weighing process of honey. Honey is usually stored in combs in a hive. Bee farmers brush bees away from the comb and then collect the honey, which is weighed afterward. However, this process has a strongly negative influence on bees and can even lead to their death. This paper therefore presents an Internet of Things-based weight monitoring system which uses weight sensors to measure the weight of honey and simplifies the whole weighing procedure. To verify the system, the weight measured by the system is compared with standard calibration weights using a linear regression model. The R² of the regression model is 0.9788, which suggests that the weighing system is highly reliable and can be applied to obtain the actual weight of honey. In the future, the weight data of honey can be used to find the relationship between honey production and different ecological parameters, such as bees' foraging behavior and weather conditions. It is expected that the findings can serve as critical information for improving honey production.
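
The verification step, regressing sensor readout against known calibration masses and checking R², can be sketched as below (the readings are invented for illustration, not the paper's data):

```python
from scipy.stats import linregress

known_mass = [0, 500, 1000, 2000, 5000]        # g, standard weights
sensor_out = [3, 492, 1011, 1985, 5042]        # g, hypothetical readings

fit = linregress(known_mass, sensor_out)
print(f"slope={fit.slope:.4f}, intercept={fit.intercept:.1f} g, "
      f"R^2={fit.rvalue**2:.4f}")

# Invert the calibration to convert a raw reading into a mass estimate
raw = 3120.0
print(f"estimated mass: {(raw - fit.intercept) / fit.slope:.1f} g")
```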

Keywords: internet of things, weight, honey, bee

Procedia PDF Downloads 440
1610 Improving Monitoring and Fault Detection of Solar Panels Using Arduino Mega in WSN

Authors: Ali Al-Dahoud, Mohamed Fezari, Thamer Al-Rawashdeh, Ismail Jannoud

Abstract:

Our contribution in this paper is monitoring and detecting faults in a set of solar panels using a wireless sensor network (WSN). This work is part of a project we are working on at Al-Zaytoonah University. The research problem was raised by engineers, technicians, and operators dealing with PV panel maintenance, who need to monitor and detect faults within solar panels, since such faults considerably affect the energy produced. The proposed solution is based on installing WSN nodes, with sensors appropriate to the most frequently occurring faults, on the 45 solar panels installed on the roof of the IT faculty. A simulation of node distribution has been carried out, along with a study of the design of a node with appropriate sensors, taking into account the priorities of fault processing. Finally, a graphical user interface was designed and adapted to tele-monitoring of the panels using the WSN. Primary tests of the hardware implementation gave interesting results; the sensor calibration and transmission interference problems have been solved. A user-friendly GUI was developed in the high-level language Visual Basic to carry out the monitoring process and to save data to an Excel file.

Keywords: Arduino Mega microcontroller, solar panels, fault detection, simulation, node design

Procedia PDF Downloads 453
1609 Optimizing the Efficiency of Measuring Instruments in Ouagadougou-Burkina Faso

Authors: Moses Emetere, Marvel Akinyemi, S. E. Sanni

Abstract:

At the moment, the AERONET and AMMA databases show a large volume of data loss. With only about 47% of the data set available to scientists, it is evident that accurate nowcasts or forecasts cannot be guaranteed. The calibration constants of most radiosondes or weather stations are not compatible with the atmospheric conditions of the West African climate. A dispersion model was developed to incorporate salient mathematical representations such as a unified number, derived to describe the turbulence of aerosol transport in the frictional layer of the lower atmosphere. A fourteen-year data set from the Multi-angle Imaging SpectroRadiometer (MISR) was tested using the dispersion model. A yearly estimation of the atmospheric constants over Ouagadougou using the model was obtained with about 87.5% accuracy. It further revealed that the average atmospheric constants for Ouagadougou, Burkina Faso are a₁ = 0.626 and a₂ = 0.7999, with tuning constants n₁ = 0.09835 and n₂ = 0.266. The yearly atmospheric constants also affirmed that the lower atmosphere of Ouagadougou is very dynamic. Hence, it is recommended that radiosonde and weather station manufacturers constantly review the atmospheric constants over a geographical location to enable about eighty percent data retrieval.

Keywords: aerosols retention, aerosols loading, statistics, analytical technique

Procedia PDF Downloads 294
1608 Development of Paper Based Analytical Devices for Analysis of Iron (III) in Natural Water Samples

Authors: Sakchai Satienperakul, Manoch Thanomwat, Jutiporn Seedasama

Abstract:

A paper-based analytical device (PAD) for the analysis of Fe(III) ions in natural water samples is developed, using a reagent from guava leaf extract. The extraction is simply performed in deionized water at pH 7, where a tannin extract is obtained and used as an alternative natural reagent. The PADs are fabricated by ink-jet printing using alkenyl ketene dimer (AKD) wax. The quantitation of Fe(III) is carried out using the guava leaf extract reagent prepared in acetate buffer at a ratio of 1:1. A color change to gray-purple is observed by the naked eye when a sample containing Fe(III) ions is dropped onto the PAD channel. Reflective absorption measurement is performed to create a standard curve. A linear calibration range is observed over the concentration range of 2-10 mg L⁻¹, and the detection limit for Fe(III) is 2 mg L⁻¹. In its optimum form, the PAD is stable for up to 30 days under oxygen-free conditions. The small dimensions, low volume requirement, and alternative natural reagent make the proposed PADs attractive for on-site environmental monitoring and analysis.

Keywords: green chemical analysis, guava leaf extract, lab on a chip, paper based analytical device

Procedia PDF Downloads 224
1607 The Problem of the Use of Learning Analytics in Distance Higher Education: An Analytical Study of the Open and Distance University System in Mexico

Authors: Ismene Ithai Bras-Ruiz

Abstract:

Learning Analytics (LA) is employed by universities not only as a tool but as a specialized field to support students and professors. However, not all academic programs apply LA with the same goal or use the same tools. In fact, LA comprises five main fields of study (academic analytics, action research, educational data mining, recommender systems, and personalized systems). These fields can help not only to inform academic authorities about the situation of a program, but also to detect at-risk students, professors with needs, or general problems. At the highest level, Artificial Intelligence techniques are applied to support learning practices. LA has adopted different techniques: statistics, ethnography, data visualization, machine learning, natural language processing, and data mining. Each academic program is expected to decide which field it wants to utilize on the basis of its academic interests, but also its capacities in terms of professors, administrators, systems, logistics, data analysts, and academic goals. The Open and Distance University System (SUAYED in Spanish) of the National Autonomous University of Mexico (UNAM) has been working for forty years as an alternative to traditional programs; one of its main supports has been the use of new information and communication technologies (ICT). Today, UNAM has one of the largest networked higher education programs, with twenty-six academic programs in different faculties. This means that every faculty works with heterogeneous populations and academic problems, and every program has developed its own learning analytics techniques to address academic issues. In this context, an investigation was carried out to establish the state of the application of LA across the academic programs in the different faculties. The premise of the study was that not all faculties have utilized advanced LA techniques, and it is probable that they do not know which field of study is closest to their program goals. Consequently, not all programs know about LA; this does not mean, however, that they do not work with LA in a veiled or less explicit sense. It is very important to know the degree of knowledge about LA for two reasons: 1) it allows an appreciation of the administration's work to improve the quality of teaching, and 2) it shows whether other LA techniques can be adopted. For this purpose, three instruments were designed to determine the experience and knowledge of LA. These were applied to ten faculty coordinators and their personnel; thirty members were consulted (academic secretary, systems manager or data analyst, and program coordinator). The final report showed that almost all programs work with basic statistical tools and techniques; this helps the administration only to know what is happening inside the academic program, but they are not ready to move up to the next level, that is, applying Artificial Intelligence or recommender systems to reach a personalized learning system. This situation is not related to knowledge of LA, but to the clarity of long-term goals.

Keywords: academic improvements, analytical techniques, learning analytics, personnel expertise

Procedia PDF Downloads 113
1606 Multisensory Science, Technology, Engineering and Mathematics Learning: Combined Hands-on and Virtual Science for Distance Learners of Food Chemistry

Authors: Paulomi Polly Burey, Mark Lynch

Abstract:

It has been shown that laboratory activities can help cement understanding of theoretical concepts, but it is difficult to deliver such activities to an online cohort, and issues such as occupational health and safety in the students' learning environment need to be considered. Chemistry, in particular, is one of the sciences where practical experience is beneficial for learning; however, typical university experiments may not be suitable for the learning environment of a distance learner. Food provides an ideal medium for demonstrating chemical concepts, and along with a few simple physical and virtual tools provided by educators, analytical chemistry can be experienced by distance learners. Food chemistry experiments were designed to be carried out in a home-based environment that 1) had sufficient scientific rigour and skill-building to reinforce theoretical concepts; 2) was safe for use at home by university students; and 3) had the potential to enhance student learning by linking simple hands-on laboratory activities with high-level virtual science. Two main components were developed: a home laboratory experiment component and a virtual laboratory component. For the home laboratory component, students were provided with laboratory kits, as well as a list of inexpensive supplementary chemical items that they could purchase from hardware stores and supermarkets. The experiments used were typical proximate analyses of food, as well as experiments focused on techniques such as spectrophotometry and chromatography. Written instructions for each experiment, coupled with video laboratory demonstrations, were used to train students in appropriate laboratory technique. Data that students collected in their home laboratory environment were collated across the class through shared documents, so that the group could carry out statistical analysis and have a full laboratory experience from their own homes. For the virtual laboratory component, students viewed a laboratory safety induction and were advised on good characteristics of a home laboratory space prior to carrying out their experiments. Following this activity, students observed laboratory demonstrations of the experimental series they would carry out in their learning environment. Finally, students were embedded in a virtual laboratory environment to experience complex chemical analyses with equipment that would be too costly and sensitive to be housed in their learning environment. To investigate the impact of the intervention, students were surveyed before and after the laboratory series to evaluate engagement and satisfaction with the course. Students were also assessed on their understanding of theoretical chemical concepts before and after the laboratory series to determine the impact on their learning. At the end of the intervention, focus groups were run to determine which aspects helped and hindered learning. It was found that the physical experiments helped students to understand laboratory technique, as well as methodology interpretation, particularly if they had not been in such a laboratory environment before. The virtual learning environment aided learning, as it could be utilized for longer than a typical physical laboratory class, allowing more time to understand the techniques.

Keywords: chemistry, food science, future pedagogy, STEM education

Procedia PDF Downloads 150
1605 Quantitative Analysis of Camera Setup for Optical Motion Capture Systems

Authors: J. T. Pitale, S. Ghassab, H. Ay, N. Berme

Abstract:

Biomechanics researchers commonly use marker-based optical motion capture (MoCap) systems to extract human body kinematic data. These systems use cameras to detect passive or active markers placed on the subject. The cameras use triangulation methods to form images of the markers, which typically requires each marker to be visible to at least two cameras simultaneously. Cameras in a conventional optical MoCap system are mounted at a distance from the subject, typically on walls, the ceiling, or fixed or adjustable frame structures. To accommodate space constraints, and as portable force measurement systems become popular, there is a need for smaller and smaller capture volumes. When the efficacy of a MoCap system is investigated, it is important to consider the tradeoff among camera distance from the subject, pixel density, and field of view (FOV). If cameras are mounted relatively close to a subject, the area corresponding to each pixel is reduced, increasing the image resolution; however, the cross section of the capture volume also decreases, reducing the visible area. Due to this reduction, additional cameras may be required in such applications. On the other hand, mounting cameras relatively far from the subject increases the visible area but reduces the image quality. The goal of this study was to develop a quantitative methodology to investigate marker occlusions and optimize camera placement for a given capture volume and set of subject postures using three-dimensional computer-aided design (CAD) tools. We modeled a 4.9 m x 3.7 m x 2.4 m (L x W x H) MoCap volume and designed a mounting structure for the cameras using SOLIDWORKS (Dassault Systèmes, MA, USA). The FOV was used to generate the capture volume for each camera placed on the structure. A human body model with configurable posture was placed at the center of the capture volume in the CAD environment. We studied three postures: initial contact, mid-stance, and early swing. The human body CAD model was adjusted for each posture based on the range of joint angles. Markers were attached to the model to enable a full-body capture. The cameras were placed around the capture volume at a maximum distance of 2.7 m from the subject. We used the Camera View feature in SOLIDWORKS to generate images of the subject as seen by each camera, and the number of markers visible to each camera was tabulated. The approach presented in this study provides a quantitative method to investigate the efficacy and efficiency of a MoCap camera setup. It enables optimization of a camera setup by adjusting the position and orientation of cameras in the CAD environment and quantifying marker visibility. It is also possible to compare different camera setup options on the same quantitative basis. The flexibility of the CAD environment enables accurate representation of the capture volume, including any objects that may cause obstructions between the subject and the cameras. With this approach, it is possible to compare different camera placement options to each other, as well as to optimize a given camera setup based on quantitative results.
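
The resolution-versus-coverage tradeoff described above follows from the pinhole camera model: the patch of space imaged by one pixel grows linearly with camera distance while the FOV cone widens. A small sketch under assumed (hypothetical) sensor and lens values:

```python
import math

def pixel_footprint_mm(distance_m, pixel_pitch_um=4.8, focal_mm=8.0):
    """Side length (mm) of the patch one pixel covers at a given distance,
    from similar triangles in the pinhole camera model."""
    return distance_m * 1000 * (pixel_pitch_um / 1000) / focal_mm

def horizontal_fov_deg(sensor_width_mm=6.14, focal_mm=8.0):
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_mm)))

for d in (1.5, 2.7, 4.0):
    print(f"{d} m: {pixel_footprint_mm(d):.2f} mm/pixel, "
          f"HFOV {horizontal_fov_deg():.1f} deg")
```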

Keywords: motion capture, cameras, biomechanics, gait analysis

Procedia PDF Downloads 299
1604 Detection of Total Aflatoxin in Flour of Wheat and Maize Samples in Albania Using ELISA

Authors: Aferdita Dinaku, Jonida Canaj

Abstract:

Aflatoxins are potentially toxic metabolites produced by certain kinds of fungi (molds) found naturally all over the world; they can contaminate food crops and pose a serious health threat to humans through mutagenic and carcinogenic effects. Fourteen or more types of aflatoxin occur in nature. In the Albanian diet, cereals (especially wheat and corn) are common ingredients of some traditional meals. This study aimed to investigate the presence of aflatoxins in the flour of wheat and maize consumed in Albania's markets. The samples were collected randomly from different markets in Albania, and total aflatoxin was detected by enzyme-linked immunosorbent assay (ELISA), measured at 450 nm. The concentrations of total aflatoxins ranged between 0.05 and 1.09 ppb. The screened mycotoxin levels in the samples were below the maximum permissible limit of European Commission Regulation No 1881/2006 (4 μg/kg). The linearity of the calibration curves was good for total aflatoxins (B1, B2, G1, G2, M1) (R² = 0.99) in the concentration range 0.005-4.05 ppb. The samples were analyzed in two replicate measurements, and the standard deviation was calculated for each sample. The results show that the flour samples are safe, but performing such tests remains necessary.
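
ELISA standard curves are commonly fitted with a four-parameter logistic (4PL) over the standards' concentrations rather than a straight line; a minimal curve_fit sketch, with absorbance values invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL: a = max response, d = min response, c = inflection (EC50),
    b = slope. A standard model for competitive ELISA calibration."""
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([0.005, 0.025, 0.15, 0.75, 2.0, 4.05])    # ppb standards
od450 = np.array([1.92, 1.75, 1.31, 0.74, 0.41, 0.22])    # hypothetical A450

popt, _ = curve_fit(four_pl, conc, od450, p0=[2.0, 1.0, 0.5, 0.1])
a, b, c, d = popt

# Invert the fitted curve to read a sample concentration from absorbance
sample_od = 0.95
sample_conc = c * ((a - d) / (sample_od - d) - 1.0) ** (1.0 / b)
print(f"estimated total aflatoxin: {sample_conc:.3f} ppb")
```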

Keywords: aflatoxins, ELISA technique, food contamination, flour

Procedia PDF Downloads 136
1603 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients

Authors: Ainura Tursunalieva, Irene Hudson

Abstract:

Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with a wide variety and severity of illnesses. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency. Thus, it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient; ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III and the Simplified Acute Physiology Score II are ICU scoring systems frequently used for assessing the severity of acute illness. These scoring systems collect multiple risk factors for each patient, including physiological measurements, then render the assessment outcomes of the individual risk factors into a single numerical value; a higher score indicates a more severe patient condition. Furthermore, the Mortality Probability Model II uses logistic regression based on independent risk factors to predict a patient's probability of mortality. An important overlooked limitation of SAPS II and MPM II is that they do not, to date, include interaction terms between a patient's vital signs. This is a prominent oversight, as there is likely an interplay among vital signs: the co-existence of certain conditions may pose a greater health risk than when these conditions exist independently. One barrier to including such interaction terms in predictive models is the dimensionality issue, as it becomes difficult to use variable selection. We propose an innovative scoring system which takes into account the dependence structure among a patient's vital signs, such as systolic and diastolic blood pressures, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among normally distributed and skewed variables, as some of the vital sign distributions are skewed. The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated for the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient's probability of mortality. The new copula-based approach will accommodate not only a patient's trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time-efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) 37 ICU patients' agitation-sedation profiles collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate discriminative ability (the area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values and establish visualization of the copulas and high-dimensional regions of risk interrelating two or three vital signs in so-called higher-dimensional ROCs.
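
One way to extract a dependence parameter of the kind described above is to map each vital sign to the unit interval through its empirical ranks, transform to normal scores, and estimate the correlation of the scores, i.e., fit a Gaussian copula. A minimal sketch on invented vitals (the real study's marginals and copula family may differ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 250                                            # e.g., one ICU cohort
sbp = rng.normal(120, 20, n)                       # systolic blood pressure
hr = 80 - 0.2 * (sbp - 120) + rng.gamma(2, 4, n)   # skewed, related to SBP

def normal_scores(x):
    """Empirical CDF ranks -> uniform(0,1) -> standard normal scores."""
    ranks = stats.rankdata(x) / (len(x) + 1)
    return stats.norm.ppf(ranks)

z = np.column_stack([normal_scores(sbp), normal_scores(hr)])
rho = np.corrcoef(z, rowvar=False)[0, 1]   # Gaussian copula parameter
print(f"copula dependence parameter: {rho:.3f}")
```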

Keywords: copula, intensive unit scoring system, ROC curves, vital sign dependence

Procedia PDF Downloads 136
1602 Multiscale Syntheses of Knee Collateral Ligament Stresses: Aggregate Mechanics as a Function of Molecular Properties

Authors: Raouf Mbarki, Fadi Al Khatib, Malek Adouni

Abstract:

Knee collateral ligaments play a significant role in restraining excessive frontal motion (varus/valgus rotation). In this investigation, a multiscale framework was developed based on the structural hierarchy of the collateral ligaments, from the bottom (the tropocollagen molecule) up to the fibre-reinforced structure. Experimental data from failure tensile tests were the principal driver of the developed model. Because of the large number of unknown parameters, the model was calibrated statistically using Bayesian calibration. The model was then scaled up to the real structure of the collateral ligaments and simulated under realistic boundary conditions. Predictions were successful in describing the observed transient response of the collateral ligaments during tensile tests under pre- and post-damage loading conditions. Maximum stresses and strengths were observed near the femoral insertions, a result in good agreement with experimental investigations. Also, for the first time, damage initiation and propagation were documented with this model as a function of the cross-link density between tropocollagen molecules.
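
Bayesian calibration of the kind invoked above treats the unknown material parameters as a posterior distribution conditioned on the tensile-test data. A bare-bones random-walk Metropolis sketch for a single stiffness-like parameter; the model, data, and priors are illustrative placeholders, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(7)

strain = np.linspace(0.0, 0.08, 20)
true_k = 350.0                                   # hypothetical stiffness
stress_obs = true_k * strain + rng.normal(0, 1.0, strain.size)

def log_post(k, sigma=1.0):
    if k <= 0:                                   # flat prior on k > 0
        return -np.inf
    resid = stress_obs - k * strain
    return -0.5 * np.sum((resid / sigma) ** 2)   # Gaussian likelihood

k, lp = 100.0, log_post(100.0)
samples = []
for _ in range(20000):
    k_new = k + rng.normal(0, 10.0)              # random-walk proposal
    lp_new = log_post(k_new)
    if np.log(rng.uniform()) < lp_new - lp:      # Metropolis acceptance
        k, lp = k_new, lp_new
    samples.append(k)

post = np.array(samples[5000:])                  # discard burn-in
print(f"posterior mean k = {post.mean():.1f}, sd = {post.std():.1f}")
```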

Keywords: multiscale model, tropocollagen, fibrils, ligaments

Procedia PDF Downloads 142
1601 Effect of Installation Method on the Ratio of Tensile to Compressive Shaft Capacity of Piles in Dense Sand

Authors: A. C. Galvis-Castro, R. D. Tovar, R. Salgado, M. Prezzi

Abstract:

It is generally accepted that the shaft capacity of piles in sand is lower for tensile loading than for compressive loading. So far, very little attention has been paid to the influence of the installation method on the tensile-to-compressive shaft capacity ratio. The objective of this paper is to analyze the effect of the installation method on the tensile-to-compressive shaft capacity of piles in dense sand, as observed in tests on half-circular model piles in a half-circular calibration chamber with digital image correlation (DIC) capability. Model piles are either monotonically jacked, jacked with multiple strokes, or pre-installed into the dense sand samples. Digital images of the model pile and sand are taken during both the installation and loading stages of each test and processed using the DIC technique to obtain the soil displacement and strain fields. The study provides key insights into the mobilization of shaft resistance in tensile and compressive loading for both displacement and non-displacement piles.

Keywords: digital image correlation, piles, sand, shaft resistance

Procedia PDF Downloads 252
1600 Chi Square Confirmation of Autonomic Functions Percentile Norms of Indian Sportspersons Withdrawn from Competitive Games and Sports

Authors: Pawan Kumar, Dhananjoy Shaw, Manoj Kumar Rathi

Abstract:

The purposes of the study were to compare (a) frequencies among the four quartiles of percentile norms of autonomic variables for power events and (b) frequencies among the four quartiles of percentile norms of autonomic variables for aerobic events of Indian sportspersons withdrawn from competitive games and sports, with regard to the number of samples falling in each quartile. The study was conducted on 430 males 30 to 35 years of age. Based on the nature of the game or sport, the retired sportspersons were classified into power events (throwers, judo players, wrestlers, short-distance swimmers, cricket fast bowlers, and power lifters) and aerobic events (long-distance runners, long-distance swimmers, water polo players). Data were collected using ECG polygraphs, then processed and extracted using frequency-domain and time-domain analysis. The collected data were computed as frequencies and percentages in each quartile, and the frequencies were compared with chi-square analysis. The findings for (a) power events suggest that the frequency distributions in the four quartiles Q1, Q2, Q3, and Q4 differ significantly at the .05 level for the variables SDNN, total power (absolute power), HF (absolute power), LF (normalized power), HF (normalized power), LF/HF ratio, deep breathing test, expiratory respiratory ratio, Valsalva manoeuvre, hand grip test, cold pressor test, and lying-to-standing test, but do not differ significantly at the .05 level for SDSD, RMSSD, SDANN, NN50 count, pNN50 count, LF (absolute power), and 30:15 ratio. For (b) aerobic events, the frequency distributions in the four quartiles differ significantly at the .05 level for SDNN, LF (normalized power), HF (normalized power), LF/HF ratio, deep breathing test, expiratory respiratory ratio, hand grip test, cold pressor test, lying-to-standing test, and 30:15 ratio, but do not differ significantly for SDSD, RMSSD, SDANN, NN50 count, pNN50 count, total power (absolute power), LF (absolute power), HF (absolute power), and Valsalva manoeuvre. The study concluded that the quartile frequencies of Indian retired sportspersons from power events and aerobic events differ across the four quartiles for the selected autonomic functions; hence the developed percentile norms are not homogeneously distributed across the percentile scale, which shifts the percentage distribution towards a normal distribution.
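
The quartile comparison described is a one-sample chi-square goodness-of-fit test: under homogeneity, each quartile should hold 25% of the sample. A minimal sketch with invented counts:

```python
from scipy.stats import chisquare

observed = [142, 96, 88, 104]          # hypothetical counts per quartile
expected = [sum(observed) / 4] * 4     # homogeneity: 25% in each quartile

stat, p = chisquare(observed, expected)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
# p < .05 -> the variable's norms are not evenly spread across quartiles
```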

Keywords: power, aerobic, absolute power, normalized power

Procedia PDF Downloads 342
1599 The Ductile Fracture of Armor Steel Targets Subjected to Ballistic Impact and Perforation: Calibration of Four Damage Criteria

Authors: Imen Asma Mbarek, Alexis Rusinek, Etienne Petit, Guy Sutter, Gautier List

Abstract:

Over the past two decades, the automotive, aerospace, and defense industries have been paying increasing attention to finite element (FE) numerical simulations of the fracture process of their structures. Thanks to numerical simulations, it is nowadays possible to analyze safely, and at reduced cost, several problems involving costly and dangerous extreme loadings, such as blast or ballistic impact. The present paper is concerned with ballistic impact and perforation problems involving ductile fracture of thin armor steel targets. The target fracture process usually depends on various parameters: the projectile nose shape, the target thickness and its mechanical properties, as well as the impact conditions (friction, oblique/normal impact, etc.). In this work, the investigations concern the normal impact of a conical-head projectile on thin armor steel targets. The main aim is to establish a comparative study of four fracture criteria that are commonly used in fracture-process simulations of structures subjected to extreme loadings such as ballistic impact and perforation. Damage initiation usually results from a complex physical process that occurs at the micromechanical scale. On the macro scale, and according to the following fracture models, the variables on which fracture depends are mainly the stress triaxiality η, the strain rate, the temperature T, and eventually the Lode angle parameter θ. The four failure criteria are: the critical strain-to-failure model, the Johnson-Cook model, the Wierzbicki model, and the modified Hosford-Coulomb (MHC) model. SEM observations of the fracture surfaces of tension specimens and of armor steel targets impacted at low and high incident velocities show that the fracture is ductile. The failure mode of the targets is petalling with crack propagation, and the fracture surfaces are covered with micro-cavities. The parameters of each ductile fracture model were identified for three armor steels, and the applicability of each criterion was evaluated using experimental investigations coupled with numerical simulations. Two loading paths were investigated in this study, under a wide range of strain rates: quasi-static and intermediate uniaxial tension, and quasi-static and dynamic double shear testing, covering various values of the stress triaxiality η and of the Lode angle parameter θ. All experiments were conducted on three different armor steel specimens at quasi-static strain rates ranging from 10⁻⁴ to 10⁻¹ s⁻¹ and at three temperatures ranging from 297 K to 500 K, allowing the influence of temperature on the fracture process to be drawn out. Intermediate tension testing was coupled with dynamic double shear experiments conducted on the Hopkinson tube device, revealing the effect of high strain rate on damage evolution and crack propagation. The aforementioned fracture criteria are implemented into the FE code ABAQUS via a VUMAT subroutine and coupled with suitable constitutive relations, allowing reliable simulation of ballistic impact problems. The calibration of the four damage criteria, as well as a concise evaluation of the applicability of each criterion, is detailed in this work.
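
Of the four criteria, the Johnson-Cook model is the most widely quoted; its usual published form expresses the equivalent fracture strain through exactly the variables listed above (standard form from the literature, not reproduced from the paper):

$$\varepsilon_f = \left[D_1 + D_2\,e^{D_3\eta}\right]\left[1 + D_4\ln\dot{\varepsilon}^{*}\right]\left[1 + D_5\,T^{*}\right],$$

where $\eta$ is the stress triaxiality, $\dot{\varepsilon}^{*}$ the dimensionless plastic strain rate, $T^{*}$ the homologous temperature, and $D_1,\ldots,D_5$ the constants to be calibrated; failure is predicted when the accumulated damage $D = \sum \Delta\varepsilon_p/\varepsilon_f$ reaches unity.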

Keywords: armor steels, ballistic impact, damage criteria, ductile fracture, SEM

Procedia PDF Downloads 297
1598 Finite Element-Based Stability Analysis of Roadside Settlement Slopes from Barpak to Yamagaun through Laprak Village of Gorkha, an Epicentral Location after the 7.8 Mw 2015 Barpak, Gorkha, Nepal Earthquake

Authors: N. P. Bhandary, R. C. Tiwari, R. Yatabe

Abstract:

This research employs the finite element method to evaluate the stability of roadside settlement slopes from Barpak to Yamagaun through Laprak village of Gorkha, Nepal after the 7.8 Mw 2015 Barpak, Gorkha, Nepal earthquake. It covers three major villages of Gorkha, i.e., Barpak, Laprak, and Yamagaun, that were devastated by the 2015 Gorkha earthquake. The road-head distances from Barpak to Laprak and from Laprak to Yamagaun are about 14 and 29 km, respectively. The epicenters of the 7.8-magnitude main shock and the 6.6-magnitude aftershock were about 7 and 11 km southeast of Barpak village, respectively, closer to Laprak and Yamagaun. It is also believed that the epicenter of the main shock was not in Barpak village, as reported until now, but somewhere near Yamagaun village; the destruction experienced in Yamagaun during the earthquake was much greater than in Barpak. In this context, we have carried out a detailed study to investigate the stability of the Yamagaun settlement slope as a case study, where ground fissures, ground settlement, multiple cracks, and toe failures are most severe. In this regard, the stability issues of the existing settlements and the proposed road alignment on the Yamagaun village slope, which is surrounded by many newly activated landslides, are addressed. Given the importance of this issue, a field survey was carried out to understand the behavior of the ground fissures and the multiple failure characteristics of the slopes. The results suggest that the Yamagaun slope along Profiles 2-2, 3-3, and 4-4 is not safe enough for infrastructure development even under normal soil slope conditions for material models 2, 3, and 4; however, the slope appears quite safe at Profile 1-1 for all four material models. The results also indicate that the first three profiles are marginally safe for material models 2, 3, and 4, respectively, while Profile 4-4 is not safe for any of the four material models. Thus, Profile 4-4 needs special care to make the slope stable.
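
As a hedged illustration of the shear strength reduction idea that underlies FE slope stability analysis (not the authors' model, profiles, or soil data), the Python sketch below bisects for the reduction factor at which a simple infinite-slope proxy reaches limiting equilibrium; the cohesion, friction angle, unit weight, and geometry are hypothetical.

import math

def infinite_slope_fs(c, phi_deg, gamma, depth, beta_deg, u=0.0):
    """Factor of safety of an infinite slope with Mohr-Coulomb strength."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    tau = gamma * depth * math.sin(beta) * math.cos(beta)     # driving shear stress
    sigma_n = gamma * depth * math.cos(beta) ** 2 - u         # effective normal stress
    return (c + sigma_n * math.tan(phi)) / tau

def strength_reduction_factor(c, phi_deg, gamma, depth, beta_deg, tol=1e-4):
    """Bisect for the SRF at which FS = 1, i.e. the slope with reduced
    strength parameters (c/SRF, tan(phi)/SRF) just reaches failure."""
    lo, hi = 0.01, 10.0
    while hi - lo > tol:
        srf = 0.5 * (lo + hi)
        phi_red = math.degrees(math.atan(math.tan(math.radians(phi_deg)) / srf))
        fs = infinite_slope_fs(c / srf, phi_red, gamma, depth, beta_deg)
        if fs > 1.0:
            lo = srf      # strength can be reduced further
        else:
            hi = srf
    return 0.5 * (lo + hi)

# Hypothetical soil: c' = 12 kPa, phi' = 28 deg, gamma = 19 kN/m3, 3 m deep, 35 deg slope
print(strength_reduction_factor(12.0, 28.0, 19.0, 3.0, 35.0))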

Keywords: earthquake, finite element method, landslide, stability

Procedia PDF Downloads 330
1597 Presenting a Mathematical Model to Determine Retention in Watersheds

Authors: S. Shamohammadi, L. Razavi

Abstract:

Based on the principal concepts of the SCS-CN model, this paper presents a new mathematical model for the computation of retention potential (S). In the mathematical model, the precipitation-runoff concepts of the SCS-CN model are precisely represented in mathematical form, and new concepts, called "maximum retention" and "total retention", are introduced, so that potential retention capacity, maximum retention, and total retention are separated from one another. In the proposed model, actual retention (F), maximum actual retention (Fmax), total retention (S), maximum retention (Smax), and potential retention (Sp) are clearly defined for the first time, such that Sp is not a variable but a function of the morphological characteristics of the watershed. Indeed, based on the mathematical relation of the conceptual curve of the SCS-CN model, the proposed model provides a new method for the computation of actual retention in a watershed, from which runoff is simply determined. In the corresponding relations, in addition to precipitation (P), initial retention (Ia), cumulative actual retention (F), total retention (S), runoff (Q), antecedent moisture (M), and potential retention (Sp), we introduce Fmax and Fmin, referring to maximum and minimum actual retention, respectively. In addition, ksh is a coefficient that depends on the morphological characteristics of the watershed. Advantages of the modified version over the original model include better precision, higher performance, easier calibration, and faster computation.
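
The abstract does not reproduce the modified relations themselves; for context, here is a minimal Python sketch of the standard SCS-CN rainfall-runoff relation on which the model builds, with the curve number and storm depth as illustrative inputs.

def scs_cn_runoff(P, CN, lam=0.2):
    """Standard SCS-CN direct runoff Q (mm) for storm rainfall P (mm).
    S is the potential maximum retention and Ia = lam * S the initial
    abstraction (lam = 0.2 is the conventional value)."""
    S = 25400.0 / CN - 254.0     # potential retention, mm (metric form)
    Ia = lam * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

def scs_cn_actual_retention(P, CN, lam=0.2):
    """Actual retention F = P - Ia - Q implied by the standard model."""
    S = 25400.0 / CN - 254.0
    return max(P - lam * S, 0.0) - scs_cn_runoff(P, CN, lam)

# Example: an 80 mm storm on a watershed with CN = 75
print(round(scs_cn_runoff(80.0, 75), 1), "mm of runoff")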

Keywords: model, mathematical, retention, watershed, SCS

Procedia PDF Downloads 437
1596 Status Report of the GERDA Phase II Startup

Authors: Valerio D’Andrea

Abstract:

The GERmanium Detector Array (GERDA) experiment, located at the Laboratori Nazionali del Gran Sasso (LNGS) of INFN, searches for the neutrinoless double beta (0νββ) decay of 76Ge. Germanium diodes enriched to ~86% in the double beta emitter 76Ge (enrGe) serve as both source and detectors of the 0νββ decay. Neutrinoless double beta decay is considered a powerful probe to address still-open issues in the neutrino sector of the Standard Model of particle physics and beyond. Since 2013, just after the completion of the first part of its experimental program (Phase I), the GERDA setup has been upgraded to perform the next step in the 0νββ searches (Phase II). Phase II aims to reach a sensitivity to the 0νββ decay half-life larger than 10^26 yr in about 3 years of physics data taking, by exposing a detector mass of about 35 kg of enrGe with a background index of about 10^-3 cts/(keV·kg·yr). One of the main new implementations is the liquid argon scintillation light read-out, used to veto events that deposit their energy only partially in the Ge and partly in the surrounding LAr. In this paper, the expected goals of GERDA Phase II, the upgrade work, and a few selected features from the 2015 commissioning and 2016 calibration runs are presented. The main Phase I achievements are also reviewed.
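
To illustrate how such a half-life sensitivity scales with exposure and background, here is a back-of-the-envelope counting estimate in Python; the signal efficiency, analysis window, and exclusion level are hypothetical placeholders, and this is not the collaboration's sensitivity calculation.

import math

N_A = 6.022e23          # Avogadro's number, 1/mol
M_76 = 75.92            # molar mass of 76Ge, g/mol

def halflife_sensitivity(exposure_kg_yr, enrichment=0.86, efficiency=0.6,
                         bkg_index=1e-3, window_keV=1.0, n_sigma=1.645):
    """Rough 0vbb half-life sensitivity (yr) from a counting estimate:
    expected background B = BI * exposure * window; the excludable signal
    is ~ n_sigma * sqrt(B) counts, or ~2.3 counts if background-free."""
    exposure_g_yr = exposure_kg_yr * 1000.0
    n_atoms_yr = exposure_g_yr * enrichment / M_76 * N_A   # 76Ge atom-years
    B = bkg_index * exposure_kg_yr * window_keV            # expected bkg counts
    n_excl = max(n_sigma * math.sqrt(B), 2.3)              # excludable signal counts
    return math.log(2) * n_atoms_yr * efficiency / n_excl

# Phase II-like exposure: 35 kg for 3 yr gives a sensitivity of order 10^26 yr
print(f"{halflife_sensitivity(35 * 3):.2e} yr")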

Keywords: GERDA, double beta decay, LNGS, germanium

Procedia PDF Downloads 356
1595 Use of Fabric Phase Sorptive Extraction with Gas Chromatography-Mass Spectrometry for the Determination of Organochlorine Pesticides in Various Aqueous and Juice Samples

Authors: Ramandeep Kaur, Ashok Kumar Malik

Abstract:

Fabric phase sorptive extraction (FPSE) combined with gas chromatography-mass spectrometry (GC-MS) has been developed for the determination of nineteen organochlorine pesticides (OCPs) in various aqueous samples. The method combines the features of sol-gel-derived microextraction sorbents with the rich surface chemistry of a cellulose fabric substrate, which can extract analytes directly from complex sample matrices and greatly streamline operation with reduced pretreatment time. Some vital parameters, such as the type and volume of the extraction solvent and the extraction time, were examined and optimized. Calibration curves were obtained in the concentration range 0.5-500 ng/mL. Under the optimum conditions, the limits of detection (LODs) were in the range 0.033 ng/mL to 0.136 ng/mL. The relative standard deviations (RSDs) for the extraction of 10 ng/mL of OCPs were less than 10%. The developed method has been applied to the quantification of these compounds in aqueous and fruit juice samples. The results obtained show the present method to be rapid and feasible for the determination of organochlorine pesticides in aqueous samples.
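
As a sketch of how detection limits of this kind are commonly derived from a linear calibration fit, consider the Python example below; the data points and the 3.3σ/slope convention are illustrative assumptions, not the paper's actual calibration data or LOD procedure.

import numpy as np

# Hypothetical calibration data for one OCP: spiked level (ng/mL) vs. peak area
conc = np.array([0.5, 1, 5, 10, 50, 100, 500], dtype=float)
area = np.array([210, 430, 2080, 4150, 20700, 41600, 207000], dtype=float)

slope, intercept = np.polyfit(conc, area, 1)        # linear calibration fit
resid = area - (slope * conc + intercept)
s_y = np.sqrt(np.sum(resid ** 2) / (len(conc) - 2))  # residual standard deviation

lod = 3.3 * s_y / slope    # ICH-style limit of detection
loq = 10.0 * s_y / slope   # limit of quantification
print(f"LOD = {lod:.3f} ng/mL, LOQ = {loq:.3f} ng/mL")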

Keywords: fabric phase sorptive extraction, gas chromatography-mass spectrometry, organochlorine pesticides, sample pretreatment

Procedia PDF Downloads 466
1594 Generic Early Warning Signals for Program Student Withdrawals: A Complexity Perspective Based on Critical Transitions and Fractals

Authors: Sami Houry

Abstract:

Complex systems exhibit universal characteristics as they near a tipping point. Among them are common generic early warning signals that precede critical transitions. These signals include: critical slowing down, in which the rate of recovery from perturbations decreases over time; an increase in the variance of the state variable; an increase in the skewness of the state variable; an increase in the autocorrelations of the state variable; flickering between different states; and an increase in spatial correlations over time. The presence of these signals has management implications, as identifying them near the tipping point could allow management to identify intervention points. Despite applications of generic early warning signals in various scientific fields, such as fisheries, ecology, and finance, a review of the literature did not identify any applications addressing the program student withdrawal problem at undergraduate distance universities. This area could benefit from the application of generic early warning signals, as the program withdrawal rate among distance students is higher than at conventional face-to-face universities. This research assessed the generic early warning signals through an intensive case study of undergraduate program student withdrawal at a Canadian distance university. The university is non-cohort based due to its system of continuous course enrollment, in which students can enroll in a course at the beginning of every month. The signals were assessed by comparing the incidence of generic early warning signals among students who withdrew or simply became inactive in their undergraduate program of study (the true positives) with the incidence among graduates (the false positives), using significance testing. The findings showed support for the signal pertaining to the rise in flickering, represented by an increase in a student's non-pass rates prior to withdrawing from a program; moderate support for the signal of critical slowing down, reflected in an increase in the time a student spends in a course; and moderate support for the signals of increased autocorrelation and increased variance in the grade variable. The findings did not support the signal of increased skewness of the grade variable. The research also proposes a new signal based on the fractal-like characteristic of student behavior. It further sought to extend knowledge by investigating whether the emergence of a program withdrawal status is self-similar, or fractal-like, at multiple levels of observation, specifically the program level and the course level; in other words, whether the act of withdrawal at the program level is also present at the course level. The findings moderately supported self-similarity as a potential signal. Overall, the assessment suggests that the signals, with the exception of the increase in skewness, could be utilized as a predictive management tool, with the fractal-like characteristic of withdrawal potentially serving as one more signal in addressing the student program withdrawal problem.
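
A minimal sketch of how such rolling-window indicators might be computed over a student-level time series follows; the synthetic grade data, window length, and indicator choices are illustrative assumptions, not the study's actual data or procedure.

import numpy as np
from scipy.stats import skew

def ews_indicators(x, window=20):
    """Rolling-window generic early warning signals for a 1-D series x
    (e.g. a student's course grades over time): variance, skewness, and
    lag-1 autocorrelation, each computed over a sliding window."""
    n = len(x) - window + 1
    var, skw, ac1 = np.empty(n), np.empty(n), np.empty(n)
    for i in range(n):
        w = x[i:i + window]
        var[i] = np.var(w)
        skw[i] = skew(w)
        ac1[i] = np.corrcoef(w[:-1], w[1:])[0, 1]   # lag-1 autocorrelation
    return var, skw, ac1

# Synthetic example: grades drifting downward with growing noise
rng = np.random.default_rng(0)
grades = 80 - 0.3 * np.arange(60) + rng.normal(0, 1 + 0.05 * np.arange(60))
v, s, a = ews_indicators(grades, window=20)
print(v[0], v[-1])   # variance typically rises as the "tipping point" nears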

Keywords: critical transitions, fractals, generic early warning signals, program student withdrawal

Procedia PDF Downloads 169