Search results for: distance calibration
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2401

661 Behavior of Epoxy Insulator with Surface Defect under HVDC Stress

Authors: Qingying Liu, S. Liu, L. Hao, B. Zhang, J. D. Yan

Abstract:

HVDC technology is becoming increasingly popular due to its simplicity in topology and lower power loss over long-distance power transmission, in comparison with HVAC technology. However, the long-term dielectric behavior of insulators under HVDC stress is completely different from that under HVAC stress as a result of charge accumulation in a constant electric field. Insulators used in practical systems are never perfect in their structural condition. Over time, shallow cracks may develop on their surface. The presence of defects can lead to a drastic change in their dielectric behavior and thus increase the probability of surface flashover. In this contribution, experimental investigations have been carried out on the charge accumulation phenomenon on the surface of a rod insulator made of epoxy that is placed between two disk-shaped electrodes at different voltage levels and in different gases (SF6, CO2 and N2). Results obtained include the two-dimensional electrostatic potential distribution along the insulator surface after removal of the power source following a pre-defined period of application. The probe was carefully calibrated before each test. Results show that the surface charge distribution near the two disk-shaped electrodes is not uniform in the circumferential direction, possibly due to imperfect electrical connections between the embedded conductor in the insulator and the disk-shaped electrodes. The axial length of this non-uniform region is experimentally determined, which provides useful information for shielding design. A charge transport model is also used to explain the formation of the long-term electrostatic potential distribution under a constant applied voltage.

Keywords: HVDC, power systems, dielectric behavior, insulation, charge accumulation

Procedia PDF Downloads 211
660 Modeling Sediment Transports under Extreme Storm Situation along Persian Gulf North Coast

Authors: Majid Samiee Zenoozian

Abstract:

The Persian Gulf is a marginal sea with an average depth of 35 m and a maximum depth of about 100 m near its narrow entrance. Its elongated bathymetric axis separates two main geological provinces, the stable Arabian Foreland and the unstable Iranian Fold Belt, which are reflected in the contrasting coastal and bathymetric morphologies of Arabia and Iran. Sediment samples were collected from 72 offshore stations during an oceanographic cruise in the winter of 2018. During the observation period, several storms and river discharge events occurred, including the largest flood on record since 1982. Suspended-sediment concentration at all three sites varied in response to both wave resuspension and advection of river-derived sediments. We used hydrodynamic models to estimate and compare the wave height and inundation distance required to transport the rocks inland. Our results indicate that no known or plausible storm event on the Makran coast is capable of detaching and transporting the boulders. The fluid mud is consequently transported seaward by gravitational forcing. The measured sediment concentration and velocity profiles on the shelf provide strong evidence to support this assumption. The sediment model is coupled with a 3D hydrodynamic module in the Environmental Fluid Dynamics Code (EFDC) model, which provides information on estuarine circulation and salinity transport under typical temperature conditions. Three-dimensional sediment transport results from the model simulations indicate active sediment resuspension and transport near zones of highly productive oyster beds.

Keywords: sediment transport, storm, coast, fluid dynamics

Procedia PDF Downloads 95
659 Molecular Dynamics Studies of Main Factors Affecting Mass Transport Phenomena on Cathode of Polymer Electrolyte Membrane Fuel Cell

Authors: Jingjing Huang, Nengwei Li, Guanghua Wei, Jiabin You, Chao Wang, Junliang Zhang

Abstract:

Mass transport is one of the key issues in the study of proton exchange membrane fuel cells (PEMFCs). In this work, molecular dynamics (MD) simulation is applied to analyze the influence of Nafion ionomer distribution and Pt nano-particle size on the mass transport process in the cathode, in which all types of molecules situated in the cathode are considered. A reasonable and effective MD simulation process is provided, and models were built and compared using both Materials Studio and LAMMPS. Calculation of the diffusion coefficients indicates that a larger quantity of Nafion, as well as a higher equivalent weight (EW) value, will hinder the transport of oxygen. In addition, medium-sized Pt nano-particles (1.5~2 nm) are more advantageous in terms of proton transport compared with other particle sizes (0.94~2.55 nm) when the center-to-center distance between two Pt nano-particles is around 5 nm. Mass transport channels are found to form between the hydrophobic backbone and the hydrophilic side chains of the Nafion ionomer according to the radial distribution function (RDF) curves, and the morphology of these channels, affected by the Pt size, is believed to influence the transport of hydronium ions and, consequently, the performance of the PEMFC.
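The diffusion coefficients mentioned above are typically obtained from the mean-squared displacement (MSD) of a species via the Einstein relation, D = lim MSD(t)/(6t) in three dimensions. The exact post-processing used with Materials Studio and LAMMPS is not given in the abstract, so the following numpy sketch is only an illustration of that calculation (LAMMPS users would more often use its built-in `compute msd` to obtain the same quantity):

```python
import numpy as np

def diffusion_coefficient(positions, dt):
    """Estimate D from an unwrapped trajectory via the Einstein relation.

    positions : array of shape (n_frames, n_atoms, 3), unwrapped coordinates
    dt        : time between frames
    Returns D such that MSD(t) ~ 6*D*t at long times (3D).
    """
    disp = positions - positions[0]                # displacement from the first frame
    msd = (disp ** 2).sum(axis=2).mean(axis=1)     # average over atoms
    t = np.arange(len(msd)) * dt
    half = len(t) // 2                             # fit only the linear, long-time part
    slope, _ = np.polyfit(t[half:], msd[half:], 1)
    return slope / 6.0
```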

Keywords: cathode catalytic layer, mass transport, molecular dynamics, proton exchange membrane fuel cell

Procedia PDF Downloads 200
658 Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and Task Allocation for Exploration and Navigation in Unknown Environments

Authors: Lavanya Ratnabala, Robinroy Peter, E. Y. A. Charles

Abstract:

This research paper addresses the challenges of exploration and navigation in unknown environments from an evolutionary swarm robotics perspective. Path formation plays a crucial role in enabling cooperative swarm robots to accomplish these tasks. The paper presents a method called sub-goal-based path formation, which establishes a path between two different locations by exploiting visually connected sub-goals. Simulation experiments conducted in the Argos simulator demonstrate the successful formation of paths in the majority of trials. Furthermore, the paper tackles the problem of inter-robot collisions (traffic) among a large number of robots engaged in path formation, which negatively impacts the performance of the sub-goal-based method. To mitigate this issue, a task allocation strategy is proposed, leveraging local communication protocols and light signal-based communication. The strategy evaluates the distance between points and determines the required number of robots for the path formation task, reducing unwanted exploration and traffic congestion. The performance of the sub-goal-based path formation and task allocation strategy is evaluated by comparing path length, time, and resource reduction against the A* algorithm. The simulation experiments demonstrate promising results, showcasing the scalability, robustness, and fault tolerance characteristics of the proposed approach.
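As a rough illustration of the task-allocation idea (the exact allocation rule used in the paper is not spelled out in the abstract), the number of robots reserved for a path can be estimated from the straight-line distance between the two locations and the robots' visual connection range, leaving the rest of the swarm free to explore; the safety factor below is an assumed parameter:

```python
import math

def robots_needed(distance_m, visual_range_m, safety_factor=1.2):
    """Estimate how many robots must hold sub-goal positions so that
    consecutive sub-goals stay visually connected along the path.
    safety_factor accounts for detours around obstacles (assumed value)."""
    effective_length = distance_m * safety_factor
    # one robot per visual-range segment, minus the two endpoint locations
    return max(0, math.ceil(effective_length / visual_range_m) - 1)

def allocate(swarm_size, distance_m, visual_range_m):
    """Split the swarm into path-forming robots and free explorers."""
    n_path = min(swarm_size, robots_needed(distance_m, visual_range_m))
    return {"path_formation": n_path, "exploration": swarm_size - n_path}

print(allocate(swarm_size=30, distance_m=12.0, visual_range_m=1.5))
```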

Keywords: swarm, path formation, task allocation, Argos, exploration, navigation, sub-goal

Procedia PDF Downloads 29
657 The Advancement of Environmental Impact Assessment for 5th Transmission Natural Gas Pipeline Project in Thailand

Authors: Penrug Pengsombut, Worawut Hamarn, Teerawuth Suwannasri, Kittiphong Songrukkiat, Kanatip Ratanachoo

Abstract:

PTT Public Company Limited, or simply PTT, has played an important role in strengthening the national energy security of the Kingdom of Thailand by transporting natural gas to customers in the power, industrial and commercial sectors since 1981. PTT has been constructing and operating a natural gas pipeline system of over 4,500 km of network length, both onshore and offshore, laid through different area classifications, i.e., marine, forest, agricultural, rural, urban, and city areas. During the project development phase, an Environmental Impact Assessment (EIA) is conducted and submitted to the Office of Natural Resources and Environmental Policy and Planning (ONEP) for approval before project construction commences. Knowledge and experience gained from EIAs of past projects have been used to further advance the EIA study process for the new 5th Transmission Natural Gas Pipeline Project (5TP), of approximately 415 kilometers in length. The preferred pipeline route is selected and justified by SMARTi map, an advanced digital one-map platform consisting of multiple layers of geographic and environmental information. Sensitive area impact focus (SAIF) is a practicable impact assessment methodology appropriate for a long-distance infrastructure project such as 5TP. An environmental modeling simulation is adopted into the SAIF methodology so that impacts are quantified in all sensitive areas, whereas other areas along the pipeline right-of-way are typically assessed using a representative impact. The resulting time and cost reduction is beneficial to the project by enabling an early start.

Keywords: environmental impact assessment, EIA, natural gas pipeline, sensitive area impact focus, SAIF

Procedia PDF Downloads 381
656 Uniqueness and Repeatability Analysis for Slim Tube Determined Minimum Miscibility Pressure

Authors: Waqar Ahmad Butt, Gholamreza Vakili Nezhaad, Ali Soud Al Bemani, Yahya Al Wahaibi

Abstract:

Miscible gas injection processes, as secondary recovery methods, can be applied to a large number of mature reservoirs to improve the displacement of trapped oil. Successful miscible gas injection requires an accurate estimation of the minimum miscibility pressure (MMP) to make the injection process feasible, economical, and effective. There are several methods of MMP determination, such as the slim tube approach, vanishing interfacial tension and the rising bubble apparatus, but the slim tube is the experimental technique deployed in this study. The slim tube method is considered non-standardized for MMP determination with respect to both operating procedure and design. Therefore, 25 slim tube runs were conducted with three different coil lengths (12, 18 and 24 m) of constant diameter using three different injection rates (0.08, 0.1 and 0.15 cc/min) to evaluate the uniqueness and repeatability of the determined MMP. A trend of decreasing MMP with increasing coil length was found. No clear trend was found between MMP and injection rate. The lowest MMP and highest recovery were observed with the longest coil and the lowest injection rate. This shows that slim-tube-measured MMP does not depend solely on the characteristics of the interacting fluids but is also affected by the chosen coil and injection rate. Therefore, both slim tube design and procedure need to be standardized. It is recommended to use the lowest possible injection rate and a coil length estimated from the distance between the injection and producing wells for accurate and reliable MMP determination.

Keywords: coil length, injection rate, minimum miscibility pressure, multiple contacts miscibility

Procedia PDF Downloads 234
655 The Influence of Air Temperature Controls in Estimation of Air Temperature over Homogeneous Terrain

Authors: Fariza Yunus, Jasmee Jaafar, Zamalia Mahmud, Nurul Nisa’ Khairul Azmi, Nursalleh K. Chang

Abstract:

Variation of air temperature from one place to another is caused by air temperature controls. In general, the most important control of air temperature is elevation. Another significant independent variable in estimating air temperature is the location of meteorological stations. Distance to the coastline and land use type also contribute to significant variations in air temperature. On the other hand, over homogeneous terrain, direct interpolation of discrete air temperature points works well to estimate air temperature values in un-sampled areas. In this process, the estimation is based solely on the discrete points of air temperature. However, this study shows that air temperature controls also play significant roles in estimating air temperature over the homogeneous terrain of Peninsular Malaysia. An Inverse Distance Weighting (IDW) interpolation technique was adopted to generate continuous air temperature data. This study compared two different datasets: observed mean monthly values of T, and the estimation error T–T′, where T′ is the value estimated from a multiple regression model. The multiple regression model considered eight independent variables, namely elevation, latitude, longitude, distance to coastline, and four land use types (water bodies, forest, agriculture and built-up areas), to represent the role of air temperature controls. Cross-validation analysis was conducted to assess the accuracy of the estimated values. Final results show that estimating T–T′ produced lower errors for mean monthly air temperature over homogeneous terrain in Peninsular Malaysia.
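For reference, a minimal IDW interpolator of the kind used here can be written directly in numpy; the power parameter p = 2 and the choice of interpolated variable (either T or the residual T−T′) are illustrative assumptions, not necessarily the settings used in the study:

```python
import numpy as np

def idw(xy_stations, values, xy_targets, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: estimate values at xy_targets from stations.

    xy_stations : (n, 2) station coordinates
    values      : (n,)  observed T or residual T - T' at the stations
    xy_targets  : (m, 2) grid points to estimate
    """
    d = np.linalg.norm(xy_targets[:, None, :] - xy_stations[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power      # nearby stations get the largest weight
    return (w * values).sum(axis=1) / w.sum(axis=1)
```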

Keywords: air temperature control, interpolation analysis, peninsular Malaysia, regression model, air temperature

Procedia PDF Downloads 363
654 A Mixed-Integer Nonlinear Program to Optimally Pace and Fuel Ultramarathons

Authors: Kristopher A. Pruitt, Justin M. Hill

Abstract:

The purpose of this research is to determine the pacing and nutrition strategies which minimize completion time and carbohydrate intake for athletes competing in ultramarathon races. The model formulation consists of a two-phase optimization. The first-phase mixed-integer nonlinear program (MINLP) determines the minimum completion time subject to the altitude, terrain, and distance of the race, as well as the mass and cardiovascular fitness of the athlete. The second-phase MINLP determines the minimum total carbohydrate intake required for the athlete to achieve the completion time prescribed by the first phase, subject to the flow of carbohydrates through the stomach, liver, and muscles. Consequently, the second-phase model provides the optimal pacing and nutrition strategies for a particular athlete for each kilometer of a particular race. Validation of the model results over a wide range of athlete parameters against completion times for real competitive events suggests strong agreement. Additionally, the kilometer-by-kilometer pacing and nutrition strategies the model prescribes for a particular athlete suggest that unconventional approaches could result in lower completion times. Thus, the MINLP provides prescriptive guidance that athletes can leverage when developing pacing and nutrition strategies prior to competing in ultramarathon races. Given the highly variable topographical characteristics common to many ultramarathon courses and the potential inexperience of many athletes with such courses, the model provides valuable insight to competitors who might otherwise fail to complete the event due to exhaustion or carbohydrate depletion.

Keywords: nutrition, optimization, pacing, ultramarathons

Procedia PDF Downloads 172
653 Influence of Social Media on Perceived Learning Outcome of Agricultural Students in Tertiary Institutions in Oyo State, Nigeria

Authors: Adedoyin Opeyemi Osokoya

Abstract:

The study assesses the influence of social media on the perceived learning outcome of agricultural science students in tertiary institutions in Oyo State, Nigeria. A four-stage sampling procedure was used to select participants. The population comprised all students in the seven tertiary institutions that offer agricultural science as a course of study in Oyo State. A university, a college of agriculture and a college of education were sampled, and a department from each was randomly selected. Twenty percent of the student population in each selected department gave a sample size of 165. A questionnaire was used to collect information on respondents' personal characteristics and on access to social media. Data were analysed using descriptive statistics, chi-square, correlation, and multiple regression at the 0.05 confidence level. Mean age and household size were 21.13 ± 2.64 years and 6 ± 2.1 persons, respectively. All respondents had access to social media, the majority (86.1%) owned an Android phone, 57.6% and 52.7% used social media for course work and entertainment respectively, while the commonly visited sites were WhatsApp, Facebook, Google and Opera Mini. Over half (53.9%) had an unfavourable attitude towards the use of social media for learning; the perceived benefit of using social media for learning was high (56.4%). Removal of the information barrier created by distance (x̄=1.58) was the most derived benefit, while inadequate power supply (x̄=2.36) was the most severe constraint. Age (β=0.23), sex (β=0.37), ownership of an Android phone (β=-1.29), attitude (β=0.37), constraints (β=-0.26) and use of social media (β=0.23) were significant predictors of perceived learning outcomes.

Keywords: use of social media, agricultural science students, undergraduates of tertiary institutions, Oyo State of Nigeria

Procedia PDF Downloads 112
652 Effect of Gas Boundary Layer on the Stability of a Radially Expanding Liquid Sheet

Authors: Soumya Kedia, Puja Agarwala, Mahesh Tirumkudulu

Abstract:

Linear stability analysis is performed for a radially expanding liquid sheet in the presence of a gas medium. A liquid sheet can break up because of the aerodynamic effect as well as its thinning. However, these effects are usually studied separately, as the combined formulation becomes complicated and difficult to solve. The present work combines the aerodynamic and thinning effects, ignoring the non-linearity in the system. This is done by taking into account the formation of the gas boundary layer whilst neglecting viscosity in the liquid phase. Axisymmetric flow is assumed for simplicity. Base state analysis results in a Blasius-type system which can be solved numerically. Perturbation theory is then applied to study the stability of the liquid sheet, where the gas-liquid interface is subjected to small deformations. The linear model derived here can be applied to investigate the instability for sinuous as well as varicose modes, where the former represents displacement of the centerline of the sheet and the latter represents modulation in sheet thickness. Temporal instability analysis is performed for sinuous modes, which are significantly more unstable than varicose modes, at a fixed radial distance, implying a local stability analysis. The growth rates, measured for fixed wavenumbers, predicted by the present model are significantly lower than those obtained from the inviscid Kelvin-Helmholtz instability and compare better with experimental results. Thus, the present theory gives better insight into understanding the stability of a thin liquid sheet.
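The base state reduces to a Blasius-type boundary-layer problem that is solved numerically. The classical Blasius system, f''' + ½ f f'' = 0 with f(0) = f'(0) = 0 and f'(η→∞) → 1, can be integrated by a simple shooting method, sketched below; the actual similarity equations for the radially expanding sheet differ, so this is only an illustration of the numerical approach:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def shoot(fpp0, eta_max=10.0):
    """Integrate with guessed f''(0) and return the far-field mismatch f'(eta_max) - 1."""
    sol = solve_ivp(blasius_rhs, [0.0, eta_max], [0.0, 0.0, fpp0],
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# find the wall shear f''(0) that satisfies the far-field condition f'(inf) = 1
fpp0 = brentq(shoot, 0.1, 1.0)
print(f"f''(0) ≈ {fpp0:.5f}")   # classical value ≈ 0.33206 for this form of the equation
```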

Keywords: boundary layer, gas-liquid interface, linear stability, thin liquid sheet

Procedia PDF Downloads 212
651 Online Monitoring and Control of Continuous Mechanosynthesis by UV-Vis Spectrophotometry

Authors: Darren A. Whitaker, Dan Palmer, Jens Wesholowski, James Flaherty, John Mack, Ahmad B. Albadarin, Gavin Walker

Abstract:

Traditional mechanosynthesis has been performed by either ball milling or manual grinding. However, neither of these techniques allows the easy application of process control. The temperature may change unpredictably due to friction in the process, hence the amount of energy transferred to the reactants is intrinsically non-uniform. Recently, it has been shown that the use of Twin-Screw extrusion (TSE) can overcome these limitations. Additionally, TSE enables a platform for continuous synthesis or manufacturing as it is an open-ended process, with feedstocks at one end and product at the other. Several materials including metal-organic frameworks (MOFs), co-crystals and small organic molecules have been produced mechanochemically using TSE. The described advantages of TSE are offset by drawbacks such as increased process complexity (a large number of process parameters) and variation in feedstock flow impacting on product quality. To handle the above-mentioned drawbacks, this study utilizes UV-Vis spectrophotometry (InSpectroX, ColVisTec) as an online tool to gain real-time information about the quality of the product. Additionally, this is combined with real-time process information in an Advanced Process Control system (PharmaMV, Perceptive Engineering), allowing full supervision and control of the TSE process. Further, by characterizing the dynamic behavior of the TSE, a model predictive controller (MPC) can be employed to ensure the process remains under control when perturbed by external disturbances. Two reactions were studied: a Knoevenagel condensation reaction of barbituric acid and vanillin, and the direct amidation of hydroquinone by ammonium acetate to form N-Acetyl-para-aminophenol (APAP), commonly known as paracetamol. Both reactions could be carried out continuously using TSE; nuclear magnetic resonance (NMR) spectroscopy was used to confirm the percentage conversion of starting materials to product. This information was used to construct partial least squares (PLS) calibration models within the PharmaMV development system, which relate the percent conversion to product to the acquired UV-Vis spectrum. Once this was complete, the model was deployed within the PharmaMV Real-Time System to carry out automated optimization experiments to maximize the percentage conversion based on a set of process parameters in a design of experiments (DoE) style methodology. With the optimum set of process parameters established, a series of PRBS process response tests (i.e., pseudo-random binary sequences) around the optimum were conducted. The resultant dataset was used to build a statistical model and associated MPC. The controller maximizes product quality whilst ensuring the process remains at the optimum even as disturbances such as raw material variability are introduced into the system. To summarize, a combination of online spectral monitoring and advanced process control was used to develop a robust system for optimization and control of two TSE based mechanosynthetic processes.
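A PLS calibration of the kind described, relating a UV-Vis spectrum to percent conversion, can be prototyped with scikit-learn; the file names, array shapes and number of latent variables below are illustrative assumptions rather than the PharmaMV implementation:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# spectra: (n_samples, n_wavelengths) UV-Vis absorbances
# conversion: (n_samples,) percent conversion determined offline by NMR
spectra = np.load("uvvis_spectra.npy")        # hypothetical file names
conversion = np.load("nmr_conversion.npy")

pls = PLSRegression(n_components=4)           # latent variables chosen by cross-validation
pls.fit(spectra, conversion)

# cross-validated check of the calibration before deploying it online
pred = cross_val_predict(pls, spectra, conversion, cv=5).ravel()
rmsecv = np.sqrt(np.mean((pred - conversion) ** 2))
print(f"RMSECV = {rmsecv:.2f} % conversion")

# online use: predict conversion from a newly acquired spectrum of shape (1, n_wavelengths)
# print(pls.predict(new_spectrum))
```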

Keywords: continuous synthesis, pharmaceutical, spectroscopy, advanced process control

Procedia PDF Downloads 156
650 Design and Simulation of a Radiation Spectrometer Using Scintillation Detectors

Authors: Waleed K. Saib, Abdulsalam M. Alhawsawi, Essam Banoqitah

Abstract:

The idea of this research is to design a radiation spectrometer using an LSO scintillation detector coupled to a C-series SiPM (silicon photomultiplier). The device can be used to detect gamma and X-ray radiation. It is also designed to estimate the activity of the source contamination. The SiPM detects light in the visible range above the threshold and reads it as counts. Three gamma sources were used for these experiments, Cs-137, Am-241 and Co-60, with various activities. These sources were applied in four experiments: operating the SiPM as a spectrometer, energy resolution, pile-up rejection and efficiency. The SiPM is connected to an MCA to perform as a spectrometer. Cerium-doped lutetium silicate (Lu₂SiO₅), with a light yield of 26,000 photons/MeV, was coupled with the SiPM. As a result, all the main features of Cs-137, Am-241 and Co-60 are identified in the MCA spectra. The experiment shows how photon energy and probability of interaction are inversely related; total attenuation decreases as photon energy increases. An analytical calculation was made to obtain the FWHM resolution for each gamma source. The FWHM resolution for Am-241 (59 keV) is 28.75%, for Cs-137 (662 keV) is 7.85%, for Co-60 (1173 keV) is 4.46% and for Co-60 (1332 keV) is 3.70%. Moreover, the experiment shows that the dead time and number of counts decreased when pile-up rejection was disabled, and the FWHM decreased when pile-up rejection was enabled. The efficiencies were calculated at four different distances from the detector: 2, 4, 8 and 16 cm. The detection efficiency was observed to decline exponentially with increasing distance from the detector face. In conclusion, the SiPM board operated with an LSO scintillator crystal as a spectrometer, and its energy resolution for the three gamma sources used compared reasonably with that of other PMTs.
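The quoted energy resolutions follow the usual definition R = FWHM / E × 100%, with FWHM = 2.355 σ for a Gaussian photopeak; the peak-fitting details of the actual MCA analysis are assumptions, so the sketch below only restates that calculation:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def resolution_percent(energies_kev, counts):
    """Fit a Gaussian to an isolated photopeak and return FWHM/E in percent."""
    p0 = [counts.max(), energies_kev[np.argmax(counts)], 10.0]
    (a, mu, sigma), _ = curve_fit(gaussian, energies_kev, counts, p0=p0)
    fwhm = 2.355 * abs(sigma)
    return 100.0 * fwhm / mu

# e.g. a resolution of 7.85 % at the 662 keV Cs-137 peak corresponds to
# a FWHM of about 0.0785 * 662 ≈ 52 keV.
```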

Keywords: PMT, radiation, radiation detection, scintillation detectors, silicon photomultiplier, spectrometer

Procedia PDF Downloads 141
649 Chinese Leaders Abroad: Case in the Netherlands

Authors: Li Lin, Hein Roelfsema

Abstract:

To achieve aggressive expansion goals, many Chinese companies are seeking resources and markets around the world. To an increasing extent, Chinese enterprises recognize the Netherlands as their gateway to the European market. Yet, large cultural gaps (e.g. individualism/collectivism, power distance) may influence expat leaders' influencing process and, in turn, affect intercultural teamwork. Lessons and suggestions from Chinese expat leaders could provide profound knowledge for managerial practice and future research. The current research focuses on the cultural differences between China and the Netherlands, along with leadership tactics for coping with and handling differences occurring in international business work. Forty-seven exclusive in-depth interviews with Chinese expat leaders were conducted. Within each interview, respondents were asked what the main issues were when working with Dutch employees, and what they believed to be the keys to successful leadership in Dutch-Chinese cross-cultural workplaces. Consistent with previous research, the findings highlight the need to consider the cultural context within which leadership adapts. In addition, the findings indicate the importance of recognizing and applying the cultural advantages from which leadership originates. The results identify observation ability as a crucial key for Chinese managers leading Dutch/international teams. Moreover, setting a common goal helps a leader to overcome the challenges due to cultural differences. Based on the analysis, we develop a process model to illustrate the dynamic mechanisms. Our study contributes to a better understanding of the transference of management practices and has important practical implications for managing Dutch employees.

Keywords: Chinese managers, Dutch employees, leadership, interviews

Procedia PDF Downloads 329
648 Fragment Domination for Many-Objective Decision-Making Problems

Authors: Boris Djartov, Sanaz Mostaghim

Abstract:

This paper presents a number-based dominance method. The main idea is to fragment the many attributes of the problem into subsets suitable for the well-established concept of Pareto dominance. Although other similar methods can be found in the literature, they focus on comparing the solutions one objective at a time, while the focus of this method is to compare entire subsets of the objective vector. Given the nature of the method, it is computationally costlier than other methods and thus it is geared more towards selecting an option from a finite set of alternatives, where each solution is defined by multiple objectives. The need for this method was motivated by dynamic alternate airport selection (DAAS). In DAAS, pilots, while en route to their destination, can find themselves in a situation where they need to select a new landing airport. In such a predicament, they need to consider multiple alternatives with many different characteristics, such as wind conditions, available landing distance, the fuel needed to reach it, etc. Hence, this method is primarily aimed at human decision-makers. Many methods within the field of multi-objective and many-objective decision-making rely on the decision maker to initially provide the algorithm with preference points and weight vectors; however, this method aims to omit this very difficult step, especially when the number of objectives is large. The proposed method will be compared to the Favour (1 − k)-Dom and L-dominance (LD) methods. The tests will be conducted using well-established test problems from the literature, such as the DTLZ problems. The proposed method is expected to outperform the currently available methods in the literature and hopefully provide future decision-makers and pilots with support when dealing with many-objective optimization problems.
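One plausible reading of fragment domination, i.e. Pareto dominance applied to whole subsets of the objective vector rather than to single objectives, can be sketched as follows; the partition into fragments and the "more winning fragments" aggregation rule are assumptions for illustration, and the paper's exact definition may differ:

```python
def pareto_dominates(a, b):
    """Standard Pareto dominance for minimization: a dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fragment_dominates(a, b, fragments):
    """a fragment-dominates b if it Pareto-dominates b on more fragments than
    the reverse. 'fragments' is a list of index tuples partitioning the
    objective vector (an assumed aggregation rule)."""
    a_wins = sum(pareto_dominates([a[i] for i in f], [b[i] for i in f]) for f in fragments)
    b_wins = sum(pareto_dominates([b[i] for i in f], [a[i] for i in f]) for f in fragments)
    return a_wins > b_wins

# example: 6 objectives (e.g. wind, landing distance, fuel, ...) split into 3 fragments
fragments = [(0, 1), (2, 3), (4, 5)]
airport_a = [3, 2, 5, 1, 4, 2]
airport_b = [4, 2, 6, 3, 4, 5]
print(fragment_dominates(airport_a, airport_b, fragments))  # True
```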

Keywords: multi-objective decision-making, many-objective decision-making, multi-objective optimization, many-objective optimization

Procedia PDF Downloads 77
647 Uncertainties and Resilience: A Study of Pandemic Impact on the Pastoral-Nomadic Communities in India

Authors: Arati S. Kade, Iftikhar Hussain, Somnath Dadas

Abstract:

The paper studies resilience and uncertainties among nomadic-pastoral communities in India during large events such as pandemics and attempts to understand how, in changing times and under increased uncertainties, nomadic communities have historically shown their resilience. A review of the literature was performed concerning nomadism and development relations and conflicts, focusing on structural violence against nomadic communities arising from caste, class and patriarchy as a framework, along with the role of the state. Philosophical views on the anti-nomad bias of political theories by Erik Ringmar, along with the decolonial approach of Linda Smith and debrahmanization by Braj Ranjan Mani, were used to analyze the criminalization of nomads. Data were collected using in-depth telephonic interviews and news reports published during the COVID-19 lockdown in India. Focusing on the historical context of current crises, the paper leads to a discussion of how nomadic communities negotiate with sedentary society during the COVID-19 pandemic. The findings of the paper support the hypothesis that the COVID-19 pandemic, followed by the lockdown, deeply impacted the pastoral production system, building on the continued cycle of marginalization by the state and caste society in India, while traditional knowledge stood the test of time. Be it developmental states or pandemics, the nomadic communities have shown their resilience in a number of ways, such as keeping distance from sedentary society, usage of traditional medicine, and relying on traditional leadership.

Keywords: COVID-19, criminalization, India, nomadism, pandemic, pastoralism, resilience, traditional knowledge

Procedia PDF Downloads 78
646 Automatic Detection of Traffic Stop Locations Using GPS Data

Authors: Areej Salaymeh, Loren Schwiebert, Stephen Remias, Jonathan Waddell

Abstract:

Extracting information from new data sources has emerged as a crucial task in many traffic planning processes, such as identifying traffic patterns, route planning, traffic forecasting, and locating infrastructure improvements. Given the advanced technologies used to collect Global Positioning System (GPS) data from dedicated GPS devices, GPS-equipped phones, and navigation tools, intelligent data analysis methodologies are necessary to mine this raw data. In this research, an automatic detection framework is proposed to help identify and classify the locations of stopped GPS waypoints into two main categories: signalized intersections or highway congestion. The Delaunay triangulation is used to perform this assessment in the clustering phase. While most of the existing clustering algorithms need assumptions about the data distribution, the effectiveness of the Delaunay triangulation relies on triangulating geographical data points without such assumptions. Our proposed method starts by cleaning noise from the data and normalizing it. Next, the framework identifies stoppage points by calculating the traveled distance. The last step is to use clustering to form groups of waypoints for signalized traffic and highway congestion. A binary classifier was then applied to distinguish highway congestion from signalized stop points; the classifier uses the length of the cluster to identify congestion. The proposed framework identifies the stop positions and congestion points correctly in around 99.2% of trials, showing that it is possible, using limited GPS data, to distinguish between the two with high accuracy.
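The Delaunay-based clustering step can be prototyped with scipy: triangulate the stopped waypoints, drop edges longer than a threshold, and take connected components as clusters. The distance threshold and the "long cluster = congestion" rule below are illustrative assumptions, not the thresholds reported by the authors:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def cluster_stops(points, max_edge_m=50.0):
    """Cluster stopped GPS waypoints (projected x/y in metres) via Delaunay edges."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:               # collect unique triangle edges
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    rows, cols = [], []
    for a, b in edges:                          # keep only short edges
        if np.linalg.norm(points[a] - points[b]) <= max_edge_m:
            rows.append(a); cols.append(b)
    n = len(points)
    graph = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    _, labels = connected_components(graph, directed=False)
    return labels

def classify_clusters(points, labels, congestion_length_m=300.0):
    """Long, stretched clusters are labelled congestion; compact ones, signals."""
    kinds = {}
    for lab in np.unique(labels):
        pts = points[labels == lab]
        extent = pts.max(axis=0) - pts.min(axis=0)
        kinds[lab] = "congestion" if np.linalg.norm(extent) > congestion_length_m else "signal"
    return kinds
```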

Keywords: Delaunay triangulation, clustering, intelligent transportation systems, GPS data

Procedia PDF Downloads 258
645 Sustainability of Photovoltaic Recycling Planning

Authors: Jun-Ki Choi

Abstract:

The usage of valuable resources and the potential for waste generation at the end of the life cycle of photovoltaic (PV) technologies necessitate proactive planning for a PV recycling infrastructure. To ensure the sustainability of PV at large scales of deployment, it is vital to develop and institute low-cost recycling technologies and infrastructure for the emerging PV industry in parallel with the rapid commercialization of these new technologies. There are various issues involved in the economics of PV recycling, and this research examines them at the macro and micro levels, developing a holistic interpretation of the economic viability of PV recycling systems. This study developed mathematical models to analyze the profitability of recycling technologies and to guide tactical decisions for allocating the optimal locations of PV take-back centers (PVTBCs), necessary for the collection of end-of-life products. The economic decision is usually based on the marginal capital cost of each PVTBC, the cost of reverse logistics, the distance traveled, and the amount of PV waste collected from various locations. Results illustrated that the reverse logistics costs comprise a major portion of the cost of a PVTBC; PV recycling centers can be constructed in optimally selected locations to minimize the total reverse logistics cost of transporting the PV waste from the various collection facilities to the recycling center. At the micro-process level, automated recycling processes should be developed to handle the large amount of growing PV waste economically. The market prices of the reclaimed materials are important factors in deciding the profitability of the recycling process, and this illustrates the importance of recovering the glass and expensive metals from PV modules.
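At its simplest, the tactical location decision reduces to choosing, among candidate sites, the one that minimizes reverse-logistics cost plus the site's capital cost. The cost structure, site names and numbers below are an illustrative sketch, not the study's full model:

```python
def total_cost(candidate, collection_sites, cost_per_tonne_km, capital_cost):
    """Annualized cost of one PV take-back centre (PVTBC) candidate location.

    collection_sites: list of (distance_km_to_candidate, pv_waste_tonnes)
    """
    logistics = sum(d * w * cost_per_tonne_km for d, w in collection_sites)
    return capital_cost[candidate] + logistics

def best_location(candidates, distances, waste, cost_per_tonne_km, capital_cost):
    """Pick the candidate PVTBC that minimizes capital plus reverse-logistics cost."""
    costs = {
        c: total_cost(c, list(zip(distances[c], waste)), cost_per_tonne_km, capital_cost)
        for c in candidates
    }
    return min(costs, key=costs.get), costs

# illustrative data: 2 candidate sites, 3 collection facilities
candidates = ["site_A", "site_B"]
distances = {"site_A": [40, 120, 75], "site_B": [90, 30, 60]}   # km to each facility
waste = [15, 40, 25]                                            # tonnes/yr collected
best, costs = best_location(candidates, distances, waste,
                            cost_per_tonne_km=0.8,
                            capital_cost={"site_A": 5000, "site_B": 6500})
print(best, costs)   # site_B wins despite its higher capital cost
```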

Keywords: photovoltaic, recycling, mathematical models, sustainability

Procedia PDF Downloads 233
644 Evaluation of Pile Performance in Different Layers of Soil

Authors: Orod Zarrin, Mohesn Ramezan Shirazi, Hassan Moniri

Abstract:

The pile foundation technique was developed to support structures and buildings on soft soil. The most important dynamic load that can affect a pile structure is earthquake vibration. Observations of pile foundations during earthquake excitation indicate that piles are subject to damage, affecting the integrity and serviceability of the superstructure. During an earthquake, two types of stress can damage the pile head: inertial load caused by the superstructure and deformation caused by the surrounding soil. Soil deformation and inertial load are associated with the acceleration developed in an earthquake. The acceleration amplitude at the ground surface depends on the magnitude of the earthquake, the soil properties and the seismic source distance. According to the investigation, the damage occurs at the interfaces between liquefiable and non-liquefiable layers and also between soft and stiff layers. This damage crushes the pile head by increasing the inertial load applied by the superstructure. On the other hand, the cracks in the piles due to the surrounding soil are directly related to the soil profile and range from small to large. The causes of large cracks include liquefaction, lateral spreading and inertial load. In design, the elastic response of piles in liquefiable soil is always a challenge for the designer, since deflection is allowed at the top of the piles. Moreover, the absence of plastic hinges in piles should be ensured, because damage in the piles cannot be observed directly. In this study, the performance and behavior of pile foundations during liquefaction and lateral spreading are investigated. In addition, emphasis is placed on soil behavior in the liquefiable and non-liquefiable layers, and different aspects of pile damage, such as ranking, location and degree of damage, are discussed.

Keywords: pile, earthquake, liquefaction, non-liquefiable, damage

Procedia PDF Downloads 286
643 Perception Towards Using E-learning with STEM Students Whose Programs Require Them to Attend Practical Sections in Laboratories during COVID-19

Authors: Youssef A. Yakoub, Ramy M. Shaaban

Abstract:

COVID-19 has changed and affected the whole world dramatically, in ways that the entire world, even scientists, had not imagined before. Educational institutions around the world have been fighting since COVID-19 hit last December to keep the educational process uninterrupted for all students. ELearning was a must for almost all US universities during the pandemic. It was especially challenging to use eLearning instead of regular classes for students whose programs include practical education. The aim of this study is to examine the perception of STEM students towards using eLearning instead of traditional methods during their practical study. Focus groups of STEM students studying at a mid-size university in western Pennsylvania were interviewed. Semi-structured interviews were designed to gain insight into students' perceptions of the alternative educational methods they used in the past seven months. Using convenience sampling, four students were chosen from different STEM fields: physics, technology, electrical engineering, and mathematics. The interviews were primarily about the extent to which these students were satisfied and their educational needs were met through distance education during the pandemic. The interviewed students were generally able to perform satisfactorily in their virtual classes, but they were not satisfied enough with the learning methods. The main challenges they faced included the inability to have real practical experience, insufficient materials posted by the faculty, and some technical problems associated with their study. However, they reported that they were satisfied with the simulation programs they had; these simulations provided them with a good alternative to their traditional practical education. In conclusion, this study highlighted the challenges students face during the pandemic, as well as the various learning tools students see as good alternatives to their traditional education.

Keywords: eLearning, STEM education, COVID-19 crisis, online practical training

Procedia PDF Downloads 118
642 Perceptions of Students toward ODL Services Quality in Facilitating Their Study: Experience of Universitas Terbuka in Managing ODL in Cultural Diversity Areas

Authors: Ribut Alam Malau, Durri Andriani, C. B. Supartomo

Abstract:

Universitas Terbuka (UT), as a higher education institution implementing open and distance learning, is responsible for providing higher education to all Indonesian citizens wherever they live, including those residing in culturally diverse areas. Operating from its Jakarta head office and 37 regional centers (ROs), UT is accustomed to this challenge. UT-Kupang and UT-Ambon, which oversee East Nusa Tenggara and Maluku, have successfully provided quality educational services for students. The two ROs have provided educational facilities which assist the students to cope with their study in spite of the diverse situations. In order to analyze the effectiveness of the facilities provided, questionnaires focusing on tutorial services were sent to 90 students in the two ROs, asking them to assess the facilities which best fulfill students' needs in terms of their cultural diversity. The results showed that UT-Kupang and UT-Ambon have been successful in providing education for students in their areas, as reflected in more than 80% of respondents being aware of the facilities concerning tutorial services, except for the tutorial mechanism, of which only 34.5% of respondents were aware. However, despite the lower rate of awareness of the tutorial mechanism, the majority of respondents (90.8%) registered in tutorials and 95.4% will register in tutorials next semester. The majority of respondents showed appreciation for the ROs' efforts to provide tutorials on weekdays, which could accommodate their beliefs. In addition, conducting tutorials on all islands was also perceived highly, since students did not have to commute between islands. The efforts made by UT-Kupang and UT-Ambon have proven to be appreciated by students.

Keywords: archipelago, cultural diversity, ODL, service quality, Universitas Terbuka

Procedia PDF Downloads 455
641 Central Finite Volume Methods Applied in Relativistic Magnetohydrodynamics: Applications in Disks and Jets

Authors: Raphael de Oliveira Garcia, Samuel Rocha de Oliveira

Abstract:

We have developed a new computer program in Fortran 90 in order to obtain numerical solutions of the system of Relativistic Magnetohydrodynamics partial differential equations with predetermined gravitation (GRMHD), capable of simulating the formation of relativistic jets from the accretion disk of matter up to its ejection. Initially, we carried out a study of one-dimensional finite volume numerical methods, namely the Lax-Friedrichs, Lax-Wendroff and Nessyahu-Tadmor methods and Godunov-type methods dependent on Riemann problems, applied to the Euler equations, in order to verify their main features and make comparisons among those methods. We then implemented the Nessyahu-Tadmor central finite volume method, a numerical scheme whose formulation is free of Riemann problem solvers and of dimensional splitting, even in two or more spatial dimensions, and at this point applied it to the GRMHD equations. Finally, with the Nessyahu-Tadmor method it was possible to obtain stable numerical solutions, without spurious oscillations or excessive dissipation, of the magnetized accretion disk process rotating around a central Schwarzschild black hole (BH) immersed in a magnetosphere, with ejection of matter in the form of a jet over a distance of fourteen times the radius of the BH, a record in terms of astrophysical simulations of this kind. In our simulations, we also obtained jet substructures. A great advantage is that, with our code, we could simulate the GRMHD equations on a simple personal computer.
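For readers unfamiliar with the scheme, a minimal 1D Nessyahu-Tadmor step for a scalar conservation law u_t + f(u)_x = 0 (here inviscid Burgers, written in Python rather than the authors' Fortran 90 GRMHD code) illustrates its Riemann-solver-free, staggered predictor-corrector structure:

```python
import numpy as np

def minmod(a, b):
    """Slope limiter used by the Nessyahu-Tadmor scheme."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def nt_step(u, lam, f=lambda u: 0.5 * u ** 2):
    """One Nessyahu-Tadmor staggered step on a periodic grid; lam = dt/dx."""
    up = np.roll(u, -1)                                   # u_{j+1}
    du = minmod(u - np.roll(u, 1), up - u)                # limited slope u'_j
    df = minmod(f(u) - f(np.roll(u, 1)), f(up) - f(u))    # limited slope f'_j
    u_half = u - 0.5 * lam * df                           # predictor at t^{n+1/2}
    # staggered corrector: new cell average on [x_j, x_{j+1}]
    return (0.5 * (u + up)
            + 0.125 * (du - np.roll(du, -1))
            - lam * (f(np.roll(u_half, -1)) - f(u_half)))

# usage sketch: each step shifts to the staggered grid; two steps return to the original one
x = np.linspace(0, 1, 200, endpoint=False)
u = np.sin(2 * np.pi * x)
dx = x[1] - x[0]
dt = 0.4 * dx                 # CFL number 0.4 (< 0.5 required; |f'(u)| <= 1 here)
for _ in range(100):
    u = nt_step(u, dt / dx)
```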

Keywords: finite volume methods, central schemes, fortran 90, relativistic astrophysics, jet

Procedia PDF Downloads 432
640 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is an important field in biometric technology. More and more researchers have used EEG signals as a data source for biometrics. However, there are some disadvantages to biometrics based on EEG signals. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE) and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distance from each data point in a cluster to every other point within the same cluster and to all data points in the closest cluster is determined. Thus, silhouettes provide a measure of how well a data point was classified when it was assigned to a cluster and of the separation between clusters. This renders silhouettes potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance of each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p<0.01); (3) there is no significant difference in authentication performance among feature sets (except feature PE). Conclusion: the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
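The silhouette-based assessment of k-means clusters can be reproduced with scikit-learn once the entropy features have been extracted; the file names, feature matrix layout and uniform initial weighting below are placeholders for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# X: (n_epochs, n_features) matrix of SE/FE/AE/PE features per EEG epoch
# y: subject identity of each epoch (22 subjects in the study)
X = np.load("eeg_entropy_features.npy")       # hypothetical file names
y = np.load("subject_labels.npy")

weights = np.ones(X.shape[1])                 # optimal feature weights would be tuned
Xw = StandardScaler().fit_transform(X) * weights

km = KMeans(n_clusters=len(np.unique(y)), n_init=10, random_state=0).fit(Xw)
print("silhouette score:", silhouette_score(Xw, km.labels_))
```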

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 267
639 Effects of Channel Orientation on Heat Transfer in a Rotating Rectangular Channel with Jet Impingement Cooling and Film Coolant Extraction

Authors: Hua Li, Hongwu Deng

Abstract:

The turbine blade's leading edge is usually cooled by jet impingement cooling technology because it bears the heaviest heat load. For a rotating turbine blade, however, the channel orientation (β, the angle between the jet direction and the rotating plane) could play an important role in influencing the flow field and heat transfer. Therefore, in this work, the effects of channel orientation (from 90° to 180°) on heat transfer in a jet impingement cooling channel are experimentally investigated. Furthermore, the investigations are conducted under an isothermal boundary condition. Both the jet-to-target surface distance and the jet-to-jet spacing are three times the jet hole diameter. The jet Reynolds number is 5,000, and the maximum jet rotation number reaches 0.24. The results show that the rotation-induced variations of heat transfer differ for each channel orientation. In the cases of 90°≤β≤135°, a vortex generated in the low-radius region of the supply channel changes the mass-flowrate distribution among the jet holes. Therefore, the heat transfer in the low-radius region decreases with the rotation number, whereas the heat transfer in the high-radius region increases, indicating that a larger temperature gradient in the radial direction could appear in the turbine blade's leading edge. When 135°<β≤180°, however, the heat transfer of the entire stagnation zone decreases with the rotation number. The rotation-induced jet deflection is the primary factor that weakens the heat transfer, and jets cannot reach the target surface at high rotation numbers. For the downstream regions, however, the heat transfer is enhanced by 50%-80% in every channel orientation because the dead zone is broken up by the rotation-induced secondary flow in the impingement channel.

Keywords: heat transfer, jet impingement cooling, channel orientation, high rotation number, isothermal boundary

Procedia PDF Downloads 89
638 Determination of Gross Alpha and Gross Beta Activity in Water Samples by iSolo Alpha/Beta Counting System

Authors: Thiwanka Weerakkody, Lakmali Handagiripathira, Poshitha Dabare, Thisari Guruge

Abstract:

The determination of gross alpha and beta activity in water is important in a wide array of environmental studies, and these parameters are considered in international legislation on the quality of water. This technique is commonly applied as a screening method in radioecology, environmental monitoring, industrial applications, etc. Measuring gross alpha and beta emitters using the iSolo alpha/beta counting system is an adequate nuclear technique to assess radioactivity levels in natural and waste water samples due to its simplicity and low cost compared with other methods. Twelve water samples (six samples of commercially available bottled drinking water and six samples of industrial waste water) were measured by standard method EPA 900.0 using the gas-less, firmware-based, single-sample, manual iSolo alpha/beta counter (Model: SOLO300G) with a solid-state silicon PIPS detector. Am-241 and Sr-90/Y-90 calibration standards were used to calibrate the detector. The minimum detectable activities are 2.32 mBq/L and 406 mBq/L for alpha and beta activity, respectively. Each 2 L water sample was evaporated (at low heat) to a small volume, transferred evenly into a 50 mm stainless steel counting planchet (for homogenization) and heated by an IR lamp until a constant-weight residue was obtained. The samples were then counted for gross alpha and beta. The sample density on the planchet area was maintained below 5 mg/cm². Large quantities of solid waste, sludge and waste water are generated every year by various industries. This water can be reused for different applications. Therefore, implementing water treatment plants and measuring water quality parameters in industrial waste water discharge is very important before releasing it into the environment. This waste may contain different types of pollutants, including radioactive substances. All the measured waste water samples had gross alpha and beta activities lower than the maximum tolerance limits for discharge of industrial waste into inland surface water, that is, 10⁻⁹ µCi/mL and 10⁻⁸ µCi/mL for gross alpha and beta respectively (National Environmental Act, No. 47 of 1980), according to the Extraordinary Gazette of the Democratic Socialist Republic of Sri Lanka of February 2008. The measured water samples were below the recommended radioactivity levels and do not pose any radiological hazard when released into the environment. Drinking water is an essential requirement of life. All the drinking water samples were below the permissible levels of 0.5 Bq/L for gross alpha activity and 1 Bq/L for gross beta activity, values proposed by the World Health Organization in 2011; therefore, the water is acceptable for human consumption without any further clarification with respect to its radioactivity. As these screening levels are very low, the individual dose criterion (IDC) would usually not be exceeded (0.1 mSv y⁻¹). The IDC is a criterion for evaluating health risks from long-term exposure to radionuclides in drinking water; the recommended level of 0.1 mSv/y represents a very low level of health risk. This monitoring work will be continued for environmental protection purposes.
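Because the regulatory discharge limits are quoted in µCi/mL while the measured MDAs and the WHO screening levels are in Bq/L, a quick conversion (1 µCi = 3.7 × 10⁴ Bq, 1 L = 1000 mL) makes the comparison explicit; the sketch below only restates that arithmetic:

```python
BQ_PER_MICROCURIE = 3.7e4   # 1 µCi = 3.7e4 Bq
ML_PER_L = 1000.0

def uci_per_ml_to_bq_per_l(x):
    return x * BQ_PER_MICROCURIE * ML_PER_L

# industrial discharge tolerance limits quoted in the abstract
alpha_limit = uci_per_ml_to_bq_per_l(1e-9)   # ≈ 0.037 Bq/L
beta_limit = uci_per_ml_to_bq_per_l(1e-8)    # ≈ 0.37 Bq/L
# WHO drinking-water screening levels for comparison: 0.5 Bq/L (gross alpha), 1 Bq/L (gross beta)
print(f"alpha limit ≈ {alpha_limit:.3f} Bq/L, beta limit ≈ {beta_limit:.2f} Bq/L")
```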

Keywords: drinking water, gross alpha, gross beta, waste water

Procedia PDF Downloads 178
637 Reversible Information Hitting in Encrypted JPEG Bitstream by LSB Based on Inherent Algorithm

Authors: Vaibhav Barve

Abstract:

Reversible information hiding has drawn a lot of interest of late. Being reversible, the original digital data can be restored completely. It is a scheme in which secret data is stored in digital media such as images, video or audio, to prevent unauthorized access and for security reasons. In general, a JPEG bitstream is used to store this key data: first, the JPEG bitstream is encrypted into a well-organized structure, and then the secret information or key data is embedded into this encrypted region by slightly modifying the JPEG bitstream. Useful pixels suitable for data embedding are computed, and the key details are embedded accordingly. In our proposed framework, we use the RC4 algorithm for encrypting the JPEG bitstream. The encryption key is supplied by the system user and will likewise be used at the time of decryption. We implement enhanced least-significant-bit (LSB) replacement steganography using a genetic algorithm. First, the number of bits that must be embedded in a given coefficient is adaptive. By using proper parameters, we can obtain high capacity while ensuring high security. We use a logistic map for shuffling the bits and a genetic algorithm (GA) to find the right parameters for the logistic map. A data embedding key is used at the time of data embedding. By using the correct image encryption and data embedding keys, the receiver can easily extract the embedded secure data and completely recover the original image as well as the original secret information. When the embedding key is absent, the original image can still be recovered approximately, with sufficient quality, without obtaining the embedded data of interest.
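The logistic-map bit shuffling and LSB substitution described above can be sketched as follows; the map parameter r, the key handling and the byte-level embedding are simplified placeholders, since the actual system works on an encrypted JPEG bitstream and tunes the map parameters with a genetic algorithm:

```python
import numpy as np

def logistic_permutation(n, x0=0.3141, r=3.99):
    """Permutation of n positions from the logistic map x_{k+1} = r x (1 - x).
    x0 and r act as the (secret) embedding key; a GA would search for good values."""
    x, seq = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return np.argsort(seq)                 # ranking the chaotic sequence gives the shuffle

def embed_lsb(carrier_bytes, secret_bits, x0=0.3141, r=3.99):
    """Embed secret bits into the LSBs of logistic-map-shuffled carrier positions."""
    carrier = np.frombuffer(bytes(carrier_bytes), dtype=np.uint8).copy()
    order = logistic_permutation(len(carrier), x0, r)[: len(secret_bits)]
    carrier[order] = (carrier[order] & 0xFE) | np.asarray(secret_bits, dtype=np.uint8)
    return carrier.tobytes()

def extract_lsb(stego_bytes, n_bits, x0=0.3141, r=3.99):
    stego = np.frombuffer(stego_bytes, dtype=np.uint8)
    order = logistic_permutation(len(stego), x0, r)[:n_bits]
    return (stego[order] & 1).tolist()

payload = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(b"example carrier bytes....", payload)
assert extract_lsb(stego, len(payload)) == payload
```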

Keywords: data embedding, decryption, encryption, reversible data hiding, steganography

Procedia PDF Downloads 277
636 Exploring Influence Range of Tainan City Using Electronic Toll Collection Big Data

Authors: Chen Chou, Feng-Tyan Lin

Abstract:

Big Data has attracted a lot of attention in many fields for analyzing research issues based on large amounts of raw data. Electronic Toll Collection (ETC) is one of the Intelligent Transportation System (ITS) applications in Taiwan, used to record the starting point, end point, distance and travel time of vehicles on the national freeway. This study, taking advantage of ETC big data combined with urban planning theory, attempts to explore various phenomena of inter-city transportation activities. ETC, one of the government's open datasets, is voluminous, complete and frequently updated. One may recall that living areas have been delimited by location, population, area and subjective consciousness. However, these factors cannot appropriately reflect people's movement paths in daily life. In this study, the concept of "Living Area" is replaced by "Influence Range" to show the dynamics and variation with time and purpose of activities. This study uses data mining with Python and Excel, and visualizes the number of trips with GIS, to explore the influence range of Tainan City and the purpose of trips, and to discuss the living areas as currently delimited. It creates a dialogue between the concepts of "Central Place Theory" and "Living Area", presents a new point of view, and integrates the application of big data, urban planning and transportation. The findings will be valuable for resource allocation and land apportionment in spatial planning.
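A minimal version of the data-mining step, counting trips between Tainan and every other origin or destination to sketch the influence range, can be done with pandas; the file name, column names and CSV layout are assumptions, since the ETC open-data schema is not reproduced in the abstract:

```python
import pandas as pd

# hypothetical layout: one ETC record per trip with origin, destination, distance, time
trips = pd.read_csv("etc_trips.csv",
                    usecols=["origin", "destination", "distance_km", "travel_min"])

# trips that start or end in Tainan define its influence range
tainan = trips[(trips["origin"] == "Tainan") | (trips["destination"] == "Tainan")]
partner = tainan["destination"].where(tainan["origin"] == "Tainan", tainan["origin"])

influence = (tainan.assign(partner=partner)
                   .groupby("partner")
                   .agg(trip_count=("distance_km", "size"),
                        mean_distance_km=("distance_km", "mean"))
                   .sort_values("trip_count", ascending=False))
print(influence.head(10))     # strongest inter-city linkages, ready for GIS mapping
```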

Keywords: Big Data, ITS, influence range, living area, central place theory, visualization

Procedia PDF Downloads 263
635 Comparison of Mini-BESTest versus Berg Balance Scale to Evaluate Balance Disorders in Parkinson's Disease

Authors: R. Harihara Prakash, Shweta R. Parikh, Sangna S. Sheth

Abstract:

The purpose of this study was to explore the usefulness of the Mini-BESTest compared to the Berg Balance Scale in evaluating balance in people with Parkinson's Disease (PD) of varying severity. Evaluations were done to obtain (1) the distribution of patients' scores to look for ceiling effects, (2) concurrent validity with severity of disease, and (3) the sensitivity and specificity of separating people with or without postural response deficits. Methods and Material: Seventy-seven (77) people with Parkinson's Disease were tested for balance deficits using the Berg Balance Scale and the Mini-BESTest. The Unified Parkinson's Disease Rating Scale (UPDRS) III and the Hoehn & Yahr (H&Y) disease severity scales were used for classification. Materials used in this study were a case record sheet, a chair without armrests or wheels, an incline ramp, a stopwatch, a box, and a 3-meter distance measured out from the chair and marked on the floor with tape. Statistical analysis used: Multiple linear regression of UPDRS was carried out jointly on the two scores from the Berg and Mini-BESTest; receiver operating characteristic (ROC) curves were used for classifying people into two groups based on a threshold for the H&Y score, to discriminate between mild PD and more severe PD; and correlation coefficients were used to find the relationship between the two variables. Results: The Mini-BESTest is highly correlated with the Berg (r = 0.732, P < 0.001), but avoids the ceiling compression effect of the Berg for mild PD (skewness −0.714 Berg, −0.512 Mini-BESTest). Consequently, the Mini-BESTest is more effective than the Berg for predicting UPDRS Motor score (P < 0.001 Mini-BESTest versus P = 0.72 Berg), and for discriminating between those with and without postural response deficits as measured by the H&Y (ROC).
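The concurrent-validity and discrimination analyses map onto standard tools: Pearson correlation between the two balance scores and an ROC analysis against a dichotomized H&Y threshold. A hedged sketch (the file, column names and the H&Y cut-off are assumptions, not the study's actual data handling) is:

```python
import pandas as pd
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

df = pd.read_csv("pd_balance_scores.csv")     # hypothetical columns: berg, minibest, updrs3, hy

# concurrent validity: correlation between the two balance scales
r, p = pearsonr(df["berg"], df["minibest"])
print(f"Berg vs Mini-BESTest: r = {r:.3f}, p = {p:.4g}")

# discrimination: lower balance scores should indicate more severe PD (H&Y above cut-off)
severe = (df["hy"] >= 3).astype(int)          # assumed threshold for 'more severe PD'
for scale in ("berg", "minibest"):
    auc = roc_auc_score(severe, -df[scale])   # negate: lower score -> higher severity
    print(f"{scale}: ROC AUC = {auc:.3f}")
```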

Keywords: balance, berg balance scale, MINI BESTest, parkinson's disease

Procedia PDF Downloads 375
634 Cytotoxic and Biocompatible Evaluation of Silica Coated Silver Nanoparticle Against NIH-3T3 Cells

Authors: Chen-En Lin, Lih-Rou Rau, Jiunn-Woei Liaw, Shiao-Wen Tsai

Abstract:

The unique optical properties of plasmon-resonant metallic particles have attracted considerable attention for applications in the fields of physics, chemistry and biology. The Metal-Enhanced Fluorescence (MEF) effect is one such useful application: fluorescence intensity can be quenched or enhanced depending on the distance between fluorophores and the metal nanoparticles. Silver nanoparticles have been used widely in antibacterial studies. However, the major limitation of silver nanoparticles (AgNPs) in biomedical applications is their well-known cytotoxicity. Numerous studies have been devoted to overcoming this disadvantage. The aim of this study is to evaluate the cytotoxicity and biocompatibility of silica-coated AgNPs against NIH-3T3 cells. The results showed that NIH-3T3 cells started to detach, shrink, become rounded and finally become irregular in shape after 24 h of exposure to 10 µg/ml AgNPs. In addition, compared with untreated cells, cell viability significantly decreased to 60% and 40% upon exposure to 10 µg/ml and 20 µg/ml AgNPs, respectively. This result is consistent with previously reported findings that AgNP-induced cytotoxicity is concentration-dependent. However, the morphology and viability of cells exposed to 20 µg/ml of silica-coated AgNPs appeared similar to those of the control group. We further utilized a dark-field hyperspectral imaging system to analyze the optical properties of the intracellular nanoparticles. The images showed a red shift of the surface plasmon resonance band of the enclosed AgNPs, which further confirms the agglomeration of the AgNPs rather than their dispersion in the cytoplasm. In conclusion, the study demonstrated that silica-coated AgNPs show good biocompatibility and significantly lower cytotoxicity compared with bare AgNPs.

Keywords: silver nanoparticles, silica, cell viability, morphology

Procedia PDF Downloads 376
633 Spatial Distribution and Time Series Analysis of COVID-19 Pandemic in Italy: A Geospatial Perspective

Authors: Muhammad Farhan Ul Moazzam, Tamkeen Urooj Paracha, Ghani Rahman, Byung Gul Lee, Nasir Farid, Adnan Arshad

Abstract:

The novel coronavirus disease (COVID-19) pandemic has affected the whole globe, although clinical studies of its epidemiological features remain limited. Observations so far indicate that most COVID-19 patients show mild to moderate symptoms and recover without medical assistance, as their immune systems generate antibodies against the novel coronavirus. In this study, active cases, serious cases, recovered cases, deaths and total confirmed cases were analyzed using the geospatial inverse distance weighting (IDW) interpolation technique over the period 2nd March to 3rd June 2020. As of 3rd June, the total number of COVID-19 cases in Italy was 231,238, with 33,310 deaths, 350 serious cases, 158,951 recovered cases, and 39,177 active cases, as reported by the Ministry of Health, Italy. Of the 231,238 cases reported between 2nd March and 3rd June 2020, 38.68% were reported in the Lombardia region, with a death rate of 18%, higher than the national mortality rate, followed by Emilia-Romagna (14.89% deaths), Piemonte (12.68% deaths), and Veneto (10% deaths). Relative to the total cases in each region, the highest recovery rates were observed in Umbria (92.52%), followed by Basilicata (87%), Valle d'Aosta (86.85%), and Trento (84.54%). The evolution of COVID-19 in Italy has been concentrated in the major urban areas, i.e., Rome, Milan, Naples, Bologna, and Florence. Geospatial technology played a vital role in this pandemic by tracking infected patients, active cases, and recovered cases, and geospatial techniques are very important for monitoring and planning to control the spread of the pandemic in the country.
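To make the interpolation step concrete, the sketch below shows the standard inverse distance weighting formula the abstract names: a value at an unsampled location is estimated as a distance-weighted average of nearby observations. The coordinates (approximate city locations) and case counts are hypothetical illustrations, not the paper's dataset.

```python
# A minimal sketch of inverse distance weighting (IDW) interpolation; the
# example coordinates and case counts are illustrative only.
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Estimate values at query points as inverse-distance-weighted averages of known points."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    xy_query = np.asarray(xy_query, dtype=float)
    # Pairwise distances between each query point and each known point
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power          # inverse-distance weights
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Example: confirmed cases observed at three (lon, lat) points, interpolated at one location
known = [(9.19, 45.46), (12.49, 41.89), (11.34, 44.49)]   # roughly Milan, Rome, Bologna
cases = [90000, 5500, 27000]                              # illustrative counts only
print(idw_interpolate(known, cases, [(10.0, 44.0)]))
```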

Keywords: COVID-19, public health, geospatial analysis, IDW, Italy

Procedia PDF Downloads 131
632 Hydrogen Purity: Developing Low-Level Sulphur Speciation Measurement Capability

Authors: Sam Bartlett, Thomas Bacquart, Arul Murugan, Abigail Morris

Abstract:

Fuel cell electric vehicles provide the potential to decarbonise road transport, create new economic opportunities, diversify national energy supply, and significantly reduce the environmental impacts of road transport. A potential issue, however, is that the catalyst used at the fuel cell cathode is susceptible to degradation by impurities, especially sulphur-containing compounds. A recent European Directive (2014/94/EU) stipulates that, from November 2017, all hydrogen provided to fuel cell vehicles in Europe must comply with the hydrogen purity specifications listed in ISO 14687-2; this includes reactive and toxic chemicals such as ammonia and total sulphur-containing compounds. This requirement poses great analytical challenges due to the instability of some of these compounds in calibration gas standards at relatively low amount fractions and the difficulty associated with undertaking measurements of groups of compounds rather than individual compounds. Without available reference materials and analytical infrastructure, hydrogen refuelling stations will not be able to demonstrate compliance with the ISO 14687 specifications. The hydrogen purity laboratory at NPL provides world-leading, accredited purity measurements to allow hydrogen refuelling stations to evidence compliance with ISO 14687. Utilising state-of-the-art methods that have been developed by NPL’s hydrogen purity laboratory, including a novel method for measuring total sulphur compounds at 4 nmol/mol and a hydrogen impurity enrichment device, we provide the capabilities necessary to achieve these goals. An overview of these capabilities will be given in this paper. As part of the EMPIR Hydrogen co-normative project ‘Metrology for sustainable hydrogen energy applications’, NPL are developing a validated analytical methodology for the measurement of speciated sulphur-containing compounds in hydrogen at low amount fractions (pmol/mol to nmol/mol) to allow identification and measurement of individual sulphur-containing impurities in real samples of hydrogen (as opposed to a ‘total sulphur’ measurement). This is achieved by producing a suite of stable gravimetrically-prepared primary reference gas standards containing low amount fractions of sulphur-containing compounds (hydrogen sulphide, carbonyl sulphide, carbon disulphide, 2-methyl-2-propanethiol and tetrahydrothiophene have been selected for use in this study) to be used in conjunction with novel dynamic dilution facilities to enable generation of pmol/mol to nmol/mol level gas mixtures (a dynamic method is required as compounds at these levels would be unstable in gas cylinder mixtures). Method development and optimisation are performed using gas chromatographic techniques assisted by cryo-trapping technologies and coupled with sulphur chemiluminescence detection to allow improved qualitative and quantitative analyses of sulphur-containing impurities in hydrogen. The paper will review state-of-the-art gas standard preparation techniques, including the use and testing of dynamic dilution technologies for reactive chemical components in hydrogen. Method development will also be presented, highlighting the advances in the measurement of speciated sulphur compounds in hydrogen at low amount fractions.
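As a rough illustration of the dynamic dilution principle mentioned above, the delivered amount fraction scales with the parent standard's share of the total flow, assuming ideal mixing and matched flow conditions. The flow values and the 100 nmol/mol parent mixture below are hypothetical examples, not NPL's actual standards or facility settings.

```python
# A minimal sketch of the dynamic-dilution calculation, assuming ideal mixing:
# the delivered amount fraction is the parent-standard fraction scaled by its
# share of the total flow. All values here are hypothetical illustrations.
def diluted_amount_fraction(x_parent_nmol_mol, q_parent_ml_min, q_diluent_ml_min):
    """Amount fraction (nmol/mol) after blending a parent standard with pure hydrogen."""
    q_total = q_parent_ml_min + q_diluent_ml_min
    return x_parent_nmol_mol * q_parent_ml_min / q_total

# Example: a 100 nmol/mol hydrogen sulphide parent standard diluted 1:999 with
# pure hydrogen gives 0.1 nmol/mol (100 pmol/mol) at the analyser inlet.
print(diluted_amount_fraction(100.0, 1.0, 999.0))  # -> 0.1 nmol/mol
```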

Keywords: gas chromatography, hydrogen purity, ISO 14687, sulphur chemiluminescence detector

Procedia PDF Downloads 201