Search results for: equivalent linear

882 Bismuth Telluride Topological Insulator: Physical Vapor Transport vs Molecular Beam Epitaxy

Authors: Omar Concepcion, Osvaldo De Melo, Arturo Escobosa

Abstract:

Topological insulator (TI) materials are insulating in the bulk and conducting at the surface. The unique electronic properties associated with these surface states make them strong candidates for exploring innovative quantum phenomena and for practical applications in quantum computing, spintronics and nanodevices. Many materials, including Bi₂Te₃, have been proposed as TIs and, in some cases, this has been demonstrated experimentally by angle-resolved photoemission spectroscopy (ARPES), scanning tunneling microscopy (STM) and/or magnetotransport measurements. A clean surface is necessary in order to perform any of these measurements. Several techniques have been used to produce films and different kinds of nanostructures. In situ growth and characterization is usually the best option, although cleaving the films can be an alternative way to obtain a suitable surface. In the present work, we report a comparison of Bi₂Te₃ grown by physical vapor transport (PVT) and molecular beam epitaxy (MBE). The samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), atomic force microscopy (AFM), X-ray photoelectron spectroscopy (XPS) and ARPES. The Bi₂Te₃ samples grown by PVT were cleaved in ultra-high vacuum in order to obtain a surface free of contaminants. In both cases, XRD shows a c-axis orientation and the pole diagrams prove the epitaxial relationship between film and substrate. The ARPES images show the linear dispersion characteristic of the surface states of TI materials. The samples grown by PVT, a relatively simple and cost-effective technique, show the same high quality and TI properties as those grown by MBE.

Keywords: Bismuth telluride, molecular beam epitaxy, physical vapor transport, topological insulator

Procedia PDF Downloads 196
881 Alterations of Molecular Characteristics of Polyethylene under the Influence of External Effects

Authors: Vigen Barkhudaryan

Abstract:

The influence of external effects (γ-radiation, UV radiation, high temperature) in the presence of air oxygen on structural transformations of low-density polyethylene (LDPE) has been investigated as a function of the polymer thickness and the intensity and dose of the external actions. The methods of viscosimetry, light scattering, turbidimetry and gelation measurements were used for this purpose. A comparison of the influence of the external effects on LDPE shows that the destruction and cross-linking processes of macromolecules proceed simultaneously under all kinds of external effects. A remarkable growth of the average molecular mass of LDPE with increasing irradiation dose and heat-treatment exposure was established. This growth was linear for the mass-average molecular mass and, at the initial doses, is mainly the result of increased macromolecular branching. As a result, the macromolecular hydrodynamic volumes change, and therefore the dependence of the viscosity-average molecular mass on dose passes through a minimum at the initial doses. A significant change in the molecular mass, sizes and shape of the LDPE macromolecules occurs under the influence of external effects. During γ-irradiation and heat treatment, the influence is limited only by the diffusion of oxygen. Under UV irradiation, the influence is limited both by the diffusion of oxygen and by the penetration of the radiation. Consequently, the molecular transformations are deeper and more evident in the case of γ-irradiation, since the polymer is transformed throughout its whole volume. It was also established that the mechanism of molecular transformations in the surface layer of the polymer distinctly differs from that in the deeper layers of the sample. A comparison of the results of these investigations allows us to conclude that the mechanisms of influence of the investigated external effects on polyethylene are similar.

Keywords: cross-linking, destruction, high temperature, LDPE, γ-radiations, UV-radiations

Procedia PDF Downloads 322
880 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement

Authors: Hu Zhenxing, Gao Jianxin

Abstract:

Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for both 3D shape and surface deformation measurement of a component, which has found increasing applications in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo-matching, distortion, etc. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors. The stereo-matching errors depend on the speckle quality and the matching algorithm, which can only be controlled within a limited range. The distortion, in particular, is non-linear in a complex image acquisition system; thus, the distortion correction should be carefully considered. Moreover, the distortion function is difficult to formulate with conventional models in a complex image acquisition system, such as in cases where microscopes and other complex lenses are involved. The errors of the distortion correction will propagate to the reconstructed 3D coordinates. To address the problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates into an ideal plane without distortions. This approach is suitable for any image acquisition distortion model. It is used as a prior step to convert distorted coordinates to ideal positions, which enables the camera to conform to the pin-hole model. A procedure of this approach is presented for stereo-based DIC. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of the conventional method and the proposed approach.
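
The 2D B-spline mapping idea described above can be illustrated with a short sketch; the calibration grid, the synthetic radial distortion and all variable names below are assumptions, not the authors' implementation.

```python
# Illustrative sketch: correcting image coordinates with 2D B-spline mapping
# functions fitted on a calibration grid (distorted -> ideal positions).
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Calibration grid: ideal (undistorted) positions and their distorted images.
xi, yi = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))
xi, yi = xi.ravel(), yi.ravel()
r2 = xi**2 + yi**2
xd = xi * (1 + 0.05 * r2)          # synthetic radial distortion (assumed)
yd = yi * (1 + 0.05 * r2)

# Fit one bicubic B-spline per coordinate: (xd, yd) -> (xi, yi).
map_x = SmoothBivariateSpline(xd, yd, xi, kx=3, ky=3)
map_y = SmoothBivariateSpline(xd, yd, yi, kx=3, ky=3)

# Correct an arbitrary distorted point before stereo matching / triangulation.
u, v = 0.40, -0.25
print(map_x.ev(u, v), map_y.ev(u, v))
```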

Keywords: distortion, stereo-based digital image correlation, b-spline, 3D, 2D

Procedia PDF Downloads 503
879 Performance Evaluation of a Fuel Cell Membrane Electrode Assembly Prepared from a Reinforced Proton Exchange Membrane

Authors: Yingjeng James Li, Yun Jyun Ou, Chih Chi Hsu, Chiao-Chih Hu

Abstract:

A fuel cell is a device that produces electric power by reacting fuel and oxidant electrochemically. There is no pollution produced from a fuel cell if hydrogen is employed as the fuel. Therefore, a fuel cell is considered a zero-emission device and a source of green power. The membrane electrode assembly (MEA) is the key component of a fuel cell. It is, therefore, beneficial to develop MEAs with high performance. In this study, an MEA for a proton exchange membrane fuel cell (PEMFC) was prepared from a 15-micron thick reinforced PEM. The active area of this MEA is 25 cm2. Carbon-supported platinum (Pt/C) was employed as the catalyst for both anode and cathode. The platinum loading is 0.6 mg/cm2 based on the sum of anode and cathode. Commercially available carbon papers coated with a micro-porous layer (MPL) serve as gas diffusion layers (GDLs). The original thickness of the GDL is 250 μm; it was compressed down to 163 μm when assembled into the single-cell test fixture. Polarization curves were taken using eight different test conditions. At our standard test condition (cell: 70 °C; anode: pure hydrogen, 100% RH, 1.2 stoic, ambient pressure; cathode: air, 100% RH, 3.0 stoic, ambient pressure), the cell current density is 1250 mA/cm2 at 0.6 V and 2400 mA/cm2 at 0.4 V. At the self-humidified condition and a cell temperature of 55 °C, the cell current density is 1050 mA/cm2 at 0.6 V and 2250 mA/cm2 at 0.4 V. The hydrogen crossover rate of the MEA is 0.0108 mL/(min·cm2) according to linear sweep voltammetry experiments. According to the MEA’s Pt loading and the cyclic voltammetry experiments, the Pt electrochemical surface area is 60 m2/g. The ohmic part of the impedance spectroscopy results shows that the membrane resistance is about 60 mΩ·cm2 when the MEA is operated at 0.6 V.

Keywords: fuel cell, membrane electrode assembly, proton exchange membrane, reinforced

Procedia PDF Downloads 296
878 Location Choice: The Effects of Network Configuration upon the Distribution of Economic Activities in the Chinese City of Nanning

Authors: Chuan Yang, Jing Bie, Zhong Wang, Panagiotis Psimoulis

Abstract:

Contemporary studies investigating the association between the spatial configuration of the urban network and economic activities at the street level have mostly been conducted within the space syntax conceptual framework. These findings supported the theory of 'movement economy' and demonstrated the impact of street configuration on the distribution of pedestrian movement and land-use shaping, especially retail activities. However, the effects varied between different urban contexts. In this paper, the relationship between the distribution of economic activity and the configurational characters of the urban network was examined at the segment level. The study area included three neighbourhood types: urban, suburban, and rural. Among all neighbourhoods, three kinds of urban network form were recognised: 'tree-like', grid, and organic patterns. To investigate the nested effects of urban configuration, measured by the space syntax approach, and urban context, multilevel zero-inflated negative binomial (ZINB) regression models were constructed. Additionally, considering spatial autocorrelation, a spatial lag term was also included in the model as an independent variable. The random-effect ZINB model shows superiority over the plain ZINB model and the multilevel linear (ML) model in explaining how the pattern of economic activities is shaped over the urban environment. After adjusting for neighbourhood type and network form effects, connectivity and syntactic centrality significantly affect the clustering of economic activities. The comparison between accumulated and newly established economic activities illustrated their different preferences in the location choice of economic activity.
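
As an illustration of the segment-level count modelling described above, the sketch below fits a plain zero-inflated negative binomial model with statsmodels; the multilevel (random-effect) structure and the actual space syntax variables are not reproduced, and the simulated data and names are assumptions.

```python
# Minimal ZINB sketch: counts of economic activities per street segment,
# with excess zeros, regressed on assumed configurational predictors.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 500
connectivity = rng.normal(size=n)          # assumed predictors
centrality = rng.normal(size=n)
spatial_lag = rng.normal(size=n)
mu = np.exp(0.3 + 0.5 * connectivity + 0.4 * centrality + 0.2 * spatial_lag)
counts = rng.poisson(mu) * rng.binomial(1, 0.7, size=n)   # excess zeros

X = sm.add_constant(np.column_stack([connectivity, centrality, spatial_lag]))
model = ZeroInflatedNegativeBinomialP(
    counts, X, exog_infl=sm.add_constant(connectivity), inflation='logit', p=2)
res = model.fit(maxiter=200, disp=False)
print(res.summary())
```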

Keywords: space syntax, economic activities, multilevel model, Chinese city

Procedia PDF Downloads 128
877 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI

Authors: Ananya Ananya, Karthik Rao

Abstract:

Accurate segmentation of knee cartilage in 3D magnetic resonance (MR) images for quantitative assessment of volume is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task in a slice-by-slice manner, taking 15-20 minutes per 3D image, which leads to high inter- and intra-observer variability. Hence, automatic methods for knee cartilage segmentation are desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods for knee cartilage segmentation in 3D MRI based on 2D convolutional neural networks. The architectures are validated on 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, which are then combined to give the segmentation of the whole 3D image. The proposed methods are modified versions of U-net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibial cartilage, femoral bone and tibial bone, the cartilages being the primary components of interest. U-net consists of a contracting path and an expanding path, to capture context and localization respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of the modified U-net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice score coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibial cartilage, with manual segmentation as the reference.
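
The dilated-convolution idea can be sketched as below; this is an assumed, minimal PyTorch example of a 5-label per-pixel classifier for a 2D slice, not the authors' architecture, and the channel sizes and dilation rates are illustrative.

```python
# Minimal dilated-convolution segmentation head for one 2D MR slice.
import torch
import torch.nn as nn

class DilatedSegNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=5):
        super().__init__()
        layers, ch = [], in_ch
        # Exponentially growing dilation widens the receptive field while
        # the parameter count grows only linearly with depth.
        for d in (1, 2, 4, 8):
            layers += [nn.Conv2d(ch, 32, 3, padding=d, dilation=d),
                       nn.BatchNorm2d(32), nn.ReLU(inplace=True)]
            ch = 32
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Conv2d(32, n_classes, 1)   # per-pixel logits

    def forward(self, x):
        return self.classifier(self.features(x))

slice_2d = torch.randn(1, 1, 256, 256)        # (batch, channel, H, W)
print(DilatedSegNet()(slice_2d).shape)        # -> torch.Size([1, 5, 256, 256])
```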

Keywords: convolutional neural networks, dilated convolutions, 3 dimensional, fully automated, knee cartilage, MRI, segmentation, U-net

Procedia PDF Downloads 264
876 Electrical Conductivity as Pedotransfer Function in the Determination of Sodium Adsorption Ratio in Soil System in Managing Micro Level Farming Practices in India: An Effective Low Cost Technology

Authors: Usha Loganathan, Haresh Pandya

Abstract:

Analysis and correlation of soil properties represent an important starting point for precision agriculture, which is currently promoted and implemented in the developed world. Establishing relationships among indices of soil salinity has always been a challenging task in salt-affected soils, necessitating unique approaches for their reclamation and management to sustain the long-term productivity of soil. Soil salinity indices like electrical conductivity (EC) and sodium adsorption ratio (SAR) are normally used to characterize soils as either sodic or saline-sodic. Currently, determination of the soil sodium adsorption ratio is a more accepted and reliable measure of soil salinity. However, it involves arduous and protracted laboratory investigations, which demands the development of new and economical methods to determine SAR based on a simple soil salinity index. A linear regression model to predict soil SAR from soil electrical conductivity has been developed and is presented in this paper, according to which soil SAR could very well be worked out as a pedotransfer function of soil EC. The present study was carried out in Orathupalayam (11.09-11.11 N latitude and 74.54-77.59 E longitude) in the vicinity of the Orathupalayam Reservoir of the Noyyal River Basin, India, over a period of 3 consecutive years from September 2013 through February 2016, in different locations chosen randomly across different seasons. The research findings are discussed in the light of micro-level farming practices in India and recommend determination of SAR as a low-cost technology aiding in the effective management of salt-affected agricultural land.
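
A minimal sketch of the kind of pedotransfer relation described above is given below; the EC and SAR values are assumed, and the fitted coefficients are illustrative only.

```python
# Ordinary least-squares fit of SAR against EC, then prediction for new EC.
import numpy as np

ec = np.array([0.8, 1.5, 2.3, 3.1, 4.0, 5.2, 6.5])     # dS/m (assumed values)
sar = np.array([2.1, 3.4, 5.0, 6.8, 8.1, 10.5, 12.9])  # (mmol/L)^0.5 (assumed)

slope, intercept = np.polyfit(ec, sar, deg=1)           # SAR = a*EC + b
pred = slope * ec + intercept
r2 = 1 - np.sum((sar - pred) ** 2) / np.sum((sar - sar.mean()) ** 2)
print(f"SAR = {slope:.2f}*EC + {intercept:.2f}  (R2 = {r2:.3f})")
```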

Keywords: electrical conductivity, Orathupalayam, pedotransfer function, sodium adsorption ratio

Procedia PDF Downloads 258
875 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

The prediction of significant wave height is an issue of great interest in the field of coastal activities because of the non-linear behavior of the wave height and the complexity of its prediction. This study aims to present a machine learning model to forecast the significant wave height recorded by the oceanographic wave-measuring buoys anchored at Mooloolaba, from the Queensland Government Data. Modeling was performed with a multilayer perceptron neural network-genetic algorithm (GA-MLP), considering ReLU(x) as the activation function of the MLPNN. The GA is in charge of optimizing the MLPNN hyperparameters (learning rate, hidden layers, neurons, and activation functions) and of the wrapper feature selection for the window width size. Results are assessed using Mean Square Error (MSE), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The GA-MLP algorithm was run with a population size of thirty individuals over eight generations for the prediction optimization of 5 steps forward, obtaining a performance evaluation of 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE with a correlation factor of 0.99940. The GA-MLP algorithm was compared with the ARIMA forecasting model and performed better on all performance criteria, validating the potential of this algorithm.
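
For reference, the error measures reported above can be computed as in the small helper below; the observed and forecast arrays are assumed values.

```python
# MSE, RMSE, MAE and MAPE for a forecast against observations.
import numpy as np

def forecast_errors(y_true, y_pred):
    err = y_pred - y_true
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))   # y_true must be non-zero
    return mse, rmse, mae, mape

obs = np.array([1.20, 1.35, 1.10, 0.95, 1.42])   # significant wave height (m)
fc = np.array([1.18, 1.30, 1.15, 0.99, 1.38])
print("MSE=%.5f RMSE=%.5f MAE=%.5f MAPE=%.3f%%" % forecast_errors(obs, fc))
```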

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 112
874 Innovative Predictive Modeling and Characterization of Composite Material Properties Using Machine Learning and Genetic Algorithms

Authors: Hamdi Beji, Toufik Kanit, Tanguy Messager

Abstract:

This study aims to construct a predictive model proficient in foreseeing the linear elastic and thermal characteristics of composite materials, drawing on a multitude of influencing parameters. These parameters encompass the shape of inclusions (circular, elliptical, square, triangular), their spatial coordinates within the matrix, orientation, volume fraction (ranging from 0.05 to 0.4), and variations in contrast (spanning from 10 to 200). A variety of machine learning techniques are deployed, including decision trees, random forests, support vector machines, k-nearest neighbors, and an artificial neural network (ANN), to build this predictive model. Moreover, this research goes beyond the predictive aspect by delving into an inverse analysis using genetic algorithms. The intent is to unveil the intrinsic characteristics of composite materials by evaluating their thermomechanical responses. The foundation of this research lies in the establishment of a comprehensive database that accounts for the array of input parameters mentioned earlier. This database, enriched with this diversity of input variables, serves as a bedrock for the creation of machine learning and genetic algorithm-based models. These models are meticulously trained not only to predict but also to elucidate the mechanical and thermal behavior of composite materials. Remarkably, the coupling of machine learning and genetic algorithms has proven highly effective, yielding predictions with remarkable accuracy, boasting scores ranging between 0.97 and 0.99. This achievement marks a significant breakthrough, demonstrating the potential of this innovative approach in the field of materials engineering.

Keywords: machine learning, composite materials, genetic algorithms, mechanical and thermal properties

Procedia PDF Downloads 57
873 Effect of Islamic Finance on Jobs Generation in Punjab, Pakistan

Authors: B. Ashraf, A. M. Malik

Abstract:

The study was conducted at the Department of Economics and Agriculture Economics, Pir Mahar Ali Shah ARID Agriculture University, Punjab, Pakistan during 2013-16 with the purpose of discovering the effect of Islamic finance/banking on employment in Punjab, Pakistan. The Islamic banking system is a sub-component of the conventional banking system in various countries of the world; in Pakistan, however, it has been established as a separate Islamic banking system. Islamic banking operates under the doctrine of Shariah. It is claimed that this form of banking is free of interest (Riba) and addresses the philosophy and basic values of Islam in finance, reducing uncertainty, risk and other speculative activities. Two Islamic banks, Meezan Bank Limited (Pakistan) and Al-Baraka Bank Limited (Pakistan), covering north Punjab (Bahawalnagar), central Punjab (Lahore) and west Punjab (Gujrat), Pakistan, were randomly selected for the research. A total of 206 samples were collected from the defined areas and banks through a questionnaire. The data were analyzed using the Statistical Package for Social Sciences (SPSS) version 21.0. Multiple linear regression was applied to test the hypotheses. The results revealed that asset formation had a significant positive impact, whereas technology, length of business (experience) and business size had significant negative impacts on employment generation in Islamic finance/banking in Punjab, Pakistan. It is concluded that employment opportunities may be created in the country by extending finance to businesses/firms to start new businesses and by increasing public awareness of Islamic banks through intensive publicity. Islamic financial institutions may also be encouraged by the Government, as this enhances employment in the country.

Keywords: assets formation, borrowers, employment generation, Islamic banks, Islamic finance

Procedia PDF Downloads 330
872 Numerical Investigation of the Integration of a Micro-Combustor with a Free Piston Stirling Engine in an Energy Recovery System

Authors: Ayodeji Sowale, Athanasios Kolios, Beatriz Fidalgo, Tosin Somorin, Aikaterini Anastasopoulou, Alison Parker, Leon Williams, Ewan McAdam, Sean Tyrrel

Abstract:

Recently, energy recovery systems have been thriving and attracting attention in the power generation sector, due to the demand for cleaner forms of energy that are friendly and safe for the environment. This has created an avenue for cogeneration, where Combined Heat and Power (CHP) technologies have been recognised for their feasibility and use in homes and small-scale businesses. The efficiency of combustors and the advantages of free piston Stirling engines over other conventional engines in terms of output power and efficiency have been observed and considered. This study presents the numerical analysis of a micro-combustor with a free piston Stirling engine in an integrated model of a Nano Membrane Toilet (NMT) unit. The NMT unit will use the micro-combustor to produce waste heat of high energy content from the combustion of human waste, and the heat generated will power the free piston Stirling engine, which will be connected to a linear alternator for electricity production. The thermodynamic influence of the combustor on the free piston Stirling engine was observed, based on the heat transfer from the flue gas to the working gas of the free piston Stirling engine. The results showed that, with an input of 25 MJ/kg of faecal matter and a flue gas temperature of 773 K from the micro-combustor, the free piston Stirling engine generates a daily output power of 428 W, at a thermal efficiency of 10.7% and an engine speed of 1800 rpm. An experimental investigation into the integration of the micro-combustor and free piston Stirling engine with the NMT unit is currently underway.

Keywords: free piston Stirling engine, micro-combustor, nano membrane toilet, thermodynamics

Procedia PDF Downloads 262
871 Coupled Hydro-Geomechanical Modeling of Oil Reservoir Considering Non-Newtonian Fluid through a Fracture

Authors: Juan Huang, Hugo Ninanya

Abstract:

Oil has been used for many years as a source of energy and as a feedstock for materials such as asphalt or rubber. This is the reason why new technologies have been implemented through time. However, research still needs to expand due to the new challenges engineers face every day, such as unconventional reservoirs. Various numerical methodologies have been applied in petroleum engineering as tools to optimize the production of reservoirs before drilling a wellbore, although not all of them are equally effective for studying fracture propagation. Analytical methods like those based on linear elastic fracture mechanics fail to give a reasonable prediction when simulating fracture propagation in ductile materials, whereas numerical methods based on the cohesive zone method (CZM) allow the elastoplastic behavior of a reservoir to be represented through a constitutive model; therefore, predictions in terms of displacements and pressure will be more reliable. In this work, a coupled hydro-geomechanical model of horizontal wells in fractured rock was developed using ABAQUS; both the extended finite element method and cohesive elements were used to represent predefined fractures in a 2-D model. A power law was considered to represent the rheological behavior of the fluid (shear-thinning, power-law index < 1) through the fractures, together with the leak-off rate permeating into the matrix. Results are shown in terms of the aperture and length of the fracture, the pressure within the fracture, and the fluid loss. A higher infiltration rate into the matrix was observed as the power-law index decreases. Finally, a sensitivity analysis is performed to identify the most influential factor in fluid loss.
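
For reference, the power-law (Ostwald-de Waele) rheology implied above has the standard form below, where an index n < 1 gives shear-thinning behaviour; the symbols follow common usage and are not taken from the paper.

```latex
% Power-law (Ostwald-de Waele) model: shear stress and effective viscosity
\tau = K \dot{\gamma}^{\,n}, \qquad
\mu_{\mathrm{eff}}(\dot{\gamma}) = K \dot{\gamma}^{\,n-1}, \qquad
n < 1 \ \text{(shear-thinning)}
```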

Keywords: fracture, hydro-geomechanical model, non-Newtonian fluid, numerical analysis, sensitivity analysis

Procedia PDF Downloads 208
870 Analysis of Factors Affecting the Number of Infant and Maternal Mortality in East Java with Geographically Weighted Bivariate Generalized Poisson Regression Method

Authors: Luh Eka Suryani, Purhadi

Abstract:

Poisson regression is a non-linear regression model with a response variable in the form of count data that follows a Poisson distribution. A pair of count variables that show high correlation can be analyzed by bivariate Poisson regression. The numbers of infant deaths and maternal deaths are count data that can be analyzed in this way. Poisson regression assumes equidispersion, where the mean and variance values are equal. However, actual count data may have a variance that is greater or less than the mean (overdispersion and underdispersion). Violations of this assumption can be overcome by applying Generalized Poisson Regression. Characteristics of each regency can affect the number of cases that occur; this can be accommodated by a spatial analysis called geographically weighted regression. This study analyzes the numbers of infant and maternal deaths based on conditions in East Java in 2016 using the Geographically Weighted Bivariate Generalized Poisson Regression (GWBGPR) method. Modeling is done with adaptive bisquare kernel weighting, which produces 3 regency groups based on the infant mortality rate and 5 regency groups based on the maternal mortality rate. Variables that significantly influence the numbers of infant and maternal deaths are the percentages of pregnant women who visit health workers at least 4 times during pregnancy, pregnant women who receive Fe3 tablets, obstetric complications handled, households with clean and healthy behavior, and married women whose first marriage was at an age under 18 years.
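
As a point of reference, the adaptive bisquare kernel mentioned above assigns spatial weights of the standard form below; the notation is generic and not taken verbatim from the study.

```latex
% Adaptive bisquare kernel: b_i is the adaptive bandwidth at location i
% (distance to its q-th nearest neighbour), d_ij the distance to location j.
w_{ij} =
\begin{cases}
\left[ 1 - \left( d_{ij}/b_i \right)^{2} \right]^{2}, & d_{ij} \le b_i \\[4pt]
0, & d_{ij} > b_i
\end{cases}
```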

Keywords: adaptive bisquare kernel, GWBGPR, infant mortality, maternal mortality, overdispersion

Procedia PDF Downloads 164
869 Optimal Operation of Bakhtiari and Roudbar Dam Using Differential Evolution Algorithms

Authors: Ramin Mansouri

Abstract:

Because river discharge regimes contrast with water demands, one of the best ways to use water resources is to regulate the natural flow of rivers and supply water needs by constructing dams. In the optimal utilization of reservoirs, considering multiple important goals together at the same time is of very high importance. To analyze this, statistical data of the Bakhtiari and Roudbar dams over 46 years (1955 to 2001) are used. Initially, an appropriate objective function was specified and, using the differential evolution (DE) algorithm, the rule curve was developed. Subsequently, the operation policy using rule curves was compared to the standard operation policy. The proposed method distributed the shortage over the whole year, and the lowest damage was inflicted on the system. The standard deviation of the monthly shortfall in each year was lower with the proposed algorithm than with the other two methods. The results show that median values for the coefficients F and Cr provide the optimum situation and prevent the DE algorithm from being trapped in a local optimum. The optimal values are 0.6 and 0.5 for the F and Cr coefficients, respectively. After finding the best combination of the coefficient values F and Cr, the effect of population size was examined. For this purpose, populations of 4, 25, 50, 100, 500 and 1000 members were studied for two generation numbers (G=50 and 100). The results indicate that a generation number of 200 is suitable for the optimization. The increase in run time with population size follows an almost linear trend, which indicates the effect of population size on the algorithm's runtime. Hence, specifying a suitable population size to obtain optimal results is very important. The standard operation policy had a better reversibility percentage but inflicts severe vulnerability on the system. The results obtained in years of low rainfall were very good compared with the other comparative methods.
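
The roles of the mutation factor F and crossover rate Cr discussed above can be illustrated with a compact DE/rand/1/bin sketch; the objective function and bounds are placeholders, not the reservoir operation model.

```python
# Differential evolution with mutation factor F and crossover rate Cr.
import numpy as np

def de_minimize(obj, bounds, F=0.6, Cr=0.5, pop_size=25, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([obj(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # mutation
            cross = rng.random(dim) < Cr                    # binomial crossover
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = obj(trial)
            if f_trial < fit[i]:                            # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

best_x, best_f = de_minimize(lambda x: np.sum(x**2), bounds=[(-5, 5)] * 3)
print(best_x, best_f)
```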

Keywords: reservoirs, differential evolution, dam, optimal operation

Procedia PDF Downloads 81
868 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions - many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types and properties of atoms; the types of bonds; and bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO.
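
The Δ-learning strategy described above can be illustrated with a simple sketch; a generic regressor stands in for the GCN, and the descriptors and fidelity data are simulated assumptions.

```python
# Delta-learning: train a regressor on (high - low) fidelity outputs and predict
# high fidelity as low-fidelity value + learned correction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 8))                                 # descriptors (assumed)
y_low = X.sum(axis=1) + 0.1 * rng.normal(size=200)       # cheap-method output
y_high = y_low + 0.5 * np.sin(X[:, 0] * 3)               # expensive-method output

# Only a small subset has high-fidelity labels.
idx = rng.choice(200, size=40, replace=False)
delta_model = RandomForestRegressor(n_estimators=200, random_state=0)
delta_model.fit(X[idx], (y_high - y_low)[idx])           # learn the correction

y_pred = y_low + delta_model.predict(X)                  # delta-ML prediction
print("MAE vs. true high fidelity:", np.mean(np.abs(y_pred - y_high)))
```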

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 46
867 Developing a GIS-Based Tool for the Management of Fats, Oils, and Grease (FOG): A Case Study of Thames Water Wastewater Catchment

Authors: Thomas D. Collin, Rachel Cunningham, Bruce Jefferson, Raffaella Villa

Abstract:

Fats, oils and grease (FOG) are by-products of food preparation and cooking processes. FOG enters wastewater systems through a variety of sources such as households, food service establishments, and industrial food facilities. Over time, if no source control is in place, FOG builds up on pipe walls, leading to blockages and potentially to sewer overflows, which are a major risk to the environment and human health. UK water utilities spend millions of pounds annually trying to control FOG. Despite UK legislation specifying that discharge of such material is against the law, it is often complicated for water companies to identify and prosecute offenders. Hence, there is uncertainty regarding the approach to take in terms of FOG management, and research is needed to seize the full potential of implementing current practices. The aim of this research was to undertake a comprehensive study to document the extent of FOG problems in sewer lines and reinforce existing knowledge. Data were collected to develop a model estimating the quantities of FOG available for recovery within Thames Water wastewater catchments. Geographical Information System (GIS) software was used in conjunction with the model to integrate data with a geographical component. FOG was responsible for at least 1/3 of sewer blockages in the Thames Water waste area. A waste-based approach was developed through an extensive review to estimate the potential for FOG collection and recovery. Three main sources were identified: residential, commercial and industrial. Commercial properties were identified as one of the major FOG producers. The total potential FOG generated was estimated for the 354 wastewater catchments. Additionally, raw and settled sewage were sampled and analysed for FOG (as hexane extractable material) monthly at 20 sewage treatment works (STW) for three years. A good correlation was found between the sampled FOG and the population equivalent (PE). On average, a difference of 43.03% was found between the estimated FOG (waste-based approach) and the sampled FOG (raw sewage sampling). It was suggested that the approach undertaken could overestimate the FOG available, that the sampling could only capture a fraction of the FOG arriving at the STW, and/or that the difference could account for FOG accumulating in sewer lines. Furthermore, it was estimated that on average FOG could contribute up to 12.99% of the primary sludge removed. The model was further used to investigate the relationship between the estimated FOG and the number of blockages: the higher the FOG potential, the higher the number of FOG-related blockages. The GIS-based tool was used to identify critical areas (i.e. high FOG potential and a high number of FOG blockages). As reported in the literature, FOG was one of the main causes of sewer blockages. By identifying these critical areas, the model further explored the potential for source control in terms of 'sewer relief' and waste recovery, thus helping to target where the benefits from the implementation of management strategies could be highest. However, FOG is still likely to persist throughout the networks, and further research is needed to assess downstream impacts (i.e. at the STW).

Keywords: fat, FOG, GIS, grease, oil, sewer blockages, sewer networks

Procedia PDF Downloads 213
866 Exploring 1,2,4-Triazine-3(2H)-One Derivatives as Anticancer Agents for Breast Cancer: A QSAR, Molecular Docking, ADMET, and Molecular Dynamics

Authors: Said Belaaouad

Abstract:

This study aimed to explore the quantitative structure-activity relationship (QSAR) of 1,2,4-Triazine-3(2H)-one derivatives as potential anticancer agents against breast cancer. The electronic descriptors were obtained using the Density Functional Theory (DFT) method, and a multiple linear regression technique was employed to construct the QSAR model. The model exhibited favorable statistical parameters, including R2=0.849, R2adj=0.656, MSE=0.056, R2test=0.710, and Q2cv=0.542, indicating its reliability. Among the descriptors analyzed, absolute electronegativity (χ), total energy (TE), number of hydrogen bond donors (NHD), water solubility (LogS), and shape coefficient (I) were identified as influential factors. Furthermore, leveraging the validated QSAR model, new derivatives of 1,2,4-Triazine-3(2H)-one were designed, and their activity and pharmacokinetic properties were estimated. Subsequently, molecular docking and molecular dynamics (MD) simulations were employed to assess the binding affinity of the designed molecules. The tubulin colchicine binding site, which plays a crucial role in cancer treatment, was chosen as the target protein. Over a simulation trajectory spanning 100 ns, the binding affinity was calculated using the MMPBSA script. As a result, fourteen novel tubulin-colchicine inhibitors with promising pharmacokinetic characteristics were identified. Overall, this study provides valuable insights into the QSAR of 1,2,4-Triazine-3(2H)-one derivatives as potential anticancer agents, along with the design of new compounds and their assessment through molecular docking and dynamics simulations targeting the tubulin-colchicine binding site.
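
A hedged sketch of the MLR-based QSAR workflow described above is given below, reporting R2 together with a leave-one-out cross-validated Q2; the descriptor matrix and activities are simulated, and the descriptor ordering is only an assumption.

```python
# Multiple linear regression QSAR with leave-one-out cross-validation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
n = 30
# Assumed descriptor columns: electronegativity, total energy, H-bond donors,
# logS, shape coefficient (synthetic values for illustration).
X = rng.normal(size=(n, 5))
activity = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.3, n)

model = LinearRegression().fit(X, activity)
r2 = model.score(X, activity)
loo_pred = cross_val_predict(LinearRegression(), X, activity, cv=LeaveOneOut())
q2 = 1 - np.sum((activity - loo_pred) ** 2) / np.sum((activity - activity.mean()) ** 2)
print(f"R2 = {r2:.3f}, Q2(LOO) = {q2:.3f}")
```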

Keywords: QSAR, molecular docking, ADMET, 1,2,4-triazin-3(2H)-ones, breast cancer, anticancer, molecular dynamic simulations, MMPBSA calculation

Procedia PDF Downloads 101
865 Clay Hydrogel Nanocomposite for Controlled Small Molecule Release

Authors: Xiaolin Li, Terence Turney, John Forsythe, Bryce Feltis, Paul Wright, Vinh Truong, Will Gates

Abstract:

Clay-hydrogel nanocomposites have attracted great attention recently, mainly because of their enhanced mechanical properties and ease of fabrication. Moreover, the unique platelet structure of clay nanoparticles enables the incorporation of bioactive molecules, such as proteins or drugs, through ion exchange, adsorption or intercalation. This study seeks to improve the mechanical and rheological properties of a novel hydrogel system, copolymerized from a tetrapodal polyethylene glycol (PEG) thiol and a linear, triblock PEG-PPG-PEG (PPG: polypropylene glycol) α,ω-bispropynoate polymer, with the simultaneous incorporation of various amounts of Na-saturated, montmorillonite clay (MMT) platelets (av. lateral dimension = 200 nm), to form a bioactive three-dimensional network. Although the parent hydrogel has controlled swelling ability and its PEG groups have good affinity for the clay platelets, it suffers from poor mechanical stability and is currently unsuitable for potential applications. Nanocomposite hydrogels containing 4 wt% MMT showed a twelve-fold enhancement in compressive strength, reaching 0.75 MPa, and also a three-fold acceleration in gelation time, when compared with the parent hydrogel. Interestingly, clay nanoplatelet incorporation into the hydrogel slowed down the rate of its dehydration in air. Preliminary results showed that protein binding by the MMT varied with the nature of the protein, as horseradish peroxidase (HRP) was more strongly bound than bovine serum albumin. The HRP was no longer active when bound, presumably as a result of extensive structural refolding. Further work is being undertaken to assess protein binding behaviour within the nanocomposite hydrogel for potential diabetic wound healing applications.

Keywords: hydrogel, nanocomposite, small molecule, wound healing

Procedia PDF Downloads 272
864 Development and Evaluation of New Complementary Food from Maize, Soya Bean and Moringa for Young Children

Authors: Berhan Fikru

Abstract:

The objective of this study was to develop new complementary foods from maize, soybean and moringa for young children. The complementary foods were formulated with linear programming (LP Nutri-survey software), and Faffa (corn soya blend) was used as the control. Analyses were made of the formulated blends and compared with the control and the recommended daily intake (RDI). Three complementary foods were composed of maize, soya bean, moringa and sugar in ratios of 65:20:15:0, 55:25:15:5 and 65:20:10:5 for blends 1, 2 and 3, respectively. The blends were formulated based on the protein, energy, mineral (iron, zinc and calcium) and vitamin (vitamins A and C) content of the foods. The overall results indicated that the nutrient content of Faffa (control) was 16.32% protein, 422.31 kcal energy, 64.47 mg calcium, 3.8 mg iron, 1.87 mg zinc, 0.19 mg vitamin A and 1.19 mg vitamin C; blend 1 had 17.16% protein, 429.84 kcal energy, 330.40 mg calcium, 6.19 mg iron, 1.62 mg zinc, 6.33 mg vitamin A and 4.05 mg vitamin C; blend 2 had 20.26% protein, 418.79 kcal energy, 417.44 mg calcium, 9.26 mg iron, 2.16 mg zinc, 8.43 mg vitamin A and 4.19 mg vitamin C; whereas blend 3 exhibited 16.44% protein, 417.42 kcal energy, 242.4 mg calcium, 7.09 mg iron, 2.22 mg zinc, 3.69 mg vitamin A and 4.72 mg vitamin C. The differences between all means were statistically significant (P < 0.05). Sensory evaluation showed that the Faffa control and blend 3 were preferred by semi-trained panelists. Blend 3 was better in terms of its mineral and vitamin content than the Faffa corn soya blend, comparable with the WFP proprietary products CSB+ and CSB++, and fulfills the WHO recommendation for protein, energy and calcium. The suggested formulation with moringa powder can therefore be used as a complementary food to improve nutritional status and also help solve problems associated with protein-energy and micronutrient malnutrition among young children in developing countries, particularly in Ethiopia.
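
The linear-programming formulation step can be illustrated with a small sketch; the nutrient contents, costs and constraint values below are assumptions, not the values used in the study.

```python
# Blend formulation as a linear programme: choose ingredient fractions that
# meet minimum protein and energy targets at minimum cost.
import numpy as np
from scipy.optimize import linprog

ingredients = ["maize", "soybean", "moringa", "sugar"]
protein = np.array([9.4, 36.5, 27.1, 0.0])       # g per 100 g (assumed)
energy = np.array([365.0, 446.0, 205.0, 387.0])  # kcal per 100 g (assumed)
cost = np.array([0.4, 0.9, 1.5, 0.6])            # relative cost (assumed)

# Minimise cost subject to: protein >= 16 g/100 g, energy >= 400 kcal/100 g,
# fractions sum to 1, moringa at least 10% of the blend (assumed constraint).
res = linprog(
    c=cost,
    A_ub=-np.vstack([protein, energy]),          # -Ax <= -b  <=>  Ax >= b
    b_ub=-np.array([16.0, 400.0]),
    A_eq=np.ones((1, 4)), b_eq=[1.0],
    bounds=[(0, 0.65), (0, 0.65), (0.10, 0.65), (0, 0.65)],
    method="highs",
)
print(dict(zip(ingredients, np.round(res.x, 3))), "feasible:", res.success)
```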

Keywords: corn soya blend, proximate composition, micronutrient, mineral chelating agents, complementary foods

Procedia PDF Downloads 300
863 Studying Second Language Development from a Complex Dynamic Systems Perspective

Authors: L. Freeborn

Abstract:

This paper discusses the application of complex dynamic system theory (DST) to the study of individual differences in second language development. This transdisciplinary framework allows researchers to view the trajectory of language development as a dynamic, non-linear process. A DST approach views language as multi-componential, consisting of multiple complex systems and nested layers. These multiple components and systems continuously interact and influence each other at both the macro- and micro-level. Dynamic systems theory aims to explain and describe the development of the language system, rather than make predictions about its trajectory. Such a holistic and ecological approach to second language development allows researchers to include various research methods from neurological, cognitive, and social perspectives. A DST perspective would involve in-depth analyses as well as mixed methods research. To illustrate, a neurobiological approach to second language development could include non-invasive neuroimaging techniques such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) to investigate areas of brain activation during language-related tasks. A cognitive framework would further include behavioural research methods to assess the influence of intelligence and personality traits, as well as individual differences in foreign language aptitude, such as phonetic coding ability and working memory capacity. Exploring second language development from a DST approach would also benefit from including perspectives from the field of applied linguistics, regarding the teaching context, second language input, and the role of affective factors such as motivation. In this way, applying mixed research methods from neurobiological, cognitive, and social approaches would enable researchers to have a more holistic view of the dynamic and complex processes of second language development.

Keywords: dynamic systems theory, mixed methods, research design, second language development

Procedia PDF Downloads 140
862 Simultaneous Determination of Bisphenol A, Phthalates and Their Metabolites in Human Urine by Tandem SPE Coupled to GC-MS

Authors: L. Correia-Sá, S. Norberto, Conceição Calhau, C. Delerue-Matos, V. F. Domingues

Abstract:

Endocrine disrupting chemicals (EDCs) are synthetic compounds that, even though initially designed for a specific function, are now being linked with a wide range of side effects. The list of possible EDCs is growing and includes phthalates and bisphenol A (BPA). Phthalates are one of the most widely used plasticizers to improve the extensibility, elasticity and workability of polyvinyl chloride (PVC), polyvinyl acetates, etc. Considered non-toxic and harmless additives for polymers, they were used unrestrainedly all over the world for several decades. However, recent studies have indicated that some phthalates and their metabolic products are reproductive and developmental toxicants in animals and suspected endocrine disruptors in humans. BPA (2,2-bis(4-hydroxyphenyl)propane) is a high production volume chemical mainly used in the production of polycarbonate plastics and epoxy resins. Although BPA was initially considered to be a weak environmental estrogen, it is now known that this compound can stimulate several cellular responses at very low concentrations. The aim of this study was to develop a method based on tandem SPE to evaluate the presence of phthalates, their metabolites and BPA in human urine samples. The analyzed compounds included dibutyl phthalate (DBP), di-2-ethylhexyl phthalate (DEHP), BPA, mono-isobutyl phthalate (MiBP), monobutyl phthalate (MBP) and mono-(2-ethyl-5-oxohexyl) phthalate (MEOHP). Two SPE cartridges were applied, both from Phenomenex: the Strata-X polymeric reversed phase and the Strata-X-A (strong anion). Chromatographic analyses were carried out on a Thermo GC ULTRA GC-MS/MS. Good recoveries and linear calibration curves were obtained. After validation, the methodology was applied to human urine samples for the evaluation of phthalates, metabolites and BPA.

Keywords: Bisphenol A (BPA), gas chromatography, metabolites, phthalates, SPE, tandem mode

Procedia PDF Downloads 296
861 Hemodynamics of a Cerebral Aneurysm under Rest and Exercise Conditions

Authors: Shivam Patel, Abdullah Y. Usmani

Abstract:

Physiological flow under rest and exercise conditions in patient-specific cerebral aneurysm models is numerically investigated. A finite-volume based code with BiCGStab as the linear equation solver is used to simulate the unsteady three-dimensional flow field governed by the incompressible Navier-Stokes equations. Flow characteristics are first established in a healthy cerebral artery for both physiological conditions. The effect of a saccular aneurysm on cerebral hemodynamics is then explored through a comparative analysis of the velocity distribution, nature of flow patterns, wall pressure and wall shear stress (WSS) against the reference configuration. The efficacy of coil embolization as a potential strategy of surgical intervention is also examined by modelling the coil as a homogeneous and isotropic porous medium, to which the extended Darcy law, including the Forchheimer and Brinkman terms, is applicable. The Carreau-Yasuda non-Newtonian blood model is incorporated to capture the shear-thinning behavior of blood. Rest and exercise conditions correspond to normotensive and hypertensive blood pressures, respectively. The results indicate that fluid impingement on the outer wall of the arterial bend leads to an abnormal distribution of wall pressure and WSS, which is expected to be the primary cause of the localized aneurysm. Exercise correlates with elevated flow velocity, vortex strength, wall pressure and WSS inside the aneurysm sac. With the insertion of coils in the aneurysm cavity, the flow bypasses the dilatation, leading to a decline in flow velocities and WSS. Particle residence time is observed to be lower under exercise conditions, a factor favorable for arresting plaque deposition and combating atherosclerosis.
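
For reference, the Carreau-Yasuda shear-thinning viscosity law mentioned above has the standard form below; the symbols follow common usage and the parameter values used in the study are not reproduced.

```latex
% Carreau-Yasuda model: effective viscosity as a function of shear rate,
% with zero- and infinite-shear viscosities mu_0 and mu_inf.
\mu(\dot{\gamma}) = \mu_\infty
  + (\mu_0 - \mu_\infty)\left[ 1 + (\lambda \dot{\gamma})^{a} \right]^{\frac{n-1}{a}}
```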

Keywords: 3D FVM, cerebral aneurysm, hypertension, coil embolization, non-Newtonian fluid

Procedia PDF Downloads 237
860 Use of Sewage Sludge Ash as Partial Cement Replacement in the Production of Mortars

Authors: Domagoj Nakic, Drazen Vouk, Nina Stirmer, Mario Siljeg, Ana Baricevic

Abstract:

Wastewater treatment processes generate significant quantities of sewage sludge that need to be adequately treated and disposed of. In many EU countries, the problem of adequate disposal of sewage sludge has not been solved, nor is it governed by uniform rules, instructions or guidelines. Disposal of sewage sludge is important not only in terms of satisfying the regulations, but also in terms of choosing the optimal wastewater and sludge treatment technology. Among the solutions that seem reasonable, recycling of sewage sludge and its byproducts ranks as the top recommendation. Within the framework of sustainable development, recycling of sludge almost completely closes the cycle of wastewater treatment, in which only negligible amounts of waste requiring landfilling are generated. In many EU countries, significant amounts of sewage sludge are incinerated, resulting in a new byproduct in the form of ash. Sewage sludge ash is three to five times smaller in volume compared to stabilized and dehydrated sludge, but it also requires further management. The combustion process also destroys hazardous organic components in the sludge and minimizes unpleasant odors. The basic objective of the presented research is to explore the possibilities of recycling sewage sludge ash as a supplementary cementitious material. The main oxides present in sewage sludge ash (SiO2, Al2O3 and CaO) are similar to those in cement, so it can be considered a latent hydraulic and pozzolanic material. The physical and chemical characteristics of ashes generated from sludge collected at different wastewater treatment plants and incinerated under laboratory conditions at different temperatures are investigated, since this is a prerequisite for their subsequent recycling and eventual use in other industries. Research was carried out by replacing up to 20% of cement by mass in cement mortar mixes with the different ashes obtained and examining the characteristics of the resulting mixes in the fresh and hardened states. The mixtures with the highest ash content (20%) showed an average drop in workability of about 15%, which is attributed to the increased water requirement when ash was used. Although some mixes containing added ash showed compressive and flexural strengths equivalent to those of the reference mixes, a slight decrease in strength was generally observed. However, it is important to point out that the compressive strengths always remained above 85% compared to the reference mix, while the flexural strengths remained above 75%. The ecological impact of innovative construction products containing sewage sludge ash was determined by analyzing the leaching concentrations of heavy metals. The results demonstrate that sewage sludge ash can satisfy technical and environmental criteria for use in cementitious materials, which represents a new recycling application for an increasingly important waste material that is normally landfilled. Particular emphasis is placed on linking the composition of the generated ashes, depending on their origin and the applied treatment processes (stage of wastewater treatment, sludge treatment technology, incineration temperature), with the characteristics of the final products. Acknowledgement: This work has been fully supported by the Croatian Science Foundation under the project '7927 - Reuse of sewage sludge in concrete industry – from infrastructure to innovative construction products'.

Keywords: cement mortar, recycling, sewage sludge ash, sludge disposal

Procedia PDF Downloads 253
859 Finite Element Analysis of Hollow Structural Shape (HSS) Steel Brace with Infill Reinforcement under Cyclic Loading

Authors: Chui-Hsin Chen, Yu-Ting Chen

Abstract:

The special concentrically braced frame is one of the seismic load-resisting systems; it dissipates seismic energy when bracing members within the frame undergo yielding and buckling while sustaining their axial tension and compression load capacities. Most of the inelastic deformation of a buckling bracing member concentrates in the mid-length region. While experiencing cyclic loading, this region dissipates most of the seismic energy being input into the frame. Such a concentration makes the braces vulnerable to failure modes associated with low-cycle fatigue. In this research, a strategy to improve the cyclic behavior of the conventional steel bracing member is proposed by filling the Hollow Structural Shape (HSS) member with reinforcement. This prevents the local section from concentrating the large plastic deformation caused by cyclic loading. The infill helps spread the plastic hinge region over a wider area and hence postpones the initiation of local buckling or even the rupture of the braces. The finite element method is introduced to simulate the complicated bracing member behavior and member-versus-infill interaction under cyclic loading. Fifteen 3-D-element-based models are built with the ABAQUS software. The verification of the FEM model is done with cyclic test data of unreinforced (UR) HSS bracing members and bending test data of aluminum honeycomb plates. Numerical models include UR and filled HSS bracing members with various compactness ratios based on the specifications of AISC-2016 and AISC-1989. The primary variables to be investigated include the relative bending stiffness and the material of the filling reinforcement. The distributions of von Mises stress and equivalent plastic strain (PEEQ) are used as indices to assess the strengths and shortcomings of each model. The results indicate that changing the relative bending stiffness of the infill is much more influential than changing the material in use for increasing the energy dissipation capacity. Strengthening the relative bending stiffness of the reinforcement results in additional energy dissipation capacity to the extent of 24% and 46% in the models based on AISC-2016 (16-series) and AISC-1989 (89-series), respectively. HSS members with infill show growth in η_LocalBuckling, the normalized energy accumulated until the onset of local buckling, compared to UR bracing members. The 89-series infill-reinforced members have 117% to 166% more energy dissipation capacity than the unreinforced 16-series members. The flexural rigidity of the infill should be less than 29% and 13% of that of the member section itself for 16-series and 89-series bracing members, respectively, thereby guaranteeing that the plastic hinge spreads over and occurs within the reinforced section. If the parameters are properly configured, the ductility, energy dissipation capacity, and fatigue life of HSS SCBF bracing members can be improved prominently by the infill reinforcement method.
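
For reference, the equivalent plastic strain (PEEQ) used as an index above follows the standard accumulated definition below; the notation is generic, not taken from the paper.

```latex
% Equivalent plastic strain accumulated over the loading history
\bar{\varepsilon}^{\,pl} = \int_0^{t}
  \sqrt{\tfrac{2}{3}\,\dot{\boldsymbol{\varepsilon}}^{pl} : \dot{\boldsymbol{\varepsilon}}^{pl}} \; \mathrm{d}t
```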

Keywords: special concentrically braced frames, HSS, cyclic loading, infill reinforcement, finite element analysis, PEEQ

Procedia PDF Downloads 96
858 Engineering a Band Gap Opening in Dirac Cones on Graphene/Tellurium Heterostructures

Authors: Beatriz Muñiz Cano, J. Ripoll Sau, D. Pacile, P. M. Sheverdyaeva, P. Moras, J. Camarero, R. Miranda, M. Garnica, M. A. Valbuena

Abstract:

Graphene, in its pristine state, is a semiconductor with a zero band gap and massless Dirac fermion carriers, which conducts electrons like a metal. Nevertheless, the absence of a band gap makes it impossible to control the material's electrons, something that is essential for performing on-off switching operations in transistors. Therefore, it is necessary to generate a finite gap in the energy dispersion at the Dirac point. Intense research has been devoted to engineering band gaps while preserving the exceptional properties of graphene, and different strategies have been proposed, among them quantum confinement in 1D nanoribbons or the introduction of a superperiodic potential in graphene. Besides, in the context of developing new 2D materials and van der Waals heterostructures with exciting emerging properties, such as 2D transition metal chalcogenide monolayers, it is fundamental to know any possible interaction between chalcogen atoms and graphene-supporting substrates. In this work, we report a combined Scanning Tunneling Microscopy (STM), Low Energy Electron Diffraction (LEED), and Angle-Resolved Photoemission Spectroscopy (ARPES) study of a new superstructure that forms when Te is evaporated (and intercalated) onto graphene on Ir(111). This new superstructure leads to electronic doping of the Dirac cone while the linear dispersion of the massless Dirac fermions is preserved. Very interestingly, our ARPES measurements evidence a large band gap (~400 meV) at the Dirac point of the graphene Dirac cones, below but close to the Fermi level. We have also observed signatures of the Dirac point binding energy being tuned (upwards or downwards) as a function of Te coverage.
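
As a point of reference (standard massive-Dirac notation, not extracted from the measurements), a gap opening at the Dirac point modifies the linear dispersion as below.

```latex
% Gapped (massive) Dirac dispersion: E_D is the Dirac point energy,
% v_F the Fermi velocity and \Delta the gap opened at the Dirac point.
E_{\pm}(k) = E_{D} \pm \sqrt{(\hbar v_{F} k)^{2} + (\Delta/2)^{2}}
```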

Keywords: angle resolved photoemission spectroscopy, ARPES, graphene, spintronics, spin-orbitronics, 2D materials, transition metal dichalcogenides, TMDCs, TMDs, LEED, STM, quantum materials

Procedia PDF Downloads 81
857 On Crack Tip Stress Field in Pseudo-Elastic Shape Memory Alloys

Authors: Gulcan Ozerim, Gunay Anlas

Abstract:

In shape memory alloys, upon loading, the stress increases around the crack tip and a martensitic phase transformation occurs in the early stages. In many studies, the stress distribution in the vicinity of the crack tip is represented using linear elastic fracture mechanics (LEFM), although the pseudo-elastic behavior results in a nonlinear stress-strain relation. In this study, the HRR (Hutchinson, Rice and Rosengren) singularity, which is based on Rice's path-independent J-integral, is used to formulate the stress distribution around the crack tip. In the HRR approach, the Ramberg-Osgood model for the stress-strain relation of power-law hardening materials is used to represent the elastic-plastic behavior. Although it is recoverable, the inelastic portion of the deformation during the martensitic transformation (up to the end of transformation) resembles plastic deformation. To determine the constants of the Ramberg-Osgood equation, the material's response is simulated in ABAQUS using a UMAT based on the ZM (Zaki-Moumni) thermo-mechanically coupled model, and the stress-strain curve of the material is plotted. An edge-cracked shape memory alloy (Nitinol) plate is loaded quasi-statically under mode I and modeled in ABAQUS; the opening stress values ahead of the crack tip are calculated. The stresses are also evaluated using the asymptotic equations of both LEFM and HRR. The results show that in the transformation zone around the crack tip, the stress values are much better represented when the HRR singularity is used, although the J-integral does not show path-independent behavior. For the nodes very close to the crack tip, the HRR singularity is not valid due to the non-proportional loading effect and the high stress values that exceed the transformation finish stress.
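For context, the standard forms of the Ramberg-Osgood relation and the HRR crack-tip field referred to above are reproduced below; the symbols (σ₀, ε₀, α, n, I_n, and the angular functions σ̃_ij) follow the usual textbook notation and are not values reported by the authors.

```latex
% Ramberg-Osgood power-law hardening relation
\frac{\varepsilon}{\varepsilon_0} = \frac{\sigma}{\sigma_0}
  + \alpha \left( \frac{\sigma}{\sigma_0} \right)^{n}

% HRR asymptotic crack-tip stress field
\sigma_{ij}(r,\theta) = \sigma_0
  \left( \frac{J}{\alpha \, \sigma_0 \, \varepsilon_0 \, I_n \, r} \right)^{\!\frac{1}{n+1}}
  \tilde{\sigma}_{ij}(\theta, n)
```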

Keywords: crack, HRR singularity, shape memory alloys, stress distribution

Procedia PDF Downloads 327
856 Investigating Students' Understanding about Mathematical Concept through Concept Map

Authors: Rizky Oktaviana

Abstract:

The main purpose of learning is to improve students' understanding. Teachers usually use written tests to measure students' understanding of learning material, especially in mathematics. This common method has a shortcoming: in mathematics, a written test only shows the procedural steps used to solve problems. Teachers are therefore unable to see whether students actually understand the mathematical concepts and the relations between them. One of the best tools for observing students' understanding of mathematical concepts is the concept map. The goal of this research is to describe junior high school students' understanding of mathematical concepts through concept maps, based on differences in mathematical ability. There were three steps in this research. The first step was choosing the research subjects by giving students a mathematical ability test; the subjects are three students with different mathematical abilities: high, intermediate, and low. The second step was giving concept-mapping training to the chosen subjects. The last step was giving the subjects a concept-mapping task about functions. Nodes representing concepts related to functions were provided, and the subjects had to use these nodes in their concept maps. Based on the data analysis, the results show that the subject with high mathematical ability has formal understanding, since the subject could see the connections between function concepts and arrange them into a concept map with a valid hierarchy. The subject with intermediate mathematical ability has relational understanding, because the subject could arrange all the given concepts and label the links between them appropriately, although the connections were not yet represented specifically. The subject with low mathematical ability has poor understanding of functions, as seen from a concept map that used only a few of the given concepts, because the subject could not see the connections between them. All subjects showed instrumental understanding of the relations between the concepts of linear function, quadratic function, domain, codomain, and range.

Keywords: concept map, concept mapping, mathematical concepts, understanding

Procedia PDF Downloads 271
855 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple yet accurate order-statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. The model is built on the following educated assumption: individual leg times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the well-known German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place-prediction root-mean-square errors than linear regression, mord regression, and Gaussian process regression.
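A minimal sketch of the underlying idea, assuming independent log-normal leg times: the Fenton-Wilkinson approximation matches the first two moments of the summed leg times with a single log-normal, and the predicted place then follows its sigmoidal CDF. The function names, parameters, and the linear place-to-CDF mapping below are illustrative assumptions, not the authors' exact estimator.

```python
import numpy as np
from scipy.stats import norm

def fenton_wilkinson(mus, sigmas):
    """Approximate the sum of independent log-normal leg times, X_i ~ LogNormal(mu_i, sigma_i^2),
    by a single log-normal whose first two moments match (Fenton-Wilkinson approximation)."""
    mus, sigmas = np.asarray(mus), np.asarray(sigmas)
    mean_sum = np.sum(np.exp(mus + sigmas**2 / 2))
    var_sum = np.sum((np.exp(sigmas**2) - 1) * np.exp(2 * mus + sigmas**2))
    sigma2 = np.log(1 + var_sum / mean_sum**2)
    mu = np.log(mean_sum) - sigma2 / 2
    return mu, np.sqrt(sigma2)

def predict_place(t, mus, sigmas, n_teams):
    """Expected place of a team whose changeover time after the summed legs is t:
    a sigmoidal mapping through the approximated log-normal CDF (illustrative choice)."""
    mu, sigma = fenton_wilkinson(mus, sigmas)
    return 1 + (n_teams - 1) * norm.cdf((np.log(t) - mu) / sigma)
```

For example, predict_place(t=6500, mus=[8.0, 8.1], sigmas=[0.20, 0.25], n_teams=1500) would give the expected place of a team reaching the second changeover at 6,500 s under these hypothetical leg parameters.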

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling

Procedia PDF Downloads 128
854 Ionic Liquid and Chemical Denaturants Effects on the Fluorescence Properties of the Laccase

Authors: Othman Saoudi

Abstract:

In this work, we investigate the effects of chemical denaturants and synthesized ionic liquids on the fluorescence properties of laccase from Trametes versicolor. The fluorescence properties of laccase result from the presence of tryptophan, whose aromatic core is responsible for absorption in the ultraviolet domain and the emission of fluorescence photons. The effects of pyrrolidinium formate ([pyrr][F]) and morpholinium formate ([morph][F]) ionic liquids on laccase behavior are studied for various volumetric fractions. We have shown that the fluorescence spectrum in [pyrr][F] presents a single band with a maximum around 340 nm and a secondary peak at 361 nm for a volumetric fraction of 20% v/v. For concentrations above 40%, the fluorescence intensity decreases and the peaks shift toward higher wavelengths. For [morph][F], the fluorescence spectrum shows a single band around 340 nm, and the intensity of the principal peak decreases for concentrations above 20% v/v. From the plot of λₘₐₓ versus volumetric concentration, we determined the half-transition concentrations C₁/₂, equal to 42.62% and 40.91% v/v in the presence of [pyrr][F] and [morph][F], respectively. For chemical denaturation, we have shown that the fluorescence intensity decreases with increasing denaturant concentration, while the emission maximum shifts toward higher wavelengths. From the spectra for urea and GdmCl, we also determined the unfolding free energy, ΔG_D; its variation with denaturant concentration follows a linear regression model. We have also demonstrated that the half-transitions C₁/₂ occur at urea and GdmCl concentrations of about 3.06 and 3.17 M, respectively.
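The linear dependence of the unfolding free energy on denaturant concentration corresponds to the standard linear extrapolation model, ΔG_D = ΔG_H₂O − m[D], with the half-transition at C₁/₂ = ΔG_H₂O/m. The short Python sketch below fits this model; the concentrations and ΔG values are made-up placeholders, not the measured data.

```python
import numpy as np

# Hypothetical unfolding free energies (kcal/mol) at several urea concentrations (M);
# the numbers are illustrative placeholders, not the measured data.
conc = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
dG   = np.array([2.1, 1.4, 0.8, 0.1, -0.6, -1.3])

# Linear extrapolation model: dG_D = dG_H2O - m * [D]
slope, intercept = np.polyfit(conc, dG, 1)
m_value  = -slope               # m-value (kcal/mol/M)
dG_water = intercept            # unfolding free energy extrapolated to zero denaturant
c_half   = dG_water / m_value   # half-transition concentration, where dG_D = 0

print(f"m = {m_value:.2f} kcal/mol/M, dG_H2O = {dG_water:.2f} kcal/mol, C1/2 = {c_half:.2f} M")
```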

Keywords: laccase, fluorescence, ionic liquids, chemical denaturants

Procedia PDF Downloads 515
853 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major component of non-productive time (NPT) costs. Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success is predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time from surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction, and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter out noise and create a robust pre-determined pilot signature that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at the same frequency as the pre-determined signature. The matrix is then correlated in real time with the pre-determined stuck-pipe signature for the field. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant ones. The correlation output is interpreted as a real-time probability curve for stuck pipe incidents. Once this probability exceeds a user-defined threshold, the second component, cause analysis, alerts the user to the expected incident based on the pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This detection accuracy could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving compared with offset wells. Predicting stuck pipe problems requires a method that captures geological, geophysical, and drilling data and recognizes the indicators of the issue at the field and formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in producing such signatures and predicting this NPT event.
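As an illustration of the real-time prediction step, the sketch below correlates a sliding window of live surface-drilling channels with a pre-determined stuck-pipe signature and raises an alert when the score passes a user-defined threshold. The windowed Pearson-style correlation used here is a simplification standing in for the CFS-based correlation described by the authors, and all function names and the threshold value are assumptions.

```python
import numpy as np

def window_correlation(live, signature):
    """Average per-channel Pearson correlation between a live window of surface-drilling
    data and a pre-determined stuck-pipe signature (rows = samples, cols = channels)."""
    a = (live - live.mean(axis=0)) / (live.std(axis=0) + 1e-9)
    b = (signature - signature.mean(axis=0)) / (signature.std(axis=0) + 1e-9)
    return float(np.mean(np.sum(a * b, axis=0) / len(a)))

def stuck_pipe_alerts(live_stream, signature, threshold=0.7):
    """Slide a window the length of the signature over the live stream and flag samples
    whose correlation score passes the user-defined threshold."""
    window = len(signature)
    alerts = []
    for i in range(window, len(live_stream) + 1):
        score = window_correlation(live_stream[i - window:i], signature)
        alerts.append((i, score, score >= threshold))
    return alerts
```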

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 237