Search results for: methods of measurements
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17130

16260 Explicit Numerical Approximations for a Pricing Weather Derivatives Model

Authors: Clarinda V. Nhangumbe, Ercília Sousa

Abstract:

Weather derivatives are financial instruments used to cover non-catastrophic weather events and can be expressed in the form of standard or plain vanilla products, or structured and exotic products. The underlying asset, in this case, is a weather index, such as temperature, rainfall, humidity, wind, or snowfall. The complexity of the weather derivatives structure exposes the weaknesses of the Black-Scholes framework. Under the risk-neutral probability measure, the option price of a weather contract can therefore be given as the unique solution of a two-dimensional partial differential equation (parabolic in one direction and hyperbolic in the other), with an initial condition and subject to adequate boundary conditions. To calculate the price of the option, one can use numerical methods such as Monte Carlo simulation or implicit finite difference schemes combined with semi-Lagrangian methods. This paper proposes two explicit methods, namely, first-order upwind in the hyperbolic direction combined with Lax-Wendroff in the parabolic direction, and first-order upwind in the hyperbolic direction combined with second-order upwind in the parabolic direction. One of the advantages of these methods is that they take into consideration the boundary conditions obtained from the financial interpretation and deal efficiently with the different choices of the convection coefficients.
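
As a hedged illustration of the class of explicit scheme described (not the authors' exact discretization), the sketch below advances a model two-dimensional equation u_t + a u_x = b u_yy by one time step, using first-order upwind in the hyperbolic (x) direction and an explicit central difference in the parabolic (y) direction. The grid, coefficients, and boundary values are placeholders, not contract-specific data.

```python
import numpy as np

# Minimal sketch of one explicit time step for a model equation
#   u_t + a(x, y) * u_x = b * u_yy
# using first-order upwind in the hyperbolic (x) direction and an
# explicit central difference in the parabolic (y) direction.
# Grid, coefficients and boundary values are illustrative placeholders.

nx, ny = 50, 50
dx, dy, dt = 1.0 / nx, 1.0 / ny, 1e-4
b = 0.5                                  # diffusion coefficient (assumed)
a = np.ones((nx, ny))                    # convection coefficient (assumed)
u = np.zeros((nx, ny))                   # option value on the grid
u[:, 0] = 1.0                            # illustrative boundary condition

def step(u):
    un = u.copy()
    # upwind difference in x, switching direction with the sign of a
    ux_back = (u[1:-1, 1:-1] - u[:-2, 1:-1]) / dx
    ux_fwd = (u[2:, 1:-1] - u[1:-1, 1:-1]) / dx
    ux = np.where(a[1:-1, 1:-1] > 0, ux_back, ux_fwd)
    # central second difference in y for the parabolic term
    uyy = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dy ** 2
    un[1:-1, 1:-1] = u[1:-1, 1:-1] + dt * (b * uyy - a[1:-1, 1:-1] * ux)
    return un

for _ in range(100):
    u = step(u)
```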

Keywords: incomplete markets, numerical methods, partial differential equations, stochastic process, weather derivatives

Procedia PDF Downloads 79
16259 Thermal Imaging of Aircraft Piston Engine in Laboratory Conditions

Authors: Lukasz Grabowski, Marcin Szlachetka, Tytus Tulwin

Abstract:

The main task of the engine cooling system is to maintain its average operating temperatures within strictly defined limits. Too high or too low average temperatures result in accelerated wear or even damage to the engine or its individual components. In order to avoid local overheating or significant temperature gradients, which lead to high stresses in the components, the aim is to ensure an even flow of air. In the case of analyses related to heat exchange, one of the main problems is the comparison of temperature fields, because standard measuring instruments such as thermocouples or thermistors only provide information about the temperature at a given point. Thermal imaging tests can be helpful in this case. With appropriate camera settings and taking into account environmental conditions, it is possible to obtain accurate temperature fields in the form of thermograms. Emission of heat from the engine to the engine compartment is an important issue when designing a cooling system. Also, in the case of liquid cooling, the main sources of heat, such as emissions from the engine block, cylinders, etc., should be identified. This is important when redesigning the engine compartment ventilation system. Ensuring proper cooling of an aircraft reciprocating engine is difficult not only because of the variable operating range but mainly because of the different cooling conditions related to changes in flight speed or altitude. Engine temperature also has a direct and significant impact on the properties of engine oil, in particular its viscosity. Too low or too high a viscosity can result in fast wear of engine parts. One of the ways to determine the temperatures occurring on individual parts of the engine is the use of thermal imaging measurements. The article presents the results of preliminary thermal imaging tests of an aircraft piston diesel engine with a maximum power of about 100 HP. In order to perform the heat emission tests of the tested engine, the ThermaCAM S65 thermovision monitoring system from FLIR (Forward-Looking Infrared) together with the ThermaCAM Researcher Professional software was used. The measurements were carried out after the engine warm-up. The engine speed was 5300 rpm. The measurements were taken for the following environmental parameters: air temperature 17 °C, ambient pressure 1004 hPa, relative humidity 38%. The temperature distributions on the engine cylinder and on the exhaust manifold were analysed. Thermal imaging tests made it possible to relate the results of simulation tests to the real object by measuring the rib temperature of the cylinders. The results obtained are necessary to develop a CFD (Computational Fluid Dynamics) model of heat emission from the engine bay. The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).

Keywords: aircraft, piston engine, heat, emission

Procedia PDF Downloads 111
16258 A Hybrid Fuzzy Clustering Approach for Fertile and Unfertile Analysis

Authors: Shima Soltanzadeh, Mohammad Hosain Fazel Zarandi, Mojtaba Barzegar Astanjin

Abstract:

Diagnosis of male infertility by laboratory tests is expensive and sometimes intolerable for patients. Filling out a questionnaire and then using a classification method can be the first step in the decision-making process, so that laboratory tests are used only in cases with a high probability of infertility. In this paper, we evaluated the performance of four classification methods, including naive Bayesian, neural network, logistic regression, and fuzzy c-means clustering used as a classifier, in the diagnosis of male infertility due to environmental factors. Since the data are unbalanced, ROC curves are the most suitable method for the comparison. We also selected the more important features using a filtering method and examined the impact of this feature reduction on the performance of each method; generally, most of the methods had better performance after applying the filter. We have shown that fuzzy c-means clustering used as a classifier has a good performance according to the ROC curves, and its performance is comparable to other classification methods such as logistic regression.
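
A minimal sketch of the idea of using fuzzy c-means membership degrees as classification scores evaluated with a ROC curve is given below. The fuzzy c-means routine is hand-rolled, and the data are synthetic stand-ins for the questionnaire features of the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Sketch only: fuzzy c-means membership used as a classifier score,
# evaluated by ROC AUC on synthetic, imbalanced data.

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(1.5, 1, (40, 4))])
y = np.array([0] * 100 + [1] * 40)        # imbalanced classes, as in the paper

def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)      # random initial memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))     # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

centers, U = fuzzy_cmeans(X)
# treat the cluster whose members are mostly positive as the "infertile" class
pos_cluster = int(np.argmax([y[U.argmax(axis=1) == k].mean() for k in range(2)]))
scores = U[:, pos_cluster]                 # membership degree as classifier score
print("AUC:", roc_auc_score(y, scores))
```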

Keywords: classification, fuzzy c-means, logistic regression, Naive Bayesian, neural network, ROC curve

Procedia PDF Downloads 325
16257 Modification of the Risk for Incident Cancer with Changes in the Metabolic Syndrome Status: A Prospective Cohort Study in Taiwan

Authors: Yung-Feng Yen, Yun-Ju Lai

Abstract:

Background: Metabolic syndrome (MetS) is reversible; however, the effect of changes in MetS status on the risk of incident cancer has not been extensively studied. We aimed to investigate the effects of changes in MetS status on incident cancer risk. Methods: This prospective, longitudinal study used data from Taiwan’s MJ cohort of 157,915 adults recruited from 2002–2016 who had repeated MetS measurements 5.2 (±3.5) years apart and were followed up for the new onset of cancer over 8.2 (±4.5) years. A new diagnosis of incident cancer in study individuals was confirmed by their histopathological reports. The participants’ MetS status included MetS-free (n=119,331), MetS-developed (n=14,272), MetS-recovered (n=7,914), and MetS-persistent (n=16,398). We used the Fine-Gray sub-distribution method, with death as the competing risk, to determine the association between MetS changes and the risk of incident cancer. Results: During the follow-up period, 7,486 individuals had new development of cancer. Compared with the MetS-free group, MetS-persistent individuals had a significantly higher risk of incident cancer (adjusted hazard ratio [aHR], 1.10; 95% confidence interval [CI], 1.03-1.18). Considering the effect of dynamic changes in MetS status on the risk of specific cancer types, MetS persistence was significantly associated with a higher risk of incident colon and rectum, kidney, pancreas, uterus, and thyroid cancer. The risk of kidney, uterus, and thyroid cancer in MetS-recovered individuals was higher than in those who remained MetS-free but lower than in MetS-persistent individuals. Conclusions: Persistent MetS is associated with a higher risk of incident cancer, and recovery from MetS may reduce the risk. The findings of our study suggest that it is imperative for individuals with pre-existing MetS to seek treatment for this condition to reduce their cancer risk.

Keywords: metabolic syndrome change, cancer, risk factor, cohort study

Procedia PDF Downloads 69
16256 Estimation of Mobility Parameters and Threshold Voltage of an Organic Thin Film Transistor Using an Asymmetric Capacitive Test Structure

Authors: Rajesh Agarwal

Abstract:

Carrier mobility at the organic/insulator interface is essential to the performance of organic thin film transistors (OTFT). The present work describes the estimation of field-dependent mobility (FDM) parameters and the threshold voltage of an OTFT using a simple, easy-to-fabricate, two-terminal asymmetric capacitive test structure and admittance measurements. Conventionally, transfer characteristics are used to estimate the threshold voltage in an OTFT with field-independent mobility (FIDM). Yet, this technique fails to give accurate results for devices with high contact resistance and field-dependent mobility. In this work, a new technique is presented for the characterization of a long channel organic capacitor (LCOC). The proposed technique helps in the accurate estimation of the mobility enhancement factor (γ), the threshold voltage (V_th), and the band mobility (µ₀) using capacitance-voltage (C-V) measurements in an OTFT. This technique also eliminates the need to fabricate short-channel OTFTs or metal-insulator-metal (MIM) structures for C-V measurements. To understand the behavior of the devices and for ease of analysis, a transmission line compact model is developed. A 2-D numerical simulation was carried out to illustrate the correctness of the model. Results show that the proposed technique estimates device parameters accurately even in the presence of contact resistance and field-dependent mobility. Pentacene/poly(4-vinyl phenol) based top-contact bottom-gate OTFTs were fabricated to illustrate the operation and advantages of the proposed technique. A small signal with frequency varying from 1 kHz to 5 kHz and a gate potential ranging from +40 V to -40 V were applied to the devices for the measurements.
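
For illustration only, the sketch below extracts the three parameters named above (µ₀, γ, V_th) by least-squares fitting a power-law field-dependent mobility model to synthetic data. The paper derives these parameters from C-V (admittance) measurements with a transmission line compact model, which is not reproduced here; the functional form, sweep range, and values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch: fit mu(Vg) = mu0 * (Vg - Vth) ** gamma to synthetic
# mobility data to recover the band mobility (mu0), mobility enhancement
# factor (gamma) and threshold voltage (Vth). Not the paper's C-V method.

def fdm(vg, mu0, gamma, vth):
    return mu0 * np.clip(vg - vth, 1e-9, None) ** gamma

vg = np.linspace(5, 40, 30)                        # gate voltage sweep (V), assumed
true = (1e-2, 0.4, 2.0)                            # assumed "true" parameters
mu_meas = fdm(vg, *true) * (1 + 0.02 * np.random.randn(vg.size))

popt, pcov = curve_fit(fdm, vg, mu_meas, p0=(1e-3, 0.5, 1.0))
mu0, gamma, vth = popt
print(f"mu0 = {mu0:.3e} cm^2/Vs, gamma = {gamma:.2f}, Vth = {vth:.2f} V")
```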

Keywords: capacitance, mobility, organic, thin film transistor

Procedia PDF Downloads 155
16255 Improved Reuse and Storage Performances at Room Temperature of a New Environmentally Friendly Lactate Oxidase Biosensor Made by Ambient Electrospray Deposition

Authors: Antonella Cartoni, Mattea Carmen Castrovilli

Abstract:

A biosensor for lactate detection has been developed using an environmentally friendly approach. The biosensor is based on lactate oxidase (LOX) and has remarkable capabilities for reuse and storage at room temperature. The manufacturing technique employed is ambient electrospray deposition (ESD), which enables efficient and sustainable immobilization of the LOX enzyme on a cost-effective commercial screen-printed Prussian blue/carbon electrode (PB/C-SPE). The study demonstrates that the ESD technology allows the biosensor to be stored at ambient pressure and temperature for extended periods without affecting the enzymatic activity. The biosensor can be stored for up to 90 days without requiring specific storage conditions, and it can be reused for up to 24 measurements on both freshly prepared electrodes and electrodes that are three months old. The LOX-based biosensor exhibits a linear range of lactate detection between 0.1 and 1 mM, with a limit of detection of 0.07±0.02 mM. Additionally, it does not exhibit any memory effects. The immobilization process does not involve the use of entrapment matrices or hazardous chemicals, making it environmentally sustainable and non-toxic compared to current methods. Furthermore, the application of an electrospray deposition cycle on previously used biosensors rejuvenates their performance, making them comparable to freshly made biosensors. This highlights the excellent recycling potential of the technique, eliminating the waste associated with disposable devices.
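
As a hedged illustration of how a limit of detection over the reported linear range could be derived from a calibration line, the following sketch fits current readings against lactate concentration and applies the common LOD = 3.3·σ/slope rule. The current values are synthetic placeholders, not the measurements of the paper.

```python
import numpy as np

# Minimal calibration sketch: straight-line fit over the reported linear
# range (0.1 - 1 mM lactate) and LOD = 3.3 * (residual std) / slope.
# Current values below are invented placeholders.

conc = np.linspace(0.1, 1.0, 10)                                    # lactate, mM
current = 2.5 * conc + 0.1 + np.random.normal(0, 0.02, conc.size)   # uA (assumed)

slope, intercept = np.polyfit(conc, current, 1)
residuals = current - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                    # std of regression residuals
lod = 3.3 * sigma / slope

print(f"sensitivity = {slope:.2f} uA/mM, LOD = {lod:.3f} mM")
```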

Keywords: green friendly, reuse, storage performance, immobilization, matrix-free, electrospray deposition, biosensor, lactate oxidase, enzyme

Procedia PDF Downloads 51
16254 Application of the Electrical Resistivity Tomography and Tunnel Seismic Prediction 303 Methods for Detecting Fracture Zones Ahead of a Tunnel: A Case Study

Authors: Nima Dastanboo, Xiao-Qing Li, Hamed Gharibdoost

Abstract:

The purpose of this study is to investigate the geological properties ahead of a tunnel face using the Electrical Resistivity Tomography (ERT) and Tunnel Seismic Prediction (TSP303) methods. In deep tunnels with complex hydro-geological conditions, it is important to study the geological structures of the region before excavating the tunnel. Otherwise, unexpected incidents may occur and impose serious damage on the project. For the construction of the Nosoud tunnel in western Iran, the ERT and TSP303 methods were employed to predict the geological conditions dynamically during the excavation. In this paper, based on the engineering background of the Nosoud tunnel, the important results of applying these methods are discussed. This work demonstrates that the seismic method and electrical resistivity tomography are two geophysical techniques able to detect fracture zones ahead of a tunnel face. The results of the two methods were in agreement with each other, but the results of TSP303 were more accurate and of higher quality. In this case, the TSP303 method was a useful tool for predicting unstable geological structures ahead of the tunnel face during excavation. Thus, using another geophysical method together with TSP303 could be helpful as decision support during excavation, especially in complicated geological conditions.

Keywords: tunnel seismic prediction (TSP303), electrical resistivity tomography (ERT), seismic wave, velocity analysis, low-velocity zones

Procedia PDF Downloads 135
16253 Nanoparticle Based Green Inhibitor for Corrosion Protection of Zinc in Acidic Medium

Authors: Neha Parekh, Divya Ladha, Poonam Wadhwani, Nisha Shah

Abstract:

Nanoscale materials have attracted tremendous interest as corrosion inhibitors due to their high surface area on metal surfaces. It is well known that zinc oxide nanoparticles have high reactivity towards aqueous acidic solutions. This work presents a new method to incorporate zinc oxide nanoparticles with white sesame seed extract (nano-green inhibitor) for corrosion protection of zinc in acidic medium. The morphology of the zinc oxide nanoparticles was investigated by TEM and DLS. The corrosion inhibition efficiency of the green inhibitor and the nano-green inhibitor was determined by gravimetric and electrochemical impedance spectroscopy (EIS) methods. Gravimetric measurements suggested that the nano-green inhibitor is more effective than the green inhibitor. Furthermore, with increasing temperature, the inhibition efficiency increases for both inhibitors. In addition, it was established that the Temkin adsorption isotherm fits the experimental data well for both inhibitors. The effect of temperature and the Temkin adsorption isotherm revealed a chemisorption mechanism occurring in the system. The activation energy (Ea) and other thermodynamic parameters for the inhibition process were calculated. The EIS data showed that charge transfer controls the corrosion process. The surface morphology of the zinc specimens in the absence and presence of the green inhibitor and the nano-green inhibitor was examined using Scanning Electron Microscopy (SEM) and Atomic Force Microscopy (AFM) techniques. The outcomes indicated the formation of a protective layer over the zinc specimen.
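
A small hedged sketch of the activation energy calculation mentioned above is shown below: Ea is obtained from the slope of an Arrhenius plot of the corrosion rate, ln(CR) = ln(A) − Ea/(R·T). The corrosion rates used here are invented placeholders, not the gravimetric data of the study.

```python
import numpy as np

# Illustrative sketch: activation energy (Ea) of the corrosion process from
# an Arrhenius plot, ln(CR) = ln(A) - Ea / (R * T). Placeholder data only.

R = 8.314                                     # J / (mol K)
T = np.array([298.0, 308.0, 318.0, 328.0])    # temperatures, K
CR = np.array([0.30, 0.40, 0.55, 0.80])       # corrosion rate, e.g. mg/cm^2/h (assumed)

slope, intercept = np.polyfit(1.0 / T, np.log(CR), 1)   # slope = -Ea / R
Ea = -slope * R / 1000.0                                  # kJ/mol
print(f"Ea = {Ea:.1f} kJ/mol")
```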

Keywords: corrosion, green inhibitor, nanoparticles, zinc

Procedia PDF Downloads 436
16252 Reemergence of Behaviorism in Language Teaching

Authors: Hamid Gholami

Abstract:

Over the years, language teaching methods have been offshoots of schools of thought in psychology. The methods were mainly influenced by their contemporary psychological approaches, as Audiolingualism was based on behaviorism and Communicative Language Teaching on constructivism. In the 1950s, textbooks were full of repetition exercises, as encouraged by behaviorism. In the 1980s, they were filled with communicative exercises, as suggested by constructivism. The trend has continued to the present day, which sees no specific method as prevalent, since none of the schools of thought seems to capture the complexity of human learning. But some changes are notable; some textbooks are giving more and more space to repetition exercises, at least to enhance some aspects of language proficiency, namely collocations, rhythm and intonation, and conversation models. These changes may mark the reemergence of one of the once widely accepted schools of thought in psychology: behaviorism.

Keywords: language teaching methods, psychology, schools of thought, Behaviorism

Procedia PDF Downloads 553
16251 Seismic Performance Point of RC Frame Buildings Using ATC-40, FEMA 356 and FEMA 440 Guidelines

Authors: Gram Y. Rivas Sanchez

Abstract:

Seismic design codes around the world allow the analysis of structures assuming linear-elastic behavior; however, under earthquakes, structures exhibit non-linear behavior that induces damage in their elements. For this reason, it is necessary to use non-linear methods to analyze these structures; dynamic methods provide the most reliable results but require high computational cost, whereas non-linear static methods do not have this disadvantage and are being used more and more. In the present work, the non-linear static (pushover) analysis of RC frame buildings of three, five, and seven stories is carried out considering concentrated plasticity models using plastic hinges, and the seismic performance points are determined using the ATC-40, FEMA 356, and FEMA 440 guidelines. Using this last standard, the highest inelastic displacements and base shears are obtained, providing designs that are more conservative.

Keywords: pushover, nonlinear, RC building, FEMA 440, ATC 40

Procedia PDF Downloads 142
16250 Predicting OpenStreetMap Coverage by Means of Remote Sensing: The Case of Haiti

Authors: Ran Goldblatt, Nicholas Jones, Jennifer Mannix, Brad Bottoms

Abstract:

Accurate, complete, and up-to-date geospatial information is the foundation of successful disaster management. When the 2010 Haiti Earthquake struck, accurate and timely information on the distribution of critical infrastructure was essential for the disaster response community for effective search and rescue operations. Existing geospatial datasets such as Google Maps did not have comprehensive coverage of these features. In the days following the earthquake, many organizations released high-resolution satellite imagery, catalyzing a worldwide effort to map Haiti and support the recovery operations. Of these organizations, OpenStreetMap (OSM), a collaborative project to create a free editable map of the world, used the imagery to support volunteers in digitizing roads, buildings, and other features, creating the most detailed map of Haiti in existence in just a few weeks. However, large portions of the island are still not fully covered by OSM. There is an increasing need for a tool to automatically identify which areas in Haiti, as well as in other countries vulnerable to disasters, are not fully mapped. The objective of this project is to leverage different types of remote sensing measurements, together with machine learning approaches, in order to identify geographical areas where OSM coverage of building footprints is incomplete. Several remote sensing measures and derived products were assessed as potential predictors of OSM building footprint coverage, including: intensity of light emitted at night (based on VIIRS measurements), spectral indices derived from the Sentinel-2 satellite (normalized difference vegetation index (NDVI), normalized difference built-up index (NDBI), soil-adjusted vegetation index (SAVI), urban index (UI)), surface texture (based on Sentinel-1 SAR measurements), elevation, and slope. Additional remote sensing derived products, such as Hansen Global Forest Change, DLR`s Global Urban Footprint (GUF), and World Settlement Footprint (WSF), were also evaluated as predictors, as well as the OSM street and road network (including junctions). A supervised classification with a random forest classifier predicted 89% of the variation of OSM building footprint area in a given cell. These predictions allowed for the identification of cells that are predicted to be covered but are actually not mapped yet. With these results, this methodology could be adapted to any location to assist with preparing for future disastrous events and to ensure that essential geospatial information is available to support the response and recovery efforts during and following major disasters.
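
A minimal sketch of the modelling step described above is shown below: a random forest regressor predicts OSM building-footprint area per grid cell from remote-sensing covariates. The feature names mirror the predictors listed in the abstract, but the values are randomly generated placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Sketch only: random forest regression of OSM footprint area per cell
# on synthetic stand-ins for the remote-sensing predictors.

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.gamma(2, 2, n),        # night-time light intensity (VIIRS)
    rng.uniform(-1, 1, n),     # NDVI
    rng.uniform(-1, 1, n),     # NDBI
    rng.gamma(1, 1, n),        # SAR surface texture
    rng.uniform(0, 500, n),    # elevation (m)
    rng.poisson(5, n),         # OSM road junctions in the cell
])
y = 50 * X[:, 0] + 200 * X[:, 5] + rng.normal(0, 50, n)   # synthetic footprint area

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out cells:", round(r2_score(y_te, rf.predict(X_te)), 3))
```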

Keywords: disaster management, Haiti, machine learning, OpenStreetMap, remote sensing

Procedia PDF Downloads 113
16249 Hospital Malnutrition and its Impact on 30-day Mortality in Hospitalized General Medicine Patients in a Tertiary Hospital in South India

Authors: Vineet Agrawal, Deepanjali S., Medha R., Subitha L.

Abstract:

Background. Hospital malnutrition is a highly prevalent issue and is known to increase morbidity, mortality, length of hospital stay, and cost of care. In India, studies on hospital malnutrition have been restricted to ICU, post-surgical, and cancer patients. We designed this study to assess the impact of hospital malnutrition on 30-day post-discharge and in-hospital mortality in patients admitted to the general medicine department, irrespective of diagnosis. Methodology. All patients aged above 18 years admitted to the medicine wards, excluding medico-legal cases, were enrolled in the study. Nutritional assessment was done within 72 h of admission, using the Subjective Global Assessment (SGA), which classifies patients into three categories: severely malnourished, mildly/moderately malnourished, and normal/well-nourished. Anthropometric measurements like Body Mass Index (BMI), triceps skin-fold thickness (TSF), and mid-upper arm circumference (MUAC) were also performed. Patients were followed up during the hospital stay and 30 days after discharge through telephonic interview, and their final diagnosis, comorbidities, and cause of death were noted. Multivariate logistic regression and a Cox regression model were used to determine whether the nutritional status at admission independently impacted mortality at one month. Results. The prevalence of malnourishment by SGA in our study was 67.3% among 395 hospitalized patients, of which 155 patients (39.2%) were moderately malnourished and 111 (28.1%) were severely malnourished. Of the 395 patients, 61 (15.4%) died, of whom 30 died in the hospital and 31 died within 1 month of discharge from the hospital. On univariate analysis, malnourished patients had significantly higher mortality (24.3% in 111 Cat C patients) than well-nourished patients (10.1% in 129 Cat A patients), with OR 9.17, p-value 0.007. On multivariate logistic regression, age and a higher Charlson Comorbidity Index (CCI) were independently associated with mortality. A higher CCI indicates a higher burden of comorbidities on admission, and the CCI in the deceased patient group (mean=4.38) was significantly higher than that of the surviving cohort (mean=2.85). Though malnutrition significantly contributed to higher mortality on univariate analysis, it was not an independent predictor of outcome on multivariate logistic regression. Length of hospitalisation was also longer in the malnourished group (mean=9.4 d) compared to the well-nourished group (mean=8.03 d), with a trend towards significance (p=0.061). None of the anthropometric measurements like BMI, MUAC, or TSF showed any association with mortality or length of hospitalisation. Inference. The results of our study highlight the issue of hospital malnutrition in medicine wards and reiterate that malnutrition contributes significantly to patient outcomes. We found that SGA performs better than anthropometric measurements in assessing under-nutrition. We are of the opinion that the heterogeneity of the study population by diagnosis was probably the primary reason why malnutrition by SGA was not found to be an independent risk factor for mortality. Strategies to identify high-risk patients at admission and treat malnutrition in the hospital and post-discharge are needed.
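
The following is a schematic sketch of the multivariable logistic regression step described above (30-day mortality against nutritional status, age, and Charlson Comorbidity Index). The data frame is synthetic; only the structure of the analysis is illustrated.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sketch only: adjusted odds ratios from a logistic regression of 30-day
# mortality on age, CCI and SGA-based malnutrition, using synthetic data.

rng = np.random.default_rng(1)
n = 395
df = pd.DataFrame({
    "age": rng.normal(50, 15, n).round(),
    "cci": rng.poisson(3, n),
    "sga_malnourished": rng.integers(0, 2, n),   # 1 = SGA category B or C
})
logit_p = -6 + 0.04 * df["age"] + 0.4 * df["cci"] + 0.3 * df["sga_malnourished"]
df["died_30d"] = rng.random(n) < 1 / (1 + np.exp(-logit_p))

X = sm.add_constant(df[["age", "cci", "sga_malnourished"]].astype(float))
model = sm.Logit(df["died_30d"].astype(float), X).fit(disp=0)
print(np.exp(model.params))                       # adjusted odds ratios
```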

Keywords: hospitalization outcome, length of hospital stay, mortality, malnutrition, subjective global assessment (SGA)

Procedia PDF Downloads 145
16248 On Dialogue Systems Based on Deep Learning

Authors: Yifan Fan, Xudong Luo, Pingping Lin

Abstract:

Nowadays, dialogue systems are increasingly becoming the way for humans to access many computer systems, allowing humans to interact with computers in natural language. A dialogue system consists of three parts: understanding what humans say in natural language, managing the dialogue, and generating responses in natural language. In this paper, we survey deep learning based methods for dialogue management, response generation, and dialogue evaluation. Specifically, these methods are based on neural networks, long short-term memory networks, deep reinforcement learning, pre-training, and generative adversarial networks. We compare these methods and point out further research directions.

Keywords: dialogue management, response generation, deep learning, evaluation

Procedia PDF Downloads 157
16247 Detection of Crime Hot Spots for Crime Mapping

Authors: Somayeh Nezami

Abstract:

The management of the financial and human resources of the police in metropolitan areas requires extensive information and precise plans to reduce the crime rate and increase the safety of society. Geographical Information Systems have an important role in providing crime maps and their analysis. By using them to identify crime hot spots and present the results spatially, it is possible to allocate optimum resources while providing effective methods for decision making and preventive solutions. In this paper, we explain and compare some of the methods of hot spot analysis, such as Mode, Fuzzy Mode, and Nearest Neighbour Hierarchical spatial clustering (NNH). Then the spots with the highest rates of drug smuggling crime are obtained for a province in Iran bordering Afghanistan. We show that among these three methods, NNH leads to the best result.
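
As a hedged stand-in for the NNH step described above (not CrimeStat's exact implementation), the sketch below applies single-linkage (nearest neighbour) hierarchical clustering to incident coordinates with a distance threshold, then keeps only clusters exceeding a minimum point count. The coordinates are random placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Sketch only: nearest-neighbour (single-linkage) hierarchical clustering of
# crime incident coordinates, with a minimum cluster size, as a simple
# approximation of NNH hot spot detection.

rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(c, 0.05, (40, 2)) for c in [(0, 0), (1, 1), (2, 0.5)]])

Z = linkage(pts, method="single")            # nearest-neighbour (single) linkage
labels = fcluster(Z, t=0.15, criterion="distance")

min_points = 10                              # discard sparse clusters
hot_spots = [k for k in np.unique(labels) if (labels == k).sum() >= min_points]
print("hot spot clusters:", hot_spots)
```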

Keywords: GIS, Hot spots, nearest neighbor hierarchical spatial clustering, NNH, spatial analysis of crime

Procedia PDF Downloads 320
16246 A Conjugate Gradient Method for Large Scale Unconstrained Optimization

Authors: Mohammed Belloufi, Rachid Benzine, Badreddine Sellami

Abstract:

Conjugate gradient methods are useful for solving large-scale optimization problems in scientific and engineering computation, and are characterized by the simplicity of their iteration and their low memory requirements. It is well known that the search direction plays a main role in the line search method. In this paper, we propose a search direction with the Wolfe line search technique for solving unconstrained optimization problems. Under this line search and some assumptions, the global convergence properties of the given method are discussed. Numerical results and comparisons with other CG methods are given.
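
To make the class of method concrete, the sketch below runs a nonlinear conjugate gradient iteration with SciPy's strong Wolfe line search on a small convex test problem. The Fletcher-Reeves update is used purely for illustration; the paper proposes its own search direction, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import line_search

# Sketch only: nonlinear CG with a Wolfe line search (Fletcher-Reeves beta).

def f(x):
    return np.sum((x - 1) ** 2) + 0.5 * np.sum(x[:-1] * x[1:])

def grad(x):
    g = 2 * (x - 1)
    g[:-1] += 0.5 * x[1:]
    g[1:] += 0.5 * x[:-1]
    return g

x = np.zeros(50)
g = grad(x)
d = -g
for k in range(200):
    alpha = line_search(f, grad, x, d)[0]    # step satisfying Wolfe conditions
    if alpha is None:                        # line search failed; stop the sketch
        break
    x_new = x + alpha * d
    g_new = grad(x_new)
    beta = (g_new @ g_new) / (g @ g)         # Fletcher-Reeves coefficient
    d = -g_new + beta * d
    x, g = x_new, g_new
    if np.linalg.norm(g) < 1e-8:
        break
print("final gradient norm:", np.linalg.norm(g))
```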

Keywords: unconstrained optimization, conjugate gradient method, strong Wolfe line search, global convergence

Procedia PDF Downloads 408
16245 Emerging Methods as a Tool for Obtaining Subconscious Feedback in E-Commerce and Marketplace

Authors: J. Berčík, A. Mravcová, A. Rusková, P. Jurčišin, R. Virágh

Abstract:

The online world is changing every day. With this comes the emergence and development of new business models. One of them is the sale of several types of products in one place. This type of selling, in the form of online marketplaces, has developed positively in recent years and represents a kind of alternative to brick-and-mortar shopping centres. The main philosophy is to buy several products under one roof. Examples of popular e-commerce marketplaces are Amazon, eBay, and Allegro. Their share of total e-commerce turnover is expected to double in the coming years. The paper highlights possibilities for testing web applications and online marketplaces using emerging methods like stationary eye cameras (eye tracking) and facial analysis (FaceReading).

Keywords: emerging methods, consumer neuroscience, e-commerce, marketplace, user experience, user interface

Procedia PDF Downloads 63
16244 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise on EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8–30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterize the imagined motion. The CSP effectiveness depends on the subject's discriminative frequency, and approaches based on the decomposition of the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on the representation of EEG signals in a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The structure of the SBCSP contemplates dividing the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, each processed in parallel by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, which is then used as the training vector of a global SVM classifier. Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact, because it has a 68% smaller dimension than the original signal, the resulting FFT matrix maintains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in the computational cost in relation to the application of filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach improves the overall system classification rate significantly compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the computational cost reduction denote the potential of FFT in EEG signal filtering applied to the context of MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce such conclusions.
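
A hedged sketch of the core idea is given below: an FFT is used to isolate a sub-band of the EEG instead of IIR band-pass filtering, and a CSP-plus-LDA pipeline is applied to that sub-band. Only one sub-band is shown; the paper uses 33 sub-bands whose LDA scores feed a Bayesian meta-classifier and a global SVM, which is omitted here. The EEG epochs are synthetic.

```python
import numpy as np
from numpy.fft import rfft, irfft, rfftfreq
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Sketch only: FFT-based sub-band selection + CSP + LDA on synthetic epochs.

fs, n_ch, n_samp, n_trials = 250, 8, 500, 60
rng = np.random.default_rng(0)
X = rng.standard_normal((2 * n_trials, n_ch, n_samp))
X[n_trials:, 0] += 2 * np.sin(2 * np.pi * 12 * np.arange(n_samp) / fs)  # class-2 rhythm
y = np.array([0] * n_trials + [1] * n_trials)

def fft_subband(X, lo, hi):
    """Keep only FFT coefficients between lo and hi Hz, return the time signal."""
    F = rfft(X, axis=-1)
    freqs = rfftfreq(X.shape[-1], 1 / fs)
    F[..., (freqs < lo) | (freqs > hi)] = 0
    return irfft(F, n=X.shape[-1], axis=-1)

def csp(X, y, n_filt=2):
    """CSP filters from the generalized eigenproblem of the class covariances."""
    covs = [np.mean([x @ x.T / np.trace(x @ x.T) for x in X[y == c]], axis=0)
            for c in (0, 1)]
    w, V = eigh(covs[0], covs[0] + covs[1])
    idx = np.r_[np.argsort(w)[:n_filt], np.argsort(w)[-n_filt:]]
    return V[:, idx].T

Xb = fft_subband(X, 8, 16)                 # one sub-band of the 0-40 Hz range
W = csp(Xb, y)
feat = np.log(np.var(np.einsum("fc,tcs->tfs", W, Xb), axis=-1))
lda = LinearDiscriminantAnalysis().fit(feat, y)
print("training accuracy:", lda.score(feat, y))
```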

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 123
16243 Vitamin D Status in Tunisian Obese Patients

Authors: O. Berriche, R. Ben Othmen, H. Sfar, H. Abdesslam, S. Bou Meftah, S. Bhouri, F. Mahjoub, C. Amrouche, H. Jamoussi

Abstract:

Introduction: Although current evidence emphasizes a high prevalence of vitamin D deficiency and an inverse association between serum 25-hydroxyvitamin D (25-OHD) concentration and obesity, no studies have been conducted in Tunisian obese patients. The objectives of our study were to estimate vitamin D deficiency in obese patients, identify risk factors for vitamin D deficiency, and demonstrate a possible association between vitamin D levels and metabolic parameters. Methods: This was a descriptive study of 100 obese patients aged 18-65 years. Anthropometric measurements were determined. Fasting blood samples were assessed for the following assays: serum calcium, 25-OH vitamin D, inorganic phosphorus, fasting glucose, HDL and LDL cholesterol, and triglycerides. Insulin resistance was evaluated by fasting insulin, HOMA-IR, and HOMA-ß. Consumption of foods rich in vitamin D, sunscreen use, wearing protective clothes, and exposed surface were assessed through questionnaires. Results: The prevalence of vitamin D deficiency (< 30 ng/ml) among the obese patients was 98.8%. Half of them had a level < 10 ng/ml. Environmental factors involved in vitamin D deficiency are: the veil (p = 0.001), wearing protective clothes (p = 0.04), and the exposed surface (p = 0.011), and dietary factors are represented by the daily caloric intake (p = 0.0001). The percentage of fat mass was negatively related to vitamin D levels (p = 0.01), but not BMI (p = 0.11) or waist circumference (p = 0.88). Similarly, the lipid and glucose profiles had no link with vitamin D. We found no relationship between insulin resistance and vitamin D levels. Conclusion: At the end of our study, we identified a very high prevalence of vitamin D deficiency among obese patients. Systematic measurement and supplementation should be applied, and for that, physician awareness is needed.

Keywords: insulin resistance, risk factors, obesity, vitamin D

Procedia PDF Downloads 644
16242 Additive Manufacturing of Microstructured Optical Waveguides Using Two-Photon Polymerization

Authors: Leonnel Mhuka

Abstract:

Background: The field of photonics has witnessed substantial growth, with an increasing demand for miniaturized and high-performance optical components. Microstructured optical waveguides have gained significant attention due to their ability to confine and manipulate light at the subwavelength scale. Conventional fabrication methods, however, face limitations in achieving intricate and customizable waveguide structures. Two-photon polymerization (TPP) emerges as a promising additive manufacturing technique, enabling the fabrication of complex 3D microstructures with submicron resolution. Objectives: This experiment aimed to utilize two-photon polymerization to fabricate microstructured optical waveguides with precise control over geometry and dimensions. The objective was to demonstrate the feasibility of TPP as an additive manufacturing method for producing functional waveguide devices with enhanced performance. Methods: A femtosecond laser system operating at a wavelength of 800 nm was employed for two-photon polymerization. A custom-designed CAD model of the microstructured waveguide was converted into G-code, which guided the laser focus through a photosensitive polymer material. The waveguide structures were fabricated using a layer-by-layer approach, with each layer formed by localized polymerization induced by non-linear absorption of the laser light. Characterization of the fabricated waveguides included optical microscopy, scanning electron microscopy, and optical transmission measurements. The optical properties, such as mode confinement and propagation losses, were evaluated to assess the performance of the additively manufactured waveguides. Results: The experiment successfully demonstrated the additive manufacturing of microstructured optical waveguides using two-photon polymerization. Optical microscopy and scanning electron microscopy revealed the intricate 3D structures with submicron resolution. The measured optical transmission indicated efficient light propagation through the fabricated waveguides. The waveguides exhibited well-defined mode confinement and relatively low propagation losses, showcasing the potential of TPP-based additive manufacturing for photonics applications. The experiment highlighted the advantages of TPP in achieving high-resolution, customized, and functional microstructured optical waveguides. Conclusion: This experiment substantiates the viability of two-photon polymerization as an innovative additive manufacturing technique for producing complex microstructured optical waveguides. The successful fabrication and characterization of these waveguides open doors to further advancements in the field of photonics, enabling the development of high-performance integrated optical devices for various applications.

Keywords: additive manufacturing, microstructured optical waveguides, two-photon polymerization, photonics applications

Procedia PDF Downloads 88
16241 Investigating the Molecular Behavior of H₂O in a CaSO₄·2H₂O Two-Dimensional Nanoscale System

Authors: Manal Alhazmi, Artem Mishchenko

Abstract:

The behavior of molecular fluids and their interaction with other materials at the nanoscale is a complex process. Nanoscale fluids behave very differently from macroscale fluids and interact with other materials in unique ways. It is, therefore, feasible to understand the molecular behavior of H₂O in such two-dimensional nanoscale systems by studying CaSO₄·2H₂O, commonly known as gypsum. In the present study, spectroscopic measurements on a 2D structure of exfoliated gypsum crystals are carried out by Raman and IR spectroscopy. An array of gypsum flakes with thicknesses ranging from 8 nm to 100 nm was observed and analysed through its Raman and IR spectra. The spectral lines of the water molecule stretching modes were also measured in nanoscale gypsum flakes and compared with those of bulk crystals. CaSO₄·2H₂O crystals have Raman and infrared bands at 3341 cm⁻¹ resulting from the weak hydrogen bonds between the water molecules. These internal vibrations of the water molecules, together with external vibrations involving other atoms, are responsible for these bands. There is a shift of about 70 cm⁻¹ in the peak position of thin flakes with respect to the bulk crystal, which is a result of the different atomic arrangement from bulk to thin flake at the nanoscale. An additional peak was observed in the Raman spectra around 2910-3137 cm⁻¹ in thin flakes but is missing in the bulk crystal. This additional peak is attributed to a combined mode of water internal (stretching mode at 3394 cm⁻¹) and external vibrations. In addition to the Raman and infrared analysis of the gypsum 2D structure, electrical measurements were conducted to reveal the transport behavior of water molecules in such systems. The electrical capacitance of the fabricated device was measured and found to be 0.0686 × 10⁻¹² F, and the calculated dielectric constant (ε) is 12.26.

Keywords: gypsum, infrared spectroscopy, Raman spectroscopy, H₂O behavior

Procedia PDF Downloads 96
16240 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong

Abstract:

This paper presents the evaluation of various soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme using computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual survey, such that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive. Thus, a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as suitable additions to the computer vision system to further develop this innovative non-destructive and instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). It was found from a previous study that the ANN model coupled with the apparent electrical resistivity (ρ) can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the following three items were targeted to be added to the computer vision scheme: the apparent electrical resistivity of soil (ρ) measured using a set of four probes arranged in Wenner’s array, the soil strength measured using a modified mini cone penetrometer, and w measured using a set of time-domain reflectometry (TDR) probes. Laboratory proof-of-concept was conducted through a series of seven tests with three types of soils – “Good Earth”, “Soft Clay”, and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w, and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay” and are feasible as complementary methods to the computer vision system.
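
A minimal sketch of the texture step described above is given below: GLCM parameters are computed from a soil image patch with scikit-image. A random 8-bit image stands in for the camera image; in the actual scheme these features, together with resistivity, strength, and water content, feed the ANN classifier.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Sketch only: GLCM textural parameters from a stand-in soil image patch.
# (scikit-image >= 0.19 spells these graycomatrix/graycoprops; older
# releases use greycomatrix/greycoprops.)

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, (128, 128), dtype=np.uint8)   # placeholder image

glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```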

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 226
16239 The Effect of Education given to Parents of Children with Sickle Cell Anemia in Turkey and Chad to Reduce Children's Pain

Authors: Fatima El Zahra Amin, Emine Efe

Abstract:

This study was carried out to evaluate the effect of an education program for parents of children with sickle cell anemia on the knowledge level of parents and on the reduction of pain through non-pharmacological methods used by parents at home. In Turkey, 54 parents, and in Chad, 109 parents agreed to participate in the study. The data were collected by the researcher using a face-to-face interview method. A non-pharmacological treatment information form for parents, a facial expressions rating scale, and a parent education program on non-pharmacological methods used in children with sickle cell anemia were used. It was determined that there were statistically significant differences between the parents from Chad and Turkey in educational status, occupation, disease status, place of residence, family structure, and age. According to the facial expressions rating scale, it was concluded that there was no significant difference between the children's average degree of pain before and after the administration of non-pharmacological methods in the Chad and Turkey groups. It was determined that the educational programs prepared for parents of children with sickle cell anemia in both Turkey and Chad were effective in increasing the knowledge level of parents and also in reducing pain crises with the non-pharmacological methods parents used at home.

Keywords: Chad, child, non-pharmacological treatment methods, nurse, sickle cell anemia, Turkey

Procedia PDF Downloads 254
16238 In-Situ Studies of Cyclohexane Oxidation Using Laser Raman Spectroscopy for the Refinement of Mechanism Based Kinetic Models

Authors: Christine Fräulin, Daniela Schurr, Hamed Shahidi Rad, Gerrit Waters, Günter Rinke, Roland Dittmeyer, Michael Nilles

Abstract:

The reaction mechanisms of many liquid-phase reactions in organic chemistry have not yet been sufficiently clarified. Process conditions of several hundred degrees Celsius and pressures up to ten megapascals complicate sampling and the determination of kinetic data. Spatially resolved in-situ measurements promise new insights. A non-invasive in-situ measurement technique has the advantages that no sample preparation is necessary, there is no change in the sample mixture before analysis, and the sampling does not lead to interventions in the flow. Thus, the goal of our research was the development of a contact-free, spatially resolved measurement technique for kinetic studies of liquid-phase reactions under process conditions. Therefore, we used laser Raman spectroscopy combined with an optically transparent microchannel reactor. To show the performance of the system, we chose the oxidation of cyclohexane as the sample reaction. Cyclohexane oxidation is an economically important process. The products are intermediates for caprolactam and adipic acid, which are starting materials for polyamide 6 and 6.6 production. To maintain high selectivities of 70 to 90%, the reaction is performed in industry at a low conversion of about six percent. As Raman spectroscopy is usually very selective but not very sensitive, the detection of the small product concentrations in cyclohexane oxidation is quite challenging. To meet these requirements, an optical experimental setup was optimized to determine the concentrations by laser Raman spectroscopy with good detection sensitivity. With this measurement technique, spatially resolved kinetic studies of uncatalyzed and homogeneously catalyzed cyclohexane oxidation were carried out to obtain details about the reaction mechanism.

Keywords: in-situ laser Raman spectroscopy, spatially resolved kinetic measurements, homogeneous catalysis, chemistry

Procedia PDF Downloads 326
16237 Effects of Different Drying Methods on the Properties of Viscose Single Jersey Fabrics

Authors: Merve Kucukali Ozturk, Yesim Beceren, Banu Nergis

Abstract:

The study discussed in this paper was conducted to investigate the effects of different drying methods (line drying and tumble drying) on viscose single jersey fabrics knitted from ring-spun yarn.

Keywords: color change, dimensional properties, drying method, fabric tightness, physical properties

Procedia PDF Downloads 277
16236 Dislocation Density-Based Modeling of the Grain Refinement in Surface Mechanical Attrition Treatment

Authors: Reza Miresmaeili, Asghar Heydari Astaraee, Fereshteh Dolati

Abstract:

In the present study, an analytical model based on dislocation density was developed to simulate grain refinement in surface mechanical attrition treatment (SMAT). The correlation between SMAT time and the development of plastic strain, on the one hand, and the evolution of dislocation density, on the other, was established to simulate the grain refinement in SMAT. A dislocation density-based constitutive material law was implemented using a VUHARD subroutine. A random sequence of shots is taken into consideration in the multiple-impact model using the Python programming language and a random function. The simulation technique was to model each impact in a separate run and then transfer the results of each run as initial conditions for the next run (impact). The developed Finite Element (FE) model of multiple impacts describes the coverage evolution in SMAT. Simulations were run to coverage levels as high as 4500%. It is shown that the coverage implemented in the FE model is equal to the experimental coverage. The numerical SMAT coverage parameter conforms well to the well-known Avrami model. Comparison between numerical results and experimental measurements of residual stresses and the depth of the deformed layers confirms the performance of the established FE model for surface engineering evaluations in SMA treatment. X-ray diffraction (XRD) studies of grain refinement, including the resultant grain size and dislocation density, were conducted to validate the established model. The full width at half-maximum of the XRD profiles can be used to measure the grain size. Numerical results and experimental measurements of grain refinement illustrate good agreement and show the capability of the established FE model to predict the gradient microstructure in SMA treatment.
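
A small hedged sketch of checking coverage against the Avrami model mentioned above follows, using the common form C(t) = 1 − exp(−A·t), where t is treatment time (or the number of impact batches) and A is a fitted parameter. The coverage values are invented placeholders, not the simulation output of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch only: fit of the Avrami coverage model C(t) = 1 - exp(-A * t)
# to placeholder coverage data and extrapolation to a later time.

def avrami(t, A):
    return 1.0 - np.exp(-A * t)

t = np.array([1, 2, 4, 8, 16, 32], dtype=float)              # impact batches (assumed)
coverage = np.array([0.18, 0.33, 0.55, 0.80, 0.96, 0.999])   # fraction of surface

(A,), _ = curve_fit(avrami, t, coverage, p0=[0.1])
print(f"A = {A:.3f}; predicted coverage after 64 batches: {avrami(64, A):.4f}")
```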

Keywords: dislocation density, grain refinement, severe plastic deformation, simulation, surface mechanical attrition treatment

Procedia PDF Downloads 129
16235 Estimation of Sediment Transport into a Reservoir Dam

Authors: Kiyoumars Roushangar, Saeid Sadaghian

Abstract:

Although accurate sediment load prediction is very important in the planning, design, operation, and maintenance of water resources structures, the transport mechanism is complex, and deterministic transport models based on simplifying assumptions often lead to large prediction errors. In this research, firstly, two intelligent ANN methods, Radial Basis and General Regression Neural Networks, are adopted to model the total sediment load transported into the Madani Dam reservoir (northern Iran) using the measured data, and then the applicability of the sediment transport methods developed by Engelund and Hansen, Ackers and White, Yang, and Toffaleti for predicting sediment load discharge is evaluated. Based on a comparison of the results, it is found that the GRNN model gives better estimates than the sediment rating curve and the classic methods mentioned.
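
For readers unfamiliar with the GRNN, a hedged sketch is shown below: a General Regression Neural Network is essentially Nadaraya-Watson kernel regression with a Gaussian kernel and a single smoothing parameter. The discharge/sediment pairs are synthetic placeholders for the Madani Dam measurements.

```python
import numpy as np

# Sketch only: a hand-rolled GRNN (Gaussian kernel regression) predicting
# sediment load from water discharge on synthetic data.

def grnn_predict(X_train, y_train, X_new, sigma=0.1):
    # squared Euclidean distances between new points and training points
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))            # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)            # weighted average of targets

rng = np.random.default_rng(0)
Q = rng.uniform(5, 200, 80)                                   # discharge (m^3/s)
Qs = 0.02 * Q ** 1.6 * np.exp(rng.normal(0, 0.2, Q.size))     # sediment load (t/day)

X_train, y_train = np.log(Q[:60, None]), np.log(Qs[:60])
X_test, y_test = np.log(Q[60:, None]), np.log(Qs[60:])
pred = grnn_predict(X_train, y_train, X_test, sigma=0.3)
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print("RMSE (log space):", round(rmse, 3))
```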

Keywords: sediment transport, dam reservoir, RBF, GRNN, prediction

Procedia PDF Downloads 488
16234 Relationship between Conjugated Linoleic Acid Intake, Biochemical Parameters and Body Fat among Adults and Elderly

Authors: Marcela Menah de Sousa Lima, Victor Ushijima Leone, Natasha Aparecida Grande de Franca, Barbara Santarosa Emo Peters, Ligia Araujo Martini

Abstract:

Conjugated linoleic acid (CLA) intake has been consistently related to benefits to human health, including a positive effect on reducing body fat. The aim of the present study was to investigate the association between CLA intake and the biochemical measurements and body composition of adults and the elderly. Subjects/Methods: 287 adults and elderly participants in an epidemiological study in Sao Paulo, Brazil, were included in the present study. Dietary data were obtained from two non-consecutive 24-hour dietary recalls (24HR), body composition was assessed by a dual-energy X-ray absorptiometry (DXA) exam, and a blood collection was performed. Mean differences and correlation tests were performed. For all statistical tests, a significance level of 5% was considered. Results: CLA intake showed a positive correlation with HDL-c levels (r = 0.149; p = 0.011) and negative correlations with VLDL-c levels (r = -0.134; p = 0.023), triglycerides (r = -0.135; p = 0.023), and glycemia (r = -0.171; p = 0.004), as well as a negative correlation with visceral adipose tissue (VAT) (r = -0.124, p = 0.036). Evaluating individuals in two groups according to VAT values, a significant difference in CLA intake was observed (p = 0.041), with the group with the highest VAT values having the lowest fatty acid intake. Conclusions: This study suggests that CLA intake is associated with a better lipid profile and a lower visceral adipose tissue volume, which contributes to the investigation of the effects of CLA on obesity parameters. However, it is necessary to investigate the effects of CLA from milk and dairy products in the control of adiposity.

Keywords: adiposity, dairy products, diet, fatty acids

Procedia PDF Downloads 129
16233 Calibration of Syringe Pumps Using Interferometry and Optical Methods

Authors: E. Batista, R. Mendes, A. Furtado, M. C. Ferreira, I. Godinho, J. A. Sousa, M. Alvares, R. Martins

Abstract:

Syringe pumps are commonly used for drug delivery in hospitals and clinical environments. These instruments are critical in neonatology and oncology, where any variation in the flow rate or drug dosing quantity can lead to severe incidents and even the death of the patient. Therefore, it is very important to determine the accuracy and precision of these devices using suitable calibration methods. The Volume Laboratory of the Portuguese Institute for Quality (LVC/IPQ) uses two different methods to calibrate syringe pumps from 16 nL/min up to 20 mL/min. The interferometric method uses an interferometer to monitor the distance travelled by the pusher block of the syringe pump in order to determine the flow rate. Knowing the internal diameter of the syringe with very high precision, the travelled distance, and the time needed for that travelled distance, it is possible to calculate the flow rate of the fluid inside the syringe and its uncertainty. As an alternative to the gravimetric and interferometric methods, a methodology based on the application of optical technology was also developed to measure flow rates. This method mainly relies on measuring the increase in volume of a drop over time. The objective of this work is to compare the results of the calibration of two syringe pumps using the different methodologies described above. The obtained results were consistent for the three methods used. The uncertainty values were very similar for all three methods, being higher for the optical drop method due to setup limitations.
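
A minimal worked example of the interferometric principle described above follows: the flow rate is the syringe cross-section times the pusher-block travel over time, Q = π·(d/2)²·L/t, with a first-order combined relative uncertainty. The numbers are illustrative placeholders, not LVC/IPQ calibration data.

```python
import numpy as np

# Sketch only: flow rate and combined standard uncertainty from assumed
# diameter, travel distance and time, for the interferometric method.

d, u_d = 4.610e-3, 0.5e-6          # syringe internal diameter and uncertainty (m)
L, u_L = 1.200e-3, 0.1e-6          # pusher-block travel and uncertainty (m)
t, u_t = 600.0, 0.01               # elapsed time and uncertainty (s)

Q = np.pi * (d / 2) ** 2 * L / t                   # m^3/s
Q_ml_min = Q * 1e6 * 60                            # mL/min

# Q is proportional to d^2 * L / t, so the relative variances combine as:
rel_u = np.sqrt((2 * u_d / d) ** 2 + (u_L / L) ** 2 + (u_t / t) ** 2)
print(f"Q = {Q_ml_min:.5f} mL/min  +/- {100 * rel_u:.3f} % (k=1)")
```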

Keywords: calibration, flow, interferometry, syringe pump, uncertainty

Procedia PDF Downloads 102
16232 A Case Study on the Guidelines for Application of Project Management Methods in Infrastructure Projects

Authors: Fernanda Varella Borges, Silvio Burrattino Melhado

Abstract:

Motivated by the importance of public infrastructure projects in the civil construction chain, this research studies project management methods and the characteristics of infrastructure projects. The research aims to improve management efficiency by proposing guidelines for the application of project management methods in infrastructure projects. Through a literature review and case studies, the research analyses two major infrastructure projects underway in Brazil, identifying the critical points for achieving their success. As a result, the proposed guidelines indicate that special attention should be given to the management of stakeholders, focusing on their knowledge and experience, their different interests, the efficient management of their communication, and their behavior in the day-to-day project management process.

Keywords: construction, infrastructure, project management, public projects

Procedia PDF Downloads 482
16231 Global Supply Chain Tuning: Role of National Culture

Authors: Aleksandr S. Demin, Anastasiia V. Ivanova

Abstract:

Purpose: The current economy tends to increase the influence of digital technologies and diminish the human role in management. However, it is impossible to deny that a person still leads a business with his or her own set of values and priorities. The article aims to incorporate the peculiarities of national culture and the characteristics of the supply chain using the quantitative values of national culture obtained by scholars of comparative management (Hofstede, House, and others). Design/Methodology/Approach: The research is based on secondary data in the field of cross-country comparison obtained by Prof. Hofstede and in the GLOBE project. These data are used to design different aspects of the supply chain on both the cross-functional and inter-organizational levels. The connection between a range of principles in general (role assignment, customer service prioritization, coordination of supply chain partners) and in comparative management (acknowledgment of the national peculiarities of the country in which the company operates) is shown through economic and mathematical models, mainly linear programming models. Findings: The combination of the team management wheel concept, the business processes of the global supply chain, and the national culture characteristics lets a transnational corporation form a supply chain crew balanced in costs, functions, and personality. To elaborate an effective customer service policy and logistics strategy for the distribution of goods and services in the country under review, two approaches are offered. The first approach relies exclusively on the customer’s interest in the place of operation, while the second one takes into account the position of the transnational corporation and its previous experience in order to align both the organizational and national cultures. It is advised to assess the effect of integration practice on the achievement of a specific supply chain goal in a specific location via the type of correlation (positive, negative, none) and the value of the national culture indices. Research Limitations: The models developed are intended to be used by transnational companies and business firms located in several nationally different areas. Some of the inputs used to illustrate the application of the methods offered are simulated. That is why the numerical measurements should be used with caution. Practical Implications: The research can be of great interest to supply chain managers who are responsible for the engineering of global supply chains in a transnational corporation and for further activities in doing business in the international arena. The methods, tools, and approaches suggested can also be used by top managers searching for new sources of competitiveness and may be suitable for all staff members who are keen on the national culture traits topic. Originality/Value: The elaborated methods of decision-making with regard to the national environment provide a mathematical and economic basis for finding a comprehensive solution.

Keywords: logistics integration, logistics services, multinational corporation, national culture, team management, service policy, supply chain management

Procedia PDF Downloads 97