Search results for: choice experiments (CE)
4319 Study of the Behavior and Similarities of Flow and Level Transmitters in the Industries
Authors: Valentina Alessandra Carvalho do Vale, Elmo Thiago Lins Cöuras Ford
Abstract:
In view of the requirements of current industrial processes, instrumentation plays a critical role. In this context, this work aims to survey some of the operating characteristics of level and flow transmitters, and to observe their similarities and possible limitations for certain configurations.
Keywords: flow, level, instrumentation, configurations of meters, method of choice of the meters, instrumentation in the industrial processes
Procedia PDF Downloads 579
4318 Determination of Processing Parameters of Decaffeinated Black Tea by Using Pilot-Scale Supercritical CO₂ Extraction
Authors: Saziye Ilgaz, Atilla Polat
Abstract:
There is a need for new processing techniques that ensure the safety and quality of the final product, decaffeinated black tea, while minimizing the adverse environmental impact of extraction solvents and their residue levels in the final product. In this study, pilot-scale supercritical carbon dioxide (SCCO₂) extraction was used to produce decaffeinated black tea in place of solvent extraction. Pressure (250, 375, 500 bar), extraction time (60, 180, 300 min), temperature (55, 62.5, 70 °C), CO₂ flow rate (1, 2, 3 LPM) and co-solvent quantity (0, 2.5, 5 %mol) were selected as extraction parameters. A five-factor Box-Behnken experimental design with three center points was used to generate 46 different processing conditions for caffeine removal from black tea samples. As a result of these 46 experiments, the caffeine content of the black tea samples was reduced from 2.16% to 0-1.81%. The experiments showed that extraction time, pressure, CO₂ flow rate and co-solvent quantity had a great impact on decaffeination yield. Response surface methodology (RSM) was used to optimize the parameters of the supercritical carbon dioxide extraction. The optimum extraction parameters for decaffeinated black tea were as follows: extraction temperature of 62.5 °C, extraction pressure of 375 bar, CO₂ flow rate of 3 LPM, extraction time of 176.5 min and co-solvent quantity of 5 %mol.
Keywords: supercritical carbon dioxide, decaffeination, black tea, extraction
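The Box-Behnken design and response-surface fit described above can be sketched generically. The snippet below is a minimal illustration, not the authors' software: the three-factor demo, the synthetic "yield" function and its coefficients are assumptions used only to exercise the quadratic fit.

```python
import itertools
import numpy as np

def box_behnken(k):
    """Coded Box-Behnken design for k factors (edge runs only): each pair
    of factors is varied over a 2x2 factorial at +/-1 while the remaining
    factors stay at the center level 0."""
    runs = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product((-1.0, 1.0), repeat=2):
            row = [0.0] * k
            row[i], row[j] = a, b
            runs.append(row)
    return np.array(runs)

def quadratic_features(X):
    """Design matrix for a full second-order response-surface model:
    intercept, linear terms, and all squared/interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
    return np.column_stack(cols)

# Three-factor demo: 12 edge runs plus 3 center points.
D = np.vstack([box_behnken(3), np.zeros((3, 3))])
# Synthetic quadratic "yield" used only to exercise the least-squares fit.
y = 90 - 4 * (D[:, 0] - 0.3) ** 2 - 2 * (D[:, 1] + 0.2) ** 2 + 1.5 * D[:, 2]
coef, *_ = np.linalg.lstsq(quadratic_features(D), y, rcond=None)
y_hat = quadratic_features(D) @ coef
```

For five factors the same generator yields the 40 edge runs to which center points are appended, and the fitted quadratic surface is what RSM then optimizes over.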
Procedia PDF Downloads 364
4317 A Comparative Study of Mechanisms across Different Online Social Learning Types
Authors: Xinyu Wang
Abstract:
In the context of the rapid development of Internet technology and the increasing prevalence of online social media, this study investigates the impact of digital communication on social learning. Through three behavioral experiments, we explore both affective and cognitive social learning in online environments. Experiment 1 manipulates the content of the experimental materials and two forms of feedback (emotional valence, sociability, and repetition) to verify whether individuals can achieve online affective social learning through reinforcement using two social learning strategies. Results reveal that both social learning strategies can assist individuals in affective social learning through reinforcement, with feedback-based learning strategies outperforming frequency-dependent strategies. Experiment 2 similarly manipulates the content of the experimental materials and two forms of feedback to verify whether individuals can achieve online cognitive (knowledge) social learning through reinforcement using the two social learning strategies. Results show that, as with online affective social learning, individuals adopt both social learning strategies to achieve cognitive social learning through reinforcement, with feedback-based learning strategies again outperforming frequency-dependent strategies. Experiment 3 simultaneously observes online affective and cognitive social learning by manipulating the content of the experimental materials and feedback under different levels of social pressure. Results indicate that online affective social learning exhibits different learning effects under different levels of social pressure, whereas online cognitive social learning remains unaffected by social pressure, demonstrating more stable learning effects. Additionally, to explore the sustained effects of online social learning and differences in duration among the different types, all three experiments incorporate two test time points.
Results reveal significant differences in pre- and post-test scores for online social learning in Experiments 2 and 3, whereas differences are less apparent in Experiment 1. To accurately measure the sustained effects of online social learning, the researchers conducted a mini-meta-analysis of all effect sizes of online social learning duration. Results indicate that although the overall effect size is small, the effect of online social learning weakens over time.
Keywords: online social learning, affective social learning, cognitive social learning, social learning strategies, social reinforcement, social pressure, duration
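A mini-meta-analysis of effect sizes is typically an inverse-variance-weighted pooling. The sketch below shows the standard fixed-effect calculation; the effect sizes and variances in the example are hypothetical, not the study's data.

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance-weighted (fixed-effect) pooled effect size and its
    standard error: each study is weighted by 1/variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical per-experiment duration effects, not the study's values.
pooled, se = fixed_effect_meta([0.30, 0.15, 0.22], [0.04, 0.05, 0.03])
```

With equal variances the pooled estimate reduces to the plain mean; unequal variances shift weight toward the more precise experiments.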
Procedia PDF Downloads 46
4316 Influence of Silica Fume Addition on Concrete
Authors: Gaurav Datta, Sourav Ghosh, Rahul Roy
Abstract:
The incorporation of silica fume into normal concrete is now routine for producing tailor-made high-strength and high-performance concrete. The design parameters increase with the incorporation of silica fume into conventional concrete, and the mix proportioning becomes complex. The main objective of this paper is to investigate mechanical properties such as compressive strength, permeability, porosity, density, modulus of elasticity, compacting factor and slump of concrete incorporating silica fume. Five concrete mixes incorporating silica fume were cast for the experiments. The experiments were carried out by replacing cement with different percentages of silica fume at a single constant water-cementitious materials (w/cm) ratio, keeping the other mix design variables constant. Cement was replaced by silica fume at 0%, 5%, 10%, 15% and 20% for a w/cm ratio of 0.40. For all mixes, compressive strengths were determined at 24 hours, 7 and 28 days on 100 mm and 150 mm cubes. The other properties (permeability, porosity, density, modulus of elasticity, compacting factor and slump) were also determined for the five mixes.
Keywords: high performance concrete, high strength concrete, silica fume, strength
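Replacing cement with silica fume at a constant w/cm ratio is a simple proportioning calculation. The sketch below assumes a total cementitious content of 400 kg/m³, a figure not given in the abstract and used purely for illustration.

```python
def batch_masses(cementitious_total, replacement_pct, w_cm=0.40):
    """Split a fixed total cementitious content (kg per m^3 of concrete)
    between cement and silica fume at the given replacement level, while
    holding the water-cementitious materials ratio constant."""
    silica_fume = cementitious_total * replacement_pct / 100.0
    return {
        "cement": cementitious_total - silica_fume,
        "silica_fume": silica_fume,
        "water": w_cm * cementitious_total,
    }

mix_10 = batch_masses(400.0, 10)  # 400 kg/m^3 is an assumed figure
```

Because the total cementitious mass is held fixed, the water demand (and hence w/cm) is identical across all five replacement levels.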
Procedia PDF Downloads 290
4315 Experimental Study on the Heat Transfer Characteristics of the 200 W Class Woofer Speaker
Authors: Hyung-Jin Kim, Dae-Wan Kim, Moo-Yeon Lee
Abstract:
The objective of this study is to experimentally investigate the heat transfer characteristics of 200 W class woofer speaker units under different input voice signals. The temperature and heat transfer characteristics of the 200 W class woofer speaker unit were experimentally tested with several input voice signals of 1500 Hz, 2500 Hz and 5000 Hz, respectively. From the experiments, it can be observed that the temperature of the woofer speaker unit, including the voice-coil part, increases as the frequency of the input voice signal decreases. Also, the temperature difference between the measured points of the voice coil increases with decreasing input signal frequency. In addition, the heat transfer of the woofer speaker at an input voice signal of 1500 Hz is 40% higher than at 5000 Hz at a measuring time of 200 seconds. It can be concluded from the experiments that the temperature of the voice coil initially increases rapidly with time and, after a certain period of time, increases exponentially; during this time-dependent temperature change, the high-frequency signal is more stable than the low-frequency signal.
Keywords: heat transfer, temperature, voice coil, woofer speaker
Procedia PDF Downloads 360
4314 Improvements in Transient Testing in the Transient REActor Test (TREAT) with a Choice of Filter
Authors: Harish Aryal
Abstract:
The safe and reliable operation of nuclear reactors has always been one of the topmost priorities in the nuclear industry. Transient testing allows us to understand the time-dependent behavior of the neutron population in response to either a planned change in reactor conditions or unplanned circumstances. These unforeseen conditions might occur due to sudden reactivity insertions, feedback, power excursions, instabilities, and accidents. To study such behavior, we need transient testing, which is like car crash testing for estimating the durability and strength of a car design. In nuclear design, transient testing can simulate a wide range of accidents due to sudden reactivity insertions and helps to study the feasibility and integrity of the fuel to be used in certain reactor types. This testing involves a high neutron flux environment and real-time imaging technology, with advanced instrumentation of appropriate accuracy and resolution to study the fuel slumping behavior. With the aid of transient testing and adequate imaging tools, it is possible to test the safety basis for reactor and fuel designs, which serves as a gateway to licensing advanced reactors in the future. To that end, it is crucial to fully understand advanced imaging techniques both analytically and via simulations. This paper presents an innovative method of supporting real-time imaging of fuel pins and other structures during transient testing. The major fuel-motion detection device studied in this work is the hodoscope, which requires collimators. This paper provides 1) an MCNP model and simulation of a Transient Reactor Test (TREAT) core with the central fuel element replaced by a slotted fuel element that provides an open path between the test samples and a hodoscope detector, and 2) a choice of filter to improve image resolution.
Keywords: hodoscope, transient testing, collimators, MCNP, TREAT, hodogram, filters
Procedia PDF Downloads 77
4313 Fruits and Vegetable Consumers' Behaviour towards Organised Retailers: Evidence from India
Authors: K. B. Ramappa, A. V. Manjunatha
Abstract:
Consumerism in India is witnessing unprecedented growth, driven by favourable demographics, a rising young and working population, rising income levels, urbanization and growing brand orientation. In addition, the increasing level of awareness of health, hygiene and quality has made consumers think about fairly traded goods and brands. This has made retailing extremely important, because without retailers consumers would not have access to day-to-day products. The increased competition among retailers has contributed significantly to rising consumer awareness of quality products and brand loyalty. Many existing empirical studies have mainly focused on the net savings of consumers at organised retail vis-à-vis unorganised retail shops. In this article, the authors analyse Bangalore consumers' attitudes towards buying fruits and vegetables and their choice of retail outlets. Primary data were collected from 100 consumers in Bangalore City during October 2014. Sample consumers buying at supermarkets, convenience stores and hypermarkets were purposively selected. The collected data were analyzed using descriptive statistics and a multinomial logit model. It was found that, among all variables, quality and prices were the major factors accounting for buying fruits and vegetables at organized retail shops. The empirical results of the multinomial logit model reveal that annual net income was positively associated with Big Bazar and Food World consumers and negatively associated with Reliance Fresh, More and Nilgiris consumers, as compared with HOPCOMS consumers. Monthly expenditure on fruits and vegetables was positively related, and the age of the consumer negatively related, to the choice of buying at modern retail markets.
Consumers were willing to buy at modern retail outlets irrespective of the distance.
Keywords: organized retailers, consumers' attitude, consumers' preference, fruits, vegetables, multinomial logit, Bangalore
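The multinomial logit model used above assigns each consumer softmax probabilities over retail outlets from linear utilities in the covariates. A minimal sketch, with entirely hypothetical coefficients and HOPCOMS fixed as the base alternative:

```python
import numpy as np

def mnl_probabilities(x, betas):
    """Multinomial logit choice probabilities P(j) = exp(V_j)/sum_k exp(V_k)
    with linear utilities V_j = betas[j] @ x. Fixing one row of betas at
    zero makes that alternative the base category."""
    v = betas @ x
    v = v - v.max()            # shift for numerical stability
    e = np.exp(v)
    return e / e.sum()

# Illustrative only: covariates x = [income, monthly F&V spend, age] in
# arbitrary standardized units; alternatives = [HOPCOMS (base), Big Bazar,
# Reliance Fresh]; all coefficients are made up.
betas = np.array([
    [0.0, 0.0, 0.0],
    [0.8, 0.5, -0.3],
    [-0.4, 0.6, -0.2],
])
p = mnl_probabilities(np.array([1.0, 0.5, 0.2]), betas)
```

The signs of the fitted coefficients are what the abstract reports: a positive income coefficient for an outlet means higher income raises its choice probability relative to the base.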
Procedia PDF Downloads 413
4312 H2/He and H2O/He Separation Experiments with Zeolite Membranes for Nuclear Fusion Applications
Authors: Rodrigo Antunes, Olga Borisevich, David Demange
Abstract:
In future nuclear fusion reactors, tritium self-sufficiency will be ensured by tritium (³H) production via reactions between the fusion neutrons and lithium. To favor tritium breeding, a neutron multiplier must also be used. Both the tritium breeder and the neutron multiplier will be placed in the so-called Breeding Blanket (BB). For the European Helium-Cooled Pebble Bed (HCPB) BB concept, tritium production and neutron multiplication will be ensured by neutron bombardment of Li₄SiO₄ and Be pebbles, respectively. The produced tritium is extracted from the pebbles by purging them with large flows of He (~10⁴ Nm³h⁻¹), doped with small amounts of H2 (~0.1 vol%) to promote tritium extraction via isotopic exchange (producing HT). Due to the presence of oxygen in the pebbles, the production of tritiated water is unavoidable. Therefore, the purging gas downstream of the BB will be composed of Q2/Q2O/He (Q = ¹H, ²H, ³H), with Q2/Q2O down to ppm levels, and must be further processed for tritium recovery. A two-stage continuous approach, in which zeolite membranes (ZMs) are followed by a catalytic membrane reactor (CMR), has recently been proposed to fulfil this task. Tritium recovery from Q2/Q2O/He is ensured by the CMR, which requires a reduction of the gas flow coming from the BB and a pre-concentration of Q2 and Q2O to be efficient. For this reason, and to keep this stage at reasonable dimensions, ZMs are required upstream to reduce the He flows as much as possible and concentrate the Q2/Q2O species. Therefore, experimental activities have been carried out at the Tritium Laboratory Karlsruhe (TLK) to test the separation performance of different zeolite membranes for H2/H2O/He. First experiments have been performed with binary mixtures of H2/He and H2O/He with commercial MFI-ZSM5 and NaA zeolite-type membranes.
Only the MFI-ZSM5 membrane demonstrated selectivity towards H2, with a separation factor around 1.5 and H2 permeances around 0.72 µmol m⁻² s⁻¹ Pa⁻¹, largely independent of feed concentration in the range 0.1 vol% to 10 vol% H2/He. The experiments with H2O/He demonstrated that the separation factor towards H2O is highly dependent on feed concentration and temperature. For instance, at 30 °C the separation factor with NaA is below 2 at 0.2 vol% H2O/He and around 1000 at 5 vol% H2O/He. Overall, both membranes demonstrated complementary results at equivalent temperatures. In fact, at low feed concentrations (≤ 1 vol% H2O/He) MFI-ZSM5 separates better than NaA, whereas the latter has higher separation factors at higher inlet water content (≥ 5 vol% H2O/He). In this contribution, the results obtained with both MFI-ZSM5 and NaA membranes for H2/He and H2O/He mixtures at different concentrations and temperatures are compared and discussed.
Keywords: nuclear fusion, gas separation, tritium processes, zeolite membranes
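The separation factor quoted above is, for a binary mixture, the permeate-side mole ratio divided by the feed-side mole ratio. A minimal sketch; the mole fractions in the example are illustrative, not measured values:

```python
def separation_factor(y_permeate, y_feed):
    """Binary separation factor:
    alpha = (y/(1-y)) in the permeate divided by (x/(1-x)) in the feed,
    where y, x are mole fractions of the preferentially permeating species."""
    return (y_permeate / (1.0 - y_permeate)) / (y_feed / (1.0 - y_feed))

alpha = separation_factor(0.50, 0.05)  # illustrative mole fractions
```

A value of 1 means no separation at all, which is why the H2/He factor of about 1.5 is modest while the H2O/He factor of about 1000 indicates a strongly water-selective membrane.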
Procedia PDF Downloads 288
4311 Effect of Precursors Aging Time on the Photocatalytic Activity of ZnO Thin Films
Authors: N. Kaneva, A. Bojinova, K. Papazova
Abstract:
Thin ZnO films are deposited on glass substrates via the sol-gel method and dip-coating. The films are prepared from zinc acetate dihydrate as the starting reagent. The as-prepared ZnO sol is then aged for different periods (0, 1, 3, 5, 10, 15, and 30 days), and nanocrystalline thin films are deposited from the various sols. The effect of the ZnO sol aging time on the structural and photocatalytic properties of the films is studied. The film surface is examined by Scanning Electron Microscopy. The effect of the aging time of the starting solution is studied with respect to the photocatalytic degradation of Reactive Black 5 (RB5), monitored by UV-vis spectroscopy. The experiments are conducted under UV-light illumination and in complete darkness. The variation of the absorption spectra shows the degradation of RB5 dissolved in water as a result of the reaction occurring on the surface of the films, promoted by UV irradiation. The initial dye concentrations (5, 10 and 20 ppm) and the aging time are varied during the experiments. The results show that increasing the aging time of the starting solution generally promotes photocatalytic activity. The thin films obtained from the ZnO sol aged for 30 days give the best photocatalytic degradation of the dye (97.22%), in comparison with the freshly prepared ones (65.92%). The samples and photocatalytic experimental results are reproducible. Moreover, all films exhibit substantial activity both under UV light and in darkness, which is promising for the development of new ZnO photocatalysts by the sol-gel method.
Keywords: ZnO thin films, sol-gel, photocatalysis, aging time
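The degradation percentages quoted above follow from the drop in dye absorbance, assuming absorbance is proportional to concentration (Beer-Lambert regime). A minimal sketch:

```python
def degradation_pct(a0, at):
    """Photocatalytic degradation efficiency from UV-vis absorbance:
    D(%) = (A0 - At) / A0 * 100, assuming absorbance tracks dye
    concentration (Beer-Lambert regime)."""
    return (a0 - at) / a0 * 100.0

d = degradation_pct(1.0, 0.0278)  # illustrative absorbance readings
```

The absorbance values here are made up; only the formula reflects the standard way such percentages are computed.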
Procedia PDF Downloads 382
4310 Numerical Investigation on Optimizing Fatigue Life in a Lap Joint Structure
Authors: P. Zamani, S. Mohajerzadeh, R. Masoudinejad, K. Farhangdoost
Abstract:
The riveting process is one of the important ways of fastening lap joints in aircraft structures. Failure of aircraft lap joints depends directly on the stress field in the joint. An important application of the riveting process is in the construction of aircraft fuselage structures. In this paper, a 3D finite element analysis is carried out in order to optimize the residual stress field in a riveted lap joint and to estimate its fatigue life. A number of experiments are then designed and analyzed using design of experiments (DOE), and the Taguchi method is used to select an optimized case among the different levels of each factor. In addition, the factor which most affects the residual stress field is identified. The optimized case provides the maximum residual stress field. The fatigue life of the optimized joint is estimated by the Paris-Erdogan law. Stress intensity factors (SIFs) are calculated using both finite element analysis and an experimental formula, taking into account the effects of the residual stress field, geometry, and secondary bending. Good agreement is found between the results of the two methods. Comparison between the optimized fatigue life and the fatigue life of other joints shows an improvement in the joint's life.
Keywords: fatigue life, residual stress, riveting process, stress intensity factor, Taguchi method
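Fatigue life under the Paris-Erdogan law is obtained by integrating da/dN = C(ΔK)^m between the initial and critical crack lengths. The sketch below uses a generic ΔK = YΔσ√(πa) and made-up material constants; it is not the joint geometry or SIF solution of the paper.

```python
import math

def paris_life(a0, ac, C, m, dsigma, Y=1.0, steps=20000):
    """Cycles to grow a crack from a0 to ac under da/dN = C*(dK)^m with
    dK = Y*dsigma*sqrt(pi*a); trapezoidal integration of dN = da/(C*dK^m)."""
    da = (ac - a0) / steps
    cycles = 0.0
    for i in range(steps):
        a1 = a0 + i * da
        a2 = a1 + da
        g1 = 1.0 / (C * (Y * dsigma * math.sqrt(math.pi * a1)) ** m)
        g2 = 1.0 / (C * (Y * dsigma * math.sqrt(math.pi * a2)) ** m)
        cycles += 0.5 * (g1 + g2) * da
    return cycles

# Made-up constants (units: MPa and m); not the paper's joint or material.
N = paris_life(a0=0.001, ac=0.01, C=1e-11, m=3.0, dsigma=100.0)
```

For m ≠ 2 the integral also has a closed form, which is a convenient check on the numerical integration.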
Procedia PDF Downloads 452
4309 Study on Breakdown Voltage Characteristics of Different Types of Oils with Contaminations
Authors: C. Jouhar, B. Rajesh Kamath, M. K. Veeraiah, M. Z. Kurian
Abstract:
Petroleum-based mineral oils have long been used for liquid insulation in high-voltage equipment. Mineral oils are widely used as insulation in transmission and distribution power transformers, capacitors and other high-voltage equipment. Petroleum-based insulating oils have excellent dielectric properties, such as high electric field strength, low dielectric losses and good long-term performance. For environmental reasons, however, alternative liquid insulants need to be explored. The influence of particles on voltage breakdown in insulating oil and other liquids has been recognized for many years; particles influence both AC and DC voltage breakdown in insulating oil. Experiments are conducted under AC voltage. The breakdown process starts with a microscopic bubble, a region in which ions or electrons can initiate avalanches. Insulating liquids derive their dielectric strength from their much higher density compared to gases. Experiments are carried out under high-voltage AC (HVAC) in different types of oils, namely castor oil, vegetable oil and mineral oil. The breakdown voltage (BDV) in the presence of moisture and particle contamination in the different types of oils is studied. The BDV of vegetable oil is the best among the oils without contamination, while the BDV of mineral oil is the best in the presence of contamination.
Keywords: breakdown voltage, high voltage AC, insulating oil, oil breakdown
Procedia PDF Downloads 341
4308 The Efficacy of Methylphenidate vs Atomoxetine in Treating Attention Deficit/Hyperactivity Disorder in Children and Adolescents
Authors: Gadia Duhita, Noorhana, Tjhin Wiguna
Abstract:
Background: ADHD is the most common behavioural disorder in Indonesia. A stimulant, specifically methylphenidate, has been the first drug of choice for ADHD treatment for more than half a century. During the last decade, non-stimulant therapy (atomoxetine) for ADHD treatment has been developed. Growing evidence of its efficacy, and the difference in its side-effect profile from stimulant therapy, mean that methylphenidate's position as the first-line therapy for ADHD needs re-evaluation. Both methylphenidate and atomoxetine have proven themselves against placebo in reducing core symptoms of ADHD, and more recent studies directly compare the efficacy of the two drugs. Objective: The objective of this paper is to find out whether methylphenidate or atomoxetine is superior to the other. This paper assesses the validity, importance, and applicability of the currently available evidence comparing the effectiveness, efficacy, and safety of methylphenidate and atomoxetine for treatment of children and adolescents with ADHD. Method: Articles were searched for through the PubMed and Cochrane databases with "attention deficit/hyperactivity disorder OR adhd", "methylphenidate", and "atomoxetine" as the search keywords. Two relevant and eligible articles were chosen, using inclusion and exclusion criteria, to be critically appraised. Result: The study by Hazel et al. showed that the efficacy of methylphenidate and atomoxetine is comparable for the treatment of ADHD in children and adolescents: 53.6% (95% CI 48.5%-58.4%) of patients responded to treatment with atomoxetine and 54.4% (95% CI 47.6%-61.1%) to methylphenidate, a difference in proportions of -0.9% (95% CI -9.2%-7.5%). The other study, by Hanwella et al., also showed that the efficacy of atomoxetine was not inferior to methylphenidate (SMD = 0.09, 95% CI -0.08-0.26) (Z = 1.06, p = 0.29).
However, the sub-group analysis showed that OROS methylphenidate is more effective than atomoxetine (SMD = 0.32, 95% CI 0.12-0.53) (Z = 3.05, p < 0.02). Conclusion: The efficacy of methylphenidate and atomoxetine in reducing symptoms of ADHD is comparable; neither has been proven inferior to the other. The choice of pharmacological treatment for children and adolescents with ADHD should be made based on the contraindications and side-effect profile of each drug.
Keywords: attention deficit/hyperactivity disorder, ADHD, atomoxetine, methylphenidate
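The SMD figures quoted above are standardized mean differences (Cohen's d with a pooled standard deviation). A minimal sketch with hypothetical group summaries:

```python
import math

def smd(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with pooled standard deviation
    (Cohen's d): d = (m1 - m2) / s_pooled."""
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                         / (n1 + n2 - 2))
    return (m1 - m2) / s_pooled

d = smd(10.0, 2.0, 50, 8.0, 2.0, 50)  # hypothetical group summaries
```

Dividing by the pooled SD puts symptom-score differences from different instruments on a common, unitless scale, which is what allows an SMD of 0.09 to be read as a negligible difference and 0.32 as a modest one.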
Procedia PDF Downloads 474
4307 Predictive Functional Control with Disturbance Observer for Tendon-Driven Balloon Actuator
Authors: Jun-ya Nagase, Toshiyuki Satoh, Norihiko Saga, Koichi Suzumori
Abstract:
In recent years, Japanese society has been aging, engendering a labour shortage of young workers. Robots are therefore expected to perform tasks such as rehabilitation, nursing elderly people, and day-to-day work support for elderly people. The pneumatic balloon actuator is a rubber artificial muscle developed for use in a robot hand in such environments. This actuator has a long stroke and a high power-to-weight ratio compared with present pneumatic artificial muscles; moreover, its dynamic characteristics resemble those of human muscle. This study evaluated the force-control characteristics of the balloon actuator using a predictive functional control (PFC) system with a disturbance observer. Predictive functional control is a model-based predictive control (MPC) scheme that predicts the future outputs of the actual plant over the prediction horizon and computes the control effort over the control horizon at every sampling instant. For this study, a 1-link finger system using a pneumatic balloon actuator was developed, and experiments on PFC with a disturbance observer were performed. These experiments demonstrate the feasibility of this control approach for a pneumatic balloon actuator in a robot hand.
Keywords: disturbance observer, pneumatic balloon, predictive functional control, rubber artificial muscle
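For a first-order plant model, the core PFC idea, choosing the control so the predicted output lands on a reference trajectory that decays toward the setpoint, reduces to one line of algebra. The sketch below is a single-coincidence-point illustration with made-up plant numbers, and omits the disturbance observer used in the study.

```python
def pfc_step(y, setpoint, a, b, lam):
    """One-coincidence-point predictive functional control for the model
    y[k+1] = a*y[k] + b*u[k]: pick u so the one-step-ahead prediction lands
    on a reference trajectory that closes the gap to the setpoint by a
    factor lam each sample (0 < lam < 1)."""
    y_target = setpoint - lam * (setpoint - y)  # reference trajectory point
    return (y_target - a * y) / b

# Closed-loop simulation on the nominal model (no disturbance observer).
a, b, lam, setpoint = 0.9, 0.1, 0.8, 1.0  # made-up plant and tuning
y = 0.0
for _ in range(60):
    u = pfc_step(y, setpoint, a, b, lam)
    y = a * y + b * u  # plant assumed to match the model exactly
```

When the plant matches the model, the tracking error shrinks geometrically by lam each sample; a disturbance observer is what restores this behaviour when the real actuator deviates from the model.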
Procedia PDF Downloads 453
4306 Comparison of Deep Brain Stimulation Targets in Parkinson's Disease: A Systematic Review
Authors: Hushyar Azari
Abstract:
Aim and background: Deep brain stimulation (DBS) is regarded as an important therapeutic choice for Parkinson's disease (PD). The two most common targets for DBS are the subthalamic nucleus (STN) and the globus pallidus internus (GPi). This review was conducted to compare the clinical effectiveness of these two targets. Methods: A systematic literature search in the electronic databases Embase, Cochrane Library and PubMed was restricted to English-language publications from 2010 to 2021. Specified MeSH terms were searched in all databases. Studies that evaluated the Unified Parkinson's Disease Rating Scale (UPDRS) III were selected by meeting the following criteria: (1) compared both GPi and STN DBS; (2) had at least a three-month follow-up period; (3) had at least five participants in each group; (4) were conducted after 2010. Study quality assessment was performed using the Modified Jadad Scale. Results: 3577 potentially relevant articles were identified; of these, 3569 were excluded based on title and abstract, duplicate and unsuitable article removal. Eight articles satisfied the inclusion criteria and were scrutinized (458 PD patients). According to the Modified Jadad Scale, the majority of included studies had low evidence quality, which is a limitation of this review. Five studies reported no statistically significant between-group difference in improvements in UPDRS III scores. At the same time, some results in terms of pain, action tremor, rigidity, and urinary symptoms indicated that STN DBS might be the better choice, whereas regarding adverse effects, GPi was superior. Conclusion: It is clear that larger randomized clinical trials with longer follow-up periods and control groups are needed to decide which target is more efficient for deep brain stimulation in Parkinson's disease and imposes fewer adverse effects on patients.
Meanwhile, STN seems the more reasonable choice according to the results of this systematic review.
Keywords: brain stimulation, globus pallidus, Parkinson's disease, subthalamic nucleus
Procedia PDF Downloads 179
4305 Design and Validation of Cutting Performance of Ceramic Matrix Composites Using FEM Simulations
Authors: Zohaib Ellahi, Guolong Zhao
Abstract:
Ceramic matrix composite (CMC) materials possess high strength, wear resistance and anisotropy; machining this material is therefore very difficult and costly. In this research, FEM simulations and physical experiments were carried out to assess the machinability of carbon fiber reinforced silicon carbide (C/SiC) using a polycrystalline diamond (PCD) tool in a slot milling process. A finite element model was generated in Abaqus/CAE software, and the milling operation was performed using a user-defined material subroutine. The effect of different milling parameters on cutting forces and stresses was calculated through FEM simulations and compared with experimental results to validate the finite element model. Cutting forces in the x- and y-directions were obtained from both the experiments and the finite element model, and good agreement was found between them. With increasing cutting speed, the resultant cutting forces decrease; they increase with increased feed per tooth and depth of cut. When machining was performed along the fiber direction, the stresses generated near the tool edge were at a minimum, increasing with the fiber cutting angle.
Keywords: experimental & numerical investigation, C/SiC cutting performance analysis, milling of CMCs, CMC composite stress analysis
Procedia PDF Downloads 86
4304 Effects of Cerium Oxide Nanoparticle Addition in Diesel and Diesel-Biodiesel Blends on the Performance Characteristics of a CI Engine
Authors: Abbas Ali Taghipoor Bafghi, Hosein Bakhoda, Fateme Khodaei Chegeni
Abstract:
An experimental investigation is carried out to establish the performance characteristics of a compression ignition engine using cerium oxide nanoparticles as an additive in neat diesel and diesel-biodiesel blends. In the first phase of the experiments, the stability of neat diesel and diesel-biodiesel fuel blends with the addition of cerium oxide nanoparticles is analyzed. After a series of experiments, it is found that blends subjected to high-speed blending followed by ultrasonic bath treatment show improved stability. In the second phase, performance characteristics are studied using the stable fuel blends in a single-cylinder four-stroke engine coupled with an electrical dynamometer and a data acquisition system. The cerium oxide acts as an oxygen-donating catalyst and provides oxygen for combustion. The activation energy of cerium oxide acts to burn off carbon deposits within the engine cylinder at the wall temperature and prevents the deposition of non-polar compounds on the cylinder wall, resulting in a reduction in HC emissions. The tests revealed that cerium oxide nanoparticles can be used as an additive in diesel and diesel-biodiesel blends to significantly improve complete combustion of the fuel.
Keywords: engine, cerium oxide, biodiesel, deposit
Procedia PDF Downloads 345
4303 Exploring Data Leakage in EEG-Based Brain-Computer Interfaces: Overfitting Challenges
Authors: Khalida Douibi, Rodrigo Balp, Solène Le Bars
Abstract:
In the medical field, applications involving human experiments are frequently limited by small sample sizes, which makes the training of machine learning models quite sensitive and therefore neither very robust nor generalizable. This is notably the case in Brain-Computer Interface (BCI) studies, where the sample size rarely exceeds 20 subjects or a small number of trials. To address this problem, several resampling approaches are often used during the data preparation phase, a critical step in any data science analysis process. One naive approach commonly applied by data scientists consists in transforming the entire database before the resampling phase. However, this can cause a model's performance to be incorrectly estimated when making predictions on unseen data. In this paper, we explore the effect of data leakage observed during our BCI experiments on device control through the real-time classification of SSVEPs (Steady-State Visually Evoked Potentials). We also study potential ways to ensure optimal validation of the classifiers during the calibration phase to avoid overfitting. The results show that the scaling step is crucial for some algorithms, and that it should be applied after the resampling phase to avoid data leakage and improve results.
Keywords: data leakage, data science, machine learning, SSVEP, BCI, overfitting
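The leakage discussed above is easy to reproduce: computing scaling statistics on the full dataset lets information from the held-out fold contaminate training. A minimal numpy sketch (toy data, not EEG):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 4))  # toy feature matrix
train, test = X[:80], X[80:]                       # the "resampling" split

# Leaky: normalisation statistics computed on the FULL dataset, so the
# held-out fold has already influenced the transform.
mu_leaky, sd_leaky = X.mean(axis=0), X.std(axis=0)

# Correct: statistics fitted on the training fold only, then applied
# unchanged to the held-out fold.
mu, sd = train.mean(axis=0), train.std(axis=0)
train_scaled = (train - mu) / sd
test_scaled = (test - mu) / sd
```

The leaky and correct statistics differ, and with the small samples typical of BCI studies that difference is large enough to inflate estimated performance on the held-out data.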
Procedia PDF Downloads 153
4302 UF as Pretreatment of RO for Tertiary Treatment of Biologically Treated Distillery Spentwash
Authors: Pinki Sharma, Himanshu Joshi
Abstract:
Distillery spentwash contains high chemical oxygen demand (COD), biological oxygen demand (BOD), color, total dissolved solids (TDS) and other contaminants even after biological treatment, so the effluent cannot be discharged as such into surface water bodies or onto land without further treatment. Reverse osmosis (RO) treatment plants have been installed in many distilleries at the tertiary level, but at most sites these plants do not work properly due to the high concentration of organic matter and other contaminants in biologically treated spentwash. To make membrane treatment a proven and reliable technology, proper pre-treatment is mandatory. In the present study, ultrafiltration (UF) was used as pre-treatment for RO at the tertiary stage. The operating parameters initial pH (pH₀: 2-10), trans-membrane pressure (TMP: 4-20 bar) and temperature (T: 15-43 °C) were used in the experiments with the UF system. Experiments were optimized over the different operating parameters in terms of COD, color, TDS and TOC removal using response surface methodology (RSM) with a central composite design. The results showed removal of COD, color and TDS of 62%, 93.5% and 75.5%, respectively, with UF at the optimized conditions, and the permeate flux increased from 17.5 l/m²/h (RO alone) to 38 l/m²/h (UF-RO). The performance of the RO system was thus greatly improved, both in terms of pollutant removal and water recovery.
Keywords: bio-digested distillery spentwash, reverse osmosis, response surface methodology, ultra-filtration
Procedia PDF Downloads 347
4301 Dynamics Characterizations of Dielectric Electro-Active Polymer Pull Actuator for Vibration Control
Authors: A. M. Wahab, E. Rustighi
Abstract:
Elastomeric dielectric material has recently become a new alternative for actuator technology. The ability of dielectric elastomers placed between two electrodes to withstand large strain when the electrodes are charged has attracted the attention of many researchers studying this material for actuator technology. Thus, in the past few years Danfoss Ventures A/S has established its own dielectric electro-active polymer (DEAP), called PolyPower. The main objective of this work was to investigate the dynamic characteristics, for vibration control, of a PolyPower actuator folded in a 'pull' configuration. A range of experiments was carried out on the folded actuator, including passive (without electrical load) and active (with electrical load) testing. For both categories, static and dynamic testing was done to determine the behavior of the folded DEAP actuator. Voltage-strain experiments show that the DEAP folded actuator is a non-linear system. It is also shown that the voltage supplied has no effect on the natural frequency. Finally, varying the AC voltage with different amplitudes and frequencies reveals the parameters that influence the performance of the DEAP folded actuator. As a result, the actuator performance was dominated by the frequency dependence of the elastic response and was less influenced by the dielectric properties. Keywords: dielectric electro-active polymer, pull actuator, static, dynamic, electromechanical
Procedia PDF Downloads 251
4300 High Pressure Multiphase Flow Experiments: The Impact of Pressure on Flow Patterns Using an X-Ray Tomography Visualisation System
Authors: Sandy Black, Calum McLaughlin, Alessandro Pranzitelli, Marc Laing
Abstract:
Multiphase flow structures of two-phase multicomponent fluids were experimentally investigated in a large-diameter high-pressure pipeline at up to 130 bar at TÜV SÜD's National Engineering Laboratory Advanced Multiphase Facility. One of the main objectives of the experimental test campaign was to evaluate the impact of pressure on multiphase flow patterns, as much of the existing information is based on low-pressure measurements. The experiments were performed in horizontal and vertical orientations in both 4-inch and 6-inch pipework using nitrogen, Exxsol™ D140 oil, and a 6% aqueous solution of NaCl at incremental pressures from 10 bar to 130 bar. To visualise the detailed structure of the flow over the entire cross-section of the pipe, a fast-response X-ray tomography system was used. A wide range of superficial velocities, from 0.6 m/s to 24.0 m/s for gas and from 0.04 m/s to 6.48 m/s for liquid, was examined to evaluate different flow regimes. The results illustrated the suppression of instabilities between the gas and the liquid at the measurement location, and intermittent or slug flow was observed less frequently as the pressure was increased. CFD modelling of low- and high-pressure simulations was able to successfully predict the likelihood of intermittent flow; however, further tuning is necessary to predict the slugging frequency. The dataset generated is unique, as limited datasets exist above 100 bar, and is of considerable value to multiphase flow specialists and numerical modellers. Keywords: computational fluid dynamics, high pressure, multiphase, X-ray tomography
Procedia PDF Downloads 143
4299 Removal of Basic Yellow 28 Dye from Aqueous Solutions Using Plastic Wastes
Authors: Nadjib Dahdouh, Samira Amokrane, Elhadj Mekatel, Djamel Nibou
Abstract:
The removal of Basic Yellow 28 (BY28) from aqueous solutions by waste PMMA plastic was investigated. The characteristics of the waste PMMA were determined by SEM, FTIR and chemical composition analysis. The effects of solution pH, initial BY28 concentration C, solid/liquid ratio R and temperature T were studied in batch experiments. The Freundlich and Langmuir models were applied to the adsorption process, and it was found that the equilibrium followed the Langmuir adsorption isotherm well. Kinetic models applied to the adsorption of BY28 on the PMMA were compared for the pseudo-first-order and pseudo-second-order cases, and both were found to correlate with the experimental data. An intraparticle diffusion model was also used in these experiments. The thermodynamic parameters, namely the enthalpy ∆H°, entropy ∆S° and free energy ∆G° of adsorption of BY28 on PMMA, were determined. The negative values of the Gibbs free energy ∆G° indicated the spontaneity of the adsorption of BY28 by PMMA, the negative values of ∆H° revealed the exothermic nature of the process, and the negative values of ∆S° suggest the stability of BY28 on the surface of the waste PMMA. Keywords: removal, waste PMMA, BY28 dye, equilibrium, kinetic study, thermodynamic study
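The Langmuir fit reported above can be illustrated with a small sketch (assuming SciPy; the equilibrium data and parameter values below are synthetic, not the study's measurements): the isotherm qe = qm·KL·Ce / (1 + KL·Ce) is fitted by non-linear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, KL):
    """Langmuir isotherm: qe = qm * KL * Ce / (1 + KL * Ce)."""
    return qm * KL * Ce / (1.0 + KL * Ce)

# Synthetic equilibrium data (Ce in mg/L, qe in mg/g) generated
# from qm = 50 mg/g and KL = 0.2 L/mg for illustration.
Ce = np.array([1.0, 5.0, 10.0, 25.0, 50.0, 100.0])
qe = langmuir(Ce, 50.0, 0.2)

(qm_fit, KL_fit), _ = curve_fit(langmuir, Ce, qe, p0=[30.0, 0.1])
print(qm_fit, KL_fit)  # recovers the generating parameters
```

The same `curve_fit` call works for the pseudo-first- and pseudo-second-order kinetic models by swapping in the corresponding rate equation.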
Procedia PDF Downloads 153
4298 Graphic Calculator Effectiveness in Biology Teaching and Learning
Authors: Nik Azmah Nik Yusuff, Faridah Hassan Basri, Rosnidar Mansor
Abstract:
The purpose of the study is to find out the effectiveness of using graphic calculators (GC) with Calculator-Based Laboratory 2 (CBL2) in the teaching and learning of form four biology for the topics Nutrition, Respiration and Dynamic Ecosystem. Sixty form four science stream students participated in this study and were divided equally into treatment and control groups. The treatment group used GC with CBL2 during experiments, while the control group used ordinary conventional laboratory apparatus without GC with CBL2. The instruments in this study were a set of pre- and post-tests and a questionnaire. A t-test was used to compare the students' biology achievement, while descriptive statistics were used to analyze the questionnaire outcomes. The findings of this study indicated that the use of GC with CBL2 in biology had a significant positive effect. The highest mean was 4.43, for the item stating that GC with CBL2 saved time in collecting experimental results. The second highest mean was 4.10, for the item stating that GC with CBL2 saved time in drawing and labelling graphs. The questionnaire outcomes also showed that GC with CBL2 were easy to use and saved time. Thus, teachers should use GC with CBL2 in support of efforts by the Malaysian Ministry of Education to encourage technology-enhanced lessons. Keywords: biology experiments, Calculator-Based Laboratory 2 (CBL2), graphic calculators, Malaysia Secondary School, teaching/learning
Procedia PDF Downloads 403
4297 JaCoText: A Pretrained Model for Java Code-Text Generation
Authors: Jessica Lopez Espejel, Mahaman Sanoussi Yahaya Alassan, Walid Dahhane, El Hassane Ettifouri
Abstract:
Pretrained transformer-based models have shown high performance in natural language generation tasks. However, a new wave of interest has surged in automatic programming language code generation. This task consists of translating natural language instructions into source code. Although well-known pretrained models for language generation have achieved good performance in learning programming languages, effort is still needed in automatic code generation. In this paper, we introduce JaCoText, a model based on the Transformer neural network, which aims to generate Java source code from natural language text. JaCoText leverages the advantages of both natural language and code generation models. More specifically, we study findings from the state of the art and use them to (1) initialize our model from powerful pretrained models, (2) explore additional pretraining on our Java dataset, (3) conduct experiments combining unimodal and bimodal data in training, and (4) scale the input and output length during fine-tuning of the model. Experiments conducted on the CONCODE dataset show that JaCoText achieves new state-of-the-art results. Keywords: java code generation, natural language processing, sequence-to-sequence models, transformer neural networks
Procedia PDF Downloads 284
4296 Features of Fossil Fuels Generation from Bazhenov Formation Source Rocks by Hydropyrolysis
Authors: Anton G. Kalmykov, Andrew Yu. Bychkov, Georgy A. Kalmykov
Abstract:
Nowadays, most oil reserves in Russia and all over the world are hard to recover, which is why oil companies are searching for new sources of hydrocarbon production. One potential source is high-carbon formations with unconventional reservoirs. The Bazhenov formation is a huge source rock formation located in West Siberia that contains unconventional reservoirs in some areas. These reservoirs are formed by secondary processes and are hard to predict: only one in five wells is drilled through unconventional reservoirs, while in the others the kerogen has low thermal maturity and low petroleum potential. There is therefore a demand for tertiary methods of in-situ cracking of kerogen and production of oil. Laboratory hydrous pyrolysis experiments on Bazhenov formation rock were used to investigate features of the oil generation process. Experiments on Bazhenov rocks with different mineral compositions (silica concentration from 15 to 90 wt.%, clays 5–50 wt.%, carbonates 0–30 wt.%, kerogen 1–25 wt.%) and thermal maturities (from immature to late oil window kerogen) were performed in a retort under reservoir conditions. Rock samples of 50 g weight were placed in the retort, covered with water and heated to temperatures varying from 250 to 400°C, with experiment durations from several hours to one week. After the experiments, the retort was cooled to room temperature; the generated hydrocarbons were extracted with hexane, then separated from the solvent and weighed. The molecular composition of this synthesized oil was then investigated via GC-MS chromatography, and the characteristics of the rock samples after heating were measured via the Rock-Eval method. It was found that the amount of synthesized oil and its composition depend on the experimental conditions and the composition of the rocks.
The highest amount of oil was produced at a temperature of 350°C after 12 hours of heating and was up to 12 wt.% of the initial organic matter content of the rocks. At higher temperatures and longer heating times, secondary cracking of the generated hydrocarbons occurs: the mass of produced oil decreases, and the composition contains more hydrocarbons that need to be recovered by catalytic processes. If the temperature is lower than 300°C, the amount of produced oil is too low for the process to be economically effective. It was also found that silica and clay minerals act as catalysts. Selection of the heating conditions thus allows synthesized oil with a specified composition to be produced. Kerogen investigations after heating showed that thermal maturity increases, but the yield is only up to 35% of the maximum amount of synthetic oil; this yield results from gaseous hydrocarbon formation due to secondary cracking and from aromatization and coaling of the kerogen. Future investigations will aim to increase the yield of synthetic oil. The results are in good agreement with theoretical data on kerogen maturation during oil production, and the evaluated trends could be used for in-situ oil generation from shale rocks by thermal action. Keywords: Bazhenov formation, fossil fuels, hydropyrolysis, synthetic oil
Procedia PDF Downloads 114
4295 Fused Structure and Texture (FST) Features for Improved Pedestrian Detection
Authors: Hussin K. Ragb, Vijayan K. Asari
Abstract:
In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of a signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless quantity of phase congruency and the robustness of the CSLBP operator on flat images, as well as under blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The descriptor is formed by extracting the phase congruency and CSLBP values of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for the local regions of the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and low-resolution DaimlerChrysler datasets to evaluate the detection performance of a pedestrian detection system based on the FST descriptor, with a linear Support Vector Machine (SVM) used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor has better detection performance than a set of state-of-the-art feature extraction methodologies. Keywords: pedestrian detection, phase congruency, local phase, LBP features, CSLBP features, FST descriptor
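A minimal sketch of the CSLBP code computation mentioned above, assuming the common 8-neighbor formulation in which the four center-symmetric neighbor pairs each contribute one bit (the threshold value and neighbor ordering here are conventions chosen for illustration, not taken from the paper):

```python
import numpy as np

def cslbp(patch, threshold=0.0):
    """Center-Symmetric LBP code for a 3x3 patch.

    Compares the 4 center-symmetric neighbor pairs; each pair
    contributes one bit, giving a code in [0, 15].
    """
    # 8 neighbors in clockwise order starting at the top-left corner.
    n = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
         patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i in range(4):
        if n[i] - n[i + 4] > threshold:
            code |= 1 << i
    return code

patch = np.array([[9, 2, 7],
                  [4, 5, 6],
                  [1, 8, 3]], dtype=float)
print(cslbp(patch))  # → 13
```

Because only opposite pairs are compared, the code has 16 bins instead of LBP's 256, which keeps the region histograms compact before concatenation with the phase histogram.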
Procedia PDF Downloads 488
4294 Sharing Personal Information for Connection: The Effect of Social Exclusion on Consumer Self-Disclosure to Brands
Authors: Jiyoung Lee, Andrew D. Gershoff, Jerry Jisang Han
Abstract:
Most extant research on consumer privacy concerns and their willingness to share personal data has focused on contextual factors (e.g., types of information collected, type of compensation) that lead to consumers’ personal information disclosure. Unfortunately, the literature lacks a clear understanding of how consumers’ incidental psychological needs may influence consumers’ decisions to share their personal information with companies or brands. In this research, we investigate how social exclusion, which is an increasing societal problem, especially since the onset of the COVID-19 pandemic, leads to increased information disclosure intentions for consumers. Specifically, we propose and find that when consumers become socially excluded, their desire for social connection increases, and this desire leads to a greater willingness to disclose their personal information with firms. The motivation to form and maintain interpersonal relationships is one of the most fundamental human needs, and many researchers have found that deprivation of belongingness has negative consequences. Given the negative effects of social exclusion and the universal need to affiliate with others, people respond to exclusion with a motivation for social reconnection, resulting in various cognitive and behavioral consequences, such as paying greater attention to social cues and conforming to others. Here, we propose personal information disclosure as another form of behavior that can satisfy such social connection needs. As self-disclosure can serve as a strategic tool in creating and developing social relationships, those who have been socially excluded and thus have greater social connection desires may be more willing to engage in self-disclosure behavior to satisfy such needs. We conducted four experiments to test how feelings of social exclusion can influence the extent to which consumers share their personal information with brands. 
Various manipulations and measures were used to demonstrate the robustness of our effects. Through the four studies, we confirmed that (1) consumers who have been socially excluded show greater willingness to share their personal information with brands and that (2) such an effect is driven by the excluded individuals’ desire for social connection. Our findings shed light on how the desire for social connection arising from exclusion influences consumers’ decisions to disclose their personal information to brands. We contribute to the consumer disclosure literature by uncovering a psychological need that influences consumers’ disclosure behavior. We also extend the social exclusion literature by demonstrating that exclusion influences not only consumers’ choice of products but also their decision to disclose personal information to brands.Keywords: consumer-brand relationship, consumer information disclosure, consumer privacy, social exclusion
Procedia PDF Downloads 123
4293 A Laundry Algorithm for Colored Textiles
Authors: H. E. Budak, B. Arslan-Ilkiz, N. Cakmakci, I. Gocek, U. K. Sahin, H. Acikgoz-Tufan, M. H. Arslan
Abstract:
The aim of this study is to design a novel laundry algorithm for colored textiles that have a significant color-loss problem. For the experimental work, bleached knitted single jersey fabric made of 100% cotton and dyed with reactive dyestuff was utilized, since according to a survey we conducted, cotton textiles are the products most demanded by consumers in the textile market, and reactive dyestuffs are the ones most commonly used in the textile industry for dyeing cotton products. The fabric used in this study was therefore selected and purchased in accordance with the survey results. Fabric samples cut from this fabric were dyed under different dyeing parameters using Remazol Brilliant Red 3BS dyestuff in a Gyrowash machine under laboratory conditions. From the alternative reactive-dyed cotton fabric samples, those with a high tendency to color loss were identified and examined. The parameters of the dyeing processes used for these samples were evaluated, and the dyeing process chosen to induce a high tendency to color loss in the cotton fabrics was determined, in order to clearly reveal the level of improvement in color loss achieved in this study. Afterwards, all of the untreated fabric samples were dyed with the selected dyeing process. When dyeing was completed, an experimental design for the laundering process was created using the Minitab® program, with temperature, time and mechanical action as parameters. All washing experiments were performed in a domestic washing machine: 16 washing experiments covering 8 different experimental conditions with 2 repeats per condition. After each washing experiment, water samples from the main wash of the laundering process were measured with a UV spectrophotometer.
The values obtained were compared with the calibration curve of the materials used for the dyeing process, and the results of the washing experiments were statistically analyzed with the Minitab® program. On this basis, the most suitable washing algorithm for domestic washing machines, in terms of the parameters temperature, time and mechanical action, for minimizing fabric color loss was chosen. The laundry algorithm proposed in this study can minimize the color loss of colored textiles in washing machines by eliminating the negative effects of the laundering parameters on textile color without compromising proper basic cleaning action. Since fabric color loss is minimized with this washing algorithm, dyestuff residuals will be lower in the grey water released from the laundering process. In addition, with this laundry algorithm it is possible to wash and clean other types of textile products with a proper cleaning effect and minimized color loss. Keywords: color loss, laundry algorithm, textiles, domestic washing process
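The 8-condition, 2-repeat design described above (temperature, time and mechanical action at two levels each) can be sketched as a 2³ full factorial; the factor levels below are hypothetical placeholders, not the study's actual settings:

```python
from itertools import product

# Hypothetical two-level settings per factor; the paper's actual
# levels are not given in the abstract.
factors = {
    "temperature_C": [30, 60],
    "time_min": [30, 90],
    "mechanical_action_rpm": [800, 1400],
}

# 2^3 full factorial: 8 distinct conditions, each replicated twice
# to give the 16 washing runs described above.
conditions = list(product(*factors.values()))
runs = [cond for cond in conditions for _ in range(2)]
print(len(conditions), len(runs))  # → 8 16
```

Replicating each condition is what allows the pure experimental error to be estimated when the run results are later analyzed.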
Procedia PDF Downloads 357
4292 Effect of Electromagnetic Fields at 27 GHz on Sperm Quality of Mytilus galloprovincialis
Authors: Carmen Sica, Elena M. Scalisi, Sara Ignoto, Ludovica Palmeri, Martina Contino, Greta Ferruggia, Antonio Salvaggio, Santi C. Pavone, Gino Sorbello, Loreto Di Donato, Roberta Pecoraro, Maria V. Brundo
Abstract:
Recently, a rise in the use of wireless internet technologies such as Wi-Fi and 5G routers/modems has been observed. These devices emit a considerable amount of electromagnetic radiation (EMR), which could interact with the male reproductive system through either thermal or non-thermal mechanisms. The aim of this study was to investigate the direct in vitro influence of 5G radiation on sperm quality in Mytilus galloprovincialis, considered an excellent model for reproduction studies. The experiments at 27 GHz were conducted using a non-commercial high-gain pyramidal horn antenna. To evaluate the specific absorption rate (SAR), a numerical simulation was performed. The resulting incident power density was significantly lower than the power density limit of 10 mW/cm² set by the international guidelines as a limit for non-thermal effects above 6 GHz. Regarding temperature measurements of the aqueous sample, an increase of 0.2°C compared to the control samples was verified; this very low temperature increase could not have interfered with the experiments. For the experiments, sperm samples taken from sexually mature males of Mytilus galloprovincialis were placed in artificial seawater (salinity 30 ± 1%, pH 8.3) filtered with a 0.2 µm filter. After evaluating the number and quality of spermatozoa, sperm cells were exposed to electromagnetic fields at 27 GHz. The effect of exposure on sperm motility and quality was evaluated after 10, 20, 30 and 40 minutes with a light microscope, and the eosin test was used to verify the vitality of the gametes. All samples were run in triplicate, and statistical analysis was carried out using one-way analysis of variance (ANOVA) with the Tukey test for multiple comparisons of means to determine differences in sperm motility. A significant decrease (30%) in sperm motility was observed after 10 minutes of exposure, and after 30 minutes all sperms were immobile and not vital.
Given the scarcity of literature data on this topic, these results could be useful for further studies concerning the wide diffusion of these new technologies. Keywords: mussel, spermatozoa, sperm motility, millimeter waves
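The SAR figure evaluated numerically above follows the standard point relation SAR = σ|E|²/ρ (conductivity times squared electric field magnitude over mass density); a toy computation with illustrative values, not the study's simulated ones:

```python
def sar(sigma_S_per_m, e_field_V_per_m, density_kg_per_m3):
    """Point SAR in W/kg: conductivity * |E|^2 / mass density."""
    return sigma_S_per_m * e_field_V_per_m ** 2 / density_kg_per_m3

# Illustrative values only: conductivity 4 S/m, field 10 V/m rms,
# density 1025 kg/m^3 for a seawater-like medium.
print(sar(4.0, 10.0, 1025.0))  # W/kg
```

In practice the field distribution inside the sample comes from the full-wave simulation, and SAR is evaluated point by point or averaged over a reference mass.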
Procedia PDF Downloads 168
4291 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink
Authors: Sanjay Rathee, Arti Kashyap
Abstract:
Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine. Therefore, to meet the demands of this ever-growing volume of data, there is a need for a multiple-machine Apriori algorithm. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks with the MapReduce approach for distributed storage and distributed processing of huge datasets using clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their inbuilt support for distributed computations. Earlier we proposed a Reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel of our earlier work and targets implementing, testing and benchmarking Apriori, Reduced-Apriori and our new algorithm ReducedAll-Apriori on Apache Flink, comparing them with the Spark implementation.
Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelining-based structure allows a next iteration to start as soon as partial results of the earlier iteration are available; there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink. Keywords: apriori, apache flink, Mapreduce, spark, Hadoop, R-Apriori, frequent itemset mining
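The level-wise candidate generation that makes Apriori both simple and naturally parallelizable can be sketched in a few lines (a minimal single-machine version for illustration; the paper's Reduced-Apriori and Flink implementations are not reproduced here):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori: frequent itemsets with their support counts."""
    transactions = [frozenset(t) for t in transactions]
    items = {frozenset([i]) for t in transactions for i in t}
    frequent, k_sets = {}, items
    while k_sets:
        # Count support of each candidate over all transactions.
        counts = {c: sum(1 for t in transactions if c <= t) for c in k_sets}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Join frequent k-itemsets into (k+1)-itemset candidates;
        # infrequent itemsets are pruned by never being joined.
        keys = list(level)
        k_sets = {a | b for a, b in combinations(keys, 2)
                  if len(a | b) == len(a) + 1}
    return frequent

tx = [{"bread", "milk"}, {"bread", "beer"},
      {"bread", "milk", "beer"}, {"milk"}]
freq = apriori(tx, min_support=2)
print(freq[frozenset({"bread", "milk"})])  # → 2
```

Each while-loop pass corresponds to one MapReduce round in the distributed setting, which is exactly where Flink's pipelined iterations help: the next level can begin as soon as partial counts arrive.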
Procedia PDF Downloads 294
4290 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status
Authors: Rosa Figueroa, Christopher Flores
Abstract:
Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics, in the form of tokens, that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, documents belonging to the same category are expected to share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative to feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese and was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of tokens in each document was the term frequency-inverse document frequency (TF-IDF). Four feature extraction experiments were conducted: the first used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset; the remaining 80%, together with 5-fold cross validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than N-gram based feature extraction.
These results were confirmed on the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best result was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%). Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm
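The local-alignment scoring at the heart of the SW feature extraction above can be sketched as follows (a score-only version with illustrative match/mismatch/gap parameters; the study's actual scoring scheme and tokenization are not specified in the abstract):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment: returns the best local score.

    H[i][j] holds the best score of any local alignment ending at
    a[i-1], b[j-1]; the max(0, ...) resets poor alignments to zero,
    which is what makes the alignment local rather than global.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("obese patient", "patient obese"))  # → 14
```

Here the best local region is the shared word "patient" (7 matches at +2 each), illustrating how SW can surface shared regions that fixed-length N-grams would split apart.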
Procedia PDF Downloads 297