Search results for: spectral mixture model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18061

17581 Efficient Numerical Simulation for LDC

Authors: Badr Alkahtani

Abstract:

In this poster, numerical solutions of the two-dimensional and three-dimensional lid driven cavity problem are presented by solving the steady Navier-Stokes equations at high Reynolds numbers, where the computation becomes difficult. The lid driven cavity problem consists of a fluid contained in a cube whose upper wall is moving. In two dimensions, we use the streamfunction-vorticity formulation to solve the problem in a square domain, with a spectral collocation method discretizing the x and y directions. The problem is coded in the MATLAB programming environment. Solutions at high Reynolds numbers are obtained up to Re = 20000 on a fine grid of 131 × 131. Also in this presentation, numerical solutions of the three-dimensional lid driven cavity problem are obtained by solving the velocity-vorticity formulation of the Navier-Stokes equations (the first time this formulation has been simulated with these special boundary conditions) for various Reynolds numbers. A spectral collocation method is employed to discretize the y and z directions, and a finite difference method is used to discretize the x direction. Numerical solutions are obtained for Reynolds numbers up to 200. The work presented here demonstrates the efficiency of the methods used to simulate the physical problem, with accurate simulations of the lid driven cavity obtained at the high Reynolds numbers mentioned above. The two-dimensional results, however, differ considerably from those of previous researchers.
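The 2-D solver rests on spectral collocation; as a minimal, self-contained illustration of the key ingredient, the sketch below builds the standard Chebyshev differentiation matrix on Gauss-Lobatto points (after Trefethen's cheb.m) and verifies its spectral accuracy. This is a generic Python sketch, not the authors' MATLAB code.

import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on N+1 Gauss-Lobatto points (Trefethen).
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)              # collocation points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))       # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                           # diagonal: negative row sums
    return D, x

# Spectral accuracy check: differentiate u(x) = exp(x) on [-1, 1].
D, x = cheb(16)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
print(f"max derivative error with 17 points: {err:.2e}")  # close to machine precision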

Keywords: lid driven cavity, Navier-Stokes, simulation, Reynolds number

Procedia PDF Downloads 690
17580 Impact of Aging on Fatigue Performance of Novel Hybrid HMA

Authors: Faizan Asghar, Mohammad Jamal Khattak

Abstract:

Aging, in general, refers to changes in the rheological characteristics of an asphalt mixture due to changes in chemical composition over the course of the construction and service life of the pavement. The main goal of this study was to investigate the impact of oxidation on the fatigue characteristics of a novel HMA composite fabricated with a combination of crumb rubber (CRM) and polyvinyl alcohol (PVA) fiber, subjected to aging of 7 and 14 days. A flexural beam fatigue test was performed to evaluate several characteristics of control, CRM-modified, PVA-reinforced, and novel rubber-fiber HMA composites. Experimental results revealed that aging had a significant impact on the fatigue performance of the novel HMA composite. It was found that a suitable proportion of CRM and PVA radically improved the resistance of the novel rubber-fiber HMA to fracture and fatigue cracking when subjected to long-term aging. The developed novel HMA composite containing 2% CRM and 0.2% PVA presented around 29 times higher resistance to fatigue cracking after 7 days of aging. To develop a cumulative plastic deformation level of 250 microstrains, such a mixture required over 50 times more cycles than the control HMA. Moreover, the crack propagation rate was reduced by over 90%, with over 12 times more energy required to propagate a unit crack length in such a mixture compared to conventional HMA. Further, digital image correlation analyses revealed a more twisted and convoluted fracture path and higher strain distribution in the rubber-fiber HMA composite. The fatigue performance of this novel HMA composite after long-term aging explicitly validates its ability to withstand load repetition, which could extend the service life of pavement infrastructure and reduce the taxpayer dollars spent on it.

Keywords: crumb rubber, PVA fibers, dry process, aging, performance testing, fatigue life

Procedia PDF Downloads 47
17579 Theoretical Study of Structural Parameters, Chemical Reactivity and Spectral and Thermodynamical Properties of Organometallic Complexes Containing Zinc, Nickel and Cadmium with Nitrilotriacetic Acid and TEA Ligands: Density Functional Theory Investigation

Authors: Nour El Houda Bensiradj, Nafila Zouaghi, Taha Bensiradj

Abstract:

The pollution of water resources is characterized by the presence of microorganisms, chemicals, or industrial waste. Generally, this waste generates effluents containing large quantities of heavy metals, making the water unsuitable for consumption and causing the death of aquatic life and associated biodiversity. Currently, it is very important to assess the impact of heavy metals in water pollution as well as the processes for treating and reducing them. Among the methods of water treatment and disinfection, we mention the complexation of metal ions using ligands which serve to precipitate and subsequently eliminate these ions. In this context, we are interested in the study of complexes containing heavy metals such as zinc, nickel, and cadmium, which are present in several industrial discharges and are discharged into water sources. We will use the ligands of triethanolamine (TEA) and nitrilotriacetic acid (NTA). The theoretical study is based on molecular modeling, using the density functional theory (DFT) implemented in the Gaussian 09 program. The geometric and energetic properties of the above complexes will be calculated. Spectral properties such as infrared, as well as reactivity descriptors, and thermodynamic properties such as enthalpy and free enthalpy will also be determined.

Keywords: heavy metals, NTA, TEA, DFT, IR, reactivity descriptors

Procedia PDF Downloads 78
17578 Identifying Model to Predict Deterioration of Water Mains Using Robust Analysis

Authors: Go Bong Choi, Shin Je Lee, Sung Jin Yoo, Gibaek Lee, Jong Min Lee

Abstract:

In South Korea, it is difficult to obtain the data needed for statistical pipe assessment. In this paper, to address this issue, we examine how the various statistical models presented in earlier work respond to data mixed with noise, and whether they are applicable in South Korea. Three major types of model are studied; where data are presented in the original paper, we add noise to the data and observe how the model response changes. Moreover, we generate data from the models in the papers and analyse the effect of noise. From this, we can assess the robustness of each model and its applicability in Korea.
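The robustness test described, adding noise to published data and refitting, can be sketched as follows for a Weibull-type survival model of pipe lifetimes; all data and parameter values below are hypothetical placeholders, not the models examined in the paper.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical "published" pipe lifetimes (years) drawn from a Weibull model.
true_shape, true_scale = 2.0, 60.0
ages = true_scale * rng.weibull(true_shape, size=200)

def neg_log_lik(p, t):
    # Negative log-likelihood of the Weibull survival model.
    k, lam = p
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k)

def fit(t):
    res = minimize(neg_log_lik, x0=[1.0, np.mean(t)], args=(t,),
                   bounds=[(0.1, 10), (1, 200)])
    return res.x

print("clean fit :", fit(ages))
# Perturb the data with Gaussian noise of increasing strength and refit,
# mimicking the paper's robustness test.
for sigma in (1.0, 5.0, 10.0):
    noisy = np.clip(ages + rng.normal(0, sigma, ages.size), 0.1, None)
    print(f"sigma={sigma:4.1f} fit:", fit(noisy))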

Keywords: proportional hazard model, survival model, water main deterioration, ecological sciences

Procedia PDF Downloads 720
17577 Oil-Spill Monitoring in Istanbul Strait and Marmara Sea by RASAT Remote Sensing Images

Authors: Ozgun Oktar, Sevilay Can, Cengiz V. Ekici

Abstract:

An oil spill is a form of pollution caused by the release of liquid petroleum hydrocarbons into the marine environment. The growth of ship traffic and the increase in off-shore oil drilling and seaside refineries push the risk of oil spills upward. An oil spill spreads easily over large areas, especially when it occurs on the sea surface. Remote sensing technology offers the easiest way to control and monitor the area of an oil spill over a large region. Pollution caused by ship accidents is usually easy to detect, but monitoring non-accidental pollution becomes possible only through remote sensing. Specific regions also need to be observed daily and continuously by satellite solutions. The remote sensing satellites most often and most effectively used for monitoring oil pollution are RADARSAT, ENVISAT and MODIS. However, the spectral coverage and revisit periods of these satellites are not suitable for monitoring the Marmara Sea and the Istanbul Strait continuously. In this study, RASAT and GOKTURK-2 are suggested for monitoring the Marmara Sea and the Istanbul Strait. RASAT, with a spectral range of 420-730 nm, is the first Turkish-built satellite. GOKTURK-2's resolution can reach up to 2.5 meters. This study aims to analyze the images from both satellites and produce maps showing the regions potentially affected by spills from shipping traffic.

Keywords: Marmara Sea, monitoring, oil spill, satellite remote sensing

Procedia PDF Downloads 393
17576 Computer Simulation of Hydrogen Superfluidity through Binary Mixing

Authors: Sea Hoon Lim

Abstract:

A superfluid is a fluid of bosons that flows without resistance. In order to form a superfluid, a substance's particles must behave like bosons yet remain mobile enough to flow. Bosons are particles that, at low temperature, can occupy the same quantum state simultaneously. If bosons are cooled down, the particles all try to settle into the lowest energy state, a phenomenon called Bose-Einstein condensation. Boson statistics start to matter once the temperature drops to the critical temperature. For example, when helium reaches its critical temperature of 2.17 K, the liquid density drops and it becomes a superfluid with zero viscosity. However, most materials solidify, and thus do not remain fluids, at temperatures well above the temperature at which they would otherwise become superfluid. Only a few substances currently known are capable of at once remaining a fluid and manifesting boson statistics. The most well-known of these is helium and its isotopes. Because hydrogen is lighter than helium, and thus expected to manifest Bose statistics at higher temperatures than helium, one might expect hydrogen to also be a superfluid. As of today, however, no one has been able to produce a bulk hydrogen superfluid. The reason hydrogen has not formed a superfluid so far lies in its intermolecular interactions: hydrogen molecules are much more likely to crystallize than their helium counterparts. The key to creating a hydrogen superfluid is therefore finding a way to reduce the effect of the interactions among hydrogen molecules, postponing solidification to lower temperatures. In this work, we attempt via computer simulation to produce bulk superfluid hydrogen through binary mixing, a technique of mixing two pure substances in order to avoid crystallization and enhance superfluidity. Our mixture here is KALJ H2. We then sample the partition function using Path Integral Monte Carlo (PIMC), which is well suited to the equilibrium properties of low-temperature bosons and captures not only the statistics but also the dynamics of hydrogen. Via this sampling, we produce a time evolution of the substance and check whether it exhibits superfluid properties.
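The PIMC machinery referred to can be illustrated on the simplest possible case. The sketch below samples imaginary-time paths of a single 1-D harmonic oscillator with Metropolis bead moves; a real study of H2 superfluidity would additionally need pair interactions and permutation sampling for Bose statistics, so this is only a toy version of the method (hbar = m = omega = kB = 1).

import numpy as np

rng = np.random.default_rng(1)

T, M = 0.5, 32                 # temperature and number of beads
tau = 1.0 / (T * M)            # imaginary-time step
path = np.zeros(M)

def bead_action(x, left, right):
    # Primitive-approximation action of one bead with its two neighbours.
    kinetic = ((x - left) ** 2 + (right - x) ** 2) / (2 * tau)
    return kinetic + tau * 0.5 * x ** 2        # + tau * V(x), harmonic well

samples = []
for sweep in range(20000):
    for i in range(M):
        l, r = path[i - 1], path[(i + 1) % M]  # periodic in imaginary time
        trial = path[i] + rng.normal(0, np.sqrt(tau))
        # Metropolis acceptance with probability min(1, exp(-dS)).
        if rng.random() < np.exp(bead_action(path[i], l, r) - bead_action(trial, l, r)):
            path[i] = trial
    if sweep > 2000:
        samples.append(np.mean(path ** 2))

# Exact quantum result: <x^2> = coth(beta/2)/2 ~ 0.657 at T = 0.5 (beta = 2).
print("estimated <x^2>:", np.mean(samples))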

Keywords: superfluidity, hydrogen, binary mixture, physics

Procedia PDF Downloads 299
17575 Mechanical Performance of Sandwich Square Honeycomb Structure from Sugar Palm Fibre

Authors: Z. Ansari, M. R. M. Rejab, D. Bachtiar, J. P. Siregar

Abstract:

This study focuses on the compression and tensile properties of new and recycled square honeycomb structures made from sugar palm fibre (SPF) and polylactic acid (PLA) composite. The resulting data determine the failure strength and energy absorption of both the new and the recycled composite. The control SPF specimens were fabricated from short fibre co-mingled with PLA using a Brabender mixer set at 180°C and 50 rpm. The mixture of 30% fibre and 70% PLA was then hot-pressed at 180°C into sheets of 3 mm thickness before being assembled into a sandwich honeycomb structure. An INSTRON tensile machine and Abaqus 6.13 software were used for the mechanical tests and finite element simulation. The discrepancies between the simulation and experimental data were 9.20% and 9.17% for the new and recycled products, respectively. These small percentage errors are acceptable because the simulation assumes a perfect model with no geometric imperfections. The energy absorption decreased from 312.86 kJ for the new product to 282.10 kJ for the recycled one. With such a small decrement, it is still possible to use a recycled SPF/PLA composite in everyday applications such as car interiors or small furniture.

Keywords: failure modes, numerical modelling, polylactic acid, sugar palm fibres

Procedia PDF Downloads 280
17574 Evaluation of the Environmental Risk from the Co-Deposition of Waste Rock Material and Fly Ash

Authors: A. Mavrikos, N. Petsas, E. Kaltsi, D. Kaliampakos

Abstract:

The lignite-fired power plants in the Western Macedonia Lignite Center produce more than 8 × 10⁶ t of fly ash per year. Approximately 90% of this quantity is used for restoration-reclamation of exhausted open-cast lignite mines and slope stabilization of the overburden. The purpose of this work is to evaluate the environmental behavior of the mixture of waste rock and fly ash that is being used in the external deposition site of the South Field lignite mine. For this reason, a borehole was made within the site and 86 samples were taken and subjected to chemical analyses and leaching tests. The results showed very limited leaching of trace elements and heavy metals from this mixture. Moreover, when compared to the limit values set for waste acceptable in inert waste landfills, only few excesses were observed, indicating only minor risk for groundwater pollution. However, due to the complexity of both the leaching process and the contaminant pathway, more boreholes and analyses should be made in nearby locations and a systematic groundwater monitoring program should be implemented both downstream and within the external deposition site.

Keywords: co-deposition, fly ash, leaching tests, lignite, waste rock

Procedia PDF Downloads 219
17573 Preparation of Nano-Scaled LiNbO3 by Polyol Method

Authors: Gabriella Dravecz, László Péter, Zsolt Kis

Abstract:

The growth of optical LiNbO3 single crystals and their physical and chemical properties are well known on the macroscopic scale. Nowadays, rare-earth doped single crystals have become important for coherent quantum optical experiments: electromagnetically induced transparency, slow-down of light pulses, and coherent quantum memory. The expansion of applications increasingly requires the production of nano-scaled LiNbO3 particles. For example, rare-earth doped nano-scaled particles of lithium niobate can act as single photon sources, which can form the basis of a coding system for quantum computers that is completely inaccessible to strangers. The polyol method is a chemical synthesis in which oxide formation occurs instead of hydroxide formation because of the high temperature. Moreover, the polyol medium limits the growth and agglomeration of the grains, producing particles with diameters of 30-200 nm. In this work, nano-scaled LiNbO3 was prepared by the polyol method. The starting materials (niobium oxalate and LiOH) were dissolved in H2O2. The product was then suspended in ethylene glycol and heated to about the boiling point of the mixture with intensive stirring. After thermal equilibrium was reached, the mixture was kept at this temperature for 4 hours. The suspension was cooled overnight, then centrifuged and the particles filtered. Dynamic Light Scattering (DLS) measurements were carried out and the size of the particles was found to be 80-100 nm, which was confirmed by Scanning Electron Microscope (SEM) investigations. SEM elemental analysis showed a large amount of Nb in the sample. The production of LiNbO3 nanoparticles by the polyol method was successful: agglomeration of the particles was avoided and sizes of 80-100 nm were reached.

Keywords: lithium-niobate, nanoparticles, polyol, SEM

Procedia PDF Downloads 111
17572 Modelling and Control of Binary Distillation Column

Authors: Narava Manose

Abstract:

Distillation is a very old separation technology for separating liquid mixtures that can be traced back to the chemists of Alexandria in the first century A.D. Today distillation is the most important industrial separation technology. By the eleventh century, distillation was being used in Italy to produce alcoholic beverages. At that time, distillation was probably a batch process based on the use of just a single stage, the boiler. The word distillation is derived from the Latin word destillare, which means dripping or trickling down. By at least the sixteenth century, it was known that the extent of separation could be improved by providing multiple vapor-liquid contacts (stages) in a so-called Rectifactorium. The term rectification is derived from the Latin words recte facere, meaning to improve. Modern distillation derives its ability to produce almost pure products from the use of multi-stage contacting. Throughout the twentieth century, multistage distillation was by far the most widely used industrial method for separating liquid mixtures of chemical components. The basic principle behind this technique relies on the different boiling temperatures of the various components of the mixture, allowing the separation of the vapor of the most volatile component from the liquid of the other component(s). In this work, we developed a simple non-linear model of a binary distillation column using the Skogestad equations in Simulink, and computed the steady-state operating point around which to base our analysis and controller design. The model contains two integrators because the condenser and reboiler levels are not controlled. One particular way of stabilizing the column is the LV-configuration, where we use D to control M_D and B to control M_B; such a model is given in cola_lv.m, where we have used two P-controllers with gains equal to 10.
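The stabilization described in the last sentence, two proportional level controllers closing the D-to-M_D and B-to-M_B loops, can be sketched on toy holdup dynamics as follows; the flows and setpoints are illustrative placeholders, not the values in cola_lv.m.

import numpy as np

# Toy LV-configuration: distillate flow D controls the condenser holdup M_D
# and bottoms flow B controls the reboiler holdup M_B, while reflux L and
# boilup V stay fixed. Numbers are hypothetical, not Skogestad's column.
Kc = 10.0                        # P-controller gains, as in cola_lv.m
MD_set, MB_set = 0.5, 0.5        # holdup setpoints [kmol]
F = 1.0                          # feed flow [kmol/min], assumed liquid
V, L = 3.2, 2.7                  # boilup and reflux (the fixed L and V)
D0, B0 = V - L, L + F - V        # nominal product flows (F = D0 + B0)

MD, MB, dt = 0.6, 0.4, 0.01      # initial holdups and time step [min]
for step in range(2000):
    D = D0 + Kc * (MD - MD_set)  # proportional level controllers
    B = B0 + Kc * (MB - MB_set)
    MD += dt * (V - L - D)       # condenser: vapour in, reflux + distillate out
    MB += dt * (L + F - V - B)   # reboiler: liquid + feed in, boilup + bottoms out

print(f"M_D = {MD:.3f}, M_B = {MB:.3f}")  # both settle at their setpoints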

Keywords: modelling, distillation column, control, binary distillation

Procedia PDF Downloads 256
17571 Seismic Microzonation Analysis for Damage Mapping of the 2006 Yogyakarta Earthquake, Indonesia

Authors: Fathul Mubin, Budi E. Nurcahya

Abstract:

In 2006, a large earthquake occurred in the province of Yogyakarta, causing considerable damage. This is the basis of the need to investigate the seismic vulnerability index in and around the earthquake zone, a study known as microzonation of earthquake hazard. The research was conducted at the site and surroundings of Prambanan Temple, including homes and civil buildings. It was needed because the 2006 earthquake damaged the temples of the Prambanan temple complex and its surroundings. Data were collected for 60 minutes using three-component seismograph measurements at 165 points with a spacing of 1000 meters. The recorded time series were analyzed using the spectral ratio method known as the Horizontal to Vertical Spectral Ratio (HVSR). This analysis yields the dominant frequency (Fg) and the maximum amplification factor (Ag), which are used to obtain the seismic vulnerability index. The results showed dominant frequencies ranging from 0.5 to 30 Hz, amplifications in the interval from 0.5 to 9, and seismic vulnerability indices from 0.1 to 50. The distribution map of the seismic vulnerability index and the map of building damage appear consistent with each other. Further research should extend the survey to the east (Klaten) and south (Bantul, DIY) to determine a full distribution map of the seismic vulnerability index.
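A minimal sketch of the HVSR processing chain is given below: amplitude spectra of the two horizontal components are combined, divided by the vertical spectrum, and the peak gives Fg and Ag; the vulnerability index is then commonly computed as Kg = Ag²/Fg (Nakamura's definition, assumed here since the abstract does not spell out the formula). The three-component record is synthetic.

import numpy as np

def hvsr(ns, ew, ud, fs):
    # Windowed amplitude spectra of the three components.
    f = np.fft.rfftfreq(ns.size, 1.0 / fs)
    amp = lambda x: np.abs(np.fft.rfft(x * np.hanning(x.size)))
    h = np.sqrt(amp(ns) * amp(ew))        # geometric mean of horizontals
    ratio = h / np.maximum(amp(ud), 1e-12)
    band = (f > 0.5) & (f < 30.0)         # the paper's 0.5-30 Hz range
    i = np.argmax(ratio[band])
    Fg, Ag = f[band][i], ratio[band][i]   # dominant frequency, amplification
    return Fg, Ag, Ag ** 2 / Fg           # Kg = Ag^2 / Fg (assumed Nakamura index)

# Synthetic 60-minute, 100 Hz record with a resonance near 2 Hz on the
# horizontals (stands in for the field data, which we do not have).
fs = 100.0
t = np.arange(0, 3600.0, 1.0 / fs)
rng = np.random.default_rng(2)
ud = rng.normal(size=t.size)
ns = rng.normal(size=t.size) + 0.8 * np.sin(2 * np.pi * 2.0 * t)
ew = rng.normal(size=t.size) + 0.8 * np.sin(2 * np.pi * 2.0 * t)
print("Fg [Hz], Ag, Kg:", hvsr(ns, ew, ud, fs))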

Keywords: amplification factor, dominant frequency, microzonation analysis, seismic vulnerability index

Procedia PDF Downloads 178
17570 Low Frequency Ultrasonic Degassing to Reduce Void Formation in Epoxy Resin and Its Effect on the Thermo-Mechanical Properties of the Cured Polymer

Authors: A. J. Cobley, L. Krishnan

Abstract:

The demand for multi-functional lightweight materials in sectors such as automotive, aerospace and electronics is growing, and for this reason fibre-reinforced epoxy polymer composites are being widely utilized. The fibre reinforcement is mainly responsible for the strength and stiffness of the composite, whilst the main role of the epoxy polymer matrix is to distribute the load applied to the fibres and to protect the fibres from harmful environmental conditions. The superior properties of fibre-reinforced composites arise from combining the best properties of both constituents. Although factors such as the chemical nature of the epoxy and how it is cured have a strong influence on the properties of the epoxy matrix, the method of mixing and degassing the resin can also have a significant impact. The production of a fibre-reinforced epoxy polymer composite usually begins with the mixing of the epoxy pre-polymer with a hardener and accelerator. Mechanical methods of mixing are often employed for this stage, but such processes naturally introduce air into the mixture which, if it becomes entrapped, leads to voids in the subsequently cured polymer. Therefore, degassing is normally employed after mixing, often by placing the epoxy resin mixture in a vacuum chamber. Although this is reasonably effective, it is an additional process stage, and if a method of mixing could be found that degassed the resin mixture at the same time, this would lead to shorter production times, more effective degassing and fewer voids in the final polymer. In this study, the effects of four different methods of mixing and degassing the pre-polymer with hardener and accelerator were investigated. The first two methods were manual stirring and magnetic stirring, both followed by vacuum degassing. The other two techniques were ultrasonic mixing/degassing using a 40 kHz ultrasonic bath and a 20 kHz ultrasonic probe. The cured cast resin samples were examined under a scanning electron microscope (SEM) and an optical microscope, and with the ImageJ analysis software, to study morphological changes, void content and void distribution. A three-point bending test and differential scanning calorimetry (DSC) were also performed to determine the thermal and mechanical properties of the cured resin. It was found that the 20 kHz ultrasonic probe gave the lowest percentage of voids of all the mixing methods in the study. In addition, the percentage of voids found when employing the 40 kHz ultrasonic bath was only slightly higher than with magnetic stirring followed by vacuum degassing. The effect of ultrasonic mixing/degassing on the thermal and mechanical properties of the cured resin will also be reported. The results suggest that low frequency ultrasound is an effective means of mixing/degassing a pre-polymer mixture and could enable a significant reduction in production times.
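The void-content measurement mentioned (ImageJ analysis of micrographs) essentially boils down to thresholding; a minimal sketch of the same idea on a synthetic cross-section image:

import numpy as np

# Threshold-based void-content estimate of the kind ImageJ automates:
# voids appear dark in an SEM/optical cross-section, so the void fraction
# is the share of pixels below a threshold. The image here is synthetic.
rng = np.random.default_rng(3)
img = rng.normal(180, 15, size=(512, 512))               # bright resin matrix
yy, xx = np.mgrid[0:512, 0:512]
for cx, cy, r in [(100, 120, 12), (300, 380, 20), (420, 90, 8)]:
    img[(xx - cx) ** 2 + (yy - cy) ** 2 < r ** 2] = 40.0 # dark circular voids

threshold = 100.0
void_fraction = np.mean(img < threshold)
print(f"void content: {100 * void_fraction:.2f} %")      # ~0.73 % for this image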

Keywords: degassing, low frequency ultrasound, polymer composites, voids

Procedia PDF Downloads 279
17569 High Photosensitivity and Broad Spectral Response of Multi-Layered Germanium Sulfide Transistors

Authors: Rajesh Kumar Ulaganathan, Yi-Ying Lu, Chia-Jung Kuo, Srinivasa Reddy Tamalampudi, Raman Sankar, Fang Cheng Chou, Yit-Tsong Chen

Abstract:

In this paper, we report the optoelectronic properties of multi-layered GeS nanosheet (~28 nm thick) field-effect transistors (called GeS-FETs). The multi-layered GeS-FETs exhibit a remarkably high photoresponsivity of Rλ ~ 206 AW⁻¹ under illumination of 1.5 µW/cm² at λ = 633 nm, Vg = 0 V, and Vds = 10 V. The obtained Rλ ~ 206 AW⁻¹ is excellent compared with GeS nanoribbon-based photodetectors and the other members of the group IV-VI family in the two-dimensional (2D) realm, such as GeSe and SnS2. The gate-dependent photoresponsivity of the GeS-FETs was further measured and can reach Rλ ~ 655 AW⁻¹ when operated at Vg = -80 V. Moreover, the multi-layered GeS photodetector exhibits a high external quantum efficiency (EQE ~ 4.0 × 10⁴ %) and specific detectivity (D* ~ 2.35 × 10¹³ Jones). The measured D* is comparable to those of advanced commercial Si- and InGaAs-based photodiodes. The GeS photodetector also shows excellent long-term photoswitching stability, with a response time of ~7 ms over a long period of operation (>1 h). These extraordinary properties of high photocurrent generation, broad spectral range, fast response, and long-term stability make the GeS-FET photodetector a highly qualified candidate for future optoelectronic applications.
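The reported figures are mutually consistent: using the standard relation EQE = Rλ·h·c/(e·λ), the quoted responsivity reproduces the quoted EQE, as the short check below shows. The D* value cannot be re-derived here because the device area and dark current are not quoted in the abstract.

import scipy.constants as const

# Reproduce the abstract's external quantum efficiency from the reported
# responsivity: EQE = R * h * c / (e * lambda).
R_lambda = 206.0          # A/W at Vg = 0 V
wavelength = 633e-9       # m

EQE = R_lambda * const.h * const.c / (const.e * wavelength)
print(f"EQE = {100 * EQE:.3g} %")   # ~4.0e4 %, matching the abstract

# Specific detectivity is conventionally D* = R * sqrt(A) / sqrt(2 e I_dark),
# which needs the device area A and dark current I_dark, not given above.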

Keywords: germanium sulfide, photodetector, photoresponsivity, external quantum efficiency, specific detectivity

Procedia PDF Downloads 513
17568 Operational Matrix Method for Fuzzy Fractional Reaction Diffusion Equation

Authors: Sachin Kumar

Abstract:

The fuzzy fractional diffusion equation is widely used to describe different physical processes arising in physics, biology, and hydrology. The aim of this article is to deal with the fuzzy fractional diffusion equation. We study a mathematical model of the fuzzy space-time fractional diffusion equation in which the unknown function, coefficients, and initial-boundary conditions are fuzzy numbers. First, we derive a fuzzy operational matrix of Legendre polynomials for the Caputo-type fuzzy fractional derivative having a non-singular Mittag-Leffler kernel. The main advantage of this method is that it reduces the fuzzy fractional partial differential equation (FFPDE) to a system of fuzzy algebraic equations, from which the solution of the problem can be found. The feasibility of our approach is shown by some numerical examples. Hence, our method is suitable for dealing with FFPDE and has good accuracy.
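The Caputo-type fractional derivative with a non-singular Mittag-Leffler kernel mentioned here is usually the Atangana-Baleanu-Caputo derivative; for reference (an assumption, since the abstract does not write it out), its standard definition is

    {}^{ABC}D_t^{\alpha} f(t) = \frac{B(\alpha)}{1-\alpha} \int_0^t f'(s)\, E_{\alpha}\!\left( -\frac{\alpha}{1-\alpha} (t-s)^{\alpha} \right) ds, \qquad 0 < \alpha < 1,

where E_α(z) = Σ_{k≥0} z^k / Γ(αk + 1) is the one-parameter Mittag-Leffler function and B(α) is a normalization function with B(0) = B(1) = 1.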

Keywords: fractional PDE, fuzzy valued function, diffusion equation, Legendre polynomial, spectral method

Procedia PDF Downloads 173
17567 A Holistic Study of the Beta Lyrae Systems V0487 Lac, V0566 Hya and V0666 Lac

Authors: Moqbil S. Alenazi, Magdy. M. Elkhateeb

Abstract:

A comprehensive photometric study and evolutionary-state analysis of the newly discovered Beta Lyrae systems V0487 Lac, V0566 Hya, and V0666 Lac were carried out by means of their first photometric observations. New times of minima were estimated from the observed light curves, and first (O-C) curves were established for all systems. A Windows interface version of the Wilson and Devinney (W-D) code, based on model atmospheres and a passband prescription, was used for the radiative treatment. The accepted models yield absolute parameters for the studied systems, which were used to adopt the spectral types of the systems' components and their evolutionary status. Distances to each system were calculated and physical properties estimated. The locations of the systems on the theoretical mass-luminosity and mass-radius relations show a good fit for all systems' components except the secondary component of V0487 Lac.
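For reference, a first (O-C) curve of the kind described is built by subtracting a linear ephemeris from the observed minima; the sketch below uses hypothetical ephemeris values and minima, not the systems' actual data.

import numpy as np

# O - C is the observed time of minimum minus the time predicted by a
# linear ephemeris T0 + P * E. T0, P and the minima are placeholders.
T0, P = 2459000.1234, 0.8765432          # reference epoch [HJD], period [d]
obs = np.array([2459000.1230, 2459087.7790, 2459175.4390])

E = np.rint((obs - T0) / P)              # integer cycle numbers
oc = obs - (T0 + P * E)
for e, d in zip(E, oc):
    print(f"E = {e:6.0f}   O-C = {86400 * d:+7.1f} s")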

Keywords: eclipsing binaries, light curve modelling, evolutionary state

Procedia PDF Downloads 58
17566 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang

Abstract:

Knowledge of land surface temperature (LST) is of crucial importance in energy balance studies and environmental modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric correction has to be performed to remove the atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum at fixed view angles with 3 km pixel size at nadir, offering new and unique capabilities for LST and LSE measurements. However, due to its high temporal resolution, the atmospheric correction cannot be performed with radiosonde profiles or reanalysis data, since these profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined by the ECMWF data available at the four synoptic times (UTC 00:00, 06:00, 12:00, 18:00) per day for each location, several approximations are adopted in this study. It is well known that the atmospheric transmittance and upwelling radiance are related to the water vapour content (WVC). With the aid of simulated data, this relationship can be determined for each viewing zenith angle and each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are preliminarily removed with the aid of the instantaneous WVC, retrieved from the brightness temperatures in SEVIRI channels 5, 9 and 10, and a set of brightness temperatures for the surface-leaving radiance (Tg) is acquired. Subsequently, one set of the six DTC model parameters is fitted to these Tg by a Levenberg-Marquardt least squares algorithm (denoted DTC model 1). Although the WVC retrieval error and the approximate relationships between WVC and the atmospheric parameters induce some uncertainty, this does not significantly affect the determination of the three parameters td, ts and β in the DTC model (β is the angular frequency, td is the time at which Tg reaches its maximum, and ts is the starting time of attenuation). Furthermore, due to the large temperature fluctuations and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before sunrise to two hours after sunrise are excluded. With td, ts and β known, a new DTC model (denoted DTC model 2) is accurately fitted again to the Tg at UTC times 05:57, 11:57, 17:57 and 23:57, which are atmospherically corrected with ECMWF data. A new set of the six DTC model parameters is thereby generated, and subsequently the Tg at any given time is acquired. Finally, this method is applied successfully to SEVIRI data in channel 9. The results show that the proposed method performs reasonably without further assumptions, and that the Tg derived with the improved method is much more consistent with radiosonde measurements.
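As a rough illustration of the DTC fitting step, the sketch below implements one common two-part form of the model (a cosine daytime branch and an exponential decay after ts, continuous at ts, after Göttsche and Olesen) and fits it to synthetic Tg values by least squares; the authors' exact six-parameter variant and Levenberg-Marquardt details may differ.

import numpy as np
from scipy.optimize import curve_fit

def dtc(t, T0, Ta, omega, tm, ts, k):
    # Six-parameter two-part DTC: cosine daytime part, exponential nighttime
    # decay starting at ts (one common form; the paper's variant may differ).
    day = T0 + Ta * np.cos(np.pi / omega * (t - tm))
    night = T0 + Ta * np.cos(np.pi / omega * (ts - tm)) * np.exp(-(t - ts) / k)
    return np.where(t < ts, day, night)

# Synthetic "surface-leaving" brightness temperatures over one day [h].
t = np.linspace(0, 24, 97)
rng = np.random.default_rng(4)
truth = (300.0, 12.0, 12.0, 13.0, 17.5, 4.0)    # T0, Ta, omega, tm, ts, k
Tg = dtc(t, *truth) + rng.normal(0, 0.3, t.size)

popt, _ = curve_fit(dtc, t, Tg, p0=(295, 10, 11, 12, 18, 3))
print("fitted (T0, Ta, omega, tm, ts, k):", np.round(popt, 2))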

Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI

Procedia PDF Downloads 249
17565 Co-Gasification of Petroleum Waste and Waste Tires: A Numerical and CFD Study

Authors: Thomas Arink, Isam Janajreh

Abstract:

The petroleum industry generates significant amounts of waste in the form of drill cuttings, contaminated soil and oily sludge. Drill cuttings are a product of off-shore drilling rigs, containing wet soil and total petroleum hydrocarbons (TPH). Contaminated soil comes from different on-shore sites and also contains TPH. The oily sludge is mainly residue or tank-bottom sludge from storage tanks. The two main treatment methods currently used are incineration and thermal desorption (TD). Thermal desorption is a method in which the waste material is heated to 450°C in an anaerobic environment to release volatiles; the condensed volatiles can be used as a liquid fuel. For the thermal desorption unit, dry contaminated soil is mixed with moist drill cuttings to generate a suitable mixture. Thermogravimetric analysis (TGA) of the TD feedstock showed that less than 50% of the TPH is released, and the discharged material is stored in landfill. This study proposes co-gasification of petroleum waste with waste tires as an alternative to thermal desorption. Co-gasification with a high-calorific material is necessary since the petroleum waste consists of more than 60 wt% ash (soil/sand), making its calorific value too low for gasification on its own. Since the gasification process occurs at 900°C and higher, close to 100% of the TPH can be released, according to the TGA. This work consists of three parts: 1. a mathematical gasification model, 2. a reactive flow CFD model and 3. experimental work on a drop tube reactor. Extensive material characterization was done by means of proximate analysis (TGA), ultimate analysis (CHNOS flash analysis) and calorific value measurements (bomb calorimeter) to provide the input parameters of the mathematical and CFD models. The mathematical model is a zero-dimensional model based on Gibbs energy minimization together with Lagrange multipliers; it is used to find the product species composition (molar fractions of CO, H2, CH4, etc.) for different tire/petroleum feedstock mixtures and equivalence ratios. The results of the mathematical model act as a reference for the CFD model of the drop tube reactor. With the CFD model, the efficiency and product species composition can be predicted for different mixtures and particle sizes. Finally, both models are verified by experiments on a drop tube reactor (1540 mm long, 66 mm inner diameter, 1400 K maximum temperature).
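A minimal sketch of the zero-dimensional equilibrium computation, Gibbs free-energy minimization under element-balance constraints (the Lagrange-multiplier formulation solved numerically), is given below for a toy C-H-O system; the dimensionless chemical potentials are illustrative placeholders, not fitted thermochemistry.

import numpy as np
from scipy.optimize import minimize

# Minimize G/RT = sum n_i (g_i + ln(n_i / n_tot)) subject to element balances
# A n = b. The g = mu0/RT values below are placeholders for illustration.
species = ["CO", "CO2", "H2", "H2O", "CH4"]
g = np.array([-23.0, -47.0, -2.0, -28.0, -10.0])
A = np.array([[1, 1, 0, 0, 1],                     # C atoms per molecule
              [0, 0, 2, 2, 4],                     # H
              [1, 2, 0, 1, 0]])                    # O
b = np.array([1.0, 4.0, 1.0])                      # element totals in the feed

def gibbs(n):
    n = np.maximum(n, 1e-12)
    return np.sum(n * (g + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.full(5, 0.5), method="SLSQP",
               bounds=[(1e-10, None)] * 5,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})
for s, n in zip(species, res.x):
    print(f"{s:4s} {n:8.4f} mol")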

Keywords: computational fluid dynamics (CFD), drop tube reactor, gasification, Gibbs energy minimization, petroleum waste, waste tires

Procedia PDF Downloads 500
17564 Different Formula of Mixed Bacteria as a Bio-Treatment for Sewage Wastewater

Authors: E. Marei, A. Hammad, S. Ismail, A. El-Gindy

Abstract:

This study aims to investigate the ability of different formulas of mixed bacteria to act as a biological treatment of wastewater after primary treatment, providing bio-treatment, bio-removal and bio-adsorption of different heavy metals under natural circumstances. The wastewater was collected from the Sarpium forest site, Ismailia Governorate, Egypt. The treatments were a mixture of free cells and a mixture of immobilized cells of different bacteria; these formulas were prepared under laboratory conditions. The obtained data indicated that, as a result of the wastewater bio-treatment, the removal rates were 76.92 and 76.70% for biological oxygen demand, 79.78 and 71.07% for chemical oxygen demand, 32.45 and 36.84% for ammonia nitrogen, and 91.67 and 50.0% for phosphate after 24 and 28 hrs with mixed free cells and mixed immobilized cells, respectively. Moreover, the bio-removal of different heavy metals reached 90.0 and 50.0% for Cu ions, 98.0 and 98.5% for Fe ions, 97.0 and 99.3% for Mn ions, 90.0 and 90.0% for Pb ions, and 80.0 and 75.0% for Zn ions after 24 and 28 hrs with mixed free cells and mixed immobilized cells, respectively. The results also indicated that removal efficiencies (reductions in total dissolved solids) of 13.86 and 17.43% were achieved after 24 and 28 hrs with mixed free cells and mixed immobilized cells, respectively.

Keywords: wastewater bio-treatment, bio-sorption of heavy metals, biological desalination, immobilized bacteria, free cell bacteria

Procedia PDF Downloads 178
17563 Features of Normative and Pathological Realizations of Sibilant Sounds for Computer-Aided Pronunciation Evaluation in Children

Authors: Zuzanna Miodonska, Michal Krecichwost, Pawel Badura

Abstract:

Sigmatism (lisping) is a speech disorder in which sibilant consonants are mispronounced. The diagnosis of this phenomenon is usually based on auditory assessment. However, progress in speech analysis techniques creates the possibility of developing computer-aided sigmatism diagnosis tools. The aim of the study is to statistically verify whether specific acoustic features of sibilant sounds may be related to pronunciation correctness. Such knowledge can be of great importance when implementing classifiers and designing novel tools for automatic evaluation of sibilant pronunciation. The study covers analysis of various speech signal measures, including features proposed in the literature for the description of normative sibilant realization. Amplitudes and frequencies of three fricative formants (FF) are extracted based on local spectral maxima of the friction noise. Skewness, kurtosis, four normalized spectral moments (SM) and 13 mel-frequency cepstral coefficients (MFCC) with their 1st and 2nd derivatives (13 Delta and 13 Delta-Delta MFCC) are included in the analysis as well. The resulting feature vector contains 51 measures. The experiments are performed on a speech corpus containing words with the selected sibilant sounds (/ʃ, ʒ/) pronounced by 60 preschool children with proper pronunciation or with natural pathologies. In total, 224 /ʃ/ segments and 191 /ʒ/ segments are employed in the study. The Mann-Whitney U test is employed for the comparison of sigmatic and normative pronunciation. Statistically significant differences between the two groups of children are obtained for most of the proposed features at p < 0.05. All spectral moments and fricative formants appear to be distinctive between pathological and proper pronunciation. These metrics describe the friction noise characteristic of sibilants, which makes them particularly promising for use in sibilant evaluation tools. The correspondences found between phoneme feature values and expert evaluation of pronunciation correctness encourage the involvement of speech analysis tools in the diagnosis and therapy of sigmatism. The proposed feature extraction methods could be used in computer-assisted sigmatism diagnosis or therapy systems.
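A sketch of the MFCC part of the feature vector and the per-feature Mann-Whitney U test might look as follows; the file names are hypothetical placeholders, and the full 51-measure vector additionally includes the fricative formants and spectral moments.

import numpy as np
import librosa
from scipy.stats import mannwhitneyu

def sibilant_features(path):
    # 13 MFCCs with 1st and 2nd derivatives, averaged over the segment
    # (a subset of the paper's 51-measure vector).
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    feats = np.vstack([mfcc, librosa.feature.delta(mfcc),
                       librosa.feature.delta(mfcc, order=2)])
    return feats.mean(axis=1)

# Tiny placeholder lists; the study used 224 and 191 real segments, which is
# what makes the significance testing meaningful.
norm_feats = np.array([sibilant_features(p) for p in ["norm_01.wav", "norm_02.wav"]])
sigm_feats = np.array([sibilant_features(p) for p in ["sigm_01.wav", "sigm_02.wav"]])

for i in range(norm_feats.shape[1]):
    stat, p = mannwhitneyu(norm_feats[:, i], sigm_feats[:, i], alternative="two-sided")
    if p < 0.05:
        print(f"feature {i:2d}: U = {stat:.1f}, p = {p:.3f}")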

Keywords: computer-aided pronunciation evaluation, sigmatism diagnosis, speech signal analysis, statistical verification

Procedia PDF Downloads 279
17562 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models

Authors: Azadeh Jafari, Robert G. Owens

Abstract:

In this study, a geometrical multiscale approach, meaning the coupling of the 2-D Navier-Stokes equations and constitutive equations with 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics of a large part, or even the whole, of the cardiovascular system at acceptable computational cost. We introduce a new velocity correction scheme to decouple the velocity computation from the pressure computation. To evaluate the capability of the new scheme, a comparison was performed between results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure, and those obtained using the coupling with the lumped parameter model. Comprehensive studies have been carried out on the sensitivity of the numerical scheme to the initial conditions, elasticity and number of spectral modes. Improvement of the computational algorithm with stable convergence has been demonstrated for at least moderate Weissenberg numbers. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluid problems whose applications go significantly beyond the one addressed in this work.
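The 0-D lumped parameter models referred to are typically Windkessel-type RC compartments attached at the outflow; a minimal sketch of such a compartment, with illustrative parameters rather than the paper's, is:

import numpy as np

# Two-element Windkessel: C dP/dt = Q - P/R, the kind of 0-D model commonly
# coupled to the outflow of a 2-D/3-D blood-flow domain. Parameter values
# are illustrative, not from the paper.
R, C = 1.0, 1.5                 # peripheral resistance, compliance
T, dt = 0.8, 1e-4               # cardiac period [s], time step
t = np.arange(0, 10 * T, dt)
Q = np.where((t % T) < 0.3, np.sin(np.pi * (t % T) / 0.3), 0.0)  # pulsatile inflow

P = np.zeros_like(t)
for i in range(t.size - 1):
    # Forward-Euler update; in a coupled solver this pressure is fed back to
    # the 2-D domain as the outflow boundary condition at each coupling step.
    P[i + 1] = P[i] + dt * (Q[i] - P[i] / R) / C

print(f"mean outflow pressure over last beat: {P[t > 9 * T].mean():.3f}")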

Keywords: geometrical multiscale models, haemorheology model, coupled 2-D Navier-Stokes 0-D lumped parameter modeling, computational fluid dynamics

Procedia PDF Downloads 341
17561 Nonlinear Triad Interactions in Magnetohydrodynamic Plasma Turbulence

Authors: Yasser Rammah, Wolf-Christian Mueller

Abstract:

Nonlinear triad interactions in incompressible three-dimensional magnetohydrodynamic (3D-MHD) turbulence are studied by analyzing data from high-resolution direct numerical simulations of decaying isotropic (512³ grid points) and forced anisotropic (1024² × 256 grid points) turbulence. An accurate numerical approach to analyzing the nonlinear turbulent energy transfer function and triad interactions is presented. It involves the direct numerical examination of every wavenumber triad associated with the nonlinear terms in the differential equations of MHD in the inertial range of turbulence. The technique allows us to compute the spectral energy transfer and energy fluxes, as well as the spectral locality of the energy transfer function. To this end, the geometrical shape of each underlying wavenumber triad that contributes to the statistical transfer density function is examined to infer the locality of the energy transfer. Results show that the total energy transfer is local via nonlocal triad interactions in decaying macroscopically isotropic MHD turbulence. In anisotropic MHD turbulence subject to a strong mean magnetic field, the nonlinear transfer is generally weaker and exhibits a moderate increase of nonlocality in both the perpendicular and parallel directions compared to the isotropic case. These results support recent mathematical findings, which also claim the locality of nonlinear energy transfer in MHD turbulence.
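The flavour of a spectral energy-transfer computation can be conveyed in a reduced 1-D setting: for a Burgers-type nonlinearity, T(k) = -Re{conj(û(k)) · (u uₓ)^(k)}, and summing T over all k gives zero for a conservative nonlinearity. The sketch below illustrates exactly this; the paper's analysis performs the analogous computation triad-by-triad in 3-D MHD.

import numpy as np

# Reduced 1-D illustration of a spectral energy-transfer function.
N = 256
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, 1.0 / N)                   # integer wavenumbers

u = np.sin(x) + 0.5 * np.sin(3 * x + 0.4) + 0.1 * np.sin(7 * x)
u_hat = np.fft.fft(u)
ux = np.real(np.fft.ifft(1j * k * u_hat))        # spectral derivative
adv_hat = np.fft.fft(u * ux)                     # nonlinear term in k-space

T = -np.real(np.conj(u_hat) * adv_hat) / N ** 2  # transfer into each mode
print("net transfer (should be ~0, conservative):", T.sum())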

Keywords: magnetohydrodynamic (MHD) turbulence, transfer density function, locality function, direct numerical simulation (DNS)

Procedia PDF Downloads 365
17560 Delivery of Ginseng Extract Containing Phytosome Loaded Microsphere System: A Preclinical Approach for Treatment of Neuropathic Pain in Rodent Model

Authors: Nitin Kumar

Abstract:

Purpose: The current research work focuses mainly on developing a delivery system for ginseng extract (GE), which in turn ameliorates its neuroprotective potential by enhancing the bioavailability (BA) of ginsenoside Rb1. For greater enhancement of oral bioavailability (OBA) and pharmacological properties, the performance of the drug carrier can be strengthened by utilizing a phytosome-loaded microsphere (PM) delivery system. Methods: To prepare the different phytosome complexes (F1, F2, and F3), an aqueous extract of ginseng roots (GR) was reacted with phospholipids in different ratios. Based on the outcomes, the spray-dried F3 formulation was chosen for preparing the phytosome powder (PP), PM, and extract microspheres (EM). PM was made by loading F3 into a Gum Arabic (GA) and maltodextrin polymer mixture, whereas EM was prepared by adding the extract directly to the same polymer mixture. The PP, PM, and EM formulations were assessed for their neuroprotective effect (NPE) and pharmacokinetic (PK) properties. Results: The F3 formulation gave enhanced entrapment efficiency (EE) (50.61%) along with good homogeneity of spherical particles (particle size 42.58 ± 1.4 nm) and the lowest polydispersity index (PDI, 0.193 ± 0.01). The dissolution study of PM revealed sustained release (up to 24 h) of ginsenoside Rb1 (GRb1). A significantly (p < 0.05) greater anti-oxidant (AO) potential of PM can be perceived from the reduction in the lipid peroxidase level and the rise in glutathione superoxide dismutase (SOD) and catalase levels. PM also showed greater neuroprotective potential, exhibiting a significant (p < 0.05) increase in the nociceptive threshold together with a reduction in nerve damage. A noteworthy enhancement in the relative BA (157.94%) of GRb1 with the PM formulation was seen in the PK studies. Conclusion: The PM system is shown to be a promising and feasible strategy to enhance the delivery of GE for the effective treatment of neuropathic pain.

Keywords: ginseng, neuropathic, phytosome, pain

Procedia PDF Downloads 173
17559 Describing the Fine Electronic Structure and Predicting Properties of Materials with ATOMIC MATTERS Computation System

Authors: Rafal Michalski, Jakub Zygadlo

Abstract:

We present the concept, scientific methods and algorithms of our computation system called ATOMIC MATTERS. This is the first presentation of the new computer package, which allows its user to describe the physical properties of atomic, localized electron systems subject to electromagnetic interactions. Our solution applies to situations where an unfilled electron 2p/3p/3d/4d/5d/4f/5f subshell interacts with an electrostatic potential of definable symmetry and an external magnetic field. Our methods are based on the Crystal Electric Field (CEF) approach, which takes into consideration the electrostatic ligand field as well as the magnetic Zeeman effect. The application allows us to predict macroscopic properties of materials, such as magnetic, spectral and calorimetric properties, as a result of the physical properties of their fine electronic structure. We emphasize the importance of the symmetry of the charge surroundings of the atom/ion, spin-orbit interactions (spin-orbit coupling) and the use of complex number matrices in the definition of the Hamiltonian. The calculation methods, algorithms and convention recalculation tools collected in ATOMIC MATTERS were chosen to permit the prediction of magnetic and spectral properties of materials in isostructural series.
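A toy version of the kind of computation such a CEF package performs, building angular-momentum matrices for a J multiplet, adding one uniaxial Stevens term and a Zeeman term, and diagonalizing the resulting complex Hermitian Hamiltonian, is sketched below; J and the parameter values are illustrative placeholders.

import numpy as np

def j_matrices(J):
    # Angular-momentum matrices in the |J, m> basis (hbar = 1).
    m = np.arange(-J, J + 1)
    jp = np.sqrt(J * (J + 1) - m[:-1] * (m[:-1] + 1))   # <m+1|J+|m>
    Jp = np.diag(jp, -1)                                # raising operator
    Jz = np.diag(m)
    return Jz, (Jp + Jp.T) / 2, (Jp - Jp.T) / 2j        # Jz, Jx, Jy

J, B20, zeeman = 4, 0.05, 0.02          # meV-scale placeholders
Jz, Jx, Jy = j_matrices(J)
O20 = 3 * Jz @ Jz - J * (J + 1) * np.eye(int(2 * J + 1))  # Stevens operator O2^0
H = B20 * O20 + zeeman * Jy             # field along y -> complex Hermitian H

levels = np.linalg.eigvalsh(H)          # fine-structure energy levels
print("levels relative to ground state:", np.round(levels - levels.min(), 4))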

Keywords: atomic matters, crystal electric field (CEF) spin-orbit coupling, localized states, electron subshell, fine electronic structure

Procedia PDF Downloads 300
17558 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity

Authors: Yuri Laevsky, Tatyana Nosova

Abstract:

The phenomenon of filtration gas combustion (FGC) was discovered experimentally at the beginning of the 1980s. It has a number of important applications in areas such as chemical technology, fire and explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction in a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front has different modes, and this investigation is focused on the low-velocity regime. The main characteristic of the process is the velocity of combustion front propagation, whose computation encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by the energy conservation laws for the temperature of the porous medium and the temperature of the gas, and the mass conservation law for the relative concentration of the reacting component of the gas mixture. The homogenization of the model is performed with the two-temperature approach, in which at each point of the continuous medium we specify the solid and gas phases with a Newtonian heat exchange between them. The construction of a computational scheme is based on the principles of the mixed finite element method on a regular mesh. The approximation in time is performed by an explicit-implicit difference scheme. Special attention was given to the determination of the combustion front propagation velocity: straightforward computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term 'front propagation velocity' makes sense for settled motion, when certain analytical formulae linking velocity and equilibrium temperature hold. The numerical implementation of one such formula, leading to stable computation of the instantaneous front velocity, has been proposed. The resulting algorithm was applied in a subsequent numerical investigation of the FGC process, studying the dependence of the main characteristics of the process on various physical parameters. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity was investigated. It was also reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a sort of breakdown occurs from slow combustion front propagation to rapid propagation. Approximate boundaries of this interval were calculated for some specific parameters. All the results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and the algorithm constructed. The availability of stable techniques to calculate the instantaneous velocity of the combustion wave allows considering a semi-Lagrangian approach to the solution of the problem.
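One generic way to see the instability of a raw grid derivative and how a stabilized estimate behaves is sketched below: the front position is located by interpolating a threshold crossing and the velocity is taken from a least-squares fit of position versus time. This only illustrates the difficulty; the paper's own stabilization uses an analytical relation between velocity and equilibrium temperature.

import numpy as np

nx, nt = 400, 200
x = np.linspace(0.0, 1.0, nx)
t = np.linspace(0.0, 10.0, nt)
v_true, T_star = 0.03, 0.5          # synthetic front speed, threshold level

pos = np.empty(nt)
rng = np.random.default_rng(5)
for n, tn in enumerate(t):
    # Synthetic travelling temperature front with grid-scale noise.
    T = 1.0 / (1.0 + np.exp((x - 0.1 - v_true * tn) / 0.01)) + rng.normal(0, 0.01, nx)
    i = np.argmax(T < T_star)                       # first node past the front
    # Linear interpolation of the crossing between nodes i-1 and i.
    pos[n] = x[i - 1] + (T[i - 1] - T_star) * (x[i] - x[i - 1]) / (T[i - 1] - T[i])

# Settled motion -> linear fit of position vs time; its slope is far less
# noisy than a pointwise finite difference of pos.
coef = np.polyfit(t, pos, 1)
print(f"estimated front velocity: {coef[0]:.4f} (true {v_true})")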

Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation

Procedia PDF Downloads 284
17557 Syntax and Words as Evolutionary Characters in Comparative Linguistics

Authors: Nancy Retzlaff, Sarah J. Berkemer, Trudie Strauss

Abstract:

In the last couple of decades, the digitalization of all kinds of data was probably one of the major advances across fields of study. This paves the way for analysing data even from disciplines with no initial computational necessity to do so. In linguistics especially, one finds a rather manual tradition; still, studies involving the history of language families show striking similarities to bioinformatics (phylogenetic) approaches. Alignment of words is a fairly well studied example of an application of bioinformatics methods to historical linguistics. In this paper, we consider not only alignments of strings, i.e., words in this case, but also alignments of syntax trees of selected Indo-European languages. Based on initial, crude alignments, a sophisticated scoring model is trained on both letters and syntactic features. The aim is to gain a better understanding of which features in two languages are related, i.e., most likely to have the same root. Initially, all words in two languages are pre-aligned with a basic scoring model that primarily selects consonants and adjusts them before fitting in the vowels. Mixture models are subsequently used to filter 'good' alignments depending on the alignment length and the number of inserted gaps. Using these selected word alignments, it is possible to perform tree alignments of the given syntax trees and consequently find sentences that correspond rather well to each other across languages. The syntax alignments are then filtered for meaningful scores: 'good' scores contain evolutionary information and are therefore used to train the sophisticated scoring model. Further iterations of alignment and training steps are performed until the scoring model saturates, i.e., barely changes anymore. A detailed evaluation of the trained scoring model and its capacity to capture evolutionarily meaningful information will be given, together with an assessment of sentence alignment compared to possible phrase structure. The method described here may have its flaws because of limited prior information. It may, however, offer a good starting point for studying languages where only little prior knowledge is available and a detailed, unbiased study is needed.
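The pre-alignment step, a global alignment whose substitution scores favour consonants, can be sketched with a standard Needleman-Wunsch recursion; the scores below are crude placeholders, not the trained scoring model.

import numpy as np

CONSONANTS = set("bcdfghjklmnpqrstvwxz")

def score(a, b):
    # Crude substitution scores that reward consonant matches, echoing a
    # basic model that "primarily selects consonants". Values are placeholders.
    if a == b:
        return 2 if a in CONSONANTS else 1
    if (a in CONSONANTS) == (b in CONSONANTS):
        return -1
    return -2

def align(w1, w2, gap=-2):
    # Standard Needleman-Wunsch global alignment score.
    n, m = len(w1), len(w2)
    F = np.zeros((n + 1, m + 1))
    F[:, 0] = gap * np.arange(n + 1)
    F[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i, j] = max(F[i - 1, j - 1] + score(w1[i - 1], w2[j - 1]),
                          F[i - 1, j] + gap, F[i, j - 1] + gap)
    return F[n, m]

# Cognate candidates score higher than unrelated pairs.
print(align("night", "nacht"), align("night", "luna"))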

Keywords: alignments, bioinformatics, comparative linguistics, historical linguistics, statistical methods

Procedia PDF Downloads 134
17556 The Modification of Convolutional Neural Network in Fin Whale Identification

Authors: Jiahao Cui

Abstract:

In the past centuries, due to climate change and intense whaling, the global whale population has dramatically declined. Among the various whale species, the fin whale experienced the most drastic drop in numbers due to its popularity in whaling. Against this background, identifying fin whale calls could be immensely beneficial to the preservation of the species. This paper uses feature extraction to process the input audio signal; then a network based on AlexNet and three networks based on the ResNet model were constructed to classify fin whale calls. A mixture of the DOSITS database and the Watkins database was used during training. The results demonstrate that a modified ResNet network has the best performance considering precision and network complexity.
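One generic way to adapt a torchvision ResNet to this task, a single-channel spectrogram input and a two-class output, is sketched below; it is not the paper's exact modification.

import torch
import torch.nn as nn
from torchvision.models import resnet18

# Adapt ResNet-18 for fin whale call classification: accept a one-channel
# spectrogram and output two classes (call / no call).
model = resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)

# A batch of 8 log-mel spectrogram "images" (1 x 128 x 128) as dummy input.
x = torch.randn(8, 1, 128, 128)
logits = model(x)
print(logits.shape)   # torch.Size([8, 2])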

Keywords: convolutional neural network, ResNet, AlexNet, fin whale preservation, feature extraction

Procedia PDF Downloads 96
17555 Impact of Weather Conditions on Generalized Frequency Division Multiplexing over Gamma Gamma Channel

Authors: Muhammad Sameer Ahmed, Piotr Remlein, Tansal Gucluoglu

Abstract:

The technique called generalized frequency division multiplexing (GFDM), used over the free space optical channel, can be a good option for implementing free space optical communication systems. This technique has several strengths, e.g., good spectral efficiency, low peak-to-average power ratio (PAPR), adaptability and low co-channel interference. In this paper, the impact of weather conditions such as haze, rain and fog on GFDM over the gamma-gamma channel model is discussed. A trade-off between link distance and system performance under intense weather conditions is also analysed. The symbol error probability (SEP) of GFDM over the gamma-gamma turbulence channel is derived and verified with computer simulations.
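For reference, the gamma-gamma irradiance is the product of two independent unit-mean Gamma variates, which makes Monte Carlo verification of derived expressions straightforward; the sketch below checks the scintillation index sigma_I² = 1/α + 1/β + 1/(αβ) with illustrative turbulence parameters, not the paper's.

import numpy as np

# Gamma-gamma fading: irradiance I = X * Y with X ~ Gamma(alpha, 1/alpha)
# and Y ~ Gamma(beta, 1/beta), so E[I] = 1. Alpha and beta below are
# illustrative values for moderate turbulence.
rng = np.random.default_rng(6)
alpha, beta = 4.0, 1.9

X = rng.gamma(shape=alpha, scale=1.0 / alpha, size=1_000_000)
Y = rng.gamma(shape=beta, scale=1.0 / beta, size=1_000_000)
I = X * Y

si_mc = I.var() / I.mean() ** 2
si_th = 1 / alpha + 1 / beta + 1 / (alpha * beta)
print(f"scintillation index: MC {si_mc:.4f} vs theory {si_th:.4f}")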

Keywords: free space optics, generalized frequency division multiplexing, weather conditions, gamma gamma distribution

Procedia PDF Downloads 148
17554 Power Iteration Clustering Based on Deflation Technique on Large Scale Graphs

Authors: Taysir Soliman

Abstract:

One of the currently popular clustering techniques is Spectral Clustering (SC), because of its advantages over conventional approaches such as hierarchical clustering, k-means, and other techniques. However, one of the disadvantages of SC is its time-consuming computation of eigenvectors. To overcome this disadvantage, a number of alternatives have been proposed, such as the Power Iteration Clustering (PIC) technique, a variant of SC. Some of PIC's advantages are: 1) its scalability and efficiency, 2) finding one pseudo-eigenvector instead of computing all eigenvectors, and 3) obtaining a linear combination of the eigenvectors in linear time. However, its worst disadvantage is an inter-class collision problem, because it uses only one pseudo-eigenvector, which is not enough. Previous researchers developed Deflation-based Power Iteration Clustering (DPIC) to overcome the inter-class collision problem of PIC with the same efficiency as PIC. In this paper, we develop Parallel DPIC (PDPIC) to improve the time and memory complexity; it runs on the Apache Spark framework using sparse matrices. To test the performance of PDPIC, we compared it to the SC, ESCG and ESCALG algorithms on four small and nine large graph benchmark datasets, where PDPIC achieved higher accuracy and lower time consumption than the other compared algorithms.
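The core PIC iteration that DPIC and PDPIC build on is short enough to sketch in full: iterate v ← Wv on the row-normalized affinity matrix with L1 normalization, stop when the change in the increment (the "acceleration") is small, and cluster the resulting 1-D embedding. This single-machine numpy sketch is only the baseline algorithm, not the deflation or Spark machinery.

import numpy as np

def pic(A, k, eps=1e-6, max_iter=200):
    # Power Iteration Clustering on an affinity matrix A.
    W = A / A.sum(axis=1, keepdims=True)           # row-stochastic W = D^-1 A
    v = A.sum(axis=1) / A.sum()                    # standard degree-based start
    delta_old = np.inf
    for _ in range(max_iter):
        v_new = W @ v
        v_new /= np.abs(v_new).sum()               # L1 normalization
        delta = np.abs(v_new - v).max()
        if abs(delta_old - delta) < eps / A.shape[0]:
            break                                  # acceleration is small
        v, delta_old = v_new, delta

    # Tiny 1-D k-means on the near-converged embedding v.
    centers = np.quantile(v, np.linspace(0.1, 0.9, k))
    for _ in range(50):
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        centers = np.array([v[labels == c].mean() for c in range(k)])
    return labels

# Two noisy blobs -> block-structured affinity matrix -> two clusters.
rng = np.random.default_rng(7)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
A = np.exp(-d2 / 0.5)
print(pic(A, 2))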

Keywords: spectral clustering, power iteration clustering, deflation-based power iteration clustering, Apache spark, large graph

Procedia PDF Downloads 168
17553 Graphene Metamaterials Supported Tunable Terahertz Fano Resonance

Authors: Xiaoyong He

Abstract:

The manipulation of THz waves is still a challenging task due to the lack of natural materials that interact with them strongly. Metamaterials (MMs), designed by tailoring the characteristics of unit cells (meta-molecules), may solve this problem. However, because of Ohmic and radiation losses, the performance of MM devices suffers from dissipation and low quality factors (Q-factors). This dilemma may be circumvented by Fano resonance, which arises from the destructive interference between a bright continuum mode and a dark discrete mode (or a narrow resonance). Unlike a symmetric Lorentzian spectral curve, a Fano resonance exhibits a distinctly asymmetric line shape, an ultrahigh quality factor, and steep variations in the spectral curve. Fano resonance is usually realized through symmetry breaking. However, if concentric double rings (DR) are placed closely together, the near-field coupling between them gives rise to two hybridized modes (a bright mode and a narrowband dark mode) because of the local asymmetry, resulting in the characteristic Fano line shape. Furthermore, from the practical viewpoint, it is highly desirable to modulate the Fano spectral curves conveniently, which is an important and interesting research topic. For current Fano systems, tunable spectral curves can be realized by adjusting the geometrical structural parameters or the magnetic field biasing a ferrite-based structure. But due to the limited dispersion properties of active materials, it is still difficult to tailor the Fano resonance conveniently with fixed structural parameters. With its favorable properties of extreme confinement and high tunability, graphene is a strong candidate to achieve this goal. The DR structure supports the excitation of so-called 'trapped modes', with the merits of a simple structure and high-quality resonances in thin structures. By depositing graphene concentric double rings on a SiO2/Si/polymer substrate, the tunable Fano resonance has been theoretically investigated in the terahertz regime, including the effects of the graphene Fermi level, structural parameters and operation frequency. The results show that the Fano peak can be efficiently modulated because of the strong coupling between the incident waves and the graphene ribbons. As the Fermi level increases, the peak amplitude of the Fano curve increases and the resonant peak position shifts to higher frequency. The amplitude modulation depth of the Fano curves is about 30% if the Fermi level changes within the range 0.1-1.0 eV. The optimum gap distance between the double rings is about 8-12 μm, where the figure of merit shows a peak. As the graphene ribbon width increases, the Fano spectral curves broaden and the resonant peak exhibits a blue shift. These results are very helpful for developing novel graphene plasmonic devices, e.g., sensors and modulators.
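For reference, the asymmetric profile discussed is described by the standard Fano formula F(ε) = (q + ε)² / (1 + ε²) with reduced detuning ε = 2(ω − ω₀)/Γ; the short sketch below evaluates it with illustrative THz parameters (not the simulated device's) to show the dip at ε = −q and the peak at ε = 1/q.

import numpy as np

# Standard Fano line shape; q -> infinity recovers a Lorentzian.
w = np.linspace(0.5, 1.5, 11)       # frequency [THz], illustrative grid
w0, Gamma, q = 1.0, 0.1, 1.5        # resonance, linewidth, asymmetry factor
eps = 2 * (w - w0) / Gamma
F = (q + eps) ** 2 / (1 + eps ** 2)
for wi, Fi in zip(w, F):
    print(f"{wi:.1f} THz  F = {Fi:6.3f}")

# The zero (F = 0) at eps = -q and the maximum (F = 1 + q^2) at eps = 1/q
# sit on opposite sides of w0: the characteristic asymmetric profile.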

Keywords: graphene, metamaterials, terahertz, tunable

Procedia PDF Downloads 327
17552 The Impact of Trait and Mathematical Anxiety on Oscillatory Brain Activity during Lexical and Numerical Error-Recognition Tasks

Authors: Alexander N. Savostyanov, Tatyana A. Dolgorukova, Elena A. Esipenko, Mikhail S. Zaleshin, Margherita Malanchini, Anna V. Budakova, Alexander E. Saprygin, Yulia V. Kovas

Abstract:

The present study compared spectral-power indexes and the cortical topography of brain activity in a sample characterized by different levels of trait and mathematical anxiety. 52 healthy Russian speakers (age 17-32; 30 males) participated in the study. Participants solved an error recognition task under 3 conditions: a lexical condition (simple sentences in Russian) and two numerical conditions (simple arithmetic and complicated algebraic problems). Trait and mathematical anxiety were measured using self-report questionnaires. EEG activity was recorded simultaneously during task execution. Event-related spectral perturbations (ERSP) were used to analyze spectral-power changes in brain activity. Additionally, sLORETA was applied in order to localize the sources of brain activity. For the EEG activity recorded after task onset in the lexical condition, sLORETA revealed increased activation in frontal and left temporal cortical areas, mainly in the alpha/beta frequency ranges. For the EEG activity recorded after task onset in the arithmetic and algebraic conditions, additional activation in the delta/theta band in the right parietal cortex was observed. The ERSP plots revealed alpha/beta desynchronization within a 500-3000 ms interval after task onset and slow-wave synchronization within an interval of 150-350 ms. The amplitudes in these intervals reflected the accuracy of error recognition and were differently associated with the three (lexical, arithmetic and algebraic) conditions. The level of trait anxiety was positively correlated with the amplitude of the alpha/beta desynchronization. The level of mathematical anxiety was negatively correlated with the amplitudes of the theta synchronization and the alpha/beta desynchronization. Overall, trait anxiety was related to an increase in brain activation during task execution, whereas mathematical anxiety was associated with increased inhibition-related activity. We gratefully acknowledge the support of grant №11.G34.31.0043 from the Government of the Russian Federation.
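A minimal version of the ERSP computation, time-frequency power of task epochs expressed in dB relative to the mean pre-stimulus baseline power at each frequency, is sketched below on synthetic data containing an alpha-band desynchronization; the study's actual pipeline and parameters may differ.

import numpy as np
from scipy.signal import spectrogram

fs = 256.0
t = np.arange(-0.5, 3.0, 1 / fs)              # epoch time, task onset at t = 0
rng = np.random.default_rng(8)
# 40 synthetic epochs: a 12 Hz rhythm whose amplitude drops after onset.
epochs = np.array([np.sin(2 * np.pi * 12 * t) * np.where(t < 0, 1.0, 0.4)
                   + rng.normal(0, 0.5, t.size) for _ in range(40)])

f, tt, S = spectrogram(epochs, fs=fs, nperseg=64, noverlap=48)
S = S.mean(axis=0)                            # average power over epochs
baseline = S[:, tt < 0.4].mean(axis=1)        # windows before task onset
ersp = 10 * np.log10(S / baseline[:, None])   # dB relative to baseline

i = np.argmin(np.abs(f - 12))                 # 12 Hz bin
print(np.round(ersp[i], 1))                   # negative values = desynchronization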

Keywords: anxiety, EEG, lexical and numerical error-recognition tasks, alpha/beta desynchronization

Procedia PDF Downloads 510