Search results for: Dirichlet processes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5644

544 Precursor Synthesis of Carbon Materials with Different Aggregates Morphologies

Authors: Nikolai A. Khlebnikov, Vladimir N. Krasilnikov, Evgenii V. Polyakov, Anastasia A. Maltceva

Abstract:

Carbon materials with advanced surfaces are widely used both in modern industry and in environmental protection. The physical-chemical nature of these materials is determined by the morphology of the primary atomic and molecular carbon structures, which form the basis for synthesizing zero-dimensional (fullerenes), one-dimensional (fibers, tubes), two-dimensional (graphene), and three-dimensional (multi-layer graphene, graphite, foams) carbon nanostructures with unique physical-chemical and functional properties. Experience shows that the microscopic morphological level is the basis for the creation of the next, mesoscopic morphological level. A peculiarity of this latter level is that its morphology depends on the chemical route and on the process history (crystallization, colloid formation, liquid-crystal state, and others). These factors determine the consumer properties of carbon materials, such as specific surface area, porosity, chemical resistance in corrosive environments, and catalytic and adsorption activities. Based on the methodology of precursor synthesis developed by the authors, one approach to controlling the porosity of carbon-containing materials with a given aggregate morphology is discussed. The approach rests on the low-temperature thermolysis of precursors in a gas environment of a given composition. The carbothermic precursor synthesis of two different compounds, tungsten carbide WC:nC and zinc oxide ZnO:nC, each containing an impurity phase in the form of free carbon, was selected as the subject of the research. In the first case, the object of the synthesis was a transition metal (tungsten) that forms carbides; in the second case, zinc, which does not form carbides, was selected. Both kinds of metal compounds were synthesized by the method of precursor carbothermic synthesis from an organic solution.
ZnO:nC composites were obtained by thermolysis of zinc succinate Zn(OO(CH2)2OO), formate glycolate Zn(HCOO)(OCH2CH2O)1/2, glycerolate Zn(OCH2CHOCH2OH), and tartrate Zn(OOCCH(OH)CH(OH)COO). The WC:nC composite was synthesized from ammonium paratungstate and glycerol. In all cases, carbon structures specific to diamond-like carbon forms appeared on the surface of the WC and ZnO particles after heat treatment. Tungsten carbide and zinc oxide were removed from the composites by selective chemical dissolution, preserving the amorphous carbon phase. This work presents the results of investigating the WC:nC and ZnO:nC composites and the carbon nanopowders with tubular, tape, plate, and onion aggregate morphologies separated by chemical dissolution of WC and ZnO from the composites, using SEM, TEM, X-ray phase analysis (XPA), Raman spectroscopy, and BET. The connection between carbon morphology, the synthesis conditions, and the chemical nature of the precursor, as well as the possibility of controlling the morphology of carbon-structured materials with specific surface areas up to 1700-2000 m²/g, are discussed.

Keywords: carbon morphology, composite materials, precursor synthesis, tungsten carbide, zinc oxide

Procedia PDF Downloads 335
543 Transcription Skills and Written Composition in Chinese

Authors: Pui-sze Yeung, Connie Suk-han Ho, David Wai-ock Chan, Kevin Kien-hoa Chung

Abstract:

Background: Recent findings have shown that transcription skills play a unique and significant role in Chinese word reading, spelling (i.e., word dictation), and written composition development. The interrelationships among the component skills of transcription, word reading, word spelling, and written composition in Chinese have rarely been examined in the literature. Is the contribution of the component skills of transcription to Chinese written composition mediated by word-level skills (i.e., word reading and spelling)? Methods: The participants in the study were 249 Chinese children in Grade 1, Grade 3, and Grade 5 in Hong Kong. They were administered measures of general reasoning ability, orthographic knowledge, stroke sequence knowledge, word spelling, handwriting fluency, word reading, and Chinese narrative writing. Orthographic knowledge: assessed by a task modeled after the lexical decision subtest of the Hong Kong Test of Specific Learning Difficulties in Reading and Writing (HKT-SpLD). Stroke sequence knowledge: the participants' performance in producing legitimate stroke sequences was measured by a stroke sequence knowledge task. Handwriting fluency: assessed by a task modeled after the Chinese Handwriting Speed Test. Word spelling: the stimuli of the word spelling task consisted of fourteen two-character Chinese words. Word reading: the stimuli of the word reading task consisted of 120 two-character Chinese words. Written composition: a narrative writing task was used to assess the participants' text writing skills. Results: Analysis of covariance results showed significant between-grade differences in the performance of word reading, word spelling, handwriting fluency, and written composition.
Preliminary hierarchical multiple regression analysis results showed that orthographic knowledge, word spelling, and handwriting fluency were unique predictors of Chinese written composition even after controlling for age, IQ, and word reading. The interaction effects between grade and each of these three skills (orthographic knowledge, word spelling, and handwriting fluency) were not significant. Path analysis results showed that orthographic knowledge contributed to written composition both directly and indirectly through word spelling, while handwriting fluency contributed to written composition directly and indirectly through both word reading and spelling. Stroke sequence knowledge only contributed to written composition indirectly through word spelling. Conclusions: Preliminary hierarchical regression results were consistent with previous findings about the significant role of transcription skills in Chinese word reading, spelling and written composition development. The fact that orthographic knowledge contributed both directly and indirectly to written composition through word reading and spelling may reflect the impact of the script-sound-meaning convergence of Chinese characters on the composing process. The significant contribution of word spelling and handwriting fluency to Chinese written composition across elementary grades highlighted the difficulty in attaining automaticity of transcription skills in Chinese, which limits the working memory resources available for other composing processes.
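The hierarchical-regression logic described in the abstract (control variables entered first, transcription skills added second, and the increment in explained variance inspected) can be sketched as below. All data are synthetic stand-ins generated for illustration; the variable names and effect sizes are assumptions, not the study's measures.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 249  # sample size matching the study's cohort

# Hypothetical synthetic data standing in for the study's measures
age = rng.normal(9, 2, n)
iq = rng.normal(100, 15, n)
word_reading = 0.3 * iq + rng.normal(0, 10, n)
spelling = 0.4 * word_reading + rng.normal(0, 8, n)
handwriting = rng.normal(50, 10, n)
composition = 0.5 * spelling + 0.3 * handwriting + rng.normal(0, 10, n)

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Step 1: control variables only (age, IQ, word reading)
r2_controls = r_squared(np.column_stack([age, iq, word_reading]), composition)
# Step 2: add the transcription skills (spelling, handwriting fluency)
r2_full = r_squared(
    np.column_stack([age, iq, word_reading, spelling, handwriting]), composition
)

# Unique variance in composition explained by transcription skills
delta_r2 = r2_full - r2_controls
print(round(delta_r2, 3))
```

A significant delta-R² at step 2 is what "unique predictors even after controlling for age, IQ, and word reading" means operationally.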

Keywords: orthographic knowledge, transcription skills, word reading, writing

Procedia PDF Downloads 424
542 Using Teachers' Perceptions of Science Outreach Activities to Design an 'Optimum' Model of Science Outreach

Authors: Victoria Brennan, Andrea Mallaburn, Linda Seton

Abstract:

Science outreach programmes connect school pupils with external agencies to provide activities and experiences that enhance their exposure to science. It can be argued that these programmes not only aim to support teachers with curriculum engagement and promote scientific literacy but also provide pivotal opportunities to spark scientific interest in students. A further objective of these programmes is to increase awareness of career opportunities in the field. Although outreach work is often described as a fun and satisfying venture, many researchers express caution about how successful these processes are at increasing post-16 engagement in science. When the impact of outreach programmes is researched, it is often student feedback on the activities, or enrolment numbers in particular post-16 science courses, that is generated and analysed. Although this is informative, the longevity of a programme's impact could be better gauged from teachers' perceptions, evidence of which is far more limited in the literature. In addition, there are strong suggestions that teachers can have an indirect impact on a student's own self-concept. These themes shape the focus and importance of this ongoing research project, whose rationale is that teachers are an under-used resource when it comes to the design of science outreach programmes. The end result of the research will therefore be the presentation of an 'optimum' model of outreach, which should be of interest to wider stakeholders such as universities and private or government organisations that design science outreach programmes in the hope of recruiting future scientists. During phase one, questionnaires (n=52) and interviews (n=8) generated both quantitative and qualitative data.
These have been analysed using the Wilcoxon non-parametric test to compare teachers' perceptions of science outreach interventions, and thematic analysis for the open-ended questions. Both research activities allow a cross-section of teacher opinions on science outreach to be obtained across all educational levels. An early draft of the 'optimum' model of science outreach delivery was therefore generated from both the literature and the primary data. The final (ongoing) phase aims to refine this model using teacher focus groups to provide constructive feedback. The analysis applies principles of modified Grounded Theory so that the focus group data further strengthen the model. This research therefore takes a pragmatist approach, drawing on the strengths of the different paradigms encountered to ensure the data collected provide the most suitable information for an improved model of sustainable outreach. The results discussed focus on this 'optimum' model and on teachers' perceptions of the benefits and drawbacks of engaging with science outreach work. Although the model is still a work in progress, it provides insight both into how teachers feel outreach delivery can be a sustainable intervention tool within the classroom and into what providers of such programmes should consider when designing science outreach activities.
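The Wilcoxon signed-rank comparison named above can be illustrated with `scipy.stats.wilcoxon`. The ratings below are invented paired Likert scores, not the study's data; they only show the shape of the analysis for paired, non-normally distributed perception ratings.

```python
from scipy.stats import wilcoxon

# Hypothetical paired ratings (1-5 Likert) from the same teachers for two
# outreach interventions; values are illustrative only.
before = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4]
after = [4, 4, 3, 5, 4, 5, 3, 3, 4, 4, 5, 5]

# Wilcoxon signed-rank test: a non-parametric alternative to the paired t-test
stat, p = wilcoxon(before, after)
print(f"W = {stat}, p = {p:.3f}")
```

A small p-value would indicate that teachers rated the two interventions differently; the test is appropriate here because ordinal Likert ratings rarely satisfy normality assumptions.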

Keywords: educational partnerships, science education, science outreach, teachers

Procedia PDF Downloads 129
541 Irradion: Portable Small Animal Imaging and Irradiation Unit

Authors: Josef Uher, Jana Boháčová, Richard Kadeřábek

Abstract:

In this paper, we present a multi-robot imaging and irradiation research platform referred to as Irradion, with full capabilities of portable arbitrary-path computed tomography (CT). Irradion is an imaging and irradiation unit entirely based on robotic arms for research on cancer treatment with ion beams on small animals (mice or rats). The platform comprises two subsystems that combine several imaging modalities, such as 2D X-ray imaging, CT, and particle tracking, with precise positioning of a small animal for imaging and irradiation. Computed tomography: The CT subsystem of the Irradion platform is equipped with two 6-joint robotic arms that position a photon-counting detector and an X-ray tube independently and freely around the scanned specimen and allow image acquisition by computed tomography. Irradion covers nearly all conventional 2D and 3D X-ray imaging trajectories with precisely calibrated and repeatable geometrical accuracy, leading to a spatial resolution of up to 50 µm. In addition, the photon-counting detectors allow X-ray photon energy discrimination, which can suppress scattered radiation, thus improving image contrast. They can also measure absorption spectra and recognize different material (tissue) types. X-ray video recording and real-time imaging options can be applied to studies of dynamic processes, including in vivo specimens. Moreover, Irradion opens the door to exploring new 2D and 3D X-ray imaging approaches, and we demonstrate in this publication various novel scan trajectories and their benefits. Proton imaging and particle tracking: The Irradion platform allows several imaging modules to be combined with any required number of robots. The proton tracking module comprises another two robots, each holding particle tracking detectors with position-, energy-, and time-sensitive Timepix3 sensors. Timepix3 detectors can track particles entering and exiting the specimen and allow accurate guiding of photon/ion beams for irradiation.
In addition, quantifying the energy losses before and after the specimen provides essential information for precise irradiation planning and verification. Work on the small-animal research platform Irradion involved advanced software and hardware development that will offer researchers a novel way to investigate new approaches in (i) radiotherapy, (ii) spectral CT, (iii) arbitrary-path CT, and (iv) particle tracking. The robotic platform for imaging and radiation research developed for the project is an entirely new product on the market: preclinical research systems combining precision robotic irradiation with photon/ion beams and multimodal high-resolution imaging do not currently exist. The researched technology could represent a significant leap forward compared with current first-generation devices.

Keywords: arbitrary path CT, robotic CT, modular, multi-robot, small animal imaging

Procedia PDF Downloads 90
540 Anaerobic Digestion of Spent Wash through Biomass Development for Obtaining Biogas

Authors: Sachin B. Patil, Narendra M. Kanhe

Abstract:

A typical cane-molasses-based distillery generates 15 L of wastewater per litre of alcohol produced. Distillery waste, with a COD of over 100,000 mg/L and a BOD of over 30,000 mg/L, ranks high amongst industrial pollutants in both magnitude and strength, and its treatment and safe disposal have long been a challenging task. The high strength of the wastewater renders aerobic treatment very expensive, and physico-chemical processes have met with little success. Thermophilic anaerobic treatment of distillery waste may provide a high degree of treatment and better recovery of biogas. It may prove particularly feasible in most parts of a tropical country like India, where the temperature suits thermophilic micro-organisms. Researchers have revealed that, under thermophilic conditions, higher digestion rates can be achieved owing to the increased destruction rate of organic matter and pathogens. The literature shows that a variety of anaerobic reactors, including anaerobic lagoons, conventional digesters, anaerobic filters, two-stage fixed-film reactors, and sludge-bed and granular-bed reactors, have been studied, but few attempts have been made to evaluate the usefulness of thermophilic anaerobic treatment for distillery waste. The present study was carried out to examine the feasibility of thermophilic anaerobic digestion and to facilitate the design of a full-scale reactor. A pilot-scale anaerobic fixed-film fixed-bed reactor (AFFFB) of 25 m³ capacity was designed, fabricated, installed, and commissioned for thermophilic (55-65 °C) anaerobic digestion at a constant pH of 6.5-7.5, because these temperature and pH ranges are considered optimum for biogas recovery from distillery wastewater. Under these conditions, the working of the reactor was studied for different hydraulic retention times (HRTs, 0.25 to 12 days) and variable organic loading rates (361.46 down to 7.96 kg COD/m³·d).
Parameters such as flow rate and temperature, together with chemical parameters such as pH, chemical oxygen demand (COD), biogas quantity, and biogas composition, were regularly monitored. It was observed that, with increasing OLR, biogas production increased but the specific biogas yield decreased; similarly, with increasing HRT, biogas production decreased but the specific biogas yield increased. This may also be due to acid producers predominating over methane producers at higher substrate loading rates. From the present investigation, it can be concluded that under thermophilic conditions the highest COD removal percentage was obtained at an HRT of 8 days, beyond which it tends to decrease between 8 and 12 days HRT. Since there is little difference between the COD removal efficiency at 8 days HRT (78.06%) and at 5 days HRT (74.03%), it would not be feasible to increase the reactor size by 1.5 times for a mere 4 percent more efficiency. Hence, 5 days HRT is considered optimum, at which the biogas yield was 98 m³/day and the specific biogas yield was 0.385 m³ CH₄/kg of COD removed.
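The quoted OLR and HRT ranges are linked by the standard relation OLR = S0/HRT. The sketch below back-calculates an influent COD (S0) that approximately reproduces the reported endpoints; S0 is an assumption inferred from the abstract's numbers, not a value it states.

```python
# Relation between organic loading rate and hydraulic retention time:
# OLR = S0 / HRT, with influent COD S0 assumed ~90 kg/m^3 (back-calculated;
# the abstract quotes OLRs of 361.46 down to 7.96 kg COD/m^3.d for
# HRTs of 0.25 to 12 days).
S0 = 90.0  # influent COD, kg/m^3 (assumed)

def olr(s0_kg_m3, hrt_days):
    """Organic loading rate in kg COD per m^3 of reactor per day."""
    return s0_kg_m3 / hrt_days

print(round(olr(S0, 0.25), 1))  # -> 360.0, near the quoted 361.46
print(round(olr(S0, 12), 1))    # -> 7.5, near the quoted 7.96
```

The slight mismatch against the quoted values suggests the influent COD varied somewhat between runs, which is typical for real spent wash.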

Keywords: spent wash, anaerobic digestion, biomass, biogas

Procedia PDF Downloads 265
539 The Role of Piceatannol in Counteracting Glyceraldehyde-3-Phosphate Dehydrogenase Aggregation and Nuclear Translocation

Authors: Joanna Gerszon, Aleksandra Rodacka

Abstract:

In the pathogenesis of neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease, protein and peptide aggregation processes play a vital role, contributing to the formation of intracellular and extracellular protein deposits. One of the major components of these deposits is oxidatively modified glyceraldehyde-3-phosphate dehydrogenase (GAPDH). The purpose of this research was therefore to answer whether piceatannol, a stilbene derivative, counteracts and/or slows down oxidative stress-induced GAPDH aggregation. The study also aimed to determine whether this naturally occurring compound prevents unfavorable nuclear translocation of GAPDH in hippocampal cells. Isothermal titration calorimetry (ITC) analysis indicated that one molecule of GAPDH can bind up to 8 molecules of piceatannol (7.3 ± 0.9). As a consequence of piceatannol binding to the enzyme, a loss of activity was observed. In parallel with GAPDH inactivation, changes in zeta potential and a loss of free thiol groups were noted. Nevertheless, the ligand-protein binding does not influence the secondary structure of GAPDH. Precise molecular docking analysis of the interactions inside the active center suggested that these effects are due to the ability of piceatannol to form a covalent bond with the nucleophilic cysteine residue (Cys149) that is directly involved in the catalytic reaction. Molecular docking also showed that up to 11 ligand molecules can be bound to the dehydrogenase simultaneously. Taking the obtained data into consideration, the influence of piceatannol on the level of GAPDH aggregation induced by excessive oxidative stress was examined. The applied methods (thioflavin-T binding-dependent fluorescence as well as microscopy methods: transmission electron microscopy and Congo red staining) revealed that piceatannol significantly diminishes the level of GAPDH aggregation.
Finally, studies involving a cellular model (Western blot analyses of nuclear and cytosolic fractions and confocal microscopy) indicated that piceatannol-GAPDH binding prevents the nuclear translocation of GAPDH induced by excessive oxidative stress in hippocampal cells; in consequence, it counteracts cell apoptosis. These studies demonstrate that by binding to GAPDH, piceatannol blocks the cysteine residue and counteracts the oxidative modifications that induce oligomerization and GAPDH aggregation, and that it protects hippocampal cells from apoptosis by retaining GAPDH in the cytoplasm. All these findings provide new insight into the role of the piceatannol-GAPDH interaction and present a potential therapeutic strategy for some neurological disorders related to GAPDH aggregation. This work was supported by the National Science Centre, Poland (grant number 2017/25/N/NZ1/02849).

Keywords: glyceraldehyde-3-phosphate dehydrogenase, neurodegenerative disease, neuroprotection, piceatannol, protein aggregation

Procedia PDF Downloads 167
538 Ecosystem Modeling along the Western Bay of Bengal

Authors: A. D. Rao, Sachiko Mohanty, R. Gayathri, V. Ranga Rao

Abstract:

Modeling the coupled physical and biogeochemical processes of coastal waters is vital to identifying primary production status under different natural and anthropogenic conditions. About 7,500 km of Indian coastline is occupied by a number of semi-enclosed coastal bodies such as estuaries, inlets, bays, lagoons, and other nearshore and offshore shelf waters, and this coastline is also rich in a wide variety of ecosystem flora and fauna. Extensive domestic and industrial sewage enters these coastal water bodies directly or indirectly, affecting ecosystem character and creating environmental problems such as water quality degradation, hypoxia, anoxia, and harmful algal blooms, which lead to declines in fishery and other biological production. The present study focuses on the southeast coast of India, from Pulicat to the Gulf of Mannar, which is rich in marine diversity, including lagoon, mangrove, and coral ecosystems. The three-dimensional Massachusetts Institute of Technology general circulation model (MITgcm), together with the Darwin biogeochemical module, is configured for the western Bay of Bengal (BoB) to study the biogeochemistry of this region. The biogeochemical module resolves the cycling of carbon, phosphorus, nitrogen, silica, iron, and oxygen through inorganic, living, dissolved, and particulate organic phases. The model domain extends from 4°N-16.5°N and 77°E-86°E with a horizontal resolution of 1 km. The bathymetry is derived from the General Bathymetric Chart of the Oceans (GEBCO), which has a resolution of 30 arc-seconds. The model is initialized with temperature and salinity fields from the World Ocean Atlas (WOA2013) of the National Oceanographic Data Centre, at a resolution of 0.25°. The model is forced by surface wind stress from ASCAT and photosynthetically active radiation from the MODIS-Aqua satellite.
Seasonal climatologies of nutrients (phosphate, nitrate, and silicate) for the southwest BoB region were prepared using available National Institute of Oceanography (NIO) in-situ data sets and compared with the WOA2013 seasonal climatology. Model simulations with the two different initial conditions, WOA2013 and the generated NIO climatology, showed evident changes in the concentration and evolution of nutrients in the study region. Nutrient availability is observed to be higher in the NIO data than in the WOA data over the model domain. The model-simulated primary productivity is compared with spatially distributed satellite-derived chlorophyll data and, at various locations, with in-situ data. The seasonal variability of the model-simulated primary productivity is also studied.
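As a back-of-envelope check on the stated configuration, the domain extent and the 1 km resolution imply a grid of roughly 1400 x 1000 cells. The sketch below uses the standard ~111 km/degree approximation; the exact MITgcm grid dimensions are not stated in the abstract, so these counts are estimates only.

```python
import math

# Estimate grid dimensions for the stated domain (4N-16.5N, 77E-86E)
# at ~1 km horizontal resolution; 111 km per degree is a standard
# spherical-Earth approximation.
KM_PER_DEG = 111.0
lat_extent = 16.5 - 4.0            # degrees of latitude
lon_extent = 86.0 - 77.0           # degrees of longitude
mean_lat = (16.5 + 4.0) / 2.0      # longitude spacing shrinks with latitude

ny = lat_extent * KM_PER_DEG                                      # north-south cells
nx = lon_extent * KM_PER_DEG * math.cos(math.radians(mean_lat))   # east-west cells
print(int(ny), int(nx))
```

Roughly 1.4 million horizontal cells is consistent with a regional configuration that resolves lagoon- and reef-scale features along this coast.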

Keywords: Bay of Bengal, Massachusetts Institute of Technology general circulation model, MITgcm, biogeochemistry, primary productivity

Procedia PDF Downloads 141
537 Integration of a Protective Film to Enhance the Longevity and Performance of Miniaturized Ion Sensors

Authors: Antonio Ruiz Gonzalez, Kwang-Leong Choy

Abstract:

The measurement of electrolytes has high value in the clinical routine. Ions are present in all body fluids at variable concentrations and are involved in multiple pathologies such as heart failure and chronic kidney disease. In the case of dissolved potassium, although a high concentration in the blood (hyperkalemia) is relatively uncommon in the general population, it is one of the most frequent acute electrolyte abnormalities. In recent years, the integration of thin-film technologies in this field has allowed the development of highly sensitive biosensors with ultra-low limits of detection for the assessment of metals in liquid samples. However, despite current efforts in the miniaturization of sensitive devices and their integration into portable systems, only a limited number of commercially successful examples can be found. This can be attributed to the high cost involved in their production and to the sustained degradation of the electrodes over time, which causes signal drift in the measurements. Thus, there is an unmet need for low-cost, robust sensors for real-time monitoring of analyte concentrations in patients, allowing early detection and diagnosis of disease. This paper reports a thin-film ion-selective sensor for the evaluation of potassium ions in aqueous samples. Aerosol-assisted chemical vapor deposition (AACVD) was applied as the fabrication method because of its cost-effectiveness and fine control over film deposition. The technique does not require vacuum and is suitable for coating large surface areas and structures with complex geometries. This approach allowed the fabrication of highly homogeneous surfaces with well-defined microstructures on 50 nm thin gold layers.
The degradation processes of the ubiquitously employed poly(vinyl chloride) membranes in contact with an electrolyte solution were studied, including polymer leaching, mechanical desorption of nanoparticles, and chemical degradation over time. Rational design of a protective coating based on an organosilicon material in combination with cellulose was then carried out to improve the long-term stability of the sensors, showing an improvement in performance after 5 weeks. The antifouling properties of the coating were assessed using a quartz crystal microbalance sensor, allowing quantification of the adsorbed proteins in the nanogram range. A correlation between the microstructural properties of the films, the surface energy, and biomolecule adhesion was then found and used to optimize the protective film.
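Nanogram-range protein quantification with a quartz crystal microbalance typically follows the Sauerbrey relation, which converts a resonance frequency shift into an areal mass. The crystal parameters below (5 MHz AT-cut sensitivity, electrode area) are assumptions for illustration; the abstract does not specify them.

```python
# Sauerbrey estimate of adsorbed mass from a QCM frequency shift.
# A standard 5 MHz AT-cut crystal is assumed (mass sensitivity
# C ~ 17.7 ng/cm^2 per Hz); the actual crystal used is not stated.
C_SENS = 17.7      # ng per cm^2 per Hz (assumed, 5 MHz crystal)
AREA_CM2 = 0.785   # active electrode area in cm^2 (assumed)

def adsorbed_mass_ng(delta_f_hz):
    """Mass adsorbed (ng) for a measured frequency decrease delta_f_hz (Hz)."""
    return C_SENS * delta_f_hz * AREA_CM2

# Example: a 10 Hz frequency decrease
print(round(adsorbed_mass_ng(10.0), 1))
```

This illustrates why QCM resolves protein fouling at the nanogram scale: even single-digit hertz shifts correspond to tens of nanograms over a typical electrode.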

Keywords: hyperkalemia, drift, AACVD, organosilicon

Procedia PDF Downloads 123
536 Studies of the Reaction Products Resulted from Glycerol Electrochemical Conversion under Galvanostatic Mode

Authors: Ching Shya Lee, Mohamed Kheireddine Aroua, Wan Mohd Ashri Wan Daud, Patrick Cognet, Yolande Peres, Mohammed Ajeel

Abstract:

In recent years, with the decreasing supply of fossil fuels, demand for renewable energy has grown significantly. Biodiesel, a vegetable-oil-based fatty acid methyl ester, is an alternative fuel for diesel engines. It can be produced by transesterification of vegetable oils, such as palm oil, sunflower oil, or rapeseed oil, with methanol. During the transesterification process, crude glycerol is formed as a by-product, amounting to about 10 wt% of total biodiesel production. To date, owing to the rapid worldwide growth of biodiesel production, the supply of crude glycerol has also increased rapidly, resulting in a significant drop in the price of glycerol. Extensive research has therefore been devoted to using glycerol as a feedstock to produce various added-value chemicals, such as tartronic acid, mesoxalic acid, glycolic acid, glyceric acid, propanediol, and acrolein. The industrial processes usually involved are selective oxidation, biofermentation, esterification, and hydrolysis. However, the conversion of glycerol into added-value compounds by electrochemical approaches is rarely discussed: current work mainly focuses on the electro-oxidation of glycerol under potentiostatic mode for cogenerating energy with other chemicals, and electro-organic synthesis from glycerol under galvanostatic mode is seldom reviewed. In this study, glycerol was converted into various added-value compounds by an electrochemical method under galvanostatic mode. This work aimed to study the possible compounds produced from glycerol by an electrochemical technique in a one-pot electrolysis cell. The electro-organic synthesis study was carried out in a single-compartment reactor for 8 hours, over platinum cathode and anode electrodes under acidic conditions. Various parameters such as electric current (1.0 A to 3.0 A) and reaction temperature (27 °C to 80 °C) were evaluated.
The products obtained were characterized using gas chromatography-mass spectrometry equipped with an aqueous-stable polyethylene glycol stationary-phase column. Under the optimized reaction conditions, the glycerol conversion reached as high as 95%. The glycerol was successfully converted into various added-value chemicals: ethylene glycol, glycolic acid, glyceric acid, acetaldehyde, formic acid, and glyceraldehyde, with yields of 1%, 45%, 27%, 4%, 0.7%, and 5%, respectively. Based on the products obtained, a reaction mechanism for the process is proposed. In conclusion, this study has successfully converted glycerol into a wide variety of added-value compounds. These chemicals have high market value and can be used in the pharmaceutical, food, and cosmetic industries. This study effectively opens a new approach to the electrochemical conversion of glycerol. For further enhancement of product selectivity, the electrode material is an important parameter to consider.
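The reported yields and the 95% conversion imply product selectivities (selectivity = yield/conversion) and a carbon balance that can be checked by simple arithmetic, as sketched below using only the numbers quoted above.

```python
# Product selectivities implied by the reported yields at 95% glycerol
# conversion (selectivity = yield / conversion), plus a balance check on
# how much of the converted glycerol the identified products account for.
conversion = 0.95
yields = {
    "ethylene glycol": 0.01,
    "glycolic acid": 0.45,
    "glyceric acid": 0.27,
    "acetaldehyde": 0.04,
    "formic acid": 0.007,
    "glyceraldehyde": 0.05,
}

selectivity = {name: y / conversion for name, y in yields.items()}
accounted = sum(yields.values())  # fraction of the feed identified as products

print(f"identified yield: {accounted:.1%} of feed")
print(f"glycolic acid selectivity: {selectivity['glycolic acid']:.1%}")
```

The identified products sum to about 82.7% of the feed, leaving roughly 12% of the converted glycerol unaccounted for, which is consistent with minor products and gaseous losses in a one-pot electrolysis.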

Keywords: biodiesel, glycerol, electrochemical conversion, galvanostatic mode

Procedia PDF Downloads 193
535 Fractional, Component and Morphological Composition of Ambient Air Dust in the Areas of Mining Industry

Authors: S.V. Kleyn, S.Yu. Zagorodnov, А.А. Kokoulina

Abstract:

Technogenic emissions from the mining and processing complex are characterized by a high content of chemical components and solid dust particles. However, each industrial enterprise and its surrounding area have features that require refinement and parameterization. Numerous studies have shown the negative impact of fine dust (PM10 and PM2.5) on health, as well as the possibility of the absorption of toxic components, including heavy metals, by dust particles. The aim of the study was the quantitative assessment of the fractional and particle-size composition of ambient air dust in the area impacted by a primary magnesium production complex, together with a description of the morphological features of the dust particles. Study methods: To identify the dust emission sources, an analysis of the production process was carried out. The particulate composition of the emissions was measured using a Microtrac S3500 laser particle analyzer (covering a particle-size range of 20 nm to 2000 µm). Particle morphology and component composition were established by high-resolution scanning electron microscopy (magnification 5 to 300,000 times) with a Hitachi S3400N X-ray fluorescence device. The chemical composition was identified by X-ray analysis of the samples using a Shimadzu XRD-700 X-ray diffractometer. The level of dust pollution was determined using model calculations of the dispersion of emissions in the atmosphere; the calculations were verified by instrumental studies. Results of the study: The results demonstrated that the dust emissions of different technical processes are heterogeneous and their fractional structure is complicated. The percentage of particle sizes up to 2.5 µm inclusive ranged from 0.00 to 56.70%; particle sizes up to 10 µm inclusive, from 0.00 to 85.60%; and particle sizes greater than 10 µm, from 14.40% to 100.00%. During microscopy, the presence of nanoscale particles was detected.
The studied dust particles have round, irregular, cubic, and integral shapes. The composition of the dust includes magnesium, sodium, potassium, calcium, iron, and chlorine. On the basis of the obtained results, model calculations of the dispersion of dust emissions were performed and the areas of distribution of fine dust PM10 and PM2.5 were established. It was found that emissions of the fine fractions PM10 and PM2.5 are dispersed over large distances, beyond the boundary of the industrial site of the enterprise. The population living near the enterprise is exposed to the risk of diseases associated with dust exposure. The data were transferred to the economic entity so that decisions could be made on measures to minimize the risks. Health exposure and risk indicators are used to provide targeted medical and preventive care to citizens living in the area of negative impact of the facility.
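The PM2.5 and PM10 percentages quoted above are cumulative fractions of a particle-size distribution below a size cutoff. The sketch below shows that computation on a binned distribution; the bin edges and mass fractions are invented for illustration and are not the study's measurements.

```python
import bisect

# Cumulative mass fractions below the PM2.5 and PM10 cutoffs, computed from
# a binned particle-size distribution (illustrative data only).
bin_edges_um = [0.5, 1.0, 2.5, 5.0, 10.0, 20.0, 50.0]      # upper edge of each bin
mass_fraction = [0.05, 0.10, 0.20, 0.15, 0.25, 0.15, 0.10]  # sums to 1.0

def fraction_below(cutoff_um):
    """Cumulative mass fraction of particles at or below cutoff_um."""
    idx = bisect.bisect_right(bin_edges_um, cutoff_um)
    return sum(mass_fraction[:idx])

pm25 = fraction_below(2.5)   # sums the first three bins
pm10 = fraction_below(10.0)  # adds the 5 and 10 um bins
print(pm25, pm10)
```

Per-source distributions of this kind are what feed the dispersion model: each emission source gets its own fractional breakdown before the atmospheric transport calculation.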

Keywords: dust emissions, exposure assessment, PM10, PM2.5

Procedia PDF Downloads 261
534 Electrical Degradation of GaN-based p-channel HFETs Under Dynamic Electrical Stress

Authors: Xuerui Niu, Bolin Wang, Xinchuang Zhang, Xiaohua Ma, Bin Hou, Ling Yang

Abstract:

The application of discrete GaN-based power switches requires the collaboration of silicon-based peripheral circuit structures. However, the packages and interconnections between the Si and GaN devices can introduce parasitic effects into the circuit, which have a great impact on GaN power transistors. GaN-based monolithic power integration technology is an emerging solution that can improve the stability of circuits and allow GaN-based devices to achieve more functions. Complementary logic circuits consisting of GaN-based E-mode p-channel heterostructure field-effect transistors (p-HFETs) and E-mode n-channel HEMTs can serve as gate drivers. E-mode p-HFETs with recessed gates have attracted increasing interest because of their low leakage current and large gate swing. However, they suffer from a poor interface between the gate dielectric and the polarized nitride layers. The reliability of p-HFETs is analyzed and discussed in this work. In circuit applications, the inverter is always operated with a dynamic gate voltage (VGS) rather than a constant VGS. Therefore, dynamic electrical stress has been simulated to resemble the operating conditions of E-mode p-HFETs. The dynamic electrical stress condition is as follows: VGS is a square waveform switching from -5 V to 0 V, VDS is fixed, and the source is grounded. The frequency of the square waveform is 100 kHz, with a rising/falling time of 100 ns and a duty ratio of 50%. The effective stress time is 1000 s. A number of stress tests were carried out, with the stress briefly interrupted to measure the linear and saturation IDS-VGS characteristics. When VGS switches from -5 V to 0 V and VDS = 0 V, devices are under the negative-bias-instability (NBI) condition. Holes are trapped at the interface of the oxide layer and the GaN channel layer, which results in a reduction of VTH. The negative shift of VTH is pronounced during the first 10 s and then changes only slightly over the remaining stress time.
However, a different phenomenon is observed when VDS is reduced to -5 V. VTH shifts negatively under this stress condition, and the variation in VTH increases with time, unlike the case of VDS = 0 V. Two mechanisms exist in this condition. On the one hand, the electric field in the gate region is influenced by the drain voltage, so the trapping behavior of holes in the gate region changes and the impact of the gate voltage is weakened. On the other hand, a large drain voltage can induce hot-hole generation and lead to serious hot-carrier stress (HCS) degradation over time. The poor-quality interface between the oxide layer and the GaN channel layer in the gate region is the major contributor to the high density of interface traps, which greatly influences device reliability. These results emphasize that improved etching and pretreatment processes need to be developed so that high-performance GaN complementary logic circuits with enhanced stability can be achieved.
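The stress waveform described above (a -5 V to 0 V square wave at 100 kHz with 100 ns edges and 50% duty ratio) can be written as a piecewise trapezoidal function; a minimal sketch with the abstract's parameters as defaults (the edge-handling convention is an assumption):

```python
def vgs_at(t_s, v_low=-5.0, v_high=0.0, freq_hz=1e5, duty=0.5, t_edge=1e-7):
    """Gate voltage of the trapezoidal stress waveform at time t_s (seconds).
    One period: rising edge (t_edge) -> high plateau -> falling edge -> low plateau.
    The plateau durations are shortened by the edges so the duty ratio holds."""
    period = 1.0 / freq_hz
    t = t_s % period
    t_high = duty * period - t_edge
    if t < t_edge:                                 # rising edge: v_low -> v_high
        return v_low + (v_high - v_low) * t / t_edge
    t -= t_edge
    if t < t_high:                                 # high plateau
        return v_high
    t -= t_high
    if t < t_edge:                                 # falling edge: v_high -> v_low
        return v_high - (v_high - v_low) * t / t_edge
    return v_low                                   # low plateau
```

Sampling this function over the 1000 s effective stress time would reproduce the gate drive used in the dynamic stress tests.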

Keywords: GaN-based E-mode p-HFETs, dynamic electric stress, threshold voltage, monolithic power integration technology

Procedia PDF Downloads 93
533 The Influence of Active Breaks on the Attention/Concentration Performance in Eighth-Graders

Authors: Christian Andrä, Luisa Zimmermann, Christina Müller

Abstract:

Introduction: The positive relation between physical activity and cognition is commonly known. Relevant studies show that, in everyday school life, active breaks can lead to improvements in certain abilities (e.g., attention and concentration). A beneficial effect is attributed in particular to moderate activity. It is still unclear whether active breaks are beneficial after relatively short phases of cognitive load and whether the postulated effects of activity really have an immediate impact. The objective of this study was to verify whether an active break after 18 minutes of cognitive load leads to enhanced attention/concentration performance, compared to inactive breaks with voluntary mobile phone activity. Methodology: For this quasi-experimental study, 36 students [age: 14.0 (mean value) ± 0.3 (standard deviation); male/female: 21/15] of a secondary school were tested. In week 1, every student's maximum heart rate (Hfmax) was determined through maximum effort tests conducted during physical education classes. The task was to run 3 laps of 300 m with increasing subjective effort (lap 1: 60%, lap 2: 80%, lap 3: 100% of maximum performance capacity). Furthermore, the first attention/concentration tests (D2-R) took place (pretest). The groups were matched on the basis of the pretest results. During weeks 2 and 3, crossover testing was conducted, comprising 18 minutes of cognitive preload (test for concentration performance, KLT-R), a break, and an attention/concentration test after a 2-minute transition. Different 10-minute breaks (active break: moderate physical activity at 65% Hfmax; inactive break: mobile phone activity) took place between preloading and transition. Major findings: In general, there was no impact of the different break interventions on the concentration test results (symbols processed after physical activity: 185.2 ± 31.3 / after inactive break: 184.4 ± 31.6; errors after physical activity: 5.7 ± 6.3 / after inactive break: 7.0 ± 7.2).
There was, however, a noticeable development of the values over the testing periods. Although no difference in the number of processed symbols was detected (active/inactive break: period 1: 49.3 ± 8.8/46.9 ± 9.0; period 2: 47.0 ± 7.7/47.3 ± 8.4; period 3: 45.1 ± 8.3/45.6 ± 8.0; period 4: 43.8 ± 7.8/44.6 ± 8.0), error rates decreased successively after physical activity and increased gradually after an inactive break (active/inactive break: period 1: 1.9 ± 2.4/1.2 ± 1.4; period 2: 1.7 ± 1.8/1.5 ± 2.0; period 3: 1.2 ± 1.6/1.8 ± 2.1; period 4: 0.9 ± 1.5/2.5 ± 2.6; p = .012). Conclusion: Considering only the study's overall results, the hypothesis must be rejected. However, a more differentiated evaluation shows that error rates decreased after active breaks and increased after inactive breaks. Apparently, the effects of the active intervention occur with a delay. The 2-minute transition (regeneration time) used in this study seems insufficient, given the longer adaptation time of the cardiovascular system in untrained individuals, which might initially affect concentration capacity. To use the positive effects of physical activity for teaching and learning processes, physiological characteristics must also be considered. Only this will ensure optimum ability to perform.
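The opposing error-rate trends across the four periods can be quantified with a simple least-squares slope over the period means reported above; an illustrative sketch (the slope computation is ours, not the study's analysis):

```python
def slope(ys):
    """Ordinary least-squares slope of ys against periods 1..len(ys)."""
    n = len(ys)
    xs = range(1, n + 1)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Mean error counts per test period, taken from the abstract
active = [1.9, 1.7, 1.2, 0.9]     # after the active break
inactive = [1.2, 1.5, 1.8, 2.5]   # after the inactive break
# slope(active) is negative, slope(inactive) is positive:
# errors fall over time after activity and rise after inactivity.
```

The opposite signs of the two slopes express the delayed benefit of the active break described in the conclusion.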

Keywords: active breaks, attention/concentration test, cognitive performance capacity, heart rate, physical activity

Procedia PDF Downloads 315
532 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration

Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu

Abstract:

Petroleum refineries are highly complex process plants with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages before the final product is obtained. To meet the desired product specifications, process parameters are strictly controlled. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards, such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut points of distillates in the crude distillation unit are crucial for the efficiency of the downstream processes. To maximize process efficiency, the determination of distillate quality should be as fast, reliable, and cost-effective as possible. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are classified after separation by the number of carbon atoms they contain: LSRN consists of hydrocarbons containing five to six carbon atoms, HSRN of hydrocarbons containing six to ten, and kerosene of hydrocarbons containing sixteen to twenty-two. The physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using near-infrared spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each product over almost four years.
Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression (PLS) and Genetic Inverse Least Squares (GILS)) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each combination of multivariate calibration and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
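The Savitzky-Golay preprocessing step mentioned above is a moving polynomial smoother; a minimal dependency-free sketch using the classic 5-point quadratic kernel (coefficients [-3, 12, 17, 12, -3]/35), with edges left untouched for simplicity (the window and polynomial order used in the study are not reported, so these are assumptions):

```python
# Classic 5-point quadratic/cubic Savitzky-Golay smoothing coefficients
SG5 = [-3.0, 12.0, 17.0, 12.0, -3.0]

def savgol5(y):
    """Smooth a spectrum (list of absorbances) with a 5-point SG filter.
    Interior points are replaced by the weighted local polynomial fit;
    the two points at each edge are kept as-is."""
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(c * y[i + k - 2] for k, c in enumerate(SG5)) / 35.0
    return out
```

Because the kernel fits a quadratic exactly, smoothing a noise-free quadratic leaves its interior points unchanged, which is a convenient sanity check; in practice `scipy.signal.savgol_filter` would be used, followed by PLS regression on the preprocessed spectra.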

Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery

Procedia PDF Downloads 131
531 A Cloud-Based Federated Identity Management in Europe

Authors: Jesus Carretero, Mario Vasile, Guillermo Izquierdo, Javier Garcia-Blas

Abstract:

Currently, there is a so-called 'identity crisis' in cybersecurity, caused by the substantial security, privacy, and usability shortcomings of existing systems for identity management. Federated Identity Management (FIM) could be a solution to this crisis, as it is a method that facilitates the management of identity processes and policies among collaborating entities without enforcing global consistency, which is difficult to achieve when legacy ID systems are involved. To cope with this problem, the Connecting Europe Facility (CEF) initiative proposed a federated solution in 2014, in anticipation of the adoption of Regulation (EU) No 910/2014, the so-called eIDAS Regulation. At present, a network of eIDAS Nodes is being deployed at the European level so that every citizen recognized by a member state is also recognized within the trust network at the European level, enabling the consumption of services in other member states that until now were not available, or whose granting was tedious. This is a very ambitious approach, since it enables cross-border authentication of member state citizens without the need to unify the authentication method (eID Scheme) of the member state in question. However, this federation is currently managed by member states and is initially applied only to citizens and public organizations. The goal of this paper is to present the results of a European project, named eID@Cloud, that focuses on the integration of eID in 5 cloud platforms belonging to authentication service providers of different EU member states, to act as Service Providers (SP) for private entities. We propose an initiative based on a private eID Scheme for both natural and legal persons. The methodology followed in the eID@Cloud project is that each Identity Provider (IdP) subscribes to an eIDAS Node Connector, which requests authentication and is in turn subscribed to an eIDAS Node Proxy Service, which issues authentication assertions.
To cope with high loads, load balancing is supported in the eIDAS Nodes. The eID@Cloud project is still ongoing, but we already have some important outcomes. First, we have deployed the federated identity nodes and tested them from the security and performance points of view. The pilot prototype has shown the feasibility of deploying this kind of system, ensuring good performance thanks to the replication of the eIDAS nodes and the load-balancing mechanism. Second, our solution avoids the propagation of identity data outside the native domain of the user or entity being identified, which avoids well-known cybersecurity problems due to network interception, man-in-the-middle attacks, etc. Last but not least, this system allows any country or collectivity to connect easily, providing incremental development of the network and avoiding difficult political negotiations to agree on a single authentication format (which would be a major blocker).

Keywords: cybersecurity, identity federation, trust, user authentication

Procedia PDF Downloads 166
530 Zeolite 4A-confined Ni-Co Nanocluster: An Efficient and Durable Electrocatalyst for Alkaline Methanol Oxidation Reaction

Authors: Sarmistha Baruah, Akshai Kumar, Nageswara Rao Peela

Abstract:

The global energy crisis, caused by dependence on fossil fuels with limited reserves, together with environmental pollution, is a key concern for the research community. The implementation of alcohol-based fuel cells, such as those running on methanol, is anticipated as a reliable future energy technology due to their high energy density, environmental friendliness, and ease of storage and transportation. To drive the anodic methanol oxidation reaction (MOR) in direct methanol fuel cells (DMFCs), an active and long-lasting catalyst is necessary for efficient energy conversion from methanol. Recently, transition metal-zeolite-based materials have been considered versatile catalysts for a variety of industrial and lab-scale processes. A large specific surface area, well-organized micropores, and adjustable acidity/basicity are characteristics of zeolites that make them excellent supports for immobilizing small, highly dispersed metal species. Significant advancement in the production and characterization of well-defined metal clusters encapsulated within a zeolite matrix has substantially expanded the library of available materials and, consequently, their catalytic efficacy. In this context, we developed bimetallic Ni-Co catalysts encapsulated within LTA (also known as 4A) zeolite via in-situ encapsulation of metal species during hydrothermal treatment, followed by a chemical reduction process. The prepared catalyst was characterized using advanced techniques such as X-ray diffraction (XRD), field emission transmission electron microscopy (FETEM), field emission scanning electron microscopy (FESEM), energy dispersive X-ray spectroscopy (EDX), and X-ray photoelectron spectroscopy (XPS). The electrocatalytic activity of the catalyst for the MOR was evaluated in an alkaline medium at room temperature using techniques such as cyclic voltammetry (CV) and chronoamperometry (CA).
The resulting catalyst exhibited a catalytic activity of 12.1 mA cm⁻² at 1.12 V vs. Ag/AgCl and retained remarkable stability (~77%) even after a 1000-cycle CV test for the electro-oxidation of methanol in alkaline media, without any significant microstructural changes. The high surface area, the effective integration of Ni-Co species in the zeolite, and the ample amount of surface hydroxyl groups provide highly dispersed active sites and quick analyte diffusion, which account for the notable MOR kinetics. Thus, this study opens up new possibilities for developing noble-metal-free zeolite-based electrocatalysts, owing to the simple synthesis steps, scope for large-scale fabrication, improved stability, and efficient activity for DMFC applications.

Keywords: alkaline media, bimetallic, encapsulation, methanol oxidation reaction, LTA zeolite

Procedia PDF Downloads 65
529 The Environmental Concerns in Coal Mining and Utilization in Pakistan

Authors: S. R. H. Baqri, T. Shahina, M. T. Hasan

Abstract:

Pakistan is facing an acute shortage of energy and is looking for indigenous resources for the energy mix to meet the shortfall. After the discovery of huge coal resources in the Thar Desert of Sindh province, focus has shifted to coal power generation. The government of Pakistan has planned 20,000 MW of coal-based power generation by the year 2025. This target will be achieved through mining and power generation in the Thar coalfield and through imported coal in different parts of Pakistan. The total indigenous coal production of around 3.0 million tons is being utilized in brick kilns and in the cement and sugar industries. Coal-based power generation is limited to three 50 MW units near Hyderabad, supplied by the nearby Lakhra coalfield. The purpose of this presentation is to identify and redress issues of coal mining and utilization with reference to environmental hazards. The Thar coal resource is estimated at 175 billion tons, out of a total resource estimate of 184 billion tons in Pakistan. Pakistan's coal is of Tertiary age (Palaeocene/Eocene) and is classified as lignite to sub-bituminous. Coal characterization has established three main pollutants, namely sulphur, carbon dioxide, and methane, besides some others associated with the coal and rock types. Sulphur occurs in coal in organic form as free sulphur and in inorganic form as pyrite and gypsum. Carbon dioxide, methane, and minerals are mostly associated with fractures, joints, local faults, seatearth, and roof rocks. Abandoned and working coal mines give off a kerosene odour due to the escape of methane into the atmosphere, while frozen methane (methane ices) in organic-matter-rich sediments has also been reported from the Makran coastal and offshore areas. Sulphur escapes into the atmosphere during the mining and industrial utilization of coal. Natural erosional processes driven by rivers, streams, lakes, and coastal waves erode the overlying sediments, allowing pollutants to escape into air and water.
Power plant emissions should be controlled through the application of appropriate clean coal technology and need to be regularly monitored. Systematic and scientific studies will therefore be required to estimate the quantities of methane, carbon dioxide, and sulphur at various sites, such as abandoned and working coal mines and exploratory wells for coal, oil, and gas. Pressure gauges on gas pipes connected to the coal-bearing horizons will be installed at the surface to determine the quantity of gas. The quality and quantity of gases will be examined at defined time intervals. This will help to design and recommend methods and procedures to stop the escape of gases into the atmosphere. Sulphur can be partially removed by gravity and chemical methods after grinding and before the industrial utilization of coal.

Keywords: atmosphere, coal production, energy, pollutants

Procedia PDF Downloads 436
528 Analysing the Stability of the Electrical Grid for Increased Renewable Energy Penetration by Focussing on Li-Ion Battery Storage Technology

Authors: Hemendra Singh Rathod

Abstract:

Frequency is, among other factors, one of the governing parameters for maintaining electrical grid stability. The quality of an electrical transmission and supply system is mainly described by the stability of the grid frequency. Over the past few decades, energy generation from intermittent sustainable sources like wind and solar has increased significantly worldwide. Consequently, controlling the associated deviations in grid frequency within safe limits has been gaining momentum so that the balance between demand and supply can be maintained. The lithium-ion battery energy storage system (Li-ion BESS) has been a promising technology for tackling the challenges associated with grid instability. BESS is, therefore, an effective response to the ongoing debate on whether it is feasible to have an electrical grid constantly functioning on one hundred percent renewable power in the near future. In recent years, large-scale manufacturing and capital investment in battery production processes have made Li-ion battery systems cost-effective and increasingly efficient. Li-ion systems require very little maintenance, are independent of geographical constraints, and are easily scalable. The paper highlights the use of stationary and moving BESS for balancing electrical energy, thereby maintaining grid frequency with a rapid response. Moving BESS technology, as implemented in the selected railway network in Germany, is considered here as an exemplary concept for demonstrating the same functionality in the electrical grid system. Further, applications of Li-ion batteries such as self-consumption of wind and solar parks and their ancillary services, storage of wind and solar energy during low demand, black start, island operation, and residential home storage offer a solution to effectively integrate renewables and support Europe's future smart grid.
The EMT software tool DIgSILENT PowerFactory has been utilised to model an electrical transmission system with 100% renewable energy penetration. The stability of such a transmission system, together with BESS, has been evaluated within a defined frequency band. The transmission system operators (TSOs) have the superordinate responsibility for system stability and must also coordinate with the other European transmission system operators. Frequency control is implemented by the TSOs by maintaining a balance between electricity generation and consumption. Li-ion battery systems are seen here as flexible, controllable loads and as flexible, controllable generation for balancing energy pools. Using a Li-ion battery storage solution, frequency-dependent load shedding, i.e., the automatic gradual disconnection of loads from the grid, and frequency-dependent electricity generation, i.e., the automatic gradual connection of BESS to the grid, serve as a robust security measure to maintain grid stability in any scenario. The paper emphasizes the use of stationary and moving Li-ion battery storage for meeting the demands of maintaining grid frequency and stability in near-future operations.
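The frequency-dependent connection of BESS described above amounts to a droop controller with a dead band; a minimal sketch in which all parameter values (nominal frequency, dead band, droop range, rated power) are illustrative assumptions, not values from the modelled system:

```python
def bess_power_setpoint(freq_hz, p_max_mw=10.0, f_nom=50.0,
                        dead_band=0.01, droop=0.2):
    """Active-power setpoint of the BESS in MW (positive = discharge to grid).
    Inside the dead band the battery idles; outside it, the setpoint ramps
    linearly with the frequency deviation and saturates at +/- p_max_mw
    once the deviation reaches `droop` Hz."""
    df = freq_hz - f_nom
    if abs(df) <= dead_band:
        return 0.0
    # Low frequency (df < 0) -> discharge (support the grid);
    # high frequency (df > 0) -> charge (absorb surplus generation).
    excess = abs(df) - dead_band
    p = min(1.0, excess / (droop - dead_band)) * p_max_mw
    return p if df < 0 else -p
```

Gradual load shedding can be modelled the same way, with load blocks disconnected in steps as the deviation grows instead of power being ramped continuously.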

Keywords: frequency control, grid stability, li-ion battery storage, smart grid

Procedia PDF Downloads 150
527 Comparison of GIS-Based Soil Erosion Susceptibility Models Using Support Vector Machine, Binary Logistic Regression and Artificial Neural Network in the Southwest Amazon Region

Authors: Elaine Lima Da Fonseca, Eliomar Pereira Da Silva Filho

Abstract:

The modeling of areas susceptible to soil loss by hydro-erosive processes is a simplified representation of reality intended to predict future behavior from the observation and interaction of a set of geoenvironmental factors. Models of potential soil-loss areas will be obtained through binary logistic regression, artificial neural networks, and support vector machines. The choice of the municipality of Colorado do Oeste, in the south of the western Amazon, is due to soil degradation caused by anthropogenic activities such as agriculture, road construction, overgrazing, and deforestation, as well as its environmental and socioeconomic configuration. Initially, a soil erosion inventory map will be constructed through various field investigations, including the use of remotely piloted aircraft, orbital imagery, and the PLANAFLORO/RO database. 100 sampling units with the presence of erosion will be selected based on the assumptions indicated in the literature, and, to complement the dichotomous analysis, 100 units with no erosion will be randomly designated. The next step will be the selection of the predictive parameters that, jointly, directly, or indirectly, influence the mechanism of occurrence of soil erosion events. The chosen predictors are altitude, slope, aspect (orientation of the slope), slope curvature, compound topographic index, stream power index, lineament density, normalized difference vegetation index, drainage density, lithology, soil type, erosivity, and ground surface temperature. After evaluating the relative contribution of each predictor variable, the erosion susceptibility model will be applied to the municipality of Colorado do Oeste, Rondônia, using the SPSS Statistics 26 software.
The model will be evaluated by determining the Cox & Snell R² and Nagelkerke R² values, the Hosmer-Lemeshow test, the log-likelihood value, and the Wald test, in addition to analysis of the confusion matrix, the ROC curve, and the cumulative gain chart, according to the model specification. The synthesis map of potential soil erosion risk resulting from the models will be validated by means of Kappa indices, accuracy, and sensitivity, as well as by field verification of the erosion susceptibility classes using drone photogrammetry. Thus, it is expected to obtain a map of the following classes of susceptibility to erosion: very low, low, moderate, high, and very high, which may constitute a screening tool to identify areas where more detailed investigations need to be carried out, allowing social resources to be applied more efficiently.
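The Cox & Snell and Nagelkerke pseudo-R² values that SPSS reports for a binary logistic model are simple functions of the null and fitted log-likelihoods; a minimal sketch of the formulas (the sample values in the usage note are illustrative):

```python
import math

def cox_snell_r2(ll_null, ll_model, n):
    """Cox & Snell pseudo-R^2 from the null and fitted log-likelihoods
    of a binary logistic regression on n observations."""
    return 1.0 - math.exp((2.0 / n) * (ll_null - ll_model))

def nagelkerke_r2(ll_null, ll_model, n):
    """Nagelkerke's rescaling of Cox & Snell, which divides by the
    maximum attainable value so that a perfect model scores 1."""
    max_r2 = 1.0 - math.exp((2.0 / n) * ll_null)
    return cox_snell_r2(ll_null, ll_model, n) / max_r2
```

For the balanced design planned here (100 erosion and 100 non-erosion units), the null log-likelihood is 200·ln(0.5), so a perfectly discriminating model would give a Cox & Snell R² of 0.75 but a Nagelkerke R² of exactly 1, which is why both are usually reported together.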

Keywords: modeling, susceptibility to erosion, artificial intelligence, Amazon

Procedia PDF Downloads 66
526 Foreseeing the Future: Human Factors Integration in European Horizon Projects

Authors: José Manuel Palma, Paula Pereira, Margarida Tomás

Abstract:

The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics, and intelligent machinery must integrate human factors to address the need to optimize systems and processes, thereby contributing to the creation of a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge technology for Industry 4.0 and 5.0. AGILEHAND aims to create advanced technologies for autonomously sorting, handling, and packaging soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. The two projects employ different approaches to HFI. AGILEHAND is mainly empirical, involving a comparison between current and future working conditions, coupled with an understanding of best practices and the enhancement of safety aspects, primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND, we will identify emergent safety risks and challenges, their causes, and ways to overcome them through interviews, questionnaires, literature review, and case studies. Findings and results will be presented in the handbook “Strategies for Workers’ Skills Development, Health and Safety, Communication and Engagement”.
The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on the early detection and elimination (or mitigation) of risks associated with the new technology, as well as guidance on correctly adhering to European Union safety and privacy regulations, ensuring HFI and thereby contributing to an optimized, safe work environment. To achieve this, we will embed safety by design, apply questionnaires, perform site visits, provide risk assessments, and closely track progress while recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled “Human Safety and Privacy Measures”. These projects received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation programme under grant agreements No 101092043 (AGILEHAND) and No 101135707 (FORTIS).

Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0

Procedia PDF Downloads 65
525 A Geographical Information System Supported Method for Determining Urban Transformation Areas in the Scope of Disaster Risks in Kocaeli

Authors: Tayfun Salihoğlu

Abstract:

With Law No. 6306 on the Transformation of Areas under Disaster Risk, urban transformation in Turkey found its legal basis. In best practices all over the world, urban transformation has been shaped as part of comprehensive social programs through the discourses of renewing the economically, socially, and physically degraded parts of the city, producing spaces resistant to earthquakes and other possible disasters, and creating a livable environment. In Turkish practice, a contradictory process is observed. This study aims to develop a method for better understanding urban space in terms of disaster risks, in order to constitute a basis for decisions in the Kocaeli Urban Transformation Master Plan, which is being prepared by Kocaeli Metropolitan Municipality. The spatial unit used in the study is a 50 x 50 m grid. To reflect the multidimensionality of urban transformation, three basic components with spatial data available in Kocaeli were identified. These components were named 'Problems in Built-up Areas', 'Disaster Risks Arising from Geological Conditions of the Ground and Problems of Buildings', and 'Inadequacy of Urban Services'. Each component was weighted and scored for each grid. To delimit urban transformation zones, Optimized Outlier Analysis (local Moran's I) was conducted in ArcGIS 10.6.1 to test the type of distribution (clustered or scattered) and its significance on the grids, taking each grid's weighted total score as the input feature. This analysis showed that the weighted total scores do not cluster significantly at all grids in the urban space. The grids where the input feature clusters significantly were exported as a new database for use in further mapping.
The Total Score Map reflects the significant clusters in terms of the weighted total scores of 'Problems in Built-up Areas', 'Disaster Risks Arising from Geological Conditions of the Ground and Problems of Buildings', and 'Inadequacy of Urban Services'. The resulting grids with the highest scores are the most likely candidates for urban transformation in this citywide study. To categorize urban space in terms of urban transformation, Grouping Analysis in ArcGIS 10.6.1 was applied to the data containing the component scores of the significantly clustered grids. Based on the pseudo statistics and box plots, the six groups with the highest F statistics were extracted. Mapping of the groups shows that these six groups can be interpreted meaningfully in relation to the urban space. The method presented in this study can be extended as more spatial data become available. By integrating other data obtained during the planning process, it can contribute to the research and decision-making processes of urban transformation master plans on a more consistent basis.
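The clustering test applied to the weighted grid scores is based on the local Moran statistic; a bare-bones sketch of the weighting and of local Moran's I with row-standardised weights over a supplied neighbour list (all values and weights below are illustrative, not the plan's actual scores, and the significance testing ArcGIS performs is omitted):

```python
def weighted_total(scores, weights):
    """Weighted total score of one grid cell from its component scores
    (e.g. built-up problems, geological/building risk, service inadequacy)."""
    return sum(s * w for s, w in zip(scores, weights))

def local_moran_i(values, i, neighbors):
    """Local Moran's I for cell i: its z-score times the mean z-score of
    its neighbours. Positive values indicate a cell embedded in a cluster
    of similar values; negative values indicate a spatial outlier."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    z = [(v - mean) / var ** 0.5 for v in values]
    return z[i] * sum(z[j] for j in neighbors) / len(neighbors)
```

For example, a high-scoring cell surrounded by low-scoring neighbours yields a negative I (an outlier), while a high-scoring cell among high-scoring neighbours yields a positive I (part of a candidate transformation zone).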

Keywords: urban transformation, GIS, disaster risk assessment, Kocaeli

Procedia PDF Downloads 120
524 Gradient Length Anomaly Analysis for Landslide Vulnerability Analysis of Upper Alaknanda River Basin, Uttarakhand Himalayas, India

Authors: Hasmithaa Neha, Atul Kumar Patidar, Girish Ch Kothyari

Abstract:

The northward convergence of the Indian plate has a dominant influence on the structural and geomorphic development of the Himalayan region. The highly deformed and complex stratigraphy in the area arises from a confluence of exogenic and endogenetic geological processes. This region frequently experiences natural hazards such as debris flows, flash floods, avalanches, landslides, and earthquakes due to its harsh and steep topography and fragile rock formations. Therefore, remote sensing-based examination and real-time monitoring of tectonically sensitive regions may provide crucial early warnings and invaluable data for effective hazard mitigation strategies. In order to identify unusual changes in river gradients, the current study demonstrates a spatial quantitative geomorphic analysis of the upper Alaknanda River basin, Uttarakhand Himalaya, India, using gradient length anomaly analysis (GLAA). This basin is highly vulnerable to ground creep and landslides due to the presence of active faults/thrusts, toe-cutting of slopes for road widening, development of heavy engineering projects on highly sheared bedrock, and periodic earthquakes. The intersecting joint sets developed in the bedrock have formed wedges that have facilitated the recurrence of several landslides. The main objective of the current research is to identify abnormal gradient lengths, indicating potential landslide-prone zones. High-resolution digital elevation data and geospatial techniques are used to perform this analysis. The results of GLAA are corroborated with historical landslide events and ultimately used to generate landslide susceptibility maps of the study area. The preliminary results indicate that approximately 3.97% of the basin is stable, while about 8.54% is classified as moderately stable and suitable for human habitation. 
However, roughly 19.89% falls within the zone of moderate vulnerability, 38.06% is classified as vulnerable, and 29% falls within the highly vulnerable zones, posing risks of geohazards including landslides, glacial avalanches, and earthquakes. This research provides valuable insights into the spatial distribution of landslide-prone areas and offers a basis for implementing proactive measures for landslide risk reduction, including land-use planning, early warning systems, and infrastructure development techniques.

Keywords: landslide vulnerability, geohazard, GLA, upper Alaknanda Basin, Uttarakhand Himalaya

Procedia PDF Downloads 72
523 The Practise of Hand Drawing as a Premier Form of Representation in Architectural Design Teaching: The Case of FAUP

Authors: Rafael Santos, Clara Pimenta Do Vale, Barbara Bogoni, Poul Henning Kirkegaard

Abstract:

In the last decades, the relevance of hand drawing has decreased in the scope of architectural education. However, some schools continue to recognize its decisive role, not only in architectural design teaching but in the whole of architectural training. This paper presents the results of research on the following problem: the practice of hand drawing as a premier form of representation in architectural design teaching. The research had as its object the educational model of the Faculty of Architecture of the University of Porto (FAUP) and was guided by three main objectives: to identify the circumstances that promoted hand drawing as a form of representation in FAUP's model; to characterize the types of hand drawing and their role in that model; and to determine the particularities of hand drawing as a premier form of representation in architectural design teaching. Methodologically, the research was conducted according to a qualitative embedded single-case study design. The object – i.e., the educational model – was approached in the FAUP case considering its Context and three embedded units of analysis: the educational Purposes, Principles and Practices. In order to guide the procedures of data collection and analysis, a Matrix for Characterization (MCC) was developed. As a methodological tool, the MCC made it possible to relate the three embedded units of analysis with the three main sources of evidence in which the object manifests itself: the professors, expressing how the model is Assumed; the architectural design classes, expressing how the model is Achieved; and the students, expressing how the model is Acquired. The main research methods used were naturalistic and participatory observation, in-person interviews, and documentary and bibliographic review. 
The results reveal that the educational model of FAUP – following the model of the former Porto School – was largely due to the methodological foundations created with the hand drawing teaching-learning processes. In the absence of a culture of explicit theoretical elaboration or systematic research, hand drawing was the support for the continuity of the school, an expression of a unified thought about what should be the reflection and practice of architecture. As a form of representation, hand drawing plays a transversal role in the entire educational model, since its purposes are not limited to the conception of architectural design – it is also a means for perception, analysis and synthesis. Regarding the architectural design teaching, there seems to be an understanding of three complementary dimensions of didactics: the instrumental, methodological and propositional dimension. At FAUP, hand drawing is recognized as the common denominator among these dimensions, according to the idea of "globality of drawing". It is expected that the knowledge base developed in this research may have three main contributions: to contribute to the maintenance and valorisation of FAUP’s model; through the precise description of the methodological procedures, to contribute by transferability to similar studies; through the critical and objective framework of the problem underlying the hand drawing in architectural design teaching, to contribute to the broader discussion concerning the contemporary challenges on architectural education.

Keywords: architectural design teaching, architectural education, forms of representation, hand drawing

Procedia PDF Downloads 131
522 Carbon Capture and Storage by Continuous Production of CO₂ Hydrates Using a Network Mixing Technology

Authors: João Costa, Francisco Albuquerque, Ricardo J. Santos, Madalena M. Dias, José Carlos B. Lopes, Marcelo Costa

Abstract:

Nowadays, it is well recognized that carbon dioxide emissions, together with other greenhouse gases, are responsible for the dramatic climate changes that have been occurring over the past decades. Gas hydrates are currently seen as a promising and disruptive set of materials that can be used as a basis for developing new technologies for CO₂ capture and storage. Their potential as a clean and safe pathway for CCS is tremendous, since they require only water and gas to be mixed under favorable temperatures and moderately high pressures. However, the hydrate formation process is highly exothermic, releasing about 2 MJ per kilogram of CO₂, and it only occurs in a narrow window of operational temperatures (0 - 10 °C) and pressures (15 to 40 bar). Efficient continuous hydrate production within this temperature range therefore necessitates high heat transfer rates in the mixing process. Past technologies often struggled to meet this requirement, resulting in low productivity or extended mixing/contact times due to inadequate heat transfer rates, which consistently posed a limitation. Consequently, there is a need for more effective continuous hydrate production technologies in industrial applications. In this work, a network mixing continuous production technology has been shown to be viable for producing CO₂ hydrates. The structured mixer used throughout this work consists of a network of unit cells comprising mixing chambers interconnected by transport channels. These mixing features result in enhanced heat and mass transfer rates and a high interfacial surface area. The mixer's capacity emerges from the fact that, under proper hydrodynamic conditions, the flow inside the mixing chambers becomes fully chaotic and self-sustained oscillatory, inducing intense local laminar mixing. The device presents specific heat transfer rates ranging from 10⁷ to 10⁸ W⋅m⁻³⋅K⁻¹. 
A laboratory-scale pilot installation was built using a device capable of continuously capturing 1 kg⋅h⁻¹ of CO₂ in an aqueous slurry of up to 20% by mass. The strong mixing intensity has proven sufficient to enhance dissolution and initiate hydrate crystallization without the need for external seeding mechanisms, and to achieve CO₂ conversions of 99% at the device outlet. CO₂ dissolution experiments revealed that the overall liquid mass transfer coefficient, ranging from 1,000 to 12,000 h⁻¹, is orders of magnitude larger than in similar devices built for the same purpose. The present technology has proven capable of continuously producing CO₂ hydrates. Furthermore, the modular characteristics of the technology, where scalability is straightforward, underline the potential development of a modular hydrate-based CO₂ capture process for large-scale applications.
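The cooling requirement implied by the figures above (about 2 MJ released per kg of CO₂, 1 kg⋅h⁻¹ capture) follows from a one-line energy balance; a back-of-the-envelope sketch, not part of the authors' analysis:

```python
# Energy balance for continuous hydrate formation, using the abstract's figures.
heat_of_formation_J_per_kg = 2.0e6    # ~2 MJ released per kg of CO2 captured
capture_rate_kg_per_s = 1.0 / 3600.0  # 1 kg/h of CO2, expressed in kg/s

# heat that must be removed continuously to stay in the 0-10 degC window
cooling_duty_W = heat_of_formation_J_per_kg * capture_rate_kg_per_s
```

This comes to roughly 0.56 kW of continuous heat rejection per kg⋅h⁻¹ of CO₂, which is why the high volumetric heat transfer rates quoted for the mixer matter.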

Keywords: network, mixing, hydrates, continuous process, carbon dioxide

Procedia PDF Downloads 52
521 Pandemic-Related Disruption to the Home Environment and Early Vocabulary Acquisition

Authors: Matthew McArthur, Margaret Friend

Abstract:

The COVID-19 pandemic disrupted the stability of the home environment for families across the world. Potential disruptions include parent work modality (in-person vs. remote), levels of health anxiety, family routines, and caregiving. These disruptions may have interfered with the processes of early vocabulary acquisition, carrying lasting effects over the life course. Our justification for this research is as follows: First, early, stable, caregiver-child reciprocal interactions, which may have been disrupted during the pandemic, contribute to the development of the brain architecture that supports language, cognitive, and social-emotional development. Second, early vocabulary predicts several cognitive outcomes, such as numeracy, literacy, and executive function. Further, disruption in the home is associated with adverse cognitive, academic, socio-emotional, behavioral, and communication outcomes in young children. We are interested in how disruptions related to the COVID-19 pandemic are associated with vocabulary acquisition in children born during the first two waves of the pandemic. We are conducting a moderated online experiment to assess this question. Participants are 16 children (10F) ranging in age from 19 to 39 months (M=25.27) and their caregivers. All child participants were screened for language background, health history, and history of language disorders, and were typically developing. Parents completed a modified version of the COVID-19 Family Stressor Scale (CoFaSS), a published measure of COVID-19-related family stressors. Thirteen items from the original scale were replaced to better capture change in family organization and stability specifically related to disruptions in income, anxiety, family relations, and childcare. 
Following completion of the modified CoFaSS, children completed a web-based version of the Computerized Comprehension Task (CCT) and, if 24 months or older, the Receptive One-Word Picture Vocabulary Test (ROWPVT-4), or, if younger than 24 months, the MacArthur-Bates Communicative Development Inventory (MCDI). We report our preliminary data as a partial correlation analysis controlling for age. Raw vocabulary scores on the CCT, ROWPVT-4, and MCDI were all negatively associated with pandemic-related disruptions related to anxiety (r12 = -.321; r1 = -.332; r9 = -.509), family relations (r12 = -.590*; r1 = -.155; r9 = -.468), and childcare (r12 = -.294; r1 = -.468; r9 = -.177). Although the small sample size of these preliminary data limits our power to detect significance, the trend is in the predicted direction, suggesting that increased pandemic-related disruption across multiple domains is associated with lower vocabulary scores. We anticipate presenting data on a full sample of 50 monolingual English participants, which would provide sufficient statistical power to detect a moderate effect size at a nominal alpha of 0.05 and a power level of 0.80.
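A partial correlation controlling for age, as reported above, can be obtained by regressing age out of both variables and correlating the residuals. A minimal sketch; the variable names and toy numbers are hypothetical, not the study's data:

```python
import numpy as np

def partial_corr(x, y, control):
    """Correlation between x and y after regressing out a control variable
    (here, age in months) from both."""
    x, y, c = (np.asarray(v, dtype=float) for v in (x, y, control))
    Z = np.column_stack([np.ones_like(c), c])           # intercept + control
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residuals of x on age
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residuals of y on age
    return float(np.corrcoef(rx, ry)[0, 1])

# toy example: vocabulary score vs. disruption score, controlling for age
age = [20, 24, 28, 32]
vocab = [2.0, 1.0, 4.0, 3.0]
disruption = [0.0, 3.0, 2.0, 5.0]
r = partial_corr(vocab, disruption, age)  # negative, as in the reported trend
```

The residual-based computation is equivalent to the standard partial correlation formula for a single control variable.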

Keywords: COVID-19, early vocabulary, home environment, language acquisition, multiple measures

Procedia PDF Downloads 62
520 The Sustained Utility of Japan's Human Security Policy

Authors: Maria Thaemar Tana

Abstract:

The paper examines the policy and practice of Japan's human security. Specifically, it asks: how does Japan's shift towards a more proactive defence posture affect the place of human security in its foreign policy agenda? Corollary to this, how is Japan sustaining its human security policy? The objective of this research is to understand how Japan, chiefly through the Ministry of Foreign Affairs (MOFA) and the Japan International Cooperation Agency (JICA), sustains the concept of human security as a policy framework. In addition, the paper aims to show how and why Japan continues to include the concept in its overall foreign policy agenda. In light of recent developments in Japan's security policy, which essentially result from the changing security environment, human security appears to be gradually losing relevance. The paper, however, argues that despite the strategic challenges Japan has faced and is facing, as well as the apparent decline of its economic diplomacy, human security remains an area of critical importance for Japanese foreign policy. In fact, as Japan becomes more proactive in its international affairs, the strategic value of human security also increases. Human security was initially envisioned to help Japan compensate for its weaknesses in the areas of traditional security, but as Japan moves closer to a more activist foreign policy, the soft policy of human security complements its hard security policies. Using the framework of neoclassical realism (NCR), the paper recognizes that policy-making is essentially a convergence of incentives and constraints at the international and domestic levels. The theory posits that there is no perfect 'transmission belt' linking material power on the one hand and actual foreign policy on the other. 
State behavior is influenced by both international- and domestic-level variables, but while systemic pressures and incentives determine the general direction of foreign policy, they are not strong enough to determine the exact details of state conduct. Internal factors such as leaders' perceptions, domestic institutions, and domestic norms serve as intervening variables between the international system and foreign policy. Applied to this study, Japan's sustained utilization of human security as a foreign policy instrument (the dependent variable) is essentially a result of systemic pressures acting indirectly (the independent variables) and domestic processes acting directly (the intervening variables). Two cases of Japan's human security practice in two regions and time periods are examined: Iraq in the Middle East (2001-2010) and South Sudan in Africa (2011-2017). The cases show that despite the different motives behind Japan's decision to participate in these international peacekeeping and peace-building operations, human security continues to be incorporated in both rhetoric and practice, demonstrating that it was and remains an important diplomatic tool. Variables at the international and domestic levels are examined to understand how their interaction results in changes and continuities in Japan's human security policy.

Keywords: human security, foreign policy, neoclassical realism, peace-building

Procedia PDF Downloads 133
519 Digital Twins in the Built Environment: A Systematic Literature Review

Authors: Bagireanu Astrid, Bros-Williamson Julio, Duncheva Mila, Currie John

Abstract:

Digital Twins (DT) are an innovative concept of cyber-physical integration of data between an asset and its virtual replica. They originated in established industries such as manufacturing and aviation and have garnered increasing attention as a potentially transformative technology within the built environment. With the potential to support decision-making, real-time simulation, forecasting, and operations management, DT do not fall under a singular scope, which makes them difficult to define and risks leaving their potential uses unexploited. Despite their recognised potential in established industries, literature on DT in the built environment remains limited. Inadequate attention has been given to the implementation of DT in construction projects, as opposed to their operational-stage applications. Additionally, the absence of a standardised definition has resulted in inconsistent interpretations of DT in both industry and academia. There is a need to consolidate research to foster a unified understanding of DT; such consolidation is indispensable to ensure that future research is undertaken on a solid foundation. This paper aims to present a comprehensive systematic literature review on the role of DT in the built environment. To accomplish this objective, a review and thematic analysis were conducted, encompassing relevant papers from the last five years. The identified papers are categorised based on their specific areas of focus, and their content was translated into a thorough classification of DT. 
In characterising DT and the associated data processes identified, this systematic literature review has identified six DT opportunities specifically relevant to the built environment: facilitating collaborative procurement methods; supporting net-zero and decarbonisation goals; supporting Modern Methods of Construction (MMC) and off-site manufacturing (OSM); providing increased transparency and stakeholder collaboration; supporting complex decision-making (real-time simulations and forecasting abilities); and seamless integration with the Internet of Things (IoT), data analytics and other DT. A discussion of each area of research is provided, and a table of definitions of DT across the reviewed literature seeks to delineate the current state of DT implementation in the built environment context. Gaps in knowledge are identified, as well as research challenges and opportunities for further advancement of DT implementation within the built environment. This paper critically assesses the existing literature to identify the potential of DT applications, aiming to harness the transformative capabilities of data in the built environment. By fostering a unified comprehension of DT, it contributes to advancing the effective adoption and utilisation of this technology, accelerating progress towards the realisation of smart cities, decarbonisation, and other envisioned roles for DT in the construction domain.

Keywords: built environment, design, digital twins, literature review

Procedia PDF Downloads 81
518 Performance Improvement of Piston Engine in Aeronautics by Means of Additive Manufacturing Technologies

Authors: G. Andreutti, G. Saccone, D. Lucariello, C. Pirozzi, S. Franchitti, R. Borrelli, C. Toscano, P. Caso, G. Ferraro, C. Pascarella

Abstract:

The reduction of greenhouse gas and pollutant emissions is a worldwide environmental issue. The amount of CO₂ released by an aircraft is tied to the amount of fuel burned, so improving engine thermo-mechanical efficiency and specific fuel consumption is a significant technological driver for aviation. Moreover, with the prospect that avgas will be phased out, an engine able to use more widely available and cheaper fuels is an evident advantage. An advanced aeronautical Diesel engine, because of its high efficiency and its ability to use widely available and low-cost jet and diesel fuels, is a promising solution for a more fuel-efficient aircraft. On the other hand, a Diesel engine generally has a higher overall weight than a gasoline engine of the same power output. For a fixed maximum take-off weight (MTOW) and operational payload, this extra weight reduces the aircraft fuel fraction, partially nullifying the associated benefits. Therefore, an effort in weight-saving manufacturing technologies is clearly desirable. In this work, in order to achieve the mentioned goals, innovative Electron Beam Melting (EBM) Additive Manufacturing (AM) technologies were applied to a two-stroke, common-rail GF56 Diesel engine, developed by the CMD Company for aeronautic applications. For this purpose, a consortium of academic, research and industrial partners, including the CMD Company, the Italian Aerospace Research Centre (CIRA), the University of Naples Federico II and the University of Salerno, carried out a technological project funded by the Italian Ministry of Education and Research (MIUR). The project aimed to optimize the baseline engine in order to improve its performance and increase its airworthiness features, and was focused on the definition, design, development, and application of enabling technologies for performance improvement of the GF56. 
Weight saving of this engine was pursued through the application of EBM AM technologies, in particular using the Arcam AB A2X machine available at CIRA. This 3D printer processes titanium alloy micro-powders and was employed to realize new connecting rods for the GF56 engine with an additive-oriented design approach. After a preliminary investigation of EBM process parameters and a thermo-mechanical characterization of additively manufactured titanium alloy samples, innovative connecting rods were fabricated. These engine elements were structurally verified, topologically optimized, 3D printed and suitably post-processed. Finally, the overall performance improvement on a typical general aviation aircraft was estimated by substituting the conventional engine with the optimized GF56 propulsion system.

Keywords: aeronautic propulsion, additive manufacturing, performance improvement, weight saving, piston engine

Procedia PDF Downloads 142
517 Belarus Rivers Runoff: Current State, Prospects

Authors: Aliaksandr Volchak, Maryna Barushka

Abstract:

The territory of Belarus is well studied in terms of hydrology, but runoff fluctuations over time require more detailed research in order to forecast future changes in river runoff. River runoff is generally shaped by natural climatic factors, but anthropogenic impact has lately become so large that it is comparable to natural processes in forming runoff. In Belarus, a heavy anthropogenic load on the environment was caused by large-scale land reclamation in the 1960s. The lands of southern Belarus were reclaimed most intensively, which contributed to changes in runoff. Besides, global warming influences runoff: we now observe increases in air temperature, decreases in precipitation, and changes in wind velocity and direction. These result from cyclic climate fluctuations and, to some extent, from the growing concentration of greenhouse gases in the air. Climate change affects Belarus's water resources in many ways: in the hydropower industry, other water-consuming industries, water transportation, agriculture, and flood risk. In this research, we assessed river runoff according to the scenarios of climate change and the global climate forecasts presented in the 4th and 5th Assessment Reports of the Intergovernmental Panel on Climate Change (IPCC), later specified and adjusted by experts from Vilnius Gediminas Technical University using a regional climate model. To forecast changes in climate and runoff, we analyzed their changes from 1962 to the present. This period is divided in two: from 1986 to the present, compared with the changes observed from 1961 to 1985; such a division is common worldwide practice. The assessment revealed that, on average, changes in runoff are insignificant across the country, with only a minor increase of 0.5 – 4.0% in the catchments of the Western Dvina River and the north-eastern part of the Dnieper River. 
However, changes in runoff have become more irregular, both across catchment areas and in inter-annual distribution over seasons and along river lengths. Rivers in southern Belarus (the Pripyat, the Western Bug, the Dnieper, the Neman) experience a reduction in runoff all year round, except in winter, when runoff increases. The Western Bug catchment is an exception: its runoff decreases all year round. Significant changes are observed in spring: the runoff of spring floods decreases, but the flood arrives much earlier. Trends in runoff differ between spring, summer, and autumn. In summer in particular, we observe runoff reduction in the south and west of Belarus, with growth in the north and north-east. Our runoff forecast up to 2035 confirms the trend revealed in 1961 – 2015: in the future, there will be a strong contrast between northern and southern Belarus and between small and large rivers. Although we predict only minor overall changes in runoff, they are likely to be uneven across seasons and individual months. In particular, runoff may change in summer but decrease in the remaining seasons in the south of Belarus, whereas in the north runoff is predicted to change insignificantly.
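The two-period comparison described above (1961–1985 baseline versus 1986 onward) amounts to comparing period means of an annual runoff series. A minimal sketch with an illustrative, made-up series, not the Belarusian gauge data:

```python
import numpy as np

# illustrative annual runoff series split at 1986, mirroring the study's two
# reference periods; values are hypothetical, e.g. mean discharge in m3/s
years = np.arange(1962, 2016)
runoff = np.where(years <= 1985, 100.0, 102.0)

baseline = runoff[years <= 1985].mean()   # pre-1986 period mean
recent = runoff[years >= 1986].mean()     # 1986-onward period mean
change_pct = 100.0 * (recent - baseline) / baseline
# a change within roughly 0.5-4.0% would match the reported range
```

In practice such a comparison would be paired with a significance test on the difference of period means before attributing it to climate or anthropogenic change.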

Keywords: assessment, climate fluctuation, forecast, river runoff

Procedia PDF Downloads 121
516 An Emergentist Defense of Incompatibility between Morally Significant Freedom and Causal Determinism

Authors: Lubos Rojka

Abstract:

The common perception of morally responsible behavior is that it presupposes freedom of choice, and that free decisions and actions are determined not by natural events but by the person. In other words, the moral agent has the ability and the possibility of doing otherwise when making morally responsible decisions, and natural causal determinism cannot fully account for morally significant freedom. The incompatibility between a person's morally significant freedom and causal determinism appears to be a natural position. Nevertheless, some of the most influential philosophical theories of moral responsibility are compatibilist or semi-compatibilist, and they exclude the requirement of alternative possibilities, contradicting the claims of classical incompatibilism. Compatibilists often employ Frankfurt-style thought experiments to prove their theory. The goal of this paper is to examine the role of imaginary Frankfurt-style examples in compatibilist accounts. More specifically, the compatibilist accounts defended by John Martin Fischer and Michael McKenna are placed within the broader understanding of a person elaborated by Harry Frankfurt, Robert Kane and Walter Glannon. Deeper analysis reveals that the exclusion of alternative possibilities based on Frankfurt-style examples is problematic and misleading. A more comprehensive account of moral responsibility and morally significant (source) freedom requires higher-order complex theories of human will and consciousness, in which rational and self-creative abilities and a real possibility to choose otherwise, at least on some occasions during a lifetime, are necessary. Theoretical moral reasons and their logical relations seem to require a sort of higher-order agent-causal incompatibilism. The capacity for theoretical or abstract moral reasoning requires complex (strongly emergent) mental and conscious properties, among which are an effective free will and first- and second-order desires. 
Such a hierarchical theoretical model unifies reasons-responsiveness, mesh theory and emergentism. It is incompatible with physical causal determinism, because such determinism allows only non-systematic processes that may be hard to predict, not complex (strongly) emergent systems. An agent's effective will and conscious reflectivity are the starting point of a morally responsible action, which explains why a decision is 'up to the subject'. A free decision does not always have a complete causal history. This kind of emergentist source hyper-incompatibilism seems to be the most promising direction in the search for an adequate explanation of moral responsibility in the traditional (merit-based) sense. Physical causal determinism as a universal theory would exclude morally significant freedom and responsibility in the traditional sense, because it would exclude the emergence of, and supervenience by, the essential complex properties of human consciousness.

Keywords: consciousness, free will, determinism, emergence, moral responsibility

Procedia PDF Downloads 164
515 Music Reading Expertise Facilitates Implicit Statistical Learning of Sentence Structures in a Novel Language: Evidence from Eye Movement Behavior

Authors: Sara T. K. Li, Belinda H. J. Chung, Jeffery C. N. Yip, Janet H. Hsiao

Abstract:

Music notation reading and text reading both involve statistical learning of musical or linguistic structures. However, it remains unclear how music reading expertise influences text reading behavior. The present study examined this issue through an eye-tracking study. Chinese-English bilingual musicians and non-musicians read English sentences, Chinese sentences, musical phrases, and sentences in Tibetan, a language novel to the participants, with their eye movements recorded. Each set of stimuli consisted of two conditions in terms of structural regularity: syntactically correct and syntactically incorrect musical phrases/sentences. Participants then completed a sentence comprehension task (for syntactically correct sentences) or a musical segment/word recognition task to test their comprehension/recognition abilities. The results showed that in reading musical phrases, as compared with non-musicians, musicians had higher accuracy in the recognition task, and had shorter reading times, fewer fixations, and shorter fixation durations when reading syntactically correct (i.e., in a diatonic key) than incorrect (i.e., non-diatonic/atonal) musical phrases. This result reflects their expertise in music reading. Interestingly, in reading Tibetan sentences, which were novel to both participant groups, non-musicians showed no behavioral differences between syntactically correct and incorrect sentences, whereas musicians showed shorter reading times and marginally fewer fixations when reading syntactically correct sentences than syntactically incorrect ones. However, none of the musicians reported discovering any structural regularities in the Tibetan stimuli when asked explicitly after the experiment, suggesting that they may have implicitly acquired the structural regularities of Tibetan sentences. This group difference was not observed when they read English or Chinese sentences. 
This result suggests that music reading expertise facilitates reading texts in a novel language (i.e., Tibetan), but not in languages the readers are already familiar with (i.e., English and Chinese). This phenomenon may be due to the similarities between reading music notation and reading text in a novel language: in both cases the stimuli follow particular statistical structures but do not involve semantic or lexical processing. Thus, musicians may transfer statistical learning skills stemming from their music notation reading experience to implicitly discover the structure of sentences in a novel language. This speculation is consistent with a recent finding that music reading expertise modulates the processing of English nonwords (i.e., letter strings that do not follow morphological or orthographic rules) but not pseudo- or real words. These results suggest that the modulation of language processing by music reading expertise depends on the similarities in the cognitive processes involved. It also has important implications for the benefits of music education for language and cognitive development.

Keywords: eye movement behavior, eye-tracking, music reading expertise, sentence reading, structural regularity, visual processing

Procedia PDF Downloads 380