Search results for: vapour phase transesterification
495 Functionalized Spherical Aluminosilicates in Biomedically Grade Composites
Authors: Damian Stanislaw Nakonieczny, Grazyna Simha Martynkova, Marianna Hundakova, G. Kratosová, Karla Cech Barabaszova
Abstract:
The main aim of the research was to functionalize the surface of spherical aluminum silicates in the form of so-called cenospheres. Cenospheres are light ceramic particles with a density between 0.45 and 0.85 g·cm-3 that can be obtained as a result of separation from fly ash from coal combustion. However, their occurrence is limited to about 1% by weight of dry ash, mainly derived from anthracite; hence they are a very rare and desirable material. Cenospheres are characterized by complete chemical inertness, a Mohs hardness of about 6, and a completely smooth surface. The main idea was to prepare the surface by chemical etching with, among others, hydrofluoric acid (HF), hydrogen peroxide and Caro's acid, followed by silanization using (3-aminopropyl)triethoxysilane (APTES) and tetraethyl orthosilicate (TEOS), in order to obtain the maximum development and functionalization of the surface and thereby improve the chemical and mechanical bond with biomedically used polymers, i.e., poly(methyl methacrylate) (PMMA) and polyetheretherketone (PEEK). These polymers are used medically mainly as materials for fixed and removable dental prostheses and for PEEK spinal implants. The problem with their use is the decrease in mechanical properties over time and bacterial and fungal infections during implantation and use of dentures. Hence, the use of a ceramic filler that will significantly improve the mechanical properties, improve the fluidity of the polymer during shape forming, and in the future be able to carry bacteriostatic substances such as silver and zinc ions seems promising. In order to evaluate our laboratory work, several instrumental studies were performed: chemical composition and morphology by scanning electron microscopy with an energy-dispersive X-ray probe (SEM/EDX), determination of characteristic functional groups by Fourier transform infrared spectroscopy (FTIR), phase composition by X-ray diffraction (XRD), thermal analysis by thermogravimetric analysis/differential thermal analysis (TGA/DTA), as well as assessment of adsorption isotherms and surface development by the Brunauer-Emmett-Teller (BET) method. The surface was evaluated for the future application of additional bacteriostatic and fungistatic layers. Based on the experimental work, it was found that the elaborated methods can be suitable for the functionalization of the surface of cenosphere ceramics, and in the future it can be suitable as a bacteriostatic filler for biomedical polymers, i.e., PEEK or PMMA.
Keywords: bioceramics, composites, functionalization, surface development
Procedia PDF Downloads 120
494 Influence of Thermal Ageing on Microstructural Features and Mechanical Properties of Reduced Activation Ferritic/Martensitic Grades
Authors: Athina Puype, Lorenzo Malerba, Nico De Wispelaere, Roumen Petrov, Jilt Sietsma
Abstract:
Reduced activation ferritic/martensitic (RAFM) steels like EUROFER are of interest for first wall application in the future demonstration (DEMO) fusion reactor. Depending on the final design codes for the DEMO reactor, the first wall material will have to function in low-temperature mode or high-temperature mode, i.e., around 250-300°C or above 550°C, respectively. However, the use of RAFM steels is limited to temperatures up to about 550°C. For the low-temperature application, the material suffers from irradiation embrittlement, due to a shift of the ductile-to-brittle transition temperature (DBTT) towards higher temperatures upon irradiation. The high-temperature response of the material is equally insufficient for long-term use in fusion reactors, due to the instability of the matrix phase and coarsening of the precipitates during prolonged high-temperature exposure. The objective of this study is to investigate the influence of thermal ageing for 1000 hrs and 4000 hrs on the microstructural features and mechanical properties of lab-cast EUROFER. Additionally, the ageing behavior of the lab-cast EUROFER is compared with the ageing behavior of standard EUROFER97-2 and T91. The microstructural features were investigated with light optical microscopy (LOM), electron back-scattered diffraction (EBSD) and transmission electron microscopy (TEM). Additionally, hardness measurements, tensile tests at elevated temperatures and Charpy V-notch impact testing of KLST-type MCVN specimens were performed to study the microstructural features and mechanical properties of four different F/M grades, i.e., T91, EUROFER97-2 and two lab-cast EUROFER grades. After ageing for 1000 hrs, the microstructures exhibit similar martensitic block sizes independent of the grain size before ageing. In the initially coarser microstructures, the aged material displayed a dislocation structure which is partially fragmented by polygonization. On the other hand, the initially finer microstructures tend to be more stable up to 1000 hrs, resulting in similar grain sizes for the four different steels. Increasing the ageing time to 4000 hrs resulted in an increase of lath thickness and coarsening of M23C6 precipitates, leading to a deterioration of tensile properties.
Keywords: ageing experiments, EUROFER, ferritic/martensitic steels, mechanical properties, microstructure, T91
Procedia PDF Downloads 261
493 Chemical Composition of Volatiles Emitted from Ziziphus jujuba Miller Collected during Different Growth Stages
Authors: Rose Vanessa Bandeira Reidel, Bernardo Melai, Pier Luigi Cioni, Luisa Pistelli
Abstract:
Ziziphus jujuba Miller is a common species of the Ziziphus genus (Rhamnaceae family) native to the tropics and subtropics, known for its edible fruits, consumed fresh or used in health foods and as a flavoring and sweetener. Many phytochemicals and biological activities are described for this species. In this work, the aroma profiles emitted in vivo by whole fresh organs (leaf, flower bud, flower, green and red fruits) were analyzed separately by means of solid phase micro-extraction (SPME) coupled with gas chromatography-mass spectrometry (GC-MS). The volatiles emitted from the different plant parts were analysed using a Supelco SPME device coated with polydimethylsiloxane (PDMS, 100 µm). Fresh plant material was introduced separately into a glass conical flask and allowed to equilibrate for 20 min. After the equilibration time, the fibre was exposed to the headspace for 15 min at room temperature; it was then retracted into the needle and transferred to the injector of the GC and GC-MS system, where it was desorbed. All the data were submitted to multivariate statistical analysis, evidencing many differences amongst the selected plant parts and their developmental stages. A total of 144 compounds were identified, corresponding to 94.6-99.4% of the whole aroma profile of the jujube samples. Sesquiterpene hydrocarbons were the main chemical class of compounds in leaves and were present in similar percentages in flowers and flower buds, with (E,E)-α-farnesene as the main constituent in all cited plant parts. This behavior can be due to a protection mechanism against pathogens and herbivores as well as resistance to abiotic factors. The aroma of green fruits was characterized by a high amount of perillene, while the red fruits released a volatile blend mainly constituted of different monoterpenes. The terpenoid emission of fleshy fruits has an important function in the interaction with animals, including the attraction of seed dispersers, and is related to good fruit quality. This study provides for the first time the chemical composition of the volatile emission from different Ziziphus jujuba organs. The SPME analyses of the collected samples showed different patterns of emission and can contribute to understanding their ecological interactions and fruit production management.
Keywords: Rhamnaceae, aroma profile, jujube organs, HS-SPME, GC-MS
Procedia PDF Downloads 256
492 Hansen Solubility Parameter from Surface Measurements
Authors: Neveen AlQasas, Daniel Johnson
Abstract:
Membranes for water treatment are an established technology that attracts great attention due to its simplicity and cost effectiveness. However, membranes in operation suffer from the adverse effect of membrane fouling. Bio-fouling is a phenomenon that occurs at the water-membrane interface and is a dynamic process that is initiated by the adsorption of dissolved organic material, including biomacromolecules, on the membrane surface. After initiation, attachment of microorganisms occurs, followed by biofilm growth. The biofilm blocks the pores of the membrane and consequently reduces the water flux. Moreover, the presence of a fouling layer can have a substantial impact on the membrane separation properties. Understanding the mechanism of the initiation phase of biofouling is a key point in eliminating biofouling on membrane surfaces. The adhesion and attachment of different fouling materials are affected by the surface properties of the membrane materials. Therefore, the surface properties of different polymeric materials have been studied in terms of their surface energies and Hansen solubility parameters (HSP). The difference between the combined HSP parameters (the HSP distance) allows prediction of the affinity of two materials to each other. The possibility of measuring the HSP of different polymer films via surface measurements, such as contact angle, has been thoroughly investigated. Knowing the HSP of a membrane material and the HSP of a specific foulant facilitates the estimation of the HSP distance between the two, and therefore of the strength of attachment to the surface. Contact angle measurements using fourteen different solvents on five different polymeric films were carried out using the sessile drop method. Solvents were ranked as good or bad solvents using different ranking methods, and the ranking was used to calculate the HSP of each polymeric film. Results clearly indicate the absence of a direct relation between the contact angle values of each film and the HSP distance between each polymer film and the solvents used. Therefore, estimating HSP via contact angle alone is not sufficient. However, it was found that if the surface tensions and viscosities of the solvents used are taken into account in the analysis of the contact angle values, a prediction of the HSP from contact angle measurements is possible. This was carried out via training of a neural network model. The trained neural network model has three inputs: the contact angle value, and the surface tension and viscosity of the solvent used. The model is able to predict the HSP distance between the solvent used and the tested polymer (material). The HSP distance prediction is further used to estimate the total and individual HSP parameters of each tested material. The results showed an accuracy of about 90% for all five studied films.
Keywords: surface characterization, Hansen solubility parameter estimation, contact angle measurements, artificial neural network model, surface measurements
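As a rough illustration of the kind of model described above, the sketch below pairs the standard Hansen distance formula with a small regression network mapping (contact angle, solvent surface tension, solvent viscosity) to HSP distance. It is a minimal sketch only: the network architecture, the training data and all numerical values are assumptions for illustration, not the model or data of the study.

```python
# Minimal sketch (not the authors' implementation): predicting HSP distance from
# contact angle, solvent surface tension and viscosity with a small neural network.
# All numbers below are illustrative placeholders, not data from the study.
import numpy as np
from sklearn.neural_network import MLPRegressor

def hsp_distance(hsp1, hsp2):
    """Hansen solubility parameter distance Ra between two materials.
    hsp = (dD, dP, dH): dispersion, polar and hydrogen-bonding parameters (MPa^0.5)."""
    dD1, dP1, dH1 = hsp1
    dD2, dP2, dH2 = hsp2
    return np.sqrt(4.0 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2 + (dH1 - dH2) ** 2)

# Hypothetical training set: one row per (film, solvent) measurement.
# Columns: contact angle (deg), solvent surface tension (mN/m), viscosity (mPa s).
X = np.array([[45.0, 72.8, 0.89],
              [62.0, 27.6, 1.20],
              [30.0, 47.7, 2.40],
              [75.0, 22.1, 0.55]])
# Target: known HSP distance between film and solvent (MPa^0.5), illustrative values.
y = np.array([18.2, 7.5, 11.0, 4.3])

model = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0)
model.fit(X, y)

# Predict the HSP distance for a new solvent on the same film.
print(model.predict([[50.0, 35.0, 1.0]]))
```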
Procedia PDF Downloads 94
491 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data
Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton
Abstract:
The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized. This is the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and type of contribution to the research area. Upon analyzing the 54 papers identified in this area, it was noted that 23 of the papers originated in Germany. This is an unsurprising finding, as Industry 4.0 is originally a German strategy, with strong supporting policy instruments utilized in Germany to support its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap in the area. There exists an unresolved gap between the data science experts and the manufacturing process experts in industry. The data analytics expertise is not useful unless the manufacturing process information is utilized. A legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. There lies a gap between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test the concept of this gap existing, the researcher initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 of the papers contributed a framework, another 12 of the papers were based on a case study, and 11 of the papers focused on theory. However, there were only three papers that contributed a methodology. This provides further evidence for the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.
Keywords: analytics, digitization, industry 4.0, manufacturing
Procedia PDF Downloads 111
490 Microbial Resource Research Infrastructure: A Large-Scale Research Infrastructure for Microbiological Services
Authors: R. Hurtado-Ortiz, D. Clermont, M. Schüngel, C. Bizet, D. Smith, E. Stackebrandt
Abstract:
Microbiological resources and their derivatives are the essential raw material for the advancement of human health, agro-food, food security, biotechnology, and research and development in all life sciences. Microbial resources, and their genetic and metabolic products, are utilised in many areas such as the production of healthy and functional food, the identification of new antimicrobials against emerging and resistant pathogens, fighting agricultural disease, identifying novel energy sources on the basis of microbial biomass, and screening for new active molecules for the bio-industries. The complexity of public collections, of the distribution and use of biological material (not only living material but also DNA, services, training, consultation, etc.) and of the service offer demands the coordination and sharing of policies, processes and procedures. The Microbial Resource Research Infrastructure (MIRRI) is an initiative within the European Strategy Forum on Research Infrastructures (ESFRI), bringing together 16 partners, including 13 European public microbial culture collections and biological resource centres (BRCs), supported by several European and non-European associated partners. The objective of MIRRI is to support innovation in microbiology by providing a one-stop shop for well-characterized microbial resources and high-quality services on a not-for-profit basis for biotechnology in support of microbiological research. In addition, MIRRI contributes to the structuring of microbial resources capacity at both the national and European levels. This will facilitate access to microorganisms for biotechnology for the enhancement of the bio-economy in Europe. MIRRI will overcome the fragmentation of access to current resources and services, develop harmonised strategies for the delivery of associated information, and ensure bio-security and other regulatory conditions in order to broaden access and promote the uptake of these resources into European research. Data mining of the landscape of current information is needed to discover potential and drive innovation, and to ensure the uptake of high-quality microbial resources into research. MIRRI is in its Preparatory Phase, focusing on governance and structure, including technical, legal and financial issues. MIRRI will help the Biological Resource Centres to work more closely with policy makers, stakeholders, funders and researchers, to deliver the resources and services needed for innovation.
Keywords: culture collections, microbiology, infrastructure, microbial resources, biotechnology
Procedia PDF Downloads 444
489 Synthesis of Temperature Sensitive Nano/Microgels by Soap-Free Emulsion Polymerization and Their Application in Hydrate Sediments Drilling Operations
Authors: Xuan Li, Weian Huang, Jinsheng Sun, Fuhao Zhao, Zhiyuan Wang, Jintang Wang
Abstract:
Natural gas hydrates (NGHs), as promising alternative energy sources, have gained increasing attention. Hydrate-bearing formations in marine areas are highly unconsolidated and fragile, composed of weakly cemented sand-clay and silty sediments. During the drilling process, the invasion of drilling fluid can easily lead to excessive water content in the formation. This changes the soil liquid/plastic limit index, which significantly affects the formation quality, leading to wellbore instability due to the metastable character of hydrate-bearing sediments. Therefore, controlling filtrate loss into the formation during drilling is essential for protecting the stability of the wellbore. In this study, a temperature-sensitive nanogel of P(NIPAM-co-AMPS-co-tBA) was prepared by soap-free emulsion polymerization, and its temperature-sensitive behavior was employed to achieve self-adaptive plugging in hydrate sediments. First, the effects of the added amounts of AMPS, tBA, and the cross-linker MBA on the microgel synthesis process and temperature-sensitive behavior were investigated. Results showed that, as a reactive emulsifier, AMPS can not only participate in the polymerization reaction but also act as an emulsifier to stabilize micelles and enhance the stability of the nanoparticles. The volume phase transition temperature (VPTT) of the nanogels gradually decreased with increasing content of the hydrophobic monomer tBA. An increase in the content of the cross-linking agent MBA can lead to a rise in the coagulum content and instability of the emulsion. The plugging performance of the nanogel was evaluated in a core sample with a pore size distribution in the range of 100-1000 nm. The temperature-sensitive nanogel can effectively improve the microfiltration performance of the drilling fluid. Since a combination of a series of nanogels could provide a wide particle size distribution at any temperature, around 200 nm to 800 nm, the self-adaptive plugging capacity of the nanogels for hydrate sediments was revealed. The thermosensitive nanogel is a potential intelligent plugging material for drilling operations in natural gas hydrate-bearing sediments.
Keywords: temperature-sensitive nanogel, NIPAM, self-adaptive plugging performance, drilling operations, hydrate-bearing sediments
Procedia PDF Downloads 170
488 Degradation and Detoxification of Tetracycline by Sono-Fenton and Ozonation
Authors: Chikang Wang, Jhongjheng Jian, Poming Huang
Abstract:
Among a wide variety of pharmaceutical compounds, tetracycline antibiotics are one of the largest groups extensively used in human and veterinary medicine to treat and prevent bacterial infections. Because tetracycline is water soluble, biologically active, stable and bio-refractory, its release to the environment threatens aquatic life and increases the risk posed by antibiotic-resistant pathogens. In practice, due to its antibacterial nature, tetracycline cannot be effectively destroyed by traditional biological methods. Hence, in this study, two advanced oxidation processes, ozonation and sono-Fenton, were applied individually to degrade tetracycline and to investigate their feasibility for tetracycline degradation. The effects of operational variables on tetracycline degradation, the release of nitrogen and the change of toxicity were also investigated. The initial tetracycline concentration was 50 mg/L. To evaluate the efficiency of tetracycline degradation by ozonation, ozone gas was produced by an ozone generator (Model LAB2B, Ozonia) and introduced into the reactor at different flows (25 - 500 mL/min), pH levels (pH 3 - pH 11) and reaction temperatures (15 - 55°C). In the sono-Fenton system, an ultrasonic transducer (Microson VCX 750, USA) operated at 20 kHz combined with H₂O₂ (2 mM) and Fe²⁺ (0.2 mM) was used at different pH levels (pH 3 - pH 11), aeration gases and flows (air and oxygen; 0.2 - 1.0 L/min), tetracycline concentrations (10 - 200 mg/L), reaction temperatures (15 - 55°C) and ultrasonic powers (25 - 200 Watts), respectively. Ultrasound alone was ineffective for tetracycline degradation, with degradation efficiencies lower than 10% after 60 min of reaction. The contribution of Fe²⁺ and H₂O₂ to the degradation of tetracycline was significant: the maximum tetracycline degradation efficiency in the sono-Fenton process was as high as 91.3%, followed by 45.8% mineralization. The effect of the initial pH on tetracycline degradation was insignificant from pH 3 to pH 6 but decreased significantly at pH greater than 7. Increasing the ultrasonic power slightly increased the degradation efficiency of tetracycline, which indicates that hydroxyl radicals dominated the oxidation of tetracycline. The effects of aeration with air or oxygen at different flows and of the reaction temperature were insignificant. Ozonation showed better efficiencies in tetracycline degradation, with the optimum reaction condition found at pH 3, 100 mL O₃/min and 25°C, giving 94% degradation and 60% mineralization. The toxicity of tetracycline was significantly decreased due to its mineralization. In addition, less than 10% of the nitrogen content was released to the solution phase as NH₃-N, and most of the degraded tetracycline could not be fully mineralized to CO₂. The results of this study indicate that both the sono-Fenton process and ozonation can effectively degrade tetracycline and reduce its toxicity under favorable conditions. The costs of the two systems need to be further investigated to assess the feasibility of tetracycline degradation in practice.
Keywords: degradation, detoxification, mineralization, ozonation, sono-Fenton process, tetracycline
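For reference, the degradation and mineralization efficiencies quoted above are conventionally defined from the residual tetracycline concentration and the total organic carbon, respectively (a standard definition assumed here; the abstract does not state the formulas explicitly):

\[ \eta_{\mathrm{degradation}} = \frac{C_0 - C_t}{C_0} \times 100\%, \qquad \eta_{\mathrm{mineralization}} = \frac{\mathrm{TOC}_0 - \mathrm{TOC}_t}{\mathrm{TOC}_0} \times 100\% \]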
Procedia PDF Downloads 268
487 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite was used to predict specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
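To make the path-context idea concrete, the sketch below extracts crude root-to-leaf AST paths and feeds them to a linear classifier as a bag of hashed features. It is an illustrative baseline only, using Python's own ast module and a toy labelled set, not the Java/C++ pipeline, the Code2Vec architecture, or the Devign/Juliet data used in the study.

```python
# Illustrative sketch only: a bag-of-AST-path-contexts baseline for snippet
# classification. It uses Python's ast module on Python snippets, not the
# Java/C++ AST pipeline or the Code2Vec architecture described in the abstract.
import ast
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import LogisticRegression

def path_contexts(source):
    """Return root-to-leaf node-type paths as strings (a crude path-context proxy)."""
    tree = ast.parse(source)
    def walk(node, path):
        children = list(ast.iter_child_nodes(node))
        path = path + [type(node).__name__]
        if not children:
            yield ">".join(path)
        for child in children:
            yield from walk(child, path)
    return list(walk(tree, []))

# Hypothetical toy dataset: (snippet, label) where 1 = risky pattern (command built
# by string concatenation, eval of user input), 0 = benign. Real work would use
# datasets such as Devign or the Juliet Test Suite.
snippets = [
    ("import os\nos.system('ping ' + host)", 1),
    ("import subprocess\nsubprocess.run(['ping', host])", 0),
    ("eval(user_input)", 1),
    ("print(int(user_input))", 0),
]

hasher = FeatureHasher(n_features=256, input_type="string")
X = hasher.transform(path_contexts(code) for code, _ in snippets)
y = [label for _, label in snippets]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(hasher.transform([path_contexts("os.system('rm ' + p)")])))
```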
Procedia PDF Downloads 106
486 Relationship between Functional Properties and Supramolecular Structure of the Poly(Trimethylene 2,5-Furanoate) Based Multiblock Copolymers with Aliphatic Polyethers or Aliphatic Polyesters
Authors: S. Paszkiewicz, A. Zubkiewicz, A. Szymczyk, D. Pawlikowska, I. Irska, E. Piesowicz, A. Linares, T. A. Ezquerra
Abstract:
Over the last century, the world has become increasingly dependent on oil as its main source of chemicals and energy. Driven largely by the strong economic growth of India and China, the demand for oil is expected to increase significantly in the coming years. This growth in demand, combined with diminishing reserves, will require the development of new, sustainable sources of fuels and bulk chemicals. Biomass is an attractive alternative feedstock, as it is a widely available carbon source apart from oil and coal. Nowadays, academic and industrial research in the field of polymer materials is strongly oriented towards bio-based alternatives to petroleum-derived plastics with enhanced properties for advanced applications. In this context, 2,5-furandicarboxylic acid (FDCA), a biomass-based chemical product derived from lignocellulose, is one of the most high-potential bio-based building blocks for polymers and the first candidate to replace petro-derived terephthalic acid. FDCA has been identified as one of the top 12 bio-based chemicals of the future, which may be used as a platform chemical for the synthesis of biomass-based polyesters. The aim of this study is to synthesize and characterize multiblock copolymers containing rigid segments of poly(trimethylene 2,5-furanoate) (PTF) and soft segments of poly(tetramethylene oxide) (PTMO), with excellent elastic properties, or of the aliphatic polyester polycaprolactone (PCL). Two series of PTF-based copolymers, i.e., PTF-block-PTMO-T and PTF-block-PCL-T, with different contents of flexible segments, were synthesized by means of a two-step melt polycondensation process and characterized by various methods. The rigid segments of PTF, as well as the flexible PTMO or PCL ones, were randomly distributed along the chain. On the basis of the 1H NMR, SAXS, WAXS, DSC and DMTA results, one can conclude that the two phases were thermodynamically immiscible and that the values of the phase transition temperatures varied with the composition of the copolymer. The copolymers containing 25, 35 and 45 wt.% of flexible segments (PTMO) exhibited elastomeric characteristics. Moreover, the temperatures corresponding to 5%, 25%, 50% and 90% mass loss, as well as the values of the tensile modulus, decrease with increasing content of aliphatic polyether or aliphatic polyester in the composition.
Keywords: furan based polymers, multiblock copolymers, supramolecular structure, functional properties
Procedia PDF Downloads 129
485 Double Wishbone Pushrod Suspension Systems Co-Simulation for Racing Applications
Authors: Suleyman Ogul Ertugrul, Mustafa Turgut, Serkan Inandı, Mustafa Gorkem Coban, Mustafa Kıgılı, Ali Mert, Oguzhan Kesmez, Murat Ozancı, Caglar Uyulan
Abstract:
In high-performance automotive engineering, the realistic simulation of suspension systems is crucial for enhancing vehicle dynamics and handling. This study focuses on the double wishbone suspension system, prevalent in racing vehicles due to its superior control and stability characteristics. Utilizing MATLAB and Adams Car simulation software, we conduct a comprehensive analysis of displacement behaviors and damper sizing under various dynamic conditions. The initial phase involves using MATLAB to simulate the entire suspension system, allowing for the preliminary determination of damper size based on the system's response under simulated conditions. Following this, manual calculations of wheel loads are performed to assess the forces acting on the front and rear suspensions during scenarios such as braking, cornering, maximum vertical loads, and acceleration. Further dynamic force analysis is carried out using MATLAB Simulink, focusing on the interactions between suspension components during key movements such as bumps and rebounds. This simulation helps in formulating precise force equations and in calculating the stiffness of the suspension springs. To enhance the accuracy of our findings, we focus on a detailed kinematic and dynamic analysis. This includes the creation of kinematic loops, derivation of relevant equations, and computation of Jacobian matrices to accurately determine damper travel and compression metrics. The calculated spring stiffness is crucial in selecting appropriate springs to ensure optimal suspension performance. To validate and refine our results, we replicate the analyses using the Adams Car software, renowned for its detailed handling of vehicular dynamics. The goal is to achieve a robust, reliable suspension setup that maximizes performance under the extreme conditions encountered in racing scenarios. This study exemplifies the integration of theoretical mechanics with advanced simulation tools to achieve a high-performance suspension setup that can significantly improve race car performance, providing a methodology that can be adapted for different types of racing vehicles.
Keywords: FSAE, suspension system, Adams Car, kinematic
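As a rough companion to the spring-stiffness step described above, the sketch below runs the standard ride-frequency and motion-ratio relations with illustrative numbers. The corner mass, target frequency and motion ratio are assumptions for demonstration, not values from the study, and the study's own workflow uses MATLAB/Adams Car rather than Python.

```python
# Back-of-the-envelope sketch of the spring selection step: pick a target ride
# frequency, convert it to a wheel rate, then to a spring rate through the
# motion ratio of the pushrod/rocker linkage. Values are illustrative only.
import math

sprung_mass_corner = 65.0   # kg carried by one corner (hypothetical FSAE-like value)
target_ride_freq = 3.0      # Hz, assumed target ride frequency
motion_ratio = 0.85         # damper travel / wheel travel (assumption)

# Wheel rate needed for the target ride frequency: f = (1/2pi) * sqrt(k_wheel/m)
k_wheel = (2 * math.pi * target_ride_freq) ** 2 * sprung_mass_corner  # N/m

# Spring rate seen at the damper, from the motion ratio: k_spring = k_wheel / MR^2
k_spring = k_wheel / motion_ratio ** 2

print(f"wheel rate  : {k_wheel / 1000:.1f} N/mm")
print(f"spring rate : {k_spring / 1000:.1f} N/mm")
```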
Procedia PDF Downloads 51
484 Chemical, Physical and Microbiological Characteristics of a Texture-Modified Beef-Based 3D Printed Functional Product
Authors: Elvan G. Bulut, Betul Goksun, Tugba G. Gun, Ozge Sakiyan Demirkol, Kamuran Ayhan, Kezban Candogan
Abstract:
Dysphagia, difficulty in swallowing solid foods and thin liquids, is one of the common health threats among the elderly, who require foods with modified texture in their diet. Although there are some commercial food formulations and hydrocolloids to thicken liquid foods for dysphagic individuals, there is still a need to develop and offer new food products with enriched nutritional, textural and sensory characteristics to safely nourish these patients. 3D food printing is an appealing alternative for creating personalized foods for this purpose, with an attractive shape and a soft, homogeneous texture. In order to modify texture and prevent phase separation, hydrocolloids are generally used. In our laboratory, an optimized 3D printed beef-based formulation specifically for people with swallowing difficulties was developed within the research project supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK Project # 218O017). The optimized formulation obtained from response surface methodology was 60% beef powder, 5.88% gelatin, and 0.74% kappa-carrageenan (all on a dry basis). This product was enriched with powders of freeze-dried beet, celery, and red capia pepper, butter, and whole milk. Proximate composition (moisture, fat, protein, and ash contents), pH value, CIE lightness (L*), redness (a*) and yellowness (b*), and color difference (ΔE*) values were determined. Counts of total mesophilic aerobic bacteria (TMAB), lactic acid bacteria (LAB), mold and yeast, and total coliforms were conducted, and detection of coagulase-positive S. aureus, E. coli, and Salmonella spp. was performed. The 3D printed products had 60.11% moisture, 16.51% fat, 13.68% protein, and 1.65% ash, the pH value was 6.19, and the ΔE* value was 3.04. Counts of TMAB, LAB, mold and yeast, and total coliforms before and after 3D printing were 5.23-5.41 log cfu/g, < 1 log cfu/g, < 1 log cfu/g, and 2.39-2.15 log EMS/g, respectively. Coagulase-positive S. aureus, E. coli, and Salmonella spp. were not detected in the products. The data obtained from this study, based on determining some important product characteristics of the functional beef-based formulation, provide an encouraging basis for future research on the subject and should be useful in designing mass production of 3D printed products of similar composition.
Keywords: beef, dysphagia, product characteristics, texture-modified foods, 3D food printing
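For context, the color difference ΔE* reported above is conventionally computed from the CIE L*a*b* coordinates of the sample and a reference; the CIE76 form is assumed here, since the abstract does not state which ΔE* formula was used:

\[ \Delta E^{*} = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}} \]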
Procedia PDF Downloads 111
483 Learning with Music: The Effects of Musical Tension on Long-Term Declarative Memory Formation
Authors: Nawras Kurzom, Avi Mendelsohn
Abstract:
The effects of background music on learning and memory are inconsistent, partly due to the intrinsic complexity and variety of music and partly due to individual differences in music perception and preference. A prominent musical feature that is known to elicit strong emotional responses is musical tension. Musical tension can be brought about by building anticipation of rhythm, harmony, melody, and dynamics. Delaying the resolution of dominant-to-tonic chord progressions, as well as using dissonant harmonies, can elicit feelings of tension, which can, in turn, affect memory formation for concomitant information. The aim of the presented studies was to explore how the formation of declarative memory is influenced by musical tension, brought about within continuous music as well as in the form of isolated chords with varying degrees of dissonance/consonance. The effects of musical tension on long-term memory for declarative information were studied in two ways: 1) by evoking tension within continuous music pieces by delaying the release of harmonic progressions from dominant to tonic chords, and 2) by using isolated single complex chords with various degrees of dissonance/roughness. Musical tension was validated through subjective reports of tension, as well as physiological measurements of the skin conductance response (SCR) and pupil dilation responses to the chords. In addition, music information retrieval (MIR) was used to quantify musical properties associated with tension and its release. Each experiment included an encoding phase, wherein individuals studied stimuli (words or images) under different musical conditions. Memory for the studied stimuli was tested 24 hours later via recognition tasks. In three separate experiments, we found positive relationships between tension perception and the physiological measurements of SCR and pupil dilation. As for memory performance, we found that background music, in general, led to superior memory performance as compared to silence. We detected a trade-off effect between tension perception and memory, such that individuals who perceived musical tension as such displayed reduced memory performance for images encoded during musical tension, whereas tense music benefited memory for those who were less sensitive to the perception of musical tension. Musical tension exerts complex interactions with perception, emotional responses, and cognitive performance in individuals with and without musical training. Delineating the conditions and mechanisms that underlie the interactions between musical tension and memory can benefit our understanding of musical perception at large and of the diverse effects that music has on the ongoing processing of declarative information.
Keywords: musical tension, declarative memory, learning and memory, musical perception
Procedia PDF Downloads 98
482 Assessment of Bisphenol A and 17 α-Ethinyl Estradiol Bioavailability in Soils Treated with Biosolids
Authors: I. Ahumada, L. Ascar, C. Pedraza, J. Montecino
Abstract:
It has been found that the addition of biosolids to soil is beneficial to soil health, enriching the soil with essential nutrient elements. Although this sludge has properties that allow for the improvement of the physical features and productivity of agricultural and forest soils and the recovery of degraded soils, it also contains trace elements, trace organic compounds and pathogens that can cause damage to the environment. The application of these biosolids to land without full reclamation, as well as the use of treated wastewater, can transfer these compounds into terrestrial and aquatic environments, giving rise to potential accumulation in plants. The general aim of this study was to evaluate the bioavailability of bisphenol A (BPA) and 17 α-ethynyl estradiol (EE2) in a soil-biosolid system using wheat (Triticum aestivum) plant assays and a predictive extraction method using a solution of hydroxypropyl-β-cyclodextrin (HPCD), in order to determine whether the latter is a reliable surrogate for the bioassay. Two soils were obtained from the central region of Chile (Lo Prado and Chicauma). Biosolids were obtained from a regional wastewater treatment plant. The soils were amended with biosolids at 90 Mg ha-1. Soils treated with biosolids and spiked with 10 mg kg-1 of EE2 or with 15 mg kg-1 and 30 mg kg-1 of BPA were also included. The BPA and EE2 concentrations were determined in biosolids, soils and plant samples through ultrasound-assisted extraction, solid phase extraction (SPE) and gas chromatography coupled to mass spectrometry (GC/MS). The bioavailable fraction found in each of the soils cultivated with wheat plants was compared with the results obtained through the cyclodextrin biosimulator method. The total concentrations found in the biosolids from the treatment plant were 0.150 ± 0.064 mg kg-1 of EE2 and 12.8 ± 2.9 mg kg-1 of BPA, respectively. BPA and EE2 bioavailability is affected by the organic matter content and the physical and chemical properties of the soil. The bioavailability response of both compounds in the two soils varied with the EE2 and BPA concentration. It was observed that, in the case of EE2, higher concentrations were found in the roots than in the shoots of the wheat plants. The concentration of EE2 increased with increasing biosolids rate. On the other hand, for BPA, a higher concentration was found in the shoots than in the roots of the plants. The predictive capability of the HPCD extraction was assessed using a simple linear correlation test for both compounds in wheat plants. The correlation coefficient between the EE2 values obtained from the HPCD extraction and those obtained from the wheat plants was r = 0.99 (p-value ≤ 0.05). On the other hand, in the case of BPA, a correlation was not found. Therefore, the methodology was validated with respect to the wheat plant bioassays only in the case of EE2. Acknowledgments: The authors thank FONDECYT 1150502.
Keywords: emerging compounds, bioavailability, biosolids, endocrine disruptors
Procedia PDF Downloads 145
481 Effects of Evening vs. Morning Training on Motor Skill Consolidation in Morning-Oriented Elderly
Authors: Maria Korman, Carmit Gal, Ella Gabitov, Avi Karni
Abstract:
The main question addressed in this study was whether the time of day at which training is afforded is a significant factor for motor skill ('how-to', procedural knowledge) acquisition and consolidation into long-term memory in the healthy elderly population. Twenty-nine older adults (60-75 years) practiced an explicitly instructed 5-element key-press sequence by repeatedly generating the sequence ‘as fast and accurately as possible’. The contribution of three parameters to acquisition, 24h post-training consolidation, and 1-week retention gains in motor sequence speed was assessed: (a) time of training (morning vs. evening group), (b) sleep quality (actigraphy) and (c) chronotype. All study participants were moderate morning types according to the Morningness-Eveningness Questionnaire score. All participants had sleep patterns typical of their age, with an average sleep efficiency of ~82% and approximately 6 hours of sleep. The speed of motor sequence performance in both groups improved to a similar extent during the training session. Nevertheless, the evening group expressed small but significant overnight consolidation-phase gains, while the morning group showed only maintenance of the performance level attained at the end of training. By the 1-week retention test, both groups showed similar performance levels, with no significant gains or losses with respect to the 24h test. Changes in the tapping patterns at 24h and 1-week post-training were assessed based on normalized Pearson correlation coefficients, using Fisher's z-transformation, in reference to the tapping pattern attained at the end of the training. Significant differences between the groups were found: the evening group showed larger changes in tapping patterns across the consolidation and retention windows. Our results show that morning-oriented older adults effectively acquired, consolidated, and maintained a new sequence of finger movements following both morning and evening practice sessions. However, the time of training affected the time-course of skill evolution in terms of performance speed, as well as the re-organization of tapping patterns during the consolidation period. These results are in line with the notion that motor training preceding a sleep interval may be beneficial for long-term memory in the elderly. Evening training should be considered an appropriate time window for motor skill learning in older adults, even in individuals with a morning chronotype.
Keywords: time-of-day, elderly, motor learning, memory consolidation, chronotype
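For reference, the normalization mentioned above conventionally applies Fisher's z-transformation to each correlation coefficient r before averaging or comparison (a standard definition assumed here; the exact processing pipeline is not detailed in the abstract):

\[ z = \operatorname{arctanh}(r) = \tfrac{1}{2}\ln\frac{1+r}{1-r} \]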
Procedia PDF Downloads 134
480 Improving Low English Oral Skills of 5 Second-Year English Major Students at Debark University
Authors: Belyihun Muchie
Abstract:
This study investigates the low English oral communication skills of 5 second-year English major students at Debark University. It aims to identify the key factors contributing to their weaknesses and propose effective interventions to improve their spoken English proficiency. Mixed-methods research will be employed, utilizing observations, questionnaires, and semi-structured interviews to gather data from the participants. To clearly identify these factors, structured and informal observations will be employed; the former will be used to assess fluency, pronunciation, vocabulary use, and grammar accuracy, while the latter is suited to observing the natural interactions and communication patterns of learners in the classroom setting. The questionnaires will assess the students' self-perceptions of their skills, perceived barriers to fluency, and preferred learning styles. Interviews will also delve deeper into their experiences and explore specific obstacles faced in oral communication. Data analysis will involve both quantitative and qualitative responses. The structured observation and questionnaire data will be analyzed quantitatively, whereas the informal observation notes and interview transcripts will be analyzed thematically. The findings will be used to identify the major causes of low oral communication skills, such as limited vocabulary, grammatical errors, pronunciation difficulties, or lack of confidence. They will also help in developing targeted solutions addressing these causes, such as intensive pronunciation practice, conversation simulations, personalized feedback, or anxiety-reduction techniques. Finally, the findings will guide the design of an intervention plan for implementation during the action research phase. The study's outcomes are expected to provide valuable insights into the challenges faced by English major students in developing oral communication skills, contribute to the development of evidence-based interventions for improving spoken English proficiency in similar contexts, and offer practical recommendations for English language instructors and curriculum developers to enhance student learning outcomes. By addressing the specific needs of these students and implementing tailored interventions, this research aims to bridge the gap between theoretical knowledge and practical speaking ability, equipping them with the confidence and skills to flourish in English communication settings.
Keywords: oral communication skills, mixed-methods, evidence-based interventions, spoken English proficiency
Procedia PDF Downloads 51
479 Valorization of Mineralogical Byproduct TiO₂ Using Photocatalytic Degradation of Organo-Sulfur Industrial Effluent
Authors: Harish Kuruva, Vedasri Bai Khavala, Tiju Thomas, K. Murugan, B. S. Murty
Abstract:
Industries are growing day by day to increase the economy of the country. The biggest problem with industries is wastewater treatment. Releasing this wastewater directly into rivers is harmful to human life and a threat to aquatic life. Industrial effluents contain many dissolved solids, organic/inorganic compounds, salts, toxic metals, etc. Phenols, pesticides, dioxins, herbicides, pharmaceuticals, and textile dyes are typical classes of industrial effluents and are more challenging to degrade in an eco-friendly manner. Many advanced techniques, such as electrochemical treatment, oxidation processes, and valorization, have been applied to industrial wastewater treatment, but these are not cost-effective. Industrial effluent degradation is complicated compared to that of commercially available pollutants (dyes) like methylene blue, methyl orange, rhodamine B, etc. TiO₂ is one of the most widely used photocatalysts, which can degrade organic compounds using solar light and the moisture available in the environment (organic compounds are converted to CO₂ and H₂O). TiO₂ is widely studied in photocatalysis because of its low cost, non-toxicity, high availability, and chemical and physical stability in the atmosphere. This study mainly focused on valorizing the mineralogical product TiO₂ (IREL, India). This mineralogical-grade TiO₂ was characterized, and its structural and photocatalytic properties (industrial effluent degradation) were compared with those of the commercially available Degussa P-25 TiO₂. It was found that this mineralogical TiO₂ has the best photocatalytic properties (particle shape - spherical, size - 30±5 nm, surface area - 98.19 m²/g, bandgap - 3.2 eV, phase - 95% anatase and 5% rutile). The industrial effluent was characterized by TDS (total dissolved solids), ICP-OES (inductively coupled plasma - optical emission spectroscopy), a CHNS (carbon, hydrogen, nitrogen, and sulfur) analyzer, and FT-IR (Fourier-transform infrared spectroscopy). It was observed that it contains high sulfur (S = 11.37±0.15%), organic compounds (C = 4±0.1%, H = 70.25±0.1%, N = 10±0.1%), heavy metals, and other dissolved solids (60 g/L). The organo-sulfur industrial effluent was then degraded by photocatalysis with the industrial mineralogical product TiO₂. In this study, the industrial effluent pH value (2.5 to 10) and catalyst concentration (50 to 150 mg) were varied, while the effluent concentration (0.5 Abs) and light exposure time (2 h) were kept constant. The best degradation, about 80% of the industrial effluent, was achieved at pH 5 with 150 mg of TiO₂. The FT-IR results and the CHNS analyzer confirmed that the sulfur and organic compounds were degraded.
Keywords: wastewater treatment, industrial mineralogical product TiO₂, photocatalysis, organo-sulfur industrial effluent
Procedia PDF Downloads 116
478 Irish Print Media Framing of Syrian Migration to Ireland in the Irish Times and Irish Independent
Authors: Moufida Benmoussa
Abstract:
Since the escalation of the Syrian conflict in 2011, 6.9 million Syrians have fled to neighbouring countries, and 6.7 million have remained displaced within Syria. Of the 6.9 million who fled Syria, over one million have crossed the Mediterranean Sea and become refugees and asylum seekers in various European countries. As a European country and a member of the EU, the Republic of Ireland was not an exception. In response to the refugee crisis caused mainly by the Syrian displacement, Ireland established the Syrian Humanitarian Admission Programme (SHAP) in 2014 and the Irish Refugee Protection Programme (IRPP) in 2015, followed by its second phase in 2019. In light of these events, the Irish print media played a significant role in covering the Irish government's decisions, political stance, and public opinion on the debate about taking Syrian refugees into Ireland. Considering the tremendous impact of media on politics and public opinion, my research examined how The Irish Times and the Irish Independent framed Syrian migration to Ireland. I adopted a qualitative framing analysis to identify the prominent framings in these two newspapers. The collection of newspaper articles focused on three periods. The first period is from the first of January 2014 to the end of December 2014. During this period, the media covered the launch of the Syrian Humanitarian Admission Programme (SHAP) and stories about the first arrival of Syrian refugees in Ireland. The second period is the year 2015. During this year, various events gained the attention of the Irish media, including Ireland's establishment of the Irish Refugee Protection Programme, the Paris attacks, and the publication of Aylan Kurdi's photograph. The third period is from the first of December 2019 to the thirtieth of January 2020. In this period, the media covered the agreement of Ireland with the UNHCR and the European Union to provide sanctuary to 2900 refugees in the years 2020, 2021, 2022, and 2023. The primary findings of my study indicate that The Irish Times and the Irish Independent's framing of Syrian migration to Ireland was varied and asymmetrical. The dominant frames used by these two newspapers are humanitarian, responsibility, contribution, burden, intruder, and threat. The former three frames perceive Syrian migration to Ireland positively and support the Irish government's decisions to welcome more Syrian refugees. On the other hand, the last three frames perceive Syrian migration and refugees negatively and stand for the principle that Ireland should not take Syrian refugees.
Keywords: framing, Syrian migration, Ireland, newspaper
Procedia PDF Downloads 68
477 A Novel Harmonic Compensation Algorithm for High Speed Drives
Authors: Lakdar Sadi-Haddad
Abstract:
In the past few years, the study of very high-speed electrical drives has seen a resurgence of interest. An inventory of the number of scientific papers and patents dealing with the subject makes this evident. In fact, the democratization of magnetic bearing technology is at the origin of recent developments in high-speed applications. The main advantage of these machines is a much higher power density than the state of the art. Nevertheless, particular attention should be paid to the design of the inverter as well as to control and command. The surface-mounted permanent magnet synchronous machine is the most appropriate technology to address high-speed issues. However, it has the drawback of using a carbon sleeve to contain the magnets, which could otherwise be torn off by the centrifugal forces generated at the rotor periphery. Carbon fiber is well known for its mechanical properties, but it has poor heat conduction. This results in very poor evacuation of the eddy current losses induced in the magnets by time and space stator harmonics. The three-phase inverter is the main harmonic source causing eddy currents in the magnets. In high-speed applications, such harmonics are harmful because, on the one hand, the characteristic impedance is very low and, on the other hand, the ratio between the switching frequency and that of the fundamental is much lower than in the state of the art. To minimize the impact of these harmonics, a first lever is to use a modulation strategy producing low harmonic distortion, while the second is to introduce a sinus filter between the inverter and the machine to smooth the voltage and current waveforms applied to the machine. Nevertheless, in very high-speed machines, the interaction of the processes mentioned above may introduce particular harmonics that can irreversibly damage the system: harmonics at the resonant frequency, harmonics at the shaft mode frequency, subharmonics, etc. Some studies address these issues but treat these phenomena with separate solutions (specific modulation strategies, active damping methods, etc.). The purpose of this paper is to present a completely new active harmonic compensation algorithm based on an improvement of the standard vector control as a global solution to all these issues. This presentation will be based on a complete theoretical analysis of the processes leading to the generation of such undesired harmonics. Then a state of the art of available solutions will be provided before developing the content of the new active harmonic compensation algorithm. The study will be completed by a validation study using simulations and a practical case on a high-speed machine.
Keywords: active harmonic compensation, eddy current losses, high speed machine
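To give a feel for one common building block of such schemes, the sketch below isolates a selected current harmonic with a synchronous-reference-frame (dq) transformation and a low-pass filter, from which a compensation reference can be derived and added to the vector-control output. This is a generic, illustrative construction with synthetic signals and arbitrary parameters; it is not the algorithm proposed in the paper.

```python
# Illustrative sketch (not the paper's algorithm): isolating a targeted current
# harmonic in a rotating reference frame, a common building block of active
# harmonic compensation added on top of standard vector control.
import numpy as np

f1, fh = 1000.0, 5000.0          # fundamental and targeted harmonic (Hz), illustrative
fs = 200_000.0                   # sampling frequency (Hz)
t = np.arange(0, 0.01, 1 / fs)

# Synthetic phase currents: fundamental plus a small positive-sequence harmonic.
ia = np.cos(2 * np.pi * f1 * t) + 0.05 * np.cos(2 * np.pi * fh * t)
ib = np.cos(2 * np.pi * f1 * t - 2 * np.pi / 3) + 0.05 * np.cos(2 * np.pi * fh * t - 2 * np.pi / 3)
ic = -(ia + ib)

# Clarke transform (amplitude-invariant) to the stationary alpha-beta frame.
ialpha = (2 / 3) * (ia - 0.5 * ib - 0.5 * ic)
ibeta = (2 / 3) * (np.sqrt(3) / 2) * (ib - ic)

# Park transform rotating at the harmonic frequency: the targeted harmonic
# appears as a DC component in this frame and is isolated with a low-pass filter.
theta_h = 2 * np.pi * fh * t
id_h = np.cos(theta_h) * ialpha + np.sin(theta_h) * ibeta
iq_h = -np.sin(theta_h) * ialpha + np.cos(theta_h) * ibeta

def lowpass(x, alpha=0.01):
    """First-order IIR low-pass filter (exponential moving average)."""
    y = np.zeros_like(x)
    for k in range(1, len(x)):
        y[k] = y[k - 1] + alpha * (x[k] - y[k - 1])
    return y

id_dc, iq_dc = lowpass(id_h), lowpass(iq_h)

# The filtered dq values estimate the harmonic content; a compensation reference
# (here simply proportional, with an arbitrary gain) is rotated back to the
# stationary frame and would be added to the vector-control voltage output.
k_comp = 10.0
v_comp_alpha = k_comp * (np.cos(theta_h) * id_dc - np.sin(theta_h) * iq_dc)
v_comp_beta = k_comp * (np.sin(theta_h) * id_dc + np.cos(theta_h) * iq_dc)
print(v_comp_alpha[-1], v_comp_beta[-1])
```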
Procedia PDF Downloads 395
476 Vibrational Spectra and Nonlinear Optical Investigations of a Chalcone Derivative (2e)-3-[4-(Methylsulfanyl) Phenyl]-1-(3-Bromophenyl) Prop-2-En-1-One
Authors: Amit Kumar, Archana Gupta, Poonam Tandon, E. D. D’Silva
Abstract:
Nonlinear optical (NLO) materials are key materials for the fast processing of information and for optical data storage applications. In the last decade, materials showing nonlinear optical properties have been the object of increasing attention from both experimental and computational points of view. Chalcones are one of the most important classes of cross-conjugated NLO chromophores; they are reported to exhibit good SHG efficiency and ultrafast optical nonlinearities and are easily crystallizable. The basic structure of chalcones is based on a π-conjugated system in which two aromatic rings are connected by a three-carbon α,β-unsaturated carbonyl system. Due to the overlap of π orbitals, delocalization of the electronic charge distribution leads to a high mobility of the electron density. On a molecular scale, the extent of charge transfer across the NLO chromophore determines the level of SHG output. Hence, the functionalization of both ends of the π-bond system with appropriate electron donor and acceptor groups can enhance the asymmetric electronic distribution in either or both the ground and excited states, leading to an increased optical nonlinearity. In this research, an experimental and theoretical study on the structure and vibrations of (2E)-3-[4-(methylsulfanyl) phenyl]-1-(3-bromophenyl) prop-2-en-1-one (3Br4MSP) is presented. The FT-IR and FT-Raman spectra of the NLO material in the solid phase have been recorded. Density functional theory (DFT) calculations at the B3LYP level with the 6-311++G(d,p) basis set were carried out to study the equilibrium geometry, vibrational wavenumbers, infrared absorbance and Raman scattering activities. The interpretation of vibrational features (normal mode assignments, for instance) receives invaluable aid from DFT calculations, which provide a quantum-mechanical description of the electronic energies and forces involved. Perturbation theory allows one to obtain the vibrational normal modes by estimating the derivatives of the Kohn-Sham energy with respect to atomic displacements. The molecular hyperpolarizability β plays a chief role in the NLO properties, and a systematic study of β has been carried out. Furthermore, the first-order hyperpolarizability (β) and the related properties, such as the dipole moment (μ) and polarizability (α), of the title molecule are evaluated by the Finite Field (FF) approach. The electronic α and β of the studied molecule are 41.907×10-24 and 79.035×10-24 e.s.u., respectively, indicating that 3Br4MSP can be used as a good nonlinear optical material.
Keywords: DFT, MEP, NLO, vibrational spectra
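For reference, the total quantities quoted above are conventionally obtained from the finite-field tensor components through the standard relations (assumed here; the abstract does not reproduce the formulas):

\[ \mu = \left(\mu_x^{2}+\mu_y^{2}+\mu_z^{2}\right)^{1/2}, \qquad \alpha = \tfrac{1}{3}\left(\alpha_{xx}+\alpha_{yy}+\alpha_{zz}\right) \]
\[ \beta_{\mathrm{tot}} = \left[\left(\beta_{xxx}+\beta_{xyy}+\beta_{xzz}\right)^{2}+\left(\beta_{yyy}+\beta_{yxx}+\beta_{yzz}\right)^{2}+\left(\beta_{zzz}+\beta_{zxx}+\beta_{zyy}\right)^{2}\right]^{1/2} \]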
Procedia PDF Downloads 221
475 Clustering-Based Computational Workload Minimization in Ontology Matching
Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris
Abstract:
In order to build a matching pattern for each class correspondence of an ontology, it is required to specify a set of attribute correspondences across two corresponding classes by clustering. Clustering reduces the size of the potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching. This problem makes the ontology matching activity computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the size of the potential element correspondences during mapping, thereby reducing the computational workload of the matching process as a whole. The objectives of this work are (1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and (2) to discover element correspondences by classifying the elements of each class based on the elements' value features using the K-medoids clustering technique, as sketched below. Discovering attribute correspondences is highly required for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets should be compared with respect to their attribute values, so that they can be regarded as the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other. Besides, any two instances that hold more similar attribute values are more likely to be matched than ones with less similar attribute values. Most of the time, similar attribute values exist in two instances across which there is an attribute correspondence. This work presents how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We also show how to map the attributes of a clustered group to the attributes of the mapped clustered group, generating a set of potential attribute correspondences that is applied to generate a matching pattern. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs used for comparing instances, as only attribute pairs whose coverage probability reaches 100% and attributes above the specified threshold are considered as potential attributes for a matching. Using clustering reduces the size of the potential element correspondences to be considered during the mapping activity, which in turn reduces the computational workload significantly. Otherwise, all elements of a class in the source ontology have to be compared with all elements of the corresponding classes in the target ontology. K-medoids can ably cluster the attributes of each class, so that a proportion of the attribute pairs that are not corresponding is not considered when constructing the matching pattern.Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching
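A minimal sketch of the K-medoids step is given below, assuming each attribute is described by a small vector of statistical value features; the feature choices and data are invented for illustration and do not come from the paper.

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Plain K-medoids (Voronoi-iteration style) on a feature matrix X.

    Each row of X is a vector of statistical value features for one
    attribute (e.g. numeric ratio, mean value length); attributes that land
    in corresponding clusters across the two ontologies become candidate
    correspondences.
    """
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(d[:, medoids], axis=1)               # assign to nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):
                # the member minimizing total within-cluster distance becomes the medoid
                costs = d[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

# Toy attribute feature vectors (hypothetical): [numeric ratio, mean value length]
X = np.array([[0.95, 4.0], [0.90, 3.5], [0.05, 12.0], [0.10, 11.0], [0.0, 25.0]])
labels, medoids = k_medoids(X, k=2)
print(labels, medoids)
```

Only attribute pairs whose clusters correspond across the two ontologies would then be passed on to the coverage-probability and threshold checks described in the abstract.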
Procedia PDF Downloads 248474 A User-Directed Approach to Optimization via Metaprogramming
Authors: Eashan Hatti
Abstract:
In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance; high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer – and, in turn, program performance – falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This allows the optimization workload to be taken off the compiler developers’ hands and given to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.Keywords: optimization, metaprogramming, logic programming, abstraction
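Peridot itself is not documented here, so the following Python toy is only an illustration of the underlying idea that optimizations can live outside the compiler as metaprograms: rewrite rules over an expression tree, matched by a small unification routine. The term encoding and the rules are invented for this sketch.

```python
# Toy "library-defined" optimizer: rules are data, matching is unification.
Var = lambda name: ('var', name)   # pattern hole

def unify(pattern, term, subst):
    """Match `pattern` (which may contain ('var', name) holes) against `term`."""
    if isinstance(pattern, tuple) and pattern[0] == 'var':
        bound = subst.get(pattern[1])
        if bound is None:
            return {**subst, pattern[1]: term}
        return subst if bound == term else None
    if isinstance(pattern, tuple) and isinstance(term, tuple):
        if len(pattern) != len(term):
            return None
        for p, t in zip(pattern, term):
            subst = unify(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(template, subst):
    if isinstance(template, tuple) and template[0] == 'var':
        return subst[template[1]]
    if isinstance(template, tuple):
        return tuple(substitute(t, subst) for t in template)
    return template

# Rules a library author might ship: x*1 -> x, x+0 -> x, x*0 -> 0
RULES = [
    (('mul', Var('x'), 1), Var('x')),
    (('add', Var('x'), 0), Var('x')),
    (('mul', Var('x'), 0), 0),
]

def optimize(expr):
    if isinstance(expr, tuple) and expr[0] != 'var':
        expr = (expr[0],) + tuple(optimize(e) for e in expr[1:])  # rewrite children first
    for pattern, rhs in RULES:
        s = unify(pattern, expr, {})
        if s is not None:
            return optimize(substitute(rhs, s))
    return expr

print(optimize(('add', ('mul', ('var', 'y'), 1), 0)))   # -> ('var', 'y')
```

A real system like the one described would add declarative binding handling and non-deterministic rule search on top of this pattern, but the core "optimizations as rewrite metaprograms" idea is the same.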
Procedia PDF Downloads 87473 Variability of the X-Ray Sun during Descending Period of Solar Cycle 23
Authors: Zavkiddin Mirtoshev, Mirabbos Mirkamalov
Abstract:
We have analyzed the time series of full-disk integrated soft X-ray (SXR) and hard X-ray (HXR) emission from the solar corona from 2004 January 1 to 2009 December 31, covering the descending phase of solar cycle 23. We employed the daily X-ray index (DXI) derived from X-ray observations of the Solar X-ray Spectrometer (SOXS) mission in four different energy bands: 4-5.5 and 5.5-7.5 keV (SXR) and 15-20 and 20-25 keV (HXR). The application of the Lomb-Scargle periodogram technique to the DXI time series observed by the silicon detector in these energy bands reveals several short and intermediate periodicities of the X-ray corona. The DXI explicitly shows periods of 13.6 days, 26.7 days, 128.5 days, 151 days, 180 days, 220 days, 270 days, 1.24 years and 1.54 years in the SXR as well as in the HXR energy bands. Although all periods are above the 70% confidence level in all energy bands, they show stronger power in HXR emission in comparison to SXR emission. These periods are distinctly clear in three bands but not unambiguously clear in the 5.5-7.5 keV band. This might be due to the presence of Fe and Fe/Ni line features, which frequently vary with small-scale flares such as micro-flares. The regular 27-day rotation and the 13.5-day period of sunspots from the invisible side of the Sun are found to be stronger in the HXR band than in the SXR band. However, the flare-activity Rieger periods (150 and 180 days) and the near-Rieger period of 220 days are very strong in HXR emission, which is very much expected. On the other hand, our current study reveals a strong 270-day periodicity in SXR emission which may be connected with the tachocline, similar to a fundamental rotation period of the Sun. The 1.24-year and 1.54-year periodicities revealed in the present research work are well observable in both the SXR and the HXR channels. These long-term periodicities must also be connected with the tachocline and should be regarded as a consequence of variation in rotational modulation over long time scales. The 1.24-year and 1.54-year periods are also of great importance and significance for the formation and evolution of life on the Earth, and therefore they also have great astrobiological importance. We gratefully acknowledge support by the Indian Centre for Space Science and Technology Education in Asia and the Pacific (CSSTEAP, the Centre is affiliated to the United Nations) and the Physical Research Laboratory (PRL) at Ahmedabad, India. This work was done under the supervision of Prof. Rajmal Jain, and the paper contains material from the pilot project and the research part of the M.Tech. program carried out during the Space and Atmospheric Science Course.Keywords: corona, flares, solar activity, X-ray emission
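To make the periodogram step concrete, here is a minimal sketch using SciPy's Lomb-Scargle implementation on a synthetic, unevenly sampled stand-in for the DXI; the injected 27-day and 151-day signals and all numbers are illustrative, not the mission data.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic daily X-ray index (DXI) stand-in: a 27-day rotational signal plus
# a 151-day Rieger-type modulation and noise, sampled with gaps as real
# solar X-ray data would be.
rng = np.random.default_rng(1)
t = np.sort(rng.choice(np.arange(0.0, 2191.0), size=1600, replace=False))  # days, 2004-2009
dxi = (1.0 + 0.4 * np.sin(2 * np.pi * t / 27.0)
           + 0.3 * np.sin(2 * np.pi * t / 151.0)
           + 0.2 * rng.standard_normal(t.size))

periods = np.linspace(5.0, 600.0, 3000)            # trial periods in days
ang_freqs = 2 * np.pi / periods                    # lombscargle expects angular frequencies
power = lombscargle(t, dxi - dxi.mean(), ang_freqs)

# Strongest trial periods (neighbouring bins may belong to the same peak).
for p in periods[np.argsort(power)[-5:]]:
    print(f"candidate period ~ {p:.1f} days")
```

Confidence levels such as the 70% threshold quoted above are then attached by comparing the peak powers against the false-alarm statistics of the periodogram.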
Procedia PDF Downloads 345472 Room Temperature Electron Spin Resonance and Raman Study of Nanocrystalline Zn(1-x)Cu(x)O (0.005 < x < 0.05) Synthesized by Pyrophoric Method
Authors: Jayashree Das, V. V. Srinivasu, D. K. Mishra, A. Maity
Abstract:
Owing to their important potential applications, transition metal (TM: Mn, Fe, Ni, Cu, Cr, V, etc.) doped ZnO-based diluted magnetic semiconductors (DMS) have attracted research attention for decades and continue to invite new investigations. One of the interesting aspects of these materials is the proper study and understanding of their magnetic properties at room temperature, which is crucial for selecting a material for any related application. In this regard, electron spin resonance (ESR) has proven to be a powerful technique for investigating the spin dynamics of the electrons inside the system, which are responsible for its magnetic behaviour. ESR, as well as Raman and photoluminescence spectroscopy, is also helpful for studying the defects present or created inside the system in the form of oxygen vacancies or clusters, which are instrumental in determining the room-temperature ferromagnetic properties of transition metal doped ZnO; these can be controlled through the dopant concentration, an appropriate synthesis technique and the sintering of the samples. For our investigation, we synthesised Cu-doped ZnO nanocrystalline samples with composition Zn(1-x)Cu(x)O (0.005 < x < 0.05) by the pyrophoric method and sintered them at a low temperature of 650 °C. The microwave absorption was studied by X-band (9.46 GHz) electron spin resonance (ESR) at room temperature. Systematic analysis of the obtained ESR spectra reveals that all compositions of the Cu-doped ZnO samples exhibit resonance signals of appreciable line width and a g value of ~2.2, a typical characteristic of ferromagnetism in the samples. Raman scattering and photoluminescence studies performed on the samples clearly indicated the presence of pronounced defect-related peaks in the respective spectra. Cu doping in ZnO at varying concentrations was also observed to affect the optical band gap and the respective absorption edges in the UV-Vis spectra. FTIR spectroscopy reveals the effect of Cu doping on the stretching bonds of ZnO. To probe the structural and morphological changes incurred by Cu doping, we performed XRD, SEM and EDX studies, which confirm adequate Cu substitution without any significant impurity phase formation or lattice disorder. With proper explanation, we attempt to correlate the results observed for the structural, optical and magnetic behaviour of the Cu-doped ZnO samples. We also claim that our results can be instrumental for appropriate applications of transition metal doped ZnO-based DMS in the fields of optoelectronics and spintronics.Keywords: diluted magnetic semiconductors, electron spin resonance, Raman scattering, spintronics
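For orientation, a g value such as the one quoted above follows from the standard ESR resonance condition; the short sketch below applies it at the X-band frequency used in the study, with an assumed resonance field that is not taken from the paper.

```python
# Back-of-the-envelope ESR relation used to read a g value off an X-band
# spectrum: h*nu = g * mu_B * B_res.
H = 6.62607015e-34        # Planck constant, J s
MU_B = 9.2740100783e-24   # Bohr magneton, J/T
NU = 9.46e9               # X-band microwave frequency, Hz

def g_factor(b_res_tesla):
    """g value from the resonance field of the ESR line."""
    return H * NU / (MU_B * b_res_tesla)

b_res = 0.307             # assumed resonance field in tesla (~3070 G), illustrative only
print(f"g ~ {g_factor(b_res):.2f}")   # ~2.2, comparable to the ferromagnetic-like signal reported
```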
Procedia PDF Downloads 312471 Modification of Aliphatic-Aromatic Copolyesters with Polyether Block for Segmented Copolymers with Elastothemoplastic Properties
Authors: I. Irska, S. Paszkiewicz, D. Pawlikowska, E. Piesowicz, A. Linares, T. A. Ezquerra
Abstract:
Due to a number of advantages, such as high tensile strength, sensitivity to hydrolytic degradation, and biocompatibility, poly(lactic acid) (PLA) is one of the most common polyesters for biomedical and pharmaceutical applications. However, PLA is a rigid, brittle polymer with a low heat distortion temperature and a slow crystallization rate. In order to broaden the range of PLA applications, it is necessary to improve these properties. In recent years, a number of new strategies have evolved to obtain PLA-based materials with improved characteristics, including manipulation of crystallinity, plasticization, blending, and incorporation into block copolymers. Among these methods, the synthesis of aliphatic-aromatic copolyesters has been attracting considerable attention, as they may combine the mechanical performance of aromatic polyesters with the biodegradability known from aliphatic ones. Given the need for highly flexible biodegradable polymers, in this contribution a series of aromatic-aliphatic copolyesters based on poly(butylene terephthalate) and poly(lactic acid) (PBT-b-PLA), exhibiting superior mechanical properties, was copolymerized with an additional poly(tetramethylene oxide) (PTMO) soft block. The structure and properties of both series were characterized by means of attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR), nuclear magnetic resonance spectroscopy (¹H NMR), differential scanning calorimetry (DSC), wide-angle X-ray scattering (WAXS) and dynamic mechanical thermal analysis (DMTA). Moreover, the related changes in tensile properties have been evaluated and discussed. Lastly, the viscoelastic properties of the synthesized poly(ester-ether) copolymers were investigated in detail by step-cycle tensile tests. The block lengths decreased as the treatment advanced, and block-random diblock terpolymers of (PBT-ran-PLA)-b-PTMO were obtained. DSC and DMTA analyses confirmed unambiguously that the synthesized poly(ester-ether) copolymers are microphase-separated systems. The introduction of polyether co-units resulted in a decrease in the degree of crystallinity and the melting temperature. X-ray diffraction patterns revealed that only the PBT blocks are able to crystallize. The mechanical properties of the (PBT-ran-PLA)-b-PTMO copolymers are the result of a unique arrangement of immiscible hard and soft blocks, providing both strength and elasticity.Keywords: aliphatic-aromatic copolymers, multiblock copolymers, phase behavior, thermoplastic elastomers
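Where the DSC-derived decrease in crystallinity is discussed, the usual working formula normalizes the measured melting enthalpy by the weight fraction of the crystallizable PBT block. A minimal sketch follows, assuming a literature value of about 140 J/g for fully crystalline PBT; the sample enthalpies and weight fractions are made up for illustration.

```python
# Degree of crystallinity from DSC, referred to the crystallizable PBT block:
#   Xc = dHm / (w_PBT * dHm0) * 100
DH_M0_PBT = 140.0   # J/g, assumed literature value for 100% crystalline PBT

def crystallinity(dh_m, w_pbt, dh_m0=DH_M0_PBT):
    """Degree of crystallinity (%) of the PBT phase from the melting enthalpy."""
    return 100.0 * dh_m / (w_pbt * dh_m0)

# Hypothetical samples before/after adding the PTMO soft block
for dh_m, w_pbt in [(28.0, 0.70), (18.0, 0.55)]:
    print(f"dHm = {dh_m} J/g, w_PBT = {w_pbt}: Xc ~ {crystallinity(dh_m, w_pbt):.1f} %")
```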
Procedia PDF Downloads 138470 Miniaturized PVC Sensors for Determination of Fe2+, Mn2+ and Zn2+ in Buffalo-Cows’ Cervical Mucus Samples
Authors: Ahmed S. Fayed, Umima M. Mansour
Abstract:
Three polyvinyl chloride membrane sensors were developed for the electrochemical evaluation of ferrous, manganese and zinc ions. The sensors were used for assaying these metal ions in the cervical mucus (CM) of Egyptian river buffalo-cows (Bubalus bubalis), as their levels vary with the cyclical hormone variation during the different phases of the estrus cycle. The presented sensors are based on the ionophores β-cyclodextrin (β-CD), hydroxypropyl β-cyclodextrin (HP-β-CD) and sulfocalix[4]arene (SCAL) for sensors 1, 2 and 3, responding to Fe2+, Mn2+ and Zn2+, respectively. Dioctyl phthalate (DOP) was used as the plasticizer in a polymeric matrix of polyvinyl chloride (PVC). To increase the selectivity and sensitivity of the sensors, each sensor was enriched with a suitable complexing agent, which enhanced the sensor's response. For sensor 1, β-CD was mixed with bathophenanthroline; for sensor 2, porphyrin was incorporated with HP-β-CD; while for sensor 3, oxine was used as the complexing agent with SCAL. Linear responses over 10⁻⁷ to 10⁻² M with cationic slopes of 53.46, 45.01 and 50.96 mV per decade over the pH range 4-8 were obtained using coated graphite sensors for ferrous, manganese and zinc ionic solutions, respectively. The three sensors were validated according to the IUPAC guidelines. The results obtained by the presented potentiometric procedures were statistically analyzed and compared with those obtained by an atomic absorption spectrophotometric method (AAS). No significant differences in either accuracy or precision were observed between the two techniques. The sensors were successfully applied to the determination of the three studied cations in CM for the purpose of determining the proper time for artificial insemination (AI). The results were compared with those obtained upon analyzing the samples by AAS. Proper detection of estrus and correct timing of AI are necessary to maximize the production of buffaloes. In this experiment, 30 multiparous buffalo-cows in their second to third lactation and weighing 415-530 kg were synchronized with the OVSynch protocol. Samples were taken at three times around ovulation: on day 8 of the OVSynch protocol, on day 9 (20 h before AI) and on day 10 (1 h before AI). Besides the analysis of the trace elements (Fe2+, Mn2+ and Zn2+) in CM using the three sensors, the three cations and also Cu2+ were analyzed by AAS in the CM and blood samples. The results obtained were correlated with the hormonal analysis of serum samples and with ultrasonography for the purpose of determining the optimum time of AI. The results showed significant differences and a strong correlation between the Zn2+ content of CM during the heat phase and the ovulation time, indicating that this parameter could be used as a tool to decide the optimal time of AI in buffalo-cows.Keywords: PVC Sensors, buffalo-cows, cyclodextrins, atomic absorption spectrophotometry, artificial insemination, OVSynch protocol
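Slopes of the kind quoted above are obtained by regressing the measured electrode potential on the logarithm of concentration over the linear range; a small sketch with synthetic calibration data (not the study's measurements) is shown below.

```python
import numpy as np

# Sketch of a potentiometric calibration: fit EMF against log10(concentration)
# over the linear range (1e-7 to 1e-2 M here) and report the slope in mV per
# decade.  The potentials are synthetic, generated around a 53 mV/decade line.
conc = np.logspace(-7, -2, 6)                              # mol/L
rng = np.random.default_rng(3)
emf = 210.0 + 53.0 * np.log10(conc) + rng.normal(0, 0.8, conc.size)  # mV, hypothetical

slope, intercept = np.polyfit(np.log10(conc), emf, 1)
print(f"slope ~ {slope:.1f} mV/decade, E0 ~ {intercept:.1f} mV")
# For reference, the ideal Nernstian slope is 59.16/n mV per decade at 25 °C
# for an n-charged cation.
```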
Procedia PDF Downloads 219469 Acrylic Microspheres-Based Microbial Bio-Optode for Nitrite Ion Detection
Authors: Siti Nur Syazni Mohd Zuki, Tan Ling Ling, Nina Suhaity Azmi, Chong Kwok Feng, Lee Yook Heng
Abstract:
Nitrite (NO2-) ion is used prevalently as a preservative in processed meat. Elevated levels of nitrite are also found in edible bird's nests (EBNs). Consumption of NO2- ion at levels above the health-based limit may cause cancer in humans. The spectrophotometric Griess test is the simplest established standard method for NO2- ion detection; however, it requires careful control of the pH of each reaction step and is susceptible to strong oxidants and dye interferences. Other traditional methods rely on the use of laboratory-scale instruments such as GC-MS, HPLC and ion chromatography, which cannot give a real-time response. There is therefore a significant need for devices capable of measuring the nitrite concentration in situ, rapidly, and without reagents, sample pretreatment or extraction steps. Herein, we constructed a microspheres-based microbial optode for the visual quantitation of NO2- ion. Raoultella planticola, a bacterium expressing the NAD(P)H nitrite reductase (NiR) enzyme, was successfully isolated by microbial techniques from EBN collected from a local birdhouse. The whole cells and the lipophilic Nile Blue chromoionophore were physically adsorbed on photocurable poly(n-butyl acrylate-N-acryloxysuccinimide) [poly(nBA-NAS)] microspheres, whilst the reduced coenzyme NAD(P)H was covalently immobilized on the succinimide-functionalized acrylic microspheres to produce a reagentless biosensing system. When the NiR enzyme catalyzes the oxidation of NAD(P)H to NAD(P)+, the NO2- ion is reduced to ammonium hydroxide, and a colour change of the immobilized Nile Blue chromoionophore from blue to pink is perceived as a result of the deprotonation reaction as the local pH in the microsphere membrane increases. The microspheres-based optosensor was optimized with a reflectance spectrophotometer at 639 nm and pH 8. The resulting microbial bio-optode membrane could quantify NO2- ion at 0.1 ppm and had a linear response up to 400 ppm. Due to the large surface-area-to-mass ratio of the acrylic microspheres, the sensor allows efficient solid-state diffusional mass transfer of the substrate to the bio-recognition phase and achieves a steady-state response in as little as 5 min. The proposed optical microbial biosensor requires no sample pre-treatment step and possesses high stability, as the whole-cell biocatalyst protects the enzymes from interfering substances; hence, it is suitable for measurements in contaminated samples.Keywords: acrylic microspheres, microbial bio-optode, nitrite ion, reflectometric
Procedia PDF Downloads 448468 Historical Development of Negative Emotive Intensifiers in Hungarian
Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges
Abstract:
In this study, an exhaustive analysis of the historical development of negative emotive intensifiers in the Hungarian language was carried out via NLP methods. Intensifiers are linguistic elements which modify or reinforce a variable character in the lexical unit they apply to. Therefore, intensifiers appear with other lexical items, such as adverbs, adjectives and verbs, and infrequently with nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), there are many lexical items which can operate as intensifiers. The group of intensifiers is admittedly one of the most rapidly changing sets of elements in the language. From a linguistic point of view, a particularly interesting special group of intensifiers are the so-called negative emotive intensifiers, which, on their own, without context, have semantic content that can be associated with negative emotion, but in particular cases may function as intensifiers (e.g. borzasztóan jó ’awfully good’, which means ’excellent’). Despite their special semantic features, negative emotive intensifiers have scarcely been examined in the literature on the basis of large historical corpora via NLP methods. In order to become better acquainted with trends over time concerning these intensifiers, we exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame. Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the ‘magyarlanc’ NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. The outcomes of the research reveal in detail how these words have proceeded through grammaticalization over time, i.e., they change from lexical elements to grammatical ones and slowly go through a delexicalization process (their negative content diminishes over time). What is more, it was also pointed out which negative emotive intensifiers are at the same stage of this process in the same time period. Taking a closer look at the different domains of the analysed corpus, it also became clear that during this process the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion to a certain degree.Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time
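As an illustration of the frequency step, the sketch below counts occurrences of a few sample intensifiers per decade in a toy corpus of (year, text) records; the lexicon entries and sentences are invented, and the real pipeline additionally relies on the magyarlanc toolkit for POS-tagging and lemmatization before counting.

```python
import re
from collections import Counter

# Sample lexicon of negative emotive intensifiers (invented subset)
INTENSIFIERS = ["borzasztóan", "rettenetesen", "szörnyen"]

# Toy corpus: (year, sentence) pairs standing in for the historical corpus
corpus = [
    (1854, "A kert borzasztóan szép volt."),
    (1901, "Ez a ház rettenetesen nagy."),
    (1987, "Szörnyen jó filmet láttunk tegnap."),
]

counts = Counter()
for year, text in corpus:
    decade = (year // 10) * 10
    for word in INTENSIFIERS:
        hits = len(re.findall(rf"\b{re.escape(word)}\b", text, flags=re.IGNORECASE))
        if hits:
            counts[(decade, word)] += hits

for (decade, word), n in sorted(counts.items()):
    print(f"{decade}s  {word}: {n}")
```

Aggregating such counts per decade, and pairing them with the collocating adjective or adverb, is what makes the grammaticalization and delexicalization trends visible over the 1772-2010 span.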
Procedia PDF Downloads 233467 Decision-Making Process Based on Game Theory in the Process of Urban Transformation
Authors: Cemil Akcay, Goksun Yerlikaya
Abstract:
Buildings are the living spaces of people and play an active role in every aspect of life in today's world. While some structures have survived from the early ages, most of the buildings that completed their lifetime have not been carried over to the present day. Nowadays, buildings that do not meet the social, economic, and safety requirements of the age return to life through a transformation process. This transformation is called urban transformation. Urban transformation is the renewal of areas at risk of disaster together with the technological infrastructure required by the structures. The transformation aims to prevent damage from earthquakes and other disasters by rebuilding buildings that are not earthquake-resistant and have completed their economic life. It is essential to decide on the issues related to conversion and transformation in places such as Istanbul, where most of the building stock lies in the first-degree earthquake belt and should be transformed. In urban transformation, the property owners, the local authority, and the contractor must reach an agreement at a common point. Considering that the transformation areas sometimes contain hundreds of thousands of property owners, it is evident how difficult it is to reach a deal and decide. For the optimization of these decisions, the use of game theory is foreseen. The main problem addressed in this study is whether the urban transformation should be carried out in place or whether the building or buildings should be moved to a different location. The Istanbul University Cerrahpaşa Medical Faculty Campus, which involves many stakeholders and is planned to undergo urban transformation, was taken as the case on which the game theory applications were tried. An analysis of the decisions taken on a real urban transformation project, and of the logical suitability of decisions taken without the use of game theory, was also carried out using game theory. In each step of this study, the many decision-makers were classified according to a specific logical sequence; in the game trees that emerged as a result of this classification, Nash equilibria were sought, and optimum decisions were determined. All decisions taken for this project were subjected to two clearly differentiated comparisons, with and without the use of game theory, and according to the results, solutions for the decision phase of the urban transformation process are introduced. A game theory model was developed covering the urban transformation process from beginning to end, particularly as a solution to the difficulty of making rational decisions in large-scale projects with many participants in the decision-making process. The use of such a decision-making mechanism can provide an optimum answer to the demands of the stakeholders. For today's construction sector, it is unsurprising that game theory addresses what will be among the most critical issues of the coming years: planning and making the right decision.Keywords: urban transformation, game theory, decision making, multi-actor project
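To make the equilibrium search concrete, the sketch below enumerates the pure-strategy Nash equilibria of a toy two-player normal-form game between property owners and a contractor, each choosing between transforming in place and relocating; the payoff numbers are illustrative and not drawn from the Cerrahpaşa campus case.

```python
import numpy as np
from itertools import product

strategies = ["in_place", "relocate"]
# payoff[i][j] = (owner payoff, contractor payoff) when the owner plays
# strategy i and the contractor plays strategy j (hypothetical values).
payoff = np.array([[(4, 3), (1, 2)],
                   [(2, 1), (3, 4)]])

def pure_nash(payoff):
    """Return all pure-strategy Nash equilibria of a bimatrix game."""
    eqs = []
    n_rows, n_cols = payoff.shape[:2]
    for i, j in product(range(n_rows), range(n_cols)):
        owner_best = payoff[i, j, 0] >= payoff[:, j, 0].max()        # no profitable row deviation
        contractor_best = payoff[i, j, 1] >= payoff[i, :, 1].max()   # no profitable column deviation
        if owner_best and contractor_best:
            eqs.append((strategies[i], strategies[j]))
    return eqs

print(pure_nash(payoff))   # e.g. [('in_place', 'in_place'), ('relocate', 'relocate')]
```

With these illustrative payoffs the game has two coordination equilibria, which mirrors the practical difficulty described above: the outcome depends on whether the many stakeholders can be steered towards the same choice.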
Procedia PDF Downloads 140466 Growth and Yield Response of an Indian Wheat Cultivar (HD 2967) to Ozone and Water Stress in Open-Top Chambers with Emphasis on Its Antioxidant Status, Photosynthesis and Nutrient Allocation
Authors: Annesha Ghosh, S. B. Agrawal
Abstract:
The agricultural sector is facing a serious threat due to climate change and the exacerbation of different atmospheric pollutants. Tropospheric ozone (O₃) is considered a dynamic air pollutant imposing substantial phytotoxicity on natural vegetation and agriculture worldwide. Naturally, plants are exposed to different environmental factors and their interactions. Amongst such interactions, studies related to O₃ and water stress are still rare. In the present experiment, the wheat cultivar HD 2967 was grown in open-top chambers (OTCs) under two O₃ concentrations, ambient O₃ (A) and elevated O₃ (E; ambient + 20 ppb O₃), along with two different water supplies, well-watered (W) and 50% water-stressed (WS) conditions, with the aim of assessing the individual and interactive effects of the two most prevailing stress factors in the Indo-Gangetic Plains of India. Exposure to the elevated O₃ dose caused early senescence symptoms and a reduction in the growth and biomass of the test cultivar. The adversity was more pronounced under the combined effect of EWS. Significant reductions in stomatal conductance (gs) and assimilation rate were observed under the combined stress condition compared to the control (AW). However, plants grown under the individual stress conditions displayed higher gs, biomass, and antioxidant defense compared to the plants grown under the combined stresses. Higher induction of most of the enzyme activities of catalase (CAT), ascorbate peroxidase (APX), glutathione reductase (GR), peroxidase (POD) and superoxide dismutase (SOD) was displayed by HD 2967 under EW, while under the combined stresses (EWS) a moderate increment of APX and CAT activity was observed only at the vegetative phase. Furthermore, variations in nutrient uptake and redistribution to different plant parts were also observed in the present study. The reduction in water availability checked nutrient uptake (N, K, P, Ca, Cu, Mg, Zn) in the above-ground parts (leaf) and below-ground parts (root). On the other hand, carbon (C) accumulation, and consequently the C:N ratio, was observed to be higher in the leaves under EWS. Such a major nutrient check and the limitation of carbon fixation due to lower gs under the combined stress conditions might have weakened the defense mechanisms of the test cultivar. Grain yield was significantly reduced under EWS, followed by AWS and EW, as compared to their controls, exhibiting an additive effect on the grain yield.Keywords: antioxidants, open-top chambers, ozone, water stress, wheat, yield
Procedia PDF Downloads 117