Search results for: amplification of angular speed differential
287 Upon Poly(2-Hydroxyethyl Methacrylate-Co-3, 9-Divinyl-2, 4, 8, 10-Tetraoxaspiro (5.5) Undecane) as Polymer Matrix Ensuring Intramolecular Strategies for Further Coupling Applications
Authors: Aurica P. Chiriac, Vera Balan, Mihai Asandulesa, Elena Butnaru, Nita Tudorachi, Elena Stoleru, Loredana E. Nita, Iordana Neamtu, Alina Diaconu, Liliana Mititelu-Tartau
Abstract:
The interest in studying ‘smart’ materials is entirely justified, and in this context investigations were carried out on poly(2-hydroxyethyl methacrylate-co-3,9-divinyl-2,4,8,10-tetraoxaspiro(5.5)undecane), a macromolecular compound with sensitivity to pH and temperature, gel formation capacity, binding properties, amphiphilicity, and good oxidative and thermal stability. Physico-chemical characteristics in terms of molecular weight, temperature-sensitive abilities and thermal stability, as well as rheological, dielectric and spectroscopic properties, were evaluated in correlation with further coupling capabilities. Differential scanning calorimetry indicated a Tg at 36.6 °C and a melting point at Tm = 72.8 °C for the studied copolymer, and up to 200 °C two exothermic processes (at 99.7 °C and 148.8 °C) were registered, with weight losses of about 4% and 19.27%, respectively, which indicate thermal decomposition processes (and not thermal transition phenomena) owing to scission of the functional groups and breakage of the macromolecular chains. At the same time, the rheological studies (rotational tests) confirmed the non-Newtonian shear-thinning fluid behavior of the copolymer solution. The dielectric properties of the copolymer were evaluated in order to investigate the relaxation processes, and two relaxation processes below the Tg value were registered and attributed to localized motions of polar groups from the macromolecular side chains, or parts of them, without disturbing the main chains. According to the literature, and confirmed as well by our investigations, the β-relaxation is assigned to the rotation of the ester side group and the γ-relaxation corresponds to the rotation of the hydroxymethyl side groups. Fluorescence spectroscopy confirmed the copolymer structure, the spiroacetal moiety adopting an axial conformation, which is more stable, has lower energy and is able to engage in specific interactions with molecules from the environment, phenomena underlined by the different shapes of the emission spectra of the copolymer. Also, the copolymer was used as a template for the incorporation of indomethacin as a model drug, and the biocompatible character of the complex was confirmed. The release behavior of the bioactive compound was dependent on the copolymer matrix composition, increasing amounts of the 3,9-divinyl-2,4,8,10-tetraoxaspiro(5.5)undecane comonomer attenuating the drug release. At the same time, the in vivo studies did not show significant differences in leucocyte formula elements, GOT, GPT and LDH levels, or immune parameters (OC, PC, and BC) between the control mice group and the groups treated with copolymer samples, with or without drug, data attesting to the biocompatibility of the polymer samples. The investigation of the physico-chemical characteristics of poly(2-hydroxyethyl methacrylate-co-3,9-divinyl-2,4,8,10-tetraoxaspiro(5.5)undecane) in terms of temperature-sensitive abilities and rheological and dielectric properties brings useful information for further specific uses of this polymeric compound.
Keywords: bioapplications, dielectric and spectroscopic properties, dual sensitivity at pH and temperature, smart materials
Procedia PDF Downloads 282
286 New Suspension Mechanism for a Formula Car using Camber Thrust
Authors: Shinji Kajiwara
Abstract:
The basic abilities of a vehicle are to “run”, “turn” and “stop”. Safety and comfort during a drive on various road surfaces and at various speeds depend on the performance of these basic abilities of the vehicle. Stability and maneuverability of a vehicle are vital in automotive engineering. Stability of a vehicle is the ability of the vehicle to revert to a stable state during a drive when faced with crosswinds and irregular road conditions. Maneuverability of a vehicle is the ability of the vehicle to change direction swiftly during a drive based on the steering of the driver. The stability and maneuverability of a vehicle can also be defined as the driving stability of the vehicle. Since fossil-fueled vehicles are the main type of transportation today, the environmental factor in automotive engineering is also vital. By improving the fuel efficiency of the vehicle, the overall carbon emission will be reduced, thus reducing the effect of global warming and greenhouse gases on the Earth. Another main focus of automotive engineering is the safety performance of the vehicle, especially with the worrying increase in vehicle collisions every day. With better safety performance of a vehicle, every driver will be more confident driving every day. Next, let us focus on the “turn” ability of a vehicle. By improving this particular ability of the vehicle, the cornering limit of the vehicle can be improved, thus increasing the stability and maneuverability factor. In order to improve the cornering limit of the vehicle, a study to find the balance between the steering system, the stability of the vehicle, higher lateral acceleration and cornering limit detection must be conducted. The aim of this research is to study and develop a new suspension system that will boost the lateral acceleration of the vehicle and ultimately improve the cornering limit of the vehicle. This research will also study the environmental factor and the stability factor of the new suspension system. The double wishbone suspension system is widely used in four-wheel vehicles, especially in high cornering performance sports cars and racing cars. The double wishbone design allows the engineer to carefully control the motion of the wheel by controlling such parameters as camber angle, caster angle, toe pattern, roll center height, scrub radius, scuff and more. The development of the new suspension system will focus on its ability to optimize camber control and to improve the camber limit during a cornering motion. The research will be carried out using a CAE analysis tool. Using this analysis tool, we will develop a JSAE Formula machine equipped with the double wishbone system and with the new suspension system, and conduct simulations and studies on the performance of both suspension systems.
Keywords: automobile, camber thrust, cornering force, suspension
Procedia PDF Downloads 323
285 Dynamic Wetting and Solidification
Authors: Yulii D. Shikhmurzaev
Abstract:
The modelling of non-isothermal free-surface flows coupled with the solidification process has become the topic of intensive research with the advent of additive manufacturing, where complex 3-dimensional structures are produced by successive deposition and solidification of microscopic droplets of different materials. The issue is that both the spreading of liquids over solids and the propagation of the solidification front into the fluid and along the solid substrate pose fundamental difficulties for their mathematical modelling. The first of these processes, known as ‘dynamic wetting’, leads to the well-known ‘moving contact-line problem’ where, as shown recently both experimentally and theoretically, the contact angle formed by the free surface with the solid substrate is not a function of the contact-line speed but is rather a functional of the flow field. The modelling of the propagating solidification front requires a generalization of the classical Stefan problem that would be able to describe the onset of the process and the non-equilibrium regime of solidification. Furthermore, given that dynamic wetting and solidification occur concurrently and interactively, they should be described within the same conceptual framework. The present work addresses this formidable problem and presents a mathematical model capable of describing the key element of additive manufacturing in a self-consistent and singularity-free way. The model is illustrated by simple examples highlighting its main features. The main idea of the work is that both dynamic wetting and solidification, as well as some other fluid flows, are particular cases in a general class of flows where interfaces form and/or disappear. This conceptual framework allows one to derive a mathematical model from first principles using the methods of irreversible thermodynamics. Crucially, the interfaces are not considered as zero-mass entities introduced using the Gibbsian ‘dividing surface’ but as 2-dimensional surface phases produced by the continuum limit in which the thickness of what is physically an interfacial layer vanishes, and their properties are characterized by ‘surface’ parameters (surface tension, surface density, etc.). This approach allows for mass exchange between the surface and bulk phases, which is the essence of interface formation. As shown numerically, the onset of solidification is preceded by a pure interface-formation stage, whilst the Stefan regime is the final stage, in which the temperature at the solidification front asymptotically approaches the solidification temperature. The developed model can also be applied to flows with substrate melting as well as to complex flows where both types of phase transition take place.
Keywords: dynamic wetting, interface formation, phase transition, solidification
Procedia PDF Downloads 65
284 Localized and Time-Resolved Velocity Measurements of Pulsatile Flow in a Rectangular Channel
Authors: R. Blythman, N. Jeffers, T. Persoons, D. B. Murray
Abstract:
The exploitation of flow pulsation in micro- and mini-channels is a potentially useful technique for enhancing cooling of high-end photonics and electronics systems. It is thought that pulsation alters the thickness of the hydrodynamic and thermal boundary layers, and hence affects the overall thermal resistance of the heat sink. Although the fluid mechanics and heat transfer are inextricably linked, it can be useful to decouple the parameters to better understand the mechanisms underlying any heat transfer enhancement. Using two-dimensional, two-component particle image velocimetry, the current work intends to characterize the heat transfer mechanisms in pulsating flow with a mean Reynolds number of 48 by experimentally quantifying the hydrodynamics of a generic liquid-cooled channel geometry. Flows circulated through the test section by a gear pump are modulated using a controller to achieve sinusoidal flow pulsations with Womersley numbers of 7.45 and 2.36 and an amplitude ratio of 0.75. It is found that the transient characteristics of the measured velocity profiles are dependent on the speed of oscillation, in accordance with the analytical solution for flow in a rectangular channel. A large velocity overshoot is observed close to the wall at high frequencies, resulting from the interaction of near-wall viscous stresses and inertial effects of the main fluid body. The steep velocity gradients at the wall are indicative of augmented heat transfer, although the local flow reversal may reduce the upstream temperature difference in heat transfer applications. While unsteady effects remain evident at the lower frequency, the annular effect subsides and retreats from the wall. The shear rate at the wall is increased during the accelerating half-cycle and decreased during deceleration compared to steady flow, suggesting that the flow may experience both enhanced and diminished heat transfer during a single period. Hence, the thickness of the hydrodynamic boundary layer is reduced for positively moving flow during one half of the pulsation cycle at the investigated frequencies. It is expected that the size of the thermal boundary layer is similarly reduced during the cycle, leading to intervals of heat transfer enhancement.
Keywords: heat transfer enhancement, particle image velocimetry, localized and time-resolved velocity, photonics and electronics cooling, pulsating flow, Richardson’s annular effect
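As a rough check on the dimensionless numbers quoted above, the short Python sketch below shows how the Womersley and Reynolds numbers are formed; the coolant properties, channel half-height, pulsation frequencies and mean velocity are assumed values (they are not given in the abstract), chosen so that water-like properties and a 1 mm half-height roughly reproduce the quoted 7.45, 2.36 and 48.

```python
import math

# Hypothetical channel and fluid values (not given in the abstract), chosen
# only to illustrate how the dimensionless groups are formed.
nu = 1.0e-6           # kinematic viscosity of water, m^2/s (assumed coolant)
half_height = 1.0e-3  # channel half-height, m (assumed characteristic length)
u_mean = 0.024        # mean velocity, m/s (assumed)

def womersley(freq_hz, length, kin_visc):
    """Womersley number: ratio of transient inertial to viscous forces."""
    omega = 2.0 * math.pi * freq_hz
    return length * math.sqrt(omega / kin_visc)

def reynolds(u, length, kin_visc):
    """Reynolds number based on the chosen characteristic length."""
    return u * length / kin_visc

for f in (8.8, 0.88):  # assumed pulsation frequencies, Hz
    print(f"f = {f:5.2f} Hz -> Wo = {womersley(f, half_height, nu):.2f}")
print(f"Re = {reynolds(u_mean, 2 * half_height, nu):.0f}")
```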
Procedia PDF Downloads 346
283 The Facilitatory Effect of Phonological Priming on Visual Word Recognition in Arabic as a Function of Lexicality and Overlap Positions
Authors: Ali Al Moussaoui
Abstract:
An experiment was designed to assess the performance of 24 Lebanese adults (mean age 29:5 years) in a lexical decision making (LDM) task, in order to find out how the facilitatory effect of phonological priming (PP) affects the speed of visual word recognition in Arabic as lexicality (wordhood) and phonological overlap positions (POP) vary. The experiment falls in line with previous research on phonological priming in the light of the cohort theory and in relation to visual word recognition. The experiment also builds on research on the Arabic language, in which the importance of the consonantal root as a distinct morphological unit is confirmed. Based on previous research, it is hypothesized that (1) PP has a facilitating effect in LDM with words but not with nonwords and (2) final phonological overlap between the prime and the target is more facilitatory than initial overlap. An LDM task was programmed in the PsychoPy application. Participants had to decide whether a target (e.g., bayn ‘between’) preceded by a prime (e.g., bayt ‘house’) is a word or not. There were 4 conditions: no PP (NP), nonwords priming nonwords (NN), nonwords priming words (NW), and words priming words (WW). The conditions were simultaneously controlled for word length, wordhood, and POP. The interstimulus interval was 700 ms. Within the PP conditions, POP was controlled for, with 3 overlap positions between the primes and the targets: initial (e.g., asad ‘lion’ and asaf ‘sorrow’), final (e.g., kattab ‘cause to write’ 2sg-mas and rattab ‘organize’ 2sg-mas), or two-segmented (e.g., namle ‘ant’ and naħle ‘bee’). There were 96 trials, 24 in each condition, using a within-subject design. The results show that, concerning (1), the highest average reaction time (RT) is that in NN, followed first by NW and finally by WW. There is statistical significance only between the pairs NN-NW and NN-WW. Regarding (2), the shortest RT is that in the two-segmented overlap condition, followed by the final POP and then the initial POP. The difference between the two-segmented and the initial overlap is significant, while the other pairwise comparisons are not. Based on these results, PP emerges as a facilitatory phenomenon that is highly sensitive to lexicality and POP. While PP can have a facilitating effect under lexicality, it shows no facilitation in its absence, which converges with several previous findings. Participants were found to be more sensitive to the final phonological overlap than to the initial overlap, which also coincides with a body of earlier literature. The results contradict the cohort theory’s stress on the onset overlap position and, instead, give more weight to final overlap, and even heavier weight to the two-segmented one. In conclusion, this study confirms the facilitating effect of PP with words but not when the stimuli (at least the primes and at most both the primes and the targets) are nonwords. It also shows that two-segmented priming is the most influential in LDM in Arabic.
Keywords: lexicality, phonological overlap positions, phonological priming, visual word recognition
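The pairwise reaction-time contrasts described above can be illustrated with a minimal sketch of the kind below; the per-participant means are fabricated for illustration, and the original analysis may have used a different test or software.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fabricated per-participant mean RTs (ms); the real values come from the
# 24 participants and 96 PsychoPy trials described in the abstract.
n = 24
rt = {
    "NN": rng.normal(820, 60, n),
    "NW": rng.normal(760, 60, n),
    "WW": rng.normal(730, 60, n),
}

# Paired comparisons across the within-subject conditions.
for a, b in [("NN", "NW"), ("NN", "WW"), ("NW", "WW")]:
    t, p = stats.ttest_rel(rt[a], rt[b])
    print(f"{a} vs {b}: t({n - 1}) = {t:5.2f}, p = {p:.4f}")
```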
Procedia PDF Downloads 185
282 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers
Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala
Abstract:
The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to explode much more. Users spend considerable time browsing different websites, which gives a great deal of insight into users’ preferences. Instead of presenting plain information, classifying different aspects of browsing like Bookmarks, History, and Download Manager into useful categories would improve and enhance the user’s experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources. They have security constraints and may miss contextual data during classification. On-device classification solves many such problems, but the challenge is to achieve classification accuracy under resource constraints. This on-device classification can be much more useful for personalization, reducing dependency on cloud connectivity and offering better privacy/security. This approach provides more relevant results compared to current standalone solutions because it uses the content rendered by the browser, which is customized by the content provider based on the user’s profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, this solution extracts DOM tree data from the browser’s rendering engine. This DOM data is dynamic, contextual and secure data that cannot be replicated. The proposal extracts different features of the webpage, which are run through an algorithm to classify the page into multiple categories. A Naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms like Support Vector Machines, Neural Networks, etc. Naive Bayes classification requires a small memory footprint and little computation, making it suitable for the smartphone environment. This solution has a feature to partition the model into multiple chunks, which in turn facilitates lower memory usage compared to loading a complete model. Classification of webpages through the integrated engine is faster, more relevant and more energy efficient than other standalone on-device solutions. This classification engine has been tested on Samsung Z3 Tizen hardware. The engine is integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. This cleaned dataset has 227.5K webpages, which are divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution has resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with a standalone solution. This solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. This engine can be further extended for suggesting dynamic tags and for using the classification in different use cases to enhance the browsing experience.
Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification
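As a rough illustration of the classification step described above, the following Python sketch trains a multinomial Naive Bayes model on toy page text using scikit-learn; the actual engine extracts features from the DOM tree and uses a custom, partitionable model rather than this library, and the sample pages and words here are invented.

```python
# Minimal sketch of an on-device-style text classifier: bag-of-words features
# plus multinomial Naive Bayes. The category names match the eight listed in
# the abstract; the training pages are toy stand-ins.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

pages = [
    ("university courses exams tuition lecture", "education"),
    ("match score league goal tournament", "sports"),
    ("flight hotel itinerary beach visa", "travel"),
    ("symptoms doctor treatment diet clinic", "health"),
]
texts, labels = zip(*pages)

model = make_pipeline(CountVectorizer(), MultinomialNB(alpha=1.0))
model.fit(texts, labels)

print(model.predict(["cheap hotel near the beach with flight deals"]))
```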
Procedia PDF Downloads 163
281 Interlayer-Mechanical Working: Effective Strategy to Mitigate Solidification Cracking in Wire-Arc Additive Manufacturing (WAAM) of Fe-based Shape Memory Alloy
Authors: Soumyajit Koley, Kuladeep Rajamudili, Supriyo Ganguly
Abstract:
In recent years, iron-based shape-memory alloys have been emerging as an inexpensive alternative to the costly Ni-Ti alloy and are thus considered suitable for many different applications in civil structures. The Fe-17Mn-10Cr-5Si-4Ni-0.5V-0.5C alloy contains 37 wt.% of total solute elements. Such a complex multi-component metallurgical system often leads to severe solute segregation and solidification cracking. Wire-arc additive manufacturing (WAAM) of the Fe-17Mn-10Cr-5Si-4Ni-0.5V-0.5C alloy was attempted using a cold-wire-fed plasma arc torch attached to a 6-axis robot. Self-standing walls were manufactured. However, multiple vertical cracks were observed after the deposition of around 15 layers. Microstructural characterization revealed open dendrite surfaces inside the cracks, confirming them as solidification cracks. A machine hammer peening (MHP) process was adopted on each layer to cold work the newly deposited alloy. The MHP traverse speed was varied systematically to attain a window of operation where cracking was completely stopped. Microstructural and textural analyses were then carried out to correlate the peening process to the microstructure. MHP helped in many ways. Firstly, a compressive residual stress was induced on each layer, which countered the tensile residual stress that evolved from the solidification process, thus reducing the net tensile stress on the wall along its length. Secondly, significant local plastic deformation from MHP, followed by the thermal cycle induced by the deposition of the next layer, resulted in a recovered and recrystallized equiaxed microstructure instead of long columnar grains along the vertical direction. This microstructural change increased the total crack propagation length and thus the overall toughness. Thirdly, the inter-layer peening significantly reduced the strong cubic {001} crystallographic texture formed along the build direction. A cubic {001} texture promotes easy separation of planes and easy crack propagation. Thus, reducing the cubic texture alleviates the chance of cracking.
Keywords: iron-based shape-memory alloy, wire-arc additive manufacturing, solidification cracking, inter-layer cold working, machine hammer peening
Procedia PDF Downloads 72
280 Numerical Analysis of the Computational Fluid Dynamics of Co-Digestion in a Large-Scale Continuous Stirred Tank Reactor
Authors: Sylvana A. Vega, Cesar E. Huilinir, Carlos J. Gonzalez
Abstract:
Co-digestion in anaerobic biodigesters is a technology that improves hydrolysis and increases methane generation. In the present study, the three-dimensional computational fluid dynamics (CFD) of agitation in a full-scale Continuous Stirred Tank Reactor (CSTR) biodigester during the co-digestion process is numerically analyzed using Ansys Fluent software. For this, a rheological study of the substrate is carried out, establishing rotation speeds of the stirrers depending on the microbial activity and energy ranges. The substrate is organic waste from industrial sources of sanitary water, butcher, fishmonger, and dairy. Once the rheological behavior curves had been obtained, the substrate was found to be a non-Newtonian fluid of the pseudoplastic type, with a solids content of 12%. In the simulation, the rheological results for the fluid are considered, and the full-scale CSTR biodigester is modeled, coupling the second-order continuity differential equations, the three-dimensional Navier-Stokes equations, the power-law model for non-Newtonian fluids, and three turbulence models: k-ε RNG, k-ε Realizable, and RSM (Reynolds Stress Model), for a 45° tilted-vane impeller. The simulation is run for three minutes, since the aim is to study intermittent mixing with the benefit of saving energy. The results show that the absolute errors of the power number associated with the k-ε RNG, k-ε Realizable, and RSM models were 7.62%, 1.85%, and 5.05%, respectively, relative to the power numbers obtained from the analytical-experimental equation of Nagata. The results for the generalized Reynolds number show that the fluid dynamics are in a transition-turbulent flow regime. Concerning the Froude number, the result indicates that there is no need to implement baffles in the biodigester design, and the power number shows a steady trend close to 1.5. It is observed that the design speeds within the biodigester are approximately 0.1 m/s, which are speeds suitable for the microbial community, where the microorganisms can coexist and feed on the substrate in co-digestion. It is concluded that the model that most accurately predicts the behavior of the fluid dynamics within the reactor is the k-ε Realizable model. The flow paths obtained are consistent with what is stated in the referenced literature, where the 45° inclination PBT impeller is the right type of agitator to keep particles in suspension and, in turn, increase the dispersion of gas in the liquid phase. If 24/7 complete mixing under stirred agitation is considered, with a plant factor of 80%, 51,840 kWh/year are estimated. On the contrary, intermittent agitation of 3 min every 15 min under the same design conditions reduces energy costs by almost 80%. This is a feasible approach for predicting the energy expenditure of an anaerobic CSTR biodigester. It is recommended to use high mixing intensities at the beginning and end of the joint acetogenesis/methanogenesis phase. This high intensity of mixing at the beginning produces the activation of the bacteria and, once the end of the Hydraulic Retention Time period is reached, another increase in the mixing agitation favors the final dispersion of the biogas that may be trapped at the biodigester bottom.
Keywords: anaerobic co-digestion, computational fluid dynamics, CFD, net power, organic waste
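To make the dimensionless groups discussed above concrete, the following sketch evaluates a generalized (Metzner-Otto) Reynolds number, the power number and the Froude number for a power-law fluid in a stirred tank; all input values are assumed for illustration and do not come from the paper's rheometry or CFD results.

```python
# Rough sketch of the dimensionless groups for a pseudoplastic (power-law)
# substrate in a stirred tank. All numbers are placeholders.
rho = 1000.0   # substrate density, kg/m^3 (assumed)
K = 2.5        # consistency index, Pa.s^n (assumed)
n = 0.45       # flow behaviour index, dimensionless (assumed, pseudoplastic)
D = 1.2        # impeller diameter, m (assumed)
N = 1.0        # impeller speed, rev/s (assumed)
P = 2500.0     # shaft power from the CFD solution, W (assumed)
g = 9.81

k_s = 11.0                               # Metzner-Otto constant (typical value)
mu_app = K * (k_s * N) ** (n - 1.0)      # apparent viscosity at the impeller
re_gen = rho * N * D**2 / mu_app         # generalized Reynolds number
n_p = P / (rho * N**3 * D**5)            # power number
fr = N**2 * D / g                        # Froude number

print(f"Re_gen = {re_gen:.0f}, Np = {n_p:.2f}, Fr = {fr:.3f}")
```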
Procedia PDF Downloads 114
279 Structural Health Assessment of a Masonry Bridge Using Wireless
Authors: Nalluri Lakshmi Ramu, C. Venkat Nihit, Narayana Kumar, Dillep
Abstract:
Masonry bridges are iconic heritage transportation infrastructure throughout the world. The continuous increase in traffic loads and speeds has kept engineers in a dilemma about their structural performance and capacity. Hence, the research community has an urgent need to propose an effective methodology and validate it on real bridges. The presented research aims to assess the structural health of an eighty-year-old masonry railway bridge in India using wireless accelerometer sensors. The bridge consists of 44 spans with a length of 24.2 m each, and each pier is 13 m tall and laid on a well foundation. To calculate the dynamic characteristic properties of the bridge, ambient vibrations were recorded from the moving traffic at various speeds, and these are compared with a three-dimensional numerical model developed using finite element-based software. Conclusions about the weaker or deteriorated piers are drawn from the comparison of frequencies obtained from the experimental tests conducted on alternate spans. Masonry is a heterogeneous anisotropic material made up of incoherent materials (such as bricks, stones, and blocks). It is most likely the earliest widely used construction material. Masonry bridges, which were typically constructed of brick and stone, are still a key feature of the world's highway and railway networks. There are 147,523 railway bridges across India, and about 15% of these bridges are built of masonry and are around 80 to 100 years old. The cultural significance of masonry bridges cannot be overstated. These bridges are considered complicated due to the presence of arches, spandrel walls, piers, foundations, and soils. Traffic loads and vibrations, wind, rain, frost attack, high/low temperature cycles, moisture, earthquakes, river overflows, floods, scour, and the soil under their foundations may cause material deterioration, opening of joints and ring separation in arch barrels, cracks in piers, loss of brick-stones and mortar joints, and distortion of the arch profile. A few NDT tests, like the flat jack test, are employed to assess the homogeneity and durability of masonry structures; however, such tests have many drawbacks. A modern approach to the structural health assessment of masonry structures through vibration analysis, frequencies and stiffness properties is explored in this paper.
Keywords: masonry bridges, condition assessment, wireless sensors, numerical analysis, modal frequencies
Procedia PDF Downloads 169
278 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission
Authors: Tingwei Shu, Dong Zhou, Chengjun Guo
Abstract:
Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission in situations of large data volume, low SNR and restricted bandwidth. With the development of Deep Learning, semantic communication has further matured and is gradually being applied in the fields of the Internet of Things, Unmanned Air Vehicle cluster communication, remote sensing scenarios, etc. We propose an improved semantic communication system for situations where the data volume is huge and the spectrum resources are limited during the transmission of remote sensing images. At the transmitting end, we need to extract the semantic information of remote sensing images, but there are some problems. The traditional semantic communication system based on Convolutional Neural Networks cannot take into account both the global semantic information and the local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN-based one, to extract the image semantic features. In this paper, we first perform pre-processing operations on the remote sensing images to improve their resolution in order to obtain images with more semantic information. We use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the preprocessed image. We adopt the improved Vision-Transformer structure as the semantic coder to extract and transmit the semantic information of remote sensing images. The Vision-Transformer structure can better handle the huge data volume and extract better image semantic features, and it adopts the multi-layer self-attention mechanism to better capture the correlations between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism itself to linear so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a semantic communication system based on CNNs and with image coding methods such as BPG and JPEG, verifying that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
Keywords: semantic communication, transformer, wavelet transform, data processing
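A minimal sketch of the wavelet-based pre-processing step described above is given below, assuming the PyWavelets and OpenCV libraries and a single-band image; the exact wavelet, interpolation order and implementation used by the authors are not stated in the abstract.

```python
# Illustrative sketch (with assumed details): decompose, upscale the
# low-frequency band with bicubic and the high-frequency bands with bilinear
# interpolation, then reconstruct a higher-resolution image with the inverse
# wavelet transform.
import numpy as np
import cv2
import pywt

def wavelet_upscale(img: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float32), wavelet)
    h, w = img.shape
    cA = cv2.resize(cA, (w, h), interpolation=cv2.INTER_CUBIC)   # low-frequency
    cH = cv2.resize(cH, (w, h), interpolation=cv2.INTER_LINEAR)  # high-frequency
    cV = cv2.resize(cV, (w, h), interpolation=cv2.INTER_LINEAR)
    cD = cv2.resize(cD, (w, h), interpolation=cv2.INTER_LINEAR)
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)  # roughly 2x the input size

if __name__ == "__main__":
    gray = np.random.rand(128, 128).astype(np.float32)  # stand-in for one image band
    upscaled = wavelet_upscale(gray)
    print(gray.shape, "->", upscaled.shape)  # (128, 128) -> (256, 256)
```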
Procedia PDF Downloads 78
277 The Development of Noctiluca scintillans Algal Bloom in Coastal Waters of Muscat, Sultanate of Oman
Authors: Aysha Al Sha'aibi
Abstract:
Algal blooms of the dinoflagellate species Noctiluca scintillans have become frequent events in Omani waters. The current study aims at elucidating the abundance, size variation and feeding mechanism of this species during the winter bloom. An attempt was made to relate the observed biological parameters of the Noctiluca population to environmental factors. Field studies spanned the period from December 2014 to April 2015. Samples were collected from Bandar Rawdah (Muscat region) with Bongo nets, twice per week, from the surface and from the integrated upper mixed layer. The measured environmental variables were: temperature, salinity, dissolved oxygen, chlorophyll a, turbidity, nitrite, phosphate, wind speed and rainfall. During the winter bloom (from December 2014 through February 2015), the abundance exhibited the highest concentration on 17 February (640.24×10⁶ cells L⁻¹ in oblique samples and 83.9×10³ cells L⁻¹ in surface samples), with a subsequent decline up to the end of April. The average number of food vacuoles inside Noctiluca cells was 1.5 per cell; the percentage of feeding Noctiluca relative to the entire population varied from 0.01% to 0.03%. Both the surface area of the Noctiluca symbionts (Pedinomonas noctilucae) and the cell diameter were maximal in December. In oblique samples, the highest average cell diameter and surface area of symbiont algae were 751.7 µm and 179.2×10³ µm², respectively. In surface samples, the highest average cell diameter and surface area of symbionts were 760 µm and 284.05×10³ µm², respectively. No significant correlations were detected between Noctiluca’s biological parameters and the environmental variables, except for the correlation between cell diameter and chlorophyll a and between symbiotic algae surface area and chlorophyll a. The high correlation with chlorophyll a arises because the endosymbiotic alga Pedinomonas noctilucae in green Noctiluca enhances chlorophyll during the bloom. All correlations among biological parameters were significant; they are perhaps among the major factors mediating the high growth rates, generating millions of cells per liter in a short time. The results gained from this study will provide a beneficial background for a deeper understanding of the development of coastal algal blooms of Noctiluca scintillans. Moreover, the results could be used in different applications related to the marine environment.
Keywords: abundance, feeding activities, Noctiluca scintillans, Oman
Procedia PDF Downloads 435
276 A Model of the Universe without Expansion of Space
Authors: Jia-Chao Wang
Abstract:
A model of the universe without invoking space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling in space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, experience redshift. The interaction also causes some of the photons to be scattered off their track toward an observer and, therefore, results in beam intensity attenuation. As observed, the CMB exists everywhere in space and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually and will not alter their momenta drastically as in, for example, Compton scattering, to totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag. The cause is that the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect can be the observed CMB dipole: the Earth travels at about 368 km/s (600 km/s) relative to the CMB. In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears 0.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is 0.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = αT⁴/3, where Eγ is the energy density of the photon gas and α is the Stefan-Boltzmann constant. The observed CMB dipole, therefore, implies a pressure difference between the two sides of the Earth and results in a CMB drag on the Earth. By plugging in suitable estimates of the quantities involved, such as the cross section of the Earth and the temperatures on the two sides, this drag can be estimated to be tiny. But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 µ vs. z data points compiled from 643 supernova and 105 γ-ray burst observations with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction
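The abstract does not give the model's functional form, so the sketch below only illustrates how a two-parameter distance-modulus relation of the general kind described (exponential photon energy loss plus exponential beam attenuation with distance) could be fitted to µ-z data with SciPy; the functional form, parameter values and synthetic data are all stand-ins, not the author's model.

```python
# Toy illustration of a two-parameter mu(z) fit; not the paper's equations.
import numpy as np
from scipy.optimize import curve_fit

def mu_model(z, k1, k2):
    """Distance modulus for redshift z.

    k1 [1/Mpc]: energy-dissipation rate, so 1 + z = exp(k1 * d)
    k2 [1/Mpc]: beam-attenuation rate, adding 1.086 * k2 * d magnitudes
    """
    d = np.log1p(z) / k1                      # distance in Mpc
    return 5.0 * np.log10(d * 1e6 / 10.0) + 1.086 * k2 * d

# Synthetic (mu, z) points standing in for the 748 supernova/GRB data points.
rng = np.random.default_rng(1)
z_obs = np.linspace(0.05, 8.1, 200)
mu_obs = mu_model(z_obs, 2.3e-4, 5e-5) + rng.normal(0.0, 0.15, z_obs.size)

(k1_fit, k2_fit), _ = curve_fit(mu_model, z_obs, mu_obs, p0=[1e-4, 1e-5],
                                bounds=([1e-6, 0.0], [1e-2, 1e-2]))
print(f"fitted k1 = {k1_fit:.2e} /Mpc, k2 = {k2_fit:.2e} /Mpc")
```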
Procedia PDF Downloads 133
275 Dynamic Simulation of Disintegration of Wood Chips Caused by Impact and Collisions during the Steam Explosion Pre-Treatment
Authors: Muhammad Muzamal, Anders Rasmuson
Abstract:
Wood is extensively considered as a raw material for the production of bio-polymers, bio-fuels and value-added chemicals. However, the shortcoming in using wood as a raw material is that the enzymatic hydrolysis of wood is difficult, because the accessibility of enzymes to hemicelluloses and cellulose is hindered by the complex chemical and physical structure of the wood. The steam explosion (SE) pre-treatment improves the digestion of wood by creating both chemical and physical modifications in the wood. In this process, wood chips are first treated with steam at high pressure and temperature for a certain time in a steam treatment vessel. During this time, the chemical linkages between lignin and polysaccharides are cleaved and the stiffness of the material decreases. Then the steam discharge valve is rapidly opened, and the steam and wood chips exit the vessel at very high speed. These fast-moving wood chips collide with each other and with the walls of the equipment and disintegrate into small pieces. More damaged and disintegrated wood has a larger surface area and increased accessibility to hemicelluloses and cellulose. The energy required to increase the specific surface area by the same amount is 70% higher in a conventional mechanical technique, i.e., an attrition mill, than in the steam explosion process. The mechanism of wood disintegration during the SE pre-treatment has been studied very little. In this study, we have simulated the collision and impact of wood chips (dimensions 20 mm x 20 mm x 4 mm) with each other and with the walls of the vessel. The wood chips are simulated as a 3D orthotropic material. Damage and fracture in the wood material have been modelled using a 3D Hashin damage model. This has been accomplished by developing a user-defined subroutine and implementing it in the FE software ABAQUS. The elastic and strength properties used for the simulation are those of spruce wood at 12% and 30% moisture content and at 20 and 160 °C, because the impacted wood chips are pre-treated with steam at high temperature and pressure. We have simulated several cases to study the effects of the elastic and strength properties of the wood, the velocity of the moving chip and the orientation of the wood chip at the time of impact on the damage in the wood chips. The disintegration patterns captured by the simulations are very similar to those observed in experimentally obtained steam-exploded wood. Simulation results show that wood chips moving with higher velocity disintegrate more. Moisture content and temperature decrease the elastic properties and increase the damage. Impact and collision in specific directions cause easy disintegration. This model can be used to efficiently design the steam explosion equipment.
Keywords: dynamic simulation, disintegration of wood, impact, steam explosion pretreatment
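As a hint of what the user-defined damage subroutine evaluates, the sketch below checks one pair of Hashin-type fibre failure criteria in Python; the strength values are placeholders for spruce, and the actual ABAQUS subroutine implements the full 3D criteria together with stiffness degradation.

```python
# Minimal sketch of a Hashin-type failure check. Strengths are placeholder
# values; only the two fibre-direction modes are shown.
from dataclasses import dataclass

@dataclass
class Strengths:
    XT: float = 90.0   # longitudinal (fibre) tensile strength, MPa (assumed)
    XC: float = 45.0   # longitudinal compressive strength, MPa (assumed)
    S12: float = 7.0   # in-plane shear strength, MPa (assumed)
    S13: float = 7.0   # out-of-plane shear strength, MPa (assumed)

def hashin_fibre_indices(s11, s12, s13, st: Strengths):
    """Return (tensile, compressive) fibre failure indices; >= 1 means failure."""
    if s11 >= 0.0:
        tension = (s11 / st.XT) ** 2 + (s12 / st.S12) ** 2 + (s13 / st.S13) ** 2
        compression = 0.0
    else:
        tension = 0.0
        compression = (s11 / st.XC) ** 2
    return tension, compression

print(hashin_fibre_indices(60.0, 4.0, 1.0, Strengths()))
```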
Procedia PDF Downloads 400
274 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor
Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro
Abstract:
Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles and other areas. In such control systems, control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning, and that ultimately improve the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: PID position form (1 DOF) and PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method which allows a compromise between the theoretical optimal control and the realization that most closely matches it. The performance of the compared control systems is evaluated through simulations in the Simulink platform, in which it is attempted to model each of the system's hardware components accurately. The criteria by which the control systems are compared are reference tracking and disturbance rejection. In this investigation, accurate tracking of the reference signal is considered particularly important for a position control system because of the frequency and suddenness with which the control signal can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected to ensure reference tracking. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, due to the nature and the advantage which state space provides for modelling MIMO systems, it is expected that such controllers show ease of tuning for disturbance rejection, assuming that the designer of such controllers is experienced. An in-depth multi-dimensional analysis of preliminary research results indicates that the state feedback control method is more satisfactory, but the PID control method is easier to implement in most control applications.
Keywords: control, DC motor, discrete PID, discrete state feedback
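A minimal sketch of a discrete PID controller in position form, of the kind compared in this work, is shown below; the gains, sample time, saturation limits and the crude stand-in plant are assumptions for illustration, not the paper's tuned controllers or motor model.

```python
# Discrete PID in position form with output saturation; placeholder tuning.
class DiscretePID:
    def __init__(self, kp, ki, kd, ts, u_min=-12.0, u_max=12.0):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.u_min, self.u_max = u_min, u_max   # actuator voltage limits (assumed)
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.ts                  # rectangular integration
        derivative = (error - self.prev_error) / self.ts  # backward difference
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.u_min, min(self.u_max, u))        # saturate the command

# Toy usage: drive a crude motor-like stand-in plant toward a 1.0 rad step.
pid = DiscretePID(kp=8.0, ki=2.0, kd=0.1, ts=0.001)
position, velocity = 0.0, 0.0
for _ in range(3000):
    u = pid.update(1.0, position)
    velocity += (u - velocity) * 0.01   # assumed first-order velocity dynamics
    position += velocity * 0.001
print(round(position, 3))
```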
Procedia PDF Downloads 266
273 Experimental Investigation on the Effect of Prestress on the Dynamic Mechanical Properties of Conglomerate Based on 3D-SHPB System
Authors: Wei Jun, Liao Hualin, Wang Huajian, Chen Jingkai, Liang Hongjun, Liu Chuanfu
Abstract:
The Kuqa Piedmont is rich in oil and gas resources and has great development potential in the Tarim Basin, China. However, a hugely thick gravel layer has developed there, with high gravel content, wide distribution and large variation in gravel size, leading to strong heterogeneity. As a result, the drill string is in a state of severe vibration and the drill bit wears severely while drilling, which greatly reduces the rock-breaking efficiency, and a complex load state of impact combined with three-dimensional in-situ stress acts on the rock at the bottom of the hole. The dynamic mechanical properties and influencing factors of conglomerate, the main component of the gravel layer, are the basis of engineering design, efficient rock-breaking methods and theoretical research. Limited by previous experimental techniques, few works on conglomerate have been published, and such works are especially rare for dynamic loads. Based on this, a 3D SHPB system, in which a three-dimensional prestress can be applied to simulate the in-situ stress characteristics, is adopted for the dynamic testing of the conglomerate. The results show that the dynamic strength is obviously higher than the static strength: when the three-dimensional prestress is 0 and the loading strain rate is 81.25–228.42 s⁻¹, the true triaxial equivalent strength is 167.17–199.87 MPa, and the dynamic-to-static strength growth factor is 1.61–1.92. Moreover, the higher the impact velocity, the greater the loading strain rate, the higher the dynamic strength and the greater the failure strain, all of which increase linearly. There is a critical prestress in the impact direction and in the direction perpendicular to it. In the impact direction, when the prestress is less than the critical value, the dynamic strength and the loading strain rate increase linearly; beyond it, the strength decreases slightly and the strain rate decreases rapidly. In the direction perpendicular to the impact load, the strength increases and the strain rate decreases linearly before the critical prestress; after that, the trends reverse. The dynamic strength of the conglomerate can be suitably reduced by reducing the amplitude of the impact load, so that the service life of rock-breaking tools can be prolonged while drilling in gravel-rich strata. The research has important reference significance for drilling-speed improvement technology and theoretical research on drilling in gravel layers.
Keywords: huge thick gravel layer, conglomerate, 3D SHPB, dynamic strength, deformation characteristics, prestress
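For readers unfamiliar with SHPB data reduction, the sketch below applies the standard two-wave formulas for specimen stress and strain rate; the bar properties, specimen size and gauge signals are placeholders, and the 3D SHPB system additionally accounts for the applied triaxial prestress.

```python
# Standard two-wave SHPB data reduction with placeholder inputs.
import numpy as np

E_bar = 210e9                         # bar Young's modulus, Pa (steel, assumed)
rho_bar = 7800.0                      # bar density, kg/m^3 (assumed)
c0 = np.sqrt(E_bar / rho_bar)         # elastic wave speed in the bar
A_bar = np.pi * (0.05 / 2) ** 2       # bar cross-section, m^2 (50 mm bar, assumed)
A_spec = np.pi * (0.05 / 2) ** 2      # specimen cross-section, m^2 (assumed)
L_spec = 0.05                         # specimen length, m (assumed)

t = np.linspace(0.0, 200e-6, 400)                 # time, s
eps_r = -4.0e-4 * np.sin(np.pi * t / 200e-6)      # reflected strain pulse (made up)
eps_t = 9.0e-4 * np.sin(np.pi * t / 200e-6)       # transmitted strain pulse (made up)

strain_rate = -2.0 * c0 * eps_r / L_spec          # specimen strain rate, 1/s
stress = E_bar * (A_bar / A_spec) * eps_t         # specimen stress, Pa

print(f"peak strain rate ~ {strain_rate.max():.0f} 1/s, "
      f"peak stress ~ {stress.max() / 1e6:.0f} MPa")
```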
Procedia PDF Downloads 209
272 The Effects of Labeling Cues on Sensory and Affective Responses of Consumers to Categories of Functional Food Carriers: A Mixed Factorial ANOVA Design
Authors: Hedia El Ourabi, Marc Alexandre Tomiuk, Ahmed Khalil Ben Ayed
Abstract:
The aim of this study is to investigate the effects of the labeling cues traceability (T), health claim (HC), and verification of health claim (VHC) on consumer affective response and sensory appeal toward a wide array of functional food carriers (FFC). Predominantly, research in the food area has tended to examine the effects of these information cues independently on cognitive responses to food product offerings. Investigations and findings of potential interaction effects among these factors on affective response and sensory appeal are therefore scant. Moreover, previous studies have typically emphasized single or limited sets of functional food products and categories. In turn, this study considers five food product categories enriched with omega-3 fatty acids, namely: meat products, eggs, cereal products, dairy products, and processed fruits and vegetables. It is, therefore, exhaustive in scope rather than exclusive. An investigation of the potential simultaneous effects of these information cues on the affective responses and sensory appeal of consumers should give rise to important insights for both functional food manufacturers and policymakers. A mixed (2 x 3) x (2 x 5) between-within subjects factorial ANOVA design was implemented in this study. T (two levels: completely traceable or non-traceable) and HC (three levels: functional health claim, disease risk reduction health claim, or disease prevention health claim) were treated as between-subjects factors, whereas VHC (two levels: by a government agency and by a non-government agency) and FFC (five food categories) were modeled as within-subjects factors. Subjects were randomly assigned to one of the six between-subjects conditions. A total of 463 questionnaires were obtained from a convenience sample of undergraduate students at various universities in the Montreal and Ottawa areas (in Canada). Consumer affective response and sensory appeal were respectively measured via the following statements assessed on seven-point semantic differential scales: ‘Your evaluation of [food product category] enriched with omega-3 fatty acids is Unlikeable (1) / Likeable (7)’ and ‘Your evaluation of [food product category] enriched with omega-3 fatty acids is Unappetizing (1) / Appetizing (7).’ Results revealed a significant interaction effect between HC and VHC on consumer affective response as well as on sensory appeal toward foods enriched with omega-3 fatty acids. On the other hand, the three-way interaction effect between T, HC, and VHC on either of the two dependent variables was not significant. However, the triple interaction effect among T, VHC, and FFC was significant on consumer affective response, and the interaction effect among T, HC, and FFC was significant on consumer sensory appeal. Findings of this study should serve as an impetus for functional food manufacturers to cooperate closely with policymakers in order to improve on and legitimize the use of health claims in their marketing efforts through credible verification practices and protocols put in place by trusted government agencies. Finally, both functional food manufacturers and retailers may benefit from the socially responsible image conveyed by product offerings whose ingredients remain traceable from farm to kitchen table.
Keywords: functional foods, labeling cues, affective appeal, sensory appeal
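As a loose illustration of how such a mixed between-within design can be analyzed in code, the sketch below fits a linear mixed model to fabricated long-format ratings with statsmodels; this is a stand-in for, not a reproduction of, the repeated-measures ANOVA reported in the study.

```python
# Fabricated long-format data with the study's factor names; a linear mixed
# model with a per-subject random intercept approximates the mixed design.
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
T_levels = ["traceable", "non_traceable"]
HC_levels = ["function", "risk_reduction", "prevention"]
VHC_levels = ["government", "non_government"]
FFC_levels = ["meat", "eggs", "cereal", "dairy", "fruit_veg"]

rows, subject = [], 0
for t, hc in itertools.product(T_levels, HC_levels):        # between-subjects cells
    for _ in range(20):                                      # fabricated subjects per cell
        subject += 1
        for vhc, ffc in itertools.product(VHC_levels, FFC_levels):  # within-subjects
            rating = np.clip(rng.normal(4.5, 1.0), 1, 7)     # 7-point scale response
            rows.append(dict(subject=subject, T=t, HC=hc, VHC=vhc, FFC=ffc,
                             rating=rating))
df = pd.DataFrame(rows)

model = smf.mixedlm("rating ~ T * HC * VHC * FFC", df, groups=df["subject"]).fit()
print(model.summary())
```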
Procedia PDF Downloads 164
271 Consideration of Whether Participation in the International '16 Days of Activism against Gender Based Violence' Campaign Is an Effective Teaching Tool for Raising Awareness and Understanding of Gender Based Violence
Authors: Kayliegh Richardson, Ana Speed
Abstract:
The international campaign ‘16 Days of Activism against Gender Based Violence’ seeks to raise awareness and understanding of gender-based violence in a variety of settings. The campaign requires its participants to join in advancing the right to education and challenging violence, discrimination, and inequality, taking into account intersections such as gender, race, ethnicity, religion, sexual orientation, socio-economic status and other social identifiers. The authors of this paper are both clinic supervisors at Northumbria University in Newcastle upon Tyne, England. As part of their research project, the authors are going to ask final-year students on the MLaw degree at Northumbria University to become involved in the campaign by participating in a variety of awareness-raising activities during the course of the 16 days, which runs from 27 November 2017 until 10 December 2017. As part of the campaign, the authors will be running the following activities for students to participate in: 1. A documentary showing of ‘Banaz: A Love Story’, followed by a discussion group. 2. 16 blogs for 16 days: students will contribute to our family law blog over the 16 days, with articles about gender-based violence. 3. A guest lecture on domestic violence (potentially run by a domestic violence organisation). 4. A workshop by Professor Ruth Lewis, who will be presenting her innovative research on gender-based violence and online abuse. 5. A poster competition: the students are asked to submit a poster about the different forms of gender-based violence or proposals for ending violence against women and girls. The research aims are to identify whether participation in the project: 1. increases the students' engagement with issues of gender justice; 2. is an effective educational tool for raising the students' awareness and understanding of gender-based violence in its many forms; 3. increases the students' understanding of the domestic and international framework for protecting victims (in particular women and children) of gender-based violence. After the activities, an impartial, experienced researcher will hold a focus group with volunteering students to discuss their experiences of participating in the activities and whether they felt that participation in the project achieved the aims set out above. This paper will discuss the activities undertaken by the students and will address the data gathered during the focus group. Finally, the authors will discuss their thoughts on whether awareness of gender-based violence and other international family law issues can be appropriately raised in an educational setting.
Keywords: gender based violence, clinical legal education, international family law, domestic abuse
Procedia PDF Downloads 339
270 Understanding the Challenges of Lawbook Translation via the Framework of Functional Theory of Language
Authors: Tengku Sepora Tengku Mahadi
Abstract:
Where the speed of book writing lags behind the high need for such material for tertiary studies, translation offers a way to enhance the equilibrium in this demand-supply equation. Nevertheless, translation is confronted by obstacles that threaten its effectiveness. The primary challenge to the production of efficient translations may well be related to the text-type and its complexity. A text that is intricately written, with unique rhetorical devices, subject-matter foundation and cultural references, will undoubtedly challenge the translator. Longer time and greater effort would be the consequence. To understand these text-related challenges, the present paper set out to analyze a lawbook entitled Learning the Law by David Melinkoff. The book is chosen because it has often been used as a textbook or for reference in many law courses in the United Kingdom and has seen over thirteen editions; therefore, it can be said to be a worthy book for studies in law. Another reason is the existence of a ready translation in Malay. Reference to this translation enables confirmation, to some extent, of the potential problems that might occur in its translation. Understanding the organization and the language of the book will help translators to prepare themselves better for the task. They can anticipate the research and time that may be needed to produce an effective translation. Another premise here is that this text-type implies certain ways of writing and organization. Accordingly, it seems practicable to adopt the functional theory of language as suggested by Michael Halliday as its theoretical framework. Concepts of the context of culture, the context of situation and measures of the field, tenor and mode form the instruments for analysis. Additional examples from similar materials can also be used to validate the findings. Some interesting findings include the presence of several other text-types or sub-text-types in the book and the dependence on literary discourse and devices to capture the meanings better or add color to the dry field of law. In addition, many elements of culture can be seen, for example, the use of familiar alternatives, allusions, and even terminology and references that date back to various periods of time and languages. Also found are parts which discuss the origins of words and terms that may be relevant to readers within the United Kingdom but make little sense to readers of the book in other languages. In conclusion, the textual analysis in terms of its functions and the linguistic and textual devices used to achieve them can be applied as a guide to determine the effectiveness of the translation that is produced.
Keywords: functional theory of language, lawbook text-type, rhetorical devices, culture
Procedia PDF Downloads 149
269 Biopolymers: A Solution for Replacing Polyethylene in Food Packaging
Authors: Sonia Amariei, Ionut Avramia, Florin Ursachi, Ancuta Chetrariu, Ancuta Petraru
Abstract:
The food industry is one of the major generators of plastic waste, derived from conventional synthetic petroleum-based polymers, which are non-biodegradable and used especially for packaging. After the food is consumed, these packaging materials raise serious environmental concerns due to the materials themselves but also to the organic residues that adhere to them. It is the concern of specialists and researchers to eliminate the problems related to conventional non-biodegradable materials or unnecessary plastic and to replace them with biodegradable and edible materials, supporting the common effort to protect the environment. Even though environmental and health concerns will cause more consumers to switch to a plant-based diet, most people will continue to add more meat to their diet. The paper presents the possibility of replacing the polyethylene packaging on the surface of trays for meat preparations with biodegradable packaging obtained from biopolymers. During the storage of meat products, deterioration by lipid oxidation and microbial spoilage may occur, as well as modification of the organoleptic characteristics. For this reason, different compositions of polymer mixtures and the conditions for obtaining the films must be studied to choose the best packaging material to achieve food safety. The compositions proposed for packaging are obtained from alginate, agar and starch, with glycerol as plasticizer. The tensile strength, elasticity, modulus of elasticity, thickness, density, microscopic images of the samples, roughness, opacity, humidity, water activity, the amount of water transferred as well as the speed of water transfer through these packaging materials were analyzed. A total of 28 samples with various compositions were analyzed, and the results showed that the sample with the highest values of hardness, density, and opacity, as well as the smallest water vapor permeability, of 1.2903E-4 ± 4.79E-6, has a component ratio of alginate:agar:glycerol of 3:1.25:0.75. The water activity of the analyzed films varied between 0.2886 and 0.3428 (aw < 0.6), demonstrating that all the compositions ensure the preservation of the products in the absence of microbial growth. All the determined parameters allow an assessment of the quality of the packaging films in terms of mechanical resistance, protection against the influence of light, and the transfer of water through the packaging. Acknowledgments: This work was supported by a grant of the Ministry of Research, Innovation, and Digitization, CNCS/CCCDI – UEFISCDI, project number PN-III-P2-2.1-PED-2019-3863, within PNCDI III.
Keywords: meat products, alginate, agar, starch, glycerol
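The water-vapor-permeability figures quoted above come from gravimetric measurements; the sketch below shows the standard WVTR/WVP calculation with assumed masses, film area, thickness and humidity gradient, and does not attempt to reproduce the reported value (whose units are not stated in the abstract).

```python
# Standard gravimetric WVTR/WVP calculation with placeholder measurements.
import numpy as np

time_h = np.array([0.0, 24.0, 48.0, 72.0, 96.0])          # weighing times, h
mass_g = np.array([50.00, 50.21, 50.43, 50.64, 50.86])     # cup mass gain (assumed)

area_m2 = 3.0e-3        # exposed film area, m^2 (assumed)
thickness_m = 8.0e-5    # film thickness, m (assumed)
p_sat_25C = 3169.0      # saturation vapor pressure of water at 25 C, Pa
rh_inside, rh_outside = 0.00, 0.75      # assumed RH gradient across the film
delta_p = (rh_outside - rh_inside) * p_sat_25C

slope_g_per_h = np.polyfit(time_h, mass_g, 1)[0]           # linear mass-gain rate
wvtr = (slope_g_per_h / 1000.0 / 3600.0) / area_m2         # kg m^-2 s^-1
wvp = wvtr * thickness_m / delta_p                         # kg m^-1 s^-1 Pa^-1

print(f"WVTR = {wvtr:.3e} kg/(m^2 s), WVP = {wvp:.3e} kg/(m s Pa)")
```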
Procedia PDF Downloads 167
268 Multi-Objective Optimization of the Thermal-Hydraulic Behavior for a Sodium Fast Reactor with a Gas Power Conversion System and a Loss of off-Site Power Simulation
Authors: Avent Grange, Frederic Bertrand, Jean-Baptiste Droin, Amandine Marrel, Jean-Henry Ferrasse, Olivier Boutin
Abstract:
CEA and its industrial partners are designing a gas Power Conversion System (PCS) based on a Brayton cycle for the ASTRID Sodium-cooled Fast Reactor. Investigations of the control and regulation requirements for operating this PCS during operating, incidental and accidental transients are necessary to adapt core heat removal. To this aim, we developed a methodology to optimize the thermal-hydraulic behavior of the reactor during normal operations, incidents and accidents. This methodology consists of a multi-objective optimization for a specific sequence, whose aim is to increase component lifetime by simultaneously reducing several thermal stresses and bringing the reactor into a stable state. Furthermore, the multi-objective optimization complies with safety and operating constraints. Operating, incidental and accidental sequences use specific regulations to control the thermal-hydraulic behavior of the reactor; each regulation is defined by a setpoint, a controller and an actuator. In the multi-objective problem, the parameters used to solve the optimization are the setpoints and the settings of the controllers associated with the regulations included in the sequence. In this way, the methodology allows designers to define an optimized and specific control strategy of the plant for the studied sequence and hence to pilot the PCS as effectively as possible. The multi-objective optimization is performed by evolutionary algorithms coupled to surrogate models built on variables computed by the thermal-hydraulic system code CATHARE2. The methodology is applied to a loss-of-off-site-power sequence. Three variables are controlled: the sodium outlet temperature of the sodium-gas heat exchanger, the turbomachine rotational speed and the water flow through the heat sink. These regulations are chosen in order to minimize thermal stresses on the gas-gas heat exchanger, on the sodium-gas heat exchanger and on the vessel. The main results of this work are optimal setpoints for the three regulations. Moreover, Proportional-Integral-Derivative (PID) controller settings are considered, and efficient actuators for the controls are chosen on the basis of sensitivity analysis results. Finally, the optimized regulation system and the reactor control procedure provided by the optimization process are verified through a direct CATHARE2 calculation. Keywords: gas power conversion system, loss of off-site power, multi-objective optimization, regulation, sodium fast reactor, surrogate model
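A minimal sketch of the kind of surrogate-assisted multi-objective search described above: two regulation setpoints, two stand-in objective functions playing the role of surrogate models fitted on CATHARE2 results, and a simple non-dominated filter. The analytic objective forms, bounds and sample size are assumptions for illustration only; the actual study uses evolutionary algorithms coupled to metamodels of the thermal-hydraulic code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for surrogate models of two thermal-stress objectives as functions of
# two normalised regulation setpoints (e.g. sodium outlet temperature, turbomachine
# speed). In the study these would be metamodels fitted on CATHARE2 simulations;
# the analytic forms below are purely illustrative.
def stress_hx_gas(x):      # objective 1: gas-gas heat-exchanger stress (to minimise)
    return (x[0] - 0.3) ** 2 + 0.5 * (x[1] - 0.7) ** 2

def stress_hx_sodium(x):   # objective 2: sodium-gas heat-exchanger stress (to minimise)
    return (x[0] - 0.8) ** 2 + 0.5 * (x[1] - 0.2) ** 2

# Sample candidate setpoint pairs in [0, 1]^2 and keep only the non-dominated ones.
candidates = rng.random((2000, 2))
objs = np.array([[stress_hx_gas(x), stress_hx_sodium(x)] for x in candidates])

def is_dominated(i):
    """True if some other candidate is at least as good on both objectives and better on one."""
    better_eq = np.all(objs <= objs[i], axis=1)
    strictly = np.any(objs < objs[i], axis=1)
    return np.any(better_eq & strictly)

pareto = [i for i in range(len(candidates)) if not is_dominated(i)]
print(f"{len(pareto)} non-dominated setpoint pairs found")
```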
Procedia PDF Downloads 308
267 An Efficient Automated Radiation Measuring System for Plasma Monopole Antenna
Authors: Gurkirandeep Kaur, Rana Pratap Yadav
Abstract:
This experimental study aims to examine the radiation characteristics of different plasma structures of a surface-wave-driven plasma antenna using an automated measuring system. In this study, a 30 cm long plasma column of argon gas with a diameter of 3 cm is excited by a surface wave discharge mechanism operating at 13.56 MHz, with RF power levels up to 100 Watts and gas pressures between 0.01 and 0.05 mb. The study reveals that a single-structure plasma monopole can be modified into an array of plasma antenna elements by forming multiple striations or plasma blobs inside the discharge tube through altering the plasma properties, such as working pressure, operating frequency, input RF power and discharge tube dimensions, i.e., length, radius and thickness. It is also reported that plasma length, electron density and conductivity are functions of the operating plasma parameters and are controlled by changing the working pressure and input power. To investigate the antenna radiation efficiency in the far-field region, an automation-based radiation measuring system has been fabricated and is presented in detail. This automated system involves a combined setup of a controller, DC servo motors, a vector network analyzer and a computing device to evaluate the radiation intensity, directivity, gain and efficiency of the plasma antenna. In this system, the controller is connected to multiple motors that move aluminum shafts in both the elevation and azimuthal planes, whereas the radiation from the plasma monopole antenna is measured by the vector network analyzer (VNA), which is in turn connected to the computing device to display the radiation patterns as polar plots. Here, the radiation characteristics of both continuous and array plasma monopole antennas have been studied for various working plasma parameters. The experimental results clearly indicate that the plasma antenna is as efficient as a metallic antenna. The radiation from the plasma monopole antenna is significantly influenced by the plasma properties, which provides a wide range of radiation patterns in which desired radiation parameters such as beamwidth, direction of radiation, radiation intensity and antenna efficiency can be achieved in a single monopole. Owing to this wide range of selectivity in the radiation pattern, such an antenna can meet the demand for wider bandwidth and high data speeds in communication systems. Moreover, the developed system provides an efficient and cost-effective solution for measuring the radiation pattern in the far-field zone of any kind of antenna system. Keywords: antenna radiation characteristics, dynamically reconfigurable, plasma antenna, plasma column, plasma striations, surface wave
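For context on how pattern measurements of this kind are typically post-processed, the sketch below estimates directivity and half-power beamwidth from an elevation-cut pattern of an antenna assumed to be azimuthally symmetric (as a monopole approximately is); the sampled pattern merely stands in for VNA-derived data and is not from the study.

```python
import numpy as np

# Assumed elevation-cut pattern samples (linear relative power); the sin^2 profile
# is a placeholder for measured data from the automated scan.
theta = np.deg2rad(np.arange(0.0, 180.5, 5.0))
pattern = np.sin(theta) ** 2
f_norm = pattern / pattern.max()          # normalised radiation intensity

# For an azimuthally symmetric pattern, D = 2 / integral_0^pi F(theta) sin(theta) dtheta.
y = f_norm * np.sin(theta)
integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(theta))   # trapezoidal rule
directivity = 2.0 / integral

# Half-power beamwidth from the -3 dB points of the cut.
above = theta[f_norm >= 0.5]
hpbw_deg = np.rad2deg(above[-1] - above[0])
print(f"Directivity ~ {directivity:.2f} ({10 * np.log10(directivity):.2f} dBi), HPBW ~ {hpbw_deg:.0f} deg")
```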
Procedia PDF Downloads 119
266 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles
Authors: Nozar Kishi, Babak Kamrani, Filmon Habte
Abstract:
Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Every year, on average, Japan experiences more than 10 tropical cyclones that come within damaging reach, and earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs and governmental institutions. KCC's (Karen Clark and Company) catastrophe models are procedures built from four modular segments: 1) stochastic event sets that represent the statistics of past events, 2) hazard attenuation functions that model the local intensity, 3) vulnerability functions that address the repair needs of local buildings exposed to the hazard, and 4) a financial module addressing the policy conditions and estimating the resulting losses. The events module comprises events (faults or tracks) of different intensities with corresponding probabilities. They are based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to the repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that yields events with intensities corresponding to the annual occurrence probabilities of interest to financial communities, such as 0.01, 0.004, etc. The intensities corresponding to these probabilities (called Characteristic Events, CE) are selected through a super-stratified sampling approach based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as concrete confined in steel (SRC, Steel-Reinforced Concrete) high-rises. Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM
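A minimal sketch of how characteristic-event intensities at target annual exceedance probabilities (e.g., 0.01 and 0.004) could be extracted from a stochastic event set with annual occurrence rates; the synthetic event set and rates below are invented for illustration and do not reflect the KCC model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stochastic event set: each event has a hazard intensity and an
# annual occurrence rate. All values are synthetic.
n_events = 50_000
intensity = rng.lognormal(mean=0.0, sigma=0.6, size=n_events)   # e.g. ground motion or wind speed
annual_rate = np.full(n_events, 10.0 / n_events)                 # ~10 damaging events per year overall

# Annual exceedance rate of each intensity level: total rate of events at or above it.
order = np.argsort(intensity)[::-1]                    # intensities, highest first
exceed_rate = np.cumsum(annual_rate[order])            # rate of exceeding intensity[order[i]]
exceed_prob = 1.0 - np.exp(-exceed_rate)               # Poisson rate -> annual probability

# Pick the characteristic-event intensity for each target annual probability.
for p_target in (0.01, 0.004):
    idx = np.searchsorted(exceed_prob, p_target)
    print(f"P = {p_target}: characteristic intensity ~ {intensity[order][idx]:.2f}")
```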
Procedia PDF Downloads 269
265 [Keynote Talk]: Monitoring of Ultrafine Particle Number and Size Distribution at One Urban Background Site in Leicester
Authors: Sarkawt M. Hama, Paul S. Monks, Rebecca L. Cordell
Abstract:
Within the JOAQUIN project, ultrafine particles (UFP) are continuously measured at one urban background site in Leicester. The main aims are to examine the temporal and seasonal variations in UFP number concentration and size distribution in an urban environment, and to assess the added value of continuous UFP measurements. In addition, the relations of UFP with more commonly monitored pollutants such as black carbon (BC), nitrogen oxides (NOX), particulate matter (PM2.5) and the lung-deposited surface area (LDSA) were evaluated. The effects of meteorological conditions, particularly wind speed and direction, as well as temperature, on the observed distribution of ultrafine particles are also detailed. The study presents the results of an experimental investigation into the particle number concentration and size distribution of UFP, together with BC and NOX, with measurements taken at the Automatic Urban and Rural Network (AURN) monitoring site in Leicester. The monitoring was performed as part of the EU project JOAQUIN (Joint Air Quality Initiative), supported by the INTERREG IVB NWE program. Between November 2013 and November 2015, the total number concentrations (TNC) were measured by a water-based condensation particle counter (W-CPC, TSI model 3783), the particle number concentrations (PNC) and size distributions were measured by an ultrafine particle monitor (UFP, TSI model 3031), BC by a MAAP (Thermo 5012), NOX by a NO-NO2-NOx monitor (Thermo Scientific 42i), and a Nanoparticle Surface Area Monitor (NSAM, TSI 3550) was used to measure the LDSA (reported as μm2 cm−3) corresponding to the alveolar region of the lung. Lower average particle number concentrations were observed in summer than in winter, which might be related mainly to particles directly emitted by traffic and to the more favorable conditions for atmospheric dispersion. Results showed a traffic-related diurnal variation of UFP, BC, NOX and LDSA, with clear morning and evening rush-hour peaks on weekdays and only an evening peak at weekends. Correlation coefficients were calculated between UFP and the other pollutants (BC and NOX); the highest correlations were found in the winter months. Overall, the results support the notion that local traffic emissions were a major contributor to atmospheric particle pollution, and a clear seasonal pattern was found, with higher values during the cold season. Keywords: size distribution, traffic emissions, UFP, urban area
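A minimal sketch of the kind of diurnal and correlation analysis described above, using synthetic hourly series in place of the measured UFP, BC and NOX data; the series shapes, units and values are illustrative assumptions only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Synthetic hourly series standing in for the two-year UFP / BC / NOx record.
idx = pd.date_range("2013-11-01", "2015-11-01", freq="h")
base = 1.0 + 0.6 * np.sin((idx.hour - 8) / 24 * 2 * np.pi) ** 2      # crude rush-hour shape
df = pd.DataFrame({
    "UFP": base * rng.lognormal(8.5, 0.3, len(idx)),    # particles cm^-3 (illustrative)
    "BC":  base * rng.lognormal(0.2, 0.4, len(idx)),    # ug m^-3 (illustrative)
    "NOx": base * rng.lognormal(3.0, 0.4, len(idx)),    # ppb (illustrative)
}, index=idx)

# Diurnal cycle: mean concentration by hour of day, split weekday / weekend.
weekend = df.index.dayofweek >= 5
diurnal = df.groupby([weekend, df.index.hour]).mean()

# Season-by-season Pearson correlations between UFP and the co-pollutants.
season = df.index.month % 12 // 3          # 0=DJF, 1=MAM, 2=JJA, 3=SON
corr_by_season = df.groupby(season).corr()
print(diurnal.head(3), corr_by_season.head(6), sep="\n\n")
```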
Procedia PDF Downloads 330
264 Investigating the Impacts on Cyclist Casualty Severity at Roundabouts: A UK Case Study
Authors: Nurten Akgun, Dilum Dissanayake, Neil Thorpe, Margaret C. Bell
Abstract:
Cycling has gained great attention owing to its comparable speeds, low cost, health benefits and reduced impact on the environment. The main challenge associated with cycling is the provision of safety for the people choosing to cycle as their main means of transport. From the road safety point of view, cyclists are considered vulnerable road users because they are at higher risk of serious casualty in the urban network, and more specifically at roundabouts. This research addresses the development of an enhanced mathematical model by including a broad spectrum of casualty-related variables. These variables were geometric design measures (number of approach lanes and entry path radius), speed limit, meteorological condition variables (light, weather, road surface) and socio-demographic characteristics (age and gender), as well as contributory factors. Contributory factors included driver-behavior variables such as failing to look properly, sudden braking, a vehicle passing too close to a cyclist, overshooting the junction, failing to judge the other person's path, restarting or moving off at the junction, a poor turn or manoeuvre, and disobeying a give-way. Tyne and Wear in the UK was selected as the case study area. The cyclist casualty data were obtained from the UK STATS19 national dataset. The outcome categories for the regression model were slight and serious cyclist casualties; therefore, binary logistic regression was applied. The binary logistic regression analysis showed that the number of approach lanes was statistically significant at the 95% confidence level: a higher number of approach lanes increased the probability of a severe cyclist casualty. In addition, sudden braking significantly increased cyclist casualty severity at the 95% confidence level. The results indicate that cyclist casualty severity was strongly related to the number of approach lanes and to sudden braking. Further research should carry out an in-depth analysis to explore the connection between sudden braking and the number of approach lanes in order to investigate driver behavior at approach locations. The output of this research will inform investment in measures to improve the safety of cyclists at roundabouts. Keywords: binary logistic regression, casualty severity, cyclist safety, roundabout
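A minimal sketch of the binary logistic regression set-up described above, using a synthetic stand-in for the STATS19-style records; variable names mirror those in the abstract, but all values and coefficients are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Synthetic casualty records (1 = serious, 0 = slight); values are illustrative only.
n = 800
df = pd.DataFrame({
    "approach_lanes": rng.integers(1, 4, n),
    "entry_path_radius": rng.normal(25, 6, n),
    "speed_limit": rng.choice([20, 30, 40], n),
    "sudden_braking": rng.integers(0, 2, n),
})
# Assumed data-generating process so the example has a signal to recover.
logit_true = -3.0 + 0.6 * df["approach_lanes"] + 1.1 * df["sudden_braking"]
df["serious"] = rng.random(n) < 1 / (1 + np.exp(-logit_true))

# Binary logistic regression of casualty severity on the candidate predictors.
X = sm.add_constant(df[["approach_lanes", "entry_path_radius", "speed_limit", "sudden_braking"]])
model = sm.Logit(df["serious"].astype(float), X.astype(float)).fit(disp=False)
print(model.summary())
print("Odds ratios:\n", np.exp(model.params))
```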
Procedia PDF Downloads 177
263 A Grid Synchronization Method Based On Adaptive Notch Filter for SPV System with Modified MPPT
Authors: Priyanka Chaudhary, M. Rizwan
Abstract:
This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system, along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of the various components of the grid signal, such as phase and frequency. It also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to bring the system in line with grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. As the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature and wind, MPPT control is required to track the maximum power point of the PV array and thereby maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable step size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter has been used in the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e., zone 0, zone 1 and zone 2. A fine tracking step size is used in zone 0, while zones 1 and 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid, maintaining the amplitude, phase and frequency parameters as well as improving power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB. The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters, such as PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current and power supplied by the voltage source converter. The results obtained from the proposed system are found to be satisfactory. Keywords: solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique
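A minimal sketch of a variable-step Perturb-and-Observe MPPT loop with a fine step near the maximum power point and a coarse step elsewhere, in the spirit of the three-zone scheme described above; the PV model, zone threshold and step sizes are illustrative assumptions, not the authors' tuned values.

```python
from math import exp

def pv_current(v, i_ph=8.0, i_0=1e-9, v_t=1.2):
    """Very simple single-diode PV model with illustrative parameters (A)."""
    return max(i_ph - i_0 * (exp(v / v_t) - 1.0), 0.0)

def step_size(dp_dv, fine=0.05, coarse=0.5, zone0_width=0.5):
    """Zone 0 (|dP/dV| small, i.e. near the MPP): fine step; zones 1 and 2: coarse step."""
    return fine if abs(dp_dv) < zone0_width else coarse

v, v_prev, p_prev = 20.0, 19.9, 0.0
for _ in range(200):
    p = v * pv_current(v)
    dv, dp = v - v_prev, p - p_prev
    dp_dv = dp / dv if dv != 0.0 else 0.0
    # Classic P&O rule: keep perturbing in the direction that increased power,
    # with the perturbation size chosen from the dP/dV zone.
    direction = 1.0 if dp_dv > 0 else -1.0
    v_prev, p_prev = v, p
    v += direction * step_size(dp_dv)

print(f"Operating point after 200 perturbations: V = {v:.2f} V, P = {p_prev:.2f} W")
```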
Procedia PDF Downloads 594
262 Analyzing the Heat Transfer Mechanism in a Tube Bundle Air-PCM Heat Exchanger: An Empirical Study
Authors: Maria De Los Angeles Ortega, Denis Bruneau, Patrick Sebastian, Jean-Pierre Nadeau, Alain Sommier, Saed Raji
Abstract:
Phase change materials (PCM) present attractive features that make them a passive solution for thermal comfort in buildings during summertime. They show a large storage capacity per unit volume in comparison with other structural materials such as brick or concrete. If their use is matched with the peak load periods, they can contribute to the reduction of the primary energy consumption related to cooling applications. Despite these promising characteristics, they present some drawbacks. Commercial PCMs, such as paraffins, have a low thermal conductivity, which affects the overall performance of the system. In some cases, the material can be enhanced by adding other elements that improve the conductivity, but in general a design of the unit that optimizes the thermal performance is sought. Material selection is the starting point of the design stage, and it does not leave much room for optimization. The required PCM melting point depends strongly on the atmospheric characteristics of the building location; it must lie between the maximum and minimum temperatures reached during the day. The geometry of the PCM containers and their geometrical distribution are design parameters as well. They significantly affect the heat transfer, and therefore these phenomena must be studied exhaustively. During its lifetime, an air-PCM unit in a building must cool the space during the daytime, while the PCM melts. At night, the PCM must be regenerated to be ready for the next use. When the system is not in service, a minimal amount of thermal exchange is desired. The aforementioned functions result in the presence of both sensible and latent heat storage and release; hence, different types of mechanisms drive the heat transfer phenomena. An experimental test was designed to study the heat transfer phenomena occurring in a circular tube bundle air-PCM exchanger. An in-line arrangement was selected as the geometrical distribution of the containers. To allow visual observation, the container material and a section of the test bench were transparent. Instruments were placed on the bench to measure temperature and velocity. The PCM properties were also available through differential scanning calorimetry (DSC) tests. The evolution of the temperature during both cycles, melting and solidification, was obtained. The results showed phenomena at a local level (tubes) and at an overall level (exchanger). Conduction and convection appeared as the main heat transfer mechanisms. From these results, two approaches to analyzing the heat transfer were followed. The first approach described the phenomena in a single tube as a series of thermal resistances, where purely conduction-controlled heat transfer was assumed in the PCM. In the second approach, the temperature measurements were used to find significant dimensionless numbers and parameters, such as the Stefan, Fourier and Rayleigh numbers, and the melting fraction. These approaches allowed the identification of the heat transfer phenomena during both cycles. The presence of natural convection during melting could be inferred from the influence of the Rayleigh number on the correlations obtained. Keywords: phase change materials, air-PCM exchangers, convection, conduction
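For reference, the dimensionless groups mentioned above can be computed as in the sketch below; the property values are typical paraffin-like numbers chosen for illustration, not the DSC-measured properties of the studied PCM.

```python
# Dimensionless groups used to characterise melting in an air-PCM exchanger.
# All property values and operating conditions below are assumed examples.

g = 9.81                 # gravitational acceleration (m s^-2)
L_char = 0.02            # characteristic length, e.g. tube inner radius (m), assumed
dT = 8.0                 # driving temperature difference (K), assumed
t = 3600.0               # elapsed time (s), assumed

cp = 2100.0              # liquid specific heat (J kg^-1 K^-1), paraffin-like
k = 0.2                  # thermal conductivity (W m^-1 K^-1)
rho = 780.0              # density (kg m^-3)
latent = 180e3           # latent heat of fusion (J kg^-1)
beta = 9e-4              # thermal expansion coefficient (K^-1)
nu = 4e-6                # kinematic viscosity (m^2 s^-1)

alpha = k / (rho * cp)                                 # thermal diffusivity (m^2 s^-1)
stefan = cp * dT / latent                              # sensible vs. latent heat
fourier = alpha * t / L_char**2                        # dimensionless time
rayleigh = g * beta * dT * L_char**3 / (nu * alpha)    # buoyancy vs. diffusion

# Melting fraction from a simple energy balance (sensible part neglected for brevity).
q_absorbed = 95e3        # heat absorbed per kg of PCM so far (J kg^-1), assumed
melt_fraction = min(q_absorbed / latent, 1.0)

print(f"Ste = {stefan:.3f}, Fo = {fourier:.3f}, Ra = {rayleigh:.2e}, melt fraction ~ {melt_fraction:.2f}")
```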
Procedia PDF Downloads 178
261 Genomic and Proteomic Variability in Glycine Max Genotypes in Response to Salt Stress
Authors: Faheema Khan
Abstract:
To investigate the ability of sensitive and tolerant genotypes of Glycine max to adapt to a saline environment in the field, we examined the growth performance, water relations and activities of antioxidant enzymes in relation to photosynthetic rate, chlorophyll a fluorescence, photosynthetic pigment concentration, protein and proline in plants exposed to salt stress. Ten soybean genotypes (Pusa-20, Pusa-40, Pusa-37, Pusa-16, Pusa-24, Pusa-22, BRAGG, PK-416, PK-1042, and DS-9712) were selected and grown hydroponically. After 3 days of proper germination, the seedlings were transferred to Hoagland's solution (Hoagland and Arnon 1950). The growth chamber was maintained at a photosynthetic photon flux density of 430 μmol m−2 s−1, 14 h of light, 10 h of dark and a relative humidity of 60%. The nutrient solution was bubbled with sterile air and changed on alternate days. Ten-day-old seedlings were given seven levels of salt in the form of NaCl, viz. T1 = 0 mM, T2 = 25 mM, T3 = 50 mM, T4 = 75 mM, T5 = 100 mM, T6 = 125 mM and T7 = 150 mM NaCl. The investigation showed that genotypes Pusa-24, PK-416 and Pusa-20 appeared to be the most salt-sensitive, as inferred from their significantly reduced length, fresh weight and dry weight in response to NaCl exposure. Pusa-37 appeared to be the most tolerant genotype, since no significant effect of NaCl treatment on growth was found. We observed a greater decline in photosynthetic variables such as photosynthetic rate, chlorophyll fluorescence and chlorophyll content in the salt-sensitive genotype (Pusa-24) than in the salt-tolerant Pusa-37 under high salinity. Numerous primers obtained from Operon Technologies were screened on the ten soybean genotypes, among which 30 RAPD primers showed high polymorphism and genetic variation. Jaccard's similarity coefficient values were calculated for each pairwise comparison between cultivars, and a similarity coefficient matrix was constructed. Varieties that clustered more closely behaved similarly in their response to salinity. Intra-clustering within the two clusters precisely grouped the 10 genotypes into sub-clusters, as expected from the physiological findings. The salt-tolerant genotype Pusa-37 was further analysed by two-dimensional gel electrophoresis to study the differential expression of proteins under high salt stress. In the present study, 173 protein spots were identified. Of these, 40 proteins responsive to salinity were either up- or down-regulated in Pusa-37. Proteomic analysis of the salt-tolerant genotype (Pusa-37) led to the detection of proteins involved in a variety of biological processes, such as protein synthesis (12%), redox regulation (19%), primary and secondary metabolism (25%), and disease- and defence-related processes (32%). In conclusion, the soybean plants in our study responded to salt stress by changing their protein expression pattern. The photosynthetic, biochemical and molecular studies showed that there is variability in salt tolerance behaviour among soybean genotypes: Pusa-24 is the salt-sensitive and Pusa-37 the salt-tolerant genotype. Moreover, this study gives new insights into the salt-stress response in soybean and demonstrates the power of genomic and proteomic approaches in plant biology, which could ultimately help in identifying the possible regulatory switches (gene/s) controlling the salt-tolerant genotype of crop plants and their possible role in defence mechanisms. Keywords: glycine max, salt stress, RAPD, genomic and proteomic variability
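A minimal sketch of the Jaccard-similarity and clustering step described above, applied to a synthetic 0/1 RAPD band-scoring matrix; the band data are random placeholders, so the resulting clusters do not reflect the study's findings.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)

# Synthetic presence/absence band matrix: 10 genotypes x 60 scored RAPD bands.
genotypes = ["Pusa-20", "Pusa-40", "Pusa-37", "Pusa-16", "Pusa-24",
             "Pusa-22", "BRAGG", "PK-416", "PK-1042", "DS-9712"]
bands = rng.integers(0, 2, size=(len(genotypes), 60)).astype(bool)

# Jaccard similarity coefficient for every pairwise comparison between genotypes.
jaccard_dist = pdist(bands, metric="jaccard")          # pdist returns 1 - similarity
similarity_matrix = 1.0 - squareform(jaccard_dist)

# UPGMA-style hierarchical clustering on the Jaccard distances, cut into two clusters.
tree = linkage(jaccard_dist, method="average")
clusters = fcluster(tree, t=2, criterion="maxclust")
for name, c in zip(genotypes, clusters):
    print(f"{name}: cluster {c}")
```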
Procedia PDF Downloads 423
260 Absolute Quantification of the Bexsero Vaccine Component Factor H Binding Protein (fHbp) by Selected Reaction Monitoring: The Contribution of Mass Spectrometry in Vaccinology
Authors: Massimiliano Biagini, Marco Spinsanti, Gabriella De Angelis, Sara Tomei, Ilaria Ferlenghi, Maria Scarselli, Alessia Biolchi, Alessandro Muzzi, Brunella Brunelli, Silvana Savino, Marzia M. Giuliani, Isabel Delany, Paolo Costantino, Rino Rappuoli, Vega Masignani, Nathalie Norais
Abstract:
The gram-negative bacterium Neisseria meningitidis serogroup B (MenB) is an exclusively human pathogen representing the major cause of meningitis and severe sepsis in infants and children, but also in young adults. This pathogen is usually carried by about 30% of the healthy population, which acts as a reservoir, spreading it through saliva and respiratory fluids during coughing, sneezing and kissing. Among the surface-exposed protein components of this diplococcus, factor H binding protein (fHbp) is a lipoprotein proven to be a protective antigen and used as a component of the recently licensed Bexsero vaccine. fHbp is a highly variable meningococcal protein: to reflect its remarkable sequence variability, it has been classified into three variants (or two subfamilies), with poor cross-protection among the different variants. Furthermore, the level of fHbp expression varies significantly among strains, and this has also been considered an important factor for predicting MenB strain susceptibility to anti-fHbp antisera. Different methods have been used to assess fHbp expression on meningococcal strains; however, all these methods use anti-fHbp antibodies, and for this reason the results are affected by the different affinities that antibodies can have for the different antigenic variants. To overcome the limitations of antibody-based quantification, we developed a quantitative Mass Spectrometry (MS) approach. Selected Reaction Monitoring (SRM) has recently emerged as a powerful MS tool for detecting and quantifying proteins in complex mixtures. SRM is based on the targeted detection of proteotypic peptides (PTPs), which are unique signatures of a protein that can be easily detected and quantified by MS. This approach, proven to be highly sensitive, quantitatively accurate and highly reproducible, was used to quantify the absolute amount of fHbp antigen in total extracts derived from 105 clinical isolates, evenly distributed among the three main variant groups and selected to be representative of the fHbp subvariants circulating around the world. We extended the study to the genetic level, investigating the correlation between the differential levels of expression and the polymorphisms present within the genes and their promoter sequences. The implications of fHbp expression for the susceptibility of the strains to killing by anti-fHbp antisera are also presented. To date, this is the first comprehensive fHbp expression profiling in a large panel of Neisseria meningitidis clinical isolates driven by an antibody-independent, MS-based methodology, opening the door to new applications in vaccine coverage prediction and reinforcing the molecular understanding of released vaccines. Keywords: quantitative mass spectrometry, Neisseria meningitidis, vaccines, bexsero, molecular epidemiology
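For illustration, one common way to turn SRM peak areas into an absolute amount is the isotope-dilution calculation sketched below; the abstract does not state the exact calibration scheme used, and every number here (spike amount, peak areas, molecular mass) is an assumed example value.

```python
# Illustrative AQUA-style calculation: absolute amount of a target protein from the
# SRM peak-area ratio of an endogenous proteotypic peptide to a spiked, isotope-
# labelled standard. All values are assumed examples, not data from the study.

heavy_spiked_fmol = 50.0        # known amount of labelled peptide added to the digest
area_light = 2.4e6              # SRM peak area, endogenous (light) peptide
area_heavy = 1.6e6              # SRM peak area, labelled (heavy) standard

target_fmol = heavy_spiked_fmol * area_light / area_heavy   # light/heavy ratio x spike
extract_loaded_ug = 2.0                                      # total extract digested, assumed
target_mw_da = 28_000.0                                      # approximate protein mass, assumed

target_ng = target_fmol * 1e-15 * target_mw_da * 1e9         # fmol -> ng of protein
print(f"Target ~ {target_fmol:.1f} fmol ~ {target_ng:.2f} ng per {extract_loaded_ug} ug of extract")
```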
Procedia PDF Downloads 312
259 Imbalance on the Croatian Housing Market in the Aftermath of an Economic Crisis
Authors: Tamara Slišković, Tomislav Sekur
Abstract:
This manuscript examines the factors that affect demand and supply in the Croatian housing market. The period from the beginning of this century until 2008 was characterized by a strong expansion of construction, housing and the real estate market in general. Demand for residential units was expanding, supported by banks' favorable lending conditions. Indicators on the supply side, such as the number of newly built houses and the construction volume index, were also increasing. The rapid growth of demand, along with somewhat slower supply growth, led to a situation in which new apartments were sold before the completion of residential buildings. This resulted in rising housing prices, which indicated a clear link between housing prices and supply and demand in the housing market. However, after 2008, general economic conditions in Croatia worsened and demand for housing fell dramatically, while supply declined at a much slower pace. Given that there is a gap between supply and demand, it can be concluded that the housing market in Croatia is in imbalance. This trend was accompanied by a relatively small decrease in housing prices. For this reason, it can be argued that housing prices are sticky and that, consequently, the price level in the aftermath of a crisis does not correspond to the discrepancy between supply and demand on the Croatian housing market. The degree of housing price rigidity can be determined by including the housing price as an explanatory variable in the housing demand function. Other independent variables are a demographic variable (e.g. the number of households), the interest rate on housing loans, households' disposable income and rent. The equilibrium price is reached when the demand for housing equals its supply, and the speed of adjustment of actual prices to equilibrium prices reveals the extent to which prices are rigid. The latter requires the inclusion of lagged housing prices as an independent variable when estimating the demand function. We also examine the supply side of the housing market, in order to establish to what extent housing prices, and other variables that describe supply, explain the movement of new construction activity. In this context, we test whether new construction on the Croatian market depends on current prices or on lagged prices. The number of dwellings is used to approximate new construction (a flow variable), while housing prices (current or lagged), the quantity of dwellings in the previous period (a stock variable) and a series of costs related to new construction are the independent variables. We conclude that the key reason for the imbalance in the Croatian housing market should be sought in the relative price elasticities of supply and demand. Keywords: Croatian housing market, economic crisis, housing prices, supply imbalance, demand imbalance
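A minimal sketch of the estimation set-up described above, in which the lagged housing price enters the demand equation and its coefficient measures price stickiness; the synthetic quarterly series and coefficients are invented purely to illustrate the mechanics, not to mimic the Croatian data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Synthetic quarterly series standing in for the housing-market variables.
T = 80
income = np.cumsum(rng.normal(0.5, 1.0, T)) + 100        # disposable income index, assumed
interest = 6 + rng.normal(0, 0.5, T)                     # housing-loan interest rate (%), assumed
households = np.linspace(1.40, 1.55, T)                  # millions of households, assumed
price = 100 + np.cumsum(rng.normal(0, 1.0, T))           # housing price index, assumed

# Demand-side equation with the lagged price capturing sticky adjustment:
# p_t = a0 + a1*p_{t-1} + a2*income_t + a3*interest_t + a4*households_t + e_t
y = price[1:]
X = sm.add_constant(np.column_stack([price[:-1], income[1:], interest[1:], households[1:]]))
fit = sm.OLS(y, X).fit()

speed_of_adjustment = 1.0 - fit.params[1]    # 1 = fully flexible prices, 0 = completely rigid
print(fit.params)
print(f"Implied speed of adjustment per period: {speed_of_adjustment:.2f}")
```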
Procedia PDF Downloads 271
258 Evaluation of Existing Wheat Genotypes of Bangladesh in Response to Salinity
Authors: Jahangir Alam, Ayman El Sabagh, Kamrul Hasan, Shafiqul Islam Sikdar, Celaleddin Barutçular, Sohidul Islam
Abstract:
The experiment (germination test and seedling growth) was carried out at the laboratory of the Agronomy Department, Hajee Mohammad Danesh Science and Technology University (HSTU), Dinajpur, Bangladesh, during January 2014. Germination and seedling growth of 22 existing wheat genotypes in Bangladesh, viz. Kheri, Kalyansona, Sonora, Sonalika, Pavon, Kanchan, Akbar, Barkat, Aghrani, Prativa, Sourab, Gourab, Shatabdi, Sufi, Bijoy, Prodip, BARI Gom 25, BARI Gom 26, BARI Gom 27, BARI Gom 28, Durum and Triticale, were tested at three salinity levels (0, 100 and 200 mM NaCl) for 10 days in sand culture in small plastic pots. The speed of germination, as expressed by germination percentage (GP), rate of germination (GR), germination coefficient (GC) and germination vigor index (GVI), was delayed in all wheat genotypes, and the germination percentage was reduced due to salinization compared to the control. A lower reduction of GP, GR, GC and GVI due to salinity was observed in BARI Gom 25, BARI Gom 27, Shatabdi, Sonora and Akbar, and a higher reduction was recorded in BARI Gom 26, Durum, Triticale, Sufi and Kheri. Shoot and root lengths and fresh and dry weights were affected by salinization, and the shoot was more affected than the root. Under saline conditions, longer shoot and root lengths were recorded in BARI Gom 25, BARI Gom 27, Akbar and Shatabdi, i.e., less reduction of shoot and root lengths was observed, while BARI Gom 26, Durum, Prodip and Triticale produced shorter shoots and roots. In this study, the genotypes BARI Gom 25, BARI Gom 27, Shatabdi, Sonora and Aghrani showed better performance in terms of shoot and root growth (fresh and dry weights) and proved to be salt-tolerant genotypes. On the other hand, Durum, BARI Gom 26, Triticale, Kheri and Prodip were seriously affected by the saline environment in terms of fresh and dry weights. BARI Gom 25, BARI Gom 27, Shatabdi, Sonora and Aghrani showed a higher salt tolerance index (STI) based on shoot dry weight, while BARI Gom 26, Triticale, Durum, Sufi, Prodip and Kalyansona demonstrated lower STI values under saline conditions. Based on the most salt-tolerant and most susceptible traits, the genotypes under 100 and 200 mM NaCl stress can be ranked as salt-tolerant genotypes: BARI Gom 25 > BARI Gom 27 > Shatabdi > Sonora, and salt-susceptible genotypes: BARI Gom 26 > Durum > Triticale > Prodip > Sufi > Kheri. Considering the experiment, it can be concluded that BARI Gom 25 may be treated as the most salt-tolerant and BARI Gom 26 as the most salt-sensitive genotype in Bangladesh. Keywords: genotypes, germination, salinity, wheat
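For illustration, the germination and tolerance indices named above can be computed as in the sketch below; formula conventions for GR, GC, GVI and STI vary among authors, so the definitions used here are common ones rather than necessarily those of the study, and all counts and weights are assumed example values.

```python
# Germination and salt-tolerance indices for one hypothetical genotype.
# Counts, weights and the index definitions chosen here are illustrative assumptions.

days = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
germinated_per_day = [0, 2, 6, 9, 3, 2, 1, 0, 0, 0]   # seeds germinating on each day, assumed
seeds_sown = 25

total = sum(germinated_per_day)
gp = 100.0 * total / seeds_sown                                              # germination percentage
gr = sum(n / d for n, d in zip(germinated_per_day, days))                    # germination rate (Maguire-type index)
gc = 100.0 * total / sum(n * d for n, d in zip(germinated_per_day, days))    # germination coefficient

seedling_dry_weight_mg = 42.0          # mean seedling dry weight for this genotype, assumed
gvi = gp * seedling_dry_weight_mg / 100.0                                    # germination vigor index

# Salt tolerance index based on shoot dry weight: stressed relative to control.
shoot_dw_control_mg, shoot_dw_100mM_mg = 55.0, 38.0
sti = shoot_dw_100mM_mg / shoot_dw_control_mg

print(f"GP = {gp:.1f}%, GR = {gr:.2f}, GC = {gc:.2f}, GVI = {gvi:.1f}, STI(100 mM) = {sti:.2f}")
```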
Procedia PDF Downloads 305