Search results for: density measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5875

3685 Solution of the Blast Wave Problem in Dusty Gas

Authors: Triloki Nath, R. K. Gupta, L. P. Singh

Abstract:

The aim of this paper is to find a new exact solution of the blast wave problem in one-dimensional unsteady adiabatic flow for generalized geometry in a compressible, inviscid ideal gas with dust particles. The density of the undisturbed region is assumed to vary according to a power law of the distance from the point of explosion. The exact solution of the problem, in the form of powers of the distance and the time, is obtained. Further, the behaviour of the total energy carried by the blast wave for planar, cylindrically symmetric and spherically symmetric flow corresponding to different Mach numbers of the fluid flow in the dusty gas is presented. It is observed that the presence of dust particles in the gas yields a more complex expression than in ordinary gas dynamics.
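As a hedged illustration of the setup described above (the symbols and the dimensional argument below are a generic sketch, not the paper's derivation), the ambient power-law density and the resulting power-law shock trajectory can be written as:

```latex
% Ambient density as a power law of distance from the explosion point
% (\rho_c is a reference density and w an assumed exponent):
\[
  \rho_0(r) = \rho_c \, r^{\,w}
\]
% For a blast of energy E in a medium with geometry index j
% (j = 1, 2, 3 for planar, cylindrical, spherical symmetry),
% dimensional analysis gives a shock radius that is a power of time:
\[
  R_s(t) \;\propto\; \left( \frac{E\,t^{2}}{\rho_c} \right)^{1/(j+2+w)}
\]
```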

Keywords: shock wave, blast wave, dusty gas, strong shock

Procedia PDF Downloads 315
3684 Further Investigation of α+12C and α+16O Elastic Scattering

Authors: Sh. Hamada

Abstract:

The current work aims to study the rainbow-like structure observed in the elastic scattering of alpha particles on both 12C and 16O nuclei. We reanalyzed the experimental elastic scattering angular distribution data for the α+12C and α+16O nuclear systems at different energies using both the optical model and double folding potentials based on different interaction models: CDM3Y1, DDM3Y1, CDM3Y6 and BDM3Y1. The potential created by the BDM3Y1 interaction model has the shallowest depth, which reflects the necessity of using a higher renormalization factor (Nr). Both the optical model and the double folding potentials of the different interaction models reproduce the experimental data fairly well.
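For reference, the conventional double folding form behind this kind of analysis can be sketched as follows (standard notation, with N_r the renormalization factor mentioned in the abstract; this is not quoted from the paper itself):

```latex
% The effective alpha-nucleus potential is obtained by folding an effective
% nucleon-nucleon interaction v_NN over the projectile and target
% ground-state density distributions, then scaling by N_r:
\[
  V_{DF}(R) = N_r \int\!\!\int
              \rho_{\alpha}(\mathbf{r}_1)\,\rho_{A}(\mathbf{r}_2)\,
              v_{NN}(\mathbf{s})\; d^{3}r_1\, d^{3}r_2 ,
  \qquad
  \mathbf{s} = \mathbf{R} + \mathbf{r}_2 - \mathbf{r}_1
\]
```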

Keywords: density distribution, double folding, elastic scattering, nuclear rainbow, optical model

Procedia PDF Downloads 223
3683 Experimental Study of Discharge with Sharp-Crested Weirs

Authors: E. Keramaris, V. Kanakoudis

Abstract:

In this study the water flow in an open channel over a sharp-crested weir is investigated experimentally. For this reason a series of laboratory experiments were performed in an open channel with a sharp-crested weir. The maximum head expected over the weir, the total upstream water height and the downstream water depth at the point of impact on the fixed bed of the open channel were measured. The discharge was measured using a tank placed immediately downstream of the open channel. In addition, the discharge and the upstream velocity were also calculated using well-known equations. The main finding is that the relative error for the majority of the experimental measurements is within ±4%, meaning that the discharge calculated from the sharp-crested weir equations agrees very well with the measured values.
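As an illustrative sketch (the discharge coefficient and channel dimensions below are assumptions, not the authors' data), the discharge over a rectangular sharp-crested weir is commonly estimated from the measured head with the classical formula Q = (2/3)·Cd·b·√(2g)·H^(3/2), and the relative error against a volumetric (tank) measurement follows directly:

```python
import math

def weir_discharge(head_m: float, width_m: float, cd: float = 0.62) -> float:
    """Classical rectangular sharp-crested weir formula: Q = (2/3)*Cd*b*sqrt(2g)*H^1.5."""
    g = 9.81  # gravitational acceleration, m/s^2
    return (2.0 / 3.0) * cd * width_m * math.sqrt(2.0 * g) * head_m ** 1.5

# Hypothetical example values (not the experimental data of the paper)
H = 0.08          # measured head over the weir crest, m
b = 0.30          # weir/channel width, m
Q_formula = weir_discharge(H, b)   # m^3/s from the weir equation
Q_tank = 0.0126                    # m^3/s from the volumetric tank measurement
relative_error = (Q_formula - Q_tank) / Q_tank * 100.0
print(f"Q_formula = {Q_formula:.4f} m^3/s, relative error = {relative_error:+.1f} %")
```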

Keywords: sharp-crested weir, weir height, flow measurement, open channel flow

Procedia PDF Downloads 128
3682 Understanding the Role of Nitric Oxide Synthase 1 in Low-Density Lipoprotein Uptake by Macrophages and Implication in Atherosclerosis Progression

Authors: Anjali Roy, Mirza S. Baig

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the formation of lipid-rich plaque enriched with a necrotic core, modified lipid accumulation, smooth muscle cells, endothelial cells, leucocytes and macrophages. Macrophage foam cells play a critical role in the occurrence and development of the inflammatory atherosclerotic plaque. Foam cells are the fat-laden macrophages in the initial stage of atherosclerotic lesion formation. Foam cells are an indication of plaque build-up, or atherosclerosis, which is commonly associated with an increased risk of heart attack and stroke as a result of arterial narrowing and hardening. The mechanisms that drive atherosclerotic plaque progression remain largely unknown. Dissecting the molecular mechanism involved in the process of macrophage foam cell formation will help to develop therapeutic interventions for atherosclerosis. To investigate the mechanism, we studied the role of nitric oxide synthase 1 (NOS1)-mediated nitric oxide (NO) in low-density lipoprotein (LDL) uptake by bone marrow derived macrophages (BMDM). Using confocal microscopy, we found that incubation of macrophages with the NOS1 inhibitors TRIM (1-(2-trifluoromethylphenyl) imidazole) or L-NAME (N-omega-nitro-L-arginine methyl ester) prior to LDL treatment significantly reduces LDL uptake by BMDM. Further, addition of an NO donor (DEA NONOate) to NOS1 inhibitor treated macrophages recovers the LDL uptake. Our data strongly suggest that NOS1-derived NO regulates LDL uptake by macrophages and foam cell formation. Moreover, we also checked proinflammatory cytokine mRNA expression by real-time PCR in BMDM treated with LDL and copper-oxidized LDL (OxLDL) in the presence and absence of the inhibitor. Normal LDL does not evoke cytokine expression, whereas OxLDL induces proinflammatory cytokine expression, which is significantly reduced in the presence of the NOS1 inhibitor. Rapid NOS1-derived NO and the formation of its stable derivatives act as signaling agents for inducible NOS2 expression in endothelial cells, leading to disruption and dysfunction of the endothelial lining of the vascular wall. This study highlights the role of NOS1 as a critical player in foam cell formation and should reveal much about the key molecular proteins involved in atherosclerosis. Thus, targeting NOS1 would be a useful strategy for reducing LDL uptake by macrophages at an early stage of the disease and hence slowing atherosclerosis progression.

Keywords: atherosclerosis, NOS1, inflammation, oxidized LDL

Procedia PDF Downloads 117
3681 Analysis of the Properties of Hydrophobised Heat-Insulating Mortar with Perlite

Authors: Danuta Barnat-Hunek

Abstract:

The studies are devoted to assessing the effectiveness of hydrophobic and air-entraining admixtures based on organosilicon compounds. Mortars with a lightweight aggregate, perlite, were the subject of the investigation. The following laboratory tests were performed: density, open porosity, total porosity, absorptivity, water vapour diffusion capability, compressive strength, flexural strength, frost resistance, sodium sulphate corrosion resistance and the thermal conductivity coefficient. Two mortar mixtures were prepared: mortars without a hydrophobic admixture and mortars with a cementitious waterproofing material. Surface hydrophobisation was applied to the mortars without a hydrophobic admixture using a methyl silicone resin, a water-based emulsion of methyl silicone resin in potassium hydroxide, and an alkyl-alkoxy-silane in organic solvents. The results on the effectiveness of hydrophobisation of the mortars are the following. The highest absorption after 14 days of testing was shown by the mortar without an agent (57.5%), while the lowest was demonstrated by the mortar with methyl silicone resin (52.7%). After 14 days in water the hydrophobisation treatment of the samples proved to be ineffective. The hydrophobised mortars are characterized by an insignificant mass change due to freezing and thawing processes: about 1% in the case of the methyl silicone resin, versus about 5% for the samples without hydrophobisation. This agent efficiently protected the mortars against frost corrosion. The standard samples showed very good resistance to the pressure of sodium sulphate crystallization. The organosilicon compounds have a negative influence on the chemical resistance (weight loss of about 7%): the mass loss of the non-hydrophobised mortar was two times lower than that of the mortar with the hydrophobic admixture. The hydrophobic and aeration admixtures significantly affect the thermal conductivity, and the difference is mainly due to the difference in porosity of the compared materials. Hydrophobisation of the mortar mass slightly decreased the porosity of the mortar and thus resulted in a 20% increase in its compressive strength. The admixture adversely affected the performance of the hydrophobic mortar – it achieved the opposite effect: as a result of hydrophobising the mass, the mortar samples decreased in density and showed increased wettability. The poor protection of the mortar surface is probably due to the short saturation time of the samples during preparation. The mortars were characterized by high porosity (65%) and water absorption (57.5%), so to achieve better efficiency, extending the hydrophobisation time would be advisable. The highest efficiency was obtained for the surface hydrophobised with the methyl silicone resin.

Keywords: hydrophobisation, mortars, salt crystallization, frost resistance

Procedia PDF Downloads 196
3680 Thickness Measurement and Void Detection in Concrete Elements through Ultrasonic Pulse

Authors: Leonel Lipa Cusi, Enrique Nestor Pasquel Carbajal, Laura Marina Navarro Alvarado, José Del Álamo Carazas

Abstract:

This research analyses the accuracy of the ultrasound and pulse-echo ultrasound techniques for finding voids and measuring the thickness of concrete elements. The air voids are simulated with expanded polystyrene and thin-walled hollow containers made of plastic or cardboard, of different sizes and shapes, distributed strategically inside the concrete at different depths. For this research, a 50 kHz shear-wave pulse-echo ultrasonic device is used to scan the concrete elements. Because the concrete elements are small and the void sizes are close to half the wavelength, pre- and post-processing steps such as voltage and gain adjustment, SAFT, envelope detection and time compensation were applied in order to improve the imaging results.
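A minimal delay-and-sum sketch of the SAFT step mentioned above (the array geometry, wave speed and sampling rate are assumed placeholders, not the actual device settings; envelope detection would typically be applied to the resulting image afterwards):

```python
import numpy as np

def saft_image(a_scans, x_elems, z_grid, x_grid, c, fs):
    """
    Minimal delay-and-sum SAFT (synthetic aperture focusing technique).
    a_scans : (n_elems, n_samples) pulse-echo signals, one per transducer position
    x_elems : (n_elems,) lateral transducer positions, m
    z_grid, x_grid : 1-D arrays defining the image grid, m
    c  : shear-wave speed in concrete, m/s (assumed)
    fs : sampling frequency, Hz
    """
    n_elems, n_samples = a_scans.shape
    image = np.zeros((len(z_grid), len(x_grid)))
    for iz, z in enumerate(z_grid):
        for ix, x in enumerate(x_grid):
            # two-way travel time from each element to the focal point and back
            t = 2.0 * np.sqrt((x_elems - x) ** 2 + z ** 2) / c
            idx = np.round(t * fs).astype(int)
            valid = idx < n_samples
            # coherent summation of the delayed samples over the aperture
            image[iz, ix] = np.abs(a_scans[np.arange(n_elems)[valid], idx[valid]].sum())
    return image
```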

Keywords: ultrasonic, concrete, thickness, pulse echo, void

Procedia PDF Downloads 312
3679 Application of Principal Component Analysis and Ordered Logit Model in Diabetic Kidney Disease Progression in People with Type 2 Diabetes

Authors: Mequanent Wale Mekonen, Edoardo Otranto, Angela Alibrandi

Abstract:

Diabetic kidney disease is one of the main microvascular complications caused by diabetes. Several clinical and biochemical variables are reported to be associated with diabetic kidney disease in people with type 2 diabetes. However, their interrelations could distort the effect estimation of these variables for the disease's progression. The objective of the study is to determine how the biochemical and clinical variables in people with type 2 diabetes are interrelated with each other and to estimate their effects on kidney disease progression through advanced statistical methods. First, principal component analysis was used to explore how the biochemical and clinical variables intercorrelate with each other, which helped us reduce a set of correlated biochemical variables to a smaller number of uncorrelated variables. Then, ordered logit regression models (cumulative, stage, and adjacent) were employed to assess the effect of biochemical and clinical variables on the ordered response variable (progression of kidney function), considering the proportionality assumption for more robust effect estimation. This retrospective cross-sectional study retrieved data from a type 2 diabetic cohort in a polyclinic hospital at the University of Messina, Italy. The principal component analysis yielded three uncorrelated components. These are principal component 1, with negative loadings of glycosylated haemoglobin, glycemia, and creatinine; principal component 2, with negative loadings of total cholesterol and low-density lipoprotein; and principal component 3, with a negative loading of high-density lipoprotein and a positive loading of triglycerides. The ordered logit models (cumulative, stage, and adjacent) showed that the first component (glycosylated haemoglobin, glycemia, and creatinine) had a significant effect on the progression of kidney disease. For instance, the cumulative odds model indicated that the first principal component (a linear combination of glycosylated haemoglobin, glycemia, and creatinine) had a strong and significant effect on the progression of kidney disease, with an odds ratio of 0.423 (p-value = 0.000). However, this effect was inconsistent across levels of kidney disease because the first principal component did not meet the proportionality assumption. To address the proportionality problem and provide robust effect estimates, alternative ordered logit models, such as the partial cumulative odds model, the partial adjacent category model, and the partial continuation ratio model, were used. These models suggested that clinical variables such as age, sex, body mass index, medication (metformin), and biochemical variables such as glycosylated haemoglobin, glycemia, and creatinine have a significant effect on the progression of kidney disease.
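A hedged sketch of the two-step analysis (PCA followed by an ordered logit on the component scores); the file and column names below are hypothetical, and statsmodels' OrderedModel is used purely for illustration, not as the authors' actual code:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical data frame: biochemical variables plus an ordered CKD stage (0, 1, 2, ...)
df = pd.read_csv("diabetic_cohort.csv")                    # file name is illustrative
biochem = ["hba1c", "glycemia", "creatinine",
           "total_chol", "ldl", "hdl", "triglycerides"]    # assumed column names

# 1) PCA on standardized biochemical variables to obtain uncorrelated components
scores = PCA(n_components=3).fit_transform(
    StandardScaler().fit_transform(df[biochem]))
df[["PC1", "PC2", "PC3"]] = scores

# 2) Proportional-odds (cumulative) ordered logit for kidney-disease stage
model = OrderedModel(df["ckd_stage"],
                     df[["PC1", "PC2", "PC3", "age", "sex", "bmi"]],
                     distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())
```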

Keywords: diabetic kidney disease, ordered logit model, principal component analysis, type 2 diabetes

Procedia PDF Downloads 20
3678 A Concept in Addressing the Singularity of the Emerging Universe

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at early times has been studied, giving rise to the big bang theory. According to this theory, moments after creation the universe was an extremely hot and dense environment. However, its rapid expansion led to a reduction in its temperature and density, as evidenced by the cosmic microwave background and the large-scale structure of the universe. Extrapolating back further from this early state, however, reaches a singularity which cannot be explained by modern physics, and the big bang theory is no longer valid there. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion, yet highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a “neutral state”, with an energy level referred to as “base energy”, capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture the origin of the base energy should also be identified. This matter is the subject of the first study in the series, “A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing”, which is discussed in detail. Therefore, the proposed concept in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation

Procedia PDF Downloads 73
3677 Information Retrieval for Kafficho Language

Authors: Mareye Zeleke Mekonen

Abstract:

The Kafficho language presents distinct issues for information retrieval because of its restricted resources and the dearth of standardized methods. In this endeavor, with the cooperation and support of linguists and native speakers, we investigate the creation of information retrieval systems specifically designed for the Kafficho language. The Kafficho information retrieval system allows Kafficho speakers to access information easily in an efficient and effective way. Our objective is to conduct an information retrieval experiment using 220 Kafficho text files and fifteen sample questions. Tokenization, normalization, stop word removal, stemming and other data pre-processing tasks, together with term weighting, were prerequisites for the vector space model used to represent each document and a given query. The three well-known measurement metrics used for our work were Precision, Recall and F-measure, with values of 87%, 28% and 35%, respectively. This demonstrates how the Kafficho information retrieval system performed while utilizing the vector space paradigm.
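A minimal vector-space retrieval and evaluation sketch in the spirit of the pipeline above (the corpus, query and relevance judgements are placeholders; Kafficho-specific stemming and stop-word removal would replace the default pre-processing):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for the 220 Kafficho text files (contents are placeholders)
docs = ["document one text", "document two text", "document three text"]
query = "sample question text"

vectorizer = TfidfVectorizer()             # term weighting after pre-processing
doc_vecs = vectorizer.fit_transform(docs)
query_vec = vectorizer.transform([query])

# Rank documents by cosine similarity to the query vector
scores = cosine_similarity(query_vec, doc_vecs).ravel()
ranking = scores.argsort()[::-1]

# Evaluation for one query: compare the retrieved set against a relevance judgement
retrieved = set(ranking[:2])               # e.g. top-2 documents returned
relevant = {0, 2}                          # hypothetical ground truth
tp = len(retrieved & relevant)
precision = tp / len(retrieved)
recall = tp / len(relevant)
f_measure = 2 * precision * recall / (precision + recall) if tp else 0.0
print(precision, recall, f_measure)
```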

Keywords: Kafficho, information retrieval, stemming, vector space

Procedia PDF Downloads 37
3676 Computational Analysis of Thermal Degradation in Wind Turbine Spars' Equipotential Bonding Subjected to Lightning Strikes

Authors: Antonio A. M. Laudani, Igor O. Golosnoy, Ole T. Thomsen

Abstract:

Rotor blades of large, modern wind turbines are highly susceptible to downward lightning strikes, as well as to triggering upward lightning; consequently, it is necessary to equip them with an effective lightning protection system (LPS) in order to avoid any damage. The performance of existing LPSs is affected by carbon fibre reinforced polymer (CFRP) structures, which lead to lightning-induced damage in the blades, e.g. via electrical sparks. A solution to prevent internal arcing would be to electrically bond the LPS and the composite structures so that they are at the same electric potential. Nevertheless, elevated temperatures are reached at the joint interfaces because of high contact resistance, which melts and vaporises some of the epoxy resin matrix around the bonding. The high-pressure gases produced open up the bonding and can ignite thermal sparks. The objective of this paper is to predict the current density distribution and the temperature field in the adhesive joint cross-section, in order to check whether the resin pyrolysis temperature is reached and any damage is to be expected. The finite element method has been employed to solve both the current and heat transfer problems, which are considered weakly coupled. The mathematical model for the electric current includes the Maxwell-Ampere equation for the induced electric field solved together with current conservation, while the thermal field is found from the heat diffusion equation. In this way, the current sub-model calculates the Joule heat release for a chosen bonding configuration, whereas the thermal analysis allows determining the threshold values of voltage and current density that must not be exceeded in order to maintain the temperature across the joint below the pyrolysis temperature, thereby preventing the occurrence of outgassing. In addition, it provides an indication of the minimal number of bonding points. It is worth mentioning that the numerical procedures presented in this study can be tailored and applied to any type of joint other than the adhesive ones for wind turbine blades. For instance, they can be applied to the lightning protection of aerospace bolted joints. Furthermore, they can even be customized to predict the electromagnetic response under lightning strikes of other wind turbine systems, such as nacelle and hub components.
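In compact form, the weakly coupled electro-thermal problem can be sketched with the standard field equations (a generic statement of the governing equations, not the authors' exact formulation or boundary conditions):

```latex
% Current conservation with Ohm's law, together with the Maxwell-Ampere
% equation for the induced fields in the joint cross-section:
\[
  \nabla \cdot \mathbf{J} = 0, \qquad
  \mathbf{J} = \sigma \mathbf{E}, \qquad
  \nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}
\]
% The Joule heat release |J|^2 / sigma from the current sub-model is the
% source term of the transient heat diffusion equation:
\[
  \rho c_p \frac{\partial T}{\partial t} =
  \nabla \cdot \left( k \, \nabla T \right) + \frac{|\mathbf{J}|^{2}}{\sigma}
\]
```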

Keywords: carbon fibre reinforced polymer, equipotential bonding, finite element method, FEM, lightning protection system, LPS, wind turbine blades

Procedia PDF Downloads 148
3675 1G2A IMU/GPS Integration Algorithm for Land Vehicle Navigation

Authors: O. Maklouf, Ahmed Abdulla

Abstract:

A general decline in the cost, size and power requirements of electronics is accelerating the adoption of integrated GPS/INS technologies in consumer applications such as land vehicle navigation. Researchers are looking for ways to eliminate additional components from product designs. One possibility is to drop one or more of the relatively expensive gyroscopes from microelectromechanical system (MEMS) versions of inertial measurement units (IMUs). For land vehicle use, the most important sensors are the vertical gyro, which senses the heading of the vehicle, and two horizontal accelerometers, which determine its velocity. This paper presents a simplified integration algorithm for a strapdown (ParIMU)/GPS combination, with data post-processing for the determination of the 2-D components of position (trajectory), velocity and heading. In the present approach we have neglected earth rotation and gravity variations, because of the poor gyroscope sensitivities of the low-cost IMU and because of the relatively small area of the trajectory.
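A loosely coupled 2-D sketch of the idea (one vertical gyro, one forward accelerometer, GPS position fixes); the noise levels are assumed and the covariance prediction step is omitted for brevity, so this is not the paper's exact ParIMU algorithm:

```python
import numpy as np

def dead_reckon(state, yaw_rate, accel_fwd, dt):
    """Propagate [x, y, heading, speed] from the vertical gyro and forward accelerometer."""
    x, y, psi, v = state
    psi += yaw_rate * dt
    v += accel_fwd * dt
    x += v * np.cos(psi) * dt
    y += v * np.sin(psi) * dt
    return np.array([x, y, psi, v])

def gps_update(state, P, gps_xy, R):
    """Linear Kalman position update: the measurement is the GPS easting/northing."""
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    y = np.asarray(gps_xy) - H @ state   # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    state = state + K @ y
    P = (np.eye(4) - K @ H) @ P
    return state, P

# Illustrative usage with assumed noise levels
state = np.zeros(4)                      # x, y, heading, speed
P = np.eye(4)
R = np.eye(2) * 5.0 ** 2                 # ~5 m GPS position noise (assumed)
state = dead_reckon(state, yaw_rate=0.01, accel_fwd=0.2, dt=0.01)
state, P = gps_update(state, P, gps_xy=[0.1, -0.2], R=R)
```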

Keywords: GPS, ParIMU, INS, Kalman filter

Procedia PDF Downloads 500
3674 Theoretical Paradigms for Total Quality Environmental Management (TQEM)

Authors: Mohammad Hossein Khasmafkan Nezam, Nader Chavoshi Boroujeni, Mohamad Reza Veshaghi

Abstract:

Quality management is dominated by rational paradigms for the measurement and management of quality, but these paradigms start to ‘break down’ when faced with the inherent complexity of managing quality in intensely competitive, changing environments. In this article, the various theoretical paradigms employed to manage quality are reviewed, and the advantages and limitations of these paradigms are highlighted. A major implication of this review is that, when faced with complexity, an ideological commitment to any single strategy paradigm for total quality environmental management is ineffective. We suggest that as complexity increases and we envisage intensely competitive, changing environments, there will be a greater need to consider a multi-paradigm, integrationist view of strategy for TQEM.

Keywords: total quality management (TQM), total quality environmental management (TQEM), ideologies (philosophy), theoretical paradigms

Procedia PDF Downloads 295
3673 Effects of Plant Densities on Seed Yield and Some Agricultural Characteristics of Jofs Pea Variety

Authors: Ayhan Aydoğdu, Ercan Ceyhan, Ali Kahraman, Nursel Çöl

Abstract:

This research was conducted to determine the effects of plant density on seed yield and some agricultural characteristics of the pea variety Jofs under Konya ecological conditions during the 2012 vegetation period. The trial was set up according to a “Randomized Blocks Design” with three replications. The Jofs pea variety was grown at three row spacings (30, 40 and 50 cm) and three intra-row distances (5, 10 and 15 cm). According to the results, the effects of row spacing and intra-row distance on seed yield were statistically significant. The highest seed yield was 2582.1 kg ha-1 at a 30 cm row spacing and 2562.2 kg ha-1 at a 15 cm intra-row distance. Consequently, the optimum planting density was determined to be 30 x 15 cm for the Jofs pea variety grown in Konya.

Keywords: pea, row space, row distance, seed yield

Procedia PDF Downloads 553
3672 Mechanical Characterization of Extrudable Foamed Concrete: An Experimental Study

Authors: D. Falliano, D. De Domenico, G. Ricciardi, E. Gugliandolo

Abstract:

This paper is focused on the mechanical characterization of foamed concrete specimens with a protein-based foaming agent. Unlike classic foamed concrete, a peculiar property of the analyzed foamed concrete is its extrudability, which is achieved via a specific additive in the concrete mix that significantly improves the cohesion and viscosity of the fresh cementitious paste. A broad experimental campaign was conducted to evaluate the compressive strength and the indirect tensile strength of the specimens. The study comprised three different cement types, two water/cement ratios, three curing conditions and three target dry densities. The variability of the strength values with the above-mentioned factors is discussed.

Keywords: cement type, curing conditions, density, extrudable concrete, foamed concrete, mechanical characterization

Procedia PDF Downloads 246
3671 Linear Study of Electrostatic Ion Temperature Gradient Mode with Entropy Gradient Drift and Sheared Ion Flows

Authors: M. Yaqub Khan, Usman Shabbir

Abstract:

The history of plasma research reveals that the continuous efforts of experimentalists and theorists have not yet yielded satisfactory confinement. A change of approach is needed, and entropy offers one: almost all the relevant quantities, such as number density, temperature and electrostatic potential, are connected to entropy. Therefore, it is worth changing the way the problem is framed. The ion temperature gradient mode is studied with the help of the Braginskii model and Boltzmann-distributed electrons, including the effect of velocity shear and incorporating entropy into the magnetoplasma. A new dispersion relation is derived for the ion temperature gradient mode, and its dependence on the entropy gradient drift is examined. It is also seen that velocity shear enhances the instability, whereas in anomalous transport its role is not as significant as that of entropy. This work will be helpful for the next steps in tokamak and space plasma research.

Keywords: entropy, velocity shear, ion temperature gradient mode, drift

Procedia PDF Downloads 369
3670 Land Use, Land Cover Changes and Woody Vegetation Status of Tsimur Saint Gebriel Monastery, in Tigray Region, Northern Ethiopia

Authors: Abraha Hatsey, Nesibu Yahya, Abeje Eshete

Abstract:

The Ethiopian Orthodox Tewahido Church has a long tradition of conserving church vegetation, and church compounds serve as refuges for many endangered indigenous tree species in Northern Ethiopia. Though around 36,000 churches exist in Ethiopia, only a few have been studied so far. Thus, this study assessed the land use and land cover change within a 3 km buffer (1986-2018) and the woody species diversity and regeneration status of the Tsimur St. Gebriel monastery in the Tigray region, Northern Ethiopia. For the vegetation study, systematic sampling was used with 100 m spacing between plots and between transects. The plot size was 20 m x 20 m for the main plot, with two subplots (5 m x 5 m each) for the regeneration study. Tree height, diameter at breast height (DBH) and crown area were measured in the main plot for all trees with DBH ≥ 5 cm. In the subplots, all seedlings and saplings with DBH < 5 cm were counted. The data were analyzed in Excel and in the Pass biodiversity software for diversity and evenness analysis. The major land cover classes identified include bare land, farmland, forest, shrubland and wetland. The extents of forest and shrubland declined considerably due to bare land and agricultural land expansion within the 3 km buffer, indicating increasing pressure on the church forest. Regarding the vegetation status, a total of 19 species belonging to 13 families were recorded in the monastery. The diversity (H') and evenness recorded were 2.4 and 0.5, respectively. The tree density (DBH ≥ 5 cm) was 336/ha with a crown cover of 65%. Olea europaea was the dominant (6.4 m2/ha out of a 10.5 m2/ha total basal area) and most frequent species (100%), with good regeneration in the monastery. The rest of the species are less frequent and are mostly confined to water sources with good site conditions. Juniperus procera (overharvested) and the other indigenous species were left with few trees and no or very poor regeneration. The species with poor density, frequency and regeneration (Juniperus procera, Nuxia congesta Fresen. and Jasminum abyssinicum) need priority conservation and enrichment planting. The indigenous species could also serve as a potential seed source for the reproduction and restoration of nearby degraded landscapes. The buffer study also demonstrated the expansion of agriculture and bare land, which could be a threat to the forest of the isolated monastery. Hence, restoring the buffer zone is the only guarantee for the healthy existence of the church forest.

Keywords: church forests, regeneration, land use change, vegetation status

Procedia PDF Downloads 189
3669 Material Selection for Footwear Insole Using Analytical Hierarchal Process

Authors: Mohammed A. Almomani, Dina W. Al-Qudah

Abstract:

Product performance depends on the type and quality of its constituent materials. A successful product must be made from high-quality material using the right methods. Many foot problems arise as a result of using poor insole materials. Therefore, selecting a proper insole material is crucial to eliminate these problems. In this study, the analytical hierarchy process (AHP) is used to provide a systematic procedure for choosing the material best suited to this application among three alternatives (polyurethane, Poron and Plastazote). Several comparison criteria are used to build the AHP model, including density, stiffness, durability, energy absorption and ease of fabrication. Poron was selected as the best choice. Inconsistency testing indicates that the model is reasonable and that the ranking of the material alternatives is effective.
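A sketch of the AHP priority computation for the five criteria listed above (the pairwise comparison values are illustrative judgments, not the authors' data); the same procedure is repeated to score the three insole materials under each criterion before combining the weights into a final ranking:

```python
import numpy as np

# Pairwise comparison matrix of the five criteria (values are illustrative only):
# density, stiffness, durability, energy absorption, ease of fabrication
A = np.array([
    [1,   1/3, 1/5, 1/5, 1],
    [3,   1,   1/3, 1/3, 3],
    [5,   3,   1,   1,   5],
    [5,   3,   1,   1,   5],
    [1,   1/3, 1/5, 1/5, 1],
], dtype=float)

# Priority vector = principal right eigenvector, normalized to sum to 1
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (RI = 1.12 is Saaty's random index for n = 5)
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 1.12
print("criteria weights:", np.round(weights, 3), " CR =", round(CR, 3))
```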

Keywords: AHP, footwear insole, insole material, materials selection

Procedia PDF Downloads 332
3668 Absorption Control of Organic Solar Cells under LED Light for High Efficiency Indoor Power System

Authors: Premkumar Vincent, Hyeok Kim, Jin-Hyuk Bae

Abstract:

Organic solar cells have high potential for indoor use, as they can absorb light much weaker than 1-sun illumination in indoor environments. They also have several practical advantages, such as flexibility, cost advantage and semi-transparency, which give them superiority in indoor solar energy harvesting. We investigate organic solar cells based on poly(3-hexylthiophene) (P3HT) and indene-C60 bisadduct (ICBA) for indoor application, and finite-difference time-domain (FDTD) simulations were run to find the optimized structure, which may provide the highest short-circuit current density and hence high efficiency under indoor illumination.

Keywords: indoor solar cells, indoor light harvesting, organic solar cells, P3HT:ICBA, renewable energy

Procedia PDF Downloads 289
3667 Investigation of the Turbulent Cavitating Flows from the Viewpoint of the Lift Coefficient

Authors: Ping-Ben Liu, Chien-Chou Tseng

Abstract:

The objective of this study is to investigate the relationship between the lift coefficient and the dynamic behaviour of the cavitating flow around a two-dimensional Clark Y hydrofoil at an 8° angle of attack, a cavitation number of 0.8 and a Reynolds number of 7×10⁵. The flow field is investigated numerically by using a vapor transport equation and a modified turbulence model which applies a filter and a local density correction. The results, including the time-averaged lift/drag coefficients and the shedding frequency, agree well with experimental observations, which confirms the reliability of this simulation. According to the variation of the lift coefficient, the cycle consisting of the growth and shedding of cavitation can be divided into three stages, and the lift coefficient at each stage behaves similarly due to the formation and shedding of the cavity around the trailing edge.
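For reference, the non-dimensional groups quoted above follow the usual definitions for a hydrofoil of chord c per unit span (a standard reminder, not a result of the paper):

```latex
% Cavitation number, Reynolds number and lift coefficient for free-stream
% speed U_inf, free-stream pressure p_inf and vapour pressure p_v:
\[
  \sigma = \frac{p_\infty - p_v}{\tfrac{1}{2}\rho U_\infty^{2}}, \qquad
  Re = \frac{\rho U_\infty c}{\mu}, \qquad
  C_L = \frac{L}{\tfrac{1}{2}\rho U_\infty^{2} c}
\]
```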

Keywords: Computational Fluid Dynamics, cavitation, turbulence, lift coefficient

Procedia PDF Downloads 333
3666 A Systematic Review of Situational Awareness and Cognitive Load Measurement in Driving

Authors: Aly Elshafei, Daniela Romano

Abstract:

With the development of autonomous vehicles, a human-machine interaction (HMI) system is needed for a safe transition of control when a takeover request (TOR) is required. An important part of the HMI system is the ability to monitor the level of situational awareness (SA) of any driver in real time, in different scenarios, and without any pre-calibration. The purpose of this systematic review is to present the state-of-the-art machine learning models used to measure SA, investigating the limitations of each type of sensor, the gaps, and the sensors and computational models best suited to driving applications. To the authors' best knowledge, this is the first literature review identifying online and offline classification methods used to measure SA, explaining which measurements are subject- or session-specific, and how many classifications can be performed with each classification model. This information can be very useful for researchers measuring SA in identifying the most suitable model for different applications.

Keywords: situational awareness, autonomous driving, gaze metrics, EEG, ECG

Procedia PDF Downloads 105
3665 Simulation and Analytical Investigation of Different Combination of Single Phase Power Transformers

Authors: M. Salih Taci, N. Tayebi, I. Bozkır

Abstract:

In this paper, the equivalent circuit of the ideal single-phase power transformer, together with the appropriate voltage and current measurements, is presented. The calculated voltages and currents for the different connections of a normal single-phase transformer are compared with the results of the simulation process. As can be seen, the calculated results are the same as the simulated results. This paper includes eight possible different transformer connections. Depending on the desired voltage level, step-down and step-up transformer applications are considered. Modelling and analysis of a system consisting of an equivalent source, a transformer (primary and secondary) and loads are performed to investigate the combinations. The obtained values are simulated in the PSpice environment, and then how the currents, voltages and phase angles are distributed among them is explained based on the calculations.
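The ideal single-phase transformer relations underlying the calculated voltages and currents can be summarized as follows (standard textbook form, included here only as a reminder):

```latex
% Ideal transformer with turns ratio a = N_1/N_2: voltages scale with the
% ratio, currents scale inversely, and a load Z_L on the secondary is
% reflected to the primary as a^2 Z_L.
\[
  \frac{V_1}{V_2} = \frac{N_1}{N_2} = a, \qquad
  \frac{I_1}{I_2} = \frac{N_2}{N_1} = \frac{1}{a}, \qquad
  Z_{1,\mathrm{eq}} = a^{2} Z_L
\]
```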

Keywords: transformer, simulation, equivalent model, parallel series combinations

Procedia PDF Downloads 348
3664 Assessment of Noise Pollution in the City of Biskra, Algeria

Authors: Tallal Abdel Karim Bouzir, Nourdinne Zemmouri, Djihed Berkouk

Abstract:

In this research, a quantitative assessment of the urban sound environment of the city of Biskra, Algeria, was conducted. The quality of the soundscape was determined from in-situ measurements using a Landtek SL5868P sound level meter at 47 points identified to represent the whole city. The results show that the urban noise level varies from 55.3 dB to 75.8 dB during weekdays and from 51.7 dB to 74.3 dB during the weekend. On the other hand, 70.20% of the weekday measurements and 55.30% of the weekend measurements have sound intensity levels that exceed the limits allowed by Algerian law and the recommendations of the World Health Organization. These very high urban noise levels affect the quality of life and acoustic comfort, and may even pose multiple risks to people's health.

Keywords: road traffic, noise pollution, sound intensity, public health

Procedia PDF Downloads 247
3663 Experimental Investigation of On-Body Channel Modelling at 2.45 GHz

Authors: Hasliza A. Rahim, Fareq Malek, Nur A. M. Affendi, Azuwa Ali, Norshafinash Saudin, Latifah Mohamed

Abstract:

This paper presents an experimental investigation of on-body channel fading at 2.45 GHz considering two states of user body movement: stationary and mobile. A pair of body-worn antennas was utilized in this measurement campaign. A statistical analysis was performed by comparing the measured on-body path loss to five well-known distributions: lognormal, normal, Nakagami, Weibull and Rayleigh. The results showed that the average path loss of the moving arm was up to 3.5 dB higher than the path loss in the sitting position for the upper-arm-to-left-chest link. The analysis also concluded that the Nakagami distribution provided the best fit for most of the static on-body link path losses in the standing and sitting positions, while the arm movement can best be described by the log-normal distribution.
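A sketch of how measured path-loss samples could be fitted to the five candidate distributions with scipy and ranked by a goodness-of-fit statistic (the synthetic data and the Kolmogorov-Smirnov criterion are illustrative choices, not the authors' exact procedure):

```python
import numpy as np
from scipy import stats

# path_loss_db: 1-D array of measured on-body path-loss samples (placeholder data)
rng = np.random.default_rng(0)
path_loss_db = rng.normal(55.0, 3.0, size=500)

candidates = {
    "lognormal": stats.lognorm,
    "normal":    stats.norm,
    "nakagami":  stats.nakagami,
    "weibull":   stats.weibull_min,
    "rayleigh":  stats.rayleigh,
}

# Fit each distribution by maximum likelihood and rank by the K-S statistic
results = {}
for name, dist in candidates.items():
    params = dist.fit(path_loss_db)
    ks_stat, _ = stats.kstest(path_loss_db, dist.name, args=params)
    results[name] = ks_stat

best = min(results, key=results.get)
print("best fit:", best, {k: round(v, 3) for k, v in results.items()})
```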

Keywords: on-body channel communications, fading characteristics, statistical model, body movement

Procedia PDF Downloads 337
3662 Relationship of Workplace Stress and Mental Wellbeing among Health Professionals

Authors: Rabia Mushtaq, Uroosa Javaid

Abstract:

It has been observed that health professionals are at higher risk of stress because being a health specialist is physically and emotionally demanding. This study aimed to investigate the relationship between workplace stress and mental wellbeing among health professionals. A sample of 120 male and female health professionals belonging to two age groups, i.e., early adulthood and middle adulthood, was recruited through a purposive sampling technique. The Job Stress Scale, the Mindful Attention Awareness Scale and the Warwick-Edinburgh Mental Wellbeing Scale were used to measure the study variables. The results indicated that job stress has a significant negative relationship with mental wellbeing among health professionals. The current study opens the door for more exploratory work on mindfulness among health professionals. The outcomes can help in consolidating coping strategies among workers to improve their mental wellbeing and lessen job stress.

Keywords: health professionals, job stress, mental wellbeing, mindfulness

Procedia PDF Downloads 155
3661 Possible Exposure of Persons with Cardiac Pacemakers to Extremely Low Frequency (ELF) Electric and Magnetic Fields

Authors: Leena Korpinen, Rauno Pääkkönen, Fabriziomaria Gobba, Vesa Virtanen

Abstract:

The number of persons with implanted cardiac pacemakers (PM) has increased in Western countries. The aim of this paper is to investigate the possible situations where persons with a PM may be exposed to extremely low frequency (ELF) electric (EF) and magnetic fields (MF) that may disturb their PM. Based on our earlier studies, it is possible to find such high public exposure to EFs only in some places near 400 kV power lines, where an EF may disturb a PM in unipolar mode. Such EFs cannot be found near 110 kV power lines. Disturbing MFs can be found near welding machines. However, we do not have measurement data from welding. Based on literature and earlier studies at Tampere University of Technology, it is difficult to find public EF or MF exposure that is high enough to interfere with PMs.

Keywords: cardiac pacemaker, electric field, magnetic field, electrical engineering

Procedia PDF Downloads 418
3660 Fuzzy Rules Based Improved BEENISH Protocol for Wireless Sensor Networks

Authors: Rishabh Sharma

Abstract:

The main design constraint in a wireless sensor network (WSN) is energy consumption. To address this constraint, hierarchical clustering is a technique that helps extend the network lifetime by consuming energy efficiently. This paper focuses on WSNs and a fuzzy inference system (FIS), which are deployed together to enhance the BEENISH protocol. The node energy, mobility, pause time and density are considered for the selection of the cluster head (CH). The simulation outcomes show that the proposed system outperforms the traditional system with regard to energy utilization and the number of packets transmitted to the sink.
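A minimal hand-rolled sketch of a fuzzy cluster-head chance computation over the four descriptors named above (the membership functions, rule set and output weights are assumptions for illustration, not the protocol's actual FIS):

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def ch_chance(energy, mobility, density, pause_time):
    """Fuzzy cluster-head chance in [0, 1] from normalized node descriptors."""
    # Fuzzification (inputs normalized to [0, 1]); illustrative low/high sets
    e_hi = tri(energy,     0.3, 1.0, 1.7)
    m_lo = tri(mobility,  -0.7, 0.0, 0.7)
    d_hi = tri(density,    0.3, 1.0, 1.7)
    p_hi = tri(pause_time, 0.3, 1.0, 1.7)

    # Rules (min as AND), e.g. "IF energy high AND mobility low THEN chance high"
    rules = [
        (min(e_hi, m_lo), 0.9),   # strong candidate
        (min(d_hi, p_hi), 0.7),   # dense, static neighbourhood
        (1.0 - e_hi,      0.1),   # low residual energy -> poor candidate
    ]
    # Weighted-average defuzzification
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den

print(ch_chance(energy=0.8, mobility=0.1, density=0.7, pause_time=0.6))
```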

Keywords: wireless sensor network, sink, sensor node, routing protocol, fuzzy rule, fuzzy inference system

Procedia PDF Downloads 86
3659 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
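The "average the three top-performing models" step could look like the following soft-voting sketch (the file name, feature columns and hyperparameters are hypothetical; scikit-learn is used only for illustration, not as the study's actual code):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical merged dataset of timing, weather-forecast and past pollutant features
df = pd.read_csv("la_air_quality.csv")        # file name is illustrative
X = df.drop(columns=["aqi_category"])         # assumed target column
y = df["aqi_category"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Three models combined by averaging their predicted class probabilities
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=300, random_state=42)),
        ("nnet",   MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),
    ],
    voting="soft",                            # average predicted probabilities
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```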

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 103
3658 Visual Odometry and Trajectory Reconstruction for UAVs

Authors: Sandro Bartolini, Alessandro Mecocci, Alessio Medaglini

Abstract:

The growing popularity of systems based on unmanned aerial vehicles (UAVs) is highlighting their vulnerability, particularly in relation to the positioning system used. Typically, UAV architectures use the civilian GPS, which is exposed to a number of different attacks, such as jamming or spoofing. This is why it is important to develop alternative methodologies to accurately estimate the actual UAV position without relying on GPS measurements only. In this paper, we propose a position estimate method for UAVs based on monocular visual odometry. We have developed a flight control system capable of keeping track of the entire trajectory travelled, with a reduced dependency on the availability of GPS signals. Moreover, the simplicity of the developed solution makes it applicable to a wide range of commercial drones. The final goal is to allow for safer flights in all conditions, even under cyber-attacks trying to deceive the drone.
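A compact sketch of the monocular visual-odometry core between two consecutive frames (an OpenCV-based illustration; the camera intrinsic matrix K and the scale handling are assumptions, since monocular VO recovers translation only up to scale):

```python
import cv2
import numpy as np

def relative_pose(prev_gray, curr_gray, K):
    """Estimate the rotation and unit-scale translation between two frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then cheirality check to recover R, t (|t| = 1)
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

# Trajectory reconstruction chains the relative poses frame by frame, with the
# absolute scale supplied externally (e.g. altitude or intermittent GPS fixes).
```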

Keywords: visual odometry, autonomous uav, position measurement, autonomous outdoor flight

Procedia PDF Downloads 200
3657 The Impact of Online Advertising on Consumer Purchase Behaviour Based on Malaysian Organizations

Authors: Naser Zourikalatehsamad, Seyed Abdorreza Payambarpour, Ibrahim Alwashali, Zahra Abdolkarimi

Abstract:

The paper aims to evaluate the effect of online advertising on consumer purchase behaviour in Malaysian organizations. The paper has the potential to extend and refine theory. A survey was distributed among students of UTM during winter 2014, and 160 responses were collected. Regression analysis was used to test the hypothesized relationships of the model. The results show that the predictors (cost saving factor, convenience factor and customized product or services) have a positive impact on the intention to continue seeking online advertising.

Keywords: consumer purchase, convenience, customized product, cost saving, customization, flow theory, mass communication, online advertising ads, online advertising measurement, online advertising mechanism, online intelligence system, self-confidence, willingness to purchase

Procedia PDF Downloads 464
3656 Membranes for Direct Lithium Extraction (DLE)

Authors: Amir Razmjou, Elika Karbassi Yazdi

Abstract:

Several direct lithium extraction (DLE) technologies have been developed for Li extraction from different brines. Although laboratory studies have shown that they can technically recover up to 90% of the Li, challenges remain in developing a sustainable process that can serve as a foundation for the lithium-dependent low-carbon economy. There is a continuing quest for DLE technologies that do not need extensive pre-treatment, use fewer materials, and have simplified extraction processes with high Li selectivity. Here, an overview of DLE technologies will be provided, with an emphasis on the basic principles of materials design for the development of membranes with nanochannels and nanopores with Li-ion selectivity. We have used a variety of building blocks, such as nano-clays, organic frameworks, graphene/graphene oxide and MXene, to fabricate the membranes. Molecular dynamics (MD) simulation and density functional theory (DFT) were used to reveal new mechanisms by which high Li selectivity was obtained.

Keywords: lithium recovery, membrane, lithium selectivity, decarbonization

Procedia PDF Downloads 94