Search results for: particle fixed on cone defect
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3563

323 Synthesis of Temperature Sensitive Nano/Microgels by Soap-Free Emulsion Polymerization and Their Application in Hydrate Sediments Drilling Operations

Authors: Xuan Li, Weian Huang, Jinsheng Sun, Fuhao Zhao, Zhiyuan Wang, Jintang Wang

Abstract:

Natural gas hydrates (NGHs) have gained increasing attention as promising alternative energy sources. Hydrate-bearing formations in marine areas are highly unconsolidated and fragile, composed of weakly cemented sand-clay and silty sediments. During drilling, invasion of drilling fluid can easily lead to excessive water content in the formation, changing the soil's liquid and plastic limit indices; this significantly degrades formation quality and leads to wellbore instability because of the metastable character of hydrate-bearing sediments. Controlling filtrate loss into the formation during drilling is therefore essential for protecting wellbore stability. In this study, a temperature-sensitive nanogel of P(NIPAM-co-AMPS-co-tBA) was prepared by soap-free emulsion polymerization, and its temperature-sensitive behavior was employed to achieve self-adaptive plugging in hydrate sediments. First, the effects of the amounts of AMPS, tBA, and cross-linker MBA on the microgel synthesis process and temperature-sensitive behavior were investigated. Results showed that, as a reactive emulsifier, AMPS not only participates in the polymerization reaction but also acts as an emulsifier that stabilizes micelles and enhances nanoparticle stability. The volume phase transition temperature (VPTT) of the nanogels gradually decreased as the content of the hydrophobic monomer tBA increased, while an increase in the content of the cross-linking agent MBA led to a rise in coagulum content and emulsion instability. The plugging performance of the nanogel was evaluated in a core sample with a pore size distribution of 100-1000 nm. The temperature-sensitive nanogel effectively improved the microfiltration performance of the drilling fluid.
Since a combination of a series of nanogels can provide a wide particle size distribution, roughly 200-800 nm, at any temperature, the self-adaptive plugging capacity of the nanogels for hydrate sediments was demonstrated. The thermosensitive nanogel is thus a potential intelligent plugging material for drilling operations in natural gas hydrate-bearing sediments.

Keywords: temperature-sensitive nanogel, NIPAM, self-adaptive plugging performance, drilling operations, hydrate-bearing sediments

Procedia PDF Downloads 137
322 The Foundation Binary-Signals Mechanics and Actual-Information Model of Universe

Authors: Elsadig Naseraddeen Ahmed Mohamed

Abstract:

In contrast to the uncertainty and complementarity principles, this paper argues that the probability of simultaneous occupation of any definite coordinate values by definite values of momentum and energy, at any definite instant, can be described by a binary definite function. This function equals the difference between the numbers of occupation and evacuation epochs up to that time, and equivalently the number of exchanges between occupation and evacuation epochs up to that time, modulo two. These binary quantities are defined at every point on the real line of time, so they form a binary signal representing a complete mechanical description of physical reality. The times of the exchanges mark the boundaries of the occupation and evacuation epochs, from which the binary signals can be calculated, using the fact that the universe's events actually extend along the positive and negative real line of time in one direction of extension as the number of exchanges increases. A non-invertible transformation matrix can therefore be defined as the product of an invertible rotation matrix and a non-invertible scaling matrix, which change the direction and magnitude of the exchange-event vector, respectively. These non-invertible transformations are called actual transformations, in contrast to information transformations, by which the universe's events transformed by actual transformations can be navigated backward and forward along the real line of time; the information transformations are derived as elements of a group associated with their corresponding actual transformations.
The actual and information model of the universe is derived by assuming a time instant zero, before and at which no coordinate is occupied by any definite values of momentum and energy; after that time, the universe begins expanding in spacetime. This assumption makes Laplace's demon, who could at one moment measure the positions and momenta of all constituent particles of the universe and then use the laws of classical mechanics to predict all future and past events, superfluous. Instead, only analog-to-digital converters are needed to sense the binary signals that determine the boundaries of the occupation and evacuation epochs of definite coordinate values, relative to their origin, by definite values of momentum and energy; from these present events of the universe, its past and future events can be predicted to high precision.

Keywords: binary-signal mechanics, actual-information model of the universe, actual-transformation, information-transformation, uncertainty principle, Laplace's demon

Procedia PDF Downloads 161
321 Sample Preparation and Coring of Highly Friable and Heterogeneous Bonded Geomaterials

Authors: Mohammad Khoshini, Arman Khoshghalb, Meghdad Payan, Nasser Khalili

Abstract:

Most of the rocks at the Earth's surface are technically categorized as weak rocks or weakly bonded geomaterials. Deeply weathered, weakly cemented, friable, and easily erodible, they demonstrate complex material behaviour, and understanding the often-overlooked mechanical behaviour of such materials is of particular importance in geotechnical engineering practice. Weakly bonded geomaterials are so susceptible to surface shear and moisture that conventional core drilling methods fail to extract high-quality undisturbed samples from them. Moreover, most of these geomaterials are highly heterogeneous, which makes material characterization less reliable and feasible. To compensate for the unpredictability of the material response, either numerous experiments must be conducted or large factors of safety must be applied in design; neither approach is sustainable. In this study, a method for dry core drilling of such materials is introduced to obtain high-quality undisturbed core samples. By freezing the material at a certain moisture content, a secondary structure is developed throughout the material that helps it remain intact during core drilling. To address the heterogeneity issue, the natural material was reconstructed artificially to obtain a homogeneous material with very high similarity to the natural one from both micro- and macro-mechanical perspectives. The method was verified at both scales. At the micro scale, pore spaces and inter-particle bonds were investigated by scanning electron microscopy (SEM) and compared between the natural and artificial materials, and X-ray diffraction (XRD) analyses were performed to check the chemical composition. At the macro scale, several uniaxial compressive strength tests, as well as triaxial tests, were performed to verify that the materials have similar mechanical responses.
A high level of agreement was observed between the micro and macro results for the natural and artificially bonded geomaterials. The proposed methods can play an important role in cutting the costs of experimental programs for material characterization and in improving the accuracy of numerical modelling based on experimental results.

Keywords: artificial geomaterial, core drilling, macro-mechanical behavior, micro-scale, sample preparation, SEM photography, weakly bonded geomaterials

Procedia PDF Downloads 196
320 A Green Optically Active Hydrogen and Oxygen Generation System Employing Terrestrial and Extra-Terrestrial Ultraviolet Solar Irradiance

Authors: H. Shahid

Abstract:

Due to ozone layer depletion on Earth, incoming ultraviolet (UV) radiation reaches very high index levels, such as 25 in southern Peru (13.5° S, 3360 m a.s.l.). Human habitation of Mars, where UV radiation is also quite high, is likewise under discussion. UV exposure is hazardous to health and is normally blocked by UV filters. On the other hand, artificial UV sources are already used for water thermolysis to generate hydrogen and oxygen, which are later used as fuels. This paper presents the utility of employing UVA (315-400 nm) and UVB (280-315 nm) radiation from the solar spectrum to design and implement an optically active hydrogen and oxygen generation system based on thermolysis of desalinated seawater. The proposed system is usable on Earth and could in the future be deployed on Mars (UVB). In this system, using Fresnel lens arrays as an optical filter and active tracking, ultraviolet sunlight is concentrated and directed onto the system's two subsystems. The first subsystem generates electrical energy using UV-based tandem photovoltaic cells such as GaAs/GaInP/GaInAs/GaInAsP; the second raises the water temperature to lower the electric potential required for electrolysis. An empirical analysis performed at 30 atm showed electrical potential to be the main factor controlling the production rates of hydrogen and oxygen and hence the operating point (Q-point) of the proposed system. The hydrogen production rate of a commercial system in static mode (650 °C, 0.6 V) is taken as the reference. A solid oxide electrolyzer cell (SOEC) is used in the proposed (UV) system for hydrogen and oxygen production. To achieve the same amount of hydrogen as the reference system, with a minimum chamber operating temperature of 850 °C in static mode, the required electrical potential was calculated as 0.3 V.
In practice, however, the hydrogen production rate at 850 °C and 0.3 V was observed to be lower than that of the reference system. It was shown empirically that raising the electrical potential to 0.45 V increases the production rate to the reference level. Therefore, 850 °C and 0.45 V are assigned as the Q-point of the proposed system, which is actively stabilized via proportional-integral-derivative (PID) controllers that adjust the axial positions of the lens arrays for both subsystems. The controllers act to keep the chamber fixed at the Q-point of 850 °C (the minimum operating temperature) and 0.45 V, realizing the same hydrogen production rate as the reference system.
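The stabilization step described above (PID controllers holding the chamber at the 850 °C set-point by moving the lens arrays) can be sketched with a toy discrete PID loop. The first-order thermal plant and all gains below are illustrative assumptions, not the authors' system:

```python
# Minimal discrete PID loop driving a toy chamber model to the 850 degC
# set-point by adjusting lens-array position (all values illustrative).

def simulate_pid(setpoint=850.0, steps=500, dt=1.0,
                 kp=0.8, ki=0.05, kd=0.1):
    temp = 25.0            # initial chamber temperature, degC
    integral = 0.0
    prev_error = setpoint - temp
    for _ in range(steps):
        error = setpoint - temp
        integral += error * dt
        derivative = (error - prev_error) / dt
        # Controller output = lens-array axial position (arbitrary units)
        position = kp * error + ki * integral + kd * derivative
        prev_error = error
        # Toy first-order plant: heating proportional to position,
        # losses proportional to the excess over 25 degC ambient.
        temp += dt * (0.05 * position - 0.02 * (temp - 25.0))
    return temp

final_temp = simulate_pid()
```

With integral action the steady-state error vanishes, so after 500 steps the toy chamber sits essentially at the set-point.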

Keywords: hydrogen, oxygen, thermolysis, ultraviolet

Procedia PDF Downloads 109
319 Characterization of Aerosol Droplet in Absorption Columns to Avoid Amine Emissions

Authors: Hammad Majeed, Hanna Knuutila, Magne Hilestad, Hallvard Svendsen

Abstract:

Formation of aerosols can cause serious complications in industrial exhaust gas CO2 capture processes. SO3 present in the flue gas can cause aerosol formation in an absorption-based capture process. The small mist and fog droplets formed normally cannot be removed in conventional demisting equipment, because their submicron size allows them to follow the gas flow. As a consequence, aerosol-based emissions on the order of grams per Nm³ have been identified from post-combustion CO2 capture (PCCC) plants. In absorption processes, aerosols are generated by spontaneous condensation or desublimation in supersaturated gas phases. Undesired aerosol development may lead to amine emissions many times larger than would be encountered in a mist-free gas phase. It is thus of crucial importance to understand the formation and build-up of these aerosols in order to mitigate the problem. Rigorous modelling of aerosol dynamics leads to a system of partial differential equations. To understand the mechanics of a particle entering an absorber, an implementation of the model was created in MATLAB. The model predicts the droplet size, the droplet internal variable profiles, and the mass transfer fluxes as functions of position in the absorber. The MATLAB model is based on the orthogonal collocation method, a subclass of the method of weighted residuals for boundary value problems. The model comprises a set of mass transfer equations for the transferring components and the diffusion-reaction equations needed to describe the droplet internal profiles for all relevant constituents; heat transfer across the interface and inside the droplet is also included. This paper presents the basic simulation tool for characterizing aerosols formed in CO2 absorption columns and gives examples of how various entering droplets grow or shrink through an absorber and how their composition changes with time.
Preliminary simulation results for an aerosol droplet's composition and temperature profiles are given. As an example, a droplet with an initial size of 3 microns, initially containing a 5 M MEA solution, is exposed to an atmosphere free of MEA; the composition and temperature of the gas phase change with time throughout the absorber.
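As a rough illustration of how an entering droplet grows or shrinks with supersaturation (this is not the authors' orthogonal-collocation model, which resolves internal profiles and interfacial fluxes), a toy diffusion-limited growth law dr/dt = k(S − 1)/r can be integrated; the rate constant k, the supersaturation S, and the time span are made-up values:

```python
# Toy diffusion-limited growth law for a single droplet:
#   dr/dt = k * (S - 1) / r
# S > 1 (supersaturated gas) -> growth; S < 1 -> shrinkage.

def grow_droplet(r0=1.5e-6, S=1.05, k=1e-13, dt=1e-3, t_end=2.0):
    """Integrate the toy growth law with explicit Euler; radius in metres.

    r0 = 1.5e-6 m is the 3-micron initial diameter from the abstract;
    S, k, dt, t_end are illustrative assumptions.
    """
    r = r0
    for _ in range(int(t_end / dt)):
        r += dt * k * (S - 1.0) / r
    return r

r_grown = grow_droplet(S=1.05)    # supersaturated: droplet grows
r_shrunk = grow_droplet(S=0.95)   # subsaturated: droplet shrinks
```

The closed form r² = r0² + 2k(S − 1)t makes it easy to check the integrator against.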

Keywords: amine solvents, emissions, global climate change, simulation and modelling, aerosol generation

Procedia PDF Downloads 246
318 Crisis, Identity and Challenge: Next Steps for the ‘English’ Constitution

Authors: Carol Howells, Edwin Parks

Abstract:

This paper explores the existing and evolving constitutional arrangements within the United Kingdom and within the wider international context of the EU. It considers the nature of an ‘English’ constitution and the internal colonialism that underpins it. The debates over the UK’s exit from the EU have been many; however, the constitutional position of the devolved nations (Scotland, Northern Ireland, and Wales) is little understood or explored. Their constitutional position has been touched upon, but not widely, in academic debate and is only now beginning to receive attention. The paper considers the constitutional role of the legislatures within the UK and the UK Parliament’s Bill for exiting the European Union, and it provides a commentary on the Brexit process in relation to constitutional arrangements within the UK and the EU. Questions arise over the constitutional framework and whether, having delegated competencies, the UK Parliament can now legislate in relation to those competencies without consent. The Scottish Parliament and Welsh Assembly are a permanent and fixed feature of the UK’s constitution, but their position is set within the traditional concept of the ‘English’ constitution. The current situation is opaque and complex and raises significant constitutional questions. In relation to exit from the EU, two of the nations did not vote in favour of Brexit, and the third is in receipt of an inequitable funding settlement. Questions arise as to whether the work of modernising the UK’s constitution over the past twenty years, in recognising the nations and the governments within them, is now being unpicked, and whether the piecemeal and unequal process of devolution and the new constitutional arrangements hold weight. Questions of democratic legitimacy arise throughout.
An advisory referendum (in which no definition of the EU was provided), where two of the four nations voted to leave the EU and two voted to remain, has led to the UK Government negotiating a wholesale exit from the EU based on ‘English’ constitutional law principles. Previous constitutional referendums relating to devolution within the UK have been treated differently. Within the EU, questions are being raised about the focus on member states. The goals of the EU mention member countries, and its purpose is seen as promoting greater social, political, and economic harmony among the nations of Europe; the emphasis on member states is proving challenging and has led to flawed processes. Scrutiny of legislative proposals, historical developments, and social commentary reveals distinct national identities within the UK. Analysis of the debate, legislation, and case law surrounding the process of exiting the EU reveals a muddled picture of a constitution in crisis and significant challenges to the principles underpinning the rule of law. Suggestions are made for future reforms and a move towards new constitutional arrangements beyond the current ‘English’ constitution.

Keywords: English, constitution, parliament, devolved

Procedia PDF Downloads 107
317 Effect of Whey Proteins and Caffeic Acid Interactions on Antioxidant Activity and Protein Structure

Authors: Tassia Batista Pessato, Francielli Pires Ribeiro Morais, Fernanda Guimaraes Drummond Silva, Flavia Maria Netto

Abstract:

Proteins and phenolic compounds can interact, mainly through hydrophobic interactions. These interactions may cause structural changes in both molecules, which in turn can affect their functional and nutritional properties, positively or negatively. Here, the structural changes of whey protein isolate (WPI) due to interaction with caffeic acid (CA) were investigated by intrinsic and extrinsic fluorescence. The effects of the protein-phenolic interactions on total phenolic content and antioxidant activity (AA) were also assessed. WPI-CA complexes were obtained by mixing WPI and CA stock solutions in deionized water. Complexation was carried out at room temperature for 60 min, using 0.1 M NaOH to adjust the pH to 7.0. The WPI concentration was fixed at 5 mg/mL, whereas the CA concentration was varied to obtain four WPI:CA molar ratios (1:1, 2:1, 5:1, and 10:1). WPI and phenolic solutions alone were used as controls. Intrinsic fluorescence spectra of the complexes (mainly due to Trp fluorescence emission) were obtained at λex = 280 nm, with emission intensities measured from 290 to 500 nm. Extrinsic fluorescence was obtained as a measure of protein surface hydrophobicity (S0) using ANS as a fluorescence probe. Total phenolic content was determined by the Folin-Ciocalteu method and antioxidant activity by the FRAP and ORAC methods. Increasing concentrations of CA resulted in a decrease in WPI intrinsic fluorescence. The emission band of WPI red-shifted from 332 to 354 nm as the phenolic concentration increased, indicating exposure of Trp residues to a more hydrophilic environment and unfolding of the protein structure. In general, the complexes presented lower S0 values than WPI alone, suggesting that CA hindered ANS binding to hydrophobic sites of WPI. The total phenolic content of the complexes was lower than the sum of the two isolated compounds. WPI showed negligible AA as measured by FRAP.
However, as the relative concentration of CA in the complexes increased, the FRAP values increased, indicating that the AA measured by this technique comes mainly from CA. In contrast, the WPI ORAC value (82.3 ± 1.5 µM TE/g) suggests that its AA is related to its capacity for H+ transfer. The complexes exhibited no notable improvement in AA measured by ORAC relative to the isolated components, suggesting that complexation partially suppressed the AA of the compounds. The results presented here indicate that interaction of WPI and CA occurred and that this interaction caused a structural change in the protein. Complexation can either hide or expose antioxidant sites of both components. In conclusion, although CA can undergo AA suppression due to interaction with proteins, the AA of WPI could be enhanced by protein unfolding and exposure of antioxidant sites.
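For orientation, the CA dose corresponding to each molar ratio can be back-calculated from the fixed 5 mg/mL WPI concentration. The average protein molar mass used below (about 18,400 g/mol, i.e. treating the protein as beta-lactoglobulin) is an assumption, since WPI is a protein mixture:

```python
# Back-of-envelope: caffeic acid (CA) concentration needed to reach a
# given WPI:CA molar ratio at 5 mg/mL WPI.

MW_PROTEIN = 18400.0   # g/mol, ASSUMED average (beta-lactoglobulin-like)
MW_CA = 180.16         # g/mol, caffeic acid
WPI_CONC = 5.0         # mg/mL, fixed in the study

def ca_mg_per_ml(protein_to_ca_ratio):
    """CA concentration (mg/mL) for a WPI:CA molar ratio of n:1."""
    protein_mol_per_ml = (WPI_CONC / 1000.0) / MW_PROTEIN   # mol/mL
    ca_mol_per_ml = protein_mol_per_ml / protein_to_ca_ratio
    return ca_mol_per_ml * MW_CA * 1000.0                   # g -> mg

for ratio in (1, 2, 5, 10):
    print(f"WPI:CA = {ratio}:1 -> CA = {ca_mg_per_ml(ratio):.4f} mg/mL")
```

Under this assumption the 1:1 ratio corresponds to roughly 0.05 mg/mL CA, and the higher protein:CA ratios require proportionally less phenolic.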

Keywords: bioactive properties, milk proteins, phenolic acids, protein-phenolic compounds complexation

Procedia PDF Downloads 520
316 Optimal Applications of Solar Energy Systems: Comparative Analysis of Ground-Mounted and Rooftop Solar PV Installations in Drought-Prone and Residential Areas of the Indian Subcontinent

Authors: Rajkumar Ghosh, Bhabani Prasad Mukhopadhyay

Abstract:

The increasing demand for environmentally friendly energy solutions highlights the need to optimize solar energy systems. This study compares two types of solar energy systems: ground-mounted solar panels for drought-prone locations and rooftop solar PV installations. A rooftop installation of 300 sq. ft. (approx. 28 sq. m.) with an electricity output of 4,730 kWh/year saves about ₹14,191/year. As a clean and sustainable energy source, solar power is pivotal in reducing greenhouse gas emissions (a CO2 reduction of 85 tonnes over 25 years) and combating climate change. The "PM Surya Ghar: Muft Bijli Yojana" initiative seeks to empower Indian homes by giving free access to solar energy; it is part of the Indian government's larger effort to encourage clean and renewable energy sources while reducing reliance on traditional fossil fuels. This report reviews various installations and government reports to analyse the performance and impact of both ground-mounted and rooftop solar systems. It also examines the effectiveness of government subsidy programs for residential on-grid solar systems, including the ₹78,000 incentive for systems above 3 kW, as well as the subsidy schemes available for domestic agricultural grid use: systems up to 3 kW receive ₹43,764, while systems over 10 kW receive a fixed subsidy of ₹94,822. Households can save a substantial amount of energy and minimize their reliance on grid electricity by installing the appropriate solar plant capacity. In terms of monthly household consumption, the suitable rooftop solar plant capacity is 1-2 kW for 0-150 units, 2-3 kW for 150-300 units, and above 3 kW for more than 300 units. Ground-mounted panels, particularly in arid regions, offer benefits such as scalability and optimal orientation but face challenges such as land-use conflicts and environmental impact.
By evaluating the distinct advantages and challenges of each system, this study aims to provide insights into their optimal applications, guiding stakeholders in making informed decisions to enhance solar energy efficiency and sustainability within regulatory constraints. The research also explores the implications of regulations, such as Italy's ban on ground-mounted solar panels on productive agricultural land, for solar energy strategies.
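The consumption-to-capacity guidance quoted above can be expressed as a small lookup, using only the tiers stated in the abstract:

```python
# Recommended rooftop solar capacity band from monthly household
# consumption (units = kWh), per the tiers given in the abstract.

def recommended_capacity(monthly_units):
    """Return the suggested rooftop plant size band for a household."""
    if monthly_units <= 150:
        return "1-2 kW"
    elif monthly_units <= 300:
        return "2-3 kW"
    else:
        return "above 3 kW"

band = recommended_capacity(200)   # a mid-range consumer
```

Boundary handling (whether 150 units falls in the first or second band) is not specified in the abstract; the choice here is an assumption.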

Keywords: sustainability, solar energy, subsidy, rooftop solar energy, renewable energy

Procedia PDF Downloads 20
315 Digitalization, Economic Growth and Financial Sector Development in Africa

Authors: Abdul Ganiyu Iddrisu

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because extant studies that explicitly evaluate the digitization-economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment: obstacles to access to financing, such as physical distance, minimum balance requirements, and low income flows, can be circumvented; savings have increased; micro-savers have opened bank accounts; and banks are now able to price short-term loans. This has the potential to develop the financial sector; however, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies maintain that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects, and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa with a focus on the roles of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa.
From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions; this nexus is rarely examined empirically in the literature. Second, we examine the effect on economic growth of domestic credit to the private sector and stock market capitalization as a percentage of GDP, used as proxies for financial sector development. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found. First, digitalization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa; however, the net-effect results suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies.
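A fixed-effects (within) estimator of the kind named above can be sketched in a few lines: demean the outcome and regressor within each country, then run OLS on the demeaned data. The data, country effects, and slope below are synthetic, purely to show the mechanics:

```python
# Within (fixed-effects) estimator on a synthetic country-year panel.
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_years, beta_true = 20, 10, 0.5
country = np.repeat(np.arange(n_countries), n_years)
alpha = rng.normal(0.0, 2.0, n_countries)          # country fixed effects
x = rng.normal(0.0, 1.0, n_countries * n_years)    # e.g. digitization proxy
y = alpha[country] + beta_true * x + rng.normal(0.0, 0.1, x.size)

def within_estimator(y, x, groups):
    """Demean y and x within groups, then slope = <x,y> / <x,x>."""
    y_d, x_d = y.astype(float).copy(), x.astype(float).copy()
    for g in np.unique(groups):
        m = groups == g
        y_d[m] -= y_d[m].mean()
        x_d[m] -= x_d[m].mean()
    return (x_d @ y_d) / (x_d @ x_d)

beta_hat = within_estimator(y, x, country)   # close to beta_true
```

Demeaning sweeps out the time-invariant country effects, so the slope is identified from within-country variation only, which is the point of the fixed-effects design.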

Keywords: digitalization, economic growth, financial sector development, Africa

Procedia PDF Downloads 82
314 Dietary Anion-Cation Balance of Grass and Net Acid-Base Excretion in Urine of Suckler Cows

Authors: H. Scholz, P. Kuehne, G. Heckenberger

Abstract:

The dietary anion-cation balance (DCAB) of grass in grazing systems under German conditions tends to decrease from May until September, and values lower than 100 meq per kg dry matter are often measured. A low DCAB in a grass feeding system can change the metabolic status of suckler cows and often results in an acidotic metabolism. Measurement of acid-base excretion in urine has proved to be a method of evaluating the acid-base status of dairy cows. The hypothesis was that metabolic imbalances in suckler cows could likewise be identified by urine measurements. The farm study was conducted during the grazing seasons of 2017 and 2018 and involved 7 suckler cow farms in Germany. The suckler cows grazed during the whole investigation period and had no access to other feed components; they had free access to water, a salt block, and loose minerals. The dry matter of the grass was determined at 60 °C, and the samples were then analysed for energy and nutrient content and for DCAB. Urine was collected in 50 mL glasses and analysed in the laboratory for net acid-base excretion (NSBA) and the concentrations of creatinine and urea. Statistical analysis was performed by ANOVA with fixed effects of farm (1-7), month (May until September), and number of lactations (1, 2, and ≥ 3) using SPSS Version 25.0 for Windows; an alpha of 0.05 was used for all statistical tests. During the grazing periods of 2017 and 2018, an average DCAB of 167 meq per kg DM was observed in the grass, with a very wide variation from -42 meq/kg to +439 meq/kg. Reference values for DCAB are described as between 150 and 400 meq per kg DM. A high chlorine content combined with a reduced potassium level was found to cause the reduction in DCAB at the end of the grazing period.
Between the DCAB of the grass and the NSBA in the urine of the suckler cows, a Pearson correlation of r = 0.478 (p ≤ 0.001) and a Spearman correlation of r = 0.601 (p ≤ 0.001) were observed. For monitoring the urine values of grazing suckler cows, the wide spread of the values poses a challenge for interpretation, especially since the DCAB is usually unknown. The influence of several feed components such as chlorine, sulfur, potassium, and sodium (the ions of the DCAB) and of dry matter feed intake during the grazing period should be taken into account in further research. The results obtained show that a decrease in the DCAB is associated with a decrease in the NSBA in the urine of suckler cows. Monitoring of metabolic disturbances should include analysis of urine, blood, milk, and ruminal fluid.
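The two reported coefficients come from standard formulas: Pearson on the raw values, Spearman as Pearson on the ranks. A minimal sketch on made-up DCAB/NSBA pairs (the rank step below does not handle ties):

```python
# Pearson and Spearman correlation from scratch, on illustrative data.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def spearman(xs, ys):
    def ranks(v):
        # Simple ranking; no tie handling (fine for tie-free data).
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(xs), ranks(ys))

dcab = [-42, 50, 120, 167, 250, 320, 439]   # meq/kg DM (illustrative)
nsba = [-80, 10, 60, 90, 140, 180, 260]     # urine NSBA (illustrative)
r_p = pearson(dcab, nsba)
r_s = spearman(dcab, nsba)
```

On a strictly monotone sample like this one, Spearman is exactly 1 while Pearson measures how close the relation is to linear; in the field data the two values (0.478 vs 0.601) differ because the DCAB-NSBA relation is monotone but noisy and non-linear.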

Keywords: dietary anion-cation balance, DCAB, net acid-base excretion, NSBA, suckler cow, grazing period

Procedia PDF Downloads 136
313 Valorization of Mineralogical Byproduct TiO₂ Using Photocatalytic Degradation of Organo-Sulfur Industrial Effluent

Authors: Harish Kuruva, Vedasri Bai Khavala, Tiju Thomas, K. Murugan, B. S. Murty

Abstract:

Industries are growing steadily to support national economies, but wastewater treatment remains their biggest environmental problem. Releasing industrial wastewater directly into rivers is harmful to human life and a threat to aquatic life. Industrial effluents contain many dissolved solids, organic and inorganic compounds, salts, toxic metals, etc. Phenols, pesticides, dioxins, herbicides, pharmaceuticals, and textile dyes are typical classes of industrial effluent and are among the most challenging to degrade in an eco-friendly way. Many advanced techniques, such as electrochemical treatment, oxidation processes, and valorization, have been applied to industrial wastewater treatment, but these are not cost-effective. Degrading industrial effluent is also more complicated than degrading commercially available model pollutants (dyes) such as methylene blue, methyl orange, and rhodamine B. TiO₂ is one of the most widely used photocatalysts; it can degrade organic compounds using sunlight and the moisture available in the environment (converting them to CO₂ and H₂O). TiO₂ is widely studied in photocatalysis because of its low cost, non-toxicity, high availability, and chemical and physical stability in the atmosphere. This study focused on valorizing the mineralogical byproduct TiO₂ (IREL, India). This mineralogical-grade TiO₂ was characterized, and its structural and photocatalytic properties (industrial effluent degradation) were compared with those of commercially available Degussa P-25 TiO₂. The mineralogical TiO₂ showed excellent photocatalytic characteristics (spherical particles, size 30±5 nm, surface area 98.19 m²/g, band gap 3.2 eV, phase 95% anatase and 5% rutile). The industrial effluent was characterized by TDS (total dissolved solids) measurement, ICP-OES (inductively coupled plasma optical emission spectroscopy), a CHNS (carbon, hydrogen, nitrogen, and sulfur) analyzer, and FT-IR (Fourier-transform infrared spectroscopy).
The effluent was observed to contain high sulfur (S = 11.37±0.15%), organic compounds (C = 4±0.1%, H = 70.25±0.1%, N = 10±0.1%), heavy metals, and other dissolved solids (60 g/L). The organo-sulfur industrial effluent was then degraded by photocatalysis with the mineralogical TiO₂. In this study, the effluent pH (2.5 to 10) and catalyst loading (50 to 150 mg) were varied, while the effluent concentration (0.5 Abs) and light exposure time (2 h) were kept constant. The best degradation, about 80% of the industrial effluent, was achieved at pH 5 with 150 mg of TiO₂. The FT-IR results and CHNS analysis confirmed that the sulfur and organic compounds were degraded.
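The roughly 80% figure presumably comes from the usual absorbance-based degradation efficiency, D% = (A0 − At)/A0 × 100; the initial 0.5 Abs is from the abstract, while the final absorbance below is hypothetical, chosen only to reproduce an 80% value:

```python
# Photocatalytic degradation efficiency from initial and final absorbance
# (Beer-Lambert: absorbance is proportional to concentration).

def degradation_percent(a0, at):
    """Percent degradation given initial (a0) and final (at) absorbance."""
    return (a0 - at) / a0 * 100.0

a0 = 0.5    # fixed initial effluent absorbance (from the abstract)
at = 0.1    # HYPOTHETICAL absorbance after 2 h of exposure
eff = degradation_percent(a0, at)
```

With these numbers the efficiency evaluates to 80%, matching the reported best case at pH 5 with 150 mg of catalyst.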

Keywords: wastewater treatment, industrial mineralogical product TiO₂, photocatalysis, organo-sulfur industrial effluent

Procedia PDF Downloads 96
312 Comparative Studies and Optimization of Biodiesel Production from Oils of Selected Seeds of Nigerian Origin

Authors: Ndana Mohammed, Abdullahi Musa Sabo

Abstract:

The oils used in this work were extracted from seeds of Ricinus communis, Hevea brasiliensis, Gossypium hirsutum, Azadirachta indica, Glycine max, and Jatropha curcas by solvent extraction using n-hexane, giving yields of 48.00±0.00%, 44.30±0.52%, 45.50±0.64%, 47.60±0.51%, 41.50±0.32% and 46.50±0.71%, respectively. However, these feedstocks are challenging for the trans-esterification reaction because they contain high amounts of free fatty acids (FFA) (6.37±0.18, 17.20±0.00, 6.14±0.05, 8.60±0.14, 5.35±0.07, 4.24±0.02 mg KOH/g, in the same order). As a result, a two-stage trans-esterification process was used to produce biodiesel: acid esterification was used to reduce the high FFA content to 1% or less, and the second stage involved alkaline trans-esterification with optimization of the process conditions to obtain a high yield of quality biodiesel. The salient features of this study include the characterization of the oils using AOAC and AOCS standard methods to reveal properties that may determine the viability of the sample seeds as potential feedstocks for biodiesel production, such as acid value, saponification value, peroxide value, iodine value, specific gravity, kinematic viscosity, and free fatty acid profile. The optimization of process parameters in biodiesel production was investigated. Different concentrations of alkaline catalyst (KOH) (0.25, 0.5, 0.75, 1.0, and 1.5% w/v), methanol/oil molar ratios (3:1, 6:1, 9:1, 12:1, and 15:1), reaction temperatures (50, 55, 60, 65, and 70 °C), and stirring rates (150, 225, 300, and 375 rpm) were used to determine the optimal conditions at which the maximum yield of biodiesel would be obtained. While optimizing one parameter, the other parameters were kept fixed.
The results show the optimal biodiesel yield at a catalyst concentration of 1% and a methanol/oil molar ratio of 6:1, except for the oil from Ricinus communis, for which the optimum was obtained at 9:1. A reaction temperature of 65 °C was observed for all samples; similarly, a stirring rate of 300 rpm was observed for all samples except the oil from Ricinus communis, for which it was 375 rpm. The properties of the biodiesel fuels were evaluated, and the results conformed favorably to the ASTM and EN standard specifications for fossil diesel and biodiesel. Therefore, the biodiesel produced can be used as a substitute for fossil diesel. The work also reports the results of a study evaluating the effect of biodiesel storage on its physicochemical properties, to ascertain the level of deterioration with time. The values obtained for all the samples fell completely outside the standard specification for biodiesel before the end of the twelve-month test period and were clearly degraded. This suggests that the biodiesels from oils of Ricinus communis, Hevea brasiliensis, Gossypium hirsutum, Azadirachta indica, Glycine max, and Jatropha curcas cannot be stored beyond twelve months.
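The one-factor-at-a-time procedure described above (vary one parameter over its tested levels while holding the others fixed, keep the best level) can be sketched as follows. The `yield_model` here is an invented stand-in for the measured biodiesel yield, shaped to peak near the reported optima; in the real work each evaluation is a trans-esterification experiment.

```python
# One-factor-at-a-time (OFAT) optimization sketch, assuming a callable that
# returns a yield-like score for a given set of process parameters.
def ofat_optimize(levels, baseline, yield_model):
    best = dict(baseline)
    for param, values in levels.items():
        scores = {v: yield_model({**best, param: v}) for v in values}
        best[param] = max(scores, key=scores.get)  # keep the best level found
    return best

levels = {
    "koh_wv": [0.25, 0.5, 0.75, 1.0, 1.5],  # catalyst concentration, % w/v
    "molar_ratio": [3, 6, 9, 12, 15],       # methanol/oil molar ratio
    "temp_c": [50, 55, 60, 65, 70],         # reaction temperature, deg C
    "stir_rpm": [150, 225, 300, 375],       # stirring rate
}
baseline = {"koh_wv": 0.5, "molar_ratio": 6, "temp_c": 60, "stir_rpm": 225}

# Hypothetical response surface peaking near the optima reported in the text.
def yield_model(p):
    return (-abs(p["koh_wv"] - 1.0) - abs(p["molar_ratio"] - 6) / 3
            - abs(p["temp_c"] - 65) / 5 - abs(p["stir_rpm"] - 300) / 75)

print(ofat_optimize(levels, baseline, yield_model))
# -> {'koh_wv': 1.0, 'molar_ratio': 6, 'temp_c': 65, 'stir_rpm': 300}
```

Note that OFAT, unlike the factorial designs the next abstract in this list alludes to, cannot detect interactions between parameters, which is why each factor is screened against a fixed baseline.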

Keywords: biodiesel, characterization, esterification, optimization, transesterification

Procedia PDF Downloads 398
311 The Production of Reinforced Insulation Bricks out of the Concentration of Ganoderma lucidum Fungal Inoculums and Cement Paste

Authors: Jovie Esquivias Nicolas, Ron Aldrin Lontoc Austria, Crisabelle Belleza Bautista, Mariane Chiho Espinosa Bundalian, Owwen Kervy Del Rosario Castillo, Mary Angelyn Mercado Dela Cruz, Heinrich Theraja Recana De Luna, Chriscell Gipanao Eustaquio, Desiree Laine Lauz Gilbas, Jordan Ignacio Legaspi, Larah Denise David Madrid, Charles Linelle Malapote Mendoza, Hazel Maxine Manalad Reyes, Carl Justine Nabora Saberdo, Claire Mae Rendon Santos

Abstract:

In response to the global race to discover the next advanced sustainable material that will reduce our ecological footprint, the researchers aimed to create a masonry unit that is competent in physical edifices and other construction facets. Several studies have shown that dried mycelium can be used as a robust and waterproof building material that can be grown into explicit forms, thus reducing processing requirements. Hypothesizing inclusive measures to attest to fungi's impressive structural qualities and absorbency, the researchers set out to perform comparative analyses in creating mycelium bricks from mushroom spores of G. lucidum. Three treatments were intended to classify the most ideal concentration of clay and substrate fixings. The substrate bags fixed with 30% clay and 70% mixings indicated the highest numerical frequencies in terms of full occupation by fungal mycelia. Subsequently, sorted white portions from this treatment were settled in a thermoplastic mold and burnt. Three proportional concentrations of cultivated substrate and cement were also prioritized to gather results on the weight variation of the bricks in the water absorption test and durability test. Fungal inoculums with cement solutions showed small to moderate decreases and increases in load. This indicates that the treatments did not show any significant difference in strength, efficiency, or absorption capacity. Each concentration is equally valid and could be used to support the worldwide demand for bricks while also taking into consideration the recovery of nature.

Keywords: mycelium, fungi, fungal mycelia, durability test, water absorption test

Procedia PDF Downloads 115
310 Finite Element Molecular Modeling: A Structural Method for Large Deformations

Authors: A. Rezaei, M. Huisman, W. Van Paepegem

Abstract:

Atomic interactions in molecular systems are mainly studied by particle mechanics. Nevertheless, researchers have also put considerable effort into simulating them using continuum methods. In the early 2000s, simple equivalent finite element models were developed to study the mechanical properties of carbon nanotubes and graphene in composite materials. Afterward, many researchers employed similar structural simulation approaches to obtain the mechanical properties of nanostructured materials, to simplify the interface behavior of fiber-reinforced composites, and to simulate defects in carbon nanotubes or graphene sheets. These structural approaches, however, are limited to small deformations due to complicated local rotational coordinates. This article proposes a method for the finite element simulation of molecular mechanics, here called Structural Finite Element Molecular Modeling (SFEMM) for ease of reference. SFEMM improves on the available structural approaches for large deformations, without using any rotational degrees of freedom. Moreover, the method simulates molecular conformation, which is a big advantage over the previous approaches. Technically, the method uses nonlinear multipoint constraints to simulate the kinematics of the atomic multibody interactions. Only truss elements are employed, and the bond potentials are implemented through constitutive material models. Because the equilibrium bond length, bond angles, and bond-torsion potential energies are intrinsic material parameters, the model is independent of initial strains or stresses. In this paper, the SFEMM method has been implemented in the ABAQUS finite element software. The constraints and material behaviors are modeled through two Fortran subroutines. The method is verified for the bond stretch, bond angle, and bond torsion of carbon atoms.
Furthermore, the capability of the method in the conformation simulation of molecular structures is demonstrated via a case study of a graphene sheet. Briefly, SFEMM builds up a framework that offers more flexible features over the conventional molecular finite element models, serving the structural relaxation modeling and large deformations without incorporating local rotational degrees of freedom. Potentially, the method is a big step towards comprehensive molecular modeling with finite element technique, and thereby concurrently coupling an atomistic domain to a solid continuum domain within a single finite element platform.
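The idea of implementing bond potentials as constitutive laws on truss elements can be illustrated with a harmonic bond-stretch term. The force constants below are illustrative values in the style of a C-C bond in a classical force field, not parameters from the paper, and the harmonic form is only the simplest choice of bond potential.

```python
import numpy as np

kr = 469.0   # bond-stretch stiffness, kcal/mol/A^2 (assumed, AMBER-style)
r0 = 1.526   # equilibrium C-C bond length, Angstrom (assumed)

def bond_energy(xi, xj):
    """Harmonic bond-stretch potential E = 0.5*kr*(r - r0)^2."""
    r = np.linalg.norm(np.asarray(xj, float) - np.asarray(xi, float))
    return 0.5 * kr * (r - r0) ** 2

def bond_force(xi, xj):
    """Axial (truss-like) force on atom j from the bond with atom i."""
    d = np.asarray(xj, float) - np.asarray(xi, float)
    r = np.linalg.norm(d)
    return -kr * (r - r0) * d / r

# At the equilibrium bond length the truss element carries no force:
print(bond_energy([0, 0, 0], [r0, 0, 0]))  # -> 0.0
```

In a full SFEMM-style model, analogous constitutive terms for bond-angle and bond-torsion energies would be imposed through the multipoint constraints, so that only the truss axial behavior needs a material law.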

Keywords: finite element, large deformation, molecular mechanics, structural method

Procedia PDF Downloads 133
309 Bank Internal Controls and Credit Risk in Europe: A Quantitative Measurement Approach

Authors: Ellis Kofi Akwaa-Sekyi, Jordi Moreno Gené

Abstract:

Managerial actions which negatively profile banks and impair corporate reputation are addressed through effective internal control systems. Disregard for acceptable standards and procedures for granting credit has affected bank loan portfolios and could be cited in the crises of some European countries. The study intends to determine the effectiveness of internal control systems, investigate whether perceived agency problems exist on the part of board members, and establish the relationship between internal controls and credit risk among listed banks in the European Union. Drawing theoretical support from the behavioural compliance and agency theories, about seventeen internal control variables (drawn from the revised COSO framework), together with bank-specific, country, stock market, and macro-economic variables, will be involved in the study. A purely quantitative approach will be employed to model internal control variables covering the control environment, risk management, control activities, information and communication, and monitoring. Panel data from 2005-2014 on listed banks from 28 European Union countries will be used for the study. Hypotheses will be tested, and Generalized Least Squares (GLS) regression will be run to establish the relationship between the dependent and independent variables. The Hausman test will be used to select between the random and fixed effects models. It is expected that listed banks will have sound internal control systems, but their effectiveness cannot be confirmed. A perceived agency problem on the part of the board of directors is expected to be confirmed. The study expects a significant effect of internal controls on credit risk. The study will uncover another perspective on internal controls as not only an operational risk issue but a credit risk issue too.
Banks will be cautious in observing effective internal control systems as an ethical and socially responsible act, since the collapse (crisis) of financial institutions as a result of excessive default is a major source of contagion. This study deviates from the usual primary-data approach to measuring internal control variables and instead models them quantitatively from the panel data. Thus, a grey area in approaching the revised COSO framework for internal controls is opened for further research. Most bank failures and crises could be averted if effective internal control systems were religiously adhered to.
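The Hausman test mentioned above compares fixed-effects (FE) and random-effects (RE) estimates: H = (b_FE − b_RE)′ [V_FE − V_RE]⁻¹ (b_FE − b_RE), chi-squared with k degrees of freedom under the null that RE is consistent. A minimal sketch, with invented coefficient vectors and covariance matrices purely for illustration:

```python
import numpy as np

def hausman(b_fe, b_re, v_fe, v_re):
    """Hausman statistic; compare against a chi2(k) critical value."""
    diff = b_fe - b_re
    return float(diff @ np.linalg.inv(v_fe - v_re) @ diff)

# Hypothetical estimates for k = 2 regressors (not study results).
b_fe = np.array([0.80, -0.30])
b_re = np.array([0.70, -0.25])
v_fe = np.array([[0.04, 0.0], [0.0, 0.02]])
v_re = np.array([[0.01, 0.0], [0.0, 0.01]])

h = hausman(b_fe, b_re, v_fe, v_re)
print(round(h, 3))  # -> 0.583; below chi2(2) critical values, so RE would be kept
```

In practice the statistic would come from a panel estimation package rather than hand-entered matrices; the point here is only the mechanics of the test.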

Keywords: agency theory, credit risk, internal controls, revised COSO framework

Procedia PDF Downloads 289
308 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation

Authors: Li-hsing Shih, Wei-Jen Hsu

Abstract:

Recently, the sale of electric vehicles (EVs) has increased dramatically due to maturing technology and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs due to their long-term commitment to net-zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study tries to forecast the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure for purchasing vehicles, where five product attributes of both EVs and internal combustion engine vehicles (ICEVs) are selected. A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate the market share, assuming each respondent will purchase the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the two total utilities are calculated for each respondent, determining his/her choice. Once the choices of all respondents are known, an estimate of market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers' choice. This study adopts the assumption of a learning curve to predict the future price of EVs. Based on the learning curve method and past price data of EVs, a regression model is established, and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents by using their part-worth utility functions.
For instance, using one thousand generated future prices of an EV together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained with a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
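The phase-3 procedure can be sketched as follows: for each sampled EV price, every respondent picks the vehicle with the higher total utility, and the EV share is the fraction choosing the EV; repeating over many price draws yields a distribution of shares. All utilities, the price coefficient, and the price distribution below are invented for illustration, not survey estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

n_respondents, n_draws = 200, 1000
base_u_ev = rng.normal(1.0, 1.0, n_respondents)   # non-price utility of the EV
base_u_ice = rng.normal(1.2, 1.0, n_respondents)  # non-price utility of the ICEV
price_coef = -0.5                                 # utility per price unit (assumed)

prices_ev = rng.normal(1.0, 0.3, n_draws)         # sampled 2030 EV prices (assumed)
price_ice = 1.0                                   # ICEV price held fixed

shares = []
for p in prices_ev:
    u_ev = base_u_ev + price_coef * p
    u_ice = base_u_ice + price_coef * price_ice
    shares.append(np.mean(u_ev > u_ice))          # fraction choosing the EV

shares = np.array(shares)
print(f"EV share: mean={shares.mean():.2f}, "
      f"90% interval=({np.quantile(shares, 0.05):.2f}, {np.quantile(shares, 0.95):.2f})")
```

The interval around the mean is exactly the extra information the abstract highlights over a fixed-number forecast.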

Keywords: conjoint model, electrical vehicle, learning curve, Monte Carlo simulation

Procedia PDF Downloads 51
307 Resonant Tunnelling Diode Output Characteristics Dependence on Structural Parameters: Simulations Based on Non-Equilibrium Green Functions

Authors: Saif Alomari

Abstract:

The paper aims at giving physical and mathematical descriptions of how the structural parameters of a resonant tunnelling diode (RTD) affect its output characteristics, specifically the value of the peak voltage, peak current, peak-to-valley current ratio (PVCR), and the differences between peak and valley voltages and currents, ΔV and ΔI. A simulation-based approach using the Non-Equilibrium Green Function (NEGF) formalism in the Silvaco ATLAS simulator is employed to conduct a series of designed experiments. These experiments show how the doping concentration in the emitter and collector layers, their thicknesses, and the widths of the barriers and the quantum well influence the above-mentioned output characteristics. Each of these parameters was systematically changed while holding the others fixed in each set of experiments. Factorial experiments are outside the scope of this work and will be investigated in the future. The physics involved in the operation of the device is thoroughly explained, and mathematical models based on curve fitting and underlying physical principles are deduced. The models can be used to design devices with predictable output characteristics. Such models were found to be absent in the literature the author scanned. Results show that the doping concentration in each region has an effect on the value of the peak voltage. Increasing the carrier concentration in the collector region shifts the peak to lower values, whereas increasing it in the emitter shifts the peak to higher values. In the collector's case, the shift is controlled either by the built-in potential resulting from the concentration gradient or by the conductivity enhancement in the collector. The shift to higher voltages is found to be also related to the location of the Fermi level. The thicknesses of these layers play a role in the location of the peak as well.
It was found that increasing the thickness of each region shifts the peak to higher values until a specific characteristic length, afterwards the peak becomes independent of the thickness. Finally, it is shown that the thickness of the barriers can be optimized for a particular well width to produce the highest PVCR or the highest ΔV and ΔI. The location of the peak voltage is important in optoelectronic applications of RTDs where the operating point of the device is usually the peak voltage point. Furthermore, the PVCR, ΔV, and ΔI are of great importance for building RTD-based oscillators as they affect the frequency response and output power of the oscillator.
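The figures of merit discussed above (peak voltage, PVCR, ΔV, ΔI) are all read off a device's I-V curve. The sketch below extracts them from a synthetic RTD-like characteristic (a Gaussian resonance plus a leakage term), which is an invented curve, not simulator output.

```python
import numpy as np

# Synthetic RTD-like I-V curve: a resonance peak near 0.3 V plus leakage.
v = np.linspace(0.0, 1.0, 201)
i = np.exp(-((v - 0.3) / 0.08) ** 2) + 0.1 * v

ip_idx = int(np.argmax(i))                    # peak current index
iv_idx = ip_idx + int(np.argmin(i[ip_idx:]))  # valley: minimum after the peak
v_peak, i_peak = v[ip_idx], i[ip_idx]
v_valley, i_valley = v[iv_idx], i[iv_idx]

pvcr = i_peak / i_valley                      # peak-to-valley current ratio
dv, di = v_valley - v_peak, i_peak - i_valley # the abstract's dV and dI
print(f"Vp={v_peak:.2f} V, PVCR={pvcr:.1f}, dV={dv:.2f} V, dI={di:.2f}")
```

With real NEGF output, the same argmax/argmin extraction would be repeated per structural-parameter sweep to build the curve-fit models the abstract describes.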

Keywords: peak to valley ratio, peak voltage shift, resonant tunneling diodes, structural parameters

Procedia PDF Downloads 126
306 Partial M-Sequence Code Families Applied in Spectral Amplitude Coding Fiber-Optic Code-Division Multiple-Access Networks

Authors: Shin-Pin Tseng

Abstract:

Nowadays, numerous spectral amplitude coding (SAC) fiber-optic code-division multiple-access (FO-CDMA) techniques are appealing because they can provide moderate security and relieve the effects of multiuser interference (MUI). Nonetheless, the performance of previous networks is degraded by a fixed in-phase cross-correlation (IPCC) value. To address these problems, a new SAC FO-CDMA network using a partial M-sequence (PMS) code is presented in this study. Because the proposed PMS code originates from the M-sequence code, a system using the PMS code can effectively suppress the effects of MUI. In addition, a two-code keying (TCK) scheme can be applied in the proposed SAC FO-CDMA network to enhance the whole network performance. In consideration of system flexibility, simple optical encoders/decoders (codecs) using fiber Bragg gratings (FBGs) were also developed. First, we constructed a diagram of the SAC FO-CDMA network, including (N/2-1) optical transmitters, (N/2-1) optical receivers, and one N×N star coupler for broadcasting the transmitted optical signals to the input port of each optical receiver, where the parameter N is the code length of the PMS code. In addition, the proposed SAC network uses superluminescent diodes (SLDs) as light sources, which saves considerable system cost compared with other FO-CDMA methods. Each optical transmitter is composed of an SLD, one optical switch, and two optical encoders according to the assigned PMS codewords. On the other hand, each optical receiver includes a 1×2 splitter, two optical decoders, and one balanced photodiode for mitigating the effect of MUI. To simplify the following analysis, some assumptions were made. First, the unpolarized SLD has a flat power spectral density (PSD). Second, the received optical power at the input port of each optical receiver is the same.
Third, all photodiodes in the proposed network have the same electrical properties. Fourth, transmitting '1' and '0' are equally probable. Subsequently, taking into account phase-induced intensity noise (PIIN) and thermal noise, the corresponding performance was evaluated and compared with that of previous SAC FO-CDMA networks. The numerical results show that the proposed network achieves about 25% better performance than networks using other codes at a BER of 10⁻⁹. This is because the effect of PIIN is effectively mitigated and the received power is doubled. As a result, the SAC FO-CDMA network using PMS codes has an opportunity to be applied in next-generation optical networks.
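The abstract does not state the exact BER expression used, but SAC-OCDMA analyses of this kind commonly map the electrical SNR to BER through the Gaussian approximation BER = (1/2)·erfc(√(SNR/8)), with SNR = I²/(σ²_PIIN + σ²_thermal + σ²_shot). A sketch with placeholder signal and noise values (not figures from the study):

```python
import math

def ber_from_snr(snr):
    """Gaussian-approximation BER commonly used in SAC-OCDMA analyses."""
    return 0.5 * math.erfc(math.sqrt(snr / 8.0))

signal_current = 1e-6   # photocurrent, A (assumed)
noise_var = 6e-15       # total noise variance, A^2 (assumed)
snr = signal_current ** 2 / noise_var
print(f"SNR={snr:.1f}, BER={ber_from_snr(snr):.2e}")
```

Doubling the received power quadruples I² and hence the SNR in this expression, which is the mechanism behind the performance gain the abstract reports.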

Keywords: spectral amplitude coding, SAC, fiber-optic code-division multiple-access, FO-CDMA, partial M-sequence, PMS code, fiber Bragg grating, FBG

Procedia PDF Downloads 370
305 Authoring of Augmented Reality Manuals for Not Physically Available Products

Authors: Vito M. Manghisi, Michele Gattullo, Alessandro Evangelista, Enricoandrea Laviola

Abstract:

In this work, we compared two solutions for displaying a demo version of an Augmented Reality (AR) manual when the real product is not available, opting to replace it with its computer-aided design (CAD) model. AR has been proven to be effective in maintenance and assembly operations by many studies in the literature. However, most of them present solutions for existing products, usually converting old printed manuals into AR manuals. In this case, authoring consists of defining how to convey existing instructions through AR. This is not a simple choice, and demo versions are created to test the goodness of the design. However, this becomes impossible when the product is not physically available, as for new products. A solution could be creating an entirely virtual environment with the product and the instructions. However, in this way, user interaction is completely different from that in the real application, so it would be hard to test the usability of the AR manual. This work aims to propose and compare two different solutions for displaying a demo version of an AR manual, to support authoring when the product is not physically available. As a case study, we used an innovative semi-hermetic compressor that has not yet been produced. The applications were developed for a handheld device using Unity 3D. The main issue was how to show the compressor and attach instructions to it. In one approach, we used Vuforia natural feature tracking to attach a CAD model of the compressor to a 2D image that is a 1:1 scale drawing of the top view of the CAD model. In this way, during the AR manual demonstration, the 3D model of the compressor is displayed on the user's device in place of the real compressor, and all the virtual instructions are attached to it. In the other approach, we first created a support application that shows the CAD model of the compressor on a marker.
Then, we recorded a video of this application while moving around the marker, obtaining a video that shows the CAD model from every point of view. For the AR manual, we used the Vuforia model target (360° option) to track the CAD model of the compressor as if it were the real compressor. During the demonstration, the video is shown on a fixed large screen, and the instructions are displayed attached to it in the AR manual. The first solution has the main drawback that everyone working on the authoring of the AR manual must keep the printed image, but it shows the product at real scale, and interaction during the demonstration is very simple. The second one does not need a printed marker during the demonstration, but it does need a screen. Still, the compressor model is resized, and interaction is awkward since the user has to play the video on the screen to rotate the compressor. The two solutions were evaluated together with the company, and the first was preferred due to its more natural interaction.

Keywords: augmented reality, human computer interaction, operating instructions, maintenance, assembly

Procedia PDF Downloads 107
304 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer

Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu

Abstract:

Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical application scenarios, the size of the image to be located is not fixed, and it is impractical to train different networks for all possible sizes. When the image size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually scale or crop the image directly in some common way. This results in the loss of information important to the geolocalization task, thus affecting the performance of the method. For example, excessive down-sampling can blur building contours, and inappropriate cropping can lose key semantic elements, resulting in incorrect geolocation results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The designed learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. First, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel network) is used to approximate the best receptive field, preserving the geometric shapes of the original image, and SENet (squeeze-and-excitation network) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized image. (2) The proposed image geolocalization method embeds the above image resizer as a front layer of the descriptor extraction network. It not only enables the network to be compatible with arbitrary-sized input images but also enhances the geometric features that are crucial to the image geolocalization task.
Moreover, the triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of geometric elements extracted by the first convolutional layer. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for image geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably to recently mainstream geolocalization methods.
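The resizer's first step, plain bilinear interpolation of the image and its feature maps, can be sketched in NumPy for a single channel (align-corners-style sampling); the attention stages (SKNet/SENet) and the fusion are omitted.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Bilinear resize of a 2D array to (out_h, out_w), align-corners style."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)          # sample rows in input space
    xs = np.linspace(0, in_w - 1, out_w)          # sample columns
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_resize(img, 2, 2))  # -> [[ 0.  3.] [12. 15.]]
```

In a deep-learning pipeline this step would typically be a framework call (e.g. a bilinear interpolation layer) so that gradients flow through the resizer during training.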

Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature

Procedia PDF Downloads 195
303 Digitization and Economic Growth in Africa: The Role of Financial Sector Development

Authors: Abdul Ganiyu Iddrisu, Bei Chen

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because extant studies that explicitly evaluate the digitization and economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment. Obstacles to access to financing, for instance, physical distance, minimum balance requirements, and low income flows, among others, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector. However, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies have maintained that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects, and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the role of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa.
From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions. This nexus is rarely examined empirically in the literature. Secondly, we examine the effect on economic growth of domestic credit to the private sector and stock market capitalization as a percentage of GDP, both used to proxy financial sector development. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found: first, digitalization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa. However, the net-effect results suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies.
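The "net effect" language above typically refers to a model with an interaction term, growth = b1·digit + b2·fsd + b3·(digit·fsd) + …, where the marginal effect of digitalization is b1 + b3·FSD evaluated at the sample mean of financial sector development. The coefficients below are invented for illustration only (b3 < 0, as the abstract reports, yet the net effect stays positive).

```python
def net_effect(b1, b3, fsd_mean):
    """Marginal effect of digitalization at the mean level of FSD."""
    return b1 + b3 * fsd_mean

b1, b3 = 0.30, -0.004   # hypothetical estimates (direct and interaction terms)
fsd_mean = 35.0         # mean credit to the private sector, % of GDP (assumed)
print(net_effect(b1, b3, fsd_mean))
```

A positive value at the sample mean, despite b3 < 0, is exactly the pattern the abstract describes: the conditional (interaction) term is negative, but the overall effect of digitalization on growth remains positive.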

Keywords: digitalization, financial sector development, Africa, economic growth

Procedia PDF Downloads 118
302 Emissions and Total Cost of Ownership Assessment of Hybrid Propulsion Concepts for Bus Transport with Compressed Natural Gases or Diesel Engine

Authors: Volker Landersheim, Daria Manushyna, Thinh Pham, Dai-Duong Tran, Thomas Geury, Omar Hegazy, Steven Wilkins

Abstract:

Air pollution is one of the emerging problems in our society. Targets for the reduction of CO₂ emissions address low-carbon and resource-efficient transport. (Plug-in) hybrid electric propulsion concepts offer the possibility to reduce the total cost of ownership (TCO) and emissions of public transport vehicles (e.g., in bus applications). In this context, diesel engines are typically used to form the hybrid propulsion system of the vehicle. Though the technological development of diesel engines has brought major advances, some challenges, such as the high amount of particle emissions, remain relevant. Gaseous fuels (i.e., compressed natural gas (CNG) or liquefied petroleum gas (LPG)) represent an attractive alternative to diesel because of their composition. In the framework of the EU-funded research project 'Optimised Real-world Cost-Competitive Modular Hybrid Architecture' (ORCA), two different hybrid-electric propulsion concepts have been investigated: one using a diesel engine as the internal combustion engine and one using CNG as fuel. The aim of the current study is to analyze the specific benefits of the aforementioned hybrid propulsion systems for predefined driving scenarios with regard to emissions and total cost of ownership in bus applications. Engine models based on experimental data for diesel and CNG were developed. For the purpose of designing optimal energy management strategies for each propulsion system, map-driven or quasi-static models for the specific engine types are used in the simulation framework. An analogous modelling approach has been chosen to represent emissions. This paper compares the two concepts regarding their CO₂ and NOx emissions. This comparison is performed for relevant bus missions (urban, suburban, with and without a zero-emission zone) and with different energy management strategies.
In addition to the emissions, also the downsizing potential of the combustion engine has been analysed to minimize the powertrain TCO (pTCO) for plug-in hybrid electric buses. The results of the performed analyses show that the hybrid vehicle concept using the CNG engine shows advantages both with respect to emissions as well as to pTCO. The pTCO is 10% lower, CO₂ emissions are 13% lower, and the NOx emissions are more than 50% lower than with the diesel combustion engine. These results are consistent across all usage profiles under investigation.

Keywords: bus transport, emissions, hybrid propulsion, pTCO, CNG

Procedia PDF Downloads 123
301 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor under Liquefaction and Scour

Authors: Vinay Kumar Vanjakula, Frank Adam, Nils Goseberg, Christian Windt

Abstract:

When a structure is installed on a seabed, its presence influences the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, breaking of waves or currents, and pressure differentials around the seabed sediment. These changes allow local seabed sediment to be carried away, resulting in scour (erosion), which threatens the structure's stability. In recent decades, extensive research has been carried out on scour around fixed structures (bridges and monopiles) in rivers and oceans, but very little on scour and liquefaction around gravity anchors, particularly those for floating tension leg platform (TLP) substructures. Because of the importance of and need for better knowledge of scour and liquefaction around marine structures, MarTERA funded a three-year (2020-2023) research program called NuLIMAS (Numerical Modelling of Liquefaction Around Marine Structures), a consortium of European institutions (universities, laboratories, and consulting companies). The objective of this study is to build a numerical model that replicates reality, which helps to simulate (predict) underwater flow conditions and to study different marine scour and liquefaction situations. It supports the design of a heavyweight anchor for the TLP substructure and minimizes the time and expenditure spent on experiments. The achieved results and the numerical model will also form a basis for the development of other designs and concepts for marine structures. The computational fluid dynamics (CFD) numerical model will be built in OpenFOAM. A conceptual design of a heavyweight anchor for the TLP substructure is developed, taking into consideration the available state-of-the-art knowledge on scour and liquefaction and referring to previous existing designs.
These conceptual designs are validated against available, comparable experimental benchmark data and against CFD numerical benchmark standards (a CFD quality assurance study). A CFD optimization model/tool is designed to minimize the effects of fluid flow, scour, and liquefaction. A parameterized model is also developed to automate the calculation process and reduce user interaction. Parameters such as the anchor lowering process, flow-optimized outer contours, seabed interaction, and FSSI (fluid-structure-seabed interaction) are investigated and used to shape an optimized anchor.

Keywords: gravity anchor, liquefaction, scour, computational fluid dynamics

Procedia PDF Downloads 127
300 Improved Traveling Wave Method Based Fault Location Algorithm for Multi-Terminal Transmission System of Wind Farm with Grounding Transformer

Authors: Ke Zhang, Yongli Zhu

Abstract:

Due to rapid load growth in today's highly electrified societies and the requirement for green energy sources, large-scale wind farm power transmission systems are constantly developing. Such a system is a typical multi-terminal power supply system with a complex transmission line network topology. Moreover, it is often located in complex mountain and grassland terrain, which increases the likelihood of transmission line faults, makes fault location difficult, and can lead to severe wind power curtailment. To solve these problems, a fault location method for multi-terminal transmission lines based on wind farm characteristics and an improved single-ended traveling wave positioning method is proposed. By studying the zero-sequence current characteristics associated with the grounding transformers (GTs) in existing large-scale wind farms, a criterion for identifying the faulted interval of the multi-terminal transmission line is obtained. When a ground short-circuit fault occurs, zero-sequence current flows only on the path between the GT and the fault point. Therefore, the interval containing the fault point is obtained by determining the path of the zero-sequence current. After determining the fault interval, the location of the short-circuit fault point is calculated by the traveling wave method. This article uses an improved traveling wave method that achieves higher positioning accuracy by combining the single-ended traveling wave method with double-ended electrical data. In addition, a method of calculating the traveling wave velocity is derived from these improvements (in theory, it yields the actual wave velocity). The improved velocity calculation further improves the positioning accuracy.
Compared with the traditional positioning method, the average positioning error of this method is reduced by 30%. This method overcomes the shortcomings of traditional fault location methods for wind farm transmission lines. In addition, it is more accurate than the traditional fixed wave velocity approach: it calculates the wave velocity in real time according to field conditions, solving the problem that a preset traveling wave velocity cannot be updated to reflect the changing environment. The method is verified in PSCAD/EMTDC.
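The combination of single-ended and double-ended data can be illustrated as follows. This is a minimal sketch assuming GPS-synchronized arrival times of the first wavefront at both terminals (tA1, tB1) and of the fault-reflected wavefront at terminal A (tA2); the variable names are ours, and the paper's exact formulation may differ.

```python
def fault_location(L, tA1, tA2, tB1):
    """Improved traveling-wave location: solve for both wave velocity and
    fault distance from three arrival times, with unknown fault time t0:
        tA1 - t0 = x / v          (first wavefront at terminal A)
        tB1 - t0 = (L - x) / v    (first wavefront at terminal B)
        tA2 - tA1 = 2 * x / v     (fault-reflected wavefront at A)
    Eliminating t0 and x gives L / v = tA2 - 2*tA1 + tB1.
    L: line length [m]; times in seconds. Returns (x, v)."""
    v = L / (tA2 - 2.0 * tA1 + tB1)   # actual wave velocity, not a fixed assumption
    x = v * (tA2 - tA1) / 2.0         # fault distance from terminal A [m]
    return x, v
```

Solving the three arrival-time equations eliminates the unknown fault inception time, so no preset wave velocity has to be assumed; this is one way the single-ended reflection data and double-ended timestamps complement each other.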

Keywords: grounding transformer, multi-terminal transmission line, short circuit fault location, traveling wave velocity, wind farm

Procedia PDF Downloads 241
299 Study of Biofouling Wastewater Treatment Technology

Authors: Sangho Park, Mansoo Kim, Kyujung Chae, Junhyuk Yang

Abstract:

The International Maritime Organization (IMO) recognized the problem of invasive species and adopted the "International Convention for the Control and Management of Ships' Ballast Water and Sediments" in 2004, which came into force on September 8, 2017. In 2011, the IMO approved the "Guidelines for the Control and Management of Ships' Biofouling to Minimize the Transfer of Invasive Aquatic Species" to minimize the movement of invasive species by hull-attached organisms and required ships to manage the organisms attached to their hulls. Invasive species enter new environments through ships' ballast water and hull attachment. However, several obstacles to implementing these guidelines have been identified, including a lack of underwater cleaning equipment, regulations on underwater cleaning activities in ports, and difficulty accessing crevices in underwater areas. The shipping industry, the party responsible for implementing these guidelines, is motivated to follow them because removing organisms attached to the hull saves fuel costs, but it anticipates significant difficulties due to the obstacles mentioned above. Robots or divers remove the organisms attached to the hull underwater, and the resulting wastewater includes various species of organisms as well as particles of paint and other pollutants. Currently, no technology is available to sterilize the organisms in this wastewater or to stabilize the heavy metals in the paint particles. In this study, we aim to analyze the characteristics of the wastewater generated from the removal of hull-attached organisms and to select the optimal treatment technology. The organisms in the wastewater are treated to meet the biological treatment standard (D-2) using the sterilization technology applied in ships' ballast water treatment systems.
The heavy metals and other pollutants in the paint particles generated during removal are treated using stabilization technologies such as thermal decomposition. The wastewater is treated using a two-step process: 1) sterilization through pretreatment filtration equipment and electrolytic sterilization, and 2) removal of particulate pollutants such as heavy metals and dissolved inorganic substances. Through this study, we will develop a biofouling removal technology and an environmentally friendly processing system for the waste generated after removal that meets the requirements of the government and the shipping industry and lays the groundwork for future treatment standards.

Keywords: biofouling, ballast water treatment system, filtration, sterilization, wastewater

Procedia PDF Downloads 88
298 Dose Saving and Image Quality Evaluation for Computed Tomography Head Scanning with Eye Protection

Authors: Yuan-Hao Lee, Chia-Wei Lee, Ming-Fang Lin, Tzu-Huei Wu, Chih-Hsiang Ko, Wing P. Chan

Abstract:

Computed tomography (CT) scanning of the head is a good method for investigating cranial lesions. However, radiation-induced oxidative stress can accumulate in the eyes and promote carcinogenesis and cataract formation. We therefore aimed to protect the eyes with barium sulfate shield(s) during CT scans and to investigate the resultant image quality and radiation dose to the eye. Patients who underwent health examinations were selectively enrolled in this study in compliance with the protocol approved by the Ethics Committee of the Joint Institutional Review Board at Taipei Medical University. Participants' brains were scanned, together with a water-based marker, by a multislice CT scanner (SOMATOM Definition Flash) under either a fixed tube current-time setting or automatic tube current modulation (TCM). The lens dose was measured by Gafchromic films, whose dose response curve was previously fitted using thermoluminescent dosimeters, with or without a barium sulfate or bismuth-antimony shield laid above. For the assessment of image quality, CT images at slice planes covering the regions of interest on the zygomatic, orbital, and nasal bones of the head phantom, as well as the water-based marker, were used for calculating the signal-to-noise and contrast-to-noise ratios. The barium sulfate and bismuth-antimony shields decreased the lens dose by 24% and 47% on average, respectively. Under topogram-based TCM, the dose-saving power of the bismuth-antimony shield was mitigated, whereas that of the barium sulfate shield was enhanced. On the other hand, the signal-to-noise and contrast-to-noise ratios of the DSCT images were decreased by both the barium sulfate and the bismuth-antimony shield, resulting in an overall reduction of the CNR. In contrast, the integration of topogram-based TCM elevated the signal difference between the ROIs on the zygomatic bones and eyeballs while preferentially decreasing the signal-to-noise ratios when the barium sulfate shield was used.
The results of this study indicate that the balance between eye exposure and image quality can be optimized by combining eye shields with topogram-based TCM on the multislice scanner. Eye shielding can change the photon attenuation characteristics of tissues close to the shield. The application of either shield for eye protection is therefore not recommended when intraorbital lesions are being sought.
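The signal-to-noise and contrast-to-noise ratios reported above are commonly computed from ROI pixel statistics. Below is a minimal sketch under one common definition; the study's exact formulas are not given in the abstract, so treat these as illustrative conventions.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest:
    mean pixel value divided by its standard deviation."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b, background):
    """Contrast-to-noise ratio between two ROIs, with noise taken as the
    standard deviation of a uniform background region (one common convention)."""
    bg = np.asarray(background, dtype=float)
    return abs(float(np.mean(roi_a)) - float(np.mean(roi_b))) / bg.std()
```

With these definitions, a shield that raises image noise near the orbit lowers both metrics even if the mean HU values of the ROIs are unchanged, which is consistent with the CNR reduction described above.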

Keywords: computed tomography, barium sulfate shield, dose saving, image quality

Procedia PDF Downloads 251
297 Discrete Element Simulations of Composite Ceramic Powders

Authors: Julia Cristina Bonaldo, Christophe L. Martin, Severine Romero Baivier, Stephane Mazerat

Abstract:

Alumina refractories are commonly used in the steel and foundry industries. These refractories are prepared through a powder metallurgy route. They are a mixture of hard alumina particles and graphite platelets embedded in a soft carbonaceous matrix (binder). The powder can be cold pressed isostatically or uniaxially, depending on the application. The compact is then fired to obtain the final product. The quality of the product is governed by the microstructure of the composite and by the process parameters. The compaction behavior and the mechanical properties of the fired product depend greatly on the amount of each phase, on their morphology, and on the initial microstructure. In order to better understand the link between these parameters and the macroscopic behavior, we use the Discrete Element Method (DEM) to simulate the compaction process and the fracture behavior of the fired composite. These simulations are coupled with well-designed experiments. Four mixes with various amounts of Al₂O₃ and binder were tested both experimentally and numerically. In DEM, each particle is modelled individually, and the interactions between particles are taken into account through appropriate contact or bonding laws. Here, we model a bimodal mixture of large and small Al₂O₃ particles covered with a soft binder. This composite is itself mixed with graphite platelets. X-ray tomography images are used to analyze the morphologies of the different components. Large Al₂O₃ particles and graphite platelets are modelled in DEM as sets of particles bonded together. The binder is modelled as a soft shell that covers both large and small Al₂O₃ particles. When two particles with binder indent each other, they first interact through this soft shell. Once a critical indentation is reached (towards the end of compaction), hard Al₂O₃-Al₂O₃ contacts appear. In accordance with experimental data, DEM simulations show that the amounts of Al₂O₃ and binder play a major role in the compaction behavior.
The graphite platelets bend and break during compaction, also contributing to the macroscopic stress. The firing step is modelled in DEM by ascribing bonds to particles that are in contact after compaction. The fracture behavior of the compacted mixture is also simulated and compared with experimental data. Both diametrical compression tests (Brazilian tests) and triaxial tests are carried out. Again, the link between the amount of Al₂O₃ particles and the fracture behavior is investigated. The methodology described here can be generalized to other particulate materials used in the ceramic industry.
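The two-stage interaction described above (soft binder shells first, then hard Al₂O₃-Al₂O₃ contact beyond a critical indentation) can be sketched as a piecewise normal contact law. The linear stiffnesses and parameter values below are illustrative assumptions, not the study's calibrated contact laws.

```python
def contact_force(delta, k_soft, k_hard, delta_crit):
    """Normal force between two binder-coated particles as a function of the
    overlap delta between their shell surfaces:
      delta <= 0:          no contact
      0 < delta <= crit:   only the soft binder shells deform (stiffness k_soft)
      delta > crit:        the hard Al2O3 cores touch; extra overlap is carried
                           by the much stiffer core contact (stiffness k_hard)
    The force is continuous at delta_crit, as a physical contact law must be."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta_crit:
        return k_soft * delta
    return k_soft * delta_crit + k_hard * (delta - delta_crit)
```

The sharp rise in stiffness once the cores touch is what makes the macroscopic compaction curve steepen towards the end of pressing, consistent with the behavior described above.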

Keywords: cold compaction, composites, discrete element method, refractory materials, x-ray tomography

Procedia PDF Downloads 125
296 Microbiological Analysis on Anatomical Specimens of Cats for Use in Veterinary Surgery

Authors: Raphael C. Zero, Marita V. Cardozo, Thiago A. S. S. Rocha, Mariana T. Kihara, Fernando A. Ávila, Fabrício S. Oliveira

Abstract:

There are several fixative and preservative solutions for use on cadavers, many of them using formaldehyde as the fixative or preservative of anatomical parts. In some countries, such as Brazil, this toxic agent has been increasingly restricted. The objective of this study was to microbiologically identify and quantify the key agents in tanks containing 96GL ethanol or sodium chloride solutions, used respectively as fixatives and preservatives of cat cadavers. Eight adult cat cadavers, three females and five males, with an average weight of 4.3 kg, were used. After injection via the external common carotid artery (120 ml/kg, 95% 96GL ethyl alcohol and 5% pure glycerin), the cadavers were fixed in a plastic tank with 96GL ethanol for 60 days. After fixing, they were stored in a 30% sodium chloride aqueous solution for 120 days in a similar tank. Samples were collected at the start of the experiment, before the animals were placed in the ethanol tanks, and monthly thereafter. The bacterial count was performed by the pour plate method in BHI (Brain Heart Infusion) agar, and the plates were incubated aerobically and anaerobically for 24 h at 37 °C. MacConkey agar, SPS (Sulfite Polymyxin Sulfadiazine) agar, and MYP Agar Base were used to isolate the microorganisms. There was no microbial growth in the samples prior to alcohol fixation. After 30 days of fixation in the alcohol solution, total aerobes and anaerobes (<1.0 x 10 CFU/ml) were found, and Pseudomonas sp., Staphylococcus sp., and Clostridium sp. were the identified agents. After 60 days in the alcohol fixation solution, total aerobes (<1.0 x 10 CFU/ml) and total anaerobes (<2.2 x 10 CFU/ml) were found, and the identified agents were the same. After 30 days of storage in the 30% sodium chloride aqueous solution, total aerobes (<5.2 x 10 CFU/ml) and total anaerobes (<3.7 x 10 CFU/ml) were found, and the agents identified were Staphylococcus sp., Clostridium sp., and fungi.
After 60 days of sodium chloride storage, total aerobes (<3.0 x 10 CFU/ml) and total anaerobes (<7.0 x 10 CFU/ml) were found, and the identified agents remained the same: Staphylococcus sp., Clostridium sp., and fungi. The microbiological count was low, and visual inspection did not reveal signs of contamination in the tanks. There was no strong odor or putrefaction, which proved the technique to be microbiologically effective in fixing and preserving the cat cadavers over the four-month period in which they are provided to undergraduate veterinary medicine students for surgery practice. All experimental procedures were approved by the Municipal Legal Department (protocol 02.2014.000027-1). The project was funded by FAPESP (protocol 2015-08259-9).
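For reference, counts obtained by the pour plate method follow the standard colony-forming-unit calculation; a minimal sketch (the function name is ours, and the example numbers are illustrative, not the study's data):

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    """Standard plate-count calculation:
        CFU/ml = colony count * dilution factor / volume plated [ml]
    Plates are conventionally considered countable at roughly 30-300 colonies."""
    return colonies * dilution_factor / plated_volume_ml
```

For example, 22 colonies on a plate poured with 1.0 ml of a 1:100 dilution would correspond to 2.2 x 10³ CFU/ml.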

Keywords: anatomy, fixation, microbiology, small animal, surgery

Procedia PDF Downloads 263
295 An Integrated Geophysical Investigation for Earthen Dam Inspection: A Case Study of Huai Phueng Dam, Udon Thani, Northeastern Thailand

Authors: Noppadol Poomvises, Prateep Pakdeerod, Anchalee Kongsuk

Abstract:

In the middle of September 2017, a tropical storm named 'DOKSURI' swept through Udon Thani, Northeastern Thailand. The storm brought heavy rain for many hours and caused a large amount of water to flow into the Huai Phueng reservoir. The impounded water level rose rapidly, and the excess water flowed over a service spillway of the morning-glory type, constructed of concrete about 50 years ago. Subsequently, a sinkhole formed on the dam crest, and five points of water piping were found on the downstream slope close to the spillway. Three geophysical techniques were used to inspect the cause of the failures: Electrical Resistivity Imaging (ERI), Multichannel Analysis of Surface Waves (MASW), and Ground Penetrating Radar (GPR). The ERI results clearly show evidence of the overtopping event and heterogeneity around the spillway, indicating the probable earlier extent of the sinkhole around the pipe. The shear wave velocity of subsurface soil measured by MASW can be numerically converted to the undrained shear strength of the impervious clay core. The GPR results clearly reveal partial settlement of the freeboard zone at the top of the dam and delineate the new refill material placed to plug the sinkhole and restore the dam to its intended condition. In addition, the GPR images confirm that there are no sinkholes along the survey lines other than the one found on top of the spillway. Integrated interpretation of the three results, together with evidence observed during a field walk-through and data from drilled holes, indicates four main causes. The first is excessive water flowing over the spillway. Second, the water attacking the morning-glory spillway created cracks at the concrete contacts where the spillway crosses to the center of the dam. Third, the high velocity of water inside the concrete pipe sucked fine particles of embankment material down through those cracks and flushed them out to the river channel.
Lastly, the loss of clay material from the dam into the concrete pipe created the sinkhole at the crest. In the case of the piping failures, it is possible that they formed both by backward erosion (internal erosion along or into the embedded structure of the spillway walls) and by excess saturation of the downstream material.
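The MASW-based conversion mentioned above is typically done with an empirical correlation between shear-wave velocity and undrained shear strength. The sketch below shows only the generic power-law form of such correlations; the coefficients a and b are site-specific, must be calibrated against borehole or laboratory data, and no particular published values are implied here.

```python
def undrained_shear_strength(vs_mps, a, b):
    """Generic empirical correlation of the form  Su = a * Vs**b.
    vs_mps : shear-wave velocity from MASW [m/s]
    a, b   : site-calibrated coefficients (hypothetical, not published values)
    Returns Su in the units implied by the calibration (commonly kPa)."""
    return a * vs_mps ** b
```

For instance, with purely illustrative coefficients a = 0.1 and b = 1.0, a measured Vs of 150 m/s would map to Su = 15; in practice both coefficients come from fitting the correlation to the drilled-hole data mentioned above.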

Keywords: dam inspection, GPR, MASW, resistivity

Procedia PDF Downloads 220
294 Connecting MRI Physics to Glioma Microenvironment: Comparing Simulated T2-Weighted MRI Models of Fixed and Expanding Extracellular Space

Authors: Pamela R. Jackson, Andrea Hawkins-Daarud, Cassandra R. Rickertsen, Kamala Clark-Swanson, Scott A. Whitmire, Kristin R. Swanson

Abstract:

Glioblastoma Multiforme (GBM), the most common primary brain tumor, often presents with hyperintensity on T2-weighted or T2-weighted fluid-attenuated inversion recovery (T2/FLAIR) magnetic resonance imaging (MRI). This hyperintensity corresponds with vasogenic edema; however, there are likely many infiltrating tumor cells within the hyperintensity as well. While MRIs do not directly indicate tumor cells, they do reflect the microenvironmental water abnormalities caused by the presence of tumor cells and edema. The inherent heterogeneity of GBMs and the resulting MRI features complicate assessing disease response. To understand how hyperintensity on T2/FLAIR MRI may correlate with edema in the extracellular space (ECS), we explored a multi-compartmental MRI signal equation that accounts for tissue compartments and their associated volumes, with input from a mathematical model of glioma growth that incorporates edema formation. The reasonableness of two possible extracellular space schemes was evaluated by varying the T2 of the edema compartment and calculating the possible resulting T2s in tumor and peripheral edema. In the mathematical model, gliomas were comprised of vasculature and three tumor cellular phenotypes: normoxic, hypoxic, and necrotic. Edema was characterized as fluid leaking from abnormal tumor vessels. Spatial maps of tumor cell density and edema for virtual tumors were simulated with different rates of proliferation and invasion and various ECS expansion schemes. These spatial maps were then passed into a multi-compartmental MRI signal model to generate simulated T2/FLAIR MR images. Individual compartments' T2 values in the signal equation were taken from the literature or estimated, and the T2 for edema specifically was varied over a wide range (200 ms - 9200 ms). T2 maps were calculated from the simulated images.
T2 values based on simulated images were evaluated for regions of interest (ROIs) in normal-appearing white matter, tumor, and peripheral edema. The ROI T2 values were compared to T2 values reported in the literature. The expanding extracellular space scheme produced T2 values similar to the literature values. The static extracellular space scheme produced much lower T2 values, and no matter what T2 was assigned to edema, the intensities did not come close to literature values. Expanding the extracellular space is thus necessary to achieve simulated edema intensities commensurate with acquired MRIs.
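The multi-compartmental signal equation described above can be sketched in a reduced form as a volume-weighted sum of mono-exponential T2 decays. This simplified form is our assumption for illustration; the study's full model may include proton density, T1, and sequence-specific terms.

```python
import numpy as np

def t2w_signal(volumes, t2s, te_ms):
    """Simplified multi-compartment T2-weighted signal at echo time TE:
        S(TE) = sum_i  v_i * exp(-TE / T2_i)
    volumes : fractional volumes of the compartments (should sum to ~1)
    t2s     : compartment T2 values [ms]
    te_ms   : echo time [ms]"""
    v = np.asarray(volumes, dtype=float)
    t2 = np.asarray(t2s, dtype=float)
    return float(np.sum(v * np.exp(-te_ms / t2)))
```

In this reduced form, enlarging the edema compartment's volume fraction or its T2 raises S(TE) at long echo times, which is the mechanism by which ECS expansion drives the simulated T2/FLAIR hyperintensity.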

Keywords: extracellular space, glioblastoma multiforme, magnetic resonance imaging, mathematical modeling

Procedia PDF Downloads 217