Search results for: hemodynamic quantities
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 551

101 The Foundation Binary-Signals Mechanics and Actual-Information Model of the Universe

Authors: Elsadig Naseraddeen Ahmed Mohamed

Abstract:

In contrast to the uncertainty and complementarity principles, this paper shows that the probability of the simultaneous occupation of any definite values of the coordinates by definite values of momentum and energy, at any definite instant of time, can be described by a binary definite function. This function equals the difference between the numbers of occupation and evacuation epochs up to that time, and equivalently the number of exchanges between those occupation and evacuation epochs up to that time, modulo two. These binary definite quantities are defined at every point on the real time line, so they form a binary signal that represents a complete mechanical description of physical reality. The times of the exchanges mark the boundaries of the occupation and evacuation epochs, from which the binary signals can be calculated, using the fact that the universe's events actually extend along the positive and negative real time line in one direction of extension as the number of exchanges increases. Consequently, there exists a noninvertible transformation matrix, defined as the matrix product of an invertible rotation matrix and a noninvertible scaling matrix, which changes the direction and the magnitude of the exchange-event vector, respectively. These noninvertible transformations will be called actual transformations, in contrast to information transformations, by which the universe's events transformed by actual transformations can be navigated backward and forward along the real time line; the information transformations are derived as elements of a group associated with their corresponding actual transformations.
The actual-information model of the universe is derived by assuming the existence of a time instant zero, before and at which no coordinate is occupied by any definite values of momentum and energy; after that time, the universe begins expanding in spacetime. This assumption makes Laplace's demon, who at one moment could measure the positions and momenta of all constituent particles of the universe and then use the laws of classical mechanics to predict all future and past events of the universe, superfluous. We only need to establish analog-to-digital converters that sense the binary signals determining the boundaries of the occupation and evacuation epochs of the definite values of the coordinates, relative to their origin, by the definite values of momentum and energy, as present events of the universe; from these, its past and future events can be predicted approximately with high precision.
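The central binary quantity described above can be illustrated with a minimal sketch (not from the paper; the function name and the sample flip times are assumptions): the signal at time t is the number of occupation/evacuation exchange epochs up to t, taken modulo two.

```python
# Toy sketch: a "binary signal" as the parity of exchange epochs.
# Given the times at which a state flips between occupied and evacuated,
# the signal at time t is the count of flips up to t, modulo 2 -- which
# equals the parity of the difference between occupation and evacuation
# epochs up to t.

def binary_signal(exchange_times, t):
    """Occupation state at time t: number of exchange epochs <= t, mod 2."""
    return sum(1 for tau in exchange_times if tau <= t) % 2

# Example: flips at t = 1, 2.5 and 4; the state starts at 0 (evacuated).
flips = [1.0, 2.5, 4.0]
states = [binary_signal(flips, t) for t in (0.0, 1.5, 3.0, 5.0)]
```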

Keywords: binary-signal mechanics, actual-information model of the universe, actual-transformation, information-transformation, uncertainty principle, Laplace's demon

Procedia PDF Downloads 151
100 Study of Polyphenol Profile and Antioxidant Capacity in Italian Ancient Apple Varieties by Liquid Chromatography

Authors: A. M. Tarola, R. Preti, A. M. Girelli, P. Campana

Abstract:

Safeguarding, studying and enhancing biodiversity play an important and indispensable role in re-launching agriculture. Ancient local varieties are therefore a precious resource for genetic and health improvement. In order to protect biodiversity through the recovery and valorization of autochthonous varieties, in this study we analyzed 12 samples of four ancient apple cultivars representative of Friuli Venezia Giulia, selected by local farmers working on a project for the recovery of ancient apple cultivars. The aim of this study is to evaluate the polyphenolic profile and the antioxidant capacity that characterize the organoleptic and functional qualities of this fruit species, besides its beneficial properties for health. In particular, for each variety, the following compounds were analyzed, both in the skin and in the pulp: gallic acid, catechin, chlorogenic acid, epicatechin, caffeic acid, coumaric acid, ferulic acid, rutin, phlorizin, phloretin and quercetin, to highlight any differences between the edible parts of the apple. The analysis of individual phenolic compounds was performed by High Performance Liquid Chromatography (HPLC) coupled with a diode array UV detector (DAD), the antioxidant capacity was estimated using an in vitro assay based on a free radical scavenging method, and the total phenolic content was determined using the Folin-Ciocalteu method. From the results, it is evident that the catechins are the most abundant polyphenols, reaching values of 140-200 μg/g in the pulp and 400-500 μg/g in the skin, with a prevalence of epicatechin. Catechins and phlorizin, a dihydrochalcone typical of apples, are always present in larger quantities in the peel. Total phenolic content was positively correlated with antioxidant activity in both apple pulp (r² = 0.850) and peel (r² = 0.820). Comparing the results, differences between the varieties analyzed and between the edible parts (pulp and peel) of the apple were highlighted.
In particular, the apple peel is richer in polyphenolic compounds than the pulp, and flavonols are exclusively present in the peel. In conclusion, polyphenols, being antioxidant substances, confirm the benefits of fruit in the diet, especially for the prevention and treatment of degenerative diseases. They also proved to be a good marker for the characterization of different apple cultivars. The importance of protecting biodiversity in agriculture was also highlighted, through the exploitation of native products and now-forgotten ancient apple varieties.
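As a hedged illustration of the correlation reported above, the coefficient of determination (r²) between total phenolic content and antioxidant activity can be computed from paired measurements as follows; the data values below are hypothetical, not the study's measurements.

```python
# Minimal sketch: coefficient of determination (r^2) for the linear relation
# between total phenolic content and antioxidant activity.
# The paired values below are illustrative only, not the study's data.

def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)  # square of Pearson's r

phenolics = [120, 180, 250, 310, 400]   # hypothetical, ug/g
activity = [15, 22, 30, 39, 48]         # hypothetical, % radical scavenging
r2 = r_squared(phenolics, activity)
```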

Keywords: apple, biodiversity, polyphenols, antioxidant activity, HPLC-DAD, characterization

Procedia PDF Downloads 118
99 Creep Analysis and Rupture Evaluation of High Temperature Materials

Authors: Yuexi Xiong, Jingwu He

Abstract:

The structural components in an energy facility, such as steam turbine machines, are operated under high stress and elevated temperature over extended periods of time, and thus creep deformation and creep rupture failure are important issues that need to be addressed in the design of such components. Numerous creep models are used for creep analysis, each with advantages and disadvantages in terms of accuracy and efficiency. The Isochronous Creep Analysis is one of the simplified approaches, in which a full time-dependent creep analysis is avoided and instead an elastic-plastic analysis is conducted at each time point. This approach has been established based on the rupture-dependent creep equations using the well-known Larson-Miller parameter (LMP). In this paper, some fundamental aspects of creep deformation and the rupture-dependent creep models are reviewed, and the analysis procedures using isochronous creep curves are discussed. Four rupture failure criteria are examined from creep fundamentals: the Stress Damage, Strain Damage, Strain Rate Damage, and Strain Capability criteria. The accuracy of these criteria in predicting creep life is discussed, and applications of the creep analysis procedures and failure predictions to simple models are presented. In addition, a new failure criterion is proposed to improve the accuracy and effectiveness of the existing criteria, and comparisons are made between the existing criteria and the new one using several example materials. Strain increase and stress relaxation together form a full picture of the creep behaviour of a material under high temperature over an extended period of time, and it is important to bear this in mind when dealing with creep problems. Accordingly, there are two sets of rupture-dependent creep equations.
While the rupture strength vs. LMP equation shows how the rupture time depends on the stress level under load-controlled conditions, the strain rate vs. rupture time equation reflects how the rupture time behaves under strain-controlled conditions. Among the four existing failure criteria for rupture life prediction, the Stress Damage and Strain Damage Criteria provide the most conservative and non-conservative predictions, respectively. The Strain Rate and Strain Capability Criteria provide predictions in between, which are believed to be more accurate because strain rate and strain capability reflect the creep rupture behaviour more directly than stress does. A modified Strain Capability Criterion is proposed that makes use of the two sets of creep equations and is therefore considered to be more accurate than the original Strain Capability Criterion.
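The Larson-Miller parameter mentioned above can be sketched numerically as follows; the temperatures, the rupture time and the constant C = 20 are illustrative assumptions (C is material-dependent, roughly 20 for many steels), not values from the paper.

```python
# Sketch of the Larson-Miller parameter used by rupture-dependent creep
# models: LMP = T * (C + log10(t_r)), with T the absolute temperature (K),
# t_r the rupture time (hours), and C a material constant (~20 for many
# steels). Inverting the relation estimates rupture life at another
# temperature for the same stress level (i.e. the same LMP).
import math

def larson_miller(T_kelvin, t_rupture_h, C=20.0):
    return T_kelvin * (C + math.log10(t_rupture_h))

def rupture_time(T_kelvin, lmp, C=20.0):
    # Invert the parameter to estimate rupture life at temperature T
    return 10.0 ** (lmp / T_kelvin - C)

lmp = larson_miller(873.0, 10_000.0)      # 600 C, 10,000 h (illustrative)
t_at_higher_T = rupture_time(923.0, lmp)  # same stress level at 650 C
```

As expected, raising the temperature at constant stress sharply shortens the predicted rupture life.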

Keywords: creep analysis, high temperature materials, rupture evaluation, steam turbine machines

Procedia PDF Downloads 263
98 Life Cycle Carbon Dioxide Emissions from the Construction Phase of Highway Sector in China

Authors: Yuanyuan Liu, Yuanqing Wang, Di Li

Abstract:

Mitigating carbon dioxide (CO2) emissions from road construction activities is one of the potential pathways for dealing with climate change, owing to the sector's heavy use of materials, machinery energy consumption, and the large quantities of fuel consumed by vehicles and equipment for transportation and on-site construction activities. Aiming to assess the environmental impact of road infrastructure construction activities and to identify hotspots of emissions sources, this study developed a life-cycle CO2 emissions assessment framework covering the three stages of material production, to-site transportation, and on-site transportation, under the guidance of the LCA principles of ISO 14040. A streamlined inventory analysis of the sub-processes of each stage was then conducted based on the budget files of highway projects in China. The calculation results were normalized to a functional unit expressed in tons per km per lane. A comparison between the emissions of each stage and sub-process was then made to identify the major contributors over the whole highway life cycle. In addition, the results were compared with results from other countries to understand how the CO2 emissions associated with Chinese road infrastructure rank worldwide. The results showed that the material production stage produces most of the CO2 emissions (more than 80%), with the production of cement and steel accounting for large shares of the carbon emissions. Life cycle CO2 emissions of the fuel and electric energy associated with to-site and on-site transportation vehicles and equipment are a minor component of the total life cycle CO2 emissions from highway construction activities. Bridges and tunnels are the dominant carbon contributors compared with road segments. The life cycle CO2 emissions of road segments in Chinese highway projects are slightly higher than the estimates for highways in European countries and the USA, at about 1,500 tons per km per lane.
In particular, the life cycle CO2 emissions of road pavement in most cities worldwide are about 500 tons per km per lane. However, obvious differences appear between cities when the estimation of the life cycle CO2 emissions of highway projects includes bridges and tunnels. The findings of the study offer decision makers a more comprehensive reference for understanding the contribution of road infrastructure to climate change, and especially the contribution of road infrastructure construction activities in China. In addition, the identified hotspots of emissions sources provide insight into how to reduce road carbon emissions for the development of sustainable transportation.

Keywords: carbon dioxide emissions, construction activities, highway, life cycle assessment

Procedia PDF Downloads 239
97 A Model of the Universe without Expansion of Space

Authors: Jia-Chao Wang

Abstract:

A model of the universe that does not invoke the expansion of space is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling through space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, to experience redshift. It also causes some of the photons to be scattered off their track toward an observer, resulting in beam intensity attenuation. As observed, the CMB exists everywhere in space and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually without altering their momenta drastically, as would happen in, for example, Compton scattering, which would totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag: the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect is the observed CMB dipole: the Earth travels at about 368 km/s relative to the CMB (the Local Group at about 600 km/s). In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears 0.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is 0.35 mK colder. The pressure of a thermalized photon gas is given by P_γ = E_γ/3 = aT⁴/3, where E_γ = aT⁴ is the energy density of the photon gas and a is the radiation constant (a = 4σ/c, with σ the Stefan-Boltzmann constant and c the speed of light). The observed CMB dipole therefore implies a pressure difference between the two sides of the Earth, resulting in a CMB drag on the Earth. By plugging in suitable estimates of the quantities involved, such as the cross section of the Earth and the temperatures on the two sides, this drag can be estimated to be tiny.
But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 distance modulus (µ) versus redshift (z) data points compiled from 643 supernova and 105 γ-ray burst observations, with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and the Planck 2015 results. The model can be used to interpret the Hubble constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
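The dissipation idea (though not the authors' fitted two-parameter model) can be sketched with a simple exponential energy-loss law; the coefficient k below is purely illustrative, chosen only so that the small-distance slope resembles a Hubble-like linear law z ≈ k·d.

```python
# Toy sketch of the dissipation mechanism only (not the paper's fitted
# model): if a photon loses a fixed fraction of its energy per unit distance
# through interaction with the CMB photon gas, E(d) = E0 * exp(-k * d), and
# the observed redshift is z = E_emitted / E_received - 1 = exp(k * d) - 1.
import math

def redshift(distance_mpc, k=2.3e-4):
    # k is an illustrative dissipation coefficient per Mpc (assumption),
    # picked so that z ~ k*d at small d mimics a Hubble-like law.
    return math.exp(k * distance_mpc) - 1.0

z_near = redshift(100.0)     # nearly linear regime
z_far = redshift(10_000.0)   # strongly nonlinear regime
```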

Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction

Procedia PDF Downloads 104
96 The Ballistics Case Study of the Enrica Lexie Incident

Authors: Diego Abbo

Abstract:

On February 15, 2012, off the Indian coast of Kerala, at position 091702N-0760180E, bursts of 5.56×45 mm shots were fired from Italian-made Beretta AR70 assault rifles aboard the oil tanker Enrica Lexie, flying the Italian flag, towards the Indian fishing boat St. Anthony. Six shots hit the St. Anthony, two of which killed the Indian fishermen Ajesh Pink and Valentine Jelestine. From the analysis of the kinematic engagement of the two ships and from the autopsy and ballistic findings of the Indian judicial authorities, it is possible to reconstruct the trajectories of the six shots. This paper reconstructs those trajectories and shows that they cannot have been fired directly but must have ricocheted off the water. As regards intermediate ballistics, the investigation scientifically demonstrates the ricochet of the bullets on the water, the gyroscopic deviation due to the ricochet, and the tumbling effect likewise due to the ricochet. Considering the four shots that directly impacted the fishing vessel, the present examination proves, with scientific value, that the trajectories could not have been downward but only upward; the same holds for the trajectories of the two shots that fatally struck the two fishermen. In fact, this paper demonstrates, with scientific value: the loss of projectile velocity due to the ricochet on the water; the tumbling effect in the ballistic medium within the two victims; the permanent cavities, a matter of wound ballistics, and the related ballistic trauma that prevented homeostasis, causing bleeding in one case; the thermo-hardening deformation of the bullet found in Valentine Jelestine's skull; and the upward, not downward, trajectories.
The paper constitutes a tool in forensic ballistics in that it manages to reconstruct, from the final resting spots of the fired projectiles, all phases of ballistics: the internal ballistics of the weapons fired, the intermediate phase, the terminal phase and the penetrative structural phase. In general terms, the ballistic reconstruction is based on measurable parameters whose values are contained with certainty within lower and upper limits. Therefore, quantities referring to angles, velocities, impact energies and the firing position of the shooter can be identified within those limits. Finally, the investigation of the internal bullet track, obtained from the autopsy examination, offers a significant “lesson learned” and, above all, a starting point for containing or mitigating bleeding as a rescue from future gunshot wounds.

Keywords: impact physics, intermediate ballistics, terminal ballistics, tumbling effect

Procedia PDF Downloads 144
95 Preparation of Hydrophobic Silica Membranes Supported on Alumina Hollow Fibers for Pervaporation Applications

Authors: Ami Okabe, Daisuke Gondo, Akira Ogawa, Yasuhisa Hasegawa, Koichi Sato, Sadao Araki, Hideki Yamamoto

Abstract:

Membrane separation draws attention as an energy-saving technology. Pervaporation (PV) uses hydrophobic ceramic membranes to separate organic compounds from industrial wastewaters. PV makes it possible to separate organic compounds from azeotropic mixtures and from aqueous solutions. For the PV separation of low concentrations of organics from aqueous solutions, hydrophobic ceramic membranes are expected to show high separation performance compared with conventional hydrophilic membranes. Membrane separation performance is evaluated on the basis of the pervaporation separation index (PSI), which depends on both the separation factor and the permeate flux. Ingenuity is required to increase the PSI such that the permeate flux increases without reducing the separation factor, or the separation factor increases without reducing the flux. A thin separation layer without defects and pinholes is required. In addition, it is known that the flux can be increased without reducing the separation factor by reducing the diffusion resistance of the membrane support. In a previous study, we prepared hydrophobic silica membranes by a molecular templating sol-gel method using cetyltrimethylammonium bromide (CTAB) to form pores suitable for permitting the passage of organic compounds through the membrane, and we separated low concentrations of organics from aqueous solutions by PV using these membranes. In the present study, hydrophobic silica membranes were prepared on a porous alumina hollow-fiber support that is thinner than the previously used alumina support. Ethyl acetate (EA) is used in large industrial quantities, so it was selected as the organic substance to be separated. Hydrophobic silica membranes were prepared by dip-coating porous alumina supports bearing a γ-alumina interlayer into a silica sol containing CTAB and vinyltrimethoxysilane (VTMS) as the silica precursor. The membrane thickness increases with the lifting speed of the sol in the dip-coating process.
Different thicknesses of the γ-alumina layer were prepared by dip-coating the support into a boehmite sol at different lifting speeds (0.5, 1, 3, and 5 mm s⁻¹). Silica layers were subsequently formed by dip-coating using an immersion time of 60 s and a lifting speed of 1 mm s⁻¹. PV measurements of the EA (5 wt.%)/water system were carried out using the VTMS hydrophobic silica membranes prepared on γ-alumina layers of different thicknesses. The water and EA fluxes remained substantially constant despite the change in the lifting speed used to form the γ-alumina interlayer. All of the prepared hydrophobic silica membranes showed a higher PSI than the hydrophobic membranes prepared on the previously used hollow-fiber alumina support.
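A sketch of how the PSI cited above is commonly evaluated, using one widespread definition, PSI = J(α − 1); the flux and composition values below are illustrative assumptions, not the study's measurements.

```python
# Sketch using one common definition of the pervaporation separation index:
# PSI = J * (alpha - 1), where J is the total permeate flux (kg m^-2 h^-1)
# and alpha is the separation factor computed from the organic weight
# fractions in the permeate (y) and the feed (x).
# All numbers below are illustrative, not the study's data.

def separation_factor(y_org, x_org):
    """alpha from permeate (y) and feed (x) organic weight fractions."""
    return (y_org / (1.0 - y_org)) / (x_org / (1.0 - x_org))

def psi(flux, alpha):
    return flux * (alpha - 1.0)

alpha = separation_factor(y_org=0.60, x_org=0.05)  # e.g. 5 wt.% EA feed
index = psi(flux=1.2, alpha=alpha)
```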

Keywords: membrane separation, pervaporation, hydrophobic, silica

Procedia PDF Downloads 383
94 Non-Newtonian Fluid Flow Simulation for a Vertical Plate and a Square Cylinder Pair

Authors: Anamika Paul, Sudipto Sarkar

Abstract:

The flow behaviour of non-Newtonian fluids is quite complicated, although both the pseudoplastic (n < 1, n being the power index) and dilatant (n > 1) fluids in this category are used immensely in the chemical and process industries. Limited research has been carried out on flow over a bluff body in a non-Newtonian flow environment. In the present numerical simulation, we control the vortices of a square cylinder by placing an upstream vertical splitter plate, for pseudoplastic (n = 0.8), Newtonian (n = 1) and dilatant (n = 1.2) fluids. The position of the upstream plate is also varied to calculate the critical distance between the plate and the cylinder, below which the cylinder's vortex shedding is suppressed. Here the Reynolds number is taken as Re = 150 (Re = U∞a/ν, where U∞ is the free-stream velocity of the flow, a is the side of the cylinder and ν is the maximum value of the kinematic viscosity of the fluid), which falls within the laminar periodic vortex shedding regime. The vertical plate has dimensions of 0.5a × 0.05a and is placed on the cylinder centre-line. Gambit 2.2.30 is used to construct the flow domain and to impose the boundary conditions. In detail, we imposed a velocity inlet (u = U∞), a pressure outlet (Neumann condition), and symmetry (free-slip) conditions at the upper and lower domain boundaries. No-slip wall boundary conditions (u = v = 0) are applied on both the cylinder and the splitter plate surfaces. The unsteady 2-D Navier-Stokes equations in fully conservative form are then discretized with second-order spatial and first-order temporal schemes. The discretized equations are solved with Ansys Fluent 14.5, implementing the SIMPLE algorithm in a finite volume formulation. Fine meshing is used around the plate and the cylinder; away from the cylinder, the grid is slowly stretched out in all directions.
To ensure mesh quality, a total of 297 × 208 grid points are used for G/a = 3 (G being the gap between the plate and the cylinder) in the streamwise and flow-normal directions, respectively, after a grid-independence study. The computed mean flow quantities for Newtonian flow agree well with the available literature. The results are presented with the help of instantaneous and time-averaged flow fields. Noteworthy qualitative and quantitative differences are obtained in the flow field as the rheology of the fluid changes. The aerodynamic forces and vortex shedding frequencies also differ with the gap ratio and the power index of the fluid. We conclude from the present simulation that Fluent is capable of capturing the vortex dynamics of the unsteady laminar flow regime even in a non-Newtonian flow environment.
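The power-law (Ostwald-de Waele) rheology behind the pseudoplastic, Newtonian and dilatant cases above can be sketched as follows; the consistency index K and the shear rate are assumed values for illustration.

```python
# Sketch of the power-law model underlying the three cases studied:
# apparent viscosity mu = K * gamma_dot**(n - 1), where K is the
# consistency index, n the power index, and gamma_dot the shear rate.
# K and the shear rate below are illustrative assumptions.

def apparent_viscosity(K, n, gamma_dot):
    return K * gamma_dot ** (n - 1.0)

shear = 10.0  # s^-1, illustrative
mu_pseudo = apparent_viscosity(K=0.5, n=0.8, gamma_dot=shear)  # shear-thinning
mu_newton = apparent_viscosity(K=0.5, n=1.0, gamma_dot=shear)  # constant
mu_dilat = apparent_viscosity(K=0.5, n=1.2, gamma_dot=shear)   # shear-thickening
```

At a given shear rate the pseudoplastic fluid is the least viscous and the dilatant fluid the most viscous, which is why the effective Reynolds number, and hence the wake dynamics, shifts with the power index.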

Keywords: CFD, critical gap-ratio, splitter plate, wake-wake interactions, dilatant, pseudoplastic

Procedia PDF Downloads 95
93 Study of Oxidative Stability, Cold Flow Properties and Iodine Value of Macauba Biodiesel Blends

Authors: Acacia A. Salomão, Willian L. Gomes da Silva, Gustavo G. Shimamoto, Matthieu Tubino

Abstract:

The physical and chemical properties of biodiesel depend on the composition of the raw material used in its synthesis. Saturated fatty acid esters confer high oxidative stability, while unsaturated fatty acid esters improve the cold flow properties. In this study, an alternative vegetal source - macauba kernel oil - was used in the biodiesel synthesis instead of conventional sources. Macauba can be collected from native palm trees and is found in several regions of Brazil. Its oil is a promising source when compared with several other oils commonly obtained from food products, such as soybean, corn or canola oil, due to its specific characteristics. However, the use of biodiesel made from macauba oil alone is not recommended because of the difficulty of producing macauba in large quantities. For this reason, this project proposes the use of blends of macauba oil with conventional oils. These blends were prepared by mixing the macauba biodiesel with biodiesels obtained from soybean, corn, and residual frying oil, in the following proportions: 20:80, 50:50 and 80:20 (w/w). Three parameters were evaluated using standard methods in order to check the quality of the produced biofuel and its blends: oxidative stability, cold filter plugging point (CFPP), and iodine value. The induction period (IP) expresses the oxidative stability of the biodiesel, the CFPP is the lowest temperature at which the biodiesel still flows through a filter without plugging the system, and the iodine value is a measure of the number of double bonds in a sample. The biodiesels obtained from soybean, residual frying oil and corn presented iodine values higher than 110 g/100 g, low oxidative stability and low CFPP. The IP values obtained for these biodiesels were lower than 8 h, which is below the recommended standard value, whereas their CFPP values were within the allowed limit (5 °C is the maximum).
Regarding the macauba biodiesel, a low iodine value was observed (31.6 g/100 g), which indicates a high content of saturated fatty acid esters. The presence of saturated fatty acid esters should imply high oxidative stability (which was found accordingly, with IP = 64 h) and a high CFPP, but curiously the latter was not observed (-3 °C). This behaviour can be explained by the size of the carbon chains, as 65% of this biodiesel is composed of short-chain saturated fatty acid esters (fewer than 14 carbons). The high oxidative stability and the low CFPP of macauba biodiesel are what make this biofuel a promising source. The soybean, corn and residual frying oil biodiesels also have low CFPP, but low oxidative stability. Therefore the blends proposed in this work, compared with the common biodiesels, maintain the flow properties while presenting enhanced oxidative stability.
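A first-order sketch of why blending shifts the measured properties: assuming a linear, mass-weighted mixing rule (a common approximation, not stated in the abstract), a blend's iodine value interpolates between the components'. The macauba value is taken from the abstract; the soybean value below is illustrative.

```python
# First-order sketch: a blend property estimated as the mass-weighted
# average of the component biodiesels (linear mixing rule, an assumption).
# iv_macauba is from the abstract (31.6 g/100 g); iv_soybean is an
# illustrative value consistent with the ">110 g/100 g" quoted there.

def blend_property(values, mass_fractions):
    assert abs(sum(mass_fractions) - 1.0) < 1e-9
    return sum(v * w for v, w in zip(values, mass_fractions))

iv_macauba, iv_soybean = 31.6, 120.0  # g I2 / 100 g
iv_80_20 = blend_property([iv_macauba, iv_soybean], [0.8, 0.2])
iv_50_50 = blend_property([iv_macauba, iv_soybean], [0.5, 0.5])
```

Under this assumption, richer macauba fractions lower the iodine value of the blend, consistent with the enhanced oxidative stability reported for the blends.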

Keywords: biodiesel, blends, macauba kernel oil, oxidative stability

Procedia PDF Downloads 507
92 Antiulcer Potential of Heme Oxygenase-1 Inducers

Authors: Gaweł Magdalena, Lipkowska Anna, Olbert Magdalena, Frąckiewicz Ewelina, Librowski Tadeusz, Nowak Gabriel, Pilc Andrzej

Abstract:

Heme oxygenase-1 (HO-1), also known as heat shock protein 32 (HSP32), has been shown to be implicated in cytoprotection in various organs. Its activation plays a significant role in acute and chronic inflammation, protecting cells from oxidative injury and apoptosis. This inducible isoform of HO catalyzes the first and rate-limiting step in heme degradation to produce equimolar quantities of biologically active products: carbon monoxide (CO), free iron and biliverdin. CO has been reported to possess anti-apoptotic properties. Moreover, it inhibits the production of proinflammatory cytokines and stimulates the synthesis of the anti-inflammatory interleukin-10 (IL-10), as well as promotes vasodilatation at sites of inflammation. The second product of catalytic HO-1 activity, free cytotoxic iron, is promptly sequestered into the iron storage protein ferritin, which lowers the pro-oxidant state of the cell. The third product, biliverdin, is subsequently converted by biliverdin reductase into the bile pigment bilirubin, the most potent endogenous antioxidant among the constituents of human serum, which modulates immune effector functions and suppresses inflammatory response. Furthermore, being one of the so-called stress proteins, HO-1 adaptively responds to different stressors, such as reactive oxygen species (ROS), inflammatory cytokines and heavy metals and thus protects cells against such conditions as ischemia, hemorrhagic shock, heat shock or hypoxia. It is suggested that pharmacologic modulation of HO-1 may represent an effective strategy for prevention of stress and drug-induced gastrointestinal toxicity. HO-1 is constitutively expressed in normal gastric, intestinal and colonic mucosa and up-regulated during inflammation. It has been proven that HO-1 up-regulated by hemin, heme and cobalt-protoporphyrin ameliorates experimental colitis. 
In addition, the up-regulation of HO-1 partially explains the mechanism of action of 5-aminosalicylic acid (5-ASA), which is used clinically as an anti-colitis agent. In 2009, Ueda et al. reported for the first time that the mucosal protection afforded by Polaprezinc, a chelate compound of zinc and L-carnosine used as an anti-ulcer drug in Japan, is also attributed to the induction of HO-1 in the stomach. Since then, inducers of HO-1 have been a sought-after subject of research, as they may constitute therapeutically effective anti-ulcer drugs.

Keywords: heme oxygenase-1, gastric lesions, gastroprotection, Polaprezinc

Procedia PDF Downloads 485
91 Optimal Delivery of Two Similar Products to N Ordered Customers

Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis

Abstract:

The vehicle routing problem (VRP) is a well-known problem in operations research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering products located at a central depot to customers who are scattered over a geographical area and have placed orders for these products. A vehicle, or a fleet of vehicles, starts from the depot and visits the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have a limited carrying capacity for the goods that must be delivered. In the present work, we present a specific capacitated stochastic vehicle routing problem with realistic applications to the distribution of materials to shops, healthcare facilities, or military units. A vehicle starts its route from a depot loaded with items of two similar but not identical products, which we name product 1 and product 2. The vehicle must deliver the products to N customers according to a predefined sequence: first customer 1 is serviced, then customer 2, then customer 3, and so on. The vehicle has a finite capacity, and after servicing all customers it returns to the depot. It is assumed that each customer prefers either product 1 or product 2 with known probabilities; the actual preference of each customer becomes known when the vehicle visits the customer. It is also assumed that the quantity each customer demands is a random variable with a known distribution; the actual demand is revealed upon the vehicle's arrival at the customer's site. The demand of each customer cannot exceed the vehicle capacity, and the vehicle is allowed during its route to return to the depot to restock with quantities of both products. The travel costs between consecutive customers and between the customers and the depot are known.
If there is a shortage of the desired product, it is permitted to deliver the other product at a reduced price. The objective is to find the optimal routing strategy, i.e., the routing strategy that minimizes the expected total cost among all possible strategies. The optimal routing strategy can be found using a suitable stochastic dynamic programming algorithm. It is also possible to prove that the optimal routing strategy has a specific threshold-type structure, i.e., it is characterized by critical numbers. This structural result enables us to construct an efficient special-purpose dynamic programming algorithm that operates only over routing strategies having this structure. The findings of the present study lead us to the conclusion that the dynamic programming method can be a very useful tool for the solution of specific vehicle routing problems. A problem for future research could be the study of a similar stochastic vehicle routing problem in which the vehicle collects products from the ordered customers instead of delivering them.
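A minimal single-product sketch of the kind of stochastic dynamic programme described above (the actual paper treats two products with stochastic preferences; all costs, capacities and demand distributions here are illustrative assumptions). After serving a customer, the vehicle either proceeds directly to the next one or returns to the depot to restock to full capacity, and a "route failure" (demand exceeding the remaining load) forces a round trip to the depot.

```python
# Single-product toy version of the ordered-customer restocking problem.
# State: (next customer j, current load q). All numbers are illustrative.
from functools import lru_cache

Q = 3                            # vehicle capacity
DEMANDS = {1: 0.5, 2: 0.5}       # P(demand) for every customer (assumed)
N = 4                            # customers, serviced in the fixed order 1..N
d_next = [2.0] * N               # d_next[0]: depot -> 1; d_next[j]: j -> j+1
d_depot = [3.0] * (N + 1)        # d_depot[j]: customer j <-> depot

@lru_cache(maxsize=None)
def serve(j, q):
    """Expected cost from arrival at customer j (1-based) with load q."""
    exp = 0.0
    for dem, p in DEMANDS.items():
        if dem <= q:
            exp += p * after(j, q - dem)
        else:  # route failure: deliver q, round trip to depot, deliver rest
            exp += p * (2 * d_depot[j] + after(j, Q - (dem - q)))
    return exp

def after(j, q):
    """Min expected cost after serving customer j, holding load q."""
    if j == N:                   # all customers served: return to the depot
        return d_depot[N]
    go_direct = d_next[j] + serve(j + 1, q)
    restock = d_depot[j] + d_depot[j + 1] + serve(j + 1, Q)
    return min(go_direct, restock)

total = d_next[0] + serve(1, Q)  # leave the depot fully loaded
```

With these symmetric costs the optimal decision is threshold-type: the vehicle proceeds directly when the remaining load is high (e.g. full) and restocks when it is low (e.g. empty), mirroring the critical-number structure proved in the paper.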

Keywords: collection of similar products, dynamic programming, stochastic demands, stochastic preferences, vehicle routing problem

Procedia PDF Downloads 245
90 Mature Field Rejuvenation Using Hydraulic Fracturing: A Case Study of Tight Mature Oilfield with Reveal Simulator

Authors: Amir Gharavi, Mohamed Hassan, Amjad Shah

Abstract:

The main characteristics of unconventional reservoirs include low to ultra-low permeability and low-to-moderate porosity. As a result, hydrocarbon production from these reservoirs requires different extraction technologies than conventional resources. An unconventional reservoir must be stimulated to produce hydrocarbons at an acceptable flow rate and to recover commercial quantities of hydrocarbons. The permeability of unconventional reservoirs is mostly below 0.1 mD; reservoirs with permeability above 0.1 mD are generally considered conventional. The hydrocarbon held in these formations will not flow naturally towards producing wells at economic rates without the aid of hydraulic fracturing, which is the key technique for accessing production from these tight reservoirs. A horizontal well with multi-stage fracturing is the key technique to maximize stimulated reservoir volume and achieve commercial production. The main objective of this research paper is to investigate development options for a tight mature oilfield, including multistage hydraulic fracturing and fracture spacing, by building reservoir models in the Reveal simulator to model potential development options based on sidetracking the existing vertical well. An existing Petrel geological model was used to build the static parts of these models. An FBHP limit of 40 bar was assumed to take into account pump operating limits and to maintain the reservoir pressure above the bubble point. Lateral lengths of 300 m, 600 m and 900 m were modelled, in conjunction with 4, 6 and 8 frac stages. Simulation results indicate that higher initial recoveries and peak oil rates are obtained with longer well lengths and with more fracs and wider spacing. For a 25-year forecast, the ultimate recovery ranges from 0.4% to 2.56% for the 300 m and 1000 m laterals, respectively.
The 900 m lateral with 8 fracs at 100 m spacing gave the highest peak rate of 120 m³/day, with the 600 m and 300 m cases giving initial peak rates of 110 m³/day. Similarly, the recovery factor for the 900 m lateral with 8 fracs and 100 m spacing was the highest, at 2.65% after 25 years; the corresponding values for the 300 m and 600 m laterals were 2.37% and 2.42%. The study therefore suggests that longer laterals with 8 fracs and 100 m spacing provide the optimal recovery, and this design is recommended as the basis for further study.

Keywords: unconventional, resource, hydraulic, fracturing

Procedia PDF Downloads 278
89 Evaluation of the Potential of Olive Pomace Compost for Using as a Soil Amendment

Authors: M. Černe, I. Palčić, D. Anđelini, D. Cvitan, N. Major, M. Lukić, S. Goreta Ban, D. Ban, T. Rijavec, A. Lapanje

Abstract:

Context: In the Mediterranean basin, large quantities of lignocellulosic by-products, such as olive pomace (OP), are generated during olive processing on an annual basis. Due to the phytotoxic nature of OP, composting is recommended for its stabilisation to produce an end-product safe for agricultural use. Research Aim: This study aims to evaluate the applicability of olive pomace compost (OPC) for use as a soil amendment by considering its physical and chemical characteristics and microbiological parameters. Methodology: The OPC samples were collected from the surface and depth layers of the compost pile after 8 months. The samples were analyzed for their C/N ratio, pH, EC, total phenolic content, residual oils, and elemental content, as well as colloidal properties and microbial community structure. The specific analytical approaches used are detailed in the poster. Findings: The results showed that the pH of OPC ranged from 7.8 to 8.6, while the electrical conductivity ranged from 770 to 1608 mS/cm. The levels of nitrogen (N), phosphorus (P), and potassium (K) varied within the ranges of 1.5 to 27.2 g/kg d.w., 1.6 to 1.8 g/kg d.w., and 6.5 to 7.5 g/kg d.w., respectively. The contents of potentially toxic metals such as chromium (Cr), copper (Cu), nickel (Ni), lead (Pb), and zinc (Zn) were below the EU limits for soil improvers. The microbial community structure follows a gradient from the outer to the innermost layer, with relatively low amounts of DNA; this gradient indicates the need to develop composting strategies that surpass the conventional approach. However, the low amounts of total phenols and oil residues indicated efficient biodegradation during composting. The carbon-to-nitrogen ratio (C/N), within the range of 13 to 16, suggested that OPC can be used as a soil amendment. Overall, the study suggests that composting can be a promising strategy for environmentally friendly OP recycling.
Theoretical Importance: This study contributes to the understanding of the use of OPC as a soil amendment and its potential benefits in resource recycling and in reducing environmental burdens. It also highlights the need for improved composting strategies to optimize the process. Data Collection and Analysis Procedures: The OPC samples were taken from the compost pile and characterised for selected chemical, physical and microbial parameters. The specific analytical procedures are described in detail in the poster. Question Addressed: This study addresses the question of whether composting can be optimized to improve the biodegradation of OP. Conclusion: The study concludes that OPC has the potential to be used as a soil amendment due to its favourable physical and chemical characteristics, low levels of potentially toxic metals, and efficient biodegradation during composting. However, the results also suggest the need for improved composting strategies to improve the quality of OPC.

Keywords: olive pomace compost, waste valorisation, agricultural use, soil amendment

Procedia PDF Downloads 38
88 Production of Bricks Using Mill Waste and Tyre Crumbs at a Low Temperature by Alkali-Activation

Authors: Zipeng Zhang, Yat C. Wong, Arul Arulrajah

Abstract:

Since automobiles became widely popular in the early 20th century, end-of-life tyres have been one of the major types of waste humans encounter. Every minute, considerable quantities of tyres are disposed of around the world, and most end-of-life tyres are simply landfilled or stockpiled rather than recycled. To address the potential issues caused by tyre waste, incorporating it into construction materials is one possibility. This research investigated the viability of manufacturing bricks from mill waste and tyre crumbs by alkali-activation at a relatively low temperature. The mill waste was extracted from a brick factory located in Melbourne, Australia, and the tyre crumbs were supplied by a local recycling company. As the main precursor, the mill waste was activated by an alkaline solution comprising sodium hydroxide (8 M) and sodium silicate (liquid). The addition ratio of alkaline solution (relative to the solid weight) and the weight ratio between sodium hydroxide and sodium silicate were fixed at 20 wt.% and 1:1, respectively. Tyre crumbs were introduced to substitute part of the mill waste at four ratios by weight, namely 0, 5, 10 and 15%. The mixture of mill waste and tyre crumbs was first dry-mixed for 2 min to ensure homogeneity, followed by 2.5 min of wet mixing after adding the solution. The mixture was subsequently press-moulded into blocks 109 mm in length, 112.5 mm in width and 76 mm in height. The blocks were cured at 50°C and 95% relative humidity for 2 days, followed by oven-curing at 110°C for 1 day. All the samples were then kept under ambient conditions until testing at the ages of 7 and 28 days. A series of tests was conducted to evaluate the linear shrinkage, compressive strength and water absorption of the samples. In addition, the microstructure of the samples was examined via scanning electron microscopy (SEM).
The results showed that the highest compressive strength was 17.6 MPa, found in the 28-day-old group using 5 wt.% tyre crumbs. This strength satisfies the requirement of ASTM C67. However, increasing the tyre crumb addition weakened the compressive strength of the samples. Apart from strength, the linear shrinkage and water absorption of all groups met the requirements of the standard. It is worth noting that the use of tyre crumbs tended to decrease shrinkage and even caused expansion when the tyre content reached 15 wt.%. The research also found a significant reduction in compressive strength for the samples after the water absorption tests. In conclusion, tyre crumbs have the potential to be used as a filler material in brick manufacturing, but more research is needed to tackle the durability problem.

Keywords: bricks, mill waste, tyre crumbs, waste recycling

Procedia PDF Downloads 101
87 Nuclear Materials and Nuclear Security in India: A Brief Overview

Authors: Debalina Ghoshal

Abstract:

Nuclear security is the ‘prevention and detection of, and response to, unauthorised removal, sabotage, unauthorised access, illegal transfer or other malicious acts involving nuclear or radiological material or their associated facilities.’ Ever since the end of the Cold War, the security of nuclear materials has remained a concern for global security, and with the increase in terrorist attacks, in India especially, it remains a priority. India has therefore made continued efforts to tighten security around its nuclear materials to prevent nuclear theft and radiological terrorism. Nuclear security is different from nuclear safety. Physical security is also a serious concern, and India has been careful about the physical security of its nuclear materials. This is all the more important since India is expanding its nuclear power capability to generate electricity for economic development. As India targets 60,000 MW of electricity production by 2030, it has a range of reactors to help it achieve this goal: indigenous Pressurised Heavy Water Reactors, now standardized at 700 MW per reactor; Light Water Reactors; and indigenous Fast Breeder Reactors that can generate more fuel for the future and enable the country to utilise its abundant thorium resource. Nuclear materials security can be enhanced in two important ways. One is through proliferation-resistant technologies and diplomatic efforts towards non-proliferation initiatives. The other is by developing technical means to prevent any leakage of nuclear materials into the hands of asymmetric organisations. New Delhi has already implemented IAEA Safeguards on its civilian nuclear installations. Moreover, India has ratified the IAEA Additional Protocol in order to enhance the transparency of its nuclear material and strengthen nuclear security.
India is a party to the IAEA conventions on nuclear safety and security, in particular the 1980 Convention on the Physical Protection of Nuclear Material and its 2005 amendment, and the 2006 Code of Conduct on the Safety and Security of Radioactive Sources, which enable the country to provide the highest international standards of nuclear and radiological safety and security. India's nuclear security approach is driven by five key components: governance, nuclear security practice and culture, institutions, technology and international cooperation. However, there is still scope for further improvement in nuclear materials and nuclear security. As the NTI Report notes, ‘India’s improvement reflects its first contribution to the IAEA Nuclear Security Fund etc. In the future, India’s nuclear materials security conditions could be further improved by strengthening its laws and regulations for security and control of materials, particularly for control and accounting of materials, mitigating the insider threat, and for the physical security of materials during transport. India’s nuclear materials security conditions also remain adversely affected due to its continued increase in its quantities of nuclear material, and high levels of corruption among public officials.’ This paper briefly studies the progress made by India in nuclear and nuclear materials security and the steps ahead for India to strengthen it further.

Keywords: India, nuclear security, nuclear materials, non proliferation

Procedia PDF Downloads 325
86 Through Additive Manufacturing. A New Perspective for the Mass Production of Made in Italy Products

Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola

Abstract:

Recent evolutions in innovation processes and in the intrinsic tendencies of the product development process lead to new considerations on the design flow. The instability and complexity of contemporary life define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of Additive Manufacturing, but also of IoT and AI technologies, continuously confronts us with new paradigms regarding design as a social activity. From the point of view of application, these technologies raise a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of a provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. This contribution investigates the man-algorithm relationship starting from the state of the art of the Made in Italy model; the best-known fields of application are described, with a focus on specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could absorb many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values.
The trajectory described thus becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact bears a signature that defines it in all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The envisaged result indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as a valid integrated tool in close relationship with the design culture.

Keywords: decision making, design heuristics, product design, product design process, design paradigms

Procedia PDF Downloads 91
85 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste

Authors: Rajeev Ravindran, Amit K. Jaiswal

Abstract:

Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Its continuous accumulation throughout the year in large quantities creates a major environmental problem worldwide. The chemical composition of these wastes (up to 75% of the composition is contributed by polysaccharides) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose and enzymes. In order to use lignocellulose as the raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which leads to the release of reducing sugars such as glucose and xylose. However, the inherent properties of lignocellulose, such as the presence of lignin, pectin and acetyl groups and the crystallinity of cellulose, contribute to recalcitrance. This leads to poor sugar yields upon enzymatic hydrolysis of lignocellulose. A pre-treatment method is therefore generally applied before enzymatic treatment, essentially removing the recalcitrant components of biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for the maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical and physico-chemical pre-treatments, followed by a sequential, combinatorial pre-treatment strategy that combines two or more pre-treatments to attain maximum sugar yield. All pre-treated samples were analysed for total reducing sugar, followed by identification and quantification of individual sugars by HPLC coupled with an RI detector. In addition, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was also monitored.
Results showed that ultrasound treatment (31.06 mg/L) proved to be the best pre-treatment method in terms of total reducing sugar content, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide in the pre-treated SPW. Finally, the results obtained from the study were used to design a sequential lignocellulose pre-treatment protocol to decrease the formation of enzyme inhibitors and increase the sugar yield on enzymatic hydrolysis with a cellulase-hemicellulase consortium. The sequential, combinatorial treatment was found to be better in terms of total reducing sugar yield and formed fewer inhibitory compounds, which could be because this mode of pre-treatment combines several mild treatment methods rather than a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.

Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound

Procedia PDF Downloads 344
84 Co-pyrolysis of Sludge and Kaolin/Zeolite to Stabilize Heavy Metals

Authors: Qian Li, Zhaoping Zhong

Abstract:

Sewage sludge, a typical solid waste, is inevitably produced in enormous quantities in China. Worse still, the amount of sewage sludge produced has been increasing due to rapid economic development and urbanization. Compared to conventional methods of treating sewage sludge, pyrolysis has been considered an economical and ecological technology because it can significantly reduce the sludge volume, completely kill pathogens, and produce valuable solid, gas, and liquid products. However, the large-scale utilization of sludge biochar has been limited by the considerable risk posed by heavy metals in the sludge. Heavy metals enriched in pyrolytic biochar can be divided into exchangeable, reducible, oxidizable, and residual forms; the residual form is the most stable and cannot be taken up by organisms. Kaolin and zeolite are environmentally friendly inorganic minerals with high surface area and heat resistance, and thus exhibit enormous potential to immobilize heavy metals. In order to reduce the risk of leaching of heavy metals from the pyrolysis biochar, this study pyrolyzed sewage sludge mixed with kaolin/zeolite in a small rotary kiln. The influences of the additives and the pyrolysis temperature on the leaching concentration and morphological transformation of heavy metals in the pyrolysis biochar were investigated. The potential mechanism of stabilizing heavy metals in the co-pyrolysis of sludge blended with kaolin/zeolite was explored by scanning electron microscopy, X-ray diffraction, and specific surface area and porosity analysis. The European Community Bureau of Reference (BCR) sequential extraction procedure was applied to analyze the forms of heavy metals in the sludge and pyrolysis biochar. All heavy metal concentrations were determined by flame atomic absorption spectrophotometry.
Compared with the proportions of heavy metals associated with the F4 fraction in pyrolytic carbon prepared without additional agents, those in carbon obtained by co-pyrolysis of sludge and kaolin/zeolite increased. Increasing the additive dosage could improve the proportions of the stable fraction of various heavy metals in biochar. Kaolin exhibited a better effect on stabilizing heavy metals than zeolite. Aluminosilicate additives with excellent adsorption performance could capture more released heavy metals during sludge pyrolysis. Then heavy metal ions would react with the oxygen ions of additives to form silicate and aluminate, causing the conversion of heavy metals from unstable fractions (sulfate, chloride, etc.) to stable fractions (silicate, aluminate, etc.). This study reveals that the efficiency of stabilizing heavy metals depends on the formation of stable mineral compounds containing heavy metals in pyrolysis biochar.

Keywords: co-pyrolysis, heavy metals, immobilization mechanism, sewage sludge

Procedia PDF Downloads 45
83 Food for Health: Understanding the Importance of Food Safety in the Context of Food Security

Authors: Carmen J. Savelli, Romy Conzade

Abstract:

Background: Access to sufficient amounts of safe and nutritious food is a basic human necessity, required to sustain life and promote good health. Food safety and food security are therefore inextricably linked, yet the importance of food safety in this relationship is often overlooked. Methodologies: A literature review and desk study were conducted to examine existing frameworks for discussing food security, especially from an international perspective, and to determine entry points for enhancing considerations of food safety in national and international policies. Major Findings: Food security is commonly understood as the state in which all people at all times have physical, social and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life. Conceptually, food security is built upon four pillars: food availability, access, utilization and stability. Within this framework, the safety of food is often wrongly assumed to be a given. However, in places where food supplies are insufficient, coping mechanisms for food insecurity focus primarily on access to food without consideration for ensuring its safety. Under such conditions, hygiene and nutrition are often ignored as people shift to less nutritious diets and consume more potentially unsafe foods, in which chemical, microbiological, zoonotic and other hazards can pose serious, acute and chronic health risks. Even if food supplies are safe and nutritious, if they are consumed in quantities insufficient to support normal growth, health and activity, the result is hunger and famine; recent estimates indicate that at least 842 million people, or roughly one in eight, still suffer from chronic hunger. And even if people eat enough food that is safe, they will become malnourished if the food does not provide the proper amounts of micronutrients and/or macronutrients to meet daily nutritional requirements, resulting in under- or over-nutrition.
Two billion people suffer from one or more micronutrient deficiencies, and over half a billion adults are obese. Access to sufficient amounts of nutritious food is therefore not enough. If food is unsafe, whether due to poor-quality supplies or inadequate treatment and preparation, it increases the risk of foodborne infections such as diarrhoea; 70% of the diarrhoea episodes occurring annually in children under five are due to biologically contaminated food. Conclusions: An integrated approach is needed in which food safety and nutrition are systematically introduced into mainstream food system policies and interventions worldwide in order to achieve health and development goals. A new framework, “Food for Health”, is proposed to guide policy development; it requires all three aspects of food security to be addressed in balance: sufficiency, nutrition and safety.

Keywords: food safety, food security, nutrition, policy

Procedia PDF Downloads 391
82 Wildland Fire in Terai Arc Landscape of Lesser Himalayas Threatning the Tiger Habitat

Authors: Amit Kumar Verma

Abstract:

The present study deals with a fire prediction model for the Terai Arc Landscape (TAL), one of the most dramatic ecosystems in Asia, where large, wide-ranging species such as the tiger, rhino and elephant can thrive while bringing economic benefits to the local people. Forest fires cause huge economic and ecological losses, release considerable quantities of carbon into the air, and are an important factor inflating the global burden of carbon emissions. Forest fire is also an important factor in the behaviour and ecology of the tiger in the wild: post-fire changes in micro- and macro-habitat directly affect tiger habitat. Vulnerability to fire is reflected in changes in the microhabitat (humus, soil profile, litter, vegetation, grassland ecosystem). Organisms such as spiders, annelids and other arthropods, together with favourable microorganisms, are directly affected by forest fire, and indirectly all of these organisms contribute to the development of tiger (Panthera tigris) habitat. On the other hand, fire brings depletion of prey species and movement of tigers out of the wild into human-dominated areas, which may lead to conflict that is dangerous for both tigers and human beings. Early forest fire prediction, through mapping of risk zones, can help minimize fire frequency and manage forest fires, thereby minimizing losses. Satellite data play a vital role in identifying and mapping forest fires and recording the frequency with which different vegetation types are affected. Thematic hazard maps were generated using the IDW technique, and a prediction model for fire occurrence was developed for TAL. Fire occurrence records were collected from the state forest department for 2000 to 2014. A discriminant function model was used to develop the prediction model: random points for the non-occurrence of fire were generated, and based on the attributes of the points of occurrence and non-occurrence, the model predicts fire occurrence.
The map of predicted probabilities classified the study area into five classes, very high (12.94%), high (23.63%), moderate (25.87%), low (27.46%) and no fire (10.1%), based upon the intensity of the hazard. The model is able to classify 78.73% of the fire-occurrence points correctly and hence can be used for the purpose with confidence; overall, the model classifies almost 69% of all points correctly. This study exemplifies the usefulness of a forest fire prediction model and offers a more effective way to manage forest fires. Overall, the study presents a model for the conservation of the tiger's natural habitat and for forest conservation, which will benefit both wildlife and human beings in the future.
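The discriminant-function classification used above can be illustrated with a toy Fisher linear discriminant on made-up attribute vectors (dryness index, distance to settlement, slope); all values below are hypothetical and are not the study's data:

```python
import numpy as np

# Toy training data: each row = (dryness index, distance to settlement in km, slope in deg);
# class 1 = fire-occurrence point, class 0 = random non-occurrence point (all invented).
fire    = np.array([[0.90, 1.2, 20.0], [0.80, 0.8, 25.0],
                    [0.85, 1.0, 30.0], [0.95, 1.5, 22.0]])
no_fire = np.array([[0.30, 4.0,  5.0], [0.20, 5.5,  8.0],
                    [0.40, 4.8, 10.0], [0.25, 6.0,  6.0]])

mu1, mu0 = fire.mean(axis=0), no_fire.mean(axis=0)
# pooled within-class covariance, lightly regularised for numerical stability
S = np.cov(fire.T) + np.cov(no_fire.T) + 1e-6 * np.eye(3)
w = np.linalg.solve(S, mu1 - mu0)        # Fisher discriminant direction
threshold = w @ (mu1 + mu0) / 2          # midpoint decision rule on the projection

def predict(x):
    """Return 1 for predicted fire occurrence, 0 for no fire."""
    return int(w @ x > threshold)

# resubstitution accuracy on the toy set
pts = np.vstack([fire, no_fire])
labels = [1] * 4 + [0] * 4
acc = sum(predict(p) == y for p, y in zip(pts, labels)) / len(labels)
```

In the study itself the discriminant scores would be converted to occurrence probabilities and interpolated (IDW) into the five hazard classes; this sketch only shows the classification step.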

Keywords: fire prediction model, forest fire hazard, GIS, landsat, MODIS, TAL

Procedia PDF Downloads 330
81 The Use of Punctuation by Primary School Students Writing Texts Collaboratively: A Franco-Brazilian Comparative Study

Authors: Cristina Felipeto, Catherine Bore, Eduardo Calil

Abstract:

This work aims to analyze and compare the punctuation marks (PMs) in school texts of Brazilian and French students and the comments on these PMs made spontaneously by the students while the text was in progress. Assuming textual genetics as an investigative field within a dialogical and enunciative approach, we defined a common methodological design in two first-year classrooms (7-year-olds) of primary school, one in Brazil (Maceió) and the other in France (Paris). Through a multimodal capture system for writing processes in real time and space (Ramos System), we recorded the collaborative writing proposals in dyads in each of the classrooms. This system preserves the classroom's ecological characteristics and provides a video recording synchronized with the dialogues, gestures and facial expressions of the students, the stroke of the pen's ink on the sheet of paper, and the movement of the teacher and students in the classroom. The multimodal record of the writing process gave access to the text in progress and to the comments made by the students on what was being written. In each proposed text production, the teachers organized their students in dyads and asked them to talk, agree and write a fictional narrative. We selected one dyad of Brazilian students (BD) and one dyad of French students (FD), and filmed 6 proposals for each dyad. The proposals were collected during the 2nd term of 2013 (Brazil) and 2014 (France). In the 6 texts written by the BD, we identified 39 PMs in 825 written words (on average, one PM every 21 words); of these 39 PMs, 27 were highlighted orally and commented on by one of the students. In the texts written by the FD, we identified 48 PMs in 258 written words (on average, one PM every 5 words); of these 48 PMs, 39 were commented on by the French students. Unlike what studies on punctuation acquisition point out, the PMs that occurred most were hyphens (BD) and commas (FD).
Despite the significant difference between the types and quantities of PMs in the written texts, the recognition of the need to write PMs in the text in progress and the comments on them share some common characteristics: i) the writing of the PMs was not anticipated in relation to the text in progress; rather, they were added after the end of a sentence or after the finished text itself; ii) the need to add punctuation marks arose after one of the students ‘remembered’ that a particular sign was needed; iii) most of the PMs inscribed were related not to their linguistic functions but to graphic-visual features of the text; iv) the comments justify or explain the PMs, indicating metalinguistic reflections made by the students. Our results indicate how the comments of the BD and FD express the dialogic and subjective nature of knowledge acquisition. Our study suggests that the initial learning of PMs depends more on their graphic features and on interactional conditions than on their linguistic functions.

Keywords: collaborative writing, erasure, graphic marks, learning, metalinguistic awareness, textual genesis

Procedia PDF Downloads 141
80 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design

Authors: Mohammad Bagher Anvari, Arman Shojaei

Abstract:

Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to their underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for the launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical usage.
In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.
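The paper's FE model itself is not reproduced here, but the underlying idea of re-solving a beam model as the support configuration changes from stage to stage can be illustrated with a generic textbook Euler-Bernoulli beam-element sketch in Python (function names, parameters, and the uniform-load case are illustrative assumptions, not the authors' formulation):

```python
def solve(A, b):
    """Dense Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def launch_stage_deflections(EI, span, n_el, q, supports):
    """Euler-Bernoulli beam (Hermite cubic elements) under a uniform load q.
    'supports' is the set of constrained DOFs for the current launching stage
    (2*node for deflection, 2*node + 1 for rotation); changing it between
    calls mimics the shifting support configuration during launching."""
    le = span / n_el
    ndof = 2 * (n_el + 1)
    K = [[0.0] * ndof for _ in range(ndof)]
    F = [0.0] * ndof
    c = EI / le**3
    ke = [[ 12 * c,       6 * c * le,    -12 * c,       6 * c * le],
          [ 6 * c * le,   4 * c * le**2, -6 * c * le,   2 * c * le**2],
          [-12 * c,      -6 * c * le,     12 * c,      -6 * c * le],
          [ 6 * c * le,   2 * c * le**2, -6 * c * le,   4 * c * le**2]]
    # Consistent (work-equivalent) nodal loads for a uniform load on one element.
    fe = [q * le / 2, q * le**2 / 12, q * le / 2, -q * le**2 / 12]
    for e in range(n_el):
        dofs = [2 * e, 2 * e + 1, 2 * e + 2, 2 * e + 3]
        for a in range(4):
            F[dofs[a]] += fe[a]
            for b in range(4):
                K[dofs[a]][dofs[b]] += ke[a][b]
    free = [d for d in range(ndof) if d not in supports]
    u_free = solve([[K[i][j] for j in free] for i in free], [F[i] for i in free])
    u = [0.0] * ndof
    for d, val in zip(free, u_free):
        u[d] = val
    return u
```

For a fully cantilevered stage (nose tip unsupported), a unit-length, unit-EI beam under unit load returns the classical tip deflection qL^4/(8EI) = 0.125 at the last deflection DOF; the nodal values are exact because Hermite elements with consistent loads are nodally exact for this problem.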

Keywords: incremental launching, bridge construction, finite element model, optimization

Procedia PDF Downloads 70
79 Deasphalting of Crude Oil by Extraction Method

Authors: A. N. Kurbanova, G. K. Sugurbekova, N. K. Akhmetov

Abstract:

Asphaltenes are the heavy fraction of crude oil. On oilfields, asphaltenes are known for their ability to plug wells, surface equipment, and the pores of geologic formations. The present research is devoted to the deasphalting of crude oil as the initial stage of oil refining. Solvent deasphalting was conducted by extraction with organic solvents (cyclohexane, carbon tetrachloride, chloroform). The metal content was analyzed by ICP-MS, and the spectral features of deasphalting were characterized by FTIR. A high content of asphaltenes in crude oil reduces the efficiency of refining processes. Moreover, the high content of heteroatoms (e.g., S, N) in asphaltenes causes further problems: environmental pollution, corrosion, and poisoning of the catalyst. The main objective of this work is to study the effect of the deasphalting process on crude oil in order to improve its properties and increase the efficiency of downstream processing. Solvent extraction experiments with organic solvents were carried out on crude oil from JSC "Pavlodar Oil Chemistry Refinery". Experimental results show that the deasphalting process also decreases the Ni and V content of the oil. One approach to removing metals, hydrogen sulfide, and mercaptans from oils is absorption with chemical reagents directly in the oil residue, since asphaltic and resinous substances degrade the operational properties of oils and reduce the effectiveness of selective refining. Deasphalting of crude oil is necessary to separate the light fraction from the heavy, metal-bearing asphaltenic part of the crude. For this purpose, the oil is pretreated by deasphalting, because asphaltenes tend to form coke or consume large quantities of hydrogen. Removing asphaltenes leads to partial demetallization, i.e., removal of V/Ni and organic compounds with heteroatoms along with the asphaltenes. Intramolecular complexes are relatively well studied using the example of the porphyrin complexes of vanadyl (VO2+) and nickel (Ni).
As a result of ICP-MS studies of V/Ni, the effect of different deasphalting solvents on metal extraction at the deasphalting stage was determined and the best organic solvent was selected. Cyclohexane (C6H12) proved to be the best solvent for producing deasphalted oil (DAO): according to ICP-MS, it removes 51.2% of V and 66.4% of Ni. This paper also presents the results of a study of the physical and chemical properties and FTIR spectral characteristics of the oil with a view to establishing its hydrocarbon composition. The information about the whole oil obtained by IR spectroscopy provides provisional physical and chemical characteristics. These can be useful in considering questions of the origin and geochemical conditions of oil accumulation, as well as some technological challenges. The systematic analysis carried out in this study improves our understanding of the stability mechanism of asphaltenes. The role of deasphalted crude oil fractions in asphaltene stability is described.

Keywords: asphaltenes, deasphalting, extraction, vanadium, nickel, metalloporphyrins, ICP-MS, IR spectroscopy

Procedia PDF Downloads 220
78 Fabrication of Aluminum Nitride Thick Layers by Modified Reactive Plasma Spraying

Authors: Cécile Dufloux, Klaus Böttcher, Heike Oppermann, Jürgen Wollweber

Abstract:

Hexagonal aluminum nitride (AlN) is a promising candidate for several wide band gap semiconductor applications such as deep-UV light emitting diodes (UVC LEDs) and fast power transistors (HEMTs). To date, bulk AlN single crystals are still commonly grown by physical vapor transport (PVT). Single-crystalline AlN wafers obtained from this process could offer suitable substrates for defect-free growth of the ultimately active AlGaN layers; however, these wafers still suffer from small sizes, limited delivery quantities, and high prices. Although there is already increasing interest in the commercial availability of AlN wafers, comparatively cheap Si, SiC, or sapphire is still predominantly used as substrate material for the deposition of active AlGaN layers. Nevertheless, due to a lattice mismatch of up to 20%, the obtained material shows high defect densities and is therefore less suitable for high-power devices as described above. The use of AlN with specially adapted properties for optical and sensor applications could therefore be promising for mass-market products with less demanding requirements. To respond to the demand for suitable AlN target material for the growth of AlGaN layers, we have designed an innovative technology based on reactive plasma spraying. The goal is to produce coarse-grained AlN boules with an N-terminated columnar structure and high purity. In this process, aluminum is injected into a microwave-stimulated nitrogen plasma. AlN, the product of the reaction between aluminum powder and the plasma-activated N2, is deposited onto the target. We used an aluminum filament as the starting material to minimize oxygen contamination during the process. The material was guided through the nitrogen plasma at a mass turnover of 10 g/h. To avoid impurity contamination through erosion of the electrodes, an electrode-less discharge was used for plasma ignition.
The pressure was maintained at 600-700 mbar, so the plasma reached a temperature high enough to vaporize the aluminum, which subsequently reacted with the surrounding plasma. The obtained products consist of thick polycrystalline AlN layers with a diameter of 2-3 cm. The crystallinity was determined by X-ray crystallography. The grain structure was systematically investigated by optical and scanning electron microscopy. Furthermore, we performed Raman spectroscopy to provide evidence of stress in the layers. This paper will discuss the effects of process parameters such as microwave power and deposition geometry (specimen holder, radiation shields, ...) on the topography, crystallinity, and stress distribution of AlN.

Keywords: aluminum nitride, polycrystal, reactive plasma spraying, semiconductor

Procedia PDF Downloads 263
77 Effects of Drying and Extraction Techniques on the Profile of Volatile Compounds in Banana Pseudostem

Authors: Pantea Salehizadeh, Martin P. Bucknall, Robert Driscoll, Jayashree Arcot, George Srzednicki

Abstract:

Banana is one of the most important crops produced in large quantities in tropical and sub-tropical countries. Of the total plant material grown, approximately 40% is considered waste and left in the field to decay. This practice allows fungal diseases such as Sigatoka leaf spot to develop, limiting plant growth and spreading spores in the air that can cause respiratory problems in the surrounding population. The pseudostem is considered a waste residue of production (60 to 80 tonnes/ha/year), although it is a good source of dietary fiber and volatile organic compounds (VOCs). Strategies to process banana pseudostem into palatable, nutritious, and marketable food materials could provide significant social and economic benefits. Extraction of VOCs with desirable odor from dried and fresh pseudostem could improve the smell of products from the confectionery and bakery industries. Incorporation of banana pseudostem flour into bakery products could provide cost savings and improve nutritional value. The aim of this study was to determine the effects of drying methods and banana species on the profile of volatile aroma compounds in dried banana pseudostem. The banana species analyzed were Musa acuminata and Musa balbisiana. Fresh banana pseudostem samples were processed by either freeze-drying (FD) or heat pump drying (HPD). The extraction of VOCs was performed at ambient temperature using vacuum distillation, and the resulting, mostly aqueous, distillates were analyzed using headspace solid phase microextraction (SPME) gas chromatography-mass spectrometry (GC-MS). Optimal SPME adsorption conditions were 50 °C for 60 min using a Supelco 65 μm PDMS/DVB StableFlex fiber. Compounds were identified by comparison of their electron impact mass spectra with those from the Wiley 9 / NIST 2011 combined mass spectral library. The results showed that the two species have notably different VOC profiles.
Both species contained VOCs that are established in the literature as having pleasant, appetizing aromas. These included L-menthone, D-limonene, trans-linalool oxide, 1-nonanol, cis-6-nonen-1-ol, 2,6-nonadien-1-ol, 4-methylbenzenemethanol, 3-methyl-1-butanol, hexanal, 2-methyl-1-propanol, and 2-methyl-2-butanol. The results show that banana pseudostem VOCs are better preserved by FD than by HPD. This study is still in progress and should lead to the optimization of processing techniques that would promote the utilization of banana pseudostem in the food industry.

Keywords: heat pump drying, freeze drying, SPME, vacuum distillation, VOC analysis

Procedia PDF Downloads 298
76 Genetic Structuring of Four Tectona grandis L. F. Seed Production Areas in Southern India

Authors: P. M. Sreekanth

Abstract:

Teak (Tectona grandis L. f.) is a tree species indigenous to India and other Southeast Asian countries. It produces high-value timber and is easily established in plantations. Reforestation requires a constant supply of high-quality seeds. Seed Production Areas (SPAs) of teak are improved stands used for the collection of open-pollinated quality seeds in large quantities. Information on the genetic diversity of major teak SPAs in India is scanty. The genetic structure of four important seed production areas of Kerala State in Southern India was analyzed employing amplified fragment length polymorphism (AFLP) markers using ten selective primer combinations on 80 samples (4 populations x 20 trees). The study revealed that the gene diversity of the SPAs varied from 0.169 (Konni SPA) to 0.203 (Wayanad SPA). The percentage of polymorphic loci ranged from 74.42 (Parambikulam SPA) to 84.06 (Konni SPA). The mean total gene diversity index (HT) of all four SPAs was 0.2296 ± 0.02. A high proportion of genetic diversity was observed within populations (83%), while diversity between populations was lower (17%) (GST = 0.17). Principal coordinate analysis and STRUCTURE analysis of the genotypes indicated that the pattern of clustering was in accordance with the origin and geographic location of the SPAs, indicating the specific identity of each population. A UPGMA dendrogram showed that all twenty samples from each of the Konni and Parambikulam SPAs clustered into two separate groups, respectively. However, five Nilambur genotypes and one Wayanad genotype intruded into the Konni cluster. The high gene flow estimate (Nm = 2.4) reflects the inclusion of Konni-origin planting stock in the Nilambur and Wayanad plantations. Evidence for population structure investigated using 3D principal coordinate analysis in FAMD software 1.30 indicated that the pattern of clustering was in accordance with the origin of the SPAs.
The present study showed that the assessment of genetic diversity in seed production plantations can be achieved using AFLP markers. AFLP fingerprinting was also capable of identifying the geographical origin of planting stock, thereby revealing errors in genotype labeling. Molecular marker-based selective culling of genetically similar trees from a stand, so as to increase the genetic base of seed production areas, could be a new proposition to improve the quality of seeds required for raising commercial teak plantations. The technique can also be used to assess the genetic diversity status of plus trees within provenances during their selection for raising clonal seed orchards, thus assuring the quality of seeds available for future plantations.
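The diversity statistics quoted above fit together arithmetically: Nei's coefficient of gene differentiation is GST = (HT - HS)/HT, and the gene flow estimate commonly applied to dominant markers, Nm = 0.5(1 - GST)/GST (used, e.g., by POPGENE), reproduces the reported Nm of about 2.4 from GST = 0.17. A small illustrative sketch (not the authors' code; the HS value is inferred from the 83% within-population share):

```python
def gst(h_t, h_s):
    """Nei's coefficient of gene differentiation:
    the among-population share of total gene diversity."""
    return (h_t - h_s) / h_t

def gene_flow(g_st):
    """Gene flow estimate Nm = 0.5 * (1 - Gst) / Gst,
    a common approximation for dominant markers such as AFLP."""
    return 0.5 * (1.0 - g_st) / g_st
```

With HT = 0.2296 and 83% of diversity within populations (HS of roughly 0.1906), gst() returns 0.17, and gene_flow(0.17) returns about 2.44, consistent with the reported Nm = 2.4.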

Keywords: AFLP, genetic structure, SPA, teak

Procedia PDF Downloads 291
75 Analytical Solutions of Josephson Junctions Dynamics in a Resonant Cavity for Extended Dicke Model

Authors: S. I. Mukhin, S. Seidov, A. Mukherjee

Abstract:

The Dicke model is a key tool for the description of correlated states of quantum atomic systems that are excited by resonant photon absorption and subsequently emit spontaneous coherent radiation in the superradiant state. The Dicke Hamiltonian (DH) is successfully used for the description of the dynamics of a Josephson junction (JJ) array in a resonant cavity under applied current. In this work, we have investigated a generalized model described by the DH with a frustrating interaction term, namely an infinitely coordinated interaction between all the spin-1/2 degrees of freedom in the system. We consider an array of N superconducting islands, each divided into two sub-islands by a Josephson junction, operated in the charge qubit / Cooper pair box (CPB) regime. The array is placed inside a resonant cavity. One important aspect of the problem lies in the dynamical nature of the physical observables involved, such as the condensate electric field and the dipole moment; it is important to understand how these quantities behave with time in order to identify the quantum phase of the system. The Dicke model without the frustrating term is solved to find the dynamical solutions of the physical observables in analytic form. Using Heisenberg's equations of motion for the operators together with a newly developed rotating Holstein-Primakoff (HP) transformation of the DH, we arrive at four coupled nonlinear dynamical differential equations for the momentum and spin-component operators. The system can be solved analytically using two time scales. The analytical solutions are expressed in terms of Jacobi elliptic functions for the metastable 'bound luminosity' dynamic state, with periodic coherent beating of the dipoles that connects the two doubly degenerate dipolar-ordered phases discovered previously. In this work, we proceed with the analysis of the extended DH with a frustrating interaction term.
Inclusion of the frustrating term complicates the system of differential equations, which becomes difficult to solve analytically. We have therefore solved the semi-classical dynamic equations using perturbation theory for small values of the Josephson energy EJ. Because the Hamiltonian possesses parity symmetry, a phase transition can be found when this symmetry is broken. Introducing a spontaneous symmetry-breaking term into the DH, we derive solutions that show the occurrence of a finite condensate, indicating a quantum phase transition. Our results agree with existing results in the field.
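The Jacobi elliptic functions in which the 'bound luminosity' solutions are expressed can be evaluated numerically without special-function libraries by integrating their defining ODE system sn' = cn*dn, cn' = -sn*dn, dn' = -m*sn*cn from (0, 1, 1). A minimal RK4 sketch, purely illustrative and not the authors' method:

```python
import math

def jacobi_sn_cn_dn(u, m, steps=1000):
    """Evaluate the Jacobi elliptic functions sn(u|m), cn(u|m), dn(u|m)
    by RK4 integration of sn' = cn*dn, cn' = -sn*dn, dn' = -m*sn*cn
    from the initial values (sn, cn, dn) = (0, 1, 1)."""
    h = u / steps

    def f(y):
        s, c, d = y
        return (c * d, -s * d, -m * s * c)

    y = (0.0, 1.0, 1.0)
    for _ in range(steps):
        k1 = f(y)
        k2 = f(tuple(y[i] + 0.5 * h * k1[i] for i in range(3)))
        k3 = f(tuple(y[i] + 0.5 * h * k2[i] for i in range(3)))
        k4 = f(tuple(y[i] + h * k3[i] for i in range(3)))
        y = tuple(y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
    return y
```

For m = 0 the functions degenerate to sin, cos, and 1, which provides a quick sanity check; the identities sn^2 + cn^2 = 1 and dn^2 + m*sn^2 = 1 hold for any m.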

Keywords: Dicke Model, nonlinear dynamics, perturbation theory, superconductivity

Procedia PDF Downloads 108
74 Quantifying the Effects of Canopy Cover and Cover Crop Species on Water Use Partitioning in Micro-Sprinkler Irrigated Orchards in South Africa

Authors: Zanele Ntshidi, Sebinasi Dzikiti, Dominic Mazvimavi

Abstract:

South Africa is a dry country, and yet it is ranked as the 8th largest exporter of fresh apples (Malus domestica) globally. The prime apple-producing regions are in the Eastern and Western Cape Provinces of the country, where all the fruit is grown under irrigation. Climate change models predict increasingly drier future conditions in these regions, and the frequency and severity of droughts are expected to increase. For the sustainability and growth of the fruit industry, it is important to minimize non-beneficial water losses from the orchard floor. The aims of this study were, firstly, to compare the water use of cover crop species used in South African orchards, for which there is currently no information. The second aim was to investigate how orchard water use (evapotranspiration) is partitioned into beneficial (tree transpiration) and non-beneficial (orchard floor evaporation) water uses in micro-sprinkler irrigated orchards with different canopy covers. This information is important in order to explore opportunities to minimize non-beneficial water losses. Six cover crop species (four exotic and two indigenous) were grown in 2 L pots in a greenhouse. Cover crop transpiration was measured using the gravimetric method on clear days. To establish how water use was partitioned in orchards, evapotranspiration (ET) was measured using an open-path eddy covariance system, while tree transpiration was measured hourly throughout the season (October to June) on six trees per orchard using the heat ratio sap flow method. On selected clear days, soil evaporation was measured hourly from sunrise to sunset using six micro-lysimeters situated at different wet/dry and sun/shade positions on the orchard floor. Transpiration of cover crops was measured using miniature (2 mm Ø) stem heat balance sap flow gauges. The greenhouse study showed that exotic cover crops had significantly higher (p < 0.01) average transpiration rates (~3.7 L/m²/d) than the indigenous species (~2.2 L/m²/d).
In young non-bearing orchards, orchard floor evaporative fluxes accounted for more than 60% of orchard ET, while this ranged from 10 to 30% in mature orchards with a high canopy cover. While exotic cover crops are preferred by most farmers, this study shows that they use larger quantities of water than indigenous species, which in turn contributes to a larger orchard floor evaporation flux. In young orchards, non-beneficial losses can be minimized by adopting drip or short-range micro-sprinkler methods that reduce the wetted soil fraction, thereby conserving water.
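The partitioning reported above is a simple water balance: orchard-floor evaporation is the residual ET - T, and the beneficial fraction is T/ET. A trivial illustrative helper (the numbers in the usage note are hypothetical, not the study's data):

```python
def partition_et(et_mm, transpiration_mm):
    """Split measured orchard evapotranspiration (ET) into the beneficial
    component (tree transpiration) and the non-beneficial residual
    (orchard-floor evaporation). Units: mm of water per day."""
    floor_evap = et_mm - transpiration_mm
    return {
        "transpiration_frac": transpiration_mm / et_mm,
        "floor_evap_frac": floor_evap / et_mm,
        "floor_evap_mm": floor_evap,
    }
```

For a hypothetical young orchard measuring ET = 5.0 mm/d with tree transpiration of 1.8 mm/d, the floor share works out to 64%, consistent with the more-than-60% figure reported for young non-bearing orchards.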

Keywords: evapotranspiration, sap flow, soil evaporation, transpiration

Procedia PDF Downloads 367
73 Effects of Different Fungicide In-Crop Treatments on Plant Health Status of Sunflower (Helianthus annuus L.)

Authors: F. Pal-Fam, S. Keszthelyi

Abstract:

The phytosanitary condition of sunflower (Helianthus annuus L.) is endangered by several phytopathogenic agents, mainly microfungi such as Sclerotinia sclerotiorum, Diaporthe helianthi, Plasmopara halstedii, and Macrophomina phaseolina. Several agrotechnical and chemical control measures exist against them, for instance tolerant hybrids, crop rotation, and in-crop chemical treatments. Different fungicide treatment methods are used on sunflower in Hungarian agricultural practice in the quest to obtain healthy and economical plant products, and many active ingredients are available for sunflower protection in Hungary. This study examined the effect of five different fungicide active substances found on the market and three different application modes (early; late; and early plus late treatments) in a total of nine sample plots of 0.1 ha each. Five successive vegetation periods, from 2013 to 2017, were investigated. The treatments were: 1) untreated control; 2) boscalid and dimoxystrobin, late treatment (July); 3) boscalid and dimoxystrobin, early treatment (June); 4) picoxystrobin and cyproconazole, early treatment; 5) picoxystrobin, cymoxanil, and famoxadone, early treatment; 6) picoxystrobin and cyproconazole early; cymoxanil and famoxadone late; 7) picoxystrobin and cyproconazole early; picoxystrobin, cymoxanil, and famoxadone late; 8) trifloxystrobin and cyproconazole, early treatment; and 9) trifloxystrobin and cyproconazole, both early and late treatments. Due to very different yearly weather conditions, different phytopathogenic fungi were dominant in particular years: Diaporthe and Alternaria in 2013; Alternaria and Sclerotinia in 2014 and 2015; Alternaria, Sclerotinia, and Diaporthe in 2016; and Alternaria in 2017.
As a result of the treatments, infection frequency and infestation rate showed a significant decrease compared to the control plot. There were no significant differences between the efficacies of the different fungicide mixes; all were almost equally effective against the phytopathogenic fungi. The most dangerous infection, Sclerotinia, was practically eliminated in all of the treatments. Among the single treatments, the late treatment applied in July was the least efficient, followed by the early treatments applied in June. The most efficient were the double treatments applied in both June and July, resulting in a 70-80% decrease in infection frequency and a 75-90% decrease in infestation rate compared with the control plot in the particular years. The lowest yield was observed in the control plot, followed by the late single treatment. The yield of the early single treatments was higher, while the double treatments showed the highest yields (18.3-22.5% higher than the control plot in particular years). In total, according to our five-year investigation, the most effective application mode is the double in-crop treatment per vegetation period, which is reflected in the yield surplus.

Keywords: fungicides, treatments, phytopathogens, sunflower

Procedia PDF Downloads 116
72 Mathematical Modeling of Nonlinear Process of Assimilation

Authors: Temur Chilachava

Abstract:

In this work, a new nonlinear mathematical model is proposed that describes the assimilation of a people (population) with a less widespread language by two states with two different widespread languages, taking the demographic factor into account. Three subjects are considered in the model: the population and government institutions with the widespread first language, influencing the third population with the less widespread language by means of state and administrative resources for the purpose of its assimilation; the population and government institutions with the widespread second language, acting likewise; and the third population (possibly a small state formation or an autonomy), exposed to bilateral assimilation by the two rather powerful states. We showed earlier that, in the case of a zero demographic factor for all three subjects, the population with the less widespread language is completely assimilated by the states with the two widespread languages, and the result of the assimilation (the redistribution of the assimilated population) depends on the initial populations and on the technological and economic capabilities of the assimilating states. In the model considered here, which takes the demographic factor into account, a natural decrease in the populations of the assimilating states and a natural increase in the population undergoing bilateral assimilation are assumed. At certain ratios between the coefficients of natural population change of the assimilating states and the assimilation coefficients, two first integrals are obtained for the nonlinear system of three differential equations. Cases of two powerful states assimilating the population of a small state formation (autonomy) with different population sizes, both with identical and with different economic and technological capabilities, are considered.
It is shown that in the first case the problem is actually reduced to a nonlinear system of two differential equations describing the classical predator-prey model; naturally, the role of prey is played by the population undergoing assimilation, and the role of predator by the population of one of the assimilating states. In the first case, the population of the second assimilating state changes in proportion to the population of the first assimilator (the coefficient of proportionality equals the ratio of the assimilators' populations at the initial time). In the second case, the problem is reduced to a nonlinear system of two differential equations of predator-prey type, with closed integral curves on the phase plane. In both cases there is no full assimilation of the population with the less widespread language. Intervals of change of the population sizes of all three subjects of the model are found. The considered mathematical models, which to some approximation can model real situations with real assimilating countries and state formations (autonomies or formations with unrecognized status) undergoing bilateral assimilation, show that their only possibility to avoid assimilation is natural demographic growth of their own population and the hope of a natural decrease in the populations of the assimilating states.
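The two key structural claims, reduction to a predator-prey system and closed integral curves (a conserved first integral) on the phase plane, can be checked numerically on the classical Lotka-Volterra analogue. The sketch below (coefficients are illustrative, not taken from the paper) integrates the system with RK4 and verifies that the first integral stays constant along the orbit:

```python
import math

def lotka_volterra(x0, y0, alpha, beta, gamma, delta, t_end, steps=10000):
    """RK4 integration of the predator-prey system
    x' = alpha*x - beta*x*y   (assimilated population, the 'prey')
    y' = delta*x*y - gamma*y  (assimilating population, the 'predator')."""
    h = t_end / steps

    def f(x, y):
        return (alpha * x - beta * x * y, delta * x * y - gamma * y)

    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
        k3 = f(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
        k4 = f(x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

def first_integral(x, y, alpha, beta, gamma, delta):
    """Conserved quantity whose level sets are the closed orbits
    in the (x, y) phase plane."""
    return delta * x - gamma * math.log(x) + beta * y - alpha * math.log(y)
```

Because the first integral is conserved, each trajectory stays on a closed curve and neither population is driven to zero, which is exactly the no-full-assimilation behavior described above.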

Keywords: nonlinear mathematical model, bilateral assimilation, demographic factor, first integrals, result of assimilation, intervals of change of number of the population

Procedia PDF Downloads 443