Search results for: stochastic approximation gradient
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1620

210 Application of Shore Protective Structures in Optimum Land Using of Defense Sites Located in Coastal Cities

Authors: Mir Ahmad Lashteh Neshaei, Hamed Afsoos Biria, Ata Ghabraei, Mir Abdolhamid Mehrdad

Abstract:

Awareness of effective land-use issues in coastal areas, including the protection of natural ecosystems and the coastal environment, is of great importance as human settlement along the coast increases. Numerous valuable structures and heritage assets are located in defence sites and waterfront areas. Marine structures such as groins, sea walls and detached breakwaters are constructed along the coast to improve coastal stability against bed erosion under changing wave and climate patterns. Marine mechanisms and their interaction with shore protection structures need to be studied intensively. Groins are among the most prominent shore protection structures, creating a safe environment for the coastal area by defending the land against progressive coastal erosion. The main structural function of a groin is to control the longshore current and littoral sediment transport. A groin can be submerged and still provide the necessary beach protection without negative environmental impact; however, the shoreline response to submerged structures adopted for beach protection is not well understood at present. Nowadays, modelling and computer simulation are used to assess beach morphology in the vicinity of marine structures in order to reduce their environmental impact. The objective of this study is to predict beach morphology in the vicinity of submerged groins and to compare it with that of non-submerged groins, with focus on a stretch of coast located at Dahane sar Sefidrood, Guilan province, Iran, where serious coastal erosion has occurred recently. The simulations were obtained using a one-line model, which can serve as a first approximation of shoreline prediction in the vicinity of groins. The results of the proposed model are compared with field measurements to determine the shape of the coast. Finally, the results of the present study show that submerged groins can control beach erosion efficiently without causing severe environmental impact on the coast. These outcomes can be employed in the optimum design of defence sites in coastal cities to improve their efficiency in terms of re-using heritage lands.
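The one-line approach mentioned above treats shoreline change as a diffusion process in the shoreline position y(x, t), i.e. ∂y/∂t = ε ∂²y/∂x², with the groin imposing a no-transport boundary. The sketch below is a minimal explicit finite-difference illustration of that idea; the diffusivity, grid spacing and boundary treatment are illustrative assumptions, not the values or calibrated model used in the study.

```python
# Minimal explicit sketch of the one-line (Pelnard-Considere) shoreline model:
# dy/dt = eps * d2y/dx2, zero-flux boundary at the groin, fixed far field.
# All parameter values are illustrative assumptions.

def one_line_step(y, eps, dx, dt):
    """Advance shoreline positions y (metres seaward) by one explicit step."""
    n = len(y)
    y_new = y[:]
    for i in range(1, n - 1):
        y_new[i] = y[i] + eps * dt / dx ** 2 * (y[i + 1] - 2 * y[i] + y[i - 1])
    y_new[0] = y_new[1]    # no longshore transport across the groin
    return y_new           # y_new[-1] untouched: fixed far-field shoreline

# accretion bump next to the groin, diffusing downdrift over time
y = [2.0, 2.0] + [0.0] * 8
eps, dx, dt = 0.05, 50.0, 3600.0  # m^2/s, 50 m cells, 1 h steps (r = 0.072, stable)
for _ in range(200):
    y = one_line_step(y, eps, dx, dt)
```

The explicit update is stable only while eps·dt/dx² ≤ 0.5, which the chosen values respect.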

Keywords: submerged structures, groin, shore protective structures, coastal cities

Procedia PDF Downloads 307
209 Geo-Spatial Distribution of Radio Refractivity and the Influence of Fade Depth on Microwave Propagation Signals over Nigeria

Authors: Olalekan Lawrence Ojo

Abstract:

Designing terrestrial microwave propagation networks requires a thorough evaluation of the severity of multipath fading, especially at frequencies below 10 GHz. In nations like Nigeria, without databases large enough to support the existing empirical models, the errors in the prediction techniques used for this evaluation may be severe. The need for higher bandwidth for various satellite applications makes the investigation of the effects of radio refractivity, multipath fading, and geoclimatic factors on satellite propagation links all the more important, and clear-air effects are among the key elements to consider for the optimal operation of microwave frequencies. This work considers the geographical distribution of radio refractivity and fade depth over a number of stations in Nigeria. Five-year (2017–2021) measurements of atmospheric pressure, relative humidity, and temperature at two levels (ground surface and 100 m height) from five locations in Nigeria (Akure, Enugu, Jos, Minna, and Sokoto) are studied to deduce their effects on signals propagated through microwave communication links. The assessment considers microwave communication systems, the impacts of the dry and wet components of radio refractivity, and the effects of fade depth at various frequencies over a 20 km link distance. The results demonstrate that the dry term dominated the radio refractivity at the surface level, contributing a minimum of about 78% and a maximum of about 92%, and likewise at 100 m height, with a minimum of about 79% and a maximum of about 92%. The spatial distribution reveals that, regardless of height, the greatest values of radio refractivity occurred in the country's tropical rainforest (TRF) and freshwater swampy mangrove (FWSM) regions. The statistical estimate shows that fading values can differ by as much as 1.5 dB, especially near the TRF and FWSM coastlines, even under clear-air conditions. The current findings will be helpful for budgeting Earth-space microwave links, particularly for the rollout of Nigeria's projected 5G and 6G microcellular networks.
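The dry/wet split quoted above follows from the standard two-term expression for radio refractivity, N = 77.6 P/T + 3.732×10⁵ e/T² (N-units), in the form given in ITU-R P.453. The sketch below computes the two terms from pressure, temperature and relative humidity; the saturation vapour pressure fit and the sample conditions are assumptions for illustration, not the measured Nigerian station data.

```python
import math

def refractivity_terms(p_hpa, t_kelvin, rh_pct):
    """Dry and wet terms of radio refractivity N (N-units), after the
    two-term ITU-R P.453 form: N = 77.6*P/T + 3.732e5*e/T**2."""
    t_c = t_kelvin - 273.15
    # saturation vapour pressure over water (hPa), Buck-type fit (assumed)
    e_s = 6.1121 * math.exp(17.502 * t_c / (t_c + 240.97))
    e = rh_pct / 100.0 * e_s            # partial water-vapour pressure (hPa)
    n_dry = 77.6 * p_hpa / t_kelvin
    n_wet = 3.732e5 * e / t_kelvin ** 2
    return n_dry, n_wet

# illustrative warm, humid surface conditions (not the measured data)
n_dry, n_wet = refractivity_terms(1013.0, 300.0, 80.0)
dry_share = n_dry / (n_dry + n_wet)
```

Even under these very humid conditions the dry term contributes the larger share of N, consistent with the dominance reported above.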

Keywords: fade depth, geoclimatic factor, refractivity, refractivity gradient

Procedia PDF Downloads 65
208 Assessment of Metal Dynamics in Dissolved and Particulate Phase in Human Impacted Hooghly River Estuary, India

Authors: Soumita Mitra, Santosh Kumar Sarkar

Abstract:

Hooghly river estuary (HRE), situated at the north-eastern part of the Bay of Bengal, has global significance due to its holiness. It is of immense importance to the local population settled on both banks, providing a perpetual water supply for activities such as transportation, fishing, boating, and bathing. This study assessed dissolved and particulate trace metals in the estuary over a stretch of about 175 km. Water samples were collected from the surface (0-5 cm) along the salinity gradient, and metal concentrations were measured in both the dissolved and particulate phases using Graphite Furnace Atomic Absorption Spectrophotometry (GF-AAS), along with physical characteristics such as water temperature, salinity, pH, turbidity, and total dissolved solids. Although significant spatial variation was noticed, only slight enrichment was found towards the downstream end of the estuary. The mean concentrations of the metals in the dissolved and particulate phases followed the same trend: Fe>Mn>Cr>Zn>Cu>Ni>Pb. The concentrations in the particulate phase were much greater than in the dissolved phase, as also reflected in the partition coefficient Kd (ml mg⁻¹), whose values ranged from 1.5×10⁵ (for Pb) to 4.29×10⁶ (for Cr). The high Kd for Cr indicates that Cr is mostly bound to the suspended particulate matter, while the low value for Pb signifies its presence mainly in the dissolved phase. Moreover, the concentrations of all the studied metals in the dissolved phase were many folds higher than their respective permissible limits set by WHO (2008, 2009, and 2011). On the other hand, according to Sediment Quality Guidelines (SQGs), Zn, Cu, and Ni in the particulate phase lay between the ERL and ERM values, but Cr exceeded the ERM value at all stations, confirming that the estuary is contaminated mostly with particulate Cr, which might cause frequent adverse effects on aquatic life. Multivariate cluster analysis was also performed, which separated the stations according to their level of contamination from several point and non-point sources. Thus, the estuarine system is heavily polluted by toxic metals, and further toxicological investigation should be undertaken for a full risk assessment, better management, and restoration of the water quality of this globally significant aquatic system.
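The partition coefficient quoted above is simply the ratio of the particulate-bound concentration to the dissolved concentration, Kd = Cp/Cd. The concentrations in the sketch below are hypothetical, chosen only so that the resulting Kd values reproduce the extremes reported in the abstract (1.5×10⁵ for Pb, 4.29×10⁶ for Cr).

```python
def partition_coefficient(c_particulate, c_dissolved):
    """Kd = Cp / Cd: metal bound to suspended particulate matter relative
    to the dissolved phase (units follow from those of Cp and Cd)."""
    return c_particulate / c_dissolved

# hypothetical concentration pairs, chosen only to reproduce the reported
# extreme Kd values; a large Kd means the metal rides on the particulates
kd_cr = partition_coefficient(4.29e3, 1e-3)   # Cr: mostly particle-bound
kd_pb = partition_coefficient(1.50e2, 1e-3)   # Pb: relatively more dissolved
```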

Keywords: dissolved and particulate phase, Hooghly river estuary, partition coefficient, surface water, toxic metals

Procedia PDF Downloads 265
207 Simulation of Ester Based Mud Performance through Drilling Genting Timur Field

Authors: Lina Ismail Jassim, Robiah Yunus

Abstract:

To drill an oil or gas well successfully, an efficient drilling fluid must, among numerous other tasks, fulfil two main functions: suspending and carrying cuttings from the bottom of the wellbore to the surface, and managing the balance between pore (formation) pressure and hydrostatic (mud) pressure. Several factors, such as the mud composition and its rheology, the wellbore design, the characteristics of the drilled cuttings, and the drill-string rotation, contribute to drilling a wellbore successfully. A simulation model can give an appropriate indication of drilling fluid performance in a real field such as the Genting Timur field, located in Pahang, Malaysia, at 4295 m depth, which held the world record in Sempah Muda 1 (Vertical). A detailed three-dimensional CFD analysis of vertical, concentric annular two-phase flow was developed to study and assess a Herschel-Bulkley drilling fluid. The effects of hematite, barite, and calcium carbonate, and of the type and size of the cut rock particles, on such flow are analyzed. Vertical flows are also associated with considerable temperature variation along the depth, which causes a substantial change in the viscosity of the fluid, which is non-Newtonian in nature. A good understanding of the nature of such flows is imperative in developing and maintaining successful vertical well systems. A detailed analysis of the flow characteristics due to drill pipe rotation is carried out in this work. The inner cylinder of the annulus is given different rotational speeds, depending upon the operating conditions. This speed induces a strong swirl on the particles and the primary fluid, which governs the well-cleaning ability of the ester-based drilling fluid and, in turn, determines the energy loss along the pipe. Energy loss is assessed in this work in terms of wall shear stress and pressure drop along the pipe. The flow is under an adverse pressure gradient condition, which raises the chance of reversed flow, and transfers the rock cuttings to the surface.
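The Herschel-Bulkley rheology named above is τ = τ₀ + K·γ̇ⁿ, a yield stress plus a power-law term, so the apparent viscosity μ = τ/γ̇ falls as the shear rate rises when n < 1. A minimal sketch with illustrative parameter values (not those of the ester-based mud studied):

```python
def hb_stress(gamma_dot, tau0, k, n):
    """Herschel-Bulkley shear stress: tau = tau0 + K * gamma_dot**n (Pa)."""
    return tau0 + k * gamma_dot ** n

def apparent_viscosity(gamma_dot, tau0, k, n):
    """Apparent viscosity mu = tau / gamma_dot (Pa.s)."""
    return hb_stress(gamma_dot, tau0, k, n) / gamma_dot

# illustrative yield-stress, shear-thinning parameters (n < 1)
tau0, k, n = 5.0, 0.8, 0.6
mu_low = apparent_viscosity(1.0, tau0, k, n)    # near the pipe centre
mu_high = apparent_viscosity(10.0, tau0, k, n)  # near the rotating wall
```

The shear-thinning behaviour is what makes annular velocity and drill-pipe rotation so influential on hole cleaning: higher shear near the rotating inner cylinder lowers the local apparent viscosity.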

Keywords: concentric annulus, non-Newtonian, two phase, Herschel Bulkley

Procedia PDF Downloads 297
206 Discovering Event Outliers for Drug as Commercial Products

Authors: Arunas Burinskas, Aurelija Burinskiene

Abstract:

On average, ten percent of drugs (commercial products) are not available in pharmacies due to shortage. A shortage event unbalances sales and requires a recovery period, which is too long. A critical issue, therefore, is that pharmacies do not record potential sales transactions during shortage and recovery periods. The authors suggest estimating outliers during shortage and recovery periods. To shorten the recovery period, the authors suggest predicting average sales per sales day, which helps to protect the data from being biased downwards or upwards. The authors use an outlier visualization method across different drugs and apply Grubbs' test for significance evaluation. The researched sample is 100 drugs over a one-month time frame. The authors detected that products with high demand variability had outliers. Among the analyzed drugs, which are commercial products: i) drugs with high demand variability have a one-week shortage period, and the probability of facing a shortage is 69.23%; ii) drugs with mid demand variability have a three-day shortage period, and the likelihood of falling into deficit is 34.62%. To avoid shortage events and minimize the recovery period, real data must be set up. Even though there are some outlier-detection methods for cleaning drug data, they have not been used to minimize the recovery period once a shortage has occurred. The authors use Grubbs' test, a real-life data-cleaning method, for outlier adjustment. In the paper, the outlier-adjustment method is applied with a confidence level of 99%. In practice, Grubbs' test has been used to detect outliers for cancer drugs, with positive results reported. Grubbs' test detects outliers which exceed the boundaries of a normal distribution; the result is a probability that indicates the core data of actual sales. The test represents the difference between the sample mean and the most extreme data point in units of the standard deviation, and detects one outlier at a time, with different probabilities, from a data set with an assumed normal distribution. Based on approximation data, the authors constructed a framework for scaling potential sales and estimating outliers with Grubbs' test. The suggested framework is applicable during shortage events and recovery periods, has practical value, and could be used to minimize the recovery period required after a shortage event occurs.
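The Grubbs statistic described above is G = max|xᵢ − x̄|/s, compared against a critical value tabulated for the sample size N and significance level α. A minimal sketch; the sales series and the threshold g_crit are illustrative (take the real critical value from published Grubbs tables for your N and α):

```python
import statistics

def grubbs_statistic(data):
    """G = max|x_i - mean| / s for the single most extreme point;
    returns (G, index of that point)."""
    mean = statistics.fmean(data)
    s = statistics.stdev(data)
    devs = [abs(x - mean) for x in data]
    i = max(range(len(data)), key=devs.__getitem__)
    return devs[i] / s, i

def grubbs_outlier(data, g_crit):
    """Index of the most extreme point if G exceeds g_crit, else None.
    g_crit must come from published Grubbs tables for your N and alpha."""
    g, i = grubbs_statistic(data)
    return i if g > g_crit else None

# illustrative daily sales series; day index 7 is a shortage dip
daily_sales = [12, 14, 13, 15, 12, 14, 13, 2, 14, 13]
idx = grubbs_outlier(daily_sales, g_crit=2.29)  # assumed threshold for n = 10
```

In the paper's framework the flagged point would then be replaced by the average-sales-per-sales-day estimate rather than simply dropped.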

Keywords: drugs, Grubbs' test, outlier, shortage event

Procedia PDF Downloads 124
205 Further Development of Offshore Floating Solar and Its Design Requirements

Authors: Madjid Karimirad

Abstract:

Floating solar was not well-known in the renewable energy field a decade ago; however, there has been tremendous growth internationally, with a compound annual growth rate (CAGR) of nearly 30% in recent years. To reach the goal of global net-zero emissions by 2050, all renewable energy sources, including solar, should be used. Considering that 40% of the world's population lives within 100 kilometres of a coast, floating solar in coastal waters is an obvious energy solution. However, this requires more robust floating solar technology. This paper seeks to clarify the fundamental requirements in the design of floating solar for offshore installations from the hydrodynamic and offshore engineering points of view. In this regard, a closer look at the dynamic characteristics, stochastic behaviour and nonlinear phenomena appearing in this kind of structure is a major focus of the current article. Floating solar structures are attractive alternative green energy installations, with (a) less strain on land usage for densely populated areas; (b) a natural cooling effect with an efficiency gain; and (c) increased irradiance from the reflectivity of water. Floating solar in conjunction with hydroelectric plants can also optimise energy efficiency and improve system reliability, and co-locating floating solar units with other types such as offshore wind, wave energy and tidal turbines, as well as aquaculture (fish farming), can result in better use of ocean space and increased synergies. Floating solar technology has seen considerable development in installed capacity in the past decade, and floating solar is projected to account for 17% of all PV energy produced worldwide by 2030. Development of design standards and codes of practice for floating solar technologies deployed both on inland water bodies and offshore is required to ensure robust and reliable systems that do not have detrimental impacts on the hosting water body. To enhance this development, further research in this area is needed. This paper discusses the main critical design aspects in light of the loads and load effects that floating solar platforms are subjected to. The key considerations in hydrodynamics, aerodynamics and the simultaneous effects of wind and wave load actions are discussed. The link between dynamic nonlinear loading, limit states and the design space, considering the environmental conditions, is set out to enable a better understanding of the design requirements of this fast-evolving technology.

Keywords: floating solar, offshore renewable energy, wind and wave loading, design space

Procedia PDF Downloads 60
204 Physiological Responses of Dominant Grassland Species to Different Grazing Intensity in Inner Mongolia, China

Authors: Min Liu, Jirui Gong, Qinpu Luo, Lili Yang, Bo Yang, Zihe Zhang, Yan Pan, Zhanwei Zhai

Abstract:

Grazing disturbance is one of the important land-use types that affect plant growth and ecosystem processes. In order to study the responses of dominant species to grazing in the semiarid temperate grassland of Inner Mongolia, we set up five grazing-intensity plots, a control and four levels of grazing (light (LG), moderate (MG), heavy (HG) and extremely heavy grazing (EHG)), to test the morphological and physiological responses of Stipa grandis and Leymus chinensis at the individual level. With increasing grazing intensity, both Stipa grandis and Leymus chinensis exhibited reduced plant height, leaf area, stem length and aboveground biomass, showing significant dwarfing, especially in the HG and EHG plots. Photosynthetic capacity decreased along the grazing gradient; in the MG plot in particular, the two dominant species had the lowest net photosynthetic rate (Pn) and water-use efficiency (WUE). However, in the HG and EHG plots, the two species had a high light saturation point (LSP) and a low light compensation point (LCP), indicating high light-use efficiency; compared with the grasses in the MG plot, they showed a stimulation of compensatory photosynthesis in the remnant leaves. For Leymus chinensis, the lipid peroxidation level did not increase, as shown by the low malondialdehyde (MDA) content even in the EHG plot; this may be due to the high activity of the enzymes superoxide dismutase (SOD) and peroxidase (POD), which reduce the damage caused by reactive oxygen species. Meanwhile, more carbohydrate was stored in the leaves of Leymus chinensis to provide energy for regrowth. On the contrary, Stipa grandis showed a high level of lipid peroxidation, especially in the HG and EHG plots, with decreased antioxidant enzyme activity. The soluble protein content did not change significantly between plots. Therefore, with increasing grazing intensity, the plants changed their morphological and physiological traits to defend themselves effectively against herbivores. Leymus chinensis is more resistant to grazing than Stipa grandis in terms of tolerance traits, particularly under heavy grazing pressure.

Keywords: antioxidant enzymes activity, grazing density, morphological responses, photosynthesis

Procedia PDF Downloads 352
203 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies

Authors: Paolino Di Felice

Abstract:

The background and significance of this study come from papers already published in the literature which measured the impact of public services (e.g., hospitals, schools, ...) on citizens' needs satisfaction (one of the dimensions of QOL studies) by calculating the distance between the place where citizens live and the location of the services on the territory. Those studies assume that a citizen's dwelling coincides with the centroid of the polygon that represents the boundary of the administrative district, within the city, they belong to. Such an assumption "introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district." The case study reported in this abstract investigates the implications of adopting such an approach at geographical scales greater than the urban one, namely at the three levels of nesting of the Italian administrative units: the (20) regions, the (110) provinces, and the (8,094) municipalities. To carry out this study, it had to be decided: a) how to store the huge amount of (spatial and descriptive) input data, and b) how to process it. The latter aspect involves: b.1) designing algorithms to investigate the geometry of the boundaries of the Italian administrative units; b.2) coding them in a programming language; b.3) executing them; and, eventually, b.4) archiving the results on permanent storage. The IT solution we implemented is centred around a (PostgreSQL/PostGIS) geo database structured in terms of three tables that fit the hierarchy of nesting of the Italian administrative units: municipality(id, name, provinceId, istatCode, regionId, geometry); province(id, name, regionId, geometry); region(id, name, geometry). The adoption of DBMS technology allows us to implement steps "a)" and "b)" easily. In particular, step "b)" is simplified dramatically by calling spatial operators and spatial built-in user-defined functions within SQL queries against the geo database. The major findings from our experiments can be summarized as follows. The approximation that, on average, descends from assimilating the residence of the citizens to the centroid of the administrative unit of reference is of a few kilometres (4.9) at the municipality level, while it becomes conspicuous at the other two levels (28.9 and 36.1 km, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but no further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries all over the world.
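The maximum measurement error discussed above is the greatest distance from a district's centroid to its border. In the study this is computed with PostGIS spatial operators inside SQL; the plain-Python sketch below illustrates the same geometric step on a toy polygon, using the shoelace centroid and the fact that distance to a fixed point is convex along each edge, so the border maximum is attained at a vertex.

```python
def centroid(poly):
    """Area centroid of a simple polygon [(x, y), ...] via the shoelace formula."""
    a = cx = cy = 0.0
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6.0 * a), cy / (6.0 * a)

def max_centroid_distance(poly):
    """Greatest centroid-to-border distance; attained at a vertex because
    point-to-point distance is convex along each boundary edge."""
    gx, gy = centroid(poly)
    return max(((x - gx) ** 2 + (y - gy) ** 2) ** 0.5 for x, y in poly)

district = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]  # toy 10x10 km
err = max_centroid_distance(district)   # worst-case dwelling-to-centroid error
```

For the toy 10×10 km square district the worst-case error is √50 ≈ 7.1 km, the half-diagonal.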

Keywords: quality of life, distance measurement error, Italian administrative units, spatial database

Procedia PDF Downloads 363
202 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel

Authors: Hamed Kalhori, Lin Ye

Abstract:

In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel, remote from the impact locations, were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function, and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended here to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are exerted simultaneously at all potential locations, but that the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from impact at each potential location. The problem can be categorized as under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations). The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are independently chosen to regularize the problem, to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross-Validation (GCV) methods. In addition, the effect of different signal window widths on the reconstructed force is examined. It is observed that the impact force generated by the instrumented hammer is sensitive to the impact location on the structure, its shape ranging from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match the actual forces well in terms of magnitude and duration.
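The core numerical step described above is deconvolution of y = A f (with A the convolution matrix built from an impulse response) stabilised by Tikhonov regularization, i.e. minimising ||A f − y||² + λ||f||². The sketch below is a small, noiseless illustration with an assumed impulse response and a half-sine-like force, not the panel's measured transfer function.

```python
def matvec(a, x):
    """Dense matrix-vector product for lists of lists."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in a]

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def tikhonov_deconvolve(a, y, lam):
    """Minimise ||A f - y||^2 + lam*||f||^2 via (A^T A + lam I) f = A^T y."""
    n = len(y)
    ata = [[sum(a[k][i] * a[k][j] for k in range(n)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    aty = [sum(a[k][i] * y[k] for k in range(n)) for i in range(n)]
    return solve(ata, aty)

h = [0.5, 1.0, 0.3]                       # assumed impulse response
f_true = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0]   # half-sine-like impact force
A = [[h[i - j] if 0 <= i - j < len(h) else 0.0 for j in range(len(f_true))]
     for i in range(len(f_true))]          # lower-triangular convolution matrix
y = matvec(A, f_true)                      # simulated sensor response
f_rec = tikhonov_deconvolve(A, y, lam=1e-8)
```

With noisy measured signals, λ must be chosen non-trivially, which is exactly what the L-curve and GCV procedures mentioned above address.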

Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction

Procedia PDF Downloads 525
201 Comparison of Finite Difference Schemes for Numerical Study of Ripa Model

Authors: Sidrah Ahmed

Abstract:

River and lake flows are modelled mathematically by the shallow water equations, which are depth-averaged Reynolds-averaged Navier-Stokes equations under the Boussinesq approximation. Temperature stratification dynamics influence the water quality and mixing characteristics, mainly through atmospheric conditions including air temperature, wind velocity, and radiative forcing. Experimental observations are commonly taken along vertical scales and are not sufficient to estimate the small turbulence effects that temperature variations induce in shallow flows. Wind shear stress over the water surface influences flow patterns, heat fluxes and the thermodynamics of water bodies as well. Hence it is crucial to couple temperature gradients with the shallow water model to estimate the atmospheric effects on flow patterns. The Ripa system was introduced to study ocean currents as a variant of the shallow water equations with the addition of temperature variations within the flow. The Ripa model is a hyperbolic system of partial differential equations because all the eigenvalues of the system's Jacobian matrix are real and distinct, and the time steps of a numerical scheme are estimated from these eigenvalues. The solution to the Riemann problem of the Ripa model is composed of shocks, contact discontinuities and rarefaction waves. Solving the Ripa model with Riemann initial data using central schemes is difficult due to the eigenstructure of the system. This work presents a comparison of four different finite difference schemes for the numerical solution of the Riemann problem for the Ripa model: the Lax-Friedrichs scheme, the Lax-Wendroff scheme, the MacCormack scheme, and a higher-order finite difference scheme with the WENO method. The numerical flux functions in both dimensions are approximated according to these methods, and temporal accuracy is achieved by employing a TVD Runge-Kutta method. Numerical tests are presented to examine the accuracy and robustness of the applied methods. It is revealed that the Lax-Friedrichs scheme produces results with oscillations, while the Lax-Wendroff and higher-order difference schemes produce considerably better results.
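The contrast between the schemes can be illustrated on the simplest hyperbolic problem, scalar linear advection u_t + a u_x = 0 with a square-wave initial state; the full Ripa system studied above behaves differently in detail, so this is only a toy comparison. Here Lax-Friedrichs is markedly diffusive, while Lax-Wendroff is sharper but shows dispersive over- and undershoots near the discontinuity:

```python
def lax_friedrichs(u, c):
    """One step for u_t + a*u_x = 0, periodic grid; c = a*dt/dx (CFL number)."""
    n = len(u)
    return [0.5 * (u[(i + 1) % n] + u[i - 1])
            - 0.5 * c * (u[(i + 1) % n] - u[i - 1]) for i in range(n)]

def lax_wendroff(u, c):
    """Second-order Lax-Wendroff step for the same equation."""
    n = len(u)
    return [u[i] - 0.5 * c * (u[(i + 1) % n] - u[i - 1])
            + 0.5 * c * c * (u[(i + 1) % n] - 2 * u[i] + u[i - 1])
            for i in range(n)]

n_cells, c, steps = 100, 0.5, 100
u0 = [1.0 if 20 <= i < 40 else 0.0 for i in range(n_cells)]  # square wave
u_lf, u_lw = u0[:], u0[:]
for _ in range(steps):
    u_lf = lax_friedrichs(u_lf, c)
    u_lw = lax_wendroff(u_lw, c)
```

Both schemes conserve the total of u on the periodic grid; the Lax-Friedrichs profile stays bounded by the initial extremes but is heavily smeared, whereas Lax-Wendroff overshoots near the jump.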

Keywords: finite difference schemes, Riemann problem, shallow water equations, temperature gradients

Procedia PDF Downloads 194
200 Discharge Estimation in a Two Flow Braided Channel Based on Energy Concept

Authors: Amiya Kumar Pati, Spandan Sahu, Kishanjit Kumar Khatua

Abstract:

Rivers are our main source of water; a river is a form of open channel flow, and flow in an open channel presents many complex scientific phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends, more or less solely, upon the flow of rivers. Rivers are major sources of sediments and of specific ingredients that are essential to human beings. A river flow consisting of small, shallow channels sometimes divides and recombines numerous times because of slow water flow or built-up sediments; the pattern formed during this process resembles the strands of a braid. Braided streams form where the sediment load is so heavy that some of the sediment is deposited as shifting islands. Braided rivers often exist near mountainous regions and typically carry coarse-grained, heterogeneous sediments down a fairly steep gradient. In this paper, the apparent shear stress formulae were suitably modified, and the Energy Concept Method (ECM) was applied to predict the discharges at the junction of a two-flow braided compound channel; the ECM has not previously been applied to estimating discharges in braided channels. The energy loss in the channels is analyzed on the basis of a mechanical analysis. The cross-section of the channel is divided into two sub-areas, namely the main channel below the bank-full level and the region above the bank-full level, for estimating the total discharge. The experimental data are compared with a wide range of theoretical data available in the published literature to verify the model. The accuracy of this approach is also compared with the Divided Channel Method (DCM). Error analysis of the method shows that the relative error is smaller for data sets with smooth floodplains than for those with rough floodplains. Comparisons with other models indicate that the present method has reasonable accuracy for engineering purposes.
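The Divided Channel Method used above as a baseline sums per-subsection discharges from Manning's equation, Q = (1/n)·A·R^(2/3)·S^(1/2) with hydraulic radius R = A/P, over the main channel and the region above bank-full level. The geometry and roughness values in the sketch below are hypothetical, not those of the experimental channel:

```python
def manning_discharge(area, perimeter, slope, n):
    """Manning's equation: Q = (1/n) * A * R**(2/3) * sqrt(S), with R = A/P.
    SI units: area m^2, perimeter m, slope dimensionless, Q in m^3/s."""
    r = area / perimeter
    return area * r ** (2.0 / 3.0) * slope ** 0.5 / n

def dcm_total_discharge(subsections, slope):
    """Divided Channel Method: sum the Manning discharge of each subsection,
    each with its own geometry and roughness."""
    return sum(manning_discharge(a, p, slope, n) for a, p, n in subsections)

# hypothetical compound section: (area m^2, wetted perimeter m, Manning n)
subsections = [(12.0, 8.0, 0.025),   # main channel below bank-full level
               (6.0, 10.0, 0.035)]  # region above bank-full level
q_total = dcm_total_discharge(subsections, slope=0.001)
```

The smoother, deeper main channel carries the bulk of the flow, which is why the apparent shear stress exchanged at the subsection interface (the quantity the ECM treats via energy loss) matters for the split.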

Keywords: critical flow, energy concept, open channel flow, sediment, two-flow braided compound channel

Procedia PDF Downloads 119
199 Mg Doped CuCrO₂ Thin Oxides Films for Thermoelectric Properties

Authors: I. Sinnarasa, Y. Thimont, L. Presmanes, A. Barnabé

Abstract:

Thermoelectricity is a promising technique for recovering waste heat as electricity without using moving parts. The thermoelectric (TE) effect is defined as the conversion of a temperature gradient directly into electricity and vice versa. To optimize TE materials, the power factor (PF = σS², where σ is the electrical conductivity and S the Seebeck coefficient) must be increased by adjusting the carrier concentration, and/or the lattice thermal conductivity Kₜₕ must be reduced by introducing scattering centres through point defects, interfaces, and nanostructuration. The PF does not reveal the advantages of a thin film because it does not take the thermal conductivity into account; in general, the thermal conductivity of a thin film is lower than that of the bulk material due to its microstructure and to scattering effects that increase with decreasing thickness. Delafossite-type oxides CuᴵMᴵᴵᴵO₂ have received considerable attention for their optoelectronic properties as p-type semiconductors; they also exhibit interesting thermoelectric (TE) properties due to their high electrical conductivity and their stability in the ambient atmosphere. As there are few thorough studies on the TE properties of Mg-doped CuCrO₂ thin films, we have investigated the influence of the annealing temperature on the electrical conductivity and the Seebeck coefficient of Mg-doped CuCrO₂ thin films and calculated the PF in the temperature range from 40 °C to 220 °C. To this end, we deposited Mg-doped CuCrO₂ thin films on fused silica substrates by RF magnetron sputtering; the study was carried out on 300 nm thin films. The as-deposited Mg-doped CuCrO₂ thin films were annealed at different temperatures (from 450 to 650 °C) under primary vacuum, and the electrical conductivity and Seebeck coefficient of the thin films were measured from 40 to 220 °C. The highest electrical conductivity, 0.60 S.cm⁻¹, with a Seebeck coefficient of +329 µV.K⁻¹ at 40 °C, was obtained for the sample annealed at 550 °C. The calculated power factor of the optimized CuCrO₂:Mg thin film was 6 µW.m⁻¹K⁻² at 40 °C; owing to the constant Seebeck coefficient and the electrical conductivity increasing with temperature, it reached 38 µW.m⁻¹K⁻² at 220 °C, which is quite a good result for an oxide thin film. Moreover, the degenerate behaviour and the hopping conduction mechanism of the CuCrO₂:Mg thin film were elucidated. The high, temperature-independent Seebeck coefficient and the stability in the ambient atmosphere could be a great advantage for applying this material in high-accuracy temperature measurement devices.
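The quoted power factor can be checked directly from PF = σS² after converting the units reported in the abstract (S.cm⁻¹ and µV.K⁻¹) to SI:

```python
def power_factor(sigma_s_per_cm, seebeck_uv_per_k):
    """PF = sigma * S**2 in W.m^-1.K^-2, converting from S/cm and uV/K."""
    sigma = sigma_s_per_cm * 100.0      # S/cm -> S/m
    s = seebeck_uv_per_k * 1e-6         # uV/K -> V/K
    return sigma * s * s

pf_40c = power_factor(0.60, 329.0)      # values reported at 40 degC
```

With σ = 0.60 S.cm⁻¹ and S = +329 µV.K⁻¹ this gives about 6.5 µW.m⁻¹K⁻², consistent with the ~6 µW.m⁻¹K⁻² reported at 40 °C.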

Keywords: thermoelectric, oxides, delafossite, thin film, power factor, degenerated semiconductor, hopping mode

Procedia PDF Downloads 189
198 The Effects of Subjective and Objective Indicators of Inequality on Life Satisfaction in a Comparative Perspective Using a Multi-Level Analysis

Authors: Atefeh Bagherianziarat, Dana Hamplova

Abstract:

The inverse social gradient in life satisfaction (LS) is a well-established research finding. To estimate the influence of inequality on LS, most studies have explored the effect of objective aspects of inequality, i.e., individuals’ socioeconomic status (SES). However, a smaller number of recent studies have confirmed a significant effect of the subjective aspect of inequality, or subjective socioeconomic status (SSS), on life satisfaction over and above SES. In other words, some studies confirm that individuals’ perception of their unequal status in society (SSS) can moderate the impact of their absolute unequal status on their life satisfaction. Nevertheless, it has not been established that this moderating link works in the same way in societies with different levels of social inequality, or for people who endorse the value of equality to different degrees. In this study, we compared the moderating influence of subjective inequality on the link between objective inequality and life satisfaction. In particular, we focus on differences across welfare state regimes based on Esping-Andersen's typology. We also explored the moderating role of belief in the value of equality on the link between objective and subjective inequality and LS in the given societies. Since the variables of interest were measured at both the individual and country levels, we applied a multilevel analysis to the European Social Survey data (round 9). The results showed that people in different regimes reported significantly different levels of life satisfaction, which is explained to different extents by their household income and their perception of income inequality. The findings support previous evidence of the moderating influence of perceived inequality on the link between objective inequality and LS; however, this link differs across welfare state regimes. 
The results of the multilevel modeling showed that country-level subjective equality is a positive predictor of individuals’ life satisfaction, while the Gini coefficient, used as the indicator of absolute inequality, has a smaller effect on life satisfaction. Country-level subjective equality also moderates the established link between individuals’ income and their life satisfaction. It can be concluded that both individual- and country-level subjective inequality slightly moderate the effect of individuals’ income on their life satisfaction.
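
The moderation logic described above can be illustrated with a toy simulation (entirely synthetic data, not the ESS sample; the effect sizes are invented): the income-to-life-satisfaction slope is made steeper where country-level subjective equality is low.

```python
import random
random.seed(0)

def slope(xs, ys):
    """OLS slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def simulate(income_effect, n=500):
    """Life satisfaction as a linear function of income plus noise."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [income_effect * x + random.gauss(0, 0.5) for x in xs]
    return xs, ys

# Moderation: the income -> life-satisfaction gradient is assumed steeper
# in the low-equality context than in the high-equality one.
x_lo, y_lo = simulate(0.8)   # low country-level subjective equality
x_hi, y_hi = simulate(0.2)   # high country-level subjective equality
b_lo, b_hi = slope(x_lo, y_lo), slope(x_hi, y_hi)
```

Recovering a clearly larger slope in the low-equality group is what "country-level subjective equality moderates the income effect" means operationally; the actual study estimates this with a cross-level interaction in a multilevel model.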

Keywords: individual values, life satisfaction, multilevel analysis, objective inequality, subjective inequality, welfare regimes status

Procedia PDF Downloads 86
197 Latitudinal Impact on Spatial and Temporal Variability of 7Be Activity Concentrations in Surface Air along Europe

Authors: M. A. Hernández-Ceballos, M. Marín-Ferrer, G. Cinelli, L. De Felice, T. Tollefsen, E. Nweke, P. V. Tognoli, S. Vanzo, M. De Cort

Abstract:

This study analyses the latitudinal dependence of the spatial and temporal distribution of the cosmogenic isotope 7Be in surface air across Europe. The long-term databases of six sampling sites (Ivalo, Helsinki, Berlin, Freiburg, Sevilla and La Laguna) that regularly provide data to the Radioactivity Environmental Monitoring (REM) network, managed by the Joint Research Centre (JRC) in Ispra, were used. The stations were selected according to two factors: 1) heterogeneity in terms of latitude and altitude, and 2) long data coverage. The combination of these two parameters ensures a high degree of representativeness of the results. The temporal coverage varies between stations; the present study uses stations with more or less continuous records from 1984 to 2011. The mean 7Be activity concentration ranged from 2.0 ± 0.9 mBq/m3 (Ivalo, north) to 4.8 ± 1.5 mBq/m3 (La Laguna, south), an increasing gradient of 0.06 mBq/m3 from north to south. However, there was no correlation with altitude, since all stations lie within the atmospheric boundary layer. The analysis indicated a dynamic range of 7Be activity with the solar cycle and its phase (maximum or minimum), with different impacts observed at stations according to their location. The results indicated a significant seasonal behavior, with maximum concentrations occurring in summer and minima in winter, although with differences in the values reached and in the months in which they were registered. Owing to the large heterogeneity in the temporal pattern with which the individual radionuclide analyses were performed at each station, a monthly 7Be index was calculated to normalize the measurements and allow direct comparison of the monthly evolution among stations. Different intensities and evolutions of the mean monthly index were observed. 
The knowledge of the spatial and temporal distribution of this natural radionuclide in the atmosphere is a key parameter for modeling studies of atmospheric processes, which are important phenomena to be taken into account in the case of a nuclear accident.
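
The monthly-index normalization described above is not defined explicitly in the abstract; one plausible form, each month's mean divided by the station's own overall mean, can be sketched as follows (the activity values are hypothetical):

```python
# Hypothetical monthly-mean 7Be activity concentrations (mBq/m3) for one
# station, peaking in summer as reported in the study.
monthly_mean = {1: 2.1, 2: 2.3, 3: 2.8, 4: 3.2, 5: 3.9, 6: 4.4,
                7: 4.6, 8: 4.2, 9: 3.5, 10: 2.9, 11: 2.4, 12: 2.2}

station_mean = sum(monthly_mean.values()) / len(monthly_mean)

# Normalised monthly index: 1.0 means "equal to the station's own mean",
# which makes stations with different sampling histories comparable.
monthly_index = {m: v / station_mean for m, v in monthly_mean.items()}
```

By construction the index averages to 1 at every station, so only the shape of the seasonal cycle (here a July maximum) is compared across sites.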

Keywords: beryllium-7, latitudinal impact in Europe, seasonal and monthly variability, solar cycle

Procedia PDF Downloads 330
196 Delineation of Subsurface Tectonic Structures Using Gravity, Magnetic and Geological Data, in the Sarir-Hameimat Arm of the Sirt Basin, NE Libya

Authors: Mohamed Abdalla Saleem, Hana Ellafi

Abstract:

The study area is located in the eastern part of the Sirt Basin, in the Sarir-Hameimat arm of the basin, south of the Amal High. The area covers the northern part of the Hameimat Trough and the Rakb High. These tectonic elements are part of the major structures created when the old Sirt Arch collapsed, most of them trending NW-SE. This study investigates the subsurface structures and sedimentological characteristics of the area and attempts to define its tectonic and stratigraphic development. About 7,600 land gravity measurements, 22,500 gridded magnetic data points, and petrographic core data from several wells were used to investigate the subsurface structural features both vertically and laterally. A third-order separation of the regional trend from the original Bouguer gravity data was chosen. The residual gravity map reveals a significant number of high anomalies distributed across the area, separated by a group of thick sediment centers. The reduction-to-the-pole magnetic map shows nearly the same major trends and anomalies. Applying further interpretation filters reveals that these high anomalies are sourced from different depth levels; some are deep-rooted, while others are igneous bodies intruded within the sedimentary layers. A petrographic study of wells in the area confirmed the presence of these igneous bodies and indicated that their composition is most likely gabbro hosted by marine shale layers. Depth investigation of these anomalies by the average depth spectrum shows that the average basement depth is about 7.7 km, while the top of the intrusions lies at about 2.65 km and some near-surface magnetic sources at about 1.86 km. The depth values of the magnetic anomalies and their locations were also estimated using the 3D Euler deconvolution technique, which suggests a maximum source depth of about 4,938 m. 
The total horizontal gradient of the magnetic data shows that the trends mostly extend NW-SE, while others trend NE-SW and a third group N-S. This variety of trend directions shows that the area experienced different tectonic regimes throughout its geological history.
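
The average-depth-spectrum approach mentioned above can be sketched with the standard log-power-spectrum slope method (synthetic, noise-free data; the spectral constants are invented, only the 7.7 km depth comes from the study):

```python
# Synthetic radially averaged log-power spectrum: ln P(k) = c - 2*z*k,
# the linear form used in spectral depth estimation; z is the mean source
# depth and k the wavenumber.
z_true = 7.7                      # km, basement depth quoted in the study
ks = [0.05 * i for i in range(1, 21)]
lnP = [10.0 - 2.0 * z_true * k for k in ks]

# Least-squares slope of ln P versus k; the depth estimate is -slope / 2.
n = len(ks)
mk, mp = sum(ks) / n, sum(lnP) / n
slope = (sum((k - mk) * (p - mp) for k, p in zip(ks, lnP))
         / sum((k - mk) ** 2 for k in ks))
z_est = -slope / 2.0
```

In practice the spectrum of real gridded magnetic data shows several linear segments, each yielding a depth for one source ensemble (basement, intrusions, near-surface sources), which is how the 7.7, 2.65 and 1.86 km values coexist.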

Keywords: Sirt Basin, tectonics, gravity, magnetic

Procedia PDF Downloads 50
195 User Experience in Relation to Eye Tracking Behaviour in VR Gallery

Authors: Veslava Osinska, Adam Szalach, Dominik Piotrowski

Abstract:

Contemporary VR technologies allow users to explore virtual 3D spaces where they can work, socialize, learn, and play. Users' interaction with the GUI and the pictures displayed involves perceptual and also cognitive processes, which can be monitored thanks to neuroadaptive technologies. These modalities provide valuable information about users' intentions, situational interpretations, and emotional states, allowing an application or interface to be adapted accordingly. Virtual galleries outfitted with specialized assets were designed using the Unity engine within the BITSCOPE project, in the frame of the CHIST-ERA IV program. Users' interaction with gallery objects raises questions about their visual interest in artworks and styles; moreover, attention, curiosity, and other emotional states can be monitored and analyzed. Natural gaze behavior and eye position were recorded by the built-in eye-tracking module of an HTC Vive VR headset. The eye-gaze results are grouped according to various user behavior schemes, and the corresponding perceptual-cognitive styles are recognized. In parallel, usability tests and surveys were used to identify the basic features of a user-centered interface for virtual environments across most of the project timeline. A total of sixty participants were selected from distinct university faculties and secondary schools. Participants' prior knowledge of art was evaluated in a pretest, characterizing their level of art sensitivity. Data were collected over two months; each participant gave written informed consent before participation. In the data analysis, nonlinear algorithms such as multidimensional scaling and the more recent t-distributed Stochastic Neighbor Embedding (t-SNE) were used to reduce the high-dimensional data to a relatively low-dimensional subspace. In this way, digital art objects can be classified by multimodal temporal characteristics of eye-tracking measures, revealing signatures that describe selected artworks. 
The current research seeks the optimal position on the aesthetic-utility scale, because contemporary interfaces of most applications need to be designed in both functional and aesthetic ways. The study also includes an analysis of the visual experience of subsamples of visitors differentiated, e.g., in terms of frequency of museum visits and cultural interests. Eye-tracking data may also show how to better place artefacts and paintings, or increase their visibility where possible.
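
A toy illustration of the multidimensional-scaling step (not the actual eye-tracking pipeline, which also used t-SNE) might look like the following: metric MDS by plain gradient descent on the raw stress, recovering a known 2-D configuration from its pairwise dissimilarities.

```python
import math, random
random.seed(1)

# Target dissimilarities: pairwise distances of four points on a unit square.
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

delta = {(i, j): dist(pts[i], pts[j])
         for i in range(4) for j in range(4) if i < j}

# Metric MDS: minimise raw stress sum_ij (d_ij - delta_ij)^2 by gradient
# descent, starting from a perturbed copy of the configuration.
X = [[x + random.uniform(-0.3, 0.3), y + random.uniform(-0.3, 0.3)]
     for x, y in pts]
lr = 0.05
for _ in range(2000):
    grad = [[0.0, 0.0] for _ in range(4)]
    for (i, j), dij in delta.items():
        d = dist(X[i], X[j]) or 1e-12
        c = 2.0 * (d - dij) / d
        for a in range(2):
            g = c * (X[i][a] - X[j][a])
            grad[i][a] += g
            grad[j][a] -= g
    for i in range(4):
        for a in range(2):
            X[i][a] -= lr * grad[i][a]

stress = sum((dist(X[i], X[j]) - dij) ** 2 for (i, j), dij in delta.items())
```

In the study the "points" are gaze-feature vectors per artwork or per participant, and a near-zero stress means the low-dimensional map faithfully preserves their dissimilarities.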

Keywords: eye tracking, VR, UX, visual art, virtual gallery, visual communication

Procedia PDF Downloads 29
194 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture

Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy

Abstract:

Background: Two new, simple and specific methods were developed and validated in accordance with ICH guidelines: first, a zero-crossing first-derivative technique, and second, a chemometric-assisted spectrophotometric artificial neural network (ANN). Both methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals, Coenzyme Q10 (Q), also known as ubidecarenone or ubiquinone-10, and Vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method, applying the first derivative allowed Q and E to be determined alternately, each at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curves are linear over the concentration ranges of 10-60 and 5.6-70 μg mL⁻¹ for Q and E, respectively. For the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training (concentration) set of 90 different synthetic mixtures containing Q and E, over wide concentration ranges of 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training set were recorded in the spectral region of 230–300 nm. A gradient-descent back-propagation ANN calibration was computed by relating the concentration set (x-block) to the corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, within the defined range, was used to validate the proposed network. Neither chemical separation, a preparation stage, nor graphical mathematical treatment was required. Conclusions: The proposed methods were successfully applied to the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet, with excellent recoveries. 
The ANN method was superior to the derivative technique, as the former determined both drugs under the non-linear experimental conditions. It also offers rapidity and high accuracy while saving effort and cost, and it requires no specialist analyst for its application. Although the ANN technique needed a large training set, it is the method of choice for routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. Finally, the results of the two methods were compared with each other.
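
The zero-crossing principle of the first-derivative method can be sketched with simulated Gaussian bands (band positions and widths are hypothetical, not the real Q and E spectra): at the wavelength where E's derivative crosses zero, the mixture's D1 amplitude depends on Q alone.

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

wl = [230 + 0.1 * i for i in range(701)]   # 230-300 nm wavelength grid

def d1(spec, step=0.1):
    """Central-difference first-derivative spectrum."""
    return [(spec[k + 1] - spec[k - 1]) / (2 * step)
            for k in range(1, len(spec) - 1)]

# Hypothetical unit absorption bands (positions/widths for illustration)
specQ = [gauss(x, 275, 12) for x in wl]
specE = [gauss(x, 245, 10) for x in wl]
dE = d1(specE)

# Zero-crossing of E's derivative (its band maximum, ~245 nm here)
izero = min(range(len(dE)), key=lambda k: abs(dE[k]))

def mix_d1_at_zero(cQ, cE):
    """D1 amplitude of a Q+E mixture at E's zero-crossing wavelength."""
    mix = [cQ * q + cE * e for q, e in zip(specQ, specE)]
    return d1(mix)[izero]

a1 = mix_d1_at_zero(10.0, 30.0)
a2 = mix_d1_at_zero(20.0, 30.0)   # doubling Q doubles the amplitude
a3 = mix_d1_at_zero(10.0, 60.0)   # changing E leaves it unchanged
```

Because derivatives are linear, the amplitude at the zero-crossing is proportional to the other analyte's concentration, which is exactly what the calibration curves at 285 nm and 235 nm exploit.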

Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network

Procedia PDF Downloads 433
193 Magnetic Biomaterials for Removing Organic Pollutants from Wastewater

Authors: L. Obeid, A. Bee, D. Talbot, S. Abramson, M. Welschbillig

Abstract:

The adsorption process is one of the most efficient methods for removing pollutants from wastewater, provided that suitable adsorbents are used. In order to produce environmentally safe adsorbents, natural polymers have received increasing attention in recent years. Thus, alginate and chitosan are extensively used as inexpensive, non-toxic and efficient biosorbents. Alginate is an anionic polysaccharide extracted from brown seaweeds. Chitosan is an amino-polysaccharide; this cationic polymer is obtained by deacetylation of chitin, the major constituent of crustacean shells. Furthermore, it has been shown that encapsulating magnetic materials in alginate and chitosan beads facilitates their recovery from wastewater after the adsorption step, through the use of an external magnetic field gradient obtained with a magnet or an electromagnet. In the present work, we studied the adsorption affinity of magnetic alginate beads and magnetic chitosan beads (called magsorbents) for methyl orange (MO), an anionic dye; methylene blue (MB), a cationic dye; and p-nitrophenol (PNP), a hydrophobic pollutant. The effect of different parameters (solution pH, contact time, initial pollutant concentration, etc.) on the adsorption of each pollutant on the magnetic beads was investigated. The adsorption of anionic and cationic pollutants is mainly due to electrostatic interactions; consequently, methyl orange is strongly adsorbed by chitosan beads in acidic medium and methylene blue by alginate beads in basic medium. In the case of a hydrophobic pollutant, which is weakly adsorbed, we showed that adsorption is enhanced by adding a surfactant. Cetylpyridinium chloride (CPC), a cationic surfactant, was used to increase the adsorption of PNP by magnetic alginate beads. 
Adsorption of CPC by alginate beads occurs through two mechanisms: (i) electrostatic attraction between the cationic head groups of CPC and the negative carboxylate functions of alginate; (ii) interaction between the hydrocarbon chains of CPC. The hydrophobic pollutant is adsolubilized within the surface aggregated structures of the surfactant. Figure c shows that PNP adsorption can reach up to 95% in the presence of CPC. At the highest CPC concentrations, desorption occurs due to the formation of micelles in the solution. Our magsorbents appear to efficiently remove ionic and hydrophobic pollutants, and we hope that this fundamental research will be helpful for the future development of magnetically assisted processes in water treatment plants.

Keywords: adsorption, alginate, chitosan, magsorbent, magnetic, organic pollutant

Procedia PDF Downloads 240
192 Understanding the Process-Wise Entropy Framework in a Heat-Powered Cooling Cycle

Authors: P. R. Chauhan, S. K. Tyagi

Abstract:

Adsorption refrigeration technology offers a sustainable and energy-efficient cooling alternative to traditional refrigeration technologies for meeting fast-growing cooling demands. With its ability to utilize natural refrigerants, low-grade heat sources, and modular configurations, it has the potential to revolutionize the cooling industry. Despite these benefits, the commercial viability of this technology is hampered by several fundamental limiting constraints, including its large size, low uptake capacity, and poor performance as a result of deficient heat and mass transfer characteristics. The primary causes of these deficient heat and mass transfer characteristics, and the magnitude of the exergy loss in the various real processes of an adsorption cooling system, can be assessed by entropy generation rate analysis, i.e., the second law of thermodynamics. Therefore, this article presents a second-law-based investigation in terms of the entropy generation rate (EGR) to identify the energy losses in the various processes of the HPCC-based adsorption system, using MATLAB R2021b software. The adsorption-based cooling system consists of two beds packed with silica gel and arranged in a single stage, while water is employed as refrigerant, coolant, and hot fluid. The variation in process-wise EGR is examined over the cycle time, and a comparative analysis is presented. Moreover, the EGR is also evaluated in the external units, i.e., the heat source and heat sink units used for regeneration and heat rejection, respectively. The research findings revealed that the combination of adsorber and desorber, which operates across heat reservoirs with a higher temperature gradient, accounts for more than half of the total EGR. Moreover, the EGR caused by the heat transfer process is found to be the highest, followed by those of the heat sink, heat source, and mass transfer, respectively. 
In the case of the heat transfer process, the operation of the valve is responsible for more than half (54.9%) of the overall EGR during heat transfer, while the combined contribution of the external units, the source (18.03%) and the sink (21.55%), to the total EGR is 35.59%. The analysis and findings of the present research are expected to pinpoint the sources of energy waste in HPCC-based adsorption cooling systems.
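
The dominant role of heat transfer across a large temperature difference follows from the basic entropy-balance expression S_gen = Q(1/T_cold − 1/T_hot); a minimal sketch with illustrative temperatures (not the paper's operating values):

```python
# Entropy generation for steady heat transfer Q_dot across a finite
# temperature difference: S_gen = Q_dot * (1/T_cold - 1/T_hot), in W/K.

def egr_heat_transfer(q_watt, t_hot_k, t_cold_k):
    return q_watt * (1.0 / t_cold_k - 1.0 / t_hot_k)

# Illustrative numbers: a desorber driven by a 363 K source rejecting to a
# 303 K sink generates far more entropy than a 313 K -> 303 K adsorber
# step carrying the same heat duty.
egr_desorb = egr_heat_transfer(1000.0, 363.0, 303.0)
egr_adsorb = egr_heat_transfer(1000.0, 313.0, 303.0)
```

This is why the adsorber/desorber pair, spanning the largest reservoir temperature gradient, dominates the total EGR in the cycle.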

Keywords: adsorption cooling cycle, heat transfer, mass transfer, entropy generation, silica gel-water

Procedia PDF Downloads 97
191 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging

Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati

Abstract:

Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissue for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research on hybrid knowledge-driven/data-driven approaches which exploit well-assessed physical models and build neural networks upon them, integrating the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. 2. Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form q* = argmin_q D(y, ỹ), (1) where D is a loss function which typically contains a discrepancy (or data fidelity) term plus other possible ad-hoc terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q given the set of observations y. Moreover, ỹ is the computable approximation of y, which may be obtained from a neural network but also in the classic way, via the resolution of a PDE with given input coefficients (the forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder-type networks. 
This procedure provides priors on the solution (Fig. 1); ii) we use regularization procedures of the type q̂* = argmin_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1). 3. Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and yields improved results, especially with a restricted dataset and in the presence of variable sources of noise.
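
A classical instance of the regularized formulation q̂* = argmin_q D(y, ỹ) + R(q) is Tikhonov regularization with R(q) = λ‖q‖²; the sketch below solves a toy linear inverse problem this way (the authors instead learn R via a network, and their forward map is a PDE, not a matrix):

```python
import random
random.seed(3)

# Toy linear inverse problem y = A q + noise, regularised as
#   q_hat = argmin_q ||A q - y||^2 + lam * ||q||^2   (Tikhonov form of R)
q_true = [2.0, -1.0]
A = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(50)]
y = [row[0] * q_true[0] + row[1] * q_true[1] + random.gauss(0, 0.05)
     for row in A]

lam = 0.1
# Normal equations (A^T A + lam I) q = A^T y, solved in closed form for
# the 2-parameter case via Cramer's rule.
ata = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
aty = [sum(r[i] * yi for r, yi in zip(A, y)) for i in range(2)]
ata[0][0] += lam
ata[1][1] += lam
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
q_hat = [(aty[0] * ata[1][1] - aty[1] * ata[0][1]) / det,
         (aty[1] * ata[0][0] - aty[0] * ata[1][0]) / det]
```

The added λI term is what tames the ill-conditioning; in the hybrid framework that fixed penalty is replaced by a learned, data-driven prior.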

Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization

Procedia PDF Downloads 65
190 Economic Efficiency of Cassava Production in Nimba County, Liberia: An Output-Oriented Approach

Authors: Kollie B. Dogba, Willis Oluoch-Kosura, Chepchumba Chumo

Abstract:

In Liberia, many agricultural households cultivate cassava either for subsistence or to generate farm income. Many of the concentrated cassava farmers reside in Nimba, a north-eastern county that borders two other economies, the Republics of Côte d’Ivoire and Guinea. With a high demand for cassava output and products in emerging Asian markets, coupled with the objective of Liberian agricultural policy to increase the competitiveness of valued agricultural crops, there is a need to examine the level of resource-use efficiency for many agricultural crops. However, information on the efficiency of many crops, including cassava, is scarce. Hence, applying an output-oriented method, this study assesses the economic efficiency of cassava farmers in Nimba County, Liberia. A multi-stage sampling technique was employed to generate the study sample. Data on on-farm attributes and socio-economic and institutional factors were collected from 216 cassava farmers. Stochastic frontier production and revenue models with translog functional forms were used to determine the level of revenue efficiency and its determinants. The results showed that most of the cassava farmers are male (60%). Many of the farmers are married, engaged or living together with a spouse (83%), with a mean household size of nine persons. Farmland is predominantly obtained by inheritance (95%), the average farm size is 1.34 hectares, and most cassava farmers did not access agricultural credit (76%) or extension services (91%). The mean cassava output per hectare is 1,506.02 kg, corresponding to an average revenue of L$23,551.16 (Liberian dollars). Empirical results showed that the revenue efficiency of cassava farmers varies from 0.1% to 73.5%, with a mean revenue efficiency of 12.9%. 
This indicates that, on average, there is a vast potential of 87.1% to increase the economic efficiency of cassava farmers in Nimba by improving technical and allocative efficiencies. Among the significant determinants of revenue efficiency, age and group membership had negative effects on the revenue efficiency of cassava production, while farming experience, access to extension, formal education, and the average wage rate had positive effects. The study recommends setting up and incentivizing farmer field schools for cassava farmers, primarily to share farming experiences and to learn robust cultivation techniques of sustainable agriculture. Farm managers and farmers should also consider a fixed wage rate in labor contracts for all stages of cassava farming.

Keywords: economic efficiency, frontier production and revenue functions, Nimba County, Liberia, output-oriented approach, revenue efficiency, sustainable agriculture

Procedia PDF Downloads 116
189 Potential Serological Biomarker for Early Detection of Pregnancy in Cows

Authors: Shveta Bathla, Preeti Rawat, Sudarshan Kumar, Rubina Baithalu, Jogender Singh Rana, Tushar Kumar Mohanty, Ashok Kumar Mohanty

Abstract:

Pregnancy is a complex process which includes a series of events such as fertilization, formation of the blastocyst, implantation of the embryo, placental formation, and development of the fetus. The success of these events depends on various interactions which are synchronized by endocrine signaling between a receptive dam and a competent embryo. These interactions lead to changes in the expression of hormones and proteins, but to date no protein biomarker is available to detect the successful completion of these events. We employed a quantitative proteomics approach to develop a putative serological biomarker with diagnostic applicability for early detection of pregnancy in cows. For this study, sera were collected from control (non-pregnant, n=6) and pregnant animals on successive days of pregnancy (days 7, 19 and 45; n=6). The sera were depleted of albumin using a Norgen depletion kit. The tryptic peptides were labeled with iTRAQ, pooled, and fractionated using bRPLC over an 80 min gradient. Twelve fractions were then injected into the nLC for identification and quantitation in DDA mode using ESI. A Mascot search identified 2,056 proteins, of which 352 were differentially expressed; twenty proteins were up-regulated and twelve down-regulated, with fold changes > 1.5 and < 0.6, respectively (p < 0.05). Gene ontology analysis of the differentially expressed proteins (DEPs) using the Panther software revealed that the majority of the proteins are actively involved in catalytic, binding and enzyme regulatory activities. DEPs such as NF2, MAPK, GRIP1, UGT1A1, PARP and CD68 were further subjected to pathway analysis using KEGG and the Cytoscape plugin ClueGO, which showed the involvement of these proteins in successful implantation, maintenance of pluripotency, regulation of luteal function, differentiation of endometrial macrophages, protection from oxidative stress, and developmental pathways such as Hippo. 
Further efforts are continuing on targeted proteomics and western blotting to validate the potential biomarkers and to develop a diagnostic kit for early pregnancy diagnosis in cows.
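
The fold-change thresholds used above (up-regulated if > 1.5, down-regulated if < 0.6, with p < 0.05 assumed met) can be expressed as a simple classifier; the protein names come from the abstract, but the ratio values are invented for illustration:

```python
# Classify differentially expressed proteins by the fold-change rules
# quoted in the abstract: up if FC > 1.5, down if FC < 0.6.

def classify(fold_change):
    if fold_change > 1.5:
        return "up"
    if fold_change < 0.6:
        return "down"
    return "unchanged"

# Hypothetical iTRAQ ratios (pregnant vs non-pregnant), illustration only
ratios = {"NF2": 2.1, "MAPK": 1.8, "UGT1A1": 0.45, "CD68": 1.1}
calls = {name: classify(fc) for name, fc in ratios.items()}
```

Proteins falling between the two cut-offs (like the hypothetical CD68 ratio here) are excluded from the DEP list even when measured reliably.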

Keywords: bRPLC, ClueGO, ESI, iTRAQ, KEGG, Panther

Procedia PDF Downloads 447
188 Estimates of Freshwater Content from ICESat-2 Derived Dynamic Ocean Topography

Authors: Adan Valdez, Shawn Gallaher, James Morison, Jordan Aragon

Abstract:

Global climate change has raised atmospheric temperatures, contributing to rising sea levels, decreasing sea ice, and increased freshening of high-latitude oceans. This freshening has increased stratification, inhibiting local mixing and nutrient transport and modifying regional circulations in the polar oceans. In recent years, the Western Arctic has seen an increase in freshwater volume at an average rate of 397 ± 116 km³/year. The majority of the freshwater volume resides in the Beaufort Gyre surface lens, driven by anticyclonic wind forcing, sea ice melt, and Arctic river runoff. The total climatological freshwater content is typically defined as water fresher than a salinity of 34.8. The near-isothermal nature of Arctic seawater and non-linearities in the equation of state for near-freezing waters result in a salinity-driven pycnocline, as opposed to the temperature-driven density structure seen at lower latitudes. In this study, we investigate the relationship between freshwater content and remotely sensed dynamic ocean topography (DOT). In-situ measurements of freshwater content provide useful information on the freshening rate of the Beaufort Gyre; however, their collection is costly and time consuming. Dynamic ocean topography derived from NASA’s Advanced Topographic Laser Altimeter System (ATLAS) and freshwater content derived from airborne expendable CTDs (AXCTDs) are used to develop a linear regression model. In-situ data for the regression model are collected along the 150° West meridian, which typically defines the centerline of the Beaufort Gyre. Two freshwater content models are determined by integrating the freshwater volume between the surface and the isopycnals corresponding to reference salinities of 28.7 and 34.8, which represent the winter pycnocline and the total climatological freshwater content, respectively. 
Using each model, we determine the strength of the linear relationship between freshwater content and satellite-derived DOT. The results of this modeling study could provide a future predictive capability for freshwater volume changes in the Beaufort-Chukchi Sea without in-situ methods. Successful use of ICESat-2's DOT to approximate freshwater content could substantially reduce reliance on field deployment platforms to characterize physical ocean properties.
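
The regression step can be sketched as an ordinary least-squares fit of freshwater content (FWC) on DOT; all numbers below are hypothetical, for illustration only:

```python
# Least-squares fit of freshwater content against dynamic ocean
# topography, the relationship the study models (hypothetical data).
dot_m = [0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40]   # DOT, m
fwc_m = [8.2, 10.1, 12.3, 13.9, 16.2, 18.0, 19.8]    # FWC, m

n = len(dot_m)
mx, my = sum(dot_m) / n, sum(fwc_m) / n
b1 = (sum((x - mx) * (y - my) for x, y in zip(dot_m, fwc_m))
      / sum((x - mx) ** 2 for x in dot_m))
b0 = my - b1 * mx

def predict_fwc(dot):
    """Estimate FWC (m) from a satellite DOT value (m)."""
    return b0 + b1 * dot
```

Once calibrated against the AXCTD sections, such a fit would let ICESat-2 DOT alone provide first-order FWC estimates between field campaigns.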

Keywords: ICESat-2, dynamic ocean topography, freshwater content, Beaufort Gyre

Procedia PDF Downloads 69
187 Design and Development of Permanent Magnet Quadrupoles for Low Energy High Intensity Proton Accelerator

Authors: Vikas Teotia, Sanjay Malhotra, Elina Mishra, Prashant Kumar, R. R. Singh, Priti Ukarde, P. P. Marathe, Y. S. Mayya

Abstract:

Bhabha Atomic Research Centre, Trombay, is developing a low energy high intensity proton accelerator (LEHIPA) as the pre-injector for a 1 GeV proton accelerator for an accelerator-driven sub-critical reactor system (ADSS). LEHIPA consists of an RFQ (Radio Frequency Quadrupole) and a DTL (Drift Tube Linac) as its major accelerating structures. The DTL is an RF resonator operating in the TM010 mode and provides a longitudinal E-field for the acceleration of charged particles. The RF design of the drift tubes of the DTL was carried out to maximize the shunt impedance; this demands that the diameter of the drift tubes (DTs) be as small as possible. The width of a DT is, however, determined by the particle β and a trade-off between the transit time factor and the effective accelerating voltage in the DT gap. The array of drift tubes inside the DTL shields the accelerated particles from the decelerating RF phase and provides transverse focusing to the charged particles, which otherwise tend to diverge due to Coulombic repulsion and the transverse E-field at the entry of the DTs. The magnetic lenses housed inside the DTs control the transverse emittance of the beam. Quadrupole magnets are preferred over solenoid magnets because of the relatively high focusing strength of the former over the latter. The small volume available inside the DTs for housing magnetic quadrupoles has motivated the use of permanent magnet quadrupoles rather than electromagnetic quadrupoles (EMQs). This provides another advantage, as Joule heating, which would have added a thermal load in a continuous-cycle accelerator, is avoided. The beam dynamics require the uniformity of the integral magnetic gradient to be better than ±0.5% around the nominal value of 2.05 tesla. The paper describes the magnetic design of the PMQ using Sm2Co17 rare-earth permanent magnets, and discusses the fabrication and qualification of five pre-series prototype permanent magnet quadrupoles and a full-scale DT developed with embedded PMQs. 
The paper discusses the magnetic pole design for optimizing the integral Gdl uniformity and the values of the higher-order multipoles. A novel but simple method of tuning the integral Gdl is also discussed.
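
The ±0.5% integral-gradient specification can be expressed as a simple acceptance check; the measured values below are invented for illustration:

```python
# Check integral gradient (Gdl) uniformity against the +/-0.5% spec
# around the nominal 2.05 T quoted in the abstract.
NOMINAL_GDL = 2.05          # tesla
TOLERANCE = 0.005           # +/- 0.5 %

def within_spec(gdl):
    return abs(gdl - NOMINAL_GDL) / NOMINAL_GDL <= TOLERANCE

# Hypothetical measurements from five prototype PMQs (illustrative)
measured = [2.049, 2.055, 2.036, 2.058, 2.052]
passed = [within_spec(g) for g in measured]
```

A magnet failing this check (the third value here deviates by ~0.7%) would go back through the integral-Gdl tuning step the paper describes before installation in a drift tube.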

Keywords: DTL, focusing, PMQ, proton, rare earth magnets

Procedia PDF Downloads 460
186 Normal Hematopoietic Stem Cell and the Toxic Effect of Parthenolide

Authors: Alsulami H., Alghamdi N., Alasker A., Almohen N., Shome D.

Abstract:

Most conventional chemotherapeutic agents used for the treatment of cancer eradicate not only cancer cells but also normal hematopoietic stem cells (HSCs), leading to severe pancytopenia during treatment. Therefore, a need exists for novel approaches that treat cancer with minimal or no effect on normal HSCs. Parthenolide (PTL), a herbal product occurring naturally in the feverfew plant, is a potential new chemotherapeutic agent for the treatment of many cancers, such as acute myeloid leukemia (AML) and chronic lymphocytic leukemia (CLL). In this study, we investigated the effect of different PTL concentrations on the viability of normal HSCs and on the ability of these cells to form colonies after being treated with PTL in vitro. Methods: 24 samples of bone marrow and cord blood were collected with consent, and mononuclear cells were separated using density gradient separation. These cells were then exposed to various concentrations of PTL for 24 hours. Cell viability after culture was determined by flow cytometry using 7-AAD. Additionally, the impact of PTL on HSCs was evaluated using a colony forming unit (CFU) assay. Furthermore, NF-κB expression levels were assessed using a PE-labelled anti-phospho-NF-κB p65 antibody. Results: This study showed that there was no statistically significant difference in the percentage of cell death between untreated and PTL-treated cells at 5 μM (p = 0.7), 10 μM (p = 0.4), and 25 μM PTL (p = 0.09), respectively. However, at higher doses PTL caused a significant increase in the percentage of cell death; these results were significant when compared to the untreated control (p < 0.001). The response of cord blood cells (n = 4), on the other hand, differed slightly from that of bone marrow cells in that the percentage of cell death became significant only at 100 μM PTL. Cord blood cells therefore seemed more resistant than bone marrow cells.
Discussion and Conclusion: At concentrations ≤ 25 μM, PTL has minimal or no effect on HSCs in vitro. Cord blood HSCs are more resistant to PTL than bone marrow HSCs. This could be due to the higher percentage of T-lymphocytes, which are resistant to PTL, in CB samples (85% in CB vs. 56% in BM). Additionally, CB samples contained a higher proportion of CD34+ cells, with 14.5% brightly CD34+ cells compared to only 1% in normal BM. These bright CD34+ cells in CB were mostly negative for early-stage stem cell maturation antigens, making them young and resilient to oxidative stress and to high concentrations of PTL.

Keywords: stem cell, parthenolide, NF-κB, CLL

Procedia PDF Downloads 27
185 Robust Numerical Method for Singularly Perturbed Semilinear Boundary Value Problem with Nonlocal Boundary Condition

Authors: Habtamu Garoma Debela, Gemechis File Duressa

Abstract:

In this work, our primary interest is to provide ε-uniformly convergent numerical techniques for solving singularly perturbed semilinear boundary value problems with a non-local boundary condition. These singular perturbation problems are described by differential equations in which the highest-order derivative is multiplied by an arbitrarily small parameter ε, known as the singular perturbation parameter. This leads to the existence of boundary layers, which are narrow regions in the neighborhood of the boundary of the domain where the gradient of the solution becomes steep as the perturbation parameter tends to zero. Due to this layer phenomenon, it is a challenging task to provide ε-uniform numerical methods. The term 'ε-uniform' refers to numerical methods in which the approximate solution converges to the corresponding exact solution (measured in the supremum norm) independently of the perturbation parameter ε. Thus, the purpose of this work is to develop, analyze, and improve ε-uniform numerical methods for solving singularly perturbed problems. These methods are based on a nonstandard fitted finite difference method. The basic idea behind the fitted operator finite difference method is to replace the denominator functions of the classical derivatives with positive functions derived in such a way that they capture notable properties of the governing differential equation. A uniformly convergent numerical method is constructed via a nonstandard fitted operator numerical method and numerical integration methods, with the non-local boundary condition treated using numerical integration techniques. Additionally, the Richardson extrapolation technique, which improves the first-order accuracy of the standard scheme to second-order convergence, is applied to singularly perturbed convection-diffusion problems using the proposed numerical method.
Maximum absolute errors and rates of convergence for different values of the perturbation parameter and mesh sizes are tabulated for the numerical example considered. The method is shown to be ε-uniformly convergent. Finally, extensive numerical experiments are conducted which support all of our theoretical findings. A concise conclusion is provided at the end of this work.
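As a minimal illustration of the fitted-operator idea (replacing the denominator of the classical second difference by an exponentially fitted one), the sketch below solves the linear constant-coefficient model problem -εu'' + u' = 1, u(0) = u(1) = 0, with the classical Il'in fitting factor. This is not the semilinear, nonlocal scheme of the abstract, only the simplest case of the same construction; for constant coefficients this scheme is nodally exact for any ε.

```python
import math

def fitted_solve(eps, n, a=1.0, f=1.0):
    """Exponentially fitted (Il'in-type) scheme for -eps*u'' + a*u' = f,
    u(0) = u(1) = 0, on a uniform mesh with n interior points.

    The classical denominator h**2 of the second difference is scaled by
    the fitting factor sigma = (rho/2)*coth(rho/2), rho = a*h/eps, so the
    scheme remains accurate uniformly in eps.
    """
    h = 1.0 / (n + 1)
    rho = a * h / eps
    sigma = (rho / 2.0) / math.tanh(rho / 2.0)   # fitting factor
    # Tridiagonal coefficients of -eps*sigma*D2 + a*D0 applied at node i
    sub = [-eps * sigma / h**2 - a / (2 * h)] * n
    dia = [2 * eps * sigma / h**2] * n
    sup = [-eps * sigma / h**2 + a / (2 * h)] * n
    rhs = [f] * n
    # Thomas algorithm (forward elimination, back substitution)
    for i in range(1, n):
        w = sub[i] / dia[i - 1]
        dia[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / dia[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / dia[i]
    return u

def exact(x, eps):
    """Exact solution of -eps*u'' + u' = 1 with homogeneous boundary conditions."""
    return x - (math.exp((x - 1) / eps) - math.exp(-1 / eps)) / (1 - math.exp(-1 / eps))
```

On a coarse uniform mesh the scheme reproduces the exact nodal values even for ε far smaller than the mesh width, which is the behaviour the term 'ε-uniform' describes.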

Keywords: nonlocal boundary condition, nonstandard fitted operator, semilinear problem, singular perturbation, uniformly convergent

Procedia PDF Downloads 135
184 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior from the applied reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE (its melt flow behavior) is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscometry, and a multi-angle light scattering detector is applied; it serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be excellent, especially considering that the multi-scale modelling approach involves no parameter fitting to the data. This validates the suggested approach and at the same time demonstrates its universality.
In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach thus offers the opportunity to predict and design LDPE processing behavior based simply on process conditions such as feed streams and inlet temperatures and pressures.
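The Monte Carlo microstructure step can be caricatured as follows. This is a toy sketch, not the authors' kinetic model: each propagation step competes with backbiting (giving a short-chain branch), transfer to polymer (giving a long-chain branch), and termination, and all probabilities below are purely illustrative.

```python
import random

def sample_chain(p_scb, p_lcb, p_term, rng):
    """Grow one chain monomer by monomer. After each propagation step, the
    competing events (short-chain branch, long-chain branch, termination)
    are sampled from their relative probabilities.
    Returns (chain_length, n_short_branches, n_long_branches)."""
    length = scb = lcb = 0
    while True:
        length += 1
        r = rng.random()
        if r < p_term:
            return length, scb, lcb
        elif r < p_term + p_scb:
            scb += 1                      # backbiting -> short-chain branch
        elif r < p_term + p_scb + p_lcb:
            lcb += 1                      # transfer to polymer -> long-chain branch

def branch_frequencies(n_chains=2000, p_scb=0.02, p_lcb=0.002,
                       p_term=0.002, seed=1):
    """Ensemble-average SCB and LCB counts per 1000 monomer units."""
    rng = random.Random(seed)
    tot_len = tot_scb = tot_lcb = 0
    for _ in range(n_chains):
        n, s, l = sample_chain(p_scb, p_lcb, p_term, rng)
        tot_len += n
        tot_scb += s
        tot_lcb += l
    return 1000 * tot_scb / tot_len, 1000 * tot_lcb / tot_len
```

A real kinetic Monte Carlo model would derive these event probabilities from the rate coefficients and concentrations at the actual reaction conditions, but the sampled ensemble statistics (branching frequencies per 1000 C, chain-length dependence) are exactly the quantities validated against GPC and 13C-NMR data in the abstract.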

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 115
183 Using Rainfall Simulators to Design and Assess the Post-Mining Erosional Stability

Authors: Ashraf M. Khalifa, Hwat Bing So, Greg Maddocks

Abstract:

Changes to the mining environmental approvals process in Queensland have been rolled out under the MERFP Act (2018). This includes requirements for a Progressive Rehabilitation and Closure Plan (PRC Plan). Key considerations of the landform design report within the PRC Plan must include: (i) identification of materials available for landform rehabilitation, including their ability to achieve the required landform design outcomes; (ii) erosion assessments to determine landform heights, gradients, profiles, and material placement; (iii) slope profile design considering the interactions between soil erodibility, rainfall erosivity, landform height, gradient, and vegetation cover to identify acceptable erosion rates over a long-term average; and (iv) an analysis of future stability based on the factors described above, e.g. erosion and/or landform evolution modelling. ACARP funded an extensive and thorough erosion assessment program using rainfall simulators from 1998 to 2010. The ACARP program included laboratory assessment of 35 soil and spoil samples from 16 coal mines and samples from a gold mine in Queensland using a 3 × 0.8 m laboratory rainfall simulator. The reliability of the laboratory rainfall simulator was verified through field measurements using larger flumes (20 × 5 m) and catchment-scale measurements at three sites (three different catchments with an average area of 2.5 ha each). Soil cover systems are a primary component of a constructed mine landform. The primary functions of a soil cover system are to sustain vegetation and to limit the infiltration of water and oxygen into the underlying reactive mine waste. If the external surface of the landform erodes, the functions of the cover system cannot be maintained, and the cover system will most likely fail. Assessing a constructed landform's potential long-term erosion stability requires defensible erosion rate thresholds below which rehabilitation landform designs are considered acceptably erosion-resistant or 'stable'.
The process used to quantify erosion rates using rainfall simulators (flumes) to measure rill and inter-rill erosion on bulk samples under laboratory conditions or on in-situ material under field conditions will be explained.
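The interactions in point (iii) between soil erodibility, rainfall erosivity, slope gradient/length, and vegetation cover are commonly captured by RUSLE-type factor models. The sketch below uses one common simplified LS-factor form (McCool-type slope steepness with a fixed length exponent); it is not the ACARP methodology, and all factor values in the example are illustrative.

```python
import math

def ls_factor(slope_length_m, slope_pct):
    """Slope length-steepness factor: L = (lambda/22.13)**0.5 with a fixed
    length exponent, and a McCool-type steepness term with a break at 9%."""
    theta = math.atan(slope_pct / 100.0)
    L = (slope_length_m / 22.13) ** 0.5
    if slope_pct < 9.0:
        S = 10.8 * math.sin(theta) + 0.03
    else:
        S = 16.8 * math.sin(theta) - 0.50
    return L * S

def annual_soil_loss(R, K, slope_length_m, slope_pct, C, P=1.0):
    """RUSLE-type average annual soil loss A = R * K * LS * C * P.
    Units follow the chosen R (erosivity) and K (erodibility) system."""
    return R * K * ls_factor(slope_length_m, slope_pct) * C * P
```

Such a factor model makes the design trade-offs explicit: halving the slope gradient or improving vegetation cover (lower C) reduces the predicted long-term average loss, which is how landform height/gradient combinations are screened against an erosion-rate threshold.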

Keywords: open-cut, mining, erosion, rainfall simulator

Procedia PDF Downloads 95
182 The Numerical Model of the Onset of Acoustic Oscillation in Pulse Tube Engine

Authors: Alexander I. Dovgyallo, Evgeniy A. Zinoviev, Svetlana O. Nekrasova

Abstract:

Most works on pulse tube converters describe the workflow using mathematical models of stationary modes. However, the unsteady behavior of thermoacoustic systems during start, stop, and acoustic load changes is of particular interest. The aim of the present study was to develop a mathematical model of the thermal excitation of acoustic oscillations in a pulse tube engine (PTE), as a small-scale pulse tube engine operating on atmospheric air. Unlike some previous works, this standing-wave configuration is a fully closed system. The improvements over previous mathematical models are the following: the model allows specifying any value of porosity for the regenerator, takes into account the piston weight and the friction in the cylinder-piston unit, and determines the operating frequency. The numerical method is based on relation equations between the pressure and volume velocity variables at the ends of each element of the PTE, which are recorded through the appropriate transformation matrix. The solution demonstrates that the PTE operating frequency is a complex value which depends on the piston mass and the dynamic friction due to its movement in the cylinder. On the basis of the determined frequency, the equations for thermoacoustically induced heat transport and acoustic power generation were solved for a channel with a temperature gradient at its ends. The results of the numerical simulation show the features of the oscillation initialization process and demonstrate that the acoustic power generated during the transient exceeds the steady-mode power by a factor of 3 to 4. This does not, however, mean that this power can be utilized continuously, since it exists only in the transient mode, which lasts only 30 to 40 s. Experiments were carried out on a small-scale PTE.
The results show that the acoustic power is in the range 0.7 to 1.05 W for the frequency range f = 13 to 18 Hz and pressure amplitudes of 11 to 12 kPa. These experimental data correlate satisfactorily with the numerical modelling results. The mathematical model can be straightforwardly applied to thermoacoustic devices with variable thermal-reservoir temperatures and variable transduction loads, which are expected to occur in practical implementations of portable thermoacoustic engines.
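The element-wise relations between pressure and volume velocity can be sketched as 2×2 acoustic transfer matrices that are multiplied along the network. The minimal example below uses a lossless duct element only; the viscous and thermal losses, regenerator porosity, and piston dynamics of the actual model are omitted, and sign conventions for the matrix vary in the literature.

```python
import cmath
import math

def duct_matrix(length, area, freq, rho=1.2, c=343.0):
    """2x2 transfer matrix of a lossless duct relating the complex
    amplitudes (p, U) at one end to those at the other end."""
    k = 2 * math.pi * freq / c          # acoustic wavenumber
    z0 = rho * c / area                 # characteristic impedance p/U
    kl = k * length
    return [[cmath.cos(kl), -1j * z0 * cmath.sin(kl)],
            [-1j * cmath.sin(kl) / z0, cmath.cos(kl)]]

def chain(*mats):
    """Multiply the element matrices in order to get the network matrix."""
    def mul(a, b):
        return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
                 a[0][0] * b[0][1] + a[0][1] * b[1][1]],
                [a[1][0] * b[0][0] + a[1][1] * b[1][0],
                 a[1][0] * b[0][1] + a[1][1] * b[1][1]]]
    out = [[1, 0], [0, 1]]
    for m in mats:
        out = mul(out, m)
    return out
```

Two useful sanity checks follow from the physics: a lossless element has a unit-determinant (reciprocal) matrix, and two half-length ducts chained together reproduce the full-length duct. In the complete model, the operating frequency is found as the (complex) root of the closure condition of the chained network.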

Keywords: nonlinear processes, pulse tube engine, thermal excitation, standing wave

Procedia PDF Downloads 364
181 Lactate Biostimulation for Remediation of Aquifers Affected by Recalcitrant Sources of Chloromethanes

Authors: Diana Puigserver Cuerda, Jofre Herrero Ferran, José M. Carmona Perez

Abstract:

In the transition zone between aquifers and basal aquitards, DNAPL pools of chlorinated solvents are more recalcitrant than at other depths in the aquifer. Although degradation of carbon tetrachloride (CT) and chloroform (CF) occurs in this zone, it is a slow process, which is why an adequate remediation strategy is necessary. The working hypothesis of this study is that biostimulation of the transition zone of an aquifer contaminated by CT and CF can be an effective remediation strategy. This hypothesis was tested at a site on an unconfined aquifer in which the major contaminants were CT and CF of industrial origin and where the hydrochemical background was rich in other compounds that can hinder the natural attenuation of chloromethanes. Field studies and five laboratory microcosm experiments were carried out on groundwater and sediments to identify: i) the degradation processes of CT and CF; ii) the structure of the microbial communities; and iii) the microorganisms implicated in this degradation. For this, concentrations of contaminants and co-contaminants (nitrate and sulfate), Compound Specific Isotope Analysis, molecular techniques (Denaturing Gradient Gel Electrophoresis), and clone library analysis were used.
The main results were: i) degradation of CT and CF occurred in the groundwater and in the less conductive sediments; ii) sulfate-reducing conditions in the transition zone were strong and similar to those in the source of contamination; iii) two microorganisms (Azospira suillum and a bacterium of the order Clostridiales) compatible with carrying out the reductive dechlorination of CT, CF, and their degradation products (dichloromethane and chloromethane) were identified in the transition zone in both the field and laboratory experiments; iv) these two microorganisms were present at the high starting concentrations of the microcosm experiments (similar to those in the DNAPL source) and remained present until the last day of the lactate biostimulation; and v) the lactate biostimulation produced the fastest and highest degradation rates and promoted the elimination of other electron acceptors (e.g. nitrate and sulfate). All these results are evidence that lactate biostimulation can be effective in remediating the source and plume, especially in the transition zone, and they highlight the environmental relevance of treating contaminated transition zones in industrial contexts similar to the one studied.
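Compound Specific Isotope Analysis results like those used here are commonly quantified with the Rayleigh fractionation model, which converts the isotopic enrichment of the residual compound into an estimated fraction degraded. A minimal sketch follows; the enrichment factor in the example is illustrative, since the abstract does not report ε values for CT or CF.

```python
import math

def fraction_degraded(delta_sample, delta_source, epsilon):
    """Fraction of the compound degraded, B = 1 - f, from the Rayleigh model
    ln((d_sample + 1000) / (d_source + 1000)) = (epsilon / 1000) * ln(f).

    delta values (per mil vs. the reference standard) and the isotopic
    enrichment factor epsilon (per mil, negative for normal isotope
    effects) are the usual CSIA reporting quantities.
    """
    ln_f = math.log((delta_sample + 1000.0) / (delta_source + 1000.0)) * 1000.0 / epsilon
    return 1.0 - math.exp(ln_f)
```

With a normal (negative) enrichment factor, the residual compound becomes isotopically heavier as degradation proceeds, so a sample δ value above the source δ value indicates in-situ biodegradation, and the magnitude of the shift bounds the fraction removed.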

Keywords: Azospira suillum, lactate biostimulation of carbon tetrachloride and chloroform, reductive dechlorination, transition zone between aquifer and aquitard

Procedia PDF Downloads 167