Search results for: solution parameters

9521 3D Modeling for Frequency and Time-Domain Airborne EM Systems with Topography

Authors: C. Yin, B. Zhang, Y. Liu, J. Cai

Abstract:

Airborne EM (AEM) is an effective geophysical exploration tool, especially suitable for rugged mountain areas. In these areas, topography has serious effects on AEM system responses; however, little research has so far been reported on topographic effects in airborne EM systems. In this paper, an edge-based unstructured finite-element (FE) method is developed for 3D topographic modeling of both frequency- and time-domain airborne EM systems. Starting from the frequency-domain Maxwell equations, a vector Helmholtz equation is derived to obtain a stable and accurate solution. Considering that the AEM transmitter and receiver are both located in the air, the scattered-field method is used in our modeling. The Galerkin method is applied to discretize the Helmholtz equation into the final FE equations. Solving the FE equations yields the frequency-domain AEM responses. To accelerate the calculation, the response of the source in free space is used as the primary field, and the PARDISO direct solver is used to handle the problem with multiple transmitting sources. After the frequency-domain AEM responses are calculated, a Hankel transform is applied to obtain the time-domain AEM responses. To check the accuracy of the present algorithm and to analyze the characteristics of topographic effects on airborne EM systems, both the frequency- and time-domain AEM responses are simulated for three model groups: 1) a flat half-space model that has a semi-analytical solution for the EM response; 2) a valley or hill earth model; 3) a valley or hill earth with an anomalous body embedded. Numerical experiments show that, close to the node points of the topography, AEM responses exhibit sharp changes. Special attention needs to be paid to topographic effects when interpreting AEM survey data over rugged topographic areas. Besides, the profile of the AEM responses presents a mirror relation with the topographic earth surface. In contrast to the topographic effect, which mainly occurs at the high-frequency end and in the early time channels, the EM responses of underground conductors mainly occur at low frequencies and in the later time channels. For the signal of the same time channel, the dB/dt field reflects the change of conductivity better than the B-field. The research of this paper will serve airborne EM in the identification and correction of topographic effects.
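
For context, frequency-domain AEM forward modeling of this kind typically starts from a scattered-field vector Helmholtz equation of the following standard quasi-static form (given here only as background; the sign and the exact source term depend on the adopted time convention and may differ from the authors' derivation):

\nabla \times \nabla \times \mathbf{E}^{s} + i\omega\mu_{0}\sigma\,\mathbf{E}^{s} = -\,i\omega\mu_{0}\,(\sigma - \sigma_{p})\,\mathbf{E}^{p}

where \mathbf{E}^{s} is the scattered electric field, \mathbf{E}^{p} the primary field of the source in free space, \sigma the model conductivity, and \sigma_{p} the background (primary) conductivity.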

Keywords: 3D, Airborne EM, forward modeling, topographic effect

Procedia PDF Downloads 300
9520 Microsimulation of Potential Crashes as a Road Safety Indicator

Authors: Vittorio Astarita, Giuseppe Guido, Vincenzo Pasquale Giofre, Alessandro Vitale

Abstract:

Traffic microsimulation has been used extensively to evaluate the consequences of different traffic planning and control policies in terms of travel time delays, queues, pollutant emissions, and other commonly measured performance indicators, while traffic safety has not been considered in common traffic microsimulation packages as a measure of performance for different traffic scenarios. Vehicle conflict techniques, introduced at intersections in the early traffic research carried out at the General Motors laboratory in the USA and in the Swedish traffic conflict manual, have been applied to vehicle trajectories simulated in microscopic traffic simulators. The concept is that microsimulation can be used as a basis for calculating the number of conflicts that define the safety level of a traffic scenario. This allows engineers to identify unsafe road traffic maneuvers and helps in finding the right countermeasures to improve safety. Unfortunately, most commonly used indicators do not consider conflicts between single vehicles and roadside obstacles and barriers, even though a great number of vehicle crashes take place with roadside objects or obstacles; only some recently proposed indicators have tried to address this issue. This paper introduces a new procedure, based on the simulation of potential crash events, for the evaluation of safety levels in microsimulation traffic scenarios, which also takes into account potential crashes with roadside objects and barriers. The procedure can be used to define new conflict indicators. The proposed simulation procedure generates, by random perturbation of vehicle trajectories, a set of potential crashes which can be evaluated accurately in terms of DeltaV, the energy of the impact, and/or the expected number of injuries or casualties. The procedure can also be applied to real trajectories, giving rise to new surrogate safety performance indicators, which can be considered "simulation-based". The methodology and a specific safety performance indicator are described and applied to a simulated test traffic scenario. Results indicate that the procedure is able to evaluate safety levels both at the intersection level and in the presence of roadside obstacles. The procedure produces results that are expressed in the same unit of measure for both vehicle-to-vehicle and vehicle-to-roadside-object conflicts. The total energy per square meter of all generated crashes can be used and is shown on a map of the test network, after the application of a threshold, to highlight the most dangerous points. Without any detailed calibration of the microsimulation model and without any calibration of the parameters of the procedure (standard values have been used), it is possible to identify dangerous points. A preliminary sensitivity analysis has shown that the results are not dependent on the different energy thresholds and different parameters of the procedure. This paper introduces a specific new procedure and its implementation in the form of a software package that is able to assess road safety, also considering potential conflicts with roadside objects. Some of the principles at the base of this specific model are discussed. The procedure can be applied in common microsimulation packages once vehicle trajectories and the positions of roadside barriers and obstacles are known.
The procedure has many calibration parameters, and research efforts will have to be devoted to comparisons with real crash data in order to obtain the parameter values that give an accurate evaluation of the risk of any traffic scenario.
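
As an illustration of the kind of severity measure mentioned above, the sketch below computes the DeltaV of a simulated two-vehicle conflict under the simplifying assumption of a perfectly plastic (momentum-conserving) collision; the masses, speeds, and energy proxy are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def delta_v(m1, m2, v1, v2):
    """DeltaV of each vehicle for a perfectly plastic collision.

    m1, m2: vehicle masses [kg]; v1, v2: velocity vectors [m/s].
    Both vehicles are assumed to move with the common post-impact velocity
    given by conservation of momentum.
    """
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    v_common = (m1 * v1 + m2 * v2) / (m1 + m2)   # post-impact velocity
    return np.linalg.norm(v_common - v1), np.linalg.norm(v_common - v2)

# Hypothetical potential crash generated by perturbing two simulated trajectories
dv_car, dv_truck = delta_v(1200.0, 1800.0, v1=[13.9, 0.0], v2=[0.0, 8.3])
# Simple severity proxy: kinetic energy associated with the velocity changes
energy_proxy = 0.5 * 1200.0 * dv_car**2 + 0.5 * 1800.0 * dv_truck**2
print(f"DeltaV car = {dv_car:.1f} m/s, DeltaV truck = {dv_truck:.1f} m/s, energy proxy = {energy_proxy:.0f} J")
```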

Keywords: road safety, traffic, traffic safety, traffic simulation

Procedia PDF Downloads 123
9519 The Contact between a Rigid Substrate and a Thick Elastic Layer

Authors: Nicola Menga, Giuseppe Carbone

Abstract:

Although contact mechanics has been widely focused on the study of contacts between half-spaces, it has recently been pointed out that, in the presence of finite-thickness elastic layers, the results of the contact problem show significant differences in the main contact quantities (e.g. contact area, penetration, mean pressure, etc.). Indeed, there exists a wide range of industrial applications demanding this kind of study, such as seal leakage prediction or pressure-sensitive coatings for electrical applications. In this work, we focus on the contact between a rigid profile and an elastic layer of thickness h confined under two different configurations: rigid constraint and applied uniform pressure. The elastic problem at hand has been formalized following the Green's function method and then solved numerically by means of a matrix inversion. We study different contact conditions, both considering and neglecting adhesive interactions at the interface. This leads to different solution techniques: the equilibrium solution for adhesive contacts is found, in terms of contact area for a given penetration, by making the total free energy of the system stationary; adhesiveless contacts, instead, are addressed by defining an equilibrium criterion, again on the contact area, relying on the fracture-mechanics stress intensity factor KI. In particular, we make KI vanish at the edges of the contact area, as is peculiar to adhesiveless elastic contacts. The results are obtained in terms of contact area, penetration, and mean pressure for both adhesive and adhesiveless contact conditions. As expected, in the case of a uniform applied pressure the slab turns out to be much more compliant than the rigidly constrained one. Indeed, we have observed that the peak value of the contact pressure, for both the adhesive and adhesiveless conditions, is much higher for the rigidly constrained configuration than in the case of applied uniform pressure. Furthermore, we observed that, for small contact areas, both systems behave in the same way, and pull-off occurs at approximately the same contact area and mean contact pressure. This is an expected result, since in this condition the ratio between the layer thickness and the contact size is very high and both layer configurations recover the half-space behavior, where the pull-off occurrence is mainly controlled by the adhesive interactions, which are kept constant among the cases.
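
A minimal numerical sketch of the matrix-inversion step described above is given below, assuming a generic discretized Green's function (compliance) matrix G relating nodal pressures to surface displacements; the kernel and indenter used here are placeholders, not the finite-thickness layer kernel derived in the paper.

```python
import numpy as np

def solve_contact(gap, G, tol=1e-10, max_iter=200):
    """Solve a frictionless, adhesionless contact by matrix inversion.

    gap[i]  : separation imposed by the rigid profile at node i (negative = overlap)
    G[i, j] : compliance (Green's function) matrix: displacement at i due to unit pressure at j
    Returns nodal pressures p with p >= 0, and p = 0 where the surfaces separate.
    """
    n = len(gap)
    active = np.ones(n, dtype=bool)          # start by assuming full contact
    p = np.zeros(n)
    for _ in range(max_iter):
        idx = np.where(active)[0]
        # enforce displacement = imposed overlap on the current active set
        p_active = np.linalg.solve(G[np.ix_(idx, idx)], -gap[idx])
        p[:] = 0.0
        p[idx] = p_active
        if (p_active >= -tol).all():         # no tensile pressure left: converged
            break
        active[idx[p_active < -tol]] = False # release nodes in tension and iterate
    return p

# Hypothetical 1D example: parabolic rigid indenter, placeholder exponential-decay kernel
x = np.linspace(-1.0, 1.0, 101)
gap = x**2 / 2.0 - 0.05                                     # 0.05 = imposed penetration
G = 0.01 * np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)   # placeholder compliance kernel
p = solve_contact(gap, G)
print("contact half-width ≈", x[p > 0].max() if (p > 0).any() else 0.0)
```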

Keywords: contact mechanics, adhesion, friction, thick layer

Procedia PDF Downloads 494
9518 A Quadratic Model to Early Predict the Blastocyst Stage with a Time Lapse Incubator

Authors: Cecile Edel, Sandrine Giscard D'Estaing, Elsa Labrune, Jacqueline Lornage, Mehdi Benchaib

Abstract:

Introduction: The use of incubators equipped with time-lapse technology in Assisted Reproductive Technology (ART) allows continuous surveillance of embryos. Algorithms based on morphokinetic parameters are available to predict the potential outcome of an embryo. However, the proposed time-lapse algorithms do not take missing data into account, so some embryos cannot be classified. The aim of this work is to construct a predictive model that works even in the case of missing data. Materials and methods: Patients: A retrospective study was performed in the reproductive biology laboratory of the hospital ‘Femme Mère Enfant’ (Lyon, France) between 1 May 2013 and 30 April 2015. Embryos (n = 557) obtained from couples (n = 108) were cultured in a time-lapse incubator (Embryoscope®, Vitrolife, Goteborg, Sweden). Time-lapse incubator: The morphokinetic parameters obtained during the first three days of embryo life were used to build the predictive model. Predictive model: A quadratic regression was performed between the number of cells and time: N = a·T² + b·T + c, where N is the number of cells at time T (in hours). The regression coefficients were calculated with Excel software (Microsoft, Redmond, WA, USA); a program in Visual Basic for Applications (VBA) (Microsoft) was written for this purpose. The quadratic equation was used to derive a value that allows prediction of blastocyst formation: the synthetize value. The area under the curve (AUC) obtained from the ROC curve was used to assess the performance of the regression coefficients and of the synthetize value. A cut-off value was calculated for each regression coefficient and for the synthetize value so as to obtain two groups for which the difference in blastocyst formation rate was maximal. The data were analyzed with SPSS (IBM, Chicago, IL, USA). Results: Among the 557 embryos, 79.7% reached the blastocyst stage. The synthetize value corresponds to the value calculated at a time value of 99 hours, for which the highest AUC was obtained. The AUC was 0.648 (p < 0.001) for regression coefficient ‘a’, 0.363 (p < 0.001) for regression coefficient ‘b’, 0.633 (p < 0.001) for regression coefficient ‘c’, and 0.659 (p < 0.001) for the synthetize value. The results are presented as follows: blastocyst formation rate below the cut-off value versus blastocyst formation rate above the cut-off value. For regression coefficient ‘a’ the optimum cut-off value was -1.14×10⁻³ (61.3% versus 84.3%, p < 0.001), 0.26 for regression coefficient ‘b’ (83.9% versus 63.1%, p < 0.001), -4.4 for regression coefficient ‘c’ (62.2% versus 83.1%, p < 0.001) and 8.89 for the synthetize value (58.6% versus 85.0%, p < 0.001). Conclusion: This quadratic regression makes it possible to predict the outcome of an embryo even in the case of missing data. The three regression coefficients and the synthetize value could represent the identity card of an embryo. The ‘a’ coefficient represents the acceleration of cell division, and the ‘b’ coefficient the speed of cell division. We could hypothesize that the ‘c’ coefficient represents the intrinsic potential of an embryo, which could depend on the oocyte from which the embryo originates. These hypotheses should be confirmed by studies analyzing the relationship between the regression coefficients and ART parameters.
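
A minimal sketch of the fitting and evaluation steps described above, assuming a per-embryo record of (time, cell count) observations and a binary blastocyst outcome; the example data are invented, and only the evaluation time of T = 99 h is taken from the abstract.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fit_quadratic(times_h, n_cells):
    """Fit N = a*T^2 + b*T + c to one embryo's time-lapse record.

    Missing annotations simply mean fewer (T, N) points; the fit still works
    as long as at least three observations are available.
    """
    a, b, c = np.polyfit(times_h, n_cells, deg=2)
    return a, b, c

def synthetize_value(a, b, c, t_eval=99.0):
    """Predicted cell number at T = 99 h (the 'synthetize value' of the abstract)."""
    return a * t_eval**2 + b * t_eval + c

# Hypothetical records: (times in hours, cell counts, blastocyst outcome 1/0)
embryos = [
    ([18, 26, 44, 62], [2, 4, 8, 12], 1),
    ([20, 30, 50, 70], [2, 3, 5, 7], 0),
    ([17, 27, 45, 66], [2, 4, 9, 14], 1),
    ([22, 33, 52, 68], [2, 4, 6, 8], 0),
]
scores, outcomes = [], []
for times, cells, blastocyst in embryos:
    a, b, c = fit_quadratic(times, cells)
    scores.append(synthetize_value(a, b, c))
    outcomes.append(blastocyst)

print("AUC of the synthetize value:", roc_auc_score(outcomes, scores))
```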

Keywords: ART procedure, blastocyst formation, time-lapse incubator, quadratic model

Procedia PDF Downloads 295
9517 A Study of Preliminary Findings of Behavioral Patterns under Captive Conditions in Chinkara (Gazella bennettii) with Prospects for Future Conservation

Authors: Muhammad Idnan, Arshad Javid, Muhammad Nadeem

Abstract:

The present study was conducted from April 2013 to March 2014 to observe the behavioral parameters of chinkara (Gazella bennettii) under captive conditions by comparing captive-bred and wild-caught animals, with a view to conservation strategies. Understanding behavioral patterns plays a significant role in captive management. Due to the human population explosion and mechanized hunting, captive breeding seems to be the best way to supply sport hunting, bush meat, leather, and horns for traditional medicinal use. So far, captive management has largely been based on trial and error because of the lack of ethological data on this species, which is currently classified as Least Concern. The behavior of 30 adult chinkara [20 wild-caught (WC) and 10 captive-bred (CB)] was observed at the captive breeding facilities for ungulates at Ravi Campus, University of Veterinary and Animal Sciences, in Kasur district, which is situated southeast of Lahore. The average annual rainfall is about 650 mm, with frequent rain during the monsoon. Focal sampling was used to observe the various behavioral patterns of CB and WC chinkara. Behavioral parameters were broadly similar in WC and CB animals; however, when the differences were considered, WC males showed a significantly higher degree of agonistic interaction compared to CB males. These findings suggest that there is no immediate impact of captivity on the behavior of chinkara despite 10 generations of captivity. It is suggested that the chinkara is not suitable for domestication; for successful deer farming, further study of chinkara ethology is recommended.

Keywords: Chinkara (Gazella bennettii), domestication, deer farming, ex-situ conservation

Procedia PDF Downloads 148
9516 Eu+3 Ion as a Luminescent Probe in ZrO2: Gd+3 Co-Doped Nanophosphor

Authors: S. Manjunatha, M. S. Dharmaprakash

Abstract:

Well-defined 2D Eu+3 co-doped ZrO2:Gd+3 nanoparticles were successfully synthesized by a microwave-assisted solution combustion technique for luminescent applications. The present investigation reports a rapid and effective method for the synthesis of Eu+3 co-doped ZrO2:Gd+3 nanoparticles and a study of the luminescence behavior of the Eu+3 ion in ZrO2:Gd+3 nanostructures. The optical properties of the prepared nanostructures were investigated using UV-visible spectroscopy and photoluminescence spectra. The phase formation and the morphology of the nanoplatelets were studied by XRD, FESEM and HRTEM. The average grain size was found to be 45-50 nm. The presence of the Gd3+ ion increases the crystallinity of the material and hence acts as a good nucleating agent. The Eu+3 co-doped ZrO2:Gd3+ nanoplatelets give a strong red emission at 607 nm under an excitation wavelength of 255 nm.

Keywords: nanoparticles, XRD, TEM, photoluminescence

Procedia PDF Downloads 303
9515 Blood Flow Simulations to Understand the Role of the Distal Vascular Branches of Carotid Artery in the Stroke Prediction

Authors: Muhsin Kizhisseri, Jorg Schluter, Saleh Gharie

Abstract:

Atherosclerosis is the main cause of stroke, which is one of the deadliest diseases in the world. The carotid artery is a prominent location for atherosclerotic progression, which hinders the blood flow into the brain. The inclusion of computational fluid dynamics (CFD) in the diagnostic cycle, to understand the hemodynamics of the patient-specific carotid artery, can give insights into stroke prediction. Realistic outlet boundary conditions are an inevitable part of the numerical simulations and are one of the major factors determining the accuracy of the CFD results. Windkessel model-based outlet boundary conditions can represent more realistically the characteristics of the distal vascular branches of the carotid artery, such as the resistance to blood flow and the compliance of the distal arterial walls. This study aims to find the most influential distal branches of the carotid artery by using the Windkessel model parameters in the outlet boundary conditions. The parametric study of the Windkessel model parameters can include the geometrical features of the distal branches, such as radius and length. Incorporating variations of the geometrical features of the major distal branches, such as the middle cerebral artery, anterior cerebral artery, and ophthalmic artery, through the Windkessel model can help identify the most influential distal branch of the carotid artery. The results from this study can help physicians and stroke neurologists make a more detailed and accurate judgment of the patient's condition.
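
A minimal sketch of a three-element Windkessel outlet boundary condition of the kind referred to above, integrated with a simple explicit Euler scheme; the resistances, compliance, and flow waveform are hypothetical placeholders, not the parameters used in the study.

```python
import numpy as np

def windkessel_3_element(q, dt, Rp, Rd, C, p_distal=0.0):
    """Outlet pressure for a 3-element Windkessel driven by a flow waveform q(t).

    Rp: proximal resistance, Rd: distal resistance, C: compliance.
    dP_c/dt = (q - (P_c - p_distal)/Rd) / C   (pressure across the compliance)
    P_outlet = P_c + Rp * q
    """
    p_c = np.zeros(len(q))
    for i in range(1, len(q)):
        dp = (q[i - 1] - (p_c[i - 1] - p_distal) / Rd) / C
        p_c[i] = p_c[i - 1] + dt * dp            # explicit Euler step
    return p_c + Rp * q

# Hypothetical pulsatile flow (m^3/s) over one 1-second cardiac cycle
t = np.linspace(0.0, 1.0, 1000)
dt = t[1] - t[0]
q = 5e-6 * (1.0 + np.sin(2 * np.pi * t)) ** 2
# Placeholder parameters (Pa·s/m^3 and m^3/Pa); in practice these depend on
# the radius and length of each distal branch, as discussed in the abstract.
p_out = windkessel_3_element(q, dt, Rp=1.0e8, Rd=1.2e9, C=1.0e-9)
print("peak outlet pressure ≈ %.0f Pa" % p_out.max())
```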

Keywords: stroke, carotid artery, computational fluid dynamics, patient-specific, Windkessel model, distal vascular branches

Procedia PDF Downloads 197
9514 Modeling Depth Averaged Velocity and Boundary Shear Stress Distributions

Authors: Ebissa Gadissa Kedir, C. S. P. Ojha, K. S. Hari Prasad

Abstract:

In the present study, the depth-averaged velocity and boundary shear stress in non-prismatic compound channels with three different converging floodplain angles, ranging from 1.43° to 7.59°, have been studied. The analytical solutions were derived by considering the forces acting on the channel bed and walls. Five key parameters, i.e., the non-dimensional coefficient, the secondary flow term, the secondary flow coefficient, the friction factor, and the dimensionless eddy viscosity, were considered and discussed. An expression for the non-dimensional coefficient and the integration constants was derived from the boundary conditions. The model was applied to data sets from the present experiments and from experiments reported by other sources to examine and analyse the influence of floodplain converging angles on the depth-averaged velocity and boundary shear stress distributions. The results show that the non-dimensional parameter plays an important role in portraying the variation of the depth-averaged velocity and boundary shear stress distributions with different floodplain converging angles. Thus, the variation of the non-dimensional coefficient needs attention, since it affects the secondary flow term and the secondary flow coefficient in both the main channel and the floodplains. The analysis shows that the depth-averaged velocities are sensitive to the shear-stress-dependent non-dimensional model coefficient, and that the analytical solutions agree well with the experimental data when all five parameters are included. It is inferred that the developed model may be of interest to others engaged in complex flow modeling.
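
For background, depth-averaged models of this family are commonly built on the Shiono-Knight depth-averaged streamwise momentum equation, reproduced below in its classical prismatic-channel form (the non-prismatic formulation of the paper adds further terms for the converging floodplains):

\rho g H S_{0} - \frac{f}{8}\rho U_{d}^{2}\sqrt{1+\frac{1}{s^{2}}} + \frac{\partial}{\partial y}\left[\rho \lambda H^{2}\left(\frac{f}{8}\right)^{1/2} U_{d}\frac{\partial U_{d}}{\partial y}\right] = \frac{\partial}{\partial y}\Big[H(\rho \overline{UV})_{d}\Big]

where U_d is the depth-averaged velocity, H the local depth, S_0 the bed slope, s the side slope, f the friction factor, \lambda the dimensionless eddy viscosity, and the right-hand side the secondary-flow term.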

Keywords: depth-averaged velocity, converging floodplain angles, non-dimensional coefficient, non-prismatic compound channels

Procedia PDF Downloads 63
9513 Levansucrase from Zymomonas Mobilis KIBGE-IB14: Production Optimization and Characterization for High Enzyme Yield

Authors: Sidra Shaheen, Nadir Naveed Siddiqui, Shah Ali Ul Qader

Abstract:

In recent years, significant progress has been made in discovering and developing new bacterial polysaccharide-producing organisms possessing highly functional properties. Levan is a natural biopolymer of fructose which is produced by a transfructosylation reaction in the presence of levansucrase. Levansucrase is one of the industrially promising enzymes, offering a variety of applications in the fields of cosmetics, food and pharmaceuticals. Although levan has significant applications, its yield is not comparable to that of other biopolymers, owing to the inefficiency of the producer microorganisms. Among the wide range of levansucrase-producing microorganisms, Zymomonas mobilis is considered a potential candidate for large-scale production of this natural polysaccharide. The present investigation is concerned with the isolation of a levansucrase-producing natural isolate with maximum enzyme production. Furthermore, the production parameters were optimized to obtain a higher enzyme yield. Levansucrase was partially purified and characterized to study its applicability on an industrial scale. The results of this study revealed that the bacterial strain Z. mobilis KIBGE-IB14 was the best producer of levansucrase. Bacterial growth and enzyme production were greatly influenced by physical and chemical parameters. Maximum levansucrase production was achieved after 24 hours of fermentation at 30°C using a modified medium at pH 6.5. Contrary to other levansucrases, the one presented in the current study is able to produce a high amount of product in a relatively short period of time, with an optimum temperature of 35°C. Due to these advantages, this enzyme can be used on a large scale for the commercial production of levan and other important metabolites.

Keywords: levansucrase, metabolites, polysaccharides, transfructosylation

Procedia PDF Downloads 490
9512 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction

Authors: Yan Zhang

Abstract:

Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning-based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated from field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study from four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. In evolving to the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production/maintenance efficiency via any maintenance-related task. It covers a variety of topics, including but not limited to: failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine/device to transmit data all the way to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices. For example, a consumer machine such as an elevator uses completely different data transmission protocols compared to the sensor units in an environmental sensor network. The former may transfer data to the Cloud via WiFi directly; the latter usually uses radio communication inherent to the network, and the data are stored in a staging data node before being transmitted to the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults/failures. By showing a step-by-step process of data labeling, feature engineering, model construction and evaluation, we share the following experiences: (1) which specific data quality issues have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when the training data contain inter-dependent records. Fourth, we review the tools available to build such a data pipeline that digests the data and produces insights. We show the tools we use, including data ingestion, streaming data processing, machine learning model training, and the tool that coordinates/schedules different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study. (1) It summarizes the landscape and challenges of predictive maintenance applications. (2) It takes an example in aerospace with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.
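
A minimal sketch of the labeling and model-training step described above, assuming a run-to-failure table with unit_id, cycle, and sensor columns. The column names, the tiny toy data, the warning window, and the unit-wise train/test split are illustrative assumptions, not the paper's exact pipeline.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

WARNING_WINDOW = 2  # label the last 2 cycles as "about to fail" (purely illustrative; real windows are larger)

def label_run_to_failure(df):
    """Add RUL and a binary label to a run-to-failure table.

    Each unit is assumed to run until failure, so RUL = last cycle - current cycle.
    """
    df = df.copy()
    df["rul"] = df.groupby("unit_id")["cycle"].transform("max") - df["cycle"]
    df["fail_soon"] = (df["rul"] <= WARNING_WINDOW).astype(int)
    return df

# Hypothetical data frame; in practice loaded from the public aircraft-engine dataset
data = pd.DataFrame({
    "unit_id": [1] * 5 + [2] * 5,
    "cycle":   list(range(1, 6)) * 2,
    "sensor_1": [0.1, 0.2, 0.4, 0.7, 1.1, 0.1, 0.15, 0.3, 0.55, 0.9],
    "sensor_2": [5.0, 5.1, 5.3, 5.6, 6.0, 5.0, 5.05, 5.2, 5.4, 5.8],
})
data = label_run_to_failure(data)

# Split by unit (not by row) so that inter-dependent records of one engine
# never appear in both training and test sets.
train, test = data[data.unit_id == 1], data[data.unit_id == 2]
features = ["sensor_1", "sensor_2"]
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(train[features], train["fail_soon"])
print("test AUC:", roc_auc_score(test["fail_soon"], model.predict_proba(test[features])[:, 1]))
```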

Keywords: Internet of Things, machine learning, predictive maintenance, streaming data

Procedia PDF Downloads 374
9511 Use of Short Piles for Stabilizing the Side Slope of the Road Embankment along the Canal

Authors: Monapat Sasingha, Suttisak Soralump

Abstract:

This research presents the behavior of a road embankment slope along a canal stabilized by short piles. In this investigation, a centrifuge machine was used to model the conditions of the water levels in the canal. The centrifuge tests were performed at 35 g. To observe the movement of the soil, visual analysis was performed to evaluate the failure behavior. Conclusively, the use of short piles to stabilize the canal slope proved to be an effective solution; however, a certain amount of settlement was found behind the short pile rows.

Keywords: centrifuge test, slope failure, embankment, stability of slope

Procedia PDF Downloads 251
9510 Biophysical Features of Glioma-Derived Extracellular Vesicles as Potential Diagnostic Markers

Authors: Abhimanyu Thakur, Youngjin Lee

Abstract:

Glioma is a lethal brain cancer whose early diagnosis and prognosis are limited by the lack of a suitable technique for its early detection. Current approaches to the diagnosis of this lethal disease, including magnetic resonance imaging (MRI), computed tomography (CT), and invasive biopsy, have several limitations, demanding an alternative method. Recently, extracellular vesicles (EVs), mainly exosomes and microvesicles (MVs), have been used in numerous biomarker studies; they are found in most cells and biofluids, including blood, cerebrospinal fluid (CSF), and urine. Remarkably, glioma cells (GMs) release a high number of EVs, which have been found to cross the blood-brain barrier (BBB) and to mirror the constituents of the parent GMs, including proteins and lncRNA; however, the biophysical properties of EVs have not yet been explored as a biomarker for glioma. We isolated EVs from the cell culture conditioned medium of GMs and of a regular primary culture, and from the blood and urine of wild-type (WT) and glioma mouse models, and characterized them by nanoparticle tracking analysis, transmission electron microscopy, immunogold-EM, and differential light scanning. Next, we measured the biophysical parameters of GM-EVs using atomic force microscopy. Further, the functional constituents of the EVs were examined by FTIR and Raman spectroscopy. Exosomes and MVs derived from GMs, blood, and urine showed distinct biophysical parameters (roughness, adhesion force, and stiffness), different from those of regular primary glial cells, WT blood, and WT urine, which can be attributed to their characteristic functional constituents. Therefore, biophysical features can be potential diagnostic biomarkers for glioma.

Keywords: glioma, extracellular vesicles, exosomes, microvesicles, biophysical properties

Procedia PDF Downloads 130
9509 Study on the Effects of Grassroots Characteristics on Reinforced Soil Performance by Direct Shear Test

Authors: Zhanbo Cheng, Xueyu Geng

Abstract:

The vegetation slope protection technique is economical, aesthetic and practical. Herbs are widely used in practice because of their rapid growth, strong erosion resistance, obvious slope protection effect and simple implementation, in which the root system of the grass plays a very important role. In this paper, by varying the grassroots quantity, grassroots diameter, grassroots length and number of grassroots reinforcement layers, direct shear tests were carried out to study the change of the shear strength indexes of grassroots-reinforced soil under different reinforcement conditions and to analyse the effects of grassroots characteristics on reinforced soil performance. The laboratory test results show that: (1) for a given grassroots diameter, grassroots length and number of reinforcement layers, the shear strength and cohesion first increase and then decrease with increasing grassroots quantity; (2) for a given grassroots quantity, grassroots length and number of reinforcement layers, the shear strength and cohesion rise with increasing grassroots diameter; (3) for a given grassroots diameter and number of reinforcement layers, the shear strength and cohesion rise with increasing grassroots length within a certain range of grassroots quantity, while they first rise and then decline with increasing grassroots length once the grassroots quantity reaches a certain value; (4) for a given grassroots quantity, diameter and length, the shear strength and cohesion first climb and then decline with an increasing number of reinforcement layers; (5) the change of the internal friction angle is small across the different grassroots parameters. The research results are important for understanding the mechanism of vegetation protection for slopes and for determining the parameters of grass planting.
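
For reference, the shear strength indexes discussed above (cohesion c and internal friction angle \varphi) are the parameters of the standard Mohr-Coulomb failure criterion obtained from direct shear tests:

\tau_{f} = c + \sigma_{n}\tan\varphi

where \tau_f is the shear strength and \sigma_n the normal stress on the shear plane.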

Keywords: direct shear test, reinforced soil, grassroots characteristics, shear strength indexes

Procedia PDF Downloads 161
9508 Identification of Breeding Objectives for Begait Goat in Western Tigray, North Ethiopia

Authors: Hagos Abraham, Solomon Gizaw, Mengistu Urge

Abstract:

A sound breeding objective is the basis for genetic improvement in the overall economic merit of farm animals. The Begait goat is one of the identified breeds in Ethiopia; it is a multipurpose breed, serving as a source of cash income and of food (meat and milk). Despite its importance, no formal breeding objectives exist for the Begait goat. The objective of the present study was to identify breeding objectives for the breed through two approaches, an own-flock ranking experiment and deterministic bio-economic models, as a preliminary step towards designing sustainable breeding programs for the breed. In the own-flock ranking experiment, a total of forty-five households were visited at their homesteads and asked to select, with reasons, the first best, second best, third best and the most inferior does from their own flock. The age and the previous reproduction and production records of the identified animals were collected, and live body weight and some linear body measurements were taken. The bio-economic model included performance traits (weights, daily weight gain, kidding interval, litter size, milk yield, kid mortality, pregnancy and replacement rates) and economic (revenue and cost) parameters. Close agreement was observed between the farmers' ranking and the bio-economic model results. In general, the results of the present study indicated that Begait goat owners could improve the performance of their goats and the profitability of their farms by selecting for litter size, six-month weight, pre-weaning kid survival rate and milk yield.

Keywords: bio-economic model, economic parameters, own-flock ranking, performance traits

Procedia PDF Downloads 49
9507 The Assessment of Natural Ventilation Performance for Thermal Comfort in Educational Space: A Case Study of Design Studio in the Arab Academy for Science and Technology, Alexandria

Authors: Alaa Sarhan, Rania Abd El Gelil, Hana Awad

Abstract:

Over the last decades, the impact of thermal comfort on the working performance of the users and occupants of an indoor space has been a concern. Research papers have concluded that natural ventilation quality directly impacts the level of thermal comfort. Natural ventilation must therefore be taken into account during the design process in order to improve the inhabitants' efficiency and productivity. One example of daily long-term occupancy spaces is educational facilities, where many individuals spend long hours receiving a considerable amount of knowledge and additional time applying it. Thus, this research is concerned with users' level of thermal comfort in the design studios of educational facilities. The natural ventilation quality in a space is affected by a number of parameters, including orientation, opening design, and many other factors. This research aims to investigate the conscious manipulation of the physical parameters of the spaces and its impact on natural ventilation performance, which subsequently affects the thermal comfort of users. The research uses inductive and deductive methods to define natural ventilation design considerations, which are then used in a field study of a studio in the university building in Alexandria (AAST) to evaluate natural ventilation performance by analyzing and comparing the current case against the developed framework and by conducting computational fluid dynamics (CFD) simulation. The results show that natural ventilation performance satisfies only 50% of the natural ventilation design framework; these results are supported by the CFD simulation.

Keywords: educational buildings, natural ventilation, Mediterranean climate, thermal comfort

Procedia PDF Downloads 202
9506 Determination of Bromides, Chlorides and Fluorides in Case of Their Joint Presence in Ion-Conducting Electrolyte

Authors: V. Golubeva, O. Vakhnina, I. Konopkina, N. Gerasimova, N. Taturina, K. Zhogova

Abstract:

To improve chemical current sources, ion-conducting electrolytes based on Li halides (LiCl-KCl, LiCl-LiBr-KBr, LiCl-LiBr-LiF) are being developed. Chemical analytical methods for the determination of halides are necessary to control the electrolyte technology. The methods of classical analytical chemistry are of interest, as they are characterized by high accuracy. Using these methods is a difficult task because the halides have similar chemical properties. The objective of this work is to develop a titrimetric method for determining the content of bromides, chlorides, and fluorides in their joint presence in an ion-conducting electrolyte. In accordance with the developed method of analysis, to determine fluorides, the electrolyte sample is dissolved in diluted HCl acid; the fluorides are titrated with La(NO₃)₃ solution with potentiometric indication of the equivalence point, using a fluoride ion-selective electrode as the sensor. Chlorides and bromides do not form a hardly soluble compound with La and do not interfere with the result of the analysis. To determine the bromides, the sample is dissolved in diluted H₂SO₄ acid. The bromides are oxidized by a KIO₃ solution to Br₂, which is removed from the reaction zone by boiling. The excess of KIO₃ is titrated by the iodometric method. The content of bromides is calculated from the amount of KIO₃ spent on Br₂ oxidation. Chlorides and fluorides are not oxidized by KIO₃ and do not interfere with the result of the analysis. To determine the chlorides, the sample is dissolved in diluted HNO₃ acid, and the total content of chlorides and bromides is determined by visual mercurometric titration with diphenylcarbazone indicator. Fluorides do not form a hardly soluble compound with mercury and do not interfere with the determination. The content of chlorides is calculated taking into account the content of bromides in the electrolyte sample. The validity of the developed analytical method was evaluated by analyzing an internal reference material with known chloride, bromide and fluoride content. The analytical method allows the determination of chlorides, bromides and fluorides in their joint presence in an ion-conducting electrolyte within the following ranges and with the following relative total error (δ): for bromides from 60.0 to 65.0%, δ = ±2.1%; for chlorides from 8.0 to 15.0%, δ = ±3.6%; for fluorides from 5.0 to 8.0%, δ = ±1.5%. The analytical method allows the analysis of electrolytes and mixtures that contain chlorides, bromides and fluorides of the alkali metals (K, Na, Li) and their mixtures.
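
As an illustration of the final calculation step (chloride content corrected for the bromide already determined), the sketch below performs a simple mole balance on a hypothetical sample; the sample mass and titration results are invented for the example and are not data from the study.

```python
# Molar masses (g/mol)
M_BR, M_CL = 79.904, 35.453

def chloride_content(sample_mass_g, n_halide_total_mol, bromide_frac):
    """Chloride mass fraction from mercurometric titration of the total halide.

    n_halide_total_mol : total moles of (Cl- + Br-) found by mercurometric titration
    bromide_frac       : bromide mass fraction found earlier by the iodometric route
    """
    n_br = bromide_frac * sample_mass_g / M_BR   # moles of bromide in the sample
    n_cl = n_halide_total_mol - n_br             # remaining halide is chloride
    return n_cl * M_CL / sample_mass_g           # chloride mass fraction

# Hypothetical example: 0.500 g sample, 62% bromide, 5.50 mmol total halide titrated
frac_cl = chloride_content(0.500, 5.50e-3, 0.62)
print(f"chloride content ≈ {100 * frac_cl:.1f} %")
```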

Keywords: bromides, chlorides, fluorides, ion-conducting electrolyte

Procedia PDF Downloads 113
9505 Neuroprotective Effects of Dehydroepiandrosterone (DHEA) in Rat Model of Alzheimer’s Disease

Authors: Hanan F. Aly, Fateheya M. Metwally, Hanaa H. Ahmed

Abstract:

The current study was undertaken to elucidate a possible neuroprotective role of dehydroepiandrosterone (DHEA) against the development of Alzheimer’s disease in an experimental rat model. Alzheimer’s disease was produced in young female ovariectomized rats by intraperitoneal administration of AlCl3 (4.2 mg/kg body weight) daily for 12 weeks. Half of these animals also received DHEA orally (250 mg/kg body weight, three times weekly) for 18 weeks. Control groups of animals received either DHEA alone or no DHEA, or were not ovariectomized. After such treatment, the animals were analyzed for oxidative stress biomarkers such as hydrogen peroxide, nitric oxide and malondialdehyde, total antioxidant capacity, reduced glutathione, glutathione peroxidase, glutathione reductase, superoxide dismutase and catalase activities, the antiapoptotic marker Bcl-2 and brain-derived neurotrophic factor. Also, brain cholinergic markers (acetylcholinesterase and acetylcholine) were determined. The results revealed a significant increase in oxidative stress parameters associated with a significant decrease in antioxidant enzyme activities in Al-intoxicated ovariectomized rats. Significant depletion of brain Bcl-2 and brain-derived neurotrophic factor levels was also detected. Moreover, a significant elevation in brain acetylcholinesterase activity accompanied by a significant reduction in acetylcholine level was recorded. Significant amelioration of all investigated parameters was detected as a result of the treatment of Al-intoxicated ovariectomized rats with DHEA. These results were confirmed by histological examination of brain sections and clearly indicate a neuroprotective effect of DHEA against Alzheimer’s disease.

Keywords: Alzheimer’s disease, oxidative stress, apoptosis, dehydroepiandrosterone

Procedia PDF Downloads 310
9504 A Dual Spark Ignition Timing Influence for the High Power Aircraft Radial Engine Using a CFD Transient Modeling

Authors: Tytus Tulwin, Ksenia Siadkowska, Rafał Sochaczewski

Abstract:

A high-power radial reciprocating engine is characterized by a large displacement volume of the combustion chamber. Choosing the right moment for ignition is important for high performance, high reliability and ignition certainty. This work shows methods of simulating the ignition process and its impact on engine parameters. For given conditions, the flame speed is limited when deflagration combustion takes place. Therefore, the larger length scale of the combustion chamber, compared to a standard-size automotive engine, makes the combustion take a longer time to propagate. In order to speed up the mixture burn-up time, a second spark is introduced. A transient Computational Fluid Dynamics model capable of simulating multicycle engine processes was developed. The CFD model consists of the ECFM-3Z combustion and species transport models. The relative ignition timing difference between the two spark sources is constant. The temperature distribution on the engine walls was calculated in a separate conjugate heat transfer simulation. The in-cylinder pressure validation was performed for take-off power flight conditions. The influence of ignition timing on parameters such as in-cylinder temperature and rate of heat release was analyzed, and the spark timing most advantageous for the highest power output was chosen. The conditions around the spark plug locations for the pre-ignition period were also analyzed. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
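
As a back-of-the-envelope illustration of why a second spark shortens burn-up in a large chamber, the sketch below compares the flame travel time for single- and dual-spark ignition under the crude assumptions of a constant turbulent flame speed and flame fronts propagating from each plug; the bore size, flame speed, and engine speed are hypothetical, not values from the paper.

```python
def burn_time_ms(travel_distance_m, flame_speed_m_s):
    """Time for a flame front to cover a given distance at constant speed."""
    return 1000.0 * travel_distance_m / flame_speed_m_s

# Hypothetical large cylinder: 0.155 m bore, 15 m/s effective turbulent flame speed
bore, s_t = 0.155, 15.0

# Single spark at one side of the chamber: the flame must cross the full bore.
t_single = burn_time_ms(bore, s_t)
# Two sparks at opposite sides: each front only has to reach the middle.
t_dual = burn_time_ms(bore / 2.0, s_t)

print(f"single spark ≈ {t_single:.1f} ms, dual spark ≈ {t_dual:.1f} ms")
# At an assumed 2200 rpm, one crank degree lasts 1000*60/(2200*360) ≈ 0.076 ms,
# so halving the flame travel distance saves tens of crank degrees of burn duration.
```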

Keywords: CFD, combustion, ignition, simulation, timing

Procedia PDF Downloads 284
9503 Numerical Modeling of Storm Swells in Harbor by Boussinesq Equations Model

Authors: Mustapha Kamel Mihoubi, Hocine Dahmani

Abstract:

The purpose of this work is to study the agitation caused by storm waves in a harbor basin for different wave directions relative to the current breakwater layout, using a numerical model based on the Boussinesq shallow-water equations (MIKE 21 BW). Based on the attenuation of wave penetration, an optimal solution will be selected to be reproduced in a reduced-scale model. An alternative breakwater arrangement will also be proposed to reduce the agitation and the effects of swell reflection caused by the penetration of waves into the harbor.

Keywords: agitation, Boussinesq equations, combination, harbor

Procedia PDF Downloads 374
9502 Application of Response Surface Methodology to Optimize the Factor Influencing the Wax Deposition of Malaysian Crude Oil

Authors: Basem Elarbe, Ibrahim Elganidi, Norida Ridzuan, Norhyati Abdullah

Abstract:

Wax deposition in production pipelines and in transportation tubing from offshore to onshore is a critical problem in the oil and gas industry due to low-temperature conditions. It may lead to a reduction in production, shut-ins, plugging of pipelines and increased fluid viscosity. The most popular approach to solving this issue is the injection of a wax inhibitor into the pipeline. This research aims to determine the amount of wax deposition of Malaysian crude oil by estimating the effective parameters using Design-Expert version 7.1.6 and the response surface methodology (RSM). Important parameters affecting wax deposition, such as cold finger temperature, inhibitor concentration and experimental duration, were investigated. It can be concluded that the SA-co-BA copolymer had a higher capability of reducing wax under different conditions, with the minimum wax deposition found at 300 rpm, 14°C, 1 h and 1200 ppm, where the amount of wax collected was 0.12 g. The RSM approach was applied using a rotatable central composite design (CCD) to minimize the wax deposit amount. The analysis of variance (ANOVA) of the regression model revealed an R² value of 0.9906, indicating that the model explains 99.06% of the data variation, with only 0.94% of the total variation left unexplained. This indicates that the model is highly significant, confirming close agreement between the experimental and predicted values. In addition, the results showed that the amount of wax deposit decreased significantly with increasing temperature and increasing concentration of poly(stearyl acrylate-co-behenyl acrylate) (SABA), which were set at 14°C and 1200 ppm, respectively. The amount of wax deposit was successfully reduced to a minimum value of 0.01 g after the optimization.
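
A minimal sketch of fitting the second-order (quadratic) response surface that RSM/CCD studies of this kind produce, using hypothetical factor settings and wax-deposit responses (not the experimental data of the paper); the fitted surface is then evaluated on a grid to locate the settings giving the smallest predicted deposit.

```python
import numpy as np
from itertools import product

# Hypothetical CCD-style runs: temperature (°C), inhibitor concentration (ppm), duration (h)
X = np.array([
    [5, 400, 1], [5, 1200, 1], [14, 400, 1], [14, 1200, 1],
    [5, 400, 4], [5, 1200, 4], [14, 400, 4], [14, 1200, 4],
    [9.5, 800, 2.5], [9.5, 800, 2.5], [9.5, 400, 2.5], [9.5, 1200, 2.5],
], dtype=float)
y = np.array([0.35, 0.20, 0.18, 0.05, 0.50, 0.30, 0.25, 0.12, 0.22, 0.24, 0.28, 0.15])  # deposit (g), invented

def quad_design(X):
    """Second-order model terms: intercept, linear, squared and two-factor interactions."""
    n = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(n)]
    cols += [X[:, i] ** 2 for i in range(n)]
    cols += [X[:, i] * X[:, j] for i in range(n) for j in range(i + 1, n)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)   # least-squares RSM fit
r2 = 1 - np.sum((y - quad_design(X) @ beta) ** 2) / np.sum((y - y.mean()) ** 2)

# Grid search over the factor ranges for the minimum predicted deposit
grid = np.array(list(product(np.linspace(5, 14, 10), np.linspace(400, 1200, 9), np.linspace(1, 4, 7))))
pred = quad_design(grid) @ beta
best = grid[np.argmin(pred)]
print(f"R² = {r2:.3f}; predicted minimum deposit at T = {best[0]:.1f} °C, conc = {best[1]:.0f} ppm, t = {best[2]:.1f} h")
```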

Keywords: wax deposition, SABA inhibitor, RSM, operation factors

Procedia PDF Downloads 268
9501 Novel Routes to the Synthesis and Functionalization of Metallic and Semiconductor Thin Film and Nanoparticles

Authors: Hanan Al Chaghouri, Mohammad Azad Malik, P. John Thomas, Paul O’Brien

Abstract:

The process of assembling metal nanoparticles at the interface of two liquids has received a great deal of attention over the past few years due to a wide range of important applications and their unusual properties compared to bulk materials. We present a simple and low-cost synthesis of metal nanoparticles, core/shell structures and semiconductors, followed by the assembly of these particles between immiscible liquids. The aim of this talk is divided into three parts. Firstly, to describe the achievement of closed-loop recycling for producing cadmium sulfide as powders and/or nanostructured thin films for solar cells or other optoelectronic device applications by using different chain lengths of commercially available secondary amines of dithiocarbamato complexes. The approach can be extended to other metal sulfides, such as those of Zn, Pb, Cu, or Fe, and to many transition metals and oxides. Secondly, to synthesize significantly cheaper magnetic particles suited for the mass market. Ni/NiO nanoparticles with ferromagnetic properties at room temperature were among the smallest and strongest magnets (5 nm) made in solution. The applications of this work include the production of viable storage devices; another possibility is to disperse these nanocrystals in solution and use them to make ferrofluids, which have a number of mature applications. The third part is about preparing and assembling submicron silver, cobalt and nickel particles by using polyol methods and the liquid/liquid interface, respectively. Coinage metals like gold, copper and silver are suitable for plasmonic thin-film solar cells because of their low resistivity and strong interactions with visible light waves. Silver is the best choice for solar cell applications since it has low absorption losses and high radiative efficiency compared to gold and copper. Assembled cobalt and nickel films are promising for spintronic, magnetic, magneto-electronic and biomedical applications.

Keywords: metal nanoparticles, core/shell structures and semiconductors, ferromagnetic properties, closed loop recycling, liquid/liquid interface

Procedia PDF Downloads 452
9500 On the Study of the Electromagnetic Scattering by Large Obstacle Based on the Method of Auxiliary Sources

Authors: Hidouri Sami, Aguili Taoufik

Abstract:

We consider fast and accurate solutions of scattering problems by large perfectly conducting (PEC) objects, formulated through an optimization of the Method of Auxiliary Sources (MAS). We present various techniques used to reduce the total computational cost of the scattering problem. The first technique is based on replacing the object by an array of a finite number of small PEC objects of the same shape. The second solution reduces the problem by considering only half of the object. These two solutions are compared to results from the reference bibliography.
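
For readers unfamiliar with MAS, the sketch below sets up the basic (non-optimized) 2D version of the method for a PEC circular cylinder under TM plane-wave incidence: auxiliary line sources are placed on an inner surface and their amplitudes are found by enforcing the boundary condition in the least-squares sense. The geometry and discretization are illustrative and unrelated to the optimization techniques of the paper.

```python
import numpy as np
from scipy.special import hankel1

# 2D Method of Auxiliary Sources: TM plane wave on a PEC circular cylinder (sketch)
k = 2 * np.pi            # wavenumber for a unit wavelength (arbitrary units)
a = 2.0                  # cylinder radius
n_aux, n_col = 60, 120   # auxiliary sources (inside) and collocation points (on the surface)

# Auxiliary line sources on a smaller circle inside the PEC surface
phi_aux = np.linspace(0, 2 * np.pi, n_aux, endpoint=False)
aux = 0.8 * a * np.exp(1j * phi_aux)           # complex numbers used as 2D points
# Collocation points on the actual boundary
phi_col = np.linspace(0, 2 * np.pi, n_col, endpoint=False)
col = a * np.exp(1j * phi_col)

# Field of each auxiliary line source at each boundary point: H0^(1)(k * distance)
A = hankel1(0, k * np.abs(col[:, None] - aux[None, :]))
# Incident TM plane wave travelling along +x: Ez_inc = exp(i k x)
e_inc = np.exp(1j * k * col.real)

# Enforce Ez_total = 0 on the PEC boundary in the least-squares sense
c, *_ = np.linalg.lstsq(A, -e_inc, rcond=None)

# Check the boundary-condition residual on a denser set of test points
phi_test = np.linspace(0, 2 * np.pi, 4 * n_col, endpoint=False)
test = a * np.exp(1j * phi_test)
residual = hankel1(0, k * np.abs(test[:, None] - aux[None, :])) @ c + np.exp(1j * k * test.real)
print("max |Ez_total| on the boundary:", np.abs(residual).max())
```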

Keywords: method of auxiliary sources, scattering, large object, RCS, computational resources

Procedia PDF Downloads 226
9499 Ecological and Biological Effects of Pollution and Dredging Activities on Fisheries and Fisheries Products in Niger Delta Ecological Zone

Authors: Ikpesu, Thomas Ohwofasa, Babtunde Ilesanmi

Abstract:

The effects of anthropogenic activities on fish and fisheries products in Niger Delta water bodies were investigated. The rivers were selected based on their close proximity to contaminants and dredging activities, and three stations were chosen per river. The stations, chosen to depict downstream and upstream conditions, were visited and sampled on a monthly basis. The downstream stations are the polluted and heavily dredged sites, while the upstream station is far away, without any evidence of pollution or human activities. During these periods, fishes of the same species were collected and analyzed for morphological and physiological changes, after which they were returned to the rivers. The physico-chemical parameters of these stations were also measured. Morphological changes such as skin ulcerations and other lesions, as well as fungal infections, were observed in the downstream fishes. The fish upstream looked healthier and bigger (though their age could not be confirmed) than the downstream fishes. The physico-chemical parameters of the upstream and downstream stations differed significantly (p < 0.01). These anthropogenic effects must have interfered with the normal migration pattern of these fishes, because there were changes in the composition of the population and in species diversity at the sampled sites, with the upstream station retaining the true species diversity. The release of pollutants into the water in the Niger Delta areas may trigger naturally occurring biotoxicity cycles and other fish poisoning, and there is a risk of biomagnification of these poisons along the trophic levels. This makes a normally valuable food resource dangerous for human consumption, with instances of human death caused by such poisoning.

Keywords: anthropogenic, dredging, fisheries, niger delta, pollution, rivers

Procedia PDF Downloads 296
9498 Growth Performance and Meat Quality of Cobb 500 Broilers Fed Phytase and Tannase Treated Sorghum-Based Diets

Authors: Magaya Rutendo P., Mutibvu Tonderai, Nyahangare Emmanuel T., Ncube Sharai

Abstract:

This study aimed to evaluate the effects of phytase and tannase addition to sorghum-based broiler diets on growth performance and meat quality. Twelve experimental diets were formulated at three sorghum levels (0, 50, and 100%) and four enzyme levels: no enzyme, 5000 FTU phytase, 25 TU tannase, and a combination of 5000 FTU phytase plus 25 TU tannase. Data on voluntary feed intake, average weekly weight gain and feed conversion ratio were recorded and used to assess growth performance. Meat technical and nutritional parameters were used to determine meat quality. Broilers fed complete sorghum diets with the phytase and tannase enzyme combination had the highest feed intake in the first (24.4 ± 0.04 g/bird/day) and second (23.0 ± 1.06 g/bird/day) weeks of life, respectively. Complete sorghum diets with phytase (83.0 ± 0.88 g/bird/day) and tannase (122.0 ± 0.88 g/bird/day) showed the highest feed intake in the third and fourth weeks, respectively. Broilers fed 50% sorghum diets with tannase (135.3 ± 0.05 g/bird/day) and complete maize diets with phytase (158.1 ± 0.88 g/bird/day) had the highest feed intake during weeks five and six, respectively. Broilers fed a 50% sorghum diet without enzymes had the highest weight gain in the final week (606.5 ± 32.39 g). Comparable feed conversion was observed in birds fed complete maize and 50% sorghum diets. Dietary treatment significantly influenced the live body, carcass, liver, kidney and abdominal fat pad weights, and intestinal length. However, it did not affect the nutritional and technical parameters of Pectoralis major meat.

Keywords: feed efficiency, sorghum, carcass, exogenous enzymes

Procedia PDF Downloads 41
9497 Developing Stability Monitoring Parameters for NIPRIMAL®: A Monoherbal Formulation for the Treatment of Uncomplicated Malaria

Authors: Ekere E. Kokonne, Isimi C. Yetunde, Okoh E. Judith, Okafor E. Ijeoma, Ajeh J. Isaac, Olobayo O. Kunle, Emeje O. Martins

Abstract:

NIPRIMAL® is a monoherbal formulation of Nauclea latifolia used in the treatment of malaria. The stability of extracts made from plant material is essential to ensure the quality, safety and efficacy of the finished product. This study assessed the stability of the formulation under three different storage conditions: normal room temperature, infrared exposure and refrigeration. Differential scanning calorimetry (DSC) and thin layer chromatography (TLC) were used to monitor the formulations. The DSC analysis was done from 0°C to 350°C under the three storage conditions. The results obtained indicate that NIPRIMAL® was stable under all the storage conditions investigated. Thin layer chromatography after 6 months showed that there was no significant difference between the retention factor (RF) values for the various storage conditions. The reference sample had four spots with RF values of 0.47, 0.68, 0.76 and 0.82, respectively, and these spots were retained in the test formulations; the corresponding RF values after 6 months at room temperature and at refrigerated temperature were 0.56, 0.73, 0.80, 0.92 and 0.47, 0.68, 0.76, 0.82, respectively. On the other hand, the RF values obtained under infrared after 1 month (0.55, 0.74, 0.77, 0.93) varied slightly from the reference. The sample exposed to infrared had a lower heat capacity compared to those stored at room temperature or under refrigeration. A combination of TLC and DSC measurements was applied to assess the stability of NIPRIMAL®. Both methods were found to be rapid, sensitive and reliable in determining its stability. It is concluded that NIPRIMAL® can be stored under any of the tested conditions without degradation. This study is a major contribution towards developing appropriate stability monitoring parameters for herbal products.
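
For reference, the retention factor values quoted above follow the standard TLC definition

R_{F} = \frac{\text{distance travelled by the spot}}{\text{distance travelled by the solvent front}}

measured from the application point, so that 0 \le R_F \le 1.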

Keywords: differential scanning calorimetry, formulation, NIPRIMAL®, stability, thin layer chromatography

Procedia PDF Downloads 235
9496 Five Years Analysis and Mitigation Plans on Adjustment Orders Impacts on Projects in Kuwait's Oil and Gas Sector

Authors: Rawan K. Al-Duaij, Salem A. Al-Salem

Abstract:

Projects, the unique and temporary processes of achieving a set of requirements, have always been challenging; planning the schedule and budget and managing the resources and risks are mostly driven by similar past experience or the technical consultations of experts in the matter. Given the complexity of projects in scope, time and execution environment, Adjustment Orders are the tools used to reflect changes to the original project parameters after Contract signature. Adjustment Orders are the official/legal amendments to the terms and conditions of a live Contract. Reasons for issuing Adjustment Orders arise from changes in Contract scope, technical requirements and specifications, resulting in scope addition, deletion, or alteration; they can also be a combination of these parameters, resulting in an increase or decrease in time and/or cost. Most business leaders (handling projects in the interest of the owner) refrain from using Adjustment Orders, considering their main objectives of staying within budget and on schedule. Success in managing the changes results in uninterrupted execution and in the agreed project costs and schedule; nevertheless, this is not always practically achievable. In this paper, a detailed study utilizing Industrial Engineering & Systems Management tools such as Six Sigma, data analysis, and quality control was carried out on the organization’s five-year records of issued Adjustment Orders in order to investigate their prevalence and their time and cost impact. The analysis outcome revealed and helped to identify and categorize the predominant causes with the highest impacts, which were given the most weight in recommending corrective measures to reach the objective of minimizing the impacts of Adjustment Orders. The data analysis demonstrated no specific trend in the AO frequency over the past five years; however, the time impact is greater than the cost impact. Although Adjustment Orders might never be avoidable, this analysis offers some insight into the procedural gaps and where they most heavily impact the organization. Possible solutions are proposed, such as improving the project handling team’s coordination and communication, utilizing a blanket service contract, and modifying the project gate system procedures to minimize the possibility of similar struggles in the future. Projects in the oil and gas sector are always evolving and demand a certain amount of flexibility to sustain the goals of the field. As demonstrated, the uncertainty of project parameters, inadequate project definition, operational constraints and stringent procedures are the main factors resulting in the need for Adjustment Orders, and the recommendation is accordingly to address that challenge.

Keywords: adjustment orders, data analysis, oil and gas sector, systems management

Procedia PDF Downloads 146
9495 Company-Independent Standardization of Timber Construction to Promote Urban Redensification of Housing Stock

Authors: Andreas Schweiger, Matthias Gnigler, Elisabeth Wieder, Michael Grobbauer

Abstract:

Especially in the Alpine region, the areas available for new residential development are limited. One possible solution is to exploit the potential of existing settlements. Urban redensification, especially the addition of floors to existing buildings, requires efficient, lightweight constructions with short construction times. This topic is being addressed in the five-year Alpine Building Centre. The focus of this cooperation between Salzburg University of Applied Sciences and RSA GH Studio iSPACE is on transdisciplinary research in the fields of building and energy technology, building envelopes and geoinformation, as well as the transfer of research results to industry. One development objective is a wood panel construction system with a high degree of prefabrication, intended to optimize construction quality, construction time and applicability for small and medium-sized enterprises. The system serves as a reliable working basis for mastering the complex building task of redensification. The technical solution is the development of an open system in timber frame and solid wood construction which is suitable for a maximum two-storey addition to residential buildings. The applicability of the system is mainly influenced by the existing building stock. Therefore, timber frame and solid timber construction are combined where necessary to bridge large spans of the existing structure while keeping the dead weight as low as possible. Escape routes are usually constructed in reinforced concrete and are located outside the system boundary. Thus, within the framework of the legal and normative requirements of timber construction, a hybrid construction method for redensification is created. The component structure, load-bearing structure and detail constructions are developed in accordance with the relevant requirements. The results are directly applicable in individual cases, with the exception of the required verifications. In order to verify the practical suitability of the developed system, stakeholder workshops are held on the one hand, and the system is applied in the planning of a two-storey extension on the other hand. A company-independent construction standard offers the possibility of cooperation and the bundling of capacities in order to handle larger construction volumes in collaboration with several companies. Numerous further developments can take place on the basis of the system, which is under an open license. The construction system will support planners and contractors from design to execution. In this context, open means publicly published and freely usable and modifiable for own use, as long as the authorship and any deviations are mentioned. The companies are provided with a system manual, which contains the system description and an application manual. This manual will facilitate the selection of the correct component cross-sections for specific construction projects by means of complete component and detail specifications. This presentation highlights the initial situation, the motivation and the approach, but especially the technical solution as well as the possibilities for its application. After an explanation of the objectives and working methods, the component and detail specifications are presented as work results, together with their application.

Keywords: redensification, SME, urban development, wood building system

Procedia PDF Downloads 97
9494 Enhancing Heavy Oil Recovery: Experimental Insights into Low Salinity Polymer in Sandstone Reservoirs

Authors: Intisar, Khalifa, Salim, Al Busaidi

Abstract:

Recently, the synergistic combination of low salinity water flooding with polymer flooding has been a subject of paramount interest for the oil industry. Numerous studies have investigated the efficiency of enhanced oil recovery using low salinity polymer flooding (LSPF). However, there is no clear conclusion that can explain the incremental oil recovery, determine the main factors controlling the oil recovery process, and define the relative contribution of rock/fluid or fluid/fluid interactions to the extra oil recovery. Therefore, this study aims to perform a systematic investigation of the interactions between oil, polymer, low-salinity brine, and the sandstone rock surface, from pore to core scale, during LSPF. Partially hydrolyzed polyacrylamide (HPAM) polymer, Boise outcrop sandstone, a crude oil sample and reservoir cores from an Omani oil field, and brine at two different salinities were used in the study. Several experimental measurements were employed, including static bulk measurements of polymer solutions prepared with high- and low-salinity brines and single-phase displacement experiments, along with rheological, total organic carbon, and ion chromatography measurements to analyze ion exchange reactions, polymer adsorption, and viscosity loss. In addition, two-phase experiments were performed to demonstrate the oil recovery efficiency of LSPF. The results revealed that the incremental oil recovery from LSPF was attributed to the combination of a reduction in the water-oil mobility ratio, an increase in the repulsive forces at the crude oil/brine/rock interfaces, and an increase in the pH of the aqueous solution. In addition, lowering the salinity of the make-up brine resulted in a larger conformation (expansion) of the polymer molecules, which in turn resulted in less adsorption and a greater in-situ viscosity without any negative impact on injectivity; this plays a positive role in the oil displacement process. Moreover, the loss of viscosity in the effluent of the polymer solutions was lower in low-salinity than in high-salinity brine, indicating that an increase in cation concentration (mainly driven by Ca2+ ions) has a stronger effect on the viscosity of the high-salinity polymer solution than on the low-salinity one.
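
To make the mobility-ratio argument above concrete, the following minimal Python sketch shows how raising the aqueous-phase viscosity with polymer lowers the end-point water-oil mobility ratio M = (k_rw/mu_w)/(k_ro/mu_o). All numerical values (relative permeabilities, viscosities) are hypothetical and are not taken from the study; the sketch only illustrates the standard definition.

# Hypothetical values for illustration only; not experimental data from the study.
def mobility_ratio(k_rw, mu_w, k_ro, mu_o):
    """End-point water-oil mobility ratio (dimensionless)."""
    return (k_rw / mu_w) / (k_ro / mu_o)

k_rw, k_ro = 0.3, 0.8      # assumed end-point relative permeabilities
mu_oil = 200.0             # assumed heavy-oil viscosity, cP

for label, mu_aq in [("plain low-salinity brine", 1.0),
                     ("low-salinity polymer (in-situ)", 25.0)]:
    M = mobility_ratio(k_rw, mu_aq, k_ro, mu_oil)
    print(f"{label:32s} mu_aq = {mu_aq:6.1f} cP  ->  M = {M:.2f}")

A lower M indicates a more favorable, more piston-like displacement front, which is the sense in which the abstract links polymer viscosification to the incremental oil recovery.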

Keywords: polymer, heavy oil, low salinity, COBR interactions

Procedia PDF Downloads 75
9493 Additional Method for the Purification of Lanthanide-Labeled Peptide Compounds Pre-Purified by Weak Cation Exchange Cartridge

Authors: K. Eryilmaz, G. Mercanoglu

Abstract:

Aim: Purification of the final product, which is the last step in the synthesis of lanthanide-labeled peptide compounds, can be accomplished by different methods. Among these, the two most commonly used are C18 solid phase extraction (SPE) and weak cation exchanger cartridge elution. The C18 SPE method yields a high-purity final product, while elution from the weak cation exchanger cartridge is pH-dependent and ineffective in removing colloidal impurities. The aim of this work is to develop an additional purification method for the lanthanide-labeled peptide compound in cases where the desired radionuclidic and radiochemical purity of the final product cannot be achieved because of a pH problem or colloidal impurities. Material and Methods: For colloidal impurity formation, 3 mL of water for injection (WFI) was added to 30 mCi of 177LuCl3 solution and allowed to stand for 1 day. 177Lu-DOTATATE was synthesized using the EZAG ML-EAZY module (10 mCi/mL). After synthesis, the final product was mixed with the colloidal impurity solution (total volume: 13 mL, total activity: 40 mCi). The resulting mixture was trapped on an SPE C18 cartridge. The cartridge was washed with 10 mL of saline to carry the impurities to the waste vial. The product trapped in the cartridge was eluted with 2 mL of 50% ethanol and collected into the final product vial through a 0.22 µm filter. The final product was diluted with 10 mL of saline. Radiochemical purity before and after purification was analyzed by HPLC (column: ACE C18-100A, 3 µm, 150 x 3.0 mm; mobile phase: water-acetonitrile-trifluoroacetic acid (75:25:1); flow rate: 0.6 mL/min). Results: The UV and radioactivity detector traces of the HPLC analysis showed that the colloidal impurities were completely removed from the 177Lu-DOTATATE/colloidal impurity mixture by the purification method. Conclusion: The improved purification method can be used as an additional step to remove impurities remaining after lanthanide-peptide syntheses in which weak cation exchange purification is used as the last step. Purification of the final product and GMP compliance (final aseptic filtration and sterile disposable system components) are two major advantages.
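
As a rough back-of-the-envelope check on the figures quoted above (40 mCi loaded, 2 mL ethanol eluate plus 10 mL saline dilution), the short Python sketch below computes the nominal activity concentration of the purified product and applies a simple physical-decay correction. It assumes loss-free recovery from the cartridge purely for illustration; the 177Lu half-life of about 6.65 days is a standard literature value, not a figure reported in the abstract.

import math

A0_mCi = 40.0                  # total activity loaded onto the cartridge (from the abstract)
final_volume_mL = 2.0 + 10.0   # 2 mL ethanol eluate + 10 mL saline dilution
half_life_d = 6.65             # 177Lu physical half-life in days (literature value)

def decayed_activity(a0, t_days, t_half=half_life_d):
    """Activity remaining after t_days of physical decay (no chemical losses assumed)."""
    return a0 * math.exp(-math.log(2) * t_days / t_half)

print(f"Nominal concentration at elution: {A0_mCi / final_volume_mL:.2f} mCi/mL")
print(f"Activity after 1 day of decay:    {decayed_activity(A0_mCi, 1.0):.1f} mCi")

Under these assumptions the purified product would be dispensed at roughly 3.3 mCi/mL immediately after elution; the actual figure depends on the recovery yield of the cartridge step.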

Keywords: lanthanide, peptide, labeling, purification, radionuclide, radiopharmaceutical, synthesis

Procedia PDF Downloads 146
9492 Flexible Design Solutions for Complex Free-Form Geometries Aimed to Optimize Performances and Resources Consumption

Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu

Abstract:

By using smart digital tools such as generative design (GD) and digital fabrication (DF), highly topical problems concerning resource optimization (materials, energy, time) can be solved, and free-form applications or products can be created. In this new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a complete rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, a column-type element with connections forming an adaptive 3D surface, using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying parameter values, and the relationships between forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms, or swarm intelligence, each of which has limitations that make it applicable only in certain cases. The paper presents the stages of the design process and the shape-grammar-type algorithm. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms using an algorithm conceived to apply its generating logic to different input geometries. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and the optimal solution is finally selected. The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process), and unique geometric models of high performance.
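
One of the generative procedures named above, the Lindenmayer system, can be illustrated with a minimal Python sketch. The axiom and rewriting rule below are hypothetical and chosen only to show how a small rule set, applied iteratively, generates increasingly complex strings that can then be interpreted as geometry (for example via turtle graphics); this is not the shape-grammar algorithm developed in the paper.

# Minimal Lindenmayer-system sketch; axiom and rules are illustrative only.
def l_system(axiom: str, rules: dict, iterations: int) -> str:
    """Rewrite every symbol according to `rules`, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Convention: F = draw a segment, + / - = turn, [ / ] = push / pop a branch.
rules = {"F": "F[+F]F[-F]F"}
print(l_system("F", rules, iterations=2))

Each additional iteration expands the description of the form, which is exactly the property generative procedures exploit: the designer varies the rules and parameters, and the algorithm produces a family of candidate geometries from which the optimal one is selected.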

Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture

Procedia PDF Downloads 359