Search results for: optimum weight vector
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6406

556 Geochemical Study of the Bound Hydrocarbon in the Asphaltene of Biodegraded Oils of Cambay Basin

Authors: Sayani Chatterjee, Kusum Lata Pangtey, Sarita Singh, Harvir Singh

Abstract:

Biodegradation systematically alters the chemical and physical properties of crude oil, showing sequential depletion of n-alkanes, cycloalkanes and aromatics, which increases its specific gravity, viscosity and the abundance of heteroatom-containing compounds. Biodegradation also changes the molecular fingerprints and geochemical parameters of degraded oils, making source and maturity identification inconclusive or ambiguous. Asphaltene is equivalent to the most labile part of the respective kerogen and generally has a high molecular weight. Its complex chemical structure, with substantial microporous units, makes it suitable to occlude hydrocarbons expelled from the source. The occluded molecules are well preserved by the macromolecular structure and thus protected from secondary alteration; they retain primary organic geochemical information over geological time. The present study involves the extraction of these occluded hydrocarbons from the asphaltene cage through mild oxidative degradation, using reagents such as hydrogen peroxide (H₂O₂) and acetic acid (CH₃COOH), on purified asphaltene of the biodegraded oils of the Mansa, Lanwa and Santhal fields in the Cambay Basin. The extracted occluded hydrocarbons were studied to establish oil-to-oil and oil-to-source correlations in the Mehsana block of the Cambay Basin. n-Alkane and biomarker analysis of these occluded hydrocarbons through GC and GC-MS shows a biomarker imprint similar to that of the normal oils in the area, and hence they are correlatable with them. The abundance of C29 steranes and the presence of oleanane, gammacerane and 4-methyl sterane indicate that the oils are derived from terrestrial organic matter deposited in a stratified saline water column in a marine environment, with moderate maturity (VRc 0.6-0.8). The oil-source correlation study suggests that the oils are derived from the Jotana-Warosan Low area.
The developed geochemical technique for extracting the occluded hydrocarbons has effectively resolved the ambiguity resulting from the inconclusive fingerprint of the biodegraded oil, and the method can also be applied to other biodegraded oils.

Keywords: asphaltene, biomarkers, correlation, mild oxidation, occluded hydrocarbon

Procedia PDF Downloads 141
555 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads

Authors: Raja Umer Sajjad, Chang Hee Lee

Abstract:

Stormwater runoff is the leading contributor to pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors, since the success of a monitoring program depends mainly on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012-2014) from a mixed land use site within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed throughout the year. The investigation of a large number of water quality parameters is time-consuming and resource intensive; in order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV) and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implication of sampling time on monitoring results, the number of samples required during a storm event, and the impact of the seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus and heavy metals such as lead, chromium and copper, whereas Chemical Oxygen Demand (COD) was identified as a surrogate for organic matter.
The CVs among the monitored water quality parameters were high (ranging from 3.8 to 15.5), suggesting that a grab sampling design for estimating mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was only 2% between two different sample size approaches, i.e., 17 samples per storm event and 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of a storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
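The surrogate-screening step described above (CV per parameter, PCA on standardized data, inspection of the correlation matrix) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's dataset: the parameter names and co-variation structure are assumptions made to mimic the reported TSS-turbidity-metals relationship.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical storm-event samples (rows = events); turbidity and Pb are
# made to co-vary with TSS, COD is independent -- illustrative values only.
n = 50
tss = rng.lognormal(4.0, 0.6, n)
data = np.column_stack([
    tss,
    0.8 * tss + rng.normal(0, 20, n),     # turbidity
    0.005 * tss + rng.normal(0, 0.1, n),  # lead
    rng.lognormal(3.0, 0.5, n),           # COD (organic-matter proxy)
])
names = ["TSS", "turbidity", "Pb", "COD"]

# Coefficient of variation per parameter (sd / mean), as used in the study.
cv = data.std(axis=0, ddof=1) / data.mean(axis=0)

# PCA on standardized data via SVD; loadings on PC1 show which parameters
# cluster together and are therefore candidate surrogate groups.
z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)
_, s, vt = np.linalg.svd(z, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance explained per component
pc1_loadings = vt[0]

# Correlation matrix backing the biplot interpretation.
corr = np.corrcoef(z, rowvar=False)
```

On such data, TSS correlates strongly with turbidity and lead but weakly with COD, which is the pattern used in the study to nominate TSS as a surrogate.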

Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters

Procedia PDF Downloads 223
554 Non-Destructive Ultrasound Testing for the Determination of Elastic Characteristics of AlSi7Zn3Cu2Mg Foundry Alloy

Authors: A. Hakem, Y. Bouafia

Abstract:

Characterization of the materials used in various mechanical components is of great importance in their design. Several studies have been conducted by various authors to improve their physical and/or chemical properties in general, and their mechanical or metallurgical properties in particular. The foundry alloy AlSi7Zn3Cu2Mg is one of the main constituents of the mechanisms used in a wide range of industrial applications. Obtaining a reliable product is not an easy task; results proposed by different authors are sometimes contradictory. Due to their high mechanical characteristics, these alloys are widely used in engineering. Silicon improves casting properties, and magnesium enables heat treatment. It is thus possible to obtain various degrees of hardening and therefore an interesting compromise between tensile strength and yield strength, on the one hand, and elongation, on the other. These mechanical characteristics can be further enhanced by a series of mechanical or heat treatments. Owing to their light weight coupled with high mechanical characteristics, aluminum alloys are widely used in the automotive and aircraft industries. The present study focuses on the influence of heat treatments, which cause significant microstructural changes (usually hardening), applied with annealing temperatures varied in increments of 10°C and 20°C, on the evolution of the main elastic characteristics, the strength, the ductility and the structural characteristics of the AlSi7Zn3Cu2Mg foundry alloy cast in sand by gravity. The elastic properties are determined in three directions for each specimen of dimensions 200 × 150 × 20 mm³ by the ultrasonic method, based on acoustic (elastic) waves. The hardness, microhardness and structural characteristics are evaluated by a non-destructive method. The aim of this work is to study the hardening ability of the AlSi7Zn3Cu2Mg alloy by considering ten states.
To improve the mechanical properties of the raw casting, heat treatment for structural hardening is used; the addition of magnesium is necessary to increase the sensitivity to this specific heat treatment. The treatment consists of homogenization, which generates a diffusion of atoms into a substitutional solid solution, in a hardening furnace at 500°C for 8 h, followed immediately by quenching in water at room temperature (20-25°C), then ageing for 17 h at room temperature and annealing at different temperatures (150, 160, 170, 180, 190, 200, 220 and 240°C) for 20 h in an annealing oven. The specimens were allowed to cool inside the oven.
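The ultrasonic method referred to above recovers the isotropic elastic constants from the measured longitudinal and transverse wave velocities and the density. A minimal worked example, using typical handbook values for aluminum rather than the specific alloy studied:

```python
# Typical handbook values for aluminum (assumed, not the alloy's measured data):
rho = 2700.0   # density, kg/m^3
v_l = 6320.0   # longitudinal wave velocity, m/s
v_t = 3130.0   # transverse (shear) wave velocity, m/s

vl2, vt2 = v_l**2, v_t**2

# Standard isotropic relations between wave speeds and elastic constants:
G = rho * vt2                                        # shear modulus, Pa
nu = (vl2 - 2 * vt2) / (2 * (vl2 - vt2))             # Poisson's ratio
E = rho * vt2 * (3 * vl2 - 4 * vt2) / (vl2 - vt2)    # Young's modulus, Pa
```

With these inputs, E comes out near 70 GPa and nu near 0.34, the expected order of magnitude for aluminum alloys; measuring the velocities in three directions, as in the study, allows anisotropy to be checked.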

Keywords: aluminum, foundry alloy, magnesium, mechanical characteristics, silicon

Procedia PDF Downloads 241
553 Effects of Lateness Gene on Yield and Related Traits in Indica Rice

Authors: B. B. Rana, M. Yokota, Y. Shimizu, Y. Koide, I. Takamure, T. Kawano, M. Murai

Abstract:

Various genes which control or affect heading time have been found in rice. Among them, the Se1 and E1 loci play important roles in determining heading time by controlling photosensitivity. An isogenic-line pair of late and early lines was developed from progenies of the F1 from Suweon 258 × 36U. A lateness gene, tentatively designated "Ex", was found to control the difference in heading time between the early and late lines mentioned above. The present study was conducted to examine the effect of Ex on yield and related traits. The indica-type variety Suweon 258 was crossed with 36U, which is an Ur1 (Undulate rachis-1) isogenic line of IR36. In the F2 population, comparatively early-heading, late-heading and intermediate-heading plants segregated. Similar segregation into the three heading types was observed in the F3 and later generations. A late-heading plant and an early-heading plant were selected in the F8 population from an intermediate-heading F7 plant to develop the L and E lines of the isogenic-line pair, respectively. Experiments on L and E were conducted in a randomized block design with three replications. Transplanting was conducted on May 3 at a planting distance of 30 cm × 15 cm, with two seedlings per hill, in an experimental field of the Faculty of Agriculture, Kochi University. Chemical fertilizers containing N, P2O5 and K2O were applied at total nitrogen levels of 4 g/m², 9 g/m² and 18 g/m², denoted "N4", "N9" and "N18", respectively. Yield, yield components and other traits were measured. Ex delayed 80%-heading by 17 or 18 days in L as compared with E. In total brown rice yield (g/m²), L was 635, 606 and 590, and E was 577, 548 and 501, at N18, N9 and N4, respectively, indicating that Ex increased this trait by 10% to 18%. Ex increased yield on a 1.5 mm sieve (g/m²) by 9% to 15% at the three fertilizer levels. Ex increased the spikelet number per panicle by 16% to 22%.
As a result, the spikelet number per m² was increased by 11% to 18% at the three fertilizer levels. Ex decreased 1000-grain weight (g) by 2% to 4%. L was not significantly different from E in ripened-grain percentage, fertilized-spikelet percentage or the percentage of ripened grains to fertilized spikelets. Hence, it is inferred that Ex increased yield by increasing the spikelet number per panicle, and Ex could thus be utilized to develop high-yielding varieties for warmer districts.
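The yield inference above can be checked arithmetically: yield decomposes into spikelets per m² × ripened-grain fraction × grain weight, so the reported component changes should multiply out to roughly the reported yield gain. A quick consistency check using illustrative midpoints of the abstract's ranges (the exact midpoints are an assumption):

```python
# Yield ≈ (spikelets per m²) × (ripened-grain fraction) × (grain weight).
# Relative effects of Ex, taken at illustrative midpoints of the ranges:
spikelet_factor = 1.15   # spikelet number per m² up 11-18%
ripening_factor = 1.00   # ripened-grain percentage not significantly changed
grain_wt_factor = 0.97   # 1000-grain weight down 2-4%

yield_factor = spikelet_factor * ripening_factor * grain_wt_factor
# ~1.12, i.e. about +12%, consistent with the reported 10-18% yield increase
```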

Keywords: heading time, lateness gene, photosensitivity, yield, yield components

Procedia PDF Downloads 181
552 Investigation of a Novel Dual Band Microstrip/Waveguide Hybrid Antenna Element

Authors: Raoudane Bouziyan, Kawser Mohammad Tawhid

Abstract:

Microstrip antennas are low-profile, light in weight and conformal in structure, and are now developed for many applications. The main difficulty of the microstrip antenna is its narrow bandwidth. Several modern applications, such as satellite communications, remote sensing and multi-function radar systems, would benefit from a dual-band antenna operating from a single aperture. Some applications require covering both transmitting and receiving frequency bands that are spaced apart. Providing multiple antennas to handle multiple frequencies and polarizations becomes especially difficult if the available space is limited, as with airborne platforms and submarine periscopes. Dual-band operation can be realized from a single feed using a slot-loaded or stacked microstrip antenna, or from two separately fed antennas sharing a common aperture. The former design, when used in arrays, has certain limitations, such as a complicated beam-forming or diplexing network and difficulty in realizing good radiation patterns in both bands. The second technique provides more flexibility, as with separate feed systems the beams in each frequency band can be controlled independently. Another desirable feature of a dual-band antenna is easy adjustability of the upper and lower frequency bands. This work presents an investigation of a new dual-band antenna, which is a hybrid of microstrip and waveguide radiating elements. The low-band radiator is a shorted annular ring (SAR) microstrip antenna, and the high-band radiator is an aperture antenna. The hybrid antenna is realized by forming a waveguide radiator in the shorted region of the SAR microstrip antenna. It is shown that the upper-to-lower frequency ratio can be controlled by the proper choice of dimensions and dielectric material. Operation in both linear and circular polarization is possible in either band. Moreover, both broadside and conical beams can be generated in either band from this antenna element.
The Finite Element Method based software HFSS and the Method of Moments based software FEKO were employed to perform parametric studies of the proposed dual-band antenna. The antenna was not tested physically; therefore, in most cases, both HFSS and FEKO were employed to corroborate the simulation results.

Keywords: FEKO, HFSS, dual band, shorted annular ring patch

Procedia PDF Downloads 380
551 Vibroacoustic Modulation of Wideband Vibrations and Its Possible Application for Windmill Blade Diagnostics

Authors: Abdullah Alnutayfat, Alexander Sutin, Dong Liu

Abstract:

Wind turbines have become one of the most popular means of energy production. However, blade failures and maintenance costs have become significant issues in the wind power industry, so it is essential to detect initial blade defects to avoid the collapse of the blades and the structure. This paper aims to apply modulation of high-frequency blade vibrations by low-frequency blade rotation, which is close to the known Vibro-Acoustic Modulation (VAM) method. The high-frequency wideband blade vibration is produced by the interaction of the blade surfaces with environmental air turbulence, and the low-frequency modulation is produced by alternating bending stress due to gravity. The low-frequency load of rotating wind turbine blades ranges between 0.2-0.4 Hz and can reach up to 2 Hz in strong wind. The main difference between this study and previous work on VAM methods is the use of a wideband vibration signal from the blade's natural vibrations. Different features of the vibroacoustic modulation are considered using a simple model of a breathing crack. This model considers a simple mechanical oscillator whose parameters are varied by the low-frequency blade rotation. During the blade's operation, the internal stress caused by the weight of the blade modifies the crack's elasticity and damping. A laboratory experiment using steel samples demonstrates the possibility of VAM using a wideband probe noise signal. A cyclic load with a small amplitude was applied to the damaged test sample as a pump wave, and a small transducer generated a wideband probe wave. The received signal was demodulated using the Detection of Envelope Modulation on Noise (DEMON) approach. In addition, the experimental results were compared with the modulation index (MI) technique using a harmonic pump wave. The wideband and traditional VAM methods demonstrated similar sensitivity for early detection of invisible cracks.
Importantly, employing a wideband probe signal with the DEMON approach speeds up and simplifies testing since it eliminates the need to conduct tests repeatedly for various harmonic probe frequencies and to adjust the probe frequency.
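The DEMON step can be sketched numerically: a wideband noise signal amplitude-modulated at a low pump frequency is envelope-detected (here via an analytic signal built from the FFT), and the spectrum of the envelope then shows a line at the pump frequency. This is a minimal synthetic illustration; the sample rate, modulation depth and pump frequency are assumed values within the blade-rotation range quoted above, not the experiment's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 2000.0                      # sample rate, Hz (assumed)
t = np.arange(0, 10.0, 1.0 / fs)
fm = 0.5                         # pump frequency, Hz (within the 0.2-2 Hz range)

# Wideband probe: Gaussian noise amplitude-modulated by the low-frequency pump.
noise = rng.normal(size=t.size)
signal = (1.0 + 0.5 * np.cos(2 * np.pi * fm * t)) * noise

# Envelope via the analytic signal (Hilbert transform built from the FFT).
n = t.size
X = np.fft.fft(signal)
h = np.zeros(n)
h[0] = 1.0
h[n // 2] = 1.0
h[1:n // 2] = 2.0
envelope = np.abs(np.fft.ifft(X * h))

# DEMON spectrum: the envelope's spectrum reveals the modulation line at fm.
env = envelope - envelope.mean()
spec = np.abs(np.fft.rfft(env))
freqs = np.fft.rfftfreq(n, 1.0 / fs)
band = (freqs > 0.05) & (freqs < 5.0)   # search the low-frequency band
peak_freq = freqs[band][np.argmax(spec[band])]
```

The recovered peak sits at the pump frequency, which is the signature a breathing crack imprints on the wideband probe.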

Keywords: vibro-acoustic modulation, detection of envelope modulation on noise, damage, turbine blades

Procedia PDF Downloads 80
550 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses

Authors: Ayon Mukherjee

Abstract:

Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found at an interim stage to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application of such designs is limited in real-life clinical trials, where the responses rarely fit a given parametric form. On the other hand, robust estimates of the covariate-adjusted treatment effects are obtained under the parametric assumption. To balance these two requirements, designs are developed that are free from distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate and response histories to the present allocation. The optimal designs are based on biased coin procedures with a bias towards the better treatment arm: the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the Cox regression coefficients estimated sequentially. These expected target values are derived from constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted based on the covariate profile of the incoming patient.
To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, the response history, previous patients' covariates and the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of allocating a patient to a treatment arm. Based on simulation studies, it is found that the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference. However, the former procedure, being discrete, tends to be slower in converging towards the expected target allocation proportion. The link-function-based design achieves the highest skewness of patient allocation to the best treatment arm and is thus ethically the best design. Other comparative merits of the proposed designs are highlighted, and their preferred areas of application are discussed. It is concluded that the proposed CARA designs can be considered suitable alternatives to traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable adaptations to the designs. Moreover, the proposed designs enable more patients to be treated with the better treatment during the trial, making the designs more ethically attractive to patients. An existing clinical trial has been redesigned using these methods.
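The sequential-allocation mechanics described above can be illustrated with a small simulation. This is a deliberately simplified sketch: it uses binary responses and a square-root-of-success-rate target as stand-ins for the paper's proportional-hazards responses and Cox-coefficient-based targets, and the Hu-Zhang form of the doubly-adaptive biased coin as the allocation function. All numerical choices (success probabilities, gamma, prior counts) are assumptions for illustration only.

```python
import random

random.seed(42)

# Hypothetical success probabilities for the two arms (arm A is better).
p_true = {"A": 0.7, "B": 0.5}

def dbcd_prob(prop_a, target_a, gamma=2.0):
    """Hu-Zhang doubly-adaptive biased coin: probability of assigning the
    next patient to arm A, pulling the observed allocation proportion
    toward the current estimated target."""
    if prop_a <= 0.0 or prop_a >= 1.0:
        return target_a
    num = target_a * (target_a / prop_a) ** gamma
    den = num + (1 - target_a) * ((1 - target_a) / (1 - prop_a)) ** gamma
    return num / den

successes = {"A": 1, "B": 1}   # smoothed counts (a weak prior)
counts = {"A": 2, "B": 2}
n_a, n_total = 0, 0

for _ in range(500):
    # Estimated target: allocate in proportion to the square roots of the
    # estimated success rates (stand-in for the Cox-based target).
    pa = successes["A"] / counts["A"]
    pb = successes["B"] / counts["B"]
    target_a = pa ** 0.5 / (pa ** 0.5 + pb ** 0.5)

    prop_a = n_a / n_total if n_total else 0.5
    arm = "A" if random.random() < dbcd_prob(prop_a, target_a) else "B"

    n_total += 1
    n_a += arm == "A"
    counts[arm] += 1
    successes[arm] += random.random() < p_true[arm]

final_prop_a = n_a / n_total   # ends up above 1/2: more patients on arm A
```

The skew of `final_prop_a` above one half is the ethical property discussed in the abstract: more patients are treated on the better arm while randomization is preserved.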

Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability

Procedia PDF Downloads 146
549 In Vitro Propagation of Vanilla planifolia Using Nodal Explants and Varied Concentrations of Naphthaleneacetic Acid (NAA) and 6-Benzylaminopurine (BAP)

Authors: Jessica Arthur, Duke Amegah, Kingsley Akenten Wiafe

Abstract:

Background: Vanilla planifolia bears the only edible fruit of the orchid family (Orchidaceae) among the over 35,000 orchid species found worldwide. In Ghana, vanilla has been found in the wild, but it is underutilized for commercial production, most likely due to a lack of knowledge of the best NAA and BAP combinations for in vitro propagation and for successful acclimatization of regenerated plants. The growing interest in, and global demand for, elite Vanilla planifolia plants and natural vanilla flavour emphasize the need for an effective industrial-scale micropropagation protocol. Tissue culture systems are increasingly used to grow disease-free plants, and reliable in vitro methods can also produce plantlets of species with typically modest proliferation rates. This study sought to develop an efficient protocol for in vitro propagation of vanilla from nodal explants by testing different concentrations of NAA and BAP for proliferation of the entire plant. Methods: Nodal explants with dormant axillary buds were obtained from year-old laboratory-grown Vanilla planifolia plants. MS medium was prepared with a nutrient stock solution (containing macronutrients, micronutrients, iron solution and vitamins) and semi-solidified using phytagel. It was supplemented with different concentrations of NAA and BAP to induce multiple shoots and roots (0.5 mg/L BAP with NAA at 0, 0.5, 1.0, 1.5 and 2.0 mg/L, and vice versa). The explants were sterilized, cultured in labelled test tubes and incubated at 26°C ± 2°C with a 16/8-hour light/dark cycle. Data on shoot and root growth, leaf number, node number and survival percentage were collected over three consecutive two-week periods. The data were square-root transformed and subjected to ANOVA and LSD tests at a 5% significance level using the R statistical package. Results: Shoots emerged 8 days and roots 12 days after inoculation, with a 94% survival rate.
For the NAA treatments, MS medium supplemented with 2.0 mg/L NAA resulted in the highest shoot length (10.45 cm), the maximum root number (1.51), the maximum shoot number (1.47) and the highest number of leaves (1.29). MS medium containing 1.0 mg/L NAA produced the highest number of nodes (1.62) and the greatest root length (14.27 cm). A similar growth pattern was observed for the BAP treatments. MS medium supplemented with 1.5 mg/L BAP resulted in the highest shoot length (14.98 cm), the highest number of nodes (4.60), the highest number of leaves (1.75) and the maximum shoot number (1.57). MS media containing 0.5 mg/L BAP and 1.0 mg/L BAP generated the maximum root number (1.44) and the greatest root length (13.25 cm), respectively. However, the best concentration combinations for maximizing shoots and roots were media containing 1.5 mg/L BAP with 0.5 mg/L NAA, and 1.0 mg/L NAA with 0.5 mg/L BAP, respectively. These concentrations were optimum for the in vitro growth and production of Vanilla planifolia. Significance: This study presents a standardized protocol for laboratories to produce clean vanilla plantlets, enhancing cultivation in Ghana and beyond. It provides insights into Vanilla planifolia's growth patterns and hormone responses, aiding future research and cultivation.
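The analysis pipeline described in the Methods (square-root transform, then one-way ANOVA with an LSD follow-up) can be sketched as follows. The shoot-length values and group sizes are synthetic stand-ins, not the study's data, and the LSD critical t value is taken from tables rather than computed.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical shoot-length measurements (cm) for three NAA levels;
# the means and spreads are illustrative assumptions.
groups = [
    rng.normal(6.0, 1.0, 10),    # 0.5 mg/L NAA
    rng.normal(8.0, 1.0, 10),    # 1.0 mg/L NAA
    rng.normal(10.5, 1.0, 10),   # 2.0 mg/L NAA
]

# Square-root transform, as applied in the study before ANOVA.
groups = [np.sqrt(g) for g in groups]

# One-way ANOVA by hand: F = MS_between / MS_within.
k = len(groups)
n = sum(len(g) for g in groups)
grand = np.mean(np.concatenate(groups))
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_within = ss_within / (n - k)
f_stat = (ss_between / (k - 1)) / ms_within

# LSD for pairwise comparison of two groups of 10 at alpha = 0.05:
# t_{0.025, n-k} * sqrt(MS_within * (1/n_i + 1/n_j)); t ~ 2.05 for 27 df.
lsd = 2.05 * np.sqrt(ms_within * (1 / 10 + 1 / 10))
```

Pairs of transformed group means that differ by more than `lsd` are declared significantly different, which is how the "highest shoot length" claims in the Results would be supported.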

Keywords: Vanilla planifolia, in vitro propagation, plant hormones, MS media

Procedia PDF Downloads 37
548 Sulforaphane Alleviates Muscular Dystrophy in Mdx Mice by Activation of Nrf2

Authors: Chengcao Sun, Cuili Yang, Shujun Li, Ruilin Xue, Liang Wang, Yongyong Xi, Dejia Li

Abstract:

Background: Sulforaphane (SFN), one of the most important isothiocyanates in the human diet, is known to have chemopreventive and antioxidant activities in different tissues via activation of NF-E2-related factor 2 (Nrf2)-mediated induction of antioxidant/phase II enzymes, such as heme oxygenase-1 (HO-1) and NAD(P)H quinone oxidoreductase 1 (NQO1). However, its effects on muscular dystrophy remain unknown. This work was undertaken to evaluate the effects of sulforaphane on Duchenne muscular dystrophy (DMD). Methods: 4-week-old mdx mice were treated with SFN by gavage (2 mg/kg body weight per day) for 8 weeks. Blood was collected from the eye socket every week, and the tibialis anterior, extensor digitorum longus, gastrocnemius, soleus and triceps brachii muscles and heart samples were collected after the 8-week gavage. Force measurements and exercise capacity assays were performed. GSH/GSSG ratio, TBARS, CK and LDH levels were analyzed by spectrophotometric methods. H&E staining was used for histological and morphometric analysis of the skeletal muscles of mdx mice, and Evans blue dye staining was used to assess the sarcolemmal integrity of mdx mice. Further, the role of sulforaphane in the Nrf2/ARE signaling pathway was analyzed by ELISA, western blot and qRT-PCR. Results: SFN treatment increased the expression and activity of the muscle phase II enzymes NQO1 and HO-1 in an Nrf2-dependent manner. SFN significantly increased the skeletal muscle mass, muscle force (~30%), running distance (~20%) and GSH/GSSG ratio (~3.2-fold) of mdx mice, and decreased the activities of plasma creatine phosphokinase (CK) (~45%) and lactate dehydrogenase (LDH) (~40%), gastrocnemius hypertrophy (~25%), myocardial hypertrophy (~20%) and MDA levels (~60%). SFN treatment also reduced central nucleation (~40%), fiber size variability and inflammation, and improved the sarcolemmal integrity of mdx mice.
Conclusions: Collectively, these results show that SFN can improve muscle function and pathology and protect dystrophic muscle from oxidative damage in mdx mice through the Nrf2 signaling pathway, indicating that Nrf2 activation may have clinical implications for the treatment of patients with muscular dystrophy.

Keywords: sulforaphane, duchenne muscular dystrophy, Nrf2, oxidative stress

Procedia PDF Downloads 303
547 Biomechanical Analysis on Skin and Jejunum of Chemically Prepared Cat Cadavers Used in Surgery Training

Authors: Raphael C. Zero, Thiago A. S. S. Rocha, Marita V. Cardozo, Caio C. C. Santos, Alisson D. S. Fechis, Antonio C. Shimano, Fabrício S. Oliveira

Abstract:

Biomechanical analysis is an important factor in tissue studies. The objective of this study was to determine the feasibility of a new anatomical technique and to quantify the changes in the skin and jejunum resistance of cat cadavers throughout the process. Eight adult cat cadavers were used. For every kilogram of body weight, 120 mL of fixative solution (95% ethyl alcohol 96 GL and 5% pure glycerin) was applied via the external common carotid artery. Next, the carcasses were placed in a container with 96 GL ethyl alcohol for 60 days. After fixation, all carcasses were preserved in a 30% sodium chloride solution for 60 days. Before fixation, control samples were collected from the fresh cadavers; after fixation, three skin and jejunum fragments from each cadaver were tested monthly for strength and displacement until complete rupture in a universal testing machine. All results were analyzed by F-test (P < 0.05). In the jejunum, the force required to rupture the fresh samples and the samples fixed in alcohol for 60 days was 31.27 ± 19.14 N and 29.25 ± 11.69 N, respectively. For the samples preserved in the sodium chloride solution for 30 and 60 days, the force was 26.17 ± 16.18 N and 30.57 ± 13.77 N, respectively. Regarding the displacement required to rupture the samples, the values for fresh specimens and those fixed in alcohol for 60 days were 2.79 ± 0.73 mm and 2.80 ± 1.13 mm, respectively. For the samples preserved for 30 and 60 days in sodium chloride solution, the displacement was 2.53 ± 1.03 mm and 2.83 ± 1.27 mm, respectively. There was no statistical difference between the samples (P = 0.68 with respect to strength and P = 0.75 with respect to displacement). In the skin, the force needed to rupture the fresh samples and the samples fixed for 60 days in alcohol was 223.86 ± 131.5 N and 211.86 ± 137.53 N, respectively. For the samples preserved in sodium chloride solution for 30 and 60 days, the force was 227.73 ± 129.06 N and 224.78 ± 143.83 N, respectively.
Regarding the displacement required to rupture the samples, the values for fresh specimens and those fixed in alcohol for 60 days were 3.67 ± 1.03 mm and 4.11 ± 0.87 mm, respectively. For the samples preserved for 30 and 60 days in sodium chloride solution, the displacement was 4.21 ± 0.93 mm and 3.93 ± 0.71 mm, respectively. There was no statistical difference between the samples (P = 0.65 with respect to strength and P = 0.98 with respect to displacement). The resistance of the skin and intestines of the cat carcasses changed little when subjected to alcohol fixation and preservation in sodium chloride solution, each for 60 days, which is promising for use in surgery training. All experimental procedures were approved by the Municipal Legal Department (protocol 02.2014.000027-1). The project was funded by FAPESP (protocol 2015-08259-9).

Keywords: anatomy, conservation, fixation, small animal

Procedia PDF Downloads 270
546 Nanofiltration Membranes with Deposited Polyelectrolytes: Characterisation and Antifouling Potential

Authors: Viktor Kochkodan

Abstract:

The main problem arising in water treatment and desalination using pressure-driven membrane processes such as microfiltration, ultrafiltration, nanofiltration and reverse osmosis is membrane fouling, which seriously hampers the application of membrane technologies. One of the main approaches to mitigating membrane fouling is to minimize the adhesion interactions between a foulant and a membrane, and surface coating of membranes with polyelectrolytes seems to be a simple and flexible technique to improve membrane fouling resistance. In this study, the composite polyamide membranes NF-90, NF-270 and BW-30 were modified by electrostatic deposition of polyelectrolyte multilayers made from various polycationic and polyanionic polymers of different molecular weights. The anionic polyelectrolytes poly(sodium 4-styrene sulfonate), poly(vinyl sulfonic acid, sodium salt), poly(4-styrene sulfonic acid-co-maleic acid) sodium salt and poly(acrylic acid) sodium salt (PA), and the cationic polyelectrolytes poly(diallyldimethylammonium chloride), poly(ethylenimine) and poly(hexamethylene biguanide) were used for membrane modification. The effects of deposition time and the number of polyelectrolyte layers on the membrane modification were evaluated. It was found that the degree of membrane modification depends on the chemical nature and molecular weight of the polyelectrolytes used. The surface morphology of the prepared composite membranes was studied using atomic force microscopy. It was shown that the membrane surface roughness decreases significantly as the number of polyelectrolyte layers on the membrane surface increases. This smoothening of the membrane surface might contribute to the reduction of membrane fouling, as lower roughness is most often associated with a decrease in surface fouling.
Zeta potentials and water contact angles on the membrane surface before and after modification were also evaluated to provide additional information regarding membrane fouling. It was shown that the surface charge of the modified membranes could be switched between positive and negative by coating with a cationic or an anionic polyelectrolyte. On the other hand, the water contact angle was strongly affected when the outermost polyelectrolyte layer was changed. Finally, a distinct difference between the performance of the uncoated membranes and the polyelectrolyte-modified membranes was found during treatment of seawater in the non-continuous regime. A possible mechanism for the higher fouling resistance of the modified membranes is discussed.

Keywords: contact angle, membrane fouling, polyelectrolytes, surface modification

Procedia PDF Downloads 234
545 Development of a Miniature and Low-Cost IoT-Based Remote Health Monitoring Device

Authors: Sreejith Jayachandran, Mojtaba Ghods, Morteza Mohammadzaheri

Abstract:

The modern busy world runs on new embedded technologies based on computers and software; meanwhile, some people neglect their health and regular medical check-ups. Some postpone medical check-ups due to a lack of time and convenience, while others skip these regular evaluations and examinations because of large medical bills and hospital expenses. Engineers and medical experts have come together to develop a new telemonitoring device capable of monitoring, checking, and evaluating the health status of the human body remotely through the internet, for all kinds of people. The remote health monitoring device is a microcontroller-based embedded unit. Various sensors in this device are connected to the human body, and with the help of an Arduino UNO board, the required analogue data is collected from the sensors. The microcontroller on the Arduino board converts the collected analogue data into digital data and transfers that information to the cloud for storage, while the processed digital data is instantly displayed on the LCD attached to the device. By accessing the cloud storage with a username and password, the patient's health care team/doctors and other health staff can retrieve this data for assessment and follow-up of that patient. Besides that, family members/guardians can use and evaluate this data to stay aware of the patient's current health status. Moreover, the system is connected to a Global Positioning System (GPS) module, so in emergencies the care team can locate the patient or the person carrying the device. The setup continuously evaluates and transfers the data to the cloud, and the user can prefix a normal value range for the evaluation. For example, normal blood pressure is conventionally taken as 80/120 mmHg.
Similarly, the RHMS allows the user to fix the ranges of values regarded as normal. This IoT-based miniature system, (11×10×10) cm³ in size and weighing only 500 g, consumes just 10 mW. The smart monitoring system can be manufactured for 100 GBP and can be used not only in health systems but also in numerous other fields, including aerospace and transportation.
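The idea of prefixing normal value ranges and alerting on out-of-range readings can be sketched in a few lines of Python; the vital-sign names and thresholds below are illustrative assumptions, not values taken from the actual device firmware:

```python
# Hypothetical sketch of the RHMS range check: any reading outside its
# user-prefixed normal range produces an alert. Names and limits are
# illustrative, not from the device described in the abstract.

NORMAL_RANGES = {
    "systolic_bp": (90, 120),   # mmHg
    "diastolic_bp": (60, 80),   # mmHg
    "heart_rate": (60, 100),    # beats per minute
}

def check_vitals(readings):
    """Return alert messages for readings outside their normal range."""
    alerts = []
    for name, value in readings.items():
        low, high = NORMAL_RANGES[name]
        if not low <= value <= high:
            alerts.append(f"{name}={value} outside normal range [{low}, {high}]")
    return alerts
```

On the device itself, such a check would run before the data is displayed on the LCD and uploaded to the cloud.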

Keywords: embedded technology, telemonitoring system, microcontroller, Arduino UNO, cloud storage, global positioning system, remote health monitoring system, alert system

Procedia PDF Downloads 66
544 The Role of Metaheuristic Approaches in Engineering Problems

Authors: Ferzat Anka

Abstract:

Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and use resources inefficiently. In particular, different approaches may be required to solve the complex, global engineering problems frequently encountered in real life: the bigger and more complex a problem, the harder it is to solve. Such problems are called Nondeterministic Polynomial time (NP-hard) in the literature. The main reasons for recommending metaheuristic algorithms for such problems are their use of simple concepts, simple mathematical equations and structures, and derivative-free mechanisms, their avoidance of local optima, and their fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications. Thanks to these features, they can easily be embedded even in many hardware devices; accordingly, this approach can also be used in trending application areas such as IoT, big data, and parallel structures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study focuses on a new metaheuristic method merged with a chaotic approach. It is based on chaos theory and helps the underlying algorithm improve population diversity and convergence speed. The approach builds on the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. ChOA identifies four types of chimpanzee groups: attacker, barrier, chaser, and driver, and proposes a suitable mathematical model for them based on the varied intelligence and sexual motivations of chimpanzees. However, this algorithm suffers from a slow convergence rate and difficulty escaping local optimum traps when solving high-dimensional problems.
Although ChOA and some of its variants use strategies to overcome these problems, they have been observed to be insufficient. Therefore, in this study, a newly expanded variant is described. In the algorithm, called Ex-ChOA, hybrid models are proposed for position updates of search agents, and a dynamic switching mechanism is provided for transition phases. This flexible structure solves the slow convergence problem of ChOA and improves its accuracy on multidimensional problems, aiming at success on global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy of ChOA and solves its slow convergence problem; 2) it proposes new hybrid movement strategy models for position updates of search agents; 3) it achieves success in solving global, complex, and constrained problems; and 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on a total of 8 benchmark functions, as well as 2 classical constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA). In addition, the Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison since its working model is similar. The obtained results show that the proposed algorithm performs better than or equivalently to the compared algorithms.
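The general idea of chaos-assisted search, replacing uniform random steps with a chaotic sequence such as the logistic map and gradually switching from exploration to exploitation, can be illustrated with a deliberately minimal sketch. This is not the Ex-ChOA update rule, only a toy single-agent version of the principle:

```python
import random

def logistic_map(c, r=4.0):
    # Chaotic sequence in (0, 1); replaces uniform random numbers to
    # improve the diversity of the search, as described above.
    return r * c * (1.0 - c)

def chaotic_search(f, dim, bounds, iters=500, seed=1):
    """Toy chaos-assisted minimizer (an illustrative sketch, not Ex-ChOA)."""
    rng = random.Random(seed)
    lo, hi = bounds
    best = [rng.uniform(lo, hi) for _ in range(dim)]
    best_val = f(best)
    c, radius = 0.7, (hi - lo) / 2.0
    for _ in range(iters):
        # Propose a candidate around the best point using chaotic offsets.
        cand = []
        for d in range(dim):
            c = logistic_map(c)
            cand.append(min(max(best[d] + radius * (2.0 * c - 1.0), lo), hi))
        v = f(cand)
        if v < best_val:          # greedy acceptance
            best, best_val = cand, v
        radius *= 0.995           # dynamic switch: exploration -> exploitation
    return best, best_val

sphere = lambda x: sum(v * v for v in x)  # classic benchmark function
sol, val = chaotic_search(sphere, dim=5, bounds=(-10.0, 10.0))
```

A real ChOA variant would maintain a population of agents (attacker, barrier, chaser, driver) rather than a single search point.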

Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems

Procedia PDF Downloads 57
543 The Examination of Cement Effect on Isotropic Sands during Static, Dynamic, Melting and Freezing Cycles

Authors: Mehdi Shekarbeigi

Abstract:

The consolidation of loose substrates and substrate layers by introducing stabilizing materials is one of the most commonly used road construction techniques. Cement, lime, and fly ash, as well as asphalt emulsion, are common materials used for soil stabilization to enhance the soil's strength and durability. Cement can readily stabilize permeable materials such as sand in a relatively short time. In this research, ordinary Portland cement is selected for the stabilization of isotropic sand, and the effect of static and cyclic loading on the behavior of these soils is examined for various percentages of Portland cement. First, the soil's general properties are investigated; then static tests, including direct shear, density and uniaxial compression tests, and the California Bearing Ratio, are performed on the samples. After that, the dynamic behavior of cement on silica sand of the same grain size is analyzed. These experiments are conducted on samples with cement contents of 3, 6, and 9 percent at effective confining pressures of 0 to 1200 kPa in 200 kPa steps, according to the American Society for Testing and Materials (ASTM) D 3999 standard. Also, to examine the effect of temperature, samples are subjected to 0, 5, 10, and 20 melting and freezing cycles. Results of the static tests show that increasing the cement percentage increases the soil density and shear strength. The increase in uniaxial compressive strength is greater for samples with higher cement content and lower densities. The results also illustrate the relationship between uniaxial compressive strength and the cement weight parameters. Results of the dynamic experiments indicate that increasing the number of loading cycles and melting and freezing cycles enhances permeability and decreases the applied pressure.
According to the results of this research, samples containing 9% cement have the highest shear modulus and therefore the lowest soil permeability; this content can be considered the optimal amount. Also, increasing the effective confining pressure from 400 to 800 kPa increased the shear modulus of the sample by an average of 20 to 30 percent at small strains.

Keywords: cement, isotropic sands, static load, three-axis cycle, melting and freezing cycles

Procedia PDF Downloads 56
542 Distribution Routes Redesign through the Vehicle Routing Problem in Havana Distribution Center

Authors: Sonia P. Marrero Duran, Lilian Noya Dominguez, Lisandra Quintana Alvarez, Evert Martinez Perez, Ana Julia Acevedo Urquiaga

Abstract:

Cuban business and economic policy are in constant update while facing ever more knowledgeable and demanding clients. For that reason, optimizing processes and services has become fundamental for companies' competitiveness. One of Cuba's pillars, sustained since the triumph of the Cuban Revolution in 1959, is free health service for all those who need it. This service is offered without charge under the principle of preserving human life, but it implies costly management processes and logistics services to supply the necessary medicines to all the units that provide health care. One of the key actors in the medicine supply chain is the Havana Distribution Center (HDC), which is responsible for the delivery of medicines in the province, as well as for the acquisition of medicines from national and international producers and their subsequent transport to health care units and pharmacies on time and with the required quality. This HDC also supplies all other distribution centers in the country. Given the evident need for a supply chain actor specializing in medicine supply, the possibility of centralizing this operation in a logistics service provider is analyzed. Under this arrangement, pharmacies operate as clients of the logistics service center, whose main function is to centralize all logistics operations associated with the medicine supply chain. The HDC is precisely the logistics service provider in Havana, and it is the focus of this research. In 2017, the pharmacies suffered shortfalls in the availability of medicine due to deficiencies in the distribution routes. This is caused by the fact that the routes are not based on routing studies, in addition to the long distribution cycle. The distribution routes are fixed, attend only one type of customer, and respond to territorial location by municipality.
Taking into consideration the above-mentioned problem, the objective of this research is to optimize the route system of the Havana Distribution Center. To accomplish this objective, the techniques applied were document analysis, random sampling, and statistical inference, together with tools such as the Ishikawa diagram and the geographic software packages ArcGIS, OsmAnd, and MapInfo. As a result, four distribution alternatives were analyzed: the current routes, routes by customer type, routes by municipality, and a combination of the last two. It was demonstrated that the territorial location alternative does not take full advantage of transportation capacities or trip distances, which leads to elevated costs, breaking with the current modes of distribution and the current characteristics of the clients. The principal finding of the investigation is that the optimum distribution route design is the fourth one, which is formed by hospitals on one hand and the grouping of pharmacies, stomatology clinics, polyclinics, and maternal and elderly homes on the other. This solution breaks with territorial location by municipality and permits different distribution cycles depending on medicine consumption and transport availability.
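A simple construction heuristic for comparing such routing alternatives is nearest-neighbor route building. The sketch below, with made-up planar coordinates standing in for geocoded pharmacy locations, shows how a candidate route and its length could be scored; a real study would use road distances from the GIS tools named above rather than straight lines:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def nearest_neighbor_route(depot, stops):
    """Greedy vehicle-routing sketch: always visit the closest unserved stop.
    Coordinates and the straight-line metric are illustrative assumptions."""
    route, current, remaining = [depot], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)  # the vehicle returns to the distribution center
    return route

def route_length(route):
    """Total distance traveled along a route."""
    return sum(dist(a, b) for a, b in zip(route, route[1:]))
```

Each distribution alternative (by customer type, by municipality, or combined) can then be scored by summing `route_length` over its routes.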

Keywords: computerized geographic software, distribution, distribution routes, vehicle routing problem (VRP)

Procedia PDF Downloads 138
541 Accelerated Carbonation of Construction Materials by Using Slag from Steel and Metal Production as Substitute for Conventional Raw Materials

Authors: Karen Fuchs, Michael Prokein, Nils Mölders, Manfred Renner, Eckhard Weidner

Abstract:

Due to its high CO₂ emissions, the energy consumption for the production of sand-lime bricks is of great concern. In particular, the production of quicklime from limestone and the energy consumption for hydrothermal curing contribute to high CO₂ emissions; hydrothermal curing is carried out under a saturated steam atmosphere at about 15 bar and 200°C for 12 hours. Therefore, we are investigating the opportunity to replace quicklime and sand in the production of building materials with different types of slag, a calcium-rich waste from steel production. We are also investigating the possibility of substituting conventional hydrothermal curing with CO₂ curing. Six different slags (Linz-Donawitz (LD), ferrochrome (FeCr), ladle (LS), stainless steel (SS), ladle furnace (LF), electric arc furnace (EAF)) provided by "thyssenkrupp MillServices & Systems GmbH" were ground at "Loesche GmbH". Cylindrical blocks with a diameter of 100 mm were pressed at 12 MPa. The composition of the blocks varied between pure slag and mixtures of slag and sand. The effects of pressure, temperature, and time on the CO₂ curing process were studied in a 2-liter high-pressure autoclave. Pressures between 0.1 and 5 MPa, temperatures between 25 and 140°C, and curing times between 1 and 100 hours were considered. The quality of the CO₂-cured blocks was determined by measuring the compressive strength at "Ruhrbaustoffwerke GmbH & Co. KG". The degree of carbonation was determined by total inorganic carbon (TIC) and X-ray diffraction (XRD) measurements. The pH trends in the cross-section of the blocks were monitored using phenolphthalein as a liquid pH indicator. The parameter set that yielded the best-performing material was tested on all slag types. In addition, the method was scaled up to steel slag-based building blocks (240 mm × 115 mm × 60 mm) provided by "Ruhrbaustoffwerke GmbH & Co. KG" and CO₂-cured in a 20-liter high-pressure autoclave.
The results show that CO₂ curing of building blocks consisting of pure wetted LD slag leads to severe cracking of the cylindrical specimens. The high CO₂ uptake leads to an expansion of the specimens. However, if LD slag is used only proportionally to replace quicklime completely and sand proportionally, dimensionally stable bricks with high compressive strength are produced. The tests to determine the optimum pressure and temperature show 2 MPa and 50°C as promising parameters for the CO₂ curing process. At these parameters and after 3 h, the compressive strength of LD slag blocks reaches the highest average value of almost 50 N/mm². This is more than double that of conventional sand-lime bricks. Longer CO₂ curing times do not result in higher compressive strengths. XRD and TIC measurements confirmed the formation of carbonates. All tested slag-based bricks show higher compressive strengths compared to conventional sand-lime bricks. However, the type of slag has a significant influence on the compressive strength values. The results of the tests in the 20-liter plant agreed well with the results of the 2-liter tests. With its comparatively moderate operating conditions, the CO₂ curing process has a high potential for saving CO₂ emissions.

Keywords: CO₂ curing, carbonation, CCU, steel slag

Procedia PDF Downloads 87
540 Understanding Complexity at Pre-Construction Stage in Project Planning of Construction Projects

Authors: Mehran Barani Shikhrobat, Roger Flanagan

Abstract:

Construction planning and scheduling based on current tools and techniques is either deterministic in nature (Gantt chart, CPM) or applies only a very small probability of completion (PERT) to each task. However, every project embodies assumptions and influences and should start with a complete set of clearly defined goals and constraints that remain constant throughout the duration of the project. Construction planners continue to apply the traditional methods and tools of “hard” project management that were developed for “ideal projects,” neglecting the potential influence of complexity on the design and construction process. The aim of this research is to investigate the emergence and growth of complexity in project planning and to provide a model that considers the influence of complexity on the total project duration at the post-contract award pre-construction stage of a project. The literature review showed that complexity originates from different sources of environment, technical, and workflow interactions. These can be divided into two categories of complexity factors: first, project tasks, and second, project organisation and management. Project task complexity may originate from performance, lack of resources, or environmental changes for a specific task. Complexity factors that relate to organisation and management refer to workflow and the interdependence of different parts. The literature review highlighted the ineffectiveness of traditional tools and techniques in planning for complexity. In this research, the fundamental causes of complexity in construction projects were therefore investigated through a questionnaire with industry experts. The results were used to develop a model that considers the core complexity factors and their interactions. System dynamics were used to investigate the model and to consider the influence of complexity on project planning.
Feedback from experts revealed 20 major complexity factors that impact project planning. The factors are divided into five categories known as core complexity factors. To understand the weight of each factor in comparison, the Analytical Hierarchy Process (AHP) analysis method is used. The comparison showed that externalities are ranked as the biggest influence across the complexity factors. The research underlines that there are many internal and external factors that impact project activities and the project overall. This research shows the importance of considering the influence of complexity on the project master plan undertaken at the post-contract award pre-construction phase of a project.
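The AHP weighting step works from a pairwise comparison matrix of the factors. A minimal sketch of the standard column-normalization approximation of the priority vector, with a made-up 2×2 comparison rather than the study's actual factor matrix, is:

```python
def ahp_priorities(M):
    """Approximate AHP priority vector: normalize each column of the
    pairwise comparison matrix M, then average the rows. Also returns
    the consistency index CI = (lambda_max - n) / (n - 1)."""
    n = len(M)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    norm = [[M[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    weights = [sum(row) / n for row in norm]
    # lambda_max estimated from the ratios of (M @ w) to w.
    mw = [sum(M[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam = sum(mw[i] / weights[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1) if n > 1 else 0.0
    return weights, ci

# Example: one factor judged 3x as influential as the other.
w, ci = ahp_priorities([[1, 3], [1 / 3, 1]])
```

For the five core categories of the study, the same function applies to a 5×5 matrix, and the consistency index should be checked against the corresponding random index.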

Keywords: project planning, project complexity measurement, planning uncertainty management, project risk management, strategic project scheduling

Procedia PDF Downloads 83
539 Micelles Made of Pseudo-Proteins for Solubilization of Hydrophobic Biologicals

Authors: Sophio Kobauri, David Tugushi, Vladimir P. Torchilin, Ramaz Katsarava

Abstract:

Hydrophobically/hydrophilically modified functional polymers are of high interest in modern biomedicine due to their ability to solubilize water-insoluble or poorly soluble (hydrophobic) drugs. Among the many approaches being developed in this direction, one of the most effective is the use of polymeric micelles (PMs), micelles formed by amphiphilic block-copolymers, for the solubilization of hydrophobic biologicals. For therapeutic purposes, PMs are required to be stable and biodegradable, and quite a few amphiphilic block-copolymers have been described that are capable of forming stable micelles with good solubilization properties. For obtaining micelle-forming block-copolymers, polyethylene glycol (PEG) derivatives are desirable as the hydrophilic shell, because PEG represents the most popular biocompatible hydrophilic block and various hydrophobic blocks (polymers) can be attached to it. The construction of the hydrophobic core, however, owing to the complex requirements of micelle structure development, remains the main problem for nanobioengineers. Considering the above, our research goal was to obtain biodegradable micelles for the solubilization of hydrophobic drugs and biologicals. For this purpose, we used biodegradable polymers, pseudo-proteins (PPs) (synthesized from naturally occurring amino acids and other non-toxic building blocks, such as fatty diols and dicarboxylic acids), as the hydrophobic core, since these polymers show reasonable biodegradation rates and excellent biocompatibility. In the present study, we used the hydrophobic amino acid L-phenylalanine (MW 4000-8000 Da) instead of L-leucine. Amino-PEG (MW 2000 Da) was used as the hydrophilic fragment for constructing the micelles. The molecular weight of the PP (the hydrophobic core of the micelle) was regulated by varying the monomer ratios. Micelles were obtained by dissolving the synthesized amphiphilic polymer in water.
The micelle-forming properties were tested using dynamic light scattering (Malvern Zetasizer Nano ZS ZEN3600). The study showed that the obtained amphiphilic block-copolymer forms stable neutral micelles 100 ± 7 nm in size at 10 mg/mL concentration, which is considered an optimal range for pharmaceutical micelles. These preliminary data allow us to conclude that the obtained micelles are suitable for the delivery of poorly water-soluble drugs and biologicals.

Keywords: amino acid – L-phenylalanine, pseudo-proteins, amphiphilic block-copolymers, biodegradable micelles

Procedia PDF Downloads 122
538 Numerical Optimization of Cooling System Parameters for Multilayer Lithium Ion Cell and Battery Packs

Authors: Mohammad Alipour, Ekin Esen, Riza Kizilel

Abstract:

Lithium-ion batteries are a commonly used type of rechargeable battery because of their high specific energy and specific power. With the growing popularity of electric vehicles and hybrid electric vehicles, increasing attention has been paid to rechargeable lithium-ion batteries. However, safety problems, high cost, and poor performance at low ambient temperatures and high current rates are big obstacles to the commercial utilization of these batteries. With proper thermal management, most of these limitations can be mitigated. The temperature profile of Li-ion cells plays a significant role in the performance, safety, and cycle life of the battery, which is why even a small temperature gradient can lead to a great loss in the performance of battery packs. In recent years, numerous researchers have been working on new techniques to apply better thermal management to Li-ion batteries; keeping the battery cells within an optimum temperature range is the main objective of battery thermal management. Commercial Li-ion cells are composed of several electrochemical layers, each consisting of a negative current collector, negative electrode, separator, positive electrode, and positive current collector. However, many researchers have adopted a single-layer cell model to save computing time; their hypothesis is that the thermal conductivity of the layer elements is so high, and the heat transfer rate so fast, that the cell can be modeled as one thick layer instead of several thin layers. In previous work, we showed that the single-layer model is insufficient to simulate the thermal behavior and temperature nonuniformity of high-capacity Li-ion cells, and we studied the effects of the number of layers on the thermal behavior of Li-ion batteries. In this work, the thermal and electrochemical behavior of a LiFePO₄ battery is first modeled with a 3D multilayer cell. The model is validated with experimental measurements at different current rates and ambient temperatures.
The real-time heat generation rate is also studied at different discharge rates. Results showed a non-uniform temperature distribution along the cell, which requires a thermal management system. Therefore, aluminum plates with a mini-channel system were designed to control the temperature uniformity. Design parameters such as channel number and widths, inlet flow rate, and cooling fluids are optimized; as cooling fluids, water and air are compared. Pressure drop and velocity profiles inside the channels are illustrated. Both surface and internal temperature profiles of single cells and battery packs are investigated with and without cooling systems. Our results show that using optimized mini-channel cooling plates effectively controls the temperature rise and uniformity of single cells and battery packs. By increasing the inlet flow rate, a cooling efficiency of up to 60% could be reached.
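Although the paper uses a full 3D multilayer model, the basic effect of the coolant on cell temperature can be illustrated with a lumped-capacitance sketch; all parameter values below are illustrative assumptions, not the cell's measured properties:

```python
def cell_temperature(q_gen, h, area, t_cool, mass, cp,
                     t0=25.0, dt=0.1, steps=20000):
    """Explicit Euler integration of a lumped cell energy balance:
        m * cp * dT/dt = Q_gen - h * A * (T - T_coolant)
    q_gen in W, h in W/(m^2 K), area in m^2, temperatures in degrees C."""
    T = t0
    for _ in range(steps):
        T += dt * (q_gen - h * area * (T - t_cool)) / (mass * cp)
    return T

# Steady state approaches T_coolant + Q_gen / (h * A); a higher inlet
# flow rate raises the convection coefficient h and lowers cell temperature.
T_cell = cell_temperature(q_gen=5.0, h=50.0, area=0.02,
                          t_cool=25.0, mass=0.5, cp=1000.0)
```

A mini-channel cooling plate improves both h and the effective cooled area, which is why the optimized design controls temperature rise so effectively.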

Keywords: lithium ion battery, 3D multilayer model, mini-channel cooling plates, thermal management

Procedia PDF Downloads 141
537 A Patient Passport Application for Adults with Cystic Fibrosis

Authors: Tamara Vagg, Cathy Shortt, Claire Hickey, Joseph A. Eustace, Barry J. Plant, Sabin Tabirca

Abstract:

Introduction: Paper-based patient passports have been used advantageously for older patients, patients with diabetes, and patients with learning difficulties. However, these passports can experience issues with data security, patients forgetting to bring the passport, patients being over-encumbered, and uncertainty over who is responsible for entering and managing the data in the passport. These issues could be resolved by transferring the paper-based system to a convenient platform such as a smartphone application (app). Background: Life expectancy for some Cystic Fibrosis (CF) patients is rising, and as such, new complications and procedures are predicted. Subsequently, there is a need for education and management interventions that can benefit CF adults. This research proposes a CF patient passport that records basic medical information in a smartphone app, giving CF adults access to their basic medical information. Aim: To provide CF patients with their basic medical information via mobile multimedia so that they can receive care when traveling abroad or between CF centres. Moreover, by recording their basic medical information, CF patients may become more aware of their own condition and more active in their health care. Methods: The app is designed by a CF multidisciplinary team to be a lightweight reflection of a hospital patient file. The passport app is created using PhoneGap so that it can be deployed for both Android and iOS devices. Data entered into the app is encrypted and stored locally only. The app is password protected and includes the ability to set reminders and a graph to visualise weight and lung function over time. The app is introduced to seven participants as part of a stress test. The participants are asked to test the performance and usability of the app and report any issues identified.
Results: Feedback and suggestions received via this testing include the ability to reorder the list of clinical appointments via date, an open format of recording dates (in the event specifics are unknown), and a drop down menu for data which is difficult to enter (such as bugs found in mucus). The app is found to be usable and accessible and is now being prepared for a pilot study with adult CF patients. Conclusions: It is anticipated that such an app will be beneficial to CF adult patients when travelling abroad and between CF centres.

Keywords: Cystic Fibrosis, digital patient passport, mHealth, self management

Procedia PDF Downloads 228
536 Surface Display of Lipase on Yarrowia lipolytica Cells

Authors: Evgeniya Y. Yuzbasheva, Tigran V. Yuzbashev, Natalia I. Perkovskaya, Elizaveta B. Mostova

Abstract:

Cell-surface display of lipase is of great interest, as it has many applications in the field of biotechnology owing to its unique advantages: simplified product purification and cost-effective downstream processing. One promising area of application for whole-cell biocatalysts with surface-displayed lipase is biodiesel synthesis. Biodiesel is a biodegradable, renewable, and nontoxic alternative fuel for diesel engines. Although the alkaline catalysis method has been widely used for biodiesel production, it has a number of limitations, such as rigorous feedstock specifications and complicated downstream processes, including removal of inorganic salts from the product, recovery of the salt-containing by-product glycerol, and treatment of alkaline wastewater. Enzymatic synthesis of biodiesel can overcome these drawbacks. In this study, Lip2p lipase was displayed on Yarrowia lipolytica cells via C- and N-terminal fusion variants. The active site of the lipase is located near the C-terminus; therefore, to prevent loss of activity, a glycine-serine linker was inserted between Lip2p and the C-domain. The hydrolytic activity of the displayed lipase reached 12,000–18,000 U/g of dry weight. However, leakage of the enzyme from the cell wall was observed. In the case of the C-terminal fusion variant, the leakage occurred due to proteolytic cleavage within the linker peptide. In the case of the N-terminal fusion variant, the leaking enzyme was present as three proteins, one of which corresponded to the whole hybrid protein. The calculated number of recombinant enzyme molecules displayed on the cell surface is approximately 6–9 × 10⁵ per cell, which is close to the theoretical maximum (2 × 10⁶ molecules/cell). Thus, we attribute the enzyme leakage to the limited space available on the cell surface. Nevertheless, the cell-bound lipase exhibited greater stability to short-term and long-term temperature treatment than the native enzyme.
It retained 74% of its original activity after 5 min of incubation at 60°C, and 83% of its original activity after incubation at 50°C for 5 h. The cell-bound lipase also had higher stability in organic solvents and detergents. The developed whole-cell biocatalyst was used for recycled biodiesel synthesis: two repeated cycles of methanolysis yielded 84.1% and 71.0% methyl esters after 33 h and 45 h reactions, respectively.

Keywords: biodiesel, cell-surface display, lipase, whole-cell biocatalyst

Procedia PDF Downloads 467
535 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov's scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM; therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially on a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in a way that each MPI rank acquires its information from the file in parallel.
In the case of GPFS, on each computational node a single MPI rank reads data from a file generated specifically for that node and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and messages do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory’s Mira (GPFS), the National Center for Supercomputing Applications’ Blue Waters (Lustre), the San Diego Supercomputer Center’s Comet (Lustre), and UIC’s Extreme (Lustre). The tests showed that one file per node is best suited for GPFS, while parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, to calculate the solution at every time step. For this, the code can make use of its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact, together with the discontinuous nature of the method, makes the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed scalable and efficient parallel performance of the code.
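The per-rank parallel read described above boils down to each rank computing its own byte offset into a fixed-record pre-file and reading only its slice. A minimal sketch of that offset arithmetic, assuming a hypothetical fixed-size integer record per element (the actual pre-file layout is not specified in the abstract):

```python
import struct

RECORD_FMT = "<6i"                    # hypothetical record: 6 int32s per element
RECORD_SIZE = struct.calcsize(RECORD_FMT)

def rank_offsets(n_elements, n_ranks, rank):
    """Byte offset and element count for one MPI rank's slice of the pre-file.

    Elements are block-distributed: early ranks absorb the remainder, so
    counts differ by at most one and every element is read exactly once.
    """
    base, rem = divmod(n_elements, n_ranks)
    count = base + (1 if rank < rem else 0)
    first = rank * base + min(rank, rem)
    return first * RECORD_SIZE, count

# Example: 10 elements over 4 ranks distribute as 3, 3, 2, 2
print([rank_offsets(10, 4, r) for r in range(4)])
```

In an actual MPI I/O implementation each rank would pass its computed offset to a collective read call, so no rank ever touches another rank's portion of the file.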

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 118
534 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms

Authors: Abdul Rehman, Bo Liu

Abstract:

Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses. These losses contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce the secondary flow loss. In this paper, non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments and the optimization. All flow simulations were conducted using steady RANS with Spalart-Allmaras as the turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves. Each cut, with multiple control points, was created along the virtual streamlines in the blade channel. For the design of experiments, each sample was generated from values chosen automatically for the control points defined during parameterization. The optimization was carried out with two algorithms, a stochastic algorithm and a gradient-based algorithm. For the stochastic case, a genetic algorithm coupled with an artificial neural network was used as the optimization method in order to reach the global optimum; the successive design iterations were evaluated by the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it exploits derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant.
The performance was quantified using a multi-objective function. Besides these two classes of optimization methods, there were four optimization cases: the hub only, the shroud only, the combination of hub and shroud, and the shroud optimized starting from the optimized hub geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub increased. The shroud optimization also increased the efficiency, while the total pressure loss and entropy were reduced. The combined hub-and-shroud optimization did not match the results achieved in the individual hub and shroud cases, possibly because there were too many control variables. The fourth case showed the best result, because the optimized hub was used as the initial geometry for optimizing the shroud: the efficiency was increased more than in the individual optimization cases, with a mass flow rate equal to that of the baseline turbine design. Finally, the results of the artificial neural network and the conjugate gradient method were compared.
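The gradient-based branch above relies on conjugate gradient iterations driven by derivative information. The abstract's optimizer is coupled to a 3-D CFD solver, but the core update rule can be illustrated on a small symmetric positive-definite linear system, where conjugate gradients converge in at most n steps (a plain-Python sketch, not the authors' implementation):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A (lists of lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A x with x = 0
    p = r[:]                       # first search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:           # residual small enough: converged
            break
        # new direction: residual plus a conjugacy correction
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x
```

In the aerodynamic setting, the matrix-vector products are replaced by CFD evaluations of the objective's gradient with respect to the endwall control points, which is what makes the method cheap in iterations but dependent on derivative availability.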

Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization

Procedia PDF Downloads 211
533 A Distributed Smart Battery Management System – sBMS, for Stationary Energy Storage Applications

Authors: António J. Gano, Carmen Rangel

Abstract:

Currently, electric energy storage systems for stationary applications are attracting increasing interest, namely with the integration of local renewable energy power sources into energy communities. Li-ion batteries are considered the leading electric storage devices to achieve this integration, and Battery Management Systems (BMS) are decisive for their control and optimum performance. In this work, the development of a smart BMS (sBMS) prototype with a modular distributed topology is described. The system, still under development, has a distributed architecture with modular characteristics so that it can operate with different battery pack topologies and charge capacities. It integrates adaptive algorithms for real-time monitoring and management of the functional state of multicellular Li-ion batteries and is intended for application in the context of a local energy community fed by renewable energy sources. The sBMS includes several developed hardware units: (1) cell monitoring units (CMUs) interfacing with each individual cell or module within the battery pack; (2) a battery monitoring and switching unit (BMU) for global battery pack monitoring, thermal control and switching of the functional operating state; (3) a main management and local control unit (MCU) for local sBMS management and control, which also serves as a communications gateway to external systems and devices. This architecture is fully expandable to battery packs with a large number of cells or modules interconnected in series, as the units have local data acquisition and processing capabilities, communicate over a standard CAN bus, and will be able to operate almost autonomously. The CMU units are intended for Li-ion cells but can be used with other cell chemistries with output voltages within the 2.5 to 5 V range. The characteristics and specifications of the different units are described, including the implemented hardware solutions.
The developed hardware supports both passive and active methods of charge equalization, considered fundamental functionalities for optimizing the performance and useful lifetime of a Li-ion battery pack. The functional characteristics of the different units, including the acquisition of different process variables through a flexible set of sensors, can support the development of custom algorithms for estimating the parameters that define the functional states of the battery pack (State-of-Charge, State-of-Health, etc.), as well as different charge equalization strategies and algorithms. The sBMS is intended to interface with other systems and devices using standard communication protocols, like those used by the Internet of Things. In the future, this architecture can evolve to a fully decentralized topology, with all units using Wi-Fi protocols and forming a mesh network, making the MCU unit unnecessary. The status of the work in progress is reported, leading to conclusions on the system already built, which is intended not only as a fully functional, advanced and configurable battery management system but also as a platform for developing custom algorithms and optimization strategies to achieve better performance of stationary electric energy storage devices.
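One of the functional-state estimates mentioned above, State-of-Charge, is commonly obtained by coulomb counting: integrating pack current over time against the rated capacity. A minimal sketch of such an estimator (the sBMS's actual algorithms are custom and not detailed in the abstract; the sign convention and clamping here are assumptions):

```python
def update_soc(soc, current_a, dt_s, capacity_ah):
    """One coulomb-counting step: discharge current positive, SOC kept in [0, 1]."""
    soc -= (current_a * dt_s) / (capacity_ah * 3600.0)
    return min(1.0, max(0.0, soc))

# Example: a 10 Ah cell discharged at a constant 2 A for one hour
# loses 2 Ah, i.e. 20% of its capacity.
soc = 1.0
for _ in range(3600):              # 1-second steps
    soc = update_soc(soc, 2.0, 1.0, 10.0)
print(round(soc, 3))
```

In practice a BMS periodically corrects this integrator against an open-circuit-voltage lookup, since pure coulomb counting drifts with sensor offset error.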

Keywords: Li-ion battery, smart BMS, stationary electric storage, distributed BMS

Procedia PDF Downloads 77
532 Effects of Caprine Arthritis-Encephalitis Virus (CAEV) Infection on the Expression of Cathelicidin Genes in Goat Blood Leukocytes

Authors: Daria Reczynska, Justyna Jarczak, Michal Czopowicz, Danuta Sloniewska, Karina Horbanczuk, Wieslaw Jarmuz, Jaroslaw Kaba, Emilia Bagnicka

Abstract:

Since people, animals and plants are constantly exposed to pathogens, they have developed very complex systems of defense. Among the ca. 1000 antimicrobial peptides from different families identified so far, approximately 30, belonging to the cathelicidin family, can be found in mammals. Cathelicidins probably constitute the first line of defense because they can act at the physiological salt concentration present in healthy tissues, whereas the low salt concentration present in infected tissues inhibits their activity. In the goat, bactenecin 7.5 (BAC7.5), bactenecin 5 (BAC5), myeloid antimicrobial peptide 28 (MAP28), myeloid antimicrobial peptide 34 (MAP34 A and B) and goat bactenecin 3.4 (ChBac3.4) have been identified. Caprine arthritis-encephalitis (CAE), caused by small ruminant lentivirus (SRLV), is an economic problem. The main CAE symptoms are weight loss, arthritis, pneumonia and mastitis (a significant elevation of the somatic cell count and deterioration of some technological parameters). The study was conducted on 24 dairy goats. The animals were divided into two groups: experimental (SRLV-infected) and control (non-infected). Blood samples were collected five times: on the 1st, 7th, 30th, 90th and 150th day of lactation. The levels of transcripts of the BAC7.5, BAC5, MAP28 and MAP34 genes in blood leucocytes were measured using the qPCR method. There were no differences in the mRNA levels of the studied genes between stages of lactation. Differences were observed in the expression of the BAC5, MAP28 and MAP34 genes, with lower levels in the experimental group; there was no difference in BAC7.5 expression between groups. The decreased levels of cathelicidin gene transcripts in the blood leucocytes of SRLV-infected goats may indicate a disturbance of homeostasis in the organism. It can be concluded that SRLV infection seems to inhibit the expression of cathelicidin genes. The study was financed by a grant from the National Scientific Center No. UMO-2013/09/B/NZ/03514.
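The abstract does not state how the qPCR transcript levels were quantified; a standard approach for this kind of between-group comparison is the 2^-ΔΔCt (Livak) method, which expresses a target gene's level relative to a reference gene and a calibrator sample. A sketch under that assumption:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change by the 2^-ddCt method.

    ct_target / ct_ref: threshold cycles for the gene of interest and the
    reference gene in the test sample (e.g. an SRLV-infected goat);
    the *_cal values are the same measurements in the calibrator
    (e.g. a non-infected control).
    """
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# A later Ct means less transcript: one extra cycle relative to the
# calibrator corresponds to a two-fold lower expression.
print(relative_expression(25.0, 20.0, 24.0, 20.0))
```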

Keywords: goat, CAEV, cathelicidins, blood leukocytes, gene expression

Procedia PDF Downloads 263
531 A Concept Study to Assist Non-Profit Organizations to Better Target Developing Countries

Authors: Malek Makki

Abstract:

The main purpose of this research study is to assist non-profit organizations (NPOs) in better segmenting a group of least developed countries and in optimally targeting the neediest areas, so that the aid provided makes a positive and lasting difference. We applied international marketing and strategy approaches to segment a sub-group of candidates among the group of 151 countries identified by the UN-G77 list, and furthermore, we point out the priority areas. We use reliable and well-known economic, geographic, demographic and behavioral criteria. These criteria can be objectively estimated and updated, so that a follow-up can be performed to measure the outcomes of any program. We selected 12 socio-economic criteria that complement each other: GDP per capita, GDP growth, industry value added, exports per capita, fragile state index, corruption perception index, environmental protection index, ease of doing business index, global competitiveness index, Internet use, public spending on education, and employment rate. A weight was attributed to each variable to highlight the relative importance of each criterion within the country. Care was taken to collect the most recent available data from trusted, well-known international organizations (IMF, WB, WEF, and WTO). Construct equivalence was verified in order to compare the same variables across countries. The combination of all these weighted, estimated criteria provides a global index that represents the level of development of each country. An absolute index that combines wars and risks was introduced to exclude or include a country on the basis of conflicts and state collapse. The final step, applied to the included countries, consists of a benchmarking method to select the segment of countries and the percentile of each criterion. The results of this study led us to exclude 16 countries for risk and security reasons. We also excluded four countries because they lack reliable and complete data.
The other countries were classified per percentile through their global index, and we identified the neediest countries and the areas where aid is most required, helping any NPO prioritize its area of implementation. This new concept is based on defined, actionable, accessible and accurate variables through which an NPO can implement its program, and it can be extended to for-profit companies carrying out their corporate social responsibility activities.
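The weighted combination of criteria described above amounts to a min-max normalization of each criterion followed by a weighted sum. A minimal sketch of the mechanics (the study's actual weights and normalization ranges are not given in the abstract; the values below are illustrative only):

```python
def global_index(values, weights, minima, maxima):
    """Weighted sum of min-max normalized criteria.

    values / minima / maxima are per-criterion observations and bounds;
    weights should sum to 1. Returns a score in [0, 1], higher meaning
    more developed on the chosen criteria.
    """
    score = 0.0
    for v, w, lo, hi in zip(values, weights, minima, maxima):
        norm = (v - lo) / (hi - lo) if hi > lo else 0.0
        score += w * norm
    return score

# Illustrative two-criterion example: one criterion at its maximum,
# one at its minimum, equal weights.
print(global_index([10.0, 0.0], [0.5, 0.5], [0.0, 0.0], [10.0, 10.0]))
```

Because each criterion is rescaled to the same 0-1 range before weighting, the weights alone control each criterion's relative influence, which is what makes the index easy to update as new data arrive.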

Keywords: developing countries, international marketing, non-profit organization, segmentation

Procedia PDF Downloads 283
530 Gabriel Marcel and Friedrich Nietzsche: Existence and Death of God

Authors: Paolo Scolari

Abstract:

Nietzschean thought flows like a current throughout Marcel’s philosophy. Marcel is in constant dialogue with him and wants to pay homage to him, making him one of the most eminent representatives of existential thought. His enthusiasm is triggered by Nietzsche’s phrase ‘God is dead,’ the fil rouge that ties together all of the Nietzschean references scattered through Marcel's texts. The death of God is the theme which emphasises both the greatness and, simultaneously, the tragedy of Nietzsche. Marcel wants to restore to the idea ‘God is dead’ its original meaning: a tragic existential character that Nietzsche's imitators seem to have blurred. It is an interpretation by which Marcel aims at a double target. On the one hand, he removes from Nietzsche’s aphorisms on the death of God the heavy metaphysical suit that his interpreters, Heidegger especially, have made them wear. On the other hand, he removes a stratum of trivialisation which takes the aphorisms out of context and transforms them into advertising slogans; here Sartre becomes the target. In the lecture Nietzsche: l'homme devant la mort de dieu, Marcel hurls himself against Heidegger's metaphysical interpretation of the death of God: a hermeneutical proposal that is certainly original, but also a bit too abstract, an interpretation without bite that does not grasp the tragic existential weight of the original Nietzschean idea. ‘We are probably on the wrong road,’ he announces, ‘when at all costs, like Heidegger, we want to make a metaphysic out of Nietzsche.’ Marcel also criticizes Sartre, who lands in Geneva and reacts to the journalists by saying: ‘Gentlemen, God is dead.’ Marcel needs only this impromptu exclamation to understand how Sartre misinterprets the meaning of the death of God: Sartre mistakes and loses the existential sense of this idea in favour of sensationalism and trivialisation.
Marcel then wipes the slate clean of these two limited interpretations of the declaration of the death of God, which is much more than a metaphysical quarrel and not at all comparable to an advertising slogan. Behind the cry ‘God is dead’ there is the existence of an anguished man who experiences in his solitude the actual death of God, a man who has killed God with his own hands, haunted by the chill of knowing that from now on he will have to live in a completely different way. The death of God, however, is not the end. Marcel spots a new beginning at the point at which nihilism is overcome and the Übermensch is born. In dialogue with Nietzsche, he realizes that he is in the presence of a great spirit who has contributed to the renewal of a spiritual horizon. He descends to the most profound depths of Nietzsche's thought, aware that the way out lies far below, in the remotest areas of existence. The ambivalence of Nietzsche does not scare him; rather, such a thought, characterised by contradiction, will be simultaneously infinitely dangerous and infinitely healthy.

Keywords: Nietzsche's Death of God, Gabriel Marcel, Heidegger, Sartre

Procedia PDF Downloads 205
529 Sustainable Geographic Information System-Based Map for Suitable Landfill Sites in Aley and Chouf, Lebanon

Authors: Allaw Kamel, Bazzi Hasan

Abstract:

Municipal solid waste (MSW) generation is among the most significant threats to global environmental health. Solid waste management has been an important environmental problem in developing countries because of the difficulty of finding sustainable solutions for solid wastes; therefore, more effort needs to be put into overcoming this problem. Lebanon suffered a severe solid waste management crisis in 2015, and a new landfill site was proposed to solve the existing problem. This study aims to identify and locate the most suitable area to construct a landfill, taking sustainable development into consideration in order to overcome the present situation and protect future demands. Throughout the article, a landfill site selection methodology is discussed using a Geographic Information System (GIS) and Multi-Criteria Decision Analysis (MCDA). Several environmental, economic and social factors were taken as criteria for the selection of a landfill. Soil, geology and LUC (Land Use and Land Cover) indices, together with a Sustainable Development Index, were the main inputs used to create the final map of Environmentally Sensitive Areas (ESA) for a landfill site. Different factors were determined to define each index, and the input data of each factor were managed, visualized and analyzed using GIS. GIS was an important tool for identifying suitable areas for a landfill: Spatial Analysis (SA), Analysis and Management GIS tools were employed to produce input maps capable of identifying suitable areas related to each index. A weight was assigned to each factor within an index, and main weights were assigned to each index used. The combination of the different index maps generates the final output map of ESA, which was reclassified into three suitability classes of low, moderate, and high suitability. The results showed different locations suitable for the construction of a landfill.
The results also reflect the importance of GIS and MCDA in helping decision makers find a solution for solid wastes through a sanitary landfill.
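The GIS workflow above, per-factor weights, index combination, and reclassification into three suitability classes, can be sketched in plain Python on small raster grids (real implementations would use GIS raster tools such as a weighted-overlay operation; the class thresholds below are illustrative, not the study's):

```python
def weighted_overlay(layers, weights):
    """Cell-wise weighted sum of equally-sized raster layers (lists of rows)."""
    rows, cols = len(layers[0]), len(layers[0][0])
    return [[sum(w * layer[r][c] for layer, w in zip(layers, weights))
             for c in range(cols)] for r in range(rows)]

def reclassify(grid, low_max, mod_max):
    """Map combined scores to suitability classes: 1=low, 2=moderate, 3=high."""
    def cls(v):
        if v <= low_max:
            return 1
        if v <= mod_max:
            return 2
        return 3
    return [[cls(v) for v in row] for row in grid]

# Two hypothetical 2x2 index layers (e.g. soil and geology), equal weights.
score = weighted_overlay(
    [[[1.0, 3.0], [2.0, 3.0]],
     [[3.0, 3.0], [1.0, 3.0]]],
    [0.5, 0.5])
print(reclassify(score, low_max=1.5, mod_max=2.5))
```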

Keywords: sustainable development, landfill, municipal solid waste (MSW), geographic information system (GIS), multi criteria decision analysis (MCDA), environmentally sensitive area (ESA)

Procedia PDF Downloads 133
528 Effect of Amount of Crude Fiber in Grass or Silage to the Digestibility of Organic Matter in Suckler Cow Feeding Systems

Authors: Scholz Heiko, Kuhne Petra, Heckenberger Gerd

Abstract:

Problems during the calving period (December to May) often result in a high body condition score (BCS) at this time. At the end of the grazing period (frequently after early weaning), however, an increase in BCS can often be observed under German conditions. In the last eight weeks before calving, the body condition should be reduced, or at least not increased; rations with a higher amount of crude fiber can be used (rations with straw or late-mowed grass silage). Fermentative digestion of fiber is slow and incomplete, which is why the fermentation process in the rumen can be impaired over a long feeding period. Viewed in this context, the feed intake of suckler cows (8 weeks before calving) on different rations and the fermentation in the rumen should be checked by sampling rumen fluid. Eight suckler cows (Charolais) were fed a Total Mixed Ration (TMR) in the last eight weeks before calving and grass silage after calving. The amount of crude fiber in the TMR (grass silage, straw, mineral) before calving was varied through the addition of straw (30% [TMR1] vs. 60% [TMR2] of dry matter). After calving, the cows were fed grass silage [GS] ad libitum, and the last measurement of rumen fluid took place on the pasture [PS]. Rumen fluid, plasma, body weight, and backfat thickness were collected. Rumen fluid pH was assessed using an electronic pH meter. Volatile fatty acids (VFA), sedimentation, methylene blue and the amount of infusoria were measured, and from these four parameters an “index of rumen fermentation” [IRF] was formed. The fixed effects of treatment (TMR1, TMR2, GS and PS) and number of lactations (3-7 lactations) were analyzed by ANOVA using SPSS version 25.0 (significant at p ≤ 5%). Rumen fluid pH was significantly influenced by the variants (TMR1: 6.6; TMR2: 6.9; GS: 6.6; PS: 6.9) but was not affected by the other effects.
The IRF showed disturbed fermentation in the rumen when feeding TMR 1 and 2 with a high amount of crude fiber (score > 10.0 points) and a very good fermentation environment during grazing on pasture (score: 6.9 points). Furthermore, significant differences were found for VFA, methylene blue and the number of infusoria. The use of rations with a high amount of crude fiber from weaning to calving may cause deviations from undisturbed fermentation in the rumen and adversely affect the utilization of the feed in the rumen.
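The exact scoring behind the IRF is not given in the abstract; a plausible reading is that each of the four parameters (VFA, sedimentation, methylene blue, infusoria) contributes a graded sub-score and the total is compared against the disturbed-fermentation threshold of 10 points quoted above. A purely hypothetical sketch of such a scheme:

```python
def irf(sub_scores, threshold=10.0):
    """Hypothetical IRF: sum of four graded sub-scores (lower = more normal);
    totals above the threshold indicate disturbed rumen fermentation,
    matching the >10-point criterion quoted in the abstract."""
    assert len(sub_scores) == 4, "one sub-score per measured parameter"
    total = sum(sub_scores)
    return total, ("disturbed" if total > threshold else "undisturbed")
```

This is an illustration of the index mechanics only; the authors' actual sub-score scale and weighting are not reported.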

Keywords: suckler cow, feeding systems, crude fiber, digestibility of organic matter

Procedia PDF Downloads 121
527 Case Study on Innovative Aquatic-Based Bioeconomy for Chlorella sorokiniana

Authors: Iryna Atamaniuk, Hannah Boysen, Nils Wieczorek, Natalia Politaeva, Iuliia Bazarnova, Kerstin Kuchta

Abstract:

Over the last decade, due to climate change and a strategy of natural resources preservation, interest in aquatic biomass has increased dramatically. Along with mitigating environmental pressure and connecting waste streams (including CO2 and heat emissions), a microalgae bioeconomy can supply food, feed, and the pharmaceutical and power industries with a number of value-added products. Furthermore, in comparison to conventional biomass, microalgae can be cultivated under a wide range of conditions without compromising food and feed production, thus addressing issues associated with negative social and environmental impacts. This paper presents state-of-the-art technology for the microalgae bioeconomy, from the cultivation process to the production of valuable components and by-streams. The microalga Chlorella sorokiniana was cultivated in a pilot-scale innovation concept in Hamburg (Germany) using different systems, namely a raceway pond (5000 L) and flat panel reactors (8 x 180 L). In order to achieve optimum growth conditions along with a cellular composition suitable for the subsequent extraction of the value-added components, process parameters such as light intensity, temperature and pH are continuously monitored. The metabolic needs in nutrients were met by the addition of micro- and macro-nutrients to the medium to ensure autotrophic growth of the microalgae. Cultivation was followed by downstream processing and the extraction of lipids, proteins and saccharides. Lipid extraction was conducted in a repeated-batch, semi-automatic mode using the hot extraction method according to Randall, with hexane and ethanol as solvents at ratios of 9:1 and 1:9, respectively. Depending on the cell disruption method and the solvent ratio, the total lipid content showed significant variation between 8.1% and 13.9%.
The highest percentage of extracted biomass was reached with a sample pretreated by microwave digestion using 90% hexane and 10% ethanol as solvents. The protein content of the microalgae was determined by two different methods: Total Kjeldahl Nitrogen (TKN), which was further converted to protein content, and the Bradford method using Brilliant Blue G-250 dye. The obtained results showed a good correlation between the two methods, with the protein content in the range of 39.8-47.1%. Characterization of the neutral and acid saccharides of the microalgae was conducted by the phenol-sulfuric acid method at two wavelengths, 480 nm and 490 nm. The average concentrations of neutral and acid saccharides under the optimal cultivation conditions were 19.5% and 26.1%, respectively. Subsequently, the biomass residues were used as substrate for anaerobic digestion at laboratory scale. The methane concentration, which was measured on a daily basis, showed some variation between samples after the extraction steps but remained in the range of 48% to 55%. The CO2 formed during the fermentation process and after combustion in the Combined Heat and Power unit can potentially be used within the cultivation process as a carbon source for the photoautotrophic synthesis of biomass.
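Both the Bradford and phenol-sulfuric acid assays mentioned above rely on a standard calibration curve: absorbance is fit linearly against known standard concentrations, and sample absorbances are then inverted through that line. A plain least-squares sketch (the standard concentrations and absorbances below are placeholders, not the study's data):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = slope * x + intercept for a calibration curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

def absorbance_to_conc(absorbance, slope, intercept):
    """Invert the standard curve A = slope * C + intercept to recover concentration."""
    return (absorbance - intercept) / slope

# Hypothetical standards (concentration, absorbance) and one sample reading.
slope, intercept = fit_line([0.0, 1.0, 2.0], [0.1, 0.3, 0.5])
print(absorbance_to_conc(0.4, slope, intercept))
```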

Keywords: bioeconomy, lipids, microalgae, proteins, saccharides

Procedia PDF Downloads 229