Search results for: symmetric distributions

188 Radioactivity Assessment of Sediments in Negombo Lagoon Sri Lanka

Authors: H. M. N. L. Handagiripathira

Abstract:

The distributions of naturally occurring and anthropogenic radioactive materials were determined in surface sediments taken at 27 different locations along the bank of Negombo Lagoon in Sri Lanka. Hydrographic parameters of the lagoon water and grain size analyses of the sediment samples were also carried out for this study. The conductivity of the adjacent water varied from 13.6 mS/cm near the southern end to 55.4 mS/cm near the northern end of the lagoon, and salinity likewise varied from 7.2 psu to 32.1 psu. The average pH of the water was 7.6 and the average water temperature was 28.7 °C. The grain size analysis gave the mass fractions of the samples as sand (60.9%), fine sand (30.6%) and fine silt+clay (1.3%) across the sampling locations. The surface sediment samples, 1 kg wet weight each from the upper 5-10 cm layer, were oven dried at 105 °C for 24 hours to constant weight, homogenized and sieved through a 2 mm sieve (IAEA Technical Reports Series No. 295). The radioactivity concentrations were determined using the gamma spectrometry technique. An Ultra Low Background Broad Energy High Purity Ge detector, BEGe (Model BE5030, Canberra), was used for the radioactivity measurements with Canberra Industries' Laboratory Source-less Calibration Software (LabSOCS) mathematical efficiency calibration approach and Geometry Composer software. The mean activity concentrations were found to be 24 ± 4, 67 ± 9, 181 ± 10, 59 ± 8, 3.5 ± 0.4 and 0.47 ± 0.08 Bq/kg for 238U, 232Th, 40K, 210Pb, 235U and 137Cs, respectively. The mean absorbed dose rate in air, radium equivalent activity, external hazard index, annual gonadal dose equivalent and annual effective dose equivalent were 60.8 nGy/h, 137.3 Bq/kg, 0.4, 425.3 µSv/year and 74.6 µSv/year, respectively. The results of this study will provide baseline information on the natural and artificial radioactive isotopes and on the associated environmental pollution and radiological risk.
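
To make the dose arithmetic concrete, the sketch below recomputes the radiological indices from the reported mean activities using widely used UNSCEAR-style coefficients; these coefficients and the outdoor occupancy factor are standard assumptions, not values quoted from the abstract.

```python
# Minimal sketch (not from the paper): radiological indices from the reported
# mean activity concentrations, using standard UNSCEAR-style coefficients.
A_Ra, A_Th, A_K = 24.0, 67.0, 181.0   # Bq/kg for 238U (as Ra-series), 232Th, 40K

# Radium equivalent activity (Bq/kg)
Ra_eq = A_Ra + 1.43 * A_Th + 0.077 * A_K

# Absorbed gamma dose rate in air (nGy/h)
D = 0.462 * A_Ra + 0.604 * A_Th + 0.0417 * A_K

# Annual effective dose equivalent (µSv/y), outdoor occupancy 0.2, 0.7 Sv/Gy
AEDE = D * 8760 * 0.2 * 0.7 * 1e-3

print(f"Ra_eq = {Ra_eq:.1f} Bq/kg, D = {D:.1f} nGy/h, AEDE = {AEDE:.1f} µSv/y")
# Yields values close to the reported 137.3 Bq/kg, 60.8 nGy/h and 74.6 µSv/y.
```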

Keywords: gamma spectrometry, lagoon, radioactivity, sediments

Procedia PDF Downloads 133
187 TNFRSF11B Gene Polymorphisms A163G and G1181C in Prediction of Osteoporosis Risk

Authors: I. Boroňová, J. Bernasovská, J. Kľoc, Z. Tomková, E. Petrejčíková, D. Gabriková, S. Mačeková

Abstract:

Osteoporosis is a complex disease characterized by low bone mineral density, which is determined by an interaction of genetics with metabolic and environmental factors. Current research in the genetics of osteoporosis is focused on the identification of responsible genes and polymorphisms. The TNFRSF11B gene plays a key role in bone remodeling. The aim of this study was to investigate the genotype and allele distribution of the A163G (rs3102735) osteoprotegerin gene promoter and G1181C (rs2073618) osteoprotegerin first exon polymorphisms in a group of 180 unrelated postmenopausal women with diagnosed osteoporosis and 180 normal controls. Genomic DNA was isolated from peripheral blood leukocytes using standard methodology. Genotyping for the polymorphisms was performed using Custom TaqMan® SNP Genotyping Assays. Hardy-Weinberg equilibrium was tested for each SNP in both groups of participants using the chi-square (χ2) test. The distribution of the investigated genotypes in the group of patients with osteoporosis was as follows: AA (66.7%), AG (32.2%), GG (1.1%) for the A163G polymorphism; GG (19.4%), CG (44.4%), CC (36.1%) for the G1181C polymorphism. The distribution of genotypes in the normal controls was as follows: AA (71.1%), AG (26.1%), GG (2.8%) for the A163G polymorphism; GG (22.2%), CG (48.9%), CC (28.9%) for the G1181C polymorphism. For the A163G polymorphism, the variant G allele was more common among patients with osteoporosis: 17.2% versus 15.8% in normal controls. Likewise, for the G1181C polymorphism, a more frequent occurrence of the C allele was observed in the group of patients with osteoporosis (58.3% versus 53.3%). Genotype and allele distributions showed no significant differences (A163G: χ2=0.270, p=0.605; χ2=0.250, p=0.616; G1181C: χ2=1.730, p=0.188; χ2=1.820, p=0.177). Our results represent an initial study; further studies on larger cohorts and association studies will be carried out. Knowing the distribution of genotypes is important for assessing the impact of these polymorphisms on various parameters associated with osteoporosis. Screening to identify “at-risk” women likely to develop osteoporosis and initiating early intervention appears to be the most effective strategy to substantially reduce the risk of osteoporosis.
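
As an illustration of the genotype-distribution comparison, the sketch below runs a chi-square test on counts reconstructed from the reported percentages (an assumption, since only percentages are given), so it will not reproduce the quoted test statistics exactly.

```python
# Illustrative sketch only: genotype counts reconstructed from the reported
# percentages (n = 180 per group), so small rounding differences are expected.
from scipy.stats import chi2_contingency

# A163G genotype counts [AA, AG, GG]
patients = [120, 58, 2]    # ~66.7%, 32.2%, 1.1% of 180
controls = [128, 47, 5]    # ~71.1%, 26.1%, 2.8% of 180

chi2, p, dof, expected = chi2_contingency([patients, controls])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
# A non-significant p-value is consistent with the reported lack of association.
```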

Keywords: osteoporosis, real-time PCR method, SNP polymorphisms

Procedia PDF Downloads 323
186 Completion of the Modified World Health Organization (WHO) Partograph during Labour in Public Health Institutions of Addis Ababa, Ethiopia

Authors: Engida Yisma, Berhanu Dessalegn, Ayalew Astatkie, Nebreed Fesseha

Abstract:

Background: The World Health Organization (WHO) recommends using the partograph to follow labour and delivery, with the objective of improving health care and reducing maternal and foetal morbidity and death. Methods: A retrospective document review was undertaken to assess the completion of the modified WHO partograph during labour in public health institutions of Addis Ababa, Ethiopia. A total of 420 modified WHO partographs used to monitor mothers in labour in five public health institutions that provide maternity care were reviewed. A structured checklist was used to gather the required data. The collected data were analyzed using SPSS version 16.0. Frequency distributions, cross-tabulations and a graph were used to describe the results of the study. Results: All facilities were using the modified WHO partograph. Correct completion of the partograph was very low. Of the 420 partographs reviewed across the five health facilities, foetal heart rate was recorded to the recommended standard in 129 (30.7%) of the partographs, while cervical dilatation was recorded to the recommended standard in 138 (32.9%) and uterine contractions in 87 (20.7%). Descent of the presenting part was not documented in 353 (84%), and moulding was not recorded in 364 (86.7%) of the partographs reviewed. The state of the liquor was documented in 113 (26.9%), while maternal blood pressure was recorded to standard in only 78 (18.6%) of the partographs reviewed. Conclusions: This study showed poor completion of the modified WHO partographs during labour in public health institutions of Addis Ababa, Ethiopia. The findings may reflect poor management of labour and indicate the need for pre-service and periodic on-the-job training of health workers on the proper completion of the partograph. Regular supportive supervision, provision of guidelines and a mandatory health facility policy are also needed in support of a collaborative effort to reduce maternal and perinatal deaths.

Keywords: modified WHO partograph, completion, public health institutions, Addis Ababa, Ethiopia

Procedia PDF Downloads 341
185 An Experimental Investigation of the Surface Pressure on Flat Plates in Turbulent Boundary Layers

Authors: Azadeh Jafari, Farzin Ghanadi, Matthew J. Emes, Maziar Arjomandi, Benjamin S. Cazzolato

Abstract:

The turbulence within the atmospheric boundary layer induces highly unsteady aerodynamic loads on structures. These loads, if not accounted for in the design process, can lead to structural failure and are therefore important for the design of such structures. For an accurate prediction of wind loads, understanding the correlation between atmospheric turbulence and the aerodynamic loads is necessary. The aim of this study is to investigate the effect of turbulence within the atmospheric boundary layer on the surface pressure on a flat plate over a wide range of turbulence intensities and integral length scales. The flat plate is chosen as a fundamental geometry representing structures such as solar panels and billboards. Experiments were conducted in the University of Adelaide large-scale wind tunnel. Two wind tunnel boundary layers with different intensities and length scales of turbulence were generated using two sets of spires with different dimensions and a fetch of roughness elements. Average longitudinal turbulence intensities of 13% and 26% were achieved in the two boundary layers, and the longitudinal integral length scale within the boundary layers was between 0.4 m and 1.22 m. The pressure distributions on a square flat plate at different elevation angles between 30° and 90° were measured within the two boundary layers. It was found that the peak pressure coefficient on the flat plate increased with increasing turbulence intensity and integral length scale. For example, the peak pressure coefficient on a flat plate elevated at 90° increased from 1.2 to 3 as the turbulence intensity increased from 13% to 26%. Furthermore, both the mean and the peak pressure distributions on the flat plate varied with turbulence intensity and length scale. The results of this study can be used to provide a more accurate estimation of the unsteady wind loads on structures such as buildings and solar panels.
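
A minimal sketch of how mean and peak pressure coefficients are typically extracted from an unsteady surface-pressure record is given below; the density, reference speed, synthetic signal, and the percentile used for the peak are placeholders, not values from the experiment.

```python
# Hypothetical sketch (all inputs are placeholders): mean and peak pressure
# coefficients from an unsteady surface-pressure time series.
import numpy as np

rho = 1.2            # air density, kg/m^3 (assumed)
U_ref = 10.0         # reference mean wind speed at plate height, m/s (assumed)
q_ref = 0.5 * rho * U_ref**2

rng = np.random.default_rng(0)
p = rng.normal(60.0, 25.0, 10_000)         # stand-in gauge-pressure signal, Pa

cp = p / q_ref
cp_mean = cp.mean()
cp_peak = np.percentile(cp, 99.9)          # a peak value is often taken as a high percentile
print(f"mean Cp = {cp_mean:.2f}, peak Cp = {cp_peak:.2f}")
```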

Keywords: atmospheric boundary layer, flat plate, pressure coefficient, turbulence

Procedia PDF Downloads 133
184 Occurrence of Half-Metallicity by Sb-Substitution in Non-Magnetic Fe₂TiSn

Authors: S. Chaudhuri, P. A. Bhobe

Abstract:

Fe₂TiSn is a non-magnetic full Heusler alloy with a small gap (~0.07 eV) at the Fermi level. The electronic structure is highly symmetric in both spin bands, and a small amount of hole or electron doping can push the system towards spin polarization. A stable 100% spin polarization, or half-metallicity, is very desirable in the field of spintronics, making Fe₂TiSn a highly attractive material. However, this composition suffers from an inherent anti-site disorder between the Fe and Ti sites. This paper reports on the method adopted to control the anti-site disorder and the realization of a half-metallic ground state in Fe₂TiSn, achieved by chemical substitution. Here, Sb was substituted at the Sn site to obtain Fe₂TiSn₁₋ₓSbₓ compositions with x = 0, 0.1, 0.25, 0.5 and 0.6. All prepared compositions with x ≤ 0.6 exhibit long-range L2₁ ordering and a decrease in Fe–Ti anti-site disorder. The transport and magnetic properties of the Fe₂TiSn₁₋ₓSbₓ compositions were investigated as a function of temperature in the range 5 K to 400 K. Electrical resistivity, magnetization, and Hall voltage measurements were carried out. All the experimental results indicate the presence of a half-metallic ground state in the x ≥ 0.25 compositions. However, the value of the saturation magnetization is small, indicating the presence of compensated magnetic moments. The observed magnetic moment values are in close agreement with the Slater–Pauling rule for half-metallic systems. Magnetic interactions in Fe₂TiSn₁₋ₓSbₓ are understood from a local crystal structural perspective using extended X-ray absorption fine structure (EXAFS) spectroscopy. The changes in bond distances extracted from the EXAFS analysis can be correlated with the hybridization between constituent atoms and hence with the RKKY-type magnetic interactions that govern the magnetic ground state of these alloys. To complement the experimental findings, first-principles electronic structure calculations were also undertaken. The spin-polarized DOS complies with the experimental results for Fe₂TiSn₁₋ₓSbₓ. Substitution of Sb (an electron-excess element) at the Sn site shifts the majority spin band to the lower energy side of the Fermi level, thus making the system 100% spin polarized and inducing long-range magnetic order in otherwise non-magnetic Fe₂TiSn. The present study concludes that a stable half-metallic system can be realized in Fe₂TiSn with ≥ 50% Sb substitution at the Sn site.
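
The Slater-Pauling estimate mentioned above can be sketched as follows; the rule m = Zt − 24 for full Heusler alloys and the valence-electron counts are textbook assumptions, not values taken from the paper.

```python
# Hedged illustration: generalized Slater-Pauling rule for full Heusler alloys,
# m = Z_t - 24 (Bohr magnetons per formula unit), applied to Fe2TiSn(1-x)Sb(x).
valence = {"Fe": 8, "Ti": 4, "Sn": 4, "Sb": 5}   # textbook valence-electron counts

def expected_moment(x):
    """Expected half-metallic moment (mu_B per f.u.) for Fe2TiSn(1-x)Sb(x)."""
    z_total = 2 * valence["Fe"] + valence["Ti"] + (1 - x) * valence["Sn"] + x * valence["Sb"]
    return z_total - 24

for x in (0.0, 0.1, 0.25, 0.5, 0.6):
    print(f"x = {x:.2f}: expected moment = {expected_moment(x):.2f} mu_B/f.u.")
# Small moments growing linearly with Sb content are consistent with the
# reported small saturation magnetization.
```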

Keywords: antisite disorder, EXAFS, full Heusler alloy, half-metallic ferrimagnetism, RKKY interactions

Procedia PDF Downloads 133
183 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of the process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of the process dispersion and location parameters, respectively, based on the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of the traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, robust estimators are required against the contaminations that may exist in Phase I. In the current study, we present a simple approach to construct robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function for the estimation of the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator and the M-estimator of location with Huber and logistic psi-functions for the estimation of the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of the Xbar charts constructed from these estimators are compared with the conventional mean and standard deviation statistics both under normality and against diffuse-localized and symmetric-asymmetric contaminations using 50,000 Monte Carlo simulations in MATLAB. Consequently, it is found that the robust estimators yield parameter estimates with higher efficiency against all types of contaminations, and Xbar charts constructed using robust estimators have higher power in detecting disturbances compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of the Xbar charts.
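
A minimal sketch of one robust chart construction of the kind compared in the study is shown below, using the Hodges-Lehmann estimator for location and the average distance to the median for dispersion; the normal-consistency constant and the simulated Phase I data are assumptions for illustration (the study itself used MATLAB).

```python
# Minimal sketch (assumptions in comments): a robust Xbar chart from Phase I
# subgroups using the Hodges-Lehmann estimator (location) and the average
# distance to the median, ADM (dispersion), two estimators named in the study.
# The normal-consistency constant sqrt(2/pi) is a standard value assumed here.
import numpy as np
from itertools import combinations_with_replacement

def hodges_lehmann(x):
    """Median of pairwise Walsh averages."""
    return np.median([(a + b) / 2 for a, b in combinations_with_replacement(x, 2)])

def adm_sigma(x):
    """Robust sigma estimate from the average distance to the median."""
    return np.mean(np.abs(x - np.median(x))) / np.sqrt(2 / np.pi)

rng = np.random.default_rng(0)
phase1 = rng.normal(10.0, 1.0, size=(25, 5))   # 25 subgroups of size n = 5
phase1[3, 0] += 8.0                            # a localized contamination

n = phase1.shape[1]
mu_hat = np.mean([hodges_lehmann(g) for g in phase1])
sigma_hat = np.mean([adm_sigma(g) for g in phase1])

ucl = mu_hat + 3 * sigma_hat / np.sqrt(n)
lcl = mu_hat - 3 * sigma_hat / np.sqrt(n)
print(f"robust limits: LCL = {lcl:.2f}, UCL = {ucl:.2f}")
# Phase II subgroup means falling outside (LCL, UCL) would signal a disturbance.
```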

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 183
182 Study of Phase Separation Behavior in Flexible Polyurethane Foam

Authors: El Hatka Hicham, Hafidi Youssef, Saghiri Khalid, Ittobane Najim

Abstract:

Flexible polyurethane foam (FPUF) is a low-density cellular material generally used as a cushioning material in many applications such as furniture, bedding, packaging, etc. It is produced commercially in a continuous process, where a reactive mixture of foam chemicals is poured onto a moving conveyor. FPUFs are produced by catalytically balancing the two reactions involved, the blowing reaction (isocyanate-water) and the gelation reaction (isocyanate-polyol). The microstructure of FPUF is generally composed of soft phases (polyol phases) and rigid domains that separate into two domains of different sizes: the rigid polyurea microdomains and the macrodomains (larger aggregates). The morphological features of FPUF are strongly influenced by the phase separation morphology, which plays a key role in determining the overall FPUF properties. This phase-separated morphology results from a thermodynamic incompatibility between soft segments derived from aliphatic polyether and hard segments derived from the commonly used aromatic isocyanate. In order to improve the properties of FPUF against the different stresses this material faces during its use, we report in this work a study of the phase separation phenomenon in FPUF examined using SAXS, WAXS and FTIR. With these techniques we studied the effect of water, isocyanates, and alkali chlorides on the phase separation behavior. SAXS was used to study the morphology of the microphase separation, WAXS to examine the nature of the hard segment packing, and FTIR to investigate the hydrogen bonding characteristics of the materials studied. The prepared foams were shown to have different levels of urea phase connectivity; increasing the water content in the FPUF formulation leads to an increase in the amount of urea formed and consequently to larger urea aggregates. Alkali chlorides (NaCl, KCl, and LiCl) incorporated into FPUF formulations were shown to prevent hydrogen bond formation and subsequently alter the rigid domains. FPUFs prepared with different isocyanate structures showed that urea aggregates form with difficulty in foams prepared with an asymmetric diisocyanate, while they form more readily in foams prepared with symmetric and aliphatic diisocyanates.

Keywords: flexible polyurethane foam, hard segments, phase separation, soft segments

Procedia PDF Downloads 151
181 Affects Associations Analysis in Emergency Situations

Authors: Joanna Grzybowska, Magdalena Igras, Mariusz Ziółko

Abstract:

Association rule learning is an approach for discovering interesting relationships in large databases. The analysis of relations invisible at first glance is a source of new knowledge which can subsequently be used for prediction. We used this data mining technique (which is an automatic and objective method) to learn about interesting affect associations in a corpus of emergency phone calls. We also made an attempt to match the revealed rules with their possible situational context. The corpus was collected and subjectively annotated by two researchers. Each of the 3306 recordings contains information on emotion: (1) type (sadness, weariness, anxiety, surprise, stress, anger, frustration, calm, relief, compassion, contentment, amusement, joy), (2) valence (negative, neutral, or positive), (3) intensity (low, typical, alternating, high). Additional information that is a clue to the speaker’s emotional state was also annotated: speech rate (slow, normal, fast), characteristic vocabulary (filled pauses, repeated words) and conversation style (normal, chaotic). Exponentially many rules can be extracted from a set of items (an item is a single piece of previously annotated information). To generate the rules in the form of an implication X → Y (where X and Y are frequent k-itemsets) the Apriori algorithm was used, as it avoids performing needless computations. Then, two basic measures (support and confidence) and several additional symmetric and asymmetric objective measures (e.g. Laplace, Conviction, Interest Factor, Cosine, correlation coefficient) were calculated for each rule. Each applied interestingness measure revealed different rules, and we selected some top rules for each measure. Owing to the specificity of the corpus (emergency situations), most of the strong rules contain only negative emotions. There are, though, strong rules including neutral or even positive emotions. Three examples of the strongest rules are: {sadness} → {anxiety}; {sadness, weariness, stress, frustration} → {anger}; {compassion} → {sadness}. Association rule learning revealed the strongest configurations of affects (as well as configurations of affects with affect-related information) in our emergency phone call corpus. The acquired knowledge can be used for prediction to fill in the emotional profile of a new caller. Furthermore, a rule-related analysis of the possible context may be a clue to the situation a caller is in.
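
A toy sketch of the two basic measures, support and confidence, is given below; the annotated calls are invented placeholders, and a real analysis would run a full Apriori implementation over the 3306-call corpus.

```python
# Toy sketch (annotations invented for illustration): support and confidence of
# a candidate rule over a set of annotated calls.
calls = [
    {"sadness", "anxiety", "slow speech"},
    {"sadness", "weariness", "stress", "frustration", "anger"},
    {"compassion", "sadness"},
    {"calm", "relief"},
    {"sadness", "anxiety"},
]

def support(itemset):
    return sum(itemset <= call for call in calls) / len(calls)

def confidence(lhs, rhs):
    return support(lhs | rhs) / support(lhs)

rule_lhs, rule_rhs = {"sadness"}, {"anxiety"}
print(f"support = {support(rule_lhs | rule_rhs):.2f}, "
      f"confidence = {confidence(rule_lhs, rule_rhs):.2f}")
# {sadness} -> {anxiety}: support 2/5 = 0.40, confidence 2/4 = 0.50 in this toy data.
```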

Keywords: data mining, emergency phone calls, emotional profiles, rules

Procedia PDF Downloads 403
180 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation

Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell

Abstract:

Composite modeling of consolidation processes plays an important role in process and part design by indicating the formation of possible unwanted defects prior to expensive experimental iterative trial and development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest, with different models proposed. Modeling and statistical errors which arise from this fitting will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation was proposed, which can be readily implemented in various nonlinear finite element packages. In our case, FEniCS was chosen. The coefficients are assumed uncertain, and therefore the distribution of parameters is learned using Markov Chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments. There are good statistical reasons why this is not a rigorous approach to take. To overcome these challenges, a hierarchical Bayesian framework was proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using maximum entropy methods so that it can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs. With this, the paper, as far as the authors are aware, represents the first stochastic finite element implementation in composite process modelling.
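
A heavily simplified sketch of the hierarchical idea is given below: a random-walk Metropolis sampler for the population mean and spread of a single coefficient, given per-experiment estimates. The reduction to one scalar coefficient, the noise level, and the data are assumptions made for illustration, not the authors' implementation.

```python
# Heavily simplified sketch, not the authors' code: random-walk Metropolis over
# the population parameters (mu, log-sigma) of a single hyperelastic coefficient,
# given per-experiment point estimates theta_i with known fitting noise.
import numpy as np

rng = np.random.default_rng(1)
theta_hat = np.array([1.10, 0.95, 1.22, 1.05, 0.88])   # per-experiment estimates (placeholders)
obs_sd = 0.05                                           # assumed fitting noise per experiment

def log_post(mu, log_sigma):
    sigma = np.exp(log_sigma)
    # Each theta_hat ~ Normal(mu, sigma^2 + obs_sd^2); flat priors on mu, log_sigma.
    var = sigma**2 + obs_sd**2
    return -0.5 * np.sum((theta_hat - mu) ** 2 / var + np.log(2 * np.pi * var))

samples, state = [], np.array([1.0, np.log(0.1)])
lp = log_post(*state)
for _ in range(20_000):
    prop = state + rng.normal(0, 0.05, size=2)
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        state, lp = prop, lp_prop
    samples.append(state.copy())

samples = np.array(samples[5000:])                      # drop burn-in
print("posterior mean of mu:", samples[:, 0].mean())
print("posterior mean of sigma:", np.exp(samples[:, 1]).mean())
```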

Keywords: data-driven, material consolidation, stochastic finite elements, surrogate models

Procedia PDF Downloads 140
179 Relationship between Monthly Shrimp Catch Rates and the Oceanography-Related Variables

Authors: Hussain M. Al-foudari, Weizhong Chen, James M. Bishop

Abstract:

Correlations between oceanographic variables and the monthly catch rates of total shrimp and of each of the major species (Penaeus semisulcatus, Metapenaeus affinis and Parapenaeopsis stylifera) showed significant differences under particular conditions. Catches of P. semisulcatus were basically positively correlated with temperature, i.e., the higher the temperature, the higher the catch rate, while those of M. affinis and P. stylifera were negatively correlated with temperature, i.e., high catch rates occurred in low-temperature waters. Thus, during the months of January and April, P. semisulcatus preferred waters with high temperature, usually the offshore and southern areas, while M. affinis and P. stylifera preferred waters with low temperature, usually the inshore and northern areas. The relationship between the catch rate of P. semisulcatus and salinity was not so clear. Results indicated that although salinity was one of the factors affecting the distribution of P. semisulcatus, it was not the principal factor, and impacts from other variables, such as temperature, might overshadow the correlation between the catch rates of P. semisulcatus and salinity. The relationship between shrimp catch rates and dissolved oxygen (DO) also showed mixed results. The catch rates of M. affinis increased with a decrease in surface DO in November 2013, but decreased with lower bottom DO in December. These results indicated that DO might be a factor affecting the distribution of the shrimp; however, the true correlation between catch rate and DO might easily be overshadowed by other environmental variables. Catch rates of P. semisulcatus did not show any relationship with depth. P. semisulcatus is a migratory species and is widely distributed in Kuwait's waters. During the shrimp season from July through December, P. semisulcatus occurs in almost all areas of Kuwait's waters irrespective of water depth. The catch rates of M. affinis and P. stylifera, however, showed clear relationships with depth. Both species had significantly higher catch rates in shallower waters, indicative of their restricted distribution.
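
A minimal sketch of the species-by-species correlation analysis is shown below; the monthly values are placeholders, not the survey data.

```python
# Sketch with placeholder numbers: correlating monthly catch rates with an
# oceanographic variable (here, temperature). pearsonr/spearmanr return
# (statistic, p-value).
import numpy as np
from scipy.stats import pearsonr, spearmanr

temperature = np.array([18, 20, 23, 26, 29, 31, 33, 32, 30, 27, 23, 19])                 # monthly, placeholder
catch_rate  = np.array([3.1, 3.4, 4.0, 4.8, 5.5, 6.1, 6.6, 6.3, 5.8, 5.0, 4.1, 3.3])     # kg/h, placeholder

r, p_r = pearsonr(temperature, catch_rate)
rho, p_rho = spearmanr(temperature, catch_rate)
print(f"Pearson r = {r:.2f} (p = {p_r:.3g}), Spearman rho = {rho:.2f} (p = {p_rho:.3g})")
# A significant positive correlation would mirror the P. semisulcatus result;
# M. affinis and P. stylifera showed the opposite sign.
```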

Keywords: Kuwait, Penaeus semisulcatus, Metapenaeus affinis, Parapenaeopsis stylifera, Arabian Gulf

Procedia PDF Downloads 483
178 Outcome at the Extreme of Viability: A Single-Centre Experience

Authors: Antonia Harold-Barry, Eugene Dempsey

Abstract:

Background: The objective is to examine the survival and outcomes of infants born under 26 weeks gestation in an Irish tertiary maternity hospital from 2007-2016 and to describe the survival and neurodevelopmental outcomes of these extremely preterm infants. Method: The population is 132 infants born at 23, 24, and 25 weeks in Cork University Maternity Hospital from 2007 to 2016. Ethical approval was granted by the Cork Clinical Research Ethics Committee. Patient details were obtained from the Vermont Oxford and Badger Networks. Survival rates and Bayley scores were calculated to assess neurodevelopmental outcomes. Statistical analysis with SPSS included frequencies, distributions, and comparisons between data from 2007-2011 and 2012-2016. Results: The overall survival rate was 63%. Of the surviving babies, 61% had Bayley scores calculated. Survival stood at 39% for delivery at 23 weeks, 50% at 24 weeks, and 83% at 25 weeks. The 2012 to 2016 cohort has shown further increases in survival, with 50% of babies surviving at 23 weeks, 58% at 24 weeks, and 89% at 25 weeks. The corresponding figures for 2007-2011 are 20%, 39%, and 75%. Gestational age and incidence of periventricular leukomalacia were significantly associated, with a p-value of 0.022. Gestational age and delivery room deaths had a p-value of 0.025, as did gestational age and birth weight. A comparison of the two cohorts (2007-2011 and 2012-2016) with respect to the administration of antenatal steroids showed a statistically significant p-value of 0.044. Conclusion: There is less morbidity and mortality in infants born at 25 weeks than at 23 or 24 weeks. Survival of extremely premature infants has increased significantly over the past ten years. Survival rates with normal neurodevelopmental outcomes are comparable with international standards and reflect positive changes in attitudes and practices in neonatal intensive care. This study will inform parents about the potential outcomes of extreme prematurity and policy regarding the management of extreme prematurity.

Keywords: extreme of viability, neurodevelopmental outcome, periventricular leukomalacia, prematurity

Procedia PDF Downloads 81
177 Babouchite Siliceous Rocks: Mineralogical and Geochemical Characterization

Authors: Ben Yahia Nouha, Sebei Abdelaziz, Boussen Slim, Chaabani Fredj

Abstract:

The present work aims to determine the mineralogical and geochemical characteristics of siliceous rock levels and to clarify their origin through geochemical arguments. This study was performed on the Tabarka-Babouch deposit, which belongs to northwestern Tunisia; the siliceous levels are of late Miocene age. The mineralogy was investigated by XRD and the chemistry by ICP-AES. The X-ray diffraction (XRD) patterns of the powdered natural rocks show that the Babouchite is composed mainly of quartz and clay minerals (smectite, illite, and kaolinite). The siliceous rocks contain quartz as the major silica mineral, characterized by two broad reflections in the vicinity of 4.26 Å and 3.34 Å, respectively, with a total lack of opal-CT, confirming that these siliceous rocks are quartz-rich (up to 90%). Indeed, the amounts of all clay minerals (ACM), constituted essentially by smectite in close association with illite and kaolinite, are relatively high, with percentages varying from 7 to 46%. Chemical analyses show that the major oxide contents are consistent with the mineralogical observations and reveal that the siliceous rocks of the Babouchite formation are rich in SiO₂. The whole-rock chemical data indicate that the SiO₂ content is generally in the range 73-91 wt.% (average: 80.43 wt.%). The concentration of Al₂O₃, which represents the detrital fraction in the studied samples, varies from 3.99 to 10.55 wt.% and Fe₂O₃ from 0.73 to 4.41 wt.%. The low CaO contents show that carbonates are present only as impurities. These rocks also contain low amounts of some other oxides, such as Na₂O, MgO, K₂O, and TiO₂. The trace element distributions also vary, with high Sr (up to 84.55 ppm), Cu (5–127 ppm), and Zn (up to 124 ppm), and relatively lower concentrations of Co (2.43-25.54 ppm), Cr (10–61 ppm) and Pb (8-22 ppm). The Babouchite siliceous rocks of northwestern Tunisia generally have high Al/(Al+Fe+Mn) values (0.63-0.83). The majority of the Al/(Al+Fe+Mn) values are close to 0.6, the value of the biogenic end-member. Thus, the Al/(Al+Fe+Mn) values reveal the biogenic origin of the silica.
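
The Al/(Al+Fe+Mn) proxy can be sketched from the whole-rock oxide data as follows; molar masses are standard values and the MnO content is a placeholder, since it is not quoted in the abstract.

```python
# Sketch of the Al/(Al+Fe+Mn) proxy computed from whole-rock oxide analyses.
# Molar masses are standard; the MnO content is an assumed placeholder.
def elemental_fraction(oxide_wt, m_oxide, m_element, atoms_per_oxide):
    """Convert an oxide wt.% into the wt.% of its cation."""
    return oxide_wt * atoms_per_oxide * m_element / m_oxide

def al_fe_mn_ratio(al2o3, fe2o3, mno):
    al = elemental_fraction(al2o3, 101.96, 26.98, 2)
    fe = elemental_fraction(fe2o3, 159.69, 55.85, 2)
    mn = elemental_fraction(mno, 70.94, 54.94, 1)
    return al / (al + fe + mn)

# End-member samples reported in the abstract (MnO assumed small, 0.05 wt.%):
print(round(al_fe_mn_ratio(10.55, 4.41, 0.05), 2))   # ~0.64, Fe-rich sample
print(round(al_fe_mn_ratio(3.99, 0.73, 0.05), 2))    # ~0.79, Fe-poor sample
# Values near ~0.6 or above point toward the biogenic end-member.
```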

Keywords: siliceous rocks, Babouchite formation, XRD, chemical analysis, biogenic silica, northwestern Tunisia

Procedia PDF Downloads 125
176 Variation of Manning’s Coefficient in a Meandering Channel with Emergent Vegetation Cover

Authors: Spandan Sahu, Amiya Kumar Pati, Kishanjit Kumar Khatua

Abstract:

Vegetation plays a major role in determining the flow parameters in an open channel and enhances the aesthetic view of the revetments. The major types of vegetation in rivers typically comprise herbs, grasses, weeds, trees, etc. The vegetation in an open channel usually consists of aquatic plants with complete submergence, partial submergence, or floating plants. The presence of vegetation can have both benefits and problems. The major benefit of aquatic plants is that they reduce soil erosion, providing a stable surface over which the water can flow without hindrance. The obvious problems are that they retard the flow of water and reduce the hydraulic capacity of the channel. The degree to which the flow parameters are affected depends upon the density of the vegetation, the degree of submergence, the vegetation pattern, and the vegetation species. Vegetation in an open channel provides resistance to flow, which in turn provides a background against which to study the trends in flow parameters in channels with vegetative growth on the channel surface. In this paper, an experiment has been conducted on a meandering channel having a sinuosity of 1.33 with rigid vegetation cover to investigate the effect on flow parameters and the variation of Manning's n with the degree of denseness of vegetation, the vegetation pattern, and the submergence criteria. The measurements have been carried out at four different cross-sections: two on the trough portions of the meanders and two on the crest portions. In this study, the analytical solution of Shiono and Knight (SKM) for the lateral distributions of depth-averaged velocity and bed shear stress has been taken into account. Dimensionless eddy viscosity and bed friction have been incorporated to modify the SKM to provide more accurate results. A mathematical model has been formulated for comparative analysis with the results obtained from the Shiono-Knight method.
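
For reference, the depth-averaged SKM momentum balance, as commonly written following Shiono and Knight (1991), is reproduced below in standard notation (not copied from the paper), with λ the dimensionless eddy viscosity and f the bed friction factor that the study incorporates; the constant-depth analytic solution is also a standard result quoted here under that assumption.

```latex
% Depth-averaged SKM momentum balance (after Shiono & Knight, 1991), standard
% notation; the second line is the usual analytic solution for a constant-depth
% panel, with gamma the decay parameter.
\rho g H S_0
  - \frac{f}{8}\,\rho\,U_d^{2}\,\sqrt{1+\frac{1}{s^{2}}}
  + \frac{\partial}{\partial y}\!\left[\rho\,\lambda\,H^{2}\!\left(\frac{f}{8}\right)^{\!1/2} U_d\,\frac{\partial U_d}{\partial y}\right] = 0,
\qquad
U_d(y) = \left[A_1 e^{\gamma y} + A_2 e^{-\gamma y} + \frac{8 g S_0 H}{f}\right]^{1/2},
\quad
\gamma = \sqrt{\frac{2}{\lambda}}\left(\frac{f}{8}\right)^{1/4}\frac{1}{H}.
```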

Keywords: bed friction, depth averaged velocity, eddy viscosity, SKM

Procedia PDF Downloads 134
175 Capacity Oversizing for Infrastructure Sharing Synergies: A Game Theoretic Analysis

Authors: Robin Molinier

Abstract:

Industrial symbiosis (IS) relies on two basic modes of cooperation between organizations: infrastructure/service sharing and resource substitution (the use of waste materials, fatal energy and recirculated utilities for production). The former consists of intensifying the use of an asset and thus requires comparing the incremental investment cost to be incurred with the stand-alone cost faced by each potential participant to satisfy its own requirements. In order to investigate the way such a cooperation mode can be implemented, we formulate a game theoretic model integrating the grassroots investment decision and the ex-post access pricing problem. In the first period, two actors set cooperatively (resp. non-cooperatively) a level of common (resp. individual) infrastructure capacity oversizing to attract ex-post a potential entrant with a plug-and-play offer (available capacity, tariff). The entrant’s requirement is randomly distributed and known only after investments have taken place. Capacity cost exhibits a sub-additive property, so that there is room for profitable overcapacity setting in the first period under some conditions that we derive. The entrant's willingness to pay for access to the infrastructure is driven by both her stand-alone cost and the complement cost to be incurred in case she chooses to access an infrastructure whose available capacity is lower than her requirement level. The expected complement cost function is thus derived, and we show that it is decreasing, convex and shaped by the entrant’s requirement distribution function. For both uniform and triangular distributions, the optimal capacity level is obtained in the cooperative setting and equilibrium levels are determined in the non-cooperative case. Regarding the latter, we show that competition is deterred by the first-period investor with the highest requirement level. Using the non-cooperative game outcomes, which give lower bounds for the profit sharing problem in the cooperative one, we solve the whole game and describe situations supporting sharing agreements.
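
A small numerical sketch of the expected complement cost is given below; the uniform requirement distribution and the linear unit complement cost are assumptions made for the example, not the paper's exact specification.

```python
# Illustrative sketch under stated assumptions: the entrant's requirement Q is
# Uniform(0, q_max) and the complement cost is linear in the unmet part,
# c_unit * max(Q - K, 0). These functional forms are assumed for the example.
import numpy as np

def expected_complement_cost(K, q_max=100.0, c_unit=1.0, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    q = rng.uniform(0.0, q_max, n)
    return c_unit * np.maximum(q - K, 0.0).mean()

for K in (0, 25, 50, 75, 100):
    print(f"K = {K:3d}: E[complement cost] = {expected_complement_cost(K):6.2f}")
# The expectation is decreasing and convex in the installed capacity K, as
# stated in the abstract; the closed form here is c_unit * (q_max - K)**2 / (2 * q_max).
```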

Keywords: capacity, cooperation, industrial symbiosis, pricing

Procedia PDF Downloads 434
174 Lithium and Sodium Ion Capacitors with High Energy and Power Densities based on Carbons from Recycled Olive Pits

Authors: Jon Ajuria, Edurne Redondo, Roman Mysyk, Eider Goikolea

Abstract:

Hybrid capacitor configurations are now of increasing interest to overcome the current energy limitations of supercapacitors entirely based on non-Faradaic charge storage. Among them, Li-ion capacitors including a negative battery-type lithium intercalation electrode and a positive capacitor-type electrode have achieved tremendous progress and have reached commercialization. Inexpensive electrode materials from renewable sources have recently received increased attention, since cost is persistently a major criterion in making supercapacitors a more viable energy solution, with electrode materials being a major contributor to supercapacitor cost. Additionally, Na-ion battery chemistries are currently under development as a less expensive and more accessible alternative to Li-ion based battery electrodes. In this work, we present both a lithium ion capacitor (LIC) and a sodium ion capacitor (NIC) entirely based on electrodes prepared from carbon materials derived from recycled olive pits. Yearly, around 1 million tons of olive pit waste is generated worldwide, of which a third originates in the Spanish olive oil industry. On the one hand, olive pits were pyrolyzed at different temperatures to obtain a low specific surface area semigraphitic hard carbon to be used as the Li/Na ion intercalation (battery-type) negative electrode. The best hard carbon delivers a total capacity of 270 mAh/g vs Na/Na+ in 1M NaPF6 and 350 mAh/g vs Li/Li+ in 1M LiPF6. On the other hand, the same hard carbon is chemically activated with KOH to obtain a high specific surface area (about 2000 m²/g) activated carbon that is further used as the ion-adsorption (capacitor-type) positive electrode. In a voltage window of 1.5-4.2 V, the activated carbon delivers a specific capacity of 80 mAh/g vs. Na/Na+ and 95 mAh/g vs. Li/Li+ at 0.1 A/g. Both electrodes were assembled in the same hybrid cell to build a LIC/NIC. For comparison purposes, a symmetric EDLC supercapacitor cell using the same activated carbon in 1.5M Et4NBF4 electrolyte was also built. Both the LIC and NIC demonstrate considerable improvements in energy density over their EDLC counterpart. The NIC delivers a maximum energy density of 110 Wh/kg at a power density of 30 W/kg AM and a maximum power density of 6200 W/kg at an energy density of 27 Wh/kg, while the LIC delivers a maximum energy density of 110 Wh/kg at a power density of 30 W/kg and a maximum power density of 18000 W/kg at an energy density of 22 Wh/kg. In conclusion, our work demonstrates that the same biomass waste can be adapted to offer a hybrid capacitor/battery storage device overcoming the limited energy density of the corresponding double layer capacitors.

Keywords: hybrid supercapacitor, Na-Ion capacitor, supercapacitor, Li-Ion capacitor, EDLC

Procedia PDF Downloads 196
173 Clinical Efficacy of Nivolumab and Ipilimumab Combination Therapy for the Treatment of Advanced Melanoma: A Systematic Review and Meta-Analysis of Clinical Trials

Authors: Zhipeng Yan, Janice Wing-Tung Kwong, Ching-Lung Lai

Abstract:

Background: Advanced melanoma accounts for the majority of skin cancer deaths due to its poor prognosis. Nivolumab and ipilimumab are monoclonal antibodies targeting programmed cell death protein 1 (PD-1) and cytotoxic T-lymphocyte antigen 4 (CTLA-4). Nivolumab and ipilimumab combination therapy has been proven to be effective for advanced melanoma. This systematic review and meta-analysis evaluates its clinical efficacy and adverse events. Method: A systematic search was done on databases (Pubmed, Embase, Medline, Cochrane) on 21 June 2020. Search keywords were nivolumab, ipilimumab, melanoma, and randomised controlled trials. Clinical trials fulfilling the inclusion criteria were selected to evaluate the efficacy of combination therapy in terms of prolongation of progression-free survival (PFS), overall survival (OS), and objective response rate (ORR). The odds ratios and distributions of grade 3 or above adverse events were documented. Subgroup analysis was performed based on PD-L1 expression status and BRAF mutation status. Results: Compared with nivolumab monotherapy, the hazard ratios of PFS and OS and the odds ratio of ORR for combination therapy were 0.64 (95% CI, 0.48-0.85; p=0.002), 0.84 (95% CI, 0.74-0.95; p=0.007) and 1.76 (95% CI, 1.51-2.06; p < 0.001), respectively. Compared with ipilimumab monotherapy, the hazard ratios of PFS and OS and the odds ratio of ORR were 0.46 (95% CI, 0.37-0.57; p < 0.001), 0.54 (95% CI, 0.48-0.61; p < 0.001) and 6.18 (95% CI, 5.19-7.36; p < 0.001), respectively. For combination therapy, the odds ratios of grade 3 or above adverse events were 4.71 (95% CI, 3.57-6.22; p < 0.001) compared with nivolumab monotherapy and 3.44 (95% CI, 2.49-4.74; p < 0.001) compared with ipilimumab monotherapy. High PD-L1 expression level and BRAF mutation were associated with better clinical outcomes in patients receiving combination therapy. Conclusion: Combination therapy is effective for the treatment of advanced melanoma. Adverse events were common but manageable. Better clinical outcomes were observed in patients with high PD-L1 expression levels and positive BRAF mutation status.
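
For readers unfamiliar with the pooling step, the sketch below shows generic inverse-variance pooling of log hazard ratios with standard errors back-calculated from 95% confidence intervals; the two input (HR, CI) pairs are placeholders, not values extracted from the included trials.

```python
# Generic sketch of fixed-effect inverse-variance pooling of log hazard ratios,
# with standard errors back-calculated from 95% CIs. Inputs are placeholders.
import numpy as np

def pool_fixed_effect(hrs, lcls, ucls):
    log_hr = np.log(hrs)
    se = (np.log(ucls) - np.log(lcls)) / (2 * 1.96)
    w = 1.0 / se**2
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci = np.exp([pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se])
    return np.exp(pooled), ci

hr, ci = pool_fixed_effect(np.array([0.62, 0.67]),
                           np.array([0.45, 0.51]),
                           np.array([0.85, 0.88]))
print(f"pooled HR = {hr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```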

Keywords: nivolumab, ipilimumab, advanced melanoma, systematic review, meta-analysis

Procedia PDF Downloads 134
172 Genesis and Survival Chance of Autotriploid in Natural Diploid Population of Lilium lancifolium Thunb

Authors: Ji-Won Park, Jong-Wha Kim

Abstract:

Triploid L. lancifolium has a wide geographic distribution. By contrast, diploid L. lancifolium has limited distributions in the islands and coastal regions of the southern and western Korean Peninsula and northern Tsushima Island, Japan. L. lancifolium diploids and triploids are not sympatrically distributed with other lily species or ploidy lines in the West Sea and South Sea islands of the Korean Peninsula. This observation raises the following questions: 'Why have autotriploid L. lancifolium never been observed on those isolated islands?', 'What mechanism excludes the occurrence of autotriploids, if they arise?'. To determine the occurrence and survival of triploid plants in natural diploid populations of tiger lily (Lilium lancifolium), ploidy analysis was conducted on natural open-pollinated seeds produced from plants grown on isolated islands, and on hybrid seeds produced by artificial crossing between plant populations originating on different Korean islands. Normal seeds were classified into five grades depending on the ratio of embryo/endosperm lengths: 5/5, 4/5, 3/5, 2/5, and 1/5. Triploids were not observed among seedlings produced from natural open pollination on isolated islands. Triploids were detected only in seedlings of underdeveloped seed grades (3/5 and 2/5) from artificial crosses between populations from different isolated islands. The triploid occurrence frequency was calculated as 0.0 for natural open-pollinated seedlings and 0.000582 for artificial crosses (6 triploids from 10,303 seedlings). Triploids were produced from crosses between isolated populations located at least 70 km apart; no triploids were detected in inter-population crosses of plants originating on the same islands. Triploid seedlings have very low viability in soil. We analyzed factors affecting triploid occurrence and survival in natural diploid populations of L. lancifolium. The results suggest that triploids originate from fertilization between plants that are genetically isolated due to geographical isolation and/or genotypic differences.

Keywords: Lilium lancifolium, autotriploid, natural population, genetic distance, 2n female gamete

Procedia PDF Downloads 517
171 Development of Green Cement, Based on Partial Replacement of Clinker with Limestone Powder

Authors: Yaniv Knop, Alva Peled

Abstract:

Over the past few years there has been a growing interest in the development of Portland composite cements by partial replacement of the clinker with mineral additives. The motivations to reduce the clinker content are threefold: (1) ecological, due to lower emission of CO2 to the atmosphere; (2) economical, due to cost reduction; and (3) scientific/technological, due to the improvement of performance. Among the mineral additives being used and investigated, limestone is one of the most attractive, as it is natural, available, and of low cost. The goal of the research is to develop a green cement by partial replacement of the clinker with limestone powder while improving the performance of the cement paste. This work studied blended cements with three limestone powder particle diameters: smaller than, larger than, and similar in size to the clinker particles. Blended cements with limestone consisting of a single particle size distribution and with limestone consisting of a combination of several particle sizes were studied and compared in terms of hydration rate, hydration degree, and the water demand to achieve normal consistency. The performance of these systems was also compared with that of the original cement (without added limestone). It was found that the ability to replace an active material with an inert additive, while achieving improved performance, can be obtained by increasing the packing density of the cement-based particles. This may be achieved by replacing the clinker with limestone powders having a combination of several different particle size distributions. Mathematical and physical models were developed to simulate the setting history from initial to final setting time and to predict the packing density of blended cements with limestone of different sizes and various contents. Besides the effect of limestone, as an inert additive, on the packing density of the blended cement, the influence of the limestone particle size on three different chemical reactions was studied: hydration of the cement, carbonation of the calcium hydroxide, and the reactivity of the limestone with the hydration reaction products. The main results and developments will be presented.

Keywords: packing density, hydration degree, limestone, blended cement

Procedia PDF Downloads 280
170 Numerical Study on the Effects of Truncated Ribs on Film Cooling with Ribbed Cross-Flow Coolant Channel

Authors: Qijiao He, Lin Ye

Abstract:

To evaluate the effect of ribs on the internal flow structure in the film hole and on the film cooling performance on the outer surface, this numerical study investigates the effects of rib configuration on film cooling performance with a ribbed cross-flow coolant channel. The baseline smooth case and three ribbed cases, including a continuous rib case and two cross-truncated rib cases with different arrangements, are studied. The distributions of adiabatic film cooling effectiveness and heat transfer coefficient are obtained at blowing ratios of 0.5 and 1.0. A commercial steady RANS (Reynolds-averaged Navier-Stokes) code with the realizable k-ε turbulence model and enhanced wall treatment was used for the numerical simulations. The numerical model is validated against available experimental data. The two cross-truncated rib cases produce approximately the same cooling effectiveness as the smooth case at the lower blowing ratio. The continuous rib case significantly outperforms the other cases. With the increase of blowing ratio, the ribbed cases become inferior to the smooth case, especially in the upstream region. The cross-truncated rib I case produces the highest cooling effectiveness among the studied ribbed channel cases. It is found that film cooling effectiveness deteriorates with increasing spiral intensity of the cross-flow inside the film hole. Lower spiral intensity leads to better film coverage and thus results in better cooling effectiveness. The distinct relative merits among the cases at different blowing ratios are explored based on this dominant mechanism. With regard to the heat transfer coefficient, the smooth case has higher heat transfer intensity than the ribbed cases at the studied blowing ratios. The laterally-averaged heat transfer coefficient of the cross-truncated rib I case is higher than that of the cross-truncated rib II case.
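
The two quantities the cases are compared on can be written down directly; the definitions below are the standard ones assumed here, with placeholder numbers.

```python
# Standard definitions assumed here (not quoted from the paper): blowing ratio
# and adiabatic film cooling effectiveness.
def blowing_ratio(rho_c, u_c, rho_inf, u_inf):
    """M = (rho_c * u_c) / (rho_inf * u_inf)."""
    return (rho_c * u_c) / (rho_inf * u_inf)

def adiabatic_effectiveness(t_aw, t_inf, t_c):
    """eta = (T_inf - T_aw) / (T_inf - T_c); eta = 1 means perfect coverage."""
    return (t_inf - t_aw) / (t_inf - t_c)

# Placeholder numbers for illustration:
print(blowing_ratio(rho_c=1.8, u_c=10.0, rho_inf=1.2, u_inf=30.0))   # -> 0.5
print(adiabatic_effectiveness(t_aw=320.0, t_inf=350.0, t_c=300.0))   # -> 0.6
```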

Keywords: cross-flow, cross-truncated rib, film cooling, numerical simulation

Procedia PDF Downloads 132
169 Journal Bearing with Controllable Radial Clearance, Design and Analysis

Authors: Majid Rashidi, Shahrbanoo Farkhondeh Biabnavi

Abstract:

The hydrodynamic instability phenomenon in a journal bearing may occur due to a reduction in the load carried by the journal bearing, an increase in journal speed, a change in lubricant viscosity, or a combination of these factors. Previous research and development work done to overcome the instability issue of journal bearings operating in the hydrodynamic lubrication regime can be categorized as follows: (a) actively controlling the bearing sleeve using a piezo actuator, (b) inclusion of strategically located and shaped internal grooves within the inner surface of the bearing sleeve, (c) actively controlling the bearing sleeve using an electromagnetic actuator, (d) actively and externally pressurizing the lubricant within a journal bearing set, and (e) incorporating tilting pads within the inner surface of the bearing sleeve that assume different equilibrium angular positions in response to changes in bearing design parameters such as speed and load. This work presents an innovative design concept for a 'smart journal bearing' set to operate in a stable hydrodynamic lubrication regime despite variations in bearing speed, load, and lubricant viscosity. The proposed bearing design allows adjusting its radial clearance in an attempt to maintain stable bearing operation under conditions that may cause instability for a bearing with a fixed radial clearance. The design concept allows adjusting the radial clearance in small increments on the order of 0.00254 mm. This is achieved by axially moving two symmetric conical rigid cavities that are in close contact with the conically shaped outer shell of a sleeve bearing. The proposed work includes a 3D model of the bearing that depicts the structural interactions of the bearing components. The 3D model is employed to conduct finite element analyses to simulate the mechanical behavior of the bearing from a structural point of view. The concept of controlling the radial clearance, as presented in this work, is original and has not been proposed or discussed in previous research. A typical journal bearing was analyzed under a set of design parameters, namely r = 1.27 cm (journal radius), c = 0.0254 mm (radial clearance), L = 1.27 cm (bearing length), w = 445 N (bearing load), and μ = 0.028 Pa·s (lubricant viscosity). A shaft speed of 3600 rpm was considered, and the mass supported by the bearing, m, was set to 4.38 kg. The Sommerfeld number associated with the above bearing design parameters turns out to be S = 0.3. This combination resulted in stable bearing operation. Subsequently, the speed was postulated to increase from 3600 rpm to 7200 rpm; the bearing was found to be unstable at the new, increased speed. In order to regain stability, the radial clearance was increased from c = 0.0254 mm to 0.0358 mm. The change in radial clearance was shown to bring the bearing back to a stable operating condition.
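
The quoted Sommerfeld number can be checked directly from the listed design values using the standard definition S = (r/c)² μN/P, with N in rev/s and P the projected bearing pressure:

```python
# Check of the quoted Sommerfeld number, S = (r/c)^2 * mu * N / P, with N in
# rev/s and P = W / (L * D) the projected bearing pressure. Inputs are the
# design values given in the abstract.
r = 0.0127          # journal radius, m
c = 0.0254e-3       # radial clearance, m
L = 0.0127          # bearing length, m
W = 445.0           # bearing load, N
mu = 0.028          # lubricant viscosity, Pa*s
N = 3600 / 60.0     # shaft speed, rev/s

P = W / (L * 2 * r)                 # projected pressure, Pa
S = (r / c) ** 2 * mu * N / P
print(f"Sommerfeld number S = {S:.2f}")   # ~0.30, matching the stated value
```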

Keywords: adjustable clearance, bearing, hydrodynamic, instability, journal

Procedia PDF Downloads 278
168 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions, kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function. Moreover, they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and has good performance in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term “nearest proportion” used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small data-size situations. Hence, an improved estimator based on shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to retaining the advantages of DNP, KDNP surpasses DNP in the experimental results. According to experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
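
As a minimal illustration of the kernelization step that such extensions build on (not of KDNP itself), the sketch below implements a bare-bones kernel PCA via the centered Gram matrix; the data and kernel parameter are placeholders.

```python
# Minimal illustration of the kernel trick used to extend linear feature
# extractors: bare-bones kernel PCA via the centered Gram matrix. This is not
# the KDNP method itself, only the kernelization step it builds on.
import numpy as np

def rbf_kernel(X, gamma=0.5):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=0.5):
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one          # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                     # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                                  # projected training samples

X = np.random.default_rng(0).normal(size=(40, 6))       # stand-in for pixel spectra
Z = kernel_pca(X, n_components=2)
print(Z.shape)                                          # (40, 2)
```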

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest feature extraction

Procedia PDF Downloads 337
167 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics

Authors: Titus A. Beu

Abstract:

Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out for its high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints on efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In a second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In a third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects which are crucial for the realistic modeling of DNA-PEI polyplexes, such as options for treating electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism of DNA polyplex formation and provide its time scales as a function of PEI chain size and protonation pattern. The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles, rather than on changes in DNA-strand curvature. The insights gained are expected to be of significant help in designing effective gene-delivery applications.
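
The (direct) Boltzmann inversion step used to derive CG potentials from AA distributions can be sketched as follows; the histogrammed bond lengths below are synthetic placeholders, not the PEI mapping, and Jacobian corrections for bond and angle coordinates are omitted.

```python
# Sketch of direct Boltzmann inversion for a CG bonded potential,
# U(q) = -k_B * T * ln P(q) up to a constant (Jacobian corrections omitted).
# The histogram is built from synthetic data, not from the PEI mapping.
import numpy as np

k_B = 0.0083145   # kJ/(mol*K)
T = 300.0         # K

rng = np.random.default_rng(2)
samples = rng.normal(0.47, 0.03, 100_000)                  # stand-in CG bond lengths, nm
hist, edges = np.histogram(samples, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

mask = hist > 0
U = -k_B * T * np.log(hist[mask])
U -= U.min()                                               # shift so the minimum is zero

print(f"effective potential minimum near r = {centers[mask][np.argmin(U)]:.3f} nm")
```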

Keywords: DNA condensation, gene-delivery, polyethylene-imine, molecular dynamics

Procedia PDF Downloads 114
166 An Approach to Study the Biodegradation of Low Density Polyethylene Using Microbial Strains of Bacillus subtilis, Aspergillus niger, Pseudomonas fluorescens in Different Media Forms and Salt Conditions

Authors: Monu Ojha, Rahul Rana, Satywati Sharma, Kavya Dashora

Abstract:

The global production rate of plastics has increased enormously, and the global demand for polyethylene resins – high-density polyethylene (HDPE), linear low-density polyethylene (LLDPE) and low-density polyethylene (LDPE) – is expected to rise drastically. These resins accumulate in the environment, posing a potential ecological threat, as they degrade at a very slow rate and remain in the environment indefinitely. The aim of the present study was to investigate the potential of commonly found soil microbes like Bacillus subtilis, Aspergillus niger, and Pseudomonas fluorescens for their ability to biodegrade LDPE in the lab under solid and liquid media conditions as well as in soil in the presence of 1% salt. This study was conducted at the Indian Institute of Technology Delhi, India, from July to September, when the average temperature and relative humidity (RH) were 33 °C and 80%, respectively. It revealed that the weight loss of LDPE strips obtained from the market, of approximately 4 x 6 cm dimensions, is greater in liquid broth media than in solid agar media. The percentage weight loss by P. fluorescens, A. niger and B. subtilis observed after 80 days of incubation was 15.52, 9.24 and 8.99%, respectively, in broth media and 6.93, 2.18 and 4.76% in agar media. LDPE strips from the same source and of the same dimensions were then subjected to soil in the presence of the above microbes with 1% salt (NaCl, obtained from commercial table salt), at the same temperature and RH of 33 °C and 80%. It was found that the rate of degradation was higher in soil than under lab conditions. The weight loss of the LDPE strips under these conditions was found to be 32.98, 15.01 and 17.09% for P. fluorescens, A. niger and B. subtilis, respectively. The breaking strength was found to be 9.65 N, 29 N and 23.85 N for P. fluorescens, A. niger and B. subtilis, respectively. SEM analysis conducted on a Zeiss EVO 50 confirmed that the surface of the LDPE becomes physically weak after biological treatment. There was an increase in surface roughness, indicating surface erosion of the LDPE film. FTIR (Fourier-transform infrared spectroscopy) analysis of the degraded LDPE films showed stretching of the aldehyde group at 3334.92 and 3228.84 cm-1 and C–C=C symmetric stretching of the aromatic ring at 1639.49 cm-1. There was also C=O stretching of the aldehyde group at 1735.93 cm-1. An N=O bend was also observed at 1365.60 cm-1, along with C–O stretching of the ether group at 1217.08 and 1078.21 cm-1.

Keywords: microbial degradation, LDPE, Aspergillus niger, Bacillus subtilis, Pseudomonas fluorescens, common salt

Procedia PDF Downloads 158
165 Examining Statistical Monitoring Approach against Traditional Monitoring Techniques in Detecting Data Anomalies during Conduct of Clinical Trials

Authors: Sheikh Omar Sillah

Abstract:

Introduction: Monitoring is an important means of ensuring the smooth implementation and quality of clinical trials. For many years, traditional site monitoring approaches have been critical in detecting data errors but not optimal in identifying fabricated and implanted data or non-random data distributions that may significantly invalidate study results. The objective of this paper was to provide recommendations, based on best statistical monitoring practices, for detecting data-integrity issues suggestive of fabrication and implantation early in study conduct, allowing the implementation of meaningful corrective and preventive actions. Methodology: Electronic bibliographic databases (Medline, Embase, PubMed, Scopus, and Web of Science) were used for the literature search, and both qualitative and quantitative studies were sought. Search results were uploaded into the Eppi-Reviewer software, and only publications written in English from 2012 onward were included in the review. Gray literature not considered to present reproducible methods was excluded. Results: A total of 18 peer-reviewed publications were included in the review. The publications demonstrated that traditional site monitoring techniques are not efficient in detecting data anomalies. By specifying project-specific parameters such as laboratory reference range values, visit schedules, etc., with appropriate interactive data monitoring, statistical monitoring can offer early signals of data anomalies to study teams. The review further revealed that statistical monitoring is useful for identifying unusual data patterns that might reveal issues impacting data integrity or potentially impacting study participants' safety. However, subjective measures may not be good candidates for statistical monitoring. Conclusion: The statistical monitoring approach requires a combination of education, training, and experience sufficient to implement its principles in detecting data anomalies for the statistical aspects of a clinical trial.
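
To illustrate the kind of check a statistical monitoring workflow might run on site-level data, a minimal sketch of a terminal-digit preference test is given below; this specific test, the function name, and the SciPy implementation are illustrative assumptions, not methods reported in the reviewed publications.

```python
import numpy as np
from scipy import stats

def terminal_digit_test(measurements):
    """Chi-square goodness-of-fit test of terminal digits against a uniform
    distribution; marked digit preference at a single site can flag possibly
    fabricated or implanted values for follow-up by the study team."""
    digits = np.array([int(str(int(round(m)))[-1]) for m in measurements])
    observed = np.bincount(digits, minlength=10)
    expected = np.full(10, len(digits) / 10.0)
    chi2, p_value = stats.chisquare(observed, expected)
    return chi2, p_value

# Usage: run per site on, e.g., reported blood-pressure readings and review
# sites whose p-values are implausibly small.
```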

Keywords: statistical monitoring, data anomalies, clinical trials, traditional monitoring

Procedia PDF Downloads 66
164 Pavement Management for a Metropolitan Area: A Case Study of Montreal

Authors: Luis Amador Jimenez, Md. Shohel Amin

Abstract:

Pavement performance models are based on projections of observed traffic loads, which makes it uncertain to study funding strategies in the long run if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic developments could change traffic flows. This study addresses both issues through a case study of the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A back-propagation neural network (BPN) with a generalized delta rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming of lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget required to keep arterial and local roads in Montreal in good condition. Montreal drivers prefer the use of public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, and the number of ESALs is expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state appears to be reached.
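
A minimal sketch of the ESAL accumulation from predicted AADT, truck shares, and truck factors is shown below; the function name, the compound growth formulation, and the example inputs are assumptions for illustration, not the Montreal values used in the study.

```python
def accumulated_esals(aadt, truck_shares, truck_factors, years, growth_rate=0.02):
    """Accumulate equivalent single axle loads (ESALs) over an analysis period
    from annual average daily traffic (AADT), truck-class shares of AADT,
    per-class truck factors (ESALs per truck pass), and compound growth."""
    total = 0.0
    for year in range(years):
        daily_vehicles = aadt * (1.0 + growth_rate) ** year
        daily_esals = sum(daily_vehicles * share * factor
                          for share, factor in zip(truck_shares, truck_factors))
        total += 365.0 * daily_esals
    return total

# Illustrative inputs only (two truck classes, 2% annual growth):
print(f"{accumulated_esals(20000, [0.05, 0.03], [1.2, 2.4], years=15):.2e}")
```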

Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization

Procedia PDF Downloads 456
163 A Study of the Interactions between the Inter-City Traffic System and the Spatial Structure Evolution in the Yangtze River Delta from Time and Space Dimensions

Authors: Zhang Cong, Cai Runlin, Jia Fengjiao

Abstract:

The evolution of the urban agglomeration spatial structure requires strong support from the inter-city traffic system, and the inter-city traffic system can not only meet the demand for urban agglomeration transportation but also guide economic development. Correctly understanding the relationship between inter-city traffic planning and the urban agglomeration can help the agglomeration develop in coordination with the inter-city traffic system. The Yangtze River Delta is one of the most representative urban agglomerations in China, with strong economic vitality, high city levels, diversified urban spatial form, and improved transport infrastructure. With the promotion of industrial division in the Yangtze River Delta and the regional travel facilitation brought by inter-city traffic, the urban agglomeration is characterized by rapidly increasing inter-city transportation demand, the urbanization of regional traffic, adjacent regional transportation links that cross administrative boundaries, networked corridors, and so on. Therefore, the development of the inter-city traffic system presents new trends and challenges. This paper studies the interactions between the inter-city traffic system and regional economic growth, regional factor flows, and regional spatial structure evolution in the Yangtze River Delta from the two dimensions of time and space. On this basis, the adaptability of the inter-city traffic development mode to the urban agglomeration spatial structure is analyzed. First, the coordination between urban agglomeration planning and inter-city traffic planning is judged at the planning level. Second, the coordination between inter-city traffic elements and industry and population distributions is judged from the perspective of space. Finally, the coordination of the cross-regional planning and construction of the inter-city traffic system is judged. The conclusions can provide an empirical reference for inter-city traffic planning in the Yangtze River Delta region and other urban agglomerations, and are also of great significance for optimizing the spatial allocation and overall operational efficiency of urban agglomerations.

Keywords: evolution, interaction, inter-city traffic system, spatial structure

Procedia PDF Downloads 309
162 Transboundary Pollution after Natural Disasters: Scenario Analyses for Uranium at Kyrgyzstan-Uzbekistan Border

Authors: Fengqing Li, Petra Schneider

Abstract:

Failure of tailings management facilities (TMF) holding radioactive residues is an enormous challenge worldwide and can result in major catastrophes. Particularly in transboundary regions, such failure is most likely to lead to international conflict. This risk occurs in Kyrgyzstan and Uzbekistan, where the current major challenge is the quantification of impacts due to pollution from uranium legacy sites, and especially the impact on river basins after natural hazards (e.g., landslides). By means of GoldSim, a probabilistic simulation model, the amount of tailing material that flows into the river network of Mailuu Suu in Kyrgyzstan after pond failure was simulated for three scenarios, namely 10%, 20%, and 30% of material inputs. Based on the Muskingum-Cunge flood routing procedure, the peak value of the uranium flood wave along the river network was simulated. Among the 23 TMF, 19 ponds are close to the river network. The spatiotemporal distributions of uranium along the river network were then simulated for all 19 ponds under the three scenarios. Taking TP7, which is 30 km from the Kyrgyzstan-Uzbekistan border, as an example, the uranium concentration decreased continuously along the longitudinal gradient of the river network; uranium was observed at the border 45 min after the pond failure, and the highest value was detected after 69 min. The highest concentrations of uranium at the border were 16.5, 33, and 47.5 mg/L under the scenarios of 10%, 20%, and 30% of material inputs, respectively. In comparison to the guideline value for uranium in drinking water (30 µg/L) provided by the World Health Organization, the observed concentrations at the border were 550 to 1583 times higher. In order to mitigate the transboundary impact of a radioactive pollutant release, an integrated framework consisting of three major strategies was proposed: the short-term strategy can be used in case of an emergency event, the medium-term strategy allows both countries to handle the TMF efficiently based on the benefit-sharing concept, and the long-term strategy aims to rehabilitate the site through the relocation of all TMF.
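
For orientation, a minimal sketch of classic Muskingum channel routing is given below; the study used the Muskingum-Cunge variant, in which K and X are derived from reach length, wave celerity, and channel geometry, so the function name, parameterization, and initial condition here are illustrative assumptions only.

```python
def muskingum_route(inflow, K, X, dt):
    """Route a flood (or dissolved contaminant) wave through one river reach
    with the classic Muskingum scheme: O[t+1] = C0*I[t+1] + C1*I[t] + C2*O[t].
    K (reach travel time) and dt share the same time units; 0 <= X <= 0.5."""
    denom = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / denom
    c1 = (dt + 2.0 * K * X) / denom
    c2 = (2.0 * K * (1.0 - X) - dt) / denom
    outflow = [inflow[0]]                 # assume an initial steady state
    for i_prev, i_next in zip(inflow[:-1], inflow[1:]):
        outflow.append(c0 * i_next + c1 * i_prev + c2 * outflow[-1])
    return outflow

# Chaining reaches downstream and scaling by discharge yields the arrival time
# and peak concentration of the wave at the border.
```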

Keywords: Central Asia, contaminant transport modelling, radioactive residue, transboundary conflict

Procedia PDF Downloads 112
161 Impact of Terrorism as an Asymmetrical Threat on the State's Conventional Security Forces

Authors: Igor Pejic

Abstract:

The main focus of this research will be on analyzing the correlative links between terrorism as an asymmetrical threat and the consequences it leaves on conventional security forces. The methodology behind the research will include qualitative research methods focusing on comparative analysis of books, scientific papers, documents and other sources, in order to deduce, explore and formulate the results of the research. With the coming of the 21st century and the rise of a multi-polar world, new threats quickly emerged. The realist approach in international relations holds that relations among nations are in a constant state of anarchy, since there are no definitive rules and the distribution of power varies widely. International relations are further characterized by an egoistic and self-oriented human nature, anarchy or the absence of a higher government, security concerns and a lack of morality. The asymmetry of power is also reflected in countries' security capabilities and their ability to project power. With the coming of the new millennium and the rising multi-polar world order, the asymmetry of power can also be counted as an important trait of global society, one which has consequently brought new threats. Among various others, terrorism is probably the most well-known, well-established and widespread asymmetric threat. In today's global political arena, terrorism is used by state and non-state actors to fulfill their political agendas. Terrorism is used as an all-inclusive tool for regime change, subversion or revolution. Although the nature of terrorist groups is somewhat inconsistent, terrorism as a security and social phenomenon has one constant, which is reflected in its political dimension. The state's security apparatus, embodied in the form of conventional armed forces, is now becoming fragile, unable to tackle new threats and, to a certain extent, outdated. Conventional security forces were designed to defend against or engage an exterior threat that is more or less symmetric and visible. On the other hand, terrorism as an asymmetrical threat is part of hybrid, special or asymmetric warfare, in which specialized units, institutions or facilities represent the primary pillars of security. In today's global society, terrorism is probably the most acute problem, one which can paralyze entire countries and their political systems. This problem, however, cannot be engaged on an open field of battle; rather, it requires a different approach in which conventional armed forces cannot be used traditionally and their role must be adjusted. The research will try to shed light on the phenomenon of modern-day terrorism and to establish its correlation with states' conventional armed forces. States are obliged to adjust their security apparatus to the new reality of global society and to terrorism as an asymmetrical threat, which is a side-product of an unbalanced world.

Keywords: asymmetrical warfare, conventional forces, security, terrorism

Procedia PDF Downloads 259
160 Experimental Modeling of Spray and Water Sheet Formation Due to Wave Interactions with Vertical and Slant Bow-Shaped Model

Authors: Armin Bodaghkhani, Bruce Colbourne, Yuri S. Muzychka

Abstract:

The process of spray-cloud formation and the flow kinematics produced by breaking wave impact on vertical and slant lab-scale bow-shaped models were experimentally investigated. Bubble Image Velocimetry (BIV) and Image Processing (IP) techniques were applied to study the various types of wave-model impacts. Different wave characteristics were generated in a tow tank to investigate the effects of wave characteristics, such as wave phase velocity and wave steepness, on droplet velocities and on the behavior of the spray-cloud formation process. The phase ensemble-averaged vertical velocity and turbulent intensity were computed. A high-speed camera and diffused LED backlights were utilized to capture images for further post-processing. Various pressure sensors and capacitive wave probes were used to measure the wave impact pressure and the free surface profile at different locations on the model and in the wave tank, respectively. Droplet sizes and velocities were measured using the BIV and IP techniques, which trace bubbles and droplets by correlating the texture in successive images. The impact pressure and droplet size distributions were compared to several previous experimental models, and satisfactory agreement was achieved. The distribution of droplets in front of both models is presented. Owing to the highly transient nature of spray formation, the drag coefficient for several stages of this transient displacement, for various droplet size ranges and different Reynolds numbers, was calculated based on the ensemble-average method. From the experimental results, the slant model produces less spray than the vertical model, and the droplets generated from the wave impact with the slant model have lower velocities than those from the vertical model.
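
A minimal sketch of how the phase ensemble-averaged vertical velocity and a turbulence intensity could be computed from repeated BIV realizations is shown below; the function name, array layout, and normalization of the intensity by the local mean are assumptions for illustration, not the authors' processing chain.

```python
import numpy as np

def phase_ensemble_stats(vertical_velocity_fields):
    """Phase ensemble-averaged vertical velocity and a turbulence intensity
    from repeated BIV realizations of the same wave-impact phase.
    Input shape: (n_runs, ny, nx)."""
    v = np.asarray(vertical_velocity_fields, dtype=float)
    v_mean = v.mean(axis=0)                              # ensemble average
    v_rms = np.sqrt(((v - v_mean) ** 2).mean(axis=0))    # RMS of fluctuations
    intensity = v_rms / (np.abs(v_mean) + 1e-12)         # local normalization
    return v_mean, intensity
```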

Keywords: spray characteristics, droplet size and velocity, wave-body interactions, bubble image velocimetry, image processing

Procedia PDF Downloads 297
159 Quality Assurance Comparison of MapCHECK 2, EPID, and Gafchromic® EBT3 Film for IMRT Treatment Planning

Authors: Khalid Iqbal, Saima Altaf, M. Akram, Muhammad Abdur Rafaye, Saeed Ahmad Buzdar

Abstract:

Objective: Verification of patient-specific intensity-modulated radiation therapy (IMRT) plans using different 2-D detectors has become increasingly popular due to their ease of use and immediate readout of results. The purpose of this study was to test and compare various 2-D detectors for dosimetric quality assurance (QA) of IMRT, with the aim of finding alternative QA methods. Material and Methods: Twenty IMRT patients (12 brain and 8 prostate) were planned on the Eclipse treatment planning system for a Varian Clinac DHX at both 6 MV and 15 MV. Verification plans for all patients were also created and delivered to MapCHECK 2, the EPID (electronic portal imaging device) and Gafchromic EBT3 film. Gamma index analyses were performed using different criteria to evaluate and compare the dosimetric results. Results: For EBT3 film, statistical analysis shows passing rates of 99.55%, 97.23% and 92.9% at 6 MV and 99.53%, 98.3% and 94.85% at 15 MV using criteria of ±5%/3 mm, ±3%/3 mm and ±3%/2 mm, respectively, for the brain cases, whereas using the ±5%/3 mm and ±3%/3 mm gamma evaluation criteria, the passing rates for the prostate cases are 94.55% and 90.45% at 6 MV and 95.25% and 95% at 15 MV. MapCHECK 2 results show passing rates of 98.17%, 97.68% and 86.78% at 6 MV and 94.87%, 97.46% and 88.31% at 15 MV for the brain cases using the criteria of ±5%/3 mm, ±3%/3 mm and ±3%/2 mm, whereas the ±5%/3 mm and ±3%/3 mm criteria give passing rates of 97.7% and 96.4% at 6 MV and 98.75% and 98.05% at 15 MV for the prostate cases. EPID gamma analysis at 6 MV shows passing rates of 99.56%, 98.63% and 98.4% for the brain cases and 100% and 99.9% for the prostate cases, using the same criteria as for MapCHECK 2 and EBT3 film. Conclusion: The results demonstrate that excellent passing rates were obtained for all dosimeters when compared with the planar dose distributions for 6 MV as well as 15 MV IMRT fields. EPID results are better than those of EBT3 film and MapCHECK 2; part of this difference is likely real, and part is due to film handling and differences in treatment setup verification, which contribute to dose distribution differences. Overall, all three dosimeters exhibit results within the limits recommended in AAPM Report 120.
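
For reference, a minimal brute-force sketch of a global 2-D gamma analysis (the type of evaluation behind the passing rates above) is given below; the function name, common-grid assumption, and the 10% low-dose cutoff are illustrative choices, not the settings of the commercial analysis software used in the study.

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm, dose_frac=0.03, dta_mm=3.0,
                    low_dose_cutoff=0.10):
    """Brute-force global 2-D gamma analysis (e.g. 3%/3 mm) for two dose maps
    on the same grid.  dose_frac is taken relative to the reference maximum;
    returns the percentage of evaluated reference points with gamma <= 1."""
    ref = np.asarray(ref, dtype=float)
    evl = np.asarray(evl, dtype=float)
    d_max = ref.max()
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    yy_mm, xx_mm = yy * spacing_mm, xx * spacing_mm
    passed = total = 0
    for (i, j), d_ref in np.ndenumerate(ref):
        if d_ref < low_dose_cutoff * d_max:
            continue                                     # skip low-dose region
        dist2 = (yy_mm - i * spacing_mm) ** 2 + (xx_mm - j * spacing_mm) ** 2
        gamma2 = dist2 / dta_mm ** 2 + (evl - d_ref) ** 2 / (dose_frac * d_max) ** 2
        if gamma2.min() <= 1.0:
            passed += 1
        total += 1
    return 100.0 * passed / total
```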

Keywords: Gafchromic EBT3, radiochromic film dosimetry, IMRT verification, EPID

Procedia PDF Downloads 415