Search results for: heavy-tailed distributions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 642

162 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation

Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell

Abstract:

Composite modeling of consolidation processes plays an important role in process and part design by indicating the formation of possible unwanted features prior to expensive experimental trial-and-development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest and for which different models have been proposed. Modeling errors and the statistical errors which arise from this fitting will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation was proposed, which can be readily implemented in various nonlinear finite element packages; in our case, FEniCS was chosen. The coefficients are assumed uncertain, and therefore the distribution of parameters is learned using Markov Chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments. There are good statistical reasons why this is not a rigorous approach to take. To overcome these challenges, a hierarchical Bayesian framework was proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using Maximum Entropy methods so that it can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs. With this, the paper, as far as the authors are aware, represents the first stochastic finite element implementation in composite process modelling.
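As an illustration of the MCMC step described in this abstract, the following is a minimal sketch of random-walk Metropolis sampling of a single uncertain model coefficient from synthetic data; the model form, prior, noise level, and data are assumptions for illustration and are not taken from the paper.

import numpy as np

# Illustrative one-parameter model and synthetic "experimental" data.
def model(theta, x):
    return theta * x**2          # stand-in for a hyperelastic polynomial response

rng = np.random.default_rng(0)
x = np.linspace(0.1, 1.0, 20)
data = model(2.5, x) + rng.normal(0, 0.05, x.size)   # noisy observations

def log_posterior(theta, sigma=0.05):
    if theta <= 0:
        return -np.inf                               # simple positivity prior
    resid = data - model(theta, x)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis sampling of the parameter distribution.
samples, theta = [], 1.0
for _ in range(20000):
    prop = theta + rng.normal(0, 0.1)
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(theta):
        theta = prop
    samples.append(theta)

print(np.mean(samples[5000:]), np.std(samples[5000:]))   # posterior mean and spread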

Keywords: data-driven, material consolidation, stochastic finite elements, surrogate models

Procedia PDF Downloads 121
161 Relationship between Monthly Shrimp Catch Rates and the Oceanography-Related Variables

Authors: Hussain M. Al-foudari, Weizhong Chen, James M. Bishop

Abstract:

Correlations between oceanographic variables and the monthly catch rates of total shrimp and of each of the major species (Penaeus semisulcatus, Metapenaeus affinis and Parapenaeopsis stylifera) showed significant differences under particular conditions. Catches of P. semisulcatus were generally positively correlated with temperature, i.e., the higher the temperature, the higher the catch rate, while those of M. affinis and P. stylifera were negatively correlated with temperature, i.e., high catch rates occurred in low-temperature waters. Thus, during the months of January and April, P. semisulcatus preferred waters with high temperature, usually the offshore and southern areas, while M. affinis and P. stylifera preferred waters with low temperature, usually inshore and northern areas. The relationships between the catch rate of P. semisulcatus and salinity were less clear. Results indicated that although salinity was one of the factors affecting the distribution of P. semisulcatus, it was not the principal factor, and impacts from other variables, such as temperature, might overshadow the correlation between the catch rates of P. semisulcatus and salinity. The relationship between shrimp catch rates and dissolved oxygen (DO) also showed mixed results. The catch rates of M. affinis increased with a decrease of surface DO in November 2013, but decreased with lower bottom DO in December. These results indicated that DO might be a factor affecting the distributions of the shrimp; however, the true correlation between catch rate and DO might easily be overshadowed by other environmental variables. Catch rates of P. semisulcatus did not show any relationship with depth. P. semisulcatus is a migratory species and is widely distributed in Kuwait's waters. During the shrimp season from July through December, P. semisulcatus occurs in almost all areas of Kuwait's waters irrespective of water depth. The catch rates of M. affinis and P. stylifera, however, showed clear relationships with depth. Both species had significantly higher catch rates in shallower waters, indicative of their restricted distribution.

Keywords: Kuwait, Penaeus semisulcatus, Metapenaeus affinis, Parapenaeopsis stylifera, Arabian Gulf

Procedia PDF Downloads 466
160 Outcome at the Extreme of Viability: A Single-Centre Experience

Authors: Antonia Harold-Barry, Eugene Dempsey

Abstract:

Background: The objective is to examine the survival and outcome of infants born under 26 weeks gestation in an Irish tertiary maternity hospital from 2007-2016 and to describe the survival and neurodevelopmental outcomes of these extremely preterm infants. Method: The study population comprised 132 infants born at 23, 24, and 25 weeks in Cork University Maternity Hospital from 2007 to 2016. Ethical approval was granted by the Cork Clinical Research Ethics Committee. Patient details were obtained from the Vermont Oxford and Badger Networks. Survival rates and Bayley scores were calculated to assess neurodevelopmental outcomes. Statistical analysis with SPSS included frequencies, distributions, and comparisons between data from 2007-2011 and 2012-2016. Results: The overall survival rate was 63%. Of the surviving babies, 61% had Bayley scores calculated. Survival stood at 39% for delivery at 23 weeks, 50% at 24 weeks, and 83% at 25 weeks. The 2012 to 2016 cohort showed further increases in survival, with 50% of babies surviving at 23 weeks, 58% at 24 weeks, and 89% at 25 weeks; the corresponding figures for 2007-2011 are 20%, 39%, and 75%. The association between gestational age and the incidence of periventricular leukomalacia was statistically significant, with a p-value of 0.022. Gestational age and delivery room deaths had a p-value of 0.025, as did gestational age and birth weight. A comparison of the two cohorts (2007-2011 and 2012-2016) with respect to the administration of antenatal steroids showed a statistically significant p-value of 0.044. Conclusion: There is less morbidity and mortality in infants born at 25 weeks than at 23 or 24 weeks. Survival of extremely premature infants has increased significantly over the past ten years. Survival rates with normal neurodevelopmental outcomes are comparable with international standards and reflect positive changes in attitude and practices in neonatal intensive care. This study will inform parents about the potential outcomes of extreme prematurity and policy regarding the management of extreme prematurity.

Keywords: extreme of viability, neurodevelopmental outcome, periventricular leukomalacia, prematurity

Procedia PDF Downloads 50
159 Babouchite Siliceous Rocks: Mineralogical and Geochemical Characterization

Authors: Ben Yahia Nouha, Sebei Abdelaziz, Boussen Slim, Chaabani Fredj

Abstract:

The present work aims to determine the mineralogical and geochemical characteristics of siliceous rock levels and to clarify their origin through geochemical arguments. The study was performed on the Tabarka-Babouch deposit, which belongs to northwestern Tunisia and spreads over the late Miocene. The mineralogy was investigated by XRD and the chemistry by ICP-AES. The X-ray diffraction (XRD) patterns of the powdered natural rocks show that the Babouchite is composed mainly of quartz and clay minerals (smectite, illite, and kaolinite). The siliceous rocks contain quartz as the major silica mineral, characterized by two broad reflections in the vicinity of 4.26 Å and 3.34 Å, with a total lack of opal-CT; this confirms that these siliceous rocks are quartz-rich (up to 90%). The total amount of clay minerals (ACM), constituted essentially by smectite in close association with illite and kaolinite, is relatively high, with percentages varying from 7 to 46%. Chemical analyses show that the major oxide contents are consistent with the mineralogical observations and reveal that the siliceous rocks of the Babouchite formation are rich in SiO₂. The whole-rock chemical analyses indicate that the SiO₂ content is generally in the range 73-91 wt.% (average: 80.43 wt.%). The concentration of Al₂O₃, which represents the detrital fraction in the studied samples, varies from 3.99 to 10.55 wt.%, and Fe₂O₃ from 0.73 to 4.41 wt.%. The low CaO contents show that carbonates occur only as impurities. These rocks also contain low amounts of some other oxides, namely Na₂O, MgO, K₂O, and TiO₂. The trace element distributions also vary, with high Sr (up to 84.55 ppm), Cu (5-127 ppm), and Zn (up to 124 ppm), and relatively lower concentrations of Co (2.43-25.54 ppm), Cr (10-61 ppm) and Pb (8-22 ppm). The Babouchite siliceous rocks of northwestern Tunisia have generally high Al/(Al+Fe+Mn) values (0.63-0.83). The majority of Al/(Al+Fe+Mn) values are close to 0.6, the value of the biogenic end-member. Thus, the Al/(Al+Fe+Mn) values reveal the biogenic origin of the silica.
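A minimal sketch of how the Al/(Al+Fe+Mn) provenance ratio quoted above can be computed from whole-rock oxide analyses, assuming illustrative oxide contents (not the paper's data) and a conversion of oxide wt.% to elemental wt.% before taking the ratio:

# Hedged example: Al/(Al+Fe+Mn) from whole-rock oxide analyses (illustrative values).
def element_from_oxide(oxide_wt, elem_mass, n_elem, oxide_mass):
    return oxide_wt * n_elem * elem_mass / oxide_mass

al = element_from_oxide(7.0, 26.98, 2, 101.96)    # from Al2O3
fe = element_from_oxide(2.0, 55.85, 2, 159.69)    # from Fe2O3
mn = element_from_oxide(0.05, 54.94, 1, 70.94)    # from MnO

ratio = al / (al + fe + mn)
print(round(ratio, 2))   # ~0.7; values near 0.6 indicate the biogenic end-member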

Keywords: siliceous rocks, Babouchite formation, XRD, chemical analysis, biogenic silica, Northwestern of Tunisia

Procedia PDF Downloads 104
158 Variation of Manning’s Coefficient in a Meandering Channel with Emergent Vegetation Cover

Authors: Spandan Sahu, Amiya Kumar Pati, Kishanjit Kumar Khatua

Abstract:

Vegetation plays a major role in deciding the flow parameters in an open channel, and it enhances the aesthetic view of the revetments. The major types of vegetation in rivers typically comprise herbs, grasses, weeds, trees, etc. The vegetation in an open channel usually consists of aquatic plants with complete submergence, partial submergence, or floating habit. The presence of vegetation can bring both benefits and problems. The major benefit of aquatic plants is that they reduce soil erosion, which helps provide the water with a free surface to move on without hindrance; the obvious problems are that they retard the flow of water and reduce the hydraulic capacity of the channel. The degree to which the flow parameters are affected depends upon the density of the vegetation, the degree of submergence, the vegetation pattern, and the vegetation species. Vegetation in an open channel provides resistance to flow, which in turn provides a background for studying the varying trends in flow parameters when vegetative growth is present on the channel surface. In this paper, an experiment has been conducted on a meandering channel having a sinuosity of 1.33 with rigid vegetation cover to investigate the effect on flow parameters and the variation of Manning's n with the denseness of vegetation, the vegetation pattern, and the submergence criteria. The measurements have been carried out in four different cross-sections: two on the trough portion of the meanders and two on the crest portion. In this study, the analytical solution of Shiono and Knight (SKM) for lateral distributions of depth-averaged velocity and bed shear stress has been taken into account. Dimensionless eddy viscosity and bed friction have been incorporated to modify the SKM to provide more accurate results. A mathematical model has been formulated for a comparative analysis with the results obtained from the Shiono-Knight Method.
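For reference, a commonly quoted form of the SKM depth-averaged streamwise momentum balance, on which such lateral velocity distributions are based, is sketched below; the notation here is an assumption (H local depth, S_0 bed slope, f bed friction factor, s panel side slope, lambda dimensionless eddy viscosity, U_d depth-averaged velocity, Gamma secondary-flow term), and the exact form used in the paper may differ.

\rho g H S_0 \;-\; \frac{f}{8}\,\rho\,U_d^{2}\sqrt{1+\frac{1}{s^{2}}} \;+\; \frac{\partial}{\partial y}\left[\rho\,\lambda\,H^{2}\left(\frac{f}{8}\right)^{1/2} U_d\,\frac{\partial U_d}{\partial y}\right] \;=\; \Gamma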

Keywords: bed friction, depth averaged velocity, eddy viscosity, SKM

Procedia PDF Downloads 116
157 Capacity Oversizing for Infrastructure Sharing Synergies: A Game Theoretic Analysis

Authors: Robin Molinier

Abstract:

Industrial symbiosis (IS) relies on two basic modes of cooperation between organizations: infrastructure/service sharing and resource substitution (the use of waste materials, fatal energy, and recirculated utilities for production). The former consists in the intensification of use of an asset and thus requires comparing the incremental investment cost to be incurred with the stand-alone cost faced by each potential participant to satisfy its own requirements. In order to investigate the way such a cooperation mode can be implemented, we formulate a game theoretic model integrating the grassroot investment decision and the ex-post access pricing problem. In the first period, two actors cooperatively (resp. non-cooperatively) set a level of common (resp. individual) infrastructure capacity oversizing to attract ex-post a potential entrant with a plug-and-play offer (available capacity, tariff). The entrant's requirement is randomly distributed and known only after the investments have taken place. Capacity cost exhibits a sub-additive property, so that there is room for profitable overcapacity setting in the first period under conditions that we derive. The entrant's willingness-to-pay for access to the infrastructure is driven by both her stand-alone cost and the complement cost to be incurred in case she chooses to access an infrastructure whose available capacity is lower than her requirement level. The expected complement cost function is thus derived, and we show that it is decreasing, convex, and shaped by the entrant's requirement distribution function. For both uniform and triangular distributions, the optimal capacity level is obtained in the cooperative setting and the equilibrium levels are determined in the non-cooperative case. Regarding the latter, we show that competition is deterred by the first-period investor with the highest requirement level. Using the non-cooperative game outcomes, which give lower bounds for the profit-sharing problem in the cooperative one, we solve the whole game and describe situations supporting sharing agreements.
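A minimal sketch of the expected complement cost idea described above, assuming a uniformly distributed entrant requirement on [a, b] and a linear complement cost on the unmet part of the requirement; the cost form and numbers are illustrative assumptions, not the paper's model:

# Expected complement cost for an entrant with requirement Q ~ Uniform(a, b),
# facing installed spare capacity K and unit complement cost c, under the
# illustrative assumption that the complement cost is c * max(0, Q - K).
def expected_complement_cost(K, a=0.0, b=10.0, c=1.0):
    if K >= b:
        return 0.0
    if K <= a:
        return c * ((a + b) / 2.0 - K)
    return c * (b - K) ** 2 / (2.0 * (b - a))

for K in (0, 2, 4, 6, 8, 10):
    print(K, round(expected_complement_cost(K), 3))   # decreasing and convex in K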

Keywords: capacity, cooperation, industrial symbiosis, pricing

Procedia PDF Downloads 413
156 Clinical Efficacy of Nivolumab and Ipilimumab Combination Therapy for the Treatment of Advanced Melanoma: A Systematic Review and Meta-Analysis of Clinical Trials

Authors: Zhipeng Yan, Janice Wing-Tung Kwong, Ching-Lung Lai

Abstract:

Background: Advanced melanoma accounts for the majority of skin cancer deaths due to its poor prognosis. Nivolumab and ipilimumab are monoclonal antibodies targeting programmed cell death protein 1 (PD-1) and cytotoxic T-lymphocyte antigen 4 (CTLA-4), respectively. Nivolumab and ipilimumab combination therapy has been proven to be effective for advanced melanoma. This systematic review and meta-analysis evaluates its clinical efficacy and adverse events. Method: A systematic search was done on databases (Pubmed, Embase, Medline, Cochrane) on 21 June 2020. Search keywords were nivolumab, ipilimumab, melanoma, and randomised controlled trials. Clinical trials fulfilling the inclusion criteria were selected to evaluate the efficacy of combination therapy in terms of prolongation of progression-free survival (PFS), overall survival (OS), and objective response rate (ORR). The odds ratios and distributions of grade 3 or above adverse events were documented. Subgroup analysis was performed based on PD-L1 expression status and BRAF mutation status. Results: Compared with nivolumab monotherapy, the hazard ratios of PFS and OS and the odds ratio of ORR for combination therapy were 0.64 (95% CI, 0.48-0.85; p=0.002), 0.84 (95% CI, 0.74-0.95; p=0.007) and 1.76 (95% CI, 1.51-2.06; p < 0.001), respectively. Compared with ipilimumab monotherapy, the hazard ratios of PFS and OS and the odds ratio of ORR were 0.46 (95% CI, 0.37-0.57; p < 0.001), 0.54 (95% CI, 0.48-0.61; p < 0.001) and 6.18 (95% CI, 5.19-7.36; p < 0.001), respectively. For combination therapy, the odds ratios of grade 3 or above adverse events were 4.71 (95% CI, 3.57-6.22; p < 0.001) compared with nivolumab monotherapy and 3.44 (95% CI, 2.49-4.74; p < 0.001) compared with ipilimumab monotherapy. A high PD-L1 expression level and BRAF mutation were associated with better clinical outcomes in patients receiving combination therapy. Conclusion: Combination therapy is effective for the treatment of advanced melanoma. Adverse events were common but manageable. Better clinical outcomes were observed in patients with high PD-L1 expression levels and positive BRAF mutation status.

Keywords: nivolumab, ipilimumab, advanced melanoma, systematic review, meta-analysis

Procedia PDF Downloads 108
155 Genesis and Survival Chance of Autotriploid in Natural Diploid Population of Lilium lancifolium Thunb

Authors: Ji-Won Park, Jong-Wha Kim

Abstract:

Triploid L. lancifolium have a wide geographic distribution. By contrast, diploid L. lancifolium have limited distributions in the islands and coastal regions of the South and West Korean Peninsula and northern Tsushima Island, Japan. L. lancifolium diploids and triploids are not sympatrically distributed with other lily species or ploidy lines in the West Sea and South Sea Islands of the Korean Peninsula. This observation raises the following questions: 'Why have autotriploid L. lancifolium never been observed on those isolated islands?' and 'What mechanism excludes the occurrence of autotriploids, if they arise?'. To determine the occurrence and survival of triploid plants in natural diploid populations of tiger lily (Lilium lancifolium), ploidy analysis was conducted on natural open-pollinated seeds produced from plants grown on isolated islands, and on hybrid seeds produced by artificial crossing between plant populations originating on different Korean islands. Normal seeds were classified into five grades depending on the ratio of embryo/endosperm lengths, including 5/5, 4/5, 3/5, 2/5, and 1/5. Triploids were not observed among seedlings produced from natural open pollinations on isolated islands. Triploids were detected only in seedlings of underdeveloped seed grades (3/5 and 2/5) from artificial crosses between populations from different isolated islands. The triploid occurrence frequency was calculated as 0.0 for natural open-pollinated seedlings and 0.000582 for artificial crosses (6 triploids from 10,303 seedlings). Triploids were produced from crosses between isolated populations located at least 70 km apart; no triploids were detected in inter-population crosses of plants originating on the same islands. Triploid seedlings have very low viability in soil. We analyzed factors affecting triploid occurrence and survival in natural diploid populations of L. lancifolium. The results suggest that triploids originate from fertilization between plants that are genetically isolated due to geographical isolation and/or genotypic differences.

Keywords: Lilium lancifolium, autotriploid, natural population, genetic distance, 2n female gamete

Procedia PDF Downloads 498
154 Development of Green Cement, Based on Partial Replacement of Clinker with Limestone Powder

Authors: Yaniv Knop, Alva Peled

Abstract:

Over the past few years, there has been growing interest in the development of Portland composite cement by partial replacement of the clinker with mineral additives. The motivations to reduce the clinker content are threefold: (1) ecological, due to lower emission of CO₂ to the atmosphere; (2) economical, due to cost reduction; and (3) scientific/technological, due to improvement of performance. Among the mineral additives being used and investigated, limestone is one of the most attractive, as it is natural, available, and low in cost. The goal of the research is to develop a green cement by partial replacement of the clinker with limestone powder while improving the performance of the cement paste. This work studied blended cements with three limestone powder particle diameters: smaller than, larger than, and similarly sized to the clinker particle. Blended cement with limestone consisting of one particle size distribution and limestone consisting of a combination of several particle sizes were studied and compared in terms of hydration rate, hydration degree, and the water demand to achieve normal consistency. The performance of these systems was also compared with that of the original cement (without added limestone). It was found that the ability to replace an active material with an inert additive, while achieving improved performance, can be obtained by increasing the packing density of the cement-based particles. This may be achieved by replacing the clinker with limestone powders having a combination of several different particle size distributions. Mathematical and physical models were developed to simulate the setting history from initial to final setting time and to predict the packing density of blended cement with limestone of different sizes and various contents. Besides the effect of limestone as an inert additive on the packing density of the blended cement, the influence of the limestone particle size on three different chemical reactions was studied: hydration of the cement, carbonation of the calcium hydroxide, and the reactivity of the limestone with the hydration reaction products. The main results and developments will be presented.

Keywords: packing density, hydration degree, limestone, blended cement

Procedia PDF Downloads 257
153 Numerical Study on the Effects of Truncated Ribs on Film Cooling with Ribbed Cross-Flow Coolant Channel

Authors: Qijiao He, Lin Ye

Abstract:

To evaluate the effect of ribs on the internal flow structure in the film hole and on the film cooling performance of the outer surface, this numerical study investigates the effects of rib configuration on film cooling performance with a ribbed cross-flow coolant channel. The baseline smooth case and three ribbed cases, including a continuous rib case and two cross-truncated rib cases with different arrangements, are studied. The distributions of adiabatic film cooling effectiveness and heat transfer coefficient are obtained at blowing ratios of 0.5 and 1.0. A commercial steady RANS (Reynolds-averaged Navier-Stokes) code with the realizable k-ε turbulence model and enhanced wall treatment was used for the numerical simulations. The numerical model is validated against available experimental data. The two cross-truncated rib cases produce approximately identical cooling effectiveness compared with the smooth case at the lower blowing ratio. The continuous rib case significantly outperforms the other cases. With the increase of blowing ratio, the cases with ribs are inferior to the smooth case, especially in the upstream region. The cross-truncated rib I case produces the highest cooling effectiveness among the studied ribbed channel cases. It is found that film cooling effectiveness deteriorates with the increase of the spiral intensity of the cross-flow inside the film hole; lower spiral intensity leads to better film coverage and thus results in better cooling effectiveness. The distinct relative merits among the cases at different blowing ratios are explored based on this dominant mechanism. With regard to the heat transfer coefficient, the smooth case has higher heat transfer intensity than the ribbed cases at the studied blowing ratios, and the laterally averaged heat transfer coefficient of the cross-truncated rib I case is higher than that of the cross-truncated rib II case.

Keywords: cross-flow, cross-truncated rib, film cooling, numerical simulation

Procedia PDF Downloads 110
152 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions, kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function, and they have been applied to improve target detection and image classification for hyperspectral images. The double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and has good performance in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term "nearest proportion" used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small data-size situations; hence, an improved estimator based on shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to retaining the advantages of DNP, KDNP surpasses DNP in the experimental results. According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
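A minimal sketch of the kernel extension idea referred to here, using kernel PCA from scikit-learn on synthetic two-ring data; the library, toy data, and kernel parameters are assumptions for illustration, and the paper's KDNP method itself is not a library routine.

import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# Two concentric rings: linearly inseparable in the input space.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Linear PCA keeps the rings mixed; an RBF kernel maps them to a feature
# space where the leading component tends to separate the classes.
z_lin = PCA(n_components=1).fit_transform(X).ravel()
z_rbf = KernelPCA(n_components=1, kernel="rbf", gamma=10.0).fit_transform(X).ravel()

for name, z in (("linear PCA", z_lin), ("kernel PCA", z_rbf)):
    print(name, "class means:", round(z[y == 0].mean(), 3), round(z[y == 1].mean(), 3))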

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest feature extraction

Procedia PDF Downloads 299
151 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics

Authors: Titus A. Beu

Abstract:

Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out by its high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints for efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In a second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In a third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects which are crucial for the realistic modeling of DNA-PEI polyplexes, such as options for treating electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism and provide time scales for DNA polyplex formation as a function of PEI chain size and protonation pattern. The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles, rather than on changes of the DNA-strand curvature. The gained insights are expected to be of significant help in designing effective gene-delivery applications.
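A minimal sketch of the Boltzmann inversion step mentioned in this abstract, deriving a coarse-grained bond potential from a bond-length distribution; in the actual workflow the distributions come from the all-atom trajectories, and the synthetic data and values below are illustrative assumptions.

import numpy as np

kB_T = 2.494   # kJ/mol at ~300 K

# Synthetic stand-in for a bond-length distribution sampled from AA simulations.
rng = np.random.default_rng(0)
lengths = rng.normal(0.33, 0.02, 100000)                  # nm
hist, edges = np.histogram(lengths, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Boltzmann inversion: V(r) = -kB*T * ln P(r), shifted so the minimum is zero.
mask = hist > 0
V = -kB_T * np.log(hist[mask])
V -= V.min()
for r, v in list(zip(centers[mask], V))[::10]:
    print(f"{r:.3f} nm  {v:.2f} kJ/mol")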

Keywords: DNA condensation, gene-delivery, polyethyleneimine, molecular dynamics

Procedia PDF Downloads 94
150 Examining Statistical Monitoring Approach against Traditional Monitoring Techniques in Detecting Data Anomalies during Conduct of Clinical Trials

Authors: Sheikh Omar Sillah

Abstract:

Introduction: Monitoring is an important means of ensuring the smooth implementation and quality of clinical trials. For many years, traditional site monitoring approaches have been critical in detecting data errors but not optimal in identifying fabricated and implanted data as well as non-random data distributions that may significantly invalidate study results. The objective of this paper was to provide recommendations, based on best statistical monitoring practices, for detecting data-integrity issues suggestive of fabrication and implantation early in the study conduct to allow implementation of meaningful corrective and preventive actions. Methodology: Electronic bibliographic databases (Medline, Embase, PubMed, Scopus, and Web of Science) were used for the literature search, and both qualitative and quantitative studies were sought. Search results were uploaded into the Eppi-Reviewer software, and only publications written in the English language from 2012 onwards were included in the review. Gray literature not considered to present reproducible methods was excluded. Results: A total of 18 peer-reviewed publications were included in the review. The publications demonstrated that traditional site monitoring techniques are not efficient in detecting data anomalies. By specifying project-specific parameters such as laboratory reference range values, visit schedules, etc., together with appropriate interactive data monitoring, statistical monitoring can offer study teams early signals of data anomalies. The review further revealed that statistical monitoring is useful for identifying unusual data patterns that might reveal issues that could impact data integrity or potentially impact study participants' safety. However, subjective measures may not be good candidates for statistical monitoring. Conclusion: The statistical monitoring approach requires a combination of education, training, and experience sufficient to implement its principles in detecting data anomalies for the statistical aspects of a clinical trial.

Keywords: statistical monitoring, data anomalies, clinical trials, traditional monitoring

Procedia PDF Downloads 45
149 Pavement Management for a Metropolitan Area: A Case Study of Montreal

Authors: Luis Amador Jimenez, Md. Shohel Amin

Abstract:

Pavement performance models are based on projections of observed traffic loads, which makes it uncertain to study funding strategies in the long run if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic evolution could change traffic flows. This study addresses both issues through a case study for the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A backpropagation neural network (BPN) method with a generalized delta rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming for lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to keep arterial and local roads in Montreal in good condition. Montreal drivers prefer the use of public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, while ESALs are expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state seems to be reached.
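A minimal sketch of the ESAL accumulation step described above, assuming AADT is re-simulated every 5 years and combined with an observed truck share and a truck (load equivalency) factor; the growth rate, shares, and factors are illustrative assumptions, not the Montreal values.

# Hedged sketch: annual ESALs from AADT, a truck share, and a truck factor,
# accumulated over 50 years with AADT updated every 5 years.
def annual_esals(aadt, truck_share, truck_factor, lane_factor=0.5):
    return aadt * truck_share * truck_factor * lane_factor * 365

aadt, growth = 20000, 0.014              # vehicles/day and assumed annual growth
total = 0.0
for year in range(50):
    if year % 5 == 0:
        current_aadt = aadt * (1 + growth) ** year   # stand-in for the travel demand model
    total += annual_esals(current_aadt, truck_share=0.08, truck_factor=1.2)
print(f"{total:.3e} accumulated ESALs over 50 years")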

Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization

Procedia PDF Downloads 431
148 A Study of the Interactions between the Inter-City Traffic System and the Spatial Structure Evolution in the Yangtze River Delta from Time and Space Dimensions

Authors: Zhang Cong, Cai Runlin, Jia Fengjiao

Abstract:

The evolution of the urban agglomeration spatial structure requires strong support from the inter-city traffic system, and the inter-city traffic system can not only meet the demand for urban agglomeration transportation but also guide economic development. Correctly understanding the relationship between inter-city traffic planning and the urban agglomeration can help the urban agglomeration develop in coordination with the inter-city traffic system. The Yangtze River Delta is one of the most representative urban agglomerations in China, with strong economic vitality, high city levels, diversified urban spatial form, and improved transport infrastructure. With the promotion of the industrial division of labor in the Yangtze River Delta and the regional travel facilitation brought by inter-city traffic, the urban agglomeration is characterized by rapidly increasing inter-city transportation demand, the urbanization of regional traffic, adjacent regional transportation links that break administrative boundaries, networked channels, and so on. Therefore, the development of the inter-city traffic system presents new trends and challenges. This paper studies the interactions between the inter-city traffic system and regional economic growth, regional factor flows, and regional spatial structure evolution in the Yangtze River Delta from the two dimensions of time and space. On this basis, the adaptability of the inter-city traffic development mode and the urban agglomeration spatial structure is analyzed. First of all, the coordination between urban agglomeration planning and inter-city traffic planning is judged at the planning level. Secondly, the coordination between inter-city traffic elements and the distributions of industries and population is judged from the perspective of space. Finally, the coordination of the cross-regional planning and construction of the inter-city traffic system is judged. The conclusions can provide an empirical reference for inter-city traffic planning in the Yangtze River Delta region and other urban agglomerations, and they are also of great significance for optimizing the spatial allocation and overall operational efficiency of urban agglomerations.

Keywords: evolution, interaction, inter-city traffic system, spatial structure

Procedia PDF Downloads 286
147 Transboundary Pollution after Natural Disasters: Scenario Analyses for Uranium at Kyrgyzstan-Uzbekistan Border

Authors: Fengqing Li, Petra Schneider

Abstract:

Failure of tailings management facilities (TMF) containing radioactive residues is an enormous challenge worldwide and can result in major catastrophes. Particularly in transboundary regions, such failure is most likely to lead to international conflict. This risk occurs in Kyrgyzstan and Uzbekistan, where the current major challenge is the quantification of impacts due to pollution from uranium legacy sites and especially the impact on river basins after natural hazards (i.e., landslides). By means of GoldSim, a probabilistic simulation software, the amount of tailing material that flows into the river network of Mailuu Suu in Kyrgyzstan after a pond failure was simulated for three scenarios, namely 10%, 20%, and 30% of material inputs. Based on the Muskingum-Cunge flood routing procedure, the peak value of the uranium flood wave along the river network was simulated. Among the 23 TMF, 19 ponds are close to the river network. The spatiotemporal distributions of uranium along the river network were then simulated for all 19 ponds under the three scenarios. Taking TP7, which is 30 km from the Kyrgyzstan-Uzbekistan border, as an example, the uranium concentration decreased continuously along the longitudinal gradient of the river network; uranium was observed at the border 45 min after the pond failure, and the highest value was detected after 69 min. The highest concentrations of uranium at the border were 16.5, 33, and 47.5 mg/L under the scenarios of 10%, 20%, and 30% of material inputs, respectively. In comparison to the guideline value for uranium in drinking water (30 µg/L) provided by the World Health Organization, the observed concentrations of uranium at the border were 550-1583 times higher. In order to mitigate the transboundary impact of a radioactive pollutant release, an integrated framework consisting of three major strategies was proposed: the short-term strategy can be used in case of an emergency event, the medium-term strategy allows both countries to handle the TMF efficiently based on the benefit-sharing concept, and the long-term strategy intends to rehabilitate the site through the relocation of all TMF.
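As a sketch of the flood-routing step mentioned above, the following routes a synthetic inflow hydrograph through one reach with the classical Muskingum scheme; in the Muskingum-Cunge variant the parameters K and X would be estimated from channel hydraulics, and all values here are illustrative assumptions.

# Classical Muskingum routing of an inflow hydrograph through one reach.
def muskingum_route(inflow, K=0.5, X=0.2, dt=0.25):   # K, dt in hours
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom
    out = [inflow[0]]
    for j in range(1, len(inflow)):
        out.append(c0 * inflow[j] + c1 * inflow[j - 1] + c2 * out[-1])
    return out

inflow = [10, 40, 120, 90, 60, 35, 20, 12, 10, 10]    # m3/s, synthetic flood wave
outflow = muskingum_route(inflow)
print(max(inflow), round(max(outflow), 1))            # peak is attenuated and delayed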

Keywords: Central Asia, contaminant transport modelling, radioactive residue, transboundary conflict

Procedia PDF Downloads 87
146 Experimental Modeling of Spray and Water Sheet Formation Due to Wave Interactions with Vertical and Slant Bow-Shaped Model

Authors: Armin Bodaghkhani, Bruce Colbourne, Yuri S. Muzychka

Abstract:

The process of spray-cloud formation and the flow kinematics produced by breaking wave impact on vertical and slant lab-scale bow-shaped models were experimentally investigated. Bubble Image Velocimetry (BIV) and Image Processing (IP) techniques were applied to study the various types of wave-model impacts. Different waves were generated in a tow tank to investigate the effects of wave characteristics, such as wave phase velocity and wave steepness, on droplet velocities and on the behavior of the spray cloud formation process. The phase ensemble-averaged vertical velocity and turbulent intensity were computed. A high-speed camera and diffused LED backlights were utilized to capture images for further post-processing. Various pressure sensors and capacitive wave probes were used to measure the wave impact pressure and the free surface profile at different locations on the model and in the wave tank, respectively. Droplet sizes and velocities were measured using the BIV and IP techniques, which trace bubbles and droplets and measure their velocities and sizes by correlating the texture in the images. The impact pressure and droplet size distributions were compared to several previously published experimental models, and satisfactory agreement was achieved. The distributions of droplets in front of both models are presented. Due to the highly transient nature of spray formation, the drag coefficient for several stages of this transient displacement, for various droplet size ranges and different Reynolds numbers, was calculated based on the ensemble average method. From the experimental results, the slant model produces less spray in comparison with the vertical model, and the droplets generated from the wave impact with the slant model have a lower velocity compared with the vertical model.

Keywords: spray characteristics, droplet size and velocity, wave-body interactions, bubble image velocimetry, image processing

Procedia PDF Downloads 277
145 Quality Assurance Comparison of Map Check 2, Epid, and Gafchromic® EBT3 Film for IMRT Treatment Planning

Authors: Khalid Iqbal, Saima Altaf, M. Akram, Muhammad Abdur Rafaye, Saeed Ahmad Buzdar

Abstract:

Objective: Verification of patient-specific intensity-modulated radiation therapy (IMRT) plans using different 2-D detectors has become increasingly popular due to their ease of use and immediate readout of results. The purpose of this study was to test and compare various 2-D detectors for dosimetric quality assurance (QA) of IMRT, with a view to finding alternative QA methods. Material and Methods: Twenty IMRT patients (12 brain and 8 prostate) were planned on the Eclipse treatment planning system for a Varian Clinac DHX at both energies, 6 MV and 15 MV. Verification plans for all patients were also created and delivered to Map Check 2, an EPID (electronic portal imaging device), and Gafchromic EBT3 film. Gamma index analyses were performed using different criteria to evaluate and compare the dosimetric results. Results: For EBT3 film, statistical analysis shows passing rates of 99.55%, 97.23% and 92.9% for 6 MV and 99.53%, 98.3% and 94.85% for 15 MV using criteria of ±5%/3 mm, ±3%/3 mm and ±3%/2 mm, respectively, for the brain cases, whereas using the ±5%/3 mm and ±3%/3 mm gamma evaluation criteria, the passing rates are 94.55% and 90.45% for 6 MV and 95.25% and 95% for 15 MV for the prostate cases. Map Check 2 results show passing rates of 98.17%, 97.68% and 86.78% for 6 MV and 94.87%, 97.46% and 88.31% for 15 MV, respectively, for the brain cases using criteria of ±5%/3 mm, ±3%/3 mm and ±3%/2 mm, whereas the ±5%/3 mm and ±3%/3 mm gamma evaluation criteria give passing rates of 97.7% and 96.4% for 6 MV and 98.75% and 98.05% for 15 MV for the prostate cases. EPID gamma analysis at 6 MV shows passing rates of 99.56%, 98.63% and 98.4% for the brain cases and 100% and 99.9% for the prostate cases using the same criteria as for Map Check 2 and EBT3 film. Conclusion: The results demonstrate that excellent passing rates were obtained for all dosimeters when compared with the planar dose distributions for 6 MV as well as 15 MV IMRT fields. EPID results are better than those of the EBT3 film and Map Check 2; it is likely that part of this difference is real, and part is due to manual handling and differences in treatment set-up verification, which contribute to dose distribution differences. Overall, all three dosimeters exhibit results within limits according to AAPM Report 120.
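A minimal sketch of the gamma index evaluation underlying the passing rates quoted above, computed in 1-D for a synthetic profile with a global 3%/3 mm criterion; the profiles and tolerances are illustrative assumptions, and clinical systems evaluate 2-D or 3-D dose grids.

import numpy as np

# For each measured point, search the reference profile for the smallest combined
# dose-difference / distance-to-agreement metric; the point passes if gamma <= 1.
def gamma_pass_rate(ref_dose, eval_dose, x, dose_tol=0.03, dta_mm=3.0):
    d_ref_max = ref_dose.max()
    passed = []
    for xi, de in zip(x, eval_dose):
        g = np.sqrt(((x - xi) / dta_mm) ** 2 +
                    ((ref_dose - de) / (dose_tol * d_ref_max)) ** 2)
        passed.append(g.min() <= 1.0)
    return 100.0 * np.mean(passed)

x = np.arange(0, 100, 1.0)                     # mm
ref = np.exp(-((x - 50) / 20) ** 2)            # reference (planned) profile
meas = np.exp(-((x - 51) / 20) ** 2) * 1.01    # slightly shifted/scaled measurement
print(gamma_pass_rate(ref, meas, x), "% of points pass 3%/3 mm")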

Keywords: gafchromic EBT3, radiochromic film dosimetry, IMRT verification, EPID

Procedia PDF Downloads 400
144 Diversity and Distribution of Cytochrome P450 2C9 Genes Related with Medical Cannabis in Thai Patients

Authors: Tanakrit Doltanakarn

Abstract:

Introduction: These days, cannabis is being accepted in many countries because it can be used medically. Medical cannabis is used to treat and reduce pain in many conditions, for example, neuropathic pain, Parkinson's disease, autism spectrum disorders, cancer pain, the adverse effects of chemotherapy, diabetes, and migraine. Active ingredients in cannabis that modulate patients' perceptions of their conditions include Δ9-tetrahydrocannabinol (THC), cannabidiol (CBD), flavonoids, and terpenes. However, cannabis also has adverse effects, including cardiovascular effects, psychosis, schizophrenia, mood disorders, and cognitive alteration. These effects arise from the THC and CBD ingredients in cannabis, and the metabolism of delta-9-THC to 11-OH-delta-9-THC (inactive form) is implicated in these adverse effects. Interestingly, the distribution of CYP2C9 gene variants (CYP2C9*2 and CYP2C9*3, poor metabolizers) might affect the incidence of adverse effects in patients treated with medical cannabis. Objective: The aim of this study was to investigate the association between the frequency of CYP2C9 genetic polymorphisms and adverse effects in Thai patients treated with medical cannabis. Materials and Methods: We recruited sixty-five unrelated Thai patients from the College of Pharmacy, Rangsit University. DNA was extracted using a Genomic DNA Mini Kit. CYP2C9*2 (430C>T, rs1799853) and CYP2C9*3 (1075A>C, rs1057910) were genotyped by TaqMan real-time PCR assay. Results: Among the 31 patients with medical cannabis-induced ADRs, 22 (33.85%) were diagnosed with tachycardia and 3 (4.62%) with arrhythmia. There were 34 (52.31%) medical cannabis-tolerant controls included in this study. Forty (61.53%) patients were female and 25 (38.46%) were male, with a median age of 57 (range 27-87) years. In this study, we found that none of the patients with medical cannabis-induced ADRs carried the CYP2C9*2 variant, nor did any of the medical cannabis-tolerant controls. The CYP2C9*3 variant (intermediate metabolizer, IM) was found in only one of thirty-one (3.23%) patients with medical cannabis-induced ADRs and in two of thirty-four (5.88%) tolerant controls. Conclusions: The distribution of CYP2C9 alleles offers a view of this pharmacogenomic marker in the Thai population that could be used as a reference worldwide for investigating pharmacogenomic applications.

Keywords: medical cannabis, adverse effect, CYP2C9, Thai patients

Procedia PDF Downloads 75
143 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction

Authors: Talal Alsulaiman, Khaldoun Khashanah

Abstract:

In this paper, we provide a literature survey on the artificial stock market (ASM) problem. The paper begins by exploring the complexity of the stock market and the need for ASMs. ASMs aim to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM's complexity. The financial market system is a complex system in which the relationship between the micro and macro levels cannot be captured analytically. Computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is the simulation technique commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-aversion assumption in the construction of agents' attributes. The influence of social networks on the development of agents' interactions is also addressed; network topologies such as small-world, distance-based, and scale-free networks may be utilized to outline economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized; these incorporate approaches such as genetic algorithms, genetic programming, artificial neural networks, and reinforcement learning. The most common statistical properties (the stylized facts) of stock returns that are used for calibration and validation of ASMs are also discussed. Furthermore, we review the major related previous studies and categorize the approaches they utilize. Finally, research directions and potential research questions are discussed. The research directions of ASMs may focus on the macro level, by analyzing the market dynamics, or on the micro level, by investigating the wealth distributions of the agents.

Keywords: artificial stock markets, market dynamics, bounded rationality, agent based simulation, learning, interaction, social networks

Procedia PDF Downloads 326
142 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis

Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab

Abstract:

Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressure can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on the peak plantar pressure distribution and the center of pressure during walking. Methodology: 2323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and the participants were then asked to walk on a force plate scanner. After data preprocessing, and because of differences in walking time and foot size, we normalized the samples based on time and foot size. Some of the force plate variables were selected as inputs to a deep neural network (DNN), and the probability of each foot disorder was estimated. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (yes/no classification). We compared the DNN and the SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach has higher accuracy for prediction, enabling applications in foot disorder diagnosis. The detection accuracy was 71% with the deep learning algorithm and 78% with the SVM algorithm. Moreover, working with the peak plantar pressure distribution was more accurate than working with the center of pressure dataset. Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error once some restrictions are properly removed.
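A minimal sketch of the per-disorder yes/no SVM classification step described here, using scikit-learn on synthetic features standing in for the normalized plantar-pressure variables; the data, feature count, and hyperparameters are illustrative assumptions, not the clinic dataset.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for normalized plantar-pressure features and a yes/no label.
rng = np.random.default_rng(0)
X = rng.normal(size=(2323, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1.0, 2323) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", C=1.0).fit(scaler.transform(X_tr), y_tr)
print(accuracy_score(y_te, clf.predict(scaler.transform(X_te))))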

Keywords: deep neural network, foot disorder, plantar pressure, support vector machine

Procedia PDF Downloads 316
141 Off-Body Sub-GHz Wireless Channel Characterization for Dairy Cows in Barns

Authors: Said Benaissa, David Plets, Emmeric Tanghe, Jens Trogh, Luc Martens, Leen Vandaele, Annelies Van Nuffel, Frank A. M. Tuyttens, Bart Sonck, Wout Joseph

Abstract:

Herd monitoring and management, in particular the detection of 'attention animals' that require care, treatment, or assistance, is crucial for the effective reproduction status, health, and overall well-being of dairy cows. On large farms, traditional methods based on direct observation or analysis of video recordings become labour-intensive and time-consuming. Thus, automatic monitoring systems using sensors have become increasingly important to continuously and accurately track the health status of dairy cows. Wireless sensor networks (WSNs) and the internet of things (IoT) can be effectively used for health tracking of dairy cows to facilitate herd management and enhance cow welfare. Since on-cow measuring devices are energy-constrained, a proper characterization of the off-body wireless channel between the on-cow sensor nodes and the back-end base station is required for a power-optimized deployment of these networks in barns. The aim of this study was to characterize the off-body wireless channel in an indoor (barn) environment at 868 MHz using LoRa nodes. LoRa is an emerging wireless technology mainly targeted at WSNs and IoT networks. Both large-scale fading (i.e., path loss) and temporal fading were investigated. The obtained path loss values as a function of transmitter-receiver separation were well fitted by a lognormal path loss model. The path loss showed an additional increase of 4 dB when the wireless node was actually worn by the cow. The temporal fading due to the movement of other cows was well described by Rician distributions with a K-factor of 8.5 dB. Based on this characterization, network planning and energy consumption optimization of the on-body wireless nodes can be performed, which enables the deployment of reliable dairy cow monitoring systems.
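A minimal sketch of the log-distance (lognormal shadowing) path loss model used for this kind of off-body channel characterization; the exponent, reference loss, and shadowing standard deviation below are illustrative assumptions, not the fitted barn values.

import numpy as np

# Log-distance path loss with lognormal shadowing: PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma.
def path_loss_db(d_m, pl0_db=40.0, n=2.2, d0_m=1.0, sigma_db=3.0, rng=None):
    shadowing = 0.0 if rng is None else rng.normal(0.0, sigma_db)
    return pl0_db + 10.0 * n * np.log10(d_m / d0_m) + shadowing

rng = np.random.default_rng(1)
for d in (5, 10, 20, 40):
    print(d, "m:", round(path_loss_db(d, rng=rng), 1), "dB")
# An extra ~4 dB could be added when the node is actually worn by the cow,
# as reported in the abstract.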

Keywords: channel, channel modelling, cow monitoring, dairy cows, health monitoring, IoT, LoRa, off-body propagation, PLF, propagation

Procedia PDF Downloads 288
140 Contrasted Mean and Median Models in Egyptian Stock Markets

Authors: Mai A. Ibrahim, Mohammed El-Beltagy, Motaz Khorshid

Abstract:

Emerging market return distributions have shown significant departures from normality: they are characterized by fatter tails relative to the normal distribution and exhibit levels of skewness and kurtosis inconsistent with normality. Therefore, the classical Markowitz mean-variance framework is not applicable to emerging markets, since it assumes normally distributed returns (with zero skewness and excess kurtosis) and a quadratic utility function. The Markowitz mean-variance analysis can be used in cases of moderate non-normality, where it still provides a good approximation of the expected utility, but it may be ineffective under large departures from normality. Higher-moment models and median models have been suggested in the literature for asset allocation in this case. Higher-moment models were introduced to account for the insufficiency of describing a portfolio by only its first two moments, while the median model was introduced as a robust statistic that is less affected by outliers than the mean. Tail risk measures such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) have been introduced instead of variance to capture the effect of risk. In this research, higher-moment models, including Mean-Variance-Skewness (MVS) and Mean-Variance-Skewness-Kurtosis (MVSK), are formulated as single-objective non-linear programming (NLP) problems, and median models, including Median-Value-at-Risk (MedVaR) and Median-Mean Absolute Deviation (MedMAD), are formulated as single-objective mixed-integer linear programming (MILP) problems. The higher-moment models and median models are compared to benchmark portfolios and tested on real financial data from the main Egyptian index, EGX30. The results show that all the median models outperform the higher-moment models in that they provide higher final wealth for the investor over the entire period of study. In addition, the results confirm the inapplicability of the classical Markowitz mean-variance framework to the Egyptian stock market, as it resulted in very low realized profits.
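A minimal sketch of evaluating a Mean-Variance-Skewness (MVS) style objective for one candidate portfolio on fat-tailed synthetic returns; the data, preference weights, and exact objective form are illustrative assumptions rather than the paper's formulation.

import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_t(df=4, size=(1000, 5)) * 0.01      # fat-tailed asset returns
w = np.full(5, 0.2)                                  # equally weighted portfolio

rp = R @ w
mean, var = rp.mean(), rp.var()
skew = ((rp - mean) ** 3).mean() / rp.std() ** 3

lam1, lam2, lam3 = 1.0, 1.0, 1.0                     # investor preference weights
objective = lam1 * mean - lam2 * var + lam3 * skew   # maximized subject to sum(w)=1, w>=0
print(round(mean, 5), round(var, 6), round(skew, 3), round(objective, 5))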

Keywords: Egyptian stock exchange, emerging markets, higher moment models, median models, mixed-integer linear programming, non-linear programming

Procedia PDF Downloads 284
139 In vitro Characterization of Mice Bone Microstructural Changes by Low-Field and High-Field Nuclear Magnetic Resonance

Authors: Q. Ni, J. A. Serna, D. Holland, X. Wang

Abstract:

The objective of this study is to develop Nuclear Magnetic Resonance (NMR) techniques to enhance bone-related research, applied in vitro to normal and disuse (biglycan knockout) mouse bone by using low-field and high-field NMR together. It is known that the total amplitude of the T₂ relaxation envelope, measured by the Carr-Purcell-Meiboom-Gill (CPMG) NMR spin echo train, is a representation of the liquid phase inside the pores. Therefore, the NMR CPMG magnetization amplitude can be converted to a volume of water after calibration with the NMR signal amplitude of a known volume of water. In this study, the distribution of mobile water and the porosity can be determined by using the low-field (20 MHz) CPMG relaxation technique, and the pore size distributions can be determined by a computational inversion of the relaxation data. It is also known that the total proton intensity of magnetization from the NMR free induction decay (FID) signal is due to the water present inside the pores (mobile water), the water that has undergone hydration with the bone (bound water), and the protons in the collagen and mineral matter (solid-like protons). Therefore, the components of total mobile and bound water within bone can be determined by the low-field NMR free induction decay technique. Furthermore, the bound water in the solid phase (mineral and organic constituents), especially the dominant component, calcium hydroxyapatite (Ca₁₀(OH)₂(PO₄)₆), can be determined by using high-field (400 MHz) magic angle spinning (MAS) NMR. With the MAS technique reducing the inhomogeneous and susceptibility broadening of the NMR spectral linewidth in the liquid-solid mix, we can conduct further research into the ¹H and ³¹P environments of bone materials to identify the locations of bound water, such as the OH⁻ group within the minerals and the bone architecture. We hypothesize that low-field NMR combined with high-field magic angle spinning NMR can provide a more complete interpretation of the water distribution, particularly of bound water, and these data are important to assess bone quality and predict the mechanical behavior of bone.
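A minimal sketch of the computational inversion step mentioned above: recovering a T₂ amplitude distribution from a synthetic CPMG echo train by non-negative least squares; the echo spacing, T₂ grid, and noise level are illustrative assumptions, and real processing usually adds regularization.

import numpy as np
from scipy.optimize import nnls

t = np.arange(1, 501) * 0.2e-3                      # echo times (s)
signal = 0.7 * np.exp(-t / 1e-3) + 0.3 * np.exp(-t / 20e-3)
signal += np.random.default_rng(0).normal(0, 0.002, t.size)

T2_grid = np.logspace(-4, -1, 60)                   # candidate T2 values (s)
kernel = np.exp(-t[:, None] / T2_grid[None, :])
amplitudes, _ = nnls(kernel, signal)                # non-negative T2 spectrum

for T2, a in zip(T2_grid, amplitudes):
    if a > 0.02:
        print(f"T2 = {T2*1e3:.2f} ms, amplitude = {a:.2f}")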

Keywords: bone, mice bone, NMR, water in bone

Procedia PDF Downloads 145
138 Joint Probability Distribution of Extreme Water Level with Rainfall and Temperature: Trend Analysis of Potential Impacts of Climate Change

Authors: Ali Razmi, Saeed Golian

Abstract:

Climate change is known to have the potential to adversely impact hydrologic patterns for variables such as rainfall, maximum and minimum temperature, and sea level. The long-term averages of these climate variables could change over time due to climate change impacts. In this study, trend analysis was performed on rainfall, maximum and minimum temperature, and water level data for a coastal area in Manhattan, New York City (Central Park and Battery Park stations), to investigate whether there is a significant change in the data mean. The partial Mann-Kendall test was used for the trend analysis. Frequency analysis was then performed on the data using common probability distribution functions such as the Generalized Extreme Value (GEV), normal, log-normal, and log-Pearson distributions. Goodness-of-fit tests such as the Kolmogorov-Smirnov test were used to determine the most appropriate distributions. In flood frequency analysis, rainfall and water level data are often investigated separately. However, in determining flood zones, the simultaneous consideration of rainfall and water level in the frequency analysis could have a considerable effect on floodplain delineation (flood extent and depth). The present study aims to perform flood frequency analysis considering the joint probability distribution of rainfall and storm surge. First, the correlation between the considered variables was investigated. The joint probability distribution of extreme water level and temperature was also investigated to examine how global warming could affect sea level flooding impacts. Copula functions were fitted to the data, and the joint probability of water level with rainfall and temperature for recurrence intervals of 2, 5, 25, 50, 100, 200, 500, 600, and 1000 years was determined and compared with the severity of individual events. The results of the trend analysis showed an increase in the long-term average of the data that could be attributed to climate change impacts. The GEV distribution was found to be the most appropriate function to fit to the extreme climate variables. The results of the joint probability distribution analysis confirmed the necessity of incorporating both rainfall and water level data in flood frequency analysis.
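A minimal sketch of the copula step described above, using a bivariate Gumbel copula (one common choice for joint rainfall/water-level frequency analysis) to turn two marginal 100-year events into a joint "either variable exceeded" return period; the copula family, theta, and marginal probabilities are illustrative assumptions.

import numpy as np

# Bivariate Gumbel copula: C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)).
def gumbel_copula(u, v, theta=2.0):
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

# Marginal non-exceedance probabilities for the 100-year rainfall and 100-year water level.
u = v = 1 - 1.0 / 100
joint_cdf = gumbel_copula(u, v)
joint_return_period = 1.0 / (1 - joint_cdf)          # "OR" exceedance return period
print(round(joint_cdf, 4), round(joint_return_period, 1), "years")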

Keywords: climate change, climate variables, copula, joint probability

Procedia PDF Downloads 326
137 Comparison of the Hospital Patient Safety Culture between Bulgarian, Croatian and American: Preliminary Results

Authors: R. Stoyanova, R. Dimova, M. Tarnovska, T. Boeva, R. Dimov, I. Doykov

Abstract:

Patient safety culture (PSC) is an essential component of quality of healthcare. Improving PSC is considered a priority in many developed countries. A specialized software platform for registration and evaluation of hospital patient safety culture has been developed with the support of Medical University Plovdiv Project №11/2017. The aim of the study is to assess the status of PSC in Bulgarian hospitals and to compare it to that in US and Croatian hospitals. Methods: The study was conducted from June 01 to July 31, 2018 using the web-based Bulgarian version of the Hospital Survey on Patient Safety Culture questionnaire (B-HSOPSC). Two hundred and forty-eight medical professionals from different hospitals in Bulgaria participated in the study. To quantify the differences in the distributions of positive scores for each of the 42 HSOPSC items between the Bulgarian, Croatian and US samples, the χ²-test was applied. The research hypothesis assumed that there are no significant differences between the Bulgarian, Croatian and US PSCs. Results: The results revealed 14 significant differences in positive scores between the Bulgarian and Croatian PSCs and 15 between the Bulgarian and US PSCs, respectively. Bulgarian medical professionals provided fewer positive responses to 12 items compared with Croatian and US respondents. The Bulgarian respondents were more positive than the Croatians regarding feedback and communication of medical errors (items C1, C4, C5) as well as the employment of locum staff (A7) and the frequency of reported mistakes (D1). Bulgarian medical professionals were more positive than their US colleagues regarding the communication of information at shift handover and across hospital units (F5, F7). The distributions of positive scores on the items ‘Staff worries that their mistakes are kept in their personnel file’ (RA16), ‘Things "fall between the cracks" when transferring patients from one unit to another’ (RF3) and ‘Shift handovers are problematic for patients in this hospital’ (RF11) were significantly higher among Bulgarian respondents compared with Croatian and US respondents. Conclusions: Significant differences in the distribution of positive scores were found between the Bulgarian and US PSC on the one hand and between the Bulgarian and Croatian PSC on the other. The study reveals that the distribution of positive responses could be explained by cultural, organizational and healthcare system differences.
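
For context, the item-by-item comparison described above boils down to a χ² test on the counts of positive versus non-positive responses in the two samples being compared. The short Python sketch below shows this for a single hypothetical item; the counts are invented for illustration and are not the study's data.

    from scipy.stats import chi2_contingency

    # Invented counts for one HSOPSC item: rows = sample, columns = [positive, non-positive]
    table = [[160, 88],     # Bulgarian respondents (n = 248)
             [420, 230]]    # comparison sample (e.g., Croatian or US)

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
    # Repeating this for each of the 42 items gives the kind of item-by-item
    # comparison reported above (multiple-comparison caveats apply).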

Keywords: patient safety culture, healthcare, HSOPSC, medical error

Procedia PDF Downloads 112
136 Constraints on Source Rock Organic Matter Biodegradation in the Biogenic Gas Fields in the Sanhu Depression, Qaidam Basin, Northwestern China: A Study of Compound Concentration and Concentration Ratio Changes Using GC-MS Data

Authors: Mengsha Yin

Abstract:

Extractable organic matter (EOM) from thirty-six biogenic gas source rocks from the Sanhu Depression in the Qaidam Basin, northwestern China, was obtained via Soxhlet extraction. Twenty-nine of the extracts were subjected to SARA (saturates, aromatics, resins and asphaltenes) separation for bulk composition analysis. The saturated and aromatic fractions of all the extracts were analyzed by Gas Chromatography-Mass Spectrometry (GC-MS) to investigate the compound compositions. More abundant n-alkanes, naphthalene, phenanthrene, dibenzothiophene and their alkylated products occur in samples at shallower depths. From 2000 m downward, the concentrations of these compounds increase sharply, and the concentration ratios of more- to less-biodegradation-susceptible compounds coincidentally decrease dramatically. The ∑iC15-16,18-20/∑nC15-16,18-20 and hopanoids/∑n-alkanes concentration ratios, as well as the mono- and tri-aromatic sterane concentrations and concentration ratios, frequently fluctuate with depth rather than trend with it, reflecting effects of organic input and paleoenvironment rather than biodegradation. The saturated and aromatic compound distributions on the total ion chromatogram (TIC) traces of the samples display different degrees of biodegradation. The dramatic and simultaneous variations in compound concentrations and their ratios at 2000 m, together with their changes at greater depths, jointly demonstrate the crucial control of burial depth on the extent of organic matter biodegradation in source rocks and support the proposition that 2000 m is the bottom depth boundary for active microbial activity in this study. The study helps to better constrain the depth range over which effective source rocks occur in the Sanhu biogenic gas fields and calls for additional attention to source rock pore size estimation during biogenic gas source rock appraisals.
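
To make the ratio-based reasoning above concrete, the following Python sketch computes isoprenoid/n-alkane and hopanoid/n-alkane concentration ratios from a small table of compound concentrations and lists them against depth; the numbers and column names are hypothetical, not the study's GC-MS results.

    import pandas as pd

    # Hypothetical compound concentrations per sample (arbitrary units);
    # column names and values are assumptions for illustration only.
    df = pd.DataFrame({
        "depth_m":     [1500, 1800, 2100, 2400],
        "n_alkanes":   [40.0, 55.0, 120.0, 150.0],   # sum of nC15-16, nC18-20
        "isoprenoids": [18.0, 17.5, 16.0, 15.5],     # sum of iC15-16, iC18-20
        "hopanoids":   [6.0, 6.2, 5.8, 5.5],
    })

    # Ratios of more- to less-biodegradation-susceptible compounds
    df["iso_over_n"] = df["isoprenoids"] / df["n_alkanes"]
    df["hop_over_n"] = df["hopanoids"] / df["n_alkanes"]

    # Elevated ratios at shallow depth and a sharp drop from ~2000 m downward would
    # mirror the pattern described above: preferential microbial removal of n-alkanes
    # where biodegradation is active.
    print(df[["depth_m", "iso_over_n", "hop_over_n"]])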

Keywords: pore space, Sanhu depression, saturated and aromatic hydrocarbon compound concentration, source rock organic matter biodegradation, total ion chromatogram

Procedia PDF Downloads 126
135 Effect of Punch Diameter on Optimal Loading Profiles in Hydromechanical Deep Drawing Process

Authors: Mehmet Halkaci, Ekrem Öztürk, Mevlüt Türköz, H. Selçuk Halkacı

Abstract:

The hydromechanical deep drawing (HMD) process is an advanced manufacturing process used to form deep parts in a single forming step. In this process, the sheet metal blank can be drawn deeper by means of fluid pressure acting on the sheet surface in the direction opposite to the punch movement. A high limiting drawing ratio, good surface quality, reduced springback and high dimensional accuracy are some of the advantages of this process. The performance of the HMD process is affected by various process parameters such as the fluid pressure, blank holder force, punch and die radii, pre-bulging pressure and height, punch diameter, and friction between the sheet and die and between the sheet and punch. The fluid pressure and blank holder force are the main loading parameters and significantly affect the formability of the HMD process. The punch diameter also influences the limiting drawing ratio (the ratio of initial sheet diameter to punch diameter) of the sheet metal blank. In this research, optimal loading (fluid pressure and blank holder force) profiles were determined for AA 5754-O sheet material through a fuzzy control algorithm developed in a previous study using the LS-DYNA finite element analysis (FEA) software. In the preceding study, the fuzzy control algorithm was developed using geometrical criteria such as thinning and wrinkling. In order to obtain the desired final part with the developed algorithm for the requested punch diameter, the effect of the punch diameter, which is one of the process parameters, on the loading profiles was investigated separately using a blank thickness of 1 mm. Thus, the applicability of the previously developed fuzzy control algorithm to different punch diameters was clarified. In addition, the thickness distributions of the sheet metal blank along a curvilinear distance were compared for the FE analyses in which different punch diameters were used. Consequently, it was found that the use of different punch diameters did not significantly affect the optimal loading profiles.
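
As a toy illustration of the quantities discussed above, the Python sketch below computes the drawing ratio from the blank and punch diameters and applies a crude rule-based stand-in for one slice of the fuzzy controller, adjusting the blank holder force from thinning and wrinkling measures (the fluid pressure would be scheduled by analogous rules). The thresholds, step factors and dimensions are assumptions, not the authors' algorithm.

    def drawing_ratio(blank_diameter_mm, punch_diameter_mm):
        """Drawing ratio as defined above: initial sheet (blank) diameter / punch diameter."""
        return blank_diameter_mm / punch_diameter_mm

    def adjust_blank_holder_force(thinning_pct, wrinkle_height_mm, bhf_kn):
        """Crude rule-based stand-in for the fuzzy controller: relax the blank holder
        force when thinning (tearing risk) grows, increase it when wrinkling grows.
        Thresholds and step factors are illustrative assumptions."""
        if thinning_pct > 20.0:
            bhf_kn *= 0.9
        if wrinkle_height_mm > 0.2:
            bhf_kn *= 1.1
        return bhf_kn

    # Hypothetical case: 100 mm blank of 1 mm AA 5754-O drawn with a 40 mm punch
    print(drawing_ratio(100.0, 40.0))                     # -> 2.5
    print(adjust_blank_holder_force(22.0, 0.1, 50.0))     # thinning too high -> lower BHF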

Keywords: Finite Element Analysis (FEA), fuzzy control, hydromechanical deep drawing, optimal loading profiles, punch diameter

Procedia PDF Downloads 399
134 Broadband Optical Plasmonic Antennas Using Fano Resonance Effects

Authors: Siamak Dawazdah Emami, Amin Khodaei, Harith Bin Ahmad, Hairul A. Adbul-Rashid

Abstract:

The Fano resonance effect in plasmonic nanoparticle materials gives such materials a number of unique optical properties and potential applicability to sensing, nonlinear devices and slow-light devices. A Fano resonance is a consequence of coherent interference between superradiant and subradiant hybridized plasmon modes. Light incident on the subradiant modes initiates excitation that results in superradiant modes, and these superradiant modes possess zero or finite dipole moments alongside a comparably negligible coupling with light. This research work details the derivation of an electrodynamic coupling model for the interaction of dipolar transitions and radiation in plasmonic nanoclusters such as quadrimers, pentamers and heptamers. The directivity calculation is analyzed in order to quantify the redirection of emission. The geometry of the configured array of nanostructures strongly influences the transmission and reflection properties, so that the directivity of each antenna is related to the nanosphere size and the gap distances between the nanospheres in each model’s structure. A well-separated configuration of nanospheres results in the structure behaving similarly to monomers, with spectral peaks of a broad superradiant mode centered near a wavelength of 560 nm. Reducing the distance between the ring nanospheres in pentamers and heptamers to 20-60 nm causes the coupling factor and charge distributions to increase and invokes a subradiant mode centered near 690 nm. Increasing the distance of the outer-ring nanospheres from the central nanospheres causes the coupling factor to decrease, the coupling factor being inversely proportional to the cube of the distance between nanospheres. This phenomenon leads to a dramatic decrease of the superradiant mode at a 200 nm distance between the central nanosphere and the outer ring. Effects from the superradiant mode vanish beyond a 240 nm distance between the central and outer-ring nanospheres.
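
The inverse-cube statement above is the usual quasi-static dipole-dipole scaling, and its consequence for the hybridized modes can be illustrated with a two-coupled-oscillator toy model. The Python sketch below does this with invented resonance and coupling values; it is not the electrodynamic coupling model derived in the work.

    import numpy as np

    c0 = 3e8                                      # speed of light (m/s)
    omega0 = 2 * np.pi * c0 / 560e-9              # nominal plasmon resonance near 560 nm (rad/s)
    g0 = 3e14                                     # coupling strength at d0 = 60 nm (assumed, rad/s)
    d0 = 60e-9

    for d in (60e-9, 120e-9, 200e-9, 240e-9):
        g = g0 * (d0 / d) ** 3                    # coupling ~ 1/d^3, as stated above
        # Hybridized (super-/subradiant-like) frequencies of two degenerate coupled oscillators
        lam_plus = 2 * np.pi * c0 / (omega0 + g) * 1e9
        lam_minus = 2 * np.pi * c0 / (omega0 - g) * 1e9
        print(f"d = {d*1e9:5.0f} nm -> hybridized modes near {lam_plus:6.1f} nm and {lam_minus:6.1f} nm")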

Keywords: Fano resonance, optical antenna, plasmonic, nano-clusters

Procedia PDF Downloads 405
133 A Decadal Flood Assessment Using Time-Series Satellite Data in Cambodia

Authors: Nguyen-Thanh Son

Abstract:

Floods are among the most frequent and costliest natural hazards. Flood disasters especially affect poor people in rural areas, who are heavily dependent on agriculture and have lower incomes. Cambodia is identified as one of the most climate-vulnerable countries in the world, ranked 13th out of the 181 countries most affected by the impacts of climate change. Flood monitoring is thus a strategic priority at the national and regional levels, because policymakers need reliable spatial and temporal information on flood-prone areas to design successful monitoring programs that reduce possible impacts on the country’s economy and people’s livelihoods. This study aims to develop methods for flood mapping and assessment from MODIS data in Cambodia. We processed the data for the period from 2000 to 2017, following three main steps: (1) data pre-processing to construct smooth time series of vegetation and water surface indices, (2) delineation of flood-prone areas, and (3) accuracy assessment. The flood mapping results were verified against ground reference data, indicating an overall accuracy of 88.7% and a Kappa coefficient of 0.77. These results were reaffirmed by the close agreement between the mapped flood area and the ground reference data, with a coefficient of determination (R²) of 0.94. The seasonally flooded areas observed for 2010, 2015, and 2016 were remarkably smaller than in other years, mainly attributed to the El Niño weather phenomenon exacerbated by the impacts of climate change. Although several sources potentially lowered the mapping accuracy of flood-prone areas, including image cloud contamination, mixed-pixel issues, and the resolution difference between the mapping results and the ground reference data, our methods produced satisfactory results for delineating the spatiotemporal evolution of floods. The results, in the form of quantitative information on spatiotemporal flood distributions, could be beneficial to policymakers in evaluating their management strategies for mitigating the negative effects of floods on agriculture and people’s livelihoods in the country.
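
The accuracy-assessment step above reduces to an error (confusion) matrix between the mapped flood classes and the ground reference data. The short Python sketch below computes the overall accuracy and Cohen's Kappa coefficient from such a matrix, using invented counts rather than the study's validation data.

    import numpy as np

    # Invented confusion matrix: rows = ground reference, columns = mapped class,
    # with classes ordered as [flooded, non-flooded]
    cm = np.array([[420,  55],
                   [ 48, 390]])

    n = cm.sum()
    overall_accuracy = np.trace(cm) / n                      # proportion of agreeing pixels

    # Cohen's Kappa: observed agreement corrected for chance agreement
    p_o = overall_accuracy
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (p_o - p_e) / (1 - p_e)

    print(f"overall accuracy = {overall_accuracy:.3f}, kappa = {kappa:.3f}")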

Keywords: MODIS, flood, mapping, Cambodia

Procedia PDF Downloads 102