Search results for: probability distributions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1768


268 Geospatial Multi-Criteria Evaluation to Predict Landslide Hazard Potential in the Catchment of Lake Naivasha, Kenya

Authors: Abdel Rahman Khider Hassan

Abstract:

This paper describes a multi-criteria geospatial model for prediction of landslide hazard zonation (LHZ) for the Lake Naivasha catchment (Kenya), based on spatial analysis of integrated datasets of location-intrinsic parameters (slope stability factors) and external landslide-triggering factors (natural and man-made factors). The intrinsic dataset included: lithology, geometry of slope (slope inclination, aspect, elevation, and curvature) and land use/land cover. The triggering factors included: rainfall as the climatic factor, in addition to the destructive effects reflected by proximity of roads and the drainage network to areas that are susceptible to landslides. No published landslide study is available for this area. Thus, digital datasets of the above spatial parameters were acquired, stored, manipulated and analyzed in a Geographical Information System (GIS) using a multi-criteria grid overlay technique (in an ArcGIS 10.2.2 environment). Landslide hazard zonation was derived by applying weights based on the relative contribution of each parameter to slope instability; finally, the weighted parameter grids were overlaid together to generate a map of the potential landslide hazard zonation (LHZ) for the lake catchment. Of the total surface of 3200 km² of the lake catchment, most of the region (78.7%; 2518.4 km²) is susceptible to moderate landslide hazards, whilst about 13% (416 km²) falls under high hazard. Only 1.0% (32 km²) of the catchment displays very high landslide hazard, and the remaining area (7.3%; 233.6 km²) displays low probability of landslide hazards. This result confirms the importance of steep slope angles, lithology, vegetation land cover and slope orientation (aspect) as the major determining factors of slope failures. The information provided by the produced map of landslide hazard zonation (LHZ) could lay the basis for decision making as well as for mitigation and applications in avoiding potential losses caused by landslides in the Lake Naivasha catchment in the Kenya Highlands.
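The weighted grid overlay described above can be illustrated with a short sketch. This is not the authors' code; the factor ratings, weights and class breaks below are hypothetical placeholders for the ArcGIS workflow.

```python
# Toy multi-criteria weighted overlay for landslide hazard zonation (LHZ).
# Factor ratings (0-9 susceptibility scale), weights and class breaks are hypothetical.
import numpy as np

slope     = np.array([[2, 5, 7, 9], [1, 4, 6, 8], [0, 3, 5, 7], [0, 2, 4, 6]], float)
lithology = np.array([[3, 3, 6, 6], [3, 3, 6, 6], [2, 2, 5, 5], [2, 2, 5, 5]], float)
landcover = np.array([[1, 2, 4, 8], [1, 2, 4, 8], [1, 2, 3, 7], [1, 1, 3, 7]], float)
rainfall  = np.array([[4, 4, 5, 5], [4, 4, 5, 5], [3, 3, 4, 4], [3, 3, 4, 4]], float)

# Relative weights reflecting each factor's assumed contribution to slope instability.
weights = {"slope": 0.35, "lithology": 0.25, "landcover": 0.20, "rainfall": 0.20}

lhz_index = (weights["slope"] * slope + weights["lithology"] * lithology
             + weights["landcover"] * landcover + weights["rainfall"] * rainfall)

# Classify the continuous index into hazard zones and report area shares.
zones = np.digitize(lhz_index, bins=[3.0, 5.0, 7.0])   # 0=low ... 3=very high
for zone, name in enumerate(["low", "moderate", "high", "very high"]):
    print(f"{name:>9}: {100.0 * np.mean(zones == zone):5.1f} % of catchment cells")
```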

Keywords: decision making, geospatial, landslide, multi-criteria, Naivasha

Procedia PDF Downloads 177
267 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study

Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming

Abstract:

Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach to estimate an adjusted relative risk and risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing both conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SE); log-binomial GLM with convex optimisation and model-based SEs; log-binomial GLM with convex optimisation and permutation tests; modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta method SEs; and marginal standardisation with permutation test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcome event rates (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) will cover the null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) are the most efficient, preserve the type-I error rate, are robust to model misspecification, or are the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimation may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial approach.
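Two of the candidate estimators named above can be sketched in a few lines. This is an illustrative sketch only, not the study's simulation code: the trial size, effect sizes and covariate strength are invented, and the statsmodels calls are one common way to fit a modified-Poisson model and a marginally standardised logistic model.

```python
# Sketch: adjusted relative risk from (1) a modified-Poisson GLM with robust SEs
# and (2) marginal standardisation of a logistic model. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
treat = rng.integers(0, 2, n)                  # 1:1 randomised treatment indicator
x = rng.normal(size=n)                         # one prognostic covariate
logit_p = -1.0 + 0.5 * treat + 0.7 * x         # true model on the logit scale
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([treat, x]))

# Modified Poisson: log-link Poisson GLM on a binary outcome, sandwich (robust) SEs.
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC1")
rr_modified_poisson = np.exp(pois.params[1])

# Marginal standardisation: average logistic predictions with treatment set to 1
# and to 0 for everyone, then take the ratio of the two mean risks.
logit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
X1, X0 = X.copy(), X.copy()
X1[:, 1], X0[:, 1] = 1, 0
rr_marginal = logit.predict(X1).mean() / logit.predict(X0).mean()

print(f"RR (modified Poisson): {rr_modified_poisson:.3f}")
print(f"RR (marginal standardisation): {rr_marginal:.3f}")
```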

Keywords: binary outcomes, statistical methods, clinical trials, simulation study

Procedia PDF Downloads 84
266 Fast Detection of Local Fiber Shifts by X-Ray Scattering

Authors: Peter Modregger, Özgül Öztürk

Abstract:

Glass fabric reinforced thermoplastics (GFRT) are composite materials which combine low weight and resilient mechanical properties, rendering them especially suitable for automobile construction. However, defects in the glass fabric as well as in the polymer matrix can occur during manufacturing, which may compromise component lifetime or even safety. One type of these defects is local fiber shifts, which can be difficult to detect. Recently, we have experimentally demonstrated the reliable detection of local fiber shifts by X-ray scattering based on the edge-illumination (EI) principle. EI constitutes a novel X-ray imaging technique that utilizes two slit masks, one in front of the sample and one in front of the detector, in order to simultaneously provide absorption, phase, and scattering contrast. The principle of contrast formation is as follows. The incident X-ray beam is split into smaller beamlets by the sample mask. These beamlets are distorted by the interaction with the sample, and the distortions are scaled up by the detector mask, rendering them visible to a pixelated detector. In the experiment, the sample mask is laterally scanned, resulting in Gaussian-like intensity distributions in each pixel. The area under the curve represents absorption, the peak offset represents refraction, and the width of the curve represents the scattering occurring in the sample. Here, scattering is caused by the numerous glass fiber/polymer matrix interfaces. In our recent publication, we have shown that the standard deviation of the absorption and scattering values over a selected field of view can be used to distinguish between intact samples and samples with local fiber shift defects. The quantification of defect detection performance was done by using p-values (p=0.002 for absorption and p=0.009 for scattering) and contrast-to-noise ratios (CNR=3.0 for absorption and CNR=2.1 for scattering) between the two groups of samples. This was further improved for the scattering contrast to p=0.0004 and CNR=4.2 by utilizing a harmonic decomposition analysis of the images. Thus, we concluded that local fiber shifts can be reliably detected by the X-ray scattering contrasts provided by EI. However, a potential application in, for example, production monitoring requires fast data acquisition times. For the results above, the scanning of the sample mask was performed over 50 individual steps, which resulted in long total scan times. In this paper, we will demonstrate that reliable detection of local fiber shift defects is also possible by using single images, which implies a speed-up of total scan time by a factor of 50. Additional performance improvements will also be discussed, which opens the possibility for real-time acquisition. This constitutes a vital step for the translation of EI to industrial applications for a wide variety of materials consisting of numerous interfaces on the micrometer scale.
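The mapping from the per-pixel intensity scan to the three EI contrasts can be illustrated with a small fit. The scan values, mask positions and reference parameters below are synthetic, and this is only a sketch of the general idea, not the authors' processing pipeline.

```python
# Sketch: extract the edge-illumination contrasts from a Gaussian fit of one pixel's
# beamlet scan. Area ratio -> absorption (transmission), centre shift -> refraction,
# width broadening -> scattering. All numbers are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, area, centre, sigma, offset):
    return offset + area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(x - centre)**2 / (2 * sigma**2))

x = np.linspace(-10, 10, 50)                       # hypothetical sample-mask positions (um)
rng = np.random.default_rng(0)
scan_with_sample = gaussian(x, 80.0, 1.2, 2.5, 5.0) + rng.normal(0, 0.5, x.size)
scan_reference   = gaussian(x, 100.0, 0.0, 2.0, 5.0) + rng.normal(0, 0.5, x.size)

popt_s, _ = curve_fit(gaussian, x, scan_with_sample, p0=[90, 0, 2, 5])
popt_r, _ = curve_fit(gaussian, x, scan_reference,   p0=[90, 0, 2, 5])

transmission = popt_s[0] / popt_r[0]                       # absorption contrast
refraction   = popt_s[1] - popt_r[1]                       # beamlet displacement
scattering   = max(popt_s[2]**2 - popt_r[2]**2, 0.0)       # broadening (variance)

print(f"transmission {transmission:.2f}, refraction {refraction:.2f} um, "
      f"scattering {scattering:.2f} um^2")
```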

Keywords: defects in composites, X-ray scattering, local fiber shifts, X-ray edge illumination

Procedia PDF Downloads 36
265 Military Families’ Attachment to the Royal Guards Community of Dusit District, Bangkok Metropolitan

Authors: Kanikanun Photchong, Phusit Phukamchanoad

Abstract:

The objective of this research is to study the people's level of participation in activities of the community, their satisfaction towards the community, the attachment they have to the community, factors that influence the attachment, as well as the characteristics of the relationships of military families of the Royal Guards community of Dusit District. The method used was non-probability sampling by quota sampling according to people's age, with the determined age group being 18 years or older and one questionnaire administered per family. Questionnaires were completed by 287 people. Snowball sampling was also used by interviewing people of the community, starting with the Royal Guards Community's leader and continuing with 20 of the community's well-respected persons. The data were analyzed by using descriptive statistics, such as the arithmetic mean and standard deviation, as well as by inferential statistics, such as the independent-samples t-test, one-way ANOVA (F-test), and the chi-square test. Descriptive analysis according to the structure of the interview content was also used. The results show that the participation of the population in the Royal Guards Community in various activities is at a medium level, with the highest average participation during Mother's Day and Father's Day activities. The people's general level of satisfaction towards the premises of the Royal Guards Community is at the highest level. The people were most satisfied with transportation within the community and with contacting people from outside the premises; access to the community is convenient and there are various entrances. The attachment of the people to the Royal Guards Community, in general and by each category, is at a high level, and the feeling that the community is their home received the highest average rating. Factors that influence the attachment of the people of the Royal Guards Community are age, status, profession, income, length of stay in the community, membership of social groups, having neighbors they feel close and familiar with, as well as the benefits they receive from the community. In addition, it was found that participation in activities has a high positive relationship with the attachment of the people to the Royal Guards Community, and satisfaction with the community has a very high positive relationship with that attachment. The characteristic of the military families' attachment is that they live in big houses that everyone, from the head of the family to all members, has to protect and care for; therefore, they all love the community they live in. Participation in activities within the community and a high level of satisfaction towards the premises of the community enable the people to be more attached to the community. The people feel that everyone is a close neighbor within the community, as if they were one big family.
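The inferential tests named above (independent-samples t-test, one-way ANOVA, chi-square) can be run in a few lines; the sketch below uses synthetic attachment scores and hypothetical groupings rather than the survey data.

```python
# Sketch of the three inferential tests mentioned in the abstract, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
attach_members     = rng.normal(4.0, 0.5, 120)   # attachment score, social-group members
attach_non_members = rng.normal(3.7, 0.5, 167)   # attachment score, non-members
t_stat, t_p = stats.ttest_ind(attach_members, attach_non_members)

# One-way ANOVA across three illustrative age groups.
g1, g2, g3 = rng.normal(3.6, 0.5, 90), rng.normal(3.9, 0.5, 110), rng.normal(4.1, 0.5, 87)
f_stat, f_p = stats.f_oneway(g1, g2, g3)

# Chi-square test of independence between participation level and satisfaction level.
table = np.array([[30, 45, 20], [25, 80, 87]])   # hypothetical contingency counts
chi2, chi_p, dof, _ = stats.chi2_contingency(table)

print(f"t-test p={t_p:.3f}, ANOVA p={f_p:.3f}, chi-square p={chi_p:.3f}")
```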

Keywords: community attachment, community satisfaction, royal guards community, activities of the community

Procedia PDF Downloads 344
264 Gendered Mobility: Deep Distributions in Urban Transport Systems in Delhi

Authors: Nidhi Prabha

Abstract:

Transportation as a sector is one of the most significant infrastructural elements of the 'urban.' The distinctness of urban life in a city is marked by the dynamic movements that it enables within the city-space. Therefore, it is important to study the public-transport systems that enable and foster the mobility which characterizes the urban. It is also crucial to underscore the way one examines urban transport systems - either as an infrastructural unit in a strict physical-structural sense or as a structural unit which acts as a prism refracting multiple experiences depending on the location of the 'commuter.' In the proposed paper, the attempt is to uncover and investigate the assumption of the neuter-commuter by looking at urban transportation in the secondary sense, i.e., as a structural unit which is experienced differently by different kinds of commuters, thus making transportation deeply distributed with various social structures and locations like class or gender which map onto the transport systems. To this end, the public-transit systems operating in urban Delhi, i.e., the Delhi Metro and the public buses run by the Delhi Transport Corporation, are looked at as case studies. The study is premised on the knowledge and data gained from both primary and secondary sources. Primary sources include data and knowledge collected from fieldwork, the methodology for which has ranged from adopting 'mixed methods' ('qualitative-then-quantitative') to borrowing ethnographic techniques. Apart from fieldwork, other primary sources consulted include Annual Reports and policy documents of the Delhi Metro Rail Corporation (DMRC) and the Delhi Transport Corporation (DTC), Union and Delhi budgets, the Economic Survey of Delhi, press releases, etc. Secondary sources include the vast array of literature available on the critical nodes that inform the research, like gender, transport geographies, urban space, etc. The study indicates a deeply distributed urban transport system wherein various social-structural locations map onto the way different kinds of commuters experience mobility or movement within the city space. Mobility or movement, therefore, becomes gendered or has class-based ramifications. The neuter-commuter assumption is thus challenged. Such an understanding enables us to challenge the anonymity which the 'urban' otherwise claims to provide over the rural. The rural is opposed to the urban, wherein the urban ushers in a modern way of life, breaking ties of traditional social identities. A careful study of the transport systems through the traveling patterns and choices of commuters, however, indicates that this does not hold true, as even the same 'public space' of the transport systems allocates different places to different kinds of commuters. The central argument made through the research is therefore that infrastructure like urban transport systems has to be studied and examined beyond just its physical structure. The various experiences of daily mobility of different kinds of commuters have to be taken into account in order to design and plan more inclusive transport systems.

Keywords: gender, infrastructure, mobility, urban-transport-systems

Procedia PDF Downloads 191
263 Preparation and Characterization of Poly(L-Lactic Acid)/Oligo(D-Lactic Acid) Grafted Cellulose Composites

Authors: Md. Hafezur Rahaman, Mohd. Maniruzzaman, Md. Shadiqul Islam, Md. Masud Rana

Abstract:

With the growth of environmental awareness, extensive research is under way to develop the next generation of materials based on sustainability, eco-competence, and green chemistry to preserve and protect the environment. Due to its biodegradability and biocompatibility, poly(L-lactic acid) (PLLA) has attracted great interest for ecological and medical applications. Also, cellulose is one of the most abundant biodegradable, renewable polymers found in nature. It has several advantages such as low cost, high mechanical strength, biodegradability and so on. Recently, an immense deal of attention has been paid to the scientific and technological development of α-cellulose-based composite materials. PLLA could be used for grafting of cellulose to improve the compatibility prior to composite preparation. Here it is quite difficult to form a bond between less hydrophilic molecules like PLLA and α-cellulose. Dimers and oligomers can easily be grafted onto the surface of the cellulose by ring opening or polycondensation methods due to their low molecular weight. In this research, α-cellulose extracted from jute fiber is grafted with oligo(D-lactic acid) (ODLA) via a graft polycondensation reaction in the presence of para-toluene sulphonic acid and potassium persulphate in toluene at 130°C for 9 hours under 380 mmHg. Here ODLA is synthesized by ring opening polymerization of D-lactides in the presence of stannous octoate (0.03 wt% of lactide) and D-lactic acid at 140°C for 10 hours. Composites of PLLA with ODLA grafted α-cellulose are prepared by a solution mixing and film casting method. Grafting was confirmed through FTIR spectroscopy and SEM analysis. A strong carbonyl peak at 1728 cm⁻¹ in the FTIR spectrum of ODLA grafted α-cellulose, which is absent in α-cellulose, confirms the grafting of ODLA onto α-cellulose. It is also observed from SEM photographs that there are some white areas (spots) on ODLA grafted α-cellulose as compared to α-cellulose, which may indicate the grafting of ODLA and is consistent with the FTIR results. Analysis of the composites is carried out by FTIR, SEM, WAXD and thermogravimetric analysis. Most of the characteristic FTIR absorption peaks of the composites shifted to higher wavenumbers with increasing peak area, which may provide confirmation that PLLA and grafted cellulose have better compatibility in the composites via intermolecular hydrogen bonding; this supports previously published results. Grafted α-cellulose distributions in the composites are uniform, as observed by SEM analysis. WAXD studies show that only homo-crystalline structures of PLLA are present in the composites. The thermal stability of the composites is enhanced with increasing percentages of ODLA grafted α-cellulose in the composites. As a consequence, the resultant composites have a resistance toward thermal degradation. The effects of the length of the grafted chain and the biodegradability of the composites will be studied in further research.

Keywords: α-cellulose, composite, graft polycondensation, oligo(D-lactic acid), poly(L-lactic acid)

Procedia PDF Downloads 96
262 Developing a High Performance Cement Based Material: The Influence of Silica Fume and Organosilane

Authors: Andrea Cretu, Calin Cadar, Maria Miclaus, Lucian Barbu-Tudoran, Siegfried Stapf, Ioan Ardelean

Abstract:

Additives and mineral admixtures have become an integral part of cement-based materials. It is common practice to add silica fume to cement-based mixes in order to produce high-performance concrete. There is still a lack of scientific understanding regarding the effects that silica fume has on the microstructure of hydrated cement paste. The aim of the current study is to develop high-performance materials with low permeability and high resistance to flexural stress using silica fume and an organosilane. Organosilane bonds with cement grains and silica fume, influencing both the workability and the final properties of the mix, especially the pore size distributions and pore connectivity. Silica fume is a known pozzolanic agent which reacts with the calcium hydroxide in hydrated cement paste, producing more C-S-H and improving the mechanical properties of the mix. It is believed that particles of silica fume act as capillary pore fillers and nucleation centers for C-S-H and other hydration products. In order to be able to design cement-based materials with added silica fume and organosilane, it is necessary first to understand the formation of the porous network during hydration and to observe the distribution of pores and their connectivity. Nuclear magnetic resonance (NMR) methods in low fields are non-destructive and allow the study of cement-based materials from the standpoint of their porous structure. Other methods, such as XRD and SEM-EDS, help create a comprehensive picture of the samples, along with the classic mechanical tests (compressive and flexural strength measurements). The transverse relaxation time (T₂) was measured during the hydration of 16 samples prepared with two water/cement ratios (0.3 and 0.4) and different concentrations of organosilane (APTES, up to 2% by mass of cement) and silica fume (up to 6%). After their hydration, the pore size distribution was assessed using the same NMR approach on the samples filled with cyclohexane. The SEM-EDS and XRD measurements were applied on pieces and powders prepared from the samples that were used in mechanical testing, which were kept under water for 28 days. Adding silica fume does not influence the hydration dynamics of cement paste, while the addition of organosilane extends the dormancy stage up to 10 hours. The size distribution of the capillary pores is not influenced by the addition of silica fume or organosilane, while the connectivity of capillary pores is decreased only when there is organosilane in the mix. No filling effect is observed even at the highest concentration of silica fume. There is an apparent increase in the flexural strength of samples prepared only with silica fume and a decrease for those prepared with organosilane, with a few exceptions. XRD reveals that the pozzolanic reactivity of silica fume can only be observed when there is no organosilane present, and the SEM-EDS method reveals the pore distribution, as well as hydration products and the presence or absence of calcium hydroxide. The current work was funded by the Romanian National Authority for Scientific Research, CNCS – UEFISCDI, through project PN-III-P2-2.1-PED-2016-0719.

Keywords: cement hydration, concrete admixtures, NMR, organosilane, porosity, silica fume

Procedia PDF Downloads 141
261 Discontinuous Spacetime with Vacuum Holes as Explanation for Gravitation, Quantum Mechanics and Teleportation

Authors: Constantin Z. Leshan

Abstract:

Hole Vacuum theory is based on a discontinuous spacetime that contains vacuum holes. Vacuum holes can explain gravitation and some laws of quantum mechanics and allow teleportation of matter. All massive bodies emit a flux of holes which curve the spacetime; if we increase the concentration of holes, it leads to length contraction and time dilation because the holes do not have the properties of extension and duration. In the limiting case when space consists of holes only, the distance between every two points is equal to zero and time stops - outside of the Universe, the extension and duration properties do not exist. For this reason, the vacuum hole is the only particle in physics capable of describing gravitation using its own properties only. All microscopic particles must 'jump' continually and 'vibrate' due to the appearance of holes (impassable microscopic 'walls' in space), and this is the cause of their quantum behavior. Vacuum holes can explain entanglement, non-locality, the wave properties of matter, tunneling, the uncertainty principle and so on. Particles do not have trajectories because spacetime is discontinuous and has impassable microscopic 'walls', so that simple mechanical motion is impossible at small-scale distances; it is impossible to 'trace' a straight line in the discontinuous spacetime because it contains the impassable holes. Spacetime 'boils' continually due to the appearance of the vacuum holes. For teleportation to be possible, we must send a body outside of the Universe by enveloping it with a closed surface consisting of vacuum holes. Since a material body cannot exist outside of the Universe, it reappears instantaneously at a random point of the Universe. Since a body disappears in one volume and reappears in another random volume without traversing the physical space between them, such a transportation method can be called teleportation (or Hole Teleportation). It is shown that Hole Teleportation does not violate causality and special relativity due to its random nature and other properties. Although Hole Teleportation has a random nature, it can be used for colonization of extrasolar planets with the help of a method called 'random jumps': after a large number of random teleportation jumps, there is a probability that the spaceship may appear near a habitable planet. We can create vacuum holes experimentally using the method proposed by Descartes: we must remove a body from the vessel without permitting another body to occupy this volume.

Keywords: border of the Universe, causality violation, perfect isolation, quantum jumps

Procedia PDF Downloads 394
260 Wetting Induced Collapse Behavior of Loosely Compacted Kaolin Soil: A Microstructural Study

Authors: Dhanesh Sing Das, Bharat Tadikonda Venkata

Abstract:

Collapsible soils undergo significant volume reduction upon wetting under the pre-existing mechanically applied normal stress (inundation pressure). These soils exhibit very high strength in air-dried conditions and can carry up to a considerable magnitude of normal stress without undergoing significant volume change. The soil strength is, however, lost upon saturation and results in a sudden collapse of the soil structure under the existing mechanical stress condition. The intrusion of water into dry deposits of such soil causes ground subsidence, leading to damage in the overlying buildings/structures. A study on the wetting-induced volume change behavior of collapsible soils is essential in dealing with the ground subsidence problems in various geotechnical engineering practices. The collapse of loosely compacted Kaolin soil upon wetting under various inundation pressures has been reported in recent studies. The collapse in the Kaolin soil is attributed to the alteration in the soil particle-particle association (fabric) resulting from the changes in the various inter-particle (microscale) forces induced by water saturation. The inundation pressure plays a significant role in the fabric evolution during the wetting process and thus controls the collapse potential of the compacted soil. A microstructural study is useful to understand the collapse mechanisms at various pore-fabric levels under different inundation pressures. Kaolin soil compacted to a dry density of 1.25 g/cc was used in this work to study the wetting-induced volume change behavior under different inundation pressures in the range of 10-1600 kPa. The compacted specimens of Kaolin soil exhibited a consistent collapse under all the studied inundation pressures. The collapse potential was observed to increase with an increase in the inundation pressure up to a maximum value of 13.85% under 800 kPa and then decreased to 11.7% under 1600 kPa. Microstructural analysis was carried out based on the fabric images and the pore size distributions (PSDs) obtained from FESEM analysis and mercury intrusion porosimetry (MIP), respectively. The PSDs and the soil fabric images of the 'as-compacted' specimen and the post-collapse specimen under 400 kPa were analyzed to understand the changes in the soil fabric and pores due to wetting. The pore size density curve for the post-collapse specimen was found to be on the finer side with respect to the 'as-compacted' specimen, indicating the reduction of the larger pores during the collapse. The inter-aggregate pores in the range of 0.1-0.5 μm were identified as the major contributing pore size classes to the macroscopic volume change. Wetting under an inundation pressure results in the reduction of these pore sizes and leads to an increase in the finer pore sizes. The magnitude of the inundation pressure influences the amount of reduction of these pores during the wetting process. The collapse potential was directly related to the degree of reduction in the pore volume contributed by these pore sizes.

Keywords: collapse behavior, inundation pressure, kaolin, microstructure

Procedia PDF Downloads 117
259 Organic Geochemical Evaluation of the Ecca Group Shale: Implications for Hydrocarbon Potential

Authors: Temitope L. Baiyegunhi, Kuiwu Liu, Oswald Gwavava, Christopher Baiyegunhi

Abstract:

Shale gas has recently become the exploration focus for future energy resources in South Africa. Specifically, the black shales of the lower Ecca Group in the study area are considered to be one of the most prospective targets for shale gas exploration. Evaluation of this potential resource has been restricted due to the lack of exploration and the scarcity of existing drill core data. Thus, only limited previous geochemical data exist for these formations. In this study, outcrop and core samples of the Ecca Group were analysed to assess their total organic carbon (TOC), organic matter type, thermal maturity and hydrocarbon generation potential (SP). The results show that these rocks have TOC ranging from 0.11 to 7.35 wt.%. The SP values vary from 0.09 to 0.53 mg HC/g, suggesting poor hydrocarbon generative potential. The plot of S1 versus TOC shows that the source rocks were characterized by autochthonous hydrocarbons. S2/S3 values range between 0.40 and 7.5, indicating Type-II/III, III, and IV kerogen. With the exception of one sample from the Collingham Formation, which has an HI value of 53 mg HC/g TOC, all other samples have HI values of less than 50 mg HC/g TOC, thus suggesting Type-IV kerogen, which is mostly derived from reworked organic matter (mainly dead carbon) with little or no potential for hydrocarbon generation. Tmax values range from 318 to 601℃, indicating immature to over-mature organic matter. The vitrinite reflectance values range from 2.22 to 3.93%, indicating over-maturity of the kerogen. Binary plots of HI against OI and HI versus Tmax show that the shales contain Type II and mixed Type II-III kerogen, which are capable of generating both natural gas and minor oil at suitable burial depths. Based on the geochemical data, it can be inferred that the source rock maturity varies between localities from immature to over-mature and that the rocks have the potential to produce wet to dry gas at the present stage. Generally, the Whitehill Formation of the Ecca Group is comparable to the Marcellus and Barnett Shales. This further supports the assumption that the Whitehill Formation has a high probability of being a profitable shale gas play, but only when explored in dolerite-free areas and away from the Cape Fold Belt.
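For orientation, the hydrogen index (HI) and oxygen index (OI) quoted above follow the standard Rock-Eval definitions, which are not restated in the abstract: HI = 100 × S2 / TOC (in mg HC/g TOC) and OI = 100 × S3 / TOC (in mg CO₂/g TOC), where S2 and S3 are the pyrolysis yields and TOC is in wt.%. As a purely hypothetical example, a sample with S2 = 0.5 mg HC/g rock and TOC = 1.0 wt.% would have HI = 50 mg HC/g TOC, i.e. in the Type-IV range reported for most samples.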

Keywords: source rock, organic matter type, thermal maturity, hydrocarbon generation potential, Ecca Group

Procedia PDF Downloads 117
258 A Method To Assess Collaboration Using Perception of Risk from the Architectural Engineering Construction Industry

Authors: Sujesh F. Sujan, Steve W. Jones, Arto Kiviniemi

Abstract:

The use of Building Information Modelling (BIM) in the Architectural-Engineering-Construction (AEC) industry is a form of systemic innovation. Unlike incremental innovation (such as the technological development of CAD from hand-based drawings to 2D electronically printed drawings), any form of systemic innovation in Project-Based Inter-Organisational Networks requires complete collaboration and results in numerous benefits if adopted and utilised properly. Proper use of BIM involves people collaborating with the use of interoperable BIM-compliant tools. The AEC industry globally has been known for its adversarial and fragmented nature, where firms take advantage of one another to increase their own profitability. Due to the industry's nature, getting people to collaborate by unifying their goals is critical to successful BIM adoption. However, this form of innovation is often being forced artificially into the old ways of working, which do not suit collaboration. This may be one of the reasons for its low global use even though the technology was developed more than 20 years ago. Therefore, there is a need to develop a metric/method to support and allow industry players to gain confidence in their investment into BIM software and workflow methods. This paper takes as its point of departure a definition of systemic risk as a risk that affects all the project participants at a given stage of a project, and defines categories of systemic risks. The need to generalise is to allow method applicability to any industry where the category will be the same, but the example of the risk will depend on the industry the study is done in. The method proposed seeks to use individual perception of an example of systemic risk as a key parameter. The significance of this study lies in relating the variance of individual perception of systemic risk to how much the team is collaborating. The method bases its notions on the claim that a more unified range of individual perceptions would mean a higher probability that the team is collaborating better. Since contracts and procurement determine how a project team operates, the method could also break the methodological barrier of highly subjective findings that case studies inflict, which has limited the possibility of generalising between global industries. Since human nature applies in all industries, the authors' intuition is that perception can be a valuable parameter to study collaboration, which is essential especially in projects that utilise systemic innovation such as BIM.
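The proposed metric - reading a narrower spread of individual risk-perception ratings as a sign of better collaboration - can be illustrated with a tiny sketch. The team names and Likert-style ratings below are hypothetical, and the 1.0 spread threshold is an arbitrary illustration, not a value from the paper.

```python
# Sketch: spread of individual risk-perception ratings as a collaboration proxy.
import statistics

perceived_risk = {
    "team_A": [5, 5, 6, 5, 6, 5],   # tightly clustered perceptions
    "team_B": [2, 7, 4, 6, 1, 5],   # widely dispersed perceptions
}

for team, ratings in perceived_risk.items():
    spread = statistics.stdev(ratings)
    verdict = "more unified (higher collaboration likelihood)" if spread < 1.0 else "dispersed"
    print(f"{team}: mean={statistics.mean(ratings):.2f}, stdev={spread:.2f} -> {verdict}")
```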

Keywords: building information modelling, perception of risk, systemic innovation, team collaboration

Procedia PDF Downloads 160
257 Spectroscopic Studies and Reddish Luminescence Enhancement with the Increase in Concentration of Europium Ions in Oxy-Fluoroborate Glasses

Authors: Mahamuda Sk, Srinivasa Rao Allam, Vijaya Prakash G.

Abstract:

Different concentrations of Eu³⁺ ions doped in oxy-fluoroborate glasses of composition 60B₂O₃-10BaF₂-10CaF₂-15CaF₂-(5-x)Al₂O₃-xEu₂O₃, where x = 0.1, 0.5, 1.0 and 2.0 mol%, have been prepared by the conventional melt quenching technique and are characterized through absorption and photoluminescence (PL), decay, color chromaticity and confocal measurements. The absorption spectra of all the glasses consist of six peaks corresponding to the transitions ⁷F₀→⁵D₂, ⁷F₀→⁵D₁, ⁷F₁→⁵D₁, ⁷F₁→⁵D₀, ⁷F₀→⁷F₆ and ⁷F₁→⁷F₆, respectively. The experimental oscillator strengths with and without thermal corrections have been evaluated using the absorption spectra. Judd-Ofelt (JO) intensity parameters (Ω₂ and Ω₄) have been evaluated from the photoluminescence spectra of all the glasses. PL spectra of all the glasses have been recorded at excitation wavelengths of 395 nm (conventional excitation source) and 410 nm (diode laser) to observe the intensity variation in the PL spectra. All the spectra consist of five emission peaks corresponding to the transitions ⁵D₀→⁷Fⱼ (J = 0, 1, 2, 3 and 4). Surprisingly, no concentration quenching is observed in the PL spectra. Among all the glasses, the glass with 2.0 mol% Eu³⁺ ion concentration possesses the maximum intensity for the transition ⁵D₀→⁷F₂ (612 nm) in the bright red region. The JO parameters derived from the photoluminescence spectra have been used to evaluate the essential radiative properties such as the transition probability (A), radiative lifetime (τR), branching ratio (βR) and peak stimulated emission cross-section (σse) for the ⁵D₀→⁷Fⱼ (J = 0, 1, 2, 3 and 4) transitions of the Eu³⁺ ions. The decay rates of the ⁵D₀ fluorescent level of Eu³⁺ ions in the title glasses are found to be single exponential for all the studied Eu³⁺ ion concentrations. A marginal increase in the lifetime of the ⁵D₀ level has been noticed with the increase in Eu³⁺ ion concentration from 0.1 mol% to 2.0 mol%. Among all the glasses, the glass with 2.0 mol% Eu³⁺ ion concentration possesses the maximum values of branching ratio, stimulated emission cross-section and quantum efficiency for the transition ⁵D₀→⁷F₂ (612 nm) in the bright red region. The color chromaticity coordinates are also evaluated to confirm the reddish luminescence from these glasses; these coordinates fall exactly in the bright red region. Confocal images were also recorded to confirm the reddish luminescence from these glasses. From all the results obtained in the present study, it is suggested that the glass with 2.0 mol% Eu³⁺ ion concentration is suitable for emitting bright red laser light.

Keywords: Europium, Judd-Ofelt parameters, laser, luminescence

Procedia PDF Downloads 215
256 Prediction of Finned Projectile Aerodynamics Using a Lattice-Boltzmann Method CFD Solution

Authors: Zaki Abiza, Miguel Chavez, David M. Holman, Ruddy Brionnaud

Abstract:

In this paper, the prediction of the aerodynamic behavior of the flow around a Finned Projectile will be validated using a Computational Fluid Dynamics (CFD) solution, XFlow, based on the Lattice-Boltzmann Method (LBM). XFlow is an innovative CFD software developed by Next Limit Dynamics. It is based on a state-of-the-art Lattice-Boltzmann Method which uses a proprietary particle-based kinetic solver and a LES turbulent model coupled with the generalized law of the wall (WMLES). The Lattice-Boltzmann method discretizes the continuous Boltzmann equation, a transport equation for the particle probability distribution function. From the Boltzmann transport equation, and by means of the Chapman-Enskog expansion, the compressible Navier-Stokes equations can be recovered. However to simulate compressible flows, this method has a Mach number limitation because of the lattice discretization. Thanks to this flexible particle-based approach the traditional meshing process is avoided, the discretization stage is strongly accelerated reducing engineering costs, and computations on complex geometries are affordable in a straightforward way. The projectile that will be used in this work is the Army-Navy Basic Finned Missile (ANF) with a caliber of 0.03 m. The analysis will consist in varying the Mach number from M=0.5 comparing the axial force coefficient, normal force slope coefficient and the pitch moment slope coefficient of the Finned Projectile obtained by XFlow with the experimental data. The slope coefficients will be obtained using finite difference techniques in the linear range of the polar curve. The aim of such an analysis is to find out the limiting Mach number value starting from which the effects of high fluid compressibility (related to transonic flow regime) lead the XFlow simulations to differ from the experimental results. This will allow identifying the critical Mach number which limits the validity of the isothermal formulation of XFlow and beyond which a fully compressible solver implementing a coupled momentum-energy equations would be required.
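As a minimal illustration of the lattice-Boltzmann collide-and-stream update that the abstract describes (the discretized Boltzmann transport equation with a BGK relaxation toward equilibrium), the sketch below runs a toy D2Q9 periodic shear-decay problem. It is not related to XFlow, its particle-based kinetic solver, or the finned-projectile case; the grid size, relaxation time and initial velocity are arbitrary.

```python
# Toy D2Q9 lattice-Boltzmann (BGK) collide-and-stream loop on a periodic domain.
import numpy as np

nx, ny, tau = 64, 64, 0.8                       # grid and relaxation time (sets viscosity)
w  = np.array([4/9] + [1/9]*4 + [1/36]*4)       # D2Q9 weights
cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])   # discrete velocity set
cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])

def equilibrium(rho, ux, uy):
    cu = cx[:, None, None]*ux + cy[:, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
ux = 0.05*np.sin(2*np.pi*Y/ny)                  # initial sinusoidal shear flow
uy = np.zeros((nx, ny))
f = equilibrium(np.ones((nx, ny)), ux, uy)      # start from the equilibrium distribution

for step in range(200):
    rho = f.sum(axis=0)                         # macroscopic density
    ux = (cx[:, None, None]*f).sum(axis=0)/rho  # macroscopic velocity
    uy = (cy[:, None, None]*f).sum(axis=0)/rho
    f += -(f - equilibrium(rho, ux, uy))/tau    # BGK collision step
    for i in range(9):                          # streaming to neighbouring nodes
        f[i] = np.roll(np.roll(f[i], cx[i], axis=0), cy[i], axis=1)

print(f"max |u| after 200 steps: {np.abs(ux).max():.4f} (viscous decay of the shear wave)")
```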

Keywords: CFD, computational fluid dynamics, drag, finned projectile, lattice-boltzmann method, LBM, lift, mach, pitch

Procedia PDF Downloads 391
255 Elasticity of Soil Fertility Indicators and pH in Termite Infested Cassava Field as Influenced by Tillage and Organic Manure Sources

Authors: K. O. Ogbedeh, T. T. Epidi, E. U. Onweremadu, E. E. Ihem

Abstract:

Apart from the devastating nature of termites as pests of cassava, nearly all termite species have been implicated in soil fertility modifications. The elasticity of soil fertility indicators and pH in a termite-infested cassava field as influenced by tillage and organic manure sources in Owerri, Southeast Nigeria was investigated in this study. Three years of field trials were conducted in the 2007, 2008 and 2009 cropping seasons at the Teaching and Research Farm of the Federal University of Technology, Owerri. The experiments were laid out in a 3x6 split-plot factorial arrangement fitted into a randomized complete block design (RCBD) with three replications. TMS 4(2)1425 was the cassava cultivar used. Treatments consisted of three tillage methods (zero, flat and mound), two rates of municipal waste (1.5 and 3.0 tonnes/ha), two rates of Azadirachta indica (neem) leaves (20 and 30 tonnes/ha), a control (0.0 tonnes/ha) and a unit dose of carbofuran (chemical check). Data were collected on pre-planting soil physical and chemical properties, post-harvest soil pH (both in water and KCl) and residual total exchangeable bases (Ca, K, Mg and Na). These were analyzed using the Mixed-model procedure of Statistical Analysis Software (SAS). Means were separated using the Least Significant Difference (LSD) at the 5% level of probability. Results show that the native soil fertility status of the experimental site was poor. However, soil pH increased substantially in plots treated with mound tillage, A. indica leaves at 30 t/ha and municipal waste (1.5 and 3.0 t/ha), especially in 2008 and 2009. In the 2007 trial, the highest soil pH was maintained with flat tillage (5.41 in water and 4.97 in KCl). The control, on the other hand, recorded the lowest soil pH, especially in 2009, with values of 5.18 and 4.63 in water and KCl respectively. Equally, mound tillage, A. indica leaves at 30 t/ha and municipal waste at 3.0 t/ha increased the organic matter content of the soil more consistently than the other treatments. Finally, mound tillage and A. indica leaves at 30 t/ha linearly and consistently increased the residual total exchangeable bases of the soil.

Keywords: elasticity, fertility, indicators, termites, tillage, cassava and manure sources

Procedia PDF Downloads 275
254 Detection of Ice Formation Processes Using Multiple High Order Ultrasonic Guided Wave Modes

Authors: Regina Rekuviene, Vykintas Samaitis, Liudas Mažeika, Audrius Jankauskas, Virginija Jankauskaitė, Laura Gegeckienė, Abdolali Sadaghiani, Shaghayegh Saeidiharzand

Abstract:

Icing causes significant damage to aviation and renewable energy installations. Air-conditioning, refrigeration, wind turbine blades, and airplane and helicopter blades often suffer from icing phenomena, which cause severe energy losses and impair aerodynamic performance. The icing process is a complex phenomenon with many different causes and types. Icing mechanisms, distributions, and patterns are still relevant research topics. The adhesion strength between ice and surfaces differs in different icing environments. This makes the task of anti-icing very challenging. The techniques for various icing environments must satisfy different demands and requirements (e.g., efficiency, light weight, low power consumption, low maintenance and manufacturing costs, reliable operation). It is noticeable that most methods are oriented toward a particular sector, and adapting them to or suggesting them for other areas is quite problematic. These methods often use various technologies and have different specifications, sometimes with no clear indication of their efficiency. There are two major groups of anti-icing methods: passive and active. Active techniques have high efficiency but, at the same time, quite high energy consumption and require intervention in the structure's design. It is noticeable that the vast majority of these methods require specific knowledge and personnel skills. The main effect of passive methods (ice-phobic, superhydrophobic surfaces) is to delay ice formation and growth or to reduce the adhesion strength between the ice and the surface. These methods are time-consuming and depend on forecasting. They can be applied on small surfaces only for specific targets, and most are non-biodegradable (except for anti-freezing proteins). There is some quite promising information on ultrasonic ice mitigation methods that employ UGW (Ultrasonic Guided Waves). These methods have the characteristics of low energy consumption, low cost, light weight, and easy replacement and maintenance. However, fundamental knowledge of ultrasonic de-icing methodology is still limited. The objective of this work was to identify ice formation processes and their progress by employing the ultrasonic guided wave technique. Throughout this research, a universal set-up for acoustic measurement of ice formation in real conditions (temperature range from +24°C to -23°C) was developed. Ultrasonic measurements were performed by using high-frequency 5 MHz transducers in a pitch-catch configuration. The selection of wave modes suitable for the detection of the ice formation phenomenon on a copper metal surface was performed. The interaction between the selected wave modes and ice formation processes was investigated. It was found that the selected wave modes are sensitive to temperature changes. It was demonstrated that the proposed ultrasonic technique could be successfully used for the detection of ice layer formation on a metal surface.

Keywords: ice formation processes, ultrasonic GW, detection of ice formation, ultrasonic testing

Procedia PDF Downloads 39
253 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis

Authors: H. Jung, N. Kim, B. Kang, J. Choe

Abstract:

History matching is a crucial procedure for predicting reservoir performances and making future decisions. However, it is difficult due to uncertainties in initial reservoir models. Therefore, it is important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models. Through this procedure, the permeability values of each model are transformed into new parameters by the principal components, which have eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models which show the most similar or dissimilar well oil production rates (WOPR) to the true values (10% each). Then, the other 80% of models are classified by the trained SVM. We select models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models which have a similar geological trend to the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. Newly generated models can preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results; it fails to find the correct geological features of the true model. However, history matching with the regenerated ensemble offers reliable characterization results by figuring out the proper channel trend. Furthermore, it gives dependable predictions of future performance with reduced uncertainties. We propose a novel classification scheme which integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models which have a similar channel trend to the reference in the lowered-dimension space.
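The PCA → MDS → SVM selection loop described above can be sketched compactly with scikit-learn. This is only an illustrative sketch on invented data: the "permeability" ensemble and the WOPR misfit scores are synthetic, and the 10%/10% labelling rule is mimicked rather than reproduced from the paper.

```python
# Sketch: PCA -> 2-D MDS -> SVM classification of reservoir models by WOPR error.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

rng = np.random.default_rng(3)
models = rng.lognormal(mean=3.0, sigma=1.0, size=(100, 2500))   # 100 models x grid cells
wopr_error = rng.normal(size=100) ** 2                          # hypothetical WOPR misfit

# 1) PCA to capture the main geological variability of the ensemble.
scores = PCA(n_components=20).fit_transform(np.log(models))

# 2) Project to a 2-D plane with metric MDS on Euclidean distances.
xy = MDS(n_components=2, random_state=0).fit_transform(scores)

# 3) Train the SVM on the 10% most similar and 10% most dissimilar models, then
#    classify the remaining 80% and keep those predicted to be "similar".
order = np.argsort(wopr_error)
train_idx = np.concatenate([order[:10], order[-10:]])
labels = np.array([1] * 10 + [0] * 10)                          # 1 = low WOPR error
clf = SVC(kernel="rbf", gamma="scale").fit(xy[train_idx], labels)

rest = np.setdiff1d(np.arange(100), train_idx)
selected = rest[clf.predict(xy[rest]) == 1]
print(f"{selected.size} models selected for the probability map / regeneration step")
```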

Keywords: history matching, principal component analysis, reservoir modelling, support vector machine

Procedia PDF Downloads 136
252 Last ca 2500 Yr History of the Harmful Algal Blooms in South China Reconstructed on Organic-Walled Dinoflagellate Cysts

Authors: Anastasia Poliakova

Abstract:

Harmful algal blooms (HABs) are a known negative phenomenon caused both by natural factors and by anthropogenic influence. HABs can result in a series of deleterious effects, such as beach fouling, paralytic shellfish poisoning, mass mortality of marine species, and a threat to human health, especially if toxins pollute drinking water or occur near public resorts. In South China, the problem of HABs is of particular importance. For this study, we used a 1.5 m sediment core, LAX-2018-2, collected in 2018 from the Zhanjiang Mangrove National Nature Reserve (109°03´E, 20°30´N), Guangdong Province, South China. A high-resolution coastal environment reconstruction with a specific focus on the HAB history during the last ca 2500 yrs was attempted. Age control was performed with five radiocarbon dates obtained from benthic foraminifera. A total number of 71 dinoflagellate cyst types was recorded. The most common types, found consistently throughout the sediment sequence, were the autotrophic Spiniferites spp., Spiniferites hyperacanthus and S. mirabilis, S. ramosus, Operculodinium centrocarpum sensu Wall and Dale 1966, Polysphaeridium zoharyi, and the heterotrophic Brigantedinium spp., cysts of Gymnodinium catenatum and a mixture of Protoperidinium cysts. Three local dinoflagellate zones, LAX-1 to LAX-3, were established based on the results of the constrained cluster analysis and data ordination; additionally, the middle zone LAX-2 was divided into two subzones, LAX-2a and LAX-2b, based on the dynamics of toxic and heterotrophic cysts as well as on the significant changes (probability, P=0.89) in the percentages of eutrophic indicators. The total cyst count varied from 106 to 410 dinocysts per slide, averaging 177 cysts per slide. Dinocyst assemblages are characterized by high values of the post-depositional degradation index (kt), which varies between 3.6 and 7.6 (averaging 5.4); this is relatively high and very typical for areas with selective dinoflagellate cyst preservation related to bottom-water oxygen concentrations.

Keywords: reconstruction of palaeoenvironment, harmful algal blooms, anthropogenic influence on coastal zones, South China Sea

Procedia PDF Downloads 60
251 Biodiesel Production from Edible Oil Wastewater Sludge with Bioethanol Using Nano-Magnetic Catalysis

Authors: Wighens Ngoie Ilunga, Pamela J. Welz, Olewaseun O. Oyekola, Daniel Ikhu-Omoregbe

Abstract:

Currently, most sludge from the wastewater treatment plants of edible oil factories is disposed of in landfills, but landfill sites are finite and potential sources of environmental pollution. Production of biodiesel from wastewater sludge can contribute to energy production and waste minimization. However, conventional biodiesel production is energy and waste intensive. Generally, biodiesel is produced from the transesterification reaction of oils with an alcohol (e.g., methanol, ethanol) in the presence of a catalyst. Homogeneously catalysed transesterification is the conventional approach for large-scale production of biodiesel, as reaction times are relatively short. Nevertheless, homogeneous catalysis presents several challenges, such as a high probability of soap formation. The current study aimed to reuse wastewater sludge from the edible oil industry as a novel feedstock for both monounsaturated fats and bioethanol for the production of biodiesel. Preliminary results have shown that the fatty acid profile of the oilseed wastewater sludge is favourable for biodiesel production, with 48% (w/w) monounsaturated fats, and that the residue left after the extraction of fats from the sludge contains sufficient fermentable sugars after steam explosion followed by enzymatic hydrolysis for the successful production of bioethanol [29% (w/w)] using a commercial strain of Saccharomyces cerevisiae. A novel nano-magnetic catalyst was synthesised from mineral processing alkaline tailings, mainly containing dolomite originating from cupriferous ores, using a modified sol-gel method. The catalyst's elemental chemical composition and structural properties were characterised by X-ray diffraction (XRD), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR) and BET analysis, which gave a surface area of 14.3 m²/g and an average pore diameter of 34.1 nm. The mass magnetization of the nano-magnetic catalyst was 170 emu/g. Both the catalytic properties and the reusability of the catalyst were investigated. A maximum biodiesel yield of 78% was obtained, which dropped to 52% after the fourth transesterification reaction cycle. The proposed approach has the potential to reduce material costs, energy consumption and water usage associated with conventional biodiesel production technologies. It may also mitigate the impact of conventional biodiesel production on food and land security, while simultaneously reducing waste.

Keywords: biodiesel, bioethanol, edible oil wastewater sludge, nano-magnetism

Procedia PDF Downloads 119
250 Emergency Physician Performance for Hydronephrosis Diagnosis and Grading Compared with Radiologist Assessment in Renal Colic: The EPHyDRA Study

Authors: Sameer A. Pathan, Biswadev Mitra, Salman Mirza, Umais Momin, Zahoor Ahmed, Lubna G. Andraous, Dharmesh Shukla, Mohammed Y. Shariff, Magid M. Makki, Tinsy T. George, Saad S. Khan, Stephen H. Thomas, Peter A. Cameron

Abstract:

Study objective: Emergency physicians' (EP) ability to identify hydronephrosis on point-of-care ultrasound (POCUS) has been assessed in the past using CT scan as the reference standard. We aimed to assess EP interpretation of POCUS to identify and grade hydronephrosis in a direct comparison with the consensus interpretation of POCUS by radiologists, and also to compare EP and radiologist performance using CT scan as the criterion standard. Methods: Using data from a POCUS databank, a prospective interpretation study was conducted at an urban academic emergency department. All POCUS exams were performed on patients presenting with renal colic to the ED. Institutional approval was obtained for conducting this study. All the analyses were performed using Stata MP 14.0 (Stata Corp, College Station, Texas). Results: A total of 651 patients were included, with paired sets of renal POCUS video clips and the CT scan performed at the same ED visit. Hydronephrosis was reported in 69.6% of POCUS exams by radiologists and 72.7% of CT scans (p=0.22). The κ for consensus interpretation of POCUS between the radiologists to detect hydronephrosis was 0.77 (0.72 to 0.82), and the weighted κ for grading the hydronephrosis was 0.82 (0.72 to 0.90), interpreted as good to very good. Using CT scan findings as the criterion standard, EPs had an overall sensitivity of 81.1% (95% CI: 79.6% to 82.5%), specificity of 59.4% (95% CI: 56.4% to 62.5%), PPV of 84.3% (95% CI: 82.9% to 85.7%), and NPV of 53.8% (95% CI: 50.8% to 56.7%); compared to radiologist sensitivity of 85.0% (95% CI: 82.5% to 87.2%), specificity of 79.7% (95% CI: 75.1% to 83.7%), PPV of 91.8% (95% CI: 89.8% to 93.5%), and NPV of 66.5% (95% CI: 61.8% to 71.0%). Testing for a report of moderate or high-degree hydronephrosis, the specificity of EPs was 94.6% (95% CI: 93.7% to 95.4%), rising to 99.2% (95% CI: 98.9% to 99.5%) for identifying severe hydronephrosis alone. Conclusion: EP POCUS interpretations were comparable to those of the radiologists for identifying moderate to severe hydronephrosis using CT scan results as the criterion standard. Among patients with moderate or high pre-test probability of ureteric calculi, as calculated by the STONE score, the presence of moderate to severe (+LR 6.3 and -LR 0.69) or severe hydronephrosis (+LR 54.4 and -LR 0.57) was highly diagnostic of stone disease. Low-dose CT is indicated in such patients for evaluation of stone size and location.
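The reported accuracy figures all derive from a 2x2 table against the CT criterion standard; the helper below shows how sensitivity, specificity, PPV, NPV and the likelihood ratios (+LR = sensitivity/(1-specificity), -LR = (1-sensitivity)/specificity) are computed. The counts are hypothetical, chosen only to land near the overall EP figures quoted above.

```python
# Diagnostic accuracy metrics from a 2x2 table (true/false positives/negatives).
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "+LR": sens / (1 - spec),
        "-LR": (1 - sens) / spec,
    }

# Hypothetical EP-vs-CT counts (hydronephrosis present/absent), illustrative only.
print(diagnostic_metrics(tp=384, fp=72, fn=89, tn=106))
```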

Keywords: renal colic, point-of-care, ultrasound, bedside, emergency physician

Procedia PDF Downloads 257
249 A Corpus Output Error Analysis of Chinese L2 Learners From America, Myanmar, and Singapore

Authors: Qiao-Yu Warren Cai

Abstract:

Due to the rise of big data, building corpora and using them to analyze Chinese L2 learners' language output has become a trend. Various empirical studies have been conducted using Chinese corpora built by different academic institutes. However, most of the research analyzed the data in the Chinese corpora using corpus-based qualitative content analysis with descriptive statistics. Descriptive statistics can be used to make summations about the subjects or samples that the research has actually measured in order to describe the numerical data, but the collected data cannot be generalized to the population. Comte, a French positivist, argued in the 19th century that human knowledge, whether the discipline is the humanities and social sciences or the natural sciences, should be verified in a scientific way to construct a universal theory that explains the truth and human behavior. Inferential statistics, which is able to judge the probability that a difference observed between groups is dependable or caused by chance (Free Geography Notes, 2015) and to infer from the subjects or samples what the population might think or how it might behave, is just the right method to support Comte's argument in the field of TCSOL. Also, inferential statistics is a core of quantitative research, but little research has been conducted by combining corpora with inferential statistics. Little research analyzes the differences in Chinese L2 learners' corpus output errors by using the one-way ANOVA, so the findings of previous research are limited to inferring the population's Chinese errors from the given samples' Chinese corpora. To fill this knowledge gap in the professional development of Taiwanese TCSOL, the present study aims to utilize the one-way ANOVA to analyze corpus output errors of Chinese L2 learners from America, Myanmar, and Singapore. The results show that no significant difference exists in 'shì (是) sentence' and word order errors, but compared with American and Singaporean learners, it is significantly easier for Myanmar learners to produce 'sentence blends.' Based on the above results, the present study provides an instructional approach and contributes to further exploration of how Chinese L2 learners can develop (and use) learning strategies to reduce errors.
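A one-way ANOVA of per-learner error counts across the three L1 groups can be run as below; the counts are synthetic, not the study's corpus data, and are included only to show the test itself.

```python
# Sketch: one-way ANOVA comparing mean error counts across three L1 groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
errors_usa       = rng.poisson(2.0, 30)   # e.g. "sentence blend" errors per learner
errors_myanmar   = rng.poisson(3.5, 30)
errors_singapore = rng.poisson(2.1, 30)

f_stat, p_value = stats.f_oneway(errors_usa, errors_myanmar, errors_singapore)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value would support a real between-group difference (as reported for
# 'sentence blends'); a large one would not (as for shi-sentence and word-order errors).
```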

Keywords: Chinese corpus, error analysis, one-way analysis of variance, Chinese L2 learners, Americans, Myanmar, Singaporeans

Procedia PDF Downloads 81
248 Application Research of Stilbene Crystal for the Measurement of Accelerator Neutron Sources

Authors: Zhao Kuo, Chen Liang, Zhang Zhongbing, Ruan Jinlu, He Shiyi, Xu Mengxuan

Abstract:

Stilbene (C₁₄H₁₂) is well known as one of the most useful organic scintillators for the pulse shape discrimination (PSD) technique because of its good scintillation properties. An on-line acquisition system, built from several CAMAC standard plug-ins, NIM plug-ins and a neutron/γ discriminating plug-in (type 2160A), and an off-line acquisition system, based on a digital oscilloscope with a high sampling rate, were developed around stilbene crystals coupled to photomultiplier tube (PMT) detectors for accelerator neutron source measurements carried out at the China Institute of Atomic Energy. Pulse amplitude spectra and charge amplitude spectra were recorded in real time after good neutron/γ discrimination, with best PSD figure-of-merits (FoMs) of 1.756 for the D-D accelerator neutron source and 1.393 for the D-T accelerator neutron source. With the on-line acquisition system, after subtracting the scattering background, the fraction of neutron events in total events was 80% and the neutron detection efficiency was 5.21% for the D-D source; the corresponding values were 50% and 1.44% for the D-T source. Pulse waveform signals were acquired randomly by the off-line acquisition system while the on-line acquisition system was working. After off-line waveform digitization processing with the charge integration method, the PSD FoMs obtained by the off-line system were 2.158 for the D-D source and 1.802 for the D-T source from just 1000 pulses. In addition, the neutron-event fractions obtained by the off-line system matched very well with those of the on-line system. The pulse information recorded by the off-line system can be reused to adjust the PSD parameters or methods and to obtain neutron charge amplitude spectra or pulse amplitude spectra by digital analysis with a limited number of pulses. With a limited number of pulses, the off-line acquisition system showed measurement performance equivalent to or better than the on-line system, indicating a feasible stilbene-crystal-based method for measuring prompt neutron sources, such as accelerator neutron sources that emit many neutrons in a short time.
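The charge integration method mentioned above discriminates neutrons from γ rays by comparing the charge in the delayed (tail) part of each digitized pulse with the total charge, and the FoM is then computed from the separation of the resulting neutron and γ distributions. A minimal sketch of both steps; the window boundaries, array names and the Gaussian FWHM assumption are chosen purely for illustration:

```python
import numpy as np

def psd_ratio(pulse, t_start, t_tail, t_end):
    """Charge integration PSD: tail charge divided by total charge of one digitized pulse."""
    total = np.sum(pulse[t_start:t_end])
    tail = np.sum(pulse[t_tail:t_end])
    return tail / total

def figure_of_merit(neutron_ratios, gamma_ratios):
    """FoM = peak separation / (FWHM_neutron + FWHM_gamma), assuming Gaussian-like distributions."""
    fwhm = lambda x: 2.355 * np.std(x)  # FWHM of a Gaussian from its standard deviation
    separation = abs(np.mean(neutron_ratios) - np.mean(gamma_ratios))
    return separation / (fwhm(neutron_ratios) + fwhm(gamma_ratios))

# Usage sketch: 'pulses' would hold the ~1000 digitized waveforms; events are split by a PSD cut.
# ratios = [psd_ratio(p, 0, 40, 300) for p in pulses]
```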

Keywords: stilbene crystal, accelerator neutron source, neutron/γ discrimination, figure-of-merits, CAMAC, waveform digitization

Procedia PDF Downloads 157
247 Improving Functionality of Radiotherapy Department Through: Systemic Periodic Clinical Audits

Authors: Kamal Kaushik, Trisha, Dandapni, Sambit Nanda, A. Mukherjee, S. Pradhan

Abstract:

INTRODUCTION: As complexity in radiotherapy practice and processes increases, there is a need to assure quality control to a greater extent. At present, no international literature is available on the optimal quality control indicators for radiotherapy; moreover, few clinical audits have been conducted in the field. The primary aim is to improve the processes that directly impact clinical outcomes for patients in terms of patient safety and quality of care. PROCEDURE: A team of an oncologist, a medical physicist and a radiation therapist was formed for weekly clinical audits of patients undergoing radiotherapy. The audited stages included pre-planning, simulation, planning, daily QA, and implementation and execution (with image guidance). Errors in all parts of the chain were evaluated and recorded for the development of further departmental protocols for radiotherapy. EVALUATION: The errors at various stages of the radiotherapy chain were evaluated and recorded for comparison before and after the clinical audits were started in the department, and the stage in which the maximum number of errors occurred was identified. The clinical audits were used to structure standard protocols (in the form of checklists) in the department of radiotherapy, which may further reduce the occurrence of clinical errors in the radiotherapy chain. RESULTS: The aim of this study is to compare the number of errors in different parts of the RT chain between two groups (A: before audit; B: after audit). Group A: 94 patients (48 males, 46 females), total number of errors in the RT chain: 19 (9 needed re-simulation). Group B: 94 patients (61 males, 33 females), total number of errors in the RT chain: 8 (4 needed re-simulation). CONCLUSION: After systematic periodic clinical audits, the percentage of errors in the radiotherapy process was reduced by more than 50% within 2 months. There is a great need to improve quality control in radiotherapy, and the role of clinical audits can only grow. Although clinical audits are time-consuming and complex undertakings, the potential benefits in terms of identifying and rectifying errors in quality control procedures are enormous. Radiotherapy is a chain of processes, and there is always a probability that an error in any part of the chain will propagate through to the execution of treatment. Structuring departmental protocols and policies helps in reducing, if not completely eradicating, the occurrence of such incidents.
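The abstract reports the raw error counts only (19 of 94 chains before the audits versus 8 of 94 after). Whether such a drop is statistically significant is not part of the authors' analysis; purely as an illustration, a two-proportion z-test on those counts could be sketched as follows:

```python
from statsmodels.stats.proportion import proportions_ztest

# Errors out of 94 treatment chains before (group A) and after (group B) the audits
errors = [19, 8]
patients = [94, 94]

z_stat, p_value = proportions_ztest(count=errors, nobs=patients)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 would support a real reduction in error rate
```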

Keywords: audit, clinical, radiotherapy, improving functionality

Procedia PDF Downloads 54
246 Expert Supporting System for Diagnosing Lymphoid Neoplasms Using Probabilistic Decision Tree Algorithm and Immunohistochemistry Profile Database

Authors: Yosep Chong, Yejin Kim, Jingyun Choi, Hwanjo Yu, Eun Jung Lee, Chang Suk Kang

Abstract:

For the past decades, immunohistochemistry (IHC) has played an important role in the diagnosis of human neoplasms, helping pathologists make clearer decisions on differential diagnosis, subtyping, personalized treatment planning, and finally prognosis prediction. However, the IHC performed in various tumors in daily practice often yields conflicting results that are very challenging to interpret. Even a comprehensive diagnosis synthesizing clinical, histologic and immunohistochemical findings can be unhelpful in some complicated cases. Another important issue is that IHC data are increasing exponentially, and more and more information has to be taken into account. For this reason, we conceived the idea of developing an expert supporting system to help pathologists make better decisions when diagnosing human neoplasms with IHC results. We developed a probabilistic decision tree algorithm and tested it with real case data of lymphoid neoplasms, in which the IHC profile is more important for making a proper diagnosis than in other human neoplasms. We designed the probabilistic decision tree based on Bayes' theorem, programmed the computational process using MATLAB (The MathWorks, Inc., USA) and prepared an IHC profile database (about 104 disease categories and 88 IHC antibodies) based on the WHO classification by reviewing the literature. The initial probability of each neoplasm was set with epidemiologic data on lymphoid neoplasms in Korea. With the IHC results of 131 sequentially selected patients, the top three presumptive diagnoses for each case were made and compared with the original diagnoses. After review of the data, 124 out of 131 cases were used for the final analysis. As a result, the presumptive diagnoses were concordant with the original diagnoses in 118 cases (93.7%). The major reason for discordant cases was the similarity of the IHC profiles between two or three different neoplasms. The expert supporting system algorithm presented in this study is in its elementary stage and needs more optimization using more advanced technology such as deep learning with real case data, especially in differentiating T-cell lymphomas. Although it needs more refinement, it may be used to aid pathological decision making in the future. A further application to determine IHC antibodies for a certain subset of differential diagnoses might be possible in the near future.
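The core of such a system is a sequence of Bayes' theorem updates: starting from epidemiologic prior probabilities, each IHC result re-weights the candidate diagnoses according to how often that marker is positive in each entity. A minimal, hypothetical sketch of that update step; the disease names, priors and marker positivity rates below are illustrative placeholders, not values from the study's database:

```python
# Hypothetical priors (epidemiologic frequency of a few entities) and P(marker positive | diagnosis)
priors = {"DLBCL": 0.40, "Follicular lymphoma": 0.20, "Peripheral T-cell lymphoma": 0.10}
positivity = {
    "CD20": {"DLBCL": 0.95, "Follicular lymphoma": 0.95, "Peripheral T-cell lymphoma": 0.05},
    "CD3":  {"DLBCL": 0.02, "Follicular lymphoma": 0.02, "Peripheral T-cell lymphoma": 0.95},
}

def update(posterior, marker, result):
    """One Bayes step: multiply each diagnosis by P(result | diagnosis), then renormalize."""
    likelihood = {d: (positivity[marker][d] if result else 1 - positivity[marker][d])
                  for d in posterior}
    unnorm = {d: posterior[d] * likelihood[d] for d in posterior}
    total = sum(unnorm.values())
    return {d: p / total for d, p in unnorm.items()}

posterior = dict(priors)
for marker, result in [("CD20", True), ("CD3", False)]:        # observed IHC panel for one case
    posterior = update(posterior, marker, result)
top3 = sorted(posterior, key=posterior.get, reverse=True)[:3]  # presumptive diagnoses
print(top3, posterior)
```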

Keywords: database, expert supporting system, immunohistochemistry, probabilistic decision tree

Procedia PDF Downloads 207
245 Adapting to Rural Demographic Change: Impacts, Challenges and Opportunities for Ageing Farmers in Prachin Buri Province, Thailand

Authors: Para Jansuwan, Kerstin K. Zander

Abstract:

Most people in rural Thailand still depend on agriculture. The rural areas are undergoing changes in their demographic structures, with an increasing older population, out-migration of younger people and a shift away from work in the agricultural sector towards manufacturing and service provisioning. These changes may lead to a decline in agricultural productivity and to food insecurity. Our research aims to examine older farmers' perceptions of how rural demographic change affects them, to investigate how farmers may change their agricultural practices to cope with their ageing, and to explore the factors affecting these changes, including the opportunities and challenges arising from them. The data were collected through a household survey of 368 farmers in Prachin Buri province in central Thailand, a main area of agricultural production. A series of binomial logistic regression models was applied to analyse the data. We found that most farmers suffered from age-related diseases, which compromised their working capacity. Most farmers attempted to reduce labour-intensive work, by stopping farming and transferring the farmland to their children (41%), stopping farming and giving the land to others (e.g., selling or leasing it out) (28%), or continuing farming while making some changes (e.g., changing crops, employing additional workers) (24%). Farmers' health and having a potential farm successor were positively associated with the probability of stopping farming by transferring the land to the children. Farmers with a successor were also less likely to stop farming by giving the land to others. Farmers' age was negatively associated with the likelihood of continuing farming while making some changes. The results show that most farmers base their decisions on the hope that their children will take over the farms, and that without a successor, farmers lease out or sell the land. Without a successor, they also no longer invest in expansion and improvement of their farm production, especially the adoption of innovative technologies that could help them maintain their farm productivity. To improve farmers' quality of life and sustain their farm productivity, policies are needed to support the viability of farms, access to a pension system, and the smooth and successful transfer of land to a successor.
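The association estimates reported above come from binomial logistic regression models in which each coping decision (for example, stopping farming and transferring the land to a child) is a binary outcome regressed on farmer characteristics. A minimal sketch of one such model on synthetic data; the variable names and simulated effects are placeholders, not the study's survey items or results:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Hypothetical synthetic survey extract
df = pd.DataFrame({
    "age": rng.integers(55, 80, n),
    "health_score": rng.integers(1, 6, n),   # 1 = poor health, 5 = good health
    "has_successor": rng.integers(0, 2, n),
})
# Simulate the outcome so that poorer health and having a successor raise the transfer probability
logit_p = -2.0 - 0.5 * df["health_score"] + 2.0 * df["has_successor"] + 0.03 * df["age"]
df["transfer_to_child"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Binomial logistic regression of the coping decision on farmer characteristics
model = smf.logit("transfer_to_child ~ age + health_score + has_successor", data=df).fit(disp=False)
print(np.exp(model.params))  # exponentiated coefficients are odds ratios
```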

Keywords: rural demographic change, older farmer, stopping farming, continuing farming, health and age, farm successor, Thailand

Procedia PDF Downloads 77
244 The Asymptotic Hole Shape in Long Pulse Laser Drilling: The Influence of Multiple Reflections

Authors: Torsten Hermanns, You Wang, Stefan Janssen, Markus Niessen, Christoph Schoeler, Ulrich Thombansen, Wolfgang Schulz

Abstract:

In long pulse laser drilling of metals, it can be demonstrated that the ablation shape approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. Such behaviour is already known from ultra short pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in long pulse drilling of metals is identified, and a model for describing the asymptotic hole shape is numerically implemented, tested and clearly confirmed by comparison with experimental data. The model assumes a robust process, in the sense that the characteristics of the melt flow inside the arising melt film do not change qualitatively when the laser or processing parameters are changed. Only robust processes are technically controllable and thus of industrial interest. The condition for a robust process is identified as a threshold for the mass flow density of the assist gas at the hole entrance, which has to be exceeded. Within the robust process regime, the melt flow characteristics can be captured by only one model parameter, namely the intensity threshold. In analogy to USP ablation (where it has long been known that the resulting hole shape follows from a threshold for the absorbed laser fluence), it is demonstrated that in robust long pulse ablation the asymptotic shape forms such that, along the whole contour, the absorbed heat flux density equals the intensity threshold. The intensity threshold depends on the specific material and radiation properties and has to be calibrated by one reference experiment. The model is implemented in a numerical simulation called AsymptoticDrill, which requires so few resources that it can run on common desktop PCs, laptops or even smart devices. Resulting hole shapes can be calculated within seconds, which is a clear advantage over other simulations presented in the literature in the context of everyday industrial usage. Against this background, the software is additionally equipped with a user-friendly GUI that allows intuitive usage. Individual parameters can be adjusted using sliders while the simulation result appears immediately in an adjacent window. Platform-independent development allows flexible usage: the operator can conveniently adjust the process on a tablet, while the developer executes the tool in the office to design new processes. Furthermore, to the best knowledge of the authors, AsymptoticDrill is the first simulation that allows the import of measured real beam distributions and thus calculates the asymptotic hole shape on the basis of the real state of the specific manufacturing system. In this paper, the emphasis is placed on investigating the effect of multiple reflections on the asymptotic hole shape, which gains in importance when drilling holes with large aspect ratios.
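The asymptotic condition stated above, absorbed heat flux density equal to the intensity threshold along the whole contour, can be illustrated with a simple contour integration when multiple reflections are neglected. Under the assumption of a rotationally symmetric wall z(r) and a single incidence, the absorbed flux on the inclined wall scales with the projection factor 1/sqrt(1 + (dz/dr)^2), so setting A·I(r)/sqrt(1 + z'^2) = I_th gives an ODE for the wall slope. The sketch below integrates this single-reflection approximation for a Gaussian beam; the absorptivity, threshold and beam parameters are illustrative, and the multiple-reflection contribution studied in the paper is deliberately omitted:

```python
import numpy as np

# Illustrative parameters (not calibrated values): absorptivity A, intensity threshold I_th,
# Gaussian beam with peak intensity I0 and radius w0
A, I_th = 0.4, 1.0e9      # [-], [W/m^2]
I0, w0 = 1.0e10, 100e-6   # [W/m^2], [m]

r = np.linspace(0.0, 2.5 * w0, 2000)
I = I0 * np.exp(-2.0 * (r / w0) ** 2)  # Gaussian beam profile

# Single-reflection asymptotic condition: A*I(r)/sqrt(1 + z'^2) = I_th  =>  |z'| = sqrt((A*I/I_th)^2 - 1)
ratio = np.clip(A * I / I_th, 1.0, None)  # below threshold the wall slope is zero (hole edge)
slope = np.sqrt(ratio ** 2 - 1.0)

# Integrate the slope from the hole edge inwards to obtain the asymptotic depth profile z(r)
dr = r[1] - r[0]
depth = np.cumsum(slope[::-1])[::-1] * dr
edge_idx = np.argmax(A * I < I_th)  # first radius where the absorbed flux falls below threshold
print(f"asymptotic depth on axis: {depth[0] * 1e6:.1f} um, hole radius: {r[edge_idx] * 1e6:.1f} um")
```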

Keywords: asymptotic hole shape, intensity threshold, long pulse laser drilling, robust process

Procedia PDF Downloads 192
243 The Role of Accounting and Auditing in Anti-Corruption Strategies: The Case of ECOWAS

Authors: Edna Gnomblerou

Abstract:

Given the current scale of the corruption epidemic in West African economies, governments are seeking immediate and effective measures to reduce the prevalence of this plague within the region. Generally, accountants and auditors are expected to help organizations detect illegal practices. However, their role in the fight against corruption is sometimes limited due to the collusive nature of corruption. The Danish anti-corruption model shows that the implementation of additional controls over public accounts and independent, efficient audits improves transparency and increases the probability of detection. This study reviews the existing anti-corruption policies of the Economic Community of West African States (ECOWAS) to observe the role attributed to accounting, auditing and other managerial practices in its anti-corruption drive. It further discusses the usefulness of accounting and auditing in helping anti-corruption commissions control misconduct and increase the probability of detecting irregularities within public administration. The purpose of this initiative is to identify and assess the relevance of accounting and auditing in curbing corruption. To meet this purpose, the study was designed to answer the questions of whether accounting and auditing processes were included in the reviewed anti-corruption strategies and, if so, whether they were effective in the detection process. A descriptive research method was adopted in examining the role of accounting and auditing in West African anti-corruption strategies. The analysis reveals that proper recognition of accounting standards and implementation of financial audits are viewed as strategic mechanisms for tackling corruption. Additionally, codes of conduct, whistle-blowing and information disclosure to the public are among the most common managerial practices used throughout anti-corruption policies to address the problem effectively and efficiently. These observations imply that sound anti-corruption strategies cannot ignore the value of including accounting and auditing processes. On one hand, this suggests that governments should employ all possible resources to improve accounting and auditing practices in the management of public sector organizations. On the other hand, governments must ensure that accounting and auditing practices are not limited to the private sector but are also properly implemented in the public sector, where they constitute crucial mechanisms for controlling and reducing corrupt incentives.

Keywords: accounting, anti-corruption strategy, auditing, ECOWAS

Procedia PDF Downloads 230
242 Comparative Investigation of Two Non-Contact Prototype Designs Based on a Squeeze-Film Levitation Approach

Authors: A. Almurshedi, M. Atherton, C. Mares, T. Stolarski, M. Miyatake

Abstract:

Transportation and handling of delicate and lightweight objects is currently a significant issue in some industries. Two common contactless movement prototype designs, an ultrasonic transducer design and a vibrating plate design, are compared. Both designs are based on the method of squeeze-film levitation, and this study aims to identify the limitations and challenges of each. The designs are evaluated in terms of their levitation capabilities and characteristics. To this end, theoretical and experimental explorations are made. It is demonstrated that the ultrasonic transducer prototype design is better suited in terms of levitation capabilities; however, it poses some operating and mechanical design difficulties. For making accurate industrial products in micro-fabrication and nanotechnology contexts, such as semiconductor silicon wafers, micro-components and integrated circuits, non-contact, oil-free, ultra-precision and low-wear transport along the production line is crucial. One of the designs (design A) is called the ultrasonic chuck, whose main part is an ultrasonic transducer (Langevin, FBI 28452 HS). The other (design B) is a vibrating plate design, consisting of a plain rectangular aluminium plate firmly fastened at both ends. The size of the rectangular plate is 200x100x2 mm. In addition, four round piezoelectric actuators of 28 mm diameter and 0.5 mm thickness are glued to the underside of the plate. The vibrating plate is clamped at both ends in the horizontal plane through a steel supporting structure. The dynamics of levitation using designs A and B have been investigated based on squeeze-film levitation (SFL). The input apparatus used with both designs consists of a sine wave signal generator connected to an amplifier (ENP-1-1U, Echo Electronics), which magnifies the sine wave voltage produced by the signal generator. For design A, the measured maximum levitation heights for three semiconductor wafers weighing 52, 70 and 88 g are 240, 205 and 187 µm, respectively. For design B, the physical results show that the average separation distance for a disk of 5 g reaches 70 µm. By using the methodology of squeeze-film levitation, it is possible to hold an object in a non-contact manner. The analyses of the investigation outcomes signify that design A provides better non-contact levitation than design B; however, design A is more complicated than design B in terms of manufacturing. In order to identify an adequate non-contact SFL design, a comparison between two common such designs has been adopted for the current investigation. Specifically, the study involves comparisons in terms of the following issues: floating component geometries and material type constraints; final created pressure distributions; dangerous interactions with the surrounding space; working environment constraints; and complication and compactness of the mechanical design. Considering all these matters is essential to proficiently distinguish the better SFL design.

Keywords: ANSYS, floating, piezoelectric, squeeze-film

Procedia PDF Downloads 124
241 Regression-Based Approach for Development of a Cuff-Less Non-Intrusive Cardiovascular Health Monitor

Authors: Pranav Gulati, Isha Sharma

Abstract:

Hypertension and hypotension are known to have repercussions on the health of an individual, with hypertension contributing to an increased risk of cardiovascular diseases and hypotension resulting in syncope. This prompts the development of a non-invasive, non-intrusive, continuous and cuff-less blood pressure monitoring system to detect blood pressure variations and to identify individuals with acute and chronic heart ailments; because such devices are unavailable for practical daily use, it remains difficult to screen and subsequently regulate blood pressure. The complexities that hamper steady monitoring of blood pressure comprise the variations in physical characteristics from individual to individual and the postural differences at the site of monitoring. We propose to develop a continuous, comprehensive cardio-analysis tool based on reflective photoplethysmography (PPG). The proposed device, in the form of eyewear, captures the PPG signal and estimates the systolic and diastolic blood pressure using a sensor positioned near the temporal artery. The system relies on regression models based on the extraction of key points from a pair of PPG wavelets. The proposed system provides an edge over existing wearables in that it allows uniform contact and pressure with the temporal site, in addition to minimal disturbance by movement. Additionally, the feature extraction algorithms enhance the integrity and quality of the extracted features by discarding unreliable data sets. We tested the system with 12 subjects, of whom 6 served as the training dataset. For this, we measured the blood pressure using a cuff-based BP monitor (Omron HEM-8712) and at the same time recorded the PPG signal from our cardio-analysis tool. The complete test was conducted by using the cuff-based blood pressure monitor on the left arm while the PPG signal was acquired from the temporal site on the left side of the head. This acquisition served as the training input for the regression model on the selected features. The other 6 subjects were used to validate the model by conducting the same test on them. Results show that the developed prototype can robustly acquire the PPG signal and can therefore be used to reliably predict blood pressure levels.
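The described pipeline reduces each PPG waveform to a small feature vector and fits a regression model mapping those features to cuff-measured systolic and diastolic pressures. A minimal sketch of that idea, with invented feature values and scikit-learn's linear regression standing in for whatever regression technique the authors actually used:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical features extracted from each PPG beat pair (e.g. pulse width, upstroke time,
# amplitude ratio); targets are the paired cuff readings (SBP, DBP) in mmHg
X_train = np.array([[0.31, 0.12, 0.65], [0.28, 0.10, 0.71], [0.35, 0.14, 0.60],
                    [0.30, 0.11, 0.68], [0.33, 0.13, 0.62], [0.29, 0.12, 0.69]])
y_train = np.array([[118, 76], [124, 80], [111, 72], [121, 79], [115, 74], [126, 82]])

model = LinearRegression().fit(X_train, y_train)  # one model, two outputs (SBP, DBP)

X_validation = np.array([[0.32, 0.12, 0.64]])     # features from a held-out subject
sbp, dbp = model.predict(X_validation)[0]
print(f"predicted SBP/DBP: {sbp:.0f}/{dbp:.0f} mmHg")
```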

Keywords: blood pressure, photoplethysmograph, eyewear, physiological monitoring

Procedia PDF Downloads 249
240 Inappropriate Prescribing Defined by START and STOPP Criteria and Its Association with Adverse Drug Events among Older Hospitalized Patients

Authors: Mohd Taufiq bin Azmy, Yahaya Hassan, Shubashini Gnanasan, Loganathan Fahrni

Abstract:

Inappropriate prescribing in older patients has been associated with resource utilization and adverse drug events (ADE) such as hospitalization, morbidity and mortality. Globally, there is a lack of published data on ADE induced by inappropriate prescribing. Our study is specific to an older population and is aimed at identifying risk factors for ADE and at developing a model linking ADE to inappropriate prescribing. The study had a prospective design in which computerized medical records of 302 hospitalized elderly patients aged 65 years and above in 3 public hospitals in Malaysia (Hospital Serdang, Hospital Selayang and Hospital Sungai Buloh) were studied over a 7-month period from September 2013 until March 2014. Potentially inappropriate medications and potential prescribing omissions were determined using the published and validated START-STOPP criteria. Patients who had at least one inappropriate medication were included in Phase II of the study, where ADE were identified by a local expert consensus panel based on the published and validated Naranjo ADR probability scale. The panel also assessed whether ADE were causal or contributory to the current hospitalization. The association between inappropriate prescribing and ADE (hospitalization, mortality and adverse drug reactions) was determined by identifying whether or not the former was causal or contributory to the latter. The rate of ADE avoidability was also determined. Our findings revealed that the prevalence of potentially inappropriate prescribing was 58.6%. ADEs were detected in 31 of 105 patients (29.5%) when STOPP criteria were used to identify potentially inappropriate medications; all 31 ADEs (100%) were considered causal or contributory to admission. Of the 31 ADEs, 28 (90.3%) were considered avoidable or potentially avoidable. After adjusting for age, sex, comorbidity, dementia, baseline activities of daily living function, and number of medications, the likelihood of a serious avoidable ADE increased significantly when a potentially inappropriate medication was prescribed (odds ratio, 11.18; 95% confidence interval [CI], 5.014 to 24.93; p < .001). The medications identified by STOPP criteria are significantly associated with avoidable ADEs in older people that cause or contribute to urgent hospitalization, but contributed less towards morbidity and mortality. The findings of the study underscore the importance of preventing inappropriate prescribing.
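The adjusted odds ratio quoted above comes from a multivariable logistic regression; the basic arithmetic behind an odds ratio and its Wald confidence interval can be illustrated on a simple 2x2 table. The counts below are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical 2x2 table: rows = potentially inappropriate medication (PIM) yes/no,
# columns = avoidable ADE yes/no
a, b = 28, 49  # PIM prescribed: ADE, no ADE
c, d = 3, 94   # no PIM:         ADE, no ADE

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```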

Keywords: adverse drug events, appropriate prescribing, health services research

Procedia PDF Downloads 380
239 From Theory to Practice: Harnessing Mathematical and Statistical Sciences in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in diverse domains has created an urgent need for effective utilization of mathematical and statistical sciences in data analytics. This abstract explores the journey from theory to practice, emphasizing the importance of harnessing mathematical and statistical innovations to unlock the full potential of data analytics. Drawing on a comprehensive review of existing literature and research, this study investigates the fundamental theories and principles underpinning mathematical and statistical sciences in the context of data analytics. It delves into key mathematical concepts such as optimization, probability theory, statistical modeling, and machine learning algorithms, highlighting their significance in analyzing and extracting insights from complex datasets. Moreover, this abstract sheds light on the practical applications of mathematical and statistical sciences in real-world data analytics scenarios. Through case studies and examples, it showcases how mathematical and statistical innovations are being applied to tackle challenges in various fields such as finance, healthcare, marketing, and social sciences. These applications demonstrate the transformative power of mathematical and statistical sciences in data-driven decision-making. The abstract also emphasizes the importance of interdisciplinary collaboration, as it recognizes the synergy between mathematical and statistical sciences and other domains such as computer science, information technology, and domain-specific knowledge. Collaborative efforts enable the development of innovative methodologies and tools that bridge the gap between theory and practice, ultimately enhancing the effectiveness of data analytics. Furthermore, ethical considerations surrounding data analytics, including privacy, bias, and fairness, are addressed within the abstract. It underscores the need for responsible and transparent practices in data analytics, and highlights the role of mathematical and statistical sciences in ensuring ethical data handling and analysis. In conclusion, this abstract highlights the journey from theory to practice in harnessing mathematical and statistical sciences in data analytics. It showcases the practical applications of these sciences, the importance of interdisciplinary collaboration, and the need for ethical considerations. By bridging the gap between theory and practice, mathematical and statistical sciences contribute to unlocking the full potential of data analytics, empowering organizations and decision-makers with valuable insights for informed decision-making.

Keywords: data analytics, mathematical sciences, optimization, machine learning, interdisciplinary collaboration, practical applications

Procedia PDF Downloads 67