Search results for: modal assurance criterion

993 An In-Depth Experimental Study of Wax Deposition in Pipelines

Authors: Arias M. L., D’Adamo J., Novosad M. N., Raffo P. A., Burbridge H. P., Artana G.

Abstract:

Shale oils are highly paraffinic and, consequently, can create wax deposits that foul pipelines during transportation. Several factors must be considered when designing pipelines or treatment programs that prevent wax deposition, including the chemical species in the crude oil, flow rates, pipe diameters, and temperature. This paper describes the wax deposition study carried out within the framework of Y-TEC's flow assurance projects, as part of the process of achieving a better understanding of wax deposition issues. Laboratory experiments were performed on a medium-size wax deposition loop, 1 inch in diameter and 15 m long, equipped with a solid detector system, an online microscope to visualize crystals, and temperature and pressure sensors along the loop pipe. A baseline test was performed with diesel containing no paraffin or additive. Tests were undertaken with different temperatures of the circulating and cooling fluids at different flow conditions. Then, a solution formed by adding a paraffin to the diesel was considered, and tests varying flow rate and cooling rate were run again. Viscosity, density, WAT (Wax Appearance Temperature) by DSC (Differential Scanning Calorimetry), pour point, and cold finger measurements were carried out to determine the physical properties of the working fluids. The results obtained in the loop were analyzed through momentum balance and heat transfer models. To identify possible paraffin deposition scenarios, the temperature and pressure output signals of the loop were studied and compared with static laboratory WAT methods. Finally, we scrutinized the effect of adding a chemical inhibitor to the working fluid on the dynamics of the wax deposition process in the loop.
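
As a rough illustration of the momentum-balance analysis mentioned above, the sketch below (Python, with hypothetical fluid and loop values, not the authors' data) compares a measured loop pressure drop with a clean-pipe Darcy-Weisbach baseline to flag possible wax deposition.

# Minimal sketch (not the authors' code): flag possible wax deposition by comparing the
# measured pressure drop with a clean-pipe Darcy-Weisbach baseline. All numbers are
# hypothetical placeholders for illustration only.
import math

def clean_pipe_dp(q_m3s, d_m, l_m, rho, mu):
    """Darcy-Weisbach pressure drop for a clean pipe (laminar or Blasius turbulent)."""
    v = q_m3s / (math.pi * d_m**2 / 4.0)
    re = rho * v * d_m / mu
    f = 64.0 / re if re < 2300 else 0.316 * re**-0.25  # Blasius correlation
    return f * (l_m / d_m) * rho * v**2 / 2.0

# Hypothetical loop parameters: 1-inch pipe, 15 m long, diesel-like fluid.
d, L, rho, mu, q = 0.0254, 15.0, 840.0, 3.5e-3, 2.0e-4
baseline = clean_pipe_dp(q, d, L, rho, mu)
measured = 1.25 * baseline          # e.g., a 25% rise in the recorded loop pressure drop
flag = measured > 1.1 * baseline    # 10% threshold, arbitrary for this sketch
print("possible wax deposition" if flag else "no deposition indicated",
      f"(baseline {baseline:.0f} Pa, measured {measured:.0f} Pa)")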

Keywords: paraffin deposition, flow assurance, chemical inhibitors, flow loop

Procedia PDF Downloads 74
992 Risk Measurement and Management Strategies in Poultry Farm Enterprises in Imo State, Nigeria

Authors: Donatus Otuiheoma Ohajianya, Augusta Onyekachi Unamba

Abstract:

This study analyzed risk among poultry farm enterprises in Imo State of Nigeria. Specifically, it examined the sources of risk, the major risks associated with poultry farm enterprises, and the risk-reducing strategies among the poultry farm enterprises in the study area. Primary data collected in 2015 with a validated questionnaire from 120 proportionately and randomly selected poultry farm enterprises were used for the study. The data were analyzed with descriptive statistics and the W-statistic, which was validated with the Pearson criterion (X²). The results showed that the major risk sources affecting poultry farm enterprises were production, marketing, financial, and political, in that order. The results found a W-statistic value of 0.789, which was verified with the Pearson criterion to obtain an X²-calculated value of 4.65, lower than the X²-critical value of 11.07 at the 5% significance level. The risk-reducing strategies were found to be diversification, savings, co-operative marketing, borrowing, and insurance. It was recommended that government and donor agencies should make policies aimed at encouraging poultry farm enterprises to adopt the highlighted risk-reducing strategies in risk management to improve their productivity and farm income.
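
The sketch below illustrates the kind of concordance test described above, assuming the W-statistic is Kendall's coefficient of concordance validated against a chi-square (Pearson) critical value; the ranking matrix is a hypothetical placeholder, not the survey data.

# Minimal sketch (assumption: the W-statistic is Kendall's coefficient of concordance,
# validated against a chi-square critical value). The ranking matrix is hypothetical.
import numpy as np
from scipy.stats import chi2

ranks = np.array([  # m respondents (rows) ranking n risk sources (columns)
    [1, 2, 3, 4],
    [1, 3, 2, 4],
    [2, 1, 3, 4],
    [1, 2, 4, 3],
])
m, n = ranks.shape
col_sums = ranks.sum(axis=0)
s = ((col_sums - col_sums.mean()) ** 2).sum()
w = 12.0 * s / (m**2 * (n**3 - n))        # Kendall's W
chi_calc = m * (n - 1) * w                # chi-square approximation
chi_crit = chi2.ppf(0.95, df=n - 1)       # 5% significance level
print(f"W = {w:.3f}, chi2 = {chi_calc:.2f}, critical = {chi_crit:.2f}")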

Keywords: risk, measurement, management, poultry farm, Imo State

Procedia PDF Downloads 275
991 Addressing Supply Chain Data Risk with Data Security Assurance

Authors: Anna Fowler

Abstract:

When considering assets that may need protection, the mind turns to homes, cars, and investment funds. In most cases, those assets can be protected through security systems and insurance. Data is not the first asset that comes to mind as needing protection, even though data is at the core of most supply chain operations; it includes trade secrets, personally identifiable information (PII), and consumer data that can be used to enhance the overall experience. Data is a critical element of success for supply chains and should be one of the most critical areas to protect. In the supply chain industry, there are two major misconceptions about protecting data: (i) 'we do not manage or store confidential or personally identifiable information (PII)', and (ii) reliance on third-party vendor security. These misconceptions can significantly derail organizational efforts to adequately protect data across environments. The first misconception, 'we do not manage or store confidential or personally identifiable information (PII)', is dangerous because it implies that the organization lacks proper data literacy: enterprise employees zero in on PII while neglecting trade secret theft and the complete breakdown of information sharing. The second misconception, reliance on third-party vendor security, forges the ideology that the vendor's security will absolve the company of security risk. Instead, third-party risk has grown over the last two years and is one of the major causes of data security breaches. It is important to understand that a holistic approach should be taken to protecting data, and that this approach cannot be reduced to purchasing a Data Loss Prevention (DLP) tool: a tool is not a solution. To protect supply chain data, start by providing data literacy training to all employees and by negotiating the security component of vendor contracts to require data literacy training for the individuals and teams that may access company data. It is also important to understand the origin of the data and its movement, including risk identification, to ensure that processes effectively incorporate data security principles, and to evaluate and select DLP solutions that address specific concerns and use cases in conjunction with data visibility. These approaches are part of a broader solutions framework called Data Security Assurance (DSA). The DSA framework looks at all of the processes across the supply chain, including their corresponding architecture and workflows, employee data literacy, governance and controls, integration between third- and fourth-party vendors, DLP as a solution concept, and policies related to data residency. Within cloud environments, this framework is crucial for the supply chain industry to avoid regulatory implications and third- and fourth-party risk.

Keywords: security by design, data security architecture, cybersecurity framework, data security assurance

Procedia PDF Downloads 60
990 Distance Learning in Vocational Mass Communication Courses during COVID-19 in Kuwait: A Media Richness Perspective of Students’ Perceptions

Authors: Husain A. Murad, Ali A. Dashti, Ali Al-Kandari

Abstract:

The outbreak of coronavirus during the spring semester of 2020 brought new challenges for the teaching of vocational mass communication courses at universities in Kuwait. Using Media Richness Theory (MRT), this study examines the responses of 252 university students enrolled in mass communication programs to a questionnaire about their perceptions of, and preferences concerning, online modes of instruction for vocational courses, focusing on the four factors of MRT: immediacy of feedback, capacity to include personal focus, conveyance of multiple cues, and variety of language. The outcomes show that immediacy of feedback predicted all criterion variables: suitability of distance learning (DL) for teaching vocational courses, sentiments of students toward DL, perceptions of the ease of evaluation of DL coursework, and the possibility of retaking DL courses. Capacity to include personal focus was another positive predictor of the criterion variables; it predicted students' sentiments toward DL and the possibility of retaking DL courses. The outcomes are discussed in relation to implications for using DL, as well as constructing an agenda for DL research.
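
As a hedged illustration of the prediction analysis described above, the sketch below regresses one criterion variable on the four MRT factors with ordinary least squares; the data are randomly generated placeholders, not the survey responses.

# Minimal sketch (not the authors' analysis): regressing one criterion variable on the four
# MRT factors with OLS. The data here are randomly generated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 252
X = rng.normal(size=(n, 4))                      # immediacy, personal focus, multiple cues, language variety
beta = np.array([0.6, 0.3, 0.1, 0.05])           # hypothetical effect sizes
y = X @ beta + rng.normal(scale=1.0, size=n)     # e.g., perceived suitability of DL

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.params)    # which MRT factors predict the criterion variable
print(model.pvalues)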

Keywords: distance learning, media richness theory, traditional learning, vocational media courses

Procedia PDF Downloads 38
989 Assessing Walkability in New Cities around Cairo

Authors: Lobna Ahmed Galal

Abstract:

Modal integration has been given minimal consideration in the cities of developing countries, along with the declining dominance of public transport and the predominance of informal transport: the modal share of informal taxis in Greater Cairo increased from 6% in 1987 to 37% in 2001 and has since risen even higher. Informal and non-motorized modes of transport act as gap fillers by feeding other modes of transport, not by design or choice, but often because of the lack of accessible and affordable public transport. Yet non-motorized transport remains peripheral, with minimal priority in urban planning and investment and a lack of strong policies to support it; for the authorities, development is associated with technology and motorized transport, so the promotion of non-motorized transport may not be seen as consistent with development, and there is also a social stigma against non-motorized transport, as it is seen as a travel mode for the poor. Cairo, as a city of a developing country, has poor-quality infrastructure for non-motorized transport: dedicated corridors are absent and, where they exist, are often encroached upon for commercial purposes; traffic lanes are widened at the expense of sidewalks; footpaths are absent, or overcrowded and poorly lit, making walking unsafe; and financial support for such facilities is lacking, as it is often considered beyond the city's capabilities. This paper deals with the objective measurement of the built environment as it relates to walking in some neighborhoods of the new cities around Cairo, in addition to comparing the results of the objective measures of the built environment with the results of a self-reported survey. The paper's first objective is to show how the 'walkability of community neighborhoods' index works in the context of neighborhoods of the new cities around Cairo. The objective measurement procedure has high potential to be carried out using GIS.
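
A minimal sketch of how an objective walkability index could be assembled from GIS-derived measures is given below; the indicators, values, and equal weights are hypothetical assumptions, not the index used in the paper.

# Minimal sketch (hypothetical indicators and weights): a simple composite walkability index
# built from z-scored objective measures of the built environment, as could be computed per
# neighborhood from GIS layers.
import numpy as np

# rows = neighborhoods, columns = e.g. intersection density, sidewalk coverage,
# land-use mix, residential density (all hypothetical values)
measures = np.array([
    [45.0, 0.60, 0.55, 120.0],
    [30.0, 0.35, 0.40,  80.0],
    [60.0, 0.75, 0.70, 150.0],
])
z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
weights = np.array([0.25, 0.25, 0.25, 0.25])   # equal weights, an arbitrary choice
walkability = z @ weights
print(walkability)   # higher score = more walkable neighborhood in this sketch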

Keywords: assessing, built environment, Cairo, walkability

Procedia PDF Downloads 352
988 Constitutive Modeling of Different Types of Concrete under Uniaxial Compression

Authors: Mostafa Jafarian Abyaneh, Khashayar Jafari, Vahab Toufigh

Abstract:

The cost of experiments on different types of concrete has raised the demand for predicting their behavior through numerical analysis. In this research, an advanced numerical model is presented to predict the complete elastic-plastic behavior of polymer concrete (PC), high-strength concrete (HSC), and high-performance concrete (HPC) with different steel fiber contents under uniaxial compression. The accuracy of the numerical response was satisfactory compared with other conventional simple models such as Mohr-Coulomb and Drucker-Prager. In order to predict the complete elastic-plastic behavior of the specimens, including softening behavior, the disturbed state concept (DSC) was implemented through nonlinear finite element analysis (NFEA) together with the hierarchical single surface (HISS) failure criterion, which is a failure surface without any singularity.
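
The sketch below illustrates the disturbed state concept (DSC) idea referred to above, blending a relatively intact and a fully adjusted response through a disturbance function; the parameter values are placeholders, not calibrated concrete data.

# Minimal sketch of the disturbed state concept (DSC) idea: the observed stress is an
# interpolation between a relatively intact (RI) response and a fully adjusted (FA) response,
# weighted by a disturbance function D. All parameter values below are hypothetical.
import numpy as np

def disturbance(xi, Du, A, Z):
    """Commonly used DSC disturbance law D = Du * (1 - exp(-A * xi**Z))."""
    return Du * (1.0 - np.exp(-A * xi**Z))

strain = np.linspace(0.0, 0.01, 200)
E = 30e9                                     # elastic modulus, Pa (placeholder)
sigma_ri = E * strain                        # relatively intact response (here simply elastic)
sigma_fa = np.full_like(strain, 20e6)        # fully adjusted (residual) stress, placeholder
xi = strain                                  # for monotonic loading, use strain as the measure
D = disturbance(xi, Du=1.0, A=500.0, Z=1.0)
sigma_obs = (1.0 - D) * sigma_ri + D * sigma_fa   # softening emerges as D grows
print(f"peak stress {sigma_obs.max()/1e6:.1f} MPa, residual {sigma_obs[-1]/1e6:.1f} MPa")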

Keywords: disturbed state concept (DSC), hierarchical single surface (HISS) failure criterion, high performance concrete (HPC), high-strength concrete (HSC), nonlinear finite element analysis (NFEA), polymer concrete (PC), steel fibers, uniaxial compression test

Procedia PDF Downloads 286
987 Thermodynamic Modeling of Three Pressure Level Reheat HRSG, Parametric Analysis and Optimization Using PSO

Authors: Mahmoud Nadir, Adel Ghenaiet

Abstract:

The main purpose of this study is the thermodynamic modeling, parametric analysis, and optimization of a three-pressure-level reheat HRSG (Heat Recovery Steam Generator) using the PSO (Particle Swarm Optimization) method. In this paper, a parametric analysis followed by a thermodynamic optimization is presented. The chosen objective function is the specific work of the steam cycle, which may be, in the case of a combined cycle (CC), a good criterion for thermodynamic performance analysis, contrary to conventional steam turbines, for which thermal efficiency could also be an important criterion. Technological constraints such as the maximum steam cycle temperature, minimum steam fraction at the steam turbine outlet, maximum steam pressure, minimum stack temperature, minimum pinch point, and maximum superheater effectiveness are also considered. The parametric analyses make it possible to understand the effect of the design parameters and the constraints on the variation of the steam cycle specific work. The PSO algorithm was used successfully in the HRSG optimization, and the achieved results are in accordance with those of previous studies in which genetic algorithms were used. Moreover, this method is easy to implement compared with the other methods.
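
A minimal PSO sketch in the spirit of the optimization described above is given below; the objective surface, bounds, and constraint are toy placeholders, not the HRSG thermodynamic model.

# Minimal PSO sketch (not the authors' model): maximize a placeholder "specific work" function
# of two design variables (e.g., HP pressure and pinch point) with a penalty for a constraint.
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    p_hp, pinch = x
    work = -(p_hp - 120.0)**2 / 100.0 - (pinch - 10.0)**2 + 500.0   # toy specific-work surface
    penalty = 1e3 * max(0.0, 8.0 - pinch)                           # e.g., pinch point >= 8 K
    return work - penalty

lo, hi = np.array([60.0, 5.0]), np.array([160.0, 25.0])
n_particles, n_iter = 30, 100
x = rng.uniform(lo, hi, size=(n_particles, 2))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)   # inertia + cognitive + social
    x = np.clip(x + v, lo, hi)
    vals = np.array([objective(p) for p in x])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best design:", gbest, "best objective:", pbest_val.max())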

Keywords: combined cycle, HRSG thermodynamic modeling, optimization, PSO, steam cycle specific work

Procedia PDF Downloads 350
986 Damping Optimal Design of Sandwich Beams Partially Covered with Damping Patches

Authors: Guerich Mohamed, Assaf Samir

Abstract:

The application of viscoelastic materials in the form of constrained layers in mechanical structures is an efficient and cost-effective technique for solving noise and vibration problems. This technique requires a design tool to select the best location, type, and thickness of the damping treatment. This paper presents a finite element model for the vibration of beams partially or fully covered with a constrained viscoelastic damping material. The model is based on Bernoulli-Euler theory for the faces and Timoshenko beam theory for the core. It uses four variables: the through-thickness constant deflection, the axial displacements of the two faces, and the bending rotation of the beam. The sandwich beam finite element is compatible with the conventional C1 finite element for homogeneous beams. To validate the proposed model, several free vibration analyses of fully or partially covered beams, with different locations of the damping patches and different percentages of coverage, are studied. The results show that the proposed approach can be used as an effective tool to study the influence of the location and size of the treatment on the natural frequencies and the associated modal loss factors. Then, a parametric study regarding the variation of the damping characteristics of partially covered beams has been conducted. In this study, the effects of the core shear modulus, the patch size, the thicknesses of the constraining layer and the core, and the locations of the patches are considered. In partial coverage, the spatial distribution of the additive damping provided by the viscoelastic material is as important as the thickness and material properties of the viscoelastic layer and the constraining layer. Indeed, to limit the added mass and to attain maximum damping, the damping patches should be placed at optimum locations. These locations are often selected using the modal strain energy indicator: the damping patches are applied over the regions of the base structure with the highest modal strain energy in order to target specific modes of vibration. In the present study, a more efficient indicator is proposed, which consists of placing the damping patches over the regions of high energy dissipation through the viscoelastic layer of the fully covered sandwich beam. The presented approach is used in an optimization method to select the best location for the damping patches, as well as the material thicknesses and material properties of the layers, that will yield optimal damping with the minimum area of coverage.
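
The sketch below illustrates the modal strain energy indicator mentioned above on a toy bar model: elements where a mode stores the most strain energy are ranked as candidate patch locations. The model and numbers are placeholders, not the sandwich beam formulation of the paper.

# Minimal sketch of the modal strain energy (MSE) indicator used to pick patch locations.
# The fixed-free chain of axial elements and its properties are hypothetical placeholders.
import numpy as np
from scipy.linalg import eigh

n_el, k, m = 10, 1.0e6, 2.0
K = np.zeros((n_el + 1, n_el + 1))
M = np.zeros_like(K)
ke = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
me = m / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
for e in range(n_el):
    K[e:e+2, e:e+2] += ke
    M[e:e+2, e:e+2] += me
K, M = K[1:, 1:], M[1:, 1:]                     # clamp the first node

vals, vecs = eigh(K, M)
phi = np.zeros(n_el + 1)
phi[1:] = vecs[:, 0]                            # first mode, clamped DOF re-inserted

mse = np.array([phi[e:e+2] @ ke @ phi[e:e+2] for e in range(n_el)])
ranking = np.argsort(mse)[::-1]
print("elements ranked by modal strain energy:", ranking)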

Keywords: finite element model, damping treatment, viscoelastic materials, sandwich beam

Procedia PDF Downloads 118
985 Modeling Anisotropic Damage Algorithms of Metallic Structures

Authors: Bahar Ayhan

Abstract:

The present paper is concerned with the numerical modeling of the inelastic behavior of anisotropically damaged ductile materials, based on a generalized macroscopic theory within the framework of continuum damage mechanics. The kinematic decomposition of the strain rates into elastic, plastic, and damage parts is the basis for constructing the continuum theory. The evolution of the damage strain rate tensor is detailed with consideration of anisotropic effects. Helmholtz free energy functions are constructed separately for the elastic and inelastic behaviors in order to be able to address the plastic and damage processes. Additionally, the constitutive structure, which is based on the standard dissipative material approach, is elaborated with the stress tensor, a yield criterion for plasticity, and a fracture criterion for damage, in addition to the potential functions of each inelastic phenomenon. The finite element method is used to approximate the linearized variational problem. Stress and strain outcomes are obtained using a numerical integration algorithm based on an operator-split methodology with separate plastic and damage multipliers. Numerical simulations are presented in order to demonstrate the efficiency of the formulation by comparison with examples from the literature.
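
As a hedged, one-dimensional illustration of the operator-split integration mentioned above (not the paper's anisotropic formulation), the sketch below applies an elastic predictor, a plastic corrector with its own multiplier, and a scalar damage update.

# Minimal 1D sketch of the operator-split idea: elastic predictor, plastic corrector with a
# plastic multiplier, and a scalar damage update driven by accumulated plastic strain.
# All material parameters are placeholders.
import numpy as np

E, H, sig_y = 200e3, 2e3, 250.0    # MPa: Young's modulus, hardening modulus, yield stress
c_dam = 5.0                        # damage growth parameter (hypothetical)

def update(eps, eps_p, alpha, D):
    sig_trial = E * (eps - eps_p)                  # elastic predictor (effective stress)
    f = abs(sig_trial) - (sig_y + H * alpha)       # yield check
    if f > 0.0:                                    # plastic corrector
        dgamma = f / (E + H)                       # plastic multiplier
        eps_p += dgamma * np.sign(sig_trial)
        alpha += dgamma
        D = min(1.0, D + c_dam * dgamma)           # scalar damage multiplier update
    sig_eff = E * (eps - eps_p)
    return (1.0 - D) * sig_eff, eps_p, alpha, D    # nominal stress degraded by damage

eps_p = alpha = D = 0.0
for eps in np.linspace(0.0, 0.01, 100):           # monotonic strain-driven loading
    sig, eps_p, alpha, D = update(eps, eps_p, alpha, D)
print(f"final stress = {sig:.1f} MPa, damage = {D:.2f}")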

Keywords: anisotropic damage, finite element method, plasticity, coupling

Procedia PDF Downloads 175
984 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (auto-associative neural network, ANN), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detector strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, the anomaly can be detected with an accuracy and an F1 score greater than 96% with the proposed method.
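
The sketch below gives one possible reading of the two-step boundary estimation idea (not the authors' OCCNN2 implementation): a classic one-class classifier provides a coarse boundary, and a feedforward network is then trained on points labeled by that boundary. The frequency features are synthetic.

# Minimal sketch of a two-step boundary estimation in the spirit of OCCNN2 (one possible
# reading, not the authors' implementation); the "tracked frequencies" are synthetic.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
freqs = rng.normal(loc=[3.9, 5.0, 9.8, 10.3], scale=0.05, size=(500, 4))  # normal condition

scaler = StandardScaler().fit(freqs)
Z = scaler.transform(freqs)
coarse = OneClassSVM(nu=0.05, gamma="scale").fit(Z)          # step 1: coarse boundary

# step 2: train a feedforward NN on samples labeled by the coarse boundary
Z_aux = rng.normal(scale=2.0, size=(3000, 4))                # samples around the scaled data
y_aux = (coarse.predict(Z_aux) == 1).astype(int)             # 1 = inside coarse boundary
fine = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0).fit(Z_aux, y_aux)

test = np.vstack([freqs[:5], freqs[:5] - 0.2])               # last five mimic a frequency drop
print(fine.predict(scaler.transform(test)))                  # 1 = normal, 0 = anomaly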

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 91
983 Efficient Study of Substrate Integrated Waveguide Devices

Authors: J. Hajri, H. Hrizi, N. Sboui, H. Baudrand

Abstract:

This paper presents a study of SIW (Substrate Integrated Waveguide) circuits with a rigorous and fast original approach based on an iterative process (WCIP). The suggested theoretical study is validated by the simulation of two different examples of SIW circuits. The obtained results are in good agreement with measurements and with the HFSS software.

Keywords: convergence study, HFSS, modal decomposition, SIW circuits, WCIP method

Procedia PDF Downloads 468
982 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction which penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive variant among the CV methods, as it fits as many models as the number of observations. Importance sampling (IS), truncated importance sampling (TIS), and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV and utilise the existing MCMC results, avoiding expensive computation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly. In contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve the PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three which are two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of the pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles for the models, conditional on equal posterior variances in the lppds, were observed. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, together with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
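
The sketch below computes WAIC and the plain importance-sampling LOO approximation from a matrix of pointwise log-likelihoods over posterior draws, the common ingredient of the criteria discussed above; the log-likelihood values are synthetic placeholders, not the stutter-model output.

# Minimal sketch (synthetic data, not the forensic stutter models): WAIC and IS-LOO from a
# matrix of pointwise log-likelihoods log p(y_i | theta_s) evaluated over posterior draws.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
S, N = 4000, 50                                   # posterior draws, observations
log_lik = rng.normal(loc=-1.0, scale=0.3, size=(S, N))   # placeholder log-likelihood matrix

lppd = logsumexp(log_lik, axis=0) - np.log(S)     # pointwise log predictive density
p_waic = log_lik.var(axis=0, ddof=1)              # effective number of parameters (WAIC-2 form)
elpd_waic = lppd - p_waic
waic = -2.0 * elpd_waic.sum()

# IS-LOO: raw importance weights are the reciprocals of the pointwise predictive densities
elpd_is_loo = -(logsumexp(-log_lik, axis=0) - np.log(S))
print(f"WAIC = {waic:.1f}, elpd_loo (IS) = {elpd_is_loo.sum():.1f}")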

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 364
981 Future of Nanotechnology in Digital MacDraw

Authors: Pejman Hosseinioun, Abolghasem Ghasempour, Elham Gholami, Hamed Sarbazi

Abstract:

Considering the development of global semiconductor technology, it is anticipated that devices such as resonant tunneling diodes and transistors (RTD/RTT), single-electron transistors (SET), and quantum cellular automata (QCA) will substitute for CMOS (Complementary Metal Oxide Semiconductor) devices in many applications. Unfortunately, these new technologies cannot implement common Boolean logic efficiently and are only appropriate for threshold logic. Therefore, with the development of these new devices, it is necessary to find new MacDraw technologies that are compatible with them. Resonant tunneling devices (RTD/RTT) and circuit MacDraw with enhanced computing abilities are candidates for meeting nanoscale criteria in the future. Quantum cellular automata (QCA) are also emerging nanotechnological devices for electrical circuits. Their advantages, such as higher speed, smaller dimensions, and lower power consumption, are of great interest. QCA are basic devices for building gates, fuses, and memories. Given the complexity of the nanoscale physical entities involved, circuit designers can focus on logical and structural design to decrease complication in MacDraw. Moreover, single-electron technology (SET) is another noteworthy device technology considered in nanotechnology. This article is a survey of the future of nanotechnology in digital MacDraw.

Keywords: nano technology, resonant transistor tunnels, quantum cellular automata, semiconductor

Procedia PDF Downloads 242
980 An Audit on the Quality of Pre-Operative Intra-Oral Digital Radiographs Taken for Dental Extractions in a General Practice Setting

Authors: Gabrielle O'Donoghue

Abstract:

Background: Pre-operative radiographs facilitate assessment and treatment planning in minor oral surgery. Quality assurance for dental radiography advocates the As Low As Reasonably Achievable (ALARA) principle in collecting accurate diagnostic information. Aims: To audit the quality of digital intraoral periapicals (IOPAs) taken prior to dental extractions in a metropolitan general dental practice setting. Standards: The National Radiological Protection Board (NRPB) guidance outlines three grades of radiograph quality: excellent (Grade 1, >70% of total exposures), diagnostically acceptable (Grade 2, <20%), and unacceptable (Grade 3, <10%). Methodology: Pre-operative radiographs taken prior to dental extractions by 44 practitioners across 12 private general dental practices in a large metropolitan area were studied. A total of 725 extractions were assessed, allowing 258 IOPAs to be reviewed in one audit cycle. Results: In the first cycle, of 258 IOPAs, 223 (86.4%) scored Grade 1, 27 (10.5%) Grade 2, and 8 (3.1%) Grade 3. The standard was met. Thirty-five dental extractions were performed without an available pre-operative radiograph. Action Plan & Recommendations: Results were distributed to all staff, and a continuing professional development evening was organized to outline recommendations for improving image quality. A second audit cycle is proposed at a six-month interval to review the recommendations and appraise the results. Conclusion: The overall standard of radiographs met the published guidelines. A significant improvement in the number of procedures undertaken without pre-operative imaging is expected at the six-month interval. An investigation into non-diagnostic imaging and associated adverse patient outcomes is being considered. Maintenance of the standards achieved is anticipated in the second audit cycle to ensure consistently high-quality imaging.
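
The audit arithmetic reported above can be reproduced with the short sketch below, which compares the first-cycle grade percentages with the quoted NRPB targets.

# Minimal sketch of the audit computation: grade percentages from the counted radiographs
# compared against the NRPB targets quoted above (Grade 1 > 70%, Grade 2 < 20%, Grade 3 < 10%).
counts = {1: 223, 2: 27, 3: 8}                 # first-cycle counts from the abstract
total = sum(counts.values())
pct = {g: 100.0 * n / total for g, n in counts.items()}
meets = pct[1] > 70.0 and pct[2] < 20.0 and pct[3] < 10.0
for g in counts:
    print(f"Grade {g}: {pct[g]:.1f}%")
print("standard met" if meets else "standard not met")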

Keywords: audit, oral radiology, oral surgery, periapical radiographs, quality assurance

Procedia PDF Downloads 135
979 Comparative Study of Free Vibrational Analysis and Modes Shapes of FSAE Car Frame Using Different FEM Modules

Authors: Rajat Jain, Himanshu Pandey, Somesh Mehta, Pravin P. Patil

Abstract:

Formula SAE cars are student-designed and student-fabricated formula prototype cars, built according to SAE International design rules, which compete in various national and international events. This paper presents an FEM-based comparative study of the free vibration analysis and mode shapes of a formula prototype car chassis frame. Tubing sections of different diameters, as per the design rules, are designed in such a manner that the desired strength can be achieved. The natural frequencies of the first five modes were determined using the finite element analysis method. SOLIDWORKS is used for designing the frame structure, and SOLIDWORKS SIMULATION and ANSYS WORKBENCH 16.2 are used for the modal analysis. The mode shape results of ANSYS and SOLIDWORKS were compared. Fixed-fixed boundary conditions are used for fixing the A-arm wishbones. The simulation results were compared for the validation of the study. The first five modes were compared, and the results were found to be within permissible limits. AISI 4130 (chromoly, chromium-molybdenum steel) is used as the material, and the chassis frame is discretized with a fine-quality QUAD mesh followed by fixed-fixed boundary conditions. The natural frequencies of the chassis frame range from 53.92 to 125.5 Hz as per the ANSYS results, which is within the permissible limits. The study concludes with a lightweight and compact chassis frame that does not compromise strength. This design allows the fabrication of a compact, dynamically stable, simple, and lightweight tubular chassis frame with higher strength and extremely safe driver ergonomics.
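
Mode shapes exported from the two packages can be compared with the modal assurance criterion (MAC), as in the hedged sketch below; the mode-shape vectors are random placeholders rather than the actual chassis results.

# Minimal sketch: comparing mode shapes from two FE packages with the modal assurance
# criterion (MAC); values near 1 indicate the same mode. The mode-shape data are placeholders.
import numpy as np

def mac(phi_a, phi_b):
    num = abs(phi_a @ phi_b) ** 2
    return num / ((phi_a @ phi_a) * (phi_b @ phi_b))

rng = np.random.default_rng(0)
modes_ansys = rng.normal(size=(5, 40))                     # 5 modes x 40 matched DOFs
modes_sw = modes_ansys + 0.05 * rng.normal(size=(5, 40))   # slightly perturbed counterparts

mac_matrix = np.array([[mac(a, b) for b in modes_sw] for a in modes_ansys])
print(np.round(mac_matrix, 2))    # diagonal close to 1, off-diagonal small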

Keywords: FEM, modal analysis, formula SAE cars, chassis frame, Ansys

Procedia PDF Downloads 309
978 Effect of the Binary and Ternary Exchanges on Crystallinity and Textural Properties of X Zeolites

Authors: H. Hammoudi, S. Bendenia, K. Marouf-Khelifa, R. Marouf, J. Schott, A. Khelifa

Abstract:

The ionic exchange of the NaX zeolite by Cu2+ and/or Zn2+ cations is progressively driven while following the development of some of its characteristics: crystallinity by X-ray diffraction, the profile of the isotherms, the RI criterion, the isosteric adsorption heat, and the microporous volume, using both the Dubinin-Radushkevich (DR) equation and the t-plot through the Lippens-de Boer method, which also makes it possible to determine the external surface area. The results show that the cationic exchange process, when Cu2+ is introduced at a higher degree, is accompanied by crystalline degradation of Cu(x)X, in contrast to the Zn2+-exchanged zeolite X. This degradation occurs without a significant presence of mesopores, because the RI criterion values were found to be much lower than 2.2. A comparison between the binary and ternary exchanges shows that the curves of CuZn(x)X are clearly below those of Zn(x)X and Cu(x)X, whatever the examined parameter. On the other hand, the curves relating to CuZn(x)X tend towards those of Cu(x)X. This again confirms the sensitivity of the crystalline structure of CuZn(x)X to the introduction of Cu2+ cations. An original result is the distortion of the zeolitic framework of X zeolites at middle exchange degrees, when Cu2+ competes with another divalent cation, such as Zn2+, for the occupancy of sites distributed within the zeolitic cavities. In other words, the ternary exchange accentuates the crystalline degradation of X zeolites. An unexpected result is also the lack of correlation between crystal damage and the external surface area.
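
A minimal sketch of the Dubinin-Radushkevich (DR) analysis mentioned above is given below: the micropore volume is read from the intercept of ln(W) versus A², with A = RT ln(p0/p). The adsorption data are synthetic placeholders, not the measured isotherms.

# Minimal sketch of the Dubinin-Radushkevich (DR) micropore analysis with placeholder data.
import numpy as np

R, T = 8.314, 77.0                       # J/(mol K), K (e.g., N2 adsorption at 77 K)
p_rel = np.array([1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2])    # p/p0
W = np.array([0.10, 0.14, 0.16, 0.20, 0.22, 0.25])        # adsorbed volume, cm3/g (placeholder)

A = R * T * np.log(1.0 / p_rel)          # adsorption potential, J/mol
slope, intercept = np.polyfit(A**2, np.log(W), 1)
W0 = np.exp(intercept)                   # micropore volume, cm3/g
E = 1.0 / np.sqrt(-slope)                # characteristic energy term (beta*E0), J/mol
print(f"W0 = {W0:.3f} cm3/g, E = {E/1000:.1f} kJ/mol")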

Keywords: adsorption, crystallinity, ion exchange, zeolite

Procedia PDF Downloads 227
977 Understanding Surface Failures in Thick Asphalt Pavement: A 3-D Finite Element Model Analysis

Authors: Hana Gebremariam Liliso

Abstract:

This study investigates the factors contributing to the deterioration of thick asphalt pavements, such as rutting and cracking. We focus on the combined influence of traffic loads and pavement structure. This study uses a three-dimensional finite element model with a Mohr-Coulomb failure criterion to analyze the stress levels near the pavement's surface under realistic conditions. Our model considers various factors, including tire-pavement contact stresses, asphalt properties, moving loads, and dynamic analysis. This research suggests that cracking tends to occur between dual tires. Key findings include: the risk of cracking increases as temperatures rise; surface cracking at high temperatures is associated with distortional deformation; using a uniform contact stress distribution underestimates the risk of failure compared with realistic three-dimensional tire contact stresses, particularly at high temperatures; the risk of failure is higher near the surface when there is a negative temperature gradient in the asphalt layer; and debonding beneath the surface layer leads to increased shear stress and premature failure around the interface.
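
The sketch below shows the kind of pointwise Mohr-Coulomb check a post-processor would apply to the computed stress field; the cohesion, friction angle, and stresses are placeholders, not asphalt-specific values.

# Minimal sketch of a Mohr-Coulomb failure check on a stress state (compression positive).
import math

def mohr_coulomb_margin(sigma1, sigma3, c, phi_deg):
    """f <= 0 means the stress state is inside the Mohr-Coulomb envelope."""
    phi = math.radians(phi_deg)
    return (sigma1 - sigma3) - (sigma1 + sigma3) * math.sin(phi) - 2.0 * c * math.cos(phi)

# major/minor principal stresses in kPa (hypothetical near-surface state under a tire)
f = mohr_coulomb_margin(sigma1=800.0, sigma3=100.0, c=150.0, phi_deg=35.0)
print("failure predicted" if f > 0 else "no failure", f"(f = {f:.1f} kPa)")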

Keywords: asphalt pavement, surface failure, 3d finite element model, multiaxial stress states, Mohr-Coulomb failure criterion

Procedia PDF Downloads 24
976 Analytical Solutions for Tunnel Collapse Mechanisms in Circular Cross-Section Tunnels under Seepage and Seismic Forces

Authors: Zhenyu Yang, Qiunan Chen, Xiaocheng Huang

Abstract:

Reliable prediction of tunnel collapse remains a prominent challenge in civil engineering. In this study, leveraging the nonlinear Hoek-Brown failure criterion and the upper-bound theorem, an analytical solution for the collapse surface of shallowly buried circular tunnels was derived, taking into account the coupled effects of surface loads and pore water pressures. Initially, surface loads and pore water pressures were introduced as external force factors, and equating the energy dissipation rate to the rate of work of the external forces yielded the objective function. Subsequently, the variational method was employed for optimization, and the outcomes were compared with previous research findings. Furthermore, the derived equations were used to systematically analyze the influence of various rock mass parameters on the collapse shape and extent. To validate the analytical solutions, a comparison with prior studies was carried out. The corroboration underscored the efficacy of the proposed methodology, offering valuable insights for collapse risk assessment in practical engineering applications.
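
A minimal sketch of evaluating the generalized Hoek-Brown criterion used in such limit analyses is given below; the rock-mass parameters are placeholders, not calibrated values from the study.

# Minimal sketch of the generalized Hoek-Brown strength evaluation: given the minor principal
# stress, the criterion returns the major principal stress at failure.
import numpy as np

def hoek_brown_sigma1(sigma3, sigma_ci, mb, s, a):
    """Generalized Hoek-Brown: sigma1 = sigma3 + sigma_ci*(mb*sigma3/sigma_ci + s)**a."""
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

sigma_ci, mb, s, a = 30.0, 2.0, 0.004, 0.5      # MPa and dimensionless constants (hypothetical)
sigma3 = np.linspace(0.0, 5.0, 6)               # MPa
print(hoek_brown_sigma1(sigma3, sigma_ci, mb, s, a))   # samples of the failure envelope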

Keywords: tunnel roof stability, analytical solution, hoek–brown failure criterion, limit analysis

Procedia PDF Downloads 53
975 Experimental Modal Analysis of a Suspended Composite Beam

Authors: Lahmar Lahbib, Abdeldjebar Rabiâ, Moudden B., Missoum L.

Abstract:

Vibration tests are used to identify the elastic modulus in two directions. This strategy is applied to a glass/polyester composite material. Experimental results obtained on a specimen in free vibration showed the efficiency of this method. The results were validated by comparison with results from static tests.
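
As a hedged illustration of this identification strategy, the sketch below back-calculates an elastic modulus from a measured first free-free bending frequency of an Euler-Bernoulli beam; the dimensions, density, and frequency are placeholders, not the tested specimen.

# Minimal sketch: identifying an elastic modulus from a measured flexural natural frequency
# of a free-free beam (Euler-Bernoulli theory). All input values are placeholders.
import math

b, h, L = 0.025, 0.004, 0.30        # width, thickness, length (m)
rho = 1800.0                        # laminate density (kg/m3, placeholder)
f1 = 180.0                          # measured first free-free bending frequency (Hz, placeholder)

A = b * h
I = b * h**3 / 12.0
beta_L = 4.73004                    # first free-free eigenvalue (beta*L)
omega = 2.0 * math.pi * f1
E = omega**2 * rho * A * L**4 / (beta_L**4 * I)
print(f"identified E = {E/1e9:.1f} GPa")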

Keywords: beam, characterization, composite, elasticity modulus, vibration

Procedia PDF Downloads 438
974 Experimental Study and Numerical Modelling of Failure of Rocks Typical for Kuzbass Coal Basin

Authors: Mikhail O. Eremin

Abstract:

The present work is devoted to the experimental study and numerical modelling of the failure of rocks typical of the Kuzbass coal basin (Russia). The main goal was to define the strength and deformation characteristics of the rocks on the basis of uniaxial compression and three-point bending tests, and then to build a mathematical model of the failure process for both types of loading. Depending on their particular physical-mechanical characteristics, typical rocks of the Kuzbass coal basin (sandstones, siltstones, mudstones, etc., of different series – Kolchuginsk, Tarbagansk, Balohonsk) manifest brittle and quasi-brittle failure. The strength characteristics for both tension and compression are found. Other characteristics are also found from the experiments or taken from literature reviews. On the basis of the obtained characteristics and the structure (obtained from microscopy), the mathematical and structural models are built, and numerical modelling of failure under different types of loading is carried out. The effective characteristics obtained from the modelling and the character of failure correspond to the experiment, and thus the mathematical model was verified. An Instron 1185 machine was used to carry out the experiments. The mathematical model includes the fundamental conservation laws of solid mechanics – mass, momentum, and energy. Each rock has a sufficiently anisotropic structure; however, each crystallite might be considered isotropic, and then the whole rock model has a quasi-isotropic structure. This idea gives the opportunity to use Hooke's law inside each crystallite and thus explicitly account for the anisotropy of the rocks and the stress-strain state under loading. Inelastic behavior is described in the framework of two different models: the von Mises yield criterion and a modified Drucker-Prager yield criterion. Damage accumulation theory is also implemented in order to describe the failure process. The effective characteristics of the rocks obtained are then used for modelling the evolution of the rock mass when mining is carried out either in an open pit or in an underground opening.
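
The sketch below shows a Drucker-Prager yield check of the kind used in such models, with the cone parameters matched to a Mohr-Coulomb cohesion and friction angle (outer-cone fit); the input values are placeholders, not calibrated Kuzbass rock data.

# Minimal sketch of a Drucker-Prager yield check on a principal stress state
# (tension-positive convention, so compressive stresses are entered as negative values).
import math

def drucker_prager(sig, c, phi_deg):
    """Return f = alpha*I1 + sqrt(J2) - k; f >= 0 indicates yielding."""
    phi = math.radians(phi_deg)
    alpha = 2.0 * math.sin(phi) / (math.sqrt(3.0) * (3.0 - math.sin(phi)))   # outer-cone match
    k = 6.0 * c * math.cos(phi) / (math.sqrt(3.0) * (3.0 - math.sin(phi)))
    s1, s2, s3 = sig
    I1 = s1 + s2 + s3
    J2 = ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 6.0
    return alpha * I1 + math.sqrt(J2) - k

f = drucker_prager((-90.0, -10.0, -10.0), c=15.0, phi_deg=30.0)   # MPa, hypothetical state
print("yielding" if f >= 0 else "elastic", f"(f = {f:.1f} MPa)")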

Keywords: damage accumulation, Drucker-Prager yield criterion, failure, mathematical modelling, three-point bending, uniaxial compression

Procedia PDF Downloads 142
973 Risk in the South African Sectional Title Industry: An Assurance Perspective

Authors: Leandi Steenkamp

Abstract:

The sectional title industry has been a part of the property landscape in South Africa for almost half a century, and plays a significant role in addressing the housing problem in the country. Stakeholders such as owners and investors in sectional title property are in most cases not directly involved in the management thereof, and place reliance on the audited annual financial statements of bodies corporate for decision-making purposes. Although the industry seems to be highly regulated, the legislation regarding accounting and auditing of sectional title is vague and ambiguous. Furthermore, there are no industry-specific auditing and accounting standards to guide accounting and auditing practitioners in performing their work and industry financial benchmarks are not readily available. In addition, financial pressure on sectional title schemes is often very high due to the fact that some owners exercise unrealistic pressure to keep monthly levies as low as possible. All these factors have an impact on the business risk as well as audit risk of bodies corporate. Very little academic research has been undertaken on the sectional title industry in South Africa from an accounting and auditing perspective. The aim of this paper is threefold: Firstly, to discuss the findings of a literature review on uncertainties, ambiguity and confusing aspects in current legislation regarding the audit of a sectional title property that may cause or increase audit and business risk. Secondly, empirical findings of risk-related aspects from the results of interviews with three groups of body corporate role-players will be discussed. The role-players were body corporate trustee chairpersons, body corporate managing agents and accounting and auditing practitioners of bodies corporate. Specific reference will be made to business risk and audit risk. Thirdly, practical recommendations will be made on possibilities of closing the audit expectation gap, and further research opportunities in this regard will be discussed.

Keywords: assurance, audit, audit risk, body corporate, corporate governance, sectional title

Procedia PDF Downloads 238
972 A New Approach – A Numerical Assessment of Ground Strata Failure Potentials in Underground Mines

Authors: Omer Yeni

Abstract:

Ground strata failure, or fall of ground, is one of the most prominent catastrophic risks in underground mines. Mining companies use various methods and techniques to prevent and critically control the associated risks. Some of these are safety by design, excavation methods, ground support, training, and competency, all of which require quality control and assurance activities to confirm their efficiency and performance and to identify improvement opportunities through monitoring. However, many mining companies use quality control (QC) methods without quality assurance (QA) and habitually call the combination QA/QC. By simple definition, QC is a method of detecting defects, and QA is a method of preventing defects. Proper QA/QC is not about testing the final products at the end of the production line, but about testing every component before assembly and the final product once completed. The installed ground support elements are some of the final products mining companies use to prevent ground strata failure. Testing the final product (i.e., rock bolt pull testing, shotcrete strength testing, etc.) with QC methods only, once those areas are already accessible, is no different from testing an airplane full of passengers right after the production line or testing a car after the sale. Can QC methods alone be called QA/QC? Can QA/QC activities be numerically scored for each critical control implemented to assess ground strata failure potential? Can numerical scores be used to derive a Geotechnical Risk Rating (GRR) to determine the ground strata failure risk and its probability? This paper sets out to provide a specific QA/QC methodology to manage and confirm the efficiency and performance of the implemented critical controls, and a numerical approach, through the Geotechnical Risk Rating (GRR) process, to assess ground strata failure risk and identify the gaps where proactive action is required, in order to evaluate the probability of ground strata failures in underground mines.

Keywords: fall of ground, ground strata failure, QA/QC, underground

Procedia PDF Downloads 38
971 Emergency Physician Performance for Hydronephrosis Diagnosis and Grading Compared with Radiologist Assessment in Renal Colic: The EPHyDRA Study

Authors: Sameer A. Pathan, Biswadev Mitra, Salman Mirza, Umais Momin, Zahoor Ahmed, Lubna G. Andraous, Dharmesh Shukla, Mohammed Y. Shariff, Magid M. Makki, Tinsy T. George, Saad S. Khan, Stephen H. Thomas, Peter A. Cameron

Abstract:

Study objective: Emergency physicians' (EPs) ability to identify hydronephrosis on point-of-care ultrasound (POCUS) has been assessed in the past using the CT scan as the reference standard. We aimed to assess EP interpretation of POCUS to identify and grade hydronephrosis in a direct comparison with the consensus interpretation of POCUS by radiologists, and also to compare EP and radiologist performance using the CT scan as the criterion standard. Methods: Using data from a POCUS databank, a prospective interpretation study was conducted at an urban academic emergency department. All POCUS exams were performed on patients presenting to the ED with renal colic. Institutional approval was obtained for conducting this study. All analyses were performed using Stata MP 14.0 (StataCorp, College Station, Texas). Results: A total of 651 patients were included, with paired sets of renal POCUS video clips and CT scans performed at the same ED visit. Hydronephrosis was reported in 69.6% of POCUS exams by radiologists and 72.7% of CT scans (p=0.22). The κ for the consensus interpretation of POCUS between the radiologists to detect hydronephrosis was 0.77 (0.72 to 0.82), and the weighted κ for grading the hydronephrosis was 0.82 (0.72 to 0.90), interpreted as good to very good. Using CT scan findings as the criterion standard, EPs had an overall sensitivity of 81.1% (95% CI: 79.6% to 82.5%), specificity of 59.4% (95% CI: 56.4% to 62.5%), PPV of 84.3% (95% CI: 82.9% to 85.7%), and NPV of 53.8% (95% CI: 50.8% to 56.7%), compared to a radiologist sensitivity of 85.0% (95% CI: 82.5% to 87.2%), specificity of 79.7% (95% CI: 75.1% to 83.7%), PPV of 91.8% (95% CI: 89.8% to 93.5%), and NPV of 66.5% (95% CI: 61.8% to 71.0%). Testing for a report of a moderate or high degree of hydronephrosis, the specificity of EPs was 94.6% (95% CI: 93.7% to 95.4%), rising to 99.2% (95% CI: 98.9% to 99.5%) for identifying severe hydronephrosis alone. Conclusion: EP POCUS interpretations were comparable to those of the radiologists for identifying moderate to severe hydronephrosis using CT scan results as the criterion standard. Among patients with a moderate or high pre-test probability of ureteric calculi, as calculated by the STONE score, the presence of moderate to severe (+LR 6.3 and -LR 0.69) or severe hydronephrosis (+LR 54.4 and -LR 0.57) was highly diagnostic of stone disease. Low-dose CT is indicated in such patients for evaluation of stone size and location.
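
The diagnostic-accuracy arithmetic reported above (sensitivity, specificity, predictive values, likelihood ratios) follows from a 2x2 table, as in the sketch below; the counts are hypothetical, not the study data.

# Minimal sketch of diagnostic-accuracy metrics from a 2x2 table (placeholder counts).
tp, fp, fn, tn = 400, 70, 95, 105     # POCUS positive/negative vs CT reference (hypothetical)

sens = tp / (tp + fn)
spec = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
lr_pos = sens / (1 - spec)
lr_neg = (1 - sens) / spec
print(f"sens={sens:.1%} spec={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%} "
      f"+LR={lr_pos:.1f} -LR={lr_neg:.2f}")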

Keywords: renal colic, point-of-care, ultrasound, bedside, emergency physician

Procedia PDF Downloads 254
970 Influence of Glass Plates Different Boundary Conditions on Human Impact Resistance

Authors: Alberto Sanchidrián, José A. Parra, Jesús Alonso, Julián Pecharromán, Antonia Pacios, Consuelo Huerta

Abstract:

Glass is a commonly used material in buildings; there is no unique design solution, as plates with different numbers of layers and interlayers may be used. In most façades, safety glazing has to be used according to its performance in the pendulum impact test. The European standard EN 12600 establishes an impact test procedure, for classification from the point of view of human safety, of flat plates of different thicknesses, using a pendulum of two tires and 50 kg mass that impacts the plate from different heights. However, this test does not replicate the actual dimensions and boundary conditions used in building configurations, so the real stress distribution is not determined by this test. The influence of different boundary conditions, such as the ones employed on construction sites, is not well taken into account when testing the behaviour of safety glazing, and there is no detailed procedure and criterion to determine the glass resistance against human impact. To reproduce the actual boundary conditions on site, when needed, the pendulum test is arranged to be used 'in situ', with no control of the load or the stiffness and without a standard procedure. The fracture stress of small and large glass plates fits a Weibull distribution with quite a large dispersion, so conservative values are adopted for the admissible fracture stress under static loads. In fact, tests performed for human impact give a fracture strength two or three times higher, and many times without total fracture of the glass plate. Newer standards, such as DIN 18008-4, allow an admissible fracture stress 2.5 times higher than the one used for static and wind loads. Two working areas are now open: a) to define a standard for the 'in situ' test; b) to prepare a laboratory procedure that allows testing with a more realistic stress distribution. To work on both research lines, a laboratory that allows testing of medium-size specimens with different boundary conditions has been developed. A special steel frame allows reproducing the stiffness of the glass support substructure, including a rigid condition used as reference. The dynamic behaviour of the glass plate and its support substructure has been characterized with finite element models updated with modal test results. In addition, a new portable impact machine is being used to achieve sufficient force and direction control during the impact test. An impact of 100 J is used. To avoid problems with broken glass plates, the tests have been done using an aluminium plate of 1000 mm x 700 mm size and 10 mm thickness supported on four sides; three different substructure stiffness conditions are used. A detailed control of the dynamic stiffness and the behaviour of the plate is carried out with modal tests. The repeatability of the test and the reproducibility of the results prove that a procedure to control both the stiffness of the plate and the impact level is necessary.
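
A minimal sketch of fitting a two-parameter Weibull distribution to fracture stresses, as referenced above, is given below; the stress sample is synthetic, not measured data.

# Minimal sketch: fitting a two-parameter Weibull distribution to glass fracture stresses
# and reading off a conservative design value. The data are synthetic placeholders.
import numpy as np
from scipy.stats import weibull_min

fracture_stress = weibull_min.rvs(c=5.0, scale=80.0, size=30, random_state=0)   # MPa, synthetic

shape, loc, scale = weibull_min.fit(fracture_stress, floc=0)   # location fixed at zero
p5 = weibull_min.ppf(0.05, shape, loc=0, scale=scale)          # e.g., a 5% fractile design value
print(f"Weibull modulus = {shape:.1f}, scale = {scale:.1f} MPa, 5% fractile = {p5:.1f} MPa")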

Keywords: glass plates, human impact test, modal test, plate boundary conditions

Procedia PDF Downloads 281
969 The Leadership Criterion: Challenges in Pursuing Excellence in the Jordanian Public Sector

Authors: Shaker Aladwan, Paul Forrester

Abstract:

This paper explores the challenges that face leaders when implementing business excellence programmes in the Jordanian public sector. The study adopted a content analysis approach to analyse the excellence assessment reports produced by the King Abdullah II Centre for Excellence (KACE). The sample comprises ten public organisations which have participated in the King Abdullah Award for Excellence (KAA) more than once and acknowledge in their reports that they have failed to achieve satisfactory results. The key challenges to the implementation of the leadership criterion in the public sector in Jordan were found to be poor strategic planning, lack of employee empowerment, weaknesses in benchmarking performance, a lack of financial resources, poor integration and coordination, and poor measurement systems. This study proposes a conceptual model for the assessment of the challenges that face managers when seeking to implement excellence in leadership in the Jordanian public sector. Theoretically, this paper fills context gaps in the excellence literature in general, and in organisational excellence in the public sector in particular. Leadership challenges in the public sector are widely studied, but it is important to gain a better understanding of how these challenges can be overcome. In comparison with many existing studies, this research provides specific and detailed insights into these organisational excellence challenges in the public sector and offers a conceptual model for use by other researchers in the future.

Keywords: leadership criterion, organisational excellence, challenges, quality awards, public sector, Jordan

Procedia PDF Downloads 362
968 A Study of Two Disease Models: With and Without Incubation Period

Authors: H. C. Chinwenyi, H. D. Ibrahim, J. O. Adekunle

Abstract:

The incubation period is defined as the time from infection with a microorganism to the development of symptoms. In this research, two disease models, one with an incubation period and another without, were studied. The study involves the use of a mathematical model with a single incubation period. Tests for the existence and stability of the disease-free and endemic equilibrium states of both models were carried out. The fourth-order Runge-Kutta method was used to solve both models numerically. Finally, a computer program in MATLAB was developed to run the numerical experiments. From the results, we are able to show that the endemic equilibrium state of the model with an incubation period is locally asymptotically stable, whereas the endemic equilibrium state of the model without an incubation period is unstable under certain conditions on the given model parameters. It was also established that the disease-free equilibrium states of the models with and without an incubation period are locally asymptotically stable. Furthermore, results from numerical experiments using empirical data obtained from the Nigeria Centre for Disease Control (NCDC) showed that the overall population of infected people for the model with an incubation period is higher than that for the model without an incubation period. We also established from the results obtained that, as the transmission rate from the susceptible to the infected population increases, the peak values of the infected population for the model with an incubation period decrease and are always less than those for the model without an incubation period.
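
The sketch below contrasts an SIR model (no incubation period) with an SEIR model (with incubation period) using a fourth-order Runge-Kutta integrator and reports the peak infected fraction of each; the parameters are hypothetical, not the NCDC-calibrated values.

# Minimal sketch (hypothetical parameters): SIR (no incubation) vs SEIR (with incubation),
# both integrated with a fourth-order Runge-Kutta scheme.
import numpy as np

beta, gamma, sigma = 0.4, 0.1, 0.2     # transmission, recovery, 1/incubation (per day)

def sir(t, y):
    s, i, r = y
    return np.array([-beta * s * i, beta * s * i - gamma * i, gamma * i])

def seir(t, y):
    s, e, i, r = y
    return np.array([-beta * s * i, beta * s * i - sigma * e, sigma * e - gamma * i, gamma * i])

def rk4(f, y0, t_end=300.0, dt=0.1):
    y, t, ys = np.array(y0, dtype=float), 0.0, [np.array(y0, dtype=float)]
    while t < t_end:
        k1 = f(t, y); k2 = f(t + dt/2, y + dt/2*k1)
        k3 = f(t + dt/2, y + dt/2*k2); k4 = f(t + dt, y + dt*k3)
        y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4); t += dt
        ys.append(y.copy())
    return np.array(ys)

peak_sir = rk4(sir, [0.999, 0.001, 0.0])[:, 1].max()
peak_seir = rk4(seir, [0.999, 0.0, 0.001, 0.0])[:, 2].max()
print(f"peak infected fraction: SIR = {peak_sir:.3f}, SEIR = {peak_seir:.3f}")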

Keywords: asymptotic stability, Hartman-Grobman stability criterion, incubation period, Routh-Hurwitz criterion, Runge-Kutta method

Procedia PDF Downloads 148
967 The Examination and Assurance of the Microbiological Safety Pertaining to Raw Milk and Its Derived Processed Products

Authors: Raana Babadi Fathipour

Abstract:

Dairy production holds significant importance in the sustenance of billions of individuals worldwide, who rely on milk and its derived products for daily consumption. In addition to being a source of essential nutrients crucial for human well-being, such as proteins, fats, vitamins, and minerals, dairy items are witnessing increasing demand worldwide. Among all the factors contributing to the quality and safety assurance of dairy products, a strong focus lies on maintaining high standards in raw milk procurement. Raw milk serves as an exceptionally nutritious medium for various microorganisms due to its inherent properties. This poses a considerable challenge for the dairy industry in ensuring that microbial contamination is minimized throughout every stage of the value chain. Despite the implementation of diverse process technologies, both conventional and innovative, microbial spoilage still results in substantial losses within this industry. Moreover, milk and dairy products have been associated with numerous cases of foodborne illness across the globe. Pathogens such as Salmonella serovars, Campylobacter spp., Shiga toxin-producing Escherichia coli, Listeria monocytogenes, and enterotoxin-producing Staphylococcus aureus are commonly identified as the culprits behind these outbreaks in the dairy industry. The effective management of food safety within this sector necessitates a proactive, risk-based approach to reform. However, this strategy presents difficulties for developing nations, where informal value chains dominate the dairy sector. Whether operating on a small or large scale, and whether within the formal or informal realm, it is imperative that the dairy industry adheres to the principles of good hygiene practices and good manufacturing practices. Additionally, identifying and managing potential sources of contamination is crucial in mitigating quality and safety challenges.

Keywords: dairy value chain, microbial contamination, food safety, hygiene

Procedia PDF Downloads 41
966 Improving Quality of Family Planning Services in Pakistan

Authors: Mohammad Zakir, Saamia Shams

Abstract:

Background: The provision of quality family planning services contributes remarkably towards increased uptake of modern contraceptive methods and has important implications for reducing fertility rates. The quality of care in family planning has a beneficial impact on the reproductive health of women, yet little empirical evidence exists on the relationship between adequate training of Community Midwives (CMWs) and quality family planning services. Aim: This study aimed to enhance the knowledge and counseling skills of CMWs in order to improve access to quality client-centered family planning services in Pakistan. Methodology: A quasi-experimental longitudinal study using an Initial Quality Assurance Scores-Training-Post-Training Quality Assurance Scores design with a non-equivalent control group was adopted to compare a set of experimental CMWs, who received a four-day training package covering family planning methods, counselling, communication skills, and practical training on IUCD insertion, with a set of comparison CMWs who did not receive any intervention. A sample of 100 CMWs from Suraj Social Franchise (SSF) private providers was recruited from both urban and rural Pakistan. Results: A significant improvement in the family planning knowledge and counseling skills of the CMWs (p < 0.001) was evident in the experimental group, whereas the comparison group showed no significant change (p > 0.05). A non-significant association between pre-test family planning knowledge and counseling skills was observed in both groups (p > 0.05). Conclusion: The findings demonstrate that adequate training is an important determinant of the quality of family planning services received by clients. Provider-level training increases the likelihood of contraceptive uptake and decreases the likelihood of both unintended and unwanted pregnancies. Enhancing the quality of family planning services may significantly help reduce fertility and improve the reproductive health indicators of women in Pakistan.

Keywords: community mid wives, family planning services, quality of care, training

Procedia PDF Downloads 309
965 Scour Damage Detection of Bridge Piers Using Vibration Analysis - Numerical Study of a Bridge

Authors: Solaine Hachem, Frédéric Bourquin, Dominique Siegert

Abstract:

The brutal collapse of bridges is mainly due to scour. Indeed, soil erosion in the riverbed around a pier modifies the embedding conditions of the structure, reduces its overall stiffness, and threatens its stability. Hence, finding an efficient technique that allows early scour detection becomes mandatory. Vibration analysis is an indirect method for scour detection that relies on real-time monitoring of the bridge. It tends to indicate the presence of scour based on its consequences for the stability of the structure and its dynamic response. Most of the research in this field has focused on the dynamic behavior of a single pile and has examined the depth of the scour. In this paper, a bridge is fully modeled with all piles and spans, and the scour is represented by a reduction in the foundation stiffnesses. This work aims to identify the vibration modes sensitive to the loss of rigidity in the foundations so that their variations can be used as a scour indicator: the decrease in soil-structure interaction rigidity leads to a decrease in the natural frequency values. By using the first-order perturbation method, the expression of the sensitivity, which depends only on the selected vibration modes, is established to determine the deficiency of the foundation stiffnesses. The solutions are obtained by using the singular value decomposition method for the regularization of the inverse problem. The propagation of uncertainties is also calculated to verify the efficiency of the inverse problem method. Numerical simulations describing different scour scenarios are investigated on a simplified model of a real composite steel-concrete bridge located in France. The results of the modal analysis show that the modes corresponding to in-plane and out-of-plane pier vibrations are sensitive to the loss of foundation stiffness, while the deck bending modes are not affected by this damage.
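
The sketch below reproduces the abstract's idea on a toy 3-DOF model: first-order eigenvalue sensitivities to two "foundation" springs are assembled, and the stiffness deficiencies are recovered from eigenvalue shifts with an SVD-based pseudo-inverse. All stiffness and mass values are placeholders, not the bridge model.

# Minimal sketch of first-order frequency sensitivity and SVD-regularized stiffness updating
# on a toy 3-DOF system (placeholder values only).
import numpy as np
from scipy.linalg import eigh

def K_of(k1, k2):
    k3 = 2.0e6                                        # superstructure spring (kept fixed)
    return np.array([[k1 + k3, -k3, 0.0],
                     [-k3, k2 + 2*k3, -k3],
                     [0.0, -k3, k3]])

M = np.diag([1000.0, 1500.0, 800.0])
k1_0, k2_0 = 5.0e6, 5.0e6
lam0, phi = eigh(K_of(k1_0, k2_0), M)                 # baseline eigenvalues, mass-normalized modes

# sensitivity of eigenvalue i to spring j: phi_i^T (dK/dk_j) phi_i
dK1 = np.zeros((3, 3)); dK1[0, 0] = 1.0
dK2 = np.zeros((3, 3)); dK2[1, 1] = 1.0
S = np.column_stack([[phi[:, i] @ dK @ phi[:, i] for i in range(3)] for dK in (dK1, dK2)])

# "measured" shifts caused by a 10% loss in spring 1 and a 5% loss in spring 2 (scour scenario)
lam_dam, _ = eigh(K_of(0.9*k1_0, 0.95*k2_0), M)
dlam = lam_dam - lam0

dk = np.linalg.pinv(S) @ dlam                         # SVD-based least squares (first-order estimate)
print("estimated stiffness changes:", dk, "true:", [-0.1*k1_0, -0.05*k2_0])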

Keywords: bridge’s piers, inverse problems, modal sensitivity, scour detection, vibration analysis

Procedia PDF Downloads 68
964 Quality Assurance Comparison of Map Check 2, Epid, and Gafchromic® EBT3 Film for IMRT Treatment Planning

Authors: Khalid Iqbal, Saima Altaf, M. Akram, Muhammad Abdur Rafaye, Saeed Ahmad Buzdar

Abstract:

Objective: Verification of patient-specific intensity-modulated radiation therapy (IMRT) plans using different 2-D detectors has become increasingly popular due to their ease of use and immediate readout of results. The purpose of this study was to test and compare various 2-D detectors for dosimetric quality assurance (QA) of IMRT, with the aim of finding alternative QA methods. Material and Methods: Twenty IMRT patients (12 brain and 8 prostate) were planned on the Eclipse treatment planning system for a Varian Clinac DHX at both energies, 6 MV and 15 MV. Verification plans for all patients were also made and delivered to MapCheck 2, EPID (electronic portal imaging device), and Gafchromic EBT3 film. Gamma index analyses were performed using different criteria to evaluate and compare the dosimetric results. Results: Statistical analysis shows passing rates of 99.55%, 97.23%, and 92.9% for 6 MV and 99.53%, 98.3%, and 94.85% for 15 MV using criteria of ±5%/3 mm, ±3%/3 mm, and ±3%/2 mm, respectively, for the brain cases, whereas using the ±5%/3 mm and ±3%/3 mm gamma evaluation criteria, the passing rates are 94.55% and 90.45% for 6 MV and 95.25% and 95% for 15 MV for the prostate cases using EBT3 film. MapCheck 2 results show passing rates of 98.17%, 97.68%, and 86.78% for 6 MV and 94.87%, 97.46%, and 88.31% for 15 MV for the brain cases using criteria of ±5%/3 mm, ±3%/3 mm, and ±3%/2 mm, whereas the ±5%/3 mm and ±3%/3 mm gamma evaluation criteria give passing rates of 97.7% and 96.4% for 6 MV and 98.75% and 98.05% for 15 MV for the prostate cases. EPID gamma analysis at 6 MV shows passing rates of 99.56%, 98.63%, and 98.4% for the brain cases and 100% and 99.9% for the prostate cases, using the same criteria as for MapCheck 2 and EBT3 film. Conclusion: The results demonstrate that excellent passing rates were obtained for all dosimeters when compared with the planar dose distributions for 6 MV as well as 15 MV IMRT fields. The EPID results are better than those of EBT3 film and MapCheck 2; it is likely that part of this difference is real, and part is due to film handling and differences in treatment setup verification, which contribute to dose distribution differences. Overall, all three dosimeters exhibit results within limits according to AAPM Report 120.
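
A minimal sketch of a global gamma-index comparison on 1-D dose profiles is given below to illustrate the criterion used above (the study's analysis is 2-D and uses commercial tools); the profiles and the 3%/3 mm settings are placeholders.

# Minimal sketch of a global gamma-index evaluation on 1-D dose profiles (placeholder data).
import numpy as np

x = np.linspace(0.0, 100.0, 201)                       # position, mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)                # reference (planned) profile
meas = 1.02 * np.exp(-((x - 51.0) / 20.0) ** 2)        # measured profile, slightly shifted/scaled

dd, dta = 0.03, 3.0                                    # 3% global dose difference, 3 mm DTA
d_norm = dd * ref.max()

gamma = np.empty_like(x)
for i in range(x.size):
    dist2 = ((x - x[i]) / dta) ** 2
    dose2 = ((meas - ref[i]) / d_norm) ** 2
    gamma[i] = np.sqrt((dist2 + dose2).min())          # search over all evaluated points
pass_rate = 100.0 * np.mean(gamma <= 1.0)
print(f"gamma passing rate = {pass_rate:.1f}%")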

Keywords: gafchromic EBT3, radiochromic film dosimetry, IMRT verification, EPID

Procedia PDF Downloads 400