Search results for: interferences and analytical errors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3316

2716 Analytical and Numerical Investigation of Friction-Restricted Growth and Buckling of Elastic Fibers

Authors: Peter L. Varkonyi, Andras A. Sipos

Abstract:

The quasi-static growth of elastic fibers is studied in the presence of distributed contact with an immobile surface, subject to isotropic dry or viscous friction. Unlike classical problems of elastic stability modelled by autonomous dynamical systems with multiple time scales (slowly varying bifurcation parameter, and fast system dynamics), this problem can only be formulated as a non-autonomous system without time scale separation. It is found that the fibers initially converge to a trivial, straight configuration, which is later replaced by divergence reminiscent of buckling phenomena. In order to capture the loss of stability, a new definition of exponential stability against infinitesimal perturbations for systems defined over finite time intervals is developed. A semi-analytical method for the determination of the critical length based on eigenvalue analysis is proposed. The post-critical behavior of the fibers is studied numerically by using variational methods. The emerging post-critical shapes and the asymptotic behavior as length goes to infinity are identified for simple spatial distributions of growth. Comparison with physical experiments indicates reasonable accuracy of the theoretical model. Some applications from modeling plant root growth to the design of soft manipulators in robotics are briefly discussed.

Keywords: buckling, elastica, friction, growth

Procedia PDF Downloads 190
2715 A Higher Order Shear and Normal Deformation Theory for Functionally Graded Sandwich Beam

Authors: R. Bennai, H. Ait Atmane, Jr., A. Tounsi

Abstract:

In this work, a new analytical approach using a refined hyperbolic shear deformation beam theory was developed to study the free vibration of functionally graded sandwich beams under different boundary conditions. The effects of transverse shear strains and of transverse normal deformation are considered. The constituent materials of the beam are assumed to vary gradually through the height direction according to a simple power-law distribution in terms of the volume fractions of the constituents; the two constituent materials considered are metal and ceramic. The core layer is taken as homogeneous and made of an isotropic material, while the face layers consist of FGM material whose volume fraction varies relative to the middle layer. The equations of motion are obtained by the energy minimization principle. Analytical solutions for free vibration and buckling are obtained for sandwich beams under different support conditions; these conditions are taken into account by incorporating new shape functions. Finally, illustrative examples are presented to show the effects of varying different parameters (material gradation, the thickness-stretching effect, boundary conditions, and thickness-to-length ratio) on the free vibration and buckling of FGM sandwich beams.

Keywords: functionally graded sandwich beam, refined shear deformation theory, stretching effect, free vibration

Procedia PDF Downloads 246
2714 Heat Transfer and Entropy Generation in a Partial Porous Channel Using LTNE and Exothermicity/Endothermicity Features

Authors: Mohsen Torabi, Nader Karimi, Kaili Zhang

Abstract:

This work aims to provide a comprehensive study of the heat transfer and entropy generation rates in a horizontal channel partially filled with a porous medium which experiences internal heat generation or consumption due to an exothermic or endothermic chemical reaction. The focus is on the local thermal non-equilibrium (LTNE) model. The LTNE approach delivers more accurate data on the temperature distribution within the system and accordingly provides more accurate Nusselt numbers and entropy generation rates. The Darcy-Brinkman model is used for the momentum equations, and constant heat flux boundary conditions are assumed at both the upper and lower surfaces. Analytical solutions are provided for both the velocity and temperature fields. By incorporating the resulting velocity and temperature formulas into the fundamental equations for entropy generation, both local and total entropy generation rates are plotted for a number of cases. Bifurcation phenomena regarding the temperature distribution and the interface heat flux ratio are observed. It is found that the exothermicity or endothermicity characteristic of the channel has a considerable impact on the temperature fields and entropy generation rates.
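
For reference, a commonly used form of the volumetric entropy generation rate under LTNE in a fluid-saturated porous medium is given below. This is an assumed general form; the paper's exact expression, tied to its Darcy-Brinkman solution, may differ.

```latex
\[
S_{\mathrm{gen}}''' \;=\;
\underbrace{\frac{k_{e,f}\,\lvert\nabla T_f\rvert^{2}}{T_f^{2}}}_{\text{fluid conduction}}
+ \underbrace{\frac{k_{e,s}\,\lvert\nabla T_s\rvert^{2}}{T_s^{2}}}_{\text{solid conduction}}
+ \underbrace{\frac{h_{sf}\,a_{sf}\,\left(T_s - T_f\right)^{2}}{T_s\,T_f}}_{\text{interphase heat exchange}}
+ \underbrace{\frac{\mu}{T_f}\left(\frac{u^{2}}{\kappa} + \Phi\right)}_{\text{viscous dissipation}}
\]
```

Here \(T_f\) and \(T_s\) are the fluid and solid phase temperatures, \(h_{sf}a_{sf}\) is the interphase heat transfer per unit volume, \(\kappa\) the permeability, and \(\Phi\) the Brinkman dissipation function; the interphase term vanishes under local thermal equilibrium.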

Keywords: entropy generation, exothermicity or endothermicity, forced convection, local thermal non-equilibrium, analytical modelling

Procedia PDF Downloads 415
2713 The Exercise of Deliberative Democracy in Public Administration Agencies' Decisions

Authors: Mauricio Filho, Carina Castro

Abstract:

The objective of this project is to analyze long-serving public agents who have worked under several governments and find themselves having to deliberate with new agents recently installed in the public administration. For theoretical purposes, internal deliberation is understood as that practiced within public administration agencies, without any direct participation of the general public in the process. The assumption is that agents with longer periods of public service tend to step away from the momentary political discussions that guide the current administration and concentrate instead on institutionalized routines and procedures, leading the individuals most politically aligned with the current government to deliberate with less "passion" and more exchange of knowledge and information. The theoretical framework of this research is institutionalism, guided by a pragmatic view attentive to the fluidity of reality and to the multiple relations between agents and their respective institutions. The critical aspirations of this project rest on the works of professors Cass Sunstein, Adrian Vermeule, and Philip Pettit, and on literature from both institutional theory and the economic analysis of law, greatly influenced by the Chicago Law School. Methodologically, the paper is a theoretical review intended to be extended, at a later stage, into empirical tests for verification. Its main analytical tools are drawn from theoretical and doctrinal areas of the juridical sciences, adopting the deductive and analytical method.

Keywords: institutions, state, law, agencies

Procedia PDF Downloads 264
2712 Statistical Correlation between Logging-While-Drilling Measurements and Wireline Caliper Logs

Authors: Rima T. Alfaraj, Murtadha J. Al Tammar, Khaqan Khan, Khalid M. Alruwaili

Abstract:

OBJECTIVE/SCOPE: Caliper logging data provides critical information about wellbore shape and deformations, such as stress-induced borehole breakouts or washouts. Multiarm mechanical caliper logs are often run using wireline, which can be time-consuming, costly, and/or challenging to run in certain formations. To minimize rig time and improve operational safety, it is valuable to develop analytical solutions that can estimate caliper logs using available Logging-While-Drilling (LWD) data without the need to run wireline caliper logs. As a first step, the objective of this paper is to perform statistical analysis using an extensive dataset to identify important physical parameters that should be considered in developing such analytical solutions. METHODS, PROCEDURES, PROCESS: Caliper logs and LWD data of eleven wells, with a total of more than 80,000 data points, were obtained and imported into a data analytics software for analysis. Several parameters were selected to test their relationship with the measured maximum and minimum caliper logs. These parameters include gamma ray, porosity, shear and compressional sonic velocities, bulk density, and azimuthal density. The data of the eleven wells were first visualized and cleaned. Using the analytics software, several analyses were then performed, including the computation of Pearson's correlation coefficients to show the statistical relationship between the selected parameters and the caliper logs. RESULTS, OBSERVATIONS, CONCLUSIONS: The results of this statistical analysis showed that some parameters correlate well with the caliper log data. For instance, the bulk density and azimuthal directional densities showed Pearson's correlation coefficients in the range of 0.39 to 0.57, which were relatively high when compared to the correlation coefficients of caliper data with other parameters. Other parameters, such as porosity, exhibited extremely low correlation coefficients with the caliper data. Various crossplots and visualizations of the data are also presented to gain further insights from the field data. NOVEL/ADDITIVE INFORMATION: This study offers a unique and novel look into the relative importance of, and correlation between, different LWD measurements and wireline caliper logs via an extensive dataset. The results pave the way for a more informed development of new analytical solutions for estimating the size and shape of the wellbore in real time while drilling using LWD data.
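
For illustration, a minimal sketch of the screening step described above, assuming hypothetical column and file names for the LWD and caliper logs (the actual dataset is proprietary):

```python
import pandas as pd

# Hypothetical input: one row per depth sample, LWD logs plus measured calipers.
df = pd.read_csv("lwd_caliper_data.csv")

lwd_params = ["gamma_ray", "porosity", "vp", "vs",
              "bulk_density", "azimuthal_density"]

# Pearson correlation of each LWD parameter against the min/max caliper logs,
# mirroring the statistical screening described in the abstract.
for target in ["caliper_max", "caliper_min"]:
    corr = df[lwd_params].corrwith(df[target])
    print(f"Pearson r vs {target}:")
    print(corr.sort_values(ascending=False), end="\n\n")
```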

Keywords: LWD measurements, caliper log, correlations, analysis

Procedia PDF Downloads 121
2711 Machine Learning Algorithms for Rocket Propulsion

Authors: Rômulo Eustáquio Martins de Souza, Paulo Alexandre Rodrigues de Vasconcelos Figueiredo

Abstract:

In recent years, there has been a surge of interest in applying artificial intelligence techniques, particularly machine learning algorithms. Machine learning is a data-analysis technique that automates the creation of analytical models, making it especially useful for modelling complex systems. As a result, this technology helps reduce human intervention while producing accurate results. The methodology is also extensively used in aerospace engineering, a field that encompasses several high-complexity operations, such as rocket propulsion. Rocket propulsion is a high-risk operation in which engine failure could result in the loss of life. It is therefore critical to use computational methods capable of precisely representing the spacecraft's analytical model to guarantee its safety and operation. This paper describes the use of machine learning algorithms for rocket propulsion to show that the technique is an efficient way to deal with challenging and restrictive aerospace engineering activities. The paper focuses on three machine-learning-aided rocket propulsion applications: set-point control of an expander-bleed rocket engine, supersonic retro-propulsion of a small-scale rocket, and leak detection and isolation on rocket engine data. The data-driven methods used for each implementation are described in depth, and the obtained results are presented.

Keywords: data analysis, modeling, machine learning, aerospace, rocket propulsion

Procedia PDF Downloads 115
2710 Passive Aeration of Wastewater: Analytical Model

Authors: Ayman M. El-Zahaby, Ahmed S. El-Gendy

Abstract:

Aeration of wastewater is essential for the proper operation of aerobic treatment units, where the influent wastewater normally has zero dissolved oxygen. This is due to the need of the aerobic microorganisms for oxygen to grow and survive. Typical aeration units for wastewater treatment, such as mechanical aerators or diffused aerators, require electric energy for their operation. Passive units are units that operate without the need for electric energy, such as cascade aerators, spray aerators, and tray aerators. In contrast to cascade aerators and spray aerators, tray aerators require a much smaller area footprint for their installation, as the treatment stages are arranged vertically. To the best of the authors' knowledge, the design of tray aerators for the aeration purpose has not been presented in the literature. The current research concerns an analytical study of the design of tray aerators for the purpose of increasing the dissolved oxygen in wastewater treatment systems, including an investigation of different design parameters and their impact on the aeration efficiency. The studied aerator acts as an intermediate stage between an anaerobic primary treatment unit and an aerobic treatment unit in small-scale treatment systems. Different free-falling flow regimes were investigated, and the thresholds for transition between regimes were obtained from the literature. The study focused on the jetting flow regime between trays. Starting from the two-film theory, an equation was derived that relates the dissolved oxygen concentration effluent from the system to the flow rate, number of trays, tray area, spacing between trays, number and diameter of holes, and the water temperature. A MATLAB® model was developed for the derived equation. The expected aeration efficiency under different tray configurations and operating conditions is illustrated by running the model with varying design parameters, and the impact of each parameter is shown. The overall system efficiency was found to increase with decreasing hole diameter. On the other hand, increasing the number of trays, tray area, flow rate per hole, or tray spacing had a positive effect on the system efficiency.
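
As an illustration of the tray-by-tray mass-transfer reasoning, here is a minimal sketch assuming each fall reduces the oxygen deficit by a constant two-film factor. The lumped parameters kla and t_contact are hypothetical stand-ins for the paper's dependence on hole diameter, tray spacing, and flow rate per hole:

```python
import math

def tray_aerator_effluent_do(c_in, c_sat, n_trays, kla, t_contact):
    """Effluent DO after n_trays, first-order two-film transfer per fall.

    A minimal sketch, assuming each tray-to-tray fall reduces the oxygen
    deficit (c_sat - c) by the same factor r = exp(-kla * t_contact).
    """
    r = math.exp(-kla * t_contact)              # deficit ratio per fall
    deficit_out = (c_sat - c_in) * r ** n_trays
    return c_sat - deficit_out

# Example: anaerobic effluent (~0 mg/L DO), saturation ~8.2 mg/L at 25 C
print(tray_aerator_effluent_do(c_in=0.0, c_sat=8.2, n_trays=5,
                               kla=0.9, t_contact=0.5))   # ~7.3 mg/L
```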

Keywords: aeration, analytical, passive, wastewater

Procedia PDF Downloads 209
2709 Structural Performance of Composite Steel and Concrete Beams

Authors: Jakub Bartus

Abstract:

In general, composite steel and concrete structures present an effective structural solution utilizing the full potential of both materials. As they have numerous advantages on the construction side, they can greatly reduce the overall cost of construction, which has been the main objective of the last decade, highlighted by the current economic and social crisis. The study presents not only an analysis of the behaviour of composite beams with web openings but also emphasizes the influence of these openings on the total strain distribution at the level of the steel bottom flange. The major investigation was focused on the change in structural performance with respect to various layouts of openings. In examining this structural modification, an improvement of the load carrying capacity of composite beams was a prime objective. The study is divided into analytical and numerical parts. The analytical part served as an initial step in the design process of the composite beam samples, in which optimal dimensions and specific levels of utilization in individual stress states were taken into account. The numerical part covered the description of the structural issue in the form of a finite element (FE) model using strut and shell elements and accounting for material non-linearities. As an outcome, a number of conclusions were drawn describing and explaining the effect of web opening presence on the structural performance of composite beams.

Keywords: composite beam, web opening, steel flange, total strain, finite element analysis

Procedia PDF Downloads 68
2708 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data

Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin

Abstract:

The high-resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e., individual anomalies) or as clusters (i.e., colonies of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools due to limitations of the tools and the associated sizing algorithms, and the detection threshold of the tools (i.e., the minimum detectable feature dimension). Quantifying the measurement error in ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have been scarcely reported in the literature and are investigated in the present study. Limitations in the ILI tool and the clustering process can sometimes cause clustering error, which is defined as the error introduced during the clustering process by including or excluding a single anomaly or group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributory factors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing the ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada. Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted as Type I anomalies, is markedly less than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.
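
A minimal sketch of the proposed data-mining classification step, using synthetic features in place of the proprietary ILI records and a decision tree as one plausible classifier (the abstract does not name a specific algorithm):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Hypothetical features built from ILI-reported anomaly information,
# e.g. reported length, depth, width, spacing to neighbouring anomalies.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
# Synthetic labels: y = 1 marks Type II (clustering error), y = 0 Type I.
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(500) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["Type I", "Type II"]))
```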

Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline

Procedia PDF Downloads 309
2707 Criticality Assessment Model for Water Pipelines Using Fuzzy Analytical Network Process

Authors: A. Assad, T. Zayed

Abstract:

Water networks (WNs) are responsible for providing adequate amounts of safe, high-quality water to the public. Like other critical infrastructure systems, WNs are subject to deterioration, which increases the number of breaks and leaks and lowers water quality. In Canada, 35% of water assets require critical attention, and there is a significant gap between the needed and the implemented investments. Thus, the need for efficient rehabilitation programs is becoming more urgent given the paradigm of aging infrastructure and tight budgets. The first step towards developing such programs is to formulate a performance index that reflects both the current condition of water assets and their criticality. While numerous studies in the literature have focused on various aspects of condition assessment and reliability, limited efforts have investigated the criticality of such components. Critical water mains are those whose failure causes significant economic, environmental, or social impacts on a community. Inclusion of criticality in computing the performance index will serve as a prioritizing tool for the optimal allocation of the available resources and budget. In this study, several social, economic, and environmental factors that dictate the criticality of water pipelines were elicited from the literature. Expert opinions were sought to provide pairwise comparisons of the importance of these factors. Subsequently, fuzzy logic along with the Analytical Network Process (ANP) was utilized to calculate the weights of the criteria factors. Multi-Attribute Utility Theory (MAUT) was then employed to integrate these weights with the attribute values of several pipelines in the Montreal WN. The result is a criticality index, from 0 to 1, that quantifies the severity of the consequence of failure of each pipeline. A novel contribution of this approach is that it accounts for both the interdependency between criteria factors and the inherent uncertainties in calculating the criticality. The practical value of the current study is represented by the automated Excel-MATLAB tool, which can be used by utility managers and decision makers in planning future maintenance and rehabilitation activities where high efficiency in the use of material and time resources is required.
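
For illustration, a minimal sketch of the final MAUT aggregation, with hypothetical weights and attribute values standing in for the FANP-derived weights and the Montreal pipeline data; the additive form shown is one common MAUT choice:

```python
# Hypothetical FANP-derived weights for criticality factors (sum to 1)
weights = {"economic": 0.35, "social": 0.30,
           "environmental": 0.20, "hydraulic": 0.15}

# Hypothetical normalized (0-1) attribute values for three pipelines
pipelines = {
    "P-101": {"economic": 0.8, "social": 0.9, "environmental": 0.4, "hydraulic": 0.7},
    "P-102": {"economic": 0.3, "social": 0.2, "environmental": 0.6, "hydraulic": 0.4},
    "P-103": {"economic": 0.6, "social": 0.5, "environmental": 0.9, "hydraulic": 0.8},
}

# Additive MAUT aggregation: criticality index in [0, 1]
for name, attrs in pipelines.items():
    index = sum(weights[f] * attrs[f] for f in weights)
    print(f"{name}: criticality index = {index:.2f}")
```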

Keywords: water networks, criticality assessment, asset management, fuzzy analytical network process

Procedia PDF Downloads 147
2706 Evaluation of Oxidative Changes in Soybean Oil During Shelf-Life by Physico-Chemical Methods and Headspace-Liquid Phase Microextraction (HS-LPME) Technique

Authors: Maryam Enteshari, Kooshan Nayebzadeh, Abdorreza Mohammadi

Abstract:

In this study, the oxidative stability of soybean oil under different storage temperatures (4 and 25˚C) during a 6-month shelf-life was investigated by various analytical methods and by headspace-liquid phase microextraction (HS-LPME) coupled to gas chromatography-mass spectrometry (GC-MS). Oxidative changes were monitored through analytical parameters consisting of acid value (AV), peroxide value (PV), p-anisidine value (p-AV), thiobarbituric acid value (TBA), fatty acid profile, iodine value (IV), and oxidative stability index (OSI). In addition, the concentrations of hexanal and heptanal, as secondary volatile oxidation compounds, were determined by the HS-LPME/GC-MS technique. The rate of oxidation in soybean oil stored at 25˚C was considerably higher. The AV, p-AV, and TBA gradually increased during the 6 months, while the amount of unsaturated fatty acids, IV, and OSI decreased. The other parameters, including the concentrations of both hexanal and heptanal and the PV, exhibited an increasing trend during the first months of storage; then, at the end of the third and fourth months, a sudden decrease was observed simultaneously in the concentrations of hexanal and heptanal and in the PV. The latter parameters increased again towards the end of the shelf-life. Overall, temperature and time were significant factors in the oxidative stability of soybean oil. Strong correlations were also found for soybean oil stored at 4˚C between AV and TBA (r² = 0.96), PV and p-AV (r² = 0.9), IV and TBA (r² = 0.9, negative correlation), and between p-AV and TBA (r² = 0.99).

Keywords: headspace-liquid phase microextraction, oxidation, shelf-life, soybean oil

Procedia PDF Downloads 403
2705 A Handheld Light Meter Device for Methamphetamine Detection in Oral Fluid

Authors: Anindita Sen

Abstract:

Oral fluid is a promising diagnostic matrix for drugs of abuse compared to urine and serum. Detection of methamphetamine in oral fluid would pave the way for the easy evaluation of impairment in drivers during roadside drug testing, as well as ensure safe working environments by facilitating the evaluation of impairment in employees at workplaces. A membrane-based, point-of-care (POC) friendly pre-treatment technique has been developed which eliminates the interferences caused by salivary proteins and has facilitated the demonstration of methamphetamine detection in saliva using a gold nanoparticle based colorimetric aptasensor platform. It was found that the colorimetric response in saliva was always suppressed owing to matrix effects. By navigating these challenging interference issues, we were able to detect methamphetamine at nanomolar levels in saliva, offering immense promise for the translation of these platforms into on-site diagnostic systems. This subsequently motivated the development of a handheld portable light meter device that can reliably transduce the aptasensor's colorimetric response into absorbance, facilitating quantitative detection of analyte concentrations on-site. This is crucial given the prevalent unreliability and sensitivity problems of conventional drug testing kits. The fabricated light meter device was validated against a standard UV-Vis spectrometer to confirm reliability. The portable and cost-effective handheld detector features sensitivity comparable to the well-established UV-Vis benchtop instrument, and the easy-to-use device could potentially serve as a prototype for a commercial device in the future.
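
The transduction step such a device performs reduces to the Beer-Lambert relation; a minimal sketch follows (the device's actual firmware and calibration are not described in the abstract):

```python
import math

def absorbance(intensity_sample, intensity_blank):
    """Beer-Lambert absorbance A = -log10(I / I0).

    A minimal sketch of the light-meter transduction: the photodiode
    reading through the assay solution (I) is referenced to a blank (I0).
    Calibration against a known concentration series (not shown) would
    convert A to methamphetamine concentration.
    """
    return -math.log10(intensity_sample / intensity_blank)

print(absorbance(intensity_sample=412.0, intensity_blank=980.0))  # ~0.376
```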

Keywords: aptasensors, colorimetric gold nanoparticle assay, point-of-care, oral fluid

Procedia PDF Downloads 59
2704 Analysing the Permanent Deformation of Cohesive Subsoil Subject to Long Term Cyclic Train Loading

Authors: Natalie M. Wride, Xueyu Geng

Abstract:

Subgrade soils of railway infrastructure are subjected to a significant number of load applications over their design life. The use of slab track on existing and proposed future rail links requires a reduced maintenance and repair regime for the embankment subgrade, since cyclic deformation restricts access to the subgrade soils for remediation. It is, therefore, important to study the deformation behaviour of soft cohesive subsoils induced by long-term cyclic loading. In this study, a series of oedometer tests and cyclic triaxial tests (10,000 cycles) have been undertaken to investigate the undrained deformation behaviour of soft kaolin. X-ray computed tomography (CT) scanning of the samples was performed to determine the change in porosity and soil structure density of the sample microstructure resulting from the laboratory testing regime. Combined with the examination of the excess pore pressures and strains obtained from the cyclic triaxial tests, the results are compared with an existing analytical solution for long-term settlement under repeated low-amplitude loading. Modifications to the analytical solution are presented based on the laboratory analysis and show good agreement with further test data.
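
The abstract does not reproduce the analytical solution itself; as a hedged illustration of the kind of relation involved, here is the common empirical power law for permanent strain accumulation under repeated loading (e.g., Monismith-type), with purely illustrative coefficients:

```python
import numpy as np
import matplotlib.pyplot as plt

# A minimal sketch, assuming eps_p(N) = a * N**b; a and b would be
# fitted from cyclic triaxial data in practice, and the paper's modified
# analytical solution is not reproduced here.
a, b = 0.15, 0.18            # illustrative values only
N = np.logspace(0, 4, 200)   # 1 to 10,000 cycles, as in the tests
eps_p = a * N**b             # cumulative permanent axial strain (%)

plt.semilogx(N, eps_p)
plt.xlabel("Number of load cycles N")
plt.ylabel("Permanent strain (%)")
plt.title("Power-law strain accumulation (illustrative)")
plt.show()
```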

Keywords: creep, cyclic loading, deformation, long term settlement, train loading

Procedia PDF Downloads 299
2703 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of more than 65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs a classification as well as a redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for the quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte-Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys would perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
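
A minimal sketch of the core template-matching comparison, reduced to a chi-square fit with toy spectral energy distributions; the actual method is probabilistic (PDF-based) and also yields redshift estimates, which this does not attempt:

```python
import numpy as np

def classify(fluxes, errors, templates):
    """Pick the template with the minimum chi-square fit to observed colors.

    Each template is scaled to the observation by the analytic
    least-squares amplitude before computing chi-square.
    """
    best, best_chi2 = None, np.inf
    for name, t in templates.items():
        w = 1.0 / errors**2
        amp = np.sum(w * fluxes * t) / np.sum(w * t**2)  # optimal scaling
        chi2 = np.sum(w * (fluxes - amp * t) ** 2)
        if chi2 < best_chi2:
            best, best_chi2 = name, chi2
    return best, best_chi2

templates = {"star":   np.array([1.0, 0.9, 0.7, 0.5]),
             "galaxy": np.array([0.4, 0.6, 0.9, 1.0]),
             "quasar": np.array([0.8, 0.8, 0.8, 0.9])}  # toy 4-band SEDs
obs = np.array([0.42, 0.61, 0.88, 1.02])
err = np.array([0.05, 0.05, 0.05, 0.05])
print(classify(obs, err, templates))   # -> ('galaxy', ...)
```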

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 78
2702 Effect of Concrete Strength and Aspect Ratio on Strength and Ductility of Concrete Columns

Authors: Mohamed A. Shanan, Ashraf H. El-Zanaty, Kamal G. Metwally

Abstract:

This paper presents the effect of concrete compressive strength and rectangularity ratio on the strength and ductility of normal- and high-strength reinforced concrete columns confined with transverse steel under axial compressive loading. Nineteen normal-strength concrete rectangular columns with different variables were tested in this research to study the effect of concrete compressive strength and rectangularity ratio on the strength and ductility of columns. The paper also presents a nonlinear finite element analysis, using ANSYS 15 finite element software, of these specimens and of another twenty high-strength concrete square columns tested by other researchers. The results indicate that the axial force-axial strain relationship obtained from the analytical model using ANSYS is in good agreement with the experimental data. The comparison shows that ANSYS is capable of modeling and predicting the actual nonlinear behavior of confined normal- and high-strength concrete columns under concentric loading. The maximum applied load and the maximum strain were also confirmed to be satisfactory. Based on this agreement between the experimental and analytical results, a parametric numerical study was conducted with ANSYS 15 to clarify and evaluate the effect of each variable on the strength and ductility of the columns.

Keywords: ANSYS, concrete compressive strength effect, ductility, rectangularity ratio, strength

Procedia PDF Downloads 510
2701 On Influence of Web Openings Presence on Structural Performance of Steel and Concrete Beams

Authors: Jakub Bartus, Jaroslav Odrobinak

Abstract:

In general, composite steel and concrete structures present an effective structural solution utilizing the full potential of both materials. As they have numerous advantages on the construction side, they can greatly reduce the overall cost of construction, which has been the main objective of the last decade, highlighted by the current economic and social crisis. The study presents not only an analysis of the behavior of composite beams with web openings but also emphasizes the influence of these openings on the total strain distribution at the level of the steel bottom flange. The major investigation was focused on the change in structural performance with respect to various layouts of openings. In examining this structural modification, an improvement of the load carrying capacity of composite beams was a prime objective. The study is divided into analytical and numerical parts. The analytical part served as an initial step in the design process of the composite beam samples, in which optimal dimensions and specific levels of utilization in individual stress states were taken into account. The numerical part covered the discretization of the present structural issue in the form of a finite element (FE) model using beam and shell elements and accounting for material non-linearities. As an outcome, several conclusions were drawn describing and explaining the effect of web opening presence on the structural performance of composite beams.

Keywords: beam, steel flange, total strain, web opening

Procedia PDF Downloads 76
2700 Nonlinear Modelling of Sloshing Waves and Solitary Waves in Shallow Basins

Authors: Mohammad R. Jalali, Mohammad M. Jalali

Abstract:

The earliest theories of sloshing waves and solitary waves, based on potential theory idealisations and irrotational flow, have been extended to become applicable to more realistic domains. To this end, computational fluid dynamics (CFD) methods are widely used. Three-dimensional CFD methods, such as Navier-Stokes solvers with volume-of-fluid treatment of the free surface and Navier-Stokes solvers with mappings of the free surface, inherently impose high computational expense; therefore, considerable effort has gone into developing depth-averaged approaches. Examples of such approaches include the Green-Naghdi (GN) equations. In a Cartesian system, the GN velocity profile depends on the horizontal directions, the x-direction and the y-direction. The effect of the vertical direction (z-direction) is also taken into consideration by applying a weighting function in the approximation. GN theory considers the effect of vertical acceleration and the consequent non-hydrostatic pressure. Moreover, in GN theory, the flow is rotational. The present study illustrates the application of the GN equations to the propagation of sloshing waves and solitary waves. For this purpose, the GN equations solver is verified against the benchmark tests of Gaussian hump sloshing and solitary wave propagation in shallow basins. Analysis of the free surface sloshing of even harmonic components of an initial Gaussian hump demonstrates that the GN model gives predictions in satisfactory agreement with the linear analytical solutions. Discrepancies between the GN predictions and the linear analytical solutions arise from wave nonlinearities, stemming from the wave amplitude itself and from wave-wave interactions. Numerically predicted solitary wave propagation indicates that the GN model produces simulations in good agreement with the analytical solution of the linearised wave theory. Comparison between the GN model's numerical prediction and the result from perturbation analysis confirms that the nonlinear interaction between a solitary wave and a solid wall is satisfactorily modelled. Moreover, solitary wave propagation at an angle to the x-axis and the interaction of solitary waves with each other are conducted to validate the developed model.

Keywords: Green–Naghdi equations, nonlinearity, numerical prediction, sloshing waves, solitary waves

Procedia PDF Downloads 285
2699 Mechanical Characterization of Porcine Skin with the Finite Element Method Based Inverse Optimization Approach

Authors: Djamel Remache, Serge Dos Santos, Michael Cliez, Michel Gratton, Patrick Chabrand, Jean-Marie Rossi, Jean-Louis Milan

Abstract:

Skin tissue is an inhomogeneous and anisotropic material. Uniaxial tensile testing is one of the primary testing techniques for the mechanical characterization of skin at large scales. In order to predict the mechanical behavior of materials, direct or inverse analytical approaches are often used. However, in the case of an inhomogeneous and anisotropic material such as skin tissue, analytical approaches are not able to provide solutions, and numerical simulation is thus necessary. In this work, the uniaxial tensile test and an FEM (finite element method) based inverse method were used to identify the anisotropic mechanical properties of porcine skin tissue. The uniaxial tensile experiments were performed using an Instron 8800 tensile machine. The uniaxial tensile test was simulated with FEM, and the inverse optimization approach (or inverse calibration) was then used for the identification of the mechanical properties of the samples. Experimental results were compared to the finite element solutions. The results showed that the finite element model predictions of the mechanical behavior of the tested skin samples correlated well with the experimental results.
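
A minimal sketch of the inverse-calibration loop, with an analytic Fung-type law standing in for the finite element solve; in the paper, each cost evaluation would instead run the FEM simulation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical "experimental" stress-strain data from a uniaxial test
strain = np.linspace(0.0, 0.4, 20)
stress_exp = 0.1 * (np.exp(6.0 * strain) - 1.0)   # toy data

def model(params, eps):
    """Stand-in for the FEM solve: a Fung-type exponential law
    sigma = a * (exp(b * eps) - 1), often used for soft tissue."""
    a, b = params
    return a * (np.exp(b * eps) - 1.0)

def cost(params):
    # Squared mismatch between simulated and "experimental" response
    return np.sum((model(params, strain) - stress_exp) ** 2)

res = minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x)   # recovers approximately [0.1, 6.0]
```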

Keywords: mechanical skin tissue behavior, uniaxial tensile test, finite element analysis, inverse optimization approach

Procedia PDF Downloads 408
2698 Analyzing the Efficiency of Initiatives Taken against Disinformation during Election Campaigns: Case Study of Young Voters

Authors: Fatima-Zohra Ghedir

Abstract:

Social media platforms have been actively working on solutions and have combined their efforts with media, policy makers, educators, and researchers to protect citizens and prevent interference in information, political discourse, and elections. Facebook, for instance, has deleted fake accounts, implemented fake-account and fake-content detection algorithms, partnered with news agencies to manually fact-check content, and changed how its newsfeed is displayed. Twitter and Instagram regularly communicate their efforts and notify their users of improvements and safety guidelines. More funds have been allocated to media literacy programs to empower citizens ahead of coming elections. This paper investigates the efficiency of these initiatives and analyzes the metrics used to measure their success or failure. The objective is also to determine the segments of the population more prone to fall into disinformation traps during elections despite the measures taken over the last four years, as well as to examine the groups who were positively impacted by these measures. This paper relies on both desk and field methodologies. For this study, a survey was administered to French students aged between 17 and 29 years old, and semi-guided interviews were conducted with a similar audience. The analysis of the survey and of the interviews shows that respondents were exposed to the initiatives described above and are aware of the existence of disinformation issues. However, they do not understand what disinformation really entails or means. For instance, for most of them, disinformation is synonymous with an opposing point of view, without taking into account the truthfulness of the content. Besides, they still consume and believe the information shared by their friends and family, with little questioning of the ways their close ones get informed.

Keywords: democratic elections, disinformation, foreign interference, social media, success metrics

Procedia PDF Downloads 109
2697 Portfolio Management for Construction Company during Covid-19 Using AHP Technique

Authors: Sareh Rajabi, Salwa Bheiry

Abstract:

In general, Covid-19 has caused extensive financial and non-financial damage to the economy and the community. The level and severity of the Covid-19 pandemic vary across regions and across different types of projects. The Covid-19 virus has recently emerged as one of the most important risk-management factors worldwide. Therefore, as part of portfolio management assessment, it is essential to evaluate the severity of such a risk on projects and programs at the portfolio management level to avoid risky portfolios. Covid-19 hit South America, parts of Europe, and the Middle East particularly hard, and the pandemic affected the whole world through lockdowns, interruptions in supply chain management, health and safety requirements, and transportation and commercial impacts. Therefore, this research proposes the Analytical Hierarchy Process (AHP) to analyze and assess a pandemic case like Covid-19 and its impacts on construction projects. The AHP technique uses four sub-criteria: health and safety risk, commercial risk, completion risk, and contractual risk to evaluate each project and program. The result provides decision makers with information on which projects carry higher or lower risk under a Covid-19 or pandemic scenario, so that they can choose the most feasible solution, based on effectively weighted criteria, for project selection within their portfolio to match the organization's strategies.
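
For illustration, a minimal sketch of the AHP weighting step, with a hypothetical pairwise comparison matrix for the four sub-criteria; the actual judgments would come from the study's experts:

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for
# health & safety, commercial, completion, and contractual risk.
A = np.array([[1.0, 3.0, 5.0, 4.0],
              [1/3, 1.0, 2.0, 2.0],
              [1/5, 1/2, 1.0, 1.0],
              [1/4, 1/2, 1.0, 1.0]])

# Priority weights = principal eigenvector, normalized to sum to 1
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Saaty consistency ratio; CR < 0.10 indicates acceptable judgments
n, RI = 4, 0.90                       # RI = random index for n = 4
CI = (eigvals[k].real - n) / (n - 1)
print("weights:", np.round(w, 3), " CR:", round(CI / RI, 3))
```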

Keywords: portfolio management, risk management, COVID-19, analytical hierarchy process technique

Procedia PDF Downloads 109
2696 Pro Life-Pro Choice Debate: Looking through the Prism of Abortion Right in the Indian Context

Authors: Satabdi Das

Abstract:

Background: The abortion debate has polarized women, pitting them against each other in the binary of pro-choice and pro-life. While the followers of pro-choice view the right to an abortion as inherent to a woman's right to sovereignty, the followers of pro-life believe that it is unethical to kill an unborn baby, as doing so in effect denies the foetus' right to life. There are innumerable arguments and counter-arguments without resolution, and the dilemma remains as to which is more significant: the mother's right to terminate a pregnancy or the foetus' right to life. The pro-life/pro-choice debate has Western roots and is primarily about reproductive freedom, but the Western way of framing the abortion debate is not fully relevant in the Indian context, where the situation is entirely different. Sex-selective foeticide is a social ill in India which cannot be explained through the prism of the abortion debate alone; it must take into account the problem of forced female foeticide. Objectives: Against this backdrop, the study sheds light on the following issues: How has the reproductive rights debate evolved? How is it relevant in the Indian context, where female foeticide is a harsh reality? How should one address the dilemma between life and death in the context of the pro-life/pro-choice debate? Methodology: The study employs historical-analytical and descriptive-analytical methods and uses primary documents, such as governmental documents, and secondary sources, such as analytical articles in books, journals, and relevant websites. Findings: Fertility control is not a modern-day phenomenon; it has roots in the ancient, medieval, and present epochs. However, debates have always existed over the rights of the foetus and the ethics of abortion. Pre-natal sex determination for sex-selective abortion is a common phenomenon in India because of the wish for male heirs; the cultural preference for male children over female ones has resulted in the disappearance of girl children. When life begins has not been defined by any law. Considering the Indian case, the pro-life/pro-choice framing is not as relevant as it is in the US. Here, women are often denied basic human rights; they are killed in the womb in many places, and their right to life is jeopardised in that way. In India's liberal abortion regime, a woman's choice to end a pregnancy is exercised in only a few enlightened families. In many cases, it is the family's decision to end a pregnancy out of a preference for boys, and pre-natal sex determination plays a crucial role in that. Conclusion: In India, we can be pro-life only when the right to life of the unborn can be secured irrespective of its sex. Similarly, we belong to the pro-choice camp only when the choice to terminate a pregnancy is made entirely by the mother for her own reasons.

Keywords: female foeticide, India, prolife/pro choice, right to abortion

Procedia PDF Downloads 192
2695 Innovative Technologies of Distant Spectral Temperature Control

Authors: Leonid Zhukov, Dmytro Petrenko

Abstract:

In many cases of industrial continuous temperature control, optical thermometry has no alternative as the most effective approach. Classical optical thermometry technologies can be used on controlled objects that are accessible to pyrometers and have stable radiation characteristics and stable transmissivity of the intermediate medium. Without temperature corrections, measurement is possible in the case of a "black" body for energy pyrometry, and in the cases of "black" and "grey" bodies for spectral ratio pyrometry; with corrections, it is possible for any colored bodies. Consequently, as the number of operating wavelengths increases, the possibilities of optical thermometry to reduce methodical errors expand significantly. That is why, over the past 25-30 years, research has been reoriented towards more advanced spectral (multicolor) thermometry technologies. Two physical substances are at work in optical thermometry: matter (the controlled object) and the electromagnetic field (thermal radiation). Heat is transferred by radiation; therefore, radiation carries energy, entropy, and temperature. Optical thermometry originated alongside the development of thermal radiation theory, when the concept and the term "radiation temperature" were not yet used, and the concepts and terms "conditional temperatures" or "pseudo-temperatures" of controlled objects were introduced instead. These do not correspond to the physical sense and definitions of temperature in thermodynamics, molecular-kinetic theory, and statistical physics. The discussion launched in the scientific thermometry community about the possibility of measuring the temperature of objects, including colored bodies, from the temperatures of their radiation is not finished. Is the information about controlled objects carried by their radiation sufficient for temperature measurements? The positive and negative answers to this fundamental question have divided experts into two opposing camps. Recent achievements in spectral thermometry are turning events in its favour and leave little hope for the skeptics. This article presents the results of investigations and developments in the field of spectral thermometry carried out by the authors in the Department of Thermometry and Physics-Chemical Investigations, drawing on many years of experience with modern optical thermometry technologies. Innovative technologies for optical continuous temperature control have been developed: symmetric-wave and two-color compensative technologies, and, based on the obtained nonlinearity equation of the spectral emissivity distribution, linear, two-range, and parabolic technologies. The technologies are based on direct measurements of the physically substantiated radiation temperatures proposed by Prof. L. Zhukov, with subsequent calculation of the controlled object temperature from these radiation temperatures and the corresponding mathematical models. The technologies significantly improve the metrological characteristics of continuous contactless and light-guide temperature control in the energy, metallurgical, ceramic, glass, and other industries. For example, under the same conditions, the methodical errors of the proposed technologies are 2 times smaller than those of known spectral technologies and 3-13 times smaller than those of classical technologies. The innovative technologies enable quality products to be obtained at the lowest possible resource costs, including energy costs.
More than 600 publications have appeared on the completed developments, including more than 100 domestic patents, as well as 34 patents in Australia, Bulgaria, Germany, France, Canada, the USA, Sweden, and Japan. The developments have been implemented in enterprises in the USA, as well as in Western Europe and Asia, including Germany and Japan.
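
As an illustration of the classical spectral-ratio principle these technologies build on, here is a minimal sketch of two-color pyrometry in the Wien approximation. A grey body is assumed (equal emissivity at both wavelengths); the innovative technologies described above add emissivity-distribution corrections not reproduced here:

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_radiance(lam, T, eps=1.0):
    """Spectral radiance in the Wien approximation.
    The first radiation constant is omitted: it cancels in the ratio."""
    return eps * lam**-5 * np.exp(-C2 / (lam * T))

def ratio_temperature(L1, L2, lam1, lam2, eps_ratio=1.0):
    """Two-color (spectral-ratio) temperature from radiances at lam1, lam2."""
    lhs = np.log(L1 / L2) - np.log(eps_ratio) - 5.0 * np.log(lam2 / lam1)
    return C2 * (1.0 / lam2 - 1.0 / lam1) / lhs

# Round trip: a grey body at 1500 K observed at 0.8 and 1.0 micrometres
lam1, lam2, T = 0.8e-6, 1.0e-6, 1500.0
L1, L2 = wien_radiance(lam1, T, 0.7), wien_radiance(lam2, T, 0.7)
print(ratio_temperature(L1, L2, lam1, lam2))  # -> 1500.0
```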

Keywords: emissivity, radiation temperature, object temperature, spectral thermometry

Procedia PDF Downloads 98
2694 Mineralogical and Geochemical Constraints on the Origin and Environment of Numidian Siliceous Sedimentary Rocks of the Extreme Northwest Tunisia

Authors: Ben Yahia Nouha, Harris Chris, Sebei Abdelaziz, Boussen Slim, Chaabani Fredj

Abstract:

The present work has set itself the objective of studying the non-detrital siliceous rocks of extreme northwest Tunisia. It aims to examine their origin and sedimentary depositional environment on the basis of mineralogical and geochemical characteristics. The studied sections are located in the Babouch and Tabarka areas. The collected samples were subjected to mineralogical and geochemical characterization using different analytical methods, namely X-ray diffraction (XRD), geochemical analysis (ICP-AES), and isotope geochemistry (δ18O), and their suitability for industrial use was assessed. X-ray powder diffraction of the pure siliceous rock indicates quartz as the major mineral, with a total lack of amorphous silica. Trace impurities, such as carbonate and clay minerals, remain concealed in the analytical results. The petrographic examination allowed us to deduce that this rock derives from the tests of siliceous organisms (radiolarians). The chemical composition shows that SiO2, Al2O3, and Fe2O3 are the most abundant oxides; the other oxides are present in negligible quantities. The geochemical data support a biogenic, non-hydrothermal origin for the babouchite silica. Oxygen isotopes show that the babouchites formed in a high-temperature environment, ranging from 56 °C to 73 °C.

Keywords: biogenic silica, babouchite formation, XRD, chemical analysis, oxygen isotopic, northwest tunisia

Procedia PDF Downloads 145
2693 The Analysis of Priority Flood Control Management Using Analytical Hierarchy Process

Authors: Pravira Rizki Suwarno, Fanny Aliza Savitri, Priseyola Ayunda Prima, Pipin Surahman, Mahelga Levina Amran, Khoirunisa Ulya Nur Utari, Nora Permatasari

Abstract:

The Bogowonto River, commonly called the Bhagawanta River, is one of the rivers of Java Island, located in Central Java, Indonesia. Its watershed area is 35 km², and the river is 57 km long. The river crosses three regencies: Wonosobo Regency and Magelang Regency upstream, and Purworejo Regency in the south and downstream. The Bogowonto River is experiencing channel narrowing and silting, caused by garbage along the river from livestock and household waste. The channel narrowing and siltation reduce the river's capacity to convey flood discharge. Comprehensive and sustainable actions are needed to deal with current and future floods, and based on the current conditions, a priority scale is required. Therefore, this study aims to determine the priority scale of flood management in Purworejo Regency using the Analytical Hierarchy Process (AHP) method, which determines the appropriate actions based on their ranking. In addition, field observations were made by distributing questionnaires to several parties, including stakeholders and the community. The results of this study take two forms of action: structural, covering water structures, and non-structural, covering social, environmental, and law-enforcement measures.

Keywords: analytical hierarchy process, bogowonto, flood control, management

Procedia PDF Downloads 208
2692 RAFU Functions in Robotics and Automation

Authors: Alicia C. Sanchez

Abstract:

This paper investigates the implementation of RAFU functions (radical functions) in robotics and automation. Specifically, the main goal is to show how these functions may be useful in lane-keeping control and in the lateral control of autonomous machines, vehicles, robots, and the like. From the knowledge of several points of a certain route, the RAFU functions are used to achieve the lateral control purpose and maintain the lane-keeping errors within fixed limits. The stability that these functions provide, their ease in approximating any continuous trajectory, and the control they give over the approximation error may be useful in practice.

Keywords: automatic navigation control, lateral control, lane-keeping control, RAFU approximation

Procedia PDF Downloads 302
2691 UV-Vis Spectroscopy as a Tool for Online Tar Measurements in Wood Gasification Processes

Authors: Philip Edinger, Christian Ludwig

Abstract:

The formation and control of tars remain one of the major challenges in the implementation of biomass gasification technologies. Robust, on-line analytical methods are needed to investigate the fate of tar compounds when different measures for their reduction are applied. This work establishes an on-line UV-Vis method, based on a liquid quench sampling system, to monitor tar compounds in biomass gasification processes. Recorded spectra from the liquid phase were analyzed for their tar composition by means of a classical least squares (CLS) and partial least squares (PLS) approach. This allowed for the detection of UV-Vis active tar compounds with detection limits in the low part per million by volume (ppmV) region. The developed method was then applied to two case studies. The first involved a lab-scale reactor, intended to investigate the decomposition of a limited number of tar compounds across a catalyst. The second study involved a gas scrubber as part of a pilot scale wood gasification plant. Tar compound quantification results showed good agreement with off-line based reference methods (GC-FID) when the complexity of tar composition was limited. The two case studies show that the developed method can provide rapid, qualitative information on the tar composition for the purpose of process monitoring. In cases with a limited number of tar species, quantitative information about the individual tar compound concentrations provides an additional benefit of the analytical method.
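
A minimal sketch of the CLS decomposition step, assuming synthetic Gaussian bands in place of measured reference spectra of UV-Vis active tars (e.g., naphthalene or phenol); a non-negativity constraint is one common choice for concentrations:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical pure-component UV-Vis spectra (columns of K) on a common
# wavelength grid; in practice these would be measured tar references.
wl = np.linspace(200, 400, 201)
def peak(center, width):  # toy Gaussian absorption band
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

K = np.column_stack([peak(220, 10), peak(270, 15), peak(330, 20)])
c_true = np.array([0.5, 1.2, 0.3])             # "true" concentrations
mixture = K @ c_true + 0.01 * np.random.default_rng(1).standard_normal(wl.size)

# Classical least squares: solve mixture ~ K c with c >= 0
c_est, _ = nnls(K, mixture)
print(np.round(c_est, 3))   # ~ [0.5, 1.2, 0.3]
```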

Keywords: biomass gasification, on-line, tar, UV-Vis

Procedia PDF Downloads 259
2690 Identification of Failures Occurring on a System on Chip Exposed to a Neutron Beam for Safety Applications

Authors: S. Thomet, S. De-Paoli, F. Ghaffari, J. M. Daveau, P. Roche, O. Romain

Abstract:

In this paper, we present a hardware module dedicated to understanding the failure cause of a System on Chip (SoC) exposed to a particle beam. The impact of Single-Event Effects (SEE) on processor-based SoCs is a concern that has increased over the past decade, particularly for terrestrial applications, given the growing safety requirements in automotive as well as in the consumer and industrial domains. An SEE created by the impact of a particle on an SoC may have consequences that can end in instability or crashes. Specific hardening techniques for hardware and software have been developed to make such systems more reliable, and the SoC is then qualified using cosmic-ray Accelerated Soft-Error Rate (ASER) testing to ensure the Soft-Error Rate (SER) remains within mission profiles. Understanding where errors occur is another challenge because of the complexity of the operations performed in an SoC. Common techniques for monitoring an SoC running under a beam are based on non-intrusive debug, consisting of recording the program counter and doing some consistency checking on the fly. To detect and understand SEE, we have developed a module embedded within the SoC that provides support for recording probes, hardware watchpoints, and a memory-mapped register bank dedicated to software usage. To identify CPU failure modes and the most important resources to probe, we carried out a fault-injection campaign on the RTL model of the SoC. Probes are placed on generic CPU registers and bus accesses; they highlight the propagation of errors and allow the failure modes to be identified. Typical resulting errors are bit-flips in resources creating bad addresses, illegal instructions, longer-than-expected loops, or incorrect bus accesses. Although our module is processor-agnostic, it has been interfaced to a RISC-V core by probing some of the processor registers. Probes are recorded in a ring buffer, and associated hardware watchpoints allow some control, such as starting or stopping event recording or halting the processor. Finally, the module also provides a bank of registers where the firmware running on the SoC can log information; typical usage is recording operating-system context switches. The module is connected to a dedicated debug bus and is interfaced to a remote controller via a debugger link. Thus, a remote controller can interact with the monitoring module without any intrusiveness on the SoC. Moreover, in case of CPU unresponsiveness or a system-bus stall, the recorded information can still be recovered, providing the failure cause. A preliminary version of the module has been integrated into a test chip currently being manufactured at ST in 28-nm FDSOI technology. The module has been triplicated to provide reliable information on the SoC behavior. As the primary application domain is automotive and safety, the efficiency of the module will be evaluated by exposing the test chip to a fast-neutron beam by the end of the year. In the meantime, it will be tested with alpha particles and electromagnetic fault injection (EMFI). We will report in the paper on the fault-injection results as well as the irradiation results.
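
For illustration, a minimal software sketch of one fault-injection trial of the kind run on the RTL model: a single bit-flip in a hypothetical address register, checked against a hypothetical valid memory window (the actual campaign operates on the RTL, not in Python):

```python
import random

def inject_bit_flip(reg_value, width=32, rng=random):
    """Flip one random bit of a register value, as a simple SEE upset model."""
    return reg_value ^ (1 << rng.randrange(width))

# Hypothetical SRAM window; a flipped base address landing outside it
# reproduces one of the failure modes named above (bad addresses).
VALID = range(0x2000_0000, 0x2001_0000)
random.seed(3)
base = 0x2000_8000
for _ in range(5):
    faulty = inject_bit_flip(base)
    status = "ok" if faulty in VALID else "BAD ADDRESS"
    print(hex(faulty), status)
```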

Keywords: fault injection, SoC fail reason, SoC soft error rate, terrestrial application

Procedia PDF Downloads 229
2689 European Food Safety Authority (EFSA) Safety Assessment of Food Additives: Data and Methodology Used for the Assessment of Dietary Exposure for Different European Countries and Population Groups

Authors: Petra Gergelova, Sofia Ioannidou, Davide Arcella, Alexandra Tard, Polly E. Boon, Oliver Lindtner, Christina Tlustos, Jean-Charles Leblanc

Abstract:

Objectives: To assess chronic dietary exposure to food additives in different European countries and population groups. Method and Design: The European Food Safety Authority's (EFSA) Panel on Food Additives and Nutrient Sources added to Food (ANS) estimates chronic dietary exposure to food additives for the purpose of re-evaluating food additives that were previously authorized in Europe. For this, EFSA uses concentration values (usage and/or analytical occurrence data) reported by the food industry and European countries through regular public calls for data. These are combined, at the individual level, with national food consumption data from the EFSA Comprehensive European Food Consumption Database, which includes data from 33 dietary surveys from 19 European countries and considers six different population groups (infants, toddlers, children, adolescents, adults, and the elderly). The EFSA ANS Panel estimates dietary exposure for each individual in the EFSA Comprehensive Database by combining the occurrence levels per food group with the corresponding consumption amount per kg body weight. An individual average exposure per day is calculated, resulting in distributions of individual exposures per survey and population group. Based on these distributions, the average and 95th percentile of exposure are calculated per survey and per population group. Dietary exposure is assessed using two different sets of data: (a) maximum permitted levels (MPLs) of use set down in EU legislation (defined as the regulatory maximum level exposure assessment scenario), and (b) usage levels and/or analytical occurrence data (defined as the refined exposure assessment scenario). The refined exposure assessment scenario is subdivided into a brand-loyal consumer scenario and a non-brand-loyal consumer scenario. In the brand-loyal consumer scenario, the consumer is considered to be exposed on a long-term basis to the highest reported usage/analytical level for one food group and to the mean level for the remaining food groups. In the non-brand-loyal consumer scenario, the consumer is considered to be exposed on a long-term basis to the mean reported usage/analytical level for all food groups. Additional exposure from sources other than the direct addition of food additives (i.e., natural presence, contaminants, and carriers of food additives) is also estimated, as appropriate. Results: Since 2014, this methodology has been applied in about 30 food additive exposure assessments conducted as part of scientific opinions of the EFSA ANS Panel. For example, under the non-brand-loyal scenario, the highest 95th percentiles of exposure to α-tocopherol (E 307) and ammonium phosphatides (E 442) were estimated in toddlers, at up to 5.9 and 8.7 mg/kg body weight/day, respectively. The same estimates under the brand-loyal scenario in toddlers resulted in exposures of 8.1 and 20.7 mg/kg body weight/day, respectively. For the regulatory maximum level exposure assessment scenario, the highest 95th percentiles of exposure to α-tocopherol (E 307) and ammonium phosphatides (E 442) were estimated in toddlers, at up to 11.9 and 30.3 mg/kg body weight/day, respectively. Conclusions: Detailed and up-to-date information on food additive concentration values (usage and/or analytical occurrence data) and food consumption data enables the assessment of chronic dietary exposure to food additives at more realistic levels.
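
A minimal sketch of the per-individual exposure computation described above, with hypothetical records standing in for the EFSA Comprehensive Database and call-for-data inputs:

```python
import numpy as np
import pandas as pd

# Hypothetical records: one row per individual per food group, with the
# individual's consumption (g food/kg bw/day) and the additive's
# occurrence level (mg/kg food) under a given scenario.
records = pd.DataFrame({
    "survey":      ["S1"] * 6,
    "individual":  [1, 1, 2, 2, 3, 3],
    "consumption": [5.0, 2.0, 8.0, 1.0, 3.0, 4.0],
    "occurrence":  [40., 15., 40., 15., 40., 15.],
})

# Exposure per row, then summed per individual (mg/kg bw/day)
records["exposure"] = records["consumption"] * records["occurrence"] / 1000.0
per_person = records.groupby(["survey", "individual"])["exposure"].sum()

# Survey-level statistics of the kind reported by the ANS Panel
print(per_person.groupby("survey").agg(mean="mean",
                                       p95=lambda s: np.percentile(s, 95)))
```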

Keywords: α-tocopherol, ammonium phosphatides, dietary exposure assessment, European Food Safety Authority, food additives, food consumption data

Procedia PDF Downloads 325
2688 Energy Complementary in Colombia: Imputation of Dataset

Authors: Felipe Villegas-Velasquez, Harold Pantoja-Villota, Sergio Holguin-Cardona, Alejandro Osorio-Botero, Brayan Candamil-Arango

Abstract:

Colombian electricity comes mainly from hydric resources, which are affected by environmental variations such as the El Niño phenomenon. That is why incorporating other types of resources is necessary to provide electricity constantly. This research seeks to fill gaps in the wind speed and global solar irradiance dataset for the two years with the highest amount of information. A further result is a characterization of the data by region, which led to inferences about which errors occurred and gave rise to the incomplete dataset.
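
For illustration, a minimal sketch of one simple imputation choice, time-based linear interpolation for short gaps; the abstract does not specify the actual imputation method used:

```python
import pandas as pd

# Hypothetical hourly station data with gaps; real inputs would be the
# regional wind speed and global solar irradiance series.
idx = pd.date_range("2020-01-01", periods=8, freq="h")
df = pd.DataFrame(
    {"wind_speed": [3.1, None, 3.4, 3.6, None, None, 4.0, 4.1],
     "ghi":        [0.0, 12.0, None, 85.0, 140.0, None, 210.0, 230.0]},
    index=idx)

# Short gaps: time-based linear interpolation; leading/trailing gaps
# would need a different treatment (e.g., climatological means).
filled = df.interpolate(method="time", limit=3)
print(filled)
```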

Keywords: energy, wind speed, global solar irradiance, Colombia, imputation

Procedia PDF Downloads 146
2687 Seismic Assessment of a Pre-Cast Recycled Concrete Block Arch System

Authors: Amaia Martinez Martinez, Martin Turek, Carlos Ventura, Jay Drew

Abstract:

This study aims to assess the seismic performance of arch and dome structural systems made from easy-to-assemble precast blocks of recycled concrete. These systems have been developed by Lock Block Ltd. of Vancouver, Canada, as an extension of their currently used retaining wall system. The characterization of the seismic behavior of these structures is performed by a combination of experimental static and dynamic testing and analytical modeling. For the experimental testing, several tilt tests, as well as a program of shake-table testing, were undertaken using small-scale arch models. A suite of earthquakes with different characteristics from important past events was chosen and scaled appropriately for the dynamic testing. Shake-table tests applying the ground motions in just one direction (the weak direction of the arch) and in all three directions were conducted and compared. The models were tested with increasing intensity until collapse occurred, which determines the failure level for each earthquake. Since the failure intensity varied with the type of earthquake, a sensitivity analysis of the different parameters was performed, with impulse found to be the dominant factor. In all cases, the arches exhibited the typical four-hinge failure mechanism, which was also shown by the analytical model. Experimental testing was also performed with the arches reinforced by a steel band placed over the structure and anchored at both ends of the arch. The models were tested with different pretension levels, and the bands were instrumented with strain gauges to measure the forces produced by the shaking. These forces were used to develop engineering guidelines for the design of the reinforcement needed for these systems. In addition, an analytical discrete element model was created using the 3DEC software. The blocks were modeled as rigid blocks, with all properties assigned to the joints, including the contribution of the interlocking shear key between blocks. The model is calibrated to the experimental static tests and validated against the results obtained from the dynamic tests; it can then be used to scale up the results to the full-scale structure and to extend them to different configurations and boundary conditions.

Keywords: arch, discrete element model, seismic assessment, shake-table testing

Procedia PDF Downloads 206