Search results for: parameter calibration
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2458

568 Risk Assessment of Heavy Metals in River Sediments and Suspended Matter in Small Tributaries of Abandoned Mercury Mines in Wanshan, Guizhou

Authors: Guo-Hui Lu, Jing-Yi Cai, Ke-Yan Tan, Xiao-Cai Yin, Yu Zheng, Peng-Wei Shao, Yong-Liang Yang

Abstract:

Soil erosion around abandoned mines is one of the important geological agents for pollutant diffusion to the lower reaches of the local river basin system. River loading of pollutants is an important parameter for the remediation of abandoned mines. In order to obtain information on pollutant transport and diffusion downstream of the mining area, the small tributary system of the Xiaxi River in Wanshan District of Guizhou Province was selected as the research area. Sediment and suspended matter samples were collected and determined for Pb, As, Hg, Zn, Co, Cd, Cu, Ni, Cr, and Mn by inductively coupled plasma mass spectrometry (ICP-MS) and atomic fluorescence spectrometry (AFS) after wet-digestion pretreatment. The pollution status and spatial distribution characteristics are discussed. The total Hg content in the sediments ranged from 0.45 to 16.0 µg/g (dry weight) with an average of 5.79 µg/g, which is ten times higher than the Class II soil limit for mercury in the National Soil Environmental Quality Standard. The maximum occurred at the intersection of the Jin River and the Xiaxi River. The potential ecological hazard index (RI) was used to evaluate the ecological risk of heavy metals in the sediments. The average RI value for the whole study area suggests a high potential ecological risk level. High Cd potential ecological risk was found at individual sites.
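
For reference, a minimal sketch of how a Hakanson-type potential ecological risk index (RI) is computed from measured and background concentrations; the toxic-response factors and background values below are commonly used defaults, not necessarily those adopted in the study.

```python
# Sketch of the Hakanson potential ecological risk index: Er_i = Tr_i * (C_i / Cn_i)
# and RI = sum(Er_i). Toxic-response factors (TR) and background levels are
# commonly used defaults here, not necessarily the values used by the authors.
TR = {"Hg": 40, "Cd": 30, "As": 10, "Pb": 5, "Cu": 5, "Ni": 5, "Cr": 2, "Zn": 1}

def potential_ecological_risk(measured, background):
    """measured/background: dicts of metal -> concentration (same units)."""
    er = {m: TR[m] * measured[m] / background[m] for m in measured if m in TR}
    return er, sum(er.values())

# Hypothetical sediment sample (mg/kg) against hypothetical background values.
sample = {"Hg": 5.79, "Cd": 0.6, "Pb": 45.0, "Zn": 120.0}
background = {"Hg": 0.11, "Cd": 0.2, "Pb": 26.0, "Zn": 74.0}
er, ri = potential_ecological_risk(sample, background)
print(er, "RI =", round(ri, 1))
```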

Keywords: heavy metal, risk assessment, sediment, suspended matter, Wanshan mercury mine, small tributary system

Procedia PDF Downloads 130
567 Response Surface Methodology Approach to Defining Ultrafiltration of Steepwater from Corn Starch Industry

Authors: Zita I. Šereš, Ljubica P. Dokić, Dragana M. Šoronja Simović, Cecilia Hodur, Zsuzsanna Laszlo, Ivana Nikolić, Nikola Maravić

Abstract:

In this work, the concentration of steep-water from the corn starch industry is monitored during ultrafiltration. The aim was to examine the conditions of steep-water ultrafiltration using a 2.5 nm membrane. The parameters varied during ultrafiltration were the transmembrane pressure and the flow rate, while the permeate flux and the dry matter content of the permeate and retentate were the dependent parameters constantly monitored during the process. The ultrafiltration experiments were conducted on samples of steep-water obtained from the starch wet-milling plant Jabuka, Pancevo. Ultrafiltration was carried out on a single-channel membrane of 250 mm length, 6.8 mm inner diameter, and 10 mm outer diameter. The membrane is made of α-Al2O3 with a TiO2 layer and was obtained from GEA (Germany). The experiments were carried out at flow rates ranging from 100 to 200 L h⁻¹ and transmembrane pressures of 1-3 bar. During the steep-water ultrafiltration experiments, the change in permeate flux, the dry matter content of the permeate and retentate, as well as the absorbance changes of the permeate and retentate, were monitored. The experimental results showed that the maximum flux reaches about 40 L m⁻² h⁻¹. For the responses obtained from the experiments, a second-degree polynomial model was established to evaluate and quantify the influence of the variables. The quadratic equation fits the experimental values well, with a coefficient of determination of 0.96 for the flux. The dry matter content of the retentate increased by about 6%, while that of the permeate was reduced by about 35-40%. During steep-water ultrafiltration, the permeate thus contains about 40% less dry matter than the feed.
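
A minimal sketch of fitting such a second-degree (quadratic) response-surface model by least squares; the data points and variable names are illustrative, not the study's measurements.

```python
# Sketch of a quadratic response-surface fit:
# flux = b0 + b1*p + b2*q + b3*p^2 + b4*q^2 + b5*p*q,
# where p is transmembrane pressure (bar) and q is flow rate (L/h).
import numpy as np

p = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 2.0])      # pressure, bar (illustrative)
q = np.array([100, 200, 100, 200, 100, 200, 150.0])    # flow rate, L/h (illustrative)
flux = np.array([18, 24, 25, 33, 29, 40, 31.0])        # response, L m^-2 h^-1

X = np.column_stack([np.ones_like(p), p, q, p**2, q**2, p*q])
coef, *_ = np.linalg.lstsq(X, flux, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((flux - pred)**2) / np.sum((flux - flux.mean())**2)
print("coefficients:", np.round(coef, 4))
print("R^2:", round(r2, 3))
```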

Keywords: ultrafiltration, steep-water, starch industry, ceramic membrane

Procedia PDF Downloads 284
566 Consumers of Counterfeit Goods and the Role of Context: A Behavioral Perspective of the Process

Authors: Carla S. C. da Silva, Cristiano Coelho, Junio Souza

Abstract:

The universe of luxury has charmed and seduced consumers for centuries. Since the Middle Ages, its symbols have been displayed as objects of power and status, arousing desire and provoking social covetousness. As a result, the counterfeit market grows every day, offering a group of consumers the opportunity to enter a distinct social position, where the beautiful and shiny brand logo signals an inclusion passport to everything this group wants. This work sought to investigate how the context and the social environment can influence consumers to choose products of symbolic brands even if they are not legitimate, and how this behavior is accepted in society. The study proposed: a) to evaluate measures of knowledge and quality for a set of brands presented under the manipulation of two contexts (luxury vs. academic) between buyers and non-buyers of forgeries, both for original products and for their counterfeit counterparts; b) to measure the effect of layout on the verbal responses of buyers and non-buyers in relation to their assessment of the behavior of buyers of counterfeits. The present study, in addition to measuring the level of knowledge and quality attributed to each brand investigated, also verified the willingness of consumers to pay for a counterfeit good of their preferred brands compared to the original product. These data can serve as a parameter for luxury brand managers in their counterfeit coping strategies. The investigation into the frequency of purchase showed that those who buy counterfeit goods do so regularly, and there is a propensity to repeat the purchase. It was noted that a significant majority of buyers of counterfeits are prone to invest in illegality to meet their expectations of being in line with the standards of their interest groups.

Keywords: luxury, consumers, counterfeits, context, behaviorism

Procedia PDF Downloads 301
565 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant

Authors: Michael Smalenberger

Abstract:

Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or when used as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate parameters necessary for the implementation of the artificial intelligence component, and for the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and probability of mastery, none directly and reliably ask the user to self-assess these. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage while transaction level data are scant. Once a user’s transaction level data become available after sufficient ITS usage, these can replace the self-assessments in order to eliminate the identifiability problem in BKT. We discuss how these findings are relevant to the number of exercises necessary to lead to mastery of a knowledge component, the associated implications on learning curves, and its relevance to instruction time.
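
For context, a minimal sketch of the standard BKT update that uses the guess, slip, and learn parameters discussed above; the parameter values and the idea of seeding the prior from a self-assessment are illustrative assumptions, not the authors' implementation.

```python
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """One step of the standard Bayesian Knowledge Tracing update.

    p_know : prior probability that the skill is mastered before this observation.
    correct: whether the student answered the current step correctly.
    Returns the posterior mastery probability after accounting for learning.
    """
    if correct:
        # A correct answer can come from mastery (no slip) or from a lucky guess.
        num = p_know * (1.0 - p_slip)
        den = num + (1.0 - p_know) * p_guess
    else:
        # An incorrect answer can come from a slip or from non-mastery.
        num = p_know * p_slip
        den = num + (1.0 - p_know) * (1.0 - p_guess)
    p_given_obs = num / den
    # Learning transition: the student may acquire the skill after this opportunity.
    return p_given_obs + (1.0 - p_given_obs) * p_learn

# Example: trace mastery over a short sequence of observed responses.
p = 0.3  # could be seeded from a student's self-assessment while data are scant
for obs in [True, False, True, True]:
    p = bkt_update(p, obs)
    print(round(p, 3))
```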

Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation

Procedia PDF Downloads 174
564 Object-Based Image Analysis for Gully-Affected Area Detection in the Hilly Loess Plateau Region of China Using Unmanned Aerial Vehicle

Authors: Hu Ding, Kai Liu, Guoan Tang

Abstract:

The Chinese Loess Plateau suffers from serious gully erosion induced by natural and human causes. Gully feature detection, including the gully-affected area and its two-dimensional parameters (length, width, area, etc.), is a significant task not only for researchers but also for policy-makers. This study aims at gully-affected area detection in three catchments of the Chinese Loess Plateau, selected in Changwu, Ansai, and Suide, by using an unmanned aerial vehicle (UAV). The methodology includes a sequence of UAV data generation, image segmentation, feature calculation and selection, and random forest classification. Two experiments were conducted to investigate the influences of segmentation strategy and feature selection. Results showed that vertical and horizontal root-mean-square errors were below 0.5 and 0.2 m, respectively, which is ideal for the Loess Plateau region. The segmentation strategy adopted in this paper, which considers topographic information, together with the optimal parameter combination, improves the segmentation results. Besides, the overall extraction accuracies achieved in Changwu, Ansai, and Suide were 84.62%, 86.46%, and 93.06%, respectively, which indicates that the proposed method for detecting the gully-affected area is more objective and effective than traditional methods. This study demonstrated that UAV can bridge the gap between field measurement and satellite-based remote sensing, obtaining a balance between resolution and efficiency for catchment-scale gully erosion research.
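
A minimal sketch of the random forest classification stage operating on per-segment features; the feature set and the data are placeholders, not the authors' pipeline.

```python
# Sketch of classifying image segments as gully-affected or not with a random
# forest; per-segment features (spectral means, texture, slope, etc.) are assumed
# to have been extracted already. Data here are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))          # per-segment features (placeholder values)
y = rng.integers(0, 2, size=500)       # 1 = gully-affected segment, 0 = other

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("feature importances:", clf.feature_importances_)
```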

Keywords: unmanned aerial vehicle (UAV), object-based image analysis, gully erosion, gully-affected area, Loess Plateau, random forest

Procedia PDF Downloads 218
563 Study of the Effect of the Contra-Rotating Component on the Performance of the Centrifugal Compressor

Authors: Van Thang Nguyen, Amelie Danlos, Richard Paridaens, Farid Bakir

Abstract:

This article presents a study of the effect of a contra-rotating component on the efficiency of centrifugal compressors. A contra-rotating centrifugal compressor (CRCC) is constructed using two independent rotors, rotating in opposite directions and replacing the single rotor of a conventional centrifugal compressor (REF). To respect the geometrical parameters of the REF, the two rotors of the CRCC are designed, based on the single-rotor geometry, using the hub and shroud length ratio parameter of the meridional contour. Firstly, the first rotor is designed by choosing a value of the length ratio. Then, the second rotor is calculated to be adapted to the fluid flow of the first rotor according to aerodynamic principles. In this study, four values of the length ratio (0.3, 0.4, 0.5, and 0.6) are used to create four configurations, CF1, CF2, CF3, and CF4, respectively. For comparison purposes, the circumferential velocity at the outlet of the REF and the CRCC is preserved, which means that the single rotor of the REF and the second rotor of the CRCC rotate at the same speed of 16,000 rpm. The speed of the first rotor in this case is chosen to be equal to the speed of the second rotor. CFD simulations are conducted to compare the performance of the CRCC and the REF under the same boundary conditions. The results show that the configurations with higher length ratios give a higher pressure rise; however, their efficiency is lower. An investigation over the entire operating range shows that CF1 is the best configuration in this case. In addition, the CRCC can improve the pressure rise as well as the efficiency by changing the speed of each rotor independently. The results of changing the first rotor speed show that, with a 130% speed increase, the pressure ratio rises by 8.7% while the efficiency remains stable at the flow rate of the design operating point.

Keywords: centrifugal compressor, contra-rotating, interaction rotor, vacuum

Procedia PDF Downloads 135
562 Parameter Optimization and Thermal Simulation in Laser Joining of Coach Peel Panels of Dissimilar Materials

Authors: Masoud Mohammadpour, Blair Carlson, Radovan Kovacevic

Abstract:

The quality of laser welded-brazed (LWB) joints was strongly dependent on the main process parameters; therefore, the effect of laser power (3.2–4 kW), welding speed (60–80 mm/s), and wire feed rate (70–90 mm/s) on mechanical strength and surface roughness was investigated in this study. A comprehensive optimization by means of response surface methodology (RSM) and a desirability function was used for multi-criteria optimization. The experiments were planned based on a Box–Behnken design, implementing linear and quadratic polynomial equations for predicting the desired output properties. Finally, validation experiments were conducted at the optimized process condition and exhibited good agreement between the predicted and experimental results. AlSi3Mn1 was selected as the filler material for joining aluminum alloy 6022 and hot-dip galvanized steel in a coach peel configuration. The high scanning speed could keep the intermetallic compound (IMC) layer as thin as 5 µm. Thermal simulations of the joining process were conducted by the Finite Element Method (FEM), and the results were validated against experimental data. The Fe/Al interfacial thermal history evidenced that the duration of the critical temperature range (700–900 °C) in this high-scanning-speed process was less than 1 s. This short interaction time leads to the formation of a reaction-controlled IMC layer instead of a diffusion-controlled growth mechanism.
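
A minimal sketch of the desirability-function step used for multi-response optimization; the bounds, targets, and response values are illustrative, not those of the study.

```python
# Sketch of the Derringer-Suich desirability approach combined with RSM for
# multi-response optimization. All numeric bounds/targets below are illustrative.
import numpy as np

def d_maximize(y, low, target, weight=1.0):
    """Desirability for a response to be maximized (e.g., joint strength)."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** weight

def d_minimize(y, target, high, weight=1.0):
    """Desirability for a response to be minimized (e.g., surface roughness)."""
    if y <= target:
        return 1.0
    if y >= high:
        return 0.0
    return ((high - y) / (high - target)) ** weight

def overall_desirability(ds):
    """Composite desirability: geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, dtype=float)
    return float(np.prod(ds) ** (1.0 / len(ds)))

# Hypothetical predicted responses at one candidate setting of (power, speed, feed rate).
strength, roughness = 185.0, 12.0
D = overall_desirability([d_maximize(strength, 100, 220), d_minimize(roughness, 5, 30)])
print(round(D, 3))  # the setting maximizing D over the design space is selected
```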

Keywords: laser welding-brazing, finite element, response surface methodology (RSM), multi-response optimization, cross-beam laser

Procedia PDF Downloads 352
561 Effect of 3-Dimensional Knitted Spacer Fabrics Characteristics on Its Thermal and Compression Properties

Authors: Veerakumar Arumugam, Rajesh Mishra, Jiri Militky, Jana Salacova

Abstract:

The thermo-physiological comfort and compression properties of knitted spacer fabrics have been evaluated by varying different spacer fabric parameters. Air permeability and water vapor transmission of the fabrics were measured using the Textest FX-3300 air permeability tester and PERMETEST. The thermal behavior of the fabrics was then obtained with a thermal conductivity analyzer, and the overall moisture management capacity was evaluated with a moisture management tester. The compression properties of the spacer fabrics were also tested using the Kawabata Evaluation System (KES-FB3). In the KES testing, the compression resilience, work of compression, linearity of compression, and other parameters were calculated from the pressure-thickness curves. Analysis of Variance (ANOVA) was performed using the statistical software QC Expert Trilobite and Darwin in order to compare the influence of different fabric parameters on the thermo-physiological and compression behavior of the samples. This study established that the raw material, type of spacer yarn, density, thickness, and tightness of the surface layer have a significant influence on both the thermal conductivity and the work of compression in spacer fabrics. The parameter that mainly influences the water vapor permeability of these fabrics is the raw material, i.e., the wetting and wicking properties of the fibers. The Pearson correlation between the moisture capacity of the fabrics and the water vapour permeability was found using the same software. These findings are important requirements for the further design of clothing for extreme environmental conditions.

Keywords: 3D spacer fabrics, thermal conductivity, moisture management, work of compression (WC), resilience of compression (RC)

Procedia PDF Downloads 544
560 Beam Spatio-Temporal Multiplexing Approach for Improving Control Accuracy of High Contrast Pulse

Authors: Ping Li, Bing Feng, Junpu Zhao, Xudong Xie, Dangpeng Xu, Kuixing Zheng, Qihua Zhu, Xiaofeng Wei

Abstract:

In laser-driven inertial confinement fusion (ICF), the control of the temporal shape of the laser pulse is a key point to ensure an optimal laser-target interaction. One of the main difficulties in controlling the temporal shape is the control accuracy of the foot part of a high-contrast pulse. Based on an analysis of pulse perturbation in the process of amplification and frequency conversion in high-power lasers, an approach of beam spatio-temporal multiplexing is proposed to improve the control precision of high-contrast pulses. In this approach, the foot and peak parts of the high-contrast pulse are controlled independently; they propagate separately in the near field and combine in the far field to form the required pulse shape. For a high-contrast pulse, the beam area ratio of the two parts is optimized so that the beam fluence and intensity of the foot part are increased, which brings great convenience to the control of the pulse. Meanwhile, the near-field distribution of the two parts is also carefully designed to make sure their F-numbers are the same, which is another important parameter for laser-target interaction. The integrated calculation results show that, for a pulse with a contrast of up to 500, the deviation of the foot part can be improved from 20% to 5% by using the beam spatio-temporal multiplexing approach with a beam area ratio of 1/20, which is almost the same as that of the peak part. The research results are expected to bring a breakthrough in the power balance of high-power laser facilities.

Keywords: inertial confinement fusion, laser pulse control, beam spatio-temporal multiplexing, power balance

Procedia PDF Downloads 148
559 Integrating Knowledge Distillation of Multiple Strategies

Authors: Min Jindong, Wang Mingxia

Abstract:

With the widespread use of artificial intelligence in everyday life, computer vision, and especially deep convolutional neural network models, has developed rapidly. As the complexity of real visual target detection tasks and the required recognition accuracy increase, target detection network models have also become very large. Such huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, and the knowledge contained in the complex network is comprehensively transferred to another, lightweight network. Different from traditional knowledge distillation methods, we propose a novel knowledge distillation that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for target detection, the knowledge of the soft target output of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the hidden layers of the teacher network are all transferred to the student network. At the same time, we also introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to make up for the huge difference between the two. Finally, this paper adds an exploration module to the traditional knowledge distillation teacher-student network model, so that the student network not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics. Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network achieves substantial improvements in speed and accuracy.
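
For reference, a minimal sketch of the soft-target component of knowledge distillation (temperature-scaled teacher outputs combined with hard-label cross-entropy); the layer-relation and attention-map terms described above are not shown, and the temperature and weighting values are illustrative.

```python
# Sketch of the Hinton-style soft-target distillation loss; only the soft-target
# term of the multi-faceted transfer described above is shown here.
import torch
import torch.nn.functional as F

def soft_target_kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of hard-label cross-entropy and a temperature-scaled KL term."""
    # Hard-label supervision from the ground truth.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label supervision from the teacher's tempered output distribution.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # scaling restores gradient magnitude after temperature softening
    return alpha * kl + (1.0 - alpha) * ce

# Example with random tensors standing in for a detector's classification head.
student = torch.randn(8, 20)
teacher = torch.randn(8, 20)
labels = torch.randint(0, 20, (8,))
print(soft_target_kd_loss(student, teacher, labels).item())
```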

Keywords: object detection, knowledge distillation, convolutional network, model compression

Procedia PDF Downloads 278
558 Effect of Food Supplies Holstein Calves Supplemented with Bacillus Subtilis PB6 in Morbidity and Mortality

Authors: Banca Patricia Pena Revuelta, Ramiro Gonzalez Avalos, Juan Leonardo Rocha Valdez, Jose Gonzalez Avalos, Karla Rodriguez Hernandez

Abstract:

Probiotics are a promising alternative to improve productivity and animal health. In addition, they can be part of the composition of different types of products, including foods (functional foods), medicines, and dietary supplements. The objective of the present work was to evaluate the effect of feeding Holstein calves supplemented with Bacillus subtilis PB6 on morbidity and mortality. Sixty newborn animals were used, randomly assigned to one of three treatments. The treatments were as follows: T1 = control; T2 = 10 g/calf/day, with the first dose within 20 min after birth; T3 = 10 g/calf/day, with the first dose between 12 and 24 h after birth. In all treatments, 432 L of pasteurized whole milk, divided into two feedings per day (07:00 and 15:00), were given over 60 days. Bacillus subtilis PB6 was added to the milk tub at feeding time. The first colostrum intake (2 L per intake) was given within 2 h after birth, followed by a second intake 6 h after the first. The diseases registered to monitor the morbidity and mortality of the calves were diarrhea and pneumonia. Records were kept from birth to 60 days of life. The parameter evaluated was feed consumption. The statistical analysis was performed using analysis of variance, and the comparison of means was performed using the Tukey test. A value of P < 0.05 was used to establish statistical difference. The results of the present study in relation to feed consumption show no statistically significant difference between treatments (14.762, 11.698, and 12.403 kg of feed on average, respectively). The group of calves not provided with Bacillus subtilis PB6 showed higher feed intake. The addition of Bacillus subtilis PB6 to calf feeding does not increase feed intake.

Keywords: feeding, development, milk, probiotic

Procedia PDF Downloads 149
557 Determination of Gross Alpha and Gross Beta Activity in Water Samples by iSolo Alpha/Beta Counting System

Authors: Thiwanka Weerakkody, Lakmali Handagiripathira, Poshitha Dabare, Thisari Guruge

Abstract:

The determination of gross alpha and gross beta activity in water is important in a wide array of environmental studies, and these parameters are considered in international legislation on water quality. The technique is commonly applied as a screening method in radioecology, environmental monitoring, industrial applications, etc. Measuring gross alpha and beta emitters using the iSolo alpha/beta counting system is an adequate nuclear technique to assess radioactivity levels in natural and waste water samples due to its simplicity and low cost compared with other methods. Twelve water samples (six samples of commercially available bottled drinking water and six samples of industrial waste water) were measured by the standard method EPA 900.0 using the gas-less, firmware-based, single-sample, manual iSolo alpha/beta counter (Model: SOLO300G) with a solid-state silicon PIPS detector. Am-241 and Sr-90/Y-90 calibration standards were used to calibrate the detector. The minimum detectable activities are 2.32 mBq/L and 406 mBq/L for alpha and beta activity, respectively. Each 2 L water sample was evaporated (at low heat) to a small volume, transferred evenly (for homogenization) into a 50 mm stainless steel counting planchet, and heated by an IR lamp until a constant-weight residue was obtained. The samples were then counted for gross alpha and beta. The sample deposit density on the planchet area was maintained below 5 mg/cm². Large quantities of solid waste, sludge, and waste water are generated every year by various industries. This water can be reused for different applications. Therefore, implementing water treatment plants and measuring water quality parameters of industrial waste water discharges before release into the environment is very important. This waste may contain different types of pollutants, including radioactive substances. All the measured waste water samples had gross alpha and beta activities lower than the maximum tolerance limits for the discharge of industrial waste into inland surface waters, that is, 10⁻⁹ µCi/mL and 10⁻⁸ µCi/mL for gross alpha and beta, respectively (National Environmental Act, No. 47 of 1980), according to the Extraordinary Gazette of the Democratic Socialist Republic of Sri Lanka of February 2008. The measured water samples were below the recommended radioactivity levels and do not pose any radiological hazard when released into the environment. Drinking water is an essential requirement of life. All the drinking water samples were below the permissible levels of 0.5 Bq/L for gross alpha activity and 1 Bq/L for gross beta activity, values proposed by the World Health Organization in 2011; therefore, the water is acceptable for human consumption without any further investigation of its radioactivity. As these screening levels are very low, the individual dose criterion (IDC) would usually not be exceeded (0.1 mSv y⁻¹). The IDC is a criterion for evaluating health risks from long-term exposure to radionuclides in drinking water; the recommended level of 0.1 mSv/y represents a very low level of health risk. This monitoring work will be continued for environmental protection purposes.

Keywords: drinking water, gross alpha, gross beta, waste water

Procedia PDF Downloads 198
556 Estimation Model for Concrete Slump Recovery by Using Superplasticizer

Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert

Abstract:

This paper aims to introduce to practice a solution for concrete slump recovery using a type-F chemical admixture (naphthalene-based superplasticizer), in order to solve the problem of unusable concrete caused by slump loss, especially in tropical countries with faster slump loss rates. On the other hand, randomly adding superplasticizer into concrete can cause the concrete to segregate. Therefore, this paper also develops an estimation model used to calculate the amount of the second dose of superplasticizer needed for concrete slump recovery. The fresh properties of ordinary Portland cement concrete with a volumetric ratio of paste to void between aggregates (paste content) of 1.1-1.3, a water-cement ratio range of 0.30 to 0.67, and an initial superplasticizer (naphthalene-based) dosage of 0.25%-1.6% were tested for initial slump and for slump loss every 30 minutes over one and a half hours by the slump cone test. Concretes with slump losses ranging from 10% to 90% were re-dosed and successfully recovered to their initial slump. The slump after re-dosing was tested by the slump cone test. From the results, it was concluded that slump loss was slower for mixes with a high initial dose of superplasticizer, because the addition of superplasticizer disturbs cement hydration. The required second dose of superplasticizer was affected by two major parameters, the water-cement ratio and the paste content, where a lower water-cement ratio and lower paste content cause an increase in the required second dose. The amount of the second dose of superplasticizer is higher as the solid content within the system increases, whether the solids come from cement particles or aggregate. The data were analyzed to form an equation used to estimate the amount of the second dose of superplasticizer required to recover the slump to its original value.

Keywords: estimation model, second superplasticizer dosage, slump loss, slump recovery

Procedia PDF Downloads 199
555 Differential Effects of Parity, Stress and Fluoxetine Treatment on Locomotor Activity and Swimming Behavior in Rats

Authors: Nur Hidayah Kaz Abdul Aziz, Norhalida Hashim, Zurina Hassan

Abstract:

The peripartum period is a time when women are vulnerable to depression, and stress may further increase the risk of its occurrence. The use of selective serotonin reuptake inhibitors (SSRIs) in the treatment of postpartum depression is common practice. Antidepressant treatment, however, is rarely compared between gestating and nulliparous animals exposed to stress. This study aimed to investigate the effects of parity and stress, as well as fluoxetine (an SSRI) treatment after stress exposure, on the behavior of rats. Gestating and nulliparous Sprague Dawley rats were either subjected to chronic stressors or left undisturbed throughout the gestation period. After parturition, all stressors were stopped, and some of the stressed rats were treated with fluoxetine (10 mg/kg). Hence, the final groups formed were: 1. Non-stressed nulliparous rats, 2. Non-stressed dams, 3. Stressed nulliparous rats, 4. Stressed dams, 5. Fluoxetine-treated stressed nulliparous rats, and 6. Fluoxetine-treated stressed dams. Rats were tested in the open field test (OFT), the novel object recognition test (NOR), and the forced swim test (FST) after weaning of the pups. Gestational stress significantly reduced the locomotor activity of rats in the OFT (p<0.05), while fluoxetine significantly increased the activity in nulliparous rats (p<0.001) but not in the dams. While no differences were observed in the NOR, stress and parity inhibited the rats from performing swimming behavior in the FST. However, climbing and immobility behaviors in the FST showed no significant differences, although there was a tendency toward a treatment effect for the immobility parameter (p=0.06), with fluoxetine-treated stressed dams being the least immobile. In conclusion, the effects of parity and stress, as well as fluoxetine treatment, depended on the type of behavioral test performed.

Keywords: stress, parity, SSRI, behavioral tests

Procedia PDF Downloads 174
554 Numerical and Sensitivity Analysis of Modeling the Newcastle Disease Dynamics

Authors: Nurudeen Oluwasola Lasisi

Abstract:

Newcastle disease is a highly contagious disease of birds caused by a paramyxovirus. In this paper, we present Newcastle disease model equations with a novel quarantine-adjusted incidence and with linear incidence. We considered the dynamics of transmission and control of Newcastle disease. The existence and uniqueness of the solutions were obtained. The existence of the disease-free point was shown, and the model threshold parameter was examined using the next-generation operator method. A sensitivity analysis was carried out in order to identify the most sensitive parameters of disease transmission. This revealed that as the parameters β, ω, and Λ increase while keeping other parameters constant, the effective reproduction number R_ev increases; this implies that these parameters increase the endemicity of the infection. Moreover, when the parameters μ, ε, γ, δ₁, and α increase while keeping other parameters constant, the effective reproduction number R_ev decreases; this implies that these parameters decrease the endemicity of the infection, as they have negative sensitivity indices. The analytical results were numerically verified by the Differential Transformation Method (DTM), and quantitative views of the model equations were showcased. We established that as the contact rate (β) increases, the effective reproduction number R_ev increases; as the effectiveness of drug usage increases, R_ev decreases; and as the number of quarantined individuals decreases, R_ev decreases. The results of the simulations showed that the number of infected individuals increases as the number of susceptible individuals approaches zero, and that the number of vaccinated individuals increases as the number of infected individuals decreases, with a simultaneous increase in recovered individuals.
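
For reference, sensitivity analyses of this kind are typically reported through the normalized forward sensitivity index; a generic statement (the specific expression of R_ev is not given in the abstract) is:

```latex
% Normalized forward sensitivity index of R_ev with respect to a parameter p.
% A positive index means R_ev grows with p; a negative index means it shrinks.
\Upsilon^{R_{ev}}_{p} \;=\; \frac{\partial R_{ev}}{\partial p}\cdot\frac{p}{R_{ev}}
```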

Keywords: disease-free equilibrium, effective reproduction number, endemicity, Newcastle disease model, numerical analysis, sensitivity analysis

Procedia PDF Downloads 45
553 Buffer Allocation and Traffic Shaping Policies Implemented in Routers Based on a New Adaptive Intelligent Multi Agent Approach

Authors: M. Taheri Tehrani, H. Ajorloo

Abstract:

In this paper, an intelligent multi-agent framework is developed for each router, in which agents have two vital functionalities, traffic shaping and buffer allocation, and are positioned at the ports of the routers. With the traffic shaping functionality, agents shape the forwarded traffic by dynamic, real-time allocation of the token generation rate in a token bucket algorithm; with the buffer allocation functionality, agents share their buffer capacity with each other based on their needs and the conditions of the network. This dynamic and intelligent framework gives some ports the opportunity to work better under bursty and busier conditions. These agents work intelligently based on a Reinforcement Learning (RL) algorithm and consider the relevant parameters in their decision process. As RL is limited in how many parameters it can consider in its decision process due to the volume of calculations, we utilize our novel method, which applies Principal Component Analysis (PCA) to the RL inputs and gives the algorithm the ability to consider as many parameters as needed in its decision process. When this implementation is compared to our previous work, where traffic shaping was done without any sharing or dynamic allocation of buffer size for each port, lower packet drop can be seen in the whole network, specifically in the source routers. These methods are implemented in our previously proposed intelligent simulation environment to better compare the performance metrics. The results obtained from this simulation environment show an efficient and dynamic utilization of resources in terms of bandwidth and the buffer capacities pre-allocated to each port.
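
A minimal sketch of the token bucket shaper whose token generation rate such an agent would tune per port; the rate and capacity values are illustrative only.

```python
# Sketch of a token bucket shaper; an RL agent would adjust `rate` per port
# in real time. Numeric values are illustrative.
import time

class TokenBucket:
    def __init__(self, rate_tokens_per_s, capacity):
        self.rate = rate_tokens_per_s   # the knob the agent tunes dynamically
        self.capacity = capacity        # burst size in tokens
        self.tokens = capacity
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def allow(self, packet_cost=1.0):
        """Forward the packet if enough tokens are available; otherwise hand it
        to the port's buffer-allocation logic for queueing or dropping."""
        self._refill()
        if self.tokens >= packet_cost:
            self.tokens -= packet_cost
            return True
        return False

bucket = TokenBucket(rate_tokens_per_s=1000, capacity=200)
print(sum(bucket.allow() for _ in range(500)))  # roughly the burst capacity passes instantly
```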

Keywords: principal component analysis, reinforcement learning, buffer allocation, multi- agent systems

Procedia PDF Downloads 519
552 Apparent Temperature Distribution on Scaffoldings during Construction Works

Authors: I. Szer, J. Szer, K. Czarnocki, E. Błazik-Borowa

Abstract:

People on construction scaffoldings work in a dynamically changing, often unfavourable climate. Additionally, this kind of work is performed on low-stiffness structures at considerable height, which increases the risk of accidents. It is therefore desirable to define the parameters of the work environment that contribute to increasing the occupational safety level of construction workers. The aim of this article is to present how changes in microclimate parameters on scaffolding can impact the development of dangerous situations and accidents. For this purpose, indicators based on the human thermal balance were used. However, using this model under construction conditions is often burdened with significant errors or is even impossible due to the lack of precise data. Thus, a modified parameter, the apparent environmental temperature, was used in the target model. In the proposed Scaffold Use Risk Assessment Model, the apparent temperature is the perceived outdoor temperature caused by the combined effects of air temperature, radiative temperature, relative humidity, and wind speed (wind chill index, heat index). In the paper, correlations between the component factors and the apparent temperature are presented for a facade scaffolding with a width of 24.5 m and a height of 42.3 m, located on the south-west side of a building. The distribution of factors on the scaffolding has been used to evaluate the fit of the microclimate model. The results of the studies indicate that the observed ranges of apparent temperature on the scaffolds frequently result in a worker's inability to adapt. This leads to reduced concentration and increased fatigue, adversely affects health, and consequently increases the risk of dangerous situations and accidental injuries.
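
As an illustration of one component of such an apparent-temperature index, the sketch below uses the widely adopted JAG/TI wind chill formula; this is not necessarily the exact formulation used in the authors' model.

```python
# Sketch of the wind chill component of an apparent temperature, using the
# JAG/TI (Environment Canada / US NWS) formula; illustrative only.
def wind_chill_celsius(air_temp_c, wind_speed_kmh):
    """Valid roughly for T <= 10 degC and wind >= 4.8 km/h; otherwise return T."""
    if air_temp_c > 10.0 or wind_speed_kmh < 4.8:
        return air_temp_c
    v = wind_speed_kmh ** 0.16
    return 13.12 + 0.6215 * air_temp_c - 11.37 * v + 0.3965 * air_temp_c * v

# Example: 2 degC air temperature with a 30 km/h wind feels close to -4 degC.
print(round(wind_chill_celsius(2.0, 30.0), 1))
```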

Keywords: apparent temperature, health, safety work, scaffoldings

Procedia PDF Downloads 183
551 Fatigue Life Prediction under Variable Loading Based a Non-Linear Energy Model

Authors: Aid Abdelkrim

Abstract:

A method of fatigue damage accumulation based upon the application of energy parameters of the fatigue process is proposed in the paper. The model is simple to use: it has no parameter to be determined and requires only knowledge of the W–N curve (W: strain energy density; N: number of cycles at failure) determined from the experimental Wöhler curve. To examine the performance of the proposed nonlinear model in estimating the fatigue damage and fatigue life of components under random loading, a batch of specimens made of 6082-T6 aluminium alloy has been studied, and some of the results are reported in the present paper. The paper describes an algorithm and suggests a fatigue cumulative damage model, especially for the case of random loading. This work contains the results of uniaxial random-load fatigue tests with different mean and amplitude values performed on 6082-T6 aluminium alloy specimens. The proposed model has been formulated to take into account the damage evolution at different load levels, and it allows the effect of the loading sequence to be included by means of a recurrence formula derived for multilevel loading, considering complex load sequences. It is concluded that the 'damaged stress interaction damage rule' proposed here allows a better fatigue damage prediction than the widely used Palmgren–Miner rule, and that the formula derived for random fatigue can be used to predict the fatigue damage and fatigue lifetime very easily. The results obtained by the model are compared with the experimental results and with those calculated by the most widely used fatigue damage model (Miner's rule). The comparison shows that the proposed model presents a good estimation of the experimental results, and the error is reduced in comparison to Miner's model.
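
For reference, a minimal sketch of the linear Palmgren–Miner rule used as the baseline; the proposed energy model instead works from the W–N curve and includes load-sequence interaction, which this linear sum does not capture. The block-loading spectrum below is illustrative.

```python
# Sketch of the linear Palmgren-Miner damage sum used as the comparison baseline.
def miner_damage(blocks):
    """blocks: list of (applied_cycles, cycles_to_failure_at_that_level).
    Returns the cumulative damage sum; failure is predicted when it reaches 1."""
    return sum(n_i / N_i for n_i, N_i in blocks)

# Hypothetical two-level loading: 1e4 cycles at a level with N = 5e4,
# then 2e4 cycles at a level with N = 2e5.
D = miner_damage([(1e4, 5e4), (2e4, 2e5)])
print(D, "-> remaining damage margin:", 1 - D)
```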

Keywords: damage accumulation, energy model, damage indicator, variable loading, random loading

Procedia PDF Downloads 396
550 The Effect of Grading Characteristics on the Shear Strength and Mechanical Behavior of Granular Classes of Sands

Authors: Salah Brahim Belakhdar, Tari Mohammed Amin, Rafai Abderrahmen, Amalsi Bilal

Abstract:

The shear strength of sandy soils has been considered an important parameter for studying the stability of different civil engineering structures subjected to monotonic, cyclic, and earthquake loading conditions. The proposed research investigated the effect of grading characteristics on the shear strength and mechanical behaviour of granular classes of sands mixed with silt in loose and dense states (Dr = 15% and 90%). The laboratory investigation aimed at understanding the extent to which the shear strength of sand-silt mixtures is affected by gradation under static loading conditions. For the purpose of clarifying and evaluating the shear strength characteristics of sandy soils, a series of Casagrande shear box tests was carried out on different reconstituted samples of sand-silt mixtures with various gradations. The soil samples were tested under different normal stresses (100, 200, and 300 kPa). The results from this laboratory investigation were used to develop insight into the shear strength response of sand and sand-silt mixtures under monotonic loading conditions. The analysis of the obtained data revealed that the grading characteristics (D10, D50, Cu, ESR, and MGSR) have a significant influence on the shear strength response. It was found that the shear strength can be correlated to the grading characteristics of the sand-silt mixture. The effective size ratio (ESR) and mean grain size ratio (MGSR) appear to be pertinent parameters for predicting the shear strength response of the sand-silt mixtures for the gradations under study.
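
A minimal sketch of how shear box results at the three normal stresses are reduced to Mohr-Coulomb parameters (cohesion and friction angle); the peak shear stresses below are hypothetical.

```python
# Sketch of reducing direct (Casagrande) shear box data to Mohr-Coulomb
# parameters via a linear fit of tau = c + sigma_n * tan(phi).
import numpy as np

sigma_n = np.array([100.0, 200.0, 300.0])   # applied normal stresses, kPa
tau_peak = np.array([72.0, 138.0, 205.0])   # hypothetical peak shear stresses, kPa

slope, intercept = np.polyfit(sigma_n, tau_peak, 1)
phi_deg = np.degrees(np.arctan(slope))
print(f"c = {intercept:.1f} kPa, phi = {phi_deg:.1f} deg")
```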

Keywords: mechanical behavior, silty sand, friction angle, cohesion, fines content

Procedia PDF Downloads 374
549 Hydrogen Purity: Developing Low-Level Sulphur Speciation Measurement Capability

Authors: Sam Bartlett, Thomas Bacquart, Arul Murugan, Abigail Morris

Abstract:

Fuel cell electric vehicles provide the potential to decarbonise road transport, create new economic opportunities, diversify national energy supply, and significantly reduce the environmental impacts of road transport. A potential issue, however, is that the catalyst used at the fuel cell cathode is susceptible to degradation by impurities, especially sulphur-containing compounds. A recent European Directive (2014/94/EU) stipulates that, from November 2017, all hydrogen provided to fuel cell vehicles in Europe must comply with the hydrogen purity specifications listed in ISO 14687-2; this includes reactive and toxic chemicals such as ammonia and total sulphur-containing compounds. This requirement poses great analytical challenges due to the instability of some of these compounds in calibration gas standards at relatively low amount fractions and the difficulty associated with undertaking measurements of groups of compounds rather than individual compounds. Without the available reference materials and analytical infrastructure, hydrogen refuelling stations will not be able to demonstrate compliance with the ISO 14687 specifications. The hydrogen purity laboratory at NPL provides world-leading, accredited purity measurements to allow hydrogen refuelling stations to evidence compliance with ISO 14687. Utilising state-of-the-art methods that have been developed by NPL’s hydrogen purity laboratory, including a novel method for measuring total sulphur compounds at 4 nmol/mol and a hydrogen impurity enrichment device, we provide the capabilities necessary to achieve these goals. An overview of these capabilities will be given in this paper. As part of the EMPIR Hydrogen co-normative project ‘Metrology for sustainable hydrogen energy applications’, NPL are developing a validated analytical methodology for the measurement of speciated sulphur-containing compounds in hydrogen at low amount fractions (pmol/mol to nmol/mol) to allow identification and measurement of individual sulphur-containing impurities in real samples of hydrogen (as opposed to a ‘total sulphur’ measurement). This is achieved by producing a suite of stable, gravimetrically prepared primary reference gas standards containing low amount fractions of sulphur-containing compounds (hydrogen sulphide, carbonyl sulphide, carbon disulphide, 2-methyl-2-propanethiol and tetrahydrothiophene have been selected for use in this study) to be used in conjunction with novel dynamic dilution facilities to enable the generation of pmol/mol to nmol/mol level gas mixtures (a dynamic method is required, as compounds at these levels would be unstable in gas cylinder mixtures). Method development and optimisation are performed using gas chromatographic techniques assisted by cryo-trapping technologies and coupled with sulphur chemiluminescence detection to allow improved qualitative and quantitative analyses of sulphur-containing impurities in hydrogen. The paper will review the state-of-the-art gas standard preparation techniques, including the use and testing of dynamic dilution technologies for reactive chemical components in hydrogen. Method development will also be presented, highlighting the advances in the measurement of speciated sulphur compounds in hydrogen at low amount fractions.

Keywords: gas chromatography, hydrogen purity, ISO 14687, sulphur chemiluminescence detector

Procedia PDF Downloads 226
548 Comparison of Various Policies under Different Maintenance Strategies on a Multi-Component System

Authors: Demet Ozgur-Unluakin, Busenur Turkali, Ayse Karacaorenli

Abstract:

Maintenance strategies can be classified into two types, reactive and proactive, with respect to the timing of failure and maintenance. If the maintenance activity is done after a breakdown, it is called reactive maintenance. On the other hand, proactive maintenance, which is further divided into preventive and predictive, focuses on maintaining components before a failure occurs to prevent expensive halts. Recently, the number of interacting components in a system has increased rapidly, and therefore the structure of such systems has become more complex. This situation has made it difficult to provide the right maintenance decisions, so determining effective decisions plays a significant role. In multi-component systems, many methodologies and strategies can be applied when a component or a system has already broken down or when it is desired to proactively identify and avoid defects that could lead to future failure. This study focuses on the comparison of various maintenance strategies on a multi-component dynamic system. The components of the system are hidden, although the decision maker has partial observability, and they deteriorate over time. Several predefined policies under corrective, preventive, and predictive maintenance strategies are considered to minimize the total maintenance cost over a planning horizon. The policies are simulated via Dynamic Bayesian Networks on a multi-component system with different policy parameters and cost scenarios, and their performances are evaluated. Results show that when the difference between the corrective and proactive maintenance costs is low, none of the proactive maintenance policies is significantly better than corrective maintenance. However, when the difference is increased, at least one policy parameter for each proactive maintenance strategy gives a significantly lower cost than corrective maintenance.

Keywords: decision making, dynamic Bayesian networks, maintenance, multi-component systems, reliability

Procedia PDF Downloads 131
547 Finding a Set of Long Common Substrings with Repeats from m Input Strings

Authors: Tiantian Li, Lusheng Wang, Zhaohui Zhan, Daming Zhu

Abstract:

In this paper, we propose two string problems and study algorithms and the complexity of various versions of those problems. Let S = {s₁, s₂, . . . , sₘ} be a set of m strings. A common substring of S is a substring appearing in every string in S. Given a set of m strings S = {s₁, s₂, . . . , sₘ} and a positive integer k, we want to find a set C of k common substrings of S such that the k common substrings in C appear in the same order and have no overlap among the m input strings in S, and the total length of the k common substrings in C is maximized. This problem is referred to as the longest total length of k common substrings from m input strings (LCSS(k, m) for short). The other problem we study here is called the longest total length of a set of common substrings with length more than l from m input strings (LSCSS(l, m) for short). Given a set of m strings S = {s₁, s₂, . . . , sₘ} and a positive integer l, for LSCSS(l, m), we want to find a set of common substrings of S, each of length more than l, such that the total length of all the common substrings is maximized. We show that both problems are NP-hard when k and m are variables. We propose dynamic programming algorithms with time complexity O(k n₁n₂) and O(n₁n₂) to solve LCSS(k, 2) and LSCSS(l, 2), respectively, where n₁ and n₂ are the lengths of the two input strings. We then design an algorithm for LSCSS(l, m) when every length > l common substring appears once in each of the m − 1 input strings. The running time is O(n₁²m), where n₁ is the length of the input string with no restriction on length > l common substrings. Finally, we propose a fixed-parameter algorithm for LSCSS(l, m), where each length > l common substring appears m − 1 + c times among the m − 1 input strings (other than s₁). In other words, each length > l common substring may repeatedly appear at most c times among the m − 1 input strings {s₂, s₃, . . . , sₘ}. The running time of the proposed algorithm is O((n₁2ᶜ)²m), where n₁ is the length of the input string with no restriction on repeats. LSCSS(l, m) is proposed to handle whole-chromosome sequence alignment for different strains of the same species, where more than 98% of letters in core regions are identical.
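
For context, a minimal sketch of the classic O(n₁n₂) dynamic program for the longest common substring of two strings, the basic building block behind the two-string cases above; this is not the authors' full k-substring or length-threshold algorithm.

```python
# Sketch of the classic longest-common-substring DP with a rolling 1-D table.
def longest_common_substring(s1: str, s2: str) -> str:
    n1, n2 = len(s1), len(s2)
    # dp[j] = length of the longest common suffix of s1[:i] and s2[:j]
    dp = [0] * (n2 + 1)
    best_len, best_end = 0, 0
    for i in range(1, n1 + 1):
        prev_diag = 0  # holds dp[j-1] from the previous row
        for j in range(1, n2 + 1):
            cur = dp[j]
            dp[j] = prev_diag + 1 if s1[i - 1] == s2[j - 1] else 0
            if dp[j] > best_len:
                best_len, best_end = dp[j], i
            prev_diag = cur
    return s1[best_end - best_len:best_end]

print(longest_common_substring("ACGTACGTGA", "TTACGTGACC"))  # -> "ACGTGA"
```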

Keywords: dynamic programming, algorithm, common substrings, string

Procedia PDF Downloads 22
546 Effect of Drag Coefficient Models concerning Global Air-Sea Momentum Flux in Broad Wind Range including Extreme Wind Speeds

Authors: Takeshi Takemoto, Naoya Suzuki, Naohisa Takagaki, Satoru Komori, Masako Terui, George Truscott

Abstract:

The drag coefficient is an important parameter for correctly estimating the air-sea momentum flux. However, its parameterization has not been established due to the variation in the field data. Instead, a number of drag coefficient model formulae have been proposed, even though almost all of these models do not address the extreme wind speed range. With regard to such models, it is unclear how the drag coefficient changes in the extreme wind speed range as the wind speed increases. In this study, we investigated the effect of the drag coefficient models on the air-sea momentum flux in the extreme wind range on a global scale by comparing two different drag coefficient models; interestingly, one model does not consider the extreme wind speed range while the other does. We found that the difference between the models in the annual global air-sea momentum flux was small, because the occurrence frequency of strong winds with a wind speed of 20 m/s or more was approximately 1%. However, we also found that the difference between the models appeared at middle latitudes, where the annual mean air-sea momentum flux is large and the occurrence frequency of strong winds is high. In addition, the estimated data showed that the difference between the models in the drag coefficient was large in the extreme wind speed range, and that the largest difference reached 23% at wind speeds of 35 m/s or more. These results clearly show that the difference between the two drag coefficient models has a significant impact on the estimation of the regional air-sea momentum flux in an extreme wind speed range such as that seen in a tropical cyclone environment. Furthermore, we estimated the air-sea momentum flux using several kinds of drag coefficient models. We will also provide data from an observation tower and results from CFD (Computational Fluid Dynamics) concerning the influence of the wind flow at and around the site.
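
A minimal sketch of the bulk formula τ = ρₐ C_D U₁₀² with one widely cited neutral drag parameterization (Large and Pond, 1981); the specific models compared in the study are not named in the abstract, so this is illustrative only.

```python
# Sketch of the bulk wind-stress formula with a commonly used drag-coefficient
# model; illustrative, not the specific pair of models compared in the study.
RHO_AIR = 1.225  # air density, kg m^-3

def cd_large_pond(u10):
    """Neutral 10-m drag coefficient after Large & Pond (1981):
    1.2e-3 for U10 < 11 m/s, (0.49 + 0.065*U10)*1e-3 for U10 >= 11 m/s
    (their fit covers 11-25 m/s; simply extrapolated here beyond that)."""
    return 1.2e-3 if u10 < 11.0 else (0.49 + 0.065 * u10) * 1e-3

def momentum_flux(u10, cd_func=cd_large_pond):
    """Wind stress (N m^-2) from the 10-m wind speed and a drag-coefficient model."""
    return RHO_AIR * cd_func(u10) * u10 ** 2

for u in (10.0, 20.0, 35.0):
    print(u, "m/s ->", round(momentum_flux(u), 2), "N m^-2")
```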

Keywords: air-sea interaction, drag coefficient, air-sea momentum flux, CFD (Computational Fluid Dynamics)

Procedia PDF Downloads 372
545 Impact of Unusual Dust Event on Regional Climate in India

Authors: Kanika Taneja, V. K. Soni, Kafeel Ahmad, Shamshad Ahmad

Abstract:

A severe dust storm generated by a western disturbance over north Pakistan and adjoining Afghanistan affected the north-west region of India between May 28 and 31, 2014, resulting in significant reductions in air quality and visibility. The air quality of the affected region degraded drastically: the PM10 concentration peaked at a very high value of around 1018 μg m⁻³ during the dust storm hours of May 30, 2014, at New Delhi. The present study depicts aerosol optical properties monitored during the dust days using a ground-based multi-wavelength sky radiometer over the National Capital Region of India. A high Aerosol Optical Depth (AOD) at 500 nm of 1.356 ± 0.19 was observed at New Delhi, while the Angstrom exponent (alpha) dropped to 0.287 on May 30, 2014. The variations in the Single Scattering Albedo (SSA) and in the real n(λ) and imaginary k(λ) parts of the refractive index indicated that the dust event shifted the optical state toward more absorbing aerosols. The single scattering albedo, refractive index, volume size distribution, and asymmetry parameter (ASY) values suggested that dust aerosols were predominant over anthropogenic aerosols in the urban environment of New Delhi. The large reduction in the radiative flux at the surface level caused significant cooling at the surface. The Direct Aerosol Radiative Forcing (DARF) was calculated using a radiative transfer model during the dust period. A consistent increase in surface cooling was evident, ranging from -31 W m⁻² to -82 W m⁻², together with an increase in atmospheric heating from 15 W m⁻² to 92 W m⁻² and a change from -2 W m⁻² to 10 W m⁻² at the top of the atmosphere.
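
For reference, the Angstrom exponent reported above relates the AOD at two wavelengths; a standard two-wavelength form (the wavelength pair used by the authors is not stated in the abstract) is:

```latex
% Angstrom exponent from AOD (tau) at two wavelengths; a small alpha indicates
% coarse (dust-dominated) particles, a large alpha indicates fine particles.
\alpha \;=\; -\,\frac{\ln\!\bigl(\tau_{\lambda_1}/\tau_{\lambda_2}\bigr)}{\ln\!\bigl(\lambda_1/\lambda_2\bigr)}
```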

Keywords: aerosol optical properties, dust storm, radiative transfer model, sky radiometer

Procedia PDF Downloads 378
544 Effects of Major and Minor Modes to Emotional Perceptions of 'Happy' and 'Sad' in Piano Music among Students Aged 9-17

Authors: Nurezlin Mohd Azib, Pan Kok Chang

Abstract:

This quantitative study investigates the effects of major and minor modes, and the contributing musical parameter of tempo, on the emotional perceptions of 'happy' and 'sad' in piano music among subjects aged 9-17 years old. The study was conducted in two phases: a survey questionnaire and a listening activity. Subjects (N=31) were sampled from the population of piano students in Bangi, Selangor. In the survey questionnaire, subjects answered 20 questions on demographic characteristics, music listening and preference, and understanding of emotional perception in music. In the listening activity, subjects listened to 20 untitled piano music excerpts and rated the emotion perceived for each excerpt as either 'happy' or 'sad'. Results from the survey questionnaire show that the largest proportion of subjects are 11 years old, in Grade 1, with 3 years of learning piano; they prefer classical music, always listen to music, prefer music in both major and minor modes, and find it easy to understand emotion in music as well as major and minor modes. Results from the listening activity show that 60% of the major-mode excerpts were perceived as 'happy', while 60% of the minor-mode excerpts were likewise perceived as 'sad'. However, a chi-square test of independence indicates that there is no significant association between mode (major vs. minor) and the 'happy' and 'sad' perceptions (χ²(1, N = 20) = 0.80, p = 0.371) at the significance level of p ≤ 0.05. In contrast, there is a significant association between tempo (fast vs. slow) and the 'happy' and 'sad' perceptions (χ²(1, N = 20) = 9.899, p = 0.005). Therefore, it is concluded that tempo plays an important role in the effects of major and minor modes on 'happy' and 'sad' emotional perceptions in piano music among the subjects aged 9 to 17 in this study.

Keywords: effects, emotional perceptions, major and minor modes, piano music

Procedia PDF Downloads 218
543 Long Wavelength Coherent Pulse of Sound Propagating in Granular Media

Authors: Rohit Kumar Shrivastava, Amalia Thomas, Nathalie Vriend, Stefan Luding

Abstract:

A mechanical wave or vibration propagating through granular media exhibits a specific signature in time. A coherent pulse or wavefront arrives first, with multiply scattered waves (coda) arriving later. The coherent pulse is microstructure-independent, i.e., it depends only on the bulk properties of the disordered granular sample: the sound wave velocity of the granular sample and hence the bulk and shear moduli. The coherent wavefront attenuates (decreases in amplitude) and broadens with distance from its source. The pulse attenuation and broadening effects are affected by disorder (polydispersity; contrast in the size of the granules) and have often been attributed to dispersion and scattering. To study the effect of disorder and of the initial amplitude (non-linearity) of the pulse imparted to the system on the coherent wavefront, numerical simulations have been carried out on one-dimensional sets of particles (granular chains). The interaction force between the particles is given by a Hertzian contact model. The sizes of the particles have been selected randomly from a Gaussian distribution, where the standard deviation of this distribution is the relevant parameter that quantifies the effect of disorder on the coherent wavefront. Since the coherent wavefront is independent of the system configuration, ensemble averaging has been used to improve the signal quality of the coherent pulse and to remove the multiply scattered waves. The results concerning the width of the coherent wavefront have been formulated in terms of scaling laws. An experimental set-up of photoelastic particles constituting a granular chain is proposed to validate the numerical results.
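
For reference, the Hertzian normal contact force between two elastic spheres takes the standard form below, offered as an illustration of the contact model named above:

```latex
% Hertzian contact: normal force F from the overlap delta of two spheres with
% effective modulus E* and effective radius R*.
F \;=\; \tfrac{4}{3}\,E^{*}\sqrt{R^{*}}\,\delta^{3/2},
\qquad
\frac{1}{E^{*}} = \frac{1-\nu_1^{2}}{E_1}+\frac{1-\nu_2^{2}}{E_2},
\qquad
\frac{1}{R^{*}} = \frac{1}{R_1}+\frac{1}{R_2}
```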

Keywords: discrete elements, Hertzian contact, polydispersity, weakly nonlinear, wave propagation

Procedia PDF Downloads 205
542 A Two Server Poisson Queue Operating under FCFS Discipline with an ‘m’ Policy

Authors: R. Sivasamy, G. Paulraj, S. Kalaimani, N.Thillaigovindan

Abstract:

For profitable businesses, queues are double-edged swords, and the pain of long wait times in a queue often frustrates customers. This paper suggests a technical way of reducing the pain of lines through a Poisson M/M₁,M₂/2 queueing system operated by two heterogeneous servers, with the objective of minimising the mean sojourn time of customers served under the queue discipline 'First Come First Served with an m policy', i.e., the FCFS-m policy. Arrivals to the system form a Poisson process of rate λ and are served by two exponential servers. The service times of successive customers at server j are independent and identically distributed (i.i.d.) random variables, each exponentially distributed with rate parameter μⱼ (j = 1, 2). The primary condition for implementing the FCFS-m queue discipline on these service rates is that either (m+1)μ₂ > μ₁ > mμ₂ or (m+1)μ₁ > μ₂ > mμ₁ must be satisfied. Further, waiting customers prefer server-1 whenever it becomes available for service, and server-2 should be brought into service if and only if the queue length exceeds the threshold value m. Steady-state results on the queue length and waiting time distributions have been obtained. A simple way of tracing the optimal service rate μ₂* of server-2 is illustrated in a specific numerical exercise that equalizes the average queue length cost with the service cost. Assuming that server-1 dynamically adjusts the service rate to μ₁ while the system size is strictly less than T = (m+2) (with μ₂ = 0), and to μ₁ + μ₂ with μ₂ > 0 when the system size is greater than or equal to T, the corresponding steady-state results of M/M₁+M₂/1 queues have been deduced from those of M/M₁,M₂/2 queues. To show that this investigation has a viable application, the results of the M/M₁+M₂/1 queue have been used for processing waiting messages at a single computer node and for measuring the power consumption of the node.

Keywords: two heterogeneous servers, M/M1, M2/2 queue, service cost and queue length cost, M/M1+M2/1 queue

Procedia PDF Downloads 363
541 An Analytical Study on the Effect of Chronic Liver Disease Severity and Etiology on Lipid Profiles

Authors: Thinakar Mani Balusamy, Venkateswaran A. R., Bharat Narasimhan, Ratnakar Kini S., Kani Sheikh M., Prem Kumar K., Pugazhendi Thangavelu, Arun Murugan, Sibi Thooran Karmegam, Radhakrishnan N., Mohammed Noufal, Amit Soni

Abstract:

Background and Aims: The liver is integral to lipid metabolism, and a compromise in its function leads to perturbations in these pathways. In this study, we aim to determine the correlation between the severity of chronic liver disease (CLD) and lipid parameters. We also examine the etiology-specific effects on lipid levels. Materials and Methods: This is a retrospective cross-sectional analysis of 250 patients with cirrhosis compared to 250 healthy age- and sex-matched controls. Severity of CLD was assessed using MELD and Child-Pugh scores, and etiological details were collected. A questionnaire was used to obtain patient demographic details, and lastly a fasting lipid profile (total, LDL and HDL cholesterol, triglycerides and VLDL) was obtained. Results: All components of the lipid profile declined linearly with increasing severity of CLD as determined by MELD and Child-Pugh scores. Lipid levels were clearly lower in CLD patients than in healthy controls. Interestingly, preliminary analysis indicated that CLDs of different etiologies had differential effects on the lipid profile; this aspect is under further analysis. Conclusion: All components of the lipid profile were definitely lower in CLD patients than in controls and demonstrated an inverse correlation with increasing severity. The utilization of this parameter as a prognostic aid requires further study. Additionally, preliminary analysis indicates that the various CLD etiologies appear to have specific effects on the lipid profile, a finding under further analysis.
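
As an illustration of the kind of severity-versus-lipid analysis described above, the sketch below computes Spearman correlations between a severity score and lipid components on synthetic stand-in data; the column names, effect sizes and noise levels are assumptions for demonstration only and do not reflect the study's dataset or results.

import numpy as np
import pandas as pd
from scipy import stats

# Illustrative sketch only: synthetic stand-in data, not the study's dataset.
rng = np.random.default_rng(0)
n = 250
meld = rng.integers(6, 40, n)                       # assumed MELD score range
df = pd.DataFrame({
    "meld": meld,
    # Lipids assumed to decline with severity, plus noise (illustrative slopes).
    "total_chol": 190 - 2.0 * meld + rng.normal(0, 20, n),
    "ldl": 110 - 1.2 * meld + rng.normal(0, 15, n),
    "hdl": 45 - 0.6 * meld + rng.normal(0, 8, n),
    "triglycerides": 140 - 1.5 * meld + rng.normal(0, 25, n),
})

for col in ["total_chol", "ldl", "hdl", "triglycerides"]:
    rho, p = stats.spearmanr(df["meld"], df[col])
    print(f"{col:>14}: Spearman rho = {rho:+.2f}, p = {p:.2g}")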

Keywords: CLD, cholesterol, HDL, LDL, lipid profile, triglycerides, VLDL

Procedia PDF Downloads 221
540 Breaking Stress Criterion that Changes Everything We Know About Materials Failure

Authors: Ali Nour El Hajj

Abstract:

Background: The perennial deficiencies of failure models in the materials field have profoundly impacted all associated technical fields that depend on accurate failure predictions. Many preeminent scientists from an earlier era of groundbreaking discoveries attempted to solve the problem of material failure, yet a thorough understanding of it has remained frustratingly elusive. Objective: The heart of this study is a methodology that identifies a newly derived one-parameter criterion as the only general failure theory for noncompressible, homogeneous, and isotropic materials subjected to multiaxial states of stress and various boundary conditions, providing a solution to this longstanding problem. The theory is the counterpart and companion piece to the theory of elasticity and is cast in a formalism suitable for broad application. Methods: Using advanced finite-element analysis, the maximum internal breaking stress corresponding to the maximum applied external force is identified as a unified and universal failure criterion for determining the structural capacity of any system, regardless of its geometry or architecture. Results: A comparison of the proposed criterion and methodology against design codes reveals that current provisions may underestimate the structural capacity by a factor of 2.17 or overestimate it by a factor of 2.096. It also shows that existing standards may underestimate the structural capacity by a factor of 1.4 or overestimate it by a factor of 2.49. Conclusion: The proposed failure criterion and methodology will pave the way for a new era in designing unconventional structural systems composed of unconventional materials.
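
To indicate how a one-parameter breaking-stress check might be applied in post-processing, here is a minimal sketch that scales an applied load until the peak internal stress from a finite-element run reaches a single breaking-stress value. The function name, the linear-scaling assumption and all numbers are illustrative; they are not the paper's methodology or data.

# Hedged post-processing sketch: an illustration of a one-parameter
# breaking-stress check on FEA output, not the paper's actual procedure.

def capacity_from_fea(element_stresses, applied_force, breaking_stress):
    # Scale the applied force so the peak internal stress reaches the single
    # breaking-stress parameter (assumes stresses scale linearly with load).
    peak = max(element_stresses)                       # peak internal stress, Pa
    return applied_force * breaking_stress / peak      # predicted capacity, N

# Example: a peak stress of 180 MPa under a 100 kN load and an assumed breaking
# stress of 390 MPa give a predicted capacity of about 217 kN.
print(capacity_from_fea([120e6, 180e6, 95e6], 100e3, 390e6))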

Keywords: failure criteria, strength theory, failure mechanics, materials mechanics, rock mechanics, concrete strength, finite-element analysis, mechanical engineering, aeronautical engineering, civil engineering

Procedia PDF Downloads 81
539 Changes of Acute-phase Reactants in Systemic Sclerosis During Long-term Rituximab Therapy

Authors: Liudmila Garzanova, Lidia Ananyeva, Olga Koneva, Olga Ovsyannikova, Oxana Desinova, Mayya Starovoytova, Rushana Shayahmetova, Anna Khelkovskaya-Sergeeva

Abstract:

Objectives: C-reactive protein (CRP) and the erythrocyte sedimentation rate (ESR) are associated with a severe course and with increased morbidity and mortality in systemic sclerosis (SSc). The aim of our study was to assess changes in CRP and ESR in SSc patients during long-term rituximab (RTX) therapy. Methods: This study included 113 patients with SSc. The mean age was 48.1±13 years, and 85% were female. The mean disease duration was 6±5 years. Fifty-five percent of patients had the diffuse cutaneous subset of the disease, and all had interstitial lung disease (ILD). All patients received prednisolone at a mean dose of 11.6±4.8 mg/day, and 53 of them were receiving immunosuppressants at inclusion. Patients received RTX because of the ineffectiveness of previous therapy for ILD. The parameters were evaluated at baseline (point 0) and at 13±2.3 months (point 1, n=113), 42±14 months (point 2, n=80) and 79±6.5 months (point 3, n=25) after initiation of RTX therapy. The cumulative mean dose of RTX was 1.7±0.6 g at point 1, 3±1.5 g at point 2, and 3.8±2.4 g at point 3. The results are presented as mean values; delta (Δ) denotes the difference between the baseline value and the follow-up point. Results: The studied parameters improved on RTX therapy, with a significant decrease in ESR, CRP and the activity index (EScSG-AI) at all observation points (p=0.001). At point 1: ΔCRP = 6.7 mg/l, ΔESR = 7.4 mm/h, ΔEScSG-AI = 1.7. At point 2: ΔCRP = 8.7 mg/l, ΔESR = 7.5 mm/h, ΔEScSG-AI = 1.9. At point 3: ΔCRP = 16.1 mg/l, ΔESR = 11 mm/h, ΔEScSG-AI = 2.1. Conclusion: There was a significant decrease in CRP and ESR during long-term RTX therapy, which correlated with a decrease in the disease activity index. RTX is an effective treatment option for SSc patients with elevated acute-phase reactants.

Keywords: C-reactive protein, interstitial lung disease, systemic sclerosis, rituximab

Procedia PDF Downloads 29