Search results for: iterative calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1605


615 Compression Index Estimation by Water Content, Liquid Limit, and Void Ratio Using Statistical Methods

Authors: Lizhou Chen, Abdelhamid Belgaid, Assem Elsayed, Xiaoming Yang

Abstract:

Compression index is essential in foundation settlement calculations. The traditional method for determining the compression index is the consolidation test, which is expensive and time-consuming. Many researchers have therefore used regression methods to develop empirical equations that predict the compression index from soil properties. Based on a large number of compression index data collected from consolidation tests, the accuracy of several popular empirical equations was assessed. It was found that the primary compression index is significantly overestimated by some equations and underestimated by others. Sensitivity analyses of soil parameters, including water content, liquid limit, and void ratio, were performed. The results indicate that the compression index obtained from the void ratio is the most accurate. An ANOVA (analysis of variance) demonstrates that equations with multiple soil parameters do not provide better predictions than equations with a single soil parameter; in other words, it is not necessary to develop relationships between the compression index and multiple soil parameters. Meanwhile, it was noted that the secondary compression index is approximately 0.7-5.0% of the primary compression index, with an average of 2.0%. Finally, prediction equations developed using the power regression technique are provided, which give more accurate predictions than existing equations.
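The power regression mentioned in the abstract fits a relation of the form Cc = a·e0^b, which is linear in log space. A minimal sketch of the fitting procedure, using hypothetical (void ratio, compression index) pairs generated for illustration only (the paper's actual data and coefficients are not reproduced here):

```python
import numpy as np

# Hypothetical (void ratio e0, compression index Cc) pairs generated from
# Cc = 0.35 * e0**1.2 purely to illustrate the fitting procedure.
e0 = np.array([0.6, 0.8, 1.0, 1.3, 1.7, 2.2])
cc = 0.35 * e0**1.2

# Power regression Cc = a * e0**b is linear in log space:
#   log(Cc) = log(a) + b * log(e0)
b, log_a = np.polyfit(np.log(e0), np.log(cc), 1)
a = np.exp(log_a)
print(round(a, 3), round(b, 3))  # recovers a = 0.35, b = 1.2
```

The same one-line log-space fit applies whichever single soil parameter (water content, liquid limit, or void ratio) is used as the predictor.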

Keywords: compression index, clay, settlement, consolidation, secondary compression index, soil parameter

Procedia PDF Downloads 162
614 A Simple Computational Method for the Gravitational and Seismic Soil-Structure Interaction between New and Existing Building Sites

Authors: Nicolae Daniel Stoica, Ion Mierlus Mazilu

Abstract:

This numerical research addresses the design of new buildings in the 3D context of existing buildings. With today's continuous development and congestion of urban centers, the influence of new buildings on an already existing vicinity is a major question. In this study, we therefore focus on how existing buildings may be affected by newly constructed ones and on how far this influence really extends. Modeling the interaction between buildings is not simple anywhere in the world, Romania included. Unfortunately, designers often perform neither the simplified nor the more rigorous calculations that could determine how close to reality these 3D influences are. In much of the literature, building a 'shield' (piles or molded walls) is considered sufficient to stop the influence between buildings, and the soil under the structure is therefore often ignored in calculation models. The main reason the soil is neglected in the analysis is the complexity of modeling the soil-structure interaction. In this paper, based on a new, simple, but efficient methodology, we determine for a number of case studies the influence of a new building on an existing one in terms of soil-structure interaction. The study covers the additional subsidence that may occur during the execution of the new works and after their completion. It also presents the force diagrams and deflections in the soil for both the original case and the final stage. This is necessary to see to what extent the new building impacts the existing areas.

Keywords: soil, structure, interaction, piles, earthquakes

Procedia PDF Downloads 291
613 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

In biomedical research and randomized clinical trials, the outcomes of primary interest are time-to-event, so-called survival, data. The importance of robust models in this context is to compare the effects of randomly controlled experimental groups in a way that carries a sense of causality. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association of response and predictors. Hence, a causal-effect-based semiparametric transformation model was proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received much attention for estimating causal effects when modeling left-truncated and right-censored survival data. Despite its wide application and popularity in estimating unknown parameters, the maximum likelihood estimation technique is complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates. Thus, to ease this complexity, we propose modified estimating equations. After intuitive estimation procedures, the consistency and asymptotic properties of the estimators are derived, and the finite-sample performance of the proposed model is illustrated via simulation studies and the Stanford heart transplant data.
To sum up, the bias from covariates is adjusted by estimating the density function of the truncation variable, which is also incorporated into the model as a covariate in order to relax the independence assumption between failure time and truncation time. Moreover, an expectation-maximization (EM) algorithm is described for the iterative estimation of the unknown parameters and the unspecified transformation function. In addition, the causal effect is derived as the ratio of the cumulative hazard functions of the active and passive experimental groups, after adjusting for the bias introduced into the model by the truncation variable.

Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 125
612 Cost-Effective and Optimal Control Analysis for Mitigation Strategy to Chocolate Spot Disease of Faba Bean

Authors: Haileyesus Tessema Alemneh, Abiyu Enyew Molla, Oluwole Daniel Makinde

Abstract:

Introduction: Faba bean is one of the most important crops grown worldwide for humans and animals. Despite its diverse significance, the output of faba bean has been limited by several biotic and abiotic factors. Many faba bean pathogens have been reported so far, of which the most important yield-limiting disease is chocolate spot disease (Botrytis fabae). The dynamics of disease transmission and the decision-making processes behind intervention programs for disease control are now better understood through mathematical modeling, and many researchers are currently interested in plant disease modeling. Objective: In this paper, a deterministic mathematical model for chocolate spot disease (CSD) on the faba bean plant, together with an optimal control model, was developed and analyzed to examine the best strategy for controlling CSD. Methodology: Three control interventions, prevention (u1), quarantine (u2), and chemical control (u3), are employed to establish the optimal control model. The optimality system, the characterization of the controls, the adjoint variables, and the Hamiltonian are all generated employing Pontryagin's maximum principle. A cost-effective approach is chosen from a set of possible integrated strategies using the incremental cost-effectiveness ratio (ICER). The forward-backward sweep iterative approach is used to run numerical simulations. Results: The Hamiltonian, the optimality system, the characterization of the controls, and the adjoint variables were established. The numerical results demonstrate that each integrated strategy can reduce the disease within the specified period. However, due to limited resources, an integrated strategy of prevention and uprooting was found to be the most cost-effective strategy to combat CSD.
Conclusion: Stakeholders and policymakers should therefore give attention to this integrated, cost-effective, and environmentally friendly strategy, and disseminate the integrated intervention to farmers, in order to fight the spread of CSD in the faba bean population and obtain the expected yield from the field.
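The ICER comparison the methodology refers to divides the incremental cost of a strategy by its incremental effect. A minimal sketch with hypothetical (cost, infections averted) figures, since the paper's actual numbers are not given here:

```python
# Hypothetical (total cost, infections averted) per strategy, used only to
# illustrate the incremental cost-effectiveness ratio (ICER) comparison.
strategies = {"A": (500.0, 1200.0), "B": (900.0, 2000.0)}

def icer(cost1, eff1, cost2, eff2):
    # ICER = (difference in cost) / (difference in effect),
    # comparing a strategy against the next-less-effective one.
    return (cost2 - cost1) / (eff2 - eff1)

(c_a, e_a), (c_b, e_b) = strategies["A"], strategies["B"]
print(icer(c_a, e_a, c_b, e_b))  # 0.5 cost units per extra infection averted
```

In the cost-effectiveness analysis, strategies are first sorted by effectiveness and the ICER is computed pairwise; a strategy with a dominated (higher) ICER is excluded and the comparison is repeated.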

Keywords: CSD, optimal control theory, Pontryagin’s maximum principle, numerical simulation, cost-effectiveness analysis

Procedia PDF Downloads 87
611 Mathematical Modeling of the Calculation of the Absorbed Dose in Uranium Production Workers with Genetic Effects

Authors: P. Kazymbet, G. Abildinova, K. Makhambetov, M. Bakhtin, D. Rybalkina, K. Zhumadilov

Abstract:

Cytogenetic research was conducted on workers of the Stepnogorsk Mining-Chemical Combine (Akmola region), with 26,341 chromosomal metaphases studied. Using regression analysis with the program DataFit, version 5.0, the dependence between exposure dose and the following cytogenetic indices was studied: the frequency of aberrant cells, the frequency of chromosomal aberrations, and the frequency of the sum of dicentric chromosomes and centric rings. Experimental data on the 'dose-effect' calibration curves enabled the development of a mathematical model that allows the absorbed dose at the time of the study to be calculated from the frequency of aberrant cells, chromosomal aberrations, and the sum of dicentric chromosomes and centric rings. In the dose range of 0.1 Gy to 5.0 Gy, the dependence of the cytogenetic parameters on the dose had the following form: Y = 0.0067e^(0.3307x) (R² = 0.8206) for the frequency of chromosomal aberrations; Y = 0.0057e^(0.3161x) (R² = 0.8832) for the frequency of cells with chromosomal aberrations; and Y = 5×10⁻⁵e^(0.6383x) (R² = 0.6321) for the frequency of the sum of dicentric chromosomes and centric rings per cell. On the basis of these cytogenetic parameters and regression equations, the absorbed dose calculated for the uranium production workers at the time of the study did not exceed 0.3 Gy.
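Estimating the absorbed dose from an observed aberration frequency amounts to inverting the calibration curve Y = a·e^(bx) for x. A short sketch using the reported coefficients for chromosomal aberrations (the inversion itself is standard algebra, not a formula given in the abstract):

```python
import math

# Inverting the dose-effect calibration curve Y = a * exp(b * x) to estimate
# the absorbed dose x (Gy) from an observed aberration frequency Y.
# Coefficients are those reported for the frequency of chromosomal aberrations.
a, b = 0.0067, 0.3307

def absorbed_dose(y):
    # x = ln(Y / a) / b
    return math.log(y / a) / b

# Round-trip check: the frequency produced by x = 0.3 Gy maps back to 0.3 Gy.
print(round(absorbed_dose(0.0067 * math.exp(0.3307 * 0.3)), 1))  # 0.3
```

The same inversion applies to the other two fitted curves with their respective (a, b) pairs.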

Keywords: Stepnogorsk, mathematical modeling, cytogenetic, dicentric chromosomes

Procedia PDF Downloads 477
610 Atomistic Insight into the Trapped Oil Droplet/Nanofluid System in Nanochannels

Authors: Yuanhao Chang, Senbo Xiao, Zhiliang Zhang, Jianying He

Abstract:

The role of nanoparticles (NPs) in enhanced oil recovery (EOR) is being increasingly emphasized. In this study, the motion of NPs and the local stress distribution of a trapped oil droplet/nanofluid system in nanochannels are studied with coarse-grained modeling and molecular dynamics simulations. The results illustrate three motion patterns for NPs: hydrophilic NPs are more likely to adsorb onto the channel and stay near the three-phase contact areas, hydrophobic NPs move inside the oil droplet as clusters, and mixed NPs are mostly trapped at the oil-water interface. NPs in each pattern affect the flow of the fluid and the interfacial thickness to various degrees. The calculation of atomistic stress shows that higher stress values occur where NPs aggregate, and different occurrence patterns correspond to specific local stress distributions. Significantly, in the three-phase contact area for hydrophilic NPs, a local stress distribution close to the pattern of structural disjoining pressure is observed, which demonstrates the existence of structural disjoining pressure in molecular dynamics simulation for the first time. Our results guide the design and screening of NPs for EOR and provide a basic understanding of nanofluid applications.

Keywords: local stress distribution, nanoparticles, enhanced oil recovery, molecular dynamics simulation, trapped oil droplet, structural disjoining pressure

Procedia PDF Downloads 134
609 A Case Study of Ontology-Based Sentiment Analysis for Fan Pages

Authors: C. -L. Huang, J. -H. Ho

Abstract:

Social media has become more and more important in our lives, and many enterprises promote their services and products to fans via social media. The positive or negative sentiment of feedback from fans is very important for enterprises to improve their products, services, and promotional activities. The purpose of this paper is to understand fans' sentiment by analyzing the responses they post on Facebook. The entity and aspect of each fan response were analyzed based on a predefined ontology. The ontology for cell phone sentiment analysis consists of the following top-level aspect categories: overall, shape, hardware, brand, price, and service, each of which consists of several sub-categories. All aspects of a fan response were found based on the ontology, and their corresponding sentiment terms were found using a lexicon-based approach. The sentiment scores for the aspects of fan responses were obtained by summarizing the sentiment terms in the responses, with the frequency of 'like' also weighted into the sentiment score calculation. Three famous cell phone fan pages on Facebook were selected as demonstration cases to evaluate the performance of the proposed methodology, with human judgment by several domain experts as the baseline for comparison. The proposed approach performed as well as human judgment on precision, recall, and F1-measure.
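A minimal sketch of the lexicon-based scoring with 'like'-weighting described above. The lexicon, tokens, and weighting scheme here are hypothetical placeholders; the paper's actual ontology matching and weighting formula are not specified in the abstract:

```python
# Hypothetical sentiment lexicon: term -> polarity score.
lexicon = {"great": 1, "fast": 1, "slow": -1, "bad": -1}

def aspect_score(tokens, likes):
    # Sum the polarity of matched sentiment terms for one aspect,
    # then weight by the number of 'likes' (illustrative weighting only).
    base = sum(lexicon.get(t, 0) for t in tokens)
    return base * (1 + likes)

# A response about the "screen" aspect with two 'likes'.
print(aspect_score(["screen", "great", "fast"], likes=2))  # 6
```

In the full method, tokens would first be mapped to ontology aspects (hardware, price, service, etc.) and a score produced per aspect rather than per response.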

Keywords: opinion mining, ontology, sentiment analysis, text mining

Procedia PDF Downloads 232
608 The Characteristics of Quantity Operation for 2nd and 3rd Grade Mathematics Slow Learners

Authors: Pi-Hsia Hung

Abstract:

The development of mathematical competency benefits individuals as well as the wider society. Children who begin school behind their peers in their understanding of number, counting, and simple arithmetic are at high risk of staying behind throughout their schooling. The development of effective strategies for improving the educational trajectory of these individuals will be contingent on identifying the areas of early quantitative knowledge that influence later mathematics achievement. A computer-based quantity assessment was developed in this study to investigate the characteristics of 2nd and 3rd grade slow learners in quantity. The concept of quantification involves understanding measurements, counts, magnitudes, units, indicators, relative size, and numerical trends and patterns. Fifty-five quantitative reasoning tasks, such as number sense, mental calculation, estimation, and assessment of the reasonableness of results, are included as quantity problem solving. Thus, quantity is defined in this study as applying knowledge of number and number operations in a wide variety of authentic settings. Around 1,000 students were tested and categorized into four performance levels. Students' quantity ability correlated more highly with their school math grades than with other subjects, and around 20% of the students fell below the basic level. The intervention design implications of the preliminary item map constructed are discussed.

Keywords: mathematics assessment, mathematical cognition, quantity, number sense, validity

Procedia PDF Downloads 247
607 A Fuzzy Inference Tool for Assessing Cancer Risk from Radiation Exposure

Authors: Bouharati Lokman, Bouharati Imen, Bouharati Khaoula, Bouharati Oussama, Bouharati Saddek

Abstract:

Ionizing radiation exposure is an established cancer risk factor. Compared to other common environmental carcinogens, it is relatively easy to determine organ-specific radiation dose and, as a result, radiation dose-response relationships tend to be highly quantified. Nevertheless, there can be considerable uncertainty about questions of radiation-related cancer risk as they apply to risk protection and public policy, and the interpretations of interested parties can differ from one person to another. The tools used to analyze the risk of developing cancer due to radiation are characterized by uncertainty, related both to the exposure history and to the different assumptions involved in the calculation. We believe that the results of statistical calculations are therefore characterized by uncertainty and imprecision, having regard to the physiological variation from one person to another. In this study, we develop a tool based on fuzzy logic inference. As fuzzy logic deals with imprecise and uncertain data, its application in this area is well suited. We propose a fuzzy system with three input variables (age, sex, and the organ potentially affected by cancer); the output variable expresses the risk rate for each organ. A rule base is established from recorded actual data. After successful simulation, the system can instantly predict the risk rate for each organ following chronic exposure to 0.1 Gy.
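Fuzzy inference systems like the one described rest on membership functions that map a crisp input (e.g. age) to a degree of membership in a linguistic set. A minimal triangular-membership sketch; the paper's actual membership functions and rule base are not given, so the shapes and breakpoints below are purely illustrative:

```python
# Minimal triangular fuzzy membership function (illustrative only; the
# paper's actual membership functions and rule base are not specified).
def tri(x, a, b, c):
    # Membership rises linearly from a to the peak b, then falls to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degree to which age 45 belongs to a "middle-aged" set on [30, 60],
# peaking at 45.
print(tri(45, 30, 45, 60))  # 1.0
```

A full system would fuzzify all three inputs this way, fire the rule base (e.g. via min/max composition), and defuzzify to obtain the crisp risk rate.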

Keywords: radiation exposure, cancer, modeling, fuzzy logic

Procedia PDF Downloads 311
606 A Framework for Security Risk Level Measures Using CVSS for Vulnerability Categories

Authors: Umesh Kumar Singh, Chanchala Joshi

Abstract:

With increasing dependency on IT infrastructure, the main objective of a system administrator is to maintain a stable and secure network, ensuring that the network is robust against malicious users such as attackers and intruders. Security risk management provides a way to manage the growing threats to infrastructures and systems. This paper proposes a framework for risk level estimation that uses the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD) and the Common Vulnerability Scoring System (CVSS). The proposed framework measures the frequency of vulnerability exploitation, combines this measured frequency with the standard CVSS score, and estimates the security risk level, which supports automated and reasonable security management. An equation for the temporal score calculation with respect to the availability of a remediation plan is derived, and the frequency of exploitation is then calculated from the determined temporal score. The frequency of exploitation, along with the CVSS score, is used to calculate the security risk level of the system. The proposed framework uses the CVSS vectors for risk level estimation and measures the security level of a specific network environment, which assists the system administrator in assessing security risks and making decisions related to their mitigation.
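The combination step can be pictured as blending a normalized CVSS score with an observed exploitation frequency. The weighting below is a hypothetical illustration, not the paper's derived equation:

```python
# Illustrative blend of a CVSS base score (0-10) with an observed
# exploitation frequency (0-1). The weight w and the linear form are
# assumptions for the sketch, not the framework's actual equation.
def risk_level(cvss_base, exploit_freq, w=0.7):
    # Normalise CVSS to [0, 1], then take a weighted combination.
    return w * (cvss_base / 10.0) + (1 - w) * exploit_freq

print(round(risk_level(7.5, 0.4), 3))  # 0.645
```

Vulnerabilities can then be ranked by this combined level rather than by the static CVSS score alone, which is the framework's central point.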

Keywords: CVSS score, risk level, security measurement, vulnerability category

Procedia PDF Downloads 321
605 Analysis Model for the Relationship of Users, Products, and Stores on Online Marketplace Based on Distributed Representation

Authors: Ke He, Wumaier Parezhati, Haruka Yamashita

Abstract:

Recently, online marketplaces in the e-commerce industry, such as Rakuten and Alibaba, have become some of the most popular online marketplaces in Asia. On these shopping websites, consumers can select products from a large number of stores. Additionally, consumers have to register their name, age, gender, and other information in advance to access their account. Therefore, a method for analyzing consumer preferences from both the store side and the product side is required. This study uses the Doc2Vec method, which has been studied in the field of natural language processing, where it is widely used to extract semantic relationships between documents and words in document classification. This concept is applicable to representing the relationship between users (as documents) and items (as words); the problem, however, is that one more factor, the store, needs to be considered in Doc2Vec. More precisely, a method for analyzing the relationship between consumers, stores, and products is required. The purpose of our study is to combine the Doc2Vec analyses of users and stores and of users and items in the same feature space, which enables the calculation of similar stores and items for each user. We apply the method to real data accumulated in an online marketplace and demonstrate the efficiency of the proposal.
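Once users, stores, and items share one feature space, "similar items for a user" reduces to a nearest-neighbor search by cosine similarity. A self-contained sketch with toy two-dimensional vectors (the Doc2Vec training itself is omitted, and the vectors are invented for illustration):

```python
import math

# Ranking items for a user by cosine similarity in a shared embedding
# space. Toy 2-D vectors stand in for trained Doc2Vec embeddings.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

user = [0.2, 0.9]
items = {"item_a": [0.1, 1.0], "item_b": [1.0, 0.0]}

# The item whose embedding points most nearly the same way as the user's.
best = max(items, key=lambda k: cosine(user, items[k]))
print(best)  # item_a
```

The same ranking applied to store vectors instead of item vectors yields the "similar stores for each user" output the abstract describes.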

Keywords: Doc2Vec, online marketplace, marketing, recommendation systems

Procedia PDF Downloads 112
604 Amino Acid Derivatives as Green Corrosion Inhibitors for Mild Steel in 1M HCl: Electrochemical, Surface and Density Functional Theory Studies

Authors: Jiyaul Haque, Vandana Srivastava, M. A. Quraishi

Abstract:

The amino acid-based corrosion inhibitors 2-(3-(carboxymethyl)-1H-imidazol-3-ium-1-yl) acetate (Z-1), 2-(3-(1-carboxyethyl)-1H-imidazol-3-ium-1-yl) propanoate (Z-2), and 2-(3-(1-carboxy-2-phenylethyl)-1H-imidazol-3-ium-1-yl)-3-phenylpropanoate (Z-3) were synthesized by the reaction of amino acids, glyoxal, and formaldehyde, and characterized by FTIR and NMR spectroscopy. The corrosion inhibition performance of the synthesized inhibitors was studied by electrochemical (EIS and PDP), surface, and DFT methods. The results show that Z-1, Z-2, and Z-3 are effective inhibitors, with maximum inhibition efficiencies of 88.52%, 89.48%, and 96.08%, respectively, at a concentration of 200 ppm. The potentiodynamic polarization (PDP) study showed that Z-1 acts as a cathodic inhibitor, while Z-2 and Z-3 act as mixed-type inhibitors. The electrochemical impedance spectroscopy (EIS) studies showed that the zwitterions inhibit corrosion through an adsorption mechanism, and their adsorption on the mild steel surface followed the Langmuir adsorption isotherm. The formation of a zwitterion film on the mild steel surface was confirmed by scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX). Quantum chemical parameters were used to study the reactivity of the inhibitors and supported the experimental results. An inhibitor adsorption model is proposed.

Keywords: electrochemical impedance spectroscopy, green corrosion inhibitors, mild steel, SEM, quantum chemical calculation, zwitterions

Procedia PDF Downloads 195
603 Comparing Two Interventions for Teaching Math to Pre-School Students with Autism

Authors: Hui Fang Huang Su, Jia Borror

Abstract:

This study compared two interventions for teaching math to preschool-aged students with autism spectrum disorder (ASD). The first is considered the business-as-usual (BAU) intervention, which uses the Strategies for Teaching Based on Autism Research (STAR) curriculum and discrete trial teaching as the instructional methodology. The second is the Math is Not Difficult (Project MIND) activity-embedded, naturalistic intervention. These interventions were randomly assigned to four preschool classrooms of students with ASD; Project MIND was implemented over three months, and measurements gained during the same three months were used for the STAR intervention. A quasi-experimental, pre-test/post-test design was used to compare the effectiveness of the two interventions in building mathematical knowledge and skills. The pre-post measures include three standardized instruments: the Test of Early Math Ability-3, the Problem Solving and Calculation subtests of the Woodcock-Johnson Test of Achievement IV, and the Bracken Test of Basic Concepts-3 Receptive. The STAR curriculum-based assessment is administered to all Baudhuin students three times per year, and its results were used in this study. We anticipated that implementing these two approaches would improve the mathematical knowledge and skills of children with ASD, but it is crucial to see whether a behavioral or a naturalistic teaching approach leads to more significant results.

Keywords: early learning, autism, math for pre-schoolers, special education, teaching strategies

Procedia PDF Downloads 165
602 A Mechanical Diagnosis Method Based on Vibration Fault Signal down-Sampling and the Improved One-Dimensional Convolutional Neural Network

Authors: Bowei Yuan, Shi Li, Liuyang Song, Huaqing Wang, Lingli Cui

Abstract:

Convolutional neural networks (CNNs) have received extensive attention in the field of fault diagnosis, and many fault diagnosis methods use CNNs for fault type identification. However, when the amount of raw data collected by sensors is massive, the neural network has to perform a time-consuming classification task. In this paper, a mechanical fault diagnosis method based on vibration signal down-sampling and an improved one-dimensional convolutional neural network is proposed. Through robust principal component analysis, the low-rank feature matrix of a large amount of raw data can be separated, and down-sampling is then applied to reduce the subsequent computational load. In the improved one-dimensional CNN, a smaller convolution kernel is used to reduce the number of parameters and the computational complexity, and regularization is introduced before the fully connected layer to prevent overfitting. In addition, the multiple connected layers generalize classification results well without cumbersome parameter adjustment. The effectiveness of the method is verified by monitoring signals from a centrifugal pump test bench, with an average test accuracy above 98%. Compared with the traditional deep belief network (DBN) and support vector machine (SVM) methods, this method shows better performance.
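The core operation of a 1D convolutional layer is a sliding dot product of a small kernel over the signal; the smaller the kernel, the fewer parameters, which is the abstract's point. A toy sketch of one such convolution (the network architecture, training, and RPCA step are omitted):

```python
import numpy as np

# A single 'valid' 1D cross-correlation, as computed by a 1D conv layer
# without padding. A 3-tap kernel means only 3 weights per channel pair.
signal = np.array([0., 1., 2., 3., 4., 5.])
kernel = np.array([1., 0., -1.])  # small difference-style kernel

out = np.correlate(signal, kernel, mode="valid")
print(out)  # [-2. -2. -2. -2.]
```

Stacking such layers with nonlinearities, then regularized fully connected layers, gives the classifier structure the abstract describes; the short kernel keeps the parameter count and per-sample cost low.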

Keywords: fault diagnosis, vibration signal down-sampling, 1D-CNN

Procedia PDF Downloads 131
601 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour

Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani

Abstract:

In this paper, we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problems of losing the target under changing light, large displacement, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region using a neural network, and finding the exact contours with the greedy snake algorithm. We use both region and contour information to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors during updating, the target region is given to a perceptron neural network to separate the target from the background; its output is then used for the exact calculation of the size and center of the target, and as the initial contour for the greedy snake algorithm to find the exact target edge. The proposed algorithm has been tested on a database containing many challenges, such as the high speed and agility of aircraft, background clutter, occlusions, and camera movement. The experimental results show that the use of the neural network increases the accuracy of both tracking and segmentation.

Keywords: video tracking, particle filter, greedy snake, neural network

Procedia PDF Downloads 342
600 Civil Engineering Tool Kit for Making Perfect Ellipses of Desired Dimensions on Very Large Surfaces

Authors: Karam Chand Gupta

Abstract:

If an ellipse of given dimensions is to be drawn on a large ground surface, no formula, method, or set of calculations and procedures has been readily available to help draw an ellipse of the given length and width directly on the ground. Whenever a field engineer starts work on an ellipse-shaped structure, such as an elliptical conference hall, or a screening chamber or pump chamber in disposal works, it is cumbersome to set out the structure on a large ground surface. A set of formulas with calculations has therefore been developed that helps the field engineer easily draw a true and perfect ellipse of given length and width on large ground, so that construction of the elliptical structure can begin. Based on these formulas, a civil engineering tool kit has been made with which a perfect ellipse of desired dimensions can be set out on a very large surface. A patent for the tool kit has been filed with Intellectual Property India (patent filing number 201611026153, filed 30.07.2016). An app named 'KC's Mesh Formula' has also been made to ease the calculation work; it can be downloaded from the Play Store. Using these formulas and the tool kit, a field engineer will no longer face difficulty in drawing an ellipse on the ground to start the work.
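The keywords (foci, string, wooden peg) suggest the classic gardener's layout: pegs at the two foci and a taut string of length equal to the major axis. A sketch of the supporting arithmetic; note this is the standard construction, not necessarily the patented tool kit's own procedure:

```python
import math

# Two-pegs-and-string layout for an ellipse of overall length 2a and
# width 2b: pegs go at the foci, and the string loop spans 2a between them.
def foci_and_string(length, width):
    a, b = length / 2.0, width / 2.0
    c = math.sqrt(a * a - b * b)   # distance from centre to each focus
    return c, length               # peg offset each side, string length

# For a 10 m x 6 m ellipse: pegs 4 m either side of centre, 10 m string.
c, s = foci_and_string(10.0, 6.0)
print(c, s)  # 4.0 10.0
```

Keeping the string taut against a marker while sweeping around the pegs traces every point P with |PF1| + |PF2| = 2a, i.e. a true ellipse.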

Keywords: ellipse, elliptical structure, foci, string, wooden peg

Procedia PDF Downloads 268
599 Seismic Directionality Effects on In-Structure Response Spectra in Seismic Probabilistic Risk Assessment

Authors: Sittipong Jarernprasert, Enrique Bazan-Zurita, Paul C. Rizzo

Abstract:

Currently, seismic probabilistic risk assessments (SPRA) for nuclear facilities use In-Structure Response Spectra (ISRS) in the calculation of fragilities for systems and components. ISRS are calculated via dynamic analyses of the host building subjected to two orthogonal components of horizontal ground motion, each defined as the median motion in any horizontal direction. Structural engineers apply the components along selected X and Y Cartesian axes, and the ISRS at different locations in the building are likewise calculated in the X and Y directions. The directions of X and Y are not specified by the ground motion model with respect to geographic coordinates; they are selected by the structural engineer, normally to coincide with the 'principal' axes of the building, in the understanding that this practice is generally conservative. For SPRA purposes, however, it is desirable to remove any conservatism in the estimates of median ISRS. This paper examines the effects of the direction of horizontal seismic motion on the ISRS of a typical nuclear structure and evaluates the variability of ISRS calculated along different horizontal directions. Our results indicate that some central measures of the ISRS provide robust estimates that are practically independent of the selection of the directions of the horizontal Cartesian axes.
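Examining directionality amounts to re-expressing the two recorded orthogonal components along a rotated pair of axes and recomputing the response. A minimal sketch of that rotation step (the dynamic analysis itself is omitted):

```python
import math

# Rotating two orthogonal horizontal components (ax, ay) by an angle theta
# to obtain the motion along a trial pair of axes, as needed when studying
# ISRS directionality. Illustrative sketch only.
def rotate(ax, ay, theta):
    c, s = math.cos(theta), math.sin(theta)
    return ax * c + ay * s, -ax * s + ay * c

# A unit X-direction motion viewed along axes rotated 90 degrees.
x, y = rotate(1.0, 0.0, math.pi / 2)
print(round(x, 6), round(y, 6))  # 0.0 -1.0
```

Sweeping theta over a range of orientations and recomputing the ISRS at each one yields the variability with direction that the paper evaluates.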

Keywords: seismic, directionality, in-structure response spectra, probabilistic risk assessment

Procedia PDF Downloads 410
598 Railway Crane Accident: A Comparative Metallographic Test on Pins Fractured during Operation

Authors: Thiago Viana

Abstract:

Train accidents occasionally occur on railways, and in some specific cases it is necessary to use a rescue train with a crane positioned on a platform wagon. The overturned machines are collected and sent to the machine shop or scrap yard. On one of these cranes, which was being used to rescue a wagon, the hoist fell due to the fracture of two large pins. The two pins were collected and sent for failure analysis. This work investigates the main cause and the secondary causes of the initiation of the fatigue crack. All standard failure analysis procedures were applied, with careful evaluation of the characteristics of the material and the fractured surfaces and, principally, metallographic tests using an optical microscope to compare the geometry of the peaks and valleys of the threads of the pins and their respective seats. The metallographic analysis showed that the fatigue cracks initiated from a notch (stress concentration) in the valley of the threads of the pin on the right side of the crane (pin 1). Here, it was verified that the peaks of the threads of the pin seat did not have the proper geometry; sharp edges were present that caused such notches. Visual analysis showed that the fracture of the pin on the left side of the crane (pin 2) was of the brittle type, a consequence of the fracture of the first pin. Recommendations for this and other railway cranes have been made, including nondestructive testing, stress calculation, design review, quality control, and suitability of the mechanical forming process of the seat threads and pin threads.

Keywords: crane, fracture, pin, railway

Procedia PDF Downloads 108
597 Camel Mortalities Due to Accidental Intoxication with an Ionophore

Authors: M. A. Abdelfattah, F. K. Waleed

Abstract:

Anticoccidials are used widely in veterinary practice for the prevention of coccidiosis in poultry and play a significant role as growth promotants in ruminants. Ionophore poisoning frequently occurs due to accidental access to medicated feed, errors in feed mixing, incorrect dosage calculation, or use in non-recommended species. Camels on several farms in the Eastern Province of Saudi Arabia were accidentally fed feed pellets containing 13 ppm salinomycin. One hundred and sixty-three camels died, a mortality rate of 100%. The poisoning was clinically characterized by restlessness with the tail lifted, jerking of the muscles of the legs and thighs, excessive sweating, frequent sitting and standing with body imbalance, lateral and sternal recumbency with the legs stretched back, tearing eyes with dilated pupils, vomiting of stomach content, loss of consciousness, and death. Feed analysis indicated the presence of salinomycin in the pelleted feed in a range of 13 mg/kg to 47 mg/kg. Necropsy findings and histopathological examinations are presented. Regulations and legal implications concerning the sale of contaminated feed in the Saudi market are discussed in light of the feed law and by-laws. Effective implementation of regulations requiring quality assurance systems based on the principles of Good Manufacturing Practice (GMP) and the application of Hazard Analysis and Critical Control Points (HACCP) during feed production is necessary to avoid such feed accidents.

Keywords: medicated feed, salinomycin, anticoccidial, camel, toxicity

Procedia PDF Downloads 113
596 Detection of Hepatitis B by the Use of Artificial Intelligence

Authors: Shizra Waris, Bilal Shoaib, Munib Ahmad

Abstract:

Background: The use of clinical decision support systems (CDSSs) may improve chronic disease management, which requires regular visits to multiple health professionals, treatment monitoring, disease control, and patient behavior modification. The objective of this survey is to determine whether these CDSSs improve the processes of chronic care, including diagnosis, treatment, and monitoring of diseases. Though artificial intelligence is not a new idea, it has been widely documented as a new technology in computer science, and numerous areas such as education, business, medicine, and manufacturing have made use of it. Methods: The survey covers articles extracted from relevant databases, using search terms related to information technology and viral hepatitis and published between 2000 and 2016. Results: Overall, 80% of the studies asserted the benefit provided by information technology (IT); 75% of the studies asserted benefits in the medical domain; 25% of the studies did not clearly define the added benefit of IT. The current state of CDSSs requires many improvements to support the management of liver diseases such as HCV infection, liver fibrosis, and cirrhosis. Conclusion: We concluded that the proposed model gives an earlier and more accurate calculation of hepatitis B and works as a promising tool for predicting hepatitis B from clinical laboratory data.

Keywords: detection, hepatitis, observation, disease

Procedia PDF Downloads 156
595 Analysis of Heat Transfer and Energy Saving Characteristics for Bobsleigh/Skeleton Ice Track

Authors: Zichu Liu, Zhenhua Quan, Xin Liu, Yaohua Zhao

Abstract:

Enhancing the heat transfer characteristics of a bobsleigh/skeleton ice track and reducing its energy consumption play an important role in the energy saving of the refrigeration system. In this study, a track ice-making test rig was constructed to verify the accuracy of the established ice track heat transfer model. The effects of different meteorological conditions on the variations in the heat transfer characteristics of the ice surface, the ice temperature, and the evaporation temperature, with or without a Terrain Weather Protection System (TWPS), were investigated, and the influence of TWPS with and without low-emissivity materials on these indexes was also compared. In addition, the influence of different spacings and diameters of the refrigeration pipes on the heat transfer resistance of the track is analyzed. The results showed that, compared with an ice track without shading facilities, TWPS could reduce the heat transfer between the ice surface and the air by 17.6% in the transition season, and TWPS with low-emissivity material could reduce it by 37%. The thermal resistance of the ice track decreased by 8.9×10⁻⁴ m²·°C/W and the refrigerant evaporation temperature increased by 0.25 °C for every 10 mm decrease in cooling pipe spacing, while the thermal resistance decreased by 1.46×10⁻³ m²·°C/W and the evaporation temperature increased by 0.3 °C when the pipe diameter increased by one nominal diameter.

Keywords: bobsleigh/skeleton ice track, calculation model, heat transfer characteristics, refrigeration

Procedia PDF Downloads 110
594 Comparison of Safety Factor Evaluation Methods for Buckling of High Strength Steel Welded Box Section Columns

Authors: Balazs Somodi, Balazs Kovesdi

Abstract:

In the research practice of civil engineering, the statistical evaluation of experimental and numerical investigations is an essential task in order to compare the experimental and numerical resistances of a specific structural problem with the resistances proposed by the standards. However, in the standards and in the international literature there are several different safety factor evaluation methods that can be used to check the necessary safety level (e.g., 5% quantile level, 2.3% quantile level, 1‰ quantile level, γM partial safety factor, γM* partial safety factor, β reliability index). Moreover, different calculation methods can be found in the international literature even for the same safety factor. In the present study, the flexural buckling resistance of high strength steel (HSS) welded closed sections is analyzed. The authors investigated the flexural buckling resistances of the analyzed columns through laboratory experiments. The safety levels of the obtained experimental resistances are calculated based on several safety approaches and compared with EN 1990. The results of the different safety approaches are compared and evaluated; based on the evaluation, tendencies are identified and the differences between the statistical evaluation methods are explained.
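As an illustration of the quantile-based approaches named above, the sketch below computes a 5% quantile characteristic value, a 1‰ quantile design value, and the resulting partial factor. The resistance-ratio data are hypothetical and a plain normal distribution is assumed; this is not the authors' dataset nor the exact EN 1990 Annex D procedure.

```python
from statistics import NormalDist, mean, stdev

# Hypothetical experimental-to-predicted buckling resistance ratios
ratios = [1.08, 1.12, 0.97, 1.21, 1.05, 1.15, 0.99, 1.10, 1.03, 1.18]

m = mean(ratios)
s = stdev(ratios)                 # sample standard deviation

dist = NormalDist(m, s)
r_k = dist.inv_cdf(0.05)          # 5% quantile: characteristic value
r_d = dist.inv_cdf(0.001)         # 1 permille quantile: design value
gamma_m = r_k / r_d               # illustrative partial safety factor
```

Different quantile levels (5%, 2.3%, 1‰) are just different arguments to `inv_cdf`, which is why the methods compared in the paper can yield different safety levels from the same test data.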

Keywords: flexural buckling, high strength steel, partial safety factor, statistical evaluation

Procedia PDF Downloads 160
593 Calculation of the Left Ventricle Wall Radial Strain and Radial SR Using Tagged Magnetic Resonance Imaging (tMRI) Data

Authors: Mohammed Alenezy

Abstract:

Cardiac motion can be used as an indicator of heart abnormality by evaluating the longitudinal, circumferential, and radial strain of the left ventricle. In this paper, the radial strain and SR (strain rate) are studied using tagged MRI (tMRI) data over the cardiac cycle at the mid-ventricular level of the left ventricle. Materials and methods: The short-axis view of the left ventricle of five healthy humans (three males and two females) and four healthy male rats was imaged using the tagged magnetic resonance imaging (tMRI) technique, covering the whole cardiac cycle at the mid-ventricular level. Images were processed using ImageJ software to calculate the left ventricle wall radial strain and radial SR at the mid-ventricular level during the cardiac cycle. The peak radial strain for the human and rat heart was 40.7±1.44 and 46.8±0.68, respectively, occurring at 40% of the cardiac cycle in both. The peak diastolic and systolic radial SR for the human heart were -1.78±0.02 s⁻¹ and 1.10±0.08 s⁻¹, respectively, while for the rat heart they were -5.16±0.23 s⁻¹ and 4.25±0.02 s⁻¹, respectively. Conclusion: These results show the ability of tMRI data to characterize cardiac motion during the cardiac cycle, including the diastolic and systolic phases, which can be used as an indicator of cardiac dysfunction by estimating the left ventricle radial strain and radial SR at different locations in the cardiac tissue. This study confirms the validity of tagged MRI data for accurately describing radial cardiac motion.
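The radial strain and SR quantities described above can be sketched as follows. The wall-thickness samples and frame interval are hypothetical placeholders, and the forward finite-difference scheme is one simple choice, not the authors' exact processing pipeline.

```python
# Hypothetical left-ventricle wall-thickness samples (mm) over one
# cardiac cycle, taken at a fixed frame interval dt (assumed values).
thickness = [8.0, 9.2, 10.4, 11.2, 11.0, 10.1, 9.0, 8.2]
dt = 0.1  # seconds between frames

t0 = thickness[0]  # reference (end-diastolic) thickness
# Lagrangian radial strain in percent, relative to the first frame
strain = [100.0 * (t - t0) / t0 for t in thickness]

# Radial strain rate in s^-1 by forward finite differences
strain_rate = [(strain[i + 1] - strain[i]) / (100.0 * dt)
               for i in range(len(strain) - 1)]

peak_strain = max(strain)            # peak radial strain (%)
peak_systolic_sr = max(strain_rate)  # thickening phase
peak_diastolic_sr = min(strain_rate) # thinning (relaxation) phase
```

Positive SR corresponds to systolic wall thickening and negative SR to diastolic thinning, matching the sign convention of the reported values.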

Keywords: left ventricle, radial strain, tagged MRI, cardiac cycle

Procedia PDF Downloads 482
592 Behavior of Cold Formed Steel in Trusses

Authors: Reinhard Hermawan Lasut, Henki Wibowo Ashadi

Abstract:

The use of materials in Indonesia's construction sector requires engineers and practitioners to develop efficient construction technology; one of the materials used is cold-formed steel. Cold-formed steel is generally used in the roof trusses of houses and factories. Failures of roof truss structures are caused by errors in the calculation analysis, in the form of cross-sectional dimensions or frame configuration. In a roof truss, the vertical height relative to the span length affects the edge members of the frame that carry the compressive load; if the span is too long, local buckling will occur, which compromises the strength of the frame. The analysis considers models with various roof truss shapes, span lengths, and angles, using the stiffness matrix method of structural analysis. Truss models with the span shortened by one-fifth and by one-sixth were analyzed, and the models were also reviewed with increasing angles. It can be concluded that shortening the span in the compression area reduces deflection, while increasing the angle does not give good results, because the higher the roof, the heavier the load it carries, so the forces are not channeled properly. The shape of the truss must be calculated correctly so that the truss is able to withstand the working load and no structural failure occurs.

Keywords: cold-formed, trusses, deflection, stiffness matrix method

Procedia PDF Downloads 166
591 Diagnosis of Gingivitis Based on Correlations of Laser Doppler Data and Gingival Fluid Cytology

Authors: A. V. Belousov, Yakushenko

Abstract:

One of the main problems of modern dentistry is developing a reliable method to detect inflammation of the gums, both at the diagnostic stage and when assessing treatment efficacy. We have proposed a method of gingival fluid sampling that successfully combines accessibility, exclusion of factors that irritate and damage the gingival sulcus, and reliable results (RF patent № 2342956, Method of gingival fluid intake). The subjects of the study were 75 student volunteers of the Dentistry Faculty, aged 20-21 years. The cellular composition of gingival fluid was studied using an Olympus CX 31 microscope (Japan), with calculation of the epithelial leukocyte index (ELI). Assessment of gingival microcirculation was performed using the LAKK-01 apparatus (Lazma, Moscow). The cytological investigation showed the high informativeness of the ELI, which reflected changes in the protective mechanisms of the gums: ELI increases when phagocytosis is inhibited and epithelial desquamation is activated. The cytological data correlate with the microcirculation indicators obtained by laser Doppler flowmetry. We have identified and confirmed correlations between laser Doppler flowmetry parameters and gingival fluid cytology data in patients with gingivitis.

Keywords: gingivitis, laser doppler flowmetry, gingival fluid cytology, epithelial leukocyte index (ELI)

Procedia PDF Downloads 328
590 Development of an Intervention Program for Moral Education of Undergraduate Students of Sport Sciences and Physical Education

Authors: Najia Zulfiqar

Abstract:

Imparting moral education is the need of the time, considering the obvious moral decline in society; recent research shows a decline of moral competence among university students. The main objective of the present study was to develop moral development intervention strategies for undergraduate students of Sports and Physical Education. Using an interpretative phenomenological approach, insight into field-specific moral issues was gained through interviews with 7 subject experts and a focus-group discussion session with 8 students. Two research assistants trained in qualitative interviewing collected and transcribed the data and analyzed it in the MAXQDA software using content and discourse analyses. The identified moral issues in Sports and Physical Education were sports gambling and betting, pay-for-play, doping, coach misconduct, tampering, cultural bias, gender equity/nepotism, bullying/discrimination, and harassment. Next, intervention modules were developed for each moral issue based on hypothetical situations, followed by guided reflection and dilemma discussion questions. The third moral development strategy was community service, which included posture screening, diet plans for different age groups, open-ground fitness training, exercise camps for physical fitness, balanced diet awareness camps, gymnastics camps, shoe assessment per health standards, and volunteering for public awareness at the playground, gymnasium, stadium, park, etc. The intervention modules were given for expert validation to four subject specialists from different backgrounds within Sport Sciences. Upon refinement and finalization, four students were presented with these intervention modules and questioned about accuracy, relevance, comprehension, and content organization. Iterative changes were made in the content of the intervention modules to tailor them to the moral development needs of undergraduate students.
This intervention will strengthen positive moral values and foster mature decision-making about right and wrong acts. As this intervention is easy to apply as a remedial tool, academicians and policymakers can use it to promote students' moral development.

Keywords: community service, dilemma discussion, morality, physical education, university students

Procedia PDF Downloads 72
589 A Low-Latency Quadratic Extended Domain Modular Multiplier for Bilinear Pairing Based on Non-Least Positive Multiplication

Authors: Yulong Jia, Xiang Zhang, Ziyuan Wu, Shiji Hu

Abstract:

The calculation of bilinear pairings is the core of the SM9 algorithm, which relies on the underlying prime field algorithms and the quadratic extension field algorithms. Among the field algorithms, the modular multiplication operation is the most time-consuming part; therefore, the underlying modular multiplication algorithm is optimized to maximize the speed of bilinear pairing computation. This paper uses a modular multiplication method based on the non-least positive (NLP) form, combined with Karatsuba and schoolbook multiplication, to improve the Montgomery algorithm. In addition, according to the characteristics of multiplication in the quadratic extension field, a quadratic extension field Fp₂-NLP modular multiplication algorithm for bilinear pairings is proposed, which effectively reduces the modular multiplication time in the quadratic extension field. The multiplication unit in the quadratic extension field is implemented in a SMIC 55 nm process, and two different implementation architectures are designed to cope with different application scenarios. Compared with the existing literature, the output latency of this design can reach a minimum of 15 cycles. The shortest time for calculating the (AB+CD)r⁻¹ mod p form is 37.5 ns, and the area-time product (AT) is 11400. The final R-ate pairing hardware accelerator consumes 2670k equivalent logic gates and 1.8 ms of computing time in the 55 nm process.
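One level of the Karatsuba splitting mentioned above, followed by a modular reduction, can be sketched as follows. The modulus is an illustrative 256-bit prime (Curve25519's field prime, not an SM9 parameter), and Python's `%` stands in for the Montgomery/NLP reduction performed in hardware.

```python
# Illustrative 256-bit prime (Curve25519's field prime, NOT the SM9 parameter)
P = 2**255 - 19

def karatsuba_mulmod(a, b, p=P, half=128):
    """One level of Karatsuba splitting: three half-width products
    instead of the four needed by schoolbook multiplication,
    followed by a reduction mod p (standing in for Montgomery/NLP)."""
    mask = (1 << half) - 1
    a_hi, a_lo = a >> half, a & mask
    b_hi, b_lo = b >> half, b & mask
    z2 = a_hi * b_hi                                  # high half product
    z0 = a_lo * b_lo                                  # low half product
    z1 = (a_hi + a_lo) * (b_hi + b_lo) - z2 - z0      # middle term
    return ((z2 << (2 * half)) + (z1 << half) + z0) % p
```

Replacing one full-width multiplication with three half-width ones is what shrinks the multiplier area and latency that the paper's architectures trade off.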

Keywords: SM9, hardware, NLP, Montgomery

Procedia PDF Downloads 5
588 Inversion of PROSPECT+SAIL Model for Estimating Vegetation Parameters from Hyperspectral Measurements with Application to Drought-Induced Impacts Detection

Authors: Bagher Bayat, Wouter Verhoef, Behnaz Arabi, Christiaan Van der Tol

Abstract:

The aim of this study was to follow the canopy reflectance patterns in response to soil water deficit and to detect trends of change in the biophysical and biochemical parameters of grass (Poa pratensis species). We used visual interpretation, imaging spectroscopy, and radiative transfer model inversion to monitor the gradual manifestation of water stress effects in a laboratory setting. Plots of 21 cm x 14.5 cm surface area with Poa pratensis plants that formed a closed canopy were subjected to water stress for 50 days. Canopy reflectance was measured on a regular weekly schedule. In addition, Leaf Area Index (LAI), chlorophyll (a+b) content (Cab), and Leaf Water Content (Cw) were measured at regular time intervals. The 1-D bidirectional canopy reflectance model SAIL, coupled with the leaf optical properties model PROSPECT, was inverted using the hyperspectral measurements by means of an iterative optimization method to retrieve the vegetation biophysical and biochemical parameters. The relationships of retrieved LAI, Cab, Cw, and Cs (senescent material) with soil moisture content were established in two separate groups: stressed and non-stressed. To differentiate the water stress condition from the non-stressed condition, a threshold was defined based on the laboratory-produced Soil Water Characteristic (SWC) curve. All parameters retrieved by model inversion using canopy spectral data showed good correlation with soil water content under the water stress condition. These parameters co-varied with soil moisture content under the stress condition (Cab: R² = 0.91, Cw: R² = 0.97, Cs: R² = 0.88, and LAI: R² = 0.48) at the canopy level. To validate the results, the relationship between the vegetation parameters measured in the laboratory and soil moisture content was established. The results were fully in agreement with the modeling outputs and confirmed the results produced by radiative transfer model inversion and spectroscopy.
Since water stress changes all parts of the spectrum, we concluded that analysis of the reflectance spectrum in the VIS-NIR-MIR region is a promising tool for monitoring water stress impacts on vegetation.
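The iterative optimization used for the model inversion can be illustrated with a toy example. The `toy_canopy_model` below is a made-up smooth function standing in for PROSPECT+SAIL, and the brute-force grid search stands in for a real optimizer; only the overall invert-by-minimizing-residuals structure matches the study.

```python
import math

def toy_canopy_model(lai, cab, bands):
    # Made-up smooth function of LAI and chlorophyll content used as a
    # stand-in for PROSPECT+SAIL; NOT the real radiative transfer model.
    return [(1.0 - math.exp(-0.6 * lai)) * math.exp(-cab / b) for b in bands]

def invert(measured, bands):
    # Brute-force iterative inversion: minimize the sum of squared
    # residuals over a parameter grid (a real study would use an
    # optimizer such as Levenberg-Marquardt instead).
    best = None
    for lai10 in range(1, 81):            # LAI from 0.1 to 8.0
        for cab in range(5, 101):         # Cab from 5 to 100 (arbitrary units)
            sim = toy_canopy_model(lai10 / 10.0, float(cab), bands)
            sse = sum((r - s) ** 2 for r, s in zip(measured, sim))
            if best is None or sse < best[0]:
                best = (sse, lai10 / 10.0, float(cab))
    return best[1], best[2]

bands = [450.0, 550.0, 680.0, 800.0]                 # wavelengths in nm
measured = toy_canopy_model(3.2, 40.0, bands)        # synthetic "measurement"
lai_hat, cab_hat = invert(measured, bands)
```

Recovering the synthetic truth exactly shows that the spectral bands constrain both parameters; in practice, noise and model error make the retrieved values approximate.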

Keywords: hyperspectral remote sensing, model inversion, vegetation responses, water stress

Procedia PDF Downloads 225
587 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137

Authors: Abdulsalam M. Alhawsawi

Abstract:

Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit a variety of energies, from low- to high-energy photons along the energy spectrum. Some photon energies are hard to obtain in lab settings, either because check sources are hard to acquire or because the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High Purity Germanium (HPGe) detector based on the 662 keV gamma-ray photon emitted by Cs-137. Cesium-137 is readily available in most labs with radiation detection and health physics applications and has a long half-life of ~30 years. Several photon efficiencies were calculated using the MCNP5 simulation code. The simulated efficiency of the 662 keV photon was used as a base to calculate other photon efficiencies for both a point-source and a Marinelli beaker geometry. For a Marinelli beaker filled with water, the efficiency for the 59 keV low-energy photons of Am-241 was estimated with a 9% error compared to the MCNP5 simulated efficiency. The 1.17 and 1.33 MeV high-energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable for calculating the activity of unknown samples, as they fall within the 95% confidence level.
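The transfer idea described above, anchoring a simulated efficiency curve to a single measured Cs-137 point, can be sketched as follows. All numerical values are hypothetical placeholders, not results from the paper.

```python
# Measured absolute full-energy-peak efficiency at 662 keV (assumed value)
eff_meas_662 = 0.0125

# Simulated full-energy-peak efficiencies from hypothetical MCNP5 runs,
# keyed by photon energy in keV (placeholder numbers, not the paper's)
eff_sim = {59.0: 0.0310, 662.0: 0.0120, 1173.0: 0.0082, 1332.0: 0.0074}

def efficiency(energy_kev):
    # Anchor the simulated curve to the measured Cs-137 point:
    # eff(E) = eff_meas(662) * sim(E) / sim(662)
    return eff_meas_662 * eff_sim[energy_kev] / eff_sim[662.0]

def activity_bq(net_count_rate, energy_kev, branching_ratio):
    # Activity (Bq) of an unknown sample from a net peak count rate
    # (counts/s), the photon branching ratio, and the estimated efficiency
    return net_count_rate / (efficiency(energy_kev) * branching_ratio)
```

The ratio sim(E)/sim(662) carries the energy dependence from the simulation, while the single measured point fixes the absolute scale, which is why only a Cs-137 check source is needed.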

Keywords: MCNP5, MonteCarlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137

Procedia PDF Downloads 116
586 Assessment of Hypersaline Outfalls via Computational Fluid Dynamics Simulations: A Case Study of the Gold Coast Desalination Plant Offshore Multiport Brine Diffuser

Authors: Mitchell J. Baum, Badin Gibbes, Greg Collecutt

Abstract:

This study details a three-dimensional field-scale numerical investigation conducted for the Gold Coast Desalination Plant (GCDP) offshore multiport brine diffuser. Quantitative assessment of diffuser performance with regard to trajectory, dilution, and mapping of seafloor concentration distributions was conducted for 100% plant operation. The quasi-steady Computational Fluid Dynamics (CFD) simulations were performed using the Reynolds-averaged Navier-Stokes equations with a k-ω shear stress transport turbulence closure scheme. The study complements a field investigation, which measured brine plume characteristics under similar conditions. CFD models used an iterative mesh in a domain 400 m long and 200 m wide, with an average depth of 24.2 m. Acoustic Doppler current profiler measurements conducted in the companion field study exhibited considerable variability over the water column, and the effect of this vertical variability on simulated discharge outcomes was examined. Seafloor slope was also accommodated in the model. Ambient currents varied predominantly in the longshore direction, perpendicular to the diffuser structure. Under these conditions, the alternating port orientation of the GCDP diffuser resulted in simultaneous subjection to co-propagating and counter-propagating ambient regimes. Results from quiescent ambient simulations suggest broad agreement with the empirical scaling arguments traditionally employed in design and regulatory assessments. Simulated dynamic ambient regimes showed that the influence of ambient crossflow upon jet trajectory, dilution, and seafloor concentration is significant. The effect of ambient flow structure and its subsequent influence on jet dynamics is discussed, along with the implications of using these different simulation approaches to inform regulatory decisions.

Keywords: computational fluid dynamics, desalination, field-scale simulation, multiport brine diffuser, negatively buoyant jet

Procedia PDF Downloads 214