Search results for: motor parameter estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4665


3345 Taguchi-Based Optimization of Surface Roughness and Dimensional Accuracy in Wire EDM Process with S7 Heat Treated Steel

Authors: Joseph C. Chen, Joshua Cox

Abstract:

This research applies the Taguchi method to reduce surface roughness and improve dimensional accuracy of parts machined by Wire Electrical Discharge Machining (EDM) from S7 heat-treated steel. Due to its high impact toughness, the material is a candidate for a wide variety of tooling applications that require high dimensional precision and a desired surface roughness. This paper demonstrates that the Taguchi Parameter Design methodology can successfully optimize both dimensional accuracy and surface roughness by investigating seven controllable wire-EDM parameters: pulse on time (ON), pulse off time (OFF), servo voltage (SV), voltage (V), servo feed (SF), wire tension (WT), and wire speed (WS). The temperature of the water in the wire EDM process is treated as the noise factor. Experimental design and analysis based on an L18 Taguchi orthogonal array are conducted. The resulting Taguchi-based system enables the wire EDM process to produce (1) high-precision parts with an average dimension of 0.6601 inches against a desired dimension of 0.6600 inches, and (2) a surface roughness of 1.7322 microns, significantly improved from 2.8160 microns.
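The optimization step in a Taguchi design ranks each control-factor level by a signal-to-noise (S/N) ratio; for responses such as surface roughness, the smaller-the-better form applies. A minimal sketch in Python; the replicate values below are hypothetical, not the paper's measurements:

```python
import math

def sn_smaller_is_better(values):
    """Smaller-the-better signal-to-noise ratio: SN = -10 * log10(mean(y^2))."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical roughness replicates (microns) for two L18 trial runs,
# e.g. measured at different water-temperature (noise factor) settings
run_a = [2.81, 2.79, 2.84]
run_b = [1.74, 1.72, 1.73]
```

The factor-level combination with the highest mean S/N across the L18 runs is selected; here `run_b` (lower roughness) scores higher.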

Keywords: Taguchi Parameter Design, surface roughness, Wire EDM, dimensional accuracy

Procedia PDF Downloads 373
3344 Study of Properties of Concretes Made of Local Building Materials and Containing Admixtures, and Their Further Introduction in Construction Operations and Road Building

Authors: Iuri Salukvadze

Abstract:

The development of the Georgian economy largely depends on the effective use of its potential as a transit country. The value of Georgia as part of the Europe-Asia corridor has increased, raising the interest of western and eastern countries in Georgia as a country lying on the transit axis, which implies the creation and development of transit infrastructure in Georgia. It is important to use compacted concrete with admixtures in the modern road construction industry. Even in the 21st century, concrete remains the main constructive building material, so innovative, economical and environmentally sound technologies are needed. The Georgian construction market requires concretes of a new generation and the adaptation of nanotechnologies to local realities, which will make it possible to create multifunctional, highly effective nano-technological materials. It is therefore highly important to study their physical and mechanical properties. The study of compacted concrete with admixtures is necessary for its future use in road construction and for increasing the durability of roads in Georgia. The aim of the research is to study the physical-mechanical properties of compacted concrete with admixtures based on local materials. Any experimental study needs, on the one hand, a large number of experiments to achieve high accuracy and, on the other, an optimal number of experiments with minimal cost in the shortest possible time. To solve this problem in practice, statistical and mathematical methods of experiment planning can be used. For the study of material properties, we will adopt the hypothesis that measurement results are normally distributed, under which the scatter of the obtained results is attributed to the error of the method and the inhomogeneity of the object.

As a result of the study, we expect to obtain a durable compacted concrete with admixtures for motor roads that will improve the road infrastructure and yield savings in the construction and operation of roads.

Keywords: construction, seismic protection systems, soil, motor roads, concrete

Procedia PDF Downloads 245
3343 Assessment of DNA Degradation Using Comet Assay: A Versatile Technique for Forensic Application

Authors: Ritesh K. Shukla

Abstract:

Degradation of biological samples at the level of macromolecules (DNA, RNA, and protein) is a major challenge in forensic investigation, as it can mislead result interpretation. Currently, there are no precise methods available to circumvent this problem, so at the preliminary level some methods are urgently needed to address it. In this regard, the comet assay is one of the most versatile, rapid and sensitive molecular biology techniques for assessing DNA degradation. The technique can assess DNA degradation even from a very small amount of sample. Moreover, conveniently, the method does not require any additional DNA extraction or isolation step during the assessment. Samples are embedded directly on an agarose pre-coated microscope slide, and electrophoresis is performed on the same slide after a lysis step. After electrophoresis, the slide is stained with a DNA-binding dye and observed under a fluorescence microscope equipped with Komet software. With this technique, the extent of DNA degradation can be assessed, allowing samples to be screened before DNA fingerprinting to determine whether they are suitable for DNA analysis. The technique may help not only to assess DNA degradation but also to address other forensic challenges, such as estimating the time since deposition of biological fluids, repairing genetic material from degraded samples, and early estimation of time since death. This study attempts to explore the application of the comet assay, a well-known molecular biology technique, in the field of forensic science. The assay will open avenues in forensic research and development.

Keywords: comet assay, DNA degradation, forensic, molecular biology

Procedia PDF Downloads 156
3342 Estimation of Normalized Glandular Doses Using a Three-Layer Mammographic Phantom

Authors: Kuan-Jen Lai, Fang-Yi Lin, Shang-Rong Huang, Yun-Zheng Zeng, Po-Chieh Hsu, Jay Wu

Abstract:

The normalized glandular dose (DgN) is used to estimate the energy deposited during mammography in clinical practice. Monte Carlo simulations frequently use a uniformly mixed phantom to calculate the conversion factor. However, breast tissues are not uniformly distributed, which leads to errors in the conversion factor estimation. This study constructed a three-layer phantom to estimate the normalized glandular dose more accurately. The MCNP code (Monte Carlo N-Particle code) was used to create the geometric structure. We simulated three target/filter combinations (Mo/Mo, Mo/Rh, Rh/Rh), six voltages (25-35 kVp), six HVL parameters, and nine breast phantom thicknesses (2-10 cm) for the three-layer mammographic phantom. The conversion factors for 25%, 50% and 75% glandularity were calculated. The error of the conversion factors compared with the results of the American College of Radiology (ACR) was within 6%; for Rh/Rh, the difference was within 9%. The difference between the 50% average glandularity phantom and the uniform phantom ranged from 7.1% to -6.7% for the Mo/Mo combination at a voltage of 27 kVp, a half-value layer of 0.34 mmAl, and a breast thickness of 4 cm. According to the simulation results, regression analysis showed that the three-layer mammographic phantom can be used to accurately calculate conversion factors at 0% to 100% glandularity. Differences in glandular tissue distribution lead to errors in conversion factor calculation; the three-layer mammographic phantom can provide more accurate estimates of glandular dose in clinical practice.

Keywords: Monte Carlo simulation, mammography, normalized glandular dose, glandularity

Procedia PDF Downloads 190
3341 [Keynote Talk]: Discovering Liouville-Type Problems for p-Energy Minimizing Maps in Closed Half-Ellipsoids by Calculus Variation Method

Authors: Lina Wu, Jia Liu, Ye Li

Abstract:

The goal of this project is to investigate constant properties (the Liouville-type problem) for a p-stable map as a local or global minimum of a p-energy functional, where the domain is a Euclidean space and the target space is a closed half-ellipsoid. The first and second variation formulas for the p-energy functional have been applied as computation techniques in the calculus of variations method. Stokes' theorem, the Cauchy-Schwarz inequality, Hardy-Sobolev type inequalities, and the Bochner formula have been used as estimation techniques to bound the derived p-harmonic stability inequality from below and above. One challenging point in this project is to construct a family of variation maps such that the images of the variation maps are guaranteed to lie in a closed half-ellipsoid. The other challenging point is to find a contradiction between the lower and upper bounds in the analysis of the p-harmonic stability inequality when a p-energy minimizing map is not constant. The possibility of a non-constant p-energy minimizing map is thereby ruled out, and the constant property for a p-energy minimizing map is obtained. Our research finding establishes the constant property for a p-stable map from a Euclidean space into a closed half-ellipsoid for a certain range of p. This range of p is determined by the dimensions of the Euclidean space (the domain) and the ellipsoid (the target space), and is also bounded by the curvature values of the ellipsoid (that is, the ratio of the longest axis to the shortest axis). Regarding Liouville-type results for a p-stable map, our finding on an ellipsoid generalizes mathematicians' results on a sphere. Our result also extends mathematicians' Liouville-type results from a special ellipsoid with only one parameter to any ellipsoid with (n+1) parameters in the general setting.
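For orientation, the p-energy functional whose minimizers are studied here, and the equation obtained by setting its first variation to zero, take the standard form used in the p-harmonic map literature (this is the usual notation, not formulas copied from the paper):

```latex
E_p(u) \;=\; \frac{1}{p}\int_{M} |du|^{p}\, dv_g , \qquad p \ge 2,
\qquad\text{critical points satisfy}\qquad
\operatorname{div}\!\left(|du|^{p-2}\, du\right) \;=\; 0 .
```

Stability then requires the second variation to be non-negative along every admissible family of variation maps, which is the source of the p-harmonic stability inequality analyzed in the abstract.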

Keywords: Bochner formula, calculus of variations, Stokes' theorem, Cauchy-Schwarz inequality, first and second variation formulas, Liouville-type problem, p-harmonic map

Procedia PDF Downloads 274
3340 Earnings vs Cash Flows: The Valuation Perspective

Authors: Megha Agarwal

Abstract:

The research paper compares earnings-based and cash-flow-based methods of valuing an enterprise. The theoretically equivalent methods based on earnings, such as the Residual Earnings Model (REM), Abnormal Earnings Growth Model (AEGM), Residual Operating Income Method (ReOIM), Abnormal Operating Income Growth Model (AOIGM) and their extensions to multipliers such as the price/earnings and price/book value ratios, and the cash-flow-based models, such as the Dividend Valuation Method (DVM) and Free Cash Flow Method (FCFM), all provide different estimates of the value of the Indian corporate giant Reliance India Limited (RIL). An ex-post analysis of published accounting and financial data for four financial years from 2008-09 to 2011-12 has been conducted. A comparison of these valuation estimates with the actual market capitalization of the company shows that the more complex accounting-based model, AOIGM, provides the closest forecasts. The differing estimates may arise from inconsistencies in discount rates, growth rates and the other forecasted variables. Although inputs for earnings-based models are available to investors and analysts through published statements, precise estimation of free cash flows may be better undertaken by internal management. Estimation of value from more stable parameters, such as residual operating income and RNOA, could be considered superior to valuations based on the more volatile return on equity.
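The Residual Earnings Model mentioned above values equity as current book value plus discounted residual earnings, V_0 = B_0 + sum_t (E_t - r*B_{t-1}) / (1+r)^t. A minimal sketch with hypothetical inputs (not RIL's figures):

```python
def rem_value(b0, earnings, opening_book, r):
    """Residual Earnings Model: V0 = B0 + sum_t (E_t - r*B_{t-1}) / (1+r)^t."""
    v = b0
    for t, (e, b_prev) in enumerate(zip(earnings, opening_book), start=1):
        v += (e - r * b_prev) / (1 + r) ** t  # discounted residual earnings
    return v

# Hypothetical two-year horizon: book value 100, cost of equity 10%
value = rem_value(100.0, earnings=[15.0, 16.0], opening_book=[100.0, 110.0], r=0.10)
```

If earnings exactly cover the equity charge (E_t = r*B_{t-1}), residual earnings are zero and the value collapses to book value, which is the sense in which the earnings-based and cash-flow-based formulations are theoretically equivalent.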

Keywords: earnings, cash flows, valuation, Residual Earnings Model (REM)

Procedia PDF Downloads 378
3339 Immunosupressive Effect of Chloroquine through the Inhibition of Myeloperoxidase

Authors: J. B. Minari, O. B. Oloyede

Abstract:

Polymorphonuclear neutrophils (PMNs) play a crucial role in a variety of infections caused by bacteria, fungi, and parasites. Indeed, the involvement of PMNs in host defence against Plasmodium falciparum is well documented both in vitro and in vivo. Many antimalarial drugs used in the treatment of human malaria, such as chloroquine, significantly reduce the immune response of the host in vitro and in vivo. Myeloperoxidase, the most abundant enzyme in the polymorphonuclear neutrophil, plays a crucial role in its function. This study was carried out to investigate the effect of chloroquine on the enzyme. The influence of concentration and pH, partition ratio estimation, and the kinetics of inhibition were studied. This study showed that chloroquine is a concentration-dependent inhibitor of myeloperoxidase with an IC50 of 0.03 mM. Partition ratio estimation showed that 40 enzymatic turnover cycles are required for complete inhibition of myeloperoxidase in the presence of chloroquine. The influence of pH showed significant inhibition of myeloperoxidase by chloroquine at physiological pH. The kinetic studies showed that chloroquine causes non-competitive inhibition with an inhibition constant Ki of 0.27 mM. The results obtained from this study show that chloroquine is a potent inhibitor of myeloperoxidase, capable of inactivating the enzyme. The inhibition of myeloperoxidase by chloroquine, as revealed in this study, may therefore partly explain the impairment of polymorphonuclear neutrophils and the consequent immunosuppression of the host defence system against secondary infections.
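The non-competitive inhibition reported here lowers the apparent Vmax without shifting Km; the rate law is v = Vmax*[S] / ((Km + [S]) * (1 + [I]/Ki)). A sketch using the abstract's Ki = 0.27 mM and otherwise hypothetical kinetic constants:

```python
def noncompetitive_rate(s, i, vmax=1.0, km=0.5, ki=0.27):
    """Non-competitive inhibition: v = Vmax*S / ((Km + S) * (1 + I/Ki)).
    Km and Vmax here are illustrative; Ki = 0.27 mM is the value from the study."""
    return vmax * s / ((km + s) * (1.0 + i / ki))

# At [I] = Ki the rate is exactly halved, whatever the substrate level:
uninhibited = noncompetitive_rate(0.5, 0.0)   # 0.5 at S = Km
at_ki = noncompetitive_rate(0.5, 0.27)        # half of that
```

The diagnostic feature of the non-competitive mechanism is that the half-saturation point (Km) is unchanged while the maximal rate scales down by 1/(1 + [I]/Ki).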

Keywords: myeloperoxidase, chloroquine, inhibition, neutrophil, immune

Procedia PDF Downloads 374
3338 Involvement of Nrf2 in Kolaviron-Mediated Attenuation of Behavioural Incompetence and Neurodegeneration in a Murine Model of Parkinson's Disease

Authors: Yusuf E. Mustapha, Inioluwa A Akindoyeni, Oluwatoyin G. Ezekiel, Ifeoluwa O. Awogbindin, Ebenezer O. Farombi

Abstract:

Background: Parkinson's disease (PD) is the most prevalent motor disorder. Available therapies are palliative, with no effect on disease progression. Kolaviron (KV), a natural anti-inflammatory and antioxidant agent, has been reported to exert neuroprotective effects in Parkinsonian flies and rats. Objective: The present study investigates the neuroprotective effect of KV, focusing on the DJ-1/Nrf2 signaling pathway. Methodology: All-trans retinoic acid (ATRA, 10 mg/kg, i.p.) was used to inhibit Nrf2. A murine model of PD was established with four doses of MPTP (20 mg/kg, i.p.) at 2-hour intervals. MPTP mice were pre-treated with KV (200 mg/kg/day, p.o.), ATRA, or both for seven days before PD induction. Motor behaviour was evaluated, and markers of oxidative stress/damage and their regulators were assessed with immunofluorescence and ELISA techniques. Results: MPTP-treated mice covered less distance, with fewer anticlockwise rotations, heightened freezing, and prolonged immobility compared to controls; KV significantly attenuated these deficits. Pretreatment of MPTP mice with KV upregulated Nrf2 expression beyond the MPTP level, with a remarkable reduction in Keap1 expression and a marked elevation of the DJ-1 level, whereas co-administration with ATRA abrogated these effects. KV treatment reversed MPTP-mediated depletion of endogenous antioxidants, striatal oxidative stress, oxidative damage, and inhibition of acetylcholinesterase activity. However, ATRA treatment potentiated acetylcholinesterase inhibition and attenuated the protective effect of KV on nitric oxide levels and on catalase and superoxide dismutase activities. Conclusion: Kolaviron protects Parkinsonian mice by stabilizing and activating the Nrf2 signaling pathway and can thus be explored as a pharmacological lead in PD management.

Keywords: Garcinia kola, Kolaviron, Parkinson Disease, Nrf2, behavioral incompetence, neurodegeneration

Procedia PDF Downloads 101
3337 Efficacy of Thrust on Basilar Spheno Synchondrosis in Boxers With Ocular Convergence Deficit. Comparison of Thrust and Therapeutic Exercise: Pilot Experimental Randomized Controlled Trial Study

Authors: Andreas Aceranti, Stefano Costa

Abstract:

The aim of this study was to demonstrate that manipulative treatment combined with therapeutic exercise is more effective than therapeutic exercise alone in the short-term treatment of eye convergence disorders in boxers. A pilot randomized controlled trial (RCT) was performed at our physiotherapy practices. 30 adult subjects who practice boxing were selected from an initial sample of 50 after a first screening based on the Convergence Insufficiency Symptom Survey (CISS) test (scores greater than or equal to 10). The 30 recruits were evaluated by an orthoptist using prisms to determine the diopters of each eye and were divided into two groups (experimental and control). Members of the experimental group underwent manipulation of the lateral strain of the sphenoid from the side contralateral to the eye with fewer diopters, followed immediately by a sequence of three ocular motor exercises. The control group received only the ocular motor treatment. A secondary outcome was also recorded, showing that changes in ocular motricity also affected cervical rotation. Analysis of the data showed that in the short term the experimental treatment was superior to the control to a statistically significant extent, both for the prismatic delta of the right eye (median 0 OT without manipulation vs. 10 OT with manipulation) and for the left eye (median 0 OT without manipulation vs. 5 OT with manipulation). Cervical rotation also improved more in the experimental group, with a median right rotation of 4° without manipulation and 6° with thrust, and a median left rotation of 2° without manipulation and 7° with thrust. These results indicate that the treatment was effective.

It would be desirable to increase the sample size and set up a timeline to see whether the clear short-term improvements are also maintained in the medium to long term.

Keywords: boxing, basilar spheno synchondrosis, ocular convergence deficit, osteopathic treatment

Procedia PDF Downloads 89
3336 The Origins of Representations: Cognitive and Brain Development

Authors: Athanasios Raftopoulos

Abstract:

In this paper, an attempt is made to explain the evolution and development of humans' representational arsenal from its humble beginnings to its modern abstract symbols. Representations are physical entities that represent something else. To represent a thing (in a general sense of “thing”) means to use, in the mind or in an external medium, a sign that stands for it. The sign can be used as a proxy of the represented thing when the thing is absent. Representations come in many varieties, from signs that perceptually resemble what they represent to abstract symbols that are related to their representata through conventions. Relying on the distinction among indices, icons, and symbols, it is explained how symbolic representations gradually emerged from indices and icons. To understand the development or evolution of our representational arsenal, one should examine the development of the cognitive capacities that enabled the gradual emergence of representations of increasing complexity and expressive capability. This examination should rely on a careful assessment of the available empirical neuroscientific and paleo-anthropological evidence, synthesized to produce arguments whose conclusions provide clues concerning the developmental process of our representational capabilities. The analysis of the empirical findings in this paper shows that Homo erectus was able to use both icons and symbols: icons as external representations, and symbols in language. The first step in the emergence of representations is that a purely causal sensory-motor schema involved in indices is decoupled from its normal causal sensory-motor functions and serves as a representation of the object that initially called it into play. Sensory-motor schemes are tied to specific contexts of organism-environment interaction and are activated only within these contexts.

For a representation of an object to be possible, this schema must be de-contextualized so that the same object can be represented in different contexts; a decoupled schema loses its direct ties to reality and becomes mental content. The analysis suggests that symbols emerged due to selection pressures of the social environment. The need to establish and maintain social relationships in ever-enlarging groups, to the benefit of the group, was a sufficient environmental pressure to lead to the appearance of the symbolic capacity. Symbols could serve this need because they can express abstract relationships, such as marriage or monogamy. Icons, being firmly attached to what can be observed, could not go beyond surface properties to express abstract relations. The cognitive capacities required for having iconic and then symbolic representations were present in Homo erectus, which had a language that started without syntactic rules but was structured so as to mirror the structure of the world. This language became increasingly complex, and grammatical rules appeared to allow the construction of the more complex expressions required to keep up with the increasing complexity of social niches. This created evolutionary pressures that eventually led to increased cranial size and a restructuring of the brain that allowed more complex representational systems to emerge.

Keywords: mental representations, iconic representations, symbols, human evolution

Procedia PDF Downloads 59
3335 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERD/ERS), typically recorded with electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, band-pass filters defined over a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques such as Common Spatial Patterns (CSP) are then used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches that decompose the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to reduce the computational cost of the processing step and make these systems more efficient without compromising classification accuracy. The proposal represents the EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier then represents the LDA outputs of each sub-band as scores and organizes them into a single vector, which is used as the training vector of a global SVM classifier.

Initially, the public EEG dataset IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact, with a dimension 68% smaller than the original signal, the resulting FFT matrix retains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost compared to filtering methods based on IIR filters, demonstrating the efficiency of the FFT in the filtering step. Finally, the frequency decomposition approach significantly improves the overall classification rate compared to the commonly used filtering, from 73.7% using IIR to 84.2% using FFT. The accuracy improvement of more than 10% and the reduction in computational cost indicate the potential of the FFT for EEG signal filtering in MI-based BCIs implementing SBCSP. Tests with other datasets are currently being performed to reinforce these conclusions.
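The core idea of replacing IIR band-pass filters with a frequency decomposition is that each sub-band's content can be read directly off the Fourier coefficients. A toy illustration in pure Python (a naive DFT stands in for the FFT; the band edges and signal are illustrative, not the paper's 33-sub-band configuration):

```python
import cmath
import math

def dft(x):
    """Naive DFT, standing in for the FFT for clarity (O(n^2))."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def subband_power(x, fs, lo, hi):
    """Power of x in the [lo, hi) Hz band, read off the frequency coefficients."""
    n = len(x)
    coeffs = dft(x)
    return sum(abs(coeffs[k]) ** 2
               for k in range(1, n // 2)          # positive frequencies only
               if lo <= k * fs / n < hi)

# Synthetic "epoch": a 10 Hz (mu-band) tone sampled at 128 Hz for 1 s
fs, n = 128, 128
x = [math.sin(2 * math.pi * 10 * t / fs) for t in range(n)]
mu = subband_power(x, fs, 8, 12)      # the tone's energy lands here
beta = subband_power(x, fs, 20, 24)   # essentially zero
```

In practice `numpy.fft` (an actual FFT) would be used; the point is that one decomposition serves every sub-band, instead of one IIR filter per sub-band.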

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 129
3334 Effect Analysis of an Improved Adaptive Speech Noise Reduction Algorithm in Online Communication Scenarios

Authors: Xingxing Peng

Abstract:

With the development of society, there are more and more online communication scenarios, such as teleconferencing and online education. In conference communication, voice quality is a very important component, and noise may greatly reduce the communication experience of the participants. Voice noise reduction therefore has an important impact on scenarios such as voice calls. This research focuses on the key technologies of the sound transmission process, with the goal of preserving audio quality as far as possible so that the listener hears clearer and smoother sound. To address the problem that traditional speech enhancement algorithms perform poorly on non-stationary noise, an adaptive speech noise reduction algorithm is studied in this paper. Traditional noise estimation methods mainly target stationary noise. Here, the spectral characteristics of different noise types, especially non-stationary burst noise, are studied, and a noise estimator module is designed to handle non-stationary noise. Noise features are extracted from non-speech segments, and the noise estimation module is adjusted in real time according to the noise characteristics. This adaptive algorithm can enhance speech according to different noise characteristics and improves on traditional algorithms in handling non-stationary noise, achieving a better enhancement effect. The experimental results show that the proposed algorithm is effective and adapts better to different types of noise, yielding a better speech enhancement effect.
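A minimal sketch of the two pieces described above: a noise spectrum estimate updated recursively during non-speech frames, and a per-bin suppression gain in the Wiener/spectral-subtraction family. The smoothing constant and gain floor are illustrative choices, not the paper's values:

```python
def update_noise_psd(noise_psd, frame_psd, alpha=0.9):
    """Exponential smoothing of the noise PSD, applied only on non-speech frames."""
    return [alpha * n + (1.0 - alpha) * p for n, p in zip(noise_psd, frame_psd)]

def suppression_gain(frame_psd, noise_psd, floor=1e-3):
    """Per-bin gain G = max(1 - N/P, floor); bins dominated by noise are floored."""
    return [max(1.0 - n / max(p, 1e-12), floor) for p, n in zip(frame_psd, noise_psd)]

# Two-bin toy frame: bin 0 carries speech energy, bin 1 is pure noise
gains = suppression_gain([10.0, 1.0], [1.0, 1.0])
```

The adaptivity of the scheme lies in `update_noise_psd`: because the noise estimate keeps tracking non-speech segments, the gain curve follows changes in the noise spectrum instead of assuming stationarity.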

Keywords: speech noise reduction, speech enhancement, self-adaptation, Wiener filter algorithm

Procedia PDF Downloads 59
3333 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis

Authors: Petr Gurný

Abstract:

One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the determination of a financial institution's PD according to credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks. These models are then compared and verified on a control sample in order to choose the best one. The second part of the paper applies the chosen model to a portfolio of three key Czech banks to assess their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. The second task of this paper is therefore to estimate the probability distribution of future PD for the Czech banks: the values of the particular indicators are sampled randomly, and the distribution of PDs is estimated under the assumption that the indicators follow a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all banks are relatively healthy, there is still a high chance that "a financial crisis" will occur, at least in terms of probability, as indicated by the various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
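Of the three credit-scoring models compared, the logit model has the simplest closed form: PD = 1 / (1 + exp(-(beta_0 + beta . x))). A sketch with entirely hypothetical coefficients and indicator values (the paper's estimated coefficients are not given in the abstract):

```python
import math

def pd_logit(x, beta0, betas):
    """Logit credit-scoring model: PD = 1 / (1 + exp(-(beta0 + beta . x)))."""
    z = beta0 + sum(b * xi for b, xi in zip(betas, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients: capital ratio and ROA lower PD, the NPL ratio raises it
beta0, betas = -2.0, [-3.0, -5.0, 8.0]
healthy = [0.15, 0.02, 0.01]   # [capital ratio, ROA, NPL ratio]
weak = [0.05, -0.01, 0.12]
```

Sampling the indicator vector `x` from an assumed multivariate distribution and pushing each draw through `pd_logit` is the basic mechanism behind the future-PD distribution described in the abstract.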

Keywords: credit-scoring models, multidimensional subordinated Lévy model, probability of default

Procedia PDF Downloads 456
3332 Hip Strategy in Dynamic Postural Control in Recurrent Ankle Sprain

Authors: Radwa Elshorbagy, Alaa Elden Balbaa, Khaled Ayad, Waleed Reda

Abstract:

Introduction: Ankle sprain is a common lower limb injury that is complicated by a high recurrence rate. The cause of recurrence is not clear; however, changes in motor control have been postulated. Objective: To determine the contribution of a proximal hip strategy to dynamic postural control in patients with recurrent ankle sprain. Methods: Fifteen subjects with recurrent ankle sprain (group A) and fifteen healthy control subjects (group B) participated in this study. Control by the abductor-adductor as well as flexor-extensor hip musculature was abolished by fatigue using the Biodex Isokinetic System. Dynamic postural control was measured before and after fatigue with the Biodex Balance System. Results: Repeated-measures MANOVA was used to compare between- and within-group differences. In group A, fatiguing of the hip muscles (flexors-extensors and abductors-adductors) significantly increased the overall stability index (OASI), anteroposterior stability index (APSI) and mediolateral stability index (MLSI) (p=0.00), whereas in group B fatiguing of the hip flexors-extensors significantly increased only OASI and APSI (p=0.017 and 0.010, respectively), while fatiguing of the hip abductors-adductors had no significant effect on these variables. Moreover, patients with ankle sprain had significantly lower dynamic balance after hip muscle fatigue compared to the control group. Specifically, after hip flexor-extensor fatigue, OASI, APSI and MLSI were significantly higher than the control values (p=0.002, 0.011, and 0.003, respectively), whereas after fatiguing of the hip abductors-adductors only OASI and APSI were significantly increased (p=0.012 and 0.026, respectively). Conclusion: To maintain dynamic balance, patients with recurrent ankle sprain seem to rely more on the hip strategy.

This means that these patients depend on a top-down rather than a bottom-up strategy. Clinical relevance: patients with recurrent ankle sprain are less efficient at maintaining dynamic postural control due to the change in motor strategies, indicating that health care providers and rehabilitation specialists should treat chronic ankle instability (CAI) as a global/central problem and not just as a simple local or peripheral injury.

Keywords: hip strategy, ankle strategy, postural control, dynamic balance

Procedia PDF Downloads 340
3331 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data

Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer

Abstract:

This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average of order one (BINARMA(1,1)) process with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) are derived iteratively from some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is conducted in which BINARMA(1,1) count data are generated using a multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through this simulation experiment, and the mean estimates of the model parameters are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, based on some covariates: policemen, daily patrol, speed cameras, traffic lights and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
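The innovation structure described above can be sketched numerically. Below is a minimal, hypothetical simulation of a BINARMA(1,1)-type pair of count series, assuming a common-shock construction for the correlated Poisson innovations and an INARMA(1,1)-style recursion per series; all parameter values are illustrative and not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(42)

def thin(count, prob):
    # binomial thinning operator: prob ∘ count = sum of Bernoulli(prob) over count
    return rng.binomial(count, prob)

def simulate_binarma(n, a1, a2, b1, b2, lam1, lam2, lam12):
    # correlated Poisson innovations via a common shock:
    # e_i = u_i + w, with w ~ Poisson(lam12) shared by both series
    u1 = rng.poisson(lam1, n)
    u2 = rng.poisson(lam2, n)
    w = rng.poisson(lam12, n)
    e1, e2 = u1 + w, u2 + w
    x1 = np.zeros(n, dtype=int)
    x2 = np.zeros(n, dtype=int)
    for t in range(1, n):
        # INARMA(1,1)-type recursion for each series
        x1[t] = thin(x1[t - 1], a1) + thin(e1[t - 1], b1) + e1[t]
        x2[t] = thin(x2[t - 1], a2) + thin(e2[t - 1], b2) + e2[t]
    return x1, x2

x1, x2 = simulate_binarma(5000, 0.4, 0.3, 0.2, 0.25, 1.0, 1.5, 0.5)
print(np.corrcoef(x1, x2)[0, 1])  # positive cross-correlation induced by w
```

Because only the shared shock `w` links the two series, the sample cross-correlation reflects the paper's assumption that dependence enters through the innovations alone.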

Keywords: non-stationary, BINARMA(1,1) model, Poisson innovations, conditional maximum likelihood (CML)

Procedia PDF Downloads 129
3330 Deliberation of Daily Evapotranspiration and Evaporative Fraction Based on Remote Sensing Data

Authors: J. Bahrawi, M. Elhag

Abstract:

Estimation of evapotranspiration is always a major component in water resources management. Traditional techniques of calculating daily evapotranspiration based on field measurements are valid only at local scales. Earth observation satellite sensors are thus used to overcome the difficulty of obtaining daily evapotranspiration measurements on a regional scale. The Surface Energy Balance System (SEBS) model was adopted to estimate daily evapotranspiration and relative evaporation along with other land surface energy fluxes. The model requires agro-climatic data that improve the model outputs. Advanced Along-Track Scanning Radiometer (AATSR) and Medium Resolution Imaging Spectrometer (MERIS) imagery was used to estimate daily evapotranspiration and relative evaporation over the entire Nile Delta region in Egypt, supported by meteorological data collected from six different weather stations located within the study area. Daily evapotranspiration maps derived from the SEBS model show strong agreement with ground-truth data taken from 92 points uniformly distributed over the study area. Moreover, daily evapotranspiration and relative evaporation are strongly correlated. Reliable estimation of daily evapotranspiration supports decision makers in reviewing current land use practices in terms of water management, while enabling them to propose proper land use changes.
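The upscaling step at the core of such energy-balance models can be illustrated with a short calculation. Assuming, as is standard in SEBS-type approaches, that the evaporative fraction (EF) stays roughly constant over the day, daily ET in mm/day follows from the daily-mean available energy; the numbers below are illustrative, not taken from this study:

```python
# upscaling an instantaneous evaporative fraction to daily ET (mm/day)
LAMBDA = 2.45e6   # latent heat of vaporization, J/kg
RHO_W = 1000.0    # density of water, kg/m^3

def daily_et_mm(ef, rn_daily_mean, g_daily_mean=0.0):
    """ef: evaporative fraction (-); rn, g: daily-mean fluxes in W/m^2."""
    le_daily = ef * (rn_daily_mean - g_daily_mean)         # latent heat flux, W/m^2
    return le_daily * 86400.0 / (LAMBDA * RHO_W) * 1000.0  # J/m^2/day -> mm/day

# e.g. EF = 0.7 and 150 W/m^2 daily-mean available energy
print(round(daily_et_mm(0.7, 150.0), 2))  # -> 3.7
```

The conversion simply turns the daily latent-heat energy into an equivalent depth of evaporated water, which is why EF and daily ET are so strongly correlated in the mapped outputs.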

Keywords: daily evapotranspiration, relative evaporation, SEBS, AATSR, MERIS, Nile Delta

Procedia PDF Downloads 261
3329 Secondary Compression Behavior of Organic Soils in One-Dimensional Consolidation Tests

Authors: Rinku Varghese, S. Chandrakaran, K. Rangaswamy

Abstract:

The standard one-dimensional consolidation test is used to find the consolidation behaviour of artificially prepared organic soils. Incremental loading (IL) tests were conducted on clay without and with organic matter. The study was conducted with soils of different organic content, keeping all other parameters constant. The tests were conducted on clay and artificially prepared organic soil samples at different vertical pressures, with a load increment ratio equal to one. The artificial organic soils were prepared by adding starch to the clay. To relate starch addition to organic content, 5% starch by weight was added to the clay (inorganic soil) sample and the corresponding change in the organic content of the soil was determined; expressed as a percentage by weight of starch, about 95% organic content was found in the soil sample. Accordingly, the percentage of organic content was fixed, and starch was added to the samples for testing in order to understand the consolidation behaviour of clayey soils with organic content. The results obtained from the IL tests were studied in detail. The main items investigated were (i) the coefficient of consolidation (cv), (ii) the coefficient of volume compression (mv), and (iii) the coefficient of permeability (k). The consolidation parameters obtained from the IL tests were used to determine the creep strain and creep parameter, and to predict their variation with vertical stress and organic content.

Keywords: consolidation, secondary compression, creep, starch

Procedia PDF Downloads 283
3328 The Contribution of Hip Strategy in Dynamic Postural Control in Recurrent Ankle Sprain

Authors: Radwa El Shorbagy, Alaa El Din Balbaa, Khaled Ayad, Waleed Reda

Abstract:

Introduction: Ankle sprain is a common lower limb injury that is complicated by a high recurrence rate. The cause of recurrence is not clear; however, changes in motor control have been postulated. Objective: to determine the contribution of the proximal hip strategy to dynamic postural control in patients with recurrent ankle sprain. Methods: Fifteen subjects with recurrent ankle sprain (group A) and fifteen healthy control subjects (group B) participated in this study. Control of the abductor-adductor as well as the flexor-extensor hip musculature was abolished by fatigue using the Biodex Isokinetic System. Dynamic postural control was measured before and after fatigue by the Biodex Balance System. Results: Repeated measures MANOVA was used to compare between- and within-group differences. In group A, fatiguing of the hip muscles (flexors-extensors and abductors-adductors) significantly increased the overall stability index (OASI), anteroposterior stability index (APSI) and mediolateral stability index (MLSI) (p = 0.00), whereas in group B fatiguing of the hip flexors-extensors significantly increased OASI and APSI only (p = 0.017, 0.010, respectively), while fatiguing of the hip abductors-adductors had no significant effect on these variables. Moreover, patients with ankle sprain had significantly lower dynamic balance after hip muscle fatigue compared to the control group. Specifically, after hip flexor-extensor fatigue, OASI, APSI and MLSI were increased significantly compared to control values (p = 0.002, 0.011, and 0.003, respectively), whereas fatiguing of the hip abductors-adductors significantly increased OASI and APSI only (p = 0.012, 0.026, respectively). Conclusion: To maintain dynamic balance, patients with recurrent ankle sprain seem to rely more on the hip strategy.
This means that these patients depend on a top-down rather than a bottom-up strategy. Clinical relevance: patients with recurrent ankle sprain are less efficient in maintaining dynamic postural control due to a change in motor strategies, indicating that health care providers and rehabilitation specialists should treat chronic ankle instability (CAI) as a global/central problem and not just as a simple local or peripheral injury.

Keywords: ankle sprain, fatigue hip muscles, dynamic balance

Procedia PDF Downloads 301
3327 A Hybrid Genetic Algorithm and Neural Network for Wind Profile Estimation

Authors: M. Saiful Islam, M. Mohandes, S. Rehman, S. Badran

Abstract:

The increasing necessity of wind power directs us to acquire precise knowledge of wind resources. Methodical investigation of potential locations is required for wind power deployment. High penetration of wind energy into the grid is leading to multi-megawatt installations with huge investment costs, which makes it essential to determine appropriate sites for wind farm operation. For accurate assessment, detailed examination of the wind speed profile, relative humidity, temperature and other geological or atmospheric parameters is required. Among all of the uncertainty factors influencing wind power estimation, vertical extrapolation of wind speed is perhaps the most difficult and critical one. Different approaches have been used for the extrapolation of wind speed to hub height, mainly based on the log law, the power law and various modifications of the two. This paper proposes an Artificial Neural Network (ANN) and Genetic Algorithm (GA) based hybrid model, namely GA-NN, for vertical extrapolation of wind speed. The model is simple in the sense that it does not require any parametric estimates such as the wind shear coefficient, roughness length or atmospheric stability, and it is also reliable compared to other methods. It uses available measured wind speeds at 10 m, 20 m and 30 m heights to estimate wind speeds up to 100 m. Good agreement is found between measured and estimated wind speeds at 30 m and 40 m, with approximately 3% mean absolute percentage error. Comparisons with an ANN alone and with the power law further prove the feasibility of the proposed method.
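The power-law baseline that the GA-NN model is compared against can be sketched as follows: the shear exponent is fitted to the lower measurement heights and then used to project the speed to hub height. The speed values here are hypothetical, not measurements from the paper:

```python
import numpy as np

def fit_alpha(heights, speeds, z_ref=10.0):
    # least-squares fit of the power-law shear exponent alpha:
    # v(z) = v_ref * (z / z_ref) ** alpha  =>  log-log linear regression
    log_h = np.log(np.asarray(heights) / z_ref)
    log_v = np.log(np.asarray(speeds) / speeds[0])
    return float(np.polyfit(log_h, log_v, 1)[0])

heights = [10.0, 20.0, 30.0]   # measurement heights, m
speeds = [5.0, 5.55, 5.91]     # hypothetical mean wind speeds, m/s

alpha = fit_alpha(heights, speeds)
v100 = speeds[0] * (100.0 / 10.0) ** alpha  # extrapolated speed at 100 m
print(round(alpha, 3), round(v100, 2))
```

Unlike this baseline, the GA-NN model needs no explicit shear exponent; the sketch shows exactly the kind of parametric estimate the hybrid model avoids.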

Keywords: wind profile, vertical extrapolation of wind, genetic algorithm, artificial neural network, hybrid machine learning

Procedia PDF Downloads 490
3326 Transfer of Constraints or Constraints on Transfer? Syntactic Islands in Danish L2 English

Authors: Anne Mette Nyvad, Ken Ramshøj Christensen

Abstract:

In the syntax literature, it has standardly been assumed that relative clauses and complement wh-clauses are islands for extraction in English, and that constraints on extraction from syntactic islands are universal. However, the Mainland Scandinavian languages have been known to provide counterexamples. Previous research on Danish has shown that neither relative clauses nor embedded questions are strong islands in Danish. Instead, extraction from this type of syntactic environment is degraded due to structural complexity, and it interacts with non-structural factors such as the frequency of occurrence of the matrix verb, the possibility of temporary misanalysis leading to semantic incongruity, and exposure over time. We argue that these facts can be accounted for with parametric variation in the availability of CP-recursion, resulting in the patterns observed, as Danish would then “suspend” the ban on movement out of relative clauses and embedded questions. Given that Danish does not seem to adhere to allegedly universal syntactic constraints, such as the Complex NP Constraint and the Wh-Island Constraint, what happens in L2 English? We present results from a study investigating how native Danish speakers judge extractions from island structures in L2 English. Our findings suggest that Danes transfer their native-language parameter setting when asked to judge island constructions in English. This is compatible with the Full Transfer Full Access Hypothesis, which predicts that Danish speakers would have difficulty resetting their [+/- CP-recursion] parameter in English because they are not exposed to negative evidence.

Keywords: syntax, islands, second language acquisition, Danish

Procedia PDF Downloads 127
3325 Biosurfactants Produced by Antarctic Bacteria with Hydrocarbon Cleaning Activity

Authors: Claudio Lamilla, Misael Riquelme, Victoria Saez, Fernanda Sepulveda, Monica Pavez, Leticia Barrientos

Abstract:

Biosurfactants are compounds synthesized by microorganisms that show various chemical structures, including glycolipids, lipopeptides, polysaccharide-protein complexes, phospholipids, and fatty acids. These molecules have attracted attention in recent years due to their amphipathic nature, which allows their application in various activities related to emulsification, foaming, detergency, wetting, dispersion and solubilization of hydrophobic compounds. Microorganisms that produce biosurfactants are ubiquitous, present not only in water, soil, and sediments but also under extreme conditions of pH, salinity or temperature such as those found in Antarctic ecosystems. It is therefore of interest to study biosurfactant-producing bacterial strains isolated from Antarctic environments, with the potential to be used in various biotechnological processes. The objective of this research was to characterize biosurfactants produced by bacterial strains isolated from Antarctic environments, with potential use in biotechnological processes for the cleaning of sites contaminated with hydrocarbons. The samples were collected from soils and sediments in the South Shetland Islands and the Antarctic Peninsula during the Antarctic Research Expedition INACH 2016, from both pristine and human-influenced areas. Bacterial isolation was performed on solid R2A, M1 and LB media. Biosurfactant-producing strains were selected by hemolysis tests on blood agar plates (5%) and blue agar (CTAB). From 280 isolates, 10 bacterial strains were found to produce biosurfactants after stimulation with different carbon sources. The 16S rDNA taxonomic marker, amplified with the universal primers 27F-1492R, was used to identify these bacteria.
Biosurfactant production was carried out in 250 ml flasks using Bushnell Haas liquid culture medium enriched with different carbon sources (olive oil, glucose, glycerol, and hexadecane) for seven days under constant stirring at 20°C. Each cell-free supernatant was characterized by physicochemical parameters including drop collapse, emulsification and oil displacement, as well as stability at different temperatures, salinities, and pH. In addition, the surface tension of each supernatant was quantified using a tensiometer. The strains with the highest activity were selected, and biosurfactant production was scaled up to six liters of culture medium. Biosurfactants were extracted from the supernatants with chloroform:methanol (2:1). These biosurfactants were tested against crude oil and motor oil to evaluate their displacement (detergency) activity. The physicochemical characterization of the 10 supernatants showed that 80% of them produced drop collapse, 60% were stable at different temperatures, and 90% had detergency activity on motor and olive oil. The biosurfactants obtained from two bacterial strains showed high dispersion activity on crude oil and motor oil, with halos larger than 10 cm. We can conclude that bacteria isolated from Antarctic soils and sediments provide high-quality biological material for the production of biosurfactants, with potential applications in the biotechnological industry, especially in areas contaminated with hydrocarbons such as petroleum.

Keywords: antarctic, bacteria, biosurfactants, hydrocarbons

Procedia PDF Downloads 279
3324 A Study of the Atlantoaxial Fracture or Dislocation in Motorcyclists with Helmet Accidents

Authors: Shao-Huang Wu, Ai-Yun Wu, Meng-Chen Wu, Chun-Liang Wu, Kai-Ping Shaw, Hsiao-Ting Chen

Abstract:

Objective: To analyze forensic autopsy data of known passengers, compare it with the national autopsy report database for 2017, and obtain the special patterned injuries that can be used as a reference for the reconstruction of hit-and-run motor vehicle accidents. Methods: The items of the Motor Vehicle Accident Report were analyzed, including date of accident, time occurred, day, accident severity, accident location, accident class, collision with vehicle, motorcyclist codes, safety equipment use, etc. The items of the Autopsy Report were analyzed, including general description, clothing and valuables, external examination, head and neck trauma, trunk trauma, other injuries, internal examination, associated items, autopsy determinations, etc. Materials: Case 1. The process of injury formation: the car was chased forward and collided with the scooter. The passenger, wearing a helmet, fell to the ground. The helmet was crushed under the bottom of the sedan, and the bottom of the sedan was raised. Additionally, the sedan was hit on the left by another sedan from behind, resulting in the front sedan turning 180 degrees on the spot. The passenger's head was rotated, and the cervical spine was fractured. Injuries: 1. Fracture of the atlantoaxial joint 2. Fracture of the left clavicle, scapula, and proximal humerus 3. Fracture of the 1st-10th left ribs and 2nd-7th right ribs with lung contusion and hemothorax 4. Fracture of the transverse processes of the 2nd-5th lumbar vertebrae 5. Comminuted fracture of the right femur 6. Suspected subarachnoid and subdural hemorrhage 7. Laceration of the spleen. Case 2. The process of injury formation: the motorcyclist, wearing a helmet, fell to the left by himself, and his chest was crushed by a car going straight. Only his upper body was under the car, and the helmet finally fell off. Injuries: 1. Dislocation of the atlantoaxial joint 2. Laceration on the left posterior occipital region 3. Laceration on the left frontal region 4.
Laceration on the left side of the chin 5. Strip bruising on the anterior neck 6. Open rib fracture of the right chest wall 7. Comminuted fractures of both 1st-12th ribs 8. Fracture of the sternum 9. Rupture of the left lung 10. Rupture of the left and right atria, heart apex and several large vessels 11. The aortic root nearly transected 12. Severe rupture of the liver. Results: The common features of the two cases were fracture or dislocation of the atlantoaxial joint, and both helmets were crushed. There were no atlantoaxial fractures or dislocations among the 27 pedestrian-versus-motor-vehicle cases (without helmets) in the 2017 national autopsy report database, but there were two atlantoaxial fracture or dislocation cases in the database, both of which were falls from height. Conclusion: The cervical spine fracture of a motorcyclist who was wearing a helmet is very likely to be a patterned injury caused by his/her fall and rollover under a sedan. This could provide a reference for forensic peers.

Keywords: patterned injuries, atlantoaxial fracture or dislocation, accident reconstruction, motorcycle accident with helmet, forensic autopsy data

Procedia PDF Downloads 93
3323 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using Six Sigma methodologies to achieve the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the Six Sigma methodology, i.e. the define, measure, analyze, improve, and control (DMAIC) approach, was implemented in this study. The Six Sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by the customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the Six Sigma and Taguchi methodologies can be efficiently used to determine the important factors that improve the process capability index of the injection molding process.
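Taguchi analyses of this kind typically rank experimental runs by a signal-to-noise (S/N) ratio; for a shrinkage target, the nominal-the-best form is the usual choice. A minimal sketch with hypothetical replicate measurements (not data from this study):

```python
import math

def sn_nominal_the_best(values):
    # Taguchi nominal-the-best S/N ratio: 10 * log10(mean^2 / variance);
    # higher is better (on target with little scatter across noise conditions)
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 10.0 * math.log10(mean ** 2 / var)

# hypothetical shrinkage measurements (%) for one L9 run, replicated under
# the two noise conditions (material brand 1 / material brand 2)
run_replicates = [1.52, 1.48, 1.50, 1.55]
print(round(sn_nominal_the_best(run_replicates), 2))  # -> 34.09
```

Averaging this ratio per factor level across the nine runs is what identifies the optimal settings confirmed in the confirmation run.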

Keywords: injection molding, shrinkage, six sigma, Taguchi parameter design

Procedia PDF Downloads 179
3322 Diagnosing and Treating Breast Cancer during Pregnancy: Neonatal Outcomes after Chemotherapy

Authors: Elyce Cardonick, Shistri Dhar, Linsdey Seidman

Abstract:

Background: When breast cancer is diagnosed during pregnancy, the prognosis is comparable to that of non-pregnant women matched for prognostic indicators, provided pregnant women receive treatment without delay. Chemotherapy, including taxanes, can be given during pregnancy with normal neonatal development in exposed fetuses. Methods: Cases of primary breast cancer were extracted from the Cancer and Pregnancy Registry and longitudinal study at Cooper Medical School, which collects cases of pregnant women diagnosed with and treated for cancer into a single database. Obstetrical, oncology and pediatric records were reviewed, including annual neonatal developmental, behavioral and medical assessments. Results: 270 pregnant women were diagnosed with primary breast cancer at a mean gestational age of 14.7 ± 9 weeks. Mean maternal age at diagnosis was 34.5 ± 4.5 years. Receptor status was comparable to that of non-pregnant women of reproductive age. Forty-nine women were advised to terminate. Two hundred and two women underwent surgery; 244 women received chemotherapy in pregnancy after the first trimester, the majority Doxorubicin/Cytoxan; 81 of the cases included a taxane. At a mean of 90 months, follow-up was obtained on 255 newborns; 192/255 newborns are meeting developmental milestones. Respiratory illnesses, including asthma and bronchiolitis, were reported in 64 newborns, the most common medical conditions reported. Thirty-one children are undergoing treatment for GERD, 11 for urinary tract infections, and 7 for anemia. Twenty-six children have expressive or articulation language delays, 21 of which are mild. Eleven children have gross motor delays and 7 have fine motor delays. Eight children are treated for ADHD, 4 for anxiety, and 4 have social skill impairment. The majority of children with developmental, language or motor delays were born preterm.
Conclusion: After chemotherapy exposure in utero for breast cancer, the majority of newborns are meeting developmental milestones and are medically healthy. The goal in treating pregnant women with breast cancer is to aim for delivery close to term.

Keywords: breast cancer, pregnancy, chemotherapy, newborn

Procedia PDF Downloads 117
3321 A Preliminary Kinematic Comparison of Vive and Vicon Systems for the Accurate Tracking of Lumbar Motion

Authors: Yaghoubi N., Moore Z., Van Der Veen S. M., Pidcoe P. E., Thomas J. S., Dexheimer B.

Abstract:

Optoelectronic 3D motion capture systems, such as the Vicon kinematic system, are widely utilized in biomedical research to track joint motion. These systems are considered powerful and accurate measurement tools with <2 mm average error. However, these systems are costly and may be difficult to implement and utilize in a clinical setting. 3D virtual reality (VR) is gaining popularity as an affordable and accessible tool to investigate motor control and perception in a controlled, immersive environment. The HTC Vive VR system includes puck-style trackers that seamlessly integrate into its VR environments. These affordable, wireless, lightweight trackers may be more feasible for clinical kinematic data collection. However, the accuracy of HTC Vive Trackers (3.0), when compared to optoelectronic 3D motion capture systems, remains unclear. In this preliminary study, we compared the HTC Vive Tracker system to a Vicon kinematic system in a simulated lumbar flexion task. A 6-DOF robot arm (SCORBOT ER VII, Eshed Robotec/RoboGroup, Rosh Ha’Ayin, Israel) completed various reaching movements to mimic increasing levels of hip flexion (15°, 30°, 45°). Light reflective markers, along with one HTC Vive Tracker (3.0), were placed on the rigid segment separating the elbow and shoulder of the robot. We compared position measures simultaneously collected from both systems. Our preliminary analysis shows no significant differences between the Vicon motion capture system and the HTC Vive tracker in the Z axis, regardless of hip flexion. In the X axis, we found no significant differences between the two systems at 15 degrees of hip flexion but minimal differences at 30 and 45 degrees, ranging from .047 cm ± .02 SE (p = .03) at 30 degrees hip flexion to .194 cm ± .024 SE (p < .0001) at 45 degrees of hip flexion. In the Y axis, we found a minimal difference for 15 degrees of hip flexion only (.743 cm ± .275 SE; p = .007). 
This preliminary analysis shows that the HTC Vive Tracker may be an appropriate, affordable option for gross motor motion capture when the Vicon system is not available, such as in clinical settings. Further research is needed to compare these two motion capture systems in different body poses and for different body segments.

Keywords: lumbar, Vive tracker, Vicon system, 3D motion, ROM

Procedia PDF Downloads 102
3320 Physical Fitness Normative Reference Values among Lithuanian Primary School Students: Population-Based Cross-Sectional Study

Authors: Brigita Mieziene, Arunas Emeljanovas, Vida Cesnaitiene, Ingunn Fjortoft, Lise Kjonniksen

Abstract:

Background. Health-related physical fitness refers to favorable health status, i.e. the ability to perform daily activities with vigor, as well as capacities that are associated with a low risk of developing chronic diseases and premature death. However, in school-aged children it is constantly declining; some aspects of fitness have declined by as much as 50 percent during the last two decades, which forecasts an increasingly earlier onset of health problems, decreasing quality of life for the population and a financial burden for society. Therefore, the goal of the current study was to provide nationally representative age- and gender-specific reference values of anthropometric measures and musculoskeletal, motor and cardiorespiratory fitness in Lithuanian primary school children aged 6 to 10 years. Methods. The study included 3556 students in total, from 73 randomly selected schools. Ethics approval for the research was obtained from the Kaunas Regional Ethics Committee (No. BE-2-42). Physical fitness was measured by the 9-item test battery developed by Fjørtoft and colleagues. Height and weight were measured and body mass index calculated. Smoothed centile charts were derived using the LMS method. Results. Numerical age- and gender-specific percentile values (3rd, 10th, 25th, 50th, 75th, 90th, and 97th percentiles) for anthropometric measures and musculoskeletal, motor and cardiorespiratory fitness are provided, together with the equivalent smoothed LMS curves. The study indicated 12.5 percent overweight and 5 percent obese children according to international gender- and age-specific norms of body mass index. These data can be used in clinical and educational settings to identify the level of individual physical fitness within its different components.
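The LMS method referenced above converts a raw measurement into a z-score and centile via Cole's transformation. A minimal sketch follows; the L, M, S values used here are hypothetical, not reference values from this study:

```python
from math import erf, log, sqrt

def lms_zscore(x, L, M, S):
    # Cole's LMS transformation: z = ((x/M)**L - 1) / (L*S) for L != 0,
    # with the limiting form z = ln(x/M) / S as L -> 0
    if abs(L) > 1e-9:
        return ((x / M) ** L - 1.0) / (L * S)
    return log(x / M) / S

def centile(z):
    # standard normal CDF expressed with erf, scaled to a percentage
    return 50.0 * (1.0 + erf(z / sqrt(2.0)))

# hypothetical LMS parameters for BMI at a given age/gender
z = lms_zscore(x=21.0, L=-1.6, M=16.0, S=0.11)
print(round(z, 2), round(centile(z), 1))
```

The same transformation, applied in reverse at fixed z-scores, is what generates the smoothed 3rd-97th centile curves reported in the study.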

Keywords: fitness, overweight, primary school children, reference values, smoothed percentile curves

Procedia PDF Downloads 163
3319 Parameter Fitting of the Discrete Element Method When Modeling the DISAMATIC Process

Authors: E. Hovad, J. H. Walther, P. Larsen, J. Thorborg, J. H. Hattel

Abstract:

In sand casting of metal parts for the automotive industry, such as brake disks and engine blocks, the molten metal is poured into a sand mold to obtain its final shape. The DISAMATIC molding process is a way to construct these sand molds for the casting of steel parts, and in the present work numerical simulations of this process are presented. During the process, green sand is blown into a chamber and subsequently squeezed to finally obtain the sand mold. The sand flow is modelled with the Discrete Element Method (DEM), and obtaining the correct material parameters for the simulation is the main goal. Different tests will be used to find or calibrate the DEM parameters needed: Poisson's ratio, Young's modulus, rolling friction coefficient, sliding friction coefficient and coefficient of restitution (COR). Young's modulus and Poisson's ratio are found from compression tests of the bulk material and subsequently used in the DEM model according to the Hertz-Mindlin model. The main focus will be on calibrating the rolling resistance and sliding friction in the DEM model with respect to the behavior of real sand piles. More specifically, the surface profile of the real sand pile will be compared to the sand pile predicted with the DEM for different values of the rolling and sliding friction coefficients. Once the DEM parameters are found for the particle-particle (sand-sand) interaction, the particle-wall interaction parameter values are also found. Here the sliding coefficient will be found from experiments, and the rolling resistance is investigated by comparing with observations of how the green sand interacts with the chamber wall during experiments; the DEM simulations will be calibrated accordingly. The coefficient of restitution will be tested with different values in the DEM simulations and compared to video footage of the DISAMATIC process.
Energy dissipation will be investigated in these simulations for different particle sizes and coefficients of restitution, where scaling laws will be considered to relate the energy dissipation to these parameters. Finally, the parameter values found are used in the overall discrete element model and compared to the video footage of the DISAMATIC process.
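The profile-matching step of such a calibration can be framed as a simple search: for each candidate friction pair, compare the simulated pile surface profile against the measured one and keep the pair with the smallest error. A toy sketch with made-up profiles follows; in practice the candidate profiles would come from the DEM runs and the measured one from the sand-pile experiments:

```python
import numpy as np

# hypothetical pile height samples h(r) along the pile radius, in meters
measured = np.array([0.00, 0.012, 0.025, 0.036, 0.045, 0.050])

# simulated profiles keyed by (rolling friction, sliding friction) - made up
candidates = {
    (0.1, 0.3): np.array([0.00, 0.008, 0.017, 0.025, 0.031, 0.035]),
    (0.3, 0.5): np.array([0.00, 0.011, 0.024, 0.035, 0.044, 0.049]),
    (0.5, 0.7): np.array([0.00, 0.015, 0.031, 0.045, 0.056, 0.062]),
}

def rmse(a, b):
    # root-mean-square error between two surface profiles
    return float(np.sqrt(np.mean((a - b) ** 2)))

# pick the friction pair whose simulated pile best matches the measured one
best = min(candidates, key=lambda k: rmse(candidates[k], measured))
print(best)  # -> (0.3, 0.5)
```

A finer grid or an optimizer over the two coefficients follows the same pattern; the point is that the calibration reduces to minimizing a profile-mismatch metric.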

Keywords: discrete element method, physical properties of materials, calibration, granular flow

Procedia PDF Downloads 482
3318 Atmospheric CO2 Capture via Temperature/Vacuum Swing Adsorption in SIFSIX-3-Ni

Authors: Eleni Tsalaporta, Sebastien Vaesen, James M. D. MacElroy, Wolfgang Schmitt

Abstract:

Carbon dioxide capture has attracted the attention of many governments, industries and scientists over the last few decades due to the rapid increase in the atmospheric CO2 concentration, with several studies conducted in this area over the last few years. In many of these studies, CO2 capture in complex Pressure Swing Adsorption (PSA) cycles has been associated with high energy consumption despite the promising capture performance of such processes. The purpose of this study is the economic capture of atmospheric carbon dioxide for its transformation into a clean type of energy. A single-column Temperature/Vacuum Swing Adsorption (TSA/VSA) process is proposed as an alternative to multi-column Pressure Swing Adsorption (PSA) processes. The proposed adsorbent is SIFSIX-3-Ni, a newly developed MOF (Metal-Organic Framework) with extended CO2 selectivity and capacity. There are three stages involved in this paper: (i) SIFSIX-3-Ni is synthesized and pelletized, and its physical and chemical properties are examined before and after the pelletization process; (ii) experiments are designed and undertaken to estimate the diffusion and adsorption parameters and limitations for CO2 captured from air; and (iii) the CO2 adsorption capacity and dynamic characteristics of SIFSIX-3-Ni are investigated both experimentally and mathematically by employing a single-column TSA/VSA for the capture of atmospheric CO2. This work is further supported by a techno-economic study estimating the investment cost and energy consumption of the single-column TSA/VSA process. The simulations are performed using gPROMS.

Keywords: carbon dioxide capture, temperature/vacuum swing adsorption, metal organic frameworks, SIFSIX-3-Ni

Procedia PDF Downloads 263
3317 Estimation of World Steel Production by Process

Authors: Reina Kawase

Abstract:

World GHG emissions should be reduced by 50% by 2050 compared with the 1990 level. CO2 emission reduction from the steel sector, an energy-intensive sector, is essential. To estimate CO2 emissions from the steel sector worldwide, an estimation of steel production is required. Here, world steel production by process is estimated for the period 2005-2050, with the world divided into 35 aggregated regions. For the steel making process, two kinds of processes are considered: basic oxygen furnace (BOF) and electric arc furnace (EAF). Steel production by process in each region is decided based on current production capacity, the supply-demand balance of steel and scrap, technology innovation in steel making, steel consumption projections, and goods trade. World steel production under a moderate countermeasure scenario in 2050 increases by a factor of 1.3 compared with 2012. When domestic scrap recycling is promoted, steel production in developed regions increases about 1.5 times, and the share of developed regions changes from 34% (2012) to about 40% (2050), because developed regions are the main suppliers of scrap. 48-57% of world steel production is produced by EAF. Under the scenario that emphasizes the supply-demand balance of steel, steel production in developing regions increases 1.4 times and is larger than that in developed regions; the share of developing regions, however, is not so different from the current level. The increase in steel production by EAF is largest under the scenario in which the supply-demand balance of steel is an important factor, where the share reaches 65%.

Keywords: global steel production, production distribution scenario, steel making process, supply-demand balance

Procedia PDF Downloads 452
3316 Formula Student Car: Design, Analysis and Lap Time Simulation

Authors: Rachit Ahuja, Ayush Chugh

Abstract:

Aerodynamic forces and moments, as well as tire-road forces, largely affect the maneuverability of a vehicle. Car manufacturers are strongly influenced by the aerodynamic improvements made in formula cars, and there is a constant effort to apply these improvements to road vehicles. In motor racing, the key differentiating factor of a high-performance car is its ability to maintain the highest possible acceleration in the appropriate direction, so one of the main concerns is balancing the aerodynamic forces and streamlining the flow of air across the body of the vehicle. At present, formula racing cars are regulated by stringent FIA norms, with constraints on vehicle dimensions, engine capacity, etc., so aerodynamics is one of the fields with the largest scope for improvement. In this project, an attempt has been made to design a Formula Student (FS) car, improve its aerodynamic characteristics through steady-state CFD simulations, and calculate its lap time. Initially, a CAD model of the car is created in SOLIDWORKS as per the given dimensions, and a steady-state external airflow simulation is performed on the baseline model, without any add-on devices, to analyze the airflow pattern around the car and evaluate the aerodynamic forces using the FLUENT solver. A detailed survey of add-on devices used in racing applications, such as the front wing, diffuser, shark fin, and T-wing, is made, and geometric models of these devices are created and assembled with the baseline model. Steady-state CFD simulations are then run on the modified car to evaluate the aerodynamic effects of these add-on devices, and lap times of the car with and without the devices are compared with the help of MATLAB.
Aerodynamic performance measures, namely lift, drag, and their coefficients, are evaluated for different configurations and designs of the add-on devices at different vehicle speeds. Parametric CFD simulations of the car fitted with add-on devices show a considerable reduction in drag and lift forces, besides streamlining the airflow across the car. The best configuration of the add-on devices is obtained from these simulations, and their use shows an improvement in the performance of the car, which can be compared through lap time simulations.
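The relation between the evaluated coefficients and the forces fed into a lap time simulation is the standard quasi-steady one, F = ½ρv²AC. The density, speed, frontal area, and coefficient values below are assumed round numbers for illustration, not results from the paper's CFD runs.

```python
def aero_forces(rho, v, area, c_d, c_l):
    """Quasi-steady aerodynamic drag and downforce in newtons: F = 0.5*rho*v^2*A*C."""
    q = 0.5 * rho * v**2            # dynamic pressure, Pa
    return q * area * c_d, q * area * c_l

# Illustrative inputs: sea-level air, 25 m/s, 1.1 m^2 frontal area,
# assumed C_d = 0.9 and C_l (downforce) = 1.8 for a winged FS car
drag, downforce = aero_forces(rho=1.225, v=25.0, area=1.1, c_d=0.9, c_l=1.8)
print(f"drag = {drag:.0f} N, downforce = {downforce:.0f} N")
```

In a point-mass lap time simulation, the downforce term raises the tire normal load (and hence cornering grip) while the drag term caps straight-line acceleration, which is exactly the trade-off the add-on device study explores.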

Keywords: aerodynamic performance, front wing, lap time simulation, T-wing

Procedia PDF Downloads 198