Search results for: finite memory filter

731 Sensitivity Analysis of Principal Stresses in Concrete Slab of Rigid Pavement Made From Recycled Materials

Authors: Aleš Florian, Lenka Ševelová

Abstract:

A complex sensitivity analysis of stresses in a concrete slab of a real type of rigid pavement made from recycled materials is performed. The computational model of the pavement is designed as a spatial (3D) model and is based on a nonlinear variant of the finite element method that respects the structural nonlinearity, makes it possible to model different arrangements of joints, and allows the entire model to be subjected to thermal loading. Interaction of adjacent slabs in joints and contact between the slab and the underlying layer are modeled with the help of special contact elements. Four concrete slabs separated by transverse and longitudinal joints, the additional structural layers, and the soil to a depth of about 3 m are modeled. The thicknesses of individual layers, physical and mechanical properties of materials, characteristics of joints, and the temperatures of the upper and lower surfaces of the slabs are assumed to be random variables. The modern simulation technique Updated Latin Hypercube Sampling with 20 simulations is used. For the sensitivity analysis, a sensitivity coefficient based on the Spearman rank correlation coefficient is utilized. As a result, estimates of the influence of the random variability of individual input variables on the random variability of the principal stresses σ1 and σ3 at 53 points on the upper and lower surfaces of the concrete slabs are obtained.
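
The following is a minimal Python sketch (not the authors' code) of the Spearman-rank sensitivity computation described above; the input sample matrix, the stress output, and the normalization of the coefficient are illustrative placeholders.

```python
# Minimal sketch: Spearman-rank sensitivity coefficients from Latin Hypercube
# samples. The input matrix X (n_sim x n_var) and the FEM principal-stress
# output y (n_sim,) are placeholders for the real sampled data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_sim, n_var = 20, 6                      # 20 LHS simulations, 6 random inputs (illustrative)
X = rng.random((n_sim, n_var))            # placeholder for the sampled input variables
y = X @ rng.random(n_var) + 0.1 * rng.standard_normal(n_sim)  # placeholder stress output

# Spearman rank correlation of each input with the output
r = np.array([spearmanr(X[:, j], y)[0] for j in range(n_var)])

# One common normalization: share of each input in the total squared rank correlation
sensitivity = r**2 / np.sum(r**2)
for j, s in enumerate(sensitivity):
    print(f"variable {j}: r_s = {r[j]:+.2f}, sensitivity = {s:.2f}")
```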

Keywords: concrete, FEM, pavement, sensitivity, simulation

Procedia PDF Downloads 294
730 Optimal Design of Linear Generator to Recharge the Smartphone Battery

Authors: Jin Ho Kim, Yujeong Shin, Seong-Jin Cho, Dong-Jin Kim, U-Syn Ha

Abstract:

Due to the development of the information industry and its technologies, cellular phones must not only provide communication but also offer functions such as Internet access, e-banking, entertainment, etc. Such phones are called smartphones. As the performance and functionality of smartphones have improved, their battery capacity has gradually increased. Recently, linear generators have been embedded in smartphones in order to recharge the smartphone's battery. In this study, optimization is performed and an array change of permanent magnets is examined in order to increase efficiency. We propose an optimal design using design of experiments (DOE) to maximize the generated induced voltage. The thickness of the poleshoe and permanent magnet (PM), the height of the poleshoe and PM, and the thickness of the coil are chosen as design variables. We generated 25 sampling points using an orthogonal array over the four design variables. We performed electromagnetic finite element analysis to predict the generated induced voltage using the commercial electromagnetic analysis software ANSYS Maxwell. Then, we built an approximate model using the Kriging algorithm and derived optimal values of the design variables using an evolutionary algorithm. The commercial optimization software PIAnO (Process Integration, Automation, and Optimization) was used with these algorithms. The result of the optimization shows that the generated induced voltage is improved.
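
Below is a small illustrative sketch of the surrogate-based chain described above (orthogonal-array-style samples, a Kriging model, an evolutionary search), using scikit-learn's Gaussian process in place of PIAnO's Kriging and SciPy's differential evolution in place of the commercial evolutionary algorithm; the voltage function, bounds, and variable names are placeholders, since the real responses come from the ANSYS Maxwell analysis.

```python
# Sketch of the DOE -> Kriging -> evolutionary optimization chain
# (illustrative stand-in for the Maxwell/PIAnO workflow; the sampled
# responses would come from the electromagnetic FE analysis).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
bounds = [(1.0, 5.0)] * 4                 # 4 design variables (thicknesses/heights, mm) - placeholders
X = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds], size=(25, 4))  # 25 OA-like points

def induced_voltage(x):                   # placeholder for the FE-computed voltage
    return np.sum(np.sin(x)) + 0.05 * np.prod(x)

y = np.array([induced_voltage(x) for x in X])

# Kriging-type surrogate fitted to the sampled responses
kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=1.0),
                                   normalize_y=True).fit(X, y)

# Maximize the surrogate prediction with an evolutionary algorithm
res = differential_evolution(lambda x: -kriging.predict(x.reshape(1, -1))[0],
                             bounds, seed=1)
print("optimal design variables:", res.x, "predicted voltage:", -res.fun)
```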

Keywords: smartphone, linear generator, design of experiment, approximate model, optimal design

Procedia PDF Downloads 321
729 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The emitted sound can be represented by the pressure-fluctuation term in the Reynolds-averaged Navier-Stokes equations (RANS). Therefore, early TBL empirical models (including those from Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate under the specific Reynolds and Mach numbers for which they were developed and less accurate under other flow conditions. Despite this, recent research into the possibility of using alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R and TBL wall-pressure fluctuation PSD data gathered in the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection) and is computationally faster than most machine learning approaches. The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
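
A minimal sketch of a forward stepwise (feature-selection) regression of the kind described, written in Python rather than R; the candidate predictors, the synthetic PSD data, and the hold-out selection criterion are illustrative assumptions, not the model developed in the study.

```python
# Sketch of forward stepwise regression for a wall-pressure PSD model
# (Python stand-in for the R workflow; features and data are placeholders).
import numpy as np

rng = np.random.default_rng(2)
n = 200
# Candidate predictors, e.g. log-scaled flow parameters (names are illustrative)
features = {"log_freq": rng.normal(size=n),
            "mach": rng.uniform(0.1, 0.8, n),
            "re_theta": rng.normal(size=n),
            "delta_star": rng.normal(size=n)}
X_all = np.column_stack(list(features.values()))
y = 2.0 * X_all[:, 0] - 1.5 * X_all[:, 1] + 0.1 * rng.normal(size=n)  # placeholder PSD (dB)

def holdout_mse(cols):
    """Validation error of an OLS fit on the chosen columns (last quarter held out)."""
    k = 3 * n // 4
    A_tr = np.column_stack([np.ones(k), X_all[:k, cols]])
    A_te = np.column_stack([np.ones(n - k), X_all[k:, cols]])
    coef, *_ = np.linalg.lstsq(A_tr, y[:k], rcond=None)
    return np.mean((A_te @ coef - y[k:]) ** 2)

selected, remaining, best = [], list(range(X_all.shape[1])), np.inf
while remaining:
    scores = {j: holdout_mse(selected + [j]) for j in remaining}
    j_best = min(scores, key=scores.get)
    if scores[j_best] >= best:            # stop when no candidate improves the fit
        break
    best = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected features:", [list(features)[j] for j in selected], "validation MSE:", best)
```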

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 111
728 An Approximate Lateral-Torsional Buckling Mode Function for Cantilever I-Beams

Authors: H. Ozbasaran

Abstract:

Lateral-torsional buckling is a global stability failure that should be considered in the design of slender structural members under flexure about their strong axis. It is possible to compute the load which causes lateral-torsional buckling of a beam by finite element analysis; however, closed-form equations are needed in engineering practice. Such equations can be obtained by the energy method. Unfortunately, this method has a significant drawback. In lateral-torsional buckling applications of the energy method, a proper function for the critical lateral-torsional buckling mode must be chosen, which can be thought of as the variation of the twisting angle along the buckled beam. The accuracy of the results depends on how close the chosen function is to the exact mode. Since the critical lateral-torsional buckling mode of cantilever I-beams varies with material properties, section properties, and loading case, the hardest step is to determine a proper mode function. This paper presents an approximate function for the critical lateral-torsional buckling mode of doubly symmetric cantilever I-beams. Coefficient matrices are calculated for the cases of a concentrated load at the free end, a uniformly distributed load, and a constant moment along the beam. Critical lateral-torsional buckling modes obtained by the presented function and by exact solutions are compared. It is found that the modes obtained by the presented function coincide with the differential equation solutions for the considered loading cases.
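
A small numerical sketch of the energy-method (Rayleigh quotient) calculation for the constant-moment case follows, using an assumed twist mode φ(z) = 1 − cos(πz/2L) that satisfies the clamped-free kinematic boundary conditions; the section properties are placeholders and the paper's own approximate mode function is not reproduced here.

```python
# Minimal energy-method (Rayleigh quotient) sketch for the constant-moment
# case of a doubly symmetric cantilever I-beam, with an assumed twist mode
# phi(z) = 1 - cos(pi z / 2L). Section properties are placeholders and the
# paper's proposed mode function is not reproduced here.
import numpy as np

E, G = 210e9, 81e9                       # elastic constants [Pa] (steel, illustrative)
Iy, J, Iw = 2.0e-6, 6.0e-8, 1.2e-10      # minor-axis inertia, torsion and warping constants (placeholders)
L = 3.0                                  # cantilever length [m]

z = np.linspace(0.0, L, 2001)
dz = z[1] - z[0]
phi   = 1.0 - np.cos(np.pi * z / (2 * L))
dphi  = (np.pi / (2 * L)) * np.sin(np.pi * z / (2 * L))
ddphi = (np.pi / (2 * L)) ** 2 * np.cos(np.pi * z / (2 * L))

# Rayleigh quotient for uniform moment:
# M_cr^2 = E*Iy * ( int(E*Iw*phi''^2) + int(G*J*phi'^2) ) / int(phi^2)
num = np.sum(E * Iw * ddphi**2 + G * J * dphi**2) * dz
den = np.sum(phi**2) * dz
M_cr = np.sqrt(E * Iy * num / den)
print(f"approximate critical moment: {M_cr/1e3:.1f} kN*m")
```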

Keywords: buckling mode, cantilever, lateral-torsional buckling, I-beam

Procedia PDF Downloads 336
727 Study of Rayleigh-Bénard-Brinkman Convection Using LTNE Model and Coupled, Real Ginzburg-Landau Equations

Authors: P. G. Siddheshwar, R. K. Vanishree, C. Kanchana

Abstract:

A local nonlinear stability analysis using an eight-mode expansion is performed to arrive at the coupled amplitude equations for Rayleigh-Bénard-Brinkman convection (RBBC) in the presence of LTNE effects. Streamlines and isotherms are obtained in the two-dimensional unsteady finite-amplitude convection regime. The parameters’ influence on heat transport is found to be more pronounced at small times than at long times. Results for Rayleigh-Bénard convection are obtained as a particular case of the present study. Additional modes are shown not to influence the heat transport significantly, leading us to infer that five minimal modes are sufficient for a study of RBBC. The present problem, which uses rolls as the pattern of manifestation of instability, is a needed first step in the direction of making a very general non-local study of two-dimensional unsteady convection. The results may be useful in determining the preferred range of parameter values when making rheometric measurements in fluids to ascertain fluid properties such as viscosity. The results for LTE are obtained as a limiting case of the LTNE results obtained in the paper.
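
For illustration only, the following sketch integrates a generic pair of coupled real Landau-type amplitude equations with SciPy; the coefficients are invented and are not the RBBC/LTNE coefficients derived in the paper, so it only demonstrates the kind of solution procedure used for such amplitude systems.

```python
# Generic sketch: time integration of two coupled real Landau-type amplitude
# equations. Coefficients are illustrative only, not the RBBC/LTNE values
# derived in the paper.
import numpy as np
from scipy.integrate import solve_ivp

a1, b1, c1 = 1.0, 1.0, 0.5               # illustrative growth/saturation/coupling coefficients
a2, b2, c2 = 0.8, 1.0, 0.5

def rhs(t, y):
    A, B = y
    return [a1 * A - b1 * A**3 - c1 * A * B**2,
            a2 * B - b2 * B**3 - c2 * B * A**2]

sol = solve_ivp(rhs, (0.0, 30.0), [0.01, 0.02], dense_output=True)
t = np.linspace(0.0, 30.0, 300)
A, B = sol.sol(t)
# The transient (small-time) behaviour depends strongly on the coefficients,
# while the long-time amplitudes settle to a fixed point of the system.
print("long-time amplitudes:", A[-1], B[-1])
```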

Keywords: coupled Ginzburg–Landau model, local thermal non-equilibrium (LTNE), local thermal equilibrium (LTE), Rayleigh–Bénard-Brinkman convection

Procedia PDF Downloads 214
726 Finite Element Analysis of Raft Foundation on Various Soil Types under Earthquake Loading

Authors: Qassun S. Mohammed Shafiqu, Murtadha A. Abdulrasool

Abstract:

The design of shallow foundations to withstand different dynamic loads has received considerable attention in recent years. Dynamic loads may be due to earthquakes, pile driving, blasting, water waves, and machine vibrations. However, predicting the behavior of shallow foundations during earthquakes remains a difficult task for geotechnical engineers. A database of dynamic and static parameters for different soils in seismically active zones in Iraq, collected from geophysical and geotechnical investigation works, is prepared. Then, the analysis of a typical 3-D soil-raft foundation system under earthquake loading is carried out using the database, and a parametric study is performed taking into consideration the influence of some parameters on the dynamic behavior of the raft foundation, such as raft stiffness and damping ratio, as well as the influence of the earthquake acceleration-time records. The results of the parametric study show that the settlement caused by the earthquake can be decreased by about 72% by increasing the raft thickness from 0.5 m to 1.5 m. A reduction of about 82% in the maximum bending moment was predicted by decreasing the raft thickness from 1.5 m to 0.5 m in all site models. It was also observed that the maximum lateral displacement, maximum vertical settlement, and maximum bending moment for a damping ratio of 0% are about 14%, 20%, and 18% higher, respectively, than those for a damping ratio of 7.5% in all site models.

Keywords: shallow foundation, seismic behavior, raft thickness, damping ratio

Procedia PDF Downloads 122
725 Root Cause Analysis of Excessive Vibration in a Feeder Pump of a Large Thermal Electric Power Plant: A Simulation Approach

Authors: Kavindan Balakrishnan

Abstract:

Root cause identification of the vibration phenomenon in a feedwater pumping station was the main objective of this research. First, the mode shapes of the pumping structure were investigated using numerical and analytical methods. Then the flow pressure and streamline distribution in the pump sump were examined using CFD simulation, which was hypothesized to be a cause of vibration in the pumping station. As the problem specification of this research states, heavy vibration was recorded in the pumping station, with four parallel pumps operating at the same time, even after several maintenance steps; it was also specified that a relatively large amplitude of vibration was excited by pumps 1 and 4 while the others remained normal. As a result, the focus of this research was on determining the cause of such a mode of vibration in the pump station with the assistance of finite element analysis tools and analytical methods. The major outcome of this research is the identification of structural behavior that is consistent with the vibration pattern observed in the pumping structure. The numerical and analytical models of the pump structure have similar characteristics in their mode shapes, particularly in their second mode shape, which is closely related to the cause identified in the research problem statement. Since this study reveals several locations in the pump sump model where the flow conditions can be a contributing cause of vibration in the system, there is room for further investigation of flow conditions in relation to pump vibrations.

Keywords: vibration, simulation, analysis, Ansys, Matlab, mode shapes, pressure distribution, structure

Procedia PDF Downloads 99
724 Heat and Mass Transfer Modelling of Industrial Sludge Drying at Different Pressures and Temperatures

Authors: L. Al Ahmad, C. Latrille, D. Hainos, D. Blanc, M. Clausse

Abstract:

A two-dimensional finite volume axisymmetric model is developed to predict the simultaneous heat and mass transfer during the drying of industrial sludge. The simulations were run using COMSOL Multiphysics 3.5a. The input parameters of the numerical model were acquired from preliminary experimental work. The results permit correlations to be established that describe the evolution of the various parameters as a function of the drying temperature and the sludge water content. The selection and coupling of the equations are validated against the drying kinetics acquired experimentally over a temperature range of 45-65 °C and an absolute pressure range of 200-1000 mbar. The model, incorporating the heat and mass transfer mechanisms at different operating conditions, yields simulated values of temperature and water content. The simulated results agree with the experimental values only at the first and last drying stages, where sludge shrinkage is insignificant. Simulated and experimental results show that sludge drying is favored at high temperature and low pressure. As observed experimentally, the drying time is reduced by 68% for drying at 65 °C compared to 45 °C under 1 atm. At 65 °C, a 200-mbar absolute pressure vacuum leads to an additional reduction in drying time estimated at 61%. However, the drying rate is underestimated in the intermediate stage. This underestimation could be improved in the model by considering the shrinkage phenomena that occur during sludge drying.
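
As an illustration of the kind of correlation referred to above, the sketch below fits a simple thin-layer drying model X(t) = X_eq + (X_0 − X_eq)·exp(−kt) to moisture-content data with SciPy; the model form and the data are assumptions, not the authors' correlations.

```python
# Illustrative sketch: fitting a thin-layer drying correlation of the form
# X(t) = X_eq + (X0 - X_eq)*exp(-k*t) to moisture-content data. The model
# form and the data below are assumptions, not the authors' correlation.
import numpy as np
from scipy.optimize import curve_fit

def drying_model(t, X_eq, k):
    X0 = 4.0                                  # assumed initial water content [kg water / kg dry solid]
    return X_eq + (X0 - X_eq) * np.exp(-k * t)

t_h = np.array([0, 1, 2, 4, 6, 9, 12, 16, 20])                 # drying time [h]
X = np.array([4.0, 3.1, 2.5, 1.6, 1.1, 0.6, 0.35, 0.2, 0.15])  # synthetic moisture data

popt, pcov = curve_fit(drying_model, t_h, X, p0=[0.1, 0.2])
X_eq_fit, k_fit = popt
print(f"fitted X_eq = {X_eq_fit:.2f}, drying constant k = {k_fit:.3f} 1/h")
# A higher temperature or a lower pressure would show up as a larger k
# (faster drying), consistent with the trends reported above.
```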

Keywords: industrial sludge drying, heat transfer, mass transfer, mathematical modelling

Procedia PDF Downloads 102
723 Role of Onion Extract for Neuro-Protection in Experimental Stroke Model

Authors: Richa Shri, Varinder Singh, Kundan Singh Bora, Abhishek Bhanot, Rahul Kumar, Amit Kumar, Ravinder Kaur

Abstract:

The term ‘neuroprotection’ means preserving/salvaging the function and structure of neurons. Neuroprotection is an adjunctive treatment option for neurodegenerative disorders. Oxidative stress is considered a major culprit in neurodegenerative disorders; hence, management strategies include the use of antioxidants. Our search for a neuroprotective agent began with Allium cepa L., or onion (family Amaryllidaceae), a potent antioxidant. We have investigated the neuroprotective potential of onions in experimental models of ischemic stroke, diabetic neuropathy, neuropathic pain, and dementia. In pre- and post-ischemic stroke models, the methanol extract of the outer scales of onion bulbs (MEOS) prevented memory loss and motor incoordination and reduced oxidative stress and cerebral infarct size. It also prevented and ameliorated diabetic neuropathy in mice. The MEOS was fractionated to yield a flavonoid-rich fraction (FRF) that successfully reversed ischemia-reperfusion induced neuronal damage, thereby demonstrating that the flavonoids are responsible for the activity. The FRF effectively ameliorated chronic constriction-induced neuropathic pain in rats. The FRF was then subjected to bioactivity-guided fractionation. It was found that the FRF is more effective than the isolated components, probably due to synergism among its constituents (i.e., quercetin and quercetin glucosides). The outer scales of onion bulbs therefore have great potential for the prevention as well as the treatment of neuronal disorders. Red onions, with higher amounts of flavonoids than white onions, produced more significant neuroprotection. Thus, the standardized FRF from the waste material of a commonly used vegetable, especially the red variety, may be developed as a valuable neuroprotective agent.

Keywords: Allium cepa, antioxidant activity, flavonoid rich fraction, neuroprotection

Procedia PDF Downloads 119
722 Comparative Evaluation of a Dynamic Navigation System Versus a Three-Dimensional Microscope in Retrieving Separated Endodontic Files: An in Vitro Study

Authors: Mohammed H. Karim, Bestoon M. Faraj

Abstract:

Introduction: Instrument separation is a common challenge in the endodontic field. Various techniques and technologies have been developed to improve the retrieval success rate. This study aimed to compare the effectiveness of a Dynamic Navigation System (DNS) and a three-dimensional microscope in retrieving broken rotary NiTi files when using trepan burs and the extractor system. Materials and Methods: Thirty maxillary first bicuspids with sixty separate roots were split into two comparable groups based on a comprehensive Cone-Beam Computed Tomography (CBCT) analysis of root length and curvature. After standardised access opening, glide paths, and patency attainment with K files (sizes 10 and 15), the teeth were arranged on 3D models (three per quadrant, six per model). Subsequently, controlled-memory heat-treated NiTi rotary files (#25/0.04) were notched 4 mm from the tips and fractured at the apical third of the roots. The C-FR1 Endo file removal system was employed under both forms of guidance to retrieve the fragments, and the success rate, canal aberration, treatment time, and volumetric changes were measured. The statistical analysis was performed using IBM SPSS software at a significance level of 0.05. Results: The microscope-guided group had a higher success rate than the DNS-guided group, but the difference was insignificant (p > 0.05). In addition, the microscope-guided drills resulted in a substantially lower proportion of canal aberration, required less time to retrieve the fragments, and caused a smaller change in root canal volume (p < 0.05). Conclusion: Although dynamically guided trephining with the extractor can retrieve separated instruments, it is inferior to three-dimensional microscope guidance regarding treatment time, procedural errors, and volume change.

Keywords: dynamic navigation system, separated instruments retrieval, trephine burs and extractor system, three-dimensional video microscope

Procedia PDF Downloads 56
721 Modeling and Behavior of Structural Walls

Authors: Salima Djehaichia, Rachid Lassoued

Abstract:

Reinforced concrete structural walls are very efficient elements for protecting buildings against excessive early damage and against collapse under earthquake actions. It is therefore of interest to develop a numerical model which simulates the typical behavior of these units; this paper presents and describes different modeling techniques that have been used by researchers, and their advantages and limitations are mentioned. The earthquake of Boumerdes in 2003 demonstrated the fragility of structures and the total neglect of seismic design rules in the construction of old buildings. The significant damage and destruction of buildings caused by this earthquake were not due to the choice of material, but to designs and studies that were not congruent with seismic code requirements and to the poor quality of materials. To idealize these violations of the rules, a parametric study focuses on: low reinforcement ratios, type of reinforcement, and moderate concrete strength. As an application, a modeling strategy based on finite elements is combined with a discretization of the most heavily loaded wall into successive thin layers. The performance level achieved during a seismic action is estimated from capacity curves under incrementally increasing loads. Using a pushover analysis, a characteristic nonlinear force-displacement relationship can be determined. The results of the numerical model are compared with those of the Algerian Paraseismic Rules (RPA) in force, allowing gains in terms of displacement, shear force, and ductility to be determined.

Keywords: modeling, old building, pushover analysis, structural walls

Procedia PDF Downloads 218
720 An Efficient Backward Semi-Lagrangian Scheme for Nonlinear Advection-Diffusion Equation

Authors: Soyoon Bak, Sunyoung Bu, Philsu Kim

Abstract:

In this paper, a backward semi-Lagrangian scheme combined with the second-order backward difference formula is designed to calculate numerical solutions of nonlinear advection-diffusion equations. The primary aims of this paper are to remove any iteration process and to obtain an efficient algorithm with second-order convergence in time. In order to achieve these objectives, we use second-order central finite differences to approximate the diffusion term and B-spline approximations of degree 2 and 3 for the spatial discretization. For the temporal discretization, the second-order backward difference formula is applied. To calculate the numerical solution at the starting point of the characteristic curves, we use the error-correction methodology developed recently by the authors. The proposed algorithm turns out to be completely iteration-free, which resolves the main weakness of the conventional backward semi-Lagrangian method. The adaptability of the proposed method is also demonstrated by numerical simulations of Burgers’ equations. Throughout these numerical simulations, it is shown that the numerical results are in good agreement with the analytic solution and that the present scheme offers better accuracy in comparison with other existing numerical schemes.
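
A minimal sketch of one backward semi-Lagrangian / BDF2 step applied to the viscous Burgers equation is given below; it locates departure points with a simple explicit estimate based on the extrapolated velocity 2uⁿ − uⁿ⁻¹ rather than the authors' error-correction method, and uses a cubic spline in place of the B-spline spatial approximation, so it is illustrative only.

```python
# Minimal sketch of an iteration-free backward semi-Lagrangian / BDF2 scheme
# for the viscous Burgers equation u_t + u u_x = nu u_xx on a periodic domain.
# Departure points use a simple explicit estimate with the extrapolated
# velocity 2u^n - u^{n-1} (not the authors' error-correction method), and a
# cubic spline replaces the B-spline spatial approximation.
import numpy as np
from scipy.interpolate import CubicSpline

N, L, nu = 128, 2 * np.pi, 0.1
dx = L / N
x = np.arange(N) * dx
dt, n_steps = 0.01, 200

# periodic second-difference operator (dense, small N)
D2 = np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
D2[0, -1] = D2[-1, 0] = 1.0
D2 /= dx**2
I = np.eye(N)

def interp_periodic(u, xq):
    spl = CubicSpline(np.append(x, L), np.append(u, u[0]), bc_type="periodic")
    return spl(np.mod(xq, L))

u_old = np.sin(x)                       # u^{n-1}
# first step: first-order semi-Lagrangian backward Euler to obtain u^n
dep = x - dt * u_old
rhs = interp_periodic(u_old, dep) / dt
u_now = np.linalg.solve(I / dt - nu * D2, rhs)

A = 3.0 / (2.0 * dt) * I - nu * D2      # constant BDF2 system matrix (implicit diffusion)
for _ in range(n_steps - 1):
    u_ext = 2.0 * u_now - u_old         # extrapolated velocity (iteration-free)
    dep1 = x - dt * u_ext               # departure points at t^n
    dep2 = x - 2.0 * dt * u_ext         # departure points at t^{n-1}
    rhs = (4.0 * interp_periodic(u_now, dep1) - interp_periodic(u_old, dep2)) / (2.0 * dt)
    u_old, u_now = u_now, np.linalg.solve(A, rhs)

print("max |u| after", n_steps * dt, "time units:", np.abs(u_now).max())
```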

Keywords: Semi-Lagrangian method, iteration free method, nonlinear advection-diffusion equation, second-order backward difference formula

Procedia PDF Downloads 298
719 A Parallel Computation Based on GPU Programming for a 3D Compressible Fluid Flow Simulation

Authors: Sugeng Rianto, P.W. Arinto Yudi, Soemarno Muhammad Nurhuda

Abstract:

Computing a 3D compressible fluid flow for a virtual environment with haptic interaction is a non-trivial issue, especially achieving good performance and balance between visualization, tactile feedback interaction, and computation. In this paper, we describe our approach to computation methods based on parallel programming on a GPU. The 3D fluid flow solvers have been developed for smoke dispersion simulation by combining cubic interpolated propagation (CIP) based fluid flow solvers with the parallelism and programmability of the GPU. The fluid flow solver is organized in a GPU-CPU message passing scheme to enable rapid development of haptic feedback modes for fluid dynamic data. A rapid solution in the fluid flow solvers is achieved by applying cubic interpolated propagation (CIP) fluid flow solvers, with which multiphase fluid flow equations can be solved simultaneously. To further accelerate the computation, the Navier-Stokes equations (NSEs) are packed into channels of texels, where the computation is performed on pixels that can be considered a grid of cells. Therefore, despite the complexity of the obstacle geometry, processing on multiple vertices and pixels can be done simultaneously in parallel. The data are also shared in global memory for the CPU to control the haptics, providing kinaesthetic interaction and feeling. The results show that GPU-based parallel computation approaches provide an effective simulation of a compressible fluid flow model for real-time interaction in 3D computer graphics on a PC platform. This report has shown the feasibility of a new approach to solving the compressible fluid flow equations on the GPU. The experimental tests proved that compressible fluid flowing over various obstacles, with haptic interaction on the few model obstacles, can be effectively and efficiently simulated at a reasonable frame rate with realistic visualization. These results confirm that good performance and balance between visualization, tactile feedback interaction, and computation can be achieved.
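
The sketch below shows the 1D constant-velocity CIP update (transporting the value f and its derivative g) in plain CPU NumPy form; in the paper this per-node update is what is mapped onto GPU texels and pixels, so the code is only an illustration of the scheme, with a placeholder grid and profile.

```python
# Illustrative NumPy sketch of the 1D constant-velocity CIP
# (Cubic Interpolated Propagation) update, transporting the value f and its
# spatial derivative g. In a GPU implementation this per-node update is the
# work mapped to threads/texels; the sketch below is CPU-only.
import numpy as np

N, L = 200, 1.0
dx = L / N
x = np.arange(N) * dx
u, dt, n_steps = 1.0, 0.4 * dx, 300      # advection speed, CFL ~ 0.4

f = np.exp(-200.0 * (x - 0.3) ** 2)      # initial profile (placeholder)
g = np.gradient(f, dx)                    # initial derivative field

xi, D = -u * dt, -dx                       # for u > 0 the upwind node is i-1
for _ in range(n_steps):
    f_up, g_up = np.roll(f, 1), np.roll(g, 1)   # periodic upwind values
    a = (g + g_up) / D**2 - 2.0 * (f_up - f) / D**3
    b = 3.0 * (f_up - f) / D**2 - (g_up + 2.0 * g) / D
    f_new = a * xi**3 + b * xi**2 + g * xi + f   # cubic profile evaluated at the departure point
    g_new = 3.0 * a * xi**2 + 2.0 * b * xi + g
    f, g = f_new, g_new

print("peak after transport:", f.max(), "at x =", x[np.argmax(f)])
```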

Keywords: CIP, compressible fluid, GPU programming, parallel computation, real-time visualisation

Procedia PDF Downloads 403
718 Optical Design and Modeling of Micro Light-Emitting Diodes for Display Applications

Authors: Chaya B. M., C. Dhanush, Inti Sai Srikar, Akula Pavan Parvatalu, Chirag Gowda R

Abstract:

Recently, there has been a lot of interest in µ-LED technology because of its exceptional qualities, including self-emission, high visibility, low power consumption, rapid response, and longevity. III-nitride light-emitting diodes (LEDs) are finding increasing use in applications such as lighting sources, visible light communication (VLC) devices, and high-power devices as miniaturization technology advances. The use of micro-LED displays in place of traditional display technologies such as liquid crystal displays (LCDs) and organic light-emitting diodes (OLEDs) is one of the most prominent recent advances and may even represent the next generation of displays. The development of fully integrated, multifunctional devices and the incorporation of extra capabilities into micro-LED displays, such as sensing, light detection, and solar cells, are the pillars of this advanced technology. Due to the wide range of applications for micro-LED technology, the effectiveness and dependability of these devices under numerous harsh conditions are becoming increasingly important. Considerable research has been conducted to overcome the limited efficiency of micro-LED devices. In this paper, different micro-LED design structures are proposed in order to achieve optimized optical properties. The light extraction efficiency (LEE) of the devices is also boosted in order to attain improved external quantum efficiency (EQE).

Keywords: finite difference time domain, light out coupling efficiency, far field intensity, power density, quantum efficiency, flat panel displays

Procedia PDF Downloads 54
717 Prediction of Alzheimer's Disease Based on Blood Biomarkers and Machine Learning Algorithms

Authors: Man-Yun Liu, Emily Chia-Yu Su

Abstract:

Alzheimer's disease (AD) is the public health crisis of the 21st century. AD is a degenerative brain disease and the most common cause of dementia, a costly disease for the healthcare system. Unfortunately, the cause of AD is poorly understood; furthermore, the treatments for AD so far can only alleviate symptoms rather than cure or stop the progression of the disease. Currently, there are several ways to diagnose AD: medical imaging can be used to distinguish between AD, other dementias, and early-onset AD, and cerebrospinal fluid (CSF) can be analyzed. Compared with other diagnostic tools, a blood (plasma) test has advantages as an approach to population-based disease screening because it is simpler, less invasive, and more cost-effective. In our study, we used the blood biomarker dataset of the Alzheimer’s Disease Neuroimaging Initiative (ADNI), which was funded by the National Institutes of Health (NIH), to perform data analysis and develop a prediction model. We used independent analysis of the datasets to identify plasma protein biomarkers predicting early-onset AD. First, to compare basic demographic statistics between the cohorts, we used SAS Enterprise Guide for data preprocessing and statistical analysis. Second, we used logistic regression, neural networks, and decision trees to validate biomarkers with SAS Enterprise Miner. The data generated from ADNI contained 146 blood biomarkers from 566 participants. Participants included cognitively normal (healthy) individuals, individuals with mild cognitive impairment (MCI), and patients with Alzheimer’s disease (AD). Participants’ samples were separated into two groups, healthy versus MCI and healthy versus AD, respectively. We used the two groups to compare important biomarkers of AD and MCI. In preprocessing, we used a t-test to filter 41/47 features between the two groups (healthy and AD, healthy and MCI) before applying machine learning algorithms. We then built models with four machine learning methods; the best AUCs for the two group comparisons were 0.991 and 0.709, respectively. We want to stress that a simple, less invasive, common blood (plasma) test may also enable early diagnosis of AD. In our opinion, the results provide evidence that blood-based biomarkers might be an alternative diagnostic tool before further examination with CSF and medical imaging. A comprehensive study of the differences in blood-based biomarkers between AD patients and healthy subjects is warranted. Early detection of AD progression will allow physicians the opportunity for early intervention and treatment.
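
A minimal sketch of the described pipeline (t-test feature filtering followed by a classifier evaluated with AUC), using scikit-learn in place of the SAS tools; the biomarker matrix and labels below are synthetic placeholders.

```python
# Minimal sketch: t-test feature filtering followed by a classifier evaluated
# with AUC (scikit-learn stand-in for the SAS workflow; data are synthetic).
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n, p = 566, 146                          # participants x blood biomarkers (sizes from the abstract)
X = rng.normal(size=(n, p))              # synthetic biomarker matrix
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n) > 0).astype(int)  # synthetic group labels

# 1) filter biomarkers that differ between groups (two-sample t-test)
_, pvals = ttest_ind(X[y == 1], X[y == 0], axis=0)
keep = pvals < 0.05
print("biomarkers kept by t-test filter:", keep.sum())

# 2) fit and evaluate a classifier on the filtered features
X_tr, X_te, y_tr, y_te = train_test_split(X[:, keep], y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print("test AUC:", round(auc, 3))
```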

Keywords: Alzheimer's disease, blood-based biomarkers, diagnostics, early detection, machine learning

Procedia PDF Downloads 296
716 Investigation of Crack Formation in Ordinary Reinforced Concrete Beams and in Beams Strengthened with Carbon Fiber Sheet: Theory and Experiment

Authors: Anton A. Bykov, Irina O. Glot, Igor N. Shardakov, Alexey P. Shestakov

Abstract:

This paper presents the results of experimental and theoretical investigations of the mechanisms of crack formation in reinforced concrete beams subjected to quasi-static bending. The boundary-value problem has been formulated in the framework of brittle fracture mechanics and solved using the finite element method. Numerical simulation of the vibrations of an uncracked beam and of beams with cracks of different sizes serves to determine the pattern of changes in the spectrum of eigenfrequencies observed during crack evolution. Experiments were performed on the sequential quasi-static four-point bending of the beam, leading to the formation of cracks in the concrete. At each loading stage, the beam was subjected to an impulse load to induce vibrations. Two stages of cracking were detected. In the first stage, a conservative process of deformation takes place. The second stage is active cracking, which is marked by a sharp change in eigenfrequencies. The boundary of the transition from one stage to another is clearly registered. The vibration behavior was examined for beams strengthened with carbon-fiber sheet before loading and at an intermediate stage of loading after the grouting of the initial cracks. The obtained results show that the vibrodiagnostic approach is an effective tool for monitoring cracking and for assessing the quality of measures aimed at strengthening concrete structures.
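
A compact sketch of the eigenfrequency comparison underlying the vibrodiagnostic idea is given below: a simply supported Euler-Bernoulli beam finite element model whose bending stiffness is reduced over a few elements to mimic a cracked zone; the dimensions and the reduction factor are placeholders, not the tested reinforced-concrete beams.

```python
# Compact sketch of the vibrodiagnostic idea: natural frequencies of an
# Euler-Bernoulli beam FE model with and without a local stiffness reduction
# that mimics a cracked zone. Dimensions and the reduction factor are
# placeholders, not the tested reinforced-concrete beam.
import numpy as np
from scipy.linalg import eigh

E, rho = 30e9, 2400.0                  # concrete-like modulus [Pa] and density [kg/m3]
b, h, L = 0.2, 0.3, 4.0                # cross-section [m] and span [m] (placeholders)
A, I = b * h, b * h**3 / 12.0
n_el = 40
le = L / n_el

def beam_matrices(EI):
    k = EI / le**3 * np.array([[12, 6*le, -12, 6*le],
                               [6*le, 4*le**2, -6*le, 2*le**2],
                               [-12, -6*le, 12, -6*le],
                               [6*le, 2*le**2, -6*le, 4*le**2]])
    m = rho * A * le / 420.0 * np.array([[156, 22*le, 54, -13*le],
                                         [22*le, 4*le**2, 13*le, -3*le**2],
                                         [54, 13*le, 156, -22*le],
                                         [-13*le, -3*le**2, -22*le, 4*le**2]])
    return k, m

def frequencies(damaged_elems=(), reduction=0.5):
    ndof = 2 * (n_el + 1)
    K, M = np.zeros((ndof, ndof)), np.zeros((ndof, ndof))
    for e in range(n_el):
        EI = E * I * (reduction if e in damaged_elems else 1.0)
        k, m = beam_matrices(EI)
        dofs = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
        K[np.ix_(dofs, dofs)] += k
        M[np.ix_(dofs, dofs)] += m
    free = [d for d in range(ndof) if d not in (0, ndof - 2)]   # pin both end deflections
    w2 = eigh(K[np.ix_(free, free)], M[np.ix_(free, free)], eigvals_only=True)
    return np.sqrt(w2[:3]) / (2 * np.pi)                        # first three frequencies [Hz]

f_intact = frequencies()
f_cracked = frequencies(damaged_elems=range(18, 22))            # "crack" near midspan
print("intact  :", np.round(f_intact, 2))
print("cracked :", np.round(f_cracked, 2))
```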

Keywords: crack formation, experiment, mathematical modeling, reinforced concrete, vibrodiagnostics

Procedia PDF Downloads 272
715 Relevance Of Cognitive Rehabilitation Amongst Children Having Chronic Illnesses – A Theoretical Analysis

Authors: Pulari C. Milu Maria Anto

Abstract:

Background: Cognitive rehabilitation/retraining (CR) has been used variously in the research literature to denote non-pharmacological interventions that target cognitive impairments with the goal of ameliorating cognitive function and functional behaviors to optimize quality of life. Alongside cognitive impairments in adults, the need to address acquired cognitive impairments (due to chronic illnesses such as congenital heart disease, CHD, or acute lymphoblastic leukemia, ALL) among children is unavoidable, and these impairments should be emphasized in the same way as those seen in children with neurodevelopmental disorders. Methods: All published brain imaging studies (Hermann, B. et al., 2002; Khalil, A. et al., 2004; Follin, C. et al., 2016, etc.) and studies emphasizing cognitive impairments in attention, memory, and/or executive function and behavioral aspects (Henkin, Y. et al., 2007; Bellinger, D. C., & Newburger, J. W., 2010; Cheung, Y. T., et al., 2016) that could be identified were reviewed. Based on a systematic review of the literature (2000-2021) on different brain imaging studies, the increased risk of neuropsychological and psychosocial impairments is briefly described, and the clinical and research gaps in the area are discussed. Results: 30 papers, both Indian studies and foreign publications (Sage journals, Delhi Psychiatry Journal, Wiley Online Library, APA PsycNet, Springer, Elsevier, Developmental Medicine and Child Neurology), were identified. Conclusions: In India, a very limited number of brain imaging and neuropsychological studies have been done indicating the cognitive deficits of children having, or having undergone treatment for, chronic illness. None of the studies has emphasized the relevance or the need for implementing CR among such children, and such programs are still not established, even though it is high time to address this. This review of the current evidence is intended to give rehabilitation professionals insight into establishing child-specific CR and to encourage the publication of new findings regarding the implementation of CR among such children. This study also raises awareness of the need to consider the cognitive aspects of a child with an acquired cognitive deficit (due to chronic illness), especially during the critical developmental period.

Keywords: cognitive rehabilitation, neuropsychological impairments, congenital heart diseases, acute lymphoblastic leukemia, epilepsy, and neuroplasticity

Procedia PDF Downloads 145
714 The Fragility of Sense: The Twofold Temporality of Embodiment and Its Role for Depression

Authors: Laura Bickel

Abstract:

This paper aims to investigate to what extent Merleau-Ponty’s philosophy of body memory serves as a viable resource for the enactive approach to cognitive science and its first-person, experience-based research on ‘recurrent depressive disorder’, coded F33 in ICD-10. In pursuit of this goal, the analysis begins by revisiting the neuroreductive paradigm. This paradigm is used in biological psychiatry to explain the condition of vital contact in terms of underlying neurophysiological mechanisms. It is demonstrated that the neuroreductive model cannot sufficiently account for the depressed person’s episodic withdrawal in causal terms. The analysis of the irregular loss of vital resonance requires integrating the body as the subject of experience and its phenomenological time. Then, it is shown that the enactive approach to depression as disordered sense-making is a promising alternative. The enactive model of perception implies that living beings do not register pre-existing meaning ‘out there’ but unfold ‘sense’ in their action-oriented response to the world. For the enactive approach, Husserl’s passive synthesis of inner time consciousness is fundamental for what becomes perceptually present for action. It seems intuitive to bring together the enactive approach to depression with the long-standing view in phenomenological psychopathology that explains the loss of vital contact by appealing to the disruption of the temporal structure of consciousness. However, this paper argues that the disruption of the temporal structure is not justified conceptually. Instead, one may integrate Merleau-Ponty’s concept of the past as the unconscious into the enactive approach to depression. From this perspective, the living being’s experiential and biological past inserts itself in the form of habit and bodily skills and ensures action-oriented responses to the environment. Finally, it is concluded that the depressed person’s withdrawal indicates an impairment of this application process. The person suffering from F33 cannot actualize sedimented meaning to respond to the valences and tasks of a given situation.

Keywords: depression, enactivism, neuroreductionsim, phenomenology, temporality

Procedia PDF Downloads 110
713 Sensitivity Based Robust Optimization Using 9 Level Orthogonal Array and Stepwise Regression

Authors: K. K. Lee, H. W. Han, H. L. Kang, T. A. Kim, S. H. Han

Abstract:

For the robust optimization of a manufactured product design, there are design objectives that must be achieved, such as minimization of the mean and standard deviation of the objective functions within the required sensitivity constraints. The authors utilized the sensitivity of the objective functions and constraints with respect to the effective design variables to reduce the computational burden associated with the evaluation of the probabilities. The individual mean and sensitivity values could be estimated easily by using 9-level orthogonal array based response surface models optimized by stepwise regression. The present study evaluates the proposed procedure on the robust optimization of rubber domes, which are commonly used for keyboard switching, by using the 9-level orthogonal array and stepwise regression along with a desirability function. In addition, a new robust optimization process, the I2GEO (Identify, Integrate, Generate, Explore, and Optimize) process, is proposed on the basis of the robust optimization of the rubber domes. The optimized results from the response surface models and the results estimated by finite element analysis were consistent within a small margin of error. The standard deviation of the objective function decreased by 54.17% with the suggested sensitivity-based robust optimization. (Business for Cooperative R&D between Industry, Academy, and Research Institute, funded by the Korea Small and Medium Business Administration in 2017, S2455569)
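
The following small sketch illustrates the sensitivity-based idea described above: a quadratic response-surface model is fitted to 9-level factorial samples, the response standard deviation is estimated from first-order sensitivities, and a mean-plus-k-times-standard-deviation objective is minimized; the response function, scatter levels, and weights are placeholders, not the rubber-dome model.

```python
# Small sketch of sensitivity-based robust optimization on a response surface:
# fit a quadratic model to 9-level samples, estimate the response standard
# deviation from first-order sensitivities, and minimize mean + k*std.
# The response function, scatter and weights are placeholders.
import numpy as np
from scipy.optimize import minimize

def response(x):                         # placeholder for the FE-computed response
    return (x[0] - 1.0) ** 2 + 0.5 * (x[1] + 0.5) ** 2 + 0.3 * x[0] * x[1]

# 9-level sampling of 2 design variables on [-2, 2]
levels = np.linspace(-2.0, 2.0, 9)
X = np.array([[a, b] for a in levels for b in levels])
y = np.array([response(x) for x in X])

# quadratic response surface: 1, x1, x2, x1^2, x2^2, x1*x2
def basis(x):
    return np.array([1.0, x[0], x[1], x[0]**2, x[1]**2, x[0]*x[1]])

coef, *_ = np.linalg.lstsq(np.array([basis(x) for x in X]), y, rcond=None)
rsm = lambda x: basis(x) @ coef

sigma_x = np.array([0.1, 0.1])           # assumed scatter of the design variables

def robust_objective(x, k=3.0):
    eps = 1e-4                           # finite-difference sensitivities of the RSM
    grad = np.array([(rsm(x + eps * np.eye(2)[i]) - rsm(x - eps * np.eye(2)[i])) / (2 * eps)
                     for i in range(2)])
    sigma_f = np.sqrt(np.sum((grad * sigma_x) ** 2))
    return rsm(x) + k * sigma_f          # mean + k * standard deviation

res = minimize(robust_objective, x0=[0.0, 0.0], bounds=[(-2, 2), (-2, 2)])
print("robust optimum:", res.x, "objective:", res.fun)
```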

Keywords: objective function, orthogonal array, response surface model, robust optimization, stepwise regression

Procedia PDF Downloads 261
712 Constructions of Linear and Robust Codes Based on Wavelet Decompositions

Authors: Alla Levina, Sergey Taranov

Abstract:

The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient encoding and decoding algorithms, but they concentrate their detection and correction abilities on certain error configurations. Robust codes, by contrast, can protect against any configuration of errors with a predetermined probability. This is accomplished by using perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are denoising signals, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling-function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. In this article, we propose two constructions of robust codes. The first class of robust codes is based on the multiplicative inverse in a finite field. In the second robust code construction, the redundancy part is the cube of the information part. This paper also investigates the characteristics of the proposed robust and linear codes.
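
A tiny sketch of the second robust-code construction mentioned above (redundancy equal to the cube of the information part), written over GF(2^4), is shown below; the field size, irreducible polynomial, and error pattern are illustrative choices, not those of the paper.

```python
# Tiny sketch of the "cube" robust-code construction: the redundancy is the
# cube of the information part, computed in GF(2^4) with the irreducible
# polynomial x^4 + x + 1. Field size and error pattern are illustrative only.
IRRED = 0b10011  # x^4 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^4) with reduction by x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= IRRED
        b >>= 1
    return r

def encode(x):
    """Codeword (x, r) with robust redundancy r = x^3 in GF(2^4)."""
    return x, gf_mul(x, gf_mul(x, x))

def is_valid(x, r):
    return r == gf_mul(x, gf_mul(x, x))

x = 0b0110
codeword = encode(x)
print("codeword:", codeword)

# A nonzero additive error (e_x, e_r) is detected unless it maps the codeword
# onto another valid codeword; the nonlinear (cube) redundancy keeps the
# fraction of masked errors small and roughly uniform over all error patterns.
e_x, e_r = 0b0011, 0b0101
print("error detected:", not is_valid(x ^ e_x, codeword[1] ^ e_r))
```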

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability

Procedia PDF Downloads 463
711 A Finite Elements Model for the Study of Buried Pipelines Affected by Strike-Slip Fault

Authors: Reza Akbari, Jalal MontazeriFashtali, PeymanMomeni Taromsari

Abstract:

Pipeline systems play an important role as vital elements whose performance can reduce or increase the risk of earthquake damage and vulnerability. Pipelines are suitable, cheap, fast, and safe routes for transporting oil, gas, water, sewage, etc. These pipelines must pass through wide geographical areas; hence they are structurally exposed to different environmental and underground conditions and to the effects of earthquake forces. Therefore, structural engineering analysis and design for this type of line requires an understanding of the behavior of the relevant parameters; lack of familiarity with them can cause irreparable damage and risks in design and execution, especially in the face of earthquakes. Today, buried pipelines play an important role in the human life cycle; thus, studying the vulnerability of pipeline systems is of particular importance. This study examines the behavior of buried pipelines affected by a strike-slip fault. The studied fault is perpendicular to the pipe axis and causes stress and deformation in the pipe by sliding horizontally. In this study, the pipe-soil interaction is accurately simulated, so that one can examine the large displacements and strains, nonlinear material behavior, and contact and friction conditions of the soil and pipe. The results can be used for designing buried pipes and determining the amount of fault displacement that causes failure of the buried pipes.

Keywords: pipelines, earthquake, fault, soil-fault interaction

Procedia PDF Downloads 425
710 Numerical Simulation of Transient 3D Temperature and Kerf Formation in Laser Fusion Cutting

Authors: Karim Kheloufi, El Hachemi Amara

Abstract:

In the present study, a three-dimensional transient numerical model was developed to study the temperature field and cutting kerf shape during laser fusion cutting. The finite volume model has been constructed, based on the Navier–Stokes equations and energy conservation equation for the description of momentum and heat transport phenomena, and the Volume of Fluid (VOF) method for free surface tracking. The Fresnel absorption model is used to handle the absorption of the incident wave by the surface of the liquid metal and the enthalpy-porosity technique is employed to account for the latent heat during melting and solidification of the material. To model the physical phenomena occurring at the liquid film/gas interface, including momentum/heat transfer, a new approach is proposed which consists of treating friction force, pressure force applied by the gas jet and the heat absorbed by the cutting front surface as source terms incorporated into the governing equations. All these physics are coupled and solved simultaneously in Fluent CFD®. The main objective of using a transient phase change model in the current case is to simulate the dynamics and geometry of a growing laser-cutting generated kerf until it becomes fully developed. The model is used to investigate the effect of some process parameters on temperature fields and the formed kerf geometry.

Keywords: laser cutting, numerical simulation, heat transfer, fluid flow

Procedia PDF Downloads 298
709 An Inverse Approach for Determining Creep Properties from a Miniature Thin Plate Specimen under Bending

Authors: Yang Zheng, Wei Sun

Abstract:

This paper describes a new approach which can be used to interpret the experimental creep deformation data obtained from miniaturized thin-plate bending specimen tests in terms of the corresponding uniaxial data, based on an inverse application of the reference stress method. The geometry of the thin plate is fully defined by the span of the support, l, the width, b, and the thickness, d. First, analytical solutions for the steady-state, load-line creep deformation rate of the thin plates for Norton’s power law under plane-stress (b → 0) and plane-strain (b → ∞) conditions were obtained, from which it can be seen that the load-line deformation rate of the thin plate under plane-stress conditions is much higher than that under plane-strain conditions. Since an analytical solution is not available for plates with arbitrary b-values, finite element (FE) analyses are used to obtain the solutions. Based on the FE results obtained for various b/l ratios and creep exponents, n, as well as the analytical solutions under plane-stress and plane-strain conditions, approximate numerical solutions for the deformation rate are obtained by curve fitting. Using these solutions, a reference stress method is utilised to establish the conversion relationships between the applied load and the equivalent uniaxial stress, and between the creep deformations of the thin plate and the equivalent uniaxial creep strains. Finally, the accuracy of the empirical solution was assessed by using a set of “theoretical” experimental data.
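
The sketch below illustrates only the final conversion-and-fitting step: hypothetical reference-stress conversion factors (called eta and beta here; in the paper they follow from the analytical/FE solutions) map loads and load-line deformation rates to equivalent uniaxial stresses and strain rates, from which Norton's-law constants are recovered by a log-log fit; all numbers are synthetic.

```python
# Illustrative sketch of the conversion-and-fitting step: hypothetical
# reference-stress factors eta and beta (in the paper these come from the
# analytical/FE solutions for the thin-plate geometry) convert loads and
# load-line deformation rates to equivalent uniaxial quantities, and Norton's
# law constants are recovered by a log-log fit. All numbers are synthetic.
import numpy as np

l, b, d = 10.0e-3, 5.0e-3, 0.5e-3        # span, width, thickness [m] (placeholders)
eta, beta = 1.2, 0.8                      # hypothetical conversion factors

loads_N = np.array([2.0, 3.0, 4.0, 5.0])                      # applied loads [N]
defl_rates = np.array([1.1e-9, 5.6e-9, 1.9e-8, 4.9e-8])       # load-line rates [m/s] (synthetic)

sigma_eq = eta * loads_N * l / (b * d**2)       # equivalent uniaxial stress [Pa]
eps_rate_eq = beta * defl_rates * d / l**2      # equivalent uniaxial strain rate [1/s]

# Norton's law: eps_rate = B * sigma^n  ->  log-log fit gives n (slope) and B
n_fit, logB = np.polyfit(np.log(sigma_eq), np.log(eps_rate_eq), 1)
print(f"fitted Norton exponent n = {n_fit:.2f}, B = {np.exp(logB):.3e}")
```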

Keywords: bending, creep, thin plate, materials engineering

Procedia PDF Downloads 440
708 Representation of History in Cinema: Comparative Analysis of Turkish Films Based on the Conquest of Istanbul

Authors: Dilara Balcı Gulpinar

Abstract:

History, which can be defined as the narrative of the past, is a process of reproduction that takes place in the present. The scientific status of historiography is controversial for reasons such as the fact that the historian makes choices and comments; even the reason for choosing a subject detracts from objectivity. Historians may draw on current values, may not be able to afford to contradict society, and/or may face pressure from dominant groups. In addition, due to the lack of documentation, interpretation and fiction are used to integrate historical events that seem disconnected. In this respect, there are views that relate history to the narrative arts rather than to the positive sciences. Popular historical films, which are visual historical representations, appeal to wider audiences by taking advantage of visuality, dramatic fictional narrative, various effects, music, stars, and other populist elements. The historical film, which does not claim to be scientific and even has the freedom to distort historical reality, can be perceived as reality itself and becomes an indispensable resource for individual and social memory. The ideological discourse of popular films is not only impressive and manipulative but also changeable. Socio-cultural and political changes can transform the representation of history in films sharply and rapidly. In accordance with the above-mentioned hypothesis, this study examines Turkish historical films about the conquest of Istanbul, using methods of historical and social analysis. İstanbul’un Fethi (Conquest of Istanbul, Aydin Arakon, 1953), Kuşatma Altında Aşk (Love Under Siege, Ersin Pertan, 1997), and Fetih 1453 (Conquest 1453, Faruk Aksoy, 2012) are the only three films in Turkish cinema that revolve around the said conquest, therefore constituting the sample of this study. It was determined that the real and fictional events, as well as the characters, both emphasized and ignored, differ from one film to another. Such significant differences in the dramatic and cinematographic structure of these three films, shot respectively in the 1950s, 1990s, and 2010s, show that the representation of history in popular cinema has altered throughout the years, losing its claim to objectivity.

Keywords: cinema, conquest of Istanbul, historical film, representation

Procedia PDF Downloads 95
707 Inverterless Grid Compatible Micro Turbine Generator

Authors: S. Ozeri, D. Shmilovitz

Abstract:

Micro-turbine generators (MTGs) are small power plants that consist of a high-speed gas turbine driving an electrical generator. MTGs may be fueled by either natural gas or kerosene and may also use sustainable and recycled green fuels such as biomass, landfill or digester gas. Typical ratings of MTGs range from 20 kW up to 200 kW. The primary use of MTGs is as backup for sensitive load sites such as hospitals, and they are also considered a feasible power source for distributed generation (DG), providing on-site generation in proximity to remote loads. MTGs have the compressor, the turbine, and the electrical generator mounted on a single shaft. For this reason, the electrical energy is generated at high frequency and is incompatible with the power grid. Therefore, MTGs must also contain a power conditioning unit to generate an AC voltage at the grid frequency. Presently, this power conditioning unit consists of a rectifier followed by a DC/AC inverter, both rated at the MTG’s full power. The losses of the power conditioning unit account for some 3-5%. Moreover, the full-power processing stage is a bulky and costly piece of equipment that also lowers the overall system reliability. In this study, we propose a new type of power conditioning stage in which only a small fraction of the power is processed. A low-power converter is used only to program the rotor current (i.e., the excitation current, which is substantially lower). Thus, the MTG's output voltage is shaped to the desired amplitude and frequency by proper programming of the excitation current. The control is realized by causing the rotor current to track the electrical frequency (which is related to the shaft frequency) with a difference that is exactly equal to the line frequency. Since the phasor of the rotation speed and the phasor of the rotor magnetic field are multiplied, the spectrum of the MTG generator voltage contains the sum and difference components. The desired difference component is at the line frequency (50/60 Hz), whereas the unwanted sum component is at about twice the electrical frequency of the stator. The unwanted high-frequency component can be filtered out by a low-pass filter, leaving only the low-frequency output. This approach allows elimination of the large power conditioning unit incorporated in conventional MTGs. Instead, a much smaller and cheaper fractional-power stage can be used. The proposed technology is also applicable to other high-rotation-speed generator sets such as aircraft power units.
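
A tiny numerical illustration of the frequency relation described above: multiplying the shaft (electrical) frequency component by an excitation current offset from it by exactly the line frequency yields a difference component at the line frequency and an unwanted sum component that a low-pass filter can remove; the frequencies used are illustrative.

```python
# Tiny numerical illustration of the sum/difference frequency relation:
# the product of the shaft electrical-frequency component and a rotor
# excitation current offset from it by exactly the line frequency contains a
# difference component at the line frequency and an unwanted sum component.
# Frequencies are illustrative placeholders.
import numpy as np

f_shaft_elec = 1600.0            # electrical frequency from the high-speed shaft [Hz]
f_line = 50.0                    # grid frequency [Hz]
f_exc = f_shaft_elec - f_line    # rotor excitation current frequency [Hz]

fs, T = 20000.0, 1.0
t = np.arange(0, T, 1 / fs)
v = np.cos(2 * np.pi * f_shaft_elec * t) * np.cos(2 * np.pi * f_exc * t)

spec = np.abs(np.fft.rfft(v))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = freqs[np.argsort(spec)[-2:]]
print("dominant components [Hz]:", np.sort(peaks))   # ~50 Hz (wanted) and ~3150 Hz (filtered out)
```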

Keywords: gas turbine, inverter, power multiplier, distributed generation

Procedia PDF Downloads 211
706 Effect of Plant Growth Promoting Rhizobacteria on the Germination and Early Growth of Onion (Allium cepa)

Authors: Dragana R. Stamenov, Simonida S. Djuric, Timea Hajnal Jafari

Abstract:

Plant growth promoting rhizobacteria (PGPR) are a heterogeneous group of bacteria that can be found in the rhizosphere, at root surfaces, and in association with roots, enhancing the growth of the plant either directly and/or indirectly. Increased crop productivity associated with the presence of PGPR has been observed in a broad range of plant species, such as raspberry, chickpea, legumes, cucumber, eggplant, pea, pepper, radish, tobacco, tomato, lettuce, carrot, corn, cotton, millet, bean, cocoa, etc. However, until now there has not been much research on the influence of PGPR on the growth and yield of onion. Onion (Allium cepa L.), of the Liliaceae family, is a species of great economic importance, widely cultivated all over the world. The aim of this research was to examine the influence of the plant growth promoting bacteria Pseudomonas sp. Dragana, Pseudomonas sp. Kiš, Bacillus subtilis, and Azotobacter sp. on the seed germination and early growth of onion (Allium cepa). The PGPR Azotobacter sp., Bacillus subtilis, Pseudomonas sp. Dragana, and Pseudomonas sp. Kiš, from the collection of the Faculty of Agriculture, Novi Sad, Serbia, were used as inoculants. The number of cells in 1 ml of the inoculum was 10⁸ CFU/ml. The control variant was not inoculated. The effect of PGPR on seed germination and hypocotyl length of Allium cepa was evaluated under controlled conditions, on filter paper in the dark at 22°C, while the effect on plant length and mass was evaluated under semi-controlled conditions, in 10 l vegetative pots. Seed treated with fungicide and untreated seed were used. After seven days, the percentage of germination was determined. After seven and fourteen days, hypocotyl length was measured. Fourteen days after germination, the length and mass of the plants were measured. Application of Pseudomonas sp. Dragana and Kiš and Bacillus subtilis had a negative effect on onion seed germination, while the use of Azotobacter sp. gave positive results. On average, application of all investigated inoculants had a positive effect on the measured parameters of plant growth. Azotobacter sp. had the greatest effect on hypocotyl length, plant length, and plant mass. On average, better results were achieved with untreated seeds compared with treated seeds. The results of this study show that PGPR can be used in the production of onion.

Keywords: germination, length, mass, microorganisms, onion

Procedia PDF Downloads 202
705 Methodologies, Findings, Discussion, and Limitations in Global, Multi-Lingual Research: We Are All Alone - Chinese Internet Drama

Authors: Patricia Portugal Marques de Carvalho Lourenco

Abstract:

A three-phase, multi-lingual methodological path was designed, constructed, and carried out using the 2020 Chinese internet drama series We Are All Alone as a case study. Phase one, the backbone of the research, comprised secondary data analysis, providing the structure on which the next two phases would be built. Phase one incorporated a Google Scholar and a Baidu Index analysis, the Star Network Influence Index, and Mydramalist.com's top two drama reviews, along with an article written about the drama and scrutiny of related Chinese blogs and websites. Phase two was field research carried out across Latin Europe, and phase three was focused on social media, taking into account that perceptions are conditioned by memory and based on the recall of past ideas. Overall, the research has shown the limited cultural presence of Chinese entertainment in Latin Europe and demonstrated the absence of Chinese content among French, Italian, Portuguese, and Spanish business-to-consumer retailers; a reflection of its low significance in Latin European markets and of the short life cycle of entertainment products in general, which are bubble-gum, disposable goods without a mid- to long-term effect on consumers' lives. The process of conducting comprehensive international research was complex and time-consuming, with data not always available in Mandarin, the researcher's linguistic limitations, limited knowledge of Chinese culture, and issues of cultural equivalence. Despite the steps taken to minimize the limitations of the proposed international research, theoretical limitations concerning Latin Europe and China still occurred. Data accuracy was disputable; sampling and data collection/analysis methods were heterogeneous; and ascertaining the data requirements and the method of analysis needed to achieve construct equivalence was challenging and laborious to operationalize. Secondary data were also often not readily available in Mandarin; yet, in spite of the array of limitations, the research was done, and results were produced.

Keywords: research methodologies, international research, primary data, secondary data, research limitations, online dramas, china, latin europe

Procedia PDF Downloads 45
704 Four-Electron Auger Process for Hollow Ions

Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola

Abstract:

A time-dependent close-coupling method is developed to calculate total, double, and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of the radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrödinger equation in imaginary time using a Schmidt orthogonalization method involving the interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential due to the large memory requirements needed to store the coupled wave functions and the long run times needed to reach convergence of the ionization process. Total, double, and triple autoionization rates are obtained by propagating the time-dependent close-coupled equations in real time, using integration over bound and continuum single-particle states. These states are generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration obtained using configuration-average distorted-wave theory. As expected, we find the double and triple autoionization rates to be much smaller than the total autoionization rates. Future work can extend this approach to study electron-impact triple ionization of atoms or ions. The work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA.
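
For illustration, the sketch below applies imaginary-time relaxation with Schmidt (Gram-Schmidt) orthogonalization to a 1D single-particle problem with a harmonic potential; the actual calculation is a four-electron, four-dimensional lattice problem, so this only demonstrates the state-preparation technique on a uniform grid.

```python
# Minimal 1D single-particle sketch of imaginary-time relaxation with
# Schmidt (Gram-Schmidt) orthogonalization against lower states, used to
# prepare bound states on a uniform grid. The potential and grid are
# placeholders; the real problem is a four-electron, 4D lattice calculation.
import numpy as np

N, dx, dtau = 400, 0.05, 1e-3
x = (np.arange(N) - N // 2) * dx
V = 0.5 * x**2                                # harmonic potential (placeholder)

def apply_H(psi):
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    return -0.5 * lap + V * psi

def relax(n_states=3, n_iter=20000):
    rng = np.random.default_rng(5)
    states = []
    for k in range(n_states):
        psi = rng.normal(size=N)
        for _ in range(n_iter):
            psi = psi - dtau * apply_H(psi)   # one imaginary-time step
            for phi in states:                # Schmidt orthogonalization to lower states
                psi -= (phi @ psi) * dx * phi
            psi /= np.sqrt(np.sum(psi**2) * dx)
        states.append(psi)
    return states

for k, psi in enumerate(relax()):
    E = np.sum(psi * apply_H(psi)) * dx
    print(f"state {k}: E = {E:.3f}  (exact harmonic values: 0.5, 1.5, 2.5)")
```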

Keywords: hollow atoms, autoionization, auger rates, time-dependent close-coupling method

Procedia PDF Downloads 129
703 Effect of Classroom Acoustic Factors on Language and Cognition in Bilinguals and Children with Mild to Moderate Hearing Loss

Authors: Douglas MacCutcheon, Florian Pausch, Robert Ljung, Lorna Halliday, Stuart Rosen

Abstract:

Contemporary classrooms are increasingly inclusive of children with mild to moderate disabilities and children from different language backgrounds (bilinguals, multilinguals), but classroom environments and standards have not yet been adapted adequately to meet the challenges brought about by this inclusivity. Additionally, classrooms are becoming noisier as a learner-centered, as opposed to teacher-centered, teaching paradigm is adopted, which prioritizes group work and peer-to-peer learning. Challenging listening conditions with distracting sound sources and background noise are known to have potentially negative effects on children, particularly those that are prone to struggle with speech perception in noise. Therefore, this research investigates two groups vulnerable to these environmental effects, namely children with mild to moderate hearing loss (MMHL) and sequential bilinguals learning in their second language. In the MMHL study, this group was assessed on speech-in-noise perception and on a number of receptive language and cognitive measures (auditory working memory, auditory attention), and correlations were evaluated. Speech reception thresholds were found to be predictive of language and cognitive ability, and the nature of the correlations is discussed. In the bilingual study, sequential bilingual children’s listening comprehension, speech-in-noise perception, listening effort, and release from masking were evaluated under a number of different ecologically valid acoustic scenarios in order to pinpoint the extent of the ‘native language benefit’ for Swedish children learning in English, their second language. Scene manipulations included target-to-distractor ratios and introducing spatially separated noise. This research will contribute to the body of findings from which educational institutions can draw when designing or adapting educational environments in inclusive schools.

Keywords: sequential bilinguals, classroom acoustics, mild to moderate hearing loss, speech-in-noise, release from masking

Procedia PDF Downloads 306
702 Talking Back to Hollywood: Museum Representation in Popular Culture as a Gateway to Understanding Public Perception

Authors: Jessica BrodeFrank, Beka Bryer, Lacey Wilson, Sierra Van Ryck deGroot

Abstract:

Museums are enjoying quite a moment in pop culture. From discussions of labor in Bob’s Burgers to the introduction of cultural repatriation in Black Panther, discussions of various museum issues are making their way into popular media. “Talking Back to Hollywood” analyzes the impact museums have on movies and television. The paper highlights a series of cultural cameos and discusses what each reveals about critical themes in museums: repatriation, labor, obfuscated histories, institutional legacies, artificial intelligence, and holograms. Using a mixed-methods approach that includes surveys, descriptive research, thematic analysis, and context analysis, the authors of this paper explore how we, as museum staff, might begin to cite museums and movies together as texts. Drawing from their experience working in museums and public history, this contingent of mid-career professionals highlights the impact museums have had on movies and television and the didactic lessons these portrayals can provide back to cultural heritage professionals. Tackling critical themes in museums such as repatriation, labor conditions/inequities, obfuscated histories, curatorial choice and control, institutional legacies, and more, this paper is grounded in the cultural zeitgeist of the 2000s and the messages these media portrayals send to the public and the cultural heritage sector. In particular, the paper examines how portrayals of AI, holograms, and other technologies can be used as entry points for necessary discussions with the public about mistrust, misinformation, and emerging technologies. This paper will not only expose the legacy and cultural understanding of the museum field within popular culture but will also discuss actionable ways that public historians can use these portrayals as an entry point for discussions with the public, citing literature reviews and quantitative and qualitative analysis of survey results. As Hollywood is talking about museums, museums can use that to better connect with the audiences who feel comfortable at the cinema but are excluded from the museum.

Keywords: museums, public memory, representation, popular culture

Procedia PDF Downloads 47