Search results for: non-linear finite element method
18857 Association of Phytomineral Supplementation with the Seasonal Prevalence of Gastrointestinal Parasites of Grazing Sheep in the Scenario of Climate Change
Authors: Muhammad Sohail Sajid, Hafiz Muhammad Rizwan, Ashfaq Ahmad Chatta, Zafar Iqbal, Muhammad Saqib
Abstract:
Changes in the climate are posing threats to the livestock community throughout the globe. Agro-grazing animals and natural vegetation as their forages are the most important components of animal production. Climate and local conditions not only determine the nature and kind of plants, their distribution, composition and nutritive value in different cropping belts and grazing sites but also influence number and kinds of grazing animals. Phytomineral supplementation can act as an indirect tool to boost-up immunological profile of animals leading to the development of resilience against parasitic infections. The present study correlates the trace element (Cu, Co, Mn, Zn) profile of grazing sheep, feedstuffs, respective soils and their GI helminths in a selected district of Sialkot, Punjab, Pakistan. Ten species of GI helminths were found during the survey. A significant (P < 0.05) variation in the concentrations (conc.) of Zn, Cu, Mn and Co was recorded in a total of 16 collected forages. During autumn, mean conc. of Cu, Zn and Co in sera were inversely proportional to the GI helminth burden; while, during spring, only Zn was inversely proportional to the GI helminth burden in grazing sheep. During autumn the highest conc. of Zn, Cu, Mn and Co were recorded in Echinochloa colona, Amaranthus viridis, Cannabis sativa, and Brachiaria ramose and during spring in Cichorium intybus, Cynodon dactylon, Parthenium hysterophorus and Coronopus didymus respectively. The trace element-rich forages, preferably Zn, found effective against helminth infection are advisable supplemental remedies to improve the trace element profile in grazing sheep. This mitigation strategy may ultimately improve the resilience against GI helminth infections especially in the resource poor countries like Pakistan.Keywords: coprological examination, Trace elements, Sheep, Gastro-intestinal parasites, Prevalence, Sialkot, Pakistan
Procedia PDF Downloads 391
18856 Location Uncertainty – A Probabilistic Solution for Automatic Train Control
Authors: Monish Sengupta, Benjamin Heydecker, Daniel Woodland
Abstract:
New train control systems rely mainly on Automatic Train Protection (ATP) and Automatic Train Operation (ATO) to dynamically control speed and hence performance. The ATP and the ATO form the vital elements within the CBTC (Communication Based Train Control) and ERTMS (European Rail Traffic Management System) system architectures. Reliable and accurate measurement of train location, speed and acceleration is vital to the operation of train control systems. In the past, all CBTC and ERTMS systems have deployed a balise or equivalent to correct the uncertainty in the train location. Typically, a CBTC train is allowed to miss only one balise on the track, after which the ATP system applies the emergency brake to halt the service. This is because the location uncertainty, which grows within the train control system, cannot tolerate missing more than one balise. Balises contribute a significant amount towards wayside maintenance, and studies have shown that balises on the track also form a constraint on future changes to track layout and speed profile. This paper investigates the causes of the location uncertainty that is currently experienced and considers whether it is possible to identify an effective filter to ascertain, in conjunction with appropriate sensors, more accurate speed, distance and location for a CBTC-driven train without the need for any external balises. An appropriate sensor fusion algorithm and intelligent sensor selection methodology will be deployed to ascertain the railway location and speed measurement at the highest precision. Similar techniques are already in use in aviation, satellite, submarine and other navigation systems. Developing a model for the speed control and the use of a Kalman filter is a key element in this research. This paper summarizes the research undertaken and its significant findings, highlighting the potential for introducing alternative approaches to train positioning that would enable removal of all trackside location correction balises, leading to a large reduction in maintenance and more flexibility in future track design.
Keywords: ERTMS, CBTC, ATP, ATO
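The abstract does not give the filter equations, but a minimal discrete-time Kalman filter of the kind described — fusing noisy on-board speed readings into a position/speed estimate without balise corrections — might look like the sketch below. The state vector, measurement model and noise levels are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative constant-velocity train model: state x = [position, speed].
# All matrices and noise levels below are assumed for the sketch.
dt = 0.1                                  # time step [s]
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
H = np.array([[0.0, 1.0]])                # only speed is measured (e.g. tachometer)
Q = np.diag([0.01, 0.05])                 # process noise covariance
R = np.array([[0.25]])                    # measurement noise covariance

x = np.array([[0.0], [20.0]])             # initial estimate: 0 m, 20 m/s
P = np.eye(2)                             # initial estimate covariance

def kalman_step(x, P, z):
    """One predict/update cycle for a speed measurement z [m/s]."""
    x_pred = F @ x                        # predict
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y                # update
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
for _ in range(100):
    z = np.array([[20.0 + rng.normal(0.0, 0.5)]])   # noisy speed reading
    x, P = kalman_step(x, P, z)

print("estimated position %.1f m, speed %.2f m/s" % (x[0, 0], x[1, 0]))
print("position uncertainty (1-sigma): %.2f m" % np.sqrt(P[0, 0]))
```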
Procedia PDF Downloads 410
18855 Design of Enhanced Adaptive Filter for Integrated Navigation System of FOG-SINS and Star Tracker
Authors: Nassim Bessaad, Qilian Bao, Zhao Jiangkang
Abstract:
The fiber optic gyroscope in the strap-down inertial navigation system (FOG-SINS) suffers from precision degradation due to the influence of random errors. In this work, an enhanced Allan variance (AV) stochastic modeling method combined with discrete wavelet transform (DWT) signal denoising is implemented to estimate the random processes in the FOG signal. Furthermore, we devise a measurement-based iterative adaptive Sage-Husa nonlinear filter with augmented states to integrate a star tracker sensor with the SINS. The proposed filter adapts the measurement noise covariance matrix based on the available data. Moreover, the enhanced stochastic modeling scheme is used to tune the process noise covariance matrix and the augmented-state Gauss-Markov process parameters. Finally, the effectiveness of the proposed filter is investigated using data collected under laboratory conditions. The results show the filter's improved accuracy in comparison with the conventional Kalman filter (CKF).
Keywords: inertial navigation, adaptive filtering, star tracker, FOG
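The filter itself is not reproduced in the abstract; the core of a Sage-Husa style measurement-noise adaptation — re-estimating the measurement noise covariance R from the innovation sequence with a fading factor before the standard Kalman update — can be sketched in one dimension as follows. The model, fading factor and noise values are assumptions for illustration only.

```python
import numpy as np

# One-dimensional sketch of Sage-Husa style adaptation of the measurement
# noise covariance R from the innovation sequence. The model, fading factor
# and noise values are illustrative assumptions, not parameters from the paper.
F, H, Q = 1.0, 1.0, 1e-4
b = 0.98                      # forgetting (fading) factor
x, P, R = 0.0, 1.0, 1.0       # initial state, covariance and R guess

rng = np.random.default_rng(1)
true_R = 0.04                 # noise level the filter should converge towards

for k in range(1, 500):
    x_pred = F * x            # predict
    P_pred = F * P * F + Q
    z = rng.normal(0.0, np.sqrt(true_R))      # measurement of a constant (0) state
    innov = z - H * x_pred
    # Sage-Husa update of R with time-varying weight d_k
    d = (1.0 - b) / (1.0 - b ** (k + 1))
    R = (1.0 - d) * R + d * (innov ** 2 - H * P_pred * H)
    R = max(R, 1e-6)          # keep R positive
    # Standard Kalman update with the adapted R
    K = P_pred * H / (H * P_pred * H + R)
    x = x_pred + K * innov
    P = (1.0 - K * H) * P_pred

print("adapted R = %.4f (true %.4f)" % (R, true_R))
```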
Procedia PDF Downloads 80
18854 Dislocation Density-Based Modeling of the Grain Refinement in Surface Mechanical Attrition Treatment
Authors: Reza Miresmaeili, Asghar Heydari Astaraee, Fereshteh Dolati
Abstract:
In the present study, an analytical model based on dislocation density model was developed to simulate grain refinement in surface mechanical attrition treatment (SMAT). The correlation between SMAT time and development in plastic strain on one hand, and dislocation density evolution, on the other hand, was established to simulate the grain refinement in SMAT. A dislocation density-based constitutive material law was implemented using VUHARD subroutine. A random sequence of shots is taken into consideration for multiple impacts model using Python programming language by utilizing a random function. The simulation technique was to model each impact in a separate run and then transferring the results of each run as initial conditions for the next run (impact). The developed Finite Element (FE) model of multiple impacts describes the coverage evolution in SMAT. Simulations were run to coverage levels as high as 4500%. It is shown that the coverage implemented in the FE model is equal to the experimental coverage. It is depicted that numerical SMAT coverage parameter is adequately conforming to the well-known Avrami model. Comparison between numerical results and experimental measurements for residual stresses and depth of deformation layers confirms the performance of the established FE model for surface engineering evaluations in SMA treatment. X-ray diffraction (XRD) studies of grain refinement, including resultant grain size and dislocation density, were conducted to validate the established model. The full width at half-maximum in XRD profiles can be used to measure the grain size. Numerical results and experimental measurements of grain refinement illustrate good agreement and show the capability of established FE model to predict the gradient microstructure in SMA treatment.Keywords: dislocation density, grain refinement, severe plastic deformation, simulation, surface mechanical attrition treatment
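The abstract notes that the random shot sequence was generated with Python's random function; a minimal sketch of that idea — drawing random impact positions on a square patch and comparing the measured covered fraction with the Avrami estimate — is given below. The patch size, indentation radius and shot count are assumed values, not parameters from the study.

```python
import numpy as np

# Sketch: a random shot sequence on a square target patch, the covered area
# fraction it produces, and the Avrami estimate 1 - exp(-nominal coverage).
# Patch size, indentation radius and shot count are assumed for illustration.
rng = np.random.default_rng(42)
L = 5.0          # patch edge length [mm]
r = 0.25         # indentation radius left by one shot [mm]
n_shots = 400

centres = rng.uniform(0.0, L, size=(n_shots, 2))   # the random shot sequence

# Measure the covered fraction on a fine grid.
xs = np.linspace(0.0, L, 256)
X, Y = np.meshgrid(xs, xs)
covered = np.zeros_like(X, dtype=bool)
for cx, cy in centres:
    covered |= (X - cx) ** 2 + (Y - cy) ** 2 <= r ** 2

nominal = np.pi * r ** 2 * n_shots / L ** 2        # "coverage" may exceed 100 %
avrami = 1.0 - np.exp(-nominal)
print(f"nominal coverage: {100 * nominal:.0f} %")
print(f"covered fraction: {100 * covered.mean():.1f} %  (Avrami: {100 * avrami:.1f} %)")
```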
Procedia PDF Downloads 136
18853 Seat Assignment Model for Student Admissions Process at Saudi Higher Education Institutions
Authors: Mohammed Salem Alzahrani
Abstract:
In this paper, the student admission process is studied to optimize the assignment of vacant seats with three main objectives: utilizing all vacant seats, satisfying all program-of-study admission requirements, and maintaining fairness among all candidates. The Seat Assignment Method (SAM) is used to build the model and solve the optimization problem with the help of the Northwest Corner Method and the Least Cost Method. A closed formula is derived for applying the priority of assigning a seat to a candidate based on SAM.
Keywords: admission process model, assignment problem, Hungarian Method, Least Cost Method, Northwest Corner Method, SAM
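The SAM formulation is not given in the abstract, but the Northwest Corner Method it relies on for an initial feasible solution of a transportation-type assignment can be sketched as follows; the seat capacities (supply) and candidate-group sizes (demand) are made up for illustration.

```python
# Sketch of the Northwest Corner Method for an initial basic feasible solution
# of a transportation-type seat assignment problem. The seat capacities
# (supply) and candidate-group sizes (demand) below are illustrative only.
def northwest_corner(supply, demand):
    supply, demand = supply[:], demand[:]
    allocation = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])      # assign as much as possible at (i, j)
        allocation[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            i += 1                           # program i is full, move down
        else:
            j += 1                           # group j is fully placed, move right
    return allocation

seats_per_program = [60, 40, 30]             # supply: vacant seats in 3 programs
candidate_groups = [50, 45, 20, 15]          # demand: candidates in 4 priority groups
for row in northwest_corner(seats_per_program, candidate_groups):
    print(row)
```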
Procedia PDF Downloads 500
18852 Size Effects on Structural Performance of Concrete Gravity Dams
Authors: Mehmet Akköse
Abstract:
Concern about the seismic safety of concrete dams has been growing around the world, partly because the population at risk in locations downstream of major dams continues to expand and also because it is increasingly evident that the seismic design concepts in use at the time most existing dams were built were inadequate. Most investigations in the past have been conducted on large dams, typically above 100 m high. A large number of concrete dams in our country and in other parts of the world are less than 50 m high. Most of these dams were designed using pseudo-static methods, ignoring the dynamic characteristics of the structure as well as the characteristics of the ground motion. Therefore, it is important to carry out investigations on the seismic behavior of this category of dam in order to assess and evaluate the safety of existing dams and to improve the knowledge needed for dams of different heights to be constructed in the future. In this study, size effects on the structural performance of concrete gravity dams subjected to near- and far-fault ground motions are investigated, including dam-water-foundation interaction. For this purpose, a benchmark problem proposed by ICOLD (International Commission on Large Dams) is chosen as a numerical application. The structural performance of the dam for five different heights is evaluated according to the damage criteria of USACE (U.S. Army Corps of Engineers), and it is decided on that basis whether non-linear analysis of the dams is required or not. The linear elastic dynamic analyses of the dams under near- and far-fault ground motions are performed using the step-by-step integration technique. The integration time step is 0.0025 sec. The Rayleigh damping constants are calculated assuming a 5% damping ratio. The program NONSAP, modified for fluid-structure systems with the Lagrangian fluid finite element, is employed in the response calculations.
Keywords: concrete gravity dams, Lagrangian approach, near and far-fault ground motion, USACE damage criteria
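The abstract states that the Rayleigh damping constants were computed for a 5% damping ratio but does not list them; the standard two-frequency formulas such a calculation would normally follow are sketched below, with the two control frequencies chosen arbitrarily for illustration.

```python
import math

# Standard Rayleigh damping: C = alpha*M + beta*K, with alpha and beta chosen
# so that the damping ratio equals xi at two control frequencies f1 and f2.
# The control frequencies below are assumed; the 5% ratio is from the abstract.
def rayleigh_constants(f1_hz, f2_hz, xi=0.05):
    w1, w2 = 2.0 * math.pi * f1_hz, 2.0 * math.pi * f2_hz
    alpha = 2.0 * xi * w1 * w2 / (w1 + w2)   # mass-proportional term
    beta = 2.0 * xi / (w1 + w2)              # stiffness-proportional term
    return alpha, beta

alpha, beta = rayleigh_constants(f1_hz=3.0, f2_hz=12.0)
print(f"alpha = {alpha:.4f} 1/s, beta = {beta:.6f} s")

# Resulting damping ratio at an arbitrary frequency f:
f = 6.0
w = 2.0 * math.pi * f
print(f"damping ratio at {f} Hz: {0.5 * (alpha / w + beta * w):.4f}")
```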
Procedia PDF Downloads 267
18851 Engine with Dual Helical Crankshaft System Operating at an Overdrive Gear Ratio
Authors: Anierudh Vishwanathan
Abstract:
This paper suggests a new design of the crankshaft system that would help to use a low revving engine for applications requiring the use of a high revving engine operating at the same power by converting the extra or unnecessary torque obtained from a low revving engine into angular velocity of the crankshaft of the engine hence, improve the fuel economy of the vehicle because of the fact that low revving engines run more effectively on lean air fuel mixtures accompanied with less wear and tear of the engine due to lesser rubbing of the piston rings with the cylinder walls. If the crankshaft with the proposed design is used in a low revving engine, then it will give the same torque and speed as that given by a high revving engine operating at the same power but the new engine will give better fuel economy. Hence the new engine will give the benefits of a low revving engine as well as a high revving engine. The proposed crankshaft design will be achieved by changing the design of the crankweb in such a way that it functions both as a counterweight as well as a helical gear that can transfer power to the secondary gear shaft which will be incorporated in the crankshaft system. The crankshaft and the secondary gear shaft will be operating at an overdrive ratio. The crankshaft will now be a two shaft system instead of a single shaft system. The newly designed crankshaft will be mounted on the bearings instead of being connected to the flywheel of the engine. This newly designed crankshaft will transmit power to the secondary shaft which will rotate the flywheel and then the rotary motion will be transmitted to the transmission system as usual. In this design, the concept of power transmission will be incorporated in the crankshaft system. In the paper, the crankshaft and the secondary shafts have been designed in such a way that at any instant of time only half the number of crankwebs will be meshed with the secondary shaft. For example, during one revolution of the crankshaft, if for the first half of revolution; first, second, seventh and eighth crankwebs are meshing with the secondary shaft then for the next half revolution, third, fourth, fifth and sixth crankwebs will mesh with the secondary shaft. This paper also analyses the proposed crankshaft design for safety against fatigue failure. Finite element analysis of the crankshaft has been done and the resultant stresses have been calculated.Keywords: low revving, high revving, secondary shaft, partial meshing
Procedia PDF Downloads 269
18850 Using MALDI-TOF MS to Detect Environmental Microplastics (Polyethylene, Polyethylene Terephthalate, and Polystyrene) within a Simulated Tissue Sample
Authors: Kara J. Coffman-Rea, Karen E. Samonds
Abstract:
Microplastic pollution is an urgent global threat to our planet and human health. Microplastic particles have been detected within our food, water, and atmosphere, and found within the human stool, placenta, and lung tissue. However, most spectrometric microplastic detection methods require chemical digestion which can alter or destroy microplastic particles and makes it impossible to acquire information about their in-situ distribution. MALDI TOF MS (Matrix-assisted laser desorption ionization-time of flight mass spectrometry) is an analytical method using a soft ionization technique that can be used for polymer analysis. This method provides a valuable opportunity to both acquire information regarding the in-situ distribution of microplastics and also minimizes the destructive element of chemical digestion. In addition, MALDI TOF MS allows for expanded analysis of the microplastics including detection of specific additives that may be present within them. MALDI TOF MS is particularly sensitive to sample preparation and has not yet been used to analyze environmental microplastics within their specific location (e.g., biological tissues, sediment, water). In this study, microplastics were created using polyethylene gloves, polystyrene micro-foam, and polyethylene terephthalate cable sleeving. Plastics were frozen using liquid nitrogen and ground to obtain small fragments. An artificial tissue was created using a cellulose sponge as scaffolding coated with a MaxGel Extracellular Matrix to simulate human lung tissue. Optimal preparation techniques (e.g., matrix, cationization reagent, solvent, mixing ratio, laser intensity) were first established for each specific polymer type. The artificial tissue sample was subsequently spiked with microplastics, and specific polymers were detected using MALDI-TOF-MS. This study presents a novel method for the detection of environmental polyethylene, polyethylene terephthalate, and polystyrene microplastics within a complex sample. Results of this study provide an effective method that can be used in future microplastics research and can aid in determining the potential threats to environmental and human health that they pose.Keywords: environmental plastic pollution, MALDI-TOF MS, microplastics, polymer identification
Procedia PDF Downloads 256
18849 Natural Convection in Wavy-Wall Cavities Filled with Power-Law Fluid
Authors: Cha’o-Kuang Chen, Ching-Chang Cho
Abstract:
This paper investigates the natural convection heat transfer performance in a complex-wavy-wall cavity filled with a power-law fluid. In performing the simulations, the continuity, Cauchy momentum and energy equations are solved subject to the Boussinesq approximation using a finite volume method. The simulations focus specifically on the effects of the flow behavior index in the power-law model and of the Rayleigh number on the flow streamlines, isothermal contours and mean Nusselt number within the cavity. The results show that pseudoplastic fluids have a better heat transfer performance than Newtonian or dilatant fluids. Moreover, it is shown that for Rayleigh numbers greater than Ra = 10³, the mean Nusselt number increases significantly as the flow behavior index is decreased.
Keywords: non-Newtonian fluid, power-law fluid, natural convection, heat transfer enhancement, cavity, wavy wall
Procedia PDF Downloads 266
18848 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure
Authors: T. Nozu, K. Hibi, T. Nishiie
Abstract:
This paper discusses the applicability of a numerical model for damage prediction in accidental hydrogen explosions occurring at a hydrogen facility. The numerical model was based on the unstructured finite volume method (FVM) code "NuFD/FrontFlowRed". For simulating the unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, in which a G-equation model and a conserved scalar model expressed the propagation of the premixed flame surface and the diffusion combustion process, respectively. For validation of this numerical model, we have simulated two previous types of hydrogen explosion tests. One is an open-space explosion test, in which the source was a prismatic 5.27 m³ volume containing a 30% hydrogen-air mixture. A reinforced concrete wall was set 4 m away from the front surface of the source, which was ignited at the bottom center by a spark. The other is a vented enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m² on one side. The test was performed with ignition at the center of the wall opposite the vent. Hydrogen-air mixtures with hydrogen concentrations close to 18 vol.% were used in the tests. The results from the numerical simulations are compared with the previous experimental data to assess the accuracy of the numerical model, and we have verified that the simulated overpressures and flame time-of-arrival data were in good agreement with the results of the two explosion tests.
Keywords: deflagration, large eddy simulation, turbulent combustion, vented enclosure
Procedia PDF Downloads 244
18847 Investigation of Scaling Laws for Stiffness and Strength in Bioinspired Glass Sponge Structures Produced by Fused Filament Fabrication
Authors: Hassan Beigi Rizi, Harold Auradou, Lamine Hattali
Abstract:
Various industries, including civil engineering, automotive, aerospace, and biomedical fields, are currently seeking novel and innovative high-performance lightweight materials to reduce energy consumption. Inspired by the structure of Euplectella Aspergillum Glass Sponges (EA-sponge), 2D unit cells were created and fabricated using a Fused Filament Fabrication (FFF) process with Polylactic acid (PLA) filaments. The stiffness and strength of bio-inspired EA-sponge lattices were investigated both experimentally and numerically under uniaxial tensile loading and are compared to three standard square lattices with diagonal struts (Designs B and C) and non-diagonal struts (Design D) reinforcements. The aim is to establish predictive scaling laws models and examine the deformation mechanisms involved. The results indicated that for the EA-sponge structure, the relative moduli and yield strength scaled linearly with relative density, suggesting that the deformation mechanism is stretching-dominated. The Finite element analysis (FEA), with periodic boundary conditions for volumetric homogenization, confirms these trends and goes beyond the experimental limits imposed by the FFF printing process. Therefore, the stretching-dominated behavior, investigated from 0.1 to 0.5 relative density, demonstrate that the study of EA-sponge structure can be exploited for the realization of square lattice topologies that are stiff and strong and have attractive potential for lightweight structural applications. However, the FFF process introduces an accuracy limitation, with approximately 10% error, making it challenging to print structures with a relative density below 0.2. Future work could focus on exploring the impact of different printing materials on the performance of EA-sponge structures.Keywords: bio-inspiration, lattice structures, fused filament fabrication, scaling laws
Procedia PDF Downloads 7
18846 Multifractal Behavior of the Perturbed Cerbelli-Giona Map: Numerical Computation of ω-Measure
Authors: Ibrahim Alsendid, Rob Sturman, Benjamin Sharp
Abstract:
In this paper, we consider a family of 2-dimensional nonlinear area-preserving transformations on the torus. A single parameter η varies between 0 and 1, taking the transformation from a hyperbolic toral automorphism to the “Cerbelli-Giona” map, a system known to exhibit multifractal properties. Here we study the multifractal properties of the family of maps. We apply a box-counting method by defining a grid of boxes Bi(δ), where i is the index and δ is the size of the boxes, to quantify the distribution of stable and unstable manifolds of the map. When the parameter is in the range 0.51< η <0.58 and 0.68< η <1 the map is ergodic; i.e., the unstable and stable manifolds eventually cover the whole torus, although not in a uniform distribution. For accurate numerical results, we require correspondingly accurate construction of the stable and unstable manifolds. Here we use the piecewise linearity of the map to achieve this, by computing the endpoints of line segments that define the global stable and unstable manifolds. This allows the generalized fractal dimension Dq, and spectrum of dimensions f(α), to be computed with accuracy. Finally, the intersection of the unstable and stable manifold of the map will be investigated and compared with the distribution of periodic points of the system.Keywords: Discrete-time dynamical systems, Fractal geometry, Multifractal behaviour of the Perturbed map, Multifractal of Dynamical systems
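As an illustration of the box-counting step, the sketch below estimates the generalized dimension Dq from a cloud of points on the unit torus. The point set used here is a uniform random stand-in; in the paper the boxes would be populated by the computed stable and unstable manifold segments of the map.

```python
import numpy as np

# Sketch of a box-counting estimate of the generalized dimension D_q for a set
# of points on the unit torus. The point cloud here is a random stand-in; in
# the paper the boxes B_i(delta) would be populated by the stable/unstable
# manifold segments of the Cerbelli-Giona map.
def generalized_dimension(points, q, sizes):
    logs_d, logs_s = [], []
    for delta in sizes:
        n = int(round(1.0 / delta))
        idx = np.floor(points * n).astype(int) % n         # box index of each point
        _, counts = np.unique(idx[:, 0] * n + idx[:, 1], return_counts=True)
        p = counts / counts.sum()                           # box measures p_i(delta)
        logs_d.append(np.log(delta))
        logs_s.append(np.log(np.sum(p ** q)) / (q - 1.0))   # valid for q != 1
    slope, _ = np.polyfit(logs_d, logs_s, 1)                # D_q is the limiting slope
    return slope

rng = np.random.default_rng(0)
pts = rng.random((200_000, 2))                              # uniform measure on the torus
for q in (0, 2, 3):
    dq = generalized_dimension(pts, q, [1/8, 1/16, 1/32, 1/64])
    print(f"D_{q} ≈ {dq:.3f}")   # ≈ 2 for a uniform measure
```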
Procedia PDF Downloads 211
18845 Control of Single Axis Magnetic Levitation System Using Fuzzy Logic Control
Authors: A. M. Benomair, M. O. Tokhi
Abstract:
This paper presents an investigation of a system model for the stabilization of a magnetic levitation system (Maglev). The magnetic levitation system is a challenging nonlinear mechatronic system in which an electromagnetic force is required to suspend an object (a metal sphere) in air. The electromagnetic force is very sensitive to noise, which can create acceleration forces on the metal sphere, causing the sphere to move into the unbalanced region. Maglev systems contribute to industry by reducing power consumption, increasing power efficiency and reducing maintenance costs. Common applications of Maglev include power generation (e.g., wind turbines), Maglev trains and medical devices (e.g., magnetically suspended artificial heart pumps). This paper presents a comparison between the dynamic response and robustness characteristics of a conventional PD and a fuzzy PD controller. The main contribution of this paper is the demonstration of the stabilization and robustness of the fuzzy PD type controller, achieved by tuning the scaling factors of the linear PD-type fuzzy controller from an equivalently tuned conventional PD controller.
Keywords: magnetic levitation system, PD controller, Fuzzy Logic Control, Fuzzy PD
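The controllers are not specified in detail here, but the tuning route mentioned — deriving the scaling factors of a linear PD-type fuzzy controller from an equivalent tuned conventional PD law — rests on a simple gain equivalence that can be sketched as follows; the PD gains and test points are illustrative assumptions.

```python
# Sketch of the equivalence used when tuning a *linear* PD-type fuzzy controller
# from a conventional PD law: with input scaling factors GE (error) and GCE
# (change of error) and output gain GU, the fuzzy controller reduces to
#   u = GU * (GE * e + GCE * de)  =>  Kp = GU * GE,  Kd = GU * GCE.
# The PD gains and the test points below are illustrative only.
Kp, Kd = 120.0, 8.0           # conventional PD gains (assumed)
GE = 1.0                      # pick GE, then solve for the remaining factors
GU = Kp / GE
GCE = Kd / GU

def fuzzy_pd_linear(e, de):
    """Linear PD-type fuzzy controller in its scaled, defuzzified form."""
    return GU * (GE * e + GCE * de)

def conventional_pd(e, de):
    return Kp * e + Kd * de

# The two laws coincide for any (error, error-rate) pair:
for e, de in [(0.01, 0.0), (-0.005, 0.02), (0.002, -0.03)]:
    print(f"e={e:+.3f}, de={de:+.3f}: fuzzy={fuzzy_pd_linear(e, de):+.3f},"
          f" PD={conventional_pd(e, de):+.3f}")
```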
Procedia PDF Downloads 273
18844 Climate Change Impact on Slope Stability: A Study of Slope Drainage Design and Operation
Authors: Elena Mugarza, Stephanie Glendinning, Ross Stirling, Colin Davies
Abstract:
The effects of climate change and increased rainfall events on UK-based infrastructure are observable, with an increasing number being reported on in the national press. The fatal derailment at Stonehaven in 2020 prompted a wider review of Network Rail-owned earthworks assets. The event was indicated by the Rail Accident Investigation Branch (RAIB) to be caused by mis-installed drainage on the adjacent cutting. The slope failure on Snake Pass (public highway A57) was reportedly caused by significant water ingress following numerous storm events and resulted in the road’s closure for several months. This problem is only projected to continue with greater intensity and more prolonged rainfall events forecasted in the future. Subsequently, this project is designed to evaluate effective drainage trench design within infrastructure embankments, considering the capillary barrier phenomenon that may govern their deterioration and resultant failure. Theoretically, the differential between grain sizes of the embankment clays and gravels, customarily used in drainage trenches, would have a limiting effect on infiltration. As such, it is anticipated that the inclusion of an additional material with an intermediate grain size should improve the hydraulic conductivity across the drainage boundary. Multiple drainage designs will be studied using instrumentation within the drain and surrounding clays. Data from the real-world installation at the BIONICS embankment will be collected and compared with laboratory and Finite Element (FE) simulations. This research aims to reduce the risk of infrastructure slope failures by improving the resilience of earthwork drainage and lessening the consequential impact on transportation networks.Keywords: earthworks, slope drainage, transportation slopes, deterioration, capillary barriers, field study
Procedia PDF Downloads 51
18843 A Succinct Method for Allocation of Reactive Power Loss in Deregulated Scenario
Authors: J. S. Savier
Abstract:
Real power is the component of power which is converted into useful energy, whereas reactive power is the component which cannot be converted into useful energy but is required for the magnetization of various electrical machinery. If the reactive power is compensated at the consumer end, the need for reactive power flow from generators to the load can be avoided and hence the overall power loss can be reduced. In this scenario, this paper presents a succinct method, called the JSS method, for allocating reactive power losses to consumers connected to radial distribution networks in a deregulated environment. The proposed method has the advantage that no assumptions are made while deriving the reactive power loss allocation method.
Keywords: deregulation, reactive power loss allocation, radial distribution systems, succinct method
Procedia PDF Downloads 376
18842 Modification of Underwood's Equation to Calculate Minimum Reflux Ratio for Column with One Side Stream Upper Than Feed
Authors: S. Mousavian, A. Abedianpour, A. Khanmohammadi, S. Hematian, Gh. Eidi Veisi
Abstract:
Distillation is one of the most important and widely used separation methods in industrial practice. There are different ways to design a distillation column; one of these is the short-cut method, in which material balances and equilibrium relations are employed to calculate the number of trays in the column. Several methods fall within the short-cut category, one of which is the Fenske-Underwood-Gilliland method. In this method, the minimum reflux ratio is calculated by the Underwood equation. Underwood proposed an equation that is useful for a simple distillation column with one feed and one top and one bottom product. In this study, the Underwood method is extended to predict the minimum reflux ratio for a column with one side stream above the feed. The results of this model were compared with the McCabe-Thiele method and show that the proposed method is able to calculate the minimum reflux ratio with very small error.
Keywords: minimum reflux ratio, side stream, distillation, Underwood's method
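The modified equation itself is not reproduced in the abstract; for reference, the classical two-step Underwood calculation for the original case (single feed, no side stream) is sketched below with an arbitrary ternary example. A side stream above the feed would add a corresponding term to the second equation; the mixture data here are assumptions, not values from the paper.

```python
from scipy.optimize import brentq

# Classical Underwood calculation (single feed, no side stream) as a reference
# sketch. The ternary mixture data (relative volatilities, feed and distillate
# compositions, feed quality q) are illustrative assumptions.
alpha = [2.5, 1.0, 0.4]      # relative volatilities (light key 2.5, heavy key 1.0)
z = [0.4, 0.3, 0.3]          # feed mole fractions
xD = [0.95, 0.05, 0.0]       # distillate mole fractions
q = 1.0                      # saturated liquid feed

def underwood_1(theta):
    # First Underwood equation: sum(alpha_i * z_i / (alpha_i - theta)) = 1 - q
    return sum(a * zi / (a - theta) for a, zi in zip(alpha, z)) - (1.0 - q)

# The useful root lies between the volatilities of the heavy and light keys.
theta = brentq(underwood_1, 1.0 + 1e-6, 2.5 - 1e-6)

# Second Underwood equation gives the minimum reflux ratio.
Rmin = sum(a * x / (a - theta) for a, x in zip(alpha, xD)) - 1.0
print(f"theta = {theta:.4f}, minimum reflux ratio R_min = {Rmin:.3f}")
```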
Procedia PDF Downloads 406
18841 Effects of Viscous Dissipation on Free Convection Boundary Layer Flow towards a Horizontal Circular Cylinder
Authors: Muhammad Khairul Anuar Mohamed, Mohd Zuki Salleh, Anuar Ishak, Nor Aida Zuraimi Md Noar
Abstract:
In this study, a numerical investigation of the effect of viscous dissipation on convective boundary layer flow over a horizontal circular cylinder with constant wall temperature is considered. The transformed partial differential equations are solved numerically using an implicit finite-difference scheme known as the Keller-box method. Numerical solutions are obtained for the reduced Nusselt number and the skin friction coefficient as well as for the velocity and temperature profiles. The features of the flow and heat transfer characteristics for various values of the Prandtl number and Eckert number are analyzed and discussed. The results in this paper are original and important for researchers working in the area of boundary layer flow, and they can be used as a reference and for comparison purposes in future work.
Keywords: free convection, horizontal circular cylinder, viscous dissipation, convective boundary layer flow
Procedia PDF Downloads 439
18840 A Calibration Device for Force-Torque Sensors
Authors: Nicolay Zarutskiy, Roman Bulkin
Abstract:
The paper reviews existing methods for calibrating force-torque sensors with one to six components, analyzes their advantages and disadvantages, and establishes the need for a new calibration method. The proposed calibration method and its constructive realization are also described. The method allows automated force-torque sensor calibration both with selected components of the main vector of forces and moments and with complex loading. Thus, the two main advantages of the proposed calibration method are achieved: automation of the calibration process and universality.
Keywords: automation, calibration, calibration device, calibration method, force-torque sensors
Procedia PDF Downloads 646
18839 Relevance of Lecture Method in Modern Era: A Study from Nepal
Authors: Hari Prasad Nepal
Abstract:
Research on lecture method issues confirm that this teaching method has been practiced from the very beginnings of schooling. Many teachers, lecturers and professors are convinced that lecture still represents main tool of contemporary instructional process. The central purpose of this study is to uncover the extent of using lecture method in the higher education. The study was carried out in Nepalese context with employing mixed method research design. To obtain the primary data this study employed a questionnaire involving items with close and open answers. 120 teachers, lecturers and professors participated in this study. The findings indicated that 75 percent of the respondents use the lecture method in their classroom teaching. The study reveals that there are advantages of using lecture method such as easy to practice, less time to prepare, high pass rate, high students’ satisfaction, little comments on instructors, appropriate to large classes and high level students. In addition, the study divulged the instructors’ reflections and measures to improve the lecture method. This research concludes that the practice of lecture method is still significantly applicable in colleges and universities in Nepalese contexts. So, there are no significant changes in the application of lecture method in the higher education classroom despite the emergence of new learning approaches and strategies.Keywords: instructors, learning approaches, learning strategies, lecture method
Procedia PDF Downloads 238
18838 3D Numerical Simulation of Undoweled and Uncracked Joints in Short Paneled Concrete Pavements
Authors: K. Sridhar Reddy, M. Amaranatha Reddy, Nilanjan Mitra
Abstract:
Short paneled concrete pavement (SPCP) with shorter panel size can be an alternative to the conventional jointed plain concrete pavements (JPCP) at the same cost as the asphalt pavements with all the advantages of concrete pavement with reduced thickness, less chance of mid-slab cracking and or dowel bar locking so common in JPCP. Cast-in-situ short concrete panels (short slabs) laid on a strong foundation consisting of a dry lean concrete base (DLC), and cement treated subbase (CTSB) will reduce the thickness of the concrete slab to the order of 180 mm to 220 mm, whereas JPCP was with 280 mm for the same traffic. During the construction of SPCP test sections on two Indian National Highways (NH), it was observed that the joints remain uncracked after a year of traffic. The undoweled and uncracked joints load transfer variability and joint behavior are of interest with anticipation on its long-term performance of the SPCP. To investigate the effects of undoweled and uncracked joints on short slabs, the present study was conducted. A multilayer linear elastic analysis using 3D finite element package for different panel sizes with different thicknesses resting on different types of solid elastic foundation with and without temperature gradient was developed. Surface deflections were obtained from 3D FE model and validated with measured field deflections from falling weight deflectometer (FWD) test. Stress analysis indicates that flexural stresses in short slabs are decreased with a decrease in panel size and increase in thickness. Detailed evaluation of stress analysis with the effects of curling behavior, the stiffness of the base layer and a variable degree of load transfer, is underway.Keywords: joint behavior, short slabs, uncracked joints, undoweled joints, 3D numerical simulation
Procedia PDF Downloads 182
18837 Geometrical Analysis of an Atheroma Plaque in Left Anterior Descending Coronary Artery
Authors: Sohrab Jafarpour, Hamed Farokhi, Mohammad Rahmati, Alireza Gholipour
Abstract:
In the current study, a nonlinear fluid-structure interaction (FSI) biomechanical model of atherosclerosis in the left anterior descending (LAD) coronary artery is developed to perform a detailed sensitivity analysis of the geometrical features of an atheroma plaque. In the development of the numerical model, first, a 3D geometry of the diseased artery is developed based on patient-specific dimensions obtained from the experimental studies. The geometry includes four influential geometric characteristics: stenosis ratio, plaque shoulder-length, fibrous cap thickness, and eccentricity intensity. Then, a suitable strain energy density function (SEDF) is proposed based on the detailed material stability analysis to accurately model the hyperelasticity of the arterial walls. The time-varying inlet velocity and outlet pressure profiles are adopted from experimental measurements to incorporate the pulsatile nature of the blood flow. In addition, a computationally efficient type of structural boundary condition is imposed on the arterial walls. Finally, a non-Newtonian viscosity model is implemented to model the shear-thinning behaviour of the blood flow. According to the results, the structural responses in terms of the maximum principal stress (MPS) are affected more compared to the fluid responses in terms of wall shear stress (WSS) as the geometrical characteristics are varying. The extent of these changes is critical in the vulnerability assessment of an atheroma plaque.Keywords: atherosclerosis, fluid-Structure interaction modeling, material stability analysis, and nonlinear biomechanics
Procedia PDF Downloads 88
18836 Determination of Weld Seam Thickness in Welded Connection Subjected to Local Buckling Effects
Authors: Tugrul Tulunay, Iyas Devran Celik
Abstract:
Among the materials used in the structural steel industry, box beam profiles are widely preferred. Because of their cross-sectional properties, connecting these profiles to each other and to profiles with different cross sections becomes viable only by means of additional measures. An important point in such combinations is the continuous transfer of internal forces from element to element. To ensure this continuity, a header plate is needed, and the connection of the plates to the elements works mainly through welds. This study aims to determine the ideal weld thickness in a box beam under bending and in the joints exposed to the local buckling that will form in the column. The box column to box beam connection designed in this context was made by means of corner and circular fillet welds. Numerical models with corner welds of different thicknesses and lengths, depending on the plate dimensions, were built with the ANSYS Workbench program and their behaviour was examined.
Keywords: welding thickness, box beam-column joints, design of steel structures, calculation and construction principles 2016, welded joints under local buckling
Procedia PDF Downloads 167
18835 A Method of the Semantic on Image Auto-Annotation
Authors: Lin Huo, Xianwei Liu, Jingxiong Zhou
Abstract:
Recently, due to the semantic gap between image visual features and human concepts, semantic image auto-annotation has become an important topic. In the approach considered here, low-level visual features are first extracted from the image and mapped, by a suitable hashing method, into a hash code; this code is then transformed into a binary string and stored. Since auto-annotation by search is a popular strategy, we use it to design and implement a method of semantic image auto-annotation. Finally, tests based on the Corel image set show that the method is effective.
Keywords: image auto-annotation, color correlograms, Hash code, image retrieval
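The exact hashing scheme is not specified beyond mapping low-level features to a binary code and searching; a loose stand-in — binarising a colour histogram into a code and annotating a query with the labels of the nearest stored code under Hamming distance — is sketched below. The feature choice, code length and toy data are assumptions, not the paper's method.

```python
import numpy as np

# Minimal stand-in for annotation-by-search: each image is reduced to a colour
# histogram, binarised into a hash code, and a query is annotated with the
# labels of the stored image closest in Hamming distance. Feature choice,
# code length and the synthetic "images" are illustrative assumptions.
def hash_code(img, bins=4):
    # Joint RGB histogram -> binary code: 1 where a bin exceeds the mean.
    hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins, bins, bins),
                             range=[(0, 256)] * 3)
    hist = hist.ravel() / hist.sum()
    return (hist > hist.mean()).astype(np.uint8)      # 64-bit code for bins=4

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(3)
def fake_image(mean_colour):
    return np.clip(rng.normal(mean_colour, 25, size=(32, 32, 3)), 0, 255)

database = [
    (hash_code(fake_image([200, 40, 40])), {"sunset", "red"}),
    (hash_code(fake_image([40, 200, 60])), {"grass", "green"}),
    (hash_code(fake_image([50, 60, 210])), {"sea", "blue"}),
]

query = hash_code(fake_image([45, 190, 70]))           # greenish query image
best_code, best_labels = min(database, key=lambda item: hamming(query, item[0]))
print("annotation by nearest hash code:", best_labels)
```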
Procedia PDF Downloads 497
18834 Effect of Different Thermomechanical Cycles on Microstructure of AISI 4140 Steel
Authors: L.L. Costa, A. M. G. Brito, S. Khan, L. Schaeffer
Abstract:
Microstructure resulting from the forging process is studied as a function of variables such as temperature, deformation, austenite grain size and cooling rate. The purpose of this work is to study the thermomechanical behavior of DIN 42CrMo4 (AISI 4140) steel maintained at the temperatures of 900°, 1000°, 1100° and 1200°C for the austenization times of 22, 66 and 200 minutes each and subsequently forged. These samples were quenched in water in order to study the austenite grain and to investigate the microstructure instead of quenching the annealed samples after forging they were cooled down naturally in the air. The morphologies and properties of the materials such as hardness; prepared by these two different routes have been compared. In addition to the forging experiments, the numerical simulation using the finite element model (FEM), microhardness profiles and metallography images have been presented. Forging force vs position curves has been compared with metallographic results for each annealing condition. The microstructural phenomena resulting from the hot conformation proved that longer austenization time and higher temperature decrease the forging force in the curves. The complete recrystallization phenomenon (static, dynamic and meta dynamic) was observed at the highest temperature and longest time i.e., the samples austenized for 200 minutes at 1200ºC. However, higher hardness of the quenched samples was obtained when the temperature was 900ºC for 66 minutes. The phases observed in naturally cooled samples were exclusively ferrite and perlite, but the continuous cooling diagram indicates the presence of austenite and bainite. The morphology of the phases of naturally cooled samples has shown that the phase arrangement and the previous austenitic grain size are the reasons to high hardness in obtained samples when temperature were 900ºC and 1100ºC austenization times of 22 and 66 minutes, respectively.Keywords: austenization time, thermomechanical effects, forging process, steel AISI 4140
Procedia PDF Downloads 145
18833 Temperature Contour Detection of Salt Ice Using Color Thermal Image Segmentation Method
Authors: Azam Fazelpour, Saeed Reza Dehghani, Vlastimil Masek, Yuri S. Muzychka
Abstract:
The study uses a novel image analysis based on thermal imaging to detect temperature contours created on salt ice surface during transient phenomena. Thermal cameras detect objects by using their emissivities and IR radiance. The ice surface temperature is not uniform during transient processes. The temperature starts to increase from the boundary of ice towards the center of that. Thermal cameras are able to report temperature changes on the ice surface at every individual moment. Various contours, which show different temperature areas, appear on the ice surface picture captured by a thermal camera. Identifying the exact boundary of these contours is valuable to facilitate ice surface temperature analysis. Image processing techniques are used to extract each contour area precisely. In this study, several pictures are recorded while the temperature is increasing throughout the ice surface. Some pictures are selected to be processed by a specific time interval. An image segmentation method is applied to images to determine the contour areas. Color thermal images are used to exploit the main information. Red, green and blue elements of color images are investigated to find the best contour boundaries. The algorithms of image enhancement and noise removal are applied to images to obtain a high contrast and clear image. A novel edge detection algorithm based on differences in the color of the pixels is established to determine contour boundaries. In this method, the edges of the contours are obtained according to properties of red, blue and green image elements. The color image elements are assessed considering their information. Useful elements proceed to process and useless elements are removed from the process to reduce the consuming time. Neighbor pixels with close intensities are assigned in one contour and differences in intensities determine boundaries. The results are then verified by conducting experimental tests. An experimental setup is performed using ice samples and a thermal camera. To observe the created ice contour by the thermal camera, the samples, which are initially at -20° C, are contacted with a warmer surface. Pictures are captured for 20 seconds. The method is applied to five images ,which are captured at the time intervals of 5 seconds. The study shows the green image element carries no useful information; therefore, the boundary detection method is applied on red and blue image elements. In this case study, the results indicate that proposed algorithm shows the boundaries more effective than other edges detection methods such as Sobel and Canny. Comparison between the contour detection in this method and temperature analysis, which states real boundaries, shows a good agreement. This color image edge detection method is applicable to other similar cases according to their image properties.Keywords: color image processing, edge detection, ice contour boundary, salt ice, thermal image
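The boundary rule is described only qualitatively (neighbouring pixels with close red/blue values belong to one contour; a large difference marks a boundary); a simplified sketch of that idea on a synthetic two-colour image is given below. The threshold and the synthetic image are assumptions.

```python
import numpy as np

# Simplified sketch of the colour-difference idea described in the abstract:
# neighbouring pixels whose red/blue values are close belong to the same
# contour region, while a large difference marks a contour boundary. The
# synthetic "thermal" image and the threshold value are assumed.
def colour_difference_edges(rgb, threshold=20.0):
    # Use only the red and blue channels; the green channel is reported in the
    # abstract to carry no useful information for these images.
    rb = rgb[:, :, [0, 2]].astype(float)
    dx = np.abs(np.diff(rb, axis=1)).max(axis=2)       # horizontal differences
    dy = np.abs(np.diff(rb, axis=0)).max(axis=2)       # vertical differences
    edges = np.zeros(rgb.shape[:2], dtype=bool)
    edges[:, 1:] |= dx > threshold
    edges[1:, :] |= dy > threshold
    return edges

# Synthetic image: a warm circular spot (reddish) on a cold background (bluish).
h = w = 128
yy, xx = np.mgrid[0:h, 0:w]
warm = (xx - w // 2) ** 2 + (yy - h // 2) ** 2 < 30 ** 2
img = np.zeros((h, w, 3), dtype=np.uint8)
img[..., 0] = np.where(warm, 220, 40)                  # red channel
img[..., 2] = np.where(warm, 40, 220)                  # blue channel

edges = colour_difference_edges(img)
print("edge pixels found:", int(edges.sum()))
```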
Procedia PDF Downloads 314
18832 Radiochemical Purity of 68Ga-BCA-Peptides: Separation of All 68Ga Species with a Single iTLC Strip
Authors: Anton A. Larenkov, Alesya Ya Maruk
Abstract:
In the present study, a highly effective single-strip iTLC method for the determination of the radiochemical purity (RCP) of 68Ga-BCA-peptides was developed, with no double development, change of eluents or other additional manipulation. The method uses iTLC-SG strips and the commonly used eluent TFA aq. (3-5% (v/v)). It allows each of the key radiochemical forms of 68Ga (colloidal, bound, ionic) to be determined separately, with peak separation of no less than 4 σ: Rf = 0.0-0.1 for the 68Ga colloid; Rf = 0.5-0.6 for 68Ga-BCA-peptides; Rf = 0.9-1.0 for ionic 68Ga. The method is simple and fast: for a developing length of 75 mm, only 4-6 min is required (versus 18-20 min for the pharmacopoeial method). The method has been tested on various compounds (including 68Ga-DOTA-TOC, 68Ga-DOTA-TATE, and 68Ga-NODAGA-RGD2). Cross-validation for every specific form of 68Ga showed good correlation between the developed method and the control (pharmacopoeial) methods. The method can become a convenient and much more informative replacement for pharmacopoeial methods, including HPLC.
Keywords: DOTA-TATE, 68Ga, quality control, radiochemical purity, radiopharmaceuticals, TLC
Procedia PDF Downloads 290
18831 Comparing Numerical Accuracy of Solutions of Ordinary Differential Equations (ODE) Using Taylor's Series Method, Euler's Method and Runge-Kutta (RK) Method
Authors: Palwinder Singh, Munish Sandhir, Tejinder Singh
Abstract:
The ordinary differential equations (ODE) represent a natural framework for mathematical modeling of many real-life situations in the field of engineering, control systems, physics, chemistry and astronomy etc. Such type of differential equations can be solved by analytical methods or by numerical methods. If the solution is calculated using analytical methods, it is done through calculus theories, and thus requires a longer time to solve. In this paper, we compare the numerical accuracy of the solutions given by the three main types of one-step initial value solvers: Taylor’s Series Method, Euler’s Method and Runge-Kutta Fourth Order Method (RK4). The comparison of accuracy is obtained through comparing the solutions of ordinary differential equation given by these three methods. Furthermore, to verify the accuracy; we compare these numerical solutions with the exact solutions.Keywords: Ordinary differential equations (ODE), Taylor’s Series Method, Euler’s Method, Runge-Kutta Fourth Order Method
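As a concrete instance of the comparison described, the sketch below integrates dy/dx = y, y(0) = 1 (exact solution eˣ) with Euler and classical fourth-order Runge-Kutta and reports the error at x = 1; the Taylor series variant is omitted here, and the test equation and step size are chosen for illustration rather than taken from the paper.

```python
import math

# Compare Euler and classical fourth-order Runge-Kutta on dy/dx = y, y(0) = 1,
# whose exact solution is e^x. The test equation and step size are chosen for
# illustration; they are not necessarily those used in the paper.
def f(x, y):
    return y

def euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def rk4(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

h, n = 0.1, 10                       # integrate from x = 0 to x = 1
exact = math.e
for name, solver in [("Euler", euler), ("RK4", rk4)]:
    y = solver(f, 0.0, 1.0, h, n)
    print(f"{name:5s}: y(1) = {y:.8f}, abs. error = {abs(y - exact):.2e}")
```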
Procedia PDF Downloads 358
18830 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training a network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the technology currently being developed is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Furthermore, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is taken to operate as a mapping from an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, representing the alphanumeric and the language it comes from. These inputs and outputs, along with internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xi are i.i.d. vectors drawn from a product distribution, over a period of time the AI generates a large set of measurements xi, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
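A compact sketch of the corrector pipeline as described — centring the measurements, reducing with a Kaiser-rule eigenvalue cut, whitening, and separating the error set Y from the correct set M with a linear functional — is given below on synthetic data. The dimensions, data and the single-hyperplane simplification are assumptions; the paper builds one hyperplane per positively correlated error cluster.

```python
import numpy as np

# Sketch of the corrector idea on synthetic data: centre the measurement
# vectors, keep principal components by a Kaiser-type rule, whiten, and fit a
# linear functional separating the error set Y from the correct set M.
# Dimensions, data and the single-hyperplane simplification are assumptions.
rng = np.random.default_rng(0)
n_dim = 50
M = rng.normal(0.0, 1.0, size=(2000, n_dim))            # "correct" measurements
Y = rng.normal(0.0, 1.0, size=(40, n_dim)) + 2.5        # "error" measurements, shifted

S = np.vstack([M, Y])
S_c = S - S.mean(axis=0)                                 # centring

# Kaiser-type rule: keep components whose eigenvalue exceeds the mean eigenvalue.
cov = np.cov(S_c, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
keep = eigval > eigval.mean()
W = eigvec[:, keep] / np.sqrt(eigval[keep])              # project + whiten

Mw = (M - S.mean(axis=0)) @ W
Yw = (Y - S.mean(axis=0)) @ W

# Fisher-style linear functional pointing from the correct cloud to the errors.
w = Yw.mean(axis=0) - Mw.mean(axis=0)
w /= np.linalg.norm(w)
theta = 0.5 * (Yw @ w).min() + 0.5 * (Mw @ w).max()      # threshold between the sets

flagged = ((S_c @ W) @ w) > theta                        # report suspected errors
print("flagged as errors:", int(flagged.sum()), "of", len(S), "samples")
```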
Procedia PDF Downloads 100
18829 The Effectiveness of Synthesizing A-Pillar Structures in Passenger Cars
Authors: Chris Phan, Yong Seok Park
Abstract:
The Toyota Camry is one of the best-selling cars in America. It is economical, reliable, and most importantly, safe. These attributes allowed the Camry to be the trustworthy choice when choosing dependable vehicle. However, a new finding brought question to the Camry’s safety. Since 1997, the Camry received a “good” rating on its moderate overlap front crash test through the Insurance Institute of Highway Safety. In 2012, the Insurance Institute of Highway Safety introduced a frontal small overlap crash test into the overall evaluation of vehicle occupant safety test. The 2012 Camry received a “poor” rating on this new test, while the 2015 Camry redeemed itself with a “good” rating once again. This study aims to find a possible solution that Toyota implemented to reduce the severity of a frontal small overlap crash in the Camry during a mid-cycle update. The purpose of this study is to analyze and evaluate the performance of various A-pillar shapes as energy absorbing structures in improving passenger safety in a frontal crash. First, A-pillar structures of the 2012 and 2015 Camry were modeled using CAD software, namely SolidWorks. Then, a crash test simulation using ANSYS software, was applied to the A-pillars to analyze the behavior of the structures in similar conditions. Finally, the results were compared to safety values of cabin intrusion to determine the crashworthy behaviors of both A-pillar structures by measuring total deformation. This study highlights that it is possible that Toyota improved the shape of the A-pillar in the 2015 Camry in order to receive a “good” rating from the IIHS safety evaluation once again. These findings can possibly be used to increase safety performance in future vehicles to decrease passenger injury or fatality.Keywords: A-pillar, Crashworthiness, Design Synthesis, Finite Element Analysis
Procedia PDF Downloads 119
18828 A Study of Effective Stereo Matching Method for Long-Wave Infrared Camera Module
Authors: Hyun-Koo Kim, Yonghun Kim, Yong-Hoon Kim, Ju Hee Lee, Myungho Song
Abstract:
In this paper, we describe an efficient stereo matching method and a pedestrian detection method using a stereo long-wave infrared (LWIR) camera. We compared three stereo matching algorithms: block matching, ELAS, and SGM. For pedestrian detection using the stereo LWIR camera, we used the SGM stereo matching method, a free-space detection method based on the u/v-disparity, and HOG-feature-based pedestrian detection. According to the test results, the SGM method performs better than the block matching and ELAS algorithms. The combination of SGM, free-space detection, and pedestrian detection using HOG features with SVM classification can detect pedestrians at a distance of 30 m with a distance error of about 30 cm.
Keywords: advanced driver assistance system, pedestrian detection, stereo matching method, stereo long-wave IR camera
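The processing chain described (SGM disparity, free space from disparity, then HOG + SVM pedestrian detection) can be approximated with OpenCV's built-in semi-global block matcher and default people detector, as sketched below. The file names, camera geometry and matcher settings are placeholders, and the default HOG detector is trained on visible-light images, so this is only a rough stand-in for the paper's LWIR pipeline.

```python
import cv2
import numpy as np

# Rough approximation of the chain in the abstract using OpenCV building
# blocks: semi-global matching for disparity, then HOG + linear SVM for
# pedestrians. File names, camera parameters and matcher settings are
# placeholders / assumptions, not values from the paper.
left = cv2.imread("left_lwir.png", cv2.IMREAD_GRAYSCALE)    # placeholder paths
right = cv2.imread("right_lwir.png", cv2.IMREAD_GRAYSCALE)
if left is None or right is None:
    raise SystemExit("provide a rectified LWIR image pair")

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5,
                             P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0   # pixels

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
boxes, _ = hog.detectMultiScale(left, winStride=(8, 8), padding=(8, 8), scale=1.05)

focal_px, baseline_m = 400.0, 0.3          # assumed camera geometry
for (x, y, w, h) in boxes:
    d = np.median(disparity[y:y + h, x:x + w])
    if d > 0:
        distance = focal_px * baseline_m / d      # depth from disparity
        print(f"pedestrian at ~{distance:.1f} m (box {x},{y},{w},{h})")
```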
Procedia PDF Downloads 415