Search results for: mathematical expectation
284 Integrative Biology Teaching and Learning Model Based on STEM Education
Authors: Narupot Putwattana
Abstract:
Changes in the global situation, such as environmental and economic crises, have brought a new perspective to science education called integrative biology. STEM has been increasingly mentioned in educational research as an approach that combines concepts from Science (S), Technology (T), Engineering (E) and Mathematics (M) in the teaching and learning process so as to strengthen 21st-century skills such as creativity and critical thinking. Recent studies have presented STEM as a pedagogy that brings the engineering design process into science classroom activities. So far, STEM pedagogical content addressing biology has been scarce. A qualitative literature review was conducted to gather articles from electronic databases (Google Scholar). STEM education, engineering design, and teaching and learning of biology were used as the main keywords to find research involving the application of STEM in the biology teaching and learning process. All articles were analyzed to obtain an appropriate teaching and learning model that unifies the core concepts of biology. The synthesized model comprises engineering design, inquiry-based learning, biological prototyping and biologically-inspired design (BID). STEM content and context integration were used as the theoretical framework to create the integrative biology instructional model for STEM education. Content from several disciplines, such as biology, engineering, and technology, was drawn on in inquiry-based learning to build biological prototypes. Direct and indirect integration were used to bring this knowledge into the biology-related STEM strategy. Meanwhile, engineering design and BID provided the occupational context of engineers and biologists. Technological and mathematical aspects remain to be examined in terms of co-teaching methods.
Lastly, other variables such as critical thinking and problem-solving skills should be given more consideration in future research.
Keywords: biomimicry, engineering approach, STEM education, teaching and learning model
Procedia PDF Downloads 257

283 Multivariate Analysis on Water Quality Attributes Using Master-Slave Neural Network Model
Authors: A. Clementking, C. Jothi Venkateswaran
Abstract:
Mathematical and computational functionalities such as descriptive mining, optimization, and prediction are employed to support natural resource planning. Optimization techniques are adopted for water quality prediction and for determining the influence of its attributes. Water properties are tainted when one water resource is merged with another. This work aimed to predict the influence of water resource distribution connectivity on water quality and sediment using an innovative proposed master-slave back-propagation neural network model. The experimental results were arrived at by collecting water quality attributes, computing a water quality index, designing and developing a neural network model to determine water quality and sediment, applying the master-slave back-propagation neural network model to determine variations in water quality and sediment attributes between the water resources, and making recommendations for connectivity. Homogeneous and parallel biochemical reactions influence water quality and sediment while water is distributed from one location to another. Therefore, an innovative master-slave neural network model [M(9:9:2)::S(9:9:2)] was designed and developed to predict the attribute variations. The training dataset is given as input to the master model, and its maximum weights are assigned as input to the slave model to predict the water quality. The developed master-slave model predicted physicochemical attribute weight variations for 85% to 90% of the water quality target values. Sediment level variations were also predicted, at 0.01% to 0.05% of each water quality percentage. The model produced significant variations in physicochemical attribute weights.
According to the predicted weight variations on the training data set, effective recommendations are made for connecting different resources.
Keywords: master-slave back-propagation neural network model (MSBPNNM), water quality analysis, multivariate analysis, environmental mining
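The abstract names computation of a water quality index as one step of the workflow. As a minimal sketch of that step, the following uses the standard weighted arithmetic index; the attribute names, permissible limits, and measured values below are placeholders, since the paper's nine actual attributes are not listed here.

```python
def weighted_arithmetic_wqi(measured, standard, ideal):
    """Weighted arithmetic water quality index over a set of attributes.

    measured/standard/ideal are dicts keyed by attribute name; the unit
    weight of each attribute is inversely proportional to its standard.
    """
    k = 1.0 / sum(1.0 / s for s in standard.values())   # proportionality constant
    num, den = 0.0, 0.0
    for attr, value in measured.items():
        w = k / standard[attr]                           # unit weight
        # quality rating relative to the ideal and permissible values
        q = 100.0 * (value - ideal[attr]) / (standard[attr] - ideal[attr])
        num += w * q
        den += w
    return num / den

# Illustrative values only (not from the study).
measured = {"pH": 7.8, "DO": 6.0, "BOD": 4.0}
standard = {"pH": 8.5, "DO": 5.0, "BOD": 5.0}
ideal = {"pH": 7.0, "DO": 14.6, "BOD": 0.0}
wqi = weighted_arithmetic_wqi(measured, standard, ideal)
```

A lower index indicates better water; classification bands (excellent, good, poor, etc.) vary by study, so none are asserted here.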
Procedia PDF Downloads 478

282 Modelling of Heat Generation in a 18650 Lithium-Ion Battery Cell under Varying Discharge Rates
Authors: Foo Shen Hwang, Thomas Confrey, Stephen Scully, Barry Flannery
Abstract:
Thermal characterization plays an important role in battery pack design. Lithium-ion batteries have to be maintained between 15-35 °C to operate optimally. Heat (Q) is generated internally within the batteries during both the charging and discharging phases, and it can be quantified using several standard methods. The most common method of calculating a battery's heat generation is to add the joule heating effect and the entropic change across the battery. These values can be derived by identifying the open-circuit voltage (OCV), nominal voltage (V), operating current (I), battery temperature (T) and the rate of change of the open-circuit voltage with respect to temperature (dOCV/dT). This paper focuses on experimental characterization and comparative modelling of the heat generation rate (Q) across several current discharge rates (0.5C, 1C, and 1.5C) of an 18650 cell. The analysis is conducted by fitting several non-linear mathematical functions, including polynomial, exponential, and power models. Parameter fitting is carried out over the respective function orders: polynomial (n = 3~7), exponential (n = 2) and power function. The fitted functions are then used as heat source functions in a 3-D computational fluid dynamics (CFD) solver under natural convection conditions. The generated temperature profiles are analyzed for errors against experimental discharge tests conducted at standard room temperature (25 °C). Initial results show low deviation between the experimental and CFD temperature plots. As such, the heat generation functions formulated could be utilized more easily for larger battery applications than other available methods.
Keywords: computational fluid dynamics, curve fitting, lithium-ion battery, voltage drop
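The heat generation description above corresponds to the usual relation Q = I(OCV − V) + I·T·(dOCV/dT), the sum of the irreversible (overpotential) and reversible (entropic) terms. A minimal sketch, with illustrative rather than measured values for an 18650 cell:

```python
def heat_generation(i_amps, v_terminal, ocv, temp_k, docv_dt):
    """Battery heat generation rate in watts: joule/polarization heating
    plus the entropic term, as described in the abstract."""
    joule = i_amps * (ocv - v_terminal)    # irreversible (overpotential) heat
    entropic = i_amps * temp_k * docv_dt   # reversible (entropy-change) heat
    return joule + entropic

# Illustrative 1C discharge point for a 2.5 Ah 18650 cell (not measured data).
q = heat_generation(i_amps=2.5, v_terminal=3.6, ocv=3.7, temp_k=298.15, docv_dt=-1e-4)
```

Note that dOCV/dT can be negative over parts of the state-of-charge range, so the entropic term may either add to or subtract from the joule heating.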
Procedia PDF Downloads 96

281 Mathematics Anxiety among Male and Female Students
Authors: Wern Lin Yeo, Choo Kim Tan, Sook Ling Lew
Abstract:
Mathematics anxiety refers to the feeling of anxiousness when one has difficulty solving a mathematical problem. It is the most common type of anxiety occurring among students. However, anxiety levels differ between males and females. A few past studies were conducted to determine the relationship between anxiety and gender, but they did not produce conclusive results. Hence, the purpose of this study is to determine the relationship between anxiety level and gender among undergraduates at a private university in Malaysia. A convenience sampling method was used, in which students were selected based on the grouping assigned by the faculty. A total of 214 undergraduates who had registered for probability courses participated in this study. The Mathematics Anxiety Rating Scale (MARS) was the instrument used to determine students' anxiety level towards probability. The reliability and validity of the instrument were established before the main study was conducted. In the main study, students were briefed about the study. Participation was voluntary, and students were given a consent form to indicate whether they agreed to participate. Students were given two weeks to complete the online questionnaire. The data collected were analyzed using the Statistical Package for the Social Sciences (SPSS) to determine the level of anxiety. Three anxiety levels were defined: low, average and high. Students' anxiety levels were determined by comparing their scores with the mean and standard deviation: scores more than one standard deviation below the mean indicated low anxiety; scores within one standard deviation of the mean indicated average anxiety; and scores more than one standard deviation above the mean indicated high anxiety.
Results showed that both genders had an average anxiety level. Males showed higher frequencies at all three anxiety levels (low, average and high) than females. Accordingly, the mean value obtained for males (M = 3.62) was higher than that for females (M = 3.42). For the difference in anxiety level between genders to be significant, the p-value should be less than .05. The p-value obtained in this study was .117, which is greater than .05. Thus, there was no significant difference in anxiety level between genders; in other words, anxiety level was not related to gender.
Keywords: anxiety level, gender, mathematics anxiety, probability and statistics
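The mean-and-standard-deviation classification described in the abstract can be sketched directly; the mean and SD values below are illustrative, not the study's:

```python
def anxiety_level(score, mean, sd):
    """Classify a MARS score as in the abstract: more than one SD below the
    mean -> low, within one SD of the mean -> average, more than one SD
    above the mean -> high."""
    if score < mean - sd:
        return "low"
    if score > mean + sd:
        return "high"
    return "average"

# Illustrative sample: scores of 2.7, 3.6 and 4.3 against mean 3.5, SD 0.6.
levels = [anxiety_level(s, mean=3.5, sd=0.6) for s in (2.7, 3.6, 4.3)]
```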
Procedia PDF Downloads 291

280 A Method for Multimedia User Interface Design for Mobile Learning
Authors: Shimaa Nagro, Russell Campion
Abstract:
Mobile devices are becoming ever more widely available, with growing functionality, and are increasingly used as an enabling technology to give students access to educational material anytime and anywhere. However, the design of educational material user interfaces for mobile devices is beset by many unresolved research issues, such as those arising from emphasising information concepts and then mapping this information to appropriate media (modelling information, then mapping media effectively). This report describes a multimedia user interface design method for mobile learning. The method covers specification of user requirements and information architecture, media selection to represent the information content, design for directing attention to important information, and interaction design to enhance user engagement, based on Human-Computer Interaction (HCI) design strategies. The method will be evaluated through three case studies to demonstrate that it is applicable to different areas/applications: an application to teach major computer networking concepts, an application to deliver a history-based topic (after these two case studies have been completed, the method will be revised to remove deficiencies and then used to develop the third case study), and an application to teach mathematical principles. At that point, the method will be revised into its final format. A usability evaluation will be carried out to measure the usefulness and effectiveness of the method. The investigation will combine qualitative and quantitative methods, including interviews and questionnaires for data collection, and the three case studies for validating the method. The researcher has successfully produced the method at this point, and it is now undergoing validation and testing.
From this point forward in the report, the researcher will refer to the method using the abbreviation MDMLM, which stands for Multimedia Design Mobile Learning Method.
Keywords: human-computer interaction, interface design, mobile learning, education
Procedia PDF Downloads 247

279 Emotions Triggered by Children’s Literature Images
Authors: Ana Maria Reis d'Azevedo Breda, Catarina Maria Neto da Cruz
Abstract:
The role of images/illustrations in communicating meaning and triggering emotions assumes an increasingly relevant place in contemporary texts, regardless of the age group for which they are intended or the nature of the texts that host them. It is no coincidence that children's books are full of illustrations and that the image/text ratio decreases as the target age group grows. The vast majority of children's books can be considered multimodal texts containing text and images/illustrations that interact with each other to provide the young reader with a broader and more creative understanding of the book's narrative. This interaction is very diverse, ranging from images/illustrations that are not essential for understanding the storytelling to those that contribute significantly to the meaning of the story. Usually, these books are also read by adults, namely by parents, educators, and teachers, who act as mediators between the book and the children, explaining aspects that are, or seem to be, too complex for the child's context. It should be noted that there are books labeled as children's books that are clearly intended for both children and adults. In this work, following a qualitative and interpretative methodology based on written productions, participant observation, and field notes, we describe the perceptions of future teachers of the 1st cycle of basic education, attending a master's degree at a Portuguese university, about the role of the image in literary and non-literary texts, namely in mathematical texts, and how these can constitute precious resources for emotional regulation and for the design of creative didactic situations.
The analysis of the collected data provided evidence of the evolution of the participants' perception of the crucial role of images in children's literature, not only as an emotional regulator for young readers but also as a creative source for the design of meaningful didactic situations crossing scientific areas other than the mother tongue, namely mathematics.
Keywords: children’s literature, emotions, multimodal texts, soft skills
Procedia PDF Downloads 94

278 Engineering Thermal-Hydraulic Simulator Based on Complex Simulation Suite “Virtual Unit of Nuclear Power Plant”
Authors: Evgeny Obraztsov, Ilya Kremnev, Vitaly Sokolov, Maksim Gavrilov, Evgeny Tretyakov, Vladimir Kukhtevich, Vladimir Bezlepkin
Abstract:
Over the last decade, a specific set of connected software tools and calculation codes has been gradually developed. It allows simulating I&C systems and thermal-hydraulic, neutron-physical and electrical processes in the elements and systems of an NPP unit (initially with WWER (pressurized water reactor)). In 2012 it was named the complex simulation suite “Virtual Unit of NPP” (CSS “VEB” for short). Proper application of this complex tool results in a coupled mathematical computational model, which for a specific NPP design is called the Virtual Power Unit (VPU for short). A VPU can be used for comprehensive modelling of power unit operation, checking operator functions on a virtual main control room, and modelling complicated scenarios for normal modes and accidents. In addition, CSS “VEB” contains a combination of thermal-hydraulic codes: the best-estimate (two-fluid) calculation codes KORSAR and CORTES and a homogeneous calculation code TPP. Thus, to analyze a specific technological system, one can build thermal-hydraulic simulation models at different levels of detail, up to a nodalization scheme with real geometry. The result is in some respects similar to the notion of an “engineering/testing simulator” described by the European Utility Requirements (EUR) for LWR nuclear power plants. The paper is dedicated to a description of the tools mentioned above and to an example of the application of the engineering thermal-hydraulic simulator to an analysis of the boric acid concentration in the primary coolant (changed by the make-up and boron control system).
Keywords: best-estimate code, complex simulation suite, engineering simulator, power plant, thermal hydraulic, VEB, virtual power unit
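As a zero-dimensional illustration of the kind of quantity such a simulator tracks, the boric acid concentration in a single well-mixed coolant volume with make-up flow follows V·dC/dt = Q·(C_in − C). This is only a back-of-the-envelope sketch with made-up numbers; the KORSAR/CORTES codes in the suite solve far more detailed thermal-hydraulic models.

```python
import math

def boron_concentration(c0, c_in, flow, volume, t):
    """Well-mixed single-volume dilution model for primary-coolant boric
    acid: V*dC/dt = flow*(c_in - C), solved analytically.

    c0, c_in in ppm; flow in m^3/h; volume in m^3; t in hours.
    """
    return c_in + (c0 - c_in) * math.exp(-flow * t / volume)

# Dilution from 1000 ppm with unborated make-up water over one hour
# (all numbers illustrative, not plant data).
c_after_1h = boron_concentration(c0=1000.0, c_in=0.0, flow=10.0, volume=300.0, t=1.0)
```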
Procedia PDF Downloads 381

277 Theoretical Comparisons and Empirical Illustration of Malmquist, Hicks–Moorsteen, and Luenberger Productivity Indices
Authors: Fatemeh Abbasi, Sahand Daneshvar
Abstract:
Productivity is one of the essential goals of companies seeking to improve performance and, as a strategy-oriented measure, it determines the basis of a company's economic growth. The history of productivity goes back centuries, but in the early twentieth century most researchers defined productivity as the relationship between a product and the factors used in its production. Productivity as the optimal use of available resources, meaning "more output using less input", can increase companies' capacity for economic growth and prosperity. Having a quality of life based on economic progress also depends on productivity growth in a society; therefore, productivity is a national priority for any developed country. There are several methods for measuring productivity growth, and they can be divided into parametric and non-parametric methods. Parametric methods rely on the existence of a functional form in their hypotheses, while non-parametric methods do not require a functional form, relying instead on empirical evidence. One of the most popular non-parametric methods is Data Envelopment Analysis (DEA), which measures changes in productivity over time. DEA evaluates the productivity of decision-making units (DMUs) based on mathematical models. This method uses multiple inputs and outputs to compare the productivity of similar DMUs such as banks, government agencies, companies, airports, etc. Non-parametric methods are themselves divided into frontier and non-frontier approaches. The Malmquist productivity index (MPI) proposed by Caves, Christensen, and Diewert (1982), the Hicks–Moorsteen productivity index (HMPI) proposed by Bjurek (1996), and the Luenberger productivity indicator (LPI) proposed by Chambers (2002) are powerful tools for measuring productivity changes over time.
This study compares the Malmquist, Hicks–Moorsteen, and Luenberger indices theoretically and empirically based on DEA models and reviews their strengths and weaknesses.
Keywords: data envelopment analysis, Hicks–Moorsteen productivity index, Luenberger productivity indicator, Malmquist productivity index
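Of the three, the Malmquist index has the simplest closed form once the DEA distance functions are known: the geometric mean of the period-t and period-(t+1) index components. A sketch with illustrative, not DEA-solved, efficiency scores:

```python
import math

def malmquist_index(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Output-oriented Malmquist productivity index (Caves et al., 1982):
    the geometric mean of the period-t and period-(t+1) components.

    d_a_b is the distance function of period-a technology evaluated at
    period-b data; in practice each value is obtained by solving a DEA
    linear program for the DMU in question.
    """
    return math.sqrt((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t))

# Illustrative scores for one DMU (placeholders, not solved DEA values).
mpi = malmquist_index(d_t_t=0.80, d_t_t1=0.92, d_t1_t=0.70, d_t1_t1=0.85)
```

A value above 1 indicates productivity growth between the two periods; below 1, decline.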
Procedia PDF Downloads 194

276 Radioactivity Assessment of Sediments in Negombo Lagoon Sri Lanka
Authors: H. M. N. L. Handagiripathira
Abstract:
The distributions of naturally occurring and anthropogenic radioactive materials were determined in surface sediments taken at 27 different locations along the bank of Negombo Lagoon in Sri Lanka. Hydrographic parameters of the lagoon water and grain size analyses of the sediment samples were also carried out for this study. The conductivity of the adjacent water varied from 13.6 mS/cm near the southern end to 55.4 mS/cm near the northern end of the lagoon, while salinity levels varied correspondingly from 7.2 psu to 32.1 psu. The average pH of the water was 7.6, and the average water temperature was 28.7 °C. The grain size analysis gave the mass fractions of the samples as sand (60.9%), fine sand (30.6%) and fine silt+clay (1.3%) across the sampling locations. The surface sediment samples, of 1 kg wet weight each from the upper 5-10 cm layer, were oven dried at 105 °C for 24 hours to a constant weight, homogenized and sieved through a 2 mm sieve (IAEA Technical Series No. 295). The radioactivity concentrations were determined using the gamma spectrometry technique. An ultra-low-background broad-energy high-purity Ge detector, BEGe (Model BE5030, Canberra), was used for the radioactivity measurements, together with Canberra Industries' Laboratory Sourceless Calibration Software (LabSOCS) mathematical efficiency calibration approach and the Geometry Composer software. The mean activity concentrations were found to be 24 ± 4, 67 ± 9, 181 ± 10, 59 ± 8, 3.5 ± 0.4 and 0.47 ± 0.08 Bq/kg for 238U, 232Th, 40K, 210Pb, 235U and 137Cs, respectively. The mean absorbed dose rate in air, radium equivalent activity, external hazard index, annual gonadal dose equivalent and annual effective dose equivalent were 60.8 nGy/h, 137.3 Bq/kg, 0.4, 425.3 mSv/year and 74.6 mSv/year, respectively.
The results of this study provide baseline information on the natural and artificial radioactive isotopes and on the environmental pollution associated with radiological risk.
Keywords: gamma spectrometry, lagoon, radioactivity, sediments
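The reported hazard indices follow from the activity concentrations via standard conversion formulas. The sketch below uses the usual UNSCEAR-style coefficients; the study's exact factors may differ slightly, which would explain small gaps from the reported 60.8 nGy/h and 137.3 Bq/kg.

```python
def radiological_indices(c_u, c_th, c_k):
    """Common radiological indices from activity concentrations (Bq/kg)
    of 238U (taken as 226Ra), 232Th and 40K, using the usual UNSCEAR-style
    conversion coefficients (assumed here, not quoted from the paper)."""
    dose_rate = 0.462 * c_u + 0.604 * c_th + 0.0417 * c_k  # absorbed dose in air, nGy/h
    ra_eq = c_u + 1.43 * c_th + 0.077 * c_k                # radium equivalent, Bq/kg
    h_ex = c_u / 370.0 + c_th / 259.0 + c_k / 4810.0       # external hazard index
    return dose_rate, ra_eq, h_ex

# Mean activities reported in the abstract (Bq/kg).
dose_rate, ra_eq, h_ex = radiological_indices(c_u=24.0, c_th=67.0, c_k=181.0)
```

With these coefficients the results land close to the abstract's values (about 59 nGy/h, 134 Bq/kg and 0.36), consistent with the reported 60.8 nGy/h, 137.3 Bq/kg and 0.4.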
Procedia PDF Downloads 139

275 Design and Development of an Optimal Fault Tolerant 3 Degree of Freedom Robotic Manipulator
Authors: Ramish, Farhan Khalique Awan
Abstract:
Kinematic redundancy gives manipulators extended dexterity and manipulability. Redundant serial robotic manipulators are very popular in industry due to their ability to avoid singularities during normal operation and their fault tolerance in case of failure of one or more joints. Such fault tolerant manipulators are extraordinarily beneficial in applications where human intervention for repair and overhaul is either impossible or difficult, as in the case of robotic arms for space programs, nuclear applications and so on. The design of such a fault tolerant serial 3 DoF manipulator is presented in this paper. This work extends the authors' previous work of designing a simple 3R serial manipulator; it realizes the previous design while optimizing the link lengths to incorporate fault tolerance. Various measures have been proposed by researchers to quantify the fault tolerance of such redundant manipulators. Fault tolerance in this work is described in terms of the worst-case measure of relative manipulability, which is, in fact, a local optimization measure that works properly for certain configurations of the manipulator. An optimal fault tolerant Jacobian matrix was determined first, based on prescribed null space properties, after which the link parameters were derived to meet the given Jacobian matrix. A solid model of the manipulator was then developed to realize the mathematically rigorous design. Further work was carried out on determining the dynamic properties of the fault tolerant design, and simulations of the movement along various trajectories were performed to evaluate the joint torques. The mathematical model of the system was derived via the Euler-Lagrange approach, after which it was tested using the RoboAnalyzer© software; the results were in close agreement.
From the CAD model and dynamic simulation data, the manipulator was fabricated in the workshop and the Advanced Machining Lab of NED University of Engineering and Technology.
Keywords: fault tolerant, Gram matrix, Jacobian, kinematics, Lagrange-Euler
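The worst-case relative manipulability measure named above can be sketched numerically: compute the manipulability w = sqrt(det(J Jᵀ)), then for each failed (locked) joint remove that Jacobian column and take w_i/w; the minimum over all joints is the quantity being optimized. The Jacobian below is a hypothetical planar configuration, not the paper's optimized design.

```python
import numpy as np

def worst_case_relative_manipulability(jac):
    """Worst-case relative manipulability of a redundant manipulator.

    w = sqrt(det(J J^T)); for each joint i, w_i is the manipulability of
    the Jacobian with column i removed, and the measure is min_i (w_i / w).
    A value near 1 means the arm loses little dexterity to any single
    joint failure; near 0 means some failure is nearly crippling.
    """
    w = np.sqrt(np.linalg.det(jac @ jac.T))
    ratios = []
    for i in range(jac.shape[1]):
        j_i = np.delete(jac, i, axis=1)  # Jacobian with joint i failed
        ratios.append(np.sqrt(max(np.linalg.det(j_i @ j_i.T), 0.0)) / w)
    return min(ratios)

# 2x3 Jacobian of a planar redundant arm (hypothetical values).
jac = np.array([[-1.0, -0.8, -0.3],
                [ 1.5,  0.9,  0.4]])
r_min = worst_case_relative_manipulability(jac)
```

Since removing a column can only shrink det(J Jᵀ), the measure always lies in [0, 1].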
Procedia PDF Downloads 222

274 Relative Importance of Contact Constructs to Acute Respiratory Illness in General Population in Hong Kong
Authors: Kin On Kwok, Vivian Wei, Benjamin Cowling, Steven Riley, Jonathan Read
Abstract:
Background: The role of social contact behavior, measured through different contact constructs, in the transmission of respiratory pathogens causing acute respiratory illness (ARI) remains unclear. We therefore aim to depict the individual pattern of ARI in the community and investigate the association between different contact dimensions and ARI in Hong Kong. Methods: Between June 2013 and September 2013, 620 subjects participated in the last two waves of recruitment of a population-based longitudinal telephone social contact survey. Some of the subjects in this study are from the same household. Subjects were also provided with symptom diaries to self-report any acute respiratory illness related symptoms between the two days of phone recruitment. Data from 491 individuals who were not infected on the day of phone recruitment and who returned the symptom diaries after the last phone recruitment were used for analysis. Results: After adjusting for the different follow-up periods among individuals, the overall incidence rate of ARI was 1.77 per 100 person-weeks. Over 75% of ARI episodes involved runny nose, cough or sore throat, followed by headache (55%), myalgia (35%) and fever (18%). Using a generalized estimating equation framework accounting for the cluster effect of subjects living in the same household, we showed that the daily number of locations visited with contacts and the number of contacts together explained the ARI incidence rate better than any single contact construct. Conclusion: Our result suggests that it is the intertwining of contact quantity (number of contacts) and contact intensity (ratio of subjects to contacts) that governs the risk of infection by a collective set of respiratory pathogens.
Our results provide empirical evidence that multiple contact constructs should be incorporated into mathematical transmission models to capture more realistic dynamics of respiratory disease.
Keywords: acute respiratory illness, longitudinal study, social contact, symptom diaries
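The follow-up-adjusted incidence rate quoted above (episodes per 100 person-weeks) can be sketched as:

```python
def incidence_per_100_person_weeks(episodes, follow_up_weeks):
    """Crude ARI incidence rate adjusting for unequal follow-up periods:
    total episodes divided by total person-weeks of observation, per 100."""
    return 100.0 * sum(episodes) / sum(follow_up_weeks)

# Hypothetical symptom-diary data for five subjects (not the study's data):
# number of ARI episodes and weeks of follow-up per subject.
episodes = [1, 0, 2, 0, 1]
weeks = [10, 8, 12, 9, 11]
rate = incidence_per_100_person_weeks(episodes, weeks)
```

With these made-up diaries, 4 episodes over 50 person-weeks gives a rate of 8 per 100 person-weeks; the study's observed value was 1.77.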
Procedia PDF Downloads 261

273 Methodologies for Stability Assessment of Existing and Newly Designed Reinforced Concrete Bridges
Authors: Marija Vitanovа, Igor Gjorgjiev, Viktor Hristovski, Vlado Micov
Abstract:
Evaluation of stability is very important in the process of defining optimal structural measures for the maintenance and strengthening of bridge structures. To define optimal measures for their repair and strengthening, it is necessary to evaluate their static and seismic stability. Presented in this paper are methodologies for evaluating the seismic stability of existing reinforced concrete bridges designed without consideration of seismic effects and for checking the structural justification of newly designed bridge structures. All bridges are located in the territory of the Republic of North Macedonia. A total of 26 existing bridges of different structural systems have been analyzed. Visual inspection has been carried out for all bridges, along with the definition of three main damage categories according to which the structures have been categorized with respect to the need for repair and strengthening. Investigations involving testing of the quality of the built-in materials have been carried out, and dynamic tests establishing the dynamic characteristics of the structures have been conducted using the non-destructive method of ambient vibration measurement. The conclusions drawn from the performed measurements and tests have been used to develop accurate mathematical models, which have been analyzed for static and dynamic loads. Based on the geometrical characteristics of the cross-sections and the physical characteristics of the built-in materials, interaction diagrams have been constructed. These diagrams, along with the section forces obtained under seismic effects, have been used to obtain the bearing capacity of the cross-sections. The results obtained from the conducted analyses point to the need for repair of certain structural parts of the bridge structures.
They indicate that the stability of the superstructure elements is not critical under seismic effects, unlike the elements of the substructure, whose strengthening is necessary.
Keywords: existing bridges, newly designed bridges, reinforced concrete bridges, stability assessment
Procedia PDF Downloads 101

272 3-D Modeling of Particle Size Reduction from Micro to Nano Scale Using Finite Difference Method
Authors: Himanshu Singh, Rishi Kant, Shantanu Bhattacharya
Abstract:
This paper adopts a top-down approach to mathematical modeling to predict particle size reduction from the micro to the nano scale through persistent etching. The process is simulated using a finite difference approach. Previously, various researchers have simulated the etching process for 1-D and 2-D substrates. The process consists of two parts: 1) convection-diffusion in the etchant domain; 2) chemical reaction at the surface of the particle. Since the process requires analysis along a moving boundary, the partial differential equations involved cannot be solved using conventional methods. In 1-D, this problem is very similar to Stefan's problem of a moving ice-water boundary. A fixed grid method using the finite volume method is very popular for modelling etching on one- and two-dimensional substrates. Other popular approaches include the moving grid method and the level set method. In this work, the finite difference method was used to discretize the spherical diffusion equation. Due to the symmetrical distribution of the etchant, the angular terms in the equation can be neglected. The concentration is assumed to be constant at the outer boundary. At the particle boundary, the concentration of the etchant is assumed to be zero, since the rate of reaction is much faster than the rate of diffusion. The rate of reaction is proportional to the velocity of the moving boundary of the particle. Modelling of the above process was carried out using Matlab. The initial particle size was taken to be 50 microns. The density, molecular weight and diffusion coefficient of the substrate were taken as 2.1 g/cm3, 60 g/mol and 10-5 cm2/s, respectively. The etch rate was found to decline initially before gradually becoming constant at 0.02 µm/s (1.2 µm/min). The concentration profile was plotted over space at different time intervals. Initially, a sudden drop is observed at the particle boundary due to the high etch rate.
This change becomes more gradual with time due to the decline in the etch rate.
Keywords: particle size reduction, micromixer, FDM modelling, wet etching
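A compact sketch of the scheme described above: explicit finite differences on the spherically symmetric diffusion equation, zero concentration at the particle surface, fixed concentration at the outer boundary, and a surface that recedes in proportion to the local etchant flux. The grid sizes and the rate constant k_rate are illustrative assumptions; the authors' Matlab model of the full Stefan-type moving-boundary problem is more complete.

```python
import numpy as np

D = 1e-5                  # diffusion coefficient, cm^2/s (as in the abstract)
c_bulk = 1.0              # normalised etchant concentration at the outer boundary
k_rate = 1e-7             # hypothetical surface-recession constant
r_out = 50e-4             # outer boundary of the etchant domain, cm
n = 100
dr = r_out / n
r = np.linspace(dr, r_out, n)
c = np.full(n, c_bulk)
radius = 25e-4            # initial particle radius, cm
dt = 0.2 * dr ** 2 / D    # within the explicit stability limit
for _ in range(2000):
    i_s = int(radius / dr)          # grid index of the particle surface
    c[:i_s + 1] = 0.0               # etchant fully consumed at the surface
    lap = np.zeros(n)
    # spherically symmetric Laplacian: c'' + (2/r) c' (angular terms dropped)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dr ** 2 \
        + (2.0 / r[1:-1]) * (c[2:] - c[:-2]) / (2 * dr)
    c[1:-1] += D * dt * lap[1:-1]
    c[-1] = c_bulk                  # constant concentration far from the particle
    grad_s = (c[i_s + 1] - c[i_s]) / dr          # etchant gradient at the surface
    radius = max(radius - k_rate * grad_s * dt, 0.0)  # boundary velocity tracks flux
```

As in the abstract, the surface flux (and hence the etch rate) is largest at early times and relaxes toward a quasi-steady value as the concentration profile develops.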
Procedia PDF Downloads 431

271 Hydrodynamic Analysis of Fish Fin Kinematics of Oreochromis Niloticus Using Machine Learning and Image Processing
Authors: Paramvir Singh
Abstract:
The locomotion of aquatic organisms has long fascinated biologists and engineers alike, with fish fins serving as a prime example of nature's remarkable adaptations for efficient underwater propulsion. This paper presents a comprehensive study focused on the hydrodynamic analysis of fish fin kinematics, employing an innovative approach that combines machine learning and image processing techniques. Through high-speed videography and advanced computational tools, we gain insights into the complex and dynamic motion of the fins of a tilapia (Oreochromis niloticus). The study began by experimentally capturing videos of the various motions of a tilapia in a custom-made setup. Using deep learning and image processing on the videos, the motion of the caudal and pectoral fins was extracted. This motion comprised the fin configuration (i.e., the angle of deviation from the mean position) as a function of time. Numerical investigations of the flapping fins were then performed using a Computational Fluid Dynamics (CFD) solver. 3D models of the fins were created, mimicking their real-life geometry. The thrust characteristics of the fins separately (i.e., caudal and pectoral) and of the fins acting together were studied. The relationship and the phase between the caudal and pectoral fin motions were also discussed. The key objectives include mathematical modeling of the motion of a flapping fin at different naturally occurring frequencies and amplitudes. The interaction between the two fins (caudal and pectoral) was also an area of keen interest. This work aims to improve on past research on similar topics.
These results can also help in the better and more efficient design of propulsion systems for biomimetic underwater vehicles, which are used to study aquatic ecosystems, explore uncharted or challenging underwater regions, perform ocean bed modeling, etc.
Keywords: biomimetics, fish fin kinematics, image processing, fish tracking, underwater vehicles
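The extracted fin configuration (angle of deviation from the mean position versus time) is often approximated, as a first pass, by a sinusoid with a phase lag between the caudal and pectoral fins. The amplitudes, frequency and phase below are illustrative, not values extracted from the tilapia footage:

```python
import numpy as np

def fin_angle(t, amplitude_deg, freq_hz, phase_rad=0.0):
    """First-order model of fin deviation from the mean position:
    theta(t) = A * sin(2*pi*f*t + phi)."""
    return amplitude_deg * np.sin(2 * np.pi * freq_hz * t + phase_rad)

# One second of motion at 3 Hz; the pi/2 caudal-pectoral phase lag and the
# amplitudes are placeholder assumptions.
t = np.linspace(0.0, 1.0, 500)
caudal = fin_angle(t, amplitude_deg=20.0, freq_hz=3.0)
pectoral = fin_angle(t, amplitude_deg=10.0, freq_hz=3.0, phase_rad=np.pi / 2)
```

Time series of this form can be prescribed as boundary-motion inputs to a CFD solver, with the phase parameter varied to study caudal-pectoral interaction.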
Procedia PDF Downloads 91

270 What 4th-Year Primary-School Students are Thinking: A Paper Airplane Problem
Authors: Neslihan Şahin Çelik, Ali Eraslan
Abstract:
In recent years, mathematics educators have frequently stressed the necessity of instructing students in models and modeling approaches that encompass cognitive and metacognitive thought processes, starting from the first years of school and continuing through the years of higher education. The purpose of this study is to examine the thought processes of 4th-grade primary school students in their modeling activities and to explore the difficulties, if any, encountered in these processes. The study, of qualitative design, was conducted in the 2015-2016 academic year at a public state school located in a central city in the Black Sea Region of Turkey. A preliminary study was first implemented with the designated 4th-grade students, after which the criterion sampling method was used to select the three students recruited into the focus group. The focus group thus formed was asked to work on the model-eliciting activity of the Paper Airplane Problem, and the entire process was recorded on video. The Paper Airplane Problem required the students to determine the winner with respect to: (a) the plane that stays in the air for the longest time; (b) the plane that travels the greatest distance in a straight-line path; and (c) the overall winner of the contest. A written transcript was made of the video recording, after which the recording and the students' worksheets were analyzed using the Blum and Ferri modeling cycle. The results of the study revealed that the students tested the hypotheses related to daily life that they had set up, generated ideas of their own, verified their models by making connections with real life, and tried to make their models generalizable.
On the other hand, the students had some difficulties in interpreting the table of data and in operating on the data during the modeling processes.
Keywords: primary school students, model eliciting activity, mathematical modeling, modeling process, paper airplane problem
Procedia PDF Downloads 360
269 Numerical Study on the Effect of Liquid Viscosity on Gas Wall and Interfacial Shear Stress in a Horizontal Two-Phase Pipe Flow
Authors: Jack Buckhill Khallahle
Abstract:
In this study, calculation methods for interfacial and gas wall shear stress in two-phase flow over a stationary liquid surface, for liquids of dissimilar viscosities, within a horizontal pipe are explored. The research focuses on understanding the behavior of the gas and liquid phases as they interact in confined pipe geometries, with water and kerosene serving as the stationary surfaces. To accurately model flow variables such as pressure drop, liquid holdup, and shear stresses in such flow configurations, a 3D pipe model is developed for Computational Fluid Dynamics (CFD) simulation. This model simulates fully developed gas flow over a stationary liquid surface held in a 2.2-liter reservoir, in a pipe of 6.25 m length and 0.05 m diameter. The pipe geometry is configured according to the experimental setup used by Newton et al. [23]. The simulations employ the Volume of Fluid (VOF) model to track the gas-liquid interface in the two-phase domain. Additionally, the k-ω Shear Stress Transport (SST) turbulence model is used to capture turbulence effects in the flow field. The governing equations are solved using the Pressure-Implicit with Splitting of Operators (PISO) algorithm. The model is validated by calculating liquid heights, gas wall, and interfacial shear stresses and comparing them against experimental data for both water and kerosene. Notably, the interfacial friction factor correlation proposed on the basis of the employed pipe model aligns excellently with experimental data when the conventional two-phase flow calculation method is used. However, the interfacial and gas wall shear stresses calculated from mathematical formulations involving the hydrostatic force correlate poorly with the experimental data.
Keywords: two-phase flow, horizontal pipe, VOF model, k-ω SST model, stationary liquid surface, gas wall and interfacial shear stresses, hydrostatic force
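The conventional two-phase shear stress calculation mentioned above can be sketched briefly. The snippet below is an illustrative estimate, not the paper's model: it assumes a Blasius-type friction factor for the gas wall and a constant interfacial friction factor, with hypothetical air-over-liquid values.

```python
import math

def blasius_friction_factor(re):
    """Blasius-type Fanning friction factor for turbulent flow (assumed form)."""
    return 0.046 * re ** -0.2

def gas_wall_shear(rho_g, u_g, d_h, mu_g):
    """Gas-wall shear stress: tau_wg = f_g * rho_g * u_g^2 / 2."""
    re_g = rho_g * u_g * d_h / mu_g          # gas Reynolds number
    f_g = blasius_friction_factor(re_g)
    return f_g * rho_g * u_g ** 2 / 2.0

def interfacial_shear(rho_g, u_g, u_l, f_i):
    """Interfacial shear stress: tau_i = f_i * rho_g * (u_g - u_l)|u_g - u_l| / 2."""
    du = u_g - u_l
    return f_i * rho_g * du * abs(du) / 2.0

# Illustrative values (assumptions, not the study's data): air over stationary liquid
tau_wg = gas_wall_shear(rho_g=1.2, u_g=10.0, d_h=0.05, mu_g=1.8e-5)
tau_i = interfacial_shear(rho_g=1.2, u_g=10.0, u_l=0.0, f_i=0.005)
```

In practice the interfacial friction factor `f_i` would come from a correlation such as the one the paper proposes, rather than being a fixed constant.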
Procedia PDF Downloads 12
268 Comparative Analysis of in vitro Release Profile for Escitalopram and Escitalopram Loaded Nanoparticles
Authors: Rashi Rajput, Manisha Singh
Abstract:
Escitalopram oxalate (ETP), an FDA-approved antidepressant drug of the SSRI (selective serotonin reuptake inhibitor) class, is used in the treatment of generalized anxiety disorder (GAD) and major depressive disorder (MDD). When taken orally, it is metabolized in the liver to S-demethylcitalopram (S-DCT) and S-didemethylcitalopram (S-DDCT) by the enzymes CYP2C19, CYP3A4 and CYP2D6, causing side effects such as dizziness, fast or irregular heartbeat, headache and nausea. Targeted and sustained drug delivery would therefore be a helpful tool for increasing its efficacy and reducing side effects, and the present study is designed to formulate a mucoadhesive nanoparticle formulation for this purpose. Escitalopram-loaded polymeric nanoparticles were prepared by the ionic gelation method, and the optimised formulation was characterized by zeta-average particle size (93.63 nm) and zeta potential (-1.89 mV); TEM analysis (range of 60 nm to 115 nm) also confirms the nanometric size range of the drug-loaded nanoparticles, along with a polydispersity index of 0.117. In this research, we have studied the in vitro drug release profile of ETP nanoparticles through a semi-permeable dialysis membrane. Three important characteristics affecting the drug release behaviour were the particle size, ionic strength and morphology of the optimised nanoparticles. The data showed that on increasing the particle size of the drug-loaded nanoparticles, the initial burst was reduced, whereas it was comparatively higher for the free drug. The formulation with 1 mg/ml chitosan in 1.5 mg/ml tripolyphosphate solution showed steady release over the entire release period. The data were further validated through mathematical modelling to establish the mechanism of drug release kinetics, which showed a typical linear diffusion profile in the optimised ETP-loaded nanoparticles.
Keywords: ionic gelation, mucoadhesive nanoparticle, semi-permeable dialysis membrane, zeta potential
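The mathematical modelling of release kinetics mentioned above is typically done by fitting the cumulative release data to standard models (e.g. the Higuchi square-root-of-time model for diffusion-controlled release). A minimal sketch with hypothetical release data (not the study's measurements) fits the Higuchi model by ordinary least squares:

```python
import math

def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def r_squared(x, y, slope, intercept):
    """Coefficient of determination for the fitted line."""
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - sum(y) / len(y)) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical cumulative release data: % drug released at time t (hours)
t = [1, 2, 4, 6, 8, 12]
q = [12.0, 17.5, 24.0, 30.0, 34.5, 42.0]

# Higuchi model: Q = k_H * sqrt(t), so regress Q on sqrt(t)
sqrt_t = [math.sqrt(ti) for ti in t]
k_h, b_h = linfit(sqrt_t, q)
r2_h = r_squared(sqrt_t, q, k_h, b_h)
```

A high R² for the square-root fit (relative to, say, a zero-order fit of Q on t) indicates diffusion-dominated release, consistent with the linear diffusion profile the abstract reports.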
Procedia PDF Downloads 295
267 Application of Mathematical Models for Conducting Long-Term Metal Fume Exposure Assessments for Workers in a Shipbuilding Factory
Authors: Shu-Yu Chung, Ying-Fang Wang, Shih-Min Wang
Abstract:
Conducting long-term exposure assessments is important for workers exposed to chemicals with chronic effects. However, such assessments usually encounter several constraints, including cost, workers' willingness and interference with work practice, leading to inadequate long-term exposure data in the real world. In this study, an integrated approach was developed for conducting long-term exposure assessments for welding workers in a shipbuilding factory. A laboratory study was conducted to yield the fume generation rates under various operating conditions. These results and the measured environmental conditions were applied to the near-field/far-field (NF/FF) model for predicting long-term fume exposures via Monte Carlo simulation. The predicted long-term concentrations were then used to determine the prior distribution in Bayesian decision analysis (BDA). Finally, the resultant posterior distributions were used to assess long-term exposure and serve as the basis for initiating control strategies for shipbuilding workers. Results show that the NF/FF model was suitable for predicting exposures to the metals contained in welding fume. The resultant posterior distributions could effectively assess the long-term exposures of shipbuilding welders. Welders' long-term Fe, Mn and Pb exposures were found to have a high probability of exceeding the action level, indicating that preventive measures should be taken to reduce welders' exposures immediately. Though the resultant posterior distribution can only be regarded as the best solution based on the currently available predicting and monitoring data, the proposed integrated approach can be regarded as a feasible solution for conducting long-term exposure assessment in the field.
Keywords: Bayesian decision analysis, exposure assessment, near field and far field model, shipbuilding industry, welding fume
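The steady-state form of the NF/FF (two-box) model used above has a compact closed form: the far-field concentration is G/Q and the near field adds G/β, where β is the inter-zone airflow. A minimal sketch with hypothetical welding-fume values (assumptions, not the study's data):

```python
def near_far_field(g_mg_min, q_m3_min, beta_m3_min):
    """Steady-state near-field/far-field (two-box) concentrations.

    g_mg_min: contaminant generation rate (mg/min)
    q_m3_min: room ventilation rate (m^3/min)
    beta_m3_min: near-field/far-field inter-zone airflow (m^3/min)
    """
    c_ff = g_mg_min / q_m3_min            # far-field concentration (mg/m^3)
    c_nf = c_ff + g_mg_min / beta_m3_min  # near field adds g/beta on top
    return c_nf, c_ff

# Hypothetical scenario: 5 mg/min fume, 20 m^3/min ventilation, 4 m^3/min inter-zone flow
c_nf, c_ff = near_far_field(g_mg_min=5.0, q_m3_min=20.0, beta_m3_min=4.0)
```

In the study's approach, G, Q and β would be sampled distributions (from the laboratory fume-generation rates and field measurements) and pushed through a Monte Carlo loop to build the prior for the BDA step.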
Procedia PDF Downloads 142
266 Numerical Method for Productivity Prediction of Water-Producing Gas Well with Complex 3D Fractures: Case Study of Xujiahe Gas Well in Sichuan Basin
Authors: Hong Li, Haiyang Yu, Shiqing Cheng, Nai Cao, Zhiliang Shi
Abstract:
Unconventional resources have gradually become the main direction of oil and gas exploration and development. However, the productivity of gas wells, the level of water production, and the seepage law in tight fractured gas reservoirs differ greatly, which is why production prediction is so difficult. Firstly, a three-dimensional multi-scale fracture, multiphase mathematical model based on an embedded discrete fracture model (EDFM) is established, and the material balance method is used to calculate the water body multiple according to the production performance characteristics of the water-producing gas well; this helps construct a 'virtual water body'. On this basis, this paper presents a numerical simulation process that can adapt to different production modes of gas wells. The research results show that fractures have a double-sided effect: the positive side is that they increase the initial production capacity, but the negative side is that they can connect to the water body, causing gas production to drop and water production to rise rapidly, showing a 'scissors-like' characteristic. It is worth noting that fractures with different angles have different abilities to connect with the water body: the higher the fracture angle, the earlier water may break through. When the reservoir is a single layer, there may be a stable water-free production period before the fractures connect with the water body; once connected, the 'scissors shape' appears. If the reservoir has multiple layers, gas and water are produced at the same time. The above gas-water relationship matches the gas well production data of the Xujiahe gas reservoir in the Sichuan Basin. This method is used to predict the productivity of a well with hydraulic fractures in this gas reservoir, and the prediction results agree with on-site production data by more than 90%.
This shows that the research idea has great potential for the productivity prediction of water-producing gas wells, and early prediction results are of great significance for guiding the design of development plans.
Keywords: EDFM, multiphase, multilayer, water body
Procedia PDF Downloads 195
265 Calculation of Fractal Dimension and Its Relation to Some Morphometric Characteristics of Iranian Landforms
Authors: Mitra Saberi, Saeideh Fakhari, Amir Karam, Ali Ahmadabadi
Abstract:
Geomorphology is the scientific study of the form and shape of the Earth's surface. The existence and variation of landform types is mainly controlled by changes in the shape and position of land and topography. The interest in and application of fractal concepts in geomorphology stems from the fact that many geomorphic landforms have fractal structures, and their formation and transformation can be explained by mathematical relations. The purpose of this study is to identify and analyze the fractal behavior of landforms of the macro-geomorphologic regions of Iran, as well as to study and analyze topographic and landform characteristics on the basis of fractal relationships. Using the Iranian digital elevation model, for landforms such as slopes, depositional features and alluvial fans, the fractal dimensions of the curves were calculated through the box counting method. The morphometric characteristics of the landforms and their fractal dimension were then calculated for four criteria (height, slope, profile curvature and planimetric curvature) and three indices (maximum, average, standard deviation) using ArcMap software separately. After investigating their correlation with the fractal dimension, two-way regression analysis was performed, and the relationship between the fractal dimension and the morphometric characteristics of the landforms was investigated. The results show that the fractal dimension, at pixel sizes of 30, 90 and 200 m, of the topographic curves of the different landform units of Iran (mountain, hill, plateau and plain) ranges from 1.06 in alluvial fans to 1.17 in the mountains. Generally, for all pixel sizes, the fractal dimension decreases from mountain to plain.
The fractal dimension has the highest correlation coefficient with the slope criterion and the standard deviation index, and the lowest with the profile curvature and the mean index; as the pixels become larger, the correlation coefficient between the indices and the fractal dimension decreases.
Keywords: box counting method, fractal dimension, geomorphology, Iran, landform
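The box counting method used above can be sketched in a few lines: cover the set with grids of shrinking box size and take the slope of log N(ε) against log(1/ε). The snippet below is an illustrative implementation (the point set and box sizes are assumptions, not the study's data); a straight line should recover a dimension of 1.

```python
import math

def box_count_dimension(points, sizes):
    """Estimate the fractal dimension of points in the unit square by box counting."""
    logs = []
    for n in sizes:                      # n boxes per side, box size 1/n
        occupied = {(int(x * n), int(y * n)) for x, y in points}
        logs.append((math.log(n), math.log(len(occupied))))
    # least-squares slope of log N(eps) versus log(1/eps)
    mx = sum(lx for lx, ly in logs) / len(logs)
    my = sum(ly for lx, ly in logs) / len(logs)
    num = sum((lx - mx) * (ly - my) for lx, ly in logs)
    den = sum((lx - mx) ** 2 for lx, ly in logs)
    return num / den

# Sanity check: a straight line has dimension 1
line = [(i / 1000.0, i / 1000.0) for i in range(1000)]
d = box_count_dimension(line, sizes=[2, 4, 8, 16, 32])
```

For real topographic curves the input would be the digitized contour points, and the regression would be restricted to the scaling range where the log-log plot is linear.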
Procedia PDF Downloads 84
264 Optimal Tamping for Railway Tracks, Reducing Railway Maintenance Expenditures by the Use of Integer Programming
Authors: Rui Li, Min Wen, Kim Bang Salling
Abstract:
For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 euros per kilometer per year. In order to reduce such maintenance expenditures, this paper presents a mixed 0-1 linear mathematical model designed to optimize predictive railway tamping activities for ballasted track over a planning horizon of three to four years. The objective function minimizes the actual costs of the tamping machine. The approach uses a simple dynamic model of the condition-based tamping process and a solution method for finding the optimal condition-based tamping schedule. Seven technical and practical aspects are taken into account when scheduling tamping: (1) track degradation, measured as the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality recovery on the track quality after the tamping operation; (5) tamping machine operation practices; (6) tamping budgets; and (7) differentiation of open track from station sections. The proposed maintenance model is applied to a Danish railway track of 42.6 km between Odense and Fredericia over periods of three and four years. The generated tamping schedule is reasonable and robust. Based on the result for the Danish railway corridor, total costs can be reduced significantly (by 50%) compared with the previous model, which optimizes the number of tamping operations. The different maintenance strategies are discussed in the paper.
The analysis of the results obtained from the model also shows that a longer predictive tamping planning period yields a more optimal scheduling of maintenance actions than continuous short-term preventive maintenance, namely yearly condition-based planning.
Keywords: integer programming, railway tamping, predictive maintenance model, preventive condition-based maintenance
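The core mechanics of aspects (1), (3) and (4) above — linear degradation of the longitudinal-level standard deviation, a quality threshold, and imperfect recovery after tamping — can be illustrated with a toy 0-1 schedule search. This is a brute-force sketch with assumed parameter values, not the paper's mixed 0-1 linear program, which would be solved with an IP solver at realistic scale:

```python
from itertools import product

def simulate(tamp_plan, sigma0=1.0, growth=0.3, recovery=0.5, limit=2.0):
    """Check feasibility of a 0-1 tamping plan: the track quality indicator
    (std. dev. of the longitudinal level) must never exceed the threshold.
    Each period: tamp first (imperfect recovery), then degrade."""
    sigma = sigma0
    for tamp in tamp_plan:
        if tamp:
            sigma *= recovery      # recovery depends on pre-tamping quality
        sigma += growth            # linear degradation over the period
        if sigma > limit:
            return False
    return True

def optimal_plan(horizon=8):
    """Brute-force the feasible 0-1 tamping schedule with fewest tampings
    (a proxy for minimizing tamping machine cost)."""
    best = None
    for plan in product((0, 1), repeat=horizon):
        if simulate(plan) and (best is None or sum(plan) < sum(best)):
            best = plan
    return best

plan = optimal_plan()
```

With these assumed parameters the cheapest feasible schedule needs two tamping operations over eight periods; the actual model adds alignment, budget and open-track/station constraints and minimizes cost rather than count.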
Procedia PDF Downloads 447
263 Dynamic Modelling of Hepatitis B Patient Using SIHAR Model
Authors: Alakija Temitope Olufunmilayo, Akinyemi, Yagba Joy
Abstract:
Hepatitis is inflammation of the liver tissue that can cause yellowing of the skin and eyes (jaundice), lack of appetite, vomiting, tiredness, abdominal pain and diarrhea. Hepatitis is acute if it resolves within 6 months and chronic if it lasts longer than 6 months. Acute hepatitis can resolve on its own, lead to chronic hepatitis or, rarely, result in acute liver failure. Chronic hepatitis may lead to scarring of the liver (cirrhosis), liver failure and liver cancer. Modelling hepatitis B may become necessary in order to reduce its spread, and a dynamic SIR model can be used. This model consists of a system of three coupled non-linear ordinary differential equations which does not have an explicit closed-form solution. It is an epidemiological model used to predict the dynamics of infectious disease by categorizing the population into three possible compartments. In this study, a five-compartment dynamic model of hepatitis B disease was proposed and developed by adding a control measure of sensitizing the public, called awareness. All the mathematical and statistical formulations of the model, especially the general equilibrium of the model, were derived, including the nonlinear least squares estimators. The initial parameters of the model were derived using nonlinear least squares implemented in R code. The study shows that the proportion of hepatitis B patients in the study population is 1.4 per 1,000,000 population. The estimated hepatitis B induced death rate is 0.0108, meaning that 1.08% of the infected individuals die of the disease. The reproduction number of hepatitis B disease in Nigeria is 6.0, meaning that one infected individual can infect about 6 other people. The effect of sensitizing the public on the basic reproduction number is significant, as the reproduction number is reduced.
The study therefore recommends that programmes be designed by government and non-governmental organizations to sensitize the entire Nigerian population in order to reduce cases of hepatitis B disease among the citizens.
Keywords: hepatitis B, modelling, non-linear ordinary differential equation, SIHAR model, sensitization
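The compartmental machinery underlying the model above can be sketched with the classic three-compartment SIR core (the study's five-compartment SIHAR structure is not fully specified in the abstract, so this is a simplified stand-in). The reproduction number R₀ = β/γ is taken as the reported 6.0; γ and the population values are assumptions for illustration:

```python
def sir_step(s, i, r, beta, gamma, n, dt):
    """One forward-Euler step of the SIR equations:
    dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I."""
    new_inf = beta * s * i / n * dt
    rec = gamma * i * dt
    return s - new_inf, i + new_inf - rec, r + rec

def run_sir(beta, gamma, n=1_000_000, i0=10, days=365, dt=1.0):
    """Integrate the SIR system numerically (no closed-form solution exists)."""
    s, i, r = float(n - i0), float(i0), 0.0
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, n, dt)
    return s, i, r

r0 = 6.0               # reproduction number reported in the abstract
gamma = 0.01           # assumed removal rate (1/days), for illustration only
beta = r0 * gamma      # since R0 = beta / gamma
s, i, r = run_sir(beta, gamma)
```

An awareness compartment, as in the SIHAR extension, would act by lowering the effective β for sensitized individuals, which is how public sensitization reduces the reproduction number.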
Procedia PDF Downloads 91
262 Angiogenesis and Blood Flow: The Role of Blood Flow in Proliferation and Migration of Endothelial Cells
Authors: Hossein Bazmara, Kaamran Raahemifar, Mostafa Sefidgar, Madjid Soltani
Abstract:
Angiogenesis is the formation of new blood vessels from existing vessels. Because blood flows through the vessels, it plays an important role in regulating the angiogenesis process. Multiple mathematical models of angiogenesis have been proposed to simulate the formation of the complicated network of capillaries around a tumor. In this work, a multi-scale model of angiogenesis is developed to show the effect of blood flow on capillaries and network formation. This model spans multiple temporal and spatial scales, i.e. intracellular (molecular), cellular, and extracellular (tissue) scales. At the intracellular or molecular scale, the signaling cascade of endothelial cells is obtained. Two main stages in the development of a vessel are considered. In the first stage, single sprouts are extended toward the tumor; here, the main regulator of endothelial cell behavior is the signals from the extracellular matrix. After anastomosis and the formation of closed loops, blood flow starts in the capillaries; in this stage, blood-flow-induced signals regulate endothelial cell behavior. At the cellular scale, growth and migration of endothelial cells is modeled with a discrete lattice Monte Carlo method called the cellular Potts model (CPM). At the extracellular (tissue) scale, diffusion of tumor angiogenic factors in the extracellular matrix, formation of closed loops (anastomosis), and shear stress induced by blood flow are considered. The model is able to simulate the formation of a closed loop and its extension, and the results are validated against experimental data. The results show that, without blood flow, the capillaries are not able to maintain their integrity.
Keywords: angiogenesis, endothelial cells, multi-scale model, cellular Potts model, signaling cascade
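The sprout-extension stage described above — a tip cell migrating up a tumor-angiogenic-factor (TAF) gradient on a lattice — can be illustrated with a deliberately simplified deterministic sketch. This is not the cellular Potts model (which uses stochastic Monte Carlo spin flips over cell domains); it is a greedy chemotaxis toy on a lattice, with a hypothetical TAF field peaking at an assumed tumor location:

```python
def taf(x, y, source=(10, 10)):
    """Hypothetical tumor angiogenic factor field: higher closer to the tumor."""
    return -abs(x - source[0]) - abs(y - source[1])

def migrate(start, steps):
    """Greedy chemotactic migration: the tip cell moves each step to the
    4-neighbour with the highest TAF concentration."""
    x, y = start
    path = [(x, y)]
    for _ in range(steps):
        nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        x, y = max(nbrs, key=lambda p: taf(*p))
        path.append((x, y))
    return path

path = migrate(start=(0, 0), steps=20)  # sprout starts at the parent vessel
```

In the full CPM, moves are accepted stochastically via a Hamiltonian that adds adhesion, volume constraints and, in the second stage, flow-induced shear stress terms.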
Procedia PDF Downloads 425
261 Development of Ready Reckoner Charts for Easy, Convenient, and Widespread Use of Horrock's Apparatus by Field Level Health Functionaries in India
Authors: Gumashta Raghvendra, Gumashta Jyotsna
Abstract:
Aim and Objective of Study: The use of Horrock's apparatus by a health care worker requires on-site mathematical calculations to estimate the volume of water and the amount of bleaching powder needed, based on the serial number of the first cup showing blue coloration after adding freshly prepared starch-iodide indicator solution. In view of the difficulty of the two simultaneous calculations required, Horrock's apparatus is not routinely used by health care workers, being impractical and inconvenient. Material and Methods: Arbitrary use of bleaching powder in wells results in hyper-chlorination or hypo-chlorination of the well, defying the purpose of adequate chlorination or leading to non-usage of well water due to hyper-chlorination. Keeping this in mind, two nomograms have been developed: one to assess the volume of the well using the depth and diameter of the well, and the other to determine the quantity of bleaching powder to be added using the number of the cup of Horrock's apparatus which shows the colour indication. Result and Conclusion: Of the two self-explanatory interlinked charts thus developed, the first chart bypasses the formula πr²h for water volume (a ready reckoner table with depth of water on the X axis and diameter of well on the Y axis), and the second chart bypasses the formula 2ab/455 (where 'a' is the serial number of the cup and 'b' is the water volume; a ready reckoner table with water volume on the X axis and serial number of the cup on the Y axis). Using these two charts, a health care worker can immediately determine the exact requirement of bleaching powder. The developed ready reckoner charts will be easy and convenient to use for preventing water-borne diseases caused by hypo-chlorination, especially in rural India and other developing countries.
Keywords: apparatus, bleaching, chlorination, Horrock's, nomogram
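The two calculations that the charts tabulate can be written out directly. The sketch below uses the abstract's own formulas, πr²h for well volume and 2ab/455 for bleaching powder; the well dimensions and cup number are hypothetical, and the output units follow the source's convention (volume in litres, powder in grams):

```python
import math

def well_volume_litres(depth_m, diameter_m):
    """Water volume of a circular well from pi * r^2 * h, converted to litres."""
    r = diameter_m / 2.0
    return math.pi * r ** 2 * depth_m * 1000.0

def bleaching_powder_grams(cup_number, volume_litres):
    """Bleaching powder requirement from the 2ab/455 relation
    (a = serial number of the first cup showing blue, b = water volume)."""
    return 2.0 * cup_number * volume_litres / 455.0

# Hypothetical well: 4 m water depth, 2 m diameter; first blue cup is cup 3
vol = well_volume_litres(depth_m=4.0, diameter_m=2.0)
powder = bleaching_powder_grams(cup_number=3, volume_litres=vol)
```

The charts simply pre-tabulate these two functions so the field worker reads off the answer instead of computing it.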
Procedia PDF Downloads 484
260 Optimization-Based Design Improvement of Synchronizer in Transmission System for Efficient Vehicle Performance
Authors: Sanyka Banerjee, Saikat Nandi, P. K. Dan
Abstract:
The synchronizer, an integral part of the gearbox, is a key element of the transmission system in automotive applications. The performance of the synchronizer affects transmission efficiency and driving comfort. The synchronizing mechanism, as a major component of the transmission system, must be capable of preventing vibration and noise in the gears. Improving gear shifting efficiency, with the aim of achieving smooth, quick and energy-efficient power transmission, remains a challenge for the automotive industry. The performance of the synchronizer depends on the features and characteristics of its sub-components, and an analysis of the contribution of such characteristics is therefore necessary. An important exercise is to identify all characteristics or factors associated with the modeling and analysis; for this purpose the literature was reviewed, rather extensively, to study the mathematical models formulated on their basis. It has been observed that certain factors are rather common across models; however, a few factors have been selected specifically for individual models, as reported. In order to obtain a more realistic model, an attempt has been made here to identify and assimilate practically all factors which may be considered in formulating the model more comprehensively. A simulation study, formulated as a block model, has been carried out in a reliable environment such as MATLAB. Since lower synchronization time is desirable, it has been considered here as the output factor in the simulation modeling for evaluating transmission efficiency. An improved synchronizer model requires optimized values of the sub-component design parameters, and a parametric optimization utilizing Taguchi's design-of-experiments-based response data and their analysis has been carried out for this purpose.
The effectiveness of the optimized parameters for improved synchronizer performance has been validated by a simulation study of the synchronizer block model, with the improved parameter values as input parameters, for better transmission efficiency and driver comfort.
Keywords: design of experiments, modeling, parametric optimization, simulation, synchronizer
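Since synchronization time is a smaller-the-better response, Taguchi analysis of the kind described above scores each parameter setting with the signal-to-noise ratio SN = -10·log₁₀(mean(y²)) and prefers the setting with the higher SN. A minimal sketch with hypothetical synchronization times (not the study's response data):

```python
import math

def sn_smaller_the_better(times):
    """Taguchi signal-to-noise ratio for a smaller-the-better response:
    SN = -10 * log10(mean(y^2)); larger SN is better."""
    return -10.0 * math.log10(sum(y * y for y in times) / len(times))

# Hypothetical synchronization times (s) from replicated runs of two settings
setting_a = [0.30, 0.32, 0.31]
setting_b = [0.24, 0.26, 0.25]

sn_a = sn_smaller_the_better(setting_a)
sn_b = sn_smaller_the_better(setting_b)
best = "B" if sn_b > sn_a else "A"
```

A full Taguchi study would run an orthogonal array over all design parameters and pick, for each parameter, the level with the highest mean SN.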
Procedia PDF Downloads 314
259 Numerical Solution of Portfolio Selecting Semi-Infinite Problem
Authors: Alina Fedossova, Jose Jorge Sierra Molina
Abstract:
SIP problems are part of non-classical optimization: problems in which the number of variables is finite and the number of constraints is infinite. These are semi-infinite programming problems. Most algorithms for semi-infinite programming problems reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, some of the constraints or the objective function are nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific purposes of investors. The risk of the entire portfolio may be less than the risks of the individual investments in the portfolio. For example, we could invest M euros in N shares for a specified period. Let y_i > 0 be the return, per euro invested in stock i, at the end of the period (i = 1, ..., N). The goal here is to determine the amounts x_i to be invested in stock i, i = 1, ..., N, so as to maximize the value yᵀx at the end of the period, where x = (x_1, ..., x_N) and y = (y_1, ..., y_N). For us, the optimal portfolio is the best portfolio in terms of the risk-return trade-off: the portfolio that meets the investor's goals and risk attitude. Therefore, investment goals and risk appetite are the factors that influence the choice of an appropriate portfolio of assets. The investment returns are uncertain; thus we have a semi-infinite programming problem. We solve the semi-infinite optimization problem of portfolio selection using outer approximation methods. This approach can be considered a development of the Eaves-Zangwill method, applying the multi-start technique in all iterations for the search of the relevant constraints' parameters. The stochastic outer approximation method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution problems and others, is based on optimality criteria for quasi-optimal functions.
As a result, we obtain a mathematical model and the optimal investment portfolio when yields are not known in advance. Finally, we apply the algorithm to a specific case of a Colombian bank.
Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution
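The essence of the outer approximation idea above is that the infinite family of return scenarios is replaced, iteration by iteration, by a growing finite sample of the worst constraints. A toy version with a fixed finite sample and a two-asset weight grid (all numbers are hypothetical, and the grid search stands in for the nonlinear programming subproblem):

```python
def worst_case_return(weights, scenarios):
    """Worst portfolio return over the (finite) sampled scenario set."""
    return min(sum(w * y for w, y in zip(weights, s)) for s in scenarios)

def maximin_portfolio(scenarios, step=0.1):
    """Grid search for two-asset weights maximizing the worst-case return.
    The finite scenario sample is the outer approximation of the
    semi-infinite constraint set."""
    best_w, best_v = None, float("-inf")
    k = 0
    while k * step <= 1.0 + 1e-9:
        w = (round(k * step, 10), round(1.0 - k * step, 10))
        v = worst_case_return(w, scenarios)
        if v > best_v:
            best_w, best_v = w, v
        k += 1
    return best_w, best_v

# Hypothetical per-euro end-of-period returns y for two stocks, three scenarios
scenarios = [(1.10, 0.95), (0.90, 1.05), (1.00, 1.00)]
w, v = maximin_portfolio(scenarios)
```

In the actual method, new scenarios (violated constraints) are generated by the multi-start search at each iteration and appended to the sample until the approximation is tight enough.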
Procedia PDF Downloads 309
258 A Perspective on Teaching Mathematical Concepts to Freshman Economics Students Using 3D-Visualisations
Authors: Muhammad Saqib Manzoor, Camille Dickson-Deane, Prashan Karunaratne
Abstract:
The Cobb-Douglas production (utility) function is a fundamental function widely used in economics teaching and research, chiefly because of its ability to describe actual production using inputs such as labour and capital. Characteristics of the function such as returns to scale and marginal and diminishing marginal productivities are covered in the introductory units of both microeconomics and macroeconomics with a 2-dimensional static visualisation of the function. However, less insight is provided regarding the three-dimensional surface, the changes in curvature properties due to returns to scale, the linkage of the short-run production function with its long-run counterpart and with marginal productivities, the level curves, and constrained optimisation. Since (freshman) learners have diverse prior knowledge and cognitive skills, the existing 'one size fits all' approach is not very helpful. The aim of this study is to bridge this gap by introducing a technological intervention with interactive animations of the three-dimensional surface and sequential unveiling of the characteristics mentioned above, using Python software. A small classroom intervention has helped students enhance their analytical and visualisation skills towards active and authentic learning of this topic. However, to substantiate the strength of our approach, a quasi-Delphi study will be conducted to ask domain-specific experts, 'What value to the learning process in economics is there in using a 2-dimensional static visualisation compared to a 3-dimensional dynamic visualisation?' Three perspectives on the intervention were reviewed by a panel comprising novice students, experienced students, novice instructors, and experienced instructors, in an effort to determine the learnings from each type of visualisation within a specific domain of knowledge.
The value of this approach lies in suggesting different pedagogical methods which can enhance learning outcomes.
Keywords: Cobb-Douglas production function, quasi-Delphi method, effective teaching and learning, 3D-visualisations
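The properties the animations unveil, constant returns to scale and diminishing marginal productivity, follow directly from the function Q = A·L^α·K^β and are easy to check numerically. A minimal sketch with assumed parameter values (α = 0.3, β = 0.7, so α + β = 1):

```python
def cobb_douglas(l, k, a=1.0, alpha=0.3, beta=0.7):
    """Cobb-Douglas production function: Q = A * L^alpha * K^beta."""
    return a * l ** alpha * k ** beta

def marginal_product_labour(l, k, a=1.0, alpha=0.3, beta=0.7):
    """MPL = dQ/dL = alpha * Q / L."""
    return alpha * cobb_douglas(l, k, a, alpha, beta) / l

# Constant returns to scale (alpha + beta = 1): doubling both inputs doubles Q
q1 = cobb_douglas(100.0, 50.0)
q2 = cobb_douglas(200.0, 100.0)

# Diminishing marginal productivity: MPL falls as L rises with K held fixed
mpl_low = marginal_product_labour(100.0, 50.0)
mpl_high = marginal_product_labour(200.0, 50.0)
```

The classroom intervention effectively animates this computation over a grid of (L, K) pairs to render the 3D surface and its level curves.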
Procedia PDF Downloads 146
257 Variation of Manning's Coefficient in a Meandering Channel with Emergent Vegetation Cover
Authors: Spandan Sahu, Amiya Kumar Pati, Kishanjit Kumar Khatua
Abstract:
Vegetation plays a major role in determining the flow parameters in an open channel and enhances the aesthetic view of the revetments. The major types of vegetation in rivers typically comprise herbs, grasses, weeds, trees, etc. The vegetation in an open channel usually consists of aquatic plants that are completely submerged, partially submerged, or floating. The presence of vegetation can have both benefits and drawbacks. The major benefit of aquatic plants is that they reduce soil erosion; the obvious drawbacks are that they retard the flow of water and reduce the hydraulic capacity of the channel. The degree to which the flow parameters are affected depends upon the density of the vegetation, the degree of submergence, the vegetation pattern, and the vegetation species. Vegetation in an open channel provides resistance to flow, which in turn provides a setting in which to study the varying trends of flow parameters under vegetative growth on the channel surface. In this paper, an experiment has been conducted on a meandering channel of sinuosity 1.33 with rigid vegetation cover to investigate the effect on the flow parameters and the variation of Manning's n with the degree of vegetation density, the vegetation pattern, and the submergence criteria. The measurements have been carried out at four different cross-sections: two on the trough portions of the meanders and two on the crest portions. In this study, the analytical solution of Shiono and Knight (SKM) for lateral distributions of depth-averaged velocity and bed shear stress has been adopted. Dimensionless eddy viscosity and bed friction have been incorporated to modify the SKM so as to provide more accurate results. A mathematical model has been formulated for comparative analysis with the results obtained from the Shiono-Knight method.
Keywords: bed friction, depth averaged velocity, eddy viscosity, SKM
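Since the study tracks the variation of Manning's n, the back-calculation from measured flow is worth making explicit: rearranging Manning's equation V = (1/n)·R^(2/3)·S^(1/2) gives n = R^(2/3)·S^(1/2)/V. A minimal sketch with hypothetical measurements (not the paper's data) shows how emergent vegetation, by slowing the flow, raises the apparent n:

```python
def mannings_n(velocity, hydraulic_radius, slope):
    """Back-calculate Manning's n from V = (1/n) * R^(2/3) * S^(1/2) (SI units)."""
    return hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5 / velocity

# Hypothetical measurements: same section geometry, with and without vegetation
n_clear = mannings_n(velocity=0.60, hydraulic_radius=0.25, slope=0.001)
n_veg = mannings_n(velocity=0.35, hydraulic_radius=0.25, slope=0.001)
```

Repeating this at the four measured cross-sections, for each vegetation density and pattern, yields the variation of n that the study reports.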
Procedia PDF Downloads 137
256 Influence of Flexible Plate's Contour on Dynamic Behavior of High Speed Flexible Coupling of Combat Aircraft
Authors: Dineshsingh Thakur, S. Nagesh, J. Basha
Abstract:
A lightweight High Speed Flexible Coupling (HSFC) is used to connect the Engine Gear Box (EGB) with the Accessory Gear Box (AGB) of a combat aircraft. The HSFC transmits power at high speeds, ranging from 10000 to 18000 rpm, from the EGB to the AGB. The HSFC also accommodates large misalignments resulting from the thermal expansion of the aircraft engine and the mounting arrangement. The HSFC has a series of metallic contoured annular flexible plates of thin cross-section to accommodate the misalignments; the plates accommodate the misalignment through elastic flexure of the material. As the HSFC operates at high speed, the flexural and axial resonance frequencies must be kept away from the operating speed, and proper prediction is required to prevent failure in the transmission line of a single-engine fighter aircraft. To study the influence of the flexible plate's contour on the lateral critical speed (LCS) of the HSFC, a mathematical model of the HSFC as an eleven-rotor system is developed. The flexible plate being the bending member of the system, its bending stiffness, which results from the contour, governs the LCS. Using the transfer matrix method, the influence of various flexible plate contours on the critical speed is analyzed; the flexibility of the support bearings is also considered in the critical speed prediction. Based on the study, a model is built with the optimum contour of the flexible plate for validation by experimental modal analysis. A good correlation between the theoretical prediction and the model behavior is observed. From the study, it is found that the flexible plate's contour plays a vital role in modifying the system's dynamic behavior, and the present model can be extended to the development of similar types of flexible couplings, given its computational simplicity and reliability.
Keywords: flexible rotor, critical speed, experimental modal analysis, high speed flexible coupling (HSFC), misalignment
Procedia PDF Downloads 215
255 Reverse Logistics End of Life Products Acquisition and Sorting
Authors: Badli Shah Mohd Yusoff, Khairur Rijal Jamaludin, Rozetta Dollah
Abstract:
The emergence of reverse logistics and product recovery management is an important concept in reconciling economic and environmental objectives by recapturing the value of end-of-life product returns. End-of-life products contain valuable modules, parts, residues and materials that can create value if recovered efficiently. The main objective of this study is to explore and develop a model that recovers as much of the economic value as reasonably possible, finding the optimal return acquisition and sorting policy to meet demand and maximize profits over time. In this study, the benefit for the remanufacturer is the ability to forecast future demand for used products under uncertainty in the quantity and quality of returns. Formulated on the basis of a generic disassembly tree, the proposed model focuses on three reverse logistics activities, namely refurbishing, remanufacturing and disposal, incorporating all plausible quality levels of the returns. A stricter sorting policy decreases the quantity of products to be refurbished or remanufactured and increases the level of discarded products. Numerical experiments were carried out to investigate the characteristics and behaviour of the proposed model, using a mathematical programming model in Lingo 16.0 for medium-term planning of return acquisition, disassembly (refurbishing or remanufacturing) and disposal activities. Moreover, the model analyses a number of trade-off decisions for maximizing revenue from the collection of used products in reverse logistics services through the refurbishing and remanufacturing recovery options. The results showed that full utilization of the sorting process leads the system to acquire a smaller quantity of returns with minimal overall cost.
Further, sensitivity analysis provides a range of possible scenarios to consider in optimizing the overall cost of refurbished and remanufactured products.
Keywords: core acquisition, end of life, reverse logistics, quality uncertainty
Procedia PDF Downloads 305