Search results for: lexical and numerical error-recognition tasks
730 Modeling of Thermo Acoustic Emission Memory Effect in Rocks of Varying Textures
Authors: Vladimir Vinnikov
Abstract:
The paper proposes a model of an inhomogeneous rock mass with an initially random distribution of microcracks on mineral grain boundaries. It describes the behavior of cracks in a medium under the effect of a thermal field, with the medium heated instantaneously to a predetermined temperature. Crack growth occurs according to the concepts of fracture mechanics, provided that the stress intensity factor K exceeds the critical value Kc. The modeling of thermally induced acoustic emission memory effects is based on the assumption that every event of crack nucleation or crack growth caused by heating is accompanied by a single acoustic emission event. Parameters of the thermally induced acoustic emission memory effect produced by cyclic heating and cooling (with the temperature amplitude increasing from cycle to cycle) were calculated for several rock texture types (massive, banded, and disseminated). The study substantiates the adaptation of the proposed model to humidity interference with the thermally induced acoustic emission memory effect, and the influence of humidity on the effect in quasi-homogeneous and banded rocks is estimated. It is shown that such modeling allows the structure and texture of rocks to be taken into account and the influence of interference factors on the distinctness of the effect to be estimated. The numerical modeling can be used to obtain information about past thermal impacts on rocks and to determine the degree of rock disturbance by means of non-destructive testing.
Keywords: crack growth, cyclic heating and cooling, rock texture, thermo acoustic emission memory effect
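The crack-growth rule above can be sketched in a few lines (a toy of our own, not the authors' code: the coefficient relating temperature to stress intensity, the growth factor, and the initial crack population are all hypothetical). Each heating step counts one AE event per crack whose stress intensity factor K exceeds Kc:

```python
import math

def ae_count(crack_lengths, delta_t, c=0.01, kc=1.0, growth=1.1):
    """One heating step: each crack with stress intensity factor
    K = c * delta_t * sqrt(a) above the critical value Kc grows (toy
    kinetics) and emits exactly one acoustic emission (AE) event, as
    assumed in the abstract. c, growth, and the crack lengths are
    hypothetical illustrative values."""
    events = 0
    new_lengths = []
    for a in crack_lengths:
        k = c * delta_t * math.sqrt(a)
        if k > kc:
            events += 1
            a *= growth
        new_lengths.append(a)
    return events, new_lengths

# Cyclic heating with increasing temperature amplitude: the AE count per
# cycle rises as each higher peak temperature activates additional cracks.
lengths = [1.0 + 0.1 * i for i in range(50)]
history = []
for peak in (60, 80, 100):
    n, lengths = ae_count(lengths, peak)
    history.append(n)
```

The per-cycle AE count grows with the peak temperature, qualitatively reproducing the memory-effect signature the model is built around.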
Procedia PDF Downloads 271
729 Effect of the Vertical Pressure on the Electrical Behaviour of the Micro-Copper Polyurethane Composite Films
Authors: Saeid Mehvari, Yolanda Sanchez-Vicente, Sergio González Sánchez, Khalid Lafdi
Abstract:
Materials combining transparency, electrical conductivity, and flexibility are required in the growing electronics sector. In this research, electrically conductive and flexible films were prepared by dispersing micro-copper particles into a polyurethane (PU) matrix. Two sets of samples were made, using the spin coating technique (sample thickness below 30 μm) and material casting (sample thickness below 100 μm). Copper concentrations in the PU matrix varied from 0.5 to 20% by volume. The dispersion of the micro-copper particles in the PU matrix was characterised using optical and scanning electron microscopy. Electrical conductivity was measured with a home-made multimeter setup under pressures from 1 to 20 kPa, in both the through-thickness and in-plane directions. Samples made by casting were not conductive, whereas samples made by spin coating showed through-thickness conductivity under pressure. Spin-coated films displayed a significant increase in conductivity above a copper concentration of 2 vol. %, known as the percolation threshold. The maximum conductivity of 7.2 × 10⁻¹ S·m⁻¹ was reached at a filler concentration of 20 vol. % under 20 kPa. A semi-empirical model with adjustable coefficients was used to fit and predict the electrical behaviour of the composites. For the first time, the finite element method based on a representative volume element (FE-RVE) was successfully used to predict their electrical behaviour under applied pressures.
Keywords: electrical conductivity, micro copper, numerical simulation, percolation threshold, polyurethane, RVE model
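The behaviour near the threshold can be sketched with the classical percolation power law (an illustrative stand-in, not the authors' fitted semi-empirical model; the exponent and normalisation are our assumptions, while the 2 vol. % threshold and 0.72 S/m maximum come from the abstract):

```python
def conductivity(phi, sigma0=0.72, phi_c=0.02, t=2.0):
    """Classical percolation power law:
    sigma = sigma0 * ((phi - phi_c) / (1 - phi_c))**t above the
    threshold phi_c, effectively insulating below it. phi_c = 2 vol.%
    is taken from the abstract; t and the normalisation are
    illustrative choices, not fitted values."""
    if phi <= phi_c:
        return 0.0
    return sigma0 * ((phi - phi_c) / (1.0 - phi_c)) ** t
```

Below phi_c the composite is effectively insulating; above it, conductivity rises steeply with filler fraction, which is the jump the spin-coated films exhibit at 2 vol. %.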
Procedia PDF Downloads 197
728 Revenue Management of Perishable Products Considering Freshness and Price Sensitive Customers
Authors: Onur Kaya, Halit Bayer
Abstract:
Global grocery and supermarket sales are among the largest markets in the world, and perishable products such as fresh produce, dairy, and meat constitute the biggest section of these markets. Because they deteriorate over time, the demand for these products depends highly on their freshness. They become totally obsolete after a certain amount of time, causing a high amount of wastage and decreased grocery profits. In addition, customers are asking for higher product variety in perishable product categories, leading to less predictable demand per product and to more out-dating. Effective management of these perishable products is an important issue, since billions of dollars' worth of food expires and is wasted every month. We consider coordinated inventory and pricing decisions for perishable products with a time- and price-dependent random demand function. We use stochastic dynamic programming to model this system for both periodically reviewed and continuously reviewed inventory systems and prove certain structural characteristics of the optimal solution. We prove that the optimal ordering decision has a monotone structure and that the optimal price decreases over time; however, the optimal price changes non-monotonically with respect to inventory size. We also analyze the effect of different parameters on the optimal solution through numerical experiments. In addition, we analyze simple-to-implement heuristics, investigate their effectiveness, and extract managerial insights. This study gives valuable insights into the management of perishable products in order to decrease wastage and increase profits.
Keywords: age-dependent demand, dynamic programming, perishable inventory, pricing
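A toy version of the backward dynamic program can be sketched as follows (a sketch under our own assumptions: the demand parameters and price grid are hypothetical, and demand is made deterministic here, whereas the paper's model is stochastic):

```python
def optimal_policy(T=5, max_inv=10, prices=(1.0, 2.0, 3.0)):
    """Backward dynamic program for pricing a perishable item over T
    periods (deterministic stand-in for the stochastic model in the
    abstract). Demand falls with price and with product age; unsold
    stock at expiry is worthless."""
    a, b, c = 8.0, 2.0, 1.0        # illustrative demand parameters
    V = [[0.0] * (max_inv + 1) for _ in range(T + 1)]   # V[T][.] = 0: expired
    policy = [[prices[-1]] * (max_inv + 1) for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for inv in range(max_inv + 1):
            best = -1.0
            for p in prices:
                demand = int(max(0.0, a - b * p - c * t))  # age- and price-dependent
                sold = min(inv, demand)
                val = p * sold + V[t + 1][inv - sold]
                if val > best:
                    best = val
                    policy[t][inv] = p
            V[t][inv] = best
    return V, policy

V, policy = optimal_policy()
```

V[0][inv] is the maximum revenue achievable from inv fresh units; its monotonicity in inventory is consistent with the structural results the paper proves for the full stochastic model.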
Procedia PDF Downloads 247
727 The Use of Polar Substituent Groups for Promoting Azo Disperse Dye Solubility and Reactivity for More Economic and Environmentally Benign Applications: A Computational Study
Authors: Olaide O. Wahab, Lukman O. Olasunkanmi, Krishna K. Govender, Penny P. Govender
Abstract:
The economic and environmental challenges associated with azo disperse dye applications stem from poor aqueous solubility and low degradation tendency, the latter resulting from low chemical reactivity. The poor aqueous solubility of this group of dyes necessitates dispersing agents, which increase operational costs and release toxic chemical components into the environment, while their low degradation tendency is due to the high stability of the azo functional group (-N=N-) in their chemical structures. To address these problems, this study theoretically investigated the effects of some polar substituents on the aqueous solubility and reactivity of disperse yellow (DY) 119 dye, with a view to developing new azo disperse dyes with improved water solubility and higher degradation tendency in the environment, using the DMol³ computational code. All calculations were carried out at the Becke and Perdew version of the Vosko-Wilk-Nusair (VWN-BP) level of density functional theory, in conjunction with the double numerical basis set containing a polarization function (DNP). Aqueous solubility was determined with the conductor-like screening model for realistic solvation (COSMO-RS) together with a known empirical solubility model, while reactivity was predicted using frontier molecular orbital calculations. Most of the new derivatives studied showed evidence of higher aqueous solubility and degradation tendency compared to the parent dye. We conclude that these derivatives are promising alternative dyes for more economic and environmentally benign dyeing practice and therefore recommend them for synthesis.
Keywords: aqueous solubility, azo disperse dye, degradation, disperse yellow 119, DMol³, reactivity
Procedia PDF Downloads 204
726 Geological and Geotechnical Investigation of a Landslide Prone Slope Along Koraput-Rayagada Railway Track Odisha, India: A Case Study
Authors: S. P. Pradhan, Amulya Ratna Roul
Abstract:
A number of landslides have occurred during the rainy season along the Rayagada-Koraput railway track over the past three years. The track was constructed about 20 years ago, but the existing protection measures can no longer control the recurring slope failures, causing losses to Indian Railways and its passengers and wasting time and money. The slopes along the Rayagada-Koraput track include both rock and soil slopes. The rock types are mainly khondalite and charnockite, whereas the soil slopes are mainly composed of laterite, ranging from slightly to highly weathered. Field studies were carried out on one of the critical slopes, followed by kinematic analysis to assess the type of failure. Slake durability, uniaxial compression, specific gravity, and triaxial tests were performed on rock samples to determine properties such as weathering index, unconfined compressive strength, density, cohesion, and friction angle. Following the laboratory tests, the Rock Mass Rating was calculated. Further, from the kinematic analysis and the basic Rock Mass Rating, a Slope Mass Rating was proposed for each slope. The properties obtained were used in slope stability simulations based on finite element modelling. From these results, suitable protection measures to prevent losses due to slope failure were suggested, using the relation between Slope Mass Rating and protection measures.
Keywords: landslides, slope stability, rock mass rating, slope mass rating, numerical simulation
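The Slope Mass Rating step can be sketched with Romana's adjustment formula (a sketch assuming Romana's SMR scheme, the standard slope extension of the basic Rock Mass Rating; the example factor values below are hypothetical, not from this case study):

```python
def slope_mass_rating(rmr_basic, f1, f2, f3, f4):
    """Romana's SMR: SMR = RMR_basic + (F1 * F2 * F3) + F4, where F1-F3
    penalise unfavourable joint/slope orientations (their product is
    negative or zero) and F4 credits the excavation method."""
    return rmr_basic + f1 * f2 * f3 + f4

def smr_class(smr):
    """Indicative stability classes commonly associated with SMR bands."""
    if smr > 80:
        return "completely stable"
    if smr > 60:
        return "stable"
    if smr > 40:
        return "partially stable"
    if smr > 20:
        return "unstable"
    return "completely unstable"

# Hypothetical slope: RMR_basic = 60, planar-failure adjustments
# F1 = 0.7, F2 = 1.0, F3 = -25, and F4 = 8 (careful blasting).
smr = slope_mass_rating(60, 0.7, 1.0, -25, 8)
```

The SMR band is what links each slope to a recommended protection measure, which is the relation the abstract uses for its final suggestions.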
Procedia PDF Downloads 184
725 Vibration Absorption Strategy for Multi-Frequency Excitation
Authors: Der Chyan Lin
Abstract:
Since its early introduction by Ormondroyd and Den Hartog, the vibration absorber (VA) has become one of the most commonly used vibration mitigation strategies. The strategy is most effective for a primary plant subjected to a single-frequency excitation. For continuous systems, notable advances have been made in multi-frequency vibration absorption. However, the efficacy of the VA strategy for systems under multi-frequency excitation is not well understood. For example, for an N degrees-of-freedom (DOF) primary-absorber system, there are N 'peak' frequencies of large-amplitude vibration for every new excitation frequency; in general, the usable range for vibration absorption can be greatly reduced as a result. Frequency-modulated harmonic excitation is a common multi-frequency example: f(t) = cos(ϖ(t)t), where ϖ(t) = ω(1 + α sin(δt)). It is known that f(t) has a series expansion given by the Bessel function of the first kind, which implies an infinity of forcing frequencies in the frequency-modulated harmonic excitation. For an SDOF system of natural frequency ωₙ subjected to f(t), it can be shown that amplitude peaks emerge at ω₍ₚ,ₖ₎ = (ωₙ ± 2kδ)/(α ∓ 1), k ∈ Z; i.e., there is an infinity of resonant frequencies ω₍ₚ,ₖ₎, k ∈ Z, making the VA strategy ineffective. In this work, we propose an absorber frequency placement strategy for SDOF vibration systems subjected to frequency-modulated excitation. An SDOF linear mass-spring system coupled to lateral absorber systems is used to demonstrate the ideas. Although the mechanical components are linear, the governing equations for the coupled system are nonlinear. Using N identical absorbers, for N ≫ 1, we show that (a) a cluster of N+1 natural frequencies forms around every natural absorber frequency, and (b) the absorber frequencies can be moved away from the plant's resonance frequency (ω₀) as N increases. Moreover, the bandwidth of the VA performance increases with N.
The derivations of the clustering and bandwidth-widening effects will be given, and the superiority of the proposed strategy will be demonstrated via numerical experiments.
Keywords: Bessel function, bandwidth, frequency modulated excitation, vibration absorber
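The claim that frequency-modulated excitation carries many forcing frequencies can be checked numerically. The sketch below (all parameter values are ours) uses the standard constant-index FM form cos(ω₀t + β sin(δt)), whose spectrum is exactly a comb of Bessel-weighted sidebands J_k(β) at ω₀ ± kδ, as a simplified stand-in for the abstract's time-varying form:

```python
import numpy as np

# Constant-index FM signal: carrier 10 Hz, modulation 0.5 Hz, index beta = 5.
# Its Bessel expansion predicts significant sidebands at 10 +/- k*0.5 Hz.
f0, mod_f, beta = 10.0, 0.5, 5.0
dt = 1e-3
t = np.arange(0.0, 200.0, dt)            # integer number of periods -> sharp lines
f = np.cos(2 * np.pi * f0 * t + beta * np.sin(2 * np.pi * mod_f * t))

spectrum = np.abs(np.fft.rfft(f))
freqs = np.fft.rfftfreq(t.size, d=dt)
# Frequencies carrying at least 5% of the strongest spectral line:
peaks = freqs[spectrum > 0.05 * spectrum.max()]
```

The spectrum is a comb of lines spaced by the modulation frequency and clustered around the carrier, which is why a single fixed-frequency absorber is ineffective against such excitation.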
Procedia PDF Downloads 157
724 Vibration Analysis of Stepped Nanoarches with Defects
Authors: Jaan Lellep, Shahid Mubasshar
Abstract:
A numerical solution is developed for simply supported nanoarches based on the non-local theory of elasticity. The nanoarch under consideration has a step-wise variable cross-section and is weakened by crack-like defects. It is assumed that the cracks are stationary and that the mechanical behaviour of the nanoarch can be modeled by Eringen's non-local theory of elasticity. Physical and thermal properties are sensitive to dimensional changes at the nano level, and the classical theory of elasticity is unable to describe such changes in material properties, because it was developed without taking the molecular structure of matter into account. Therefore, the non-local theory of elasticity is applied to study the vibration of nanostructures, and it has been accepted by many researchers. In the non-local theory of elasticity, the stress state of the body at a given point depends on the stress state at every point of the structure, whereas in the classical theory it depends only on the given point. The system of main equations consists of equilibrium equations, geometrical relations, and constitutive equations with boundary and intermediate conditions. The system is solved using the method of separation of variables: the governing differential equations are converted into a system of algebraic equations whose nontrivial solution exists only if the determinant of the coefficient matrix vanishes. The influence of cracks and steps on the natural vibration of the nanoarches is prescribed with the aid of an additional local compliance at the weakened cross-section. An algorithm to determine the eigenfrequencies of the nanoarches is developed with the help of computer software. The effects of various physical and geometrical parameters are recorded and presented graphically.
Keywords: crack, nanoarches, natural frequency, step
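The solvability condition above (a nontrivial solution exists only where the determinant of the coefficient matrix vanishes) can be illustrated on a toy 2-DOF spring-mass system (our stand-in, not the nanoarch equations):

```python
def char_det(w, k1=2.0, k2=1.0, m1=1.0, m2=1.0):
    """Determinant of the coefficient matrix for a toy 2-DOF spring-mass
    chain: nontrivial vibration modes exist only where this determinant
    vanishes. Stiffness and mass values are illustrative."""
    a11 = k1 + k2 - m1 * w ** 2
    a12 = -k2
    a22 = k2 - m2 * w ** 2
    return a11 * a22 - a12 * a12

def eigenfrequencies(w_max=3.0, dw=1e-4):
    """Scan the determinant for sign changes, then refine each root by
    bisection - the numerical analogue of requiring the determinant of
    the coefficient matrix to vanish."""
    roots = []
    w, prev = dw, char_det(dw)
    while w < w_max:
        cur = char_det(w + dw)
        if prev * cur < 0:
            lo, hi = w, w + dw
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if char_det(lo) * char_det(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
        w, prev = w + dw, cur
    return roots

roots = eigenfrequencies()
```

For this toy system the two recovered frequencies match the analytic eigenvalues ω² = 2 ∓ √2.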
Procedia PDF Downloads 128
723 Numerical Investigation of the Transverse Instability in Radiation Pressure Acceleration
Authors: F. Q. Shao, W. Q. Wang, Y. Yin, T. P. Yu, D. B. Zou, J. M. Ouyang
Abstract:
The Radiation Pressure Acceleration (RPA) mechanism is very promising for laser-driven ion acceleration because of its high laser-ion energy conversion efficiency. Although some experiments have shown the characteristics of RPA, the ion energy obtained in experiments is only several MeV/u, much lower than theoretical predictions. One possible limiting factor is the transverse instability incited in the RPA process. The transverse instability is basically considered to be the Rayleigh-Taylor (RT) instability, a kind of interfacial instability that occurs when a light fluid pushes against a heavy fluid. Multi-dimensional particle-in-cell (PIC) simulations show that the onset of transverse instability destroys the acceleration process and broadens the energy spectrum of fast ions during RPA-dominant ion acceleration. Evidence of RT instability driven by radiation pressure has been observed in a laser-foil interaction experiment in a typical RPA regime, with the dominant scale of the RT instability close to the laser wavelength. Here, the development of transverse instability in RPA-dominant laser-foil interaction is examined numerically by two-dimensional particle-in-cell simulations. When a laser interacts with a foil with a modulated surface, the instability is quickly incited and develops. The linear growth and saturation of the transverse instability are observed, and the growth rate is diagnosed numerically. In order to optimize the interaction parameters, a method based on information entropy is put forward to describe the chaotic degree of the transverse instability. With moderate modulation, the transverse instability shows a low chaotic degree and a quasi-monoenergetic proton beam is produced.
Keywords: information entropy, radiation pressure acceleration, Rayleigh-Taylor instability, transverse instability
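One plausible reading of the information-entropy measure (the abstract does not give its exact definition, so the distribution over which entropy is taken is our assumption) is the Shannon entropy of the energy distribution over transverse modes:

```python
import math

def shannon_entropy(weights):
    """Shannon entropy of a normalised distribution. One possible
    reading (ours) of the 'information entropy' measure of how chaotic
    the transverse instability is: energy concentrated in one mode
    gives low entropy, energy spread over many modes gives high entropy."""
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * math.log(p) for p in probs)

# One dominant transverse mode (low chaotic degree) versus energy spread
# almost evenly over several modes (high chaotic degree):
ordered = shannon_entropy([100, 1, 1, 1])
chaotic = shannon_entropy([26, 25, 25, 24])
```

A low value of this measure corresponds to the moderate-modulation regime in which the abstract reports a quasi-monoenergetic proton beam.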
Procedia PDF Downloads 345
722 Communicative Competence Is About Speaking a Lot: Teacher's Voice on the Art of Developing Communicative Competence
Authors: Bernice Badal
Abstract:
The South African English curriculum emphasizes the adoption of the Communicative Approach (CA), using Communicative Language Teaching (CLT) methodologies, to develop the communicative competence of English as a second language (ESL) learners in contexts such as township schools in South Africa. However, studies indicate that the adoption of the approach largely remains rhetoric. Poor student performance, which continues from the secondary to the tertiary phase, is widely attributed to poor English language proficiency in South Africa. Consequently, this qualitative study, using a mix of classroom observations and interviews, investigated teachers' knowledge of communicative competence and the methods and strategies ESL teachers used to develop it in their learners, with a focus on how that knowledge influenced their classroom practices. Learners' success in developing communicative competence in contexts such as township schools is inseparable from materials, tasks, teacher knowledge, and how teachers implement the approach in the classroom. Accordingly, teacher knowledge of the theory and practical implications of the CLT approach is imperative for the negotiation of meaning and the appropriate use of language in context in resource-impoverished areas like the township. The findings revealed that teachers were not familiar with the notion of communicative competence, the communication process, or the underpinnings of CLT. Teachers' narratives indicated an awareness that there should be interaction and communication in the classroom, but a lack of theoretical understanding of the types of communication necessary scuttled their initiatives.
This conceptual deficiency influences teachers' practices, as they engage in classroom activities in a superficial manner or focus on the stipulated learner activities prescribed by the CAPS document. The study therefore concluded that partial or limited conceptual understanding, combined with 'teacher-proof' stipulations for classroom practice, does not foster teacher efficacy or mastery of prescribed approaches. More effort should therefore be made by the Department of Basic Education to strengthen the existing professional development workshops so that teachers improve their understanding and application of CLT for developing communicative competence in their learners. The findings contribute to the fields of teacher knowledge acquisition, teacher beliefs and practices, and professional development in second language teaching and learning, with the recommendation that frameworks for developing communicative competence with wider applicability in resource-poor environments be developed to support teacher understanding and classroom application.
Keywords: communicative competence, CLT, conceptual understanding of reforms, professional development
Procedia PDF Downloads 58
721 Advantages of Computer Navigation in Knee Arthroplasty
Authors: Mohammad Ali Al Qatawneh, Bespalchuk Pavel Ivanovich
Abstract:
Computer navigation was introduced in total knee arthroplasty to improve the accuracy of the procedure: it improves the accuracy of bone resection in the coronal and sagittal planes, normalizes the rotational alignment of the femoral component, and allows the soft-tissue balance in the coronal plane to be fully assessed and corrected. This work examines the advantages of computer navigation in total knee arthroplasty in 62 patients (11 men and 51 women) with gonarthrosis, aged 51 to 83 years, operated on using a computer navigation system and followed up for 3 years after surgery. During the examination, the deformity variant was determined and radiometric parameters of the knee joints were measured using the Knee Society Score (KSS), Functional Knee Society Score (FKSS), and Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) scales. Functional stress tests were also performed to assess the stability of the knee joint in the frontal plane, together with functional indicators of the range of motion. After surgery, improvement was observed on all scales: the WOMAC score decreased 5.90-fold to a median of 11 points (p < 0.001), the KSS increased 3.91-fold to 86 points (p < 0.001), and the FKSS increased 2.08-fold to 94 points (p < 0.001). After TKA, axis deviation of the lower limbs of more than 3 degrees was observed in 4 patients (6.5%) and frontal instability of the knee joint in 2 cases (3.2%); the incidence of sagittal instability of the knee joint after the operation was 9.6%. The range of motion increased 1.25-fold, averaging 125 degrees (p < 0.001).
Computer navigation increases the accuracy of the spatial orientation of the endoprosthesis components in all planes, reduces the variability of the limb axis to within ±3°, and helps achieve the best results of surgical intervention; in this series it yielded excellent and good outcomes in 100% of cases according to the WOMAC scale. In diaphyseal deformities of the femur and/or tibia, as well as in obstruction of their medullary canals, computer navigation is the method of choice. Its use prevents flexion contracture and hyperextension of the knee joint during the distal femoral cut. The navigation system achieves high-precision implantation of the endoprosthesis components and an adequate ligament balance, which contributes to joint stability, reduces pain, and allows a good functional result of treatment to be achieved.
Keywords: knee joint, arthroplasty, computer navigation, advantages
Procedia PDF Downloads 90
720 Evaluation of the Effect of Turbulence Caused by the Oscillation Grid on Oil Spill in Water Column
Authors: Mohammad Ghiasvand, Babak Khorsandi, Morteza Kolahdoozan
Abstract:
Under the influence of waves, oil in the sea is subject to vertical scattering in the water column. How oil disperses in the water column is among the least understood of the processes affecting oil in the marine environment, which highlights the need for research in this field. This study therefore investigates the distribution of oil in the water column in a turbulent environment with zero mean velocity. The lack of laboratory results on the distribution of petroleum pollutants in deep water, needed both to understand the physics of the phenomenon and to calibrate numerical models, motivated the development of laboratory models in this research. In line with the aim of the study, which is to investigate the distribution of oil in the homogeneous and isotropic turbulence generated by an oscillating grid, crude oil was poured onto the water surface once the desired conditions were reached, and its turbulence-driven distribution into deep water was investigated. All experimental processes in this study were implemented and used for the first time in Iran, and the study of oil diffusion in the water column was a key aspect of pollutant diffusion in the oscillating-grid environment. Finally, the required oscillation velocities were measured at depths of 10, 15, 20, and 25 cm below the water surface and used in the analysis of oil diffusion in terms of the turbulence parameters. The results showed that, for the present system, oil diffusion at the four depths (from top to bottom) was 26.18, 31.57, 37.5, and 50% greater with the grid oscillating at 0.8 Hz than in the static mode.
Also, 2.5 minutes after the oil spill at a frequency of 0.8 Hz, oil distribution at the mentioned depths had increased by 49, 61.5, 85, and 146.1%, respectively, compared to the base (static) state.
Keywords: homogeneous and isotropic turbulence, oil distribution, oscillating grid, oil spill
Procedia PDF Downloads 75
719 Analysis of Vortex-Induced Vibration Characteristics for a Three-Dimensional Flexible Tube
Authors: Zhipeng Feng, Huanhuan Qi, Pingchuan Shen, Fenggang Zang, Yixiong Zhang
Abstract:
Numerical simulations of the vortex-induced vibration of a three-dimensional flexible tube under uniform turbulent flow are performed at a Reynolds number of 1.35×10⁴. To capture the vortex-induced vibration, the three-dimensional unsteady, viscous, incompressible Navier-Stokes equations with an LES turbulence model are solved with a finite volume approach, the tube is discretized according to finite element theory, and its dynamic equilibrium equations are solved by the Newmark method. The fluid-tube interaction is realized using a diffusion-based smooth dynamic mesh method. For the vortex-induced vibration system, the variation trends of the lift coefficient, drag coefficient, displacement, vortex shedding frequency, and phase difference angle of the tube are analyzed at different frequency ratios. The nonlinear phenomena of lock-in and phase-switching are captured successfully. Meanwhile, the limit cycle and bifurcation of the lift coefficient and displacement are analyzed using trajectories, phase portraits, and Poincaré sections. The results reveal that when the drag coefficient reaches its minimum value, the transverse amplitude reaches its maximum and the 'lock-in' begins simultaneously. Within the lock-in range, the amplitude decreases gradually with increasing frequency ratio. When the lift coefficient reaches its minimum value, the phase difference undergoes a sudden change from the 'out-of-phase' to the 'in-phase' mode.
Keywords: vortex induced vibration, limit cycle, LES, CFD, FEM
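The structural side of the coupled scheme, the Newmark method for the dynamic equilibrium equations, can be sketched for a single degree of freedom (a generic average-acceleration implementation, not the authors' solver; the demo parameters are illustrative):

```python
import math

def newmark_sdof(m, c, k, force, x0, v0, dt, n, beta=0.25, gamma=0.5):
    """Newmark time integration (average-acceleration variant) for a
    single-DOF system m*x'' + c*x' + k*x = F(t), the same family of
    scheme the abstract applies to the tube's equilibrium equations."""
    x, v = x0, v0
    a = (force(0.0) - c * v - k * x) / m            # initial acceleration
    xs = [x]
    keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    for i in range(1, n + 1):
        t = i * dt
        # effective load from the standard Newmark predictor terms
        feff = (force(t)
                + m * (x / (beta * dt ** 2) + v / (beta * dt)
                       + (1.0 / (2.0 * beta) - 1.0) * a)
                + c * (gamma * x / (beta * dt) + (gamma / beta - 1.0) * v
                       + dt * (gamma / (2.0 * beta) - 1.0) * a))
        x_new = feff / keff
        a_new = ((x_new - x) / (beta * dt ** 2) - v / (beta * dt)
                 - (1.0 / (2.0 * beta) - 1.0) * a)
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        x, a = x_new, a_new
        xs.append(x)
    return xs

# Free vibration of an undamped oscillator with a 1 s natural period:
# after exactly one period the displacement should return close to 1.
xs = newmark_sdof(1.0, 0.0, 4.0 * math.pi ** 2, lambda t: 0.0, 1.0, 0.0, 1e-3, 1000)
```

The average-acceleration variant is unconditionally stable for linear systems, which is why it is a common choice in fluid-structure coupling loops like the one described.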
Procedia PDF Downloads 281
718 Supervised Machine Learning Approach for Studying the Effect of Different Joint Sets on Stability of Mine Pit Slopes Under the Presence of Different External Factors
Authors: Sudhir Kumar Singh, Debashish Chakravarty
Abstract:
Slope stability analysis is an important aspect of geotechnical engineering. It is also important from a safety and economic point of view, as any slope failure leads to the loss of valuable lives and damage to property worth millions. This paper aims at mitigating the risk of slope failure by studying the effect of different joint sets on the stability of mine pit slopes under the influence of various external factors, namely the degree of saturation, rainfall intensity, and seismic coefficients. A supervised machine learning approach is utilized to make accurate and reliable predictions regarding the stability of slopes based on the value of the Factor of Safety. Numerous cases were studied by analyzing slope stability with the popular finite element method, and the data thus obtained were used as training data for the supervised machine learning models. The input data were trained on different supervised machine learning models, namely Random Forest, Decision Tree, Support Vector Machine, and XGBoost. Distinct test data not present in the training data were used to measure the performance and accuracy of the different models. Although all models performed well on the test dataset, Random Forest stands out due to its high accuracy of greater than 95%, providing a valuable tool that is neither computationally expensive nor time-consuming and is in good accordance with the numerical analysis results.
Keywords: finite element method, geotechnical engineering, machine learning, slope stability
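The pipeline described above can be sketched as follows (a hedged stand-in: the features, the synthetic Factor-of-Safety formula replacing the FEM runs, and all parameter values are ours; only the FoS-based labelling and the Random Forest choice follow the abstract):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features standing in for the FEM cases described in the
# abstract: slope angle, cohesion, friction angle, degree of saturation,
# and seismic coefficient.
X = np.column_stack([
    rng.uniform(20, 70, n),     # slope angle (deg)
    rng.uniform(5, 50, n),      # cohesion (kPa)
    rng.uniform(20, 45, n),     # friction angle (deg)
    rng.uniform(0, 1, n),       # degree of saturation
    rng.uniform(0, 0.3, n),     # seismic coefficient
])
# Toy stand-in for a FEM-computed Factor of Safety (not the authors' model):
# strength terms in the numerator, destabilising terms in the denominator.
fos = (0.02 * X[:, 1] + 0.03 * X[:, 2]) / (0.02 * X[:, 0] + 0.5 * X[:, 3] + 2.0 * X[:, 4])
y = (fos >= 1.0).astype(int)    # 1 = stable (Factor of Safety >= 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

clf.score reports held-out accuracy, the same metric the abstract uses to compare the four models on test data not seen during training.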
Procedia PDF Downloads 101
717 Developing Pedagogy for Argumentation and Teacher Agency: An Educational Design Study in the UK
Authors: Zeynep Guler
Abstract:
Argumentation and the production of scientific arguments are essential components for helping students become scientifically literate by engaging them in constructing and critiquing ideas. Incorporating argumentation into science classrooms is challenging and can be a long-term process for both students and teachers. Students have difficulty engaging in tasks that require them to craft arguments, evaluate them to seek weaknesses, and revise them. Teachers also struggle to facilitate argumentation when they have underdeveloped science practices, underdeveloped pedagogical knowledge for teaching science as argumentation, or underdeveloped teaching practice with argumentation (or a combination of all three). Thus, there is a need to support teachers in developing pedagogy for science teaching as argumentation, in planning and implementing teaching practice that facilitates argumentation, and in becoming more agentic in this regard. Looking specifically at the experience of agency within education, it is arguable that agency is necessary for teachers' renegotiation of professional purposes and practices in the light of changing educational practices. This study investigated how science teachers develop pedagogy for argumentation, both individually and with their colleagues, and how they become more agentic (or not) through active engagement with their contexts-for-action, an ecological understanding of agency, in order to positively influence or change their practice and their students' engagement with argumentation over two academic years.
Through an educational design study conducted with three secondary science teachers (Key Stage 3, Year 7 students aged 11-12) in the UK, this study examined whether similar or different patterns of developing pedagogy for argumentation and of becoming more agentic emerge as teachers plan and implement a cycle of activities while teaching science with argumentation. Data from video and audio recordings of classroom practice and from open-ended interviews with the science teachers were analysed using content analysis. The findings indicated that all the science teachers perceived strong agency in their opportunities to develop and apply pedagogical practices within the classroom. The teachers pro-actively shaped their practices and classroom contexts in ways that went over and above amendments to their pedagogy. As a result of collaboration with their colleagues and the researcher, they demonstrated progress in developing pedagogy for argumentation and in becoming more agentic in their teaching, though some appeared more agentic than others. The role of collaboration between colleagues was seen as crucial for the teachers' practice in the schools: close collaboration and support from other teachers in planning and implementing new educational innovations were seen as crucial for developing pedagogy and becoming more agentic in practice. Teachers needed to understand not only the importance of scientific argumentation but also how it can be planned and integrated into classroom practice. They also perceived constraints arising from their lack of competence and knowledge in posing appropriate questions to help students engage in argumentation and in supporting students' construction of oral and written arguments.
Keywords: argumentation, teacher professional development, teacher agency, students' construction of argument
Procedia PDF Downloads 133
716 Characteristics of Double-Stator Inner-Rotor Axial Flux Permanent Magnet Machine with Rotor Eccentricity
Authors: Dawoon Choi, Jian Li, Yunhyun Cho
Abstract:
Axial Flux Permanent Magnet (AFPM) machines have been widely used in various applications due to their important merits, such as compact structure, high efficiency, and high torque density. This paper presents one of the most important considerations in the design process of an AFPM device, which is a current issue. When designing an AFPM machine, predicting the electromagnetic forces between the permanent magnets and the stator is important, because the magnitude of the electromagnetic force affects many characteristics such as machine size, noise, vibration, and quality of output power. Theoretically, this force is canceled by the equilibrium of forces when the rotor is in the middle of the gap, but deviation is inevitable due to manufacturing tolerances in an actual machine. This is more serious in large-power applications such as large-scale wind generators, because of the huge attractive force between the rotor and stator disks. This paper presents the characteristics of double-stator inner-rotor AFPM machines under rotor eccentricity. The unbalanced air-gap and inclined air-gap conditions caused by rotor offset and tilt in a double-stator single inner-rotor AFPM machine are each studied in electromagnetic and mechanical aspects. The output voltage and cogging torque under these abnormal air-gap conditions are first calculated using combined analytical and numerical methods, followed by a structural analysis to study the effect on mechanical stress, deformation, and bending forces on bearings. The results and conclusions given in this paper are instructive for the successful development of AFPM machines.
Keywords: axial flux permanent magnet machine, inclined air gap, unbalanced air gap, rotor eccentricity
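The scale of the attractive force mentioned above can be illustrated with the standard Maxwell-stress estimate F = B²A/(2μ0); with a perfectly centered rotor, the pulls of the two stators cancel, and eccentricity leaves a net unbalanced force. The flux density and radii below are assumed for illustration only and are not taken from the paper.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def axial_attraction(b_gap, r_out, r_in):
    """Maxwell-stress estimate of the axial pull of one stator on the
    rotor disk: F = B^2 * A / (2 * mu0), where A is the annular active
    surface between inner and outer radii."""
    area = math.pi * (r_out**2 - r_in**2)
    return b_gap**2 * area / (2 * MU0)

# Illustrative (assumed) values: 0.8 T air-gap flux density,
# 0.5 m outer and 0.3 m inner active radius.
f_side = axial_attraction(0.8, 0.5, 0.3)  # roughly 1.3e5 N per stator side
```

Even at these modest values the single-sided pull is on the order of 100 kN, which is why a small rotor offset in a large machine produces the bearing loads the abstract analyses.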
Procedia PDF Downloads 219
715 The Effects of Qigong Exercise Intervention on the Cognitive Function in Aging Adults
Authors: D. Y. Fong, C. Y. Kuo, Y. T. Chiang, W. C. Lin
Abstract:
Objectives: Qigong is an ancient Chinese practice in pursuit of a healthier body and a more peaceful mindset. It emphasizes the restoration of vital energy (Qi) in body, mind, and spirit. The practice combines gentle movements with mild breathing, which helps practitioners reach a condition of tranquility. On account of the features of Qigong, we first use a cross-sectional methodology to compare the differences among varied levels of Qigong practitioners on cognitive function with event-related potentials (ERP) and electroencephalography (EEG). Second, we use a longitudinal methodology to explore the effects on the Qigong trainees from pretest to posttest on ERP and EEG. The current study adopts the Attentional Network Test (ANT) to examine the participants' cognitive function; aging-related research has demonstrated a declining trend in cognition in older adults, which exercise might ameliorate. Qigong exercise integrates physical posture (muscle strength), breathing technique (aerobic ability), and focused intention (attention), so we hypothesize that it might improve cognitive function in aging adults. Method: Sixty participants were involved in this study, including 20 young adults (21.65±2.41 y) with normal physical activity (YA), 20 Qigong experts (60.69±12.42 y) with over 7 years of Qigong practice experience (QE), and 20 normal and healthy adults (52.90±12.37 y) with no Qigong practice experience as the experimental group (EG). The EG participants took Qigong classes 2 times a week, 2 hours per time, for 24 weeks with the purpose of examining the effect of Qigong intervention on cognitive function. ANT tasks (alerting, orienting, and executive control networks) were adopted to evaluate participants' cognitive function via the ERP P300 component and P300 amplitude topography.
Results: Behavioral data: 1. The reaction time (RT) of YA was faster than that of the other two groups, and EG was faster than QE in the cue and flanker conditions of the ANT task. 2. The RT at posttest was faster than at pretest in EG in the cue and flanker conditions. 3. There was no difference among the three groups on the orienting, alerting, and executive control networks. ERP data: 1. P300 amplitude in QE was larger than in EG at the Fz electrode in the orienting, alerting, and executive control networks. 2. P300 amplitude in EG was larger at pretest than at posttest on the orienting network. 3. P300 latency revealed no difference among the three groups in the three networks. Conclusion: Taken together, these findings provide neuro-electrical evidence that older adults involved in Qigong practice may develop a more comprehensive compensatory mechanism that also benefits behavioral performance.
Keywords: Qigong, cognitive function, aging, event-related potential (ERP)
Procedia PDF Downloads 393
714 Causes for the Precession of the Perihelion in the Planetary Orbits
Authors: Kwan U. Kim, Jin Sim, Ryong Jin Jang, Sung Duk Kim
Abstract:
It was Leverrier who first discovered the precession of the perihelion in the planetary orbits, while it was Einstein who first explained the astronomical phenomenon. The amount of the precession of the perihelion in Einstein's theory of gravitation has been explained by means of the inverse fourth power force (inverse third power potential) introduced into the theory of gravitation through the Schwarzschild metric. However, this methodology has a serious shortcoming: it is impossible to explain the cause of the precession of the perihelion in the planetary orbits. According to our study, without identifying the cause of the precession of the perihelion, six methods can explain the amount of the precession discovered by Leverrier. Therefore, the problem of what causes the perihelion to precess in the planetary orbits must be solved for physics, because it is a profound scientific and technological problem for a basic experiment in the construction of a relativistic theory of gravitation. The scientific solution to the problem showed that Einstein's explanation of the planetary orbits is an artifact produced by the numerical expressions obtained from a fictitious gravitation introduced into the theory of gravitation and from a wrong definition of proper time. The problem of the precession of the perihelion seems to have been solved already by means of the general theory of relativity, but, in essence, the cause of the astronomical phenomenon has not yet been successfully explained for astronomy. The right solution to the problem comes from a generalized theory of gravitation. Therefore, in this paper, it has been shown, by means of the Schwarzschild field and the physical quantities of the relativistic Lagrangian reflected in it, that fictitious gravitation is not the main factor that causes the perihelion to precess in the planetary orbits.
In addition, it has been shown that the main factor that causes the perihelion to precess in the planetary orbits is the inverse third power force really existing in the relativistic region of the Solar system.
Keywords: inverse third power force, precession of the perihelion, fictitious gravitation, planetary orbits
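For reference, the amount of precession at issue (the residual Leverrier found for Mercury) follows from the standard expression Δφ = 6πGM/(c²a(1−e²)) per orbit; the sketch below evaluates it for Mercury and recovers the familiar value of about 43 arcseconds per century. This illustrates the magnitude being explained, not the paper's own derivation.

```python
import math

GM_SUN = 1.32712440018e20  # heliocentric gravitational constant, m^3/s^2
C = 299792458.0            # speed of light, m/s
RAD_TO_ARCSEC = 180 * 3600 / math.pi

def gr_precession_per_orbit(a, e):
    """Relativistic perihelion advance per orbit, in radians:
    dphi = 6*pi*G*M / (c^2 * a * (1 - e^2))."""
    return 6 * math.pi * GM_SUN / (C**2 * a * (1 - e**2))

# Mercury: semi-major axis (m), eccentricity, orbital period (days)
a, e, period = 5.7909e10, 0.20563, 87.969
arcsec_per_century = (gr_precession_per_orbit(a, e)
                      * (100 * 365.25 / period) * RAD_TO_ARCSEC)
# arcsec_per_century is approximately 43
```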
Procedia PDF Downloads 12
713 Building Biodiversity Conservation Plans Robust to Human Land Use Uncertainty
Authors: Yingxiao Ye, Christopher Doehring, Angelos Georghiou, Hugh Robinson, Phebe Vayanos
Abstract:
Human development is a threat to biodiversity, and conservation organizations (COs) are purchasing land to protect areas for biodiversity preservation. However, COs have limited budgets and thus face hard prioritization decisions that are confounded by uncertainty in future human land use. This research proposes a data-driven sequential planning model to help COs choose land parcels that minimize the uncertain human impact on biodiversity. The proposed model is robust to uncertain development, and the sequential decision-making process is adaptive, allowing land purchase decisions to adapt to human land use as it unfolds. A cellular automata model is leveraged to simulate land use development based on climate data, land characteristics, and the development threat index from the NASA Socioeconomic Data and Applications Center. This simulation is used to model uncertainty in the problem. This research leverages state-of-the-art techniques in the robust optimization literature to propose a computationally tractable reformulation of the model, which can be solved routinely by off-the-shelf solvers like Gurobi or CPLEX. Numerical results based on real data on the jaguar in Central and South America show that the proposed method reduces conservation loss by 19.46% on average compared to standard approaches such as MARXAN used in practice for biodiversity conservation. Our method may better help guide the decision process in land acquisition and thereby allow conservation organizations to maximize the impact of limited resources.
Keywords: data-driven robust optimization, biodiversity conservation, uncertainty simulation, adaptive sequential planning
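The min-max flavor of the planning problem can be sketched in miniature: choose a budget-limited set of parcels so that the worst-case biodiversity loss across development scenarios is smallest. The parcels, loss figures, and brute-force search below are invented for illustration; the paper's actual model is a robust optimization reformulation solved with Gurobi/CPLEX and adapts decisions sequentially.

```python
from itertools import combinations

# Toy data (assumed): expected biodiversity loss of each parcel if left
# unprotected, under three possible land-use development scenarios.
loss = {
    "A": [5, 9, 4],
    "B": [7, 2, 6],
    "C": [3, 8, 8],
    "D": [6, 5, 5],
}

def worst_case_loss(protected):
    """Worst-case (over scenarios) total loss of all unprotected parcels."""
    n_scen = len(next(iter(loss.values())))
    return max(sum(loss[p][s] for p in loss if p not in protected)
               for s in range(n_scen))

def robust_plan(budget):
    """Pick `budget` parcels minimizing the worst-case loss (min-max)."""
    return min(combinations(loss, budget), key=worst_case_loss)

best = robust_plan(2)  # protecting C and D gives worst-case loss 12
```

A non-robust plan optimized for one scenario can do much worse if another scenario materializes, which is the motivation for the min-max criterion.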
Procedia PDF Downloads 210
712 Blood Flow Simulations to Understand the Role of the Distal Vascular Branches of Carotid Artery in the Stroke Prediction
Authors: Muhsin Kizhisseri, Jorg Schluter, Saleh Gharie
Abstract:
Atherosclerosis is the main cause of stroke, which is one of the deadliest diseases in the world. The carotid artery in the brain is the prominent location for atherosclerotic progression, which hinders blood flow into the brain. Including computational fluid dynamics (CFD) in the diagnosis cycle to understand the hemodynamics of the patient-specific carotid artery can give insights into stroke prediction. Realistic outlet boundary conditions are an essential part of the numerical simulations and one of the major factors determining the accuracy of the CFD results. Windkessel-model-based outlet boundary conditions can represent more realistically the characteristics of the distal vascular branches of the carotid artery, such as the resistance to blood flow and the compliance of the distal arterial walls. This study aims to find the most influential distal branches of the carotid artery by using the Windkessel model parameters in the outlet boundary conditions. The parametric study of the Windkessel model parameters can include the geometrical features of the distal branches, such as radius and length. Incorporating variations of the geometrical features of the major distal branches, such as the middle cerebral artery, anterior cerebral artery, and ophthalmic artery, through the Windkessel model can aid in identifying the most influential distal branch of the carotid artery. The results from this study can help physicians and stroke neurologists make a more detailed and accurate judgment of the patient's condition.
Keywords: stroke, carotid artery, computational fluid dynamics, patient-specific, Windkessel model, distal vascular branches
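The role of a Windkessel outlet condition can be sketched with its simplest (two-element) variant, in which a distal branch is lumped into a resistance R and a compliance C; the parameter values below are assumed for illustration, and a study like this one would typically use the three-element (RCR) form.

```python
def windkessel_pressure(q_series, r_d, c_d, p0=0.0, dt=1e-3):
    """Explicit-Euler integration of the two-element Windkessel ODE
    C * dP/dt = Q(t) - P/R, returning the outlet pressure trace."""
    p, trace = p0, []
    for q in q_series:
        p += dt * (q - p / r_d) / c_d
        trace.append(p)
    return trace

# Assumed distal resistance (Pa*s/m^3) and compliance (m^3/Pa).
R_D, C_D = 1.2e8, 1.0e-8
# Under a constant inflow of 8e-6 m^3/s, pressure relaxes toward
# R*Q = 960 Pa with time constant R*C = 1.2 s.
trace = windkessel_pressure([8.0e-6] * 20000, R_D, C_D)
```

The steady value R·Q is what makes the resistance parameter directly interpretable as the distal bed's opposition to flow, which is why varying it per branch identifies the most influential one.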
Procedia PDF Downloads 215
711 A Two Server Poisson Queue Operating under FCFS Discipline with an 'm' Policy
Authors: R. Sivasamy, G. Paulraj, S. Kalaimani, N. Thillaigovindan
Abstract:
For profitable businesses, queues are double-edged swords, and the pain of long wait times often frustrates customers. This paper suggests a technical way of reducing this pain through a Poisson M/M1,M2/2 queueing system operated by two heterogeneous servers, with the objective of minimising the mean sojourn time of customers served under the queue discipline 'First Come First Served with an m policy' (FCFS-m policy). Arrivals to the system form a Poisson process of rate λ and are served by two exponential servers. The service times of successive customers at server j are independent and identically distributed (i.i.d.) random variables, each exponentially distributed with rate parameter μj (j=1, 2). The primary condition for implementing the FCFS-m policy on these service rates is that either (m+1)μ2 > μ1 > mμ2 or (m+1)μ1 > μ2 > mμ1 must be satisfied. Further, waiting customers prefer server 1 whenever it becomes available for service, and server 2 should be brought into service if and only if the queue length exceeds the threshold value m. Steady-state results on queue length and waiting time distributions have been obtained. A simple way of tracing the optimal service rate μ*2 of server 2 is illustrated in a specific numerical exercise by equalizing the average queue length cost with the service cost. Assuming that server 1 dynamically adjusts the service rates, serving at μ1 (with μ2=0) while the system size is strictly less than T=(m+2), and at μ1+μ2 (with μ2>0) if the system size is greater than or equal to T, the corresponding steady-state results of M/M1+M2/1 queues have been deduced from those of M/M1,M2/2 queues.
To conclude, this investigation has a viable application: the results of M/M1+M2/1 queues have been used in processing waiting messages in a single computer node and in measuring the power consumption of the node.
Keywords: two heterogeneous servers, M/M1,M2/2 queue, service cost and queue length cost, M/M1+M2/1 queue
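The admissibility condition on the two service rates pins down a unique integer threshold m (when one exists). A small helper makes this concrete; the rates used in the example are assumed for illustration.

```python
def admissible_m(mu1, mu2):
    """Return the threshold m satisfying the FCFS-m condition
    (m+1)*mu2 > mu1 > m*mu2 (or the symmetric condition with the roles
    of mu1 and mu2 swapped), or None if no integer m fits."""
    faster, slower = max(mu1, mu2), min(mu1, mu2)
    m = int(faster // slower)
    if faster > m * slower and (m + 1) * slower > faster:
        return m
    return None

# With (assumed) rates mu1 = 3.5 and mu2 = 1.0, the condition
# 4*mu2 > mu1 > 3*mu2 holds, so the slower server is switched on
# only when the queue length exceeds m = 3.
```

Note that when the faster rate is an exact integer multiple of the slower one, neither strict inequality can hold and no admissible m exists.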
Procedia PDF Downloads 362
710 Numerical Simulation of Footing on Reinforced Loose Sand
Authors: M. L. Burnwal, P. Raychowdhury
Abstract:
Earthquakes have adverse effects on buildings resting on soft soils. Mitigating the response of shallow foundations on soft soil with different methods reduces settlement and provides foundation stability. A few methods, such as rocking foundations (used in performance-based design), deep foundations, prefabricated drains, grouting, and vibro-compaction, are used to control the pore pressure and enhance the strength of loose soils. One problem with these methods is that the settlement is uncontrollable, leading to differential settlement of the footings and, further, to the collapse of buildings. The present study investigates the utility of geosynthetics as a potential improvement of the subsoil to reduce the earthquake-induced settlement of structures. A steel moment-resisting frame building resting on loose, liquefiable, dry soil, subjected to the Uttarkashi 1991 and Chamba 1995 earthquakes, is used for the soil-structure interaction (SSI) analysis. The continuum model can simultaneously simulate the structure, soil, interfaces, and geogrids in the OpenSees framework. Soil is modeled with the PressureDependentMultiYield (PDMY) material model with Quad elements that provide stress-strain at Gauss points and is calibrated to predict the behavior of Ganga sand. The model, analyzed with tied-degree-of-freedom contact, reveals that the system responses align with the shake table experimental results. An attempt is made to study the responses of the footing, structure, and geosynthetics with unreinforced and reinforced bases with varying parameters. The results show that geogrid reinforcement of the shallow foundation effectively reduces the settlement by 60%.
Keywords: settlement, shallow foundation, SSI, continuum FEM
Procedia PDF Downloads 194
709 Rethinking Gender Roles within the Family: Single Fathers and the Domestic Sphere
Authors: Mohamad Chour
Abstract:
Nowadays, a record number of households are headed by single fathers in most European societies. Our research aims to explore how French single fathers experience the domestic sphere, a traditionally feminized field, while accomplishing their role as fathers. We adopt gender role and parenting role construction theoretical perspectives. Indeed, the interior domestic sphere has traditionally been considered as related to the role of the mother. Moreover, according to Bourdieu's theory of masculine domination, men avoid caregiving and domestic practices that are economically and culturally undervalued. Hence, mothers are considered more likely to handle the expressive dimension of duties, whereas the father's role is represented as instrumental, functional, and independent. Long interviews were conducted with twenty French single fathers in order to investigate how the absence of the mother affects the practices of fatherhood. We combined the long interviews with projective techniques in order to better understand their conception of the family and their family values. Seeking qualitative diversity, our respondents are of various ages (between 30 and 60); they come from different regions in France, living in rural, semi-rural, and urban areas. Based on the analysis of 427 pages of data, we identify three main categories of single fathers depending on their strategies to assume and/or delegate the role of the mother. 1) Nurturing fathers completely assume the role of the absent mother as well as her functions. Their discourse is characterized by abnegation and sacrifice, reflecting a nurturing role. 2) Juggling fathers take charge of part of the household duties and delegate the other part to the market or to 'feminine resources' for lack of skills or time. 3) Resistant fathers are the very few respondents who refuse to assume any activities related to the domestic sphere, which they perceive as feminine.
For lack of competences, or even for ideological reasons, they tend to delegate all the tasks that were assumed by their ex-spouses. Overall, the majority of the fathers seem to experience the domestic sphere differently, and their domestic involvement has been underestimated and even misunderstood. Household duties such as cooking and housekeeping, in addition to the nurturing role, are experienced by many of the respondents as constitutive elements of their fatherhood. Our respondents do not seem to accomplish household duties in a merely functional way. The domestic sphere is managed by these fathers with a strong dimension of abnegation. Thus, our research contributes to illustrating the evolution of gender roles and shows how being simultaneously 'a father and a mother' seems to be an emerging social norm in a French and European cultural context.
Keywords: fathering, gender roles, gender studies, identity construction, single fathers
Procedia PDF Downloads 133
708 Experimental and Numerical Evaluation of a Shaft Failure Behaviour Using Three-Point Bending Test
Authors: Bernd Engel, Sara Salman Hassan Al-Maeeni
Abstract:
A substantial amount of natural resources is nowadays consumed at a growing rate, as humans all over the world use materials obtained from the Earth. The machinery manufacturing industry is one of the major resource consumers on a global scale. Despite the incessant discovery of new materials, metals, and resources, it is urgent for the industry to develop methods to use the Earth's resources intelligently and more sustainably than before. Re-engineering of machine tools with regard to design and failure analysis is an approach whereby out-of-date machines are upgraded and returned to useful life. To ensure the reliable future performance of used machine components, it is essential to investigate machine component failure through material, design, and surface examinations. This paper presents an experimental approach aimed at inspecting the shaft of a rotary draw bending machine as a case study. The testing methodology, which is based on the principle of the three-point bending test, allows assessing the shaft's elastic behavior under loading. Furthermore, the shaft's elastic characteristics, including the maximum linear deflection and the maximum bending stress, were determined using an analytical approach and a finite element (FE) analysis approach. In the end, the results were compared with the ones obtained by the experimental approach. In conclusion, the measured bending deflection and bending stress were close to the permissible design values; therefore, the shaft can work in a second life cycle. However, based on previous surface tests conducted, the shaft needs surface treatments, including re-carburizing and refining processes, to ensure reliable surface performance.
Keywords: deflection, FE analysis, shaft, stress, three-point bending
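The elastic quantities named in the abstract have closed-form expressions for a simply supported shaft loaded at midspan, which is what the analytical approach evaluates; the load, span, diameter, and modulus below are assumed for illustration and are not the paper's test values.

```python
import math

def three_point_bending(force, span, diameter, e_mod):
    """Maximum deflection and bending stress of a simply supported solid
    round shaft loaded at midspan (classic beam formulas):
      delta = F*L^3 / (48*E*I),  sigma = (F*L/4) * (d/2) / I,
    with second moment of area I = pi*d^4/64."""
    inertia = math.pi * diameter**4 / 64
    deflection = force * span**3 / (48 * e_mod * inertia)
    stress = (force * span / 4) * (diameter / 2) / inertia
    return deflection, stress

# Assumed values: 10 kN at midspan, 0.6 m span, 50 mm steel shaft.
defl, sigma = three_point_bending(10e3, 0.6, 0.05, 210e9)
# defl is roughly 0.7 mm, sigma roughly 122 MPa
```

Comparing such analytical values against the FE and measured results is the cross-check the abstract describes.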
Procedia PDF Downloads 158
707 Nonlinear Finite Element Modeling of Deep Beam Resting on Linear and Nonlinear Random Soil
Authors: M. Seguini, D. Nedjar
Abstract:
An accurate nonlinear analysis of a deep beam resting on elastic perfectly plastic soil is carried out in this study. In fact, a nonlinear finite element model for the large deflection and moderate rotation of an Euler-Bernoulli beam resting on linear and nonlinear random soil is investigated. The geometric nonlinear analysis of the beam is based on the theory of von Kármán, where the Newton-Raphson incremental iteration method is implemented in a Matlab code to solve the nonlinear equation of the soil-beam interaction system. Two analyses (deterministic and probabilistic) are proposed to verify the accuracy and efficiency of the proposed model, where the theory of the local average based on the Monte Carlo approach is used to analyze the effect of the spatial variability of the soil properties on the nonlinear beam response. The effects of six main parameters are investigated: the external load, the length of the beam, the coefficient of subgrade reaction of the soil, the Young's modulus of the beam, and the coefficient of variation and the correlation length of the soil's coefficient of subgrade reaction. A comparison between the beam resting on linear and nonlinear soil models is presented for different beam lengths and external loads. Numerical results have been obtained for the combination of the geometric nonlinearity of the beam and the material nonlinearity of the random soil. This comparison highlighted the need to include the material nonlinearity and spatial variability of the soil in the geometric nonlinear analysis when the beam undergoes large deflections.
Keywords: finite element method, geometric nonlinearity, material nonlinearity, soil-structure interaction, spatial variability
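The Newton-Raphson incremental scheme mentioned above can be sketched on a single nonlinear "soil spring"; the cubic reaction law and its coefficients below are an assumed toy case, not the paper's von Kármán beam equations, but the iteration (residual, tangent, update) is the same in structure.

```python
def newton_raphson(residual, tangent, w0=0.0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson iteration: repeatedly correct the unknown
    by the residual divided by the tangent stiffness."""
    w = w0
    for _ in range(max_iter):
        r = residual(w)
        if abs(r) < tol:
            return w
        w -= r / tangent(w)
    return w

# Toy nonlinear spring law (assumed): reaction k1*w + k3*w**3 must
# balance an applied load q, i.e. solve k1*w + k3*w^3 - q = 0.
k1, k3, q = 1.0e4, 5.0e6, 500.0
w_sol = newton_raphson(lambda w: k1 * w + k3 * w**3 - q,
                       lambda w: k1 + 3 * k3 * w**2,
                       w0=0.01)
```

In the full model the scalar unknown becomes the nodal displacement vector and the tangent becomes the assembled stiffness matrix, but the convergence logic is unchanged.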
Procedia PDF Downloads 414
706 Promoting 21st Century Skills through Telecollaborative Learning
Authors: Saliha Ozcan
Abstract:
Technology has become an integral part of our lives, aiding individuals in accessing higher-order competencies such as global awareness, creativity, collaborative problem solving, and self-directed learning. Students need to acquire these competencies, often referred to as 21st century skills, in order to adapt to a fast-changing world. Today, an ever-increasing number of schools are exploring how engagement through telecollaboration can support language learning and promote 21st century skill development in classrooms. However, little is known regarding how telecollaboration may influence the way students acquire 21st century skills. In this paper, we aim to shed light on the potential implications of telecollaborative practices for the acquisition of 21st century skills. In our context, telecollaboration, which might be carried out in a variety of settings both synchronously and asynchronously, is considered as the process of communicating and working together with other people or groups from different locations, through online digital tools or offline activities, to co-produce a desired work output. The study presented here describes and analyses the implementation of a telecollaborative project between two high school classes, one in Spain and the other in Sweden. The students in these classes were asked to carry out some joint activities, including creating an online platform, aimed at raising awareness of the situation of the Syrian refugees. We conduct a qualitative study in order to explore how language, culture, communication, and technology merge into the co-construction of knowledge, as well as how they support the attainment of the 21st century skills needed for network-mediated communication. To this end, we collected a significant amount of audio-visual data, including video recordings of classroom interaction and external Skype meetings.
By analysing this data, we verify whether the initial pedagogical design and intended objectives of the telecollaborative project coincide with what emerges from the actual implementation of the tasks. Our findings indicate that, as well as planned activities, unplanned classroom interactions may lead to the acquisition of certain 21st century skills, such as collaborative problem solving and self-directed learning. This work is part of a wider project (KONECT, EDU2013-43932-P; Spanish Ministry of Economy and Finance), which aims to explore innovative, cross-competency-based teaching that can address the current gaps between today's educational practices and the needs of informed citizens in tomorrow's interconnected, globalised world.
Keywords: 21st century skills, telecollaboration, language learning, network mediated communication
Procedia PDF Downloads 125
705 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems
Authors: Alejandro Adorjan
Abstract:
Universities are motivated to understand the factors contributing to the low retention of engineering undergraduates. While the number of precollege students interested in engineering increases, the number of engineering graduates continues to decrease, and attrition rates for engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate Engineering Science programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out of engineering cite Calculus specifically as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. In order to develop several competencies in our students of Software Engineering courses, Calculus 1 at Universidad ORT Uruguay focuses on developing competencies such as the capacity for synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE). Every semester we try to reflect on our practice and answer the following research question: What kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week non-compulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were enrolled in C1. Traditional lectures in this bridge course led students to merely transcribe notes from the blackboard. Last semester we proposed a hands-on lab course using Geogebra (interactive geometry and Computer Algebra System (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and developed most of the tasks and topics in Geogebra. As a result of this approach, several pros and cons were found.
The course involved an excessive number of weekly hours of mathematics for students and, as it was non-compulsory, attendance decreased with time. Nevertheless, the activity succeeded in improving final test results, and most students expressed pleasure at working with this methodology. This technology-oriented teaching approach strengthens the student math competencies needed for Calculus 1 and improves student performance, engagement, and self-confidence. It is important as teachers to reflect on our practice, including innovative proposals with the objective of engaging students, increasing retention, and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer so as to validate the preliminary results.
Keywords: calculus, engineering education, PreCalculus, Summer Program
Procedia PDF Downloads 290
704 Estimation of Atmospheric Parameters for Weather Study and Forecast over Equatorial Regions Using Ground-Based Global Positioning System
Authors: Asmamaw Yehun, Tsegaye Kassa, Addisu Hunegnaw, Martin Vermeer
Abstract:
There are various models to estimate neutral atmospheric parameter values, such as in-situ measurements and reanalysis datasets from numerical models. Accurately estimated values of the atmospheric parameters are useful for weather forecasting, climate modeling, and monitoring of climate change. Recently, Global Navigation Satellite System (GNSS) measurements have been applied to atmospheric sounding due to their robust data quality and wide horizontal and vertical coverage. Global Positioning System (GPS) solutions that include tropospheric parameters constitute a reliable set of data to be assimilated into climate models. The objective of this paper is to estimate neutral atmospheric parameters such as the Wet Zenith Delay (WZD), Precipitable Water Vapour (PWV), and Total Zenith Delay (TZD) using six selected GPS stations in the equatorial regions, more precisely, Ethiopian GPS stations, from observational data covering 2012 to 2015. Based on the historical GPS-derived estimates of PWV, we forecasted the PWV from 2015 to 2030. During data processing and analysis, we applied the GAMIT-GLOBK software packages to estimate the atmospheric parameters. As a result, we found that the annual averaged minimum value of PWV is 9.72 mm for IISC and the maximum is 50.37 mm for BJCO. The annual averaged minimum value of WZD is 6 cm for IISC and the maximum is 31 cm for BDMT. In the long series of observations (from 2012 to 2015), we also found that there are trends and cyclic patterns in WZD, PWV, and TZD for all stations.
Keywords: atmosphere, GNSS, neutral atmosphere, precipitable water vapour
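The link between the reported WZD and PWV values is the standard (Bevis-style) conversion PWV = Π·ZWD, where the dimensionless factor Π depends on the weighted mean atmospheric temperature Tm. The refractivity constants and the Tm value below are typical textbook values assumed for illustration, so the numbers are only indicative of the conversion, not results from the paper.

```python
RHO_W = 1000.0   # density of liquid water, kg/m^3
R_V = 461.5      # specific gas constant of water vapour, J/(kg*K)
K2P = 16.5       # refractivity constant k'2, K/hPa
K3 = 3.776e5     # refractivity constant k3, K^2/hPa

def pwv_from_zwd(zwd_m, tm_kelvin):
    """Convert a wet zenith delay (m) to precipitable water vapour (m)
    via PWV = Pi * ZWD, with Pi = 1e6 / (rho_w * R_v * (k3/Tm + k'2)).
    The /100 factors convert the per-hPa constants to per-Pa."""
    pi_factor = 1e6 / (RHO_W * R_V * (K3 / 100 / tm_kelvin + K2P / 100))
    return pi_factor * zwd_m

# A 31 cm wet delay (the maximum reported, for BDMT) at an assumed
# Tm of 270 K maps to roughly 47-48 mm of PWV.
pwv_mm = pwv_from_zwd(0.31, 270.0) * 1000
```

Π comes out near 0.15, the commonly quoted value, which is why PWV in millimetres is roughly 15% of the wet delay.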
Procedia PDF Downloads 61
703 Finite Element Model to Evaluate Gas Coning Phenomenon in Naturally Fractured Oil Reservoirs
Authors: Reda Abdel Azim
Abstract:
The gas coning phenomenon is considered one of the prevalent issues in oil field applications, as it significantly affects the amount of produced oil, increases the cost of production operations, and has a direct effect on the recovery efficiency of oil reservoirs as well. Therefore, evaluating this phenomenon and studying the reservoir mechanisms that may strongly drive gas invasion of the producing formation is crucial. Gas coning is a result of an imbalance between the two major forces controlling oil production: gravitational and viscous forces, especially in naturally fractured reservoirs, where the capillary pressure forces are negligible. Once gas invades the producing formation near the wellbore due to a large oil production rate, the gas-oil contact will change, and such reservoirs are prone to gas coning. Moreover, the oil volume expected to be produced requires the use of long horizontal perforated wells. This work presents a numerical simulation study to predict and propose solutions to gas coning in naturally fractured oil reservoirs. The simulation work is based on discrete fracture and permeability tensor approaches. The governing equations are discretized using a finite element approach, and the Galerkin least squares technique (GLS) is employed to stabilize the equation solutions. The developed simulator is validated against Eclipse-100 using horizontal fractures. The matrix and fracture properties are modelled. The critical rate, breakthrough time, and GOR are determined for use in investigating the effect of matrix and fracture properties on gas coning. Results show that the fracture distribution, in terms of diverse dip and azimuth, has a great effect on the occurrence of coning, as do fracture porosity, anisotropy ratio, and fracture aperture.
Keywords: gas coning, finite element, fractured reservoirs, multiphase
Procedia PDF Downloads 195
702 Application Difference between Cox and Logistic Regression Models
Authors: Idrissa Kayijuka
Abstract:
The logistic regression and Cox regression (proportional hazards) models are at present being employed in the analysis of prospective epidemiologic research looking into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data regarding the time leading up to an event where censored cases exist. The logistic regression model, in contrast, is mostly applicable in cases where the independent variables consist of numerical as well as nominal values while the resultant variable is binary (dichotomous). The arguments and findings of many researchers have focused on overviews of the Cox and logistic regression models and their different applications in different areas. In this work, the analysis is done on secondary data whose source is the SPSS exercise data on breast cancer, with a sample size of 1121 women, where the main objective is to show the application difference between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some analysis was done manually, i.e., on lymph node status, and SPSS software was used to analyze the rest of the data. This study found that there is an application difference between the Cox and logistic regression models: the Cox regression model is used if one wishes to analyze data that also include the follow-up time, whereas the logistic regression model analyzes data without follow-up time. They also have different measures of association: the hazard ratio and the odds ratio for the Cox and logistic regression models, respectively. A similarity between the two models is that both are applicable to the prediction of the outcome of a categorical variable, i.e., a variable that can accommodate only a restricted number of categories.
In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models are suitable methods for analyzing such data and can be applied in many other studies, but the Cox regression model is the more recommended of the two.
Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio
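The core distinction above can be sketched on toy data (hypothetical numbers, not from the breast-cancer dataset): a logistic model's odds ratio compares proportions of events, while a Cox model's hazard ratio compares event rates, which requires each subject's follow-up time. A minimal illustration, assuming constant hazards so that the rate ratio coincides with the hazard ratio:

```python
# Each subject is (event_occurred, follow_up_years); two exposure groups.
# Hypothetical toy data for illustration only.
exposed   = [(1, 2.0), (1, 3.5), (0, 5.0), (1, 1.5), (0, 4.0)]
unexposed = [(0, 5.0), (1, 4.5), (0, 5.0), (0, 3.0), (1, 4.0)]

def odds(group):
    """Odds of the event: what a logistic model's odds ratio compares."""
    events = sum(e for e, _ in group)
    return events / (len(group) - events)

def rate(group):
    """Events per person-year: what a hazard ratio compares."""
    events = sum(e for e, _ in group)
    person_years = sum(t for _, t in group)
    return events / person_years

odds_ratio = odds(exposed) / odds(unexposed)
# Under a constant-hazard assumption, the rate ratio equals the hazard ratio.
rate_ratio = rate(exposed) / rate(unexposed)
print(f"odds ratio = {odds_ratio:.3f}")   # ignores follow-up time
print(f"rate ratio = {rate_ratio:.3f}")   # uses follow-up time
```

The two ratios differ because the rate ratio weights each group by its accumulated person-time, which is exactly the information the logistic model discards.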
Procedia PDF Downloads 455
701 Geographic Information System and Dynamic Segmentation of Very High Resolution Images for the Semi-Automatic Extraction of Sandy Accumulation
Authors: A. Bensaid, T. Mostephaoui, R. Nedjai
Abstract:
A considerable area of Algerian land is threatened by the phenomenon of wind erosion. Wind erosion and its harmful effects on the natural environment have long posed a serious threat, especially in the arid regions of the country. In recent years, as a result of the increasingly irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has intensified markedly. The extent of degradation in the arid region of the Algerian Mecheria department has produced a new situation characterized by reduced vegetation cover, decreased land productivity, and sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of ancient dune cordons, based on the numerical processing of LANDSAT images (5, 7, and 8) of three scenes, 197/37, 198/36, and 198/37, for the year 2020. As a second step, we explore the use of geospatial techniques to monitor the progression of sand dunes onto developed (urban) land, as well as the formation of sandy accumulations (dunes, dune fields, nebkhas, barchans, etc.). For this purpose, the study uses a semi-automatic processing method for the dynamic segmentation of images with very high spatial resolution (SENTINEL-2 and Google Earth). The study demonstrates that, under current conditions, urban land lies in sand-transit zones mobilized by winds from the northwest and southwest.
Keywords: land development, GIS, segmentation, remote sensing
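The segmentation step described above groups classified pixels into discrete sandy regions whose extent can then be measured. A minimal, self-contained sketch of that idea, assuming a hypothetical pre-classified raster where 1 marks "sand" pixels (the paper's actual workflow applies dynamic segmentation to SENTINEL-2 and Google Earth imagery; this toy only labels 4-connected regions):

```python
from collections import deque

# Hypothetical binary raster: 1 = sand pixel, 0 = other land cover.
raster = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0],
]

def label_regions(grid):
    """Return {region_id: pixel_count} for 4-connected sand regions,
    using a breadth-first flood fill over unlabeled sand pixels."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    sizes, next_id = {}, 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and labels[r][c] == 0:
                next_id += 1
                labels[r][c] = next_id
                queue, count = deque([(r, c)]), 0
                while queue:
                    y, x = queue.popleft()
                    count += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = next_id
                            queue.append((ny, nx))
                sizes[next_id] = count
    return sizes

sizes = label_regions(raster)
print(sizes)  # region id -> pixel count
```

Multiplying each pixel count by the sensor's ground resolution would give the area of each sandy accumulation, which is the kind of quantity the study tracks between acquisition dates.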
Procedia PDF Downloads 155