Search results for: fundamental equations
627 Different Approaches to Teaching a Database Course to Undergraduate and Graduate Students
Authors: Samah Senbel
Abstract:
Database design is a fundamental part of the computer science and information technology curricula in any school, as well as in the study of management, business administration, and data analytics. In this study, we compare the performance of two groups of students studying the same database design and implementation course at Sacred Heart University in the fall of 2018. Both courses used the same textbook and were taught by the same professor, one for seven graduate students and one for 26 undergraduate students (juniors). The undergraduate students were around 20 years old with little work experience, while the graduate students averaged 35 years old and all were employed in computer-related or management-related jobs. The textbook used was 'Database Systems: Design, Implementation, and Management' by Coronel and Morris, and the course was designed to follow the textbook at roughly a chapter per week. The first six weeks covered the design aspect of a database, followed by a paper exam. The next six weeks covered the implementation aspect of the database using SQL, followed by a lab exam. Since the undergraduate students are on a 16-week semester, we spent the last three weeks of the course covering NoSQL; this part of the course was not included in this study. After the course was over, we analyzed the results of the two groups of students. An interesting discrepancy was observed: in the database design part of the course, the average grade of the graduate students was 92%, while that of the undergraduate students was 77% on the same exam. In the implementation part of the course, we observed the opposite: the average grade of the graduate students was 65%, while that of the undergraduate students was 73%. The overall grades were quite similar: the graduate average was 78% and the undergraduate average was 75%. Based on these results, we concluded that having both classes follow the same time schedule was not beneficial, and an adjustment is needed.
The graduates could spend less time on design, and the undergraduates would benefit from more design time. In the fall of 2019, 30 students registered for the undergraduate course and 15 students registered for the graduate course. To test our conclusion, the undergraduates spent about 67% of the time (eight classes) on the design part of the course and 33% (four classes) on the implementation part, using the exact same exams as the previous year. This resulted in an improvement in their average grade on the design part from 77% to 83%, and also in their implementation average grade from 73% to 79%. In conclusion, we recommend using two separate schedules for teaching the database design course. For undergraduate students, it is important to spend more time on the design part rather than the implementation part of the course, while for the older graduate students, we recommend spending more time on the implementation part, as that seems to be the part they struggle with, even though they have a stronger grasp of the design component of databases.
Keywords: computer science education, database design, graduate and undergraduate students, pedagogy
Procedia PDF Downloads 121
626 The Persistence of Abnormal Return on Assets: An Exploratory Analysis of the Differences between Industries and Differences between Firms by Country and Sector
Authors: José Luis Gallizo, Pilar Gargallo, Ramon Saladrigues, Manuel Salvador
Abstract:
This study offers an exploratory statistical analysis of the persistence of annual profits across a sample of firms from different European Union (EU) countries. To this end, a hierarchical Bayesian dynamic model has been used which enables the annual behaviour of those profits to be broken down into a permanent structural component and a transitory component, while also distinguishing between general effects affecting the industry as a whole to which each firm belongs and specific effects affecting each firm in particular. This breakdown enables the relative importance of those fundamental components to be more accurately evaluated by country and sector. Furthermore, the Bayesian approach allows for testing different hypotheses about the homogeneity of the behaviour of the above components with respect to the sector and the country where the firm develops its activity. The data analysed come from a sample of 23,293 firms in EU countries selected from the AMADEUS database. The period analysed ran from 1999 to 2007, and 21 sectors were analysed, chosen in such a way that there was a sufficiently large number of firms in each country-sector combination for the industry effects to be estimated accurately enough for meaningful comparisons to be made by sector and country. The analysis has been conducted by sector and by country from a Bayesian perspective, thus making the study more flexible and realistic, since the estimates obtained do not depend on asymptotic results. In general terms, the study finds that, although the industry effects are significant, the firm-specific effects are more important. That importance varies depending on the sector or the country in which the firm carries out its activity. The influence of firm effects accounts for around 81% of total variation, and these effects display a significantly lower degree of persistence, with adjustment speeds oscillating around 34%. However, this pattern is not homogeneous but depends on the sector and country analysed.
Industry effects, whose magnitude also depends on the sector and country analysed, have a more marginal importance but are significantly more persistent, with adjustment speeds oscillating around 7-8%; this degree of persistence is very similar across most of the sectors and countries analysed.
Keywords: dynamic models, Bayesian inference, MCMC, abnormal returns, persistence of profits, return on assets
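To make the decomposition concrete, the sketch below simulates it on synthetic data (not the AMADEUS sample): industry and firm effects are modelled as mean-reverting AR(1) processes whose adjustment speeds follow the estimates reported above (roughly 7-8% and 34%), so the industry component should exhibit the higher persistence.

```python
import numpy as np

rng = np.random.default_rng(2)

# Abnormal ROA = industry effect + firm effect, each a mean-reverting
# AR(1). An adjustment speed s corresponds to an AR coefficient (1 - s);
# the speeds below follow the estimates reported in the abstract.
T = 2000
s_industry, s_firm = 0.075, 0.34
phi_industry, phi_firm = 1 - s_industry, 1 - s_firm

industry = np.zeros(T)
firm = np.zeros(T)
for t in range(1, T):
    industry[t] = phi_industry * industry[t - 1] + rng.normal(0.0, 0.5)
    firm[t] = phi_firm * firm[t - 1] + rng.normal(0.0, 1.0)

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation, a simple persistence measure."""
    x = x - x.mean()
    return float(np.dot(x[1:], x[:-1]) / np.dot(x, x))

# The slowly adjusting industry component is the more persistent one.
print(lag1_autocorr(industry) > lag1_autocorr(firm))
```

The actual study estimates these components jointly in a hierarchical Bayesian model via MCMC; this simulation only illustrates how adjustment speed maps to persistence.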
Procedia PDF Downloads 401
625 Surgical Treatment of Glaucoma – Literature and Video Review of Blebs, Tubes, and Micro-Invasive Glaucoma Surgeries (MIGS)
Authors: Ana Miguel
Abstract:
Purpose: Glaucoma is the second leading cause of blindness worldwide and the first cause of irreversible blindness. Trabeculectomy, the standard glaucoma surgery, has a success rate between 36.0% and 98.0% at three years and a high complication rate, which has led to the development of different surgeries, the micro-invasive glaucoma surgeries (MIGS). MIGS devices are diverse and have various indications, risks, and levels of effectiveness. We intended to review MIGS' surgical techniques, indications, contra-indications, and effect on intraocular pressure (IOP). Methods: We performed a literature review of MIGS to differentiate the devices and their reported effectiveness compared to traditional surgery (tubes and blebs). We also conducted a video review of the author's last 1000 glaucoma surgeries (including MIGS, but also trabeculectomy, deep sclerectomy, and Ahmed and Baerveldt tubes), performed during glaucoma and advanced anterior segment fellowships in Canada and France, to describe the preferred surgical technique for each. Results: We present the videos with surgical techniques and pearls for each surgery. Glaucoma surgeries included: 1- bleb surgery (namely trabeculectomy, with releasable sutures or with slip knots, deep sclerectomy, Ahmed valve, Baerveldt tube), 2- MIGS with bleb, also known as MIBS (including XEN 45, XEN 63, and Preserflo), 3- MIGS increasing supra-choroidal flow (iStar), 4- MIGS increasing trabecular flow (iStent, gonioscopy-assisted transluminal trabeculotomy - GATT, goniotomy, excimer laser trabeculostomy - ELT), and 5- MIGS decreasing aqueous humor production (endocyclophotocoagulation, ECP). Needling (ab interno and ab externo) was also performed in the operating room, as was irido-zonulo-hyaloïdectomy (IZHV). Each technique had different indications and contra-indications. Conclusion: MIGS are a valuable part of glaucoma surgery, alongside traditional surgery with trabeculectomy and tubes.
All glaucoma surgeries can be combined with phacoemulsification (there may be a synergistic effect of MIGS + cataract surgery). In addition, some MIGS may be combined for a further intraocular pressure-lowering effect (for example, iStents with goniotomy and ECP). Good surgical technique and postoperative management are fundamental to increasing success and good practice in all glaucoma surgery.
Keywords: glaucoma, MIGS, surgery, video, review
Procedia PDF Downloads 83
624 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The PUSPATI TRIGA Reactor (RTP) in Malaysia reached its first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The Feedback Control Algorithm (FCA), a conventional Proportional-Integral (PI) controller, is the present power control method used to control the fission process in RTP. It is important to ensure that the core power remains stable and follows load tracking within an acceptable steady-state error and a minimum settling time to reach steady-state power. At present, the system's power tracking performance could be considered inadequate. However, there is still potential to improve the current performance by developing a next generation of novel nuclear core power control designs. In this paper, the dual-mode prediction proposed in Optimal Model Predictive Control (OMPC) is presented in a state-space model to control the core power. The model for core power control was based on mathematical models of the reactor core, OMPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on neutronic, thermal-hydraulic, and reactivity models. The dual-mode prediction in OMPC, covering transient and terminal modes, was based on the implementation of a Linear Quadratic Regulator (LQR) in designing the core power control. The combination of dual-mode prediction and a Lyapunov-based treatment of the summation in the cost function over an infinite horizon is intended to eliminate some of the fundamental weaknesses of MPC. This paper shows the behaviour of OMPC in dealing with tracking, the regulation problem, disturbance rejection, and parameter uncertainty. The tracking and regulating performance of the conventional controller and OMPC are compared through numerical simulations.
In conclusion, the proposed OMPC has shown significant performance in load tracking and regulating core power for a nuclear reactor, with guaranteed closed-loop stability.
Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control
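In dual-mode predictive control, the terminal mode is the unconstrained LQR law, whose infinite-horizon cost is summarized by the Riccati matrix. The sketch below illustrates that step on a generic two-state discrete plant (illustrative numbers, not the RTP reactor model), checking that the resulting closed loop is stable.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative 2-state discrete plant (not the RTP model): the terminal
# ("mode 2") law of dual-mode MPC is the unconstrained LQR gain, whose
# infinite-horizon cost is captured by the Riccati matrix P.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)           # state weighting
R = np.array([[1.0]])   # input weighting

P = solve_discrete_are(A, B, Q, R)                 # terminal cost weight
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # LQR feedback gain

# Closed loop x_{k+1} = (A - B K) x_k must be stable: spectral radius < 1.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(rho < 1.0)
```

Beyond the prediction horizon, the MPC cost-to-go is then simply x'Px, which is what allows the infinite-horizon summation to be handled in closed form.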
Procedia PDF Downloads 162
623 Exploring the Applications of Neural Networks in the Adaptive Learning Environment
Authors: Baladitya Swaika, Rahul Khatry
Abstract:
Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), in which items are selected by maximum information (or by selection from the posterior) and ability is estimated with maximum-likelihood (ML) or maximum a posteriori (MAP) estimators. This study aims at combining both classical and Bayesian approaches to IRT to create a dataset, which is then fed to a neural network that automates the process of ability estimation; the result is compared to traditional CAT models designed using IRT. This study uses Python as the base coding language, PyMC for statistical modelling of the IRT, and scikit-learn for the neural network implementations. On creation of the model and on comparison, it is found that the neural network-based model performs 7-10% worse than the IRT model for score estimation. Although it performs worse than the IRT model, the neural network model can be used beneficially in back-ends to reduce time complexity: the IRT model has to re-calculate the ability every time it receives a request, whereas the prediction from an already-trained neural network regressor can be done in a single step. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets beyond the normal IRT feature set and use a neural network's capacity for learning unknown functions to give rise to better CAT models. Categorical features such as test type could be learnt and incorporated into IRT functions with the help of techniques like logistic regression, and could be used to learn functions and express models that may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments.
This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher-quality testing.
Keywords: computer adaptive tests, item response theory, machine learning, neural networks
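A minimal sketch of the back-end idea, using synthetic 2PL IRT data rather than the study's PyMC pipeline: a scikit-learn regressor is trained to map a full response pattern directly to ability, replacing per-request ML/MAP estimation with a single forward pass. All item and student parameters here are simulated for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical 2PL IRT item bank: discriminations a_j and difficulties b_j.
n_items, n_students = 20, 500
a = rng.uniform(0.8, 2.0, n_items)
b = rng.normal(0.0, 1.0, n_items)
theta = rng.normal(0.0, 1.0, n_students)  # true latent abilities

# Probability of a correct response under the 2PL model, then simulated
# binary response patterns for every student.
p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
responses = (rng.random((n_students, n_items)) < p).astype(float)

# Train a neural network to map a response pattern to ability, so that
# serving an ability estimate becomes one forward pass instead of a
# fresh ML/MAP optimization per request.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(responses, theta)

pred = net.predict(responses)
rmse = float(np.sqrt(np.mean((pred - theta) ** 2)))
print(round(rmse, 2))
```

In a production back-end, the network would be trained offline on abilities estimated by the calibrated IRT model (as the study does), not on the true abilities, which are of course unobservable.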
Procedia PDF Downloads 175
622 Carbon Nanotube Field Effect Transistor - A Review
Authors: P. Geetha, R. S. D. Wahida Banu
Abstract:
The crowning advances in silicon-based electronic technology have dominated the computation world for the past decades. The captivating performance of Si devices lies in the sustainable scaling down of their physical dimensions, thereby increasing device density and improving performance. However, fundamental limitations of a physical, technological, economic, and manufacturing nature restrict further miniaturization of Si-based devices. The pitfalls of scaling down the devices include process variation, short-channel effects, high leakage currents, and reliability concerns. To fix these problems, it is necessary either to follow a new concept that will manage the current hitches or to support the available concept with different materials. The new concepts are spintronics, quantum computation, and two-terminal molecular devices. Otherwise, the presently used, well-known three-terminal devices can be modified with different materials suited to addressing the scaling-down difficulties. The first approach lies in the far future, since it needs considerable effort; the second path offers a more immediate way forward. Modelling paves the way to knowing not only the current-voltage characteristics but also the performance of new devices. So, it is desirable to model a new device with suitable gate control and project its abilities towards handling high current, high power, high frequency, short delay, and high velocity, with excellent electronic and optical properties. The carbon nanotube has become a thriving material for replacing silicon in nano devices. A well-planned, optimized utilization of the carbon material leads to many more advantages. The unique nature of this carbon-based material allows for recent developments in almost all fields of application, from the automobile industry to medical science, and especially in electronics, on which the automation industry depends. Much more research is being done in this area.
This paper reviews the carbon nanotube field effect transistor with various gate configurations, numbers of channel elements, CNT wall configurations, and different modelling techniques.
Keywords: array of channels, carbon nanotube field effect transistor, double gate transistor, gate wrap around transistor, modelling, multi-walled CNT, single-walled CNT
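As a small illustration of the kind of device-level modelling such reviews cover, the sketch below estimates a nanotube's diameter and approximate band gap from its chirality indices (n, m), using the common textbook rule that a tube is metallic when (n − m) is divisible by 3 and semiconducting otherwise; the ~0.8 eV·nm gap constant is an approximate literature value, not a result from this review.

```python
import math

# Back-of-the-envelope CNT electronic properties from chirality (n, m):
# metallic when (n - m) % 3 == 0; otherwise semiconducting with a band
# gap of roughly ~0.8 eV·nm / d. Constants are approximate textbook values.
A_CC = 0.246  # graphene lattice constant, nm

def cnt_diameter(n, m):
    """Tube diameter in nm from the chiral indices."""
    return A_CC * math.sqrt(n * n + n * m + m * m) / math.pi

def cnt_type(n, m):
    return "metallic" if (n - m) % 3 == 0 else "semiconducting"

def band_gap_ev(n, m):
    """Approximate band gap in eV (zero for metallic tubes)."""
    return 0.0 if cnt_type(n, m) == "metallic" else 0.8 / cnt_diameter(n, m)

d = cnt_diameter(13, 0)  # zigzag (13,0), a commonly cited CNTFET channel
print(cnt_type(13, 0), round(d, 2), round(band_gap_ev(13, 0), 2))
```

A semiconducting gap of well under 1 eV for ~1 nm tubes is one reason gate configuration and electrostatic control dominate CNTFET design discussions.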
Procedia PDF Downloads 326
621 Human Capital Development, Foreign Direct Investment and Industrialization in Nigeria
Authors: Ese Urhie, Bosede Olopade, Muyiwa Oladosun, Henry Okodua
Abstract:
In the past three and a half decades, the contribution of the industrial sector to gross domestic product in Nigeria has nose-dived, and its performance has also been highly unstable. Investment funds needed to develop the industrial sector usually come from both internal and external sources. The internal sources include surplus generated within the industrial sector and surplus diverted from other sectors of the economy. It has been observed that, due to the small size of the industrial sector in developing countries, very limited funds can be raised for further investment. External sources of funds, from which many currently industrialized and some 'newly industrializing' countries have benefited, include direct and indirect investment by foreign capitalists; foreign aid and loans; and investments by nationals living abroad. Foreign direct investment inflow in Nigeria has been declining since 2009 in both absolute and relative terms. A high level of human capital has been identified as one of the crucial factors explaining the miraculous growth of the 'Asian Tigers'. Its low level has also been identified as the major cause of the low FDI flow to Nigeria in particular and Africa in general. There has been positive but slow improvement in human capital indicators in Nigeria over the past three decades. In spite of this, foreign direct investment inflow has not only been low; it has declined drastically in recent years. This raises several questions: i) Why has the improvement in human capital in Nigeria failed to attract more FDI inflow? ii) To what extent does the level of human capital influence FDI inflow in Nigeria? iii) Is there a threshold of human capital stock that guarantees sustained inflow of FDI? iv) Does the quality of human capital matter? v) Does the influence of other (negative) factors outweigh the benefits of human capital?
Using time series secondary data, a system of equations is employed to evaluate the effect of human capital on FDI inflow in Nigeria on the one hand, and the effect of FDI on the level of industrialization on the other. A weak relationship between human capital and FDI is expected, while a strong relationship between FDI and industrial growth is expected from the results.
Keywords: human capital, foreign direct investment, industrialization, gross domestic product
Procedia PDF Downloads 233
620 The Ethical Imperative of Corporate Social Responsibility Practice and Disclosure by Firms in Nigeria Delta Swamplands: A Qualitative Analysis
Authors: Augustar Omoze Ehighalua, Itotenaan Henry Ogiri
Abstract:
As a mono-product economy, Nigeria relies largely on oil revenues for its foreign exchange earnings, and the exploration activities of firms operating in the Niger Delta region have left in their wake tales of environmental degradation, poverty, and misery. This, no doubt, has created corporate social responsibility issues in the region. The focus of this research is a critical evaluation of the ethical response to Corporate Social Responsibility (CSR) practice by firms operating in the Niger Delta swamplands. While CSR is becoming more popular in developed societies, with effective practice guidelines and reporting benchmarks, there is a relatively low level of awareness and only selective applicability of existing international guidelines to effectively support CSR practice in Nigeria. This study, having identified the lack of a CSR institutional framework, attempts to develop an ethically driven CSR transparency benchmark laced within a regulatory framework based on international best practices. The research adopts a qualitative methodology and makes use of primary data collected through semi-structured interviews conducted across the six core states of the Niger Delta region. More importantly, the study adopts an inductive, interpretivist philosophical paradigm that reveals deep phenomenological insights into what local communities, civil society, and government officials consider a good ethical benchmark for responsible CSR practice by organizations. Institutional theory provides the main theoretical foundation, complemented by the stakeholder and legitimacy theories. The NVivo software was used to analyze the data collected. This study shows that ethical responsibility is lacking in CSR practice by firms in the Niger Delta region of Nigeria. Furthermore, the findings of the study indicate that key issues of environment, health and safety, human rights, and labour are fundamental in developing an effective CSR practice guideline for Nigeria.
The study has implications for public policy formulation as well as for the managerial perspective.
Keywords: corporate social responsibility, CSR, ethics, firms, Niger-Delta Swampland, Nigeria
Procedia PDF Downloads 106
619 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance
Authors: Rajinder Singh, Ram Valluru
Abstract:
The Chain Ladder (CL) method, the Expected Loss Ratio (ELR) method, and the Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses among the various alternative approaches revolve around: stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement or disagreement between reported losses to date and the ultimate loss estimate. The CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years that have more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach of parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period).
The methodology was tested on an actual MI claim development dataset in which various cohorts followed a sigmoidal trend, but levels varied substantially depending upon the economic and operational conditions during a development period spanning many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produces more stable loss forecasts for reserving purposes compared to the traditional CL and BF methods.
Keywords: actuarial loss reserving techniques, logistic regression, parametric function, volatility
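A minimal sketch of the parametric idea on invented numbers (not the actual MI dataset): a logistic curve is fitted to a cohort's cumulative loss development, and the ultimate loss is read off as the curve's asymptote, the reserve being ultimate minus ever-to-date losses.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative loss development for one cohort, in thousands,
# observed at development periods 1..8 (illustrative sigmoidal data).
dev = np.arange(1, 9, dtype=float)
cum_loss = np.array([19.5, 42.2, 79.8, 125.2, 162.8, 185.5, 196.6, 201.5])

# Parametric (logistic) development curve: U = ultimate loss (asymptote),
# k = growth rate, t0 = inflection point of the development pattern.
def logistic(t, U, k, t0):
    return U / (1.0 + np.exp(-k * (t - t0)))

params, _ = curve_fit(logistic, dev, cum_loss, p0=[200.0, 1.0, 4.0])
U, k, t0 = params

reserve = U - cum_loss[-1]  # ultimate minus ever-to-date (ETD) losses
print(round(U, 1), round(reserve, 1))
```

In practice, one such curve would be fitted per homogeneous group (accident year or delinquency period), with the curve parameters themselves allowed to depend on economic covariates, which is how exogenous factors enter indirectly.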
Procedia PDF Downloads 131
618 Simulation Based Analysis of Gear Dynamic Behavior in Presence of Multiple Cracks
Authors: Ahmed Saeed, Sadok Sassi, Mohammad Roshun
Abstract:
Gears are important components with a vital role in many rotating machines. One common cause of gear failure is a tooth fatigue crack; however, its early detection is still a challenging task. The objective of this study is to develop a numerical model that simulates the effect of tooth cracks on the resulting gear vibrations and consequently permits early fault detection. In contrast to other published papers, this work incorporates the possibility of multiple simultaneous cracks with different depths. As cracks significantly alter the stiffness of the tooth, finite element software is used to determine the stiffness variation with respect to the angular position for different combinations of crack orientation and depth. A simplified six-degrees-of-freedom nonlinear lumped parameter model of a one-stage spur gear system is proposed to study the vibration with and without cracks. The stiffness model developed for the cracked tooth permitted updating the physical parameters of the second-order equations of motion describing the vibration of the gearbox. The vibration simulation results of the gearbox were obtained using Simulink/MATLAB. The effect of a single crack at different levels was studied thoroughly. The change in the mesh stiffness and the vibration response were found to be consistent with previously published works. In addition, various statistical time-domain parameters were considered. They showed different degrees of sensitivity toward the crack depth. Multiple cracks were then introduced at different locations, and the vibration response, along with the statistical parameters, was obtained again for a general case of degradation (increases in crack depth, crack number, and crack locations). It was found that although some parameters increase in value as the deterioration level increases, they show almost no change, or even decrease, when the number of cracks increases.
Therefore, the use of any statistical parameter could be misleading if it is not considered in an appropriate way.
Keywords: spur gear, cracked tooth, numerical simulation, time-domain parameters
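The statistical time-domain parameters mentioned above can be sketched as follows on a synthetic signal (a gear-mesh tone plus noise, with an impulse train standing in for the periodic stiffness drop at a cracked tooth, all values illustrative); kurtosis, for instance, rises when crack-induced impulses are present.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical vibration signal: a gear-mesh tone plus noise, with a
# periodic impulse train emulating the stiffness drop at a cracked tooth.
fs, T = 10_000, 1.0
t = np.arange(0, T, 1 / fs)
healthy = np.sin(2 * np.pi * 500 * t) + 0.1 * rng.standard_normal(t.size)
crack_impulses = (np.sin(2 * np.pi * 20 * t) > 0.995) * 3.0
faulty = healthy + crack_impulses

def time_domain_stats(x):
    """RMS, kurtosis, and crest factor: common condition indicators."""
    rms = np.sqrt(np.mean(x ** 2))
    kurtosis = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
    crest = np.max(np.abs(x)) / rms
    return rms, kurtosis, crest

healthy_k = time_domain_stats(healthy)[1]
faulty_k = time_domain_stats(faulty)[1]
print(faulty_k > healthy_k)
```

As the abstract warns, no single indicator is reliable on its own: an indicator that rises with crack depth may stay flat, or even fall, as the number of cracks grows, so indicators should be interpreted jointly.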
Procedia PDF Downloads 266
617 The Importance of Oral Mucosal Biopsy Selection Site in Areas of Field Change: A Case Report
Authors: Timmis W., Simms M., Thomas C.
Abstract:
This case discusses the management of two floor of mouth (FOM) squamous cell carcinomas (SCC) not identified upon initial biopsy. A 51-year-old male presented with right FOM erythroleukoplakia. Relevant medical history included alcohol dependence syndrome and alcoholic liver disease. Relevant drug therapy encompassed acamprosate, folic acid, hydroxocobalamin, and thiamine. The patient had a 55.5 pack-year smoking history and alcohol dependence from age 14, drinking 16 units/day. FOM incisional biopsy and histopathological analysis diagnosed carcinoma in situ. Treatment involved wide local excision. Specimen analysis revealed two separate foci of pT1 moderately differentiated SCC. Carcinoma staging scans revealed no pathological lymphadenopathy and no local invasion or metastasis. The SCCs had been excised completely, with narrow margins. MDT discussion concluded that, in view of the field changes, it would be difficult to identify specific areas needing further excision, although techniques such as Lugol's iodine were considered. Further surgical resection, surgical neck management, and sentinel lymph node biopsy were offered. The patient declined intervention; primary management involved close monitoring alongside referral for alcohol and smoking cessation. Narrow excision margins can increase the risk of carcinoma recurrence. Biopsy failed to identify the SCCs despite sampling an area of clinical concern. For gross field change, multiple incisional biopsies should be considered to increase the chance of accurate diagnosis and appropriate treatment. The coupling of tobacco and alcohol has a synergistic effect, exponentially increasing the relative risk of oral carcinoma development. Tobacco and alcohol control is fundamental in reducing treatment-related side effects, recurrence risk, and the development of second primary cancers.
Keywords: alcohol dependence, biopsy, oral carcinoma, tobacco
Procedia PDF Downloads 112
616 3D Modeling of Flow and Sediment Transport in Tanks with the Influence of Cavity
Authors: A. Terfous, Y. Liu, A. Ghenaim, P. A. Garambois
Abstract:
With increasing urbanization worldwide, it is crucial to sustainably manage sediment flows in urban networks, especially in stormwater detention basins. One key aspect is to propose optimized designs for detention tanks in order to best reduce flood peak flows while settling particles. It is, therefore, necessary to understand the complex flow patterns and sediment deposition conditions in stormwater detention basins. The aim of this paper is to study the flow structure and particle deposition pattern for a given tank geometry with a view to controlling and maximizing sediment deposition. Both numerical simulation and experimental work were carried out to investigate the flow and sediment distribution in a storm tank with a cavity. The settling distribution of particles in a rectangular tank is mainly determined by the flow patterns and the bed shear stress. The flow patterns in a rectangular tank differ with geometry, entrance flow rate, and water depth; as the flow patterns change, the bed shear stress changes accordingly, which in turn influences particle settling. The accumulation of particles on the bed changes the conditions at the bottom; although this is ignored in most investigations, it deserves much more attention, as its influence on sedimentation may be important. The approach presented here is based on the resolution of the Reynolds-averaged Navier-Stokes equations to account for turbulent effects, together with a passive particle transport model. An analysis of particle deposition conditions is presented in this paper in terms of flow velocities and turbulence patterns. Sediment deposition zones are then presented, obtained from modeling with a particle tracking method. It is shown that two recirculation zones seem to significantly influence sediment deposition.
Due to the possible overestimation of particle trap efficiency with standard wall functions and stick conditions, further investigation seems required into basal boundary conditions based on turbulent kinetic energy and shear stress. These observations are confirmed by experimental investigations conducted in the laboratory.
Keywords: storm sewers, sediment deposition, numerical simulation, experimental investigation
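A toy version of the passive particle tracking step, on a hypothetical shear flow rather than the simulated tank: particles are advected with the local velocity plus a constant settling velocity, and the positions where they reach the bed give the deposition pattern. All numbers below are illustrative assumptions, not the paper's geometry or flow field.

```python
import numpy as np

# Minimal passive-particle tracking sketch (hypothetical tank): advect
# particles through a fixed 2-D shear flow with an added settling
# velocity, recording where each one reaches the bed (stick condition).
L_tank, H = 10.0, 1.0   # tank length and depth, metres
u0, w_s = 0.2, 0.01     # surface velocity and settling velocity, m/s
dt = 0.5                # time step, s

rng = np.random.default_rng(3)
x = np.zeros(200)             # particles released at the inlet (x = 0)
z = rng.uniform(0.2, H, 200)  # random release depths above the bed

deposited_x = []
for _ in range(2000):
    active = z > 0.0
    if not active.any():
        break
    # Simple shear flow: horizontal velocity decays toward the bed.
    u = u0 * (z[active] / H)
    x[active] += u * dt
    z[active] -= w_s * dt
    just_settled = active & (z <= 0.0)
    deposited_x.extend(x[just_settled].tolist())

mean_run = float(np.mean(deposited_x))
print(0.0 < mean_run < L_tank)
```

A RANS-based model would instead interpolate the computed velocity and turbulence fields at each particle position, and the choice of basal boundary condition (stick versus shear-stress-based resuspension) is exactly what the abstract flags as needing further investigation.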
Procedia PDF Downloads 325
615 Sustainable Integrated Waste Management System
Authors: Lidia Lombardi
Abstract:
Waste management in Europe and North America is evolving towards sustainable materials management, understood as a systemic approach to using and reusing materials more productively over their entire life cycles. Various waste management strategies are prioritized and ranked from the most to the least environmentally preferred, placing emphasis on reducing, reusing, and recycling as keys to sustainable materials management. However, non-recyclable materials must also be appropriately addressed, and waste-to-energy (WtE) offers a solution for managing them, especially when a WtE plant is integrated within a complex system of waste and wastewater treatment plants and potential users of the output flows. To evaluate the environmental effects of such system integration, Life Cycle Assessment (LCA) is a helpful and powerful tool. LCA has been widely applied to the waste management sector since the late 1990s, producing a large number of theoretical studies and real-world applications in support of waste management planning, and it still has a fundamental role in supporting decisions in the development of waste management systems. Thus, LCA was applied to evaluate the environmental performance of a Municipal Solid Waste (MSW) management system with improved separate material collection and recycling and an integrated network of treatment plants, including WtE, anaerobic digestion (AD), and a wastewater treatment plant (WWTP), for a reference study area. The proposed system was compared to the actual situation, characterized by poor recycling, extensive landfilling, and the absence of WtE. The LCA results showed that the increased recycling significantly improves the environmental performance, but there is still room for improvement through the introduction of energy recovery (especially by WtE) and through its use within the system, for instance, by feeding heat to the AD, to sludge recovery processes, and in support of water reuse practices.
WtE offers a solution to manage non-recyclable MSW and saves important resources (such as landfill volume and non-renewable energy), reducing the contribution to global warming and providing an essential contribution to fulfilling the goals of truly sustainable waste management.
Keywords: anaerobic digestion, life cycle assessment, waste-to-energy, municipal solid waste
Procedia PDF Downloads 606
614 Spatial Mental Imagery in Students with Visual Impairments when Learning Literal and Metaphorical Uses of Prepositions in English as a Foreign Language
Authors: Natalia Sáez, Dina Shulfman
Abstract:
There is an important research gap regarding accessible pedagogical techniques for teaching foreign languages to adults with visual impairments. English as a foreign language (EFL), in particular, is needed in many countries to expand occupational opportunities and improve living standards. Within EFL research, the teaching and learning of prepositions have only recently gained momentum, considering that they constitute one of the most difficult structures to learn in a foreign language and are fundamental for communicating about spatial relations in the world, both on the physical and imaginary levels. Learning to use prepositions would facilitate communication not only when referring to the surrounding tangible environment but also when conveying ideas about abstract topics (e.g., justice, love, society), for which students’ sociocultural knowledge about space could play an important role. By building on visually impaired students’ ability to construe mental spatial imagery, this study explored pedagogical techniques that cater to their strengths, helping them create new worlds by welcoming and expanding their sociocultural funds of knowledge as they learn to use English prepositions. Fifteen visually impaired adults living in Chile participated in the study. Their first language was Spanish, and they were learning English at the intermediate level of proficiency in an EFL workshop at La Biblioteca Central para Ciegos (The Central Library for the Blind). Within this workshop, a series of activities and interviews were designed and implemented with the intention of uncovering students’ spatial funds of knowledge when learning literal/physical uses of three English prepositions, namely “in,” “at,” and “on”. The activities and interviews also explored whether students used their original spatial funds of knowledge when learning metaphorical uses of these prepositions and whether their use of spatial imagery changed throughout the learning activities. 
Over the course of approximately half a year, it became clear that the students construed mental images of space when learning both literal/physical and metaphorical uses of these prepositions. This research could inform a new approach to inclusive language education using pedagogical methods that are relevant and accessible to students with visual impairments.
Keywords: EFL, funds of knowledge, prepositions, spatial cognition, visually impaired students
Procedia PDF Downloads 786
613 Treating Complex Pain and Addictions with Bioelectrode Therapy: An Acupuncture Point Stimulus Method for Relieving Human Suffering
Authors: Les Moncrieff
Abstract:
In a world awash with potent opioids fueling an international crisis, the need to explore safe alternatives has never been more urgent. Bio-electrode Therapy is a novel adjunctive treatment method for relieving acute opioid withdrawal symptoms and many types of complex acute and chronic pain (often the underlying cause of opioid dependence). By combining the science of developmental bioelectricity with Traditional Chinese Medicine’s theory of meridians, rapid relief from pain is routinely being achieved in the clinical setting. Human body functions are dependent on electrical factors, and acupuncture points on the body are known to have higher electrical conductivity than the surrounding skin tissue. When tiny gold- and silver-plated electrodes are secured to the skin at specific acupuncture points using established Chinese Medicine principles and protocols, an enhanced microcurrent and electrical field are created between the electrodes, influencing the entire meridian and connecting meridians. No external power source or electrical devices are required. Endogenous DC electric fields are an essential fundamental component of development, regeneration, and wound healing. Disruptions in the normal ion charge in the meridians and circulation of blood will manifest as pain and the development of disease. With the application of these simple electrodes (gold acting as cathode and silver as anode) according to protocols, the resulting microcurrent is directed along the selected meridians to target injured or diseased organs and tissues. When injured or diseased cells have been stimulated by the microcurrent and electrical fields, the permeability of the cell membrane is affected, resulting in an immediate relief of pain, a rapid balancing of positive and negative ions (sodium, potassium, etc.) 
in the cells, the restoration of intracellular fluid levels, replenishment of electrolyte levels, pH balance, removal of toxins, and a re-establishment of homeostasis.
Keywords: bioelectricity, electrodes, electrical fields, acupuncture meridians, complex pain, opioid withdrawal management
Procedia PDF Downloads 806
612 Experimental Study and Numerical Modelling of Failure of Rocks Typical for Kuzbass Coal Basin
Authors: Mikhail O. Eremin
Abstract:
The present work is devoted to an experimental study and numerical modelling of the failure of rocks typical for the Kuzbass coal basin (Russia). The main goal was to determine the strength and deformation characteristics of the rocks on the basis of uniaxial compression and three-point bending loadings and then to build a mathematical model of the failure process for both types of loading. Depending on their particular physical-mechanical characteristics, typical rocks of the Kuzbass coal basin (sandstones, siltstones, mudstones, etc. of different series – Kolchuginsk, Tarbagansk, Balohonsk) manifest a brittle or quasi-brittle character of failure. The strength characteristics for both tension and compression were found. Other characteristics were also found from the experiments or taken from literature reviews. On the basis of the obtained characteristics and the structure (obtained from microscopy), mathematical and structural models were built and numerical modelling of failure under the different types of loading was carried out. The effective characteristics obtained from the modelling and the character of failure correspond to the experiments, and thus the mathematical model was verified. An Instron 1185 machine was used to carry out the experiments. The mathematical model includes the fundamental conservation laws of solid mechanics – mass, momentum, and energy. Each rock has a significantly anisotropic structure; however, each crystallite might be considered isotropic, so a whole rock model has a quasi-isotropic structure. This idea gives an opportunity to use Hooke’s law inside each crystallite, thus explicitly accounting for the anisotropy of the rocks and the stress-strain state under loading. Inelastic behavior is described in the framework of two different models: the von Mises yield criterion and a modified Drucker-Prager yield criterion. A damage accumulation theory is also implemented in order to describe the failure process. 
The obtained effective characteristics of the rocks are then used for modelling the evolution of a rock mass when mining is carried out either by an open pit or by an underground opening.
Keywords: damage accumulation, Drucker-Prager yield criterion, failure, mathematical modelling, three-point bending, uniaxial compression
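The von Mises yield criterion mentioned in the abstract can be illustrated with a minimal numerical sketch; the stress components and yield strength below are illustrative assumptions, not values from the study.

```python
import math

def von_mises_stress(s11, s22, s33, s12, s23, s13):
    """Equivalent (von Mises) stress from the six Cauchy stress components."""
    return math.sqrt(0.5 * ((s11 - s22)**2 + (s22 - s33)**2 + (s33 - s11)**2)
                     + 3.0 * (s12**2 + s23**2 + s13**2))

def yields(stress_components, yield_strength):
    """True if the von Mises criterion predicts inelastic behaviour."""
    return von_mises_stress(*stress_components) >= yield_strength

# Uniaxial compression (as in the experiments): only s11 is non-zero,
# so the equivalent stress equals |s11| and yielding starts exactly at
# the yield strength. All numbers are illustrative (MPa).
uniaxial = (-120.0, 0.0, 0.0, 0.0, 0.0, 0.0)
vm = von_mises_stress(*uniaxial)  # 120.0
```

For uniaxial states the criterion reduces to comparing |s11| with the yield strength, which is why uniaxial compression tests are a natural calibration point for the model.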
Procedia PDF Downloads 175
611 Technique for Online Condition Monitoring of Surge Arresters
Authors: Anil S. Khopkar, Kartik S. Pandya
Abstract:
Overvoltage in power systems is a phenomenon that cannot be avoided. However, it can be controlled to a certain extent. Power system equipment must be protected against overvoltage to avoid system failure. Metal Oxide Surge Arresters (MOSA) are connected to the system for the protection of the power system against overvoltages. A MOSA behaves as an insulator under normal working conditions but offers a conductive path under overvoltage conditions. A MOSA consists of zinc oxide elements (ZnO blocks), which have non-linear V-I characteristics. The ZnO blocks are connected in series and fitted in a ceramic or polymer housing. These blocks degrade due to aging under continuous operation. Degradation of the zinc oxide elements increases the leakage current flowing through the surge arrester. This increased leakage current results in an increased temperature of the surge arrester, which further decreases the resistance of the zinc oxide elements. As a result, the leakage current increases, which again increases the temperature of the MOSA. This creates thermal runaway conditions for the MOSA. Once it reaches the thermal runaway condition, it cannot return to normal working conditions. This condition is a primary cause of premature failure of surge arresters; as the MOSA constitutes a core protective device for electrical power systems against transients, it contributes significantly to the reliable operation of the power system network. Hence, the condition monitoring of surge arresters should be done at periodic intervals. Online and offline condition monitoring techniques are available for surge arresters. Offline condition monitoring techniques are not very popular, as they require removing surge arresters from the system, which requires a system shutdown. Hence, online condition monitoring techniques are very popular. This paper presents an evaluation technique for the surge arrester condition based on leakage current analysis. 
The maximum amplitude of the total leakage current (IT), the maximum amplitude of the fundamental resistive leakage current (IR) and the maximum amplitude of the third harmonic resistive leakage current (I3rd) have been analyzed as indicators for surge arrester condition monitoring.
Keywords: metal oxide surge arrester (MOSA), overvoltage, total leakage current, resistive leakage current
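As a rough sketch of how the fundamental and third-harmonic components of a sampled leakage current can be separated, the FFT-based snippet below uses a synthetic waveform; the sampling rate, mains frequency and amplitudes are illustrative assumptions, not measurements from the paper (which additionally separates the resistive part of the current).

```python
import numpy as np

def harmonic_amplitudes(current, fs, f0, harmonics=(1, 3)):
    """Amplitudes of selected harmonics of a periodic signal via the FFT.

    current : sampled leakage-current waveform (A)
    fs      : sampling frequency (Hz)
    f0      : fundamental (mains) frequency (Hz)
    """
    n = len(current)
    spectrum = np.fft.rfft(current) / n * 2.0          # single-sided amplitudes
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = {}
    for h in harmonics:
        idx = np.argmin(np.abs(freqs - h * f0))        # nearest frequency bin
        out[h] = np.abs(spectrum[idx])
    return out

# Synthetic waveform: 1 mA fundamental plus 0.2 mA third harmonic at 50 Hz,
# sampled over an integer number of cycles so the bins are exact.
fs, f0 = 10_000.0, 50.0
t = np.arange(0, 0.2, 1.0 / fs)
i_leak = 1e-3 * np.sin(2 * np.pi * f0 * t) + 0.2e-3 * np.sin(2 * np.pi * 3 * f0 * t)
amps = harmonic_amplitudes(i_leak, fs, f0)             # {1: ~1e-3, 3: ~0.2e-3}
```

A growing third-harmonic amplitude relative to the fundamental is the kind of indicator the abstract describes for tracking arrester degradation.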
Procedia PDF Downloads 67
610 3D Codes for Unsteady Interaction Problems of Continuous Mechanics in Euler Variables
Authors: M. Abuziarov
Abstract:
The designed complex is intended for the numerical simulation of fast dynamic processes of interaction between heterogeneous media susceptible to significant deformation. The main challenges in solving such problems are associated with the construction of the numerical meshes. Currently, there are two basic approaches to this problem. One uses a Lagrangian or Lagrangian-Eulerian grid associated with the boundaries of the media; the second uses a fixed Eulerian mesh whose boundary cells cut the boundaries of the media and require the calculation of the cut volumes. Both approaches require complex grid generators and significant time for preparing the code’s data for simulation. In these codes, the problems are solved using two grids, a regular fixed grid and a mobile local Lagrangian-Eulerian grid (the ALE approach) accompanying the contact and free boundaries, the surfaces of shock waves and phase transitions, and other possible features of the solutions, with mutual interpolation of the integrated parameters. For modelling liquids, gases, and deformable solids, a Godunov scheme of increased accuracy is used in Lagrangian-Eulerian variables, the same for the Euler equations and for the Euler-Cauchy equations describing the deformation of the solid. The increased accuracy of the scheme is achieved by using a 3D space-time dependent solution of the discontinuity problem (a 3D space-time dependent Riemann problem solver). The same solution is used to calculate the interaction at the liquid-solid surface (the fluid-structure interaction problem). The codes do not require complex 3D mesh generators; only the surfaces of the objects to be calculated, as STL files created by means of engineering graphics, are given by the user, which greatly simplifies preparing the task and makes the codes convenient for direct use by the designer at the design stage. 
The results of test solutions and of applications related to the generation and propagation of detonation and shock waves loading structures are presented.
Keywords: fluid structure interaction, Riemann solver, Euler variables, 3D codes
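As a highly simplified, one-dimensional illustration of a Godunov-type finite-volume update (linear advection with its exact Riemann solution, rather than the full 3D Euler equations of the paper), the sketch below uses an illustrative square pulse on a periodic grid:

```python
import numpy as np

def godunov_advection(u0, a, dx, dt, steps):
    """First-order Godunov (upwind) scheme for u_t + a u_x = 0, a > 0, periodic.

    For linear advection the exact Riemann solution at each cell interface
    is simply the upwind state, so the Godunov flux is F = a * u_left.
    """
    c = a * dt / dx                   # CFL number; the scheme is stable for c <= 1
    assert 0.0 < c <= 1.0
    u = u0.copy()
    for _ in range(steps):
        u = u - c * (u - np.roll(u, 1))   # conservative update with upwind fluxes
    return u

# A square pulse transported across a periodic domain; with c = 1 the
# update reproduces the exact shift of the initial data.
u0 = np.zeros(100)
u0[10:20] = 1.0
u = godunov_advection(u0, a=1.0, dx=0.01, dt=0.01, steps=5)
```

The paper's solver replaces this scalar interface problem with a 3D space-time Riemann solver, but the structure of the update (solve the interface problem, form fluxes, advance cell averages) is the same.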
Procedia PDF Downloads 439
609 Practical Software for Optimum Bore Hole Cleaning Using Drilling Hydraulics Techniques
Authors: Abdulaziz F. Ettir, Ghait Bashir, Tarek S. Duzan
Abstract:
Proper well planning is vital to achieving a successful drilling program by preventing and overcoming all drilling problems and minimizing operating costs. The hydraulic system plays an active role during drilling operations and, when well designed, accelerates the drilling effort and lowers the overall well cost. Likewise, an improperly designed hydraulic system can slow the drill rate, fail to clean the hole of cuttings, and cause kicks. In most cases, common sense and commercially available computer programs are the only elements required to design the hydraulic system. Drilling optimization is the logical process of analyzing the effects and interactions of drilling variables through applied drilling and hydraulic equations and mathematical modeling to achieve maximum drilling efficiency at minimum drilling cost. In this paper, practical software is adopted to define drilling optimization models including four different optimum keys, namely Opti-flow, Opti-clean, Opti-slip and Opti-nozzle, that can help to achieve high drilling efficiency at lower cost. The data used in this research come from vertical and horizontal wells recently drilled in Waha Oil Company fields. The input data are: formation type, geopressures, hole geometry, bottom hole assembly and mud rheology. Upon data analysis, the results for all wells show that the proposed program provides higher accuracy than that proposed by the company in terms of hole cleaning efficiency and cost breakdown, taking the actual data as a reference base for all wells. Finally, it is recommended to use the established optimization software during drilling design to determine correct drilling parameters that provide high drilling efficiency, good borehole cleaning and all other hydraulic parameters, which helps to minimize hole problems and control drilling operation costs.
Keywords: optimum keys, opti-flow, opti-clean, opti-slip, opti-nozzle
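One elementary quantity behind hole-cleaning calculations is the mean annular velocity of the drilling fluid, the flow rate divided by the annular cross-sectional area. The sketch below shows that single calculation only, with illustrative dimensions; it is not the paper's Opti-clean model and uses no data from the Waha Oil Company wells.

```python
import math

def annular_velocity(q, d_hole, d_pipe):
    """Mean annular velocity (m/s) of the drilling fluid.

    q      : flow rate (m^3/s)
    d_hole : hole (or casing) inner diameter (m)
    d_pipe : drill-pipe outer diameter (m)
    """
    area = math.pi / 4.0 * (d_hole**2 - d_pipe**2)   # annular cross-section
    return q / area

# Illustrative case: 2 m^3/min pumped through an 8.5 in (0.2159 m) hole
# around 5 in (0.127 m) drill pipe.
q = 2.0 / 60.0                                        # m^3/min -> m^3/s
v = annular_velocity(q, d_hole=0.2159, d_pipe=0.127)  # ~1.39 m/s
```

Cuttings are lifted only when the annular velocity comfortably exceeds the cuttings slip velocity, which is the comparison the abstract's Opti-clean and Opti-slip keys are built around.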
Procedia PDF Downloads 319
608 Performance Demonstration of Extendable NSPO Space-Borne GPS Receiver
Authors: Hung-Yuan Chang, Wen-Lung Chiang, Kuo-Liang Wu, Chen-Tsung Lin
Abstract:
The National Space Organization (NSPO) completed in 2014 the development of a space-borne GPS receiver, including design, manufacture, comprehensive functional testing, environmental qualification testing and so on. The main performance characteristics of this receiver include 8-meter positioning accuracy, 0.05 m/s velocity accuracy, a cold-start time of at most 90 seconds, and operation in high-dynamic scenarios of up to 15 g. The receiver will be integrated in the autonomous FORMOSAT-7 NSPO-built satellite scheduled to be launched in 2019 to execute pre-defined scientific missions. The flight model of this receiver, manufactured in early 2015, will pass comprehensive functional tests and environmental acceptance tests, etc., which are expected to be completed by the end of 2015. The space-borne GPS receiver is a pure software design in which all GPS baseband signal processing is executed by a digital signal processor (DSP), of which currently only 50% of the throughput is used. In response to the booming development of global navigation satellite systems, NSPO will gradually expand this receiver into a multi-mode, multi-band, high-precision navigation receiver, and even a science payload, such as a reflectometry receiver for a global navigation satellite system. The fundamental purpose of this extension study is to port some software algorithms, such as signal acquisition and correlation, reused code and a large amount of the computational load, to an FPGA whose processor is responsible for operational control, the navigation solution, orbit propagation and so on. Because the development and evolution of FPGAs is quite fast, the new system architecture upgraded via an FPGA should be able to achieve the goal of being a multi-mode, multi-band, high-precision navigation receiver or scientific receiver. Finally, the test results show that the new system architecture not only retains the original overall performance but also sets aside more resources for future expansion. 
This paper will explain the detailed DSP/FPGA architecture, development, test results, and the goals of the next development stage of this receiver.
Keywords: space-borne, GPS receiver, DSP, FPGA, multi-mode multi-band
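The acquisition-by-correlation step that the study proposes porting to the FPGA can be sketched in floating point; the snippet below uses a random ±1 spreading code as a stand-in for a real GPS C/A code, so the code, phase and noise level are all assumptions for illustration, and an FPGA implementation would use fixed-point hardware correlators rather than NumPy.

```python
import numpy as np

def acquire_code_phase(received, code):
    """Code phase (in samples) maximising the circular correlation between
    the received samples and a local code replica.

    FFT-based circular correlation is a standard way to search all code
    phases at once in GNSS acquisition engines.
    """
    corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code)))
    return int(np.argmax(np.abs(corr)))

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)      # stand-in for a 1023-chip C/A code
true_phase = 357
received = np.roll(code, true_phase) + 0.1 * rng.standard_normal(1023)
phase = acquire_code_phase(received, code)      # recovers the 357-sample shift
```

The correlation peak stands far above the noise floor here (1023 vs. roughly the square root of the code length), which is why the search parallelises so well onto FPGA logic.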
Procedia PDF Downloads 369
607 Factors Affecting Expectations and Intentions of University Students’ Mobile Phone Use in Educational Contexts
Authors: Davut Disci
Abstract:
Objective: to measure the factors affecting university students’ expectations and intentions of using mobile phones in educational contexts, using advanced equation and modeling techniques. Design and Methodology: According to the literature, Mobile Addiction, Parental Surveillance-Safety/Security, Social Relations, and Mobile Behavior are the terms most used to define people’s mobile use. Therefore, these variables were measured to estimate their effects on expectations and intentions of using mobile phones in an educational context. 421 university students (229 female and 192 male) participated in this study. For the purpose of examining mobile behavior and educational expectations and intentions, a questionnaire was prepared and applied to the participants, who had to answer all the questions online. Responses to close-ended questions were analyzed using The Statistical Package for the Social Sciences (SPSS) software, reliabilities were measured by Cronbach’s Alpha analysis, hypotheses were examined via Multiple Regression and Linear Regression analysis, and the model was tested with the Structural Equation Modeling (SEM) technique, which is important for testing the model scientifically. Besides these responses, open-ended questions were taken into consideration. Results: Analysis of the data gathered from the close-ended questions shows that Mobile Addiction, Parental Surveillance, Social Relations and Frequency of Using Mobile Phone Applications affect the mobile behavior of the participants at different levels, helping them to use mobile phones in an educational context. Moreover, as for the open-ended questions, participants stated that they use many mobile applications in their learning environment for contacting friends, watching educational videos, and finding course material via the internet. They also agreed that the mobile phone brings greater flexibility to their lives. 
According to the SEM results, the model was not validated; it may be improved so that it holds in SEM as well as in multiple regression. Conclusion: This study shows that the specified model can be used by educationalists and school authorities to improve their learning environments.
Keywords: education, mobile behavior, mobile learning, technology, Turkey
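The multiple-regression step of the analysis can be sketched in miniature; the predictor and outcome values below are synthetic stand-ins for the questionnaire scales (addiction, surveillance, social relations), not the study's data, and SPSS/SEM software is replaced by a plain least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 421                                    # same sample size as the study
# Synthetic, z-scored questionnaire scales: addiction, surveillance, social relations.
X = rng.standard_normal((n, 3))
beta_true = np.array([0.5, 0.2, 0.3])      # hypothetical true effects
y = X @ beta_true + 0.1 * rng.standard_normal(n)   # intention-to-use outcome

# Ordinary least squares with an intercept column.
X1 = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
intercept, slopes = coef[0], coef[1:]      # slopes estimate beta_true
```

With 421 respondents and modest noise, the estimated slopes recover the assumed effects closely, which is the kind of per-predictor effect size the regression analysis in the abstract reports.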
Procedia PDF Downloads 421
606 Factors Affecting Expectations and Intentions of University Students in Educational Context
Authors: Davut Disci
Abstract:
Objective: to measure the factors affecting university students’ expectations and intentions of using mobile phones in educational contexts, using advanced equation and modeling techniques. Design and Methodology: According to the literature, Mobile Addiction, Parental Surveillance-Safety/Security, Social Relations, and Mobile Behavior are the terms most used to define people’s mobile use. Therefore, these variables were measured to estimate their effects on expectations and intentions of using mobile phones in an educational context. 421 university students (229 female and 192 male) participated in this study. For the purpose of examining mobile behavior and educational expectations and intentions, a questionnaire was prepared and applied to the participants, who had to answer all the questions online. Responses to close-ended questions were analyzed using The Statistical Package for the Social Sciences (SPSS) software, reliabilities were measured by Cronbach’s Alpha analysis, hypotheses were examined via Multiple Regression and Linear Regression analysis, and the model was tested with the Structural Equation Modeling (SEM) technique, which is important for testing the model scientifically. Besides these responses, open-ended questions were taken into consideration. Results: Analysis of the data gathered from the close-ended questions shows that Mobile Addiction, Parental Surveillance, Social Relations and Frequency of Using Mobile Phone Applications affect the mobile behavior of the participants at different levels, helping them to use mobile phones in an educational context. Moreover, as for the open-ended questions, participants stated that they use many mobile applications in their learning environment for contacting friends, watching educational videos, and finding course material via the internet. They also agreed that the mobile phone brings greater flexibility to their lives. 
According to the SEM results, the model was not validated; it may be improved so that it holds in SEM as well as in multiple regression. Conclusion: This study shows that the specified model can be used by educationalists and school authorities to improve their learning environments.
Keywords: learning technology, instructional technology, mobile learning, technology
Procedia PDF Downloads 452
605 The Modern Era in the Cricket World: How Far Have We Really Come?
Authors: Habib Noorbhai
Abstract:
History of Cricket: Cricket has a known history spanning from the 16th century to the present, with international matches having been played since 1844. The game of cricket arrived in Australia as soon as colonization began in 1788. Cricketers started playing on turf wickets in the late 1800s, and dimensions for both the boundary and the pitch later became standardized. As the years evolved, cricket bats and balls, protective equipment, playing surfaces and the three formats of the game adapted to the playing conditions and laws of cricket. Business of Cricket: During the late 1900s, the shorter version of the game (T20) was introduced in order to attract crowds to stadiums and television viewers for broadcasting rights. One could argue whether this was merely a business venture or a platform for enhancing the performance of cricketers. Between the 16th and 20th centuries, cricket was a common sport played for passion and pure enjoyment. Industries saw potential in diversified business ventures in the game (as well as in other sports played globally), and cricket subsequently became a career for players, administrators and coaches, the media, health professionals, managers and the corporate world. Pros and Cons of Cricket Developments: At present, the game has gained significantly from the use of technology, sports sciences and varied mechanisms to optimize performance and forecast frameworks for injury prevention in cricket players. Unfortunately, these had not been available in the earlier times of cricket, and it would prove interesting to observe how the greats of the game would have benefited from such developments. Cricketers in the 21st century are faced with many overwhelming commitments. One of these is playing cricket for 11 months a year, spending more than 250 days away from home and their families. As the demands of player contracts increase, so does the supply of commitment and performances expected from players. 
Way Forward and Future Implications: The questions are: Are such disadvantages contributing to the overload and injury risks of players? How far have we really come in the cricketing world, or has everything since the game’s inception become institutionalized within a business model? These are the fundamental questions which need to be addressed, and legislation, policies and ethical considerations need to be drafted and implemented. These will ensure that there is an equilibrium of effective transitions and management of not only the players but also the credibility of this wonderful game.
Keywords: enterprising business of cricket, technology, legislation, credibility
Procedia PDF Downloads 448
604 Towards an Equitable Proprietary Regime: Property Rights Over Human Genes as a Case Study
Authors: Aileen Editha
Abstract:
The legal recognition of property rights over human genes is a divisive topic to which there is no resolution. As a frequently discussed topic, scholars and practitioners often highlight the inadequacies of a proprietary regime. However, little has been said in regard to the nature of human genetic materials (HGMs). This paper proposes approaching the issue of property over HGMs from an alternative perspective that looks at the personal and social value and valuation of HGMs. This paper will highlight how the unique and unresolved status of HGMs is incompatible with the main tenets of property and, consequently, contributes to legal ambiguity and uncertainty in the regulation of property rights over human genes. HGMs are perceived as part of nature and a free-for-all while also being within an individual’s private sphere. Additionally, they are also considered to occupy a unique “not-private-nor-public” status. This limbo-like position clashes with property’s fundamental characteristic that relies heavily on a clear public/private dichotomy. Moreover, as property is intrinsically linked to the legal recognition of one’s personhood, this irresolution benefits some while disadvantaging others. In particular, it demands the publicization of once-private genes for the “common good” but subsequently encourages privatization (through labor) of these now-public genes. This results in the gain of some (already privileged) individuals while enabling the disenfranchisement of members of minority groups, such as Indigenous communities. This paper will discuss real and intellectual property rights over human genes, such as the right to income or patent rights, in Canada and the US. This paper advocates for a sui generis approach to governing rights and interests over human genes that would not rely on having a strict public/private dichotomy. 
Not only would this improve legal certainty and clarity, but it would also alleviate—or, at the very least, minimize—the role that the current law plays in further entrenching existing systemic inequalities. Despite the specificity of this topic, this paper argues that there are broader lessons to be learned. This issue is an insightful case study on the interconnection of various principles in law, society, and property, and what must be done when discordance between one or more of those principles has detrimental societal outcomes. Ultimately, it must be remembered that property is an adaptable and malleable instrument that can be developed to ensure it contributes to equity and flourishing.
Keywords: property rights, human genetic materials, critical legal scholarship, systemic inequalities
Procedia PDF Downloads 80
603 Segmenting 3D Optical Coherence Tomography Images Using a Kalman Filter
Authors: Deniz Guven, Wil Ward, Jinming Duan, Li Bai
Abstract:
Over the past two decades or so, Optical Coherence Tomography (OCT) has been used to diagnose retina and optic nerve diseases. The retinal nerve fibre layer, for example, is a powerful diagnostic marker for detecting and staging glaucoma. With the advances in optical imaging hardware, the adoption of OCT is now commonplace in clinics. More and more OCT images are being generated, and for these OCT images to have clinical applicability, accurate automated OCT image segmentation software is needed. OCT image segmentation is still an active research area, as OCT images are inherently noisy, with multiplicative speckle noise. Simple edge detection algorithms are unsuitable for detecting retinal layer boundaries in OCT images. Intensity fluctuation, motion artefacts, and the presence of blood vessels further decrease OCT image quality. In this paper, we introduce a new method for segmenting three-dimensional (3D) OCT images. This involves the use of a Kalman filter, which is commonly used in computer vision for object tracking. The Kalman filter is applied to the 3D OCT image volume to track the retinal layer boundaries through the slices within the volume and thus segment the 3D image. Specifically, after some pre-processing of the OCT images, points on the retinal layer boundaries in the first image are identified, and curve fitting is applied to them such that the layer boundaries can be represented by the coefficients of the curve equations. These coefficients then form the state space for the Kalman filter. The filter then produces an optimal estimate of the current state of the system by updating its previous state using the measurements available, in the form of a feedback control loop. The results show that the algorithm can be used to segment the retinal layers in OCT images. 
One limitation of the current algorithm is that the curve representation of the retinal layer boundary does not work well when the layer boundary splits into two, e.g., at the optic nerve. This may be resolved by using a different approach to representing the boundaries, such as b-splines or level sets. The use of a Kalman filter shows promise for developing accurate and effective 3D OCT segmentation methods.
Keywords: optical coherence tomography, image segmentation, Kalman filter, object tracking
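A minimal sketch of the slice-to-slice tracking idea, assuming a random-walk state model over the fitted curve coefficients; the quadratic boundary, noise levels and process/measurement covariances below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_track(measurements, q=1e-4, r=1e-2):
    """Track a coefficient vector across OCT slices with a random-walk model.

    State: the boundary-curve coefficients. Prediction is the identity
    (neighbouring slices are similar), so each slice's independently
    fitted coefficients act as a noisy measurement updating the estimate.
    """
    x = measurements[0].copy()                 # initialise from the first slice
    n = len(x)
    p = np.eye(n)                              # state covariance
    estimates = [x.copy()]
    for z in measurements[1:]:
        p = p + q * np.eye(n)                  # predict: covariance grows
        k = p @ np.linalg.inv(p + r * np.eye(n))   # Kalman gain
        x = x + k @ (z - x)                    # update with this slice's fit
        p = (np.eye(n) - k) @ p
        estimates.append(x.copy())
    return np.array(estimates)

# Illustrative data: coefficients of a quadratic boundary y = a x^2 + b x + c,
# observed with noise on each of 50 slices (as if fitted per slice).
rng = np.random.default_rng(1)
true = np.array([0.001, -0.2, 150.0])
slices = np.array([true + 0.5 * rng.standard_normal(3) for _ in range(50)])
est = kalman_track(slices)
```

Because the filter pools information across neighbouring slices, its estimates sit closer to the underlying coefficients than the raw per-slice fits, which is the smoothing effect the paper exploits through the volume.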
Procedia PDF Downloads 482
602 Evaluation of the Effect of Lactose Derived Monosaccharide on Galactooligosaccharides Production by β-Galactosidase
Authors: Yenny Paola Morales Cortés, Fabián Rico Rodríguez, Juan Carlos Serrato Bermúdez, Carlos Arturo Martínez Riascos
Abstract:
Numerous benefits of galactooligosaccharides (GOS) as prebiotics have motivated the study of enzymatic processes for their production. These processes have special complexities due to several factors that make high productivity difficult, such as enzyme type, reaction medium pH, substrate concentrations and the presence of inhibitors, among others. In the present work, the production of galactooligosaccharides (with different degrees of polymerization: two, three and four) from lactose was studied. The study considers the formulation of a mathematical model that predicts the production of GOS from lactose using the enzyme β-galactosidase. The effect of pH on the reaction was studied using phosphate buffer at three pH values (6.0, 6.5 and 7.0). It was observed that at pH 6.0 the enzymatic activity was insignificant, while at pH 7.0 the enzymatic activity was approximately 27 times greater than at pH 6.5. The latter result differs from previously reported results. Therefore, pH 7.0 was chosen as the working pH. Additionally, the enzyme concentration was analyzed, which showed that its effect depends on the pH; the concentration was set at 0.272 mM for the following studies. Afterwards, experiments were performed varying the lactose concentration to evaluate its effects on the process and to generate the data for the adjustment of the mathematical model parameters. The mathematical model considers the reactions of lactose hydrolysis and transgalactosylation for the production of disaccharides and trisaccharides, with their inverse reactions. The production of tetrasaccharides was negligible and, because of that, it was not included in the model. 
The reaction was monitored by HPLC, and the experimental data were analyzed quantitatively in Matlab, using solvers for the integration of systems of differential equations (ode15s) and for nonlinear optimization (fminunc). The results confirm that the transgalactosylation and hydrolysis reactions are reversible; additionally, inhibition by glucose and galactose is observed in the production of GOS. Regarding the production process, the results show that high initial lactose concentrations are necessary because they favor the transgalactosylation reaction, while low concentrations favor the hydrolysis reactions.
Keywords: β-galactosidase, galactooligosaccharides, inhibition, lactose, Matlab, modeling
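The Matlab workflow described in the abstract (stiff ODE integration with ode15s, nonlinear parameter fitting with fminunc) can be sketched in Python with SciPy's analogous tools. The reaction network below follows the abstract (reversible hydrolysis and reversible transgalactosylation to trisaccharides, tetrasaccharides neglected), but all rate constants and the initial concentration are illustrative placeholders, not the fitted values of the study:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def rates(t, y, k):
    """Mass-action sketch: lactose <-> glucose + galactose (hydrolysis),
    lactose + galactose <-> trisaccharide (transgalactosylation)."""
    lac, glc, gal, tri = y                   # concentrations, mM
    k1, k1r, k2, k2r = k                     # forward/reverse rate constants
    r_hyd = k1 * lac - k1r * glc * gal       # net hydrolysis rate
    r_tg = k2 * lac * gal - k2r * tri        # net transgalactosylation rate
    return [-r_hyd - r_tg, r_hyd, r_hyd - r_tg, r_tg]

def simulate(k, lac0=278.0, t_end=120.0):
    """Integrate the network; LSODA is SciPy's stiff-capable analogue of ode15s."""
    return solve_ivp(rates, (0.0, t_end), [lac0, 0.0, 0.0, 0.0],
                     args=(k,), method="LSODA", dense_output=True,
                     rtol=1e-8, atol=1e-10)

def sse(k, t_obs, y_obs, lac0):
    """Sum-of-squares objective for fitting k to HPLC concentration data."""
    sol = simulate(k, lac0, t_obs[-1])
    return float(np.sum((sol.sol(t_obs).T - y_obs) ** 2))

# A fit would then be run with an fminunc-style optimizer, e.g.:
# fit = minimize(sse, x0=[0.01, 1e-4, 1e-4, 0.01],
#                args=(t_obs, y_obs, 278.0), method="Nelder-Mead")
```

A useful sanity check on such a network is conservation: the glucose-moiety balance (lactose + glucose + trisaccharide) must stay constant over the whole simulation.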
601 Interaction Between Task Complexity and Collaborative Learning on Virtual Patient Design: The Effects on Students’ Performance, Cognitive Load, and Task Time
Authors: Fatemeh Jannesarvatan, Ghazaal Parastooei, Jimmy Frerejan, Saedeh Mokhtari, Peter Van Rosmalen
Abstract:
Medical and dental education increasingly emphasizes the acquisition, integration, and coordination of complex knowledge, skills, and attitudes that can be applied in practical situations. Instructional design approaches have focused on using real-life tasks to facilitate complex learning in both real and simulated environments. The four-component instructional design (4C/ID) model has become a useful guideline for designing instructional materials that improve learning transfer, especially in health professions education. The objective of this study was to apply the 4C/ID model to the creation of virtual patients (VPs) that dental students can use to practice their clinical management and clinical reasoning skills. The study first explored the context and concept of complicating factors and common errors for novices and how they can affect the design of a virtual patient program. It then selected key dental information and considered the content needs of dental students. The design of the virtual patients was based on the fundamental principles of the 4C/ID model: designing learning tasks that reflect real patient scenarios, with different levels of task complexity that challenge students to apply their knowledge and skills in different contexts; creating varied supportive learning materials that are closely integrated with the learning tasks and the students' curricula; providing cognitive feedback at different levels of the program; and providing procedural information, with students following a step-by-step process from history taking to writing a comprehensive treatment plan. Four virtual patients were designed using these principles, and an experimental design was used to test their effectiveness in achieving the intended educational outcomes.
The 4C/ID model provides an effective framework for designing engaging and successful virtual patients that support the transfer of knowledge and skills for dental students. However, there are some challenges and pitfalls that instructional designers should take into account when developing these educational tools.
Keywords: 4C/ID model, virtual patients, education, dental, instructional design
600 Development and Validation of Cylindrical Linear Oscillating Generator
Authors: Sungin Jeong
Abstract:
This paper presents a cylindrical linear oscillating generator for hybrid electric vehicle applications. The focus of the study is the selection of the optimal model and the design rules for a cylindrical linear oscillating generator with permanent magnets in the back-iron translator. As an initial modeling step, the cylindrical topology is represented by an equivalent magnetic circuit that accounts for leakage elements. This topology with permanent magnets in the back-iron translator is characterized by the number of phases and the stroke displacement. For a more accurate analysis of an oscillating machine, the thrust of the single-phase and three-phase systems is compared over one pole pitch of forward and backward motion. Through this analysis and comparison, a single-phase system with cylindrical topology is selected as the optimal topology. Finally, the detailed design of the optimal topology takes magnetic saturation effects into account through finite element analysis. In addition, the losses are examined to obtain more accurate results: copper loss in the conductors of the machine windings, eddy-current loss in the permanent magnets, and iron loss in the electrical steel. Consideration of thermal performance and mechanical robustness is essential, because the high temperatures generated in each region of the generator affect the overall efficiency and the insulation of the machine. Moreover, an electric machine with linear oscillating movement requires a support system that can withstand dynamic forces and mechanical masses. Accordingly, a fatigue analysis of the shaft is carried out using the kinetic equations, and the thermal characteristics are analyzed at the operating frequency in each region. The results of this study provide important design rules for linear oscillating machines.
The results enable more accurate machine design and more accurate prediction of machine performance.
Keywords: equivalent magnetic circuit, finite element analysis, hybrid electric vehicle, linear oscillating generator
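The equivalent-magnetic-circuit step used for the initial modeling can be illustrated with a minimal reluctance network: a permanent magnet modeled as an MMF source behind its internal reluctance, in series with an air gap that has a leakage path in parallel. All dimensions and the leakage assumption below are hypothetical placeholders, not values for the machine in the abstract:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def reluctance(length_m, area_m2, mu_r=1.0):
    """Reluctance of one flux-path element: R = l / (mu0 * mu_r * A)."""
    return length_m / (MU0 * mu_r * area_m2)

# Illustrative geometry (assumed): 4 mm magnet, 1 mm air gap, 4 cm^2 cross-section.
Br, h_pm, A = 1.2, 0.004, 4e-4          # remanence (T), magnet height (m), area (m^2)
F_pm = Br * h_pm / MU0                  # equivalent magnet MMF, ampere-turns
R_pm = reluctance(h_pm, A)              # magnet internal reluctance
R_gap = reluctance(0.001, A)            # air-gap reluctance
R_leak = 5.0 * R_gap                    # assumed leakage reluctance (parallel path)

# Gap and leakage in parallel, in series with the magnet:
R_par = R_gap * R_leak / (R_gap + R_leak)
phi_total = F_pm / (R_pm + R_par)                 # total flux leaving the magnet, Wb
phi_gap = phi_total * R_leak / (R_gap + R_leak)   # flux divider: useful gap flux
leakage_factor = phi_gap / phi_total              # fraction of flux crossing the gap
```

The point of this initial step is exactly what the abstract describes: it gives a fast estimate of the useful gap flux (and hence thrust capability) before magnetic saturation is checked with finite element analysis.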
599 Consideration for a Policy Change to the South African Collective Bargaining Process: A Reflection on National Union of Metalworkers of South Africa v Trenstar (Pty) (2023) 44 ILJ 1189 (CC)
Authors: Carlos Joel Tchawouo Mbiada
Abstract:
In the wake of the apartheid era, South Africa embarked on a democratisation of all its institutions, underpinned by a social justice perspective aimed at eradicating past injustices. These democratic values, based on fundamental human rights and equality, inform all rights enshrined in the Constitution of the Republic of South Africa, 1996. All rights are therefore infused with a social justice perspective, and labour rights are no exception. Labour law is regulated to the extent that it is viewed as too rigid; hence the call for more flexibility to enhance investment and boost job creation. This view, articulated by the Free Market Foundation, fell on deaf ears, as its opponents believe in what is termed regulated flexibility, which affords greater protection to vulnerable workers while promoting business opportunities and investment. The question this paper examines is how far the regulation of labour law should go in protecting employees. The question is prompted by the recent Constitutional Court judgment in National Union of Metalworkers of South Africa v Trenstar, which barred the employer from using replacement labour in response to a strike by its employees. Whether employers may use replacement labour and have recourse to lock-outs in response to strike action is considered in the context of the dichotomy between the free market and social justice perspectives, which are at loggerheads in the South African collective bargaining process. With the unemployment rate soaring, the aftermath of the COVID-19 pandemic, the effects of the war in Ukraine and, lately, the financial burden that load shedding places on companies running their businesses, this paper argues for a policy shift toward deregulation, or lesser state and judicial intervention.
This initiative would relieve the burden on companies to run a viable business while at the same time protecting existing jobs.
Keywords: labour law, replacement labour, right to strike, free market foundation perspective, social justice perspective
598 Test Procedures for Assessing the Peel Strength and Cleavage Resistance of Adhesively Bonded Joints with Elastic Adhesives under Detrimental Service Conditions
Authors: Johannes Barlang
Abstract:
Adhesive bonding plays a pivotal role in various industrial applications, ranging from automotive manufacturing to aerospace engineering. The peel strength of adhesives, a critical parameter reflecting the ability of an adhesive to withstand external forces, is crucial for ensuring the integrity and durability of bonded joints. This study provides a synopsis of the methodologies, influencing factors, and significance of peel testing in the evaluation of adhesive performance. Peel testing involves the measurement of the force required to separate two bonded substrates under controlled conditions. This study systematically reviews the different testing techniques commonly applied in peel testing, including the widely used 180-degree peel test and the T-peel test. Emphasis is placed on the importance of selecting an appropriate testing method based on the specific characteristics of the adhesive and the application requirements. The influencing factors on peel strength are multifaceted, encompassing adhesive properties, substrate characteristics, environmental conditions, and test parameters. Through an in-depth analysis, this study explores how factors such as adhesive formulation, surface preparation, temperature, and peel rate can significantly impact the peel strength of adhesively bonded joints. Understanding these factors is essential for optimizing adhesive selection and application processes in real-world scenarios. Furthermore, the study highlights the role of peel testing in quality control and assurance, aiding manufacturers in maintaining consistent adhesive performance and ensuring the reliability of bonded structures. The correlation between peel strength and long-term durability is discussed, shedding light on the predictive capabilities of peel testing in assessing the service life of adhesive bonds. In conclusion, this study underscores the significance of peel testing as a fundamental tool for characterizing adhesive performance. 
By delving into testing methodologies, influencing factors, and practical implications, this study contributes to the broader understanding of adhesive behavior and fosters advancements in adhesive technology across diverse industrial sectors.
Keywords: adhesively bonded joints, cleavage resistance, elastic adhesives, peel strength
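As a rough illustration of how a peel-test record (e.g. from a 180-degree or T-peel test) is reduced to a single peel-strength value, the average plateau force can be divided by the specimen width. This is a minimal sketch; the plateau-trimming fraction is an assumed convention for discarding the initial force rise and end effects, not a requirement of any particular standard:

```python
import statistics

def peel_strength(force_N, width_mm, skip_fraction=0.2):
    """Average peel force per unit width (N/mm) over the plateau of a trace.

    The first and last `skip_fraction` of the record are discarded to drop
    the initial force rise and end effects (assumed convention for this sketch).
    """
    n = len(force_N)
    lo, hi = int(n * skip_fraction), int(n * (1 - skip_fraction))
    plateau = force_N[lo:hi]             # central, quasi-steady part of the trace
    return statistics.mean(plateau) / width_mm

# Hypothetical force trace (N) for a 25 mm wide specimen:
trace = [0.0, 5.0, 9.0, 10.0, 10.5, 9.5, 10.0, 10.2, 9.8, 4.0]
strength = peel_strength(trace, 25.0)    # N/mm
```

In practice, the peel rate, temperature, and conditioning discussed in the abstract all change the plateau level, which is why they must be reported alongside the strength value.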