Search results for: output from learners
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3148

208 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust

Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin

Abstract:

The topic of the presented materials concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first comprises the engine exhaust manifold, turbocharger, and catalytic converters, which are collectively called the "hot part." The second is the gas exhaust system, which contains elements intended exclusively for reducing exhaust noise (mufflers, resonators) and is conventionally designated the "cold part." Designing the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the "cold part" with high accuracy in a given frequency range, but only if the input parameters are specified accurately, namely, the amplitude spectrum of the input noise and the acoustic impedance of the noise source, i.e., the engine together with the "hot part." Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linear regime) do not allow purely calculated results to be applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with its "hot part" on the basis of a combined computational and experimental study. The presented methodology includes several parts. The first part is a finite element simulation of the "cold part" of the exhaust system (taking into account the acoustic radiation impedance of the outlet pipe into open space), with the result in the form of the input impedance of the "cold part." The second part is a finite element simulation of the "hot part" of the exhaust system (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), with the result in the form of the input impedance of the "hot part." The third part of the technique consists of mathematical processing of the results according to the proposed formula for the convergence of the series obtained by summing the multiple reflections of the acoustic signal between the "cold part" and the "hot part." This is followed by a set of tests on an engine stand with two high-temperature pressure sensors measuring pulsations in the nozzle between the "hot part" and the "cold part" of the exhaust system, and subsequent processing of the test results according to a well-known technique in order to separate the "incident" and "reflected" waves. The final stage consists of mathematical processing of all calculated and experimental data to obtain a result in the form of the amplitude spectrum of the engine noise and its acoustic impedance.
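
As an illustration of the wave-separation step described above, a minimal sketch of the classic two-sensor plane-wave decomposition is given below; the sensor positions, gas temperature, and pressure spectra are hypothetical placeholders, not data from the study.

```python
import numpy as np

# Two-sensor plane-wave decomposition at a single frequency.
# Pressure field model: p(x) = A*exp(-1j*k*x) + B*exp(+1j*k*x),
# where A is the incident (forward) wave and B the reflected wave.

def decompose_waves(p1, p2, x1, x2, freq_hz, temp_k=900.0):
    """Return incident/reflected complex amplitudes and the normalized
    impedance at x = 0 from two complex pressure spectra p1, p2 measured
    at positions x1, x2 (m). Gas properties are rough placeholders."""
    gamma, r_gas = 1.35, 287.0           # exhaust-gas approximations
    c = np.sqrt(gamma * r_gas * temp_k)  # speed of sound at temp_k
    k = 2.0 * np.pi * freq_hz / c        # acoustic wavenumber
    m = np.array([[np.exp(-1j * k * x1), np.exp(1j * k * x1)],
                  [np.exp(-1j * k * x2), np.exp(1j * k * x2)]])
    a, b = np.linalg.solve(m, np.array([p1, p2]))
    z_norm = (a + b) / (a - b)           # impedance / (rho*c) at x = 0
    return a, b, z_norm

# Hypothetical single-frequency example with 50 mm sensor spacing.
a, b, z = decompose_waves(p1=1.0 + 0.2j, p2=0.7 - 0.4j,
                          x1=0.0, x2=0.05, freq_hz=200.0)
print(abs(b / a), z)  # reflection-coefficient magnitude, normalized impedance
```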

Keywords: acoustic impedance, engine exhaust system, FEM model, test stand

Procedia PDF Downloads 31
207 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, the knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. As to implementation, the computational complexity of adaptive beamforming is enormous when the array beamformer is equipped with a massive number of antenna sensors. To alleviate this difficulty, a partially adaptive GSC with fewer adaptive degrees of freedom and a faster adaptive response has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems caused by local scattering. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required for obtaining an appropriate steering vector. A matrix associated with the direction vectors of the signal sources is first created. Then projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms existing robust techniques.
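
For readers unfamiliar with the GSC structure referenced above, the following is a schematic numpy sketch of a standard (non-robust) GSC: quiescent weights, blocking matrix, and the unconstrained adaptive stage. It is not the authors' proposed robust estimator, and the array geometry and signal parameters are illustrative.

```python
import numpy as np

def gsc_weights(a, x):
    """Standard GSC: a = presumed steering vector (N,), x = snapshots (N, K).
    Returns the overall beamforming weight vector w = w_q - B @ w_a."""
    w_q = a / (a.conj() @ a)                 # quiescent (distortionless) weights
    # Blocking matrix: orthonormal basis of the null space of a^H.
    _, _, vh = np.linalg.svd(a.conj().reshape(1, -1))
    b = vh[1:].conj().T                      # (N, N-1), satisfies a^H B = 0
    r = x @ x.conj().T / x.shape[1]          # sample covariance matrix
    # Closed-form Wiener solution of the adaptive stage.
    w_a = np.linalg.solve(b.conj().T @ r @ b, b.conj().T @ r @ w_q)
    return w_q - b @ w_a

# Illustrative 10-element uniform linear array, half-wavelength spacing.
n_sens, k_snap = 10, 200
theta_sig, theta_int = np.deg2rad(0.0), np.deg2rad(30.0)
steer = lambda th: np.exp(1j * np.pi * np.arange(n_sens) * np.sin(th))
rng = np.random.default_rng(0)
s = steer(theta_sig)[:, None] * rng.standard_normal(k_snap)
i = steer(theta_int)[:, None] * (3.0 * rng.standard_normal(k_snap))
noise = (rng.standard_normal((n_sens, k_snap)) +
         1j * rng.standard_normal((n_sens, k_snap))) / np.sqrt(2)
x = s + i + noise
w = gsc_weights(steer(theta_sig), x)
y = w.conj() @ x                             # beamformer output snapshots
```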

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 88
206 Scenario-Based Learning Using Virtual Optometrist Applications

Authors: J. S. M. Yang, G. E. T. Chua

Abstract:

The Diploma in Optometry (OPT) course is a three-year program offered by Ngee Ann Polytechnic (NP) to train students to provide primary eye care. Students are equipped with foundational conceptual knowledge and practical skills in the first three semesters before taking clinical modules in the fourth to sixth semesters. In the clinical modules, students typically have difficulty integrating the knowledge and skills acquired in earlier semesters to perform general eye examinations on public patients at the NP Optometry Centre (NPOC). To help students overcome this challenge, a web-based game, Virtual Optometrist (VO), was developed to help them apply their skills and knowledge through scenario-based learning. It consists of two interfaces, the Optical Practice Counter (OPC) and the Optometric Consultation Room (OCR), which provide two simulated settings for authentic learning experiences. In the OPC, students recommend and provide appropriate frame and lens selections based on a virtual patient's case history. In the OCR, students diagnose and manage virtual patients with common ocular conditions. The simulated scenarios present real-world clinical situations that require contextual application of integrated knowledge from the relevant modules. The stages in the OPC and OCR are of increasing complexity, aligned with the clinical competency expected of students as they progress to more senior semesters; this prevents gameplay fatigue as VO is used over several semesters to achieve different learning outcomes. Numerous feedback opportunities are provided to students based on their decisions, allowing individualized learning to take place. The game-based learning element in VO is delivered through a scoreboard and leaderboard that enhance students' motivation to perform. Scores are based on the speed and accuracy of students' responses to the questions posed in the simulated scenarios, preparing students to perform accurately and effectively under time pressure in a realistic optometric environment. Learning analytics are generated in VO's back-end office from students' responses, offering real-time data on distinctive, observable learner behaviors to monitor students' engagement and learning progress. The back-end office also offers the versatility to add, edit, and delete scenarios for different intended learning outcomes. A Likert scale was used to measure the learning experience with VO of OPT Year 2 and Year 3 students. The survey results highlighted the learning benefits of implementing VO in the different modules: enhancing recall and reinforcement of clinical knowledge and its contextual application to develop higher-order thinking skills, increasing efficiency in clinical decision-making, facilitating learning from mistakes through immediate feedback and second attempts, providing exposure to common and significant ocular conditions that may otherwise not be encountered at the NPOC, and training effective communication skills.
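
As a purely hypothetical illustration of the speed-and-accuracy scoring described above (the actual VO scoring rules are not given in the abstract), a small sketch:

```python
# Hypothetical scoring rule combining accuracy and response speed,
# illustrating the kind of logic described for Virtual Optometrist.
def scenario_score(correct, response_time_s, time_limit_s=60.0,
                   base_points=100, speed_bonus_max=50):
    """Return 0 for a wrong answer; otherwise base points plus a bonus
    that shrinks linearly as the response time approaches the limit."""
    if not correct:
        return 0
    time_left = max(0.0, time_limit_s - response_time_s)
    return base_points + round(speed_bonus_max * time_left / time_limit_s)

# Example: a correct answer given in 18 s out of a 60 s limit.
print(scenario_score(True, 18.0))   # 100 + 35 = 135
```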

Keywords: authentic learning, game-based learning, scenario-based learning, simulated clinical scenarios

Procedia PDF Downloads 93
205 Optimal Applications of Solar Energy Systems: Comparative Analysis of Ground-Mounted and Rooftop Solar PV Installations in Drought-Prone and Residential Areas of the Indian Subcontinent

Authors: Rajkumar Ghosh, Bhabani Prasad Mukhopadhyay

Abstract:

The increasing demand for environmentally friendly energy solutions highlights the need to optimize solar energy systems. This study compares two types of solar energy systems: ground-mounted solar panels for drought-prone locations and rooftop solar PV installations occupying about 300 sq. ft. (approx. 28 sq. m.), which generate an electricity output of about 4,730 kWh/year and save roughly ₹14,191/year. As a clean and sustainable energy source, solar power is pivotal in reducing greenhouse gas (CO2) emissions, by about 85 tonnes over 25 years for such a system, and in combating climate change. The "PM Suryadaya Ghar-Muft Bijli Yojana" scheme seeks to empower Indian homes by giving free access to solar energy; the initiative is part of the Indian government's larger attempt to encourage clean and renewable energy sources while reducing reliance on traditional fossil fuels. This study reviews various installations and government reports to analyse the performance and impact of both ground-mounted and rooftop solar systems. It also examines the effectiveness of government subsidy programs for residential on-grid solar systems, including the ₹78,000 incentive for systems above 3 kW, as well as the subsidy schemes available for domestic agricultural grid use, in which systems up to 3 kW receive ₹43,764 while systems over 10 kW receive a fixed subsidy of ₹94,822. Households can save a substantial amount of energy and minimize their reliance on grid electricity by installing an appropriately sized solar plant. In terms of monthly household consumption, the suggested rooftop solar plant capacity is 1-2 kW for 0-150 units, 2-3 kW for 150-300 units, and above 3 kW for more than 300 units. Ground-mounted panels, particularly in arid regions, offer benefits such as scalability and optimal orientation but face challenges like land-use conflicts and environmental impact, particularly in drought-prone regions. By evaluating the distinct advantages and challenges of each system, this study aims to provide insights into their optimal applications, guiding stakeholders in making informed decisions to enhance solar energy efficiency and sustainability within regulatory constraints. The research also explores the implications of regulations, such as Italy's ban on ground-mounted solar panels on productive agricultural land, for solar energy strategies.
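
A small sketch of the sizing and subsidy figures quoted in the abstract (capacity bands by monthly consumption, the residential ₹78,000 incentive above 3 kW, and the tariff implied by 4,730 kWh/year saving about ₹14,191/year); all values are taken directly from the abstract and simplified, not from an official tariff schedule.

```python
# Rooftop sizing and savings sketch using only figures quoted in the abstract.
def recommended_capacity_kw(monthly_units):
    """Map monthly consumption (kWh) to the capacity band given in the study."""
    if monthly_units <= 150:
        return "1-2 kW"
    if monthly_units <= 300:
        return "2-3 kW"
    return "above 3 kW"

def residential_subsidy_inr(system_kw):
    """Residential on-grid incentive as stated in the abstract (above 3 kW)."""
    return 78_000 if system_kw > 3 else None  # smaller-system figure not given

def annual_savings_inr(annual_kwh, tariff_inr_per_kwh=14_191 / 4_730):
    """Savings at the tariff implied by the abstract's 4,730 kWh -> Rs 14,191."""
    return annual_kwh * tariff_inr_per_kwh

print(recommended_capacity_kw(220))          # '2-3 kW'
print(residential_subsidy_inr(5))            # 78000
print(round(annual_savings_inr(4_730)))      # ~14191
```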

Keywords: sustainability, solar energy, subsidy, rooftop solar energy, renewable energy

Procedia PDF Downloads 18
204 Comparative Quantitative Study on Learning Outcomes of Major Study Groups of an Information and Communication Technology Bachelor Educational Program

Authors: Kari Björn, Mikael Soini

Abstract:

Higher education system reforms are discussed, with a focus on the 2014 reform of the Finnish system of Universities of Applied Sciences. The new steering model is based on major legislative changes, output-oriented funding, and open information. The governmental steering reform, especially the financial model and the resulting institutional-level responses, such as curriculum reforms, are discussed, focusing especially on engineering programs. The paper is motivated by the management need to establish objective steering-related performance indicators and to apply them consistently across all educational programs. The close relationship to the governmental steering and funding model implies that internally derived indicators can be applied directly. Metropolia University of Applied Sciences (MUAS), as the case institution, is briefly introduced, focusing on engineering education in Information and Communications Technology (ICT) and its related programs. The reform forced consolidation of previously separate smaller programs into fewer units of student application. Under the new curriculum, ICT students have a common first year before they apply for a Major. A framework of parallel and longitudinal comparisons is introduced and used across Majors on two campuses. The new externally introduced performance criteria are applied internally to the ICT Majors using data from before and after the program merger. A comparative performance of the Majors after completion of the joint first year is established, focusing on previously omitted Majors for completeness of the analysis. Some new research questions resulting from the transfer of Majors between campuses and quota setting are discussed. The practical orientation identifies best practices to share or targets needing the most attention for improvement. This level of analysis is directly applicable at the student group and teaching team level, where corrective actions are possible when needs are identified. The analysis is quantitative, and the nature of the corrective actions is not discussed. Causal relationships and factor analysis are omitted because the campuses, their staff, and various pedagogical implementation details still contain too many undetermined factors for our limited data. Such qualitative analysis is left for further research. Further study must, however, be guided by the relevance of the observations.

Keywords: engineering education, integrated curriculum, learning outcomes, performance measurement

Procedia PDF Downloads 206
203 Psychophysiological Adaptive Automation Based on Fuzzy Controller

Authors: Liliana Villavicencio, Yohn Garcia, Pallavi Singh, Luis Fernando Cruz, Wilfrido Moreno

Abstract:

Psychophysiological adaptive automation is a concept that combines human physiological data and computer algorithms to create personalized interfaces and experiences for users. This approach aims to enhance human learning by adapting to individual needs and preferences and optimizing the interaction between humans and machines. According to neuroscience, working memory demand during the learning process changes when the student is learning a new subject or topic or managing and fulfilling a specific task goal. A sudden increase in working memory demand modifies the student's level of attention, engagement, and cognitive load. The proposed psychophysiological adaptive automation system adapts the task requirements to optimize cognitive load, the process output variable, by monitoring the student's brain activity. Cognitive load changes according to the student's prior knowledge, the type of task, the difficulty level of the task, and the overall psychophysiological state of the student. Scaling the measured cognitive load as low, medium, or high, the system assigns a difficulty level to the next task according to the ratio between the previous task's difficulty level and the student's stress. For instance, if a student becomes stressed or overwhelmed during a particular task, the system detects this through signal measurements such as brain waves, heart rate variability, or other psychophysiological variables and adjusts the task difficulty level accordingly. Engagement and stress are treated as internal control variables for the hypermedia system, which selects between three different types of instructional material. This work assesses the feasibility of a fuzzy controller that tracks a student's physiological responses and adjusts the learning content and pace accordingly. Following an industrial automation approach, the proposed fuzzy logic controller is based on linguistic rules that complement the instrumentation of the system to monitor and control the delivery of instructional material to the students. The test results show that the implemented fuzzy controller can satisfactorily regulate the delivery of academic content based on working memory demand without compromising students' health. This work has a potential application in the instructional design of virtual reality environments for training and education.
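
The following is a minimal, self-contained sketch of the kind of rule-based fuzzy controller described above; the membership functions, rule base, and difficulty scale are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Triangular membership function.
def tri(x, a, b, c):
    return max(0.0, min((x - a) / (b - a) if b != a else 1.0,
                        (c - x) / (c - b) if c != b else 1.0))

def next_difficulty(cognitive_load, stress):
    """Map normalized cognitive load and stress (both in [0, 1]) to a
    difficulty adjustment in [-1, +1] via an illustrative Mamdani-style
    rule base with weighted-average defuzzification."""
    low  = lambda x: tri(x, -0.01, 0.0, 0.5)
    med  = lambda x: tri(x, 0.0, 0.5, 1.0)
    high = lambda x: tri(x, 0.5, 1.0, 1.01)
    # Rules: (firing strength, consequent difficulty change)
    rules = [
        (min(low(cognitive_load),  low(stress)),  +1.0),  # under-loaded -> harder
        (min(med(cognitive_load),  low(stress)),  +0.5),
        (min(med(cognitive_load),  med(stress)),   0.0),  # keep difficulty
        (min(high(cognitive_load), med(stress)),  -0.5),
        (min(high(cognitive_load), high(stress)), -1.0),  # overloaded -> easier
    ]
    weights = np.array([w for w, _ in rules])
    outputs = np.array([o for _, o in rules])
    return float(weights @ outputs / weights.sum()) if weights.sum() else 0.0

print(next_difficulty(cognitive_load=0.8, stress=0.7))  # negative: ease the task
```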

Keywords: fuzzy logic controller, hypermedia control system, personalized education, psychophysiological adaptive automation

Procedia PDF Downloads 58
202 Investigating the Impacts on Cyclist Casualty Severity at Roundabouts: A UK Case Study

Authors: Nurten Akgun, Dilum Dissanayake, Neil Thorpe, Margaret C. Bell

Abstract:

Cycling has gained great attention owing to its comparable speeds, low cost, health benefits, and reduced environmental impact. The main challenge associated with cycling is the provision of safety for the people choosing to cycle as their main means of transport. From the road safety point of view, cyclists are considered vulnerable road users because they are at higher risk of serious casualty across the urban network and more specifically at roundabouts. This research addresses the development of an enhanced mathematical model by including a broad spectrum of casualty-related variables. These variables were geometric design measures (number of approach lanes and entry path radius), speed limit, meteorological condition variables (light, weather, road surface), and socio-demographic characteristics (age and gender), as well as contributory factors. Contributory factors included driver-behaviour variables such as failing to look properly, sudden braking, a vehicle passing too close to a cyclist, junction overshoot, failing to judge another person's path, restarting or moving off at the junction, a poor turn or manoeuvre, and disobeying give-way rules. Tyne and Wear in the UK was selected as the case study area. The cyclist casualty data were obtained from the UK STATS19 national dataset. The outcome categories for the regression model were set to slight and serious cyclist casualties; therefore, binary logistic regression was applied. The binary logistic regression analysis showed that the number of approach lanes was statistically significant at the 95% level of confidence: a higher number of approach lanes increased the probability of a severe cyclist casualty. In addition, sudden braking statistically significantly increased cyclist casualty severity at the 95% level of confidence. It was concluded that cyclist casualty severity was highly related to the number of approach lanes and sudden braking. Further research should carry out an in-depth analysis of the connection between sudden braking and the number of approach lanes in order to investigate driver behaviour at approach locations. The output of this research will inform investment in measures to improve the safety of cyclists at roundabouts.
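
A minimal sketch of the binary logistic regression described above, using statsmodels with hypothetical column names for a STATS19-style extract (severity coded 1 = serious, 0 = slight); the variable names and data file are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical STATS19-style extract of cyclist casualties at roundabouts.
# Columns (placeholders): serious (0/1), approach_lanes, entry_path_radius,
# speed_limit, light, weather, road_surface, age, gender, sudden_braking (0/1).
df = pd.read_csv("cyclist_roundabout_casualties.csv")

model = smf.logit(
    "serious ~ approach_lanes + entry_path_radius + speed_limit"
    " + C(light) + C(weather) + C(road_surface)"
    " + age + C(gender) + sudden_braking",
    data=df,
).fit()

print(model.summary())
# Odds ratios with 95% confidence intervals for each predictor.
print(np.exp(model.params), np.exp(model.conf_int()))
```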

Keywords: binary logistic regression, casualty severity, cyclist safety, roundabout

Procedia PDF Downloads 161
201 Adaptation to Climate Change as a Challenge for the Manufacturing Industry: Finding Business Strategies by Game-Based Learning

Authors: Jan Schmitt, Sophie Fischer

Abstract:

After the Corona pandemic, climate change is a further, long-lasting challenge that society must deal with. Ongoing climate change needs to be prevented; nevertheless, adaptation to the already changed climate conditions has to be addressed in many sectors. The Corona crisis has recently shown the decisive role of economic sectors with high value added. Hence, the manufacturing industry, as such a sector, needs to be prepared for climate change and adaptation. Several examples from the manufacturing industry show the importance of a strategic effort in this field: outsourcing major parts of the value chain to suppliers in other countries and optimizing procurement logistics in a time-, storage-, and cost-efficient manner within a network of global value creation can create vulnerability to climate-related disruptions. For example, the total damage costs after the 2011 flood disaster in Thailand, including costs for delivery failures, were estimated at 45 billion US dollars worldwide. German car manufacturers were also affected by supply bottlenecks and had to close their plants in Thailand for a short time; another OEM had to reduce its production output. In this contribution, a game-based learning approach is presented that should enable manufacturing companies to derive their own strategies for climate adaptation out of a mix of different actions. The game-based learning approach is designed on the basis of data from a regional study of small, medium, and large manufacturing companies in Mainfranken, a strongly industrialized region of northern Bavaria (Germany). From these data, the current state of climate adaptation efforts is evaluated. First, the results are used to collect individual actions for manufacturing companies, and second, further actions are identified. A variety of climate adaptation activities can then be clustered according to the company's scope of activity. The combination of different actions, e.g., renewing the building envelope with regard to thermal insulation, together with their benefits and drawbacks, leads to a specific climate adaptation strategy for each company. Within the game-based approach, the players take on different roles in a fictional company and discuss the order and the characteristics of each action included in their climate adaptation strategy. Indicators such as economic and ecological performance and stakeholder satisfaction compare the success of the respective measures in a competitive format with other virtual companies deriving their own strategies. Playing through climate change scenarios with targeted adaptation actions illustrates the impact of different actions, and their combinations, on the fictional company.

Keywords: business strategy, climate change, climate adaptation, game-based learning

Procedia PDF Downloads 181
200 Modeling of IN 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions

Authors: M. Tarik Boyraz, M. Bilge Imer

Abstract:

Conventionally cast nickel-based superalloys, such as the commercial alloy IN 738 LC, are widely used in the manufacture of industrial gas turbine blades. With a carefully designed microstructure and suitable alloying elements, the blades show improved mechanical properties at high operating temperatures and in corrosive environments. The aim of this work is to model and estimate the mechanical properties of IN 738 LC alloy solely on the basis of simulations for projected heat treatment or service conditions. The microstructure of IN 738 LC (the size, fraction, and frequency of the gamma prime (γ′) and carbide phases in the gamma (γ) matrix, and the grain size) needs to be optimized through heat treatment to improve the high-temperature mechanical properties. This process can be performed at different soaking temperatures, times, and cooling rates. In this work, microstructural evolution studies were performed experimentally under various heat treatment conditions, and these findings were used as input for further simulation studies. The operation time, soaking temperature, and cooling rate used in the experimental heat treatment procedures served as inputs to the microstructural simulation. The results of this simulation were compared with the size, fraction, and frequency of the γ′ and carbide phases and the grain size obtained by SEM (EDS module and mapping), EPMA (WDS module), and optical microscopy before and after heat treatment. After iterative comparison of the experimental findings and the simulations, an offset was determined to fit the measured and theoretical results. It was thereby possible to estimate the final microstructure without any need to carry out the heat treatment experiment. The output of this heat-treatment-based microstructure simulation was then used as input to estimate yield stress and creep properties. Yield stress was calculated mainly as a function of the precipitation, solid solution, and grain boundary strengthening contributions of the microstructure. Creep rate was calculated as a function of stress, temperature, and microstructural factors such as dislocation density, precipitate size, and inter-particle spacing of the precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine the heat treatment conditions that best achieve the desired microstructural and mechanical properties was thus developed for IN 738 LC based entirely on simulations.
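
A schematic sketch of the kind of additive yield-stress estimate described above (base strength plus solid solution, grain boundary (Hall-Petch), and precipitation terms); the functional forms are standard textbook relations, while every coefficient below is a placeholder to be calibrated against alloy data, not a value from the study.

```python
import math

def yield_stress_mpa(grain_size_um, gamma_prime_fraction, gamma_prime_radius_nm,
                     sigma_0=200.0, k_ss=150.0, k_hp=750.0, k_ppt=40.0):
    """Additive estimate of yield stress (MPa) from microstructure.
    sigma_0 : lattice/base strength                  (placeholder, MPa)
    k_ss    : solid-solution contribution            (placeholder, MPa)
    k_hp    : Hall-Petch coefficient                 (placeholder, MPa*um^0.5)
    k_ppt   : precipitation-strengthening prefactor  (placeholder)
    The precipitation term grows with the gamma-prime volume fraction and
    shrinks with particle radius, mimicking an Orowan-type dependence."""
    grain_boundary = k_hp / math.sqrt(grain_size_um)
    precipitation = (k_ppt * math.sqrt(gamma_prime_fraction)
                     * 1_000.0 / gamma_prime_radius_nm)
    return sigma_0 + k_ss + grain_boundary + precipitation

# Illustrative microstructure: 150 um grains, 45% gamma-prime, 300 nm particles.
print(round(yield_stress_mpa(150.0, 0.45, 300.0)))
```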

Keywords: heat treatment, IN738LC, simulations, super-alloys

Procedia PDF Downloads 229
199 Basic Education Curriculum in South-South Nigeria: Challenges and Opportunities of Quality Contents in Second Language Learning

Authors: Catherine Alex Agbor

Abstract:

The modern Nigerian society is dynamic and divided into zones on the basis of economic, political, and educational resources, which are often shared across the zones. The six geopolitical zones constitute a major division of modern Nigeria, created during the regime of President Ibrahim Badamasi Babangida; they are the North Central, North East, North West, South East, South South, and South West. The zone used in this study, the South-South, comprises Akwa Ibom and Cross River States (the former South-Eastern State), Bayelsa and Rivers States (the former Rivers State), and Delta and Edo States (the former Mid-Western Region). Many reforms have taken place over time, particularly in the education sector. Education constantly presents new ideas and innovative approaches that facilitate the rapid exchange of knowledge and provide quality basic education for learners. The Federal Government of Nigeria, in accordance with its National Council on Education, directed the Nigerian Educational Research and Development Council to restructure its basic education curriculum in the hope of enabling the nation to meet national and global developmental goals. One of the goals of the 9-year Basic Education Programme is developing in the entire citizenry a strong consciousness for education and a strong commitment to its vigorous promotion. Another is ensuring the acquisition of appropriate levels of literacy, numeracy, manipulative, communicative, and life skills, as well as the ethical, moral, and civic values needed for laying a solid foundation for lifelong learning. Therefore, this article begins by describing some key issues in Nigeria's experience with the basic education curriculum. Particular attention is paid to the recent educational policy of the Nigerian government known as Universal Basic Education, its challenges, and what can be done to make the policy achieve its desired objectives. The article then analyses modern requirements for second language teaching and presents the challenges of second language teaching in Nigeria. Finally, it reports a study that investigated special efforts toward appropriate achievement of quality education in the language classroom in the South-South zone of Nigeria. One fundamental research question was posed on what educational practices can contribute to the current understanding of the structure of the language curriculum. More explicitly, the study was designed to analyse the extent to which quality content contributes to the current understanding of the structure of the school curriculum in the zone. Otherwise stated, it investigated how student-centred educational practices impact students' learning of the French language. One hundred and eighty (180) participants (teachers) were purposively sampled for the study. A qualitative technique, Focus Group Discussion (FGD), was used to elicit information from participants. Participants were divided into six groups comprising 30 teachers from each zone. Group discussions were based mainly on curriculum contents and practices. Information from participants revealed that the curriculum content, among other things, is inadequate and should be re-examined. Recommendations were proffered to support concrete implementation of basic education in Nigeria.

Keywords: basic education, quality contents, second language, south-south states

Procedia PDF Downloads 211
198 The Metabolism of the Built Environment: Energy Flow and Greenhouse Gas Emissions in Nigeria

Authors: Yusuf U. Datti

Abstract:

It is becoming increasingly clear that the level of resource consumption now enjoyed in the developed nations will be impossible to sustain worldwide. While developing countries still have the advantage of low consumption and a smaller ecological footprint per person, they cannot simply develop in the same way that western cities have developed in the past. The severe reality of population and consumption inequalities makes it contentious whether studies done in developed countries can be translated and applied to developing countries. In addition to these disparities, there are few or no energy metabolism studies in Nigeria; the majority of energy metabolism studies have been done only in developed countries. While research in Nigeria concentrates on other aspects and principles of sustainability, such as water supply, sewage disposal, energy supply, energy efficiency, and waste disposal, which do not accurately capture the environmental impact of energy flow in Nigeria, this research sets itself apart by examining the flow of energy in Nigeria and the impact that this flow has on the environment. The aim of the study is to examine and quantify the metabolic flows of energy in Nigeria and their corresponding environmental impact. The study will quantify the level and pattern of energy inflow and the outflow of greenhouse gas emissions in Nigeria. It will describe measures to address the impact of existing energy sources and suggest alternative renewable energy sources for Nigeria that will lower greenhouse gas emissions. The metabolism of energy in Nigeria is investigated through a three-part methodology. The first step involves selecting and defining the study area and some variables that would affect the energy output (time of the year, stability of the country, income level, literacy rate, and population). The second step involves analyzing, categorizing, and quantifying the amount of energy generated by the various energy sources in the country. The third step involves analyzing the effect the variables would have on the environment. To ensure a representative study area, Africa's most populous country, which has the second-biggest economy and is among the largest oil-producing countries in the world, is selected; countries with large economies and dense populations are ideal places to examine sustainability strategies, hence the choice of Nigeria for the study. National data will be utilized; where such data cannot be found, local data will be employed and aggregated to reflect the national situation. The outcome of the study will help policy-makers better target energy conservation and efficiency programs and enable early identification and mitigation of any negative effects on the environment.
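
As a minimal sketch of the "energy inflow times emission factor" accounting implied by the methodology above, the snippet below multiplies generation by source-specific CO2 factors; the source mix and the emission factors are illustrative placeholders, not Nigerian statistics.

```python
# Simple energy-metabolism accounting sketch: annual generation per source
# multiplied by an emission factor gives the greenhouse-gas outflow.
# All numbers below are illustrative placeholders, not national statistics.
generation_gwh = {             # annual energy inflow by source
    "natural_gas": 30_000,
    "oil_products": 12_000,
    "hydro": 8_000,
    "solar": 500,
}
emission_factor_t_per_gwh = {  # tonnes CO2-equivalent per GWh (placeholders)
    "natural_gas": 450,
    "oil_products": 650,
    "hydro": 10,
    "solar": 40,
}

emissions_t = {src: gwh * emission_factor_t_per_gwh[src]
               for src, gwh in generation_gwh.items()}
total_t = sum(emissions_t.values())
share = {src: e / total_t for src, e in emissions_t.items()}

print(f"Total: {total_t:,.0f} t CO2e")
for src in sorted(share, key=share.get, reverse=True):
    print(f"{src:>13}: {emissions_t[src]:>12,.0f} t  ({share[src]:.1%})")
```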

Keywords: built environment, energy metabolism, environmental impact, greenhouse gas emissions and sustainability

Procedia PDF Downloads 161
197 Subjective Temporal Resources: On the Relationship of Time Perspective and Chronic Time Pressure to Burnout

Authors: Diamant Irene, Dar Tamar

Abstract:

Burnout, conceptualized within the framework of stress research, is to a large extent a result of a threat to time resources or a feeling of time shortage. In reaction to numerous tasks, deadlines, high output demands, and the management of different duties encompassing work-home conflicts, many individuals experience 'time pressure'. Time pressure is characterized as the perception of a lack of available time in relation to the amount of workload. It can be a result of local objective constraints, but it can also be a chronic attribute of coping with life. As such, time pressure is associated in the literature with the general stress experience and can therefore be a direct, contributory factor in burnout. The present study examines the relation of chronic time pressure - a feeling of time shortage and of being rushed - with another central aspect of subjective temporal experience: time perspective. Time perspective is a stable personal disposition capturing the extent to which people subjectively remember the past, live in the present, and/or anticipate the future. Based on Hobfoll's Conservation of Resources theory, it was hypothesized that individuals with chronic time pressure would experience a permanent threat to their time resources, resulting in relatively increased burnout. In addition, it was hypothesized that different time perspective profiles, based on Zimbardo's typology of five dimensions - Past Positive, Past Negative, Present Hedonistic, Present Fatalistic, and Future - would be related to different magnitudes of chronic time pressure and of burnout. We expected that individuals with 'Past Negative' or 'Present Fatalistic' time perspectives would experience more burnout, with chronic time pressure acting as a moderator variable. Conversely, individuals with a 'Present Hedonistic' perspective - with little concern for the future consequences of actions - would experience less chronic time pressure and less burnout. Another angle of temporal experience examined in this study is the gap between the actual distribution of time (as in a typical day) and the desired distribution of time (as it would be distributed optimally during a day). It was hypothesized that there would be a positive correlation between this gap and both chronic time pressure and burnout. Data were collected through an online self-report survey distributed on social networks, with 240 participants (aged 21-65) recruited through convenience and snowball sampling methods from various organizational sectors. The results of the present study support the hypotheses and constitute a basis for future debate regarding the elements of burnout in the modern work environment, with an emphasis on subjective temporal experience. Our findings point to the importance of chronic and stable temporal experiences, such as time pressure and time perspective, in occupational experience. The findings are also discussed with a view to the development of practical methods of burnout prevention.
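
A minimal sketch of how the moderation hypothesis above could be tested with an interaction term in statsmodels; the column names (burnout, time_pressure, past_negative) and the data file are hypothetical placeholders for the survey measures.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: one row per participant with scale scores.
df = pd.read_csv("burnout_survey.csv")  # columns: burnout, time_pressure, past_negative

# Mean-center predictors so main effects are interpretable alongside the interaction.
for col in ("time_pressure", "past_negative"):
    df[col + "_c"] = df[col] - df[col].mean()

# Moderated regression: burnout on Past-Negative time perspective,
# chronic time pressure, and their interaction (the moderation term).
model = smf.ols("burnout ~ past_negative_c * time_pressure_c", data=df).fit()
print(model.summary())
# A significant past_negative_c:time_pressure_c coefficient would indicate that
# chronic time pressure moderates the time-perspective -> burnout relation.
```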

Keywords: conservation of resources, burnout, time pressure, time perspective

Procedia PDF Downloads 149
196 Blended Learning Instructional Approach to Teach Pharmaceutical Calculations

Authors: Sini George

Abstract:

Active learning pedagogies are valued for their success in increasing 21st-century learners' engagement, developing transferable skills like critical thinking or quantitative reasoning, and creating deeper and more lasting educational gains. 'Blended learning' is an active learning pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively with the subject matter. This project aimed to develop a blended learning instructional approach to teaching concepts around pharmaceutical calculations to Year 1 pharmacy students. The wrong dose, strength, or frequency of a medication accounts for almost a third of medication errors in the NHS; therefore, progression to Year 2 requires a 70% pass in the calculations test, in addition to the standard progression requirements. Many students had struggled to achieve this requirement in the past. It was also challenging to teach these concepts to a large class (>130 students) with mixed mathematical abilities, especially within a traditional didactic lecture format. Therefore, short screencasts with the lecturer's voice-over were provided in advance of a total of four teaching sessions (two hours per session), incorporating the core content of each session and talking through how the calculations were approached, to model metacognition. Links to the screencasts were posted on the learning management system. Viewership counts were used to confirm that students were indeed accessing and watching the screencasts on schedule. In the classroom, students had to apply the knowledge learned beforehand to a series of increasingly difficult questions. Students were then asked to create a question in group settings (two students per group) and to discuss the questions created by their peers within their groups to promote deep conceptual learning. Students were also given a question-and-answer period to seek clarification on the concepts covered. Student responses to this instructional approach and their test grades were collected. After collecting and organizing the data, statistical analysis was carried out to calculate binomial statistics for the two data sets, the test grades of students who received blended learning instruction and the test grades of students who received instruction in a standard in-class lecture format, to compare the effectiveness of each type of instruction. Student responses and performance data on the assessment indicate that learning the content through the blended learning instructional approach led to higher levels of student engagement and satisfaction and more substantial learning gains. The blended learning approach enabled each student to learn how to do the calculations at their own pace, freeing class time for interactive application of this knowledge. Although time-consuming for an instructor to implement, the findings of this research demonstrate that the blended learning instructional approach improves student academic outcomes and represents a valuable way to incorporate active learning methodologies while still maintaining broad content coverage. Satisfaction with this approach was high, and we are currently developing more pharmacy content for delivery in this format.
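
A minimal sketch of the kind of binomial comparison described above, framed here as a two-proportion z-test on pass rates with statsmodels; the counts are invented for illustration and are not the study's data.

```python
from statsmodels.stats.proportion import proportions_ztest

# Illustrative pass counts (>= 70% on the calculations test) for two cohorts.
passes = [112, 95]      # blended-learning cohort, lecture-format cohort
totals = [130, 130]     # students assessed in each cohort (invented numbers)

z_stat, p_value = proportions_ztest(count=passes, nobs=totals,
                                    alternative="larger")
print(f"z = {z_stat:.2f}, one-sided p = {p_value:.4f}")
# A small p-value would indicate a higher pass rate under blended learning.
```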

Keywords: active learning, blended learning, deep conceptual learning, instructional approach, metacognition, pharmaceutical calculations

Procedia PDF Downloads 149
195 Educational Audit and Curricular Reforms in the Arabian Context

Authors: Irum Naz

Abstract:

In the Arabian higher education context, linguistic proficiency in the English language is considered crucial for the developmental sustainability, economic growth, and stability of communities and societies. Qatar's educational reforms package, through the 2030 vision, identifies the acquisition of English at K-12 as an essential survival communication tool for globalization, believing that Qatari students need better preparation to take on the responsibilities of leadership and to participate effectively in the country's surging economy. The idea of introducing Qatari students to modern curricula benchmarked to high-student-performance curricula in developed countries is one of the components of the reform design principles of the Education for a New Era project, which is mutually consented to and supported by the Office of Shared Services, the Communications Office, and the Supreme Education Council. In appreciation of the government's vision, the English Language Centre (ELC) at the Community College of Qatar ran an internal educational audit and conducted evaluative research to understand and appraise the value, impact, and practicality of the existing ELC language development program. This study sought to identify the types of change that could improve the quality of the Foundation Program courses and the ways in which second language learners could be assisted to transition smoothly between ELC levels. Following an interpretivist paradigm and a mixed research method, the data were gathered through a bicyclic research model and a triangular design. The analyses of the data suggested that there was a need for improvement in the ELC program as a whole, and particularly in terms of curriculum, student learning outcomes, and the general learning environment in the department. Key findings suggest that the target program would benefit from significant revisions, which would include narrowing the focus of the courses, providing sets of specific learning objectives, and preventing repetition between levels. Another key finding concerned the assessment tools and processes: the data suggested that a set of standardized assessments that more closely suited the programs of study should be devised. It was also recommended that students undergo a more comprehensive placement process to ensure that they begin the program at an appropriate level and get the maximum benefit from their learning experience. Although this ties into the idea of a curriculum revamp, it was expected that students should leave the ELC having had exposure to courses in English for specific purposes. The idea of a more reliable exit assessment for students was raised frequently so that the ELC could regulate itself and ensure optimum learning outcomes. Another important recommendation was the provision of a Student Learning Center that would help students receive personalized tuition, differentiated instruction, and a self-driven and self-evaluated learning experience. In addition, it was recommended that an extra study level be added to the program to accommodate the different levels of English language proficiency represented among ELC students. The evidence collected in the course of conducting the study suggests that significant change is needed in the structure of the ELC program, specifically regarding the curriculum, the program learning outcomes, and the learning environment in general.

Keywords: educational audit, ESL, optimum learning outcomes, Qatar’s educational reforms, self-driven and self-evaluated learning experience, Student Learning Center

Procedia PDF Downloads 156
194 Determinants of Budget Performance in an Oil-Based Economy

Authors: Adeola Adenikinju, Olusanya E. Olubusoye, Lateef O. Akinpelu, Dilinna L. Nwobi

Abstract:

Since the enactment of the Fiscal Responsibility Act (2007), the Federal Government of Nigeria (FGN) has made public its fiscal budget and the subsequent implementation reports. A critical review of these documents shows significant variations in the five macroeconomic variables that are inputs to each Presidential budget: the oil production target (mbpd), oil price ($), foreign exchange rate (N/$), Gross Domestic Product growth rate (%), and inflation rate (%). This results in underperformance of the Federal budget's expected output in terms of non-oil and oil revenue aggregates. This paper evaluates, first, the existing variance between budgeted and actual figures; second, the relationship and causality between the determinants of the Federal fiscal budget assumptions; and finally, the determinants of the FGN's gross oil revenue. The paper employs descriptive statistics, the autoregressive distributed lag (ARDL) model, and a profit-oil probabilistic model to achieve these objectives. The ARDL model permits both static and dynamic effects of the independent variables on the dependent variable, unlike a static model that accounts for fixed effects only. It offers a technique for checking the existence of a long-run relationship between variables, unlike other tests of cointegration, such as the Engle-Granger and Johansen tests, which consider only non-stationary series that are integrated of the same order. Finally, even with a small sample size, the ARDL model is known to generate valid results. The results showed that there is a long-run relationship between oil revenue, as a proxy for budget performance, and its determinants: oil price, produced oil quantity, and foreign exchange rate. There is a short-run relationship between oil revenue and the same determinants. There is a long-run relationship between non-oil revenue and its determinants: inflation rate, GDP growth rate, and foreign exchange rate. The Granger causality test results show that there is a mono-directional causality between oil revenue and its determinants, and likewise between non-oil revenue and its determinants. The Federal budget assumptions explain only 68% of oil revenue and 62% of non-oil revenue. The profit-oil model identifies production sharing contracts, joint ventures, and modified carry arrangements as the greatest contributors to the FGN's gross oil revenue. This provides empirical justification for the selected macroeconomic variables used in Federal budget design and performance evaluation. The research recommends that other variables, such as debt and money supply, be included in the Federal budget design to further explain Federal budget revenue performance.
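
A minimal sketch of an ARDL fit and a Granger causality check with statsmodels, using hypothetical column names for the series discussed above (oil_revenue, oil_price, oil_quantity, fx_rate); the lag orders and the data file are placeholders, not the study's specification.

```python
import pandas as pd
from statsmodels.tsa.ardl import ARDL
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical quarterly series; column names are placeholders.
df = pd.read_csv("nigeria_fiscal_series.csv",
                 parse_dates=["date"], index_col="date")

# ARDL(1; 1,1,1): oil revenue explained by its own lag and lags of the
# price, quantity, and exchange-rate determinants (orders are illustrative).
model = ARDL(endog=df["oil_revenue"],
             lags=1,
             exog=df[["oil_price", "oil_quantity", "fx_rate"]],
             order=1).fit()
print(model.summary())

# Granger causality: does oil price help predict oil revenue (up to 4 lags)?
grangercausalitytests(df[["oil_revenue", "oil_price"]], maxlag=4)
```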

Keywords: ARDL, budget performance, oil price, oil quantity, oil revenue

Procedia PDF Downloads 148
193 Application of Multilinear Regression Analysis for Prediction of Synthetic Shear Wave Velocity Logs in Upper Assam Basin

Authors: Triveni Gogoi, Rima Chatterjee

Abstract:

Shear wave velocity (Vs) estimation is an important approach in the seismic exploration and characterization of a hydrocarbon reservoir. There are various methods for predicting S-wave velocity when a recorded S-wave log is not available, but all of them are empirical mathematical models. Shear wave velocity can be estimated from P-wave velocity by applying Castagna's equation, which is the most common approach; however, the constants used in Castagna's equation vary for different lithologies and geological settings. In this study, multiple regression analysis has been used for the estimation of S-wave velocity. The EMERGE module of the Hampson-Russell software has been used for the generation of the S-wave log. Both single-attribute and multi-attribute analyses have been carried out for the generation of synthetic S-wave logs in the Upper Assam basin. The Upper Assam basin, situated in northeastern India, is one of the most important petroleum provinces of India. The present study was carried out using four wells in the study area; S-wave velocity was available for three of these wells. The main objective of the present study is the prediction of shear wave velocities for wells where S-wave velocity information is not available. The three wells having S-wave velocity were first used to test the reliability of the method, and the generated S-wave log was compared with the actual S-wave log. Single-attribute analysis was carried out for these three wells within the depth range 1700-2100 m, which corresponds to the Barail Group of Oligocene age. The Barail Group is the main target zone in this study and the primary producing reservoir of the basin. A system-generated list of attributes with varying degrees of correlation was produced, and the attribute with the highest correlation was selected for the single-attribute analysis. A crossplot between the attributes shows the deviation of points from the line of best fit. The final result of the analysis was compared with the available S-wave log, showing a good visual fit with a correlation of 72%. Next, multi-attribute analysis was carried out for the same data using all the wells within the same analysis window. A high correlation of 85% was observed between the output log from the analysis and the recorded S-wave log. The close agreement between the synthetic and recorded S-wave logs validates the reliability of the method. For further verification, the generated S-wave data from the wells were tied to the seismic data and correlated. A synthetic shear wave log was then generated for well M2, where the S-wave log is not available, and it shows a good correlation with the seismic data. Neutron porosity, density, acoustic impedance (AI), and P-wave velocity proved to be the most significant variables in this statistical method for S-wave generation. The multilinear regression method can thus be considered a reliable technique for the generation of shear wave velocity logs in this study.
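
A minimal sketch of the multilinear-regression idea above with scikit-learn, alongside the widely used Castagna mudrock-line relation for comparison; the well-log column names and the data file are hypothetical, and the regression weights are not those produced by EMERGE.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical well-log extract; column names are placeholders.
logs = pd.read_csv("barail_well_logs.csv")   # well, vp_kms, vs_kms, nphi, rhob, ai
train = logs.dropna(subset=["vs_kms"])       # wells with a recorded S-wave log

# Multilinear regression: Vs from neutron porosity, density, AI, and Vp.
features = ["nphi", "rhob", "ai", "vp_kms"]
reg = LinearRegression().fit(train[features], train["vs_kms"])
print(dict(zip(features, reg.coef_)), reg.intercept_)
print("R^2 on training wells:", reg.score(train[features], train["vs_kms"]))

# Predict a synthetic Vs log for a well without a recorded S-wave (e.g., M2).
m2 = logs[logs["well"] == "M2"]
vs_multi = reg.predict(m2[features])

# Castagna mudrock-line estimate (km/s) for comparison: Vs = 0.8621*Vp - 1.1724.
vs_castagna = 0.8621 * m2["vp_kms"] - 1.1724
```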

Keywords: Castagna's equation, multilinear regression, multi-attribute analysis, shear wave logs

Procedia PDF Downloads 201
192 Start with the Art: Early Results from a Study of Arts-Integrated Instruction for Young Children

Authors: Juliane Toce, Steven Holochwost

Abstract:

A substantial and growing literature has demonstrated that arts education benefits young children's socioemotional and cognitive development. Less is known about the capacity of arts-integrated instruction to yield benefits in similar domains, particularly among demographically and socioeconomically diverse groups of young children. However, the small literature on this topic suggests that arts-integrated instruction may foster young children's socioemotional and cognitive development by presenting opportunities to 1) engage in instructional content in diverse ways, 2) experience and regulate strong emotions, 3) experience growth-oriented feedback, and 4) engage in collaborative work with peers. Start with the Art is a new program of arts-integrated instruction currently being implemented in four schools in a school district that serves students from a diverse range of backgrounds. The program employs a co-teaching model in which teaching artists and classroom teachers engage in collaborative lesson planning and instruction over the course of the academic year, and it is currently the focus of an impact study featuring a randomized-control design, as well as an implementation study, both of which are funded through an Educational Innovation and Research grant from the United States Department of Education. The paper will present the early results from the Start with the Art implementation study. These results will provide an overview of the extent to which the program was implemented in accordance with its design, with a particular emphasis on the degree to which the four opportunities enumerated above (e.g., opportunities to engage in instructional content in diverse ways) were presented to students. Key factors that may influence the fidelity of implementation will also be reviewed, including classroom teachers' reception of the program and the extent to which extant conditions in the classroom (e.g., the overall level of classroom organization) may have impacted implementation fidelity. With the explicit purpose of creating a program that values and meets the needs of teachers and students, Start with the Art incorporates feedback from individuals participating in the intervention. Tracing its trajectory from inception to ongoing development, and examining the adaptive changes made in response to teachers' transformative experiences in the post-pandemic classroom, Start with the Art continues to solicit input from experts in integrating artistic content into core curricula within educational settings catering to students from backgrounds under-represented in the arts. Leveraging the input from this rich consortium of experts has allowed for a comprehensive evaluation of the program's implementation. The early findings derived from the implementation study emphasize the potential of arts-integrated instruction to incorporate restorative practices. Such practices serve as a crucial support system for both students and educators, providing avenues for children to express themselves, heal emotionally, and foster social development, while empowering teachers to create more empathetic, inclusive, and supportive learning environments. This comprehensive analysis spotlights Start with the Art's adaptability to any learning environment through the program's effectiveness, resilience, and capacity to transform, through art, the classroom experience within the ever-evolving landscape of education.

Keywords: arts-integration, social emotional learning, diverse learners, co-teaching, teaching artists, post-pandemic teaching

Procedia PDF Downloads 39
191 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage, and heat transfer between the various phases in the blast furnace hearth for stable and efficient blast furnace operation. Abnormal drainage behavior may lead to a high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake, and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit at which the hearth coke and deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and a stable BF performance. It is not possible to measure any of the above directly due to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal and slag accumulation and temperature during tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and the heat transfer between metal and slag, metal and solids, and slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the BF hearth is considered a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed to be thermally saturated. A set of generic mass balance equations gives the amount of metal and slag intake into the hearth. A small drainage opening (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amount of both phases accumulated, their levels in the hearth, the pressure from gases in the furnace, and the erosion behavior of the tap hole itself. Heat transfer equations describe the exchange of heat between the various layers of liquid metal and slag and the heat loss to the cooling system through the refractories. Based on all this information, a dynamic simulation is carried out that provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, predicts the timing of critical events during tapping, and estimates the expected tapping temperature of metal and slag at preset time intervals. The model is in use at JSPL India BF-II, and its output is regularly cross-checked with actual tapping data, with which it is in good agreement.
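
A highly simplified, single-zone sketch of the accumulation/drainage mass balance described above: liquid volume is integrated in time with a constant production rate and an orifice-type tap-hole outflow. All geometry, porosity, and flow coefficients are illustrative placeholders, and heat transfer is omitted.

```python
import math

# Single-zone hearth liquid balance: dV/dt = production - drainage.
# Drainage is modeled as orifice flow driven by furnace overpressure plus
# the liquid head above the tap hole. All constants are placeholders.
RHO = 7000.0          # hot metal density, kg/m3 (approximate)
G = 9.81
HEARTH_AREA = 80.0    # m2, illustrative hearth cross-section
POROSITY = 0.3        # void fraction of the coke bed
TAP_AREA = 2.0e-3     # m2, illustrative tap-hole cross-section
CD = 0.6              # discharge coefficient (placeholder)
OVERPRESSURE = 1.5e5  # Pa, furnace gas pressure above ambient (placeholder)

def drainage_rate_m3s(liquid_height_m, tap_open):
    if not tap_open or liquid_height_m <= 0.0:
        return 0.0
    dp = OVERPRESSURE + RHO * G * liquid_height_m
    return CD * TAP_AREA * math.sqrt(2.0 * dp / RHO)

def simulate(production_m3s=0.012, tap_start_s=3600, dt=10.0, t_end=7200):
    volume = 20.0                                   # m3 of liquid at t = 0
    for step in range(int(t_end / dt)):
        t = step * dt
        height = volume / (HEARTH_AREA * POROSITY)  # effective liquid level
        out = drainage_rate_m3s(height, tap_open=(t >= tap_start_s))
        volume = max(0.0, volume + (production_m3s - out) * dt)
        if step % 60 == 0:
            print(f"t={t/60:5.1f} min  level={height:5.2f} m  V={volume:6.1f} m3")

simulate()
```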

Keywords: blast furnace, hearth, deadman, hot metal

Procedia PDF Downloads 170
190 Fostering Non-Traditional Student Success in an Online Music Appreciation Course

Authors: Linda Fellag, Arlene Caney

Abstract:

E-learning has earned an essential place in academia because it promotes learner autonomy, student engagement, and technological aptitude, and allows for flexible learning. However, despite these advantages, educators have been slower to embrace e-learning for ESL and other non-traditional students, for fear that such students will not succeed without the direct faculty contact and academic support of face-to-face classrooms. This study aims to determine whether an online course designed to be friendly to non-traditional students can produce retention and performance rates that compare favorably with those of students in standard online sections of the same course aimed at traditional college-level students. One Music faculty member is currently collaborating with an English instructor to redesign an online college-level Music Appreciation course for non-traditional college students. At Community College of Philadelphia, Introduction to Music Appreciation was recently designated as one of the few college-level courses that advanced ESL and developmental English students can take while completing their language studies. Beginning in Fall 2017, the course will be critical for international students, who must maintain full-time student status under visa requirements. In its current online format, however, Music Appreciation is designed for traditional college students, and faculty who teach these sections have been reluctant to revise the course to address the needs of non-traditional students. Interestingly, the presenters maintain that the online platform is the ideal place to develop language and college readiness skills in at-risk students while maintaining the course's curricular integrity. The two faculty presenters describe how curriculum, rather than technology, drives the redesign of the digitized music course, and how self-study materials, guided assignments, and periodic assessments promote independent learning and comprehension of the material. The 'scaffolded' modules allow ESL and developmental English students to build on prior knowledge, preview key vocabulary, discuss content, and complete graded tasks that demonstrate comprehension. Activities and assignments, in turn, enhance college success by allowing students to practice academic reading strategies, writing, speaking, and student-faculty and peer-peer communication and collaboration. The course components facilitate a comparison of student performance and retention in the redesigned and existing online sections of Music Appreciation, as well as in previous sections with at-risk students. Indirect, qualitative measures include student attitudinal surveys and evaluations. Direct, quantitative measures include withdrawal rates, tests of disciplinary knowledge, and final grades. The study will compare the outcomes of three cohorts in the two versions of the online course: ESL students, at-risk developmental students, and college-level students. These data will also be compared with retention and student outcome data for the three cohorts in face-to-face (f2f) Music Appreciation, which permitted non-traditional student enrollment from 1998 to 2005. During this eight-year period, the presenter addressed the problems of at-risk students by adding language and college success support, which resulted in strong retention and outcomes. The presenters contend that the redesigned course will produce favorable outcomes among all three cohorts because it contains components that proved successful with at-risk learners in f2f sections of the course. Results of their study will be published in 2019, after the redesigned online course has met for two semesters.

Keywords: college readiness, e-learning, music appreciation, online courses

Procedia PDF Downloads 155
189 Forming Form, Motivation and Their Biolinguistic Hypothesis: The Case of Consonant Iconicity in Tashelhiyt Amazigh and English

Authors: Noury Bakrim

Abstract:

When dealing with motivation/arbitrariness, forming form (Forma Formans) and morphodynamics are to be grasped as relevant implications of enunciation/enactment and schematization within the specificity of language as sound/meaning articulation. Thus, the fact that a language is a form does not contradict stasis/dynamic enunciation (reflexivity vs. double articulation). Moreover, some languages exemplify the role of the forming form, uttering, and schematization (roots in Semitic languages, the Chinese case). Beyond the evolutionary biosemiotic process (form/substance bifurcation, the split between realization/representation), the non-isomorphism/asymmetry between linguistic form/norm and linguistic realization (phonetics, for instance) opens up a new horizon problematizing the role of the brain and of sensorimotor contribution in the continuously forming form. Therefore, we hypothesize biotization as both process and trace co-constructing motivation and the forming form. Hence, referring to our findings concerning distribution and motivation patterns within Berber written texts (pulse-based obstruents and nasal-lateral levels in poetry) and oral storytelling (consonant intensity clustering in quantitative and semantic/prosodic motivation), we understand consonant clustering, motivation, and schematization as a complex phenomenon partaking in patterns of oral/written iconic prosody and reflexive metalinguistic representation opening the stable form. We focus our inquiry on Amazigh and English clusters (/spl/, /spr/) and iconic consonant iteration in [gnunnuy] (to roll/tumble) and [smummuy] (to moan sadly or crankily). For instance, the syllabic structures of /splæʃ/ and /splæt/ imply an anamorphic representation of the state of the world: splash, impact on aquatic surfaces; splat, impact on the ground. The pair has stridency and distribution as distinctive features which specify its phonetic realization (and a part of its meaning): /ʃ/ is [+strident] and /t/ is [+distributed] on the vocal tract. Schematization is then a process relating physiology and code as an arthron, a vocal/bodily and vocal/practical shaping of the motor-articulatory system, leading to syntactic/semantic thematization (agent/patient roles in /spl/, /sm/ and other clusters, or the tense uvular /qq/ in initial position in Berber). Furthermore, the productivity of serial syllable sequencing in Berber points out different forms of expressivity. We postulate two components of motivated formalization: i) the process of memory paradigmatization, relating to sequence modeling under specific sensorimotor/verbal categories (production/perception), and ii) the process of phonotactic selection, a prosodic unconscious/subconscious distribution by virtue of iconicity. Based on multiple tests, including a questionnaire, phonotactic/visual recognition, and oral/written reproduction, we aim at patterning/conceptualizing consonant schematization and motivation among EFL and Amazigh (Berber) learners and speakers, integrating biolinguistic hypotheses.

Keywords: consonant motivation and prosody, language and order of life, anamorphic representation, represented representation, biotization, sensori-motor and brain representation, form, formalization and schematization

Procedia PDF Downloads 116
188 The Influence of English Immersion Program on Academic Performance: Case Study at a Sino-US Cooperative University in China

Authors: Leah Li Echiverri, Haoyu Shang, Yue Li

Abstract:

Wenzhou-Kean University (WKU) is a Sino-US cooperative university in China. It practices an English Immersion Program (EIP), in which all courses are taught in English and class discussions and presentations are pervasively interwoven into the design of students' learning experiences. This WKU model has had positive influences on students and is in some ways ahead of traditional college English majors. However, literature to support the perceived positive outcomes of this teaching and learning model remains scarce. The distinctive profile of Chinese ESL students in an English Medium of Instruction (EMI) environment adds to this scarcity, since most existing studies were conducted among ESL learners in Western educational settings. Hence, the study investigated students' perceptions of the English Immersion Program and determined how it influences Chinese ESL students' academic performance (AP). This research can provide empirical data helpful to educators, teaching practitioners, university administrators, and other researchers in making informed decisions when developing curricular reforms, instructional and pedagogical methods, and university-wide support programs using this educational model. The purpose of the study was to establish the relationship between the English Immersion Program and academic performance among Chinese ESL students enrolled at WKU for the academic year 2020-2021. Course length, immersion location, course type, and instructional design were the constructs of the English Immersion Program, while English language learning, learning efficiency, and class participation were used to measure academic performance. A descriptive-correlational design was used in this cross-sectional research project, and a quantitative approach to data analysis was applied to determine the relationship between the English Immersion Program and Chinese ESL students' academic performance. The research was conducted at WKU, a jointly established Chinese-American higher education institution located in Wenzhou, Zhejiang province. Convenience, random, and snowball sampling of 283 students, a response rate of 10.5%, were applied to represent the WKU student population. The questionnaire was administered through the survey website Wenjuanxing and shared via QQ and WeChat. Cronbach's alpha was used to test the reliability of the research instrument. Findings revealed that when professors integrate technology (PowerPoint, videos, and audio) in teaching, students pay more attention, which contributes to the acquisition of more professional knowledge in their major courses. As to course immersion, students perceive WKU as a good place to study that gives them a high degree of confidence to talk with their professors in English, which also contributes to their English fluency and better pronunciation. In the instructional design construct, the use of pictures and video clips, professors' non-verbal communication, and their demonstrated concern for students encouraged more active class participation. Findings on course length indicated that students perceive that taking courses during the fall and spring terms moderately contributes to their academic performance. In conclusion, the findings revealed a significantly strong positive relationship between course type, immersion location, instructional design, and academic performance.
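
As a minimal illustration of the reliability and correlation analysis mentioned above, the sketch below computes Cronbach's alpha for a block of Likert-scale items and a Pearson correlation between construct scores. The item counts, scale, and random responses are hypothetical stand-ins for the actual Wenjuanxing survey data, not the study's dataset.

```python
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 items on the instructional-design construct and
# 5 items on the academic-performance construct, on a 1-5 Likert scale.
rng = np.random.default_rng(0)
design_items = rng.integers(1, 6, size=(283, 6)).astype(float)
performance_items = rng.integers(1, 6, size=(283, 5)).astype(float)

print("alpha (instructional design):", round(cronbach_alpha(design_items), 3))

# Construct scores are the item means; Pearson r measures their association.
r, p = pearsonr(design_items.mean(axis=1), performance_items.mean(axis=1))
print(f"r = {r:.3f}, p = {p:.3f}")
```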

Keywords: class participation, English immersion program, English language learning, learning efficiency

Procedia PDF Downloads 150
187 Enhancing Athlete Training Using Real-Time Pose Estimation with Neural Networks

Authors: Jeh Patel, Chandrahas Paidi, Ahmed Hambaba

Abstract:

Traditional methods for analyzing athlete movement often lack the detail and immediacy required for optimal training. This project aims to address this limitation by developing a real-time human pose estimation system specifically designed to enhance athlete training across various sports. The system leverages convolutional neural networks (CNNs) to provide a comprehensive and immediate analysis of an athlete's movement patterns during training sessions. The core architecture utilizes dilated convolutions to capture crucial long-range dependencies within video frames, combined with a robust encoder-decoder architecture to further refine pose estimation accuracy. This capability is essential for precise joint localization across the diverse range of athletic poses encountered in different sports. Furthermore, by quantifying movement efficiency, power output, and range of motion, the system provides data-driven insights that can be used to optimize training programs. Pose estimation data analysis can also be used to develop personalized training plans that target specific weaknesses identified in an athlete's movement patterns. To overcome the limitations posed by outdoor environments, the project employs strategies such as multi-camera configurations and depth-sensing techniques, which can enhance pose estimation accuracy in challenging lighting and occlusion scenarios. A dataset was collected from the Martin Luther King labs at San Jose State University. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing different poses, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system's ability to operate in varied environmental conditions and further expanding the training dataset were identified and discussed. Future work will refine the model's adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced pose detection model and lays the groundwork for future innovations in assistive enhancement technologies.
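
The following is a minimal PyTorch sketch of the kind of dilated-convolution encoder-decoder described above, predicting one heatmap per joint. The layer sizes, joint count, and input resolution are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class PoseEstimator(nn.Module):
    """Toy U-Net-style encoder-decoder with dilated convolutions.

    Input:  (batch, 3, H, W) RGB frames
    Output: (batch, num_joints, H, W) per-joint heatmaps
    """
    def __init__(self, num_joints: int = 17):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            # Dilated convolutions enlarge the receptive field to capture
            # long-range dependencies without further downsampling.
            nn.Conv2d(128, 128, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_joints, 1),   # one heatmap per joint
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    model = PoseEstimator(num_joints=17)
    frames = torch.randn(2, 3, 256, 256)    # two dummy video frames
    heatmaps = model(frames)
    print(heatmaps.shape)                   # torch.Size([2, 17, 256, 256])
```

In practice, the joint coordinates are taken as the argmax of each predicted heatmap, which is what feeds downstream metrics such as range of motion.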

Keywords: computer vision, deep learning, human pose estimation, U-NET, CNN

Procedia PDF Downloads 11
186 Predictors of Pericardial Effusion Requiring Drainage Following Coronary Artery Bypass Graft Surgery: A Retrospective Analysis

Authors: Nicholas McNamara, John Brookes, Michael Williams, Manish Mathew, Elizabeth Brookes, Tristan Yan, Paul Bannon

Abstract:

Objective: Pericardial effusions are an uncommon but potentially fatal complication after cardiac surgery. The goal of this study was to describe the incidence of, and risk factors associated with, pericardial effusion requiring drainage after coronary artery bypass graft surgery (CABG). Methods: A retrospective analysis was undertaken using prospectively collected data. All adult patients who underwent CABG at our institution between 1st January 2017 and 31st December 2018 were included. Pericardial effusion was diagnosed using transthoracic echocardiography (TTE) performed for clinical suspicion of pre-tamponade or tamponade. Drainage was undertaken if considered clinically necessary and was performed via a sub-xiphoid incision, pericardiocentesis, or re-sternotomy at the discretion of the treating surgeon. Patient demographics, operative characteristics, anticoagulant exposure, and postoperative outcomes were examined to identify the variables associated with the development of pericardial effusion requiring drainage. Tests of association were performed using the Fisher exact test for dichotomous variables and the Student t-test for continuous variables. Logistic regression models were used to determine univariate predictors of pericardial effusion requiring drainage. Results: Between 1st January 2017 and 31st December 2018, a total of 408 patients underwent CABG at our institution, and eight (1.9%) required drainage of a pericardial effusion. There was no difference in age, gender, or the proportion of patients on preoperative therapeutic heparin between the study and control groups. Univariate analysis identified preoperative atrial arrhythmia (37.5% vs 8.8%, p = 0.03), reduced left ventricular ejection fraction (47% vs 56%, p = 0.04), longer cardiopulmonary bypass (130 vs 84 min, p < 0.01) and cross-clamp (107 vs 62 min, p < 0.01) times, higher drain output in the first four postoperative hours (420 vs 213 mL, p < 0.01), postoperative atrial fibrillation (100% vs 32%, p < 0.01), and pleural effusion requiring drainage (87.5% vs 12.5%, p < 0.01) as factors associated with the development of pericardial effusion requiring drainage. Conclusion: In this study, the incidence of pericardial effusion requiring drainage was 1.9%. Several factors, mainly related to preoperative or postoperative arrhythmia, length of surgery, and pleural effusion requiring drainage, were found to be associated with clinically significant pericardial effusions. High clinical suspicion and a low threshold for transthoracic echocardiography are pertinent to ensure this potentially lethal condition is not missed.
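
As an illustration of the univariate testing described, the sketch below applies a Fisher exact test to a dichotomous predictor and fits a univariate logistic regression to a continuous one. The 2x2 counts are only approximate reconstructions of the reported percentages, and the bypass-time data are synthetic; neither is the study's dataset.

```python
import numpy as np
from scipy.stats import fisher_exact
import statsmodels.api as sm

# Approximate 2x2 table for preoperative atrial arrhythmia vs effusion
# requiring drainage (37.5% of 8 cases ~ 3; 8.8% of 400 controls ~ 35).
table = np.array([[3, 5],      # effusion group: arrhythmia yes / no
                  [35, 365]])  # control group:  arrhythmia yes / no
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p_value:.4f}")

# Univariate logistic regression of effusion on cardiopulmonary bypass time
# (minutes), using entirely synthetic data of comparable scale.
rng = np.random.default_rng(1)
cpb_time = rng.normal(85, 25, size=408)
effusion = ((cpb_time + rng.normal(0, 30, 408)) > 160).astype(int)
X = sm.add_constant(cpb_time)
fit = sm.Logit(effusion, X).fit(disp=False)
print(fit.params)    # intercept and slope
print(fit.pvalues)   # univariate p-values
```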

Keywords: coronary artery bypass, pericardial effusion, pericardiocentesis, tamponade, sub-xiphoid drainage

Procedia PDF Downloads 149
185 A Research Study of the Inclusiveness of VR Headsets for Higher Education

Authors: Fredrick Forster, Gareth Ward, Matthew Tubby, Pamela Lithgow, Anne Nortcliffe

Abstract:

This paper presents the results of a research study in which random adult participants used one of four different commercially available Virtual Reality (VR) Head Mounted Displays (HMDs) and completed a post-experience reflection questionnaire. The research sought to understand how inclusive commercially available VR HMDs are and to identify any associated barriers that could impact the widespread adoption of the devices, specifically in Higher Education (HE). In the UK, education providers are legally required under the Equality Act 2010 to ensure all education facilities are inclusive and that reasonable adjustments can be applied appropriately. The research specifically aimed to identify the considerations that academics and learning technologists need to make when adopting commercial VR HMDs in HE classrooms, namely cybersickness, user comfort, interpupillary distance, inclusiveness, and user perceptions of VR. The research approach was designed to build upon previously published research on user reflections on presence, usability, and overall HMD comfort, using quantitative and qualitative research methods by way of a questionnaire. The quantitative data included the recording of physical characteristics such as the distance between the eye pupils, known as interpupillary distance (IPD). VR HMDs require each user's IPD measurement to focus the headset's virtual camera output to the right position in front of the user's eyes. In addition, the questionnaire captured users' qualitative reflections and evaluations of the broader accessibility characteristics of the VR HMDs. The initial research activity was accomplished by enabling a random sample of visitors, staff, and students at Canterbury Christ Church University, Kent, to use a VR HMD for a set period of time and asking them to complete the post-experience questionnaire. The study identified that there is little correlation between users who experience cybersickness and those who experience car sickness. Also, users with a smaller-than-average IPD (typically associated with females) were able to use the VR HMDs successfully; however, users with a larger-than-average IPD reported an impeded experience. This indicates reduced inclusiveness of the tested VR HMDs for users with a higher-than-average IPD, which is typically associated with males of certain ethnicities. As action education research, these initial findings will be used to refine the research method and conduct further investigations with the aim of verifying and validating the accessibility of current commercial VR HMDs. The conference presentation will report on the research results of the initial study and of subsequent follow-up studies with a larger variety of adult volunteers.

Keywords: virtual reality, education technology, inclusive technology, higher education

Procedia PDF Downloads 42
184 Preparation of β-Polyvinylidene Fluoride Film for Self-Charging Lithium-Ion Battery

Authors: Nursultan Turdakyn, Alisher Medeubayev, Didar Meiramov, Zhibek Bekezhankyzy, Desmond Adair, Gulnur Kalimuldina

Abstract:

In recent years, the development of sustainable energy sources has attracted extensive research interest due to the ever-growing demand for energy. As an alternative energy source to power small electronic devices, ambient energy harvesting from vibration or human body motion is considered a potential candidate. Despite the enormous progress in battery research over roughly three decades in terms of safety, life cycle, and energy density, batteries have not reached the level needed to conveniently power wearable electronic devices such as smartwatches, bands, hearing aids, etc. For this reason, the development of self-charging power units with excellent flexibility and integrated energy harvesting and storage is crucial. Self-powering is a key idea that makes it possible for a system to operate sustainably, and it is now gaining acceptance in many fields, including sensor networks, the Internet of Things (IoT), and implantable in-vivo medical devices. To solve this energy-harvesting issue, self-powering nanogenerators (NGs) were proposed and have proved highly effective. Usually, sustainable power is delivered by connecting energy harvesting and storage devices through a power management circuit; for energy storage, the Li-ion battery (LIB) is one of the most effective technologies. Through the movement of Li ions driven by an externally applied voltage source, electrochemical reactions at the anode and cathode store the electrical energy as chemical energy. In this paper, we present a simultaneous process of converting mechanical energy into chemical energy in which an NG and a LIB are combined as an all-in-one power system. The electrospinning method was used as the initial step in the development of such a system with a β-PVDF separator. The obtained film showed promising voltage output at different stress frequencies. X-ray diffraction (XRD) and Fourier Transform Infrared Spectroscopy (FT-IR) analysis showed a high percentage of the β phase of the PVDF polymer material. Moreover, it was found that the addition of 1 wt.% of BTO (barium titanate) results in higher-quality fibers: when comparing the pure 20 wt.% PVDF solution with the BTO-containing one, the latter was more viscous, and hence the sample was electrospun uniformly without any beads. Lastly, to test the sensor application of the film, a dedicated testing device was developed with which the force of a finger tap can be applied at different frequencies so that electrical signal generation is validated.

Keywords: electrospinning, nanogenerators, piezoelectric PVDF, self-charging li-ion batteries

Procedia PDF Downloads 142
183 Fibrin Glue Reinforcement of Choledochotomy Closure Suture Line for Prevention of Bile Leak in Patients Undergoing Laparoscopic Common Bile Duct Exploration with Primary Closure: A Pilot Study

Authors: Rahul Jain, Jagdish Chander, Anish Gupta

Abstract:

Introduction: Laparoscopic common bile duct exploration (LCBDE) allows cholecystectomy and the removal of common bile duct (CBD) stones to be performed during the same sitting, thereby decreasing hospital stay. A choledochotomy made for CBD exploration can be closed primarily with an absorbable suture material, but this can lead to biliary leakage postoperatively. In this study, we tried to find a solution to further lower the incidence of bile leakage by using fibrin glue to reinforce the choledochotomy suture line. Fibrin glue has a haemostatic and sealing action, strengthening the last step of physiological coagulation, and a biostimulatory effect that favours the formation of new tissue matrix. Methodology: This study was conducted at a tertiary care teaching hospital in New Delhi, India, from 2011 to 2013. Twenty patients with CBD stones documented on MRCP and a CBD diameter of 9 mm or more were included. Patients were randomized into two groups: Group A, in which the choledochotomy was closed with polyglactin 4-0 suture and the suture line reinforced with fibrin glue, and Group B, in which the choledochotomy was closed with polyglactin 4-0 suture alone. Both groups were evaluated and compared on clinical parameters such as operative time, drain content, drain output, number of days a drain was required, blood loss and transfusion requirements, length of postoperative hospital stay, and conversion to open surgery. Results: The operative time ranged from 60 to 210 min (mean 131.50 min) in Group A and from 65 to 300 min (mean 140 min) in Group B. Blood loss ranged from 10 to 120 ml (mean 51.50 ml) in Group A and from 10 to 200 ml (mean 53.50 ml) in Group B. There was no case of bile leak in Group A, while there were 2 cases of bile leak in Group B (minimum 0, maximum 900 ml, mean 97 ml); with a p value of 0.147, the difference in bile leak between the test and control groups was not statistically significant. The minimum and maximum serous drainage was nil and 80 ml (mean 11 ml) in Group A and nil and 270 ml (mean 72.50 ml) in Group B; the p value was 0.028, which is statistically significant, so serous leakage in Group A was significantly less than in Group B. Drains were removed after 2 to 4 days (mean 3 days) in Group A and after 2 to 9 days (mean 3.9 days) in Group B. Postoperative hospital stay ranged from 3 to 8 days (mean 5.30 days) in Group A and from 3 to 10 days (mean 5 days) in Group B. Conclusion: Fibrin glue application on the CBD decreases bile leakage, but not to a statistically significant degree. It does, however, significantly decrease postoperative serous drainage after LCBDE. Fibrin glue application on the CBD is a safe and easy technique without any significant adverse effects and can help less experienced surgeons performing LCBDE.

Keywords: bile leak, fibrin glue, LCBDE, serous leak

Procedia PDF Downloads 194
182 Comparing Deep Architectures for Selecting Optimal Machine Translation

Authors: Despoina Mouratidis, Katia Lida Kermanidis

Abstract:

Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular methods in automatic MT evaluation are score-based, such as the BLEU score, while others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information such as part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. The framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), drawn from the educational domain and from informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested with this schema: (i) fully connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than well-known basic approaches such as Random Forest (RF) and Support Vector Machine (SVM). The best accuracy is obtained when LSTM layers are used in the schema, while the most balanced results across classes are obtained when dense layers are used, because the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis is carried out. In this context, problems were identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results for Italian compared to Greek, taking into account that the linguistic features employed are language independent.
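
A minimal Keras sketch of this pairwise classification schema is shown below: the two MT outputs and the reference are encoded with a shared LSTM, the sentence vectors are concatenated, and a sigmoid output predicts which system produced the better translation. The sequence length, embedding dimension, layer sizes, and input names are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from tensorflow.keras import layers, Model

SEQ_LEN, EMB_DIM = 40, 300   # assumed sentence length and word-embedding size

# One sequence of word embeddings per sentence: SMT output, NMT output, reference.
smt_in = layers.Input(shape=(SEQ_LEN, EMB_DIM), name="smt_output")
nmt_in = layers.Input(shape=(SEQ_LEN, EMB_DIM), name="nmt_output")
ref_in = layers.Input(shape=(SEQ_LEN, EMB_DIM), name="reference")

# Shared LSTM encoder so all three sentences are embedded in the same space.
encoder = layers.LSTM(128)
smt_vec, nmt_vec, ref_vec = encoder(smt_in), encoder(nmt_in), encoder(ref_in)

# Concatenated vectors let the dense layers model output-reference and
# output-output similarity jointly.
features = layers.Concatenate()([smt_vec, nmt_vec, ref_vec])
hidden = layers.Dense(64, activation="relu")(features)
# Binary label: 1 if the NMT output is the better translation, 0 if the SMT one is.
output = layers.Dense(1, activation="sigmoid")(hidden)

model = Model(inputs=[smt_in, nmt_in, ref_in], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy batch just to verify the wiring.
batch = [np.random.rand(8, SEQ_LEN, EMB_DIM).astype("float32") for _ in range(3)]
print(model.predict(batch, verbose=0).shape)   # (8, 1)
```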

Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification

Procedia PDF Downloads 108
181 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably the so-called "hallucination" problem: the generation of outputs that are not grounded in the input data, which hinders their adoption into production. A common practice to mitigate hallucination is to use a Retrieval Augmented Generation (RAG) system to ground LLMs' responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts using vector similarity between the user's query and the documents, and then generates a response that is based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, a RAG system is not suitable for tabular data and subsequent data analysis tasks for multiple reasons, such as information loss, data format, and the retrieval mechanism. In this study, we explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When a beta version was deployed on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results, as it was able to serve market insight and data visualization needs with high accuracy and extensive coverage, abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding and enhancement without the need for programming skills. The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
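
The sketch below illustrates, at a high level, the planning-and-execution pattern described above: a planner LLM decomposes the analytical question into sub-tasks, a code-generation LLM turns each sub-task into pandas code, the code is executed against a DataFrame, and a final call summarizes the results. The call_llm helper and the prompts are hypothetical placeholders rather than PropertyGuru's actual implementation, and the bare exec call is for illustration only; a production system would sandbox generated code.

```python
import json
import pandas as pd

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM API; returns the model's text response."""
    raise NotImplementedError("plug in your LLM provider here")

def analyze(question: str, df: pd.DataFrame) -> str:
    # 1. Planner agent: decompose the question into an ordered JSON list of sub-tasks.
    plan = json.loads(call_llm(
        f"Break this data-analysis question into a JSON list of sub-tasks.\n"
        f"Question: {question}\nAvailable columns: {list(df.columns)}"
    ))

    # 2. Code-generation agent: turn each sub-task into executable pandas code
    #    that stores its answer in a variable named `result`.
    results = []
    for task in plan:
        code = call_llm(
            f"Write Python that uses the DataFrame `df` to do: {task}\n"
            f"Assign the answer to a variable named `result`. Return only code."
        )
        scope = {"df": df, "pd": pd}
        exec(code, scope)                  # illustration only; sandbox in production
        results.append({"task": task, "result": repr(scope.get("result"))})

    # 3. Final response: generate the complete answer from the executed outputs.
    return call_llm(
        f"Question: {question}\nExecuted sub-task results: {json.dumps(results)}\n"
        f"Write a concise market-insight answer."
    )
```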

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru

Procedia PDF Downloads 50
180 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data

Authors: S. Jurado, E. Pazmino

Abstract:

Determination of the medial axis of a porous media sample is a non-trivial problem of interest to several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, oil extraction, etc. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. The algorithm first identifies the layer of void voxels next to the solid boundaries; an iterative process then removes, or 'burns', void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize execution time and computer memory use, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis was determined by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and used to determine the pore-throat size distribution. Graphical user interface software was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software accepts HRXMT data as input, calculates porosity, medial axis, and pore-throat size distribution, and provides output in tabular and graphical formats. Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution, and porosity determination of 100³, 320³, and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data in the academic community.
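
A compact sketch of the layer-by-layer burn idea is given below, assuming a binary voxel array in which True marks void space: each pass erodes the pore space by one layer and records the pass number at which every void voxel is removed. Local maxima of the resulting burn number are where burn fronts collide, i.e. candidate medial-axis voxels. This is only an illustration of the concept, not the optimized implementation (subdomain segmentation, vectorized layer extraction) developed for the software.

```python
import numpy as np
from scipy import ndimage

def burn(void: np.ndarray) -> np.ndarray:
    """Assign each void voxel the layer number at which it is 'burnt'.

    void : 3D boolean array, True = pore space, False = solid.
    Returns an integer array: 0 for solid, k for voxels removed in pass k.
    """
    burn_number = np.zeros(void.shape, dtype=np.int32)
    remaining = void.copy()
    layer = 0
    while remaining.any():
        layer += 1
        # Voxels surviving one erosion are interior; the difference is the
        # current boundary layer, which is burnt in this pass.
        eroded = ndimage.binary_erosion(remaining)
        burn_number[remaining & ~eroded] = layer
        remaining = eroded
    return burn_number

# Idealized domain: a single spherical pore inside a 64^3 solid block.
z, y, x = np.ogrid[:64, :64, :64]
void = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2

layers = burn(void)
porosity = void.mean()
print(f"porosity = {porosity:.3f}, deepest burn layer = {layers.max()}")
```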

Keywords: medial axis, pore-throat distribution, porosity, porous media

Procedia PDF Downloads 94
179 Performance and Voyage Analysis of a Marine Gas Turbine Engine Installed to Power and Propel an Ocean-Going Cruise Ship from Lagos to Jeddah

Authors: Mathias U. Bonet, Pericles Pilidis, Georgios Doulgeris

Abstract:

An aero-derivative marine gas turbine engine model is simulated as the main propulsion prime mover of a cruise ship designed and routed to transport intending Muslim pilgrims for the annual hajj pilgrimage from Nigeria to the Islamic port city of Jeddah in Saudi Arabia. A performance assessment of the gas turbine engine has been conducted by examining the effect of the varying aerodynamic and hydrodynamic conditions encountered at various geographical locations along the scheduled transit route during the voyage. The investigation focuses on the overall behavior of the gas turbine engine employed to power and propel the ship as it operates under the ideal and adverse conditions encountered during calm and rough weather, according to the different seasons of the year in which the voyage may be undertaken. The variation of engine performance under varying operating conditions is an important economic issue, as it determines the time and speed at which the journey is completed as well as the quantity of fuel required for the voyage. The assessment also considers the increased resistance caused by fouling of the submerged portion of the ship hull and its resultant effect on the power output of the engine as well as the overall performance of the propulsion system. Daily ambient temperature levels were obtained from the UK Meteorological Office, while the varying degrees of sea turbulence along the transit route, classified according to the Beaufort scale, were also obtained as major input variables of the investigation. Assuming the ship navigates the Atlantic Ocean and the Mediterranean Sea during the winter, spring, and summer seasons, the performance modeling and simulation were accomplished using an integrated gas turbine performance simulation code known as 'Turbomach' along with a Matlab-generated code named 'Poseidon', both developed at the Power and Propulsion Department of Cranfield University. As a case study, the results of the various assumptions have further revealed that the marine gas turbine is a reliable and available alternative to the conventional marine propulsion prime movers that have dominated the maritime industry until now. The techno-economic and environmental assessment of this type of propulsion prime mover has enabled the determination of the effect of changes in weather and sea conditions on ship speed, trip time, and the quantity of fuel burned throughout the voyage.
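
To make the time-fuel trade-off concrete, the sketch below estimates leg-by-leg trip time and fuel burn using a simple cube-law power-speed relation, a hull-fouling resistance penalty, and a constant specific fuel consumption. All numbers (leg distances, baseline power, SFC, fouling factor) are illustrative assumptions and are not outputs of the Turbomach/Poseidon simulations described above.

```python
# Illustrative leg distances (nautical miles) from Lagos to Jeddah; assumed values.
legs_nm = {"Lagos-Gibraltar": 3000, "Gibraltar-Suez": 1900, "Suez-Jeddah": 700}

SPEED_KN = 20.0          # assumed service speed, knots
P_CLEAN_MW = 25.0        # assumed propulsion power at service speed, clean hull
SFC_KG_PER_MWH = 230.0   # assumed specific fuel consumption of the gas turbine
FOULING_FACTOR = 1.15    # assumed 15% added power demand from hull fouling

def leg_fuel(distance_nm: float, speed_kn: float, fouled: bool) -> tuple[float, float]:
    """Return (hours, tonnes of fuel) for one leg under a cube-law power model."""
    hours = distance_nm / speed_kn
    power_mw = P_CLEAN_MW * (speed_kn / SPEED_KN) ** 3   # power scales with speed cubed
    if fouled:
        power_mw *= FOULING_FACTOR
    fuel_t = power_mw * hours * SFC_KG_PER_MWH / 1000.0
    return hours, fuel_t

total_h = total_t = 0.0
for name, dist in legs_nm.items():
    h, t = leg_fuel(dist, SPEED_KN, fouled=True)
    total_h += h
    total_t += t
    print(f"{name}: {h:.0f} h, {t:.0f} t fuel")
print(f"Voyage total: {total_h / 24:.1f} days, {total_t:.0f} t fuel")
```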

Keywords: ambient temperature, hull fouling, marine gas turbine, performance, propulsion, voyage

Procedia PDF Downloads 167