Search results for: real time digital simulator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22690

10120 Selection of Rayleigh Damping Coefficients for Seismic Response Analysis of Soil Layers

Authors: Huai-Feng Wang, Meng-Lin Lou, Ru-Lin Zhang

Abstract:

Direct time integration is widely used in seismic response analysis and commonly adopts Rayleigh damping. An approach is presented for selecting Rayleigh damping coefficients for seismic analyses so that the computed response is consistent with the response obtained under modal damping. In the presented approach, an expression relating the error of the peak response, obtained through the complete quadratic combination (CQC) method, to the Rayleigh damping coefficients is set up, and the coefficients are then obtained by minimizing that error. Two finite element models of soil layers, excited by 28 seismic waves, were used to demonstrate the feasibility and validity of the approach.
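The two-frequency matching that underlies such coefficient selection can be sketched as follows. This is the classical textbook construction, solving ζ = α/(2ω) + βω/2 at two target modes, not the paper's CQC-based error minimization; the target frequencies (2 Hz and 8 Hz) and the 5 % damping ratio are hypothetical values chosen for illustration.

```python
import numpy as np

def rayleigh_coefficients(omega_i, omega_j, zeta_i, zeta_j):
    """Solve zeta_k = alpha/(2*omega_k) + beta*omega_k/2 at two target
    circular frequencies for the mass-proportional (alpha) and
    stiffness-proportional (beta) Rayleigh coefficients."""
    A = np.array([[1.0 / (2.0 * omega_i), omega_i / 2.0],
                  [1.0 / (2.0 * omega_j), omega_j / 2.0]])
    alpha, beta = np.linalg.solve(A, np.array([zeta_i, zeta_j]))
    return alpha, beta

# Match 5 % modal damping at two hypothetical modes (2 Hz and 8 Hz).
w1, w2 = 2 * np.pi * 2.0, 2 * np.pi * 8.0
alpha, beta = rayleigh_coefficients(w1, w2, 0.05, 0.05)
```

With equal damping at both frequencies this reduces to the familiar closed forms α = 2ζω₁ω₂/(ω₁+ω₂) and β = 2ζ/(ω₁+ω₂); between the two target frequencies the effective damping dips below ζ, which is the discrepancy the paper's minimization addresses.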

Keywords: Rayleigh damping, modal damping, damping coefficients, seismic response analysis

Procedia PDF Downloads 433
10119 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times can be studied; this is the basis of the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density, as evidenced by the cosmic microwave background and the large-scale structure of the universe. Extrapolating back further from this early state, however, reaches a singularity, which cannot be explained by modern physics and at which the big bang theory is no longer valid. In addition, one might expect a nonuniform energy distribution across the universe from such a sudden expansion; yet highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature can be achieved across the early universe. The quantum fluctuations of this stage also provide a means for studying the imperfections with which the universe would have begun. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed.
This research series aims at addressing the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level referred to as the "base energy." The governing principles of the base energy are discussed in detail in the second paper in the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. The concept proposed in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of the base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 91
10118 Nursing Students’ Learning Effects of Online Visits for Mothers Rearing Infants during the COVID-19 Pandemic

Authors: Saori Fujimoto, Hiromi Kawasaki, Mari Murakami, Yoko Ueno

Abstract:

Background: Coronavirus disease (COVID-19) has been spreading throughout the world. In Japan, many nursing universities have conducted online clinical practice to secure students' learning opportunities. In the field of women's health nursing, even after the pandemic ends, it will be worthwhile to utilize online practice, given the declining birthrate and the need to reduce the burden on mothers. This study examined the learning effects of online visits conducted by nursing students for mothers with infants during the COVID-19 pandemic, in order to enhance the students' ability to carry out online practice effectively even in ordinary times. Methods: Students were divided into groups of three, assessed information on the mothers, and planned the visits. After role-play conducted by the students and teachers, an online visit was carried out. The analysis targets were the self-evaluation scores of nine students who conducted online visits in June 2020 and had consented to participate. The evaluation covered three items on assessment, two items on planning, one item on ethical consideration, five items on nursing practice, and two items on evaluation. The self-evaluation scores ranged from 4 ('Can do with a little advice') to 1 ('Cannot do even with a little advice'). A univariate statistical analysis was performed. This study was approved by the Ethical Committee for Epidemiology of Hiroshima University.
Results: The items with the highest mean (standard deviation) scores were 'advocates for the dignity and the rights of mothers' (3.89 (0.31)) and 'communication behavior needed to create a trusting relationship' (3.89 (0.31)). Next were 'individual nursing practice tailored to mothers' (3.78 (0.42)) and 'review own practice and work on own tasks' (3.78 (0.42)). The means (standard deviations) of the items by type were as follows: three assessment items, 3.26 (0.70); two planning items, 3.11 (0.49); one ethical consideration item, 3.89 (0.31); five nursing practice items, 3.56 (0.54); and two evaluation items, 3.67 (0.47). Conclusion: The highest self-evaluations were for 'advocates for the dignity and the rights of mothers' and 'communication behavior needed to create a trusting relationship.' These findings suggest that the students were able to form good relationships with the mothers by improving their ability to communicate effectively and by presenting a positive attitude, even when conducting health visits online. However, the self-evaluation scores for assessment and planning were lower than those for ethical consideration, nursing practice, and evaluation. This was most likely due to a lack of opportunities and time to gather information, and the need to modify and add to plans in a short amount of time during a single online visit. The methods used in conducting online visits need further consideration from the following viewpoints: methods of gathering information, and the ability to make changes over multiple visits.

Keywords: infants, learning effects, mothers, online visit practice

Procedia PDF Downloads 138
10117 Amyloid-β Fibrils Remodeling by an Organic Molecule: Insight from All-Atomic Molecular Dynamics Simulations

Authors: Nikhil Agrawal, Adam A. Skelton

Abstract:

Alzheimer’s disease (AD) is one of the most common forms of dementia and is caused by the misfolding and aggregation of amyloid beta (Aβ) peptides into amyloid-β fibrils (Aβ fibrils). A number of candidate molecules have been proposed to disrupt and remodel Aβ fibrils. To study the molecular mechanisms of Aβ fibril remodeling, we performed a series of all-atom molecular dynamics simulations, with a total simulation time of 3 µs, in explicit solvent. Several previously undiscovered binding modes between the candidate molecule and Aβ fibrils are unraveled, one of which shows a direct conformational change of the Aβ fibril. By understanding the physicochemical factors responsible for binding and the subsequent remodeling of Aβ fibrils by the candidate molecule, avenues into structure-based drug design for AD can be opened.

Keywords: alzheimer’s disease, amyloid, MD simulations, misfolded protein

Procedia PDF Downloads 342
10116 An Empirical Study of Determinants Influencing Telemedicine Services Acceptance by Healthcare Professionals: Case of Selected Hospitals in Ghana

Authors: Jonathan Kissi, Baozhen Dai, Wisdom W. K. Pomegbe, Abdul-Basit Kassim

Abstract:

Protecting patients’ digital information is a growing concern for healthcare institutions, as people nowadays increasingly live their lives through telemedicine services. Telemedicine services are confronted with several determinants that hinder their successful implementation, especially in developing countries, and identifying the determinants that influence the acceptance of telemedicine services is a problem for healthcare professionals. Despite the tremendous increase in telemedicine services, their adoption and use have been quite slow in some healthcare settings. It is generally accepted in today’s globalizing world that the success of telemedicine services relies on users’ satisfaction, and satisfying health professionals and patients is one of the crucial objectives of telemedicine success. This study investigates the determinants that influence health professionals’ intention to utilize telemedicine services in clinical activities in a sub-Saharan African country in West Africa (Ghana). A hybridized model comprising health adoption models, including the technology acceptance model, diffusion of innovation theory, and protection motivation theory, was used to investigate these questions. The study was carried out in four government health institutions that apply and regulate telemedicine services in their clinical activities. A structured questionnaire was developed and used for data collection. Purposive and convenience sampling methods were used to select healthcare professionals from different medical fields for the study. The collected data were analyzed using a structural equation modeling (SEM) approach.
All selected constructs showed a significant relationship with health professionals’ behavioral intention, in the direction expected from prior literature, including perceived usefulness, perceived ease of use, management strategies, financial sustainability, communication channels, patient security threat, patient privacy risk, self-efficacy, actual service use, user satisfaction, and telemedicine systems security threat. Surprisingly, the user characteristics and response efficacy of health professionals were not significant in the hybridized model. The findings and insights from this research show that health professionals are pragmatic when making choices about technology applications and about their willingness to use telemedicine services; they are, however, anxious about its threats and coping appraisals. The significant constructs identified in the study may help to increase efficiency, quality of services, quality of patient care delivery, and user satisfaction among healthcare professionals. The implementation and effective utilization of telemedicine services in the selected hospitals will serve as a strategy to reduce hardships in healthcare service delivery and will help attain universal health coverage for the whole populace. This study contributes to empirical knowledge by identifying the vital factors influencing health professionals’ behavioral intention to adopt telemedicine services. The study will also help healthcare stakeholders formulate better policies on telemedicine service usage.

Keywords: telemedicine service, perceived usefulness, perceived ease of use, management strategies, security threats

Procedia PDF Downloads 137
10115 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss

Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin

Abstract:

Urban rail transit (URT) plays a significant role in dealing with traffic congestion and environmental problems in cities. However, equipment failure and obstruction of links often lead to loss of capacity on URT links in daily operation, which seriously affects the reliability and transport service quality of the URT network. In order to measure the influence of links’ capacity loss on the reliability and transport service quality of a URT network, passengers are divided into three categories in case of links’ capacity loss. Passengers in category 1 are less affected by the loss of capacity; their travel is reliable, since their travel quality is not significantly reduced. Passengers in category 2 are heavily affected; their travel is not reliable, since their travel quality is seriously reduced, although they can still travel on the URT. Passengers in category 3 cannot travel on the URT because the passenger flow on their travel paths exceeds the available capacity; their travel is not reliable. Thus, the proportion of passengers in category 1, whose travel is reliable, is defined as the reliability indicator of the URT network. The transport service quality of a URT network is related to passengers’ travel time, transfer times, and whether seats are available. The generalized travel cost is a comprehensive reflection of travel time, transfer times, and travel comfort; therefore, passengers’ average generalized travel cost is used as the transport service quality indicator of the URT network. The impact of links’ capacity loss on transport service quality is measured by passengers’ relative average generalized travel cost with and without links’ capacity loss. The proportion of passengers affected by each link and the betweenness of links are used to determine the important links in the URT network.
The stochastic user equilibrium distribution model based on an improved logit model is used to determine passengers’ categories and calculate passengers’ generalized travel cost in case of links’ capacity loss; it is solved with the method of successive weighted averages. The reliability and transport service quality indicators of the URT network are calculated from the solution. Taking the Wuhan Metro as a case, the reliability and transport service quality of the Wuhan metro network are measured with the indicators and method proposed in this paper. The results show that the proportion of passengers affected by each link can effectively identify important links, which have a great influence on the reliability and transport service quality of the URT network; the important links are mostly connected to transfer stations, and their passenger flow is high; as the number of failed links and the proportion of capacity loss increase, the reliability of the network keeps decreasing, the proportion of passengers in category 3 keeps increasing, and the proportion of passengers in category 2 increases at first and then decreases; and when the number of failed links and the proportion of capacity loss increase to a certain level, the decline in transport service quality weakens.
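The two network-level indicators defined above can be illustrated with a toy computation. This is a simplified sketch, not the paper's equilibrium assignment model: the passenger groups, costs, and the cost-increase threshold separating category 1 from category 2 are all hypothetical.

```python
# Classify each passenger group by comparing its generalized travel cost
# with and without links' capacity loss, then compute the reliability
# indicator (share of category-1 passengers) and the relative average
# generalized travel cost of the passengers still able to travel.

def network_indicators(groups, degrade_threshold=0.2):
    """groups: list of (n_passengers, base_cost, degraded_cost);
    degraded_cost is None for passengers who can no longer travel
    (category 3)."""
    total = sum(n for n, _, _ in groups)
    reliable = 0                          # category-1 passenger count
    cost_base = cost_degraded = served = 0.0
    for n, base, degraded in groups:
        if degraded is None:              # category 3: cannot travel
            continue
        served += n
        cost_base += n * base
        cost_degraded += n * degraded
        if (degraded - base) / base <= degrade_threshold:
            reliable += n                 # category 1: barely affected
    reliability = reliable / total
    relative_cost = (cost_degraded / served) / (cost_base / served)
    return reliability, relative_cost

r, c = network_indicators([(800, 10.0, 11.0),   # mildly affected
                           (150, 10.0, 16.0),   # heavily affected
                           (50, 10.0, None)])   # cannot travel
```

Here 800 of 1000 passengers stay within the threshold, so the reliability indicator is 0.8, and the relative average generalized travel cost of the served passengers exceeds 1, reflecting the degraded service quality.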

Keywords: urban rail transit network, reliability, transport service quality, links’ capacities loss, important links

Procedia PDF Downloads 126
10114 Exposure to Radon in Air in Tourist Caves in Bulgaria

Authors: Bistra Kunovska, Kremena Ivanova, Jana Djounova, Desislava Djunakova, Zdenka Stojanovska

Abstract:

The carcinogenic effects of radon as a radioactive noble gas have been studied and show a strong correlation between radon exposure and the occurrence of lung cancer, even at low radon levels. The major part of the natural radiation dose in humans is received by inhaling radon and its progenies, which originate from the decay chain of U-238. Indoor radon poses a substantial threat to human health when build-up occurs in confined spaces such as homes, mines, and caves; the risk increases with the duration of exposure and is proportional to both the radon concentration and the time of exposure. Tourist caves are a case of special environmental conditions that may be affected by high radon concentrations. They are a recognized hazard in terms of radon exposure for cave workers (guides, employees working in shops built above the cave entrances, etc.), but due to the sensitive nature of the cave environment, high concentrations cannot be easily removed: forced ventilation of the air in caves is considered infeasible due to its possible harmful effects on the microclimate, flora, and fauna. The risks to human health posed by exposure to elevated radon levels in caves are not well documented: various studies around the world report very high radon concentrations in caves and high exposure of employees, but without a follow-up assessment of the overall impact on human health. This study was developed in the implementation of a national project to assess the potential health effects caused by exposure to elevated levels of radon in buildings with public access, under the National Science Fund of Bulgaria, in the framework of grant No. КП-06-Н23/1/07.12.2018. The purpose of the work is to assess the radon levels in Bulgarian caves and the exposure of visitors and workers.
The number of caves to survey (sample size) was calculated for simple random selection from the 65 available caves (the sampling population): 13 caves, with a 95 % confidence level and a margin of error of approximately 25 %. Radon concentration in air was measured at specific locations in the caves using CR-39 nuclear track-etch detectors placed by members of the research team. Despite the fact that all of the caves were formed in karst rocks, the radon levels differed considerably from cave to cave (97–7575 Bq/m³). An assessment of the influence of the orientation of the caves relative to the earth's surface (horizontal, inclined, vertical) on the radon concentration was performed. The health hazard and radon risk from inhaling radon and its daughter products were evaluated for each surveyed cave. Reducing the time spent in the cave has been recommended in order to decrease the exposure of workers.
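The 13-cave figure is consistent with the standard sample-size formula for estimating a proportion in a finite population. A sketch of that calculation follows; this is our reconstruction, not necessarily the authors' exact procedure, and it assumes the conventional worst-case proportion p = 0.5.

```python
import math

def sample_size(population, confidence_z=1.96, margin=0.25, p=0.5):
    """Sample size for estimating a proportion in a finite population:
    n = N*z^2*p*(1-p) / (e^2*(N-1) + z^2*p*(1-p)), rounded up.
    confidence_z = 1.96 corresponds to a 95 % confidence level."""
    num = population * confidence_z**2 * p * (1 - p)
    den = margin**2 * (population - 1) + confidence_z**2 * p * (1 - p)
    return math.ceil(num / den)

n = sample_size(65)   # 95 % confidence, 25 % margin of error -> 13
```

Tightening the margin of error raises the required sample sharply; with a 5 % margin, even a population of 1000 would require several hundred sampled units.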

Keywords: tourist caves, radon concentration, exposure, Bulgaria

Procedia PDF Downloads 183
10113 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing

Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn

Abstract:

Introduction: Neuromarketing employs numerous methodologies when investigating products and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be considered using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants’ mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises the evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and which ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli.
However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective uses of each EEG measure commonly employed in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.
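The frontal alpha asymmetry index described above is conventionally computed as the difference of log-transformed alpha-band powers at homologous frontal electrodes (e.g. F4 minus F3). A minimal sketch, with hypothetical band-power values standing in for a real EEG pipeline:

```python
import numpy as np

def frontal_alpha_asymmetry(power_left, power_right):
    """Classical FAA index: ln(right alpha power) - ln(left alpha power),
    e.g. electrodes F4 vs F3. Because alpha power is inversely related
    to cortical activity, a positive index means relatively greater
    LEFT-frontal activity, commonly read as an approach/positive
    response; a negative index suggests a withdrawal/negative response."""
    return np.log(power_right) - np.log(power_left)

# Hypothetical alpha-band power values (arbitrary units):
faa = frontal_alpha_asymmetry(power_left=4.0, power_right=6.0)
```

In a full pipeline the power values would come from band-pass filtering or a spectral decomposition of the frontal channels, averaged over the stimulus window, before this index is taken.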

Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency

Procedia PDF Downloads 107
10112 DNA Multiplier: A Design Architecture of a Multiplier Circuit Using DNA Molecules

Authors: Hafiz Md. Hasan Babu, Khandaker Mohammad Mohi Uddin, Nitish Biswas, Sarreha Tasmin Rikta, Nuzmul Hossain Nahid

Abstract:

Nanomedicine and bioengineering use biological systems that can perform computing operations. In a biocomputational circuit, different types of biomolecules and DNA (deoxyribonucleic acid) are used as active components. DNA computing has the capability of parallel processing and a large storage capacity, which distinguishes it from other computing systems. In most processors, the multiplier is treated as a core hardware block, and multiplication is one of the most time-consuming and lengthy tasks. In this paper, DNA multipliers that are cost-effective with respect to conventional ones are designed using algorithms of molecular DNA operations. The speed and storage capacity of a DNA multiplier are also much higher than those of a traditional silicon-based multiplier.
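For reference, the conventional baseline such a design competes with is the shift-and-add scheme implemented by a hardware multiplier block. The sketch below shows that classical silicon-side algorithm, not the paper's DNA design:

```python
def shift_and_add_multiply(a, b):
    """Classical shift-and-add multiplication of unsigned integers:
    scan the multiplier bit by bit, adding the appropriately shifted
    multiplicand whenever the current bit is set. A hardware multiplier
    performs these partial-product additions in parallel or pipelined
    stages, which is why it dominates datapath area and latency."""
    product = 0
    while b:
        if b & 1:            # low bit of the multiplier is set
            product += a     # add the shifted multiplicand
        a <<= 1              # shift multiplicand left (next weight)
        b >>= 1              # consume one multiplier bit
    return product

result = shift_and_add_multiply(13, 11)   # 143
```

Each iteration corresponds to one partial-product row of the schoolbook method; an n-bit multiply needs n such rows, which is the cost a massively parallel substrate like DNA computing aims to amortize.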

Keywords: biological systems, DNA multiplier, large storage, parallel processing

Procedia PDF Downloads 211
10111 A Survey on Routh-Hurwitz Stability Criterion

Authors: Mojtaba Hakimi-Moghaddam

Abstract:

The Routh-Hurwitz stability criterion is a powerful approach to determining the stability of linear time-invariant systems. By applying this criterion to the characteristic equation of a system, its stability or marginal stability can be determined. Although the roots(.) command of MATLAB can easily be used to find the roots of a polynomial, the characteristic equation of a closed-loop system usually includes parameters, which such software cannot handle; the Routh-Hurwitz stability criterion, by contrast, yields the region of parameter values in which stability is guaranteed. Moreover, this criterion has been extended to characterize the stability of interval polynomials as well as fractional-order polynomials. Furthermore, it can help us to design stable and minimum-phase controllers. In this paper, the theory and application of this criterion are reviewed, and several illustrative examples are given.
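As a minimal illustration of the criterion (not drawn from the paper), the sketch below builds the Routh array for a polynomial with numeric coefficients and counts sign changes in its first column, which equals the number of right-half-plane roots. It assumes no zero pivots arise; the special cases would need the usual epsilon or auxiliary-polynomial treatment the survey discusses.

```python
from fractions import Fraction

def routh_first_column(coeffs):
    """Build the Routh array for a polynomial given by its coefficients
    (highest power first) and return the first column. Exact rational
    arithmetic avoids rounding in the pivot divisions."""
    c = [Fraction(x) for x in coeffs]
    rows = [c[0::2], c[1::2]]
    rows[1] += [Fraction(0)] * (len(rows[0]) - len(rows[1]))
    for _ in range(len(coeffs) - 2):
        prev, cur = rows[-2], rows[-1]
        new = [(cur[0] * prev[j + 1] - prev[0] * cur[j + 1]) / cur[0]
               for j in range(len(cur) - 1)]
        new += [Fraction(0)]              # keep rows the same width
        rows.append(new)
    return [r[0] for r in rows]

def rhp_roots(coeffs):
    """Sign changes in the first column = roots in the open right
    half-plane (assuming no zero pivots)."""
    col = [x for x in routh_first_column(coeffs) if x != 0]
    return sum(1 for a, b in zip(col, col[1:]) if a * b < 0)

# s^3 + s^2 + 2s + 8 factors as (s + 2)(s^2 - s + 4): two RHP roots.
unstable = rhp_roots([1, 1, 2, 8])    # 2
stable = rhp_roots([1, 2, 3, 4])      # 0
```

With symbolic parameters the same tabulation is carried out by hand (or with a computer algebra system), and requiring every first-column entry to be positive yields the parameter region of guaranteed stability.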

Keywords: Hurwitz polynomials, Routh-Hurwitz stability criterion, continued fraction expansion, pure imaginary roots

Procedia PDF Downloads 319
10110 Transient Current Investigations in Liquid Crystalline Polyurethane

Authors: Jitendra Kumar Quamara, Sohan Lal, Pushkar Raj

Abstract:

The electrical conduction behavior of liquid crystalline polyurethane (LCPU) has been investigated under transient conditions in the operating temperature range of 50-220°C at various electric fields of 4.35-43.45 kV/cm. The transient currents show a hyperbolic decay character, and the decay parameter ∆t (one-tenth decay time) depends on the field as well as on the temperature. The increase in I0/Is values (where I0 represents the current observed immediately after applying the voltage and Is represents the steady-state current) and the variation of mobility at high operating temperatures show the appearance of a mesophase. The origin of the transient currents has been attributed to the dipolar nature of the carbonyl (C=O) groups in the main chain of LCPU and to the trapping of charge carriers.

Keywords: electrical conduction, transient current, liquid crystalline polymers, mesophase

Procedia PDF Downloads 274
10109 A Microfluidic Biosensor for Detection of EGFR 19 Deletion Mutation Targeting Non-Small Cell Lung Cancer on Rolling Circle Amplification

Authors: Ji Su Kim, Bo Ram Choi, Ju Yeon Cho, Hyukjin Lee

Abstract:

The epidermal growth factor receptor (EGFR) 19 deletion mutation gene is over-expressed in carcinoma patients. The EGFR 19 deletion mutation, in which one section of the coding exon 19 of EGFR is deleted, is known as a typical biomarker of non-small cell lung cancer (NSCLC). Therefore, there have been many attempts over the years to detect the EGFR 19 deletion mutation and thereby replace conventional diagnostic methods such as PCR and tissue biopsy. We developed a simple and facile detection platform based on rolling circle amplification (RCA), which provides highly amplified products by isothermal amplification of a ligated DNA template. A limit of detection of ~50 nM and a fast detection time of ~30 min could be achieved by introducing RCA.

Keywords: EGFR19, cancer, diagnosis, rolling circle amplification (RCA), hydrogel

Procedia PDF Downloads 252
10108 Curriculum Check in Industrial Design, Based on Knowledge Management in Iran Universities

Authors: Maryam Mostafaee, Hassan Sadeghi Naeini, Sara Mostowfi

Abstract:

Today, knowledge management (KM) plays an important role in organizations. Basically, knowledge management is concerned with using knowledge to take the fullest advantage of an organization's workforce in advancing the organization's goals and demands. The purpose of knowledge management is not only to manage the existing documentation, information, and data throughout an organization; its most important part is to control the most important and key factors of that information and data. The aim is to bring the information employees need, at the right time, from a genuine source, so that the best performance and results can be achieved and the performance of the organization maximized. Many definitions of management have been put forward: management is the science that repeatedly applies accurate knowledge to shape an organization and take full advantage of it in reaching the organization's goals and targets, for use by employees and users. According to the Collins dictionary, knowledge is: facts, emotions, or experiences known by a person or a group of people. According to the Merriam-Webster dictionary, management is: the act or skill of controlling and making decisions about a business, department, sports team, etc. According to the Oxford dictionary, knowledge management is: efficient handling of information and resources within a commercial organization; and industrial design is: the art or process of designing manufactured products ('the scale is a beautiful work of industrial design'). When knowledge management is performed well in universities, the discovery and creation of new knowledge are facilitated, and procedures for knowledge exchange between different units are established. A college's officials and employees who understand the importance of knowledge for the university's success will make greater efforts to prevent errors.
In this strategy, the relevant factors and trends, and their management in the university, are explored. In this research, Iranian universities were analyzed with respect to their use of knowledge management, covering: 1. the discovery of knowledge management in Iranian universities; 2. the transfer of existing knowledge between faculties and units; 3. the participation of employees in acquiring, using, and transferring knowledge; 4. the accessibility of valid sources; 5. research on the factors and correct processes in the university. Among the benefits analyzed are: enabling better and faster decision-making; making it easy to find relevant information and resources; reusing ideas, documents, and expertise; and avoiding redundant effort. Conclusion: the effectiveness of knowledge management in the industrial design field was found to be low. Based on checklists completed by education officials and professors in the universities, and the calculated coefficient of effectiveness, knowledge management has not attained its proper place.

Keywords: knowledge management, industrial design, educational curriculum, learning performance

Procedia PDF Downloads 366
10107 Benchmarking Bert-Based Low-Resource Language: Case Uzbek NLP Models

Authors: Jamshid Qodirov, Sirojiddin Komolov, Ravilov Mirahmad, Olimjon Mirzayev

Abstract:

Nowadays, natural language processing tools play a crucial role in our daily lives, including various text processing techniques. There are very advanced models for widely spoken languages, such as English and Russian. But in some languages, such as Uzbek, NLP models have been developed only recently; thus, there are only a few NLP models for the Uzbek language. Moreover, there is no work showing how the Uzbek NLP models behave in different situations and when to use each of them. This work tries to close this gap and compares the Uzbek NLP models existing as of the time this article was written. The authors compare the NLP models in two different scenarios: sentiment analysis and sentence similarity, which are implementations of the two most common problems in the industry: classification and similarity. Another outcome of this work is two datasets, for classification and sentence similarity in the Uzbek language, which we generated ourselves and which can be useful in both industry and academia.
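Sentence-similarity benchmarks of this kind typically score a model by the cosine similarity of its sentence embeddings. A minimal sketch follows, with short hypothetical vectors standing in for real BERT outputs (which have hundreds of dimensions):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors: 1.0 means
    identical direction (maximally similar sentences), 0.0 means
    orthogonal (unrelated)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 4-dimensional sentence embeddings:
sim = cosine_similarity([0.2, 0.7, 0.1, 0.4], [0.25, 0.6, 0.2, 0.35])
```

A benchmark then correlates these model scores with human similarity judgments across the dataset's sentence pairs, which is how the compared Uzbek models would be ranked.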

Keywords: NLP, benchmark, BERT, vectorization

Procedia PDF Downloads 50
10106 Socioeconomic Burden of Life Long Disease: A Case of Diabetes Care in Bangladesh

Authors: Samira Humaira Habib

Abstract:

Diabetes has profound effects on individuals and their families. If diabetes is not well monitored and managed, it leads to long-term complications and a large and growing cost to the health care system. The prevalence and socioeconomic burden of diabetes, and the relative return on investment for eliminating or reducing that burden, are therefore important questions. Studies of the socioeconomic cost burden of diabetes are well established in developed countries but almost absent in developing countries like Bangladesh. The main objective of this study is to estimate the total socioeconomic burden of diabetes. It is a prospective longitudinal follow-up study, analytical in nature. Primary and secondary data were collected from patients undergoing treatment for diabetes at the out-patient department of the Bangladesh Institute of Research & Rehabilitation in Diabetes, Endocrine & Metabolic Disorders (BIRDEM). Of the 2115 diabetic subjects, females constitute around 50.35% and males 49.65%. Among the subjects, 1323 have controlled and 792 have uncontrolled diabetes. Cost analysis of the 2115 diabetic patients shows that the total cost of diabetes management and treatment is US$ 903,018, with an average of US$ 426.95 per patient. Among direct costs, medical treatment at hospital along with investigations constitutes most of the cost of diabetes; the average hospital cost is US$ 311.79, an alarming figure for diabetic patients. Among indirect costs, the cost of productivity loss (US$ 51,110.1) is the highest item, and the total indirect cost is US$ 69,215.7. The incremental cost of intensive management of uncontrolled diabetes is US$ 101.54 per patient; the event-free time gained in this group is 0.55 years, and the life years gained are 1.19 years. The incremental cost per event-free year gained is US$ 198.12.
The incremental cost of intensive management of the controlled group is US$ 89.54 per patient, the event-free time gained is 0.68 years, and the life years gained are 1.12 years. The incremental cost per event-free year gained is US$ 223.34. The EuroQoL difference between the groups is found to be 64.04. The cost-effectiveness ratio is found to be US$ 1.64 cost per effect for controlled diabetes and US$ 1.69 cost per effect for uncontrolled diabetes, so management of diabetes is highly cost-effective. Costs for young type 1 diabetic patients were concentrated in the upper socioeconomic class, and costs increased with the duration of diabetes. The dietary pattern showed that macronutrient intake and cost are significantly higher in the uncontrolled group than in their counterparts. Proper management and control of diabetes can decrease the cost of care in the long term.
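The incremental cost per event-free year gained reported above follows the standard incremental cost-effectiveness ratio (extra cost divided by extra health gained). A minimal sketch, using illustrative numbers rather than the study's data:

```python
def icer(incremental_cost, incremental_effect):
    """Incremental cost-effectiveness ratio: extra cost per unit of health
    gained (here, per event-free year)."""
    if incremental_effect == 0:
        raise ValueError("incremental effect must be nonzero")
    return incremental_cost / incremental_effect

# Illustrative values only, not the study's figures:
print(icer(100.0, 0.5))  # prints 200.0
```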

Keywords: cost, cost-effective, chronic diseases, diabetes care, burden, Bangladesh

Procedia PDF Downloads 146
10105 Methylphenidate Use by Canadian Children and Adolescents and the Associated Adverse Reactions

Authors: Ming-Dong Wang, Abigail F. Ruby, Michelle E. Ross

Abstract:

Methylphenidate is a first-line treatment drug for attention deficit hyperactivity disorder (ADHD), a common mental health disorder in children and adolescents. Over the last several decades, the rate of children and adolescents using ADHD medication has been increasing in many countries. A recent study found that the prevalence of ADHD medication use among children aged 3-18 years increased in 13 different world regions between 2001 and 2015, where the absolute increase ranged from 0.02 to 0.26% per year. The goal of this study was to examine the use of methylphenidate in Canadian children and its associated adverse reactions. Methylphenidate use information among young Canadians aged 0-14 years was extracted from IQVIA data on prescriptions dispensed by pharmacies between April 2014 and June 2020. The adverse reaction information associated with methylphenidate use was extracted from the Canada Vigilance database for the same time period. Methylphenidate use trends were analyzed based on sex, age group (0-4 years, 5-9 years, and 10-14 years), and geographical location (province). The common classes of adverse reactions associated with methylphenidate use were sorted, and the relative risks associated with methylphenidate use as compared with two second-line amphetamine medications for ADHD were estimated. This study revealed that among Canadians aged 0-14 years, every 100 people used about 25 prescriptions (or 23,000 mg) of methylphenidate per year during the study period, and use increased with time. Boys used almost three times more methylphenidate than girls. The amount of drug used increased with age: Canadians aged 10-14 years used nearly three times as much as those aged 5-9 years. Seasonal methylphenidate use patterns were apparent among young Canadians, but the seasonal trends differed among the three age groups. 
Methylphenidate use varied from region to region, and the highest methylphenidate use was observed in Quebec, where use was at least double that of any other province. During the study period, Health Canada received 304 adverse reaction reports associated with the use of methylphenidate for Canadians aged 0-14 years. The number of adverse reaction reports received for boys was 3.5 times higher than that for girls. The three most common adverse reaction classes were psychiatric disorders; nervous system disorders; and injury, poisoning and procedural complications. The most commonly reported adverse reaction for boys was aggression (11.2%), while for girls it was tremor (9.6%). The safety profile in terms of adverse reaction classes associated with methylphenidate use was similar to that of the selected control products. Methylphenidate is a commonly used pharmaceutical product in young Canadians, particularly in the province of Quebec. Boys used approximately three times more of this product than girls. Future investigation is needed to determine what factors are associated with the observed geographic variations in Canada.

Keywords: adverse reaction risk, methylphenidate, prescription trend, use variation

Procedia PDF Downloads 155
10104 Research Progress on the Correlation between Tinnitus and Sleep Behaviors

Authors: Jiajia Peng

Abstract:

Tinnitus is one of the common symptoms of ear diseases and is characterized by an abnormal perception of sound without external stimulation. Tinnitus is distressing and seriously affects the lives of approximately 1% of the general population. Sleep disturbance is a common problem in patients with tinnitus. Lack of sleep leads to the accumulation of metabolites in the brain that cannot be cleared in time. These substances enhance sympathetic nerve reactivity in the auditory system, resulting in the occurrence or aggravation of tinnitus. Tinnitus may in turn aggravate sleep disturbance, forming a vicious circle. Through a systematic review of the relevant literature, we summarize the research on tinnitus and sleep. Although the results suggest that tinnitus is often accompanied by sleep disturbance, the impact of unfavorable sleep habits on tinnitus is not clear. In particular, relationships between sleep behaviors and other chronic diseases have been revealed. To reduce the incidence rate of tinnitus, clinicians should pay attention to the relevance of different sleep behaviors to tinnitus.

Keywords: tinnitus, sleep, sleep factor, sleep behavior

Procedia PDF Downloads 151
10103 A Fast GPS Satellites Signals Detection Algorithm Based on Simplified Fast Fourier Transform

Authors: Beldjilali Bilal, Benadda Belkacem, Kahlouche Salem

Abstract:

Due to the Doppler effect caused by the high velocity of the satellites and, in some cases, of the receivers, the frequencies of the received Global Positioning System (GPS) signals are shifted from their nominal values. Several acquisition algorithms can be used to estimate the new frequency and phase shift values, and numerous algorithms are based on frequency-domain calculation. Our developed algorithm is a new approach to Global Positioning System signal acquisition based on the simplified fast Fourier transform. The proposed algorithm is easier to implement and has a faster execution time compared with earlier ones.
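The abstract does not give implementation details; the family of FFT-based methods it builds on is the parallel code-phase search, in which, for each candidate Doppler bin, the carrier is wiped off and the circular correlation with the local PRN code is computed via FFT. A minimal sketch under that assumption (the code, record length and Doppler grid below are illustrative, not the authors' configuration):

```python
import numpy as np

def acquire(signal, local_code, doppler_bins):
    """Parallel code-phase search acquisition: for each Doppler bin, wipe off
    the carrier, then circularly correlate with the local PRN code via
    FFT/IFFT. Returns the strongest (peak power, Doppler bin, code phase)."""
    n = len(signal)
    t = np.arange(n)
    code_fft_conj = np.conj(np.fft.fft(local_code))
    best = (0.0, None, None)
    for fd in doppler_bins:  # fd in cycles per record, for simplicity
        wiped = signal * np.exp(-2j * np.pi * fd * t / n)
        corr = np.fft.ifft(np.fft.fft(wiped) * code_fft_conj)
        power = np.abs(corr) ** 2
        k = int(np.argmax(power))
        if power[k] > best[0]:
            best = (float(power[k]), fd, k)
    return best
```

Given a received signal that is a circularly shifted replica of the code on a residual carrier, the returned Doppler bin and code phase recover the shift applied.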

Keywords: global positioning system, acquisition, FFT, GPS/L1, software receiver, weak signal

Procedia PDF Downloads 244
10102 Silicon Nanostructure Based on Metal-Nanoparticle-Assisted Chemical Etching for Photovoltaic Application

Authors: B. Bouktif, M. Gaidi, M. Benrabha

Abstract:

Metal-nanoparticle-assisted chemical etching is a well-developed wet etching method for producing uniform semiconductor nanostructures (nanowires) from a patterned metallic film on a crystalline silicon surface. The metal film facilitates etching in an HF and H2O2 solution and produces silicon nanowires (SiNWs). The creation of different SiNW morphologies by varying the etching time, and its effects on optical and optoelectronic properties, was investigated. The combined effect of the formed SiNWs and a stain etching treatment in an acid (HF/HNO3/H2O) solution on the surface morphology of Si wafers, as well as on the optical and optoelectronic properties, is presented in this paper.

Keywords: semiconductor nanostructure, chemical etching, optoelectronic property, silicon surface

Procedia PDF Downloads 383
10101 Modelling and Assessment of an Off-Grid Biogas Powered Mini-Scale Trigeneration Plant with Prioritized Loads Supported by Photovoltaic and Thermal Panels

Authors: Lorenzo Petrucci

Abstract:

This paper is intended to give insight into the potential use of small-scale off-grid trigeneration systems powered by biogas generated on a dairy farm. The off-grid plant under analysis comprises a dual-fuel genset as well as electrical and thermal storage equipment and an adsorption machine. The loads are the different apparatus used in the dairy farm, a household where the workers live, and a small electric vehicle whose batteries can also be used as a power source in case of emergency. The insertion of an adsorption machine in the plant is mainly justified by the abundance of thermal energy and the simultaneous high cooling demand associated with the milk-chilling process. In the evaluated operational scenario, our research highlights the importance of prioritizing specific small loads which cannot sustain an interrupted supply of power over time. As a consequence, a photovoltaic and thermal (PVT) panel is included in the plant and is tasked with providing energy independently of potentially disruptive events such as engine malfunctioning or scarce and unstable supplies of fuel. To manage the plant efficiently, an energy dispatch strategy is created to control the flow of energy between the power sources and the thermal and electric storage. In this article, we elaborate on models of the equipment, and from these models we extract parameters useful for building load-dependent profiles of the prime movers and storage efficiencies. We show that under reasonable assumptions the analysis provides a sensible estimate of the generated energy. The simulations indicate that a diesel generator sized to a value 25% higher than the total electrical peak demand operates 65% of the time below the minimum acceptable load threshold. To circumvent such a critical operating mode, dump loads are added through the activation and deactivation of small resistors. In this way, the excess electric energy generated can be transformed into useful heat. 
The combination of PVT and electrical storage to support the prioritized loads in an emergency scenario is evaluated on two different days of the year, those with the lowest and highest irradiation values, respectively. The results show that the renewable energy component of the plant can successfully sustain the prioritized loads; only on the day with very low irradiation levels does it also need the support of the EVs' battery. Finally, we show that the adsorption machine can reduce the ice builder and air conditioning energy consumption by 40%.
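The dump-load logic described above can be sketched as a simple dispatch rule: when electrical demand would leave the genset below its minimum acceptable load, resistors are switched in until the threshold is met and the dumped power is recovered as heat. The 40% minimum-load fraction and 1 kW resistor step below are assumptions for illustration, not the plant's actual parameters:

```python
def dump_load_dispatch(demand_kw, rated_kw, min_load_frac=0.4, step_kw=1.0):
    """Return the resistive dump load (kW) to switch in so that the genset
    runs at or above its minimum-load threshold. min_load_frac and step_kw
    are illustrative assumptions, not values from the paper."""
    threshold = min_load_frac * rated_kw
    dump = 0.0
    while demand_kw + dump < threshold:
        dump += step_kw  # activate one more small resistor
    return dump

print(dump_load_dispatch(30.0, 100.0))  # prints 10.0
```

A fuller model would also route the dumped power into the thermal storage balance rather than discarding it.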

Keywords: hybrid power plants, mathematical modeling, off-grid plants, renewable energy, trigeneration

Procedia PDF Downloads 170
10100 Restriction on the Freedom of Economic Activity in the Polish Energy Law

Authors: Zofia Romanowska

Abstract:

Recently there have been significant changes in the Polish energy market. Due to the government's decision to strengthen energy security, as well as to strengthen the implementation of the European Union's common energy policy, the Polish energy market has been undergoing significant change. In the face of this, it is necessary to ask in which direction the Polish energy rationing sector is going, how far the powers of the state extend, and whether the real regulator of energy projects in Poland is not in fact the European Union itself. In order to determine the role of the state as a regulator of the energy market, the study analyses the basic instruments of regulation, i.e. the licenses, permits and permissions to conduct various activities related to the energy market, such as the production and sale of liquid fuels or concessions for trade in natural gas. Bearing in mind that Polish law is part of the broadly understood European Union energy policy, legal solutions in neighbouring countries are also researched, including those adopted in Germany, a country which plays a key role in shaping EU policies. The correct interpretation of the new legislation modifying the current wording of the Energy Law Act, such as obliging entities engaged in the production of and trade in liquid fuels (including abroad) to meet a number of additional licensing requirements and to provide information about the conducted business to the state, plays a key role in the study. Going beyond the legal framework of energy rationing, the study also includes a legal and economic analysis of public and private goods within the energy sector and delves into the subject of effective remedies. The research draws attention to the relationship between the progressive rationing introduced by the legislator and the reorganization rules prevailing on the Polish energy market, which has led to the introduction of greater transparency in the sector. 
The studies support the initial conclusion that currently, despite the proclaimed liberalization of the oil and gas market and the opening of the market to a greater number of entities as a result of the newly implemented changes, the process of issuing concessions and controlling their execution will be tightened, guaranteeing entities greater security of energy supply. In the long term, the effect of the introduced legislative solutions will be a reduction in the number of entities on the energy market. The companies that meet the requirements imposed by the new regulation will, in order to preserve the profitability of their business, in turn increase prices for their services, which will have an impact on consumers' budgets.

Keywords: license, energy law, energy market, public goods, regulator

Procedia PDF Downloads 242
10099 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems

Authors: Alejandro Adorjan

Abstract:

Universities are motivated to understand the factors contributing to low retention of engineering undergraduates. While the number of precollege students entering engineering increases, the number of engineering graduates continues to decrease, and attrition rates for engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate engineering science programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out of engineering cite Calculus specifically as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. In order to develop several competencies in our Software Engineering students, Calculus 1 at Universidad ORT Uruguay focuses on developing competencies such as capacity for synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE curricula). Every semester we try to reflect on our practice and to answer the following research question: what kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week, non-compulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were enrolled in C1. Traditional lectures in this bridge course led students to merely transcribe notes from the blackboard. Last semester we proposed a hands-on lab course using Geogebra (interactive geometry and computer algebra system (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and developed most of the tasks and topics in Geogebra. As a result of this approach, several pros and cons were found. 
The weekly hours of mathematics were excessive for students and, as the course was non-compulsory, attendance decreased with time. Nevertheless, the activity succeeded in improving final test results, and most students expressed pleasure in working with this methodology. This technology-oriented teaching approach strengthens the math competencies students need for Calculus 1 and improves student performance, engagement, and self-confidence. It is important as teachers to reflect on our practice, including innovative proposals with the objective of engaging students, increasing retention and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer to validate these preliminary results.

Keywords: calculus, engineering education, PreCalculus, Summer Program

Procedia PDF Downloads 287
10098 Quantification of Site Nonlinearity Based on HHT Analysis of Seismic Recordings

Authors: Ruichong Zhang

Abstract:

This study proposes a recording-based approach to characterize and quantify earthquake-induced site nonlinearity, exemplified as soil nonlinearity and/or liquefaction. Alternative to Fourier spectral analysis (FSA), the paper introduces time-frequency analysis of earthquake ground motion recordings with the aid of so-called Hilbert-Huang transform (HHT), and offers justification for the HHT in addressing the nonlinear features shown in the recordings. With the use of the 2001 Nisqually earthquake recordings, this study shows that the proposed approach is effective in characterizing site nonlinearity and quantifying the influences in seismic ground responses.
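The Hilbert step of the HHT can be sketched as follows; a full HHT would first decompose the recording into intrinsic mode functions (IMFs) via empirical mode decomposition, which is omitted here. The analytic signal is built by the standard FFT construction (zero the negative frequencies, double the positive ones), and the instantaneous frequency is the derivative of the unwrapped phase:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: keep DC, double positive frequencies,
    zero out negative frequencies (equivalent to scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) of a mono-component signal; in the full
    HHT this is applied to each IMF from empirical mode decomposition."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2 * np.pi)
```

For a pure 50 Hz sine sampled at 1 kHz, the instantaneous frequency stays near 50 Hz away from the record edges; site nonlinearity would appear as time variation of such frequency content in the recordings.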

Keywords: site nonlinearity, site amplification, site damping, Hilbert-Huang Transform (HHT), liquefaction, 2001 Nisqually Earthquake

Procedia PDF Downloads 481
10097 The Effects of SCMs on the Mechanical Properties and Durability of Fibre Cement Plates

Authors: Ceren Ince, Berkay Zafer Erdem, Shahram Derogar, Nabi Yuzer

Abstract:

Fibre cement plates, often used in construction, are generally made using quartz as an inert material, cement as a binder and cellulose as a fibre. This paper first investigates the mechanical properties and durability of fibre cement plates when quartz is both partly and fully replaced with diatomite. Diatomite not only has a lower density than quartz but also high pozzolanic activity. The main objective of this paper is to investigate the effects of supplementary cementing materials (SCMs) on the short- and long-term mechanical properties and durability characteristics of fibre cement plates prepared using diatomite. Supplementary cementing materials such as ground granulated blast furnace slag (GGBS) and fly ash (FA) are used in this study, at 10, 20, 30 and 40% as partial replacements for cement. Short- and long-term mechanical properties such as compressive and flexural strength, as well as capillary absorption, sorptivity characteristics and mass, were investigated. Consistency and setting time at each SCM replacement level were also recorded. The effects of supplementary cementing materials on the carbonation and sulphate resistance of the fibre cement plates were then examined. The results show, first of all, that the use of diatomite as a full or partial replacement for quartz resulted in a systematic decrease in the total mass of the fibre cement plates. The reduction in mass was largely due to the lower density and finer particle size of diatomite compared to quartz. The use of diatomite not only reduced the mass of the plates but also significantly increased their compressive strength as a result of its high pozzolanic activity. Increasing replacement levels of both GGBS and FA resulted in a systematic decrease in short-term compressive strength. This was expected, as the total heat of hydration is much lower for GGBS and FA than for cement. 
Long-term results, however, indicated that the compressive strength of fibre cement plates prepared using both GGBS and FA increases with time, and hence the compressive strength of plates prepared using SCMs is either equivalent to or greater than that of plates prepared using cement alone. The durability characteristics of fibre cement plates prepared using SCMs were significantly enhanced. Measurements of capillary absorption and sorptivity also indicated that the plates prepared using SCMs have much lower permeability than plates prepared using cement alone. Much higher resistance to carbonation and sulphate attack was observed for plates prepared using SCMs. The results presented in this paper show that the use of SCMs not only supports the production of more sustainable construction materials but also enhances the mechanical properties and durability characteristics of fibre cement plates.

Keywords: diatomite, fibre, strength, supplementary cementing material

Procedia PDF Downloads 325
10096 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location parameters, respectively, based on the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data are normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g. occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in detecting inferior products in Phase II. For more efficient application of control charts, estimators robust to contaminations that may exist in Phase I are required. In the current study, we present a simple approach to constructing robust Xbar control charts, using the average distance to the median, the Qn estimator of scale and the M-estimator of scale with logistic psi-function to estimate the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator and M-estimators of location with Huber and logistic psi-functions to estimate the process location parameter. 
The Phase I efficiency of the proposed estimators and the Phase II performance of Xbar charts constructed from them are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. It is found that the robust estimators yield parameter estimates with higher efficiency against all types of contamination, and that Xbar charts constructed using robust estimators have higher power in detecting disturbances compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups, and employing different combinations of dispersion and location estimators on subgroups and individual observations, are found to improve the performance of Xbar charts.
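As an illustration of the general idea, the sketch below builds robust Phase I control limits from subgroup medians (location) and a MAD-based scale estimate, used here as a simple stand-in for the Qn, Harrell-Davis and Hodges-Lehmann estimators named above; the 1.4826 factor rescales the MAD to estimate sigma under normality:

```python
import numpy as np

def robust_xbar_limits(phase1, L=3.0):
    """Robust Phase I limits for an Xbar chart. Location: median of subgroup
    medians. Scale: median of subgroup MADs, rescaled by 1.4826 (consistency
    factor under normality). A stand-in sketch, not the paper's estimators."""
    phase1 = np.asarray(phase1)  # shape: (num subgroups, subgroup size)
    n = phase1.shape[1]
    center = np.median(np.median(phase1, axis=1))
    mads = np.median(np.abs(phase1 - np.median(phase1, axis=1, keepdims=True)),
                     axis=1)
    sigma = 1.4826 * np.median(mads)
    half_width = L * sigma / np.sqrt(n)
    return center - half_width, center, center + half_width
```

Because both statistics are median-based, a gross outlier in one Phase I subgroup barely moves the limits, whereas the conventional mean and pooled standard deviation would be inflated by it.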

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 186
10095 The Effect of Dark energy on Amplitude of Gravitational Waves

Authors: Jafar Khodagholizadeh

Abstract:

In this talk, we study the tensor mode perturbation equation in the presence of a nonzero $\Lambda$ as dark energy, whose dynamical nature depends on the Hubble parameter $H$ and/or its time derivative. Dark energy, according to the total vacuum contribution, has little effect during the radiation-dominated era, but it reduces the squared amplitude of gravitational waves (GWs) by up to $60\%$ for wavelengths that enter the horizon during the matter-dominated era. Moreover, the observational bounds on dark energy models, such as the running vacuum model (RVM), the generalized running vacuum model (GRVM), and the generalized running vacuum subcase (GRVS), are effective in reducing the GWs' amplitude. Although this effect is smaller for wavelengths that enter the horizon at later times, the reduction is stable and permanent.
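For context (the abstract does not spell this out), tensor modes $h_k$ in an FRW background obey the standard wave equation, with the dark-energy model entering through the Hubble parameter $H$ and the scale factor $a$:

```latex
\ddot{h}_k + 3H\,\dot{h}_k + \frac{k^2}{a^2}\,h_k = 0
```

The Hubble friction term $3H\dot{h}_k$ is where a running vacuum modifies the damping of modes after horizon entry, which is the mechanism behind the amplitude reduction discussed above.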

Keywords: gravitational waves, dark energy, GW's amplitude, all stage universe

Procedia PDF Downloads 149
10094 Effect of Nano-SiO2 Solution on the Strength Characteristics of Kaolinite

Authors: Reza Ziaie Moayed, Hamidreza Rahmani

Abstract:

Today, with developments in science and technology, there is great potential for the use of nanomaterials in various fields of geotechnical engineering, such as soil stabilization. This study investigates the effect of a nano-SiO2 solution on the unconfined compressive strength and Young's elastic modulus of kaolinite. For this purpose, nano-SiO2 was mixed with kaolinite at five different contents: 1, 2, 3, 4 and 5% by weight of the dry soil, and a series of unconfined compression tests with a curing time of one day was carried out. Analysis of the test results shows that stabilization of kaolinite with nano-SiO2 solution can effectively improve the unconfined compressive strength of the modified soil, up to 1.43 times that of the pure soil.

Keywords: kaolinite, Nano-SiO2, stabilization, unconfined compression test, Young's modulus

Procedia PDF Downloads 386
10093 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach

Authors: Jared Beard, Ali Baheri

Abstract:

As autonomous systems become more prominent in society, ensuring their safe operation becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, with high-dimensional state and action spaces. This gives rise to two problems: analytic solutions may not be possible, and in simulation-based approaches, searching the entirety of the problem space could be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system. Its premise is that a learned model can be used to help find new failure scenarios, making better use of simulations. Despite these strengths, AST struggles to find particularly sparse failures and can be inclined to find solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used to alleviate this overuse of information. That is, information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using “knows what it knows” (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes. 
Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, demonstrating the utility of KWIK learners in an AST framework. The next step is implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time step, with higher fidelity effectively allowing more responsive closed-loop feedback. Results will compare the single-KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, the distinct failure modes found, and the relative effect of learning after a number of trials.

Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification

Procedia PDF Downloads 151
10092 Quantifying Fatigue during Periods of Intensified Competition in Professional Ice Hockey Players: Magnitude of Fatigue in Selected Markers

Authors: Eoin Kirwan, Christopher Nulty, Declan Browne

Abstract:

The professional ice hockey season consists of approximately 60 regular season games, with periods of fixture congestion occurring several times in an average season. These periods of congestion provide limited time for recovery, exposing the athletes to the risk of competing whilst not fully recovered. Although a body of research is growing with respect to monitoring fatigue, particularly during periods of congested fixtures in team sports such as rugby and soccer, it has received little to no attention thus far in ice hockey athletes. Consequently, there is limited knowledge of monitoring tools that might effectively detect a fatigue response, or of the magnitude of fatigue that can accumulate when recovery is limited by competitive fixtures. The benefit of quantifying and establishing fatigue status is the ability to optimise training and to provide pertinent information on player health, injury risk, availability and readiness. Some commonly used methods to assess the fatigue and recovery status of athletes include perceived fatigue and wellbeing questionnaires, tests of muscular force and ratings of perceived exertion (RPE). These measures are widely used in popular team sports such as soccer and rugby and show promise as assessments of fatigue and recovery status for ice hockey athletes. As part of a larger study, this study explored the magnitude of changes in adductor muscle strength after game play and throughout a period of fixture congestion, and examined the relationship of internal game load and perceived wellbeing with adductor muscle strength. Methods: 8 professional ice hockey players from a British Elite League club volunteered to participate (age = 29.3 ± 2.49 years, height = 186.15 ± 6.75 cm, body mass = 90.85 ± 8.64 kg). Prior to and after competitive games, each player performed trials of the adductor squeeze test at 0˚ hip flexion, with the lead investigator using hand-held dynamometry. 
Rating of perceived exertion was recorded for each game, and from data on total ice time, individual session RPE was calculated. After each game, players completed a 5-point questionnaire to assess perceived wellbeing. Data were collected from six competitive games, one practice, and 36 hours post the final game, over a 10-day period. Results: Pending final data collection in February. Conclusions: Pending final data collection in February.
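The session RPE calculation mentioned above is conventionally the Foster method: the product of the CR-10 exertion rating and the session duration in minutes. A minimal sketch under that assumption (the example values are illustrative, not the study's data):

```python
def session_rpe_load(rpe, minutes):
    """Foster session-RPE training load: CR-10 exertion rating x duration.
    Here 'minutes' would be the player's total ice time for the game."""
    if not 0 <= rpe <= 10:
        raise ValueError("CR-10 RPE must be between 0 and 10")
    return rpe * minutes

print(session_rpe_load(7, 22))  # prints 154
```

Using ice time rather than total game duration scales the internal load to each player's actual exposure.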

Keywords: congested fixtures, fatigue monitoring, ice hockey, readiness

Procedia PDF Downloads 137
10091 Affirming Students’ Attention and Perceptions on Prezi Presentation via Eye Tracking System

Authors: Mona Masood, Norshazlina Shaik Othman

Abstract:

The purpose of this study was to investigate graduate students' visual attention and perceptions of a Prezi presentation. Ten postgraduate master's students were shown a Prezi presentation at the Centre for Instructional Technology and Multimedia, Universiti Sains Malaysia (USM). Eye movement indicators such as dwell time, average fixation on the areas of interest, heat maps and focus maps were extracted to indicate the students' visual attention. Descriptive statistics were employed to analyze the students' perception of the Prezi presentation in terms of text, slide design, images, layout and overall presentation. The results revealed that the students paid most attention to the text, followed by the images and subheadings presented through the Prezi presentation.
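Dwell time per area of interest (AOI) is conventionally the sum of fixation durations landing in that AOI; a minimal sketch of the aggregation (the labels and durations below are illustrative, not the study's data):

```python
def dwell_times(fixations):
    """Total dwell time per AOI from (aoi_label, fixation_duration_ms)
    records, as exported by typical eye-tracking software."""
    totals = {}
    for aoi, dur in fixations:
        totals[aoi] = totals.get(aoi, 0) + dur
    return totals

fixations = [("text", 220), ("image", 180), ("text", 300), ("subheading", 90)]
print(dwell_times(fixations))  # prints {'text': 520, 'image': 180, 'subheading': 90}
```

Ranking AOIs by these totals is what supports statements such as "students paid most attention to the text, followed by the images".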

Keywords: eye tracking, Prezi, visual attention, visual perception

Procedia PDF Downloads 435