Search results for: fundamental frequency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1729

469 Predictive Relationship between Motivation Strategies and Musical Creativity of Secondary School Music Students

Authors: Lucy Lugo Mawang

Abstract:

Educational psychologists have highlighted the significance of creativity in education. Likewise, a fundamental objective of music education concerns the development of students’ musical creativity potential. The purpose of this study was to determine the relationship between motivation strategies and musical creativity and to establish the equation for predicting musical creativity. The study used purposive sampling and a census to select 201 fourth-form music students (139 females / 62 males), mainly from public secondary schools in Kenya. The mean age of participants was 17.24 years (SD = .78). Framed upon self-determination theory and the dichotomous model of achievement motivation, the study adopted an ex post facto research design. A self-report measure, the Achievement Goal Questionnaire-Revised (AGQ-R), was used to collect data for the independent variable. Musical creativity was based on a creative music composition task and measured by the Consensual Musical Creativity Assessment Scale (CMCAS). Data were collected in two separate sessions within an interval of one month. The questionnaire was administered in the first session, lasting approximately 20 minutes. The second session was for notation of participants’ creative compositions. The results indicated a positive correlation, r(199) = .39, p < .01, between musical creativity and intrinsic music motivation. Conversely, a negative correlation, r(199) = -.19, p < .01, was observed between musical creativity and extrinsic music motivation. The equation for predicting musical creativity from music motivation strategies was significant, F(2, 198) = 20.8, p < .01, with R² = .17; motivation strategies accounted for approximately 17% of the variance in participants’ musical creativity. Intrinsic music motivation had the highest significant predictive value (β = .38, p < .01) on musical creativity. In the exploratory analysis, a significant mean difference in musical creativity, t(118) = 4.59, p < .01, was observed between intrinsically and extrinsically motivated participants, in favour of the intrinsically motivated. Further, a significant gender difference in musical creativity, t(93.47) = 4.31, p < .01, was observed, with male participants scoring higher than females. However, there was no significant difference in participants’ musical creativity based on age. The study recommended that music educators should strive to enhance intrinsic music motivation among students. Specifically, schools should create conducive environments and have interventions for the development of intrinsic music motivation, since it is the most facilitative motivation strategy in predicting musical creativity.
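For readers who want to see how a prediction equation of this kind is typically estimated, the sketch below fits a two-predictor linear regression with statsmodels on simulated data; the variable names, simulated scores and assumed effect sizes are illustrative assumptions, not the study's data or results.

```python
# Illustrative only: regressing a creativity score on intrinsic and extrinsic
# motivation with statsmodels. Data are simulated, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 201                                        # sample size reported in the abstract
intrinsic = rng.normal(0, 1, n)
extrinsic = rng.normal(0, 1, n)
creativity = 0.38 * intrinsic - 0.10 * extrinsic + rng.normal(0, 1, n)  # assumed effects

X = sm.add_constant(np.column_stack([intrinsic, extrinsic]))
model = sm.OLS(creativity, X).fit()
print(model.summary())                         # reports F, R-squared and beta coefficients
```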

Keywords: extrinsic music motivation, intrinsic music motivation, musical creativity, music composition

Procedia PDF Downloads 133
468 Evaluation of Methods for Simultaneous Extraction and Purification of Fungal and Bacterial DNA from Vaginal Swabs

Authors: Vanessa De Carvalho, Chad MacPherson, Julien Tremblay, Julie Champagne, Stephanie-Anne Girard

Abstract:

Background: The interactions between bacteria and fungi in the human vaginal microbiome are fundamental to the concepts of health and disease. The means by which the microbiota and mycobiota interact are still poorly understood, and further studies are necessary to properly characterize this complex ecosystem. The aim of this study was to select a DNA extraction method capable of recovering high-quality fungal and bacterial DNA from a single vaginal swab. Methods: 11 female volunteers (≥ 20 to < 55 years old) self-collected vaginal swabs in triplicate. Three commercial extraction kits, the Masterpure Yeast Purification kit (Epicenter), the PureLink™ Microbiome DNA Purification kit (Invitrogen), and the Quick-DNA™ Fecal/Soil Microbe Miniprep kit (Zymo), were evaluated for their ability to recover fungal and bacterial DNA simultaneously. The extraction kits were compared on the basis of recovery, yield, purity, and the community richness of the bacterial (16S rRNA, V3-V4 region) and fungal (ITS1) microbiota composition by Illumina MiSeq amplicon sequencing. Results: Recovery of bacterial DNA was achieved with all three kits, while fungal DNA was only consistently recovered, in both yield and purity, with the Masterpure Yeast Purification kit. Overall, all kits displayed similar microbiota profiles for the top 20 OTUs; however, the Quick-DNA™ Fecal/Soil Microbe Miniprep kit (Zymo) showed greater species richness than the other two kits. Conclusion: In the present study, the Masterpure Yeast Purification kit proved to be a good candidate for simultaneous purification of high-quality fungal and bacterial DNA. These findings have potential benefits that could be applied in future vaginal microbiome research. While the use of a single extraction method would lessen the burden of multiple swab sampling, decrease laboratory workload and offset the costs associated with multiple DNA extractions, thoughtful consideration must be given to selecting an extraction kit depending on the desired downstream application.

Keywords: bacterial vaginosis, DNA extraction, microbiota, mycobiota, vagina, vulvovaginal candidiasis, women’s health

Procedia PDF Downloads 180
467 The Effect of the Contributory Pension Scheme on Employees’ Performance

Authors: Oladipo Jimoh Ayanda, Fashagba Mathew Olasehinde

Abstract:

Pension is a post-retirement benefit paid to employees after retirement to cushion the effects of severance from monthly emoluments. It serves the dual purpose of providing financial succour to retired employees as well as motivating employees currently in service to greater performance on duty. However, the scheme, as operated in Nigeria, was prone to pitfalls such as delayed and irregular payments, inadequate budgetary provisions, and employee suffering and deaths arising from the rigors of verification exercises, among others. This necessitated the replacement of the old scheme with the contributory pension scheme through an enabling law in 2004. The implementation of the new scheme has its own challenges, especially in connection with administration. These challenges pose the fundamental problem of establishing a nexus between pension benefits and work performance, which represents the focus of the study. The study objective was to determine the effect of the contributory pension scheme on employees’ performance. The study population consisted of National Universities Commission-recognized public and private universities in South West Nigeria. A multi-stage sampling method involving stratified sampling and systematic sampling was used to select 359 respondents, while data were collected through questionnaire administration. The procedure for analyzing the data included descriptive statistics, a normal distribution test and cross-tabulation (gamma coefficient). The findings of the study showed that the existence of the scheme positively enhances employees’ performance, as indicated by a normal distribution test with a Z-score of 10.169, which exceeds the critical value of 1.96 at the 0.05 level. The study concluded that the scope for enhancing employees’ current job performance can be quite elastic if future retirement benefits are guaranteed through proper and efficient administration and management of the contributory pension scheme. The study recommended that certain factors, such as employers’ commitment, which account for different levels of confidence between public and private universities, should be looked into in order to improve confidence across the board, while the provisions of the scheme as they affect the PFAs should be properly monitored to ensure compliance.

Keywords: pension, retirement, performance, employees, benefit

Procedia PDF Downloads 304
466 Two-Dimensional Hardy-Type Inequalities on Time Scales via the Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities lie at the core of mathematical analysis and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The classical monograph by Hardy, Littlewood and Pólya was the first significant work devoted to inequalities; it presents fundamental ideas, results and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from a weighted Sobolev space to a weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved through differential operators. The Hardy inequality has been one of the tools used to study solutions of differential equations, and dynamic inequalities of Hardy and Copson type have been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results have appeared involving Copson and Hardy inequalities on time scales in order to obtain new special versions of them. A time scale is defined as an arbitrary nonempty closed subset of the real numbers. Time-scale versions of these inequalities have received a lot of attention and have become a major field in both pure and applied mathematics, with many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on double integrals in order to obtain new time-scale inequalities of Copson type driven by the Steklov operator; these will be applied in the solution of the Cauchy problem for the wave equation. The proofs are carried out by introducing restrictions on the operator in several cases. In addition, the inequalities are obtained using concepts from the time-scale setting, such as time-scale calculus, Fubini's theorem and Hölder's inequality.
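For context, the classical integral inequality of Hardy referred to above can be stated as follows; this is the standard continuous formulation, given here only as background to the time-scale generalizations discussed in the abstract.

```latex
% Classical Hardy inequality (continuous form), for p > 1 and f >= 0 measurable:
\int_{0}^{\infty}\left(\frac{1}{x}\int_{0}^{x} f(t)\,dt\right)^{p} dx
\;\le\; \left(\frac{p}{p-1}\right)^{p}\int_{0}^{\infty} f(x)^{p}\,dx .
```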

Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator

Procedia PDF Downloads 54
465 Muslims in Diaspora Negotiating Islam through Muslim Public Sphere and the Role of Media

Authors: Sabah Khan

Abstract:

The idea of a universal Islam tends to exaggerate the extent of homogeneity in Islamic beliefs and practices across Muslim communities. In the age of migration, various Muslim communities are in diaspora. The immediate questions that follow are: what happens to Islam in diaspora, and how does Islam get represented in new forms? Such pertinent questions need to be dealt with. This paper draws on the idea of religious transnationalism, primarily transnational Islam. There are multiple ways to conceptualize transnational phenomena with reference to Islam: in terms of the flow of people, transnational organizations and networks, Ummah-oriented solidarity, and the new Muslim public sphere. This paper deals specifically with the new Muslim public sphere, which refers to the space and networks enabled by new media and communication technologies, whereby Muslim identity and Islamic normativity are rehearsed and debated by people in different locales. A new sense of public is emerging across Muslim communities, which needs to be contextualized. This paper uses both primary and secondary data: primary data were elicited through content analysis of audio-visual material on social media, and secondary sources of information range from books and articles to journals. The basic aim of the paper is to focus on the emerging Muslim public sphere and the role of media in expanding the public spheres of Islam. It also explores how Muslims in diaspora negotiate Islam and Islamic practices through media and the new Muslim public sphere. The paper weaves in discussions, firstly, of the re-intellectualization of Islamic discourse in the public sphere, in other words, how Muslims have come to reimagine their collective identity and critically examine fundamental principles and authoritative tradition; and secondly, of the alternative forms of Islam emerging among young Muslims in diaspora, in other words, how young Muslims search for unorthodox ways and media for religious articulation, including music, clothing and TV. This includes the transmission and distribution of Islam in diaspora in terms of an emerging ‘media Islam’ or ‘soundbite Islam’. The new Muslim public sphere has offered an arena for a large number of participants to critically engage with Islam, which leads not only to a critical engagement with traditional forms of Islamic authority but also to emerging alternative forms of Islam and Islamic practices.

Keywords: Islam, media, Muslims, public sphere

Procedia PDF Downloads 244
464 Derivation of Bathymetry from High-Resolution Satellite Images: Comparison of Empirical Methods through Geographical Error Analysis

Authors: Anusha P. Wijesundara, Dulap I. Rathnayake, Nihal D. Perera

Abstract:

Bathymetric information is of fundamental importance to coastal and marine planning and management, nautical navigation, and scientific studies of marine environments. Satellite-derived bathymetry provides detailed information in areas where conventional sounding data are lacking and conventional surveys are inaccessible. Two empirical approaches, a log-linear bathymetric inversion model and a non-linear bathymetric inversion model, are applied to derive bathymetry from high-resolution multispectral satellite imagery. This study compares the two approaches by means of geographical error analysis for the site of Kankesanturai using WorldView-2 satellite imagery. The parameters of the non-linear inversion model were calibrated with the Levenberg-Marquardt method, and multiple linear regression was applied to calibrate the log-linear inversion model. To calibrate both models, Single Beam Echo Sounding (SBES) data from the study area were used as reference points. Residuals were calculated as the difference between the derived depth values and the validation echo-sounder bathymetry data, and the geographical distribution of model residuals was mapped. Spatial autocorrelation of the residuals was calculated to compare the performance of the bathymetric models, and the results show the geographic errors for both models. A spatial error model was constructed from the initial bathymetry estimates and the estimates of autocorrelation. This spatial error model is used to generate more reliable estimates of bathymetry by quantifying the autocorrelation of model error and incorporating this into an improved regression model. The log-linear model (R² = 0.846) performs better than the non-linear model (R² = 0.692). Finally, the spatial error models improved the bathymetric estimates derived from the linear and non-linear models to R² = 0.854 and R² = 0.704, respectively. The Root Mean Square Error (RMSE) was calculated for all reference points in various depth ranges. The magnitude of the prediction error increases with depth for both the log-linear and the non-linear inversion models. The overall RMSE values for the log-linear and non-linear inversion models were ±1.532 m and ±2.089 m, respectively.
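As an illustration of how a log-linear inversion model can be calibrated against echo-sounder depths, the sketch below fits a Lyzenga-type log-linear form by least squares on synthetic reflectances. The band names, coefficients and data are assumptions; the paper's exact model formulation and calibration data may differ.

```python
# Illustrative calibration of a log-linear (Lyzenga-type) bathymetric model:
#   Z = a0 + a1*ln(R_blue) + a2*ln(R_green)
# using synthetic data in place of WorldView-2 reflectances and SBES depths.
import numpy as np

rng = np.random.default_rng(1)
n = 500
true_depth = rng.uniform(1, 20, n)                      # metres (synthetic "SBES" depths)
r_blue = 0.02 + 0.3 * np.exp(-0.12 * true_depth) + rng.normal(0, 0.002, n)
r_green = 0.02 + 0.3 * np.exp(-0.20 * true_depth) + rng.normal(0, 0.002, n)

# Design matrix for multiple linear regression on log-transformed reflectances.
X = np.column_stack([np.ones(n), np.log(r_blue), np.log(r_green)])
coeffs, *_ = np.linalg.lstsq(X, true_depth, rcond=None)

predicted = X @ coeffs
residuals = true_depth - predicted                      # mapped spatially in the paper
rmse = np.sqrt(np.mean(residuals**2))
print("coefficients:", coeffs, "RMSE (m):", round(rmse, 3))
```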

Keywords: log-linear model, multispectral, residuals, spatial error model

Procedia PDF Downloads 276
463 Digital Twin for University Campus: Workflow, Applications and Benefits

Authors: Frederico Fialho Teixeira, Islam Mashaly, Maryam Shafiei, Jurij Karlovsek

Abstract:

The ubiquity of data gathering and smart technologies, advancements in virtual technologies, and the development of the internet of things (IoT) have created urgent demands for frameworks and efficient workflows for data collection, visualisation, and analysis. Digital twins, at scales ranging from the city down to the building, allow data from different sources to be brought together to generate fundamental and illuminating insights for the management of current facilities and the lifecycle of amenities, as well as for improving the performance of current and future designs. Over the past two decades, there has been growing interest in digital twins and their applications at city and building scales. Most such studies look at the urban environment through a homogeneous or generalist lens and lack specificity regarding the particular characteristics or identities that define an urban university campus. Bridging this knowledge gap, this paper offers a framework for developing a digital twin for a university campus that, with some modifications, could provide insights for any large-scale digital twin setting, such as towns and cities. It showcases how currently unused data could be purposefully combined, interpolated and visualised to produce analysis-ready data (such as flood or energy simulations or functional and occupancy maps), highlighting the potential applications of such a framework for campus planning and policymaking. The research integrates campus-level data layers into one spatial information repository and casts light on critical data clusters for the digital twin at the campus level. The paper also seeks to raise insightful and directive questions on how a campus digital twin can be extrapolated to a city-scale digital twin. The outcomes of the paper thus inform future projects for the development of large-scale digital twins, as well as urban and architectural researchers, on potential applications of digital twins in future design, management, and sustainable planning, to predict problems, calculate risks, decrease management costs, and improve performance.

Keywords: digital twin, smart campus, framework, data collection, point cloud

Procedia PDF Downloads 53
462 Investigating Visual Statistical Learning during Aging Using the Eye-Tracking Method

Authors: Zahra Kazemi Saleh, Bénédicte Poulin-Charronnat, Annie Vinter

Abstract:

This study examines the effects of aging on visual statistical learning, using eye-tracking techniques to investigate this cognitive phenomenon. Visual statistical learning is a fundamental brain function that enables the automatic and implicit recognition, processing, and internalization of environmental patterns over time. Some previous research has suggested the robustness of this learning mechanism throughout the aging process, underscoring its importance in the context of education and rehabilitation for the elderly. The study included three distinct groups of participants: 21 young adults (Mage = 19.73), 20 young-old adults (Mage = 67.22), and 17 old-old adults (Mage = 79.34). Participants were exposed to a series of 12 arbitrary black shapes organized into 6 pairs, each with a different spatial configuration and orientation (horizontal, vertical, or oblique). These pairs were not explicitly revealed to the participants, who were instructed to passively observe 144 grids presented sequentially on the screen for a total duration of 7 min. In the subsequent test phase, participants performed a two-alternative forced-choice task in which they had to identify the most familiar pair in each of 48 trials, each consisting of a base pair and a non-base pair. Behavioral analysis using t-tests revealed notable findings. The mean score of the first group was significantly above chance, indicating the presence of visual statistical learning. Similarly, the second group also performed significantly above chance, confirming the persistence of visual statistical learning in young-old adults. Conversely, the third group, consisting of old-old adults, showed a mean score that was not significantly above chance. This lack of statistical learning in the old-old adult group suggests a decline in this cognitive ability with age. Preliminary eye-tracking results showed a decrease in the number and duration of fixations during the exposure phase for all groups. The main difference was that older participants fixated empty cells more often than younger participants, likely due to a decline in the ability to ignore irrelevant information, resulting in a decrease in statistical learning performance.
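A minimal sketch of the above-chance comparison used here: a one-sample t-test of two-alternative forced-choice accuracy against the 0.5 chance level, run on simulated scores. The group size is taken from the abstract, but the trial outcomes and the assumed 60% true accuracy are illustrative, not the study's data.

```python
# Illustrative one-sample t-test of 2AFC accuracy against the chance level of 0.5,
# using simulated proportions rather than the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials = 48
n_participants = 21                       # size of the young-adult group in the abstract
accuracy = rng.binomial(n_trials, 0.6, n_participants) / n_trials  # assumed 60% true accuracy

t_stat, p_value = stats.ttest_1samp(accuracy, popmean=0.5)
print(f"mean accuracy = {accuracy.mean():.3f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```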

Keywords: aging, eye tracking, implicit learning, visual statistical learning

Procedia PDF Downloads 59
461 Technology Enriched Classroom for Intercultural Competence Building through Films

Authors: Tamara Matevosyan

Abstract:

In this globalized world, intercultural communication is becoming essential for understanding communication among people, for developing an understanding of cultures, and for appreciating the opportunities and challenges that each culture presents. Moreover, it plays an important role in shaping a personality able to understand different behaviors in different cultures. Native speakers assimilate sociolinguistic knowledge in natural conditions, whereas this is a great problem for language learners; in this context, feature films reveal cultural peculiarities and involve students in real communication. A key goal of language learning today is the development of intercultural competence, since communicating with someone from a different cultural background can be exciting and scary, frustrating and enlightening. Intercultural competence is important in the FL learning classroom, and here feature films can serve as essential tools to develop this competence and overcome the intercultural gap that foreign students face. The current proposal attempts to reveal the correlation between a given culture and its language through feature films. To ensure qualified, well-organized and practical classes on intercultural communication for language learners, a number of methods connected with movie watching have been implemented. All the pre-watching, while-watching and post-watching methods and techniques are aimed at developing students’ communicative competence. The application of such activities as Climax, Role-play, Interactive Language and Daily Life helps to reveal and overcome mistakes of a cultural and pragmatic character. All the above-mentioned activities are directed at the assimilation of language vocabulary with special reference to the given culture. The study delves into the essence of culture as one of the core concepts of intercultural communication. Sometimes culture is not a priority in the process of language learning, which leads to further misunderstandings in real-life communication. The application of various methods and techniques with feature films aims at developing students’ cultural competence and their understanding of the norms and values of individual cultures. Thus, feature film activities enable learners to enlarge their knowledge of a particular culture and develop a fundamental insight into intercultural communication.

Keywords: climax, intercultural competence, interactive language, role-play

Procedia PDF Downloads 325
460 Review and Comparison of Iran's Sixteenth Topic of the Building Code with the Ranking System of the Water Sector to Improve the Criteria of the Sixteenth Topic

Authors: O. Fatemi

Abstract:

Considering the growing building construction industry in developing countries, the concept of sustainable development, and the importance of taking care of future generations, codifying a building scoring system based on environmental criteria has always been a subject for discussion. The existing systems cannot be used for all regions due to several reasons, including but not limited to the variety of regional variables. In this article, the most important global environmental scoring systems, LEED (Leadership in Energy and Environmental Design), BREEAM (Building Research Establishment Environmental Assessment Method), and CASBEE (Comprehensive Assessment System for Built Environment Efficiency), used in the USA, the UK, and Japan, respectively, are discussed and compared, with a special focus on CASBEE, with respect to credit assignment (weighting and scoring systems) as well as the sustainable development criteria of each system. Then, the converging and distinct fields of the foregoing systems are examined in relation to the National Iranian Building Code, and the credits common to these systems but not mentioned in the National Iranian Building Code are identified. These credits, which generally reflect well-known fundamental principles of sustainable development, may be considered as candidate options for an Iranian building environmental scoring system. It is suggested that one of the globally accepted systems be chosen, taking national priorities into account, in order to offer an effective method for the environmental scoring of buildings; a portion of the credits can then be added, removed, or re-weighted, and eventually a new scoring system with a new title can be developed for the country. Evidently, the building construction industry strongly affects the environment, the economy, efficiency, and the health of occupants, which, given the growing trend of cities and construction, makes such an environmental scoring system all the more necessary.

Keywords: scoring system, sustainability assessment, water efficiency, national Iranian building code

Procedia PDF Downloads 158
459 The Identification of Instructional Approach for Enhancing Competency of Autism, Attention Deficit Hyperactivity Disorder and Learning Disability Groups

Authors: P. Srisuruk, P. Narot

Abstract:

The purposes of this research were 1) to develop a curriculum and instructional approach that are suitable for children with autism, attention deficit hyperactivity disorder and learning disability, and to arrange the instructional approach so that it can be integrated into an inclusive classroom, and 2) to increase the competency of the children in these groups. The research process was to a) study related documents, b) arrange workshops to clarify fundamental issues in developing the core curriculum among the researchers and experts in curriculum development, c) arrange workshops to develop the curriculum and submit it to the experts for criticism and editing, d) implement the instructional approach to examine its effectiveness, e) select the schools to participate in the project and arrange training programs for teachers in the selected schools, and f) implement the instructional approach in the selected schools in different regions. The research results were 1) a core curriculum to enhance the competency of children with autism, attention deficit hyperactivity disorder and learning disability, to be used as a guideline for teachers and these groups of children in order to arrange classrooms in which students with special needs study with normal students, and 2) teaching and learning methods arranged for students with autism, attention deficit hyperactivity disorder and learning disability to study with normal students, which can be used as a framework for writing plans to help students with parallel problems by developing teaching materials as part of the instructional approach. However, the details of how to help the students with each skill or content area differ according to the developmental needs and the problems of individual students or groups of students. Furthermore, it was found that most of the target teachers could implement the instructional approach based on the guideline model developed by the research team, and schools in different regions did not differ much in their implementation. A strength of the developed instructional model is that teachers can construct a parallel lesson plan, so they did not feel that they had to do extra work. It was also shown that students in the regular classroom enjoyed studying with the developed instructional model as well.

Keywords: instructional approach, autism, attention deficit hyperactivity disorder, learning disability

Procedia PDF Downloads 313
458 2106 kA/cm² Peak Tunneling Current Density in GaN-Based Resonant Tunneling Diode with an Intrinsic Oscillation Frequency of ~260GHz at Room Temperature

Authors: Fang Liu, JunShuai Xue, JiaJia Yao, GuanLin Wu, ZuMao Li, XueYan Yang, HePeng Zhang, ZhiPeng Sun

Abstract:

The terahertz spectral range has been in great demand for the last two decades for many photonic and electronic applications, and the III-nitride resonant tunneling diode (RTD) is one of the promising candidates for portable and compact THz sources. A room-temperature microwave oscillator based on a GaN/AlN resonant tunneling diode is reported in this work. The devices, grown by plasma-assisted molecular-beam epitaxy on free-standing c-plane GaN substrates, exhibit highly repeatable and robust negative differential resistance (NDR) characteristics at room temperature. To improve the interface quality in the active region of the RTD, indium-surfactant-assisted growth is adopted to enhance the surface mobility of metal atoms on the growing film front. Thanks to the lowered valley current associated with the suppression of threading-dislocation scattering on the low-dislocation GaN substrate, a record-high peak current density of 2.1 MA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 1.2 is obtained, which is the best result reported for nitride-based RTDs to date when the peak current density and PVCR values are considered simultaneously. When biased within the NDR region, microwave oscillations are measured with a fundamental frequency of 0.31 GHz, yielding an output power of 5.37 µW. Impedance mismatch limits the output power and oscillation frequency described above. The measured intrinsic capacitance is only 30 fF. Using a small-signal equivalent-circuit model, the maximum intrinsic frequency of oscillation for these diodes is estimated to be ~260 GHz. This work demonstrates a microwave oscillator based on the resonant tunneling effect that can meet the demands of terahertz spectral devices and, more importantly, provides guidance for the fabrication of complex nitride terahertz and quantum-effect devices.
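For reference, a commonly used small-signal estimate of an RTD's maximum (resistive cutoff) oscillation frequency models the device as an NDR conductance in parallel with the intrinsic capacitance, in series with a parasitic resistance. This textbook expression is given only as background and is not necessarily the exact equivalent-circuit model the authors used for their ~260 GHz estimate.

```latex
% Resistive cutoff (maximum oscillation) frequency of a standard RTD small-signal model,
% with NDR conductance G_n, intrinsic capacitance C_n and series resistance R_s:
f_{\max} \;=\; \frac{1}{2\pi C_n}\sqrt{\frac{G_n}{R_s} - G_n^{2}},
\qquad G_n = \left|\frac{dI}{dV}\right|_{\mathrm{NDR}} .
```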

Keywords: GaN resonant tunneling diode, peak current density, microwave oscillation, intrinsic capacitance

Procedia PDF Downloads 109
457 Some Considerations about the Theory of Spatial-Motor Thinking Applied to a Traditional Fife Band in Brazil

Authors: Murilo G. Mendes

Abstract:

This text presents part of the results of a Ph.D. thesis that used John Baily's theory and method, as well as their ethnographic application, in the context of the fife flutes of the Banda Cabaçal dos Irmãos Aniceto in the state of Ceará, northeast Brazil. John Baily is a British ethnomusicologist dedicated to studying the relationships between music, musical gesture, and embodied cognition. His methodology became a useful tool for highlighting historical-social aspects present in the group's instrumental music. Remaining indigenous and illiterate, these musicians played and transmitted their music from generation to generation, for almost two hundred years, without any nomenclature or systematization of the fingering performed on the flute. In other words, their music, free from any theorization, is learned, felt, perceived, and processed directly through hearing and through the relationship between the instrument's motor patterns and its sound result. For this reason, Baily's assumptions became fundamental to the analysis process. As the author's methodology recommends, classes were held with the natives, providing technical musical learning and some important concepts. Then, transcriptions and analyses of musical aspects were made from patterns of movement on the instrument, incorporated through repetition and/or through the intrinsic facility of the instrument. As a result, it was discovered how the group reconciled its indigenous origins with the demands of the public authorities and the interests of the local financial elite from the mid-twentieth century onwards. The article is structured around the cultural context of the group, in which local historical and social aspects influence the group's social and musical practices. It then presents the methodological conceptions of John Baily and, finally, their application to the music of the Irmãos Aniceto. The conclusion points to the good results of identifying, through this methodology and analysis, convergences between discourse, historical-social factors, and musical text. Still, questions are raised about its application in other contexts.

Keywords: Banda Cabaçal dos Irmãos Aniceto, John Baily, pífano, spatial-motor thinking

Procedia PDF Downloads 108
456 Climate Change Adaptation in the U.S. Coastal Zone: Data, Policy, and Moving Away from Moral Hazard

Authors: Thomas Ruppert, Shana Jones, J. Scott Pippin

Abstract:

State and federal government agencies within the United States have recently invested substantial resources into studies of future flood risk conditions associated with climate change and sea-level rise. A review of numerous case studies has uncovered several key themes that speak to an overall incoherence within current flood risk assessment procedures in the U.S. context. First, there are substantial local differences in the quality of available information about basic infrastructure, particularly with regard to local stormwater features and essential facilities that are fundamental components of effective flood hazard planning and mitigation. Second, there can be substantial mismatch between regulatory Flood Insurance Rate Maps (FIRMs) as produced by the National Flood Insurance Program (NFIP) and other 'current condition' flood assessment approaches. This is of particular concern in areas where FIRMs already seem to underestimate extant flood risk, which can only be expected to become a greater concern if future FIRMs do not appropriately account for changing climate conditions. Moreover, while there are incentives within the NFIP’s Community Rating System (CRS) to develop enhanced assessments that include future flood risk projections from climate change, the incentive structures seem to have counterintuitive implications that would tend to promote moral hazard. In particular, a technical finding of higher future risk seems to make it easier for a community to qualify for flood insurance savings, with much of these prospective savings applied to individual properties that have the most physical risk of flooding. However, there is at least some case study evidence to indicate that recognition of these issues is prompting broader discussion about the need to move beyond FIRMs as a standalone local flood planning standard. The paper concludes with approaches for developing climate adaptation and flood resilience strategies in the U.S. that move away from the social welfare model being applied through NFIP and toward more of an informed risk approach that transfers much of the investment responsibility over to individual private property owners.

Keywords: climate change adaptation, flood risk, moral hazard, sea-level rise

Procedia PDF Downloads 82
455 Reading Strategy Instruction in Secondary Schools in China

Authors: Leijun Zhang

Abstract:

Reading literacy has become a powerful tool for academic success and an essential goal of education. The ability to read is not only fundamental for pupils’ academic success but also a prerequisite for successful participation in today’s vastly expanding multi-literate textual environment. It is also important to recognize that, in many educational settings, students are expected to learn a foreign/second language for successful participation in the increasingly globalized world. Therefore, it is crucial to help learners become skilled foreign-language readers. Research indicates that students’ reading comprehension can be significantly improved through explicit instruction in multiple reading strategies. Despite the wealth of research on how to enhance learners’ reading comprehension achievement by identifying an enormous range of reading strategies and techniques for assisting students in comprehending specific texts, relatively scattered studies have centered on whether these reading comprehension strategies and techniques are actually used in classrooms, especially in Chinese academic settings. Given the central role of ‘the teacher’ in reading instruction, this study investigates the degree of importance that EFL teachers attach to reading comprehension strategies and their classroom employment of those strategies in secondary schools in China. It also explores the efficiency of reading strategy instruction on pupils’ reading comprehension performance. As a mixed-methods study, it drew on data from a quantitative survey and interviews with seven teachers. The study revealed that the EFL teachers had positive attitudes toward the use of cognitive strategies despite their insufficient knowledge about, and limited attention to, metacognitive strategies and supporting strategies. Regarding the selection of reading strategies for instruction, the mandated curriculum and high-stakes examinations, text features and demands, teacher preparation programs and their own EFL reading experiences were the major criteria in their responses, while few teachers took learner needs into account in their choice of reading strategies. Although many teachers agreed on the efficiency of reading strategy instruction in developing students’ reading comprehension competence, three challenges were identified in their implementation of strategy instruction. The study provides insights into reading strategy instruction in EFL contexts and proposes implications for curriculum innovation, teacher professional development, and reading instruction research.

Keywords: reading comprehension strategies, EFL reading instruction, language teacher cognition, teacher education

Procedia PDF Downloads 70
454 Exergetic Optimization on Solid Oxide Fuel Cell Systems

Authors: George N. Prodromidis, Frank A. Coutelieris

Abstract:

Biogas can currently be considered an alternative option for electricity production, mainly due to its high energy content (as a hydrocarbon-rich source), its renewable status and its relatively low utilization cost. Solid Oxide Fuel Cell (SOFC) stacks convert a fuel’s chemical energy to electricity with high efficiency and offer significant advantages in fuel flexibility combined with lower emission rates, especially when utilizing biogas. Electricity production from biogas constitutes a composite problem that calls for an extensive parametric analysis of numerous dynamic variables. The main scope of the presented study is to propose a detailed thermodynamic model for the optimization of the operation of SOFC-based power plants, based on fundamental thermodynamics and on energy and exergy balances. This model, named THERMAS (THERmodynamic MAthematical Simulation model), mathematically simulates each individual process during electricity production for different case studies that represent real-life operational conditions. THERMAS also offers the opportunity to choose from a great variety of values for each operational parameter individually, thus allowing for studies within unexplored and experimentally impossible operational ranges. Finally, THERMAS innovatively incorporates a specific criterion, concluded from the extensive energy analysis, to identify the optimal scenario per simulated system in exergy terms. Several dynamic parameters as well as several biogas mixture compositions have therefore been taken into account to cover all possible cases. Through the optimization process in terms of an innovative OPF (OPtimization Factor) presented here, this research study reveals that systems supplied by low-methane fuels can be comparable to those supplied by pure methane. To conclude, such a simulation model indicates a perspective on the optimal design of a SOFC-stack-based system, in the direction of the commercialization of systems utilizing biogas.
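For background, the exergetic (second-law) efficiency that such an optimization typically evaluates can be written as the ratio of net electrical output to the chemical exergy supplied with the fuel. This is a standard textbook definition given for orientation only, not necessarily the exact OPF criterion implemented in THERMAS.

```latex
% Exergetic efficiency of a fuel-to-electricity system (standard definition),
% with net electrical power W_net, fuel mass flow m_fuel and specific chemical exergy ex_fuel:
\psi \;=\; \frac{\dot{W}_{\mathrm{net}}}{\dot{m}_{\mathrm{fuel}}\; ex^{\mathrm{ch}}_{\mathrm{fuel}}} .
```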

Keywords: biogas, exergy, efficiency, optimization

Procedia PDF Downloads 347
453 Building a Blockchain-based Internet of Things

Authors: Rob van den Dam

Abstract:

Today’s Internet of Things (IoT) comprises more than a billion intelligent devices, connected via wired/wireless communications. The expected proliferation of hundreds of billions more places us at the threshold of a transformation sweeping across the communications industry. Yet we found that the IoT architectures and solutions that currently work for billions of devices won’t necessarily scale to tomorrow’s hundreds of billions of devices, because of high costs, lack of privacy, lack of future-proofing, lack of functional value and broken business models. As the IoT scales exponentially, decentralized networks have the potential to reduce infrastructure and maintenance costs for manufacturers. Decentralization also promises increased robustness by removing single points of failure that could exist in traditional centralized networks. By shifting the power in the network from the center to the edges, devices gain greater autonomy and can become points of transactions and economic value creation for owners and users. To validate the underlying technology vision, IBM jointly developed with Samsung Electronics an autonomous, decentralized peer-to-peer proof-of-concept (PoC). The primary objective of this PoC was to establish a foundation on which to demonstrate several capabilities that are fundamental to building a decentralized IoT. Though many commercial systems in the future will exist as hybrid centralized-decentralized models, the PoC demonstrated a fully distributed proof. The PoC (a) validated the future vision for decentralized systems to extensively augment today’s centralized solutions, (b) demonstrated foundational IoT tasks without the use of centralized control, and (c) proved that empowered devices can engage autonomously in marketplace transactions. The PoC opens the door for the communications and electronics industry to further explore the challenges and opportunities of potential hybrid models that can address the complexity and variety of requirements posed by an internet that continues to scale. Contents: (a) the new approach for an IoT that will be secure and scalable, (b) the three foundational technologies that are key for the future IoT, (c) the related business models and user experiences, (d) how such an IoT will create an 'Economy of Things', (e) the role of users, devices, and industries in the IoT future, (f) the winners in the IoT economy.

Keywords: IoT, internet, wired, wireless

Procedia PDF Downloads 316
452 Exploring the In-Between: An Examination of the Contextual Factors That Impact How Young Children Come to Value and Use the Visual Arts in Their Learning and Lives

Authors: S. Probine

Abstract:

The visual arts have been shown to be a central means through which young children can communicate their ideas, reflect on experience, and construct new knowledge. Despite this, perceptions of the visual arts, and the degree to which they are valued within education, vary widely across political, educational, community and family contexts. These differing perceptions informed my doctoral research project, which explored the contextual factors that affect how young children come to value and use the visual arts in their lives and learning. The qualitative methodology of narrative inquiry, with the inclusion of arts-based methods, was most appropriate for this inquiry. Using a sociocultural framework, the stories collected were analysed through the sociocultural theories of Lev Vygotsky as well as the work of Urie Bronfenbrenner, together with postmodern theories about identity formation. The use of arts-based methods, such as teachers’ reflective art journals and the collection of images by child participants and their parents/caregivers, allowed the research participants to have a significant role in the research. Three early childhood settings at which the visual arts were deeply valued as a meaning-making device in children’s learning were purposively selected to be involved in the research. At each setting, the study found a unique and complex web of influences and interconnections, which shaped how children utilised the visual arts to mediate their thinking. Although the teachers’ practices at all three centres were influenced by sociocultural theories, each setting’s interpretation of these theories was unique and resulted in innovative interpretations of the role of the teacher in supporting visual arts learning. These practices had a significant impact on children’s experiences of the visual arts. For many of the children involved in this study, visual art was the primary means through which they learned. The children in this study used visual art to represent their experiences and relationships, to explore working theories and their interests (including those related to popular culture), to make sense of their own and other cultures, and to enrich their imaginative play. This research demonstrates that teachers have a fundamental role in fostering and disseminating the importance of the visual arts within their educational communities.

Keywords: arts-based methods, early childhood education, teacher's visual arts pedagogies, visual arts

Procedia PDF Downloads 121
451 Assessing the Channel Design of the Eco-Friendly ‘Falaj’ Water System in Meeting the Optimal Water Demand: A Case Study of Falaj Al-Khatmain, Sultanate of Oman

Authors: Omer Al-Kaabi, Ahmed Nasr, Abdullah Al-Ghafri, Mohammed Abdelfattah

Abstract:

The Falaj system, derived from natural water sources, is a man-made canal system designed to supply communities of farmers with water for domestic and agricultural purposes. For thousands of years, the Falaj has served communities by harnessing the force of gravity, and it persists as a vital water management system in numerous regions across the Sultanate of Oman. Remarkably, it predates the establishment of many fundamental hydraulic principles used today. Al-Khatmain Falaj, with its accessibility and a historical significance spanning over 2,000 years, was chosen as the focal point of this study. The research aimed to investigate the efficiency of Al-Khatmain Falaj in meeting specific water demands. The HEC-RAS model was utilized to visualize water flow dynamics within the Falaj channels, accompanied by graphical representations of the pertinent variables. The application of HEC-RAS made it possible to quantify different water flow scenarios within the channel, enabling a clear comparison with the demand of the catchment area. The cultivated land of Al-Khatmain covers 723,124 m² and contains 16,873 palm trees, representing 91% of the total area; the remaining 9% consists of mixed tree types, amounting to 3,920 trees. The study revealed that a total of 8,244 m³ is required to irrigate the cultivated land. Through rigorous analysis, the study showed that the Falaj system in Al-Khatmain operates with high efficiency, as the average water supply is 9,676.8 m³/day. Additionally, the channel, designed at 0.6 m width by 0.3 m height, efficiently conveys the optimal water supply, with an average flow depth of 0.21 m. The system also includes an overflow drainage channel to mitigate floods and prevent crop damage according to seasonal requirements. This research holds promise for examining diverse hydrological conditions and devising effective strategies to manage scenarios of both high and low flow rates.
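As a rough illustration of the supply-demand comparison above, the short sketch below applies the continuity relation v = Q/A to the quoted figures, assuming a rectangular cross-section and treating the reported irrigation demand as a daily figure. It is a back-of-the-envelope check, not a substitute for the HEC-RAS simulation.

```python
# Rough continuity check for the Al-Khatmain channel using the figures in the abstract.
# Assumes a rectangular cross-section and a steady daily-average flow.
SUPPLY_M3_PER_DAY = 9676.8      # reported average supply
DEMAND_M3_PER_DAY = 8244.0      # reported irrigation demand (assumed daily)
WIDTH_M = 0.6                   # channel width
FLOW_DEPTH_M = 0.21             # average flow depth reported from the model

q = SUPPLY_M3_PER_DAY / 86400.0            # discharge in m^3/s
area = WIDTH_M * FLOW_DEPTH_M              # flow area in m^2
velocity = q / area                        # mean velocity from continuity, v = Q/A

print(f"Q = {q:.3f} m^3/s, A = {area:.3f} m^2, v = {velocity:.2f} m/s")
print(f"Supply exceeds demand by {SUPPLY_M3_PER_DAY - DEMAND_M3_PER_DAY:.1f} m^3/day")
```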

Keywords: Al-Khatmain, sustainability, Falaj, HEC-RAS, water management system

Procedia PDF Downloads 25
450 The Power of Inferences and Assumptions: Using a Humanities Education Approach to Help Students Learn to Think Critically

Authors: Randall E. Osborne

Abstract:

A four-step ‘humanities’ thought model has been used in an interdisciplinary course for almost two decades and has been shown to aid students in becoming more inclusive in their world view. A lack of tolerance for ambiguity can interfere with this progression, so we developed an assignment that seems to have assisted students in developing more tolerance for ambiguity and, therefore, opened them up to make more progress on the critical thought model. The four-step critical thought model (built from a humanities education approach) is used in an interdisciplinary course on prejudice, discrimination, and hate in an effort to minimize egocentrism and promote sociocentrism in college students. A fundamental barrier to this progression is a lack of tolerance for ambiguity. The approach to the course is built on the assumption that tolerance for ambiguity (characterized by a dislike of uncertain or ambiguous situations, or of situations in which expected behaviors are uncertain) will likely serve as a barrier (if tolerance is low) or a facilitator (if tolerance is high) of active engagement with assignments. Given that active engagement with course assignments is necessary to promote an increase in critical thought and multicultural attitude change, intolerance for ambiguity inhibits critical thinking and, ultimately, multicultural attitude change. As expected, those students showing the least decrease (or even an increase) in intolerance across the semester earned lower grades in the course than those students who showed a significant decrease in intolerance, t(1,19) = 4.659, p < .001. Students who demonstrated the most change in their tolerance for ambiguity (an increasing ability to tolerate ambiguity) earned the highest grades in the course. This is especially significant because faculty did not know student scores on this measure until after all assignments had been graded and course grades assigned. An assignment designed to make students’ assumption and inference processes visible, so that they could be explored, was implemented with the goal that this exploration would promote more tolerance for ambiguity, which, as already outlined, promotes critical thought. The assignment offers students two options and then requires them to explore what they have learned about inferences and/or assumptions. This presentation outlines the assignment and demonstrates the humanities model, what students learn from particular assignments, and how it fosters a change in tolerance for ambiguity, which serves as the foundational component of critical thinking.

Keywords: critical thinking, humanities education, sociocentrism, tolerance for ambiguity

Procedia PDF Downloads 250
449 A Model Outlining Feelings vs. Emotions and Why Distinction is Critical

Authors: Brendan Mooney

Abstract:

Context: Feelings and emotions are commonly misunderstood, and the terms are often used interchangeably, leading to potential negative impacts on individuals' mental well-being and relationships. The distinction between these two fundamentally different experiences of human life is crucial for effective psychological practice and communication. Research Aim: The aim of this study is to outline the disparities between feelings and emotions, emphasising the significance of this differentiation in psychological practice for enhancing clients' observation, decision-making, problem-solving, and communication skills. Methodology: This research utilises a conceptual model developed by the author in 2017, based on clinical experience, client observations, and feedback. The model serves to guide effective clinical practice by providing clear definitions and an understanding of feelings versus emotions. Case study examples were utilised to support the efficacy of the model. Findings: The study highlights that recognising and expressing feelings, rather than emotions, is more empowering and conducive to resolving unresolved issues, thereby fostering better psychological well-being and interpersonal relationships. Theoretical Importance: This research underscores the importance of clarifying fundamental definitions related to feelings and emotions in enhancing psychological interventions and preventing various relationship conflicts and individual issues. Data Collection and Analysis Procedures: Data were collected through the author's clinical experience and interactions with clients, informing the development of the Feeling Emotions Mental (FEM) model. Analysis involved synthesising observations and feedback to elucidate the distinctions between feelings and emotions. Questions Addressed: What are the disparities between feelings and emotions? How does the confusion between these two fundamentally different experiences of human life impact individuals' mental well-being and relationships? Why is it essential to differentiate between feelings and emotions in psychological practice? Conclusion: The study advocates for a clear understanding of feelings versus emotions to support clients in addressing unresolved issues and improving their overall psychological functioning and communication skills, thereby preventing potential conflicts and relationship challenges.

Keywords: couples, mental, misinformation, misunderstanding, relationships

Procedia PDF Downloads 24
448 Statistical Modeling of Constituents in Ash Evolved From Pulverized Coal Combustion

Authors: Esam Jassim

Abstract:

Industries using conventional fossil fuels have an interest in better understanding the mechanism of particulate formation during combustion, since it is responsible for the emission of undesired inorganic elements that directly impact the level of atmospheric pollution. Fine and ultrafine particulates have a tendency to escape flue-gas cleaning devices and reach the atmosphere. They also preferentially collect on surfaces in power systems, resulting in an increased inclination to corrosion, a reduction in heat transfer in the thermal unit, and severe impacts on human health. This adverseness manifests particularly in the regions of the world where coal is the dominant source of energy for consumption. This study highlights the behavior of calcium transformation as mineral grains versus organically associated inorganic components during pulverized coal combustion. The influence of the existing type of calcium on the coarse, fine and ultrafine mode formation mechanisms is also presented. The impact of two sub-bituminous coals on particle size and calcium composition evolution during combustion is assessed. Three mixed blends, named Blends 1, 2, and 3, are selected according to the ratio of coal A to coal B by weight; the calcium percentage in the original coal increases from Blend 1 to Blend 3. A mathematical model and a new approach to describing constituent distribution are proposed, and the experimental calcium distribution in ash is modeled using a Poisson distribution. A novel parameter, called the elemental index λ, is introduced as a measure of element distribution. Results show that calcium present in the original coal as mineral grains has an index of 17 in the ash, whereas organically associated calcium transformed to fly ash is best described by an elemental index λ of 7. As an alkaline-earth element, calcium is considered the fundamental element responsible for boiler deficiency, since it is the major player in the mechanism of the ash slagging process. The mechanisms of particle size distribution and the mineral species of the ash particles are presented using CCSEM and size-segregated ash characteristics. Conclusions are drawn from the analysis of pulverized coal ash generated from a utility-scale boiler.
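To illustrate how an index of this kind can be estimated, the sketch below fits a Poisson distribution to simulated per-particle calcium counts; the maximum-likelihood estimate of λ is simply the sample mean. The counts are synthetic, and the authors' exact definition of the elemental index may differ from this simple fit.

```python
# Illustrative Poisson fit for an "elemental index": the MLE of the Poisson
# parameter lambda is the sample mean of the counts. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
counts = rng.poisson(lam=7, size=1000)     # e.g. calcium occurrences per ash particle (synthetic)

lam_hat = counts.mean()                    # maximum-likelihood estimate of lambda
# Compare observed frequencies with the fitted Poisson pmf as a quick sanity check.
values, observed = np.unique(counts, return_counts=True)
expected = stats.poisson.pmf(values, lam_hat) * counts.size

print(f"estimated elemental index (lambda) = {lam_hat:.2f}")
for v, o, e in zip(values, observed, expected):
    print(f"count {v:3d}: observed {o:5d}, expected {e:8.1f}")
```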

Keywords: coal combustion, inorganic element, calcium evolution, fluid dynamics

Procedia PDF Downloads 312
447 Multi-omics Integrative Analysis with Genome-Scale Metabolic Model Simulation Reveals Reaction Essentiality data in Human Astrocytes Under the Lipotoxic Effect of Palmitic Acid

Authors: Janneth Gonzalez, Andres Pinzon Velasco, Maria Angarita, Nicolas Mendoza

Abstract:

Astrocytes play an important role in various processes in the brain, including pathological conditions such as neurodegenerative diseases. Recent studies have shown that an increase in saturated fatty acids such as palmitic acid (PA) triggers pro-inflammatory pathways in the brain, while the use of synthetic neurosteroids such as tibolone has demonstrated neuroprotective mechanisms. However, there are few studies on the neuroprotective mechanisms of tibolone, especially at the systemic (omic) level. In this study, we integrated multi-omic data (transcriptome and proteome) into a human astrocyte genome-scale metabolic model to study the astrocytic response during palmitate treatment. We evaluated metabolic fluxes in three scenarios (healthy, inflammation induced by PA, and tibolone treatment under PA-induced inflammation). We also used control theory to identify those reactions that control the astrocytic system. Our results suggest that PA modulates central and secondary metabolism, showing a change in energy source use through inhibition of the folate cycle and fatty acid β-oxidation and upregulation of ketone body formation. We found 25 metabolic switches under PA-mediated cellular regulation, 9 of which were critical only in the inflammatory scenario but not in the protective tibolone one. Within these reactions, inhibitory, total, and directional coupling profiles were key findings, playing a fundamental role in the (de)regulation of metabolic pathways that increase neurotoxicity and representing potential treatment targets. Finally, this study framework facilitates the understanding of metabolic regulation strategies, and it can be used for in silico exploration of the mechanisms of astrocytic cell regulation, directing more complex future experimental work on neurodegenerative diseases.
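A minimal sketch (not the authors' pipeline) of how reaction essentiality can be scanned in a genome-scale metabolic model with COBRApy: each reaction is deleted in turn and the model is re-optimized by flux balance analysis. The bundled "textbook" model and the 1% cutoff are placeholder assumptions; an astrocyte-specific model and the study's own criteria would be substituted in practice.

```python
# Sketch: reaction essentiality screening in a genome-scale metabolic model with COBRApy.
from cobra.io import load_model
from cobra.flux_analysis import single_reaction_deletion

model = load_model("textbook")                     # placeholder model, not the astrocyte GEM
wild_type = model.optimize().objective_value       # baseline objective flux

# Delete each reaction in turn and re-optimize (flux balance analysis).
deletions = single_reaction_deletion(model)
growth = deletions["growth"].fillna(0.0)           # treat infeasible knockouts as zero flux

# Assumed cutoff: a reaction is "essential" if its removal drops the objective below 1% of baseline.
essential = deletions[growth < 0.01 * wild_type]
print(f"{len(essential)} of {len(deletions)} reactions classified as essential")
```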

Keywords: astrocytes, data integration, palmitic acid, computational model, multi-omics, control theory

Procedia PDF Downloads 101
446 The Psychometric Properties of an Instrument to Estimate Performance in Ball Tasks Objectively

Authors: Kougioumtzis Konstantin, Rylander Pär, Karlsteen Magnus

Abstract:

Ball skills, as a subset of fundamental motor skills, are predictors of performance in sports. Currently, most tools evaluate ball skills using subjective ratings. The aim of this study was to examine the psychometric properties of a newly developed instrument to measure ball-handling skills objectively (the BHS-test) using digital instrumentation. Participants were a convenience sample of 213 adolescents (age M = 17.1 years, SD = 3.6; 55% females, 45% males) recruited from upper secondary schools and invited to a sports hall for the assessment. The 8-item instrument incorporated both accuracy-based ball skill tests and repetitive-performance tests with a ball. Testers counted performance manually in four of the tests (one throwing and three juggling tasks). In the other four tests (one balancing and three rolling tasks), the assessment was technologically enhanced using a ball machine, a Kinect camera, and balls with motion sensors. 3D printing technology was used to construct the equipment, while all results were recorded digitally with smartphones/tablets, computers, and a specially constructed application that sent the data to a server. The instrument was deemed reliable (α = .77), and principal component analysis was used on a random subset (53 of the participants). Furthermore, latent variable modeling was employed to confirm the structure with the remaining subset (160 of the participants). The analysis showed good factorial validity, with one factor explaining 57.90% of the total variance. Four loadings were larger than .80, two more exceeded .76, and the other two were .65 and .49. The one-factor solution was confirmed by a first-order model with one general factor and an excellent fit between model and data (χ² = 16.12, DF = 20; RMSEA = .00, CI90 .00–.05; CFI = 1.00; SRMR = .02). The loadings on the general factor ranged between .65 and .83. Our findings indicate good reliability and construct validity for the BHS-test. To develop the instrument further, more studies are needed with various age groups, e.g., children. We suggest using the BHS-test for diagnostic or assessment purposes in talent development and sports participation interventions that focus on ball games.
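
For readers less familiar with the two headline statistics above, the sketch below shows how Cronbach's alpha and the proportion of variance explained by the first principal component can be computed from an item-score matrix with one column per test. The synthetic scores and the single-factor data-generating assumption are placeholders; the real study also fitted a confirmatory first-order factor model.

```python
# Minimal sketch: Cronbach's alpha and first-component explained variance for an
# 8-item ball-handling battery. The scores are synthetic stand-ins, generated from
# one common ability factor plus noise.
import numpy as np

rng = np.random.default_rng(1)
n_participants, n_items = 213, 8
ability = rng.normal(size=(n_participants, 1))
scores = ability + rng.normal(scale=1.0, size=(n_participants, n_items))

def cronbach_alpha(x):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def first_component_variance(x):
    # Proportion of total variance carried by the first principal component
    # of the standardized items (i.e. largest eigenvalue of the correlation matrix).
    z = (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)
    eigvals = np.linalg.eigvalsh(np.cov(z, rowvar=False))
    return eigvals[-1] / eigvals.sum()

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
print(f"Variance explained by first component: {first_component_variance(scores):.1%}")
```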

Keywords: ball-handling skills, ball-handling ability, technologically-enhanced measurements, assessment

Procedia PDF Downloads 69
445 Diabetes Care in Detention Settings: A Systematic Review

Authors: A. Papachristou, A. Ntikoudi, L. Makris, V. Saridakis

Abstract:

Introduction: More than 10 million people are imprisoned or detained worldwide. Figures from 2011-12 show that prison inmates are more likely than the general population to suffer from chronic or infectious diseases; most inmates are overweight or obese, and more than a quarter have high blood pressure. In 2011-12, the proportion of prisoners reporting diabetes or hyperglycemia was 899 per 10,000 prisoners, almost double the 2004 figure (483 per 10,000). It is important to ensure that this population has access to the same standard of care as people outside prisons, as access to services should be need-based. Diabetes is a public health problem associated with increased morbidity and mortality worldwide. According to the International Diabetes Federation (IDF), in 2017 approximately 425 million people worldwide had diabetes, a number expected to increase to 629 million by 2045. Poor management of diabetes in prisons can lead to poor blood sugar control and increase the risk of complications. Aim: The aim of this review was to systematically evaluate all the available literature on diabetes care in custodial settings. Methods: An extensive literature search was conducted through electronic databases (PubMed, Scopus and CINAHL) with the terms ‘custody’, ‘diabetes mellitus’, ‘detention centers’ and ‘chronic disease’. Articles published in English until September 2022 were included; no other criteria on publication dates were set. Results: Most of the studies reported a diabetes prevalence of approximately 10%, alongside other common chronic conditions. Hypertension, obesity, smoking and a sedentary lifestyle were the most common comorbidities associated with diabetes. Conclusion: Good glycemic control is fundamental to managing diabetes, and while many prisoners enter prison with poorly controlled diabetes, access to regular medication and meals, as well as exercise, offers the potential for improvement. Not being able to get help as quickly as in the past can be extremely stressful, and some prisoners may deliberately raise their blood sugar levels to avoid the risk of developing hypoglycemia, especially if they know they have had previous episodes of nocturnal hypoglycemia. Thus, appropriate training and resources are critical to providing quality care to incarcerated people with diabetes.

Keywords: custody, diabetes mellitus, detention centers, chronic disease

Procedia PDF Downloads 78
444 Different Approaches to Teaching a Database Course to Undergraduate and Graduate Students

Authors: Samah Senbel

Abstract:

Database design is a fundamental part of the Computer Science and Information Technology curricula in any school, as well as in the study of management, business administration, and data analytics. In this study, we compare the performance of two groups of students studying the same database design and implementation course at Sacred Heart University in the fall of 2018. Both courses used the same textbook and were taught by the same professor, one for seven graduate students and one for 26 undergraduate students (juniors). The undergraduate students were around 20 years old with little work experience, while the graduate students averaged 35 years old and were all employed in computer-related or management-related jobs. The textbook used was 'Database Systems: Design, Implementation, and Management' by Coronel and Morris, and the course was designed to follow the textbook at roughly a chapter per week. The first six weeks covered the design aspect of a database, followed by a paper exam. The next six weeks covered the implementation aspect of the database using SQL, followed by a lab exam. Since the undergraduate students are on a 16-week semester, the last three weeks of their course covered NoSQL; this part of the course was not included in this study. After the course was over, we analyzed the results of the two groups of students. An interesting discrepancy was observed: in the database design part of the course, the average grade of the graduate students was 92%, while that of the undergraduate students was 77% for the same exam. In the implementation part of the course, we observed the opposite: the average grade of the graduate students was 65%, while that of the undergraduate students was 73%. The overall grades were quite similar: the graduate average was 78% and the undergraduate average was 75%. Based on these results, we concluded that having both classes follow the same schedule was not beneficial and that an adjustment was needed: the graduates could spend less time on design, and the undergraduates would benefit from more design time. In the fall of 2019, 30 students registered for the undergraduate course and 15 students registered for the graduate course. To test our conclusion, the undergraduates spent about 67% of the time (eight classes) on the design part of the course and 33% (four classes) on the implementation part, using the exact same exams as the previous year. This resulted in an improvement in their average grade on the design part from 77% to 83% and in their average implementation grade from 73% to 79%. In conclusion, we recommend using two separate schedules for teaching the database design course. For undergraduate students, it is important to spend more time on the design part rather than the implementation part of the course, while for the older graduate students we recommend spending more time on the implementation part, as that seems to be the part they struggle with, even though they have a better understanding of the design component of databases.

Keywords: computer science education, database design, graduate and undergraduate students, pedagogy

Procedia PDF Downloads 102
443 The Persistence of Abnormal Return on Assets: An Exploratory Analysis of the Differences between Industries and Differences between Firms by Country and Sector

Authors: José Luis Gallizo, Pilar Gargallo, Ramon Saladrigues, Manuel Salvador

Abstract:

This study offers an exploratory statistical analysis of the persistence of annual profits across a sample of firms from different European Union (EU) countries. To this end, a hierarchical Bayesian dynamic model has been used which enables the annual behaviour of those profits to be broken down into a permanent structural component and a transitory component, while also distinguishing between general effects affecting the industry as a whole to which each firm belongs and specific effects affecting each firm in particular. This breakdown enables the relative importance of those fundamental components to be evaluated more accurately by country and sector. Furthermore, the Bayesian approach allows different hypotheses to be tested about the homogeneity of the behaviour of these components with respect to the sector and the country where the firm develops its activity. The data analysed come from a sample of 23,293 firms in EU countries selected from the AMADEUS database. The period analysed ran from 1999 to 2007, and 21 sectors were analysed, chosen in such a way that there was a sufficiently large number of firms in each country-sector combination for the industry effects to be estimated accurately enough for meaningful comparisons to be made by sector and country. The analysis has been conducted by sector and by country from a Bayesian perspective, thus making the study more flexible and realistic since the estimates obtained do not depend on asymptotic results. In general terms, the study finds that, although the industry effects are significant, the firm-specific effects are more important, and their importance varies depending on the sector or the country in which the firm carries out its activity. The influence of firm effects accounts for around 81% of the total variation and displays a significantly lower degree of persistence, with adjustment speeds oscillating around 34%; however, this pattern is not homogeneous but depends on the sector and country analysed. Industry effects, which also depend on the sector and country analysed, have a more marginal importance but are significantly more persistent, with adjustment speeds oscillating around 7-8%, and this degree of persistence is very similar for most of the sectors and countries analysed.
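
To make the notion of persistence and adjustment speed concrete, the sketch below simulates a simple panel in which abnormal ROA follows a partial-adjustment (AR(1)) process and recovers the adjustment speed as one minus the estimated autoregressive coefficient. The simulated panel and the pooled OLS estimator are simplifying assumptions; the study itself uses a hierarchical Bayesian dynamic model with firm and industry components estimated by MCMC.

```python
# Simplified illustration of profit persistence: abnormal ROA follows
# x_{i,t} = phi * x_{i,t-1} + e_{i,t}, and the adjustment speed is 1 - phi.
# The simulated panel and pooled OLS stand in for the paper's hierarchical
# Bayesian dynamic model.
import numpy as np

rng = np.random.default_rng(2)
n_firms, n_years = 200, 9          # roughly the 1999-2007 window
phi_true = 0.66                    # persistence -> adjustment speed of about 34%

x = np.zeros((n_firms, n_years))
for t in range(1, n_years):
    x[:, t] = phi_true * x[:, t - 1] + rng.normal(scale=0.05, size=n_firms)

# Pooled OLS of x_t on x_{t-1} across all firm-years (no intercept, data are centred).
y = x[:, 1:].ravel()
lag = x[:, :-1].ravel()
phi_hat = (lag @ y) / (lag @ lag)

print(f"estimated persistence phi: {phi_hat:.2f}")
print(f"implied adjustment speed:  {1 - phi_hat:.0%}")
```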

Keywords: dynamic models, Bayesian inference, MCMC, abnormal returns, persistence of profits, return on assets

Procedia PDF Downloads 381
442 Surgical Treatment of Glaucoma – Literature and Video Review of Blebs, Tubes, and Micro-Invasive Glaucoma Surgeries (MIGS)

Authors: Ana Miguel

Abstract:

Purpose: Glaucoma is the second leading cause of blindness worldwide and the first cause of irreversible blindness. Trabeculectomy, the standard glaucoma surgery, has a success rate between 36.0% and 98.0% at three years and a high complication rate, which has led to the development of alternative procedures, the micro-invasive glaucoma surgeries (MIGS). MIGS devices are diverse and have various indications, risks, and levels of effectiveness. We intended to review the surgical techniques, indications, contra-indications, and intraocular pressure (IOP) effect of MIGS. Methods: We performed a literature review of MIGS to differentiate the devices and their reported effectiveness compared to traditional surgery (tubes and blebs). We also conducted a video review of the author's last 1000 glaucoma surgeries (including MIGS, but also trabeculectomy, deep sclerectomy, and Ahmed and Baerveldt tubes), performed during glaucoma and advanced anterior segment fellowships in Canada and France, to describe the preferred surgical technique for each. Results: We present the videos with surgical techniques and pearls for each surgery. Glaucoma surgeries included: 1- bleb surgery (namely trabeculectomy, with releasable sutures or with slip knots, deep sclerectomy, Ahmed valve, Baerveldt tube); 2- MIGS with bleb, also known as MIBS (including XEN 45, XEN 63, and Preserflo); 3- MIGS increasing supra-choroidal flow (iStar); 4- MIGS increasing trabecular flow (iStent, gonioscopy-assisted transluminal trabeculotomy - GATT, goniotomy, excimer laser trabeculostomy - ELT); and 5- MIGS decreasing aqueous humor production (endocyclophotocoagulation, ECP). Needling (ab interno and ab externo) performed in the operating room and irido-zonulo-hyaloïdectomy (IZHV) were also included. Each technique had different indications and contra-indications. Conclusion: MIGS are valuable in glaucoma surgery, as are traditional trabeculectomy and tube surgeries. All glaucoma surgery can be combined with phacoemulsification (there may be a synergistic effect of MIGS + cataract surgery). In addition, some MIGS may be combined for a further intraocular pressure lowering effect (for example, iStents with goniotomy and ECP). Good surgical technique and postoperative management are fundamental to increasing success and good practice in all glaucoma surgery.

Keywords: glaucoma, migs, surgery, video, review

Procedia PDF Downloads 62
441 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The PUSPATI TRIGA Reactor (RTP) in Malaysia reached its first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The Feedback Control Algorithm (FCA), a conventional Proportional-Integral (PI) controller, is the present power control method used to control the fission process in the RTP. It is important to ensure that the core power is always stable and follows the load demand within an acceptable steady-state error and with minimum settling time to reach steady-state power. At present, the system cannot be considered well-posed with respect to power tracking performance; however, there is still potential to improve the current performance by developing a next-generation nuclear core power controller of novel design. In this paper, the dual-mode prediction proposed in Optimal Model Predictive Control (OMPC) is presented in a state-space form to control the core power. The model for core power control was based on mathematical models of the reactor core, OMPC, and a control rod selection algorithm. The mathematical models of the reactor core comprise neutronic models, thermal-hydraulic models, and reactivity models. The dual-mode prediction in OMPC, covering the transient and terminal modes, was based on the implementation of a Linear Quadratic Regulator (LQR) in the design of the core power control. The combination of dual-mode prediction and a Lyapunov approach, which handles the summation of the cost function over an infinite horizon, is intended to eliminate some of the fundamental weaknesses of MPC. This paper shows the ability of OMPC to deal with tracking, the regulation problem, disturbance rejection, and parameter uncertainty. The tracking and regulation performance of the conventional controller and the OMPC are compared by numerical simulation. In conclusion, the proposed OMPC has shown significant performance in load tracking and in regulating core power for the nuclear reactor, with guaranteed closed-loop stability.
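
As a hedged illustration of the terminal-mode ingredients of such a dual-mode scheme, the sketch below computes the LQR gain K used for the terminal control law u = -Kx beyond the prediction horizon, together with the Riccati/Lyapunov cost-to-go matrix P that converts the infinite-horizon tail of the cost into a terminal penalty x'Px. The second-order state-space model and the weights are hypothetical placeholders, not the RTP core model.

```python
# Sketch of the terminal mode of dual-mode OMPC: the LQR gain K for u = -K x beyond
# the prediction horizon, and the cost-to-go matrix P that turns the infinite-horizon
# tail of the cost into a terminal penalty x' P x.
# The A, B, Q, R below are hypothetical, not the RTP reactor core model.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.diag([10.0, 1.0])   # weight on the state (e.g. power tracking error)
R = np.array([[0.1]])      # weight on the control effort (e.g. rod movement)

# Solve the discrete-time algebraic Riccati equation for the cost-to-go matrix P.
P = solve_discrete_are(A, B, Q, R)
# Terminal-mode LQR gain: K = (R + B' P B)^-1 B' P A.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print("LQR gain K:", K)
print("closed-loop eigenvalue magnitudes:", np.abs(closed_loop_eigs))  # < 1 => stable
```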

Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control

Procedia PDF Downloads 143
440 Carbon Nanotube Field Effect Transistor - A Review

Authors: P. Geetha, R. S. D. Wahida Banu

Abstract:

The crowning advances in silicon-based electronic technology have dominated the computing world for the past decades. The captivating performance of Si devices lies in the sustained scaling down of physical dimensions, thereby increasing device density and improving performance. However, fundamental physical, technological, economic, and manufacturing limitations restrict further miniaturization of Si-based devices. The pitfalls of scaling down include process variation, short-channel effects, high leakage currents, and reliability concerns. To fix these problems, it is necessary either to follow a new concept that overcomes the current difficulties or to support the existing concept with different materials. The new concepts are spintronics, quantum computation, and two-terminal molecular devices; alternatively, the presently used, well-known three-terminal devices can be built with different materials suited to addressing the scaling difficulties. The first approach lies in the far future since it requires considerable effort; the second path is the more promising route. Modelling paves the way to understanding not only the current-voltage characteristics but also the performance of new devices. It is therefore desirable to model a new device with suitable gate control and to project its capability of handling high current, high power, high frequency, short delay, and high carrier velocity, with excellent electronic and optical properties. The carbon nanotube has become a thriving material for replacing silicon in nanodevices, and well-planned, optimized utilization of this carbon material leads to many more advantages. The unique nature of this material has enabled recent developments in almost all fields of application, from the automobile industry to medical science, and especially in the electronics field, on which the automation industry depends. Much research work is being done in this area. This paper reviews the carbon nanotube field effect transistor with various gate configurations, numbers of channel elements, CNT wall configurations, and different modelling techniques.

Keywords: array of channels, carbon nanotube field effect transistor, double gate transistor, gate wrap around transistor, modelling, multi-walled CNT, single-walled CNT

Procedia PDF Downloads 296