Search results for: merged SPECT
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 116

26 Reconstructing the Segmental System of Proto-Graeco-Phrygian: A Bottom-Up Approach

Authors: Aljoša Šorgo

Abstract:

Recent scholarship on Phrygian has begun to more closely examine the long-held belief that Greek and Phrygian are two very closely related languages. It is now clear that Graeco-Phrygian can be firmly postulated as a subclade of the Indo-European languages. The present paper will focus on the reconstruction of the phonological and phonetic segments of Proto-Graeco-Phrygian (= PGPh.) by providing relevant correspondence sets and reconstructing the classes of segments. The PGPh. basic vowel system consisted of ten phonemic oral vowels: */a e o ā ē ī ō ū/. The correspondences of the vowels are clear and leave little open to ambiguity. There were four resonants and two semi-vowels in PGPh.: */r l m n i̯ u̯/, which could appear in both a consonantal and a syllabic function, with the distribution between the two still being phonotactically predictable. Of note is the fact that the segments *m and *n seem to have merged when their phonotactic position would see them used in a syllabic function. Whether the segment resulting from this merger was a nasalized vowel (most likely *[ã]) or a syllabic nasal *[N̥] (underspecified for place of articulation) cannot be determined at this stage. There were three fricatives in PGPh.: */s h ç/. *s and *h are easily identifiable. The existence of *ç, which may seem unexpected, is postulated on the basis of the correspondence Gr. ὅς ~ Phr. yos/ιος. It is of note that Bozzone has previously proposed the existence of *ç (< PIE *h₁i̯-) in an early stage of Greek even without taking into account Phrygian data. Finally, the system of stops in PGPh. distinguished four places of articulation (labial, dental, velar, and labiovelar) and three phonation types. The question of which three phonation types were actually present in PGPh. is one of great importance for the ongoing debate on the realization of the three series in PIE. Since the matter is still very much in dispute, we ought to, at this stage, endeavour to reconstruct the PGPh. system without recourse to the other IE languages. The three series of correspondences are: 1. Gr. T (= tenuis) ~ Phr. T; 2. Gr. D (= media) ~ Phr. T; 3. Gr. TA (= tenuis aspirata) ~ Phr. M. The first series must clearly be reconstructed as composed of voiceless stops. The second and third series are more problematic. With a bottom-up approach, neither the second nor the third series of correspondences is compatible with simple modal voicing, and the reflexes differ greatly in voice onset time. Rather, the defining feature distinguishing the two series was [±spread glottis], with ancillary vibration of the vocal cords. In PGPh. the second series was undergoing further spreading of the glottis. As the two languages split, this process would continue, but be affected by dissimilar changes in VOT, which was ultimately phonemicized in both languages as the defining feature distinguishing between their series of stops.

Keywords: bottom-up reconstruction, Proto-Graeco-Phrygian, spread glottis, syllabic resonant

Procedia PDF Downloads 50
25 The Impact of the Mastering My Mental Fitness™-Nurses Workshops on Practical Nursing Students’ Perceived Burnout and Psychological Capital: An Embedded Mixed Methods Study

Authors: Linda Frost, Lindsay Anderson, Jana Borras, Ariel Dysangco, Vimabayi Makwaira

Abstract:

The academic environment in which nursing students are immersed comes with many demands and expectations. Course load, clinical placements, and financial expenses are examples of the pressures facing students each semester. These pressures contribute to student stress and impact their overall well-being and mental fitness. Students' ability to cope with stress and bounce back from adversity is enhanced when we build their mental fitness. Building mental fitness has the benefit of improving physical health, relationships, self-esteem, resilience, work productivity, and overall contentment, happiness and life satisfaction. While self-care is encouraged to avoid burnout, there is a gap in the literature on programs to help build nursing students' mental health and ability to engage in self-care. There is an opportunity and a need to design programs and implement actions aimed at reducing stress and its adverse effects on nursing students. Nursing students require the support of people who understand the complexities of the nursing profession, the multifaceted work environments in which they operate, and the impact these environments have on their mental fitness. Nursing academia is in the best position to ensure that tools are in place to support the next generation of nurses, who face a career with significant emotional and physical demands. This is a mixed-methods study using an embedded design. We utilized a pretest-posttest design to compare the difference in psychological capital (PsyCap) and burnout between students who received the Mastering My Mental Fitness-Nurses™ (MMMF-N™) workshops (n=8) and a control group (n=9) who did not. Semi-structured interviews were conducted with the eight nursing students in the intervention group, along with data from feedback forms, to explore the impact of the workshops on students' burnout and PsyCap and to determine how to improve the workshops for future students. The quantitative and qualitative data will be merged using a side-by-side comparison, in a discussion format that allows for the comparison of the results from both phases. The findings will be available in January 2025. We anticipate that students in the control and intervention groups will report similar levels of burnout, and that students in the intervention group will indicate the benefits of the MMMF-N™ workshops through qualitative interviews and workshop feedback forms.

Keywords: burnout, mental fitness, nursing students, psychological capital

Procedia PDF Downloads 23
24 Leadership in the Emergence Paradigm: A Literature Review on the Medusa Principles

Authors: Everard van Kemenade

Abstract:

Many quality improvement activities are planned. Leaders are strongly involved in missions, visions and strategic planning. They use, consciously or unconsciously, the PDCA cycle, also known as the Deming cycle. After the planning, the plans are carried out and the results or effects are measured. If the results show that the goals in the plan have not been achieved, adjustments are made in the next plan or in the execution of the processes. Then, the cycle is run through again. Traditionally, the PDCA cycle is advocated as a means to an end. However, PDCA is especially fit for planned, ordered, certain contexts. It fits with the empirical and referential quality paradigm. For uncertain, unordered, unplanned processes, something else might be needed instead of Plan-Do-Check-Act. Due to the complexity of our society, the influence of the context, and the uncertainty in our world nowadays, not every activity can be planned anymore. At the same time, organisations need to be more innovative than ever. That provides leaders with 'wicked tendencies'. This raises the question of how one can innovate without being able to plan. Complexity science studies the interactions of a diverse group of agents that bring about change in times of uncertainty, e.g. when radical innovation is co-created. This process is called emergence. This research study explores the role of leadership in the emergence paradigm. The aim of the article is to study the way that leadership can support the emergence of innovation in a complex context. First, clarity is given on the concepts used in the research question: complexity, emergence, innovation and leadership. Thereafter, a literature search is conducted to answer the research question. The topics 'emergent leadership' and 'complexity leadership' were chosen for an exploratory search in Google and Google Scholar using the berry-picking method. The exclusion criterion was emergence in disciplines other than organizational development, or in the meaning of 'arising'. The literature search gave 45 hits. Twenty-seven articles were excluded after reading the title and abstract because they did not research the topic of emergent leadership and complexity. After reading the remaining articles in full, one more was excluded because it used 'emergent' in the limited meaning of 'arising', and eight more were excluded because the topic did not match the research question of this article. That brings the total of the search to 17 articles. The useful conclusions from the articles were merged and grouped under overarching topics, using thematic analysis. The findings are that five topics prevail when looking at possibilities for leadership to facilitate innovation: enabling, sharing values, dreaming, interacting, context sensitivity and adaptivity. Together they form, in Dutch, the acronym Medusa.

Keywords: complexity science, emergence, leadership in the emergence paradigm, innovation, the Medusa principles

Procedia PDF Downloads 29
23 Influence of Maternal Factors on Growth Patterns of Schoolchildren in a Rural Health and Demographic Surveillance Site in South Africa: A Mixed Method Study

Authors: Perpetua Modjadji, Sphiwe Madiba

Abstract:

Background: The growth patterns of children are good indicators of their nutritional status, health, and socioeconomic level. However, maternal factors and the belief system of the society affect the growth of children, promoting undernutrition. This study determined the influence of maternal factors on the growth patterns of schoolchildren in a rural site. Methods: A convergent mixed-methods study was conducted among 508 schoolchildren and their mothers in the Dikgale Health and Demographic Surveillance System Site, South Africa. Multistage sampling was used to select schools (purposive) and learners (random), who were paired with their mothers. Anthropometry was measured, and socio-demographic, obstetrical and household information, maternal influence on children's nutrition, and growth were assessed using an interviewer-administered questionnaire (quantitative). The influence of the cultural beliefs and practices of mothers on the nutrition and growth of their children was explored using focus group discussions (qualitative). Narratives of mothers were used to better understand the growth patterns of schoolchildren (mixed method). Data were analyzed using STATA 14 (quantitative) and NVivo 11 (qualitative). Quantitative and qualitative data were merged for integrated mixed-methods analysis using a joint display analysis. Results: The mean age of the children was 10 ± 2 years, ranging from 6 to 15 years. Substantial percentages of thinness (25%), underweight (24%), and stunting (22%) were observed among the children. Mothers had a mean age of 37 ± 7 years, and 75% were overweight or obese. A depressed socio-economic status was observed, indicated by a high rate of unemployment with no income (82.3%) and dependency on social grants (86.8%). Determinants of poor growth patterns were the child's age and gender, maternal age, height and BMI, access to water supply, and refrigerator use. The narratives of mothers suggested that the children in most of their households were exposed to poverty and inadequate intake of quality food. Conclusion: Poor growth patterns were observed among schoolchildren while their mothers were overweight or obese. The child's gender, school grade, maternal body mass index, and access to water were the main determinants. Congruence was observed between most qualitative themes and quantitative constructs. The need for a multi-sectoral approach, considering evidence-based and feasible nutrition programs for schoolchildren, especially those in rural settings, and educating mothers cannot be over-emphasized.

Keywords: growth patterns, maternal factors, rural context, schoolchildren, South Africa

Procedia PDF Downloads 180
22 Residual Analysis and Ground Motion Prediction Equation Ranking Metrics for Western Balkan Strong Motion Database

Authors: Manuela Villani, Anila Xhahysa, Christopher Brooks, Marco Pagani

Abstract:

The geological structure of the Western Balkans is strongly affected by the collision between the Adria microplate and the southwestern Eurasia margin, resulting in a considerably active seismic region. The Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project (BSHAP) (2007-2011, 2012-2015), supported by NATO, enabled the preparation of new seismic hazard maps of the Western Balkans, but when inspecting the seismic hazard models produced later by these countries on a national scale, significant differences in design PGA values are observed at the borders, for instance, North Albania-Montenegro, South Albania-Greece, etc. Considering that the catalogues were unified and the seismic sources were defined within the BSHAP framework, the differences evidently arise from the selection of Ground Motion Prediction Equations (GMPEs), which is generally the component with the highest impact on seismic hazard assessment. At the time of the project, a modest database was available, namely 672 three-component records, whereas nowadays this strong motion database has increased considerably, up to 20,939 records with Mw ranging from 3.7 to 7 and epicentral distances from 0.47 km to 490 km. Statistical analysis of the strong motion database showed a lack of recordings in the moderate-to-large magnitude and short distance ranges; therefore, there is a need to re-evaluate the GMPEs in light of the recently updated database and the new generations of ground motion models (GMMs). In some cases, it was observed that some events were more extensively documented in one database than the other, like the 1979 Montenegro earthquake, with a considerably larger number of records in the BSHAP Analogue SM database when compared to ESM23. Therefore, the strong motion flat-file provided by the Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project was merged with the ESM23 database for the polygon studied in this project. After performing the preliminary residual analysis, the candidate GMPEs were identified. This process was done using the GMPE performance metrics available within the SMT in the OpenQuake Platform. The likelihood model and Euclidean Distance-Based Ranking (EDR) were used. Finally, a GMPE logic tree was selected for this study and, following the selection of candidate GMPEs, model weights were assigned using the average sample log-likelihood approach of Scherbaum.
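As an illustration of the average sample log-likelihood (LLH) ranking and weighting approach mentioned above, the minimal sketch below scores hypothetical candidate GMPEs from their normalized total residuals and converts the scores into logic-tree weights. It is a stand-alone example on synthetic residuals, not the OpenQuake SMT implementation.

```python
import numpy as np
from scipy.stats import norm

def average_sample_llh(normalized_residuals):
    """Scherbaum-style average sample log-likelihood (LLH) of a GMPE.

    normalized_residuals: total residuals (ln observed - ln predicted) divided by
    the GMPE's total standard deviation, so a perfect model yields standard-normal samples.
    """
    z = np.asarray(normalized_residuals, dtype=float)
    # Negative mean log2-likelihood under the standard normal density.
    return -np.mean(np.log2(norm.pdf(z)))

def llh_based_weights(llh_values):
    """Turn per-GMPE LLH scores into normalized logic-tree weights."""
    llh = np.asarray(llh_values, dtype=float)
    raw = 2.0 ** (-llh)          # lower LLH (better fit) -> larger weight
    return raw / raw.sum()

# Hypothetical residual sets for three candidate GMPEs (illustration only).
rng = np.random.default_rng(0)
residual_sets = [rng.normal(0.0, s, 500) for s in (1.0, 1.2, 1.5)]
llh = [average_sample_llh(z) for z in residual_sets]
print("LLH:", np.round(llh, 3), "weights:", np.round(llh_based_weights(llh), 3))
```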

Keywords: residual analysis, GMPE, Western Balkans, strong motion, OpenQuake

Procedia PDF Downloads 88
21 The Impact of CSR Satisfaction on Employee Commitment

Authors: Silke Bustamante, Andrea Pelzeter, Andreas Deckmann, Rudi Ehlscheidt, Franziska Freudenberger

Abstract:

Many companies increasingly seek to enhance their attractiveness as an employer in order to retain their employees. At the same time, corporate responsibility for social and ecological issues seems to be becoming a more important part of an attractive employer brand. It enables the company to match the values and expectations of its members, to signal fairness towards them, and to increase its brand potential for positive psychological identification on the employees' side. In the last decade, several empirical studies have focused on this relationship, confirming a positive effect of employees' CSR perception on their affective organizational commitment. The current paper takes a slightly different view by analyzing the impact of another factor on commitment: the employee's weighted satisfaction with the employer's CSR. For that purpose, it is assumed that commitment levels are rather a result of the fulfillment or disappointment of expectations. Hence, instead of merely asking how CSR perception affects commitment, a more complex independent variable is taken into account: a weighted satisfaction construct that summarizes two different factors. The individual level of commitment contingent on CSR is therefore conceptualized as a function of two psychological processes: (1) the individual significance that an employee ascribes to specific employer attributes and (2) the individual satisfaction based on the fulfillment of expectations that rely on preceding perceptions of employer attributes. The results presented are based on a quantitative survey undertaken among employees of the German service sector. Conceptually, a five-dimensional CSR construct (ecology, employees, marketplace, society and corporate governance) and a two-dimensional non-CSR construct (company and workplace) were applied to differentiate employer characteristics. (1) Respondents were asked to indicate the importance of different facets of CSR-related and non-CSR-related employer attributes. By means of a conjoint analysis, the relative importance of each employer attribute was calculated from the data. (2) In addition, participants stated their level of satisfaction with specific employer attributes. Both indications were merged into individually weighted satisfaction indexes across the seven dimensions of employer characteristics. The affective organizational commitment of employees (the dependent variable) was gathered by applying the established 15-item Organizational Commitment Questionnaire (OCQ). The findings on the relationship between satisfaction and commitment will be presented. Furthermore, the question will be addressed of how important satisfaction with CSR is, relative to satisfaction with other attributes of the company, in the creation of commitment. Practical as well as scientific implications will be discussed, especially with reference to previous results that focused on CSR perception as a commitment driver.
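As a rough illustration of how conjoint-derived importance weights and satisfaction ratings might be combined into per-dimension weighted satisfaction indexes, the sketch below uses hypothetical weights and ratings; it is not the study's actual scoring procedure or data.

```python
import numpy as np

# Hypothetical attribute dimensions (5 CSR + 2 non-CSR), importance weights
# from a conjoint analysis (summing to 1), and satisfaction ratings (1-5 scale).
dimensions   = ["ecology", "employees", "marketplace", "society",
                "governance", "company", "workplace"]
importance   = np.array([0.10, 0.20, 0.08, 0.07, 0.05, 0.30, 0.20])
satisfaction = np.array([3.5, 4.0, 3.0, 2.5, 3.0, 4.5, 4.0])

# Per-dimension weighted satisfaction and an overall index.
weighted = importance * satisfaction
for name, w in zip(dimensions, weighted):
    print(f"{name:12s} {w:.2f}")
print("overall weighted satisfaction index:", round(weighted.sum(), 2))
```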

Keywords: corporate social responsibility, organizational commitment, employee attitudes/satisfaction, employee expectations, employer brand

Procedia PDF Downloads 267
20 Organizational Culture and Its Internalization of Change in the Manufacturing and Service Sector Industries in India

Authors: Rashmi Uchil, A. H. Sequeira

Abstract:

The post-liberalization era in India has seen an unprecedented growth of mergers, both domestic and cross-border deals. Indian organizations have slowly begun appreciating this inorganic method of growth. However, all is not well, as evidenced by the declining value creation of organizations after mergers. Several studies have identified organizational culture as one of the key factors that affect the success of mergers, but very few studies have been attempted in this realm in India. The current study attempts to identify the factors in the organizational culture variable that may be unique to India. It also focuses on the difference in the impact of organizational culture on the merger of organizations in the manufacturing and service sectors in India. The study uses a mixed research approach. An exploratory research approach is adopted to identify the variables that constitute organizational culture specifically in the Indian scenario. A few hypotheses were developed from the identified variables and tested to arrive at the Grounded Theory. The Grounded Theory approach used in the study attempts to integrate the variables related to organizational culture. A descriptive approach is used to validate the developed grounded theory with a new empirical data set and thus test the relationship between the organizational culture variables and the success of mergers. Empirical data were captured from merged organizations situated in major cities of India. These organizations represent a significant proportion of the total number of organizations which have adopted mergers. The mix of industries included software, banking, manufacturing, pharmaceuticals and financial services. A mixed sampling approach was adopted for this study. The first phase of sampling was conducted using the probability method of stratified random sampling; the study further used the non-probability method of judgmental sampling. An adequate sample size was identified, representing the top, middle and junior management levels of the organizations that had adopted mergers. The validity and reliability of the research instrument were ensured with appropriate tests. Statistical tools like regression analysis, correlation analysis and factor analysis were used for data analysis. The results of the study revealed a strong relationship between organizational culture and the success of mergers. The results were also unique in that they highlighted a marked difference in the manner in which organizations in the manufacturing sector internalized changes in organizational culture after a merger, while organizations in the service sector internalized the changes at a slower rate. The study also portrays the industries in the manufacturing sector as more proactive, and it can contribute to a change in the perception of the said organizations.
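To illustrate the kind of analysis described (factor analysis of culture items followed by regression against a merger outcome), here is a minimal sketch on synthetic data; the item count, number of factors and outcome variable are assumptions for demonstration, not the study's instrument or results.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

# Synthetic survey data: 200 respondents x 12 organizational-culture items (1-5 Likert).
rng = np.random.default_rng(7)
items = rng.integers(1, 6, size=(200, 12)).astype(float)

# Reduce the items to a small number of latent culture factors.
fa = FactorAnalysis(n_components=3, random_state=0)
factors = fa.fit_transform(items)

# Regress a (synthetic) merger-success score on the extracted factors.
success = 0.6 * factors[:, 0] - 0.2 * factors[:, 1] + rng.normal(0, 0.5, 200)
reg = LinearRegression().fit(factors, success)
print("R^2:", round(reg.score(factors, success), 3))
print("factor coefficients:", np.round(reg.coef_, 3))
```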

Keywords: manufacturing industries, mergers, organizational culture, service industries

Procedia PDF Downloads 297
19 The Use of Social Stories and Digital Technology as Interventions for Autistic Children: A State-of-the-Art Review and Qualitative Data Analysis

Authors: S. Hussain, C. Grieco, M. Brosnan

Abstract:

Background and Aims: Autism is a complex neurobehavioural disorder characterised by impairments in the development of language and communication skills. The study involved a state-of-the-art systematic review, in addition to qualitative data analysis, to establish the evidence for social stories as an intervention strategy for autistic children. An up-to-date review of the use of digital technologies in the delivery of interventions to autistic children was also carried out, to assess the efficacy of digital technologies and the use of social stories in improving intervention outcomes for autistic children. Methods: Two student researchers reviewed a range of randomised controlled trials and observational studies. The aim of the review was to establish whether there was adequate evidence to justify recommending social stories to autistic patients. The students devised their own search strategies to be used across a range of search engines, including Ovid-Medline, Google Scholar and PubMed, and then critically appraised the generated literature. Additionally, qualitative data obtained from a comprehensive online questionnaire on social stories were thematically analysed. The thematic analysis was carried out independently by each researcher using a 'bottom-up' approach, meaning each researcher read the responses to a given question, devised semantic themes from them, and then placed each response into a semantic theme or sub-theme. The two researchers then met to discuss merging their theme headings. Inter-rater reliability (IRR) was calculated before and after the theme headings were merged, giving IRR values for pre- and post-discussion. Lastly, the thematic analysis was assessed by a third researcher, a professor of psychology and the director of the Centre for Applied Autism Research at the University of Bath. Results: The review of the literature, as well as the thematic analysis of qualitative data, found supporting evidence for social story use. The thematic analysis uncovered some interesting themes from the questionnaire responses, relating to the reasons why social stories were used and the factors influencing their effectiveness in each case. Overall, however, the evidence for digital technology interventions was limited, and the literature could not prove a causal link between better intervention outcomes for autistic children and the use of technologies, although it did offer plausible theories for the suitability of digital technologies for autistic children. Conclusions: Overall, the review concluded that there was adequate evidence to justify advising the use of social stories with autistic children. The role of digital technologies is clearly a fast-emerging field and appears to be a promising method of intervention for autistic children; however, it should not yet be considered an evidence-based approach. Using this research, the students developed ideas for social story interventions that aim to help autistic children.
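The abstract does not state which inter-rater reliability statistic was used; as an illustrative assumption, the sketch below computes Cohen's kappa, a common choice for two coders, on hypothetical pre- and post-discussion theme assignments.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical theme labels assigned by the two student coders to ten
# questionnaire responses, before and after the merging discussion.
coder_a_pre  = ["benefit", "barrier", "benefit", "context", "benefit",
                "barrier", "context", "benefit", "barrier", "context"]
coder_b_pre  = ["benefit", "context", "benefit", "context", "barrier",
                "barrier", "context", "benefit", "barrier", "benefit"]
coder_a_post = ["benefit", "barrier", "benefit", "context", "benefit",
                "barrier", "context", "benefit", "barrier", "context"]
coder_b_post = ["benefit", "barrier", "benefit", "context", "benefit",
                "barrier", "context", "benefit", "barrier", "benefit"]

print("pre-discussion kappa: ", round(cohen_kappa_score(coder_a_pre,  coder_b_pre), 2))
print("post-discussion kappa:", round(cohen_kappa_score(coder_a_post, coder_b_post), 2))
```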

Keywords: autistic children, digital technologies, intervention, social stories

Procedia PDF Downloads 121
18 Students’ Speech Anxiety in Blended Learning

Authors: Mary Jane B. Suarez

Abstract:

Public speaking anxiety (PSA), also known as speech anxiety, is persistently common in traditional communication classes, especially for students who learn English as a second language. This anxiety intensified when communication skills assessments moved to an online or remote mode of learning due to the perils of the COVID-19 virus. Both teachers and students experienced considerable ambiguity about how to teach and learn speaking skills effectively amidst the pandemic. Communication skills assessments like public speaking, oral presentations, and student reporting have taken on new meaning through Google Meet, Zoom, and other online platforms. Though using such technologies has paved the way for more creative ways for students to acquire and develop communication skills, the effectiveness of such assessment tools stands in question. This mixed-methods study aimed to determine the factors that affected the public speaking skills of students in a communication class, to probe the gaps in assessing the speaking skills of students attending online classes vis-à-vis the implementation of remote and blended modalities of learning, and to recommend ways to address the public speaking anxiety of students performing speaking tasks online and to bridge the assessment gaps based on the outcomes of the study, in order to achieve a smooth segue from online to on-ground instruction towards a better post-pandemic academic milieu. Using a convergent parallel design, both quantitative and qualitative data were reconciled by probing the public speaking anxiety of students and the potential assessment gaps encountered in an online English communication class under remote and blended learning. There were four phases in applying the convergent parallel design. The first phase was data collection, where both quantitative and qualitative data were collected using document reviews and focus group discussions. The second phase was data analysis, where quantitative data were treated using statistical measures, particularly frequency, percentage, and mean, with Microsoft Excel and IBM Statistical Package for Social Sciences (SPSS) version 19, and qualitative data were examined using thematic analysis. The third phase was the merging of the data analysis results to compare the desired learning competencies with the actual learning competencies of students. Finally, the fourth phase was the interpretation of the merged data, which led to the finding that a significantly high percentage of students experienced public speaking anxiety whenever they delivered speaking tasks online. Assessment gaps were also identified by comparing the desired learning competencies of the formative and alternative assessments implemented with the actual speaking performances of students, showing that students' public speaking anxiety was not properly identified and addressed.
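As a small illustration of the descriptive treatment mentioned above (frequency, percentage, and mean), the sketch below uses pandas on hypothetical anxiety ratings; it is not the study's SPSS workflow or data.

```python
import pandas as pd

# Hypothetical 5-point self-reported anxiety scores for one online speaking task.
scores = pd.Series([5, 4, 4, 5, 3, 5, 4, 2, 5, 4, 3, 5], name="anxiety")

freq = scores.value_counts().sort_index()
summary = pd.DataFrame({
    "frequency": freq,
    "percentage": (freq / len(scores) * 100).round(1),
})
print(summary)
print("mean anxiety score:", round(scores.mean(), 2))
```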

Keywords: blended learning, communication skills assessment, public speaking anxiety, speech anxiety

Procedia PDF Downloads 102
17 Towards an Environmental Knowledge System in Water Management

Authors: Mareike Dornhoefer, Madjid Fathi

Abstract:

Water supply and water quality are key problems for mankind at the moment and, due to the increasing population, in the future. Management disciplines like water, environment and quality management therefore need to interact closely to establish a high level of water quality and to guarantee water supply in all parts of the world. Groundwater remediation is one aspect of this process. From a knowledge management perspective, it is only possible to solve complex ecological or environmental problems if different factors, the expert knowledge of various stakeholders and formal regulations regarding water, waste or chemical management are interconnected in the form of a knowledge base. In general, knowledge management focuses on the processes of gathering and representing existing and new knowledge in a way that allows for inference or deduction of knowledge, e.g. in a situation where a problem solution or decision support is required. A knowledge base is not a mere data repository, but a key element in a knowledge-based system, providing or allowing for inference mechanisms to deduce further knowledge from existing facts. In consequence, this knowledge provides decision support. The given paper introduces an environmental knowledge system in water management. The proposed environmental knowledge system is part of a research concept called Green Knowledge Management. It applies semantic technologies or concepts such as ontologies or linked open data to interconnect different data and information sources about environmental aspects, in this case water quality, as well as background material enriching an established knowledge base. Examples of the aforementioned ecological or environmental factors threatening water quality are, among others, industrial pollution (e.g. leakage of chemicals), environmental changes (e.g. rise in temperature) or floods, where all kinds of waste are merged and transferred into natural water environments. Water quality is usually determined by measuring different indicators (e.g. chemical or biological), which are gathered with the help of laboratory testing, continuous monitoring equipment or other measuring processes. During all of these processes, data are gathered and stored in different databases. Meanwhile, the knowledge base needs to be established by interconnecting data from these different sources and enriching their semantics. Experts may add their knowledge or experiences of previous incidents or influencing factors. Querying or inference mechanisms are then applied for the deduction of coherence between indicators, predictive developments or environmental threats. Relevant processes or steps of action may be modeled in the form of a rule-based approach. Overall, the environmental knowledge system supports the interconnection of information and the addition of semantics to create environmental knowledge about the water environment, supply chain and quality. The proposed concept itself is a holistic approach, which links to associated disciplines like environmental and quality management. Quality indicators and quality management steps need to be considered, e.g. for the process and inference layers of the environmental knowledge system, thus integrating the aforementioned management disciplines in one water management application.
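To make the rule-based deduction step more concrete, here is a minimal self-contained sketch of forward-chaining rules over measured water quality indicators; the indicator names, thresholds and conclusions are hypothetical illustrations, not part of the proposed system.

```python
# Fact base: hypothetical measured indicators for one monitoring site.
facts = {"nitrate_mg_l": 62.0, "temperature_c": 24.5, "ph": 6.1}

rules = [
    # (rule name, condition over the fact base, conclusion added when it fires)
    ("high_nitrate",  lambda f: f["nitrate_mg_l"] > 50.0, "possible agricultural pollution"),
    ("thermal_shift", lambda f: f["temperature_c"] > 22.0, "elevated temperature stress"),
    ("acidic_water",  lambda f: f["ph"] < 6.5,             "acidification risk"),
]

def infer(facts, rules):
    """Apply each rule once to the fact base and collect the fired conclusions."""
    conclusions = []
    for name, condition, conclusion in rules:
        if condition(facts):
            conclusions.append((name, conclusion))
    return conclusions

for name, conclusion in infer(facts, rules):
    print(f"rule '{name}' fired -> {conclusion}")
```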

Keywords: water quality, environmental knowledge system, green knowledge management, semantic technologies, quality management

Procedia PDF Downloads 220
16 Different Types of Bismuth Selenide Nanostructures for Targeted Applications: Synthesis and Properties

Authors: Jana Andzane, Gunta Kunakova, Margarita Baitimirova, Mikelis Marnauza, Floriana Lombardi, Donats Erts

Abstract:

Bismuth selenide (Bi₂Se₃) is known as a narrow band gap semiconductor with pronounced thermoelectric (TE) and topological insulator (TI) properties. Unique TI properties offer exciting possibilities for fundamental research, such as observing the exciton condensate and Majorana fermions, as well as practical applications in spintronics and quantum information. In turn, the TE properties of this material can be applied in a wide range of thermoelectric applications, as well as in broadband photodetectors and near-infrared sensors. Nanostructuring of this material results in an improvement of the TI properties due to suppression of the bulk conductivity, and an enhancement of the TE properties because of increased phonon scattering at the nanoscale grains and interfaces. Regarding the TE properties, the crystallographic growth direction, as well as the orientation of the nanostructures relative to the growth substrate, plays a significant role in improving the TE performance of the nanostructured material. For instance, Bi₂Se₃ layers consisting of randomly oriented nanostructures, and/or of a combination of these with planar nanostructures, show significantly enhanced TE properties in comparison with bulk material and purely planar Bi₂Se₃ nanostructures. In this work, a catalyst-free vapour-solid deposition technique was applied for the controlled growth of different types of Bi₂Se₃ nanostructures and continuous nanostructured layers for targeted applications. For example, separated Bi₂Se₃ nanoplates, nanobelts and nanowires can be used for investigations of TI properties, while Bi₂Se₃ layers consisting of merged planar and/or randomly oriented nanostructures are useful for applications in heat-to-power conversion devices and infrared detectors. The vapour-solid deposition was carried out using a quartz tube furnace (MTI Corp) equipped with an inert gas supply and a pressure/temperature control system. Bi₂Se₃ nanostructures and nanostructured layers of the desired type were obtained by adjusting the synthesis parameters (process temperature, deposition time, pressure, carrier gas flow) and selecting the deposition substrate (glass, quartz, mica, indium-tin-oxide, graphene and carbon nanotubes). The morphology, structure and composition of the obtained Bi₂Se₃ nanostructures and nanostructured layers were inspected using SEM, AFM, EDX and HRTEM techniques, as well as a home-built experimental setup for thermoelectric measurements. It was found that introducing a temporary carrier gas flow into the process tube during the synthesis, and the choice of deposition substrate, significantly influence the nanostructure formation mechanism. The electrical, thermoelectric, and topological insulator properties of the different types of deposited Bi₂Se₃ nanostructures and nanostructured coatings are characterized as a function of thickness and discussed.

Keywords: bismuth selenide, nanostructures, topological insulator, vapour-solid deposition

Procedia PDF Downloads 231
15 Numerical Simulation of Two-Component Particle Flow in a Fluidized Bed

Authors: Wang Heng, Zhong Zhaoping, Guo Feihong, Wang Jia, Wang Xiaoyi

Abstract:

The flow of gas and particles in fluidized beds is complex and chaotic, which makes it difficult to measure and analyze experimentally. Some bed materials with poor fluidization performance are always fluidized together with a fluidizing medium, and the material and the fluidizing medium differ in many properties, such as density, size and shape. These factors make the dynamic process more complex and the experimental research more limited. Numerical simulation is an efficient way to describe the process of gas-solid flow in a fluidized bed. One of the most popular numerical simulation methods is CFD-DEM, i.e., the coupled computational fluid dynamics-discrete element method. In most studies, the shapes of particles are simplified as spheres. Although sphere-shaped particles keep the particle calculations uncomplicated, the effects of different shapes are disregarded. In practical applications, however, two-component systems in fluidized beds contain both spherical and non-spherical particles; therefore, the two-component flow of spherical and non-spherical particles needs to be studied. In this paper, the mixed flow of molded biomass particles and quartz in a fluidized bed was simulated. The integrated model was built on an Eulerian-Lagrangian approach, which was improved to suit the non-spherical particles. The cylinder-shaped particles were constructed differently in the two numerical methods. In the CFD part, each cylinder-shaped particle was constructed as an agglomerate of small fictitious particles, meaning that the small fictitious particles are clustered but not combined with each other. The diameter of a fictitious particle d_fic and its solid volume fraction inside a cylinder-shaped particle α_fic, called the fictitious volume fraction, are introduced to modify the drag coefficient β, together with the volume fraction of the cylinder-shaped particles α_cld and of the sphere-shaped particles α_sph. In a computational cell, the void fraction ε can then be expressed as ε = 1 − α_cld·α_fic − α_sph. The Ergun equation and the Wen and Yu equation were used to calculate β. In the DEM part, cylinder-shaped particles were built by the multi-sphere method, in which small sphere elements are merged with each other. A soft-sphere model was used to obtain the contact forces between particles, and the total contact force on a cylinder-shaped particle was calculated as the sum of the forces on its small sphere elements. The model (size = 1 × 0.15 × 0.032 mm³) contained 420,000 sphere-shaped particles (diameter = 0.8 mm, density = 1350 kg/m³) and 60 cylinder-shaped particles (diameter = 10 mm, length = 10 mm, density = 2650 kg/m³). Each cylinder-shaped particle was constructed from 2072 small sphere-shaped particles (d = 0.8 mm) in the CFD mesh and 768 sphere-shaped particles (d = 3 mm) in the DEM mesh. The lengths of the CFD and DEM cells are 1 mm and 2 mm, respectively. The superficial gas velocity was varied between models as 1.0 m/s, 1.5 m/s, and 2.0 m/s. The simulation results were compared with the experimental results. The particles moved regularly in a fountain-like pattern, and the effect of the superficial gas velocity on the cylinder-shaped particles was stronger than that on the sphere-shaped particles. The results show that the present work provides an effective approach to simulating the flow of two-component particle mixtures.
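To make the drag closure concrete, the sketch below evaluates the cell void fraction from the expression given above and a gas-solid momentum exchange coefficient using the standard Ergun / Wen and Yu switch (the common Gidaspow form); the volume fractions and gas properties are illustrative assumptions, and the authors' shape-modified correlation may differ in detail.

```python
def void_fraction(alpha_cld, alpha_fic, alpha_sph):
    """Cell void fraction as given in the abstract: eps = 1 - a_cld*a_fic - a_sph."""
    return 1.0 - alpha_cld * alpha_fic - alpha_sph

def drag_coefficient(eps, d_p, rho_g, mu_g, slip):
    """Gas-solid momentum exchange coefficient beta (standard Gidaspow switch).

    Ergun equation for dense cells (eps < 0.8), Wen & Yu for dilute cells.
    """
    if eps < 0.8:  # Ergun (1952)
        return (150.0 * (1.0 - eps) ** 2 * mu_g / (eps * d_p ** 2)
                + 1.75 * (1.0 - eps) * rho_g * slip / d_p)
    # Wen & Yu (1966)
    re = rho_g * eps * d_p * slip / mu_g
    cd = 24.0 / re * (1.0 + 0.15 * re ** 0.687) if re < 1000.0 else 0.44
    return 0.75 * cd * eps * (1.0 - eps) * rho_g * slip / d_p * eps ** (-2.65)

# Hypothetical cell: quartz spheres plus fictitious particles of one cylinder.
eps = void_fraction(alpha_cld=0.05, alpha_fic=0.7, alpha_sph=0.45)
beta = drag_coefficient(eps, d_p=0.8e-3, rho_g=1.2, mu_g=1.8e-5, slip=1.5)
print(f"void fraction = {eps:.3f}, beta = {beta:.1f} kg/(m^3 s)")
```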

Keywords: computational fluid dynamics, discrete element method, fluidized bed, multiphase flow

Procedia PDF Downloads 326
14 Hybrid Living: Emerging Out of the Crises and Divisions

Authors: Yiorgos Hadjichristou

Abstract:

The paper will focus on the hybrid living typologies which are brought about by the Global Crisis. Mixing the generations and groups of people, mingling the functions of living with working and socializing, and merging the act of living in synergy with the urban realm and its constituent elements will be the springboard for proposing an essential sustainable housing approach and the respective urban development. The thematic will be based on methodologies developed both in the academic, educational environment, including the participation of students' research, and in the practical side of architecture, including case studies executed by the author on the island of Cyprus. Both paths of the research will deal with the explorative understanding of hybrid ways of living, testing the limits of their autonomy. The evolution of living typologies into substantial hybrid entities will deal with the understanding of new ways of living which include, among others: the re-introduction of natural phenomena, the accommodation of the activity of work and services in the living realm, the interchange of public and private, and injections of communal events into individual living territories. The issues and the binary questions raised by what is natural and artificial, what is private and what public, what is ephemeral and what permanent, and all the in-between conditions are eloquently traced in everyday life on the island. Additionally, given the situation of Cyprus, with the prominent scar of the dividing 'Green line' and the 'ghost city' of Famagusta waiting to be resurrected, the conventional way of understanding the limits and the definitions of properties is irreversibly shaken. The situation is further aggravated by the unprecedented phenomenon of the crisis on the island. All these observations set the premises for reexamining urban development and the respective sustainable housing in a synergy where their characteristics start exchanging positions, merge into each other, contemporarily emerge and vanish, changing from permanent to ephemeral. This fluidity of conditions will attempt to render a future of the built and unbuilt realm where the main focus will be redirected to the human and the social. Weather and social ritual scenographies, together with 'spontaneous urban landscapes' of 'momentary relationships', will suggest a recipe for emerging urban environments and sustainable living. Thus, the paper will aim at opening a discourse on the future of sustainable living merged in a sustainable urban development, in relation to the imminent solution of the division of the island, where the issue of property became the main obstacle to be overcome.

Keywords: social ritual scenographies, spontaneous urban landscapes, substantial hybrid entities, re-introduction of natural phenomena

Procedia PDF Downloads 263
13 Low-Temperature Poly-Si Nanowire Junctionless Thin Film Transistors with Nickel Silicide

Authors: Yu-Hsien Lin, Yu-Ru Lin, Yung-Chun Wu

Abstract:

This work demonstrates ultra-thin poly-Si (polycrystalline silicon) nanowire junctionless thin-film transistors (NWs JL-TFT) with nickel silicide contacts. For the nickel silicide film, a two-step annealing process is designed to form an ultra-thin, uniform and low-sheet-resistance (Rs) Ni silicide film. The NWs JL-TFT with nickel silicide contacts exhibits good electrical properties, including a high driving current (>10⁷ Å), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In addition, this work compares the electrical characteristics of the NWs JL-TFT with nickel silicide and non-silicide contacts. Nickel silicide techniques are widely used for high-performance devices as devices scale down, because of the source/drain sheet resistance issue; the self-aligned silicide (salicide) technique is therefore applied to reduce the series resistance of the device. Nickel silicide has several advantages, including a low-temperature process, low silicon consumption, no bridging failure, smaller mechanical stress, and smaller contact resistance. The junctionless thin-film transistor (JL-TFT) is fabricated simply by heavily doping the channel and source/drain (S/D) regions simultaneously. Owing to this special doping profile, the JL-TFT has some advantages, such as a lower thermal budget, which makes integration with high-k/metal-gate stacks easier than for conventional MOSFETs (Metal Oxide Semiconductor Field-Effect Transistors), a longer effective channel length than conventional MOSFETs, and the avoidance of complicated source/drain engineering. To solve the turn-off problem of the JL-TFT, an ultra-thin body (UTB) structure is needed to fully deplete the channel region in the off-state. On the other hand, the drive current (Iᴅ) declines as transistor features are scaled. Therefore, this work demonstrates ultra-thin poly-Si nanowire junctionless thin-film transistors with nickel silicide contacts. This work investigates the low-temperature formation of a nickel silicide layer by physical vapor deposition (PVD) of a 15 nm Ni layer on the poly-Si substrate. Notably, a two-step annealing process is designed to form an ultra-thin, uniform and low-sheet-resistance Ni silicide film. The first annealing step promoted Ni diffusion through a thin interfacial amorphous layer, after which the unreacted metal was lifted off. The second annealing step lowered the sheet resistance and firmly merged the silicide phase. The resulting NWs JL-TFT with nickel silicide contacts reveals a high driving current (>10⁷ Å), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In short, the NWs JL-TFT with nickel silicide contacts exhibits competitive short-channel behavior and improved drive current.

Keywords: poly-Si, nanowire, junctionless, thin-film transistors, nickel silicide

Procedia PDF Downloads 237
12 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction

Authors: Bruce Wrightsman

Abstract:

Construction and design are inexorably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This results in the separation of two agencies: the building envelope (skin) separate from the structure. From a material performance standpoint, however, this reliance on additional materials is not an efficient strategy for the building. The merits of traditional platform framing are well known. However, its enormous effectiveness within wood-framed construction has seldom led to serious questioning and challenges in defining what it means to build. There are several downsides of using this method, which are less widely discussed. The first and perhaps biggest downside is waste. Second, its reliance on wood assemblies forming walls, floors and roofs conventionally nailed together through simple plate surfaces is structurally inefficient; it requires additional material in the form of plates, blocking, nailers, etc., for stability, which only adds to the material waste. In contrast, when we look back at the history of wood construction in the airplane and boat manufacturing industries, we see a significant transformation in the relationship of structure with skin. Boat construction transformed from the indigenous wood practices of birch-bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the evolution of the merged assemblies that drive the industry today. In 1911, the Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane, called the Cigare. The wing and tail assemblies consisted of a thin, lightweight, and often fabric skin stretched tightly over a wood frame. This stressed skin has evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces, providing even greater strength with less material. The monocoque, which translates to 'mono or single shell,' is a structural system that supports loads and transfers them through an external enclosure system. Monocoques have largely existed outside the domain of architecture. However, this uniting of divergent systems has been demonstrated to be lighter, utilizing less material than traditional wood building practices. This paper will examine the role monocoque systems have played in the history of wood construction through the lineage of the boat and airplane building industries, and their design potential for wood building systems in architecture, through a case-study examination of a unique wood construction approach. The innovative approach uses a wood monocoque system composed of interlocking small wood members to create thin shell assemblies for the walls, roof and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster deeper, more honest discourse regarding the limitations and impact of traditional wood framing.

Keywords: wood building systems, material histories, monocoque systems, construction waste

Procedia PDF Downloads 78
11 Auto Surgical-Emissive Hand

Authors: Abhit Kumar

Abstract:

The world is full of master-slave telemanipulators in which the doctor masters the console and the surgical arm performs the operations; in other words, these robots are passive robots. What the world needs to recognize is that, in using these passive robots, we still require doctors to operate the consoles, so the concept of robotics is not fully utilized; hence the focus should be on active robots. The Auto Surgical-Emissive Hand (AS-EH) uses the concept of active robotics: this anthropomorphic hand focuses on autonomous surgical, emissive and scanning operation, enabled with a three-way emission of a laser beam, icy steam (-5°C < T < 5°C) and a thermal imaging camera (TIC) embedded in the palm of the anthropomorphic hand and structured in the form of a three-way disc. The fingers of the AS-EH will have tactile, force and pressure sensors embedded in them so that force, pressure and physical contact with the external subject can be maintained. Our main focus, however, is on the concept of 'emission'. The question arises of how three unrelated methods will work together, merged in a single programmed hand; the three methods will be utilized according to the needs of the external subject. The laser will be emitted via a pin-sized outlet; the radiation is channelled through a thin internal channel that connects to the palm of the surgical hand and leads to the pin-sized outlet. The laser emits radiation sufficient to cut open the skin for the removal of metal scrap or any other foreign material while the patient is under anesthesia, keeping the complexity of the operation very low. At the same time, the TIC, fitted with an accurate temperature compensator (ATC), will provide a real-time feed of the surgery in the form of a heat image, giving us the chance to analyze the situation; the ATC will also help us detect elevated body temperature while the operation proceeds. The thermal imaging camera is embedded internally in the AS-EH and connected to external real-time software to provide live feedback. The icy steam will provide a cooling effect before and after the operation. To understand this concept, consider a simple example: if a finger remains in icy water for a long time, the blood flow stops and the area becomes numb and isolated, so even pinching it produces no sensation, because the nerve impulse is not coordinated with the brain and the sensory receptors are not activated, which means no sense of touch is observed. Using the same principle, icy steam at a temperature below 273 K can be emitted via a pin-sized hole onto the area of concern, frosting it so that the operation can be performed; this steam can also be used to desensitize the pain while the operation is in progress. The mathematical calculations, algorithms and the programming of the working and movement of this hand will be installed in the system prior to the procedure. Since the AS-EH is a programmable hand, it comes with limitations; hence, this robot will perform surgical processes of low complexity only.

Keywords: active robots, algorithm, emission, icy steam, TIC, laser

Procedia PDF Downloads 356
10 Voices of the Students From a Fully Inclusive Classroom

Authors: Ashwini Tiwari

Abstract:

Introduction: Inclusive education for all is a multifaceted approach that requires systems thinking and the promotion of a "Culture of Inclusion." This can only be achieved through the collaboration of multiple stakeholders at the community, regional, state, national, and international levels. Researchers have found that effective practices used in inclusive general classrooms are beneficial to all students, including students with disabilities, those who experience challenges academically and socially, and students without disabilities as well. To date, however, no statistically significant effects on the academic performance of students without disabilities in the presence of students with disabilities have been revealed. Therefore, opponents of inclusive education practices who base their position solely on beliefs regarding detrimental effects on students without disabilities appear to hold unfounded perceptions. This qualitative case study examines students' perspectives and beliefs about inclusive education in a middle school in South Texas. More specifically, this study examined students' understanding of how inclusive education practices intersect with the classroom community. The data were collected from students attending fully inclusive classrooms through interviews and focus groups. The findings suggest that peer integration and friendships built during classes are an essential part of schooling for both disabled and non-disabled students. Research Methodology: This qualitative case study used observations and focus group interviews with 12 middle school students attending an inclusive classroom at a public school located in South Texas. The participants of this study include eight females and five males. All the study participants attend a fully inclusive middle school with special needs peers, and five of the students had disabilities. The focus groups and interviews were conducted throughout the entire academic year, with on average one focus group and one observation each month. The data were analyzed using the constant comparative method. Data from the focus groups and observations were continuously compared for emerging codes during the data collection process. Codes were further refined and merged, and themes emerged from the interpretation at the end of the data analysis process. Findings and discussion: This study was conducted to examine disabled and non-disabled students' perspectives on the inclusion of disabled students. The study revealed that non-disabled students generally have positive attitudes toward their disabled peers. The students in the study did not perceive inclusion as a special provision; rather, they perceived inclusion as a form of instructional practice. Most of the participants spoke about the multiple benefits of inclusion and emphasized that peer integration and friendships built during classes are an essential part of their schooling. Students believed that it was part of their responsibility to assist their peers in whatever ways possible. This finding is in line with the literature indicating that the personality of children with disabilities is not determined by their disability but rather by their social environment and its interaction with the child. Interactions with peers are one of the most important socio-cultural conditions for the development of children with disabilities.

Keywords: inclusion, special education, k-12 education, student voices

Procedia PDF Downloads 80
9 3D Design of Orthotic Braces and Casts in Medical Applications Using Microsoft Kinect Sensor

Authors: Sanjana S. Mallya, Roshan Arvind Sivakumar

Abstract:

Orthotics is the branch of medicine that deals with the provision and use of artificial casts or braces to alter the biomechanical structure of the limb and provide support for it. Custom-made orthoses provide more comfort and can correct issues better than those available over the counter. However, they are expensive and require intricate modelling of the limb. Traditional methods of modelling involve creating a plaster of Paris mould of the limb. Lately, CAD/CAM and 3D printing processes have improved the accuracy and reduced the production time. Ordinarily, digital cameras are used to capture the features of the limb from different views to create a 3D model. We propose a system to model the limb using the Microsoft Kinect2 sensor. The Kinect can capture RGB and depth frames simultaneously at up to 30 fps with sufficient accuracy. The region of interest is captured from three views, each shifted by 90 degrees. The RGB and depth data are fused into a single RGB-D frame. The resolution of the RGB frame is 1920 px × 1080 px, while the resolution of the depth frame is 512 px × 424 px. As the resolutions of the frames are not equal, RGB pixels are mapped onto the depth pixels to make sure data are not lost even though the depth resolution is lower. The resulting RGB-D frames are collected and, using the depth coordinates, a three-dimensional point cloud is generated for each view of the Kinect sensor. A common reference system was developed to merge the individual point clouds from the Kinect sensors. The reference system consists of 8 coloured cubes connected by rods to form a skeleton cube with the coloured cubes at the corners. For each Kinect, the region of interest is the square formed by the centres of the four cubes facing the Kinect. The point clouds are merged by considering one of the cubes as the origin of the reference system. Depending on the relative distance from each cube, the three-dimensional coordinate points from each point cloud are aligned to the reference frame to give a complete point cloud. The RGB data are used to correct any errors in the depth data of the point cloud. A triangular mesh is generated from the point cloud by applying Delaunay triangulation, which produces the rough surface of the limb; this technique forms an approximation of the surface of the limb. The mesh is then smoothened to obtain a smooth outer layer and an accurate model of the limb. The model of the limb is used as a base for designing the custom orthotic brace or cast. It is transferred to a CAD/CAM design file for the design of the brace above the surface of the limb. The proposed system would be more cost-effective than current systems that use MRI or CT scans for generating 3D models, would be quicker than traditional plaster of Paris cast modelling, and has a low overall setup time. Preliminary results indicate that the accuracy of the Kinect2 is satisfactory for performing such modelling.
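The sketch below illustrates the main geometric steps described above: back-projecting a depth frame into a 3D point cloud, rigidly moving each view into a common reference frame, and triangulating a rough surface. It is a minimal stand-alone example with made-up intrinsics, extrinsics and a placeholder depth frame, plus a cylindrical parametrization for the Delaunay step; none of these come from the authors' pipeline.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.spatial.transform import Rotation

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth frame (metres) into 3D points in camera coordinates."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                     # drop pixels with no depth reading

def to_reference_frame(points, yaw_deg, translation):
    """Rigidly move one view's cloud into the common cube-based reference frame."""
    R = Rotation.from_euler("y", yaw_deg, degrees=True).as_matrix()
    return points @ R.T + translation

# Placeholder 512x424 depth frame and made-up intrinsics/extrinsics for 3 views.
depth = np.full((424, 512), 0.8)
views = [(0.0, [0.0, 0.0, 0.0]), (90.0, [0.8, 0.0, 0.8]), (180.0, [0.0, 0.0, 1.6])]
cloud = np.vstack([to_reference_frame(depth_to_points(depth, 365.0, 365.0, 256.0, 212.0),
                                      yaw, np.array(t)) for yaw, t in views])
cloud = cloud[::40]                               # downsample for a quick demo

# Rough surface approximation: 2D Delaunay in a cylindrical (angle, height) chart.
angle = np.arctan2(cloud[:, 0], cloud[:, 2])
tri = Delaunay(np.column_stack([angle, cloud[:, 1]]))
print("points:", len(cloud), "triangles:", len(tri.simplices))
```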

Keywords: 3D scanning, mesh generation, Microsoft Kinect, orthotics, registration

Procedia PDF Downloads 190
8 Designating and Evaluating a Healthy Eating Model at the Workplace: A Practical Strategy for Preventing Non-Communicable Diseases in Aging

Authors: Mahnaz Khalafehnilsaz, Rozina Rahnama

Abstract:

Introduction: The aging process has been linked to a wide range of non-communicable diseases that cause a loss of health-related quality of life. This process can be worsened if adults do not follow an active and healthy lifestyle, especially in the workplace. This setting may not only create a sedentary lifestyle but, in the long term, lead to obesity and overweight and to unhealthy, inactive aging. In addition, eating habits are known to be associated with active aging. It is therefore valuable to know people's eating patterns at work in order to detect and prevent diseases in the coming years. This study aimed to design and test a model to improve eating habits among employees at an industrial complex as a practical strategy. Material and method: The present research was a mixed-methods study with an exploratory sequential design, carried out in two phases, qualitative and quantitative, in 2018. In the first step, participants were selected by purposive sampling (n=34) to ensure representation of different job roles, hours worked, gender, grade, and age groups, and semi-structured interviews were used. All interviews were conducted in the workplace and were audio recorded, transcribed verbatim, and analyzed using the Strauss and Corbin approach. The interview question asked participants about their experiences of eating at work and how these nutritional habits could affect their health in old age. In total, 1,500 basic codes were generated at the open coding step; these were merged to create 17 classes and six concepts, and a conceptual model was designed. The second phase was conducted as a cross-sectional study. After verification of the research tool, the developed questionnaire was examined in a group of employees; to test the conceptual model of the study, a total of 500 subjects were included in the psychometric evaluation. Findings: Six main concepts were identified: 1. undesirable control of stress, 2. lack of eating knowledge, 3. effect of the social network, 4. lack of motivation for healthy habits, 5. environmental-organizational intensifiers, and 6. unhealthy eating behaviors. The core concept was "loss of motivation to perform preventive behavior." The main constructs of the motivation-based model for the promotion of eating habits are modification and promotion of eating habits, increased knowledge and competency, conveying a culture of healthy nutrition behavior and behavioral modelling especially for older age, and desirable stress control. Conclusion: A key factor in unhealthy eating behavior at the workplace is a lack of motivation, which can be an obstacle to carrying out preventive behaviors at work and can affect the healthy aging process in the long term. The motivation-based model could be considered an effective conceptual framework and instrument for designing interventions that promote healthy and active aging.

Keywords: aging, eating habits, older age, workplace

Procedia PDF Downloads 101
7 The Role of Metaheuristic Approaches in Engineering Problems

Authors: Ferzat Anka

Abstract:

Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and use resources inefficiently. In particular, different approaches may be required to solve the complex and global engineering problems frequently encountered in real life; the bigger and more complex a problem, the harder it is to solve. Such problems are called NP-hard (nondeterministic polynomial time hard) in the literature. The main reasons for recommending metaheuristic algorithms for such problems are their use of simple concepts, simple mathematical equations and structures, and derivative-free mechanisms, their ability to avoid local optima, and their fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications. Thanks to these features, they can easily be embedded in many hardware devices and used in trending application areas such as IoT, big data, and parallel architectures. In short, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study focuses on a new metaheuristic method merged with a chaotic approach: based on chaos theory, it helps the underlying algorithm improve population diversity and converge faster. The approach builds on the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. ChOA identifies four types of chimpanzees, attacker, barrier, chaser, and driver, and proposes a suitable mathematical model for them based on the varying intelligence and sexual motivation of chimpanzees. However, ChOA suffers from a slow convergence rate and difficulty escaping local optimum traps when solving high-dimensional problems, and the strategies used by the original algorithm and some of its variants to overcome these weaknesses have proven insufficient. Therefore, this study describes a newly expanded variant. In the algorithm, called Ex-ChOA, hybrid models are proposed for the position updates of search agents, and a dynamic switching mechanism is provided for the transition phases. This flexible structure addresses the slow convergence of ChOA and improves its accuracy on multidimensional problems, with the aim of solving global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy of ChOA and addresses its slow convergence; 2) it proposes new hybrid movement-strategy models for the position updates of search agents; 3) it achieves success in solving global, complex, and constrained problems; and 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on a total of 8 benchmark functions as well as 2 classical constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA); in addition, the Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison since its working model is similar. The obtained results show that the proposed algorithm performs better than or on par with the compared algorithms.
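
For illustration only, here is a minimal Python sketch of a ChOA-style optimizer in which a logistic chaotic map supplies the chaotic coefficient used in the position update. It loosely follows the commonly published ChOA update scheme (four leading chimps and a coefficient f decaying from 2.5 to 0) rather than the Ex-ChOA hybrid models or dynamic switching mechanism described above; the objective function and all parameter values are placeholders.

```python
import numpy as np

def sphere(x):
    """Toy objective: minimize the sum of squares."""
    return float(np.sum(x ** 2))

def choa_sketch(obj, dim=10, n_chimps=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    """ChOA-style search with a logistic chaotic map; illustrative, not the authors' Ex-ChOA."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lb, ub, size=(n_chimps, dim))
    chaos = 0.7                                        # state of the logistic chaotic map
    for t in range(iters):
        fitness = np.array([obj(p) for p in pos])
        leaders = pos[np.argsort(fitness)[:4]]         # attacker, barrier, chaser, driver
        f = 2.5 - 2.5 * t / iters                      # f decays linearly from 2.5 to 0
        chaos = 4.0 * chaos * (1.0 - chaos)            # logistic map drives the coefficient m
        new_pos = np.empty_like(pos)
        for i in range(n_chimps):
            estimates = []
            for leader in leaders:                     # one position estimate per leading chimp
                a = 2.0 * f * rng.random(dim) - f
                c = 2.0 * rng.random(dim)
                d = np.abs(c * leader - chaos * pos[i])
                estimates.append(leader - a * d)
            new_pos[i] = np.clip(np.mean(estimates, axis=0), lb, ub)
        pos = new_pos
    best = min(pos, key=obj)
    return best, obj(best)

best_x, best_f = choa_sketch(sphere)
print("best objective value:", best_f)
```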

Keywords: optimization, metaheuristic, chimp optimization algorithm, constrained engineering problems

Procedia PDF Downloads 77
6 The Effect of Students’ Social and Scholastic Background and Environmental Impact on Shaping Their Pattern of Digital Learning in Academia: A Pre- and Post-COVID Comparative View

Authors: Nitza Davidovitch, Yael Yossel-Eisenbach

Abstract:

The purpose of the study was to examine whether there was a change in the shaping of undergraduate students' digitally oriented study pattern in the pre-COVID period (2016-2017) versus the post-COVID period (2022-2023), as affected by three sets of factors: social background characteristics, high school background, and academic background characteristics. These two time points were characterized by dramatic changes in teaching and learning at institutions of higher education. The data were collected via cross-sectional surveys at the two time points, in the 2016-2017 academic year (N=443) and in the 2022-2023 academic year (N=326). The questionnaire was distributed on social media and included questions on demographic background characteristics, previous studies in high school and present academic studies, and learning and reading habits. Method of analysis: A. Descriptive statistical analysis. B. Mean comparison tests were conducted to analyze variations in the mean score of the digitally oriented learning pattern variable at the two time points (pre- and post-COVID) in relation to each of the independent variables. C. Analysis of variance was performed to test the main effects and interactions. D. Linear regression was applied to examine the combined effect of the independent variables on the shaping of students' digitally oriented learning habits. The analysis includes four models, in all of which the dependent variable is students' perception of digitally oriented learning: the first model included social background variables; the second added scholastic background; the third added the academic background variables; and the fourth included all the independent variables together with the period variable (pre- and post-COVID). E. Factor analysis was performed using the principal component method with varimax rotation; the variables were constructed as a weighted mean of all the relevant statements merged to form a single variable denoting a shared content world. The research findings indicate a significant rise in students' perceptions of digitally oriented learning in the post-COVID period. From a gender perspective, the impact of COVID on shaping a digital learning pattern was much more significant for female students. The effect of socioeconomic status is eliminated when controlling for the period, while the student's employment affects the learning pattern more than all other variables. It may be assumed that the student's work pattern mediates effects related to the convenience offered by digital learning in terms of distance and time. The significant effect of scholastic background on shaping students' digital learning patterns remained stable, even when controlling for all explanatory variables. The advantage that universities had over colleges in shaping a digital learning pattern in the pre-COVID period dissipated; after COVID, no institutional differences are evident in the shaping of students' digital learning patterns. Finally, the study shows that the period has a significant independent effect on shaping students' digital learning patterns when controlling for the explanatory variables.
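
The block-wise regression described in step D can be sketched as follows. This is an illustrative Python/statsmodels example with entirely hypothetical variable and file names, not the authors' dataset or final model specification; it simply shows predictors being added block by block, ending with the period indicator.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled survey data from both waves; all column names are placeholders.
df = pd.read_csv("students_2016_2023.csv")

blocks = [
    "gender + parental_education + ses",           # Model 1: social background
    "+ hs_track + hs_math_units",                  # Model 2: + high-school background
    "+ institution_type + field_of_study + year",  # Model 3: + academic background
    "+ post_covid",                                # Model 4: + period (0 = pre, 1 = post)
]

formula = "digital_learning_score ~"
for i, block in enumerate(blocks, start=1):
    formula += f" {block}"                         # grow the formula one block at a time
    model = smf.ols(formula, data=df).fit()
    print(f"Model {i}: adj. R^2 = {model.rsquared_adj:.3f}")
    if i == len(blocks):
        print(model.summary())                     # coefficients of the full model
```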

Keywords: learning pattern, COVID, socioeconomic status, digital learning

Procedia PDF Downloads 62
5 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make it more difficult for the US to stay globally competitive without a shift in focus toward how science is taught in US classes. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze, and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is needed to design resources that enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students' representational competence, the visualization skills necessary to process these science representations must first be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students' visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional near-infrared spectroscopy (fNIR) data. The data were previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of STEM education scholars from three US universities (NSF award 1540888), using mental rotation tasks to assess student visual literacy. Hemodynamic response data from fNIRSoft were exported as an Excel file, with 80 each of the 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were stored in an Excel workbook separated by participant (ID), containing information for both types of tasks. After converting strings to numbers for analysis, the spreadsheets with measurement data and complexity data were uploaded to RapidMiner's TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of the ANN's predictions were accurate. The ANN determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, a region important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements and an ANN for analysis, educators and curriculum designers will be able to create targeted classroom resources that help improve student visuospatial literacy and, therefore, science literacy.
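
The abstract describes a model built in RapidMiner Studio; as a rough open-source analogue, the sketch below fits a gradient boosted trees classifier of the same reported size (140 trees, maximum depth 7) using scikit-learn. The file name and column names are placeholders, and the real feature set (optode signals, response time, problem number, task complexity) would come from the merged fNIR and complexity tables.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder for the merged measurement + complexity table described in the abstract.
df = pd.read_csv("fnir_merged.csv")
X = df.drop(columns=["rotation_correct"])   # optode channels, response time, problem number, ...
y = df["rotation_correct"]                  # 1 = successful mental rotation, 0 = unsuccessful

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# 140 trees with maximum depth 7, mirroring the model size reported in the abstract.
gbt = GradientBoostingClassifier(n_estimators=140, max_depth=7, random_state=0)
gbt.fit(X_tr, y_tr)
print("hold-out accuracy:", accuracy_score(y_te, gbt.predict(X_te)))

# Feature importances indicate which predictors drive successful mental rotation.
top = sorted(zip(X.columns, gbt.feature_importances_), key=lambda t: -t[1])[:5]
for name, importance in top:
    print(f"{name}: {importance:.3f}")
```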

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 119
4 Single Cell Analysis of Circulating Monocytes in Prostate Cancer Patients

Authors: Leander Van Neste, Kirk Wojno

Abstract:

The innate immune system reacts to foreign insult in several unique ways, one of which is phagocytosis of perceived threats such as cancer, bacteria, and viruses. The goal of this study was to look for evidence of phagocytosed RNA from tumor cells in circulating monocytes. While all monocytes possess phagocytic capabilities, the non-classical CD14+/FCGR3A+ monocytes and the intermediate CD14++/FCGR3A+ monocytes most actively remove threatening ‘external’ cellular materials. Purified CD14-positive monocyte samples from fourteen patients recently diagnosed with clinically localized prostate cancer (PCa) were investigated by single-cell RNA sequencing using the 10X Genomics protocol followed by paired-end sequencing on Illumina’s NovaSeq. Control samples were processed in the same way: one patient who underwent biopsy but was found not to harbor prostate cancer (benign), three young, healthy men, and three men previously diagnosed with prostate cancer who had recently undergone (curative) radical prostatectomy (post-RP). Sequencing data were mapped using 10X Genomics’ CellRanger software, and viable cells were subsequently identified using CellBender, removing technical artifacts such as doublets and non-cellular RNA. Next, data analysis was performed in R using the Seurat package. Because the main goal was to identify differences between PCa patients and ‘control’ patients, rather than to explore differences between individual subjects, the individual Seurat objects of all 21 patients were merged into one Seurat object, per Seurat’s recommendation. Finally, the single-cell dataset was normalized as a whole prior to further analysis. Cell identity was assessed using the SingleR and celldex packages. The Monaco Immune Data was selected as the reference dataset, consisting of bulk RNA-seq data of sorted human immune cells. The Monaco classification was supplemented with normalized PCa data obtained from The Cancer Genome Atlas (TCGA), which consists of bulk RNA sequencing data from 499 prostate tumor tissues (including 1 metastatic) and 52 (adjacent) normal prostate tissues. SingleR was subsequently run on the combined immune cell and PCa datasets. As expected, the vast majority of cells were labeled as having a monocytic origin (~90%), with the most noticeable difference being the larger number of intermediate monocytes in the PCa patients (13.6% versus 7.1%; p<.001). In men harboring PCa, 0.60% of all purified monocytes were classified as harboring PCa signals when the TCGA data were included. This was 3-fold, 7.5-fold, and 4-fold higher compared to post-RP, benign, and young men, respectively (all p<.001). In addition, with 7.91%, the number of unclassified cells, i.e., cells with pruned labels due to high uncertainty of the assigned label, was also highest in men with PCa, compared to 3.51%, 2.67%, and 5.51% of cells in post-RP, benign, and young men, respectively (all p<.001). It can be postulated that actively phagocytosing cells are the hardest to classify due to their dual immune-cell and foreign-cell nature. Hence, the higher number of unclassified cells and intermediate monocytes in PCa patients might reflect higher phagocytic activity due to tumor burden. This also illustrates that the small fraction (~1%) of circulating peripheral blood monocytes that have interacted with tumor cells might still possess detectable phagocytosed tumor RNA.
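
The analysis described above was performed in R with Seurat, SingleR, and celldex; purely as an illustration of the merge-then-normalize step, here is a rough Python/scanpy analogue with placeholder file paths. It pools per-patient CellRanger outputs into a single object and normalizes the pooled dataset as a whole, but it does not reproduce the CellBender cleanup or the SingleR/celldex annotation.

```python
import anndata as ad
import scanpy as sc

# Placeholder paths to per-patient CellRanger filtered count matrices.
samples = {
    "PCa_01": "data/PCa_01_filtered_feature_bc_matrix.h5",
    "benign_01": "data/benign_01_filtered_feature_bc_matrix.h5",
    "postRP_01": "data/postRP_01_filtered_feature_bc_matrix.h5",
}

adatas = []
for label, path in samples.items():
    a = sc.read_10x_h5(path)            # load one patient's counts
    a.var_names_make_unique()
    a.obs["sample"] = label             # keep the patient / group label
    adatas.append(a)

# Pool all patients into one object, mirroring the "merge into one object" step.
merged = ad.concat(adatas, join="outer", label="batch")

sc.pp.filter_cells(merged, min_genes=200)       # basic quality filter
sc.pp.normalize_total(merged, target_sum=1e4)   # normalize the merged dataset as a whole
sc.pp.log1p(merged)
```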

Keywords: circulating monocytes, phagocytic cells, prostate cancer, tumor immune response

Procedia PDF Downloads 162
3 Trends in Blood Pressure Control and Associated Risk Factors Among US Adults with Hypertension from 2013 to 2020: Insights from NHANES Data

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

Controlling blood pressure is critical to reducing the risk of cardiovascular disease. However, BP control rates (systolic BP < 140 mm Hg and diastolic BP < 90 mm Hg) have declined since 2013, warranting further analysis to identify contributing factors and potential interventions. This study investigates the factors associated with the decline in blood pressure (BP) control among U.S. adults with hypertension over the past decade. Data from the U.S. National Health and Nutrition Examination Survey (NHANES) were used to assess BP control trends between 2013 and 2020. The analysis included 18,927 U.S. adults with hypertension aged 18 years and older who completed study interviews and examinations. The dataset, obtained from the cardioStatsUSA and RNHANES R packages, was merged based on survey IDs. Key variables analyzed included demographic factors, lifestyle behaviors, hypertension status, BMI, comorbidities, antihypertensive medication use, and cardiovascular disease history. The prevalence of BP control declined from 78.0% in 2013-2014 to 71.6% in 2017-2020. Non-Hispanic Whites had the highest BP control prevalence (33.6% in 2013-2014), but this declined to 26.5% by 2017-2020. In contrast, BP control among Non-Hispanic Blacks increased slightly. Younger adults (aged 18-44) exhibited better BP control, but control rates declined over time. Obesity prevalence increased, contributing to poorer BP control. Antihypertensive medication use rose from 26.1% to 29.2% across the study period. Lifestyle behaviors, such as smoking and diet, also affected BP control, with nonsmokers and those with better diets showing higher control rates. Key findings indicate significant disparities in blood pressure control across racial/ethnic groups. Non-Hispanic Black participants had consistently higher odds (OR ranging from 1.84 to 2.33) of poor blood pressure control compared to Non-Hispanic Whites, while odds among Non-Hispanic Asians varied by cycle. Younger age groups (18-44 and 45-64) showed significantly lower odds of poor blood pressure control compared to those aged 75+, highlighting better control in younger populations. Men had consistently higher odds of poor control compared to women, though this disparity slightly decreased in 2017-2020. Medical comorbidities such as diabetes and chronic kidney disease were associated with significantly higher odds of poor blood pressure control across all cycles. Participants with chronic kidney disease had particularly elevated odds (OR=5.54 in 2015-2016), underscoring the challenge of managing hypertension in these populations. Antihypertensive medication use was also linked with higher odds of poor control, suggesting potential difficulties in achieving target blood pressure despite treatment. Lifestyle factors such as alcohol consumption and physical activity showed no consistent association with blood pressure control. However, dietary quality appeared protective, with those reporting an excellent diet showing lower odds (OR=0.64) of poor control in the overall sample. Increased BMI was associated with higher odds of poor blood pressure control, particularly in the 30-35 and 35+ BMI categories during 2015-2016. The study highlights a significant decline in BP control among U.S. adults with hypertension, particularly among certain demographic groups and those with increasing obesity rates. Lifestyle behaviors, antihypertensive medication use, and socioeconomic factors all played a role in these trends.
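
As a schematic of the kind of analysis behind the reported odds ratios, the sketch below fits a logistic regression for poor BP control and exponentiates the coefficients. It is an illustrative Python/statsmodels example with placeholder file and column names, not the authors' model; in particular, NHANES survey weights and design variables, which a full analysis must account for, are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder for the merged NHANES analysis table (adults with hypertension).
df = pd.read_csv("nhanes_bp_merged.csv")

# Poor control: systolic BP >= 140 mm Hg or diastolic BP >= 90 mm Hg.
df["poor_control"] = ((df["sbp"] >= 140) | (df["dbp"] >= 90)).astype(int)

model = smf.logit(
    "poor_control ~ C(race_ethnicity) + C(age_group) + C(sex) + C(bmi_cat) "
    "+ diabetes + ckd + on_bp_meds + C(diet_quality) + C(cycle)",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios; confidence limits are exponentiated as well.
summary = pd.concat([np.exp(model.params).rename("OR"), np.exp(model.conf_int())], axis=1)
summary.columns = ["OR", "2.5%", "97.5%"]
print(summary)
```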

Keywords: diabetes, blood pressure, obesity, logistic regression, odds ratio

Procedia PDF Downloads 9
2 Sustainable Antimicrobial Biopolymeric Food & Biomedical Film Engineering Using Bioactive AMP-Ag+ Formulations

Authors: Eduardo Lanzagorta Garcia, Chaitra Venkatesh, Romina Pezzoli, Laura Gabriela Rodriguez Barroso, Declan Devine, Margaret E. Brennan Fournet

Abstract:

New antimicrobial interventions are urgently required to combat rising global health and medical infection challenges. Here, an innovative antimicrobial technology is presented that provides price-competitive alternatives to antibiotics and is readily integrable with current technological systems. Two cutting-edge antimicrobial materials, antimicrobial peptides (AMPs) and uncompromised, sustained Ag+ action from triangular silver nanoplate (TSNP) reservoirs, are merged for versatile, effective antimicrobial action where current approaches fail. AMPs exist widely in nature and have recently been demonstrated to have broad-spectrum activity against bacteria, viruses, and fungi. TSNPs are highly discrete, homogeneous, and readily functionalisable Ag+ nanoreservoirs with proven amenability to operation in a wide range of bio-based settings. In a design for advanced antimicrobial sustainable plastics, antimicrobial TSNPs are formulated for processing within biodegradable biopolymers. The histone H5 AMP was selected for its reported strong antimicrobial action and functionalized with the TSNP (AMP-TSNP) in a similar fashion to previously reported TSNP biofunctionalisation methods. The synergy between the propensity of biopolymers for degradation, Ag+ release, and AMP activity provides a novel mechanism for the sustained antimicrobial action of biopolymeric thin films. Nanoplates are transferred from the aqueous phase to an organic solvent in order to facilitate integration within hydrophobic polymers. Extrusion is used in combination with calendering rolls to create thin polymeric films with the nanoplates embedded onto the surface. The resultant antibacterial functional films are suitable to be adapted for food packaging and biomedical applications. TSNPs were synthesized by adapting a previously reported seed-mediated approach; the synthesis was scaled up for litre-scale batch production and the product subsequently concentrated to 43 ppm using thermally controlled H2O removal. The nanoplates were transferred from the aqueous phase to chloroform by functionalizing the TSNP with thiol-terminated polyethylene glycol and applying centrifugal force. Polycaprolactone (PCL) and polylactic acid (PLA) were individually processed through extrusion, and TSNP and AMP-TSNP solutions were sprayed onto the polymer immediately after it exited the die. Calendering rolls were used to disperse and incorporate TSNP and AMP-TSNP onto the surface of the extruded films. Observation of the characteristic blue colour confirms the integrity of the TSNP within the films. Antimicrobial tests were performed by incubating Gram-positive and Gram-negative strains with treated and non-treated films to evaluate whether bacterial growth was reduced by the presence of the TSNP. The resulting films successfully incorporated TSNP and AMP-TSNP. Reduced bacterial growth was observed for both Gram-positive and Gram-negative strains with both TSNP and AMP-TSNP films compared with untreated films, indicating antimicrobial action. The largest growth reduction was observed for AMP-TSNP-treated films, demonstrating the additional antimicrobial activity due to the presence of the AMPs. The potential of this technology to impede bacterial activity on food-industry and medical surfaces will forge new confidence in the battle against antibiotic-resistant bacteria, serving to greatly inhibit infections and facilitate patient recovery.

Keywords: antimicrobial, biodegradable, peptide, polymer, nanoparticle

Procedia PDF Downloads 116
1 Climate Change Threats to UNESCO-Designated World Heritage Sites: Empirical Evidence from Konso Cultural Landscape, Ethiopia

Authors: Yimer Mohammed Assen, Abiyot Legesse Kura, Engida Esyas Dube, Asebe Regassa Debelo, Girma Kelboro Mensuro, Lete Bekele Gure

Abstract:

Climate change has recently posed severe threats to many cultural landscapes among UNESCO World Heritage Sites. The UNESCO State of Conservation (SOC) reports categorize flooding, temperature increases, and drought as threats to cultural landscapes. This study aimed to examine variations and trends in rainfall and temperature extreme events and their threats to the UNESCO-designated Konso Cultural Landscape in southern Ethiopia. The study used dense merged satellite-gauge station rainfall data (1981-2020) with a spatial resolution of 4 km by 4 km and observed maximum and minimum temperature data (1987-2020). Qualitative data were also gathered from cultural leaders, local administrators, and religious leaders using structured interview checklists. The spatial patterns, coefficient of variation, standardized anomalies, trends, and magnitude of change of rainfall and temperature extreme events, at both annual and seasonal levels, were computed using the Mann-Kendall trend test and Sen's slope estimator under the CDT package. The standardized precipitation index (SPI) was also used to calculate drought severity, frequency, and trend maps. The data gathered from key informant interviews and focus group discussions were coded and analyzed thematically to complement the statistical findings: thematic areas that explain the impacts of extreme events on the cultural landscape were chosen for coding, and the thematic analysis was conducted using NVivo software. The findings revealed that rainfall was highly variable and unpredictable, resulting in extreme droughts and floods. There were significant (P<0.05) increasing trends in heavy rainfall days (R10mm and R20mm) and in the total amount of rain on wet days (PRCPTOT), which might have resulted in flooding. The study also confirmed that the absolute temperature extreme indices (TXx, TXn, and TNx) and the percentile-based temperature extreme indices (TX90p, TN90p, TX10p, and TN10p) showed significant (P<0.05) increasing trends, which are signals of warming in the study area. The results revealed that the frequency and severity of drought at the 3-month timescale (katana/hageya seasons) were more pronounced than at the 12-month (annual) timescale, and the highest number of droughts in 100 years is projected at the 3-month timescale across the study area. The findings also showed that frequent drought has led to the loss of the grasses used for building traditional individual houses and multipurpose communal houses (pafta), as well as to food insecurity, migration, loss of biodiversity, and the commodification of stones taken from terraces. On the other hand, the increasing trends of the rainfall extreme indices resulted in the destruction of terraces, soil erosion, loss of life, and damage to property. The study shows that a persistent decline in farmland productivity, due to erratic and extreme rainfall and frequent drought, forced the local people to take up non-farm activities and retreat from the daily preservation and management of their landscape. Overall, the increasing rainfall and temperature extremes, coupled with the prevalence of drought, are thought to affect the sustainability of the cultural landscape by disrupting ecosystem services and the livelihoods of the community. Therefore, more localized adaptation and mitigation strategies for the changing climate are needed to maintain the sustainability of the Konso cultural landscape as a global cultural treasure and to strengthen the resilience of smallholder farmers.
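
To make the trend-testing step concrete, the sketch below implements the Mann-Kendall test and Sen's slope estimator in plain Python/NumPy for a single annual series. The authors computed extreme-event indices from gridded data under the CDT package; this sketch ignores tie corrections and serial correlation, and the example values for an R10mm-style series are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall_sen(x):
    """Mann-Kendall trend test and Sen's slope for one series (no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: sum of the signs of all pairwise differences.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0   # continuity-corrected Z
    p = 2.0 * (1.0 - norm.cdf(abs(z)))                          # two-sided p-value
    # Sen's slope: median of all pairwise slopes.
    slope = np.median([(x[j] - x[i]) / (j - i)
                       for i in range(n - 1) for j in range(i + 1, n)])
    return {"S": s, "Z": z, "p": p, "sen_slope": slope}

# Invented annual counts of heavy-rain days (R10mm-style index), for illustration only.
r10mm = [12, 14, 11, 15, 16, 18, 17, 19, 21, 20, 23, 22]
print(mann_kendall_sen(r10mm))
```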

Keywords: adaptation, cultural landscape, drought, extreme indices

Procedia PDF Downloads 26