Search results for: continuous testing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5087

827 The Role of Evaluation for Effective and Efficient Change in Higher Education Institutions

Authors: Pattaka Sa-Ngimnet

Abstract:

That the University as we have known it is no longer serving the needs of the vast majority of students and potential students has been a topic of much discussion. Institutions of higher education, in this age of global culture, are in a process of metamorphosis. Technology is being used to allow more students, older students, working students and disabled students, who cannot attend conventional classes, to have greater access to higher education through the internet. But change must come about only after much evaluation and experimentation, or education will simply become a commodity as, in some cases, it already has. This paper will be concerned with the meaning and methods of change and evaluation as they are applied to institutions of higher education. Organizations generally have different goals and different approaches in order to be successful. However, the means of reaching those goals requires rational and effective planning. Any plans for successful change in any institution must take into account both effectiveness and efficiency and the differences between them. “Effectiveness” refers to an adequate means of achieving an objective. “Efficiency” refers to the ability to achieve an objective without waste of time or resources (The Free Dictionary). So an effective means may not be efficient, and an efficient means may not be effective. The goal is to reach a synthesis of effectiveness and efficiency that will maximize both to the extent each is limited by the other. The focus of this paper, then, is to determine how an educational institution can become either successful or oppressive depending on the kinds of planning, evaluation and change that operate by and on the administration. If the plan is concerned only with efficiency, the institution can easily become oppressive and lose sight of its purpose of educating students. 
If it is overly concentrated on effectiveness, the students may receive a superior education in the short run, but the institution will face operating difficulties. In becoming only goal-oriented, institutions also face problems. Simply stated, if the institution reaches its goals, the stakeholders may become satisfied and fail to change and keep up with the needs of the times. So goals should be seen only as benchmarks in a process of becoming even better at providing quality education. Constant and consistent evaluation is the key to making all these factors come together in a successful process of planning, testing and changing the plans as needed. The focus of the evaluation has to be considered. Evaluations must take into account the progress and needs of students, the methods and skills of instructors, the resources available from the institution, and the styles and objectives of administrators. Thus the role of evaluation is pivotal in providing for the maximum of both effective and efficient change in higher education institutions.

Keywords: change, effectiveness, efficiency, education

Procedia PDF Downloads 306
826 Redefining Success Beyond Borders: A Deep Dive into Effective Methods to Boost Morale Among Virtual Workers for Exponential Project Performance

Authors: Florence Ibeh, David Oyewmi Oyekunle, David Boohene

Abstract:

The continuous advancement of information technology has completely transformed how businesses and organizations operate on a global scale. The widespread availability of virtual communication tools enables individuals to opt for remote work. While remote employment offers various benefits, such as facilitating corporate growth and enhancing customer support, it also presents distinct challenges. Therefore, investigating the intricacies of virtual team morale is crucial for ensuring the achievement of project objectives. For this study, content analysis of pre-existing secondary data was employed to examine the phenomenon. Essential elements vital for improving the success of projects within virtual teams were identified. These factors include technology adoption, creating a distraction-free work environment, effective leadership, trust-building, clear communication channels, well-defined task allocation, active team participation, and motivation. Furthermore, the study established a substantial correlation between morale levels and the participation and productivity of virtual team members. Higher levels of morale were associated with optimal performance among virtual teams. The study determined that the key factors for enhancing project performance in virtual teams are the adoption of technology, a focused environment, effective leadership, trust, communication, well-defined tasks, collaborative teamwork, and motivation. Additionally, the study discovered that modifying the optimal strategies employed by in-office teams can enhance the diminished morale prevalent in remote teams to sustain a high level of team morale for virtual teams. The findings of this study are highly significant in the dynamic field of project management. Currently, there is limited information regarding strategies that address challenges arising from external factors in virtual teams, such as ambient noise and disruptions caused by family members. 
The findings underscore the significance of selecting appropriate communication technologies, delineating distinct roles and responsibilities for virtual team members, and nurturing a culture of accountability and trust. Promoting seamless collaboration and instilling motivation among virtual team members are deemed highly effective in augmenting employee engagement and performance within virtual team settings.

Keywords: virtual teams, morale, project performance, distraction-free environment, technology adoption

Procedia PDF Downloads 67
825 Long-Term Variabilities and Tendencies in the Zonally Averaged TIMED-SABER Ozone and Temperature in the Middle Atmosphere over 10°N-15°N

Authors: Oindrila Nath, S. Sridharan

Abstract:

Long-term (2002-2012) temperature and ozone measurements by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) satellite, zonally averaged over 10°N-15°N, are used to study their long-term changes and their responses to the solar cycle, the quasi-biennial oscillation (QBO) and the El Niño-Southern Oscillation (ENSO). The region is selected to provide more accurate long-term trends and variabilities than were possible earlier with lidar measurements over Gadanki (13.5°N, 79.2°E), which are limited to cloud-free nights, whereas continuous data sets of SABER temperature and ozone are available. Regression analysis of temperature shows a cooling trend of 0.5 K/decade in the stratosphere and of 3 K/decade in the mesosphere. Ozone shows a statistically significant decreasing trend of 1.3 ppmv per decade in the mesosphere, although there is a small positive trend in the stratosphere at 25 km. Other than this, no significant ozone trend is observed in the stratosphere. A negative ozone-QBO response (0.02 ppmv/QBO), a positive ozone-solar cycle response (0.91 ppmv/100 SFU) and a negative response to ENSO (0.51 ppmv/SOI) are found mainly in the mesosphere, whereas a positive ozone response to ENSO (0.23 ppmv/SOI) is pronounced in the stratosphere (20-30 km). The temperature response to the solar cycle is more positive (3.74 K/100 SFU) in the upper mesosphere; its response to ENSO is negative around 80 km and positive around 90-100 km, and its response to the QBO is insignificant at most heights. The composite monthly mean of the ozone volume mixing ratio shows maximum values during the pre-monsoon and post-monsoon seasons in the middle stratosphere (25-30 km) and in the upper mesosphere (85-95 km), around 10 ppmv. 
The composite monthly mean of temperature shows a semi-annual variation, with large values (~250-260 K) in equinox months and lower values in solstice months in the upper stratosphere and lower mesosphere (40-55 km), whereas the semi-annual oscillation (SAO) becomes weaker above 55 km. The semi-annual variation appears again at 80-90 km, with large values in spring equinox and winter months. In the upper mesosphere (90-100 km), lower temperatures (~170-190 K) prevail in all months except September, when the temperature is slightly higher. The height profiles of the amplitudes of the semi-annual and annual oscillations in ozone show maximum values of 6 ppmv and 2.5 ppmv, respectively, in the upper mesosphere (80-100 km), whereas the SAO and annual oscillation (AO) in temperature show maximum values of 5.8 K and 4.6 K in the lower and middle mesosphere around 60-85 km. The phase profiles of both the SAO and AO show downward progressions. These results are being compared with long-term lidar temperature measurements over Gadanki (13.5°N, 79.2°E), and the results obtained will be presented during the meeting.
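The trend and proxy responses quoted above are typically obtained from a multiple linear regression of the monthly series on a linear trend term and the solar-flux (F10.7), QBO and SOI indices. The abstract does not give the authors' exact model, so the following is only a minimal sketch of that kind of fit, using synthetic stand-in data rather than SABER measurements:

```python
import numpy as np

# Hypothetical monthly anomaly series (11 years, 2002-2012) and proxy indices.
# In the study, y would be a SABER ozone or temperature anomaly at one height;
# f107, qbo and soi stand in for the solar-flux, QBO-wind and SOI proxies.
rng = np.random.default_rng(0)
n = 132                                   # 11 years of monthly values
t = np.arange(n) / 120.0                  # time in decades
f107, qbo, soi = rng.standard_normal((3, n))
y = 0.5 * t + 0.9 * f107 + rng.standard_normal(n) * 0.1

# Design matrix: constant, linear trend, and the three proxy indices.
X = np.column_stack([np.ones(n), t, f107, qbo, soi])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

trend_per_decade = coef[1]                # units of y per decade
solar_response = coef[2]                  # units of y per unit of solar proxy
```

The coefficient on the trend term gives the change per decade, and the proxy coefficients give the responses per unit index (e.g. per 100 SFU of solar flux), which is how the quoted values would be read off a fit like this.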

Keywords: trends, QBO, solar cycle, ENSO, ozone, temperature

Procedia PDF Downloads 398
824 The Agency of Award Systems in Architecture: The Case of Cyprus

Authors: Christakis Chatzichristou, Elias Kranos

Abstract:

Architectural awards, especially if they are given by the state, recognize excellence in the field and, at the same time, strongly contribute to the making of the architectural culture of a place. The present research looks at the houses that have been awarded through such a system in Cyprus in order to discuss the values promoted, directly or not, by such a setup, which is quite similar to other prestigious award systems such as the Mies van der Rohe Prize in Europe. In fact, many of the projects singled out through the state award system end up being selected to represent the country for the European awards. The residential architecture encouraged by such systems is quite interesting in that the most public of institutions influences how the most private unit of society is architecturally accommodated. The methodology uses both qualitative and quantitative research tools in order to analyze: the official state call for entries to the competition; the final report of the evaluation committee; the spatial characteristics of the houses through the Space Syntax methodology; the statements of the architects regarding their intentions and the final outcome; and the feelings of the owners and users of the houses regarding the design process as well as their degree of satisfaction with the final product. The above-mentioned analyses allow for a more thorough discussion regarding not only the values promoted explicitly by the system through the brief that describes what the evaluation committee is looking for, but also the values that are actually being promoted indirectly through the results of the evaluation itself. 
The findings suggest that: the strong emphasis in the brief on bioclimatic design and issues of sustainability weakens significantly, if it remains present at all, in the actual selection process; continuous improvement seems to be used fuzzily as a concept; most of the houses tend to have a similar spatial genotype; most of the houses have similar aesthetic qualities; discrepancies between the lifestyle proposed through the design and the actual use of the spaces do not seem to be acknowledged in the evaluation as an issue; and the temporal factor seems to be ignored, as the projects are required to be ‘finished projects’, as though the users and their needs do not change through time. The research suggests that, rather than preserving a critical attitude regarding the role of the architect in society, the state award system tends, like any other non-reflective social organism, simply to promote its own unexamined values and prejudices. This is perhaps more evident in the shared aesthetic character of the awarded houses and less so in the hidden spatial genotype to which they belong. If the design of houses is indeed a great opportunity for architecture to contribute in a more deliberate manner to the evolution of society, then the present study shows that this opportunity seems to be largely missed. The findings may thus serve less as a verdict and more as a chance for introspection and discussion.

Keywords: award systems, houses, spatial genotype, aesthetic qualities

Procedia PDF Downloads 51
823 Estimation of Biomedical Waste Generated in a Tertiary Care Hospital in New Delhi

Authors: Priyanka Sharma, Manoj Jais, Poonam Gupta, Suraiya K. Ansari, Ravinder Kaur

Abstract:

Introduction: As much as health care is necessary for the population, so is the management of the biomedical waste produced. Biomedical waste is a broad term for the waste material produced during the diagnosis, treatment or immunization of human beings and animals, in research, or in the production or testing of biological products. Biomedical waste management is a chain of processes from the point of generation of biomedical waste to its final disposal in the correct and proper way assigned for that particular type of waste. Any deviation from these processes leads to improper disposal of biomedical waste, which is itself a major health hazard. Proper segregation of biomedical waste is the key to biomedical waste management. Improper disposal of BMW can cause sharp injuries, which may lead to HIV, Hepatitis B virus and Hepatitis C virus infections. Therefore, proper disposal of BMW is of utmost importance. Health care establishments segregate biomedical waste and dispose of it as per the biomedical waste management rules in India. Objectives: This study was done to observe the current trends of biomedical waste generated in a tertiary care hospital in Delhi. Methodology: Biomedical waste management rounds were conducted in the hospital wards. Relevant details were collected and analysed, and the sites with maximum biomedical waste generation were identified. All the data were cross-checked at the common collection site. Results: The total amount of waste generated in the hospital from January 2014 to December 2014 was 639,547 kg, of which 70.5% was general (non-hazardous) waste and the remaining 29.5% was BMW, which consisted of highly infectious waste (12.2%), disposable plastic waste (16.3%) and sharps (1%). The sites producing the maximum quantity of biomedical waste were the Obstetrics and Gynaecology wards, with a total biomedical waste production of 45.8%, followed by the Paediatrics, Surgery and Medicine wards with 21.2%, 4.6% and 4.3%, respectively. 
The maximum average biomedical waste generated was by the Obstetrics and Gynaecology ward, with 0.7 kg/bed/day, followed by the Paediatrics, Surgery and Medicine wards with 0.29, 0.28 and 0.18 kg/bed/day, respectively. Conclusions: Hospitals should pay attention to the sites which produce a large amount of BMW to avoid improper segregation of biomedical waste. Induction and refresher training programs on biomedical waste management should also be conducted to avoid improper management of biomedical waste. Healthcare workers should be made aware of the risks of poor biomedical waste management.
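The kg/bed/day figures above are a simple normalisation of each ward's total waste by its bed count and the number of days in the collection period. A minimal sketch of that calculation follows; the bed count and total below are illustrative assumptions, not values from the study:

```python
# Normalising ward waste to kg/bed/day, as in the abstract's ward comparison.
# Bed counts are illustrative; the abstract reports only the resulting rates
# (e.g. 0.7 kg/bed/day for the Obstetrics and Gynaecology ward).
def waste_rate(total_kg: float, beds: int, days: int) -> float:
    """Average biomedical waste generated per bed per day."""
    return total_kg / (beds * days)

# Example: a hypothetical 100-bed ward producing 25,550 kg over one year.
rate = waste_rate(25_550, beds=100, days=365)  # 0.7 kg/bed/day
```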

Keywords: biomedical waste, biomedical waste management, hospital-tertiary care, New Delhi

Procedia PDF Downloads 229
822 Extreme Heat and Workforce Health in Southern Nevada

Authors: Erick R. Bandala, Kebret Kebede, Nicole Johnson, Rebecca Murray, Destiny Green, John Mejia, Polioptro Martinez-Austria

Abstract:

Summer temperature data from Clark County were collected and used to estimate two different heat-related indexes: the heat index (HI) and the excess heat factor (EHF). These two indexes were used jointly with data on heat-related deaths in Clark County to assess the effect of extreme heat on the exposed population. The trends of the heat indexes were then analyzed for the 2007-2016 decade, and the correlation between heat wave episodes and the number of heat-related deaths in the area was estimated. The HI increased significantly in June, July, and August over the last ten years. The same trend was found for the EHF, which showed a clear increase in the severity and number of these events per year. The number of heat wave episodes increased from 1.4 per year during the 1980-2016 period to 1.66 per year during the 2007-2016 period. However, a different trend was found for heat-wave-event duration, which decreased from an average of 20.4 days during the trans-decadal period (1980-2016) to 18.1 days during the most recent decade (2007-2016). The number of heat-related deaths was also found to increase from 2007 to 2016, with 2016 recording the highest number of heat-related deaths. Both the HI and the number of deaths showed a normal-like distribution for June, July, and August, with peak values reached in late July and early August. The average maximum HI values correlated better with the number of deaths registered in Clark County than the EHF did, probably because the HI uses the maximum temperature and humidity in its estimation, whereas the EHF uses the mean daily temperature. However, it is worth testing the EHF in the study zone, because it has been reported to fit heat-related morbidity well. For the overall period, 437 heat-related deaths were registered in Clark County, with 20% of the deaths occurring in June, 52% in July, 18% in August, and the remaining 10% in the other months of the year. 
The most vulnerable subpopulation was people over 50 years old, who accounted for 76% of the registered heat-related deaths. Most of these cases were associated with pre-existing heart disease. The second most vulnerable subpopulation was young adults (20-50), who accounted for 23% of the heat-related deaths. These deaths were associated with alcohol or illegal drug intoxication.
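For reference, the EHF used above is commonly computed following Nairn and Fawcett's definition: a significance index (three-day mean temperature minus the climatological 95th percentile) intensified by an acclimatisation index (three-day mean minus the preceding 30-day mean). The sketch below follows that common definition and is not the authors' code:

```python
import numpy as np

def excess_heat_factor(daily_mean_temp, t95):
    """Excess heat factor (EHF): the three-day mean temperature excess over
    the climatological 95th percentile (EHI_sig), multiplied by the excess
    over the preceding 30-day mean when that excess is above one (EHI_accl)."""
    temps = np.asarray(daily_mean_temp, dtype=float)
    ehf = np.full(temps.shape, np.nan)
    for i in range(32, len(temps)):
        t3 = temps[i - 2:i + 1].mean()       # current 3-day mean
        t30 = temps[i - 32:i - 2].mean()     # preceding 30-day mean
        ehi_sig = t3 - t95
        ehi_accl = t3 - t30
        ehf[i] = ehi_sig * max(1.0, ehi_accl)
    return ehf
```

A day is usually flagged as part of a heat wave when its EHF is positive; severity comparisons like those in the abstract then aggregate the positive EHF values per event.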

Keywords: heat, health, hazards, workforce

Procedia PDF Downloads 94
821 To Live on the Margins: A Closer Look at the Social and Economic Situation of Illegal Afghan Migrants in Iran

Authors: Abdullah Mohammadi

Abstract:

Years of prolonged war in Afghanistan have led to one of the largest refugee and migrant populations in the contemporary world. During this continuous unrest, which began in the 1970s (with a military coup, the Marxist revolution and the subsequent Soviet invasion), over one-third of the population migrated to neighboring countries, especially Pakistan and Iran. After the Soviet Army withdrawal in 1989, a new wave of conflicts emerged between rival Afghan groups, and this led to new refugees. The Taliban period also created its own refugees. During all these years, the I.R. of Iran has been one of the main destinations of Afghan refugees and migrants. At first, due to the political situation after the Islamic Revolution, the Iranian government did not restrict the entry of Afghan refugees. Those who arrived first in Iran received ID cards and had access to education and healthcare services. But in the 1990s, due to economic and social concerns, Iran’s policy towards Afghan refugees and migrants changed. The government has tried to identify and register Afghans in Iran and limit their access to some services and jobs. Unfortunately, there are few studies on the situation of Afghan refugees and migrants in Iran, and we have only a dim and vague picture of them. Of the few studies done on this group, none focus on the situation of illegal Afghan migrants in Iran. Here, we studied the social and economic aspects of illegal Afghan migrants’ lives in Iran. In doing so, we interviewed 24 illegal Afghan migrants in Iran. The method applied for analyzing the data is thematic analysis. For the interviews, we chose family heads (17 men and 7 women). According to the findings, the socio-economic situation of illegal Afghan migrants in Iran is very undesirable. Its main cause is the marginalization of this group, which results from government policies towards Afghan migrants. Most of the illegal Afghan migrants work in unskilled and inferior jobs and live in rented houses on the margins of cities and villages. 
None of them could buy a house or vehicle, due to the law. Based on their income, they form one of the lowest, most unprivileged groups in the society. Socially, they face many problems in their everyday life: social insecurity, harassment and violence, misuse of their situation by police and people, lack of educational opportunity, etc. In general, we may conclude that illegal Afghan migrants have adapted little to Iranian society. They face severe limitations compared to legal migrants and refugees and have no opportunity for upward social mobility. However, they have developed some strategies to face these difficulties, including: seeking financial and emotional help from family and friendship networks, sending one of the family members to a third country (mostly to European countries), and establishing self-administered schools for children (schools which are illegal and run by educated Afghan youth).

Keywords: illegal Afghan migrants, marginalization, social insecurity, upward social mobility

Procedia PDF Downloads 303
820 Community Engagement Strategies to Assist with the Development of an RCT Among People Living with HIV

Authors: Joyce K. Anastasi, Bernadette Capili

Abstract:

Our research team focuses on developing and testing protocols to manage chronic symptoms. For many years, our team has designed and implemented symptom management studies for people living with HIV (PLWH). We identify symptoms that are not curable and are not adequately controlled by conventional therapies. As an exemplar, we describe how we successfully engaged PLWH in developing and refining our research feasibility protocol for distal sensory peripheral neuropathy (DSP) associated with HIV. With input from PLWH with DSP, our research received National Institutes of Health (NIH) research funding support. Significance: DSP is one of the most common neurologic complications in HIV. It is estimated that DSP affects 21% to 50% of PLWH. The pathogenesis of DSP in HIV is complex and unclear. Proposed mechanisms include cytokine dysregulation, neurotoxicity produced by viral proteins, and mitochondrial dysfunction associated with antiretroviral medications. There are no FDA-approved treatments for DSP in HIV. Purpose: Aims: 1) to explore the impact of DSP on the lives of PLWH, 2) to identify patients’ perspectives on successful treatments for DSP, 3) to identify interventions considered feasible and sensitive to the needs of PLWH with DSP, and 4) to obtain participant input for protocol/study design. Description of Process: We conducted a needs assessment with PLWH with DSP. From this needs assessment, we learned, from the patients’ perspective, detailed descriptions of their symptoms, their physical functioning with DSP, the self-care remedies they had tried, and their desired interventions. We also asked about protocol scheduling, instrument clarity, study compensation, study-related burdens, and willingness to participate in a randomized controlled trial (RCT) with a placebo and a waitlist group. Implications: We incorporated many of the suggestions learned from the needs assessment. 
We developed and completed a feasibility study that provided us with invaluable information that informed subsequent NIH-funded studies. In addition to our extensive clinical and research experience working with PLWH, learning from the patient perspective helped in developing our protocol and promoting a successful plan for recruitment and retention of study participants.

Keywords: clinical trial development, peripheral neuropathy, traditional medicine, HIV, AIDS

Procedia PDF Downloads 74
819 Biological Control of Karnal Bunt by Pseudomonas fluorescens

Authors: Geetika Vajpayee, Sugandha Asthana, Pratibha Kumari, Shanthy Sundaram

Abstract:

Pseudomonas species possess a variety of promising antifungal and growth-promoting activities in the wheat plant. In the present study, Pseudomonas fluorescens MTCC-9768 is tested against the plant pathogenic fungus Tilletia indica, which causes Karnal bunt, a quarantine disease of wheat (Triticum aestivum) affecting the kernels. It is one of the 1/A1 harmful diseases of wheat worldwide under EU legislation. The disease develops during the growth phase through the spread of microscopic fungal spores (teliospores) dispersed by the wind. Current chemical fungicidal treatments have been reported to reduce teliospore germination, but their effect is questionable, since T. indica can survive up to four years in the soil. Fungal growth inhibition tests were performed using the dual culture technique, and the results showed 82.5% inhibition. The antagonistic bacterium-fungus interaction causes changes in the morphology of the hyphae, which were observed using lactophenol cotton blue staining and scanning electron microscopy (SEM). Rounded and swollen ends, called ‘theca’, were observed in the interacted fungus as compared to the control fungus (without bacterial interaction). The bacterium was tested for antagonistic activities such as protease, cellulase, chitinase and HCN production. Tests of growth-promoting activities showed increased production of IAA by the bacterium. The bacterial secondary metabolites were extracted in different solvents for testing their growth-inhibiting properties. Characterization and purification of the antifungal compound were done by thin layer chromatography, and the Rf value was calculated (Rf = 0.54) and compared to that of the standard antifungal compound 2,4-DAPG (Rf = 0.54). Further, in vivo experiments showed a significant decrease in the severity of the disease in the wheat plant with both the direct injection method and seed treatment. 
Our results indicate that the compound extracted and purified from the antagonistic bacterium P. fluorescens MTCC-9768 may be used as a potential biocontrol agent against T. indica. The PGPR properties of the bacterium may also be utilized by incorporating it into bio-fertilizers.
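The Rf comparison above rests on the standard thin-layer chromatography calculation: distance migrated by the compound divided by distance migrated by the solvent front, both measured from the origin. A minimal sketch with illustrative distances (the abstract reports only the resulting Rf of 0.54):

```python
def retention_factor(spot_distance_cm: float, solvent_front_cm: float) -> float:
    """Rf value from a TLC plate: distance migrated by the compound divided
    by the distance migrated by the solvent front, measured from the origin."""
    return spot_distance_cm / solvent_front_cm

# Illustrative distances only; matching Rf values (here 0.54, as for the
# extracted compound and the 2,4-DAPG standard) suggest the same compound.
rf = retention_factor(4.32, 8.0)  # 0.54
```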

Keywords: antagonism, Karnal bunt, PGPR, Pseudomonas fluorescens

Procedia PDF Downloads 383
818 Predictors of Pericardial Effusion Requiring Drainage Following Coronary Artery Bypass Graft Surgery: A Retrospective Analysis

Authors: Nicholas McNamara, John Brookes, Michael Williams, Manish Mathew, Elizabeth Brookes, Tristan Yan, Paul Bannon

Abstract:

Objective: Pericardial effusions are an uncommon but potentially fatal complication after cardiac surgery. The goal of this study was to describe the incidence of and risk factors associated with the development of pericardial effusion requiring drainage after coronary artery bypass graft surgery (CABG). Methods: A retrospective analysis was undertaken using prospectively collected data. All adult patients who underwent CABG at our institution between 1st January 2017 and 31st December 2018 were included. Pericardial effusion was diagnosed using transthoracic echocardiography (TTE) performed for clinical suspicion of pre-tamponade or tamponade. Drainage was undertaken if considered clinically necessary and was performed via a sub-xiphoid incision, pericardiocentesis, or re-sternotomy at the discretion of the treating surgeon. Patient demographics, operative characteristics, anticoagulant exposure, and postoperative outcomes were examined to identify the variables associated with the development of pericardial effusion requiring drainage. Tests of association were performed using the Fisher exact test for dichotomous variables and the Student t-test for continuous variables. Logistic regression models were used to determine univariate predictors of pericardial effusion requiring drainage. Results: Between 1st January 2017 and 31st December 2018, a total of 408 patients underwent CABG at our institution, and eight (1.9%) required drainage of a pericardial effusion. There was no difference in age, gender, or the proportion of patients on preoperative therapeutic heparin between the study and control groups. 
Univariate analysis identified preoperative atrial arrhythmia (37.5% vs 8.8%, p = 0.03), reduced left ventricular ejection fraction (47% vs 56%, p = 0.04), longer cardiopulmonary bypass (130 vs 84 min, p < 0.01) and cross-clamp (107 vs 62 min, p < 0.01) times, higher drain output in the first four postoperative hours (420 vs 213 mL, p < 0.01), postoperative atrial fibrillation (100% vs 32%, p < 0.01), and pleural effusion requiring drainage (87.5% vs 12.5%, p < 0.01) as associated with the development of pericardial effusion requiring drainage. Conclusion: In this study, the incidence of pericardial effusion requiring drainage was 1.9%. Several factors, mainly related to preoperative or postoperative arrhythmia, length of surgery, and pleural effusion requiring drainage, were identified as associated with the development of clinically significant pericardial effusions. A high index of clinical suspicion and a low threshold for transthoracic echocardiography are essential to ensure this potentially lethal condition is not missed.
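The tests of association named in the methods can be sketched as follows; the counts and measurements below are invented for illustration and are not taken from the study's data:

```python
import numpy as np
from scipy import stats

# Illustrative 2x2 table for a dichotomous variable such as postoperative
# atrial fibrillation: rows = effusion drained (yes/no), columns = AF (yes/no).
table = np.array([[8, 0],
                  [128, 272]])
odds_ratio, p_fisher = stats.fisher_exact(table)

# Continuous variable (e.g. cardiopulmonary bypass time, in minutes):
# compare the effusion and no-effusion groups with an unpaired t-test.
rng = np.random.default_rng(1)
bypass_effusion = rng.normal(130, 25, size=8)     # small exposed group
bypass_control = rng.normal(84, 25, size=400)     # remaining cohort
t_stat, p_ttest = stats.ttest_ind(bypass_effusion, bypass_control)
```

A p-value below the chosen significance threshold on such a test is what marks a variable (for example, postoperative atrial fibrillation or bypass time) as associated with effusion requiring drainage.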

Keywords: coronary artery bypass, pericardial effusion, pericardiocentesis, tamponade, sub-xiphoid drainage

Procedia PDF Downloads 152
817 Foundation Phase Teachers' Experiences of School Based Support Teams: A Case of Selected Schools in Johannesburg

Authors: Ambeck Celyne Tebid, Harry S. Rampa

Abstract:

The South African education system recognises the need for all learners, including those experiencing learning difficulties, to have access to a single unified system of education. Expecting teachers to be pedagogically responsive to an increasingly diverse learner population without appropriate support has proven to be unrealistic. As such, this has considerably hampered interest amongst teachers, especially those at the foundation phase, in working within an Inclusive Education (IE) and training system. This qualitative study aimed at investigating foundation phase teachers’ experiences of school-based support teams (SBSTs) in two full-service (inclusive) schools and one mainstream public primary school in the Gauteng province of South Africa, with particular emphasis on finding ways to support them, since teachers claimed they were not empowered in their initial training to teach learners experiencing learning difficulties. SBSTs were therefore created at school level to fill this gap, thereby supporting teaching and learning by identifying and addressing learners’, teachers’ and schools’ needs. With the notion that IE may be failing for systemic reasons, this study uses Bronfenbrenner’s (1979) ecosystemic theory as well as Piaget’s (1980) maturational theory to examine the nature of support and the experiences amongst teachers, taking individual and systemic factors into consideration. Data were collected using in-depth, face-to-face interviews, document analysis and observation with 6 foundation phase teachers drawn from the 3 schools, 3 SBST coordinators, and 3 school principals. Data were analysed using the phenomenological data analysis method. Amongst the findings of the study is that South African full-service and mainstream schools have functional SBSTs which render formal and informal support to teachers; this support varies in quality depending on the socio-economic status of the community where the school is situated. 
This paper, however, argues that what foundation phase teachers settled for as ‘support’ is flawed, and that how they perceive the SBST and its role is problematic. The paper concludes by recommending that the SBST consider other approaches to foundation phase teacher support, such as empowering teachers with continuous practical experience in dealing with real classroom scenarios, and ensuring that all support, be it on academic or non-academic issues, is provided within a learning community framework where the teacher, family, SBST and, where necessary, community organisations harness their skills towards a common goal.

Keywords: foundation phase, full-service schools, inclusive education, learning difficulties, school-based support teams, teacher support

Procedia PDF Downloads 216
816 Greek Teachers' Understandings of Typical Language Development and of Language Difficulties in Primary School Children and Their Approaches to Language Teaching

Authors: Konstantina Georgali

Abstract:

The present study explores Greek teachers’ understandings of typical language development and of language difficulties. Its core aim was to highlight that teachers need a thorough understanding of educational linguistics, that is, of how language figures in education. They should also be aware of how language should be taught so as to promote language development for all students while at the same time supporting the needs of children with language difficulties in an inclusive ethos. The study thus argued that language can be a dynamic learning mechanism in the minds of all children and a powerful teaching tool in the hands of teachers, and provided current research evidence to show that structural and morphological particularities of native languages, in this case of the Greek language, can be used by teachers to enhance children’s understanding of language and simultaneously improve oral language skills for children with typical language development and for those with language difficulties. The research was based on a Sequential Exploratory Mixed Methods Design deployed in three consecutive and integrative phases. The first phase involved 18 exploratory interviews with teachers. Its findings informed the second phase, a questionnaire survey with 119 respondents. Contradictory questionnaire results were further investigated in a third phase employing a formal testing procedure with 60 children attending Y1, Y2 and Y3 of primary school (a research group of 30 language-impaired children and a comparison group of 30 children with typical language development, both identified by their class teachers). Results showed both strengths and weaknesses in teachers’ awareness of educational linguistics and of language difficulties. 
They also provided a different perspective on children’s language needs and on language teaching approaches that reflected current advances and conceptualizations of language problems, and opened a new window on how best these needs can be met in an inclusive ethos. However, teachers barely used teaching approaches that could capitalize on the particularities of the Greek language to improve language skills for all students in class. Although they seemed to realize the importance of oral language skills and their knowledge base on language-related issues was adequate, their practices indicated that they did not see language as a dynamic teaching and learning mechanism that can promote children’s language development and, in tandem, improve academic attainment. Important educational implications arose, along with clear indications that the findings generalize beyond the Greek educational context.

Keywords: educational linguistics, inclusive ethos, language difficulties, typical language development

Procedia PDF Downloads 371
815 Limbic Involvement in Visual Processing

Authors: Deborah Zelinsky

Abstract:

The retina filters millions of incoming signals into a smaller amount of exiting optic nerve fibers that travel to different portions of the brain. Most of the signals are for eyesight (called "image-forming" signals). However, there are other, faster signals that travel "elsewhere" and are not directly involved with eyesight (called "non-image-forming" signals). This article centers on the neurons of the optic nerve connecting to parts of the limbic system. Eye care providers currently look at parvocellular and magnocellular processing pathways without realizing that those are part of an enormous "galaxy" of all the body systems. Lenses modify both non-image-forming and image-forming pathways, taking A.M. Skeffington's seminal work one step further. Almost 100 years ago, he described the Where am I (orientation), Where is It (localization), and What is It (identification) pathways. Now, among others, there is a How am I (animation) and a Who am I (inclination, motivation, imagination) pathway. Classic eye testing considers pupils and often assesses posture and motion awareness, but classical prescriptions often overlook limbic involvement in visual processing. The limbic system is composed of the hippocampus, amygdala, hypothalamus, and anterior nuclei of the thalamus. The optic nerve's limbic connections arise from the intrinsically photosensitive retinal ganglion cells (ipRGC) through the retinohypothalamic tract (RHT). There are two main hypothalamic nuclei with direct photic inputs: the suprachiasmatic nucleus and the paraventricular nucleus. Other hypothalamic nuclei connected with retinal function, including mood regulation, appetite, and glucose regulation, are the supraoptic nucleus and the arcuate nucleus. The retinohypothalamic tract is often overlooked when we prescribe eyeglasses. Each person is different, but the lenses we choose influence this fast processing, which affects each patient's aiming and focusing abilities. 
These signals arise from the ipRGC cells that were only discovered 20+ years ago, and classical prescriptions do not address the Campana retinal interneurons that were only discovered 2 years ago. As eyecare providers, we are unknowingly altering such factors as lymph flow, glucose metabolism, appetite, and sleep cycles in our patients. It is important to know what we are prescribing as visual processing evaluations expand beyond 20/20 central eyesight.

Keywords: neuromodulation, retinal processing, retinohypothalamic tract, limbic system, visual processing

Procedia PDF Downloads 69
814 Application of Reliability Methods to the Analysis of the Stability Limit States of Large Concrete Dams

Authors: Mustapha Kamel Mihoubi, Essadik Kerkar, Abdelhamid Hebbouche

Abstract:

Given the randomness of most of the factors affecting the stability of a gravity dam, probability theory is generally used to assess the risk of failure; because there is no sharp transition from the stable state to the failed state, the stability failure process is treated as a probabilistic event. Controlling the risk of failure is of capital importance and rests on a cross-analysis of the gravity of the consequences and the probability of occurrence of identified major accidents, which can pose a significant risk to concrete dam structures. Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of such works, including when calculating the stability of large structures exposed to a major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods, including those used in engineering practice. In our case, level II methods are applied via the study of limit states. The probability of failure is thus estimated by analytical methods of the FORM (First Order Reliability Method) and SORM (Second Order Reliability Method) type. By way of comparison, a level III method was also used, which performs a full analysis of the problem by integrating the joint probability density function of the random variables over the safe domain using Monte Carlo simulation. 
Taking into account the change in stress under the normal, exceptional and extreme load combinations acting on the dam, the calculation results provided acceptable failure probability values that largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength, especially in the presence of unique and extreme load combinations. Shear forces then induce sliding that threatens the reliability of the structure through intolerable values of the probability of failure, especially in the case of increased uplift under a hypothetical failure of the drainage system.
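The level III Monte Carlo approach described above can be sketched in a few lines. The sliding limit-state function, the distributions and all parameter values below are purely illustrative assumptions, not the model or data used in this study:

```python
import numpy as np

def sliding_failure_probability(n_sim=100_000, seed=0):
    """Level III Monte Carlo estimate of a sliding limit state
    g = c*A + (W - U)*tan(phi) - H (illustrative safety margin).
    All distributions below are assumed for the sketch."""
    rng = np.random.default_rng(seed)
    cA = rng.normal(300e3, 60e3, n_sim)      # cohesion x contact area [N]
    tan_phi = rng.normal(0.7, 0.07, n_sim)   # friction coefficient
    W = rng.normal(6e6, 2.5e5, n_sim)        # dam self-weight [N]
    U = rng.normal(1.5e6, 3e5, n_sim)        # uplift force [N]
    H = rng.normal(2.5e6, 3.5e5, n_sim)      # horizontal hydrostatic thrust [N]
    g = cA + (W - U) * tan_phi - H           # limit state: failure when g < 0
    return np.mean(g < 0)

pf = sliding_failure_probability()
print(f"estimated probability of failure: {pf:.4f}")
```

Increasing the mean uplift U in this sketch raises the estimated failure probability, mirroring the drainage-failure scenario discussed in the abstract.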

Keywords: dam, failure, limit state, Monte Carlo, reliability, probability, sliding, Taylor

Procedia PDF Downloads 307
813 The Appropriate Number of Test Items That a Classroom-Based Reading Assessment Should Include: A Generalizability Analysis

Authors: Jui-Teng Liao

Abstract:

The selected-response (SR) format has been commonly adopted to assess academic reading in both formal and informal testing (i.e., standardized assessment and classroom assessment) because of its strengths in content validity, construct validity, and scoring objectivity and efficiency. When developing a second language (L2) reading test, researchers indicate that the longer the test (e.g., the more test items), the higher the reliability and validity the test is likely to produce. However, previous studies have not provided specific guidelines regarding the optimal length of a test or the most suitable number of test items or reading passages. Additionally, reading tests often include different question types (e.g., factual, vocabulary, inferential) that require varying degrees of reading comprehension and cognitive processing. Therefore, it is important to investigate the impact of question types on the number of items in relation to the score reliability of L2 reading tests. Given the popularity of the SR question format and the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which such a question format can reliably measure learners’ L2 reading comprehension. The present study, therefore, adopted generalizability (G) theory to investigate the score reliability of the SR format in L2 reading tests, focusing on how many test items a reading test should include. Specifically, this study aimed to investigate the interaction between question types and the number of items, providing insights into the appropriate item count for different types of questions. G theory is a comprehensive statistical framework for estimating the score reliability of tests and validating their results. Data were collected from 108 English as a second language students who completed an English reading test comprising factual, vocabulary, and inferential questions in the SR format. 
The computer program mGENOVA was utilized to analyze the data using multivariate designs (i.e., scenarios). Based on the results of G theory analyses, the findings indicated that the number of test items had a critical impact on the score reliability of an L2 reading test. Furthermore, the findings revealed that different types of reading questions required varying numbers of test items for reliable assessment of learners’ L2 reading proficiency. Further implications for teaching practice and classroom-based assessments are discussed.
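As a minimal illustration of the relationship G theory predicts between item count and score reliability, the one-facet (persons × items) generalizability coefficient can be computed directly; the variance components below are assumed values for the sketch, not estimates from the study's mGENOVA analysis:

```python
def g_coefficient(var_person, var_residual, n_items):
    """Relative generalizability coefficient for a one-facet
    persons x items crossed design:
        E(rho^2) = var_p / (var_p + var_residual / n_i).
    Variance components here are illustrative assumptions."""
    return var_person / (var_person + var_residual / n_items)

# Assumed components: person variance 0.30; residual (person-by-item
# interaction plus error) 0.70. Reliability rises as items are added.
for n in (5, 10, 20, 40):
    print(n, round(g_coefficient(0.30, 0.70, n), 3))
```

The diminishing returns visible in the output are exactly why a study like this one asks how many items of each question type are enough, rather than simply "more".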

Keywords: second language reading assessment, validity and reliability, generalizability theory, academic reading, question format

Procedia PDF Downloads 66
812 Medial Temporal Tau Predicts Memory Decline in Cognitively Unimpaired Elderly

Authors: Angela T. H. Kwan, Saman Arfaie, Joseph Therriault, Zahra Azizi, Firoza Z. Lussier, Cecile Tissot, Mira Chamoun, Gleb Bezgin, Stijn Servaes, Jenna Stevenon, Nesrine Rahmouni, Vanessa Pallen, Serge Gauthier, Pedro Rosa-Neto

Abstract:

Alzheimer’s disease (AD) can be detected in living people using in vivo biomarkers of amyloid-β (Aβ) and tau, even in the absence of cognitive impairment during the preclinical phase. [¹⁸F]-MK-6240 is a high-affinity positron emission tomography (PET) tracer that quantifies tau neurofibrillary tangles, but its ability to predict cognitive changes associated with early AD symptoms, such as memory decline, is unclear. Here, we assess the prognostic accuracy of baseline [¹⁸F]-MK-6240 tau PET for predicting longitudinal memory decline in asymptomatic elderly individuals. In a longitudinal observational study, we evaluated a cohort of cognitively normal elderly participants (n = 111) from the Translational Biomarkers in Aging and Dementia (TRIAD) study (data collected between October 2017 and July 2020, with a follow-up period of 12 months). All participants underwent tau PET with [¹⁸F]-MK-6240 and Aβ PET with [¹⁸F]-AZD-4694. The exclusion criteria included the presence of head trauma, stroke, or other neurological disorders. The 111 eligible participants were chosen based on the availability of Aβ PET, tau PET, magnetic resonance imaging (MRI), and APOEε4 genotyping. Among these participants, the mean (SD) age was 70.1 (8.6) years; 20 (18%) were tau PET positive, and 71 of 111 (63.9%) were women. A significant association between baseline Braak I-II [¹⁸F]-MK-6240 SUVR positivity and change in composite memory score was observed at the 12-month follow-up, after correcting for age, sex, and years of education (Logical Memory and RAVLT, standardized beta = -0.52 (-0.82 to -0.21), p < 0.001, for dichotomized tau PET and -1.22 (-1.84 to -0.61), p < 0.0001, for continuous tau PET). 
Moderate cognitive decline was observed for A+T+ over the follow-up period, whereas no significant change was observed for A-T+, A+T-, and A-T-, though it should be noted that the A-T+ group was small. Our results indicate that baseline tau neurofibrillary tangle pathology is associated with longitudinal changes in memory function, supporting the use of [¹⁸F]-MK-6240 PET to predict the likelihood of asymptomatic elderly individuals experiencing future memory decline. Overall, [¹⁸F]-MK-6240 PET is a promising tool for predicting memory decline in older adults without cognitive impairment at baseline. This is of critical relevance as the field shifts towards a biological model of AD defined by the aggregation of pathologic tau. Early detection of tau pathology using [¹⁸F]-MK-6240 PET thus offers hope that patients with AD may be diagnosed during the preclinical phase, before it is too late.
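The covariate-adjusted association reported above can be illustrated with an ordinary least-squares sketch on simulated data. The cohort size and tau-positivity rate mirror the abstract, but the data, the planted effect size and the variable names are entirely hypothetical, not TRIAD data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 111  # cohort size from the abstract; everything below is simulated
tau_pos = rng.binomial(1, 0.18, n).astype(float)  # ~18% tau PET positive
age = rng.normal(70.1, 8.6, n)
sex = rng.binomial(1, 0.64, n).astype(float)
educ = rng.normal(14.0, 3.0, n)

# Planted (hypothetical) effect: tau positivity lowers 12-month memory
# change by 0.5 SD; a small age effect and noise are added on top.
memory_change = -0.5 * tau_pos + 0.01 * (age - 70) + rng.normal(0, 0.3, n)

# Adjust for age, sex and education exactly as a linear model would.
X = np.column_stack([np.ones(n), tau_pos, age, sex, educ])
beta, *_ = np.linalg.lstsq(X, memory_change, rcond=None)
print(f"adjusted tau effect: {beta[1]:.2f}")
```

With enough data, the adjusted coefficient on tau positivity recovers the planted negative effect, which is the logic behind the standardized betas reported in the abstract.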

Keywords: alzheimer’s disease, braak I-II, in vivo biomarkers, memory, PET, tau

Procedia PDF Downloads 62
811 Servitization in Machine and Plant Engineering: Leveraging Generative AI for Effective Product Portfolio Management Amidst Disruptive Innovations

Authors: Till Gramberg

Abstract:

In the dynamic world of machine and plant engineering, stagnation in the growth of new product sales compels companies to reconsider their business models. The increasing shift toward service orientation, known as "servitization," along with challenges posed by digitalization and sustainability, necessitates an adaptation of product portfolio management (PPM). Against this backdrop, this study investigates the current challenges and requirements of PPM in this industrial context and develops a framework for the application of generative artificial intelligence (AI) to enhance agility and efficiency in PPM processes. The research approach of this study is based on a mixed-method design. Initially, qualitative interviews with industry experts were conducted to gain a deep understanding of the specific challenges and requirements in PPM. These interviews were analyzed using the Gioia method, painting a detailed picture of the existing issues and needs within the sector. This was complemented by a quantitative online survey. The combination of qualitative and quantitative research enabled a comprehensive understanding of the current challenges in the practical application of machine and plant engineering PPM. Based on these insights, a specific framework for the application of generative AI in PPM was developed. This framework aims to assist companies in implementing faster and more agile processes, systematically integrating dynamic requirements from trends such as digitalization and sustainability into their PPM process. Utilizing generative AI technologies, companies can more quickly identify and respond to trends and market changes, allowing for a more efficient and targeted adaptation of the product portfolio. The study emphasizes the importance of an agile and reactive approach to PPM in a rapidly changing environment. 
It demonstrates how generative AI can serve as a powerful tool to manage the complexity of a diversified and continually evolving product portfolio. The developed framework offers practical guidelines and strategies for companies to improve their PPM processes by leveraging the latest technological advancements while maintaining ecological and social responsibility. This paper significantly contributes to deepening the understanding of the application of generative AI in PPM and provides a framework for companies to manage their product portfolios more effectively and adapt to changing market conditions. The findings underscore the relevance of continuous adaptation and innovation in PPM strategies and demonstrate the potential of generative AI for proactive and future-oriented business management.

Keywords: servitization, product portfolio management, generative AI, disruptive innovation, machine and plant engineering

Procedia PDF Downloads 58
810 A Comparative Study of Environmental, Social and Economic Cross-Border Cooperation in Post-Conflict Environments: The Israel-Jordan Border

Authors: Tamar Arieli

Abstract:

Cross-border cooperation has long been hailed as a means of stabilizing and normalizing relations between former enemies. Cooperation in problem-solving and in realizing local interests in post-conflict environments can indeed serve as a basis for developing dialogue and meaningful relations between neighbors across borders; hence the potential for formerly sealed borders to generate local and national perceptions of interdependence and to act as a buffer against the resumption of conflict. Central questions for policy-makers and third parties are how to facilitate cross-border cooperation and which areas of cooperation best serve to normalize post-conflict border regions. The Israel-Jordan border functions as a post-conflict border: it has been peaceful since the 1994 Israel-Jordan peace treaty, yet cross-border relations are defined by the highly securitized nature of the border region and the ongoing Arab-Israel regional conflict. This case study is based on long-term qualitative research carried out in the border regions of both Israel and Jordan, which mapped and analyzed cross-border cooperation in a wide range of activities: social interactions sponsored by peace-facilitating NGOs, government-sponsored agricultural cooperation, municipally initiated emergency planning in cross-border continuous urban settings, private cross-border business ventures and various environmental cooperative initiatives. These cooperative initiatives are evaluated through multiple interviews carried out with initiators and partners in cross-border cooperation as well as analysis of documentation, funding and media. The cooperative interactions are compared on levels of local and official awareness and involvement as well as sustainability over time. This research identifies environmental cooperation as the most sustainable area of cross-border cooperation and as the most conducive to generating perceptions of regional interdependence. 
This is a variation on the ‘New Middle East’ vision of business-based cooperation leading to conflict amelioration and regional stability. Environmental cooperation, serving the public good rather than personal profit, enjoys social legitimization even in the face of the widespread anti-normalization sentiments common in the post-conflict environment. This insight is examined in light of philosophical and social aspects of the natural environment and its social perceptions. This research has theoretical implications for better understanding the dynamics of cooperation and conflict, as well as practical ramifications for practitioners in border region policy and management.

Keywords: borders, cooperation, post-conflict, security

Procedia PDF Downloads 295
809 Assessment of Seeding and Weeding Field Robot Performance

Authors: Victor Bloch, Eerikki Kaila, Reetta Palva

Abstract:

Field robots are an important tool for enhancing efficiency and decreasing the climatic impact of food production. A number of commercial field robots exist; however, since this technology is still new, the robots’ advantages and limitations, as well as methods for their optimal use, are still unclear. In this study, the performance of a commercial field robot for seeding and weeding was assessed. A 2-ha research sugar beet field with 0.5 m row width was used for testing, which included robotic sowing of sugar beet and weeding five times during the first two months of growth. About three and five percent of the field were used as untreated and chemically weeded control areas, respectively. Plant detection was based on the exact plant location without image processing. The robot was equipped with six seeding and weeding tools, including passive between-row harrow hoes and active hoes cutting inside rows between the plants, and it moved at a maximal speed of 0.9 km/h. The robot's performance was assessed by image processing. The field images were collected by an action camera at a height of 2 m with a resolution of 27 Mpixels installed on the robot, and by a drone with a 16 Mpixel camera flying at 4 m height. To detect plants and weeds, a YOLO model was trained with transfer learning from two available datasets. A preliminary analysis of the entire field showed that in the areas treated by the robot, the average weed density varied across the field from 6.8 to 9.1 weeds/m² (compared with 0.8 in the chemically treated area and 24.3 in the untreated area), the average weed density inside rows was 2.0-2.9 weeds/m (compared with 0 in the chemically treated area), and the emergence rate was 90-95%. Information about the robot's performance is highly important for the application of robotics to field tasks. 
With the help of the developed method, performance can be assessed several times during growth, according to the robotic weeding frequency. Farmers using it can know the field condition and the efficiency of the robotic treatment across the whole field. Farmers and researchers could develop optimal strategies for using the robot, such as seeding and weeding timing, robot settings, and plant and field parameters and geometry. Robot producers can obtain quantitative information from an actual working environment and improve the robots accordingly.
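One way to turn per-image detection counts into the per-area densities reported above is a simple normalisation by the camera's ground footprint. The counts and footprint below are hypothetical values for illustration, not outputs of the study's trained YOLO model:

```python
def weed_density(detections_per_image, footprint_m2):
    """Average weeds per square metre over a set of images.
    detections_per_image: per-image weed counts (e.g. from a trained
    detector such as YOLO); footprint_m2: ground area covered by one
    image, derived from camera height and field of view."""
    total = sum(detections_per_image)
    return total / (len(detections_per_image) * footprint_m2)

# Hypothetical counts from 4 images, each covering 2.5 m2 of ground.
counts = [18, 22, 15, 25]
print(round(weed_density(counts, 2.5), 2))  # mean count 20 over 2.5 m2 -> 8.0
```

Repeating this over image sets taken after each weeding pass gives the per-treatment density time series the abstract describes.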

Keywords: agricultural robot, field robot, plant detection, robot performance

Procedia PDF Downloads 57
808 Critical Analysis of International Protections for Children from Sexual Abuse and Examination of Indian Legal Approach

Authors: Ankita Singh

Abstract:

Sex trafficking and child pornography are borderless crimes that cannot be effectively prevented through the laws and efforts of one country alone, because prevention requires proper and smooth collaboration among countries. Eradication of international human trafficking syndicates, criminalisation of international cyber offenders, and an effective ban on child pornography are not possible without effective universal laws; hence, continuous collaboration of all countries is needed to adopt and routinely update such laws. Congregation of countries on an international platform is necessary from time to time, where they can simultaneously adopt international agendas and create powerful universal laws to prevent sex trafficking and child pornography in this modern digital era. In the past, some international steps have been taken through the Convention on the Rights of the Child (CRC) and through the Optional Protocol to the Convention on the Rights of the Child on the Sale of Children, Child Prostitution, and Child Pornography, but in reality, these measures are quite weak and are not capable of effectively protecting children from sexual abuse in this modern and highly advanced digital era. The uncontrolled growth of artificial intelligence (AI) and its misuse, the lack of proper legal jurisdiction over foreign child abusers and difficulties in their extradition, and improper control over the international trade of digital child pornographic content are some prominent issues which can only be controlled through new, effective and powerful universal laws. Due to a lack of effective international standards and of proper collaboration among countries, Indian laws are also not capable of taking effective action against child abusers. This research will be conducted through both doctrinal and empirical methods. 
Various literary sources will be examined, and a questionnaire survey will be conducted to analyse the effectiveness of international standards and Indian laws against child pornography. Participants in this survey will be Indian university students. In this work, the existing international norms made for protecting children from sexual abuse will be critically analysed. It will explore why effective and strong collaboration between countries is required in modern times. It will be analysed whether existing international steps are enough to protect children from being trafficked or subjected to pornography, and if these steps are not found to be sufficient, suggestions will be given on how international standards and protections can be made more effective and powerful in this digital era. The approach of India towards the existing international standards, the Indian laws protecting children from being subjected to pornography, and the contributions and capabilities of India in strengthening the international standards will also be analysed.

Keywords: child pornography, prevention of children from sexual offences act, the optional protocol to the convention on the rights of the child on the sale of children, child prostitution and child pornography, the convention on the rights of the child

Procedia PDF Downloads 21
807 A Quadratic Model to Early Predict the Blastocyst Stage with a Time Lapse Incubator

Authors: Cecile Edel, Sandrine Giscard D'Estaing, Elsa Labrune, Jacqueline Lornage, Mehdi Benchaib

Abstract:

Introduction: The use of incubators equipped with time-lapse technology in Artificial Reproductive Technology (ART) allows continuous surveillance. With morphokinetic parameters, algorithms are available to predict the potential outcome of an embryo. However, the proposed time-lapse algorithms do not take missing data into account, so some embryos cannot be classified. The aim of this work is to construct a predictive model that works even in the case of missing data. Materials and methods: Patients: A retrospective study was performed in the biology laboratory of reproduction at the hospital ‘Femme Mère Enfant’ (Lyon, France) between 1 May 2013 and 30 April 2015. Embryos (n = 557) obtained from couples (n = 108) were cultured in a time-lapse incubator (Embryoscope®, Vitrolife, Goteborg, Sweden). Time-lapse incubator: The morphokinetic parameters obtained during the first three days of embryo life were used to build the predictive model. Predictive model: A quadratic regression was performed between the number of cells and time: N = a·T² + b·T + c, where N is the number of cells at time T (in hours). The regression coefficients were calculated with Excel software (Microsoft, Redmond, WA, USA); a program in Visual Basic for Applications (VBA) (Microsoft) was written for this purpose. The quadratic equation was used to find a value that allows prediction of blastocyst formation: the synthetize value. The area under the curve (AUC) obtained from the ROC curve was used to assess the performance of the regression coefficients and the synthetize value. A cut-off value was calculated for each regression coefficient and for the synthetize value to obtain two groups in which the difference in blastocyst formation rate according to the cut-off values was maximal. The data were analyzed with SPSS (IBM, Chicago, IL, USA). Results: Among the 557 embryos, 79.7% had reached the blastocyst stage. 
The synthetize value corresponds to the value calculated at a time of 99 hours, for which the highest AUC was obtained. The AUC was 0.648 (p < 0.001) for regression coefficient ‘a’, 0.363 (p < 0.001) for ‘b’, 0.633 (p < 0.001) for ‘c’, and 0.659 (p < 0.001) for the synthetize value. The results are presented as blastocyst formation rate below the cut-off value versus blastocyst formation rate above the cut-off value. For regression coefficient ‘a’ the optimum cut-off value was -1.14×10⁻³ (61.3% versus 84.3%, p < 0.001), 0.26 for ‘b’ (83.9% versus 63.1%, p < 0.001), -4.4 for ‘c’ (62.2% versus 83.1%, p < 0.001) and 8.89 for the synthetize value (58.6% versus 85.0%, p < 0.001). Conclusion: This quadratic regression makes it possible to predict the outcome of an embryo even in the case of missing data. The three regression coefficients and the synthetize value could represent the identity card of an embryo. The ‘a’ coefficient represents the acceleration of cell division and the ‘b’ coefficient the speed of cell division. We could hypothesize that the ‘c’ coefficient represents the intrinsic potential of an embryo, which could depend on the oocyte from which the embryo originated. These hypotheses should be confirmed by studies analyzing the relationship between regression coefficients and ART parameters.
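A minimal sketch of the quadratic model and the synthetize value, assuming ordinary least-squares fitting (here via `np.polyfit` rather than the authors' Excel/VBA implementation) and hypothetical annotation times:

```python
import numpy as np

def quadratic_coeffs(times, cell_counts):
    """Fit N = a*T^2 + b*T + c to (time in hours, cell count) pairs.
    A least-squares fit needs only >= 3 annotated points, so it still
    works when some time-lapse annotations are missing."""
    a, b, c = np.polyfit(times, cell_counts, 2)
    return a, b, c

def synthetize_value(a, b, c, t=99.0):
    """Predicted cell count at T = 99 h, the time the authors report
    as yielding the highest AUC for predicting blastocyst formation."""
    return a * t**2 + b * t + c

# Hypothetical annotations generated from a known curve (a=0.001,
# b=0.1, c=1), so the fit should recover those coefficients.
times = [0, 20, 40, 60, 72]
cells = [0.001 * t**2 + 0.1 * t + 1 for t in times]
a, b, c = quadratic_coeffs(times, cells)
print(round(synthetize_value(a, b, c), 3))  # ~20.701
```

Because the fit uses whatever points are available, dropping one annotation time still yields coefficients and a synthetize value, which is the missing-data robustness the abstract emphasizes.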

Keywords: ART procedure, blastocyst formation, time-lapse incubator, quadratic model

Procedia PDF Downloads 295
806 Exploring the Correlation between Population Distribution and Urban Heat Island under Urban Data: Taking Shenzhen Urban Heat Island as an Example

Authors: Wang Yang

Abstract:

Shenzhen is a modern city shaped by China's reform and opening-up policy, and the development of its urban morphology has proceeded under the administration of the Chinese government. The city's planning paradigm is primarily affected by spatial structure and human behavior. The urban agglomeration is divided into several groups and centers, and under this effect the underlying laws of city development are easily neglected. With the continuous development of the internet, big data technology has been introduced in China, and data mining and data analysis have become important tools in municipal research. Data mining has been used to collect and clean urban data such as business data, traffic data and population data. Prior to data mining, government data were collected by traditional means and then analyzed through city-relationship research, delaying the timeliness of urban analysis; internet-based data, by contrast, update very quickly. Mined city points of interest (POI) serve as a data source affecting city design, while satellite remote sensing is used as a reference object; with analysis conducted in both directions, the administrative paradigm of government is broken and urban research is restored. Therefore, the use of data mining in urban analysis is very important. The satellite remote sensing data of Shenzhen from July 2018, measured by the MODIS sensor, were used to perform land surface temperature inversion and to analyze the heat island distribution of Shenzhen. This article acquired and classified data on Shenzhen using web crawler technology. Data on the Shenzhen heat island and points of interest were simulated and analyzed on a GIS platform to discover the main features of the influence of functional area distribution. Shenzhen extends in an east-west direction. 
The city’s main streets are also determined according to the direction of city development. Therefore, it is determined that the functional area of the city is also distributed in the east-west direction. The urban heat island can express the heat map according to the functional urban area. Regional POI has correspondence. The research result clearly explains that the distribution of the urban heat island and the distribution of urban POIs are one-to-one correspondence. Urban heat island is primarily influenced by the properties of the underlying surface, avoiding the impact of urban climate. Using urban POIs as analysis object, the distribution of municipal POIs and population aggregation are closely connected, so that the distribution of the population corresponded with the distribution of the urban heat island.
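The grid-cell comparison of POI density against land surface temperature described above can be sketched as follows. This is a minimal illustration with synthetic data, not the study's Shenzhen dataset; the grid size and the coefficients coupling POI density to temperature are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 20 x 20 grid over a study area:
# poi_density - POI count per cell (as from crawled point-of-interest data)
# lst         - land surface temperature per cell (as from MODIS inversion), deg C
poi_density = rng.poisson(lam=30, size=(20, 20)).astype(float)
lst = 28.0 + 0.1 * poi_density + rng.normal(0.0, 0.5, size=(20, 20))

# Flatten both grids and measure how closely the two spatial
# distributions track each other (Pearson correlation).
r = np.corrcoef(poi_density.ravel(), lst.ravel())[0, 1]
print(f"POI density vs. LST correlation: r = {r:.2f}")
```

A strong positive `r` over the grid is the kind of evidence that would support the claimed correspondence between POI distribution and the heat island map.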

Keywords: POI, satellite remote sensing, population distribution, urban heat island thermal map

Procedia PDF Downloads 90
805 Chinese Early Childhood Parenting Style as a Moderator of the Development of Social Competence Based on Mindreading

Authors: Arkadiusz Gut, Joanna Afek

Abstract:

The first issue that we discuss in this paper is a battery of research demonstrating that culture influences children’s performance in tasks testing their theory of mind, also known as mindreading. We devote special attention to research done within Chinese culture, namely studies with children who speak Cantonese or Mandarin natively and grow up in an environment dominated by the Chinese model of informal home education. Our attention focuses on the differences in the development and functioning of social abilities and competences between children from China and the West. Another matter we turn to is a description of the nature of Chinese early childhood education. We suggest that the differences between the Chinese model and that of the West reveal a set of modifiers responsible for the variation observed in empirical research on children’s theory of mind (mindreading). The modifiers we identify are the following: (1) early socialization – that is, the transformation of the child into a member of the family and society, shaped by the social and physical environment; (2) the Confucian model of education – that is, the Chinese writing system and tradition that determine a certain way of education in China; (3) the authoritarian style of upbringing – that is, reinforcing conformism, discouraging the voicing of private opinions, and demanding respect for elders; (4) the modesty of children and the protectiveness of parents – that is, obedience as a desired characteristic in the child and overprotectiveness of parents, especially mothers; and (5) gender differences – that is, different educational styles for girls and boys. In our study, we conduct a thorough meta-analysis of empirical data on the development of mindreading and ToM (children’s theory of mind), as well as a cultural analysis of early childhood education in China.
We support our analyses with questionnaire and narrative studies conducted in China that use the ‘Children’s Social Understanding Scale’ questionnaire, conversations based on the so-called ‘Scenarios Presented to Parents’, and questions designed to measure the ‘my child and I’ relation. With this research we aim to identify the factors in early childhood education that serve as moderators explaining the nature of the development and functioning of social cognition based on mindreading in China. Additionally, our study provides valuable insight for comparative research on social cognition between China and the West.

Keywords: early childhood education, China, mindreading, parenting

Procedia PDF Downloads 379
804 Formal Development of Electronic Identity Card System Using Event-B

Authors: Tomokazu Nagata, Jawid Ahmad Baktash

Abstract:

The goal of this paper is to explore the use of formal methods for an Electronic Identity Card System. Nowadays, one of the core research directions in a constantly growing distributed environment is the improvement of the communication process, and the responsibility for proper verification becomes crucial. Formal methods can play an essential role in the development and testing of systems. The paper presents two different methodologies for assessing correctness. Our first approach employs abstract interpretation techniques to create a trace-based model of the Electronic Identity Card System. The model was used to build a semi-decidable procedure for verifying the system model. We also developed the code for the eID system, covering three parts: login to the system with an acknowledgment sent from the user side, receipt of all information from the server side, and logout from the system. The new concepts of impasse and spawned sessions that we introduced led our research to original statements about the intruder’s knowledge and eID system coding with respect to secrecy. Furthermore, we demonstrated that there is a bound on the number of sessions needed for the analysis of the system. Electronic identity (eID) cards promise to supply a universal, nation-wide mechanism for user authentication. Most European countries have started to deploy eID for government and private-sector applications. Are government-issued electronic ID cards the proper way to authenticate users of online services? We use the eID project as a showcase to discuss eID from an application perspective. The new eID card has interesting design features: it is contactless, it aims to protect people’s privacy to the extent possible, and it supports cryptographically strong mutual authentication between users and services. Privacy features include support for pseudonymous authentication and per-service controlled access to individual data items.
The article discusses key concepts, the eID infrastructure, observed and expected problems, and open questions. The core technology seems ready for prime time, and government projects are deploying it to the masses, but application issues may hamper eID adoption for online applications.

Keywords: eID, event-B, Pro-B, formal method, message passing

Procedia PDF Downloads 216
803 Inflammatory Changes Caused by Lipopolysaccharide in Odontoblasts

Authors: Virve Pääkkönen, Heidi M. Cuffaro, Leo Tjäderhane

Abstract:

Objectives: Odontoblasts form the outermost cell layer of the dental pulp and produce the dentin. The importance of bacterial products, e.g., lipoteichoic acid (LTA), a cell wall component of Gram-positive bacteria, and lipopolysaccharide (LPS), a cell wall component of Gram-negative bacteria, has been indicated in the pathogenesis of pulpitis. Gram-positive bacteria are more prevalent in superficial carious lesions, while the amount of Gram-negative bacteria is higher in deep lesions. The objective of this study was to investigate the effect of these bacterial products on the inflammatory response of pulp tissue. Interleukins (ILs) were of special interest, as various ILs have been observed in the dentin-pulp complex of carious teeth in vivo. Methods: A tissue culture method was used for testing the effect of LTA and LPS on human odontoblasts. An enzymatic isolation technique was used to extract living odontoblasts for cell cultures. DNA microarray and quantitative PCR (qPCR) were used to characterize the changes in the expression profile of the tissue-cultured odontoblasts. Laser microdissection was used to cut healthy and affected dentin and the odontoblast layer directly under the carious lesion for experiments. A cytokine array detecting 80 inflammatory cytokines was used to analyze the protein content of conditioned culture media as well as dentin and odontoblasts from the carious teeth. Results: LPS caused increased gene expression of IL-1α and -8 and decreased expression of IL-1β, -12, -15 and -16 after 1 h of treatment, while after 24 h of treatment a decrease of IL-8, -11 and -23 mRNAs was observed. LTA treatment caused cell death in the tissue-cultured odontoblasts but not in the cell culture. The cytokine array revealed at least 2-fold down-regulation of IL-1β, -10 and -12 in response to LPS treatment.
Cytokine array of odontoblasts of carious teeth, as well as LPS-treated tissue-cultured odontoblasts, revealed increased protein amounts of IL-16, epidermal growth factor (EGF), angiogenin and IGFBP-1 as well as decreased amount of fractalkine. In carious dentin, increased amount of IL-1β, EGF and fractalkine was observed, as well as decreased level of GRO-1 and HGF. Conclusion: LPS caused marked changes in the expression of inflammatory cytokines in odontoblasts. Similar changes were observed in the odontoblasts cut directly under the carious lesion. These results help to shed light on the inflammatory processes happening during caries.

Keywords: inflammation, interleukin, lipoteichoic acid, odontoblasts

Procedia PDF Downloads 197
802 Non Destructive Ultrasound Testing for the Determination of Elastic Characteristics of AlSi7Zn3Cu2Mg Foundry Alloy

Authors: A. Hakem, Y. Bouafia

Abstract:

Characterization of the materials used for various mechanical components is of great importance in their design. Several studies have been conducted by various authors to improve their physical and/or chemical properties in general and their mechanical or metallurgical properties in particular. The foundry alloy AlSi7Zn3Cu2Mg is one of the main materials used in various mechanisms and industrial applications. Obtaining a reliable product is not an easy task; results proposed by different authors are sometimes contradictory. Due to their high mechanical characteristics, these alloys are widely used in engineering. Silicon improves casting properties, and magnesium allows heat treatment. It is thus possible to obtain various degrees of hardening and therefore an interesting compromise between tensile strength and yield strength, on the one hand, and elongation, on the other. These mechanical characteristics can be further enhanced by a series of mechanical or heat treatments. Owing to their light weight coupled with high mechanical characteristics, aluminum alloys are widely used in the automotive and aircraft industries. The present study focuses on the influence of heat treatments, which cause significant microstructural changes (usually hardening) through variation of the annealing temperature in increments of 10°C and 20°C, on the evolution of the main elastic characteristics, the strength, the ductility and the structural characteristics of the AlSi7Zn3Cu2Mg foundry alloy sand-cast by gravity. The elastic properties are determined in three directions for each specimen of dimensions 200×150×20 mm³ by the ultrasonic method based on acoustic (elastic) waves. The hardness, microhardness and structural characteristics are evaluated by a non-destructive method. The aim of this work is to study the hardening ability of the AlSi7Zn3Cu2Mg alloy by considering ten states. To improve the mechanical properties obtained with the raw casting, heat treatment is used for structural hardening; the addition of magnesium is necessary to increase the sensitivity to this specific heat treatment. The treatment consists of homogenization, which generates diffusion of atoms in a substitution solid solution, in a hardening furnace at 500°C for 8 h, followed immediately by quenching in water at room temperature (20 to 25°C), then natural ageing for 17 h at room temperature and artificial ageing at different annealing temperatures (150, 160, 170, 180, 190, 200, 220 and 240°C) for 20 h in an annealing oven. The specimens were allowed to cool inside the oven.
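For an isotropic solid, the elastic constants follow from the measured longitudinal and shear ultrasonic wave speeds through standard relations. The sketch below uses typical handbook values for aluminum as placeholder inputs, not measurements from the alloy in this study:

```python
# Standard isotropic relations linking ultrasonic wave speeds to elastic
# constants. Input values are generic aluminum figures (assumed), not the
# AlSi7Zn3Cu2Mg measurements.
rho = 2700.0   # density, kg/m^3 (assumed)
v_l = 6320.0   # longitudinal wave speed, m/s (assumed)
v_t = 3130.0   # shear (transverse) wave speed, m/s (assumed)

shear_modulus = rho * v_t**2                               # G = rho * vT^2
poisson = (v_l**2 - 2 * v_t**2) / (2 * (v_l**2 - v_t**2))  # Poisson's ratio
young = 2 * shear_modulus * (1 + poisson)                  # E = 2G(1 + nu)

print(f"G  = {shear_modulus / 1e9:.1f} GPa")
print(f"nu = {poisson:.3f}")
print(f"E  = {young / 1e9:.1f} GPa")
```

With these inputs the relations give a Young's modulus near the familiar ~70 GPa of aluminum, which is the kind of consistency check one would apply to the three-direction measurements described above.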

Keywords: aluminum, foundry alloy, magnesium, mechanical characteristics, silicon

Procedia PDF Downloads 245
801 Applying the Global Trigger Tool in German Hospitals: A Retrospective Study in Surgery and Neurosurgery

Authors: Mareen Brosterhaus, Antje Hammer, Steffen Kalina, Stefan Grau, Anjali A. Roeth, Hany Ashmawy, Thomas Gross, Marcel Binnebosel, Wolfram T. Knoefel, Tanja Manser

Abstract:

Background: The identification of critical incidents in hospitals is an essential component of improving patient safety. To date, various methods have been used to measure and characterize such critical incidents. These methods are often viewed by physicians and nurses as external quality assurance, which creates obstacles to the reporting of events and the implementation of recommendations in practice. One way to overcome this problem is to use tools that directly involve staff in measuring indicators of the quality and safety of care in the department. One such instrument is the global trigger tool (GTT), which helps physicians and nurses identify adverse events by systematically reviewing randomly selected patient records. Based on so-called ‘triggers’ (warning signals), indications of adverse events can be identified. While the tool is already used internationally, its implementation in German hospitals has been very limited. Objectives: This study aimed to assess the feasibility and potential of the global trigger tool for identifying adverse events in German hospitals. Methods: A total of 120 patient records were randomly selected from two surgical departments and one neurosurgery department of three university hospitals in Germany over a period of two months per department between January and July 2017. The records were reviewed using an adaptation of the German version of the Institute for Healthcare Improvement Global Trigger Tool to identify triggers and adverse event rates per 1000 patient-days and per 100 admissions. The severity of adverse events was classified using the National Coordinating Council for Medication Error Reporting and Prevention index. Results: A total of 53 adverse events were detected in the three departments. This corresponded to adverse event rates of 25.5 to 72.1 per 1000 patient-days and 25.0 to 60.0 per 100 admissions across the three departments.
98.1% of the identified adverse events were associated with non-permanent harm, either without (Category E, 71.7%) or with (Category F, 26.4%) the need for prolonged hospitalization. One adverse event (1.9%) was associated with potentially permanent harm to the patient. We also identified practical challenges in the implementation of the tool, such as the need to adapt the global trigger tool to the respective department. Conclusions: The global trigger tool is feasible and an effective instrument for quality measurement when adapted to departmental specifics. Based on our experience, we recommend continuous use of the tool, thereby directly involving clinicians in quality improvement.
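The two rate measures reported above are simple ratios of event counts to exposure. A minimal sketch with made-up counts (not the study's data) looks like this:

```python
# Illustrative calculation of the two GTT rate measures; all counts below
# are hypothetical examples, not figures from the study.
adverse_events = 20   # events found in the record review (hypothetical)
patient_days = 650    # total patient-days across reviewed records (hypothetical)
admissions = 40       # number of reviewed admissions (hypothetical)

rate_per_1000_patient_days = adverse_events / patient_days * 1000
rate_per_100_admissions = adverse_events / admissions * 100

print(f"{rate_per_1000_patient_days:.1f} per 1000 patient-days")   # 30.8
print(f"{rate_per_100_admissions:.1f} per 100 admissions")         # 50.0
```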

Keywords: adverse events, global trigger tool, patient safety, record review

Procedia PDF Downloads 235
800 Establishing a Communication Framework in Response to the COVID-19 Pandemic in a Tertiary Government Hospital in the Philippines

Authors: Nicole Marella G. Tan, Al Joseph R. Molina, Raisa Celine R. Rosete, Soraya Elisse E. Escandor, Blythe N. Ke, Veronica Marie E. Ramos, Apolinario Ericson B. Berberabe, Jose Jonas D. del Rosario, Regina Pascua-Berba, Eileen Liesl A. Cubillan, Winlove P. Mojica

Abstract:

Emergency risk and health communications play a vital role in any pandemic response. However, the Philippine General Hospital (PGH) lacked a system of information delivery that could effectively fulfill the hospital’s communication needs as a COVID-19 referral hospital. This study aimed to describe the establishment of a communication framework for information dissemination within a tertiary government hospital during the COVID-19 pandemic and to evaluate the perceived usefulness of its outputs. This is a mixed quantitative-qualitative study with two phases. Phase 1 documented the formation and responsibilities of the Information Education Communication (IEC) Committee. Phase 2 evaluated its outputs and outcomes through a hospital-wide survey of 528 healthcare workers (HCWs) using a pre-tested questionnaire. In-depth explanations were obtained from five focus group discussions (FGDs) amongst various HCW subgroups. Descriptive analysis was done using Stata 16, while qualitative data were synthesized thematically. Communication practices in PGH were loosely structured at the beginning of the pandemic until the establishment of the IEC Committee, which was well-represented by the concerned stakeholders. Nine types of infographics tackled different aspects of the hospital’s health operations after thorough input from the offices concerned. Internal and external feedback mechanisms ensured the accuracy of the infographics. The majority of survey respondents (98.67%) perceived these as useful in their work or daily lives. FGD participants cited the relevance of the infographics to their occupations, suggested improvements, and hoped that these efforts would be continued in the future. Sustainability and comprehensive reach were the main concerns in this undertaking. The PGH COVID-19 IEC framework was developed through trial and testing, as there were no existing formal structures to communicate health risks and properly direct HCWs in the chaotic time of a pandemic.
It is a continuously evolving framework which is perceived as useful by HCWs and is hoped to be sustained in the future.

Keywords: COVID-19, pandemic, health communication, infographics, social media

Procedia PDF Downloads 110
799 Software User Experience Enhancement through Collaborative Design

Authors: Shan Wang, Fahad Alhathal, Daniel Hobson

Abstract:

User-centered design skills play an important role in crafting a positive and intuitive user experience for software applications. Embracing a user-centric design approach involves understanding the needs, preferences, and behaviors of the end-users throughout the design process. This mindset not only enhances the usability of the software but also fosters a deeper connection between the digital product and its users. This paper describes a six-month knowledge exchange collaboration in 2023 between an academic institution and an industry partner that aimed to improve the user experience of a digital platform used as a knowledge management tool: to understand users' preferences for features, identify sources of frustration, and pinpoint areas for enhancement. The research employed co-design workshops, one of the most effective methods of implementing user-centered design, to test user onboarding experiences through the active participation of users in the design process. More specifically, in January 2023, we organized eight workshops with a diverse group of 11 individuals, accumulating a total of 11 hours of qualitative data in both video and audio formats. Subsequently, we analyzed user journeys, identifying common issues and potential areas for improvement. This analysis was pivotal in guiding the knowledge management software team in prioritizing feature enhancements and design improvements. Employing a user-centered design thinking process, we developed a series of graphic design solutions in collaboration with the software management tool company. These solutions targeted refining onboarding user experiences, workplace interfaces, and interaction design. Some of these design solutions were translated into tangible interfaces for the knowledge management tool.
By actively involving users in the design process and valuing their input, developers can create products that are not only functional but also resonate with the end-users, ultimately leading to greater success in the competitive software landscape. In conclusion, this paper not only contributes insights into designing onboarding user experiences for software within a co-design approach but also presents key theories on leveraging the user-centered design process in software design to enhance overall user experiences.

Keywords: user experiences, co-design, design process, knowledge management tool, user-centered design

Procedia PDF Downloads 42
798 Vibroacoustic Modulation of Wideband Vibrations and its Possible Application for Windmill Blade Diagnostics

Authors: Abdullah Alnutayfat, Alexander Sutin, Dong Liu

Abstract:

Wind turbines have become one of the most popular means of energy production. However, blade failures and maintenance costs have become significant issues in the wind power industry, so it is essential to detect incipient blade defects to avoid collapse of the blades and the structure. This paper aims to apply modulation of high-frequency blade vibrations by low-frequency blade rotation, which is close to the known Vibro-Acoustic Modulation (VAM) method. The high-frequency wideband blade vibration is produced by the interaction of the blade surfaces with environmental air turbulence, and the low-frequency modulation is produced by alternating bending stress due to gravity. The low-frequency load on rotating wind turbine blades ranges between 0.2 and 0.4 Hz and can reach up to 2 Hz in strong wind. The main difference between this study and previous work on VAM methods is the use of a wideband vibration signal from the blade's natural vibrations. Different features of the vibroacoustic modulation are considered using a simple model of a breathing crack. The model uses a simple mechanical oscillator whose parameters are varied by the low-frequency blade rotation. During the blade's operation, the internal stress caused by the weight of the blade modifies the crack's elasticity and damping. A laboratory experiment using steel samples demonstrates the possibility of VAM using a wideband probe noise signal. A small-amplitude cyclic load was applied as a pump wave to the damaged test sample, and a small transducer generated a wideband probe wave. Demodulation of the received signal was conducted using the Detection of Envelope Modulation on Noise (DEMON) approach. In addition, the experimental results were compared with the modulation index (MI) technique using a harmonic pump wave. The wideband and traditional VAM methods demonstrated similar sensitivity for early detection of invisible cracks.
Importantly, employing a wideband probe signal with the DEMON approach speeds up and simplifies testing since it eliminates the need to conduct tests repeatedly for various harmonic probe frequencies and to adjust the probe frequency.
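A DEMON-style demodulation of a wideband probe can be sketched as follows. This uses a square-law detector variant on a synthetic signal; the sampling rate, pump frequency, and modulation depth are illustrative choices, not the experiment's values:

```python
import numpy as np

# DEMON-style demodulation sketch (square-law detector variant):
# wideband probe noise amplitude-modulated by a low-frequency pump.
# All parameters are illustrative, not the experiment's values.
fs = 2000.0                        # sampling rate, Hz (assumed)
t = np.arange(0, 20.0, 1.0 / fs)   # 20 s record
f_pump = 5.0                       # low-frequency pump, Hz (assumed)

rng = np.random.default_rng(1)
probe = rng.normal(size=t.size)                    # wideband probe noise
modulated = (1.0 + 0.4 * np.sin(2 * np.pi * f_pump * t)) * probe

detector = modulated**2                            # square-law envelope detector
detector -= detector.mean()                        # remove the DC component
spectrum = np.abs(np.fft.rfft(detector))
freqs = np.fft.rfftfreq(detector.size, d=1.0 / fs)

# Search only the low-frequency band where the pump lives.
band = freqs < 50.0
detected = freqs[band][np.argmax(spectrum[band])]
print(f"detected modulation frequency: {detected:.2f} Hz")
```

The peak of the envelope spectrum recovers the pump frequency without any prior tuning to it, which illustrates why the wideband-probe approach avoids repeating tests over many harmonic probe frequencies.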

Keywords: vibro-acoustic modulation, detection of envelope modulation on noise, damage, turbine blades

Procedia PDF Downloads 86