Search results for: system reorganization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17664

624 External Validation of Established Pre-Operative Scoring Systems in Predicting Response to Microvascular Decompression for Trigeminal Neuralgia

Authors: Kantha Siddhanth Gujjari, Shaani Singhal, Robert Andrew Danks, Adrian Praeger

Abstract:

Background: Trigeminal neuralgia (TN) is a heterogeneous pain syndrome characterised by short paroxysms of lancinating facial pain in the distribution of the trigeminal nerve, often triggered by normally innocuous stimuli. TN has a low prevalence of less than 0.1%, of which 80% to 90% is caused by compression of the trigeminal nerve by an adjacent artery or vein. The root entry zone of the trigeminal nerve is most sensitive to neurovascular conflict (NVC), causing dysmyelination. Whilst microvascular decompression (MVD) is an effective treatment for TN with NVC, not all patients achieve long-term pain relief. Pre-operative scoring systems by Panczykowski and Hardaway have been proposed but have not been externally validated. These pre-operative scoring systems are composite scores calculated according to the subtype of TN, the presence and degree of neurovascular conflict, and the response to medical treatments. There is discordance between neurosurgeons and radiologists in the assessment of NVC identified on pre-operative magnetic resonance imaging (MRI). To the best of our knowledge, the prognostic impact of this difference in interpretation on MVD has not previously been investigated in the form of a composite scoring system such as those suggested by Panczykowski and Hardaway. Aims: This study aims to identify prognostic factors and externally validate the proposed scoring systems by Panczykowski and Hardaway for TN. A secondary aim is to investigate the prognostic difference between a neurosurgeon's and a radiologist's interpretation of NVC on MRI. Methods: This retrospective cohort study included 95 patients who underwent de novo MVD in a single neurosurgical unit in Melbourne. Data were recorded from patients' hospital records and neurosurgeons' correspondence from perioperative clinic reviews.
Patient demographics, type of TN, distribution of TN, response to carbamazepine, and neurosurgeon and radiologist interpretations of NVC on MRI were clearly described prospectively and preoperatively in the correspondence. Scoring systems published by Panczykowski et al. and Hardaway et al. were used to determine composite scores, which were compared with the recurrence of TN recorded during follow-up over 1 year. Categorical data were analysed using Pearson chi-square testing; independent numerical and nominal data were analysed with logistic regression. Results: Logistic regression showed that a Panczykowski composite score of greater than 3 points was associated with a higher likelihood of a pain-free outcome 1 year post-MVD, with an OR of 1.81 (95% CI 1.41-2.61, p=0.032). The composite score using the neurosurgeon's impression of NVC had an OR of 2.96 (95% CI 2.28-3.31, p=0.048). A Hardaway composite score of greater than 2 points was associated with a higher likelihood of a pain-free outcome 1 year post-MVD, with an OR of 3.41 (95% CI 2.58-4.37, p=0.028). The composite score using the neurosurgeon's impression of NVC had an OR of 3.96 (95% CI 3.01-4.65, p=0.042). Conclusion: Composite scores developed by Panczykowski and Hardaway were validated for the prediction of response to MVD in TN. A composite score based on the neurosurgeon's interpretation of NVC on MRI, when compared with the radiologist's, showed a stronger correlation with pain-free outcomes 1 year post-MVD.
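The odds ratios quoted above come from logistic regression; for a single binary predictor (e.g., composite score above the cut-off vs. pain-free at 1 year), the odds ratio reduces to the cross-product ratio of a 2x2 table, with a Wald confidence interval computed on the log scale. A minimal sketch in Python, using invented illustrative counts rather than the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
        a = exposed with outcome,    b = exposed without outcome,
        c = unexposed with outcome,  d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: composite score > 3 cross-tabulated with pain-free status
or_, lo, hi = odds_ratio_ci(a=30, b=10, c=25, d=30)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 3.60
```

A full logistic regression (as used in the study) additionally adjusts for covariates, but the single-predictor case above conveys how an OR with its CI is obtained.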

Keywords: de novo microvascular decompression, neurovascular conflict, prognosis, trigeminal neuralgia

Procedia PDF Downloads 75
623 Optimizing Usability Testing with Collaborative Method in an E-Commerce Ecosystem

Authors: Markandeya Kunchi

Abstract:

Usability testing (UT) is one of the vital steps in the user-centred design (UCD) process when designing a product. In an e-commerce ecosystem, UT becomes primary as new products, features, and services are launched very frequently, and the company incurs losses if an unusable, inefficient product is put on the market and rejected by customers. This paper tries to answer why UT is important in the product life-cycle of an e-commerce ecosystem. Secondary user research was conducted to find out the work patterns, development methods, types of stakeholders, technology constraints, etc. of a typical e-commerce company. Qualitative user interviews were conducted with product managers and designers to find out the structure, project planning, product management method, and role of the design team in a mid-level company. The paper addresses the usual apprehensions of a company about inculcating UT within the team, identifying factors like monetary resources, the lack of a usability expert, narrow timelines, and a lack of understanding from higher management as some primary reasons. Outsourcing UT to vendors is also very prevalent among mid-level e-commerce companies, but it has its own severe repercussions: very little team involvement, huge cost, misinterpretation of the findings, elongated timelines, and a lack of empathy towards the customer. The shortfalls of having no UT process in place within the team, or of conducting UT through vendors, are bad user experiences for customers interacting with the product and badly designed products which are neither useful nor utilitarian. As a result, companies see dipping conversion rates in apps and websites, huge bounce rates, and increased uninstall rates. Thus, there was a need for a leaner UT system in place which could solve all these issues for the company. This paper highlights optimizing the UT process with a collaborative method.
The degree of optimization and the structure of the collaborative method are the highlights of this paper. The collaborative method of UT is one in which the centralised design team of the company takes responsibility for conducting and analysing the UT. The UT is usually formative: designers take the findings into account and use them in the ideation process. The success of the collaborative method of UT is due to its ability to sync with the product management method employed by the company or team. The collaborative method focuses on engaging various teams (design, marketing, product, administration, IT, etc.), each with its own defined roles and responsibilities, in conducting a smooth UT with users in-house. The paper finally highlights the positive results of the collaborative UT method after conducting more than 100 in-lab interviews with users across the different lines of business. These include improved interaction between stakeholders and the design team, empathy towards users, improved design iteration, better sanity checks of design solutions, optimization of time and money, and effective and efficient design solutions. The future scope of collaborative UT is to make this method leaner by reducing the number of days to complete the entire project, starting from planning between teams to publishing the UT report.

Keywords: collaborative method, e-commerce, product management method, usability testing

Procedia PDF Downloads 120
622 Barriers and Facilitators of Implementing Digital Mental Health Resources in Underserved Regions of Ontario during the COVID-19 Pandemic

Authors: Samaneh Abedini, Diana Urajnik, Nicole Naccarato

Abstract:

A high prevalence of mental health problems was observed in marginalized youth living in underserved regions of Ontario during the COVID-19 pandemic. To address this issue, a growing number of community-based traditional mental health services are offering digital mental health resources due to their accessibility, affordability, and scalability. The feasibility of providing these resources in underserved regions has been examined by researchers rather than by representatives of effective services within a mental health system. Indeed, digitalized mental health content is not routinely embedded within the services of local mental health organizations in Northern Ontario, where it could make a substantial impact. To date, many technology-based mental health initiatives have not been effectively implemented in this region. The obstacles associated with implementing digitalized mental health resources in Northern Ontario may be unique to that region. Thus, specific context-based considerations might need to be applied when developing and implementing digital resources by regional mental health organizations in Northern Ontario. The target population was child-serving organizations situated in northeastern Ontario, specifically within Greater Sudbury and the Sudbury District. A sample of six organizations was selected, with representation from the mental health, social, and healthcare sectors. The project supervisor was in a unique position to access the organizations by virtue of existing relationships with the practice and lay communities at large. Thus, recruitment was conducted through professional outreach in partnership with the Centre for Rural and Northern Health Research (CRaNHR). Semi-structured interviews were conducted with 1-2 key personnel (e.g., administrator, clinician) from participating organizations. Audio recordings from the semi-structured interviews were transcribed verbatim and thematically analyzed with the support of NVivo.
Thematic analysis of the data resulted in a total of 13 excerpts, which were categorized into two major themes: 1) digital mental health services as a valuable resource for organizations both during and after the pandemic, and 2) barriers and facilitators to a successful implementation of digital mental health resources in northern Ontario. Four secondary themes were identified: 1) perceived barriers to implementing digital mental health resources within the services offered by mental health agencies; 2) acceptability and feasibility of digital health resources for people living in northern Ontario; 3) data security, safety, and risk; and 4) connecting with clients. The employees of mental health organizations in northern Ontario considered digital mental health resources generally acceptable to youth. However, they raised several concerns that may affect their implementation into routine practice and service delivery. The implementation of digital systems should be simple and straightforward and should enhance rather than hinder clinical workflows for staff. A clear plan for implementing technological services is also required for the successful adoption of digital systems, and staff views must be considered throughout adoption and implementation.

Keywords: COVID-19 pandemic, digital mental health resources, Ontario, underserved

Procedia PDF Downloads 103
621 Networked Media, Citizen Journalism and Political Participation in Post-Revolutionary Tunisia: Insight from a European Research Project

Authors: Andrea Miconi

Abstract:

The research will focus on the results of the Tempus European Project eMEDia, dedicated to cross-media journalism. The project is funded by the European Commission and involves four European partners - IULM University, Tampere University, University of Barcelona, and the Mediterranean network Unimed - and three Tunisian universities - IPSI La Manouba, Sfax and Sousse - along with the Tunisian Ministry for Higher Education and the National Syndicate of Journalists. The focus on the Tunisian condition is basically due to the role played by digital activists in its recent history. The research is dedicated to the relationship between political participation, news-making practices, and the spread of social media as it affects Tunisian society. As we know, Tunisia during the Arab Spring was widely considered a laboratory for analyzing the use of new technologies for political participation. Nonetheless, the literature about the Arab Spring actually fell short in explaining the genesis of the phenomenon, on the one hand by isolating technologies as a causal factor in the spread of demonstrations, and on the other by analyzing the North African condition through a biased perspective. Nowadays, it is interesting to focus on the consolidation of the information environment three years after the uprisings. What is relevant is that only a close, in-depth analysis of Tunisian society can provide an explanation of its history, and namely of the role of digital media in the overall evolution of the political system. That is why the research is based on different methodologies: a desk stage, interviews, and in-depth analysis of communication practices. Networked journalism is the condition created by technological innovation in news-making activities: a condition under which professional journalists can no longer be considered the only players in the information arena, and new skills must be developed.
Along with democratization, nonetheless, so-called citizen journalism is also likely to produce some ambiguous effects, such as the lack of professional standards and the spread of information cascades, which may prove particularly dangerous in an evolving media market such as the Tunisian one. This is why, according to the project, a new profile must be defined which is able to manage this new condition and which can hardly be reduced to the parameters of traditional journalistic work. Rather than simply using new devices for news visualization, communication professionals must also be able to dialogue with all the new players and to accept the decentralized nature of digital environments. This networked nature of news-making seemed to emerge during the Tunisian revolution, when bloggers, journalists, and activists used to retweet each other. Nonetheless, this intensification of communication exchange was inspired by the political climax of the uprising, while all media, by definition, are also supposed to have some effect on people's state of mind, culture, and daily life routines. That is why it is worth analyzing the consolidation of these practices in a normal, post-revolutionary situation.

Keywords: cross-media, education, Mediterranean, networked journalism, social media, Tunisia

Procedia PDF Downloads 204
620 Innovations and Challenges: Multimodal Learning in Cybersecurity

Authors: Tarek Saadawi, Rosario Gennaro, Jonathan Akeley

Abstract:

There is a rapidly growing demand for professionals to fill positions in cybersecurity. This is recognized as a national priority both by government agencies and the private sector. Cybersecurity is a very wide technical area which encompasses all measures that can be taken in an electronic system to prevent criminal or unauthorized use of data and resources. This requires defending computers, servers, networks, and their users from any kind of malicious attack. The need to address this challenge has been recognized globally but is particularly acute in the New York metropolitan area, home to some of the largest financial institutions in the world, which are prime targets of cyberattacks. In New York State alone, there are currently around 57,000 jobs in the cybersecurity industry, with more than 23,000 unfilled positions. The Cybersecurity Program at City College is a collaboration between the Departments of Computer Science and Electrical Engineering. In Fall 2020, The City College of New York matriculated its first students in the Cybersecurity Master of Science program. The program was designed to fill gaps in the previous offerings and evolved out of an established partnership with Facebook on cybersecurity education. City College has designed a program where courses, curricula, syllabi, materials, labs, etc., are developed in cooperation and coordination with industry whenever possible, ensuring that students graduating from the program will have the necessary background to seamlessly segue into industry jobs. The Cybersecurity Program has created multiple pathways for prospective students to obtain the necessary prerequisites to apply, in order to build a more diverse student population. The program can also be pursued on a part-time basis, which makes it available to working professionals. Since City College's Cybersecurity M.S.
program was established to equip students with the advanced technical skills needed to thrive in a high-demand, rapidly evolving field, it incorporates a range of pedagogical formats. From its outset, the Cybersecurity Program has sought to provide both the theoretical foundations necessary for meaningful work in the field and labs and applied learning projects aligned with the skill sets required by industry. These efforts have involved collaboration with outside organizations and with visiting professors designing new courses on topics such as Adversarial AI, Data Privacy, Secure Cloud Computing, and Blockchain. Although the program was initially designed with a single asynchronous course in the curriculum, with the rest of the classes designed to be offered in person, the advent of the COVID-19 pandemic necessitated a move to fully online learning. The shift to online learning has provided lessons for future development by revealing some inherent advantages of the medium in addition to its drawbacks. This talk will address the structure of the newly implemented Cybersecurity Master's Program and discuss the innovations, challenges, and possible future directions.

Keywords: cybersecurity, New York, City College, graduate degree, Master of Science

Procedia PDF Downloads 148
619 Investigating the Nature of Transactions Behind Violations Along Bangalore’s Lakes

Authors: Sakshi Saxena

Abstract:

Bangalore is an IT-industry-based metropolitan city in the state of Karnataka in India. It has experienced tremendous urbanization at the expense of the environment. Several instances of disappearing lakes have raised questions about development over and near ecologically sensitive areas. Lakes in Bangalore can be considered commons on both a local and a regional scale, and these water bodies are becoming less interconnected because of encroachment in the catchment area. Other sociocultural and environmental risks that have led to social issues are now a source of concern. The lakes serve as an example of the transformation of commons, a dilemma that arises as land is transformed from rural to urban use, as well as of the complicated institutional issues associated with governance. According to some scholarly work and ecologists, a nexus of public and commercial institutions is primarily responsible for the depletion of water tanks and the inefficiency of the planning process. It is said that Bangalore's growth as an urban centre, together with the demands it created, particularly on land and water, resulted in the emergence of a middle and upper class that was demanding and self-assured. This report seeks to understand the issues and problems which led to these encroachments and to capture violations, if any, around these lakes and tanks which arose during these decades. In claiming watersheds and lake edges as properties, institutional arrangements (organizations, laws, and policies) intersect with planning authorities. Because of unregulated or indiscriminate forms of urbanization, it is claimed that the engagement of actors and the negotiations of the process, including government ignorance, are allowing this problem to flourish. In general, the governance of natural resources in India is largely state-based.
This is due to the constitutional scheme, which since the Government of India Act, 1935 has in principle given the states the power to legislate in this area. Thus, states have the exclusive power to regulate water supplies, irrigation and canals, drainage and embankments, water storage, hydropower, and fisheries. The main aim, therefore, is to understand institutional arrangements and the master planning processes behind these arrangements. To illustrate the ambiguity with an example: custodianship alone is a role divided between two state-level and two city-level bodies. This creates regulatory ambiguity, and the environmental effects include changes in city temperature, urban flooding, etc. As established, the main kinds of issues around lakes/tanks in Bangalore are encroachment and depletion. This study will further be enhanced by a physical survey of three of these lakes, focusing on the Bellandur site and the stakeholders involved. According to the study's findings thus far, corrupt politicians and dubious land-transaction tools are involved in the real estate industry. It appears that some destruction could have been stopped, or at least mitigated, if there had been a robust system of urban planning processes in place along with strong institutional arrangements to protect the lakes.

Keywords: wetlands, lakes, urbanization, Bangalore, politics, reservoirs, municipal jurisdiction, lake connections, institutions

Procedia PDF Downloads 79
618 Voices of the Students From a Fully Inclusive Classroom

Authors: Ashwini Tiwari

Abstract:

Introduction: Inclusive education for all is a multifaceted approach that requires systems thinking and the promotion of a "Culture of Inclusion." This can only be achieved through the collaboration of multiple stakeholders at the community, regional, state, national, and international levels. Researchers have found that effective practices used in inclusive general classrooms are beneficial to all students: students with disabilities, those who experience challenges academically and socially, and students without disabilities. Moreover, to date, no statistically significant adverse effects on the academic performance of students without disabilities in the presence of students with disabilities have been revealed. Therefore, opponents of inclusive education practices, whose stance is based solely on beliefs regarding detrimental effects on students without disabilities, appear to hold unfounded perceptions. This qualitative case study examines students' perspectives and beliefs about inclusive education in a middle school in South Texas. More specifically, this study examined students' understanding of how inclusive education practices intersect with the classroom community. The data were collected from students attending fully inclusive classrooms through interviews and focus groups. The findings suggest that peer integration and friendships built during classes are an essential part of schooling for both disabled and non-disabled students. Research Methodology: This qualitative case study used observations and focus group interviews with 12 middle school students attending an inclusive classroom at a public school located in South Texas. The participants of this study include eight females and five males. All the study participants attend a fully inclusive middle school with special-needs peers. Five of the students had disabilities.
The focus groups and interviews were conducted throughout the entire academic year, with an average of one focus group and one observation each month. The data were analyzed using the constant comparative method: the data from the focus groups and observations were continuously compared for emerging codes during the data collection process, codes were further refined and merged, and themes emerged through interpretation at the end of the data analysis process. Findings and discussion: This study was conducted to examine disabled and non-disabled students' perspectives on the inclusion of disabled students. The study revealed that non-disabled students generally have positive attitudes toward their disabled peers. The students in the study did not perceive inclusion as a special provision; rather, they perceived it as a way of instructional practice. Most of the participants spoke about the multiple benefits of inclusion, emphasizing that peer integration and the friendships built during classes are an essential part of their schooling. Students believed that it was part of their responsibility to assist their peers in whatever ways possible. This finding is in line with the literature suggesting that the personality of children with disabilities is determined not by their disability but rather by their social environment and its interaction with the child. Interactions with peers are one of the most important socio-cultural conditions for the development of children with disabilities.

Keywords: inclusion, special education, k-12 education, student voices

Procedia PDF Downloads 81
617 Risk Factors Associated with Increased Emergency Department Visits and Hospital Admissions Among Child and Adolescent Patients

Authors: Lalanthica Yogendran, Manassa Hany, Saira Pasha, Benjamin Chaucer, Simarpreet Kaur, Christopher Janusz

Abstract:

Children and adolescent patients visit the psychiatric emergency department (ED) for multiple reasons. Visiting the psychiatric ED can itself be a traumatic experience that affects an adolescent's mental well-being, regardless of a history of mental illness. Despite this, limited research exists in this domain. Prospective studies have correlated adverse psychosocial determinants among adolescents with risk factors for poor well-being and unfavorable behavioral outcomes. Studies have also shown that physiological stress contributes to the development of health problems and an increase in substance abuse in adolescents. This study aimed to retrospectively determine which psychosocial factors are associated with an increase in psychiatric ED visits. 600 charts of patients who had a psychiatric ED visit and inpatient admission from January 2014 through December 2014 were reviewed. Sociodemographics, diagnoses, ED visits, and inpatient admissions were collected. Descriptive statistics, chi-square tests, and independent t-test analyses were utilized to examine differences in the sample and determine which factors affected ED visits and admissions. The sample was 50% female and 35.2% self-identified black, with a mean age of 13 years. The majority, 85%, went to public school, and 17% were in special education. Attention deficit hyperactivity disorder was the most common admitting diagnosis, found in 132 (23%) responders. Most patients came from a single-parent household (305, 53%). The mean ages of patients who were sexually active, had legal issues, or reported marijuana abuse were 15, 14.35, and 15 years, respectively. Patients from two-biological-parent households had significantly fewer ED visits (1.2 vs. 1.7, p < 0.01) and admissions (0.09 vs. 0.26, p < 0.01). Among social factors, those who reported sexual, physical, or emotional abuse had a significantly greater number of ED visits (2.1 vs. 1.5, p < 0.01) and admissions (0.61 vs.
0.14, p < 0.01) than those who did not. Patients who were sexually active, had legal issues, or abused marijuana had a significantly greater number of admissions (0.43 vs. 0.17, p < 0.01), (0.54 vs. 0.18, p < 0.01), and (0.46 vs. 0.18, p < 0.01), respectively. These data support the theory of the stability of a two-parent home. Dual parenting plays a role in creating a safe space where a child can develop; this is shown by the corresponding decreases in psychiatric ED visits and admissions, and may highlight the psychologically protective role of a two-parent household. Abuse can exacerbate existing psychiatric illness or initiate the onset of new disease. Substance abuse and legal issues result in early induction into the criminal system; the results show that this is associated with an increase in the frequency of visits and the severity of symptoms. Only marijuana, and not other illicit substances, correlated with a higher incidence of psychiatric ED visits. This may speak to the psychotropic nature of tetrahydrocannabinols and their role in mental illness. This study demonstrates the array of psychosocial factors that lead to increased ED visits and admissions in children and adolescents.
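The group comparisons above (e.g., mean ED visits of 1.2 vs. 1.7) are independent two-sample tests; a Welch t statistic, which does not assume equal group variances, can be computed directly from the two samples. A small illustration in Python using invented visit counts, not the study's data:

```python
import math
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t statistic and approximate degrees of freedom
    (Welch-Satterthwaite) for two independent samples."""
    m1, m2 = mean(x), mean(y)
    v1, v2 = variance(x), variance(y)   # sample variances (n-1 denominator)
    n1, n2 = len(x), len(y)
    se2 = v1 / n1 + v2 / n2             # squared standard error of the difference
    t = (m1 - m2) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical ED-visit counts: abuse reported vs. not reported
t, df = welch_t([2, 3, 2, 1, 3], [1, 1, 2, 1, 1])
print(f"t = {t:.2f}, df = {df:.1f}")
```

The p-value then follows from the t-distribution with `df` degrees of freedom (e.g., via `scipy.stats.t.sf`); only the statistic itself is computed here to keep the sketch dependency-free.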

Keywords: adolescent, child psychiatry, emergency department, substance abuse

Procedia PDF Downloads 336
616 A Sociological Study of the Potential Role of Retired Soldiers in the Post War Development and Reconstruction in Sri Lanka

Authors: Amunupura Kiriwandeiye Gedara, Asintha Saminda Gnanaratne

Abstract:

The security forces can be described as a workforce that goes beyond the role of ensuring national security and contributes to the development process of the country. Soldiers follow combat training courses during their tenure and are equipped with a variety of vocational training courses to satisfy the needs of the army, to give them vocational capabilities for achieving the development and reconstruction goals of the country, and for the betterment of society in the event of emergencies. But with retirement, their relationship with the military is severed, and they become responsible for the future of their lives. The main purpose of this study was to examine how such professional capabilities can contribute to the development of the country, the current socio-economic status of retired soldiers, and the current application of the vocational training skills they mastered in the army to developing and rebuilding the country in an effective manner. After analyzing the available research literature related to this field, a conceptual framework was developed, and, following a qualitative research methodology, data obtained from case studies and interviews were analyzed using thematic analysis. Factors influencing early retirement include a lack of understanding of benefits, delays in promotions, not being properly evaluated for work, hasty marriage decisions, and not having enough time to spend on family and household chores. Most of the soldiers are not aware of the various programs and benefits available to retirees. They do not have a satisfactory attitude towards the retirement guidance they receive from the army at the time of retirement. Also, due to a lack of understanding of how to use their vocational capabilities to successfully pursue retirement life, the majority are employed in temporary jobs, while some are successful in post-retirement life due to their successful use of the training received.
Some live on pensions without engaging in any income-generating activities, and those who retire after 12 years of service face severe economic hardship, as they do not receive pensions. Although they have received training in various fields, they do not use it for their benefit due to a lack of proper guidance. Although the government implements programs, retirees are not clearly aware of them. Barriers to the utilization of training include the absence of a system to identify the professional skills of retired soldiers, limited interest in civil society affairs, limited exploration of opportunities in the civil and private sectors, and the politicization of services. If given the opportunity, retired soldiers would be able to contribute to the development and reconstruction process. The findings of the study further show that this has many social, economic, political, and psychological benefits, not only for individuals but also for the country. Entrepreneurship training for all retired soldiers, improving officers' understanding, streamlining existing mechanisms, creating new mechanisms, setting up a separate unit for retirees and adapting them to civil society, private and non-governmental contributions, and training courses can be identified as potential means to improve the current situation.

Keywords: development, reconstruction, retired soldiers, vocational capabilities

Procedia PDF Downloads 135
615 Development of a Risk Governance Index and Examination of Its Determinants: An Empirical Study in Indian Context

Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav

Abstract:

Risk management has been gaining extensive focus from international organizations like the Committee of Sponsoring Organizations and the Financial Stability Board, and the foundation of an effective and efficient risk management system lies in a strong risk governance structure. In view of this, an attempt (perhaps a first of its kind) has been made to develop a risk governance index that could be used as a proxy for the quality of risk governance structures. The index (a normative framework) is based on eleven variables, namely, size of board, board diversity in terms of gender, proportion of executive directors, executive/non-executive status of chairperson, proportion of independent directors, CEO duality, chief risk officer (CRO), risk management committee, mandatory committees, voluntary committees, and existence/non-existence of a whistle blower policy. These variables are scored on a scale of 1 to 5, except for status of chairperson and CEO duality, which are scored on a dichotomous scale (3 or 5). Where there is a legal/statutory requirement in respect of one of these variables and the firm does not comply with it, a score of 1 is assigned. Although, for the larger part of the study period, there was no legal requirement concerning the CRO, the risk management committee, or the whistle blower policy, a score of 1 is still assigned in the event of their non-existence: recognizing the importance of these variables to risk governance structures, and the study's focus on risk governance, their absence has been equated with non-compliance with a legal/statutory requirement. On this basis, the minimum possible score is 15 and the maximum is 55. In addition, an attempt has been made to explore the determinants of this index. For this purpose, the sample consists of the 429 non-financial companies that constitute the S&P CNX 500 index.
The study covers a 10-year period from April 1, 2005 to March 31, 2015. Given the panel nature of the data, the Hausman test was applied, and it suggested that fixed effects regression would be appropriate. The results indicate that the age and size of firms have a significant positive impact on their risk governance structures. Further, the post-recession period (2009-2015) witnessed significant improvement in the quality of governance structures. In contrast, profitability (positive relationship), leverage (negative relationship), and growth (negative relationship) do not have a significant impact on the quality of risk governance structures. The value of rho indicates that about 77.74% of the variation in risk governance structures is due to firm-specific factors. Given that each firm is unique in its risk exposure, risk culture, risk appetite, and risk tolerance levels, it appears reasonable to assume that the specific conditions and circumstances a company faces could be the biggest determinants of its risk governance structure. Given the recommendations put forth in the paper (particularly for regulators and companies), the study is expected to be of immense utility in an important yet neglected aspect of risk management.
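The scoring scheme described in the abstract can be sketched as a minimal computation. The variable names below are illustrative placeholders, not the study's own coding; the scales follow the abstract: nine variables scored 1-5, two dichotomous variables scored 3 or 5, giving a minimum of 9×1 + 2×3 = 15 and a maximum of 11×5 = 55.

```python
# Sketch of the risk governance index described in the abstract.
# Variable names are illustrative; the scoring scales follow the text.

FIVE_POINT = [
    "board_size", "gender_diversity", "executive_directors",
    "independent_directors", "cro", "risk_mgmt_committee",
    "mandatory_committees", "voluntary_committees", "whistle_blower_policy",
]
DICHOTOMOUS = ["chairperson_status", "ceo_duality"]  # scored 3 or 5

def risk_governance_index(scores: dict) -> int:
    """Sum the component scores after validating each scale."""
    total = 0
    for var in FIVE_POINT:
        s = scores[var]
        assert 1 <= s <= 5, f"{var} must be scored 1-5"
        total += s
    for var in DICHOTOMOUS:
        s = scores[var]
        assert s in (3, 5), f"{var} is dichotomous (3 or 5)"
        total += s
    return total

worst = {v: 1 for v in FIVE_POINT} | {v: 3 for v in DICHOTOMOUS}
best = {v: 5 for v in FIVE_POINT} | {v: 5 for v in DICHOTOMOUS}
print(risk_governance_index(worst), risk_governance_index(best))  # 15 55
```

The bounds reproduce the 15-55 range stated in the abstract.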

Keywords: corporate governance, ERM, risk governance, risk management

Procedia PDF Downloads 254
614 Characterization of Carbazole-Based Host Material for Highly Efficient Thermally Activated Delayed Fluorescence Emitter

Authors: Malek Mahmoudi, Jonas Keruckas, Dmytro Volyniuk, Jurate Simokaitiene, Juozas V. Grazulevicius

Abstract:

Host materials have emerged as one of the most appealing means of harvesting triplet states in organic materials for application in organic light-emitting diodes (OLEDs). The present investigation demonstrates a host-guest system for thermally activated delayed fluorescence OLEDs with a 20% guest concentration for efficient energy transfer. In this work, 3,3'-bis[9-(4-fluorophenyl)carbazole] (bFPC) has been used as the host, which induces balanced charge carrier transport for high-efficiency OLEDs. To provide a complete characterization of the synthesized compound, its photophysical, photoelectrical, charge-transporting, and electrochemical properties were examined. Excited-state lifetimes and singlet-triplet energy gaps were measured to characterize the photophysical properties, while thermogravimetric analysis and differential scanning calorimetry were performed to probe the thermal properties of the compound. The electrochemical properties were investigated by the cyclic voltammetry (CV) method, and an ionization potential (IPCV) of 5.68 eV was observed. The UV-Vis absorption and photoluminescence spectra of a solution of the compound in toluene (10⁻⁵ M) showed maxima at 302 and 405 nm, respectively. Photoelectron emission spectrometry was used to characterize the charge-injection properties of the compound in the solid state; the ionization potential of the material was found to be 5.78 eV. Time-of-flight measurements were used to test the charge-transporting properties, and the hole mobility estimated with this technique in a vacuum-deposited layer reached 4×10⁻⁴ cm² V⁻¹ s⁻¹. Given its high charge mobility, the compound was tested as a host in an organic light-emitting diode.
The device was fabricated by successive deposition onto a pre-cleaned indium tin oxide (ITO) coated glass substrate under a vacuum of 10⁻⁶ Torr. It consisted of an ITO anode; hole injection and transporting layers (MoO₃, NPB); an emitting layer with bFPC as the host and 4CzIPN (2,4,5,6-tetra(9-carbazolyl)isophthalonitrile), a new highly efficient green thermally activated delayed fluorescence (TADF) material, as the emitter; an electron transporting layer (TPBi); and a lithium fluoride layer topped with an aluminum layer as the cathode. The device exhibited maximum current and power efficiencies of 33.9 cd/A and 23.5 lm/W, respectively, and its electroluminescence spectrum showed a single peak at 512 nm. Tested as a host in thermally activated delayed fluorescence organic light-emitting diodes, the new bicarbazole-based compound reached a luminance of 25300 cd m⁻² and an external quantum efficiency of 10.1%. Interestingly, the turn-on voltage was low (3.8 V), so such a device can be used for highly efficient light sources.

Keywords: thermally-activated delayed fluorescence, host material, ionization energy, charge mobility, electroluminescence

Procedia PDF Downloads 143
613 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications

Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky

Abstract:

InAs Quantum Dots (QDs) in a GaAs matrix are a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators such as LYSO or BaF₂. The high refractive index of the GaAs matrix (n=3.4) ensures that light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, to separate the scintillator film and transfer it to a low-index substrate for waveguiding measurements. One consideration when using a low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, the luminescence properties of very thick (4-20 micron) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, temperature-dependent photoluminescence (PL) (77-450 K) was measured and fitted using a kinetic model. The PL intensity degrades by only 40% at RT, with an activation energy for electron escape from QDs to the barrier of ~60 meV.
Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened with a FWHM of 28 meV (33 nm) and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized at about 3 cm⁻¹ after the light had traveled over 1 mm through the WG. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small area of integrated PD, the collected charge averaged 8.4×10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. These data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
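The ~7% collection efficiency quoted above can be checked from the other figures in the abstract, under the simplifying assumption of one photodiode electron per detected photon:

```python
# Back-of-the-envelope check of the ~7% collection efficiency quoted above,
# assuming one collected photodiode electron per detected photon.
light_yield = 240_000          # photons/MeV (QD scintillator, from the abstract)
alpha_energy = 5.5             # MeV, alpha particle energy
collected_electrons = 8.4e4    # average collected charge, from the abstract

emitted_photons = light_yield * alpha_energy       # 1.32e6 photons
efficiency = collected_electrons / emitted_photons
print(f"{efficiency:.1%}")     # ~6.4%, consistent with "about 7%"
```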

Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor

Procedia PDF Downloads 257
612 A 500 MWₑ Coal-Fired Power Plant Operated under Partial Oxy-Combustion: Methodology and Economic Evaluation

Authors: Fernando Vega, Esmeralda Portillo, Sara Camino, Benito Navarrete, Elena Montavez

Abstract:

The European Union aims to strongly reduce CO₂ emissions from the energy and industrial sectors by 2030. The energy sector contributes more than two-thirds of the CO₂ emission share derived from anthropogenic activities. Although efforts in the energy production sector are mainly focused on the use of renewables, carbon capture and storage (CCS) remains a frontline option for reducing CO₂ emissions from industrial processes, particularly from fossil-fuel power plants and cement production. Among the most feasible and near-to-market CCS technologies, namely post-combustion and oxy-combustion capture, partial oxy-combustion is a novel concept that can potentially reduce the overall energy requirements of the CO₂ capture process. This technology consists of using a higher oxygen content in the oxidizer, which should increase the CO₂ concentration of the flue gas once the fuel is burnt. The CO₂ is then separated from the flue gas downstream by means of a conventional CO₂ chemical absorption process. Producing a more CO₂-concentrated flue gas should enhance CO₂ absorption into the solvent, leading to further improvements in CO₂ separation performance in terms of solvent flow-rate, equipment size, and the energy penalty related to solvent regeneration. This work evaluates a portfolio of CCS technologies applied to fossil-fuel power plants. For this purpose, an economic evaluation methodology was developed in detail to determine the main economic parameters of CO₂ emission removal, such as the levelized cost of electricity (LCOE) and the CO₂ captured and avoided costs. ASPEN Plus™ software was used to simulate the main units of the power plant and solve the energy and mass balances. Capital and investment costs were determined from the purchased cost of equipment, together with engineering costs and project and process contingencies. The annual capital cost and operating and maintenance costs were then obtained.
A complete energy balance was performed to determine the net power produced in each case. The baseline case is a supercritical 500 MWe coal-fired power plant burning anthracite, without any CO₂ capture system. Four cases were proposed: conventional post-combustion capture, oxy-combustion, and partial oxy-combustion using two levels of oxygen-enriched air (40% v/v and 75% v/v). CO₂ chemical absorption with monoethanolamine (MEA) was used as the CO₂ separation process, whereas the O₂ requirement was met by a conventional air separation unit (ASU) based on Linde's cryogenic process. Results showed a 15% reduction in the total investment cost of the CO₂ separation process when partial oxy-combustion was used. Oxygen-enriched air production also nearly halved the investment costs required for the ASU in comparison with the oxy-combustion case. Partial oxy-combustion has a significant impact on the performance of both the CO₂ separation and O₂ production technologies, and it can lead to further energy reductions as new developments in both CO₂ and O₂ separation processes emerge.
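The economic parameters named in the abstract follow standard CCS costing definitions; a minimal sketch of those two metrics is below. All input numbers are illustrative placeholders, not the study's results, and the simple annualized-cost form of LCOE is an assumption.

```python
# Sketch of the standard CCS cost metrics used in such evaluations.
# All numeric inputs are illustrative placeholders, not study results.

def lcoe(annual_capital_cost, annual_om_cost, annual_fuel_cost, net_mwh):
    """Levelized cost of electricity, $/MWh (simple annualized form)."""
    return (annual_capital_cost + annual_om_cost + annual_fuel_cost) / net_mwh

def co2_avoided_cost(lcoe_capture, lcoe_ref, emissions_capture, emissions_ref):
    """Cost of CO2 avoided, $/t: extra generation cost per unit of
    emissions reduction. Emissions are specific emissions in t CO2/MWh."""
    return (lcoe_capture - lcoe_ref) / (emissions_ref - emissions_capture)

lcoe_ref = lcoe(1.2e8, 4.0e7, 9.0e7, 3.5e6)   # baseline plant, no capture
lcoe_cap = lcoe(1.9e8, 6.5e7, 1.1e8, 3.0e6)   # plant with CO2 capture
cost_avoided = co2_avoided_cost(lcoe_cap, lcoe_ref, 0.10, 0.85)
print(round(lcoe_ref, 1), round(lcoe_cap, 1), round(cost_avoided, 1))
```

The avoided-cost metric uses the reference plant's LCOE and emissions as the counterfactual, which is why it exceeds the cost per tonne captured whenever capture carries an energy penalty.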

Keywords: carbon capture, cost methodology, economic evaluation, partial oxy-combustion

Procedia PDF Downloads 149
611 Investigating the Influence of Solidification Rate on the Microstructural, Mechanical and Physical Properties of Directionally Solidified Al-Mg Based Multicomponent Eutectic Alloys Containing High Mg

Authors: Fatih Kılıç, Burak Birol, Necmettin Maraşlı

Abstract:

The directional solidification process is generally used for homogeneous compound production, single crystal growth, refining (zone refining), and similar processes. The two most important parameters that control eutectic structures, called the solidification parameters, are the temperature gradient and the growth rate. Solidification behavior and microstructure characteristics are an interesting topic due to their effects on the properties and performance of alloys of eutectic composition, and the solidification behavior of multicomponent and multiphase systems is an important factor in determining various properties of these materials. Research has mostly been conducted on the solidification of pure materials or alloys containing two phases; there are very few studies in the literature on the multiphase reactions and microstructure formation of multicomponent alloys during solidification. It is therefore important to study the microstructure formation and the thermodynamic, thermophysical, and microstructural properties of these alloys. The production process is difficult due to the easy oxidation of magnesium, and therefore there is no comprehensive study concerning alloys containing high Mg (> 30 wt.% Mg). With increasing Mg content in Al alloys, the specific weight decreases and the strength increases slightly, while ductility is lowered due to the formation of the β-Al₈Mg₅ phase. For this reason, the production, examination, and development of alloys containing high Mg will initiate the production of new advanced engineering materials. The original value of this research lies in obtaining high Mg containing (> 30% Mg) Al-based multicomponent alloys by melting under vacuum; controlled directional solidification at various growth rates and a constant temperature gradient; and establishing the relationship between solidification rate and microstructural, mechanical, electrical, and thermal properties.
Therefore, within the scope of this research, several ternary or quaternary Al alloy compositions containing > 30% Mg were determined, and it was planned to investigate the effects of the directional solidification rate on the mechanical, electrical, and thermal properties of these alloys. The influence of the growth rate on the microstructure parameters, microhardness, tensile strength, electrical conductivity, and thermal conductivity of directionally solidified high-Mg-containing Al-32.2Mg-0.37Si, Al-30Mg-12Zn, Al-32Mg-1.7Ni, Al-32.2Mg-0.37Fe, Al-32Mg-1.7Ni-0.4Si, and Al-33.3Mg-0.35Si-0.11Fe (wt.%) alloys will be investigated over a wide range of growth rates (50-2500 µm/s) at a fixed temperature gradient. The work is planned as: (a) directional solidification of the Al-Mg based Al-Mg-Si, Al-Mg-Zn, Al-Mg-Ni, Al-Mg-Fe, Al-Mg-Ni-Si, and Al-Mg-Si-Fe alloys over a wide range of growth rates (50-2500 µm/s) at a constant temperature gradient in a Bridgman-type solidification system; (b) analysis of the microstructure parameters of the directionally solidified alloys using optical light microscopy and Scanning Electron Microscopy (SEM); (c) measurement of the microhardness and tensile strength of the directionally solidified alloys; (d) measurement of electrical conductivity by the four-point probe technique at room temperature; and (e) measurement of thermal conductivity by the linear heat flow method at room temperature.
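Studies of this kind typically relate the eutectic spacing to the growth rate through the classical Jackson-Hunt form λ²V = k. The sketch below illustrates that relationship over the growth-rate range quoted in the abstract; the constant k is a hypothetical value, not a measurement from this study.

```python
# Illustration of the Jackson-Hunt relationship commonly used to relate
# eutectic spacing (lambda) to growth rate (V) in directional solidification:
# lambda^2 * V = k. The constant k below is hypothetical, not a measured value.

k = 1.0e-16                    # m^3/s, hypothetical alloy constant

def eutectic_spacing(v):
    """Spacing lambda (m) at growth rate v (m/s), from lambda^2 * v = k."""
    return (k / v) ** 0.5

# Growth-rate range from the abstract: 50-2500 um/s
for v_um_s in (50, 500, 2500):
    v = v_um_s * 1e-6                              # um/s -> m/s
    print(f"{v_um_s} um/s -> {eutectic_spacing(v) * 1e6:.2f} um")
```

Finer spacing at faster growth is the expected trend such a study would quantify.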

Keywords: directional solidification, electrical conductivity, high Mg containing multicomponent Al alloys, microhardness, microstructure, tensile strength, thermal conductivity

Procedia PDF Downloads 262
610 Information and Communication Technology (ICT) Education Improvement for Enhancing Learning Performance and Social Equality

Authors: Heichia Wang, Yalan Chao

Abstract:

Social inequality is a persistent problem, and one way to address it is through education. At present, educational resources are often less geographically accessible to vulnerable groups; communication equipment, however, is easier for them to obtain. Now that information and communication technology (ICT) has entered the field of education, we can benefit from the convenience ICT provides, and the mobility it brings makes learning independent of time and place. With mobile learning, teachers and students can start discussions in an online chat room without limitations of time or place. However, precisely because mobile learning is so convenient, people tend to discuss problems in short online texts that lack detailed information, in an environment not well suited to expressing ideas fully. The ICT education environment may therefore cause misunderstanding between teachers and students. To help teachers and students better understand each other's views, this study aims to analyze students' short texts and classify students into several types of learning problems, clarifying the views of both sides. In addition, this study attempts to extend the possibly incomplete short texts by using external resources prior to classification. In short, by applying short text classification, this study can point out each student's learning problems and inform the instructor where the main focus of the future course should be, thus improving the ICT education environment. To achieve these goals, this research uses a convolutional neural network (CNN) to analyze short discussion content between teachers and students in an ICT education environment and divides students into several main types of learning problem groups to facilitate answering student problems.
In addition, this study further clusters sub-categories of each major learning type to indicate specific problems for each student. Unlike most neural network approaches, this study attempts to extend short texts with external resources before classifying them, to improve classification performance. The empirical process begins by pre-processing the chat records between teachers and students together with the course materials. An action system is set up to compare the most similar parts of the teaching material with each student's chat history to improve classification performance. The short text classification function then uses the CNN to classify the enriched chat records into several major learning problems based on theory-driven labels. By applying these modules, this research hopes to clarify the main learning problems of students and show teachers what they should focus on in future teaching.
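The core CNN-over-short-texts idea described above, i.e., embed tokens, slide convolution filters over the sequence, max-pool over time, and score learning-problem classes, can be sketched with plain NumPy. All shapes, names, and the four-class setup are illustrative assumptions, not the paper's architecture, and the weights here are random rather than trained.

```python
import numpy as np

# Minimal sketch of a CNN short-text classifier: embedding lookup,
# 1D convolution over token windows, ReLU + max-over-time pooling,
# and a linear layer scoring hypothetical "learning problem" classes.
# All shapes/names are illustrative; weights are random, not trained.

rng = np.random.default_rng(0)
vocab_size, embed_dim, seq_len = 100, 16, 12
n_filters, window, n_classes = 8, 3, 4     # 4 hypothetical learning-problem types

embeddings = rng.normal(size=(vocab_size, embed_dim))
filters = rng.normal(size=(n_filters, window, embed_dim))
weights = rng.normal(size=(n_filters, n_classes))

def classify(token_ids):
    x = embeddings[token_ids]                          # (seq_len, embed_dim)
    # convolution: one activation per filter per window position
    conv = np.array([[np.sum(x[i:i + window] * f)
                      for i in range(seq_len - window + 1)]
                     for f in filters])                # (n_filters, positions)
    pooled = np.maximum(conv, 0).max(axis=1)           # ReLU + max-over-time
    logits = pooled @ weights                          # (n_classes,)
    return int(np.argmax(logits))

chat_message = rng.integers(0, vocab_size, size=seq_len)  # toy token ids
print("predicted learning-problem class:", classify(chat_message))
```

Max-over-time pooling is what makes the approach robust to where in the short text the informative phrase appears, which suits chat-style messages.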

Keywords: ICT education improvement, social equality, short text analysis, convolutional neural network

Procedia PDF Downloads 129
609 Intriguing Modulations in the Excited State Intramolecular Proton Transfer Process of Chrysazine Governed by Host-Guest Interactions with Macrocyclic Molecules

Authors: Poojan Gharat, Haridas Pal, Sharmistha Dutta Choudhury

Abstract:

Tuning the photophysical properties of guest dyes through host-guest interactions involving macrocyclic hosts has been an attractive research area for the past few decades, as the resulting changes can be directly implemented in chemical sensing, molecular recognition, fluorescence imaging, and dye laser applications. Excited state intramolecular proton transfer (ESIPT) is an intramolecular prototautomerization process displayed by some specific dyes, and it is quite amenable to tuning by the presence of different macrocyclic hosts. The present study explores the interesting effects of p-sulfonatocalix[n]arene (SCXn) and cyclodextrin (CD) hosts on the excited-state prototautomeric equilibrium of chrysazine (CZ), a model antitumour drug. CZ exists exclusively in its normal form (N) in the ground state. In the excited state, however, the excited N* form undergoes ESIPT along its pre-existing intramolecular hydrogen bonds, giving the excited-state prototautomer (T*). Accordingly, CZ shows a single absorption band due to the N form but two emission bands due to the N* and T* forms. Facile prototautomerization of CZ is considerably inhibited when the dye binds to SCXn hosts; in spite of its lower binding affinity, the inhibition is more profound with the SCX6 host than with the SCX4 host. For the CD-CZ system, the prototautomerization process is hindered in the presence of β-CD but remains unaffected in the presence of γ-CD. The reduction of the prototautomerization of CZ by the SCXn and β-CD hosts is unusual: since the T* form is less dipolar than N*, binding of CZ within the relatively hydrophobic host cavities should have enhanced the prototautomerization process. At the same time, considering the similar chemical nature of the two CD hosts, their effects on the prototautomerization of CZ would have been expected to be similar.
The atypical effects of the studied hosts on the prototautomerization of CZ are suggested to arise from partial inclusion or external binding of CZ with the hosts. As a result, there is a strong possibility of intermolecular H-bonding between the CZ dye and the functional groups present at the portals of the SCXn and β-CD hosts. Formation of these intermolecular H-bonds effectively weakens the pre-existing intramolecular H-bonding network within the CZ molecule, and this consequently reduces the prototautomerization of the dye. Our results suggest that, rather than the binding affinity between dye and host, it is the orientation of CZ in the SCXn-CZ complexes and the binding stoichiometry in the CD-CZ complexes that play the predominant role in influencing the prototautomeric equilibrium of CZ. For the SCXn-CZ complexes, the experimental findings are well supported by quantum chemical calculations. Similarly, for the CD-CZ systems, the binding stoichiometries obtained through geometry optimization of the complexes between CZ and the CD hosts correlate nicely with the experimental results: geometry optimization reveals β-CD-CZ complexes with 1:1 stoichiometry and γ-CD-CZ complexes with 1:1, 1:2, and 2:2 stoichiometries, in good accordance with the observed effects of the β-CD and γ-CD hosts on the ESIPT process of the CZ dye.

Keywords: intermolecular proton transfer, macrocyclic hosts, quantum chemical studies, photophysical studies

Procedia PDF Downloads 123
608 China Pakistan Economic Corridor: An Unfolding Fiasco in World Economy

Authors: Debarpita Pande

Abstract:

On 22nd May 2013, Chinese Premier Li Keqiang, on his visit to Pakistan, tabled a proposal for connecting Kashgar in China's Xinjiang Uygur Autonomous Region with the south-western Pakistani seaport of Gwadar via the China Pakistan Economic Corridor (hereinafter referred to as CPEC). The project, part of the initiative popularly termed 'One Belt One Road', will encompass a connectivity component including a 3000-kilometre road, railways, and an oil pipeline from Kashgar to Gwadar port, along with an international airport and a deep sea port. Superficially, this may look like a 'game changer' for Pakistan and other countries of South Asia, but this article, using the doctrinal method of research, will unearth some serious flaws in it, which may change the entire economic system of this region, heavily affecting the socio-economic conditions of South Asia, further complicating the geopolitical situation of the region, and disturbing world economic stability. The paper opens with a logical analysis of the socio-economic issues arising out of this project, with an emphasis on its impact on the Pakistani and Indian economies due to Chinese dominance, serious tension in international relations, security issues, the arms race, and political and provincial concerns. The research paper further studies the impact of the huge burden of loans extended by China towards this project: Pakistan already suffers from persistent debts in the face of declining foreign currency reserves, and its sovereignty will also be at stake as the entire economy of the country is held hostage by China. The author compares this situation with the fallout from projects in Sri Lanka, Tajikistan, and several countries of Africa, all of which are now facing huge debt risks brought by Chinese investments.
The entire economic balance will be muddled by the increase in Pakistan's demand for raw materials, resulting in the import of the same from China, which will lead to exorbitant price hikes and limited availability. CPEC will also create Chinese dominance over the international movement of goods between the Atlantic and Pacific oceans, jeopardising the entire economic balance of South Asia along with Middle Eastern trade hubs such as Dubai. Moreover, the paper analyses the impact of CPEC in the context of international unrest and the arms race between Pakistan and India, as well as between India and China, due to border disputes and Chinese surveillance. The paper also examines the global change in the economic dynamics of international trade that CPEC will create in light of the U.S.-China relationship. The article thus reflects on the grave consequences of CPEC for the international economy, security, and bilateral relations, which surpass its positive impacts. The author lastly suggests more transparency and proper diplomatic planning in the execution of this mega project, which could otherwise become a cause of economic complexity in international trade in the near future.

Keywords: China, CPEC, international trade, Pakistan

Procedia PDF Downloads 175
607 Structural and Microstructural Analysis of White Etching Layer Formation by Electrical Arcing Induced on the Surface of Rail Track

Authors: Ali Ahmed Ali Al-Juboori, H. Zhu, D. Wexler, H. Li, C. Lu, J. McLeod, S. Pannila, J. Barnes

Abstract:

A number of studies have focused on the formation mechanics of the white etching layer (WEL) and its origin in railway operation. To date, the following hypotheses have been considered regarding the precise mechanics of WEL formation: (i) WELs are the result of a thermal process caused by wheel slip; (ii) WELs are mechanically induced by severe plastic deformation; (iii) WELs are caused by a combined thermo-mechanical process. The mechanisms discussed above lead to the occurrence of white etching layers in the area of wheel-rail contact. This is because the contact patch, the active point of the wheel on the rail, is exposed to the highest shear stresses, which result in localised severe plastic deformation, and to the highest rate of heating, caused by wheel slip during excessive traction or braking effort. However, if WELs are found away from the running band area, this would suggest another cause of WEL formation. In railway systems, particularly electrified railways, the arcing phenomenon has been occurring more often and regularly on the rails. In an electrified railway, the current is delivered to the train traction motor via contact wires and then returned to the station via the contact between the wheel and the rail. If the contact between the wheel and the rail is temporarily lost, due to dynamic vibration, entrapped dirt or water, lubricant effects, or oxidation, a high current can jump through the gap and result in arcing. Other sources of arcing include the wheel passing over an insulated joint and lightning striking a train during bad weather. During arcing, extensive heat is generated and spread over a large area of the top surface of the rail. Thus, arcing is considered another heat source in the rail head (rather than wheel slip) that results in microstructural changes and white etching layer formation. A head hardened (HH) rail steel, cut from a curved rail track, was used for the investigation.
Samples were sectioned from a depth of 10 mm below the rail surface, where the material is considered to be still within the hardened layer but away from any microstructural changes in the top surface layer caused by train passage. These samples were subjected to electrical discharges using a Gas Tungsten Arc Welding (GTAW) machine. The arc current was controlled and moved along the sample surface in the direction of travel. Five different conditions were applied to the surfaces of the samples. Samples containing pre-existing WELs, taken from an ex-service rail surface, were also included in this study for comparison. Both the simulated and ex-service WELs were characterised by advanced methods including SEM, TEM, TKD, EDS, and XRD. Samples for TEM and TKD were prepared by Focused Ion Beam (FIB) milling. The results showed that the WELs simulated by electrical arcing and the ex-service WELs have similar microstructures. A brown etching layer was found alongside the WELs, likely induced by a concurrent tempering process. This study provides a clear understanding of a new formation mechanism of WELs, which contributes to track maintenance procedures.

Keywords: white etching layer, arcing, brown etching layer, material characterisation

Procedia PDF Downloads 123
606 Blending Synchronous with Asynchronous Learning Tools: Students’ Experiences and Preferences for Online Learning Environment in a Resource-Constrained Higher Education Situations in Uganda

Authors: Stephen Kyakulumbye, Vivian Kobusingye

Abstract:

World over, COVID-19 has had adverse effects on all sectors, with particularly debilitating effects on the education sector. After reactive lockdowns, education institutions that could continue teaching and learning had to do so at a distance, mediated by digital technological tools. In Uganda, the Ministry of Education accordingly issued emergent COVID-19 Online Distance E-learning (ODeL) guidelines. Despite such guidelines, academic institutions in Uganda and similar resource-constrained developing contexts were caught off-guard and ill-prepared to move from face-to-face learning to the online distance learning mode. Most academic institutions that migrated spontaneously did so with no deliberate tools, systems, strategies, or software to support active, meaningful, and engaging learning for students; in practice, most shifted to Zoom and WhatsApp and conducted online teaching in real time rather than blending synchronous and asynchronous tools. This paper presents students' experiences of blending synchronous and asynchronous content-creation and learning tools within a technologically resource-constrained environment in the challenging Ugandan context. These conceptual case-based findings, drawing on experience from Uganda Christian University (UCU), point to the design of learning activities with two characteristics: the enhancement of synchronous learning technologies with asynchronous ones, which mitigates the challenge of system breakdown and shifts passive learning to active learning, and the enhancement of the types of presence (social, cognitive, and facilitatory).
The paper, both empirical and experiential in nature, draws on the online experiences of third-year Bachelor of Business Administration students lectured using asynchronous text, audio, and video created with OBS Studio (Open Broadcaster Software) and compressed with HandBrake, both open-source tools, to mitigate disk space and bandwidth usage challenges. The synchronous online engagements with students blended Zoom and BigBlueButton, ensuring that students had an alternative in case one platform failed under excessive real-time traffic. Generally, students report that, compared to their previous face-to-face lectures, the pre-recorded lectures delivered via YouTube gave them the opportunity to reflect on content in a self-paced manner, which in turn enabled them to engage actively during the live Zoom and/or BigBlueButton real-time discussions and presentations. The major recommendation is that lecturers and teachers in resource-constrained environments with limited digital resources, such as internet access and digital devices, should harness this approach to give students access to learning content in a self-paced manner, thereby enabling active learning through reflection and higher-order thinking.

Keywords: synchronous learning, asynchronous learning, active learning, reflective learning, resource-constrained environment

Procedia PDF Downloads 140
605 A Study of the Carbon Footprint from a Liquid Silicone Rubber Compounding Facility in Malaysia

Authors: Q. R. Cheah, Y. F. Tan

Abstract:

In modern times, the push for a low carbon footprint entails achieving carbon neutrality as a goal for future generations. One possible step towards carbon footprint reduction is the use of more durable materials with longer lifespans, for example, silicone data cables, which show at least double the lifespan of similar plastic products. Through their greater durability and longer lifespans, silicone data cables can reduce the amount of waste produced compared to plastics. Furthermore, silicone products do not produce the micro-contamination that is harmful to the ocean. Every year the electronics industry produces an estimated 5 billion data cables, comprising USB Type-C and Lightning data cables for tablets and mobile phone devices. Material usage for the outer jacketing is 6 to 12 grams per meter. Tests show that the product lifespan of a silicone data cable can be double that of a plastic one due to greater durability. This can save at least 40,000 tonnes of material a year on the outer jacketing of data cables alone. The facility in this study specialises in the compounding of liquid silicone rubber (LSR) material for the extrusion process used in jacketing silicone data cables. This study analyses the carbon emissions from the facility, which is presently capable of producing more than 1,000 tonnes of LSR annually. The study uses guidelines from the World Business Council for Sustainable Development (WBCSD) and the World Resources Institute (WRI) to define the boundaries of its scope. The scopes of emissions are defined as: (1) emissions from operations owned or controlled by the reporting company; (2) emissions from the generation of purchased or acquired energy such as electricity, steam, heating, or cooling consumed by the reporting company; and (3) all other indirect emissions occurring in the value chain of the reporting company, including both upstream and downstream emissions.
As the study is limited to the compounding facility, the system boundary definition according to the GHG Protocol is cradle-to-gate rather than cradle-to-grave. Malaysia’s present electricity generation mix was also used, in which natural gas and coal constitute the bulk of emissions. Calculations show that the LSR produced for the silicone data cable with high fire-retardant capability has scope 1 emissions of 0.82 kg CO2/kg, scope 2 emissions of 0.87 kg CO2/kg, and scope 3 emissions of 2.76 kg CO2/kg, giving a total product carbon footprint of 4.45 kg CO2/kg. This total product carbon footprint (cradle-to-gate) is comparable to the industry average and to plastic materials per tonne of material. Although the per-tonne emission is comparable to plastic material, the greater durability and longer lifespan allow a significantly reduced use of LSR material. Suggestions to reduce the calculated product carbon footprint across the scopes of emissions include: (1) incorporating the recycling of factory silicone waste into operations; (2) using green renewable energy for external electricity sources; and (3) sourcing eco-friendly raw materials with low GHG emissions.
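The cradle-to-gate total is a straight sum of the three scopes; a minimal Python sketch (emission factors copied from the abstract, function names illustrative and not from the study) makes the arithmetic explicit:

```python
# Hedged sketch: reproducing the abstract's cradle-to-gate footprint arithmetic.
# Per-scope emission factors (kg CO2 per kg of LSR) are taken from the text.

SCOPE_EMISSIONS = {  # kg CO2 emitted per kg of LSR produced
    "scope1_direct_operations": 0.82,
    "scope2_purchased_energy": 0.87,
    "scope3_value_chain": 2.76,
}

def total_product_footprint(scopes: dict) -> float:
    """Cradle-to-gate footprint is the simple sum of the three scopes."""
    return round(sum(scopes.values()), 2)

def annual_facility_emissions(footprint_kg_per_kg: float, tonnes_per_year: float) -> float:
    """Facility-level emissions in tonnes CO2/year (1 tonne of LSR = 1,000 kg)."""
    return footprint_kg_per_kg * tonnes_per_year

footprint = total_product_footprint(SCOPE_EMISSIONS)  # 4.45 kg CO2/kg
yearly = annual_facility_emissions(footprint, 1000)   # 4,450 t CO2/yr at the stated capacity
```

At the facility's stated capacity of over 1,000 tonnes of LSR per year, this implies at least roughly 4,450 tonnes of CO2 annually before any of the three suggested reductions.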

Keywords: carbon footprint, liquid silicone rubber, silicone data cable, Malaysia facility

Procedia PDF Downloads 98
604 Learning the Most Common Causes of Major Industrial Accidents and Applying Best Practices to Prevent Such Accidents

Authors: Rajender Dahiya

Abstract:

Investigation outcomes of major process incidents have been consistent for decades and confirm that the causes and consequences are often identical. The debate remains: why do we continue to experience similar process incidents despite the enormous development of new tools, technologies, industry standards, codes, regulations, and learning processes? The objective of this paper is to investigate the most common causes of major industrial incidents and to reveal industry challenges and best practices to prevent such incidents. The author, in his current role, performs audits and inspections of a variety of high-hazard industries in North America, including petroleum refineries, chemicals, petrochemicals, and manufacturing. In this paper, he shares real-life scenarios, examples, and case studies from high-hazard operating facilities, including key challenges and best practices. One case study provides a clear understanding of the importance of near-miss incident investigation; the incident was a safe-operating-limit excursion. The case describes the deficiencies in management programs, the competency of employees, and the culture of the corporation, spanning hazard identification and risk assessment, maintaining the integrity of safety-critical equipment, operating discipline, learning from process safety near misses, process safety competency, process safety culture, audits, and performance measurement. Failure to identify the hazards and manage the risks of highly hazardous materials and processes is one of the primary root causes of an incident, and failure to learn from past incidents is the leading cause of their recurrence. Several investigations of major incidents discovered that each showed several warning signs before occurring and, most importantly, that all were preventable. The author will discuss why preventable incidents were not prevented and review the common causes of learning failures from past major incidents.
The leading causes of past incidents can be summarized as follows. First, management failure to identify the hazard and/or mitigate the risk of hazardous processes or materials; this process starts early in the project stage and continues throughout the life cycle of the facility. For example, a poorly performed hazard study such as a HAZID, PHA, or LOPA is one of the leading causes of failure. Second, even when that step is performed correctly, management failure to maintain the integrity of safety-critical systems and equipment; in most incidents, the mechanical integrity of the critical equipment was not maintained, and safety barriers were bypassed, disabled, or left unmaintained. Third, management failure to learn from and/or apply the lessons of past incidents; there were several precursors before those incidents, and these precursors were either ignored altogether or not taken seriously. The paper concludes by sharing how a well-implemented operating management system, a good process safety culture, and competent leaders and staff contribute to managing the risks that prevent major incidents.

Keywords: incident investigation, risk management, loss prevention, process safety, accident prevention

Procedia PDF Downloads 58
603 Loss Quantification of Archaeological Sites in a Watershed Due to the Use and Occupation of Land

Authors: Elissandro Voigt Beier, Cristiano Poleto

Abstract:

The main objective of this research is to assess loss through the quantification of material culture (archaeological fragments) in rural areas: sites exploited economically by mechanized seasonal and permanent crops in a hydrographic subsystem of the Camaquã River in the state of Rio Grande do Sul, Brazil. The study area consists of several micro-basins of differing sizes, ranging between 1,000 m² and 10,000 m², all with a large number of occurrences and outcrop locations of archaeological material, at high density, in an environment of intense farming. The first stage of the research aimed to identify the dispersion of archaeological material through a field survey, plotting points with the Global Positioning System (GPS) within each river basin; a concise bibliography on the topic in the region was used to support a theoretical understanding of the ancient landscape and the settlement preferences of historical peoples, relating these to the practices observed in the field. The mapping was followed by cartographic work on the region through the development of land-elevation products. The resulting cartographic products contributed to understanding the distribution of the materials, the definition and extent of the dispersed material, and the turnover of in-situ material by mechanization resulting from human activities. It was also necessary to prepare density maps of the materials found, linking natural environments conducive to ancient historical occupation with the current human occupation.
The third stage of the project comprised the systematic collection of archaeological material without alteration of or interference with the subsurface of the indigenous settlements; the material was then prepared and treated in the laboratory to remove excess soil, cleaned following previously published methodology, measured, and quantified. Approximately 15,000 archaeological fragments were identified, belonging to different periods of the region's ancient history, all collected outside of their environmental and historical context, which had itself been considerably altered and modified. The material was identified and catalogued considering features such as object weight, size, type of material (lithic, ceramic, bone, historical porcelain) and its association with ancient history; attributes such as the individual lithology of each object and its functionality were disregarded. As preliminary results, we can point to the displacement of materials by heavy mechanization and the consequent soil disturbance processes, which carry archaeological materials away. Therefore, as a next step, an estimate of potential losses will be sought through a mathematical model. It is expected that this process will yield a reliable, high-accuracy model that can be applied to archaeological sites of lower density without incurring significant error.

Keywords: degradation of heritage, quantification in archaeology, watershed, use and occupation of land

Procedia PDF Downloads 279
602 Development and Preliminary Testing of the Dutch Version of the Program for the Education and Enrichment of Relational Skills

Authors: Sakinah Idris, Gabrine Jagersma, Bjorn Jaime Van Pelt, Kirstin Greaves-Lord

Abstract:

Background: The PEERS (Program for the Education and Enrichment of Relational Skills) intervention can be considered a well-established, evidence-based intervention in the USA. However, testing the efficacy of cultural adaptations of PEERS is still ongoing. Increasingly, the involvement of all stakeholders in the development and evaluation of interventions is acknowledged as crucial for the longer-term implementation of interventions across settings. Therefore, in the current project, teens with ASD (Autism Spectrum Disorder), their neurotypical peers, parents, teachers, and clinicians were involved in the development and evaluation of the Dutch version of PEERS. Objectives: The current presentation covers (1) the formative phase and (2) the preliminary adaptation test phase of the cultural adaptation of evidence-based interventions. For the formative phase, we describe the process of adapting the PEERS program to the Dutch culture and care system; for the preliminary adaptation phase, we present results from the preliminary adaptation test among 32 adolescents with ASD. Methods: In phase 1, a group discussion on common vocabulary was conducted among 70 teenagers (and their teachers), aged 12-18 years, from special and regular education. This inventory concerned 14 key constructs from PEERS, e.g., areas of interest, locations for making friends, common peer groups and crowds inside and outside of school, activities with friends, commonly used means of electronic communication, ways of handling disagreements, and common teasing comebacks. In addition, 15 clinicians were involved in the translation and cultural adaptation process. The translation and cultural adaptation process was guided by the research team, which incorporated input and feedback from all stakeholders through an iterative feedback-incorporation procedure.
In phase 2, the parent-reported Social Responsiveness Scale (SRS), the Test of Adolescent Social Skills Knowledge (TASSK), and the Quality of Socialization Questionnaire (QSQ) were assessed pre- and post-intervention to evaluate potential treatment outcomes. Results: The most striking cultural adaptation, reflecting the standpoints of all stakeholders, concerned the strategies for handling rumors and gossip, which were suggested to be taught using a similar approach as the teasing comebacks, more in line with ‘down-to-earth’ Dutch standards. The preliminary testing of this adapted version indicated that the adolescents with ASD significantly improved in social knowledge (TASSK; t₃₁ = -10.9, p < .01), social experience (QSQ-Parent; t₃₁ = -4.2, p < .01 and QSQ-Adolescent; t₃₂ = -3.8, p < .01), and parent-reported social responsiveness (SRS; t₃₃ = 3.9, p < .01). In addition, the subjective evaluations by teens with ASD, their parents and clinicians were positive. Conclusions: In order to further scrutinize the effectiveness of the Dutch version of the PEERS intervention, we recommend a larger-scale randomized controlled trial (RCT), for which we provide several methodological considerations.

Keywords: cultural adaptation, PEERS, preliminary testing, translation

Procedia PDF Downloads 169
601 Economic Impacts of Nitrogen Fertilizer Use in Tropical Pastures for Beef Cattle in Brazil

Authors: Elieder P. Romanzini, Lutti M. Delevatti, Rhaony G. Leite, Ricardo A. Reis, Euclides B. Malheiros

Abstract:

Brazilian beef cattle production systems are an important source of profit for the national gross domestic product. The main characteristic of these systems is the use of forage as the exclusive feed source. Forage utilization has given owners the false impression of low production costs; however, this low cost is accompanied by low profit and often by poor animal performance indexes, which can lead to changes of activity or even the sale of land. Aiming to evaluate the economic impacts on Brazilian beef cattle systems, four nitrogen fertilizer (N) application levels (0, 90, 180 and 270 kg per hectare [kg.ha-1]) were evaluated. The research was conducted during 2015 in the Forage Crops and Grasslands section of São Paulo State University “Júlio de Mesquita Filho” (Unesp) (Jaboticabal, São Paulo, Brazil). Pastures were seeded with Brachiaria brizantha Stapf. ‘Marandu’ (Palisade grass) and managed under a continuous grazing system with variable stocking rate and a sward height maintained at 25 cm. The economic evaluation covered the rearing and finishing phases, and we evaluated the cash flows within each phase for the different N levels. The economic evaluation considered: cost-effective operating (CEO), cost-total operating (CTO), gross revenue (GR), operating profit (OP) and net income (NI), all measured in US$. Complementary analyses were developed: profitability was calculated as [OP/GR]; payback (measured in years) was calculated as the average capital stock pondered by the area in use (ACS) divided by [GR-CEO]; and the internal rate of return (IRR) was calculated as 100/(payback). Input prices were those of 2015, obtained from the Anuário Brasileiro da Pecuária, the Centro de Estudos Avançados em Economia Aplicada, and quotations in the same animal production region (northeast São Paulo State) during the period mentioned above. Values were calculated in US$ according to an exchange rate of US$1.00 to R$3.34.
The CEO, CTO, GR, OP and NI per hectare for each N level were, respectively: US$1,919.66, US$2,048.47, US$2,905.72, US$857.25 and US$986.06 for 0 kg.ha-1; US$2,403.20, US$2,551.80, US$3,530.19, US$978.39 and US$1,126.99 for 90 kg.ha-1; US$3,180.42, US$3,364.81, US$4,985.03, US$1,620.23 and US$1,804.62 for 180 kg.ha-1; and US$3,709.14, US$3,915.15, US$5,554.95, US$1,639.80 and US$1,845.81 for 270 kg.ha-1. For the other economic indexes, profitability, payback and IRR, the results were, respectively: 29.50%, 6.44 and 15.54% for 0 kg.ha-1; 27.72%, 6.88 and 14.54% for 90 kg.ha-1; 32.50%, 4.08 and 24.50% for 180 kg.ha-1; and 29.52%, 3.42 and 29.27% for 270 kg.ha-1. These values allow us to affirm that the best result was obtained at the N level of 270 kg.ha-1, which among all the N levels evaluated can be explained by the improvement in stocking rate caused by the increase in N level. However, a crucial consideration for high N application to pastures is the efficiency of N utilization (associated with environmental impacts), which normally decreases as the N level increases. Hence, considering both aspects (efficiency of N utilization and economic results), an N level of 180 kg.ha-1 could be recommended for tropical pastures used for beef cattle production, as it had the best profitability and causes smaller environmental impacts, as shown by other studies developed in the same area.
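The index definitions above (OP = GR - CTO, NI = GR - CEO, profitability = OP/GR, IRR = 100/payback) can be checked against the reported per-hectare figures with a small sketch. Variable names are illustrative; since ACS (average capital stock) is not reported, payback is taken as given rather than derived:

```python
# Hedged sketch: recomputing the abstract's per-hectare economic indexes
# from the reported CEO, CTO, GR (US$/ha) and payback (years) figures.

levels = {  # N level (kg/ha): (CEO, CTO, GR, payback_years)
    0:   (1919.66, 2048.47, 2905.72, 6.44),
    90:  (2403.20, 2551.80, 3530.19, 6.88),
    180: (3180.42, 3364.81, 4985.03, 4.08),
    270: (3709.14, 3915.15, 5554.95, 3.42),
}

def indexes(ceo, cto, gr, payback):
    op = gr - cto                  # operating profit = gross revenue - total operating cost
    ni = gr - ceo                  # net income = gross revenue - effective operating cost
    profitability = 100 * op / gr  # percent
    irr = 100 / payback            # percent; payback itself is ACS / (GR - CEO)
    return round(op, 2), round(ni, 2), round(profitability, 2), round(irr, 2)

results = {n: indexes(*v) for n, v in levels.items()}
```

The recomputed values match the reported ones to within a cent or a few hundredths of a percentage point (small differences arise from rounding in the published figures), e.g. for 0 kg.ha-1 the sketch returns OP = 857.25 and NI = 986.06, exactly as reported.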

Keywords: Brachiaria brizantha, cost-total operating, gross revenue, profitability

Procedia PDF Downloads 174
600 The Contribution of Philosophical Hermeneutics to Forming a Highly Qualified Judiciary in Brazil

Authors: Thiago R. Pereira

Abstract:

Philosophical hermeneutics is able to change the Brazilian judiciary because of its understanding of the characteristics of the human being. It is impossible for humans, when invested in the function of judge, to make absolutely neutral decisions, but philosophical hermeneutics can assist the judge in making impartial decisions based on the federal constitution. Normative legal positivism imagined a neutral judge, able to adjudicate without any preconceived ideas and without allowing his or her background to exert influence. When a judge decides on the basis of legal rules, the problem is smaller; but when there are no clear legal rules and the judge must decide on the basis of principles, there is a risk that the decision rests on what the judge personally believes in. Taken solipsistically, this issue gains a huge dimension. Today, the Brazilian judiciary is independent, but there must be greater knowledge of philosophy and the philosophy of law, partly because the bigger problem is the unpredictability of judicial decisions. At present, when a lawsuit is filed, the result of the judgment is absolutely unpredictable; it is almost a gamble. There must be at least minimal legal certainty and predictability of judicial decisions, so that people with similar cases do not receive opposite sentences. Relativism, since classical antiquity, has accepted the possibility of multiple answers. From the Greeks of the sixth century before Christ, through the Germans of the eighteenth century, and even today, the constitution has been established as the great law, the Grundnorm, and thus the relativism of life can be greatly reduced when the hermeneut uses the Constitution as an interpretive north, where all interpretation must pass through the constitutional hermeneutic filter. For a current philosophy of law, within a legal system with a Federal Constitution there is a single correct answer to a specific case; the challenge is how to find this right answer.
The only answer to this question is that we should use the constitutional principles. But in many cases a collision between principles will take place, and to resolve this issue the judge or hermeneut may choose a solipsistic path, selecting what he or she personally believes to be the right one. For obvious reasons, that conduct is not safe. Thus, a theory of decision is necessary to seek justice, and hermeneutic philosophy and the linguistic turn will be necessary to find the right answer: the constitutionally most appropriate response. The constitutionally appropriate response will not always be the answer that individuals agree with, but we must put aside our preferences and defend the answer that the Constitution gives us. Therefore, hermeneutics applied to law, in search of the constitutionally appropriate response, should be the safest way to avoid individualistic judicial decisions. The aim of this paper is to present the science of law starting from the linguistic turn and philosophical hermeneutics, moving away from legal positivism. The methodology used in this paper is qualitative, academic and theoretical, employing philosophical hermeneutics with the mission of proposing a new way of thinking about the science of law. The research sought to demonstrate the difficulty the Brazilian courts have in departing from the secular influence of legal positivism. Moreover, it sought to demonstrate the need to think about the science of law from a contemporary perspective, in which the linguistic turn and philosophical hermeneutics will be the surest way to conduct the science of law in the present century.

Keywords: hermeneutic, right answer, solipsism, Brazilian judiciary

Procedia PDF Downloads 351
599 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR

Authors: Ionut Vintu, Stefan Laible, Ruth Schulz

Abstract:

Agricultural robotics has been developing steadily over recent years, with the goal of reducing and even eliminating pesticides used on crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots capable of driving in fields and performing crop-handling tasks is for robots to robustly detect the rows of plants. Recent work towards autonomous driving between plant rows offers large robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants, an approach that lacks flexibility and scalability with respect to the height of plants or the distance between rows. This paper instead proposes an algorithm that makes use of cheaper sensors and offers higher flexibility. The main application is in tree nurseries, where plant height can range from a few centimeters to a few meters and trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated poses of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and to handle exceptions, such as row gaps being falsely detected as the end of a row. Four methods were developed for detecting row structures in the fields, all taking a point cloud acquired with a 3D LiDAR as input. Comparing field coverage and the number of damaged plants, the method that uses a local map around the robot proved to perform best, with 68% of rows covered and 25% of plants damaged. This method is then combined with a graph-based localization algorithm, which uses the local map features to estimate the robot’s position within the greater field.
Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% with 27% of plants damaged. Future work will focus on approaching a perfect score of 100% covered rows and 0% damaged plants. The main challenges the algorithm needs to overcome are fields where the plants are too small to be detected and fields where it is hard to distinguish between individual plants because they overlap. The method was also tested on a real robot in a small field with artificial plants. The tests were performed using a small robot platform equipped with wheel encoders, an IMU and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
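As an illustration of the row-detection step (a deliberate simplification, not one of the four methods developed in the paper), a candidate row in the ground-projected point cloud can be modeled as a 2D line fitted by least squares, whose slope gives the heading the robot should follow:

```python
# Hedged sketch (not the authors' algorithm): estimating a plant row as a 2D line
# from LiDAR points projected onto the ground plane, via least-squares fitting.
import math

def fit_row_line(points):
    """Fit y = a*x + b through (x, y) ground-plane points; return (a, b, heading_rad)."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom  # slope of the row
    b = (sy - a * sx) / n            # lateral offset of the row
    return a, b, math.atan(a)        # heading to drive along the row

# Synthetic row: plant hits every 0.5 m along the line y = 0.1*x + 2.0.
row = [(x * 0.5, 0.1 * (x * 0.5) + 2.0) for x in range(20)]
a, b, heading = fit_row_line(row)  # recovers a ≈ 0.1, b ≈ 2.0
```

In practice, gaps from removed trees mean a single global fit is insufficient, which is precisely why the paper anchors detection in a local map and fuses it with the pose-graph constraints.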

Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection

Procedia PDF Downloads 140
598 Development and Adaptation of a LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During Covid-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)

Authors: Eric Pla Erra, Mariana Jimenez Martinez

Abstract:

While aggregated loads at a community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes that our behavior patterns have undergone due to the COVID-19 pandemic have modified our daily electrical consumption curves and, therefore, further complicated the forecasting methods used to predict short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of our electric grids, but it also plays a crucial role for individual domestic consumers who rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting leads to higher energy savings and better overall energy efficiency of the household when paired with a proper HEMS. In order to study how COVID-19 has affected the accuracy of forecasting methods, the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) is evaluated across the transition between the pre-pandemic and lockdown periods, considering day-ahead electric load forecasting. LGBM improves on standard decision tree models in both speed and memory consumption while still offering high accuracy. Even though LGBM has complex non-linear modelling capabilities, it has proven to be a competitive method under challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model, called “resilient LGBM”, is also tested, incorporating a concept drift detection technique for time series analysis, with the purpose of evaluating its ability to improve the model’s accuracy during extreme events such as COVID-19 lockdowns.
The results for the LGBM and resilient LGBM are compared using the standard RMSE (Root Mean Squared Error) as the main performance metric. The models’ performance is evaluated over a set of real households’ hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d’Ajuts Universitaris i de Recerca), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption, increasing the resilience of electrical systems through the use of tools such as HEMS and artificial intelligence.
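The core mechanism of a drift-aware forecaster can be sketched without the LGBM itself: monitor the forecast residuals and flag drift when recent errors blow up relative to a reference window. This is a hedged illustration only; the paper's actual drift detection technique is not specified here, and the window sizes and threshold factor below are illustrative:

```python
# Hedged sketch (not the paper's exact technique): a simple concept-drift monitor
# that flags drift when the RMSE of the latest residual window exceeds the RMSE
# of the preceding reference window by a fixed factor.
import math

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def detect_drift(residuals, window=24, factor=2.0):
    """residuals: hourly forecast errors. Returns the first hour index at which
    the RMSE over the latest `window` residuals exceeds `factor` times the RMSE
    of the preceding reference window, or None if no drift is observed."""
    for t in range(2 * window, len(residuals) + 1):
        reference = rmse(residuals[t - 2 * window : t - window])
        recent = rmse(residuals[t - window : t])
        if reference > 0 and recent > factor * reference:
            return t  # a resilient model would retrain/adapt from here
    return None

# Synthetic residuals: well-calibrated errors, then a lockdown-like regime shift.
pre = [0.1 * ((-1) ** i) for i in range(72)]   # small alternating errors
post = [1.5 * ((-1) ** i) for i in range(48)]  # errors blow up after the shift
drift_at = detect_drift(pre + post)            # flags t = 73, just after hour 72
```

In the "resilient LGBM" setting described above, such a trigger would mark the point at which the day-ahead model is retrained on post-shift data instead of continuing with a pre-pandemic fit.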

Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)

Procedia PDF Downloads 106
597 Management of Urine Recovery at the Building Level

Authors: Joao Almeida, Ana Azevedo, Myriam Kanoun-Boule, Maria Ines Santos, Antonio Tadeu

Abstract:

The effects of the increasing expansion of cities and of climate change have encouraged European countries and regions to adopt nature-based solutions with the ability to mitigate environmental issues and improve life in cities. Among these strategies, green roofs and urban gardens have been considered ingenious solutions, since they have the potential to improve air quality, prevent floods, reduce the heat island effect and restore biodiversity in cities. However, an additional consumption of fresh water and mineral nutrients is necessary to sustain larger green urban areas. This communication discusses the main technical features of a new system to manage urine recovery at the building level and its application to green roofs. The depletion of critical nutrients like phosphorus constitutes an emergency, and their elimination through urine is one of the principal causes of their loss. Thus, urine recovery in buildings may offer numerous advantages, constituting a valuable fertilizer abundantly available in cities and reducing the load on wastewater treatment plants. Although several urine-diverting toilets have been developed for this purpose and some experiments using urine directly in agriculture have already been carried out in Europe, several challenges have emerged with this practice concerning the collection, sanitization, storage and application of urine in buildings. To the best of our knowledge, current buildings are not designed to receive these systems, and no integrated solutions are known with the ability to self-manage the whole process of urine recovery, including the separation, maturation and storage phases. Additionally, even if from a hygiene point of view human urine may be considered a relatively safe fertilizer, the risk of disease transmission needs to be carefully analysed. A reduction in microorganisms can be achieved by storing the urine in closed tanks.
However, several factors may affect this process, which may result in a higher survival rate for some pathogens. In this work, urine effluent was collected under real conditions, stored in closed containers and kept in climatic chambers under variable conditions simulating cold, temperate and tropical climates. These samples were subjected to an initial physicochemical and microbiological control, which was repeated over time. The results obtained so far suggest that maturation conditions were reached at all three temperatures and that a storage period of less than three months is required to achieve a strong depletion of microorganisms. The authors are grateful for the project WashOne (POCI-01-0247-FEDER-017461), funded by the Operational Program for Competitiveness and Internationalization (POCI) of Portugal 2020 with the support of the European Regional Development Fund (FEDER).

Keywords: sustainable green roofs and urban gardens, urban nutrient cycle, urine-based fertilizers, urine recovery in buildings

596 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test

Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston

Abstract:

The Alinity i TBI test is registered with the Therapeutic Goods Administration (TGA) and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The performance of Alinity i TBI was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild traumatic brain injury (TBI) and a Glasgow Coma Scale score of 13 to 15. TBI is recognized as an important cause of death and disability and a growing public health problem: an estimated 69 million people worldwide experience a TBI annually. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility in predicting acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study, and testing was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of these 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%). 
The analytical measuring interval (AMI) extends from the lower limit of quantitation (LoQ) to the upper limit of quantitation and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall within-laboratory imprecision (20-day) ranged from 3.7% to 5.9% CV for GFAP and 3.0% to 6.0% CV for UCH-L1, including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting the utility to assist in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
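The reported clinical metrics follow directly from the stated counts (116 of 120 CT-positive subjects flagged positive; 713 of 1779 CT-negative subjects interpreted negative). A minimal sketch that reproduces them from confusion-matrix counts; the function and variable names are illustrative and not drawn from the study:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Compute sensitivity, specificity and NPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # test-positive among CT-positive subjects
    specificity = tn / (tn + fp)  # test-negative among CT-negative subjects
    npv = tn / (tn + fn)          # CT-negative among all test-negative results
    return sensitivity, specificity, npv

# Counts reported in the abstract: 116/120 CT-positives detected,
# 713/1779 CT-negatives with a negative TBI interpretation
sens, spec, npv = diagnostic_metrics(tp=116, fn=4, tn=713, fp=1066)
print(f"Sensitivity {sens:.1%}, Specificity {spec:.1%}, NPV {npv:.1%}")
# → Sensitivity 96.7%, Specificity 40.1%, NPV 99.4%
```

Note that the high NPV (99.4%) rests on only 4 false negatives among 717 test-negative subjects, which is why the test supports ruling out the need for a head CT scan despite the modest specificity.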

Keywords: biomarker, diagnostic, neurology, TBI

595 Investigating the Online Effect of Language on Gesture in Advanced Bilinguals of Two Structurally Different Languages in Comparison to Native Speakers of the L2, Exploring Whether Bilinguals Follow Target L2 Patterns in Speech and Co-Speech Gesture

Authors: Armita Ghobadi, Samantha Emerson, Seyda Ozcaliskan

Abstract:

Being bilingual involves mastery of both speech and gesture patterns in a second language (L2). We know from earlier work in first language (L1) production contexts that speech and co-speech gesture form a tightly integrated system: co-speech gesture mirrors the patterns observed in speech, suggesting an online effect of language on the nonverbal representation of events in gesture during the act of speaking (i.e., "thinking for speaking"). Relatively less is known about the online effect of language on gesture in bilinguals speaking structurally different languages. The few existing studies, mostly with small sample sizes, suggest inconclusive findings: some show greater attainment of L2 patterns in gesture with more advanced L2 speech production, while others show preferences for L1 gesture patterns even in advanced bilinguals. In this study, we focus on advanced bilingual speakers of two structurally different languages (Spanish L1 with English L2) in comparison to L1 English speakers. We ask whether bilingual speakers follow target L2 patterns not only in speech but also in gesture, or alternatively, follow L2 patterns in speech but resort to L1 patterns in gesture. We examined this question by studying the speech and gestures produced by 23 advanced adult Spanish (L1)-English (L2) bilinguals (Mage=22; SD=7) and 23 monolingual English speakers (Mage=20; SD=2). Participants were shown 16 animated motion event scenes that included distinct manner and path components (e.g., "run over the bridge"). We recorded and transcribed all participant responses for speech and segmented them into sentence units that included at least one motion verb and its associated arguments. We also coded all gestures that accompanied each sentence unit. We focused on motion event descriptions as they show strong crosslinguistic differences in the packaging of motion elements in speech and co-speech gesture in first language production contexts. 
English speakers synthesize manner and path into a single clause or gesture (he runs over the bridge; running fingers forward), while Spanish speakers express each component separately (manner-only: el corre = he is running; circling arms next to the body to convey running; path-only: el cruza el puente = he crosses the bridge; tracing a finger forward to convey trajectory). We tallied all responses by group and packaging type, separately for speech and co-speech gesture. Our preliminary results (n=4/group) showed that productions in English L1 and Spanish L1 differed, with a greater preference for conflated packaging in L1 English and separated packaging in L1 Spanish, a pattern that was also largely evident in co-speech gesture. Bilinguals' production in L2 English, however, followed the patterns of the target language in speech, with a greater preference for conflated packaging, but not in gesture. Bilinguals used separated and conflated strategies in gesture at roughly similar rates in their L2 English, showing an effect of both L1 and L2 on co-speech gesture. Our results suggest that online production of an L2 has more limited effects on L2 gestures and that mastery of native-like patterns in L2 gesture may take longer than mastery of native-like L2 speech patterns.
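The tallying step described above can be sketched as a simple cross-count over coded responses. This is a hypothetical illustration of such a coding tally; the record format and labels are invented for this sketch and are not the study's actual coding scheme:

```python
from collections import Counter

# Each coded response: (group, modality, packaging), where packaging is
# "conflated" (manner and path in one clause/gesture, the English pattern)
# or "separated" (one component per clause/gesture, the Spanish pattern).
coded_responses = [
    ("L1-English", "speech", "conflated"),
    ("L1-English", "gesture", "conflated"),
    ("L1-Spanish", "speech", "separated"),
    ("L1-Spanish", "gesture", "separated"),
    ("L2-English", "speech", "conflated"),
    ("L2-English", "gesture", "separated"),
    ("L2-English", "gesture", "conflated"),
]

# Tally responses by group and packaging type, separately for speech and gesture
tally = Counter(coded_responses)
for (group, modality, packaging), count in sorted(tally.items()):
    print(f"{group} / {modality} / {packaging}: {count}")
```

Comparing the conflated-versus-separated counts per group and modality then exposes the pattern reported in the abstract: L2 speech tracking the target language while L2 gesture splits between both strategies.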

Keywords: bilingualism, cross-linguistic variation, gesture, second language acquisition, thinking for speaking hypothesis
