Search results for: grade 12 educators' difficulties
49 Examining Language as a Crucial Factor in Determining Academic Performance: A Case of Business Education in Hong Kong
Authors: Chau So Ling
Abstract:
I. INTRODUCTION: Educators have always been interested in exploring factors that contribute to students’ academic success. It is beyond question that language, as a medium of instruction, affects student learning. This paper investigates whether language is a crucial factor in determining students’ achievement in their studies. II. BACKGROUND AND SIGNIFICANCE OF STUDY: The issue of using English as a medium of instruction in Hong Kong is a special topic because Hong Kong is a post-colonial, international city that was formerly a British colony. In such a specific language environment, researchers in the education field have always been interested in investigating students’ language proficiency and its relation to academic achievement and other related educational indicators such as motivation to learn, self-esteem, learning effectiveness, and self-efficacy. Along this line of thought, this study focused specifically on business education. III. METHODOLOGY: The methodology involved two sequential stages, namely a focus group interview and a data analysis; the study thus covered both qualitative and quantitative aspects. The subjects were divided into two groups. For the first group, participating in the interview, a total of ten high school students were invited. They studied Business Studies, and their English standards varied. The theme of the discussion was “Does English affect your learning and examination results in Business Studies?” The students were facilitated to discuss the extent to which their English standard affected their learning of Business subjects and were asked to rate the correlation between English and performance in Business Studies on a five-point scale. The second stage of the study involved another group of students: high school graduates who had taken the public examination for entering universities. 
A database containing their public examination results for different subjects was obtained for the purpose of statistical analysis. Hypotheses were tested, and evidence from the focus group interview was used to triangulate the findings. IV. MAJOR FINDINGS AND CONCLUSION: Through the sharing of personal experiences, the focus group discussion indicated that a higher English standard could help students achieve better learning and examination performance. To conclude the interview, the students were asked to rate the correlation between English proficiency and performance in Business Studies on a five-point scale. With point one meaning least correlated, ninety percent of the students gave point four for the correlation. These preliminary results illustrated that English plays an important role in students’ learning of Business Studies, or at least this was what the students perceived, and they set the hypotheses for the study. After the focus group interview, further evidence had to be gathered to support the hypotheses. The data analysis stage examined the relationship by correlating the students’ public examination results in Business Studies with their levels of English standard. The results indicated a positive correlation between English standard and Business Studies examination performance. To highlight the importance of the English language to the study of Business Studies, the correlation with the public examination results of other, non-business subjects was also tested. Statistical results showed that language plays a greater role in affecting students’ performance in Business subjects than in the other subjects. The explanation includes the dynamic subject nature, examination format and study requirements, the specialist language used, etc. 
Unlike in Science and Geography, students might find it more difficult to relate business concepts or terminologies to their own experience, and there are not many obvious physical or practical activities or visual aids to serve as evidence or experiments. It is well researched in Hong Kong that English proficiency is a determinant of academic success, and other research studies have verified this notion. For example, research revealed that the more enriched the language experience, the better the cognitive performance in conceptual tasks; the ability to perform this kind of task is particularly important to students taking Business subjects. Another study, carried out in the UK, was geared towards identifying and analyzing the reasons for underachievement across a cohort of GCSE students taking Business Studies. Results showed that weak language ability was the main barrier to raising students’ performance levels. The interview result was thus successfully triangulated with the data findings. Although educational failure cannot be reduced to linguistic failure, and language is just one of the variables at play in determining academic achievement, it is generally accepted that language does affect students’ academic performance; it is just a matter of extent. This paper provides recommendations for business educators on students’ language training and sheds light on further research possibilities in this area.
Keywords: academic performance, language, learning, medium of instruction
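The positive correlation the abstract reports, between students' English level and their Business Studies examination results, can be illustrated with a small sketch. The grade data below are hypothetical placeholders, not the study's database; a Pearson coefficient near 1 would correspond to the strong association described.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: English level (five-point scale) vs. Business Studies mark (%)
english_level = [2, 3, 3, 4, 4, 5, 5, 2, 3, 4]
business_mark = [48, 55, 60, 66, 70, 78, 82, 52, 58, 68]
r = pearson_r(english_level, business_mark)  # strong positive correlation
```

In practice the study would also test the coefficient's significance and compare it against the correlations obtained for non-business subjects.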
Procedia PDF Downloads 122
48 Autonomous Strategic Aircraft Deconfliction in a Multi-Vehicle Low Altitude Urban Environment
Authors: Loyd R. Hook, Maryam Moharek
Abstract:
With the envisioned future growth of low-altitude urban aircraft operations for airborne delivery services and advanced air mobility, strategies to coordinate and deconflict aircraft flight paths must be prioritized. Autonomous coordination and planning of flight trajectories is the preferred approach to this future vision in order to increase safety, density, and efficiency over the manual methods employed today. Difficulties arise because any conflict resolution must be constrained by all other aircraft, all airspace restrictions, and all ground-based obstacles in the vicinity. These considerations make pair-wise tactical deconfliction difficult at best and unlikely to find a suitable solution for the entire system of vehicles. In addition, more traditional methods, which rely on long time scales and large protected zones, will artificially limit vehicle density and drastically decrease efficiency. Instead, strategic planning, which is able to respond to highly dynamic conditions and still account for high-density operations, will be required to coordinate multiple vehicles in the highly constrained low-altitude urban environment. This paper develops and evaluates such a planning algorithm, which can be implemented autonomously across multiple aircraft and situations. Data from this evaluation provide promising results, with simulations showing up to 10 aircraft deconflicted through a relatively narrow low-altitude urban canyon without any vehicle-to-vehicle or obstacle conflict. The algorithm achieves this level of coordination beginning with the assumption that each vehicle is controlled to follow an independently constructed flight path, which is itself free of obstacle conflict and restricted airspace. Then, by preferring speed-change deconfliction maneuvers constrained by each vehicle's flight envelope, vehicles can remain as close as possible to the original planned path and prevent cascading vehicle-to-vehicle conflicts. 
Performing the search for a set of commands that can simultaneously ensure separation for each pair-wise aircraft interaction and optimize the total velocities of all the aircraft is further complicated by the fact that each aircraft's flight plan could contain multiple segments. This means that relative velocities will change whenever an aircraft reaches a waypoint and changes course. Additionally, the timing of when that aircraft will reach a waypoint (or, more directly, the order in which all of the aircraft will reach their respective waypoints) will change with the commanded speed. Put together, the continuous relative velocity of each vehicle pair and the discretized change in relative velocity at waypoints resemble a hybrid reachability problem, a form of control reachability. This paper proposes two methods for finding solutions to these multi-body problems. First, an analytical formulation of the continuous problem is developed with an exhaustive search of the combined state space. However, because of computational complexity, this technique is only computable for pairwise interactions. For more complicated scenarios, including the proposed 10-vehicle example, a discretized search space is used, and a depth-first search with early stopping is employed to find the first solution that satisfies the constraints.
Keywords: strategic planning, autonomous, aircraft, deconfliction
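The discretized search described above can be sketched as follows. This is a simplified illustration, not the authors' algorithm: aircraft are reduced to a single shared merge point, a small set of candidate speeds stands in for the discretized command space, and separation is checked on arrival times only. The depth-first search prunes infeasible branches as soon as a partial assignment violates a pairwise constraint, and early stopping returns the first complete assignment that satisfies them all.

```python
def first_feasible_speeds(distances, speed_options, min_gap):
    """Depth-first search with early stopping: assign each aircraft one speed
    from its candidate set so that arrival times at a shared merge point are
    pairwise separated by at least min_gap. Returns the first feasible
    assignment found, or None if the discretized space has no solution."""
    n = len(distances)

    def separated(times):
        # pairwise separation check over the aircraft assigned so far
        return all(abs(a - b) >= min_gap
                   for i, a in enumerate(times) for b in times[i + 1:])

    def dfs(i, speeds, times):
        if i == n:                          # every aircraft has a command
            return list(speeds)
        for v in speed_options[i]:
            t = distances[i] / v            # arrival time under this command
            if separated(times + [t]):      # prune infeasible branches early
                res = dfs(i + 1, speeds + [v], times + [t])
                if res is not None:         # early stopping: first solution wins
                    return res
        return None

    return dfs(0, [], [])

# Three aircraft, 1000 m from the merge point, candidate speeds in m/s
plan = first_feasible_speeds([1000, 1000, 1000], [[50, 45, 40]] * 3, 2.0)
```

A full implementation would replace the single merge point with multi-segment flight plans, which is precisely what makes the problem hybrid: the pairwise relative velocities change at each waypoint.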
Procedia PDF Downloads 98
47 Trends in Blood Pressure Control and Associated Risk Factors Among US Adults with Hypertension from 2013 to 2020: Insights from NHANES Data
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Controlling blood pressure (BP) is critical to reducing the risk of cardiovascular disease. However, BP control rates (systolic BP < 140 mm Hg and diastolic BP < 90 mm Hg) have declined since 2013, warranting further analysis to identify contributing factors and potential interventions. This study investigates the factors associated with the decline in BP control among U.S. adults with hypertension over the past decade. Data from the U.S. National Health and Nutrition Examination Survey (NHANES) were used to assess BP control trends between 2013 and 2020. The analysis included 18,927 U.S. adults with hypertension aged 18 years and older who completed study interviews and examinations. The dataset, obtained from the cardioStatsUSA and RNHANES R packages, was merged based on survey IDs. Key variables analyzed included demographic factors, lifestyle behaviors, hypertension status, BMI, comorbidities, antihypertensive medication use, and cardiovascular disease history. The prevalence of BP control declined from 78.0% in 2013-2014 to 71.6% in 2017-2020. Non-Hispanic Whites had the highest BP control prevalence (33.6% in 2013-2014), but this declined to 26.5% by 2017-2020. In contrast, BP control among Non-Hispanic Blacks increased slightly. Younger adults (aged 18-44) exhibited better BP control, but control rates declined over time. Obesity prevalence increased, contributing to poorer BP control. Antihypertensive medication use rose from 26.1% to 29.2% across the study period. Lifestyle behaviors, such as smoking and diet, also affected BP control, with nonsmokers and those with better diets showing higher control rates. Key findings indicate significant disparities in blood pressure control across racial/ethnic groups. Non-Hispanic Black participants had consistently higher odds (OR ranging from 1.84 to 2.33) of poor blood pressure control compared to Non-Hispanic Whites, while odds among Non-Hispanic Asians varied by cycle. 
Younger age groups (18-44 and 45-64) showed significantly lower odds of poor blood pressure control compared to those aged 75+, highlighting better control in younger populations. Men had consistently higher odds of poor control compared to women, though this disparity decreased slightly in 2017-2020. Medical comorbidities such as diabetes and chronic kidney disease were associated with significantly higher odds of poor blood pressure control across all cycles. Participants with chronic kidney disease had particularly elevated odds (OR=5.54 in 2015-2016), underscoring the challenge of managing hypertension in these populations. Antihypertensive medication use was also linked with higher odds of poor control, suggesting potential difficulties in achieving target blood pressure despite treatment. Lifestyle factors such as alcohol consumption and physical activity showed no consistent association with blood pressure control. However, dietary quality appeared protective, with those reporting an excellent diet showing lower odds (OR=0.64) of poor control in the overall sample. Increased BMI was associated with higher odds of poor blood pressure control, particularly in the 30-35 and 35+ BMI categories during 2015-2016. The study highlights a significant decline in BP control among U.S. adults with hypertension, particularly among certain demographic groups and those with increasing obesity rates. Lifestyle behaviors, antihypertensive medication use, and socioeconomic factors all played a role in these trends.
Keywords: diabetes, blood pressure, obesity, logistic regression, odds ratio
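The odds ratios reported above come from logistic regression on the survey data; as a minimal illustration of how such a figure is read, the sketch below computes an odds ratio (and a Woolf-method confidence interval) from a simple 2x2 table. The counts are hypothetical, chosen only to land near the OR=5.54 magnitude mentioned for chronic kidney disease, and a real analysis would adjust for covariates and survey weights.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a = exposed with poor control,   b = exposed with controlled BP,
    c = unexposed with poor control, d = unexposed with controlled BP."""
    return (a * d) / (b * c)

def or_confidence_interval(a, b, c, d, z=1.96):
    """Approximate 95% CI for the odds ratio (Woolf's log method)."""
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

# Hypothetical counts: chronic kidney disease (exposed) vs. poor BP control
or_ckd = odds_ratio(120, 80, 300, 1100)            # = (120*1100)/(80*300)
ci_lo, ci_hi = or_confidence_interval(120, 80, 300, 1100)
```

An OR above 1 with a CI excluding 1 indicates the exposure group has higher odds of poor control, which is how findings such as "OR=5.54 in 2015-2016" are interpreted.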
Procedia PDF Downloads 17
46 Functions and Challenges of New County-Based Regional Plan in Taiwan
Authors: Yu-Hsin Tsai
Abstract:
A new, mandated county regional plan system has been initiated nationwide in Taiwan since 2010, with its role situated in between the policy-led cross-county regional plan and the blueprint-led city plan. This new regional plan contains both urban and rural areas in one single plan, which provides a more complete planning territory, i.e., the city region within the county’s jurisdiction, to be executed and managed effectively by the county government. However, the full picture of its functions and characteristics, compared with other levels of plans, is still not totally clear; neither is it clear which planning goals and issues can be most appropriately dealt with at this spatial scale. In addition, the extent to which sustainability ideals and measures to cope with climate change are included is unclear. Based on these issues, this study aims to clarify the roles of the county regional plan, to analyze the extent to which its measures address sustainability, climate change, and forecasted population decline, and to identify the success factors and issues faced in the planning process. The methodology includes a literature review, plan quality evaluation, and interviews with officials of the central and local governments and with the urban planners involved, covering all 23 counties in Taiwan. The preliminary research results show, first, that growth-management-related policies have been widely implemented and are expected to be effective, including incorporating resource capacity to determine a maximum population for the city region as a whole, developing an overall vision of an urban growth boundary for the whole city region, prioritizing infill development, and preferring the use of buildable land within urbanized areas over rural land to cope with urban growth. Secondly, planning-oriented zoning is adopted in urban areas, while demand-oriented planning permission is applied in rural areas with designated plans. 
Third, public participation has evolved to the next level, overseeing all of the government’s planning and review processes, owing to decreasing trust in the government and the development of public forums on the internet. Next, fertile agricultural land is preserved to maintain the food self-sufficiency goal out of national security concerns. More adaptation-based than mitigation-based methods have been applied to cope with global climate change. Finally, better land use and transportation planning, in terms of avoiding the development of rail transit stations and corridors in rural areas, is promoted. Even though many promising, prompt measures have been adopted, challenges remain. First, overall urban density, which likely affects the success of the UGB and the use of rural agricultural land, has not been incorporated, possibly due to implementation difficulties. Second, land-use-related measures for mitigating climate change seem less clear and are hence less employed. Smart decline has not drawn enough attention as a way to cope with the population decrease predicted for the next decade. Then, some reluctance by county governments to implement the county regional plan can be vaguely observed, possibly because limits have been set on further development of agricultural land and sensitive areas. Finally, resolving the issue of existing illegal factories on agricultural land remains the most challenging dilemma.
Keywords: city region plan, sustainability, global climate change, growth management
Procedia PDF Downloads 352
45 Assessment of Airborne PM0.5 Mutagenic and Genotoxic Effects in Five Different Italian Cities: The MAPEC_LIFE Project
Authors: T. Schilirò, S. Bonetta, S. Bonetta, E. Ceretti, D. Feretti, I. Zerbini, V. Romanazzi, S. Levorato, T. Salvatori, S. Vannini, M. Verani, C. Pignata, F. Bagordo, G. Gilli, S. Bonizzoni, A. Bonetti, E. Carraro, U. Gelatti
Abstract:
Air pollution is one of the most important worldwide health concerns. In recent years, in both the US and Europe, new directives and regulations supporting more restrictive pollution limits were published. Nevertheless, the early effects of air pollution still occur, especially in urban populations. Several epidemiological and toxicological studies have documented the remarkable effect of particulate matter (PM) in increasing morbidity and mortality from cardiovascular disease and lung cancer, as well as natural-cause mortality. The finest fractions of PM (PM with an aerodynamic diameter of 2.5 µm or less) play a major role in causing chronic diseases. The International Agency for Research on Cancer (IARC) has recently classified air pollution and fine PM as carcinogenic to humans (Group 1). The structure and composition of PM influence the biological properties of the particles. The chemical composition varies with the season and region of sampling, photochemical-meteorological conditions, and sources of emissions. The aim of the MAPEC (Monitoring Air Pollution Effects on Children for supporting public health policy) study is to evaluate the associations between air pollution and biomarkers of early biological effects in the oral mucosa cells of 6-8-year-old children recruited from first-grade schools. The study was performed in five Italian towns (Brescia, Torino, Lecce, Perugia, and Pisa) characterized by different levels of airborne PM (PM10 annual average from 44 µg/m3 measured in Torino to 20 µg/m3 measured in Lecce). Two to five schools in each town were chosen to evaluate the variability of pollution within the same town. Child exposure to urban air pollution was evaluated by collecting ultrafine PM (PM0.5) in the school area on the same day as the biological sampling. PM samples were collected for 72 h using a high-volume gravimetric air sampler and glass fiber filters in two different seasons (winter and spring). 
Gravimetric analysis of the collected filters was performed; the PM0.5 organic extracts were chemically analyzed (PAHs, nitro-PAHs) and tested on A549 cells by the comet assay and micronucleus test and on Salmonella strains (TA100, TA98, TA98NR, and YG1021) by the Ames test. Results showed that PM0.5 represents a highly variable percentage of PM10 (range 19.6-63%). PM10 concentrations were generally lower than 50 µg/m3 (the EU daily limit). All PM0.5 extracts showed a mutagenic effect with the TA98 strain (net revertants/m3 range 0.3-1.5) and suggested the presence of indirect mutagens, while a lower effect was observed with the TA100 strain. The results with the TA98NR and YG1021 strains showed the presence of nitroaromatic compounds, as confirmed by the chemical analysis. No genotoxic or oxidative effect of the PM0.5 extracts was observed using the comet assay (with/without the Fpg enzyme) or the micronucleus test, except for some sporadic samples. The low biological effect observed could be related to the low level of air pollution during this winter sampling, associated with high atmospheric instability. For a greater understanding of the relationship between PM size, composition, and biological effects, the results obtained in this study suggest investigating the biological effect of the other PM fractions, and in particular of the PM0.5-1 fraction.
Keywords: airborne PM, Ames test, comet assay, micronucleus test
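The "net revertants/m3" figures above are typically obtained by subtracting the spontaneous (solvent control) revertant count from the test-plate count and normalizing by the air-volume equivalent of extract dosed on the plate. A minimal sketch, with hypothetical plate counts chosen to fall inside the reported 0.3-1.5 range:

```python
def net_revertants_per_m3(plate_count, spontaneous_count, m3_per_plate):
    """Net mutagenicity of a PM extract: revertant colonies on the test plate
    minus the spontaneous (solvent control) count, per m3 of sampled air
    equivalent tested on that plate."""
    return (plate_count - spontaneous_count) / m3_per_plate

# Hypothetical TA98 counts: 95 colonies on the test plate, 25 spontaneous,
# with the dosed extract equivalent to 50 m3 of sampled air
net_ta98 = net_revertants_per_m3(95, 25, 50)
```

In practice the slope is estimated from a dose-response series of several plate doses rather than from a single plate.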
Procedia PDF Downloads 323
44 From Intuitive to Constructive Audit Risk Assessment: A Complementary Approach to CAATTs Adoption
Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy
Abstract:
The use of the audit risk model in auditing has faced limitations and difficulties, leading auditors to rely on a conceptual level of its application. The qualitative approach to assessing risks has resulted in divergent risk assessments, affecting the quality of audits and decision-making on the adoption of CAATTs. This study aims to investigate the risk factors impacting the implementation of the audit risk model and to propose a complementary risk-based instrument, key risk indicators (KRIs), to form substantive risk judgments and mitigate the heightened risk of material misstatement (RMM). The study addresses the question of how risk factors impact the implementation of the audit risk model, improve risk judgments, and aid in the adoption of CAATTs. The study uses a three-stage scale development procedure involving a pretest and a subsequent study with two independent samples. The pretest involves an exploratory factor analysis, while the subsequent study employs confirmatory factor analysis for construct validation. Additionally, the authors test the ability of the KRIs to predict the audit effort needed to mitigate the heightened RMM. Data were collected through two independent samples involving 767 participants. The collected data were analyzed using exploratory and confirmatory factor analysis to assess scale validity and construct validation. The suggested KRIs, comprising two risk components and seventeen risk items, are found to have high predictive power in determining the audit effort needed to reduce the RMM. The study validates the suggested KRIs as an effective instrument for risk assessment and decision-making on the adoption of CAATTs. This study contributes to the existing literature by implementing a holistic approach to risk assessment and providing a quantitative expression of assessed risks. It bridges the gap between intuitive risk evaluation and the theoretical domain, clarifying the mechanism of risk assessments. 
It also helps improve the uniformity and quality of risk assessments, aiding audit standard-setters in issuing updated guidelines on CAATT adoption. A few limitations and recommendations for future research should be mentioned. First, the scale development process was conducted in the Israeli auditing market, which follows the International Standards on Auditing (ISAs). Although ISAs are adopted in European countries, for greater generalization, future studies could focus on other countries that adopt additional or local auditing standards. Second, this study revealed risk factors that have a material impact on the assessed risk; however, there could be additional risk factors that influence the assessment of the RMM. Therefore, future research could investigate other risk segments, such as operational and financial risks, to bring broader generalizability to our results. Third, although the sample size in this study fits acceptable scale development procedures and enables drawing conclusions from the body of research, future research may develop standardized measures based on larger samples to reduce the generation of equivocal results and suggest an extended risk model.
Keywords: audit risk model, audit efforts, CAATTs adoption, key risk indicators, sustainability
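The abstract does not report reliability statistics, but a common companion step in a scale-development procedure of this kind is an internal-consistency check, such as Cronbach's alpha, alongside the exploratory and confirmatory factor analyses. A minimal sketch, with hypothetical risk-item scores (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale: items is a list of item columns, each
    column holding every respondent's score on one risk item."""
    k = len(items)                      # number of items in the scale
    n = len(items[0])                   # number of respondents

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # total scale score per respondent, summed across items
    totals = [sum(col[i] for col in items) for i in range(n)]
    sum_item_var = sum(sample_var(col) for col in items)
    return k / (k - 1) * (1 - sum_item_var / sample_var(totals))

# Hypothetical scores from five respondents on three highly consistent items
alpha = cronbach_alpha([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [2, 2, 3, 4, 5]])
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency for a newly developed scale.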
Procedia PDF Downloads 77
43 Improved Anatomy Teaching by the 3D Slicer Platform
Authors: Ahmedou Moulaye Idriss, Yahya Tfeil
Abstract:
Medical imaging technology has become an indispensable tool in many branches of the biomedical and health fields and in research, and it is vitally important for the training of professionals in these fields. This project is not only about the tools, technologies, and knowledge provided but also about the community it proposes to build. In order to raise the level of anatomy teaching in the medical school of Nouakchott in Mauritania, it is necessary, and even urgent, to facilitate access to modern technology for African countries. The role of technology as a key driver of sustainable development has long been recognized. Anatomy is an essential discipline for the training of medical students and a key element in the training of medical specialists. The quality and results of a young surgeon's work depend on a sound knowledge of anatomical structures. The teaching of anatomy is difficult, and the discipline is being neglected by medical students in many academic institutions; however, anatomy remains a vital part of any medical education program. When anatomy is presented in various planes, medical students report difficulties in understanding: they do not develop the ability to visualize and mentally manipulate 3D structures, and they are sometimes unable to correctly identify neighbouring or associated structures. This is the case, for example, when they have to identify structures related to the caudate lobe when the liver is moved to different positions. In recent decades, modern educational tools using digital sources have tended to replace older methods. One of the main reasons for this change is the lack of cadavers in laboratories with poorly qualified staff. The emergence of increasingly sophisticated mathematical models, image processing, and visualization tools in biomedical imaging research has enabled sophisticated three-dimensional (3D) representations of anatomical structures. 
In this paper, we report our current experience at the Faculty of Medicine in Nouakchott, Mauritania. One of our main aims is to create a local learning community in the field of anatomy. The main technological platform used in this project is called 3D Slicer, an open-source application available for free for the visualization and analysis of, and interaction with, biomedical imaging data. Using the 3D Slicer platform, we created, from real medical images, anatomical atlases of parts of the human body, including the head, thorax, abdomen, liver, pelvis, and upper and lower limbs. Data were collected from several local hospitals and also from online sources. We used MRI and CT scan imaging data from children and adults. Many different anatomy atlases exist, in both print and digital forms. An anatomy atlas displays three-dimensional anatomical models, image cross-sections of labelled structures and source radiological imaging, and a text-based hierarchy of structures. The open and free online anatomical atlases developed by our anatomy laboratory team will be available to our students. This will allow pedagogical autonomy and remedy existing shortcomings by responding more fully to the objectives of sustainable local development of quality education and good health at the national level. To make this work a reality, our team produced several atlases, available in our faculty in the form of research projects.
Keywords: anatomy, education, medical imaging, three-dimensional
Procedia PDF Downloads 245
42 A Postmodern Framework for Quranic Hermeneutics
Authors: Christiane Paulus
Abstract:
Post-Islamism assumes that the Quran should not be viewed in terms of what Lyotard identifies as a ‘meta-narrative’. However, its socio-ethical content can be viewed as critical of power discourse (Foucault). Practicing religion seems to be limited to rites and individual spirituality, taqwa. Alternatively, can we build on Muhammad Abduh's classic-modern reform and develop it within a postmodernist frame? This is the main question of this study. Through his general and vague remarks on the context of the Quran, Abduh was the first to refer to the historical and cultural distance of the text as an obstacle to interpretation. His application, however, corresponded to the modern absolute idea of an authentic sharia. He was followed by Amin al-Khuli, who hermeneutically linked the content of the Quran to the theory of evolution. Fazlur Rahman and Nasr Hamid Abu Zayd remain reluctant to go beyond the general level in terms of context. The hermeneutic circle therefore persists as a challenge: how to get out and overcome one's own assumptions? Insight into, and acceptance of, the lasting ambivalence of understanding can be grasped as a postmodern approach; it is documented in Derrida's discovery of the shift in textual meanings, différance, and in Lyotard's theory of the différend. The resulting mixture of meanings (Wolfgang Welsch) can be read together with the classic ambiguity of the premodern interpreters of the Quran (Thomas Bauer). Confronting hermeneutic difficulties in general, Niklas Luhmann proves every description to be an attribution, a tautology, i.e., remaining in the circle. ‘De-tautologization’ is possible, namely by analyzing the distinctions, in the sense of objective, temporal, and social information, that every text contains. This could be expanded with the Kantian aesthetic dimension of reason (Critique of Judgment) corresponding to the iʿjaz of the Quran. 
Luhmann asks, ‘What distinction does the observer/author make?’ The Quran, as speech from God to its first listeners, could be seen as a discourse responding to the problems of everyday life of that time, which can be viewed as the general aim of the entire Quran. By reconstructing Quranic lifeworlds (Alfred Schütz) in detail, the social structure crystallizes the socio-economic differences, in particular the enormous poverty. The Quranic instruction to provide for the basic needs of the neglected groups, which often intersect (the old, the poor, slaves, women, children), can be seen immediately in the text. First, references to lifeworlds/social problems and discourses in longer Quranic passages should be hypothesized. Subsequently, information from the classic commentaries could be extracted; the classical tafsir, in particular, contains rich narrative material for such reconstruction. By selecting and assigning suitable, specific context information, the meaning of the description becomes condensed (Clifford Geertz). In this manner, the text necessarily acquires an alienation and becomes newly accessible. The socio-ethical implications can thus be grasped from the difference between the original problem and the revealed/improved order/procedure; this small step can be materialized as such, not as an absolute solution but as an offer of plausible patterns for today's challenges, such as the 2030 Agenda.
Keywords: postmodern hermeneutics, condensed description, sociological approach, small steps of reform
Procedia PDF Downloads 221
41 Force Sensing Resistor Testing of Hand Forces and Grasps during Daily Functional Activities in the Covid-19 Pandemic
Authors: Monique M. Keller, Roline Barnes, Corlia Brandt
Abstract:
Introduction: Scientific evidence on hand forces and the types of grasps used during daily tasks is lacking, leaving a gap in the fields of hand rehabilitation and robotics. Measuring the grasp forces and types produced by the individual fingers during daily functional tasks is valuable for informing and grading rehabilitation practices for second to fifth metacarpal fractures with robust scientific evidence. Feix et al. (2016) conducted the most extensive and complete grasp study, which resulted in the GRASP taxonomy. The COVID-19 pandemic changed data collection across the globe, and safety precautions in research are essential to ensure the health of participants and researchers. Methodology: A cross-sectional study investigated the hand forces of six healthy adult pilot participants, aged 20 to 59 years, during 105 tasks. The tasks were categorized into five sections, namely personal care, transport and moving around, home environment and inside, gardening and outside, and office. The predominant grasp of each task was identified, guided by the GRASP taxonomy. Grasp forces were measured with 13 mm force-sensing resistors (FSRs) glued onto a glove attached to the individual fingers of the dominant and non-dominant hands. Testing equipment included calibrated FlexiForce 13 mm (0.5" circle) FSRs, 10 kΩ 1/4 W resistors, an Arduino Pro Mini 5.0 V-compatible board, an ESP-01 kit, an Arduino Uno R3-compatible board, a 1 m USB A-B cable, an FTDI FT232 mini USB-to-serial adapter, SIL 40 inline connectors, ribbon cable with male header pins (female-to-female and male-to-female), two gloves, glue to attach the FSRs to the gloves, and the Arduino software downloaded on a laptop. Grip strength measurements with a Jamar dynamometer were taken prior to testing and after every 25 daily tasks to avoid fatigue and ensure reliability in testing. 
Covid-19 precautions included wearing face masks at all times, screening questionnaires, temperatures taken, wearing surgical gloves before putting on the testing gloves 1.5 metres long wires attaching the FSR to the Arduino to maintain social distance. Findings Predominant grasps observed during 105 tasks included, adducted thumb (17), lateral tripod (10), prismatic three fingers (12), small diameter (9), prismatic two fingers (9), medium wrap (7), fixed hook (5), sphere four fingers (4), palmar (4), parallel extension (4), index finger extension (3), distal (3), power sphere (2), tripod (2), quadpod (2), prismatic four fingers (2), lateral (2), large-diameter (2), ventral (2), precision sphere (1), palmar pinch (1), light tool (1), inferior pincher (1), and writing tripod (1). Range of forces applied per category, personal care (1-25N), transport and moving around (1-9 N), home environment and inside (1-41N), gardening and outside (1-26.5N), and office (1-20N). Conclusion Scientifically measurements of finger forces with careful consideration to types of grasps used in daily tasks should guide rehabilitation practices and robotic design to ensure a return to the full participation of the individual into the community.Keywords: activities of daily living (ADL), Covid-19, force-sensing resistors, grasps, hand forces
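The acquisition chain in this methodology (FSRs in a voltage divider with 10k resistors, read by an Arduino ADC) can be sketched as a conversion from raw ADC counts to newtons. The divider layout, the 5.0 V reference, and the calibration constant `k` below are illustrative assumptions for a sketch, not the authors' published calibration.

```python
# Hedged sketch: convert a 10-bit Arduino ADC reading from a force-sensing
# resistor (FSR) in a voltage divider into an approximate force in newtons.
# Divider orientation, reference voltage, and the F ~ k/R response model are
# illustrative assumptions; real values come from per-sensor calibration.

V_REF = 5.0         # Arduino supply/reference voltage (V)
R_FIXED = 10_000.0  # fixed divider resistor (ohms), matching the 10k parts listed
ADC_MAX = 1023      # 10-bit ADC full scale

def adc_to_force(adc_counts, k=1.0e5):
    """Map ADC counts -> divider voltage -> FSR resistance -> force.

    k (ohm*newton) is a single-point calibration constant for the roughly
    inverse resistance-force response assumed here; it must be fitted
    against a reference instrument.
    """
    if adc_counts <= 0:
        return 0.0                                 # no measurable load
    v_out = adc_counts * V_REF / ADC_MAX           # voltage across R_FIXED
    if v_out >= V_REF:
        v_out = V_REF - 1e-6                       # guard against division by zero
    r_fsr = R_FIXED * (V_REF - v_out) / v_out      # FSR resistance (ohms)
    return k / r_fsr                               # F ~ k / R (assumed model)
```

In practice each sensor would be calibrated individually against a reference such as the Jamar dynamometer mentioned above, since FSR response varies between units and drifts with use.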
Procedia PDF Downloads 192
40 Pivoting to Fortify our Digital Self: Revealing the Need for Personal Cyber Insurance
Authors: Richard McGregor, Carmen Reaiche, Stephen Boyle
Abstract:
Cyber threats are a relatively recent phenomenon and offer cyber insurers a dynamic and intelligent peril. As individuals en masse become increasingly digitally dependent, Personal Cyber Insurance (PCI) offers an attractive option to mitigate cyber risk at a personal level. This abstract proposes a literature review that conceptualises a framework for siting PCI within the context of cyberspace. The lack of empirical research within this domain demonstrates an immediate need to define the scope of PCI to allow cyber insurers to understand personal cyber risk threats and vectors, customer awareness, capabilities, and their associated needs. Additionally, this will allow cyber insurers to conceptualise appropriate frameworks for the effective management and distribution of PCI products and services within a landscape often incongruent with the risk attributes commonly associated with traditional personal-lines insurance products. Cyberspace has significantly improved the quality of social connectivity and productivity during past decades and has allowed an enormous capability uplift in information sharing and communication between people and communities. Conversely, personal digital dependency furnishes ample opportunities for adverse cyber events such as data breaches and cyber-attacks, thus introducing a continuous and insidious threat of omnipresent cyber risk – particularly since the advent of the COVID-19 pandemic and the widespread adoption of 'work-from-home' practices. Recognition of escalating interdependencies, vulnerabilities and inadequate personal cyber behaviours has prompted efforts by businesses and individuals alike to investigate strategies and tactics to mitigate cyber risk – of which cyber insurance is a viable, cost-effective option.
It is argued that, ceteris paribus, the nature of cyberspace intrinsically provides characteristic peculiarities that pose significant and bespoke challenges to cyber insurers, often incongruent with the risk attributes commonly associated with traditional personal-lines insurance products. These challenges include (inter alia) a paucity of historical claim/loss data for underwriting and pricing purposes, interdependencies of cyber architecture promoting high correlation of cyber risk, difficulties in evaluating cyber risk, intangibility of risk assets (such as data and reputation), lack of standardisation across the industry, high and undetermined tail risks, and moral hazard. This study proposes a thematic overview of the literature deemed necessary to conceptualise the challenges to issuing personal cyber coverage. There is an evident absence of empirical research appertaining to PCI and the design of operational business models for this business domain, especially qualitative initiatives that (1) attempt to define the scope of the peril, (2) secure an understanding of the needs of both cyber insurer and customer, and (3) identify elements pivotal to effective management and profitable distribution of PCI – leading the author to argue that the traditional general-insurance customer journey and business model are ill-suited to the lineaments of cyberspace. The findings of the review confirm significant gaps in contemporary research within the domain of personal cyber insurance. Keywords: cyberspace, personal cyber risk, personal cyber insurance, customer journey, business model
Procedia PDF Downloads 105
39 A Comparative Evaluation of Cognitive Load Management: Case Study of Postgraduate Business Students
Authors: Kavita Goel, Donald Winchester
Abstract:
In a world of information overload and work complexities, academics often struggle to create an online instructional environment that enables efficient and effective student learning. Research has established that students' learning styles differ; some learn faster when taught using audio and visual methods. Attributes like prior knowledge and mental effort affect their learning. Cognitive load theory holds that learners have limited processing capacity. Cognitive load depends on the learner's prior knowledge, the complexity of content and tasks, and the instructional environment. Hence, the proper allocation of cognitive resources is critical for students' learning. Consequently, a lecturer needs to understand the limits and strengths of human learning processes and the various learning styles of students, and accommodate these requirements while designing online assessments. As acknowledged in the cognitive load theory literature, visual and auditory explanations of worked examples potentially reduce cognitive load (effort) and facilitate learning better than conventional sequential-text problem solving, helping learners utilize both subcomponents of their working memory. Instructional design changes were introduced at the case site for the delivery of the postgraduate business subjects. To make effective use of auditory and visual modalities, video-recorded lectures and key-concept webinars were delivered to students. Videos were prepared to free students' limited working memory from irrelevant mental effort, as all elements on a visual screen can be viewed simultaneously and processed quickly, facilitating greater psychological processing efficiency. Most case study students in the postgraduate programs are adults, working full-time at higher management levels and studying part-time. Their learning styles and needs differ from those of other tertiary students.
The purpose of the audio and visual interventions was to lower the students' cognitive load and provide an online environment supportive of their efficient learning. These changes were expected to favourably impact students' learning experience, academic performance, and retention. This paper posits that these changes to instructional design help students integrate new knowledge into their long-term memory. A mixed-methods case study methodology was used in this investigation. Primary data were collected from interviews and surveys of students and academics. Secondary data were collected from the organisation's databases and reports. Some evidence was found that the academic performance of students improves when the new instructional design changes are introduced, although the effect was not statistically significant. However, the overall grade distribution of students' academic performance shifted higher, which suggests deeper understanding of the content. Feedback received from students identified that recorded webinars served as better learning aids than material with text alone, especially for more complex content. The recorded webinars on subject content and assessments give students the flexibility to access this material at any time, and repeatedly, which suits their learning styles. Visual and audio information enters students' working memory more effectively. Also, as each assessment included the application of the concepts, conceptual knowledge interacted with pre-existing schemas in long-term memory and lowered students' cognitive load. Keywords: cognitive load theory, learning style, instructional environment, working memory
Procedia PDF Downloads 147
38 Big Data Applications for Transportation Planning
Authors: Antonella Falanga, Armando Cartenì
Abstract:
"Big data" refers to extremely large and complex datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, and efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represents a transformative force reshaping industries worldwide. Its pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data has an impact across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment, and mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications span optimization of vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, and enhancement of overall transportation systems, as well as mitigation of pollutant emissions, contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments.
Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges: data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies that balance the benefits of big data against privacy, security, and data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data can enhance rational decision-making for mobility choices and are imperative for adeptly planning and allocating investments in transportation infrastructures and services. Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning
Procedia PDF Downloads 61
37 Childhood Sensory Sensitivity: A Potential Precursor to Borderline Personality Disorder
Authors: Valerie Porr, Sydney A. DeCaro
Abstract:
TARA for borderline personality disorder (BPD), an education and advocacy organization, helps families to compassionately and effectively deal with troubling BPD behaviors. Our psychoeducational programs focus on understanding the underlying neurobiological features of BPD and on evidence-based methodology integrating dialectical behavior therapy (DBT) and mentalization-based therapy (MBT), clarifying the inherent misunderstanding of BPD behaviors and improving family communication. TARA4BPD conducts online surveys, workshops, and topical webinars. For over 25 years, we have collected data from BPD helpline callers. This data drew our attention to particular childhood idiosyncrasies that seem to characterize many of the children who later met the criteria for BPD. The idiosyncrasies we observed, heightened sensory sensitivity and hypervigilance, were included in Adolph Stern's 1938 definition of "borderline." This aspect of BPD has not been prioritized by personality disorder researchers, who are presently focused on emotion processing and social cognition in BPD. Parents described sleep reversal problems in infants who, early on, seem to exhibit dysregulation of circadian rhythm. Families describe children as supersensitive to sensory sensations, such as specific sounds, a heightened sense of smell and taste, the textures of foods, and an inability to tolerate various fabric textures (e.g., seams in socks). They also exhibit high sensitivity to particular words and voice tones. Many have alexithymia and dyslexia. These children are either hypo- or hypersensitive to sensory sensations, including pain. Many suffer from fibromyalgia. BPD reactions to pain have been studied (C. Schmahl), and the studies confirm the existence of hyper- and hypo-reactions to pain stimuli in people with BPD. To date, there is little or no data regarding what constitutes a normative range of sensitivity in infants and children.
Many parents reported that their children were tested or treated for sensory processing disorder (SPD), learning disorders, and ADHD. SPD is not included in the DSM and is treated by occupational therapists. The overwhelming anecdotal data from thousands of parents of children who later met criteria for BPD led TARA4BPD to develop a sensitivity survey to gather evidence of the possible role of early sensory perception problems as a precursor to BPD, hopefully initiating new directions in BPD research. At present, the research community seems unaware of the role supersensory sensitivity might play as an early indicator of BPD. Parents' observations of childhood sensitivity obtained through family interviews and the results of an extensive online survey on sensory responses across various ages of development will be presented. People with BPD suffer from a sense of isolation and otherness that often results in later interpersonal difficulties. Early identification of supersensitive children, while brain circuits are developing, might curb the development of social interaction deficits such as rejection sensitivity, self-referential processing, and negative bias, hallmarks of BPD, ultimately minimizing the maladaptive methods of coping with distress that characterize BPD. Family experiences are an untapped resource for BPD research. It is hoped that this data will give family observations the critical credibility to inform future treatment and research directions. Keywords: alexithymia, dyslexia, hypersensitivity, sensory processing disorder
Procedia PDF Downloads 203
36 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit
Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic
Abstract:
Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use today in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single-energy peak and, as such, could potentially compromise the analysis and produce erroneous results. Naturally, this feature is of great importance when the identification of radionuclides, as well as their activity concentrations, is being carried out and high precision comes as a necessity. In measurements of this nature, in order to reproduce good and trustworthy results, one has to have initially performed an adequate full-energy peak (FEP) efficiency calibration of the equipment. However, experimental determination of the response, i.e., the efficiency curve for a given detector-sample configuration and its geometry, is not always easy and requires a certain set of reference calibration sources in order to account for and cover the broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned towards the application of different software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and described specifications of the detector.
Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if it is not properly taken into account. In this study, the optimisation method for two HPGe detectors through the implementation of the Geant4 toolkit is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended range detector (XtRa HPGe, CANBERRA) and a broad energy range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The acquired results for both detectors displayed good agreement with the experimental data, falling under an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector within the energy ranges of 59.4−1836.1 keV and 59.4−1212.9 keV, respectively. Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method
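The verification step described above, comparing simulated FEP efficiencies with values measured using point-like sources, amounts to computing per-line relative deviations and an average agreement figure. A minimal sketch follows; the efficiency values below are illustrative placeholders, not the XtRa or BEGe calibration data.

```python
# Hedged sketch of the verification step: compare simulated full-energy-peak
# (FEP) efficiencies against experimental values obtained with point-like
# sources, reporting the signed relative deviation per gamma line.
# The efficiency values are illustrative placeholders only.

def relative_deviation(sim, exp):
    """Signed relative deviation (%) of simulated vs experimental efficiency."""
    return 100.0 * (sim - exp) / exp

# energy (keV) -> (simulated FEP efficiency, experimental FEP efficiency)
lines = {
    59.5:   (0.0410, 0.0402),
    661.7:  (0.0151, 0.0149),
    1332.5: (0.0082, 0.0084),
}

deviations = {e: relative_deviation(s, x) for e, (s, x) in lines.items()}
mean_abs_dev = sum(abs(d) for d in deviations.values()) / len(deviations)
```

In a real optimisation loop, the detector model parameters (dead layer thickness, crystal-to-window distance, etc.) would be adjusted until such deviations fall within the measurement uncertainties.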
Procedia PDF Downloads 123
35 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction
Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal
Abstract:
Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes, each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, in time, and for different variables from different models. This is the basic concept behind the multi-model superensemble, which comprises a training phase and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least-squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry higher skill than the simple ensemble mean, the bias-corrected ensemble mean, and the best model among the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, mean sea level pressure, etc., in this paper, the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability.
The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Center for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models among the participating members are selected at each grid point and for each forecast step in the training period. A multi-model superensemble trained using similar conditions is also discussed in the present study, based on the assumption that training with a similar type of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods available in the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been tested with the above-mentioned approaches. The comparison of these schemes against observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (viz. June to September) rainfall than the conventional multi-model approach and the member models. Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction
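The training and forecast phases described above — statistical weights obtained from a least-squares minimization via multiple regression, then applied to new member forecasts — can be sketched in a few lines of pure Python. The toy forecasts below are illustrative, not TIGGE data, and real superensembles typically regress bias-corrected anomalies rather than raw values.

```python
# Hedged sketch of the superensemble training/forecast phases: weights for
# each member model are found by least-squares multiple regression of past
# observations on past model forecasts (normal equations solved directly).
# The toy training data are illustrative, not real GCM output.

def solve(a, b):
    """Solve the linear system a x = b by Gaussian elimination with pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def train_weights(forecasts, obs):
    """forecasts: per time step, one value per member model; obs: the truth.

    Minimizes sum_t (obs_t - sum_k w_k * forecast_tk)^2 via the normal
    equations (F^T F) w = F^T y of multiple linear regression.
    """
    k = len(forecasts[0])
    ftf = [[sum(f[i] * f[j] for f in forecasts) for j in range(k)] for i in range(k)]
    fty = [sum(f[i] * o for f, o in zip(forecasts, obs)) for i in range(k)]
    return solve(ftf, fty)

def superensemble(weights, member_forecasts):
    """Forecast phase: weighted combination of the member forecasts."""
    return sum(w * f for w, f in zip(weights, member_forecasts))

# Toy training period where obs = 0.7*model1 + 0.3*model2 exactly
train_f = [[10.0, 2.0], [4.0, 8.0], [6.0, 6.0], [2.0, 12.0]]
train_o = [0.7 * a + 0.3 * b for a, b in train_f]
w = train_weights(train_f, train_o)       # recovers approximately [0.7, 0.3]
forecast = superensemble(w, [5.0, 5.0])   # weighted forecast for a new step
```

In a dynamical model selection variant, `train_weights` would be run per grid point and per lead time over only the better-performing members at that location.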
Procedia PDF Downloads 140
34 Single Crystal Growth in Floating-Zone Method and Properties of Spin Ladders: Quantum Magnets
Authors: Rabindranath Bag, Surjeet Singh
Abstract:
Materials in which the electrons are strongly correlated provide some of the most challenging and exciting problems in condensed matter physics today. After the discovery of high critical temperature superconductivity in layered, or two-dimensional, copper oxides, many physicists turned their attention to cuprates, leading to an upsurge of interest in the synthesis and physical properties of copper-oxide-based materials. The quest to understand the superconducting mechanism in high-temperature cuprates drew physicists' attention to somewhat simpler compounds consisting of spin chains or a one-dimensional lattice of coupled spins. Low-dimensional quantum magnets are of huge contemporary interest in the basic sciences as well as in emerging technologies such as quantum computing, quantum information theory, and heat management in microelectronic devices. The spin ladder is an example of a quasi-one-dimensional quantum magnet, providing a bridge between one- and two-dimensional materials. One example of a quasi-one-dimensional spin-ladder compound is Sr14Cu24O41, which exhibits a wealth of interesting and exciting physical phenomena in low-dimensional systems. Very recently, the ladder compound Sr14Cu24O41 was shown to exhibit long-distance quantum entanglement, crucial to quantum information theory. Also, it is well known that hole compensation in this material results in very high (metal-like) anisotropic thermal conductivity at room temperature. These observations suggest that Sr14Cu24O41 is a potential multifunctional material that invites further detailed investigation. Investigating these properties requires large, high-quality single crystals. However, these systems melt incongruently, which makes growing such crystals difficult. Hence, we use the TSFZ (Travelling Solvent Floating Zone) method to grow high-quality single crystals of these low-dimensional magnets.
Apart from this, Sr14Cu24O41 has a unique crystal structure (alternating stacks of planes containing edge-sharing CuO2 chains and planes containing two-leg Cu2O3 ladders, with intermediate Sr layers along the b-axis), which is also incommensurate in nature. It exhibits abundant physical phenomena such as spin dimerization, crystallization of charge holes, and charge density waves. Most research so far has focused on introducing defects at the A-site (Sr). Apart from A-site (Sr) doping, there are only a few studies in which B-site (Cu) doping of polycrystalline Sr14Cu24O41 has been discussed, the reason being the existence of two possible doping sites for Cu (the CuO2 chain and the Cu2O3 ladder). Therefore, in our present work, the crystals (pristine and Cu-site doped) were grown using the TSFZ method by tuning the growth parameters. Laue diffraction images, optical polarized microscopy, and Scanning Electron Microscopy (SEM) images confirm the quality of the grown crystals. Here, we report the single crystal growth, magnetic, and transport properties of Sr14Cu24O41 and its lightly doped variants (magnetic and non-magnetic) containing less than 1% of Co, Ni, Al and Zn impurities. Since any real system will have some amount of weak disorder, our studies on these ladder compounds with controlled dilute disorder are significant in the present context. Keywords: low-dimensional quantum magnets, single crystal, spin-ladder, TSFZ technique
Procedia PDF Downloads 275
33 Autologous Blood for Conjunctival Autograft Fixation in Primary Pterygium Surgery: a Systematic Review and Meta-Analysis
Authors: Mohamed Abdelmongy
Abstract:
Full author list (PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30277146): Hossam Zein, Ammar Ismail, Mohamed Abdelmongy, Sherif Elsherif, Ahmad Hassanen, Basma Muhammad, Fathy Assaf, Ahmed Elsehili, Ahmed Negida, Shin Yamane, Mohamed M. Abdel-Daim and Kazuaki Kadonosono. BACKGROUND: Pterygium is a benign ocular lesion characterized by triangular fibrovascular growth of conjunctival tissue over the cornea. Patients complain of the poor cosmetic appearance, ocular surface irritation, and decreased visual acuity if the pterygium is large enough to cause astigmatism or encroach on the pupil. The definitive treatment of pterygium is surgical removal; however, outcomes are compromised by recurrence. The aim of the current study is to systematically review the current literature to explore the efficacy and safety of fibrin glue, sutures, and autologous blood coagulum for conjunctival autograft fixation in primary pterygium surgery. OBJECTIVES: To assess the effectiveness of fibrin glue compared to sutures and autologous blood coagulum in conjunctival autografting for the surgical treatment of pterygium. METHODS: In preparing this manuscript, we followed the steps described in the Cochrane Handbook for Systematic Reviews of Interventions version 5.3 and reported it according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement guidelines.
We searched PubMed and Ovid (both through Medline), ISI Web of Science, and the Cochrane Central Register of Controlled Trials (CENTRAL) through January 2017, using the keywords "Pterygium AND (blood OR glue OR suture)". SELECTION CRITERIA: We included all randomized controlled trials (RCTs) that met the following criteria: 1) comparing autologous blood vs fibrin glue for conjunctival autograft fixation in primary pterygium surgery; 2) comparing autologous blood vs sutures for conjunctival autograft fixation in primary pterygium surgery. DATA COLLECTION AND ANALYSIS: Two review authors independently screened the search results, assessed trial quality, and extracted data using the standard methodological procedures expected by Cochrane. The extracted data included A) study design, sample size, and main findings; B) baseline characteristics of the patients included in this review, including their age, sex, pterygium site and grade, and graft size; and C) study outcomes, comprising 1) the primary outcome: recurrence rate, and 2) secondary outcomes: graft stability (graft retraction, graft displacement), operation time (min), and postoperative symptoms (pain, discomfort, foreign body sensation, tearing). MAIN RESULTS: We included 7 RCTs, and the review included 662 eyes (blood: 293; glue: 198; sutures: 171). We assessed the primary outcome (recurrence rate) and the secondary outcomes (graft retraction, graft displacement, operation time, and postoperative symptoms). CONCLUSIONS: Autologous blood for conjunctival autograft fixation in pterygium surgery is associated with lower graft stability than fibrin glue or sutures. It was not inferior to fibrin glue or sutures regarding recurrence rate. The overall quality of evidence is low. Further well-designed RCTs are needed to fully explore the efficacy of this new technique. Keywords: pterygium, autograft, ophthalmology, cornea
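Pooling a binary outcome such as recurrence rate across RCTs, as this review does, is typically done with a fixed-effect estimator such as the Mantel-Haenszel risk ratio. A minimal sketch follows; the 2x2 counts are hypothetical placeholders, not the review's actual data.

```python
# Hedged sketch of a fixed-effect Mantel-Haenszel pooled risk ratio, the kind
# of pooling used when meta-analysing recurrence rates across trials.
# The per-trial counts below are hypothetical placeholders only.

def mantel_haenszel_rr(studies):
    """Pooled Mantel-Haenszel risk ratio.

    studies: list of (events_trt, n_trt, events_ctl, n_ctl) per trial,
    e.g. recurrences after autologous-blood fixation vs a comparator.
    """
    num = den = 0.0
    for a, n1, c, n0 in studies:
        big_n = n1 + n0
        num += a * n0 / big_n  # treatment events weighted by control arm size
        den += c * n1 / big_n  # control events weighted by treatment arm size
    return num / den

# Hypothetical per-trial counts: (recurrences_blood, eyes_blood,
#                                 recurrences_comparator, eyes_comparator)
studies = [(3, 40, 4, 42), (2, 35, 2, 33), (5, 50, 6, 48)]
rr = mantel_haenszel_rr(studies)  # RR < 1 would favour the blood arm here
```

A full analysis would add a confidence interval and a heterogeneity statistic (e.g., I²) before drawing conclusions, as standard meta-analysis software does.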
Procedia PDF Downloads 161
32 SWOT Analysis on the Prospects of Carob Use in Human Nutrition: Crete, Greece
Authors: Georgios A. Fragkiadakis, Antonia Psaroudaki, Theodora Mouratidou, Eirini Sfakianaki
Abstract:
Research: Within the project "Actions for the optimal utilization of the potential of carob in the Region of Crete" which is financed-supervised by the Region, with collaboration of Crete University and Hellenic Mediterranean University, a SWOT (strengths, weaknesses, opportunities, threats) survey was carried out, to evaluate the prospects of carob in human nutrition, in Crete. Results and conclusions: 1). Strengths: There exists a local production of carob for human consumption, based on international reports, and local-product reports. The data on products in the market (over 100 brands of carob food), indicates a sufficiency of carob materials offered in Crete. The variety of carob food products retailed in Crete indicates a strong demand-production-consumption trend. There is a stable number (core) of businesses that invest significantly (Creta carob, Cretan mills, etc.). The great majority of the relevant food stores (bakery, confectionary etc.) do offer carob products. The presence of carob products produced in Crete is strong on the internet (over 20 main professionally designed websites). The promotion of the carob food-products is based on their variety and on a few historical elements connected with the Cretan diet. 2). Weaknesses: The international prices for carob seed affect the sector; the seed had an international price of €20 per kg in 2021-22 and fell to €8 in 2022, causing losses to carob traders. The local producers do not sort the carobs they deliver for processing, causing 30-40% losses of the product in the industry. The occasional high price triggers the collection of degraded raw material; large losses may emerge due to the action of insects. There are many carob trees whose fruits are not collected, e.g. in Apokoronas, Chania. The nutritional and commercial value of the wild carob fruits is very low. 
Carob tree production is recorded by the Greek statistical services under "other cultures" together with prickly pear, creating difficulties in retrieving data. The percentage of carob used for human nutrition, as opposed to animal feed, is not known. The exact imports of carob are not closely monitored. We have no data on the recycling of carob by-products in Crete. 3). Opportunities: The development of a culture of respect in the carob trade may improve professional relations in the sector. Monitoring the carob market and connecting production with retail and industry needs may allow better market stability. Raw-material evaluation procedures may be implemented to maintain the carob value chain. The state agricultural services may become further involved in protecting carob health. Educating farmers on carob cultivation and management can improve the quality of the product. The selection of productive local varieties may improve the sustainability of the culture. Connecting the consumption of carob with health-food products may create added value in the sector. The presence and extent of wild carob trees in Crete represent, potentially, a target for grafting. 4). Threats: The annual fluctuation of carob yield challenges the programming of local food-industry activities. Carob is also a forest species; there is a danger of wrongly classifying crops as forest areas where land ownership is not clear.
Keywords: human nutrition, carob food, SWOT analysis, Crete, Greece
31 The Procedural Sedation Checklist Manifesto, Emergency Department, Jersey General Hospital
Authors: Jerome Dalphinis, Vishal Patel
Abstract:
The Bailiwick of Jersey is an island British Crown Dependency situated off the coast of France. Jersey General Hospital’s emergency department sees approximately 40,000 patients a year. It is outside the NHS, with secondary care being free at the point of care. Sedation is a continuum which extends from a normal conscious level to being fully unresponsive. Procedural sedation produces a minimally depressed level of consciousness in which the patient retains the ability to maintain an airway and responds appropriately to physical stimulation. Its goals are to improve patient comfort and tolerance of the procedure and to alleviate associated anxiety. Indications can be stratified by acuity: emergency (cardioversion for life-threatening dysrhythmia) and urgency (joint reduction). In the emergency department, this is most often achieved using a combination of opioids and benzodiazepines. Some departments also use ketamine to produce dissociative sedation, a cataleptic state of profound analgesia and amnesia. The response to pharmacological agents is highly individual, and the drugs used occasionally have unpredictable pharmacokinetics and pharmacodynamics, which can result in progression between levels of sedation irrespective of the intention. Therefore, practitioners must be able to ‘rescue’ patients from deeper sedation. These practitioners need to be senior clinicians with advanced airway skills (AAS) training. If incorrectly undertaken, procedural sedation can lead to adverse effects such as dangerous hypoxia and unintended loss of consciousness; studies by the National Confidential Enquiry into Patient Outcome and Death (NCEPOD) have reported avoidable deaths. The Royal College of Emergency Medicine, UK (RCEM) released updated ‘Safe Sedation of Adults in the Emergency Department’ guidance in 2017, detailing a series of standards for staff competencies and for the environment and equipment required for each target sedation depth.
The emergency department in Jersey undertook an audit in 2018 to assess its current practice. It showed gaps in clinical competency and the need for uniform care and improved documentation. This spurred the development of a checklist incorporating the above RCEM standards, including contraindications for procedural sedation and difficult-airway assessment. The checklist was approved following discussion with the relevant heads of department and the patient safety directorates. A second audit was then carried out in 2019 with 17 completed checklists (11 relocations of joints, 6 cardioversions). Data were obtained from the controlled resuscitation drugs book containing documented use of ketamine, alfentanil, and fentanyl. TrakCare, the patient electronic record system, was then referenced to obtain further information. The results showed dramatic improvement compared to 2018 and have been subdivided into six categories: pre-procedure assessment recording of significant medical history and ASA grade (a two-fold increase), informed consent (100% documentation), pre-oxygenation (88%), staff (90% were AAS practitioners) and monitoring (92% use of non-invasive blood pressure, pulse oximetry, capnography, and cardiac rhythm monitoring) during the procedure, and discharge instructions including the documented return of normal vitals and consciousness (82%). This procedural sedation checklist is a safe intervention that identifies pertinent information about the patient and provides a standardised checklist for the delivery of a gold standard of care.
Keywords: advanced airway skills, checklist, procedural sedation, resuscitation
30 Supporting 'Vulnerable' Students to Complete Their Studies During the Economic Crisis in Greece: The Umbrella Program of International Hellenic University
Authors: Rigas Kotsakis, Nikolaos Tsigilis, Vasilis Grammatikopoulos, Evridiki Zachopoulou
Abstract:
During the last decade, Greece has faced an unprecedented financial crisis, affecting various aspects and functions of Higher Education. Besides the restricted funding of academic institutions, students and their families encountered economic difficulties that undoubtedly influenced the effective completion of their studies. In this context, a fairly large number of students on the Alexander campus of International Hellenic University (IHU) delay, interrupt, or even abandon their studies, especially when they come from low-income families, belong to sensitive social or special-needs groups, have different cultural origins, etc. For this reason, a European project named “Umbrella” was initiated, aiming to provide the necessary psychological support and counseling, especially to disadvantaged students, towards the completion of their studies. To this end, a network of academic members (academic staff and students) from IHU, named iMentor, was implicated in different roles. Specifically, experienced academic staff trained students to serve as intermediate links for the integration and educational support of students who fall into the aforementioned sensitive social groups and face problems in completing their studies. The main idea of the project rests upon its person-centered character, which facilitates direct student-to-student communication without the intervention of the teaching staff. The backbone of the iMentor network are senior students who face no problems in their academic life and volunteered for this project. It should be noted that the Umbrella structure provides substantial and ethical rewards for their engagement. In this context, a well-defined, stringent methodology was implemented for evaluating the extent of the problem at IHU and detecting the profile of “candidate” disadvantaged students.
The first phase included two steps: (a) data collection and (b) data cleansing/preprocessing. The first step involved collecting data from the Secretary Services of all Schools in IHU, from 1980 to 2019, which resulted in 96,418 records. The data set included the School name, the semester of studies, the student enrollment criterion, the nationality, and the graduation year or the current, up-to-date academic state (still studying, delayed, dropped out, etc.). The second step of the employed methodology involved data cleansing/preprocessing because of the existence of “noisy” data, missing and erroneous values, etc. Furthermore, several assumptions and grouping actions were imposed to achieve data homogeneity and an easy-to-interpret subsequent statistical analysis. Specifically, the 40-year recording period was limited to the last 15 years (2004-2019), since in 2004 the Greek Technological Institutions evolved into Higher Education Universities, leading to a stable and unified framework of graduate studies. In addition, the data concerning active students were excluded from the analysis, since the initial processing effort focused on detecting the factors/variables that differentiated graduated from deregistered students. The final working dataset included 21,432 records with only two categories of students: those who hold a degree and those who abandoned their studies. Findings of the first phase are presented across faculties and further discussed.
Keywords: higher education, students support, economic crisis, mentoring
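The record filtering and grouping described above can be sketched in Python with pandas; the column names and values below are hypothetical stand-ins for the registry fields described, not the actual IHU data:

```python
import pandas as pd

# Hypothetical student records resembling the registry fields described above
records = pd.DataFrame({
    "school": ["Nursing", "Nursing", "Business", "Business", "Nursing"],
    "enroll_year": [1998, 2005, 2010, 2016, 2018],
    "status": ["graduated", "graduated", "dropped", "graduated", "studying"],
})

# Step 1: limit the 40-year span to 2004-2019, when the unified framework applies
recent = records[records["enroll_year"].between(2004, 2019)]

# Step 2: exclude still-active students; keep only graduated vs dropped-out
working = recent[recent["status"].isin(["graduated", "dropped"])]

# Group counts per school and outcome, for the cross-faculty comparison
counts = working.groupby(["school", "status"]).size()
print(len(working))
```

The same two filters, applied to the full 96,418-record extract, would produce the 21,432-record working dataset with its two outcome categories.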
29 Pulmonary Complication of Chronic Liver Disease and the Challenges Identifying and Managing Three Patients
Authors: Aidan Ryan, Nahima Miah, Sahaj Kaur, Imogen Sutherland, Mohamed Saleh
Abstract:
Pulmonary symptoms are a common presentation to the emergency department. Due to a lack of understanding of the underlying pathophysiology, chronic liver disease is not often considered a cause of dyspnea. We present three patients who were admitted with significant respiratory distress secondary to hepatopulmonary syndrome, portopulmonary hypertension, and hepatic hydrothorax. The first is a 27-year-old male with a 6-month history of progressive dyspnea. The patient developed severe type 1 respiratory failure with a PaO₂ of 6.3 kPa and was escalated to critical care, where he was managed with non-invasive ventilation to maintain oxygen saturation. He had an agitated saline contrast echocardiogram, which showed the presence of a possible shunt. A CT angiogram revealed significant liver cirrhosis, portal hypertension, and large paraesophageal varices. Ultrasound of the abdomen showed a coarse liver echo pattern and an enlarged spleen. Along with these imaging findings, his biochemistry demonstrated impaired synthetic liver function, with an elevated international normalized ratio (INR) of 1.4 and hypoalbuminaemia of 28 g/L. The patient was then transferred to a tertiary center for further management. Further investigations confirmed a shunt of 56%, and liver biopsy confirmed cirrhosis suggestive of alpha-1-antitrypsin deficiency. The findings were consistent with a diagnosis of hepatopulmonary syndrome, and the patient is awaiting a liver transplant. The second patient is a 56-year-old male with a 12-month history of worsening dyspnoea, jaundice, and confusion. His medical history included liver cirrhosis, portal hypertension, and grade 1 oesophageal varices secondary to significant alcohol excess. On admission, he developed type 1 respiratory failure with a PaO₂ of 6.8 kPa requiring 10 L of oxygen. CT pulmonary angiogram was negative for pulmonary embolism but showed evidence of chronic pulmonary hypertension, liver cirrhosis, and portal hypertension.
An echocardiogram revealed a grossly dilated right heart with reduced function, pulmonary and tricuspid regurgitation, and pulmonary artery pressures estimated at 78 mmHg. His biochemical markers showed impaired synthetic liver function, with an INR of 3.2, albumin of 29 g/L, and raised bilirubin of 148 mg/dL. During his long admission, he was managed with diuretics with little improvement. After three weeks, he was diagnosed with portopulmonary hypertension and was commenced on terlipressin. This resulted in successful weaning off oxygen, and he was discharged home. The third patient is a 61-year-old male who presented to the local ambulatory care unit for therapeutic paracentesis on a background of decompensated liver cirrhosis. On presentation, he complained of a 2-day history of worsening dyspnoea and a productive cough. Chest x-ray showed a large pleural effusion, increasing in size over the previous eight months, and his abdomen was visibly distended with ascitic fluid. Unfortunately, the patient deteriorated, developing a larger effusion along with an increase in oxygen demand, and passed away. In the absence of underlying cardiorespiratory disease, with a persistent pleural effusion on a background of decompensated cirrhosis, he was diagnosed with hepatic hydrothorax. While each patient presented with dyspnoea, the cause and underlying pathophysiology differ significantly from case to case. By describing these complications, we hope to improve awareness and aid prompt and accurate diagnosis, vital for improving outcomes.
Keywords: dyspnea, hepatic hydrothorax, hepatopulmonary syndrome, portopulmonary hypertension
28 Polysaccharide Polyelectrolyte Complexation: An Engineering Strategy for the Development of Commercially Viable Sustainable Materials
Authors: Jeffrey M. Catchmark, Parisa Nazema, Caini Chen, Wei-Shu Lin
Abstract:
Sustainable and environmentally compatible materials are needed for a wide variety of high-volume commercial applications. Current synthetic materials such as plastics, fluorochemicals (such as PFAS), adhesives, and resins, in the form of sheets, laminates, coatings, foams, fibers, molded parts, and composites, are used for countless products such as packaging, food handling, textiles, biomedical, construction, automotive, and general consumer devices. Synthetic materials offer distinct performance advantages, including stability, durability, and low cost. These attributes are associated with the physical and chemical properties of these materials which, once formed, can be resistant to water, oils, solvents, harsh chemicals, salt, temperature, impact, wear, and microbial degradation. These advantages become disadvantages at the end of life of these products, which generate significant land and water pollution when disposed of; few are recycled. Agriculturally and biologically derived polymers offer the potential of remediating these environmental and life-cycle difficulties but face numerous challenges, including feedstock supply, scalability, performance, and cost. Such polymers include microbial biopolymers like polyhydroxyalkanoates and polyhydroxybutyrate; polymers produced using biomonomer chemical synthesis, like polylactic acid; proteins like soy, collagen, and casein; lipids like waxes; and polysaccharides like cellulose and starch. Although these materials, and combinations thereof, exhibit the potential to meet some of the performance needs of various commercial applications, only cellulose and starch have both the feedstock volume and the cost to compete with petroleum-derived materials. Over 430 million tons of plastic are produced each year, and plastics like low-density polyethylene cost ~$1500 to $1800 per ton.
Over 400 million tons of cellulose and over 100 million tons of starch are produced each year at a volume cost as low as ~$500 to $1000 per ton, with the capability of increased production. Celluloses and starches, however, are hygroscopic materials that do not exhibit the needed performance in most applications. They can be chemically modified to carry positive and negative surface charges, and such modified versions are used in papermaking, foods, and cosmetics. Although these modified polysaccharides exhibit the same performance limitations, recent research has shown that composite materials comprised of cationic and anionic polysaccharides in polyelectrolyte complexation exhibit significantly improved performance, including stability in diverse environments. Moreover, starches with added plasticizers can exhibit thermoplasticity, presenting the possibility of improved thermoplastic starches when comprised of starches in polyelectrolyte complexation. In this work, the potential for numerous volume commercial products based on polysaccharide polyelectrolyte complexes (PPCs) will be discussed, including the engineering design strategy used to develop them. Research results will be detailed, including the development and demonstration of starch PPC compositions for paper coatings to replace PFAS; adhesives; foams for packaging, insulation, and biomedical applications; and thermoplastic starches. In addition, efforts to demonstrate the potential for volume manufacturing with industrial partners will be discussed.
Keywords: biomaterials engineering, commercial materials, polysaccharides, sustainable materials
27 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus
Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert
Abstract:
Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase to the planning and execution phases to the maintenance and lifecycle-management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings don’t have a BIM model. Creating a compatible BIM for existing buildings is very challenging: it requires special equipment for data capture and effort to convert these data into a BIM model. The main difficulties in such projects are defining the data needed, the level of development (LOD), and the methodology to be adopted. Beyond managing information for an existing building, studying the impact of the built environment is a challenging topic, so integrating the existing terrain that surrounds buildings into the digital model is essential for running simulations such as flood simulation, energy simulation, etc. Making a replica of the physical model and updating its information in real time to obtain its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points, based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information related to the type of pavement materials, the types and heights of vegetation, and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used in order to provide a 3D BIM model of the site and the existing buildings, based on the case study of the École Spéciale des Travaux Publics (ESTP Paris) school of engineering campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square meters and a height between 50 and 68 meters.
In this work, the campus precise levelling grid is computed according to the NGF-IGN69 altimetric system, and the grid control points according to the French RGF93 (Réseau Géodésique Français) – Lambert 93 system, with different methods: (i) land topographic surveying using a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the 4 base corners of each building, etc. Once the input data are identified, the digital model of each building is produced, and the DTM is also modeled. The process of altimetric determination is complex and requires effort to collect and analyze multiple data formats. Since many technologies can be used to produce digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC), and ReViT (RVT) will be generated. Checking the interoperability between BIM models is very important. In this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.
Keywords: building information modeling, digital terrain model, existing buildings, interoperability
26 Quality in Healthcare: An Autism-Friendly Hospital Emergency Waiting Room
Authors: Elena Bellini, Daniele Mugnaini, Michele Boschetto
Abstract:
People with an Autistic Spectrum Disorder and an Intellectual Disability who need to attend a hospital emergency waiting room frequently present high levels of discomfort and challenging behaviors due to stress-related hyperarousal, sensory sensitivity, novelty anxiety, and communication and self-regulation difficulties. Increased agitation and acting out also disturb the diagnostic and therapeutic processes and the emergency-room climate. Architectural design disciplines aimed at reducing distress in hospitals or creating autism-friendly environments are called upon to find effective answers to this particular need. A growing number of researchers are considering the physical environment as an important point of intervention for people with autism. It has been shown that providing the right setting can help enhance confidence and self-esteem and can have a profound impact on their health and wellbeing. Environmental psychology has evaluated the perceived quality of care, looking at the design of hospital rooms, paths and circulation, waiting rooms, services, and devices. Furthermore, many studies have investigated the influence of the hospital environment on patients, in terms of stress reduction and the speed of therapeutic intervention, but also on health professionals and their work. Several services around the world are organizing autism-friendly hospital environments, involving both the architecture and specific staff training. In Italy, the association Spes contra spem promoted and published, in 2013, the ‘Chart of disabled people in the hospital’. It stipulates that disabled people should have equal rights to accessible and high-quality care. There are a few Italian examples of therapeutic programmes for autistic people, such as the Dama project in Milan and the recent experience of the Children and Autism Foundation in Pordenone. Careggi’s emergency waiting room in Florence has been built to meet this challenge.
This research project stems from a collaboration between the technical staff of Careggi Hospital, the Center for Autism PAMAPI, and architects with expertise in sensory environments. A focus-group methodology involved architects, psychologists, and professionals in transdisciplinary research centered on the links between spatial characteristics and the clinical state of people with ASD. The relationship between architectural space and quality of life is studied in order to pay maximum attention to users’ needs and to support the medical staff in their work through a specific training program. The result of this research is a set of criteria used to design the emergency waiting room, which will be illustrated. A protected room with a clear spatial design maximizes comprehension and predictability. The multisensory environment is intended to help sensory integration and relaxation. Visual communication through an iPad allows an anticipated understanding of medical procedures, and a specific technological system supports requests, choices, and self-determination in order to fit sensory stimulation to personal preferences, especially for hypo- and hypersensitive people. All these characteristics should ensure better regulation of arousal and fewer behavior problems, improving treatment accessibility, safety, and effectiveness. First results on patient-satisfaction levels will be presented.
Keywords: accessibility of care, autism-friendly architecture, personalized therapeutic process, sensory environment
25 The Impact of Supporting Productive Struggle in Learning Mathematics: A Quasi-Experimental Study in High School Algebra Classes
Authors: Sumeyra Karatas, Veysel Karatas, Reyhan Safak, Gamze Bulut-Ozturk, Ozgul Kartal
Abstract:
Productive struggle entails a student's cognitive exertion to comprehend mathematical concepts and uncover solutions that are not immediately apparent. The significance of productive struggle in learning mathematics is accentuated by influential educational theorists, who emphasize its necessity for learning mathematics with understanding. Consequently, supporting productive struggle in learning mathematics is recognized as a high-leverage and effective mathematics teaching practice. In this study, the investigation into the role of productive struggle in learning mathematics led to the development of a comprehensive rubric for productive-struggle pedagogy through an exhaustive literature review. The rubric consists of eight primary criteria and 37 sub-criteria, providing a detailed description of the teacher actions and pedagogical choices that foster students' productive struggle. These criteria encompass various pedagogical aspects, including task design, tool implementation, allowing time for struggle, posing questions, scaffolding, handling mistakes, acknowledging efforts, and facilitating discussion/feedback. Utilizing this rubric, a team of researchers and teachers designed eight 90-minute lesson plans employing a productive-struggle pedagogy for a two-week unit on solving systems of linear equations. Simultaneously, another set of eight lesson plans on the same topic, featuring identical content and problems but employing a traditional lecture-and-practice model, was designed by the same team. The objective was to assess the impact of supporting productive struggle on students' mathematics learning, defined by the strands of mathematical proficiency. This quasi-experimental study compares the control group, which received traditional lecture-and-practice instruction, with the treatment group, which experienced the productive-struggle pedagogy.
Sixty-six 10th- and 11th-grade students from two algebra classes, taught by the same teacher at a high school, underwent either the productive-struggle pedagogy or the lecture-and-practice approach over eight 90-minute class sessions across two weeks. To measure students' learning, an assessment was created and validated by a team of researchers and teachers. It comprised seven open-response problems assessing the strands of mathematical proficiency on the topic: procedural and conceptual understanding, strategic competence, and adaptive reasoning. The test was administered at the beginning and end of the two weeks as pre- and post-tests. Students' solutions were scored using an established rubric, subjected to expert validation and an inter-rater reliability process involving multiple criteria for each problem based on their steps and procedures. An analysis of covariance (ANCOVA) was conducted to examine the differences between the control group, which received traditional pedagogy, and the treatment group, exposed to the productive-struggle pedagogy, on the post-test scores while controlling for the pre-test. The results indicated a significant effect of treatment on post-test scores for procedural understanding (F(2, 63) = 10.47, p < .001), strategic competence (F(2, 63) = 9.92, p < .001), adaptive reasoning (F(2, 63) = 10.69, p < .001), and conceptual understanding (F(2, 63) = 10.06, p < .001), controlling for pre-test scores. This demonstrates the positive impact of supporting productive struggle in learning mathematics. In conclusion, the results revealed the significance of the role of productive struggle in learning mathematics. The study further explored the practical application of productive struggle through the development of a comprehensive rubric describing the pedagogy of supporting productive struggle.
Keywords: effective mathematics teaching practice, high school algebra, learning mathematics, productive struggle
24 Geovisualization of Human Mobility Patterns in Los Angeles Using Twitter Data
Authors: Linna Li
Abstract:
The capability to move around places is doubtless very important for individuals to maintain good health and social functions. People’s activities in space and time have long been a research topic in behavioral and socio-economic studies, particularly focusing on the highly dynamic urban environment. By analyzing groups of people who share similar activity patterns, many socio-economic and socio-demographic problems and their relationships with individual behavior preferences can be revealed. Los Angeles, known for its large population, ethnic diversity, cultural mixing, and entertainment industry, faces great transportation challenges such as traffic congestion, parking difficulties, and long commuting. Understanding people’s travel behavior and movement patterns in this metropolis sheds light on potential solutions to complex problems regarding urban mobility. This project visualizes people’s trajectories in Greater Los Angeles (L.A.) Area over a period of two months using Twitter data. A Python script was used to collect georeferenced tweets within the Greater L.A. Area including Ventura, San Bernardino, Riverside, Los Angeles, and Orange counties. Information associated with tweets includes text, time, location, and user ID. Information associated with users includes name, the number of followers, etc. Both aggregated and individual activity patterns are demonstrated using various geovisualization techniques. Locations of individual Twitter users were aggregated to create a surface of activity hot spots at different time instants using kernel density estimation, which shows the dynamic flow of people’s movement throughout the metropolis in a twenty-four-hour cycle. In the 3D geovisualization interface, the z-axis indicates time that covers 24 hours, and the x-y plane shows the geographic space of the city. Any two points on the z axis can be selected for displaying activity density surface within a particular time period. 
In addition, daily trajectories of Twitter users were created using space-time paths that show the continuous movement of individuals throughout the day. When a personal trajectory is overlaid on ancillary layers, including land use and road networks, in 3D visualization, the vivid, realistic view of the urban environment boosts the situational awareness of the map reader. A comparison of the same individual's paths on different days shows regular weekday patterns for some Twitter users, while for others the daily trajectories are more irregular and sporadic. This research makes contributions in two major areas: geovisualization of spatial footprints to understand travel behavior using the big-data approach, and dynamic representation of activity space in the Greater Los Angeles Area. Unlike traditional travel surveys, social media (e.g., Twitter) provides an inexpensive way of collecting data on spatio-temporal footprints. The visualization techniques used in this project are also valuable for analyzing other spatio-temporal data in the exploratory stage, thus leading to informed decisions about generating and testing hypotheses for further investigation. The next step of this research is to separate users into different groups based on gender/ethnic origin and compare their daily trajectory patterns.
Keywords: geovisualization, human mobility pattern, Los Angeles, social media
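The kernel-density hot-spot surface for a selected time slice, as described above, can be sketched with SciPy; the tweet coordinates below are simulated clusters standing in for two activity centers, not actual Twitter data:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

# Simulated georeferenced tweets: two clusters standing in for, e.g.,
# Downtown L.A. and Santa Monica (coordinates approximate, for illustration)
downtown = rng.normal([-118.24, 34.05], 0.02, size=(500, 2))
westside = rng.normal([-118.49, 34.02], 0.02, size=(300, 2))
points = np.vstack([downtown, westside])
hours = rng.integers(0, 24, size=len(points))

# Pick two points on the z/time axis (a morning window), then estimate the
# activity-density surface for that slice with kernel density estimation
window = (hours >= 7) & (hours < 10)
kde = gaussian_kde(points[window].T)

# Evaluate the surface on a grid covering the study area
xs, ys = np.meshgrid(np.linspace(-118.6, -118.1, 50),
                     np.linspace(33.9, 34.2, 50))
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
print(density.shape)
```

Repeating the estimate for successive time windows yields the stack of surfaces that the 3D interface animates over the twenty-four-hour cycle.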
23 Neural Correlates of Diminished Humor Comprehension in Schizophrenia: A Functional Magnetic Resonance Imaging Study
Authors: Przemysław Adamczyk, Mirosław Wyczesany, Aleksandra Domagalik, Artur Daren, Kamil Cepuch, Piotr Błądziński, Tadeusz Marek, Andrzej Cechnicki
Abstract:
The present study aimed at evaluating the neural correlates of the humor-comprehension impairments observed in schizophrenia. To investigate the nature of this deficit and to localize the cortical areas involved in humor processing, we used functional magnetic resonance imaging (fMRI). The study included chronic schizophrenia outpatients (SCH; n=20) and sex-, age- and education-matched healthy controls (n=20). The task consisted of 60 stories (setups), of which 20 had funny, 20 nonsensical, and 20 neutral (not funny) punchlines. After the punchlines were presented, the participants were asked to indicate whether the story was comprehensible (yes/no) and how funny it was (1-9 Likert-type scale). fMRI was performed on a 3T scanner (Magnetom Skyra, Siemens) using a 32-channel head coil. Three contrasts, corresponding to the three stages of humor processing, were analyzed in both groups: nonsensical vs. neutral stories (incongruity detection); funny vs. nonsensical (incongruity resolution); funny vs. neutral (elaboration). Additionally, a parametric modulation analysis was performed using each subjective rating separately in order to further differentiate the areas involved in incongruity-resolution processing. Statistical analysis of the behavioral data used the Mann-Whitney U test with Bonferroni correction; fMRI data analysis utilized whole-brain voxel-wise t-tests with a 10-voxel extent threshold, either family-wise error (FWE) corrected at alpha = 0.05 or uncorrected at alpha = 0.001. Between-group comparisons revealed that the SCH subjects had attenuated activation in the right superior temporal gyrus for the processing of irresolvable incongruity in nonsensical puns (nonsensical > neutral); in the left medial frontal gyrus for incongruity-resolution processing of funny puns (funny > nonsensical); and in the interhemispheric ACC for the elaboration of funny puns (funny > neutral).
Additionally, the SCH group showed weaker activation during funniness ratings in the left ventro-medial prefrontal cortex, the medial frontal gyrus, the angular and supramarginal gyri, and the right temporal pole. In comprehension ratings, the SCH group showed suppressed activity in the left superior and medial frontal gyri. Interestingly, these differences were accompanied by longer response times for both types of ratings in the SCH group, a lower level of comprehension for funny punchlines, and higher funniness ratings for absurd punchlines. The presented results indicate that, in comparison to healthy controls, schizophrenia is characterized by difficulties in humor processing, revealed by longer reaction times, impaired understanding of jokes, and finding nonsensical punchlines funnier. This is accompanied by attenuated brain activations, especially in the left fronto-parietal and right temporal cortices. Humor processing appears to be impaired at all three stages of the comprehension process, from incongruity detection through its resolution to elaboration. The neural correlates revealed diminished neural activity in the schizophrenia group as compared with the control group. The study was supported by the National Science Centre, Poland (grant no 2014/13/B/HS6/03091). Keywords: communication skills, functional magnetic resonance imaging, humor, schizophrenia
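The group comparison of behavioral ratings described in the abstract (Mann-Whitney U test with Bonferroni correction) can be sketched as follows. This is an illustrative outline only, with made-up data; the variable names, group sizes, and number of tests are assumptions, not the study's actual analysis pipeline.

```python
# Illustrative sketch: comparing Likert-scale ratings between two groups
# with a Mann-Whitney U test and Bonferroni correction for multiple tests.
# All data below are randomly generated placeholders, not study data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)

# Hypothetical funniness ratings (1-9 scale) for controls vs. SCH,
# one pair of samples per punchline condition (n=20 per group).
ratings = {
    "funny":       (rng.integers(4, 10, 20), rng.integers(3, 9, 20)),
    "nonsensical": (rng.integers(1, 6, 20),  rng.integers(2, 7, 20)),
    "neutral":     (rng.integers(1, 4, 20),  rng.integers(1, 4, 20)),
}

n_tests = len(ratings)
alpha_corrected = 0.05 / n_tests  # Bonferroni: divide alpha by number of tests

for condition, (controls, sch) in ratings.items():
    u_stat, p_value = mannwhitneyu(controls, sch, alternative="two-sided")
    significant = p_value < alpha_corrected
    print(f"{condition}: U={u_stat:.1f}, p={p_value:.4f}, "
          f"significant at corrected alpha={alpha_corrected:.4f}: {significant}")
```

The Bonferroni step simply tightens the per-test significance threshold so that the family-wise error rate stays at the nominal 0.05.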
Procedia PDF Downloads 215
22 Cycleloop Personal Rapid Transit: An Exploratory Study for Last Mile Connectivity in Urban Transport
Authors: Suresh Salla
Abstract:
In this paper, the author explores the most sustainable last-mile transport mode, addressing the present problems of traffic congestion, jams, pollution, and travel stress. The development of energy-efficient, sustainable, integrated transport systems is a must to make our cities more livable. Emphasis on autonomous, connected, electric, and shared systems for the effective utilization of vehicles and public infrastructure is on the rise. Many surface mobility innovations, like PBS, ride hailing, ride sharing, etc., although workable, add to already congested roads when analyzed holistically; they are also difficult to ride in hostile weather, cause pollution, and impose commuter stress. The sustainability of transportation is evaluated with respect to public adoption, average speed, energy consumption, and pollution. Why does the public prefer certain modes over others? How does commute time play a role in mode selection or shift? What factors play a role in energy consumption and pollution? Based on the study, it is clear that the public prefers a transport mode that is exhaustive (i.e., less need for interchange, since the network is widespread), intensive (i.e., less waiting time, since vehicles are available at frequent intervals), and convenient with the latest technologies. Average speed depends on stops, the number of intersections, signals, clear route availability, etc. It is clear from physics that the higher the kerb weight of a vehicle, the higher its operational energy consumption. Higher kerb weight also demands heavier infrastructure. Pollution depends on the source of energy, the efficiency of the vehicle, and the average speed. A mode can be made exhaustive when the unit infrastructure cost is low and can be offered intensively when the vehicle cost is low. Reliable and seamless integrated mobility down to the last quarter mile (Five-Minute Walk, FMW) is a must to encourage sustainable public transportation. The study shows that the average speed and reliability of dedicated modes (like Metro, PRT, BRT, etc.) 
are high compared to road vehicles. Electric vehicles, and more so battery-less or third-rail vehicles, reduce pollution. One potential mode is Cycleloop PRT, in which the commuter rides an e-cycle in a dedicated path: elevated, at grade, or underground. An e-bike with a kerb weight per rider of 15 kg, being 1/50th that of a car and 1/10th that of other PRT systems, makes this a sustainable mode. The Cycleloop tube will be light, sleek, and scalable, and can be erected in modular fashion, either on modified street lamp-posts or hung/suspended between two stations. Embarking and disembarking points, or offline stations, can be placed at intervals that suit FMW access to mass public transit. In terms of convenience, the guided e-bike can be made self-balancing, thus enabling driverless on-demand vehicles. An e-bike equipped with smart electronics and drive controls can intelligently respond to field sensors and move autonomously in reaction to a central controller. Smart switching allows travel from origin to destination without an interchange of cycles. A DC-powered, battery-less e-cycle with voluntary manual pedaling makes the mode sustainable and provides health benefits. Tandem e-bikes, smart switching, and platoon-operation algorithm options provide superior throughput of the Cycleloop. Thus, Cycleloop PRT will be an exhaustive, intensive, convenient, reliable, speedy, sustainable, safe, pollution-free, and healthy alternative mode for last-mile connectivity in cities. Keywords: cycleloop PRT, five-minute walk, lean modular infrastructure, self-balanced intelligent e-cycle
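The abstract's kerb-weight comparison can be made concrete with a back-of-envelope calculation. This sketch assumes, as the abstract suggests, that operational energy per rider scales roughly linearly with kerb weight per rider; the implied car and PRT figures are derived from the stated 1/50th and 1/10th ratios, not measured values.

```python
# Back-of-envelope sketch of kerb weight per rider, derived from the
# ratios stated in the abstract (illustrative, not measured data).
ebike_kg_per_rider = 15                      # stated in the abstract
car_kg_per_rider = ebike_kg_per_rider * 50   # "1/50th of car"  -> ~750 kg
prt_kg_per_rider = ebike_kg_per_rider * 10   # "1/10th of other PRT" -> ~150 kg

for mode, kg in [("e-Bike", ebike_kg_per_rider),
                 ("other PRT", prt_kg_per_rider),
                 ("car", car_kg_per_rider)]:
    # Relative operational energy per rider, normalized to the e-bike,
    # under the assumption that energy scales linearly with kerb weight.
    print(f"{mode}: {kg} kg/rider, ~{kg // ebike_kg_per_rider}x e-Bike energy")
```

Under this linear assumption, the e-bike's 15 kg per rider translates into roughly a fiftyfold energy advantage over a private car, which is the core of the sustainability argument.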
Procedia PDF Downloads 135
21 Project Management Practices and Operational Challenges in Conflict Areas: Case Study Kewot Woreda North Shewa Zone, Amhara Region, Ethiopia
Authors: Rahel Birhane Eshetu
Abstract:
This research investigates the complex landscape of project management practices and operational challenges in conflict-affected areas, with a specific focus on Kewot Woreda in the North Shewa Zone of the Amhara region in Ethiopia. The study aims to identify essential project management methodologies, the significant operational hurdles faced, and the adaptive strategies employed by project managers in these challenging environments. Utilizing a mixed-methods approach, the research combines qualitative and quantitative data collection. Initially, a comprehensive literature review was conducted to establish a theoretical framework. This was followed by the administration of questionnaires to gather empirical data, which was then analyzed using statistical software. This sequential approach ensures a robust understanding of the context and challenges faced by project managers. The findings reveal that project managers in conflict zones encounter a range of escalating challenges. Initially, they must contend with immediate security threats and the presence of displaced populations, which significantly disrupt project initiation and execution. As projects progress, additional challenges arise, including limited access to essential resources and environmental disruptions such as natural disasters. These factors exacerbate the operational difficulties that project managers must navigate. In response to these challenges, the study highlights the necessity for project managers to implement formal project plans while simultaneously adopting adaptive strategies that evolve over time. Key adaptive strategies identified include flexible risk management frameworks, change management practices, and enhanced stakeholder engagement approaches. These strategies are crucial for maintaining project momentum and ensuring that objectives are met despite the unpredictable nature of conflict environments. 
The research emphasizes that structured scope management, clear documentation, and thorough requirements analysis are vital components for effectively navigating the complexities inherent in conflict-affected regions. However, the ongoing threats and logistical barriers necessitate continuous adjustment of project management methodologies. This adaptability is essential not only for the immediate success of projects but also for fostering long-term resilience within the community. In conclusion, the study offers actionable recommendations aimed at improving project management practices in conflict zones. These include the adoption of adaptive frameworks specifically tailored to the unique conditions of conflict environments and targeted training for project managers. Such training should focus on equipping managers with the skills to better address the dynamic challenges presented by conflict situations. The insights gained from this research contribute significantly to the broader field of project management, providing a practical guide for practitioners operating in high-risk areas. By emphasizing sustainable and resilient project outcomes, this study underscores the importance of adaptive management strategies in ensuring the success of projects in conflict-affected regions. The findings serve not only to enhance the understanding of project management practices in Kewot Woreda but also to inform future research and practice in similar contexts, ultimately aiming to promote stability and development in areas beset by conflict. Keywords: project management practices, operational challenges, conflict zones, adaptive strategies
Procedia PDF Downloads 20
20 Experiences of Discrimination and Coping Strategies of Second Generation Academics during the Career-Entry Phase in Austria
Authors: R. Verwiebe, L. Seewann, M. Wolf
Abstract:
This presentation addresses marginalization and discrimination as experienced by young academics with a migrant background in the Austrian labor market. Focusing on second-generation academics of Central Eastern European and Turkish descent, we explore two major issues. First, we ask whether their career entry and everyday professional life entail origin-specific barriers. Having completed their education in Austria, they possess the very competences whose absence is often invoked to explain discrimination: excellent linguistic skills, accredited high-level training, and networks. Second, we concentrate on how this group reacts to discrimination and overcomes experiences of marginalization. To answer these questions, we utilize recent sociological and social-psychological theories that focus on the diversity of individual experiences. This distinguishes us from a long tradition of research that has dealt with the motives that inform discrimination but has less often considered the effects on those concerned. Similarly, the coping strategies applied have less often been investigated, though they may provide unique insights into current problematic issues. Building upon the present literature, we follow recent discrimination research incorporating the concepts of ‘multiple discrimination’, ‘subtle discrimination’, and ‘visual social markers’. Twenty-one problem-centered interviews form the empirical foundation of this study. The interviewees completed their entire educational career in Austria, graduated from different universities and disciplines, and are working in their first post-graduate jobs (career-entry phase). In our analysis, we combined thematic charting with a coding method. The results emanating from our empirical material indicated a variety of discrimination experiences, ranging from barely perceptible disadvantages to directly articulated and overt marginalization. 
The spectrum of experiences covered stereotypical suppositions at job interviews, the disavowal of competencies, symbolic or social exclusion by new colleagues, restricted professional participation (e.g., in customer contact), and non-recruitment due to religious or ethnic markers (e.g., headscarves). In these experiences, the role of the academics' education level, networks, or competences seemed to be minimal, as negative prejudice on the basis of visible ‘social markers’ operated ‘ex ante’. The coping strategies identified in overcoming such barriers are: an increased emphasis on effort, avoidance of potentially marginalizing situations, direct resistance (mostly in the form of verbal opposition), and dismissal of negative experiences by ignoring the situation or treating it with irony. In some cases, the academics drew on their specific competences, such as taking an intellectual approach by studying specialist literature, focusing on their intercultural competences, or planning to migrate back to their parents' country of origin. Our analysis further suggests a distinction between reactive coping strategies (acting on and responding to experienced discrimination) and preventative ones (applied to obviate discrimination). In light of our results, we would like to stress that the tension between the educational and professional success experienced by academics with a migrant background and the barriers and marginalization they continue to face is an essential issue to be introduced into socio-political discourse. It seems imperative to publicly accentuate the growing social, political, and economic significance of this group, their educational aspirations, as well as their experiences of achievement and difficulties. Keywords: coping strategies, discrimination, labor market, second generation university graduates
Procedia PDF Downloads 222