Search results for: heuristics and biases
Thinking in a Foreign Language Overcomes the Developmental Reversal in Risky Decision-Making: The Foreign Language Effect in Risky Decision-Making
Authors: Rendong Cai, Bei Peng, Yanping Dong
Abstract:
In risky decision-making, individuals are susceptible to 'frames': people tend to be risk averse when a choice is described in terms of potential 'gains' (gain frame), whereas they tend to be risk seeking when the same choice is described in terms of potential 'losses' (loss frame); this is termed the framing effect. The framing effect has been well documented, and some studies even find a developmental reversal in the framing effect: the more experience an individual has in a certain field, the more easily he or she is influenced by frames relevant to that field, resulting in greater decision inconsistency. Recent studies have reported that using a foreign language can reduce the framing effect. However, it is not clear whether foreign language use can overcome the developmental reversal in the framing effect. The present study investigated three potential factors that may influence this developmental reversal: the specialized knowledge of the participants, the language in which the problem is presented, and the type of problem. It examined the decision-making behavior of 188 Chinese-English bilinguals majoring in Finance, with a group of 277 English majors as the control group. Participants were asked to solve a financial problem (experimental condition) and a life problem (control condition). Each problem was presented in one of four versions: native language-gain frame, foreign language-gain frame, native language-loss frame, and foreign language-loss frame. Results revealed that for the life problem, under the native-language condition, both groups were affected by the frame; under the foreign-language condition, however, the framing effect disappeared for the financial majors. This confirmed that foreign language use modulates framing effects in general decision-making, which served as an effective baseline.
For the financial problem, under the native-language condition, only the financial majors were observed to be influenced by the frame, which constitutes a developmental reversal; under the foreign-language condition, however, this framing effect disappeared. The results provide further empirical evidence for the universality of the developmental reversal in risky decision-making. More importantly, they suggest that using a foreign language can overcome such a reversal, which has implications for reducing decision biases in professionals. The findings also shed new light on the complex interaction between general decision-making and bilingualism.
Keywords: the foreign language effect, developmental reversals, the framing effect, bilingualism
Procedia PDF Downloads 370
Observing the Observers: Journalism and the Gendered Newsroom
Authors: M. Silveirinha, P. Lobo
Abstract:
In the last few decades, many studies have documented a systematic under-representation of women in the news. Aside from being fewer in number than men, research has also shown that women are frequently portrayed according to traditional stereotypes that have been proven to be disadvantageous to them. When considering this problem, it has often been argued that news content will become more gender balanced as the number of female journalists increases. However, the recent so-called ‘feminization’ of media professions has shown that this assumption is too simplistic. If we want to better grasp gender biases in news content, we need to look more deeply into the processes of news production and into journalism culture itself, taking the study of newsmaking as a starting point and theoretical framework, with the purpose of examining the actual newsroom routines, professional values, structures, and news access that eventually lead to an unbalanced media representation of women. If journalists consider themselves observers of everyday social and political life, then of specific importance, as a vast body of research shows, is the observation of women journalists' beliefs and of their roles and practices in a gendered newsroom. In order to better understand the professional and organizational context of news production and the gender power relations in decision-making processes, we conducted participant observation in two television newsrooms. Our approach involved a combination of methods, including overt observation itself, formal and informal interviews, and the writing-up and analysis of our own diaries. Drawing on insights from organizational sociology, we took newsroom practices to be a result of professional routines and socialization and focused on how women and men respond to newsroom dynamics and structures. We also analyzed the gendered organization of the newsmaking process and the subtle and/or obvious glass-ceiling obstacles often reported on.
In our paper, we address two levels of research: first, we look at our results and establish an overview of the patterns of continuity between the gendering of organizations, working conditions, and journalists' professional beliefs. At this level, the study not only interrogates how journalists handle views on gender and the practice of the profession but also highlights the structural inequalities in journalism and the pervasiveness of family–work tensions for female journalists. Secondly, we reflect on our observation method and establish a critical assessment of the method itself.
Keywords: gender, journalism, participant observation, women
Procedia PDF Downloads 354
A Modified Estimating Equations in Derivation of the Causal Effect on the Survival Time with Time-Varying Covariates
Authors: Yemane Hailu Fissuh, Zhongzhan Zhang
Abstract:
A systematic observation from a defined time of origin up to a certain failure or censoring event is known as survival data. Survival analysis is a major area of interest in biostatistics and biomedical research. Causality analysis lies at the heart of most scientific and medical research inquiries. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. The theory of causality often differs from the simple association between the response variable and predictors. Causal estimation is a scientific concept for comparing a pragmatic effect between two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semi-parametric transformation models. The proposed model yields consistent estimators for the unknown parameters and unspecified monotone transformation functions. In this article, the proposed method estimated an unbiased average causal effect of treatment on survival time of interest. The modified estimating equations of semiparametric transformation models have the advantage of including time-varying effects in the model. Finally, the finite-sample performance characteristics of the estimators were demonstrated through simulation and the Stanford heart transplant data. To this end, the average effect of a treatment on survival time was estimated after adjusting for biases arising from the high correlation between left-truncation and possibly time-varying covariates. The bias in covariates was corrected by estimating the density function for left-truncation. Besides, to relax the independence assumption between failure time and truncation time, the model incorporated the left-truncation variable as a covariate.
Moreover, the expectation-maximization (EM) algorithm iteratively obtained the unknown parameters and unspecified monotone transformation functions. In summary, the ratio of cumulative hazard functions between the treated and untreated experimental groups captures the average causal effect for the entire population.
Keywords: modified estimating equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate
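The abstract's closing idea, an average causal effect read off as a ratio of cumulative hazards, can be sketched in notation (the symbols below are illustrative, not the authors' own):

```latex
% \lambda_k, \Lambda_k: hazard and cumulative hazard of group k (1 = treated, 0 = untreated)
\Lambda_k(t) = \int_0^t \lambda_k(s)\,ds, \qquad k \in \{0, 1\},
\qquad
\theta(t) = \frac{\Lambda_1(t)}{\Lambda_0(t)}
```

Under this reading, a ratio near 1 would indicate no average causal effect of treatment on survival at time t.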
Procedia PDF Downloads 175
Investigating the Impact of Task Demand and Duration on Passage of Time Judgements and Duration Estimates
Authors: Jesika A. Walker, Mohammed Aswad, Guy Lacroix, Denis Cousineau
Abstract:
There is a fundamental disconnect between the experience of time passing and the chronometric units by which time is quantified. Specifically, there appears to be no relationship between the passage of time judgments (PoTJs) and verbal duration estimates at short durations (e.g., < 2000 milliseconds). When a duration is longer than several minutes, however, evidence suggests that a slower feeling of time passing is predictive of overestimation. Might the length of a task moderate the relation between PoTJs and duration estimates? Similarly, the estimation paradigm (prospective vs. retrospective) and the mental effort demanded of a task (task demand) have both been found to influence duration estimates. However, only a handful of experiments have investigated these effects for tasks of long durations, and the results have been mixed. Thus, might the length of a task also moderate the effects of the estimation paradigm and task demand on duration estimates? To investigate these questions, 273 participants performed either an easy or difficult visual and memory search task for either eight or 58 minutes, under prospective or retrospective instructions. Afterward, participants provided a duration estimate in minutes, followed by a PoTJ on a Likert scale (1 = very slow, 7 = very fast). A 2 (prospective vs. retrospective) × 2 (eight minutes vs. 58 minutes) × 2 (high vs. low difficulty) between-subjects ANOVA revealed a two-way interaction between task demand and task duration on PoTJs, p = .02. Specifically, time felt faster in the more challenging task, but only in the eight-minute condition, p < .01. Duration estimates were transformed into RATIOs (estimate/actual duration) to standardize estimates across durations. An ANOVA revealed a two-way interaction between estimation paradigm and task duration, p = .03. Specifically, participants overestimated the task more if they were given prospective instructions, but only in the eight-minute task. 
Surprisingly, there was no effect of task difficulty on duration estimates. Thus, the demands of a task may influence ‘feeling of time’ and ‘estimation of time’ differently, contributing to the existing theory that these two forms of time judgement rely on separate underlying cognitive mechanisms. Finally, a significant main effect of task duration was found for both PoTJs and duration estimates (ps < .001). Participants underestimated the 58-minute task (m = 42.5 minutes) and overestimated the eight-minute task (m = 10.7 minutes). Yet, they reported the 58-minute task as passing significantly slower on a Likert scale (m = 2.5) compared to the eight-minute task (m = 4.1). In fact, a significant correlation was found between PoTJ and duration estimation (r = .27, p < .001). This experiment thus provides evidence for a compensatory effect at longer durations, in which people underestimate a ‘slow-feeling’ condition and overestimate a ‘fast-feeling’ condition. The results are discussed in relation to heuristics that might alter the relationship between these two variables when conditions range from several minutes up to almost an hour.
Keywords: duration estimates, long durations, passage of time judgements, task demands
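As a rough illustration of the RATIO transform and the PoTJ-estimate correlation described above, here is a minimal sketch; the four "participants" and their numbers are invented for demonstration and are not the study's data:

```python
from math import sqrt
from statistics import mean

# Hypothetical data: two participants per duration condition
actual_minutes = [8, 8, 58, 58]        # true task durations
estimates = [10.7, 9.5, 42.5, 47.0]    # verbal duration estimates (minutes)
potj = [4.0, 5.0, 2.0, 3.0]            # passage-of-time judgments (1-7 Likert)

# RATIO = estimate / actual duration; > 1 means overestimation, < 1 underestimation
ratios = [est / act for est, act in zip(estimates, actual_minutes)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# On this toy data, faster-feeling tasks pair with overestimation
r = pearson(potj, ratios)
```

The RATIO standardization is what lets the 8-minute and 58-minute conditions be compared on a common scale.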
Procedia PDF Downloads 131
Unlocking Health Insights: Studying Data for Better Care
Authors: Valentina Marutyan
Abstract:
Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of and approach to providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. It enables doctors to intervene early to prevent problems or improve outcomes for patients, and it assists in early disease detection and customized treatment planning. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, data mining improves the efficiency of hospitals by helping them determine the number of beds or doctors they require for the number of patients they expect. This project used models such as logistic regression, random forests, and neural networks for predicting diseases and analyzing medical images. Clustering algorithms such as k-means grouped patients, and association rule mining identified connections between treatments and patient responses. Time-series techniques supported resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment.
Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and operate more efficiently. It comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
Keywords: data mining, healthcare, big data, large amounts of data
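To make the association-rule idea concrete, here is a minimal sketch of support and confidence over toy treatment/response records; the records, item names, and the rule are invented for illustration, not clinical data:

```python
# Each record is the set of items observed for one (hypothetical) patient
records = [
    {"drug_A", "improved"},
    {"drug_A", "improved"},
    {"drug_A", "no_change"},
    {"drug_B", "improved"},
    {"drug_B", "no_change"},
    {"drug_A", "drug_B", "improved"},
]

def support(itemset):
    """Fraction of records containing every item in the itemset."""
    return sum(itemset <= rec for rec in records) / len(records)

def confidence(antecedent, consequent):
    """Estimated P(consequent | antecedent) = supp(A and C) / supp(A)."""
    return support(antecedent | consequent) / support(antecedent)

# Rule "drug_A -> improved": how often improvement accompanies drug_A here
conf = confidence({"drug_A"}, {"improved"})  # 0.75 on this toy data
```

Rules whose support and confidence exceed chosen thresholds are the ones a mining pass would surface for clinical review.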
Procedia PDF Downloads 76
Cancer Burden and Policy Needs in the Democratic Republic of the Congo: A Descriptive Study
Authors: Jean Paul Muambangu Milambo, Peter Nyasulu, John Akudugu, Leonidas Ndayisaba, Joyce Tsoka-Gwegweni, Lebwaze Massamba Bienvenu, Mitshindo Mwambangu Chiro
Abstract:
In 2018, non-communicable diseases (NCDs) were responsible for 48% of deaths in the Democratic Republic of Congo (DRC), with cancer contributing to 5% of these deaths. There is a notable absence of cancer registries, capacity-building activities, budgets, and treatment roadmaps in the DRC. Current cancer estimates are primarily based on mathematical modeling with limited data from neighboring countries. This study aimed to assess cancer subtype prevalence in Kinshasa hospitals and compare these findings with WHO model estimates. Methods: A retrospective observational study was conducted from 2018 to 2020 at HJ Hospitals in Kinshasa. Data were collected using American Cancer Society (ACS) questionnaires and physician logs. Descriptive analysis was performed using STATA version 16 to estimate cancer burden and provide evidence-based recommendations. Results: The results from the chart review at HJ Hospitals in Kinshasa (2018-2020) indicate that out of 6,852 samples, approximately 11.16% were diagnosed with cancer. The distribution of cancer subtypes in this cohort was as follows: breast cancer (33.6%), prostate cancer (21.8%), colorectal cancer (9.6%), lymphoma (4.6%), and cervical cancer (4.4%). These figures are based on histopathological confirmation at the facility and may not fully represent the broader population due to potential selection biases related to geographic and financial accessibility to the hospital. In contrast, the World Health Organization (WHO) model estimates for cancer prevalence in the DRC show different proportions. According to WHO data, the distribution of cancer types is as follows: cervical cancer (15.9%), prostate cancer (15.3%), breast cancer (14.9%), liver cancer (6.8%), colorectal cancer (5.9%), and other cancers (41.2%) (WHO, 2020). Conclusion: The data indicate a rising cancer prevalence in DRC but highlight significant gaps in clinical, biomedical, and genetic cancer data. 
The establishment of a population-based cancer registry (PBCR) and a defined cancer management pathway is crucial. The current estimates are limited due to data scarcity and inconsistencies in clinical practices. There is an urgent need for multidisciplinary cancer management, integration of palliative care, and improvement in care quality based on evidence-based measures.
Keywords: cancer, risk factors, DRC, gene-environment interactions, survivors
Procedia PDF Downloads 23
Network Governance and Renewable Energy Transition in Sub-Saharan Africa: Contextual Evidence from Ghana
Authors: Kyere Francis, Sun Dongying, Asante Dennis, Nkrumah Nana Kwame Edmund, Naana Yaa Gyamea Kumah
Abstract:
With a focus on renewable energy to achieve low-carbon transition objectives, there is a greater demand for effective collaborative strategies for planning, strategic decision mechanisms, and long-term policy designs to steer the transition. Government agencies, NGOs, the private sector, and individual citizens play an important role in sustainable energy production. In Ghana, however, such collaboration is fragile in the fight against climate change. This study re-examines the position and potential of network governance in Ghana's renewable energy transition. The study adopted a qualitative approach and employed semi-structured interviews for data gathering. To explore network governance and low-carbon transitions in Ghana, we examine key themes such as the political environment and its impact, actor cooperation and stakeholder interactions, financing of the transition, market design and renewable energy integration, existing regulation and policy gaps for the renewable energy transition, and clean cooking accessibility and affordability. The findings reveal the following: a lack of comprehensive consultation with relevant stakeholders leads to lower acceptance of the policy model and sometimes a lack of policy awareness. Moreover, the unavailability and unaffordability of renewable energy technologies, together with limited access to credit facilities, are significant hurdles to a long-term renewable transition. Ghana's renewable energy transition requires strong networking and interaction among public, private, and non-governmental organizations. The study participants believe that the involvement of relevant energy experts and stakeholders, free of political biases, is instrumental in accelerating the renewable energy transition, as emphasized in the proposed framework. The study recommends that the national renewable energy transition plan be made evident to all stakeholders and political administrators.
Such a policy may encourage renewable energy investment through stable, fixed lending rates from financial institutions and by building networks with international organizations and corporations. These findings could serve as valuable information for the transition-based energy process, primarily aiming to govern sustainability changes through network governance.
Keywords: actors, development, sustainable energy, network governance, renewable energy transition
Procedia PDF Downloads 89
Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger
Authors: Hany Elsaid Fawaz Abdallah
Abstract:
This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset of Nusselt number and pressure drop values over the following ranges of dimensionless parameters: plate corrugation angle from 0° to 60°, Reynolds number from 10,000 to 40,000, pitch-to-height ratio from 1 to 4, and Prandtl number from 0.7 to 200. Based on the ANN performance graph, a three-layer structure with {12-8-6} hidden neurons was chosen. The training procedure includes feed-forward propagation of the input parameters, evaluation of the loss function on the training and validation datasets, and back-propagation with weight and bias adjustment. The linear function was used as the activation function at the output layer, while the rectified linear unit (ReLU) activation function was utilized for the hidden layers. To accelerate ANN training, loss function minimization was achieved with the adaptive moment estimation (Adam) algorithm. The "MinMax" normalization approach was utilized to avoid an increase in training time due to drastic differences in the loss function gradients with respect to the values of the weights. Since the test dataset is not used for ANN training, a cross-validation technique was applied to the network using the new data. This procedure was repeated until loss function convergence was achieved, or for 4000 epochs with a batch size of 200 points. The program code was written in Python 3 using open-source ANN libraries such as scikit-learn, TensorFlow, and Keras. Mean absolute percentage errors of 9.4% for the Nusselt number and 8.2% for the pressure drop were achieved for the ANN model, a higher accuracy than that of the generalized correlations.
The performance of the obtained model was validated by comparing predicted data with the experimental results, yielding excellent accuracy.
Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations
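The abstract's main ingredients (min-max scaling of the four inputs, {12-8-6} ReLU hidden layers, and a linear output) can be sketched in plain Python. The weights below are random placeholders, not the trained model, and the layer-building helper is an assumption for illustration:

```python
import random

def minmax_scale(x, lo, hi):
    """Map a raw feature into [0, 1] using its training-set range."""
    return (x - lo) / (hi - lo)

def relu(v):
    return [max(0.0, x) for x in v]

def dense(v, weights, biases):
    """One fully connected layer: out_j = sum_i v_i * w[j][i] + b_j."""
    return [sum(x * w for x, w in zip(v, w_row)) + b
            for w_row, b in zip(weights, biases)]

random.seed(0)

def rand_layer(n_in, n_out):
    """Placeholder weights; the real model learns these via Adam."""
    w = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

# 4 inputs (corrugation angle, Re, pitch/height, Pr) -> 12 -> 8 -> 6 -> 2 outputs
sizes = [4, 12, 8, 6, 2]
layers = [rand_layer(a, b) for a, b in zip(sizes, sizes[1:])]

# Feature ranges taken from the abstract's experimental dataset
RANGES = [(0.0, 60.0), (10000.0, 40000.0), (1.0, 4.0), (0.7, 200.0)]

def predict(raw):
    v = [minmax_scale(x, lo, hi) for x, (lo, hi) in zip(raw, RANGES)]
    for i, (w, b) in enumerate(layers):
        v = dense(v, w, b)
        if i < len(layers) - 1:  # ReLU on hidden layers, linear output layer
            v = relu(v)
    return v  # [Nusselt number, pressure drop], here in arbitrary untrained units

nu, dp = predict([30.0, 25000.0, 2.5, 100.0])
```

The min-max step is what keeps the Reynolds number (order 10^4) from swamping the gradient contributions of the order-1 pitch-to-height ratio.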
Procedia PDF Downloads 88
Commitment Dynamics: Generational Variations in Romantic Relationships among Gen X, Millennials and Gen Z
Authors: Ispreha Bailung
Abstract:
Background: Romantic commitment has evolved across generations, influenced by societal, cultural, and technological changes. This study explores how Generation X, Millennials, and Gen Z perceive, develop, and sustain commitment, with a focus on family, society, and technology. The objectives are to uncover generational differences, identify barriers to commitment, and examine cultural influences, offering insights to foster healthier relationships in a shifting world. Method: A phenomenological approach was used to examine generational differences in romantic commitment dynamics. Fifteen participants (five from each generation) were recruited online. Inclusion criteria required participants to identify with a specified generation and have romantic relationship experience. Semi-structured interviews (60–90 minutes) were conducted, focusing on personal experiences, values, and technology's influence on commitment. Interviews were recorded, transcribed, and analyzed thematically. Ethical protocols ensured participant well-being and data integrity. Findings: Generational shifts in commitment were observed, with Gen X emphasizing traditional values like marriage and loyalty, Millennials balancing tradition with personal fulfillment, and Gen Z prioritizing autonomy and mental well-being. Technology, such as dating apps and social media, created option overload and skepticism about authenticity. Despite increasing individualization, family influence remained significant. Key barriers to commitment included emotional detachment, career priorities, and trust issues, reflecting a broader shift toward more flexible and individualized relationships. Conclusion: This study provides valuable insights into generational differences in commitment dynamics, highlighting shifts in how commitment is viewed and enacted. 
While the study contributes to understanding evolving perspectives, the findings are limited by a small sample size, potential cultural biases, and the short-term nature of the research, limiting generalizability. Future Implications: Future research should focus on cross-cultural and longitudinal studies to track changes in commitment perceptions. Examining digital communication’s impact on relationship satisfaction and exploring new frameworks for assessing relationship success will further inform understanding and policymaking in the context of evolving romantic dynamics.
Keywords: generational differences, commitment dynamics, romantic relationships, emotional compatibility, social media
Procedia PDF Downloads 7
Acceptability and Challenges Experienced by Homosexual Indigenous Peoples in Southern Palawan
Authors: Crisanto H. Ecaldre
Abstract:
Gender perception represents how an individual perceives the gender identity of a person. Since this is a subjective assessment, it paves the way to various social reactions, either in the form of acceptance or discrimination. Reports across the world show that lesbian, gay, bisexual, or transgender (LGBT) people often face discrimination, stigmatization, and targeted violence because of their sexual orientation or gender identity. However, the challenges faced by those who belong to both a sexual minority and a marginalized ethnic, religious, linguistic, or indigenous community are even more complex. Specifically, in the Palaw’an community, members recognize those who identify as gay or lesbian and use the term “bantut” to refer to them. Various scholarly works have also been introduced to facilitate dialogues that promote visibility and inclusivity across sectors in terms of gender preference; however, gaps remain in terms of recognition and visibility. Though local research initiatives are slowly increasing in number, culturally situating gender studies appropriately within the context of indigenous cultural communities is still lacking. Indigenous community-based discourses on gender, or indigenizing gender discourses, remain a challenge; hence, this study aimed to contribute to addressing these identified gaps. These research objectives were realized through a qualitative approach following an exploratory design. Findings revealed that the Palaw’an indigenous cultural community has an existing concept of homosexuality, termed “bantut.” This notion was culturally defined by the participants as (a) kaloob ng diwata; (b) a manifestation of physical inferiority; (c) hindi nakapag-asawa or hindi nagka-anak; and (d) based on the roles ascribed by the community. These were recognized and valued by the community.
However, despite this recognition and visibility within the community, people outside it view them otherwise. The challenges experienced by Palaw’an homosexuals are imposed by people outside their community and include prejudice, discrimination, and double marginalization. Because of these struggles, they are forced to cope. They deal with the limitations, biases, and burdens imposed by non-Palaw’ans through self-acceptance, a strong self-perception, and the option to leave the community to seek a more open and progressive environment for LGBT people. While these are indications of their resilience amidst difficult situations, this reality poses an important concern: how recognition and visibility of indigenous homosexuals can be attained from the mainstream perspective.
Keywords: gender preference, acceptability, challenge, recognition, visibility, coping
Procedia PDF Downloads 56
Developing a Framework for Designing Digital Assessments for Middle-School Aged Deaf or Hard of Hearing Students in the United States
Authors: Alexis Polanco Jr, Tsai Lu Liu
Abstract:
Research on digital assessment for deaf and hard of hearing (DHH) students is negligible. Part of this stems from DHH assessment design existing at the intersection of the emergent disciplines of usability, accessibility, and child-computer interaction (CCI). While these disciplines have some prevailing guidelines, e.g., Jacob Nielsen's 10 Usability Heuristics (Nielsen-10) in user experience design (UXD) and, for accessibility, the Web Content Accessibility Guidelines (WCAG) and the Principles of Universal Design (PUD), this research was unable to uncover a unified set of guidelines. Given that digital assessments have lasting implications for the funding and shaping of U.S. school districts, it is vital that cross-disciplinary guidelines emerge. As a result, this research seeks to provide a framework by which these disciplines can share knowledge. The framework entails asking subject-matter experts (SMEs) and design and development professionals to describe their fields of expertise and how their work might serve DHH students, and to expose any incongruence between their ideal process and what is permissible at their workplace. This research used two rounds of mixed methods. The first round consisted of structured interviews with SMEs in usability, accessibility, CCI, and DHH education. These practitioners were not designers by trade but were revealed to use designerly work processes. In addition to being asked about their field of expertise and work process, these SMEs were asked whether they believed Nielsen-10 and/or PUD were sufficient for designing products for middle-school DHH students. This first round of interviews revealed that Nielsen-10 and PUD were, at best, a starting point for creating middle-school DHH design guidelines or, at worst, insufficient. The second round of interviews followed a semi-structured interview methodology.
The SMEs who were interviewed in the first round were asked open-ended follow-up questions about their semantic understanding of guidelines, going from the most general sense down to the level of design guidelines for DHH middle-school students. Designers and developers who had not been interviewed previously were asked the same questions that the SMEs had been asked across both rounds of interviews. In terms of the research goals, it was confirmed that the design of digital assessments for DHH students is inherently cross-disciplinary. Unexpectedly, 1) guidelines did not emerge from the interviews conducted in this study, and 2) the principles of Nielsen-10 and PUD were deemed less relevant than expected. Given the prevalence of Nielsen-10 in UXD curricula across academia and certificate programs, this poses a risk to the efficacy of DHH assessments designed by UX designers. Furthermore, the following findings emerged: A) deep collaboration between the disciplines of usability, accessibility, and CCI is low to non-existent; B) there are no universally agreed-upon guidelines for designing digital assessments for DHH middle-school students; C) these disciplines are structured academically and professionally in such a way that practitioners may not know to reach out to other disciplines. For example, accessibility teams at large organizations do not have designers and accessibility specialists on the same team.
Keywords: deaf, hard of hearing, design, guidelines, education, assessment
Procedia PDF Downloads 67
The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the whole industrial chain delivery link and accounts for a large proportion of the cost of the whole distribution process. Promoting the upgrading of logistics networks and improving the layout of final distribution points have become trends in the development of modern logistics. Because customer demand is discrete and heterogeneous in both need and spatial distribution, which leads to a higher delivery failure rate and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the user experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery became a new hotspot, which also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design or redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We account for the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD).
To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that minimizes the total cost, comprising the facility opening cost, the variable transport cost, and the fixed transport cost. Given the NP-hardness of the problem, we propose a hybrid adaptive large-neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm on instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large-neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially on large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impact on the results and derived helpful managerial insights for courier companies.
Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
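The destroy-and-repair loop at the heart of an adaptive large-neighbourhood search can be sketched in a few lines. The sketch below is illustrative only, not the authors' hybrid algorithm: it uses a toy one-dimensional routing instance, a single random-removal destroy operator, and a greedy-insertion repair operator, with operator weights adapted on improvement.

```python
import random

def alns_minimize(initial, cost, destroy_ops, repair_ops, iters=500, seed=0):
    """Skeleton of an adaptive large-neighbourhood search (ALNS).

    Each iteration picks a destroy and a repair operator with probability
    proportional to its weight, and rewards operators that yield an
    improving solution, so effective operators are chosen more often.
    """
    rng = random.Random(seed)
    best = current = initial
    w_destroy = [1.0] * len(destroy_ops)
    w_repair = [1.0] * len(repair_ops)
    for _ in range(iters):
        i = rng.choices(range(len(destroy_ops)), weights=w_destroy)[0]
        j = rng.choices(range(len(repair_ops)), weights=w_repair)[0]
        candidate = repair_ops[j](destroy_ops[i](current, rng), rng)
        if cost(candidate) < cost(current):   # hill-climbing acceptance
            current = candidate
            w_destroy[i] += 0.5               # reward the operators used
            w_repair[j] += 0.5
        if cost(current) < cost(best):
            best = current
    return best

# Toy instance: visit 1-D points in an order minimizing total travel.
points = [5, 1, 9, 3, 7]
tour_cost = lambda t: sum(abs(a - b) for a, b in zip(t, t[1:]))

def destroy(tour, rng):
    """Remove one randomly chosen point from the tour."""
    victim = rng.choice(tour)
    return [p for p in tour if p != victim]

def repair(partial, rng):
    """Greedily reinsert every missing point at its cheapest position."""
    missing = [p for p in points if p not in partial]
    for m in missing:
        pos = min(range(len(partial) + 1),
                  key=lambda k: tour_cost(partial[:k] + [m] + partial[k:]))
        partial = partial[:pos] + [m] + partial[pos:]
    return partial

best = alns_minimize(points, tour_cost, [destroy], [repair])
```

With the seed fixed, the run is deterministic. A full implementation would add a simulated-annealing acceptance criterion, periodic weight decay, and a portfolio of destroy/repair operators; here a single pair suffices to show the adaptive selection mechanism.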
Procedia PDF Downloads 78
52 The Effects of Computer Game-Based Pedagogy on Graduate Students Statistics Performance
Authors: Clement Yeboah, Eva Laryea
Abstract:
A pretest-posttest within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the southeastern United States. We analyzed pretest-posttest differences using paired-samples t-tests for achievement and for statistics anxiety. The t-test for knowledge in statistics was statistically significant, indicating significant mean gains in statistical knowledge as a function of the game-based intervention. Likewise, the t-test for statistics-related anxiety was also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help to create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help to develop important skills such as problem solving, critical thinking, and collaboration. Students can develop an interest in the subject matter and spend quality time learning the course as they play the game, without realizing that they are learning a course they might otherwise presume to be hard. The future directions of the present study are promising as technology continues to advance and become more widely available. Potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools.
It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way basic statistics graduate students learn and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers and will continue to be a dynamic and rapidly evolving field for years to come.
Keywords: pretest-posttest within subjects, computer game-based learning, statistics achievement, statistics anxiety
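The paired-samples t-test this abstract relies on reduces to a one-sample t-test on the pretest-posttest difference scores. A minimal sketch, with purely illustrative scores (the study's raw data are not reported here):

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic: a one-sample t on post - pre differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)   # standard error of the mean difference
    return mean(diffs) / se, n - 1     # (t statistic, degrees of freedom)

# Illustrative pretest/posttest achievement scores for 8 students.
pre = [10, 12, 9, 14, 11, 13, 10, 12]
post = [14, 15, 12, 16, 14, 15, 13, 15]
t, df = paired_t(pre, post)
```

Here t is roughly 12.7 with df = 7, far beyond the two-tailed 5% critical value of 2.365, mirroring the kind of significant pretest-to-posttest gain reported; in practice `scipy.stats.ttest_rel` would return the same statistic along with a p-value.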
Procedia PDF Downloads 77
51 Decolonizing Print Culture and Bibliography Through Digital Visualizations of Artists’ Books at the University of Miami
Authors: Alejandra G. Barbón, José Vila, Dania Vazquez
Abstract:
This study seeks to contribute to the advancement of library and archival sciences in the areas of records management, knowledge organization, and information architecture, particularly focusing on the enhancement of bibliographical description through the incorporation of visual interactive designs aimed at enriching the library users’ experience. In an era of heightened awareness about the legacy of hiddenness across special and rare collections in libraries and archives, along with the need for inclusivity in academia, the University of Miami Libraries has embarked on an innovative project that intersects the realms of print culture, decolonization, and digital technology. This proposal presents an initiative to revitalize the study of Artists’ Books collections by employing digital visual representations to decolonize bibliographic records of some of the most unique materials and foster a more holistic understanding of cultural heritage. Artists' Books, a dynamic and interdisciplinary art form, challenge conventional bibliographic classification systems, making them ripe for the exploration of alternative approaches. This project involves the creation of a digital platform that combines multimedia elements for digital representations, interactive information retrieval systems, innovative information architecture, current bibliographic cataloging and metadata initiatives, and collaborative curation to transform how we engage with and understand these collections. By embracing the potential of technology, we aim to transcend traditional constraints and address the historical biases that have influenced bibliographic practices. In essence, this study showcases an endeavor at the University of Miami Libraries that seeks not only to enhance bibliographic practices but also to confront the legacy of hiddenness across special and rare collections in libraries and archives while strengthening conventional bibliographic description.
By embracing digital visualizations, we aim to provide new pathways for understanding Artists' Books collections in a manner that is more inclusive, dynamic, and forward-looking. This project exemplifies the University’s dedication to fostering critical engagement, embracing technological innovation, and promoting diverse and equitable classifications and representations of cultural heritage.
Keywords: decolonizing bibliographic cataloging frameworks, digital visualizations information architecture platforms, collaborative curation and inclusivity for records management, engagement and accessibility increasing interaction design and user experience
Procedia PDF Downloads 75
50 The Effects of Computer Game-Based Pedagogy on Graduate Students Statistics Performance
Authors: Eva Laryea, Clement Yeboah
Abstract:
A pretest-posttest within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the southeastern United States. We analyzed pretest-posttest differences using paired-samples t-tests for achievement and for statistics anxiety. The t-test for knowledge in statistics was statistically significant, indicating significant mean gains in statistical knowledge as a function of the game-based intervention. Likewise, the t-test for statistics-related anxiety was also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help to create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help to develop important skills such as problem solving, critical thinking, and collaboration. Students can develop an interest in the subject matter and spend quality time learning the course as they play the game, without realizing that they are learning a course they might otherwise presume to be hard. The future directions of the present study are promising, as technology continues to advance and become more widely available. Potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools.
It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way basic statistics graduate students learn and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers, and will continue to be a dynamic and rapidly evolving field for years to come.
Keywords: pretest-posttest within subjects, experimental design, achievement, statistics-related anxiety
Procedia PDF Downloads 58
49 Artificial Intelligence: Reimagining Education
Authors: Silvia Zanazzi
Abstract:
Artificial intelligence (AI) has become an integral part of our world, transitioning from scientific exploration to practical applications that impact daily life. The emergence of generative AI is reshaping education, prompting new questions about the role of teachers, the nature of learning, and the overall purpose of schooling. While AI offers the potential for optimizing teaching and learning processes, concerns about discrimination and bias arising from training data and algorithmic decisions persist. There is a risk of a disconnect between the rapid development of AI and the goals of building inclusive educational environments. The prevailing discourse on AI in education often prioritizes efficiency and individual skill acquisition. This narrow focus can undermine the importance of collaborative learning and shared experiences. A growing body of research challenges this perspective, advocating for AI that enhances, rather than replaces, human interaction in education. This study aims to critically examine the relationship between AI and education. A review of existing research will identify both the potential benefits and the risks of AI implementation. The goal is to develop a framework that supports the ethical and effective integration of AI into education, ensuring it serves the needs of all learners. The theoretical reflection will be developed based on a review of national and international scientific literature on artificial intelligence in education. The primary objective is to curate a selection of critical contributions from diverse disciplinary perspectives and/or an inter- and transdisciplinary viewpoint, providing a state-of-the-art overview and a critical analysis of potential future developments. Subsequently, the thematic analysis of these contributions will enable the creation of a framework for understanding and critically analyzing the role of artificial intelligence in schools and education, highlighting promising directions and potential pitfalls.
The expected results are (1) a classification of the cognitive biases present in representations of AI in education and the associated risks, and (2) a categorization of potentially beneficial interactions between AI applications and teaching and learning processes, including those already in use or under development. While not exhaustive, the proposed framework will serve as a guide for critically exploring the complexity of AI in education. It will help to reframe the dystopian visions often associated with technology and facilitate discussions on fostering synergies that balance the ‘dream’ of quality education for all with the realities of AI implementation. By highlighting reductionist models rooted in fragmented and utilitarian views of knowledge, the discourse on artificial intelligence in education has the merit of stimulating the construction of alternative perspectives that can ‘return’ teaching and learning to education, human growth, and the well-being of individuals and communities.
Keywords: education, artificial intelligence, teaching, learning
Procedia PDF Downloads 20
48 Chatbots and the Future of Globalization: Implications of Businesses and Consumers
Authors: Shoury Gupta
Abstract:
Chatbots are a rapidly growing technological trend that has revolutionized the way businesses interact with their customers. With the advancements in artificial intelligence, chatbots can now mimic human-like conversations and provide instant and efficient responses to customer inquiries. In this research paper, we aim to explore the implications of chatbots on the future of globalization for both businesses and consumers. The paper begins by providing an overview of the current state of chatbots in the global market and their growth potential in the future. The focus is on how chatbots have become a valuable tool for businesses looking to expand their global reach, especially in areas with high population density and language barriers. With chatbots, businesses can engage with customers in different languages and provide 24/7 customer service support, creating a more accessible and convenient customer experience. The paper then examines the impact of chatbots on cross-cultural communication and how they can help bridge communication gaps between businesses and consumers from different cultural backgrounds. Chatbots can potentially facilitate cross-cultural communication by offering real-time translations, voice recognition, and other innovative features that can help users communicate effectively across different languages and cultures. By providing more accessible and inclusive communication channels, chatbots can help businesses reach new markets and expand their customer base, making them more competitive in the global market. However, the paper also acknowledges that there are potential drawbacks associated with chatbots. For instance, chatbots may not be able to address complex customer inquiries that require human input. Additionally, chatbots may perpetuate biases if they are programmed with certain stereotypes or assumptions about different cultures. These drawbacks may have significant implications for businesses and consumers alike. 
To explore the implications of chatbots on the future of globalization in greater detail, the paper provides a thorough review of existing literature and case studies. The review covers topics such as the benefits of chatbots for businesses and consumers, the potential drawbacks of chatbots, and how businesses can mitigate any risks associated with chatbot use. The paper also discusses the ethical considerations associated with chatbot use, such as privacy concerns and the need to ensure that chatbots do not discriminate against certain groups of people. The ethical implications of chatbots are particularly important given the potential for chatbots to be used in sensitive areas such as healthcare and financial services. Overall, this research paper provides a comprehensive analysis of chatbots and their implications for the future of globalization. By exploring both the potential benefits and drawbacks of chatbot use, the paper aims to provide insights into how businesses and consumers can leverage this technology to achieve greater global reach and improve cross-cultural communication. Ultimately, the paper concludes that chatbots have the potential to be a powerful tool for businesses looking to expand their global footprint and improve their customer experience, but that care must be taken to mitigate any risks associated with their use.
Keywords: chatbots, conversational AI, globalization, businesses
Procedia PDF Downloads 97
47 Reliability of Clinical Coding in Accurately Estimating the Actual Prevalence of Adverse Drug Event Admissions
Authors: Nisa Mohan
Abstract:
Adverse drug event (ADE) related hospital admissions are common among older people. The first step in prevention is accurately estimating the prevalence of ADE admissions, and clinical coding is an efficient method for doing so. The objectives of the study are to estimate the rate of under-coding of ADE admissions in older people in New Zealand and to explore how clinical coders decide whether or not to code an admission as an ADE. No previous research in New Zealand has explored these areas. The study uses a mixed-methods approach. Two common and serious ADEs in older people, namely bleeding and hypoglycemia, were selected for the study. In study 1, eight hundred medical records of people aged 65 years and above who were admitted to hospital due to bleeding or hypoglycemia during 2015-2016 were selected for a quantitative retrospective medical records review, in order to estimate the proportion of ADE-related bleeding and hypoglycemia admissions that were not coded as ADEs. These files were reviewed, and each admission was recorded as to whether it was caused by an ADE. The hospital discharge data were then checked to determine whether all the ADE admissions identified in the records review were coded as ADEs, and the proportion of under-coding of ADE admissions was estimated. In study 2, thirteen clinical coders were selected for qualitative semi-structured interviews, analyzed using a general inductive approach. Participants were selected purposively based on their experience in clinical coding, and interview questions were designed to investigate the reasons for the under-coding of ADE admissions. The records review showed that 35% (CI 28%-44%) of the ADE-related bleeding admissions and 22% of the ADE-related hypoglycemia admissions were not coded as ADEs. Although the quality of clinical coding is high across New Zealand, a substantial proportion of ADE admissions were under-coded.
This suggests that clinical coding might underestimate the actual prevalence of ADE-related hospital admissions in New Zealand. The interviews with the clinical coders indicated that a lack of time to search for information confirming an ADE admission, inadequate communication with clinicians, and coders’ belief that an ADE is a minor matter might be the reasons for the under-coding of ADE admissions. This study urges coding policymakers, auditors, and trainers to engage with the unconscious cognitive biases and shortcuts of clinical coders. These results highlight that further work is needed on interventions to improve the clinical coding of ADE admissions, such as educating coders about the importance of ADEs, educating clinicians about the importance of clear and confirmed medical record entries, making pharmacist services available to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary for external causes of disease.
Keywords: adverse drug events, bleeding, clinical coders, clinical coding, hypoglycemia
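A confidence interval for an under-coding proportion, like the 35% (CI 28%-44%) reported above, can be obtained with the Wilson score method. The counts below are illustrative only, since the abstract does not report the exact subgroup sizes:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Illustrative: 140 of 400 reviewed bleeding admissions found under-coded
# (35%); the exact subgroup counts are not given in the abstract.
lo, hi = wilson_ci(140, 400)
```

With these counts the interval comes out at roughly 30%-40%; the wider 28%-44% interval reported above would correspond to a smaller reviewed subgroup. Unlike the naive Wald interval, the Wilson interval remains well-behaved for proportions near 0 or 1.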
Procedia PDF Downloads 130
46 An Adaptive Oversampling Technique for Imbalanced Datasets
Authors: Shaukat Ali Shahee, Usha Ananthakumar
Abstract:
A data set exhibits the class imbalance problem when one class has very few examples compared to the other; this is referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly because of their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where a class is composed of several sub-clusters containing different numbers of examples, also deteriorates the performance of the classifier. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods, and classifier ensembles. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the number of minority class examples or by decreasing the number of majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles both simultaneously for binary classification problems. Removing between-class and within-class imbalance simultaneously eliminates the classifier’s bias towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the sub-clusters or sub-concepts present in the dataset. The number of examples oversampled in each sub-cluster is determined based on the complexity of the sub-cluster.
The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used as the classifier, since it minimizes the total error, and removing between-class and within-class imbalance simultaneously helps the classifier give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative for various problem domains, such as credit scoring, customer churn prediction, and financial distress prediction, that typically involve imbalanced data sets.
Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling
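The idea of allocating more synthetic examples to smaller, harder sub-clusters can be sketched with a simple replication scheme. This is a toy stand-in rather than the authors' method, which relies on model-based clustering, complexity-based allocation, and the Lowner-John ellipsoid; here the sub-clusters are given and the allocation weight is simply inverse cluster size:

```python
import random

def cluster_oversample(majority, minority_clusters, seed=0):
    """Replicate minority examples until the classes balance, giving
    smaller sub-clusters proportionally more copies (inverse-size
    weights) so no single sub-cluster dominates the minority class."""
    rng = random.Random(seed)
    need = len(majority) - sum(len(c) for c in minority_clusters)
    out = [x for c in minority_clusters for x in c]
    weights = [1.0 / len(c) for c in minority_clusters]
    total = sum(weights)
    for c, w in zip(minority_clusters, weights):
        copies = round(need * w / total)   # rarer sub-concepts get more copies
        out.extend(rng.choice(c) for _ in range(copies))
    return out

majority = list(range(100))                    # 100 majority-class examples
clusters = [[("big", i) for i in range(20)],   # well-represented sub-cluster
            [("small", i) for i in range(5)]]  # rare sub-concept
balanced = cluster_oversample(majority, clusters)
```

With these numbers, the small sub-cluster of 5 receives 60 of the 75 new copies, ending at 65 examples versus 35 for the large one. A SMOTE-style variant would interpolate between neighbours within each sub-cluster instead of duplicating points.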
Procedia PDF Downloads 418
45 A Discourse Analysis of Syrian Refugee Representations in Canadian News Media
Authors: Pamela Aimee Rigor
Abstract:
This study examines the representation of Syrian refugees resettled in Vancouver and the Lower Mainland in local community and major newspapers. While there is strong support for immigration in Canada, public opinion towards refugees and asylum seekers is more varied. Concerns about the legitimacy of refugee claims are common among Canadians, and hateful or negative narratives are still present in Canadian media discourse, which affects how people view refugees. To counter these narratives, Syrian refugees must publicly declare how grateful they are to have been resettled in Canada. The dominant media discourse is that these refugees should be grateful for having been graciously accepted by Canada and Canadians, once again upholding the image of Canada as a generous and humanitarian nation. The study examined the representation of Syrian refugees and Syrian refugee resettlement in Canadian newspapers from September 2015, around the time Prime Minister Trudeau came into power, to October 2017. Using a combination of content and discourse analysis, it aimed to uncover how local community and major newspapers in Vancouver covered the Syrian refugee ‘crisis’, more particularly the arrival and resettlement of the refugees in the country. Using the qualitative data analysis software NVivo 12, the newspapers were analyzed and sorted into themes. Based on the initial findings, the discourse of Canada as a humanitarian country and Canadians as generous, along with the idea that Syrian refugees must publicly announce their gratitude, is still present in the local community newspapers. This appears to be done to counter hateful narratives from citizens who might view refugees as people abusing help provided by the community or services provided by the government.
However, compared to the major and national newspapers in Canada, many of these local community newspapers are very inclusive of Syrian refugee voices. Most of the news and community articles interview Syrian refugees and ask them about their personal stories of plight, survival, resettlement, and starting a ‘new life’ in Canada. The refugees are not seen as potential threats, nor are they dismissed: they are named and allowed to share their personal experiences in these news articles. These community newspapers, even though their representations are far from perfect, address some aspects of the refugee resettlement issue and respond to their community’s needs. Quite a number of news articles announce community meetings and orientations about the Syrian refugee crisis, ways to help in the resettlement process, and community fundraising activities to sponsor refugees or resettle newly arrived ones. This study aims to promote awareness of how these individuals are socially constructed so that we can, in turn, be aware of the biases and stereotypes present and their implications for refugee laws and public response to the issue.
Keywords: forced migration and conflict, media representations, race and multiculturalism, refugee studies
Procedia PDF Downloads 251
44 Audit and Assurance Program for AI-Based Technologies
Authors: Beatrice Arthur
Abstract:
The rapid development of artificial intelligence (AI) has transformed various industries, enabling faster and more accurate decision-making processes. However, with these advancements come increased risks, including data privacy issues, systemic biases, and challenges related to transparency and accountability. As AI technologies become more integrated into business processes, there is a growing need for comprehensive auditing and assurance frameworks to manage these risks and ensure ethical use. This paper provides a literature review on AI auditing and assurance programs, highlighting the importance of adapting traditional audit methodologies to the complexities of AI-driven systems. Objective: The objective of this review is to explore current AI audit practices and their role in mitigating risks, ensuring accountability, and fostering trust in AI systems. The study aims to provide a structured framework for developing audit programs tailored to AI technologies while also investigating how AI impacts governance, risk management, and regulatory compliance in various sectors. Methodology: This research synthesizes findings from academic publications and industry reports from 2014 to 2024, focusing on the intersection of AI technologies and IT assurance practices. The study employs a qualitative review of existing audit methodologies and frameworks, particularly the COBIT 2019 framework, to understand how audit processes can be aligned with AI governance and compliance standards. The review also considers real-time auditing as an emerging necessity for influencing AI system design during early development stages. Outcomes: Preliminary findings indicate that while AI auditing is still in its infancy, it is rapidly gaining traction as both a risk management strategy and a potential driver of business innovation. Auditors are increasingly being called upon to develop controls that address the ethical and operational risks posed by AI systems. 
The study highlights the need for continuous monitoring and adaptable audit techniques to handle the dynamic nature of AI technologies. Future Directions: Future research will explore the development of AI-specific audit tools and real-time auditing capabilities that can keep pace with evolving technologies. There is also a need for cross-industry collaboration to establish universal standards for AI auditing, particularly in high-risk sectors like healthcare and finance. Further work will involve engaging with industry practitioners and policymakers to refine the proposed governance and audit frameworks. Funding/Support Acknowledgements: This research is supported by the Information Systems Assurance Management Program at Concordia University of Edmonton.
Keywords: AI auditing, assurance, risk management, governance, COBIT 2019, transparency, accountability, machine learning, compliance
Procedia PDF Downloads 25
43 Cognition in Context: Investigating the Impact of Persuasive Outcomes across Face-to-Face, Social Media and Virtual Reality Environments
Authors: Claire Tranter, Coral Dando
Abstract:
Gathering information from others is a fundamental goal for those concerned with investigating crime and protecting national and international security. Persuading an individual to move from an opposing to a converging viewpoint, and understanding the cognitive style behind this change, can deepen our understanding of traditional face-to-face interactions as well as of synthetic environments (SEs) often used for communication across geographical locations. SEs are growing in usage, and with this increase comes an increase in crime undertaken online. Communication technologies allow people to mask their real identities, supporting anonymous communication, which raises significant challenges for investigators monitoring and managing these conversations inside SEs. To date, the psychological literature concerning how to maximise information gain in SEs for real-world interviewing purposes is sparse, and as such this aspect of social cognition is not well understood. Here, we introduce an overview of a novel programme of PhD research that seeks to enhance understanding of cross-cultural and cross-gender communication in SEs for maximising information gain. Using a dyadic jury paradigm, participants interacted with a confederate who attempted to persuade them to the opposing verdict across three distinct environments: face-to-face, instant messaging, and a novel virtual reality environment using avatars. Participants discussed a criminal scenario, acting as a two-person (male; female) jury. Persuasion was manipulated by the confederate claiming a verdict (guilty vs. not guilty) opposing that of the naïve participant from the outset. Pre- and post-discussion data and observational digital recordings (voice and video) of participants’ discussion performance were collected. Information regarding cognitive style was also collected to ascertain participants’ need for cognitive closure and their bias towards jumping to conclusions.
Findings revealed that individuals communicating via an avatar in a virtual reality environment reacted in a similar way to, and were thus equally persuadable as, individuals communicating face-to-face. Anonymous instant messaging, however, created resistance to persuasion in participants, with males showing a significant decline in persuasive outcomes compared to face-to-face communication. The findings reveal new insights regarding the interplay of persuasion, gender, and modality, with anonymous instant messaging enhancing resistance to persuasion attempts. This study illuminates how varying SEs can support new theoretical and applied understandings of how judgments are formed and modified in response to advocacy.
Keywords: applied cognition, persuasion, social media, virtual reality
Procedia PDF Downloads 144
42 Knowledge Creation Environment in the Iranian Universities: A Case Study
Authors: Mahdi Shaghaghi, Amir Ghaebi, Fariba Ahmadi
Abstract:
Purpose: The main purpose of the present research is to analyze the knowledge creation environment at Alzahra University, a typical Iranian university, using a combination of the i-System and Ba models. Such an analysis is necessary for understanding the determinants of knowledge creation at the university. Methodology: To carry out the present research, an applied study in terms of purpose, a descriptive survey method was used. A combination of the i-System and Ba models was used to analyze the knowledge creation environment at Alzahra University. The i-System consists of five constructs: intervention (input), intelligence (process), involvement (process), imagination (process), and integration (output). The Ba environment has three pillars, namely the infrastructure, the agent, and the information. The integration of these two models resulted in 11 constructs, as follows: intervention (input); infrastructure-intelligence, agent-intelligence, information-intelligence (process); infrastructure-involvement, agent-involvement, information-involvement (process); infrastructure-imagination, agent-imagination, information-imagination (process); and integration (output). These 11 constructs were incorporated into a 52-statement questionnaire, whose validity and reliability were examined and confirmed. The statistical population included the faculty members of Alzahra University (344 people), from whom 181 participants were selected through the stratified random sampling technique. Descriptive statistics, the binomial test, regression analysis, and structural equation modeling (SEM) were utilized to analyze the data. Findings: The research findings indicated that among the 11 research constructs, the levels of the intervention, information-intelligence, infrastructure-involvement, and agent-imagination constructs were average and not acceptable.
The levels of the infrastructure-intelligence and information-imagination constructs ranged from average to low. The levels of the agent-intelligence and information-involvement constructs were exactly average. The level of the infrastructure-imagination construct was average to high and thus considered acceptable. The levels of the agent-involvement and integration constructs were above average and in a highly acceptable condition. Furthermore, the regression analysis indicated that only two constructs, viz. information-imagination and agent-involvement, positively and significantly correlate with the integration construct. The structural equation modeling also revealed that the intervention, intelligence, and involvement constructs are related to the integration construct with the complete mediation of imagination. Discussion and conclusion: The present research suggests that knowledge creation at Alzahra University only partially complies with the combined i-System and Ba model. Contrary to the model, the intervention, intelligence, and involvement constructs are not directly related to the integration construct, which seems to have three implications: 1) information sources are not frequently used to assess and identify research biases; 2) problem finding is probably of less concern at the end of studies, at the time of assessment and validation; 3) the involvement of others plays a smaller role in the summarization, assessment, and validation of the research.
Keywords: i-System, Ba model, knowledge creation, knowledge management, knowledge creation environment, Iranian universities
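The complete-mediation pattern reported above (intervention, intelligence, and involvement relating to integration only through imagination) can be sketched with a minimal regression example on synthetic data; the construct names, coefficients, and sample below are illustrative assumptions, not the study's data.

```python
import random

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with naive Gaussian elimination. X is a list of rows."""
    k = len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(len(X))) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(k)]
    for col in range(k):                      # forward elimination, partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return coef

random.seed(0)
n = 500
# Hypothetical constructs: involvement -> imagination -> integration
involvement = [random.gauss(0, 1) for _ in range(n)]
imagination = [2.0 * x + random.gauss(0, 0.3) for x in involvement]   # a-path
integration = [3.0 * m + random.gauss(0, 0.3) for m in imagination]   # b-path

# Total effect: integration regressed on involvement alone (with intercept)
total = ols([[1.0, x] for x in involvement], integration)[1]
# Direct effect: integration on involvement, controlling for imagination
direct = ols([[1.0, x, m] for x, m in zip(involvement, imagination)], integration)[1]

print(f"total effect ~ {total:.2f}, direct effect ~ {direct:.2f}")
```

Under complete mediation, the sizeable total effect collapses toward zero once the mediator is controlled for, which is the signature the SEM result describes.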
Procedia PDF Downloads 101
41 Navigating the Digital Landscape: An Ethnographic Content Analysis of Black Youth's Encounters with Racially Traumatic Content on Social Media
Authors: Tiera Tanksley, Amanda M. McLeroy
Abstract:
The advent of technology and social media has ushered in a new era of communication, providing platforms for news dissemination and cause advocacy. However, this digital landscape has also exposed a distressing phenomenon termed "Black death," or trauma porn. This paper delves into the profound effects of repeated exposure to traumatic content on Black youth via social media, exploring the psychological impacts and the potential reinforcement of stereotypes. Employing Critical Race Technology Theory (CRTT), the study sheds light on algorithmic anti-blackness and its influence on Black youth's lives and educational experiences. Through ethnographic content analysis, the research investigates common manifestations of "Black death" encountered online by Black adolescents. Findings unveil distressing viral videos, traumatic images, racial slurs, and hate speech, perpetuating stereotypes. Amidst the distress, however, the study also identifies narratives of activism and social justice on social media platforms, empowering Black youth to engage in positive change. Coping mechanisms and community support emerge as significant factors in navigating the digital landscape. The study underscores the need for comprehensive interventions and policies informed by evidence-based research. By addressing algorithmic anti-blackness and promoting digital resilience, the paper advocates for a more empathetic and inclusive online environment. Understanding coping mechanisms and community support becomes imperative for fostering mental well-being among Black adolescents navigating social media. In education, the implications are substantial. Given the impact of "Black death" content, educators play a pivotal role in promoting media literacy and digital resilience. By creating inclusive and safe online spaces, educators can mitigate negative effects and encourage open discussions about traumatic content.
The application of CRTT in educational technology emphasizes dismantling systemic biases and promoting equity. In conclusion, this study calls for educators to be cognizant of the impact of "Black death" content on social media. By prioritizing media literacy, fostering digital resilience, and advocating for unbiased technologies, educators contribute to an inclusive and just educational environment for all students, irrespective of their race or background. Proactively addressing challenges related to "Black death" content helps ensure the well-being and mental health of Black adolescents, fostering an empathetic and inclusive digital space.
Keywords: algorithmic anti-Blackness, digital resilience, media literacy, traumatic content
Procedia PDF Downloads 56
40 Recognizing Human Actions by Multi-Layer Growing Grid Architecture
Authors: Z. Gharaee
Abstract:
Recognizing actions performed by others is important in our daily lives, since it is necessary for communicating with others in a proper way. We perceive an action by observing the kinematics of the motions involved in its performance, and we use our experience and concepts to recognize it correctly. Although building action concepts is a life-long process, repeated throughout life, we are very efficient in applying our learned concepts when analyzing motions and recognizing actions. Experiments in which subjects observe actions performed by an actor show that an action is recognized after only about two hundred milliseconds of observation. In this study, a hierarchical action recognition architecture based on growing grid layers is proposed. The first-layer growing grid receives pre-processed data of consecutive 3D postures of joint positions and applies heuristics during the growth phase to allocate areas of the map by inserting new neurons. As a result of training the first-layer growing grid, action pattern vectors are generated by connecting the elicited activations of the learned map. The ordered vector representation layer receives the action pattern vectors and creates time-invariant vectors of key elicited activations. The time-invariant vectors are sent to the second-layer growing grid for categorization; this grid creates the clusters representing the actions. Finally, a one-layer neural network trained with the delta rule labels the action categories in the last layer. System performance has been evaluated in an experiment with the publicly available MSR-Action3D dataset, which contains actions performed with different parts of the human body: Hand Clap, Two Hands Wave, Side Boxing, Bend, Forward Kick, Side Kick, Jogging, Tennis Serve, Golf Swing, Pick Up and Throw.
The growing grid architecture was trained on several random selections of held-out generalization test data, requiring on average 100 epochs for each training of the first-layer growing grid and around 75 epochs for each training of the second-layer growing grid. The average generalization test accuracy is 92.6%. A comparison between the growing grid architecture and a self-organizing map (SOM) architecture in terms of accuracy and learning speed shows that the growing grid architecture is superior in the action recognition task. The SOM architecture learns the same dataset of actions in around 150 epochs for each training of the first-layer SOM but takes 1200 epochs for each training of the second-layer SOM, and achieves an average recognition accuracy of 90% on generalization test data. In summary, the growing grid network preserves the fundamental features of SOMs, such as the topographic organization of neurons, lateral interactions, unsupervised learning, and the representation of a high-dimensional input space in lower-dimensional maps. The architecture also benefits from an automatic size-setting mechanism, resulting in higher flexibility and robustness. Moreover, by utilizing growing grids the system automatically obtains prior knowledge of the input space during the growth phase and applies this information to expand the map by inserting new neurons wherever there is high representational demand.
Keywords: action recognition, growing grid, hierarchical architecture, neural networks, system performance
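A minimal one-dimensional sketch of the growing-grid principle described above (SOM-style winner-and-neighbour updates, plus insertion of new neurons where representational demand, i.e. accumulated quantization error, is highest) might look as follows; the data, parameters, and chain topology are illustrative assumptions, not the paper's two-layer architecture.

```python
import random

def growing_grid_1d(data, epochs=30, insert_every=10, lr=0.2, seed=1):
    """Minimal 1-D growing-grid sketch: a chain of prototype vectors is
    trained SOM-style (the winner and its chain neighbours move toward each
    input), and a new unit is periodically inserted next to the unit that
    has accumulated the largest quantization error."""
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(2)]
    error = [0.0, 0.0]
    for epoch in range(1, epochs + 1):
        for x in data:
            # Best-matching unit (BMU) by squared Euclidean distance
            d = [sum((u[i] - x[i]) ** 2 for i in range(dim)) for u in units]
            bmu_idx = d.index(min(d))
            error[bmu_idx] += d[bmu_idx]
            # Move the BMU and its chain neighbours toward the input
            for j, step in ((bmu_idx, lr), (bmu_idx - 1, lr / 2), (bmu_idx + 1, lr / 2)):
                if 0 <= j < len(units):
                    units[j] = [u + step * (xi - u) for u, xi in zip(units[j], x)]
        if epoch % insert_every == 0:
            # Grow: insert a unit beside the highest-error unit
            worst = error.index(max(error))
            nbr = worst - 1 if worst > 0 else worst + 1
            new = [(a + b) / 2 for a, b in zip(units[worst], units[nbr])]
            units.insert(max(worst, nbr), new)
            error = [0.0] * len(units)
    return units

rng = random.Random(0)
# Two synthetic "action pattern" clusters standing in for posture features
cluster_a = [[rng.gauss(0, 0.05), rng.gauss(0, 0.05)] for _ in range(50)]
cluster_b = [[rng.gauss(1, 0.05), rng.gauss(1, 0.05)] for _ in range(50)]
units = growing_grid_1d(cluster_a + cluster_b)

def bmu(units, x):
    return min(range(len(units)),
               key=lambda j: sum((units[j][i] - x[i]) ** 2 for i in range(2)))

print(len(units), "units; BMUs:", bmu(units, [0, 0]), bmu(units, [1, 1]))
```

The map starts with two units and grows where quantization error concentrates, so distinct clusters end up with distinct best-matching units without the grid size being fixed in advance.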
Procedia PDF Downloads 157
39 A Qualitative Study to Analyze Clinical Coders' Decision Making Process of Adverse Drug Event Admissions
Authors: Nisa Mohan
Abstract:
Clinical coding is a feasible method for estimating the national prevalence of adverse drug event (ADE) admissions. However, under-coding of ADE admissions is a limitation of this method. Whilst under-coding hampers accurate estimation of the actual burden of ADEs, coded data remain far more feasible for estimating ADE admissions than the alternative methods. It is therefore necessary to know the reasons for the under-coding in order to improve the clinical coding of ADE admissions. The ability to identify these reasons rests on understanding the decision-making process of coding ADE admissions. Hence, the current study aimed to explore the decision-making process of clinical coders when coding cases of ADE admissions. Clinical coders at different levels of seniority, such as trainee, intermediate, and advanced coders, were purposefully selected for the interviews. Thirteen clinical coders were recruited from two Auckland region District Health Board hospitals for the interview study. Semi-structured, one-on-one, face-to-face interviews using open-ended questions were conducted with the selected clinical coders. Interviews were about 20 to 30 minutes long and were audio-recorded with the approval of the participants. The interview data were analysed using a general inductive approach. The interviews revealed that the coders have targets to meet and sometimes hesitate to adhere to the coding standards: coders deviate from the standard coding processes to make a decision. Coders avoid contacting doctors to clarify minor doubts, such as whether an event was an ADE or the name of a medication, because of the delay in getting a reply. They prefer to do some research themselves, or to seek help from their seniors and colleagues, because this avoids a long wait for a reply from the doctors.
Coders tend to regard ADEs as a minor matter. Lack of time for searching for information to confirm an ADE admission, inadequate communication with clinicians, and coders' belief that an ADE is a minor matter may all contribute to the under-coding of ADE admissions. These findings suggest that further work is needed on interventions to improve the clinical coding of ADE admissions. Providing education to coders about the importance of ADEs, educating clinicians about the importance of clear and confirmed medical record entries, making pharmacists' services available to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary about external causes of diseases may be useful for improving the clinical coding of ADE admissions. The findings of the research will help policymakers make informed decisions about these improvements. This study urges coding policymakers, auditors, and trainers to engage with clinical coders' unconscious cognitive biases and shortcuts. This country-specific research conducted in New Zealand may also benefit other countries by providing insight into the clinical coding of ADE admissions and will offer guidance about where to focus changes and improvement initiatives.
Keywords: adverse drug events, clinical coders, decision making, hospital admissions
Procedia PDF Downloads 120
38 Qualitative Narrative Framework as Tool for Reduction of Stigma and Prejudice
Authors: Anastasia Schnitzer, Oliver Rehren
Abstract:
Mental health has become an increasingly important topic in society in recent years, not least due to the challenges posed by the COVID-19 pandemic. Along with this, the public has become more and more aware that a lack of education and proper coping mechanisms may carry a notable risk of developing mental disorders. Yet, there are still many biases against those affected, which are further connected to issues of stigmatization and societal exclusion. One of the main strategies to combat these forms of prejudice and stigma is to induce intergroup contact. More specifically, Intergroup Contact Theory states that engaging in certain types of contact with members of marginalized groups may be an effective way to improve attitudes towards these groups. However, due to persistent prejudice and stigmatization, affected individuals often do not dare to speak openly about their mental disorders, so that intergroup contact often goes unnoticed. As a result, many people only experience conscious contact with individuals with a mental disorder through media. As an analogy to Intergroup Contact Theory, the Parasocial Contact Hypothesis proposes that being repeatedly exposed to positive media representations of outgroup members can reduce negative prejudices and attitudes towards this outgroup. While there is a growing body of research on the merit of this mechanism, measurements often only consist of 'positive' or 'negative' parasocial contact conditions (or examine the valence or quality of previous contact with the outgroup), while more specific conditions are often neglected. The current study aims to tackle this shortcoming. By scrutinizing the potential of contemporary series as a narrative framework of high quality, we strive to elucidate more detailed aspects of beneficial parasocial contact for the purpose of reducing prejudice and stigma towards individuals with mental disorders.
Thus, a two-factor between-subjects online panel study with three measurement points was conducted (N = 95). Participants were randomly assigned to one of two groups, watching episodes of a series with either a high-quality (Quality-TV) or a low-quality (Continental-TV) narrative framework, with a one-week interval between episodes. Suitable series were determined with the help of a pretest. Prejudice and stigma towards people with mental disorders were measured at the beginning of the study, before and after each episode, and in a final follow-up one week after the last two episodes. Additionally, parasocial interaction (PSI), quality of contact (QoC), and transportation were measured several times. Based on these data, multivariate multilevel analyses were performed in R using the lavaan package. Latent growth models showed moderate to high increases in QoC and PSI, as well as small to moderate decreases in stigma and prejudice over time. Multilevel path analysis with individual and group levels further revealed that a qualitative narrative framework leads to a higher-quality contact experience, which in turn leads to lower prejudice and stigma, with effects ranging from moderate to high.
Keywords: prejudice, quality of contact, parasocial contact, narrative framework
Procedia PDF Downloads 83
37 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion. How Neurofinance Can Help to Understand Behavioral Anomalies
Authors: Roberta Martino, Viviana Ventre
Abstract:
Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, calculated by multiplying the cardinal utility of the outcome, as if its receipt were instantaneous, by a discount function that reduces the utility value according to how far the actual receipt of the outcome lies from the moment the choice is made. Initially, the discount function was assumed to have an exponential form, whose rate of decrease over time is constant, in line with the rational-investor profile described by classical economics. Empirical evidence, however, called for the formulation of alternative, hyperbolic models that better represented the actual actions of investors. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the bounded rationality of human beings: when a choice is made in a very difficult and information-rich environment, the brain strikes a compromise between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection of alternatives, and the collection and processing of information, are dynamics conditioned by systematic distortions of the decision-making process: the behavioral biases involving the individual's emotional and cognitive systems. In this paper we present an original decomposition of the discount function to investigate the psychological principles of hyperbolic discounting.
It is possible to decompose the curve into two components: the first component accounts for the smaller decrease in value as time increases and is related to the individual's impatience; the second component relates to the change in direction of the tangent vector to the curve and indicates how strongly the individual perceives the indeterminacy of the future, reflecting his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it would allow the decision-making process to be described as the relationship between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and amygdala, neurofinance can help determine the extent to which anomalous attitudes respect the principles of rationality.
Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty
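The contrast between the exponential and hyperbolic discount functions discussed above can be sketched numerically; the parameter values and payoffs below are illustrative assumptions, not taken from the paper.

```python
import math

def exponential_discount(k, t):
    """Classical discount function: constant discount rate k."""
    return math.exp(-k * t)

def hyperbolic_discount(k, t):
    """Hyperbolic discount function: the effective discount rate
    declines with delay, capturing decreasing impatience."""
    return 1.0 / (1.0 + k * t)

k = 0.5
for t in (0, 1, 5, 20):
    print(t, round(exponential_discount(k, t), 4), round(hyperbolic_discount(k, t), 4))

# For any k, t > 0 the hyperbolic curve lies above the exponential one
# (since exp(kt) > 1 + kt): distant outcomes are discounted less steeply.

# Preference reversal, a classic "anomaly": 10 now vs. 15 in 2 periods
now, later = 10 * hyperbolic_discount(k, 0), 15 * hyperbolic_discount(k, 2)
# The same pair viewed with both options pushed 10 periods into the future
far_now, far_later = 10 * hyperbolic_discount(k, 10), 15 * hyperbolic_discount(k, 12)
print(now > later, far_now > far_later)  # the preference flips as both recede
```

The flip in the final comparison is exactly the kind of decision inconsistency that exponential discounting, with its constant rate, cannot produce.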
Procedia PDF Downloads 129
36 Towards Automatic Calibration of In-Line Machine Processes
Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales
Abstract:
In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used 'black-box' linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a 'white-box' rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, goes over a traction reel, and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) to calibrate, i.e., find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core with resin passes through a first die, two winding units wind an outer layer around the core, and the material makes a final pass through a second die. The objectives are (i) to train a model to predict the friction on die2 and (ii) to calibrate, i.e., find the input values which result in a given friction on die2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel), and MPART (rule induction with a continuous value as output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between expected and real friction on die2). Modeling the error behavior with explicative rules helps improve the overall process model.
Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained both for the trained models and for the calibration process. The learning step is the slowest part of the process (max. 5 minutes for this data), but this can be done offline just once. The calibration step is much faster, obtaining in under one minute a precision error of less than 1×10⁻³ for both outputs. To summarize, in the present work two processes have been modeled and calibrated. A fast processing time and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve the overall process understanding. This has relevance for the quick optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, funded by the European Union's Horizon 2020 Research & Innovation programme, Grant Agreement number 680820.
Keywords: data model, machine learning, industrial winding, calibration
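The Gaussian calibration loop described above (draw candidate inputs from each input's mean and standard deviation, evaluate the trained model, and keep the candidate whose output is closest to the target) might be sketched as follows; the surrogate model and input statistics are placeholders standing in for the trained winding-process model, not the paper's actual models.

```python
import random

def calibrate(model, target, means, stds, n_samples=20000, seed=42):
    """Random-search calibration: draw Gaussian candidates for each input
    (using its mean and standard deviation), evaluate the trained model,
    and keep the candidate whose output is closest to the target."""
    rng = random.Random(seed)
    best_x, best_err = None, float("inf")
    for _ in range(n_samples):
        x = [rng.gauss(m, s) for m, s in zip(means, stds)]
        err = abs(model(x) - target)
        if err < best_err:
            best_x, best_err = x, err
    return best_x, best_err

# Placeholder surrogate for a trained process model (e.g. predicted
# friction on die2 as a nonlinear function of two process inputs)
def surrogate(x):
    speed, tension = x
    return 0.8 * speed + 0.3 * tension ** 2

target_friction = 5.0
inputs, error = calibrate(surrogate, target_friction,
                          means=[4.0, 2.0], stds=[1.0, 0.5])
print(f"calibrated inputs {inputs}, residual error {error:.2e}")
```

Because the model evaluation is cheap once training is done, tens of thousands of candidate draws finish quickly, which is consistent with the sub-minute calibration times reported; guiding the sampling with heuristics, as the authors suggest, would reduce the number of draws needed.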
Procedia PDF Downloads 241
35 The Influence of Gender and Sexual Orientation on Police Decisions in Intimate Partner Violence Cases
Authors: Brenda Russell
Abstract:
Police officers spend a great deal of time responding to domestic violence calls. Recent research has found that men and women in heterosexual and same-sex relationships are equally likely to initiate intimate partner violence (IPV) and are likewise susceptible to victimization, yet police training tends to focus primarily on male perpetration and female victimization. Criminal justice studies have found that male perpetrators of IPV are blamed more than female perpetrators who commit the same offense. While previous research has examined officers' responses in IPV cases with male and female heterosexual offenders, research has yet to investigate police response in same-sex relationships. This study examined officers' decisions to arrest, perceptions of blame, perceived danger to others, disrespect, and beliefs about prosecution, guilt, and sentencing. Officers in the U.S. (N = 248) were recruited using word of mouth and access to police association websites, where a link to an online study was made available. Officers were provided with one of four experimentally manipulated scenarios depicting a male or female perpetrator (heterosexual or same-sex) in a clear domestic assault situation. Officer age, experience with IPV, and IPV training were examined as possible covariates. Training in IPV was not correlated with any dependent variable of interest. Age was correlated with perpetrator arrest and blame (.14 and .16, respectively), and years of experience was correlated with arrest, offering informal advice, and mediating the incident (.14 to -.17). A 2 (perpetrator gender) × 2 (victim gender) factorial analysis was conducted. Results revealed that officers were more likely to provide informal advice and mediate in gay male relationships, and were less likely to arrest perpetrators in same-sex relationships. When officer age and years of experience with domestic violence were statistically controlled, the effects for perpetrator arrest and providing informal advice were no longer significant.
Officers perceived heterosexual male perpetrators as more dangerous, blameworthy, and disrespectful, and believed they would receive significantly longer sentences than perpetrators in all other conditions. When officer age and experience were included as covariates in the analyses, perpetrator blame was no longer statistically significant. Age, experience, and training in IPV were not related to perceptions of victims. Police perceived victims as more truthful and believable when the perpetrator was male. Police also believed victims of female perpetrators were more responsible for their own victimization, and victims were more likely to be perceived as a danger to their family when the perpetrator was female. Female perpetrators in same-sex relationships and heterosexual male perpetrators were considered to experience more mental illness than heterosexual female or gay male perpetrators. These results replicate previous research suggesting that male perpetrators are viewed as more blameworthy and that victims of female perpetrators are held more responsible for their own victimization, yet expand upon previous research by identifying potential biases in police response to IPV in same-sex relationships. This study brings to the forefront the importance of evidence-based officer training in IPV, provides insight into the need for a gender-inclusive approach, and addresses the practical applications for police.
Keywords: domestic violence, heterosexual, intimate partner violence, officer response, police officer, same-sex
Procedia PDF Downloads 347