Search results for: systematic science
407 Structural Equation Modeling Exploration for the Multiple College Admission Criteria in Taiwan
Authors: Tzu-Ling Hsieh
Abstract:
When the Taiwan Ministry of Education implemented a new university multiple entrance policy in 2002, most colleges and universities still used test scores as their main admission criterion. With the forthcoming 12-year basic education curriculum, the Ministry of Education announced a new college admission policy to be implemented in 2021. The new policy highlights the importance of holistic education by placing more emphasis on the learning process in senior high school rather than solely on academic test outcomes. However, the development of college admission criteria has not followed a deliberate process, and universities and colleges lack guidance on how to design suitable multiple admission criteria. Although many studies exist in other countries that have implemented multiple college admission criteria for years, their findings may not represent Taiwanese students, and they are limited by the absence of comparisons between two different academic fields. Therefore, this study investigated multiple admission criteria and their relationship with college success. The study analyzed the Taiwan Higher Education Database, with 12,747 samples from 156 universities, and tested a conceptual framework using structural equation modeling (SEM). The framework was adapted from Pascarella's general causal model and focused on how different admission criteria predict students' college success. It examined the relationship between admission criteria and college success, as well as how motivation (one of the admission criteria) influences college success through the engagement behaviors of student effort and interactions with agents of socialization. After handling missing values and conducting reliability and validity analyses, the study found that three indicators significantly predict students' college success, defined as the average grade of the last semester.
These three indicators are the Chinese language score on the college entrance exam, high school class rank, and quality of student academic engagement. In addition, motivation significantly predicts the quality of student academic engagement and interactions with agents of socialization. However, multi-group SEM analysis showed no difference in predicting college success between liberal arts and science students. Finally, this study provides suggestions for universities and colleges to develop multiple admission criteria based on empirical research on Taiwanese higher education students.
Keywords: college admission, admission criteria, structural equation modeling, higher education, education policy
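The mediation idea above (motivation influencing college success through engagement) can be illustrated with a minimal path-analysis sketch. This is not the study's actual SEM with latent variables; the variable names, path coefficients, and data below are all invented for illustration, and each path is just a closed-form simple OLS slope.

```python
import random

def slope(x, y):
    """OLS slope of y on x via the centered closed form."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

random.seed(1)
n = 2000
# Hypothetical mediation chain: motivation -> engagement -> GPA.
motivation = [random.gauss(0, 1) for _ in range(n)]
engagement = [0.6 * m + random.gauss(0, 0.5) for m in motivation]
gpa        = [0.4 * e + random.gauss(0, 0.5) for e in engagement]

a = slope(motivation, engagement)   # path: motivation -> engagement
b = slope(engagement, gpa)          # path: engagement -> GPA
indirect = a * b                    # indirect effect of motivation on GPA

print(a, b, indirect)
```

With the generating coefficients above, the fitted paths come out near 0.6 and 0.4, so the indirect effect is near 0.24; a full SEM would estimate both paths jointly rather than one regression at a time.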
Procedia PDF Downloads 178

406 Potentiality of Litchi-Fodder Based Agroforestry System in Bangladesh
Authors: M. R. Zaman, M. S. Bari, M. Kajal
Abstract:
A field experiment was conducted at the Agroforestry and Environment Research Field, Hajee Mohammad Danesh Science and Technology University, Dinajpur, during 2013 to investigate the potentiality of three napier fodder varieties under a litchi orchard. The experiment was laid out in a two-factor RCBD with three replications. Factor A comprised two production systems: S1 = litchi + fodder and S2 = fodder (sole crop); factor B comprised three napier varieties: V1 = BARI Napier-1 (Bazra), V2 = BARI Napier-2 (Arusha), and V3 = BARI Napier-3 (Hybrid). The experimental results revealed significant variation among the varieties in leaf growth and yield. The maximum number of leaves plant⁻¹ was recorded in the Bazra variety (V1), whereas the minimum was recorded in the Hybrid variety (V3). The highest yields (13.75, 14.53, and 14.84 t ha⁻¹ at the 1st, 2nd, and 3rd harvests, respectively) were also recorded in the Bazra variety, whereas the lowest (5.89, 6.36, and 9.11 t ha⁻¹ at the 1st, 2nd, and 3rd harvests, respectively) were in the Hybrid variety. There were also significant differences between the two production systems. The maximum number of leaves plant⁻¹ was recorded under the litchi-based agroforestry system (S1), whereas the minimum was recorded in the open condition (S2). Similarly, the highest yields of napier (12.00, 12.35, and 13.31 t ha⁻¹ at the 1st, 2nd, and 3rd harvests, respectively) were recorded under the litchi-based agroforestry system, whereas the lowest (9.73, 10.47, and 11.66 t ha⁻¹ at the 1st, 2nd, and 3rd harvests, respectively) were recorded in the open condition, i.e., napier in sole cropping. Furthermore, the interaction of napier variety and production system also had a significant effect on growth and yield. The maximum number of leaves plant⁻¹ was recorded under the litchi-based agroforestry system with the Bazra variety, whereas the minimum was recorded in the open condition with the Hybrid variety.
The highest yields of napier (14.42, 16.14, and 16.15 t ha⁻¹ at the 1st, 2nd, and 3rd harvests, respectively) were found under the litchi-based agroforestry system with the Bazra variety, and the lowest (5.33, 5.79, and 8.48 t ha⁻¹ at the 1st, 2nd, and 3rd harvests, respectively) in the open condition, i.e., sole cropping with the Hybrid variety. In terms of quality, the highest nutritive values (DM, ash, CP, CF, EE, and NFE) were found in Bazra (V1) and the lowest in the Hybrid variety (V3). Therefore, the suitability of napier production under the litchi-based agroforestry system may be ranked as Bazra > Arusha > Hybrid. Finally, the economic analysis showed that the maximum BCR (5.20) was obtained in the litchi-based agroforestry system, compared with sole cropping (BCR = 4.38). From these findings, it may be concluded that cultivating the Bazra napier variety on the floor of a litchi orchard ensures higher revenue to farmers than sole cropping.
Keywords: potentiality, litchi, fodder, agroforestry
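The two-factor analysis above rests on partitioning total variation among production systems, varieties, their interaction, and error. A minimal sketch of that sum-of-squares partition for a balanced two-factor layout follows; the yield numbers are synthetic and illustrative, and replications are treated as simple repeats rather than blocks:

```python
# Synthetic yields (t/ha) for 2 systems x 3 varieties x 3 reps -- illustrative only.
data = {
    ("agroforestry", "Bazra"):  [14.4, 16.1, 16.2],
    ("agroforestry", "Arusha"): [12.0, 12.4, 13.3],
    ("agroforestry", "Hybrid"): [9.7, 10.5, 11.7],
    ("sole",         "Bazra"):  [13.8, 14.5, 14.8],
    ("sole",         "Arusha"): [10.9, 11.2, 11.9],
    ("sole",         "Hybrid"): [5.3, 5.8, 8.5],
}
systems = sorted({s for s, _ in data})
varieties = sorted({v for _, v in data})
reps = len(next(iter(data.values())))

def mean(xs):
    return sum(xs) / len(xs)

allvals = [y for ys in data.values() for y in ys]
gm = mean(allvals)  # grand mean

m_sys = {s: mean([y for (si, _), ys in data.items() if si == s for y in ys]) for s in systems}
m_var = {v: mean([y for (_, vi), ys in data.items() if vi == v for y in ys]) for v in varieties}
m_cell = {k: mean(ys) for k, ys in data.items()}

# Sum-of-squares partition for a balanced two-factor design.
ss_sys = len(varieties) * reps * sum((m_sys[s] - gm) ** 2 for s in systems)
ss_var = len(systems) * reps * sum((m_var[v] - gm) ** 2 for v in varieties)
ss_int = reps * sum((m_cell[(s, v)] - m_sys[s] - m_var[v] + gm) ** 2
                    for s in systems for v in varieties)
ss_err = sum((y - m_cell[k]) ** 2 for k, ys in data.items() for y in ys)
ss_tot = sum((y - gm) ** 2 for y in allvals)

print(ss_sys, ss_var, ss_int, ss_err, ss_tot)
```

F-ratios for each source would then be formed against the error mean square; the identity ss_tot = ss_sys + ss_var + ss_int + ss_err is what makes the partition meaningful.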
Procedia PDF Downloads 323

405 Leadership Lessons from Female Executives in the South African Oil Industry
Authors: Anthea Carol Nefdt
Abstract:
In this article, observations are drawn from a number of interviews conducted with female executives in the South African oil industry in 2017. Globally, the oil industry represents one of the most male-dominated organisational structures and cultures in the business world. Some of the remarkable women who hold upper management positions have not only emerged from the science and finance spheres (equally gendered organisations) but have also navigated their way through an aggressive, patriarchal atmosphere of rivalry and competition. We examine various myths associated with the industry, such as the cowboy myth, the frontier ideology, and the queen bee syndrome directed at female executives. One of the themes to emerge from the interviews was the almost unanimous rejection of the 'glass ceiling' metaphor favoured by some feminists. The women of the oil industry instead described their rise to leadership positions as a strategic labyrinth of challenges and obstacles, in terms of both gender and race. This article aims to share the insights of women leaders in a complex industry through both their reflections and a theoretical feminist lens. The study is located within the South African context and, given the country's historical legacy, an intersectional approach was optimal, allowing issues of race, gender, ethnicity, and language to emerge. A qualitative research methodology was employed, with a thematic interpretative analysis used to analyse and interpret the data. This methodology was chosen precisely because it acknowledges the experiences women have and places those experiences at the centre of the research. Multiple methods of recruiting research participants were utilised: the initial method was snowball sampling, and the second was purposive sampling.
In addition, semi-structured interviews gave the participants an opportunity to ask questions, add information, and discuss issues or aspects of the research area that interested them. One of the key objectives of the study was to investigate whether there is a difference in the leadership styles of men and women. Findings show that, despite the wealth of literature on the topic, some women do not perceive a significant difference between men's and women's leadership styles. Other respondents, however, felt that there were important differences in the experiences of male and female superiors, although they hesitated to generalise from these experiences. Further findings suggest that although the oil industry, as a gendered organisation, presents unique challenges to women, it also incorporates various progressive initiatives for their advancement.
Keywords: petroleum industry, gender, feminism, leadership
Procedia PDF Downloads 162

404 National Branding through Education: South Korean Image in Romania through the Language Textbooks for Foreigners
Authors: Raluca-Ioana Antonescu
Abstract:
This paper examines Korean public diplomacy and national branding strategies and how Korean language textbooks have been used to construct the Korean national image. The research stands at the intersection of Linguistics and Political Science, and its central problem is the role of language and culture in the national branding process. The goal is to contribute to the literature at the intersection of International Relations and Applied Linguistics, and the objective is to conceptualize national branding by emphasizing a dimension that is rarely discussed: education as an instrument of national branding and public diplomacy strategies. To examine the importance of language in national branding strategies, the paper answers one main question, namely how the Korean language is used in the construction of national branding, and two secondary questions: how the relations between language and national branding construction are explored in the literature, and what kind of image of South Korea the language textbooks for foreigners transmit. The paper starts from one main hypothesis: that language is an essential component of culture, used in the construction of national branding and influenced by traditional elements (such as Confucianism) as well as modern elements (such as Western influence). Two secondary hypotheses follow: first, that the International Relations literature has explored the connections between language and national branding only to a limited extent; and second, that the South Korean image is constructed through the promotion of a traditional society, but also a modern one.
In terms of methodology, the paper analyzes the textbooks used at Romanian universities that offer Korean language classes during the three-year B.A. program, following the dialogues, the descriptive texts, and the additional texts about Korean culture. The analysis focuses on rank and status differences, the individual in relation to the collectivity, respect for harmony, and the image of the foreigner. The results show that the South Korean image projected in the textbooks conveys Confucian values and does not emphasize the changes the society has undergone through modernity and globalization. The Westernized aspect of Korean society is conveyed mostly in an informative way, through references to Korean international companies and internal development (such as transport and other services), but the textbooks do not show the cultural changes the society underwent. Although the paper uses the textbooks taught in Romania, its findings could be applied to other European countries as well, since these are the textbooks issued by South Korean language schools and also used elsewhere in Europe.
Keywords: Confucianism, modernism, national branding, public diplomacy, traditionalism
Procedia PDF Downloads 242

403 Evaluation of the Influence of Graphene Oxide on Spheroid and Monolayer Culture under Flow Conditions
Authors: A. Zuchowska, A. Buta, M. Mazurkiewicz-Pawlicka, A. Malolepszy, L. Stobinski, Z. Brzozka
Abstract:
In recent years, graphene-based materials have found more and more applications in biological science. As thin, tough, transparent, and chemically resistant materials, they appear to be very good candidates for the production of implants and biosensors. Interest in graphene derivatives has also prompted research into their possible application in cancer therapy. Current analyses focus mainly on their potential use in photothermal therapy and as drug carriers; the direct anticancer properties of graphene-based materials are also being tested. Nowadays, cytotoxicity studies are conducted on in vitro cell cultures in standard culture vessels (macroscale). However, in this type of cell culture, the cells grow on a synthetic surface under static conditions; for this reason, cell culture in the macroscale does not reflect the in vivo environment. Microfluidic systems, called Lab-on-a-chip, have been proposed as a solution for improving the cytotoxicity analysis of new compounds. Here, we present the evaluation of the cytotoxic properties of graphene oxide (GO) on breast, liver, and colon cancer cell lines in a microfluidic system in two spatial models (2D and 3D). Before cell introduction, the microchamber surfaces were modified by coating with fibronectin (2D, monolayer) or poly(vinyl alcohol) (3D, spheroids). After spheroid formation (3D) and cell attachment (2D, monolayer), the selected concentrations of GO were introduced into the microsystems. Monolayer and spheroid viability/proliferation were then checked for three days using the alamarBlue® assay and a standard microplate reader. Moreover, on each day of the culture, morphological changes of the cells were determined using microscopic analysis. Additionally, on the last day of the culture, differential staining using calcein AM and propidium iodide was performed. We noted that GO influenced the viability of all tested cell lines in both monolayer and spheroid arrangements.
We showed that GO caused a greater decrease in viability/proliferation for spheroids than for monolayers (observed for all tested cell lines). The higher cytotoxicity of GO in spheroid culture may be caused by the different geometry of the microchambers for 2D and 3D cell cultures; probably, GO was removed from the flat microchambers used for the 2D culture. These results were also confirmed by differential staining. Comparing our results with studies conducted in the macroscale, we also showed that the cytotoxic properties of GO change depending on the cell culture conditions (static/flow).
Keywords: cytotoxicity, graphene oxide, monolayer, spheroid
Procedia PDF Downloads 125

402 Assessing the Influence of Station Density on Geostatistical Prediction of Groundwater Levels in a Semi-arid Watershed of Karnataka
Authors: Sakshi Dhumale, Madhushree C., Amba Shetty
Abstract:
The effect of station density on the geostatistical prediction of groundwater levels is of critical importance to ensure accurate and reliable predictions. Monitoring station density directly impacts the accuracy and reliability of geostatistical predictions by influencing the model's ability to capture localized variations and small-scale features in groundwater levels. This is particularly crucial in regions with complex hydrogeological conditions and significant spatial heterogeneity. Insufficient station density can result in larger prediction uncertainties, as the model may struggle to adequately represent the spatial variability and correlation patterns of the data. On the other hand, an optimal distribution of monitoring stations enables effective coverage of the study area and captures the spatial variability of groundwater levels more comprehensively. In this study, we investigate the effect of station density on the predictive performance of groundwater levels using the geostatistical technique of Ordinary Kriging. The research utilizes groundwater level data collected from 121 observation wells within the semi-arid Berambadi watershed, gathered over a six-year period (2010-2015) from the Indian Institute of Science (IISc), Bengaluru. The dataset is partitioned into seven subsets representing varying sampling densities, ranging from 15% (12 wells) to 100% (121 wells) of the total well network. The results obtained from different monitoring networks are compared against the existing groundwater monitoring network established by the Central Ground Water Board (CGWB). The findings of this study demonstrate that higher station densities significantly enhance the accuracy of geostatistical predictions for groundwater levels. The increased number of monitoring stations enables improved interpolation accuracy and captures finer-scale variations in groundwater levels. 
These results shed light on the relationship between station density and the geostatistical prediction of groundwater levels, emphasizing the importance of appropriate station densities to ensure accurate and reliable predictions. The insights gained from this study have practical implications for designing and optimizing monitoring networks, facilitating effective groundwater level assessments, and enabling sustainable management of groundwater resources.
Keywords: station density, geostatistical prediction, groundwater levels, monitoring networks, interpolation accuracy, spatial variability
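Ordinary Kriging, the interpolation technique used above, predicts the level at an unsampled location as a weighted average of observed wells, with the weights obtained from a covariance model plus an unbiasedness constraint (the Lagrange-multiplier row of the kriging system). A minimal 1-D sketch follows; the well coordinates, water levels, and exponential-covariance parameters are all illustrative, not the study's data:

```python
import numpy as np

def ordinary_kriging(xs, zs, x0, sill=1.0, corr_len=5.0):
    """Ordinary Kriging prediction at x0 with an exponential covariance model."""
    cov = lambda h: sill * np.exp(-np.abs(h) / corr_len)
    n = len(xs)
    # Kriging system: well-to-well covariances bordered by the unbiasedness
    # constraint (weights must sum to one).
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(xs[:, None] - xs[None, :])
    K[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = cov(x0 - xs)
    w = np.linalg.solve(K, rhs)[:n]   # last entry is the Lagrange multiplier
    return w @ zs

xs = np.array([0.0, 2.0, 5.0, 9.0])      # hypothetical well positions (km)
zs = np.array([12.0, 11.5, 10.2, 9.0])   # hypothetical water levels (m)
pred = ordinary_kriging(xs, zs, 3.0)
print(pred)
```

Thinning `xs`/`zs` to subsets, as the study does with its 15%-100% well networks, and comparing predictions at held-out wells is one way to reproduce the station-density experiment in miniature. Note that kriging with this covariance model is an exact interpolator: predicting at an observed well returns its value.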
Procedia PDF Downloads 60

401 The Decline of Islamic Influence in the Global Geopolitics
Authors: M. S. Riyazulla
Abstract:
Since the dawn of the 21st century, there has been a perceptible decline in Islamic supremacy in world affairs, along with a gradual waning of the amiable relations and relevance of Islamic countries in the international political arena. For a long time, Islamic countries have been marginalised by the superpowers in global conflicts. This was evident in the context of the recent invasions of, and interference in, Afghanistan, Syria, Iraq, and Libya. Leading international Islamic organizations such as the Arab League, the Organisation of Islamic Cooperation, the Gulf Cooperation Council, and the Muslim World League did not play any prominent role in resolving the crises that ensued from exogenous and endogenous causes. Hence, there is a need for Islamic countries to create a credible international Islamic organization that could dictate its terms and shape a new Islamic world order. The prominent Islamic countries are divided along ideological and religious fault lines; their concord is indispensable to enhance their image and improve relations with other countries and communities. The massive boon of oil and gas could be utilised synergistically to exhibit their omnipotence and eminence in constructive ways. The prevailing menace of Islamophobia could be abated through syncretic messages, discussions, and deliberations between sagacious Islamic scholars and other community leaders. Presently, as Muslims are at a crossroads, dynamic leadership could steer the Muslim community onto a constructive path and herald political stability around the world. The present political disorder, chaos, and economic challenges necessitate a paradigm shift in the approach to worldly affairs. This could also be accomplished through advancement in science and technology, particularly space exploration for peaceful purposes.
To regain its lost preeminence, the Islamic world should rise to the occasion in promoting peace and tranquility and should evolve rational and human-centric solutions to global disputes and concerns. As a splendid contribution to humanity and to amicable international relations, these countries should devote their resources and scientific intellect to space exploration and safely transport man from the Earth to the nearest and most accessible cosmic body, the Moon, within one hundred years, as mankind faces an existential threat on the planet.
Keywords: carboniferous period, Earth, extinction, fossil fuels, global leaders, Islamic glory, international order, life, marginalization, Moon, natural catastrophes
Procedia PDF Downloads 68

400 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment
Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane
Abstract:
Digital investigators often have a hard time spotting evidence in digital information, and it has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the processes, technology, and specific procedures used in digital investigations are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime: algorithms based on AI have proved highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that assists in developing a plausible theory that can be presented as evidence in court; researchers and other authorities have used such data as evidence to convict suspects. This paper aims to develop a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate with one another to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is implemented using the Java Agent Development Framework, together with Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets, and experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute.
As a result of loading the agents, 5 percent of the time was lost; the File Path Agent prescribed deleting 1,510 files, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
Keywords: artificial intelligence, computer science, criminal investigation, digital forensics
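The Hash Set Agent mentioned above typifies a core forensic task: hashing artifacts and matching the digests against a set of known-file hashes. A minimal stdlib sketch of that matching step follows; the file names, byte blobs, and known-bad digest are fabricated for illustration (a real workflow would hash files carved from a disk image against an NSRL-style hash set):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of a byte blob as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical artifacts extracted from an evidence image.
evidence = {
    "note.txt":    b"meet at dawn",
    "tool.bin":    b"\x7fELF fake payload",
    "report.docx": b"quarterly figures",
}

# Hypothetical known-bad hash set (normally loaded from a reference list).
known_bad = {sha256_hex(b"\x7fELF fake payload")}

# The "agent" flags any artifact whose digest appears in the hash set.
flagged = sorted(name for name, blob in evidence.items()
                 if sha256_hex(blob) in known_bad)
print(flagged)   # -> ['tool.bin']
```

In an agent framework like the one described, this matcher would run as one ISA among several, posting its flagged paths for the other agents (e.g., a timeline or file-path agent) to correlate.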
Procedia PDF Downloads 212

399 Automatic and High Precise Modeling for System Optimization
Authors: Stephanie Chen, Mitja Echim, Christof Büskens
Abstract:
To describe and propagate the behavior of a system, mathematical models are formulated, and parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Such models may therefore not be applicable to the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification may be available, so the system model must be adapted manually. This paper therefore describes an approach that generates models overcoming the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general, since it generates models for any system detached from the scientific background; additionally, it can automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables, not only of single variables, which enables a far more precise representation of causal correlations. The basis and the explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation; herewith, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g.
time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has been tested successfully in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with an error of less than one percent, and the automatic identification of correlations discovered previously unknown relationships. To summarize, the approach described above is able to efficiently compute highly precise, real-time-adaptive, data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm such as WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model and optimized according to the user's wishes. The proposed methods are illustrated with different examples.
Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization
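The distinguishing feature of the regression described above is that products of variables enter the candidate basis alongside the single variables, in the spirit of a series expansion. A minimal sketch that recovers a known interaction coefficient from synthetic sensor data follows; the model form and coefficients are invented for illustration, and the data are noiseless so that least squares recovers them exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
# Synthetic "sensor" data generated by a law that contains a product term.
y = 2.0 + 0.5 * x1 - 1.5 * x2 + 3.0 * x1 * x2

# Candidate basis: constant, single variables, and their product.
A = np.column_stack([np.ones(n), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # recovers ~[2.0, 0.5, -1.5, 3.0]
```

A plain linear fit in x1 and x2 alone would miss the 3.0 coupling entirely; including the product column is what exposes the causal correlation. In practice, coefficients whose estimates stay near zero would be pruned from the basis.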
Procedia PDF Downloads 409

398 Comics as an Intermediary for Media Literacy Education
Authors: Ryan C. Zlomek
Abstract:
The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time researchers had begun to implement comics into daily lesson plans and, in some instances, had started the development process for comics-supported curriculum. In the mid-1950s, this type of research was cut short due to the work of psychiatrist Frederic Wertham, whose research seemingly discovered a correlation between comic readership and juvenile delinquency. Since Wertham's allegations, the comics medium has had a hard time finding its way back to education. Now, over fifty years later, the definition of literacy is in mid-transition as the world has become more visually oriented and students require the ability to interpret images as often as words. Through this transition, comics has found a place in the field of literacy education research as the focus shifts from traditional print literacy to multimodal and media literacies. Comics are now believed to be an effective resource in bridging the gap between these different types of literacies. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to determine the exact medium that is being examined, and the different conventions that the medium utilizes are discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and different comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw in real-world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored in parallel with the core principles of media literacy education.
Each principle is explained, and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines comics use in his computer science and technology classroom. He lays out different theories he utilizes from Scott McCloud's text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics have positively impacted classrooms around the United States. It is stated that integrating comics into the classroom will not solve all issues related to literacy education but, rather, that comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.
Keywords: comics, graphic novels, mass communication, media literacy, metacognition
Procedia PDF Downloads 299

397 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer
Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi
Abstract:
Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL) of nominally 100 m depth, which can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the physical structure are of the same order of magnitude. Turbulence length scales are a measure of the average sizes of the energy-containing eddies; they are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance via Taylor's hypothesis that the convection velocity equals the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales show significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between turbulence length scales and aerodynamic roughness length with those calculated from the autocorrelations and cross-correlations of field-measured velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, in contrast to the relationships derived from the similarity theory correlations in the ESDU models.
However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales
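The single-point procedure described above reduces to integrating the autocorrelation of the velocity fluctuation over time lag (up to its first zero crossing, when one exists) to get an integral time scale, then converting it to a length scale with the mean velocity via Taylor's hypothesis, L = U·T. A minimal sketch with an analytically known exponential autocorrelation follows; the mean wind speed and decay time are illustrative values, not measurements from either site:

```python
import numpy as np

U = 8.0      # hypothetical mean wind speed at the sensor height (m/s)
T0 = 2.5     # hypothetical decay time of the autocorrelation (s)
dt = 0.01    # lag resolution (s)

# Exponential autocorrelation rho(tau) = exp(-tau/T0): its integral time
# scale is exactly T0, so the numerical result can be checked analytically.
tau = np.arange(0.0, 10.0 * T0, dt)
rho = np.exp(-tau / T0)

# Trapezoidal integration of rho over lag (no zero crossing here, so the
# whole record is used), then Taylor's frozen-turbulence conversion.
T = dt * (rho.sum() - 0.5 * (rho[0] + rho[-1]))
L = U * T
print(L)   # close to U * T0 = 20 m
```

With field data, rho would come from the measured fluctuation time series rather than a prescribed form, and truncating the integral at the first zero crossing avoids the low-frequency contamination the abstract notes for the longitudinal component.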
Procedia PDF Downloads 124
396 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures
Authors: Rui Teixeira, Alan O’Connor, Maria Nogal
Abstract:
The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows a more refined fit of the statistical distribution by truncating the data at a minimum value given by a predefined threshold u. For mathematically approximating the tail of the empirical statistical distribution, the Generalised Pareto distribution is widely used, although in the case of exceedances of significant wave data (H_s) the two-parameter Weibull and the Exponential distribution, the latter a specific case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto, despite the existence of practical cases where it is applied, is not universally recognized as the adequate solution for modeling exceedances over a certain threshold u; references that treat the Generalised Pareto distribution as a secondary solution in the case of significant wave data can be found in the literature. In this framework, the current study tackles the discussion of which statistical models to apply to characterize exceedances of wave data. Comparisons of the Generalised Pareto, the two-parameter Weibull and the Exponential distribution are presented for different values of the threshold u. 
Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed and the respective conclusions drawn, and some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, given its growing importance, should be addressed carefully for an efficient estimation of very-low-occurrence events.
Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data
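The distribution comparison described in this abstract can be sketched in a few lines of Python. This is a hedged illustration, not the authors' analysis: the data are a synthetic stand-in for buoy records, and the 95th-percentile threshold and AIC-based ranking are assumptions made for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic stand-in for significant wave height records (metres);
# a real analysis would use the buoy measurements instead.
h_s = rng.weibull(1.5, size=20_000) * 2.0

u = np.quantile(h_s, 0.95)          # threshold: 95th percentile
exc = h_s[h_s > u] - u              # exceedances over u

# Candidate models for the exceedances, location fixed at zero
candidates = {
    "GPD": stats.genpareto,
    "Weibull (2-par)": stats.weibull_min,
    "Exponential": stats.expon,
}
for name, dist in candidates.items():
    params = dist.fit(exc, floc=0)  # fix location so the fits are comparable
    ll = np.sum(dist.logpdf(exc, *params))
    k = len(params) - 1             # free parameters (location was fixed)
    aic = 2 * k - 2 * ll
    print(f"{name:16s} log-lik = {ll:8.1f}  AIC = {aic:8.1f}")
```

Re-running the loop over a grid of thresholds u reproduces the kind of sensitivity study the abstract reports, where the best-fitting model can change with u.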
Procedia PDF Downloads 273
395 A Case Study on Experiences of Clinical Preceptors in the Undergraduate Nursing Program
Authors: Jacqueline M. Dias, Amina A Khowaja
Abstract:
Clinical education is one of the most important components of a nursing curriculum, as it develops the students' cognitive, psychomotor and affective skills, and clinical teaching ensures the integration of knowledge into practice. As student numbers increase in the field of nursing, coupled with a faculty shortage, clinical preceptors are the best choice to ensure student learning in clinical settings. The clinical preceptor role has been introduced in the undergraduate nursing programme; in Pakistan, this role emerged due to a faculty shortage, and initially two clinical preceptors were hired. This study explored clinical preceptors' views and experiences of precepting Bachelor of Science in Nursing (BScN) students in an undergraduate program. A case study design was used. As case studies explore a single unit of study, such as a person or a very small number of subjects, the two clinical preceptors were fundamental to the study and served as a single case. Qualitative data were obtained through an iterative process using in-depth interviews and written accounts from reflective journals kept by the clinical preceptors. The findings revealed that the clinical preceptors were dedicated to their roles and responsibilities. Another key finding was that the clinical preceptors' prior knowledge and clinical experience were valuable assets in performing their role effectively. The clinical preceptors found their new role innovative and challenging, but stressful at the same time. Findings also revealed that in the clinical agencies there were unclear expectations and role ambiguity. Furthermore, clinical preceptors had difficulty integrating theory into practice in the clinical area and in giving feedback to the students. Although this study is localized to one university, generalizations can be drawn from the results. The key findings indicate that the role of a clinical preceptor is demanding and stressful. 
Clinical preceptors need preparation prior to precepting students in clinical placements, and institutional support is fundamental for their acceptance. This paper focuses on the views and experiences of clinical preceptors undertaking a newly established role and resonates with the literature. The following recommendations are drawn to strengthen the role of the clinical preceptors: a structured program for clinical preceptors is needed, along with mentorship; clinical preceptors should be provided with formal training in teaching and learning, with emphasis on clinical teaching and giving feedback to students; and, to improve the integration of theory into practice, clinical modules should be provided ahead of the clinical placement. In spite of all the challenges, ten more clinical preceptors have been hired, as the faculty shortage continues to persist.
Keywords: baccalaureate nursing education, clinical education, clinical preceptors, nursing curriculum
Procedia PDF Downloads 174
394 The Risk of Prioritizing Management over Education at Japanese Universities
Authors: Masanori Kimura
Abstract:
Due to the decline of the 18-year-old population, Japanese universities tend to convert the form of employment for newly hired teachers from tenured positions to fixed-term positions. The advantage of this is that universities can be more flexible in their employment plans in case they fail to fill their enrollment quotas of prospective students or need to supplement teachers who can engage in other academic fields or research areas where new demand is expected. The most serious disadvantage, however, is that if secure positions cannot be provided to faculty members, coherence of education and continuity of research supported by the university may not be achieved. Therefore, the question of this presentation is as follows: are universities aiming to give first priority to management, or are they trying to prioritize education and research over management? To answer this question, the author examined the number of job offerings for college foreign-language teachers posted on the JREC-IN (Japan Research Career Information Network, run by the Japan Science and Technology Agency) website from April 2012 to October 2015. The results show that there were 1,002 and 1,056 job offerings for tenured positions and fixed-term contracts respectively, suggesting that, overall, today's Japanese universities tend to give first priority to management. More detailed examination of the data, however, shows that this tendency varies slightly depending on the type of university. 
National universities, which are supported by the central government, and state universities, which are supported by local governments, posted more job offerings for tenured positions than for fixed-term contracts: national universities posted 285 and 257 job offerings for tenured positions and fixed-term contracts respectively, and state universities posted 106 and 86 respectively. Yet the difference in number between the two types of employment status at national and state universities is marginal. As for private universities, they posted 713 job offerings for fixed-term contracts and 616 for tenured positions. Moreover, 73% of the fixed-term contracts were offered for lower-rank positions, including associate professors, lecturers, and so forth. Generally speaking, those positions are offered to younger teachers; this result therefore indicates that private universities attempt to cut their budgets while expecting the same educational effect by hiring younger teachers. Although the results show some differences in personnel strategies among the three types of universities, the author argues that all three may lose important human resources that would play a pivotal role at their universities in the future unless they urgently review their employment strategies.
Keywords: higher education, management, employment status, foreign language education
Procedia PDF Downloads 134
393 Teaching Ethnic Relations in Social Work Education: A Study of Teachers' Strategies and Experiences in Sweden
Authors: Helene Jacobson Pettersson, Linda Lill
Abstract:
Demographic changes and globalization provide new opportunities for social work and social work education in Sweden, and there has been an ambition to include these aspects in Swedish social work education. However, the Swedish welfare-state standard has continued to serve as an implicit, invisible starting point in discussions about people's ways of life and social problems. The aim of this study is to explore the content given to ethnic relations in social work within social work education in Sweden. Our standpoint is that the subject can be understood from both individual and structural levels, that it changes over time, and that it varies across steering documents and between the perspectives of teachers and students. Our question is what content is given to ethnic relations in social work by the teachers, in their strategies and teaching material. The study brings together research at the interface between education science, social work, and research on international migration and ethnic relations. The narratives presented are from longer interviews with a total of 17 university teachers who teach in social work programs at four different universities in Sweden. The universities have, in different ways, curricula that involve the theme of ethnic relations in social work, and the interviewed teachers teach and grade social workers in specific courses related to ethnic relations at undergraduate and graduate levels. Together, these 17 teachers assess a large number of students each semester. The questions concerned how the teachers handle ethnic relations in social work education. The particular focus during the interviews was the teachers' understanding of the documented learning objectives and the content of the literature, and how these have implications for their teaching. 
What emerges is the teachers' own stories about their educational work: how they relate to the content of teaching, and the teaching strategies they use to promote the theme of ethnic relations in social work education. Our analysis of this pedagogy is that the teaching ends up at an individual level, with a particular focus on the professional encounter with individuals, and we can see a shortage of critical analysis of the construction of social problems. The conclusion is that individual circumstances take precedence over theoretical perspectives on social problems related to migration, transnational relations and globalization. This result has problematic implications from the perspective of sustainability in terms of ethnic diversity and integration in society, since these aspects have great relevance for social workers' professional practice in social support and empowerment-related activities, and in supporting the social status, human rights and equality of immigrants.
Keywords: ethnic relations in Swedish social work education, teaching content, teaching strategies, educating for change, human rights and equality
Procedia PDF Downloads 248
392 Dexamethasone Treatment Deregulates Proteoglycans Expression in Normal Brain Tissue
Authors: A. Y. Tsidulko, T. M. Pankova, E. V. Grigorieva
Abstract:
High-grade gliomas are the most frequent and most aggressive brain tumors, characterized by active invasion of tumor cells into the surrounding brain tissue, where the extracellular matrix (ECM) plays a crucial role. Disruption of the ECM can be involved in anticancer drug effectiveness and side-effects, and also in tumor relapse. The anti-inflammatory agent dexamethasone is a common drug used during high-grade glioma treatment to alleviate cerebral edema. Although dexamethasone is widely used in the clinic, its effects on the ECM of normal brain tissue remain poorly investigated. Proteoglycans (PGs) are a major component of the extracellular matrix in the central nervous system. In this work, we studied the effects of dexamethasone on the ECM proteoglycans (syndecan-1, glypican-1, perlecan, versican, brevican, NG2, decorin, biglycan, lumican) using RT-PCR in an experimental animal model. It was shown that proteoglycans in rat brain have age-specific expression patterns. In the early post-natal rat brain (8-day-old rat pups), overall PG expression was quite high, and the mainly expressed PGs were biglycan, decorin, and syndecan-1. The overall transcriptional activity of PGs in the adult rat brain is 1.5-fold lower than in the post-natal brain, and the expression pattern changes as well, with biglycan, decorin, syndecan-1, glypican-1 and brevican becoming almost equally expressed. PG expression patterns create a specific tissue microenvironment that differs between developing and adult brain. A dexamethasone regimen close to the one used in the clinic during high-grade glioma treatment significantly affects proteoglycan expression: overall PG transcriptional activity increases 1.5-2-fold after dexamethasone treatment. The most up-regulated PGs were biglycan, decorin, and lumican. The PG expression pattern in the adult brain changed after treatment, becoming quite close to the expression pattern in the developing brain. 
It is known that the microenvironment in developing tissues promotes cell proliferation, while in adult tissues proliferation is usually suppressed. The changes occurring in the adult brain after dexamethasone treatment may lead to re-activation of cell proliferation due to signals from the changed microenvironment. Taken together, the obtained data show that dexamethasone treatment significantly affects the normal brain ECM, creating an appropriate microenvironment for tumor cell proliferation, and thus can reduce the effectiveness of anticancer treatment and promote tumor relapse. This work has been supported by the Russian Science Foundation (RSF Grant 16-15-10243).
Keywords: dexamethasone, extracellular matrix, glioma, proteoglycan
Procedia PDF Downloads 199
391 Dogs Chest Homogeneous Phantom for Image Optimization
Authors: Maris Eugênia Dela Rosa, Ana Luiza Menegatti Pavan, Marcela De Oliveira, Diana Rodrigues De Pina, Luis Carlos Vulcano
Abstract:
In veterinary as well as in human medicine, radiological study is essential for a safe diagnosis in clinical practice; thus, the quality of the radiographic image is crucial. In recent years there has been an increasing substitution of screen-film image acquisition systems by computed radiography (CR) equipment without adequacy of the technical charts. Furthermore, carrying out a radiographic examination on a veterinary patient requires human assistance for restraint, which can compromise image quality by increasing the dose to the animal and to occupationally exposed personnel, as well as the cost to the institution. Image optimization procedures and the construction of radiographic techniques are performed with the use of homogeneous phantoms. In this study, we sought to develop a homogeneous phantom of the canine chest to be applied to the optimization of these images for the CR system. To build the simulator, a database was created with retrospective chest computed tomography (CT) images from the Veterinary Hospital of the Faculty of Veterinary Medicine and Animal Science - UNESP (FMVZ / Botucatu). Images were divided into four groups according to animal weight, employing the classification by size proposed by Hoskins & Goldston. The thicknesses of biological tissues were quantified in 80 animals, separated into groups of 20 animals according to their weights: (S) Small - equal to or less than 9.0 kg, (M) Medium - between 9.0 and 23.0 kg, (L) Large - between 23.1 and 40.0 kg and (G) Giant - over 40.1 kg. Mean weight for group (S) was 6.5±2.0 kg, (M) 15.0±5.0 kg, (L) 32.0±5.5 kg and (G) 50.0±12.0 kg. An algorithm was developed in Matlab in order to classify and quantify the biological tissues present in the CT images and convert them into simulator materials. To classify the tissues present, membership functions were created from the retrospective CT scans according to the type of tissue (adipose, muscle, trabecular or cortical bone, and lung tissue). 
After conversion of the biological tissue thicknesses into equivalent material thicknesses (acrylic simulating soft tissues, bone tissues simulated by aluminum, and air for the lung), four different homogeneous phantoms were obtained, with (S) 5 cm of acrylic, 0.14 cm of aluminum and 1.8 cm of air; (M) 8.7 cm of acrylic, 0.2 cm of aluminum and 2.4 cm of air; (L) 10.6 cm of acrylic, 0.27 cm of aluminum and 3.1 cm of air; and (G) 14.8 cm of acrylic, 0.33 cm of aluminum and 3.8 cm of air. The developed canine homogeneous phantom is a practical tool, which will be employed in future work to optimize veterinary X-ray procedures.
Keywords: radiation protection, phantom, veterinary radiology, computed radiography
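The classify-and-convert step this abstract describes can be sketched as follows. The original algorithm was written in Matlab with fuzzy membership functions derived from the CT database; this Python sketch substitutes crisp Hounsfield-unit thresholds, and both the HU ranges and the thickness-equivalence factors are hypothetical placeholders, not the study's values.

```python
import numpy as np

# Hypothetical Hounsfield-unit ranges per tissue class; the original work
# derived fuzzy membership functions from retrospective CT data instead.
HU_RANGES = {
    "lung":    (-1000, -400),
    "adipose": (-400, -20),
    "muscle":  (-20, 300),
    "bone":    (300, 3000),
}

# Illustrative thickness-equivalence factors (biological cm -> simulator cm);
# real factors come from attenuation-equivalence calculations.
EQUIVALENCE = {
    "lung":    ("air", 0.6),
    "adipose": ("acrylic", 0.9),
    "muscle":  ("acrylic", 1.0),
    "bone":    ("aluminum", 0.5),
}

def phantom_recipe(hu_profile, voxel_cm):
    """Convert one anteroposterior HU profile into simulator-material thicknesses."""
    totals = {"acrylic": 0.0, "aluminum": 0.0, "air": 0.0}
    for tissue, (lo, hi) in HU_RANGES.items():
        n_voxels = np.count_nonzero((hu_profile >= lo) & (hu_profile < hi))
        material, factor = EQUIVALENCE[tissue]
        totals[material] += n_voxels * voxel_cm * factor
    return totals

# Toy profile: 2 cm muscle, 0.5 cm bone, 4 cm lung, at 0.1 cm voxel spacing
profile = np.array([50] * 20 + [800] * 5 + [-700] * 40)
print(phantom_recipe(profile, voxel_cm=0.1))
```

Averaging such recipes over the profiles of each weight group would yield the per-group acrylic/aluminum/air thicknesses reported above.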
Procedia PDF Downloads 418
390 Issues of Accounting of Lease and Revenue according to International Financial Reporting Standards
Authors: Nadezhda Kvatashidze, Elena Kharabadze
Abstract:
It is broadly known that lease is a flexible means of funding enterprises. Lease reduces the risk related to access and possession of assets, as well as to obtaining funding; therefore, it is important to refine lease accounting. The lease accounting regulations under the previously applicable standard (International Accounting Standard 17) made concealment of liabilities possible. As a result, information users got inaccurate and incomplete information and had to resort to an additional assessment of the off-balance-sheet lease liabilities. In order to address the problem, the International Accounting Standards Board decided to change the approach to lease accounting. With the deficiencies of the applicable standard taken into account, the new standard (IFRS 16 'Leases') aims at supplying appropriate and fair lease-related information to the users. Save for certain exemptions, the lessee is obliged to recognize all lease agreements in its financial report. This approach was determined by the fact that, under a lease agreement, rights and obligations arise in the form of assets and liabilities: immediately upon conclusion of the lease agreement, the lessee obtains an asset for its use and assumes the obligation to make the lease payments, thereby meeting the recognition criteria defined by the Conceptual Framework for Financial Reporting, and the payments are to be entered into the financial report. The new lease accounting standard secures the supply of quality and comparable information to financial information users. The International Accounting Standards Board and the US Financial Accounting Standards Board also jointly developed IFRS 15 'Revenue from Contracts with Customers'. 
The standard establishes detailed practical criteria for revenue recognition, such as identification of the performance obligations in the contract, determination of the transaction price and its components, especially variable consideration and other important components, as well as the passage of control over the asset to the customer. IFRS 15 'Revenue from Contracts with Customers' is very similar to the relevant US standards and includes requirements more specific and consistent than those of the standards previously in place. The new standard is going to change the recognition terms and techniques in industries such as construction, telecommunications (mobile and cable networks), licensing (media, science, franchising), real property, software, etc.
Keywords: assessment of the lease assets and liabilities, contractual liability, division of contract, identification of contracts, contract price, lease identification, lease liabilities, off-balance sheet, transaction value
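As a minimal numeric illustration of the lessee model described above (the lease liability recognized at commencement is the present value of the remaining lease payments), here is a sketch with hypothetical figures; the payment amount, term, and discount rate are invented for the example.

```python
# Minimal sketch of initial lease-liability measurement under IFRS 16:
# the liability equals the present value of the remaining lease payments,
# discounted at the rate implicit in the lease (all figures hypothetical).
def lease_liability(annual_payment, years, discount_rate):
    return sum(annual_payment / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Example: 5 annual payments of 10,000 in arrears, discounted at 6%
liability = lease_liability(annual_payment=10_000, years=5, discount_rate=0.06)
print(f"Initial lease liability: {liability:,.2f}")
```

The right-of-use asset is then initially measured from this same liability (plus items such as initial direct costs), which is how the lease moves onto the balance sheet instead of staying off it as under IAS 17.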
Procedia PDF Downloads 320
389 Friction and Wear Characteristics of Diamond Nanoparticles Mixed with Copper Oxide in Poly Alpha Olefin
Authors: Ankush Raina, Ankush Anand
Abstract:
Plyometric training is a form of specialised strength training that uses fast muscular contractions to improve power and speed, and is used in sports conditioning by coaches and athletes. Despite its useful role in sports conditioning programmes, information about the effects of plyometric training on athletes' cardiovascular health, especially the electrocardiogram (ECG), has not been established in the literature. The purpose of the study was to determine the effects of lower- and upper-body plyometric training on the ECG of athletes. The study was guided by three null hypotheses. A quasi-experimental research design was adopted. Seventy-two university male athletes constituted the population of the study. Thirty male athletes aged 18 to 24 years volunteered to participate, but only twenty-three completed the study. The volunteer athletes were apparently healthy, physically active, free of any lower- and upper-extremity bone injuries for the past year, and had no medical or orthopedic injuries that might affect their participation in the study. Ten subjects were purposively assigned to one of three groups: lower-body plyometric training (LBPT), upper-body plyometric training (UBPT), and control (C). Training consisted of six plyometric exercises: lower-body (ankle hops, squat jumps, tuck jumps) and upper-body (push-ups, medicine-ball chest throws and side throws) at moderate intensity. The data were collated and analysed using the Statistical Package for the Social Sciences (SPSS version 22.0). The research questions were answered using means and standard deviations, while the paired-samples t-test was used to test the hypotheses. The results revealed that athletes who were trained using LBPT had better reductions in ECG parameters than those in the control group. 
The results also revealed that athletes trained using both LBPT and UBPT showed no significant differences from the control group in the ECG parameters following ten weeks of plyometric training, except in the Q wave, R wave and S wave (QRS) complex. Based on the findings, it was recommended, among others, that coaches should include both LBPT and UBPT as part of athletes' overall training programmes, from primary to tertiary institutions, to optimise performance as well as reduce the risk of cardiovascular disease and promote a healthy lifestyle.
Keywords: boundary lubrication, copper oxide, friction, nano diamond
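The paired-samples t-test used in this design (pre- versus post-training values within one group) can be illustrated as follows; the ECG values below are simulated, not the study's data, and the parameter chosen is a hypothetical example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical pre/post values of one ECG-related parameter (e.g. resting
# heart rate, beats per minute) for 10 athletes; not real study data.
pre = rng.normal(72, 6, size=10)
post = pre - rng.normal(3, 2, size=10)   # simulated training effect

# Paired-samples t-test on the within-subject differences
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Reject the null hypothesis of no change at alpha = 0.05 when p_value < 0.05
```

The same test, run per ECG parameter and per group, corresponds to the hypothesis testing reported in the abstract.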
Procedia PDF Downloads 123
388 Profile of the Renal Failure Patients under Haemodialysis at B. P. Koirala Institute of Health Sciences Nepal
Authors: Ram Sharan Mehta, Sanjeev Sharma
Abstract:
Introduction: Haemodialysis (HD) is a mechanical process of removing waste products from the blood and replacing essential substances in patients with renal failure. The first artificial kidney was developed in the Netherlands in 1943, the first successful treatment of chronic renal failure (CRF) was reported in 1960, and HD became an established life-saving treatment for CRF in 1972. In 1973, Medicare took over financial responsibility for many clients, after which the method became popular. B. P. Koirala Institute of Health Sciences (BPKIHS) is the only centre outside Kathmandu where HD service is available. At BPKIHS, peritoneal dialysis (PD) started in January 1998 and HD in August 2002; by September 2003, about 278 patients had received HD. The number of HD patients at BPKIHS is increasing day by day with institutional growth. No such study had been conducted in the past, so valid and reliable baseline data were lacking; hence, the investigators were interested in conducting the study 'Profile of the Renal Failure patients under Haemodialysis at B. P. Koirala Institute of Health Sciences Nepal'. Objectives: The objectives of the study were to find out the socio-demographic characteristics of the patients, to explore the patients' knowledge regarding the disease process and haemodialysis, and to identify the problems encountered by the patients. Methods: This is a hospital-based exploratory study. The population of the study was clients under HD, and the sampling method was purposive. Fifty-four patients who had undergone HD during the complete one-year period from 17 July 2012 to 16 July 2013 were included in the study. A structured interview schedule was used to collect data after establishing its validity and reliability. Results: In total, 54 subjects underwent HD, with an age range of 5-75 years; the majority were male (74%) and Hindu (93%). Thirty-one percent were illiterate, 28% had agriculture as their occupation, 80% were from very poor communities, and about 30% of subjects were unaware of the disease they were suffering from. 
The majority of subjects reported no complications during dialysis (61%), whereas 20% reported nausea and vomiting, 9% hypotension, 4% headache and 2% chest pain during dialysis. Conclusions: CRF leading to HD is a long battle for patients, requiring major and continuous adjustments, both physiological and psychological. The study suggests that non-compliance with the HD regimen was common. The socio-demographic and knowledge profile will help in the management and early prevention of the disease, will assist in evaluating aspects that influence care, and will help patients properly select a mode of treatment themselves.
Keywords: profile, haemodialysis, Nepal, patients, treatment
Procedia PDF Downloads 375
387 The Quantum Theory of Music and Human Languages
Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer
Abstract:
The main hypotheses proposed around the definition of the syllable and of music, and of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories of language. This is an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When you apply this theory to any folk-song text in a tonal language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. With experimentation confirming the theorization, I designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). 
The translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you instruct the machine to produce a melody of blues, jazz, world music or variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
Keywords: language, music, sciences, quantum entanglement
Procedia PDF Downloads 77
386 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely adopted. It serves two main tasks: displaying results, by coloring items according to their class or feature value, and forensics, by giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure-preservation property and its answer to the crowding problem, in which all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, in which a cluster's area is proportional to its size in number, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped to exactly the same position, making them indistinguishable, and such a model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with the newly obtained embedding. 
The successive embeddings can be used to study the impact of one variable on the dataset distribution, or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing one to observe the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
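A loosely related trick is available off the shelf in scikit-learn: passing a previous embedding as the `init` array keeps cluster positions roughly stable across successive embeddings of drifting snapshots. This sketch is not the paper's optimization (which adds an explicit support-matching cost term); it only approximates the idea, and the blob data and noise scale are assumptions.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.manifold import TSNE

# Snapshot at time t, and a drifted snapshot of the same items at time t+1
X_t, y = make_blobs(n_samples=300, centers=3, random_state=0)
X_t1 = X_t + np.random.default_rng(0).normal(scale=0.3, size=X_t.shape)

# Reference embedding
emb_t = TSNE(n_components=2, init="pca", perplexity=30,
             random_state=0).fit_transform(X_t)

# Re-embed the new snapshot, initialised from the previous coordinates so
# that clusters keep (approximately) their previous on-screen positions
emb_t1 = TSNE(n_components=2, init=emb_t, perplexity=30,
              random_state=0).fit_transform(X_t1)

# Items should move only modestly between the two embeddings
drift = np.linalg.norm(emb_t1 - emb_t, axis=1).mean()
print(f"mean per-item drift: {drift:.2f}")
```

Without the coherent initialisation, the second embedding could be arbitrarily rotated or permuted relative to the first, which is exactly the comparison problem the paper addresses.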
Procedia PDF Downloads 144
385 Foundations for Global Interactions: The Theoretical Underpinnings of Understanding Others
Authors: Randall E. Osborne
Abstract:
In a course on International Psychology, 8 theoretical perspectives (Critical Psychology, Liberation Psychology, Post-Modernism, Social Constructivism, Social Identity Theory, Social Reduction Theory, Symbolic Interactionism, and Vygotsky's Sociocultural Theory) are used as a framework for getting students to understand the concept of, and need for, globalization. One of critical psychology's main criticisms of conventional psychology is that it fails to consider, or deliberately ignores, the way power differences between social classes and groups can affect the mental and physical well-being of individuals or groups of people. Liberation psychology, also known as liberation social psychology or psicología social de la liberación, is an approach to psychological science that aims to understand the psychology of oppressed and impoverished communities by addressing the oppressive sociopolitical structure in which they exist. Postmodernism is largely a reaction to the assumed certainty of scientific, or objective, efforts to explain reality. It stems from a recognition that reality is not simply mirrored in human understanding of it but rather is constructed as the mind tries to understand its own particular and personal reality. Lev Vygotsky argued that all cognitive functions originate in, and must therefore be explained as products of, social interactions, and that learning is not simply the assimilation and accommodation of new knowledge by learners. Social Identity Theory discusses the implications of social identity for human interactions with, and assumptions about, other people. It suggests that people (1) categorize: people find it helpful (humans might be said to have a need) to place people and objects into categories; (2) identify: people align themselves with groups and gain identity and self-esteem from them; and (3) compare: people compare themselves to others.
Social reductionism argues that all behavior and experience can be explained simply by the effect of groups on the individual. Symbolic interaction theory focuses attention on the way that people interact through symbols: words, gestures, rules, and roles. Meaning evolves from humans' interactions with their environment and with other people. Vygotsky's sociocultural theory of human learning describes learning as a social process and locates the origin of human intelligence in society or culture. The major theme of Vygotsky's theoretical framework is that social interaction plays a fundamental role in the development of cognition. This presentation will discuss how these theoretical perspectives are incorporated into a course on International Psychology, a course on the Politics of Hate, and a course on the Psychology of Prejudice, Discrimination and Hate to promote student thinking in a more 'global' manner.
Keywords: globalization, international psychology, society and culture, teaching interculturally
Procedia PDF Downloads 252
384 Predicting Personality and Psychological Distress Using Natural Language Processing
Authors: Jihee Jang, Seowon Yoon, Gaeun Son, Minjung Kang, Joon Yeon Choeh, Kee-Hong Choi
Abstract:
Background: Self-report multiple-choice questionnaires have been widely utilized to quantitatively measure one's personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable limitations by nature. With the rise of machine learning (ML) and natural language processing (NLP), researchers in the field of psychology are widely adopting NLP to assess psychological constructs and predict human behaviors. However, there is a lack of connection between the work performed in computer science and that in psychology, due to small data sets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of phase II, which includes the interview questions for the five-factor model (FFM) of personality developed in phase I. This study aims to develop semi-structured interview and open-ended questions for FFM-based personality assessment, designed with experts in the field of clinical and personality psychology (phase 1), and to collect personality-related text data using the interview questions together with self-report measures of personality and psychological distress (phase 2). The purpose of the study includes examining the relationships among the natural language data obtained from the interview questions, the FFM personality constructs, and psychological distress, in order to demonstrate the validity of natural-language-based personality prediction. Methods: The phase I (pilot) study was conducted on fifty-nine native Korean adults to acquire personality-related text data from the semi-structured interview and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with feedback from an external expert committee consisting of personality and clinical psychologists.
Based on the established interview questions, a total of 425 Korean adults were recruited using a convenience sampling method via an online survey. The text data collected from interviews were analyzed using natural language processing. The results of the online survey, including demographic data, depression, anxiety, and personality inventories, were analyzed together in the model to predict individuals' FFM of personality and the level of psychological distress (phase 2).
Keywords: personality prediction, psychological distress prediction, natural language processing, machine learning, the five-factor model of personality
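As a minimal illustration of the kind of text-to-trait mapping described above, here is a toy lexicon-count sketch; the word lists and function names are invented for the example and are not the authors' NLP pipeline, which would use learned models over far richer features.

```python
# Toy sketch: score interview text against hypothetical trait lexicons.
# Purely illustrative; real FFM prediction uses trained NLP models.

TRAIT_LEXICONS = {
    "extraversion": {"party", "friends", "talk", "outgoing"},
    "neuroticism": {"worry", "anxious", "stress", "nervous"},
}

def trait_scores(text: str) -> dict:
    """Return, per trait, the fraction of tokens found in that trait's lexicon."""
    tokens = text.lower().split()
    if not tokens:
        return {trait: 0.0 for trait in TRAIT_LEXICONS}
    return {
        trait: sum(tok in lexicon for tok in tokens) / len(tokens)
        for trait, lexicon in TRAIT_LEXICONS.items()
    }

scores = trait_scores("I worry a lot and feel anxious before I talk to friends")
print(scores)
```

Validating such features against self-report inventories, as in phase 2, is what lets a study claim that language-derived scores actually track the constructs they are meant to measure.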
Procedia PDF Downloads 79
383 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks
Authors: Sulemana Ibrahim
Abstract:
Food security remains a paramount global challenge, with vulnerable regions grappling with hunger and malnutrition. This study embarks on a comprehensive exploration of data-driven strategies aimed at ameliorating food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimize supply chains, and enhance food distribution networks. The study unfolds as a multi-dimensional analysis, commencing with the development of robust machine learning models that harness remote sensing data, historical crop yield records, and meteorological data to forecast crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, furnish critical insights into anticipated harvests, empowering proactive measures to confront food insecurity. Subsequently, the research scrutinizes supply chain optimization to address food security challenges, capitalizing on linear programming and network optimization techniques. These strategies aim to mitigate loss and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks with a particular focus on network efficiency, accessibility, and equitable food resource allocation. Network analysis tools, complemented by data-driven simulation methodologies, unveil opportunities for augmenting the efficacy of these critical lifelines. The study also considers the ethical implications and privacy concerns associated with the extensive use of data in the realm of food security; the proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aspiration of this research is to forge a nexus between data science and food security policy, bestowing actionable insights to mitigate the ordeal of food insecurity.
The holistic approach, converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks, aspires to revitalize food security in the most vulnerable regions, elevating the quality of life for millions worldwide.
Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks
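The supply-chain step can be illustrated with a classic transportation-problem heuristic. This is a northwest-corner sketch under invented supply and demand figures; the study itself uses full linear programming and network optimization, of which this only produces a feasible starting allocation.

```python
# Minimal transportation-problem sketch: allocate produce from farms to
# distribution hubs with the northwest-corner rule. This yields a feasible
# plan only; an LP solver would then minimize total transport cost.

def northwest_corner(supply, demand):
    """Feasible allocation matrix for a balanced transportation problem."""
    supply, demand = list(supply), list(demand)
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])  # ship as much as both sides allow
        alloc[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            i += 1  # this source is exhausted; move to the next farm
        else:
            j += 1  # this destination is satisfied; move to the next hub
    return alloc

# Hypothetical figures: two farms supplying three hubs (tonnes).
plan = northwest_corner([30, 50], [20, 40, 20])
print(plan)  # → [[20, 10, 0], [0, 30, 20]]
```

Every row sums to its farm's supply and every column to its hub's demand, which is the feasibility property the optimization stage then refines for cost.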
Procedia PDF Downloads 62
382 The Relationship between Self-Injurious Behavior and Manner of Death
Authors: Sait Ozsoy, Hacer Yasar Teke, Mustafa Dalgic, Cetin Ketenci, Ertugrul Gok, Kenan Karbeyaz, Azem Irez, Mesut Akyol
Abstract:
Self-mutilating behavior, or self-injurious behavior (SIB), is defined as "intentional harm to one's body without intent to commit suicide". SIB cases are commonly seen in psychiatry and forensic medicine practice. Despite the variety of SIB methods, cutting of the skin is the most common (70-97%) injury in this group of patients. Subjects with SIB have one or more comorbidities, which include depression, anxiety, depersonalization, feelings of worthlessness, borderline personality disorder, antisocial behaviors, and histrionic personality. These individuals feel a high level of hostility towards themselves and their surroundings. Research has also revealed a strong relationship between antisocial personality disorder, criminal behavior, and SIB. This study retrospectively evaluated 6,599 autopsy cases performed at the forensic medicine institutes of six major cities (Ankara, Izmir, Diyarbakir, Erzurum, Trabzon, Eskisehir) of Turkey in 2013. The study group consisted of all cases with SIB findings (psychopathic cuts, cigarette burns, scars, etc.); the control group was created from subjects without signs of SIB. The relationship between cause of death in the study group (SIB subjects) and the control group was investigated. The Mann-Whitney U test was used for the age variable and the chi-square test for categorical variables. Multinomial logistic regression analysis was used to analyze group differences with respect to manner of death (natural, accident, homicide, suicide), and risk factors associated with each group were analyzed by binomial logistic regression. All statistical analyses were performed with SPSS Statistics 15.0; statistical significance was set at p < 0.05. There was no significant difference between accidental and natural death among the groups (p = 0.737).
For each unit increase in the number of cuts in the study group, the odds of accidental death decreased by a factor of 0.967 (95% CI: 0.941-0.993, p = 0.015). In contrast, there was a significant difference between suicidal and natural death (p < 0.001), and also between homicidal and natural death (p = 0.025). SIB is often seen with borderline and antisocial personality disorder but may be associated with many psychiatric illnesses. Studies have shown a relationship between antisocial personality disorder and criminal behavior, and between SIB and suicidal behavior. In our study, the rates of suicide, homicide, and intoxication were higher in the SIB group than in the control group. It could be concluded that SIB can be used as a predictor of the possibility of one's harm to him/herself and to other people.
Keywords: autopsy, cause of death, forensic science, self-injury behaviour
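The per-cut effect reported by the abstract can be read off directly from the logistic-regression odds ratio; a small arithmetic sketch (the odds ratio and confidence interval are taken from the abstract, the function is illustrative):

```python
import math

# Interpreting a binomial logistic regression result like the abstract's:
# each additional cut multiplies the odds of accidental death by
# OR = 0.967 (95% CI 0.941-0.993), i.e. the coefficient is beta = ln(OR).

OR = 0.967
beta = math.log(OR)  # regression coefficient per additional cut (negative)

def odds_multiplier(cuts: int) -> float:
    """Multiplicative change in odds after `cuts` additional cuts."""
    return OR ** cuts

print(round(beta, 4))                  # → -0.0336 (odds fall with each cut)
print(round(odds_multiplier(10), 3))   # → 0.715 (ten extra cuts)
```

Because effects multiply on the odds scale, a modest per-unit ratio of 0.967 compounds to a roughly 29% reduction in odds over ten cuts.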
Procedia PDF Downloads 510
381 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. This operation is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied.
We study the setup in which the identity of the matrix of interest must be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their tasks before the master server can recover the product W. We consider the problem of secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
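The coding idea behind straggler tolerance can be sketched with a toy (3, 2) MDS-style code over row blocks of X; this is an illustrative example with invented matrices, not the paper's PSGPD/SGPD construction.

```python
# Straggler-tolerant multiplication sketch: split X into row blocks X1, X2
# and send X1, X2, and X1 + X2 to three workers. Any two finished workers
# suffice to recover W = X Y; e.g. if worker 2 straggles, the master
# decodes X2 Y = (X1 + X2) Y - X1 Y.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def add(A, B):
    return [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(A, B)]

def sub(A, B):
    return [[x - y for x, y in zip(r1, r2)] for r1, r2 in zip(A, B)]

X = [[1, 2], [3, 4], [5, 6], [7, 8]]
Y = [[1, 0], [0, 1]]
X1, X2 = X[:2], X[2:]

# Workers compute their coded products in parallel; worker 2 straggles.
w1, w3 = matmul(X1, Y), matmul(add(X1, X2), Y)
recovered_X2Y = sub(w3, w1)  # decode the missing block from the survivors

assert w1 + recovered_X2Y == matmul(X, Y)
print(w1 + recovered_X2Y)
```

The recovery threshold here is 2 of 3 workers, mirroring how the minimum distance of the code determines when the master can stop waiting.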
Procedia PDF Downloads 122
380 The Importance of Efficient and Sustainable Water Resources Management and the Role of Artificial Intelligence in Preventing Forced Migration
Authors: Fateme Aysin Anka, Farzad Kiani
Abstract:
Forced migration is a situation in which people are forced to leave their homes against their will due to political conflicts, wars, natural disasters, climate change, economic crises, or other emergencies. This type of migration takes place under conditions where people cannot lead a sustainable life for reasons of security, shelter, or meeting their basic needs, and it may occur in connection with different factors that affect people's living conditions. In addition to these general and widespread causes, water security and water resources are a cause that is emerging now and will be encountered more and more in the future. Forced migration may occur due to insufficient or depleted water resources in the areas where people live. In this case, people's living conditions become unsustainable, and they may have to go elsewhere, as they cannot obtain basic necessities such as drinking water and water for agriculture and industry. To cope with these situations, it is important to minimize the causes, and international organizations and societies must provide assistance (for example, humanitarian aid, shelter, medical support, and education) and protection to address or mitigate the problem. From the international perspective, plans such as the Green New Deal (GND) and the European Green Deal (EGD) draw attention to the need for people to live equally in a cleaner and greener world. Recently, with the advancement of technology, scientific methods have become more efficient. In this regard, this article presents a multidisciplinary case model that frames the water problem with an engineering approach within its social dimension. It is worth emphasizing that this problem is largely linked to climate change and the lack of a sustainable water management perspective.
As a matter of fact, the United Nations Development Programme (UNDP) draws attention to this problem in its universally accepted Sustainable Development Goals. Therefore, an artificial intelligence-based approach has been applied to this problem, focusing on water management. The most general but also most important aspect of managing water resources is their correct consumption. In this context, the artificial intelligence-based system undertakes tasks such as water demand forecasting and distribution management, emergency and crisis management, water pollution detection and prevention, and maintenance and repair control and forecasting.
Keywords: water resource management, forced migration, multidisciplinary studies, artificial intelligence
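The first of the listed tasks, water demand forecasting, can be sketched in its simplest form as a moving-average baseline over invented daily figures; the system described in the abstract would use far richer models and data.

```python
# Minimal demand-forecasting baseline: predict tomorrow's water demand as
# the mean of the last `window` days. Illustrative only; an AI-based
# system would add weather, seasonality, and learned components.

def forecast_next(demand_history, window=3):
    """Moving-average forecast of the next value in the series."""
    recent = demand_history[-window:]
    return sum(recent) / len(recent)

daily_demand = [102.0, 98.0, 105.0, 110.0, 107.0]  # hypothetical m^3/day
print(forecast_next(daily_demand))  # mean of the last 3 days
```

Even a baseline like this is useful operationally: distribution management needs an expected demand figure before it can allocate supply or flag an anomaly.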
Procedia PDF Downloads 87
379 Inner and Outer School Contextual Factors Associated with Poor Performance of Grade 12 Students: A Case Study of an Underperforming High School in Mpumalanga, South Africa
Authors: Victoria L. Nkosi, Parvaneh Farhangpour
Abstract:
Often a Grade 12 certificate is perceived as a passport to tertiary education and the minimum requirement to enter the world of work. In spite of its importance, many students in South Africa do not reach this milestone, and it is important to find out why so many still fail despite the transformation of the education system in the post-apartheid era. Given the complexity of education and its context, this study adopted a case study design to examine one historically underperforming high school in Bushbuckridge, Mpumalanga Province, South Africa in 2013. The aim was to gain an understanding of the inner and outer school contextual factors associated with the high failure rate among Grade 12 students. Government documents and reports were consulted to identify factors in the district and the village surrounding the school, and a student survey was conducted to identify school, home, and student factors. A randomly sampled half of the Grade 12 student population (53 students) participated in the survey, and the quantitative data were analyzed using descriptive statistical methods. The findings showed that a host of factors is at play. The school is located in a village within a municipality which has been one of the three poorest municipalities in South Africa and has the lowest Grade 12 pass rate in Mpumalanga province. Moreover, over half of the students come from single-parent families, 43% of parents are unemployed, and the majority have a low level of education. In addition, most families (83%) do not have basic study materials such as a dictionary, books, tables, and chairs. A significant number of students (70%) are over-aged (19 years old or more), and close to half of them (49%) are grade repeaters. The school itself lacks essential resources, namely computers, science laboratories, a library, and enough furniture and textbooks. Moreover, teaching and learning are negatively affected by the teachers' occasional absenteeism, inadequate lesson preparation, and poor communication skills.
Overall, the continuous low performance of students in this school mirrors the vicious circle of multiple negative conditions present within and outside the school. The complexity of the factors associated with the underperformance of Grade 12 students in this school calls for a multi-dimensional intervention from government and stakeholders. One important intervention would be the placement of over-aged students and grade repeaters in suitable educational institutions, for the benefit of the other students.
Keywords: inner context, outer context, over-aged students, vicious cycle
Procedia PDF Downloads 201
378 Critical Mathematics Education and School Education in India: A Study of the National Curriculum Framework 2022 for Foundational Stage
Authors: Eish Sharma
Abstract:
The literature on mathematics education suggests that democratic attitudes can be strengthened through the teaching and learning of mathematics. Furthermore, connections between critical education and mathematics education are observed in the light of critical pedagogy, locating Critical Mathematics Education (CME) as the theoretical framework. Critical pedagogy applied to mathematics education is identified as one of the key themes subsumed under Critical Mathematics Education. Through the application of critical pedagogy in mathematics, unequal power relations and social injustice can be identified, analyzed, and challenged. The research question is: have educational policies in India viewed the role of critical pedagogy applied to mathematics education (i.e., critical mathematics education) as a means of ensuring social justice as an educational aim? The National Curriculum Framework (NCF) 2005 upholds education for democracy and the role of mathematics education in facilitating it. More than this, NCF 2005 rests on a critical pedagogy framework and recommends that critical pedagogy be practiced in all dimensions of school education. NCF 2005 envisions critical pedagogy for the social sciences as well as the sciences, stating that the science curriculum, including mathematics, must be used as an "instrument for achieving social change to reduce the divide based on economic class, gender, caste, religion, and the region". Furthermore, the implementation of NCF 2005 led to a reform of the syllabus and textbooks in school mathematics at the national level, and critical pedagogy was applied to mathematics textbooks at the primary level. This intervention brought ethnomathematics and critical mathematics education into the school curriculum in India for the first time at the national level.
In October 2022, the Ministry of Education launched the National Curriculum Framework for the Foundational Stage (NCF-FS), developed in light of the National Education Policy 2020, for children in the three-to-eight-years age group. I want to find out whether critical pedagogy-based education and critical pedagogy-based mathematics education are carried forward in NCF 2022. To find this out, an argument analysis of specific sections of the National Curriculum Framework 2022 document is executed. Des Gasper suggests two tables. The first table contains four columns, namely text component, comments on meanings, possible reformulation of the same text, and identified conclusions and assumptions (both stated and unstated); it serves to understand the components and meanings of the text and is based on Scriven's model for understanding the components and meanings of words in a text. The second table contains four columns, i.e., claim identified, given data, warrant, and stated qualifier/rebuttal; it describes the structure of the argument, how and how well the components fit together, and is called the 'George Table diagram based on the Toulmin-Bunn Model'.
Keywords: critical mathematics education, critical pedagogy, social justice, ethnomathematics
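The second table's row structure can be sketched as a simple record; the field names follow the columns named in the abstract, while the example claim and its components are invented for illustration.

```python
# Sketch of one row of the Toulmin-style argument table: claim, data,
# warrant, and qualifier/rebuttal as named fields. Example content is
# hypothetical, not taken from the NCF-FS analysis itself.

from dataclasses import dataclass, asdict

@dataclass
class ToulminRow:
    claim: str
    data: str
    warrant: str
    qualifier_rebuttal: str

example = ToulminRow(
    claim="Critical pedagogy should inform foundational-stage mathematics.",
    data="NCF 2005 recommended critical pedagogy across school education.",
    warrant="Curriculum frameworks build on their predecessors' commitments.",
    qualifier_rebuttal="Unless NCF-FS 2022 explicitly departs from NCF 2005.",
)
print(sorted(asdict(example).keys()))
```

Structuring each analyzed passage this way makes the "how well the components fit together" question concrete: a row with an empty warrant or rebuttal field flags an incomplete argument.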
Procedia PDF Downloads 82