Search results for: earth science
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3700

460 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. It has been observed that AI-based algorithms are highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. The goal of digital forensics and digital investigation is to provide objective data and assessments that assist in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used the available data as evidence in court to convict a person. This research paper aims to develop a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework, MADIK, is implemented using the Java Agent Development Framework, with Eclipse as the development environment, a Postgres repository, and a rule engine for agent reasoning. The framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and virtual machines (VMs). There was a significant reduction in the time taken for the Hash Set Agent to execute. Loading the agents cost about 5 percent of the execution time; the File Path Agent flagged 1,510 files for deletion, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
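
For readers unfamiliar with such agents, the following is a minimal, hypothetical Python sketch of the kind of task a hash-set agent performs (this is not the MADIK code; the directory name and the known-bad hash set are illustrative):

```python
import hashlib
from pathlib import Path

# Hypothetical known-bad hash set; in a real investigation this would come
# from a curated forensic hash database.
KNOWN_BAD_SHA256 = {
    # SHA-256 of an empty file, used here only as a stand-in entry
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large evidence files do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def hash_set_agent(evidence_dir: str) -> list:
    """Return the files under evidence_dir whose hashes match the known-bad set."""
    return [
        p
        for p in Path(evidence_dir).rglob("*")
        if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256
    ]

if __name__ == "__main__":
    for hit in hash_set_agent("./lone_wolf_extracted"):  # illustrative path
        print("match:", hit)
```

In a multiagent setting, several such agents (hash set, file path, timeline) would run concurrently and report their findings to a coordinating agent.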

Keywords: artificial intelligence, computer science, criminal investigation, digital forensics

Procedia PDF Downloads 212
459 Automatic and High Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and propagate the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems, this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. These models might therefore not be applicable to the numerical optimization of real systems, since such techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, so the system must be adapted manually. This work therefore describes an approach that generates models overcoming the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general since it generates models for any system, detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables, not only of single variables. This enables a far more precise representation of causal correlations. The basis and explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. In this way, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has successfully been tested in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with an error of less than one percent. Moreover, the automatic identification of correlations was able to discover previously unknown relationships. To summarize, the above-mentioned approach is able to efficiently compute highly precise and real-time-adaptive data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model to be optimized according to the user's wishes. The proposed methods will be illustrated with different examples.
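
As a hedged sketch of the core idea, regressing the output on products of the input variables as in a truncated series expansion, the following Python example uses scikit-learn's PolynomialFeatures; the data and the underlying relationship are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Illustrative sensor data: two inputs, one output containing a cross term
X = rng.uniform(-1, 1, size=(500, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.5 * X[:, 0] * X[:, 1] \
    + 0.01 * rng.standard_normal(500)

# Expand the inputs with product terms (x1, x2, x1^2, x1*x2, x2^2),
# analogous to the terms of a series expansion
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)

model = LinearRegression().fit(X_poly, y)

# The fitted coefficients recover the correlation of the product x1*x2
for name, coef in zip(poly.get_feature_names_out(["x1", "x2"]), model.coef_):
    print(f"{name}: {coef:+.3f}")
```

The coefficient reported for the x1 x2 term identifies the product correlation that a single-variable regression would miss.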

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 409
458 Comics as an Intermediary for Media Literacy Education

Authors: Ryan C. Zlomek

Abstract:

The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time, researchers had begun to implement comics into daily lesson plans and, in some instances, had started the development process for comics-supported curricula. In the mid-1950s, this type of research was cut short by the work of psychiatrist Fredric Wertham, whose research seemingly discovered a correlation between comic readership and juvenile delinquency. Since Wertham's allegations, the comics medium has had a hard time finding its way back into education. Now, over fifty years later, the definition of literacy is in mid-transition as the world has become more visually oriented and students require the ability to interpret images as often as words. Through this transition, comics have found a place in the field of literacy education research as the focus shifts from traditional print to multimodal and media literacies. Comics are now believed to be an effective resource in bridging the gap between these different types of literacies. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to determine the exact medium being examined, and the different conventions that the medium utilizes are discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and different comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw on real-world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored in parallel with the core principles of media literacy education. Each principle is explained, and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines the use of comics in his computer science and technology classroom. He lays out different theories he draws from Scott McCloud's text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics have positively impacted classrooms around the United States. It is stated that integrating comics into the classroom will not solve all issues related to literacy education but, rather, that comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.

Keywords: comics, graphic novels, mass communication, media literacy, metacognition

Procedia PDF Downloads 298
457 The Economic Burden of Mental Disorders: A Systematic Review

Authors: Maria Klitgaard Christensen, Carmen Lim, Sukanta Saha, Danielle Cannon, Finley Prentis, Oleguer Plana-Ripoll, Natalie Momen, Kim Moesgaard Iburg, John J. McGrath

Abstract:

Introduction: About a third of the world's population will develop a mental disorder over their lifetime. Having a mental disorder is a huge burden in health loss and cost for the individual, but also for society, because of treatment costs, production loss, and caregivers' costs. The objective of this study is to synthesize the international published literature on the economic burden of mental disorders. Methods: Systematic literature searches were conducted in the databases PubMed, Embase, Web of Science, EconLit, NHS York Database, and PsycINFO using key terms for cost and mental disorders. Searches were restricted to the period from 1980 until May 2019. The inclusion criteria were: (1) cost-of-illness studies or cost-analyses, (2) diagnosis of at least one mental disorder, (3) samples based on the general population, and (4) outcome in monetary units. In total, 13,640 publications were screened by title/abstract, and 439 articles were full-text screened by at least two independent reviewers. 112 articles were included from the systematic searches and 31 articles from snowball searching, giving a total of 143 included articles. Results: Information about diagnosis, diagnostic criteria, sample size, age, sex, data sources, study perspective, study period, costing approach, cost categories, discount rate, production loss method, and cost unit was extracted. The vast majority of the included studies were from Western countries, and only a few were from Africa and South America. The disorder group most often investigated was mood disorders, followed by schizophrenia and neurotic disorders; the group least examined was intellectual disabilities, followed by eating disorders. The preliminary results show substantial variety in the perspectives, methodologies, cost components, and outcomes used in the included studies. An online tool is under development, enabling the reader to explore the published information on costs by type of mental disorder, subgroups, country, methodology, and study quality. Discussion: This is the first systematic review synthesizing the economic cost of mental disorders worldwide. The paper will provide an important and comprehensive overview of the economic burden of mental disorders, and the output from this review will inform policymaking.

Keywords: cost-of-illness, health economics, mental disorders, systematic review

Procedia PDF Downloads 131
456 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer

Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi

Abstract:

Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL), of nominally 100 m depth, and can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the structure are of the same order of magnitude. Turbulence length scales are a measure of the average sizes of the energy-containing eddies and are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance via Taylor's hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally-stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales show significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between the turbulence length scales and aerodynamic roughness length with those calculated from the autocorrelations and cross-correlations of field-measured velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, in contrast to the relationships derived from the similarity theory correlations in the ESDU models. However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
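
A minimal Python sketch of the single-point estimate described above, integrating the autocorrelation of the velocity fluctuations and applying Taylor's hypothesis, is given below; the synthetic time series and the sampling rate are illustrative, not the SLTEST or CSIRO data:

```python
import numpy as np

def integral_length_scale(u: np.ndarray, fs: float) -> float:
    """Longitudinal integral length scale from a velocity time series.

    u  : streamwise velocity samples [m/s]
    fs : sampling frequency [Hz]
    """
    u_mean = u.mean()
    fluct = u - u_mean

    # Autocorrelation of the fluctuations, normalized to 1 at zero lag
    acf = np.correlate(fluct, fluct, mode="full")[len(u) - 1:]
    acf /= acf[0]

    # Integrate up to the first zero crossing to get the integral time scale
    zero = np.argmax(acf <= 0) if np.any(acf <= 0) else len(acf)
    integral_time = acf[:zero].sum() / fs  # rectangle-rule integration

    # Taylor's hypothesis: convection velocity equals the local mean velocity
    return u_mean * integral_time

# Illustrative synthetic series; real analyses would use anemometer records
rng = np.random.default_rng(1)
u = 8.0 + np.convolve(rng.standard_normal(10_000), np.ones(50) / 50, mode="same")
print(f"L_ux ~ {integral_length_scale(u, fs=20.0):.1f} m")
```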

Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales

Procedia PDF Downloads 124
455 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data

Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton

Abstract:

The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized: the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data, and research papers were classified according to the type of research and type of contribution to the research area. Upon analyzing the 54 papers identified in this area, it was noted that 23 originated in Germany. This is an unsurprising finding, as Industry 4.0 is originally a German strategy, supported by strong policy instruments to drive its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap in the area: there exists an unresolved gap between the data science experts and the manufacturing process experts in industry. Data analytics expertise is not useful unless the manufacturing process information is utilized, and a legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. A gap lies between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test the concept of this gap existing, the researcher initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 contributed a framework, another 12 were based on a case study, and 11 focused on theory; only three contributed a methodology. This provides further evidence of the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.

Keywords: analytics, digitization, industry 4.0, manufacturing

Procedia PDF Downloads 111
454 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology, which allows for a more refined fit of the statistical distribution by truncating the data at a minimum value given by a predefined threshold u. To mathematically approximate the tail of the empirical statistical distribution, the Generalised Pareto distribution is widely used, although, in the case of exceedances of significant wave data (H_s), the two-parameter Weibull and the Exponential distribution, the latter a specific case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto, despite the existence of practical cases where it is applied, is not universally recognized as the adequate solution for modeling exceedances over a certain threshold u, and references that treat the Generalised Pareto distribution as a secondary solution in the case of significant wave data can be identified in the literature. In this framework, the current study tackles the discussion of the application of statistical models to characterize exceedances of wave data. Comparisons of the Generalised Pareto, the two-parameter Weibull, and the Exponential distribution are presented for different values of the threshold u, using real wave data obtained from four buoys along the Irish coast. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed, and the respective conclusions are drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, given its growing importance, should be addressed carefully for an efficient estimation of very-low-occurrence events.
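
As a hedged illustration of the POT workflow described above (not the paper's exact analysis: the wave record is synthetic and the threshold choice is arbitrary), the three candidate distributions can be fitted and compared in Python with scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Illustrative significant wave height record [m]; the study used Irish buoy data
h_s = stats.weibull_min.rvs(c=1.6, scale=2.0, size=20_000, random_state=rng)

u = np.quantile(h_s, 0.95)          # predefined threshold u
exceed = h_s[h_s > u] - u           # peaks over threshold

# Fit the three candidate models to the exceedances (location fixed at 0)
gpd_c, _, gpd_scale = stats.genpareto.fit(exceed, floc=0.0)
exp_scale = exceed.mean()            # MLE of the Exponential (GPD with shape 0)
wb_c, _, wb_scale = stats.weibull_min.fit(exceed, floc=0.0)

# Compare the fits via total log-likelihood (higher is better)
print("GPD        :", stats.genpareto.logpdf(exceed, gpd_c, 0.0, gpd_scale).sum())
print("Exponential:", stats.expon.logpdf(exceed, 0.0, exp_scale).sum())
print("Weibull 2-p:", stats.weibull_min.logpdf(exceed, wb_c, 0.0, wb_scale).sum())
```

Repeating the fit over a grid of thresholds u reproduces the kind of sensitivity analysis reported in the paper.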

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 273
453 A Case Study on Experiences of Clinical Preceptors in the Undergraduate Nursing Program

Authors: Jacqueline M. Dias, Amina A. Khowaja

Abstract:

Clinical education is one of the most important components of a nursing curriculum, as it develops the students' cognitive, psychomotor, and affective skills and ensures the integration of knowledge into practice. As the number of students in the field of nursing increases, coupled with the faculty shortage, clinical preceptors are the best choice to ensure student learning in clinical settings. The clinical preceptor role has been introduced in the undergraduate nursing programme; in Pakistan, this role emerged due to a faculty shortage, and initially, two clinical preceptors were hired. This study explored clinical preceptors' views and experiences of precepting Bachelor of Science in Nursing (BScN) students in an undergraduate program. A case study design was used. As case studies explore a single unit of study, such as a person or a very small number of subjects, the two clinical preceptors were fundamental to the study and served as a single case. Qualitative data were obtained through an iterative process using in-depth interviews and written accounts from reflective journals kept by the clinical preceptors. The findings revealed that the clinical preceptors were dedicated to their roles and responsibilities. Another key finding was that the clinical preceptors' prior knowledge and clinical experience were valuable assets for performing their role effectively. The clinical preceptors found their new role innovative and challenging, yet stressful at the same time. Findings also revealed that in the clinical agencies there were unclear expectations and role ambiguity. Furthermore, the clinical preceptors had difficulty integrating theory into practice in the clinical area and in giving feedback to the students. Although this study is localized to one university, generalizations can be drawn from the results. The key findings indicate that the role of a clinical preceptor is demanding and stressful: clinical preceptors need preparation prior to precepting students on clinical placements, and institutional support is fundamental for their acceptance. This paper focuses on the views and experiences of clinical preceptors undertaking a newly established role and resonates with the literature. The following recommendations are drawn to strengthen the role of the clinical preceptors: a structured program for clinical preceptors is needed, along with mentorship; clinical preceptors should be provided with formal training in teaching and learning, with emphasis on clinical teaching and giving feedback to students; and, to improve the integration of theory into practice, clinical modules should be provided ahead of the clinical placement. In spite of all the challenges, ten more clinical preceptors have been hired as the faculty shortage continues to persist.

Keywords: baccalaureate nursing education, clinical education, clinical preceptors, nursing curriculum

Procedia PDF Downloads 174
452 The Risk of Prioritizing Management over Education at Japanese Universities

Authors: Masanori Kimura

Abstract:

Due to the decline of the 18-year-old population, Japanese universities have a tendency to convert the form of employment for newly hired teachers from tenured positions to fixed-term positions. The advantage of this is that universities can be more flexible in their employment plans in case they fail to fill their enrollment quotas of prospective students or need to supplement teachers who can engage in other academic fields or research areas where new demand is expected. The most serious disadvantage, however, is that if secure positions cannot be provided to faculty members, the coherence of education and the continuity of research supported by the university may not be achievable. Therefore, the question of this presentation is as follows: are universities aiming to give first priority to management, or are they trying to prioritize education and research over management? To answer this question, the author examined the number of job offerings for college foreign language teachers posted on the JREC-IN (Japan Research Career Information Network, run by the Japan Science and Technology Agency) website from April 2012 to October 2015. The results show that there were 1,002 and 1,056 job offerings for tenured positions and fixed-term contracts, respectively, suggesting that, overall, today's Japanese universities tend to give first priority to management. More detailed examination of the data, however, shows that this tendency varies slightly depending on the type of university. National universities, which are supported by the central government, and state universities, which are supported by local governments, posted more job offerings for tenured positions than for fixed-term contracts: national universities posted 285 and 257 job offerings for tenured positions and fixed-term contracts, respectively, and state universities posted 106 and 86, respectively. Yet the difference in number between the two types of employment status at national and state universities is marginal. As for private universities, they posted 713 job offerings for fixed-term contracts and 616 for tenured positions. Moreover, 73% of the fixed-term contracts were offered for lower-rank positions, including associate professors, lecturers, and so forth. Generally speaking, those positions are offered to younger teachers. Therefore, this result indicates that private universities attempt to cut their budgets yet expect the same educational effect by hiring younger teachers. Although the results have shown some differences in personnel strategies among the three types of universities, the author argues that all three may lose important human resources that would take a pivotal role at their universities in the future unless they urgently review their employment strategies.

Keywords: higher education, management, employment status, foreign language education

Procedia PDF Downloads 134
451 Delving into the Concept of Social Capital in the Smart City Research

Authors: Atefe Malekkhani, Lee Beattie, Mohsen Mohammadzadeh

Abstract:

The unprecedented growth of megacities and urban areas around the world has resulted in numerous risks, concerns, and problems across various aspects of urban life, including the environmental, social, and economic domains, such as climate change and spatial and social inequalities. In this situation, the ever-increasing progress of technology has created hope for urban authorities that the negative effects of various socio-economic and environmental crises can potentially be mitigated with the use of information and communication technologies (ICTs). The concept of the 'smart city' represents an emerging solution to urban challenges arising from increased urbanization using ICTs. However, smart cities are often perceived primarily as technological initiatives and are implemented without considering the social and cultural contexts of cities and the needs of their residents. The implementation of smart city projects and initiatives has the potential to (un)intentionally exacerbate pre-existing social, spatial, and cultural segregation. The impact of the smart city on the social capital of the people who use smart city systems, and of those who govern them as policymakers, is therefore worth exploring; the importance of inhabitants to the existence and development of smart cities cannot be overlooked. This concept has been approached from different perspectives in smart city studies. Reviewing the literature on social capital and the smart city shows that social capital plays three different roles in smart city development. Some research indicates that social capital is a component of a smart city, embedded in its dimensions, definitions, or strategies, while other studies see it as a social outcome of smart city development and point out that the move to smart cities improves social capital, although in most cases this remains an unproven hypothesis. Other studies show that social capital can enhance the functions of smart cities and that the consideration of social capital in planning smart cities should be promoted. Despite the existing theoretical and practical knowledge, there is a significant research gap in reviewing the knowledge domain of smart city studies through the lens of social capital. To shed light on this issue, this study aims to explore the domain of existing research in the field of the smart city through the lens of social capital. This research will use the 'Preferred Reporting Items for Systematic Reviews and Meta-Analyses' (PRISMA) method to review relevant literature, focusing on the key concepts of 'Smart City' and 'Social Capital'. The studies will be selected from the Web of Science Core Collection, using a selection process that involves identifying literature sources and screening and filtering studies based on titles, abstracts, and full-text reading.

Keywords: smart city, urban digitalisation, ICT, social capital

Procedia PDF Downloads 14
450 Teaching Ethnic Relations in Social Work Education: A Study of Teachers' Strategies and Experiences in Sweden

Authors: Helene Jacobson Pettersson, Linda Lill

Abstract:

Demographic changes and globalization in society provide new opportunities for social work and social work education in Sweden, and there has been an ambition to include these aspects in Swedish social work education. However, the Swedish welfare state standard has continued to serve as an unquestioned, largely invisible starting point in discussions about people's ways of life and social problems. The aim of this study is to explore the content given to ethnic relations in social work within social work education in Sweden. Our standpoint is that the subject can be understood at both individual and structural levels, that it changes over time, varies in different steering documents, and differs between the perspectives of teachers and students. Our question is what content teachers give to ethnic relations in social work in their strategies and teaching material. The study brings together research at the interface between education science, social work, and research on international migration and ethnic relations. The presented narratives come from longer interviews with a total of 17 university teachers who teach in social work programs at four different universities in Sweden. The universities have, in different ways, curricula that involve the theme of ethnic relations in social work, and the interviewed teachers teach and grade social workers in specific courses related to ethnic relations at undergraduate and graduate levels. Together, these 17 teachers assess a large number of students each semester. The questions concerned how the teachers handle ethnic relations in social work education. The particular focus during the interviews was the teachers' understanding of the documented learning objectives and the content of the literature, and the implications this has for their teaching. What emerges is the teachers' own stories about their educational work, how they relate to the content of teaching, and the teaching strategies they use to promote the theme of ethnic relations in social work education. Our analysis of this pedagogy is that the teaching ends up at an individual level, with a particular focus on the professional encounter with individuals, and that a critical analysis of the construction of social problems is lacking. The conclusion is that individual circumstances take precedence over theoretical perspectives on social problems related to migration, transnational relations, globalization, and the social. This result has problematic implications from the perspective of sustainability in terms of ethnic diversity and integration in society, as these aspects have the most relevance for social workers' professional actions in social support and empowerment-related activities, and in supporting the social status, human rights, and equality of immigrants.

Keywords: ethnic relations in Swedish social work education, teaching content, teaching strategies, educating for change, human rights and equality

Procedia PDF Downloads 248
449 Dexamethasone Treatment Deregulates Proteoglycans Expression in Normal Brain Tissue

Authors: A. Y. Tsidulko, T. M. Pankova, E. V. Grigorieva

Abstract:

High-grade gliomas are the most frequent and most aggressive brain tumors, characterized by active invasion of tumor cells into the surrounding brain tissue, where the extracellular matrix (ECM) plays a crucial role. Disruption of the ECM can be involved in anticancer drug effectiveness and side effects, and also in tumor relapses. The anti-inflammatory agent dexamethasone is a common drug used during high-grade glioma treatment to alleviate cerebral edema. Although dexamethasone is widely used in the clinic, its effects on the ECM of normal brain tissue remain poorly investigated. Proteoglycans (PGs) are a major component of the extracellular matrix in the central nervous system. In our work, we studied the effects of dexamethasone on the ECM proteoglycans (syndecan-1, glypican-1, perlecan, versican, brevican, NG2, decorin, biglycan, lumican) using RT-PCR in an experimental animal model. It was shown that proteoglycans in rat brain have age-specific expression patterns. In the early postnatal rat brain (8-day-old rat pups), overall PG expression was quite high, and the mainly expressed PGs were biglycan, decorin, and syndecan-1. The overall transcriptional activity of PGs in the adult rat brain is 1.5-fold lower than in the postnatal brain, and the expression pattern changes as well, with biglycan, decorin, syndecan-1, glypican-1, and brevican becoming almost equally expressed. PG expression patterns create a specific tissue microenvironment that differs between the developing and adult brain. A dexamethasone regimen close to the one used in the clinic during high-grade glioma treatment significantly affects proteoglycan expression: overall PG transcriptional activity increased 1.5- to 2-fold after dexamethasone treatment. The most up-regulated PGs were biglycan, decorin, and lumican. The PG expression pattern in the adult brain changed after treatment, becoming quite close to the expression pattern in the developing brain. It is known that the microenvironment in developing tissues promotes cell proliferation, while in adult tissues proliferation is usually suppressed. The changes occurring in the adult brain after dexamethasone treatment may lead to re-activation of cell proliferation due to signals from the changed microenvironment. Taken together, the obtained data show that dexamethasone treatment significantly affects the normal brain ECM, creating an appropriate microenvironment for tumor cell proliferation, and thus can reduce the effectiveness of anticancer treatment and promote tumor relapses. This work has been supported by the Russian Science Foundation (RSF Grant 16-15-10243).
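
The abstract reports fold changes without showing the calculation; relative expression from RT-PCR data is commonly computed with the 2^-ΔΔCt method (Livak & Schmittgen), sketched below in Python with invented Ct values, since the abstract does not state which quantification method was used:

```python
import numpy as np

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression via the 2^-delta-delta-Ct method."""
    d_ct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    d_ct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Invented triplicate Ct values for one proteoglycan (e.g., decorin) against
# a housekeeping gene, dexamethasone-treated vs. control brain tissue
fc = fold_change_ddct(
    ct_target_treated=[24.1, 24.3, 24.0],
    ct_ref_treated=[18.2, 18.1, 18.3],
    ct_target_control=[25.0, 25.2, 24.9],
    ct_ref_control=[18.2, 18.3, 18.1],
)
print(f"fold change = {fc:.2f}")  # > 1 means up-regulated after treatment
```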

Keywords: dexamethasone, extracellular matrix, glioma, proteoglycan

Procedia PDF Downloads 199
448 Dog's Chest Homogeneous Phantom for Image Optimization

Authors: Maris Eugênia Dela Rosa, Ana Luiza Menegatti Pavan, Marcela De Oliveira, Diana Rodrigues De Pina, Luis Carlos Vulcano

Abstract:

In veterinary as well as in human medicine, radiological study is essential for a safe diagnosis in clinical practice; thus, the quality of the radiographic image is crucial. In recent years, there has been an increasing substitution of screen-film image acquisition systems by computed radiography (CR) equipment without adequate adjustment of the technical charts. Furthermore, carrying out a radiographic examination on a veterinary patient requires human assistance for restraint, which can compromise image quality and leads to increased dose to the animal and to occupationally exposed staff, as well as increased cost to the institution. Image optimization procedures and the construction of radiographic techniques are performed with the use of homogeneous phantoms. In this study, we sought to develop a homogeneous phantom of the canine chest to be applied to the optimization of these images for the CR system. To build the simulator, a database was created with retrospective chest computed tomography (CT) images from the Veterinary Hospital of the Faculty of Veterinary Medicine and Animal Science - UNESP (FMVZ / Botucatu). Images were divided into four groups according to animal weight, employing the classification by size proposed by Hoskins & Goldston. The thicknesses of biological tissues were quantified in 80 animals, separated into groups of 20 animals according to their weights: (S) Small - equal to or less than 9.0 kg, (M) Medium - between 9.0 and 23.0 kg, (L) Large - between 23.1 and 40.0 kg, and (G) Giant - over 40.1 kg. The mean weight for group (S) was 6.5±2.0 kg, (M) 15.0±5.0 kg, (L) 32.0±5.5 kg, and (G) 50.0±12.0 kg. An algorithm was developed in Matlab in order to classify and quantify the biological tissues present in the CT images and convert them into simulator materials. To classify the tissues present, membership functions were created from the retrospective CT scans according to the type of tissue (adipose, muscle, trabecular or cortical bone, and lung tissue). After conversion of the biological tissue thicknesses into equivalent material thicknesses (acrylic simulating soft tissues, aluminum simulating bone tissues, and air simulating the lung), four different homogeneous phantoms were obtained: (S) 5 cm of acrylic, 0.14 cm of aluminum, and 1.8 cm of air; (M) 8.7 cm of acrylic, 0.2 cm of aluminum, and 2.4 cm of air; (L) 10.6 cm of acrylic, 0.27 cm of aluminum, and 3.1 cm of air; and (G) 14.8 cm of acrylic, 0.33 cm of aluminum, and 3.8 cm of air. The developed canine homogeneous phantom is a practical tool that will be employed in future work to optimize veterinary X-ray procedures.
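
The original classification algorithm was written in Matlab and is not reproduced in the abstract; a rough Python sketch of the classification step, with simple Hounsfield-unit thresholds standing in for the membership functions (the HU ranges below are textbook approximations, not the values derived in the study), might look like:

```python
import numpy as np

# Approximate Hounsfield-unit ranges, assumed for illustration only; the
# study derived membership functions from its own retrospective CT database
HU_RANGES = {
    "lung":    (-1000.0, -500.0),
    "adipose": (-150.0, -30.0),
    "muscle":  (10.0, 80.0),
    "bone":    (300.0, 3000.0),
}

def classify_tissues(ct_slice_hu: np.ndarray) -> dict:
    """Return a boolean mask per tissue class for one CT slice in HU."""
    return {
        tissue: (ct_slice_hu >= lo) & (ct_slice_hu <= hi)
        for tissue, (lo, hi) in HU_RANGES.items()
    }

def tissue_thickness_cm(masks: dict, pixel_cm: float) -> dict:
    """Mean path thickness of each tissue along the vertical (beam) axis."""
    return {
        tissue: float(mask.sum(axis=0).mean()) * pixel_cm
        for tissue, mask in masks.items()
    }

# Illustrative slice: uniform 'muscle' background with a 'lung' block
slice_hu = np.full((512, 512), 40.0)
slice_hu[100:300, 150:350] = -800.0
print(tissue_thickness_cm(classify_tissues(slice_hu), pixel_cm=0.05))
```

The subsequent conversion of these tissue thicknesses into acrylic, aluminum, and air equivalents would use attenuation-equivalence factors, which the abstract reports only through the final phantom dimensions.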

Keywords: radiation protection, phantom, veterinary radiology, computed radiography

Procedia PDF Downloads 417
447 Issues of Accounting of Lease and Revenue according to International Financial Reporting Standards

Authors: Nadezhda Kvatashidze, Elena Kharabadze

Abstract:

It is broadly known that leasing is a flexible means of funding enterprises. Leasing reduces the risks related to the access to and possession of assets, as well as to obtaining funding. Therefore, it is important to refine lease accounting. The lease accounting regulations under the applicable standard (International Accounting Standard 17) make the concealment of liabilities possible. As a result, information users get inaccurate and incomplete information and have to resort to an additional assessment of the off-balance-sheet lease liabilities. In order to address the problem, the International Accounting Standards Board decided to change the approach to lease accounting. With the deficiencies of the applicable standard taken into account, the new standard (IFRS 16 'Leases') aims at supplying appropriate and fair lease-related information to the users. Save for certain exclusions, the lessee is obliged to recognize all lease agreements in its financial report. This approach was determined by the fact that, under a lease agreement, rights and obligations arise in the form of assets and liabilities: immediately upon conclusion of the lease agreement, the lessee takes an asset into its disposal and assumes the obligation to effect the lease-related payments, thereby meeting the recognition criteria defined by the Conceptual Framework for Financial Reporting, and the payments are to be entered into the financial report. The new lease accounting standard secures the supply of high-quality, comparable information to financial information users. The International Accounting Standards Board and the US Financial Accounting Standards Board jointly developed IFRS 15 'Revenue from Contracts with Customers'. The standard establishes detailed practical criteria for revenue recognition, such as the identification of the performance obligations in the contract, the determination of the transaction price and its components, especially variable consideration and other important components, as well as the passage of control over the asset to the customer. IFRS 15 is very similar to the relevant US standards and includes requirements that are more specific and consistent than those of the standards in place. The new standard is going to change the recognition terms and techniques in industries such as construction, telecommunications (mobile and cable networks), licensing (media, science, franchising), real property, and software.

Keywords: assessment of the lease assets and liabilities, contractual liability, division of contract, identification of contracts, contract price, lease identification, lease liabilities, off-balance sheet, transaction value

Procedia PDF Downloads 320
446 Friction and Wear Characteristics of Diamond Nanoparticles Mixed with Copper Oxide in Poly Alpha Olefin

Authors: Ankush Raina, Ankush Anand

Abstract:

Plyometric training is a form of specialized strength training that uses fast muscular contractions to improve power and speed, used by coaches and athletes in sports conditioning. Despite its useful role in sports conditioning programmes, information about the effects of plyometric training on athletes' cardiovascular health, especially the electrocardiogram (ECG), has not been established in the literature. The purpose of the study was to determine the effects of lower- and upper-body plyometric training on the ECG of athletes. The study was guided by three null hypotheses. A quasi-experimental research design was adopted. Seventy-two university male athletes constituted the population of the study. Thirty male athletes aged 18 to 24 years volunteered to participate, but only twenty-three completed the study. The volunteer athletes were apparently healthy, physically active, and free of any lower- and upper-extremity bone injuries for the past year, and they had no medical or orthopedic injuries that might affect their participation. Ten subjects were purposively assigned to each of three groups: lower-body plyometric training (LBPT), upper-body plyometric training (UBPT), and control (C). Training consisted of six plyometric exercises: lower-body (ankle hops, squat jumps, tuck jumps) and upper-body plyometric training (push-ups, medicine-ball chest throws, and side throws) at moderate intensity. The data were collated and analysed using the Statistical Package for the Social Sciences (SPSS version 22.0). The research questions were answered using means and standard deviations, while paired-samples t-tests were used to test the hypotheses. The results revealed that athletes trained using LBPT had greater reductions in ECG parameters than those in the control group. The results also revealed that athletes trained using both LBPT and UBPT showed no significant differences from the control group in the ECG parameters following ten weeks of plyometric training, except in the QRS (Q wave, R wave, and S wave) complex. Based on the findings, it was recommended, among others, that coaches should include both LBPT and UBPT as part of athletes' overall training programmes from primary to tertiary institutions to optimize performance, reduce the risk of cardiovascular disease, and promote a healthy lifestyle.

Keywords: boundary lubrication, copper oxide, friction, nano diamond

Procedia PDF Downloads 123
445 Profile of the Renal Failure Patients under Haemodialysis at B. P. Koirala Institute of Health Sciences Nepal

Authors: Ram Sharan Mehta, Sanjeev Sharma

Abstract:

Introduction: Haemodialysis (HD) is a mechanical process of removing waste products from the blood and replacing essential substances in patients with renal failure. The first artificial kidney was developed in the Netherlands in 1943, the first successful treatment of chronic renal failure (CRF) was reported in 1960, and life-saving treatment for CRF began in 1972. In 1973, Medicare took over financial responsibility for many clients, after which the method became popular. B. P. Koirala Institute of Health Sciences (BPKIHS) is the only centre outside Kathmandu where HD services are available. At BPKIHS, PD started in January 1998 and HD started in August 2002; by September 2003, about 278 patients had received HD. The number of HD patients at BPKIHS is increasing day by day with institutional growth. No such study has been conducted in the past, so valid and reliable baseline data are lacking. Hence, the investigators were interested in conducting the study 'Profile of the Renal Failure Patients under Haemodialysis at B. P. Koirala Institute of Health Sciences, Nepal'. Objectives: The objectives of the study were to find out the socio-demographic characteristics of the patients, to explore the patients' knowledge regarding the disease process and haemodialysis, and to identify the problems encountered by the patients. Methods: This is a hospital-based exploratory study. The population of the study was clients undergoing HD, and the sampling method was purposive. Fifty-four patients who underwent HD during the complete one-year period from 17 July 2012 to 16 July 2013 were included in the study. A structured interview schedule was used to collect data after establishing its validity and reliability. Results: A total of 54 subjects underwent HD, with an age range of 5-75 years; the majority were male (74%) and Hindu (93%). Thirty-one percent were illiterate, 28% had agriculture as their occupation, 80% were from very poor communities, and about 30% of subjects were unaware of the disease they were suffering from. The majority of subjects reported no complications during dialysis (61%), whereas 20% reported nausea and vomiting, 9% hypotension, 4% headache, and 2% chest pain during dialysis. Conclusions: CRF leading to HD is a long battle for patients, requiring major and continuous adjustment, both physiological and psychological. The study suggests that non-compliance with the HD regimen was common. The socio-demographic and knowledge profile will help in the management and early prevention of disease, in evaluating aspects that influence care, and in enabling patients to select their mode of treatment properly.

Keywords: profile, haemodialysis, Nepal, patients, treatment

Procedia PDF Downloads 375
444 Barriers and Facilitators to Physical Activity Among Older Adults Living in Long‐Term Care Facilities: A Systematic Review with Qualitative Evidence Synthesis

Authors: Ying Shi, June Zhang, Lu Shao, Xiyan Xie, Aidi Lao, Zhangan Wang

Abstract:

Background: Low levels of physical activity are associated with poorer health outcomes, and this situation is more critical in older adults living in long-term care facilities. Objectives: To systematically identify, appraise, and synthesize current qualitative research evidence regarding the barriers and facilitators to physical activity as reported by older adults and care staff in long-term care facilities. Design: This is a systematic review with qualitative evidence synthesis adhering to PRISMA guidelines. Methods: We conducted a systematic search of the PubMed, Science Citation Index Expanded, Social Sciences Citation Index, EMBASE, CINAHL, and PsycINFO databases from inception until 30 June 2023. Thematic synthesis was undertaken to identify the barriers and facilitators relating to physical activity, which were then mapped onto the Capability, Opportunity, Motivation, and Behaviour (COM-B) model and the Theoretical Domains Framework. Methodological quality was assessed using the CASP Qualitative Studies Checklist, and confidence in review findings was assessed using the GRADE-CERQual approach. Results: We included 32 studies after screening 10,496 citations and 177 full texts. Seven themes and 17 subthemes were identified relating to the barriers and facilitators influencing physical activity in elderly residents. The main themes mapped onto the COM-B model as follows: Capability (physical activity knowledge gaps and individual health issues), Opportunity (social support and macro-level resources), and Motivation (health beliefs, fear of falling or injury, and personal and social incentives for physical activity). Most subthemes were graded as high (n = 9) or moderate (n = 3) confidence. Conclusions and Implications: Our comprehensive synthesis of 32 studies provides a wealth of knowledge about the barriers and facilitators to physical activity from both residents' and care staff's perspectives. Intervention components are also suggested within the context of long-term care facilities. End users such as older residents, care staff, and researchers can have confidence in our findings when formulating policies and guidance on promoting physical activity among elderly residents in long-term care facilities.

Keywords: long‐term care, older adults, physical activity, qualitative, systematic review

Procedia PDF Downloads 85
443 The Quantum Theory of Music and Human Languages

Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer

Abstract:

The main hypotheses proposed around the definitions of the syllable and of music, and of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies their interest: the debate raises questions that are at the heart of theories of language. This is an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, automatic translation, and artificial intelligence. When this theory is applied to the text of a folksong in any tonal language of the world, it pieces together not only the exact melody, rhythm, and harmonies of that song, as if they were known in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as has one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. With experimentation confirming the theory, I designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect data extracted from my mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). The translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a structured song (chorus-verse) on your computer and instruct the machine to produce a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and you then select your melody.

Keywords: language, music, sciences, quantum entanglement

Procedia PDF Downloads 77
442 Cancer Survivor’s Adherence to Healthy Lifestyle Behaviours; Meeting the World Cancer Research Fund/American Institute of Cancer Research Recommendations, a Systematic Review and Meta-Analysis

Authors: Daniel Nigusse Tollosa, Erica James, Alexis Hurre, Meredith Tavener

Abstract:

Introduction: Lifestyle behaviours such as a healthy diet, regular physical activity, and maintaining a healthy weight are essential for cancer survivors to improve quality of life and longevity. However, no study has synthesized cancer survivors' adherence to healthy lifestyle recommendations. The purpose of this review was to collate existing data on the prevalence of adherence to healthy behaviours and produce pooled estimates among adult cancer survivors. Method: Multiple databases (Embase, Medline, Scopus, Web of Science, and Google Scholar) were searched for relevant articles published since 2007 reporting cancer survivors' adherence to more than two lifestyle behaviours based on the WCRF/AICR recommendations. The pooled prevalence of adherence to single and multiple behaviours (operationalized as adherence to more than 75% (3/4) of the health behaviours included in a particular study) was calculated using a random-effects model. Subgroup analyses of adherence to multiple behaviours were undertaken according to mean survival years and year of publication. Results: A total of 3,322 articles were generated through our search strategies. Of these, 51 studies matched our inclusion criteria, presenting data from 2,620,586 adult cancer survivors. The highest prevalence of adherence was observed for smoking (pooled estimate: 87%, 95% CI: 85%, 88%) and alcohol intake (pooled estimate: 83%, 95% CI: 81%, 86%), and the lowest was for fiber intake (pooled estimate: 31%, 95% CI: 21%, 40%). Thirteen studies reported the proportion of cancer survivors adhering to multiple healthy behaviours (all used a simple summative index method), whereby the prevalence of adherence ranged from 7% to 40% (pooled estimate: 23%, 95% CI: 17% to 30%). Subgroup analysis suggests that short-term survivors (< 5 years survival time) had relatively better adherence to multiple behaviours (pooled estimate: 31%, 95% CI: 27%, 35%) than long-term (> 5 years survival time) cancer survivors (pooled estimate: 25%, 95% CI: 14%, 36%). Pooling of estimates according to year of publication (since 2007) also suggests an increasing trend of adherence to multiple behaviours over time. Conclusion: Overall, adherence to multiple lifestyle behaviours was poor, and it is of relatively greater concern for long-term than for short-term cancer survivors. Cancer survivors need to comply with healthy lifestyle recommendations related to physical activity, fruit and vegetable, fiber, red/processed meat, and sodium intake.
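
The abstract does not detail the random-effects model; one standard approach is DerSimonian-Laird pooling of logit-transformed proportions, sketched below in Python with made-up study data:

```python
import numpy as np

def pooled_prevalence(events: np.ndarray, totals: np.ndarray):
    """DerSimonian-Laird random-effects pooling of logit proportions.

    Returns (pooled estimate, 95% CI lower, 95% CI upper) on the
    proportion scale.
    """
    p = events / totals
    y = np.log(p / (1 - p))                    # logit of each study proportion
    v = 1 / events + 1 / (totals - events)     # approximate within-study variance
    w = 1 / v

    # Fixed-effect estimate and Cochran's Q heterogeneity statistic
    y_fe = (w * y).sum() / w.sum()
    q = (w * (y - y_fe) ** 2).sum()

    # Between-study variance tau^2 (DerSimonian-Laird estimator)
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))

    w_re = 1 / (v + tau2)
    y_re = (w_re * y).sum() / w_re.sum()
    se = np.sqrt(1 / w_re.sum())

    expit = lambda x: 1 / (1 + np.exp(-x))     # back-transform to proportions
    return expit(y_re), expit(y_re - 1.96 * se), expit(y_re + 1.96 * se)

# Made-up data: numbers of adherers and sample sizes from five studies
events = np.array([40, 120, 33, 210, 95])
totals = np.array([200, 400, 150, 700, 500])
est, lo, hi = pooled_prevalence(events, totals)
print(f"pooled prevalence = {est:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```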

Keywords: adherence, lifestyle behaviours, cancer survivors, WCRF/AICR

Procedia PDF Downloads 183
441 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method widely used by the data science community. It helps with two main tasks: displaying results by coloring items according to the item class or feature value, and forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and the answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. This algorithm is non-parametric: the transformation from a high- to a low-dimensional space is described but not learned, and two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at the same exact position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, using the newly obtained embedding, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
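
The paper's exact two-cost optimization is not given in the abstract; as a rough illustration of the reuse idea, initializing a new embedding from a previous one so that cluster positions stay coherent, scikit-learn's TSNE accepts an array for its init parameter (the data and the drift scenario below are invented):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.manifold import TSNE

# Snapshot 1: embed the first batch normally
X1, _ = make_blobs(n_samples=300, centers=4, random_state=0)
emb1 = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X1)

# Snapshot 2: the same sources drifted slightly (same number of points)
X2 = X1 + np.random.default_rng(1).normal(scale=0.3, size=X1.shape)

# Reuse: start the optimization from the previous embedding so clusters
# keep their positions and the two maps remain visually comparable
emb2 = TSNE(n_components=2, init=emb1, random_state=0).fit_transform(X2)

drift = np.linalg.norm(emb2 - emb1, axis=1)
print(f"mean point displacement between snapshots: {drift.mean():.2f}")
```

Note that this simple initialization trick does not include the paper's explicit support-matching cost; it only conveys why anchoring successive embeddings makes cluster dynamics observable.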

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 144
440 Foundations for Global Interactions: The Theoretical Underpinnings of Understanding Others

Authors: Randall E. Osborne

Abstract:

In a course on International Psychology, 8 theoretical perspectives (Critical Psychology, Liberation Psychology, Post-Modernism, Social Constructivism, Social Identity Theory, Social Reduction Theory, Symbolic Interactionism, and Vygotsky’s Sociocultural Theory) are used as a framework for getting students to understand the concept of and need for Globalization. One of critical psychology's main criticisms of conventional psychology is that it fails to consider, or deliberately ignores, the way power differences between social classes and groups can impact the mental and physical well-being of individuals or groups of people. Liberation psychology, also known as liberation social psychology or psicología social de la liberación, is an approach to psychological science that aims to understand the psychology of oppressed and impoverished communities by addressing the oppressive sociopolitical structure in which they exist. Postmodernism is largely a reaction to the assumed certainty of scientific, or objective, efforts to explain reality. It stems from a recognition that reality is not simply mirrored in human understanding of it, but rather is constructed as the mind tries to understand its own particular and personal reality. Lev Vygotsky argued that all cognitive functions originate in, and must therefore be explained as products of, social interactions, and that learning is not simply the assimilation and accommodation of new knowledge by learners. Social Identity Theory discusses the implications of social identity for human interactions with and assumptions about other people. The theory suggests that people (1) categorize: they find it helpful (humans might be perceived as having a need) to place people and objects into categories; (2) identify: they align themselves with groups and gain identity and self-esteem from them; and (3) compare: they compare themselves to others. Social reductionism argues that all behavior and experiences can be explained simply by the effect of groups on the individual. Symbolic interaction theory focuses attention on the way that people interact through symbols: words, gestures, rules, and roles. Meaning evolves from humans' interactions with their environment and with other people. Vygotsky’s sociocultural theory of human learning describes learning as a social process and locates the origination of human intelligence in society or culture. The major theme of Vygotsky’s theoretical framework is that social interaction plays a fundamental role in the development of cognition. This presentation will discuss how these theoretical perspectives are incorporated into a course on International Psychology, a course on the Politics of Hate, and a course on the Psychology of Prejudice, Discrimination and Hate to promote student thinking in a more ‘global’ manner.

Keywords: globalization, international psychology, society and culture, teaching interculturally

Procedia PDF Downloads 252
439 Predicting Personality and Psychological Distress Using Natural Language Processing

Authors: Jihee Jang, Seowon Yoon, Gaeun Son, Minjung Kang, Joon Yeon Choeh, Kee-Hong Choi

Abstract:

Background: Self-report multiple-choice questionnaires have been widely utilized to quantitatively measure one’s personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable inherent limitations. With the rise of machine learning (ML) and natural language processing (NLP), researchers in the field of psychology are widely adopting NLP to assess psychological constructs and to predict human behaviors. However, there is a lack of connection between the work being performed in computer science and that in psychology, due to small datasets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of phase II, which includes the interview questions for the five-factor model (FFM) of personality developed in phase I. This study aims to develop semi-structured interview and open-ended questions for FFM-based personality assessments, designed with experts in the field of clinical and personality psychology (phase 1), and to collect personality-related text data using the interview questions and self-report measures of personality and psychological distress (phase 2). The purpose of the study includes examining the relationship between natural language data obtained from the interview questions, the FFM personality constructs, and psychological distress, to demonstrate the validity of natural language-based personality prediction. Methods: The phase I (pilot) study was conducted on fifty-nine native Korean adults to acquire personality-related text data from the semi-structured interview and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with feedback from an external expert committee consisting of personality and clinical psychologists. Based on the established interview questions, a total of 425 Korean adults were recruited using a convenience sampling method via an online survey. The text data collected from the interviews were analyzed using natural language processing. The results of the online survey, including demographic data and depression, anxiety, and personality inventories, were analyzed together in the model to predict individuals’ FFM of personality and level of psychological distress (phase 2).
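
As a rough illustration of the prediction step described in phase 2, the sketch below regresses transcript text onto a self-reported trait score. The TF-IDF features and ridge model are assumptions for illustration, not the study's pipeline, and real Korean transcripts would first require a morphological tokenizer.

```python
# Hypothetical sketch: transcripts regressed onto one Big Five trait score.
# Feature choice (TF-IDF) and model (ridge) are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

transcripts = [
    "I enjoy meeting new people and organizing group trips.",   # invented
    "I prefer quiet evenings and a predictable routine.",       # invented
]
extraversion = [4.2, 2.1]   # illustrative self-report scores (1-5 scale)

model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
model.fit(transcripts, extraversion)
print(model.predict(["I love loud parties with friends."]))
# With the real 425-participant corpus one would cross-validate per trait,
# e.g. sklearn.model_selection.cross_val_score with scoring="r2".
```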

Keywords: personality prediction, psychological distress prediction, natural language processing, machine learning, the five-factor model of personality

Procedia PDF Downloads 79
438 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks

Authors: Sulemana Ibrahim

Abstract:

Food security remains a paramount global challenge, with vulnerable regions grappling with hunger and malnutrition. This study embarks on a comprehensive exploration of data-driven strategies aimed at ameliorating food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimize supply chains, and enhance food distribution networks. The study unfolds as a multi-dimensional analysis, commencing with the development of robust machine learning models that harness remote sensing data, historical crop yield records, and meteorological data to forecast crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, furnish critical insights into anticipated harvests, empowering proactive measures to confront food insecurity. Subsequently, the research scrutinizes supply chain optimization to address food security challenges, capitalizing on linear programming and network optimization techniques. These strategies aim to mitigate loss and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks, with a particular focus on network efficiency, accessibility, and equitable allocation of food resources. Network analysis tools, complemented by data-driven simulation methodologies, unveil opportunities for augmenting the efficacy of these critical lifelines. The study also considers the ethical implications and privacy concerns associated with the extensive use of data in the realm of food security; the proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aspiration of this research is to forge a nexus between data science and food security policy, providing actionable insights to mitigate the ordeal of food insecurity. The holistic approach, converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks, aspires to revitalize food security in the most vulnerable regions, elevating the quality of life for millions worldwide.
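
The supply-chain component can be pictured as a classic transportation problem. The sketch below, with invented costs and quantities, shows the kind of linear-programming formulation the abstract alludes to, using scipy.optimize.linprog.

```python
# Hypothetical sketch of supply-chain optimization as a transportation problem:
# minimize shipping cost from farms to hubs subject to supply and demand.
# All costs and quantities below are illustrative only.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],       # cost[i][j]: farm i -> hub j
                 [5.0, 4.0, 7.0]])
supply = [80, 70]                        # tons available at each farm
demand = [50, 60, 40]                    # tons required at each hub

n_farms, n_hubs = cost.shape
c = cost.ravel()                         # decision vars x[i][j], row-major
# Each farm ships at most its supply.
A_ub = np.zeros((n_farms, n_farms * n_hubs))
for i in range(n_farms):
    A_ub[i, i * n_hubs:(i + 1) * n_hubs] = 1
# Each hub receives exactly its demand.
A_eq = np.zeros((n_hubs, n_farms * n_hubs))
for j in range(n_hubs):
    A_eq[j, j::n_hubs] = 1
res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(res.x.reshape(n_farms, n_hubs))    # optimal shipment plan
print(res.fun)                           # minimal total cost
```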

Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks

Procedia PDF Downloads 62
437 The Relationship between Self-Injurious Behavior and Manner of Death

Authors: Sait Ozsoy, Hacer Yasar Teke, Mustafa Dalgic, Cetin Ketenci, Ertugrul Gok, Kenan Karbeyaz, Azem Irez, Mesut Akyol

Abstract:

Self-mutilating behavior, or self-injury behavior (SIB), is defined as "intentional harm to one’s body without intent to commit suicide". SIB cases are commonly seen in psychiatry and forensic medicine practice. Despite the variety of SIB methods, cuts to the skin are the most common (70-97%) injury in this group of patients. Subjects with SIB have one or more comorbidities, including depression, anxiety, depersonalization, feelings of worthlessness, borderline personality disorder, antisocial behaviors, and histrionic personality. These individuals feel a high level of hostility towards themselves and their surroundings. Research has also revealed a strong relationship between antisocial personality disorder, criminal behavior, and SIB. This study retrospectively evaluated 6,599 autopsy cases performed at the forensic medicine institutes of six major cities (Ankara, Izmir, Diyarbakir, Erzurum, Trabzon, Eskisehir) of Turkey in 2013. The study group consisted of all cases with SIB findings (psychopathic cuts, cigarette burns, scars, etc.). The relationship between cause of death in the study group (SIB subjects) and in the control group was investigated; the control group was created from subjects without signs of SIB. The Mann-Whitney U test was used for age variables and the chi-square test for categorical variables. Multinomial logistic regression was used to analyze group differences with respect to manner of death (natural, accident, homicide, suicide), and risk factors associated with each group were analyzed with binomial logistic regression. The study used SPSS Statistics 15.0 for all statistical calculations, with statistical significance set at p < 0.05. There was no significant difference between accidental and natural death among the groups (p=0.737). However, for each unit increase in the number of psychopathic cuts, the odds of accidental death decreased by a factor of 0.967 (95% CI: 0.941-0.993, p=0.015). In contrast, there was a significant difference between suicidal and natural death (p<0.001), and also between homicidal and natural death (p=0.025). SIB is often seen with borderline and antisocial personality disorders but may be associated with many psychiatric illnesses. Studies have shown a relationship between antisocial personality disorder and criminal behavior, and between SIB and suicidal behavior. In our study, the rates of suicide, homicide, and intoxication were higher in the SIB group than in the control group. It could be concluded that SIB can be used as a predictor of the possibility of one's harming oneself or other people.
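
For concreteness, a multinomial logistic model of the kind described could be fitted as sketched below with statsmodels. The data frame is simulated, and the predictors (number of cuts, age) are assumptions; the study's actual variables and coding are not reproduced here.

```python
# Hypothetical sketch: manner of death regressed on SIB-related predictors
# with a multinomial logit. Data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "n_cuts": rng.poisson(2, n),                  # e.g., psychopathic cuts
    "age": rng.integers(18, 70, n),
    "manner": rng.choice(["natural", "accident", "suicide", "homicide"], n),
})
# Put "natural" first so it serves as the reference category.
manner = pd.Categorical(df["manner"],
                        categories=["natural", "accident", "suicide", "homicide"])
X = sm.add_constant(df[["n_cuts", "age"]])
fit = sm.MNLogit(manner.codes, X).fit(disp=False)
print(np.exp(fit.params))   # odds ratios vs. natural death, per outcome
```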

Keywords: autopsy, cause of death, forensic science, self-injury behaviour

Procedia PDF Downloads 510
436 The Relationship between the Skill Mix Model and Patient Mortality: A Systematic Review

Authors: Yi-Fung Lin, Shiow-Ching Shun, Wen-Yu Hu

Abstract:

Background: A skill mix model is regarded as one of the most effective methods of reducing nursing shortages, as well as easing nursing staff workloads and labor costs. Although this model shows several benefits for the health workforce, the relationship between the optimal skill mix model and the patient mortality rate remains to be discovered. Objectives: This review aimed to explore the relationship between the skill mix model and the patient mortality rate in acute care hospitals. Data Sources: A systematic search of the PubMed, Web of Science, Embase, and Cochrane Library databases retrieved studies published between January 1986 and March 2022. Review methods: Two independent reviewers screened the titles and abstracts based on the selection criteria, extracted the data, and performed a critical appraisal of each included study using the STROBE checklist. The included studies focused on adult patients in acute care hospitals and reported both the skill mix model and the patient mortality rate. Results: The six included studies were conducted in the USA, Canada, Italy, Taiwan, and European countries (Belgium, England, Finland, Ireland, Spain, and Switzerland), covering patients in medical, surgical, and intensive care units. All skill mix teams included both nurses and nursing assistants. The main finding is that three studies (324,592 participants) show evidence of lower mortality rates in hospitals with a higher percentage of registered nurse staff (percentage of registered nurses ranging from 36.1% to 100%), whereas three articles (1,122,270 participants) did not find the same result (percentage of registered nurses ranging from 46% to 96%). However, based on the appraisal findings, the studies showing a significant association all met good quality standards, compared with only one-third of their counterparts. Conclusions: In light of the limited amount and quality of published research in this review, it is prudent to treat the findings with caution. Although the evidence is of insufficient certainty to draw conclusions about the relationship between nurse staffing levels and patient mortality, this review points the direction for future studies. A limitation of this review is the variation in skill mix models among countries and institutions, which made it impossible to perform a meta-analysis to compare them further.

Keywords: nurse staffing level, nursing assistants, mortality, skill mix

Procedia PDF Downloads 117
435 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compels computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which makes distributed computing platforms necessary for such operations. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We study the setup in which the identity of the matrix of interest must be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their tasks before the master server can recover the product W. We also consider the problem of secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
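
The coding idea can be illustrated with plain polynomial codes, a simpler relative of the PSGPD scheme described here and without the privacy layer: X is split into m blocks, each worker multiplies one evaluation of the block polynomial by Y, and the master interpolates from any m finished workers.

```python
# Hypothetical sketch of straggler-tolerant coded matrix multiplication with
# polynomial codes (not the paper's PSGPD construction). X is split into m
# row-blocks; any m of the n workers suffice to recover XY.
import numpy as np

def encode(X, m, evals):
    """Evaluate the block polynomial X(a) = sum_i X_i a^i at each point."""
    blocks = np.split(X, m, axis=0)
    return [sum(b * (a ** i) for i, b in enumerate(blocks)) for a in evals]

m, n_workers = 3, 5                          # tolerate n_workers - m stragglers
X = np.random.randn(6, 4)
Y = np.random.randn(4, 2)
evals = np.arange(1, n_workers + 1, dtype=float)

results = [Xa @ Y for Xa in encode(X, m, evals)]   # one product per worker
# Suppose workers 1 and 3 straggle: decode from any m completed workers.
done = [0, 2, 4]
V = np.vander(evals[done], m, increasing=True)     # Vandermonde system V @ C = R
R = np.array([results[k] for k in done]).reshape(m, -1)
coeffs = np.linalg.solve(V, R)                     # rows are the blocks X_i Y
XY = coeffs.reshape(m, *results[0].shape).reshape(X.shape[0], Y.shape[1])
print(np.allclose(XY, X @ Y))                      # True: product recovered
```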

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 122
434 The Importance of Efficient and Sustainable Water Resources Management and the Role of Artificial Intelligence in Preventing Forced Migration

Authors: Fateme Aysin Anka, Farzad Kiani

Abstract:

Forced migration is a situation in which people are forced to leave their homes against their will due to political conflicts, wars, natural disasters, climate change, economic crises, or other emergencies. This type of migration takes place under conditions in which people cannot lead a sustainable life for reasons such as security, shelter, and meeting their basic needs, and it may occur in connection with various factors that affect people's living conditions. In addition to these general and widespread causes, water security and water resources constitute one that is emerging now and will be encountered more and more in the future. Forced migration may occur when water resources in the areas where people live become insufficient or depleted. In this case, living conditions become unsustainable, and people may have to move elsewhere because they cannot obtain basic needs such as drinking water and water for agriculture and industry. To cope with these situations, it is important to minimize the causes; international organizations and societies must provide assistance (for example, humanitarian aid, shelter, medical support and education) and protection to address, or mitigate, this problem. From the international perspective, plans such as the Green New Deal (GND) and the European Green Deal (EGD) draw attention to the need for people to live equally in a cleaner and greener world. Especially recently, with the advancement of technology, scientific methods have become more efficient. In this regard, this article presents a multidisciplinary case model that addresses the water problem with an engineering approach within the framework of its social dimension. It is worth emphasizing that this problem is largely linked to climate change and the lack of a sustainable water-management perspective. Indeed, the United Nations draws attention to this problem in its universally accepted Sustainable Development Goals. Therefore, an artificial intelligence-based approach has been applied, focusing on the water management problem. The most general but also most important aspect of managing water resources is their correct consumption. In this context, the artificial intelligence-based system undertakes tasks such as water demand forecasting and distribution management, emergency and crisis management, water pollution detection and prevention, and maintenance and repair control and forecasting.
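
As a toy illustration of the demand-forecasting task listed above, the sketch below predicts next-day water demand from recent consumption and temperature. The data, features, and model are all assumptions, since the article does not specify an implementation.

```python
# Hypothetical sketch of water demand forecasting on simulated daily data.
# The model choice and features are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
days = 365
temp = 20 + 10 * np.sin(np.arange(days) * 2 * np.pi / 365) + rng.normal(0, 2, days)
demand = 100 + 3 * temp + rng.normal(0, 5, days)      # liters per capita

# Features: yesterday's demand and today's temperature forecast.
X = np.column_stack([demand[:-1], temp[1:]])
y = demand[1:]
model = GradientBoostingRegressor().fit(X[:300], y[:300])
mae = np.abs(model.predict(X[300:]) - y[300:]).mean()
print(f"held-out MAE: {mae:.1f}")
```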

Keywords: water resource management, forced migration, multidisciplinary studies, artificial intelligence

Procedia PDF Downloads 86
433 Inner and Outer School Contextual Factors Associated with Poor Performance of Grade 12 Students: A Case Study of an Underperforming High School in Mpumalanga, South Africa

Authors: Victoria L. Nkosi, Parvaneh Farhangpour

Abstract:

Often a Grade 12 certificate is perceived as a passport to tertiary education and the minimum requirement to enter the world of work. In spite of its importance, many students do not reach this milestone in South Africa. It is important to find out why so many students still fail despite the transformation of the education system in the post-apartheid era. Given the complexity of education and its context, this study adopted a case study design to examine one historically underperforming high school in Bushbuckridge, Mpumalanga Province, South Africa in 2013. The aim was to gain an understanding of the inner and outer school contextual factors associated with the high failure rate among Grade 12 students. Government documents and reports were consulted to identify factors in the district and the village surrounding the school, and a student survey was conducted to identify school, home, and student factors. A randomly sampled half of the Grade 12 student population (53 students) participated in the survey, and the quantitative data were analyzed using descriptive statistical methods. The findings showed that a host of factors is at play. The school is located in a village within a municipality which has been among the three poorest municipalities in South Africa and has the lowest Grade 12 pass rate in Mpumalanga province. Moreover, over half of the students' families are headed by single parents, 43% are unemployed, and the majority have a low level of education. In addition, most families (83%) do not have basic study materials such as a dictionary, books, tables, and chairs. A significant number of students (70%) are over-aged (19 years or older), and close to half of them (49%) are grade repeaters. The school itself lacks essential resources, namely computers, science laboratories, a library, and enough furniture and textbooks. Moreover, teaching and learning are negatively affected by teachers' occasional absenteeism, inadequate lesson preparation, and poor communication skills. Overall, the continuous low performance of students in this school mirrors a vicious circle of multiple negative conditions present within and outside the school. The complexity of factors associated with the underperformance of Grade 12 students in this school calls for a multi-dimensional intervention from government and stakeholders. One important intervention should be the placement of over-aged students and grade repeaters in suitable educational institutions for the benefit of other students.

Keywords: inner context, outer context, over-aged students, vicious cycle

Procedia PDF Downloads 201
432 Critical Mathematics Education and School Education in India: A Study of the National Curriculum Framework 2022 for Foundational Stage

Authors: Eish Sharma

Abstract:

Literature around Mathematics education suggests that democratic attitudes can be strengthened through teaching and learning Mathematics. Furthermore, connections between critical education and Mathematics education are observed in the light of critical pedagogy, locating Critical Mathematics Education (CME) as the theoretical framework. Critical pedagogy applied to Mathematics education is identified as one of the key themes subsumed under Critical Mathematics Education. Through the application of critical pedagogy in mathematics, unequal power relations and social injustice can be identified, analyzed, and challenged. The research question is: have educational policies in India viewed the role of critical pedagogy applied to mathematics education (i.e., critical mathematics education) to ensure social justice as an educational aim? The National Curriculum Framework (NCF), 2005 upholds education for democracy and the role of mathematics education in facilitating the same. More than this, NCF 2005 rests on a critical pedagogy framework and recommends that critical pedagogy be practiced in all dimensions of school education. NCF 2005 visualizes critical pedagogy for the social sciences as well as the sciences, stating that the science curriculum, including mathematics, must be used as an “instrument for achieving social change to reduce the divide based on economic class, gender, caste, religion, and the region”. Furthermore, the implementation of NCF 2005 led to a reform of the syllabus and textbooks in school mathematics at the national level, and critical pedagogy was applied to mathematics textbooks at the primary level. This intervention brought ethnomathematics and critical mathematics education into the school curriculum in India for the first time at the national level. In October 2022, the Ministry of Education launched the National Curriculum Framework for Foundational Stage (NCF-FS), developed in light of the National Education Policy, 2020, for children in the three-to-eight-years age group. I want to find out whether critical pedagogy-based education and critical pedagogy-based mathematics education are carried forward in NCF 2022. To this end, an argument analysis of specific sections of the National Curriculum Framework 2022 document needs to be executed. Des Gasper suggests two tables for such analysis. The first table contains four columns, namely, text component, comments on meanings, possible reformulation of the same text, and identified conclusions and assumptions (both stated and unstated); it supports understanding the components and meanings of the text and is based on Scriven’s model for understanding the components and meanings of words in a text. The second table contains four columns, i.e., claim identified, given data, warrant, and stated qualifier/rebuttal; it describes the structure of the argument and how well the components fit together, and is called the ‘George Table diagram’, based on the Toulmin-Bunn model.

Keywords: critical mathematics education, critical pedagogy, social justice, ethnomathematics

Procedia PDF Downloads 82
431 Investigating English Dominance in a Chinese-English Dual Language Program: Teachers' Language Use and Investment

Authors: Peizhu Liu

Abstract:

Dual language education, also known as immersion education, differs from traditional language programs that teach a second or foreign language as a subject. Instead, dual language programs adopt a content-based approach, using both a majority language (e.g., English, in the case of the United States) and a minority language (e.g., Spanish or Chinese) as a medium of instruction to teach math, science, and social studies. By granting each language of instruction equal status, dual language education seeks to educate not only meaningfully but equitably and to foster tolerance and appreciation of diversity, making it essential for immigrants, refugees, indigenous peoples, and other marginalized students. Despite the cognitive and academic benefits of dual language education, recent literature has revealed that English is disproportionately privileged across dual language programs. Scholars have expressed concerns about the unbalanced status of majority and minority languages in dual language education, as favoring English in this context may inadvertently reaffirm its dominance and moreover fail to serve the needs of children whose primary language is not English. Through a year-long study of a Chinese-English dual language program, the extensively disproportionate use of English has also been observed by the researcher. However, despite the fact that Chinese-English dual language programs are the second-most popular program type after Spanish in the United States, this issue remains underexplored in the existing literature on Chinese-English dual language education. In fact, the number of Chinese-English dual language programs being offered in the U.S. has grown rapidly, from 8 in 1988 to 331 as of 2023. Using Norton and Darvin's investment model theory, the current study investigates teachers' language use and investment in teaching Chinese and English in a Chinese-English dual language program at an urban public school in New York City. The program caters to a significant number of minority children from working-class families. Adopting an ethnographic and discourse analytic approach, this study seeks to understand language use dynamics in the program and how micro- and macro-factors, such as students' identity construction, parents' and teachers' language ideologies, and the capital associated with each language, influence teachers' investment in teaching Chinese and English. The research will help educators and policymakers understand the obstacles that stand in the way of the goal of dual language education—that is, the creation of a more inclusive classroom, which is achieved by regarding both languages of instruction as equally valuable resources. The implications for how to balance the use of the majority and minority languages will also be discussed.

Keywords: dual language education, bilingual education, language immersion education, content-based language teaching

Procedia PDF Downloads 84