Search results for: continual learning
204 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements often fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e., sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
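As an illustration of the kind of non-deterministic tradespace sweep the abstract describes, here is a minimal sketch that enumerates a toy design space of sensor and fusion choices, samples noisy performance outcomes, and ranks designs by expected multi-attribute utility. The sensor names, weights, costs, and Gaussian noise model are illustrative assumptions, not the authors' actual model.

```python
import itertools
import random

# Toy design space: (accuracy utility, cost) per sensor; accuracy bonus per fusion scheme.
# All figures are illustrative assumptions.
SENSORS = {"lidar": (0.9, 120), "radar": (0.7, 60), "camera": (0.6, 20)}
FUSION = {"kalman": 0.05, "bayesian": 0.10}
WEIGHTS = {"accuracy": 0.7, "cost": 0.3}
N_SAMPLES = 500  # Monte Carlo draws per design (the non-deterministic element)

def utility(accuracy, cost):
    # Weighted additive utility; cost mapped to [0, 1], lower cost scoring higher
    return WEIGHTS["accuracy"] * accuracy + WEIGHTS["cost"] * (1 - min(cost, 200) / 200)

designs = []
for sensor, fusion in itertools.product(SENSORS, FUSION):
    acc, cost = SENSORS[sensor]
    samples = []
    for _ in range(N_SAMPLES):
        # Sample a noisy accuracy outcome to model performance uncertainty
        noisy_acc = min(1.0, random.gauss(acc + FUSION[fusion], 0.05))
        samples.append(utility(noisy_acc, cost))
    designs.append(((sensor, fusion), sum(samples) / N_SAMPLES))

# Rank the tradespace by expected utility so stakeholders can inspect the frontier
for design, u in sorted(designs, key=lambda d: -d[1]):
    print(design, round(u, 3))
```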
Procedia PDF Downloads 183
203 Constructing and Circulating Knowledge in Continuous Education: A Study of Norwegian Educational-Psychological Counsellors' Reflection Logs in Post-Graduate Education
Authors: Moen Torill, Rismark Marit, Astrid M. Solvberg
Abstract:
In Norway, every municipality must provide an educational-psychological service (EPS) to support kindergartens and schools in their work with children and youths with special needs. The EPS focuses its work on individuals, aiming to identify special needs and to give advice to teachers and parents when they ask for it. In addition, the service gives priority to prevention and system intervention in kindergartens and schools. To support these demanding tasks, university courses have been established to sustain EPS counsellors' continuous learning. There is, however, a need for more in-depth and systematic knowledge on how counsellors experience the courses they attend. In this study, EPS counsellors' reflection logs during a particular course are investigated. The research question is: what are the content and priorities of the reflections that are communicated in the logs produced by the educational-psychological counsellors during a post-graduate course? The investigated course is a credit course organized over a one-year period in two one-semester modules. The 55 students enrolled in the course all work as EPS counsellors in various municipalities across Norway. At the end of each day throughout the course period, the participants wrote reflection logs about what they had experienced during the day. The data material consists of 165 pages of typed text. The collaborating researchers studied the data material to ascertain, differentiate and understand the meaning of the content in each log. The analysis also involved the search for similarity in content and the development of analytical categories that described the focus and primary concerns in each of the written logs. This involved constant 'critical and sustained discussions' for the mutual construction of meaning between the co-researchers as the categories developed. The process is inspired by Grounded Theory, meaning that the concepts developed during the analysis were derived from the data material and not chosen prior to the investigation. The analysis revealed that the concept 'useful' frequently appeared in the participants' reflections and, as such, 'useful' serves as a core category. The core category is described through three major categories: (1) knowledge sharing with colleagues (concerning direct and indirect work with students with special needs) is useful, (2) reflections on models and theoretical concepts (concerning students with special needs) are useful, and (3) reflection on the role of the EPS counsellor is useful. In all the categories, the notion of usefulness occurs in the participants' emphasis on and acknowledgement of the immediate and direct link between the university course content and their daily work practice. Even if each category has an importance and value of its own, it is crucial that they are understood in connection with one another and as interwoven. It is this connectedness that gives the core category its overarching explanatory power. The knowledge from this study may be a relevant contribution when designing new courses that support continuing professional development for EPS counsellors, whether post-graduate university courses or local courses at EPS offices, and whether in Norway or in other countries.
Keywords: constructing and circulating knowledge, educational-psychological counsellor, higher education, professional development
Procedia PDF Downloads 115
202 The Dilemma of Translanguaging Pedagogy in a Multilingual University in South Africa
Authors: Zakhile Somlata
Abstract:
In the context of international linguistic and cultural diversity, all languages can be used for all purposes. Africa in general, and South Africa in particular, is no exception to this multilingual and multicultural reality. The multilingual and multicultural nature of South African society has a direct bearing on the heterogeneity of South African universities in general. Universities, as centers of research, innovation, and transformation of the entire society, should be at the forefront of leading multilingualism. During colonialism and the apartheid regime, South African universities used English, and to a certain extent Afrikaans, as the only academic languages. The democratic breakthrough of 1994 brought linguistic relief in South Africa. The Constitution of the Republic of South Africa recognizes 11 official languages that should enjoy parity of esteem for the realization of multilingualism. The elevation of the nine previously marginalized indigenous African languages to academic languages in higher education is central to multilingualism. It is high time that an Afrocentric model, rather than a Eurocentric one, underpinned the education system in South Africa at all levels. Almost all South African universities have language policies that seek to promote the access and success of students through multilingualism, but the main dilemma is the implementation of these policies. This study responds to two objectives: (i) to evaluate how selected institutions use language policies for the accessibility and success of students, and (ii) to study how selected universities integrate African languages for both academic and administrative purposes. This paper reflects the language policy practices in one selected University of Technology (UoT) in South Africa. The UoT has its own language policy, which depicts the linguistic diversity of the institution and its commitment to promoting multilingualism. Translanguaging pedagogy, which accommodates the use of minority languages in the teaching and learning process, plays a pivotal role in promoting multilingualism. This research paper employs a mixed methods (quantitative and qualitative) approach. Qualitative data were collected from key informants (insiders and experts), while quantitative data were collected from a cohort of third-year students. A mixed methods approach with a convergent parallel design allows the data to be collected and analysed separately, with the results then compared. Language development initiatives are discussed within the framework of language policy and policy implementation strategies. Theoretically, this paper is rooted in language as a problem, language as a right, and language as a resource. The findings demonstrate that, despite being a multilingual institution, the university perpetuates the marginalization of African languages as academic languages. The findings further display the hegemony of English. The promotion of the status quo compromises the promotion of multilingualism, the Africanization of higher education, and the intellectualization of indigenous African languages in South Africa under a democratic dispensation.
Keywords: afro-centric model, hegemony of English, language as a resource, translanguaging pedagogy
Procedia PDF Downloads 192
201 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem to support safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use 'toxicity' as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist slurs, and the like), in which case context is not needed for a decision, or they are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations in both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
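To make the idea of a hierarchy-aware contextual classifier concrete, here is a minimal PyTorch sketch: each utterance is encoded word by word, the sequence of utterance vectors is summarized by a second recurrent layer, and the target segment is classified jointly with that conversation state. The dimensions, the GRU choice, and the class names are illustrative assumptions; the paper's exact architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class HierarchicalToxicityClassifier(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=128, utt_dim=128, conv_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.word_gru = nn.GRU(emb_dim, utt_dim, batch_first=True)   # words -> utterance vector
        self.conv_gru = nn.GRU(utt_dim, conv_dim, batch_first=True)  # utterances -> thread state
        self.classifier = nn.Linear(conv_dim + utt_dim, 2)           # toxic vs. non-toxic

    def forward(self, context, target):
        # context: (batch, n_utterances, n_words) of token IDs; target: (batch, n_words)
        b, n_utt, n_words = context.shape
        ctx_emb = self.embedding(context.view(b * n_utt, n_words))
        _, ctx_vec = self.word_gru(ctx_emb)                          # (1, b*n_utt, utt_dim)
        ctx_vec = ctx_vec.squeeze(0).view(b, n_utt, -1)
        _, conv_state = self.conv_gru(ctx_vec)                       # summary of the whole thread
        _, tgt_vec = self.word_gru(self.embedding(target))           # encode the target segment
        joint = torch.cat([conv_state.squeeze(0), tgt_vec.squeeze(0)], dim=-1)
        return self.classifier(joint)

# Example forward pass on random token IDs: 4 threads, 3 context utterances each
model = HierarchicalToxicityClassifier()
logits = model(torch.randint(1, 30000, (4, 3, 20)), torch.randint(1, 30000, (4, 20)))
print(logits.shape)  # torch.Size([4, 2])
```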
Procedia PDF Downloads 170
200 Teachers' and Learners' Experiences of Learners' Writing in English First Additional Language
Authors: Jane-Francis A. Abongdia, Thandiswa Mpiti
Abstract:
There is an international concern to develop children's literacy skills. In many parts of the world, the need to become fluent in a second language is essential for gaining meaningful access to education, the labour market and broader social functioning. In spite of these efforts, the problem persists: the level of English language proficiency is far from satisfactory, and for many learners these goals remain unattainable. The issue is more complex in South Africa, as learners are immersed in a second language (L2) curriculum. South Africa is a prime example of a country facing the dilemma of how to effectively equip a majority of its population with English as a second language or first additional language (FAL). Given the multilingual nature of South Africa, with eleven official languages and the position and power of English, the study investigates teachers' and learners' experiences of isiXhosa- and Afrikaans-background learners' writing in English First Additional Language (EFAL). Moreover, possible causes of writing difficulties and teachers' practices for writing are explored. The theoretical and conceptual framework for the study is provided by studies on constructivist and sociocultural theories. In exploring these issues, a qualitative approach through semi-structured interviews, classroom observations, and document analysis was adopted. The data are analysed by critical discourse analysis (CDA). The study identified a weak correlation between teachers' beliefs and their actual teaching practices. Although the teachers believe that writing is as important as listening, speaking, reading, grammar and vocabulary, and that it needs regular practice, the data reveal that they fail to put their beliefs into practice. Moreover, the data revealed that learners were influenced by their home language: when they did not know a word, they would write its isiXhosa or Afrikaans equivalent. Code-switching seems to have instilled a sense of 'dependence on translations', where some learners would not even try to answer English questions but would wait for the teacher to translate the questions into isiXhosa or Afrikaans before attempting to give answers. The findings of the study show a marked improvement in the writing performance of learners who used the process approach to writing. These findings demonstrate the need to assist teachers in shifting away from focusing only on learners' performance (testing and grading) towards a stronger emphasis on the process of writing. The study concludes that the process approach to writing could enable teachers to focus on the various parts of the writing process, which can give learners more freedom to experiment with their language proficiency. It would require that teachers develop a deeper understanding of the process/genre approaches to teaching writing advocated by CAPS. All in all, the study shows that both learners and teachers face numerous challenges relating to writing, which means that more work still needs to be done in this area. The present study argues that teachers teaching EFAL learners should approach writing as a critical and core aspect of learners' education, and that learners should be exposed to intensive writing activities throughout their school years.
Keywords: constructivism, English second language, language of learning and teaching, writing
Procedia PDF Downloads 218
199 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory
Authors: Xiaochen Mu
Abstract:
Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional 'one object, one right' theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling raw data from data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for the different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This aligns well with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and a twofold virtual property rights system. Based on the 'bundle of rights' theory, this paper establishes a specific three-level structure of data rights. The paper analyzes the cases Google v Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN and Imerman v Tchenguiz, and concludes that recognizing property rights over personal data and protecting data under the framework of intellectual property would help establish the tort of misuse of personal information.
Keywords: data protection, property rights, intellectual property, big data
Procedia PDF Downloads 39
198 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining
Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj
Abstract:
Small and medium enterprises (SMEs) play an important role in the economy of many countries. Worldwide, SMEs represent 95% of all businesses and account for 66% of total employment. Existing studies show that the current business environment is highly turbulent and strongly influenced by modern information and communication technologies, forcing SMEs to face more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators is primarily obtained via questionnaires, which is laborious and time-consuming, or is provided by financial institutions and is thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach towards obtaining valuable insights in the business world. WM enables automatic and large-scale collection and analysis of potentially valuable data from various online platforms, including companies' websites. While WM methods have frequently been studied to anticipate growth in sales volume for e-commerce platforms, their application to the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM holds great potential for revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities. In an initial step, we examine how structured and semi-structured Web data in governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. Data on SMEs provided by a large Swiss insurance company is used as ground truth (i.e., growth-labeled) data to train the growth prediction model. Different machine learning classification algorithms, such as the Support Vector Machine, Random Forest and Artificial Neural Network, are applied and compared, with the goal of optimizing prediction performance. The results are compared to those from previous studies in order to assess the contribution of growth indicators retrieved from the Web to increasing the predictive power of the model.
Keywords: data mining, SME growth, success factors, web mining
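A minimal scikit-learn sketch of the classifier comparison the abstract describes, using synthetic data as a stand-in for the growth-labeled SME dataset (which is proprietary); the feature counts and model settings are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for growth-labeled SME data: rows are companies, columns mix
# web-mined features (e.g., website text statistics) with firmographics.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Neural Network": make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0)),
}
for name, model in models.items():
    # 5-fold cross-validation to compare predictive performance across algorithms
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```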
Procedia PDF Downloads 267
197 Multicultural Education in the National Context: A Study of Peoples' Friendship University of Russia
Authors: Maria V. Mishatkina
Abstract:
The modelling of a dialogical environment is an essential feature of modern education. The dialogue of cultures is a foundation and an important prerequisite for the formation of a person's main moral qualities, such as the ability to understand another person, which is manifested in values like tolerance, respect, mutual assistance and mercy. The formation of a modern expert occurs in an educational environment that is significantly different from what we had several years ago. Nowadays, university education has qualitatively new characteristics. They may be observed at Peoples' Friendship University of Russia (RUDN University), a top Russian higher education institution which unites representatives of more than 150 countries. The content of its educational strategies is not an adapted cultural experience but material at the intersection of science and innovation. Besides, RUDN University's profiles and specializations do not map directly onto professional structures: students study not a profession in the strict sense but the basic scientific foundations of activity in different socio-cultural areas (science, business and education). RUDN University also provides a considerable set of professional education components: foreign language skills; economic, political, ethnic, communication and computer culture; theory of information; and basic management skills. Moreover, there is a rich social life (festive multicultural events, theme parties, journeys) and prospects concerning the inclusive approach to education (for example, a special course 'Social Pedagogy: Issues of Tolerance'). In our research, we use such methods as analysis of modern and contemporary scientific literature, opinion polls (involving students, teachers and research workers) and comparative data analysis. We came to the conclusion that knowledge transfer for RUDN students happens through setting goals, problems, issues, tasks and situations which simulate the ambiguous, innovative environments of their future work, potentially preparing them for a dialogical way of life. However, all these factors may not take effect if there is no 'personal inspiration' of students by communicative and dialogic values, or no participation in the system of meanings and tools of learning activity represented by cooperation within the dialogue of scientific and pedagogical schools. We also found that the dominant strategies for ensuring the quality of education are those that put students in the position of subjects of their own education. Today these strategies should involve task-based, contextual, modelling, specialized, game-imitating and dialogical approaches, the method of practical situations, etc. Therefore, a university in the modern sense is not only an educational institution but also a generator of innovation, cooperation among nations and cultural progress. RUDN University has been performing exactly this mission for many decades.
Keywords: dialogical developing situation, dialogue of cultures, readiness for dialogue, university graduate
Procedia PDF Downloads 219
196 Audio-Visual Co-Data Processing Pipeline
Authors: Rita Chattopadhyay, Vivek Anand Thoutam
Abstract:
Speech is the most natural means of communication, allowing us to quickly exchange feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type them, and in the same way it is easier to listen to audio played from a device than to extract output from it. With robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design the 'Audio-Visual Co-Data Processing Pipeline', an integrated version of automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of the modules mentioned above, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input that contains information about the target objects to be detected and the start and end times of the interval to extract from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the natural language model Generative Pre-Trained Transformer-3 (GPT-3). Based on the summary, essential frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these extracted frames. Frame numbers that contain target objects (the objects specified in the speech command) are saved as text. Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. This project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels; the pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats, implemented by including sample examples in the prompt used by the GPT-3 model. Based on user preference, one can define a new speech command format by including some examples of that format in the GPT-3 prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded using this pipeline so that one can give speech commands and have the output played from the device.
Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech
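To show the control flow of the four-stage pipeline, here is a runnable sketch in which every helper (transcribe, summarize_command, detect_objects, speak) is a hypothetical stub standing in for the QuartzNet ASR, GPT-3, YOLO, and text-to-speech stages; none of these is a real OpenVINO API call, and a real system would replace each stub with the corresponding model inference.

```python
def transcribe(audio):  # stub: a real system would run an ASR model here
    return "find the dog and the person between frame 10 and frame 20"

def summarize_command(text):  # stub for the language-model step: extract targets + interval
    return {"start": 10, "end": 20, "labels": {"dog", "person"}}

def detect_objects(frame):  # stub for the object detector; returns labels found in a frame
    return {"dog"} if frame % 2 == 0 else set()

def speak(message):  # stub for the text-to-speech step
    print("TTS:", message)

def run_pipeline(audio_command, video_frames):
    text = transcribe(audio_command)                     # speech command -> text
    spec = summarize_command(text)                       # text -> structured query
    hits = [i for i in range(spec["start"], spec["end"])
            if spec["labels"] & detect_objects(video_frames[i])]
    speak(f"Target objects found in frames: {hits}")     # result read back to the user
    return hits

print(run_pipeline(b"raw-audio", list(range(30))))
```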
Procedia PDF Downloads 80
195 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection
Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa
Abstract:
Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. The classification of airborne laser scanning (ALS) point clouds is a very important task that remains a real challenge for many scientists. The support vector machine (SVM) is one of the most widely used kernel-based statistical learning algorithms. SVM is a non-parametric method, recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs robust non-linear classification of samples. Data are rarely linearly separable; SVMs map the data into a higher-dimensional space in which they become linearly separable, while allowing all the computations to be performed in the original space. This is one of the main reasons that SVMs are well suited to high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, an SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied to cluster the point cloud. Secondly, the resulting clusters are fed into the SVM classifier. A radial basis function (RBF) kernel is used because of the small number of parameters (C and γ) that need to be chosen, which decreases computation time. In order to optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The LiDAR point cloud used is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets are selected randomly several times. The obtained results demonstrated that parameter selection can narrow the search to a restricted interval of (C, γ) that can be further explored, but does not systematically lead to the optimal rates. The SVM classifier with selected hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision trees. The comparison showed the superiority of the SVM classifier with parameter selection for LiDAR data over the other classifiers.
Keywords: classification, airborne LiDAR, parameters selection, support vector machine
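A minimal scikit-learn sketch of the described parameter selection: grid search over (C, γ) for an RBF-kernel SVM with 5-fold cross-validation. Synthetic features stand in for the per-cluster ALS descriptors, and the grid values are illustrative, not those of the study.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for per-cluster ALS features (e.g., height statistics,
# echo counts, intensity); 4 classes as in the abstract.
X, y = make_classification(n_samples=400, n_features=6, n_informative=4,
                           n_classes=4, random_state=0)

param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid,
    cv=5,  # 5-fold cross-validation, as in the study
    scoring="accuracy",
)
search.fit(X, y)
print("best (C, gamma):", search.best_params_,
      "cross-validated accuracy:", round(search.best_score_, 3))
```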
Procedia PDF Downloads 147
194 An Evidence-Based Laboratory Medicine (EBLM) Test to Help Doctors in the Assessment of the Pancreatic Endocrine Function
Authors: Sergio J. Calleja, Adria Roca, José D. Santotoribio
Abstract:
Pancreatic endocrine diseases include pathologies like insulin resistance (IR), prediabetes, and type 2 diabetes mellitus (DM2). Some of them are highly prevalent in the U.S. (40% of U.S. adults have IR, 38% have prediabetes, and 12% have DM2), as reported by the National Center for Biotechnology Information (NCBI). Building upon this imperative, the objective of the present study was to develop a non-invasive test for the assessment of the patient's pancreatic endocrine function and to evaluate its accuracy in detecting various pancreatic endocrine diseases, such as IR, prediabetes, and DM2. This approach to a routine blood and urine test is based on serum and urine biomarkers and combines several independent public algorithms, such as the Adult Treatment Panel III (ATP-III), the triglycerides and glucose (TyG) index, homeostasis model assessment-insulin resistance (HOMA-IR), HOMA-2, and the quantitative insulin-sensitivity check index (QUICKI). Additionally, it incorporates essential measurements such as creatinine clearance, estimated glomerular filtration rate (eGFR), urine albumin-to-creatinine ratio (ACR), and urinalysis, which help to achieve a full picture of the patient's pancreatic endocrine disease. To evaluate the estimated accuracy of this test, an iterative process was performed by a machine learning (ML) algorithm with a training set of 9,391 patients. The sensitivity achieved was 97.98% and the specificity was 99.13%. Consequently, the area under the receiver operating characteristic (AUROC) curve, the positive predictive value (PPV), and the negative predictive value (NPV) were 92.48%, 99.12%, and 98.00%, respectively. The algorithm was validated with a randomized controlled trial (RCT) with a target sample size (n) of 314 patients. However, 50 patients were initially excluded from the study because they had ongoing clinically diagnosed pathologies, symptoms or signs, so n dropped to 264 patients. Then, 110 patients were excluded because they did not show up at the clinical facility for any of the follow-up visits (a critical point to improve for the upcoming RCT, since the cost per patient is very high and almost a third of the patients already tested were lost), so the new n consisted of 154 patients. After that, 2 patients were excluded because some of their laboratory parameters and/or clinical information were wrong or incorrect. Thus, a final n of 152 patients was achieved. In this validation set, the results obtained were: 100.00% sensitivity, 100.00% specificity, 100.00% AUROC, 100.00% PPV, and 100.00% NPV. These results suggest that this approach to a routine blood and urine test holds promise in providing timely and accurate diagnoses of pancreatic endocrine diseases, particularly among individuals aged 40 and above. Given the current epidemiological state of these types of diseases, these findings underscore the significance of early detection. Furthermore, they advocate for further exploration, prompting the intention to conduct a clinical trial involving 26,000 participants (from March 2025 to December 2026).
Keywords: algorithm, diabetes, laboratory medicine, non-invasive
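Three of the public indices the test combines have well-known published formulas, sketched below with conventional units (glucose and triglycerides in mg/dL, insulin in µU/mL). The example values are illustrative, decision thresholds are population-dependent and omitted, and the study's exact combination rule is not reproduced here.

```python
import math

def homa_ir(glucose, insulin):
    # HOMA-IR = fasting glucose (mg/dL) x fasting insulin (uU/mL) / 405
    return glucose * insulin / 405

def quicki(glucose, insulin):
    # QUICKI = 1 / (log10(insulin) + log10(glucose))
    return 1 / (math.log10(insulin) + math.log10(glucose))

def tyg(glucose, triglycerides):
    # TyG index = ln(fasting triglycerides (mg/dL) x fasting glucose (mg/dL) / 2)
    return math.log(triglycerides * glucose / 2)

# Example: one fasting sample (illustrative values, not study data)
g, i, tg = 105, 12, 160
print(f"HOMA-IR={homa_ir(g, i):.2f}  QUICKI={quicki(g, i):.3f}  TyG={tyg(g, tg):.2f}")
```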
Procedia PDF Downloads 32
193 Parenting Interventions for Refugee Families: A Systematic Scoping Review
Authors: Ripudaman S. Minhas, Pardeep K. Benipal, Aisha K. Yousafzai
Abstract:
Background: Children of refugee or asylum-seeking background have multiple, complex needs (e.g., trauma, mental health concerns, separation, relocation, poverty) that place them at increased risk of developing learning problems. Families encounter challenges accessing support during resettlement, preventing children from achieving their full developmental potential. There are very few studies in the literature that examine the unique parenting challenges refugee families face. Providing appropriate support services and educational resources that address these distinctive concerns of refugee parents will alleviate these challenges, allowing for better developmental outcomes for children. Objective: To identify the characteristics of effective parenting interventions that address the unique needs of refugee families. Methods: English-language articles published from 1997 onwards were included if they described or evaluated programmes or interventions for parents of refugee or asylum-seeking background, globally. Data were extracted and analyzed according to Arksey and O'Malley's descriptive analysis model for scoping reviews. Results: Seven studies met the criteria and were included, primarily studying families settled in high-income countries. Refugee parents identified parenting as a major concern, citing alienating or unwelcoming services, language barriers, and lack of familiarity with school and early-years services. Services that focused on building parents' resilience, provided parent education or services in the family's native language, and offered families safe spaces to promote parent-child interactions were most successful. Home-visit and family-centered programs showed particular success, minimizing barriers such as transportation and inflexible work schedules while allowing caregivers to receive feedback from facilitators. The vast majority of studies evaluated programs implementing existing curricula and frameworks. Interventions were designed in a prescriptive manner, without direct participation by family members and without directly addressing accessibility barriers. The studies also did not employ evaluation measures of parenting practices, the caregiving environment, or child development outcomes, focusing primarily on parental perceptions. Conclusion: There is scarce literature describing parenting interventions for refugee families. Successful interventions focused on building parenting resilience and capacity in families' native languages. To date, there are no studies that employ a participatory approach to program design to tailor content or accessibility, and few that employ parenting, developmental, behavioural, or environmental outcome measures.
Keywords: asylum-seekers, developmental pediatrics, parenting interventions, refugee families
Procedia PDF Downloads 161
192 Data Science/Artificial Intelligence: A Possible Panacea for Refugee Crisis
Authors: Avi Shrivastava
Abstract:
In 2021, two heart-wrenching scenes, shown live on television screens across countries, painted a grim picture of refugees. One was of people clinging to an airplane's wings in their desperate attempt to flee war-torn Afghanistan; they ultimately fell to their deaths. The other was of U.S. government authorities separating children from their parents or guardians to deter migrants and refugees from coming to the U.S. These events show the desperation refugees feel when trying to leave their homes in disaster zones. However, data paints a grave picture of the current refugee situation, and it indicates that a bleak future lies ahead for refugees across the globe. Data and information are the two threads that intertwine to weave the fabric of modern society. They are often used interchangeably, but they differ considerably: information analysis reveals rationale and logic, while data analysis reveals patterns. Patterns revealed by data can enable us to create the tools needed to combat the huge problems at hand. Data analysis paints a clear picture so that the decision-making process becomes simple. Geopolitical and economic data can be used to predict future refugee hotspots, and accurately predicting the next refugee hotspots will allow governments and relief agencies to prepare better for future refugee crises. The refugee crisis does not have binary answers. Given the emotionally wrenching nature of the ground realities, experts often shy away from realistically stating things as they are, and this hesitancy can cost lives. When decisions are based solely on data, emotions can be removed from the decision-making process. Data also presents irrefutable evidence and tells whether there is a solution or not; it can respond to a non-binary crisis with a binary answer. Because of all that, it becomes easier to tackle a problem. Data science and A.I. can predict future refugee crises. With the recent explosion of data due to the rise of social media platforms, data and insights into data have helped solve many social and political problems. Data science can also help solve many issues refugees face while staying in refugee camps or adopted countries. This paper looks into various ways data science can help solve refugee problems. A.I.-based chatbots can help refugees seek legal help to find asylum in the country where they want to settle, and can point them to marketplaces where people willing to help can offer assistance. Data science and technology can also help address many of the problems refugees face, including food, shelter, employment, security, and assimilation. The refugee problem is one of the most challenging for social and political reasons. Data science and machine learning can help prevent refugee crises and solve or alleviate some of the problems that refugees face in their journey to a better life. With the explosion of data in the last decade, data science has made it possible to address many geopolitical and social issues.
Keywords: refugee crisis, artificial intelligence, data science, refugee camps, Afghanistan, Ukraine
Procedia PDF Downloads 72
191 Developing Computational Thinking in Early Childhood Education
Authors: Kalliopi Kanaki, Michael Kalogiannakis
Abstract:
Nowadays, in the digital era, the early acquisition of basic programming skills and knowledge is encouraged, as it facilitates students' exposure to computational thinking and empowers their creativity, problem-solving skills, and cognitive development. More and more researchers and educators investigate the introduction of computational thinking in K-12, since it is expected to be a fundamental skill for everyone by the middle of the 21st century, just as reading, writing and arithmetic are at the moment. In this paper, doctoral research in progress is presented which investigates the infusion of computational thinking into the science curriculum in early childhood education. The project aims to develop young children's computational thinking by introducing them to the fundamental concepts of object-oriented programming in an enjoyable, yet educational, framework. The backbone of the research is the digital environment PhysGramming (an abbreviation of Physical Science Programming), which provides children with the opportunity to create their own digital games, turning them from passive consumers into active creators of technology. PhysGramming deploys an innovative hybrid schema of visual and text-based programming techniques, with emphasis on object orientation. Through PhysGramming, young students are familiarized with basic object-oriented programming concepts, such as classes, objects, and attributes, while at the same time getting a view of object-oriented programming syntax. Nevertheless, the most noteworthy feature of PhysGramming is that children create their own digital games within the context of physical science courses, in a way that provides familiarization with the basic principles of object-oriented programming and computational thinking, even though no explicit reference is made to these principles. Attuned to the ethical guidelines of educational research, interventions were conducted in two second-grade classes. The interventions were designed with respect to the thematic units of the physical science curriculum, as part of the learning activities of the class. PhysGramming was integrated into the classroom after short introductory sessions. During the interventions, children aged 6-7 worked in pairs on computers and created their own digital games (group games, matching games, and puzzles). The authors participated in these interventions as observers in order to achieve a realistic evaluation of the proposed educational framework concerning its applicability in the classroom and its educational and pedagogical perspectives. To better examine whether the objectives of the research were met, the investigation focused on six criteria: the educational value of PhysGramming, its engaging and enjoyable characteristics, its child-friendliness, its appropriateness for its proposed purpose, its ability to monitor the user's progress, and its individualizing features. In this paper, the functionality of PhysGramming and the philosophy of its integration in the classroom are both described in detail. Information about the implemented interventions and the results obtained is also provided. Finally, several limitations of the research that deserve attention are noted.
Keywords: computational thinking, early childhood education, object-oriented programming, physical science courses
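For readers unfamiliar with the object-oriented concepts the abstract names (classes, objects, attributes), here is a tiny illustrative example framed as a matching-game card of the kind children build. It is not PhysGramming code, whose syntax the abstract does not show; the class and attribute names are invented for illustration.

```python
class GameCard:
    """A card in a matching game about physical science (a class)."""

    def __init__(self, name, material, floats_in_water):
        self.name = name                        # attribute: what the card shows
        self.material = material                # attribute: what it is made of
        self.floats_in_water = floats_in_water  # attribute: a physical property

    def matches(self, other):
        # Two cards match when they share the same floating behaviour
        return self.floats_in_water == other.floats_in_water

# Two objects (instances) of the same class, each with its own attribute values
cork = GameCard("cork", "wood", True)
stone = GameCard("stone", "rock", False)
print(cork.matches(stone))  # False: cork floats, stone sinks
```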
Procedia PDF Downloads 120
190 Servant Leadership and Organisational Climate in South African Private Schools: A Qualitative Study
Authors: Christo Swart, Lidia Pottas, David Maree
Abstract:
Background: It is widely acknowledged that the South African educational system finds itself in a profound crisis and that traditional school leadership styles are outdated and hinder quality education. New thinking is needed to improve the status quo, and school leadership has an immense role to play in improving the current situation. It is believed that the servant leadership paradigm, when practiced by school leadership, may have a significant influence on the school environment as a whole. This study investigates the private school segment in search of constructive answers to assist with the educational crisis in South Africa. It is assumed that where school leadership can foster a supportive and empowering environment in which teachers can constructively engage in their teaching and learning activities, many challenges facing the school system may be overcome in a productive manner. Aim: The aim of this study is fourfold: to outline the constructs of servant leadership (SL) which are perceived by teachers of private schools as priorities for enhancing a successful school environment; to describe the constructs of organizational climate (OC) which are observed by teachers of private schools as priorities for enhancing a successful school environment; to investigate whether the participants perceived a link between the constructs of servant leadership and organizational climate; and to consider the process to be followed to introduce the constructs of SL and OC to the school system in general, as perceived by participants. Method: This study utilized a qualitative approach to explore the mediation between school leadership and organizational climate in private schools in the search for workable answers. The participants were purposefully selected for the study. Focus group interviews were held with participants from primary and secondary schools, and a focus group discussion was conducted with principals of both primary and secondary schools. The interview data were transcribed and analyzed, and identical patterns of coded data were grouped together under emerging themes. Findings: It was found that the practice of servant leadership by school leadership indeed mediates a constructive and positive school climate. The constructs of empowerment, accountability, humility and courage, interlinking with one another, are prominent servant leadership concepts that are perceived by teachers of private schools as priorities for school leadership in enhancing a successful school environment. It was confirmed that the groupings of training and development, communication, trust and work environment are perceived by teachers of private schools as prominent features of organizational climate, as practiced by school leadership, for augmenting a successful school environment. It can be concluded that the participants perceived several links between the constructs of servant leadership and organizational climate that encourage a constructive school environment, and that there is clear support and motivation for introducing the two concepts to the school system in general. It is recommended that school leadership mentor and guide teachers to take ownership of the constructs of servant leadership as well as organizational climate, and that public schools be researched and encouraged to implement the two paradigms. The study suggests that aspirant teachers be exposed to leadership as well as organizational paradigms during their studies at university.
Keywords: empowering environment for teachers and learners, new thinking required, organizational climate, school leadership, servant leadership
Procedia PDF Downloads 222
189 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review
Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni
Abstract:
Water use in agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water, and understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim could be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment: assessments of plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), monitoring of changes, and mapping of irrigated areas. Calculating thresholds for soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining remote sensing (RS), the Internet of Things (IoT), artificial intelligence (AI), and machine learning algorithms (MLAs) can improve measurement accuracy and automate irrigation scheduling. This review surveys about 100 recent research studies to analyze varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based Variable Rate Application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. A further objective is to assess RS indices formed by choosing specific reflectance bands, to identify the correct spectral bands for optimizing classification techniques, and to analyze Proximal Optical Sensors (POSs) for monitoring changes. The innovation of this paper lies in categorizing evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and the sensitivity of crops to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea of this research is then to analyze the sources and magnitudes of error arising from the different approaches within these three parts, as reported by recent studies. Additionally, the concluding overview decomposes the different approaches into optimizing indices, calibration methods for sensors, thresholding and prediction models prone to errors, and improvements in classification accuracy for mapping changes.
Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing
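Two standard indices from the remote-sensing literature this review covers can be computed directly from band values: NDVI from red and near-infrared reflectance, and the empirical Crop Water Stress Index (CWSI) from canopy temperature against wet and dry baselines. The arrays and baseline temperatures below are synthetic stand-ins, not values from any reviewed study.

```python
import numpy as np

# Synthetic per-pixel reflectance (fractions) for a 2x2 scene
red = np.array([[0.10, 0.12], [0.30, 0.28]])
nir = np.array([[0.60, 0.55], [0.35, 0.33]])
ndvi = (nir - red) / (nir + red)  # dense, healthy canopy -> NDVI approaches 1

# Canopy temperature from thermal imagery (deg C) with illustrative baselines
t_canopy = np.array([[26.0, 27.5], [31.0, 32.5]])
t_wet, t_dry = 24.0, 34.0  # non-water-stressed and non-transpiring baselines
cwsi = (t_canopy - t_wet) / (t_dry - t_wet)  # 0 = unstressed, 1 = fully stressed

print("NDVI:\n", ndvi.round(2))
print("CWSI:\n", cwsi.round(2))
```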
Procedia PDF Downloads 71
188 A Smart Sensor Network Approach Using Affordable River Water Level Sensors
Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan
Abstract:
Recent developments in sensors, wireless data communication and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the 'Internet of Things (IoT)' has brought sensor research to a new level, involving the development of long-lasting, low-cost, environmentally friendly and smart sensors; new wireless data communication technologies; big data analytics algorithms; and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor networks consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on the field gateway; the data transmission layer, where data and instruction exchanges happen; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; however, to summarize them all, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, the collected data will be far too large for traditional applications to send, store or process, so the sensor unit must be intelligent enough to pre-process collected data locally on board (this process may occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model and the machine learning based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next-generation smart sensor network. For example, in a water level monitoring system, a weather forecast can be obtained from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate or switch sleep mode on or off. In this paper, we describe the deployment of 11 affordable water level sensors in the Dublin catchment. The objective of this paper is to use the deployed river level sensor network in the Dodder catchment in Dublin, Ireland as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, assisting agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Using the 11 deployed river level sensors as a networked case study, a vision of the next generation of smart sensor networks is proposed. Each key component of the smart sensor network is discussed, which will hopefully inspire researchers working in the sensor research domain.
Keywords: smart sensing, internet of things, water level sensor, flooding
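A minimal sketch of the "statistical model" style of on-board smart sensing mentioned above: flag a river-level reading as an event when it deviates from a rolling window by more than k standard deviations. This is not the MoPBAS algorithm; the window size, k, and the example series are illustrative tuning choices, not values from the Dodder deployment.

```python
import statistics

def detect_events(levels, window=20, k=3.0):
    """Return indices of readings that deviate from the recent rolling baseline."""
    events = []
    for i in range(window, len(levels)):
        recent = levels[i - window:i]
        mu, sigma = statistics.mean(recent), statistics.stdev(recent)
        if sigma > 0 and abs(levels[i] - mu) > k * sigma:
            events.append(i)  # candidate flood signal; could trigger a faster sampling rate
    return events

# Example: steady readings followed by a sudden rise in water level
series = [1.00 + 0.01 * (i % 3) for i in range(40)] + [1.8, 2.2, 2.5]
print(detect_events(series))  # indices of the anomalous rise
```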
Procedia PDF Downloads 381187 Current Applications of Artificial Intelligence (AI) in Chest Radiology
Authors: Angelis P. Barlampas
Abstract:
Learning Objectives: The purpose of this study is to inform the reader briefly about the applications of AI in chest radiology. Background: Currently, there are 190 FDA-approved radiology AI applications, with 42 (22%) pertaining specifically to thoracic radiology. Imaging findings or procedure details: AI aids chest radiology in the following ways. It detects and segments pulmonary nodules. It subtracts bone to provide an unobstructed view of the underlying lung parenchyma and provides further information on nodule characteristics, such as nodule location, nodule two-dimensional size or three-dimensional (3D) volume, change in nodule size over time, attenuation data (i.e., mean, minimum, and/or maximum Hounsfield units [HU]), morphological assessments, or combinations of the above. It reclassifies indeterminate pulmonary nodules into low or high risk with higher accuracy than conventional risk models. It detects pleural effusion and differentiates tension pneumothorax from nontension pneumothorax. It detects cardiomegaly, calcification, consolidation, mediastinal widening, atelectasis, fibrosis and pneumoperitoneum. It automatically localizes vertebral segments, labels ribs and detects rib fractures. It measures the distance from the tube tip to the carina and localizes both endotracheal tubes and central vascular lines. It detects consolidation and progression of parenchymal diseases such as pulmonary fibrosis or chronic obstructive pulmonary disease (COPD), and it can evaluate lobar volumes. It identifies and labels pulmonary bronchi and vasculature, quantifies air-trapping, and offers emphysema evaluation. It provides functional respiratory imaging, whereby high-resolution CT images are post-processed to quantify airflow by lung region, and may be used to quantify key biomarkers such as airway resistance, air-trapping, ventilation mapping, lung and lobar volume, and blood vessel and airway volume. It assesses the lung parenchyma by way of density evaluation, providing percentages of tissues within defined attenuation (HU) ranges besides furnishing automated lung segmentation and lung volume information. It improves image quality for noisy images with a built-in denoising function. It detects emphysema, a common condition seen in patients with a history of smoking, and hyperdense or opacified regions, thereby aiding in the diagnosis of certain pathologies, such as COVID-19 pneumonia. It aids in cardiac segmentation and calcium detection, aorta segmentation and diameter measurements, and vertebral body segmentation and density measurements. Conclusion: The future is yet to come, but AI is already a helpful tool for daily practice in radiology. It is expected that the continuing progress of computerized systems and improvements in software algorithms will render AI a second pair of hands for the radiologist.Keywords: artificial intelligence, chest imaging, nodule detection, automated diagnoses
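As one concrete example of the density-evaluation capability described above, the following sketch computes the percentage of voxels falling within defined HU ranges. The random CT slice is a placeholder, and the -950 HU emphysema cutoff is a common convention rather than a value taken from this study.

```python
# A hedged sketch of attenuation-range percentages on a CT slice
# (the slice is random placeholder data, not a real patient image).
import numpy as np

ct_slice = np.random.randint(-1024, 400, size=(512, 512))  # HU values

def percent_in_range(hu: np.ndarray, lo: int, hi: int) -> float:
    """Percentage of voxels whose attenuation lies in [lo, hi) HU."""
    return 100.0 * np.mean((hu >= lo) & (hu < hi))

# -950 HU is a commonly used emphysema cutoff; both ranges are illustrative.
print("emphysema-like (< -950 HU):", percent_in_range(ct_slice, -1024, -950))
print("normal lung (-950 to -700 HU):", percent_in_range(ct_slice, -950, -700))
```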
Procedia PDF Downloads 72186 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate that there is a strong correlation between the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more distinctive features, which challenge the autoencoder's compressive capacity, are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error achieved relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
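A minimal sketch of this analysis pipeline might look as follows: reconstruct images with an autoencoder, take the per-image mean squared error as reconstruction error, take the Euclidean distance to the nearest latent neighbor as distinctiveness, and correlate both with memorability. The tiny untrained autoencoder and random tensors are placeholders for the VGG-based model and the MemCat images; this is not the authors' code.

```python
# A minimal sketch of the paper's analysis pipeline, under stated
# assumptions: a toy autoencoder and random data stand in for the
# VGG-based model and the MemCat images and scores.
import torch
import torch.nn as nn
from scipy.stats import spearmanr

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

autoencoder = TinyAutoencoder().eval()
images = torch.rand(100, 3, 64, 64)  # placeholder for MemCat images
memorability = torch.rand(100)       # placeholder memorability scores

with torch.no_grad():
    recon, z = autoencoder(images)
    # Reconstruction error: mean squared difference per image.
    errors = ((recon - images) ** 2).flatten(1).mean(dim=1)
    # Distinctiveness: Euclidean distance to the nearest latent neighbor.
    latents = z.flatten(1)
    dists = torch.cdist(latents, latents)
    dists.fill_diagonal_(float("inf"))  # exclude self-distance
    distinctiveness = dists.min(dim=1).values

print(spearmanr(errors.numpy(), memorability.numpy()))
print(spearmanr(distinctiveness.numpy(), memorability.numpy()))
```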
Procedia PDF Downloads 90185 Implementing Quality Improvement Projects to Enhance Contraception and Abortion Care Service Provision and Pre-Service Training of Health Care Providers
Authors: Munir Kassa, Mengistu Hailemariam, Meghan Obermeyer, Kefelegn Baruda, Yonas Getachew, Asnakech Dessie
Abstract:
Improving the quality of sexual and reproductive health services that women receive is expected to have an impact on women’s satisfaction with the services, on their continued use and, ultimately, on their ability to achieve their fertility goals or reproductive intentions. Surprisingly, however, there is little empirical evidence of whether this expectation is correct, or of how best to improve service quality within sexual and reproductive health programs so that these impacts can be achieved. The recent focus on quality has prompted more physicians to do quality improvement work, but often without the needed skill sets, which results in poorly conceived and ultimately unsuccessful improvement initiatives. As this renders the work unpublishable, it further impedes progress in the field of health care improvement and widens the quality chasm. Since 2014, the Center for International Reproductive Health Training (CIRHT) has worked diligently with 11 teaching hospitals across Ethiopia to increase access to contraception and abortion care services. This work has included improving pre-service training through education and curriculum development, expanding hands-on training to better learn critical techniques and counseling skills, and fostering a “team science” approach to research by encouraging scientific exploration. This is the first time this systematic approach has been applied and documented to improve access to high-quality services in Ethiopia. The purpose of this article is to report the initiatives undertaken and the findings reached by the clinical service team at CIRHT, in an effort to provide a pragmatic approach to quality improvement projects. An audit containing nearly 300 questions about several aspects of patient care, including structure, process, and outcome indicators, was completed by each teaching hospital’s quality improvement team. This baseline audit assisted in identifying major gaps and barriers, and each team was responsible for determining specific quality improvement aims and tasks to support change interventions using Shewhart’s Cycle for Learning and Improvement (the Plan-Do-Study-Act model). To measure progress over time, quality improvement teams met biweekly and compiled monthly data for review. Also, site visits to each hospital were completed by the clinical service team to ensure monitoring and support. The results indicate that applying an evidence-based, participatory approach to quality improvement has the potential to increase the accessibility and quality of services in a short amount of time. In addition, continued ownership and on-site support are vital in promoting sustainability. This approach could be adapted and applied in similar contexts, particularly in other African countries.Keywords: abortion, contraception, quality improvement, service provision
Procedia PDF Downloads 222184 Leadership Education for Law Enforcement Mid-Level Managers: The Mediating Role of Effectiveness of Training on Transformational and Authentic Leadership Traits
Authors: Kevin Baxter, Ron Grove, James Pitney, John Harrison, Ozlem Gumus
Abstract:
The purpose of this research is to determine the mediating effect of the effectiveness of the training provided by Northwestern University’s School of Police Staff and Command (SPSC) on the ability of law enforcement mid-level managers to learn transformational and authentic leadership traits. This study will also evaluate the leadership styles of course graduates compared to non-attendees, using a static group comparison design. The Louisiana State Police pay approximately $40,000 in salary, tuition, housing, and meals for each state police lieutenant attending the 10-week SPSC program. This school lists the development of transformational leaders as an increasing emphasis of its curriculum. Additionally, the SPSC curriculum addresses all four components of authentic leadership: self-awareness, transparency, ethical/moral perspective, and balanced processing. Upon students’ return to law enforcement in mid-level management roles, there are questions as to whether they revert to an “autocratic” leadership style. Insufficient evidence exists to support claims for the effectiveness of management training or leadership development. Though it is widely recognized that transformational styles are beneficial to law enforcement, there is little evidence that suggests police leadership styles are changing. Police organizations continue to hold to a more transactional style (i.e., most senior police leaders remain autocrats). Additionally, research on the application of transformational, transactional, and laissez-faire leadership in police organizations is minimal. The population of the study is law enforcement mid-level managers from various states within the United States who completed leadership training presented by the SPSC. The sample will be composed of 66 active law enforcement mid-level managers (lieutenants and captains) who have graduated from SPSC and 65 active law enforcement mid-level managers (lieutenants and captains) who have not attended SPSC. Participants will answer demographic questions, the Multifactor Leadership Questionnaire, the Authentic Leadership Questionnaire, and the Kirkpatrick Hybrid Evaluation Survey. Descriptive statistics, group comparisons, a one-way MANCOVA, and the Kirkpatrick Evaluation Model survey will be used to determine training effectiveness across the four levels of reaction, learning, behavior, and results. Independent variables are SPSC graduates (two groups: upper and lower) and non-SPSC attendees, and dependent variables are transformational and authentic leadership scores. SPSC graduates are expected to have higher MLQ scores for transformational leadership traits and higher ALQ scores for authentic leadership traits than SPSC non-attendees. We also expect the graduates to rate the efficacy of SPSC leadership training as high. This study will validate (or invalidate) the benefits, costs, and resources required for leadership development through a nationally recognized police leadership program, and it will also help fill the gap in the literature that exists between law enforcement professional development and transformational and authentic leadership styles.Keywords: training effectiveness, transformational leadership, authentic leadership, law enforcement mid-level manager
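A hedged sketch of the planned one-way MANCOVA, using statsmodels, is shown below. The variable names, the covariate and the random data are illustrative assumptions; only the group factor, the sample size of 131, and the two leadership outcomes follow the design described above.

```python
# A hedged sketch of a one-way MANCOVA of the kind the study plans
# (random data stands in for the survey results; the covariate and
# column names are illustrative assumptions, not the study's variables).
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["spsc_upper", "spsc_lower", "non_spsc"], size=131),
    "years_service": rng.integers(5, 30, size=131),   # hypothetical covariate
    "mlq_transformational": rng.normal(3.0, 0.5, size=131),
    "alq_authentic": rng.normal(3.2, 0.4, size=131),
})

# Dependent variables on the left; group factor plus covariate on the right.
fit = MANOVA.from_formula(
    "mlq_transformational + alq_authentic ~ group + years_service", data=df
)
print(fit.mv_test())
```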
Procedia PDF Downloads 105183 Employing Remotely Sensed Soil and Vegetation Indices and Predicting by Long Short-Term Memory to Irrigation Scheduling Analysis
Authors: Elham Koohikerade, Silvio Jose Gumiere
Abstract:
In this research, irrigation is highlighted as crucial for improving both the yield and quality of potatoes due to their high sensitivity to soil moisture changes. The study presents a hybrid Long Short-Term Memory (LSTM) model aimed at optimizing irrigation scheduling in potato fields in Quebec City, Canada. This model integrates model-based and satellite-derived datasets to simulate soil moisture content, addressing the limitations of field data. Developed under the guidance of the Food and Agriculture Organization (FAO), the simulation approach compensates for the lack of direct soil sensor data, enhancing the LSTM model's predictions. The model was calibrated using indices such as Surface Soil Moisture (SSM), the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), and the Normalized Multi-band Drought Index (NMDI) to effectively forecast soil moisture reductions. Understanding soil moisture and plant development is crucial for assessing drought conditions and determining irrigation needs. This study validated the spectral characteristics of vegetation and soil using ECMWF Reanalysis v5 (ERA5) and Moderate Resolution Imaging Spectroradiometer (MODIS) data from 2019 to 2023, collected from agricultural areas in Dolbeau and Peribonka, Quebec. Parameters such as surface volumetric soil moisture (0-7 cm), NDVI, EVI, and NMDI were extracted from these images. A regional four-year dataset of soil and vegetation moisture was developed using a machine learning approach combining model-based and satellite-based datasets. The LSTM model predicts soil moisture dynamics hourly across different locations and times, with its accuracy verified through cross-validation and comparison with existing soil moisture datasets. The model effectively captures temporal dynamics, making it valuable for applications requiring soil moisture monitoring over time, such as anomaly detection and memory analysis. By identifying typical peak soil moisture values and observing distribution shapes, irrigation can be scheduled to maintain soil moisture within Volumetric Soil Moisture (VSM) values of 0.25 to 0.30 m³/m³, avoiding under- and over-watering. The strong correlations between parcels suggest that a uniform irrigation strategy might be effective across multiple parcels, with adjustments based on specific parcel characteristics and historical data trends. The application of the LSTM model to predict soil moisture and vegetation indices yielded mixed results: while the model effectively captures the central tendency and temporal dynamics of soil moisture, it struggles to predict EVI, NDVI, and NMDI accurately.Keywords: irrigation scheduling, LSTM neural network, remotely sensed indices, soil and vegetation monitoring
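A minimal sketch of the kind of LSTM described here appears below: hourly sequences of [SSM, NDVI, EVI, NMDI] map to a next-step soil moisture prediction. The layer sizes, sequence length and random tensors are illustrative assumptions, not the authors' configuration.

```python
# A minimal sketch of an LSTM of the kind this study uses, assuming
# hourly sequences of [SSM, NDVI, EVI, NMDI] as input and next-hour
# volumetric soil moisture as target; sizes are illustrative only.
import torch
import torch.nn as nn

class SoilMoistureLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):              # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predict from the last time step

model = SoilMoistureLSTM()
x = torch.rand(32, 168, 4)   # one week of hourly index values (placeholder)
y = torch.rand(32, 1)        # observed soil moisture (placeholder)

loss = nn.MSELoss()(model(x), y)
loss.backward()              # one illustrative training step
print(loss.item())
```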
Procedia PDF Downloads 41182 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence
Authors: Sogand Barghi
Abstract:
The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting
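As a toy illustration of the transaction-monitoring idea sketched above, the following snippet fits an isolation forest to a synthetic ledger and flags outlying transactions. The features and contamination rate are illustrative assumptions; a production fraud-detection system would be far richer.

```python
# A toy sketch of the anomaly-flagging idea described above, not a
# production fraud-detection system; all data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features: [amount, time-of-day bucket]; most transactions look normal.
normal = rng.normal(loc=[100, 2], scale=[20, 1], size=(990, 2))
fraud = rng.normal(loc=[5000, 3.5], scale=[500, 0.2], size=(10, 2))
transactions = np.vstack([normal, fraud])

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 marks suspicious transactions
print("flagged:", int((flags == -1).sum()))
```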
Procedia PDF Downloads 71181 Multilingual Students Acting as Language Brokers in Italy: Their Points of View and Feelings towards This Activity
Authors: Federica Ceccoli
Abstract:
Italy is undergoing one of its largest migratory waves, and Italian schools are reporting the highest numbers of multilingual students coming from immigrant families and speaking minority languages. For these pupils, who have not yet fully acquired their mother tongue, learning a second language may represent a burden on their linguistic development and may have repercussions on their school performance and relational skills. These are some of the reasons why they tend to have the worst grades and the highest school drop-out rates. However, despite these negative outcomes, it has been demonstrated that multilingual immigrant students frequently act as translators or language brokers for their peers or family members who do not speak Italian fluently. This activity has been defined as Child Language Brokering (hereinafter CLB), and it has become a common practice especially in minority communities, as immigrants’ children often learn the host language much more quickly than their parents, thus contributing to their family life by acting as language and cultural mediators. This presentation aims to analyse the data collected by a study carried out during the school year 2014-2015 in the province of Ravenna, in the Northern Italian region of Emilia-Romagna, among 126 immigrant students attending junior high schools. The purpose of the study was to analyse, by means of a structured questionnaire, whether multilingualism was accompanied by language brokering experiences, and to examine the perspectives of those students who reported having acted as translators, using their linguistic knowledge to help people understand each other. The questionnaire consisted of 34 items roughly divided into two sections. The first section required multilingual students to provide personal details such as their date and place of birth, as well as details about their families (number of siblings, parents’ jobs). In the second section, they were asked about the languages spoken in their families as well as their language brokering experience. The in-depth questionnaire sought to investigate a wide variety of brokering issues, such as the frequency and purpose of the activity; where, when and which documents young language brokers translate; and how they feel about this practice. The results demonstrate that CLB is a very common practice among immigrants’ children living in Ravenna, and almost all students reported positive feelings when asked about their brokering experience with their families and also at school. In line with previous studies, responses to the questionnaire item regarding the people they brokered for revealed that the category ranking first is parents. Similarly, language-brokering activities tend to occur most often at home, and the documents they translate most (either orally or in writing) are notes from teachers. Such positive feelings towards this activity, together with the evidence that it occurs very often in schools, have laid the foundation for further projects on how this common practice may be valued and used to strengthen the linguistic skills of these multilingual immigrant students and thus their school performance.Keywords: immigration, language brokering, multilingualism, students' points of view
Procedia PDF Downloads 179180 The ‘Othered’ Body: Deafness and Disability in Nina Raine’s Tribes
Authors: Nurten Çelik
Abstract:
In light of new developments in science, medicine, sociology, psychology and literary theory, body studies has gained huge importance and the body has become a debatable issue. There has emerged, among sociologists and literary theorists, an overwhelming consensus that the body is socially, politically and culturally perceived and constructed, and thus the position of an individual in society is determined in accordance with his/her body image. In this regard, the most complicated point is the theoretical views propounded in disability studies, where the disabled body is considered to be a site upon which social and political restrictions as well as repressions are inscribed. The widely accepted view is that, whatever the kind of disability, those with physical, mental or learning impairments face varied social, political and environmental obstacles that prevent them from being active citizens, workers, lovers and even family members. In parallel with these approaches, the matter of the sufferings of disabled individuals attains its place in cinema and literature as well as in theatre studies under the category of disability theatre. One of the prominent plays that deal with physical disability came from the contemporary British playwright Nina Raine. In her award-winning play Tribes, which premiered at the Royal Court Theatre in 2010, Raine develops the social strata in which her deaf protagonist, Billy, caught between two tribes – namely his family and his lover Sylvia, a member of the deaf community – experiences personal and social hardships due to his hearing impairment. In the play, intransigent and self-opinionated family members foster no sense of empathy towards Billy; there is noisy talking and shouting but no communication, love, compassion or mutual understanding, and language becomes just a tool for the expression of rage and oppression. In the disordered atmosphere of the family life, Billy experiences isolation and loneliness. Billy’s hopes for success and love are destroyed when Sylvia, caught between hearing and deafness, rejects him because she does not fully grasp what he is experiencing. Drawing upon the hardships Billy undergoes in his relationships with his family and his girlfriend, Tribes problematizes the concept of deafness and explores to what extent a deaf person can find a place in the hearing world. Setting ‘the disabled’ bodies against ‘the abled’ bodies in a family, a microcosm of the society in which bodies are socially shaped and constructed, Tribes dramatizes how disabled bodies are disenfranchised, stigmatized, marginalized and othered on the grounds that they are seen as socially misfit. Tribes, with its specific focus on the dysfunctional family, shows that the lack of communication and empathy numbs the characters to each other’s feelings, and they thereby become more disabled than Billy. In conclusion, this paper, with reference to theories of the embodiment of disability and social theories, aims to explore how disabled bodies are socially marked and segregated from family and society.Keywords: body, deafness, disability, disability theatre, Nina Raine, tribes
Procedia PDF Downloads 262179 Scaling up Small and Sick Newborn Care Through the Establishment of the First Human Milk Bank in Nepal
Authors: Prajwal Paudel, Shreeprasad Adhikari, Shailendra Bir Karmacharya, Kalpana Upadhyaya
Abstract:
Background: Human milk banks have been recommended by the World Health Organization (WHO) for newborn and child nourishment, providing optimum nutrition as an alternative to breastfeeding in circumstances when direct breastfeeding is inaccessible. Vulnerable babies, mainly preterm, low birth weight, and sick newborns, are at a greater risk of mortality and may benefit from the safe use of donated human milk through milk banks. In this study, we aimed to shed light on the process involved in setting up the nation’s first milk bank and its vital role in small and sick newborn nutrition and care. Methods: The study was conducted in Paropakar Maternity and Women’s Hospital, where the first human milk bank (HMB) was established. The establishment involved a stepwise process: a needs-assessment meeting, formation of the HMB committee, a learning visit to an HMB in India, study of the strengths and weaknesses of promoting breastfeeding and HMB system integration, procurement, installation and setting up of the infrastructure, development of technical competency, and the launch of the HMB. After the initiation of HMB services, information regarding the recruited donor mothers and the volume of milk pasteurized and consumed by the recipient babies was recorded. Descriptive statistics with frequencies and percentages were used to describe the utilization of HMB services. Results: During the study period, a total of 506113 ml of milk was collected, while 49930 ml of milk was pasteurized. Of the pasteurized milk, 381248 ml was dispensed. The milk was received from a total of 883 donor mothers after proper routine screening tests. Similarly, the total number of babies who received the donated human milk (DHM) was 912, with different neonatal conditions. Among the babies who received DHM, 527 (57.7%) were born via cesarean section (CS) and 385 (42.21%) were delivered normally. In the birth weight category, 9 (1%) of the babies were less than 1000 grams, 75 (8.2%) were less than 1500 grams, and 405 (44.4%) were between 1500 and 2500 grams, whereas 423 (46.4%) of the babies who received DHM were normal weight babies. Among the sick newborns, perinatal asphyxia accounted for 166 (18.2%), preterm with other complications 372 (40.7%), preterm 23 (2.02%), respiratory distress 140 (15.35%), neonatal jaundice 150 (16.44%), sepsis 94 (10.30%), meconium aspiration syndrome 9 (1%), seizure disorder 28 (3.07%), congenital anomalies 13 (1.42%) and others 33 (3.61%). The neonatal mortality rate dropped to 6.2/1000 live births in the first year of establishment, from 7.5/1000 live births the previous year. Conclusion: The establishment of the first HMB in Nepal involved a comprehensive approach to integrating a new system with existing newborn care in the provision of safe DHM. Premature babies with complications, babies born via CS, babies with perinatal asphyxia and babies with sepsis consumed the greater proportion of DHM. Rigorous research is warranted to assess the impact of DHM on small and sick newborns who would otherwise be fed formula milk.Keywords: human milk bank, sick-newborn, mortality, neonatal nutrition
Procedia PDF Downloads 11178 The Academic Importance of the Arts in Fostering Belonging
Authors: Ana Handel, Jamal Ellerbe, Sarah Kanzaki, Natalie White, Nathan Ousey, Sean Gallagher
Abstract:
A sense of belonging is the ability of individuals to feel they are a necessary part of whatever organization or community they find themselves in. In an academic setting, a sense of belonging is key to a student’s success. The collected research points to this sense of belonging in academic settings as a significant contributor to students’ levels of engagement and trust. When universities leverage the arts, students are provided with more opportunities to engage and feel confident in their surroundings. This allows for greater potential to develop within academic and social settings. The arts also promote diversity, equity, and inclusion by showcasing the works of artists from all different backgrounds, thus allowing students to gain cultural knowledge and embrace differences. Equity, diversity, and inclusion are all emotional facets of belonging. Equity relates to the concept of making the conscious choice to recognize opportunities to incorporate inclusive and diverse ideals into different thought processes and collaboration. Inclusion involves providing equal access to opportunities and resources for people of all in-groups. In an inclusive culture, individuals are able to maximize their potential with the confidence they have gained through an accepting environment. A variety of members of academic communities have noted that it may be beneficial to build events surrounding the arts into course requirements in order to ensure students are expanding their horizons and exposing themselves to the arts. These academics also recommend incorporating the arts into extracurricular activities, such as Greek life, in order to appeal to large groups of students. Once students have an understanding of the rich knowledge cultivated through exploring the arts, they will feel more comfortable in their surroundings and thus more confident to become involved in other areas of their university. A number of universities, including West Chester and Carnegie Mellon, have instituted programs aiming to provide students with the necessary tools and resources to feel comfortable in their educational settings. These programs include resources such as hotlines for reporting discrimination and offices for diversity, equity, and inclusion. Staff members have also been provided with means of combating biases and increasing feelings of belonging in order to properly support and communicate with students. These tools have successfully allowed universities to foster inviting environments in which students of all backgrounds feel they belong, as well as strengthening the community’s diversity, equity, and inclusion. Through demonstrating concepts of diversity, equity, and inclusion by introducing the arts into learning spaces, students can find a sense of belonging within their academic environments. It is essential to understand these topics and how they work together to achieve a common goal. The efforts of universities have made much progress in shedding light on different cultures and ideas, showing students their full potential and opportunities. Once students feel more comfortable within their organizations, engagement will increase substantially.Keywords: arts, belonging, engagement, inclusion
Procedia PDF Downloads 165177 Bridging the Educational Gap: A Curriculum Framework for Mass Timber Construction Education and Comparative Analysis of Physical vs. Virtual Prototypes in Construction Management
Authors: Farnaz Jafari
Abstract:
The surge in mass timber construction represents a pivotal moment in sustainable building practices, yet the lack of comprehensive education in construction management poses a challenge in harnessing this innovation effectively. This research endeavors to bridge this gap by developing a curriculum framework integrating mass timber construction into undergraduate and industry certificate programs. To optimize learning outcomes, the study explores the impact of two prototype formats, Virtual Reality (VR) simulations and physical mock-ups, on students' understanding and skill development. The curriculum framework aims to equip future construction managers with a holistic understanding of mass timber, covering its unique properties, construction methods, building codes, and sustainable advantages. The study adopts a mixed-methods approach, commencing with a systematic literature review and leveraging surveys and interviews with educators and industry professionals to identify existing educational gaps. The iterative development process involves incorporating stakeholder feedback into the curriculum. The evaluation of prototype impact employs pre- and post-tests administered to participants engaged in pilot programs. Through qualitative content analysis and quantitative statistical methods, the study seeks to compare the effectiveness of VR simulations and physical mock-ups in conveying knowledge and skills related to mass timber construction. The anticipated findings will illuminate the strengths and weaknesses of each approach, providing insights for future curriculum development. The curriculum's expected contribution to sustainable construction education lies in its emphasis on practical application, bridging the gap between theoretical knowledge and hands-on skills. The research also seeks to establish a standard for mass timber construction education, contributing to the field through a unique comparative analysis of VR simulations and physical mock-ups. The study's significance extends to the development of best practices and evidence-based recommendations for integrating technology and hands-on experiences in construction education. By addressing current educational gaps and offering a comparative analysis, this research aims to enrich the construction management education experience and pave the way for broader adoption of sustainable practices in the industry. The envisioned curriculum framework is designed for versatile integration, catering to undergraduate programs and industry training modules, thereby enhancing the educational landscape for aspiring construction professionals. Ultimately, this study underscores the importance of proactive educational strategies in preparing industry professionals for the evolving demands of the construction landscape, facilitating a seamless transition towards sustainable building practices.Keywords: curriculum framework, mass timber construction, physical vs. virtual prototypes, sustainable building practices
Procedia PDF Downloads 72176 Digital Skepticism In A Legal Philosophical Approach
Authors: dr. Bendes Ákos
Abstract:
Digital skepticism, a critical stance towards digital technology and its pervasive influence on society, presents significant challenges when analyzed from a legal philosophical perspective. This abstract aims to explore the intersection of digital skepticism and legal philosophy, emphasizing the implications for justice, rights, and the rule of law in the digital age. Digital skepticism arises from concerns about privacy, security, and the ethical implications of digital technology. It questions the extent to which digital advancements enhance or undermine fundamental human values. Legal philosophy, which interrogates the foundations and purposes of law, provides a framework for examining these concerns critically. One key area where digital skepticism and legal philosophy intersect is in the realm of privacy. Digital technologies, particularly data collection and surveillance mechanisms, pose substantial threats to individual privacy. Legal philosophers must grapple with questions about the limits of state power and the protection of personal autonomy. They must consider how traditional legal principles, such as the right to privacy, can be adapted or reinterpreted in light of new technological realities. Security is another critical concern. Digital skepticism highlights vulnerabilities in cybersecurity and the potential for malicious activities, such as hacking and cybercrime, to disrupt legal systems and societal order. Legal philosophy must address how laws can evolve to protect against these new forms of threats while balancing security with civil liberties. Ethics plays a central role in this discourse. Digital technologies raise ethical dilemmas, such as the development and use of artificial intelligence and machine learning algorithms that may perpetuate biases or make decisions without human oversight. Legal philosophers must evaluate the moral responsibilities of those who design and implement these technologies and consider the implications for justice and fairness. Furthermore, digital skepticism prompts a reevaluation of the concept of the rule of law. In an increasingly digital world, maintaining transparency, accountability, and fairness becomes more complex. Legal philosophers must explore how legal frameworks can ensure that digital technologies serve the public good and do not entrench power imbalances or erode democratic principles. Finally, the intersection of digital skepticism and legal philosophy has practical implications for policy-making. Legal scholars and practitioners must work collaboratively to develop regulations and guidelines that address the challenges posed by digital technology. This includes crafting laws that protect individual rights, ensure security, and promote ethical standards in technology development and deployment. In conclusion, digital skepticism provides a crucial lens for examining the impact of digital technology on law and society. A legal philosophical approach offers valuable insights into how legal systems can adapt to protect fundamental values in the digital age. By addressing privacy, security, ethics, and the rule of law, legal philosophers can help shape a future where digital advancements enhance, rather than undermine, justice and human dignity.Keywords: legal philosophy, privacy, security, ethics, digital skepticism
Procedia PDF Downloads 43175 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis
Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel
Abstract:
Antimicrobial drugs have played an indispensable role in controlling illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have adapted defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on the appropriate media (this first culturing stage lasts 24 h). Then, chosen colonies are grown on media containing antibiotic(s), using micro-diffusion discs (this second culturing also takes 24 h), in order to determine bacterial susceptibility. Other approaches, including genotyping methods, the E-test and automated systems, have also been developed for testing antimicrobial susceptibility. Most of these methods are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. Modern infrared (IR) spectrometers with high spectral resolution enable the measurement of unprecedented biochemical information from cells at the molecular level. Moreover, the development of new bioinformatics analyses combined with IR spectroscopy yields a powerful technique that enables the detection of structural changes associated with resistance. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The UTI E. coli bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories at Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E. coli samples, were promising and showed that by using this infrared spectroscopic technique together with multivariate analysis, it is possible to classify the tested bacteria into sensitive and resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing the FTIR microscopy technique as a rapid and reliable method for identifying antibiotic susceptibility.Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility, UTI
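A minimal sketch of the kind of multivariate pipeline such a study might use is shown below: dimensionality reduction on the spectra followed by a linear classifier, scored by cross-validation. The random spectra and labels are placeholders, and the PCA-plus-LDA choice is an assumption; the authors' exact algorithms are not specified here.

```python
# A minimal sketch of a multivariate spectra-classification pipeline
# (random spectra and labels are placeholders; PCA + LDA is an assumed
# choice of method, not necessarily what the study used).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
spectra = rng.normal(size=(700, 900))     # 700 samples x 900 wavenumbers
resistant = rng.integers(0, 2, size=700)  # 0 = sensitive, 1 = resistant

clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, spectra, resistant, cv=5)
print("cross-validated accuracy:", scores.mean())
```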
Procedia PDF Downloads 171