Search results for: hidden Markov models (HMM)
31 Increasing Student Engagement through Culturally-Responsive Classroom Management
Authors: Catherine P. Bradshaw, Elise T. Pas, Katrina J. Debnam, Jessika H. Bottiani, Michael Rosenberg
Abstract:
Worldwide, ethnically and culturally diverse students are at increased risk for school failure, discipline problems, and dropout. Despite decades of concern about disparities in education and other fields (e.g., the 'school-to-prison pipeline'), there has been limited empirical examination of models that can actually reduce these gaps in schools. Moreover, few studies have examined the effectiveness of in-service teacher interventions and supports specifically designed to reduce discipline disparities and improve student engagement. This session provides an overview of the evidence-based Double Check model, which serves as a framework for teachers to use culturally-responsive strategies to engage ethnically and culturally diverse students in the classroom and reduce discipline problems. Specifically, Double Check is a school-based prevention program which includes three core components: (a) enhancements to the school-wide Positive Behavioral Interventions and Supports (PBIS) tier-1 level of support; (b) five one-hour professional development training sessions addressing five domains of cultural competence (i.e., connection to the curriculum, authentic relationships, reflective thinking, effective communication, and sensitivity to students’ culture); and (c) coaching of classroom teachers using an adapted version of the Classroom Check-Up, which aims to increase teachers’ use of effective classroom management and culturally-responsive strategies through research-based motivational interviewing and data-informed problem-solving approaches. This paper presents findings from a randomized controlled trial (RCT) testing the impact of Double Check on office discipline referrals (disaggregated by race) and on independently observed and self-reported culturally-responsive practices and classroom behavior management. The RCT included 12 elementary and middle schools; 159 classroom teachers were randomized either to receive coaching or to serve as comparisons. Multilevel analyses indicated that teacher self-reported culturally responsive behavior management improved over the course of the school year for teachers who received the coaching and professional development. Moreover, the average annual office discipline referrals issued to Black students were reduced among teachers who were randomly assigned to receive coaching relative to comparison teachers. Similarly, observations conducted by trained external raters indicated significantly more teacher proactive behavior management and anticipation of student problems, higher student compliance, less student non-compliance, and less socially disruptive behavior in classrooms led by coached teachers than in classrooms led by teachers randomly assigned to the non-coached condition. These findings indicated promising effects of the Double Check model on a range of teacher and student outcomes, including disproportionality in office discipline referrals among Black students. These results also suggest that the Double Check model is one of only a few systematic approaches to promoting culturally-responsive behavior management that has been rigorously tested and shown to be associated with improvements in student and staff outcomes, including significant reductions in discipline problems and improvements in behavior management. Implications of these findings are considered within the broader context of globalization and demographic shifts, and their impacts on schools.
These issues are particularly timely, given growing concerns about immigration policies in the U.S. and abroad. Keywords: ethnically and culturally diverse students, student engagement, school-based prevention, academic achievement
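To illustrate the kind of multilevel analysis the abstract mentions, the following is a minimal sketch in Python: teachers nested within schools, with the randomized coaching condition predicting end-of-year culturally responsive behavior management scores. All variable names and data here are hypothetical placeholders, not the study's data.

```python
# A minimal sketch of a multilevel (mixed-effects) analysis: random intercept
# for school, fixed effect of coaching. Data below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_teachers = 159
df = pd.DataFrame({
    "school": rng.integers(0, 12, n_teachers),   # 12 schools, as in the RCT
    "coached": rng.integers(0, 2, n_teachers),   # randomized condition
})
df["crbm_eoy"] = 3.0 + 0.4 * df["coached"] + rng.normal(0, 0.5, n_teachers)

model = smf.mixedlm("crbm_eoy ~ coached", df, groups=df["school"]).fit()
print(model.summary())
```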
Procedia PDF Downloads 284
30 Working at the Interface of Health and Criminal Justice: An Interpretative Phenomenological Analysis Exploration of the Experiences of Liaison and Diversion Nurses – Emerging Findings
Authors: Sithandazile Masuku
Abstract:
Introduction: Public health approaches to offender mental health are driven by international policies and frameworks in response to the disproportionately large representation of people with mental health problems within the offender pathway compared to the general population. Public health service innovations include mental health courts in the US, restorative models in Singapore, and liaison and diversion services in Australia, the UK, and some other European countries. Mental health nurses are at the forefront of offender health service innovations. In the U.K. context, police custody has been identified as an early point within the offender pathway where nurses can improve outcomes by offering assessments and sharing information with criminal justice partners. This scope of nursing practice has introduced challenges related to the skills and support required for nurses working at the interface of health and the criminal justice system. Parallel literature exploring the experiences of nurses working in forensic settings suggests the presence of compassion fatigue, burnout, and vicarious trauma that may pose a risk of harm to the nurses in these settings. Published research explores mainly service-level outcomes, including monitoring of figures indicative of a reduction in offending behavior. There is minimal research exploring the experiences of liaison and diversion nurses, who are situated away from a supportive clinical environment and engaged in complex autonomous decision-making. Aim: This paper will share qualitative findings (in progress) from a PhD study that aims to explore the experiences of liaison and diversion nurses in one service in the U.K. Methodology: This is a qualitative interview study conducted using an Interpretative Phenomenological Analysis to gain an in-depth analysis of lived experiences. Methods: A purposive sampling technique was used to recruit n=8 mental health nurses registered with the UK professional body, the Nursing and Midwifery Council, from one UK Liaison and Diversion service. All participants were interviewed online via video call using a semi-structured interview topic guide. Data were recorded and transcribed verbatim. Data were analysed using the seven steps of the Interpretative Phenomenological Analysis data analysis method. Emerging Findings: Analysis to date has identified pertinent themes: • Difficulties of meaning-making for nurses because of the complexity of their boundary-spanning role. • Emotional burden experienced in a highly emotive and fast-changing environment. • Stress and difficulties with role identity impacting on individual nurses’ ability to be resilient. • Challenges to wellbeing related to a sense of isolation when making complex decisions. Conclusion: Emerging findings have highlighted the lived experiences of nurses working in liaison and diversion as challenging. The nature of the custody environment has an impact on role identity and decision-making. Nurses left feeling isolated and unsupported are less resilient and may go on to experience compassion fatigue. The findings from this study thus far point to a need to connect nurses working in these boundary-spanning roles with a supportive infrastructure where the complexity of their role is acknowledged and they can be connected with a health agenda. In doing this, the nurses would be protected from harm, and the likelihood of sustained positive outcomes for service users is optimised. Keywords: liaison and diversion, nurse experiences, offender health, staff wellbeing
Procedia PDF Downloads 137
29 Impact of Simulated Brain Interstitial Fluid Flow on the Chemokine CXC-Chemokine-Ligand-12 Release From an Alginate-Based Hydrogel
Authors: Wiam El Kheir, Anais Dumais, Maude Beaudoin, Bernard Marcos, Nick Virgilio, Benoit Paquette, Nathalie Faucheux, Marc-Antoine Lauzon
Abstract:
The highly infiltrative pattern of glioblastoma multiforme (GBM) cells is the main cause of the failure of current standard treatments. The tumor's high heterogeneity, the interstitial fluid flow (IFF), and chemokines guide GBM cell migration in the brain parenchyma, resulting in tumor recurrence. Drug delivery systems have emerged as an alternative approach to developing effective treatments for the disease. Some recent studies have proposed to harness the effect of CXC-chemokine-ligand-12 (CXCL12) to direct and control cancer cell migration through a delivery system. However, the effect of the dynamic brain environment on such delivery systems remains poorly understood. Nanoparticles (NPs) and hydrogels are known as good carriers for the encapsulation of different agents and the control of their release. We studied the release of CXCL12 (free or loaded into NPs) from an alginate-based hydrogel under static and indirect perfusion (IP) conditions. Under static conditions, the main phenomenon driving CXCL12 release from the hydrogel was diffusion, with strong interactions between the positively charged CXCL12 and the negatively charged alginate. CXCL12 release profiles were independent of the initial mass loadings. Afterwards, we demonstrated that the release could be tuned by loading CXCL12 into alginate/chitosan nanoparticles (Alg/Chit-NPs) embedded into the alginate hydrogel. The initial burst release was substantially attenuated, and overall cumulative release percentages of 21%, 16%, and 7% were observed for initial mass loadings of 0.07, 0.13, and 0.26 µg, respectively, suggesting stronger electrostatic interactions. Results were mathematically modeled within a previously developed framework based on Fick's second law of diffusion to estimate the effective diffusion coefficient (Deff) and the mass transfer coefficient. Embedding the CXCL12 into NPs decreased the Deff by an order of magnitude, which was coherent with the experimental data. Thereafter, we developed an in-vitro 3D model that takes into consideration the convective contribution of the brain IFF to study CXCL12 release in an in-vitro microenvironment that mimics the human brain as faithfully as possible. Owing to its unique design, the model also allowed us to understand the effect of IP on CXCL12 release with respect to time and space. Four flow rates (0.5, 3, 6.5, and 10 µL/min), which may influence CXCL12 release in-vivo depending on the tumor location, were assessed. Under IP, cumulative percentages of 4.5-7.3%, 23-58.5%, 77.8-92.5%, and 89.2-95.9% were released at the four flow rates, respectively, for initial mass loadings of 0.08, 0.16, and 0.33 µg. As the flow rate increased, IP culture conditions resulted in a higher release of CXCL12 compared to static conditions, as convection became the main driving mass transport phenomenon. Further, depending on the flow rate, IP had a direct impact on CXCL12 distribution within the simulated brain tissue, which illustrates the importance of developing such 3D in-vitro models to assess the efficiency of a delivery system targeting the brain. In future work, using this very model, we aim to understand the impact of the different phenomena at play on GBM cell behavior in response to the resulting chemokine gradient under various flows, while allowing the cells to express their invasive characteristics in an in-vitro microenvironment that mimics the in-vivo brain parenchyma. Keywords: 3D culture system, chemokines gradient, glioblastoma multiforme, kinetic release, mathematical modeling
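To make the diffusion-modeling step concrete, below is a minimal sketch of estimating an effective diffusion coefficient from cumulative-release data by fitting the early-time solution of Fick's second law for a thin slab. The slab geometry, the half-thickness, and the data points are illustrative assumptions, not the study's actual values or model.

```python
# A minimal sketch: fit Mt/Minf = 4*sqrt(Deff*t/(pi*L^2)) (early-time slab
# approximation of Fick's second law) to hypothetical release data.
import numpy as np
from scipy.optimize import curve_fit

L = 2.0e-3  # assumed hydrogel half-thickness, m (hypothetical)

def early_time_release(t, deff):
    """Valid roughly while the released fraction is below ~0.6."""
    return 4.0 * np.sqrt(deff * t / (np.pi * L**2))

t = np.array([0.5, 1, 2, 4, 8, 24]) * 3600.0           # time, s (hypothetical)
frac = np.array([0.02, 0.03, 0.05, 0.07, 0.10, 0.18])  # Mt/Minf (hypothetical)

popt, _ = curve_fit(early_time_release, t, frac, p0=[1e-12])
print(f"Estimated Deff ~ {popt[0]:.2e} m^2/s")
```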
Procedia PDF Downloads 85
28 Taiwanese Pre-Service Elementary School EFL Teachers’ Perception and Practice of Station Teaching in English Remedial Education
Authors: Chien Chin-Wen
Abstract:
Collaborative teaching has different teaching models, and station teaching is one type of collaborative teaching. Station teaching is not commonly practiced in elementary school English education, nor is it widely introduced in language teacher education programs in Taiwan. In station teaching, each teacher takes a small part of the instructional content, working with a small number of students. Students rotate between stations, where they receive assignments and instruction from different teachers. The teachers provide the same content to each group, but the instructional method can vary based upon the needs of each group of students. This study explores thirty-four Taiwanese pre-service elementary school English teachers’ knowledge about station teaching and the competence they demonstrated in designing activities for and delivering station teaching in an English remedial education program for six sixth graders in a local elementary school in northern Taiwan. The participants were simultaneously enrolled in an Elementary School English Teaching Materials and Methods class, part of an elementary school teacher education program in a northern Taiwan city. The instructor of this class (Jennifer, pseudonym) collaborated with an English teacher (Olivia, pseudonym) at Maureen Elementary School (pseudonym), an urban elementary school in a northwestern Taiwan city. Of Olivia’s students, four male and two female sixth graders needed remedial English education. Olivia chose these six elementary school students because they were in the lowest 5% of their class in terms of English proficiency. The thirty-four pre-service English teachers signed up for and took turns teaching these six sixth graders every Thursday afternoon from four to five o’clock for twelve weeks. The participants generally signed up and taught in teams of three, although the last team consisted of only two pre-service teachers. Each team designed a 40-minute lesson plan on the given language focus (words, sentence patterns, dialogue, phonics) of the assigned unit. Data in this study included the KWLA chart, activity designs, and semi-structured interviews. Data collection lasted for four months, from September to December 2014. Data were analyzed as follows. First, all the notes were read and marked with appropriate codes (e.g., 'I don’t know', 'co-teaching', etc.). Second, tentative categories were labeled (e.g., before, after, process, future implication, etc.). Finally, the data were sorted into topics that reflected the research questions on the basis of their relevance. This study has the following major findings. First of all, the majority of participants knew nothing about station teaching at the beginning of the study. After taking the Elementary School English Teaching Materials and Methods course and after designing and delivering station teaching in an English remedial education program to six sixth graders, they learned that station teaching is co-teaching, and that it includes activity designs for different stations and students’ rotating from station to station. They demonstrated knowledge and skills in activity designs for vocabulary, sentence patterns, dialogue, and phonics. Moreover, they learned to interact with individual learners and to guide them step by step in learning vocabulary, sentence patterns, dialogue, and phonics.
However, they were still incompetent in classroom management, time management, English, and designing diverse and meaningful activities for elementary school students at different English proficiency levels. Hence, language teacher education programs are recommended to integrate station teaching to help pre-service teachers become equipped with eight types of knowledge and competence, including linguistic knowledge, content knowledge, general pedagogical knowledge, curriculum knowledge, knowledge of learners and their characteristics, pedagogical content knowledge, knowledge of educational contexts, and knowledge of education’s ends and purposes. Keywords: co-teaching, competence, knowledge, pre-service teachers, station teaching
Procedia PDF Downloads 428
27 Solid State Fermentation: A Technological Alternative for Enriching Bioavailability of Underutilized Crops
Authors: Vipin Bhandari, Anupama Singh, Kopal Gupta
Abstract:
Solid state fermentation (SSF), an eminent bioconversion technique for converting many biological substrates into value-added products, has a proven role in the biotransformation of crops by nutritionally enriching them. The present study deals with the enhancement of the nutritional characteristics of a composite flour based on the underutilized crops barnyard millet, amaranthus, and horsegram, using SSF as the principal bioconversion technique to convert the composite flour substrate into a nutritionally enriched value-added product. The grains were given pre-treatments before fermentation, and these pre-treatments proved quite effective in diminishing the level of antinutrients in the grains and in improving their nutritional characteristics. Response surface methodology was used to design the experiments. The variables selected for the fermentation experiments were substrate particle size, substrate blend ratio, fermentation time, fermentation temperature, and moisture content, with three levels each. Seventeen designed experiments were conducted randomly to find the effect of these variables on microbial count, reducing sugar, pH, total sugar, phytic acid, and water absorption index. The data from all experiments were analyzed using Design Expert 8.0.6, the response functions were developed using multiple regression analysis, and second-order models were fitted for each response. Results revealed that the pre-treatments were quite helpful in diminishing the level of antinutrients and thus appreciably enhancing the nutritional value of the grains; for instance, there was about a 23% reduction in phytic acid levels after decortication of barnyard millet. The carbohydrate content of the decorticated barnyard millet increased to 81.5% from an initial value of 65.2%. Similarly, popping of horsegram and puffing of amaranthus greatly reduced the trypsin inhibitor activity. Puffing of amaranthus also reduced the tannin content appreciably. Bacillus subtilis was used as the inoculating species since it is known to produce phytases in solid state fermentation systems. These phytases remarkably reduce the phytic acid content, which acts as a major antinutritional factor in food grains. Results of the solid state fermentation experiments revealed that phytic acid levels reduced appreciably when fermentation was allowed to continue for 72 hours at a temperature of 35°C. Particle size and substrate blend ratio also affected the responses positively. All the parameters, viz. substrate particle size, substrate blend ratio, fermentation time, fermentation temperature, and moisture content, affected the responses (microbial count, reducing sugar, pH, total sugar, phytic acid, and water absorption index), but the effect of fermentation time was found to be the most significant on all the responses. Statistical analysis resulted in the optimum conditions (particle size 355 µm; substrate blend ratio 50:20:30 of barnyard millet, amaranthus, and horsegram, respectively; fermentation time 68 hrs; fermentation temperature 35°C; and moisture content 47%) for maximum reduction in phytic acid. The model F-value was found to be highly significant at the 1% level of significance for all the responses. Hence, second-order models could be fitted to predict all the dependent parameters.
The effect of fermentation time was found to be the most significant compared to the other variables. Keywords: composite flour, solid state fermentation, underutilized crops, cereals, fermentation technology, food processing
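Response surface methodology fits second-order (quadratic) polynomials of the kind the abstract describes. The sketch below reproduces the idea in Python with scikit-learn; the coded factor settings and phytic-acid responses are hypothetical placeholders, not the study's design matrix.

```python
# A minimal sketch of fitting a second-order response-surface model (the kind
# Design Expert fits): linear, interaction, and squared terms on coded factors.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Hypothetical coded settings (-1, 0, +1) for time, temperature, moisture
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [0, 0, 0], [0, -1, 1], [0, 1, -1], [1, 0, 1], [-1, 0, -1]])
y = np.array([5.2, 4.1, 4.8, 3.5, 3.9, 4.3, 4.6, 3.2, 5.5])  # phytic acid, mg/100 g

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)
print(model.coef_)  # coefficients of linear, interaction, and squared terms
```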
Procedia PDF Downloads 328
26 Interpretable Deep Learning Models for Medical Condition Identification
Authors: Dongping Fang, Lian Duan, Xiaojing Yuan, Mike Xu, Allyn Klunder, Kevin Tan, Suiting Cao, Yeqing Ji
Abstract:
Accurate prediction of a medical condition with direct clinical evidence is a long-sought goal in the medical management and health insurance fields. Although great progress has been made with machine learning algorithms, the medical community is still, to a certain degree, suspicious about model accuracy and interpretability. This paper presents an innovative hierarchical attention deep learning model that achieves good prediction and clear interpretability that can be easily understood by medical professionals. The model uses a hierarchical attention structure that matches naturally with the medical history data structure and reflects the member’s encounter (date of service) sequence. The attention structure consists of 3 levels: (1) attention on the medical code types (diagnosis codes, procedure codes, lab test results, and prescription drugs), (2) attention on the sequential medical encounters within a type, and (3) attention on the medical codes within an encounter and type. This model is applied to predict the occurrence of stage 3 chronic kidney disease (CKD3), using three years’ medical history of Medicare Advantage (MA) members from a top health insurance company. The model takes members’ medical events, both claims and electronic medical record (EMR) data, as input, makes a prediction of CKD3, and calculates the contribution from individual events to the predicted outcome. The model outcome can be easily explained with the clinical evidence identified by the model algorithm. Here are examples: Member A had 36 medical encounters in the past three years: multiple office visits, lab tests, and medications. The model predicts member A has a high risk of CKD3 based on the following well-contributing clinical events: multiple high ‘Creatinine in Serum or Plasma’ tests and multiple low kidney-function ‘Glomerular filtration rate’ tests. Among the abnormal lab tests, more recent results contributed more to the prediction. The model also indicates that regular office visits, no abnormal findings in medical examinations, and taking proper medications decreased the CKD3 risk. Member B had 104 medical encounters in the past three years and was predicted to have a low risk of CKD3, because the model didn’t identify diagnoses, procedures, or medications related to kidney disease, and many lab test results, including ‘Glomerular filtration rate’, were within the normal range. The model accurately predicts members A and B and provides interpretable clinical evidence that is validated by clinicians. Without extra effort, the interpretation is generated directly from the model and presented together with the occurrence date. Our model uses the medical data in its most raw format without any further data aggregation, transformation, or mapping. This greatly simplifies the data preparation process, mitigates the chance for error, and eliminates the post-modeling work needed for traditional model explanation. To our knowledge, this is the first paper on an interpretable deep-learning model using a 3-level attention structure, sourcing both EMR and claims data, including all 4 types of medical data, on the entire Medicare population of a large insurance company, and, more importantly, directly generating model interpretation to support user decisions. In the future, we plan to enrich the model input by adding patients’ demographics and information from free-text physician notes. Keywords: deep learning, interpretability, attention, big data, medical conditions
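To illustrate the core mechanism, here is a minimal PyTorch sketch of hierarchical attention pooling: attention over codes within an encounter, then over encounters within a member. It is a two-level simplification of the paper's three-level structure, and all dimensions, names, and data are assumptions for illustration, not the authors' architecture.

```python
# A minimal sketch of hierarchical attention pooling (two levels).
import torch
import torch.nn as nn

class AttnPool(nn.Module):
    """Additive attention pooling: weights a set of vectors into one vector."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, x):                         # x: (batch, items, dim)
        w = torch.softmax(self.score(x), dim=1)   # attention weight per item
        return (w * x).sum(dim=1), w              # pooled vector + weights

class HierarchicalAttn(nn.Module):
    def __init__(self, n_codes, dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_codes, dim)
        self.code_pool = AttnPool(dim)   # level 1: codes within an encounter
        self.enc_pool = AttnPool(dim)    # level 2: encounters within a member
        self.clf = nn.Linear(dim, 1)

    def forward(self, codes):            # codes: (batch, encounters, codes)
        b, e, c = codes.shape
        x = self.embed(codes.view(b * e, c))
        enc_vec, code_w = self.code_pool(x)                  # (b*e, dim)
        member_vec, enc_w = self.enc_pool(enc_vec.view(b, e, -1))
        return torch.sigmoid(self.clf(member_vec)), (code_w, enc_w)

model = HierarchicalAttn(n_codes=1000)
prob, attn = model(torch.randint(0, 1000, (2, 5, 8)))  # toy batch of members
```

The returned attention weights are what make each prediction explainable: large weights point at the encounters and codes that drove the risk score.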
Procedia PDF Downloads 91
25 MusicTherapy for Actors: An Exploratory Study Applied to Students from University Theatre Faculty
Authors: Adriana De Serio, Adrian Korek
Abstract:
Aims: This experiential research work presents a Group-MusicTherapy-Theatre-Plan (MusThePlan) that the authors have carried out to support actors. MusicTherapy gives rise to individual psychophysical feedback and influences the emotional centres of the brain and the subconscious. Therefore, the authors underline the effectiveness of the preventive, educational, and training goals of the MusThePlan in leading theatre students and actors to deal with anxiety and to overcome psychophysical weaknesses, shyness, and emotional stress in stage performances; to increase flexibility, awareness of one's identity, and resources for positive self-development and psychophysical health; and to develop and strengthen social bonds, increasing a network of subjects working for social inclusion and the reduction of stigma. Materials-Methods: Thirty students from the University Theatre Faculty participated in weekly music therapy sessions for two months; each session lasted 120 minutes. MusThePlan: Each session began with a free group rhythmic-sonorous-musical production by body percussion, voice-canto, and instruments, to stimulate communication. Then, a synchronized-structured bodily-rhythmic-sonorous-musical production also involved acting, dances, movements of the hands and arms, hearing and other sensorial perceptions, and speech to balance motor skills and muscular tone. Each student could be the director-leader of the group, indicating a story to inspire the group's musical production. The third step involved the students in rhythmic speech and singing drills and in vocal exercises focusing on musical pitch to improve intonation and on diction to improve articulation and thereby increase intelligibility. At the end of each musictherapy session and of the two months, the Musictherapy Assessment Document was drawn up from an analysis of observation protocols and two indices devised by the authors: the Patient-Environment-Music-Index (time t0-tn) to estimate behavioral evolution, and the Somatic Pattern Index to monitor the subject's eye, mouth, and limb motility and perspiration before, during, and after musictherapy sessions. Results: After the first month, the students (non-musicians) had learned to play percussion instruments and formed a musical band that played classical/modern music on the percussion instruments with the musictherapist/pianist/conductor in a public concert. At the end of the second month, the students performed a public musical theatre show, acting, dancing, singing, and playing percussion instruments. The students highlighted the importance of the playful aspects of the group musical production in achieving emotional contact and harmony within the group. The students said they had improved kinetic, vocal, and other skills useful for acting, and the nourishment of their bodily and emotional balance. Conclusions: The MusThePlan makes use of specific MusicTherapy methodological models, techniques, and strategies useful for actors. The MusThePlan can destroy the individual "mask" and can be useful when verbal language is unable to undermine the defense mechanisms of the subject. The MusThePlan improves the actor's psychophysical activation, motivation, gratification, knowledge of one's own possibilities, and quality of life. Therefore, the MusThePlan could be used to carry out targeted interventions for actors with characteristics of repeatability, objectivity, and predictability of results.
Furthermore, it would be useful to plan a university course or master's program in “MusicTherapy for the Theatre”. Keywords: musictherapy, sonorous-musical energy, quality of life, theatre
Procedia PDF Downloads 79
24 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU
Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais
Abstract:
Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), opacity and unfairness ‘sins’ must be addressed for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be exponentiated. A hetero-personalized identity can be imposed on the individual(s) affected. Also, autonomous CWA sometimes lacks transparency when using black-box models. However, for this intended purpose, human analysts ‘on-the-loop’ might not be the best remedy consumers are looking for in credit. This study seeks to explore the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of the EU Directive 2023/2225 of 18 October, which will go into effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA’s Art. 2. Consequently, engineering the law of consumer CWA is imperative. Generally, the proposed MAS framework consists of several layers arranged in a specific sequence, as follows: firstly, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose Neural Network model is trained using k-fold Cross Validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step-Agents; and, lastly, the Oversight Layer has a 'Bottom Stop' for analysts to intervene in a timely manner. From the analysis, one can see that a vital component of this software is the XAI layer. It appears as a transparent curtain covering the AI’s decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as share of wallet, among others, to search for more advantageous solutions to incomplete evaluation appraisals based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, when put into service, credit analysts no longer exert full control over the data-driven entities programmers have given ‘birth’ to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be vilified inherently. The issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules. Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking
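To illustrate the kind of feature-attribution step the XAI layer describes, here is a minimal sketch using the SHAP library on a toy credit model. The model choice, feature names, and data are illustrative assumptions, not the framework's actual components.

```python
# A minimal sketch: SHAP values quantifying each feature's contribution to a
# credit-score prediction, the kind of per-feature evidence the XAI layer needs.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "tenure", "utilization"]  # hypothetical
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])   # per-feature contributions
print(dict(zip(feature_names, shap_values[0].round(3))))  # first applicant
```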
Procedia PDF Downloads 36
23 Sinhala Sign Language to Grammatically Correct Sentences using NLP
Authors: Anjalika Fernando, Banuka Athuraliya
Abstract:
This paper presents a comprehensive approach for converting Sinhala Sign Language (SSL) into grammatically correct sentences using Natural Language Processing (NLP) techniques in real time. While previous studies have explored various aspects of SSL translation, the research gap lies in the absence of grammar checking for SSL. This work aims to bridge this gap by proposing a two-stage methodology that leverages deep learning models to detect signs and translate them into coherent sentences, ensuring grammatical accuracy. The first stage of the approach involves the utilization of a Long Short-Term Memory (LSTM) deep learning model to recognize and interpret SSL signs. By training the LSTM model on a dataset of SSL gestures, it learns to accurately classify and translate these signs into textual representations. The LSTM model achieves a commendable accuracy rate of 94%, demonstrating its effectiveness in accurately recognizing and translating SSL gestures. Building upon the successful recognition and translation of SSL signs, the second stage of the methodology focuses on improving the grammatical correctness of the translated sentences. The project employs a Neural Machine Translation (NMT) architecture, consisting of an encoder and decoder with LSTM components, to enhance the syntactical structure of the generated sentences. By training the NMT model on a parallel corpus of ungrammatical Sinhala sentences and their corresponding grammatically correct translations, it learns to generate coherent and grammatically accurate sentences. The NMT model achieves an impressive accuracy rate of 98%, affirming its capability to produce linguistically sound translations. The proposed approach offers significant contributions to the field of SSL translation and grammar correction. Addressing the critical issue of grammar checking, it enhances the usability and reliability of SSL translation systems, facilitating effective communication between hearing-impaired and non-sign language users. Furthermore, the integration of deep learning techniques, such as LSTM and NMT, ensures the accuracy and robustness of the translation process. This research holds great potential for practical applications, including educational platforms, accessibility tools, and communication aids for the hearing-impaired. Furthermore, it lays the foundation for future advancements in SSL translation systems, fostering inclusive and equal opportunities for the deaf community. Future work includes expanding the existing datasets to further improve the accuracy and generalization of the SSL translation system. Additionally, the development of a dedicated mobile application would enhance the accessibility and convenience of SSL translation on handheld devices. Furthermore, efforts will be made to enhance the current application for educational purposes, enabling individuals to learn and practice SSL more effectively. Another area of future exploration involves enabling two-way communication, allowing seamless interaction between sign-language users and non-sign-language users. In conclusion, this paper presents a novel approach for converting Sinhala Sign Language gestures into grammatically correct sentences using NLP techniques in real time. The two-stage methodology, comprising an LSTM model for sign detection and translation and an NMT model for grammar correction, achieves high accuracy rates of 94% and 98%, respectively.
By addressing the lack of grammar checking in existing SSL translation research, this work contributes significantly to the development of more accurate and reliable SSL translation systems, thereby fostering effective communication and inclusivity for the hearing-impaired community. Keywords: Sinhala sign language, sign language, NLP, LSTM, NMT
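To make the first stage concrete, below is a minimal Keras sketch of an LSTM classifier over sequences of gesture keypoints. The input shape, sign vocabulary size, and use of Keras are illustrative assumptions, not the authors' exact architecture.

```python
# A minimal sketch of stage 1: an LSTM classifier mapping keypoint sequences
# to sign labels. Shapes and data are hypothetical placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SIGNS = 50               # hypothetical SSL sign vocabulary
FRAMES, FEATURES = 30, 126   # e.g., 30 video frames x 126 hand-keypoint values

model = models.Sequential([
    layers.Input(shape=(FRAMES, FEATURES)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(64),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy data standing in for extracted keypoint sequences
X = np.random.rand(8, FRAMES, FEATURES).astype("float32")
y = np.random.randint(0, NUM_SIGNS, size=8)
model.fit(X, y, epochs=1, verbose=0)
```

The predicted sign labels would then be concatenated into a draft sentence and passed to the encoder-decoder NMT stage for grammar correction.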
Procedia PDF Downloads 107
22 Geospatial and Statistical Evidences of Non-Engineered Landfill Leachate Effects on Groundwater Quality in a Highly Urbanised Area of Nigeria
Authors: David A. Olasehinde, Peter I. Olasehinde, Segun M. A. Adelana, Dapo O. Olasehinde
Abstract:
An investigation was carried out on underground water system dynamics within the Ilorin metropolis to monitor the subsurface flow and its corresponding pollution. Africa's population growth rate is the highest among the regions of the world, especially in urban areas. A corresponding increase in waste generation and a change in waste composition from predominantly organic to non-organic waste have also been observed. Percolation of leachate from non-engineered landfills, the chief means of waste disposal in many of its cities, constitutes a threat to underground water bodies. Ilorin city, a transboundary town in southwestern Nigeria, is a ready microcosm of Africa's unique challenge. Although groundwater is naturally protected from common contaminants such as bacteria, since the subsurface provides a natural attenuation process, groundwater samples have nevertheless been noted to possess relatively high levels of dissolved chemical contaminants such as bicarbonate, sodium, and chloride, which pose a great threat to environmental receptors and human consumption. A Geographic Information System (GIS) was used as a tool to illustrate subsurface dynamics and the corresponding pollutant indicators. Forty-four sampling points were selected around known groundwater pollutant sources: major old dumpsites without landfill liners. The results of the groundwater flow directions and the corresponding contaminant transport were presented using expert geospatial software. The experimental results were subjected to four descriptive statistical analyses, namely: principal component analysis, Pearson correlation analysis, scree plot analysis, and Ward cluster analysis. A regression model was also developed, aimed at finding functional relationships that can adequately describe the behaviour of water quality in terms of the hypothetical landfill-related factors that may influence it, namely: distance of the water source from dumpsites, static water level of groundwater, subsurface permeability (inferred from hydraulic gradient), and soil infiltration. The regression equations developed were validated using a graphical approach. Underground water appears to flow from the northern portion of the Ilorin metropolis southwards, transporting contaminants. The pollution pattern in the study area was generally bimodal, with the major concentration of the chemical pollutants in the underground watershed and the recharge. The correlation between contaminant concentrations and the spread of pollution indicates that areas of lower subsurface permeability display a higher concentration of dissolved chemical content. The principal component analysis showed that conductivity, suspended solids, calcium hardness, total dissolved solids, total coliforms, and coliforms were the chief contaminant indicators in the underground water system in the study area. Pearson correlation revealed a high correlation between electrical conductivity and many of the parameters analyzed. In the same vein, the regression models suggest that the heavier the molecular weight of a chemical contaminant from a point source, the greater the pollution of the underground water system at a short distance. The study concludes that the associative properties of landfills have a significant effect on groundwater quality in the study area. Keywords: dumpsite, leachate, groundwater pollution, linear regression, principal component
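To illustrate the principal component analysis step, here is a minimal Python sketch identifying which water-quality parameters dominate the variance across sampling points. The parameter set mirrors the indicators named in the abstract, but the data are hypothetical placeholders.

```python
# A minimal sketch of PCA over water-quality parameters at 44 sampling points.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

params = ["conductivity", "suspended_solids", "calcium_hardness",
          "total_dissolved_solids", "total_coliforms"]
rng = np.random.default_rng(42)
data = pd.DataFrame(rng.normal(size=(44, len(params))), columns=params)

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(data))
loadings = pd.DataFrame(pca.components_.T, index=params, columns=["PC1", "PC2"])
print(pca.explained_variance_ratio_)   # variance captured by each component
print(loadings)                        # which parameters load on which PC
```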
Procedia PDF Downloads 118
21 Cardiolipin-Incorporated Liposomes Carrying Curcumin and Nerve Growth Factor to Rescue Neurons from Apoptosis for Alzheimer’s Disease Treatment
Authors: Yung-Chih Kuo, Che-Yu Lin, Jay-Shake Li, Yung-I Lou
Abstract:
Curcumin (CRM) and nerve growth factor (NGF) were entrapped in liposomes (LIP) with cardiolipin (CL) to downregulate the phosphorylation of mitogen-activated protein kinases for Alzheimer’s disease (AD) management. AD is a neurodegenerative disorder involving a gradual loss of memory that yields irreversible dementia. CL-conjugated LIP loaded with CRM (CRM-CL/LIP) and with NGF (NGF-CL/LIP) were applied to AD models of SK-N-MC cells and Wistar rats subjected to an insult of β-amyloid peptide (Aβ). Lipids comprising 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (Avanti Polar Lipids, Alabaster, AL), 1',3'-bis[1,2-dimyristoyl-sn-glycero-3-phospho]-sn-glycerol (CL; Avanti Polar Lipids), 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine-N-[methoxy(polyethylene glycol)-2000] (Avanti Polar Lipids), 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[carboxy(polyethylene glycol)-2000] (Avanti Polar Lipids), and CRM (Sigma–Aldrich, St. Louis, MO) were dissolved in chloroform (J. T. Baker, Phillipsburg, NJ) and condensed using a rotary evaporator (Panchum, Kaohsiung, Taiwan). Human β-NGF (Alomone Labs, Jerusalem, Israel) was added in the aqueous phase. Wheat germ agglutinin (WGA; Medicago AB, Uppsala, Sweden) was grafted on LIP loaded with CRM (WGA-CRM-LIP) and on CL-conjugated LIP loaded with CRM (WGA-CRM-CL/LIP) using 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (Sigma–Aldrich) and N-hydroxysuccinimide (Alfa Aesar, Ward Hill, MA). Protein samples of SK-N-MC cells (American Type Tissue Collection, Rockville, MD) were used for sodium dodecyl sulfate (Sigma–Aldrich) polyacrylamide gel (Sigma–Aldrich) electrophoresis. In the animal study, the LIP formulations were administered by intravenous injection via a tail vein of male Wistar rats (250–280 g, 8 weeks, BioLasco, Taipei, Taiwan), which were housed in the Animal Laboratory of National Chung Cheng University in accordance with the institutional guidelines and the guidelines of the Animal Protection Committee under the Council of Agriculture of the Republic of China. We found that CRM-CL/LIP could inhibit the expression of phosphorylated p38 (p-p38), phosphorylated c-Jun N-terminal kinase (p-JNK), and p-tau protein at serine 202 (p-Ser202) to retard neuronal apoptosis. Free CRM, and CRM released from CRM-LIP and CRM-CL/LIP, could not effectively inhibit the expression of p-p38 and p-JNK in the cytoplasm in a straightforward manner. In addition, NGF-CL/LIP enhanced the quantities of phosphorylated neurotrophic tyrosine kinase receptor type 1 (p-TrkA) and phosphorylated extracellular-signal-regulated kinase 5 (p-ERK5), preventing the Aβ-induced degeneration of neurons. The membrane fusion of NGF-LIP activated the ERK5 pathway, and the targeting capacity of NGF-CL/LIP enhanced the ability of released NGF to affect the TrkA level. Moreover, WGA-CRM-LIP improved the permeation of CRM across the blood–brain barrier (BBB), significantly reduced the Aβ plaque deposition and the malondialdehyde level, and increased the percentage of normal neurons and cholinergic function in the hippocampus of AD rats. This was mainly because the encapsulated CRM was protected by LIP against rapid degradation in the blood. Furthermore, WGA on LIP could target N-acetylglucosamine on endothelia and increase the quantity of CRM transported across the BBB. In addition, WGA-CRM-CL/LIP was effective in suppressing the synthesis of acetylcholinesterase and reducing the decomposition of acetylcholine, supporting better neurotransmission.
Based on the in vitro and in vivo evidence, WGA-CRM-CL/LIP can rescue neurons from apoptosis in the brain and can be a promising drug delivery system for clinical AD therapy. Keywords: Alzheimer’s disease, β-amyloid, liposome, mitogen-activated protein kinase
Procedia PDF Downloads 331
20 Enhancing Plant Throughput in Mineral Processing Through Multimodal Artificial Intelligence
Authors: Muhammad Bilal Shaikh
Abstract:
Mineral processing plants play a pivotal role in extracting valuable minerals from raw ores, contributing significantly to various industries. However, the optimization of plant throughput remains a complex challenge, necessitating innovative approaches for increased efficiency and productivity. This research paper investigates the application of Multimodal Artificial Intelligence (MAI) techniques to address this challenge, aiming to improve overall plant throughput in mineral processing operations. The integration of multimodal AI leverages a combination of diverse data sources, including sensor data, images, and textual information, to provide a holistic understanding of the complex processes involved in mineral extraction. The paper explores the synergies between various AI modalities, such as machine learning, computer vision, and natural language processing, to create a comprehensive and adaptive system for optimizing mineral processing plants. The primary focus of the research is on developing advanced predictive models that can accurately forecast various parameters affecting plant throughput. Utilizing historical process data, machine learning algorithms are trained to identify patterns, correlations, and dependencies within the intricate network of mineral processing operations. This enables real-time decision-making and process optimization, ultimately leading to enhanced plant throughput. Incorporating computer vision into the multimodal AI framework allows for the analysis of visual data from sensors and cameras positioned throughout the plant. This visual input aids in monitoring equipment conditions, identifying anomalies, and optimizing the flow of raw materials. The combination of machine learning and computer vision enables the creation of predictive maintenance strategies, reducing downtime and improving the overall reliability of mineral processing plants. Furthermore, the integration of natural language processing facilitates the extraction of valuable insights from unstructured textual data, such as maintenance logs, research papers, and operator reports. By understanding and analyzing this textual information, the multimodal AI system can identify trends, potential bottlenecks, and areas for improvement in plant operations. This comprehensive approach enables a more nuanced understanding of the factors influencing throughput and allows for targeted interventions. The research also explores the challenges associated with implementing multimodal AI in mineral processing plants, including data integration, model interpretability, and scalability. Addressing these challenges is crucial for the successful deployment of AI solutions in real-world industrial settings. To validate the effectiveness of the proposed multimodal AI framework, the research conducts case studies in collaboration with mineral processing plants. The results demonstrate tangible improvements in plant throughput, efficiency, and cost-effectiveness. The paper concludes with insights into the broader implications of implementing multimodal AI in mineral processing and its potential to revolutionize the industry by providing a robust, adaptive, and data-driven approach to optimizing plant operations. In summary, this research contributes to the evolving field of mineral processing by showcasing the transformative potential of multimodal artificial intelligence in enhancing plant throughput. 
The proposed framework offers a holistic solution that integrates machine learning, computer vision, and natural language processing to address the intricacies of mineral extraction processes, paving the way for a more efficient and sustainable future in the mineral processing industry. Keywords: multimodal AI, computer vision, NLP, mineral processing, mining
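As a minimal illustration of the predictive-modeling idea described above, the sketch below trains a regression model on simulated historical sensor data to forecast throughput. The feature names, relationships, and model choice are assumptions for illustration, not the paper's actual pipeline.

```python
# A minimal sketch: forecasting plant throughput from process sensor readings.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
n = 1000
X = np.column_stack([
    rng.normal(120, 10, n),   # ore feed rate, t/h (hypothetical)
    rng.normal(0.8, 0.1, n),  # grind size index (hypothetical)
    rng.normal(65, 5, n),     # mill power draw, % (hypothetical)
])
y = 0.9 * X[:, 0] - 30 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```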
Procedia PDF Downloads 68
19 Social Enterprises over Microfinance Institutions: The Challenges of Governance and Management
Authors: Dean Sinković, Tea Golja, Morena Paulišić
Abstract:
Upon the end of the vicious war in former Yugoslavia in 1995, the international development community widely promoted microfinance as the key development framework to eradicate poverty, create jobs, and increase incomes. Widespread claims were made that microfinance institutions would play a vital role in creating a bedrock for a sustainable 'bottom-up' economic development trajectory, thus helping the newly formed states find a proper way out of post-war economic depression. This uplifting neoliberal narrative has no empirical support in the Republic of Croatia. Firstly, the types of enterprises created via the microfinance sector are small, unskilled, and labor-intensive, use no technology, and carry a huge debt burden. This results in extremely high failure rates of microenterprises, with poor individuals plunging into even deeper poverty, acute indebtedness, and social marginalization. Secondly, evidence shows that microcredit is an exact reflection of the dangerous and destructive sub-prime lending model, with 'boom-to-bust' scenarios in which benefits are solely extracted by the tiny financial and political elite working around the microfinance sector. We argue that microcredit providers are not the proper financial structures through which developing countries should seek a way out of underdevelopment and poverty. In order to achieve sustainable long-term growth goals, public policy needs to focus on creating, supporting, and facilitating the development of small and mid-size enterprises. These enterprises should be technically sophisticated, capable of creating new capabilities and innovations, with managerial expertise (skills formation), and inter-connected with other organizations (i.e., clusters, networks, supply chains, etc.). Evidence from South-East Europe suggests that such structures are not created via the microfinance model but can be fostered through various forms of social enterprises. Various legal entities may operate as social enterprises: limited liability private companies, limited liability public companies, cooperatives, associations, foundations, institutions, mutual insurances, and credit unions. Our main hypothesis is that cooperatives are potential agents of social and economic transformation and community development in the region. Financial cooperatives are structures that can foster a more efficient allocation of financial resources involving deeper democratic arrangements and more socially just outcomes. In Croatia, the pioneers of the first social enterprises were civil society organizations forming a separate legal entity (i.e., cooperatives, associations, or commercial companies working on the principle of returning the investment to the founder). Ever since 1995, cooperatives in Croatia have grown not by pursuing their own internal growth but mostly by relying on external financial support. The greater part of today's registered cooperatives tend to be agricultural (39%), followed by war veterans' cooperatives (38%) and others. There are no financial cooperatives in Croatia. In light of the above, we look at the historical developments and the prevailing social enterprise forms and discuss their advantages and disadvantages as potential agents for social and economic transformation and community development in the region. There is an evident lack of understanding of this business model and of its potential for social and economic development, followed by an unfavorable institutional environment.
Thus, we discuss the role of governance and management in the formation of social enterprises in Croatia, stressing the challenges for the governance of the country's social enterprise movement. Keywords: financial cooperatives, governance and management models, microfinance institutions, social enterprises
Procedia PDF Downloads 277
18 Anti-Infective Potential of Selected Philippine Medicinal Plant Extracts against Multidrug-Resistant Bacteria
Authors: Demetrio L. Valle Jr., Juliana Janet M. Puzon, Windell L. Rivera
Abstract:
From the various medicinal plants available in the Philippines, crude ethanol extracts of twelve (12) Philippine medicinal plants, namely: Senna alata L. Roxb. (akapulko), Psidium guajava L. (bayabas), Piper betle L. (ikmo), Vitex negundo L. (lagundi), Mitrephora lanotan (Blanco) Merr. (lanotan), Zingiber officinale Roscoe (luya), Curcuma longa L. (luyang dilaw), Tinospora rumphii Boerl (makabuhay), Moringa oleifera Lam. (malunggay), Phyllanthus niruri L. (sampa-sampalukan), Centella asiatica (L.) Urban (takip kuhol), and Carmona retusa (Vahl) Masam (tsaang gubat), were studied. In vitro methods of evaluation against selected Gram-positive and Gram-negative multidrug-resistant (MDR) bacteria were performed on the plant extracts. Although five of the plants showed varying antagonistic activities against the test organisms, only Piper betle L. exhibited significant activities against both Gram-negative and Gram-positive multidrug-resistant bacteria, exhibiting wide zones of growth inhibition in the disk diffusion assay and requiring the lowest concentrations of extract to inhibit bacterial growth, as supported by the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) assays. Further antibacterial studies of the Piper betle L. leaf, extracted by three methods (ethanol, methanol, supercritical CO2), revealed similar inhibitory activities against a multitude of Gram-positive and Gram-negative MDR bacteria. A thin layer chromatography (TLC) assay of the leaf extract revealed a maximum of eight compounds with Rf values of 0.92, 0.86, 0.76, 0.53, 0.40, 0.25, 0.13, and 0.013, best visualized when inspected under UV-366 nm. TLC-agar overlay bioautography of the isolated compounds showed that the compounds with Rf values of 0.86 and 0.13 have inhibitory activities against Gram-positive MDR bacteria (MRSA and VRE). The compound with an Rf value of 0.86 also possesses inhibitory activity against Gram-negative MDR bacteria (CRE Klebsiella pneumoniae and MBL Acinetobacter baumannii). Gas chromatography-mass spectrometry (GC-MS) identified six volatile compounds, four of which are new compounds that have not been mentioned in the medical literature. The chemical compounds isolated include 4-(2-propenyl)phenol and eugenol; the four new compounds were ethyl diazoacetate, tris(trifluoromethyl)phosphine, heptafluorobutyrate, and 3-fluoro-2-propynenitrite. Phytochemical screening and investigation of its antioxidant, cytotoxic, possible hemolytic activities, and mechanisms of antibacterial activity were also done. The results showed that the local variant of Piper betle leaf extract possesses significant antioxidant, anti-cancer, and antimicrobial properties, attributed to the presence of bioactive compounds, particularly flavonoids (condensed tannin, leucoanthocyanin, gamma benzopyrone), anthraquinones, steroids/triterpenes, and 2-deoxysugars. Piper betle L. is also traditionally known to enhance wound healing, which could be primarily due to its antioxidant, anti-inflammatory, and antimicrobial activities. In vivo studies on mice using 2.5% and 5% ethanol leaf extract cream formulations in excised wound models significantly accelerated wound healing, with results on par with a current antibacterial cream (mupirocin). From the results of this series of studies, we have demonstrated the value of Piper betle L.
as a source of bioactive compounds that could be developed into therapeutic agents against MDR bacteria. Keywords: Philippine herbal medicine, multidrug-resistant bacteria, Piper betle, TLC-bioautography
Procedia PDF Downloads 771
17 EcoTeka, an Open-Source Software for Urban Ecosystem Restoration through Technology
Authors: Manon Frédout, Laëtitia Bucari, Mathias Aloui, Gaëtan Duhamel, Olivier Rovellotti, Javier Blanco
Abstract:
Ecosystems must be resilient to ensure cleaner air, better water and soil quality, and thus healthier citizens. Technology can be an excellent tool to support urban ecosystem restoration projects, especially when based on open source and promoting open data. This is the goal of the ecoTeka application: one single digital tool for tree management which allows decision-makers to improve their urban forestry practices, enabling more responsible urban planning and climate change adaptation. EcoTeka provides city councils with three main functionalities tackling three of their challenges: easier biodiversity inventories, better green space management, and more efficient planning. To answer the cities' need for reliable tree inventories, the application was first built with open data coming from the websites OpenStreetMap and OpenTrees, but it will also soon include the possibility of creating new data. To achieve this, a multi-source algorithm will be elaborated, based on the existing DeepForest deep learning model, integrating open-source satellite images, 3D representations from LiDAR, and street views from Mapillary. This data processing will permit the identification of each individual tree's position, height, crown diameter, and taxonomic genus. To support urban forestry management, ecoTeka offers a dashboard for monitoring the city's tree inventory and triggers alerts to inform about upcoming due interventions. This tool was co-constructed with the green space departments of the French cities of Alès, Marseille, and Rouen. The third functionality of the application is a decision-making tool for urban planning, promoting biodiversity and landscape connectivity metrics to drive an ecosystem restoration roadmap. Based on landscape graph theory, we are currently experimenting with new methodological approaches to scale down regional ecological connectivity principles to local biodiversity conservation and urban planning policies. This methodological framework will couple a graph-theoretic approach with biological data, mainly biodiversity occurrence (presence/absence) data available on international (e.g., GBIF), national (e.g., Système d'Information Nature et Paysage), and local (e.g., Atlas de la Biodiversité Communale) biodiversity data-sharing platforms, in order to help reason about new decisions for ecological network conservation and restoration in urban areas. An experiment on this subject is currently ongoing with Montpellier Méditerranée Métropole. These projects and studies have shown that only 26% of tree inventory data is currently geo-localized in France; the rest is still kept on paper or Excel sheets. It seems that technology is not yet used enough to enrich the knowledge city councils have about biodiversity in their cities, and that existing open biodiversity data (e.g., occurrence, telemetry, or genetic data), species distribution models, and landscape graph connectivity metrics are still underexploited for making rational decisions for landscape and urban planning projects. This is the goal of ecoTeka: to support easier inventories of urban biodiversity and better management of urban spaces through rational planning and decisions relying on open databases.
Future studies and projects will focus on the development of tools for reducing the artificialization of soils, selecting plant species adapted to climate change, and highlighting the need for ecosystem and biodiversity services in cities.
Keywords: digital software, ecological design of urban landscapes, sustainable urban development, urban ecological corridor, urban forestry, urban planning
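The landscape-graph connectivity approach described above can be made concrete with a small sketch. The following Python example is not part of the ecoTeka codebase; the patch coordinates, areas, and dispersal threshold are invented for illustration. It computes the Integral Index of Connectivity (IIC), a common landscape-graph metric, for a toy patch graph:

```python
# Illustrative sketch (not the ecoTeka codebase): Integral Index of
# Connectivity (IIC) for a patch graph. Patch data are made-up examples.
import itertools
import networkx as nx

# Hypothetical habitat patches: id -> (x, y, area in hectares)
patches = {
    "A": (0.0, 0.0, 12.0),
    "B": (1.2, 0.5, 8.0),
    "C": (2.5, 0.4, 20.0),
    "D": (4.0, 2.0, 5.0),
}
DISPERSAL_THRESHOLD = 1.5  # km; closer patches are considered linked

G = nx.Graph()
G.add_nodes_from(patches)
for (i, (xi, yi, _)), (j, (xj, yj, _)) in itertools.combinations(patches.items(), 2):
    if ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 <= DISPERSAL_THRESHOLD:
        G.add_edge(i, j)

total_area = sum(a for _, _, a in patches.values())
lengths = dict(nx.all_pairs_shortest_path_length(G))
iic = 0.0
for i in patches:
    for j in patches:
        nl = lengths.get(i, {}).get(j)  # topological distance, None if disconnected
        if nl is not None:
            iic += patches[i][2] * patches[j][2] / (1 + nl)
iic /= total_area ** 2
print(f"IIC = {iic:.3f}")  # 1.0 would mean a single fully connected patch
```

In a real workflow the nodes would come from the habitat or tree inventory and the edge threshold from species-specific dispersal distances.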
Procedia PDF Downloads 73
16 The Use of Rule-Based Cellular Automata to Track and Forecast the Dispersal of Classical Biocontrol Agents at Scale, with an Application to the Fopius arisanus Fruit Fly Parasitoid
Authors: Agboka Komi Mensah, John Odindi, Elfatih M. Abdel-Rahman, Onisimo Mutanga, Henri Ez Tonnang
Abstract:
Ecosystems are networks of organisms and populations that form a community of various species interacting within their habitats. Such habitats are defined by abiotic and biotic conditions that establish the initial limits to a population's growth, development, and reproduction. The habitat's conditions explain the context in which species interact to access resources such as food, water, space, shelter, and mates, allowing for feeding, dispersal, and reproduction. Dispersal is an essential life-history strategy that affects gene flow, resource competition, population dynamics, and species distributions. Despite the importance of dispersal in population dynamics and survival, understanding the mechanisms underpinning the dispersal of organisms remains challenging. For instance, when an organism moves into an ecosystem for survival and resource competition, its progression is highly influenced by factors such as its physiological state, climatic variables, and its ability to evade predation. Therefore, greater spatial detail is necessary to understand organism dispersal dynamics. Organism dispersal can be studied using empirical and mechanistic modelling approaches, with the adopted approach depending on the study's purpose. Cellular automata (CA) are an example of an approach that has been successfully used in biological studies to analyze the dispersal of living organisms. A cellular automaton can be briefly described as a grid of cells, each of which may be occupied by an individual, that evolves according to a set of rules based on the states of neighbouring cells. However, for modelling the dispersal of individual organisms at the landscape scale, user-friendly tools that do not require expertise in mathematical modelling and computing, such as a visual analytics framework for tracking and forecasting the dispersal behaviour of organisms, are lacking. The term "visual analytics" (VA) describes a semi-automated approach to electronic data processing that is guided by users who can interact with the data via an interface. Essentially, VA converts large amounts of quantitative or qualitative data into graphical formats that can be customized based on the operator's needs. Additionally, this approach can be used to enhance the ability of users from various backgrounds to understand data, communicate results, and disseminate information across a wide range of disciplines. To support effective analysis of the dispersal of organisms at the landscape scale, we therefore designed Pydisp, a free visual data analytics tool for spatiotemporal dispersal modeling built in Python. Its user interface allows users to perform quick, interactive spatiotemporal analyses of species dispersal using bioecological and climatic data. Pydisp enables reuse and upgrading through the use of simple principles such as fuzzy cellular automata algorithms. The potential of the dispersal modeling is demonstrated in a case study predicting the dispersal of Fopius arisanus (Sonan), an endoparasitoid used to control Bactrocera dorsalis (Hendel) (Diptera: Tephritidae) in Kenya. The results obtained from our example clearly illustrate the parasitoid's dispersal process at the landscape level and confirm that dynamic processes in an agroecosystem are better understood when designed using mechanistic modelling approaches.
Furthermore, as demonstrated in the example, the software is highly effective in portraying the dispersal of organisms despite the unavailability of detailed data on the species' dispersal mechanisms.
Keywords: cellular automata, fuzzy logic, landscape, spatiotemporal
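To make the cellular-automaton idea concrete, here is a minimal sketch of one rule-based dispersal step in the spirit of the fuzzy CA described above. It is not Pydisp itself: the suitability layer and the growth and spread rates are invented, and the cell state is a fuzzy occupancy in [0, 1]:

```python
# Minimal sketch of a fuzzy cellular-automaton dispersal step (not Pydisp).
import numpy as np

def dispersal_step(occupancy, suitability, spread=0.2, growth=0.1):
    """One CA update: logistic local growth plus diffusion to the
    4-neighbourhood, modulated by a habitat-suitability layer."""
    padded = np.pad(occupancy, 1, mode="edge")
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    new = occupancy + growth * occupancy * (1 - occupancy) \
          + spread * suitability * (neighbours - occupancy)
    return np.clip(new, 0.0, 1.0)  # keep the fuzzy state in [0, 1]

rng = np.random.default_rng(42)
suitability = rng.random((50, 50))   # hypothetical habitat layer
occupancy = np.zeros((50, 50))
occupancy[25, 25] = 1.0              # single release point
for _ in range(100):                 # simulate 100 time steps
    occupancy = dispersal_step(occupancy, suitability)
print(f"occupied cells (>0.1): {(occupancy > 0.1).sum()}")
```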
Procedia PDF Downloads 79
15 Socio-Sensorial Assessment of Nursing Homes in Singapore: Towards Integrated Enabling Design
Authors: Zdravko Trivic, John Chye Fung, Ruzica Bozovic-Stamenovic
Abstract:
Within the context of a rapidly ageing population in Singapore and the pressing demands on both caregivers and care providers, an integrated approach to an ageing-friendly and ability-sensitive enabling environment becomes an imperative. This particularly applies to nursing home environments and their immediate surroundings, as they are becoming one of the main available options for long-term care for many senior adults who are unable to age at home. Yet, despite considerable efforts to break the still predominant clinical approach to eldercare and to introduce more home-like design and a person-centric care model, nursing homes keep being stigmatised and perceived as not so desirable environments to grow old in. The challenges are further emphasised by the physical, sensorial, psychological and cognitive declines that are common consequences of ageing. Such declines have an immense impact on almost all aspects of older adults' daily functioning, including problems with mobility and spatial orientation, difficulties in communication, withdrawal from social interaction, higher levels of depression, and a decreased sense of independence and autonomy. However, typical nursing home designs tend to neglect the full capacities of balanced and carefully integrated multisensory stimuli as an active component of care and ability building. This paper outlines part of a larger multi-disciplinary study of six nursing homes in Singapore, with the overarching objective of creating new models of supportive nursing home environments that go beyond the clinical care model and encourage community integration with the nursing home settings. The paper focuses on the largely neglected aspects of sensorial comfort and the multi-sensorial properties of nursing homes, including both indoor and immediate outdoor spaces (boundaries). The objective was to investigate the sensory rhythms and explore their role in nursing home users' daily routines and their therapeutic capacities. Socio-sensory rhythms were captured and analysed through a combination of on-site sensory recordings of "objective" quantitative sensory data (air temperature and humidity, sound level and luminance) using a multi-function environment meter, perceived experiential data, spatial mapping, first-person observations of nursing home users' activity patterns, and interviews. This was done in addition to the employment of available assessment tools, such as the Wisconsin Person Directed Care assessment tool, the Dementia Quality of Life [DQoL] instrument, and the Resident Environment Impact Scale [REIS], as these tools address the issues of sensorial experience insufficiently and selectively. Key findings indicate varied levels of sensory comfort, as well as diversity, intensity, and customisation of multi-sensory conditions within different nursing home spaces. Sensory stimulation is typically concentrated in the communal living areas of the nursing homes or in areas that often provide controlled or limited access, including specifically designed sensory rooms and outdoor green spaces (gardens and terraces). Opportunities for sensory stimulation are particularly limited for bed-bound senior residents and within more functional areas, such as corridors.
This suggests that the capacities of nursing home designs to provide more diverse and better integrated pleasant sensory conditions, acting as "therapeutic devices" that build nursing home residents' physical and mental abilities, encourage activity, and improve wellbeing, are far from exhausted.
Keywords: ageing-supportive environment, enabling design, multi-sensory assessment, nursing home environment
Procedia PDF Downloads 173
14 Investigation of Delamination Process in Adhesively Bonded Hardwood Elements under Changing Environmental Conditions
Authors: M. M. Hassani, S. Ammann, F. K. Wittel, P. Niemz, H. J. Herrmann
Abstract:
Application of engineered wood, especially in the form of glued-laminated timber, has increased significantly. Recent progress in plywood made of high-strength and high-stiffness hardwoods, like European beech, gives designers more freedom through increased dimensional stability and load-bearing capacity. However, the strong hygric dependence of basically all mechanical properties renders many innovative ideas futile. The tendency of hardwood towards higher moisture sorption and swelling coefficients leads to significant residual stresses in glued-laminated configurations, cross-laminated patterns in particular. These stress fields cause the initiation and evolution of cracks in the bond-lines, resulting in interfacial de-bonding, loss of structural integrity, and reduction of load-carrying capacity. Consequently, delamination can be considered the dominant failure mechanism in glued-laminated timbers made of hardwood elements. In addition, long-term creep and mechano-sorption under changing environmental conditions lead to loss of stiffness and can amplify delamination growth over the lifetime of a structure, even after decades. In this study, we investigate the delamination process of adhesively bonded hardwood (European beech) elements subjected to changing climatic conditions. To gain further insight into the long-term performance of adhesively bonded elements during the design phase of new products, the development and verification of an authentic moisture-dependent constitutive model for various species is of great significance. Since a comprehensive moisture-dependent rheological model comprising all possibly emerging deformation mechanisms has been missing up to now, a 3D orthotropic elasto-plastic, visco-elastic, mechano-sorptive material model for wood, with all material constants defined as functions of moisture content, was developed. Apart from the solid wood adherends, the adhesive layer also plays a crucial role in the generation and distribution of the interfacial stresses. The adhesive can be treated as a continuum layer constructed from finite elements and represented as a homogeneous, isotropic material. To obtain a realistic assessment of the mechanical performance of the adhesive layer and a detailed look at the interfacial stress distributions, a generic constitutive model including all potentially activated deformation modes, namely elastic, plastic, and visco-elastic creep, was developed. We focused our studies on the three most common adhesive systems for structural timber engineering: one-component polyurethane adhesive (PUR), melamine-urea-formaldehyde (MUF), and phenol-resorcinol-formaldehyde (PRF). The corresponding numerical integration approaches, with an additive decomposition of the total strain, are implemented within the ABAQUS FEM environment by means of the user subroutine UMAT. To predict the true stress state, we perform a history-dependent sequential moisture-stress analysis using the developed material models for both the wood substrate and the adhesive layer. Prediction of the delamination process is founded on the fracture-mechanical properties of the adhesive bond-line, measured under different levels of moisture content, and on the application of cohesive interface elements. Finally, we compare the numerical predictions with experimental observations of de-bonding in glued-laminated samples under changing environmental conditions.
Keywords: engineered wood, adhesive, material model, FEM analysis, fracture mechanics, delamination
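The additive strain decomposition mentioned above can be illustrated with a deliberately simplified one-dimensional sketch. The real model is 3D orthotropic and implemented as an ABAQUS UMAT; all constants below are placeholders, not measured beech or adhesive values:

```python
# Toy 1-D illustration of additive strain decomposition
# (elastic + visco-elastic + mechano-sorptive); constants are placeholders.
import numpy as np

E = 12000.0                  # elastic modulus [MPa]
E_v, eta = 30000.0, 1.0e10   # Kelvin-Voigt spring [MPa] and dashpot [MPa*s]
m_ms = 0.002                 # mechano-sorptive compliance [1/MPa per |d(MC)|]

def step(sigma, eps_v, mc_old, mc_new, dt):
    """Advance the internal strains one increment under stress sigma."""
    eps_el = sigma / E                              # elastic part
    eps_v += (sigma - E_v * eps_v) / eta * dt       # Kelvin-Voigt creep
    deps_ms = m_ms * sigma * abs(mc_new - mc_old)   # mechano-sorption
    return eps_el, eps_v, deps_ms

sigma = 5.0                          # constant stress [MPa]
eps_v, eps_ms, mc = 0.0, 0.0, 0.12
total = []
for day in range(365):               # one year of daily moisture cycling
    mc_new = 0.12 + 0.04 * np.sin(2 * np.pi * day / 365)
    eps_el, eps_v, deps_ms = step(sigma, eps_v, mc, mc_new, dt=86400.0)
    eps_ms += deps_ms
    mc = mc_new
    total.append(eps_el + eps_v + eps_ms)
print(f"total strain after 1 year: {total[-1]:.5f}")
```

The mechano-sorptive term accumulates with every moisture-content change regardless of its sign, which is why cyclic humidity amplifies creep even at constant load.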
Procedia PDF Downloads 437
13 A Study of the Trap of Multi-Homing in Customers: A Comparative Case Study of Digital Payments
Authors: Shari S. C. Shang, Lynn S. L. Chiu
Abstract:
In the digital payment market, some consumers use only one payment wallet, while many others practice multi-homing with a variety of payment services. Given the diffusion of new payment systems, we examined the determinants of the adoption of multi-homing behavior. This study aims to understand how a digital payment provider dynamically expands its business touch points with cross-business strategies to enrich the digital ecosystem and avoid the trap of multi-homing in customers. By synthesizing the platform ecosystem literature, we constructed a two-dimensional research framework: one dimension captures user digital behavior, from offline to online intentions, and the other captures digital payment touch points, from convenient accessibility to cross-business platforms. To explore on a broader scale, we selected 12 digital payment services from five countries: the UK, the US, Japan, Korea, and Taiwan. Based on the interplay of user digital behaviors and payment touch points, we grouped the study cases into four types. (1) Channel Initiated: users originate from retailers and have high access to in-store shopping with face-to-face guidance for payment adoption. Providers offer rewards for customer loyalty and secure the retailer's efficient cash flow management. (2) Social Media Dependent: users are usually digital natives with high access to social media or the internet who shop and pay digitally. Providers might not own physical or online shops but are licensed to aggregate money flows through virtual ecosystems. (3) Early Life Engagement: digital banks race to capture the next generation, from popularity to profitability. This type of payment aims to give children a taste of financial freedom while letting parents track their spending. Providers seek to capitalize on the digital payment and e-commerce boom and hold on to new customers into adulthood. (4) Traditional Banking: plastic credit cards are purposely included as a control group to track the evolution of business strategies in digital payments. Traditional credit card users may follow the bank's digital strategy and land on different types of digital wallets, or mostly keep using plastic credit cards. This research analyzed the business growth models and inter-firm coopetition strategies of the selected cases. Results of the multiple-case analysis reveal that channel-initiated payments bundled rewards with retailers' business discounts for recurring purchases. They also extended other financial services, such as insurance, to fulfill customers' new demands. By contrast, social-media-dependent payments developed new usages and new value creation, such as P2P money transfer through network effects among virtual social ties, while early-life-engagement providers offer virtual banking products to children, who are digital natives but overlooked by incumbents; this has disrupted banking business domains in preparation for the metaverse economy. Lastly, the control group of traditional plastic credit cards has gradually converted to a BaaS (banking as a service) model, depending on customers' preferences. Multi-homing behavior is not avoidable in digital payment competition. Payment providers may encounter multiple waves of multi-homing threats after a short period of success.
A dynamic cross-business collaboration strategy should be explored to continuously evolve digital ecosystems and allow users a broader shopping experience and continual usage.
Keywords: digital payment, digital ecosystems, multihoming users, cross business strategy, user digital behavior intentions
Procedia PDF Downloads 163
12 Optimizing AI Voice for Adolescent Health Education: Preferences and Trustworthiness Across Teens and Parents
Authors: Yu-Lin Chen, Kimberly Koester, Marissa Raymond-Flesh, Anika Thapar, Jay Thapar
Abstract:
Purpose: Effectively communicating adolescent health topics to teens and their parents is crucial. This study critically evaluates the optimal use of artificial intelligence (AI) tools, which are increasingly prevalent in disseminating health information. By fostering a deeper understanding of AI voice preferences in the context of health, the research aspires to have a ripple effect, enhancing the collective health literacy and decision-making capabilities of both teenagers and their parents. The study explores the potential of AI voices within health learning modules for annual well-child visits. We aim to identify preferred voice characteristics and understand the factors influencing perceived trustworthiness, ultimately aiming to improve health literacy and decision-making in both demographics. Methods: A cross-sectional study assessed preferences and trust perceptions of AI voices in learning modules among teens (aged 11-18) and their parents/guardians in Northern California. The study involved the development of four distinct learning modules covering adolescent health-related topics: general communication, sexual and reproductive health communication, parental monitoring, and well-child check-ups. Participants evaluated eight AI voices across the modules on six factors (intelligibility, naturalness, prosody, social impression, trustworthiness, and overall appeal) using Likert scales ranging from 1 to 10 (the higher, the better). They were also asked to select their preferred voice for each module. Descriptive statistics summarized participant demographics. Chi-square and t-tests explored differences in voice preferences between groups. Regression models identified factors impacting the perceived trustworthiness of the top-selected voice per module. Results: Data from 104 participants (63 teens; 41 parents/guardians) were included in the analysis. The mean age was 14.9 for teens (54% male) and 41.9 for parents/guardians (12% male). While voice quality ratings were similar across groups, preferences varied by topic. For instance, in general communication, teens leaned towards young female voices, while parents preferred mature female tones. Interestingly, this trend reversed for parental monitoring, with teens favoring mature male voices and parents opting for mature female ones. Both groups, however, converged on mature female voices for sexual and reproductive health topics. Beyond preferences, the study delved into the factors influencing perceived trustworthiness. Social impression and sound appeal emerged as the most significant contributors across all modules, jointly explaining 71-75% of the variance in trustworthiness ratings. Conclusion: The study emphasizes the importance of catering AI voices to specific audiences and topics. Social impression and sound appeal emerged as critical factors influencing perceived trustworthiness across all modules. These findings highlight the need to tailor AI voices by age and by the specific health information being delivered. Ensuring AI voices resonate with both teens and their parents can foster engagement and trust, ultimately leading to improved health literacy and decision-making for both groups. Limitations and future research: This study lays the groundwork for understanding AI voice preferences among teenagers and their parents in healthcare settings. However, limitations exist.
The sample represents a specific geographic location, and cultural variations might influence preferences. Additionally, the modules focused on topics related to well-child visits, and preferences might differ for more sensitive health topics. Future research should explore these limitations and investigate the long-term impact of AI voice on user engagement, health outcomes, and health behaviors.
Keywords: artificial intelligence, trustworthiness, voice, adolescent
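The regression analysis reported above can be sketched as follows. The data here are synthetic stand-ins (the study's ratings are not reproduced in this abstract), but the structure, an ordinary least squares regression of trustworthiness on the five other voice factors, matches the description:

```python
# Illustrative OLS sketch with synthetic data (not the study's dataset).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 104  # same order as the study's sample size
df = pd.DataFrame({
    "intelligibility": rng.integers(1, 11, n),
    "naturalness": rng.integers(1, 11, n),
    "prosody": rng.integers(1, 11, n),
    "social_impression": rng.integers(1, 11, n),
    "sound_appeal": rng.integers(1, 11, n),
})
# Synthetic outcome loosely mimicking the reported pattern: social
# impression and sound appeal dominate trustworthiness.
df["trustworthiness"] = (0.5 * df["social_impression"]
                         + 0.4 * df["sound_appeal"]
                         + 0.05 * df["naturalness"]
                         + rng.normal(0, 1, n)).clip(1, 10)

X = sm.add_constant(df.drop(columns="trustworthiness"))
model = sm.OLS(df["trustworthiness"], X).fit()
print(model.summary())  # R-squared plays the role of 'variance explained'
```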
Procedia PDF Downloads 63
11 Computational Fluid Dynamics Simulation of a Nanofluid-Based Annular Solar Collector with Different Metallic Nano-Particles
Authors: Sireetorn Kuharat, Anwar Beg
Abstract:
Motivation: Solar energy constitutes the most promising renewable energy source on earth. Nanofluids are a very successful family of engineered fluids, which contain well-dispersed nanoparticles suspended in a stable base fluid. The presence of metallic nanoparticles (e.g., gold, silver, copper, aluminum) significantly improves the thermo-physical properties of the host fluid and generally results in a considerable boost in the thermal conductivity, density, and viscosity of the nanofluid compared with the original base (host) fluid. This modification in fundamental thermal properties has profound implications for the convective heat transfer process in solar collectors. The potential for improving the efficiency of direct-absorber solar collectors is immense, and to gain deeper insight into the impact of different metallic nanoparticles on efficiency and temperature enhancement, in the present work we describe recent computational fluid dynamics simulations of an annular solar collector system, studying several different metallic nano-particles and comparing their performance. Methodologies: A numerical study of convective heat transfer in an annular pipe solar collector system is conducted. The inner tube contains pure water, and the annular region contains nanofluid. Three-dimensional, steady-state, incompressible laminar flow of water-based (and other) nanofluids containing a variety of metallic nanoparticles (copper oxide, aluminum oxide, and titanium oxide) is examined. The Tiwari-Das model is deployed, in which the thermal conductivity, specific heat capacity, and viscosity of the nanofluid suspensions are evaluated as functions of the solid nano-particle volume fraction. Radiative heat transfer is also incorporated using the ANSYS solar flux and Rosseland radiative models. The ANSYS FLUENT finite volume code (version 18.1) is employed to simulate the thermo-fluid characteristics via the SIMPLE algorithm. Mesh-independence tests are conducted. The simulations are also validated against a computational Harlow-Welch MAC (Marker and Cell) finite difference method, with excellent correlation achieved. The influence of volume fraction on temperature, velocity, and pressure contours is computed and visualized. Main findings: The best overall performance is achieved with copper oxide nanoparticles. Thermal enhancement is generally maximized when water is utilized as the base fluid, although in certain cases ethylene glycol also performs very efficiently. Increasing the nanoparticle solid volume fraction elevates temperatures, although the effects are less prominent in aluminum and titanium oxide nanofluids. Significant improvement in temperature distributions is achieved with copper oxide nanofluid, which is attributed to the superior thermal conductivity of copper compared to the other metallic nano-particles studied. Important fluid dynamic characteristics are also visualized, including circulation and temperature overshoots near the upper region of the annulus. Radiative flux is observed to enhance temperatures significantly via energization of the nanofluid, although again the best improvement in performance is attained consistently with copper oxide. Conclusions: The current study generalizes previous investigations by considering multiple metallic nano-particles and, furthermore, provides a good benchmark against which to calibrate experimental tests on a new solar collector configuration currently being designed at Salford University.
Important insights into the thermal conductivity and viscosity of nanofluids with metallic nano-particles are also provided in detail. The analysis is also extendable to other metallic nano-particles, including gold and zinc.
Keywords: heat transfer, annular nanofluid solar collector, ANSYS FLUENT, metallic nanoparticles
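Tiwari-Das-type formulations commonly evaluate the volume-fraction-dependent properties with the Maxwell-Garnett conductivity model and the Brinkman viscosity model, together with mixture rules for density and heat capacity; the sketch below illustrates that calculation. The default property values are rounded textbook figures for a water-copper-like system, not the paper's exact inputs:

```python
# Sketch of volume-fraction-dependent nanofluid properties as used in
# Tiwari-Das-type models; property defaults are rounded textbook values.
def nanofluid_properties(phi, k_f=0.613, k_p=400.0, mu_f=8.9e-4,
                         rho_f=997.0, rho_p=8933.0, cp_f=4179.0, cp_p=385.0):
    """phi: solid volume fraction; defaults approximate water + copper
    (k in W/m.K, mu in Pa.s, rho in kg/m^3, cp in J/kg.K)."""
    # Maxwell-Garnett effective conductivity
    k_nf = k_f * (k_p + 2*k_f - 2*phi*(k_f - k_p)) / (k_p + 2*k_f + phi*(k_f - k_p))
    mu_nf = mu_f / (1 - phi) ** 2.5               # Brinkman viscosity
    rho_nf = (1 - phi) * rho_f + phi * rho_p      # mixture rule
    rho_cp = (1 - phi) * rho_f * cp_f + phi * rho_p * cp_p
    return k_nf, mu_nf, rho_nf, rho_cp / rho_nf

for phi in (0.0, 0.02, 0.05, 0.10):
    k, mu, rho, cp = nanofluid_properties(phi)
    print(f"phi={phi:4.2f}  k={k:5.3f} W/m.K  mu={mu:.2e} Pa.s  "
          f"rho={rho:6.1f} kg/m^3  cp={cp:6.1f} J/kg.K")
```

Running the loop shows the trade-off the abstract describes: conductivity and density rise with volume fraction, but so does viscosity, which penalizes pumping power.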
Procedia PDF Downloads 143
10 Settlement Prediction in Cape Flats Sands Using Shear Wave Velocity – Penetration Resistance Correlations
Authors: Nanine Fouche
Abstract:
The Cape Flats is a low-lying, sand-covered expanse of approximately 460 square kilometres, situated to the southeast of the central business district of Cape Town in the Western Cape of South Africa. The aeolian sands masking this area are often loose and compressible in the upper 1 m to 1.5 m of the surface, and there is a general exceedance of the maximum allowable settlement in these sands. The settlement of shallow foundations on Cape Flats sands is commonly predicted using the results of in-situ tests such as the SPT or DPSH, due to the difficulty of retrieving undisturbed samples for laboratory testing. Varying degrees of accuracy and reliability are associated with these methods. More recently, shear wave velocity (Vs) profiles obtained from seismic testing, such as continuous surface wave (CSW) tests, are being used for settlement prediction. Such predictions have the advantage of considering the non-linear stress-strain behaviour of soil and the degradation of stiffness with increasing strain. CSW tests are rarely executed in the Cape Flats, whereas SPTs are commonly performed. For this reason, and to facilitate better settlement predictions in Cape Flats sand, equations representing shear wave velocity (Vs) as a function of SPT blow count (N60) and vertical effective stress (σv') were generated by statistical regression of site investigation data. To reveal the most appropriate method of overburden correction, analyses were performed with a separate overburden term (Pa/σv') as well as using stress-corrected shear wave velocity and SPT blow counts (correcting Vs and N60 to Vs1 and (N1)60, respectively). Shear wave velocity profiles and SPT blow count data from three sites masked by Cape Flats sands were utilised to generate 80 Vs-SPT N data pairs for analysis. Investigated terrains included sites in the suburbs of Athlone, Muizenberg, and Atlantis, all underlain by windblown deposits comprising fine and medium sand with varying fines contents. Elastic settlement analysis was also undertaken for the Cape Flats sands using a non-linear stepwise method based on small-strain stiffness estimates obtained from the best Vs-N60 model, and compared to settlement estimates using the general elastic solution with stiffness profiles determined using Stroud's (1989) and Webb's (1969) SPT N60-E transformation models. Stroud's method considers strain level indirectly, whereas Webb's method does not take account of the variation in elastic modulus with strain. The expression of Vs in terms of N60 and Pa/σv' derived from the Atlantis data set revealed the best fit, with R2 = 0.83 and a standard error of 83.5 m/s. The less accurate Vs-SPT N relations associated with the combined data set are presumably the result of the inversion routines used in the analysis of the CSW results at sites showing significant variation in relative density and stiffness with depth. The regression analyses revealed that the inclusion of a separate overburden term in the regression of Vs and N60 produces improved fits, as opposed to the stress-corrected equations, in which the R2 of the regression is notably lower. It is the correction of Vs and N60 to Vs1 and (N1)60 with empirical constants 'n' and 'm' prior to regression that introduces bias with respect to overburden pressure.
When comparing settlement prediction methods, both Stroud's method (considering strain level indirectly) and the small-strain stiffness method predict higher stiffnesses for medium dense and dense profiles than Webb's method, which takes no account of strain level in the determination of soil stiffness. Webb's method appears to be suitable for loose sands only. The Versak software appears to underestimate differences in settlement between square and strip footings of similar width. In conclusion, settlement analysis using small-strain stiffness data from the proposed Vs-N60 model for Cape Flats sands provides a way to take account of the non-linear stress-strain behaviour of the sands when calculating settlement.
Keywords: sands, settlement prediction, continuous surface wave test, small-strain stiffness, shear wave velocity, penetration resistance
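The regression form described above, Vs as a function of N60 with a separate overburden term Pa/σv', can be sketched as a log-linear least-squares fit. The data pairs below are invented for illustration; the study fitted 80 measured pairs:

```python
# Sketch of a log-linear fit of the form Vs = a * N60^b * (Pa/sv')^c,
# using invented data in place of the study's 80 measured pairs.
import numpy as np

rng = np.random.default_rng(1)
N60 = rng.uniform(5, 50, 80)            # SPT blow counts
sigma_v = rng.uniform(20, 200, 80)      # vertical effective stress [kPa]
Pa = 100.0                              # atmospheric pressure [kPa]
# Synthetic 'true' relation with lognormal noise
Vs = 90 * N60**0.25 * (Pa / sigma_v)**(-0.15) * np.exp(rng.normal(0, 0.1, 80))

# Fit ln(Vs) = ln(a) + b*ln(N60) + c*ln(Pa/sigma_v) by least squares
X = np.column_stack([np.ones(80), np.log(N60), np.log(Pa / sigma_v)])
coef, *_ = np.linalg.lstsq(X, np.log(Vs), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
pred = a * N60**b * (Pa / sigma_v)**c
r2 = 1 - np.sum((Vs - pred)**2) / np.sum((Vs - Vs.mean())**2)
print(f"Vs = {a:.1f} * N60^{b:.2f} * (Pa/sv')^{c:.2f},  R^2 = {r2:.2f}")
```

Fitting the exponents directly, rather than pre-correcting Vs and N60 with fixed empirical constants, is exactly what avoids the overburden bias noted in the abstract.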
Procedia PDF Downloads 175
9 Targeting Tumour Survival and Angiogenic Migration after Radiosensitization with an Estrone Analogue in an in vitro Bone Metastasis Model
Authors: Jolene M. Helena, Annie M. Joubert, Peace Mabeta, Magdalena Coetzee, Roy Lakier, Anne E. Mercier
Abstract:
Targeting the distant tumour and its microenvironment whilst preserving bone density is important in improving the outcomes of patients with bone metastases. 2-Ethyl-3-O-sulphamoyl-estra-1,3,5(10),16-tetraene (ESE-16) is an in-silico-designed 2-methoxyestradiol analogue aimed at enhancing the parent compound's cytotoxicity and providing a more favourable pharmacokinetic profile. In this study, the potential radiosensitization effects of ESE-16 were investigated in an in vitro bone metastasis model consisting of murine pre-osteoblastic (MC3T3-E1) and pre-osteoclastic (RAW 264.7) bone cells, metastatic prostate (DU 145) and breast (MDA-MB-231) cancer cells, as well as human umbilical vein endothelial cells (HUVECs). Cytotoxicity studies were conducted on all cell lines via spectrophotometric quantification of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide. The experimental set-up consisted of flow cytometric analysis of cell cycle progression and apoptosis detection (Annexin V-fluorescein isothiocyanate) to determine the lowest ESE-16 and radiation doses that induce apoptosis and significantly reduce cell viability. Subsequent experiments entailed a 24-hour low-dose ESE-16 exposure followed by a single dose of radiation, with termination 2, 24 or 48 hours thereafter. The effect of the combination treatment on osteoclasts was investigated via tartrate-resistant acid phosphatase (TRAP) activity and actin ring formation assays. Tumour cell experiments included investigation of mitotic indices via haematoxylin and eosin staining; pro-apoptotic signalling via spectrophotometric quantification of caspase 3; deoxyribonucleic acid (DNA) damage via micronuclei analysis and histone H2A.X phosphorylation (γ-H2A.X); and Western blot analyses of bone morphogenetic protein-7 and matrix metalloproteinase-9. HUVEC experiments included flow cytometric quantification of cell cycle progression and free radical production; fluorescent examination of cytoskeletal morphology; invasion and migration studies on an xCELLigence platform; and Western blot analyses of hypoxia-inducible factor 1-alpha and vascular endothelial growth factor receptors 1 and 2. Tumour cells yielded half-maximal growth inhibitory concentration (GI50) values in the nanomolar range. ESE-16 concentrations of 235 nM (DU 145) and 176 nM (MDA-MB-231) and a radiation dose of 4 Gy were found to be significant in cell cycle and apoptosis experiments. Bone and endothelial cells were exposed to the same doses as DU 145 cells. Cytotoxicity studies on bone cells showed that RAW 264.7 cells were more sensitive to the combination treatment than MC3T3-E1 cells. Mature osteoclasts were more sensitive than pre-osteoclasts with respect to TRAP activity; however, actin ring morphology was retained. Mitotic arrest was evident in tumour and endothelial cells in the mitotic index and cell cycle experiments. Increased caspase 3 activity and superoxide production indicated pro-apoptotic signalling in tumour and endothelial cells. Increased micronuclei numbers and γ-H2A.X foci indicated increased DNA damage in tumour cells. Compromised actin and tubulin morphologies and decreased invasion and migration were observed in endothelial cells. Western blot analyses revealed reduced metastatic and angiogenic signalling. ESE-16-induced radiosensitization inhibits metastatic signalling and tumour cell survival whilst preferentially preserving bone cells.
This low-dose combination treatment strategy may improve the quality of life of patients with metastatic bone disease. Future studies will include 3-dimensional in-vitro and murine in-vivo models.
Keywords: angiogenesis, apoptosis, bone metastasis, cancer, cell migration, cytoskeleton, DNA damage, ESE-16, radiosensitization
Procedia PDF Downloads 162
8 Blockchain Based Hydrogen Market (BBH₂): A Paradigm-Shifting Innovative Solution for Climate-Friendly and Sustainable Structural Change
Authors: Volker Wannack
Abstract:
Regional, national, and international strategies focusing on hydrogen (H₂) and blockchain are driving significant advancements in hydrogen and blockchain technology worldwide. These strategies lay the foundation for the groundbreaking "Blockchain Based Hydrogen Market (BBH₂)" project, whose primary goal is to develop a functional Blockchain Minimum Viable Product (B-MVP) for the hydrogen market. The B-MVP will leverage blockchain as an enabling technology with a common database and platform, facilitating secure and automated transactions through smart contracts. This innovation will revolutionize logistics, trading, and transactions within the hydrogen market. The B-MVP has transformative potential across various sectors. It benefits renewable energy producers, producers of hydrogen from surplus energy, hydrogen transport and distribution grid operators, and hydrogen consumers. By implementing standardized, automated, and tamper-proof processes, the B-MVP enhances cost efficiency and enables transparent and traceable transactions. Its key objective is to establish the verifiable integrity of climate-friendly "green" hydrogen by tracing its supply chain from renewable energy producers to end users. This emphasis on transparency and accountability promotes economic, ecological, and social sustainability while fostering a secure and transparent market environment. A notable feature of the B-MVP is its cross-border operability, which eliminates the need for country-specific data storage and expands its global applicability. This flexibility not only broadens its reach but also creates opportunities for long-term job creation through the establishment of a dedicated blockchain operating company. By attracting skilled workers and supporting their training, the B-MVP strengthens the workforce in the growing hydrogen sector. Moreover, it drives the emergence of innovative business models that attract additional company establishments and startups. For instance, data evaluation can be utilized to develop customized tariffs and provide demand-oriented network capacities to producers and network operators, benefitting redistributors and end customers with tamper-proof pricing options. The B-MVP not only brings technological and economic advancements but also enhances the visibility of national and international standard-setting efforts. Regions implementing the B-MVP become pioneers in climate-friendly, sustainable, and forward-thinking practices, generating interest beyond their geographic boundaries. Additionally, the B-MVP serves as a catalyst for research and development, facilitating knowledge transfer between universities and companies. This collaborative environment fosters scientific progress, aligns with strategic innovation management, and cultivates an innovation culture within the hydrogen market. Through the integration of blockchain and hydrogen technologies, the B-MVP promotes holistic innovation and contributes to a sustainable future in the hydrogen industry. The implementation process involves evaluating and mapping suitable blockchain technology and architecture; developing and implementing the blockchain and smart contracts; and depositing certificates of origin.
It also includes creating interfaces to existing systems such as nomination, portfolio management, trading, and billing systems; testing the scalability of the B-MVP to other markets and user groups; developing data formats for process-relevant data exchange; and conducting field studies to validate the B-MVP. BBH₂ is part of the "Technology Offensive Hydrogen" funding call within the research funding of the Federal Ministry of Economics and Climate Protection in the 7th Energy Research Programme of the Federal Government.
Keywords: hydrogen, blockchain, sustainability, innovation, structural change
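As a purely conceptual illustration of why a blockchain makes certificates of origin tamper-evident, the sketch below builds a minimal hash-chained ledger in Python. It is not the B-MVP's stack, and none of the field names are taken from the project:

```python
# Conceptual hash-chain sketch only; the real B-MVP uses a full blockchain
# stack with smart contracts. Field names below are invented.
import hashlib
import json
import time

def add_block(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"payload": payload, "prev_hash": prev_hash, "ts": time.time()}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

def verify(chain):
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
        if i and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, {"producer": "WindFarm-A", "kg_h2": 500, "source": "wind"})
add_block(chain, {"transfer": "WindFarm-A -> GridOp-B", "kg_h2": 500})
print("chain valid:", verify(chain))                   # True
chain[0]["payload"]["source"] = "grey"                 # tampering attempt...
print("chain valid after tampering:", verify(chain))  # ...is detected: False
```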
Procedia PDF Downloads 172
7 Hybrid GNN Based Machine Learning Forecasting Model for Industrial IoT Applications
Authors: Atish Bagchi, Siva Chandrasekaran
Abstract:
Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification, and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by the industries that lead to unplanned downtimes are: the current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while the existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN)-based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real-time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and the consequential financial losses. Method: The data stored within a process control system, e.g., Industrial IoT or Data Historian, is generally sampled during data acquisition from the sensor (source) and when persisting in the Data Historian, to optimise storage and query performance. The sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model which combines the expressive and extrapolation capability of GNNs, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (by 20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors.
Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. The model can interface with a plant's process control system in real-time to perform forecasting and classification tasks, helping asset management engineers operate their machines more efficiently and reduce unplanned downtimes. A series of trials is planned for this model in other manufacturing industries.
Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning
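The hybrid architecture described above can be sketched in a few lines of PyTorch: a graph convolution mixes signals across related sensors at each time step, and an LSTM models the resulting sequence. The layer sizes, toy adjacency matrix, and random inputs are illustrative, the entropy and spectral enhancements are omitted for brevity, and this is not the paper's production model:

```python
# Minimal PyTorch sketch of a GNN + LSTM hybrid forecaster (illustrative only).
import torch
import torch.nn as nn

class GCNLSTMForecaster(nn.Module):
    def __init__(self, n_sensors, hidden=32):
        super().__init__()
        self.gcn_weight = nn.Linear(1, hidden)     # per-sensor feature lift
        self.lstm = nn.LSTM(n_sensors * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_sensors)   # one-step-ahead forecast

    def forward(self, x, adj_norm):
        # x: (batch, time, n_sensors); adj_norm: normalized adjacency (n, n)
        h = self.gcn_weight(x.unsqueeze(-1))            # (B, T, N, H)
        h = torch.einsum("ij,btjh->btih", adj_norm, h)  # graph convolution
        h = torch.relu(h).flatten(2)                    # (B, T, N*H)
        out, _ = self.lstm(h)                           # temporal modelling
        return self.head(out[:, -1])                    # forecast next step

n_sensors, T = 4, 48
adj = torch.ones(n_sensors, n_sensors)         # fully connected toy graph
adj_norm = adj / adj.sum(1, keepdim=True)      # row-normalized adjacency
model = GCNLSTMForecaster(n_sensors)
x = torch.randn(8, T, n_sensors)               # 8 windows of sensor readings
print(model(x, adj_norm).shape)                # torch.Size([8, 4])
```

In the paper's setting, additional channels (entropy and spectral-change estimates) would be stacked alongside the raw sensor values before the graph convolution.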
Procedia PDF Downloads 150
6 The Impact of Neighborhood Effects on the Economic Mobility of the Inhabitants of Three Segregated Communities in Salvador (Brazil)
Authors: Stephan Treuke
Abstract:
The paper analyses the neighbourhood effects on the economic mobility of the inhabitants of three segregated communities of Salvador (Brazil), in other words, the socio-economic advantages and disadvantages affecting the lives of poor people due to their embeddedness in specific socio-residential contexts. Recent studies performed in Brazilian metropolises have concentrated on the structural dimensions of negative externalities in order to explain neighbourhood-level variations in a range of phenomena (delinquency, violence, access to the labour market and education) in spatially isolated and socially homogeneous slum areas (favelas). However, major disagreement remains over whether the contiguity between residents of poor neighbourhoods and higher-class condominio dwellers provides structures of opportunities or whether it fosters socio-spatial stigmatization. Based on a set of interviews investigating the variability of interpersonal networks and their activation in the struggle for economic inclusion, the study confirms that the proximity of Nordeste de Amaralina to middle- and upper-class communities positively affects access to labour opportunities. Nevertheless, residential stigmatization, as well as structures of social segmentation, annihilates these potentials. The lack of exposure to individuals and groups outside the favela's social, educational, and cultural context restricts the structures of opportunities to the local level. Therefore, residents' interpersonal networks reveal a high degree of redundancy and localism, based on bonding ties connecting family and neighbourhood members. The resilience of segregational structures in Plataforma contributes to the naturalization of social distance patterns. Its embeddedness in a socially homogeneous residential area (Subúrbio Ferroviário), growing informally and beyond official urban politics, encourages the construction of isotopic patterns of sociability, sharing the same values, social preferences, perspectives, and behaviour models. Whereas its spatial isolation correlates with the scarcity of economic opportunities, the social heterogeneity of the Fazenda Grande II interviewees and the socialising effects of public institutions mitigate the negative repercussions of segregation. The networks' composition admits a higher degree of heterophilia and a greater proportion of bridging ties, accounting for access to broader information assets and facilitating economic mobility. The variability observed within the three different scenarios urges reflection on the responsibility of urban politics when it comes to the prevention or consolidation of the social segregation process in Salvador. Instead of promoting the local development of the favela Plataforma, public housing programs prioritize technocratic housing solutions without providing for the residents' socio-economic integration. The impact of the negative externalities related to the homogeneously poor neighbourhood is amplified in peripheral areas, turning their inhabitants socially invisible and isolating them from other social groups.
The example of Nordeste de Amaralina portrays the failure of urban politics to bridge the social distances structuring Brazilian society's rigid stratification model, which is founded on mechanisms of segmentation (unequal access to the labour market, the education system, public transport, social security, and legal protection) and generates permanent conflicts between the two socioeconomically distant groups living in geographic contiguity. Finally, in the case of Fazenda Grande II, public investments in both housing projects and complementary infrastructure (e.g., schools, hospitals, a community center, police stations, recreation areas) contribute to the residents' socio-economic inclusion.
Keywords: economic mobility, neighborhood effects, Salvador, segregation
Procedia PDF Downloads 280
5 Light Sensitive Plasmonic Nanostructures for Photonic Applications
Authors: Istvan Csarnovics, Attila Bonyar, Miklos Veres, Laszlo Himics, Attila Csik, Judit Kaman, Julia Burunkova, Geza Szanto, Laszlo Balazs, Sandor Kokenyesi
Abstract:
In this work, the performance of gold nanoparticles in stimulating photosensitive materials for photonic applications was investigated. Gold is widely used for surface plasmon resonance experiments, not least because its optical resonances manifest in the visible spectral region. Localized surface plasmon resonance is rather easily observed in nanometer-sized metallic structures and is widely used for measurements and sensing, in semiconductor devices, and even in optical data storage. Firstly, gold nanoparticles on a silica glass substrate satisfy the conditions for surface plasmon resonance in the green-red spectral range, where chalcogenide glasses have their highest sensitivity. The gold nanostructures influence and enhance the optical, structural, and volume changes and promote exciton generation in the gold nanoparticle/chalcogenide layer structure. The experimental results support the importance of localized electric fields in the photo-induced transformation of chalcogenide glasses and suggest new approaches to improving the performance of these optical recording media. The results may be utilized for direct, micrometre- or submicron-scale geometrical and optical pattern formation and for further developing the explanations of these effects in chalcogenide glasses. Besides that, gold nanoparticles can be added to organic light-sensitive materials. Acrylate-based materials are frequently used for optical, holographic recording of optoelectronic elements due to photo-stimulated structural transformations. The holographic recording process and the photo-polymerization effect can be enhanced by the localized plasmon field of the created gold nanostructures. Finally, gold nanoparticles are widely used for electrochemical and optical sensor applications. Although these NPs can be synthesized in several ways, perhaps one of the simplest methods is the thermal annealing of pre-deposited thin films on glass or silicon surfaces. With this method, the parameters of the annealing process (time, temperature) and the thickness of the pre-deposited thin film influence and define the resulting size and distribution of the NPs on the surface. Localized surface plasmon resonance (LSPR) is a very sensitive optical phenomenon and can be utilized for a large variety of sensing purposes (chemical sensors, gas sensors, biosensors, etc.). Surface-enhanced Raman spectroscopy (SERS) is an analytical method which can significantly increase the yield of Raman scattering of target molecules adsorbed on the surface of metallic nanoparticles. The sensitivity of LSPR- and SERS-based devices strongly depends on the material used as well as on the size and geometry of the metallic nanoparticles. By controlling these parameters, the plasmon absorption band can be tuned and the sensitivity optimized. The technological parameters of the generated gold nanoparticles were investigated, and their influence on the SERS and LSPR sensitivity was established. The LSPR sensitivity was simulated for gold nanocubes and nanospheres with the MNPBEM MATLAB toolbox. It was found that the enhancement factor (which characterizes the increase in the peak shift for multi-particle arrangements compared to single-particle models) depends on the size of the nanoparticles and on the distance between the particles. This work was supported by the GINOP-2.3.2-15-2016-00041 project, which is co-financed by the European Union and the European Social Fund.
Istvan Csarnovics is grateful for support through the New National Excellence Program of the Ministry of Human Capacities (ÚNKP-17-4). Attila Bonyár and Miklós Veres are grateful for the support of the János Bolyai Research Scholarship of the Hungarian Academy of Sciences.
Keywords: light sensitive nanocomposites, metallic nanoparticles, photonic application, plasmonic nanostructures
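The physics behind such LSPR simulations can be illustrated with a quasi-static (dipole) estimate of the extinction of a small gold sphere using a Drude dielectric function. The study itself used the MNPBEM MATLAB toolbox; the Drude parameters below are common textbook approximations for gold, not fitted values, and a pure Drude model blueshifts the peak slightly relative to measured gold (around 520 nm in water):

```python
# Quasi-static (dipole) LSPR sketch for a small gold sphere; Drude
# parameters are common approximations, not fitted values.
import numpy as np

eps_inf, wp, gamma = 9.8, 9.0, 0.07   # Drude parameters for gold [eV]
eps_medium = 1.77                     # water-like host medium
radius = 20e-9                        # sphere radius [m]

wavelengths = np.linspace(400e-9, 900e-9, 500)
energy_ev = 1239.84e-9 / wavelengths  # photon energy [eV]
eps_gold = eps_inf - wp**2 / (energy_ev**2 + 1j * gamma * energy_ev)

# Dipole polarizability and extinction cross-section (quasi-static limit)
alpha = 4 * np.pi * radius**3 * (eps_gold - eps_medium) / (eps_gold + 2 * eps_medium)
k = 2 * np.pi * np.sqrt(eps_medium) / wavelengths
sigma_ext = k * alpha.imag

peak = wavelengths[np.argmax(sigma_ext)]
print(f"LSPR peak near {peak * 1e9:.0f} nm")
```

The resonance condition Re(ε_gold) = -2·ε_medium makes the peak position sensitive to the surrounding refractive index, which is precisely what LSPR sensors exploit.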
Procedia PDF Downloads 306
4 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion
Authors: Ali Kazemi
Abstract:
In volatile and interconnected global financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study presents a method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This era, marked by considerable volatility and transformation in financial markets, provides a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that accurately reflects market intricacies. We collect daily opening, closing, high, and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate critical macroeconomic indicators: interest rates, inflation rates, GDP growth, and unemployment rates. The GCN component learns the relational patterns among the financial instruments, represented as nodes in a comprehensive market graph. Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling the model to grasp the complex network of influences governing market movements. Complementing this, the LSTM component is trained on sequences of the spatial-temporal representation learned by the GCN, enriched with historical price and volume data, allowing it to capture and predict temporal market trends accurately. In a comprehensive evaluation of our GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model showed superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting day-by-day price movements, and an RMSE of 1.2%, underscoring its effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing predictive performance on directional market movements, the model achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that depend on predicting the direction of price moves. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework.
Our findings promise to improve investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting
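The graph-construction step, in which edges encode co-movement between assets, can be sketched as follows. Prices here are simulated with a shared market factor; the study used 2015-2023 index and cryptocurrency data and also incorporated sentiment correlations, which are omitted here:

```python
# Sketch of building a co-movement market graph from daily returns;
# the return series are simulated, not the study's data.
import numpy as np
import pandas as pd
import networkx as nx

rng = np.random.default_rng(7)
assets = ["SP500", "NASDAQ", "BTC", "ETH"]
common = rng.normal(0, 0.01, 500)  # shared market factor
returns = pd.DataFrame(
    {a: common * rng.uniform(0.5, 1.5) + rng.normal(0, 0.01, 500)
     for a in assets})

corr = returns.corr()
G = nx.Graph()
G.add_nodes_from(assets)
for i, a in enumerate(assets):
    for b in assets[i + 1:]:
        if abs(corr.loc[a, b]) > 0.4:          # co-movement threshold
            G.add_edge(a, b, weight=corr.loc[a, b])

print(nx.to_pandas_adjacency(G, weight="weight").round(2))
# A row-normalized version of this adjacency is what a GCN layer consumes.
```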
Procedia PDF Downloads 68
3 Times2D: A Time-Frequency Method for Time Series Forecasting
Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan
Abstract:
Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduce significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often suffer from locality, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle to capture relationships between distant time points due to the locality of one-dimensional convolution kernels. Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high-volatility and computational-complexity challenges in long-horizon forecasting. To address the intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates, in parallel, 2D spectrogram and derivative heatmap techniques. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrate that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks.
Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
Keywords: derivative patterns, spectrogram, time series forecasting, Times2D, 2D representation
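The two parallel 2D views at the heart of the approach can be sketched with standard tooling: a spectrogram for the frequency domain and a stacked derivative heatmap for the time domain. The windowing choices below are illustrative, not the paper's exact configuration:

```python
# Sketch of the two parallel 2-D views: spectrogram (frequency domain)
# and derivative heatmap (time domain). Window sizes are illustrative.
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(3)
t = np.arange(2048)
series = (np.sin(2 * np.pi * t / 64)            # periodic component
          + 0.5 * np.sin(2 * np.pi * t / 400)   # slow trend-like cycle
          + 0.3 * rng.normal(size=t.size))      # noise

# View 1: spectrogram captures periodicity
freqs, times, Sxx = spectrogram(series, nperseg=128, noverlap=64)
print("spectrogram shape:", Sxx.shape)          # (freq bins, time frames)

# View 2: derivative heatmap highlights sharp fluctuations / turning
# points by stacking 1st and 2nd derivatives over sliding windows.
d1 = np.gradient(series)
d2 = np.gradient(d1)
window = 128
frames = [np.stack([d1[i:i + window], d2[i:i + window]])
          for i in range(0, series.size - window, 64)]
heatmap = np.concatenate(frames, axis=0)
print("derivative heatmap shape:", heatmap.shape)
# Both 2-D arrays can now be fed to standard computer-vision backbones.
```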
Procedia PDF Downloads 442
Innovative Practices That Have Significantly Scaled up Depot Medroxy Progesterone Acetate-SC Self-Inject Services
Authors: Oluwaseun Adeleke, Samuel O. Ikani, Fidelis Edet, Anthony Nwala, Mopelola Raji, Simeon Christian Chukwu
Abstract:
Background
The Delivering Innovations in Selfcare (DISC) project promotes universal access to quality self-care services, beginning with the subcutaneous depot medroxyprogesterone acetate (DMPA-SC) contraceptive self-injection (SI) option. Self-injection offers women a highly effective and convenient option that saves them frequent trips to providers, and its increased use has the potential to improve the efficiency of an overstretched healthcare system by reducing provider workloads. State Social and Behavioral Change Communications (SBCC) Officers lead the project's demand creation and service delivery innovations, which have resulted in significant increases in SI uptake among women who opt for injectables.
Strategies: Service Delivery Innovations
The implementation of the "Moment of Truth" (MoT) innovation helped providers overcome biases and address client fear and reluctance to self-inject. Bi-annual program audits and supportive mentoring visits helped providers retain their competence and motivation. Proper documentation, tracking, and replenishment of commodities were ensured through effective engagement with State Logistics Units. The project supported existing state monitoring and evaluation structures to record and report DMPA-SC service utilization effectively.
Demand Creation Innovations
SBCC Officers provide oversight, routinely evaluate performance, deliver training, and give feedback on the demand creation activities implemented by community mobilizers (CMs); the scope and intensity of the training given to CMs affects the outcome of their work. The project operates a demand creation model that uses a schedule to inform the conduct of interpersonal and group events. Health education sessions are specifically designed to counter misinformation, address questions and concerns, and educate the target audience in an informed-choice context. The project mapped facilities and their catchment areas and engaged identified influencers and gatekeepers to secure their buy-in prior to community entry. Each mobilization event began with pre-mobilization sensitization activities, particularly targeting male groups, and context-specific interventions were informed by the religious, traditional, and cultural peculiarities of target communities. Mobilizers also help clients engage with and navigate digital Family Planning (FP) channels such as the DiscoverYourPower website, Facebook page, digital companion (chatbot), interactive voice response (IVR), and radio and television (TV) messaging, which improves compliance and provides linkages to nearby facilities.
Results
The project recorded 136,950 SI visits, and the SI proportion increased from 13 percent before the interventions began in 2021 to 62 percent currently. The project cost-effectively demonstrated catalytic impact by leveraging state and partner resources, institutional platforms, and geographic scope to sustainably scale up these strategies.
Conclusion
Evidence-informed iterations of service delivery and demand creation models have significantly driven SI uptake, and it will be useful to consider this implementation model during program design.
Consideration should also be given to the systematic and strategic execution of these strategies to optimize impact.
Keywords: family planning, contraception, DMPA-SC, self-care, self-injection, innovation, service delivery, demand creation
Procedia PDF Downloads 75