Search results for: input constraints
738 Biostimulant and Abiotic Plant Stress Interactions in Malting Barley: A Glasshouse Study
Authors: Conor Blunt, Mariluz del Pino-de Elias, Grace Cott, Saoirse Tracy, Rainer Melzer
Abstract:
The European Green Deal, announced in 2021, calls for agricultural chemical pesticide use and synthetic fertilizer application to be reduced by 50% and 20%, respectively, by 2030. Increasing and maintaining expected yields under these ambitious goals has strained the agricultural sector. This intergovernmental plan has identified plant biostimulants as one potential input to facilitate this new phase of sustainable agriculture; these products are defined as microorganisms or substances that can stimulate soil and plant functioning to enhance crop nutrient use efficiency, quality and tolerance to abiotic stresses. Spring barley is Ireland's most widely sown tillage crop, and grain destined for malting commands the highest market price. Heavy, erratic rainfall is forecast for Ireland's future climate, and barley is particularly susceptible to waterlogging. Recent findings suggest that plant receptivity to biostimulants may depend on the level of stress inflicted on crops to elicit an assisted plant response. In this study, three biostimulants of different genesis (seaweed, protein hydrolysate and bacteria) are applied to 'RGT Planet' malting barley fertilized at three different rates (0 kg/ha, 40 kg/ha, 75 kg/ha) of calcium ammonium nitrate (27% N) under non-stressed and waterlogged conditions. This 4x3x2 factorial trial was planted in a completely randomized block design with one plant per experimental unit. Leaf gas exchange data and key agronomic and grain quality parameters were analyzed via ANOVA. No productivity penalty was evident for plants receiving 40 kg/ha of N plus biostimulant compared with the 75 kg/ha N treatments. The main effects of nitrogen application and waterlogging accounted for the most significant variation in the dataset.
Keywords: biostimulant, barley, malting, NUE, waterlogging
Procedia PDF Downloads 76
737 Mindful Self-Compassion Training to Alleviate Work Stress and Fatigue in Community Workers: A Mixed Method Evaluation
Authors: Catherine Begin, Jeanne Berthod, Manon Truchon
Abstract:
In Quebec, there are more than 8,000 community organizations throughout the province, representing more than 72,000 jobs. Working in a community setting involves several particularities (e.g., contact with the suffering of users, feelings of powerlessness, institutional pressure, unstable funding, etc.), which can put workers at risk of fatigue, burnout, and psychological distress. A 2007 study shows that 52% of community workers surveyed have a high psychological distress index. The Ricochet project, founded in 2019, is an initiative aimed at providing various care and services to community workers in the Quebec City region, with a global health approach. Within this program, mindful self-compassion training (MSC) is offered at a low cost. MSC is one of the effective strategies proposed in the literature to help prevent and reduce burnout. Self-compassion is the recognition that suffering, failure, and inadequacies are inherent in the human experience and that everyone, including oneself, deserves compassion. MSC training targets several behavioral, cognitive, and emotional learning outcomes (e.g., motivating oneself with caring, better managing difficult emotions, promoting resilience, etc.). A mixed-method evaluation was conducted with the participants in order to explore the effects of the training on community workers in the Quebec City region. The participants were community workers (management or caregivers). Fifteen participants completed satisfaction and perceived-impact surveys, and 30 participated in structured interviews. Quantitative results showed that participants were generally completely satisfied or satisfied with the training (94%) and perceived that the training allowed them to develop new strategies for dealing with stress (87%). Participants perceived effects on their mood (93%), their contact with others (80%), and their stress level (67%). Some of the barriers raised were scheduling constraints, the length of the training, and guilt about taking time for oneself. The qualitative results show that individuals experienced long-term benefits, as they were able to apply the tools they received during the training in their daily lives. Some barriers were noted, such as difficulty in getting away from work or problems with the employer, which prevented enrollment. Overall, the results of this evaluation support the use of MSC (mindful self-compassion) training among community workers. Future research could support this evaluation by using a rigorous design and developing innovative ways to overcome the barriers raised.
Keywords: mindful self-compassion, community workers, work stress, burnout, wellbeing at work
Procedia PDF Downloads 119
736 A One-Dimensional Modeling Analysis of the Influence of Swirl and Tumble Coefficient in a Single-Cylinder Research Engine
Authors: Mateus Silva Mendonça, Wender Pereira de Oliveira, Gabriel Heleno de Paula Araújo, Hiago Tenório Teixeira Santana Rocha, Augusto César Teixeira Malaquias, José Guilherme Coelho Baeta
Abstract:
Stricter legislation and greater public demand regarding gas emissions and their effects on the environment as well as on human health have led the automotive industry to reinforce research focused on reducing levels of contamination. This reduction can be achieved through the implementation of improvements in internal combustion engines in such a way that they promote the reduction of both specific fuel consumption and air pollutant emissions. These improvements can be obtained through numerical simulation, a technique that works together with experimental tests. The aim of this paper is to build, with the support of the GT-Suite software, a one-dimensional model of a single-cylinder research engine to analyze the impact of the variation of swirl and tumble coefficients on the performance and on the air pollutant emissions of an engine. Initially, the discharge coefficient is calculated through the software Converge CFD 3D, given that it is an input parameter in GT-Power. Mesh sensitivity tests are performed on the 3D geometry built for this purpose, using the mass flow rate in the valve as a reference. In the one-dimensional simulation, the non-predictive combustion model called Three Pressure Analysis (TPA) is adopted, and then data such as the mass trapped in the cylinder, heat release rate, and accumulated released energy are calculated, so that validation can be performed by comparing these data with those obtained experimentally. Finally, the swirl and tumble coefficients are introduced in their corresponding objects so that their influences can be observed when compared to the results obtained previously.
Keywords: 1D simulation, single-cylinder research engine, swirl coefficient, three pressure analysis, tumble coefficient
Procedia PDF Downloads 105
735 A Serious Game to Upgrade the Learning of Organizational Skills in Nursing Schools
Authors: Benoit Landi, Hervé Pingaud, Jean-Benoit Culie, Michel Galaup
Abstract:
Serious games have been widely disseminated in the field of digital learning. They have proved their utility in improving skills through virtual environments that simulate the field where new competencies have to be improved and assessed. This paper describes how we created CLONE, a serious game whose purpose is to help nurses create an efficient work plan in a hospital care unit. In CLONE, the number of patients to take care of is similar to the reality of their job, going far beyond what is currently practiced in nursing school classrooms. This similarity with the operational field increases proportionally the number of activities to be scheduled. Moreover, very often, the team of nurses is composed of regular nurses and nurse assistants who must share the work in accordance with regulatory obligations. Therefore, on the one hand, building a short-term plan is a complex task with a large amount of data to deal with, and on the other, good clinical practices have to be systematically applied. We present how a reference plan has been defined by formulating an optimization problem using the expertise of teachers. This formulation ensures the gameplay feasibility of the scenario that has been produced and enhanced throughout the game design process. It was also crucial to steer a player toward a specific gaming strategy. As one of our most important learning outcomes is a clear understanding of the workload concept, its actual calculation for each caregiver over time and its inclusion in the nurse's reasoning during plan elaboration are focal points. We will demonstrate how to modify the game scenario to create a digital environment in which these somewhat abstract principles can be understood and applied. Finally, we report on a pilot experience with a thousand undergraduate nursing students.
Keywords: care planning, workload, game design, hospital nurse, organizational skills, digital learning, serious game
Procedia PDF Downloads 191
734 Optimization of Sintering Process with Deteriorating Quality of Iron Ore Fines
Authors: Chandra Shekhar Verma, Umesh Chandra Mishra
Abstract:
Blast furnace performance mainly depends on the quality of sinter, as it occupies a major portion of the iron-bearing burden; hence its quality with respect to Tumbler Index (TI), Reducibility Index (RI) and Reduction Degradation Index (RDI) comprises the key performance indicators of a sinter plant. It has become very difficult to maintain the desired quality with the increasing alumina (Al₂O₃) content in iron ore fines, and this study focuses on that problem. Alumina is a refractory material and requires more heat input to fuse, thereby affecting the desired sintering temperature, i.e. 1300°C. It enters the grain boundaries of the bond and makes it weaker. Sinter strength decreases with increasing alumina content, and weak sinter generates more fines, thereby reducing net sinter production as well as plant productivity. The presence of impurities beyond acceptable norms, such as LOI, Al₂O₃, MnO, TiO₂, K₂O, Na₂O, hydrates (goethite and limonite), SiO₂, phosphorus and zinc, has led to greater challenges in the thrust areas of productivity, quality and cost. The ultimate aim of this study is to maintain sinter strength even with high Al₂O₃ without hampering plant productivity. This study includes a mineralogy test of the iron fines to find out the fractions of the different phases present in the ore, and a phase analysis of the product sinter to know the distribution of the different phases. Corrections were made focusing mainly on varying the Al₂O₃/SiO₂ ratio and the basicity indices B2 (CaO/SiO₂), B3 ((CaO+MgO)/SiO₂) and B4 ((CaO+MgO)/(SiO₂+Al₂O₃)). The concepts of the alumina/silica ratio, B3 and B4 were found to be useful. MgO, Al₂O₃/SiO₂, B2, B3 and B4 were varied to obtain the desired sinter strength even at high alumina (4.2-4.5%) in the sinter. The study concludes with the establishment of B4 and the Al₂O₃/SiO₂ ratio in the ranges 1.53-1.60 and 0.63-0.70, respectively, achieving a tumbler index (drum index) above 76 with a plant productivity of 1.58-1.6 t/m²/hr at JSPL, Raigarh. The study shows that despite high alumina in the sinter, its physical quality can be controlled by maintaining the above-mentioned parameters.
Keywords: Basicity-2, Basicity-3, Basicity-4, sinter
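Since the abstract defines the basicity indices explicitly (B2 = CaO/SiO₂, B3 = (CaO+MgO)/SiO₂, B4 = (CaO+MgO)/(SiO₂+Al₂O₃)), a minimal sketch of how these control parameters are derived from a sinter chemical analysis may help; the composition values used below are hypothetical and are not taken from the study.

```python
def sinter_quality_indices(cao, mgo, sio2, al2o3):
    """Return the basicity and alumina/silica indices referenced in the abstract.

    All inputs are mass percentages from a sinter chemical analysis.
    """
    b2 = cao / sio2                      # binary basicity
    b3 = (cao + mgo) / sio2              # ternary basicity
    b4 = (cao + mgo) / (sio2 + al2o3)    # quaternary basicity
    alumina_silica = al2o3 / sio2        # Al2O3/SiO2 ratio
    return {"B2": b2, "B3": b3, "B4": b4, "Al2O3/SiO2": alumina_silica}

# Hypothetical sinter chemistry (mass %), for illustration only
print(sinter_quality_indices(cao=15.0, mgo=2.2, sio2=6.7, al2o3=4.4))
```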
Procedia PDF Downloads 172
733 Graph Clustering Unveiled: ClusterSyn - A Machine Learning Framework for Predicting Anti-Cancer Drug Synergy Scores
Authors: Babak Bahri, Fatemeh Yassaee Meybodi, Changiz Eslahchi
Abstract:
In the pursuit of effective cancer therapies, the exploration of combinatorial drug regimens is crucial to leverage synergistic interactions between drugs, thereby improving treatment efficacy and overcoming drug resistance. However, identifying synergistic drug pairs poses challenges due to the vast combinatorial space and the limitations of experimental approaches. This study introduces ClusterSyn, a machine learning (ML)-powered framework for classifying anti-cancer drug synergy scores. ClusterSyn employs a two-step approach involving drug clustering and synergy score prediction using a fully connected deep neural network. For each cell line in the training dataset, a drug graph is constructed, with nodes representing drugs and edge weights denoting synergy scores between drug pairs. Drugs are clustered using the Markov clustering (MCL) algorithm, and vectors representing the similarity of drug pairs to each cluster are input into the deep neural network for synergy score prediction (synergy or antagonism). Clustering results demonstrate effective grouping of drugs based on synergy scores, aligning similar synergy profiles. Subsequently, the neural network predictions and the synergy scores of the two drugs with others within their clusters are used to predict the synergy score of the considered drug pair. This approach facilitates comparative analysis with clustering and regression-based methods, revealing the superior performance of ClusterSyn over state-of-the-art methods like DeepSynergy and DeepDDS on diverse datasets such as O'Neil and ALMANAC. The results highlight the remarkable potential of ClusterSyn as a versatile tool for predicting anti-cancer drug synergy scores.
Keywords: drug synergy, clustering, prediction, machine learning, deep learning
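As a rough illustration of the clustering step described above, the sketch below runs a plain Markov clustering (MCL) loop (expansion, then inflation and column normalization) on a small, made-up, non-negative drug-drug synergy matrix. It is not the ClusterSyn implementation; the matrix values, inflation parameter, and convergence threshold are assumptions for demonstration only.

```python
import numpy as np

def markov_cluster(adjacency, inflation=2.0, max_iter=100, tol=1e-6):
    """Basic MCL on a non-negative weighted adjacency (synergy) matrix."""
    M = adjacency + np.eye(len(adjacency))        # add self-loops
    M = M / M.sum(axis=0, keepdims=True)          # column-normalize
    for _ in range(max_iter):
        expanded = M @ M                          # expansion step
        inflated = expanded ** inflation          # inflation step
        inflated /= inflated.sum(axis=0, keepdims=True)
        converged = np.abs(inflated - M).max() < tol
        M = inflated
        if converged:
            break
    # Rows that keep mass act as cluster attractors; their non-zero columns are members
    clusters = set()
    for attractor in np.where(M.sum(axis=1) > 1e-3)[0]:
        clusters.add(tuple(np.where(M[attractor] > 1e-3)[0]))
    return sorted(clusters)

# Toy 5-drug synergy matrix (symmetric, values are illustrative only)
synergy = np.array([
    [0.0, 0.9, 0.8, 0.1, 0.0],
    [0.9, 0.0, 0.7, 0.0, 0.1],
    [0.8, 0.7, 0.0, 0.2, 0.0],
    [0.1, 0.0, 0.2, 0.0, 0.9],
    [0.0, 0.1, 0.0, 0.9, 0.0],
])
print(markov_cluster(synergy))   # expect roughly {0, 1, 2} and {3, 4}
```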
Procedia PDF Downloads 79
732 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also consider the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
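The sketch below illustrates only the basic straggler-tolerant idea that the abstract builds on: the row blocks of X are linearly encoded with a Vandermonde matrix, each worker multiplies one encoded block by Y, and the master recovers W = XY from any k of the n worker results. It does not implement the authors' secure/private PSGPD scheme; the block counts, evaluation points, and the choice of responding workers are arbitrary assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 2, 4                      # k data blocks, n workers (tolerates n - k stragglers)
X = rng.standard_normal((6, 5))  # will be split into k row blocks
Y = rng.standard_normal((5, 3))
blocks = np.split(X, k, axis=0)

# Encode: worker i stores sum_j alpha_i**j * X_j (a Vandermonde combination)
alphas = np.arange(1, n + 1, dtype=float)
encoded = [sum(a**j * blocks[j] for j in range(k)) for a in alphas]

# Each worker computes its encoded block times Y; suppose only workers 1 and 3 respond
worker_results = {i: encoded[i] @ Y for i in (1, 3)}

# Decode: invert the Vandermonde system restricted to the responding workers
ids = sorted(worker_results)
V = np.vander(alphas[ids], N=k, increasing=True)   # rows: [1, alpha, alpha**2, ...]
coeffs = np.linalg.inv(V)                          # maps worker results back to X_j @ Y
decoded = [sum(coeffs[j, r] * worker_results[ids[r]] for r in range(len(ids)))
           for j in range(k)]
W_recovered = np.vstack(decoded)
assert np.allclose(W_recovered, X @ Y)             # product recovered from only k workers
```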
Procedia PDF Downloads 122
731 Development of a Software System for Management and Genetic Analysis of Biological Samples for Forensic Laboratories
Authors: Mariana Lima, Rodrigo Silva, Victor Stange, Teodiano Bastos
Abstract:
Due to the high reliability reached by DNA tests, since the 1980s this kind of test has allowed the resolution of a growing number of criminal cases, including old cases that were unsolved and now have a chance to be solved with this technology. Currently, the use of genetic profiling databases is a typical method to increase the scope of genetic comparison. Forensic laboratories must process, analyze, and generate genetic profiles for a growing number of samples, which requires time and great storage capacity. Therefore, it is essential to develop methodologies capable of organizing the work and minimizing the time spent on both biological sample processing and the analysis of genetic profiles, using software tools. Thus, the present work aims at the development of a software system for forensic genetics laboratories, which allows sample, criminal case and local database management, minimizes the time spent in the workflow and helps to compare genetic profiles. For the development of this software system, all data related to the storage and processing of samples, as well as the workflows and requirements that the system incorporates, have been considered. The system uses the following software languages: HTML, CSS, and JavaScript in Web technology, with the NodeJS platform as the server, which offers great efficiency in the input and output of data. In addition, the data are stored in a relational database (MySQL), which is free, allowing better acceptance by users. The software system developed here brings more agility to the workflow and analysis of samples, contributing to the rapid insertion of genetic profiles into the national database and to an increased resolution of crimes. The next step of this research is its validation, in order to operate in accordance with current Brazilian national legislation.
Keywords: database, forensic genetics, genetic analysis, sample management, software solution
Procedia PDF Downloads 370
730 Sensitivity and Uncertainty Analysis of One Dimensional Shape Memory Alloy Constitutive Models
Authors: A. B. M. Rezaul Islam, Ernur Karadogan
Abstract:
Shape memory alloys (SMAs) are known for their shape memory effect and pseudoelasticity behavior. Their thermomechanical behaviors have been modeled by numerous researchers from microscopic thermodynamic and macroscopic phenomenological points of view. The Tanaka, Liang-Rogers and Ivshin-Pence models are some of the most popular SMA macroscopic phenomenological constitutive models. They describe SMA behavior in terms of stress, strain and temperature. These models involve material parameters that have associated uncertainty. At different operating temperatures, this uncertainty propagates to the output when the material is subjected to loading followed by unloading. The propagation of uncertainty while utilizing these models in real-life applications can result in performance discrepancies or failure at extreme conditions. To resolve this, we used a probabilistic approach to perform the sensitivity and uncertainty analysis of the Tanaka, Liang-Rogers, and Ivshin-Pence models. Sobol and extended Fourier Amplitude Sensitivity Testing (eFAST) methods have been used to perform the sensitivity analysis for simulated isothermal loading/unloading at various operating temperatures. The results show that the models vary due to the change in operating temperature and loading condition. The average and stress-dependent sensitivity indices identify the most significant parameters at several temperatures. This work highlights the sensitivity and uncertainty analysis results and compares them at different temperatures and loading conditions for all these models. The analysis presented will aid in designing engineering applications by reducing the probability of model failure due to the uncertainty in the input parameters. Thus, it is recommended to have a proper understanding of the sensitive parameters and the uncertainty propagation at several operating temperatures and loading conditions for the Tanaka, Liang-Rogers, and Ivshin-Pence models.
Keywords: constitutive models, FAST sensitivity analysis, sensitivity analysis, Sobol, shape memory alloy, uncertainty analysis
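For readers unfamiliar with the workflow, the following is a minimal sketch of a generic Sobol sensitivity analysis of the kind described, assuming the open-source SALib package. The parameter names, bounds, and the surrogate response function are placeholders invented for illustration; they do not reproduce the Tanaka, Liang-Rogers, or Ivshin-Pence models or the authors' values.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical parameter space for a 1D SMA constitutive model
problem = {
    "num_vars": 3,
    "names": ["E_austenite", "E_martensite", "transformation_strain"],
    "bounds": [[40e9, 80e9], [20e9, 45e9], [0.03, 0.07]],
}

def sma_response(params):
    """Placeholder surrogate for an isothermal loading/unloading response metric."""
    e_a, e_m, eps_t = params
    return eps_t * (e_a - e_m) / e_a   # not a real constitutive law

samples = saltelli.sample(problem, 1024)              # Saltelli sampling scheme
outputs = np.array([sma_response(p) for p in samples])
indices = sobol.analyze(problem, outputs)             # first- and total-order indices

for name, s1, st in zip(problem["names"], indices["S1"], indices["ST"]):
    print(f"{name}: first-order={s1:.3f}, total-order={st:.3f}")
```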
Procedia PDF Downloads 144
729 An Early Attempt of Artificial Intelligence-Assisted Language Oral Practice and Assessment
Authors: Paul Lam, Kevin Wong, Chi Him Chan
Abstract:
Constant practice and accurate, immediate feedback are the keys to improving students' speaking skills. However, traditional oral examination often fails to provide such opportunities to students. The traditional, face-to-face oral assessment is often time-consuming: attending to the oral needs of one student often leads to the neglect of others. Hence, teachers can only provide limited opportunities and feedback to students. Moreover, students' incentive to practice is also reduced by their anxiety and shyness in speaking the new language. A mobile app was developed that uses artificial intelligence (AI) to provide immediate feedback on students' speaking performance as an attempt to solve the above-mentioned problems. Firstly, it was thought that online exercises would greatly increase the learning opportunities of students, as they can now practice more without the need for a teacher's presence. Secondly, the automatic feedback provided by the AI would enhance students' motivation to practice, as there is an instant evaluation of their performance. Lastly, students should feel less anxious and shy compared to practicing orally directly in front of teachers. Technically, the program makes use of speech-to-text functions to generate feedback to students. To be specific, the software analyzes students' oral input through a speech-to-text AI engine and then cleans up the results to the point that they can be compared with the target text. English teachers were invited to pilot the mobile app and asked for their feedback. Preliminary trials indicated that the approach has limitations. Much of the users' pronunciation was automatically corrected by the speech recognition function, as intelligent guessing is already integrated into many such systems. Nevertheless, teachers are confident that the app can be further improved for accuracy. It has the potential to significantly improve oral drilling by giving students more chances to practice. Moreover, they believe that the success of this mobile app confirms the potential to extend AI-assisted assessment to other language skills, such as writing, reading, and listening.
Keywords: artificial intelligence, mobile learning, oral assessment, oral practice, speech-to-text function
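The comparison step described above (matching the cleaned speech-to-text output against the target text) can be illustrated with a few lines of standard-library Python. This is only a sketch of one plausible word-level scoring approach, not the app's actual algorithm, and the example sentences and threshold-free score are invented.

```python
import difflib
import re

def normalize(text):
    """Lowercase and strip punctuation so the comparison focuses on the words."""
    return re.sub(r"[^a-z' ]", "", text.lower()).split()

def oral_practice_score(target, transcript):
    """Return a 0-100 similarity score between the target text and recognized speech."""
    matcher = difflib.SequenceMatcher(None, normalize(target), normalize(transcript))
    return round(100 * matcher.ratio(), 1)

target = "She sells seashells by the seashore."
recognized = "she sell seashells by the sea shore"   # hypothetical engine output
print(oral_practice_score(target, recognized))
```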
Procedia PDF Downloads 103
728 Assimilating Multi-Mission Satellites Data into a Hydrological Model
Authors: Mehdi Khaki, Ehsan Forootan, Joseph Awange, Michael Kuhn
Abstract:
Terrestrial water storage, as a source of freshwater, plays an important role in human lives. Hydrological models offer important tools for simulating and predicting water storage at global and regional scales. However, their comparison with 'reality' is imperfect, mainly due to a high level of uncertainty in input data, limitations in accounting for all complex water cycle processes, uncertainties in (unknown) empirical model parameters, as well as the absence of high-resolution (both spatial and temporal) data. Data assimilation can mitigate this drawback by incorporating new sets of observations into models. In this effort, we use multi-mission satellite-derived remotely sensed observations to improve the performance of the World-Wide Water Resources Assessment system (W3RA) hydrological model for estimating terrestrial water storage. For this purpose, we assimilate total water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) and surface soil moisture data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) into W3RA. This is done to (i) improve model estimates of water stored in the ground and soil moisture, and (ii) assess the impacts of each satellite data set (from GRACE and AMSR-E) and their combination on the final terrestrial water storage estimates. These data are assimilated into W3RA using the Ensemble Square-Root Filter (EnSRF) technique over the Mississippi Basin (United States) and the Murray-Darling Basin (Australia) between 2002 and 2013. In order to evaluate the results, independent ground-based groundwater and soil moisture measurements within each basin are used.
Keywords: data assimilation, GRACE, AMSR-E, hydrological model, EnSRF
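To make the assimilation step more concrete, the sketch below implements a generic ensemble square-root filter update for a single scalar observation, following the standard serial-observation form. It is a didactic illustration only: the state dimension, ensemble size, observation operator, and error variance are invented and bear no relation to the W3RA configuration or the GRACE/AMSR-E processing.

```python
import numpy as np

def ensrf_update(ensemble, obs, obs_var, H):
    """EnSRF update of an ensemble (n_members x n_state) with one scalar observation."""
    n_members = ensemble.shape[0]
    x_mean = ensemble.mean(axis=0)
    X_pert = ensemble - x_mean                        # forecast perturbations
    Hx = ensemble @ H                                 # observed-space value per member
    Hx_mean, Hx_pert = Hx.mean(), Hx - Hx.mean()
    PHt = X_pert.T @ Hx_pert / (n_members - 1)        # cross covariance
    HPHt = Hx_pert @ Hx_pert / (n_members - 1)        # observation-space variance
    K = PHt / (HPHt + obs_var)                        # Kalman gain
    alpha = 1.0 / (1.0 + np.sqrt(obs_var / (HPHt + obs_var)))
    x_mean_a = x_mean + K * (obs - Hx_mean)           # update the ensemble mean
    X_pert_a = X_pert - alpha * np.outer(Hx_pert, K)  # deterministic perturbation update
    return x_mean_a + X_pert_a

# Toy example: 20-member ensemble of a 3-variable state, observing the first variable
rng = np.random.default_rng(42)
ens = rng.normal(loc=[10.0, 5.0, 2.0], scale=1.0, size=(20, 3))
H = np.array([1.0, 0.0, 0.0])                         # observation operator
analysis = ensrf_update(ens, obs=11.2, obs_var=0.5, H=H)
print(analysis.mean(axis=0))                          # analysis mean pulled toward the observation
```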
Procedia PDF Downloads 289
727 Quality Tools for Shaping Quality of Learning and Teaching in Education and Training
Authors: Renga Rao Krishnamoorthy, Raihan Tahir
Abstract:
The quality of classroom learning and teaching delivery has been and will continue to be debated at various levels worldwide. The regional cooperation programme to improve the quality and labour market orientation of Technical and Vocational Education and Training (RECOTVET) of the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ), in line with the Sustainable Development Goals (SDGs), has taken the initiative in the development of quality TVET in the ASEAN region by developing the Quality Toolbox for Better TVET Delivery (Quality Toolbox). This initiative aims to provide quick and practical materials to trainers, instructors, and personnel involved in education and training at an institute to shape the quality of classroom learning and teaching. The Quality Toolbox for Better TVET Delivery was developed in three stages: literature review and development, validation, and finalization. Thematic areas in the Quality Toolbox were derived from the collective input of concerns and challenges raised at experts' workshops through moderated sessions involving representatives of TVET institutes from 9 ASEAN Member States (AMS). The sessions were facilitated by professional moderators and international experts. TVET practitioners representing the AMS further analysed and discussed the structure of the Quality Toolbox and the content of the thematic areas and outlined a set of specific requirements and recommendations. An application exercise of the Quality Toolbox was carried out by TVET institutes among the AMS. Experience-sharing sessions from participating ASEAN countries were conducted virtually. The findings revealed that TVET institutes use two types of approaches in shaping the quality of learning and teaching, described as inductive or deductive; that shaping quality in learning and teaching is a non-linear process; and, finally, that Q-tools can be adopted and adapted to shape the quality of learning and teaching at TVET institutes in the following ways: improvement of institutional quality, improvement of teaching quality, and improvement of the organisation of learning and teaching for students and trainers. The Quality Toolbox has good potential to be used at education and training institutes to shape quality in learning and teaching.
Keywords: AMS, GIZ, RECOTVET, quality tools
Procedia PDF Downloads 129
726 GIS Based Spatial Modeling for Selecting New Hospital Sites Using AHP, Entropy-MAUT and CRITIC-MAUT: A Study in Rural West Bengal, India
Authors: Alokananda Ghosh, Shraban Sarkar
Abstract:
The study aims to identify suitable sites for new hospitals with critical obstetric care facilities in Birbhum, one of the vulnerable and underserved districts of eastern India, considering six main criteria and 14 sub-criteria, using a GIS-based Analytic Hierarchy Process (AHP) and Multi-Attribute Utility Theory (MAUT) approach. The criteria were identified through field surveys and previous literature. After collecting expert decisions, a pairwise comparison matrix was prepared using the Saaty scale to calculate the weights through AHP. In contrast, objective weighting methods, i.e., Entropy and Criteria Importance Through Intercriteria Correlation (CRITIC), were used to perform the MAUT. Finally, suitability maps were prepared by weighted sum analysis. Sensitivity analyses of the AHP were performed to explore the effect of the dominant criteria. Results from the AHP reveal that 'maternal death in transit', followed by 'accessibility and connectivity' and 'maternal health care service (MHCS) coverage gap', were the three important criteria with comparatively higher weighted values. 'Accessibility and connectivity' and 'maternal death in transit' were observed to have more weight in Entropy and CRITIC, respectively. When the predicted suitability classes of these three models were compared with the layer of existing hospitals, all except Entropy-MAUT pointed towards areas left underserved by the existing facilities. Only 43%-67% of existing hospitals were in the moderate to lower suitability classes. Therefore, the results of the predictive models might bring valuable input to future planning.
Keywords: hospital site suitability, analytic hierarchy process, multi-attribute utility theory, entropy, criteria importance through intercriteria correlation, multi-criteria decision analysis
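Because the abstract relies on AHP weights derived from a Saaty-scale pairwise comparison matrix, a short generic sketch of that calculation (principal-eigenvector weights plus the consistency-ratio check) may be useful. The 3x3 example matrix below is invented for illustration and does not reproduce the study's six-criteria comparisons.

```python
import numpy as np

SAATY_RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Principal-eigenvector weights and consistency ratio for a pairwise matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    principal = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, principal].real)
    weights /= weights.sum()
    lambda_max = eigvals[principal].real
    ci = (lambda_max - n) / (n - 1)                  # consistency index
    cr = ci / SAATY_RI[n] if SAATY_RI[n] else 0.0    # consistency ratio
    return weights, cr

# Hypothetical 3-criteria comparison on the Saaty scale, for illustration only
matrix = [[1,   3,   5],
          [1/3, 1,   2],
          [1/5, 1/2, 1]]
w, cr = ahp_weights(matrix)
print("weights:", np.round(w, 3), "CR:", round(cr, 3))   # CR < 0.1 is usually deemed acceptable
```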
Procedia PDF Downloads 67
725 Petrogenesis of the Neoproterozoic Rocks of Megele Area, Asosa, Western Ethiopia
Authors: Temesgen Oljira, Olugbenga Akindeji Okunlola, Akinade Shadrach Olatunji, Dereje Ayalew, Bekele Ayele Bedada
Abstract:
The Western Ethiopian Shield (WES) is underlain by volcano-sedimentary terranes, gneissic terranes, and ophiolitic rocks intruded by different granitoid bodies. For the past few years, the Neoproterozoic rocks of the Megele area in the western part of the WES have been explored. Understanding the geology of the area and assessing the mineralized area's economic potential require petrological, geochemical, and geological characterization of the Neoproterozoic granitoids and associated metavolcanic rocks. Thus, the geological, geochemical, and petrogenetic features of the Neoproterozoic granitoids and associated metavolcanic rocks were elucidated using a combination of field mapping and petrological and geochemical study. The Megele area is part of a low-grade volcano-sedimentary zone that has been intruded by mafic (dolerite dyke) and granitoid (granodiorite, diorite, granite gneiss) intrusions. The granodiorite, associated diorite, and granite gneiss are calc-alkaline, peraluminous to slightly metaluminous, S-type granitoids formed in a volcanic arc (VAG) to syn-collisional (syn-COLG) tectonic setting by fractionation of LREE-enriched, HREE-depleted basaltic magma with considerable crustal input. The metabasalt, in contrast, is a sub-alkaline (tholeiitic), metaluminous body generated in a mid-oceanic ridge tectonic setting by partial melting of HREE-depleted, LREE-enriched basaltic magma. The reworking of sediment-loaded crustal blocks at depth in a subduction zone resulted in the production of the S-type granitoids. The basaltic magma was supplied from an LREE-enriched, HREE-depleted mantle.
Keywords: fractional crystallization, geochemistry, Megele, petrogenesis, S-type granite
Procedia PDF Downloads 129
724 E-learning resources for radiology training: Is an ideal program available?
Authors: Eric Fang, Robert Chen, Ghim Song Chia, Bien Soo Tan
Abstract:
Objective and Rationale: Training of radiology residents hinges on practical, on-the-job training in all facets and modalities of diagnostic radiology. Although residency is structured to be comprehensive, clinical exposure depends on the case mix available locally and during the posting period. To supplement clinical training, there are several e-learning resources available to allow for greater exposure to radiological cases. The objective of this study was to survey residents and faculty on the usefulness of these e-learning resources. Methods: E-learning resources were shortlisted with input from radiology residents, Google searches and online discussion groups, and screened by their purported focus. Twelve e-learning resources were found to meet the criteria. Both radiology residents and experienced radiology faculty were then surveyed electronically. The e-survey asked for ratings on breadth, depth, testing capability and user-friendliness for each resource, as well as for rankings of the top 3 resources. Statistical analysis was performed using SAS 9.4. Results: Seventeen residents and fifteen faculty members completed the e-survey. The mean response rate was 54% ± 8% (range: 14-96%). Ratings and rankings were statistically identical between residents and faculty. On a 5-point rating scale, breadth was 3.68 ± 0.18, depth was 3.95 ± 0.14, testing capability was 2.64 ± 0.16 and user-friendliness was 3.39 ± 0.13. The top-ranked resources were STATdx (first), Radiopaedia (second) and Radiology Assistant (third). 9% of respondents singled out R-ITI as potentially good but 'prohibitively costly'. Statistically significant predictive factors for higher rankings were familiarity with the resource (p = 0.001) and user-friendliness (p = 0.006). Conclusion: A good e-learning system will complement on-the-job training with a broad case base, deep discussion and quality trainee evaluation. Based on our study of twelve e-learning resources, no single program fulfilled all requirements. The perception and use of radiology e-learning resources depended more on familiarity and user-friendliness than on content differences and testing capability.
Keywords: e-learning, medicine, radiology, survey
Procedia PDF Downloads 333
723 Review of Student-Staff Agreements in Higher Education: Creating a Framework
Authors: Luke Power, Paul O'Leary
Abstract:
Research has long described the enhancement of student engagement as a fundamental aim in delivering a consistent, lifelong benefit to student success across the multitude of dimensions a quality HE (higher education) experience offers. Engagement may take many forms, with universities and institutes across the world attempting to define the parameters which constitute a successful student engagement framework and implementation strategy. These efforts broadly include empowering students, encouraging involvement, and the transfer of decision-making power through a variety of methods, with the goal of obtaining a meaningful partnership between students and staff. As the Republic of Ireland continues to observe an increasing population transferring directly from secondary education to HE institutions, it falls on these institutions to research and develop effective strategies which ensure the growing student population has every opportunity to engage with their education, research community, and staff. This research systematically reviews SPAs (student partnership agreements) which are currently in the process of being defined and/or have been adopted at HE institutions worldwide. Despite the demonstrated importance of a student-staff partnership to the overall student engagement experience, there is no obvious framework or model by which to begin this process. This work will therefore provide a novel analysis of student-staff agreements which focuses on examining the factors of success common to each, and builds towards a workable and applicable framework using critical review and analysis of the key words, phraseology, student involvement, and broadly applicable HE traits and values. Following the analysis, this work proposes an SPA 'toolkit' with input from key stakeholders such as students, staff, faculty, and alumni. The resulting implications for future research and the lessons learned from the development and implementation of the SPA will aid the systematic implementation of student-staff agreements in Ireland and beyond.
Keywords: student engagement, student partnership agreements, student-staff partnerships, higher education, systematic review, democratising students, empowering students, student unions
Procedia PDF Downloads 181
722 Evaluating Generative Neural Attention Weights-Based Chatbot on Customer Support Twitter Dataset
Authors: Sinarwati Mohamad Suhaili, Naomie Salim, Mohamad Nazim Jambli
Abstract:
Sequence-to-sequence (seq2seq) models augmented with attention mechanisms are playing an increasingly important role in automated customer service. These models, which are able to recognize complex relationships between input and output sequences, are crucial for optimizing chatbot responses. Central to these mechanisms are neural attention weights that determine the focus of the model during sequence generation. Despite their widespread use, there remains a gap in the comparative analysis of different attention weighting functions within seq2seq models, particularly in the domain of chatbots using the Customer Support Twitter (CST) dataset. This study addresses this gap by evaluating four distinct attention-scoring functions (dot, multiplicative/general, additive, and an extended multiplicative function with a tanh activation parameter) in neural generative seq2seq models. Utilizing the CST dataset, these models were trained and evaluated over 10 epochs with the AdamW optimizer. Evaluation criteria included validation loss and BLEU scores implemented under both greedy and beam search strategies with a beam size of k=3. Results indicate that the model with the tanh-augmented multiplicative function significantly outperforms its counterparts, achieving the lowest validation loss (1.136484) and the highest BLEU scores (0.438926 under greedy search, 0.443000 under beam search, k=3). These results emphasize the crucial influence of selecting an appropriate attention-scoring function on improving the performance of seq2seq models for chatbots. In particular, the model that integrates tanh activation proves to be a promising approach to improving the quality of chatbots in the customer support context.
Keywords: attention weight, chatbot, encoder-decoder, neural generative attention, score function, sequence-to-sequence
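The four attention-scoring functions named above can be written compactly; the sketch below gives one plausible NumPy formulation. The exact form of the study's 'extended multiplicative function with a tanh activation parameter' is not spelled out in the abstract, so the tanh-wrapped general score shown here is an assumption, as are the vector dimensions and random stand-in weights.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                     # hidden size (assumed)
h_dec = rng.standard_normal(d)            # decoder state at the current step
H_enc = rng.standard_normal((5, d))       # encoder states (source length 5)
W = rng.standard_normal((d, d))           # in a real model these come from training
W1, W2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
v = rng.standard_normal(d)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = {
    "dot": H_enc @ h_dec,                                     # dot product
    "general": H_enc @ (W @ h_dec),                           # multiplicative/general
    "additive": np.tanh(H_enc @ W1.T + h_dec @ W2.T) @ v,     # Bahdanau-style additive
    "general_tanh": np.tanh(H_enc @ (W @ h_dec)),             # assumed tanh extension
}
for name, s in scores.items():
    print(name, np.round(softmax(s), 3))   # attention weights over the 5 encoder states
```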
Procedia PDF Downloads 78
721 A Hybrid Model of Structural Equation Modelling-Artificial Neural Networks: Prediction of Influential Factors on Eating Behaviors
Authors: Maryam Kheirollahpour, Mahmoud Danaee, Amir Faisal Merican, Asma Ahmad Shariff
Abstract:
Background: The presence of nonlinearity among the risk factors of eating behavior causes bias in prediction models. The importance of accurately estimating eating behavior risk factors in the primary prevention of obesity has been established. Objective: The aim of this study was to explore the potential of a hybrid model of structural equation modeling (SEM) and Artificial Neural Networks (ANN) to predict eating behaviors. Methods: Partial Least Squares SEM (PLS-SEM) and a hybrid model (SEM-Artificial Neural Networks (SEM-ANN)) were applied to evaluate the factors affecting eating behavior patterns among university students. A total of 340 university students participated in this study. The PLS-SEM analysis was used to check the effect of the emotional eating scale (EES), body shape concern (BSC), and body appreciation scale (BAS) on different categories of eating behavior patterns (EBP). Then, the hybrid model was conducted using a multilayer perceptron (MLP) with a feedforward network topology. Moreover, Levenberg-Marquardt, which is a supervised learning algorithm, was applied as the learning method for MLP training. The tangent/sigmoid function was used for the input layer, while the linear function was applied for the output layer. The coefficient of determination (R²) and the mean square error (MSE) were calculated. Results: The hybrid model proved superior to the PLS-SEM method. Using the hybrid model, the optimal network was found at MLP 3-17-8; the R² of the model increased by 27%, while the MSE decreased by 9.6%. Moreover, it was found which of these factors significantly affected healthy and unhealthy eating behavior patterns. The p-value was reported to be less than 0.01 for most of the paths. Conclusion/Importance: Thus, a hybrid approach could be suggested as a significant methodological contribution from a statistical standpoint, and it can be implemented as software to produce predictions with high accuracy.
Keywords: hybrid model, structural equation modeling, artificial neural networks, eating behavior patterns
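As a rough sketch of the second (ANN) stage, the code below fits a multilayer perceptron with one hidden layer of 17 units on made-up latent-variable scores, loosely mirroring the reported 3-17-8 topology. scikit-learn does not offer Levenberg-Marquardt training, so the L-BFGS solver is used here instead; the synthetic data, activation placement, and solver are therefore assumptions rather than the study's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
n = 340                                  # sample size reported in the abstract
X = rng.normal(size=(n, 3))              # stand-ins for EES, BSC, BAS scores
true_W = rng.normal(size=(3, 8))
Y = np.tanh(X @ true_W) + 0.1 * rng.normal(size=(n, 8))   # 8 synthetic EBP outcomes

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(17,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
mlp.fit(X_tr, Y_tr)                      # 3 inputs -> 17 hidden units -> 8 outputs
pred = mlp.predict(X_te)
print("R2:", round(r2_score(Y_te, pred), 3),
      "MSE:", round(mean_squared_error(Y_te, pred), 4))
```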
Procedia PDF Downloads 156
720 Investigation of Light Transmission Characteristics and CO2 Capture Potential of Microalgae Panel Bioreactors for Building Façade Applications
Authors: E. S. Umdu, Ilker Kahraman, Nurdan Yildirim, Levent Bilir
Abstract:
Algae culture offers new applications in sustainable architecture, with its continuous productive cycle and a potential for high carbon dioxide capture. Microalgae itself has multiple functions, such as carbon dioxide fixation, biomass production, oxygen generation and wastewater treatment. Incorporating microalgae cultivation processes and systems into building design to utilize this potential is promising. Microalgae cultivation systems, especially closed photobioreactors, can be implemented as components in buildings, and these systems can be accommodated in the façade of a building or in other urban infrastructure in the future. The application of microalgae bioreactors on a building's façade has the added benefit of acting as an effective insulation system, keeping out the heat of the summer and the chill of the winter. Furthermore, microalgae can give a dynamic appearance with a liquid façade that also works as an adaptive sunshade. Recently, the potential of microalgae as a building component to reduce net energy demand in buildings has become a popular topic, and innovative design proposals and a handful of pilot applications have appeared. Yet there are only a handful of examples in application and even less information on how these systems affect building energy behavior. Moreover, studies on microalgae have mostly focused on a single-application approach targeting either carbon dioxide utilization through biomass production or biofuel production. The main objective of this study is to investigate the effects of design parameters of microalgae panel bioreactors on light transmission characteristics and CO2 capture potential during the growth of Nannochloropsis oculata sp. A maximum reduction of 18 ppm in the CO2 level of the input air, at a light transmission of 14.10%, was achieved during the experiments in 6-day growth cycles. Heat transfer behavior during these cycles was also inspected for possible façade applications.
Keywords: building façade, CO2 capture, light transmittance, microalgae
Procedia PDF Downloads 190
719 Multiscale Process Modeling Analysis for the Prediction of Composite Strength Allowables
Authors: Marianna Maiaru, Gregory M. Odegard
Abstract:
During the processing of high-performance thermoset polymer matrix composites, chemical reactions occur during elevated pressure and temperature cycles, causing the constituent monomers to crosslink and form a molecular network that can gradually sustain stress. As the crosslinking process progresses, the material naturally experiences a gradual shrinkage due to the increase in covalent bonds in the network. Once the cured composite completes the cure cycle and is brought to room temperature, the thermal expansion mismatch of the fibers and matrix causes additional residual stresses to form. These compounded residual stresses can compromise the reliability of the composite material and affect the composite strength. Composite process modeling is greatly complicated by the multiscale nature of the composite architecture. At the molecular level, the degree of cure controls the local shrinkage and thermal-mechanical properties of the thermoset. At the microscopic level, the local fiber architecture and packing affect the magnitudes and locations of residual stress concentrations. At the macroscopic level, the layup sequence controls the nature of crack initiation and propagation due to residual stresses. The goal of this research is to use molecular dynamics (MD) and finite element analysis (FEA) to predict the residual stresses in composite laminates and the corresponding effect on composite failure. MD is used to predict the polymer shrinkage and thermomechanical properties as a function of degree of cure. This information is used as input into FEA to predict the residual stresses at the microscopic level resulting from the complete cure process. Virtual testing is subsequently conducted to predict strength allowables. Experimental characterization is used to validate the modeling.
Keywords: molecular dynamics, finite element analysis, processing modeling, multiscale modeling
Procedia PDF Downloads 92
718 Landslide and Liquefaction Vulnerability Analysis Using Risk Assessment Analysis and Analytic Hierarchy Process Implication: Suitability of the New Capital of the Republic of Indonesia on Borneo Island
Authors: Rifaldy, Misbahudin, Khalid Rizky, Ricky Aryanto, M. Alfiyan Bagus, Fahri Septianto, Firman Najib Wibisana, Excobar Arman
Abstract:
Indonesia is a country with a high level of disaster risk because it lies on the Ring of Fire, and it contains several regions where three of the world's major tectonic plates meet. Disaster analysis must therefore always be carried out to assess the potential disasters that might occur, which in this research are landslides and liquefaction. This research was conducted to analyze areas that are vulnerable to landslide and liquefaction hazards and their relationship to the assessment of moving the new capital of the Republic of Indonesia to the island of Kalimantan, with a total area of 612,267.22 km². The analysis uses the Analytic Hierarchy Process and consistency ratio testing to break a complex and unstructured problem into several parameters by assigning values to them. The parameters used in this analysis are slope, land cover, lithology distribution, wetness index, earthquake data, and peak ground acceleration. A weighted overlay of all these parameters was carried out using the percentage weights obtained from the Analytic Hierarchy Process, with accuracy confirmed by a consistency ratio, so that the percentage of area in each vulnerability class was obtained. Based on the analysis results, vulnerability classifications from high to very low were obtained: highly vulnerable 918.40083 km² (0.15%), medium 127,045.44815 km² (20.75%), low 346,175.886188 km² (56.54%), and very low 138,127.484832 km² (22.56%). This research is expected to map landslide and liquefaction disasters on the island of Kalimantan and to provide input on the suitability of regional development for the new capital of the Republic of Indonesia. It is also expected to provide input for, or be applicable to, all regions analyzing landslide and liquefaction vulnerability or the suitability of the development of certain regions.
Keywords: analytic hierarchy process, Borneo Island, landslide and liquefaction, vulnerability analysis
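The weighted-overlay step can be illustrated independently of any GIS package: each normalized parameter raster is multiplied by its AHP weight, the layers are summed, and the result is sliced into vulnerability classes. The tiny rasters, weights, and class breaks below are invented for demonstration and are not the study's values.

```python
import numpy as np

# Hypothetical 4x4 rasters, each already normalized to a 0-1 hazard scale
rng = np.random.default_rng(7)
layers = {name: rng.random((4, 4)) for name in
          ["slope", "land_cover", "lithology", "wetness", "earthquake", "pga"]}

# Hypothetical AHP-derived weights (must sum to 1)
weights = {"slope": 0.25, "land_cover": 0.10, "lithology": 0.20,
           "wetness": 0.10, "earthquake": 0.15, "pga": 0.20}

overlay = sum(weights[name] * raster for name, raster in layers.items())

# Classify the overlay into four vulnerability classes with assumed breaks
classes = np.digitize(overlay, bins=[0.35, 0.50, 0.65])   # 0 = very low ... 3 = high
labels = np.array(["very low", "low", "medium", "high"])
for label in labels:
    share = np.mean(labels[classes] == label) * 100
    print(f"{label}: {share:.1f}% of cells")
```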
Procedia PDF Downloads 176
717 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends
Authors: Zheng Yuxun
Abstract:
This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to the adoption of advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices and underscore the necessity for their precise identification. The narrative then turns to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocity. Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.
Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis
Procedia PDF Downloads 51
716 VeriFy: A Solution to Implement Autonomy Safely and According to the Rules
Authors: Michael Naderhirn, Marco Pavone
Abstract:
Problem statement, motivation, and aim of work: So far, the development of control algorithms has been done by control engineers in such a way that the controller is shown to fit a specification by testing. When it comes to the certification of an autonomous car in highly complex scenarios, the challenge is much greater, since such a controller must mathematically guarantee that it implements the rules of the road while, on the other side, guaranteeing aspects like safety and real-time executability. What if it becomes reality to solve this demanding problem by combining Formal Verification and System Theory? The aim of this work is to present a workflow to solve the above-mentioned problem. Summary of the presented results / main outcomes: We show the usage of an English-like language to transform the rules of the road into system specifications for an autonomous car. The language-based specifications are used to define system functions and interfaces. Based on these, a formal model is developed which correctly models the specifications. On the other side, a mathematical model describing the system dynamics is used to calculate the system's reachability set, which is further used to determine the system input boundaries. Then a motion planning algorithm is applied inside the system boundaries to find an optimized trajectory, in combination with the formal specification model, while satisfying the specifications. The result is a control strategy which can be applied in real time, independent of the scenario, with a mathematical guarantee to satisfy a predefined specification. We demonstrate the applicability of the method in simulated driving scenarios and a potential certification. Originality, significance, and benefit: To the authors' best knowledge, it is the first time that it is possible to show an automated workflow which combines a specification in an English-like language and a mathematical model in a formally verified way to synthesize a controller for potential real-time applications like autonomous driving.
Keywords: formal system verification, reachability, real time controller, hybrid system
Procedia PDF Downloads 241
715 The Use of TRIZ to Map the Evolutive Pattern of Products
Authors: Fernando C. Labouriau, Ricardo M. Naveiro
Abstract:
This paper presents a model for mapping the evolutive pattern of products in order to generate new ideas, perceive emerging technologies and manage product portfolios in new product development (NPD). According to the proposed model, the information extracted from the patent system is filtered and analyzed with TRIZ tools to produce the input information for the NPD process. The authors acknowledge that the NPD process is well integrated within enterprises' strategic business planning and that new products are vital in today's competitive market. On the other hand, the proactive use of patent information has been observed in some methodologies for selecting projects, mapping technological change and generating product concepts. One of these methodologies is TRIZ, a theory created to favor innovation and to improve product design, which provided the analytical framework for the model. Initially, an introduction to TRIZ is presented, mainly focused on the patterns of evolution of technical systems and their strategic uses; this is a brief and by no means comprehensive description, as the theory has several other tools that are widely employed in technical and business applications. Then, the model for mapping the product evolutive pattern is introduced, with its three basic pillars, namely patent information, TRIZ and NPD, together with the methodology for implementation. Next, a case study of a Brazilian bicycle manufacturer is presented, in which the mapping of a product evolutive pattern proceeds by decomposing and analyzing one of its assemblies along ten evolution lines in order to envision opportunities for further product development. Some of these lines are illustrated in more detail to evaluate the features of the product in relation to the TRIZ concepts, using a comparative perspective with state-of-the-art patents to validate the product's evolutionary potential. As a result, the case study provided several opportunities for a product improvement development program in different project categories, identifying technical and business impacts as well as indicating the lines of evolution that can most benefit from each opportunity.
Keywords: product development, patents, product strategy, systems evolution
Procedia PDF Downloads 500
714 The Role of Women in Climate Change Impact in Kupang-Indonesia
Authors: Rolland Epafras Fanggidae
Abstract:
The impacts of climate change, such as natural disasters, crop failures, increasing crop pests, malnutrition in children and other effects, will indirectly affect education, health, food security, as well as the economy. The impact of climate change has put people in a situation of vulnerability, powerless to meet minimum requirements, a situation closely linked to poverty. When talking about poverty, those most affected are women. The role of women in Indonesia, particularly in East Nusa Tenggara, in domestic activity is very central and dominant. Indonesian women can thus be called 'outstanding actors in climate change mitigation and adaptation and in applying local knowledge', yet they are still ignored when, based on the gender division of work, women are entrusted with roles in domestic activities. Similarly, their public activity is an extension of the domestic role, for example, trading activity in the market ('mama lele'). Although men are also affected by climate change, those who feel it most are women. From the above problems, it can be said that Indonesia's commitment has not been followed by optimal empowerment of women's roles in addressing climate change. It is therefore necessary to understand the role of women in facing climate change impacts that affect their roles as women, housewives or heads of families, and this will be input for determining how women find solutions to tackle the problem of climate change. This study focuses on the efforts made by women to cope with the impacts of climate change, the efforts made by the government, and the empowerment model used in responding to the impact of climate change, framed under the title 'The Role of Women in Climate Change Impact in Kupang District'. The assessment uses a mixed-methods research design, combining quantitative and qualitative research. The research was conducted in Kupang Regency, East Nusa Tenggara, namely in East Kupang District, which is the granary of Kupang Regency, and in West Kupang Subdistrict, especially Tablolong Village, which is the center of seaweed cultivation in Kupang Regency.
Keywords: climate change, women, women's roles, gender, family
Procedia PDF Downloads 293713 A Qualitative Study of Parents' Recommendations for Improving the Notification Process and Communication between Health Professionals and Families for New Diagnosis of Cystic Fibrosis
Authors: Mohammad S. Razai, Jan Williams, Rachel Nestel, Dermot Dalton
Abstract:
Purpose: This descriptive qualitative study aimed to obtain parents' recommendations for improving the notification process and communication of positive newborn screening (NBS) results for cystic fibrosis (CF). Methods: Thematic analysis of semi-structured, open-ended interviews with 11 parents of 7 children, aged between 2 months and 2 years, with a confirmed diagnosis of CF. Results: Parents preferred face-to-face disclosure of positive NBS results by a pediatrician with specialist CF qualifications. They trusted a pediatrician more than any other professional to provide accurate, credible and comprehensive information about the diagnosis and its implications. Parents recommended that health professionals be knowledgeable and provide clear, succinct and understandable information. Providers should also explore parents' concerns and acknowledge their feelings and emotions. Most parents reported that they preferred to be notified as soon as the results were available; several preferred to be told only once the diagnosis was certain. Most parents regarded open access to the CF team as the most significant part of care coordination. In addition to health professionals, most parents used the internet as an important source of information, interaction and exchange of experiences. Most parents also used social networking sites, such as Facebook groups, and smartphone apps. Conclusion: This study provides significant new evidence from the parental perspective, emphasizing the pivotal role of good communication skills deployed in person by a knowledgeable CF specialist. Parents' use of social media and the internet has replaced some traditional methods of information exchange and may reduce the need for professional input for newly diagnosed CF patients.Keywords: care coordination, cystic fibrosis, newborn screening, notification process, parental preferences, professional-parent communication
Procedia PDF Downloads 398712 Assessing the Benefits of Super Depo Sutorejo as a Model of Integration of Waste Pickers in a Sustainable City Waste Management
Authors: Yohanes Kambaru Windi, Loetfia Dwi Rahariyani, Dyah Wijayanti, Eko Rustamaji
Abstract:
Surabaya, the second largest city in Indonesia, has been struggling for years with waste production and its management. Nearly 11,000 tons of waste are generated daily by domestic, commercial and industrial areas. It is estimated that approximately 1,300 tons of waste entered the Benowo Landfill daily in 2013, and the landfill's operation is projected to become critical in 2015. The Super Depo Sutorejo (SDS) is a pilot project on waste management launched by the government of Surabaya in March 2013. The project aims to reduce the amount of waste dumped in the landfill by employing waste pickers to sort recyclable and organic waste for composting before the remainder is transported to the landfill. This study assesses the capacity of SDS to process and reduce waste, together with its complementary benefits. It also reviews the benefits of the project for the waste pickers in terms of job satisfaction. Waste processing data sheets were used to assess the difference between input and output waste. A survey was distributed to 30 waste pickers, and interviews were conducted to gain further insight into particular issues. The analysis showed that SDS reduces waste by up to 50% before it is dumped in the final disposal area. The cost-benefit analysis, using a cost differential calculation, revealed that the direct economic benefit is considerably low, but composting may provide tangible benefits for maintaining the city's parks. Waste pickers are mostly satisfied with their jobs (i.e., salary, health coverage, job security) and with the services and facilities available at SDS, and they enjoy a rewarding social life within the project. It is concluded that SDS is an effective and efficient model for sustainable waste management that can reliably be developed in developing countries. It is a strategic approach to empower the urban poor, open up working opportunities for them and prolong the operation of landfills.Keywords: cost-benefits, integration, satisfaction, waste management
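To make the cost differential calculation mentioned in the abstract above concrete, the sketch below shows one way such a comparison could be set up. It is an illustrative sketch only, not part of the SDS evaluation: the function name and every figure are hypothetical placeholders.

```python
# Illustrative sketch only -- all figures are hypothetical placeholders, not
# data from the SDS evaluation. The cost differential compares the cost of
# handling waste with the project against the cost without it; the value of
# compost used for city parks is counted as an additional benefit.
def cost_differential(cost_without, cost_with, compost_benefit=0.0):
    """Net benefit of the project: avoided cost plus compost benefit."""
    return (cost_without - cost_with) + compost_benefit

# Hypothetical monthly figures (currency units arbitrary):
baseline_haulage = 120_000    # hauling all waste directly to the landfill
project_operation = 115_000   # depot wages plus transport of residual waste
compost_value = 8_000         # fertilizer substituted for park maintenance
print(cost_differential(baseline_haulage, project_operation, compost_value))  # 13000
```

Under these assumed numbers the direct saving is small and most of the net benefit comes from the compost, which mirrors the abstract's observation that the economic benefit is low while composting supplies the more tangible gain.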
Procedia PDF Downloads 476711 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts to generate successful deals in trading financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) addresses these drawbacks of SL approaches by combining the 'prediction' step for financial asset prices and the 'allocation' step for the portfolio into one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio of several assets instead of a single one. This work presents a novel DRL model that generates profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, or what is referred to as the agent environment, as a Partially Observed Markov Decision Process (POMDP), considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. From the point of view of stock market forecasting and intelligent decision-making, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and demonstrates its credibility and advantages for strategic decision-making.Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent
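As a rough illustration of the formulation described in the abstract above, the sketch below shows one way such a POMDP-style trading environment could look: the observation concatenates prices, technical indicators, news-sentiment scores and the current portfolio weights; a continuous action re-allocates the weights; and the reward is the portfolio return net of transaction costs. This is not the authors' code; the class, parameter and variable names are assumptions made for illustration only.

```python
# Minimal sketch of a continuous-action trading environment (illustrative only).
import numpy as np

class TradingEnv:
    """Toy environment: observations are prices + indicators + sentiment + weights."""

    def __init__(self, prices, indicators, sentiment, cash=1_000.0, cost=0.001):
        self.prices = prices          # (T, n_assets) close prices
        self.indicators = indicators  # (T, n_assets, n_ind) technical indicators
        self.sentiment = sentiment    # (T, n_assets) news sentiment scores
        self.cash, self.cost = cash, cost

    def reset(self):
        self.t = 0
        self.value = self.cash
        self.weights = np.zeros(self.prices.shape[1])  # start fully in cash
        return self._obs()

    def _obs(self):
        # Partial observation of the market state at time t
        return np.concatenate([
            self.prices[self.t],
            self.indicators[self.t].ravel(),
            self.sentiment[self.t],
            self.weights,
        ])

    def step(self, action):
        # Continuous action per asset, mapped to normalized target weights
        target = np.clip(action, -1.0, 1.0)
        target = target / (np.abs(target).sum() + 1e-8)
        turnover = np.abs(target - self.weights).sum()
        self.weights = target

        # Reward: next-period portfolio return minus transaction costs
        ret = self.prices[self.t + 1] / self.prices[self.t] - 1.0
        reward = float(self.weights @ ret) - self.cost * turnover
        self.value *= (1.0 + reward)
        self.t += 1
        done = self.t >= len(self.prices) - 1
        return self._obs(), reward, done, {"portfolio_value": self.value}
```

In practice such an environment would typically be wrapped in a Gym-compatible interface and trained with an off-the-shelf TD3 implementation; the reward shaping and normalization here are deliberately simplified.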
Procedia PDF Downloads 178710 Destruction of Coastal Wetlands in Harper City-Liberia: Setting Nature against the Future Society
Authors: Richard Adu Antwako
Abstract:
Coastal wetland destruction and its consequences have recently taken center stage in global discussions. The phenomenon is no gray area to humanity, as coastal wetland-human interaction has been ingrained since the earliest civilizations, amid the demanding use of wetland resources to meet human necessities. The severity of coastal wetland destruction parallels the growth of civilizations, and it is against this backdrop that this paper interrogates the causes of coastal wetland destruction in Harper City in Liberia, compares the degree of coastal wetland stressors against a non-equilibrium thermodynamic scale, and suggests integrated coastal zone management to address the problems. The literature complemented primary data gleaned via global positioning system devices, field observation, a questionnaire, and interviews. Multi-sampling techniques were used to generate data from sand miners, institutional heads, fisherfolk, community-based groups, and other stakeholders. Non-equilibrium thermodynamic theory remains valuable for discerning ecological stability, and it is employed here to further understand coastal wetland destruction in Harper City, Liberia, and to measure coastal wetland stresses in terms of amplitude and elasticity. Non-equilibrium thermodynamics postulates that coastal wetlands are capable of assimilating resources (inputs) as well as discharging products (outputs); however, when the input-output relationship stretches far beyond the thresholds of the coastal wetlands, it leads to wetland disequilibrium. Findings revealed that sand mining, mangrove removal, and crude dumping have transformed the coastal wetlands, resulting in water pollution, flooding, habitat loss and disfigured beaches in Harper City. This paper demonstrates that the coastal wetlands are being converted into development projects and agricultural fields, thus setting nature against the future society.Keywords: amplitude, crude dumping, elasticity, non-equilibrium thermodynamics, wetland destruction
Procedia PDF Downloads 141709 E-Learning Platform for School Kids
Authors: Gihan Thilakarathna, Fernando Ishara, Rathnayake Yasith, Bandara A. M. R. Y.
Abstract:
E-learning is a crucial component of intelligent education, and even in the midst of a pandemic it is becoming increasingly important in the educational system. Several e-learning programs are accessible to students; here, we decided to create an e-learning framework for children. We have identified a few issues that teachers face with their online classes. When there are numerous students in an online classroom, how does a teacher recognize a student's focus on academics and their below-the-surface behaviors? Some kids are not paying attention in class, and others are napping; the teacher is unable to keep track of each and every student. A key challenge in e-learning is online exams, because students can cheat easily during them, so exam proctoring is needed. Here, we propose an automated online exam cheating detection method that uses a web camera. The purpose of this project is to present an e-learning platform for math education that includes games for kids as an alternative teaching method for math students. The games will be accessible via a web browser, with imagery drawn in a cartoonish style, helping students learn math through play. Everything in this day and age is moving towards automation; however, automatic answer evaluation is currently only available for MCQ-based questions, so the checker has a difficult time evaluating theory answers. The current system requires more manpower and takes a long time to evaluate responses, and it is possible for two identical responses to be marked differently and receive two different grades. This application therefore employs machine learning techniques to provide automatic evaluation of subjective responses based on keywords provided to the computer, with the student's answer as input, resulting in a fair distribution of marks. In addition, it will save time and manpower. We used deep learning, machine learning, image processing and natural language processing technologies to develop these research components.Keywords: math, education games, e-learning platform, artificial intelligence
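As a minimal sketch of the keyword-based scoring idea described in the abstract above (not the authors' implementation, which relies on machine learning and NLP models), the snippet below awards marks in proportion to how many grader-supplied keywords appear in a student's answer. The function name, keywords and sample answer are all hypothetical.

```python
# Illustrative keyword-coverage grader: marks are proportional to how many
# expected keywords occur in the student's answer. Names are hypothetical.
import re

def score_answer(student_answer: str, expected_keywords: list[str], max_marks: float = 10.0) -> float:
    """Return marks proportional to keyword coverage in the answer."""
    tokens = set(re.findall(r"[a-z]+", student_answer.lower()))
    hits = sum(1 for kw in expected_keywords if kw.lower() in tokens)
    return round(max_marks * hits / len(expected_keywords), 2)

# Hypothetical usage:
keywords = ["perimeter", "sum", "sides"]
answer = "The perimeter of a triangle is the sum of the lengths of its three sides."
print(score_answer(answer, keywords))  # -> 10.0 when all keywords are present
```

A production grader would replace the exact-token match with stemming or embedding-based similarity so that paraphrased answers are not penalized, but the sketch conveys how identical responses receive identical marks.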
Procedia PDF Downloads 156