Search results for: processing individual
6240 Phylogenetic Relationships between the Whole Sets of Individual Flow Sorted U, M, S and C Chromosomes of Aegilops and Wheat as Revealed by COS Markers
Authors: András Farkas, István Molnár, Jan Vrána, Veronika Burešová, Petr Cápal, András Cseh, Márta Molnár-Láng, Jaroslav Doležel
Abstract:
Species of Aegilops played a central role in the evolution of wheat and are sources of traits related to yield, quality, and tolerance against biotic and abiotic stresses. These wild genes and alleles are desirable for use in crop improvement programs via introgressive hybridization. However, the success of chromosome-mediated gene transfer to wheat is hampered by the poor knowledge of the genome structure of Aegilops relative to wheat and by the low number of cost-effective molecular markers specific for Aegilops chromosomes. COS markers are specific for genes conserved throughout evolution in both sequence and copy number between Triticeae/Aegilops taxa and define orthologous regions, thus enabling the comparison of regions on the chromosomes of related species. The present study compared wheat with individual chromosomes of Aegilops umbellulata (UU), Ae. comosa (MM), Ae. speltoides (SS) and Ae. caudata (CC), purified by fluorescent labelling with oligonucleotide SSR repeats and biparametric flow cytometry, by identifying orthologous chromosomal regions with COS markers. The linear order of bin-mapped COS markers along the wheat D chromosomes was identified by the use of chromosome-specific sequence data and virtual gene order. Syntenic regions of wheat identifying genome rearrangements differentiating the U, M, S or C genomes from the D genome of wheat were detected. The conserved orthologous set markers assigned to Aegilops chromosomes promise to accelerate gene introgression by facilitating the identification of alien chromatin. The syntenic relationships between the Aegilops species and wheat will facilitate the targeted development of new markers specific for U, M, S and C genomic regions and will contribute to the understanding of molecular processes related to the evolution of Aegilops.
Keywords: Aegilops, COS markers, flow sorting, wheat
Procedia PDF Downloads 502
6239 Descriptive Analysis of the Relationship between State and Civil Society in Hegel's Political Thought
Authors: Garineh Keshishyan Siraki
Abstract:
Civil society is one of the most important concepts of the twentieth century and beyond. Modern and postmodern thinkers have provided different definitions of civil society, and the concept has undergone many changes over time. The relationship between government and civil society is one of the relationships that has attracted the attention of many contemporary thinkers. Hegel, the thinker discussed in this article, also explores the relationship between these concepts; emphasizing the dialectical method, he draws distinctions among family, state, and civil society. In Hegel's view, the creation of civil society will lead to a reduction of social conflict and increased social cohesion. The importance of the issue lies in the study of social cohesion and the ways to increase it. This paper, which uses a descriptive-analytic method to examine Hegel's dialectical theory of civil society, after examining the relationship between the family and the state and identifying civil society as the interface and connecting circle between the two, investigates the tripartite economic, legal, and pluralistic systems. After examining the concepts of the market, right and duty, individual interests, and the development of the exchange economy, the article turns to Hegel's view of the concept of freedom and its relation to civil society. The results of this survey show that, in Hegel's thought, the separation between the political system and the social system is a natural and necessary thing. In Hegel's view, because individuals in society have selfish features, the community is in tension and contradiction. Therefore, the social realms within which conflicts emerge must be identified and controlled by specific mechanisms.
It can also be concluded that the government can act to reduce social conflicts by legislating, using force, or forming trade unions. The bottom line is that Hegel seeks a reconciliation between the individual, the state, and civil society, and that this cannot be achieved by relying on ethics alone.
Keywords: civil society, cohesion system, economic system, family, legal system, state
Procedia PDF Downloads 198
6238 Nurturing of Children with Results from Their Nature (DNA) Using DNA-MILE
Authors: Tan Lay Cheng (Cheryl), Low Huiqi
Abstract:
Background: All children learn at a different pace. Individualized learning is an approach that tailors teaching to the individual learning needs of each child. When implementing this approach, educators have to base their lessons on the understanding that all students learn differently and that what works for one student may not work for another. In the current early childhood environment, individualized learning is aimed at children with diverse needs; however, a typically developing child can also benefit from it. This research abstract explores the concept of utilizing DNA-MILE, a patented (in Singapore) DNA-based assessment tool that can be used to measure a variety of factors that can impact learning. The assessment report includes the dominant intelligence of the user or, in this case, the child. From the result, a personalized learning plan is created that is tailored to each individual student's needs. Methods: A study will be conducted to investigate the effectiveness of DNA-MILE in supporting individualized learning. The study will involve a group of 20 preschoolers randomly assigned to either a DNA-MILE-assessed group (experimental group) or a control group, with 10 children in each group. The experimental group will receive DNA-MILE assessments and personalized learning plans, while the control group will not. The children in the experimental group will be taught using their dominant intelligence (as shown in the DNA-MILE report) to enhance their learning in other domains. The children in the control group will be taught using the curriculum and lesson plan set by their teacher for the whole class. Parents' and teachers' interviews will be conducted to provide information about the children before and after the study.
Results: The results of the study will show whether the experimental group, which received DNA-MILE assessments and personalized learning plans, significantly outperformed the control group on a variety of measures, including standardized tests, grades, and motivation. Conclusion: The results of this study are expected to show that DNA-MILE can be an effective tool for supporting individualized learning. By providing personalized learning plans, DNA-MILE can help to improve learning outcomes for all students.
Keywords: individualized, DNA-MILE, learning, preschool, DNA, multiple intelligence
Procedia PDF Downloads 118
6237 The Mechanisms of Peer-Effects in Education: A Frame-Factor Analysis of Instruction
Authors: Pontus Backstrom
Abstract:
In the educational literature on peer effects, attention has been drawn to the fact that the mechanisms creating peer effects are still to a large extent hidden in obscurity. The hypothesis in this study is that the frame factor theory can be used to explain these mechanisms. At the heart of the theory is the concept of "time needed" for students to learn a certain curricular unit. The relation between class-aggregated time needed and the actual time available both steers and constrains the actions possible for the teacher. Further, the theory predicts that the timing and pacing of the teacher's instruction is governed by a "criterion steering group" (CSG), namely the pupils in the 10th-25th percentile of the aptitude distribution in class. The class composition thereby sets the possibilities and limitations for instruction, creating peer effects on individual outcomes. To test whether the theory can be applied to the issue of peer effects, the study employs multilevel structural equation modelling (M-SEM) on Swedish TIMSS 2015 data (Trends in International Mathematics and Science Study; students N=4090, teachers N=200). Using confirmatory factor analysis (CFA) in the SEM framework in MPLUS, latent variables are specified according to the theory, such as "limitations of instruction" from TIMSS survey items. The results indicate a good fit of the measurement model to the data. Research is still in progress, but preliminary results from initial M-SEM models verify a strong relation between the mean level of the CSG and the latent variable of limitations on instruction, a variable which in turn has a great impact on individual students' test results.
Further analysis is required, but so far the analysis indicates a confirmation of the predictions derived from the frame factor theory and reveals that one of the important mechanisms creating peer effects in student outcomes is the effect that class composition has upon the teacher's instruction in class.
Keywords: compositional effects, frame factor theory, peer effects, structural equation modelling
Procedia PDF Downloads 134
6236 A U-Net Based Architecture for Fast and Accurate Diagram Extraction
Authors: Revoti Prasad Bora, Saurabh Yadav, Nikita Katyal
Abstract:
In the context of educational data mining, the use case of extracting information from images containing both text and diagrams is of high importance. Document analysis therefore requires extracting the diagrams from such images and processing the text and diagrams separately. To the authors' best knowledge, none of the many approaches for extracting tables, figures, etc., meets the need for real-time processing with high accuracy as required in multiple applications. In the education domain, diagrams can be of varied characteristics, viz. line-based, i.e., geometric diagrams, chemical bonds, mathematical formulas, etc. There are two broad categories of approaches that try to solve similar problems: traditional computer vision based approaches and deep learning approaches. The traditional computer vision based approaches mainly leverage connected components and distance transform based processing and hence perform well only in very limited scenarios. The existing deep learning approaches leverage either YOLO or Faster R-CNN architectures, and these approaches suffer from a performance-accuracy tradeoff. This paper proposes a U-Net based architecture that formulates diagram extraction as a segmentation problem. The proposed method provides similar accuracy with a much faster extraction time compared to the mentioned state-of-the-art approaches. Further, the segmentation mask in this approach allows the extraction of diagrams of irregular shapes.
Keywords: computer vision, deep learning, educational data mining, Faster R-CNN, figure extraction, image segmentation, real-time document analysis, text extraction, U-Net, YOLO
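The last step named in the abstract, recovering individual (possibly irregularly shaped) diagrams from the predicted segmentation mask, comes down to connected-component analysis. The paper's post-processing code is not published; the following is a minimal pure-Python sketch of the idea, in which the mask format and function name are illustrative assumptions (a real pipeline would use a library routine such as OpenCV's connected-components functions):

```python
from collections import deque

def extract_diagram_boxes(mask):
    """Given a binary segmentation mask (list of lists of 0/1),
    return the bounding box (row0, col0, row1, col1) of each
    4-connected foreground region, found via BFS flood fill."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # BFS over one connected component, tracking its extents.
                q = deque([(r, c)])
                seen[r][c] = True
                r0, c0, r1, c1 = r, c, r, c
                while q:
                    y, x = q.popleft()
                    r0, r1 = min(r0, y), max(r1, y)
                    c0, c1 = min(c0, x), max(c1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((r0, c0, r1, c1))
    return boxes
```

Because the regions come from the mask rather than from box regression, non-rectangular diagrams are captured naturally; the bounding box here is only a convenience for cropping.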
Procedia PDF Downloads 137
6235 A Method to Predict the Thermo-Elastic Behavior of Laser-Integrated Machine Tools
Authors: C. Brecher, M. Fey, F. Du Bois-Reymond, S. Neus
Abstract:
Additive manufacturing has emerged as a fast-growing segment of the manufacturing technologies. Established machine tool manufacturers, such as DMG MORI, recently presented machine tools combining milling and laser welding. By this, machine tools can realize a higher degree of flexibility and a shorter production time. Still, there are challenges that have to be accounted for in terms of maintaining the necessary machining accuracy, especially due to thermal effects arising from the use of high-power laser processing units. To study the thermal behavior of laser-integrated machine tools, it is essential to analyze and simulate the thermal behavior of machine components, individually and assembled. This information will help to design a geometrically stable machine tool under the influence of high-power laser processes. This paper presents an approach to decrease the loss of machining precision due to thermal impacts. Real effects of laser machining processes are considered and thus enable an optimized design of the machine tool, respectively its components, in the early design phase. The core element of this approach is a matched FEM model considering all relevant variables, e.g., laser power, angle of the laser beam, reflective coefficients, and heat transfer coefficient. Hence, a systematic approach to obtaining this matched FEM model is essential. The two constituent aspects of the method are modelling the thermal behavior of the structural components and predicting the laser beam path in order to determine the relevant beam intensity on those components. To match the model, both aspects have to be combined and verified empirically. In this context, an essential machine component of a five-axis machine tool, the turn-swivel table, serves as the demonstration object for the verification process.
Therefore, a turn-swivel table test bench as well as an experimental set-up to measure the beam propagation were developed and are described in the paper. In addition to the empirical investigation, a simulative approach to the described types of experimental examination is presented. Concluding, it is shown that the method and a good understanding of its two core aspects, the thermo-elastic machine behavior and the laser beam path, as well as their combination, help designers to minimize the loss of precision in the early stages of the design phase.
Keywords: additive manufacturing, laser beam machining, machine tool, thermal effects
Procedia PDF Downloads 265
6234 The Importance of Visual Communication in Artificial Intelligence
Authors: Manjitsingh Rajput
Abstract:
Visual communication plays an important role in artificial intelligence (AI) because it enables machines to understand and interpret visual information, similar to how humans do. This abstract explores the importance of visual communication in AI across applications such as computer vision, object recognition, image classification, and autonomous systems, and covers the deep learning techniques and neural networks that underpin visual understanding. It also discusses challenges facing visual interfaces for AI, such as data scarcity, domain adaptation, and interpretability, and explores the integration of visual communication with other modalities, such as natural language processing and speech recognition. Overall, this abstract highlights the critical role that visual communication plays in advancing AI capabilities and enabling machines to perceive and understand the world around them. The methodology examines the importance of visual communication in AI development and implementation, highlighting its potential to enhance the effectiveness and accessibility of AI systems, and provides a comprehensive approach to integrating visual elements into AI systems, making them more user-friendly and efficient. In conclusion, visual communication is crucial in AI systems for object recognition, facial analysis, and augmented reality, but challenges like data quality, interpretability, and ethics must be addressed. Visual communication enhances user experience, decision-making, accessibility, and collaboration, and developers can integrate visual elements to build efficient and accessible AI systems.
Keywords: visual communication in AI, computer vision, visual aids in communication, essence of visual communication
Procedia PDF Downloads 95
6233 Effect of Hybrid Fibers on Mechanical Properties in Autoclaved Aerated Concrete
Authors: B. Vijay Antony Raj, Umarani Gunasekaran, R. Thiru Kumara Raja Vallaban
Abstract:
Fibrous autoclaved aerated concrete (FAAC) is concrete containing fibrous material, which helps to increase its structural integrity compared to that of conventional autoclaved aerated concrete (CAAC). These short discrete fibers are uniformly distributed and randomly oriented, which enhances the bond strength within the aerated concrete matrix. Conventional red-clay bricks create a larger impact on the environment due to red soil depletion, and they also consume a large amount of time during construction, whereas AAC blocks are larger in size, lighter in weight, and environmentally friendly in nature; hence AAC is a viable replacement for red-clay bricks. Internal micro-cracks and corner cracks are the only disadvantages of conventional autoclaved aerated concrete, and to resolve this particular issue it is preferable to make use of fibers. These fibers are bonded together within the matrix, and they enable the aerated concrete to withstand considerable stresses, especially during the post-cracking stage. Hence, FAAC has the capability of enhancing the mechanical properties and energy absorption capacity of CAAC. In this research work, individual fibers like glass, nylon, polyester, and polypropylene are used; they generally reduce the brittle fracture of AAC. To study the fibers' surface topography and composition, SEM analysis is performed; then, to determine the composition of a specimen as a whole as well as the composition of individual components, EDAX mapping is carried out. An experimental approach was then performed to determine the effect of hybrid (multiple) fibers at various dosages (0.5%, 1%, 1.5%), with a curing temperature of 180-200 °C maintained, to determine the mechanical properties of autoclaved aerated concrete. As an analytical part, the experimental results are compared with fuzzy logic using MATLAB.
Keywords: fibrous AAC, crack control, energy absorption, mechanical properties, SEM, EDAX, MATLAB
Procedia PDF Downloads 269
6232 An Exploration of Architecture Design Methods in Urban Fringe Belt Based on Typo-Morphological Research: A Case of Expansion Project of the Second Middle School in Xuancheng, China
Authors: Dong Yinan, Zhou Zijie
Abstract:
The urban fringe belt is an important part of urban morphology research. Different from the relatively fixed central district of a city, the position of the fringe belt changes. In the process of urban expansion, the original fringe belt is likely to be absorbed by the newly built city and may even become a new public center of the city. During this change, we face the dialectic between restoring the organicity of the old urban form and creating a new urban image. There is a lot of relevant research at the urban scale, but at the building scale rare design methods have been proposed, and thus some new individual buildings cannot match the overall urban planning intent. The expansion project of the second middle school in Xuancheng faces this situation. The existing campus is located in the south fringe belt of Xuancheng, Anhui province, China, adjacent to farmland and ponds. Based on the Xuancheng urban planning, the farmland and ponds will be transformed into a big lake, around which a new public center will be built; the expansion of the school becomes an important part of the boundary of this new public center. Therefore, the expansion project faces challenges at both the urban and building scales. At the urban scale, we analyze and summarize the fringe belt characteristics through a reading of the existing and future urban organism, in order to determine the form of the expansion project. Meanwhile, at the building scale, we study different types of school buildings and select an appropriate type that satisfies both the urban form and the school function. This research attempts to investigate design methods based on an under-construction project in Xuancheng, a historic city in southeast China. It also aims to bridge the gap from urban design to individual building design through typo-morphological research.
Keywords: design methods, urban fringe belt, typo-morphological research, middle school
Procedia PDF Downloads 506
6231 Uplift Segmentation Approach for Targeting Customers in a Churn Prediction Model
Authors: Shivahari Revathi Venkateswaran
Abstract:
Segmenting customers plays a significant role in churn prediction. It helps the marketing team with proactive and reactive customer retention. For reactive retention, the retention team reaches out to customers who have already shown intent to disconnect by giving them special offers. For proactive retention, the marketing team uses a churn prediction model, which ranks each customer from rank 1 to 100, where rank 1 indicates the highest risk of churning/disconnecting. The churn prediction model is built using an XGBoost model. However, with the churn rank alone, the marketing team can only reach out to customers based on their individual ranks. Profiling different groups of customers and framing different marketing strategies for targeted groups of customers are not possible with churn ranks alone. For this, the customers must be grouped into different segments based on their profiles, like demographics and other non-controllable attributes. This helps the marketing team frame different offer groups for the targeted audience and prevent them from disconnecting (proactive retention). For segmentation, machine learning approaches like k-means clustering will not form unique customer segments in which all customers share the same attributes. This paper presents an alternative approach that finds all combinations of unique segments that can be formed from the user attributes and then identifies the segments that show uplift (a churn rate higher than the baseline churn rate). For this, search algorithms like fast search and recursive search are used. Further, within each segment, all customers can be targeted using their individual churn ranks from the churn prediction model.
Finally, a UI (User Interface) is developed for the marketing team to interactively search for the meaningful segments that are formed and target the right set of audience for future marketing campaigns, preventing them from disconnecting.
Keywords: churn prediction modeling, XGBoost model, uplift segments, proactive marketing, search algorithms, retention, k-means clustering
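The paper does not publish its fast-search or recursive-search implementation; as an illustration only, the core idea of enumerating every attribute-value segment and keeping those with above-baseline churn can be sketched in a few lines of Python (the field names `region`, `plan`, and `churned` are hypothetical):

```python
from itertools import combinations

def find_uplift_segments(customers, attrs, min_size=2):
    """Enumerate every unique segment defined by a combination of
    attribute values and keep those whose churn rate exceeds the
    baseline churn rate of the full population.

    customers: list of dicts with profile fields and a 'churned' flag.
    attrs: non-controllable profile attributes to combine.
    """
    baseline = sum(c["churned"] for c in customers) / len(customers)
    segments = []
    # Try every subset of attributes (size 1 .. all) as a segment definition.
    for k in range(1, len(attrs) + 1):
        for combo in combinations(attrs, k):
            # Group customers by their values on the chosen attributes.
            groups = {}
            for c in customers:
                key = tuple(c[a] for a in combo)
                groups.setdefault(key, []).append(c)
            for key, members in groups.items():
                if len(members) < min_size:
                    continue  # skip segments too small to target
                rate = sum(m["churned"] for m in members) / len(members)
                if rate > baseline:  # uplift: churn above baseline
                    segments.append((dict(zip(combo, key)), rate))
    return baseline, segments
```

Exhaustive enumeration is exponential in the number of attributes, which is exactly why the authors resort to specialized search algorithms for realistic attribute counts; the sketch conveys only what is being searched for.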
Procedia PDF Downloads 71
6230 ALEF: An Enhanced Approach to Arabic-English Bilingual Translation
Authors: Abdul Muqsit Abbasi, Ibrahim Chhipa, Asad Anwer, Saad Farooq, Hassan Berry, Sonu Kumar, Sundar Ali, Muhammad Owais Mahmood, Areeb Ur Rehman, Bahram Baloch
Abstract:
Accurate translation between structurally diverse languages, such as Arabic and English, presents a critical challenge in natural language processing due to significant linguistic and cultural differences. This paper investigates the effectiveness of Facebook's mBART model, fine-tuned specifically for sequence-to-sequence (seq2seq) translation tasks between Arabic and English, and enhanced through advanced refinement techniques. Our approach leverages the Alef Dataset, a meticulously curated parallel corpus spanning various domains to capture the linguistic richness, nuances, and contextual accuracy essential for high-quality translation. We further refine the model's output using advanced language models such as GPT-3.5 and GPT-4, which improve fluency and coherence and correct grammatical errors in translated texts. The fine-tuned model demonstrates substantial improvements, achieving a BLEU score of 38.97, a METEOR score of 58.11, and a TER score of 56.33, surpassing widely used systems such as Google Translate. These results underscore the potential of mBART, combined with refinement strategies, to bridge the translation gap between Arabic and English, providing a reliable, context-aware machine translation solution that is robust across diverse linguistic contexts.
Keywords: natural language processing, machine translation, fine-tuning, Arabic-English translation, transformer models, seq2seq translation, translation evaluation metrics, cross-linguistic communication
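BLEU, the first of the metrics reported above, scores a candidate translation by clipped n-gram overlap with a reference, discounted by a brevity penalty. Real evaluations use tokenized corpora and standard tooling such as sacreBLEU rather than hand-rolled code; the simplified single-reference sketch below only illustrates how the number is formed:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with a single reference: geometric mean of
    clipped n-gram precisions (n = 1..max_n) times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if overlap == 0:
            return 0.0  # a zero precision zeroes the geometric mean
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

Published BLEU figures such as the 38.97 above are corpus-level scores on the 0-100 scale, aggregated over all sentences before taking the geometric mean, so this per-sentence sketch is not directly comparable to them.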
Procedia PDF Downloads 8
6229 X-Ray Diffraction, Microstructure, and Mössbauer Studies of Nanostructured Materials Obtained by High-Energy Ball Milling
Authors: N. Boudinar, A. Djekoun, A. Otmani, B. Bouzabata, J. M. Greneche
Abstract:
High-energy ball milling is a solid-state powder processing technique that allows synthesizing a variety of equilibrium and non-equilibrium alloy phases starting from elemental powders. The advantage of this process technology is that the powder can be produced in large quantities and the processing parameters can be easily controlled; thus it is a suitable method for commercial applications. It can also be used to produce amorphous and nanocrystalline materials in commercially relevant amounts and is amenable to the production of a variety of alloy compositions. Mechanical alloying (high-energy ball milling) provides an inter-dispersion of elements through repeated cold welding and fracture of free powder particles; the grain size decreases to the nanometric scale and the elements mix together. Progressively, the concentration gradients disappear and eventually the elements are mixed at the atomic scale. The end products depend on many parameters, such as the milling conditions and the thermodynamic properties of the milled system. Here, the mechanical alloying technique has been used to prepare nanocrystalline Fe-50 wt.% Ni and Fe-64 wt.% Ni alloys from powder mixtures. Scanning electron microscopy (SEM) with energy-dispersive X-ray analyses and Mössbauer spectroscopy were used to study the mixing at the nanometric scale. Mössbauer spectroscopy confirmed the ferromagnetic ordering and was used to calculate the distribution of the hyperfine field. The Mössbauer spectra for both alloys show the existence of a ferromagnetic phase attributed to a γ-Fe-Ni solid solution.
Keywords: nanocrystalline, mechanical alloying, X-ray diffraction, Mössbauer spectroscopy, phase transformations
Procedia PDF Downloads 437
6228 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy
Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu
Abstract:
The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on reliability analysis of onboard sensors to evaluate their location accuracy performance over time. The analysis utilizes field failure data and employs the Weibull distribution to determine the reliability and, in turn, to understand improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error, which is the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map, and identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is utilized to determine whether the data exhibit an infant stage or have transitioned into the operational phase; the shape parameter beta plays a crucial role in identifying this stage. Additionally, determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, which can significantly increase the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of sensors and to accurately ascertain the duration of the different phases in the lifetime and the time required for stabilization. This approach also helps in understanding whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data, and whether the thresholds for the infant period and wear-out phase are accurately estimated, by validating the data in individual phases with Weibull distribution curve-fitting analysis.
Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the reliability of individual sensors with regard to location accuracy over the required period of time, but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability during the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor performance with regard to location accuracy, contributing to enhanced accuracy in satellite-based applications.
Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis
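The Laplace test invoked above has a simple closed form: for n failure times observed on an interval (0, T], the statistic compares the mean failure time with the interval midpoint T/2, normalized so that it is approximately standard normal under a constant failure rate. A minimal sketch (illustrative only; the study's actual implementation is not published):

```python
import math

def laplace_trend_test(failure_times, observation_end):
    """Laplace test statistic U for failure times on (0, T].
    U well below 0 suggests a decreasing failure rate (infant mortality),
    U well above 0 an increasing rate (wear-out), and U near 0 a stable
    operational phase (homogeneous Poisson process)."""
    n = len(failure_times)
    t_bar = sum(failure_times) / n
    return (t_bar - observation_end / 2) / (observation_end / math.sqrt(12 * n))
```

Comparing U against normal quantiles (e.g. |U| > 1.96 at the 5% level) gives the phase-boundary decisions described in the abstract: an early cluster of failures drives U negative, flagging residual infant mortality to exclude before the Weibull fit of the operational phase.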
Procedia PDF Downloads 65
6227 Prophylactic Replacement of Voice Prosthesis: A Study to Predict Prosthesis Lifetime
Authors: Anne Heirman, Vincent van der Noort, Rob van Son, Marije Petersen, Lisette van der Molen, Gyorgy Halmos, Richard Dirven, Michiel van den Brekel
Abstract:
Objective: Voice prosthesis leakage significantly impacts laryngectomized patients' quality of life, causing insecurity and frequent unplanned hospital visits and costs. In this study, the concept of prophylactic voice prosthesis replacement was explored to prevent leakages. Study Design: A retrospective cohort study. Setting: Tertiary hospital. Methods: Device lifetimes and voice prosthesis replacements of a retrospective cohort, including all patients who underwent laryngectomy between 2000 and 2012 in the Netherlands Cancer Institute, were used to calculate the number of voice prostheses needed per patient per year if 70% of the leakages were prevented by prophylactic replacement. Various strategies for the timing of prophylactic replacement were considered: adaptive strategies based on the individual patient's history of replacement, and fixed strategies based on the results of patients with similar voice prosthesis or treatment characteristics. Results: Patients used a median of 3.4 voice prostheses per year (range 0.1-48.1). We found high inter- and intrapatient variability in device lifetime. When applying prophylactic replacement, this would become a median of 9.4 voice prostheses per year, meaning replacement every 38 days and implying more than six additional voice prostheses per patient per year. The individual adaptive model showed that preventing 70% of the leakages was impossible for most patients, and only a median of 25% could be prevented. Monte Carlo simulations showed that prophylactic replacement is not feasible due to the high coefficient of variation (standard deviation/mean) in device lifetime. Conclusion: Based on our simulations, prophylactic replacement of voice prostheses is not feasible due to high inter- and intrapatient variation in device lifetime.
Keywords: voice prosthesis, voice rehabilitation, total laryngectomy, prosthetic leakage, device lifetime
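The shape of the feasibility argument can be reproduced with a small Monte Carlo sketch: if device lifetimes follow a Weibull distribution, replacing at the 30th-percentile lifetime preempts 70% of leakages by construction, but the higher the lifetime variability, the shorter that percentile becomes, and the more prostheses are consumed per year. The distribution choice and parameters below are illustrative assumptions, not the study's fitted values:

```python
import math
import random

def replacement_interval(scale, shape, prevent_frac=0.7):
    """Interval (days) at which prophylactic replacement preempts
    `prevent_frac` of leakages under Weibull(shape, scale) lifetimes:
    the (1 - prevent_frac) quantile of the lifetime distribution."""
    p = 1.0 - prevent_frac
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

def simulate_prostheses_per_year(scale, shape, prevent_frac=0.7,
                                 trials=10000, seed=1):
    """Monte Carlo: fraction of simulated lifetimes preempted by the
    fixed schedule, and the implied number of prostheses per year."""
    rng = random.Random(seed)
    interval = replacement_interval(scale, shape, prevent_frac)
    # A leakage is prevented when the device would have outlived
    # the scheduled replacement date.
    preempted = sum(rng.weibullvariate(scale, shape) > interval
                    for _ in range(trials))
    return preempted / trials, 365.0 / interval
```

Lowering the shape parameter (i.e. raising the coefficient of variation while keeping the scale fixed) shrinks the replacement interval and inflates the yearly prosthesis count, which is the mechanism behind the study's negative conclusion.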
Procedia PDF Downloads 129
6226 Vehicle Speed Estimation Using Image Processing
Authors: Prodipta Bhowmik, Poulami Saha, Preety Mehra, Yogesh Soni, Triloki Nath Jha
Abstract:
In India, the smart city concept is growing day by day, and smart city development requires a better traffic management and monitoring system. Road accidents are increasing as more vehicles take to the roads, with reckless driving responsible for a large share of them, so an efficient traffic management system is required for all kinds of roads to control traffic speed. The speed limit varies from road to road. Radar systems have been used previously, but their high cost and limited precision have kept them from becoming favorable in traffic management, and how to solve the problems such systems face daily remains an active research topic. This paper proposes a computer vision and machine learning-based automated system for multiple vehicle detection, tracking, and speed estimation using image processing. Detecting vehicles and estimating their speed from real-time video is a challenging task. The objective of this paper is to detect vehicles and estimate their speed as accurately as possible. To this end, a real-time video is first captured, frames are extracted from the video, vehicles are detected in those frames, tracking of the vehicles begins, and finally the speed of the moving vehicles is estimated. The goal is a cost-friendly system that can detect multiple types of vehicles at the same time. Keywords: OpenCV, Haar Cascade classifier, DLIB, YOLOv3, centroid tracker, vehicle detection, vehicle tracking, vehicle speed estimation, computer vision
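The final step of the pipeline, converting a tracked centroid's frame-to-frame displacement into a speed, can be sketched as below. Detection and tracking (e.g., YOLOv3 plus a centroid tracker) are assumed to have already produced the centroids, and the frame rate and meters-per-pixel calibration factor are illustrative assumptions, not values from the paper.

```python
import math

def estimate_speed_kmh(prev_centroid, curr_centroid, fps, meters_per_pixel):
    """Convert a tracked vehicle's per-frame centroid displacement
    (in pixels) into km/h. meters_per_pixel must come from camera
    calibration; the values used below are illustrative only."""
    pixels_per_frame = math.dist(prev_centroid, curr_centroid)
    meters_per_second = pixels_per_frame * meters_per_pixel * fps
    return meters_per_second * 3.6  # m/s -> km/h

# e.g., an 8-pixel shift between consecutive frames at 25 fps, 0.05 m/pixel
speed = estimate_speed_kmh((100, 200), (108, 200), fps=25, meters_per_pixel=0.05)
```

In practice the displacement would be averaged over several frames to smooth out detection jitter before converting to km/h.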
Procedia PDF Downloads 84
6225 Teacher Education: Teacher Development and Support
Authors: Khadem Hichem
Abstract:
With the technological challenges and dynamics of the contemporary world, most teachers struggle to maintain an effective and successful teaching/learning environment for learners. As a key to the success of reforms in educational settings, teachers must improve their competencies to teach effectively. Many researchers emphasize the ongoing professional development of teachers: enhancing their experiences and encouraging their responsibility for learning, thus promoting self-reliance, collaboration, and reflection. In short, teachers are themselves learners, and they need to learn together. The educational system must support, both conceptually and financially, teachers' development as lifelong learners. Teachers need opportunities to grow in language proficiency and in knowledge. Given the changing nature of language and culture in the world, all teachers must have opportunities to update their knowledge and practices. Many researchers in the field of foreign or additional languages indicate that teachers must keep abreast of effective instructional practices and need special support with the challenging task of developing and administering proficiency tests to their students. For significant change to occur, each individual teacher's needs must be addressed. The teacher must be involved experientially in the process of development, since knowledge of how to change does not, by itself, mean change will be initiated. For improvement to occur, new skills have to be guided, practiced, and reflected upon in collaboration with colleagues. Clearly, teachers are at different places developmentally; therefore, allowances for various entry levels and individual differences need to be built into the professional development structure. Objectives must be meaningful to the participants, and teacher improvement must be stated in terms of student knowledge, student performance, and motivation.
The most successful professional development processes acknowledge the student-centered nature of good teaching. This paper highlights the importance of the teacher professional development process and institutional support as ways to enhance a good teaching and learning environment. Keywords: teacher professional development, teacher competencies, institutional support, teacher education
Procedia PDF Downloads 354
6224 Assessment of Quality of Life among Iranian Male Amateur Athletes via WHOQOL-Brief
Authors: Shirko Ahmadi, Ahmad Fallahi, Marco C. Uchida, Gustavo L. Gutierrez
Abstract:
The aims of the present study were to assess and compare the health habits and quality of life (QoL) of Iranian amateur athletes in different sports. A total of 120 male amateur athletes between 17 and 31 years of age participated, engaged in 16 kinds of sports comprising team (n=44), individual (n=40) and combat sports (n=36), recruited from sports clubs in the western cities of Iran; none had been involved in any competition in the past. This is a cross-sectional, descriptive observational study in which the subjects completed the WHOQOL-brief questionnaire to evaluate QoL. The questionnaire is composed of 26 questions in four domains (physical health, psychological, social and environmental) and was administered in the Persian language. Information on the frequency and duration of training sessions was also collected. The Shapiro-Wilk test was used to verify normal distribution, followed by the chi-squared test for proportions and simple analysis of variance for comparisons between groups of sports. Pearson's correlation was used to assess the relationships between the variables analyzed. According to the findings, athletes from individual sports obtained the highest scores in all domains of QoL: physical (87.1 ± 8.1 points), psychological (87.6 ± 9.6 points), social (89.7 ± 9.2 points), environmental (75.5 ± 10.7 points) and overall QoL score (84.9 ± 9.4 points). Across all sports, the social domain was the highest QoL index (84.3 ± 7.2 points) and the environmental domain the lowest (68.1 ± 10.8 points). No correlations were found between QoL domains and time engaged in the sport (r = 0.01; p = 0.93), number of weekly training sessions (r = 0.09; p = 0.37) or session duration (r = -0.06; p = 0.58). Comparison of QoL results with those of the general population revealed higher levels in the physical and psychological components for amateur athletes.
In the present study, engaging in sports was associated with higher QoL levels in amateur athletes, particularly in the physical and psychological domains. Moreover, the overall score correlated with the individual domains of QoL. Keywords: amateur, domains, Iranian, quality of life
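The Pearson correlations reported above (e.g., r = 0.01 between QoL domains and time engaged in sport) are computed with the standard product-moment formula; a minimal pure-Python sketch follows, with invented sample data rather than the study's measurements.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient (pure-Python sketch
    of the statistic used in the study; example data are invented)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)
```

Values near zero, like those reported for training variables, indicate no linear relationship between the paired measurements.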
Procedia PDF Downloads 153
6223 Low Temperature Biological Treatment of Chemical Oxygen Demand for Agricultural Water Reuse Application Using Robust Biocatalysts
Authors: Vedansh Gupta, Allyson Lutz, Ameen Razavi, Fatemeh Shirazi
Abstract:
The agriculture industry is especially vulnerable to forecasted water shortages. In the fresh and fresh-cut produce sector, conventional flume-based washing with recirculation exhibits high water demand. This leads to a large water footprint and possible cross-contamination of pathogens. These can be alleviated through advanced water reuse processes, such as membrane technologies including reverse osmosis (RO). Water reuse technologies effectively remove dissolved constituents but can easily foul without pre-treatment. Biological treatment is effective for the removal of organic compounds responsible for fouling, but not at the low temperatures encountered at most produce processing facilities. This study showed that the Microvi MicroNiche Engineering (MNE) technology effectively removes organic compounds (> 80%) at low temperatures (6-8 °C) from wash water. The MNE technology uses synthetic microorganism-material composites with negligible solids production, making it advantageously situated as an effective bio-pretreatment for RO. A preliminary technoeconomic analysis showed 60-80% savings in operation and maintenance costs (OPEX) when using the Microvi MNE technology for organics removal. This study and the accompanying economic analysis indicated that the proposed technology process will substantially reduce the cost barrier for adopting water reuse practices, thereby contributing to increased food safety and furthering sustainable water reuse processes across the agricultural industry.Keywords: biological pre-treatment, innovative technology, vegetable processing, water reuse, agriculture, reverse osmosis, MNE biocatalysts
Procedia PDF Downloads 129
6222 Quality Analysis of Lake Malawi's Diplotaxodon Fish Species Processed in Solar Tent Dryer versus Open Sun Drying
Authors: James Banda, Jupiter Simbeye, Essau Chisale, Geoffrey Kanyerere, Kings Kamtambe
Abstract:
Improved solar tent dryers for processing small fish species were designed under the CultiAF project to reduce post-harvest fish losses and improve the supply of quality fish products in the southern part of Lake Malawi. A comparative analysis of the quality of Diplotaxodon (Ndunduma) from Lake Malawi processed in a solar tent dryer and by open sun drying was conducted using proximate analysis, microbial analysis and sensory evaluation. Proximates for solar tent dried and open sun dried fish in terms of protein, fat, moisture and ash were 63.3±0.15% and 63.3±0.34%, 19.6±0.09% and 19.9±0.25%, 8.3±0.12% and 17.0±0.01%, and 15.6±0.61% and 21.9±0.91%, respectively. Crude protein and crude fat showed no significant differences (p = 0.05), while moisture and ash content were significantly different (p = 0.001). Open sun dried fish had significantly higher viable bacterial counts (5.2×10⁶ CFU) than solar tent dried fish (3.9×10² CFU). Counts of isolated bacteria for solar tent dried and open sun dried fish, respectively, were 1.0×10¹ and 7.2×10³ for total coliforms, 0 and 4.5×10³ for Escherichia coli, 0 and 7.5×10³ for Salmonella, 0 and 5.7×10² for Shigella, 4.0×10¹ and 6.1×10³ for Staphylococcus, and 1.0×10¹ and 7.0×10² for Vibrio. Qualitative evaluation of sensory properties showed higher acceptability for solar tent dried fish (3.8) than for open sun dried fish (1.7). It is concluded that promoting solar tent drying for processing small fish species in Malawi would support small-scale fish processors in producing quality fish in terms of nutritive value, reduced microbial contamination, sensory acceptability and reduced moisture content. Keywords: Diplotaxodon, Malawi, open sun drying, solar tent drying
Procedia PDF Downloads 336
6221 Exploring Barriers to Quality of Care in South African Midwifery Obstetric Units: The Perspective of Nurses and Midwives
Abstract:
Achieving quality and respectful maternal health care is part of the global agenda to improve reproductive health and achieve universal reproductive rights. Barriers to quality of care in South African maternal health facilities exist at both systemic and individual levels. In addition, the normalization of gender-based violence within South Africa has a large impact on people seeking health care as well as those who provide care within health facilities. The hierarchical environment of South Africa's public health system penalizes both patients and providers, who struggle to assume any appreciable power. This paper explores how systemic and individual-level barriers to quality of care affect the midwifery profession within South African maternal health services and create, at times, an environment of enmity rather than care. It analyzes and discusses data collected from in-depth, semi-structured interviews with nurses and midwives at three maternal health facilities in South Africa. The study took a holistic approach to understanding the realities of nurses and midwives in order to explore the ways in which experience informs their practice and treatment of pregnant women. Through collecting and analyzing narratives, linkages were made between nurses' and midwives' day-to-day and historical experiences and disrespectful care. Findings from this study show that barriers to quality of care take form in complex and interrelated ways. The physical structure of the health facility, human resource shortages, and the current model of maternal health care, which often lacks a person-centered approach, are entangled with personal beliefs and attitudes about what it means to be a midwife, creating an environment that is often not conducive to a positive birthing experience. This entanglement sits within a society of high rates of violence, inequality, and poverty.
Having teased out the nuances of each of these barriers and the multiple ways they reinforce each other, the findings demonstrate that birth, and the work of a midwife, are situated in a mode of discipline and punishment within this context. For analytical purposes, the paper breaks down the individual barriers to quality care and discusses their current and historical significance before returning to the interrelated forms in which barriers to quality maternal health care manifest. In conclusion, the paper questions the role of agency in the ability to subvert systemic barriers to quality care and examines ideas around shifting attitudes and beliefs of and about midwives. International and local policies and guidelines have a role to play in realizing such shifts; however, as this paper suggests, when policy does not speak to the local context it risks contributing to frustrations and impeding the path to quality and respectful maternal health care. Keywords: disrespect and abuse in childbirth, midwifery, South African maternal health care, quality of care
Procedia PDF Downloads 172
6220 Design and Implementation of Collaborative Editing System Based on Physical Simulation Engine Running State
Authors: Zhang Songning, Guan Zheng, Ci Yan, Ding Gangyi
Abstract:
The application of physical simulation engines in collaborative editing systems has an important background and role. First, physical simulation engines can provide real-world physical simulations, enabling users to interact and collaborate in real time in virtual environments. This provides a more intuitive and immersive experience for collaborative editing systems, allowing users to more accurately perceive and understand the elements and operations involved in collaborative editing. Second, through physical simulation engines, different users can share a virtual space and perform real-time collaborative editing within it. This real-time sharing and collaborative editing helps synchronize information among team members and improves the efficiency of collaborative work. In experiments, the average single-user model transmission speed in the collaborative editing system increased by 141.91%, the average single-user model processing speed by 134.2%, the average single-user processing flow rate by 175.19%, and the overall single-user efficiency by 150.43%. As the number of users increases, the overall efficiency remains stable, and the collaborative editing system based on the physical simulation engine's running state also scales horizontally. The design and implementation of a collaborative editing system based on physical simulation engines thus not only enriches the user experience but also improves the effectiveness of team collaboration, opening new possibilities for collaborative work. Keywords: physics engine, simulation technology, collaborative editing, system design, data transmission
Procedia PDF Downloads 85
6219 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning (QML) has received significant attention, and numerous QML models have been created and tested on various types of data, including text and images. Images are exceedingly complex data components that demand considerable processing power, and despite its maturity, classical machine learning still has difficulties with big data applications. Quantum technology has also changed how machine learning is approached, employing quantum features to address optimization problems. Since current quantum hardware is extremely noisy, running machine learning algorithms on it risks producing inaccurate results. To explore the advantages of quantum over classical approaches, this research concentrates on colored image data. Deep learning classification models are being developed on quantum platforms but are still at a very early stage; recent research has mostly used black-and-white benchmark image datasets such as MNIST and Fashion-MNIST. MNIST and CIFAR-10 have been compared for binary classification, with MNIST classified more accurately than the colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. Deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not previously been developed to classify colored images and quantify their advantage over classical models; only a few models, such as quantum variational circuits, take colored images. The methodology adopted here is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were translated into grayscale 28 × 28-pixel images, and 10,000 test and 50,000 training images were used.
The objective of this work is to determine how much a quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model adopted a hybrid method, encoding the images into a quantum simulator for feature extraction using quantum gate rotations; measurements were then carried out on the classical computer. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may further increase accuracy. This study demonstrates that quantum machine and deep learning models can be superior to classical machine learning approaches in terms of processing speed and accuracy when used to classify colored images. Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
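A common way such hybrid pipelines encode a grayscale pixel into a qubit is angle embedding: the pixel value sets a single-qubit RY rotation angle, and the simulator (e.g., PennyLane) measures the Pauli-Z expectation. The sketch below illustrates that generic encoding convention in plain Python; it is not the paper's exact circuit, whose gate layout is not given in the abstract.

```python
import math

def angle_encode(pixel):
    """Map an 8-bit grayscale pixel value (0-255) to a rotation angle
    in [0, pi] for an RY gate (generic angle-embedding convention)."""
    return math.pi * pixel / 255.0

def z_expectation(theta):
    """<Z> after applying RY(theta) to |0>: the state becomes
    cos(theta/2)|0> + sin(theta/2)|1>, so <Z> = cos(theta)."""
    return math.cos(theta)
```

A black pixel (0) thus maps to <Z> = +1 and a white pixel (255) to <Z> = -1, giving the quantum layers a bounded, pixel-dependent feature to rotate and entangle.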
Procedia PDF Downloads 129
6218 Sustainability in Community-Based Forestry Management: A Case from Nepal
Authors: Tanka Nath Dahal
Abstract:
Community-based forestry is seen as a promising instrument for sustainable forest management (SFM) through the purposeful involvement of local communities. Globally, the forest area managed by local communities is on the rise. However, transferring management responsibilities to forest users alone cannot guarantee the sustainability of forest management; a monitoring tool that allows local communities to track the progress of forest management towards the goal of sustainability is essential. A case study was conducted in Nepal covering six forest user groups (FUGs), two from each of three community-based forestry models: community forestry (CF), buffer zone community forestry (BZCF), and collaborative forest management (CFM), representing three different physiographic regions. The study explores which community-based forest management model (CF, BZCF or CFM) performs best in terms of sustainable forest management, assessing the overall performance of the three models towards SFM using locally developed criteria (4), indicators (26) and verifiers (60). This paper attempts to quantify the sustainability of the models using a sustainability index for individual criteria (SIIC) and an overall sustainability index (OSI); the FUGs also rated the criteria and scored the verifiers. Among the four criteria, the FUGs ascribed the highest weight to the institutional framework and governance criterion, followed by economic and social benefits, forest management practices, and extent of forest resources. Similarly, the SIIC was highest for the institutional framework and governance criterion. The average OSI values for CFM, CF, and BZCF were 0.48, 0.51 and 0.60, respectively, suggesting that buffer zone community forestry is the most sustainable of the three models. The study also suggested that the SIIC and OSI help local communities quantify the overall progress of their forestry practices towards sustainability.
The indices provided a clear picture of forest management practices, indicating the direction in which they are heading in terms of sustainability, and informed users of the issues to attend to in enhancing the sustainability of their forests. Keywords: community forestry, collaborative management, overall sustainability, sustainability index for individual criteria
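Aggregating per-criterion indices into an overall index can be illustrated as a weighted average over the four criteria. The aggregation rule, criterion names, weights, and index values below are assumptions for demonstration only, not the study's published formula or data.

```python
def overall_sustainability_index(siic, weights):
    """Aggregate sustainability indices for individual criteria (SIIC)
    into an overall sustainability index (OSI) as a weighted average.
    Criterion names, weights and index values are hypothetical."""
    total_weight = sum(weights.values())
    return sum(siic[c] * weights[c] for c in siic) / total_weight

# hypothetical SIIC values and criterion weights for one FUG
siic = {"governance": 0.70, "benefits": 0.60, "management": 0.50, "resources": 0.40}
weights = {"governance": 0.35, "benefits": 0.30, "management": 0.20, "resources": 0.15}
osi = overall_sustainability_index(siic, weights)
```

Giving governance the largest weight, as the FUGs in the study did, means its SIIC dominates the resulting OSI.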
Procedia PDF Downloads 248
6217 The Journalistic Representation of Femicide in Italy
Authors: Saveria Capecchi
Abstract:
In recent decades, the issue of gender-based violence, particularly femicide, has been increasingly presented to the public by Italian media. However, it is often treated in a trivialized and sensationalistic manner, focusing on cases that exhibit the most "attractive" elements (brutality, sex, drugs, the young age and/or good looks of the victims, stories with "mystery," "horror," etc.). Furthermore, this phenomenon is most often represented by referring to the psycho-individualistic paradigm, focusing on the psychological and individual characteristics of the perpetrator rather than referring to the feminist and/or constructivist paradigms. According to the latter, the causes of male violence against women do not lie in the individual problems of the perpetrator but in the social and cultural construction of the power hierarchy between men and women. The following study presents the results of qualitative research on the journalistic approach to male violence against women in Italy, aimed at examining the limitations of the narrative strategies used by the media. The research focuses on the case of Giulia Cecchettin (killed by her ex-boyfriend Filippo Turetta on November 11, 2023), which has fueled the debate on the narrative surrounding male violence against women. This case was chosen based on its significant media coverage and the victim's family's commitment to combating gender-based violence. The research involves a content analysis of 150 articles from four different national newspapers («Corriere della Sera», «La Stampa», «Il Giornale», «la Repubblica»). Additionally, the study analyzed the social media use of two Italian newspapers («Corriere della Sera» and «la Repubblica»), examining 20 posts and their 600 related comments, highlighting the various types of public responses, including criticisms of how femicide is represented by the media. 
Furthermore, the paper will reflect on the role that the Italian women's movement and certain journalist communities have played in promoting a narrative of femicide that is more attentive to power dynamics and free from gender stereotypes.Keywords: gender-based violence, femicide, gender stereotypes, Italian newspapers
Procedia PDF Downloads 21
6216 Social Media Idea Ontology: A Concept for Semantic Search of Product Ideas in Customer Knowledge through User-Centered Metrics and Natural Language Processing
Authors: Martin Häusl, Maximilian Auch, Johannes Forster, Peter Mandl, Alexander Schill
Abstract:
In order to survive on the market, companies must constantly develop improved and new products designed to serve the needs of their customers in the best possible way. The creation of new products is also called innovation and is primarily driven by a company's internal research and development department. However, a new approach has been emerging for some years now that involves external knowledge in the innovation process. This approach is called open innovation, and it identifies customer knowledge as the most important source in the innovation process. This paper presents a concept for using social media posts as an external source to support the open innovation approach in its initial phase, the ideation phase. For this purpose, the social media posts are semantically structured with the help of an ontology, and the authors are evaluated using graph-theoretical metrics such as density. For the structuring and evaluation of relevant social media posts, we also use findings from Natural Language Processing, e.g., Named Entity Recognition, specific dictionaries, a triple tagger and a part-of-speech tagger. The selection and evaluation of the tools used are discussed in this paper. Using our ontology and metrics to structure social media posts enables users to semantically search these posts for new product ideas and thus gain improved insight into external sources such as customer needs. Keywords: idea ontology, innovation management, semantic search, open information extraction
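The graph-theoretical density metric mentioned above measures how densely connected an interaction graph is: the fraction of possible edges actually present. A minimal sketch follows; the idea of treating authors as nodes linked by, say, replies or mentions is an illustrative assumption about how such a graph might be built.

```python
def graph_density(num_nodes, num_edges, directed=False):
    """Density of an author-interaction graph: the fraction of possible
    edges (e.g., hypothetical reply/mention links between authors)
    actually present. Returns a value in [0, 1]."""
    if num_nodes < 2:
        return 0.0
    possible = num_nodes * (num_nodes - 1)
    if not directed:
        possible //= 2
    return num_edges / possible

# 4 authors with 3 undirected interaction links out of 6 possible
density = graph_density(4, 3)
```

Higher density among the authors discussing a topic suggests a more tightly connected community, one plausible signal for weighting their posts during ideation.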
Procedia PDF Downloads 188
6215 Microfluidic Impedimetric Biochip and Related Methods for Measurement Chip Manufacture and Counting Cells
Authors: Amina Farooq, Nauman Zafar Butt
Abstract:
This paper describes methods and tools for counting particles of interest, such as cells: a microfluidic system with interconnected electronics on a flexible substrate, inlet-outlet ports and interface schemes, sensitive and selective detection of cells, and processing of cell counts at polymer interfaces in a microscale biosensor for the detection of target biological and non-biological cells. The fabrication process comprises the development of fluidic channels, planar fluidic contact ports, integrated metal electrodes on a flexible substrate for impedance measurements, and a surface-modification plasma treatment as an intermediate bonding layer. Magnetron DC sputtering is used to deposit a double metal layer (Ti/Pt) over the polypropylene film, and defined zones are established and etched using a photoresist layer. Small fluid volumes, a reduced detection region, and electrical impedance measurements over a range of frequencies improve the sensitivity and specificity of cell counting. The procedure involves continuous flow of fluid samples containing particles of interest through the microfluidic channels; the electrical differential counter generates a bipolar pulse for each passing cell, and the total number of particles of interest originally in the fluid sample is calculated using a MATLAB program and signal processing. Using these methods and similar devices, it is possible to develop a robust and economical kit for cell counting in whole-blood samples. Keywords: impedance, biochip, cell counting, microfluidics
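The pulse-counting step, registering one count per passing cell, can be sketched as a simple threshold-crossing detector (the paper's signal processing is done in MATLAB; this Python sketch and its synthetic trace are illustrative only).

```python
def count_cells(signal, threshold):
    """Count cells from an impedance trace: each passing cell produces a
    bipolar pulse, counted here on each upward crossing of +threshold.
    A simplified sketch of the differential-counter signal processing."""
    count = 0
    above = False
    for v in signal:
        if not above and v > threshold:
            count += 1          # rising edge of a new pulse
            above = True
        elif above and v < threshold:
            above = False       # pulse ended; arm for the next one
    return count

# synthetic trace containing two bipolar pulses
trace = [0.0, 0.1, 2.0, 3.1, 0.8, -2.4, -3.0, 0.0, 0.1, 2.6, 0.7, -2.1, 0.0]
n_cells = count_cells(trace, threshold=1.5)
```

A production implementation would add baseline correction and also check for the matching negative excursion to reject noise spikes.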
Procedia PDF Downloads 161
6214 White Individuals' Perception On Whiteness
Authors: Sebastian Del Corral Winder, Kiriana Sanchez, Mixalis Poulakis, Samantha Gray
Abstract:
This paper seeks to explore White privilege and Whiteness. Being White in the U.S. is often perceived as the norm and it brings significant social, economic, educational, and health privileges that often are hidden in social interactions. One quality of Whiteness has been its invisibility given its intrinsic impact on the system, which becomes only visible when paying close attention to White identity and culture and during cross-cultural interactions. The cross-cultural interaction provides an emphasis on differences between the participants and people of color are often viewed as “the other.” These interactions may promote an increased opportunity for discrimination and negative stereotypes against a person of color. Given the recent increase of violence against culturally diverse groups, there has been an increased sense of otherness and division in the country. Furthermore, the accent prestige theory has found that individuals who speak English with a foreign accent are perceived as less educated, competent, friendly, and trustworthy by White individuals in the United States. Using the consensual qualitative research (CQR) methodology, this study explored the cross-cultural dyad from the White individual’s perspective focusing on the psychotherapeutic relationship. The participants were presented with an audio recording of a conversation between a psychotherapist with a Hispanic accent and a patient with an American English accent. Then, the participants completed an interview regarding their perceptions of race, culture, and cross-cultural interactions. The preliminary results suggested that the Hispanic accent alone was enough for the participants to assign stereotypical ethnic and cultural characteristics to the individual with the Hispanic accent. Given the quality of the responses, the authors completed a secondary analysis to explore Whiteness and White privilege in more depth. 
Participants were found to be on a continuum in their understanding and acknowledgment of systemic racism; while some participants listed examples of inequality, others noted that "all people are treated equally." Most participants reported feeling discomfort in discussing topics of cultural diversity and systemic racism, for fear of saying the "wrong thing." Most participants placed the responsibility of discussing cultural differences on the person of color, which has been observed to create further alienation and otherness for culturally diverse individuals. The results indicate the importance of examining racial and cultural biases among White individuals to promote an anti-racist stance, and they emphasize the need for greater systemic changes in education, policies, and individual awareness regarding cultural identity. They also suggest the importance of White individuals taking ownership of their own cultural biases in order to promote equity and engage in cultural humility in a multicultural world. Future research should continue exploring the role of White ethnic identity and education, as these appear to moderate White individuals' attitudes and beliefs regarding other races and cultures. Keywords: culture, qualitative research, whiteness, white privilege
Procedia PDF Downloads 158
6213 Establishing a Surrogate Approach to Assess the Exposure Concentrations during Coating Process
Authors: Shan-Hong Ying, Ying-Fang Wang
Abstract:
A surrogate approach was deployed for assessing exposures of multiple chemicals at the selected working area of coating processes and applied to assess the exposure concentration of similar exposed groups using the same chemicals but different formula ratios. For the selected area, 6 to 12 portable photoionization detector (PID) were placed uniformly in its workplace to measure its total VOCs concentrations (CT-VOCs) for 6 randomly selected workshifts. Simultaneously, one sampling strain was placed beside one of these portable PIDs, and the collected air sample was analyzed for individual concentration (CVOCi) of 5 VOCs (xylene, butanone, toluene, butyl acetate, and dimethylformamide). Predictive models were established by relating the CT-VOCs to CVOCi of each individual compound via simple regression analysis. The established predictive models were employed to predict each CVOCi based on the measured CT-VOC for each the similar working area using the same portable PID. Results show that predictive models obtained from simple linear regression analyses were found with an R2 = 0.83~0.99 indicating that CT-VOCs were adequate for predicting CVOCi. In order to verify the validity of the exposure prediction model, the sampling analysis of the above chemical substances was further carried out and the correlation between the measured value (Cm) and the predicted value (Cp) was analyzed. It was found that there is a good correction between the predicted value and measured value of each measured chemical substance (R2=0.83~0.98). Therefore, the surrogate approach could be assessed the exposure concentration of similar exposed groups using the same chemicals but different formula ratios. 
However, it is recommended to establish the prediction model between the chemical substances associated with each coater and the direct-reading PID, which would be more representative of the actual exposure situation and would allow more accurate estimation of operators' long-term exposure concentrations.
Keywords: exposure assessment, exposure prediction model, surrogate approach, TVOC
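The predictive-model step described in this abstract can be sketched in a few lines: fit a simple linear regression relating the PID's total VOC reading (CT-VOC) to one compound's sampled concentration (CVOCi), then predict that compound's concentration at a similar working area from its PID reading alone. All numbers below are synthetic illustrations, not data from the study, and toluene is used only as one example of the five VOCs named.

```python
# Minimal sketch of the surrogate approach: simple linear regression of an
# individual VOC concentration (CVOCi) on the total VOC reading (CT-VOC).

def fit_simple_regression(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = mean_y - b * mean_x
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    return a, b, r2

# Hypothetical paired measurements from 6 work shifts: total VOCs (ppm) from
# the portable PID and toluene (ppm) from the co-located sampling train.
ct_voc = [12.0, 18.5, 25.1, 31.0, 40.2, 47.8]
c_toluene = [2.1, 3.4, 4.6, 5.9, 7.5, 9.1]

a, b, r2 = fit_simple_regression(ct_voc, c_toluene)
print(f"C_toluene = {a:.3f} + {b:.3f} * CT-VOC (R^2 = {r2:.3f})")

# Predict toluene exposure for a similar exposure group from its PID reading.
predicted = a + b * 35.0
print(f"Predicted C_toluene at CT-VOC = 35 ppm: {predicted:.2f} ppm")
```

In the abstract's workflow, one such model would be fitted per compound and per coating formulation, and validated by comparing predicted (Cp) against independently measured (Cm) concentrations.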
Procedia PDF Downloads 150
6212 COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes
Authors: Shahad Nagoor, Lucy Hederman, Kevin Koidl, Annalina Caputo
Abstract:
Doctors’ notes reflect their impressions, attitudes, clinical sense, and opinions about patients’ conditions and progress, as well as other information essential for doctors’ daily clinical decisions. Despite their value, clinical notes remain insufficiently researched within the language processing community. Automatically extracting information from unstructured text data is known to be a difficult task, as opposed to dealing with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) and machine learning techniques applied to clinician notes can assist doctors’ decision-making in the Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes such as survival or mortality can usefully inform the judgement of clinical sentiment in ICU clinical notes. This paper makes two contributions. First, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of note not previously seen by clinicalBERT or Bio_Discharge_Summary_BERT. The model, which is based on clinicalBERT, achieves higher predictive accuracy (Acc 93.33%, AUC 0.98, and precision 0.96). Second, we perform data augmentation using clinical contextual word embeddings based on a pre-trained clinical model to balance the samples in each class of the data (survived vs. deceased patients). Data augmentation slightly improves prediction accuracy (Acc 96.67%, AUC 0.98, and precision 0.92).
Keywords: BERT fine-tuning, clinical sentiment, COVID-19, data augmentation
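The class-balancing augmentation step can be sketched as follows. This is a simplified stand-in: the paper augments the minority class with substitutions drawn from contextual word embeddings of a pre-trained clinical model, whereas the `augment` function below merely masks random tokens so that the balancing loop itself is runnable without any model downloads. The notes and labels are invented examples, not study data.

```python
# Toy sketch of oversampling the minority class (survived vs. deceased) with
# perturbed copies of its notes. A real pipeline would fill each [MASK] via a
# clinical masked language model instead of leaving it masked.
import random

def augment(note, mask_rate=0.15, rng=None):
    """Return a perturbed copy of a note with ~mask_rate of tokens masked."""
    rng = rng or random.Random(0)
    tokens = note.split()
    return " ".join(
        "[MASK]" if rng.random() < mask_rate else tok for tok in tokens
    )

def balance_classes(notes, labels, rng=None):
    """Oversample the minority class with augmented copies until both
    classes have equal counts; returns (balanced_notes, balanced_labels)."""
    rng = rng or random.Random(0)
    by_label = {0: [], 1: []}
    for note, lab in zip(notes, labels):
        by_label[lab].append(note)
    minority = min(by_label, key=lambda k: len(by_label[k]))
    majority = max(by_label, key=lambda k: len(by_label[k]))
    while len(by_label[minority]) < len(by_label[majority]):
        source = rng.choice(by_label[minority])
        by_label[minority].append(augment(source, rng=rng))
    out_notes = by_label[0] + by_label[1]
    out_labels = [0] * len(by_label[0]) + [1] * len(by_label[1])
    return out_notes, out_labels

notes = ["pt stable on low-flow oxygen", "worsening hypoxia overnight",
         "extubated, tolerating trial", "family meeting re goals of care",
         "acute decompensation, pressors started"]
labels = [0, 1, 0, 0, 1]  # 0 = survived, 1 = deceased (illustrative)
bal_notes, bal_labels = balance_classes(notes, labels)
print(bal_labels.count(0), bal_labels.count(1))  # classes now equal in size
```

The balanced set would then be fed to the fine-tuning stage of the sentiment classifier; only the augmenter would change when swapping in an embedding-based substitution model.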
Procedia PDF Downloads 206
6211 Case Study on Exploration of Pediatric Cardiopulmonary Resuscitation among Involved Team Members in Pediatric Intensive Care Unit Institut Jantung Negara
Authors: Farah Syazwani Hilmy Zaki
Abstract:
Background: Compared to adult cardiopulmonary resuscitation (CPR), high-quality research and evidence on pediatric CPR remain relatively scarce. This knowledge gap hinders the development of optimal guidelines and best practices for resuscitating children. Objectives: To explore current pediatric intensive care unit (PICU) CPR practices in the PICU of Institut Jantung Negara (IJN), Malaysia. Method: The research employed a qualitative approach, utilising a case study research design. Data collection involved in-depth interviews and a review of the Resuscitation Feedback Form. Purposive sampling was used to select two cases comprising 14 participants: one cardiologist, one anaesthetist, and twelve nurses. The data collected were transcribed and entered into NVivo software to facilitate theme development; thematic analysis was then conducted. Findings: The study yielded key findings on enhancing PICU CPR practice, categorised into four themes: routine procedures, resuscitation techniques, team dynamics, and individual contributions. The establishment of a cohesive team is crucial to effective resuscitation. According to participants, a lack of confidence, skills, and knowledge presents significant obstacles to effective PICU CPR. Conclusion: The findings indicate that participants are satisfied with current PICU CPR practices; however, the research also highlights the need for enhancements in routine procedures, resuscitation techniques, and team and individual factors. It was further suggested that additional training on the resuscitation process be conducted to enhance the preparedness of the medical team.
Keywords: cardiopulmonary resuscitation, feedback, nurses, pediatric intensive care unit
Procedia PDF Downloads 90