Search results for: task value
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2104

694 Potentials for Learning History through Role-Playing in Virtual Reality: An Exploratory Study on Role-Playing on a Virtual Heritage Site

Authors: Danzhao Cheng, Eugene Ch'ng

Abstract:

Virtual Reality technologies can reconstruct cultural heritage objects and sites to a high level of realism. Concentrating mostly on documenting authentic data and accurate representations of tangible contents, current virtual heritage is limited to accumulating visually presented objects. Such constructions, however, are fragmentary and may not convey the inherent significance of heritage in a meaningful way. In order to contextualise fragmentary historical contents so that history can be told, one strategy is to create a guided narrative via role-playing. Such an approach can strengthen the logical connections between cultural elements and facilitate creative synthesis within the virtual world. This project successfully reconstructed the Ningbo Sanjiangkou VR site as it was in the Yuan Dynasty, combining VR technology with a role-playing game approach. The results with 80 pairs of participants suggest that VR role-playing can be beneficial in a number of ways. Firstly, it creates thematic interactivity, which encourages users to explore the virtual heritage in a more entertaining way with task-oriented goals. Secondly, the experience becomes highly engaging since users can interpret a historical context through the perspective of specific roles that existed in past societies. Thirdly, personalisation allows open-ended sequences of the expedition, reinforcing users' acquisition of procedural knowledge relative to the cultural domain. To sum up, role-playing in VR holds great potential for experiential learning as it allows users to interpret a historical context in a more entertaining way.

Keywords: experiential learning, maritime silk road, role-playing, virtual heritage, virtual reality

Procedia PDF Downloads 164
693 Full-Face Hyaluronic Acid Implants Assisted by Artificial Intelligence-Generated Post-treatment 3D Models

Authors: Ciro Cursio, Pio Luigi Cursio, Giulia Cursio, Isabella Chiardi, Luigi Cursio

Abstract:

Introduction: Full-face aesthetic treatments often present a difficult task: since different patients possess different anatomical and tissue characteristics, there is no guarantee that the same treatment will have the same effect on multiple patients; additionally, full-face rejuvenation and beautification treatments require not only a high degree of technical skill but also the ability to choose the right product for each area and a keen artistic eye. Method: We present an artificial intelligence-based algorithm that can generate realistic post-treatment 3D models based on the patient’s requests together with the doctor’s input. These 3-dimensional predictions can be used by the practitioner for two purposes: firstly, they help ensure that the patient and the doctor are completely aligned on the expectations of the treatment; secondly, the doctor can use them as a visual guide, obtaining a natural result that would normally stem from the practitioner's artistic skills. To this end, the algorithm is able to predict injection zones, the type and quantity of hyaluronic acid, the injection depth, and the technique to use. Results: Our innovation consists in providing an objective visual representation of the patient that is helpful in the patient-doctor dialogue. The patient, based on this information, can express her desire to undergo a specific treatment or make changes to the therapeutic plan. In short, the patient becomes an active agent in the choices made before the treatment. Conclusion: We believe that this algorithm will reveal itself as a useful tool in the pre-treatment decision-making process to prevent both the patient and the doctor from making a leap into the dark.

Keywords: hyaluronic acid, fillers, full face, artificial intelligence, 3D

Procedia PDF Downloads 89
692 Specific Language Impairment in Kannada: Evidence From a Morphologically Complex Language

Authors: Shivani Tiwari, Prathibha Karanth, B. Rajashekhar

Abstract:

Impairments of syntactic morphology are often considered central in children with Specific Language Impairment (SLI). In English and related languages, deficits of tense-related grammatical morphology can serve as a clinical marker of SLI. Yet, cross-linguistic studies on SLI in the recent past suggest that the nature and severity of morphosyntactic deficits in children with SLI vary with the language being investigated. Therefore, in the present study, we investigated the morphosyntactic deficits in a group of children with SLI who speak Kannada, a morphologically complex Dravidian language spoken in the Indian subcontinent. A group of 15 children with SLI participated in this study. Two more groups of typically developing children (15 each), matched for language and age to the children with SLI, were included as control participants. All participants were assessed for morphosyntactic comprehension and expression using a standardized language test and a spontaneous speech task. Results of the study showed that children with SLI differed significantly from the age-matched but not the language-matched control group on tasks of both comprehension and expression of morphosyntax. This finding is, however, in contrast with reports on English-speaking children with SLI, who are reported to be poorer than younger MLU-matched children on tasks of morphosyntax. The observed difference between Kannada-speaking and English-speaking children with SLI in impairments of morphosyntax is explained on the basis of the morphological richness theory. The theory predicts that children with SLI perform relatively better in a morphologically rich language due to the frequent and consistent occurrence of the features that mark morphological distinctions. The authors, therefore, conclude that language-specific features do influence the manifestation of the disorder in children with SLI.

Keywords: specific language impairment, morphosyntax, Kannada, manifestation

Procedia PDF Downloads 243
691 An End-to-end Piping and Instrumentation Diagram Information Recognition System

Authors: Taekyong Lee, Joon-Young Kim, Jae-Min Cha

Abstract:

A piping and instrumentation diagram (P&ID) is an essential design drawing describing the interconnection of process equipment and the instrumentation installed to control the process. P&IDs are modified and managed throughout the whole life cycle of a process plant. For ease of data transfer, P&IDs are generally handed over from a design company to an engineering company in portable document format (PDF), which is hard to modify. Therefore, engineering companies have to devote a great deal of time and human resources solely to manually converting P&ID images into a computer-aided design (CAD) file format. To reduce the inefficiency of the P&ID conversion, the various symbols and texts in P&ID images should be automatically recognized. However, recognizing information in P&ID images is not an easy task. A P&ID image usually contains hundreds of symbol and text objects. Most objects are quite small compared to the size of the whole image and are densely packed together. Traditional recognition methods based on geometrical features are not capable of recognizing every element of a P&ID image. To overcome these difficulties, state-of-the-art deep learning models, RetinaNet and the connectionist text proposal network (CTPN), were used to build a system for recognizing symbols and texts in a P&ID image. Using the RetinaNet and CTPN models, carefully modified and tuned for the P&ID image dataset, the developed system recognizes texts, equipment symbols, piping symbols, and instrumentation symbols from an input P&ID image and saves the recognition results in a pre-defined extensible markup language (XML) format. In a test using a commercial P&ID image, the P&ID information recognition system correctly recognized 97% of the symbols and 81.4% of the texts.
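As a rough illustration of the final step, a list of recognized objects can be serialized to XML with Python's standard library. The element and attribute names below are invented for illustration; they are not the paper's actual pre-defined schema.

```python
# Hypothetical sketch: serializing P&ID recognition results to XML.
# Element names ("PnID", "Symbol", "Text") are illustrative only.
import xml.etree.ElementTree as ET

def results_to_xml(symbols, texts):
    """Serialize detected symbol and text objects into an XML string."""
    root = ET.Element("PnID")
    for s in symbols:
        e = ET.SubElement(root, "Symbol", kind=s["kind"])
        e.set("bbox", ",".join(str(v) for v in s["bbox"]))
    for t in texts:
        e = ET.SubElement(root, "Text")
        e.set("bbox", ",".join(str(v) for v in t["bbox"]))
        e.text = t["string"]  # the recognized character string
    return ET.tostring(root, encoding="unicode")

xml_out = results_to_xml(
    symbols=[{"kind": "equipment", "bbox": (10, 20, 64, 80)}],
    texts=[{"string": "P-101", "bbox": (12, 84, 60, 96)}],
)
print(xml_out)
```

In a real system the bounding boxes and class labels would come from the RetinaNet and CTPN outputs rather than being written by hand.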

Keywords: object recognition system, P&ID, symbol recognition, text recognition

Procedia PDF Downloads 153
690 Students' Perceptions of Assessment and Feedback in Higher Education

Authors: Jonathan Glazzard

Abstract:

National student satisfaction data in England demonstrate that undergraduate students are less satisfied overall with assessment and feedback than with other aspects of their higher education courses. Given that research findings suggest that high-quality feedback is a critical factor associated with academic achievement, it is important that feedback enables students to demonstrate improved academic achievement in their subsequent assessments. Given the growing importance of staff-student partnerships in higher education, this research examined students' perceptions of assessment and feedback in one UK university. Students' perceptions were elicited through a university-wide survey completed by undergraduate students. In addition, three focus groups were used to provide qualitative student perception data across the three university faculties. The data indicate that whilst students valued detailed feedback on their work, less detailed feedback could be compensated for by the development of pre-assessment literacy skills front-loaded into courses. Assessment literacy skills valued by students included the use of clear assessment criteria and assignment briefings which enabled students to fully understand the assessment task. Additionally, students valued pre-assessment literacy tasks which enabled them to understand the standards they were expected to achieve. Students valued opportunities for self- and peer assessment prior to the final assessment, and formative assessment feedback which matched the summative assessment feedback. Students also valued dialogic face-to-face feedback after receiving written feedback. Above all, students valued feedback which was particular to their work and which gave recognition for the effort they had put into completing specific assessments. The data indicate that there is a need for higher education lecturers to receive systematic training in assessment and feedback which provides a comprehensive grounding in pre-assessment literacy skills.

Keywords: formative assessment, summative assessment, feedback, marking

Procedia PDF Downloads 322
689 Complex Management of Arrhythmogenic Right Ventricular Dysplasia/Cardiomyopathy

Authors: Fahad Almehmadi, Abdullah Alrajhi, Bader K. Alaslab, Abdullah A. Al Qurashi, Hattan A. Hassani

Abstract:

Arrhythmogenic Right Ventricular Dysplasia/Cardiomyopathy (ARVD/C) is an uncommon, inheritable cardiac disorder characterized by the progressive substitution of cardiac myocytes by fibro-fatty tissue. This pathologic substitution predisposes patients to ventricular arrhythmias and right ventricular failure. The underlying genetic defect predominantly involves genes encoding desmosome proteins, particularly plakophilin-2 (PKP2). These aberrations lead to impaired cell adhesion, heightening the susceptibility to fibro-fatty scarring under conditions of mechanical stress. ARVD/C primarily affects the right ventricle, but it can also compromise the left ventricle, potentially leading to biventricular heart failure. Clinical presentations vary, spanning from asymptomatic individuals to those experiencing palpitations, syncopal episodes, and, in severe instances, sudden cardiac death. The establishment of diagnostic criteria specifically tailored to ARVD/C significantly aids in its accurate diagnosis. Nevertheless, the task of early diagnosis is complicated by the disease's frequently asymptomatic initial stages and the overall rarity of ARVD/C cases reported globally. In some cases, as exemplified by the adult female patient in this report, the disease may advance to terminal stages, rendering therapies like ventricular tachycardia (VT) ablation ineffective. This case underlines the necessity for increased awareness and understanding of ARVD/C to aid in its early detection and management. Through such efforts, we aim to decrease the morbidity and mortality associated with this challenging cardiac disorder.

Keywords: ARVD/C, cardiology, interventional cardiology, cardiac electrophysiology

Procedia PDF Downloads 63
688 The Counselling Practice of School Social Workers in Swedish Elementary Schools - A Focus Group Study

Authors: Kjellgren Maria, Lilliehorn Sara, Markström Urban

Abstract:

This article describes the counselling practice of school social workers (SSWs) with individual children. SSWs work in the school system's pupil health team, whose primary task is health promotion and prevention. The work of SSWs is about helping children and adolescents who, for various reasons, suffer from mental ill-health, school absenteeism, or stress that makes them unable to achieve their intended goals. SSWs preferably meet these children in individual counselling sessions. The aim of this article is to describe and analyse SSWs' experience of counselling with children and to examine the characteristics of counselling practice. The data collection was conducted through four semi-structured focus group interviews with a total of 22 SSWs in four different regions in Sweden. SSWs provide counselling to children in order to bring about improved feelings or behavioural changes. It can be noted that SSWs put emphasis on both the counselling process and the alliance with the child. The interviews showed a common practice among SSWs regarding the structure of the counselling sessions, with certain steps and approaches being employed. However, the specific interventions differed and were characterised by an eclectic standpoint in which SSWs utilise a broad repertoire of therapeutic schools and techniques. Furthermore, a relational perspective emerged as the most prominent focus for the SSWs, recurring throughout the material. We believe that SSWs could benefit from theoretical perspectives on the 'contextual model' and 'attachment theory' as 'models of the mind'. Being emotionally close to the child and being able to follow their development requires a lot from SSWs, as both professional caregivers and as "safe havens".

Keywords: school social counselling, school social workers, contextual model, attachment theory

Procedia PDF Downloads 134
687 Object-Based Image Analysis for Gully-Affected Area Detection in the Hilly Loess Plateau Region of China Using Unmanned Aerial Vehicle

Authors: Hu Ding, Kai Liu, Guoan Tang

Abstract:

The Chinese Loess Plateau suffers from serious gully erosion induced by natural and human causes. Gully feature detection, including the gully-affected area and its two-dimensional parameters (length, width, area, etc.), is a significant task not only for researchers but also for policy-makers. This study aims at gully-affected area detection in three catchments of the Chinese Loess Plateau, selected in Changwu, Ansai, and Suide, by using an unmanned aerial vehicle (UAV). The methodology includes a sequence of UAV data generation, image segmentation, feature calculation and selection, and random forest classification. Two experiments were conducted to investigate the influences of the segmentation strategy and feature selection. Results showed that vertical and horizontal root-mean-square errors were below 0.5 and 0.2 m, respectively, which is ideal for the Loess Plateau region. The segmentation strategy adopted in this paper, which considers topographic information, and the optimal parameter combination can improve the segmentation results. Besides, the overall extraction accuracies achieved in Changwu, Ansai, and Suide were 84.62%, 86.46%, and 93.06%, respectively, which indicates that the proposed method for detecting the gully-affected area is more objective and effective than traditional methods. This study demonstrated that UAVs can bridge the gap between field measurement and satellite-based remote sensing, striking a balance between resolution and efficiency for catchment-scale gully erosion research.
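The random forest stage can be pictured, in miniature, as an ensemble of decision rules voting on each segment. The sketch below is a toy stand-in (a real implementation would use a library such as scikit-learn's RandomForestClassifier); the two features, their values, and all thresholds are invented for illustration.

```python
# Toy stand-in for the random-forest step: decision stumps vote on whether
# an image segment is gully-affected, from two illustrative features
# (mean slope, texture variance). All values here are invented.
def make_stump(feature_index, threshold):
    # A stump predicts "gully-affected" (1) when its feature exceeds a threshold.
    return lambda features: 1 if features[feature_index] > threshold else 0

def ensemble_predict(stumps, features):
    # Majority vote across all stumps, as a random forest votes across trees.
    votes = sum(stump(features) for stump in stumps)
    return 1 if votes * 2 > len(stumps) else 0

# Three stumps over (mean_slope, texture_variance) feature pairs.
stumps = [make_stump(0, 25.0), make_stump(0, 30.0), make_stump(1, 0.5)]

steep_rough = (35.0, 0.9)   # steep, rough segment
flat_smooth = (10.0, 0.1)   # flat, smooth segment
print(ensemble_predict(stumps, steep_rough))  # 1 (gully-affected)
print(ensemble_predict(stumps, flat_smooth))  # 0 (intact)
```

In the study, the features come from the segmentation and feature-selection steps, and the forest is trained on labelled segments rather than hand-set thresholds.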

Keywords: unmanned aerial vehicle (UAV), object-based image analysis, gully erosion, gully-affected area, Loess Plateau, random forest

Procedia PDF Downloads 218
686 Optimization and Energy Management of Hybrid Standalone Energy System

Authors: T. M. Tawfik, M. A. Badr, E. Y. El-Kady, O. E. Abdellatif

Abstract:

Electric power shortage is a serious problem in remote rural communities in Egypt. Over the past few years, the electrification of remote communities, including efficient on-site utilization of energy resources, has made considerable progress. Remote communities are usually fed from diesel generator (DG) networks because they need reliable energy and cheap fresh water. The main objective of this paper is to design an optimal, economic power supply from a hybrid standalone energy system (HSES) as an alternative energy source. It covers the energy requirements of a reverse osmosis desalination unit (DU) located at the National Research Centre farm in Noubarya, Egypt. The proposed system consists of PV panels, wind turbines (WT), batteries, and a DG as a backup, supplying a DU load of 105.6 kWh/day with a 6.6 kW peak load, operating 16 hours a day. The optimization objective for the HSES is to select the suitable size of each system component and the control strategy that together provide a reliable, efficient, and cost-effective system, using net present cost (NPC) as the criterion. The harmonization of different energy sources, energy storage, and load requirements is a difficult and challenging task. Thus, the performance of the various available configurations is investigated economically and technically using the iHOGA software, which is based on a genetic algorithm (GA). The achieved optimum configuration is further modified by optimizing the energy extracted from renewable sources. Effective minimization of the energy used to charge the battery ensures that most of the generated energy directly supplies the demand, increasing the utilization of the generated energy.
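The NPC criterion itself is straightforward to state: initial capital plus the discounted stream of annual operating costs over the project life. A minimal sketch, with all cost figures and the discount rate assumed purely for illustration:

```python
# Minimal sketch of the net present cost (NPC) criterion used to compare
# hybrid system configurations. The capital costs, annual costs, and
# discount rate below are illustrative assumptions, not values from the study.
def net_present_cost(capital, annual_cost, discount_rate, lifetime_years):
    """NPC = initial capital + discounted sum of annual O&M/fuel costs."""
    npc = capital
    for year in range(1, lifetime_years + 1):
        npc += annual_cost / (1.0 + discount_rate) ** year
    return npc

# Compare two hypothetical configurations over a 20-year project life.
pv_wind_battery = net_present_cost(capital=50_000, annual_cost=2_000,
                                   discount_rate=0.06, lifetime_years=20)
diesel_only = net_present_cost(capital=10_000, annual_cost=7_000,
                               discount_rate=0.06, lifetime_years=20)
print(pv_wind_battery < diesel_only)  # True: the renewable mix wins here
```

Tools such as iHOGA evaluate this criterion over many candidate sizings; the GA searches that configuration space rather than enumerating it.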

Keywords: energy management, hybrid system, renewable energy, remote area, optimization

Procedia PDF Downloads 199
685 Fully Automated Methods for the Detection and Segmentation of Mitochondria in Microscopy Images

Authors: Blessing Ojeme, Frederick Quinn, Russell Karls, Shannon Quinn

Abstract:

The detection and segmentation of mitochondria in fluorescence microscopy images are crucial for understanding the complex structure of the nervous system. However, the constant fission and fusion of mitochondria and image distortion in the background make the task of detection and segmentation challenging. In the literature, a number of open-source software tools and artificial intelligence (AI) methods have been described for analyzing mitochondrial images, achieving remarkable classification and quantitation results. However, the combined expertise in the medical field and in AI required to utilize these tools poses a challenge to their full adoption and use in clinical settings. Motivated by the advantages of automated methods in terms of good performance, minimal detection time, ease of implementation, and cross-platform compatibility, this study proposes a fully automated framework for the detection and segmentation of mitochondria using both image shape information and descriptive statistics. Using the low-cost, open-source Python language and OpenCV library, the algorithms are implemented in three stages: pre-processing, image binarization, and coarse-to-fine segmentation. The proposed model is validated on a mitochondrial fluorescence dataset. Ground truth labels generated using Labkit were also used to evaluate the performance of our detection and segmentation model. The study produces good detection and segmentation results and reports the challenges encountered during the image analysis of mitochondrial morphology in the fluorescence dataset. A discussion of the methods and future perspectives of fully automated frameworks concludes the paper.
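The binarization and coarse segmentation stages can be sketched on a toy grayscale grid. The study uses OpenCV on real fluorescence images; the plain global threshold and 4-connectivity below are simplifying assumptions made so the sketch stays self-contained.

```python
# Simplified sketch of binarization followed by connected-component
# extraction, standing in for the pipeline's binarization and
# coarse segmentation stages (OpenCV would be used in practice).
from collections import deque

def binarize(image, threshold):
    # Global threshold: foreground (1) where intensity exceeds the threshold.
    return [[1 if px > threshold else 0 for px in row] for row in image]

def connected_components(mask):
    """Label 4-connected foreground regions; returns a list of pixel sets."""
    h, w = len(mask), len(mask[0])
    seen, components = set(), []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and (y, x) not in seen:
                comp, queue = set(), deque([(y, x)])
                seen.add((y, x))
                while queue:  # breadth-first flood fill
                    cy, cx = queue.popleft()
                    comp.add((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] \
                                and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                components.append(comp)
    return components

toy = [[0, 200, 0,   0],
       [0, 180, 0, 220],
       [0,   0, 0, 210]]
blobs = connected_components(binarize(toy, 128))
print(len(blobs))  # 2 candidate mitochondria
```

Each pixel set would then be refined in the fine segmentation stage using shape information and descriptive statistics.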

Keywords: 2D, binarization, CLAHE, detection, fluorescence microscopy, mitochondria, segmentation

Procedia PDF Downloads 357
684 [Keynote Talk]: Mathematical and Numerical Modelling of the Cardiovascular System: Macroscale, Mesoscale and Microscale Applications

Authors: Aymen Laadhari

Abstract:

The cardiovascular system is centered on the heart and is characterized by a very complex structure with different physical scales in space (e.g. micrometers for erythrocytes and centimeters for organs) and time (e.g. milliseconds for human brain activity and several years for development of some pathologies). The development and numerical implementation of mathematical models of the cardiovascular system is a tremendously challenging topic at the theoretical and computational levels, inducing consequently a growing interest over the past decade. The accurate computational investigations in both healthy and pathological cases of processes related to the functioning of the human cardiovascular system can be of great potential in tackling several problems of clinical relevance and in improving the diagnosis of specific diseases. In this talk, we focus on the specific task of simulating three particular phenomena related to the cardiovascular system on the macroscopic, mesoscopic and microscopic scales, respectively. Namely, we develop numerical methodologies tailored for the simulation of (i) the haemodynamics (i.e., fluid mechanics of blood) in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets, (ii) the hyperelastic anisotropic behaviour of cardiomyocytes and the influence of calcium concentrations on the contraction of single cells, and (iii) the dynamics of red blood cells in microvasculature. For each problem, we present an appropriate fully Eulerian finite element methodology. We report several numerical examples to address in detail the relevance of the mathematical models in terms of physiological meaning and to illustrate the accuracy and efficiency of the numerical methods.

Keywords: finite element method, cardiovascular system, Eulerian framework, haemodynamics, heart valve, cardiomyocyte, red blood cell

Procedia PDF Downloads 252
683 Application of New Sprouted Wheat Brine for Delicatessen Products From Horse Meat, Beef and Pork

Authors: Gulmira Kenenbay, Urishbay Chomanov, Aruzhan Shoman, Rabiga Kassimbek

Abstract:

The main task of the meat-processing industry is the production of meat products, the main source of animal protein ensuring the vital activity of the human body, in the required volumes, of high quality, and in a diverse assortment. Providing the population with high-quality food products that are biologically complete, balanced in the composition of basic nutrients, and enriched with targeted physiologically active components is one of the highest-priority scientific and technical problems to be solved. In this regard, a formulation of a new brine from sprouted wheat has been developed for meat delicacies from horse meat, beef, and pork. The new brine contains flavour-aromatic ingredients, juice of germinated wheat, and vegetable juice. The viscosity of horse meat, beef, and pork was studied during massaging. Thermodynamic indices, water activity, and the binding energy of moisture in horse meat, beef, and pork with application of the new brine were investigated. A recipe for meat products with vegetable additives has been developed. Organoleptic evaluation of the meat products was carried out, and the physicochemical parameters of the meat products with vegetable additives were measured. Analysis of the obtained data shows that the values of the aw index (water activity) and the binding energy of moisture in the experimental samples of meat products are higher than in the control samples. The investigations established that, with increasing water activity and binding energy of moisture, the tenderness of the ready meat delicacies increases with the use of the new brine.

Keywords: compounding, functional products, delicatessen products, brine, vegetable additives

Procedia PDF Downloads 178
682 Evidence Theory Based Emergency Multi-Attribute Group Decision-Making: Application in Facility Location Problem

Authors: Bidzina Matsaberidze

Abstract:

It is known that, in emergency situations, multi-attribute group decision-making (MAGDM) models are characterized by insufficient objective data and a lack of time to respond to the task. Evidence theory is an effective tool for describing such incomplete information in decision-making models when the expert and his knowledge are involved in the estimation of the MAGDM parameters. We consider an emergency decision-making model in which expert assessments on humanitarian aid distribution centers (HADCs) are represented as q-rung orthopair fuzzy numbers, and the data structure is described within the theory of bodies of evidence. Based on focal probability construction and experts' evaluations, an objective function, the distribution centers' selection ranking index, is constructed. Our approach for solving the constructed bicriteria partitioning problem consists of two phases. In the first phase, based on the covering matrix, we generate a matrix whose columns allow us to find all possible partitionings of the HADCs with the service centers. Some constraints are also taken into consideration while generating the matrix. In the second phase, based on the matrix and using our exact algorithm, we find the partitionings (allocations of the HADCs to the centers) which correspond to the Pareto-optimal solutions. As an illustration of the obtained results, a numerical example is given for the facility location-selection problem.
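The Pareto screening in the second phase can be illustrated independently of the partitioning machinery: among candidate partitionings scored on the two (minimized) criteria, keep those not dominated by any other candidate. The candidate names and cost pairs below are invented for illustration.

```python
# Sketch of Pareto-optimal screening for a bicriteria problem.
# candidates: list of (name, (cost1, cost2)); both criteria are minimized.
def pareto_optimal(candidates):
    def dominates(a, b):
        # a dominates b: no worse on every criterion, strictly better on one.
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))
    return [(name, costs) for name, costs in candidates
            if not any(dominates(other, costs)
                       for _, other in candidates if other != costs)]

partitionings = [
    ("P1", (3.0, 9.0)),
    ("P2", (4.0, 7.0)),
    ("P3", (5.0, 8.0)),   # dominated by P2 on both criteria
    ("P4", (8.0, 2.0)),
]
front = [name for name, _ in pareto_optimal(partitionings)]
print(front)  # ['P1', 'P2', 'P4']
```

The exact algorithm in the paper generates the feasible partitionings from the covering matrix; this sketch only shows the final dominance filter.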

Keywords: emergency MAGDM, q-rung orthopair fuzzy sets, evidence theory, HADC, facility location problem, multi-objective combinatorial optimization problem, Pareto-optimal solutions

Procedia PDF Downloads 92
681 The Utility and the Consequences of Counter Terrorism Financing

Authors: Fatemah Alzubairi

Abstract:

Terrorism financing is a theme that has evolved dramatically post-9/11. Supra-national bodies, above all the UN Security Council and the Financial Action Task Force (FATF), have established an executive-like mechanism that allows blacklisting individuals and groups, freezing their funds, and restricting their travel, all of which have become part of states' anti-terrorism frameworks. A number of problems arise from building counter-terrorism measures on the foundation of a vague definition of terrorism. This paper examines the utility and consequences of countering terrorism financing in view of the lack of an international definition of terrorism. The main problem with national and international anti-terrorism legislation is the lack of a clear, objective definition of terrorism. Most, if not all, national laws are broad and vague. Determining what constitutes terrorism remains the crucial underpinning of any successful discussion of counter-terrorism, and of the future success of counter-terrorist measures. This paper focuses on the legal and political consequences of equalizing the treatment of violent terrorist crimes, such as bombing, with non-violent terrorism-related crimes, such as funding terrorist groups. While both sorts of acts require criminalization, treating them equally risks wrongfully or unfairly condemning innocent people who have associated with 'terrorists' but are not involved in terrorist activities. This paper examines whether global obligations to counter terrorism financing focus on controlling terrorist groups more than terrorist activities. It also examines the utility of the obligations adopted by the UN Security Council and the FATF, and whether they serve global security, or whether their utility is largely restricted to Western security, with little attention paid to the unique needs and demands of other regions.

Keywords: counter-terrorism, definition of terrorism, FATF, security, terrorism financing, UN Security Council

Procedia PDF Downloads 324
680 Exploring Paper Mill Sludge and Sugarcane Bagasse as Carrier Matrix in Solid State Fermentation for Carotenoid Pigment Production by Planococcus sp. TRC1

Authors: Subhasree Majumdar, Sovan Dey, Sayari Mukherjee, Sourav Dutta, Dalia Dasgupta Mandal

Abstract:

Bacterial isolates from the Planococcus genus are known for the production of a yellowish-orange pigment that belongs to the carotenoid family. These pigments are of immense pharmacological importance as antioxidant, anticancer, and eye- and liver-protective agents. The production of this pigment in a cost-effective manner is a challenging task. The present study explored paper mill sludge (PMS), a solid lignocellulosic waste generated in large quantities by the pulp and paper industry, as a substrate for carotenoid pigment production by Planococcus sp. TRC1. PMS was compared in terms of efficacy with sugarcane bagasse, a highly explored substrate for valuable product generation via solid state fermentation. The results showed that both biomasses yielded the highest carotenoid levels at 48 hours of incubation: 31.6 mg/g for PMS and 42.1 mg/g for bagasse. Compositional analysis of both biomasses showed reductions in lignin, hemicellulose, and cellulose content of 41%, 15%, and 1% for PMS and 38%, 25%, and 6% for sugarcane bagasse after 72 hours of incubation. Structural changes in the biomasses were examined by FT-IR, FESEM, and XRD, which further confirmed the modification of the solid biomasses by the bacterial isolate. This study revealed the potential of PMS to act as a cheap substrate for carotenoid pigment production by Planococcus sp. TRC1, as it showed significant production in comparison to sugarcane bagasse, which gave only 1.3-fold higher production than PMS. The delignification of PMS by TRC1 during pigment production is another important finding for the reuse of this waste from the paper industry.

Keywords: carotenoid, lignocellulosic, paper mill sludge, Planococcus sp. TRC1, solid state fermentation, sugarcane bagasse

Procedia PDF Downloads 235
679 Omni-Modeler: Dynamic Learning for Pedestrian Redetection

Authors: Michael Karnes, Alper Yilmaz

Abstract:

This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges for deep neural networks (DNNs) due to the variation of pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearance or changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, as well as learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. The Omni-Modeler adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which is directly updatable as new information becomes available. Query images are identified through nearest-neighbor comparison to the learned object definitions. The study presented in this paper evaluates its performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for cross-camera pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images per individual.
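A dynamically updatable concept dictionary with nearest-neighbor querying can be sketched as follows. Real feature vectors would come from the pre-trained DNN encoder; the 2-D vectors and pedestrian names here are purely illustrative.

```python
# Minimal sketch, in the spirit of the Omni-Modeler's knowledge domain:
# a dictionary of concept definitions that can grow and shrink as
# individuals enter and leave the scene, queried by nearest neighbor.
import math

class ConceptDictionary:
    def __init__(self):
        self.definitions = {}  # concept name -> list of feature vectors

    def add_examples(self, name, vectors):
        # Few-shot update: append new example vectors for a concept.
        self.definitions.setdefault(name, []).extend(vectors)

    def forget(self, name):
        # Drop a concept, e.g. a pedestrian who has left the scene.
        self.definitions.pop(name, None)

    def query(self, vector):
        """Return the concept whose closest stored example is nearest."""
        return min(self.definitions,
                   key=lambda name: min(math.dist(v, vector)
                                        for v in self.definitions[name]))

gallery = ConceptDictionary()
gallery.add_examples("pedestrian_A", [(0.9, 0.1), (0.8, 0.2)])
gallery.add_examples("pedestrian_B", [(0.1, 0.9)])
print(gallery.query((0.85, 0.15)))  # pedestrian_A
```

The actual system compares learned encodings rather than raw coordinates, but the add/forget/query life cycle is the part that distinguishes this approach from a fixed-class DNN.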

Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition

Procedia PDF Downloads 76
678 Integrating Knowledge Distillation of Multiple Strategies

Authors: Min Jindong, Wang Mingxia

Abstract:

With the widespread use of artificial intelligence in everyday life, computer vision, especially deep convolutional neural network models, has developed rapidly. As real-world visual target detection tasks grow more complex and recognition accuracy improves, target detection network models have also become very large. Huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of model inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, comprehensively transferring the knowledge contained in the complex model to a lightweight model. Unlike traditional knowledge distillation methods, we propose a novel knowledge distillation that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for target detection, the soft-target output of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the teacher network's hidden layers are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to bridge the large gap between them. Finally, this paper adds an exploration module to the traditional teacher-student knowledge distillation model, so that the student network not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics.
Comprehensive experiments in this paper using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed new network model achieves substantial improvements in speed and accuracy performance.
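The soft-target component of the transfer can be illustrated with a temperature-scaled softmax and a cross-entropy loss. This sketch covers only one of M-KD's knowledge sources; the layer-relation and attention-map terms described above are omitted, and the function names and logit values are illustrative.

```python
import math

def softmax_with_temperature(logits, t):
    """Softened class probabilities: a higher temperature t flattens the
    distribution, exposing the teacher's relative confidence in the
    non-target classes ('dark knowledge')."""
    scaled = [z / t for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, t):
    """Cross-entropy of the student's soft predictions against the
    teacher's soft targets; minimized when the student matches the
    teacher's softened distribution."""
    p = softmax_with_temperature(teacher_logits, t)
    q = softmax_with_temperature(student_logits, t)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [6.0, 2.0, 1.0]
student = [4.0, 2.5, 0.5]
print(distillation_loss(teacher, student, t=4.0))
```

In practice, this term is weighted against the ordinary hard-label loss; the temperature and weighting are among the "distillation parameter configurations" the paper's experiments vary.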

Keywords: object detection, knowledge distillation, convolutional network, model compression

Procedia PDF Downloads 278
677 A Pilot Study on Integration of Simulation in the Nursing Educational Program: Hybrid Simulation

Authors: Vesile Unver, Tulay Basak, Hatice Ayhan, Ilknur Cinar, Emine Iyigun, Nuran Tosun

Abstract:

The aim of this study is to analyze the effects of hybrid simulation, in which standardized patients and task trainers are employed simultaneously. For instance, to teach intravenous (IV) procedures, standardized patients and IV arm models are used together. The study was designed as quasi-experimental research. Before implementation, ethical approval was obtained from the local ethics committee, and administrative permission was granted by the nursing school. The population of the study comprised second-year nursing students (n=77). The participants were selected through a simple random sampling technique, and a total of 39 nursing students were included. The views of the participants were collected through a 12-item feedback form developed by the authors and the Patient Intervention Self-Confidence/Competence Scale. Participants reported advantages of the hybrid simulation practice, including developing connections between the simulated scenario and real-life situations in clinical conditions, and recognizing the need to learn more about clinical practice. All stated that the implementation was very useful for them. They also reported three major gains: improvement of critical thinking skills (94.7%), improvement of decision-making skills (97.3%), and feeling like a nurse (92.1%). The total mean score of the participants on the Patient Intervention Self-Confidence/Competence Scale was 75.23±7.76. The findings suggest that hybrid simulation has positive effects on the integration of theoretical and practical activities before clinical activities for nursing students.

Keywords: hybrid simulation, clinical practice, nursing education, nursing students

Procedia PDF Downloads 292
676 An Efficient Machine Learning Model to Detect Metastatic Cancer in Pathology Scans Using Principal Component Analysis Algorithm, Genetic Algorithm, and Classification Algorithms

Authors: Bliss Singhal

Abstract:

Machine learning (ML) is a branch of Artificial Intelligence (AI) in which computers analyze data and find patterns. The study focuses on the detection of metastatic cancer using ML. Metastatic cancer is the stage at which cancer has spread to other parts of the body; it is the cause of approximately 90% of cancer-related deaths. Normally, pathologists spend hours each day manually classifying tumors as benign or malignant. This tedious task contributes to metastases being mislabeled over 60% of the time and underscores the impact of human error and other inefficiencies. ML is a good candidate to improve the correct identification of metastatic cancer, potentially saving thousands of lives, and can also improve the speed and efficiency of the process, requiring fewer resources and less time. So far, deep learning has been the AI methodology used in research to detect cancer. This study takes a novel approach, determining the potential of combining preprocessing algorithms with classification algorithms to detect metastatic cancer. The study used two preprocessing algorithms, principal component analysis (PCA) and a genetic algorithm, to reduce the dimensionality of the dataset, and then used three classification algorithms, logistic regression, decision tree classifier, and k-nearest neighbors, to detect metastatic cancer in the pathology scans. The highest accuracy of 71.14% was produced by the ML pipeline comprising PCA, the genetic algorithm, and the k-nearest neighbors algorithm, suggesting that preprocessing and classification algorithms have great potential for detecting metastatic cancer.
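The final classification stage can be sketched as a plain k-nearest-neighbors vote. This is an illustrative stand-in only: the PCA and genetic-algorithm preprocessing steps are omitted, and the toy 2-D features below are not real pathology-scan features.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    samples under Euclidean distance. `train` is a list of
    (feature_vector, label) pairs."""
    neighbors = sorted(train, key=lambda vl: math.dist(vl[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy stand-in for dimensionality-reduced scan features after PCA/GA.
train = [
    ([0.10, 0.20], "benign"), ([0.20, 0.10], "benign"), ([0.15, 0.25], "benign"),
    ([0.90, 0.80], "malignant"), ([0.80, 0.90], "malignant"), ([0.85, 0.95], "malignant"),
]
print(knn_predict(train, [0.88, 0.85], k=3))  # -> malignant
```

In the paper's pipeline, the vectors fed to this stage would already have been reduced by PCA and feature-selected by the genetic algorithm, which is what makes the kNN step tractable on high-dimensional scans.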

Keywords: breast cancer, principal component analysis, genetic algorithm, k-nearest neighbors, decision tree classifier, logistic regression

Procedia PDF Downloads 82
675 The Residual Effects of Special Merchandising Sections on Consumers' Shopping Behavior

Authors: Shih-Ching Wang, Mark Lang

Abstract:

This paper examines the secondary effects and consequences of special displays on subsequent shopping behavior. Special displays are studied as a prominent form of in-store or shopper marketing activity. Two experiments are performed using special value and special quality-oriented displays in an online simulated store environment. The impact of exposure to special displays on mindsets and resulting product choices are tested in a shopping task. Impact on store image is also tested. The experiments find that special displays do trigger shopping mindsets that affect product choices and shopping basket composition and value. There are intended and unintended positive and negative effects found. Special value displays improve store price image but trigger a price sensitive shopping mindset that causes more lower-priced items to be purchased, lowering total basket dollar value. Special natural food displays improve store quality image and trigger a quality-oriented mindset that causes fewer lower-priced items to be purchased, increasing total basket dollar value. These findings extend the theories of product categorization, mind-sets, and price sensitivity found in communication research into the retail store environment. Findings also warn retailers to consider the total effects and consequences of special displays when designing and executing in-store or shopper marketing activity.

Keywords: special displays, mindset, shopping behavior, price consciousness, product categorization, store image

Procedia PDF Downloads 283
674 'I Mean' in Teacher Questioning Sequences in Post-Task Discussions: A Conversation Analytic Study

Authors: Derya Duran, Christine Jacknick

Abstract:

Despite a growing body of research on classroom interaction, especially in language classrooms, much remains to be discovered about how interaction is organized in higher education settings. This study investigates how the discourse marker 'I mean' in teacher questioning turns functions as a resource to promote student participation as well as to enhance collective understanding in whole-class discussions. The paper takes a conversation analytic perspective, drawing on 30 hours of video recordings of classroom interaction at an English as a medium of instruction university in Turkey. Two content classrooms (i.e., Guidance) were observed during an academic term. The course was offered to fourth-year students (n=78) in the Faculty of Education; students were majoring in different subjects (i.e., Early Childhood Education, Foreign Language Education, Mathematics Education). Results of the study demonstrate the multi-functionality of the discourse marker 'I mean' in teacher questioning turns. In English as a medium of instruction classrooms, where confusion can easily arise, we found that 'I mean' is primarily used to signal upcoming adjustments. More specifically, it is employed for a variety of interactional purposes, such as elaboration, clarification, specification, reformulation, and reference to the instructional activity. The study sheds light on the multiplicity of functions of the discourse marker in academic interactions and uncovers how certain linguistic resources serve the organization of repair, such as the maintenance of understanding in classroom interaction. In doing so, it also shows how participation is routinely enacted in shared interactional events through linguistic resources.

Keywords: conversation analysis, discourse marker, English as a medium of instruction, repair

Procedia PDF Downloads 161
673 Investigating Complement Clause Choice in Written Educated Nigerian English (ENE)

Authors: Juliet Udoudom

Abstract:

Inappropriate complement selection constitutes one of the major features of non-standard complementation in the sentence constructions of Nigerian users of English. This paper investigates complement clause choice in Written Educated Nigerian English (ENE) and offers some results. It aims at determining preferred and dispreferred patterns of complement clause selection with respect to verb heads in English by selected Nigerian users of English. The complementation data analyzed in this investigation were obtained from experimental tasks designed to elicit complement categories of verb, noun, adjective, and prepositional heads in English. Insights from Government-Binding relations were employed in analyzing the data, which comprised responses from one hundred subjects to a picture elicitation exercise, a grammaticality judgement test, and a free composition task. The findings indicate a general tendency for clausal complements (CPs) introduced by the complementizer that to be preferred by the subjects studied. Of the 235 tokens of clausal complements in our corpus, 128 (54.46%) were CPs headed by that, while whether- and if-clauses accounted for 31.07% and 8.94%, respectively. The complement clause type with the lowest incidence of choice was the CP headed by the complementiser for, at 5.53%. Further findings indicate that the semantic features of the relevant embedding verb heads were not taken into consideration in the choice of complementisers introducing the respective complement clauses; hence the that-clause was chosen to complement verbs like prefer. In addition, the dispreferred choice of the for-clause is explicable by the fact that the respondents regard ‘for’ as a preposition, not a complementiser.

Keywords: complement, complement clause, complement selection, complementisers, government-binding

Procedia PDF Downloads 188
672 Labor Productivity and Organization Performance in Specialty Trade Construction: The Moderating Effect of Safety

Authors: Shalini Priyadarshini

Abstract:

The notion of performance measurement has held great appeal for the industry and research communities alike. This is also true for the construction sector, and some propose that performance measurement and productivity analysis are two separate management functions, where productivity is a subset of performance, the latter requiring comprehensive analysis of comparable factors. Labor productivity is considered one of the best indicators of production efficiency. The construction industry continues to account for a disproportionate share of injuries and illnesses despite adopting several technological and organizational interventions that promote worker safety. Specialty trades contractors typically complete a large fraction of the work on any construction project, but an insufficient body of work exists that addresses subcontractor safety and productivity issues. A literature review revealed a possible relationship between productivity, safety, and other factors, and their links to project, organizational, task, and industry performance. This research posits that there is an association between productivity and performance at both the project and organizational levels in the construction industry. Moreover, prior exploration of the importance of safety within the performance-productivity framework has been anecdotal at best. Using a structured questionnaire survey and organization- and project-level data, this study, which combines cross-sectional and longitudinal research designs, addresses the identified research gap and models the relationship between productivity, safety, and performance with a focus on specialty trades in the construction sector. Statistical analysis is used to establish correlations between the variables of interest. This research identifies the need for developing and maintaining productivity and safety logs for smaller businesses. Future studies can be designed to establish causal relationships between these variables.

Keywords: construction, safety, productivity, performance, specialty trades

Procedia PDF Downloads 278
671 Extreme Value Theory Applied in Reliability Analysis: Case Study of Diesel Generator Fans

Authors: Jelena Vucicevic

Abstract:

Reliability analysis is a very important task in many areas of work. In any industry, it is crucial for maintenance, efficiency, safety, and monetary costs. There are established ways to calculate reliability, unreliability, failure density, and failure rate. In this paper, the reliability of diesel generator fans was calculated through Extreme Value Theory. Extreme Value Theory is not widely used in engineering; its usage is better known in areas such as hydrology, meteorology, and finance. The significance of this theory lies in the fact that, unlike other statistical methods, it focuses on rare and extreme values rather than on averages. It should be noted that the theory is not designed exclusively for extreme events, but for extreme values in any event. Therefore, this is a good opportunity to apply the theory and test whether it can be applied in this situation. The significance of the work is the calculation of time to failure, or reliability, in a new way, using statistics. Another advantage of this approach is that no technical details are needed; it can be applied to any part for which we need to know the time to failure, both to schedule appropriate maintenance and to maximize usage while minimizing costs. In this case, the calculations were made on diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with a higher-quality fan to prevent future failures. The results of this method show the approximate time for which the fans will work as they should, and the probability of the fans working longer than a certain estimated time. Extreme Value Theory can be applied not only to rare and extreme events, but to any event that has values which can be considered extreme.
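As one concrete illustration, failure times are often modeled with the Weibull distribution, the type-III extreme-value distribution that arises for minima (weakest-link failures) in reliability work. The parameters below are invented for illustration and are not the paper's fitted values.

```python
import math

def weibull_reliability(t, shape, scale):
    """R(t) = exp(-(t/scale)**shape): the probability that a component
    survives beyond time t under a Weibull time-to-failure model.
    At t == scale, reliability is always exp(-1), about 36.8%."""
    return math.exp(-((t / scale) ** shape))

# Illustrative parameters only (scale in operating hours).
shape, scale = 1.5, 8000.0
for hours in (1000, 4000, 8000):
    print(hours, round(weibull_reliability(hours, shape, scale), 3))
```

Fitting `shape` and `scale` to the recorded fan failure times would yield exactly the two quantities the abstract describes: the time for which the fans can be expected to work, and the probability of surviving past a chosen threshold.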

Keywords: extreme value theory, lifetime, reliability analysis, statistic, time to failure

Procedia PDF Downloads 328
670 Automated Fact-Checking by Incorporating Contextual Knowledge and Multi-Faceted Search

Authors: Wenbo Wang, Yi-Fang Brook Wu

Abstract:

The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study introduces a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context from general-purpose, comprehensive, and authoritative data; 2) developing a search function to automatically select relevant, new, and credible references; 3) focusing on the parts of the representations of a claim and its references that are most relevant to the fact-checking task. The experimental results demonstrate that 1) augmenting the representations of claims and references through the use of a knowledge base, combined with the multi-head attention technique, improves fact-checking performance; and 2) SAC with automatically selected references outperforms existing fact-checking approaches that use manually selected references. Future directions of this study include I) exploring knowledge graphs in Wikidata to dynamically augment the representations of claims and references without introducing too much noise, and II) exploring semantic relations in claims and references to further enhance fact-checking.
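The automatic reference-selection step can be sketched as similarity ranking over vector representations. This is a hypothetical simplification: SAC's actual search also weighs recency and credibility, and its representations come from a knowledge-base-augmented encoder rather than the toy vectors used here.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_references(claim_vec, references, top_k=2):
    """Rank candidate references by similarity to the claim vector and
    keep the top_k most relevant. `references` is a list of
    (name, vector) pairs; names and vectors below are invented."""
    ranked = sorted(references, key=lambda r: cosine(claim_vec, r[1]), reverse=True)
    return [name for name, _ in ranked[:top_k]]

claim = [0.9, 0.1, 0.3]
refs = [("encyclopedia entry", [0.8, 0.2, 0.3]),
        ("unrelated article", [0.0, 1.0, 0.0]),
        ("news report", [0.7, 0.1, 0.4])]
print(select_references(claim, refs))
```

A real pipeline would attend over the selected references jointly with the claim (the multi-head attention step) before predicting veracity.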

Keywords: fact checking, claim verification, deep learning, natural language processing

Procedia PDF Downloads 62
669 Device-integrated Micro-thermocouples for Reliable Temperature Measurement of GaN HEMTs

Authors: Hassan Irshad Bhatti, Saravanan Yuvaraja, Xiaohang Li

Abstract:

GaN-based devices, such as high electron mobility transistors (HEMTs), offer superior characteristics for high-power, high-frequency, and high-temperature applications [1]. However, this exceptional electrical performance is compromised by undesirable self-heating effects under high-power operation [2, 3]. Among the issues caused by self-heating are current collapse, thermal runaway, and performance degradation [4, 5]. Therefore, accurate and reliable methods for measuring the temperature of individual devices on a chip are needed to monitor and control the thermal behavior of GaN-based devices [6]. Temperature measurement at the micro/nanoscale is a challenging task that requires specialized techniques such as infrared microscopy, Raman thermometry, and thermoreflectance. Recently, micro-thermocouples (MTCs) have attracted considerable attention due to their simplicity, low cost, high sensitivity, and compatibility with standard fabrication processes [7, 8]. A micro-thermocouple is a junction of two different metal thin films, which generates a Seebeck voltage related to the temperature difference between a hot and a cold zone. Integrating an MTC in a device allows local temperature to be measured with high sensitivity and accuracy [9]. This work involves the fabrication and integration of micro-thermocouples to measure the channel temperature of a GaN HEMT. Our fabricated MTC (platinum-chromium junction) has shown a sensitivity of 16.98 µV/K and can measure device channel temperature with high precision and accuracy. The temperature information obtained using this sensor can help improve GaN-based devices and provide thermal engineers with useful insights for optimizing their designs.
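The conversion from measured voltage to channel temperature follows directly from the Seebeck relation V = S·ΔT. The sketch below uses the reported 16.98 µV/K sensitivity but assumes, for simplicity, a constant Seebeck coefficient over the measured range; the cold-junction temperature and voltage values are invented for illustration.

```python
SENSITIVITY_UV_PER_K = 16.98  # Pt-Cr junction sensitivity reported in the study

def channel_temperature(v_measured_uv, t_cold_c):
    """Infer the hot-junction (device channel) temperature from the
    thermocouple voltage: V = S * (T_hot - T_cold), so
    T_hot = T_cold + V / S. Assumes S is constant over the range,
    which is a simplification of real MTC calibration."""
    return t_cold_c + v_measured_uv / SENSITIVITY_UV_PER_K

# Example: 849 uV measured with the cold junction held at 25 C.
print(round(channel_temperature(849.0, 25.0), 1))  # -> 75.0 (degrees C)
```

In practice the cold-junction temperature would come from an independent on-chip or ambient sensor, and the sensitivity would be taken from a calibration curve rather than a single constant.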

Keywords: electrical engineering, thermal engineering, power devices, semiconductors

Procedia PDF Downloads 19
668 Study of Methods to Reduce Carbon Emissions in Structural Engineering

Authors: Richard Krijnen, Alan Wang

Abstract:

As the world is aiming to reach net zero around 2050, structural engineers must begin finding solutions to contribute to this global initiative. Approximately 40% of global energy-related emissions are due to buildings and construction, and a building’s structure accounts for 50% of its embodied carbon, which indicates that structural engineers are key contributors to finding solutions to reach carbon neutrality. However, this task presents a multifaceted challenge as structural engineers must navigate technical, safety and economic considerations while striving to reduce emissions. This study reviews several options and considerations to reduce carbon emissions that structural engineers can use in their future designs without compromising the structural integrity of their proposed design. Low-carbon structures should adhere to several guiding principles. Firstly, prioritize the selection of materials with low carbon footprints, such as recyclable or alternative materials. Optimization of design and engineering methods is crucial to minimize material usage. Encouraging the use of recyclable and renewable materials reduces dependency on natural resources. Energy efficiency is another key consideration involving the design of structures to minimize energy consumption across various systems. Choosing local materials and minimizing transportation distances help in reducing carbon emissions during transport. Innovation, such as pre-fabrication and modular design or low-carbon concrete, can further cut down carbon emissions during manufacturing and construction. Collaboration among stakeholders and sharing experiences and resources are essential for advancing the development and application of low-carbon structures. This paper identifies current available tools and solutions to reduce embodied carbon in structures, which can be used as part of daily structural engineering practice.

Keywords: efficient structural design, embodied carbon, low-carbon material, sustainable structural design

Procedia PDF Downloads 41
667 Working with Children and Young People as a much Neglected Area of Education within the Social Studies Curriculum in Poland

Authors: Marta Czechowska-Bieluga

Abstract:

Social work education in Poland focuses mostly on developing competencies that address the needs of individuals and families affected by a variety of life's problems. As a result of the ageing of the Polish population, much attention is also devoted to adults, including the elderly. However, social work with children and young people is an area of education that should be given more consideration. Social work students are mostly trained to cater to the needs of families, and competencies aimed at responding to the needs of children and young people do not receive enough attention, being offered only in elective classes. This paper reviews the social work programmes offered by selected higher education institutions in Poland in terms of training aimed at helping children and young people address their life problems. The analysis conducted in this study indicates that university education for social work focuses on training professionals who will provide assistance only to adults. Due to changes in the social and political situation, including, in particular, changes in social policy implemented for the needy, it is necessary to extend this area of education to include the specificity of support for children and young people, especially in light of the emergence of new support professions within social work. For example, family assistants, whose task is to support parents in performing their roles as guardians and educators, also assist children. Therefore, it becomes necessary to equip social work professionals with competencies that include issues related to the quality of life of underage people living in families. Social work curricula should be extended to include the issues of child and young person development and the patterns governing this phase of life.

Keywords: social work education, social work programmes, social worker, university

Procedia PDF Downloads 289
666 COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes

Authors: Shahad Nagoor, Lucy Hederman, Kevin Koidl, Annalina Caputo

Abstract:

Doctors’ notes reflect their impressions, attitudes, clinical sense, and opinions about patients’ conditions and progress, as well as other information essential for daily clinical decisions. Despite their value, clinical notes are insufficiently researched within the language processing community. Automatically extracting information from unstructured text data is known to be a difficult task, as opposed to dealing with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) and machine learning techniques applied to clinician notes can assist doctors’ decision-making in the Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes like survival or mortality can be useful in influencing the judgement of clinical sentiment in ICU clinical notes. This paper introduces two contributions. First, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of notes not previously seen by clinicalBERT or Bio_Discharge_Summary_BERT. The model, which was based on clinicalBERT, achieves higher predictive accuracy (Acc 93.33%, AUC 0.98, precision 0.96). Second, we perform data augmentation using clinical contextual word embeddings based on a pre-trained clinical model to balance the samples in each class of the data (survived vs. deceased patients). Data augmentation improves the accuracy of prediction slightly (Acc 96.67%, AUC 0.98, precision 0.92).
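The class-balancing step can be illustrated with simple random oversampling. This is a deliberately crude stand-in: the paper's augmentation generates new note variants via clinical contextual word embeddings rather than duplicating samples, and the note texts below are invented.

```python
import random
from collections import Counter

def balance_by_oversampling(samples, seed=0):
    """Duplicate minority-class samples until every class has as many
    samples as the largest class. `samples` is a list of (text, label)
    pairs. Embedding-based augmentation would instead synthesize new
    variants of the minority-class notes."""
    rng = random.Random(seed)
    by_class = {}
    for text, label in samples:
        by_class.setdefault(label, []).append((text, label))
    target = max(len(items) for items in by_class.values())
    balanced = []
    for label, items in by_class.items():
        balanced.extend(items)
        balanced.extend(rng.choices(items, k=target - len(items)))
    return balanced

notes = [("note a", "survived"), ("note b", "survived"),
         ("note c", "survived"), ("note d", "deceased")]
print(Counter(label for _, label in balance_by_oversampling(notes)))
```

Balancing matters here because mortality is typically the minority outcome, and an unbalanced training set would bias the sentiment classifier toward the majority class.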

Keywords: BERT fine-tuning, clinical sentiment, COVID-19, data augmentation

Procedia PDF Downloads 206
665 Experimental Simulation Set-Up for Validating Out-Of-The-Loop Mitigation when Monitoring High Levels of Automation in Air Traffic Control

Authors: Oliver Ohneiser, Francesca De Crescenzio, Gianluca Di Flumeri, Jan Kraemer, Bruno Berberian, Sara Bagassi, Nicolina Sciaraffa, Pietro Aricò, Gianluca Borghini, Fabio Babiloni

Abstract:

An increasing degree of automation in air traffic will also change the role of the air traffic controller (ATCO). ATCOs will perform significantly more monitoring tasks than today. However, this rather passive role may lead to Out-Of-The-Loop (OOTL) effects, comprising vigilance decrement and reduced situation awareness. The project MINIMA (Mitigating Negative Impacts of Monitoring high levels of Automation) has conceived a system to control and mitigate such OOTL phenomena. To demonstrate the MINIMA concept, an experimental simulation set-up has been designed. This set-up consists of two parts: 1) a Task Environment (TE) comprising a Terminal Maneuvering Area (TMA) simulator, and 2) a Vigilance and Attention Controller (VAC) based on neurophysiological data recording, such as electroencephalography (EEG) and eye-tracking devices. The controller's current vigilance level and attention focus are measured during active work in front of the human-machine interface (HMI). The derived vigilance level and attention focus trigger adaptive automation functionalities in the TE to avoid OOTL effects. This paper describes the full-scale experimental set-up and the component development work towards it. It encompasses a pre-test whose results influenced the development of the VAC as well as the functionalities of the final TE and the two VAC sub-components.

Keywords: automation, human factors, air traffic controller, MINIMA, OOTL (Out-Of-The-Loop), EEG (Electroencephalography), HMI (Human Machine Interface)

Procedia PDF Downloads 383