Search results for: feature attribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1597

1117 Tardiness and Self-Regulation: Degree and Reason for Tardiness in Undergraduate Students in Japan

Authors: Keiko Sakai

Abstract:

In Japan, all stages of public education aim to foster a zest for life. ‘Zest’ implies solving problems by oneself, using acquired knowledge and skills, and is related to the self-regulation of metacognition. To enhance this, establishing good learning habits is important, and tardiness in undergraduate students should therefore be examined from the perspective of self-regulation. Accordingly, we focussed on self-monitoring and self-planning strategies among self-regulated learning factors to examine the causes of tardiness. This study examines the impact of self-monitoring and self-planning learning skills on the degree of and reason for tardiness in undergraduate students. A questionnaire survey was conducted, targeting undergraduate students at University X in the autumn semester of 2018. Participants were 247 students (average age 19.7 years, SD 1.9; 144 male, 101 female, 2 no answer). The survey contained the following items and measures: school year, the number of classes in the semester, degree of tardiness in the semester (subjective degree and objective number of times), active participation in and action toward schoolwork, self-planning and self-monitoring learning skills, and reason for tardiness (open-ended question). First, the relation between strategies and tardiness was examined by multiple regression. A statistically significant relationship between the self-monitoring learning strategy and the degree of subjective and objective tardiness was revealed after statistically controlling for school year and the number of classes. There was no significant relationship between the self-planning learning strategy and the degree of tardiness. These results suggest that self-monitoring skills reduce tardiness. Secondly, the relation between the self-monitoring learning strategy and the reason for tardiness was analysed, after classifying the reason for tardiness into one of seven categories: ‘overslept’, ‘illness’, ‘poor time management’, ‘traffic delays’, ‘carelessness’, ‘low motivation’, and ‘stuff to do’. Chi-square tests and Fisher’s exact tests showed a statistically significant relationship between the self-monitoring learning strategy and the frequency of ‘traffic delays’. This result implies that self-monitoring skills prevent tardiness due to traffic delays. Furthermore, there was a weak relationship between the self-monitoring learning strategy score and the reason-for-tardiness categories: when self-monitoring skill is higher, ‘overslept’ and ‘illness’ decrease, while ‘poor time management’, ‘carelessness’, and ‘low motivation’ increase. This suggests that a self-monitoring learning strategy is related to an internal causal attribution of failure and to self-management of how to prevent tardiness. These findings indicate the effectiveness of a self-monitoring learning strategy for reducing tardiness in undergraduate students.
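
A minimal sketch of the two analyses described (multiple regression controlling for school year and class count, and chi-square/Fisher tests on a reason category), assuming a pandas DataFrame with hypothetical column names; this is illustrative only, not the authors' actual code.

```python
# Illustrative sketch of the reported analyses; column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency, fisher_exact

df = pd.read_csv("tardiness_survey.csv")  # hypothetical survey export

# Multiple regression: objective tardiness on self-monitoring and self-planning,
# controlling for school year and number of classes.
model = smf.ols(
    "objective_tardiness ~ self_monitoring + self_planning + school_year + n_classes",
    data=df,
).fit()
print(model.summary())

# Chi-square test: high vs. low self-monitoring against the
# 'traffic delays' reason category (2x2 contingency table).
df["high_monitoring"] = df["self_monitoring"] >= df["self_monitoring"].median()
table = pd.crosstab(df["high_monitoring"], df["reason"] == "traffic delays")
chi2, p, dof, _ = chi2_contingency(table)
odds_ratio, p_exact = fisher_exact(table)  # Fisher's exact test for small cell counts
print(chi2, p, p_exact)
```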

Keywords: higher-education, self-monitoring, self-regulation, tardiness

Procedia PDF Downloads 125
1116 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to the low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. These models are evaluated using the external dataset for validation, and the models’ accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
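
A minimal sketch of the described transfer-learning classifier, assuming TensorFlow/Keras; the layer sizes, input resolution, and training settings are illustrative assumptions, and the autoencoder component mentioned in the abstract is omitted for brevity.

```python
# Illustrative DenseNet201 transfer-learning classifier; hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # COVID-19, normal, pneumonia

# Pre-trained DenseNet201 backbone used as a frozen feature extractor.
backbone = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
backbone.trainable = False

# A dense "head" finalizes the feature extraction and predicts the diagnosis.
model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
)
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets not shown here
```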

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 121
1115 Meitu and the Case of the AI Art Movement

Authors: Taliah Foudah, Sana Masri, Jana Al Ghamdi, Rimaz Alzaaqi

Abstract:

This research project explores the creative works of the app Meitu, which allows consumers to edit their photos and use the new and popular AI feature that turns any photo into a cartoon-like animated image with beautified enhancements. Studying this AI app demonstrates how far AI can go in developing intricate designs that closely replicate the work of the human mind. Our goal was to investigate the Meitu app by asking our audience questions about its functionality, their personal feelings about its credibility, and their beliefs about how this app will add to the future of the AI generation, both positively and negatively. Their responses were further explored by analyzing the questions and answers thoroughly and presenting the results in pie charts. Overall, it was concluded that the Meitu app is a powerful step forward for AI, replicating human intelligence and creativity to either benefit society or do the opposite.

Keywords: AI Art, Meitu, application, photo editing

Procedia PDF Downloads 60
1114 Mood Recognition Using Indian Music

Authors: Vishwa Joshi

Abstract:

The study of mood recognition in the field of music has gained a lot of momentum in recent years, with machine learning and data mining techniques and many audio features contributing considerably to analysing and identifying the relation between mood and music. In this paper, we take this idea forward and make an effort to build a system for automatic recognition of the mood underlying audio song clips by mining their audio features. We evaluated several data classification algorithms in order to learn, train and test the model describing the moods of these audio songs, and developed an open-source framework. Before classification, preprocessing and feature extraction phases are necessary for removing noise and gathering features, respectively.
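
A minimal sketch of such a pipeline (audio feature mining followed by mood classification), assuming librosa and scikit-learn; the feature set, file names, labels, and classifier choice are illustrative assumptions, not the system described in the paper.

```python
# Illustrative mood-classification pipeline; features, files and labels are assumptions.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(path):
    """Mine a fixed-length feature vector from one audio clip."""
    y, sr = librosa.load(path, duration=30)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    return np.concatenate([mfcc, chroma, np.atleast_1d(tempo)])

# Hypothetical labelled song clips (paths and mood labels are placeholders).
clips = ["song_001.wav", "song_002.wav", "song_003.wav", "song_004.wav"]
moods = ["happy", "sad", "happy", "sad"]

X = np.array([extract_features(p) for p in clips])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, moods, cv=2)  # several classifiers can be compared this way
print(scores)
```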

Keywords: music, mood, features, classification

Procedia PDF Downloads 490
1113 An Architectural Approach for the Dynamic Adaptation of Services-Based Software

Authors: Mohhamed Yassine Baroudi, Abdelkrim Benammar, Fethi Tarik Bendimerad

Abstract:

This paper proposes a software architecture for dynamic service adaptation. The services are constituted by reusable software components. The adaptation’s goal is to optimize the service as a function of its execution context. As a first step, the context takes into account only the user’s needs, but other elements will be added. A particular feature of our proposal is the profiles, which are used not only to describe the context’s elements but also the components themselves. An adapter analyzes the compatibility between all these profiles and detects the points where the profiles are not compatible. The same adapter searches for and applies the possible adaptation solutions: component customization, insertion, extraction or replacement.
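
A minimal sketch of the profile-matching idea (an adapter comparing a component profile against a context profile and choosing an adaptation action), written as hypothetical Python; the names and the selection rule are illustrative assumptions, not the architecture's actual interfaces.

```python
# Illustrative profile-compatibility check; names and rules are assumptions.
from dataclasses import dataclass, field

@dataclass
class Profile:
    """Describes either a context element or a component."""
    requirements: dict = field(default_factory=dict)  # e.g. {"screen": "small"}

    def incompatibilities(self, other: "Profile") -> list:
        """Return the keys on which the two profiles disagree."""
        return [k for k, v in self.requirements.items()
                if k in other.requirements and other.requirements[k] != v]

class Adapter:
    """Detects incompatible points and chooses an adaptation action."""
    ACTIONS = ("customize", "insert", "extract", "replace")

    def adapt(self, component: Profile, context: Profile) -> str:
        conflicts = component.incompatibilities(context)
        if not conflicts:
            return "no adaptation needed"
        # Simplistic policy: a single conflict -> customize, several -> replace.
        return "customize" if len(conflicts) == 1 else "replace"

context = Profile({"screen": "small", "bandwidth": "low"})
video_component = Profile({"screen": "large", "bandwidth": "low"})
print(Adapter().adapt(video_component, context))  # -> "customize"
```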

Keywords: adaptive service, software component, service, dynamic adaptation

Procedia PDF Downloads 286
1112 Clinicopathological Characteristics in Male Breast Cancer: A Case Series and Literature Review

Authors: Mohamed Shafi Mahboob Ali

Abstract:

Male breast cancer (MBC) is a rare entity, accounting for less than 1% of reported breast cancer cases. However, the incidence of MBC is rising steadily every year. Due to the lack of data on MBC, diagnosis and treatment are tailored to female breast cancer. MBC risk increases with age, and the disease is usually diagnosed about ten years later than female breast cancer (FBC), as its progression is slow in comparison. The most common variant of MBC is intra-ductal, and often, upon diagnosis, the disease is already at an advanced stage. The prognosis of MBC is often poor, but new treatment modalities are emerging with current knowledge and advancement. We present a series of male breast cancer cases from our center, highlighting the clinicopathological and radiological features and the treatment options.

Keywords: male, breast, cancer, clinicopathology, ultrasound, CT scan

Procedia PDF Downloads 89
1111 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana

Authors: Gautier Viaud, Paul-Henry Cournède

Abstract:

Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors to the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state-space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax with metaprogramming capabilities and exhibits high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale Greenlab model for the latter is thus presented, in which the surface area of each individual leaf can be simulated. It is assumed that the error made on the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used. Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, the data for a single individual need not be available at all times, nor do the observation times need to be the same for all individuals. This makes it possible to discard data from image analysis when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana’s growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
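
A minimal Metropolis-within-Gibbs sketch for a toy hierarchical model (individual parameters sharing a common population mean, with known variances), written in Python for illustration even though the paper's implementation is in Julia; the toy logistic growth model, priors, and proposal settings are assumptions, not the organ-scale Greenlab model.

```python
# Toy Metropolis-within-Gibbs for a hierarchical model; all settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def growth(theta, t):
    """Stand-in nonlinear growth model (logistic curve), not the Greenlab model."""
    return 10.0 / (1.0 + np.exp(-theta * (t - 10.0)))

# Simulated data: 5 individuals observed on days 1..21 with multiplicative noise.
t = np.arange(1, 22)
true_theta = rng.normal(0.5, 0.1, size=5)
y = np.array([growth(th, t) * np.exp(rng.normal(0, 0.05, t.size)) for th in true_theta])

theta = np.full(5, 0.5)          # individual parameters (Metropolis updates)
mu = 0.5                         # population mean (conjugate Gibbs update)
tau2, sigma2, mu0, s0 = 0.01, 0.05**2, 0.5, 1.0

def log_lik(th, yi):
    resid = np.log(yi) - np.log(growth(th, t))   # multiplicative (log-normal) noise
    return -0.5 * np.sum(resid**2) / sigma2

for it in range(2000):
    # Metropolis step for each individual parameter.
    for i in range(5):
        prop = theta[i] + rng.normal(0, 0.02)
        log_acc = (log_lik(prop, y[i]) - 0.5 * (prop - mu)**2 / tau2) \
                - (log_lik(theta[i], y[i]) - 0.5 * (theta[i] - mu)**2 / tau2)
        if np.log(rng.uniform()) < log_acc:
            theta[i] = prop
    # Gibbs step for the population mean (normal prior gives a normal full conditional).
    post_var = 1.0 / (5 / tau2 + 1 / s0)
    mu = rng.normal(post_var * (theta.sum() / tau2 + mu0 / s0), np.sqrt(post_var))

print(mu, theta)
```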

Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models

Procedia PDF Downloads 294
1110 An EEG-Based Scale for Comatose Patients' Vigilance State

Authors: Bechir Hbibi, Lamine Mili

Abstract:

Understanding the condition of comatose patients can be difficult, but it is crucial to their optimal treatment. Consequently, numerous scoring systems have been developed around the world to categorize patient states based on physiological assessments. Although validated and widely adopted by medical communities, these scores still present numerous limitations and obstacles. Even with the addition of further tests and extensions, these scoring systems have not been able to overcome certain limitations, and it appears unlikely that they will be able to do so in the future. On the other hand, physiological tests are not the only way to extract information about comatose patients. EEG signal analysis has helped extensively to understand the human brain and human consciousness and has been used by researchers in the classification of different levels of disease. The use of EEG in the ICU has become an urgent matter in several cases and has been recommended by medical organizations. In this field, the EEG is used to investigate epilepsy, dementia, brain injuries, and many other neurological disorders. It has recently also been used to detect pain activity in some regions of the brain, to detect stress levels, and to evaluate sleep quality. In this work, our aim was to use multifractal analysis, a very successful method for handling multifractal signals and extracting features, to establish a state-of-awareness scale for comatose patients based on their electrical brain activity. The results show that this score could be instantaneous and could overcome many of the limitations from which physiological scales suffer. Multifractal analysis stands out as a highly effective tool for characterizing non-stationary and self-similar signals, and it demonstrates strong performance in extracting the properties of fractal and multifractal data, including signals and images. As such, we leverage this method, along with other features derived from EEG recordings of comatose patients, to develop a scale that aims to accurately depict the vigilance state of patients in intensive care units and to address many of the limitations inherent in physiological scales such as the Glasgow Coma Scale (GCS) and the FOUR score. The results of applying version V0 of this approach to 30 patients with known GCS showed that the EEG-based score describes the states of vigilance similarly, but also distinguishes between the states of 8 sedated patients to whom the GCS could not be applied. Our approach could therefore show promising results with patients with disabilities, patients injected with painkillers, and other categories to whom physiological scores cannot be applied.
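
A minimal sketch of multifractal detrended fluctuation analysis (MFDFA) as a feature extractor for a single EEG channel, using only NumPy; the scales, q values, and the idea of using the spread of h(q) as a feature are common choices in the MFDFA literature and are assumptions here, not the authors' exact pipeline.

```python
# Minimal MFDFA feature extraction for one EEG channel; parameters are illustrative.
import numpy as np

def mfdfa(signal, scales, q_values, order=1):
    """Return the generalized Hurst exponents h(q) of a 1-D signal."""
    profile = np.cumsum(signal - np.mean(signal))
    h = []
    for q in q_values:
        fq = []
        for s in scales:
            n_seg = len(profile) // s
            segs = profile[: n_seg * s].reshape(n_seg, s)
            x = np.arange(s)
            # Detrend each segment with a polynomial fit; keep the residual variance.
            var = []
            for seg in segs:
                coeffs = np.polyfit(x, seg, order)
                var.append(np.mean((seg - np.polyval(coeffs, x)) ** 2))
            var = np.asarray(var)
            if q == 0:
                fq.append(np.exp(0.5 * np.mean(np.log(var))))
            else:
                fq.append(np.mean(var ** (q / 2.0)) ** (1.0 / q))
        # h(q) is the slope of log F_q(s) versus log s.
        h.append(np.polyfit(np.log(scales), np.log(fq), 1)[0])
    return np.asarray(h)

eeg = np.random.default_rng(0).standard_normal(5000)        # placeholder EEG channel
scales = np.unique(np.logspace(4, 9, 12, base=2).astype(int))
q_values = np.arange(-4, 5)
h_q = mfdfa(eeg, scales, q_values)
spread = h_q.max() - h_q.min()   # width of h(q): one candidate vigilance feature
print(h_q, spread)
```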

Keywords: coma, vigilance state, EEG, multifractal analysis, feature extraction

Procedia PDF Downloads 52
1109 Design and Implementation of an Image Based System to Enhance the Security of ATM

Authors: Seyed Nima Tayarani Bathaie

Abstract:

In this paper, an image-receiving system was designed and implemented through the optimization of object detection algorithms using Haar features. The optimized algorithm performs face detection and eye detection separately; cascading the two then yields a clear image of the user. Utilization of this feature brought about higher security by preventing fraud: services are given to the user only on condition that a clear image of his or her face has already been captured, which excludes inappropriate persons. In order to expedite processing and eliminate unnecessary computation, the input image was compressed, a motion detection function was included in the program, and the detection window size was confined.
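
A minimal sketch of Haar-cascade face and eye detection with OpenCV's bundled cascade files; the frame source, window-size limits, and resize step are illustrative assumptions rather than the system described in the paper.

```python
# Illustrative Haar-cascade face/eye detection; thresholds and sizes are assumptions.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_clear_face(frame):
    """Return True when exactly one face with two visible eyes is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Confining the detection window size (minSize/maxSize) speeds up the search.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                          minSize=(80, 80), maxSize=(400, 400))
    if len(faces) != 1:
        return False
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w], 1.1, 5)
    return len(eyes) >= 2

cap = cv2.VideoCapture(0)                  # hypothetical ATM camera stream
ok, frame = cap.read()
if ok:
    frame = cv2.resize(frame, (320, 240))  # compress the input for speed
    print("serve user" if detect_clear_face(frame) else "withhold service")
cap.release()
```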

Keywords: face detection algorithm, Haar features, security of ATM

Procedia PDF Downloads 405
1108 The Connection Between the Semiotic Theatrical System and the Aesthetic Perception

Authors: Păcurar Diana Istina

Abstract:

The indissoluble link between aesthetics and semiotics, and the harmonization and semiotic understanding of the interactions between the viewer and the object being looked at, are the basis of the practical demonstration of the importance of aesthetic perception within the theater performance. The design of a theater performance includes several structures, some considered from the beginning as art forms (i.e., the text), others being represented by simple, common objects (e.g., scenographic elements), which, if reunited, can trigger a certain aesthetic perception. The team involved in the performance delivers to the audience a series of auditory and visual signs with which they interact. It is necessary to explain some notions about the physiological support for the transformation of different types of stimuli at the level of the cerebral hemispheres. The cortex, considered the superior integration center of external and internal stimuli, permanently processes the information received, but even if this information is delivered at a constant rate, the generated response is individualized and is conditioned by a number of factors. Each changing situation represents a new opportunity for the viewer to cope with, developing feelings of different intensities that influence the generation of meanings and, therefore, the management of interactions. In this sense, aesthetic perception depends on the detection of the “correctness” of signs, the forms of which are associated with an aesthetic property. Correctness and aesthetic properties can have positive or negative values. Evaluating the emotions that generate judgment and, implicitly, aesthetic perception, whether we refer to visual or auditory emotions, involves the integration of three areas of interest: valence, arousal and context control. In this context, superior human cognitive processes (memory, interpretation, learning, attribution of meanings, etc.) help trigger the mechanism of anticipation and, no less important, the identification of error. This ability to locate a short circuit produced in a series of successive events is fundamental in the process of forming an aesthetic perception. Our main purpose in this research is to investigate the possible conditions under which aesthetic perception and its minimum content are generated by all these structures and, in particular, by interactions with forms that are not commonly considered aesthetic forms. In order to demonstrate the quantitative and qualitative importance of the categories of signs used to construct a code for reading a certain message, and also to emphasize the importance of the order in which these indices are used, we have structured a mathematical analysis that has at its core the analysis of the percentage of signs used in a theater performance.

Keywords: semiology, aesthetics, theatre semiotics, theatre performance, structure, aesthetic perception

Procedia PDF Downloads 78
1107 An 8-Bit, 100-MSPS Fully Dynamic SAR ADC for Ultra-High Speed Image Sensor

Authors: F. Rarbi, D. Dzahini, W. Uhring

Abstract:

In this paper, a dynamic and power-efficient 8-bit, 100-MSPS Successive Approximation Register (SAR) Analog-to-Digital Converter (ADC) is presented. The circuit uses a non-differential capacitive Digital-to-Analog Converter (DAC) architecture segmented by 2. The prototype is produced in a commercial 65-nm 1P7M CMOS technology with a 1.2-V supply voltage. The size of the core ADC is 208.6 x 103.6 µm². The post-layout noise simulation results feature an SNR of 46.9 dB at the Nyquist frequency, which corresponds to an effective number of bits (ENOB) of 7.5. The total power consumption of this SAR ADC is only 1.55 mW at 100 MSPS. It thus achieves a figure of merit of 85.6 fJ/step.
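
A quick check of the quoted figure of merit using the standard Walden formula, FoM = P / (2^ENOB * fs), with the numbers reported in the abstract; assuming this is the formula the authors used.

```python
# Verify the reported figure of merit: FoM = P / (2**ENOB * fs).
P = 1.55e-3        # total power in watts (1.55 mW)
ENOB = 7.5         # effective number of bits from the 46.9 dB SNR: (46.9 - 1.76) / 6.02
fs = 100e6         # sampling rate in samples per second (100 MSPS)

fom = P / (2**ENOB * fs)
print(f"{fom * 1e15:.1f} fJ/step")   # ~85.6 fJ/step, matching the abstract
```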

Keywords: CMOS analog to digital converter, dynamic comparator, image sensor application, successive approximation register

Procedia PDF Downloads 409
1106 Reflective Thinking and Experiential Learning – A Quasi-Experimental Quanti-Quali Response to Greater Diversification of Activities, Greater Integration of Student Profiles

Authors: Paulo Sérgio Ribeiro de Araújo Bogas

Abstract:

Although several studies have assumed (at least implicitly) that learners' approaches to learning develop into deeper approaches over the course of higher education, there appears to be no clear theoretical basis for this assumption and no empirical evidence. As a scientific contribution to this discussion, a pedagogical intervention of a quasi-experimental nature was developed, with a mixed methodology, evaluating the intervention within a single curricular unit of Marketing, using cases based on real brand challenges, business simulation, and customer projects. Primary and secondary experiences were incorporated in the intervention: the primary experiences are the experiential activities themselves; the secondary experiences result from the primary experience, such as reflection and discussion in work teams. A diversified learning relationship was encouraged through the various connections between the different members of the learning community. The present study concludes that, in the same context, students' responses can be described as those of students who reinforce the initial deep approach, students who maintain the initial deep approach level, and others who change from an emphasis on the deep approach to one closer to the superficial. This typology did not always confirm studies reported in the literature, namely regarding whether the initial level of deep processing would influence the superficial level and vice versa. The results of this investigation point to the inclusion of pedagogical and didactic activities that integrate different motivations and initial strategies, leading to the possible adoption of deep approaches to learning, since the intervention revealed statistically significant differences in the deep/superficial approach scores and in the experiential level. In the case of the real challenges, the categories of "attribution of meaning to what is studied" and the possibility of "contact with an aspirational context" for the students' professional future stand out. In this category, the dimensions of autonomy that will be required of them were also revealed when comparing the classroom context of real cases with the future professional context and the impact they may have on the world. Regarding the simulated practice, two categories of response stand out: on the one hand, the motivation associated with the possibility of measuring the results of the decisions taken and an awareness of oneself, and, on the other hand, the additional effort that this practice required from some of the students.

Keywords: experiential learning, higher education, mixed methods, reflective learning, marketing

Procedia PDF Downloads 76
1105 View Synthesis of Kinetic Depth Imagery for 3D Security X-Ray Imaging

Authors: O. Abusaeeda, J. P. O. Evans, D. Downes

Abstract:

We demonstrate the synthesis of intermediary views within a sequence of X-ray images that exhibit depth from motion, or the kinetic depth effect, in a visual display. Each synthetic image replaces the requirement for a linear X-ray detector array during the image acquisition process. The scale-invariant feature transform (SIFT), in combination with epipolar morphing, is employed to produce the synthetic imagery. A comparison between synthetic and ground-truth images is reported to quantify the performance of the approach. Our work is a key aspect in the development of a 3D imaging modality for the screening of luggage at airport checkpoints. This programme of research is in collaboration with the UK Home Office and the US Dept. of Homeland Security.
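
A minimal sketch of the SIFT feature-matching step that typically precedes morphing between two neighbouring X-ray views, using OpenCV; the file names are hypothetical, and the linear interpolation of matched keypoint positions shown here is a simplified stand-in for the paper's epipolar morphing.

```python
# Illustrative SIFT matching between two adjacent X-ray views; morphing is simplified.
import cv2
import numpy as np

view_a = cv2.imread("xray_view_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical files
view_b = cv2.imread("xray_view_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(view_a, None)
kp_b, des_b = sift.detectAndCompute(view_b, None)

# Ratio-test matching (Lowe's criterion) keeps only reliable correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2) if m.distance < 0.75 * n.distance]

# Linearly interpolate matched keypoint positions for an intermediate view (alpha = 0.5).
alpha = 0.5
pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
pts_mid = (1 - alpha) * pts_a + alpha * pts_b
print(f"{len(good)} correspondences; first interpolated point: {pts_mid[0]}")
```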

Keywords: X-ray, kinetic depth, KDE, view synthesis

Procedia PDF Downloads 255
1104 Market Index Trend Prediction using Deep Learning and Risk Analysis

Authors: Shervin Alaei, Reza Moradi

Abstract:

Trading in financial markets is subject to risks due to their high volatility. Here, using an LSTM neural network and some risk-based feature engineering, we developed a method that can accurately predict trends of the Tehran Stock Exchange market index a few days in advance. Our test results have shown that the proposed method, with an average prediction accuracy of more than 94%, is superior to other common machine learning algorithms. To the best of our knowledge, this is the first work incorporating deep learning and risk factors to accurately predict market trends.
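
A minimal sketch of an LSTM trend classifier over windows of risk-related features, assuming Keras; the window length, feature list, horizon, and architecture are illustrative assumptions, not the authors' model, and the placeholder data stands in for engineered features of the TSE index.

```python
# Illustrative LSTM trend classifier; window size and features are assumptions.
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES = 20, 5   # e.g. returns, rolling volatility, drawdown, volume, index level

def make_windows(features, prices):
    """Build (window, label) pairs; label 1 if the index rises over the next 3 days."""
    X, y = [], []
    for i in range(len(prices) - WINDOW - 3):
        X.append(features[i:i + WINDOW])
        y.append(1.0 if prices[i + WINDOW + 3] > prices[i + WINDOW] else 0.0)
    return np.array(X), np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # up/down trend probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder series standing in for the index level and engineered risk features.
rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100
features = rng.normal(0, 1, (500, N_FEATURES))
X, y = make_windows(features, prices)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```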

Keywords: deep learning, LSTM, trend prediction, risk management, artificial neural networks

Procedia PDF Downloads 139
1103 Reconstructing the Segmental System of Proto-Graeco-Phrygian: a Bottom-Up Approach

Authors: Aljoša Šorgo

Abstract:

Recent scholarship on Phrygian has begun to more closely examine the long-held belief that Greek and Phrygian are two very closely related languages. It is now clear that Graeco-Phrygian can be firmly postulated as a subclade of the Indo-European languages. The present paper will focus on the reconstruction of the phonological and phonetic segments of Proto-Graeco-Phrygian (= PGPh.) by providing relevant correspondence sets and reconstructing the classes of segments. The PGPh. basic vowel system consisted of ten phonemic oral vowels: */a e o ā ē ī ō ū/. The correspondences of the vowels are clear and leave little open to ambiguity. There were four resonants and two semi-vowels in PGPh.: */r l m n i̯ u̯/, which could appear in both a consonantal and a syllabic function, with the distribution between the two still being phonotactically predictable. Of note is the fact that the segments *m and *n seem to have merged when their phonotactic position would see them used in a syllabic function. Whether the segment resulting from this merger was a nasalized vowel (most likely *[ã]) or a syllabic nasal *[N̥] (underspecified for place of articulation) cannot be determined at this stage. There were three fricatives in PGPh.: */s h ç/. *s and *h are easily identifiable. The existence of *ç, which may seem unexpected, is postulated on the basis of the correspondence Gr. ὄς ~ Phr. yos/ιος. It is of note that Bozzone has previously proposed the existence of *ç ( < PIE *h₁i̯-) in an early stage of Greek even without taking into account Phrygian data. Finally, the system of stops in PGPh. distinguished four places of articulation (labial, dental, velar, and labiovelar) and three phonation types. The question of which three phonation types were actually present in PGPh. is one of great importance for the ongoing debate on the realization of the three series in PIE. Since the matter is still very much in dispute, we ought to, at this stage, endeavour to reconstruct the PGPh. system without recourse to the other IE languages. The three series of correspondences are: 1. Gr. T (= tenuis) ~ Phr. T; 2. Gr. D (= media) ~ Phr. T; 3. Gr. TA (= tenuis aspirata) ~ Phr. M. The first series must clearly be reconstructed as composed of voiceless stops. The second and third series are more problematic. With a bottom-up approach, neither the second nor the third series of correspondences are compatible with simple modal voicing, and the reflexes differ greatly in voice onset time. Rather, the defining feature distinguishing the two series was [±spread glottis], with ancillary vibration of the vocal cords. In PGPh. the second series was undergoing further spreading of the glottis. As the two languages split, this process would continue, but be affected by dissimilar changes in VOT, which was ultimately phonemicized in both languages as the defining feature distinguishing between their series of stops.

Keywords: bottom-up reconstruction, Proto-Graeco-Phrygian, spread glottis, syllabic resonant

Procedia PDF Downloads 42
1102 Enhanced Thai Character Recognition with Histogram Projection Feature Extraction

Authors: Benjawan Rangsikamol, Chutimet Srinilta

Abstract:

This research paper deals with the extraction of Thai character features using the proposed histogram projection so as to improve recognition performance. The process starts with the transformation of image files into binary files before thinning. After character thinning, the skeletons are entered into the proposed extraction using histogram projection (horizontal and vertical) to extract unique features, which are the inputs of the subsequent recognition step. The recognition rate with the proposed extraction technique is as high as 97 percent, since the technique works very well with the idiosyncrasies of Thai characters.
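
A minimal sketch of horizontal and vertical histogram projection features computed from a binarized character image, using NumPy and OpenCV; the image path, resize dimensions, normalization, and the erosion used as a crude stand-in for thinning are illustrative assumptions.

```python
# Illustrative histogram-projection feature extraction for a character image.
import cv2
import numpy as np

def projection_features(path, size=32):
    """Binarize, thin (approximated by erosion here), and project row/column sums."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)              # hypothetical character image
    img = cv2.resize(img, (size, size))
    _, binary = cv2.threshold(img, 0, 1, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    skeleton = cv2.erode(binary, np.ones((2, 2), np.uint8))   # crude stand-in for thinning
    horizontal = skeleton.sum(axis=1)   # one value per row
    vertical = skeleton.sum(axis=0)     # one value per column
    feats = np.concatenate([horizontal, vertical]).astype(np.float32)
    return feats / (feats.max() + 1e-9)  # normalize so features are scale-independent

features = projection_features("thai_char.png")  # 64-dimensional feature vector
# These vectors would then feed a classifier such as a multilayer perceptron.
```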

Keywords: character recognition, histogram projection, multilayer perceptron, Thai character features extraction

Procedia PDF Downloads 450
1101 The Impact of AI on Consumers’ Morality: An Empirical Evidence

Authors: Mingxia Zhu, Matthew Tingchi Liu

Abstract:

AI is gradually growing in the market thanks to its efficiency and accuracy, influencing people’s perceptions, attitudes, and even consequential behaviors. The current study extends prior research by focusing on AI’s impact on consumers’ morality. First, study 1 tested individuals’ beliefs about the moral perception of AI and of humans, and people’s attribution of moral worth to AI and to humans. Moral perception refers to a computational system an entity maintains to detect and identify moral violations, while moral worth here denotes whether individuals regard an entity as worthy of moral treatment. To identify the effect of AI on consumers’ morality, two studies were employed. Study 1 is a within-subjects survey, while study 2 is an experimental study. In study 1, one hundred and forty participants were recruited through an online survey company in China (M_age = 27.31 years, SD = 7.12 years; 65% female). The participants were asked to assign moral perception and moral worth to AI and to humans. A paired-samples t-test reveals that people generally regard humans as having higher moral perception (M_Human = 6.03, SD = .86) than AI (M_AI = 2.79, SD = 1.19; t(139) = 27.07, p < .001; Cohen’s d = 1.41). In addition, another paired-samples t-test showed that people attributed higher moral worth to human personnel (M_Human = 6.39, SD = .56) compared with AIs (M_AI = 5.43, SD = .85; t(139) = 12.96, p < .001; d = .88). In the next study, two hundred valid samples were recruited from a survey company in China (M_age = 27.87 years, SD = 6.68 years; 55% female), and the participants were randomly assigned to two conditions (AI vs. human). After viewing the stimuli of a human versus an AI inspector, participants were informed that an insurance company would determine the price purely based on their declaration. Their open-ended answers were then coded into ethical, honest behavior and unethical, dishonest behavior according to the design of prior literature. A chi-square analysis revealed that 64% of the participants would immorally lie to the AI insurance inspector, while 42% of participants reported deliberately lowering the mileage when facing the human inspector (χ^2 (1) = 9.71, p = .002). Similarly, the logistic regression results suggested that people would be significantly more likely to report a fraudulent answer when facing AI (β = .89, odds ratio = 2.45, Wald = 9.56, p = .002). It is demonstrated that people are more likely to behave unethically in front of non-human agents, such as AI agents, than in front of humans. The research findings shed light on new practical ethical issues in human-AI interaction and address the important role of human employees in the process of service delivery in the new era of AI.
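
A minimal sketch of the reported tests (a paired-samples t-test with Cohen's d, a chi-square test on lie rates, and a logistic regression yielding an odds ratio), using SciPy and statsmodels on hypothetical arrays that merely stand in for the survey data.

```python
# Illustrative versions of the reported tests; the arrays are hypothetical data.
import numpy as np
from scipy.stats import ttest_rel, chi2_contingency
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Study 1: each participant rates moral perception of humans and of AI (paired design).
human_ratings = rng.normal(6.0, 0.9, 140)
ai_ratings = rng.normal(2.8, 1.2, 140)
t, p = ttest_rel(human_ratings, ai_ratings)
diff = human_ratings - ai_ratings
cohens_d = diff.mean() / diff.std(ddof=1)      # Cohen's d for paired samples
print(f"t={t:.2f}, p={p:.4f}, d={cohens_d:.2f}")

# Study 2: lie (1) or not (0) by condition (1 = AI inspector, 0 = human inspector).
condition = np.repeat([1, 0], 100)
lied = np.concatenate([rng.binomial(1, 0.64, 100), rng.binomial(1, 0.42, 100)])
table = np.array([[np.sum((condition == c) & (lied == l)) for l in (0, 1)] for c in (0, 1)])
chi2, p_chi, _, _ = chi2_contingency(table, correction=False)

logit = sm.Logit(lied, sm.add_constant(condition)).fit(disp=0)
odds_ratio = np.exp(logit.params[1])           # effect of facing an AI inspector
print(f"chi2={chi2:.2f}, p={p_chi:.3f}, OR={odds_ratio:.2f}")
```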

Keywords: AI agent, consumer morality, ethical behavior, human-AI interaction

Procedia PDF Downloads 71
1100 Investigating Reading Comprehension Proficiency and Self-Efficacy among Algerian EFL Students within Collaborative Strategic Reading Approach and Attributional Feedback Intervention

Authors: Nezha Badi

Abstract:

It has been shown in the literature that Algerian university students suffer from low levels of reading comprehension proficiency, which hinders their overall proficiency in English. This low level is mainly related to the methodology of teaching reading employed by the teacher in the classroom (a teacher-centered environment), as well as to students’ poor sense of self-efficacy in undertaking reading comprehension activities. Arguably, what is needed is an approach for enhancing students’ self-beliefs about their abilities to deal with different reading comprehension activities. This can be done by providing them with opportunities to take responsibility for their own learning (learner autonomy). As a result of learning autonomy, learners’ beliefs about their abilities to deal with certain language tasks may increase, and hence their language learning ability. Therefore, this experimental research study attempts to assess the extent to which an integrated approach combining one particular reading approach known as ‘collaborative strategic reading’ (CSR) with the teacher’s attributional feedback (on students’ reading performance and strategy use) can improve the reading comprehension skill and the sense of self-efficacy of Algerian EFL university students. It also seeks to examine students’ main reasons for their successful or unsuccessful achievements in reading comprehension activities, and whether students’ attributions for their reading comprehension outcomes can be modified after exposure to the instruction. To obtain the data, different tools, including a reading comprehension test, questionnaires, an observation, an interview, and learning logs, were used with 105 second-year Algerian EFL university students. The sample of the study was divided into three groups: one control group (with no treatment), one experimental group (the CSR group) who received the CSR instruction, and a second intervention group (the CSR Plus group) who received the teacher’s attributional feedback in addition to the CSR intervention. Students in the CSR Plus group received the same experiment as the CSR group using the same tools, except that they were asked to keep learning logs, for which the teacher’s feedback on reading performance and strategy use was provided. The results of this study indicate that the CSR and attributional feedback intervention was effective in improving students’ reading comprehension proficiency and sense of self-efficacy. However, there was not a significant change in students’ adaptive and maladaptive attributions for their success and failure from the pre-test to the post-test phase. Analysis of the perception questionnaire, the interview, and the learning logs shows that students have positive perceptions of the CSR and attributional feedback instruction. Based on the findings, this study therefore seeks to provide EFL teachers in general, and Algerian EFL university teachers in particular, with pedagogical implications for how to teach reading comprehension to their students, to help them achieve well and feel more self-efficacious in reading comprehension activities and in English language learning more generally.

Keywords: attributions, attributional feedback, collaborative strategic reading, self-efficacy

Procedia PDF Downloads 109
1099 Multiple-Lump-Type Solutions of the 2D Toda Equation

Authors: Jian-Ping Yu, Wen-Xiu Ma, Yong-Li Sun, Chaudry Masood Khalique

Abstract:

In this paper, a 2D Toda equation is studied, which is a classical integrable system and plays a vital role in mathematics, physics and other areas. A new lump-type solution is constructed by using the Hirota bilinear method. One interesting feature of this research is that the lump-type solution possesses two types of multiple-lump-type waves, namely one- and two-lump-type waves. Moreover, the corresponding 3D plots, density plots and contour plots are given to show the dynamical features of the obtained multiple-lump-type solutions.
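
For reference, one commonly cited form of the 2D Toda lattice is sketched below; the exact variant and bilinear form studied in the paper may differ.

```latex
% One common form of the 2D Toda lattice (the paper's exact variant may differ):
\[
  \frac{\partial^{2} u_{n}}{\partial x \, \partial y}
  = e^{\,u_{n-1}-u_{n}} - e^{\,u_{n}-u_{n+1}} .
\]
% Lump-type solutions are typically sought through a dependent-variable
% transformation to tau functions, which recasts the equation in Hirota
% bilinear form, where polynomial ansätze in $x$ and $y$ yield lump-type waves.
```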

Keywords: 2d Toda equation, Hirota bilinear method, Lump-type solution, multiple-lump-type solution

Procedia PDF Downloads 212
1098 A Review on Design and Analysis of Structure Against Blast Forces

Authors: Akshay Satishrao Kawtikwar

Abstract:

The effect of blast loads on structures is an essential aspect that needs to be considered. This type of attack can be devastating, which is why it should be taken into account during the design process. When designing a building, not only the wind and seismic loads but also the consequences of a blast have to be taken into consideration. Blast load is the load applied to a structure by the blast wave that arrives immediately after an explosion. A blast in or close to a building can cause catastrophic damage to the interior and exterior of the building, including damage to the internal structural framework, wall collapse, and so on. The most important feature of blast-resistant construction is the ability to absorb blast energy without catastrophic failure of the structure as a whole. Construction materials in blast-protective structures must have ductility as well as strength.

Keywords: blast resistant design, blast load, explosion, ETABS

Procedia PDF Downloads 89
1097 Asynchronous Sequential Machines with Fault Detectors

Authors: Seong Woo Kwak, Jung-Min Yang

Abstract:

A strategy of fault diagnosis and tolerance for asynchronous sequential machines is discussed in this paper. With no synchronizing clock, it is difficult to diagnose an occurrence of permanent or stuck-in faults in the operation of asynchronous machines. In this paper, we present a fault detector comprised of a timer and a set of static functions to determine the occurrence of faults. In order to realize immediate fault tolerance, corrective control theory is applied to designing a dynamic feedback controller. Existence conditions for an appropriate controller and its construction algorithm are presented in terms of reachability of the machine and the feature of fault occurrences.

Keywords: asynchronous sequential machines, corrective control, fault diagnosis and tolerance, fault detector

Procedia PDF Downloads 335
1096 Urdu Text Extraction Method from Images

Authors: Samabia Tehsin, Sumaira Kausar

Abstract:

Due to the vast increase in multimedia data in recent years, efficient and robust retrieval techniques are needed to retrieve and index images and videos. Text embedded in images can serve as a strong retrieval tool for images. This is the reason that text extraction is an area of research receiving increasing attention. English text extraction is the focus of many researchers, but very little work has been done on other languages like Urdu. This paper focuses on Urdu text extraction from video frames. It presents a text detection feature set which has the ability to deal with most of the problems connected with the text extraction process. To test the validity of the method, it is tested on an Urdu news dataset, which gives promising results.

Keywords: caption text, content-based image retrieval, document analysis, text extraction

Procedia PDF Downloads 504
1095 The Outcome of Using Machine Learning in Medical Imaging

Authors: Adel Edwar Waheeb Louka

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to the low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. These models are evaluated using the external dataset for validation, and the models’ accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 57
1094 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions

Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams

Abstract:

The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change design disciplines is significant. ‘Synthetic Classicism’ is a research project that questions the underlying aspects of classically organized architecture not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on the one hand, to develop and train machine learning algorithms to produce architectural information on small pavilions and, on the other, to synthesize new information from previous architectural drawings. These algorithms are intended to 'interpret' graphical information from each pavilion and then generate new information from it. The procedure, once these algorithms are trained, is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; then, using it as source material, an isometric view is created from it; and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input. The final intention of the research is to produce isometric views out of historical information, such as the pavilions of Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process. This research also challenges the idea of the role of algorithmic design being associated with efficiency or fitness, while embracing the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative: first synthesizing images based on a given dataset and then generating new architectural information from historical references. We find that the possibility of creatively understanding and manipulating historic (and synthetic) information will be a key feature in future innovative design processes. Finally, the main question that we propose is whether an AI could be used not just to create an original and innovative group of simple buildings but also to explore the possibility of fostering a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made or synthetic.

Keywords: architecture, central pavilions, classicism, machine learning

Procedia PDF Downloads 133
1093 Application of Machine Learning Techniques in Forest Cover-Type Prediction

Authors: Saba Ebrahimi, Hedieh Ashrafi

Abstract:

Predicting the cover type of forests is a challenge for natural resource managers. In this project, we aim to perform a comprehensive comparative study of two well-known classification methods, support vector machine (SVM) and decision tree (DT). The comparison is first performed among different types of each classifier, and then the best of each classifier will be compared by considering different evaluation metrics. The effect of boosting and bagging for decision trees is also explored. Furthermore, the effect of principal component analysis (PCA) and feature selection is also investigated. During the project, the forest cover-type dataset from the remote sensing and GIS program is used in all computations.
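
A minimal sketch of such a comparison on the forest cover-type dataset, assuming scikit-learn (which ships a loader for this dataset); the model settings, boosting/bagging configurations, PCA dimensionality, and metrics are illustrative choices rather than the project's exact setup.

```python
# Illustrative SVM vs. decision-tree comparison on the covertype dataset.
from sklearn.datasets import fetch_covtype
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

X, y = fetch_covtype(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(max_depth=20, random_state=0),
    "boosted tree": AdaBoostClassifier(DecisionTreeClassifier(max_depth=8), n_estimators=50),
    "bagged tree": BaggingClassifier(DecisionTreeClassifier(max_depth=20), n_estimators=20),
    "linear SVM": make_pipeline(StandardScaler(), LinearSVC(dual=False)),
    "SVM + PCA": make_pipeline(StandardScaler(), PCA(n_components=20), LinearSVC(dual=False)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, accuracy_score(y_te, pred), f1_score(y_te, pred, average="macro"))
```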

Keywords: classification methods, support vector machine, decision tree, forest cover-type dataset

Procedia PDF Downloads 205
1092 Numerical Model Validation Using Durbin Method

Authors: H. Al-Hajeri

Abstract:

The computation of the effectiveness of turbulence-enhancement surface features, such as ribs, as a means of promoting mixing and hence heat transfer has attracted the continued attention of the engineering community. In this study, the simulation of a three-dimensional cooling passage is carried out employing a number of turbulence models, including the Durbin model. The cooling passage consists of a square-section duct whose upper and lower surfaces feature staggered cuboid ribs. The main objective of this paper is to provide comparisons of the performance of the v2-f model against other established turbulence models as implemented in the commercial CFD code Ansys Fluent. The present study demonstrates that the v2-f model can successfully capture the isothermal air flow phenomena in flow over obstacles.

Keywords: CFD, cooling passage, Durbin model, turbulence model

Procedia PDF Downloads 497
1091 Astronomy in the Education Area: A Narrative Review

Authors: Isabella Lima Leite de Freitas

Abstract:

The importance of astronomy for humanity is unquestionable. Despite being a robust science, capable of bringing new discoveries every day and quickly increasing researchers' ability to understand the universe more deeply, scientific research in this area can also help in various applications outside the domain of astronomy. The objective of this study was to review and conduct a descriptive analysis of published studies that present the importance of astronomy in the area of education. A narrative review of the literature was performed, considering the articles published in the last five years. As astronomy involves the study of physics, chemistry, biology, mathematics and technology, one of the studies evaluated presented astronomy as the gateway to science, demonstrating the presence of astronomy in 52 school curricula in 37 countries, with celestial movement being the dominant content area. Another intervention study, evaluating individuals aged 4-5 years, demonstrated that the attribution of personal characteristics to cosmic bodies, in addition to the use of comprehensive astronomy concepts, favored the learning of science in preschool-age children, through the use of practical accompaniment activities and free drawing. Aiming to measure scientific literacy, another study, developed in Turkey, motivated the authorities of that country to change the teaching materials and curriculum of secondary schools after the term “astronomy” appeared as one of the most attractive subjects for young people aged 15 to 24. There are also reports in the literature of the use of pedagogical tools, such as a representation of the Solar System on a human scale, where students can walk along the orbits of the planets while studying the laws of dynamics. The use of this tool favored the teaching of the relationship between distance, duration and speed over the periods of the planets, in addition to improving the motivation and well-being of students aged 14-16. An important impact of astronomy on education was demonstrated in a study that evaluated the participation of high school students in Astronomy Olympiads and the International Astronomy Olympiad. The study concluded that these Olympiads have considerable influence on students who later pursue a career in teaching or research, many of them in the area of astronomy itself. In addition, the literature indicates that the teaching of astronomy in the digital age has made data more readily available to researchers, but also to the general population. This can further increase the curiosity that astronomy has always instilled in people and promote the dissemination of knowledge on an expanded scale. Currently, astronomy is considered an important ally in strengthening the school curricula of children, adolescents and young adults. It has been used as a teaching tool and is extremely useful for scientific literacy, being increasingly used in the area of education.

Keywords: astronomy, education area, teaching, review

Procedia PDF Downloads 95
1090 Inequalities in Higher Education and Students’ Perceptions of Factors Influencing Academic Performance

Authors: Violetta Parutis

Abstract:

This qualitative study aims to answer the following research questions: i) What are the factors that students perceive as relevant to a) promoting and b) preventing good grades? ii) How does socio-economic status (SES) feature in those beliefs? We conducted in-depth interviews with 19 first- and second-year undergraduates of varying SES at a research-intensive university in the UK. The interviews yielded eight factors that students perceived as promoting and six perceived as preventing good grades. The findings suggested one significant difference between the beliefs of low and high SES students in that low SES students perceive themselves to be at a greater disadvantage to their peers while high SES students do not have such beliefs. This could have knock-on effects on their performance.

Keywords: social class, education, academic performance, students’ beliefs

Procedia PDF Downloads 171
1089 A Drawing Software for Designers: AutoCAD

Authors: Mayar Almasri, Rosa Helmi, Rayana Enany

Abstract:

This report describes the features of the AutoCAD software released by Autodesk. It explains how the program makes work easier for engineers and designers and reduces the time and effort they spend in AutoCAD. Moreover, it highlights how AutoCAD works, how some of its commands, such as shortcuts, make it easy to use, and the features that make its measurements accurate. The results of the report show that most users of this program are designers and engineers, but few other people know about it or find it easy to use. Users prefer it because it is easy to use and because the shortcut commands save them a lot of time. The features received high ratings; there were some suggestions for improving AutoCAD's Aperture feature, but these came from a small percentage of respondents, and the largest percentage felt that the program did not need improvement and was good as it is.

Keywords: artificial intelligence, design, planning, commands, autodesk, dimensions

Procedia PDF Downloads 119
1088 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

Over the past few years, with the rapid increase in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, in comparison with other industries they are adopted less frequently in commercial banking, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model is developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases that cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. Also, it is observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concerns about the difficulty of explaining the models for regulatory purposes.
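
A minimal sketch of the classical Weight of Evidence computation for one binned feature, followed by the alternative of deriving a per-bin estimate from a machine-learning model's average score, assuming a pandas DataFrame; the column names, binning, and matching rule are hypothetical simplifications of the approach described.

```python
# Illustrative WoE computation per bin; column names and binning are hypothetical.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "utilization": np.random.default_rng(3).uniform(0, 1, 1000),
    "bad": np.random.default_rng(4).binomial(1, 0.1, 1000),   # 1 = default
})
df["bin"] = pd.qcut(df["utilization"], 5)

# Classical WoE: log ratio of the goods distribution to the bads distribution per bin.
grouped = df.groupby("bin", observed=True)["bad"].agg(bads="sum", total="count")
grouped["goods"] = grouped["total"] - grouped["bads"]
grouped["woe"] = np.log(
    (grouped["goods"] / grouped["goods"].sum()) /
    (grouped["bads"] / grouped["bads"].sum())
)
print(grouped[["woe"]])

# Hybrid-style alternative (simplified): estimate each bin's risk from an ML model's
# predicted bad probability instead of raw counts, e.g. via the log-odds of the mean score.
# df["ml_score"] = fitted_model.predict_proba(X)[:, 1]   # hypothetical fitted model
# woe_ml = df.groupby("bin", observed=True)["ml_score"].mean().pipe(lambda p: np.log((1 - p) / p))
```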

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 124