Search results for: predictive coding
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1577

857 Automatic Content Curation of Visual Heritage

Authors: Delphine Ribes Lemay, Valentine Bernasconi, André Andrade, Lara Défayes, Mathieu Salzmann, Frédéric Kaplan, Nicolas Henchoz

Abstract:

Digitization and preservation of large heritage collections induce high maintenance costs to keep up with technical standards and ensure sustainable access. Creating impactful usage is instrumental to justify the resources for long-term preservation. The Museum für Gestaltung of Zurich holds one of the biggest poster collections in the world, of which 52,000 posters were digitized. In the process of building a digital installation to valorize the collection, one objective was to develop an algorithm capable of predicting the next poster to show according to the ones already displayed. The work presented here describes the steps taken to build an algorithm able to automatically create sequences of posters reflecting associations performed by curators and professional designers. This challenge has similarities with the domain of song-playlist algorithms. Recently, artificial intelligence techniques, and more specifically deep-learning algorithms, have been used to facilitate playlist generation. Promising results were obtained with Recurrent Neural Networks (RNN) trained on manually generated playlists and paired with clusters of features extracted from songs. We used the same principles to create the proposed algorithm, but applied them to a more challenging medium: posters. First, a convolutional autoencoder was trained to extract features of the posters, using the 52,000 digital posters as a training set. The poster features were then clustered. Next, an RNN learned to predict the next cluster according to the previous ones. The RNN training set was composed of poster sequences extracted from a collection of books from the Museum für Gestaltung of Zurich dedicated to displaying posters. Finally, within the predicted cluster, the poster closest to the previous poster is selected, using the mean squared distance between poster features as the proximity measure. To validate the predictive model, we compared sequences of 15 posters produced by our model to randomly and manually generated sequences. The manual sequences were created by a professional graphic designer. We asked 21 participants working as professional graphic designers to sort the sequences from the one with the strongest graphic line to the one with the weakest and to motivate their answer with a short description. The sequences produced by the designer were ranked first 60%, second 25% and third 15% of the time. The sequences produced by our predictive model were ranked first 25%, second 45% and third 30% of the time. The sequences produced randomly were ranked first 15%, second 29%, and third 55% of the time. Compared to the designer sequences, and as reported by participants, the model and random sequences lacked thematic continuity. According to the results, the proposed model is able to generate better poster sequencing than random sampling, and it is occasionally able to outperform a professional designer. As a next step, the proposed algorithm should offer the possibility to create sequences according to a selected theme. To conclude, this work shows the potential of artificial intelligence techniques to learn from existing content and provide a tool to curate large sets of data, with a permanent renewal of the presented content.
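
For readers who want to see the mechanics of the final selection step, the sketch below illustrates picking, within the RNN-predicted cluster, the poster whose autoencoder features are closest (by mean squared distance) to the poster currently displayed. It is a minimal illustration only: the array names, feature dimensionality, and random data are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the abstract does not publish the authors' code, so the
# array names, cluster predictor, and feature dimensionality below are assumptions.
import numpy as np

def next_poster(prev_features, predicted_cluster, features, cluster_labels):
    """Pick, within the predicted cluster, the poster closest to the previous one.

    prev_features    : (d,) feature vector of the poster currently displayed
    predicted_cluster: cluster index returned by the RNN for the next step
    features         : (n_posters, d) autoencoder features of the whole collection
    cluster_labels   : (n_posters,) cluster assignment of each poster
    """
    candidates = np.where(cluster_labels == predicted_cluster)[0]
    # Mean squared distance between feature vectors, as described in the abstract.
    dists = ((features[candidates] - prev_features) ** 2).mean(axis=1)
    return candidates[np.argmin(dists)]

# Toy usage with random data standing in for the poster features.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64))
cluster_labels = rng.integers(0, 8, size=500)
print(next_poster(features[0], predicted_cluster=3,
                  features=features, cluster_labels=cluster_labels))
```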

Keywords: artificial intelligence, digital humanities, serendipity, design research

Procedia PDF Downloads 184
856 PEINS: A Generic Compression Scheme Using Probabilistic Encoding and Irrational Number Storage

Authors: P. Jayashree, S. Rajkumar

Abstract:

With social networks and smart devices generating a multitude of data, effective data management is the need of the hour for networks and cloud applications. Some applications need effective storage, while others need effective communication over networks, and data reduction comes as a handy solution to meet both requirements. Most data compression techniques are based on data statistics and may result in either lossy or lossless data reduction. Though lossy reduction produces better compression ratios than lossless methods, many applications require data accuracy and miniature details to be preserved. A variety of data compression algorithms exists in the literature for different forms of data, such as text, image, and multimedia data. In the proposed work, a generic progressive compression algorithm based on probabilistic encoding, called PEINS, is proposed as an enhancement over the irrational number storage coding technique to cater to the storage issues of increasing data volumes as a cost-effective solution, which also offers data security as a secondary outcome to some extent. The proposed work reveals cost-effectiveness in terms of a better compression ratio with no deterioration in compression time.
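
The abstract does not disclose PEINS's internal details, so the following toy sketch only illustrates the general idea of probabilistic (interval-based) encoding that the scheme builds on; the float-based encoder and the sample message are assumptions, not the actual PEINS algorithm or its irrational-number storage step.

```python
# Generic illustration of probabilistic (arithmetic-style) interval encoding; this
# is not the PEINS algorithm, only the underlying idea of symbol-probability coding.
from collections import Counter

def build_intervals(message):
    """Assign each symbol a sub-interval of [0, 1) proportional to its frequency."""
    counts = Counter(message)
    total = len(message)
    intervals, low = {}, 0.0
    for sym, c in sorted(counts.items()):
        intervals[sym] = (low, low + c / total)
        low += c / total
    return intervals

def encode(message, intervals):
    """Narrow [low, high) symbol by symbol; any value inside identifies the message."""
    low, high = 0.0, 1.0
    for sym in message:
        s_low, s_high = intervals[sym]
        span = high - low
        low, high = low + span * s_low, low + span * s_high
    return (low + high) / 2

msg = "abracadabra"
iv = build_intervals(msg)
print(encode(msg, iv))
```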

Keywords: compression ratio, generic compression, irrational number storage, probabilistic encoding

Procedia PDF Downloads 294
855 Association of MIR146A rs2910164 Variation with a Predisposition to Sporadic Breast Cancer in a Pakistani Cohort

Authors: Mushtaq Ahmad, Bashir Rahman, Taqweem-ul-Haq, Fazal Jalil, Aftab Ali Shah

Abstract:

Single nucleotide polymorphisms (SNPs) in genes coding for microRNAs (miRNAs) play a pivotal role in the progression of breast cancer (BC). We investigated the association of the miR-146a rs2910164 G/C polymorphism with the risk of BC in the Pakistani population. The miR-146a rs2910164 polymorphism was genotyped in 300 BC cases and 300 age- and gender-matched healthy controls using T-ARMS-PCR. Genotype and allele frequencies were calculated, and the association between genotypes and the risk of BC was assessed using odds ratios (OR) and 95% confidence intervals (CI). A significant difference in genotypic frequencies (χ2=63.10; p ≤ 0.0001) and allelic frequencies (OR=0.3955 (0.3132-0.4993); p ≤ 0.0001) was observed between cases and controls. Furthermore, we also found that the miR-146a rs2910164 CC homozygous genotype increased the risk of breast cancer in the dominant (OR=0.2397 (0.1629-0.3526); p=0.0001; GG vs GC+CC) and recessive (OR=2.803 (1.865-4.213); p ≤ 0.0001; CC vs GC+GG) inheritance models. In summary, miR-146a rs2910164 G/C is significantly associated with BC in the Pakistani population. To our knowledge, this is the first study to assess the MIR146A rs2910164 G > C SNP in a Pakistani population. Analysis of the secondary structure of the MIR146A variant revealed a significant structural modification. A study with a larger sample size is needed to further confirm these findings.
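
As a worked illustration of the allele-level odds ratio and 95% confidence interval reported above, the short sketch below computes both from a 2x2 table; the counts used are hypothetical placeholders, not the study's genotype data.

```python
# Worked illustration of an allele-level odds ratio with a Wald 95% CI;
# the 2x2 counts below are hypothetical placeholders, not the study's data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = risk/reference allele counts in cases; c, d = in controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

or_, ci = odds_ratio_ci(a=210, b=390, c=300, d=300)  # placeholder counts
print(f"OR = {or_:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```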

Keywords: breast cancer, MIR146A, microRNA, SNP

Procedia PDF Downloads 136
854 On Hyperbolic Gompertz Growth Model (HGGM)

Authors: S. O. Oyamakin, A. U. Chukwu

Abstract:

We propose a Hyperbolic Gompertz Growth Model (HGGM), developed by introducing a stabilizing parameter θ, based on the hyperbolic sine function, into the classical Gompertz growth equation. The resulting integral solution, obtained deterministically, was reprogrammed into a statistical model and used to model the height and diameter of pines (Pinus caribaea). Its predictive ability was compared with that of the classical Gompertz growth model using goodness-of-fit tests and model selection criteria; the proposed approach mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions. The Kolmogorov-Smirnov and Shapiro-Wilk tests were used to check the compliance of the error term with the normality assumption, while the runs test was used to test the independence of the error term. The mean function of top height/Dbh over age predicted the observed values of top height/Dbh more closely under the hyperbolic Gompertz growth model than under its source model (the classical Gompertz growth model), while the results for R2, adjusted R2, MSE, and AIC confirmed the predictive power of the hyperbolic Gompertz growth model over its source model.
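
A minimal sketch of fitting the classical Gompertz baseline by nonlinear least squares and computing an AIC-style criterion is given below; since the abstract does not state the exact functional form of the sinh-based hyperbolic modification, only the source model is fitted here, on synthetic data.

```python
# Sketch of fitting the classical Gompertz source model; the hyperbolic (sinh-based)
# modification is not specified in the abstract, so it is not reproduced here.
# The age/height data are synthetic, not the Pinus caribaea measurements.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """Classical Gompertz growth: a * exp(-b * exp(-c * t))."""
    return a * np.exp(-b * np.exp(-c * t))

age = np.linspace(1, 30, 30)                       # years (synthetic)
height = gompertz(age, 25, 4, 0.2) + np.random.default_rng(1).normal(0, 0.5, 30)

params, _ = curve_fit(gompertz, age, height, p0=[20, 3, 0.1])
pred = gompertz(age, *params)
rss = np.sum((height - pred) ** 2)
n, k = len(age), len(params)
aic = n * np.log(rss / n) + 2 * k                  # common least-squares AIC form
print(params, aic)
```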

Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, gompertz

Procedia PDF Downloads 441
853 An Investigation into the Views of Distant Science Education Students Regarding Teaching Laboratory Work Online

Authors: Abraham Motlhabane

Abstract:

This research analysed the written views of science education students regarding the teaching of laboratory work in the online mode. The research adopted a qualitative methodology. The qualitative approach was aimed at investigating small and distinct groups, normally regarded as a single-site study, and was used to describe and analyse the phenomena from the students' perspective. This means the research began with worldview assumptions and used theoretical lenses on the research problem to inquire into the meanings held by individual students. The research was conducted with three groups of students studying for a Postgraduate Certificate in Education, a Bachelor of Education, and an Honours Bachelor of Education, respectively. In each of the study programmes, the science education module is compulsory. Five science education students from each study programme were purposively selected to participate in this research; therefore, 15 students participated. To analyse the data, the data were first printed and hard copies were used in the analysis. The data were read several times, and key concepts and ideas were highlighted. Themes and patterns were identified to describe the data, and coding was used as a process of organising and sorting the data. The findings of the study are very diverse: some students are in favour of online laboratory work, whereas other students argue that science can only be learnt through hands-on experimentation.

Keywords: online learning, laboratory work, views, perceptions

Procedia PDF Downloads 144
852 Using the Technology Acceptance Model to Examine Seniors’ Attitudes toward Facebook

Authors: Chien-Jen Liu, Shu Ching Yang

Abstract:

Using the technology acceptance model (TAM), this study examined the external variables of technological complexity (TC) to acquire a better understanding of the factors that influence the acceptance of computer application courses by learners at Active Aging Universities. After the learners in this study had completed a 27-hour Facebook course, 44 learners responded to a modified TAM survey. Data were collected to examine the path relationships among the variables that influence the acceptance of Facebook-mediated community learning. The partial least squares (PLS) method was used to test the measurement and the structural model. The study results demonstrated that attitudes toward Facebook use directly influence behavioral intentions (BI) with respect to Facebook use, evincing a high prediction rate of 58.3%. In addition to the perceived usefulness (PU) and perceived ease of use (PEOU) measures that are proposed in the TAM, other external variables, such as TC, also indirectly influence BI. These four variables can explain 88% of the variance in BI and demonstrate a high level of predictive ability. Finally, limitations of this investigation and implications for further research are discussed.

Keywords: technology acceptance model (TAM), technological complexity, partial least squares (PLS), perceived usefulness

Procedia PDF Downloads 346
851 Correlation of Clinical and Sonographic Findings with Cytohistology for Diagnosis of Ovarian Tumours

Authors: Meenakshi Barsaul Chauhan, Aastha Chauhan, Shilpa Hurmade, Rajeev Sen, Jyotsna Sen, Monika Dalal

Abstract:

Introduction: Ovarian masses are common forms of neoplasm in women and represent two-thirds of gynaecological malignancies. A pre-operative suggestion of malignancy can guide the gynecologist to refer women with a suspected pelvic mass to a gynecological oncologist for appropriate therapy and optimized treatment, which can improve survival. In the younger age group, pre-operative differentiation into benign or malignant pathology can determine whether conservative or radical surgery is performed. Imaging modalities have a definite role in establishing the diagnosis. By using the International Ovarian Tumor Analysis (IOTA) classification with sonography, the use of costly radiological methods like magnetic resonance imaging (MRI) and computed tomography (CT) can be reduced, especially in developing countries like India. Thus, this study was undertaken to evaluate the role of clinical methods and sonography in diagnosing the nature of an ovarian tumor. Material and Methods: This prospective observational study was conducted on 40 patients presenting with ovarian masses in the Department of Obstetrics and Gynaecology at a tertiary care center in northern India. Functional cysts were excluded. Ultrasonography and color Doppler were performed on all the cases. IOTA rules were applied, which take into account locularity, size, presence of solid components, acoustic shadow, Doppler flow, etc. MRI/CT scans of the abdomen and pelvis were done in cases where sonography was inconclusive. In inoperable cases, fine needle aspiration cytology (FNAC) was done. The histopathology report after surgery and the cytology report after FNAC were correlated statistically with the pre-operative diagnosis made clinically and sonographically using the IOTA rules. Statistical Analysis: Descriptive measures were analyzed using the mean and standard deviation, the Student t-test was applied, and proportions were analyzed using the chi-square test. Inferential measures were analyzed by sensitivity, specificity, negative predictive value, and positive predictive value. Results: A provisional diagnosis of a benign tumor was made in 16 (42.5%) and of a malignant tumor in 24 (57.5%) patients on the basis of clinical findings. With the IOTA simple rules on sonography, 15 (37.5%) were found to be benign, 23 (57.5%) were found to be malignant, and findings were inconclusive in 2 patients (5%). FNAC/histopathology, taken as the gold standard, reported 14 (35%) benign and 26 (65%) malignant ovarian tumors. Clinical findings alone had a sensitivity of 66.6% and a specificity of 90.9%. USG alone had a sensitivity of 86% and a specificity of 80%. When clinical findings and the IOTA simple rules of sonography were combined (excluding inconclusive masses), the sensitivity and specificity were 83.3% and 92.3%, respectively; including inconclusive masses, the sensitivity was 91.6% and the specificity 89.2%. Conclusion: The IOTA simple sonography rules are highly sensitive and specific in the prediction of ovarian malignancy and are also easy to use and easily reproducible. Thus, combining clinical examination with USG will help in the better management of patients in terms of time, cost and prognosis, and will also avoid the need for costlier modalities like CT and MRI.
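
The diagnostic-accuracy arithmetic used above (sensitivity, specificity, PPV, NPV) can be reproduced with the small sketch below; the confusion-matrix counts are hypothetical and are not the paper's exact cross-tabulation.

```python
# Minimal check of diagnostic-accuracy arithmetic; the TP/FP/FN/TN counts are
# hypothetical examples, not the paper's exact cross-tabulation.
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    return sensitivity, specificity, ppv, npv

# Example with 26 malignant and 14 benign on the gold standard and a
# hypothetical split of test calls.
print(diagnostic_metrics(tp=24, fp=1, fn=2, tn=13))
```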

Keywords: benign, international ovarian tumor analysis classification, malignant, ovarian tumours, sonography

Procedia PDF Downloads 80
850 Modelling Fluoride Pollution of Groundwater Using Artificial Neural Network in the Western Parts of Jharkhand

Authors: Neeta Kumari, Gopal Pathak

Abstract:

Artificial neural networks have proved to be an efficient tool for non-parametric modelling of data in various applications where the output is non-linearly associated with the input. They are a preferred tool for many predictive data mining applications because of their power, flexibility, and ease of use. A standard feed-forward network (FFN) is used to predict the groundwater fluoride content. The ANN model is trained using the backpropagation algorithm with tansig and logsig activation functions and varying numbers of neurons. The models are evaluated on the basis of statistical performance criteria such as root mean squared error (RMSE), regression coefficient (R2), bias (mean error), coefficient of variation (CV), Nash-Sutcliffe efficiency (NSE), and the index of agreement (IOA). The results of the study indicate that an artificial neural network (ANN) can be used for groundwater fluoride prediction in limited-data situations in hard-rock regions like the western parts of Jharkhand with sufficiently good accuracy.
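
A hedged sketch of the evaluation criteria named above (RMSE, R2, bias, CV, NSE, IOA) is shown below; the observed and simulated arrays are placeholder values, not the study's fluoride data, and the CV is computed on the residuals as one common convention.

```python
# Sketch of the performance criteria listed in the abstract; the observed/simulated
# values are placeholders, and the CV definition (residual std over mean observed)
# is one common convention rather than the study's stated formula.
import numpy as np

def evaluate(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))
    bias = np.mean(err)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    cv = np.std(err) / np.mean(obs)
    nse = 1 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    ioa = 1 - np.sum(err ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return dict(RMSE=rmse, bias=bias, R2=r2, CV=cv, NSE=nse, IOA=ioa)

obs = [0.8, 1.2, 1.5, 0.6, 2.1]   # fluoride concentrations, mg/L (placeholder)
sim = [0.9, 1.1, 1.4, 0.7, 1.9]
print(evaluate(obs, sim))
```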

Keywords: artificial neural network (ANN), feed-forward network (FFN), backpropagation algorithm, Levenberg-Marquardt algorithm, groundwater fluoride contamination

Procedia PDF Downloads 550
849 Predicting COVID-19 Severity Using Simple Parameters in Resource-Limited Settings

Authors: Sireethorn Nimitvilai, Ussanee Poolvivatchaikarn, Nuchanart Tomeun

Abstract:

Objective: To determine simple laboratory parameters that predict disease severity among COVID-19 patients in resource-limited settings. Material and methods: A retrospective cohort study was conducted at Nakhonpathom Hospital, a 722-bed tertiary care hospital with an average of 50,000 admissions per year, between April 15 and May 15, 2021. Eligible patients were adults aged ≥ 15 years who were hospitalized with COVID-19. Baseline characteristics, comorbid conditions and laboratory findings at admission were collected. Predictive factors for severe COVID-19 infection were analyzed. Results: There were 207 patients (79 male and 128 female), and the mean age was 46.7 (16.8) years. Of these, 39 cases (18.8%) were severe and 168 (81.2%) were non-severe. Factors associated with severe COVID-19 were a neutrophil-to-lymphocyte ratio ≥ 4 (OR 8.1, 95%CI 2.3-20.3, p < 0.001) and a C-reactive protein-to-albumin ratio ≥ 10 (OR 3.49, 95%CI 1.3-9.1, p = 0.01). Conclusions: Complete blood count, C-reactive protein and albumin are simple, inexpensive, widely available tests and can be used to predict severe COVID-19 in resource-limited settings.
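
The two bedside ratios and their cut-offs can be computed as in the sketch below; the patient values are invented examples (and the CAR cut-off depends on the units convention used), not data from Nakhonpathom Hospital.

```python
# Sketch of the two ratios and the cut-offs reported in the abstract; the patient
# values are invented, and the CAR cut-off of 10 assumes the units used in the study.
def covid_severity_flags(neutrophils, lymphocytes, crp, albumin,
                         nlr_cutoff=4.0, car_cutoff=10.0):
    nlr = neutrophils / lymphocytes          # neutrophil-to-lymphocyte ratio
    car = crp / albumin                      # C-reactive protein-to-albumin ratio
    return {"NLR": nlr, "NLR>=4": nlr >= nlr_cutoff,
            "CAR": car, "CAR>=10": car >= car_cutoff}

# Hypothetical patient: neutrophils 6.5 and lymphocytes 1.2 (same units),
# CRP 48 and albumin 3.8 (units must match the cut-off convention).
print(covid_severity_flags(6.5, 1.2, 48, 3.8))
```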

Keywords: COVID-19, predictor of severity, resource-limited settings, simple laboratory parameters

Procedia PDF Downloads 180
848 A Development of Portable Intrinsically Safe Explosion-Proof Type of Dual Gas Detector

Authors: Sangguk Ahn, Youngyu Kim, Jaheon Gu, Gyoutae Park

Abstract:

In this paper, we developed a dual gas leak instrument to detect hydrocarbon (HC) and carbon monoxide (CO) gases. To handle two kinds of gases, it is necessary to design a compact structure for the sensors. It is then important to design the sensing circuits for measuring, amplifying and filtering, and the firmware should be well programmed using robust, systematic and modular coding methods. Above all, improving accuracy and initial response time is of vital importance. To manufacture a distinguished gas leak detector, we applied an intrinsically safe, explosion-proof structure to the lithium-ion battery, main circuits, motorized pump, color LCD interface and sensing circuits. On the software side, to enhance measuring accuracy, we used numerical analysis such as Lagrange and Neville interpolation. Performance tests were conducted using standard methane at seven different concentrations and compared against three other products. We aim to raise risk prevention and the efficiency of gas safety management by distributing the detector to the field of gas safety. Acknowledgment: This study was supported by the Small and Medium Business Administration under the research theme of 'Commercialized Development of a Portable Intrinsically Safe Explosion-Proof Type Dual Gas Leak Detector' (task number S2456036).
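
Neville's interpolation, one of the numerical methods mentioned above, can serve as a calibration step as sketched below; the ADC-reading-to-concentration calibration points are made-up values, not the detector's data.

```python
# Sketch of Neville's interpolation used as a sensor-calibration step; the
# reading-to-concentration points below are hypothetical, not the detector's data.
def neville(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs, ys) at x."""
    n = len(xs)
    p = list(ys)
    for k in range(1, n):
        for i in range(n - k):
            p[i] = ((x - xs[i + k]) * p[i] + (xs[i] - x) * p[i + 1]) / (xs[i] - xs[i + k])
    return p[0]

# Calibration points: raw sensor reading -> methane concentration (%LEL), hypothetical.
readings = [120, 340, 560, 790, 1010]
concentrations = [0, 25, 50, 75, 100]
print(neville(readings, concentrations, 450))   # interpolated concentration
```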

Keywords: gas leak, dual gas detector, intrinsically safe, explosion proof

Procedia PDF Downloads 228
847 A Neural Network Modelling Approach for Predicting Permeability from Well Logs Data

Authors: Chico Horacio Jose Sambo

Abstract:

Recently, neural networks have gained popularity when it comes to solving complex nonlinear problems. Permeability is one of the fundamental reservoir characteristics; it is anisotropic and distributed in a non-linear manner. For this reason, permeability prediction from well log data is well suited to neural networks and other computer-based techniques. The main goal of this paper is to predict reservoir permeability from well log data by using a neural network approach. A multi-layered perceptron trained by the back-propagation algorithm was used to build the predictive model. The performance of the model was measured by the correlation coefficient, which was evaluated for the training, testing, validation and complete data sets. The results show that the neural network was capable of reproducing permeability accurately in all cases: the calculated correlation coefficients for training, testing and validation were 0.96273, 0.89991 and 0.87858, respectively. The generalization of the results to other fields can be made after examining new data, and a regional study of reservoir properties might be possible with cheap and very quickly constructed models.
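
A minimal sketch of a multi-layer perceptron mapping well-log responses to (log-)permeability, evaluated by the correlation coefficient, is given below; the log names (GR, RHOB, NPHI, DT) and the synthetic data are assumptions for illustration, not the field dataset used in the paper.

```python
# Hedged sketch of an MLP for permeability prediction from well logs; the log names
# and the synthetic dataset are assumptions, not the paper's field data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))                    # columns: GR, RHOB, NPHI, DT (assumed)
y = 10 ** (0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.1, 300))  # permeability, mD

X_tr, X_te, y_tr, y_te = train_test_split(X, np.log10(y), test_size=0.3, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

# Correlation coefficient between measured and predicted (log-)permeability.
r = np.corrcoef(y_te, model.predict(X_te))[0, 1]
print(f"test correlation coefficient: {r:.3f}")
```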

Keywords: neural network, permeability, multilayer perceptron, well log

Procedia PDF Downloads 403
846 COVID-19 Vaccine Hesitancy: The Role of Existential Concerns in Individual’s Decisions Regarding the Vaccine Uptake

Authors: Vittoria Franchina, Laura Salerno, Rubinia Celeste Bonfanti, Gianluca Lo Coco

Abstract:

This study examines the relationships between existential concerns (ECs), basic psychological needs (BPNs), vaccine hesitancy (VH), and the mediating role of negative attitudes toward COVID-19 vaccines. A cross-sectional survey was carried out on a sample of 287 adults (Mage = 36.04 (12.07); 59.9% female). Participants were recruited online through Clickworker and filled in measures on existential concerns, basic psychological needs, attitudes toward COVID-19 vaccines, and vaccine hesitancy for the Pfizer-BioNTech and AstraZeneca vaccines separately. Structural equation modelling showed that existential concerns were related to Pfizer-BioNTech and AstraZeneca vaccine hesitancy both directly and indirectly through negative attitudes toward possible side effects of COVID-19 vaccines. The present study has identified several predictive factors relating to the intention to take up vaccination against COVID-19 in Italy. Specifically, these findings suggest a causal link between existential concerns, attitudes, and vaccine hesitancy.

Keywords: COVID-19, existential concerns, Pfizer-BioNTech and Astrazeneca vaccines, vaccine hesitancy

Procedia PDF Downloads 99
845 Observing Teaching Practices Through the Lenses of Self-Regulated Learning: A Study Within the String Instrument Individual Context

Authors: Marija Mihajlovic Pereira

Abstract:

Teaching and learning a musical instrument is challenging for both teachers and students. Teachers generally use diverse strategies to resolve students' particular issues in a one-to-one context. Considering individual sessions as a supportive educational context, the teacher can play a decisive role in stimulating and promoting self-regulated learning strategies, especially with beginning learners. Teachers who promote self-controlling behaviors, strategic monitoring, and the regulation of actions toward goals can expect their students to practice more qualitatively and consciously. When encouraged to adopt self-regulation habits, students could benefit from greater productivity on a longer path. Founded on Barry Zimmerman's cyclical model, which comprises three phases – forethought, performance, and self-reflection – this work aims to articulate self-regulated learning and music learning. Self-regulated learning concerns the individual's attitude in planning, controlling, and reflecting on their performance. Furthermore, this study aimed to present an observation grid for perceiving teaching instructions that encourage students' controlling cognitive behaviors, in light of the belief that conscious promotion of self-regulation may motivate strategic actions toward goals in musical performance. The participants, two teachers and two students, were involved in a social inclusion project in Lisbon (Portugal). The author and one independent inter-observer analyzed six video-recorded string instrument lessons. The data correspond to three sessions per teacher, each lectured to one (different) student. The violin (f) and violoncello (m) teachers hold a Master's degree in music education and have approximately five years of experience. In their second year of learning an instrument, the students had acquired reasonable skills in musical reading, posture, and sound quality. The students also manifested positive learning behaviors and interest in learning a musical instrument, although their study habits were still inconsistent. In-class rehearsal frames were coded using MAXQDA software, version 20, according to the grid's four categories (parent codes): self-regulated learning, teaching verbalizations, teaching strategies, and students' in-class performance. As a result, selected rehearsal frames qualitatively describe teaching instructions that might promote students' body and hearing awareness, such as "close the eyes while playing" or "sing to internalize the pitch." Another type of analysis, coding the short video events according to the observation grid's subcategories (child codes), made it possible to perceive the time teachers dedicate to specific verbal or non-verbal strategies. Furthermore, a coding overlay analysis indicated that teachers tend to stimulate: (i) forethought – explaining tasks, offering feedback and ensuring that students identify a goal; (ii) performance – teaching study strategies and encouraging students to sing and use vocal abilities to ensure inner audition; (iii) self-reflection – frequent inquiring and encouraging the student to verbalize their perception of the performance. Although developed in the context of individual string instrument lessons, this classroom observation grid brings together essential variables of a one-to-one lesson. It may find utility in the broader context of music education due to the possibility of organizing, observing and evaluating teaching practices. Besides that, this study contributes to cognitive development research by suggesting a practical approach to fostering self-regulated learning.

Keywords: music education, observation grid, self-regulated learning, string instruments, teaching practices

Procedia PDF Downloads 98
844 Identifying the Barriers Facing Chinese Small and Medium-Sized Enterprises and Evaluating the Effectiveness of Public Supports

Authors: Yongsheng Guo, Obedat Abdulazeez, Xiaoxian Zhu

Abstract:

This study aimed to identify the barriers to the development of small and medium-sized enterprises (SMEs) in China and build a theoretical framework to evaluate the support provided by the authorities and institutions. A grounded theory approach was adopted to collect and analyze data. 32 interviews were conducted with SME managers, and open, axial and selective coding was utilized to develop themes. Based on institutional theory, grounded theory models were used to present findings. The findings showed that the main barriers in the business environment were defaulting on contracts, bureaucracy in procedures, lack of financial and legal support, limited intermediaries and channels, and poor quality of products and services. This study found that many programs were provided to support SMEs. A theoretical framework was developed to evaluate the performance of the programs from the managers’ perspective. The concepts of economy, efficiency and effectiveness were used to evaluate the perceived value of the programs. This study suggests that specialized programs are needed to suit sector-specific requirements, and creative packages are helpful in supporting SMEs' growth.

Keywords: business support, public economics, public programme, SME

Procedia PDF Downloads 50
843 Change in Food Choice Behavior: Trend and Challenges

Authors: Gargi S. Kumar, Mrinmoyi Kulkarni

Abstract:

Food choice behavior is complex and determined by biological, psychological, socio-cultural, and economic factors. The past two decades have seen dramatic changes in food consumption patterns among urban Indian consumers. The objective of the current study was to evaluate perceptions of change with respect to food choice behavior. Ten participants (urban men and women) ranging in age from 40 to 65 were selected, and in-depth interviews were conducted with a set of open-ended questions. The recorded interviews were transcribed and thematically analyzed using inductive, open and axial coding. The results identified themes that act as drivers and consequences of change in food choice behavior. Drivers such as globalization (with sub-themes of urbanization, education, income, and work environment), media and advertising, changing gender roles, women in the workforce, and change in family structure have influenced food choice at both the individual and national level. The consequences of changes in food choice were health implications, processed food consumption, food decisions driven by children, and eating out, among others. The study reveals that, over time, food choices change and evolve. However, it is interesting to note how market forces and culture interact to influence individual behavior and the overall food environment, which subsequently affects food choice and the health of the people.

Keywords: change, consequences, drivers, food choice, globalization

Procedia PDF Downloads 228
842 Developing Early Intervention Tools: Predicting Academic Dishonesty in University Students Using Psychological Traits and Machine Learning

Authors: Pinzhe Zhao

Abstract:

This study focuses on predicting university students' cheating tendencies using psychological traits and machine learning techniques. Academic dishonesty is a significant issue that compromises the integrity and fairness of educational institutions. While much research has been dedicated to detecting cheating behaviors after they have occurred, there is limited work on predicting such tendencies before they manifest. The aim of this research is to develop a model that can identify students who are at higher risk of engaging in academic misconduct, allowing for earlier interventions to prevent such behavior. Psychological factors are known to influence students' likelihood of cheating. Research shows that traits such as test anxiety, moral reasoning, self-efficacy, and achievement motivation are strongly linked to academic dishonesty. High levels of anxiety may lead students to cheat as a way to cope with pressure. Those with lower self-efficacy are less confident in their academic abilities, which can push them toward dishonest behaviors to secure better outcomes. Students with weaker moral judgment may also justify cheating more easily, believing it to be less wrong under certain conditions. Achievement motivation also plays a role, as students driven primarily by external rewards, such as grades, are more likely to cheat compared to those motivated by intrinsic learning goals. In this study, data on students' psychological traits are collected through validated assessments, including scales for anxiety, moral reasoning, self-efficacy, and motivation. Additional data on academic performance, attendance, and engagement in class are also gathered to create a more comprehensive profile. Using machine learning algorithms such as Random Forest, Support Vector Machines (SVM), and Long Short-Term Memory (LSTM) networks, the research builds models that can predict students' cheating tendencies. These models are trained and evaluated using metrics like accuracy, precision, recall, and F1 scores to ensure they provide reliable predictions. The findings demonstrate that combining psychological traits with machine learning provides a powerful method for identifying students at risk of cheating. This approach allows for early detection and intervention, enabling educational institutions to take proactive steps in promoting academic integrity. The predictive model can be used to inform targeted interventions, such as counseling for students with high test anxiety or workshops aimed at strengthening moral reasoning. By addressing the underlying factors that contribute to cheating behavior, educational institutions can reduce the occurrence of academic dishonesty and foster a culture of integrity. In conclusion, this research contributes to the growing body of literature on predictive analytics in education. It offers a novel approach by integrating psychological assessments with machine learning to predict cheating tendencies. This method has the potential to significantly improve how academic institutions address academic dishonesty, shifting the focus from punishment after the fact to prevention before it occurs. By identifying high-risk students and providing them with the necessary support, educators can help maintain the fairness and integrity of the academic environment.
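
A minimal sketch of the prediction-and-evaluation pipeline described above (here with a random forest and accuracy/precision/recall/F1) is shown below; the synthetic trait features stand in for the validated scale scores and are not the study's data.

```python
# Illustrative sketch only: synthetic trait features (anxiety, moral reasoning,
# self-efficacy, motivation) stand in for the validated scale scores.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(7)
n = 600
X = rng.normal(size=(n, 4))                      # anxiety, moral, self-efficacy, motivation
logit = 1.2 * X[:, 0] - 0.9 * X[:, 1] - 0.7 * X[:, 2] + 0.5 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = at-risk (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print({"accuracy": accuracy_score(y_te, pred),
       "precision": precision_score(y_te, pred),
       "recall": recall_score(y_te, pred),
       "f1": f1_score(y_te, pred)})
```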

Keywords: academic dishonesty, cheating prediction, intervention strategies, machine learning, psychological traits, academic integrity

Procedia PDF Downloads 20
841 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques

Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu

Abstract:

Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis, the build-up of fatty deposits inside the arteries. The study introduces an innovative methodology that leverages cloud-based platforms like AWS Live Streaming and Artificial Intelligence (AI) to detect and prevent CHD symptoms early in web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as API integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing for the refinement and training of AI algorithms under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which were then processed and analysed using advanced AI techniques. The novel aspect of our methodology lies in the simulation of CHD symptom progression, which provides a dynamic training environment for our AI models, enhancing their predictive accuracy and robustness. Model Development: We developed a machine learning model trained on both real and simulated datasets, incorporating a variety of algorithms, including neural networks and an ensemble learning model, to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve, adapting to new data inputs and improving its predictive performance over time. Results and Findings: The deployment of our model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms, surpassing existing models. The precision and recall metrics stood at 89% and 91%, respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI in the early detection of CHD. Societal Implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.

Keywords: coronary heart disease, cloud-based ai, machine learning, novel simulation techniques, early detection, preventive healthcare

Procedia PDF Downloads 64
840 Talent Management through Integration of Talent Value Chain and Human Capital Analytics Approaches

Authors: Wuttigrai Ngamsirijit

Abstract:

Talent management in today’s modern organizations has become data-driven due to the demand for objective human resource decision making and the development of analytics technologies. HR managers have faced obstacles in exploiting data and information to make effective talent management decisions. These include process-based data and records; insufficient human capital-related measures and metrics; a lack of capability for strategic data modeling; and the time consumed in adding up numbers before decisions can be made. This paper proposes a framework for talent management through the integration of talent value chain and human capital analytics approaches. It encompasses key data, measures, and metrics regarding strategic talent management decisions along the organizational and talent value chain. Moreover, specific predictive and prescriptive models incorporating these data and information are recommended to help managers understand the state of talent, the gaps in managing talent and the organization, and the ways to develop optimized talent strategies.

Keywords: decision making, human capital analytics, talent management, talent value chain

Procedia PDF Downloads 187
838 The Effect of Adolescents’ Grit on STEM Creativity: The Mediation of Creative Self-Efficacy and the Moderation of Future Time Perspective

Authors: Han Kuikui

Abstract:

As the reserve force of technological innovation talent, adolescents possess STEM creativity that is not only pivotal to achieving STEM education goals but also provides a viable path for reforming science curricula in compulsory education and cultivating innovative talent in China. To investigate the relationship among adolescents' grit, creative self-efficacy, future time perspective, and STEM creativity, a survey was conducted in 2023 using stratified random sampling. A total of 1263 junior high school students in grades 7 to 9 from the main urban areas of Chongqing were sampled. The results indicated that (1) grit significantly and positively predicts adolescents' creative self-efficacy and STEM creativity; (2) creative self-efficacy mediates the positive relationship between grit and adolescents' STEM creativity; (3) the mediating role of creative self-efficacy is moderated by future time perspective, such that with a higher future time perspective, the positive predictive effect of grit on creative self-efficacy is stronger, which in turn positively affects STEM creativity.

Keywords: grit, stem creativity, creative self-efficacy, future time perspective

Procedia PDF Downloads 52
838 A Hybrid Feature Selection and Deep Learning Algorithm for Cancer Disease Classification

Authors: Niousha Bagheri Khulenjani, Mohammad Saniee Abadeh

Abstract:

Learning from very big datasets is a significant problem for most present data mining and machine learning algorithms. MicroRNA (miRNA) data form one of the important large genomic, non-coding datasets representing genome sequences. In this paper, a hybrid method for the classification of miRNA data is proposed. Due to the variety of cancers and the high number of genes, analyzing miRNA datasets has been a challenging problem for researchers. The number of features is high relative to the number of samples, and the data suffer from class imbalance. A feature selection method is used to select the features with more ability to distinguish classes and to eliminate obscure features. Afterward, a Convolutional Neural Network (CNN) classifier is utilized for the classification of cancer types, employing a genetic algorithm to find optimized CNN hyper-parameters. In order to make the classification process of the CNN faster, a Graphics Processing Unit (GPU) is recommended for performing the mathematical computations in parallel. The proposed method is tested on a real-world dataset with 8,129 patients, 29 different types of tumors, and 1,046 miRNA biomarkers, taken from The Cancer Genome Atlas (TCGA) database.

Keywords: cancer classification, feature selection, deep learning, genetic algorithm

Procedia PDF Downloads 111
837 Attribute Based Comparison and Selection of Modular Self-Reconfigurable Robot Using Multiple Attribute Decision Making Approach

Authors: Manpreet Singh, V. P. Agrawal, Gurmanjot Singh Bhatti

Abstract:

Over the last decades, there has been significant technological advancement in the field of robotics, and a number of modular self-reconfigurable robots have been introduced that can help in space exploration, bucket-to-stuff tasks, and search and rescue operations during earthquakes, etc. As there are numerous self-reconfigurable robots, choosing the optimum one is always a concern for the robot user, since there is an increase in available features, facilities, complexity, etc. The objective of this research work is to present a multiple attribute decision making based methodology for the coding, evaluation, comparison, ranking and selection of modular self-reconfigurable robots using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) approach. In total, 86 attributes that affect the structure and performance are identified. A database of modular self-reconfigurable robots on the basis of the different pertinent attributes is generated. This database is very useful for the user when selecting a robot that suits their operational needs. Two visual methods, namely the linear graph and the spider chart, are proposed for ranking modular self-reconfigurable robots. Using five robots (Atron, Smores, Polybot, M-Tran 3, Superbot), an example is illustrated, and the ranking of the robots is successfully done, which shows that Smores is the best robot for the operational need illustrated; this methodology is found to be very effective and simple to use.
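
A compact TOPSIS sketch is given below to show the ranking mechanics; the decision matrix, weights, and criteria directions are toy values, not the paper's 86-attribute database for the five robots.

```python
# Minimal TOPSIS sketch; the decision matrix, weights, and benefit/cost directions
# are toy values, not the paper's attribute database for the five robots.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j]=True if larger is better."""
    m = np.asarray(matrix, float)
    norm = m / np.sqrt((m ** 2).sum(axis=0))          # vector normalisation
    v = norm * weights                                 # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                     # closeness coefficient

robots = ["Atron", "Smores", "Polybot", "M-Tran 3", "Superbot"]
matrix = [[7, 3, 5], [9, 2, 8], [6, 4, 6], [8, 3, 7], [7, 5, 6]]   # toy scores
scores = topsis(matrix, weights=[0.5, 0.2, 0.3], benefit=[True, False, True])
print(sorted(zip(robots, scores), key=lambda t: -t[1]))
```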

Keywords: self-reconfigurable robots, MADM, TOPSIS, morphogenesis, scalability

Procedia PDF Downloads 223
836 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm

Authors: Ghada Badr, Arwa Alturki

Abstract:

The biological function of an RNA molecule depends on its structure. The objective of alignment is to find the homology between two or more RNA secondary structures. Knowing the common functionalities between two RNA structures allows a better understanding of them and the discovery of other relationships between them. Besides, identifying non-coding RNAs – those not translated into a protein – is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed. Most of these methods perform partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignment. Less attention is given in the literature to the use of efficient RNA structure representations, and structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N2) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and where N is the maximum number of components in the two structures. The proposed algorithm compares the two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments are conducted, illustrating the efficiency of the CompPSA algorithm when compared to other approaches on different real and simulated datasets. The CompPSA algorithm shows an accurate similarity measure between components. The algorithm gives the user the flexibility to align the two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm proves scalable and efficient in time and memory performance.

Keywords: alignment, RNA secondary structure, pairwise, component-based, data mining

Procedia PDF Downloads 458
835 Data Mining Meets Educational Analysis: Opportunities and Challenges for Research

Authors: Carla Silva

Abstract:

Recent developments in information and communication technology enable us to acquire, collect, and analyse data in various fields of socioeconomic and technological systems. Along with the increase in economic globalization and the evolution of information technology, data mining has become an important approach for economic data analysis. As a result, there has been a critical need for automated approaches to the effective and efficient usage of massive amounts of educational data, in order to support institutions in strategic planning and investment decision-making. In this article, we address data from several different perspectives and define the application of such data to the sciences. Many believe that 'big data' will transform business, government, and other aspects of the economy. We discuss how new data may impact educational policy and educational research. Large-scale administrative data sets and proprietary private sector data can greatly improve the way we measure, track, and describe educational activity and educational impact. We also consider whether the big data predictive modeling tools that have emerged in statistics and computer science may prove useful in education and, furthermore, in economics. Finally, we highlight a number of challenges and opportunities for future research.

Keywords: data mining, research analysis, investment decision-making, educational research

Procedia PDF Downloads 358
834 Self-denigration in Doctoral Defense Sessions: Scale Development and Validation

Authors: Alireza Jalilifar, Nadia Mayahi

Abstract:

The dissertation defense as a complicated conflict-prone context entails the adoption of elegant interactional strategies, one of which is self-denigration. This study aimed to develop and validate a self-denigration model that fits the context of doctoral defense sessions in applied linguistics. Two focus group discussions provided the basis for developing this conceptual model, which assumed 10 functions for self-denigration, namely good manners, modesty, affability, altruism, assertiveness, diffidence, coercive self-deprecation, evasion, diplomacy, and flamboyance. These functions were used to design a 40-item questionnaire on the attitudes of applied linguists concerning self-denigration in defense sessions. The confirmatory factor analysis of the questionnaire indicated the predictive ability of the measurement model. The findings of this study suggest that self-denigration in doctoral defense sessions is the social representation of the participants’ values, ideas and practices adopted as a negotiation strategy and a conflict management policy for the purpose of establishing harmony and maintaining resilience. This study has implications for doctoral students and academics and illuminates further research on self-denigration in other contexts.

Keywords: academic discourse, politeness, self-denigration, grounded theory, dissertation defense

Procedia PDF Downloads 137
833 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of the Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where at each step, a set of recommendations are presented to the operator to aid in decision-making. In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to significantly scale up to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities to take further action. Other recommendations include a selection of road closures, i.e., soft interventions, or to continue monitoring. We evaluate the performance of the proposed system using simulated scenarios where the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach to motivate a machine learning approach, based on reinforcement learning in order to relax some of the current limiting assumptions.
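
The Bayesian update over candidate targets can be sketched as below; the likelihood model (a decaying function of remaining travel time) and the numbers are assumptions for illustration, not the authors' fitted model or the PeMS-based graph.

```python
# Conceptual sketch of the posterior update over candidate targets; the likelihood
# model (exponential decay in remaining travel time) and all numbers are assumptions,
# not the authors' fitted model or the PeMS-derived graph.
import numpy as np

def update_posterior(prior, travel_times, beta=0.5):
    """prior: P(target) before the new detection; travel_times: remaining travel
    time from the detected node to each candidate target. A shorter remaining
    time is treated as evidence that the target is the intended destination."""
    likelihood = np.exp(-beta * np.asarray(travel_times, float))
    post = prior * likelihood
    return post / post.sum()

targets = ["A", "B", "C"]
posterior = np.full(3, 1 / 3)                          # uniform prior over targets
for times in [[12, 9, 20], [10, 6, 22], [9, 4, 25]]:   # successive detections
    posterior = update_posterior(posterior, times)
print(dict(zip(targets, posterior.round(3))))
```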

Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights

Procedia PDF Downloads 115
832 Artificial Neural Network-Based Short-Term Load Forecasting for Mymensingh Area of Bangladesh

Authors: S. M. Anowarul Haque, Md. Asiful Islam

Abstract:

Electrical load forecasting is considered to be one of the most indispensable parts of a modern-day electrical power system. To ensure a reliable and efficient supply of electric energy, special emphasis should be placed on the predictive aspect of electricity supply. Artificial Neural Network-based approaches have emerged as a significant area of interest in electric load forecasting research. This paper proposes an Artificial Neural Network model based on the particle swarm optimization algorithm for improved electric load forecasting for Mymensingh, Bangladesh. The forecasting model is developed and simulated in the MATLAB environment with a large number of training datasets. The model is trained on eight input parameters, including historical load and weather data. The predicted load data are then compared with an available dataset for validation. The proposed neural network model proves to be more reliable in terms of day-wise load forecasting for Mymensingh, Bangladesh.

Keywords: load forecasting, artificial neural network, particle swarm optimization

Procedia PDF Downloads 171
831 EGFR Signal Induced-Nuclear Translocation of Beta-catenin and PKM2 Promotes HCC Malignancy and Indicates Early Recurrence After Curative Resection

Authors: Fangtian Fan, Zhaoguo Liu, Yin Lu

Abstract:

Early recurrence (ER) (< 1 year) after liver resection is one of the most important factors that impact the prognosis of patients with hepatocellular carcinoma (HCC). However, the molecular mechanisms and predictive indexes of ER after curative resection remain largely unknown. The present study aimed to explore the role of EGFR signaling in EMT and early recurrence of HCC after curative resection and to elucidate the molecular mechanisms. Our results showed that nuclear beta-catenin/PKM2 was an independent predictor of early recurrence after curative resection in EGFR-overexpressing HCC. Mechanistic investigation indicated that the nuclear accumulation of beta-catenin and PKM2 induced by EGFR signaling promoted HCC cell invasion and proliferation, which were required for early recurrence of HCC. These effects were mediated by the PI3K/AKT and ERK pathways rather than the canonical Wnt signaling pathway. In conclusion, EGFR signal-induced nuclear translocation of beta-catenin and PKM2 promotes HCC malignancy and indicates early recurrence after curative resection.

Keywords: beta-catenin, early recurrence, hepatocellular carcinoma, malignancy, PKM2

Procedia PDF Downloads 357
830 The Use of Hec Ras One-Dimensional Model and Geophysics for the Determination of Flood Zones

Authors: Ayoub El Bourtali, Abdessamed Najine, Amrou Moussa Benmoussa

Abstract:

It is becoming more and more necessary to manage flood risk, and such management must include all stakeholders and all possible means available. The goal of this work is to map the vulnerability of the Oued Derna-region Tagzirt flood zone in this semi-arid region. This involves implementing predictive models and flood control, which allows for the development of flood risk prevention plans. In this study, a resistivity survey was conducted over the area to locate and evaluate soil characteristics in order to calculate discharges and prevent flooding in the study area. The development of a one-dimensional (1D) hydrodynamic model of the Derna River was carried out in HEC-RAS 5.0.4 using a combination of survey data, spatially extracted cross-sections and recorded river flows. The study area has been hit by several extreme floods, causing substantial property loss and loss of life. This research focuses on the most recent flood events; based on the collected data, the water level, river flow and river cross-sections were analyzed. A set of flood levels was obtained as the output of the hydraulic model, and the accuracy of the simulated flood levels and velocities was evaluated.

Keywords: derna river, 1D hydrodynamic model, flood modelling, HEC-RAS 5.0.4

Procedia PDF Downloads 312
829 Development and Characterisation of a Microbioreactor 'Cassette' for Cell Culture Applications

Authors: Nelson Barrientos, Matthew J. Davies, Marco C. Marques, Darren N. Nesbeth, Gary J. Lye, Nicolas Szita

Abstract:

Microbioreactor technology is making important advances towards its application in cell culture and bioprocess development. In particular, the technology promises flexible and controllable devices capable of performing parallelised experimentation at low cost. Currently, state-of-the-art methods (e.g., optical sensors) allow accurate monitoring of microbioreactor operation. In addition, the laminar flow regime encountered in these devices allows more predictive fluid dynamics modelling, improving the control over the soluble, physical and mechanical environment of the cells. This work describes the development and characterisation of a novel microbioreactor 'cassette' system (microbioreactor volume: 150 μL). The volumetric oxygen transfer coefficient (KLa) and mixing time have been characterised to be between 25 and 113 h-1 and between 0.1 and 0.5 s, respectively. In addition, residence time distribution (RTD) analysis confirms that the reactor operates under well-mixed conditions. Finally, Staphylococcus carnosus TM300 growth is demonstrated via batch culture experiments. Future work consists of expanding the optics of the microbioreactor design to include the monitoring of variables such as fluorescent protein expression, among others.
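
The KLa characterisation can be illustrated with a standard dynamic gassing-out fit, as sketched below; the dissolved-oxygen trace and the parameter values are synthetic, not measurements from the cassette described in the abstract.

```python
# Sketch of estimating KLa from a dynamic gassing-out experiment via the standard
# model dC/dt = KLa (C* - C); the dissolved-oxygen trace below is synthetic,
# not data from the microbioreactor cassette.
import numpy as np

def estimate_kla(t, c, c_star):
    """Linear fit of ln((C* - C(t)) / (C* - C(0))) = -KLa * t."""
    c = np.asarray(c, float)
    y = np.log((c_star - c) / (c_star - c[0]))
    slope, _ = np.polyfit(t, y, 1)
    return -slope

t = np.linspace(0, 120, 13) / 3600            # seconds converted to hours
c_star, kla_true = 100.0, 60.0                # % air saturation, h^-1 (assumed)
c = c_star - (c_star - 5.0) * np.exp(-kla_true * t)
print(f"estimated KLa ~ {estimate_kla(t, c, c_star):.1f} h^-1")
```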

Keywords: microbioreactor, cell-culture, fermentation, microfluidics

Procedia PDF Downloads 415
828 Risk Assessment and Management Using Machine Learning Models

Authors: Lagnajeet Mohanty, Mohnish Mishra, Pratham Tapdiya, Himanshu Sekhar Nayak, Swetapadma Singh

Abstract:

In the era of global interconnectedness, effective risk assessment and management are critical for organizational resilience. This review explores the integration of machine learning (ML) into risk processes, examining its transformative potential and the challenges it presents. The literature reveals ML's success in sectors like consumer credit, demonstrating enhanced predictive accuracy, adaptability, and potential cost savings. However, ethical considerations, interpretability issues, and the demand for skilled practitioners pose limitations. Looking forward, the study identifies future research scopes, including refining ethical frameworks, advancing interpretability techniques, and fostering interdisciplinary collaborations. The synthesis of limitations and future directions highlights the dynamic landscape of ML in risk management, urging stakeholders to navigate challenges innovatively. This abstract encapsulates the evolving discourse on ML's role in shaping proactive and effective risk management strategies in our interconnected and unpredictable global landscape.

Keywords: machine learning, risk assessment, ethical considerations, financial inclusion

Procedia PDF Downloads 72