Search results for: virtual machine
2829 A Highly Accurate Computer-Aided Diagnosis: CAD System for the Diagnosis of Breast Cancer by Using Thermographic Analysis
Authors: Mahdi Bazarganigilani
Abstract:
Computer-aided diagnosis (CAD) systems can play a crucial role in diagnosing serious diseases such as breast cancer at an early stage. In this paper, a CAD system for the diagnosis of breast cancer was introduced and evaluated. This CAD system was developed by using spatio-temporal analysis of data on a set of consecutive thermographic images by employing wavelet transformation. By using this analysis, a very accurate machine learning model using random forest was obtained. The final results showed a promising accuracy of 91% in terms of the F1 measure among 200 patients' sample data. The CAD system was further extended to obtain a detailed analysis of the effect of smaller sub-areas of each breast on the occurrence of cancer.
Keywords: computer-aided diagnosis systems, thermographic analysis, spatio-temporal analysis, image processing, machine learning
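The 91% figure above is an F1 measure. As a reminder of how that metric is computed, a minimal sketch follows; the labels are hypothetical, purely for illustration:

```python
def f1_score(y_true, y_pred, positive=1):
    """Compute the F1 measure (harmonic mean of precision and recall)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical labels for illustration only (1 = malignant, 0 = benign)
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(round(f1_score(y_true, y_pred), 3))
```

F1 is the harmonic mean of precision and recall, so it penalizes a classifier that trades one for the other, a useful property for imbalanced diagnosis data.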
Procedia PDF Downloads 212

2828 Immersive and Non-Immersive Virtual Reality Applied to the Cervical Spine Assessment
Authors: Pawel Kiper, Alfonc Baba, Mahmoud Alhelou, Giorgia Pregnolato, Michela Agostini, Andrea Turolla
Abstract:
Impairment of cervical spine mobility is often related to pain triggered by musculoskeletal disorders or direct traumatic injuries of the spine. To date, these disorders are assessed with goniometers and inclinometers, which are the most popular devices used in clinical settings. Nevertheless, these technologies usually allow measurement of no more than two-dimensional range of motion (ROM) values in static conditions. Conversely, the wider use of motion tracking systems able to measure 3 to 6 degrees of freedom dynamically, while performing standard ROM assessment, is limited due to technical complexities in preparing the setup and high costs. Thus, motion tracking systems are primarily used in research. These systems are an integral part of virtual reality (VR) technologies, which can be used for measuring spine mobility. To our knowledge, the accuracy of VR measures has not yet been studied within virtual environments. Thus, the aim of this study was to test the reliability of a protocol for the assessment of sensorimotor function of the cervical spine in a population of healthy subjects and to compare whether using immersive or non-immersive VR for visualization affects the performance. Both VR assessments consisted of the same five exercises, and a random sequence determined which of the environments (i.e. immersive or non-immersive) was used as the first assessment. Subjects were asked to perform head rotation (right and left), flexion, extension and lateral flexion (right and left side bending). Each movement was executed five times. Moreover, the participants were invited to perform head reaching movements, i.e. head movements toward 8 targets placed along a circular perimeter every 45°, visualized one-by-one in random order. Finally, head repositioning movement was obtained by head movement toward the same 8 targets as for reaching, followed by repositioning to the start point. Thus, each participant performed 46 tasks during assessment.
Main measures were: ROM of rotation, flexion, extension, lateral flexion and complete kinematics of the cervical spine (i.e. number of completed targets, time of execution (seconds), spatial length (cm), angle distance (°), jerk). Thirty-five healthy participants (i.e. 14 males and 21 females, mean age 28.4±6.47) were recruited for the cervical spine assessment with immersive and non-immersive VR environments. Comparison analysis demonstrated that: head right rotation (p=0.027), extension (p=0.047), flexion (p<0.001), time (p=0.001), spatial length (p=0.004), jerk target (p=0.032), trajectory repositioning (p=0.003), and jerk target repositioning (p=0.007) were significantly better in immersive than non-immersive VR. A regression model showed that assessment in immersive VR was influenced by height, trajectory repositioning (p<0.05), and handedness (p<0.05), whereas in non-immersive VR performance was influenced by height, jerk target (p=0.002), head extension, jerk target repositioning (p=0.002), and by age, head flex/ext, trajectory repositioning, and weight (p=0.040). The results of this study showed higher accuracy of cervical spine assessment when executed in immersive VR. The assessment of ROM and kinematics of the cervical spine can be affected by independent and dependent variables in both immersive and non-immersive VR settings.
Keywords: virtual reality, cervical spine, motion analysis, range of motion, measurement validity
Procedia PDF Downloads 167

2827 Random Access in IoT Using Naïve Bayes Classification
Authors: Alhusein Almahjoub, Dongyu Qiu
Abstract:
This paper deals with the random access procedure in next-generation networks and presents a solution to reduce total service time (TST), one of the most important performance metrics in current and future internet of things (IoT) based networks. The proposed solution focuses on the calculation of the optimal transmission probability, which maximizes the success probability and reduces TST. It uses the number of idle preambles observed in every time slot and, based on it, estimates the number of backlogged IoT devices using Naïve Bayes estimation, a type of supervised learning in the machine learning domain. The estimation of backlogged devices is necessary since the optimal transmission probability depends on it and the eNodeB does not have this information. Simulations carried out in MATLAB verify that the proposed solution gives excellent performance.
Keywords: random access, LTE/LTE-A, 5G, machine learning, Naïve Bayes estimation
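The estimator itself is not detailed in this abstract; a minimal moment-matching sketch of the underlying idea, inferring the backlog N from the observed idle preambles and then setting the classic ALOHA-style transmission probability, might look like the following (all numbers are illustrative assumptions, and this stands in for, rather than reproduces, the paper's Naïve Bayes estimator):

```python
import math

def estimate_backlog(idle_preambles, total_preambles, tx_prob):
    """Estimate the number of backlogged devices N from the observed number of
    idle preambles. Each device transmits with probability tx_prob and picks one
    of total_preambles preambles uniformly, so E[idle] = M * (1 - tx_prob/M)**N;
    inverting that expectation gives the estimate below."""
    M = total_preambles
    frac_idle = idle_preambles / M
    return math.log(frac_idle) / math.log(1 - tx_prob / M)

def optimal_tx_prob(n_backlogged, total_preambles):
    """Classic slotted-ALOHA-style rule: transmit with probability M/N, capped at 1."""
    return min(1.0, total_preambles / n_backlogged)

# Illustrative slot: 20 of 54 preambles idle, devices currently transmitting
# with probability 0.5
n_hat = estimate_backlog(idle_preambles=20, total_preambles=54, tx_prob=0.5)
print(round(n_hat, 1), round(optimal_tx_prob(n_hat, 54), 2))
```

The eNodeB can repeat this estimate every slot and retune the broadcast transmission probability accordingly.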
Procedia PDF Downloads 146

2826 Trust: The Enabler of Knowledge-Sharing Culture in an Informal Setting
Authors: Emmanuel Ukpe, S. M. F. D. Syed Mustapha
Abstract:
Trust in an organization has been perceived as one of the key factors behind knowledge sharing, mainly in an unstructured work environment. In an informal working environment, instilling trust among individuals is a challenge, even more so in a virtual environment. This study contributes a framework for building trust in an unstructured organization to support knowledge sharing in a virtual environment. An artifact called KAPE (Knowledge Acquisition, Processing, and Exchange), which incorporates the framework, was developed for knowledge sharing in informal organizations. It was applied to cassava farmers to facilitate knowledge sharing using a web-based platform. A survey was conducted; data were collected from 382 farmers from 21 farm communities. Multiple regression, Cronbach's alpha reliability testing, Tukey's honestly significant difference (HSD) analysis, one-way analysis of variance (ANOVA), and trust acceptance measures (TAM) were used to test the hypothesis and to determine noteworthy relationships. The results show a significant difference in knowledge sharing between farmers who scored high on the trust acceptance factors in the model (M = 3.66, SD = .93) and those who scored low (M = 2.08, SD = .28), (t(48) = 5.69, p < .001). Furthermore, when applying cognitive expectancy theory, farmers with cognitive consonance show a higher level of trust and satisfaction with knowledge and information from KAPE, compared with those with a low level of cognitive dissonance. These results imply that the adopted trust model KAPE positively improved knowledge-sharing activities in an informal environment amongst rural farmers.
Keywords: trust, knowledge sharing, knowledge acquisition, processing and exchange, KAPE
Procedia PDF Downloads 121

2825 Recommendation Systems for Cereal Cultivation Using Advanced Causal Inference Modeling
Authors: Md Yeasin, Ranjit Kumar Paul
Abstract:
In recent years, recommendation systems have become indispensable tools for agricultural systems. Accurate and timely recommendations can significantly impact crop yield and overall productivity. Causal inference modeling aims to establish cause-and-effect relationships by identifying the impact of variables or factors on outcomes, enabling more accurate and reliable recommendations. New advancements in causal inference models can be found in the literature. With the advent of the modern era, deep learning and machine learning models have emerged as efficient tools for modeling. This study proposes an innovative approach that enhances recommendation systems with a machine learning-based causal inference model. By considering the causal effect and opportunity cost of covariates, the proposed system can provide more reliable and actionable recommendations for cereal farmers. To validate the effectiveness of the proposed approach, experiments are conducted using cereal cultivation data from eastern India. Comparative evaluations are performed against existing correlation-based recommendation systems, demonstrating the superiority of the advanced causal inference modeling approach in terms of recommendation accuracy and impact on crop yield. Overall, it empowers farmers with personalized recommendations tailored to their specific circumstances, leading to optimized decision-making and increased crop productivity.
Keywords: agriculture, causal inference, machine learning, recommendation system
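The paper's causal model is not reproduced here, but its core building block, an estimated treatment effect driving a recommendation, can be sketched minimally. This is a difference-in-means on hypothetical plot yields; the actual system also adjusts for covariates and opportunity costs, which this sketch omits:

```python
def average_treatment_effect(yields_treated, yields_control):
    """Difference-in-means estimate of the average treatment effect (ATE),
    a minimal causal-inference building block."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(yields_treated) - mean(yields_control)

# Hypothetical plot yields (t/ha) under two irrigation schedules
with_irrigation = [4.1, 3.9, 4.4, 4.0]
without_irrigation = [3.2, 3.5, 3.1, 3.4]
ate = average_treatment_effect(with_irrigation, without_irrigation)
recommendation = "adopt irrigation schedule" if ate > 0 else "keep current practice"
print(round(ate, 2), recommendation)
```

A correlation-based recommender would rank practices by observed association with yield; a causal recommender ranks them by estimated effect, which is what makes the advice actionable.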
Procedia PDF Downloads 81

2824 Creation of a Test Machine for the Scientific Investigation of Chain Shot
Authors: Mark McGuire, Eric Shannon, John Parmigiani
Abstract:
Timber harvesting increasingly involves mechanized equipment. This has increased the efficiency of harvesting but has also introduced worker-safety concerns. One such concern arises from the use of harvesters. During operation, harvesters subject saw chain to large dynamic mechanical stresses. These stresses can, under certain conditions, cause the saw chain to fracture. The high speed of harvester saw chain can cause the resulting open chain loop to fracture a second time due to the dynamic loads placed upon it as it travels through space. If a second fracture occurs, it can result in a projectile consisting of one to several chain links. This projectile is referred to as a chain shot. It has speeds similar to a bullet but typically has greater mass and is a significant safety concern. Numerous examples exist of chain shots penetrating bullet-proof barriers and causing severe injury and death. Improved harvester-cab barriers can help prevent injury; however, a comprehensive scientific understanding of chain shot is required to consistently reduce or prevent it. Obtaining this understanding requires a test machine with the capability to cause chain shot to occur under carefully controlled conditions and accurately measure the response. Worldwide, few such test machines exist. Those that do focus on validating the ability of barriers to withstand a chain shot impact rather than obtaining a scientific understanding of the chain shot event itself. The purpose of this paper is to describe the design, fabrication, and use of a test machine capable of a comprehensive scientific investigation of chain shot. This machine can test all commercially available saw chains and bars at chain tensions and speeds meeting and exceeding those typically encountered in harvester use, and accurately measure the corresponding key technical parameters. The test machine was constructed inside of a standard shipping container.
This provides space for both an operator station and a test chamber. In order to contain the chain shot under any possible test conditions, the test chamber was lined with a base layer of AR500 steel followed by an overlay of HDPE. To accommodate varying bar orientations and fracture-initiation sites, the entire saw chain drive unit and bar mounting system is modular and capable of being located anywhere in the test chamber. The drive unit consists of a high-speed electric motor with a flywheel. Standard Ponsse harvester head components are used for bar mounting and chain tensioning. Chain lubrication is provided by a separate peristaltic pump. Chain fracture is initiated in accordance with ISO standard 11837. Measured parameters include shaft speed, motor vibration, bearing temperatures, motor temperature, motor current draw, hydraulic fluid pressure, chain force at fracture, and high-speed camera images. Results show that the machine is capable of consistently causing chain shot. Measurement output shows fracture location and the force associated with fracture as a function of saw chain speed and tension. Use of this machine will result in a scientific understanding of chain shot and consequently improved products and greater harvester operator safety.
Keywords: chain shot, safety, testing, timber harvesters
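To get a feel for why chain shot is dangerous, the projectile's kinetic energy can be estimated with the standard formula E = (1/2)mv²; the mass and speed below are illustrative assumptions, not measurements from this machine:

```python
def kinetic_energy(mass_kg, speed_m_s):
    """Kinetic energy E = 1/2 * m * v^2 of a projectile, in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

# Hypothetical figures for illustration: a multi-link fragment of about 10 g
# leaving the bar at an assumed chain speed of 40 m/s
print(round(kinetic_energy(0.010, 40.0), 1))
```

Even at these conservative assumed figures the fragment carries bullet-like energy density, which is consistent with the reports above of chain shot defeating bullet-proof barriers.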
Procedia PDF Downloads 153

2823 Off-Topic Text Detection System Using a Hybrid Model
Authors: Usama Shahid
Abstract:
Be it written documents, news columns, or students' essays, verifying the content can be a time-consuming task. Apart from spelling and grammar mistakes, the proofreader is also supposed to verify whether the content included in the essay or document is relevant or not. The irrelevant content in any document or essay is referred to as off-topic text, and in this paper, we address the problem of off-topic text detection from a document using machine learning techniques. Our study aims to identify the off-topic content from a document using an echo state network model, and we also compare its results with other models. A previous study uses convolutional neural networks and TF-IDF to detect off-topic text. We rearrange the existing datasets and take new classifiers along with new word embeddings and implement them on existing and new datasets in order to compare the results with the previously existing CNN model.
Keywords: off topic, text detection, echo state network, machine learning
Procedia PDF Downloads 88

2822 Virtual Reality for Chemical Engineering Unit Operations
Authors: Swee Kun Yap, Sachin Jangam, Suraj Vasudevan
Abstract:
Experiential learning is dubbed as a highly effective way to enhance learning. Virtual reality (VR) is thus a helpful tool in providing a safe, memorable, and interactive learning environment. A class of 49 fluid mechanics students participated in starting up a pump, one of the most widely used pieces of equipment in the chemical industry, in VR. They experience the process in VR to familiarize themselves with the safety training and the standard operating procedure (SOP) in guided mode. Students subsequently observe their peers (in groups of 4 to 5) complete the same training. The training first brings each user through personal protection equipment (PPE) selection, before guiding the user through a series of steps for pump startup. One of the most common pieces of feedback from industry concerns the weakness of our graduates in pump design and operation. Traditional fluid mechanics is a highly theoretical module loaded with engineering equations, providing limited opportunity for visualization and operation. With the VR pump, students can now learn to start up, shut down, troubleshoot and observe the intricacies of a centrifugal pump in a safe and controlled environment, thereby bridging the gap between theory and practical application. Following the completion of the guided mode operation, students then individually complete the VR assessment for pump startup on the same day, which requires students to complete the same series of steps, without any cues given in VR, to test their recollection rate. While most students miss out a few minor steps such as the checking of lubrication oil and the closing of minor drain valves before pump priming, all the students scored full marks in the PPE selection, and over 80% of the students were able to complete all the critical steps that are required to start up a pump safely.
The students were subsequently tested for their recollection rate by means of an online quiz 3 weeks later, and it was again found that over 80% of the students were able to complete the critical steps in the correct order. In the survey conducted, students reported that the VR experience has been enjoyable and enriching, and 79.5% of the students voted to include VR as a positive supplementary exercise in addition to traditional teaching methods. One of the more notable pieces of feedback is the higher ease of noticing and learning from mistakes as an observer rather than as a VR participant. Thus, the cycling between being a VR participant and an observer has helped tremendously in their knowledge retention. This reinforces the positive impact VR has on learning.
Keywords: experiential learning, learning by doing, pump, unit operations, virtual reality
Procedia PDF Downloads 140

2821 Early Prediction of Diseases in a Cow for Cattle Industry
Authors: Ghufran Ahmed, Muhammad Osama Siddiqui, Shahbaz Siddiqui, Rauf Ahmad Shams Malick, Faisal Khan, Mubashir Khan
Abstract:
In this paper, a machine learning-based approach for the early prediction of diseases in cows is proposed. Different machine learning algorithms are applied to extract useful patterns from the available dataset. Technology has changed today's world in every aspect of life. Similarly, advanced technologies have been developed in livestock and dairy farming to monitor dairy cows in various aspects. Dairy cattle monitoring is crucial as it plays a significant role in milk production around the globe. Moreover, it has become necessary for farmers to adopt the latest early prediction technologies as food demand is increasing with population growth. This highlights the importance of state-of-the-art technologies in analyzing dairy cows' activities. It is not easy to predict the activities of a large number of cows on a farm, so the system makes it very convenient for farmers, as it provides all the solutions under one roof. The cattle industry's productivity is boosted as any disease on a cattle farm is diagnosed early and hence treated early, based on the machine learning output received. The learning models are already set and interpret the data collected in a centralized system. Basically, different algorithms are run on the dataset received to analyze milk quality and to track cows' health, location, and safety. The learning algorithm draws patterns from the data, which makes it easier for farmers to study any animal's behavioral changes. With the emergence of machine learning algorithms and the Internet of Things, accurate tracking of animals is possible as the rate of error is minimized. As a result, milk productivity is increased. IoT with ML capability has given a new phase to the cattle farming industry by increasing the yield in the most cost-effective and time-saving manner.
Keywords: IoT, machine learning, health care, dairy cows
Procedia PDF Downloads 73

2820 Heterogenous Dimensional Super Resolution of 3D CT Scans Using Transformers
Authors: Helen Zhang
Abstract:
Accurate segmentation of the airways from CT scans is crucial for early diagnosis of lung cancer. However, the existing airway segmentation algorithms often rely on thin-slice CT scans, which can be inconvenient and costly. This paper presents a set of machine learning-based 3D super-resolution algorithms along heterogeneous dimensions to improve the resolution of thicker CT scans to reduce the reliance on thin-slice scans. To evaluate the efficacy of the super-resolution algorithms, quantitative assessments using PSNR (Peak Signal to Noise Ratio) and SSIM (Structural SIMilarity index) were performed. The impact of super-resolution on airway segmentation accuracy is also studied. The proposed approach has the potential to make airway segmentation more accessible and affordable, thereby facilitating early diagnosis and treatment of lung cancer.
Keywords: 3D super-resolution, airway segmentation, thin-slice CT scans, machine learning
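Both quality metrics are standard and easy to state precisely. A minimal sketch on flattened pixel lists follows; note the SSIM here is the simplified single-window variant, whereas the usual metric averages this statistic over local sliding windows:

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio between a reference slice and a reconstruction."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Simplified *global* SSIM computed over the whole image as one window;
    the standard metric averages this over local sliding windows."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Hypothetical reference slice and super-resolved reconstruction (flattened pixels)
ref = [52.0, 55.0, 61.0, 59.0, 79.0, 61.0, 76.0, 61.0]
rec = [50.0, 56.0, 60.0, 60.0, 80.0, 60.0, 75.0, 62.0]
print(round(psnr(ref, rec), 2), round(ssim_global(ref, rec), 4))
```

Higher is better for both: PSNR is unbounded above (infinite for identical images), while SSIM is capped at 1.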
Procedia PDF Downloads 120

2819 A Combination of Independent Component Analysis, Relative Wavelet Energy and Support Vector Machine for Mental State Classification
Authors: Nguyen The Hoang Anh, Tran Huy Hoang, Vu Tat Thang, T. T. Quyen Bui
Abstract:
Mental state classification is an important step toward realizing a control system based on electroencephalography (EEG) signals, which could benefit many paralyzed people, including those with locked-in syndrome or amyotrophic lateral sclerosis. Considering that EEG signals are nonstationary and often contaminated by various types of artifacts, classifying thoughts into correct mental states is not a trivial problem. In this work, our contribution is a novel model that integrates different techniques: independent component analysis (ICA), relative wavelet energy, and a support vector machine (SVM) for the same task. We applied our model to classify thoughts in two types of experiments, with either two or three mental states. The experimental results show that the presented model outperforms other models using artificial neural networks, k-nearest neighbors, etc.
Keywords: EEG, ICA, SVM, wavelet
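Of the three ingredients, relative wavelet energy is the easiest to make concrete: the energy in each wavelet band divided by the total energy. Below is a minimal sketch using a level-by-level Haar transform on a hypothetical 8-sample segment; the paper's exact wavelet family and decomposition depth are not specified in the abstract, so both are assumptions here:

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: approximation and detail coefficients."""
    approx = [(signal[i] + signal[i + 1]) / 2 ** 0.5 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 ** 0.5 for i in range(0, len(signal), 2)]
    return approx, detail

def relative_wavelet_energy(signal, levels=2):
    """Energy of each detail band plus the final approximation band, each
    divided by total energy: the RWE feature vector fed to the classifier."""
    energies, current = [], list(signal)
    for _ in range(levels):
        current, detail = haar_step(current)
        energies.append(sum(d * d for d in detail))
    energies.append(sum(a * a for a in current))  # final approximation band
    total = sum(energies)
    return [e / total for e in energies]

# Hypothetical 8-sample EEG segment, for illustration only
rwe = relative_wavelet_energy([3.0, 1.0, 0.0, 4.0, 8.0, 6.0, 2.0, 2.0], levels=2)
print([round(v, 3) for v in rwe])
```

Because the orthonormal Haar transform preserves energy (Parseval), the relative energies always sum to 1, making the feature vector scale-invariant across subjects and recordings.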
Procedia PDF Downloads 384

2818 Python Implementation for S1000D Applicability Depended Processing Model - SALERNO
Authors: Theresia El Khoury, Georges Badr, Amir Hajjam El Hassani, Stéphane N’Guyen Van Ky
Abstract:
The widespread adoption of machine learning and artificial intelligence across different domains can be attributed to the digitization of data over several decades, resulting in vast amounts of data, types, and structures. Thus, data processing and preparation turn out to be a crucial stage. However, applying these techniques to S1000D standard-based data poses a challenge due to its complexity and the need to preserve logical information. This paper describes SALERNO, an S1000D AppLicability dEpended pRocessiNg mOdel. This Python-based model analyzes and converts XML S1000D-based files into an easier data format that can be used in machine learning techniques while preserving the different logic and relationships in the files. The model parses the files in the given folder, filters them, and extracts the required information to be saved in appropriate data frames and Excel sheets. Its main idea is to group the extracted information by applicability. In addition, it extracts the full text by replacing internal and external references while maintaining the relationships between files, as well as the necessary requirements. The resulting files can then be saved in databases and used in different models. Documents in both English and French were tested, and special characters were decoded. Updates to the technical manuals were taken into consideration as well. The model was tested on different versions of S1000D, and the results demonstrated its ability to effectively handle the applicability, requirements, references, and relationships across all files and on different levels.
Keywords: aeronautics, big data, data processing, machine learning, S1000D
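The grouping-by-applicability idea can be sketched with Python's standard XML tooling; the element and attribute names below are illustrative stand-ins, not the actual S1000D schema:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# A tiny, made-up XML fragment loosely shaped like S1000D applicability
# annotations; tag and attribute names are hypothetical, not the standard's.
SAMPLE = """
<dmodule>
  <para applicRef="MODEL-A">Check hydraulic pressure.</para>
  <para applicRef="MODEL-B">Check pneumatic pressure.</para>
  <para applicRef="MODEL-A">Replace filter.</para>
</dmodule>
"""

def group_by_applicability(xml_text):
    """Collect text content keyed by its applicability reference, mirroring
    SALERNO's idea of organizing extracted information by applicability."""
    groups = defaultdict(list)
    for elem in ET.fromstring(xml_text):
        groups[elem.get("applicRef")].append(elem.text.strip())
    return dict(groups)

print(group_by_applicability(SAMPLE))
```

The resulting dictionary maps each applicability key to its text fragments, which is the shape that transfers cleanly into data frames or Excel sheets for downstream models.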
Procedia PDF Downloads 159

2817 Using Virtual Reality Exergaming to Improve Health of College Students
Authors: Juanita Wallace, Mark Jackson, Bethany Jurs
Abstract:
Introduction: Exergames, VR games used as a form of exercise, are being used to reduce sedentary lifestyles in a vast number of populations. However, there is a distinct lack of research comparing the physiological response during VR exergaming to that of traditional exercises. The purpose of this study was to create a foundational investigation establishing changes in physiological responses resulting from VR exergaming in a college-aged population. Methods: In this IRB-approved study, college-aged students were recruited to play a virtual reality exergame (Beat Saber) on the Oculus Quest 2 (Facebook, 2021) in either a control group (CG) or training group (TG). Both groups consisted of subjects who were not habitual users of virtual reality. The CG played VR one time per week for three weeks and the TG played 150 min/week for three weeks. Each group played the same nine Beat Saber songs, in a randomized order, during 30-minute sessions. Song difficulty was increased during play based on song performance. Subjects completed pre- and posttests at which the following was collected: • Beat Saber Game Metrics: song level played, song score, number of beats completed per song and accuracy (beats completed/total beats) • Physiological Data: heart rate (max and avg.), active calories • Demographics Results: A total of 20 subjects completed the study; nine in the CG (3 males, 6 females) and 11 (5 males, 6 females) in the TG. • Beat Saber Song Metrics: The TG improved performance from a normal/hard difficulty to hard/expert. The CG stayed at the normal/hard difficulty. At the pretest there was no difference in game accuracy between groups. However, at the posttest the CG had a higher accuracy. • Physiological Data (Table 1): Average heart rates were similar between the TG and CG at both the pre- and posttest. However, the TG expended more total calories.
Discussion: Due to the lack of peer-reviewed literature on exergaming using Beat Saber, the results of this study cannot be directly compared. However, the results of this study can be compared with the previously established trends for traditional exercise. In traditional exercise, an increase in training volume equates to increased efficiency at the activity. The TG should naturally increase in difficulty at a faster rate than the CG because they played 150 minutes per week. Heart rate and caloric responses also increase during traditional exercise as load increases (i.e. speed or resistance). The TG reported an increase in total calories due to a higher difficulty of play. The decrease in song accuracy in the TG can be explained by the increased difficulty of play. Conclusion: VR exergaming is comparable to traditional exercise for loads within 50-70% of maximum heart rate. The ability to use VR for health could motivate individuals who do not engage in traditional exercise. In addition, individuals in health professions can and should promote VR exergaming as a viable way to increase physical activity and improve health in their clients/patients.
Keywords: virtual reality, exergaming, health, heart rate, wellness
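The 50-70% zone referenced in the conclusion is easy to compute from the common age-predicted maximum heart rate formula (HRmax of roughly 220 minus age, the Fox formula). This is an assumption for illustration, not a value measured in the study:

```python
def target_hr_zone(age, low=0.50, high=0.70):
    """Return the (low, high) bounds in beats per minute of the 50-70% zone
    of the age-predicted maximum heart rate, HRmax ~ 220 - age."""
    hr_max = 220 - age
    return round(hr_max * low), round(hr_max * high)

# A typical college-aged participant
print(target_hr_zone(21))
```

For a 21-year-old the zone spans roughly 100-139 bpm, which is the moderate-intensity range the study finds comparable to VR exergaming loads.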
Procedia PDF Downloads 188

2816 Virtual Reference Service as a Space for Communication and Interaction: Providing Infrastructure for Learning in Times of Crisis at Uppsala University
Authors: Nadja Ylvestedt
Abstract:
Uppsala University Library is a geographically dispersed research library consisting of nine subject libraries located in different campus areas throughout the city of Uppsala. Despite the geographical dispersion, it is the library's ambition to be perceived as a cohesive library with consistently high service and quality. A key factor in being one cohesive library is the library's online services, especially the virtual reference service (VRS). E-mail, chat and phone are answered by a team of specially trained staff under the supervision of a team leader. When COVID-19 hit, well-established routines and processes to provide an infrastructure for students and researchers at the university changed radically. The strong connection between services provided at the library locations as well as at the VRS has been one of the key components of the library's success in providing patrons with the help they need. With radically minimized availability at the physical locations, the infrastructure was at risk of collapsing. Objectives: The objective of this project has been to evaluate the consequences of the sudden change in the organization of the library. The focus of this evaluation is the library's VRS as an important space for learning, interaction and communication between the library and the community when other traditional spaces were not available. The goal of this evaluation is to capture the lessons learned from providing infrastructure for learning and research in times of crisis, both on a practical, user-centered level and to stress the importance of leadership in ever-changing environments that supports and creates agile, flexible services and teams instead of rigid processes adhering to obsolete goals. Results: Reduced availability at the physical library locations was one of the strategies to prevent the spread of the COVID-19 virus.
The library staff was encouraged to work from home, so student workers staffed the library's physical locations during that time, leaving the VRS as the only place where patrons could get expert help. The VRS saw a 65% increase in questions asked between spring term 2019 and spring term 2020. The VRS team had to navigate often complicated and fast-changing new routines depending on national guidelines. The VRS team has a strong emphasis on agility in their approach to the challenges and opportunities, with methods to evaluate decisions regularly with user experience in mind. Fast decision-making, collecting feedback, an open-minded approach to reviewing rules and processes with both a short-term and a long-term focus, and providing a healthy work environment have been key factors in managing this crisis and learning from it. This rested on a strong sense of ownership regarding the VRS, well-working communication tools, and agile and active communication between team members, as well as between the team and the rest of the organization, which served as a second-line support system to aid the VRS team. Moving forward, the VRS has become an important space for communication, interaction, and provision of infrastructure, implementing new routines and more extensive availability due to the lessons learned during the crisis. The evaluation shows that the virtual environment has become an important addition to the physical spaces, existing in its own right but always in connection with and in relationship with the library structure as a whole. This shows that the basis of human interaction stays the same while its form morphs and adapts to changes, leaving the virtual environment as a space of communication and infrastructure with unique opportunities for outreach and the potential to become a staple in patrons' education and learning.
Keywords: virtual reference service, leadership, digital infrastructure, research library
Procedia PDF Downloads 172

2815 Representation of Islamophobia on Social Media: Facebook Comments Analysis
Authors: Nadia Syed
Abstract:
The digital age has inevitably changed the way in which hate crime is committed. The cyber world has become a highly effective means for individuals and groups to be targeted, harmed, and marginalized, largely through online media. Facebook has become one of the fastest growing social media platforms. At the end of 2013, Facebook had 1.23bn monthly active users and 757 million daily users who log onto Facebook. Within this online space, there are also an increasing number of online virtual communities and hate groups who are using this freedom to share violent, Islamophobic and racist content that attempts to create an aggressive virtual environment. This paper is a study of the rise of Islamophobia and the role of the media in spreading it. It focuses on how the media, especially Facebook, portrays Islam as a religion that promotes violence and thereby plays a significant role in the global rise of Islamophobia against Muslims. It is important to analyse these 'new' communities by monitoring the activities they conduct, because the material they post can potentially have a harmful impact on community cohesion within society. Additionally, as a result of recent figures that show an increase in online anti-Muslim abuse, there is a pertinent need to address the issue of Islamophobia on social media. On the whole, this study found Muslims being demonized and vilified online, manifested through negative attitudes, discrimination, stereotypes, physical threats and online harassment, all of which had the potential to incite violence or prejudicial action because they disparage and intimidate a protected individual or group.
Keywords: Islamophobia, online, social media, Facebook, internet, extremism
Procedia PDF Downloads 93
2814 Performance of an Absorption Refrigerator Using a Solar Thermal Collector
Authors: Abir Hmida, Nihel Chekir, Ammar Ben Brahim
Abstract:
In the present paper, we investigate the feasibility of a solar-thermal-driven cold room in Gabes, in the southern region of Tunisia. The 109 m³ cold room is refrigerated using an ammonia absorption machine and is intended to preserve dates during the hot months of the year. A detailed study of the cold room previously led to an estimate of the cooling load of the proposed storage room under the operating conditions of the region. The next step is to estimate the heat required at the generator of the absorption machine to ensure the desired cold temperature. A thermodynamic analysis was carried out and a complete description of the system is given. We propose to supply the required heat from the sun using vacuum tube collectors, and we found that at least 21 m² of solar collectors are necessary to meet the demand of the solar cold room.
Keywords: absorption, ammonia, cold room, solar collector, vacuum tube
Procedia PDF Downloads 178
2813 A Machine Learning Approach to Detecting Evasive PDF Malware
Authors: Vareesha Masood, Ammara Gul, Nabeeha Areej, Muhammad Asif Masood, Hamna Imran
Abstract:
The universal use of PDF files has prompted hackers to use them for malicious intent by hiding malicious code inside PDF documents delivered to victims. Machine learning has proven highly effective at distinguishing benign files from those carrying PDF malware. This paper proposes an approach based on a decision tree classifier with tuned parameters, trained on CIC-Evasive-PDFMal2022, a modern, inclusive dataset produced by the Canadian Institute for Cybersecurity and one of the most reliable datasets in this field. We designed a PDF malware detection system that achieved 99.2% accuracy. Compared with other cutting-edge models in the same research field, the suggested model performs very well at detecting PDF malware. Accordingly, this paper provides a fast, reliable, and efficient PDF malware detection approach.
Keywords: PDF, PDF malware, decision tree classifier, random forest classifier
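The core of such a detector can be sketched in a few lines. The data and feature columns below are synthetic stand-ins; the actual CIC-Evasive-PDFMal2022 features (e.g. structural object and stream counts) are not reproduced here.

```python
# Hedged sketch of a decision-tree PDF malware detector.
# Features are invented stand-ins, NOT the real dataset's columns.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Synthetic feature matrix: e.g. object count, stream count, JS indicator.
X = rng.normal(size=(400, 3))
# Synthetic labels: 1 = malicious, 0 = benign.
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

A shallow tree is used here for readability; the paper's "parameters" would correspond to tuning `max_depth` and similar hyperparameters on the real feature set.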
Procedia PDF Downloads 92
2812 A Machine Learning Based Framework for Education Levelling in Multicultural Countries: UAE as a Case Study
Authors: Shatha Ghareeb, Rawaa Al-Jumeily, Thar Baker
Abstract:
In Abu Dhabi there are many different education curriculums, and the Private Schools and Quality Assurance sector supervises many private schools serving many nationalities. Because there are many different curriculums in Abu Dhabi to meet expats' needs, there are different requirements for registration and success, as well as different age groups for starting education in each curriculum. In fact, each curriculum has a different number of years, assessment techniques, reassessment rules, and exam boards. Currently, students who transfer between curriculums are not placed in the right year group, because the start and end dates of the academic year and the date-of-birth cut-offs for each year group differ between curriculums; as a result, some students end up younger or older than their year group, which creates gaps in their learning and performance. In addition, there is no way to store student data throughout the academic journey so that schools can track the student learning process. In this paper, we propose to develop a computational framework applicable in multicultural countries such as the UAE, in which multiple education systems are implemented. The ultimate goal is to use cloud and fog computing technology, integrated with machine learning techniques from artificial intelligence, to aid in a smooth transition when assigning students to year groups, to provide levelling and differentiation information for students who relocate from one education curriculum to another, and to store and make accessible student data from anywhere throughout the academic journey.
Keywords: admissions, algorithms, cloud computing, differentiation, fog computing, levelling, machine learning
Procedia PDF Downloads 143
2811 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates
Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe
Abstract:
Tuberculosis (TB) is a major cause of disease globally. In most cases TB is treatable and curable, but only with the proper treatment. Drug-resistant TB occurs when the bacteria become resistant to the drugs used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based, and it takes a long time to identify the resistant bacteria and treat the patient accordingly. Machine learning (ML) and data science can offer new approaches to this problem. In this study, we propose an ML-based model that predicts the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study uses whole genome sequences (WGS) of TB isolates, extracted from the NCBI repository, as training data. Samples from different countries were included in order to generalize across TB isolates from different regions of the world; this exposes the model to different behaviors of the TB bacteria and makes it robust. Three types of information extracted from the WGS data were considered for training: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and the resistance-associated gene information for the particular drug. Two major datasets were constructed from this information: F1 and F2 were treated as independent datasets, and the third type of information was used as the class label for both. Five machine learning algorithms were considered: support vector machine (SVM), random forest (RF), logistic regression (LR), gradient boosting, and AdaBoost.
The models were trained on the datasets F1, F2, and F1F2, the latter being F1 and F2 merged. Additionally, an ensemble approach was used: the F1 and F2 datasets were each run through the gradient boosting algorithm, the outputs were combined into a single dataset called the F1F2 ensemble dataset, and models were trained on this dataset with the five algorithms. As the experiments show, the ensemble model trained with the gradient boosting algorithm outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + gradient boosting model, to predict the antibiotic resistance phenotypes of TB isolates.
Keywords: machine learning, MTB, WGS, drug-resistant TB
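The described ensemble can be sketched as follows. The genomic features here are random stand-ins, and the merging step is assumed to be probability stacking (training a second-stage model on the per-dataset gradient-boosting outputs), which the abstract does not spell out in detail.

```python
# Hedged sketch of the F1F2 ensemble: gradient boosting is run separately on
# the F1 and F2 feature sets; their predicted probabilities form a new dataset
# on which a random forest is trained. All data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
F1 = rng.normal(size=(n, 10))   # stand-in: variants within candidate genes
F2 = rng.normal(size=(n, 5))    # stand-in: known resistance-associated variants
y = ((F1[:, 0] + F2[:, 0]) > 0).astype(int)  # resistant vs. susceptible label

idx_tr, idx_te = train_test_split(np.arange(n), random_state=1)
gb1 = GradientBoostingClassifier(random_state=1).fit(F1[idx_tr], y[idx_tr])
gb2 = GradientBoostingClassifier(random_state=1).fit(F2[idx_tr], y[idx_tr])

# "F1F2 ensemble" dataset: per-sample probabilities from the two boosters.
stacked_tr = np.column_stack([gb1.predict_proba(F1[idx_tr])[:, 1],
                              gb2.predict_proba(F2[idx_tr])[:, 1]])
stacked_te = np.column_stack([gb1.predict_proba(F1[idx_te])[:, 1],
                              gb2.predict_proba(F2[idx_te])[:, 1]])
rf = RandomForestClassifier(random_state=1).fit(stacked_tr, y[idx_tr])
acc = rf.score(stacked_te, y[idx_te])
```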
Procedia PDF Downloads 53
2810 Project Management Agile Model Based on Project Management Body of Knowledge Guideline
Authors: Mehrzad Abdi Khalife, Iraj Mahdavi
Abstract:
This paper presents an agile model for the project management process, using the Project Management Body of Knowledge (PMBOK) guideline as its platform. A combination of computational science and artificial intelligence methodology is added to the guideline to turn the standard into an agile project management process. The model thus combines a practical standard, computational science, and artificial intelligence. We present a communication model and protocols that keep the process agile, and we illustrate the collaboration of man and machine in the project management area with an artificial intelligence approach.
Keywords: artificial intelligence, conceptual model, man-machine collaboration, project management, standard
Procedia PDF Downloads 342
2809 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling
Authors: Florin Leon, Silvia Curteanu
Abstract:
Developing complete mechanistic models for polymerization reactors is not easy: complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena of mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely the regression methods typical of the machine learning field. These methods can learn the trends of a process without any knowledge of its particular physical and chemical laws, which makes them useful for modeling complex processes such as the free radical polymerization of methyl methacrylate carried out in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, number-average molecular weight, and weight-average molecular weight; the process exhibits non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which selects more samples around the regions where the values vary most. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor, and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression
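One way to realize such an adaptive sampler is sketched below, under the assumption that "higher variation" is approximated by the current model's pointwise error; the authors' exact selection criterion may differ, and the sharp 1-D target is only a stand-in for the gel-effect region.

```python
# Illustrative adaptive sampling sketch (not the authors' exact algorithm):
# iteratively add training samples where the current model errs most, which
# concentrates samples in high-variation regions of the response.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
x_pool = np.linspace(0.0, 1.0, 500)[:, None]
y_pool = np.tanh(10 * (x_pool[:, 0] - 0.7))   # sharp transition ~ gel effect

chosen = list(rng.choice(len(x_pool), 10, replace=False))  # initial samples
for _ in range(40):
    model = KNeighborsRegressor(n_neighbors=3).fit(x_pool[chosen],
                                                   y_pool[chosen])
    err = np.abs(model.predict(x_pool) - y_pool)
    err[chosen] = -np.inf              # never pick the same point twice
    chosen.append(int(np.argmax(err))) # sample where the error is largest

final = KNeighborsRegressor(n_neighbors=3).fit(x_pool[chosen], y_pool[chosen])
mae = float(np.mean(np.abs(final.predict(x_pool) - y_pool)))
```

The selected points end up clustered around the steep transition, mirroring the paper's idea of sampling densely where conversion and molecular weights change rapidly.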
Procedia PDF Downloads 305
2808 Communicative and Artistic Machines: A Survey of Models and Experiments on Artificial Agents
Authors: Artur Matuck, Guilherme F. Nobre
Abstract:
Machines can be tools, media, or social agents. Advances in technology have delivered machines capable of autonomous expression, through both communication and art. This paper deals with models (a theoretical approach) and experiments (an applied approach) related to artificial agents. On one hand, it traces how scholars in the social sciences have worked with topics such as text automation, man-machine writing cooperation, and communication. On the other hand, it covers how scholars in the computer sciences have built communicative and artistic machines, including the programming of creativity. The aim is to present a brief survey of artificially intelligent communicators and artificially creative writers, and to provide a basis for understanding meta-authorship as well as new and further forms of man-machine co-authorship.
Keywords: artificial communication, artificial creativity, artificial writers, meta-authorship, robotic art
Procedia PDF Downloads 293
2807 Analysis of the Significance of Multimedia Channels Using Sparse PCA and Regularized SVD
Authors: Kourosh Modarresi
Abstract:
The abundance of media channels and devices gives users a variety of options to extract, discover, and explore information in the digital world. Since a typical user often follows a long and complicated path before taking any significant action (such as purchasing goods and services), it is critical to know how each node (media channel) in the user's path contributed to the final action. In this work, the significance of each media channel is computed using statistical analysis and machine learning techniques. More specifically, regularized singular value decomposition and sparse principal component analysis are used to compute the significance of each channel toward the final action. The results of this work are a considerable improvement over present approaches.
Keywords: multimedia attribution, sparse principal component, regularization, singular value decomposition, feature significance, machine learning, linear systems, variable shrinkage
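A minimal sketch of the two named techniques is given below, applied to a hypothetical channel-touchpoint matrix; the attribution data, matrix layout, and component counts are assumptions, not the paper's setup.

```python
# Hedged sketch: truncated (regularized) SVD and sparse PCA on a synthetic
# attribution matrix (rows = user paths, columns = media channels).
import numpy as np
from sklearn.decomposition import SparsePCA, TruncatedSVD

rng = np.random.default_rng(3)
X = rng.poisson(1.0, size=(200, 8)).astype(float)  # touch counts per channel

# Truncated SVD keeps only the leading components (a form of regularization).
svd = TruncatedSVD(n_components=3, random_state=3).fit(X)

# Sparse PCA yields components with exact zeros, so each component loads on
# only a few channels, making per-channel significance easier to read off.
spca = SparsePCA(n_components=3, alpha=1.0, random_state=3).fit(X)
zeros = int((spca.components_ == 0).sum())
```

The sparsity (counted in `zeros`) is the point of the method: non-zero loadings identify the channels that carry each latent component.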
Procedia PDF Downloads 311
2806 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis
Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab
Abstract:
Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of diagnosing foot disorders quantitatively; however, the association between plantar pressure and foot disorders is not clear. With growing datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and center of pressure during walking. Methodology: 2,323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and the participants were then asked to walk on a force plate scanner. After data preprocessing, to account for differences in walking time and foot size, we normalized the samples by time and foot size. Some of the force-plate variables were selected as input to a deep neural network (DNN), and the probability of each foot disorder was estimated. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (a yes/no classification). We compared the DNN and the SVM for predicting foot disorders from plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. The SVM approach was more accurate, enabling applications in foot disorder diagnosis: detection accuracy was 71% for the deep learning algorithm and 78% for the SVM algorithm. Moreover, models using the peak plantar pressure distribution were more accurate than those using the center-of-pressure dataset.
Conclusion: Both algorithms, deep learning and SVM, can help therapists and patients improve the data pool and enhance foot disorder prediction with less expense and error once some restrictions are properly removed.
Keywords: deep neural network, foot disorder, plantar pressure, support vector machine
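The per-disorder SVM step can be sketched as a binary yes/no classifier over normalized pressure features. The data and feature meanings below are synthetic assumptions standing in for the clinic's force-plate variables.

```python
# Hedged sketch of the per-disorder SVM: one binary classifier (disorder
# present yes/no) trained on standardized plantar-pressure features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 6))             # stand-in: peak pressures per region
y = (X[:, 1] - X[:, 4] > 0).astype(int)   # stand-in label: disorder yes/no

# Scaling inside the pipeline keeps the RBF kernel well-conditioned and
# avoids leaking test statistics into training folds.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc = cross_val_score(clf, X, y, cv=5).mean()
```

One such classifier would be fitted per foot disorder, matching the yes/no formulation in the abstract.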
Procedia PDF Downloads 359
2805 EEG-Based Screening Tool for School Student’s Brain Disorders Using Machine Learning Algorithms
Authors: Abdelrahman A. Ramzy, Bassel S. Abdallah, Mohamed E. Bahgat, Sarah M. Abdelkader, Sherif H. ElGohary
Abstract:
Attention-deficit/hyperactivity disorder (ADHD), epilepsy, and autism affect millions of children worldwide, many of whom remain undiagnosed even though all of these disorders are detectable in early childhood. Late diagnosis can cause severe problems, due both to late treatment and to misconceptions and a general lack of awareness of these disorders. Electroencephalography (EEG) has played a vital role in the assessment of neural function in children; therefore, quantitative EEG measurement is utilized here as a tool for evaluating patients who may have ADHD, epilepsy, or autism. We propose a screening tool that uses EEG signals and machine learning algorithms to detect these disorders at an early age in an automated manner. Applied first to epilepsy, as the step of the work completed so far, the proposed classifiers achieved an accuracy of approximately 97% using SVM, naïve Bayes, and decision trees, and 98% using KNN, which is encouraging for the work yet to be conducted.
Keywords: ADHD, autism, epilepsy, EEG, SVM
Procedia PDF Downloads 192
2804 Machine Learning Models for the Prediction of Heating and Cooling Loads of a Residential Building
Authors: Aaditya U. Jhamb
Abstract:
Due to the energy crisis that many countries are battling, energy-efficient buildings are the subject of extensive research in the modern technological era, driven by growing concerns about energy consumption and its effects on the environment. The paper explores eight factors that help determine a building's energy efficiency (relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area, and glazing area distribution), using a dataset provided by Tsanas and Xifara. The dataset comprises 768 different residential building models and is used to predict heating and cooling loads with a low mean squared error. By learning from these characteristics, machine learning algorithms can assess and accurately forecast a building's heating and cooling loads, lowering energy usage while improving quality of life. The paper first determined which input variable was most closely associated with the output loads, then studied the magnitude of the correlation between the input factors and the two output variables using various statistical methods of analysis. The most conclusive model was the decision tree regressor, with a mean squared error of 0.258, whilst the least conclusive was the isotonic regressor, with a mean squared error of 21.68. The paper also investigated the KNN regressor and linear regression, which had mean squared errors of 3.349 and 18.141, respectively. In conclusion, the model, given the eight input variables, was able to predict the heating and cooling loads of a residential building accurately and precisely.
Keywords: energy efficient buildings, heating load, cooling load, machine learning models
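The best-performing model named above, a decision tree regressor over the eight building factors, can be sketched as follows. The data here are synthetic stand-ins with the same shape as the Tsanas and Xifara dataset (768 rows, 8 inputs, 2 outputs), not the real measurements.

```python
# Hedged sketch: a multi-output decision tree regressor predicting heating
# and cooling loads from eight building descriptors (synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
X = rng.uniform(size=(768, 8))                    # 8 building descriptors
y = np.column_stack([10 * X[:, 0] + 5 * X[:, 4],  # synthetic heating load
                     8 * X[:, 0] + 4 * X[:, 6]])  # synthetic cooling load

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=5)
reg = DecisionTreeRegressor(random_state=5).fit(X_tr, y_tr)  # multi-output
mse = mean_squared_error(y_te, reg.predict(X_te))
```

A single tree handles both outputs at once because scikit-learn's tree regressors are natively multi-output, matching the paper's joint prediction of heating and cooling loads.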
Procedia PDF Downloads 97
2803 Designing and Prototyping Permanent Magnet Generators for Wind Energy
Authors: T. Asefi, J. Faiz, M. A. Khan
Abstract:
This paper introduces dual-rotor axial flux machines with surface-mounted and spoke-type ferrite permanent magnets and concentrated windings, presented as alternatives to a generator with surface-mounted Nd-Fe-B magnets. The output power, voltage, speed, and air gap clearance are identical for all the generators. The machine designs are optimized for minimum mass using a population-based algorithm, assuming the same efficiency as the Nd-Fe-B machine. Finite element analysis (FEA) is applied to predict the performance, EMF, developed torque, cogging torque, no-load losses, leakage flux, and efficiency of both ferrite generators and of the Nd-Fe-B generator. To minimize cogging torque, different rotor pole topologies and different pole-arc to pole-pitch ratios are investigated by means of 3D FEA. It was found that the surface-mounted ferrite topology is unable to develop the nominal electromagnetic torque, has higher torque ripple, and is heavier than the spoke-type machine. Furthermore, the spoke-type ferrite permanent magnet generator shows favorable performance and could be an alternative to rare-earth permanent magnet generators, particularly in wind energy applications. Finally, the analytical and numerical results are verified against experimental results.
Keywords: axial flux, permanent magnet generator, dual rotor, ferrite permanent magnet generator, finite element analysis, wind turbines, cogging torque, population-based algorithms
Procedia PDF Downloads 152
2802 Modular 3D Environmental Development for Augmented Reality
Authors: Kevin William Taylor
Abstract:
This work used industry-standard practices and technologies as a foundation to explore current and future advances in modularity for 3D environment production. Covering environment generation and AI-assisted generation, this study investigated how these areas will shape the industry's goal of achieving full immersion within augmented reality environments. The study explores modular environment construction techniques used in large-scale 3D productions, including the reasoning behind this approach, the principles of successful development, potential pitfalls, and different methodologies for implementing the practice in commercial and proprietary interactive engines. A focus is placed on the role of 3D artists in the future of environment development, which will require adaptability to new approaches as the field evolves alongside technological advancements. Industry findings and projections are used to theorize how these factors will affect the widespread use of augmented reality in daily life, continuing to steer technology toward expansive interactive environments and changing the tools and techniques used in developing environments for games, film, and VFX. This study concludes that this technology will be the cornerstone for AI-driven AR able to fully theme our world and change how we see and engage with one another, making virtual self-identity as prevalent as real-world identity. While this progression scares or even threatens some, it is safe to say that we are seeing the beginnings of a technological revolution that will surpass the impact the smartphone had on modern society.
Keywords: virtual reality, augmented reality, training, 3D environments
Procedia PDF Downloads 124
2801 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement
Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti
Abstract:
Data centers have grown in size and demand continuously over the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning: data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted with little thought is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment using a number of models that cover a wide spectrum of machine learning categories. We then establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing
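A minimal sketch of the quantile-regression idea follows, with a synthetic load trace and gradient boosting's quantile loss standing in for whatever specific models the authors used: provisioning to a predicted upper quantile, rather than the mean plus a fixed margin, meets the SLA most of the time while trimming slack capacity.

```python
# Hedged sketch: predict the 95th percentile of resource usage so that
# provisioning at the prediction satisfies the SLA with ~95% probability.
# The hourly load trace below is entirely synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
t = rng.uniform(0, 24, size=(2000, 1))               # hour of day
usage = 50 + 30 * np.sin(t[:, 0] * np.pi / 12) + rng.normal(0, 5, 2000)

# Quantile loss with alpha=0.95 fits the conditional 95th percentile.
q95 = GradientBoostingRegressor(loss="quantile", alpha=0.95,
                                random_state=6).fit(t, usage)
coverage = float(np.mean(q95.predict(t) >= usage))   # fraction of demand met
```

Provisioning to `q95.predict(...)` instead of a blanket over-provision is the mechanism by which prediction can cut energy loss while keeping SLA violations rare.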
Procedia PDF Downloads 109
2800 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index
Authors: Todd Zhou, Mikhail Yurochkin
Abstract:
Out-of-distribution (OOD) detection is receiving increasing attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This burgeoning field calls for more effective and efficient methods of out-of-distribution generalization. Without access to label information, deploying machine learning models to out-of-distribution domains is extremely challenging, since model performance on unseen domains cannot be evaluated directly. To tackle this difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. Exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom score, "CombinedScore," as the evaluation criterion. This score adds labeled source information to the uncertainty entropy score using the harmonic mean. Furthermore, prediction bias was explored through an equality-of-opportunity violation measurement, and model calibration was used to improve model performance. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index
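A hedged reconstruction of the CombinedScore idea: combine a normalized OOD entropy score with labeled-source accuracy via the harmonic mean. The exact formula in the paper may differ; the function below is an illustration of the stated ingredients, not the authors' definition.

```python
# Hedged reconstruction of "CombinedScore": harmonic mean of
# (1 - normalized OOD prediction entropy) and labeled-source accuracy.
import numpy as np

def entropy(p):
    """Shannon entropy of predicted class probabilities, per sample."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def combined_score(probs_ood, source_accuracy):
    """Harmonic mean of OOD certainty and source accuracy (assumed form)."""
    # Normalize entropy to [0, 1] by the log of the number of classes.
    h = entropy(probs_ood) / np.log(probs_ood.shape[1])
    certainty = 1.0 - h.mean()
    return 2 * certainty * source_accuracy / (certainty + source_accuracy)

# Two OOD samples over three classes; source accuracy of the candidate model.
probs = np.array([[0.9, 0.05, 0.05], [0.6, 0.3, 0.1]])
score = combined_score(probs, source_accuracy=0.85)
```

The harmonic mean penalizes a model that is strong on only one of the two axes, which is presumably why it was chosen over a plain average.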
Procedia PDF Downloads 124