Search results for: cleaning machine
1122 Detection of Extrusion Blow Molding Defects by Airflow Analysis
Authors: Eva Savy, Anthony Ruiz
Abstract:
In extrusion blow molding, there is great variability in product quality due to the sensitivity of the machine settings. These variations lead to unnecessary rejects and loss of time. Yet production control is a major challenge for companies in this sector that want to remain competitive within their market. Current quality control methods apply only to finished products (vision control, leak testing, etc.). It has been shown that melt temperature, blowing pressure, and ambient temperature have a significant impact on the variability of product quality. Since blowing is a key step in the process, we have studied this parameter in this paper. The objective is to determine whether airflow analysis allows quality problems to be identified before the manufacturing process is complete. We conducted tests to determine whether it was possible to identify a leakage defect and an obstruction defect, two common product defects. The results showed that it was possible to identify a leakage defect by airflow analysis.
Keywords: extrusion blow molding, signal, sensor, defects, detection
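The abstract does not give the detection rule itself; as a loose, assumption-laden sketch of the idea, the snippet below flags a blowing-phase airflow trace whose mean deviates from a known-good baseline (the baseline value, tolerance, and signal values are all made up for illustration):

```python
def detect_airflow_defect(signal, baseline_mean, tol=0.15):
    """Classify a blowing-phase airflow signal against a known-good baseline.

    A leaking mold lets air escape, so the measured flow stays above the
    baseline; an obstruction restricts flow below it. The baseline mean and
    tolerance are illustrative values, not figures from the paper.
    """
    mean_flow = sum(signal) / len(signal)
    deviation = (mean_flow - baseline_mean) / baseline_mean
    if deviation > tol:
        return "leakage"
    if deviation < -tol:
        return "obstruction"
    return "ok"

# Synthetic signals: good part, leaking part, obstructed part.
good = [10.1, 9.9, 10.0, 10.2]
leak = [12.5, 12.8, 12.3, 12.6]
blocked = [7.0, 6.8, 7.1, 6.9]
```

In practice the threshold would be calibrated on traces from parts already verified by the downstream leak test.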
Procedia PDF Downloads 149
1121 Using of Particle Swarm Optimization for Loss Minimization of Vector-Controlled Induction Motors
Authors: V. Rashtchi, H. Bizhani, F. R. Tatari
Abstract:
This paper presents a new online loss minimization for an induction motor drive. Among the many loss minimization algorithms (LMAs) for an induction motor, particle swarm optimization (PSO) has the advantages of fast response and high accuracy. However, the performance of PSO and other optimization algorithms depends on the accuracy of the modeling of the motor drive and its losses. In the development of the loss model, there is always a trade-off between accuracy and complexity. This paper presents a new online optimization to determine the optimum flux level for efficiency optimization of the vector-controlled induction motor drive. An induction motor (IM) model in d-q coordinates is referenced to the rotor magnetizing current. This transformation results in no leakage inductance on the rotor side, so the decomposition into d-q components in the steady-state motor model can be utilized in deriving the motor loss model. The suggested algorithm is simple to implement.
Keywords: induction machine, loss minimization, magnetizing current, particle swarm optimization
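The abstract names PSO but not its implementation details; the sketch below is a generic, minimal PSO applied to a made-up one-dimensional loss-versus-flux model (the loss coefficients, swarm size, and inertia/acceleration constants are illustrative, not the paper's):

```python
import random

def motor_loss(flux, a=1.0, b=4.0):
    # Illustrative loss model: copper-type losses fall with flux,
    # iron-type losses rise with it (coefficients are invented).
    return a / flux**2 + b * flux**2

def pso_minimize(f, lo, hi, n_particles=20, iters=60,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain PSO over a bounded scalar: each particle tracks its personal
    best, the swarm tracks a global best, velocities blend the two."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # clamp to bounds
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

best_flux, best_loss = pso_minimize(motor_loss, 0.1, 2.0)
```

For this toy loss the analytic optimum is flux = (a/b)^(1/4) ≈ 0.707 with a loss of 4.0, which the swarm approaches closely.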
Procedia PDF Downloads 630
1120 Redefining Infrastructure as Code Orchestration Using AI
Authors: Georges Bou Ghantous
Abstract:
This research delves into the transformative impact of Artificial Intelligence (AI) on Infrastructure as Code (IaC) practices, specifically focusing on the redefinition of infrastructure orchestration. By harnessing AI technologies such as machine learning algorithms and predictive analytics, organizations can achieve unprecedented levels of efficiency and optimization in managing their infrastructure resources. AI-driven IaC introduces proactive decision-making through predictive insights, enabling organizations to anticipate and address potential issues before they arise. Dynamic resource scaling, facilitated by AI, ensures that infrastructure resources can seamlessly adapt to fluctuating workloads and changing business requirements. Through case studies and best practices, this paper sheds light on the tangible benefits and challenges associated with AI-driven IaC transformation, providing valuable insights for organizations navigating the evolving landscape of digital infrastructure management.
Keywords: artificial intelligence, infrastructure as code, efficiency optimization, predictive insights, dynamic resource scaling, proactive decision-making
Procedia PDF Downloads 32
1119 A Second Look at Gesture-Based Passwords: Usability and Vulnerability to Shoulder-Surfing Attacks
Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier
Abstract:
For security purposes, it is important to detect passwords entered by unauthorized users. With traditional alphanumeric passwords, if the content of a password is acquired and correctly entered by an intruder, it is impossible to differentiate the password entered by the intruder from those entered by the authorized user because the password entries contain precisely the same character set. However, no two entries for the gesture-based passwords, even those entered by the person who created the password, will be identical. There are always variations between entries, such as the shape and length of each stroke, the location of each stroke, and the speed of drawing. It is possible that passwords entered by the unauthorized user contain higher levels of variations when compared with those entered by the authorized user (the creator). The difference in the levels of variations may provide cues to detect unauthorized entries. To test this hypothesis, we designed an empirical study, collected and analyzed the data with the help of machine-learning algorithms. The results of the study are significant.
Keywords: authentication, gesture-based passwords, shoulder-surfing attacks, usability
Procedia PDF Downloads 137
1118 A Reliable Multi-Type Vehicle Classification System
Authors: Ghada S. Moussa
Abstract:
Vehicle classification is an important task in traffic surveillance and intelligent transportation systems. Classification of vehicle images faces several problems, such as high intra-class vehicle variation, occlusion, shadow, and illumination changes. These problems and others must be considered to develop a reliable vehicle classification system. In this study, a reliable multi-type vehicle classification system based on the Bag-of-Words (BoW) paradigm is developed. Our proposed system used and compared four well-known classifiers, Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), k-Nearest Neighbour (KNN), and Decision Tree, to classify vehicles into four categories: motorcycles, small, medium, and large. Experiments on a large dataset show that our approach is efficient and reliable in classifying vehicles, with an accuracy of 95.7%. The SVM outperforms the other classification algorithms in terms of both accuracy and robustness, alongside a considerable reduction in execution time. An innovative aspect of the developed system is that it can serve as a framework for many vehicle classification systems.
Keywords: vehicle classification, bag-of-words technique, SVM classifier, LDA classifier, KNN classifier, decision tree classifier, SIFT algorithm
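As a toy illustration of the BoW pipeline (with a nearest-centroid rule standing in for the SVM stage, and the SIFT extraction and vocabulary quantization steps assumed to have already happened):

```python
from collections import Counter

VOCAB_SIZE = 4  # toy visual vocabulary (real systems use hundreds of words)

def bow_histogram(word_ids, vocab_size=VOCAB_SIZE):
    """Normalized bag-of-visual-words histogram for one image."""
    counts = Counter(word_ids)
    total = len(word_ids)
    return [counts[w] / total for w in range(vocab_size)]

def nearest_class(hist, class_means):
    """Nearest-centroid classifier over histograms (SVM stand-in)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(class_means, key=lambda c: dist(hist, class_means[c]))

# Toy "images": each is the list of visual-word ids its descriptors were
# quantized to (the SIFT + clustering step is assumed, not shown).
train = {
    "motorcycle": [[0, 0, 1, 0], [0, 1, 0, 0]],
    "large":      [[3, 3, 2, 3], [2, 3, 3, 3]],
}
means = {}
for label, images in train.items():
    hists = [bow_histogram(img) for img in images]
    means[label] = [sum(col) / len(hists) for col in zip(*hists)]
```

A new image is classified by building its histogram and finding the closest class centroid; the paper's actual classifiers (SVM, LDA, KNN, Decision Tree) would consume the same histograms.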
Procedia PDF Downloads 355
1117 Future Metro Station: Remodeling Underground Environment Based on Experience Scenarios and IoT Technology
Authors: Joo Min Kim, Dongyoun Shin
Abstract:
The project Future Station (FS) seeks a deeper understanding of the metro station. The main idea of the project is to enhance the underground environment by combining new architectural design with IoT technology. This research presents an understanding of the metro environment with reference to both traditional design approaches and IoT-combined space design. Based on the analysis, this research presents design alternatives for two metro stations that were chosen as a testbed. It also presents how the FS platform responds to travelers and delivers benefits to metro operators. In conclusion, the project describes methods to build a future metro service and platform that understands travelers' intentions and gives appropriate services back to enhance the travel experience. It uses contemporary technologies such as smart sensing grids, big data analysis, smart buildings, and machine learning.
Keywords: future station, digital lifestyle experience, sustainable metro, smart metro, smart city
Procedia PDF Downloads 298
1116 An Investigation of the Strength Deterioration of Forged Aluminum 6082 (T6) Alloy
Authors: Rajveer, Abhinav Saxena, Sanjeev Das
Abstract:
The study is focused on the strength of forged aluminum alloy (AA) 6082 (T6). Aluminum alloy 6082 belongs to the Al-Mg-Si family, which has a wide range of automotive applications. A decrease in the strength of AA 6082 alloy was observed after T6 treatment. The as-received (extruded), forged, and forged + heat-treated samples were examined to understand the reason. These examinations were accomplished by optical microscopy (OM), scanning electron microscopy (SEM), and X-ray diffraction (XRD) studies. It was observed that the defects had an insignificant effect on the alloy strength. The alloy samples were subjected to age-hardening treatment, and the time to achieve peak hardening was acquired. Standard tensile specimens were prepared from the as-received (extruded), forged, forged + solutionized, and forged + solutionized + age-hardened conditions. Tensile tests were conducted on an Instron universal testing machine. It was observed that there was a significant drop in tensile strength in the case of the solutionized sample. The detailed study of the fractured samples showed that solutionizing after forging was not the best way to increase the strength of Al 6082 alloy.
Keywords: aluminum alloy 6082, strength, forging, age hardening
Procedia PDF Downloads 430
1115 Wear and Friction Behavior of Porcelain Coated with Polyurethane/SiO2 Coating Layer
Authors: Ching Yern Chee
Abstract:
Various loadings of nano silica were added into polyurethane (PU) and then coated on porcelain substrates. The wear and friction properties of the porcelain substrates coated with polyurethane/nano silica nanocomposite coatings were investigated using a reciprocating wear testing machine. The friction and wear behavior of the polyurethane/nano silica coated porcelain substrate was studied at different sliding speeds and applied loads. It was found that the optimum composition of nano silica is 3 wt%, which gives the lowest friction coefficient and wear rate over all applied load ranges and sliding speeds. For the 3 wt% nano silica filled PU coated porcelain substrate, increasing the sliding speed caused higher wear rates but a lower friction coefficient. Besides, the friction coefficient of the nano silica filled PU coated porcelain substrate decreased but the wear rate increased with the applied load.
Keywords: porcelain, nanocomposite coating, morphology, friction, wear behavior
Procedia PDF Downloads 528
1114 An Evaluation Model for Enhancing Flexibility in Production Systems through Additive Manufacturing
Authors: Angela Luft, Sebastian Bremen, Nicolae Balc
Abstract:
Additive manufacturing processes have entered large parts of the industry, and their range of application has progressed and grown significantly over time. A major advantage of additive manufacturing is the innate flexibility of the machines. This correlates with the ongoing demand for creating highly flexible production environments. However, the potential of additive manufacturing technologies to enhance the flexibility of production systems has not yet been truly considered and quantified in a systematic way. In order to determine the potential of additive manufacturing technologies with regard to the strategic flexibility design of production systems, an integrated evaluation model has been developed that allows for the simultaneous consideration of both conventional and additive production resources. With the described model, an operational scope of action can be identified and quantified in terms of mix and volume flexibility, process complexity, and machine capacity that goes beyond current cost-oriented approaches and offers a much broader and more holistic view of the potential of additive manufacturing. A corresponding evaluation model is presented in this paper.
Keywords: additive manufacturing, capacity planning, production systems, strategic production planning, flexibility enhancement
Procedia PDF Downloads 155
1113 Emerging Threats and Adaptive Defenses: Navigating the Future of Cybersecurity in a Hyperconnected World
Authors: Olasunkanmi Jame Ayodeji, Adebayo Adeyinka Victor
Abstract:
In a hyperconnected world, cybersecurity faces a continuous evolution of threats that challenge traditional defence mechanisms. This paper explores emerging cybersecurity threats like malware, ransomware, phishing, social engineering, and the Internet of Things (IoT) vulnerabilities. It delves into the inadequacies of existing cybersecurity defences in addressing these evolving risks and advocates for adaptive defence mechanisms that leverage AI, machine learning, and zero-trust architectures. The paper proposes collaborative approaches, including public-private partnerships and information sharing, as essential to building a robust defence strategy to address future cyber threats. The need for continuous monitoring, real-time incident response, and adaptive resilience strategies is highlighted to fortify digital infrastructures in the face of escalating global cyber risks.
Keywords: cybersecurity, hyperconnectivity, malware, adaptive defences, zero-trust architecture, internet of things vulnerabilities
Procedia PDF Downloads 19
1112 Bactericidal Efficacy of Quaternary Ammonium Compound on Carriers with Food Additive Grade Calcium Hydroxide against Salmonella Infantis and Escherichia coli
Authors: M. Shahin Alam, Satoru Takahashi, Mariko Itoh, Miyuki Komura, Mayuko Suzuki, Natthanan Sangsriratanakul, Kazuaki Takehara
Abstract:
Cleaning and disinfection are key components of routine biosecurity in livestock farming and the food processing industry. The use of suitable disinfectants and their proper concentrations are important factors for a successful biosecurity program. Disinfectants have optimum bactericidal and virucidal efficacies at temperatures above 20°C, but very few studies on the application and effectiveness of disinfectants at low temperatures have been done. In the present study, the bactericidal efficacies of food additive grade calcium hydroxide (FdCa(OH)₂), quaternary ammonium compound (QAC) and their mixture were investigated under different conditions, including time, organic materials (fetal bovine serum: FBS) and temperature, in both suspension and carrier tests. Salmonella Infantis and Escherichia coli, which are the most prevalent gram-negative bacteria in commercial poultry housing and the food processing industry, were used in this study. Initially, we evaluated these disinfectants at two different temperatures (4°C and room temperature (RT; 25°C ± 2°C)) and 7 contact times (0, 5 and 30 sec, 1, 3, 20 and 30 min), with suspension tests either in the presence or absence of 5% FBS. Secondly, we investigated the bactericidal efficacies of these disinfectants by carrier tests (rubber, stainless steel and plastic) at the same temperatures and 4 contact times (30 sec, 1, 3, and 5 min). Then, we compared the bactericidal efficacies of each disinfectant with that of their mixture, as follows. When QAC was diluted with redistilled water (dW2) at 1:500 (QACx500) to obtain a final didecyl-dimethylammonium chloride (DDAC) concentration of 200 ppm, it could inactivate Salmonella Infantis within 5 sec at RT either with or without 5% FBS in the suspension test; however, at 4°C it required 30 min in the presence of 5% FBS. FdCa(OH)₂ solution alone could inactivate the bacteria within 1 min both at RT and 4°C, even with 5% FBS.
When FdCa(OH)₂ powder was added at a final concentration of 0.2% to QACx500 (Mix500), the mixture could inactivate the bacteria within 30 sec and 5 sec, respectively, with or without 5% FBS at 4°C. The findings from the suspension test indicated that low temperature inhibited the bactericidal efficacy of QAC, whereas Mix500 was effective regardless of short contact time and low temperature, even with 5% FBS. In the carrier test, a single disinfectant required a bit more time to inactivate the bacteria on rubber and plastic surfaces than on stainless steel. However, Mix500 could inactivate S. Infantis on rubber, stainless steel and plastic surfaces within 30 sec and 1 min, respectively, at RT and 4°C; for E. coli, it required only 30 sec at both temperatures. So, synergistic effects were observed on different carriers at both temperatures. For a successful enhancement of biosecurity during winter, disinfectants should be selected that have short contact times and optimum efficacy against the target pathogen. The present study's findings help farmers make proper strategies for the application of disinfectants in livestock farming and the food processing industry.
Keywords: carrier, food additive grade calcium hydroxide (FdCa(OH)₂), quaternary ammonium compound, synergistic effects
Procedia PDF Downloads 293
1111 Assessing Relationships between Glandularity and Gray Level by Using Breast Phantoms
Authors: Yun-Xuan Tang, Pei-Yuan Liu, Kun-Mu Lu, Min-Tsung Tseng, Liang-Kuang Chen, Yuh-Feng Tsai, Ching-Wen Lee, Jay Wu
Abstract:
Breast cancer is the most predominant of malignant tumors in females. An increase in glandular density increases the risk of breast cancer. BI-RADS is a frequently used density indicator in mammography; however, it significantly overestimates the glandularity. Therefore, it is very important to assess the glandularity accurately and quantitatively by mammography. In this study, 20%, 30% and 50% glandularity phantoms were exposed using a mammography machine at 28, 30 and 31 kVp, and 30, 55, 80 and 105 mAs, respectively. Regions of interest (ROIs) were drawn to assess the gray level. The relationship between the glandularity and gray level under various compression thicknesses, kVp, and mAs was established by multivariable linear regression. A phantom verification was performed with automatic exposure control (AEC). The regression equation was obtained with an R-square value of 0.928. The average gray levels of the verification phantom were 8708, 8660 and 8434 for 0.952, 0.963 and 0.985 g/cm³, respectively. The percent differences of glandularity from the regression equation were 3.24%, 2.75% and 13.7%. We concluded that the proposed method could be clinically applied in mammography to improve the glandularity estimation and further increase the importance of breast cancer screening.
Keywords: mammography, glandularity, gray value, BI-RADS
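The regression itself is straightforward to reproduce in miniature. The sketch below fits a multivariable linear model by the normal equations on synthetic calibration data generated from invented coefficients (the paper's actual fit also involves compression thickness and kVp, and its coefficients are not reported in the abstract):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_linear(X, y):
    """Least-squares fit of y = b0 + b1*x1 + ... via the normal equations."""
    rows = [[1.0] + list(x) for x in X]
    n = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    return solve(XtX, Xty)

# Synthetic calibration points (glandularity %, mAs) -> gray level,
# generated from invented coefficients: gray = 9000 - 20*gland + 3*mAs.
X = [(20, 30), (20, 80), (30, 55), (50, 30), (50, 105), (30, 80)]
y = [9000 - 20 * g + 3 * m for g, m in X]
b0, b1, b2 = fit_linear(X, y)
```

Inverting the fitted equation then yields a glandularity estimate from a measured ROI gray level, which is the clinical use the authors propose.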
Procedia PDF Downloads 490
1110 Performance of Neural Networks vs. Radial Basis Functions When Forming a Metamodel for Residential Buildings
Authors: Philip Symonds, Jon Taylor, Zaid Chalabi, Michael Davies
Abstract:
With the world climate projected to warm and major cities in developing countries becoming increasingly populated and polluted, governments are tasked with the problem of overheating and air quality in residential buildings. This paper presents the development of an adaptable model of these risks. Simulations are performed using the EnergyPlus building physics software. An accurate metamodel is formed by randomly sampling building input parameters and training on the outputs of EnergyPlus simulations. Metamodels are used to vastly reduce the amount of computation time required when performing optimisation and sensitivity analyses. Neural Networks (NNs) are compared to a Radial Basis Function (RBF) algorithm when forming a metamodel. These techniques were implemented using the PyBrain and scikit-learn python libraries, respectively. NNs are shown to perform around 15% better than RBFs when estimating overheating and air pollution metrics modelled by EnergyPlus.
Keywords: neural networks, radial basis functions, metamodelling, python machine learning libraries
Procedia PDF Downloads 445
1109 The Role of Artificial Intelligence in Concrete Constructions
Authors: Ardalan Tofighi Soleimandarabi
Abstract:
Artificial intelligence has revolutionized the concrete construction industry and improved processes by increasing efficiency, accuracy, and sustainability. This article examines the applications of artificial intelligence in predicting the compressive strength of concrete, optimizing mixing plans, and improving structural health monitoring systems. Artificial intelligence-based models, such as artificial neural networks (ANN) and combined machine learning techniques, have shown better performance than traditional methods in predicting concrete properties. In addition, artificial intelligence systems have made it possible to improve quality control and real-time monitoring of structures, which helps in preventive maintenance and increases the life of infrastructure. Also, the use of artificial intelligence plays an effective role in sustainable construction by optimizing material consumption and reducing waste. Although the implementation of artificial intelligence is associated with challenges such as high initial costs and the need for specialized training, it will create a smarter, more sustainable, and more affordable future for concrete structures.
Keywords: artificial intelligence, concrete construction, compressive strength prediction, structural health monitoring, stability
Procedia PDF Downloads 14
1108 SVID: Structured Vulnerability Intelligence for Building Deliberated Vulnerable Environment
Authors: Wenqing Fan, Yixuan Cheng, Wei Huang
Abstract:
The diversity and complexity of modern IT systems make it almost impossible for internal teams to find vulnerabilities in all software before the software is officially released. The emergence of threat intelligence and vulnerability reporting policies has greatly reduced the burden on software vendors and organizations to find vulnerabilities. However, to prove the existence of a reported vulnerability, it is necessary but difficult for a security incident response team to build a deliberated vulnerable environment from a vulnerability report with limited and incomplete information. This paper presents a structured, standardized, machine-oriented vulnerability intelligence format that can be used to automate the orchestration of a Deliberated Vulnerable Environment (DVE). This paper highlights the important role of software configuration and proof-of-vulnerability specifications in vulnerability intelligence, and proposes a triad model, called DIR (Dependency Configuration, Installation Configuration, Runtime Configuration), to define software configuration. Finally, this paper has also implemented a prototype system to demonstrate that the orchestration of a DVE can be automated with this intelligence.
Keywords: DIR triad model, DVE, vulnerability intelligence, vulnerability recurrence
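The paper's actual SVID schema is not reproduced in the abstract; the snippet below is a hypothetical record shape showing how the DIR triad could be encoded machine-readably (all field names and values are invented for illustration, not the authors' format):

```python
import json

# A hypothetical machine-oriented record organized around the paper's DIR
# triad: Dependency, Installation, and Runtime configuration.
svid_record = {
    "vulnerability_id": "CVE-0000-00000",  # placeholder identifier
    "software": {"name": "exampled", "version": "1.2.3"},  # invented target
    "dir": {
        "dependency_configuration": {"libfoo": ">=2.0,<2.4"},
        "installation_configuration": {"configure_flags": ["--enable-module"]},
        "runtime_configuration": {"config_file": {"listen_port": 8080}},
    },
    "proof_of_vulnerability": {"type": "http_request", "expected": "crash"},
}

def validate_dir(record):
    """Check that a record carries all three DIR configuration sections,
    which is what an orchestrator would need before building the DVE."""
    required = {"dependency_configuration",
                "installation_configuration",
                "runtime_configuration"}
    return required <= set(record.get("dir", {}))

serialized = json.dumps(svid_record)
```

An orchestrator could consume such records to install the exact dependency versions, build flags, and runtime settings under which the reported vulnerability is expected to reproduce.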
Procedia PDF Downloads 119
1107 Modelling of Powered Roof Supports Work
Authors: Marcin Michalak
Abstract:
Due to the increasing efforts to save our natural environment, a change in the structure of energy resources can be observed: an increasing fraction of renewable energy sources. In many countries traditional underground coal mining is losing its significance, but there are still countries, like Poland or Germany, in which coal-based technologies have the greatest fraction of total energy production. This makes it necessary to limit the costs and negative effects of underground coal mining. The longwall complex is an essential part of underground coal mining. The safety and effectiveness of its work is strongly dependent on the diagnostic state of the powered roof supports. Building a useful and reliable diagnostic system requires a lot of data. As acquiring data covering every possible operating condition is impractical, it is important to be able to generate the required artificial working characteristics. In this paper, a new approach to modelling the leg pressure in a single unit of a powered roof support is presented. The model is a result of the analysis of typical working cycles.
Keywords: machine modelling, underground mining, coal mining, structure
Procedia PDF Downloads 366
1106 Algorithm for Path Recognition in-between Tree Rows for Agricultural Wheeled-Mobile Robots
Authors: Anderson Rocha, Pedro Miguel de Figueiredo Dinis Oliveira Gaspar
Abstract:
Machine vision has been widely used in recent years in agriculture, as a tool to promote the automation of processes and increase the levels of productivity. The aim of this work is the development of a path recognition algorithm based on image processing to guide a terrestrial robot in-between tree rows. The proposed algorithm was developed using the software MATLAB, and it uses several image processing operations, such as threshold detection, morphological erosion, histogram equalization and the Hough transform, to find edge lines along tree rows on an image and to create a path to be followed by a mobile robot. To develop the algorithm, a set of images of different types of orchards was used, which made possible the construction of a method capable of identifying paths between trees of different heights and aspects. The algorithm was evaluated using several images with different characteristics of quality and the results showed that the proposed method can successfully detect a path in different types of environments.
Keywords: agricultural mobile robot, image processing, path recognition, Hough transform
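The MATLAB pipeline is not listed step by step in the abstract; as a stand-in for its line-finding stage, here is a minimal pure-Python Hough transform voting over (theta, rho) bins, applied to a synthetic column of edge pixels (the thresholding, erosion, and equalization stages are omitted, and the discretization choices are illustrative):

```python
import math

def hough_lines(points, n_theta=180, rho_step=1.0):
    """Minimal Hough transform: every edge pixel votes for all (theta, rho)
    lines passing through it; the most-voted bin is the dominant line."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (t, round(rho / rho_step))
            acc[key] = acc.get(key, 0) + 1
    (t_best, rho_bin), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t_best / n_theta, rho_bin * rho_step, votes

# Edge pixels of a vertical tree row at x = 5 (synthetic, not a real image).
row = [(5, y) for y in range(10)]
theta, rho, votes = hough_lines(row)
```

In the orchard setting, two such dominant lines (one per tree row) would bound the corridor, and their bisector gives the path for the robot to follow.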
Procedia PDF Downloads 146
1105 A Semi-supervised Classification Approach for Trend Following Investment Strategy
Authors: Rodrigo Arnaldo Scarpel
Abstract:
Trend following is a widely accepted investment strategy that adopts a rule-based trading mechanism rather than striving to predict market direction or relying on information gathering to decide when to buy and when to sell a stock. Thus, in trend following one must respond to market movements that have recently happened and that are currently happening, rather than to what will happen. The optimal outcome of a trend following strategy is to catch a bull market at its early stage, ride the trend, and liquidate the position at the first evidence of the subsequent bear market. To apply the trend following strategy, one needs to find the trend and identify trade signals. In order to avoid false signals, i.e., to identify fluctuations of short, mid and long terms and to separate noise from real changes in the trend, most academic works rely on moving averages and other technical analysis indicators, such as the moving average convergence divergence (MACD) and the relative strength index (RSI), to uncover intelligible stock trading rules following the trend following strategy philosophy. Recently, some works have applied machine learning techniques for trading-rule discovery. In those works, the process of rule construction is based on evolutionary learning, which aims to adapt the rules to the current environment and searches for the globally optimal rules in the search space. In this work, instead of focusing on the usage of machine learning techniques for creating trading rules, a time series trend classification employing a semi-supervised approach was used to identify early both the beginning and the end of upward and downward trends. Such a classification model can be employed to identify trade signals, and the decision-making procedure is that if an up-trend (down-trend) is identified, a buy (sell) signal is generated.
Semi-supervised learning is used for model training when only part of the data is labeled, and semi-supervised classification aims to train a classifier from both the labeled and unlabeled data, such that it is better than a supervised classifier trained only on the labeled data. To illustrate the proposed approach, daily trade information was employed, including the open, high, low and closing values and volume from January 1, 2000 to December 31, 2022, of the São Paulo Exchange Composite index (IBOVESPA). Through this time period, consistent changes in price, upwards or downwards, were visually identified for assigning labels, leaving the rest of the days (when there is no consistent change in price) unlabeled. For training the classification model, a pseudo-label semi-supervised learning strategy was used, employing different technical analysis indicators. In this learning strategy, the core is to use unlabeled data to generate pseudo-labels for supervised training. For evaluating the achieved results, the annualized return and excess return, and the Sortino and Sharpe ratios were considered. Through the evaluated time period, the obtained results were very consistent and can be considered promising for generating the intended trading signals.
Keywords: evolutionary learning, semi-supervised classification, time series data, trading signals generation
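A toy version of the pseudo-label loop described in the abstract, using a nearest-mean classifier on one synthetic 1-D indicator (the actual work uses multiple technical indicators and a different base classifier; the confidence margin here is an invented stand-in):

```python
def train_centroids(data):
    """Nearest-mean classifier: one centroid per class label."""
    sums, counts = {}, {}
    for x, label in data:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {lab: sums[lab] / counts[lab] for lab in sums}

def predict(x, centroids):
    return min(centroids, key=lambda lab: abs(x - centroids[lab]))

def pseudo_label_fit(labeled, unlabeled, margin=1.0):
    """Pseudo-label strategy: label only the unlabeled points the current
    model is confident about (far from the decision midpoint), then retrain
    on the augmented set."""
    centroids = train_centroids(labeled)
    mid = sum(centroids.values()) / len(centroids)
    augmented = list(labeled)
    for x in unlabeled:
        if abs(x - mid) >= margin:  # confidence criterion (illustrative)
            augmented.append((x, predict(x, centroids)))
    return train_centroids(augmented)

# Toy 1-D "indicator" values: up-trend days cluster high, down-trend low.
labeled = [(8.0, "up"), (9.0, "up"), (1.0, "down"), (2.0, "down")]
unlabeled = [8.5, 1.5, 5.1]  # 5.1 sits near the boundary and stays unused
model = pseudo_label_fit(labeled, unlabeled)
```

In the trading application, the classes would be the visually labeled up-trend and down-trend days, and a predicted up-trend (down-trend) would emit a buy (sell) signal.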
Procedia PDF Downloads 88
1104 Detection of Cardiac Arrhythmia Using Principal Component Analysis and Xgboost Model
Authors: Sujay Kotwale, Ramasubba Reddy M.
Abstract:
Electrocardiography (ECG) is a non-invasive technique used to study and analyze various heart diseases. Cardiac arrhythmia is a serious heart disease which can lead to the death of patients when left untreated. Early detection of cardiac arrhythmia would help doctors treat the heart properly. In the past, various algorithms and machine learning (ML) models were used for early detection of cardiac arrhythmia, but few of them achieved good results. In order to improve the performance, this paper implements principal component analysis (PCA) along with an XGBoost model. PCA was applied to the raw ECG signals, which suppresses redundant information and extracts significant features. The obtained significant ECG features were fed into the XGBoost model, and the performance of the model was evaluated. In order to validate the proposed technique, raw ECG signals obtained from the standard MIT-BIH database were employed for the analysis. The results show that the performance of the proposed method is superior to several state-of-the-art techniques.
Keywords: cardiac arrhythmia, electrocardiogram, principal component analysis, XGBoost
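A miniature of the PCA-then-classify pipeline: the first principal component is found by power iteration, and a one-dimensional threshold stump stands in for the XGBoost stage (data, labels, and the stump are synthetic illustrations, not the MIT-BIH setup):

```python
def pca_first_component(X, iters=100):
    """First principal component via power iteration on the covariance matrix."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[r[j] - means[j] for j in range(d)] for r in X]  # centered data
    cov = [[sum(r[i] * r[j] for r in C) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return means, v

def project(x, means, v):
    """Score of one sample along the first principal component."""
    return sum((xi - m) * vi for xi, m, vi in zip(x, means, v))

# Toy 2-feature "ECG features"; the real pipeline feeds PCA scores to
# XGBoost, for which a simple threshold stump stands in here.
X = [(0.0, 0.1), (1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.9), (5.0, 5.1)]
y = [0, 0, 0, 1, 1, 1]  # 0 = normal beat, 1 = arrhythmic beat (synthetic)
means, v = pca_first_component(X)
scores = [project(x, means, v) for x in X]
# Stump: split the sorted 1-D scores between the two label groups (the
# index choice below only works because this toy data is cleanly separable).
pairs = sorted(zip(scores, y))
threshold = (pairs[2][0] + pairs[3][0]) / 2
```

On real ECG beats the PCA scores would span several retained components and the boosted trees, rather than a single threshold, would do the classification.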
Procedia PDF Downloads 117
1103 A NoSQL Based Approach for Real-Time Managing of Robotics's Data
Authors: Gueidi Afef, Gharsellaoui Hamza, Ben Ahmed Samir
Abstract:
This paper deals with the continual progression of data, for which new data management solutions have emerged: NoSQL databases. They span several areas like personalization, profile management, real-time big data, content management, catalogs, customer views, mobile applications, the Internet of Things, digital communication and fraud detection. Nowadays, these database management systems are increasing in number. These systems store data very well, and with the trend of big data, new storage challenges demand new structures and methods for managing enterprise data. The new intelligent machine in the e-learning sector thrives on more data, so smart machines can learn more and faster. Robotics is the use case on which we focus our tests. The implementation of NoSQL for robotics wrestles all the data robots acquire into usable form, because with ordinary approaches to robotics data we face very big limits in managing and finding the exact information in real time. Our proposed approach was demonstrated by experimental studies and a running example used as a use case.
Keywords: NoSQL databases, database management systems, robotics, big data
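As a minimal sketch of what a schema-less store for heterogeneous robot readings might look like (an in-memory toy for illustration, not any particular NoSQL product and not the authors' system):

```python
import json

class DocStore:
    """Minimal in-memory, schema-less document store in the NoSQL spirit:
    each robot reading is a free-form JSON-like document, and documents with
    different shapes coexist in the same collection."""

    def __init__(self):
        self._docs = []

    def insert(self, doc):
        # Round-trip through JSON to store an independent deep copy.
        self._docs.append(json.loads(json.dumps(doc)))

    def find(self, **criteria):
        """Return every document whose fields match all given criteria."""
        return [d for d in self._docs
                if all(d.get(k) == v for k, v in criteria.items())]

store = DocStore()
store.insert({"robot": "arm-1", "sensor": "temp", "value": 36.5})
store.insert({"robot": "arm-1", "sensor": "lidar", "points": [1, 2, 3]})
store.insert({"robot": "arm-2", "sensor": "temp", "value": 40.1})
```

The point of the schema-less model is visible in the second insert: a lidar reading with a completely different shape lives alongside the temperature readings without any table migration.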
Procedia PDF Downloads 352
1102 Online Learning Versus Face to Face Learning: A Sentiment Analysis on General Education Mathematics in the Modern World of University of San Carlos School of Arts and Sciences Students Using Natural Language Processing
Authors: Derek Brandon G. Yu, Clyde Vincent O. Pilapil, Christine F. Peña
Abstract:
College students of Cebu province have been indoors since March 2020, and a challenge encountered is the sudden shift from face-to-face to online learning, together with the lack of empirical data on online learning in Higher Education Institutions (HEIs) in the Philippines. Sentiments on face-to-face and online learning were collected from University of San Carlos (USC) School of Arts and Sciences (SAS) students regarding Mathematics in the Modern World (MMW), a General Education (GE) course. Natural Language Processing with machine learning algorithms was used to classify the sentiments of the students. The results of the research study are the themes identified through topic modelling and the overall sentiments of the students in USC SAS.
Keywords: natural language processing, online learning, sentiment analysis, topic modelling
Procedia PDF Downloads 244
1101 The Proactive Approach of Digital Forensics Methodology against Targeted Attack Malware
Authors: Mohamed Fadzlee Sulaiman, Mohd Zabri Adil Talib, Aswami Fadillah Mohd Ariffin
Abstract:
Each organization has its own mechanism for building up cyber defense capability to protect its information infrastructure from data breaches and cyber espionage. However, we cannot deny the possibility of failing to detect and stop cyber attacks, especially those targeting credential information and intellectual property (IP). In this paper, we share a modern approach to effective digital forensic methodology that identifies the artifacts and traces the trails of evidence while mitigating the infection on the target machine(s). The proposed approach allows the digital forensic investigation to be conducted while business-critical operations resume after the infection has been mitigated, minimizing the risk that the identified attack will recur. Therefore, traditional digital forensics methodology has to be improved to be proactive: not only discovering the root cause and the threat actor, but also developing the relevant mitigation plan to prevent the same attack. Keywords: digital forensic, detection, eradication, targeted attack, malware
Procedia PDF Downloads 273
1100 Exploring Acceptance of Artificial Intelligence Software Solution Amongst Healthcare Personnel: A Case in a Private Medical Centre
Authors: Sandra So, Mohd Roslan Ismail, Safurah Jaafar
Abstract:
The rapid proliferation of data in healthcare has provided an opportune platform for the creation of Artificial Intelligence (AI). AI has brought a paradigm shift for healthcare professionals, promising improvements in delivery and quality. This study aims to determine the perception of healthcare personnel regarding perceived ease of use, perceived usefulness, and subjective norm toward attitude for artificial intelligence acceptance. A cross-sectional single-institution study of employees' perceptions of adopting AI in the hospital was conducted. The survey used a questionnaire adapted from the Technology Acceptance Model with a four-point Likert scale. A total of 96 employees, or 75.5% of the total population, responded. This study shows the significant relationship and the importance of ease of use, perceived usefulness, and subjective norm to the acceptance of AI. It concluded that the strongest acceptance of AI came mostly from the respondents with the most interaction with patients and clinical management. Keywords: artificial intelligence, machine learning, perceived ease of use, perceived usefulness, subjective norm
Procedia PDF Downloads 225
1099 Improvement of Thermal Stability in Ethylene Methyl Acrylate Composites for Gasket Application
Authors: Pemika Ketsuwan, Pitt Supaphol, Manit Nithitanakul
Abstract:
A typical use of ethylene methyl acrylate (EMA) gaskets is in the manufacture of optical lenses, and often they deteriorate rapidly due to the high temperatures during the process. The objective of this project is to improve the thermal stability of the EMA copolymer gasket by preparing EMA composites with cellulose and silica. Hydroxypropyl methyl cellulose (HPMC) and carboxymethyl cellulose (CMC) were used in preparing the EMA/cellulose composites, and fumed silica (SiO2) was used in preparing the EMA/silica composites, with different amounts of filler (3, 5, 7, 10, 15 wt.%), using a twin-screw extruder at 160 °C; the test specimens were prepared by an injection molding machine. The morphology and dispersion of fillers in the EMA matrix were investigated by field emission scanning electron microscopy (FESEM). The thermal stability of the composites was determined by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC). Mechanical properties were evaluated by tensile testing. The developed composites were found to have enhanced thermal and mechanical properties compared to the EMA copolymer alone. Keywords: ethylene methyl acrylate, HPMC, silica, thermal stability
Procedia PDF Downloads 120
1098 Large Neural Networks Learning From Scratch With Very Few Data and Without Explicit Regularization
Authors: Christoph Linse, Thomas Martinetz
Abstract:
Recent findings have shown that Neural Networks also generalize in over-parameterized regimes with zero training error. This is surprising, since it goes completely against traditional machine learning wisdom. In our empirical study, we fortify these findings in the domain of fine-grained image classification. We show that very large Convolutional Neural Networks with millions of weights do learn with only a handful of training samples and without image augmentation, explicit regularization, or pretraining. We train the architectures ResNet18, ResNet101, and VGG19 on subsets of the difficult benchmark datasets Caltech101, CUB_200_2011, FGVCAircraft, Flowers102, and StanfordCars with 100 classes and more, perform a comprehensive comparative study, and draw implications for the practical application of CNNs. Finally, we show that VGG19 with 140 million weights learns to distinguish airplanes and motorbikes with up to 95% accuracy using only 20 training samples per class. Keywords: convolutional neural networks, fine-grained image classification, generalization, image recognition, over-parameterized, small data sets
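The small-data subsets described above (e.g., 20 training samples per class) have to be drawn class-balanced from the full dataset. A minimal sketch of that subset construction, with an assumed function name and a fixed seed for reproducibility, could look like this:

```python
import random

def sample_per_class(labels, n, seed=0):
    """Return n training indices per class, drawn without replacement.

    `labels` is the full list of class labels; the result is a class-balanced
    index set like the few-shot subsets used in the study."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    chosen = []
    for y, idxs in sorted(by_class.items()):
        chosen.extend(rng.sample(idxs, n))
    return chosen
```

Because every class contributes exactly n indices, accuracy differences between runs reflect the tiny training budget rather than class imbalance, which matters when the claim is that large CNNs learn from a handful of samples.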
Procedia PDF Downloads 87
1097 Design and Manufacture Detection System for Patient's Unwanted Movements during Radiology and CT Scan
Authors: Anita Yaghobi, Homayoun Ebrahimian
Abstract:
One of the important tools that can help orthopedic doctors diagnose diseases is the imaging scan. Imaging techniques help physicians see different parts of the body, including the bones, muscles, tendons, nerves, and cartilage. During a CT scan, a patient must remain in the same position from the start to the end of the radiation exposure. Patient movements are usually monitored by the technologists through closed-circuit television (CCTV) during the scan; if the patient makes a small movement, it is difficult for them to notice. In the present work, a simple patient movement monitoring device is fabricated using an electronic sensing device. It continuously monitors the patient's position while the CT scan is in process. The device has been retrospectively tested on 51 patients whose movement and distance were measured. The results show that 25 patients moved 1 cm to 2.5 cm from their initial position during the CT scan. Hence, the device can potentially be used to control and monitor patient movement during CT scans and radiography. In addition, an audible alarm situated at the control panel of the control room alerts the technologists. It is an inexpensive, compact device that can be used with any CT scan machine. Keywords: CT scan, radiology, X-ray, unwanted movement
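The alarm logic such a monitor needs is essentially a drift-from-baseline threshold check. The abstract does not give the device's actual algorithm, so the following is only a plausible sketch: the function name, 2-D positions, and the 1 cm default threshold are assumptions (chosen because the study reports movements of 1 cm and above).

```python
def movement_alarm(positions, threshold_cm=1.0):
    """Return the indices of frames where the patient has drifted more than
    `threshold_cm` from the initial (baseline) position.

    `positions` is a sequence of (x, y) readings in cm from the sensor."""
    x0, y0 = positions[0]
    alarms = []
    for i, (x, y) in enumerate(positions):
        displacement = ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5
        if displacement > threshold_cm:
            alarms.append(i)  # a real device would sound the audible alarm here
    return alarms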
Procedia PDF Downloads 458
1096 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines
Authors: Kamyar Tolouei, Ehsan Moosavi
Abstract:
In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem that involves constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally constrains the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have witnessed important developments in long-term production scheduling and optimization algorithms, as researchers have become highly cognizant of the issue. Even so, the LTPSOP cannot be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply an estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risks from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to synthesize grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value and minimize the risk of deviation from the production targets, considering grade uncertainty while satisfying all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied to synthesize grade uncertainty in the strategic mine schedule and to produce a more profitable, risk-based production schedule.
A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model combining the augmented Lagrangian relaxation (ALR) method and a metaheuristic algorithm, the Harris Hawks optimization (HHO), to solve the LTPSOP under grade uncertainty. In this study, the HHO is employed to update the Lagrange coefficients. In addition, a machine learning method called Random Forest is applied to estimate the gold grade in the mineral deposit. The Monte Carlo method is used as the simulation method with 20 realizations. The results show that the proposed versions are considerably improved in comparison with the traditional methods. The outcomes were also compared with the ALR-genetic algorithm and the ALR-subgradient method. To demonstrate the applicability of the model, a case study on an open-pit gold mining operation is implemented. The framework displays the capability to minimize risk and improve the expected net present value and financial profitability for the LTPSOP. It can control geological risk more effectively than the traditional procedure by considering grade uncertainty in the hybrid model framework. Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization
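The stochastic objective described above, maximize expected NPV while penalizing deviation from targets across equally probable realizations, can be sketched as a scoring function. The exact objective form in the paper is not given, so this toy version (mean NPV minus a penalty on mean absolute deviation from the target, with an assumed penalty weight) is only an illustration of the idea.

```python
def risk_adjusted_value(npv_per_realization, target, penalty=0.5):
    """Score a candidate schedule over equally probable orebody realizations.

    Expected NPV rewards profitability; the mean absolute deviation from the
    target penalizes risk. Both the L1 deviation form and the default penalty
    weight are assumptions for this sketch."""
    n = len(npv_per_realization)
    expected = sum(npv_per_realization) / n
    risk = sum(abs(v - target) for v in npv_per_realization) / n
    return expected - penalty * risk
```

Under this scoring, a schedule whose NPV is stable across realizations beats one with the same expected NPV but wide spread, which is exactly the "more profitable and risk-based" trade-off the framework targets.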
Procedia PDF Downloads 103
1095 Digital Preservation: Requirement of 21st Century
Authors: Gaurav Kumar, Shilpa
Abstract:
Digital libraries have been established all over the world to create, maintain, and preserve digital materials. This paper focuses on operational digital preservation systems, specifically in educational organizations in India. It considers a broad range of digital objects, including e-journals, technical reports, e-records, project documents, and scientific data. This paper describes the main objectives, processes, and technological issues involved in the preservation of digital materials. Digital preservation refers to the various methods of keeping digital materials alive for the future. It covers everything from electronic publications on CD-ROM to online databases and collections of experimental data in digital format, and it maintains the ability to display, retrieve, and use digital collections in the face of rapidly changing technological and organizational infrastructures. This paper exhibits the importance and objectives of digital preservation. Preservation requires hardware and software technology to interpret the digital documents, and the paper discusses various aspects of digital preservation. Keywords: preservation, digital preservation, digital dark age, conservation, archive, repository, document, information technology, hardware, software, organization, machine readable format
Procedia PDF Downloads 455
1094 An Interpretable Data-Driven Approach for the Stratification of the Cardiorespiratory Fitness
Authors: D. Mendes, J. Henriques, P. Carvalho, T. Rocha, S. Paredes, R. Cabiddu, R. Trimer, R. Mendes, A. Borghi-Silva, L. Kaminsky, E. Ashley, R. Arena, J. Myers
Abstract:
The exploration of clinically relevant predictive models continues to be an important pursuit. Cardiorespiratory fitness (CRF) conveys vital clinical information, and as such its accurate prediction is of high importance. Therefore, the aim of the current study was to develop a data-driven model, based on computational intelligence techniques and, in particular, clustering approaches, to predict CRF. Two prediction models were implemented and compared: 1) the traditional Wasserman/Hansen equations; and 2) an interpretable clustering approach. Data used for this analysis were from the 'FRIEND - Fitness Registry and the Importance of Exercise: The National Data Base'; in the present study, a subset of 10,690 apparently healthy individuals was utilized. The accuracy of the models was assessed through the computation of sensitivity, specificity, and geometric mean values. The results show the superiority of the clustering approach in the accurate estimation of CRF (i.e., maximal oxygen consumption). Keywords: cardiorespiratory fitness, data-driven models, knowledge extraction, machine learning
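The evaluation metrics named above are standard and easy to pin down: sensitivity and specificity come from the confusion-matrix counts, and the geometric mean combines the two into a single balanced score. A minimal sketch (the function name is ours, not the paper's):

```python
import math

def classification_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and their geometric mean from a confusion
    matrix: tp/fn/tn/fp are true-positive, false-negative, true-negative,
    and false-positive counts."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    gmean = math.sqrt(sensitivity * specificity)
    return sensitivity, specificity, gmean
```

The geometric mean is a sensible summary here because it collapses toward zero if either rate is poor, so a model cannot score well by favoring one class, which matters when comparing the clustering approach against the Wasserman/Hansen equations.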
Procedia PDF Downloads 285
1093 Nanoparticle Exposure Levels in Indoor and Outdoor Demolition Sites
Authors: Aniruddha Mitra, Abbas Rashidi, Shane Lewis, Jefferson Doehling, Alexis Pawlak, Jacob Schwartz, Imaobong Ekpo, Atin Adhikari
Abstract:
Working or living close to demolition sites can increase the risk of dust-related health problems. Demolition of concrete buildings may produce crystalline silica dust, which is associated with a broad range of respiratory diseases, including silicosis and lung cancers. Previous studies demonstrated significant associations between demolition dust exposure and an increase in the incidence of mesothelioma or asbestos cancer. Dust is a generic term used for minute solid particles, typically <500 µm in diameter. Dust particles at demolition sites vary over a wide range of sizes. Larger particles tend to settle out of the air, while the smaller and lighter solid particles remain dispersed in the air for a long period and pose sustained exposure risks. Submicron ultrafine particles and nanoparticles are respirable deeper into the alveoli, beyond the body's natural respiratory cleaning mechanisms such as cilia and mucous membranes, and are likely to be retained in the lower airways. To our knowledge, how various demolition tasks release nanoparticles is largely unknown, as previous studies mostly focused on coarse dust, PM2.5, and PM10. The general belief is that the dust generated during demolition tasks consists mostly of large particles formed through crushing, grinding, or sawing of concrete and wooden structures. Therefore, little consideration has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study with a newly developed nanoparticle monitor, which was used at two adjacent indoor and outdoor building demolition sites in southern Georgia.
Nanoparticle levels were measured (n = 10) by a TSI NanoScan SMPS Model 3910 at four different distances (5, 10, 15, and 30 m) from the work location, as well as at control sites. Temperature and relative humidity levels were recorded. Indoor demolition work included acetylene torch use, masonry drilling, ceiling panel removal, and other miscellaneous tasks, whereas outdoor demolition work included acetylene torch and skid-steer loader use to remove an HVAC system. Concentration ranges of nanoparticles of 13 particle sizes at the indoor demolition site were: 11.5 nm: 63 – 1054/cm³; 15.4 nm: 170 – 1690/cm³; 20.5 nm: 321 – 730/cm³; 27.4 nm: 740 – 3255/cm³; 36.5 nm: 1,220 – 17,828/cm³; 48.7 nm: 1,993 – 40,465/cm³; 64.9 nm: 2,848 – 58,910/cm³; 86.6 nm: 3,722 – 62,040/cm³; 115.5 nm: 3,732 – 46,786/cm³; 154 nm: 3,022 – 21,506/cm³; 205.4 nm: 12 – 15,482/cm³; 273.8 nm: