Search results for: automated checklists
737 Automated Multisensory Data Collection System for Continuous Monitoring of Refrigerating Appliances Recycling Plants
Authors: Georgii Emelianov, Mikhail Polikarpov, Fabian Hübner, Jochen Deuse, Jochen Schiemann
Abstract:
Recycling refrigerating appliances plays a major role in protecting the Earth's atmosphere from ozone depletion and emissions of greenhouse gases. The performance of refrigerator recycling plants in terms of material retention is the subject of strict environmental certifications and is reviewed periodically through specialized audits. The continuous collection of refrigerator data required for the input-output analysis is still mostly manual, error-prone, and not digitalized. In this paper, we propose an automated data collection system for recycling plants in order to deduce the expected material contents of individual end-of-life refrigerating appliances. The system utilizes laser scanner measurements and optical data to extract attributes of individual refrigerators by applying transfer learning with pre-trained vision models and optical character recognition. Based on the recognized features, the system automatically provides material categories and target values of contained material masses, especially foaming and cooling agents. The presented data collection system paves the way for continuous performance monitoring and efficient control of refrigerator recycling plants.
Keywords: automation, data collection, performance monitoring, recycling, refrigerators
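The transfer-learning and OCR step described above can be illustrated with a minimal sketch; this is not the authors' implementation, and the backbone choice (a torchvision ResNet-18), the attribute classes, and the helper names are all assumptions.

```python
# Minimal sketch of the attribute-extraction step: a pretrained vision
# backbone fine-tuned for appliance categories (transfer learning) plus OCR
# on the photographed nameplate. Class labels are hypothetical; torchvision
# and pytesseract are assumed to be installed.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
import pytesseract

ATTRIBUTE_CLASSES = ["single_door", "double_door", "chest_freezer"]  # hypothetical

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, len(ATTRIBUTE_CLASSES))
model.eval()  # assumes the head has been fine-tuned on labeled appliance images

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_attributes(image_path: str) -> dict:
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return {
        "category": ATTRIBUTE_CLASSES[int(logits.argmax())],
        "nameplate_text": pytesseract.image_to_string(img).strip(),  # OCR pass
    }
```

The predicted category would then index a lookup table of expected material masses (for example, foaming and cooling agent content) for that appliance class.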
736 The Role of Twitter Bots in Political Discussion on 2019 European Elections
Authors: Thomai Voulgari, Vasilis Vasilopoulos, Antonis Skamnakis
Abstract:
The aim of this study is to investigate the effect of the European election campaigns (May 23-26, 2019) on Twitter, achieved with artificial intelligence tools such as troll factories and automated inauthentic accounts. Our research focuses on the last European Parliamentary elections, which took place between 23 and 26 May 2019, specifically in Italy, Greece, Germany, and France. It is difficult to estimate how many Twitter users are actually bots (Echeverría, 2017), and detection of fake accounts is becoming even more complicated as AI bots grow more advanced. A political bot can be programmed to post comments on a Twitter account for a political candidate, target journalists with manipulated content, or engage with politicians and artificially increase their impact and popularity. We analyze variables related to 1) the scope of activity of automated bot accounts, 2) the degree of coherence, and 3) the degree of interaction, taking into account different factors such as the type and intention of the content of Twitter messages, as well as their spread to the general public. For this purpose, we collected large volumes of Twitter accounts of party leaders and MEP candidates between the 10th of May and the 26th of July, performing content analysis of tweets based on hashtags while using an innovative network analysis tool known as MediaWatch.io (https://mediawatch.io/). According to our findings, one of the highest percentages (64.6%) of automated “bot” accounts during the 2019 European election campaigns was in Greece. In general terms, political bots aim at the proliferation of misinformation on social media; targeting voters is one way this is achieved and contributes to social media manipulation. We found that political parties and individual politicians create and promote purposeful content on Twitter using algorithmic tools. Based on this analysis, online political advertising plays an important role in the process of spreading misinformation during election campaigns. Overall, inauthentic accounts and social media algorithms are being used to manipulate political behavior and public opinion.
Keywords: artificial intelligence tools, human-bot interactions, political manipulation, social networking, troll factories
735 Viability of Irrigation Water Conservation Practices in the Low Desert of California
Authors: Ali Montazar
Abstract:
California and the Colorado River Basin are facing increasing uncertainty concerning water supplies. The Colorado River is the main source of irrigation water in the low desert of California. Currently, due to increasing water-use competition and long-term drought in the Colorado River Basin, efficient use of irrigation water is one of the highest conservation priorities in the region. This study aims to present some of the current irrigation technologies and management approaches in the low desert and to assess the viability and potential of these water management practices. The results of several field experiments are used to assess five water conservation practices: sub-surface drip irrigation, automated surface irrigation, sprinkler irrigation, tail-water recovery systems, and deficit irrigation strategies. Preliminary results of several ongoing studies at commercial fields are presented, particularly research on alfalfa, sugar beet, kleingrass, sunflower, and spinach fields. The findings indicate that all these practices have significant potential to conserve water (an average of 1 ac-ft/ac) and enhance the efficiency of water use (15-25%). Further work is needed to better understand the feasibility of each of these applications and to help maintain a profitable and sustainable agricultural production system in the low desert as water costs, labor costs, and environmental issues increase.
Keywords: automated surface irrigation, deficit irrigation, low desert of California, sprinkler irrigation, sub-surface drip irrigation, tail-water recovery system
734 Optimization of a Convolutional Neural Network for the Automated Diagnosis of Melanoma
Authors: Kemka C. Ihemelandu, Chukwuemeka U. Ihemelandu
Abstract:
The incidence of melanoma has been increasing rapidly over the past two decades, making melanoma a current public health crisis. Unfortunately, even as screening efforts continue to expand in an effort to ameliorate the death rate from melanoma, there is a need to improve diagnostic accuracy to decrease misdiagnosis. Artificial intelligence (AI), a new frontier in patient care, has the ability to improve the accuracy of melanoma diagnosis. The convolutional neural network (CNN), a form of deep neural network most commonly applied to analyze visual imagery, has been shown to outperform the human brain in pattern recognition. However, there are noted limitations in the accuracy of CNN models. Our aim in this study was the optimization of convolutional neural network algorithms for the automated diagnosis of melanoma. We hypothesized that optimal selection of the momentum and batch hyperparameters increases model accuracy. Our most successful model developed during this study showed that a momentum of 0.25 and a batch size of 2 led to superior performance and a faster model training time, with an accuracy of ~83% after nine hours of training. We did notice a lack of diversity in the dataset used, with a noted class imbalance favoring lighter vs. darker skin tones. Training set image transformations did not result in superior model performance in our study.
Keywords: melanoma, convolutional neural network, momentum, batch hyperparameter
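The reported hyperparameter choice is straightforward to express in a training loop. Below is a minimal PyTorch sketch, assuming a placeholder model and dataset; the learning rate is illustrative, since the abstract does not report one.

```python
# Sketch of the winning hyperparameter configuration (momentum 0.25, batch
# size 2) in a PyTorch training loop. Model, dataset, and learning rate are
# placeholders, not the authors' exact setup.
import torch
from torch.utils.data import DataLoader

def train(model, train_dataset, epochs=10, lr=0.01):
    loader = DataLoader(train_dataset, batch_size=2, shuffle=True)  # batch size 2
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.25)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```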
733 Field Production Data Collection, Analysis and Reporting Using Automated System
Authors: Amir AlAmeeri, Mohamed Ibrahim
Abstract:
Various data points are constantly being measured in the production system, and due to the nature of the wells, these data points (pressure, temperature, water cut, etc.) fluctuate constantly, which requires high-frequency monitoring and collection. It is very difficult to analyze these parameters manually using spreadsheets and email. An automated system greatly enhances efficiency, reduces errors, eliminates the need for constant emails that take up disk space, and frees up time for the operator to perform other critical tasks. A huge volume of production data is recorded in an oil field, and it can seem irrelevant to some, especially when viewed on its own with no context. In order to fully utilize all this information, it needs to be properly collected, verified, stored in one common place, and analyzed for surveillance and monitoring purposes. This paper describes how data is recorded by different parties and departments in the field and verified numerous times as it is loaded into a repository. Once it is loaded, a final check is done before it is entered into a production monitoring system. Once all this is collected, various calculations are performed to report allocated production. Calculated production data is used to report field production automatically. It is also used to monitor well and surface facility performance. Engineers can use it in their studies and analyses to ensure the field is performing as it should, to predict and forecast production, and to monitor any changes in wells that could affect field performance.
Keywords: automation, oil production, Cheleken, exploration and production (E&P), Caspian Sea, allocation, forecast
732 Friend or Foe: Decoding the Legal Challenges Posed by Artificial Intelligence in the Era of Intellectual Property
Authors: Latika Choudhary
Abstract:
“The potential benefits of Artificial Intelligence are huge. So are the dangers.” - Dave Water. Artificial intelligence is one facet of the information technology domain which, despite several attempts, does not have a clear definition or ambit. However, it can be understood as technology that solves problems via automated decisions and predictions. Artificial intelligence is essentially an algorithm-based technology which analyses large amounts of data and then solves problems by detecting useful patterns. Owing to its automated nature, it would not be wrong to say that humans and AI together have more utility than humans alone or computers alone. For many decades AI experienced enthusiasm as well as setbacks, yet it has today become part and parcel of our everyday life, making it convenient or at times problematic. AI and related technology encompass intellectual property in multiple ways, the most important being AI technology for the management of intellectual property, IP for protecting AI, and IP as a hindrance to the transparency of AI systems. Thus the relationship between the two is one of reciprocity, as IP influences AI and vice versa. While AI is a recent concept, the IP laws for protecting it or even dealing with its challenges are considerably older, raising the need for revision to keep up with the pace of technological advancements. This paper will analyze the relationship between AI and IP to determine how beneficial or conflictual it is, address how the old concepts of IP are being stretched to their maximum limits so as to accommodate the unwanted consequences of artificial intelligence, and propose ways to mitigate the situation so that AI becomes the friend it is and does not turn into the potential foe it appears to be.
Keywords: intellectual property rights, information technology, algorithm, artificial intelligence
731 Knowledge Diffusion via Automated Organizational Cartography: Autocart
Authors: Mounir Kehal, Adel Al Araifi
Abstract:
The post-globalisation epoch has placed businesses everywhere in new and different competitive situations where knowledgeable, effective and efficient behaviour has come to provide the competitive and comparative edge. Enterprises have turned to explicit, and even to conceptualising tacit, knowledge management to elaborate a systematic approach to developing and sustaining the intellectual capital needed to succeed. To be able to do that, one has to be able to visualize the organization as consisting of nothing but knowledge and knowledge flows, whilst presenting it in a graphical and visual framework, referred to as automated organizational cartography. This creates the ability to further and actively classify existing organizational content evolving from and within data feeds, in an algorithmic manner, potentially giving insightful schemes and dynamics by which organizational know-how is visualised. We discuss and elaborate on the most recent and applicable definitions and classifications of knowledge management, representing a wide range of views from the mechanistic (systematic, data driven) to the more socially orientated (psychological, cognitive/metadata driven). More elaborate continuum models, for knowledge acquisition and reasoning purposes, are used to effectively represent the domain of information that an end user may draw on in their decision-making process when utilizing available organizational intellectual resources (i.e., Autocart). In this paper we likewise present an empirical research study conducted previously to explore knowledge diffusion in a specialist knowledge domain.
Keywords: knowledge management, knowledge maps, knowledge diffusion, organizational cartography
730 Metropolis-Hastings Sampling Approach for High Dimensional Testing Methods of Autonomous Vehicles
Authors: Nacer Eddine Chelbi, Ayet Bagane, Annie Saleh, Claude Sauvageau, Denis Gingras
Abstract:
As recently stated by the National Highway Traffic Safety Administration (NHTSA), to demonstrate the expected performance of a highly automated vehicle system, test approaches should include a combination of simulation, test track, and on-road testing. In this paper, we propose a new validation method for autonomous vehicles involving on-road tests (Field Operational Tests), test track (Test Matrix) and simulation (Worst Case Scenarios). We concentrate our discussion on the simulation aspects; in particular, we extend recent work based on Importance Sampling by using a Metropolis-Hastings algorithm (MHS) to sample data collected from the Safety Pilot Model Deployment (SPMD) in lane-change scenarios. Our proposed MH sampling method will be compared to the Importance Sampling method, which does not perform well in high-dimensional problems. The importance of this study is to obtain a sampler that could be applied to high-dimensional simulation problems in order to reduce and optimize the number of test scenarios that are necessary for the validation and certification of autonomous vehicles.
Keywords: automated driving, autonomous emergency braking (AEB), autonomous vehicles, certification, evaluation, importance sampling, Metropolis-Hastings sampling, tests
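For readers unfamiliar with the sampler, a minimal random-walk Metropolis-Hastings implementation looks as follows; the target density, proposal scale, and lane-change interpretation are illustrative assumptions, not the authors' SPMD setup.

```python
# Minimal random-walk Metropolis-Hastings sampler. target_density is an
# unnormalized density over the scenario parameters (e.g., lane-change
# kinematics estimated from SPMD-like data); all settings are illustrative.
import numpy as np

def metropolis_hastings(target_density, x0, n_samples=10_000, proposal_std=0.5):
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        candidate = x + rng.normal(scale=proposal_std, size=x.shape)
        # Accept with probability min(1, p(candidate) / p(current)).
        if rng.random() < min(1.0, target_density(candidate) / target_density(x)):
            x = candidate
        samples.append(x.copy())
    return np.array(samples)

# Example: draws from a 2-D standard normal target.
draws = metropolis_hastings(lambda v: float(np.exp(-0.5 * v @ v)), x0=[0.0, 0.0])
```

Because the acceptance step depends only on density ratios at the current and proposed points, the approach scales to high-dimensional targets where importance-sampling weights degenerate.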
729 The Impact of Open Defecation on Fecal-Oral Infections: A Case Study in Burat and Ngaremara Wards of Isiolo County, Kenya
Authors: Kimutai Joan Jepkorir, Moturi Wilkister Nyaora
Abstract:
The practice of open defecation can be devastating for human health as well as the environment, and the persistence of this practice could be due to ingrained habits that individuals continue to engage in despite having a better alternative. Safe disposal of human excreta is essential for public health protection. This study sought to find out whether open defecation relates to fecal-oral infections in Burat and Ngaremara Wards in Isiolo County. This was achieved by conducting a cross-sectional study. A simple random sampling technique was used to select the 385 households used in the study. Data collection was done by use of questionnaires and observation checklists. The results show that 66% of the respondents disposed of fecal matter in a safe manner, whereas 34% disposed of fecal matter in an unsafe manner through open defecation. The prevalence proportions per 1000 of diarrhea and intestinal worms among children under 5 years of age were 142 and 21, respectively. The prevalence proportions per 1000 of diarrhea and typhoid among children over 5 years of age were 20 and 20, respectively.
Keywords: faecal-oral infections, open defecation, prevalence proportion, sanitation
728 An Automated Optimal Robotic Assembly Sequence Planning Using Artificial Bee Colony Algorithm
Authors: Balamurali Gunji, B. B. V. L. Deepak, B. B. Biswal, Amrutha Rout, Golak Bihari Mohanta
Abstract:
Robots play an important role in operations like pick and place, assembly, spot welding and much more in manufacturing industries. Among these, assembly is a very important process, accounting for about 20% of manufacturing cost. To perform the assembly task effectively, Assembly Sequence Planning (ASP) is required. ASP is a multi-objective, non-deterministic optimization problem; achieving the optimal assembly sequence involves a huge search space and is highly complex in nature. Many researchers have applied different algorithms to the ASP problem, but these have several limitations, such as local optimal solutions, huge search spaces, long execution times, and complexity in applying the algorithm. Keeping the above limitations in mind, this paper proposes a new automated optimal robotic assembly sequence planning approach using the Artificial Bee Colony (ABC) algorithm. In this approach, automatic extraction of assembly predicates is done through a Computer Aided Design (CAD) interface instead of extracting the assembly predicates manually. This reduces the time needed to extract the assembly predicates and obtain feasible assembly sequences. The fitness evaluation of the obtained feasible sequences is carried out using the ABC algorithm to generate the optimal assembly sequence. The proposed methodology is applied to different industrial products, and the results are compared with past literature.
Keywords: assembly sequence planning, CAD, artificial bee colony algorithm, assembly predicates
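A compact sketch of the ABC search loop is shown below for a generic continuous cost function; the actual ASP fitness (built from CAD-derived assembly predicates over discrete sequences) and all parameter values are assumptions for illustration only.

```python
# Compact artificial bee colony (ABC) sketch minimizing a generic continuous
# cost function; employed, onlooker, and scout phases are all shown. In the
# ASP setting the cost would instead score candidate assembly sequences.
import numpy as np

def abc_minimize(cost, dim, bounds=(-5.0, 5.0), n_food=20, limit=30, iters=200):
    rng = np.random.default_rng(1)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, (n_food, dim))        # candidate solutions
    fit = np.array([cost(f) for f in foods])
    trials = np.zeros(n_food)

    def local_search(i):
        k = (i + 1 + rng.integers(n_food - 1)) % n_food   # a different source
        j = rng.integers(dim)
        neighbor = foods[i].copy()
        neighbor[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        neighbor = np.clip(neighbor, lo, hi)
        c = cost(neighbor)
        if c < fit[i]:
            foods[i], fit[i], trials[i] = neighbor, c, 0  # greedy acceptance
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                        # employed bee phase
            local_search(i)
        p = fit.max() - fit + 1e-12                    # onlooker bee phase:
        for i in rng.choice(n_food, n_food, p=p / p.sum()):  # quality-weighted
            local_search(i)
        for i in np.where(trials > limit)[0]:          # scout bee phase
            foods[i] = rng.uniform(lo, hi, dim)        # abandon stagnant source
            fit[i] = cost(foods[i])
            trials[i] = 0
    best = int(fit.argmin())
    return foods[best], fit[best]

# Example: minimize the sphere function in 5 dimensions.
best_x, best_cost = abc_minimize(lambda x: float(np.sum(x**2)), dim=5)
```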
727 Information Communication Technologies and Renewable Technologies' Impact on Irish People's Lifestyle: A Constructivist Grounded Theory Study
Authors: Hamilton V. Niculescu
Abstract:
This paper discusses findings relating to people's engagement with mobile communication technologies and remote automated systems. This interdisciplinary study employs a constructivist grounded theory methodology; qualitative data generated through in-depth semi-structured interviews with 18 people living in Ireland were corroborated with participant observations and quantitative data. Additional data was collected following participants' remote interaction with six custom-built automated enclosures located at six different sites around Dublin, Republic of Ireland. This paper argues that ownership and education play a vital role in people's engagement with and adoption of new technologies. Analysis of participants' behavior and attitudes towards Information Communication Technologies (ICT) suggests that innovations do not always improve people's social inclusion. Technological innovations are sometimes perceived as destroying communities and creating a dysfunctional society. Moreover, the findings indicate that a lack of public information and support from Irish governmental institutions, as well as limited off-the-shelf availability, has led to low trust in and adoption of renewable technologies. Limited variation in participants' behavior and interaction patterns with the technologies was observed during the study. This suggests that people will eventually adopt new technologies according to their needs and experience, even if they initially rejected the idea of changing their lifestyle.
Keywords: automation, communication, ICT, renewables
726 The Automated Soil Erosion Monitoring System (ASEMS)
Authors: George N. Zaimes, Valasia Iakovoglou, Paschalis Koutalakis, Konstantinos Ioannou, Ioannis Kosmadakis, Panagiotis Tsardaklis, Theodoros Laopoulos
Abstract:
Advancements in technology allow the development of a new system that can continuously measure surface soil erosion. Continuous soil erosion measurements are required in order to comprehend erosional processes and propose effective and efficient conservation measures to mitigate surface erosion. Mitigating soil erosion, especially in Mediterranean countries such as Greece, is essential in order to maintain environmental and agricultural sustainability. In this paper, we present the Automated Soil Erosion Monitoring System (ASEMS), which measures surface soil erosion along with other factors that impact the erosional process. Specifically, the system measures ground level changes (surface soil erosion), rainfall, air temperature, soil temperature and soil moisture. Another important innovation is that the data are collected by remote communication. In addition, stakeholder awareness is a key factor in helping reduce any environmental problem, and the different dissemination activities that were utilized are described. The overall outcome was the development of an innovative system that can measure erosion very accurately. The data from this system help in studying the process of erosion and finding the best possible methods to reduce it. The dissemination activities enhance stakeholders' and the public's awareness of surface soil erosion problems and will lead to the adoption of more effective soil erosion conservation practices in Greece.
Keywords: soil management, climate change, new technologies, conservation practices
725 Enhancing Information Technologies with AI: Unlocking Efficiency, Scalability, and Innovation
Authors: Abdal-Hafeez Alhussein
Abstract:
Artificial Intelligence (AI) has become a transformative force in the field of information technologies, reshaping how data is processed, analyzed, and utilized across various domains. This paper explores the multifaceted applications of AI within information technology, focusing on three key areas: automation, scalability, and data-driven decision-making. We delve into how AI-powered automation is optimizing operational efficiency in IT infrastructures, from automated network management to self-healing systems that reduce downtime and enhance performance. Scalability, another critical aspect, is addressed through AI’s role in cloud computing and distributed systems, enabling the seamless handling of increasing data loads and user demands. Additionally, the paper highlights the use of AI in cybersecurity, where real-time threat detection and adaptive response mechanisms significantly improve resilience against sophisticated cyberattacks. In the realm of data analytics, AI models—especially machine learning and natural language processing—are driving innovation by enabling more precise predictions, automated insights extraction, and enhanced user experiences. The paper concludes with a discussion on the ethical implications of AI in information technologies, underscoring the importance of transparency, fairness, and responsible AI use. It also offers insights into future trends, emphasizing the potential of AI to further revolutionize the IT landscape by integrating with emerging technologies like quantum computing and IoT.
Keywords: artificial intelligence, information technology, automation, scalability
724 Glycan Analyzer: Software to Annotate Glycan Structures from Exoglycosidase Experiments
Authors: Ian Walsh, Terry Nguyen-Khuong, Christopher H. Taron, Pauline M. Rudd
Abstract:
Glycoproteins and their covalently bonded glycans play critical roles in the immune system, cell communication, disease and disease prognosis. Ultra-performance liquid chromatography (UPLC) coupled with mass spectrometry is conventionally used to qualitatively and quantitatively characterise glycan structures in a given sample. Exoglycosidases are enzymes that catalyze the sequential removal of monosaccharides from the non-reducing end of glycans. They naturally have specificity for a particular type of sugar, its stereochemistry (α or β anomer) and its position of attachment to an adjacent sugar on the glycan. Thus, monitoring the peak movements (both in the UPLC and MS1) after application of exoglycosidases provides a unique and effective way to annotate sugars in high detail, i.e., differentiating positional and linkage isomers. Manual annotation of an exoglycosidase experiment is difficult and time-consuming; with increasing sample complexity and numbers of exoglycosidases, the analysis could require manually interpreting hundreds of peak movements. Recently, we have implemented pattern recognition software for automated interpretation of UPLC-MS1 exoglycosidase digestions. In this work, we explain the software, indicate how much time it will save and provide example usage showing the annotation of positional and linkage isomers in Immunoglobulin G, apolipoprotein J, and simple glycan standards.
Keywords: bioinformatics, automated glycan assignment, liquid chromatography, mass spectrometry
723 Fractal: Formative Reflective Assessment and Critical Thinking in Learning
Authors: Yannis Stavrakakis, Damian Gordon
Abstract:
Critical Thinking and Reflective Practice are two vital skills that students undertaking postgraduate studies should ideally possess. To help students develop and enhance these skills, this research created several authentic activities to be undertaken as part of a module delivered early in a taught MSc. One of the challenges of these topics is that they are somewhat ill-defined in terms of precisely what they mean, and there is no clear route to operationalizing the teaching of these skills. This research focuses on identifying suitable models of these skills and delivering them in a manner that is both clear and highly motivating. To achieve this, a class of 22 Master's students was divided into two groups: one was provided with a presentation and checklist about critical thinking skills, and the other was given the same materials on the reflective practice process. The groups were each given two scenarios to analyze using their respective checklists and were asked to present their outcomes to each other and give peer review. The results were coded and compared, and key differences were noted, including that the Critical Thinking outcomes were more future-focused while the Reflective Practice outcomes were more past- and present-focused, and that the Reflective Practice process generated a significantly wider range of perspectives on the scenarios.
Keywords: critical thinking, ethical scenarios, formative assessment, reflective practice
722 Risk Management in Construction Projects
Authors: Mustafa Dogru, Ruveyda Komurlu
Abstract:
Companies and professionals in the construction sector face various risks in every project depending on the characteristics, size, complexity and location of the project and the techniques used. Some risks' effects may increase as the project progresses, whereas new risks may emerge. Because of the ever-changing nature of risks, risk management is a cyclical process that needs to be repeated throughout the project. Since risks threaten the success of the project, risk management is an important part of the entire project management process. The aims of this study are to emphasize the importance of risk management in construction projects, summarize the risk identification process, and introduce a number of methods for preventing risks, such as alternative design, checklists, prototyping and the test-analysis-correction technique. Following the literature review conducted to list the techniques for preventing risks, case studies have been performed to compare and evaluate the success of the techniques in a number of completed projects of the same typology, both domestic and international. Findings of the study suggest that controlling and minimizing the level of risk in construction projects, taking optimal precautions for different risks, and mitigating or eliminating the effects of risks are important in order to prevent additional costs for the project. Additionally, focusing on the risks with the highest impact is the most rational way to minimize the effects of risks on projects.
Keywords: construction projects, construction management, project management, risk management
721 Mobile Application to Generate Automated Plans for Tourists in the South and West of Saudi Arabia, Saferk
Authors: Hanan M. Alghamdi, Kholud E. Alsalami, Manal I. Alshaikhi, Nouf M. Alsalami, Sara A. Awad, Ruqaya A. Alrabei
Abstract:
Tourism in Saudi Arabia is one of the emerging sectors with rapid growth. The Kingdom of Saudi Arabia is characterized by its wonderful and historical areas, which constitute important cultural and tourist landmarks. These landmarks attract the attention of the government of Saudi Arabia; hence the improvement of the tourism sector has become one of the important axes of Saudi Arabia's Vision 2030. There is a need to enhance the tourist experience by facilitating the tourism process for visitors to the Kingdom of Saudi Arabia. This project aims to design an application to serve domestic tourists and visitors from outside the Kingdom of Saudi Arabia. The application will provide an automated tourist plan generation service based on sentiment analysis of comments in Google Maps, using a lexicon-based, rule-based method. Of the thirteen regions in the Kingdom of Saudi Arabia, the regions supported in this application will be the Makkah and Asir regions. According to the output of the sentiment analysis, the application will recommend restaurants and cafes, activities (parks, museums) and shopping (shopping centers) in the generated plan. After that, the system will show the user a drop-down list of “Mega-events in Saudi Arabia” containing a link to the site of events in the Kingdom of Saudi Arabia, and an “important information for you” section covering public decency regulations.
Keywords: tourist automated plan, sentiment analysis, comments in Google Maps, tourism in Saudi Arabia
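A lexicon-based, rule-based scorer of the kind described can be sketched in a few lines; the tiny lexicon, negation handling, and threshold here are hypothetical stand-ins for a full sentiment resource such as VADER or SentiWordNet.

```python
# Minimal lexicon-based (rule-based) sentiment scorer for place comments.
# Lexicon, negation list, and threshold are illustrative assumptions.
LEXICON = {"great": 2, "good": 1, "clean": 1, "bad": -1, "crowded": -1, "dirty": -2}
NEGATIONS = {"not", "never", "no"}

def score_comment(comment: str) -> float:
    score, negate = 0.0, False
    for token in comment.lower().split():
        if token in NEGATIONS:
            negate = True
            continue
        value = LEXICON.get(token.strip(".,!?"), 0)
        score += -value if negate else value   # negation flips polarity
        negate = False
    return score

# Places whose average comment score clears a threshold enter the plan.
comments = ["great clean park", "not good and crowded"]
average = sum(score_comment(c) for c in comments) / len(comments)
recommended = average > 0.5   # illustrative cutoff for plan inclusion
```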
720 Investigation of Information Security Incident Management Based on International Standard ISO/IEC 27002 in Educational Hospitals in 2014
Authors: Nahid Tavakoli, Asghar Ehteshami, Akbar Hassanzadeh, Fatemeh Amini
Abstract:
Introduction: The information security incident management guidelines were developed to help hospitals meet their information security event and incident management requirements. The purpose of this study was to investigate information security incident management in Isfahan's educational hospitals in accordance with ISO/IEC 27002 standards. Methods: This was a cross-sectional study investigating the information security incident management of educational hospitals in 2014. Based on ISO/IEC 27002 standards, two checklists were applied to check compliance with the standards on Reporting Information Security Events and Weaknesses and on Management of Information Security Incidents and Improvements. One inspector was trained to carry out the assessments in the hospitals. The data was analyzed with SPSS. Findings: In general, the score for compliance with information security incident management requirements across the two steps, Reporting Information Security Events and Weaknesses and Management of Information Security Incidents and Improvements, was 60%. There was a significant difference in compliance levels among the hospitals (p-value …).
719 AutoML: Comprehensive Review and Application to Engineering Datasets
Authors: Parsa Mahdavi, M. Amin Hariri-Ardebili
Abstract:
The development of accurate machine learning and deep learning models traditionally demands hands-on expertise and a solid background to fine-tune hyperparameters. With the continuous expansion of datasets in various scientific and engineering domains, researchers increasingly turn to machine learning methods to unveil hidden insights that may elude classic regression techniques. This surge in adoption raises concerns about the adequacy of the resultant meta-models and, consequently, the interpretation of the findings. In response to these challenges, automated machine learning (AutoML) emerges as a promising solution, aiming to construct machine learning models with minimal intervention or guidance from human experts. AutoML encompasses crucial stages such as data preparation, feature engineering, hyperparameter optimization, and neural architecture search. This paper provides a comprehensive overview of the principles underpinning AutoML, surveying several widely used AutoML platforms. Additionally, the paper offers a glimpse into the application of AutoML to various engineering datasets. By comparing these results with those obtained through classical machine learning methods, the paper quantifies the uncertainties inherent in the application of a single ML model versus the holistic approach provided by AutoML. These examples showcase the efficacy of AutoML in extracting meaningful patterns and insights, emphasizing its potential to revolutionize the way we approach and analyze complex datasets.
Keywords: automated machine learning, uncertainty, engineering dataset, regression
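As an illustration of the workflow such platforms automate, here is a minimal run with FLAML, one widely used open-source AutoML library; the public dataset and the 60-second time budget are placeholders, not the engineering data used in the paper.

```python
# Minimal AutoML run: FLAML searches over learners and hyperparameters
# within a fixed time budget, replacing manual fine-tuning.
from flaml import AutoML
from sklearn.datasets import fetch_california_housing
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = fetch_california_housing(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

automl = AutoML()
automl.fit(X_train=X_tr, y_train=y_tr, task="regression", time_budget=60)

print(automl.best_estimator, automl.best_config)   # chosen model + hyperparameters
print(r2_score(y_te, automl.predict(X_te)))        # held-out performance
```

Running the same budgeted search with several seeds or budgets gives a simple handle on the model-selection uncertainty the paper contrasts with single-model workflows.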
718 Using Autoencoder as Feature Extractor for Malware Detection
Authors: Umm-E-Hani, Faiza Babar, Hanif Durad
Abstract:
Malware-detection approaches suffer many limitations, due to which no anti-malware solution has proved reliable enough for detecting zero-day malware. Signature-based solutions depend upon signatures that can be generated only once malware has surfaced at least once in the cyber world. Another approach, which works by detecting the anomalies caused in the environment, can easily be defeated by diligently and intelligently written malware. Solutions trained to observe behavior for detecting malicious files have failed against malware capable of detecting a sandboxed or protected environment. Machine learning and deep learning-based approaches greatly suffer when their models are trained with either an imbalanced dataset or an inadequate number of samples. AI-based anti-malware solutions that have been trained with enough samples have targeted a selected feature vector, thus ignoring the contribution of the remaining features to maliciousness, just to cope with limited hardware processing power. Our research focuses on producing an anti-malware solution for detecting malicious PE files that circumvents the earlier-mentioned shortcomings. Our proposed framework, based on automated feature engineering through autoencoders, trains the model over a fairly large dataset. It focuses on the visual patterns of malware samples to automatically extract the meaningful part of the visual pattern. Our experiment has successfully produced a state-of-the-art accuracy of 99.54% over test data.
Keywords: malware, autoencoders, automated feature engineering, classification
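The autoencoder-as-feature-extractor idea can be sketched as follows; the image size, layer widths, and latent dimension are illustrative assumptions, not the paper's architecture.

```python
# Sketch: train an autoencoder on grayscale "malware images" (PE bytes
# rendered as 2-D arrays), then feed the encoder bottleneck to any
# downstream classifier as automatically engineered features.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),              # bottleneck = features
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x)).view(-1, 1, 64, 64)

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                               # reconstruction loss

def train_step(batch):                               # batch: (N, 1, 64, 64) in [0, 1]
    optimizer.zero_grad()
    loss = loss_fn(model(batch), batch)
    loss.backward()
    optimizer.step()
    return loss.item()

# After training, the encoder output is the feature vector for each sample:
with torch.no_grad():
    features = model.encoder(torch.rand(8, 1, 64, 64))   # shape (8, 64)
```

Because the reconstruction objective is unsupervised, the encoder learns from the full visual pattern rather than a hand-selected feature subset, which is the shortcoming the abstract targets.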
717 Automated Feature Detection and Matching Algorithms for Breast IR Sequence Images
Authors: Chia-Yen Lee, Hao-Jen Wang, Jhih-Hao Lai
Abstract:
In recent years, infrared (IR) imaging has been considered a potential tool to assess the efficacy of chemotherapy and enable early detection of breast cancer. Regions of tumor growth with a high metabolic rate and angiogenesis lead to elevated temperatures, and observing differences between heat maps over the long term helps assess the growth of breast cancer cells and detect breast cancer earlier; multi-temporal infrared image alignment is a necessary step in this process. Representative feature point detection and matching are essential steps toward good image registration performance and quantitative analysis. However, there are no clear boundaries in infrared images, and the subject's posture differs between shots. Adhesive markers cannot remain on the body surface for very long periods, and it is hard to find anatomic fiducial markers on a body surface. In other words, it is difficult to detect and match features in IR sequence images. In this study, automated feature detection and matching algorithms are developed for two types of automatic feature points: vascular branch points and modified Harris corners. The preliminary results show that the proposed method can identify representative feature points on IR breast images with 98% accuracy, with matching results of 93% accuracy.
Keywords: Harris corner, infrared image, feature detection, registration, matching
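The Harris-corner side of such a pipeline can be sketched with OpenCV's standard detector; the authors' modifications and the vascular-branch-point detector are not reproduced here, and the threshold is illustrative.

```python
# Sketch: detect Harris-corner candidates in an IR frame with OpenCV.
import cv2
import numpy as np

def harris_keypoints(ir_image_path: str, quality: float = 0.01):
    gray = cv2.imread(ir_image_path, cv2.IMREAD_GRAYSCALE)
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > quality * response.max())
    return list(zip(xs.tolist(), ys.tolist()))   # candidate feature points

# Matched point pairs from two imaging sessions can then drive registration,
# e.g., with cv2.estimateAffinePartial2D(src_pts, dst_pts).
```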
716 A Large Language Model-Driven Method for Automated Building Energy Model Generation
Authors: Yake Zhang, Peng Xu
Abstract:
The development of building energy models (BEM) required for architectural design and analysis is a time-consuming and complex process, demanding a deep understanding and proficient use of simulation software. To streamline the generation of complex building energy models, this study proposes an automated method for generating building energy models using a large language model and a BEM library, aimed at improving the efficiency of model generation. The method leverages a large language model to parse user-specified requirements for target building models, extracting key features such as building location, window-to-wall ratio, and the thermal performance of the building envelope. The BEM library is used to retrieve energy models that match the target building's characteristics, serving as reference information for the large language model to enhance the accuracy and relevance of the generated model and allowing the creation of a building energy model that adapts to the user's modeling requirements. This study enables the automatic creation of building energy models from natural language inputs, reducing the professional expertise required for model development while significantly decreasing the time and complexity of manual configuration. In summary, this study provides an efficient and intelligent solution for building energy analysis and simulation, demonstrating the potential of large language models in the field of building simulation and performance modeling.
Keywords: artificial intelligence, building energy modelling, building simulation, large language model
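The parsing-and-retrieval steps might look like the following sketch; `llm_complete` is a placeholder for whatever chat-completion client is available, and the feature schema and ranking rule are hypothetical, not the authors' implementation.

```python
# Sketch: parse a free-text modeling request into structured features via an
# LLM, then rank BEM-library entries as references for generation.
import json

PROMPT = """Extract the following fields from the building description below
as JSON: location, window_to_wall_ratio, envelope_u_value. Use null for any
field that is not specified.

Description: {text}"""

def extract_features(text: str, llm_complete) -> dict:
    raw = llm_complete(PROMPT.format(text=text))   # assumed LLM call
    return json.loads(raw)

def retrieve_reference_models(features: dict, bem_library: list) -> list:
    # Rank library entries by how many extracted features they match; the
    # top entries serve as reference information for the generation step.
    def matches(entry: dict) -> int:
        return sum(entry.get(k) == v for k, v in features.items() if v is not None)
    return sorted(bem_library, key=matches, reverse=True)[:3]
```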
715 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA
Authors: Marek Dosbaba
Abstract:
Within the mining sector, SEM-based Automated Mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often with a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using a SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or segment, respectively. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy dispersive spectroscopy. Re-evaluation of the existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because of the larger data storage capacity of servers compared to local drives, which also allows multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models. The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties.
Keywords: Tescan, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data
714 The Hypoglycemic Grab Bag (HOGG): Preparing Hypo-Screen-Bags to Streamline the Time-Consuming Process of Administering Glucose Systemic Correction
Authors: Mai Ali
Abstract:
Background: Preparing hypo-screen bags in advance streamlines the time-consuming process of administering glucose systemic correction. Additionally, hypo-screen grab bags are widely adopted in UK hospitals. Aim: The aim of the study is to improve hypoglycemia screening efficiency and equipment accessibility by streamlining item access for the staff who restock the grab bags. Methodology: The study centered on neonatal wards at LGI & St. James Neonatal Unit and related units. A web-based survey was conducted to evaluate local practices, gathering 21 responses from relevant general staff. The survey outcomes were: (1) an evident demand for accessible grab bags to smooth processes, and (2) the potential to enhance efficiency through improved preparation of hypo-screen grab bags. Intervention: A hypo-screen grab bag was designed, including checklists for stocked items and required samples. Medical staff oversee restocking after use. Conclusion: The study successfully improved hypoglycemia screening efficiency and aided junior staff with accessible supplies and a user-friendly checklist.
Keywords: neonatal hypoglycemia, grab bag, hypo-screening, junior staff
713 FTIR Spectroscopy for in vitro Screening in Microbial Biotechnology
Authors: V. Shapaval, N. K. Afseth, D. Tzimorotas, A. Kohler
Abstract:
Globally, there is a dramatic increase in the demand for food, energy, materials and clean water, while natural resources are limited. As a result, industries are looking for ways to reduce rest materials and to improve resource efficiency. Microorganisms have high potential to be used as bio-factories for the production of primary and secondary metabolites that represent high-value bio-products (enzymes, polyunsaturated fatty acids, bio-plastics, glucans, etc.). In order to find good microbial producers, design suitable substrates from food rest materials and optimize fermentation conditions, rapid analytical techniques for quantifying target bio-products in microbial cells are needed. In the EU project FUST (R4SME, FP7), we have developed a fully automated high-throughput FUST system based on micro-cultivation and FTIR spectroscopy that facilitates the screening of microorganisms, substrates and fermentation conditions for optimizing the production of different high-value metabolites (single-cell oils, bio-plastics). The automated system allows the preparation of 100 samples per hour. Currently, the FUST system is in use for the screening of filamentous fungi to find oleaginous strains with the ability to produce polyunsaturated fatty acids, the optimization of cheap substrates derived from food rest materials, and the optimization of fermentation conditions for high yields of single-cell oil.
Keywords: FTIR spectroscopy, FUST system, screening, biotechnology
712 Fake News Detection for Korean News Using Machine Learning Techniques
Authors: Tae-Uk Yun, Pullip Chung, Kee-Young Kwahk, Hyunchul Ahn
Abstract:
Fake news is defined as news articles that are intentionally and verifiably false and could mislead readers. The spread of fake news may provoke anxiety, chaos, fear, or irrational decisions among the public. Thus, detecting fake news and preventing its spread has become a very important issue in our society. However, due to the huge amount of fake news produced every day, it is almost impossible for a human to identify all of it. In this context, researchers have tried over the past years to develop automated fake news detection using machine learning techniques. But, to the best of our knowledge, no prior study has proposed an automated fake news detection method for Korean news. In this study, we aim to detect Korean fake news using text mining and machine learning techniques. Our proposed method consists of two steps. In the first step, the news content to be analyzed is converted into quantified values using various text mining techniques (topic modeling, TF-IDF, and so on). After that, in step 2, classifiers are trained using the values produced in step 1. As classifiers, machine learning techniques such as logistic regression, backpropagation networks, support vector machines, and deep neural networks can be applied. To validate the effectiveness of the proposed method, we collected about 200 short Korean news items from Seoul National University's FactCheck, which provides detailed analysis reports from 20 media outlets and links to source documents for each case. Using this dataset, we will identify which text features are important as well as which classifiers are effective in detecting Korean fake news.
Keywords: fake news detection, Korean news, machine learning, text mining
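The two steps can be prototyped in a few lines of scikit-learn; the TF-IDF settings, toy documents, and labels are placeholders rather than the FactCheck data, and real Korean text would normally be tokenized with a morphological analyzer such as those in KoNLPy.

```python
# Prototype of step 1 (text -> quantified values, here TF-IDF) and
# step 2 (classifier training, here logistic regression).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["예시 가짜 뉴스 기사 본문", "예시 진짜 뉴스 기사 본문"]   # placeholder articles
labels = [1, 0]                                               # 1 = fake, 0 = real

pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
pipeline.fit(docs, labels)
print(pipeline.predict(["분류할 새 기사 본문"]))
```

Swapping the final pipeline stage for an SVM or a neural network reproduces the classifier comparison the abstract describes.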
711 Investor Sentiment and Satisfaction in Automated Investment: A Sentimental Analysis of Robo-Advisor Platforms
Authors: Vertika Goswami, Gargi Sharma
Abstract:
The rapid evolution of fintech has led to the rise of robo-advisor platforms that utilize artificial intelligence (AI) and machine learning to offer personalized investment solutions efficiently and cost-effectively. This research paper conducts a comprehensive sentiment analysis of investor experiences with these platforms, employing natural language processing (NLP) and sentiment classification techniques. The study investigates investor perceptions, engagement, and satisfaction, identifying key drivers of positive sentiment such as clear communication, low fees, consistent returns, and robust security. Conversely, negative sentiment is linked to issues like inconsistent performance, hidden fees, poor customer support, and a lack of transparency. The analysis reveals that addressing these pain points—through improved transparency, enhanced customer service, and ongoing technological advancements—can significantly boost investor trust and satisfaction. This paper contributes valuable insights into the fields of behavioral finance and fintech innovation, offering actionable recommendations for stakeholders, practitioners, and policymakers. Future research should explore the long-term impact of these factors on investor loyalty, the role of emerging technologies, and the effects of ethical investment choices and regulatory compliance on investor sentiment.
Keywords: artificial intelligence in finance, automated investment, financial technology, investor satisfaction, investor sentiment, robo-advisors, sentimental analysis
710 Role of Speech Language Pathologists in Vocational Rehabilitation
Authors: Marlyn Mathew
Abstract:
Communication is the key factor in any vocational/job set-up. However, many persons with disabilities suffer deficits in this very area, in terms of comprehension, expression and cognitive skills, making it difficult for them to get employed appropriately or stay employed. Vocational rehabilitation is a continuous and coordinated process which involves the provision of vocation-related services designed to enable a person with a disability to obtain and maintain employment. The role of the speech language pathologist is therefore crucial in assessing the communication deficits and needs of the individual at the various phases of employment, from the time of seeking a job and interviewing with suitable employers, and also at regular intervals of the employment. This article discusses the communication deficits and obstacles faced by individuals with special needs, including but not limited to cognitive-linguistic deficits, executive function deficits, and speech and language processing difficulties, and strategies that can be introduced in the workplace to overcome these obstacles, including the use of visual cues, checklists and flow charts. The paper also throws light on the importance of educating colleagues and work partners about the communication difficulties faced by the individual. This would help reduce communication barriers in the workplace, help colleagues develop an empathetic approach, and reduce misunderstandings that can arise as a result of the communication impairment.
Keywords: vocational rehabilitation, disability, speech language pathologist, cognitive, linguistics
709 Concept to Enhance the Project Success and Promote the Implementation of Success Factors in Infrastructure Projects
Abstract:
Infrastructure projects are often subject to delays and cost overruns and are mistakenly described as unsuccessful projects. These projects have many peculiarities, such as public attention and impact on the environment, and are subject to special regulations. They also deal with several stakeholders with different motivations and face unique risks. With this in mind, we need to reconsider our approach to managing them, define their success factors and implement these success factors. Infrastructure projects not only lack a unified definition of project success and of success factors, but also a clear method to implement these factors. This paper investigates this gap and introduces a concept to implement success factors in an efficient way, taking into consideration the specific characteristics of infrastructure projects. This concept consists of six enablers: project organization, project team, project management workflow, contract management, communication and knowledge transfer, and project documentation. These enablers allow other success factors to be efficiently implemented in projects. In conclusion, this paper provides project managers as well as company managers with a tool to define and implement success factors efficiently in their projects, along with upgrading their assets for coming projects. This tool consists of processes and validated checklists to ensure the best use of company resources and knowledge. Due to the special features of infrastructure projects, this tool will be tested in the German infrastructure market; however, it is meant to be adaptable to other markets and industries.
Keywords: infrastructure projects, operative success factors, project success, success factors, transportation projects
708 The Influence of Students' Race and Socioeconomic Status on Teachers' Assessment of ADHD: Implications for Educational Inequalities
Authors: Justine McKay
Abstract:
Implicit bias and its impact on the schooling experience of racial minorities with ADHD is significant. ADHD has become a globally diagnosed disorder, yet the lack of an objective diagnostic tool has created controversy over the disorder and its validity. ADHD is sometimes referred to as a social construct or a suburban problem related to active white boys who disrupt classrooms. The subjectivity of an ADHD diagnosis stems from a diagnostic process based on norm-referenced checklists of behaviours completed by the student, caregiver, teachers, clinicians, and other community members. Teachers' perceptions of classroom behaviours are influenced by implicit bias related to race and socioeconomic status: the same behaviours displayed by white and marginalized or low-income students are perceived differently. The white student is perceived to be struggling academically and needing support, while the marginalized or lower-income student's behaviour is seen as disruptive or criminal. The presence of teacher implicit bias results in inequity of diagnosis and academic support, which has long-term implications for these students. The subjectivity of the diagnostic process socially reproduces the systemic injustice of opportunity for marginalized youth within the education system.
Keywords: ADHD, education, equity, implicit bias, subjectivity