Search results for: nature inspired algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6371

6131 Exposure to Nature: An Underutilized Component of Student Mental Health

Authors: Jeremy Bekker, Guy Salazar

Abstract:

Introduction: Nature-exposure interventions on university campuses may serve as an effective addition to overburdened counseling and student support centers. Nature-exposure interventions can work as a preventative well-being enhancement measure on campuses and can be used alongside existing health resources. Specifically, this paper analyzes how spending time in nature impacts psychological well-being, cognitive functioning, and physical health. The poster covers the core findings and recommendations of this paper, which has been previously published in the BYU undergraduate psychology journal Intuition. Research Goals and Method: The goal of this paper was to outline the potential benefits of nature exposure for students’ physical health, mental well-being, and academic success. Another objective was to outline potential research-based interventions that use campus green spaces to improve student outcomes. Given that the core objective was to identify and establish research-based nature-exposure interventions that could be used on college campuses, a broad literature review was conducted in these areas. Specifically, the databases Scopus and PsycINFO were used to screen for research focused on psychological well-being, physical health, cognitive functioning, and nature-exposure interventions. Outcomes: Nature exposure has been shown to increase positive affect, life satisfaction, happiness, coping ability, and subjective well-being. Further, nature exposure has been shown to decrease negative affect, lower mental distress, reduce cognitive load, and decrease negative psychological symptoms. Finally, nature exposure has been shown to lead to better physical health. Findings and Recommendations: Potential interventions include adding green space to university buildings and grounds, dedicating already natural environments as nature restoration areas, and providing means for outdoor excursions. Potential limitations and suggested areas for future research are also addressed. Many campuses already contain green spaces, defined as any part of an environment that is predominantly made of natural elements; these green spaces constitute an untapped resource that is relatively cheap and simple to leverage.

Keywords: nature exposure, preventative care, undergraduate mental health, well-being intervention

Procedia PDF Downloads 171
6130 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm, combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when their algorithm performed better than the best-known classical algorithm for Max-Cut. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and the work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover’s or Shor’s, highlights to the world the potential that quantum computing holds. It also points to a real quantum advantage which, if the hardware continues to improve, could usher in a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, a linear-approximation-based method, and in some instances it can even find a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or quality of the solution, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
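
To illustrate the idea, here is a minimal Python sketch (not the paper’s implementation) that evolves QAOA angles for a small Max-Cut instance with a statevector simulation; the graph, population size, and mutation scale are assumptions chosen for brevity:

import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # assumed 4-node graph
n, p = 4, 2
dim = 2 ** n

# cut value C(z) for every computational basis state z
bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1
cut = np.zeros(dim)
for i, j in edges:
    cut += bits[:, i] != bits[:, j]

def expectation(params):
    gammas, betas = params[:p], params[p:]
    psi = np.full(dim, dim ** -0.5, dtype=complex)        # |+>^n
    for gamma, beta in zip(gammas, betas):
        psi = psi * np.exp(-1j * gamma * cut)             # diagonal cost unitary
        for q in range(n):                                # mixer: Rx(2*beta) per qubit
            psi = psi.reshape(-1, 2, 2 ** q)
            a, b = psi[:, 0].copy(), psi[:, 1].copy()
            psi[:, 0] = np.cos(beta) * a - 1j * np.sin(beta) * b
            psi[:, 1] = np.cos(beta) * b - 1j * np.sin(beta) * a
            psi = psi.reshape(dim)
    return float((np.abs(psi) ** 2) @ cut)                # <C>, to be maximized

pop = rng.uniform(0, np.pi, (30, 2 * p))                  # population of angle vectors
for _ in range(60):
    fitness = np.array([expectation(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[-15:]]              # keep the best half
    pop = np.vstack([parents, parents + rng.normal(0, 0.3, parents.shape)])
best = max(pop, key=expectation)
print(f"best expected cut: {expectation(best):.3f}")      # the optimum here is 4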

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 31
6129 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precise Medicine

Authors: Adriana Haulica

Abstract:

Powered by Machine Learning, Precise Medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms come from heuristics, their outputs have contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise Medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural tool for information processing in Precise Medicine that delivers diagnoses and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine, both direct and indirect, as well as technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known “a needle in a haystack” approach usually used when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even though the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of the input data down to metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the “common denominator” rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical “proofs”. The major impact of this architecture is expressed by the high accuracy of the diagnosis. The diagnosis is delivered as a multiple-condition diagnostic, constituted by main diseases along with unhealthy biological states, a format highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise Medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to the better design of clinical trials and help speed them up.

Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics

Procedia PDF Downloads 47
6128 Development of Calcium Carbonate Molecular Sheets via Wet Chemical Route

Authors: Sudhir Kumar Sharma, Ramesh Jagannathan

Abstract:

The interaction of organic and inorganic matrices of biological origin, resulting in self-assembled structures with unique properties, is well established. The development of such self-assembled nanostructures by synthetic and bio-inspired techniques is an established field of active research. Among bio-materials, nacre, a laminar stack of calcium carbonate nanosheets interleaved with organic material, has long been a focus of research due to its unique mechanical properties. In this paper, we present the development of nacre-like lamellar structures made up of calcium carbonate via a wet chemical route. We used the binding affinity between carboxylate anions and calcium cations, using poly(acrylic) acid (PAA) to direct CaCO₃ crystallization. In these experiments, we selected calcium acetate as the precursor molecule along with PAA (Mw ~ 8000 Da). We found that the Ca²⁺/COO⁻ ratio provided tunable control over the morphology and growth of CaCO₃ nanostructures. Drop-casting one such formulation on a silicon substrate followed by calcination resulted in co-planar molecular sheets of CaCO₃, separated by a spacer layer of carbon. The scope of our process could be expanded to produce unit-cell-thick molecular sheets of other important inorganic materials.

Keywords: self-assembled structures, bio-inspired materials, calcium carbonate, wet chemical route

Procedia PDF Downloads 108
6127 Computational Modeling of Load Limits of Carbon Fibre Composite Laminates Subjected to Low-Velocity Impact Utilizing Convolution-Based Fast Fourier Data Filtering Algorithms

Authors: Farhat Imtiaz, Umar Farooq

Abstract:

In this work, we developed a computational model to predict ply-level failure in impacted composite laminates. Data obtained from physical testing of flat- and round-nose impacts on 8-, 16-, and 24-ply laminates were considered. Routine inspections of the tested laminates were carried out to approximate the ply-by-ply damage incurred. Plots consisting of load–time, load–deflection, and energy–time histories were drawn to approximate the inflicted damage. Unwanted data logged during impact testing, due to restrictions of the testing and logging systems, were also filtered. Conventional filters (built-in, statistical, and numerical) reliably predicted load thresholds for relatively thin laminates such as eight- and sixteen-ply panels. However, relatively thick laminates, such as twenty-four-ply laminates impacted by a flat nose, generated clipped data which could only be de-noised using oscillatory algorithms. The literature search reveals that modern oscillatory data filtering and extrapolation algorithms have scarcely been utilized. This investigation reports applications of filtering and extrapolation of the clipped data utilising a convolution-based fast Fourier algorithm to predict load thresholds. Some of the results were related to the impact-induced damage areas identified with ultrasonic C-scans and found to be in acceptable agreement. Based on these consistent findings, applying modern data filtering and extrapolation algorithms to data logged by the existing machines efficiently enhanced data interpretation without resorting to extra resources. The algorithms could be useful for impact-induced damage approximations in similar cases.
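
As a simple illustration of the filtering step, the Python sketch below de-noises a synthetic clipped load–time trace in the frequency domain; the sampling rate, impact pulse, and cutoff are assumptions, not the paper’s test data:

import numpy as np

def fft_lowpass(signal, cutoff_hz, fs):
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spectrum[freqs > cutoff_hz] = 0          # zero out high-frequency noise
    return np.fft.irfft(spectrum, n=len(signal))

fs = 10_000                                  # assumed sampling rate, Hz
t = np.arange(0, 0.02, 1 / fs)
pulse = 5 * np.sin(2 * np.pi * 120 * t) * np.exp(-100 * t)   # idealized load pulse
noisy = np.clip(pulse + 0.4 * np.random.default_rng(1).normal(size=t.size), -4, 4)
smoothed = fft_lowpass(noisy, cutoff_hz=500, fs=fs)          # de-noised estimate

Extrapolation of the clipped peaks would then operate on the smoothed trace; if a kernel-based convolution smoother is preferred instead, scipy.signal.fftconvolve offers the same FFT-backed machinery.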

Keywords: fibre reinforced laminates, fast Fourier algorithms, mechanical testing, data filtering and extrapolation

Procedia PDF Downloads 112
6126 Solution Approaches for Some Scheduling Problems with Learning Effect and Job Dependent Delivery Times

Authors: M. Duran Toksari, Berrin Ucarkus

Abstract:

In this paper, we propose two algorithms to optimally solve the makespan and total completion time scheduling problems with learning effect and job-dependent delivery times in a single machine environment. The delivery time is the extra time needed to eliminate the adverse effect between the main processing and delivery to the customer. We introduce job-dependent delivery times for some single machine scheduling problems with a position-dependent learning effect, namely makespan and total completion time. The results of the two algorithms proposed for solving each problem are compared with LINGO solutions for 50-job, 100-job and 150-job problems. The proposed algorithms find the same results in a shorter time.
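
To make the model concrete, the short Python sketch below evaluates a single-machine schedule under a position-based learning effect p·r^a with job-dependent delivery times; the SPT-style ordering and the exponent are illustrative assumptions, not the paper’s exact rules:

def latest_delivery(jobs, a=-0.3):
    """jobs: list of (processing_time, delivery_time) pairs."""
    order = sorted(jobs, key=lambda j: j[0])   # shortest processing time first
    finish, latest = 0.0, 0.0
    for r, (p, q) in enumerate(order, start=1):
        finish += p * r ** a                   # learning shortens later positions
        latest = max(latest, finish + q)       # completion including delivery
    return latest

print(latest_delivery([(4, 2), (2, 5), (6, 1)]))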

Keywords: delivery times, learning effect, makespan, scheduling, total completion time

Procedia PDF Downloads 446
6125 Development of a Decision Model to Optimize Total Cost in Food Supply Chain

Authors: Henry Lau, Dilupa Nakandala, Li Zhao

Abstract:

All along the length of the supply chain, fresh food firms face the challenge of managing both product quality, due to the perishable nature of the products, and product cost. This paper develops a method to assist logistics managers upstream in the fresh food supply chain in making cost-optimized decisions regarding transportation, with the objective of minimizing the total cost while maintaining the quality of food products above acceptable levels. Considering the case of multiple fresh food products collected from multiple farms being transported to a warehouse or a retailer, this study develops a total cost model that includes various costs incurred during transportation. The practical application of the model is illustrated by using several computational intelligence approaches, including Genetic Algorithms (GA), Fuzzy Genetic Algorithms (FGA), as well as an improved Simulated Annealing (SA) procedure applied with a repair mechanism, for efficiency benchmarking. We demonstrate the practical viability of these approaches by using a simulation study based on pertinent data and evaluate the simulation outcomes. The application of the proposed total cost model was demonstrated using the three approaches of GA, FGA, and SA with a repair mechanism. All three approaches are adoptable; however, based on the performance evaluation, it was evident that the FGA is more likely to produce a better performance than GA or SA. This study provides a pragmatic approach for supporting logistics and supply chain practitioners in the fresh food industry in making important decisions on the arrangements and procedures related to the transportation of multiple fresh food products to a warehouse from multiple farms in a cost-effective way without compromising product quality. This study extends the literature on cold supply chain management by investigating cost and quality optimization in a multi-product scenario from farms to a retailer, minimizing cost while keeping quality above the expected levels at delivery. The scalability of the proposed generic function enables application to alternative situations in practice, such as different storage environments and transportation conditions.
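
As a flavor of the third approach, the following Python sketch pairs simulated annealing with a simple repair step that forces candidates back into the feasible region; the toy cost function, capacity bound, and cooling schedule are assumptions rather than the paper’s model:

import math, random

random.seed(0)

def repair(x, capacity):
    # assumed repair rule: clip shipment quantities into the feasible range
    return [min(max(v, 0.0), capacity) for v in x]

def anneal(cost, x0, capacity, t0=10.0, cooling=0.999, steps=5000):
    x = best = repair(x0, capacity)
    t = t0
    for _ in range(steps):
        cand = repair([v + random.uniform(-1, 1) for v in x], capacity)
        delta = cost(cand) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < cost(best):
                best = x
        t *= cooling
    return best

# toy cost: transport cost plus a quality penalty for small, slow shipments
cost = lambda x: sum(2.0 * v + 30.0 / (v + 1.0) for v in x)
print(anneal(cost, [1.0, 8.0, 3.0], capacity=10.0))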

Keywords: cost optimization, food supply chain, fuzzy sets, genetic algorithms, product quality, transportation

Procedia PDF Downloads 196
6124 Scheduling of Repetitive Activities for High-Rise Buildings: Optimisation by Genetic Algorithms

Authors: Mohammed Aljoma

Abstract:

In this paper, a developed prototype for the scheduling of repetitive activities in high-rise buildings is presented. The activities that describe the behavior of most activities in multi-storey buildings are scheduled using the developed approach. The prototype combines three methods to attain optimized planning: the Critical Path Method (CPM), Gantt charts, and Line of Balance (LOB). The developed prototype, POTER, is used to schedule repetitive and non-repetitive activities with respect to all constraints, which can be automatically generated using a generic database. The prototype uses genetic algorithms to optimize the planning process. As a result, this approach enables contracting organizations to evaluate various planning solutions that are calculated, tested and classified by POTER to attain an optimal time-cost equilibrium according to their own criteria of time or cost.

Keywords: planning, scheduling, genetic algorithms, repetitive activity, construction management, risk management, project duration

Procedia PDF Downloads 282
6123 Acoustic Echo Cancellation Using Different Adaptive Algorithms

Authors: Hamid Sharif, Nazish Saleem Abbas, Muhammad Haris Jamil

Abstract:

An adaptive filter is a filter that self-adjusts its transfer function according to an optimization algorithm driven by an error signal. Because of the complexity of the optimization algorithms, most adaptive filters are digital filters. Adaptive filtering constitutes one of the core technologies in digital signal processing and finds numerous application areas in science as well as in industry. Adaptive filtering techniques are used in a wide range of applications, including adaptive noise cancellation and echo cancellation. Acoustic echo is a common occurrence in today’s telecommunication systems. The signal interference caused by acoustic echo is distracting to both users and reduces the quality of the communication. In this paper, we review different adaptive filtering techniques for reducing this unwanted echo and examine the behavior of algorithms such as Least Mean Square (LMS), Normalized Least Mean Square (NLMS), Variable Step-Size Least Mean Square (VSLMS), Variable Step-Size Normalized Least Mean Square (VSNLMS), New Varying Step Size LMS (NVSSLMS) and Recursive Least Square (RLS), with the goal of increasing communication quality.
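
A minimal Python sketch of one reviewed technique, NLMS, is shown below; the synthetic echo path and signals are assumptions standing in for real far-end speech:

import numpy as np

def nlms(x, d, taps=64, mu=0.5, eps=1e-6):
    """x: far-end reference, d: microphone signal containing its echo."""
    w = np.zeros(taps)
    e = np.zeros(len(x))
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]                 # most recent samples first
        e[n] = d[n] - w @ u                     # error = echo-cancelled output
        w += mu * e[n] * u / (u @ u + eps)      # normalized step update
    return e, w

rng = np.random.default_rng(2)
x = rng.normal(size=4000)                                  # far-end signal stand-in
h = rng.normal(size=64) * np.exp(-np.arange(64) / 10.0)    # assumed echo path
d = np.convolve(x, h)[:len(x)]                             # echo picked up at the mic
e, _ = nlms(x, d)
print(f"residual echo power: {np.mean(e[-500:] ** 2):.5f}")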

Keywords: adaptive acoustic, echo cancellation, LMS algorithm, adaptive filter, normalized least mean square (NLMS), variable step-size least mean square (VSLMS)

Procedia PDF Downloads 52
6122 A Developmental Survey of Local Stereo Matching Algorithms

Authors: André Smith, Amr Abdel-Dayem

Abstract:

This paper presents an overview of the history and development of stereo matching algorithms. Details from their inception up to relatively recent techniques are described, noting challenges that have been surmounted across the past decades. Different components of these algorithms are explored, though the focus is directed towards local matching techniques. While global approaches have existed for some time and have demonstrated greater accuracy than their counterparts, they are generally quite slow. Many strides have been made more recently, allowing local methods to catch up in terms of accuracy without sacrificing overall performance.
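
To ground the discussion, a bare-bones local matcher using the sum of absolute differences (SAD) is sketched below in Python; the window size, disparity range, and inputs are assumptions:

import numpy as np

def sad_disparity(left, right, max_disp=16, win=5):
    """left, right: rectified grayscale images as 2-D float arrays."""
    h, w = left.shape
    pad = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(pad, h - pad):
        for x in range(pad + max_disp, w - pad):
            ref = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
            costs = [np.abs(ref - right[y - pad:y + pad + 1,
                                        x - d - pad:x - d + pad + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))   # winner-take-all disparity
    return disp

Global methods replace this per-pixel winner-take-all step with an energy minimization over the whole image, which is the source of both their accuracy and their slowness.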

Keywords: developmental survey, local stereo matching, rectification, stereo correspondence

Procedia PDF Downloads 264
6121 Reinforcement Learning Optimization: Unraveling Trends and Advancements in Metaheuristic Algorithms

Authors: Rahul Paul, Kedar Nath Das

Abstract:

The field of machine learning (ML) is experiencing rapid development, resulting in a multitude of theoretical advancements and extensive practical implementations across various disciplines. The objective of ML is to enable machines to perform cognitive tasks by leveraging knowledge gained from prior experience and to effectively address complex problems, even in situations that deviate from previously encountered instances. Reinforcement Learning (RL) has emerged as a prominent subfield within ML and has recently gained considerable attention from researchers. This surge in interest can be attributed to the practical applications of RL, the increasing availability of data, and the rapid advancement of computing power. At the same time, optimization algorithms play a pivotal role in ML and have attracted considerable interest from researchers. A multitude of proposals have been put forth to address optimization problems or improve optimization techniques within the domain of ML. A thorough examination and implementation of optimization algorithms within the context of ML is therefore of great importance in guiding the advancement of research in both optimization and ML. This article provides a comprehensive overview of the application of metaheuristic evolutionary optimization algorithms in conjunction with RL to address a diverse range of scientific challenges. Furthermore, it delves into the various challenges and unresolved issues pertaining to the optimization of RL models.

Keywords: machine learning, reinforcement learning, loss function, evolutionary optimization techniques

Procedia PDF Downloads 50
6120 Designing and Making Sustainable Architectural Clothing Inspired by Reconstruction of Bam’s Bazaar

Authors: Marzieh Khaleghi Baygi, Maryam Khaleghy Baygy

Abstract:

The main aim of this project was designing and making a sustainable architectural wearable dress inspired by the reconstruction project of Bam’s Bazaar in Iran. To achieve the goals of this project, the Bam Bazaar became the architectural reference. A mixed research method (including applied, qualitative and case study methods) was used. After research, data gathering and consideration of the related intellectual, mental and cultural background, the garment was modeled using 3ds Max's modeling tools and Marvelous. After making the pattern, the wearable architecture was built, and an architectural and historical building was converted into clothing. The implementation of the sustainable architectural clothing took seventeen months. The result of this project was a cloth in a new form that was worn on its architect's body. The comparison between the present project and previous research focusing on the same subject (architectural clothing) shows some dramatic differences; notably, the architect, designer and executor of this project were the same person, who was also the main researcher. Also, in this research, special attention was paid to sustainability, volumes and forms. Most projects on this subject (especially previous related Iranian research) relied on painting and not on volumes and forms. The sustainable immovable architecture, worn on its architect, became a cloth on a moving human body.

Keywords: wearable architecture, clothing, Bam Bazaar, space, sustainability

Procedia PDF Downloads 38
6119 Back to Nature: Addressing the German Nudist Movement’s Colonial Past and Its Repercussions

Authors: Saskia Köbschall

Abstract:

The idea of ‘being close to nature’ and the ways of achieving this proximity are socially and historically constructed, as are notions of nakedness and nudity. During the colonial period, nudity and clothedness functioned as instruments of racial domination. Nakedness became central to the colonialists’ thinking, to their binary of the ‘civilized’ and those ‘close to nature’, turning the level of perceived unclothedness into a measurement of ‘civilization’. While being ‘one with nature’ continued to be a criterion of backwardness in the colonies, it emerged as a futuristic and avant-garde endeavor in the metropole: in Germany, at the height of its colonial expansion, the Life Reform Movement (Lebensreformbewegung) called for the liberation of the white body from the ‘constraints of civilization’, for its ‘return to nature’ via practices like nudism. Despite this simultaneity, the scholarship on the life reform movement, and the nudist movement in particular, does not address the colonial past of the movement or its repercussions in the present. Taking the biography of the prominent life reformist Hans Paasche (1881-1920) as a starting point, this paper explores the work of imperial legacies in the history and present of the German nudist movement. Paasche started his career as a German colonial officer, participating in the brutal suppression of the Maji-Maji uprising (1905/06), which claimed the lives of nearly 200,000 people. Once a passionate game hunter, he later became a known nature conservationist; once a self-proclaimed explorer of Africa, he later became one of the most prominent advocates of nudism and vegetarianism. The paper joins conceptual and historical research in order to address the German nudist movement’s colonial past and understand its repercussions in the present.

Keywords: Germany, life reform, colonialism, archives, nudity, nature

Procedia PDF Downloads 60
6118 Two Stage Assembly Flowshop Scheduling Problem Minimizing Total Tardiness

Authors: Ali Allahverdi, Harun Aydilek, Asiye Aydilek

Abstract:

The two-stage assembly flowshop scheduling problem has many applications in real life. To the best of our knowledge, the two-stage assembly flowshop scheduling problem with a total tardiness performance measure and separate setup times has not been addressed so far; hence, it is addressed in this paper. Different dominance relations are developed, and several algorithms are proposed. Extensive computational experiments are conducted to evaluate the proposed algorithms. The experiments show that one of the algorithms performs much better than the others and, moreover, that the best-performing algorithm outperforms the best existing algorithm in the literature for the case of zero setup times. Therefore, the proposed best-performing algorithm can be used not only for problems with separate setup times but also for the case of zero setup times.
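
For concreteness, the Python sketch below evaluates total tardiness for a candidate sequence in a two-stage assembly flowshop; the data layout and the anticipatory treatment of the separate setup are assumptions, not the paper’s exact formulation:

def total_tardiness(sequence, p1, setup, p2, due):
    """p1[j][i]: stage-1 time of job j's part on machine i;
    setup[j], p2[j], due[j]: assembly setup, assembly time, due date."""
    m = len(p1[0])
    stage1_free = [0.0] * m
    assembly_free = 0.0
    tardiness = 0.0
    for j in sequence:
        stage1_free = [stage1_free[i] + p1[j][i] for i in range(m)]
        parts_ready = max(stage1_free)                       # all parts must be done
        start = max(parts_ready, assembly_free + setup[j])   # setup done separately
        assembly_free = start + p2[j]
        tardiness += max(0.0, assembly_free - due[j])
    return tardiness

# three jobs, two first-stage machines (illustrative data)
print(total_tardiness([1, 0, 2],
                      p1=[[3, 2], [1, 2], [2, 4]],
                      setup=[1, 1, 2], p2=[2, 3, 1], due=[6, 5, 12]))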

Keywords: scheduling, assembly flowshop, total tardiness, algorithm

Procedia PDF Downloads 317
6117 Genetic Algorithms for Feature Generation in the Context of Audio Classification

Authors: José A. Menezes, Giordano Cabral, Bruno T. Gomes

Abstract:

Choosing good features is an essential part of machine learning. Recent techniques aim to automate this process. For instance, feature learning intends to learn the transformation of raw data into a useful representation for machine learning tasks. In automatic audio classification tasks, this is interesting since audio, usually complex information, needs to be transformed into a computationally convenient input to process. Another technique tries to generate features by searching a feature space. Genetic algorithms, for instance, have been used to generate audio features by combining or modifying them. We find this approach particularly interesting and, despite the undeniable advances of feature learning approaches, we wanted to take a step forward in the use of genetic algorithms to find audio features, combining them with more conventional methods, like PCA, and inserting search control mechanisms, such as constraints over a confusion matrix. This work presents the results obtained on particular audio classification problems.

Keywords: feature generation, feature learning, genetic algorithm, music information retrieval

Procedia PDF Downloads 400
6116 Summarizing Data Sets for Data Mining by Using Statistical Methods in Coastal Engineering

Authors: Yunus Doğan, Ahmet Durap

Abstract:

Coastal regions are among the most heavily used areas, shaped both by the natural balance and by growing populations. In coastal engineering, the most valuable data concern wave behavior. The amount of this data becomes very big because observations take place over periods of hours, days and months. In this study, statistical methods such as wave spectrum analysis methods and standard statistical methods have been used. The goal of this study is the discovery of profiles of different coastal areas by using these statistical methods and, thus, obtaining an instance-based data set from the big data for analysis by data mining algorithms. In the experimental studies, six sample data sets about wave behavior, obtained from 20 minutes of observations in Mersin Bay in Turkey, were converted to an instance-based form, while different clustering techniques from data mining were used to discover similar coastal places. Moreover, this study discusses that this summarization approach can be used in other branches collecting big data, such as medicine.
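
A compact Python sketch of the summarize-then-cluster idea follows; the chosen features (mean height, height variability, mean period) and the synthetic observations are assumptions in place of the Mersin Bay records:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

def summarize(heights, periods):
    # one instance per site: statistics over a 20-minute observation window
    return [np.mean(heights), np.std(heights), np.mean(periods)]

sites = np.array([summarize(rng.gamma(2.0, 0.5, 60), rng.normal(7.0, 1.0, 60))
                  for _ in range(6)])                 # six sample data sets
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sites)
print(labels)     # sites sharing a label have similar wave profiles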

Keywords: clustering algorithms, coastal engineering, data mining, data summarization, statistical methods

Procedia PDF Downloads 337
6115 Unmanned Air Vehicles against Disasters: Wildfires, Avalanches, Floods

Authors: İsmail Şimşekoğlu, Serkan Yılmaz

Abstract:

There have been great improvements in technology that have caused epoch-making changes in aviation. Thus, we can now control air vehicles from the ground without pilots on board: UAVs. Because UAVs need no onboard pilot and are small in size, they are of crucial importance. UAVs have a variety of usage areas, especially in the military. However, as soldiers, we believe that we can use UAVs for better purposes. In this essay, we indicate the usage of UAVs for the sake of saving nature from the destruction of disasters, by expressing what has happened in the past and what can possibly happen in the future, especially in firefighting, preventing avalanches, and decreasing the effects of floods. These three disasters have hazardous consequences for nature. Wildfires endanger many lives by burning and destroying whatever lies in their paths. The number of avalanches has increased with global warming. The changes of seasons have triggered floods all over the world that threaten city life. Besides all of this, people may lose their lives trying to counter these disasters. Drones can do the job without putting human lives at risk. They will thus diminish the risks, and so drones can be used for the sake of both nature and people.

Keywords: unmanned air vehicles, nature, firefighting, avalanche, flood

Procedia PDF Downloads 419
6114 Robot Operating System-Based SLAM for a Gazebo-Simulated Turtlebot2 in a 2D Indoor Environment with the Cartographer Algorithm

Authors: Wilayat Ali, Li Sheng, Waleed Ahmed

Abstract:

The ability of a robot to simultaneously build a map of the environment and localize itself with respect to that environment is the most important element of mobile robotics. Many algorithms can be utilized to build up the SLAM process, and SLAM is a developing area in robotics research. The Robot Operating System (ROS) is one of the frameworks which provides multiple algorithm nodes to work with and provides a transmission layer to robots. Many of these algorithms in extensive use are Hector SLAM, Gmapping and Cartographer SLAM. This paper describes a ROS-based simultaneous localization and mapping (SLAM) library, Google Cartographer, which is an open-source algorithm. The algorithm was applied to create a map using laser and pose data from a 2D LIDAR placed on a mobile robot. The robot model uses the Gazebo package and is visualized in RViz. Our research work's primary goal is to obtain a map through the Cartographer SLAM algorithm in a static indoor environment. From our research, it is shown that for indoor environments Cartographer is an applicable algorithm to generate 2D maps with a LIDAR placed on a mobile robot, because it uses both odometry and pose estimation. The algorithm has been evaluated, and the constructed maps were compared against those from the SLAM algorithms provided with Turtlebot2 in the static indoor environment.

Keywords: SLAM, ROS, navigation, localization and mapping, Gazebo, RViz, Turtlebot2, SLAM algorithms, 2D indoor environment, Cartographer

Procedia PDF Downloads 107
6113 Wait-Optimized Scheduler Algorithm for Efficient Process Scheduling in Computer Systems

Authors: Md Habibur Rahman, Jaeho Kim

Abstract:

Efficient process scheduling is a crucial factor in ensuring optimal system performance and resource utilization in computer systems. While various algorithms have been proposed over the years, there are still limitations to their effectiveness. This paper introduces a new Wait-Optimized Scheduler (WOS) algorithm that aims to minimize process waiting time by dividing processes into two layers and considering both process time and waiting time. The WOS algorithm is non-preemptive and prioritizes processes with the shortest WOS metric. In the first layer, each process runs for a predetermined duration, and any unfinished process is subsequently moved to the second layer, resulting in a decrease in response time. Whenever the first layer is free or the number of processes in the second layer is twice that of the first layer, the algorithm sorts all the processes in the second layer based on their remaining time minus their waiting time and sends one process to the first layer to run. This ensures that all processes eventually run, optimizing waiting time. To evaluate the performance of the WOS algorithm, we conducted experiments comparing it with traditional scheduling algorithms such as First-Come-First-Serve (FCFS) and Shortest-Job-First (SJF). The results showed that the WOS algorithm outperformed the traditional algorithms in reducing the waiting time of processes, particularly in scenarios with a large number of short tasks with long wait times. Our study highlights the effectiveness of the WOS algorithm in improving process scheduling efficiency in computer systems. By reducing process waiting time, the WOS algorithm can improve system performance and resource utilization. The findings of this study provide valuable insights for researchers and practitioners developing and implementing efficient process scheduling algorithms.
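
The following Python sketch mirrors the two-layer behavior described above; the first-layer quantum, simultaneous arrivals, and tie-breaking are assumptions the abstract leaves open:

from collections import deque

def wos_schedule(bursts, quantum=3.0):
    """bursts: total CPU time per job; all jobs assumed to arrive at t = 0."""
    clock = 0.0
    layer1 = deque(enumerate(bursts))       # (job id, remaining time)
    layer2 = []                             # (job id, remaining, entry time)
    finish = {}
    while layer1 or layer2:
        # promote when layer 1 is free or layer 2 is twice its size
        if layer2 and (not layer1 or len(layer2) >= 2 * len(layer1)):
            layer2.sort(key=lambda j: j[1] - (clock - j[2]))   # remaining - waiting
            jid, rem, _ = layer2.pop(0)
            layer1.append((jid, rem))
        jid, rem = layer1.popleft()
        run = min(quantum, rem)             # non-preemptive within the slice
        clock += run
        if rem - run > 1e-9:
            layer2.append((jid, rem - run, clock))
        else:
            finish[jid] = clock
    return finish

print(wos_schedule([7, 2, 10, 1]))          # completion time per job id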

Keywords: process scheduling, wait-optimized scheduler, response time, non-preemptive, waiting time, traditional scheduling algorithms, first-come-first-serve, shortest-job-first, system performance, resource utilization

Procedia PDF Downloads 63
6112 On an Approach for Rule Generation in Association Rule Mining

Authors: B. Chandra

Abstract:

In Association Rule Mining, much attention has been paid to developing algorithms for large (frequent/closed/maximal) itemsets, but very little attention has been paid to improving the performance of rule generation algorithms. Rule generation is an important part of Association Rule Mining. In this paper, a novel approach named NARG (Association Rule using Antecedent Support) has been proposed for rule generation that uses a memory-resident data structure named FCET (Frequent Closed Enumeration Tree) to find frequent/closed itemsets. In addition, the computational speed of NARG is enhanced by giving importance to the rules that have lower antecedent support. Comparative performance evaluation of NARG against a fast association rule mining algorithm for rule generation has been done on synthetic datasets and real-life datasets (taken from the UCI Machine Learning Repository). Performance analysis shows that NARG is computationally faster in comparison to the existing algorithms for rule generation.
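
As an illustrative Python sketch (not NARG itself), the snippet below generates rules from a single frequent itemset and ranks candidates by antecedent support, lowest first, echoing the importance the paper gives to rules with lower antecedent support:

from itertools import combinations

def rules_from_itemset(itemset, support, min_conf=0.6):
    """support: dict mapping frozensets to support counts."""
    whole = frozenset(itemset)
    rules = []
    for r in range(1, len(itemset)):
        for ante in combinations(itemset, r):
            ante = frozenset(ante)
            conf = support[whole] / support[ante]
            if conf >= min_conf:
                rules.append((ante, whole - ante, conf))
    # lower antecedent support implies higher confidence for a fixed itemset
    return sorted(rules, key=lambda rule: support[rule[0]])

support = {frozenset(s): c for s, c in [(("a",), 60), (("b",), 40),
                                        (("a", "b"), 30)]}
print(rules_from_itemset(("a", "b"), support))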

Keywords: knowledge discovery, association rule mining, antecedent support, rule generation

Procedia PDF Downloads 292
6111 A Comparative Analysis of Asymmetric Encryption Schemes on Android Messaging Service

Authors: Mabrouka Algherinai, Fatma Karkouri

Abstract:

Today, the Short Message Service (SMS) is an important means of communication. SMS is not only used for communication and transactions in informal environments, but also in formal environments such as institutions, organizations, companies, and the business world. Therefore, there is a need to secure the information transmitted through this medium, both in transit and at rest. Encryption has been identified as a means to provide this security. Several past studies have proposed and developed encryption algorithms for SMS and information security. This research compares the performance of common asymmetric encryption algorithms for SMS security. It employs three algorithms, namely RSA, McEliece, and RABIN. Several experiments were performed on SMS messages of various sizes on an Android mobile device. The experimental results show that each of the three techniques has different key generation, encryption, and decryption times. The efficiency of an algorithm is determined by the time it takes for encryption, decryption, and key generation. The best algorithm can be chosen based on the least time required for encryption. The obtained results show the least encryption time when McEliece with key size 4096 is used. RABIN with key size 4096 takes the most time for encryption and so is the least effective algorithm with respect to encryption. The research also shows that McEliece with key size 2048 has the least key generation time and is hence the best algorithm with regard to key generation. The results further show that RSA with key size 1024 is the most preferable algorithm in terms of decryption, as it gives the least decryption time.
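
The timing methodology can be sketched for RSA with the Python cryptography package (McEliece and RABIN are not available in that package, so they are omitted); the key size and SMS-sized payload are illustrative:

import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
msg = b"Meet at 6 pm."                       # SMS-sized plaintext

t0 = time.perf_counter()
key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
t1 = time.perf_counter()
ct = key.public_key().encrypt(msg, oaep)
t2 = time.perf_counter()
assert key.decrypt(ct, oaep) == msg
t3 = time.perf_counter()
print(f"keygen {t1 - t0:.3f}s  encrypt {t2 - t1:.5f}s  decrypt {t3 - t2:.5f}s")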

Keywords: SMS, RSA, McEliece, RABIN

Procedia PDF Downloads 133
6110 Prediction of the Lateral Bearing Capacity of Short Piles in Clayey Soils Using Imperialist Competitive Algorithm-Based Artificial Neural Networks

Authors: Reza Dinarvand, Mahdi Sadeghian, Somaye Sadeghian

Abstract:

Prediction of the ultimate bearing capacity of piles (Qu) is one of the basic issues in geotechnical engineering. So far, several methods have been used to estimate Qu, including the recently developed artificial intelligence methods. In recent years, optimization algorithms, such as colony algorithms, genetic algorithms, and imperialist competitive algorithms, have been used to minimize artificial neural network errors. In the present research, artificial neural networks based on the imperialist competitive algorithm (ANN-ICA) were used, and their results were compared with other methods. The results of laboratory tests of short piles in clayey soils, with parameters such as pile diameter, pile buried length, eccentricity of load and undrained shear resistance of soil, were used for modeling and evaluation. The results showed that ICA-based artificial neural networks predicted the lateral bearing capacity of short piles with a correlation coefficient of 0.9865 for training data and 0.975 for test data. Furthermore, the results of the model indicated the superiority of ICA-based artificial neural networks compared to back-propagation artificial neural networks as well as the Broms and Hansen methods.
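
A stripped-down Python sketch of the imperialist competitive algorithm’s core loop is shown below for a generic minimization task; the empire-competition and revolution steps are omitted, and the test function and rates are assumptions:

import numpy as np

rng = np.random.default_rng(4)
f = lambda x: float(np.sum(x ** 2))           # stand-in for the network error

def ica_minimize(dim=5, countries=40, n_imp=4, iters=200, beta=1.5):
    pop = rng.uniform(-5.0, 5.0, (countries, dim))
    for _ in range(iters):
        costs = np.array([f(c) for c in pop])
        order = np.argsort(costs)
        imps, cols = pop[order[:n_imp]], pop[order[n_imp:]]
        owners = rng.integers(0, n_imp, len(cols))     # colony-to-empire assignment
        # assimilation: each colony moves toward its imperialist
        cols = cols + beta * rng.random((len(cols), 1)) * (imps[owners] - cols)
        pop = np.vstack([imps, cols])
    best = min(pop, key=f)
    return best, f(best)

print(ica_minimize()[1])    # error should approach 0

In an ANN-ICA setting, f would be the network’s training error evaluated at a candidate weight vector.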

Keywords: artificial neural network, clayey soil, imperialist competition algorithm, lateral bearing capacity, short pile

Procedia PDF Downloads 116
6109 Data Mining Approach: Classification Model Evaluation

Authors: Lubabatu Sada Sodangi

Abstract:

The rapid growth in the exchange and accessibility of information via the internet makes many organisations acquire data on their own operations. The aim of data mining is to analyse the different behaviours of a dataset using observation. However, the subset of the dataset being analysed may not display all the behaviours and relationships of the entire data and, therefore, may not represent other parts that exist in the dataset. There is a range of techniques used in data mining to determine the hidden or unknown information in datasets. In this paper, the performance of two algorithms, Chi-Square Automatic Interaction Detection (CHAID) and the multilayer perceptron (MLP), is compared using the Adult dataset to find the percentage of adults that earn > 50K and those that earn <= 50K per year. The two algorithms were studied and compared using the IBM SPSS Statistics software. The result for CHAID shows that the most important predictors are relationship and education. The algorithm shows that those who are married (husband) and hold a Bachelor's, Master's, Doctorate or Prof-school qualification, and whose age is between 41 and 57, earn > 50K. The multilayer perceptron displays marital status and capital gain as the most important predictors of income. It also shows that individuals whose capital gain is less than 6,849 and who are single, separated or widowed earn <= 50K, whereas individuals whose capital gain is greater than 6,849, who work more than 35 hours per week, and who are older than 27 years earn > 50K. By comparing the two algorithms, it is observed that both algorithms are reliable, but the reliability is stronger in CHAID, which clearly shows that relationship and education contribute to the prediction, as displayed in the data visualisation.

Keywords: data mining, CHAID, multi-layer perceptron, SPSS, Adult dataset

Procedia PDF Downloads 355
6108 Experimental Evaluation of Succinct Ternary Tree

Authors: Dmitriy Kuptsov

Abstract:

Tree data structures, such as binary or, in general, k-ary trees, are essential in computer science. The applications of these data structures range from data search and retrieval to sorting and ranking algorithms. Naive implementations of these data structures can consume prohibitively large volumes of random access memory, limiting their applicability in certain solutions. In such cases, a more advanced representation of these data structures is essential. In this paper, we present the design of a compact version of the ternary tree data structure and demonstrate the results of an experimental evaluation using the static dictionary problem. We compare these results with results for binary and regular ternary trees. The conducted evaluation shows that our design, in the best case, consumes up to 12 times less memory (for the dictionary used in our experimental evaluation) than a regular ternary tree and in certain configurations shows performance comparable to regular ternary trees. We have evaluated the performance of the algorithms on both 32- and 64-bit operating systems.
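
For contrast with the compact design, a plain pointer-based ternary search tree for the static-dictionary task can be sketched in Python as follows; it is the memory-hungry baseline that succinct layouts compress:

class Node:
    __slots__ = ("ch", "lo", "eq", "hi", "end")
    def __init__(self, ch):
        self.ch, self.lo, self.eq, self.hi, self.end = ch, None, None, None, False

def insert(node, word):
    ch, rest = word[0], word[1:]
    if node is None:
        node = Node(ch)
    if ch < node.ch:
        node.lo = insert(node.lo, word)
    elif ch > node.ch:
        node.hi = insert(node.hi, word)
    elif rest:
        node.eq = insert(node.eq, rest)
    else:
        node.end = True
    return node

def contains(node, word):
    while node:
        ch = word[0]
        if ch < node.ch:
            node = node.lo
        elif ch > node.ch:
            node = node.hi
        elif len(word) == 1:
            return node.end
        else:
            node, word = node.eq, word[1:]
    return False

root = None
for w in ("cat", "cap", "car", "dog"):
    root = insert(root, w)
print(contains(root, "car"), contains(root, "cab"))   # True False

Each node stores three child references; succinct encodings replace such pointers with bit vectors supporting rank/select queries, which is where the memory saving comes from.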

Keywords: algorithms, data structures, succinct ternary tree, performance evaluation

Procedia PDF Downloads 138
6107 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal

Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan

Abstract:

This study analysed the classification accuracy for gearbox faults using machine learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, tend to wear due to prolonged usage, causing fluctuating vibrations. Increasing the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult. One way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms to these vibration signals to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from a 4-speed synchromesh gearbox using a Data Acquisition System (DAQ). Statistical features were extracted from the acquired vibration signals under various operating conditions. The extracted features were then given as input to the algorithms for fault classification. Supervised machine learning algorithms such as Support Vector Machines (SVM) and unsupervised algorithms such as the Deep Feed Forward Neural Network (DFFNN) and Deep Belief Networks (DBN) are used for fault classification. A fusion of the DBN and DFFNN classifiers was architected to further enhance the classification accuracy and to reduce the computational complexity. The fault classification accuracy for each algorithm was thoroughly studied, tabulated, and graphically analysed for the fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded the better classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.

Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithm, vibration signal

Procedia PDF Downloads 84
6106 Classification of Potential Biomarkers in Breast Cancer Using Artificial Intelligence Algorithms and Anthropometric Datasets

Authors: Aref Aasi, Sahar Ebrahimi Bajgani, Erfan Aasi

Abstract:

Breast cancer (BC) continues to be the most frequent cancer in females and causes the highest number of cancer-related deaths in women worldwide. Inspired by recent advances in studying the relationship between different patient attributes and the disease, in this paper, we have investigated different classification methods for better diagnosis of BC in its early stages. In this regard, datasets from the University Hospital Centre of Coimbra were chosen, and different machine learning (ML)-based and neural network (NN) classifiers have been studied. For this purpose, we selected favorable features among the nine provided attributes of the clinical dataset by using a random forest algorithm. This dataset consists of both healthy controls and BC patients, and it was noted that glucose, BMI, resistin, and age have the most importance, in that order. Moreover, we analyzed these features with various ML-based classifier methods, including Decision Tree (DT), K-Nearest Neighbors (KNN), eXtreme Gradient Boosting (XGBoost), Logistic Regression (LR), Naive Bayes (NB), and Support Vector Machine (SVM), along with the NN-based Multi-Layer Perceptron (MLP) classifier. The results revealed that among the different techniques, the SVM and MLP classifiers have the highest accuracy, at 96% and 92%, respectively. These results showed that the adopted procedure can be used effectively for the classification of cancer cells, and they encourage further experimental investigation with more collected data for other types of cancer.
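
The described pipeline (random-forest feature ranking followed by an SVM) can be sketched in Python as below; the synthetic stand-in data replaces the Coimbra dataset, and the shapes are assumptions:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(116, 9))      # stand-in for 9 clinical attributes
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 116) > 0).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top4 = np.argsort(forest.feature_importances_)[-4:]    # cf. glucose, BMI, resistin, age
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(svm, X[:, top4], y, cv=5).mean())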

Keywords: breast cancer, diagnosis, machine learning, biomarker classification, neural network

Procedia PDF Downloads 97
6105 Spectrum Assignment Algorithms in Optical Networks with Protection

Authors: Qusay Alghazali, Tibor Cinkler, Abdulhalim Fayad

Abstract:

In modern optical networks, flex-grid spectrum usage is most widespread, where higher bit rate streams get larger spectrum slices while lower bit rate traffic streams get smaller spectrum slices. In practice, under ITU-T recommendation G.694.1, spectrum slices of 50, 75, and 100 GHz are being used with a central frequency at 193.1 THz. However, when these spectrum slices are not sufficient, multiple spectrum slices can be used, either adjacent to one another or anywhere in the optical spectrum. In this paper, we propose an analysis of the wavelength assignment problem. We compare different algorithms for this spectrum assignment with and without protection. As a reference for comparisons, we concluded that Integer Linear Programming (ILP) provides the global optimum for all cases. The most scalable algorithm is the greedy one, which yields results in reasonable time even for larger network instances. The algorithms' benchmark was implemented using the LEMON C++ optimization library, and simulation runs were assessed based on the minimum number of spectrum slices assigned to lightpaths and their execution time.
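
The greedy strategy can be illustrated with a first-fit search for free spectrum slots on a single link in Python; the slot granularity and occupied ranges are assumptions:

def first_fit(occupied, demand_slots, total_slots=320):
    """occupied: list of (start, end) slot ranges already assigned on the link."""
    free_start = 0
    for start, end in sorted(occupied):
        if start - free_start >= demand_slots:
            return (free_start, free_start + demand_slots)
        free_start = max(free_start, end)
    if total_slots - free_start >= demand_slots:
        return (free_start, free_start + demand_slots)
    return None    # demand is blocked on this link

link = [(0, 4), (10, 16)]
print(first_fit(link, 4))    # -> (4, 8): the demand fits in the first gap

A full assignment would intersect the free ranges of every link along the lightpath, and along its protection path when protection is required, before committing the slots.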

Keywords: spectrum assignment, integer linear programming, greedy algorithm, international telecommunication union, library for efficient modeling and optimization in networks

Procedia PDF Downloads 145
6104 Guided Energy Theory of a Particle: Answered Questions Arise from Quantum Foundation

Authors: Desmond Agbolade Ademola

Abstract:

This work aimed to introduce a theory, called the Guided Energy Theory of a particle, that answers questions arising from the quantum foundation, quantum mechanics theory, and its interpretations, such as: what is the nature of the wavefunction? Is the mathematical formalism of the wavefunction correct? Does the wavefunction collapse during measurement? Do quantum physical entanglement and many-worlds interpretations really exist? In addition, is there uncertainty in the physical reality of our nature, as concluded in quantum theory? We have been able to show, by the fundamental analysis presented in this work, that the way quantum mechanics theory and its interpretations describe nature is not correlated with physical reality. We discovered, amongst others, that: (1) The Guided Energy Theory of a particle fundamentally provides a complete, physically observable series of quantized measurements of a particle's momentum, force, energy, etc. in a given distance and time. In contrast, the quantum mechanics wavefunction describes nature as having inherently probabilistic and indeterministic physical quantities, resulting in unobservable physical quantities that lead to the many-worlds interpretation. (2) The Guided Energy Theory of a particle fundamentally predicts that it is mathematically possible to determine precise quantized measurements of the position and momentum of a particle simultaneously, because there is no uncertainty in nature; nature naturally guides itself against uncertainty. This is contrary to the conclusion in quantum mechanics theory that it is mathematically impossible to determine the position and the momentum of a particle simultaneously. Furthermore, we have been able to show by this theory that it is mathematically possible to determine quantized measurements of the force acting on a particle simultaneously, which is not possible on the premise of quantum mechanics theory. (3) It is evidently shown by our theory that guided energy does not collapse; it only describes the lopsided nature of a particle's behavior in motion. This offers us insight into the gradual process of engagement (convergence) and disengagement (divergence) of guided energy holders, which further highlights how wave-like behavior returns to particle-like behavior and how particle-like behavior returns to wave-like behavior, respectively. This further proves that a particle's behavior in motion is oscillatory in nature. The mathematical formalism of the Guided Energy Theory shows that nature is certain, whereas the mathematical formalism of quantum mechanics theory shows that nature is absolutely probabilistic. In addition, the nature of the wavefunction is the guided energy of the wave. In conclusion, the fundamental mathematical formalism of quantum mechanics theory is wrong.

Keywords: momentum, physical entanglement, wavefunction, uncertainty

Procedia PDF Downloads 262
6103 A Method for Reduction of Association Rules in Data Mining

Authors: Diego De Castro Rodrigues, Marcelo Lisboa Rocha, Daniela M. De Q. Trevisan, Marcos Dias Da Conceicao, Gabriel Rosa, Rommel M. Barbosa

Abstract:

The use of association rule algorithms within data mining is recognized as being of great value in knowledge discovery in databases. Very often, the number of rules generated is high, sometimes even in databases with small volumes, so success in the analysis of results can be hampered by this quantity. The purpose of this research is to present a method for reducing the quantity of rules generated with association algorithms. Therefore, a computational algorithm was developed with the use of the Weka Application Programming Interface, which allows the execution of the method on different types of databases. After the development, tests were carried out on three types of databases: synthetic, model, and real. Efficient results were obtained in reducing the number of rules, where the worst case presented a gain of more than 50%, considering the concepts of support, confidence, and lift as measures. This study concluded that the proposed model is feasible and quite interesting, contributing to the analysis of the results of association rules generated from the use of algorithms.

Keywords: data mining, association rules, rules reduction, artificial intelligence

Procedia PDF Downloads 130
6102 Bridging Minds and Nature: Revolutionizing Elementary Environmental Education Through Artificial Intelligence

Authors: Hoora Beheshti Haradasht, Abooali Golzary

Abstract:

Environmental education plays a pivotal role in shaping the future stewards of our planet. Leveraging the power of artificial intelligence (AI) in this endeavor presents an innovative approach to captivate and educate elementary school children about environmental sustainability. This paper explores the application of AI technologies in designing interactive and personalized learning experiences that foster curiosity, critical thinking, and a deep connection to nature. By harnessing AI-driven tools, virtual simulations, and personalized content delivery, educators can create engaging platforms that empower children to comprehend complex environmental concepts while nurturing a lifelong commitment to protecting the Earth. With the pressing challenges of climate change and biodiversity loss, cultivating an environmentally conscious generation is imperative. Integrating AI in environmental education revolutionizes traditional teaching methods by tailoring content, adapting to individual learning styles, and immersing students in interactive scenarios. This paper delves into the potential of AI technologies to enhance engagement, comprehension, and pro-environmental behaviors among elementary school children. Modern AI technologies, including natural language processing, machine learning, and virtual reality, offer unique tools to craft immersive learning experiences. Adaptive platforms can analyze individual learning patterns and preferences, enabling real-time adjustments in content delivery. Virtual simulations, powered by AI, transport students into dynamic ecosystems, fostering experiential learning that goes beyond textbooks. AI-driven educational platforms provide tailored content, ensuring that environmental lessons resonate with each child's interests and cognitive level. By recognizing patterns in students' interactions, AI algorithms curate customized learning pathways, enhancing comprehension and knowledge retention. Utilizing AI, educators can develop virtual field trips and interactive nature explorations. Children can navigate virtual ecosystems, analyze real-time data, and make informed decisions, cultivating an understanding of the delicate balance between human actions and the environment. While AI offers promising educational opportunities, ethical concerns must be addressed. Safeguarding children's data privacy, ensuring content accuracy, and avoiding biases in AI algorithms are paramount to building a trustworthy learning environment. By merging AI with environmental education, educators can empower children not only with knowledge but also with the tools to become advocates for sustainable practices. As children engage in AI-enhanced learning, they develop a sense of agency and responsibility to address environmental challenges. The application of artificial intelligence in elementary environmental education presents a groundbreaking avenue to cultivate environmentally conscious citizens. By embracing AI-driven tools, educators can create transformative learning experiences that empower children to grasp intricate ecological concepts, forge an intimate connection with nature, and develop a strong commitment to safeguarding our planet for generations to come.

Keywords: artificial intelligence, environmental education, elementary children, personalized learning, sustainability

Procedia PDF Downloads 43