Search results for: artificial neural networks controller
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5582

3332 Simulation, Optimization, and Analysis Approach of Microgrid Systems

Authors: Saqib Ali

Abstract:

Energy sources are classified into two categories depending on whether they can be replenished. Sources that cannot be restored to their original form once consumed are considered nonrenewable energy resources (e.g., coal and fuel oil), whereas those that are replenished even after being consumed are known as renewable energy resources (e.g., wind, solar, and hydel power). Renewable energy is a cost-effective way to generate clean and green electrical energy, and nowadays the majority of countries are paying heed to energy generation from renewable energy sources (RES). Pakistan relies mostly on conventional energy resources, which are largely nonrenewable in nature; coal and fuel oil are among the major resources, and their prices are increasing with time. On the other hand, RES have great potential in the country, and with their deployment greater reliability and a more effective power system can be obtained. In this work, a hybrid power system is proposed that intermixes renewable and nonrenewable sources. The source side is composed of solar, wind, and fuel cells, which are used in an optimal manner to serve the load. The goal is to provide an economical, reliable, and uninterruptible power supply. This is achieved by an optimal controller (PI, PD, PID, FOPID). Optimization techniques are applied to the controllers to achieve the desired results; advanced algorithms (particle swarm optimization and the flower pollination algorithm) are used to extract the desired output from the controllers. A detailed comparison in the form of tables and results is provided, highlighting the efficiency of the proposed system.
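
The abstract does not detail the tuning procedure; as a rough illustration of the idea of tuning a controller with particle swarm optimization, the following sketch tunes a PI controller on a toy first-order plant by minimizing the ITAE criterion. The plant model, gain bounds, and PSO parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def itae_cost(kp, ki, dt=0.01, T=5.0):
    """ITAE of a PI-controlled first-order plant (toy stand-in for the microgrid)."""
    y, integ, cost = 0.0, 0.0, 0.0
    for k in range(int(T / dt)):
        t, e = k * dt, 1.0 - y          # unit step reference
        integ += e * dt
        u = kp * e + ki * integ         # PI control law
        y += (u - y) / 0.5 * dt         # plant: 0.5 * dy/dt = -y + u
        cost += t * abs(e) * dt         # time-weighted absolute error
    return cost

# Minimal particle swarm over (kp, ki)
n, w, c1, c2 = 20, 0.7, 1.5, 1.5
pos = rng.uniform(0.1, 10.0, (n, 2))
vel = np.zeros((n, 2))
pbest, pcost = pos.copy(), np.array([itae_cost(*p) for p in pos])
gbest = pbest[pcost.argmin()]

for _ in range(30):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.01, 20.0)
    cost = np.array([itae_cost(*p) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    gbest = pbest[pcost.argmin()]

print("tuned (kp, ki):", gbest.round(2), " ITAE:", round(pcost.min(), 4))
```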

Keywords: distributed generation, demand-side management, hybrid power system, micro grid, renewable energy resources, supply-side management

Procedia PDF Downloads 85
3331 R-Killer: An Email-Based Ransomware Protection Tool

Authors: B. Lokuketagoda, M. Weerakoon, U. Madushan, A. N. Senaratne, K. Y. Abeywardena

Abstract:

Ransomware has become a common threat in the past few years, and recent threat reports show continued growth in ransomware infections. Researchers have identified different variants of ransomware families since 2015. Users' lack of knowledge about the threat is a major concern, and ransomware detection methodologies are still maturing across the industry. Email is the easiest method of delivering ransomware to its victims: uninformed users tend to click on links and attachments without much consideration, assuming the emails are genuine. As a solution, this paper introduces the R-Killer ransomware detection tool, which can be integrated with existing email services. The Core Detection Engine (CDE) discussed in the paper focuses on separating suspicious samples from emails and handling them until a decision is made regarding the suspicious mail; it is capable of preventing the execution of identified ransomware processes. In addition, the sandboxing and URL-analysis system can communicate with public threat intelligence services to gather known threat intelligence. R-Killer also has its own mechanism, the Proactive Monitoring System (PMS), which can monitor the processes created by downloaded email attachments and identify potential ransomware activities. R-Killer is capable of gathering threat intelligence without exposing the user's data to public threat intelligence services, hence protecting the confidentiality of user data.

Keywords: ransomware, deep learning, recurrent neural networks, email, core detection engine

Procedia PDF Downloads 191
3330 [Keynote Speech]: Determination of Naturally Occurring and Artificial Radionuclide Activity Concentrations in Marine Sediments in Western Marmara, Turkey

Authors: Erol Kam, Z. U. Yümün

Abstract:

Natural and artificial radionuclides cause radioactive contamination in the environment: like other non-biodegradable pollutants (heavy metals, etc.), they sink to the sea floor and accumulate in sediments. The habitats of benthic foraminifera living on or in the sediments of the seafloor are particularly affected by radioactive pollution in the marine environment, so determining the radionuclides is important for pollution analysis. Radioactive pollution accumulates at the lowest level of the food chain and reaches humans at the highest level; the greater the accumulation, the more the environment is endangered. This study used gamma spectrometry to investigate the natural and artificial radionuclide distribution of sediment samples taken from living benthic foraminifera habitats in the Western Marmara Sea. The radionuclides K-40, Cs-137, Ra-226, Mn-54, Zr-95+ and Th-232 were identified in the sediment samples. For this purpose, 18 core samples were taken from depths of about 25-30 meters in the Marmara Sea in 2016. The locations of the core samples were selected specifically from discharge points of domestic and industrial areas, port locations, and so forth, so as to represent pollution in the study area. Gamma spectrometric analysis was used to determine the radioactive properties of the sediments. The radionuclide activity concentrations obtained in the sediment samples were Cs-137 = 0.9-9.4 Bq/kg, Th-232 = 18.9-86 Bq/kg, Ra-226 = 10-50 Bq/kg, K-40 = 24.4-670 Bq/kg, Mn-54 = 0.71-0.9 Bq/kg and Zr-95+ = 0.18-0.19 Bq/kg. These values were compared with United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) data, and an environmental analysis was carried out. The Ra-226 series, the Th-232 series, and the K-40 radionuclides accumulate naturally and are increasing every day due to anthropogenic pollution. Although the Ra-226 values obtained in the study areas remained within normal limits according to the UNSCEAR values, the K-40 and Th-232 series values were found to be high at almost all the locations.

Keywords: Ra-226, Th-232, K-40, Cs-137, Mn-54, Zr-95+, radionuclides, Western Marmara Sea

Procedia PDF Downloads 402
3329 Suppressing Vibration in a Three-Axis Flexible Satellite: An Approach with Composite Control

Authors: Jalal Eddine Benmansour, Khouane Boulanoir, Nacera Bekhadda, Elhassen Benfriha

Abstract:

This paper introduces a novel composite control approach that addresses the challenge of stabilizing the three-axis attitude of a flexible satellite in the presence of vibrations caused by flexible appendages. The key contribution of this research lies in the development of a disturbance observer, which observes and estimates the unwanted torques induced by the vibrations. By utilizing the estimated disturbance, the proposed approach enables efficient compensation for the detrimental effects of vibrations on the satellite system. To govern the attitude angles of the spacecraft, a proportional-derivative (PD) controller is designed, ensuring precise control over all attitude angles and facilitating stable and accurate spacecraft maneuvering. To demonstrate the global stability of the system, the Lyapunov method, a well-established technique in control theory, is employed: through rigorous analysis, it verifies the convergence of the system dynamics, providing strong evidence of system stability. To evaluate the performance and efficacy of the proposed control algorithm, extensive simulations are conducted. The simulation results validate the effectiveness of the combined approach, showcasing significant improvements in the stabilization and control of the satellite's attitude, even in the presence of disruptive vibrations from flexible appendages. The composite control approach presented in this paper contributes to the advancement of satellite attitude control techniques, offering a promising solution for achieving enhanced stability and precision in challenging operational environments.
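
As a loose, single-axis illustration of the composite scheme described above (PD attitude control plus a disturbance observer compensating flexible-appendage torques), the following sketch simulates a toy rigid-body axis. The inertia, gains, and sinusoidal disturbance are assumptions chosen for illustration; the paper's three-axis flexible dynamics are far richer.

```python
import numpy as np

# Illustrative single-axis parameters (not the paper's satellite model)
J = 50.0                 # inertia about the axis, kg*m^2
kp, kd = 25.0, 60.0      # PD gains
Lobs = 5.0               # disturbance-observer gain
dt, T = 0.01, 60.0

theta, omega, d_hat = 0.2, 0.0, 0.0   # attitude error (rad), rate, disturbance estimate

for k in range(int(T / dt)):
    t = k * dt
    d = 0.05 * np.sin(0.8 * t)             # toy flexible-appendage torque
    u = -kp * theta - kd * omega - d_hat   # PD law plus disturbance compensation
    alpha = (u + d) / J                    # angular acceleration (gyro-derived in practice)
    d_hat += Lobs * (J * alpha - u - d_hat) * dt  # first-order observer: d_hat tracks d
    omega += alpha * dt
    theta += omega * dt

print(f"final attitude error: {theta:.2e} rad, disturbance estimate: {d_hat:.4f} N*m")
```

The observer is a low-pass filter on the torque residual J*alpha - u, which equals the true disturbance; subtracting its estimate from the PD command removes the steady vibration-induced offset.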

Keywords: attitude control, flexible satellite, vibration control, disturbance observer

Procedia PDF Downloads 68
3328 Self-Organizing Maps for Credit Card Fraud Detection

Authors: Chun-Yi Peng, Wei-Hsuan Cheng, Shyh-Kuang Ueng

Abstract:

This study focuses on the application of self-organizing map (SOM) technology in analyzing credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. SOM, an artificial neural network, is particularly suited to pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activities. Moreover, this study has developed a specialized visualization tool to intuitively present the relationships between SOM analysis outcomes and transaction data, aiding financial institution personnel in quickly identifying and responding to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers a more effective method of fraud detection for the financial industry, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activities.
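
As a minimal sketch of SOM-based anomaly flagging in the spirit of the study (not its actual pipeline), the following uses the MiniSom library on synthetic stand-ins for scaled transaction features; transactions that land far from their best-matching unit get a high quantization error and are flagged.

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

rng = np.random.default_rng(0)
# Toy stand-ins for scaled transaction features (amount, hour, merchant code, ...)
normal = rng.normal(0.0, 1.0, size=(1000, 4))
fraud = rng.normal(4.0, 1.0, size=(10, 4))    # outlying pattern
data = np.vstack([normal, fraud])

som = MiniSom(10, 10, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(normal)
som.train_random(normal, 5000)   # learn the topology of "normal" behaviour only

def q_error(x):
    """Quantization error: distance to the best-matching unit's weight vector."""
    w = som.get_weights()[som.winner(x)]
    return np.linalg.norm(x - w)

scores = np.array([q_error(x) for x in data])
threshold = np.percentile(scores[:1000], 99)   # calibrate on normal transactions
print("flagged:", int(np.sum(scores > threshold)), "of", len(data))
```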

Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies

Procedia PDF Downloads 34
3327 Evaluation of the Conditions of Managed Aquifer Recharge in the West African Basement Area

Authors: Palingba Aimé Marie Doilkom, Mahamadou Koïta, Jean-Michel Vouillamoz, Angelbert Biaou

Abstract:

Most African populations in rural areas rely on groundwater for their consumption. In the face of climate change and strong demographic growth, groundwater, particularly in basement areas, is increasingly in demand, and the sustainability of water resources in this type of environment is becoming a major issue. Groundwater recharge can be natural or artificial. Unlike natural recharge, which often results from the natural infiltration of surface water (e.g., a share of rainfall), artificial recharge consists of inducing water infiltration through appropriate structures to artificially replenish the water stock of an aquifer. Artificial recharge is therefore one of the measures that can be implemented to secure water supply, combat the effects of climate change and, more generally, contribute to improving the quantitative status of groundwater bodies. It is in this context that the present research is conducted, with the aim of developing artificial recharge in order to contribute to the sustainability of basement aquifers in a context of climatic variability and constantly increasing water needs. To achieve the expected results, it is important to determine the characteristics of the infiltration basins and to identify the areas suitable for their implementation. The geometry of the aquifer was reproduced, and its hydraulic properties were collected and characterized, including boundary conditions, hydraulic conductivity, effective porosity, recharge, Van Genuchten parameters, and saturation indices. The aquifer of the Sanon experimental site is made up of three layers: the saprolite, the fissured horizon, and the fresh basement; the saprolite and the fissured horizon were considered for the simulations. The first results with the FEFLOW model show that the water table responds continuously for the first 100 days before stabilizing, with the hydraulic head increasing by an average of 1 m; the further away from the basin, the weaker the response of the water table. However, if a variable hydraulic head is imposed on the basins, the response of the water table is not uniform over time: the lower the basin hydraulic head, the less it affects the water table. These simulations are to be continued by refining the characteristics of the basins in order to obtain those appropriate for a good recharge.

Keywords: basement area, FEFLOW, infiltration basin, MAR

Procedia PDF Downloads 62
3326 Graph Clustering Unveiled: ClusterSyn - A Machine Learning Framework for Predicting Anti-Cancer Drug Synergy Scores

Authors: Babak Bahri, Fatemeh Yassaee Meybodi, Changiz Eslahchi

Abstract:

In the pursuit of effective cancer therapies, the exploration of combinatorial drug regimens is crucial to leverage synergistic interactions between drugs, thereby improving treatment efficacy and overcoming drug resistance. However, identifying synergistic drug pairs poses challenges due to the vast combinatorial space and the limitations of experimental approaches. This study introduces ClusterSyn, a machine learning (ML)-powered framework for classifying anti-cancer drug synergy scores. ClusterSyn employs a two-step approach involving drug clustering and synergy score prediction using a fully connected deep neural network. For each cell line in the training dataset, a drug graph is constructed, with nodes representing drugs and edge weights denoting synergy scores between drug pairs. Drugs are clustered using the Markov clustering (MCL) algorithm, and vectors representing the similarity of drug pairs to each cluster are input into the deep neural network for synergy score prediction (synergy or antagonism). Clustering results demonstrate effective grouping of drugs based on synergy scores, aligning similar synergy profiles. Subsequently, the neural network predictions and the synergy scores of the two drugs with others within their clusters are used to predict the synergy score of the considered drug pair. This approach facilitates comparative analysis with clustering and regression-based methods, revealing the superior performance of ClusterSyn over state-of-the-art methods like DeepSynergy and DeepDDS on diverse datasets such as O'Neil and ALMANAC. The results highlight the remarkable potential of ClusterSyn as a versatile tool for predicting anti-cancer drug synergy scores.
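
A minimal sketch of the clustering step described above, assuming a toy synergy graph: drugs are nodes, synergy scores are edge weights, and the Markov clustering (MCL) algorithm groups them. The drug names, scores, and the edge-filtering rule are hypothetical, and the `markov_clustering` package stands in for whatever implementation the authors used.

```python
import numpy as np
import scipy.sparse as sp
import markov_clustering as mc  # pip install markov_clustering

drugs = ["A", "B", "C", "D", "E"]
idx = {d: i for i, d in enumerate(drugs)}
# Hypothetical synergy scores between drug pairs in one cell line
synergy = {("A", "B"): 12.0, ("A", "C"): 9.5, ("B", "C"): 11.0,
           ("D", "E"): 14.2, ("A", "D"): 0.5, ("C", "E"): 0.8}

adj = np.zeros((len(drugs), len(drugs)))
for (u, v), s in synergy.items():
    if s > 1.0:  # keep clearly synergistic edges only (illustrative cut-off)
        adj[idx[u], idx[v]] = adj[idx[v], idx[u]] = s

result = mc.run_mcl(sp.csr_matrix(adj), inflation=2.0)  # Markov clustering
clusters = mc.get_clusters(result)                      # list of node-index tuples
print([[drugs[i] for i in c] for c in clusters])        # e.g. [A, B, C] and [D, E]
```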

Keywords: drug synergy, clustering, prediction, machine learning, deep learning

Procedia PDF Downloads 59
3325 Optimal Simultaneous Sizing and Siting of DGs and Smart Meters Considering Voltage Profile Improvement in Active Distribution Networks

Authors: T. Sattarpour, D. Nazarpour

Abstract:

This paper investigates the effect of the simultaneous placement of distributed generations (DGs) and smart meters (SMs) on voltage profile improvement in active distribution networks (ADNs). Responsive loads have recently been a substantial center of attention in power system studies involving distributed generation, and their existence in ADNs has an undeniable effect on the sizing and siting of DGs. For this reason, an optimal framework is proposed for the sizing and siting of DGs and SMs in ADNs. SMs are taken into consideration for the sake of successfully implementing demand response programs (DRPs), such as direct load control (DLC), with end-side consumers. Targeting voltage profile improvement, the optimization procedure is solved by a genetic algorithm (GA) and tested on the IEEE 33-bus distribution test system. Different scenarios are established, with variations in the number of DG units, individual or simultaneous placement of DGs and SMs, and an adaptive power factor (APF) mode for DGs to support reactive power. The obtained results confirm the significant effect of DRPs and the APF mode in determining the optimal size and site of DGs to be connected in the ADN, resulting in an improved voltage profile as well.

Keywords: active distribution network (ADN), distributed generations (DGs), smart meters (SMs), demand response programs (DRPs), adaptive power factor (APF)

Procedia PDF Downloads 287
3324 Comparative Analysis of Control Techniques Based Sliding Mode for Transient Stability Assessment for Synchronous Multicellular Converter

Authors: Rihab Hamdi, Amel Hadri Hamida, Fatiha Khelili, Sakina Zerouali, Ouafae Bennis

Abstract:

This paper features a comparative performance study of sliding mode control (SMC) for the closed-loop voltage control of a direct current to direct current (DC-DC) three-cell buck converter connected in parallel and operating in continuous conduction mode (CCM): SMC based on pulse-width modulation (PWM) is compared with SMC based on hysteresis modulation (HM), where an adaptive feedforward technique is adopted. On one hand, for the PWM-based SM, the approach is to incorporate a fixed-frequency PWM scheme that is effectively a variant of SM control. On the other hand, for the HM-based SM, an adaptive feedforward control is introduced that makes the hysteresis band of the SM controller's hysteresis modulator variable, in order to restrict the switching-frequency variation in the case of any change in the line input voltage or the output load. The results obtained under load change, input change, and reference change clearly demonstrate a similar dynamic response for both proposed techniques, with fast and smooth tracking of the desired output voltage. The PWM-based SM technique shows slightly better dynamic behavior than the HM-based SM technique and provides stability in all operating conditions. Simulation studies in the MATLAB/Simulink environment have been performed to verify the concept.

Keywords: DC-DC converter, hysteresis modulation, parallel multi-cells converter, pulse-width modulation, robustness, sliding mode control

Procedia PDF Downloads 153
3323 A Comprehensive Theory of Communication with Biological and Non-Biological Intelligence for a 21st Century Curriculum

Authors: Thomas Schalow

Abstract:

It is commonly recognized that our present curriculum is not preparing students to function in the 21st century, particularly in regard to communication needs across cultures - both human and non-human. In this paper, a comprehensive theory of communication, based on communication with non-human cultures and intelligences, is presented to meet three imminent contingencies: communicating with sentient biological intelligences, communicating with extraterrestrial intelligences, and communicating with artificial super-intelligences. The paper begins with the argument that we need to become much more serious about communicating with the non-human, intelligent life forms that already exist around us here on Earth. We need to broaden our definition of communication and reach out to other sentient life forms in order to provide humanity with a better perspective of its place within our ecosystem. The paper next examines the science and philosophy behind CETI (communication with extraterrestrial intelligences) and how it could prove useful even in the absence of contact with alien life. However, CETI's assumptions and methodology need to be revised in accordance with the communication theory proposed in this paper if we are truly serious about finding and communicating with life beyond Earth. The final theme explored is communication with non-biological super-intelligences. Humanity has never been truly compelled to converse with other species, and our failure to seriously consider such intercourse has left us largely unprepared to deal with communication in a future that will be mediated and controlled by computer algorithms. Fortunately, our experience dealing with other cultures can provide us with a framework for this communication. The basic concepts behind intercultural communication can be applied to the three types of communication envisioned in this paper if we are willing to recognize that we are in fact dealing with other cultures when we interact with other species, alien life, and artificial super-intelligence. The ideas considered in this paper will require a new mindset for humanity, but a new disposition will yield substantial gains. A curriculum that is truly ready for the 21st century needs to be aligned with this new theory of communication.

Keywords: artificial intelligence, CETI, communication, language

Procedia PDF Downloads 344
3322 Multi Criteria Authentication Method in Cognitive Radio Networks

Authors: Shokoufeh Monjezi Kouchak

Abstract:

The cognitive radio network (CRN) is a network of the future; without it, wireless devices will not be able to work appropriately in the coming decades. Today, wireless devices use static spectrum access methods, which do not use the spectrum optimally, so dynamic spectrum access (DSA) methods are needed to solve the spectrum shortage challenge. Cognitive radio (CR) is a great enabler for DSA, but first its own challenges should be solved, and security is one of them. In this paper, we provide a survey of CR security, summarized in Tables 1 to 7. We then propose a multi-criteria authentication method for CRNs. The criteria in this method are: sensing results, adherence to data transmission rules, the position of secondary users, and no-talk zones. Finally, we compare our method with other authentication methods.

Keywords: authentication, cognitive radio, security, radio networks

Procedia PDF Downloads 374
3321 The AI Arena: A Framework for Distributed Multi-Agent Reinforcement Learning

Authors: Edward W. Staley, Corban G. Rivera, Ashley J. Llorens

Abstract:

Advances in reinforcement learning (RL) have resulted in recent breakthroughs in the application of artificial intelligence (AI) across many different domains. An emerging landscape of development environments is making powerful RL techniques more accessible for a growing community of researchers. However, most existing frameworks do not directly address the problem of learning in complex operating environments, such as dense urban settings or defense-related scenarios, that incorporate distributed, heterogeneous teams of agents. To help enable AI research for this important class of applications, we introduce the AI Arena: a scalable framework with flexible abstractions for distributed multi-agent reinforcement learning. The AI Arena extends the OpenAI Gym interface to allow greater flexibility in learning control policies across multiple agents with heterogeneous learning strategies and localized views of the environment. To illustrate the utility of our framework, we present experimental results that demonstrate performance gains due to a distributed multi-agent learning approach over commonly used RL techniques in several different learning environments.
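
The AI Arena itself is not reproduced here; the following is a minimal sketch of the general pattern the abstract describes, a Gym-style interface extended to dictionaries of per-agent actions, rewards, and localized observations. All class and method names are hypothetical.

```python
from typing import Dict, Tuple

class MultiAgentGridEnv:
    """Gym-style environment extended to per-agent actions and localized views."""

    def __init__(self, n_agents: int = 2, size: int = 5):
        self.n_agents, self.size = n_agents, size
        self.pos: Dict[str, int] = {}

    def reset(self) -> Dict[str, int]:
        self.pos = {f"agent_{i}": 0 for i in range(self.n_agents)}
        return {a: self._observe(a) for a in self.pos}

    def step(self, actions: Dict[str, int]) -> Tuple[dict, dict, dict, dict]:
        for a, act in actions.items():             # act: 0 = stay, 1 = move forward
            self.pos[a] = min(self.pos[a] + act, self.size - 1)
        obs = {a: self._observe(a) for a in self.pos}
        rew = {a: float(self.pos[a] == self.size - 1) for a in self.pos}
        done = {a: self.pos[a] == self.size - 1 for a in self.pos}
        return obs, rew, done, {}

    def _observe(self, agent: str) -> int:
        return self.pos[agent]   # localized view: each agent sees only its own state

env = MultiAgentGridEnv()
obs = env.reset()
obs, rew, done, _ = env.step({a: 1 for a in obs})
print(obs, rew)
```

Each agent can then be driven by its own policy (and its own learning algorithm), which is the heterogeneity the framework is built around.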

Keywords: reinforcement learning, multi-agent, deep learning, artificial intelligence

Procedia PDF Downloads 144
3320 Comparative Analysis of Predictive Models for Customer Churn Prediction in the Telecommunication Industry

Authors: Deepika Christopher, Garima Anand

Abstract:

To determine the best model for churn prediction in the telecom industry, this paper compares 11 machine learning algorithms, namely Logistic Regression, Support Vector Machine, Random Forest, Decision Tree, XGBoost, LightGBM, CatBoost, AdaBoost, Extra Trees, Deep Neural Network, and a Hybrid Model (MLPClassifier). It also aims to pinpoint the top three factors that lead to customer churn and conducts customer segmentation to identify vulnerable groups. According to the data, the Logistic Regression model performs best, with an F1 score of 0.6215, 81.76% accuracy, 68.95% precision, and 56.57% recall. The top three attributes that cause churn are found to be tenure, Internet Service Fiber optic, and Internet Service DSL; the top three best-performing models are Logistic Regression, Deep Neural Network, and AdaBoost. The K-means algorithm is applied to establish and analyze four different customer clusters. This study effectively identifies customers at risk of churn, and its findings may be utilized to develop and execute strategies that lower customer attrition.
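
A minimal sketch of the best-performing setup reported above, logistic regression on churn labels, using scikit-learn on synthetic stand-ins for the three highlighted attributes (tenure and the two internet-service indicators). The data-generating process is invented for illustration, so the scores will not match the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(1)
n = 2000
tenure = rng.integers(1, 72, n)                 # months with the provider
fiber = rng.integers(0, 2, n)                   # fiber-optic internet indicator
dsl = (1 - fiber) * rng.integers(0, 2, n)       # DSL indicator (mutually exclusive)
logit = -0.04 * tenure + 1.2 * fiber + 0.4 * dsl + rng.normal(0, 1, n)
churn = (logit > 0).astype(int)                 # toy label: short-tenure fiber users churn

X = np.column_stack([tenure, fiber, dsl])
X_tr, X_te, y_tr, y_te = train_test_split(X, churn, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.3f}  F1={f1_score(y_te, pred):.3f}")
print("coefficients:", dict(zip(["tenure", "fiber", "dsl"], clf.coef_[0].round(2))))
```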

Keywords: attrition, retention, predictive modeling, customer segmentation, telecommunications

Procedia PDF Downloads 40
3319 Self-Organizing Maps for Credit Card Fraud Detection and Visualization

Authors: Chun-Yi Peng, Wei-Hsuan Chen, Shyh-Kuang Ueng

Abstract:

This study focuses on the application of self-organizing map (SOM) technology in analyzing credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. SOM, an artificial neural network, is particularly suited to pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activities. Moreover, this study has developed a specialized visualization tool to intuitively present the relationships between SOM analysis outcomes and transaction data, aiding financial institution personnel in quickly identifying and responding to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers a more effective method of fraud detection for the financial industry, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activities.

Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies

Procedia PDF Downloads 42
3318 The Influence of Hydrogen Addition to Natural Gas Networks on Gas Appliances

Authors: Yitong Xie, Chaokui Qin, Zhiguang Chen, Shuangqian Guo

Abstract:

Injecting hydrogen, a competitive carbon-free energy carrier, into existing natural gas networks has become a promising step toward alleviating global warming. Considering the differences in the properties of hydrogen and natural gas, there is very little evidence showing what degree of hydrogen admixture can be accepted and how appliances should be adjusted to adapt to the variation in gas constituents. The lack of this type of analysis adds uncertainty to injecting hydrogen into networks, because the basis for burner design and adjustment is limited. First, the properties of methane and hydrogen were compared for a comprehensive analysis of the impact of hydrogen addition to methane. As the main determinant of flame stability, the burning velocity was adopted for the hydrogen addition analysis: burning velocities for hydrogen-enriched natural gas with different hydrogen percentages and equivalence ratios were calculated with the software CHEMKIN. Interchangeability methods, including single-index methods, multi-index methods, and diagram methods, were adopted to determine the limit of the hydrogen percentage. Cooktops and water heaters were experimentally tested in the laboratory: flame structures at different hydrogen percentages and equivalence ratios were observed and photographed, and the changes in thermal efficiency, burner temperature, and emissions with hydrogen percentage and equivalence ratio were studied. The experimental methodologies and results in this paper provide an important basis for the introduction of hydrogen into gas pipelines and the adjustment of gas appliances.
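
As a small worked example of the single-index interchangeability methods mentioned above, the following computes the Wobbe index of methane/hydrogen blends. The heating values and relative densities are approximate textbook figures, not the paper's measurements.

```python
import math

# Approximate volumetric properties at standard conditions (indicative values)
HHV = {"CH4": 39.8, "H2": 12.7}      # higher heating value, MJ/m^3
RD = {"CH4": 0.554, "H2": 0.0696}    # relative density vs. air

def wobbe_index(h2_fraction: float) -> float:
    """Wobbe index of a CH4/H2 blend; h2_fraction is the H2 volume fraction."""
    hv = (1 - h2_fraction) * HHV["CH4"] + h2_fraction * HHV["H2"]
    rd = (1 - h2_fraction) * RD["CH4"] + h2_fraction * RD["H2"]
    return hv / math.sqrt(rd)

for x in (0.0, 0.1, 0.2, 0.3):
    print(f"{x:.0%} H2 -> Wobbe = {wobbe_index(x):.1f} MJ/m^3")
```

Because hydrogen's low heating value is partly offset by its low density, the Wobbe index drops only slowly with admixture, which is why moderate blends can stay within appliance tolerance bands.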

Keywords: hydrogen, methane, combustion, appliances, interchangeability

Procedia PDF Downloads 72
3317 Optimized Cluster Head Selection Algorithm Based on LEACH Protocol for Wireless Sensor Networks

Authors: Wided Abidi, Tahar Ezzedine

Abstract:

Low-Energy Adaptive Clustering Hierarchy (LEACH) is considered one of the effective hierarchical routing algorithms that optimize energy use and prolong the lifetime of the network. Since the selection of the Cluster Head (CH) in LEACH is carried out randomly, in this paper we propose an improved approach for electing the CH based on the LEACH protocol. In other words, we present a formula for calculating the threshold responsible for CH election. We adopt three principal criteria: the remaining energy of the node, the number of neighbors within cluster range, and the distance between the node and the CH. Simulation results show that our proposed approach outperforms the LEACH protocol in terms of prolonging the lifetime of the network and saving residual energy.
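
A minimal sketch of the idea, assuming the standard LEACH threshold T(n) = P / (1 - P * (r mod 1/P)) and an illustrative weighted combination of the three criteria; the weights and the combination rule below are assumptions, since the paper's exact formula is not given here.

```python
import random

P = 0.1  # desired cluster-head proportion, as in standard LEACH

def leach_threshold(r: int) -> float:
    """Standard LEACH threshold for round r (for nodes not yet CH in this epoch)."""
    return P / (1 - P * (r % int(1 / P)))

def modified_threshold(r, e_res, e_max, n_nbr, n_max, d_ch, d_max,
                       w=(0.5, 0.3, 0.2)):
    """Illustrative weighted variant using the paper's three criteria: residual
    energy, neighbours within cluster range, and distance to the CH."""
    factor = (w[0] * (e_res / e_max)          # more energy -> more likely CH
              + w[1] * (n_nbr / n_max)        # more neighbours -> more likely CH
              + w[2] * (1 - d_ch / d_max))    # closer to current CH -> less likely
    return leach_threshold(r) * factor

random.seed(0)
t = modified_threshold(r=3, e_res=0.8, e_max=1.0, n_nbr=6, n_max=10,
                       d_ch=20.0, d_max=100.0)
print(f"T(n) = {t:.4f}; node becomes CH if random() < T(n): {random.random() < t}")
```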

Keywords: wireless sensors networks, LEACH protocol, cluster head election, energy efficiency

Procedia PDF Downloads 319
3316 Reconstructability Analysis for Landslide Prediction

Authors: David Percy

Abstract:

Landslides are a geologic phenomenon that affects a large number of inhabited places and are constantly being monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data that works exclusively with discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications like landslide prediction. Since RA works exclusively with discrete data, such as soil classification or bedrock type, working with continuous data, such as porosity, requires that these data be binned for inclusion in the model. RA constructs models of the data which pick out the most informative elements, the independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as a primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot encoding schemes, RA works directly with the data as it is encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields similar accuracy but with the advantage of a completely transparent model. The result of an RA session with a data set is a report on every combination of variables and their probability of landslide events occurring; in this way, every informative combination of states can be examined.

Keywords: reconstructability analysis, machine learning, landslides, raster analysis

Procedia PDF Downloads 47
3315 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others; it is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence: it reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between the reconstruction error and distinctiveness of images and their memorability scores, suggesting that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
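
A loose sketch of the two measurements described above, per-image reconstruction error and latent-space distinctiveness, correlated against memorability scores. Synthetic arrays stand in for the autoencoder outputs and MemCat scores, so only the bookkeeping (nearest-neighbor distance, rank correlation) is meaningful here.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for per-image quantities from a (hypothetical) trained autoencoder
latents = rng.normal(size=(n, 64))          # latent codes
recon_err = rng.gamma(2.0, 0.05, size=n)    # per-image reconstruction MSE
mem = 0.5 + 0.3 * (recon_err - recon_err.mean()) / recon_err.std() \
      + rng.normal(0, 0.1, n)               # toy memorability scores

# Distinctiveness: Euclidean distance from each latent code to its nearest neighbour
D = cdist(latents, latents)
np.fill_diagonal(D, np.inf)
distinct = D.min(axis=1)

print("rho(reconstruction error, memorability):",
      round(spearmanr(recon_err, mem).correlation, 3))
print("rho(distinctiveness, memorability):     ",
      round(spearmanr(distinct, mem).correlation, 3))
```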

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 61
3314 Power Allocation Algorithm for Orthogonal Frequency Division Multiplexing Based Cognitive Radio Networks

Authors: Bircan Demiral

Abstract:

Cognitive radio (CR) is a promising technology that addresses the spectrum scarcity problem for future wireless communications, and Orthogonal Frequency Division Multiplexing (OFDM) technology provides flexible power and band allocation for cognitive radio networks (CRNs). While CR is a solution to spectrum scarcity, it also brings up the capacity problem. In this paper, a novel power allocation algorithm is proposed that aims at maximizing the sum capacity in OFDM-based cognitive radio networks. The proposed allocation algorithm is based on the previously developed water-filling algorithm. To reduce the computational complexity of the water-filling calculation, the proposed algorithm allocates the total power per subcarrier; the power allocated to the subcarriers increases the sum capacity. To observe this increase, a MATLAB program was used, and the proposed power allocation was compared with average power allocation, water-filling, and general power allocation algorithms. The water-filling algorithm performed worse than the proposed algorithm but better than the other two, so the proposed algorithm is the best of the set in terms of capacity increase. In addition, the effect of a change in the number of subcarriers on capacity was discussed: simulation results show that increasing the number of subcarriers increases the capacity.
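
For reference, the baseline the paper builds on is the classic water-filling allocation, which pours a total power budget over subcarriers so that p_k = max(mu - n_k, 0). A minimal sketch with bisection on the water level mu (the noise figures are illustrative, not the paper's setup):

```python
import numpy as np

def water_filling(noise: np.ndarray, p_total: float) -> np.ndarray:
    """Classic water-filling: p_k = max(mu - noise_k, 0) with sum(p_k) = p_total."""
    lo, hi = noise.min(), noise.max() + p_total
    for _ in range(100):                 # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - noise, 0.0)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - noise, 0.0)

rng = np.random.default_rng(0)
noise = rng.uniform(0.1, 2.0, size=16)        # per-subcarrier noise-to-gain ratios
p = water_filling(noise, p_total=10.0)
capacity = np.log2(1.0 + p / noise).sum()     # sum capacity in bits/s/Hz
print("allocated power:", p.round(2))
print("sum capacity:", round(capacity, 2))
```

Quiet subcarriers (low noise-to-gain ratio) sit deeper under the water level and receive more power, which is what maximizes the sum capacity under the total-power constraint.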

Keywords: cognitive radio network, OFDM, power allocation, water filling

Procedia PDF Downloads 122
3313 Health Trajectory Clustering Using Deep Belief Networks

Authors: Farshid Hajati, Federico Girosi, Shima Ghassempour

Abstract:

We present a Deep Belief Network (DBN) method for clustering health trajectories. A DBN is a deep architecture consisting of a stack of Restricted Boltzmann Machines (RBMs); in a deep architecture, each layer learns more complex features than the previous layers. The proposed method relies on the DBN for clustering without using the backpropagation learning algorithm, and it performs better than a standard deep neural network thanks to the initialization of the connecting weights. We use the Contrastive Divergence (CD) method for training the RBMs, which increases the performance of the network. The performance of the proposed method is evaluated extensively on the Health and Retirement Study (HRS) database. The University of Michigan Health and Retirement Study is a nationally representative longitudinal study that has surveyed more than 27,000 elderly and near-elderly Americans since its inception in 1992. Participants are interviewed every two years, and data are collected on physical and mental health, insurance coverage, financial status, family support systems, labor market status, and retirement planning. The dataset is publicly available, and we use the RAND HRS version L, an easy-to-use, cleaned-up version of the data. The sample dataset comprises 268 trajectories of length 10; the trajectories do not run past a patient's death, representing 10 successive interviews of live patients. Compared to state-of-the-art benchmarks, the experimental results show the effectiveness and superiority of the proposed method in clustering health trajectories.
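
A compact illustration of the CD training mentioned above: a single RBM trained with one step of Contrastive Divergence (CD-1) on synthetic binary vectors sized like the HRS sample (268 trajectories). This is a generic RBM sketch, not the paper's DBN stack.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny RBM trained with one step of Contrastive Divergence (CD-1)
n_vis, n_hid, lr = 10, 6, 0.1
W = rng.normal(0, 0.01, (n_vis, n_hid))
a = np.zeros(n_vis)   # visible biases
b = np.zeros(n_hid)   # hidden biases

data = (rng.random((268, n_vis)) < 0.3).astype(float)  # stand-in binary trajectories

for epoch in range(50):
    for v0 in data:
        ph0 = sigmoid(v0 @ W + b)                     # positive phase: hidden probs
        h0 = (rng.random(n_hid) < ph0).astype(float)  # sample hidden states
        pv1 = sigmoid(h0 @ W.T + a)                   # one Gibbs step back to visible
        v1 = (rng.random(n_vis) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b)                     # negative phase
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        a += lr * (v0 - v1)
        b += lr * (ph0 - ph1)

recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)
print("mean reconstruction error:", round(float(np.mean((data - recon) ** 2)), 4))
```

Stacking several such RBMs, each trained on the hidden activations of the one below, gives the weight initialization that the abstract credits for the DBN's advantage.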

Keywords: health trajectory, clustering, deep learning, DBN

Procedia PDF Downloads 351
3312 A Neuron Model of Facial Recognition and Detection of an Authorized Entity Using Machine Learning System

Authors: J. K. Adedeji, M. O. Oyekanmi

Abstract:

This paper critically examines the use of machine learning procedures in curbing unauthorized access to valuable areas of an organization. The use of passwords, PIN codes, and user identification has in recent times been only partially successful in curbing identity-related crimes, hence the need to design a system that incorporates biometric characteristics such as DNA and the pattern recognition of variations in facial expressions. The facial model used is based on the OpenCV library, which relies on certain physiological features; a Raspberry Pi 3 module is used to compile the OpenCV library, which extracts the faces detected through the camera and stores them in the datasets directory. The model is trained with a 50-epoch run on the database and recognized by the Local Binary Pattern Histogram (LBPH) recognizer contained in OpenCV. The training algorithm used by the neural network is backpropagation, coded in Python, with 200 epoch runs to identify specific resemblance in the exclusive OR (XOR) output neurons. The research confirms that physiological parameters are more effective measures for curbing identity-related crimes.
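
A minimal sketch of the LBPH recognition step, assuming OpenCV's contrib module is installed; random arrays stand in for the grayscale face crops that the camera pipeline would place in the datasets directory.

```python
import numpy as np
import cv2  # requires opencv-contrib-python for the cv2.face module

rng = np.random.default_rng(0)
# Toy stand-ins for grayscale face crops (two enrolled identities, 5 images each)
person_a = [rng.integers(0, 255, (100, 100), dtype=np.uint8) for _ in range(5)]
person_b = [rng.integers(0, 255, (100, 100), dtype=np.uint8) for _ in range(5)]
images = person_a + person_b
labels = np.array([0] * 5 + [1] * 5, dtype=np.int32)

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(images, labels)

label, confidence = recognizer.predict(person_a[0])
print(f"predicted label={label}, confidence={confidence:.1f} (lower = closer match)")
```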

Keywords: biometric characters, facial recognition, neural network, OpenCV

Procedia PDF Downloads 239
3311 Application of Combined Cluster and Discriminant Analysis to Make the Operation of Monitoring Networks More Economical

Authors: Norbert Magyar, Jozsef Kovacs, Peter Tanos, Balazs Trasy, Tamas Garamhegyi, Istvan Gabor Hatvani

Abstract:

Water is one of the most important common resources, and as a result of urbanization, agriculture, and industry it is becoming more and more exposed to potential pollutants. Preventing the deterioration of water quality is a crucial task for environmental scientists, and to achieve this aim the operation of monitoring networks is necessary. In general, these networks have to meet many important requirements, such as representativeness and cost efficiency. However, existing monitoring networks often include sampling sites which are unnecessary; with the elimination of these sites the monitoring network can be optimized and operated more economically. The aim of this study is to illustrate the applicability of CCDA (Combined Cluster and Discriminant Analysis) to the field of water quality monitoring and to optimize the monitoring networks of a river (the Danube), a wetland-lake system (Kis-Balaton & Lake Balaton), and two surface-subsurface water systems, on the watershed of Lake Neusiedl/Lake Fertő and in the Szigetköz area, over a period of approximately two decades. CCDA combines two multivariate data analysis methods: hierarchical cluster analysis and linear discriminant analysis. Its goal is to determine homogeneous groups of observations, in our case sampling sites, by comparing the goodness of preconceived classifications obtained from hierarchical cluster analysis with random classifications. The main idea behind CCDA is that if the ratio of correctly classified cases for a grouping is higher than at least 95% of the ratios for the random classifications, then at the level of significance (α=0.05) the given sampling sites don't form a homogeneous group. Since the sampling on Lake Neusiedl/Lake Fertő was conducted at the same time at all sampling sites, it was possible to visualize the differences between the sampling sites belonging to the same or different groups on scatterplots. Based on the results, the monitoring network of the Danube yields redundant information over certain sections, so that of 12 sampling sites, 3 could be eliminated without loss of information. In the case of the wetland (Kis-Balaton), one pair of sampling sites out of 12, and in the case of Lake Balaton, 5 out of 10 could be discarded. For the groundwater system of the catchment area of Lake Neusiedl/Lake Fertő, all 50 monitoring wells are necessary; there is no redundant information in the system. The number of sampling sites on Lake Neusiedl/Lake Fertő itself can be reduced to approximately half of the original number. Furthermore, neighbouring sampling sites were compared pairwise using CCDA, and the results were plotted on diagrams or isoline maps showing the location of the greatest differences; these results can help researchers decide where to place new sampling sites. The application of CCDA proved to be a useful tool in the optimization of monitoring networks for different types of water bodies, and based on the results obtained, the monitoring networks can be operated more economically.
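
A loose sketch of CCDA's core test on toy data: compute the linear-discriminant correct-classification ratio for a preconceived grouping of sampling sites and compare it against the ratios of random groupings. The data, the grouping, and the number of random draws are all illustrative, and the authors' implementation details differ.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Stand-in water-quality data: 6 sampling sites, 50 samples each, 4 variables
X = np.vstack([rng.normal(loc=i * 0.4, size=(50, 4)) for i in range(6)])
site = np.repeat(np.arange(6), 50)

def correct_ratio(grouping):
    """Ratio of correctly classified cases for a site -> group assignment."""
    y = np.array([grouping[s] for s in site])
    lda = LinearDiscriminantAnalysis().fit(X, y)
    return (lda.predict(X) == y).mean()

grouping = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}   # preconceived two-group split
observed = correct_ratio(grouping)

# CCDA-style criterion: exceeding the 95th percentile of random groupings means
# the grouped sites are distinguishable, i.e. NOT one homogeneous group.
random_ratios = [correct_ratio(dict(enumerate(rng.permutation([0, 0, 0, 1, 1, 1]))))
                 for _ in range(200)]
print(f"observed={observed:.3f}, "
      f"95th percentile of random={np.quantile(random_ratios, 0.95):.3f}")
```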

Keywords: combined cluster and discriminant analysis, cost efficiency, monitoring network optimization, water quality

Procedia PDF Downloads 335
3310 Thermalytix: An Advanced Artificial Intelligence Based Solution for Non-Contact Breast Screening

Authors: S. Sudhakar, Geetha Manjunath, Siva Teja Kakileti, Himanshu Madhu

Abstract:

Diagnosis of breast cancer at early stages leads to better clinical and survival outcomes. Survival rates in developing countries like India are very low due to the accessibility and affordability issues of screening tests such as mammography. In addition, mammography is not very effective in younger women with dense breasts, which leaves a gap in current screening methods. Thermalytix is a new technique for detecting breast abnormality in a non-contact, non-invasive way. It is an AI-enabled computer-aided diagnosis solution that automates the interpretation of high-resolution thermal images and identifies potential malignant lesions. The solution is low cost, easy to use, portable, and effective in all age groups. This paper presents the results of a retrospective comparative analysis of Thermalytix against mammography and clinical breast examination for breast cancer screening. Thermalytix was found to have better sensitivity than both tests, with good specificity as well. In addition, Thermalytix identified all malignant patients without palpable lumps.

Keywords: breast cancer screening, radiology, thermalytix, artificial intelligence, thermography

Procedia PDF Downloads 263
3309 The Role of ChatGPT in Enhancing ENT Surgical Training

Authors: Laura Brennan, Ram Balakumar

Abstract:

ChatGPT, developed by OpenAI (November 2022), is a powerful artificial intelligence (AI) language model designed to produce human-like text from user-written prompts. To gain the most from the system, prompts must provide context-specific information. This article aims to give guidance on how to optimize the ChatGPT system in the context of education for otolaryngology. Otolaryngology is a specialist field in which little time is dedicated to providing education to both medical students and doctors; additionally, otolaryngology trainees have seen a reduction in learning opportunities since the COVID-19 pandemic. In this article, we look at these various barriers to medical education in otolaryngology training and suggest ways in which ChatGPT can overcome them and assist in simulation-based training, with examples drawn from the authors' experience to highlight the practicalities. We find that while ChatGPT cannot replace traditional mentorship and practical surgical experience, it can serve as an invaluable supplementary resource for simulation-based medical education in otolaryngology.

Keywords: artificial intelligence, otolaryngology, surgical training, medical education

Procedia PDF Downloads 134
3308 Optimization Method of Dispersed Generation in Electrical Distribution Systems

Authors: Mahmoud Samkan

Abstract:

Dispersed Generation (DG) is a promising solution to many power system problems, such as voltage regulation and power losses. This paper proposes a heuristic two-step method to optimize the location and size of DG units in order to reduce active power losses and, therefore, improve the voltage profile in radial distribution networks. In addition to a DG placed at the system's load gravity center, the method assigns a DG to each lateral of the network. After the central DG placement has been determined, the location and size of each lateral DG are predetermined in the first step; the results are then refined in the second step. The method is tested on the 33-bus system for 100% DG penetration, and the results obtained are compared with those of other methods found in the literature.

Keywords: optimal location, optimal size, dispersed generation (DG), radial distribution networks, reducing losses

Procedia PDF Downloads 428
3307 Investigating the Effect of Artificial Intelligence on the Improvement of Green Supply Chain in Industry

Authors: Sepinoud Hamedi

Abstract:

Over the past few decades, companies have developed growing concerns about the environmental impact of their manufacturing activities. Green supply chain management has been considered by manufacturers a feasible option to reduce the environmental impact of operations while at the same time improving their operational performance. Contemporaneously, the advent of digitalization and globalization in the supply chain space has led to a growing acknowledgment of the importance of data-processing methodologies, such as big data analytics and artificial intelligence (BDA-AI) technologies, in improving and optimizing supply chain performance. Moreover, supply chain collaboration partially mediates the relationship between these technologies and supply chain performance. Studies show that the use of BDA-AI technologies has a significant impact on environmental process integration and green supply chain collaboration, and they also underline that both environmental process integration and green supply chain collaboration have a significant impact on environmental performance. Correspondingly, a smart supply chain contributes to green performance by managing green relationships and establishing green operations.

Keywords: green supply chain, artificial intelligence, manufacturers, technology, environmental

Procedia PDF Downloads 52
3306 MULTI-FLGANs: Multi-Distributed Adversarial Networks for Non-Independent and Identically Distributed Distribution

Authors: Akash Amalan, Rui Wang, Yanqi Qiao, Emmanouil Panaousis, Kaitai Liang

Abstract:

Federated learning is an emerging concept in the domain of distributed machine learning. This concept has enabled Generative Adversarial Networks (GANs) to benefit from rich distributed training data while preserving privacy. However, in a non-IID setting, current federated GAN architectures are unstable, struggle to learn the distinct features, and are vulnerable to mode collapse. In this paper, we propose an architecture, MULTI-FLGAN, to solve the problems of low-quality images, mode collapse, and instability for non-IID datasets. Our results show that MULTI-FLGAN is four times as stable and performant (i.e., achieves a high inception score) on average over 20 clients compared to the baseline FLGAN.

Keywords: federated learning, generative adversarial network, inference attack, non-IID data distribution

Procedia PDF Downloads 137
3305 Examining the Importance of the Structure Based on Grid Computing Service and Virtual Organizations

Authors: Sajjad Baghernezhad, Saeideh Baghernezhad

Abstract:

Vast changes and developments in the information technology field in recent decades have made the review of issues such as organizational structures unavoidable. The application of information technologies such as the Internet, together with the extensive use of computers and related networks, has led to new organizational formations whose nature is completely different from the traditional, large, bureaucratic ones; common characteristics of such organizations are the transfer of activities outside the organization, the exploitation of information and communication networks, and knowledge-centered workers. Such communication requirements have driven the development of network technologies, including grid computing. At first, grid computing served only to link a number of sites for short periods so that their resources could be used simultaneously, but it has now gone beyond that idea. In this article, grid computing technology is examined, and the virtual organization concept is discussed alongside it.

Keywords: grid computing, virtual organizations, software engineering, organization

Procedia PDF Downloads 316
3304 Theory of Mind and Its Brain Distribution in Patients with Temporal Lobe Epilepsy

Authors: Wei-Han Wang, Hsiang-Yu Yu, Mau-Sun Hua

Abstract:

Theory of Mind (ToM) refers to the ability to infer another's mental state; with appropriate ToM, one can behave well in social interactions. A growing body of evidence has demonstrated that patients with temporal lobe epilepsy (TLE) may have impaired ToM due to damage to regions of the underlying neural network of ToM. However, the question of whether there is cerebral laterality for ToM functions remains open. This study aimed to examine whether ToM abilities are lateralized in TLE patients. Sixty-seven adult TLE patients and 30 matched healthy controls (HC) were recruited. Patients were classified into right (RTLE), left (LTLE), and bilateral (BTLE) TLE groups on the basis of a consensus panel review of their seizure semiology, EEG findings, and brain imaging results. All participants completed an intellectual test and four tasks measuring basic and advanced ToM. The results showed that, on all ToM tasks: (1) each patient group performed worse than HC; (2) there were no significant differences between the LTLE and RTLE groups; (3) the BTLE group performed the worst. It appears that the neural network responsible for ToM is distributed evenly between the cerebral hemispheres.

Keywords: cerebral lateralization, social cognition, temporal lobe epilepsy, theory of mind

Procedia PDF Downloads 408
3303 Smart Web Services in the Web of Things

Authors: Sekkal Nawel

Abstract:

The Web of Things (WoT), the integration of smart technologies from the Internet or network into the Web architecture or applications, is becoming more complex, larger, and more dynamic. The WoT is associated with various elements such as sensors, devices, networks, protocols, data, functionalities, and architectures that together perform services for stakeholders. These services operate in the context of the interaction between stakeholders and the WoT elements. Such context is becoming a key information source whose data are of various natures and uncertain, thus leading to complex situations. In this paper, we take an interest in the development of intelligent Web services. The key ingredients of this “intelligent” notion are context diversity, the necessity of a semantic representation to manage complex situations, and the capacity to reason with uncertain data. In this perspective, we introduce a multi-layered architecture based on a generic intelligent Web service model dealing with various contexts, which proactively predicts future situations and reactively responds to real-time situations in order to support decision-making. For semantic context data representation, we use PR-OWL, a probabilistic ontology based on Multi-Entity Bayesian Networks (MEBN). PR-OWL is flexible enough to represent complex, dynamic, and uncertain contexts, the key requirements for developing intelligent Web services. A case study of intelligent plant watering was carried out using the proposed architecture to show the role of proactive and reactive contextual reasoning in the WoT.

Keywords: smart web service, the web of things, context reasoning, proactive, reactive, multi-entity bayesian networks, PR-OWL

Procedia PDF Downloads 47