Search results for: convolutional neural network topology
3074 Classification of Contexts for Mentioning Love in Interviews with Victims of the Holocaust
Authors: Marina Yurievna Aleksandrova
Abstract:
Research on the Holocaust retains value not only for history but also for sociology and psychology. One of the most important fields of study is how people coped during and after this traumatic event. The aim of this paper is to identify the main contexts of the topic of love and to determine which contexts are more characteristic of different groups of Holocaust victims (gender, nationality, age). In this research, transcripts of interviews with Holocaust victims collected during 1946 for the "Voices of the Holocaust" project were used as data. The main contexts were analyzed with methods of network analysis and latent semantic analysis and classified by gender, age, and nationality with a random forest. The results show that love is articulated and described significantly differently by male and female informants, whereas classification by nationality, as well as by age, shows lower values of the quality metrics. Keywords: Holocaust, latent semantic analysis, network analysis, text-mining, random forest
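A minimal sketch of the kind of pipeline the abstract describes, latent semantic analysis of transcript passages followed by random-forest classification, using scikit-learn. The tiny placeholder corpus, labels, and parameter values are illustrative assumptions, not the study's data or settings.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Placeholder passages; real interview transcripts would be loaded here.
docs = [
    "passage about love of family and home",
    "passage about love and loss during the war",
    "passage about love of friends and neighbours",
    "passage about romantic love before the war",
    "passage about love of parents and siblings",
    "passage about love of community and faith",
]
labels = ["female", "female", "male", "male", "female", "male"]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

# Latent semantic analysis: project the term-document matrix onto latent topics.
lsa = TruncatedSVD(n_components=3, random_state=0)
X_lsa = lsa.fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_lsa, labels, test_size=0.33, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("F1 (female):", f1_score(y_te, clf.predict(X_te), pos_label="female"))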
Procedia PDF Downloads 183
3073 Synchronization of Bus Frames during Universal Serial Bus Transfer
Authors: Petr Šimek
Abstract:
This work deals with the problem of synchronization of bus frames during transmission using USB (Universal Serial Bus). The principles of synchronization between USB and the non-deterministic CAN (Controller Area Network) bus are described. Furthermore, the work deals with ensuring the time sequence of communication frames when receiving from multiple communication bus channels. The structure of a general object for storing frames from different types of communication buses, such as CAN and LIN (Local Interconnect Network), is also described. Finally, an evaluation of the communication throughput of bus frames for USB High Speed is performed. The creation of this architecture was based on an analysis of the communication of control units with a large number of communication buses. For the design of the architecture, test HW with a USB-HS interface was used, which received previously known messages that were compared with the received result. The result of this investigation is the block architecture of the control program for the test HW, ensuring correct data transmission via the USB bus. Keywords: analysis, CAN, interface, LIN, synchronization, USB
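As an illustration of the "general object for storing frames" idea, the sketch below models a bus-independent frame record and restores the time sequence when merging several channels. It is written in Python for brevity; the field names and the merge step are assumptions, not the authors' embedded implementation on the test HW.

from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class BusFrame:
    timestamp_us: int                      # capture time, used for ordering
    channel: str = field(compare=False)    # e.g. "CAN1", "LIN1"
    bus_type: str = field(compare=False)   # "CAN", "LIN", ...
    frame_id: int = field(compare=False)
    payload: bytes = field(compare=False)

def merge_channels(*channels):
    """Merge per-channel frame lists into one time-ordered stream."""
    return list(heapq.merge(*[sorted(ch) for ch in channels]))

can = [BusFrame(120, "CAN1", "CAN", 0x1A0, b"\x01\x02"),
       BusFrame(40,  "CAN1", "CAN", 0x1A0, b"\x03")]
lin = [BusFrame(80,  "LIN1", "LIN", 0x21,  b"\xff")]
for f in merge_channels(can, lin):
    print(f.timestamp_us, f.channel, hex(f.frame_id))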
Procedia PDF Downloads 66
3072 Detect Critical Thinking Skill in Written Text Analysis. The Use of Artificial Intelligence in Text Analysis vs Chat/Gpt
Authors: Lucilla Crosta, Anthony Edwards
Abstract:
Companies and the marketplace nowadays struggle to find employees with adequate skills in relation to the anticipated growth of their businesses. At least half of workers will need to undertake some form of up-skilling in the next five years in order to remain aligned with the demands of the market. In order to meet these challenges, there is a clear need to explore the potential uses of AI (Artificial Intelligence) based tools in assessing the transversal skills (critical thinking, communication, and soft skills of different types in general) of workers and adult students while empowering them to develop those same skills in a reliable, trustworthy way. Companies seek workers with key transversal skills that can make a difference between workers now and in the future. However, critical thinking seems to be one of the most important skills, bringing unexplored ideas and company growth in business contexts. What employers have been reporting for years now is that this skill is lacking in the majority of workers and adult students, and this is particularly visible through their writing. This paper investigates how critical thinking and communication skills are currently developed in Higher Education environments through the use of AI tools at postgraduate level. It analyses the use of a branch of AI, namely Machine Learning and Big Data, and of Neural Network Analysis. It also examines the potential effect of acquiring these skills through AI tools and what kind of effect this has on employability. This paper draws information from researchers and studies at both national (Italy & UK) and international level in Higher Education. The issues associated with the development and use of one specific AI tool, Edulai, will be examined in detail. Finally, comparisons will also be made between these tools and the more recent phenomenon of ChatGPT, and their shortcomings and drawbacks will be analysed. Keywords: critical thinking, artificial intelligence, higher education, soft skills, chat GPT
Procedia PDF Downloads 114
3071 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores
Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan
Abstract:
Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a specific feature vector of different dimensions from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, an algorithm generally reduces the image details by pooling. This operation overlooks the details of most concern to forensic experts. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with the known data of the same person's frontal ID photos. Downscaling and manual handling were performed on the testing images. The results confirmed that the facial recognition algorithms based on deep learning detected structural and morphological information and rarely focused on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled in distinguishing category features, while forensic experts were better at discovering the individual features of human faces. In the proposed approach, a fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of the objective method of facial comparison and provides a novel method for human-machine collaboration in this field. Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics
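A minimal sketch of score-level fusion of the kind described, assuming both the automated system and the human examiner provide calibrated log-likelihood ratios (LLRs); the weights, threshold, and toy scores are illustrative assumptions rather than the paper's fusion rule.

import numpy as np

def fuse_llr(system_llr, examiner_llr, w_system=0.6, w_examiner=0.4):
    """Weighted-sum fusion of calibrated log-likelihood ratios."""
    return w_system * np.asarray(system_llr) + w_examiner * np.asarray(examiner_llr)

def decide(fused_llr, threshold=0.0):
    """Accept the same-source hypothesis when the fused LLR exceeds the threshold."""
    return fused_llr > threshold

# Toy genuine / impostor comparisons
system_llr   = np.array([2.1, 1.4, -0.8, -2.5])
examiner_llr = np.array([1.7, 0.2, -1.1, -1.9])
fused = fuse_llr(system_llr, examiner_llr)
print(fused, decide(fused))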
Procedia PDF Downloads 131
3070 Automatic Content Curation of Visual Heritage
Authors: Delphine Ribes Lemay, Valentine Bernasconi, André Andrade, Lara DéFayes, Mathieu Salzmann, FréDéRic Kaplan, Nicolas Henchoz
Abstract:
Digitization and preservation of large heritage collections induce high maintenance costs to keep up with technical standards and ensure sustainable access. Creating impactful usage is instrumental to justify the resources for long-term preservation. The Museum für Gestaltung of Zurich holds one of the biggest poster collections in the world, from which 52’000 were digitised. In the process of building a digital installation to valorize the collection, one objective was to develop an algorithm capable of predicting the next poster to show according to the ones already displayed. The work presented here describes the steps to build an algorithm able to automatically create sequences of posters reflecting associations performed by curators and professional designers. The exposed challenge finds similarities with the domain of song playlist algorithms. Recently, artificial intelligence techniques and, more specifically, deep-learning algorithms have been used to facilitate their generation. Promising results were found thanks to Recurrent Neural Networks (RNN) trained on manually generated playlists and paired with clusters of features extracted from songs. We used the same principles to create the proposed algorithm but applied them to a challenging medium, posters. First, a convolutional autoencoder was trained to extract features of the posters. The 52’000 digital posters were used as a training set. Poster features were then clustered. Next, an RNN learned to predict the next cluster according to the previous ones. The RNN training set was composed of poster sequences extracted from a collection of books from the Gestaltung Museum of Zurich dedicated to displaying posters. Finally, within the predicted cluster, the poster with the best proximity to the previous poster is selected. The mean square distance between poster features was used to compute the proximity. To validate the predictive model, we compared sequences of 15 posters produced by our model to randomly and manually generated sequences. Manual sequences were created by a professional graphic designer. We asked 21 participants working as professional graphic designers to sort the sequences from the one with the strongest graphic line to the one with the weakest and to motivate their answer with a short description. The sequences produced by the designer were ranked first 60%, second 25% and third 15% of the time. The sequences produced by our predictive model were ranked first 25%, second 45% and third 30% of the time. The sequences produced randomly were ranked first 15%, second 29%, and third 55% of the time. Compared to designer sequences, and as reported by participants, model and random sequences lacked thematic continuity. According to the results, the proposed model is able to generate better poster sequencing compared to random sampling. In some cases, our algorithm is even able to outperform a professional designer. As a next step, the proposed algorithm should include a possibility to create sequences according to a selected theme. To conclude, this work shows the potential of artificial intelligence techniques to learn from existing content and provide a tool to curate large sets of data, with a permanent renewal of the presented content. Keywords: Artificial Intelligence, Digital Humanities, serendipity, design research
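A simplified sketch of the sequencing step described above: feature vectors (random stand-ins for the autoencoder features) are clustered with k-means, a stubbed-out predictor plays the role of the trained RNN, and the next poster is chosen within the predicted cluster by mean-square distance to the previous one. All sizes and the stub are assumptions for illustration only.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64))          # stand-in for autoencoder features
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit(features)

def predict_next_cluster(history):
    """Placeholder for the trained RNN; here it simply cycles through clusters."""
    return (history[-1] + 1) % 10

def next_poster(prev_idx, cluster_history):
    target = predict_next_cluster(cluster_history)
    candidates = np.flatnonzero(clusters.labels_ == target)
    dists = ((features[candidates] - features[prev_idx]) ** 2).mean(axis=1)
    return candidates[np.argmin(dists)], target

seq, hist = [0], [clusters.labels_[0]]
for _ in range(14):                            # build a 15-poster sequence
    idx, c = next_poster(seq[-1], hist)
    seq.append(int(idx))
    hist.append(c)
print(seq)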
Procedia PDF Downloads 186
3069 Investigation of Different Machine Learning Algorithms in Large-Scale Land Cover Mapping within the Google Earth Engine
Authors: Amin Naboureh, Ainong Li, Jinhu Bian, Guangbin Lei, Hamid Ebrahimy
Abstract:
Large-scale land cover mapping has become a new challenge in the land change and remote sensing fields because it involves a large volume of data. Moreover, selecting the right classification method is quite difficult, especially when there are different types of landscapes in the study area. This paper is an attempt to compare the performance of different machine learning (ML) algorithms for generating a land cover map of the China-Central Asia–West Asia Corridor, which is considered one of the main parts of the Belt and Road Initiative (BRI) project. The cloud-based Google Earth Engine (GEE) platform was used for generating a land cover map of the study area from Landsat-8 images (2017) by applying three frequently used ML algorithms: random forest (RF), support vector machine (SVM), and artificial neural network (ANN). The selected ML algorithms (RF, SVM, and ANN) were trained and tested using reference data obtained from the MODIS yearly land cover product and very high-resolution satellite images. The findings of the study illustrate that, among the three frequently used ML algorithms, RF, with 91% overall accuracy, had the best result in producing a land cover map for the China-Central Asia–West Asia Corridor, whereas ANN showed the worst result with 85% overall accuracy. The strong performance of GEE in applying different ML algorithms and handling the huge volume of remotely sensed data in the present study showed that it could also help researchers to generate reliable long-term land cover change maps. The findings of this research are of great importance for decision-makers and the BRI’s authorities in strategic land use planning. Keywords: land cover, google earth engine, machine learning, remote sensing
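A hedged sketch of the comparison the abstract reports, RF, SVM, and ANN classifiers evaluated by overall accuracy, using scikit-learn on synthetic "spectral" samples that stand in for the Landsat-8 reference data; this is not the authors' GEE script, and the hyperparameters are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.random((2000, 7))                      # 7 spectral band values per sample
y = rng.integers(0, 6, size=2000)              # 6 land-cover classes (placeholder labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "RF":  RandomForestClassifier(n_estimators=300, random_state=1),
    "SVM": SVC(kernel="rbf", C=10, gamma="scale"),
    "ANN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=1),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "overall accuracy:", accuracy_score(y_te, model.predict(X_te)))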
Procedia PDF Downloads 114
3068 Wireless Information Transfer Management and Case Study of a Fire Alarm System in a Residential Building
Authors: Mohsen Azarmjoo, Mehdi Mehdizadeh Koupaei, Maryam Mehdizadeh Koupaei, Asghar Mahdlouei Azar
Abstract:
The increasing prevalence of wireless networks in our daily lives has made them indispensable. The aim of this research is to investigate the management of information transfer in wireless networks and the integration of renewable solar energy resources in a residential building. The focus is on the transmission of electricity and information through wireless networks, as well as the utilization of sensors and wireless fire alarm systems. The research employs a descriptive approach to examine the transmission of electricity and information on a wireless network with electric and optical telephone lines. It also investigates the transmission of signals from sensors and wireless fire alarm systems via radio waves. The methodology includes a detailed analysis of security, comfort conditions, and costs related to the utilization of wireless networks and renewable solar energy resources. The study reveals that it is feasible to transmit electricity on a network cable using two pairs of network cables without the need for separate power cabling. Additionally, the integration of renewable solar energy systems in residential buildings can reduce dependence on traditional energy carriers. The use of sensors and wireless remote information processing can enhance the safety and efficiency of energy usage in buildings and the surrounding spaces.Keywords: renewable energy, intelligentization, wireless sensors, fire alarm system
Procedia PDF Downloads 55
3067 Cryptography and Cryptosystem a Panacea to Security Risk in Wireless Networking
Authors: Modesta E. Ezema, Chikwendu V. Alabekee, Victoria N. Ishiwu, Ifeyinwa NwosuArize, Chinedu I. Nwoye
Abstract:
The importance of the advent of wireless networking in computing technology cannot be overemphasized: it opened up easy accessibility to information resources, made networking easier, and brought internet accessibility to our doorsteps. Despite all this, some mishaps came with it that are causing mayhem in today's overall information security. Cyber criminals will always compromise the integrity of a message that is not encrypted or that is encrypted with a weak algorithm. In order to correct this, the study focuses on cryptosystems and cryptography, which ensure end-to-end encrypted messaging. Various cryptographic algorithms, as well as the techniques and applications of cryptography for efficiency, were all considered in the work. Present and future applications of cryptography were dealt with, and Quantum Cryptography was presented as the current and future area in the development of cryptography. An empirical study was conducted to collect data from network users. Keywords: algorithm, cryptography, cryptosystem, network
Procedia PDF Downloads 350
3066 A Cooperative Signaling Scheme for Global Navigation Satellite Systems
Authors: Keunhong Chae, Seokho Yoon
Abstract:
Recently, the global navigation satellite system (GNSS) such as Galileo and GPS is employing more satellites to provide a higher degree of accuracy for the location service, thus calling for a more efficient signaling scheme among the satellites used in the overall GNSS network. In that the network throughput is improved, the spatial diversity can be one of the efficient signaling schemes; however, it requires multiple antenna that could cause a significant increase in the complexity of the GNSS. Thus, a diversity scheme called the cooperative signaling was proposed, where the virtual multiple-input multiple-output (MIMO) signaling is realized with using only a single antenna in the transmit satellite of interest and with modeling the neighboring satellites as relay nodes. The main drawback of the cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate in an asynchronous way, and thus, the overall performance of the GNSS network could degrade severely. To tackle the problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement due to a signal decoding at the relay nodes. Although the implementation at the relay nodes could be simpler to some degree by employing the time-reversal and conjugation operations instead of the signal decoding, it would be more efficient if we could implement the operations of the relay nodes at the source node having more resources than the relay nodes. So, in this paper, we propose a novel cooperative signaling scheme, where the data signals are combined in a unique way at the source node, thus obviating the need of the complex operations such as signal decoding, time-reversal and conjugation at the relay nodes. The numerical results confirm that the proposed scheme provides the same performance in the cooperative diversity and the bit error rate (BER) as the conventional scheme, while reducing the complexity at the relay nodes significantly. Acknowledgment: This work was supported by the National GNSS Research Center program of Defense Acquisition Program Administration and Agency for Defense Development.Keywords: global navigation satellite network, cooperative signaling, data combining, nodes
Procedia PDF Downloads 282
3065 The Iraqi Fibre-to-the-Home Networks, Problems, Challenges, and Solutions along with Less Expense
Authors: Hasanein Hasan, Mohammed Al-Taie, Basil Shanshool, Khalaf Abd-Ali
Abstract:
This approach aims to deal with establishing and operating Iraqi Fibre-To-The-Home (FTTH) projects. The problems they suffer from are organized sabotage, vandalism, accidental damage and poor planning. It provides practical solutions that deal with the aforementioned problems. These solutions consist of both technical and financial clarifications that ensure the achievement of the FTTH network’s stability for the purpose of equipping citizens, private sector companies, and governmental institutions with services, data transmission, the Internet, and other services. They aim to solve problems and obstacles accompanying the operation and maintenance of FTTH projects implemented by the Informatics and Telecommunications Public Company (ITPC)/ Iraqi Ministry of Communications (MoC). This approach takes the FTTH network of AlMaalif-AlMuaslat districts/ Baghdad-Iraq as a case study.Keywords: CCTV, FTTH, ITPC, MoC, NVR, PTZ
Procedia PDF Downloads 85
3064 Semantic Network Analysis of the Saudi Women Driving Decree
Authors: Dania Aljouhi
Abstract:
September 26th, 2017, is a historic date for all women in Saudi Arabia. On that day, Saudi Arabia announced the decree allowing Saudi women to drive. With the advent of Vision 2030 and its goal to empower women and increase their participation in Saudi society, we see how Saudi Twitter users deliberate on the 2017 decree in terms of different social, cultural, religious, economic and political factors. This topic bridges social media ('Twitter'), gender and socio-cultural studies to offer insights into how Saudis’ tweets reflect a broader discourse on Saudi women in the age of social media. The present study aims to explore the meanings and themes that emerge from Saudi Twitter users in response to the 2017 royal decree on women driving. The sample used in the current study involves (n = 1000) tweets that were collected from Sep 2017 to March 2019 to account for the Saudis’ tweets before and after the implementation of the decree. The paper uses semantic and thematic network analysis methods to examine the Saudis’ Twitter discourse on the women driving issue. The paper argues that Twitter as a platform has mediated the discourse on women driving among the Saudi community and facilitated social changes. Finally, framing theory (Goffman, 1974) and networked framing (Meraz & Papacharissi 2013) are both used to explain the tweets on the decree allowing Saudi women to drive based on # Saudi women-driving-cars. Keywords: Saudi Arabia, women, Twitter, semantic network analysis, framing
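A minimal sketch of one way a semantic (word co-occurrence) network can be built from tweet text with NetworkX; the three English placeholder tweets, the tokenisation, and the use of degree centrality are illustrative assumptions, not the study's corpus or exact method.

import itertools
from collections import Counter
import networkx as nx

tweets = [
    "saudi women driving decree historic day",
    "women empowerment vision 2030 saudi society",
    "driving decree economic participation women",
]

edges = Counter()
for tweet in tweets:
    words = sorted(set(tweet.lower().split()))
    edges.update(itertools.combinations(words, 2))   # co-occurrence within a tweet

G = nx.Graph()
for (u, v), w in edges.items():
    G.add_edge(u, v, weight=w)

# Degree centrality highlights the terms that organise the discourse.
central = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:5]
print(central)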
Procedia PDF Downloads 159
3063 A Look at the Quantum Theory of Atoms in Molecules from the Discrete Morse Theory
Authors: Dairo Jose Hernandez Paez
Abstract:
The quantum theory of atoms in molecules (QTAIM) allows us to obtain topological information on the electron density in quantum mechanical systems. QTAIM starts by considering the electron density as a continuous mathematical object. On the other hand, a discretization of the electron density is also a mathematical object which, from the standpoint of discrete mathematics, would allow a new approach to its topological study. From this point of view, it is necessary to develop a series of steps that provide the theoretical support guaranteeing its application. Some of the steps that we consider most important are mentioned below: (1) obtain good representations of the electron density through computational calculations; (2) design a methodology for the discretization of the electron density and construct the simplicial complex; (3) analyze the discrete vector field associated with the simplicial complex; (4) finally, in this research, we propose to use discrete Morse theory as a mathematical tool to carry out studies of electron density topology. Keywords: discrete mathematics, Discrete Morse theory, electronic density, computational calculations
Procedia PDF Downloads 105
3062 Cross-Knowledge Graph Relation Completion for Non-Isomorphic Cross-Lingual Entity Alignment
Authors: Yuhong Zhang, Dan Lu, Chenyang Bu, Peipei Li, Kui Yu, Xindong Wu
Abstract:
The Cross-Lingual Entity Alignment (CLEA) task aims to find the aligned entities that refer to the same identity from two knowledge graphs (KGs) in different languages. It is an effective way to enhance the performance of data mining for KGs with scarce resources. In real-world applications, the neighborhood structures of the same entities in different KGs tend to be non-isomorphic, which makes the representation of entities contain diverse semantic information and then poses a great challenge for CLEA. In this paper, we try to address this challenge from two perspectives. On the one hand, the cross-KG relation completion rules are designed with the alignment constraint of entities and relations to improve the topology isomorphism of two KGs. On the other hand, a representation method combining isomorphic weights is designed to include more isomorphic semantics for counterpart entities, which will benefit the CLEA. Experiments show that our model can improve the isomorphism of two KGs and the alignment performance, especially for two non-isomorphic KGs.Keywords: knowledge graphs, cross-lingual entity alignment, non-isomorphic, relation completion
Procedia PDF Downloads 125
3061 Fuzzy Logic Based Fault Tolerant Model Predictive MLI Topology
Authors: Abhimanyu Kumar, Chirag Gupta
Abstract:
This work presents a comprehensive study on the employment of Model Predictive Control (MPC) for a three-phase voltage-source inverter to regulate the output voltage efficiently. The inverter is modeled via the Clarke Transformation, considering a scenario where the load is unknown. An LC filter model is developed, demonstrating its efficacy in Total Harmonic Distortion (THD) reduction. The system, when implemented with fault-tolerant multilevel inverter topologies, ensures reliable operation even under fault conditions, a requirement that is paramount with the increasing dependence on renewable energy sources. The research also integrates a Fuzzy Logic based fault tolerance system which identifies and manages faults, ensuring consistent inverter performance. The efficacy of the proposed methodology is substantiated through rigorous simulations and comparative results, shedding light on the voltage prediction efficiency and the robustness of the model even under fault conditions.Keywords: total harmonic distortion, fuzzy logic, renewable energy sources, MLI
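As a small illustration of the modelling step mentioned above, the sketch below implements the amplitude-invariant Clarke (abc to alpha-beta) transformation; the sampled values are arbitrary, and this is not the paper's predictive controller or fault-tolerance logic.

import numpy as np

def clarke(i_a, i_b, i_c):
    """Amplitude-invariant Clarke transform of three phase quantities."""
    i_alpha = (2.0 / 3.0) * (i_a - 0.5 * i_b - 0.5 * i_c)
    i_beta = (2.0 / 3.0) * (np.sqrt(3) / 2.0) * (i_b - i_c)
    return i_alpha, i_beta

# Balanced three-phase set sampled at one instant
theta = np.deg2rad(30)
ia, ib, ic = np.cos(theta), np.cos(theta - 2 * np.pi / 3), np.cos(theta + 2 * np.pi / 3)
print(clarke(ia, ib, ic))   # alpha ~= cos(theta), beta ~= sin(theta)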
Procedia PDF Downloads 135
3060 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria
Authors: Isaac Kayode Ogunlade
Abstract:
Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ is designed using a PIC18F4550 microcontroller, communicating with a Personal Computer (PC) through USB (Universal Serial Bus). The research deployed knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device using an LM35 sensor to measure weather parameters, together with Artificial Intelligence (Artificial Neural Network - ANN) and a statistical approach (Autoregressive Integrated Moving Average - ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) for performance evaluation. Both devices (standard and designed) were subjected to 180 days under the same atmospheric conditions for data mining (temperature, relative humidity, and pressure). The acquired data were trained in the MATLAB R2012b environment using ANN and ARIMA to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), R-squared (R2), and Mean Percentage Error (MPE) were deployed as standard evaluation metrics to assess the performance of the models in predicting precipitation. The results from the operation of the developed device show that the device has an efficiency of 96% and is also compatible with Personal Computers (PC) and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) revealed a disparity error of 1.59%, while ARIMA's is 2.63%. The device will be useful in research, practical laboratories, and industrial environments. Keywords: data acquisition system, design device, weather development, predict precipitation and (FUTA) standard device
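A brief sketch of the evaluation metrics named above (RMSE, MAE, and MPE) computed between observed and predicted rainfall; the arrays are invented placeholders rather than the FUTA measurements, and the ANN/ARIMA models themselves are not reproduced here.

import numpy as np

def rmse(obs, pred):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))

def mae(obs, pred):
    return float(np.mean(np.abs(np.asarray(obs) - np.asarray(pred))))

def mpe(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean((obs - pred) / obs) * 100.0)

observed  = np.array([12.4, 8.1, 15.0, 9.6])    # daily rainfall, mm (placeholder)
predicted = np.array([11.9, 8.7, 14.2, 10.1])   # model output (placeholder)
print(rmse(observed, predicted), mae(observed, predicted), mpe(observed, predicted))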
Procedia PDF Downloads 94
3059 Navigating Government Finance Statistics: Effortless Retrieval and Comparative Analysis through Data Science and Machine Learning
Authors: Kwaku Damoah
Abstract:
This paper presents a methodology and software application (App) designed to empower users in accessing, retrieving, and comparatively exploring data within the hierarchical network framework of the Government Finance Statistics (GFS) system. It explores the ease of navigating the GFS system and identifies the gaps filled by the new methodology and App. The GFS embodies a complex Hierarchical Network Classification (HNC) structure, encapsulating institutional units, revenues, expenses, assets, liabilities, and economic activities. Navigating this structure demands specialized knowledge, experience, and skill, posing a significant challenge for effective analytics and fiscal policy decision-making. Many professionals encounter difficulties deciphering these classifications, hindering confident utilization of the system. This accessibility barrier obstructs a vast number of professionals, students, policymakers, and the public from leveraging the abundant data and information within the GFS. Leveraging the R programming language, Data Science Analytics, and Machine Learning, an efficient methodology enabling users to access, navigate, and conduct exploratory comparisons was developed. The machine learning Fiscal Analytics App (FLOWZZ) democratizes access to advanced analytics through its user-friendly interface, breaking down expertise barriers. Keywords: data science, data wrangling, drilldown analytics, government finance statistics, hierarchical network classification, machine learning, web application.
Procedia PDF Downloads 71
3058 SISSLE in Consensus-Based Ripple: Some Improvements in Speed, Security, Last Mile Connectivity and Ease of Use
Authors: Mayank Mundhra, Chester Rebeiro
Abstract:
Cryptocurrencies are rapidly finding wide application in areas such as Real Time Gross Settlements and Payments Systems. Ripple is a cryptocurrency that has gained prominence with banks and payment providers. It solves the Byzantine General’s Problem with its Ripple Protocol Consensus Algorithm (RPCA), where each server maintains a list of servers, called Unique Node List (UNL) that represents the network for the server, and will not collectively defraud it. The server believes that the network has come to a consensus when members of the UNL come to a consensus on a transaction. In this paper we improve Ripple to achieve better speed, security, last mile connectivity and ease of use. We implement guidelines and automated systems for building and maintaining UNLs for resilience, robustness, improved security, and efficient information propagation. We enhance the system so as to ensure that each server receives information from across the whole network rather than just from the UNL members. We also introduce the paradigm of UNL overlap as a function of information propagation and the trust a server assigns to its own UNL. Our design not only reduces vulnerabilities such as eclipse attacks, but also makes it easier to identify malicious behaviour and entities attempting to fraudulently Double Spend or stall the system. We provide experimental evidence of the benefits of our approach over the current Ripple scheme. We observe ≥ 4.97x and 98.22x in speedup and success rate for information propagation respectively, and ≥ 3.16x and 51.70x in speedup and success rate in consensus.Keywords: Ripple, Kelips, unique node list, consensus, information propagation
Procedia PDF Downloads 150
3057 GIS-Based Identification of Overloaded Distribution Transformers and Calculation of Technical Electric Power Losses
Authors: Awais Ahmed, Javed Iqbal
Abstract:
Pakistan has for many years been facing extreme challenges of energy deficit due to the shortage of power generation compared to increasing demand. A part of this energy deficit is also contributed by the power lost in the transmission and distribution network. Unfortunately, distribution companies are not equipped with modern technologies and methods to identify and eliminate these losses. According to estimates, the total energy lost in early 2000 was between 20 and 26 percent. To address this issue, the present research study was designed with the objective of developing a standalone GIS application for distribution companies having the capability of loss calculation as well as identification of overloaded transformers. For this purpose, the Hilal Road feeder in Faisalabad Electric Supply Company (FESCO) was selected as the study area. An extensive GPS survey was conducted to identify each consumer, linking it to the secondary pole of the transformer, geo-referencing equipment, and documenting conductor sizes. To identify overloaded transformers, the accumulative kWh reading of the consumers on a transformer was compared with a threshold kWh. Technical losses of 11kV and 220V lines were calculated using the data from the substation and the resistance of the network calculated from the geo-database. To automate the process, a standalone GIS application was developed using ArcObjects with engineering analysis capabilities. The application uses the GIS database developed for 11kV and 220V lines to display and query spatial data and present results in the form of graphs. The results show a technical loss of about 14% on both the high tension (HT) and low tension (LT) network, while about 4 out of 15 general duty transformers were found overloaded. The study shows that GIS can be a very effective tool for distribution companies in the management and planning of their distribution network. Keywords: geographical information system, GIS, power distribution, distribution transformers, technical losses, GPS, SDSS, spatial decision support system
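An illustrative sketch of the two calculations described: flagging transformers whose accumulated consumer kWh exceeds a threshold, and estimating the technical (I^2R) loss of a line segment from conductor resistance. All identifiers and numbers are assumptions, not FESCO data or the ArcObjects application.

records = [  # (transformer_id, consumer_kwh_total, threshold_kwh)
    ("TX-01", 41200, 36000),
    ("TX-02", 18750, 36000),
    ("TX-03", 39900, 36000),
]
overloaded = [tx for tx, used, limit in records if used > limit]
print("Overloaded transformers:", overloaded)

def segment_loss_kw(current_a, resistance_ohm_per_km, length_km, phases=3):
    """Technical loss of a line segment as I^2 * R, returned in kW."""
    r_total = resistance_ohm_per_km * length_km
    return phases * (current_a ** 2) * r_total / 1000.0

print("11 kV segment loss:",
      segment_loss_kw(current_a=85, resistance_ohm_per_km=0.54, length_km=2.3), "kW")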
Procedia PDF Downloads 377
3056 Implementation of the Interlock Protocol to Enhance Security in Unmanned Aerial Vehicles
Authors: Vikram Prabhu, Mohammad Shikh Bahaei
Abstract:
This paper depicts the implementation of a new infallible technique to protect an Unmanned Aerial Vehicle from cyber-attacks. An Unmanned Aerial Vehicle (UAV) could be vulnerable to cyber-attacks because of jammers or eavesdroppers over the network, which pose a threat to the security of the UAV. In the field of network security, there are quite a few protocols which can be used to establish a secure connection between UAVs and their operators. In this paper, we discuss how the Interlock Protocol could be implemented to foil the man-in-the-middle attack. In this case, Wireshark has been used as the sniffer (man-in-the-middle). This paper also shows a comparison between the Interlock Protocol and the TCP protocols using cryptcat and netcat, and at the same time highlights why the Interlock Protocol is the most efficient security protocol to prevent eavesdropping over the communication channel. Keywords: interlock protocol, Diffie-Hellman algorithm, unmanned aerial vehicles, control station, man-in-the-middle attack, Wireshark
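For context on the key exchange referenced in the keywords, below is a toy Diffie-Hellman sketch with a deliberately small prime; a real UAV link would use vetted parameters, and the interlock step (splitting each ciphertext across two rounds) would sit on top of the resulting shared key. This is an assumption-laden illustration, not the paper's implementation.

import secrets

p = 0xFFFFFFFB   # small illustrative prime; NOT a secure modulus
g = 5

a = secrets.randbelow(p - 2) + 1    # operator's private key
b = secrets.randbelow(p - 2) + 1    # UAV's private key

A = pow(g, a, p)                    # operator sends A to the UAV
B = pow(g, b, p)                    # UAV sends B to the operator

shared_operator = pow(B, a, p)
shared_uav = pow(A, b, p)
assert shared_operator == shared_uav    # both ends derive the same secret
print(hex(shared_operator))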
Procedia PDF Downloads 304
3055 Feature Based Unsupervised Intrusion Detection
Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein
Abstract:
The goal of a network-based intrusion detection system is to classify the activities of network traffic into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps in improving the efficiency, performance, and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we have used the new NSL-KDD dataset, which is a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification has been implemented (Normal, Attack). The Weka framework, a Java-based open-source software package consisting of a collection of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time in comparison with using the full features of the dataset with the same algorithm. Keywords: information gain (IG), intrusion detection system (IDS), k-means clustering, Weka
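A hedged sketch of the pipeline described above: features ranked by information gain (approximated here with mutual information), the top ones retained, and K-means used to split traffic into two clusters that are then read against the normal/attack labels. Synthetic data replaces NSL-KDD, and Weka is not reproduced; everything here is a stand-in under those assumptions.

import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 41))                 # 41 features, as in KDD-style records
y = rng.integers(0, 2, size=1000)               # 0 = normal, 1 = attack (placeholder)

# Information gain approximated by mutual information between feature and label.
ig = mutual_info_classif(X, y, random_state=7)
top = np.argsort(ig)[::-1][:10]                 # keep the 10 most informative features

X_sel = StandardScaler().fit_transform(X[:, top])
labels = KMeans(n_clusters=2, n_init=10, random_state=7).fit_predict(X_sel)

# Map each cluster to the majority class it contains to read off detection rates.
for c in (0, 1):
    mask = labels == c
    print(f"cluster {c}: size={mask.sum()}, attack share={y[mask].mean():.2f}")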
Procedia PDF Downloads 297
3054 Multi Agent System Architecture Oriented Prometheus Methodology Design for Reverse Logistics
Authors: F. Lhafiane, A. Elbyed, M. Bouchoum
Abstract:
The design of reverse logistics networks has attracted growing attention with stringent pressures from both environmental awareness and business sustainability. Reverse logistics activities, which include the return, remanufacturing, disassembly, and disposal of products, can be quite complex to manage. In addition, demand can be difficult to predict, and decision making is one of the most challenging tasks. This complexity has amplified the need to develop an integrated architecture for product return as an enterprise system. The main purpose of this paper is to design a multi agent system (MAS) architecture using the Prometheus methodology to efficiently manage reverse logistics processes. The proposed MAS architecture includes five types of agents: Gate keeping Agent, Collection Agent, Sorting Agent, Processing Agent and Disposal Agent, which act respectively during the five steps of the reverse logistics network. Keywords: reverse logistics, multi agent system, prometheus methodology
Procedia PDF Downloads 474
3053 Case Analysis of Bamboo Based Social Enterprises in India-Improving Profitability and Sustainability
Authors: Priyal Motwani
Abstract:
The current market for bamboo products in India is worth about Rs. 21000 crores and is highly unorganised and fragmented. In this study, we closely analysed the structure and functions of a major bamboo craft based organisation in Kerala, India, and elaborated on its value chain, product mix, pricing strategy, supply chain, collaborations, and competitive landscape. We identified six major bottlenecks that are prevalent in such organisations in the Indian context, relating to their product mix, asset management, and supply chain, with the corresponding waste management and retail network. By carrying out secondary and primary research (a sample space of 5000), the study identified the target customers for the bamboo based products and alternative revenue streams (eco-tourism, microenterprises, training) that can boost the existing revenue by 150%. We then recommended an optimum product mix, covering premium, medium and low valued processing, for medium sized bamboo based organisations, in accordance with their capacity, to maximize their revenue potential. After studying such organisations and their counterparts, the study established an optimum retail network, considering B2B and B2C physical and online retail, to maximize sales to the target groups. On the basis of the results obtained from the analysis of present and future trends, our study gives recommendations to improve the revenue potential of bamboo based organisations in India and promote sustainability. Keywords: bamboo, bottlenecks, optimization, product mix, retail network, value chain
Procedia PDF Downloads 218
3052 An Application of Fuzzy Analytical Network Process to Select a New Production Base: An AEC Perspective
Authors: Walailak Atthirawong
Abstract:
By the end of 2015, the Association of Southeast Asian Nations (ASEAN) countries proclaimed their transformation into the next stage of an economic era by forming a single market and production base called the ASEAN Economic Community (AEC). One objective of the AEC is to establish ASEAN as a single market and one production base, making ASEAN a highly competitive economic region with new mechanisms. As a result, it will open more opportunities to enterprises in both trade and investment, offering a competitive market of US$ 2.6 trillion and over 622 million people. Location decisions play a key role in achieving corporate competitiveness. Hence, it may be necessary for enterprises to redesign their supply chains by expanding to a new production base that has low labor cost, high labor skill, and abundant labor available. This strategy will help companies, especially in the apparel industry, to maintain a competitive position in the global market. Therefore, in this paper, a generic model for the location selection decision for the Thai apparel industry using the Fuzzy Analytical Network Process (FANP) is proposed. Myanmar, Vietnam and Cambodia are considered as alternative locations, based on interviews with experts in this industry who have planned to enlarge their businesses in AEC countries. The contribution of this paper lies in proposing an approach model that is more practical and trustworthy for top management in making a decision on location selection. Keywords: apparel industry, ASEAN Economic Community (AEC), Fuzzy Analytical Network Process (FANP), location decision
Procedia PDF Downloads 237
3051 Neuroecological Approach for Anthropological Studies in Archaeology
Authors: Kalangi Rodrigo
Abstract:
The term neuroecology denotes the study of adaptive variation in cognition and the brain. The subject emerged in the 1980s, when researchers began to apply methods of comparative evolutionary biology to cognitive processes and the underlying neural mechanisms of cognition. In archaeology and anthropology, we observe behaviors such as social learning skills, innovative feeding and foraging, tool use, and social manipulation to determine the cognitive processes of ancient humankind. Brainstem size was used as a control variable, and phylogeny was controlled using independent contrasts. Both disciplines need to be enriched with comparative literature and neurological, experimental, and behavioral studies among tribal peoples as well as primate groups, which will lead the research to its full potential. Neuroecology examines the relations between ecological selection pressure and human or sex differences in cognition and the brain. The goal of neuroecology is to understand how natural selection acts on perception and its neural apparatus. Furthermore, neuroecology will eventually lead both principal disciplines to ethology, where human behaviors and social organization are studied from a biological perspective. It can be either ethnoarchaeological or prehistoric. Archaeology should adopt the general approach of neuroecology: phylogenetic comparative methods can be used in the field, along with new findings on the cognitive mechanisms and brain structures involved in mating systems, social organization, communication, and foraging. The contribution of neuroecology to archaeology and anthropology is the information it provides on the selective pressures that have influenced the evolution of cognition and brain structure of humankind. It will shed new light on the path of evolutionary studies, including behavioral ecology, primate archaeology, and cognitive archaeology. Keywords: Neuroecology, Archaeology, Brain Evolution, Cognitive Archaeology
Procedia PDF Downloads 122
3050 DUSP16 Inhibition Rescues Neurogenic and Cognitive Deficits in Alzheimer's Disease Mice Models
Authors: Huimin Zhao, Xiaoquan Liu, Haochen Liu
Abstract:
The major challenge facing Alzheimer's Disease (AD) drug development is how to effectively improve cognitive function in clinical practice. Growing evidence indicates that stimulating hippocampal neurogenesis is a strategy for restoring cognition in animal models of AD. The mitogen-activated protein kinase (MAPK) pathway is a crucial factor in neurogenesis, which is negatively regulated by Dual-specificity phosphatase 16 (DUSP16). Transcriptome analysis of post-mortem brain tissue revealed up-regulation of DUSP16 expression in AD patients. Additionally, DUSP16 was involved in regulating the proliferation and neural differentiation of neural progenitor cells (NPCs). Nevertheless, whether the effect of DUSP16 on ameliorating cognitive disorders by influencing NPCs differentiation in AD mice remains unclear. Our study demonstrates an association between DUSP16 SNPs and clinical progression in individuals with mild cognitive impairment (MCI). Besides, we found that increased DUSP16 expression in both 3×Tg and SAMP8 models of AD led to NPC differentiation impairments. By silencing DUSP16, cognitive benefits, the induction of AHN and synaptic plasticity, were observed in AD mice. Furthermore, we found that DUSP16 is involved in the process of NPC differentiation by regulating c-Jun N-terminal kinase (JNK) phosphorylation. Moreover, the increased DUSP16 may be regulated by the ETS transcription factor (ELK1), which binds to the promoter region of DUSP16. Loss of ELK1 resulted in decreased DUSP16 mRNA and protein levels. Our data uncover a potential regulatory role for DUSP16 in adult hippocampal neurogenesis and provide a possibility to find the target of AD intervention.Keywords: alzheimer's disease, cognitive function, DUSP16, hippocampal neurogenesis
Procedia PDF Downloads 73
3049 Omni-Modeler: Dynamic Learning for Pedestrian Redetection
Authors: Michael Karnes, Alper Yilmaz
Abstract:
This paper presents the application of the omni-modeler towards pedestrian redetection. The pedestrian redetection task creates several challenges when applying deep neural networks (DNN) due to the variety of pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted for changes in class appearances or changes in the set of classes held in its knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, as well as learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. The Omni-Modeler adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which are directly updatable as new information becomes available. Query images are identified through nearest neighbor comparison to the learned object definitions. The study presented in this paper evaluates its performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for across-camera view pedestrian redetection and is highly effective for single-camera redetection with a 93% accuracy across 30 individuals using 64 example images for each individual.Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition
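A minimal sketch of the dynamic-dictionary idea described above: per-identity prototypes that can be enrolled or removed as people enter and leave the scene, with queries resolved by nearest-neighbour comparison. The random embeddings stand in for the Omni-Modeler's encoder output; the class name, sizes, and distance metric are assumptions for illustration.

import numpy as np

class DynamicGallery:
    def __init__(self):
        self.prototypes = {}                       # identity -> mean embedding

    def enroll(self, identity, embeddings):
        """Learn (or refresh) an identity from a few example embeddings."""
        self.prototypes[identity] = np.mean(embeddings, axis=0)

    def forget(self, identity):
        self.prototypes.pop(identity, None)        # identity has left the scene

    def query(self, embedding):
        """Return the closest enrolled identity and its distance."""
        best, best_d = None, np.inf
        for name, proto in self.prototypes.items():
            d = np.linalg.norm(embedding - proto)
            if d < best_d:
                best, best_d = name, d
        return best, best_d

rng = np.random.default_rng(3)
gallery = DynamicGallery()
gallery.enroll("person_A", rng.normal(0.0, 1.0, size=(8, 128)))   # 8 example frames
gallery.enroll("person_B", rng.normal(2.0, 1.0, size=(8, 128)))
print(gallery.query(rng.normal(2.0, 1.0, size=128)))              # likely person_B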
Procedia PDF Downloads 78
3048 Comparative Analysis of Hybrid and Non-hybrid Cooled 185 KW High-Speed Permanent Magnet Synchronous Machine for Air Suspension Blower
Authors: Usman Abubakar, Xiaoyuan Wang, Sayyed Haleem Shah, Sadiq Ur Rahman, Rabiu Saleh Zakariyya
Abstract:
High-speed permanent magnet synchronous machines (HSPMSM) are used in different industrial applications, such as blowers and compressors, as a result of their superb performance. Nevertheless, the over-temperature rise of both the winding and the PM is one of the substantial problems for a high-power HSPMSM, affecting its lifespan and performance. According to the literature, an HSPMSM with a hybrid cooling configuration has a much lower temperature rise than one with non-hybrid cooling. This paper presents the design of a 185 kW, 26 krpm machine with two different cooling configurations, i.e., a hybrid cooling configuration (forced air and a housing spiral water jacket) and a non-hybrid one (forced air cooling assisted with the winding's potting material and the sleeve's material), to enhance the heat dissipation of the winding and PM, respectively. Firstly, the machine's electromagnetic design is conducted by the finite element method to accurately account for machine losses. Then the machine's cooling configurations are introduced, and their effectiveness is validated by a lumped parameter thermal network (LPTN). The investigation shows that using potting and sleeve materials to assist the non-hybrid cooling configuration brings the machine's winding and PM temperatures close to those of the hybrid cooling configuration. Therefore, the machine with non-hybrid cooling is prototyped and tested due to its simplicity and lower energy consumption, while still maintaining the lifespan and performance of the HSPMSM. Keywords: airflow network, axial ventilation, high-speed PMSM, thermal network
Procedia PDF Downloads 234
3047 Reconfigurable Ubiquitous Computing Infrastructure for Load Balancing
Authors: Khaled Sellami, Lynda Sellami, Pierre F. Tiako
Abstract:
Ubiquitous computing helps make data and services available to users anytime and anywhere. This makes the cooperation of devices a crucial need. However, such cooperation causes an overload of the devices and/or networks, resulting in network malfunction and suspension of their activities. Our goal in this paper is to propose an approach to device reconfiguration in order to help reduce energy consumption in ubiquitous environments. The idea is that when high energy consumption is detected, we proceed to a change in component distribution on the devices to reduce and/or balance the energy consumption. We also investigate the possibility of detecting the high energy consumption of devices/networks based on device capabilities. As a result, our idea realizes a reconfiguration of devices aimed at reducing energy consumption and/or load balancing in ubiquitous environments. Keywords: ubiquitous computing, load balancing, device energy consumption, reconfiguration
Procedia PDF Downloads 276
3046 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods
Authors: Mohammad Arabi
Abstract:
The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.Keywords: electric motor, fault detection, frequency features, temporal features
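A hedged sketch of combining temporal features (RMS, kurtosis, peak) and a frequency feature (band energy from an FFT) with an SVM classifier; the vibration signals are simulated, and the feature set, sampling rate, and fault model are assumptions rather than the study's measured bearing data.

import numpy as np
from scipy.stats import kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

FS = 12_000                                         # sampling rate, Hz (assumed)

def simulate(faulty, n=4096, rng=None):
    rng = rng or np.random.default_rng()
    x = rng.normal(scale=0.5, size=n)
    if faulty:                                      # a fault adds periodic impulses
        x[::200] += 3.0
    return x

def features(x):
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / FS)
    band = spec[(freqs > 500) & (freqs < 3000)].sum()   # band energy (frequency feature)
    return [np.sqrt(np.mean(x ** 2)),               # RMS      (temporal)
            kurtosis(x),                            # kurtosis (temporal)
            np.max(np.abs(x)),                      # peak     (temporal)
            band]

rng = np.random.default_rng(5)
X = np.array([features(simulate(i % 2 == 1, rng=rng)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])           # 0 = healthy, 1 = faulty
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))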
Procedia PDF Downloads 52
3045 CSoS-STRE: A Combat System-of-System Space-Time Resilience Enhancement Framework
Authors: Jiuyao Jiang, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge
Abstract:
Modern warfare has transitioned from the paradigm of isolated combat forces to system-to-system confrontations due to advancements in combat technologies and application concepts. A combat system-of-systems (CSoS) is a combat network composed of independently operating entities that interact with one another to provide overall operational capabilities. Enhancing the resilience of CSoS is garnering increasing attention due to its significant practical value in optimizing network architectures, improving network security and refining operational planning. Accordingly, a unified framework called CSoS space-time resilience enhancement (CSoS-STRE) has been proposed, which enhances the resilience of CSoS by incorporating spatial features. Firstly, a multilayer spatial combat network model has been constructed, which incorporates an information layer depicting the interrelations among combat entities based on the OODA loop, along with a spatial layer that considers the spatial characteristics of equipment entities, thereby accurately reflecting the actual combat process. Secondly, building upon the combat network model, a spatiotemporal resilience optimization model is proposed, which reformulates the resilience optimization problem as a classical linear optimization model with spatial features. Furthermore, the model is extended from scenarios without obstacles to those with obstacles, thereby further emphasizing the importance of spatial characteristics. Thirdly, a resilience-oriented recovery optimization method based on improved non dominated sorting genetic algorithm II (R-INSGA) is proposed to determine the optimal recovery sequence for the damaged entities. This method not only considers spatial features but also provides the optimal travel path for multiple recovery teams. Finally, the feasibility, effectiveness, and superiority of the CSoS-STRE are demonstrated through a case study. Simultaneously, under deliberate attack conditions based on degree centrality and maximum operational loop performance, the proposed CSoS-STRE method is compared with six baseline recovery strategies, which are based on performance, time, degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. The comparison demonstrates that CSoS-STRE achieves faster convergence and superior performance.Keywords: space-time resilience enhancement, resilience optimization model, combat system-of-systems, recovery optimization method, no-obstacles and obstacles
Procedia PDF Downloads 17