Search results for: Bayesian neural network
3527 Non-Linear Causality Inference Using BAMLSS and Bi-CAM in Finance
Authors: Flora Babongo, Valerie Chavez
Abstract:
Inferring causality from observational data is one of the fundamental subjects, especially in quantitative finance. So far, most papers analyze additive noise models with either linearity, nonlinearity or Gaussian noise. We fill in the gap by providing a nonlinear and non-Gaussian causal multiplicative noise model that aims to distinguish the cause from the effect using a two-step method based on Bayesian additive models for location, scale and shape (BAMLSS) and on causal additive models (CAM). We have tested our method on simulated and real data and reached an accuracy of 0.86 on average. As real data, we considered the causality between financial indices such as the S&P 500, Nasdaq, CAC 40 and Nikkei, and companies' log-returns. Our results can be useful in inferring causality when the data is heteroskedastic or non-injective.
Keywords: causal inference, DAGs, BAMLSS, financial index
Procedia PDF Downloads 151
3526 Development of Prediction Models of Day-Ahead Hourly Building Electricity Consumption and Peak Power Demand Using the Machine Learning Method
Authors: Dalin Si, Azizan Aziz, Bertrand Lasternas
Abstract:
To encourage building owners to purchase electricity at the wholesale market and reduce building peak demand, this study aims to develop models that predict day-ahead hourly electricity consumption and demand using an artificial neural network (ANN) and a support vector machine (SVM). All prediction models are built in Python with the Scikit-learn and PyBrain libraries. The input data for both consumption and demand prediction are time stamp, outdoor dry bulb temperature, relative humidity, air handling unit (AHU) supply air temperature, and solar radiation. Solar radiation, which is unavailable a day ahead, is predicted first, and this estimation is then used as an input to predict consumption and demand. Separate consumption and demand models are trained with both SVM and ANN, depending on cooling or heating season and on weekdays or weekends. The results show that ANN is the better option for both consumption and demand prediction. It achieves a coefficient of variation of the root mean square error (CVRMSE) of 15.50% to 20.03% for consumption prediction and 22.89% to 32.42% for demand prediction. To conclude, the presented models have the potential to help building owners purchase electricity at the wholesale market, but they are not robust when used in demand response control.
Keywords: building energy prediction, data mining, demand response, electricity market
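A minimal, hedged sketch of the kind of pipeline this abstract describes: an ANN and an SVM regressor are compared and scored with CVRMSE. The feature set and data are synthetic placeholders, and scikit-learn's MLPRegressor stands in for the PyBrain ANN.

```python
# Illustrative comparison of an ANN and an SVM for hourly consumption prediction,
# scored with CVRMSE. Synthetic data and assumed feature names, not the study's dataset.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Columns: hour of day, dry bulb temperature, relative humidity, AHU supply air temp, solar radiation
X = rng.uniform([0, -10, 20, 10, 0], [23, 35, 90, 20, 900], size=(2000, 5))
y = 50 + 3.0 * X[:, 1] + 0.05 * X[:, 4] + rng.normal(0, 5, 2000)   # synthetic hourly kWh

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def cvrmse(y_true, y_pred):
    """Coefficient of variation of the RMSE, in percent."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / np.mean(y_true)

for name, model in [("ANN", MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)),
                    ("SVM", SVR(kernel="rbf", C=10.0))]:
    pipe = make_pipeline(StandardScaler(), model)
    pipe.fit(X_tr, y_tr)
    print(name, "CVRMSE: %.2f%%" % cvrmse(y_te, pipe.predict(X_te)))
```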
Procedia PDF Downloads 316
3525 Tabu Search to Draw Evacuation Plans in Emergency Situations
Authors: S. Nasri, H. Bouziri
Abstract:
Disasters are frequently experienced nowadays. They are caused by floods, landslides, and building fires; the latter is the main focus of this study. To cope with these unexpected events, precautions must be taken to protect human lives. The work presented here focuses on the resolution of the evacuation problem in the case of a no-notice disaster. The evacuation problem is formulated as a dynamic network flow problem. In particular, we model the evacuation problem as an earliest arrival flow problem with load-dependent transit time. This problem is classified as NP-hard. Our challenge here is to propose a metaheuristic solution for solving the evacuation problem. The objective is to maximize the number of evacuees during the earliest periods of a time horizon T, i.e., to evacuate persons as soon as possible. We performed an experimental study on emergency evacuation from the Tunisian children's hospital. This work prompts us to look for evacuation plans corresponding to several situations where the network changes dynamically.
Keywords: dynamic network flow, load dependent transit time, evacuation strategy, earliest arrival flow problem, tabu search metaheuristic
Procedia PDF Downloads 372
3524 Intelligent Indoor Localization Using WLAN Fingerprinting
Authors: Gideon C. Joseph
Abstract:
The ability to localize mobile devices is quite important, as some applications may require location information of these devices to operate or to deliver better services to the users. Although there are several ways of acquiring location data of mobile devices, the WLAN fingerprinting approach has been considered in this work. This approach uses the Received Signal Strength Indicator (RSSI) measurement as a function of the position of the mobile device. RSSI is a quantitative measure of the radio frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is a major concern, for example, indoor environments. This research aims to design a system that can predict the location of a mobile device when supplied with the mobile's RSSIs. The developed system takes as input the RSSIs relating to the mobile device and outputs parameters that describe the location of the device, such as the longitude, latitude, floor, and building. The relationship between the received signal strengths of mobile devices and their corresponding locations is modelled so that subsequent locations of mobile devices can be predicted using the developed model. Describing explicit mathematical relationships between the RSSI measurements and the localization parameters is one option for modelling the problem, but the complexity of such an approach makes it unattractive. In contrast, we propose an intelligent system that can learn the mapping from RSSI measurements to the localization parameters to be predicted. The system is capable of upgrading its performance as more experiential knowledge is acquired. The most appealing aspect of using such a system for this task is that complicated mathematical analysis and theoretical frameworks are not needed; the intelligent system on its own learns the underlying relationship in the supplied data (RSSI levels) that corresponds to the localization parameters. The localization parameters to be predicted correspond to two different tasks: the longitude and latitude of mobile devices are real-valued (a regression problem), while the floor and building are integer-valued or categorical (a classification problem). This work presents artificial neural network-based intelligent systems to model the relationship between the RSSI predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database. The trained networks were then tested on a separate database, and the performance of the trained systems was reported as the Mean Absolute Error (MAE) for the regression task and the error rate for the classification task.
Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression
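A brief sketch of the two neural-network tasks described above, regressing coordinates and classifying the floor from RSSI vectors. The data layout (number of access points, coordinate ranges, floor labels) is assumed for illustration.

```python
# Sketch of the regression task (longitude/latitude) and the classification task (floor)
# on synthetic RSSI fingerprints, using scikit-learn MLP models.
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, accuracy_score

rng = np.random.default_rng(1)
n_aps = 50                                                    # assumed number of access points
X = rng.uniform(-100, -30, size=(3000, n_aps))                # RSSI fingerprints in dBm
coords = rng.uniform([0, 0], [200, 120], size=(3000, 2))      # longitude, latitude (local metres)
floor = rng.integers(0, 4, size=3000)                         # categorical floor labels

X_tr, X_te, c_tr, c_te, f_tr, f_te = train_test_split(X, coords, floor, test_size=0.3, random_state=1)

reg = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1500, random_state=1).fit(X_tr, c_tr)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1500, random_state=1).fit(X_tr, f_tr)

print("Coordinate MAE:", mean_absolute_error(c_te, reg.predict(X_te)))
print("Floor error rate:", 1 - accuracy_score(f_te, clf.predict(X_te)))
```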
Procedia PDF Downloads 347
3523 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers
Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang
Abstract:
In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on understanding the influence of information flow and knowledge sharing within the network on patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office's artificial intelligence patent database for the years 2002-2020. Subsequently, specific information regarding these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. Finally, this study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in patent backward citations. In this network, nodes represent patents, and if patents reference the same scientific papers, connections are established between them, serving as edges within the network. Nodes and edges collectively constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric for patent influence. A negative binomial model is then employed to test the nonlinear relationship between these network structural features and patent influence. The research findings indicate that network structural features such as node degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect on patent influence. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
Keywords: centrality, patent coupling network, patent influence, social network analysis
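A small, hedged sketch of the inverted-U test described above: centrality is computed on a stand-in graph, and a negative binomial regression with linear and quadratic centrality terms is fitted to synthetic citation counts. The graph, counts, and coefficients are illustrative only.

```python
# Illustrative test of an inverted-U relationship between centrality and citation counts:
# fit a negative binomial model with a quadratic centrality term on synthetic data.
import networkx as nx
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
G = nx.erdos_renyi_graph(300, 0.05, seed=2)           # stand-in for the patent coupling network
btw = np.array(list(nx.betweenness_centrality(G).values()))
btw = btw / btw.max()                                 # rescale to [0, 1] for readability

# Synthetic citation counts that peak at moderate centrality (inverted U)
mu = np.exp(0.5 + 4.0 * btw - 3.0 * btw ** 2)
citations = rng.poisson(mu)

X = sm.add_constant(np.column_stack([btw, btw ** 2]))
fit = sm.GLM(citations, X, family=sm.families.NegativeBinomial()).fit()
print(fit.params)   # a positive linear and a negative quadratic term point to an inverted U
```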
Procedia PDF Downloads 54
3522 An MrPPG Method for Face Anti-Spoofing
Authors: Lan Zhang, Cailing Zhang
Abstract:
In recent years, many face anti-spoofing algorithms have achieved high detection accuracy when detecting 2D presentation attacks or 3D mask attacks alone, but their detection performance is greatly reduced in multidimensional and cross-dataset tests. The rPPG method used for face anti-spoofing exploits the vital information that is unique to a real face to distinguish real faces from presentation attacks, so the rPPG method has strong stability compared with other methods, but its detection rate for 2D attacks needs to be improved. Therefore, in this paper, we improve an rPPG (remote photoplethysmography) method (MrPPG) for face anti-spoofing through color space fusion, using the correlation of pulse signals between real face regions and background regions, and introducing a recurrent neural network (LSTM) to improve accuracy in 2D face anti-spoofing. Meanwhile, MrPPG also has high accuracy and good stability in multidimensional and cross-dataset face anti-spoofing. The improved method was validated on the Replay-Attack, CASIA-FASD, SiW and HKBU_MARs_V2 datasets, and the experimental results show that the performance and stability of the improved algorithm proposed in this paper are superior to those of many advanced algorithms.
Keywords: face anti-spoofing, face presentation attack detection, remote photoplethysmography, MrPPG
Procedia PDF Downloads 178
3521 Cluster Based Ant Colony Routing Algorithm for Mobile Ad-Hoc Networks
Authors: Alaa Eddien Abdallah, Bajes Yousef Alskarnah
Abstract:
Ant colony based routing algorithms are known to guarantee packet delivery, but they suffer from the huge overhead of control messages that are needed to discover the route. In this paper, we utilize the network nodes' positions to group the nodes into connected clusters. Only cluster heads are used to forward the route discovery control messages. Our simulations show that the new algorithm decreases the overhead dramatically without affecting the delivery rate.
Keywords: ad-hoc network, MANET, ant colony routing, position based routing
Procedia PDF Downloads 425
3520 Analysis of Linguistic Disfluencies in Bilingual Children’s Discourse
Authors: Sheena Christabel Pravin, M. Palanivelan
Abstract:
Speech disfluencies are common in spontaneous speech. The primary purpose of this study was to distinguish linguistic disfluencies from stuttering disfluencies in bilingual Tamil–English (TE) speaking children. The secondary purpose was to determine whether their disfluencies are mediated by native language dominance and/or by an early onset of developmental stuttering in childhood. A detailed study was carried out to identify the prosodic and acoustic features that uniquely represent the disfluent regions of speech. This paper focuses on statistical modeling of repetitions, prolongations, pauses and interjections in a speech corpus encompassing bilingual spontaneous utterances from school-going children in English and Tamil. Two classifiers, Hidden Markov Models (HMM) and the Multilayer Perceptron (MLP), a class of feed-forward artificial neural network, were compared in the classification of disfluencies. The classifier results document the patterns of disfluency in spontaneous speech samples of school-aged children and are used to distinguish between Children Who Stutter (CWS) and Children with Language Impairment (CLI). The ability of the models to classify the disfluencies was measured in terms of F-measure, Recall, and Precision.
Keywords: bi-lingual, children who stutter, children with language impairment, hidden markov models, multi-layer perceptron, linguistic disfluencies, stuttering disfluencies
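A short sketch of the MLP side of the comparison: disfluency feature vectors are classified into the two groups and scored with precision, recall, and F-measure. The features and labels are synthetic stand-ins, not the study's corpus, and the HMM branch is omitted.

```python
# Illustrative MLP classification of disfluency feature vectors (CWS vs. CLI) with
# precision, recall, and F-measure. Synthetic features and toy labels only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score

rng = np.random.default_rng(3)
# Assumed features per sample: counts of repetitions, prolongations, pauses, interjections
X = rng.poisson([4, 2, 6, 3], size=(400, 4)).astype(float)
y = (X[:, 0] + X[:, 1] + rng.normal(0, 1, 400) > 6).astype(int)   # 1 = CWS, 0 = CLI (toy rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=3).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("Precision:", precision_score(y_te, pred))
print("Recall:   ", recall_score(y_te, pred))
print("F-measure:", f1_score(y_te, pred))
```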
Procedia PDF Downloads 217
3519 Three-Stage Least Squared Models of a Station-Level Subway Ridership: Incorporating an Analysis on Integrated Transit Network Topology Measures
Authors: Jungyeol Hong, Dongjoo Park
Abstract:
The urban transit system is a critical part of a solution to the economic, energy, and environmental challenges, and it ultimately contributes to the improvement of people's quality of life. To take advantage of these benefits, the city of Seoul has constructed an integrated transit system including both subways and buses, and approximately 6.9 million citizens now use the integrated transit system every day for their trips. Diagnosing the current transit network is a significant task for providing a more convenient and pleasant transit environment. Therefore, the critical objective of this study is to establish a methodological framework for the analysis of an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. Regarding statistical approaches to estimating subway ridership at the station level, many previous studies relied on Ordinary Least Squares regression, but there has been a lack of studies considering the endogeneity issues which might arise in a subway ridership prediction model. This study focuses on both discovering the impacts of integrated transit network topology measures and the endogenous effect of bus demand on subway ridership. It could ultimately contribute to developing more accurate subway ridership estimation that accounts for this statistical bias. The spatial scope of the study covers the city of Seoul in South Korea, including 243 subway stations and 10,120 bus stops, with a temporal scope of twenty-four hours in one-hour interval panels. Detailed subway and bus ridership information was collected from the Seoul Smart Card data of 2015 and 2016. First, integrated subway-bus network topology measures characterizing connectivity, centrality, transitivity, and reciprocity were estimated based on complex network theory. The results of the integrated transit network topology analysis were compared to a subway-only network topology. Also, a non-recursive approach, Three-Stage Least Squares, was applied to develop the daily subway ridership model while capturing the endogeneity between bus and subway demand. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. Consequently, it was found that the network topology measures had significant effects. In particular, the elasticity of subway ridership was 4.88% with respect to closeness centrality and 24.48% with respect to betweenness centrality, while the elasticity with respect to bus ridership was 8.85%. Moreover, it was shown that bus demand and subway ridership are endogenous in a non-recursive manner, as predicted bus ridership and predicted subway ridership are statistically significant in OLS regression models. Therefore, the three-stage least squares model appears to be a plausible model for efficient subway ridership estimation. It is expected that the proposed approach provides a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
Keywords: integrated transit system, network topology measures, three-stage least squared, endogeneity, subway ridership
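The paper estimates a full three-stage least squares system; the sketch below only illustrates the underlying instrumental, two-stage idea behind handling the bus-subway endogeneity (predict the endogenous bus ridership first, then use the prediction in the subway equation). Variable names and data are synthetic assumptions, and this is not the authors' specification.

```python
# Simplified two-stage illustration (not the full 3SLS) of the endogeneity treatment:
# stage 1 predicts bus ridership from an instrument, stage 2 uses that prediction
# in the subway ridership equation. All data and names are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 243                                  # one row per subway station
closeness = rng.uniform(0.1, 0.9, n)
betweenness = rng.uniform(0.0, 0.3, n)
bus_stops = rng.integers(5, 60, n)       # assumed instrument for bus demand
shock = rng.normal(0, 1, n)              # common shock creating endogeneity
bus = 200 + 40 * bus_stops + 100 * shock + rng.normal(0, 50, n)
subway = 1000 + 0.8 * bus + 500 * closeness + 800 * betweenness + 300 * shock + rng.normal(0, 100, n)

# Stage 1: regress the endogenous regressor on the instrument and exogenous variables
Z = sm.add_constant(np.column_stack([bus_stops, closeness, betweenness]))
bus_hat = sm.OLS(bus, Z).fit().predict(Z)

# Stage 2: use predicted bus ridership in the subway ridership equation
X = sm.add_constant(np.column_stack([bus_hat, closeness, betweenness]))
print(sm.OLS(subway, X).fit().params)
```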
Procedia PDF Downloads 177
3518 Coupling Random Demand and Route Selection in the Transportation Network Design Problem
Authors: Shabnam Najafi, Metin Turkay
Abstract:
The network design problem (NDP) is used to determine the set of optimal values for certain pre-specified decision variables, such as capacity expansion of nodes and links, by optimizing various system performance measures including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while considering the route choice behavior of network users at the same time. NDP studies have mostly investigated random demand and route selection constraints separately due to computational challenges. In this work, we consider both random demand and route selection constraints simultaneously. This work presents a nonlinear stochastic model for the land use and road network design problem to address the development of different functional zones in urban areas by considering both a cost function and air pollution. The model minimizes the cost function and air pollution simultaneously, with random demand and a stochastic route selection constraint, and aims to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used to determine the travel time on each link. We consider a city with origin and destination nodes which can be residential, employment, or both. There is a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine road capacities and origin zones simultaneously. Minimizing the travel and expansion costs of routes and origin zones on one side and minimizing CO emissions on the other side are considered at the same time in this analysis. In this work, the demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically a logit route choice model. Considering both demand and route choice as random makes the approach more applicable to the design of urban network programs. The epsilon-constraint method is one of the methods for solving both linear and nonlinear multi-objective problems, and it is used to solve the problem in this work. The problem was solved by keeping the first objective (the cost function) as the objective function of the problem and treating the second objective as a constraint that should be less than an epsilon, where epsilon is an upper bound of the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones, and 7 links is solved with GAMS, and the set of Pareto points is obtained. There are 15 efficient solutions. According to these solutions, as the cost function value increases, the emission function value decreases, and vice versa.
Keywords: epsilon-constraint, multi-objective, network design, stochastic
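A compact sketch of the two building blocks mentioned above: the BPR link travel-time function and an epsilon-constraint sweep over a toy bi-objective capacity-expansion problem. The cost and emission functions here are illustrative placeholders, not the paper's GAMS model.

```python
# BPR link impedance plus an epsilon-constraint loop on a toy two-link design problem.
import numpy as np
from scipy.optimize import minimize

def bpr_time(t0, volume, capacity, alpha=0.15, beta=4.0):
    """Bureau of Public Roads link impedance: t = t0 * (1 + alpha * (v / c)^beta)."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

def cost(x):        # x = capacity added to two links (toy travel + expansion cost)
    return bpr_time(10, 800, 1000 + x[0]) + bpr_time(12, 600, 900 + x[1]) + 0.01 * x.sum()

def emission(x):    # toy CO emission objective, decreasing with capacity expansion
    return 500.0 - 0.3 * x[0] - 0.2 * x[1]

pareto = []
for eps in np.linspace(480, 300, 5):                      # sweep the emission upper bound
    res = minimize(cost, x0=np.array([450.0, 450.0]),
                   bounds=[(0, 500), (0, 500)],
                   constraints=[{"type": "ineq", "fun": lambda x, e=eps: e - emission(x)}])
    pareto.append((round(res.fun, 2), round(emission(res.x), 1)))
print(pareto)       # family of (cost, emission) points approximating the Pareto set
```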
Procedia PDF Downloads 647
3517 Frequency Modulation Continuous Wave Radar Human Fall Detection Based on Time-Varying Range-Doppler Features
Authors: Xiang Yu, Chuntao Feng, Lu Yang, Meiyang Song, Wenhao Zhou
Abstract:
Existing two-dimensional micro-Doppler feature extraction ignores the correlation between features in the spatial and temporal dimensions. Here, the time dimension is introduced for the range-Doppler map, and a frequency modulation continuous wave (FMCW) radar human fall detection algorithm based on time-varying range-Doppler features is proposed. Firstly, the range-Doppler sequence maps are generated from the echo signals of the continuous motion of the human body collected by the radar. Then the three-dimensional data cube composed of multiple frames of range-Doppler maps is input into a three-dimensional Convolutional Neural Network (3D CNN). The spatial and temporal features of the time-varying range-Doppler maps are extracted by the convolution and pooling layers at the same time. Finally, the extracted spatial and temporal features are input into a fully connected layer for classification. The experimental results show that the proposed fall detection algorithm has a detection accuracy of 95.66%.
Keywords: FMCW radar, fall detection, 3D CNN, time-varying range-doppler features
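A minimal sketch of a 3D CNN over a cube of range-Doppler frames, of the kind the abstract describes. The tensor shape, layer sizes, and binary fall/no-fall head are assumptions for illustration, not the authors' architecture.

```python
# Illustrative 3D CNN over stacked range-Doppler frames (frames x range x Doppler x 1).
import tensorflow as tf
from tensorflow.keras import layers, models

frames, range_bins, doppler_bins = 32, 64, 64    # assumed cube dimensions

model = models.Sequential([
    layers.Input(shape=(frames, range_bins, doppler_bins, 1)),
    layers.Conv3D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling3D(pool_size=2),
    layers.Conv3D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling3D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),       # fall vs. non-fall
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```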
Procedia PDF Downloads 123
3516 Improving Machine Learning Translation of Hausa Using Named Entity Recognition
Authors: Aishatu Ibrahim Birma, Aminu Tukur, Abdulkarim Abbass Gora
Abstract:
Machine translation plays a vital role in the field of Natural Language Processing (NLP), breaking down language barriers and enabling communication across diverse communities. In the context of Hausa, a widely spoken language in West Africa, mainly in Nigeria, effective translation systems are essential for enabling seamless communication and promoting cultural exchange. However, due to the unique linguistic characteristics of Hausa, accurate translation remains a challenging task. This research proposes an approach to improving machine translation of Hausa by integrating Named Entity Recognition (NER) techniques. Named entities, such as person names, locations, organizations, and dates, are critical components of a language's structure and meaning. Incorporating NER into the translation process can enhance the quality and accuracy of translations by preserving the integrity of named entities, maintaining consistency in translating entities (e.g., proper names), and addressing cultural references specific to Hausa. The NER component is incorporated into Neural Machine Translation (NMT) for Hausa-to-English translation.
Keywords: machine translation, natural language processing (NLP), named entity recognition (NER), neural machine translation (NMT)
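A hedged sketch of one common way NER can be wired into an NMT pipeline: entities are masked with placeholders before translation and restored afterwards so proper names survive intact. The `recognize_entities` and `translate` functions below are hypothetical stand-ins, not the paper's models.

```python
# Entity-masking sketch: protect named entities with placeholders before translation,
# then restore them. Both helper functions are hypothetical stand-ins.
def recognize_entities(sentence):
    # Placeholder NER: in practice this would be a trained Hausa NER model.
    return [("Abuja", "LOC"), ("Aisha", "PER")]

def translate(sentence):
    # Placeholder NMT call: in practice a trained Hausa-to-English model.
    return sentence

def translate_with_ner(sentence):
    entities = recognize_entities(sentence)
    masked, slots = sentence, {}
    for i, (text, label) in enumerate(entities):
        token = f"__{label}{i}__"
        masked = masked.replace(text, token)
        slots[token] = text
    translated = translate(masked)
    # Restore the original entities so proper names are never mistranslated.
    for token, text in slots.items():
        translated = translated.replace(token, text)
    return translated

print(translate_with_ner("Aisha ta tafi Abuja jiya."))
```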
Procedia PDF Downloads 43
3515 Building Capacity and Personnel Flow Modeling for Operating amid COVID-19
Authors: Samuel Fernandes, Dylan Kato, Emin Burak Onat, Patrick Keyantuo, Raja Sengupta, Amine Bouzaghrane
Abstract:
The COVID-19 pandemic has spread across the United States, forcing cities to impose stay-at-home and shelter-in-place orders. Building operations had to adjust as non-essential personnel worked from home. But as buildings prepare for personnel to return, they need to plan for safe operations under new COVID-19 guidelines. In this paper, we propose a methodology for capacity and flow modeling of personnel within buildings to safely operate under COVID-19 guidelines. We model personnel flow within buildings as network flows with queuing constraints. We study maximum flow, minimum cost, and minimax objectives. We compare our network flow approach with a simulation model through a case study and present the results. Our results showcase various scenarios of how buildings could be operated under new COVID-19 guidelines and provide a framework for building operators to plan and operate buildings in this new paradigm.
Keywords: network analysis, building simulation, COVID-19
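A small sketch of the network-flow view described above: rooms and corridors become a directed graph with capacities and costs, solved for maximum flow and minimum-cost flow. The building layout is a made-up illustration.

```python
# Personnel movement as a capacitated directed graph, solved with networkx.
import networkx as nx

G = nx.DiGraph()
# (from, to, capacity in persons per time step, traversal cost)
edges = [("entrance", "lobby", 10, 1), ("lobby", "stairs", 6, 2),
         ("lobby", "elevator", 2, 1), ("stairs", "floor2", 6, 2),
         ("elevator", "floor2", 2, 1), ("floor2", "office", 8, 1)]
for u, v, cap, cost in edges:
    G.add_edge(u, v, capacity=cap, weight=cost)

max_flow, flow_dict = nx.maximum_flow(G, "entrance", "office")
print("Max occupants reaching the office per time step:", max_flow)

# Minimum-cost routing of a fixed demand of 5 persons from entrance to office
G.nodes["entrance"]["demand"] = -5
G.nodes["office"]["demand"] = 5
print(nx.min_cost_flow(G))
```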
Procedia PDF Downloads 160
3514 Blockchain-Resilient Framework for Cloud-Based Network Devices within the Architecture of Self-Driving Cars
Authors: Mirza Mujtaba Baig
Abstract:
Artificial Intelligence (AI) is evolving rapidly, and one of the areas this field has influenced is automation. The automobile, healthcare, education, and robotics industries deploy AI technologies constantly, and the automation of tasks is beneficial, freeing up time for knowledge-based tasks and introducing convenience to everyday human endeavors. The paper reviews the challenges faced by current implementations of autonomous self-driving cars by exploring the machine learning, robotics, and artificial intelligence techniques employed for the development of this innovation. The controversy surrounding the development and deployment of autonomous machines, e.g., vehicles, calls for an exploration of the configuration of the programming modules. This paper seeks to add to the body of knowledge, assisting researchers in decreasing the inconsistencies in current programming modules. Blockchain is a technology whose applications are mostly found within the financial, pharmaceutical, manufacturing, and artificial intelligence domains. Registering events in a secure manner, as well as applying the external algorithms required for data analytics, is especially helpful for integrating, adapting, maintaining, and extending to new domains, especially predictive analytics applications.
Keywords: artificial intelligence, automation, big data, self-driving cars, machine learning, neural networking algorithm, blockchain, business intelligence
Procedia PDF Downloads 119
3513 Myers-Briggs Type Index Personality Type Classification Based on an Individual’s Spotify Playlists
Authors: Sefik Can Karakaya, Ibrahim Demir
Abstract:
In this study, the relationship between musical preferences and personality traits has been investigated in terms of Spotify audio analysis features. The aim of this paper is to build a classifier capable of segmenting people into their Myers-Briggs Type Indicator (MBTI) personality type based on their Spotify playlists. Music occupies an important place in the lives of people all over the world, and online music streaming platforms make it easier to access musical content. In this context, the motivation for building such a classifier is to allow people to gain access to their MBTI personality type, perhaps more reliably and more quickly. For this purpose, logistic regression and deep neural networks have been selected as classifiers, and their performance is compared. In conclusion, it has been found that musical preferences differ statistically between personality traits, and the evaluated models are able to distinguish personality types based on the given musical data structure with an accuracy rate of over 60%.
Keywords: myers-briggs type indicator, music psychology, Spotify, behavioural user profiling, deep neural networks, logistic regression
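A brief sketch of the model comparison described above: logistic regression versus a small deep network on playlist-level audio features. The feature names follow Spotify's audio-analysis fields, but the data and MBTI labels below are synthetic placeholders.

```python
# Illustrative comparison of logistic regression and an MLP on synthetic playlist features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
# Mean danceability, energy, valence, acousticness, tempo per user's playlist
X = rng.uniform([0, 0, 0, 0, 60], [1, 1, 1, 1, 200], size=(500, 5))
y = rng.integers(0, 16, size=500)          # 16 MBTI types as class labels (synthetic)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("deep neural network", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```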
Procedia PDF Downloads 144
3512 An AI-Based Dynamical Resource Allocation Calculation Algorithm for Unmanned Aerial Vehicle
Authors: Zhou Luchen, Wu Yubing, Burra Venkata Durga Kumar
Abstract:
As networks become larger and more complex than before, the density of user devices also increases. Unmanned Aerial Vehicle (UAV) networks are able to collect and transform data in an efficient way by using software-defined networking (SDN) technology. This paper proposes a three-layer distributed and dynamic cluster architecture to manage UAVs, using an AI-based resource allocation calculation algorithm to address the network overloading problem. By separating the services of each UAV, the hierarchical UAV cluster system performs its main function of reducing the network load and transferring user requests through three sub-tasks: data collection, communication channel organization, and data relaying. In each cluster, a head node and a vice head node UAV are selected considering the Central Processing Unit (CPU), operational (RAM) and permanent (ROM) memory of the devices, battery charge, and capacity. The vice head node acts as a backup that stores all the data held by the head node. The k-means clustering algorithm is used to detect high-load regions and form the layered UAV clusters. The whole process of detecting high-load areas, forming and selecting UAV clusters, and moving the selected UAV cluster to that area is proposed as the traffic offloading algorithm.
Keywords: k-means, resource allocation, SDN, UAV network, unmanned aerial vehicles
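A hedged sketch of the clustering step described above: k-means over user-device positions to locate high-load regions, followed by head and vice-head selection inside each cluster from a simple resource score. The scoring weights and resource values are assumptions, not the paper's criteria.

```python
# k-means clustering of ground devices plus head / vice-head UAV selection per cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
device_xy = rng.uniform(0, 1000, size=(400, 2))            # ground-user positions (m)
clusters = KMeans(n_clusters=5, n_init=10, random_state=6).fit(device_xy)

# Candidate UAVs per cluster with (CPU, RAM, ROM, battery) normalized to [0, 1]
uavs = rng.uniform(0.2, 1.0, size=(5, 4, 4))               # 5 clusters x 4 UAVs x 4 resources
weights = np.array([0.3, 0.2, 0.1, 0.4])                   # assumed importance of each resource

for c in range(5):
    scores = uavs[c] @ weights
    order = np.argsort(scores)[::-1]
    head, vice = order[0], order[1]                        # best UAV leads, second best backs it up
    print(f"cluster {c}: centre {clusters.cluster_centers_[c].round(1)}, "
          f"head UAV {head}, vice head UAV {vice}")
```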
Procedia PDF Downloads 111
3511 Fault Location Detection in Active Distribution System
Authors: R. Rezaeipour, A. R. Mehrabi
Abstract:
The recent increase of DGs and microgrids in distribution systems disturbs the traditional structure of the system. Coordination between protection devices in such a system becomes a concern for the network operators. This paper presents a new method for fault location detection in active distribution networks, independent of the fault type or its resistance. The method uses synchronized voltage and current measurements at the interconnection of DG units and is able to adapt to changes in the topology of the system. The method has been tested on a 38-bus distribution system, with very encouraging results.
Keywords: fault location detection, active distribution system, micro grids, network operators
Procedia PDF Downloads 789
3510 Examining Predictive Coding in the Hierarchy of Visual Perception in the Autism Spectrum Using Fast Periodic Visual Stimulation
Authors: Min L. Stewart, Patrick Johnston
Abstract:
Predictive coding has been proposed as a general explanatory framework for understanding the neural mechanisms of perception. As such, an underweighting of perceptual priors has been hypothesised to underpin a range of differences in inferential and sensory processing in autism spectrum disorders. However, empirical evidence to support this has not been well established. The present study uses an electroencephalography paradigm involving changes of facial identity and person category (actors etc.) to explore how levels of autistic traits (AT) affect predictive coding at multiple stages in the visual processing hierarchy. The study uses a rapid serial presentation of faces, with hierarchically structured sequences involving both periodic and aperiodic repetitions of different stimulus attributes (i.e., person identity and person category) in order to induce contextual expectations relating to these attributes. It investigates two main predictions: (1) significantly larger and later neural responses to changes of expected visual sequences in high- relative to low-AT, and (2) significantly reduced neural responses to violations of contextually induced expectation in high- relative to low-AT. Preliminary frequency analysis data comparing high- and low-AT show greater and later event-related potentials (ERPs) in occipitotemporal and prefrontal areas in high-AT than in low-AT for periodic changes of facial identity and person category, but smaller ERPs over the same areas in response to aperiodic changes of identity and category. The research advances our understanding of how abnormalities in predictive coding might underpin aberrant perceptual experience in the autism spectrum. This is the first stage of a research project that will inform clinical practitioners in developing better diagnostic tests and interventions for people with autism.
Keywords: hierarchical visual processing, face processing, perceptual hierarchy, prediction error, predictive coding
Procedia PDF Downloads 111
3509 A Survey on Traditional Mac Layer Protocols in Cognitive Wireless Mesh Networks
Authors: Anusha M., V. Srikanth
Abstract:
Maximizing spectrum usage and the numerous applications of wireless communication networks have led to high interest in the available spectrum. Cognitive radios control their receiver and transmitter features precisely so that they can utilize vacant licensed spectrum without impacting the functionality of the primary licensed users. The use of multiple channels helps to address interference and thereby improves overall network efficiency. The MAC protocol in a cognitive radio network manages spectrum usage by coordinating multiple channels among the users. In this paper, we study the architecture of cognitive wireless mesh networks and traditional TDMA-based MAC methods for allocating channels dynamically. The majority of the MAC protocols proposed in the literature operate on a Common Control Channel (CCC) to handle the services between cognitive radio secondary users. An extensive study of multi-channel, multi-radio channel allotment and continually synchronized TDMA scheduling is presented in a summarized way.
Keywords: TDMA, MAC, multi-channel, multi-radio, WMN’S, cognitive radios
Procedia PDF Downloads 561
3508 A Highly Efficient Broadcast Algorithm for Computer Networks
Authors: Ganesh Nandakumaran, Mehmet Karaata
Abstract:
A wave is a distributed execution, often made up of a broadcast phase followed by a feedback phase, requiring the participation of all the system processes before a particular event called a decision is taken. Wave algorithms with one initiator, such as the 1-wave algorithm, have been shown to be very efficient for broadcasting messages in tree networks. Extensions of this algorithm broadcasting a sequence of waves using a single initiator have been implemented in algorithms such as the m-wave algorithm. However, as the network size increases, having a single initiator adversely affects the message delivery times to nodes further away from the initiator. As a remedy, broadcast waves can be allowed to be initiated by multiple initiator nodes distributed across the network to reduce the completion time of broadcasts. These waves initiated by one or more initiator processes form a collection of waves covering the entire network. Global snapshots, distributed broadcast, and various synchronization problems can be solved efficiently using waves with multiple concurrent initiators. In this paper, we propose the first stabilizing multi-wave sequence algorithm implementing waves started by multiple initiator processes such that every process in the network receives at least one sequence of broadcasts. Being stabilizing, the proposed algorithm can withstand transient faults and does not require initialization. We view a fault as a transient fault if it perturbs the configuration of the system but not its program.
Keywords: distributed computing, multi-node broadcast, propagation of information with feedback and cleaning (PFC), stabilization, wave algorithms
Procedia PDF Downloads 504
3507 Investigating the Behavior of Individual Business Taxpayers: Behavioral Economics Approach
Authors: Yeganeh Mousavi Jahromi, Sahar Dehghan
Abstract:
In the Direct Tax Act, penalties and incentives are two strategies for the realization of the expected tax revenues. In this study, the interaction between individual business taxpayers' behavior and the National Tax Administration is investigated using prospect theory, which is based on the behavioral economics approach. For this purpose, the structure of tax compliance of these taxpayers is evaluated via changes in penalty and incentive rates. A dedicated questionnaire covering the individual business items of the Direct Tax Act was designed for the tax compliance evaluation, and the results were obtained using the Bayesian hierarchical method. The results indicate that the investigated individual business taxpayers, at all income levels, were more sensitive to incentive rates, a finding that can be useful for tax policymakers.
Keywords: behavioral economics, prospect theory, tax compliance, penalties, incentives
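A small worked illustration of the prospect-theory value function that underlies this kind of analysis: gains (incentives) and losses (penalties) are valued asymmetrically around a reference point. The parameter values are the commonly cited Tversky-Kahneman estimates, used here purely for illustration, not the paper's fitted values.

```python
# Prospect-theory value function with standard illustrative parameters.
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """v(x) = x^alpha for gains, -lam * (-x)^beta for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# A tax incentive (gain) of 100 feels weaker than a penalty (loss) of 100 hurts:
print(prospect_value(100))    # ~57.5
print(prospect_value(-100))   # ~-129.5
```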
Procedia PDF Downloads 68
3506 The Omicron Variant BA.2.86.1 of SARS-CoV-2 Demonstrates an Altered Interaction Network and Dynamic Features to Enhance the Interaction with the hACE2
Authors: Taimur Khan, Zakirullah, Muhammad Shahab
Abstract:
The SARS-CoV-2 variant BA.2.86 (Omicron) has emerged with unique mutations that may increase its transmission and infectivity. This study investigates how these mutations alter the interaction network and dynamic properties of the Omicron receptor-binding domain (RBD) compared to the wild-type virus, focusing on its binding affinity to the human ACE2 (hACE2) receptor. Protein-protein docking and all-atom molecular dynamics simulations were used to analyze structural and dynamic differences. Despite the structural similarity to the wild-type virus, the Omicron variant exhibits a distinct interaction network involving new residues that enhance its binding capacity. The dynamic analysis reveals increased flexibility in the RBD, particularly in loop regions crucial for hACE2 interaction. Mutations significantly alter the secondary structure, leading to greater flexibility and conformational adaptability compared to the wild type. Binding free energy calculations confirm that the Omicron RBD has a higher binding affinity (-70.47 kcal/mol) to hACE2 than the wild-type RBD (-61.38 kcal/mol). These results suggest that the altered interaction network and enhanced dynamics of the Omicron variant contribute to its increased infectivity, providing insights for the development of targeted therapeutics and vaccines.
Keywords: SARS-CoV-2, molecular dynamic simulation, receptor binding domain, vaccine
Procedia PDF Downloads 22
3505 Enhancing Spatial Interpolation: A Multi-Layer Inverse Distance Weighting Model for Complex Regression and Classification Tasks in Spatial Data Analysis
Authors: Yakin Hajlaoui, Richard Labib, Jean-François Plante, Michel Gamache
Abstract:
This study introduces the Multi-Layer Inverse Distance Weighting Model (ML-IDW), inspired by the mathematical formulation of both multi-layer neural networks (ML-NNs) and the Inverse Distance Weighting (IDW) model. ML-IDW leverages the processing capabilities of ML-NNs, characterized by compositions of learnable non-linear functions applied to input features, and incorporates IDW's ability to learn anisotropic spatial dependencies, presenting a promising solution for nonlinear spatial interpolation and learning from complex spatial data. We employ gradient descent and backpropagation to train ML-IDW, comparing its performance against conventional spatial interpolation models such as Kriging and standard IDW on regression and classification tasks using simulated spatial datasets of varying complexity. The results highlight the efficacy of ML-IDW, particularly in handling complex spatial datasets, exhibiting lower mean square error in regression and a higher F1 score in classification.
Keywords: deep learning, multi-layer neural networks, gradient descent, spatial interpolation, inverse distance weighting
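For reference, a compact sketch of the classical single-layer IDW that ML-IDW generalizes: predictions are distance-weighted averages of observed values, with the power parameter p controlling how quickly influence decays. The data below are synthetic.

```python
# Classical inverse distance weighting on synthetic 2D data.
import numpy as np

def idw_predict(query_xy, known_xy, known_z, p=2.0, eps=1e-12):
    """IDW: z(q) = sum(w_i * z_i) / sum(w_i), with w_i = 1 / d(q, x_i)^p."""
    d = np.linalg.norm(known_xy[None, :, :] - query_xy[:, None, :], axis=2)
    w = 1.0 / (d ** p + eps)
    return (w @ known_z) / w.sum(axis=1)

rng = np.random.default_rng(7)
known_xy = rng.uniform(0, 100, size=(50, 2))
known_z = np.sin(known_xy[:, 0] / 20) + 0.1 * rng.normal(size=50)
queries = rng.uniform(0, 100, size=(5, 2))
print(idw_predict(queries, known_xy, known_z))
```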
Procedia PDF Downloads 52
3504 Multiperson Drone Control with Seamless Pilot Switching Using Onboard Camera and Openpose Real-Time Keypoint Detection
Authors: Evan Lowhorn, Rocio Alba-Flores
Abstract:
Traditional classification Convolutional Neural Networks (CNN) attempt to classify an image in its entirety. This becomes problematic when trying to perform classification with a drone's camera in real time due to unpredictable backgrounds. Object detectors with bounding boxes can be used to isolate individuals and other items, but the original backgrounds remain within these boxes. These basic detectors have been regularly used to determine what type of object an item is, such as "person" or "dog." A recent advancement in computer vision, particularly for human imaging, is keypoint detection. Human keypoint detection goes beyond bounding boxes to fully isolate humans and plot points, or Regions of Interest (ROI), on their bodies within an image. ROIs can include shoulders, elbows, knees, heads, etc. These points can then be related to each other and used in deep learning methods such as pose estimation. For drone control based on human motions, poses, or signals using the onboard camera, it is important to have a simple method for pilot identification among multiple individuals while also giving the pilot fine control options for the drone. To achieve this, the OpenPose keypoint detection network was used with body and hand keypoint detection enabled. OpenPose supports the ability to combine multiple keypoint detection methods in real time within a single network. Body keypoint detection allows simple poses to act as the pilot identifier. The hand keypoint detection with ROIs for each finger can then offer a greater variety of signal options for the pilot once identified. For this work, the individual must raise their non-control arm to be identified as the operator and send commands with the hand of their other arm. The drone ignores all other individuals in the onboard camera feed until the current operator lowers their non-control arm. When another individual wishes to operate the drone, they simply raise their arm once the current operator relinquishes control, and then they can begin controlling the drone with their other hand. This is all performed mid-flight with no landing or script editing required. When using a desktop with a discrete NVIDIA GPU, the drone's 2.4 GHz Wi-Fi connection, combined with restricting OpenPose to only body and hand detection, allows this control method to perform as intended while maintaining the responsiveness required for practical use.
Keywords: computer vision, drone control, keypoint detection, openpose
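A hedged sketch of the operator-selection rule described above, working on keypoint arrays that stand in for OpenPose output. The keypoint indices assume a COCO-like ordering and are illustrative, not the exact OpenPose layout.

```python
# Pilot selection from body keypoints: the first person with the non-control (left)
# arm raised becomes the operator. Keypoint indices and arrays are assumptions.
import numpy as np

R_SHOULDER, L_SHOULDER, R_WRIST, L_WRIST = 2, 5, 4, 7   # assumed body-keypoint indices

def arm_raised(person, wrist, shoulder):
    """Image y grows downward, so a raised wrist has a smaller y than the shoulder."""
    return person[wrist, 1] < person[shoulder, 1]

def select_pilot(people):
    """Return the index of the first person with their non-control (left) arm raised."""
    for i, person in enumerate(people):
        if arm_raised(person, L_WRIST, L_SHOULDER):
            return i
    return None                                          # nobody is claiming control

# Two detected people, each as an array of (x, y) keypoints; person 1 raises the left arm.
people = [np.array([[0, 0]] * 8, float), np.array([[0, 0]] * 8, float)]
people[1][L_SHOULDER] = (100, 200)
people[1][L_WRIST] = (110, 150)
print("pilot index:", select_pilot(people))
```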
Procedia PDF Downloads 184
3503 Introduce a New Model of Anomaly Detection in Computer Networks Using Artificial Immune Systems
Authors: Mehrshad Khosraviani, Faramarz Abbaspour Leyl Abadi
Abstract:
Computer networks are a fundamental component of the modern information society, and these networks are generally connected to the Internet. Because the Internet was not originally designed with security as its primary purpose, these networks have been the target of many significant attacks in recent decades. Today, various security tools and systems, including intrusion detection systems, are used in networks to provide security. In this paper, an anomaly detection system based on artificial immunity is designed and evaluated. The idea of using artificial immune methods for detecting abnormalities in computer networks is motivated by the properties that immune systems share with the requirements of such detection systems: for example, these methods can detect a variety of anomalies and attacks, and properties of artificial immune algorithms such as memory, learning ability, and self-regulation can be pointed out. The anomaly detection system presented in this paper is trained using only normal samples of network traffic and does not require any additional data about the types of attacks. In the proposed system, positive selection and negative selection processes are used to select samples that create a distinction between normal behavior and attacks. Evaluation on a real data collection indicates that the false alarm rate of the proposed system is often low compared to other methods, while its detection rate varies.
Keywords: artificial immune system, abnormality detection, intrusion detection, computer networks
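A minimal sketch of the negative selection idea referred to above: detectors are generated at random and kept only if they do not match any normal ("self") sample; traffic matched by a surviving detector is flagged as anomalous. Feature dimensions, thresholds, and data are illustrative, not the paper's system.

```python
# Negative selection sketch on synthetic 2D traffic features.
import numpy as np

rng = np.random.default_rng(8)
# "Self" set: feature vectors of normal network traffic, here confined to [0, 0.5]^2
self_samples = rng.uniform(0.0, 0.5, size=(200, 2))

def matches(detector, sample, radius=0.1):
    return np.linalg.norm(detector - sample) < radius

# Negative selection: keep only random detectors that match no normal sample
detectors = []
while len(detectors) < 150:
    candidate = rng.uniform(0.0, 1.0, size=2)
    if not any(matches(candidate, s) for s in self_samples):
        detectors.append(candidate)

def is_anomalous(sample):
    """Traffic matched by any surviving detector is flagged as abnormal."""
    return any(matches(d, sample) for d in detectors)

# A sample inside the normal region vs. one far away from everything seen as normal
print(is_anomalous(np.array([0.2, 0.3])), is_anomalous(np.array([0.9, 0.9])))
```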
Procedia PDF Downloads 353
3502 A 5G Architecture Based to Dynamic Vehicular Clustering Enhancing VoD Services Over Vehicular Ad hoc Networks
Authors: Lamaa Sellami, Bechir Alaya
Abstract:
Nowadays, video-on-demand (VoD) applications are becoming one of the main trends driving vehicular network users. In this paper, considering the unpredictable vehicle density, the unexpected acceleration or deceleration of the different cars included in the vehicular traffic load, and the limited radio range of the employed communication scheme, we introduce the "Dynamic Vehicular Clustering" (DVC) algorithm as a new scheme for video streaming systems over VANETs. The proposed algorithm takes advantage of the concept of small cells and the introduction of wireless backhauls, inspired by the features and performance of the Long Term Evolution (LTE)-Advanced network. The proposed clustering algorithm considers multiple characteristics such as the vehicle's position and acceleration to reduce latency and packet loss. Therefore, each cluster is treated as a small cell containing vehicular nodes and an access point that is elected according to particular specifications.
Keywords: video-on-demand, vehicular ad-hoc network, mobility, vehicular traffic load, small cell, wireless backhaul, LTE-advanced, latency, packet loss
Procedia PDF Downloads 141
3501 Choosing between the Regression Correlation, the Rank Correlation, and the Correlation Curve
Authors: Roger L. Goodwin
Abstract:
This paper presents a rank correlation curve. The traditional correlation coefficient is valid both for continuous variables and for integer variables using rank statistics. Since the correlation coefficient has already been established in rank statistics by Spearman, such a calculation can be extended to the correlation curve. This paper presents two survey questions. The survey collected non-continuous variables. We show weak to moderate correlation. Obviously, one question has a negative effect on the other. A review of the qualitative literature can answer which question and why. The rank correlation curve shows which collection of responses has a positive slope and which collection of responses has a negative slope. Such information is unavailable from the flat, "first-glance" correlation statistics.
Keywords: Bayesian estimation, regression model, rank statistics, correlation, correlation curve
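A brief sketch of the two "first-glance" statistics the curve is contrasted with: the ordinary (Pearson) correlation and Spearman's rank correlation on discrete survey responses. The Likert-style data below are synthetic.

```python
# Pearson vs. Spearman correlation on synthetic discrete survey responses.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(9)
q1 = rng.integers(1, 6, size=200)                      # responses to question 1 (1-5 scale)
# Question 2 tends to move in the opposite direction of question 1
q2 = np.clip(6 - q1 + rng.integers(-1, 2, size=200), 1, 5)

r, p_r = pearsonr(q1, q2)
rho, p_rho = spearmanr(q1, q2)
print(f"Pearson r = {r:.2f} (p = {p_r:.3g})")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3g})")
```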
Procedia PDF Downloads 476
3500 Citation Analysis of New Zealand Court Decisions
Authors: Tobias Milz, L. Macpherson, Varvara Vetrova
Abstract:
The law is a fundamental pillar of human societies as it shapes, controls and governs how humans conduct business, behave and interact with each other. Recent advances in computer-assisted technologies such as NLP, data science and AI are creating opportunities to support the practice, research and study of this pervasive domain. It is therefore not surprising that there has been an increase in investments into supporting technologies for the legal industry (also known as "legal tech" or "law tech") over the last decade. A sub-discipline of particular appeal is concerned with assisted legal research. Supporting law researchers and practitioners to retrieve information from the vast amount of ever-growing legal documentation is of natural interest to the legal research community. One tool that has been in use for this purpose since the early nineteenth century is legal citation indexing. Among other use cases, such indexes provide an effective means to discover precedent cases. Nowadays, computer-assisted network analysis tools allow for new and more efficient ways to reveal the "hidden" information that is conveyed through citation behavior. Unfortunately, access to openly available legal data is still lacking in New Zealand, and access to such networks is only commercially available via providers such as LexisNexis. Consequently, there is a need to create, analyze and provide a legal citation network with sufficient data to support legal research tasks. This paper describes the development and analysis of a legal citation network for New Zealand containing over 300,000 decisions from 125 different courts covering all areas of law and jurisdiction. Using Python, the authors assembled web crawlers, scrapers and an OCR pipeline to collect and convert court decisions from openly available sources such as NZLII into uniform and machine-readable text. This facilitated the use of regular expressions to identify references to other court decisions from within the decision text. The data was then imported into a graph-based database (Neo4j) with the courts and their respective cases represented as nodes and the extracted citations as links. Furthermore, additional links between courts of connected cases were added to indicate an indirect citation between the courts. Neo4j, as a graph-based database, allows efficient querying and the use of network algorithms such as PageRank to reveal the most influential/most cited courts and court decisions over time. This paper shows that the in-degree distribution of the New Zealand legal citation network resembles a power-law distribution, which indicates possible scale-free behavior of the network. This is in line with findings for the respective citation networks of the U.S. Supreme Court, Austria and Germany. The authors of this paper provide the database as an openly available data source to support further legal research. The decision texts can be exported from the database to be used for NLP-related legal research, while the network can be used for in-depth analysis. For example, users of the database can specify the network algorithms and metrics to only include specific courts to filter the results to the area of law of interest.
Keywords: case citation network, citation analysis, network analysis, Neo4j
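A hedged sketch of the extraction-and-ranking pipeline described above: neutral citations are pulled from decision text with a regular expression, a directed citation graph is built, and decisions are ranked with PageRank. The citation pattern and sample decisions are simplified illustrations (networkx stands in here for the Neo4j graph store).

```python
# Regex-based citation extraction plus PageRank over a toy citation graph.
import re
import networkx as nx

decisions = {
    "[2020] NZHC 100": "... as held in [2019] NZCA 55 and discussed in [2018] NZSC 7 ...",
    "[2019] NZCA 55": "... following [2018] NZSC 7 ...",
    "[2018] NZSC 7": "... first instance decision ...",
}

citation_pattern = re.compile(r"\[\d{4}\] NZ[A-Z]+ \d+")   # simplified neutral-citation regex

G = nx.DiGraph()
for case_id, text in decisions.items():
    G.add_node(case_id)
    for cited in citation_pattern.findall(text):
        if cited != case_id:
            G.add_edge(case_id, cited)

ranks = nx.pagerank(G)
for case_id, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{case_id}: in-degree {G.in_degree(case_id)}, PageRank {score:.3f}")
```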
Procedia PDF Downloads 107
3499 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines a method for developing a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering for defining the membership functions, and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study where the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects on the error between the calibration data and the model-generated outputs of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions were also illustrated. The solution spaces of the three methods were consequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors compared to FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors that are slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, i.e., the negative GWP values of the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of ANFIS models highly depends on the range of cases over which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE. In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that could be used for prediction. In the PV case study, conventional FIS generated errors that are just slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result compared to those generated by the life cycle methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that could be used during the initial design of a system.
Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
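A minimal sketch of a single-input Mamdani-style inference step of the kind the conventional FIS builds on: triangular membership functions, rule firing by membership degree, and centroid defuzzification. The membership breakpoints, the rule base, and the output range are illustrative only, not the calibrated PV model.

```python
# Toy Mamdani-style inference: irradiation -> GWP estimate via triangular memberships.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

irradiation = 5.2                                   # kWh/m2/day, crisp input
low, med, high = tri(irradiation, 2, 3, 4.5), tri(irradiation, 3, 4.5, 6), tri(irradiation, 4.5, 6, 7)

out = np.linspace(20, 80, 601)                      # candidate GWP values (g CO2-eq/kWh)
# Rules: low irradiation -> high GWP, medium -> medium, high -> low
aggregated = np.maximum.reduce([
    np.minimum(low, tri(out, 55, 70, 80)),
    np.minimum(med, tri(out, 35, 50, 65)),
    np.minimum(high, tri(out, 20, 30, 45)),
])
gwp = (out * aggregated).sum() / aggregated.sum()   # centroid defuzzification
print(f"Estimated GWP: {gwp:.1f} g CO2-eq/kWh")
```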
Procedia PDF Downloads 164
3498 Supporting Densification through the Planning and Implementation of Road Infrastructure in the South African Context
Authors: K. Govender, M. Sinclair
Abstract:
This paper demonstrates a proof of concept whereby shorter trips and land use densification can be promoted through an alternative approach to the planning and implementation of road infrastructure in the South African context. It briefly discusses how the development of the Compact City concept relies on a combination of promoting shorter trips and densification through a change of focus in road infrastructure provision. The methodology developed in this paper uses a traffic model to test the impact of synthesized deterrence functions on congestion locations in the road network through the assignment of traffic on the study network. The results from this study demonstrate that intelligent planning of road infrastructure can indeed promote reduced urban sprawl, increased residential density and mixed-use areas supported by an efficient public transport system, as well as reduced dependence on the freeway network, within a fixed road infrastructure budget. The study has resonance for all cities where urban sprawl is seemingly unstoppable.
Keywords: compact cities, densification, road infrastructure planning, transportation modelling
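A small, hedged illustration of how a deterrence function shapes trip lengths in a simple production-constrained gravity model: a steeper decay of willingness to travel with cost shifts demand toward shorter trips. The zone data, costs, and decay parameters are made up and are not the paper's traffic model.

```python
# Gravity-model trip distribution with a negative-exponential deterrence function.
import numpy as np

production = np.array([1000.0, 800.0])            # trips produced by two origin zones
attraction = np.array([600.0, 700.0, 500.0])      # attractiveness of three destination zones
cost = np.array([[5.0, 15.0, 30.0],               # generalized travel cost between zones (min)
                 [20.0, 8.0, 12.0]])

def gravity_trips(beta):
    """Production-constrained gravity model with deterrence f(c) = exp(-beta * c)."""
    deterrence = np.exp(-beta * cost)
    weights = attraction * deterrence
    return production[:, None] * weights / weights.sum(axis=1, keepdims=True)

for beta in (0.05, 0.15):                         # larger beta = stronger deterrence of long trips
    trips = gravity_trips(beta)
    mean_cost = (trips * cost).sum() / trips.sum()
    print(f"beta = {beta}: mean trip cost = {mean_cost:.1f} min")
```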
Procedia PDF Downloads 178