Search results for: star network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4791

2421 Predictive Analysis of the Stock Price Market Trends with Deep Learning

Authors: Suraj Mehrotra

Abstract:

The stock market is a volatile, bustling marketplace and a cornerstone of economics. It signals whether companies are thriving or in decline. A thorough understanding of it is important: many companies have whole divisions dedicated to analyzing both their own stock and that of rival companies. Linking the world of finance, especially the stock market, with artificial intelligence (AI) is a relatively recent development. Predicting how stocks will perform, given all external factors and historical data, has traditionally been a human task. With the help of AI, however, machine learning models can produce more complete predictions of financial trends. In the stock market specifically, predicting the next day's open, closing, high, and low prices is very hard; machine learning makes the task considerably easier. A model that builds on its own history and takes in external factors as weighted inputs can project trends far into the future. Used effectively, this opens new doors in business and finance and lets companies make better, more informed decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural network based approaches, among other methods. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. Comparing testing-set accuracy for four models, linear regression, neural network, decision tree, and naïve Bayes, across the stocks of Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, J.P. Morgan Chase, and Johnson & Johnson, the naïve Bayes and linear regression models worked best. On the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model. The training set showed similar results, except that the decision tree was perfect, with complete accuracy in its predictions, which indicates that the decision tree likely overfitted the training set and therefore generalized poorly to the testing set.
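
As a rough illustration of the four-model comparison described in this abstract, the sketch below fits the same model families on synthetic next-day direction labels with scikit-learn. The features, the labels, and the use of logistic regression as the "linear" model are assumptions made for the sake of a runnable example; none of this is the paper's actual data or pipeline.

```python
# Hedged sketch: compare the four model families on a synthetic "next-day up/down" label.
# Assumes scikit-learn; features, labels and the logistic stand-in for linear regression
# are illustrative only, not the paper's data or pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                                # stand-in for OHLC-derived features
y = (X[:, 3] + 0.1 * rng.normal(size=1000) > 0).astype(int)   # stand-in "price goes up" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "linear (logistic) model": LogisticRegression(max_iter=1000),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # A perfect training score with a lower test score (typical of the tree) signals overfitting.
    print(f"{name}: train={model.score(X_tr, y_tr):.2f}, test={model.score(X_te, y_te):.2f}")
```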

Keywords: machine learning, testing set, artificial intelligence, stock analysis

Procedia PDF Downloads 67
2420 Prediction of Wind Speed by Artificial Neural Networks for Energy Application

Authors: S. Adjiri-Bailiche, S. M. Boudia, H. Daaou, S. Hadouche, A. Benzaoui

Abstract:

In this work, the variation of wind speed with altitude is modeled using artificial neural networks. Measured data, the wind speed and direction, temperature, and humidity at 10 m, are used as inputs, and the wind speed at 50 m above sea level is used as the target. The predicted wind speeds are then compared with values extrapolated to 50 m above sea level. The results show that prediction by artificial neural networks is very accurate.
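
A minimal sketch of this input/target setup, assuming scikit-learn rather than the MATLAB toolchain named in the keywords; the 10 m measurements, the shear exponent, and the 50 m targets are synthetic placeholders, not the authors' data.

```python
# Hedged sketch: map 10 m measurements (speed, direction, temperature, humidity)
# to the 50 m wind speed with a small feed-forward neural network, then compare
# with the power-law vertical extrapolation the keywords refer to.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform([0, 0, -10, 10], [25, 360, 40, 100], size=(2000, 4))  # placeholder inputs at 10 m
alpha = 0.14                                                          # placeholder shear exponent
y = X[:, 0] * (50 / 10) ** alpha + 0.2 * rng.normal(size=2000)        # synthetic 50 m target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=1).fit(X_tr, y_tr)

v50_power_law = X_te[:, 0] * (50 / 10) ** alpha                       # classical extrapolation
print("ANN R^2 on test set:", ann.score(X_te, y_te))
print("mean |ANN - power law| (m/s):", np.mean(np.abs(ann.predict(X_te) - v50_power_law)))
```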

Keywords: MATLAB, neural network, power law, vertical extrapolation, wind energy, wind speed

Procedia PDF Downloads 661
2419 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text

Authors: Duncan Wallace, M-Tahar Kechadi

Abstract:

In recent years, Machine Learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as such data are widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. An OOHC provides ad-hoc triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation is incomplete, heterogeneous, and composed mostly of noisy textual notes compiled during routine OOHC activities. Through the use of Deep Learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. Accordingly, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our Recurrent Neural Network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program in relation to lexemes within true positive and true negative cases with an inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
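
A compact sketch of the recurrent classifier family being compared, written in PyTorch as an assumption (the paper's framework is not stated). Swapping nn.LSTM for nn.GRU gives the GRU variant; the vocabulary size, dimensions, and the frequent-attender label are illustrative only.

```python
# Hedged sketch of an LSTM classifier over tokenised clinical notes; all sizes and
# the binary "frequent attender" target are placeholder assumptions.
import torch
import torch.nn as nn

class NoteClassifier(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)   # nn.GRU for the GRU variant
        self.head = nn.Linear(hidden_dim, 1)                          # frequent-attender logit

    def forward(self, token_ids):
        emb = self.embed(token_ids)              # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.rnn(emb)              # final hidden state summarises the note
        return self.head(h_n[-1]).squeeze(-1)    # one logit per clinical note

model = NoteClassifier()
dummy_batch = torch.randint(1, 5000, (8, 120))   # 8 padded notes of 120 tokens each
logits = model(dummy_batch)
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(8))
loss.backward()
```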

Keywords: artificial neural networks, data-mining, machine learning, medical informatics

Procedia PDF Downloads 104
2418 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud

Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal

Abstract:

Ever since the idea was floated of using computing services as a commodity that can be delivered like other utilities, e.g., electricity and telephony, the scientific community has directed its research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computing paradigms, with a growing rate of applications in the area of IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business, and finance, to name a few. The smart grid is another discipline that stands to benefit greatly from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements of the unstructured and uncorrelated data generated by smart sensors, as well as the computational needs of self-healing, load balancing, and demand response features. However, security issues such as confidentiality, integrity, availability, accountability, and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed for the cloud, but hackers and intruders still manage to bypass its security. Therefore, precise intrusion detection systems need to be developed in order to secure critical information infrastructure like the smart grid cloud. Considering the success of artificial neural networks in building robust intrusion detection, this research proposes an artificial neural network based model for detecting attacks in the smart grid cloud.

Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid

Procedia PDF Downloads 295
2417 Thermodynamic Attainable Region for Direct Synthesis of Dimethyl Ether from Synthesis Gas

Authors: Thulane Paepae, Tumisang Seodigeng

Abstract:

This paper demonstrates a method of synthesizing process flowsheets using a graphical tool called the GH-plot and, in particular, looks at how it can be used to compare the reactions of a combined simultaneous process with regard to their thermodynamics. The technique uses fundamental thermodynamic principles, allowing the mass, energy, and work balances to locate the attainable region for chemical processes in a reactor. This provides guidance on which design decisions would be best suited to developing new processes that are more effective and make lower demands on raw material and energy usage.

Keywords: attainable regions, dimethyl ether, optimal reaction network, GH Space

Procedia PDF Downloads 218
2416 A Survey of Domain Name System Tunneling Attacks: Detection and Prevention

Authors: Lawrence Williams

Abstract:

As the mechanism which converts domain names to internet protocol (IP) addresses, the Domain Name System (DNS) is an essential part of internet usage. It was not designed with security in mind and can be subject to attacks. DNS attacks have become more frequent and sophisticated, and the need to detect and prevent them is increasingly important for the modern network. DNS tunneling attacks are one such attack, used primarily for distributed denial-of-service (DDoS) attacks and data exfiltration. Different techniques to detect and prevent DNS tunneling attacks are discussed, covering the methods, models, experiments, and data for each technique. A proposal about their feasibility is made, and future research on these topics is suggested.

Keywords: DNS, tunneling, exfiltration, botnet

Procedia PDF Downloads 50
2415 Regional Flood Frequency Analysis in Narmada Basin: A Case Study

Authors: Ankit Shah, R. K. Shrivastava

Abstract:

Floods and droughts are two main features of hydrology which affect human life. Floods are natural disasters which cause millions of rupees' worth of damage each year in India and across the world. Floods cause destruction in the form of loss of life and property. An accurate estimate of the flood damage potential is a key element of an effective, nationwide flood damage abatement program. Also, the increase in demand for water due to growth in population, industry, and agriculture has made it clear that, although water is a renewable resource, it cannot be taken for granted. We have to optimize the use of water according to circumstances and conditions, and we need to harness it, which can be done by the construction of hydraulic structures. For the safe and proper functioning of hydraulic structures, we need to predict the flood magnitude and its impact. Hydraulic structures play a key role in harnessing and optimizing flood water, which in turn results in safe and maximum use of the water available. Hydraulic structures are mainly constructed at ungauged sites. There are two methods by which we can estimate floods, viz. generation of unit hydrographs and flood frequency analysis. In this study, regional flood frequency analysis has been employed. There are many methods for regional flood frequency analysis, viz. the index flood method, National Environmental and Research Council (NERC) methods, the multiple regression method, etc. However, none of the methods can be considered universal for every situation and location. The Narmada basin is located in Central India. It is drained by many tributaries, most of which are ungauged; therefore, it is very difficult to estimate floods on these tributaries and in the main river. In the present study, Artificial Neural Networks (ANNs) and the multiple regression method are used to determine regional flood frequency. The annual peak flood data of 20 gauging sites in the Narmada Basin are used to determine the regional flood relationships. Homogeneity of the considered sites is determined using the index flood method. The flood relationships obtained by the two methods are compared with each other, and it is found that the ANN is more reliable than the multiple regression method for the present study area.
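
As a hedged illustration of fitting a regional relationship both ways, the sketch below compares multiple regression with a small ANN under leave-one-out cross-validation. The 20 sites, the catchment descriptors, and the peak floods are synthetic placeholders, not the Narmada data or the authors' exact variables.

```python
# Hedged sketch: a regional flood relationship fitted with multiple regression and with an ANN.
# Catchment descriptors and annual peak floods below are invented stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(2)
area = rng.uniform(100, 5000, 20)                    # catchment area (km^2), placeholder
rain = rng.uniform(800, 1600, 20)                    # mean annual rainfall (mm), placeholder
X = np.log(np.column_stack([area, rain]))            # log-space descriptors for 20 "sites"
q_peak = np.log(0.5 * area ** 0.8 * (rain / 1000) ** 1.2 * rng.lognormal(0, 0.2, 20))

for name, model in [("multiple regression", LinearRegression()),
                    ("ANN", MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=2))]:
    # Leave-one-out error is a simple stand-in for checking regional transferability.
    score = cross_val_score(model, X, q_peak, cv=LeaveOneOut(),
                            scoring="neg_mean_absolute_error").mean()
    print(f"{name}: mean abs error (log space) = {-score:.3f}")
```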

Keywords: artificial neural network, index flood method, multi layer perceptrons, multiple regression, Narmada basin, regional flood frequency

Procedia PDF Downloads 390
2414 Internet of Things Applications on Supply Chain Management

Authors: Beatriz Cortés, Andrés Boza, David Pérez, Llanos Cuenca

Abstract:

The Internet of Things (IoT) is being applied in industry for different purposes. Sensing Enterprise (SE) is an attribute of an enterprise or a network that allows it to react to business stimuli originating on the internet. These fields have recently come into focus in enterprises, and there is some evidence of their use and implications in supply chain management, making this an interesting aspect to work on. This paper presents a review of, and proposals for, IoT applications in supply chain management.

Keywords: industrial, internet of things, production systems, sensing enterprises, sensor, supply chain management

Procedia PDF Downloads 391
2413 The Use of Instagram as a Sales Tool by Small Fashion/Clothing Businesses

Authors: Santos Andressa M. N.

Abstract:

This research reflects on the importance of Instagram for the clothing trade, aiming to analyze the use of this social network as a sales tool by small companies in the fashion/clothing sector in Boqueirão-PI. Field research was carried out, with the application of questionnaires, to collect and analyze data related to the topic. The findings suggest that Instagram positively influences the dissemination, visibility, reach, and profitability of companies in Boqueirão do Piauí. The survey covered a small number of companies due to the limited availability of the owners during the COVID-19 pandemic.

Keywords: Instagram, sales, fashion, marketing

Procedia PDF Downloads 27
2412 Study of a Crude Oil Desalting Plant of the National Iranian South Oil Company in Gachsaran by Using Artificial Neural Networks

Authors: H. Kiani, S. Moradi, B. Soltani Soulgani, S. Mousavian

Abstract:

Desalting/dehydration plants (DDP) are often installed in crude oil production units in order to remove water-soluble salts from an oil stream. In order to optimize this process, the desalting unit should be modeled. In this research, an artificial neural network is used to model the efficiency of the desalting unit as a function of the input parameters. The results show that the model is in good agreement with experimental data.

Keywords: desalting unit, crude oil, neural networks, simulation, recovery, separation

Procedia PDF Downloads 410
2411 Factorization of Computations in Bayesian Networks: Interpretation of Factors

Authors: Linda Smail, Zineb Azouz

Abstract:

Given a Bayesian network relative to a set I of discrete random variables, we are interested in computing the probability distribution P(S), where S is a subset of I. The general idea is to write the expression of P(S) in the form of a product of factors, where each factor is easy to compute. More importantly, it is very useful to give an interpretation of each of the factors in terms of conditional probabilities. This paper considers a semantic interpretation of the factors involved in computing marginal probabilities in Bayesian networks. Establishing such semantic interpretations is indeed interesting and relevant in the case of large Bayesian networks.
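
For reference, the factorization being interpreted is the standard marginalization identity below, written in generic Bayesian-network notation (not anything specific to this paper):

```latex
% Marginal of a subset S of I: sum the network factorization over the variables not in S;
% each factor P(X_i | pa(X_i)) is a local conditional probability table.
P(S) \;=\; \sum_{X_{I \setminus S}} \; \prod_{i \in I} P\bigl(X_i \mid \mathrm{pa}(X_i)\bigr)
```

The paper's contribution, as described above, concerns grouping and interpreting the partial products that arise when this sum is evaluated factor by factor.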

Keywords: Bayesian networks, D-Separation, level two Bayesian networks, factorization of computation

Procedia PDF Downloads 498
2410 A Literature Review of Precision Agriculture: Applications of Diagnostic Diseases in Corn, Potato, and Rice Based on Artificial Intelligence

Authors: Carolina Zambrana, Grover Zurita

Abstract:

The food loss that occurs through deficient agricultural production is one of the major problems worldwide. It puts the population's food security and the efficiency of farming investments at risk. Food security is expected to be achieved through each country's own efficient production, with an impact on the well-being of its population and, thus, also on food sovereignty. Production losses in quantity and quality occur due to the lack of efficient detection of diseases at an early stage. It is very difficult to improve agricultural efficiency using traditional methods, since they take a long time to carry out and detect the main diseases imprecisely, especially when the production areas are extensive. Therefore, the main objective of this research study is to perform a systematic literature review, covering the last five years, of Precision Agriculture (PA) in order to understand the state of the art of the set of new technologies, procedures, and optimization processes with Artificial Intelligence (AI). This study focuses on the diagnosis of diseases in corn, potato, and rice. The literature review is performed on the Elsevier, Scopus, and IEEE databases. In addition, this research focuses on advanced digital image processing and the development of software and hardware for PA. Convolutional neural networks receive special attention due to their outstanding diagnostic results. Moreover, the studied data will be incorporated with artificial intelligence algorithms for the automatic diagnosis of crop quality. Finally, precision agriculture, with technology applied to the agricultural sector, allows the land to be exploited efficiently. Such a system requires sensors, drones, data acquisition cards, and global positioning systems. This research seeks to merge different areas of science, control engineering, electronics, digital image processing, and artificial intelligence, for the development, in the near future, of a low-cost image measurement system that allows the optimization of crops with AI.

Keywords: precision agriculture, convolutional neural network, deep learning, artificial intelligence

Procedia PDF Downloads 56
2409 Classic Training of a Neural Observer for Estimation Purposes

Authors: R. Loukil, M. Chtourou, T. Damak

Abstract:

This paper investigates the training of a multilayer neural network using the classic approach. Then, for estimation purposes, we suggest the use of a specific neural observer and study its training algorithm, which is back-propagation, both when the state is available and when the state is unmeasurable. A MATLAB simulation example is studied to highlight the usefulness of this kind of observer.

Keywords: training, estimation purposes, neural observer, back-propagation, unmeasurable state

Procedia PDF Downloads 541
2408 Assessment and Optimisation of Building Services Electrical Loads for Off-Grid or Hybrid Operation

Authors: Desmond Young

Abstract:

In building services electrical design, a key element of any project is assessing the electrical load requirements. This needs to be done early in the design process to allow the selection of the infrastructure required to meet the electrical needs of the type of building. The type of building defines the type of assessment made, the values applied in defining the maximum demand for the building, and ultimately the size of supply or infrastructure required and the application that needs to be made to the distribution network operator or, alternatively, to an independent network operator. The fact that this assessment needs to be undertaken early in the design process limits the type of assessment that can be used, as different methods require different types of information, and sometimes this information is not available until the later stages of a project. A common method applied in the earlier design stages of a project, typically during stages 1, 2 and 3, is the use of benchmarks. It is possible that some of the benchmarks applied are excessive in relation to the loads that exist in a modern installation. This lack of accuracy is based on information which does not correspond to the actual equipment loads in use, including lighting and small power loads, where more efficient equipment and lighting have reduced the maximum demand required. The electrical load can also be used to assess the heat generated by the equipment; together with the heat gains from other sources, this feeds into the sizing of the infrastructure required to cool the building. Any overestimation of the loads would therefore increase the design load for the heating and ventilation systems. Finally, with new policies driving the industry to decarbonise buildings, a prime example being the recently introduced London Plan, loads are potentially going to increase. In addition, with the advent of the pandemic, changes to working practices, and the adoption of electric heating and vehicles, a better understanding of the loads that should be applied will help ensure that infrastructure is not oversized, at a cost to the client, or undersized, to the detriment of the building. More accurate benchmarks and methods will also allow assessments to be made for the incorporation of energy storage and renewable technologies as these become more common in new or refurbished buildings.

Keywords: energy, ADMD, electrical load assessment, energy benchmarks

Procedia PDF Downloads 83
2407 Relationship Between Brain Entropy Patterns Estimated by Resting State fMRI and Child Behaviour

Authors: Sonia Boscenco, Zihan Wang, Euclides José de Mendoça Filho, João Paulo Hoppe, Irina Pokhvisneva, Geoffrey B.C. Hall, Michael J. Meaney, Patricia Pelufo Silveira

Abstract:

Entropy can be described as a measure of the number of states of a system, and when used in the context of physiological time-based signals, it serves as a measure of complexity. In functional connectivity data, entropy can account for the moment-to-moment variability that is neglected in traditional functional magnetic resonance imaging (fMRI) analyses. While brain fMRI resting state entropy has been associated with some pathological conditions like schizophrenia, no investigations have explored the association between brain entropy measures and individual differences in child behavior in healthy children. We describe a novel exploratory approach to evaluate brain fMRI resting state data in two child cohorts, MAVAN (N = 54, 4.5 years, 48% males) and GUSTO (N = 206, 4.5 years, 48% males), and its associations with child behavior, which can be used in future research in the context of child exposures and long-term health. Following rs-fMRI data pre-processing and Shannon entropy calculation across 32 network regions of interest to acquire 496 unique functional connections, partial correlation coefficient analysis adjusted for sex was performed to identify associations between entropy data and the Strengths and Difficulties Questionnaire in MAVAN and Child Behavior Checklist domains in GUSTO. Significance was set at p < 0.01, and we found eight significant associations in GUSTO. Negative associations were found between two frontoparietal regions and cerebellar posterior and oppositional defiant problems (r = -0.212, p = 0.006) and (r = -0.200, p = 0.009). Positive associations were identified between somatic complaints and four default mode connections: salience insula (r = 0.202, p < 0.01), dorsal attention intraparietal sulcus (r = 0.231, p = 0.003), language inferior frontal gyrus (r = 0.207, p = 0.008), and language posterior superior temporal gyrus (r = 0.210, p = 0.008). Positive associations were also found between the insula and frontoparietal connection and attention deficit / hyperactivity problems (r = 0.200, p < 0.01), and between the insula and default mode connection and pervasive developmental problems (r = 0.210, p = 0.007). In MAVAN, ten significant associations were identified. Two positive associations were found with prosocial scores: the salience prefrontal cortex and dorsal attention connection (r = 0.474, p = 0.005) and the salience supramarginal gyrus and dorsal attention intraparietal sulcus (r = 0.447, p = 0.008). The insula and prefrontal connection was negatively associated with peer problems (r = -0.437, p < 0.01). Conduct problems were negatively associated with six separate connections, including the left salience insula and right salience insula (r = -0.449, p = 0.008), the left salience insula and right salience supramarginal gyrus (r = -0.512, p = 0.002), the default mode and visual network (r = -0.444, p = 0.009), the dorsal attention and language network (r = -0.490, p = 0.003), and the default mode and posterior parietal cortex (r = -0.546, p = 0.001). Entropy measures of resting state functional connectivity can be used to identify individual differences in brain function that are correlated with variation in behavioral problems in healthy children. Further studies applying this marker in the context of environmental exposures are warranted.
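
A hedged sketch of the analysis pipeline described above: a Shannon entropy value per region pair, followed by a sex-adjusted partial correlation against a behaviour score. The synthetic time series, the binning, and the way a per-connection signal is formed (a co-fluctuation product) are placeholder assumptions; the authors' exact entropy definition is not specified in the abstract.

```python
# Hedged sketch: per-connection Shannon entropy, then sex-adjusted partial correlation
# with a behaviour score. All data below are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_children, n_rois, n_timepoints = 50, 32, 200
ts = rng.standard_normal((n_children, n_rois, n_timepoints))   # placeholder ROI time series
sex = rng.integers(0, 2, n_children).astype(float)
behaviour = rng.standard_normal(n_children)                    # e.g. a CBCL or SDQ score

def shannon_entropy(x, bins=16):
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

# One plausible per-connection signal: the co-fluctuation (element-wise product) of the
# two z-scored ROI time series; 32 ROIs give 496 unique pairs.
z = (ts - ts.mean(axis=2, keepdims=True)) / ts.std(axis=2, keepdims=True)
pairs = [(i, j) for i in range(n_rois) for j in range(i + 1, n_rois)]
entropy = np.array([[shannon_entropy(z[c, i] * z[c, j]) for i, j in pairs]
                    for c in range(n_children)])

def residualise(v, covariate):
    design = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(design, v, rcond=None)
    return v - design @ beta

# Sex-adjusted partial correlation for the first connection, as an example.
r, p = pearsonr(residualise(entropy[:, 0], sex), residualise(behaviour, sex))
print(f"connection {pairs[0]}: partial r = {r:.3f}, p = {p:.3f}")
```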

Keywords: child behaviour, functional connectivity, imaging, Shannon entropy

Procedia PDF Downloads 175
2406 Cognitive Footprints: Analytical and Predictive Paradigm for Digital Learning

Authors: Marina Vicario, Amadeo Argüelles, Pilar Gómez, Carlos Hernández

Abstract:

In this paper, the Computer Research Network of the National Polytechnic Institute of Mexico proposes a paradigmatic model for the inference of cognitive patterns in digital learning systems. This model leads to a metadata architecture useful for analysis and prediction in online learning systems, especially MOOC architectures. The model is in the design phase and is expected to be tested through an institutional courses project that will be developed for the MOOC.

Keywords: cognitive footprints, learning analytics, predictive learning, digital learning, educational computing, educational informatics

Procedia PDF Downloads 453
2405 Thinking about the Loss of Social Networking Sites May Expand the Distress of Social Exclusion

Authors: Wen-Bin Chiou, Hsiao-Chiao Weng

Abstract:

Social networking sites (SNS) such as Facebook and Twitter are low-cost tools that can promote the creation of social connections by providing a convenient platform that can be accessed at any time. In the current research, a laboratory experiment was conducted to test the hypothesis that reminders of losing SNS would alter the impact of social events, especially those involving social exclusion. Specifically, this study explored whether losing SNS would intensify perceived social distress induced by exclusionary bogus feedback. Eighty-eight Facebook users (46 females, 42 males; mean age = 22.6 years, SD = 3.1 years) were recruited via campus posters and flyers at a national university in southern Taiwan. After participants provided consent, they were randomly assigned to one of two conditions (SNS non-use vs. neutral) in a between-subjects design. Participants completed an ostensible survey about online social networking in which we included an item about the time spent on SNS per day. The last question was used to manipulate thoughts about losing SNS access. Participants under the non-use condition were asked to record three conditions that would render them unable to use SNS (e.g., a network adaptor problem, malfunctioning cable modem, or problems with Internet service providers); participants under the neutral condition recorded three conditions that would render them unable to log onto the college website (e.g., server maintenance, local network or firewall problems). Later, this experiment employed a bogus-feedback paradigm to induce social exclusion. Participants then rated their social distress on a four-item scale, identical to that of Experiment 1 (α = .84). The results showed that thoughts of losing SNS intensified distress caused by social exclusion, suggesting that the loss of SNS has a similar effect to the loss of a primary source for social reconnection. Moreover, the priming effects of SNS on perceived distress were more prominent for heavy users. The demonstrated link between the idea of losing SNS use and increased pain of social exclusion shows the importance of SNS as a crucial gateway for acquiring and rebuilding social connections. Use of online social networking appears to be a two-edged sword for coping with social exclusion in the e-society.

Keywords: online social networking, perceived distress, social exclusion, SNS

Procedia PDF Downloads 398
2404 Design of Bidirectional Wavelength Division Multiplexing Passive Optical Network in Optisystem Environment

Authors: Ashiq Hussain, Mahwash Hussain, Zeenat Parveen

Abstract:

Nowadays, the demand for broadband services has increased, and researchers are trying to find solutions that provide a large amount of service. There is a shortage of bandwidth because of the downloading of video, voice, and data. One of the solutions to overcome this shortage of bandwidth is to build the communication system with passive optical components. We have increased the data rate in this system. From experimental results, we conclude that the quality factor increases when passive optical networks are added.
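
For context, the quality factor read from the eye diagram relates to the bit error rate through the usual Gaussian-noise approximation of receiver analysis; this relation is an assumption of standard practice rather than something stated in the abstract:

```latex
% Gaussian-noise approximation linking the eye-diagram Q-factor to the bit error rate;
% for example, Q = 6 corresponds to a BER of roughly 10^{-9}.
\mathrm{BER} \;\approx\; \frac{1}{2}\,\operatorname{erfc}\!\left(\frac{Q}{\sqrt{2}}\right)
```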

Keywords: WDM-PON, optical fiber, BER, Q-factor, eye diagram

Procedia PDF Downloads 480
2403 Deep Q-Network for Navigation in Gazebo Simulator

Authors: Xabier Olaz Moratinos

Abstract:

Drone navigation is critical, particularly during the initial phases, such as the initial ascent, where pilots may fail due to strong external interferences that could potentially lead to a crash. In this ongoing work, a drone has been successfully trained to perform an ascent of up to 6 meters while external disturbances push it at speeds of up to 24 mph, with the DQN algorithm managing the external forces affecting the system. It has been demonstrated that the system can control its height, position, and stability in all three axes (roll, pitch, and yaw) throughout the process. The learning process is carried out in the Gazebo simulator, which emulates the interferences, while ROS is used to communicate with the agent.
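
A minimal sketch of the DQN update at the core of such an agent, written in PyTorch as an assumption. The state and action dimensions, network widths, and the toy transition batch are placeholders; the real agent obtains transitions from Gazebo via ROS rather than from random tensors.

```python
# Hedged sketch of a single DQN update for an ascent task; all sizes are assumptions.
import copy
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 9, 6, 0.99      # e.g. pose + velocities; discretised commands

q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, n_actions))
target_net = copy.deepcopy(q_net)             # frozen copy, refreshed periodically
optimiser = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# One mini-batch of (state, action, reward, next_state, done) from a replay buffer.
s = torch.randn(32, state_dim)
a = torch.randint(0, n_actions, (32, 1))
r = torch.randn(32)
s_next = torch.randn(32, state_dim)
done = torch.zeros(32)

q_sa = q_net(s).gather(1, a).squeeze(1)                       # Q(s, a) of the taken actions
with torch.no_grad():
    td_target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
loss = nn.functional.smooth_l1_loss(q_sa, td_target)
optimiser.zero_grad()
loss.backward()
optimiser.step()
```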

Keywords: machine learning, DQN, Gazebo, navigation

Procedia PDF Downloads 50
2402 Dynamic Network Approach to Air Traffic Management

Authors: Catia S. A. Sima, K. Bousson

Abstract:

Congestion in the Terminal Maneuvering Areas (TMAs) of larger airports impacts all aspects of air traffic flow, not only at the national level, but may also induce arrival delays at the international level. Hence, there is a need to monitor the air traffic flow in TMAs appropriately so that efficient decisions may be taken to manage their occupancy rates. It would be desirable to physically increase the existing airspace to accommodate all existing demands, but this is entirely utopian; consequently, several studies and analyses have been developed over the past decades to meet the challenges that have arisen due to the dizzying expansion of the aeronautical industry. The main objective of the present paper is to propose concepts to manage and reduce the degree of uncertainty in air traffic operations, maximizing the interest of all involved, ensuring a balance between demand and supply, and developing and/or adapting resources that enable a rapid and effective adaptation of measures to the current context and the consequent changes perceived in the aeronautical industry. A central task is to increase air traffic flow management capacity, taking into account not only a wide range of methodologies but also equipment and/or tools already available in the aeronautical industry. The efficient use of these resources is crucial, as the human capacity for work is limited and the actors involved in all processes related to air traffic flow management are increasingly overloaded; as a result, operational safety could be compromised. The methodology used to address the issues listed above is based on the advantages of applying Markov chain principles, which enable the construction of a simplified model of a dynamic network that describes the air traffic flow behavior, anticipating its changes and possible measures that could better address the impact of increased demand. Through this model, the proposed concepts are shown to have the potential to optimize air traffic flow management, combined with the operation of the existing resources at each moment and the circumstances found in each TMA, using historical data from air traffic operations and specificities found in the aeronautical industry, namely in the Portuguese context.
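
To make the Markov-chain idea concrete, the sketch below propagates an assumed occupancy distribution of a TMA through an illustrative transition matrix and extracts its stationary distribution; the states and probabilities are invented for the example, not estimated from real traffic data.

```python
# Hedged sketch: TMA occupancy modelled as a discrete-state Markov chain.
# The occupancy bands and transition probabilities below are illustrative placeholders.
import numpy as np

states = ["low", "medium", "high"]            # occupancy bands of the TMA
P = np.array([[0.80, 0.18, 0.02],             # row i: probabilities of moving to each state
              [0.15, 0.70, 0.15],
              [0.05, 0.30, 0.65]])

# Distribution over occupancy k periods ahead, given that the TMA is currently "medium".
dist = np.array([0.0, 1.0, 0.0])
for k in range(1, 4):
    dist = dist @ P
    print(f"after {k} periods:", dict(zip(states, dist.round(3))))

# Long-run (stationary) occupancy: the eigenvector of P^T with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
stationary = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
print("stationary distribution:", (stationary / stationary.sum()).round(3))
```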

Keywords: air traffic flow, terminal maneuvering area, TMA, air traffic management, ATM, Markov chains

Procedia PDF Downloads 107
2401 Dynamic Communications Mapping in NoC-Based Heterogeneous MPSoCs

Authors: M. K. Benhaoua, A. K. Singh, A. E. H. Benyamina

Abstract:

In this paper, we propose a heuristic for dynamic communications mapping that considers the placement of communications in order to optimize overall performance. The mapping technique uses a newly proposed algorithm to place the communications between tasks. The proposed placement of communications leads to better optimization of several performance metrics (time and energy consumption). Experimental results show that the proposed mapping approach provides significant performance improvements compared to approaches using static routing.

Keywords: Multi-Processor Systems-on-Chip (MPSoCs), Network-on-Chip (NoC), heterogeneous architectures, dynamic mapping heuristics

Procedia PDF Downloads 507
2400 A Survey of Dynamic QoS Methods in Software Defined Networking

Authors: Vikram Kalekar

Abstract:

Modern Internet Protocol (IP) networks deploy traditional and modern Quality of Service (QoS) management methods to ensure the smooth flow of network packets during regular operations. SDN (Software-defined networking) networks have also made headway into better service delivery by means of novel QoS methodologies. While many of these techniques are experimental, some of them have been tested extensively in controlled environments, and few of them have the potential to be deployed widely in the industry. With this survey, we plan to analyze the approaches to QoS and resource allocation in SDN, and we will try to comment on the possible improvements to QoS management in the context of SDN.

Keywords: QoS, policy, congestion, flow management, latency, delay, SDN

Procedia PDF Downloads 167
2399 Basics of Gamma Ray Burst and Its Afterglow

Authors: Swapnil Kumar Singh

Abstract:

Gamma-ray bursts (GRBs), short and intense pulses of low-energy γ rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows, and they are associated with core-collapse supernovae. The detection of delayed emission in X-ray, optical, and radio wavelengths, or "afterglow," following a γ-ray burst can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While it is fair to say that there is strong diversity amongst the afterglow population, probably reflecting diversity in the energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs do appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After an initial flash of gamma rays, a longer-lived "afterglow" is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio). It is a slowly fading emission at longer wavelengths created by collisions between the burst ejecta and interstellar gas. In X-ray wavelengths, the GRB afterglow fades quickly at first, then transitions to a less steep drop-off (later phases follow, which are ignored here). During these early phases, the X-ray afterglow has a spectrum that looks like a power law: flux F ∝ E^β, where E is energy and β is a number called the spectral index. This kind of spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta. In many ways, "reverse" can be misleading; this shock is still moving outward from the rest frame of the star at relativistic velocity, but it is ploughing backward through the ejecta in their frame and is slowing the expansion. This reverse shock can be dynamically important, as it can carry energy comparable to the forward shock. The early phases of the GRB afterglow are still well described even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow cooling regime, and the cooling break lies between the optical and the X-ray. Numerous observations support this broad picture, for example the spectral energy distribution of the afterglow of a very bright GRB: the bluer light (optical and X-ray) appears to follow the typical synchrotron forward-shock expectation (the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). More research in GRB and particle physics is needed in order to unfold the mysteries of the afterglow.
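
The power-law spectrum mentioned above is usually written segment by segment. For reference, the standard slow-cooling synchrotron afterglow shape, in the conventional notation of afterglow theory rather than anything taken from this abstract, is:

```latex
% Slow-cooling synchrotron afterglow: flux density as a broken power law in frequency,
% with p the electron energy index and \nu_m < \nu_c the injection and cooling breaks.
F_{\nu} \propto
\begin{cases}
\nu^{1/3}, & \nu < \nu_m \\
\nu^{-(p-1)/2}, & \nu_m < \nu < \nu_c \\
\nu^{-p/2}, & \nu > \nu_c
\end{cases}
```

The statement that "the cooling break lies between the optical and the X-ray" places the optical band on the middle segment and the X-ray band on the steepest one.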

Keywords: GRB, synchrotron, X-ray, isotropic energy

Procedia PDF Downloads 70
2398 Review of Different Machine Learning Algorithms

Authors: Syed Romat Ali Shah, Bilal Shoaib, Saleem Akhtar, Munib Ahmad, Shahan Sadiqui

Abstract:

Classification is a data mining technique based on Machine Learning (ML) algorithms. It is used to classify individual items in a collection of information into a set of predefined modules or groups. Web mining is also a part of this family of data mining methods. The main purpose of this paper is to analyze and compare the performance of the Naïve Bayes algorithm, Decision Tree, K-Nearest Neighbor (KNN), Artificial Neural Network (ANN), and Support Vector Machine (SVM). The paper covers these ML algorithms with their advantages and disadvantages and also defines research issues.

Keywords: data mining, web mining, classification, ML algorithms

Procedia PDF Downloads 262
2397 Songwriting in the Postdigital Age: Using TikTok and Instagram as Online Informal Learning Technologies

Authors: Matthias Haenisch, Marc Godau, Julia Barreiro, Dominik Maxelon

Abstract:

In times of ubiquitous digitalization and the increasing entanglement of humans and technologies in musical practices in the 21st century, it must be asked how popular musicians learn in the (post)digital age. Against the backdrop of the increasing interest in transferring informal learning practices into formal settings of music education, the interdisciplinary research association »MusCoDA – Musical Communities in the (Post)Digital Age« (University of Erfurt/University of Applied Sciences Clara Hoffbauer Potsdam, funded by the German Ministry of Education and Research) pursues the goal of deriving an empirical model of collective songwriting practices from the study of the informal learning of songwriters and bands that can be translated into pedagogical concepts for music education in schools. Drawing on concepts from Community of Musical Practice and Actor-Network Theory, learning is considered not only as social practice and as participation in online and offline communities, but also as an effect of heterogeneous networks composed of human and non-human actors. Learning is not seen as an individual, cognitive process, but as the formation and transformation of actor networks, i.e., as a practice of assembling and mediating humans and technologies. Based on video-stimulated recall interviews and videography of online and offline activities, songwriting practices are followed from the initial idea to different forms of performance and distribution. The data evaluation combines coding and mapping methods of Grounded Theory Methodology and Situational Analysis. This results in network maps in which both the temporality of creative practices and the material and spatial relations of human and technological actors are reconstructed. In addition, positional analyses document the power relations between the participants that structure the learning process of the field. In the area of online informal learning, initial key research findings reveal a transformation of the learning subject through the specific technological affordances of TikTok and Instagram and the accompanying changes in the learning practices of the corresponding online communities. Learning is explicitly shaped by the material agency of online tools and features and the social practices entangled with these technologies. Thus, any human online community member can be invited to directly intervene in creative decisions that contribute to the further compositional and structural development of songs. At the same time, participants can provide each other with intimate insights into songwriting processes in progress and have the opportunity to perform together with strangers and idols. Online learning is characterized by an increase in social proximity, distribution of creative agency, and informational exchange between participants. While it seems obvious that traditional notions not only of learning but also of the learning subject cannot be maintained, the question arises how exactly the observed informal learning practices, and the subject that emerges from the use of social media as online learning technologies, can be transferred into contexts of formal learning.

Keywords: informal learning, postdigitality, songwriting, actor-network theory, community of musical practice, social media, TikTok, Instagram, apps

Procedia PDF Downloads 103
2396 Behavioral Pattern of 2G Mobile Internet Subscribers: A Study on an Operator of Bangladesh

Authors: Azfar Adib

Abstract:

Like many other countries of the world, Bangladesh has seen mobile internet play a key role in the growth of its internet subscriber base. This study attempts to identify particular behavioral or usage patterns of 2G mobile internet subscribers who were using the service of the topmost internet service provider (as well as the top mobile operator) of Bangladesh prior to the launch of 3G services (when 2G was fully dominant). It contains comprehensive analysis of different information regarding 2G mobile internet subscribers, obtained from the operator's own network insights. This is accompanied by the results of a survey conducted among 40 high-frequency users of this service.

Keywords: mobile internet, Symbian, Android, iPhone

Procedia PDF Downloads 414
2395 Ontology-Based Approach for Temporal Semantic Modeling of Social Networks

Authors: Souâad Boudebza, Omar Nouali, Faiçal Azouaou

Abstract:

Social networks have recently gained growing interest on the web. Traditional formalisms for representing social networks are static and suffer from a lack of semantics. In this paper, we show how semantic web technologies can be used to model social data. The SemTemp ontology aligns and extends existing ontologies such as FOAF, SIOC, SKOS, and OWL-Time to provide a temporal and semantically rich description of social data. We also present a modeling scenario to illustrate how our ontology can be used to model social networks.
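
As a hedged sketch of what a temporal, semantically rich description of a social tie can look like, the snippet below combines FOAF and OWL-Time terms using rdflib. The ex: class and properties are invented placeholders, since the actual SemTemp vocabulary is not given in the abstract.

```python
# Hedged sketch: a time-indexed social tie described with FOAF and OWL-Time via rdflib.
# All ex: terms are invented placeholders, not the real SemTemp ontology.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, FOAF, XSD

TIME = Namespace("http://www.w3.org/2006/time#")
EX = Namespace("http://example.org/semtemp#")

g = Graph()
g.bind("foaf", FOAF); g.bind("time", TIME); g.bind("ex", EX)

g.add((EX.alice, RDF.type, FOAF.Person))
g.add((EX.bob, RDF.type, FOAF.Person))

# Reify the "knows" tie so that it can carry a temporal extent.
g.add((EX.tie1, RDF.type, EX.TemporalRelation))        # placeholder class
g.add((EX.tie1, EX.relationSubject, EX.alice))          # placeholder property
g.add((EX.tie1, EX.relationObject, EX.bob))             # placeholder property
g.add((EX.tie1, EX.relationProperty, FOAF.knows))
g.add((EX.tie1, TIME.hasBeginning, EX.start1))
g.add((EX.start1, RDF.type, TIME.Instant))
g.add((EX.start1, TIME.inXSDDateTime, Literal("2014-06-01T00:00:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```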

Keywords: ontology, semantic web, social network, temporal modeling

Procedia PDF Downloads 353
2394 Localized Variabilities in Traffic-related Air Pollutant Concentrations Revealed Using Compact Sensor Networks

Authors: Eric A. Morris, Xia Liu, Yee Ka Wong, Greg J. Evans, Jeff R. Brook

Abstract:

Air quality monitoring stations tend to be widely distributed and are often located far from major roadways, thus, determining where, when, and which traffic-related air pollutants (TRAPs) have the greatest impact on public health becomes a matter of extrapolation. Compact, multipollutant sensor systems are an effective solution as they enable several TRAPs to be monitored in a geospatially dense network, thus filling in the gaps between conventional monitoring stations. This work describes two applications of one such system named AirSENCE for gathering actionable air quality data relevant to smart city infrastructures. In the first application, four AirSENCE devices were co-located with traffic monitors around the perimeter of a city block in Oshawa, Ontario. This study, which coincided with the COVID-19 outbreak of 2020 and subsequent lockdown measures, demonstrated a direct relationship between decreased traffic volumes and TRAP concentrations. Conversely, road construction was observed to cause elevated TRAP levels while reducing traffic volumes, illustrating that conventional smart city sensors such as traffic counters provide inadequate data for inferring air quality conditions. The second application used two AirSENCE sensors on opposite sides of a major 2-way commuter road in Toronto. Clear correlations of TRAP concentrations with wind direction were observed, which shows that impacted areas are not necessarily static and may exhibit high day-to-day variability in air quality conditions despite consistent traffic volumes. Both of these applications provide compelling evidence favouring the inclusion of air quality sensors in current and future smart city infrastructure planning. Such sensors provide direct measurements that are useful for public health alerting as well as decision-making for projects involving traffic mitigation, heavy construction, and urban renewal efforts.

Keywords: distributed sensor network, continuous ambient air quality monitoring, Smart city sensors, Internet of Things, traffic-related air pollutants

Procedia PDF Downloads 49
2393 Routing Metrics and Protocols for Wireless Mesh Networks

Authors: Samira Kalantary, Zohre Saatzade

Abstract:

Wireless Mesh Networks (WMNs) are low-cost access networks built on cooperative routing over a backbone composed of stationary wireless routers. WMNs must deal with the highly unstable wireless medium; thus, routing metrics and protocols are evolving through the design of algorithms that consider link quality in order to choose the best routes. In this work, we analyse the state of the art in WMN metrics and propose a taxonomy for WMN routing protocols. Performance measurements of a wireless mesh network deployed using various routing metrics are presented and corroborate our analysis.

Keywords: wireless mesh networks, routing protocols, routing metrics, bioinformatics

Procedia PDF Downloads 425
2392 Geographic Information System Cloud for Sustainable Digital Water Management: A Case Study

Authors: Mohamed H. Khalil

Abstract:

Water is one of the most crucial elements influencing human lives and development. Notably, over the last few years, GIS has played a significant role in optimizing water management systems, especially after the exponential development of this sector. In this context, the Egyptian government initiated an advanced 'GIS-Web Based System'. This system is designed to assist and optimize the integration of data between the Call Center, Operation and Maintenance, and Laboratory departments. The core of this system is a unified 'Data Model' for all the spatial and tabular data of the corresponding departments. The system is professionally built to provide advanced functionalities such as interactive data collection, dynamic monitoring, multi-user editing capabilities, enhanced data retrieval, integrated workflow, different access levels, and correlative information record/tracking. Notably, this cost-effective system contributes significantly not only to the completeness of the base map (93%) and the water network (87%) in a highly detailed GIS format and to the performance of customer service, but also to reducing operating costs and day-to-day operations (approximately 5-10%). In addition, the proposed system facilitates data exchange between the different departments (Call Center, Operation and Maintenance, and Laboratory), which allows a better understanding and analysis of complex situations. Furthermore, this system has a tangible effect on: (i) dynamic environmental monitoring of water quality indicators (ammonia, turbidity, TDS, sulfate, iron, pH, etc.), (ii) improved effectiveness of the different water departments, (iii) efficient advanced analysis, (iv) advanced web reporting tools (daily, weekly, monthly, quarterly, and annual), (v) tangible planning synthesizing spatial and tabular data, and, finally, (vi) a scalable decision support system. It is worth highlighting that the proposed future plan (second phase) of this system encompasses scalability that will extend to include integration with the Billing and SCADA departments. This scalability will comprise advanced functionalities, in association with the existing ones, to allow further sustainable contributions.

Keywords: GIS Web-Based, base-map, water network, decision support system

Procedia PDF Downloads 62