Search results for: neural network models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11176

9496 Dynamic Transmission Modes of Network Public Opinion on Subevents Clusters of an Emergent Event

Authors: Yuan Xu, Xun Liang, Meina Zhang

Abstract:

The rise and decay of public opinion about an emergent accident on social networks is closely related to the dynamic development of its cluster of subevents. In this article, we take the subevents of the Tianjin Port explosion as an example to study the dynamic propagation patterns of Internet public opinion after a sudden accident. We analyze the overall structure of the dynamic propagation and propose four different propagation routes for subevent clusters. We also generate network diagrams of the dynamic propagation of public opinion and analyze each propagation type in detail. On this basis, suggestions can be made for the supervision and guidance of Internet public opinion.

Keywords: network dynamic transmission modes, emergent subevents clusters, Tianjin Port explosion, public opinion supervision

Procedia PDF Downloads 297
9495 A POX Controller Module to Prepare a List of Flow Header Information Extracted from SDN Traffic

Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin

Abstract:

Software Defined Networking (SDN) is a paradigm designed to make network control more dynamic and agile. Network traffic is a set of flows, each of which contains a set of packets. In SDN, a matching process is performed in the SDN switch on every packet entering the network, and only the headers of new packets are forwarded to the SDN controller. The flow header fields are called tuples; in the basic case they form a 5-tuple: the source and destination IP addresses, the source and destination ports, and the protocol number. This flow information provides an overview of the network traffic. Our module extracts this 5-tuple together with the packet and flow counts and presents them as a list. This list can serve as a first step toward detecting a DDoS attack, so the module can be considered the initial stage of any flow-based DDoS detection method.
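
A minimal, framework-agnostic sketch of the idea is shown below: it aggregates packet headers into a per-5-tuple table with packet and flow counts. It is not the authors' actual POX module; the class and field names are illustrative only.

```python
from collections import defaultdict

# Illustrative sketch only: a framework-agnostic 5-tuple aggregator in the
# spirit of the module described above (not the authors' POX code).
# Each flow key is (src_ip, dst_ip, src_port, dst_port, protocol).
class FlowTable:
    def __init__(self):
        self.packet_counts = defaultdict(int)   # per-flow packet counter

    def observe(self, src_ip, dst_ip, src_port, dst_port, proto):
        """Register one packet header against its 5-tuple flow."""
        key = (src_ip, dst_ip, src_port, dst_port, proto)
        self.packet_counts[key] += 1

    def as_list(self):
        """Return the flow list with packet counts and the flow total,
        the kind of summary a flow-based DDoS detector could consume."""
        rows = [(*k, n) for k, n in self.packet_counts.items()]
        return {"flows": len(rows), "records": rows}

table = FlowTable()
table.observe("10.0.0.1", "10.0.0.2", 50432, 80, 6)   # TCP
table.observe("10.0.0.1", "10.0.0.2", 50432, 80, 6)
table.observe("10.0.0.3", "10.0.0.2", 514, 514, 17)   # UDP
print(table.as_list())
```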

Keywords: matching, OpenFlow tables, POX controller, SDN, table-miss

Procedia PDF Downloads 199
9494 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental changes, there is a need for quantifying and reducing uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET) and their regional counterparts (i.e., OzFlux, AmeriFlux, China Flux, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys, and remote sensing assessments, there are serious concerns regarding the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps of CO₂ flux, avoiding the limitations of a single algorithm and thereby providing smaller errors and reducing the uncertainties associated with the gap-filling process. In this study, data from five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSE values of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best FFNN). The most significant improvement occurred in the estimation of extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. In addition, the performance difference between the ensemble model and its individual components was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the higher photosynthetic activity of plants, which leads to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, ensemble machine learning models are potentially capable of improving data estimation and regression outcomes when a single algorithm appears to leave no more room for improvement.
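
A minimal sketch of the two-layer stacking idea follows, assuming placeholder data and hyperparameters; the paper's five FFNN structures and tower data are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

# Stand-in data: a few meteorological drivers and a synthetic CO2 flux target.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=1000)

# Layer 1: several feedforward neural networks with different structures.
ffnns = [MLPRegressor(hidden_layer_sizes=h, max_iter=2000, random_state=0)
         for h in [(16,), (32,), (16, 16), (32, 16), (64,)]]
layer1 = np.column_stack([m.fit(X, y).predict(X) for m in ffnns])

# Layer 2: XGBoost takes the FFNN outputs as input and gives the final estimate.
xgb = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
xgb.fit(layer1, y)
y_hat = xgb.predict(layer1)
print("RMSE:", float(np.sqrt(np.mean((y - y_hat) ** 2))))
```

In practice the first-layer predictions fed to the XGB layer would be generated out-of-fold to avoid leakage; the sketch omits this for brevity.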

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 141
9493 Artificial Neurons Based on Memristors for Spiking Neural Networks

Authors: Yan Yu, Wang Yu, Chen Xintong, Liu Yi, Zhang Yanzhong, Wang Yanji, Chen Xingyu, Zhang Miaocheng, Tong Yi

Abstract:

Neuromorphic computing based on spiking neural networks (SNNs) has emerged as a promising avenue for building the next generation of intelligent computing systems. Owing to their high-density integration, low power consumption, and outstanding nonlinearity, memristors have attracted growing attention for implementing SNNs. However, fabricating a low-power and robust memristor-based spiking neuron without extra electrical components is still a challenge for brain-inspired systems. In this work, we demonstrate a TiO₂-based threshold switching (TS) memristor that emulates a leaky integrate-and-fire (LIF) neuron without auxiliary circuits and use it to realize single-layer fully connected (FC) SNNs. Moreover, our TiO₂-based resistive switching (RS) memristors realize spike-timing-dependent plasticity (STDP), originating from an Ag-diffusion-based filamentary mechanism. This work demonstrates that TiO₂-based memristors may provide an efficient route to constructing hardware neuromorphic computing systems.
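
For reference, a conventional software LIF neuron is sketched below; it illustrates the integrate-fire-reset behaviour that the TS memristor emulates in hardware, not the device physics. All parameters are illustrative.

```python
import numpy as np

# Discrete-time leaky integrate-and-fire neuron (reference sketch only).
dt, tau = 1e-3, 20e-3                      # time step and membrane time constant (s)
v_rest, v_th, v_reset = 0.0, 1.0, 0.0      # resting, threshold, and reset potentials
v, spikes = v_rest, []
input_current = np.full(1000, 1.2)         # constant drive above threshold (1 s)

for t, i_in in enumerate(input_current):
    v += dt / tau * (-(v - v_rest) + i_in)  # leaky integration of the input
    if v >= v_th:                           # fire and reset
        spikes.append(t * dt)
        v = v_reset

print(f"{len(spikes)} spikes in 1 s, first at t = {spikes[0]:.3f} s")
```

In the memristive neuron described above, the explicit threshold test and reset are replaced by the threshold-switching and relaxation dynamics of the device itself.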

Keywords: leaky integrate-and-fire, memristor, spiking neural networks, spike-timing-dependent plasticity

Procedia PDF Downloads 135
9492 Minimization of Propagation Delay in Multi Unmanned Aerial Vehicle Network

Authors: Purva Joshi, Rohit Thanki, Omar Hanif

Abstract:

Unmanned aerial vehicles (UAVs) are becoming increasingly important in various industrial applications and sectors. Nowadays, multi-UAV networks are used for specific types of communication (e.g., military) and for monitoring purposes. Therefore, it is critical to reduce the propagation delay during communication between UAVs, which is essential in a multi-UAV network. This paper presents how the propagation delay between the base station (BS) and the UAVs is reduced using a searching algorithm. Furthermore, an iterative k-nearest neighbor (k-NN) algorithm and a Travelling Salesman Problem (TSP) algorithm were utilized to optimize the distance between the BS and individual UAVs to overcome the problem of propagation delay in multi-UAV networks. The simulation results show that the proposed method reduces complexity, improves reliability, and reduces propagation delay in multi-UAV networks.
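
The abstract does not spell out how the iterative k-NN and TSP steps are combined, so the sketch below shows only the simplest instance of the idea: a nearest-neighbour tour starting at the base station over hypothetical UAV coordinates.

```python
import math

# Hedged sketch: nearest-neighbour TSP heuristic from the base station over
# illustrative UAV positions (not the paper's exact algorithm or data).
bs = (0.0, 0.0)
uavs = [(3.0, 4.0), (6.0, 1.0), (1.0, 7.0), (5.0, 5.0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

tour, current, remaining = [bs], bs, set(range(len(uavs)))
while remaining:
    nxt = min(remaining, key=lambda i: dist(current, uavs[i]))  # nearest neighbour
    remaining.remove(nxt)
    current = uavs[nxt]
    tour.append(current)

length = sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
print("visiting order:", tour)
print("total path length:", round(length, 2))
```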

Keywords: multi-UAV network, optimal distance, propagation delay, k-nearest neighbor, traveling salesman problem

Procedia PDF Downloads 206
9491 A QoE-driven Cross-layer Resource Allocation Scheme for High Traffic Service over Open Wireless Network Downlink

Authors: Liya Shan, Qing Liao, Qinyue Hu, Shantao Jiang, Tao Wang

Abstract:

In this paper, a Quality of Experience (QoE)-driven cross-layer resource allocation scheme for high-traffic services over an Open Wireless Network (OWN) downlink is proposed, which addresses all users in the cell, including users in the overlap regions of different cells. A method for calculating the Mean Opinion Score (MOS) value of high-traffic services is introduced, adopting assessment models for the best-effort service and a no-reference assessment algorithm for the video service. The cross-layer architecture jointly considers parameters in the application layer, the media access control layer, and the physical layer. Based on this architecture and the MOS value, the Binary Constrained Particle Swarm Optimization (B_CPSO) algorithm is used to solve the cross-layer resource allocation problem. In addition, simulation results show that the proposed scheme significantly outperforms other schemes in terms of maximizing the average MOS value of users across the whole system as well as maintaining fairness among users.

Keywords: high traffic service, cross-layer resource allocation, QoE, B_CPSO, OWN

Procedia PDF Downloads 542
9490 Epoxomicin Affects Proliferating Neural Progenitor Cells of Rat

Authors: Bahaa Eldin A. Fouda, Khaled N. Yossef, Mohamed Elhosseny, Ahmed Lotfy, Mohamed Salama, Mohamed Sobh

Abstract:

Developmental neurotoxicity (DNT) refers to the toxic effects imparted by various chemicals on the brain during early childhood. As human brains are vulnerable during this period, various chemicals can have their maximum effects on the brain during early childhood. Some toxicants, e.g., lead, have been confirmed to induce developmental toxic effects on the CNS; however, most agents cannot be identified with certainty due to the limitations of the predictive toxicology models used. A novel alternative method that can overcome most of the limitations of conventional techniques is the 3D neurosphere system. This in-vitro system can recapitulate most of the changes that occur during brain development, making it an ideal model for predicting neurotoxic effects. In the present study, we investigated the possible DNT of epoxomicin, a naturally occurring selective proteasome inhibitor with anti-inflammatory activity. Rat neural progenitor cells were isolated from rat embryos (E14) extracted from placental tissue. The cortices were aseptically dissected out from the brains of the fetuses, and the tissues were triturated by repeated passage through a fire-polished constricted Pasteur pipette. The dispersed tissues were allowed to settle for 3 min. The supernatant was then transferred to a fresh tube and centrifuged at 1,000 g for 5 min. The pellet was placed in Hank’s balanced salt solution and cultured as free-floating neurospheres in proliferation medium. Two doses of epoxomicin (1 µM and 10 µM) were applied to cultured neurospheres for a period of 14 days. For proliferation analysis, spheres were cultured in proliferation medium; after 0, 4, 5, 11, and 14 days, sphere size was determined by software analysis, and the diameter of each neurosphere was measured and exported to an Excel file for further statistical analysis. For viability analysis, trypsin-EDTA solution was added to the neurospheres for 3 min to dissociate them into a single-cell suspension, and viability was then evaluated by the Trypan Blue exclusion test. Epoxomicin was found to affect the proliferation and viability of neurospheres, and these effects were positively correlated with dose and the progress of time. This study confirms the DNT effects of epoxomicin in the 3D neurosphere model. The effects on proliferation suggest possible gross morphologic changes, while the decrease in viability suggests possible focal lesions on exposure to epoxomicin during early childhood.

Keywords: neural progenitor cells, epoxomicin, neurosphere, medical and health sciences

Procedia PDF Downloads 429
9489 Dynamical Relation of Poisson Spike Trains in Hodgkin-Huxley Neural Ion Current Model and Formation of Non-Canonical Bases, Islands, and Analog Bases in DNA, mRNA, and RNA at or near the Transcription

Authors: Michael Fundator

Abstract:

A groundbreaking application of biomathematical and biochemical research on neural network processes to the formation of non-canonical bases, islands, and analog bases in DNA and mRNA at or near transcription, which contradicts the long-anticipated statistical assumptions for the distribution of bases and analog base compounds, is implemented through a statistical and stochastic apparatus with the addition of quantum principles. Here, the usual transience of the Poisson spike train becomes a very instrumental tool for finding almost periodic solutions to the Fokker-Planck stochastic differential equation. The present article develops new multidimensional methods for finding solutions to stochastic differential equations based on a more rigorous mathematical approach through the Kolmogorov-Chentsov continuity theorem, which allows stochastic processes with jumps, under certain conditions, to have a γ-Hölder continuous modification; this is used as a basis for finding analogous parallels in the dynamics of neural networks and the formation of analog bases and transcription in DNA.
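
For reference, the standard one-dimensional forms of the two objects named above are as follows. The Fokker-Planck equation for the transition density $p(x,t)$ of a diffusion with drift $\mu$ and diffusion coefficient $\sigma^2$ reads

\[
\frac{\partial p(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[\mu(x,t)\,p(x,t)\bigr]
    + \frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\bigl[\sigma^{2}(x,t)\,p(x,t)\bigr],
\]

and the Kolmogorov-Chentsov criterion states that if a process $X$ satisfies

\[
\mathbb{E}\bigl[\,|X_t - X_s|^{\alpha}\bigr] \le C\,|t-s|^{1+\beta},
\qquad \alpha,\beta,C > 0,
\]

then $X$ admits a modification whose paths are $\gamma$-Hölder continuous for every $\gamma \in (0, \beta/\alpha)$.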

Keywords: Fokker-Planck stochastic differential equation, Kolmogorov-Chentsov continuity theorem, neural networks, translation and transcription

Procedia PDF Downloads 408
9488 Monitoring Memories by Using Brain Imaging

Authors: Deniz Erçelen, Özlem Selcuk Bozkurt

Abstract:

Daily human life calls for memories and for remembering the time and place of certain events, and recalling memories takes up a substantial amount of an individual's time. Unfortunately, scientists lack the proper technology to fully understand and observe the different brain regions that interact to form or retrieve memories. The hippocampus, a complex brain structure located in the temporal lobe, plays a crucial role in memory. The hippocampus forms memories and allows the brain to retrieve them by ensuring that neurons fire together, a process called “neural synchronization.” Sadly, the hippocampus is known to deteriorate with age. Proteins and hormones, which repair and protect cells in the brain, typically decline as an individual ages. With the deterioration of the hippocampus, an individual becomes more prone to memory loss. Memory loss often starts off mild but may evolve into serious medical conditions such as dementia and Alzheimer’s disease. In their quest to fully comprehend how memories work, scientists have created many different kinds of technology to examine the brain and its neural pathways. For instance, Magnetic Resonance Imaging - or MRI - is used to collect detailed images of an individual's brain anatomy. In order to monitor and analyze brain function, a different version of this machine, called Functional Magnetic Resonance Imaging - or fMRI - is used. The fMRI is a neuroimaging procedure that is conducted while the target brain regions are active. It measures brain activity by detecting changes in blood flow associated with neural activity. Neurons need more oxygen when they are active, and the fMRI measures the difference in magnetization between oxygen-rich and oxygen-poor blood. This way, there is a detectable difference across brain regions, and scientists can monitor them. Electroencephalography - or EEG - is also a significant way to monitor the human brain. The EEG is more versatile and cost-efficient than an fMRI. An EEG measures the electrical activity generated by the numerous cortical layers of the brain. EEG allows scientists to record brain processes that occur after external stimuli. EEGs have a very high temporal resolution, which makes it possible to measure synchronized neural activity and almost precisely track the contents of short-term memory. Science has come a long way in monitoring memories using these kinds of devices, which have made the inspection of neurons and neural pathways more intensive and detailed.

Keywords: brain, EEG, fMRI, hippocampus, memories, neural pathways, neurons

Procedia PDF Downloads 89
9487 Reconstructability Analysis for Landslide Prediction

Authors: David Percy

Abstract:

Landslides are a geologic phenomenon that affects a large number of inhabited places and are constantly being monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data; it works exclusively with discrete data. While RA has been used extensively in medical applications and the social sciences, we are introducing it to the spatial sciences through applications such as landslide prediction. Since RA works exclusively with discrete data, such as soil classification or bedrock type, working with continuous data, such as porosity, requires that these data be binned for inclusion in the model. RA constructs models of the data which pick out the most informative elements, the independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as a primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot-encoding schemes, RA works directly with the data as it is encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields similar accuracy but with the advantage of a completely transparent model. The result of an RA session with a data set is a report on every combination of variables and their probability of landslide events occurring. In this way, every informative combination of variable states can be examined.
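
The sketch below illustrates the two preprocessing and reporting steps described above on synthetic stand-in data: binning one continuous layer and tabulating the empirical landslide probability for every combination of discrete IV states. It is not the authors' RA software; the layer names and labels are illustrative.

```python
import numpy as np
import pandas as pd

# Illustrative sketch only: bin a continuous layer (porosity) and tabulate
# landslide probability per combination of IV states (stand-in data).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "soil":      rng.choice(["clay", "silt", "sand"], size=500),   # discrete IV
    "porosity":  rng.uniform(0.1, 0.6, size=500),                  # continuous IV
    "landslide": rng.integers(0, 2, size=500),                     # DV (stand-in labels)
})
df["porosity_bin"] = pd.cut(df["porosity"], bins=3, labels=["low", "mid", "high"])

# Report: probability of landslide occurrence for each IV state combination.
report = (df.groupby(["soil", "porosity_bin"], observed=True)["landslide"]
            .agg(p_landslide="mean", n="size")
            .reset_index())
print(report)
```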

Keywords: reconstructability analysis, machine learning, landslides, raster analysis

Procedia PDF Downloads 69
9486 Stability Analysis of Endemic State of Modelling the Effect of Vaccination and Novel Quarantine-Adjusted Incidence on the Spread of Newcastle Disease Virus

Authors: Nurudeen Oluwasola Lasisi, Abdulkareem Afolabi Ibrahim

Abstract:

Newcastle disease is an infection of domestic poultry and other bird species with virulent Newcastle disease virus (NDV). In this paper, we study the dynamics of a model of Newcastle disease virus (NDV) transmission using a novel quarantine-adjusted incidence. We compare vaccination, a linear incidence rate, and the novel quarantine-adjusted incidence rate in the models. The dynamics of the models yield disease-free and endemic equilibrium states. The effective reproduction numbers of the models are computed in order to measure the relative impact of individual or combined interventions for effective disease control. We show the local and global stability of the endemic equilibrium states of the models, and we find that the endemic equilibrium states are globally asymptotically stable if the effective reproduction numbers of the model equations are greater than unity.
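
The abstract does not reproduce the model equations; as a generic illustration of the threshold result it describes, consider a standard SIR-type model in which a fraction p of the bird population is vaccinated, with transmission rate β, recovery rate γ, and natural death rate μ. The effective reproduction number then takes the form

\[
\mathcal{R}_{e} \;=\; (1-p)\,\frac{\beta}{\gamma + \mu},
\]

and the usual result is that the disease-free equilibrium is globally asymptotically stable when $\mathcal{R}_{e} \le 1$, while a unique endemic equilibrium exists and is globally asymptotically stable when $\mathcal{R}_{e} > 1$, mirroring the threshold behaviour reported for the NDV models above.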

Keywords: effective reproduction number, endemic state, mathematical model, Newcastle disease virus, novel quarantine-adjusted incidence, stability analysis

Procedia PDF Downloads 245
9485 Reservoir Fluids: Occurrence, Classification, and Modeling

Authors: Ahmed El-Banbi

Abstract:

Several PVT models exist to represent how PVT properties are handled in sub-surface and surface engineering calculations for oil and gas production. The most commonly used models include black oil, modified black oil (MBO), and compositional models. These models are used in calculations that allow engineers to optimize and forecast well and reservoir performance (e.g., reservoir simulation calculations, material balance, nodal analysis, surface facilities, etc.). The choice of model depends on the fluid type and the production process (e.g., depletion, water injection, gas injection, etc.). Based on close to 2,000 reservoir fluid samples collected from different basins and locations, this paper presents some conclusions on the occurrence of reservoir fluids. It also reviews the common methods used to classify reservoir fluid types. Based on new criteria related to the production behavior of different fluids and on economic considerations, an updated classification of reservoir fluid types is presented in the paper. Recommendations on the use of different PVT models to simulate the behavior of different reservoir fluid types are discussed. The requirements of each PVT model are highlighted. Available methods for the calculation of PVT properties from each model are also discussed. Practical recommendations and tips on how to control the calculations to achieve the most accurate results are given.

Keywords: PVT models, fluid types, PVT properties, fluids classification

Procedia PDF Downloads 74
9484 Modeling Curriculum for High School Students to Learn about Electric Circuits

Authors: Meng-Fei Cheng, Wei-Lun Chen, Han-Chang Ma, Chi-Che Tsai

Abstract:

The recent K–12 Taiwan Science Education Curriculum Guidelines emphasize the essential role of modeling curricula in science learning; however, few modeling curricula have been designed and adopted in current science teaching. Therefore, this study aims to develop a modeling curriculum on electric circuits, to investigate any learning difficulties students have with the modeling curriculum, and to further enhance modeling teaching. This study was conducted with 44 10th-grade students in Central Taiwan. Data collection included a students’ understanding of models in science (SUMS) survey that explored the students' epistemology of scientific models and modeling and a complex circuit problem to investigate the students’ modeling abilities. Data analysis included the following: (1) Paired sample t-tests were used to examine the improvement of students’ modeling abilities and conceptual understanding before and after the curriculum was taught. (2) Paired sample t-tests were also utilized to determine the students’ modeling abilities before and after the modeling activities, and a Pearson correlation was used to understand the relationship between students’ modeling abilities during the activities and on the posttest. (3) ANOVA analysis was used during different stages of the modeling curriculum to investigate the differences between the students who developed microscopic models and those who developed macroscopic models after the modeling curriculum was taught. (4) Independent sample t-tests were employed to determine whether the students who changed their models had significantly different understandings of scientific models than the students who did not change their models. The results revealed the following: (1) After the modeling curriculum was taught, the students had made significant progress in both their understanding of the science concepts and their modeling abilities. In terms of science concepts, this modeling curriculum helped the students overcome the misconception that electric currents reduce after flowing through light bulbs. In terms of modeling abilities, this modeling curriculum helped students employ macroscopic or microscopic models to explain their observed phenomena. (2) Encouraging the students to explain scientific phenomena under different context prompts during the modeling process allowed them to convert their models to microscopic models, but it did not help them continuously employ microscopic models throughout the whole curriculum. The students finally employed microscopic models consistently when they had help visualizing the microscopic models. (3) During the modeling process, the students who revised their own models understood better that models can be changed than the students who did not revise their own models. Also, the students who revised their models to explain different scientific phenomena tended to regard models as explanatory tools. In short, this study explored different strategies to facilitate students’ modeling processes as well as their difficulties with the modeling process. The findings can be used to design and teach modeling curricula and help students enhance their modeling abilities.

Keywords: electric circuits, modeling curriculum, science learning, scientific model

Procedia PDF Downloads 461
9483 A Structuring and Classification Method for Assigning Application Areas to Suitable Digital Factory Models

Authors: R. Hellmuth

Abstract:

The method of factory planning has changed a lot, especially when it comes to planning the factory building itself. Factory planning has the task of designing products, plants, processes, organization, areas, and the building of a factory. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of product and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity and Ambiguity) lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and becomes an indispensable tool. Furthermore, digital building models are increasingly being used in factories to support facility management and manufacturing processes. The main research question of this paper is, therefore: What kind of digital factory model is suitable for the different areas of application during the operation of a factory? First, different types of digital factory models are investigated, and their properties and usability for different use cases are analysed. The investigation covers point cloud models, building information models, and photogrammetry models, as well as versions of these enriched with sensor data. It is investigated which digital models allow simple integration of sensor data and where the differences lie. Subsequently, possible application areas of digital factory models are determined by means of a survey, and the respective digital factory models are assigned to the application areas. Finally, an application case from maintenance is selected and implemented with the help of the appropriate digital factory model. It is shown how a completely digitalized maintenance process can be supported by a digital factory model by providing information. Among other purposes, the digital factory model is used for indoor navigation, information provision, and the display of sensor data. In summary, the paper presents a structuring of digital factory models that concentrates on the geometric representation of a factory building and its technical facilities. A practical application case is shown and implemented. Thus, the systematic selection of digital factory models with the corresponding application cases is evaluated.

Keywords: building information modeling, digital factory model, factory planning, maintenance

Procedia PDF Downloads 111
9482 Mediation Models in Triadic Relationships: Illness Narratives and Medical Education

Authors: Yoko Yamada, Chizumi Yamada

Abstract:

Narrative psychology is based on the dialogical relationship between self and other. The dialogue can consist of divided, competitive, or opposite communication between self and other. We constructed models of coexistent dialogue in which self and other were positioned side by side and communicated sympathetically. We propose new mediation models for narrative relationships. The mediation models are based on triadic relationships that incorporate a medium or a mediator along with self and other. We constructed three types of mediation model. In the first type, called the “Joint Attention Model”, self and other are positioned side by side and share attention with the medium. In the second type, the “Triangle Model”, an agent mediates between self and other. In the third type, the “Caring Model”, a caregiver stands beside the communication between self and other. We apply the three models to the illness narratives of medical professionals and patients. As these groups have different views and experiences of disease or illness, triadic mediation facilitates the ability to see things from the other person’s perspective and to bridge differences in people’s experiences and feelings. These models would be useful for medical education in various situations, such as in considering the relationships between senior and junior doctors and between old and young patients.

Keywords: illness narrative, mediation, psychology, model, medical education

Procedia PDF Downloads 410
9481 Evaluating the Perception of Roma in Europe through Social Network Analysis

Authors: Giulia I. Pintea

Abstract:

The Roma people are a nomadic ethnic group native to India, and they are one of the most prevalent minorities in Europe. In the past, Roma were enslaved and they were imprisoned in concentration camps during the Holocaust; today, Roma are subject to hate crimes and are denied access to healthcare, education, and proper housing. The aim of this project is to analyze how the public perception of the Roma people may be influenced by antiziganist and pro-Roma institutions in Europe. In order to carry out this project, we used social network analysis to build two large social networks: The antiziganist network, which is composed of institutions that oppress and racialize Roma, and the pro-Roma network, which is composed of institutions that advocate for and protect Roma rights. Measures of centrality, density, and modularity were obtained to determine which of the two social networks is exerting the greatest influence on the public’s perception of Roma in European societies. Furthermore, data on hate crimes on Roma were gathered from the Organization for Security and Cooperation in Europe (OSCE). We analyzed the trends in hate crimes on Roma for several European countries for 2009-2015 in order to see whether or not there have been changes in the public’s perception of Roma, thus helping us evaluate which of the two social networks has been more influential. Overall, the results suggest that there is a greater and faster exchange of information in the pro-Roma network. However, when taking the hate crimes into account, the impact of the pro-Roma institutions is ambiguous, due to differing patterns among European countries, suggesting that the impact of the pro-Roma network is inconsistent. Despite antiziganist institutions having a slower flow of information, the hate crime patterns also suggest that the antiziganist network has a higher impact on certain countries, which may be due to institutions outside the political sphere boosting the spread of antiziganist ideas and information to the European public.

Keywords: applied mathematics, oppression, Roma people, social network analysis

Procedia PDF Downloads 278
9480 Development of a Forecast-Supported Approach for the Continuous Pre-Planning of Mandatory Transportation Capacity for the Design of Sustainable Transport Chains: A Literature Review

Authors: Georg Brunnthaller, Sandra Stein, Wilfried Sihn

Abstract:

Transportation service providers are facing increasing volatility concerning future transport demand. Short-term planning horizons and planning uncertainties lead to reduced capacity utilization and increasing empty mileage. To overcome these challenges, a model is proposed to continuously pre-plan future transportation capacity in order to redesign and adjust the intermodal fleet accordingly. It is expected that the model will enable logistics service providers to organize more economically and ecologically sustainable transport chains in a more flexible way. To further describe these planning aspects, this paper gives an overview of transportation planning problems in a structured way. The focus is on the strategic and tactical planning levels, comprising the relevant fleet-sizing, service-network-design, and choice-of-carrier problems. Models and their developed solution techniques are presented, and the literature review is concluded with an outlook on our future research directions.

Keywords: freight transportation planning, multimodal, fleet-sizing, service network design, choice of transportation mode, review

Procedia PDF Downloads 318
9479 Design and Study of a Parabolic Trough Solar Collector for Generating Electricity

Authors: A. A. A. Aboalnour, Ahmed M. Amasaib, Mohammed-Almujtaba A. Mohammed-Farah, Abdelhakam, A. Noreldien

Abstract:

This paper presents the design and study of a Parabolic Trough Solar Collector (PTC). Mathematical models were used in this work to find the hourly direct and reflected solar radiation from the air layer on the surface of the earth, based on the total daily solar radiation on a horizontal surface. Mathematical models were also used to calculate the radiation on tilted surfaces. Most of the inputs used in this project serve as preliminary data required in several solar energy applications, thermal simulations, and solar power systems. In addition, mathematical models were used to study the flow of the fluid inside the tube (receiver) and to study the effect of direct and reflected solar radiation on the pressure, temperature, speed, kinetic energy, and forces of the fluid inside the tube. Finally, mathematical models were used to study the performance of the PTC and estimate its thermal efficiency.
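
The abstract does not state which tilted-surface formulation was used; a standard isotropic-sky expression of the kind typically applied is

\[
G_{T} \;=\; G_{b}\,R_{b} \;+\; G_{d}\,\frac{1+\cos\beta}{2} \;+\; G\,\rho_{g}\,\frac{1-\cos\beta}{2},
\qquad
R_{b} = \frac{\cos\theta}{\cos\theta_{z}},
\]

where $G_{b}$ and $G_{d}$ are the beam and diffuse components of the horizontal irradiance $G$, $\beta$ is the collector tilt, $\rho_{g}$ is the ground reflectance, $\theta$ is the incidence angle on the tilted plane, and $\theta_{z}$ is the solar zenith angle.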

Keywords: CFD, experimental, mathematical models, parabolic trough, radiation

Procedia PDF Downloads 424
9478 The Nature and the Structure of Scientific and Innovative Collaboration Networks

Authors: Afshin Moazami, Andrea Schiffauerova

Abstract:

The objective of this work is to investigate the development and the role of collaboration networks in the creation of knowledge and innovations in the US and Canada, with a special focus on Quebec. In order to create scientific networks, the data on journal articles were extracted from SCOPUS, and the networks were built based on the co-authorship of the journal papers. For innovation networks, the USPTO database was used, and the networks were built on the patent co-inventorship. Various indicators characterizing the evolution of the network structure and the positions of the researchers and inventors in the networks were calculated. The comparison between the United States, Canada, and Quebec was then carried out. The preliminary results show that the nature of scientific collaboration networks differs from the one seen in innovation networks. Scientists work in bigger teams and are mostly interconnected within one giant network component, whereas the innovation network is much more clustered and fragmented, the inventors work more repetitively with the same partners, often in smaller isolated groups. In both Canada and the US, an increasing tendency towards collaboration was observed, and it was found that networks are getting bigger and more centralized with time. Moreover, a declining share of knowledge transfers per scientist was detected, suggesting an increasing specialization of science. The US collaboration networks tend to be more centralized than the Canadian ones. Quebec shares a lot of features with the Canadian network, but some differences were observed, for example, Quebec inventors rely more on the knowledge transmission through intermediaries.

Keywords: Canada, collaboration, innovation network, scientific network, Quebec, United States

Procedia PDF Downloads 203
9477 Energy Balance Routing to Enhance Network Performance in Wireless Sensor Network

Authors: G. Baraneedaran, Deepak Singh, Kollipara Tejesh

Abstract:

Wireless sensor networks have been an active research area over the years. Due to the limited energy and communication ability of sensor nodes, it is especially important to design a routing protocol for WSNs so that sensing data can be transmitted to the receiver effectively. An energy-balanced routing method based on a forward-aware factor (FAF-EBRM) is proposed in this paper. In FAF-EBRM, the next-hop node is selected according to the awareness of link weight and forward energy density. A spontaneous reconstruction mechanism for the local topology is designed additionally. In the experiments, FAF-EBRM is compared with LEACH and EECU; the experimental results show that FAF-EBRM outperforms LEACH and EECU, balancing the energy consumption, prolonging the network lifetime, and guaranteeing high QoS of the WSN.
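
A hedged sketch of the next-hop selection step is given below. The exact FAF-EBRM scoring formula is not given in the abstract, so the score here simply combines a link weight with the forward energy density of each candidate; the node names, values, and weighting are illustrative only.

```python
# Illustrative forward-aware next-hop selection (not the paper's exact formula).
candidates = [
    {"id": "n1", "link_weight": 0.8, "forward_energy_density": 0.55},
    {"id": "n2", "link_weight": 0.6, "forward_energy_density": 0.90},
    {"id": "n3", "link_weight": 0.9, "forward_energy_density": 0.30},
]

def faf_score(node, alpha=0.5):
    """Weighted awareness of link quality and of the energy remaining ahead."""
    return alpha * node["link_weight"] + (1 - alpha) * node["forward_energy_density"]

next_hop = max(candidates, key=faf_score)
print("selected next hop:", next_hop["id"])
```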

Keywords: energy balance, forward-aware factor (FAF), forward energy density, link weight, network performance

Procedia PDF Downloads 540
9476 A Taxonomy of Routing Protocols in Wireless Sensor Networks

Authors: A. Kardi, R. Zagrouba, M. Alqahtani

Abstract:

The Internet of Everything (IoE) presents today a very attractive and motivating field of research. It is basically based on Wireless Sensor Networks (WSNs), in which the routing task is the major analysis topic. In fact, routing directly affects the effectiveness and the lifetime of the network. This paper, developed from recent works and based on extensive research, proposes a taxonomy of routing protocols in WSNs. Our main contribution is a classification model based on nine classes, namely application type, delivery mode, initiator of communication, network architecture, path establishment (route discovery), network topology (structure), protocol operation, next-hop selection, and latency-aware and energy-efficient routing. In order to provide a complete classification pattern to serve as a reference for network designers, each class is subdivided into possible subclasses, presented, and discussed using different parameters such as purposes and characteristics.

Keywords: routing, sensor, survey, wireless sensor networks, WSNs

Procedia PDF Downloads 184
9475 Imputing Missing Data in Electronic Health Records: A Comparison of Linear and Non-Linear Imputation Models

Authors: Alireza Vafaei Sadr, Vida Abedi, Jiang Li, Ramin Zand

Abstract:

Missing data is a common challenge in medical research and can lead to biased or incomplete results. When data bias leaks into models, it further exacerbates health disparities; biased algorithms can lead to misclassification and reduced resource allocation and monitoring as part of prevention strategies for certain minorities and vulnerable segments of patient populations, which in turn further reduces the data footprint from the same populations – thus, a vicious cycle. This study compares the performance of six imputation techniques, grouped into linear and non-linear models, on two different real-world electronic health record (EHR) datasets representing 17,864 patient records. The mean absolute percentage error (MAPE) and root mean squared error (RMSE) are used as performance metrics, and the results show that the linear models outperformed the non-linear models in terms of both metrics. These results suggest that linear models can sometimes be the optimal choice for imputing laboratory variables in terms of imputation efficiency and the uncertainty of predicted values.
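
A minimal sketch of this evaluation protocol follows, assuming synthetic stand-in data in place of the EHR laboratory variables: hide a random subset of known values, impute with one linear and one non-linear imputer, and score RMSE and MAPE on the hidden entries. Imputer choices and settings are illustrative, not the six techniques compared in the paper.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

rng = np.random.default_rng(0)
X_true = rng.normal(loc=5.0, scale=1.0, size=(2000, 6))   # stand-in lab values
mask = rng.random(X_true.shape) < 0.15                    # hide 15% of entries
X_missing = X_true.copy()
X_missing[mask] = np.nan

imputers = {
    "linear (BayesianRidge)": IterativeImputer(estimator=BayesianRidge(), random_state=0),
    "non-linear (RandomForest)": IterativeImputer(
        estimator=RandomForestRegressor(n_estimators=50, random_state=0), random_state=0),
}
for name, imp in imputers.items():
    X_hat = imp.fit_transform(X_missing)
    rmse = float(np.sqrt(mean_squared_error(X_true[mask], X_hat[mask])))
    mape = mean_absolute_percentage_error(X_true[mask], X_hat[mask])
    print(f"{name}: RMSE={rmse:.3f}  MAPE={mape:.3%}")
```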

Keywords: EHR, machine learning, imputation, laboratory variables, algorithmic bias

Procedia PDF Downloads 86
9474 Cyber Security Enhancement via Software Defined Pseudo-Random Private IP Address Hopping

Authors: Andre Slonopas, Zona Kostic, Warren Thompson

Abstract:

Obfuscation is one of the most useful tools to prevent network compromise. Previous research focused on the obfuscation of the network communications between external-facing edge devices. This work proposes the use of two edge devices, external and internal facing, which communicate via private IPv4 addresses in a software-defined, pseudo-random IP hopping scheme. This methodology does not require additional IP addresses and/or resources to implement. Statistical analyses demonstrate that the hopping surface must be at least 1e3 IP addresses in size, with a broad standard deviation, to minimize the possibility of coincidence between monitored and communication IPs. Breaking the hopping algorithm requires a collection of at least 1e6 samples, which for large hopping surfaces would take years to collect. The probability of dropped packets is controlled via memory buffers and the frequency of hops and can be reduced to levels acceptable for video streaming. This methodology provides an impenetrable layer of security ideal for information and supervisory control and data acquisition systems.
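
The sketch below illustrates the core idea under stated assumptions: both edge devices derive the same pseudo-random hop sequence over a private address block from a shared seed and a time epoch, so no extra addresses need to be provisioned. The block size, seed, and epoch handling are illustrative, not the authors' implementation.

```python
import ipaddress
import random

# Shared hop surface: a private /22 gives roughly 1e3 usable addresses.
POOL = list(ipaddress.ip_network("10.20.0.0/22").hosts())
SHARED_SEED = "pre-shared-secret"          # hypothetical shared secret

def hop_sequence(seed, epoch, count=5):
    """Pseudo-random private IPs for the next `count` hop intervals;
    both endpoints compute the identical list from (seed, epoch)."""
    prng = random.Random(f"{seed}:{epoch}")
    return [str(prng.choice(POOL)) for _ in range(count)]

# In practice both endpoints would use a synchronized epoch, e.g.
# epoch = int(time.time() // hop_interval_seconds).
print(hop_sequence(SHARED_SEED, epoch=171234))
```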

Keywords: moving target defense, cybersecurity, network security, hopping randomization, software defined network, network security theory

Procedia PDF Downloads 191
9473 Improvement of Process Competitiveness Using Intelligent Reference Models

Authors: Julio Macedo

Abstract:

Several methodologies are now available to conceive the improvements of a process so that it becomes competitive, for example total quality management, process reengineering, Six Sigma, and the define-measure-analyze-improve-control (DMAIC) method. These improvements are of different natures and can be external to the process, which is represented by an optimization model or a discrete simulation model. In addition, the process stakeholders are numerous and have different desired performances for the process. Hence, the methodologies above do not provide a tool to aid in the conception of the required improvements. In order to fill this void, we suggest the use of intelligent reference models. A reference model is a set of qualitative differential equations and an objective function that minimizes the gap between the current and the desired performance indexes of the process. The reference models are intelligent, so when they receive the current state of the problematic process and the desired performance indexes, they generate the required improvements for the problematic process. The reference models are fuzzy cognitive maps augmented with an objective function and trained using the improvements implemented by high-performance firms. Experiments conducted with a group of students show that the reference models allow them to conceive more improvements than students who do not use these models.

Keywords: continuous improvement, fuzzy cognitive maps, process competitiveness, qualitative simulation, system dynamics

Procedia PDF Downloads 89
9472 Evaluation of Security and Performance of Master Node Protocol in the Bitcoin Peer-To-Peer Network

Authors: Muntadher Sallal, Gareth Owenson, Mo Adda, Safa Shubbar

Abstract:

Bitcoin is a digital currency based on a peer-to-peer network to propagate and verify transactions. Bitcoin is gaining wider adoption than any previous crypto-currency. However, the mechanism by which peers randomly choose logical neighbors without any knowledge of the underlying physical topology can cause a delay overhead in information propagation, which makes the system vulnerable to double-spend attacks. Aiming to alleviate the propagation delay problem, this paper introduces proximity-aware extensions to the current Bitcoin protocol, named Master Node Based Clustering (MNBC). The ultimate purpose of the proposed protocol, which is based on how clusters are formed and how nodes define their membership, is to improve the information propagation delay in the Bitcoin network. In the MNBC protocol, physical internet connectivity increases and the number of hops between nodes decreases by assigning nodes to be responsible for maintaining clusters based on physical internet proximity. We show, through simulations, that the proposed protocol defines better clustering structures that optimize the performance of transaction propagation over the Bitcoin protocol. The evaluation of partition attacks in the MNBC protocol, as well as in the Bitcoin network, was also carried out in this paper. The evaluation results prove that, even though the Bitcoin network is more resistant to the partitioning attack than the MNBC protocol, more resources need to be spent to split the network in the MNBC protocol, especially with a higher number of nodes.

Keywords: Bitcoin network, propagation delay, clustering, scalability

Procedia PDF Downloads 117
9471 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning

Authors: Saahith M. S., Sivakami R.

Abstract:

In the realm of football analytics, particularly focusing on predicting football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces an elaborate examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, conducting data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis within this study will involve delving into feature significance using methodologies like Select Best and Recursive Feature Elimination (RFE) to pinpoint pertinent attributes for predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), will be explored to develop predictive models. The evaluation of each model's performance utilizing metrics such as Mean Squared Error (MSE) and R-squared will be executed to gauge their efficacy in predicting player performance. Furthermore, this investigation will encompass a top player analysis to recognize the top-performing players based on the anticipated overall performance scores. Nationality analysis will entail scrutinizing the player distribution based on nationality and investigating potential correlations between nationality and player performance. Positional analysis will concentrate on examining the player distribution across various positions and assessing the average performance of players in each position. Age analysis will evaluate the influence of age on player performance and identify any discernible trends or patterns associated with player age groups. The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to effectively analyze and forecast player performance. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.
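
A minimal sketch of one variant of this pipeline is shown below: RFE-based feature selection followed by a random forest regressor, scored with MSE and R². The player attributes are placeholders; feature names, model settings, and the synthetic target are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(40, 99, size=(1500, 6))                    # e.g. pace, shooting, passing, ...
y = 0.3 * X[:, 0] + 0.4 * X[:, 2] + 0.2 * X[:, 4] + rng.normal(0, 3, 1500)  # overall rating

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Recursive feature elimination, then a random forest on the retained features.
selector = RFE(LinearRegression(), n_features_to_select=4).fit(X_tr, y_tr)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(selector.transform(X_tr), y_tr)

pred = model.predict(selector.transform(X_te))
print("MSE:", round(mean_squared_error(y_te, pred), 2),
      "R²:", round(r2_score(y_te, pred), 3))
```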

Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis

Procedia PDF Downloads 40
9470 Component-Based Approach in Assessing Sewer Manholes

Authors: Khalid Kaddoura, Tarek Zayed

Abstract:

Sewer networks are constructed to protect communities and the environment from any contact with the sewer medium. Pipelines, whether laterals or sewer mains, together with manholes, form a huge underground infrastructure in every urban city. Due to the importance of sewer networks, the infrastructure asset management field has seen extensive advancement in condition assessment and rehabilitation decision models. However, most of the focus has been devoted to pipelines, giving little attention to manhole condition assessment. In fact, studies have only recently started to emerge in this area in order to preserve manholes from any malfunction. Therefore, the main objective of this study is to propose a condition assessment model for sewer manholes. The model divides the manhole into several components and determines the relative importance weight of each component using the Analytic Network Process (ANP) decision-making method. The condition of the manhole is then computed by aggregating the condition of each component with its corresponding weight. Accordingly, the proposed assessment model will enable decision-makers to obtain a final index suggesting the overall condition of the manhole and a backward analysis to check the condition of each component. Consequently, better decisions can be made regarding maintenance, rehabilitation, and replacement actions.
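
The final aggregation step can be sketched as a simple weighted sum, as below. The component names, condition scores, and weights are illustrative; in the actual model the weights come from the ANP pairwise comparisons.

```python
# Hedged sketch of the aggregation step: component condition scores
# (e.g., 1 = excellent ... 5 = failed) weighted by ANP-derived importance.
weights = {"cover": 0.10, "frame": 0.12, "chimney": 0.18,
           "wall": 0.30, "bench": 0.15, "channel": 0.15}   # illustrative weights
condition = {"cover": 2, "frame": 3, "chimney": 4,
             "wall": 2, "bench": 3, "channel": 5}           # illustrative scores

assert abs(sum(weights.values()) - 1.0) < 1e-9              # weights must sum to one
manhole_index = sum(weights[c] * condition[c] for c in weights)
print("overall manhole condition index:", round(manhole_index, 2))
```

A backward analysis then simply inspects the individual component scores that drove the final index.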

Keywords: Analytic Network Process (ANP), condition assessment, decision-making, manholes

Procedia PDF Downloads 358
9469 Prediction of PM₂.₅ Concentration in Ulaanbaatar with Deep Learning Models

Authors: Suriya

Abstract:

Rapid socio-economic development and urbanization have led to an increasingly serious air pollution problem in Ulaanbaatar (UB), the capital of Mongolia. PM₂.₅ pollution has become the most pressing aspect of UB air pollution. Therefore, monitoring and predicting PM₂.₅ concentrations in UB is of great significance for the health of the local people and for environmental management. As yet, very few studies have used models to predict PM₂.₅ concentrations in UB. Using data from 0:00 on June 1, 2018, to 23:00 on April 30, 2020, we propose two deep learning models based on Bayesian-optimized LSTM (Bayes-LSTM) and CNN-LSTM. We utilized hourly observed data, including Himawari-8 (H8) aerosol optical depth (AOD), meteorology, and PM₂.₅ concentration, as input for the prediction of PM₂.₅ concentrations. The correlation strengths between meteorology, AOD, and PM₂.₅ were analyzed using the gray correlation analysis method; the performance improvement obtained by including AOD as an input was tested, and the performance of the models was evaluated using the mean absolute error (MAE) and root mean square error (RMSE). The prediction accuracies of the Bayes-LSTM and CNN-LSTM deep learning models were both improved when AOD was included as an input parameter. The improvement in the prediction accuracy of the CNN-LSTM model was particularly pronounced in the non-heating season; in the heating season, the prediction accuracy of the Bayes-LSTM model slightly improved, while that of the CNN-LSTM model slightly decreased. We propose two novel deep learning models for PM₂.₅ concentration prediction in UB, Bayes-LSTM and CNN-LSTM, pioneering the use of AOD data from H8 and demonstrating that including AOD input data improves the performance of both proposed models.
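
A compact Keras sketch of a CNN-LSTM of the kind described above is shown below, operating on hourly windows of AOD, meteorology, and PM₂.₅ features. The layer sizes, window length, feature count, and random data are illustrative, not the paper's architecture or dataset.

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

n_samples, window, n_features = 512, 24, 8          # 24 hourly steps of AOD/meteorology/PM2.5
X = np.random.rand(n_samples, window, n_features).astype("float32")
y = np.random.rand(n_samples, 1).astype("float32")  # next-hour PM2.5 (stand-in)

model = Sequential([
    Conv1D(32, kernel_size=3, activation="relu", input_shape=(window, n_features)),
    MaxPooling1D(pool_size=2),   # local temporal features extracted by the CNN part
    LSTM(32),                    # longer-range temporal dependencies
    Dense(1),                    # predicted PM2.5 concentration
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0).ravel())
```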

Keywords: deep learning, AOD, PM₂.₅, prediction, Ulaanbaatar

Procedia PDF Downloads 49
9468 Statistical Analysis for Overdispersed Medical Count Data

Authors: Y. N. Phang, E. F. Loh

Abstract:

Many researchers have suggested the use of zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models in modeling over-dispersed medical count data with extra variation caused by excess zeros and unobserved heterogeneity. Studies indicate that ZIP and ZINB always provide a better fit than the ordinary Poisson and negative binomial models in modeling over-dispersed medical count data. In this study, we propose the use of zero-inflated inverse trinomial (ZIIT), zero-inflated Poisson inverse Gaussian (ZIPIG), and zero-inflated strict arcsine (ZISA) models in modeling over-dispersed medical count data. These proposed models are not widely used by many researchers, especially in the medical field. The results show that these three suggested models can serve as alternative models in modeling over-dispersed medical count data. This is supported by the application of these suggested models to a real-life medical data set. The inverse trinomial, Poisson inverse Gaussian, and strict arcsine are discrete distributions with a cubic variance function of the mean. Therefore, ZIIT, ZIPIG, and ZISA are able to accommodate data with excess zeros and very heavy tails. They are recommended for modeling over-dispersed medical count data when ZIP and ZINB are inadequate.
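
For reference, the zero-inflated Poisson probability mass function that motivates all of these zero-inflated variants is

\[
P(Y = y) =
\begin{cases}
\pi + (1-\pi)\,e^{-\lambda}, & y = 0,\\[4pt]
(1-\pi)\,\dfrac{e^{-\lambda}\lambda^{y}}{y!}, & y = 1, 2, \dots,
\end{cases}
\]

where $\pi$ is the probability of a structural zero and $\lambda$ is the Poisson mean; ZINB, ZIIT, ZIPIG, and ZISA replace the Poisson kernel with heavier-tailed count distributions while keeping the same zero-inflation mixture.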

Keywords: zero inflated, inverse trinomial distribution, Poisson inverse Gaussian distribution, strict arcsine distribution, Pearson’s goodness of fit

Procedia PDF Downloads 548
9467 The Strengths and Limitations of the Statistical Modeling of Complex Social Phenomenon: Focusing on SEM, Path Analysis, or Multiple Regression Models

Authors: Jihye Jeon

Abstract:

This paper analyzes the conceptual frameworks of three statistical methods: multiple regression, path analysis, and structural equation models. When establishing a research model for the statistical modeling of a complex social phenomenon, it is important to know the strengths and limitations of these three statistical models. This study explores the character, strengths, and limitations of each modeling approach and suggests some strategies for accurately explaining or predicting the causal relationships among variables. In particular, common mistakes in research modeling are discussed with respect to studies of depression and mental health.

Keywords: multiple regression, path analysis, structural equation models, statistical modeling, social and psychological phenomenon

Procedia PDF Downloads 659