Search results for: analytical network design model
27238 Tracing Back the Bot Master
Authors: Sneha Leslie
Abstract:
The current situation in the cyber world is that crimes performed by botnets are increasing and the masterminds (botmasters) are not easily detectable. The botmaster in a botnet compromises legitimate host machines in the network and makes them bots, or zombies, which initiate the cyber-attacks. This paper focuses on live detection of the botmaster in the network using the Metasploit framework when a distributed denial of service (DDoS) attack is performed by the botnet. The affected victim machine continuously monitors its incoming packets. Once the victim machine detects an excessive count of packets from any IP, that IP is noted and details of the noted systems are gathered. Using the vulnerabilities present in the zombie machines (already compromised by the botmaster), the victim machine compromises them in turn. By gaining access to the compromised systems, applications are run remotely. By analyzing the incoming packets of the zombies, the victim learns the address of the botmaster. This is an effective and simple system in which no specific features of the communication protocol are considered.
Keywords: botnet, DDoS attack, network security, detection system, metasploit framework
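The per-IP packet-count monitoring described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the window representation (a list of source-IP strings), and the threshold value are all assumptions.

```python
from collections import Counter

def flag_flooding_sources(packets, threshold):
    """Count packets per source IP over a monitoring window and flag
    every IP whose count exceeds the per-window threshold."""
    counts = Counter(packets)
    return {ip for ip, n in counts.items() if n > threshold}

# Toy window: one host sends far more packets than the others,
# mimicking a zombie participating in a DDoS flood.
window = ["10.0.0.2"] * 500 + ["10.0.0.3"] * 12 + ["10.0.0.4"] * 9
suspects = flag_flooding_sources(window, threshold=100)
```

In the paper's workflow the flagged IPs would then be probed for vulnerabilities; that step is outside the scope of this sketch.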
Procedia PDF Downloads 254
27237 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach
Authors: Jianli Jiang, Bai-Chen Xie
Abstract:
The climate issue has aroused global concern. Achieving sustainable development is a good path for countries to mitigate environmental and climatic pressures, although there are many difficulties. The first step towards sustainable development is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO2, SO2, and NOx emissions, so evaluating the environmental efficiency (EE) of power systems is a premise for alleviating the pressures on energy and the environment. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. Classic DEA treats the decision-making units (DMUs) as independent, which neglects the interaction between DMUs. Ignoring these inter-regional links may result in a systematic bias in the efficiency analysis; for instance, the renewable power generated in a certain region may benefit adjacent regions, while its SO2 and CO2 emissions act oppositely. This study proposes a spatial network DEA (SNDEA) with a slack measure that can capture the spatial spillover effects of inputs/outputs among DMUs. This approach is used to study the EE of China's power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO2 output, and the renewable energy (RE) power variables are tested for a significant spatial spillover effect. Compared with the classic network DEA, the SNDEA result shows an obvious difference, as tested by the global Moran's I index. From a dynamic perspective, the EE of the power system experiences a visible surge from 2015, then a sharp downtrend from 2019, following the same trend as the power transmission department.
This phenomenon benefits from the market-oriented reform of the Chinese power grid enacted in 2015. The rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the Covid-19 epidemic, which seriously hindered economic development. While the EE of the power generation department shows a declining trend overall, this is reasonable once RE power is taken into consideration. The installed capacity of RE power in 2020 was 4.40 times that of 2014, while power generation was 3.97 times; in other words, the power generation per unit of installed capacity shrank. In addition, the consumption cost of renewable power increases rapidly with the growth of RE power generation. These two aspects make the EE of the power generation department decline. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method within the DEA framework that sheds some light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields, such as industry- and country-level studies.
Keywords: spatial network DEA, environmental efficiency, sustainable development, power system
Procedia PDF Downloads 109
27236 Trend Detection Using Community Rank and Hawkes Process
Authors: Shashank Bhatnagar, W. Wilfred Godfrey
Abstract:
In this paper, we develop an approach to find trendy topics which considers not only user-topic interaction but also the community to which the user belongs. This method modifies the previous approach of user-topic interaction to user-community-topic interaction, with a speed-up in the range of [1.1-3]. We assume that trend detection in a social network depends on two things: first, the broadcast of messages in the social network, governed by a self-exciting point process called the Hawkes process; and second, community rank. The influencer node links to others in the community and determines the community rank based on its PageRank and the number of users linked to that community. The community rank decides the influence of one community over another. Hence, the Hawkes process with a user-community-topic kernel determines the trendy topic disseminated through the social network.
Keywords: community detection, community rank, Hawkes process, influencer node, pagerank, trend detection
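The self-exciting intensity underlying a Hawkes process can be sketched as below. The exponential kernel, the parameter values, and the simple multiplicative coupling with a community-rank weight are illustrative assumptions; the paper's actual user-community-topic kernel is not reproduced here.

```python
import math

def hawkes_intensity(t, event_times, mu, alpha, beta):
    """Self-exciting (Hawkes) intensity with an exponential kernel:
    lambda(t) = mu + sum over past events of alpha * exp(-beta * (t - t_i)).
    Each past broadcast raises the rate of future broadcasts, decaying in time."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in event_times if ti < t)

def topic_score(t, event_times, community_rank, mu=0.1, alpha=0.8, beta=1.0):
    """Weight the broadcast intensity by the rank of the community in which
    the topic spreads (a simple multiplicative coupling, assumed here)."""
    return community_rank * hawkes_intensity(t, event_times, mu, alpha, beta)

# A recently active topic scores higher than a stale one in the same community.
recent = topic_score(10.0, [9.0, 9.5, 9.8], community_rank=0.7)
stale = topic_score(10.0, [1.0, 2.0, 3.0], community_rank=0.7)
```

The decay parameter `beta` controls how quickly old broadcasts stop contributing to trendiness.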
Procedia PDF Downloads 384
27235 Lean Impact Analysis Assessment Models: Development of a Lean Measurement Structural Model
Authors: Catherine Maware, Olufemi Adetunji
Abstract:
This paper aims at developing a model to measure the impact of Lean manufacturing deployment on organizational performance. The model will help industry practitioners assess the impact of implementing Lean constructs on organizational performance. It will also harmonize the measurement models of Lean performance with the house of Lean, which seems to have become the industry standard. The sheer number of measurement models for impact assessment of Lean implementation makes it difficult for new adopters to select an appropriate assessment model or deployment methodology. A literature review is conducted to classify the Lean performance models. Pareto analysis is used to select the Lean constructs for the development of the model. The model is further formalized through the use of Structural Equation Modeling (SEM) in defining the underlying latent structure of a Lean system. The resulting impact assessment measurement model can be used to measure Lean performance and can be adopted by different industries.
Keywords: impact measurement model, lean bundles, lean manufacturing, organizational performance
Procedia PDF Downloads 485
27234 Prediction of Unsteady Heat Transfer over Square Cylinder in the Presence of Nanofluid by Using ANN
Authors: Ajoy Kumar Das, Prasenjit Dey
Abstract:
Heat transfer due to forced convection of a copper-water-based nanofluid has been predicted by an Artificial Neural Network (ANN). The nanofluid is formed by mixing copper nanoparticles in water; the volume fractions considered here range from 0% to 15%, and the Reynolds number is kept constant at 100. The back-propagation algorithm is used to train the network. The ANN is trained on input and output data obtained from numerical simulations performed in the finite-volume-based Computational Fluid Dynamics (CFD) commercial software Ansys Fluent. The numerical simulation results are compared with the back-propagation-based ANN results. It is found that the forced convection heat transfer of the water-based nanofluid can be predicted correctly by the ANN. It is also observed that the back-propagation ANN can predict the heat transfer characteristics of the nanofluid very quickly compared to the standard CFD method.
Keywords: forced convection, square cylinder, nanofluid, neural network
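As a rough illustration of back-propagation training of the kind the abstract relies on, the sketch below fits a one-hidden-layer tanh network to a toy scalar mapping with full-batch gradient descent. The architecture, initial weights, learning rate, and toy data are all illustrative assumptions; the paper's actual network and CFD-derived training data are not reproduced.

```python
import math

def forward(x, w1, b1, w2, b2):
    """One hidden tanh layer, linear output."""
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(len(w1))]
    return h, b2 + sum(w2[j] * h[j] for j in range(len(w1)))

def train(data, hidden=3, lr=0.1, epochs=500):
    """Full-batch gradient descent (back propagation) minimising
    0.5 * (out - y)^2, averaged over the training set."""
    w1 = [0.1 * (j + 1) for j in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [0.1] * hidden
    b2 = 0.0
    for _ in range(epochs):
        gw1 = [0.0] * hidden; gb1 = [0.0] * hidden
        gw2 = [0.0] * hidden; gb2 = 0.0
        for x, y in data:
            h, out = forward(x, w1, b1, w2, b2)
            e = out - y                         # dLoss/dout
            gb2 += e
            for j in range(hidden):
                gw2[j] += e * h[j]
                dh = e * w2[j] * (1 - h[j] ** 2)  # tanh' = 1 - tanh^2
                gw1[j] += dh * x
                gb1[j] += dh
        n = len(data)
        b2 -= lr * gb2 / n
        for j in range(hidden):
            w2[j] -= lr * gw2[j] / n
            w1[j] -= lr * gw1[j] / n
            b1[j] -= lr * gb1[j] / n
    return w1, b1, w2, b2

def mse(data, params):
    """Mean squared error of the network over a dataset."""
    return sum((forward(x, *params)[1] - y) ** 2 for x, y in data) / len(data)
```

Once trained, evaluating `forward` is far cheaper than a CFD run, which is the speed advantage the abstract reports.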
Procedia PDF Downloads 321
27233 Design an Assessment Model of Research and Development Capabilities with the New Product Development Approach: A Case Study of Iran Khodro Company
Authors: Hamid Hanifi, Adel Azar, Alireza Booshehri
Abstract:
To know the capability level of R&D units in the automotive industry, it is essential that organizations continuously compare themselves with a standard level and with those above themselves, so as to improve. In this research, given the importance of this issue, we present an assessment model for R&D capabilities based on a review of new product development in the automotive industry of Iran. Iran Khodro Company was selected for the case study. To this purpose, first, through a literature review, about 200 indicators affecting R&D capabilities and new product development were extracted. Of these, 29 indicators judged more important were selected by industry and academic experts, and the questionnaire was distributed among the statistical population, which consisted of 410 individuals in Iran Khodro Company. We used the 410 questionnaires for exploratory factor analysis and then randomly used the data of 308 questionnaires from the same population for confirmatory factor analysis. The exploratory factor analysis led to a categorization into 9 secondary dimensions, named according to the literature review and the professors' opinions. Using structural equation modeling and AMOS software, confirmatory factor analysis was conducted and the final model with 9 secondary dimensions was confirmed. The 9 secondary dimensions of this research are: 1) research and design capability, 2) customer and market capability, 3) technology capability, 4) financial resources capability, 5) organizational chart, 6) intellectual capital capability, 7) NPD process capability, 8) managerial capability, and 9) strategy capability.
Keywords: research and development, new products development, structural equations, exploratory factor analysis, confirmatory factor analysis
Procedia PDF Downloads 339
27232 Would Intra-Individual Variability in Attention to Be the Indicator of Impending the Senior Adults at Risk of Cognitive Decline: Evidence from Attention Network Test (ANT)
Authors: Hanna Lu, Sandra S. M. Chan, Linda C. W. Lam
Abstract:
Objectives: Intra-individual variability (IIV) has been considered a biomarker of healthy ageing. However, the composite role of IIV in attention as an impending indicator for neurocognitive disorders warrants further exploration. This study aims to investigate IIV and its relationship with attention network functions in adults with neurocognitive disorders (NCD). Methods: 36 adults with NCD due to Alzheimer’s disease (NCD-AD), 31 adults with NCD due to vascular disease (NCD-vascular), and 137 healthy controls were recruited. Intra-individual standard deviations (iSD) and the intra-individual coefficient of variation of reaction time (ICV-RT) were used to evaluate IIV. Results: NCD groups showed greater IIV (iSD: F = 11.803, p < 0.001; ICV-RT: F = 9.07, p < 0.001). In ROC analyses, the indices of IIV could differentiate NCD-AD (iSD: AUC = 0.687, p = 0.001; ICV-RT: AUC = 0.677, p = 0.001) and NCD-vascular (iSD: AUC = 0.631, p = 0.023; ICV-RT: AUC = 0.615, p = 0.045) from healthy controls. Moreover, processing speed could distinguish NCD-AD from NCD-vascular (AUC = 0.647, p = 0.040). Discussion: Intra-individual variability in attention provides a stable measure of cognitive performance and seems to help distinguish senior adults with different cognitive status.
Keywords: intra-individual variability, attention network, neurocognitive disorders, ageing
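The two IIV indices named above are straightforward to compute from a participant's trial-level reaction times: iSD is the within-person standard deviation, and ICV-RT normalises it by the mean RT. The sketch below assumes trial RTs in milliseconds and the sample (n-1) standard deviation; the study's exact computation details are not specified in the abstract.

```python
import statistics

def iiv_metrics(reaction_times_ms):
    """Return (iSD, ICV-RT) for one participant's reaction times.
    iSD = sample standard deviation of RTs;
    ICV-RT = iSD / mean RT (coefficient of variation)."""
    mean_rt = statistics.mean(reaction_times_ms)
    isd = statistics.stdev(reaction_times_ms)
    return isd, isd / mean_rt

# Two hypothetical responders with the same mean RT (500 ms):
# the variable one yields larger iSD and ICV-RT.
steady = [500, 510, 490, 505, 495]
variable = [400, 620, 450, 580, 450]
```

Dividing by the mean is what lets ICV-RT compare variability across people with different overall speeds.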
Procedia PDF Downloads 475
27231 Characterization and Correlation of Neurodegeneration and Biological Markers of Model Mice with Traumatic Brain Injury and Alzheimer's Disease
Authors: J. DeBoard, R. Dietrich, J. Hughes, K. Yurko, G. Harms
Abstract:
Alzheimer’s disease (AD) is a predominant type of dementia and is likely a major cause of neural network impairment. The pathogenesis of this neurodegenerative disorder has yet to be fully elucidated. There are currently no known cures for the disease, and the best hope is to detect it early enough to impede its progress. Beyond age and genetics, another prevalent risk factor for AD might be traumatic brain injury (TBI), which has similar neurodegenerative hallmarks. Our research focuses on obtaining information and methods to predict when neurodegenerative effects might occur at a clinical level, by observing events at the cellular and molecular level in model mice. First, we introduce our evidence that brain damage can be observed via brain imaging prior to the noticeable loss of neuromuscular control in mouse models of AD. We then show our evidence that some blood biomarkers might be early predictors of AD in the same model mice. Thus, we were interested to see whether we might be able to predict which mice would show long-term neurodegenerative effects due to differing degrees of TBI, and what level of TBI causes further damage and earlier death in the AD model mice. Upon application of TBIs via an apparatus designed to induce extremely mild to mild TBIs, wild-type (WT) mice and AD mouse models were tested for cognition, neuromuscular control, olfactory ability, blood biomarkers, and brain imaging. Experiments are still in process, and more results are forthcoming. Preliminary data suggest that neuromotor control as well as olfactory function diminishes for both AD and WT mice after the administration of five consecutive mild TBIs. Also, seizure activity increases significantly for both AD and WT mice after the five-TBI treatment.
If future data support these findings, important implications about the effect of TBI on those at risk for AD might be drawn.
Keywords: Alzheimer's disease, blood biomarker, neurodegeneration, neuromuscular control, olfaction, traumatic brain injury
Procedia PDF Downloads 141
27230 Design of Collection and Transportation System of Municipal Solid Waste in Meshkinshahr City
Authors: Ebrahim Fataei, Seyed Ali Hosseini, Zahra Arabi, Habib Farhadi, Mehdi Aalipour Erdi, Seiied Taghi Seiied Safavian
Abstract:
Solid waste production is an integral part of human life, and management of waste requires a fully scientific approach and essential planning. Most management cost is allocated to collection and transportation, so operational efficiency in this system, achieved by limiting time consumption, together with an optimal collection and transportation scheme, is the basis of waste system design and management. This study was conducted to optimize the existing collection and transportation system for solid waste in Meshkinshahr city. Based on analyzed data on municipal solid waste components in seven zones of Meshkinshahr city, GIS software was applied to design storage places, based on recycling at the origin, and routes for collection and transport. The aim was to present an appropriate model to store, collect, and transport municipal solid waste. The result shows that GIS can be applied to locate waste containers and determine waste collection routes in an appropriate way.
Keywords: municipal solid waste management, transportation, optimizing, GIS, Iran
Procedia PDF Downloads 534
27229 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality
Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo
Abstract:
Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance and giving them a scientific interpretation, without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by technology, that pay attention to model creation rather than only executing mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision making, a model eliciting activity (MEA) is reported in this work. The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable to students while involving them in its solution. Furthermore, the activity should pose a decision-making challenge based on sample data and should involve the use of the computer. The activity was designed considering the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out over three school cycles in the Probability and Statistics class for civil engineering students at the University of Guadalajara. The analysis of how the students sought to solve the activity was made using audio and video recordings, as well as the students' individual and team reports. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making).
From the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, the resistance of students to move from the linear to the probabilistic model; second, the difficulty of inferring the behavior of the population from the sample data; third, viewing the sample as an isolated event rather than as part of a random process that must be seen in the context of a probability distribution; and fourth, the difficulty of making decisions with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and that they do not change after a single intervention, allows the interventions and the MEA to be modified so that students may themselves identify erroneous solutions while carrying out the MEA. The MEA also proved to be democratic, since several students who had had little participation and low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, such as plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the civil engineering students improved their probabilistic knowledge and their understanding of fundamental concepts such as sample, population, and probability distribution.
Keywords: linear model, models and modeling, probability, randomness, sample
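A binomial-distribution decision of the kind the MEA targets can be sketched as follows (the abstract mentions RStudio; Python is used here for consistency with the other sketches in this listing). The defect rate, sample size, observed count, and significance level are all illustrative assumptions, not the activity's actual data.

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), by summing the upper tail."""
    return sum(binom_pmf(i, n, p) for i in range(k, n + 1))

# Decision sketch: if a process truly has a 5% defect rate, how
# surprising is finding 4 or more defects in a sample of 20?
p_value = prob_at_least(4, 20, 0.05)
reject = p_value < 0.05     # illustrative significance level
```

The key conceptual move, matching the obstacles listed above, is reading the sample count against the whole binomial distribution rather than treating it as an isolated event.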
Procedia PDF Downloads 118
27228 A Neurosymbolic Learning Method for Uplink LTE-A Channel Estimation
Authors: Lassaad Smirani
Abstract:
In this paper, we propose a Neurosymbolic Learning System (NLS) as a channel estimator for the Long Term Evolution Advanced (LTE-A) uplink. The main idea of the proposed system, based on a neural network, is to have modules capable of bidirectional information transfer between a symbolic module and a connectionist module. We demonstrate various strengths of the NLS: the ability to integrate theoretical knowledge (rules) and experiential knowledge (examples); to convert an initial knowledge base (rules) into a connectionist network; to use empirical knowledge which, through learning, can revise the theoretical knowledge, acquire new knowledge, and explain it; and finally to improve the performance of symbolic or connectionist systems. Compared with conventional SC-FDMA channel estimation systems, the performance of the NLS in terms of complexity and quality is confirmed by theoretical analysis and simulation, which show that this system can improve channel estimation accuracy and decrease the bit error rate.
Keywords: channel estimation, SC-FDMA, neural network, hybrid system, BER, LTE-A
Procedia PDF Downloads 394
27227 Stress Analysis of a Pressurizer in a Pressurized Water Reactor Using Finite Element Method
Authors: Tanvir Hasan, Minhaz Uddin, Anwar Sadat Anik
Abstract:
A pressurizer is a safety-related reactor component that maintains the reactor operating pressure to guarantee safety. Its structure is usually made of highly thermal- and pressure-resistant material. The mechanical integrity of these components should be maintained in all working settings, from transient to severe accident conditions. The goal of this study is to examine the structural integrity and stress of the pressurizer in order to ensure its design integrity under transient situations. To this end, the finite element method (FEM) was used to analyze the mechanical stress on pressurizer components. The ANSYS Mechanical tool was used to analyze a 3D model of the pressurizer. The material for the body and safety relief nozzle is the low-alloy steel SA-508 Gr.3 Cl.2. The model was imported into ANSYS Workbench and run under the following boundary conditions: internal pressure 17.2 MPa, inside radius 1348 mm, shell thickness 127 mm, and ratio of outside radius to inside radius 1.059. A theoretical calculation was done using the standard formulas, and the results were compared with the simulated results. When simulated at design conditions, the findings revealed that the pressurizer stress analysis fully satisfied the ASME standards.
Keywords: pressurizer, stress analysis, finite element method, nuclear reactor
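The theoretical check the abstract mentions is presumably a thick-walled cylinder calculation; a standard candidate is the Lamé solution for maximum hoop stress at the inner wall. The sketch below uses the pressure, inner radius, and thickness quoted above (note the quoted radius ratio of 1.059 is not exactly consistent with ri + t, so ri and t are used directly); the paper's actual formulas and ASME stress limits are not reproduced.

```python
def lame_hoop_stress(p_i, r_i, t):
    """Maximum hoop stress at the inner wall of a thick cylinder under
    internal pressure (Lame solution):
    sigma_theta = p * (ro^2 + ri^2) / (ro^2 - ri^2)."""
    r_o = r_i + t
    return p_i * (r_o ** 2 + r_i ** 2) / (r_o ** 2 - r_i ** 2)

# Values quoted in the abstract: 17.2 MPa, ri = 1348 mm, t = 127 mm.
sigma = lame_hoop_stress(17.2, 1348.0, 127.0)    # MPa
thin_wall = 17.2 * 1348.0 / 127.0                # p*r/t thin-wall estimate
```

The thick-wall value exceeds the thin-wall estimate, as expected, since the Lamé solution concentrates stress at the inner surface.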
Procedia PDF Downloads 158
27226 Downtime Modelling for the Post-Earthquake Building Assessment Phase
Authors: S. Khakurel, R. P. Dhakal, T. Z. Yeow
Abstract:
Downtime is one of the major sources (alongside damage and injury/death) of financial loss incurred by a structure in an earthquake. The length of downtime associated with a building after an earthquake varies depending on the time taken for the reaction (to the earthquake), decision (on the future course of action), and execution (of the decided course of action) phases. Post-earthquake assessment of buildings is a key step in the decision-making process, both to assign the appropriate safety placard and to decide whether a damaged building is to be repaired or demolished. The aim of the present study is to develop a model to quantify the downtime associated with the post-earthquake building-assessment phase in terms of two parameters: i) the duration of each assessment phase; and ii) the probability of each colour tag. Post-earthquake assessment of buildings includes three stages: a Level 1 Rapid Assessment, comprising a fast external inspection shortly after the earthquake; a Level 2 Rapid Assessment, including a visit inside the building; and a Detailed Engineering Evaluation (if needed). In this study, the durations of all three assessment phases are first estimated from the total number of damaged buildings, the total number of available engineers, and the average time needed to assess each building. Then, the probability of different tag colours is computed from the 2010-11 Canterbury Earthquake Sequence database. Finally, a downtime model for the post-earthquake building inspection phase is proposed based on the estimated phase lengths and the probabilities of tag colours. This model is expected to be used for rapid estimation of seismic downtime within the Loss Optimisation Seismic Design (LOSD) framework.
Keywords: assessment, downtime, LOSD, Loss Optimisation Seismic Design, phase length, tag colour
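The phase-duration estimate described above (buildings, engineers, time per building) reduces to a simple capacity calculation. The sketch below is an assumed form, not the paper's model: the two-engineer team size and eight-hour working day are illustrative parameters.

```python
import math

def assessment_phase_days(n_buildings, n_engineers, hours_per_building,
                          team_size=2, hours_per_day=8):
    """Estimate one assessment phase's duration in whole days from the
    number of damaged buildings, engineers available, and average
    inspection time per building (team size and day length assumed)."""
    teams = n_engineers // team_size
    buildings_per_team_day = hours_per_day / hours_per_building
    return math.ceil(n_buildings / (teams * buildings_per_team_day))

# Illustrative Level 1 scenario: 10,000 tagged buildings, 200 engineers,
# half an hour per external inspection.
days = assessment_phase_days(10_000, 200, 0.5)
```

Level 2 and Detailed Engineering Evaluation phases would use the same relation with longer per-building times and fewer buildings.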
Procedia PDF Downloads 185
27225 Applying Multiplicative Weight Update to Skin Cancer Classifiers
Authors: Animish Jain
Abstract:
This study deals with using multiplicative weight update within artificial intelligence and machine learning to create models that can diagnose skin cancer using microscopic images of cancer samples. The multiplicative weight update method is used to combine the predictions of multiple models in an attempt to acquire more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the multiplicative weight update system. These models are trained on pictures of skin cancer from the ISIC Archive to look for patterns and label unseen scans as either benign or malignant. The models are utilized in a multiplicative weight update algorithm which takes into account the precision and accuracy of each model on each successive guess to weight its vote; these weighted guesses are then combined to try to obtain the correct prediction. The research hypothesis stated that there would be a significant difference in accuracy between the three models and the multiplicative weight update system. The SVMC model had an accuracy of 77.88%, the CNN model 85.30%, and the Logistic Regression model 79.09%; the multiplicative weight update algorithm achieved an accuracy of 72.27%. The conclusion drawn was that there is indeed a significant difference, and that a CNN model would be a better option for this problem than the multiplicative weight update system. This may be because multiplicative weight update is less effective in a binary setting with only two possible classifications.
In a categorical setting with multiple classes and groupings, a multiplicative weight update system might become more proficient, as it takes into account the strengths of multiple different models to classify images into many categories rather than only two. This experimentation and computer science project can help create better algorithms and models for the future of artificial intelligence in the medical imaging field.
Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer
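The classic multiplicative-weight-update scheme with a weighted-majority vote can be sketched as follows. This is the textbook algorithm, not the study's exact implementation; the penalty factor `eta` and the toy expert votes are assumptions.

```python
def mwu_predict(weights, votes):
    """Weighted-majority vote over binary predictions
    (0 = benign, 1 = malignant)."""
    score = sum(w for w, v in zip(weights, votes) if v == 1)
    return 1 if score >= sum(weights) / 2 else 0

def mwu_update(weights, votes, truth, eta=0.5):
    """Multiplicative weight update: shrink the weight of every
    expert (model) that guessed wrong by a factor (1 - eta)."""
    return [w * (1 - eta) if v != truth else w
            for w, v in zip(weights, votes)]

# Three experts; the first is always right, the second always wrong.
weights = [1.0, 1.0, 1.0]
rounds = [((1, 0, 0), 1), ((0, 1, 1), 0), ((1, 0, 1), 1)]
for votes, truth in rounds:
    weights = mwu_update(weights, votes, truth)
```

After a few rounds the reliable expert dominates the vote, which is the mechanism by which MWU is supposed to combine classifiers of unequal quality.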
Procedia PDF Downloads 79
27224 Flexible Design of Triboelectric Nanogenerators for Efficient Vibration Energy Harvesting
Authors: Meriam Khelifa
Abstract:
In recent years, many studies have focused on harvesting vibration energy to produce electrical energy using contact-separation (CS) triboelectric nanogenerators (TENGs). The simplest design for a TENG consists of a capacitor comprising a single moving electrode. The conversion efficiency of vibration energy into electrical energy can, in principle, reach 100%. But to actually achieve this objective, it is necessary to optimize the parameters of the TENG, such as the dielectric constant and thickness of the insulator, the load resistance, etc. In particular, the use of a switch actuated at optimal times within the TENG cycle is essential. Using numerical modeling and design of experiments, we applied a methodology to find the TENG parameters which optimize the energy transfer efficiency (ETE) to almost 100% for any vibration frequency and amplitude. The rather simple design of a TENG is promising as an environmentally friendly device. It opens the door to harvesting acoustic vibrations from the environment and to designing effective protection against environmental noise.
Keywords: vibrations, CS TENG, efficiency, design of experiments
Procedia PDF Downloads 90
27223 Study of Unsteady Behaviour of Dynamic Shock Systems in Supersonic Engine Intakes
Authors: Siddharth Ahuja, T. M. Muruganandam
Abstract:
An analytical investigation is performed to study the unsteady response of a one-dimensional, non-linear dynamic shock system to external downstream pressure perturbations in a supersonic flow in a varying-area duct. For a given pressure ratio across a wind tunnel, the normal shock's location can be computed from one-dimensional steady gas dynamics; for some other pressure ratio, the new steady location of the shock can be computed in the same way. This investigation focuses on the small time interval between the first steady shock location and the new steady shock location (corresponding to different pressure ratios). In essence, this study aims to shed light on the motion of the shock from one steady location to another, and thereby to lay a foundation for the field of unsteady gas dynamics, enabling further insight in future research work. Corresponding to the new pressure ratio, a pressure pulse generated at the exit of the tunnel travels upstream and perturbs the shock from its original position, setting it into motion. During this process, numerous other physical phenomena occur at the same time; however, three broad phenomena have been focused on in this study: traversal of a wave, fluid element interactions, and wave interactions. These phenomena create, alter, and annihilate numerous waves under different conditions, and the waves they produce eventually interact with the shock and set it into motion. Numerous such interactions slowly make the shock settle into its final position corresponding to the new pressure ratio across the duct, as estimated by one-dimensional gas dynamics.
This analysis will be extremely helpful in predicting inlet 'unstart' of the flow in a supersonic engine intake; its dependence on the incoming flow Mach number, the incoming flow pressure, and the external perturbation pressure is also studied, to help design more efficient supersonic intakes for engines like ramjets and scramjets.
Keywords: analytical investigation, compression and expansion waves, fluid element interactions, shock trajectory, supersonic flow, unsteady gas dynamics, varying area duct, wave interactions
Procedia PDF Downloads 218
27222 Augmented Reality in Advertising and Brand Communication: An Experimental Study
Authors: O. Mauroner, L. Le, S. Best
Abstract:
Digital technologies offer many opportunities in the design and implementation of brand communication and advertising. Augmented reality (AR) is an innovative technology in marketing communication built on the fact that virtual interaction with a product ad offers additional value to consumers. AR enables consumers to obtain (almost) real product experiences through virtual information even before the purchase of a certain product. The aim of AR applications in advertising is an in-depth examination of product characteristics to enhance both product knowledge and brand knowledge. Interactive design of advertising provides observers with an intense examination of a specific advertising message and therefore leads to better brand knowledge. The elaboration likelihood model and the central route to persuasion strongly support this argumentation. Nevertheless, AR in brand communication is still at an initial stage, and scientific findings about the impact of AR on information processing and brand attitude are therefore rare. The aim of this paper is to empirically investigate the potential of AR applications in combination with traditional print advertising. To that end, an experimental design with different levels of interactivity is built to measure the impact of the interactivity of an ad on different variables of advertising effectiveness.
Keywords: advertising effectiveness, augmented reality, brand communication, brand recall
Procedia PDF Downloads 30227221 Neural Network Based Decision Trees Using Machine Learning for Alzheimer's Diagnosis
Authors: P. S. Jagadeesh Kumar, Tracy Lin Huan, S. Meenakshi Sundaram
Abstract:
Alzheimer’s disease is a prevalent ailment for which no effective cure or therapy has yet been established. With a probable surge in the number of patients in the upcoming years, there is great interest in early detection of the disorder, which may conceivably lead to improved treatment outcomes. Complex changes in the brain are an observable hallmark of the disease, as is the recognition of its genetic signature. Machine learning, together with deep learning and decision trees, strengthens the ability to learn characteristics from multi-dimensional data and thus simplifies automatic classification of Alzheimer’s disease. Experiments were designed and carried out to train and test the prospects of Alzheimer’s disease classification based on machine learning approaches. The results show that decision trees trained with a deep neural network produced excellent results comparable to related pattern classifiers.
Keywords: Alzheimer's diagnosis, decision trees, deep neural network, machine learning, pattern classification
Procedia PDF Downloads 29727220 A Novel Gateway Location Algorithm for Wireless Mesh Networks
Authors: G. M. Komba
Abstract:
An Internet Gateway (IGW) has greater capability than a simple Mesh Router (MR) and is responsible for routing most of the traffic from Mesh Clients (MCs) to the Internet backbone; however, IGWs are more expensive. Choosing strategic locations for IGWs in a Backbone Wireless Mesh (BWM) is critical to the Wireless Mesh Network (WMN), and good IGW placement can improve a number of performance-related problems. In this paper, we propose a novel algorithm, the New Gateway Location Algorithm (NGLA), which aims at three objectives: decreasing the network cost, minimizing delay, and optimizing throughput capacity. Different from existing algorithms, the NGLA incrementally identifies IGWs, allocates mesh routers (MRs) to the identified IGWs, and guarantees to find a feasible IGW placement that installs as few IGWs as possible while consistently preserving all Quality of Service (QoS) requests. Simulation results show that the NGLA outperforms other algorithms by a large margin in the number of IGWs, placing 40% fewer IGWs with an 80% gain in throughput. Furthermore, the NGLA is easy to implement and can be employed for BWMs.
Keywords: wireless mesh network, gateway location algorithm, quality of service, BWM
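The NGLA itself is not detailed in the abstract; as an illustrative sketch of the underlying placement problem, a greedy set-cover heuristic can open gateways until every mesh router is within a hop limit of some IGW. The graph representation, hop limit, and function names below are assumptions, not the authors' algorithm:

```python
from collections import deque

def nodes_within_hops(adj, src, max_hops):
    """BFS: return the set of nodes reachable from src within max_hops hops."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == max_hops:
            continue
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return seen

def greedy_gateway_placement(adj, max_hops):
    """Greedy set cover: repeatedly open the IGW candidate covering
    the most still-uncovered mesh routers."""
    cover = {n: nodes_within_hops(adj, n, max_hops) for n in adj}
    uncovered, gateways = set(adj), []
    while uncovered:
        best = max(adj, key=lambda n: len(cover[n] & uncovered))
        gateways.append(best)
        uncovered -= cover[best]
    return gateways
```

On a five-node chain with a one-hop limit, the sketch opens two gateways that jointly cover all routers.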
Procedia PDF Downloads 37127219 Selection of Designs in Ordinal Regression Models under Linear Predictor Misspecification
Authors: Ishapathik Das
Abstract:
The purpose of this article is to find a method of comparing designs for ordinal regression models using quantile dispersion graphs in the presence of linear predictor misspecification. The true relationship between the response variable and the corresponding control variables is usually unknown. The experimenter assumes a certain form of the linear predictor of the ordinal regression model, and the assumed form may not always be correct. Thus, the maximum likelihood estimates (MLE) of the unknown parameters of the model may be biased due to misspecification of the linear predictor. In this article, the uncertainty in the linear predictor is represented by an unknown function. An algorithm is provided to estimate the unknown function at the design points where observations are available; the unknown function is then estimated at all points in the design region using multivariate parametric kriging. The comparison of the designs is based on a scalar-valued function of the mean squared error of prediction (MSEP) matrix, which incorporates both the variance and the bias of the prediction caused by the misspecification in the linear predictor. The designs are compared using the quantile dispersion graphs approach; the graphs also visually depict the robustness of the designs to changes in the parameter values. Numerical examples are presented to illustrate the proposed methodology.
Keywords: model misspecification, multivariate kriging, multivariate logistic link, ordinal response models, quantile dispersion graphs
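The abstract compares designs through a scalar function of the MSEP matrix, combining prediction variance with misspecification bias. As a hedged illustration (the authors' exact scalar function and the kriging machinery are not reproduced here), a scalar MSEP for repeated predictions at a single point decomposes as variance plus squared bias:

```python
import numpy as np

def msep(predictions, true_value):
    """Scalar mean squared error of prediction at one point,
    decomposed as prediction variance + squared bias."""
    preds = np.asarray(predictions, dtype=float)
    bias = preds.mean() - true_value
    return preds.var() + bias ** 2
```

For predictions [1, 3] of a true value 1, the variance is 1 and the squared bias is 1, giving an MSEP of 2.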
Procedia PDF Downloads 39327218 Fruit and Vegetable Consumption in High School Students in Bandar Abbas, Iran: An Application of the Trans-Theoretical Model
Authors: Aghamolaei Teamur, Hosseini Zahra, Ghanbarnejad Amin
Abstract:
Introduction: A diet rich in fruits and vegetables is of great importance, especially for adolescents, due to the nutrient needs and rapid growth of this age group. The aim of this study was to investigate the relationship between decisional balance and self-efficacy and the stages of change for fruit and vegetable consumption in high school students in Bandar Abbas, Iran. Methods: In this descriptive-analytical study, data were collected from 345 students studying in 8 high schools of Bandar Abbas, selected through multistage sampling. Separate questionnaires were designed to evaluate each of the variables: the stages of change, perceived benefits, perceived barriers, and self-efficacy of fruit and vegetable consumption. Decisional balance was estimated by subtracting the perceived barriers from the perceived benefits. The data were analyzed using SPSS 19 and one-way ANOVA. Results: The results indicated that individuals’ progress along the stages of change, from pre-contemplation to maintenance, was associated with a significant increase in their decisional balance and self-efficacy for fruit and vegetable consumption (P < 0.001). The lowest levels of decisional balance and self-efficacy regarding fruit appeared in the pre-contemplation stage, and the highest levels in the maintenance stage; the same trends were observed for vegetable consumption. Conclusion: Decisional balance and self-efficacy should be considered in designing interventions to increase consumption of fruits and vegetables. Educational programs based on the Trans-theoretical Model (TTM) need to place more emphasis on enhancing perceived benefits and eliminating perceived barriers regarding consumption of fruits and vegetables.
Keywords: fruit, vegetable, decision balance, self-efficacy, trans-theoretical model
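Decisional balance is defined in the study as perceived benefits minus perceived barriers. A minimal sketch of that scoring follows; the item scales and example values are illustrative, not taken from the study's questionnaire:

```python
def decisional_balance(benefit_scores, barrier_scores):
    """Decisional balance = mean perceived-benefit score
    minus mean perceived-barrier score."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(benefit_scores) - mean(barrier_scores)
```

For example, benefit items scored [4, 5, 4] against barrier items scored [2, 1, 3] yield a positive balance of about 2.33, indicating that benefits outweigh barriers.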
Procedia PDF Downloads 29427217 Developed CNN Model with Various Input Scale Data Evaluation for Bearing Faults Prognostics
Authors: Anas H. Aljemely, Jianping Xuan
Abstract:
Rolling bearing fault diagnosis is a pivotal issue in the rotating machinery of modern manufacturing. In this research, a raw-vibration-signal representation and an improved deep learning method for bearing fault diagnosis are proposed. Multiple scales of the raw vibration signals are selected to evaluate the condition monitoring system, and the deep learning process has shown its effectiveness in fault diagnosis. The proposed method employs an exponential linear unit (ELU) layer in a convolutional neural network (CNN): the ELU acts as the identity on positive inputs, applies an exponential nonlinearity to negative inputs, and is combined with a particular convolutional operation to extract valuable features. The identification results show that the improved method achieves the highest accuracy with a 100-dimensional scale and increases training and testing speed.
Keywords: bearing fault prognostics, developed CNN model, multiple-scale evaluation, deep learning features
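The ELU-in-CNN idea can be sketched in NumPy as below. This is an illustrative fragment, not the authors' model; the kernel, signal, and function names are assumptions:

```python
import numpy as np

def elu(x, alpha=1.0):
    """ELU activation: identity for x > 0, alpha*(exp(x) - 1) for x <= 0
    (smooth, saturating for large negative inputs)."""
    return np.where(x > 0, x, alpha * np.expm1(x))

def conv1d_valid(signal, kernel):
    """'Valid' 1-D convolution (implemented as cross-correlation),
    the basic feature-extraction step of a CNN layer."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel)
                     for i in range(n)])

# One conceptual CNN step on a raw vibration window: convolve, then ELU.
features = elu(conv1d_valid(np.array([0.2, -0.5, 0.9, -0.1]),
                            np.array([1.0, -1.0])))
```

The comparison in the abstract is with activations like ReLU, which zero out negative pre-activations instead of saturating smoothly.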
Procedia PDF Downloads 21027216 An Empirical Research on Customer Knowledge Management in the Iranian Banks
Authors: Ebrahim Gharleghi
Abstract:
This paper aims to examine how customer knowledge management (CKM) can be implemented in Iranian banks in practice, with a focus on human resources (people), technology, and processes as important factors of CKM. A conceptual model of an analytical CKM strategy for these banks is developed from the findings and the literature review. The study is based on interviews and a distributed questionnaire; data were collected from 260 bank managers. Hypotheses were tested using Student's t-test (one-sample t-test), Pearson correlation analysis, and regression analysis. The tests revealed that the human, technology, and process factors positively and significantly influence the implementation of CKM practices, corroborating our conceptual model. The human factor was found to affect appropriate CKM implementation more significantly than the other CKM factors, indicating that it is the most important aspect of CKM; it is also in good condition in Iranian banks. Process ranks second and technology last, indicating that technology infrastructures in Iranian banks are weak for CKM implementation. There is little or no empirical evidence investigating the extent of CKM execution in Iranian banks; this paper rectifies this imbalance by clarifying the significance of the human, technology, and process factors in CKM implementation.
Keywords: knowledge management, customer relationship management, customer knowledge management, integration, people, technology, process
Procedia PDF Downloads 27427215 Forecasting Materials Demand from Multi-Source Ordering
Authors: Hui Hsin Huang
Abstract:
Downstream manufacturers order their materials from different upstream suppliers to maintain a certain level of demand. This paper proposes a bivariate model to portray this phenomenon of material demand. We use empirical data to estimate the parameters of the model and evaluate the root-mean-square deviation (RMSD) of the model calibration. The results show that the model has good fitness.
Keywords: recency, ordering time, materials demand quantity, multi-source ordering
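The calibration metric mentioned, RMSD, can be computed as below. This is the generic definition; the paper's bivariate demand model itself is not specified in the abstract:

```python
import math

def rmsd(actual, predicted):
    """Root-mean-square deviation between observed and fitted
    demand quantities: sqrt(mean of squared residuals)."""
    residuals = [(a - p) ** 2 for a, p in zip(actual, predicted)]
    return math.sqrt(sum(residuals) / len(residuals))
```

A lower RMSD against held-out ordering data indicates a better-calibrated demand model.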
Procedia PDF Downloads 53427214 Detecting Earnings Management via Statistical and Neural Networks Techniques
Authors: Mohammad Namazi, Mohammad Sadeghzadeh Maharluie
Abstract:
Predicting earnings management is vital for capital market participants, financial analysts, and managers. This research attempts to answer the following question: Is there a significant difference between the regression model and neural network models in predicting earnings management, and which one leads to a superior prediction? In approaching this question, a Linear Regression (LR) model was compared with two neural networks, a Multi-Layer Perceptron (MLP) and a Generalized Regression Neural Network (GRNN). The population of this study includes 94 listed companies in the Tehran Stock Exchange (TSE) market from 2003 to 2011. After the results of all models were acquired, ANOVA was applied to test the hypotheses. In general, the statistical results showed that the precision of the GRNN did not differ significantly from that of the MLP, while the mean square errors of the MLP and GRNN differed significantly from that of the multivariable LR model. These findings support the notion of nonlinear behavior in earnings management. Therefore, it is more appropriate for capital market participants to analyze earnings management using neural network techniques rather than linear regression models.
Keywords: earnings management, generalized linear regression, neural networks multi-layer perceptron, Tehran stock exchange
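Of the compared models, the GRNN has a particularly compact form: a prediction is a Gaussian-kernel-weighted average of the training targets (Nadaraya-Watson style). A minimal sketch follows, with the smoothing parameter sigma and the function name assumed, not taken from the paper:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """GRNN prediction: for each query point, weight every training
    target by a Gaussian kernel of its distance and average."""
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((np.atleast_2d(X_train) - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        preds.append(np.dot(w, y_train) / np.sum(w))
    return np.array(preds)
```

Unlike an MLP, the GRNN has no iterative training; the whole training set acts as the model, with sigma controlling the bias-variance trade-off.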
Procedia PDF Downloads 42227213 Instant Fire Risk Assessment Using Artificial Neural Networks
Authors: Tolga Barisik, Ali Fuat Guneri, K. Dastan
Abstract:
Major industrial facilities have a high potential for fire risk. In particular, the indices used for the detection of hidden fire are very effective for preventing a fire from becoming dangerous in its initial stage. These indices make it possible to prevent or intervene early by determining the stage of the fire, the hazard potential, and the type of combustion agent from the percentage values of the ambient air components. In this study, an artificial neural network of the multi-layer perceptron (supervised-learning) type is trained with the Levenberg-Marquardt algorithm on input data determined from these indices. The actual values produced by the indices are compared with the outputs produced by the network. Using the neural network and the curves created from the resulting values, the feasibility of performance determination is investigated.
Keywords: artificial neural networks, fire, Graham index, Levenberg-Marquardt algorithm, oxygen decrease percentage index, risk assessment, Trickett index
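One of the listed indices, the Graham index (Graham's ratio), is conventionally computed from percentage gas concentrations as 100·CO / (0.265·N2 − O2), relating carbon monoxide produced to oxygen consumed. The abstract does not give the authors' exact formula, so this sketch assumes the conventional definition:

```python
def graham_ratio(co_pct, o2_pct, n2_pct):
    """Graham's ratio (conventional form): CO produced relative to the
    oxygen deficiency, all inputs as percentage concentrations."""
    oxygen_deficiency = 0.265 * n2_pct - o2_pct
    if oxygen_deficiency <= 0:
        raise ValueError("no measurable oxygen deficiency")
    return 100.0 * co_pct / oxygen_deficiency
```

Rising values of the ratio are typically read as an escalating hidden-fire stage; such index values would form the training inputs and reference outputs for the neural network described above.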
Procedia PDF Downloads 13727212 Designing a Model to Increase the Flow of Circular Economy Startups Using a Systemic and Multi-Generational Approach
Authors: Luís Marques, João Rocha, Andreia Fernandes, Maria Moura, Cláudia Caseiro, Filipa Figueiredo, João Nunes
Abstract:
The implementation of circularity strategies other than recycling, such as reducing the amount of raw material and reusing or sharing existing products, remains marginal. The European Commission announced that the transition towards a more circular economy could lead to the net creation of about 700,000 jobs in Europe by 2030, through additional labour demand from recycling plants, repair services, and other circular activities. Efforts to create new circular business models following completely circular processes, as opposed to linear ones, have increased considerably in recent years. To create a societal Circular Economy transition model, it is necessary to include innovative solutions, in which startups play a key role. Early-stage startups with business models built on circular processes often struggle to create enough impact. The StartUp Zero Program designs a model and approach to increase the flow of startups in the Circular Economy field, focusing on systemic decision analysis and a multi-generational approach. It relies on Multi-Criteria Decision Analysis to support a decision-making tool that combines the Analytical Hierarchy Process and Multi-Attribute Value Theory methods. We define principles, criteria, and indicators for evaluating startup prerogatives, quantifying the evaluation process into a single result. Additionally, this entrepreneurship program, spanning 16 months, involved more than 2400 young people aged 14 to 23 in more than 200 interaction activities.
Keywords: circular economy, entrepreneurship, startups, multi-criteria decision analysis
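The AHP component mentioned above derives priority weights from a pairwise comparison matrix of criteria. A common approximation of the principal eigenvector uses row geometric means; the sketch below illustrates that step only, not the program's actual evaluation tool:

```python
import math

def ahp_weights(pairwise):
    """AHP priority weights via the row geometric-mean approximation
    of the principal eigenvector of a pairwise comparison matrix."""
    n = len(pairwise)
    geo_means = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo_means)
    return [g / total for g in geo_means]
```

For instance, if criterion A is judged 3 times as important as criterion B, the matrix [[1, 3], [1/3, 1]] yields weights of 0.75 and 0.25, which would then feed the MAVT scoring of each startup.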
Procedia PDF Downloads 10527211 Survival Analysis Based Delivery Time Estimates for Display FAB
Authors: Paul Han, Jun-Geol Baek
Abstract:
In the flat panel display industry, the scheduler and dispatching system that meet production target quantities and production deadlines are the major production management systems, controlling each facility's production order and the distribution of WIP (Work in Process). In a dispatching system, delivery time is a key factor determining when a lot can be supplied to a facility. In this paper, we use survival analysis methods to identify the main factors of delivery time and to build a forecasting model for it. Among survival analysis techniques, the Cox proportional hazards model is used to select important explanatory variables, and the Accelerated Failure Time (AFT) model is used to build the prediction model. Performance comparisons were conducted against two other models: a technical statistics model based on transfer history, and a linear regression model using the same explanatory variables as the AFT model. In terms of the Mean Square Error (MSE) criterion, the AFT model's error decreased by 33.8% compared to the existing prediction model and by 5.3% compared to the linear regression model. This survival analysis approach is applicable to implementing a delivery time estimator in display manufacturing and can contribute to improving the productivity and reliability of the production management system.
Keywords: delivery time, survival analysis, Cox PH model, accelerated failure time model
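As a hedged illustration of the AFT idea: with no censoring and log-normal errors, the AFT model reduces to linear regression on log durations, and the exponentiated linear predictor gives a median delivery-time estimate. The paper's actual covariates and censoring handling are not reproduced here:

```python
import numpy as np

def fit_lognormal_aft(X, durations):
    """Uncensored log-normal AFT sketch: ordinary least squares
    on log(duration) with an intercept column."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, np.log(durations), rcond=None)
    return beta

def predict_delivery_time(beta, x):
    """Median predicted delivery time: exp of the linear predictor."""
    return float(np.exp(beta[0] + np.dot(beta[1:], x)))
```

Covariates that lengthen delivery time "decelerate" the lot's progress in AFT terms, which is what makes the model's coefficients directly interpretable for dispatching.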
Procedia PDF Downloads 54327210 A Resistant-Based Comparative Study between Iranian Concrete Design Code and Some Worldwide Ones
Authors: Seyed Sadegh Naseralavi, Najmeh Bemani
Abstract:
In most countries, structural design must inevitably be carried out according to the native code, as is the case in Iran. Since the Iranian concrete code is not implemented in structural design software, most engineers in this country analyze structures using commercial software but design the structural members manually. This point motivated us to establish a correspondence between the Iranian code and some other well-known ones, to provide a facility for engineers. Finally, this paper proposes so-called interpretation charts, which help specify the position of the Iranian code in comparison with some worldwide ones.
Keywords: beam, concrete code, strength, interpretation charts
Procedia PDF Downloads 52627209 Using Multi-Arm Bandits to Optimize Game Play Metrics and Effective Game Design
Authors: Kenny Raharjo, Ramon Lawrence
Abstract:
Game designers have the challenging task of building games that engage players to spend their time and money on the game. There is an infinite number of game variations and design choices, and it is hard to systematically determine the design choices that will produce positive experiences for players. In this work, we demonstrate how multi-arm bandits can be used to automatically explore game design variations to achieve improved player metrics. The advantage of multi-arm bandits is that they allow for continuous experimentation and variation, intrinsically converge to the best solution, and require no special infrastructure beyond allowing minor game variations to be deployed to users for evaluation. A user study confirms that applying multi-arm bandits was successful in determining the preferred game variation with the highest play-time metrics and can be a useful technique in a game designer's toolkit.
Keywords: game design, multi-arm bandit, design exploration and data mining, player metric optimization and analytics
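A minimal epsilon-greedy bandit over game variants illustrates the approach. The paper does not specify its bandit policy, so the policy, parameters, and reward functions below are assumptions:

```python
import random

def epsilon_greedy_bandit(variant_reward_fns, rounds=2000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: mostly deploy the game variant with the best
    observed mean play-time reward; explore a random variant with prob eps."""
    rng = random.Random(seed)
    counts = [0] * len(variant_reward_fns)
    means = [0.0] * len(variant_reward_fns)
    for _ in range(rounds):
        if rng.random() < eps or 0 in counts:  # explore (or seed untried arms)
            arm = rng.randrange(len(variant_reward_fns))
        else:                                  # exploit the current best
            arm = max(range(len(means)), key=means.__getitem__)
        reward = variant_reward_fns[arm](rng)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running mean
    return means, counts
```

Because exploration never fully stops, the bandit keeps adapting if a later game update changes which variant players prefer, which is exactly the "continuous experimentation" property the abstract highlights.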
Procedia PDF Downloads 510