Search results for: network externality
2946 A Hybrid Expert System for Generating Stock Trading Signals
Authors: Hosein Hamisheh Bahar, Mohammad Hossein Fazel Zarandi, Akbar Esfahanipour
Abstract:
In this paper, a hybrid expert system is developed using fuzzy genetic network programming with reinforcement learning (GNP-RL). In this system, a frame-based structure uses the trading rules extracted by GNP. These rules are extracted using technical indices of the stock prices over the training period. For developing this system, we applied fuzzy node transition and decision making in both the processing and judgment nodes of GNP-RL. Consequently, these methods not only increased the accuracy of node transition and decision making in GNP's nodes but also extended GNP's binary signals to ternary trading signals. In other words, in our proposed fuzzy GNP-RL model, a No Trade signal is added to the conventional Buy and Sell signals. Finally, the obtained rules are used in a frame-based system implemented in Kappa-PC software. This developed trading system has been used to generate trading signals for ten companies listed on the Tehran Stock Exchange (TSE). The simulation results over the testing period show that the developed system performs more favorably than the Buy and Hold strategy.
Keywords: fuzzy genetic network programming, hybrid expert system, technical trading signal, Tehran stock exchange
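The binary-to-ternary extension can be illustrated with a minimal sketch: a fuzzy score in [0, 1] built from technical indicators is mapped to Buy, No Trade, or Sell through cut-off bands. This is not the authors' fuzzy GNP-RL implementation; the indicator blend and the threshold values are illustrative assumptions only.

```python
import numpy as np

def ternary_signal(score, buy_cut=0.6, sell_cut=0.4):
    """Map a fuzzy score in [0, 1] to a ternary trading signal.

    Scores near 1 suggest Buy, near 0 suggest Sell, and the band in
    between is treated as No Trade. Thresholds are illustrative only.
    """
    if score >= buy_cut:
        return "Buy"
    if score <= sell_cut:
        return "Sell"
    return "No Trade"

# Toy fuzzy score built from two normalized technical indicators
# (e.g., an RSI-like value and a moving-average crossover strength).
def fuzzy_score(rsi_norm, ma_cross_norm):
    return float(np.clip(0.5 * rsi_norm + 0.5 * ma_cross_norm, 0.0, 1.0))

print(ternary_signal(fuzzy_score(0.8, 0.7)))   # Buy
print(ternary_signal(fuzzy_score(0.5, 0.45)))  # No Trade
print(ternary_signal(fuzzy_score(0.2, 0.1)))   # Sell
```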
Procedia PDF Downloads 332
2945 Bayesian Networks Scoping the Climate Change Impact on Winter Wheat Freezing Injury Disasters in Hebei Province, China
Authors: Xiping Wang, Shuran Yao, Liqin Dai
Abstract:
Many studies report that winters are getting warmer and that minimum air temperatures are clearly rising, both important evidence of climate warming. Exacerbated air temperature fluctuation, which tends to bring more severe weather variation, is another important consequence of recent climate change and has induced more crop-growth disasters in certain regions. Hebei Province is an important winter wheat-growing province in northern China that has recently endured more winter freezing injury, affecting local winter wheat crop management. A Bayesian Network framework for winter wheat freezing injury assessment was established with the objectives of estimating, assessing and predicting winter wheat freezing disasters in Hebei Province. In this framework, freezing disasters were classified into three severity degrees (SI) across the three types of freezing, i.e., freezing caused by severe cold at any time in winter, prolonged extreme cold during winter, and freeze-after-thaw early in the season after winter. The factors influencing winter wheat freezing SI include the time of freezing occurrence, the growth status of seedlings, soil moisture, winter wheat variety, the longitude of the target region and the highly variable climate factors. The climate factors included in this framework are the daily mean and range of air temperature, the extreme minimum temperature and number of days during a severe cold weather process, the number of days with temperatures lower than the critical temperature values, and the accumulated negative temperature in a potential freezing event. The Bayesian Network model was evaluated using actual weather data and crop records at selected sites in Hebei Province. With the multi-stage influences of the various factors, the forecast and assessment of the event-based target variables, freezing injury occurrence and its damage to winter wheat production, were shown to be better scoped by the Bayesian Network model.
Keywords: Bayesian networks, climate change, freezing injury, winter wheat
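A minimal sketch of the kind of discrete Bayesian network the framework describes is shown below using the pgmpy library. The structure (severity SI conditioned only on extreme minimum temperature and soil moisture), the two-factor simplification, and every probability value are illustrative assumptions rather than the calibrated Hebei model, and the class name `BayesianNetwork` assumes a recent pgmpy release.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy network: freezing severity (SI) depends on extreme minimum temperature
# and soil moisture. All probabilities are placeholders.
model = BayesianNetwork([("MinTemp", "SI"), ("SoilMoisture", "SI")])

cpd_temp = TabularCPD("MinTemp", 2, [[0.7], [0.3]])        # normal, extreme
cpd_soil = TabularCPD("SoilMoisture", 2, [[0.6], [0.4]])   # adequate, dry
cpd_si = TabularCPD(
    "SI", 3,
    # columns: (MinTemp, SoilMoisture) = (0,0), (0,1), (1,0), (1,1)
    [[0.85, 0.60, 0.30, 0.10],   # SI = light
     [0.10, 0.30, 0.45, 0.40],   # SI = moderate
     [0.05, 0.10, 0.25, 0.50]],  # SI = severe
    evidence=["MinTemp", "SoilMoisture"], evidence_card=[2, 2],
)
model.add_cpds(cpd_temp, cpd_soil, cpd_si)
assert model.check_model()

# Query the severity distribution given an extreme-cold, dry-soil event
posterior = VariableElimination(model).query(
    ["SI"], evidence={"MinTemp": 1, "SoilMoisture": 1})
print(posterior)
```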
Procedia PDF Downloads 408
2944 Pump-as-Turbine: Testing and Characterization as an Energy Recovery Device, for Use within the Water Distribution Network
Authors: T. Lydon, A. McNabola, P. Coughlan
Abstract:
Energy consumption in the water distribution network (WDN) is a well-established problem, with the industry contributing heavily to carbon emissions at 0.9 kg CO2 emitted per m3 of water supplied. It is estimated that 85% of the energy wasted in the WDN can be recovered by installing turbines. Existing potential in networks is present at small-capacity sites (5-10 kW), numerous and dispersed across networks. However, traditional turbine technology cannot be scaled down to this size in an economically viable fashion, thus alternative approaches are needed. This research aims to enable energy recovery within the WDN by exploring the potential of pumps-as-turbines (PATs). PATs are estimated to be ten times cheaper than traditional micro-hydro turbines, presenting potential to contribute to an economically viable solution. However, a number of technical constraints currently prohibit their widespread use, including the inability of a PAT to control pressure, difficulty in the selection of PATs due to a lack of performance data, and a lack of understanding of how PATs can cater for fluctuations as extreme as +/- 50% of the average daily flow, characteristic of the WDN. A PAT prototype is undergoing testing in order to identify the capabilities of the technology. Results of preliminary testing, which involved testing the efficiency and power potential of the PAT for varying flow and pressure conditions in order to develop characteristic and efficiency curves for the PAT and a baseline understanding of the technology's capabilities, are presented here:
• The limitations of existing selection methods, which convert BEP from pump operation to BEP in turbine operation, were highlighted by the failure of such methods to reflect the conditions of maximum efficiency of the PAT. A generalised selection method for the WDN may need to be informed by an understanding of the impact of flow variations and pressure control on system power potential, capital cost, maintenance costs and payback period.
• A clear relationship between flow and the efficiency rate of the PAT has been established. The rate of efficiency reduction for flows +/- 50% of BEP is significant and more extreme for flow deviations above the BEP than below it, but not dissimilar to the efficiency response of other turbines.
• A PAT alone is not sufficient to regulate pressure, yet the relationship of pressure across the PAT is foundational in exploring ways in which PAT energy recovery systems can maintain the required pressure level within the WDN. The efficiencies of PAT energy recovery systems operating under the pressure-regulation conditions that have been conceptualised in the current literature still need to be established.
Initial results guide the focus of forthcoming testing and exploration of PAT technology towards how PATs can form part of an efficient energy recovery system.
Keywords: energy recovery, pump-as-turbine, water distribution network
Procedia PDF Downloads 260
2943 An Ethnographic Study on Peer Support Workers in a Peer-Driven Non-Governmental Organization: The Colorado Mental Wellness Network
Authors: Shawna M. Margesson
Abstract:
This research study seeks to explore the lived experience of peer support workers (PSWs) in a peer-led non-governmental organization in Denver, Colorado, USA. The Colorado Mental Wellness Network offers supportive wellness recovery services such as wellness recovery action plans (WRAP), advocacy trainings for anti-stigma campaigns, and PSWs to work with and for consumers in the community. This study suggests that a peer-run environment is a unique community setting for PSWs to work in, given that all employees are living in mental wellness recovery. Little has been documented about PSWs' personal accounts of working within a recovery-oriented organization or their first-person accounts of working with consumers. The importance of this study is to provide an ethnographic account of both subjects: the lived experiences of PSWs in both organizational and consumer-driven recovery. This study seeks to add to the literature and the social work profession the personal accounts of PSWs as they provide services to others like themselves. It will also provide an additional lens through which to view the peer-driven movement in mental health and wellness recovery.
Keywords: peer to peer movement, mental health, ethnography, peer support workers
Procedia PDF Downloads 164
2942 Research on Road Openness in the Old Urban Residential District Based on Space Syntax: A Case Study on Kunming within the First Loop Road
Authors: Haoyang Liang, Dandong Ge
Abstract:
With the rapid development of Chinese cities, traffic congestion has become more and more serious. At the same time, there are many closed old residential areas in Chinese cities, which seriously affect the connectivity of urban roads and reduce the density of urban road networks. When restricted old residential areas are reopened, their internal roads are transformed into urban roads, which greatly helps alleviate traffic congestion. This paper uses Space Syntax theory to analyze the urban road network and compares roads by their integration and connectivity degree to evaluate whether opening the roads in residential areas can improve urban traffic. Based on the road network system within the first loop road in Kunming, a Space Syntax evaluation model is established for status analysis, and a comparative analysis is used to examine the change in the model before and after opening the roads of the old urban residential districts within the first-ring road in Kunming. Areas that show a significant difference are then selected for small-scale model analysis. Based on the analysis results and the traffic situation, an evaluation of road openness in the old urban residential district is proposed to improve urban residential districts.
Keywords: Space Syntax, Kunming, urban renovation, traffic jam
Procedia PDF Downloads 162
2941 Data Mining of Students' Performance Using Artificial Neural Network: Turkish Students as a Case Study
Authors: Samuel Nii Tackie, Oyebade K. Oyedotun, Ebenezer O. Olaniyi, Adnan Khashman
Abstract:
Artificial neural networks have been used in different fields of artificial intelligence, and more specifically in machine learning. Although other machine learning options are feasible in most situations, the ease with which neural networks lend themselves to different problems, including pattern recognition, image compression, classification, computer vision, regression, etc., has earned them a remarkable place in the machine learning field. This research exploits neural networks as a data mining tool in predicting the number of times a student repeats a course, considering some attributes relating to the course itself, the teacher, and the particular student. Neural networks were used in this work to map the relationship between some attributes related to students' course assessment and the number of times a student will possibly repeat a course before passing. The hope is that the possibility of predicting students' performance from such complex relationships can help facilitate the fine-tuning of academic systems and policies implemented in learning environments. To validate the power of neural networks in data mining, a Turkish students' performance database was used; feedforward and radial basis function networks were trained for this task, and the performance obtained from these networks was evaluated in terms of achieved recognition rates and training time.
Keywords: artificial neural network, data mining, classification, students' evaluation
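As a hedged illustration of the feedforward variant (not the authors' exact architecture or the real Turkish database), a small multi-layer perceptron can be trained on attribute vectors to predict the repeat count; the feature count, layer sizes, and synthetic data below are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the students' database: each row holds course,
# teacher, and student attributes; y is the number of repeats (0-3).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = rng.integers(0, 4, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A small feedforward (multi-layer perceptron) classifier, analogous in
# spirit to the feedforward network described in the abstract.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0))
model.fit(X_train, y_train)
print("recognition rate:", model.score(X_test, y_test))
```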
Procedia PDF Downloads 613
2940 Groundwater Potential Delineation Using Geodetector Based Convolutional Neural Network in the Gunabay Watershed of Ethiopia
Authors: Asnakew Mulualem Tegegne, Tarun Kumar Lohani, Abunu Atlabachew Eshete
Abstract:
Groundwater potential delineation is essential for efficient water resource utilization and long-term development. The scarcity of potable and irrigation water has become a critical issue due to natural and anthropogenic activities in meeting the demands of human survival and productivity. With these constraints, groundwater resources are now being used extensively in Ethiopia. Therefore, an innovative convolutional neural network (CNN) is successfully applied in the Gunabay watershed to delineate groundwater potential based on the selected major influencing factors. Groundwater recharge, lithology, drainage density, lineament density, transmissivity, and geomorphology were selected as the major influencing factors for the groundwater potential of the study area. Of the total 128 samples, 70% were selected for training and 30% were used for testing. The spatial distribution of groundwater potential has been classified into five groups: very low (10.72%), low (25.67%), moderate (31.62%), high (19.93%), and very high (12.06%). The area receives high rainfall but has a very low amount of recharge due to a lack of proper soil and water conservation structures. The major outcome of the study showed that moderate and low potential is dominant. Geodetector results revealed that the magnitudes of influence on groundwater potential rank as transmissivity (0.48), recharge (0.26), lineament density (0.26), lithology (0.13), drainage density (0.12), and geomorphology (0.06). The model results showed that, using a convolutional neural network (CNN), groundwater potential can be delineated with higher predictive capability and accuracy. The CNN-based AUC validation showed training and testing accuracies of 81.58% and 86.84%, respectively. Based on the findings, the local government can receive technical assistance for groundwater exploration and sustainable water resource development in the Gunabay watershed. Finally, the use of a Geodetector-based deep learning algorithm can provide a new platform for industrial sectors, groundwater experts, scholars, and decision-makers.
Keywords: CNN, geodetector, groundwater influencing factors, groundwater potential, Gunabay watershed
Procedia PDF Downloads 21
2939 The Use of Network Tool for Brain Signal Data Analysis: A Case Study with Blind and Sighted Individuals
Authors: Cleiton Pons Ferreira, Diana Francisca Adamatti
Abstract:
Advancements in computer technology have made it possible to obtain information for research in biology and neuroscience. In order to transform the data from these surveys, networks have long been used to represent important biological processes, changing the use of these tools from purely illustrative and didactic to more analytic, even including interaction analysis and hypothesis formulation. Many studies have involved this application, but not directly for the interpretation of data obtained from brain function, calling for new development perspectives in neuroinformatics using existing tool models already disseminated in bioinformatics. This study includes an analysis of neurological data through electroencephalogram (EEG) signals, using Cytoscape, an open-source software tool for visualizing complex networks in biological databases. The data were obtained from a comparative case study developed in research at the University of Rio Grande (FURG), using EEG signals from a Brain-Computer Interface (BCI) with 32 electrodes recorded from a blind and a sighted individual during the execution of an activity that stimulated spatial ability. This study intends to present results that lead to better ways to use and adapt techniques that support the treatment of brain-signal data, in order to improve understanding and learning in neuroscience.
Keywords: neuroinformatics, bioinformatics, network tools, brain mapping
Procedia PDF Downloads 182
2938 Machine Learning Based Smart Beehive Monitoring System Without Internet
Authors: Esra Ece Var
Abstract:
Beekeeping plays an essential role in both agricultural yields and the agricultural economy; bees produce honey, wax, royal jelly, apitoxin, pollen, and propolis. Nowadays, these natural products have become increasingly valued for nutrition, food supplements, medicine, and industry. However, to produce organic honey, the majority of apiaries are located in remote or distant rural areas where utilities such as electricity and Internet access are not available. Additionally, due to colony failures, world honey production decreases year by year despite the increase in the number of beehives. The objective of this paper is to develop a smart beehive monitoring system for apiaries, including those that do not have access to the Internet. In this context, the temperature and humidity inside the beehive and the ambient temperature were measured with RFID sensors. The control center, where all sensor data are sent and stored, has a GSM module used to warn the beekeeper via SMS when an anomaly is detected. Simultaneously, using the collected data, an unsupervised machine learning algorithm is used for detecting anomalies and calibrating the warning system. The results show that the smart beehive monitoring system can detect fatal anomalies up to 4 weeks prior to colony loss.
Keywords: beekeeping, smart systems, machine learning, anomaly detection, apiculture
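The abstract does not name the unsupervised algorithm, so the sketch below uses an Isolation Forest purely as one plausible stand-in, trained on synthetic hive readings; the sensor layout, value ranges, and contamination rate are all assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for hive readings: [inside temp (degC), inside humidity (%),
# ambient temp (degC)]. A healthy brood nest keeps the inside temperature in a
# narrow band, so departures from the learned pattern are flagged.
rng = np.random.default_rng(1)
normal = np.column_stack([
    rng.normal(35.0, 0.5, 1000),   # inside temperature
    rng.normal(60.0, 5.0, 1000),   # inside humidity
    rng.normal(20.0, 8.0, 1000),   # ambient temperature
])

detector = IsolationForest(contamination=0.01, random_state=1).fit(normal)

new_readings = np.array([
    [35.1, 58.0, 18.0],   # typical reading
    [28.0, 85.0, 15.0],   # possible colony problem
])
print(detector.predict(new_readings))   # 1 = normal, -1 = anomaly -> send SMS
```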
Procedia PDF Downloads 239
2937 Transportation Mode Classification Using GPS Coordinates and Recurrent Neural Networks
Authors: Taylor Kolody, Farkhund Iqbal, Rabia Batool, Benjamin Fung, Mohammed Hussaeni, Saiqa Aleem
Abstract:
The rising threat of climate change has led to an increase in public awareness of and care about our collective and individual environmental impact. A key component of this impact is our use of cars and other polluting forms of transportation, but it is often difficult for an individual to know how severe this impact is. While there are applications that offer this feedback, they require manual entry of the transportation mode used for a given trip, which can be burdensome. In order to alleviate this shortcoming, data from the 2016 TRIPlab datasets have been used to train a variety of machine learning models to automatically recognize the mode of transportation. An accuracy of 89.6% is achieved using a single deep neural network model with a Gated Recurrent Unit (GRU) architecture applied directly to trip data points, over four primary classes, namely walking, public transit, car, and bike. These results are comparable in accuracy to results achieved by others using ensemble methods and require far less computation when classifying new trips. The lack of trip context data, e.g., bus routes, bike paths, etc., and the need for only a single set of weights make this an appropriate methodology for applications hoping to reach a broad demographic and have responsive feedback.
Keywords: classification, gated recurrent unit, recurrent neural network, transportation
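A minimal PyTorch sketch of a GRU applied directly to a sequence of trip points over the four classes is shown below; the per-point feature layout (e.g., latitude, longitude, speed, time delta), layer sizes, and the random data are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class TripGRU(nn.Module):
    """GRU over a sequence of GPS trip points, classifying the whole trip."""
    def __init__(self, n_features=4, hidden=64, n_classes=4):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        _, h_n = self.gru(x)              # h_n: (1, batch, hidden)
        return self.head(h_n.squeeze(0))  # logits over the four modes

model = TripGRU()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step on random data standing in for labelled trips
x = torch.randn(8, 120, 4)        # 8 trips, 120 points each, 4 features
y = torch.randint(0, 4, (8,))     # walking / public transit / car / bike
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```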
Procedia PDF Downloads 137
2936 Analysis of Cardiovascular Diseases Using Artificial Neural Network
Authors: Jyotismita Talukdar
Abstract:
In this paper, a study has been made on the possibility and accuracy of early prediction of several heart diseases using an Artificial Neural Network (ANN). The study has been made in both a noise-free environment and a noisy environment. The data collected for this analysis are from five hospitals. Around 1500 heart patients' data have been collected and studied. The data were analysed and the results compared with the doctors' diagnoses. It is found that, in the noise-free environment, the accuracy varies from 74% to 92%, and in the noisy environment (2 dB), the accuracy varies from 62% to 82%. In the present study, the four basic attributes considered are Blood Pressure (BP), Fasting Blood Sugar (FBS), Thalach (THAL) and Cholesterol (CHOL). It has been found that the highest accuracy (93%) has been achieved in the case of PPI (Post-Permanent-Pacemaker Implantation), around 79% in the case of CAD (Coronary Artery Disease), 87% in DCM (Dilated Cardiomyopathy), 89% in the case of RHD&MS (Rheumatic Heart Disease with Mitral Stenosis), 75% in the case of RBBB+LAFB (Right Bundle Branch Block + Left Anterior Fascicular Block), 72% for CHB (Complete Heart Block), etc. The lowest accuracy has been obtained in the case of ICMP (Ischemic Cardiomyopathy), about 38%, and AF (Atrial Fibrillation), about 60% to 62%.
Keywords: coronary heart disease, chronic stable angina, sick sinus syndrome, cardiovascular disease, cholesterol, Thalach
Procedia PDF Downloads 174
2935 Students' Level of Knowledge Construction and Pattern of Social Interaction in an Online Forum
Authors: K. Durairaj, I. N. Umar
Abstract:
The asynchronous discussion forum is one of the most widely used activities in learning management system environments. Online forums allow participants to interact and construct knowledge, and can be used to complement face-to-face sessions in blended learning courses. However, the extent to which students perceive the benefits or advantages of forums remains to be seen. Through content and social network analyses, instructors will be able to gauge the students' engagement and knowledge construction level. Thus, this study aims to analyze the students' level of knowledge construction and their level of participation in online discussion. It also attempts to investigate the relationship between the level of knowledge construction and their social interaction patterns. The sample involves 23 students undertaking a master's course at one public university in Malaysia. The asynchronous discussion forum was conducted for three weeks as part of the course requirement. The findings indicate that the level of knowledge construction is quite low. Also, a density value of 0.11 indicates that the overall communication among the participants in the forum is low. This study reveals strong and significant correlations between SNA measures (in-degree centrality, out-degree centrality) and the level of knowledge construction. Thus, allocating these active students to different groups helps interactive discussion take place. Finally, based upon the findings, some recommendations to increase students' level of knowledge construction and for further research are proposed.
Keywords: asynchronous discussion forums, content analysis, knowledge construction, social network analysis
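The two SNA measures named here, together with network density, can be computed from a reply graph in a few lines; the toy directed graph below (an edge u -> v meaning participant u replied to participant v) is illustrative, not the study's forum data or tooling.

```python
import networkx as nx

# Toy directed reply network standing in for the forum interactions
G = nx.DiGraph()
G.add_edges_from([
    ("s1", "s2"), ("s1", "s3"), ("s2", "s1"),
    ("s3", "s4"), ("s4", "s1"), ("s5", "s1"),
])

print("density:", round(nx.density(G), 2))          # analogous to the 0.11 reported
print("in-degree centrality:", nx.in_degree_centrality(G))
print("out-degree centrality:", nx.out_degree_centrality(G))
```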
Procedia PDF Downloads 373
2934 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology
Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal
Abstract:
Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring microbiologically safe water is supplied at the customer's tap. In order to simulate how chloramine behaves when it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package (WaterGEMS). The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimizing these parameters to obtain results as close as possible to actual measured data in a real DWDS would result in both cost reduction and reduced consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of the water quality parameters (i.e. temperature, pH, and initial mono-chloramine concentration) that maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios in an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to conduct the optimization of the three independent water quality parameters. High and low levels of the water quality parameters were considered as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature and initial mono-chloramine concentration. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS. It was found that at a pH of 7.75, a temperature of 34.16 ºC, and an initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network would be minimized to 0.189, and the optimum conditions for averaged water supply occurred at a pH of 7.71, a temperature of 18.12 ºC, and an initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology to predict mono-chloramine residual can have great potential for water treatment plant operators in accurately estimating the mono-chloramine residual through a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology for other water samples.
Keywords: chloramine decay, modelling, response surface methodology, water quality parameters
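The RSM workflow (fit a second-order surface to the model error over the three parameters, then search within the experimental ranges) can be sketched with generic Python tooling rather than Design Expert; the design points, RMSE values, and bounds below are placeholders, not the study's data, and only the quadratic-surface-plus-bounded-optimisation idea is illustrated.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical design points: [pH, temperature (degC), initial NH2Cl (mg/L)]
# with the WQNM's RMSE as the response.
X = np.array([[7.2, 15.0, 3.0], [7.2, 35.0, 5.0], [8.2, 15.0, 5.0],
              [8.2, 35.0, 3.0], [7.7, 25.0, 4.0], [7.7, 25.0, 4.5],
              [7.4, 20.0, 3.5], [8.0, 30.0, 4.8], [7.6, 32.0, 3.8],
              [7.3, 28.0, 4.2], [7.9, 18.0, 3.2], [8.1, 22.0, 4.6]])
rmse = np.array([0.40, 0.25, 0.35, 0.30, 0.22, 0.21,
                 0.33, 0.24, 0.23, 0.26, 0.31, 0.28])

# Fit a second-order (quadratic) response surface
poly = PolynomialFeatures(degree=2, include_bias=True)
surface = LinearRegression().fit(poly.fit_transform(X), rmse)

def predicted_rmse(x):
    return float(surface.predict(poly.transform(x.reshape(1, -1)))[0])

# Minimise the fitted surface inside the experimental ranges (no extrapolation)
bounds = [(7.2, 8.2), (15.0, 35.0), (3.0, 5.0)]
result = minimize(predicted_rmse, x0=np.array([7.7, 25.0, 4.0]), bounds=bounds)
print("optimum [pH, T, NH2Cl]:", result.x, "predicted RMSE:", result.fun)
```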
Procedia PDF Downloads 225
2933 GPRS Based Automatic Metering System
Authors: Constant Akama, Frank Kulor, Frederick Agyemang
Abstract:
All over the world, due to increasing population, electric power distribution companies are looking for more efficient ways of reading electricity meters. In Ghana, the prepaid metering system was introduced in 2007 to replace the manual system of reading, which was fraught with inefficiencies. However, the prepaid system in Ghana is not capable of integration with online systems such as e-commerce platforms and remote monitoring systems. In this paper, we present a design framework for an automatic metering system that can be integrated with e-commerce platforms and remote monitoring systems. The meter was designed using the ADE 7755, which reads the energy consumption; the reading is processed by a microcontroller connected to a SIM900 General Packet Radio Service module containing a GSM chip provisioned with an Access Point Name. The system also has a billing server and a management server located at the premises of the utility company, which communicate with the meter over a Virtual Private Network and GPRS. With this system, customers can buy credit online and the credit will be transferred securely to the meter. Also, when a fault is reported, the utility company can log into the meter remotely through the management server to troubleshoot the problem.
Keywords: access point name, general packet radio service, GSM, virtual private network
Procedia PDF Downloads 299
2932 A Distributed Cryptographically Generated Address Computing Algorithm for Secure Neighbor Discovery Protocol in IPv6
Authors: M. Moslehpour, S. Khorsandi
Abstract:
Due to the shortage of IPv4 addresses, the transition to IPv6 has gained significant momentum in recent years. Like the Address Resolution Protocol (ARP) in IPv4, the Neighbor Discovery Protocol (NDP) provides functions such as address resolution in IPv6. Despite its functionality, NDP is vulnerable to several attacks. To mitigate these attacks, Internet Protocol Security (IPsec) was introduced, but it was not efficient due to its limitations. Therefore, the SEND protocol was proposed for automatic protection of the auto-configuration process; it secures the neighbor discovery and address resolution process. To defend against threats to NDP's integrity and identity, Cryptographically Generated Addresses (CGA) and asymmetric cryptography are used by SEND. Despite the advantages of SEND, its disadvantages, such as the computational cost of the CGA algorithm and the sequential nature of the CGA generation algorithm, are considerable. In this paper, we parallelize this process across network resources in order to improve it. In addition, we compare the CGA generation time between the self-computing and distributed-computing processes. We focus on the impact of malicious nodes on the CGA generation time in the network. According to the results, even though malicious nodes participate in the generation process, the CGA generation time is less than when it is computed by a single node. With a Trust Management System, detecting and isolating malicious nodes is easier.
Keywords: NDP, IPsec, SEND, CGA, modifier, malicious node, self-computing, distributed-computing
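The costly, parallelizable part of CGA generation is the Sec-dependent modifier search (Hash2 in RFC 3972). The sketch below strides that search across worker processes; it omits details of the real algorithm (the nine zero octets, collision count, and DER-encoded public key) and uses a random placeholder key, so it illustrates the parallelization idea rather than the paper's exact distributed scheme.

```python
import hashlib
import os
from concurrent.futures import ProcessPoolExecutor

SEC = 1   # 16 * Sec leading zero bits required in the Hash2 digest

def leading_zero_bits(digest: bytes) -> int:
    """Count the number of leading zero bits in a digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def search(pubkey: bytes, start: int, stride: int, limit: int = 2_000_000):
    """Search one stride of the modifier space for a valid CGA modifier."""
    for modifier in range(start, limit, stride):
        digest = hashlib.sha1(modifier.to_bytes(16, "big") + pubkey).digest()
        if leading_zero_bits(digest) >= 16 * SEC:
            return modifier
    return None

if __name__ == "__main__":
    pubkey = os.urandom(64)   # placeholder for the node's encoded public key
    workers = 4
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(search, pubkey, w, workers) for w in range(workers)]
        results = [f.result() for f in futures]
    print("valid modifiers found:", [m for m in results if m is not None])
```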
Procedia PDF Downloads 278
2931 Statistical Analysis with Prediction Models of User Satisfaction in Software Project Factors
Authors: Katawut Kaewbanjong
Abstract:
We analyzed a large volume of data and identified software project factors significantly associated with user satisfaction. A statistical significance analysis (logistic regression) and a collinearity analysis determined the significant factors from a group of 71 pre-defined factors across 191 software projects in ISBSG Release 12. The eight prediction models used for testing the predictive potential of these factors were the neural network, k-NN, Naïve Bayes, random forest, decision tree, gradient boosted tree, linear regression and logistic regression prediction models. Fifteen pre-defined factors were truly significant in predicting user satisfaction, and they provided 82.71% prediction accuracy when used with a neural network prediction model. These factors were client-server, personnel changes, total defects delivered, project inactive time, industry sector, application type, development type, how methodology was acquired, development techniques, decision making process, intended market, size estimate approach, size estimate method, cost recording method, and effort estimate method. These findings may benefit software development managers considerably.
Keywords: prediction model, statistical analysis, software project, user satisfaction factor
Procedia PDF Downloads 124
2930 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment
Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang
Abstract:
2016 became the year of the Artificial Intelligence explosion. AI technologies are maturing to the point that most well-known tech giants are making large investments to increase their capabilities in AI. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to train a machine to learn features directly from data. Deep learning realizes many machine learning applications which expand the field of AI. At the present time, deep learning frameworks have been widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks, there are many standard processes or algorithms, but the performance of different frameworks may differ. In this paper, we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network, and we analyze the factors that account for the performance differences between the two distributed frameworks. Through the experimental analysis, we identify the overheads which could be further optimized. The main contribution is that the evaluation results provide further optimization directions in both performance tuning and algorithmic design.
Keywords: artificial intelligence, machine learning, deep learning, convolutional neural networks
Procedia PDF Downloads 211
2929 The Effect of Research Unit Clique-Diversity and Power Structure on Performance and Originality
Authors: Yue Yang, Qiang Wu, Xingyu Gao
Abstract:
"Organized research units" have always been an important part of academia. According to the type of organization, there are public research units, university research units, and corporate research units. Existing research has explored the research unit in some depth from several perspectives. However, there is a research gap on the closer interaction between the three from a network perspective and the impact of this interaction on their performance as well as originality. Cliques are a special kind of structure under the concept of cohesive subgroups in the field of social networks, representing particularly tightly knit teams in a network. This study develops the concepts of the diversity of clique types and the diversity of clique geography based on cliques, starting from the diversity of collaborative activities characterized by them. Taking research units as subjects and assigning values to their power in cliques based on occupational age, we explore the impact of clique diversity and clique power on their performance as well as originality and the moderating role of clique relationship strength and structural holes in them. By collecting 9094 articles published in the field of quantum communication at WoSCC over the 15 years 2007-2021, we processed them to construct annual collaborative networks between a total of 533 research units and measured the network characteristic variables using Ucinet. It was found that the type and geographic diversity of cliques promoted the performance and originality of the research units, and the strength of clique relationships positively moderated the positive effect of the diversity of clique types on performance and negatively affected the promotional relationship between the geographic diversity of cliques and performance. It also negatively affected the positive effects of clique-type diversity and clique-geography diversity on originality. Structural holes positively moderated the facilitating effect of both types of factional diversity on performance and originality. Clique power promoted the performance of the research unit, but unfavorably affected its performance on novelty. Faction relationship strength facilitated the relationship between faction rights and performance and showed negative insignificance for clique power and originality. Structural holes positively moderated the effect of clique power on performance and originality.Keywords: research unit, social networks, clique structure, clique power, diversity
Procedia PDF Downloads 59
2928 Systematic Study of Mutually Inclusive Influence of Temperature and Substitution on the Coordination Geometry of Co(II) in a Series of Coordination Polymer and Their Properties
Authors: Manasi Roy, Raju Mondal
Abstract:
During the last two decades, the synthesis and design of MOFs, or novel coordination polymers (CPs), has flourished as an emerging area of research due to their role as functional materials. Accordingly, ten new cobalt-based MOFs have been synthesized using a simple bispyrazole ligand, 4,4′-methylene-bispyrazole (H2MBP), and isophthalic acid (H2IPA) and its four 5-substituted derivatives R-H2IPA (R = COOH, OH, tBu, NH2). The major aim of this study was to validate the mutual influence of temperature and substitution on the final structural self-assembly. Five different isophthalic acid derivatives were used to study the influence of substituents, while each reaction was carried out at two different temperatures to assess the temperature effect. A clear correlation was observed between the reaction temperature and the coordination number of the cobalt atoms, which consequently changes the self-assembly pattern. Another finding is that the periodic change in coordination number brought about systematic changes in the structural network via secondary building unit selectivity. With the presence of a tunable cavity inside the network and unsaturated metal centers, the MOFs show highly encouraging photocatalytic degradation of a toxic dye, with a potential application in wastewater purification. Another fascinating aspect of this work is the construction of magnetic coordination polymers exhibiting a not-so-common MCE behavior in a cobalt-based MOF.
Keywords: MOFs, temperature effect, MCE, dye degradation
Procedia PDF Downloads 136
2927 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data
Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau
Abstract:
Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium, visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within the contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consisted of 3 Python scripts that could each be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We next compared our automated pipeline outputs with the outputs of manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body location and neuronal contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to allow computer vision to better distinguish between cells and non-cells. Its results were also comparable to manually analyzed results but with significantly reduced result acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline's cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium imaging recordings from neuronal cell bodies in neuronal cell cultures. Our new goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
Keywords: calcium imaging, computer vision, neural activity, neural networks
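A minimal sketch of the three OpenCV steps described (threshold and find contours, mask-based mean fluorescence per frame, baseline-relative transient detection) is shown below on a synthetic image stack; the threshold choice, area filter, and event rule are illustrative assumptions rather than the pipeline's tuned parameters.

```python
import cv2
import numpy as np

# Synthetic recording standing in for calcium-imaging data: two bright "cell
# bodies" on a dark, noisy background, stacked as (n_frames, height, width).
rng = np.random.default_rng(0)
base = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(base, (40, 40), 8, 120, -1)
cv2.circle(base, (90, 80), 8, 120, -1)
stack = np.clip(base + rng.integers(0, 30, (100, 128, 128)), 0, 255).astype(np.uint8)

# 1) Detect cell-body contours on a time-averaged, binarized image
mean_img = stack.mean(axis=0).astype(np.uint8)
_, binary = cv2.threshold(mean_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cells = [c for c in contours if cv2.contourArea(c) > 30]   # drop tiny specks

# 2) Extract the mean fluorescence inside each contour for every frame
traces = []
for contour in cells:
    mask = np.zeros(stack.shape[1:], dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)
    traces.append([cv2.mean(frame, mask=mask)[0] for frame in stack])

# 3) Flag transient activity as samples rising well above a trace's baseline
traces = np.array(traces)                       # (n_cells, n_frames)
baseline = traces.mean(axis=1, keepdims=True)
events = traces > baseline + 2 * traces.std(axis=1, keepdims=True)
print(f"{len(cells)} cells detected, {int(events.sum())} above-baseline samples")
```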
Procedia PDF Downloads 82
2926 On-Chip Sensor Ellipse Distribution Method and Equivalent Mapping Technique for Real-Time Hardware Trojan Detection and Location
Authors: Longfei Wang, Selçuk Köse
Abstract:
Hardware Trojans have become a great concern as integrated circuit (IC) technology advances and not all manufacturing steps of an IC are accomplished within one company. Real-time hardware Trojan detection is proven to be a feasible way to detect randomly activated Trojans that cannot be detected at the testing stage. On-chip sensors serve as a great candidate for implementing real-time hardware Trojan detection; however, the optimization of on-chip sensors has not been thoroughly investigated, and the localization of Trojans has not been carefully explored. In this paper, an on-chip sensor ellipse distribution method and an equivalent mapping technique are proposed based on the characteristics of the on-chip power delivery network to address the optimization and distribution of on-chip sensors for real-time hardware Trojan detection, as well as to estimate the location and current consumption of a hardware Trojan. Simulation results verify that hardware Trojan activation can be effectively detected and the location of a hardware Trojan can be efficiently estimated with less than 5% error for a realistic power grid using our proposed methods. The proposed techniques therefore lay a solid foundation for the isolation and even deactivation of hardware Trojans through accurate localization.
Keywords: hardware trojan, on-chip sensor, power distribution network, power/ground noise
Procedia PDF Downloads 391
2925 Climate Variability on Hydro-Energy Potential: An MCDM and Neural Network Approach
Authors: Apu Kumar Saha, Mrinmoy Majumder
Abstract:
The increase in the concentration of greenhouse gases all over the world has induced global warming, whereby the rising average temperature of the world impacts the pattern of climate in different regions. The increased frequency of extreme events, the early onset of seasons and changes in the average amount of rainfall all support the conclusion that the normal pattern of climate is changing. Sophisticated and complex models are prepared to estimate the future situation of the climate in different zones of the Earth. As hydro-energy is directly related to climatic parameters like rainfall and evaporation, such energy resources will have to withstand the onset of climatic abnormalities. The present investigation has tried to assess the impact of climatic abnormalities upon the hydropower potential of different regions of the world. In this regard, multi-criteria decision making and a neural network are used to predict the impact of the change through an index. The results from the study show that the hydro-energy potential of the Asian region is the most vulnerable with respect to other regions of the world. The model results also encourage further application of the index to analyze the impact of climate change on the potential of hydro-energy.
Keywords: hydro-energy potential, neural networks, multi criteria decision analysis, environmental and ecological engineering
Procedia PDF Downloads 549
2924 A Comparative Study on ANN, ANFIS and SVM Methods for Computing Resonant Frequency of A-Shaped Compact Microstrip Antennas
Authors: Ahmet Kayabasi, Ali Akdagli
Abstract:
In this study, three robust predicting methods, namely the artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS) and support vector machine (SVM), were used for computing the resonant frequency of A-shaped compact microstrip antennas (ACMAs) operating in the UHF band. Firstly, the resonant frequencies of 144 ACMAs with various dimensions and electrical parameters were simulated with the help of IE3D™ based on the method of moments (MoM). The ANN, ANFIS and SVM models for computing the resonant frequency were then built by considering the simulation data. 124 simulated ACMAs were utilized for training, and the remaining 20 ACMAs were used for testing the ANN, ANFIS and SVM models. The performance of the ANN, ANFIS and SVM models is compared for the training and testing processes. The average percentage errors (APE) regarding the computed resonant frequencies for training of the ANN, ANFIS and SVM were obtained as 0.457%, 0.399% and 0.600%, respectively. The constructed models were then tested and APE values of 0.601% for ANN, 0.744% for ANFIS and 0.623% for SVM were achieved. The results obtained here show that the ANN, ANFIS and SVM methods can be successfully applied to compute the resonant frequency of ACMAs, since they are useful and versatile methods that yield accurate results.
Keywords: a-shaped compact microstrip antenna, artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS), support vector machine (SVM)
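The abstract does not spell out the error metric; the APE figures quoted are consistent with the standard definition, reconstructed here as an assumption:

```latex
\mathrm{APE} = \frac{100\%}{N} \sum_{i=1}^{N}
  \left| \frac{f_{\mathrm{sim},i} - f_{\mathrm{comp},i}}{f_{\mathrm{sim},i}} \right|
```

where f_sim,i is the IE3D-simulated resonant frequency of the i-th ACMA, f_comp,i is the frequency computed by the ANN, ANFIS or SVM model, and N is the number of antennas in the training or test set.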
Procedia PDF Downloads 441
2923 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience
Authors: Amanda Kavner, Richard Lamb
Abstract:
Faster growth in science and technology in other nations may make staying globally competitive more difficult without shifting the focus to how science is taught in US classes. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is necessary to design resources which enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students' representational competence, the visualization skills necessary to process these science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students' visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional Near-Infrared Spectroscopy (fNIR) data. These data were previously collected by Project LENS, standing for Leveraging Expertise in Neurotechnologies, a Science of Learning Collaborative Network (SL-CN) of scholars of STEM education from three US universities (NSF award 1540888), utilizing mental rotation tasks to assess student visual literacy. Hemodynamic response data from fNIRsoft were exported as an Excel file, with 80 each of 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After changing strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner's TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of its predictions were accurate. The model determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time and fNIR optode #16, located along the right prefrontal cortex, which is important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements and such a model for analysis, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, therefore improving science literacy.
Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience
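As a hedged illustration (using scikit-learn rather than RapidMiner, and synthetic data in place of the fNIR recordings), a gradient-boosted trees model with the stated 140 trees and depth 7 can be fit and its feature importances inspected; the feature set below is an assumption.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the merged fNIR + complexity spreadsheet: a few
# hemodynamic/behavioural features per mental-rotation item, with a binary
# label for whether the rotation was answered successfully.
rng = np.random.default_rng(0)
X = rng.normal(size=(800, 5))        # e.g. optode signals, response time, item number
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=800) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(n_estimators=140, max_depth=7, random_state=0)
model.fit(X_train, y_train)

print("test accuracy:", model.score(X_test, y_test))
print("feature importances:", model.feature_importances_.round(3))
```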
Procedia PDF Downloads 119
2922 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction
Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach
Abstract:
X-ray phase contrast imaging has been shown to yield better contrast compared to conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range. This can potentially lead to better diagnosis for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible using a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less robust experimental setup. However, previous studies were done using a particular X-ray source (a liquid-metal jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with just a small modification of a commercial bench-top micro-CT (computed tomography) scanner, by introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. However, it needs a suitable algorithm for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with a high temporal resolution is particularly challenging. Different reconstruction methods, including neural network based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, in order to address the issue that neural networks require large amounts of training data to obtain high-quality reconstructions.
Keywords: micro-CT, neural networks, reconstruction, speckle-based X-ray phase contrast
Procedia PDF Downloads 257
2921 Intrusion Detection in Computer Networks Using a Hybrid Model of Firefly and Differential Evolution Algorithms
Authors: Mohammad Besharatloo
Abstract:
Intrusion detection is an important research topic in network security because of the increasing growth in the use of computer network services. Intrusion detection is done with the aim of detecting unauthorized use or abuse of networks and systems by intruders. Therefore, the intrusion detection system is an efficient tool to control users' access through some predefined regulations. Since the data used in intrusion detection systems have high dimensionality, a proper representation is required to reveal the basic structure of these data. Therefore, it is necessary to eliminate the redundant features to create the best representation subset. In the proposed method, a hybrid model of the differential evolution and firefly algorithms was employed to choose the best subset of features. In addition, a decision tree and a support vector machine (SVM) are adopted to determine the quality of the selected features. First, the sorted population is divided into two sub-populations, and the two optimization algorithms are applied to these sub-populations, respectively. Then, the sub-populations are merged to create the population for the next iteration. The performance evaluation of the proposed method is based on KDD Cup99. The simulation results show that the proposed method has better performance than the other methods in this context.
Keywords: intrusion detection system, differential evolution, firefly algorithm, support vector machine, decision tree
Procedia PDF Downloads 91
2920 Deep Reinforcement Learning Model Using Parameterised Quantum Circuits
Authors: Lokes Parvatha Kumaran S., Sakthi Jay Mahenthar C., Sathyaprakash P., Jayakumar V., Shobanadevi A.
Abstract:
With the evolution of technology, the need to solve complex computational problems like machine learning and deep learning has shot up. But even the most powerful classical supercomputers find it difficult to execute these tasks. With the recent development of quantum computing, researchers and tech giants strive for new quantum circuits for machine learning tasks, as present work on Quantum Machine Learning (QML) promises less memory consumption and reduced model parameters. But it is strenuous to simulate classical deep learning models on existing quantum computing platforms due to the inflexibility of deep quantum circuits. As a consequence, it is essential to design viable quantum algorithms for QML for noisy intermediate-scale quantum (NISQ) devices. The proposed work aims to explore Variational Quantum Circuits (VQC) for Deep Reinforcement Learning by remodeling the experience replay and target network into a VQC representation. In addition, to reduce the number of model parameters, quantum information encoding schemes are used to achieve better results than classical neural networks. VQCs are employed to approximate the deep Q-value function for decision-making and policy-selection reinforcement learning with experience replay and the target network.
Keywords: quantum computing, quantum machine learning, variational quantum circuit, deep reinforcement learning, quantum information encoding scheme
Procedia PDF Downloads 134
2919 Detecting Venomous Files in IDS Using an Approach Based on Data Mining Algorithm
Authors: Sukhleen Kaur
Abstract:
In security infrastructure, the Intrusion Detection System (IDS) has become an important component. The IDS has received increasing attention in recent years. The IDS is one of the effective ways to detect different kinds of attacks and malicious code in a network and helps to secure the network. Data mining techniques can be applied to the IDS to analyse the large amount of data and give better results. Data mining can contribute to improving intrusion detection by adding a level of focus to anomaly detection. So far, studies have been carried out on finding the attacks, but this paper detects the malicious files. Some intruders do not attack directly; they hide harmful code inside files, or may corrupt those files and attack the system. These files are detected according to some defined parameters, which form two lists of files: normal files and harmful files. After that, data mining is performed. In this paper, a hybrid classifier has been used via the Naive Bayes and RIPPER classification methods. The results show how an uploaded file in the database is tested against the parameters, characterised as either a normal or a harmful file, and then mined. Moreover, when a user tries to mine a harmful file, an exception is generated stating that mining cannot be performed on corrupted or harmful files.
Keywords: data mining, association, classification, clustering, decision tree, intrusion detection system, misuse detection, anomaly detection, naive Bayes, ripper
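The classification step alone can be sketched as follows: each file is reduced to a numeric feature vector (the "defined parameters", e.g. entropy or counts of suspicious byte patterns) and labelled normal (0) or harmful (1). The features and data are illustrative; the paper's hybrid also includes the RIPPER rule learner, which is not part of scikit-learn and is omitted here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic feature vectors for normal and harmful files (placeholders)
rng = np.random.default_rng(0)
normal = rng.normal([4.0, 0.2, 1.0], 0.5, size=(200, 3))
harmful = rng.normal([7.5, 0.8, 6.0], 0.7, size=(200, 3))
X = np.vstack([normal, harmful])
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GaussianNB().fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))

# Files predicted harmful would be placed on the "harmful" list and excluded
# from mining (the exception path described in the abstract).
print("prediction for a suspicious file:", clf.predict([[7.8, 0.9, 5.5]]))
```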
Procedia PDF Downloads 414
2918 Building a Dynamic News Category Network for News Sources Recommendations
Authors: Swati Gupta, Shagun Sodhani, Dhaval Patel, Biplab Banerjee
Abstract:
News sources generally publish news in different broad categories. These categories can be generic, such as Business, Sports, etc., time-specific, such as World Cup 2015 and Nepal Earthquake, or both. It is up to the news agencies to build the categories. Extracting news categories automatically from numerous online news sources is expected to be helpful in many applications, including news source recommendations and time-specific news category extraction. To address this issue, existing systems like the DMOZ directory and the Yahoo directory are mostly considered, though they are largely human-annotated and do not consider the time dynamism of the categories of news websites. As a remedy, we propose in this paper an approach to automatically extract news category URLs from news websites. A news category URL is a link which points to a category on a news website. We use the news category URLs as prior knowledge to develop a news source recommendation system which contains news sources listed in various categories in order of ranking. In addition, we also propose an approach to rank numerous news sources in different categories using various parameters like Traffic Based Website Importance, Social Media Analysis and Category Wise Article Freshness. Experimental results on category URLs captured from the GDELT project from April 2016 to December 2016 show the adequacy of the proposed method.
Keywords: news category, category network, news sources, ranking
Procedia PDF Downloads 386
2917 Non-Targeted Adversarial Object Detection Attack: Fast Gradient Sign Method
Authors: Bandar Alahmadi, Manohar Mareboyana, Lethia Jackson
Abstract:
Today, there are many applications that use computer vision models, such as face recognition, image classification, and object detection. The accuracy of these models is very important for the performance of these applications. One challenge facing computer vision models is the adversarial example attack. In computer vision, an adversarial example is an image that is intentionally designed to cause the machine learning model to misclassify it. One very well-known method used to attack Convolutional Neural Networks (CNNs) is the Fast Gradient Sign Method (FGSM). The goal of this method is to find the perturbation that can fool the CNN using the gradient of the CNN's cost function. In this paper, we introduce a novel model that attacks the Region-based Convolutional Neural Network (R-CNN) using FGSM. We first extract the regions that are detected by the R-CNN, and then we resize these regions to the size of regular images. Then, we find the best perturbation of the regions that can fool the CNN using FGSM. Next, we add the resulting perturbation to the attacked region to get a new region image that looks similar to the original image to human eyes. Finally, we place the regions back into the original image and test the R-CNN with the attacked images. Our model was able to reduce the accuracy of the R-CNN when tested with the Pascal VOC 2012 dataset.
Keywords: adversarial examples, attack, computer vision, image processing
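The core FGSM step described here (perturb each extracted region along the sign of the input gradient of the classification loss) can be sketched in PyTorch as follows; the network, epsilon, and input are placeholders rather than the paper's R-CNN setup, and the `weights=None` argument assumes a recent torchvision.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an FGSM-perturbed copy of `image` (a cropped, resized region).

    Non-targeted FGSM: x_adv = x + epsilon * sign(dL/dx), where L is the
    classification loss of the model on the true label.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()

# Toy usage with a classifier standing in for the CNN applied to regions
# extracted by R-CNN (random placeholder input, not Pascal VOC data).
if __name__ == "__main__":
    from torchvision.models import resnet18
    model = resnet18(weights=None).eval()     # untrained placeholder network
    region = torch.rand(1, 3, 224, 224)       # one resized detected region
    label = torch.tensor([0])
    adv_region = fgsm_perturb(model, region, label)
    print("max pixel change:", float((adv_region - region).abs().max()))
```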
Procedia PDF Downloads 193