Search results for: corporate memory
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 671

71 A Machine Learning Approach for Anomaly Detection in Environmental IoT-Driven Wastewater Purification Systems

Authors: Giovanni Cicceri, Roberta Maisano, Nathalie Morey, Salvatore Distefano

Abstract:

The main goal of this paper is to present a solution for water purification systems that combines an Environmental Internet of Things (EIoT) platform, used to monitor and control water quality, with machine learning (ML) models that support decision making and speed up the purification process. A real case study has been implemented by deploying an EIoT platform and a network of devices, called Gramb meters and belonging to the Gramb project, on wastewater purification systems located in Calabria, in southern Italy. The data thus collected are used to control wastewater quality, detect anomalies, and predict the behaviour of the purification system. To this end, three statistical and machine learning models have been adopted and compared: Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM) autoencoder, and Facebook Prophet (FP). The results demonstrate that the ML solution (LSTM) outperforms the classical statistical approaches (ARIMA, FP) in accuracy, efficiency, and effectiveness in monitoring and controlling the wastewater purification processes.

Keywords: EIoT, machine learning, anomaly detection, environment monitoring.
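
As an illustration of the reconstruction-error idea behind the LSTM solution above, the following minimal Python sketch trains a small LSTM autoencoder on synthetic "normal" sensor windows and flags windows whose reconstruction error exceeds a percentile threshold; the architecture, data, and threshold are assumptions for illustration, not the paper's actual setup.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

# Toy "normal" sensor windows: 1000 sequences of 24 time steps, 3 water-quality channels
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(1000, 24, 3)).astype("float32")

# LSTM autoencoder: encode each window, reconstruct it, and flag windows
# whose reconstruction error exceeds a percentile of the training error.
model = Sequential([
    LSTM(32, activation="tanh", input_shape=(24, 3)),
    RepeatVector(24),
    LSTM(32, activation="tanh", return_sequences=True),
    TimeDistributed(Dense(3)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(normal, normal, epochs=5, batch_size=64, verbose=0)

def anomaly_scores(windows):
    recon = model.predict(windows, verbose=0)
    return np.mean((windows - recon) ** 2, axis=(1, 2))  # per-window MSE

threshold = np.percentile(anomaly_scores(normal), 99)    # tolerance on normal data
test = normal.copy()
test[:5, 10:, 0] += 6.0                                  # inject a synthetic fault
print((anomaly_scores(test) > threshold)[:10])           # the perturbed windows should stand out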

70 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm

Authors: Ghada Badr, Arwa Alturki

Abstract:

The biological function of an RNA molecule depends on its structure. The objective of alignment is to find the homology between two or more RNA secondary structures. Knowing the common functionalities between two RNA structures allows a better understanding of them and the discovery of other relationships between them. Besides, identifying non-coding RNAs (those not translated into a protein) is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed, but most of them perform partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignment. Less attention has been given in the literature to the use of efficient RNA structure representations, and full structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N²) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and N is the maximum number of components in the two structures. The proposed algorithm compares the two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments on different real and simulated datasets illustrate the efficiency of the CompPSA algorithm compared to other approaches. The CompPSA algorithm shows an accurate similarity measure between components. The algorithm gives the user the flexibility to align the two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm demonstrates scalability and efficiency in time and memory performance.

Keywords: Alignment, RNA secondary structure, pairwise, component-based, data mining.
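
The paper's exact scoring is not reproduced here, but the following Python sketch shows the general shape of an O(N²) component-based pairwise alignment: each structure is a list of components with weighted features (position, full length, stem length), and a Needleman-Wunsch style dynamic program aligns the component lists; feature values, weights, and gap cost are illustrative assumptions.

import numpy as np

# Each RNA secondary structure is a list of components, each described by
# weighted features: (position, full_length, stem_length).  Values and
# weights are illustrative; the paper's exact scoring is not reproduced here.
S1 = [(0, 12, 5), (15, 20, 8), (40, 9, 4)]
S2 = [(1, 11, 5), (18, 22, 7), (41, 30, 12)]
WEIGHTS = np.array([0.4, 0.3, 0.3])
GAP = 2.0

def comp_dist(a, b):
    # weighted L1 distance between two component feature vectors
    return float(np.dot(WEIGHTS, np.abs(np.array(a) - np.array(b))) / 10.0)

def align(A, B):
    # Needleman-Wunsch style dynamic programming over components: O(N^2)
    n, m = len(A), len(B)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * GAP
    D[0, :] = np.arange(m + 1) * GAP
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j - 1] + comp_dist(A[i - 1], B[j - 1]),
                          D[i - 1, j] + GAP,      # skip a component of A
                          D[i, j - 1] + GAP)      # skip a component of B
    return D[n, m]

print("alignment cost:", align(S1, S2))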

69 Stochastic Edge Based Anomaly Detection for Supervisory Control and Data Acquisitions Systems: Considering the Zambian Power Grid

Authors: Lukumba Phiri, Simon Tembo, Kumbuso Joshua Nyoni

Abstract:

In Zambia, recent initiatives by power operators such as ZESCO and CEC, and by consumers such as the mines, to upgrade power systems into smart grids target an even tighter integration with information technologies to enable the integration of renewable energy sources, local and bulk generation, and demand response. For the reliable operation of smart grids, their information infrastructure must therefore be secure and reliable in the face of both failures and cyberattacks. Due to the nature of these systems, ICS/SCADA cybersecurity and governance face additional challenges compared to corporate networks, and critical systems may be left exposed. Control frameworks such as the NIST framework exist internationally; however, they are generic and do not meet the domain-specific needs of SCADA systems. Zambia is also lagging in cybersecurity awareness and adoption, so there is concern about securing the ICS controlling infrastructure critical to the Zambian economy, as few facts are known about the true security posture. In this paper, we present a stochastic Edge-based Anomaly Detection for SCADA systems (SEADS) framework for threat modeling and risk assessment. SEADS enables the calculation of steady-state probabilities that are further applied to establish metrics such as system availability, maintainability, and reliability.

Keywords: Anomaly detection, SmartGrid, edge, maintainability, reliability, stochastic process.
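
As a hedged illustration of how steady-state probabilities yield availability-style metrics, the sketch below solves pi Q = 0 for a small continuous-time Markov chain over hypothetical edge-node states; the states and rates are invented for illustration and are not taken from the SEADS framework.

import numpy as np

# Continuous-time Markov chain over SCADA edge-node states.
# States and rates are illustrative, not taken from the SEADS paper.
states = ["healthy", "degraded", "failed"]
Q = np.array([            # generator matrix: Q[i, j] = rate i -> j, rows sum to 0
    [-0.02, 0.015, 0.005],
    [ 0.10, -0.14, 0.04 ],
    [ 0.25,  0.00, -0.25],
])

# Steady-state distribution pi solves pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(states))])
b = np.append(np.zeros(len(states)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]          # system usable unless fully failed
print(dict(zip(states, pi.round(4))), "availability:", round(availability, 4))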

68 Discrete Polyphase Matched Filtering-based Soft Timing Estimation for Mobile Wireless Systems

Authors: Thomas O. Olwal, Michael A. van Wyk, Barend J. van Wyk

Abstract:

In this paper, we present a soft timing phase estimation (STPE) method for wireless mobile receivers operating at low signal-to-noise ratios (SNRs). Discrete Polyphase Matched (DPM) filters, a log-maximum a posteriori probability (log-MAP) algorithm, and/or a Soft-Output Viterbi Algorithm (SOVA) are combined to derive a new timing recovery (TR) scheme. We apply this scheme to a wireless cellular communication system model that comprises a raised cosine filter (RCF) and a bit-interleaved turbo-coded multi-level modulation (BITMM) scheme, with the channel assumed to be memoryless. Furthermore, no clock signals are transmitted to the receiver, contrary to classical data-aided (DA) models. The new model ensures that both the bandwidth and the power of the communication system are conserved. However, the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests of bit error rate (BER) and block error rate (BLER) at low SNR reveal that the proposed iterative soft timing recovery (ISTR) scheme outperforms conventional schemes.

Keywords: discrete polyphase matched filters, maximum likelihood estimators, soft timing phase estimation, wireless mobile systems.

67 Cross-Cultural Cooperation and Innovation: An Exploration of Chinese Foreign Direct Investment in Europe

Authors: Yongsheng Guo, Shuchao Li

Abstract:

This study explores Chinese Foreign Direct Investment (FDI) in Europe and the cross-cultural cooperation between Chinese and European managers. The aim of this research is to shed light on the phenomenon of investments in developed countries from an emerging market and to gain insights into the cooperation process. A grounded theory approach is adopted, and 46 semi-structured interviews were conducted with 10 case companies in Germany and 13 case companies in the UK. Grounded theory models are developed from the primary data, and interview quotes are used to support the themes. The interviewees perceived differences between the two parties in cultural traits, management concepts, knowledge structure, and resource endowment. Chinese and European partners can take advantage of different resources and cooperate in innovative ways to improve corporate performance. Moreover, both parties appreciate each other's ethical and cultural characteristics and complement each other to develop a combined organizational culture. This study proposes an ethical and cultural diversity theory in international management, arguing that a team with diversified values and behaviours may be more energized and motivated. The study suggests that resource complementarity and cross-cultural cooperation might be an advantage in international investment. Firms are encouraged to open their minds and cooperate with partners that have different resources and cultures. The authorities may review FDI policies to reduce social and political barriers.

Keywords: Cross-culture, FDI, China, Europe.

66 An Investigation on Students’ Reticence in Iranian University EFL Classrooms

Authors: Azizeh Chalak, Firouzeh Baktash

Abstract:

Reticence is a prominent and complex phenomenon which occurs in foreign language classrooms and influences students' oral passivity. The present study investigated the extent to which students experience reticence in EFL classrooms and explored the underlying factors triggering it. The participants were 104 Iranian freshman undergraduate EFL students, male and female, enrolled in listening and speaking courses and majoring in English at the Islamic Azad University Isfahan (Khorasgan) Branch and the University of Isfahan, Isfahan, Iran. To collect the data, the Reticence Scale-12 (RS-12) questionnaire, which measures the level of reticence across six dimensions (anxiety, knowledge, timing, organization, skills, and memory), was administered to the participants. The statistical analyses showed that the level of reticence was high among the Iranian EFL undergraduate students, and that their major problems were feelings of anxiety and delivery skills. Moreover, the results revealed that factors such as low English proficiency, the teaching method, and lack of confidence contributed to the students' reticence in Iranian EFL classrooms. Language teachers' awareness of learners' reticence can help them choose more appropriate activities and provide a friendly environment, hopefully enhancing more effective participation of EFL learners. The findings have implications for EFL teachers, learners, and policy makers.

Keywords: Reticence, reticence scale, anxiety, Iranian EFL learners.

65 The Effects of Physical Activity and Serotonin on Depression, Anxiety, Body Image and Mental Health

Authors: Sh. Khoshemehry, M. E. Bahram, M. J. Pourvaghar

Abstract:

Sport has found a special place as an influential phenomenon in all societies of the contemporary world. The relationship between physical activity and exercise and the different sciences has opened new fields of human study. The range of issues related to exercise and physical education is such that it requires specialized sciences and dedicated studies. In this article, the psychological and social aspects of exercise are examined for children and adults across different age groups. In addition to its benefits for physical health, exercise and regular physical movement have a great impact on the mental and social health of the individual, affecting adaptability in society and personality. Exercise plays a role in the treatment of conditions such as depression, anxiety, and stress, and influences body image and memory. Exercise offers young people a safe setting in which to achieve optimal human development. The effects of sensorimotor skills on mental processes and mental development are such that many psychologists and sports science experts believe these activities should be given priority in training programs. Familiarity of students and scholars with different programs and methods of sensorimotor activity not only develops their mental processes but also increases mental health and vitality and enhances self-confidence.

Keywords: Anxiety, mental health, physical activity, serotonin.

64 Result of Fatty Acid Content in Meat of Selenge Breed Younger Cattle

Authors: Myagmarsuren Soronzonjav, N. Togtokhbayar, L. Davaahuu, B. Minjigdorj, Seong Gu Hwang

Abstract:

The number of consumers of natural or organic products has increased in recent years, and this demand is pushing up the consumption of healthy meat. At the same time, consumers pay more attention to healthy fats, especially unsaturated fatty acids. These long-chain fatty acids reduce heart disease, improve memory and eyesight, and activate the immune system. One of the important issues to be solved for Mongolia's food security is to provide healthy, fresh, widely available, and cheap meat for the population. The importance of Selenge breed meat production for supplying quality meat is therefore increasing, since Selenge breed cattle multiply rapidly, are beneficial in terms of income, are of the same quality as the Mongolian breed, and are well digested by the human body. We studied the lipid, unsaturated, and saturated fatty acid contents of meat from Selenge breed younger cattle by muscle type. Our results reveal that 11 saturated fatty acids were detected. Among the saturated fatty acids, the palmitic acid content was 23.61% in the sirloin, 24.01% in the round and chuck, and 24.83% in the short loin.

Keywords: Chromatogram, gas chromatography, organic resolving, saturated and unsaturated fatty acids.

63 The Characteristics of Transformation of Institutional Changes and Georgia

Authors: Nazira Kakulia

Abstract:

The analysis of the transformation of institutional changes outlines two important characteristics: the speed of the changes and their sequence. Successful transformation must be carried out in three different stages. In the first stage, macroeconomic stabilization must be achieved with the help of fiscal and monetary tools. A two-tier banking system should be established, the active functions of the central bank should be replaced by passive ones (reserve requirements and the refinancing rate), and the involvement of the private sector should grow. Fiscal policy here means the creation of a tax system to replace previously existing direct state revenues; the share of subsidies in state expenditure must also be reduced. The second stage begins after macroeconomic stabilization is reached, at the time when formal institutions that stimulate private business are changed: corporate legislation creates a competitive environment in the market, the privatization of state companies takes place, and bankruptcy and contract law are created. The third stage is the most extended one and involves the formation of all the state structures necessary for the further proper functioning of a market economy. These three stages, concerning the cycle of political and social transformation and the hierarchy of changes, can also be grouped by a different methodology: in the first and shortest stage, the transfer of power takes place; in the second stage, institutions corresponding to the new goals are created; the last phase of transformation is extended in time and includes infrastructural, socio-cultural, and socio-structural changes. The main goal of this research is to explore and identify the features of such models.

Keywords: Competitive environment, fiscal policy, macroeconomic stabilization.

62 High Accuracy ESPRIT-TLS Technique for Wind Turbine Fault Discrimination

Authors: Saad Chakkor, Mostafa Baghouri, Abderrahmane Hajraoui

Abstract:

The ESPRIT-TLS method appears to be a good choice for high-resolution fault detection in induction machines, as it is highly effective in identifying frequencies and amplitudes. However, it has a high computational complexity which hinders its implementation in real-time fault diagnosis. To avoid this problem, a Fast-ESPRIT algorithm that combines an IIR band-pass filtering technique, a decimation technique, and the original ESPRIT-TLS method is employed to accurately extract frequencies and their magnitudes from the wind turbine stator current at a lower computational cost. The proposed algorithm has been applied to meet the wind turbine machine's need for online, fast, and proactive condition monitoring. This type of remote and periodic maintenance provides an acceptable machine lifetime, minimizes downtime, and maximizes productivity. The developed technique has been evaluated by computer simulations under many fault scenarios. The results demonstrate the performance of Fast-ESPRIT, offering rapid, high-resolution harmonic recognition with minimum computation time and lower memory cost.

Keywords: Spectral Estimation, ESPRIT-TLS, Real Time, Diagnosis, Wind Turbine Faults, Band-Pass Filtering, Decimation.
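
The front end of such a pipeline can be sketched in Python with SciPy: an IIR band-pass filter isolates the band around a fault harmonic and decimation lowers the sample rate before the (omitted) ESPRIT-TLS step; the signal, rates, and fault frequency below are illustrative assumptions.

import numpy as np
from scipy import signal

# Front end of a Fast-ESPRIT style pipeline: isolate the band around the fault
# harmonic with an IIR band-pass filter, then decimate before the (not shown)
# ESPRIT-TLS step.  Signal, rates, and fault frequency are illustrative.
fs = 10_000.0
t = np.arange(0, 2.0, 1.0 / fs)
current = (np.sin(2 * np.pi * 50 * t)            # fundamental stator current
           + 0.05 * np.sin(2 * np.pi * 130 * t)  # small fault-related harmonic
           + 0.02 * np.random.default_rng(0).normal(size=t.size))

# IIR band-pass around the harmonic of interest (100-160 Hz here)
sos = signal.butter(4, [100, 160], btype="bandpass", fs=fs, output="sos")
filtered = signal.sosfiltfilt(sos, current)

# Decimation by 10 lowers the sample rate (and ESPRIT's cost) to 1 kHz
decimated = signal.decimate(filtered, 10)
fs_dec = fs / 10

# Quick check of the surviving component (ESPRIT-TLS would refine this estimate)
freqs = np.fft.rfftfreq(decimated.size, 1.0 / fs_dec)
spectrum = np.abs(np.fft.rfft(decimated * np.hanning(decimated.size)))
print("dominant frequency after filtering/decimation:", freqs[np.argmax(spectrum)])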

61 Facilitation of Digital Culture and Creativity through an Ideation Strategy: A Case Study with an Incumbent Automotive Manufacturer

Authors: K. Ö. Kartal, L. Maul, M. Hägele

Abstract:

With the development of new technologies come additional opportunities for founding companies and creating new markets. The barriers to entry are lowered, and technology makes old business models obsolete. Incumbent companies have to be adaptable to this quickly changing environment: they have to start the process of digital maturation and be able to adapt quickly to new and drastic changes that might arise. One of the biggest barriers for organizations in doing so is their culture. This paper shows the core elements of a corporate culture that supports the process of digital maturation in incumbent organizations. Furthermore, it explores how ideation and innovation can be used strategically to facilitate those core elements of culture that promote digital maturity. Focus areas are identified for the design of ideation strategies, with the aim of making the facilitation and incitation process more effective in the short to long term. To this end, one in-depth case study is conducted, with data collected from interviews, observation, document review, and surveys. The findings indicate that digital maturity is connected to cultural shift, and 11 relevant elements of digital culture are identified which have to be considered. Based on these 11 core elements, five focus areas that need to be regarded in the design of a strategy that uses ideation and innovation to facilitate the cultural shift are identified: focus topics, rewards and communication, structure and frequency, regions, and new online formats.

Keywords: Digital transformation, innovation management, ideation strategy, creativity culture, change.

60 Simple Agents Benefit Only from Simple Brains

Authors: Valeri A. Makarov, Nazareth P. Castellanos, Manuel G. Velarde

Abstract:

In order to answer the general question, "What does a simple agent with a limited lifetime require for constructing a useful representation of the environment?", we propose a robot platform including the simplest probabilistic sensory and motor layers. We then use the platform as a test-bed for evaluating the navigational capabilities of the robot with different "brains". We claim that protocognitive behavior is not a consequence of highly sophisticated sensory-motor organs but instead emerges through an increase in internal complexity and reutilization of minimal sensory information. We show that the most fundamental robot element, the short-time memory, is essential for obstacle avoidance. However, in the simplest conditions with no obstacles, the straightforward memoryless robot is usually superior. We also demonstrate how low-level action planning, involving essentially nonlinear dynamics, provides a considerable gain in robot performance by dynamically changing the robot's strategy. Still, for very short lifetimes the brainless robot is superior. Accordingly, we suggest that small organisms (or agents) with short lifetimes do not require complex brains and can even benefit from simple brain-like (reflex) structures. To some extent this may mean that the control blocks of modern robots are too complicated relative to their lifetime and mechanical abilities.

Keywords: Neural network, probabilistic control, robot navigation.

59 Double Reduction of Ada-ECATNet Representation using Rewriting Logic

Authors: Noura Boudiaf, Allaoua Chaoui

Abstract:

One major difficulty facing developers of concurrent and distributed software is the analysis of concurrency-based faults such as deadlocks. Petri nets are used extensively in the verification of the correctness of concurrent programs. ECATNets [2] are a category of algebraic Petri nets based on a sound combination of algebraic abstract types and high-level Petri nets. ECATNets have 'sound' and 'complete' semantics because of their integration in rewriting logic [12] and its programming language Maude [13]. Rewriting logic is considered one of the most powerful logics for the description, verification, and programming of concurrent systems. We proposed in [4] a method for translating Ada-95 tasking programs to the ECATNets formalism (Ada-ECATNet). In this paper, we show that the ECATNets formalism provides a more compact translation for Ada programs compared to other approaches based on simple Petri nets or Colored Petri nets (CPNs). Such a translation reduces not only the size of the program but also the number of program states. We also show how this compact Ada-ECATNet may be reduced further by applying reduction rules to it. This double reduction of the Ada-ECATNet permits a considerable reduction of the memory space and run time of the corresponding Maude program.

Keywords: Ada tasking, ECATNets, Algebraic Petri Nets, Compact Representation, Analysis, Rewriting Logic, Maude.

58 Mechanical Design and Theoretical Analysis of a Four Fingered Prosthetic Hand Incorporating Embedded SMA Bundle Actuators

Authors: Kevin T. O'Toole, Mark M. McGrath

Abstract:

The psychological and physical trauma associated with the loss of a human limb can severely impact the quality of life of an amputee, rendering even the most basic tasks very difficult. A prosthetic device can be of great benefit to the amputee in the performance of everyday tasks. This paper outlines a proposed mechanical design for a 12-degree-of-freedom SMA-actuated artificial hand. It is proposed that the SMA wires be embedded intrinsically within the hand structure, which allows significant flexibility for use either as a prosthetic hand solution or as part of a complete lower-arm prosthetic solution. A modular approach is taken in the design, facilitating ease of manufacture and assembly and, more importantly, allowing the end user to easily replace SMA wires in the event of failure. A biomimetic approach has been taken during the design process, meaning that the artificial hand should replicate a human hand as far as possible with due regard to functional requirements. The proposed design has been exposed to appropriate loading through the use of finite element analysis (FEA) to ensure that it is structurally sound. Theoretical analysis of the mechanical framework was also carried out to establish the limits of the angular displacement and velocity of the fingertip, as well as fingertip force generation. A combination of various polymers and titanium, which are suitably lightweight, is proposed for the manufacture of the design.

Keywords: Hand prosthesis, mechanical design, shape memory alloys, wire bundle actuation.

57 Identifying Business Opportunities Based on Patent and Trademark Portfolios: A Technology-Based Service Industry Case

Authors: Mingook Lee, Sungjoo Lee

Abstract:

As technology-based service industries grow drastically worldwide, companies are recognizing the importance of market preoccupancy and have made efforts to capture a large market to gain the upper hand. To this end, a focus on patents can be used to determine the properties of a technology, as well as to capture advantages in technical skills in comparison with a firm's competitors. However, technology-based services largely depend not only on their technological value but also on their economic value, due to the recognized worth that is passed to a plurality of users. Thus, it is important to determine whether there are any competitors in the target areas and what services they provide in each field. Despite this importance, little effort has been made to systematically benchmark competitors in order to identify business opportunities. This study therefore aims not only to identify the position of each technology-centered service company in complex market dynamics, but also to discover new business opportunities. For this, we consider both technology and market environments simultaneously by utilizing patent data as a representative proxy for technology and trademark data as an index of a firm's target goods and services. Theoretically, this is one of the earliest attempts to combine patent data and trademark data to analyze corporate strategies. In practice, the research results are expected to be used as a decision criterion to diagnose the economic value that companies can obtain by entering the market, as well as the technological value to be passed on to their customers. Thus, the proposed approach can be useful in supporting effective technology and business strategies in a firm.

Keywords: Business opportunity, patent, portfolio analysis, trademark.

56 Addressing Scalability Issues of Named Entity Recognition Using Multi-Class Support Vector Machines

Authors: Mona Soliman Habib

Abstract:

This paper explores the scalability issues associated with solving the Named Entity Recognition (NER) problem using Support Vector Machines (SVM) and high-dimensional features. The performance results of a set of experiments conducted using binary and multi-class SVM with increasing training data sizes are examined. The NER domain chosen for these experiments is the biomedical publications domain, selected due to its importance and inherent challenges. A simple machine learning approach is used that eliminates prior language knowledge such as part-of-speech or noun phrase tagging, thereby allowing for its applicability across languages. No domain-specific knowledge is included. The accuracy measures achieved are comparable to those obtained using more complex approaches, which motivates investigating ways to improve the scalability of multi-class SVM in order to make the solution more practical and usable. Improving the training time of multi-class SVM would make support vector machines a more viable and practical machine learning solution for real-world problems with large datasets. An initial prototype yields a great improvement in training time at the expense of increased memory requirements.

Keywords: Named entity recognition, support vector machines, language independence, bioinformatics.
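
A minimal Python sketch of the language-independent setup described above, using only surface token features and scikit-learn's LinearSVC (one-vs-rest linear SVMs) as the multi-class classifier; the tiny training sentences and label scheme are invented for illustration.

from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Token-level NER as multi-class classification with simple surface features
# only (no POS or noun-phrase tags), in the spirit of the paper.
def features(tokens, i):
    w = tokens[i]
    return {"word": w.lower(), "is_upper": w[0].isupper(),
            "has_digit": any(c.isdigit() for c in w),
            "suffix3": w[-3:].lower(),
            "prev": tokens[i - 1].lower() if i > 0 else "<s>"}

sentences = [(["IL-2", "activates", "T", "cells"], ["B-PROT", "O", "B-CELL", "I-CELL"]),
             (["p53", "binds", "DNA"], ["B-PROT", "O", "B-DNA"]),
             (["The", "patient", "received", "IL-2"], ["O", "O", "O", "B-PROT"])]
X = [features(toks, i) for toks, _ in sentences for i in range(len(toks))]
y = [tag for _, tags in sentences for tag in tags]

# LinearSVC trains one-vs-rest linear SVMs and scales to large sparse data
model = make_pipeline(DictVectorizer(sparse=True), LinearSVC())
model.fit(X, y)
test = ["IL-2", "binds", "cells"]
print(list(zip(test, model.predict([features(test, i) for i in range(len(test))]))))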

55 An FPGA Implementation of Intelligent Visual Based Fall Detection

Authors: Peng Shen Ong, Yoong Choon Chang, Chee Pun Ooi, Ettikan K. Karuppiah, Shahirina Mohd Tahir

Abstract:

Falling has been one of the major concerns and threats to the independence of the elderly in their daily lives. With the significant worldwide growth of the aging population, it is essential to have a fall detection solution which is able to operate at high accuracy in real time and supports large-scale implementation using multiple cameras. The Field Programmable Gate Array (FPGA) is a highly promising tool to be used as a hardware accelerator in many emerging embedded vision-based systems. Thus, the main objective of this paper is to present an FPGA-based solution for visual fall detection that meets stringent real-time requirements with high accuracy. A hardware architecture for visual fall detection which utilizes pixel locality to reduce memory accesses is proposed. By exploiting the parallel and pipeline architecture of the FPGA, our hardware implementation of visual fall detection is able to achieve a performance of 60 fps for a series of video analytic functions at VGA resolution (640x480). The results of this work show that FPGAs have great potential and impact in enabling large-scale vision systems in the future healthcare industry due to their flexibility and scalability.

Keywords: Fall detection, FPGA, hardware implementation.

54 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtime and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.

Keywords: Anomaly detection, autoencoder, data centers, deep learning.
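
The second stage of this pipeline can be sketched as follows, assuming the per-sensor autoencoders (not shown) have already produced reconstruction-error signals: simple statistics of each residual window are fed to a random forest; the synthetic residuals and features below are illustrative, not the paper's data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic per-sensor reconstruction-error windows: 600 windows of 50 samples
# for 3 sensors, with anomalies injected into the first 60 windows.
rng = np.random.default_rng(0)
n_windows, window, n_sensors = 600, 50, 3
residuals = np.abs(rng.normal(0.0, 0.2, size=(n_windows, window, n_sensors)))
labels = np.zeros(n_windows, dtype=int)
labels[:60] = 1
residuals[:60] += rng.normal(0.8, 0.3, size=(60, window, n_sensors))  # injected anomalies

def window_features(r):
    # mean, max and std of the reconstruction error, per sensor
    return np.concatenate([r.mean(axis=0), r.max(axis=0), r.std(axis=0)])

X = np.array([window_features(r) for r in residuals])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0, stratify=labels)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))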

53 Choosing R-tree or Quadtree Spatial Data Indexing in One Oracle Spatial Database System to Make Faster Showing Geographical Map in Mobile Geographical Information System Technology

Authors: Maruto Masserie Sardadi, Mohd Shafry bin Mohd Rahim, Zahabidin Jupri, Daut bin Daman

Abstract:

The latest Geographic Information System (GIS) technology makes it possible to administer the spatial components of daily "business objects" in the corporate database and to apply suitable geographic analysis efficiently in a desktop-focused application. We can use wireless internet technology to transfer spatial data from server to client or vice versa. However, the problem with wireless internet is system bottlenecks that can make the data transfer process inefficient, the reason being the large amount of spatial data. Optimization of the process of transferring and retrieving data is therefore an essential issue that must be considered. An appropriate decision between the R-tree and quadtree spatial data indexing methods can optimize the process. With the rapid proliferation of these databases in the past decade, extensive research has been conducted on the design of efficient data structures to enable fast spatial searching. Commercial database vendors like Oracle have also started implementing these spatial indexes to cater to large and diverse GIS applications. This paper focuses on the decision between R-tree and quadtree spatial indexing using an Oracle Spatial database in a mobile GIS application. Under our test conditions, the choice between quadtree and R-tree spatial data indexing in a single spatial database can save up to 42.5% of the retrieval time.

Keywords: Indexing, mobile GIS, MapViewer, Oracle Spatial Database.

52 Prediction on Housing Price Based on Deep Learning

Authors: Li Yu, Chenlu Jiao, Hongrun Xin, Yan Wang, Kaiyang Wang

Abstract:

In order to study the impact of various factors on housing prices, we propose building different prediction models based on deep learning that mine existing real estate data, in order to more accurately predict the housing price or its future trend. Considering that the factors which affect the housing price vary widely, the proposed prediction models fall into two categories. The first is based on multiple characteristic factors of the real estate. We built a Convolutional Neural Network (CNN) prediction model and a Long Short-Term Memory (LSTM) neural network prediction model based on deep learning, and a logistic regression model was implemented for comparison among the three models. The second category is time series models. Based on deep learning, we propose an LSTM-1 model that considers the time series alone, and then implement and compare the LSTM model and the Auto-Regressive Moving Average (ARMA) model. In this paper, a comprehensive study of second-hand housing prices in Beijing has been conducted from three aspects: data crawling and analysis, housing price prediction, and result comparison. Ultimately, the best-performing model was identified, which is of great significance for the evaluation and prediction of housing prices in the real estate industry.

Keywords: Deep learning, convolutional neural network, LSTM, housing prediction.
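
For the time-series category, a minimal Python sketch of the LSTM formulation is shown below: a scaled monthly price index is turned into lag-window/next-value pairs and fitted with a small LSTM; the synthetic series, window length, and network size are assumptions for illustration, and the ARMA comparison is omitted.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Synthetic monthly price index: trend + seasonality + noise
rng = np.random.default_rng(0)
months = np.arange(120)
price = 100 + 0.8 * months + 5 * np.sin(months / 6.0) + rng.normal(0, 1.5, months.size)

mu, sigma = price[:96].mean(), price[:96].std()       # scale using training months only
scaled = (price - mu) / sigma
window = 12
X = np.array([scaled[i:i + window] for i in range(len(scaled) - window)])[..., None]
y = scaled[window:]
split = 96 - window                                   # last two years held out

model = Sequential([LSTM(32, input_shape=(window, 1)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=150, batch_size=16, verbose=0)

pred = model.predict(X[split:], verbose=0).ravel() * sigma + mu   # back to price units
actual = y[split:] * sigma + mu
print("one-step RMSE on the held-out months:", round(float(np.sqrt(np.mean((pred - actual) ** 2))), 2))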

51 Investigation of Fire Damaged Concrete Using Nonlinear Resonance Vibration Method

Authors: Kang-Gyu Park, Sun-Jong Park, Hong Jae Yim, Hyo-Gyung Kwak

Abstract:

This paper attempts to evaluate the effect of fire damage on concrete by using the nonlinear resonance vibration method, one of the nonlinear nondestructive methods. Concrete exhibits not only a nonlinear stress-strain relation but also the hysteresis and discrete memory effects found in consolidated materials. Hysteretic materials typically show a shift of the linear resonance frequency, and this shift changes according to the degree of micro-damage. The degree of the shift can be obtained through the nonlinear resonance vibration method. Five exposure scenarios were considered in order to induce different levels of internal micro-damage. The effect of post-fire curing on fire-damaged concrete was also taken into account to confirm the change in internal damage. The hysteretic nonlinearity parameter was obtained from the amplitude-dependent resonance frequency shift after specific curing periods. In addition, the splitting tensile strength was measured on each sample to characterize the variation of residual strength. A correlation between the hysteretic nonlinearity parameter and residual strength was then proposed from the test results.

Keywords: Fire damaged concrete, nonlinear resonance vibration method, nonlinearity parameter, post-fire-curing, splitting tensile strength.
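
As a hedged illustration of how the nonlinearity parameter can be extracted, the sketch below fits the commonly used relation (f0 - f)/f0 = alpha * A to synthetic amplitude-versus-resonance-frequency measurements; the data and the exact amplitude definition are assumptions and may differ from the paper's procedure.

import numpy as np

# Synthetic amplitude sweep: resonance frequency drops as excitation grows.
amplitude = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])       # arbitrary drive units
resonance = np.array([5210.0, 5207.8, 5205.9, 5203.7, 5201.9, 5199.8])  # Hz

# Linear extrapolation of the resonance frequency to zero amplitude gives f0.
f0 = resonance[0] + (resonance[0] - resonance[1]) * amplitude[0] / (amplitude[1] - amplitude[0])
shift = (f0 - resonance) / f0                                # relative frequency shift

alpha, intercept = np.polyfit(amplitude, shift, 1)           # slope = nonlinearity parameter
print("estimated f0 (Hz):", round(f0, 1), " alpha:", round(alpha, 6))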

50 Experimental Investigation on Effect of Different Heat Treatments on Phase Transformation and Superelasticity of NiTi Alloy

Authors: Erfan Asghari Fesaghandis, Reza Ghaffari Adli, Abbas Kianvash, Hossein Aghajani, Homa Homaie

Abstract:

NiTi alloys possess excellent superelastic, shape memory, high-strength, and biocompatibility properties. To improve the mechanical properties, foremost the superelastic behavior, heat treatment is carried out. In this paper, two different heat treatment methods were undertaken: (1) solid solution and (2) aging. The effect of each treatment for a constant time is investigated. Five samples were prepared to study the structure and optimize the mechanical properties under different times and temperatures. Tensile tests were carried out to measure the upper plateau stress, lower plateau stress, and residual strain. The samples were aged at two different temperatures to examine the effect of aging temperature. The sample aged at 500 °C has a larger crystallite size and a lower Ni content, which causes it to possess poorer pseudoelastic behaviour than the other aged sample. The sample aged at 460 °C has shown remarkable superelastic properties: its upper plateau stress is 580 MPa with the lowest residual strain (0.17%), while the other samples possessed higher residual strains. X-ray diffraction was used to investigate the phases produced.

Keywords: Heat treatment, phase transformation, superelasticity, NiTi alloy.

49 An AI-Based Dynamical Resource Allocation Calculation Algorithm for Unmanned Aerial Vehicle

Authors: Zhou Luchen, Wu Yubing, Burra Venkata Durga Kumar

Abstract:

As networks become larger and more complex than before, the density of user devices is also increasing. Unmanned Aerial Vehicle (UAV) networks are able to collect and transfer data efficiently by using software-defined networking (SDN) technology. This paper proposes a three-layer distributed and dynamic cluster architecture to manage UAVs, using an AI-based resource allocation calculation algorithm to address the network overloading problem. By separating the services of each UAV, the hierarchical UAV cluster system performs the main function of reducing the network load and transferring user requests, with three sub-tasks: data collection, communication channel organization, and data relaying. In each cluster, a head node and a vice head node UAV are selected considering the CPU, RAM, and ROM memory of the devices, battery charge, and capacity. The vice head node acts as a backup that stores all the data held by the head node. The k-means clustering algorithm is used to detect high-load regions and form the layered UAV clusters. The whole process of detecting high-load areas, forming and selecting UAV clusters, and moving the selected UAV cluster to that area is proposed as the traffic offloading algorithm.

Keywords: k-means, resource allocation, SDN, UAV network, unmanned aerial vehicles.
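
The clustering and head-selection steps can be sketched in Python as follows: k-means groups UAV positions, and within each cluster a head and vice head are picked by a weighted resource score; the positions, resource columns, and weights are illustrative assumptions rather than the paper's scoring.

import numpy as np
from sklearn.cluster import KMeans

# Illustrative UAV fleet: random positions and normalized on-board resources.
rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(30, 2))               # UAV x, y coordinates
resources = rng.uniform(0, 1, size=(30, 4))                 # CPU, RAM, ROM, battery (normalized)
weights = np.array([0.3, 0.2, 0.1, 0.4])                    # battery weighted highest

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(positions)
scores = resources @ weights                                # simple resource score per UAV

for c in range(3):
    members = np.where(kmeans.labels_ == c)[0]
    ranked = members[np.argsort(scores[members])[::-1]]     # best-resourced first
    head, vice = ranked[0], ranked[1]                       # vice head acts as backup
    print(f"cluster {c}: head UAV {head}, vice head UAV {vice}, size {len(members)}")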

48 General Regression Neural Network and Back Propagation Neural Network Modeling for Predicting Radial Overcut in EDM: A Comparative Study

Authors: Raja Das, M. K. Pradhan

Abstract:

This paper presents a comparative study of two neural network models, namely the General Regression Neural Network (GRNN) and the Back Propagation Neural Network (BPNN), used to estimate the radial overcut produced during Electrical Discharge Machining (EDM). Four input parameters have been employed: discharge current (Ip), pulse-on time (Ton), duty fraction (Tau), and discharge voltage (V). Artificial intelligence techniques have recently emerged as effective tools that can replace time-consuming procedures in various scientific and engineering applications, particularly in the prediction and estimation of complex and nonlinear processes. Both networks are trained, and the prediction results are tested with the unseen validation set of the experiment and analysed. The performance of both networks is found to be in good agreement with the experiments, with an average percentage error of less than 11%, and the correlation coefficient obtained on the validation data set is more than 91% for both GRNN and BPNN. However, the GRNN is much faster to train than the BPNN and is often more accurate. The GRNN requires more memory to store the model, but it features fast learning that does not require an iterative procedure and a highly parallel structure. GRNN networks are slower than multilayer perceptron networks at classifying new cases.

Keywords: Electrical-discharge machining, General Regression Neural Network, Back-propagation Neural Network, Radial Overcut.
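
Since a GRNN is essentially Nadaraya-Watson kernel regression (prediction as a Gaussian-weighted average of stored training targets), it can be sketched in a few lines and compared with a backpropagation-trained MLP standing in for the BPNN; the synthetic EDM-style data and smoothing parameter below are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the EDM data: four normalized inputs (Ip, Ton, Tau, V)
# and a smooth nonlinear target with a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 4))
y = 0.3 * X[:, 0] + 0.2 * X[:, 1] ** 2 + 0.1 * X[:, 2] * X[:, 3] + rng.normal(0, 0.01, 200)
X_tr, y_tr, X_te, y_te = X[:160], y[:160], X[160:], y[160:]

def grnn_predict(X_train, y_train, X_query, sigma=0.15):
    # GRNN "training" is just storing the data; prediction is a kernel-weighted mean.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)   # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))                              # Gaussian kernel
    return (w @ y_train) / w.sum(axis=1)

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

print("GRNN RMSE:", round(rmse(grnn_predict(X_tr, y_tr, X_te), y_te), 4))
print("BPNN (MLP) RMSE:", round(rmse(mlp.predict(X_te), y_te), 4))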

47 Use of Social Media in PR: A Change of Trend

Authors: Tang Mui Joo, Chan Eang Teng

Abstract:

The use of social media has become more clearly defined. It is widely used for business purposes: more marketers are now using social media as a tool to enhance their businesses. At the same time, more and more people spend their time on mobile apps engaging with social media sites such as YouTube, Facebook, and Twitter. Social media has also become common in Public Relations (PR), where it has become the number one platform for creating and sharing content. In view of this, social media has changed the rules in PR, bringing new challenges and opportunities to the profession. Although corporate websites, chat rooms, email customer response facilities, and electronic news release distribution are now viewed as standard aspects of PR practice, many PR practitioners are still struggling with the impact of new media even though the implementation of social media is potentially reducing the cost of communication; to the point that some PR practitioners are not fully embracing new media, are ill-equipped to do so, and fear the technology. Social media has become a new style of communication characterized by conversation and community. It is a platform that allows individuals to interact with one another and build relationships. In the business world, therefore, consumers are able to interact with companies that have joined social media and, based on their experiences with social networking site interactions, are also exposed to personal interaction while communicating. This paper studies the impact of social media on PR and explores the potential changes to PR practice in a developing country like Malaysia, reflecting on how PR practitioners are actually using social media in the country. The research foundation is based on two theories: Media Ecology Theory, to support the impact of and changes to PR, and Social Penetration Theory, to reflect on how social media is used among PR practitioners. The research uses a survey of PR practitioners for its data collection. The results show that PR professionals value social media more than they actually use it and that the way organizations communicate has changed due to the transformation brought about by social media.

Keywords: New media, social media, PR.

46 Unstructured-Data Content Search Based on Optimized EEG Signal Processing and Multi-Objective Feature Extraction

Authors: Qais M. Yousef, Yasmeen A. Alshaer

Abstract:

Over the last few years, the amount of data available around the globe has increased rapidly. This has come with the emergence of recent concepts such as big data and the Internet of Things, which have furnished suitable solutions for making data available all over the world. However, managing this massive amount of data remains a challenge due to its large variety of types and its distribution. Therefore, locating a required file, particularly on the first attempt, has turned out to be no easy task, due to the large similarity of names among different files distributed on the web. Consequently, the accuracy and speed of search have been negatively affected. This work presents a method using electroencephalography (EEG) signals to locate files based on their contents. Building on the concept of natural mind-wave processing, this work analyses the mind-wave signals of different people, extracting their most appropriate features using a multi-objective metaheuristic algorithm and then classifying them using an artificial neural network to distinguish among files with similar names. The aim of this work is to provide the ability to find files based on their contents using human thoughts only. Implementing this approach and testing it on real people proved its ability to find the desired files accurately within a noticeably shorter time and to retrieve them as a first choice for the user.

Keywords: Artificial intelligence, data contents search, human active memory, mind wave, multi-objective optimization.
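
A simplified stand-in for this pipeline is sketched below: band-power features are computed from synthetic EEG epochs and classified with a small ANN; the multi-objective metaheuristic feature-selection step is omitted, and the data, bands, and network are invented for illustration.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic EEG epochs: class 1 carries a stronger alpha-band component.
rng = np.random.default_rng(0)
fs, n_epochs, n_samples = 128, 200, 256
labels = rng.integers(0, 2, n_epochs)
t = np.arange(n_samples) / fs
eeg = rng.normal(0, 1, size=(n_epochs, n_samples))
eeg[labels == 1] += 0.8 * np.sin(2 * np.pi * 10 * t)

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
psd = np.abs(np.fft.rfft(eeg, axis=1)) ** 2

def band_power(lo, hi):
    sel = (freqs >= lo) & (freqs < hi)
    return psd[:, sel].mean(axis=1)

X = np.column_stack([band_power(lo, hi) for lo, hi in bands.values()])
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
print("5-fold accuracy:", cross_val_score(clf, X, labels, cv=5).mean().round(3))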

45 A Security Model of Voice Eavesdropping Protection over Digital Networks

Authors: Supachai Tangwongsan, Sathaporn Kassuvan

Abstract:

The purpose of this research is to develop a security model for voice eavesdropping protection over digital networks. The proposed model provides an encryption scheme and a personal secret key exchange between communicating parties, a so-called voice data transformation system, resulting in a truly private conversation. The operation of this system comprises two main steps. The first is the personal secret key exchange, for using the keys in the data encryption process during conversation. The key owner can freely make his or her choice in key selection, so it is recommended to exchange a different key for each conversational party and record the key for each case in the memory provided in the client device. The next step is to set and record another personal encryption option, taking either all frames or just partial frames, the so-called 1:M ratio. Using different personal secret keys and different 1:M settings for different parties, without the intervention of the service operator, poses quite a big problem for any eavesdropper who attempts to discover the key used during the conversation, especially within a short period of time. Thus, the scheme is quite safe and effective in protecting against voice eavesdropping. The results of the implementation indicate that the system can perform its function accurately as designed. In this regard, the proposed system is suitable for effective use in voice eavesdropping protection over digital networks, without any requirement to change presently existing network systems such as mobile phone networks and VoIP.

Keywords: Computer Security, Encryption, Key Exchange, Security Model, Voice Eavesdropping.
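
Two of the ideas above, a different secret key per conversational party and the 1:M partial-frame option, can be sketched as follows; AES-GCM from the Python cryptography package stands in for whatever cipher the described system actually uses, and the frame sizes are illustrative.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# One key per conversational party, as the abstract recommends.
party_keys = {"alice": AESGCM.generate_key(bit_length=128),
              "bob": AESGCM.generate_key(bit_length=128)}
M = 3                                        # encrypt one frame out of every M

def protect_stream(frames, party):
    aead, out = AESGCM(party_keys[party]), []
    for i, frame in enumerate(frames):
        if i % M == 0:                       # partial-frame (1:M) option
            nonce = os.urandom(12)
            out.append(("enc", nonce, aead.encrypt(nonce, frame, None)))
        else:
            out.append(("raw", None, frame))
    return out

voice_frames = [bytes([i]) * 160 for i in range(6)]   # fake 20 ms PCM frames
protected = protect_stream(voice_frames, "alice")
print([kind for kind, _, _ in protected])             # which frames were encrypted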

44 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical and statistical applications are developed with more complexity and accuracy. However, this precision and complexity mean that applications need more computational power in order to be executed faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, since they allow the inclusion of more parallelism inside the node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool developed by the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore environment. All these improvements are done in an automatic and transparent manner, with the aim of improving the performance metrics of our tool. Finally, experimental evaluations show the effectiveness of our new optimized version, in which we have achieved a considerable improvement in execution time: the time has been reduced by around 96% in the best case tested, between the original serial version and the automatic parallel version.

Keywords: Algorithm optimization, Bank Failures, OpenMP, Parallel Techniques, Statistical tool.
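
The parallelization idea can be illustrated with a toy Monte Carlo loss simulation split across worker processes; Python's multiprocessing stands in here for the OpenMP parallelization applied to SYMBOL, whose internals are not reproduced.

import numpy as np
from multiprocessing import Pool

# Toy bank-loss Monte Carlo: each work unit simulates an independent batch of
# scenarios, and the results are merged to estimate a tail-loss quantile.
def simulate_chunk(args):
    seed, n_scenarios, n_banks = args
    rng = np.random.default_rng(seed)
    shocks = rng.normal(0.0, 1.0, size=(n_scenarios, n_banks))
    losses = np.maximum(shocks - 2.0, 0.0).sum(axis=1)   # losses beyond a threshold
    return losses

if __name__ == "__main__":
    chunks = [(seed, 50_000, 100) for seed in range(8)]   # 8 independent work units
    with Pool(processes=4) as pool:                        # one worker per core
        results = pool.map(simulate_chunk, chunks)
    all_losses = np.concatenate(results)
    print("99.9% loss quantile:", round(float(np.quantile(all_losses, 0.999)), 3))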

43 A Novel VLSI Architecture for Image Compression Model Using Low power Discrete Cosine Transform

Authors: Vijaya Prakash.A.M, K.S.Gurumurthy

Abstract:

In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction in image quality. This paper describes a low-complexity hardware architecture of the Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is the method used for the implementation of the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional DCT blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is modelled using MATLAB code, and the VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped using 180 nm standard cells. The simulation is done using ModelSim, and the simulation results from MATLAB and Verilog HDL are compared. Detailed power and area analysis was done using RTL Compiler from Cadence. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].

Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Expert Group (JPEG), Low Power Design, Very Large Scale Integration (VLSI).
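
The row-column decomposition mentioned above (two 1-D DCT blocks plus a transposition memory) can be sketched in Python with SciPy; the 8x8 block and the crude uniform quantization step are illustrative.

import numpy as np
from scipy.fft import dct, idct

# Row-column decomposition of the 8x8 2D DCT: a 1-D DCT along the rows, a
# transposition, then a 1-D DCT along the rows again, mirroring the two 1-D
# DCT blocks plus transposition memory described in the abstract.
block = np.arange(64, dtype=float).reshape(8, 8)      # stand-in 8x8 pixel block

def dct2(b):
    return dct(dct(b, norm="ortho", axis=1).T, norm="ortho", axis=1).T

def idct2(c):
    return idct(idct(c, norm="ortho", axis=1).T, norm="ortho", axis=1).T

coeffs = dct2(block)
quantized = np.round(coeffs / 16) * 16                # crude uniform quantization
restored = idct2(quantized)
print("max reconstruction error:", float(np.max(np.abs(restored - block))))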

42 Earth Station Neural Network Control Methodology and Simulation

Authors: Hanaa T. El-Madany, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassen T. Dorrah

Abstract:

Renewable energy resources are inexhaustible and clean compared with conventional resources. They are also used to supply regions with no grid, no telephone lines, and often difficult accessibility by common transport. Satellite earth stations located in remote areas are the most important application of renewable energy. Neural control is a branch of the general field of intelligent control, which is based on the concept of artificial intelligence. This paper presents the mathematical modeling of a satellite earth station power system, which is required for simulating the system. Aswan is selected as the site under consideration because it is a region rich in solar energy. The complete power system is simulated using MATLAB-SIMULINK. An artificial neural network (ANN) based model has been developed for the optimum operation of the earth station power system. The ANN is trained using backpropagation with the Levenberg-Marquardt algorithm. The best validation performance is obtained for the minimum mean square error. The regression between the network output and the corresponding target is equal to 96%, which indicates high accuracy. The neural network controller (NNC) architecture gives satisfactory results with a small number of neurons, and hence less memory and time are required for NNC implementation. The results indicate that the proposed control unit using an ANN can be successfully used for controlling the satellite earth station power system.

Keywords: Satellite, neural network, MATLAB, power system.
