Search results for: Memory Forensics.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 457


277 Smartphone Video Source Identification Based on Sensor Pattern Noise

Authors: Raquel Ramos López, Anissa El-Khattabi, Ana Lucila Sandoval Orozco, Luis Javier García Villalba

Abstract:

The increasing number of mobile devices with integrated cameras means that most digital video now comes from these devices. These digital videos can be made anytime, anywhere and for different purposes. They can also be shared on the Internet within a short period of time and may sometimes contain recordings of illegal acts. The need to reliably trace their origin becomes evident when such videos are used for forensic purposes. This work proposes an algorithm to identify the brand and model of the mobile device that generated a video. Its procedure is as follows: after obtaining the relevant video information, a classification algorithm based on sensor noise and the Wavelet Transform performs the identification. We also present experimental results that support the validity of the techniques used and are promising.
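As an illustration of the sensor-noise idea described above, the sketch below extracts a wavelet-based noise residual from a single grayscale key frame. It is a generic approximation rather than the authors' algorithm, assuming NumPy and PyWavelets; the wavelet name and threshold rule are illustrative choices.

```python
# Minimal sketch: extract a PRNU-like sensor-noise residual from a key frame.
import numpy as np
import pywt

def noise_residual(frame, wavelet="db8", level=4):
    """Return the frame minus its wavelet-denoised version."""
    coeffs = pywt.wavedec2(frame.astype(float), wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    # Soft-threshold the detail coefficients (universal threshold estimate).
    sigma = np.median(np.abs(details[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(frame.size))
    denoised_details = [
        tuple(pywt.threshold(d, thr, mode="soft") for d in lvl) for lvl in details
    ]
    denoised = pywt.waverec2([approx] + denoised_details, wavelet)
    denoised = denoised[: frame.shape[0], : frame.shape[1]]
    return frame.astype(float) - denoised

# Averaging residuals over many key frames from one device approximates its
# sensor pattern noise fingerprint, which can be correlated against a test video.
```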

Keywords: Digital video, forensic analysis, key frame, mobile device, PRNU, sensor noise, source identification.

276 Robust & Energy Efficient Universal Gates for High Performance Computer Networks at 22nm Process Technology

Authors: M. Geetha Priya, K. Baskaran, S. Srinivasan

Abstract:

Digital systems are constructed from basic logic gates: NOR, NAND, AND, OR, EXOR and EXNOR. This paper presents robust three-transistor (3T) NAND and NOR gates with precise output logic levels that maintain performance equivalent to existing logic structures. The new 3T logic gates are based on the CMOS inverter and Pass Transistor Logic (PTL). They are characterized by higher speed and lower power dissipation and can be straightforwardly fabricated as memory ICs for high-performance computer networks. The simulations were performed with SYNOPSYS HSPICE using the standard BPTM 22nm process technology. The 3T NAND gate is evaluated using the C17 benchmark circuit and the 3T NOR gate using a D-latch. According to HSPICE simulation in 22 nm CMOS BPTM process technology, under the given conditions and at room temperature, the proposed 3T gates consume on average 88% less power than conventional CMOS logic gates. Devices designed with these 3T gates will therefore offer longer battery life owing to their extremely low power consumption.

Keywords: Low power, CMOS, pass-transistor, flash memory, logic gates.

275 In vitro and in vivo Assessment of Cholinesterase Inhibitory Activity of the Bark Extracts of Pterocarpus santalinus L. for the Treatment of Alzheimer’s Disease

Authors: K. Biswas, U. H. Armin, S. M. J. Prodhan, J. A. Prithul, S. Sarker, F. Afrin

Abstract:

Alzheimer’s disease (AD), a progressive neurodegenerative disorder, is the predominant cause of dementia in the elderly. Prolonging the action of acetylcholine by inhibiting both acetylcholinesterase and butyrylcholinesterase is the most effective therapy for AD. Pterocarpus santalinus L. is traditionally well known for its medicinal use. In this study, in vitro acetylcholinesterase inhibitory activity was investigated, and the methanolic extract of the plant showed significant activity. To confirm this activity in vivo, learning and memory enhancing effects were tested in mice. For the test, memory impairment was induced by scopolamine (a cholinergic muscarinic receptor antagonist). The anti-amnesic effect of the extract was investigated with the passive avoidance task in mice, and brain acetylcholinesterase activity was also measured. The results showed that administration of the extract significantly reduced scopolamine-induced cognitive dysfunction in the passive avoidance task and inhibited brain acetylcholinesterase activity. These results suggest that the bark extract of Pterocarpus santalinus is a promising candidate for further studies on AD via its acetylcholinesterase inhibitory action.

Keywords: Pterocarpus santalinus, cholinesterase inhibitor, passive avoidance, Alzheimer’s disease.

274 A Real-Time Image Change Detection System

Authors: Madina Hamiane, Amina Khunji

Abstract:

Detecting changes in multiple images of the same scene has recently seen increased interest due to many contemporary applications, including smart security systems, smart homes, remote sensing, surveillance, medical diagnosis, weather forecasting, speed and distance measurement, post-disaster forensics and more. These applications differ in the scale, nature, and speed of change. This paper presents an application of image processing techniques to implement a real-time change detection system. Change is identified by comparing the RGB representations of two consecutive frames captured in real time. The detection threshold can be adjusted to account for various luminance levels. The comparison result is passed through a filter before the decision is made, in order to reduce false positives, especially under low-luminance conditions. The system is implemented with a MATLAB Graphical User Interface with several controls to manage its operation and performance.
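A minimal sketch of the frame-differencing step described above is given below, assuming NumPy; the threshold, the 3x3 majority filter, and the region size are illustrative stand-ins for the GUI-controlled parameters.

```python
# Minimal sketch: detect change between two consecutive RGB frames.
import numpy as np

def detect_change(prev_frame, curr_frame, threshold=30, min_region=50):
    """Return True if enough pixels differ between consecutive RGB frames."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int)).sum(axis=2)
    mask = diff > threshold                      # per-pixel change decision
    # Simple 3x3 majority filter to suppress isolated false positives.
    padded = np.pad(mask, 1)
    votes = sum(
        padded[i:i + mask.shape[0], j:j + mask.shape[1]]
        for i in range(3) for j in range(3)
    )
    filtered = votes >= 5                        # at least 5 of 9 neighbours changed
    return filtered.sum() >= min_region
```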

Keywords: Image change detection, Image processing, image filtering, thresholding, B/W quantization.

273 High Speed Bitwise Search for Digital Forensic System

Authors: Hyungkeun Jee, Jooyoung Lee, Dowon Hong

Abstract:

The most common forensic activity is searching a hard disk for strings of data. Nowadays, investigators and analysts increasingly encounter large, even terabyte-sized data sets when conducting digital investigations, so sequential searching can take weeks to complete. There are two primary search methods: index-based search and bitwise search. Index-based searching is very fast after the initial indexing, but the initial indexing itself takes a long time. In this paper, we discuss a high-speed bitwise search model for large-scale digital forensic investigations. We used a pattern matching board, of the kind generally used for network security, to search for strings and complex regular expressions. Our results indicate that in many cases the use of a pattern matching board can substantially increase the performance of digital forensic search tools.
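For readers without pattern matching hardware, a plain software sketch of chunked bitwise searching over a raw disk image is shown below, assuming Python's re module; the file name and example regular expressions are hypothetical, and the hardware board described in the paper would replace the inner regex scan.

```python
# Minimal sketch: stream a raw disk image and scan it for regex patterns.
import re

PATTERNS = [re.compile(rb"credit\s*card", re.IGNORECASE),
            re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")]   # illustrative patterns

def scan_image(path, chunk_size=4 * 1024 * 1024, overlap=256):
    """Yield (absolute offset, matched bytes); the overlap keeps hits that span
    chunk boundaries (matches entirely inside the overlap may repeat)."""
    with open(path, "rb") as f:
        offset, tail = 0, b""
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            buf = tail + chunk
            for pat in PATTERNS:
                for m in pat.finditer(buf):
                    yield offset - len(tail) + m.start(), m.group()
            tail = buf[-overlap:]
            offset += len(chunk)
```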

Keywords: Digital forensics, search, regular expression.

272 ECG-Based Heartbeat Classification Using Convolutional Neural Networks

Authors: Jacqueline R. T. Alipo-on, Francesca I. F. Escobar, Myles J. T. Tan, Hezerul Abdul Karim, Nouar AlDahoul

Abstract:

Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are among the leading causes of mortality worldwide. However, traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human error. With the advancement of the programming paradigm, algorithms such as machine learning have been increasingly used to analyze ECG signals. In this paper, various deep learning algorithms were adapted to classify five heartbeat types. The dataset used in this work is the synthetic MIT-Beth Israel Hospital (MIT-BIH) Arrhythmia dataset produced by generative adversarial networks (GANs). Deep learning models such as the ResNet-50 convolutional neural network (CNN), a 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform the other models in terms of recall and F1 score, with five-fold average scores of 98.88% and 98.87%, respectively. The 1-D CNN, on the other hand, achieved the highest average precision of 98.93%.
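A minimal 1-D CNN of the kind evaluated above can be sketched in PyTorch as follows; the segment length, layer sizes and five-class output head are illustrative assumptions, not the exact models benchmarked in the paper.

```python
# Minimal sketch: a small 1-D CNN for five heartbeat classes.
import torch
import torch.nn as nn

class BeatCNN(nn.Module):
    def __init__(self, n_classes=5, seg_len=187):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (seg_len // 4), n_classes)

    def forward(self, x):              # x: (batch, 1, seg_len)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = BeatCNN()
logits = model(torch.randn(8, 1, 187))   # dummy batch of 8 single-lead beats
```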

Keywords: Heartbeat classification, convolutional neural network, electrocardiogram signals, ECG signals, generative adversarial networks, long short-term memory, LSTM, ResNet-50.

271 Design and Implementation of a Software Platform Based on Artificial Intelligence for Product Recommendation

Authors: G. Settanni, A. Panarese, R. Vaira, A. Galiano

Abstract:

Nowadays, artificial intelligence is used successfully in e-commerce for its ability to learn from large amounts of data. In this research study, a prototype software platform was designed and implemented to suggest to users the products most suitable for their needs. The platform includes a recommender system based on artificial intelligence algorithms that provide suggestions and decision support to the customer. Specifically, support vector machine algorithms have been implemented, combined with natural language processing techniques that allow the user to interact with the system, express requests and receive suggestions. An interested user can access the web platform from a computer, tablet or mobile phone, register, provide the necessary information and view the products the system deems most appropriate. The platform also integrates a dashboard that makes its various functions available in an intuitive and simple way. In addition, Long Short-Term Memory algorithms have been implemented and trained on historical data in order to predict customer scores for the different items; the items with the highest scores are recommended to customers.
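A toy sketch of the SVM-plus-NLP step is shown below, assuming scikit-learn; the request texts and product categories are invented placeholders, and the pipeline is only indicative of the approach rather than the platform's actual implementation.

```python
# Minimal sketch: map a free-text request to a product category with TF-IDF + SVM.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

requests = ["lightweight laptop for travel", "phone with a good camera",
            "tablet for reading", "gaming laptop with large screen"]
categories = ["laptop", "phone", "tablet", "laptop"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(requests, categories)
print(clf.predict(["I need a laptop I can carry everywhere"]))  # -> ['laptop']
```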

Keywords: Deep Learning, Long Short-Term Memory, Machine Learning, Recommender Systems, Support Vector Machine.

270 Optimizing Forecasting for Indonesia's Coal and Palm Oil Exports: A Comparative Analysis of ARIMA, ANN, and LSTM Methods

Authors: Mochammad Dewo, Sumarsono Sudarto

Abstract:

The Exponential Triple Smoothing approach currently used to forecast the export value of Indonesia's two major commodities, coal and palm oil, has a Mean Absolute Percentage Error (MAPE) of 30-50%, which is only a "reasonable" level of forecasting error. Forecasting errors of more than 30% have a domino effect on industrial output, as excess production adds raw material, manufacturing and storage costs. Reaching an "excellent" classification, with an error of less than 10%, would instead give new investors and exporters confidence in the commercial development of the related sectors, and industrial growth would in turn have a positive impact on economic development; the approach can be applied to other commodities if the forecast error stays below 10%. The purpose of this project is to create a forecasting technique that produces precise results with an error of less than 10%. This research analyzes forecasting methods such as ARIMA (Autoregressive Integrated Moving Average), ANN (Artificial Neural Network) and LSTM (Long Short-Term Memory). With a MAPE of 1%, this study finds ANN to be the most successful method for forecasting Indonesia's coal and palm oil commodities.
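For reference, the MAPE criterion quoted above can be computed as in the following NumPy sketch (generic code, not the authors'); the values are illustrative.

```python
# Minimal sketch: Mean Absolute Percentage Error (MAPE).
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

print(mape([100, 120, 130], [99, 121, 128]))   # ~1.1% -> "excellent" band (<10%)
```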

Keywords: ANN, Artificial Neural Network, ARIMA, Autoregressive Integrated Moving Average, export value, forecast, LSTM, Long Short Term Memory.

269 Tagging by Combining Rule-Based Method and Memory-Based Learning

Authors: Tlili-Guiassa Yamina

Abstract:

Many natural language expressions are ambiguous and must draw on other sources of information to be interpreted. For example, interpreting the word تعاون as a noun or a verb depends on the presence of contextual cues; to interpret words we need to be able to discriminate between different usages. This paper proposes a hybrid of a rule-based method and a machine learning method for tagging Arabic words. Because an Arabic word may be composed of a stem plus affixes and clitics (affixes include inflectional markers for tense, gender and number; clitics include some prepositions, conjunctions and others), a small number of rules dominate the performance. Tagging is closely related to the notion of word class used in syntax. The method first applies rules (considering the post-position, the ending of a word, and patterns), and the anomalies are then corrected by a memory-based learning (MBL) method. Memory-based learning is an efficient way to integrate various sources of information and to handle exceptional data in natural language processing tasks; the exceptional cases missed by the rules are checked, and more information is made available to the learner for treating them. To evaluate the proposed method, and to assess the importance of the various sources of information in learning, a number of experiments have been run.

Keywords: Arabic language, rule-based methods, exceptions, memory-based learning, tagging.

268 Efficient Copy-Move Forgery Detection for Digital Images

Authors: Somayeh Sadeghi, Hamid A. Jalab, Sajjad Dadkhah

Abstract:

Due to the availability of powerful image processing software and the improvement of human computer literacy, it has become easy to tamper with images. Manipulation of digital images in fields such as courts of law and medical imaging creates a serious problem nowadays. Copy-move forgery is one of the most common types of forgery: part of an image is copied and pasted onto another part of the same image, typically to cover an important scene. In this paper, a copy-move forgery detection method based on the Fourier transform is proposed. First, the image is divided into blocks of the same size and the Fourier transform is computed for each block; similarity between the Fourier transforms of different blocks then provides an indication of a copy-move operation. The experimental results show that the proposed method runs in reasonable time and works well for both grayscale and colour images, and that using the Fourier transform reduces the computational complexity.
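The block-matching idea can be sketched as follows, assuming NumPy and a grayscale image; the block size, feature normalization and similarity tolerance are illustrative choices rather than the paper's exact parameters.

```python
# Minimal sketch: block-wise FFT features for copy-move candidate detection.
import numpy as np

def copy_move_candidates(img, block=16, top_k=200, tol=1e-3):
    """Return pairs of block positions whose FFT-magnitude features nearly match."""
    h, w = img.shape
    feats, positions = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = img[y:y + block, x:x + block].astype(float)
            mag = np.abs(np.fft.fft2(patch)).ravel()
            feats.append(mag / (np.linalg.norm(mag) + 1e-9))  # scale-normalized
            positions.append((y, x))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])          # lexicographic sort of feature rows
    pairs = []
    for i, j in zip(order[:-1], order[1:]):    # compare neighbours in sorted order
        if np.linalg.norm(feats[i] - feats[j]) < tol:
            pairs.append((positions[i], positions[j]))
    return pairs[:top_k]
```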

Keywords: Copy-Move forgery, Digital Forensics, Image Forgery.

267 A PIM (Processor-In-Memory) for Computer Graphics: Data Partitioning and Placement Schemes

Authors: Jae Chul Cha, Sandeep K. Gupta

Abstract:

The demand for higher-performance graphics continues to grow because of the incessant desire for realism, and rapid advances in fabrication technology have enabled several processor cores to be built on a single die. It is therefore important to develop single-chip parallel architectures for such data-intensive applications. In this paper, we propose an efficient PIM architecture tailored for computer graphics, which requires a large number of memory accesses. We then address the two tasks necessary to maximally exploit the parallelism the architecture provides, namely partitioning and placement of graphics data, which affect load balance and communication costs, respectively. Under the constraint of uniform partitioning, we develop approaches for optimal partitioning and placement that significantly reduce the search space. We also present heuristics for identifying near-optimal placements, since the search space for placement remains impractically large despite this optimization. We then demonstrate the effectiveness of our partitioning and placement approaches by analyzing example scenes; simulation results show considerable search space reductions, and our placement heuristics perform close to optimal: the average ratio of communication overheads between our heuristics and the optimum was 1.05. Our uniform partitioning showed an average load-balance ratio of 1.47 for geometry processing and 1.44 for rasterization, which is reasonable.

Keywords: Data Partitioning and Placement, Graphics, PIM, Search Space Reduction.

266 Switching Studies on Ge15In5Te56Ag24 Thin Films

Authors: Diptoshi Roy, G. Sreevidya Varma, S. Asokan, Chandasree Das

Abstract:

Germanium telluride based quaternary thin film switching devices with composition Ge15In5Te56Ag24 have been deposited in sandwich geometry on glass substrates with aluminum as the top and bottom electrodes. The bulk glassy form of this composition is prepared by the melt-quenching technique: appropriate quantities of high-purity elements are sealed in a quartz ampoule under a vacuum of 10-5 mbar, and the ampoule is rotated in a horizontal rotary furnace for 36 hours to ensure homogeneity of the melt. The ampoule is then quenched into a mixture of ice water and NaOH to obtain the bulk ingot of the sample. The sample is then coated onto a glass substrate by flash evaporation at a vacuum level of 10-6 mbar. The XRD report reveals the amorphous nature of the thin film sample, and Energy-Dispersive X-ray Analysis (EDAX) confirms that the film retains the same chemical composition as the base sample. The electrical switching behavior of the device is studied with a Keithley 2410C source-measure unit interfaced with LabVIEW 7 (National Instruments). Switching studies, mainly the SET operation (changing the state of the material from amorphous to crystalline), are conducted on the thin film form of the sample. The device is found to manifest memory switching, as it remains ON even after the removal of the electric field. Amorphous Ge15In5Te56Ag24 thin films also exhibit clean memory-type electrical switching behavior, as indicated by the absence of fluctuations in the I-V characteristics. The I-V characteristics further reveal that switching is fast in this sample, since no data points could be observed in the negative-resistance region during the transition to the ON state, which points to a fast phase change during the SET process. Scanning Electron Microscopy (SEM) studies are performed on the sample to study the structural changes during switching. SEM studies of the switched Ge15In5Te56Ag24 sample show morphological changes at the switching site, which can be explained by a conducting crystalline channel forming in the device when it switches from the high-resistance to the low-resistance state. From these studies it can be concluded that the material may find application in fast-switching Non-Volatile Phase Change Memory (PCM) devices.

Keywords: Chalcogenides, vapor deposition, electrical switching, PCM.

265 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures

Authors: Silvina Caíno-Lores, Jesús Carretero

Abstract:

Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. Several techniques attempt to move computations closer to the data. In this survey we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. The survey is intended to assist the scientific computing community in understanding the various technical aspects and strategies reported in recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, grouped into four main categories: application development, task scheduling, in-memory computing and storage platforms. Finally, the authors include a discussion of future research lines and synergies among the former techniques.

Keywords: Co-scheduling, data-centric, data-intensive, data locality, in-memory storage, large scale.

264 Fingerprint Compression Using Multiwavelets

Authors: Sudhakar R., Jayaraman S.

Abstract:

Large volumes of fingerprints are collected and stored every day in a wide range of applications, including forensics and access control; the Federal Bureau of Investigation (FBI) database alone contains more than 70 million fingerprints. Because of this high volume, compression of such databases is very important. The performance of existing image coding standards generally degrades at low bit-rates because of the underlying block-based Discrete Cosine Transform (DCT) scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity, but due to implementation constraints scalar wavelets do not possess all the properties needed for better compression performance. A new class of wavelets called multiwavelets, which possess more than one scaling filter, overcomes this problem. The objective of this paper is to develop an efficient compression scheme that obtains better quality and higher compression ratios through the multiwavelet transform and embedded coding of multiwavelet coefficients with the Set Partitioning In Hierarchical Trees (SPIHT) algorithm. The best-known multiwavelets are compared with the best-known scalar wavelets, and both quantitative and qualitative measures of performance are examined for fingerprints.

Keywords: Multiwavelet, Modified SPIHT Algorithm, SPIHT, Wavelet.

263 Urban Renewal from the Perspective of Industrial Heritage Protection: Taking the Qiaokou District of Wuhan as an Example

Authors: Yue Sun, Yuan Wang

Abstract:

Most of the earliest national industries in Wuhan were located along the Hanjiang River, and Qiaokou is considered a gathering place of the Dahankou old industrial base. Zongguan Waterworks, Pacific Soap Factory, Fuxin Flour Factory, Nanyang Tobacco Factory and other hundred-year-old factories are located along the Hanjiang River in Qiaokou District, especially in the Gutian Industrial Zone, which was listed as one of the 156 national restoration projects at the founding of the People’s Republic of China. After decades of development, Qiaokou became a gathering place of the chemical and secondary industries, causing damage and serious pollution to the city and turning the district into a marginalized area forgotten by the central city. In recent years, with the accelerated pace of urban renewal, Qiaokou has been constantly reforming and innovating, and drastic changes have begun in the transformation of the old city and the development of new districts. The old factories have been listed as key reconstruction projects, and a large amount of industrial heritage with historical value and rich urban memory has been relocated, demolished or rebuilt, with only a few factory buildings preserved. Through the methods of industrial archaeology, image analysis, typology and field investigation, this paper analyzes and summarizes the spatial characteristics of the industrial heritage in Qiaokou District, explores urban renewal from the perspective of industrial heritage protection, and provides design strategies for the regeneration of urban industrial sites and industrial heritage.

Keywords: Industrial heritage, urban renewal, protection, urban memory.

262 Obsession of Time and the New Musical Ontologies: The Concert for Saxophone, Daniel Kientzy and Orchestra by Myriam Marbe

Authors: Luminiţa Duţică

Abstract:

For the composer Myriam Marbe, musical time and memory represent two complementary phenomena with a conclusive impact on the settlement of new musical ontologies. Summarizing the most important achievements of contemporary composition techniques, her vision of the microform presented in the Concert for Daniel Kientzy, saxophone and orchestra transcends linear, unidirectional time in favour of a flexible, multivectorial discourse with spiral developments, where the sound substance is auto(re)generated by analogy with the fundamental processes of memory. The conceptual model is of an archetypal essence, the composer being concerned with identifying the mechanisms of the creative process, especially those specific to collective creation (of oral tradition). Hence the spontaneity of expression, the improvisatory tint, the free rhythm, the micro-interval intonation, and the coloristic-timbral universe dominated by multiphonics and unique sound effects; hence the atmosphere of ritual, purged, however, of its primary connotations and reprojected into a wonderful spectacular space. The Concert is a work of artistic maturity and commands respect, among other things, through the timbral diversity of the three species of saxophone required by the composer (baritone, sopranino and alto); in Part III Daniel Kientzy performs on two saxophones concomitantly. Myriam Marbe's score contains deeply spiritualized music, full of archetypal symbols, music whose drama suggests a real cinematographic movement.

Keywords: Archetype, chronogenesis, concert, multiphonics.

261 Ovshinsky Effect by Quantum Mechanics

Authors: Thomas V. Prevenslik

Abstract:

Ovshinsky initiated scientific research in the field of amorphous and disordered materials that continues to this day. The Ovshinsky effect, whereby the resistance of thin GST films is significantly reduced upon the application of a low voltage, is of fundamental importance in phase-change random access memory (PC-RAM) devices; GST stands for GeSbTe chalcogenide-type glasses. However, the Ovshinsky effect is not without controversy. Ovshinsky thought the resistance of GST films is reduced by the redistribution of charge carriers, whereas others at the time, including many PC-RAM researchers today, argue that the GST resistance changes because the amorphous state is transformed into the crystalline state by melting, with the heat supplied by external heaters. In this controversy, quantum mechanics (QM) asserts that the heat capacity of GST films vanishes, and therefore melting cannot occur, as the heat supplied cannot be conserved by an increase in GST film temperature. By precluding melting, QM re-opens the controversy between the melting and charge-carrier mechanisms. Supporting analysis is presented to show that instead of increasing the GST film temperature, conservation proceeds by the QED-induced creation of photons within the GST film, the QED photons being confined by TIR. QED stands for quantum electrodynamics and TIR for total internal reflection. The TIR confinement of QED photons is enhanced by the fact that the heat energy absorbed in the GST film is concentrated in the TIR modes because of the film's high surface-to-volume ratio. The QED photons, having Planck energy beyond the ultraviolet, produce excitons by the photoelectric effect, whose electrons and holes reduce the GST film resistance.

Keywords: Ovshinsky, phase change memory, PC-RAM, chalcogenide, quantum mechanics, quantum electrodynamics.

260 Searching for Forensic Evidence in a Compromised Virtual Web Server against SQL Injection Attacks and PHP Web Shell

Authors: Gigih Supriyatno

Abstract:

SQL injection is one of the most common types of attacks and has a very critical impact on web servers; in the worst case, an attacker can perform post-exploitation after a successful SQL injection attack. Forensic analysis of web servers is closely related to log file analysis, but large file sizes and the variety of log types sometimes make it difficult for investigators to look for traces of attackers on the server. The purpose of this paper is to help investigators take appropriate steps when a web server is attacked. We use attack scenarios based on SQL injection, including PHP backdoor injection as post-exploitation, and we perform post-mortem analysis of web server logs based on the Hypertext Transfer Protocol (HTTP) POST and GET method patterns that are characteristic of SQL injection attacks. In addition, we propose a structured analysis method that correlates the web server application log file, the database application log, and other additional logs present on the web server. This method gives the investigator a more structured way to analyze the log files and produce evidence of the attack in acceptable time, and other attack techniques may also be detectable with it. On the other side, it can help web administrators prepare their systems for forensic readiness.
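A simplified sketch of scanning a web server access log for SQL injection and web-shell indicators is given below, assuming Python's re module; the signature list and log path are hypothetical, and note that GET payloads appear in such logs while POST bodies generally require additional logging.

```python
# Minimal sketch: flag access-log lines containing SQLi / web-shell signatures.
import re

SQLI_SIGNS = [r"union(\s|%20)+select", r"or(\s|%20)+1=1", r"information_schema",
              r"sleep\(\d+\)", r"\.php\?cmd="]          # illustrative signatures
PATTERN = re.compile("|".join(SQLI_SIGNS), re.IGNORECASE)

def suspicious_lines(log_path="access.log"):
    with open(log_path, errors="replace") as log:
        for lineno, line in enumerate(log, 1):
            if PATTERN.search(line):
                yield lineno, line.strip()

for n, entry in suspicious_lines():
    print(f"possible SQLi/web-shell activity at line {n}: {entry[:120]}")
```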

Keywords: Web forensic, SQL injection, web shell, investigation.

259 Computational Algorithm for Obtaining Abelian Subalgebras in Lie Algebras

Authors: Manuel Ceballos, Juan Nunez, Angel F. Tenorio

Abstract:

The set of all abelian subalgebras is computationally obtained for any given finite-dimensional Lie algebra, starting from the nonzero brackets in its law. More concretely, an algorithm is described and implemented to compute a basis for each nontrivial abelian subalgebra with the help of the symbolic computation package MAPLE. Finally, a brief computational study of this implementation is also shown, considering both the computing time and the memory used.

Keywords: Solvable Lie algebra, maximal abelian dimension, algorithm.

258 Investigation of Possible Behavioural and Molecular Effects of Mobile Phone Exposure on Rats

Authors: Ç. Gökçek-Saraç, Ş. Özen, N. Derin

Abstract:

The N-methyl-D-aspartate (NMDA)-dependent pathway is the major intracellular signaling pathway implicated in both short- and long-term memory formation in the hippocampus, the most studied brain structure because of its well-documented role in learning and memory. However, little is known about the effects of RF-EMR exposure on the NMDA receptor signaling pathway, including the activation of protein kinases, notably Ca2+/calmodulin-dependent protein kinase II alpha (CaMKIIα). The aim of the present study was to investigate the effects of acute and chronic 900 MHz RF-EMR exposure on both passive avoidance behaviour and hippocampal levels of CaMKIIα and its phosphorylated form (pCaMKIIα). Rats were divided into the following groups: sham rats, and rats exposed to 900 MHz RF-EMR for 2 h/day for 1 week (acute group) or 10 weeks (chronic group), respectively. The passive avoidance task was used as the behavioural method, and the hippocampal levels of the selected kinases were measured using the Western blotting technique. The results of the passive avoidance task showed that both acute and chronic exposure to 900 MHz RF-EMR can impair passive avoidance behaviour, with milder effects in the chronic group of rats. The analysis of the Western blot data for the selected protein kinases demonstrated that hippocampal levels of CaMKIIα and pCaMKIIα were significantly higher in the chronic group than in the acute group. Taken together, these findings demonstrate that different exposure durations (1 week vs. 10 weeks) of 900 MHz RF-EMR have different effects on both the passive avoidance behaviour of rats and the hippocampal levels of the selected protein kinases.

Keywords: Hippocampus, protein kinase, rat, RF-EMR.

257 Detecting Email Forgery using Random Forests and Naïve Bayes Classifiers

Authors: Emad E. Abdallah, A. F. Otoom, Arwa Saqer, Ola Abu-Aisheh, Diana Omari, Ghadeer Salem

Abstract:

As email communication has no consistent authentication procedure to ensure authenticity, we present an investigative analysis approach for detecting forged emails based on Random Forests and Naïve Bayes classifiers. Instead of investigating the email headers, we use the body content to extract a unique writing style for each possible suspect. Our approach consists of four main steps: (1) the cybercrime investigator extracts different effective features, including structural, lexical, linguistic, and syntactic evidence, from previous emails of all the possible suspects; (2) the extracted feature vectors are normalized to increase the accuracy rate; (3) the normalized features are used to train the learning engine; (4) upon receiving the anonymous email M, we apply the feature extraction process to produce its feature vector, and the machine learning classifiers assign the email to the suspect whose writing style most closely matches M. Experimental results on real data sets show the improved performance of the proposed method and its ability to identify authors with a very limited number of features.
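A minimal sketch of the training and attribution steps is shown below, assuming scikit-learn; character n-gram features stand in for the structural, lexical, linguistic and syntactic features described above, and the example emails and suspect labels are invented.

```python
# Minimal sketch: attribute an anonymous email to a suspect by writing style.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = ["Kind regards, please find the report attached.",
          "hey!! send me the files asap thx",
          "Warm regards, the figures are enclosed herewith.",
          "yo, files pls, running late"]
suspects = ["A", "B", "A", "B"]          # invented labels

for clf in (RandomForestClassifier(n_estimators=100), MultinomialNB()):
    model = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)), clf)
    model.fit(emails, suspects)
    print(type(clf).__name__, model.predict(["regards, report attached as promised"]))
```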

Keywords: Digital investigation, cybercrimes, email forensics, anonymous emails, writing style, authorship analysis.

256 A Localized Interpolation Method Using Radial Basis Functions

Authors: Mehdi Tatari

Abstract:

Finding the interpolation function of a given set of nodes is an important problem in scientific computing. In this work, a kind of localization is introduced using radial basis functions, which finds a sufficiently smooth solution without consuming a large amount of time or computer memory. Some examples are presented to show the efficiency of the new method.
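A toy one-dimensional sketch of such localized RBF interpolation is given below, assuming NumPy; the Gaussian basis, shape parameter and neighbourhood size are illustrative, and each query point is interpolated from only its k nearest nodes rather than the full node set.

```python
# Minimal sketch: local radial basis function interpolation in 1-D.
import numpy as np

def local_rbf_interp(nodes, values, queries, k=10, eps=5.0):
    phi = lambda r: np.exp(-(eps * r) ** 2)          # Gaussian RBF
    out = np.empty(len(queries))
    for qi, q in enumerate(queries):
        idx = np.argsort(np.abs(nodes - q))[:k]      # k nearest nodes
        xs, ys = nodes[idx], values[idx]
        A = phi(np.abs(xs[:, None] - xs[None, :]))   # local interpolation matrix
        w = np.linalg.solve(A, ys)                   # local weights
        out[qi] = phi(np.abs(q - xs)) @ w
    return out

x = np.linspace(0, 2 * np.pi, 40)
xq = np.linspace(0, 2 * np.pi, 200)
err = np.max(np.abs(local_rbf_interp(x, np.sin(x), xq) - np.sin(xq)))
print(err)   # maximum deviation on a smooth sine test
```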

Keywords: Radial basis functions, local interpolation method, closed form solution.

255 Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks

Authors: B. Golchin, N. Riahi

Abstract:

One of the issues that has attracted much attention in recent years is recognizing sentiments and emotions in social media texts. Sentiment and emotion analysis aims to recognize conceptual information such as the opinions, feelings, attitudes and emotions of people towards products, services, organizations, people, topics, events and features in written text, which indicates the breadth of the problem space. In the real world, businesses and organizations are always looking for tools to gather the opinions, emotions, and inclinations of people about their products, services, or related events. This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data. On this network users share information and opinions about personal issues, politics, products, events, and so on, and the availability of its data makes it suitable for classifying emotional states. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users; the use of deep learning methods to increase the learning capacity of the model is an advantage given the large amount of available data. Tweets collected on various topics are classified into four classes using a combination of two Bidirectional Long Short-Term Memory networks and a convolutional network. The results of this study, with an average accuracy of 93%, show good results from the proposed framework and improved accuracy compared to previous work.
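The architecture family described above (bidirectional LSTM layers combined with a convolutional layer over four emotion classes) can be sketched in PyTorch as follows; vocabulary size, dimensions and sequence length are placeholder values, not the authors' configuration.

```python
# Minimal sketch: BiLSTM + 1-D convolution for four-class emotion classification.
import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    def __init__(self, vocab=20000, emb=128, hidden=64, n_classes=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, 64, kernel_size=3, padding=1)
        self.head = nn.Linear(64, n_classes)

    def forward(self, tokens):                         # tokens: (batch, seq_len)
        x, _ = self.bilstm(self.emb(tokens))           # (batch, seq_len, 2*hidden)
        x = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, 64, seq_len)
        x = x.max(dim=2).values                        # global max pooling over time
        return self.head(x)

logits = EmotionNet()(torch.randint(0, 20000, (8, 40)))   # 8 dummy tokenized tweets
```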

Keywords: Emotion classification, sentiment analysis, social networks, deep neural networks.

254 A Neuroscience-Based Learning Technique: Framework and Application to STEM

Authors: Dante J. Dorantes-González, Aldrin Balsa-Yepes

Abstract:

Existing learning techniques such as problem-based learning, project-based learning, or case study learning focus mainly on technical details but give no specific guidelines on the learner’s experience or on emotional learning aspects such as arousal salience and valence, even though emotional states are important factors affecting engagement and retention. Some approaches involving emotion in educational settings, such as social and emotional learning, lack neuroscientific rigor and the use of specific neurobiological mechanisms; neurobiology approaches, on the other hand, lack educational applicability, and educational approaches mainly focus on cognitive aspects and disregard conditioning learning. The authors first explain why it is hard to learn thoughtfully, then use neurobiological mapping to track the main limbic system functions, such as the reward circuit, and their relations with perception, memories, motivations, sympathetic and parasympathetic reactions, and sensations, as well as the brain cortex. They conclude by explaining the major finding: the mechanisms of nonconscious learning and the triggers that guarantee long-term memory potentiation. Afterward, the educational framework for practical application and the instructors’ guidelines are established. An implementation example in engineering education is given, namely the study of tuned-mass dampers for attenuating earthquake oscillations in skyscrapers. This work represents an original learning technique based on nonconscious learning mechanisms to enhance long-term memories, complementing existing cognitive learning methods.

Keywords: Emotion, emotion-enhanced memory, learning technique, STEM.

253 A Combinatorial Model for ECG Interpretation

Authors: Costas S. Iliopoulos, Spiros Michalakopoulos

Abstract:

A new combinatorial model for analyzing and interpreting an electrocardiogram (ECG) is presented. An application of the model is QRS peak detection, demonstrated with an online algorithm that is shown to be both space and time efficient. Experimental results on the MIT-BIH Arrhythmia database show that this novel approach is promising. Further uses for the approach are discussed, such as taking advantage of its small memory requirements and interpreting large amounts of pre-recorded ECG data.
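As a rough illustration of online, low-memory QRS detection (not the authors' combinatorial model), the sketch below flags local maxima that exceed a slowly decaying adaptive threshold, processing the signal one sample at a time.

```python
# Minimal sketch: streaming QRS candidate detection with O(1) state per sample.
def qrs_peaks(samples, decay=0.995, refractory=40):
    """Yield (index, value) of QRS candidates from a streaming ECG signal."""
    threshold, since_last = 0.0, refractory
    prev2 = prev1 = None
    for i, s in enumerate(samples):
        # A local maximum at position i-1 that clears the adaptive threshold.
        if prev2 is not None and prev2 <= prev1 > s:
            if prev1 > threshold and since_last >= refractory:
                yield i - 1, prev1
                since_last = 0
        threshold = max(decay * threshold, 0.5 * abs(s))   # slowly decaying threshold
        prev2, prev1 = prev1, s
        since_last += 1
```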

Keywords: Combinatorics, ECG analysis, MIT-BIH Arrhythmia Database, QRS Detection, String Algorithms

252 Improvement of Data Transfer over Simple Object Access Protocol (SOAP)

Authors: Khaled Ahmed Kadouh, Kamal Ali Albashiri

Abstract:

This paper presents an algorithm that improves the transfer of data over the Simple Object Access Protocol (SOAP). The aim of this work is to establish whether using SOAP to exchange XML messages has any added advantages. The results showed that XML messages without SOAP take longer and consume more memory, especially with binary data.

Keywords: JAX-WS, SMTP, SOAP, Web service, XML.

251 The Relationship between Fluctuation of Biological Signal: Finger Plethysmogram in Conversation and Anthropophobic Tendency

Authors: Haruo Okabayashi

Abstract:

Human biological signals (pulse waves, brain waves, etc.) have a rhythm which shows fluctuations. This study investigates the relationship between the fluctuation of a biological signal shown by a finger plethysmogram (i.e., finger pulse wave) during conversation and anthropophobic tendency, and examines whether the fluctuation could be an index of mental health. Thirty-two college students participated in the experiment. The finger plethysmogram of each subject was measured in the following conversation situations: a fun-memory talking/listening situation and a regrettable-memory talking/listening situation, for three minutes each. Lyspect 3.5 was used to collect the finger plethysmogram data; since Lyspect calculates the Lyapunov spectrum, it is possible to obtain the largest Lyapunov exponent (LLE). The LLE is an indicator of fluctuation and shows the degree to which nearby trajectories of a dynamical system diverge. Before the finger plethysmogram experiment, each participant completed the psychological questionnaire "Anthropophobic Scale," which measures the social phobia trend close to the consciousness of social phobia. A remarkable relationship was revealed between the fluctuation of the finger plethysmogram and the anthropophobic tendency scale when talking about a regrettable story: the participants (N=15) with a low anthropophobic tendency showed significantly more fluctuation of the finger pulse wave than the participants (N=17) with a high anthropophobic tendency (F(1, 31) = 5.66, p < 0.05). That is, participants with a low anthropophobic tendency converse flexibly, using large fluctuations of the biological signal, whereas participants with a high anthropophobic tendency constrain the conversation because of small fluctuations. Fluctuation is therefore not an error but an important drive toward building better relationships with others and developing interaction. In considering mental health, the fluctuation of biological signals would be an important indicator.

Keywords: Anthropophobic tendency, finger plethysmogram, fluctuation of biological signal, LLE.

250 Design and Development of On-Line, On-Site, In-Situ Induction Motor Performance Analyser

Authors: G. S. Ayyappan, Srinivas Kota, Jaffer R. C. Sheriff, C. Prakash Chandra Joshua

Abstract:

In the present scenario of energy crises, energy conservation in electrical machines is very important in industry. To conserve energy, one needs to monitor the performance of an induction motor on-site and in-situ, but the instruments available for this purpose are scarce and very expensive. This paper deals with the design and development of an on-line, on-site, in-situ induction motor performance analyser. The system measures only a few electrical input parameters: input voltage, line current, power factor, frequency, powers, and motor shaft speed. These measured data are combined with nameplate details to compute the operating efficiency of the induction motor. The system employs the method of computing motor losses with the help of equivalent circuit parameters; the equivalent circuit parameters of the motor under test are estimated using the developed algorithm at any load condition and stored in the system memory. The developed instrument is reliable, accurate, compact, rugged, and cost-effective, and this portable instrument can be used as a handy tool to study the performance of both slip-ring and cage induction motors. During the analysis, the data can be stored on an SD memory card, and various analyses such as load vs. efficiency and torque vs. speed characteristics can be performed. With the help of the developed instrument, one can operate the motor around its Best Operating Point (BOP). Continuous monitoring of motor efficiency can also support Life Cycle Assessment (LCA) of motors, which helps in deciding whether to replace, retain, or refurbish a motor.

Keywords: Energy conservation, equivalent circuit parameters, induction motor efficiency, life cycle assessment, motor performance analysis.

249 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, which include Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We observe that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. We validate our approach on a subset of the LivDet 2017 database to compare generalization power; it is important to note that the same subset of LivDet is used across all training and testing for each model, so we can compare performance, in terms of generalization, on unseen data across all the models. The best CNN (AlexNet) with the appropriate loss function and optimizer achieves more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' accuracy together with their parameter counts and mean average error rates, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.

Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.

248 Constructing Simple Polygonalizations

Authors: V. Tereshchenko, V. Muravitskiy

Abstract:

We consider methods of constructing simple polygons on a set S of n points and apply them to searching for the minimal-area polygon. In this paper we propose an approximate algorithm which generates simple polygonalizations of a fixed set of points and finds the minimal-area polygon in O(n³) time using O(n²) memory.
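One simple way to obtain a simple polygonalization (not the authors' algorithm) is to order the points by angle around their centroid and evaluate the area with the shoelace formula, as in the NumPy sketch below; the points are assumed to be in general position.

```python
# Minimal sketch: build one simple polygon on a point set and compute its area.
import numpy as np

def simple_polygon(points):
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    angles = np.arctan2(pts[:, 1] - centroid[1], pts[:, 0] - centroid[0])
    return pts[np.argsort(angles)]        # vertices in angular order -> simple polygon

def polygon_area(poly):
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))  # shoelace

pts = np.random.rand(10, 2)
print(polygon_area(simple_polygon(pts)))
```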

Keywords: simple polygon, approximate algorithm, minimal area polygon, polygonalizations
