Search results for: processing schemes.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2023

1153 Implementing a High Performance VPN Router Using Cavium's CN2560 Security Processor

Authors: Sang Su Lee, Sang Woo Lee, Yong Sung Jeon, Ki Young Kim

Abstract:

The IPsec protocol [1] is a set of security extensions developed by the IETF that provides privacy and authentication services at the IP layer using modern cryptography. In this paper, we describe both the H/W and S/W architectures of our router system, SRS-10. The system is designed to support high performance routing and IPsec VPN. In particular, we used Cavium's CN2560 processor to implement IPsec processing in inline mode.

Keywords: IP, router, VPN, IPsec.

1152 Overloading Scheme for Cellular DS-CDMA using Quasi-Orthogonal Sequences and Iterative Interference Cancellation Receiver

Authors: Preetam Kumar, Saswat Chakrabarti

Abstract:

Overloading is a technique to accommodate more users than the spreading factor N. It is a bandwidth-efficient scheme to increase the number of users in a fixed bandwidth. One of the efficient schemes to overload a CDMA system is to use two sets of orthogonal signal waveforms (O/O). The first set is assigned to the N users and the second set is assigned to the additional M users. An iterative interference cancellation technique is used to cancel interference between the two sets of users. In this paper, we evaluate the performance of an overloading scheme in which the first N users are assigned Walsh-Hadamard (WH) orthogonal codes and the extra users are assigned the same WH codes overlaid by a fixed (quasi) bent sequence [11]. This particular scheme is called the Quasi-Orthogonal Sequence (QOS) O/O scheme and is part of the cdma2000 standard [12], where it provides overloading in the downlink using a single-user detector. The QOS scheme is a balanced O/O scheme, in which the correlations between any set-1 and set-2 users are equalized. The allowable overload of this scheme is investigated in the uplink on AWGN and Rayleigh fading channels, so that the uncoded performance with an iterative multistage interference cancellation detector remains close to the single-user bound. It is shown that this scheme provides 19% and 11% overloading with the SDIC technique for N = 16 and 64 respectively, with an SNR degradation of less than 0.35 dB relative to the single-user bound at a BER of 0.00001. On a Rayleigh fading channel, however, the channel overloading is 45% (29 extra users) at a BER of 0.0005, with an SNR degradation of about 1 dB relative to single-user performance for N = 64. This is a significant amount of channel overloading on a Rayleigh fading channel.

Keywords: DS-CDMA, Iterative Interference Cancellation, Orthogonal codes, Overloading.
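As an illustration of the balanced O/O construction, the following minimal sketch builds the two code sets for N = 16 and checks that every set-1/set-2 cross-correlation has the same magnitude, sqrt(N). A classic bent function is used here as a stand-in for the quasi-bent sequence of [11]; this choice is an assumption for illustration only.

```python
import numpy as np

# Sylvester construction: rows of H are the N Walsh-Hadamard (WH) codes (set 1).
def hadamard(n):
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 16
W = hadamard(N)                        # set-1 spreading codes

# Masking sequence from the bent function f(b) = b0 b1 XOR b2 b3 on GF(2)^4
# (an illustrative stand-in for the quasi-bent sequence of [11]).
x = np.arange(N)
f = ((x >> 0) & (x >> 1) & 1) ^ ((x >> 2) & (x >> 3) & 1)
mask = 1 - 2 * f                       # +/-1 sequence

S = W * mask                           # set-2 codes: WH codes overlaid by mask

C = W @ S.T                            # all set-1 x set-2 cross-correlations
print(np.unique(np.abs(C)))            # -> [4], i.e. sqrt(N) for every pair
```

The constant cross-correlation magnitude is exactly the "balanced" property that the iterative interference cancellation receiver exploits.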

1151 A Pragmatic Study of Metaphorization in English Newspaper Headlines

Authors: Wang Haiyan

Abstract:

This paper explores the phenomenon of metaphorization in English newspaper headlines from the perspective of pragmatic investigation. With relevance theory as the guideline, the paper explains the processing of metaphor from a pragmatic approach and points out that metaphor is the stimulus adopted by journalists to achieve optimal relevance in this ostensive communication, as well as a strategy to fulfill their writing purpose.

Keywords: Metaphorization, newspaper, headlines, relevance theory.

1150 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering

Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause

Abstract:

In recent years, object detection has gained much attention and become a very encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management and environmental services. Unfortunately, to the best of our knowledge, object detection under varying illumination with shadow consideration has not been well solved yet. Furthermore, this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination conditions. Because image capturing conditions vary, the algorithms need to consider a variety of possible environmental factors, as the colour information, lighting and shadows vary from image to image. Existing methods mostly fail to produce appropriate results due to variation in colour information, lighting effects, threshold specifications, histogram dependencies and colour ranges. To overcome these limitations, we propose an object detection algorithm, with pre-processing methods, that reduces the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set acquired under various environmental conditions for wood stack detection. Experiments show the promising results of the proposed approach in comparison with existing methods.

Keywords: Image processing, Illumination equalization, Shadow filtering, Object detection, Colour models, Image segmentation.
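A minimal OpenCV sketch of this kind of pipeline follows; it is an illustration under assumptions (the file name, kernel size, and Otsu thresholding as the adaptive step are all assumptions), not the authors' exact algorithm:

```python
import cv2
import numpy as np

img = cv2.imread("wood_stack.jpg")            # hypothetical input image

# Work in YCrCb and equalize only the luminance channel, so uneven
# illumination is reduced without fixing any colour range in advance.
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)
y = cv2.equalizeHist(y)

# Data-driven (Otsu) threshold instead of a hand-picked value.
_, binary = cv2.threshold(y, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Erosion followed by dilation suppresses shadow speckle in the mask.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate(cv2.erode(binary, kernel), kernel)

# Keep the largest contour as the detected object region.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
obj = max(contours, key=cv2.contourArea)
cv2.drawContours(img, [obj], -1, (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```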

1149 Javanese Character Recognition Using Hidden Markov Model

Authors: Anastasia Rita Widiarti, Phalita Nari Wastu

Abstract:

The Hidden Markov Model (HMM) is a stochastic method that has been used in various signal processing and character recognition applications. This study proposes to use HMMs to recognize Javanese characters from a number of different handwritings, whereby the number of states and the feature extraction are optimized. An accuracy of 85.7% is obtained as the best result with a 16-state vertical model using pure HMM. This initial result is satisfactory enough to prompt further research.

Keywords: Character recognition, off-line handwriting recognition, Hidden Markov Model.
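A hedged sketch of this classification scheme (one Gaussian HMM per character class, scored by log-likelihood) using the hmmlearn library; the feature extraction from glyph images is assumed and represented by placeholder observation sequences:

```python
import numpy as np
from hmmlearn import hmm

# One 16-state HMM per character class, matching the paper's best
# configuration; `features_per_class` maps a label to a list of 2-D arrays
# (one observation sequence per training sample).
def train_models(features_per_class, n_states=16):
    models = {}
    for label, sequences in features_per_class.items():
        X = np.vstack(sequences)                 # concatenated sequences
        lengths = [len(s) for s in sequences]    # per-sequence lengths
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def recognize(models, sequence):
    # The class whose model assigns the highest log-likelihood wins.
    return max(models, key=lambda lbl: models[lbl].score(sequence))
```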

1148 Intelligent Process and Model Applied to E-Learning Systems

Authors: Mafawez Alharbi, Mahdi Jemmali

Abstract:

E-learning is a developing area, especially in education, and can provide several benefits to learners. An intelligent system that collects all the components satisfying user preferences is therefore important. This research presents an approach that is capable of personalizing e-information and giving users what they need according to their preferences. The proposal builds up knowledge from successive evaluations made by the user and can also learn from the user's habits. Finally, we show a walk-through to demonstrate how the intelligent process works.

Keywords: Artificial intelligence, architecture, e-learning, software engineering, processing.

1147 Evaluation of Efficient CSI Based Channel Feedback Techniques for Adaptive MIMO-OFDM Systems

Authors: Muhammad Rehan Khalid, Muhammad Haroon Siddiqui, Danish Ilyas

Abstract:

This paper explores the implementation of adaptive coding and modulation schemes for Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) feedback systems. Adaptive coding and modulation enables robust and spectrally-efficient transmission over time-varying channels. The basic premise is to estimate the channel at the receiver and feed this estimate back to the transmitter, so that the transmission scheme can be adapted relative to the channel characteristics. Two types of codebook-based channel feedback techniques are used in this work: long-term and short-term CSI at the transmitter are used for efficient channel utilization. OFDM is a powerful technique employed in communication systems suffering from frequency selectivity. Combined with multiple antennas at the transmitter and receiver, OFDM proves to be robust against delay spread. Moreover, it leads to significant data rates with improved bit error performance over links having only a single antenna at both the transmitter and receiver. The coded modulation increases the effective transmit power relative to uncoded variable-rate variable-power MQAM performance for the MIMO-OFDM feedback system. Hence, the proposed arrangement becomes an attractive approach to achieve enhanced spectral efficiency and improved error rate performance for next generation high speed wireless communication systems.

Keywords: Adaptive Coded Modulation, MQAM, MIMO, OFDM, Codebooks, Feedback.
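The core feedback step can be illustrated with a short numpy sketch: the receiver estimates the channel H, searches a shared codebook for the precoder index that maximizes capacity, and feeds back only the index. The DFT codebook and rank-1 precoding below are illustrative assumptions, not the paper's specific codebooks:

```python
import numpy as np

# Shared DFT codebook: nt x entries matrix of unit-norm precoding vectors.
def dft_codebook(nt, entries):
    n = np.arange(nt)[:, None]
    k = np.arange(entries)[None, :]
    return np.exp(2j * np.pi * n * k / entries) / np.sqrt(nt)

def select_precoder(H, codebook, snr):
    best, best_cap = 0, -np.inf
    for i in range(codebook.shape[1]):
        f = codebook[:, i:i + 1]                  # candidate precoder
        cap = np.log2(1 + snr * np.linalg.norm(H @ f) ** 2)
        if cap > best_cap:
            best, best_cap = i, cap
    return best                                   # log2(entries) feedback bits

# Example: 2 receive x 4 transmit Rayleigh channel, 16-entry codebook.
H = (np.random.randn(2, 4) + 1j * np.random.randn(2, 4)) / np.sqrt(2)
idx = select_precoder(H, dft_codebook(4, 16), snr=10.0)
```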

1146 Design of Medical Information Storage System – ECG Signal

Authors: A. Rubiano F, N. Olarte, D. Lara

Abstract:

This paper presents the design, implementation and results related to a storage system for medical information associated with the ECG (electrocardiography) signal. The system includes the signal acquisition modules, the pre-processing and signal processing modules, followed by a module for transmission and reception of the signal, along with the storage and web display system of the medical platform. The tests were initially performed with this signal, with the purpose of including more biosignals under the same system in the future.

Keywords: Acquisition, ECG Signal, Storage, Web Platform.

1145 Processing and Economic Analysis of Rain Tree (Samanea saman) Pods for Village Level Hydrous Bioethanol Production

Authors: Dharell B. Siano, Wendy C. Mateo, Victorino T. Taylan, Francisco D. Cuaresma

Abstract:

Biofuel is one of the renewable energy sources adopted by the Philippine government in order to lessen the dependency on foreign fuel and to reduce carbon dioxide emissions. Rain tree pods were seen to be a promising source of bioethanol since they contain a significant amount of fermentable sugars. The study was conducted to establish the complete procedure for processing rain tree pods for village level hydrous bioethanol production. The production processes covered collection, drying, storage, shredding, dilution, extraction, fermentation, and distillation. The feedstock was sun-dried, and the moisture content was determined to be in the range of 20% to 26% prior to storage. The dilution ratio was 1:1.25 (1 kg of pods to 1.25 L of water), and the extraction process yielded a sugar concentration of 22 °Bx to 24 °Bx. The dilution period was three hours. After three hours of diluting the samples, the juice was extracted using an extractor with a capacity of 64.10 L/hour; 150 L of rain tree pod juice was extracted and subjected to fermentation using a village level anaerobic bioreactor. Fermentation with yeast (Saccharomyces cerevisiae) speeds up the process, producing more ethanol in a shorter period of time; fermentation without yeast also produces ethanol, but at a lower volume and a slower rate. Distillation of 150 L of fermented broth was done for six hours at 85 °C to 95 °C (feedstock) and 74 °C to 95 °C at the column head (vapor state of ethanol). The highest volume of ethanol recovered, 14.89 L, was obtained with yeast fermentation over a five-day duration, and the lowest, 11.63 L, without yeast over a three-day duration. In general, the results suggest that rain tree pods have very good potential as a feedstock for bioethanol production, and that fermentation of rain tree pod juice can be done either with or without yeast.

Keywords: Fermentation, hydrous bioethanol, rain tree pods, village level.

1144 A Comparison of Signal Processing Techniques for the Extraction of Breathing Rate from the Photoplethysmogram

Authors: Susannah G. Fleming, Lionel Tarassenko

Abstract:

The photoplethysmogram (PPG) is the pulsatile waveform produced by the pulse oximeter, which is widely used for monitoring arterial oxygen saturation in patients. Various methods for extracting the breathing rate from the PPG waveform have been compared using a consistent data set, and a novel technique using autoregressive modelling is presented. This novel technique is shown to outperform the existing techniques, with a mean error in breathing rate of 0.04 breaths per minute.

Keywords: Autoregressive modelling, breathing rate, photoplethysmogram, pulse oximetry.
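A minimal sketch of the autoregressive idea (an assumed formulation for illustration, not necessarily the authors' exact pipeline): fit an AR model to the respiration-modulated waveform derived from the PPG, then read the breathing frequency off the pole closest to the unit circle within the plausible respiratory band. The sampling rate and model order are assumptions:

```python
import numpy as np

def ar_breathing_rate(x, fs, order=10, band=(0.067, 0.5)):  # 4-30 breaths/min
    x = x - np.mean(x)
    # Least-squares AR fit: x[n] ~ a1 x[n-1] + ... + ap x[n-p]
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    poles = np.roots(np.r_[1.0, -a])              # roots of the AR polynomial
    freqs = np.angle(poles) * fs / (2 * np.pi)    # pole angles -> frequencies
    # Among poles in the respiratory band, the one nearest the unit circle
    # (largest magnitude) marks the strongest resonance.
    cand = [(abs(p), f) for p, f in zip(poles, freqs) if band[0] <= f <= band[1]]
    return max(cand)[1] * 60.0                    # breaths per minute

fs = 4.0                                          # assumed waveform rate (Hz)
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(len(t))
print(ar_breathing_rate(x, fs))                   # ~15 breaths per minute
```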

1143 Numerical Analysis of All-Optical Microwave Mixing and Bandpass Filtering in an RoF Link

Authors: S. Khosroabadi, M. R. Salehi

Abstract:

In this paper, all-optical signal processors that perform both microwave mixing and bandpass filtering in a radio-over-fiber (RoF) link are presented. The key device is a Mach-Zehnder modulator (MZM) which performs all-optical microwave mixing. An up-converted microwave signal is obtained and other unwanted frequency components are suppressed at the end of the fiber span.

Keywords: Microwave mixing, bandpass filtering, all-optical, signal processing, MZM.
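A textbook sketch of why the MZM acts as a mixer, assuming an ideal intensity transfer function (this derivation is illustrative, not taken from the paper): with drive voltage V(t) = V_{RF}\cos\omega_{RF}t + V_{LO}\cos\omega_{LO}t and bias phase \varphi_b, the detected power is

\[ P_{out}(t) \propto 1 + \cos\!\left(\varphi_b + m_{RF}\cos\omega_{RF}t + m_{LO}\cos\omega_{LO}t\right), \qquad m_i = \frac{\pi V_i}{V_\pi}. \]

Biasing at the null point (\varphi_b = \pi) and applying the Jacobi-Anger expansion leaves, among the surviving terms,

\[ 2\,J_1(m_{RF})\,J_1(m_{LO})\left[\cos(\omega_{RF}+\omega_{LO})t + \cos(\omega_{RF}-\omega_{LO})t\right], \]

i.e. the up- and down-converted products; the bandpass filtering at the end of the fiber span then selects the desired up-converted component and suppresses the remaining ones.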

1142 Applications of AUSM+ Scheme on Subsonic, Supersonic and Hypersonic Flow Fields

Authors: Muhammad Yamin Younis, Muhammad Amjad Sohail, Tawfiqur Rahman, Zaka Muhammad, Saifur Rahman Bakaul

Abstract:

The performance of Advection Upstream Splitting Method (AUSM) schemes is evaluated against experimental flow fields at different Mach numbers, and the results are compared with experimental data of subsonic, supersonic and hypersonic flow fields. The turbulence model used here is the SST model by Menter. The numerical predictions include the lift coefficient, drag coefficient and pitching moment coefficient at different Mach numbers and angles of attack. This work describes a computational study undertaken to compute the aerodynamic characteristics of different air vehicle configurations using a structured Navier-Stokes computational technique. The CFD code is based on the idea of an upwind scheme for the convective fluxes. CFD results for the GLC305 airfoil and a cone-cylinder tail-finned missile, calculated with the above-mentioned turbulence model, are compared with the available data. A wide range of Mach numbers from subsonic to hypersonic speeds is simulated and the results are compared. When the computation is done using the viscous turbulence model, the above-mentioned coefficients show very good agreement with the experimental values. The AUSM scheme is very efficient in regions of very high pressure gradients, such as shock waves and discontinuities, and the AUSM versions simulate all types of flows, from low subsonic to hypersonic, without oscillations.

Keywords: Subsonic, supersonic, hypersonic, AUSM+, drag coefficient, lift coefficient, pitching moment coefficient, pressure coefficient, turbulent flow.
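As a reference sketch of the splitting the AUSM family is built on (written here in 1-D Euler form; AUSM+ refines these polynomials): the interface flux is divided into a convective part, upwinded by an interface Mach number, and a pressure part,

\[ \mathbf{F}_{1/2} = \dot m_{1/2}\,\boldsymbol{\psi}_{L/R} + \mathbf{P}_{1/2}, \qquad \boldsymbol{\psi} = (1,\ u,\ H)^T, \]

where \boldsymbol{\psi} is taken from the left or right state according to the sign of the interface Mach number

\[ M_{1/2} = \mathcal{M}^+(M_L) + \mathcal{M}^-(M_R), \qquad \mathcal{M}^\pm(M) = \begin{cases} \pm\tfrac{1}{4}(M \pm 1)^2, & |M| \le 1, \\ \tfrac{1}{2}(M \pm |M|), & |M| > 1. \end{cases} \]

AUSM+ adds higher-order terms to these split functions (e.g. \mathcal{M}^\pm_{(4)}(M) = \pm\tfrac{1}{4}(M \pm 1)^2 \pm \beta (M^2 - 1)^2 for |M| \le 1), which smooths the splitting near sonic conditions and underlies the oscillation-free shock capturing noted above.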

1141 Edge-end Pixel Extraction for Edge-based Image Segmentation

Authors: Mahinda P. Pathegama, Özdemir Göl

Abstract:

Extraction of edge-end pixels is an important step in the edge linking process used to achieve edge-based image segmentation. This paper presents an algorithm to extract edge-end pixels together with their directional sensitivities as an augmentation to the currently available mathematical models. The algorithm is implemented in the Java environment because of its inherent compatibility with web interfaces, since its main use is envisaged to be remote image analysis on a virtual instrumentation platform.

Keywords: edge-end pixels, image processing, image segmentation, pixel extraction.
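One simple and common edge-end test (an assumed model for illustration, not necessarily the paper's augmented model) is that, in a thinned binary edge map, an edge-end pixel is an edge pixel with exactly one 8-connected edge neighbour; the offset to that neighbour gives the end's directional sensitivity. A compact sketch:

```python
import numpy as np
from scipy.ndimage import convolve

def edge_end_pixels(edges):
    e = (edges > 0).astype(np.uint8)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])                 # counts the 8 neighbours
    ends = (e == 1) & (convolve(e, kernel, mode="constant") == 1)
    results = []
    for y, x in zip(*np.nonzero(ends)):
        for dy in (-1, 0, 1):                      # locate the one neighbour
            for dx in (-1, 0, 1):
                if (dy or dx) and 0 <= y + dy < e.shape[0] \
                        and 0 <= x + dx < e.shape[1] and e[y + dy, x + dx]:
                    results.append(((y, x), (dy, dx)))  # pixel + direction
    return results
```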

1140 H.263 Based Video Transceiver for Wireless Camera System

Authors: Won-Ho Kim

Abstract:

In this paper, a design of an H.263-based wireless video transceiver is presented for a wireless camera system. It uses a standard WiFi transceiver, and the coverage area is up to 100 m. The standard H.263 video encoding technique is used for video compression, since the wireless video transmitter is unable to transmit high-capacity raw data in real time; the implemented system is capable of streaming NTSC 720x480 video at less than 1 Mbps.

Keywords: Digital signal processing, H.263 video encoder, surveillance camera, wireless video transceiver.
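A back-of-envelope figure shows why compression is unavoidable here (assuming 4:2:0 chroma subsampling, i.e. 1.5 bytes per pixel, at 30 frames/s; these sampling assumptions are not stated in the abstract):

\[ 720 \times 480 \times 1.5\ \tfrac{\text{B}}{\text{px}} \times 30\ \tfrac{\text{frames}}{\text{s}} \times 8\ \tfrac{\text{bit}}{\text{B}} \approx 124\ \text{Mbps}, \]

so streaming the same video at under 1 Mbps implies a compression ratio on the order of 124:1, well within reach of H.263.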

1139 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry

Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine

Abstract:

The aerial photogrammetry of shallow water bottoms has the potential to be an efficient high-resolution survey technique for shallow water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor, which is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of the existing methods: the method that applies no correction factor, and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs better at Site 2 and worse at Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited; it also suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise, according to our numerical experiment. Overall, the accuracy of a refraction correction method depends on various factors such as the location, image acquisition, and GPS measurement conditions, and the most effective method can be selected by statistical selection (e.g. leave-one-out cross-validation).

Keywords: Bottom elevation, multi-view stereo, river, structure-from-motion.
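The correction itself reduces to a one-parameter regression, as in this minimal sketch (the variable names and sample values are illustrative assumptions):

```python
import numpy as np

# Calibration pairs: field-measured true depths vs. SfM-MVS apparent depths (m).
h_true = np.array([0.42, 0.55, 0.71, 0.90, 1.10])
h_app = np.array([0.30, 0.41, 0.53, 0.66, 0.83])

# Least-squares slope through the origin = empirical correction factor.
factor = (h_app @ h_true) / (h_app @ h_app)

h_corrected = factor * h_app         # refraction-corrected depth map
h_fixed = 1.34 * h_app               # fixed refractive-index alternative
```

Adding an intercept (a full linear regression) corresponds to the offset-based method discussed above, which the abstract reports to be unstable when calibration points are few.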

1138 Watermark-based Counter for Restricting Digital Audio Consumption

Authors: Mikko Löytynoja, Nedeljko Cvejic, Tapio Seppänen

Abstract:

In this paper we introduce three watermarking methods that can be used to count the number of times that a user has played some content. The proposed methods are tested with audio content in our experimental system using the most common signal processing attacks. The test results show that the watermarking methods used enable the watermark to be extracted under the most common attacks with a low bit error rate.

Keywords: Digital rights management, restricted usage, content protection, spread spectrum, audio watermarking.

1137 Guidelines for Developing, Supervising, Assessing and Evaluating Capstone Design Project of BSc in Electrical and Electronic Engineering Program

Authors: Muhibul Haque Bhuyan

Abstract:

The inclusion of design projects in an undergraduate electrical and electronic engineering curriculum, and the production of creative ideas in final year capstone design projects, have drawn numerous comments from the mentors and visiting program evaluator team members of the Board of Accreditation for Engineering and Technical Education (BAETE) at different public and private universities in Bangladesh. To eradicate this deficiency, which must be addressed to obtain program accreditation, thorough changes were required in the Department of Electrical and Electronic Engineering (EEE) for its BSc in EEE program at Southeast University, Dhaka, Bangladesh. We suggested changes to the course titles and contents, emphasized the inclusion of capstone design projects, revised question setting, examined students through other standard methods, selected and retained Outcome-Based Education (OBE)-oriented engineering faculty members, improved laboratories by purchasing new equipment and software and developing new experiments for each laboratory course, and engaged the students in practical designs in various courses and final year projects. This paper reports on capstone design project course objectives, course outcomes, mapping with the program outcomes, the cognitive domain of learning, assessment schemes, guidelines, suggestions and recommendations for supervision processes, assessment strategy, rubric setting, etc. It is expected that this will substantially improve the offering, supervision, and assessment of capstone design projects in the undergraduate EEE program to fulfill the arduous requirements of OBE-based BAETE accreditation.

Keywords: Course outcome, capstone design project, assessment and evaluation, electrical and electronic engineering.

1136 Performance Analysis of the Subgroup Method for Collective I/O

Authors: Kwangho Cha, Hyeyoung Cho, Sungho Kim

Abstract:

As many scientific applications require large-scale data processing, the importance of parallel I/O has been increasingly recognized. Collective I/O is one of the notable features of parallel I/O, enabling application programmers to easily handle large data volumes. In this paper, we measured and analyzed the performance of original collective I/O and of the subgroup method, a way of using MPI collective I/O effectively. The experimental results show that the subgroup method performs well for small data sizes.

Keywords: Collective I/O, MPI, parallel file system.
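A hedged mpi4py sketch of the subgroup idea: rather than issuing one collective write over MPI.COMM_WORLD, the ranks are split into smaller subcommunicators and each subgroup performs its own collective write. The group size, file name and file layout are illustrative assumptions:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

GROUP_SIZE = 4                                   # ranks per subgroup (assumed)
subcomm = comm.Split(rank // GROUP_SIZE, rank)   # colour = subgroup id

data = np.full(1024, rank, dtype=np.float64)     # this rank's block
offset = rank * data.nbytes                      # disjoint regions of one file

fh = MPI.File.Open(subcomm, "output.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(offset, data)                    # collective within the subgroup
fh.Close()
```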

1135 Time Temperature Dependence of Long Fiber Reinforced Polypropylene Manufactured by Direct Long Fiber Thermoplastic Process

Authors: K. A. Weidenmann, M. Grigo, B. Brylka, P. Elsner, T. Böhlke

Abstract:

In order to reduce fuel consumption, the weight of automobiles has to be reduced. Fiber reinforced polymers offer the potential to reach this aim because of their high stiffness-to-weight ratio. Additionally, the use of fiber reinforced polymers in automotive applications has to allow for economic large-scale production. In this regard, long fiber reinforced thermoplastics made by direct processing offer both mechanical performance and processability in injection moulding and compression moulding. The work presented in this contribution deals with long glass fiber reinforced polypropylene directly processed in compression moulding (D-LFT). For use in automotive applications, both the temperature and the time dependency of the material's properties have to be investigated to fulfill performance requirements during crash and the demands of service temperatures ranging from -40 °C to 80 °C. To consider the influence of both temperature and time, quasi-static tensile tests have been carried out at different temperatures. These tests have been complemented by high speed tensile tests at different strain rates. As expected, the increase in strain rate results in an increase of the elastic modulus, which correlates with the increase of stiffness with decreasing service temperature. The results are in good accordance with results determined by dynamic mechanical analysis within the range of 0.1 to 100 Hz. The experimental results from the different testing methods were grouped and interpreted using different time-temperature shift approaches. In this regard, the Williams-Landel-Ferry approach and the kinetics-based Arrhenius approach have been used. As the theoretical shift factor follows an arctan function, an empirical approach was also taken into consideration. It could be shown that this empirical approach best describes the time-temperature superposition for glass fiber reinforced polypropylene manufactured by D-LFT processing.

Keywords: Composite, long fiber reinforced thermoplastics, mechanical properties, dynamic mechanical analysis, time temperature superposition.
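For reference, the two classical shift-factor forms named above are, with T_0 the chosen reference temperature,

\[ \text{WLF:}\quad \log a_T = \frac{-C_1\,(T - T_0)}{C_2 + (T - T_0)}, \qquad\qquad \text{Arrhenius:}\quad \ln a_T = \frac{E_a}{R}\left(\frac{1}{T} - \frac{1}{T_0}\right), \]

and a master curve is obtained by plotting the measured modulus against the reduced time t/a_T (or reduced frequency f \cdot a_T), so that data taken at different temperatures collapse onto a single curve.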

1134 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes they are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors needed to combine, so that computational efficiency could be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.
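A compact sketch of this pipeline for the binary frame {change, no-change} plus the ignorance set; the window/histogram handling, the mass assignment and the CUSUM threshold are illustrative assumptions:

```python
import numpy as np

def kl(p, q):                                      # Kullback-Leibler divergence
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

def sensor_mass(window, pre, post, bins):
    est, _ = np.histogram(window, bins=bins, density=True)
    est = (est + 1e-9) / np.sum(est + 1e-9)        # estimated current pdf
    w0, w1 = np.exp(-kl(est, pre)), np.exp(-kl(est, post))
    return {"no": 0.9 * w0 / (w0 + w1),            # closeness -> evidence,
            "chg": 0.9 * w1 / (w0 + w1),           # 10% reserved as ignorance
            "both": 0.1}

def dempster(m1, m2):                              # combine two sensors' masses
    k = m1["no"] * m2["chg"] + m1["chg"] * m2["no"]          # conflict mass
    no = m1["no"] * m2["no"] + m1["no"] * m2["both"] + m1["both"] * m2["no"]
    chg = m1["chg"] * m2["chg"] + m1["chg"] * m2["both"] + m1["both"] * m2["chg"]
    both = m1["both"] * m2["both"]
    return {"no": no / (1 - k), "chg": chg / (1 - k), "both": both / (1 - k)}

def cusum_step(S, m, h=5.0):
    # Pignistic probabilities: split the ignorance mass equally.
    p_chg, p_no = m["chg"] + m["both"] / 2, m["no"] + m["both"] / 2
    S = max(0.0, S + np.log(p_chg / p_no))         # CUSUM on the log-ratio
    return S, S > h                                # declare change at threshold
```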

1133 Flood Modeling in Urban Area Using a Well-Balanced Discontinuous Galerkin Scheme on Unstructured Triangular Grids

Authors: Rabih Ghostine, Craig Kapfer, Viswanathan Kannan, Ibrahim Hoteit

Abstract:

Urban flooding resulting from a sudden release of water due to dam-break or excessive rainfall is a serious environmental hazard that causes loss of human life and large economic losses. Anticipating floods before they occur could minimize human and economic losses through the implementation of appropriate protection, provision, and rescue plans. This work reports on the numerical modelling of flash flood propagation in urban areas after an excessive rainfall event or dam-break. A two-dimensional (2D) depth-averaged shallow water model is used with a refined unstructured grid of triangles to represent the urban topography. The 2D shallow water equations are solved using a second-order well-balanced discontinuous Galerkin scheme. One theoretical test case and three flood events are described to demonstrate the potential benefits of the scheme: (i) wetting and drying in a parabolic basin; (ii) a flash flood over a physical model of the urbanized Toce River valley in Italy; (iii) wave propagation on the Reyran river valley following the Malpasset dam-break in 1959 (France); and (iv) the dam-break flood of October 1982 at the town of Sumacarcel (Spain). The capability of the scheme is also verified against alternative models. Computational results compare well with recorded data and show that the scheme is at least as efficient as comparable second-order finite volume schemes, with notable efficiency speedup due to parallelization.

Keywords: Flood modeling, dam-break, shallow water equations, Discontinuous Galerkin scheme, MUSCL scheme.
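For reference, the depth-averaged model being solved can be sketched in conservative form (h: water depth; u, v: depth-averaged velocities; z_b: bottom elevation; S_f: friction terms):

\[ \frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathbf{F}}{\partial x} + \frac{\partial \mathbf{G}}{\partial y} = \mathbf{S}, \qquad \mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix}, \quad \mathbf{F} = \begin{pmatrix} hu \\ hu^2 + \tfrac{1}{2} g h^2 \\ huv \end{pmatrix}, \quad \mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^2 + \tfrac{1}{2} g h^2 \end{pmatrix}, \quad \mathbf{S} = \begin{pmatrix} 0 \\ -gh\,\partial_x z_b - S_{f,x} \\ -gh\,\partial_y z_b - S_{f,y} \end{pmatrix}. \]

"Well-balanced" means the discrete scheme preserves the lake-at-rest steady state (u = v = 0, h + z_b = const) exactly, preventing spurious currents over the irregular urban topography.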

1132 Cloud Computing: Changing Cogitation about Computing

Authors: Mehrdad Mahdavi Boroujerdi, Soheil Nazem

Abstract:

Cloud Computing is a new technology that lets us use the Cloud to meet our computation needs. The Cloud refers to a scalable network of computers that work together like the Internet. An important element of Cloud Computing is that we shift the processing, managing, storing and implementing of our data from local machines into the Cloud, which helps us improve efficiency. Because it is a new technology, it has both advantages and disadvantages, which are scrutinized in this article. Some vanguards of this technology are then studied. We conclude that Cloud Computing will play important roles in our future lives.

Keywords: Cloud Computing, Grid Computing, Internet as a Platform, On-demand Computing, Software as a Service.

1131 The Use of Information Technologies in Special Education for Preparation of Individual Education Programs

Authors: Yasar Guneri Sahin, Mehmet Cudi Okur

Abstract:

In this presentation, we discuss the use of information technologies in the area of special education for teaching individuals with learning disabilities. Application software developed for this purpose is used to demonstrate the applicability of a database-integrated information processing system to alleviate the burden on educators. The software allows the preparation of individualized education programs based on predefined objectives, goals and behaviors.

Keywords: Special education, disabled individual, information technology, individual education programs.

1130 An Approximation of Daily Rainfall by Using a Pixel Value Data Approach

Authors: Sarisa Pinkham, Kanyarat Bussaban

Abstract:

This research aims to approximate the amount of daily rainfall using a pixel value data approach. Daily rainfall maps from the Thailand Meteorological Department for the period January to December 2013 were used as the data in this study. The results showed that this approach can approximate the amount of daily rainfall with an RMSE of 3.343.

Keywords: Daily rainfall, Image processing, Approximation, Pixel value data.

1129 Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory

Authors: Ebipatei Victoria Tunyan, T. A. Cao, Cheol Young Ock

Abstract:

Detecting subjectively biased statements is a vital task. This is because this kind of bias, when present in text or other information dissemination media such as news, social media, scientific texts, and encyclopedias, can weaken trust in the information and stir conflicts amongst consumers. Subjective bias detection is also critical for many Natural Language Processing (NLP) tasks like sentiment analysis, opinion identification, and bias neutralization. Having a system that can adequately detect subjectivity in text will significantly boost research in the above-mentioned areas. It can also come in handy for platforms like Wikipedia, where the use of neutral language is important. The goal of this work is to identify subjectively biased language in text at the sentence level. With machine learning, we can solve complex AI problems, making it a good fit for the problem of subjective bias detection. A key step in this approach is to train a classifier based on BERT (Bidirectional Encoder Representations from Transformers) as the upstream model. BERT by itself can be used as a classifier; however, in this study, we use BERT as a data preprocessor as well as an embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network incorporating an attention mechanism. This approach produces a deeper and better classifier. We evaluate the effectiveness of our model using the Wiki Neutrality Corpus (WNC), compiled from Wikipedia edits that removed various biased instances from sentences, as a benchmark dataset, on which we also compare our model to existing approaches. Experimental analysis indicates improved performance, as our model achieved state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages.

Keywords: Subjective bias detection, machine learning, BERT–BiLSTM–Attention, text classification, natural language processing.
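A hedged PyTorch sketch of the architecture described above: BERT supplies token embeddings, a BiLSTM re-encodes them, additive attention pools the sequence, and a linear head classifies biased vs. neutral. The hidden sizes, pooling details and the example sentence are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLSTMAttention(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", hidden=256, classes=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)        # additive attention scores
        self.cls = nn.Linear(2 * hidden, classes)

    def forward(self, input_ids, attention_mask):
        emb = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(emb)                      # (B, T, 2*hidden)
        scores = self.att(torch.tanh(h)).squeeze(-1)
        scores = scores.masked_fill(attention_mask == 0, -1e9)  # ignore padding
        alpha = torch.softmax(scores, dim=-1)      # attention weights
        pooled = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)
        return self.cls(pooled)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["Many experts agree this was clearly a disastrous policy."],
            return_tensors="pt", padding=True)
logits = BertBiLSTMAttention()(batch["input_ids"], batch["attention_mask"])
```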

1128 64 bit Computer Architectures for Space Applications – A Study

Authors: Niveditha Domse, Kris Kumar, K. N. Balasubramanya Murthy

Abstract:

More recent satellite projects/programs make extensive use of real-time embedded systems. 16-bit processors which meet the Mil-Std-1750 standard architecture have been used in on-board systems, and most of the space applications have been written in ADA. From a futuristic point of view, 32-bit/64-bit processors are needed in the area of spacecraft computing, and an effort is therefore desirable in the study and survey of 64-bit architectures for space applications. This will also result in significant technology development in terms of VLSI and software tools for ADA (as the legacy code is in ADA). There are several basic requirements for a special processor for this purpose. They include Radiation Hardened (RadHard) devices, very low power dissipation, compatibility with existing operational systems, scalable architectures for higher computational needs, reliability, higher memory and I/O bandwidth, predictability, a real-time operating system, and manufacturability of such processors. Further considerations include the selection of FPGA devices, selection of EDA tool chains, design flow, partitioning of the design, pin count, performance evaluation, timing analysis, etc. This project deals with a brief study of the 32-bit and 64-bit processors readily available in the market and with designing/fabricating a 64-bit RISC processor named RISC MicroProcessor, with the added functionalities of an extended double precision floating point unit and a 32-bit signal processing unit acting as co-processors. In this paper, we emphasize the ease and importance of using Open Core designs (the OpenSparc T1 Verilog RTL) and Open Source EDA tools such as Icarus to develop FPGA-based prototypes quickly. Commercial tools such as Xilinx ISE are also used for synthesis when appropriate.

Keywords: RISC MicroProcessor, RPC – RISC Processor Core, PBX – Processor to Block Interface part of the Interconnection Network, BPX – Block to Processor Interface part of the Interconnection Network, FPU – Floating Point Unit, SPU – Signal Processing Unit, WB – Wishbone Interface, CTU – Clock and Test Unit

1127 Mammogram Image Size Reduction Using 16-8 bit Conversion Technique

Authors: Ayman A. AbuBaker, Rami S. Qahwaji, Musbah J. Aqel, Mohmmad H. Saleh

Abstract:

Two algorithms are proposed to reduce the storage requirements for mammogram images. The input image goes through a shrinking process that converts the 16-bit image to 8 bits using a pixel-depth conversion algorithm, followed by an enhancement process. The performance of the algorithms is evaluated objectively and subjectively. A 50% reduction in size is obtained with no loss of significant data in the breast region.

Keywords: Breast cancer, Image processing, Image reduction, Mammograms, Image enhancement
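A minimal sketch of a pixel-depth conversion of this kind (the exact mapping used by the authors is not specified here, so this linear rescaling is an assumption): the occupied part of the 16-bit intensity range is mapped into 8 bits, which halves the storage per pixel:

```python
import numpy as np

def to_8bit(img16):
    lo, hi = int(img16.min()), int(img16.max())   # occupied intensity range
    scaled = (img16.astype(np.float64) - lo) / max(hi - lo, 1) * 255.0
    return scaled.astype(np.uint8)                # 2 bytes/px -> 1 byte/px

mammo16 = np.random.randint(0, 4096, (64, 64), dtype=np.uint16)  # placeholder
mammo8 = to_8bit(mammo16)
```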

1126 Productivity Effect of Urea Deep Placement Technology: An Empirical Analysis from Irrigation Rice Farmers in the Northern Region of Ghana

Authors: Shaibu Baanni Azumah, Ignatius Tindjina, Stella Obanyi, Tara N. Wood

Abstract:

This study examined the effect of Urea Deep Placement (UDP) technology on the output of irrigated rice farmers in the northern region of Ghana. A multi-stage sampling technique was used to select 142 rice farmers from the Golinga and Bontanga irrigation schemes, around Tamale. A treatment effect model was estimated in two stages: firstly, to determine the factors that influenced farmers’ decision to adopt the UDP technology; and secondly, to determine the effect of the adoption of the UDP technology on the output of rice farmers. The significant variables that influenced rice farmers’ adoption of the UDP technology were the sex of the farmer, land ownership, off-farm activity, extension service, farmer group participation and training. The results also revealed that farm size and the adoption of UDP technology significantly influenced the output of rice farmers in the northern region of Ghana. In addition to the potential of the technology to improve yields, it also presents an employment opportunity for women and youth, who are engaged in the deep placement of Urea Super Granules (USG), as well as in the transplantation of rice. It is recommended that the government of Ghana work closely with the IFDC to embed the UDP technology in the national agricultural programmes and policies. The study also recommends effective collaboration between the government, through the Ministry of Food and Agriculture (MoFA), and the International Fertilizer Development Center (IFDC) to train agricultural extension agents on UDP technology in the rice producing areas of the country.

Keywords: Northern Ghana, output, irrigation rice farmers, treatment effect model, urea deep placement.
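A sketch of the two-stage treatment effect model referred to above (the notation is assumed): a probit adoption equation and an outcome equation with correlated errors,

\[ A_i^* = \mathbf{z}_i'\boldsymbol{\gamma} + u_i, \qquad A_i = \mathbf{1}[A_i^* > 0], \qquad y_i = \mathbf{x}_i'\boldsymbol{\beta} + \delta A_i + \varepsilon_i, \]

where A_i indicates UDP adoption, y_i is rice output, and \delta is the treatment effect; estimating the two equations jointly, with corr(u_i, \varepsilon_i) = \rho, corrects the estimate of \delta for self-selection into adoption.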

1125 Contextual SenSe Model: Word Sense Disambiguation Using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in Natural Language Processing (NLP) refers to the ability of a word, phrase, sentence, or text to have multiple meanings, which results in various kinds of ambiguity: lexical, syntactic, semantic, anaphoric and referential. This study focuses mainly on solving the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used lemma and Part of Speech (POS) tokens of words instead: the lemma adds generality, and the POS adds properties of the word to the token. We have designed a method to create an affinity matrix that captures the affinity between any pair of lemma_POS tokens (a token in which the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens using the affinity matrix, under a hierarchy given by the POS of the lemma. Furthermore, three different mechanisms to predict the sense of a target word using the affinity/similarity values are devised. Each contextual token contributes some value to each sense of the target word, and whichever sense accumulates the highest value becomes the sense of the target word. Contextual tokens thus play a key role in creating the sense clusters and predicting the sense of the target word; hence, the model is named the Contextual SenSe Model (CSM). CSM exhibits noteworthy simplicity and clarity of explanation, in contrast to contemporary deep learning models characterized by intricacy, time-intensive processing, and challenging interpretation. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that, despite the simplicity of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.

Keywords: Word Sense Disambiguation, WSD, Contextual SenSe Model, Most Frequent Sense, part of speech, POS, Natural Language Processing, NLP, OOV, out of vocabulary, ELMo, Embeddings from Language Model, BERT, Bidirectional Encoder Representations from Transformers, Word2Vec, lemma_POS, Algorithm.
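A highly simplified sketch of the prediction step (the affinity matrix and sense clusters below are hypothetical stand-ins for the structures CSM builds from SemCor): each contextual lemma_POS token contributes its affinity with a candidate sense cluster, and the sense with the highest accumulated value wins:

```python
from collections import defaultdict

def predict_sense(target, context, affinity, sense_clusters):
    scores = defaultdict(float)
    for sense, cluster_tokens in sense_clusters[target].items():
        for ctx in context:                  # contextual lemma_POS tokens
            for tok in cluster_tokens:
                scores[sense] += affinity.get((ctx, tok), 0.0)
    return max(scores, key=scores.get) if scores else None

# Illustrative call with hypothetical data:
affinity = {("money_NOUN", "deposit_NOUN"): 0.8,
            ("river_NOUN", "shore_NOUN"): 0.7}
sense_clusters = {"bank_NOUN": {"financial": ["deposit_NOUN"],
                                "river": ["shore_NOUN"]}}
print(predict_sense("bank_NOUN", ["money_NOUN"], affinity, sense_clusters))
# -> "financial"
```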

1124 Dynamic Web-Based 2D Medical Image Visualization and Processing Software

Authors: Abdelhalim N. Mohammed, Mohammed Y. Esmail

Abstract:

Over recent decades, medical imaging was dominated by the use of costly film media for the review and archival of medical investigations; however, due to developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web was produced. Web technologies have been used successfully in telemedicine applications, and here the combination of web technologies with DICOM is used to design a web-based, open-source DICOM viewer. The web server allows querying and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic pages for medical image visualization and processing were created using JavaScript and HTML5. The XAMPP Apache server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, platform-independent, allows images to be displayed and manipulated efficiently, and is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is applied, in which a 2-D discrete wavelet transform decomposes the image and the wavelet coefficients are thresholded and entropy-encoded before transmission, to decrease transmission time and storage cost. The compression performance was estimated using image quality metrics such as the mean square error (MSE), peak signal-to-noise ratio (PSNR) and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.

Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN.
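A hedged PyWavelets sketch of the compression step, using the 'coif3' filter named in the abstract; the decomposition level and the keep-fraction threshold rule are assumptions:

```python
import numpy as np
import pywt

def compress(img, wavelet="coif3", level=3, keep=0.05):
    coeffs = pywt.wavedec2(img, wavelet, level=level)       # 2-D DWT
    arr, slices = pywt.coeffs_to_array(coeffs)
    # Hard-threshold: keep only the largest 5% of coefficients by magnitude
    # (the kept coefficients are what entropy encoding would then compress).
    thresh = np.quantile(np.abs(arr), 1 - keep)
    arr = pywt.threshold(arr, thresh, mode="hard")
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)                   # reconstruction

img = np.random.rand(256, 256)                              # placeholder image
rec = compress(img)[:256, :256]
mse = float(np.mean((img - rec) ** 2))
psnr = 10 * np.log10(1.0 / mse)                             # peak assumed 1.0
```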
