Search results for: conventional computing

359 Cyber Warriors for Cyber Security and Information Assurance - An Academic Perspective

Authors: Ronald F. Gonzales, Gordon W. Romney, Pradip Peter Dey, Mohammad Amin, Bhaskar Raj Sinha

Abstract:

A virtualized and virtual approach is presented for academically preparing students to engage successfully, at a strategic level, with the structured and unstructured concerns and measures in the area of cyber security and information assurance. The Master of Science in Cyber Security and Information Assurance (MSCSIA) is a professional degree for those who endeavor, through technical and managerial measures, to ensure the security, confidentiality, integrity, authenticity, control, availability and utility of the world's computing and information systems infrastructure. The National University Cyber Security and Information Assurance program is offered as a Master's degree. The emphasis of the MSCSIA program uniquely includes hands-on academic instruction using virtual computers. In 2011, the NU facility became fully operational, using a system architecture that provides a Virtual Education Laboratory (VEL) accessible to both onsite and online students. The first student cohort completed their MSCSIA training on March 2, 2012 after fulfilling 12 courses, for a total of 54 units of college credit. The rapid-pace scheduling of one course per month is immensely challenging, perpetually changing, and virtually multifaceted. This paper analyses these descriptive terms in light of the globalization penetration breaches present in today's world of cyber security. In addition, we present current NU practices to mitigate risks.

Keywords: Cyber security, information assurance, mitigate risks, virtual machines, strategic perspective.

358 Effective Charge Coupling in Low Dimensional Doped Quantum Antiferromagnets

Authors: Suraka Bhattacharjee, Ranjan Chaudhury

Abstract:

The interaction between the charge degrees of freedom for itinerant antiferromagnets is investigated in terms of a generalized charge stiffness constant corresponding to the nearest-neighbour t-J model and the t1-t2-t3-J model. Low-dimensional hole-doped antiferromagnets are well-known systems that can be described by t-J-like models. Accordingly, we have used these models to investigate the fermionic pairing possibilities and the coupling between the itinerant charge degrees of freedom. A detailed comparison between spin and charge couplings highlights that the charge and spin couplings show very similar behaviour in the over-doped region, whereas they show completely different trends in the lower doping regimes. Moreover, a qualitative equivalence between the generalized charge stiffness and the effective Coulomb interaction is established based on comparisons with other theoretical and experimental results. Thus, the enhanced possibility of fermionic pairing is inherent in the reduction of Coulomb repulsion with increasing doping concentration. However, this increased possibility cannot give rise to pairing without some other pair-producing mechanism outside the t-J model. Therefore, one can conclude that the t-J-like models alone are not capable of producing conventional momentum-space superconducting pairing.
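
For reference, the nearest-neighbour t-J model discussed above has the standard projected form given below (a textbook statement consistent with the models named in the abstract, not a result of the paper); the t1-t2-t3-J variant simply extends the hopping term to second- and third-neighbour bonds with amplitudes t2 and t3.

```latex
H_{t\text{-}J} = -t \sum_{\langle i,j \rangle,\sigma}
  \mathcal{P}\!\left( c^{\dagger}_{i\sigma} c_{j\sigma} + \text{h.c.} \right)\!\mathcal{P}
  \;+\; J \sum_{\langle i,j \rangle}
  \left( \mathbf{S}_i \cdot \mathbf{S}_j - \tfrac{1}{4}\, n_i n_j \right)
```

Here \mathcal{P} projects out doubly occupied sites, \mathbf{S}_i is the spin operator and n_i the number operator at site i.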

Keywords: Generalized charge stiffness constant, charge coupling, effective Coulomb interaction, t-J-like models, momentum-space pairing.

357 Discrete Polyphase Matched Filtering-based Soft Timing Estimation for Mobile Wireless Systems

Authors: Thomas O. Olwal, Michael A. van Wyk, Barend J. van Wyk

Abstract:

In this paper, we present a soft timing phase estimation (STPE) method for wireless mobile receivers operating at low signal-to-noise ratios (SNRs). Discrete Polyphase Matched (DPM) filters, a Log-maximum a posteriori probability (MAP) algorithm and/or a Soft-Output Viterbi Algorithm (SOVA) are combined to derive a new timing recovery (TR) scheme. We apply this scheme to a wireless cellular communication system model that comprises a raised cosine filter (RCF) and a bit-interleaved turbo-coded multi-level modulation (BITMM) scheme; the channel is assumed to be memoryless. Furthermore, no clock signals are transmitted to the receiver, contrary to the classical data-aided (DA) models. This new model ensures that both the bandwidth and the power of the communication system are conserved. However, the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests of bit error rate (BER) and block error rate (BLER) versus low SNR reveal that the proposed iterative soft timing recovery (ISTR) scheme outperforms the conventional schemes.

Keywords: discrete polyphase matched filters, maximum likelihood estimators, soft timing phase estimation, wireless mobile systems.

356 Exploiting Two Intelligent Models to Predict Water Level: A Field Study of Urmia Lake, Iran

Authors: Shahab Kavehkar, Mohammad Ali Ghorbani, Valeriy Khokhlov, Afshin Ashrafzadeh, Sabereh Darbandi

Abstract:

Water level forecasting using records of past time series is important in water resources engineering and management. For example, water level affects groundwater tables in low-lying coastal areas, as well as the hydrological regimes of some coastal rivers. Hence, a reliable prediction of sea-level variations is required in coastal engineering and hydrologic studies. During the past two decades, approaches based on Genetic Programming (GP) and Artificial Neural Networks (ANNs) have been developed. In the present study, GP is used to forecast daily water level variations for a set of time intervals using observed water levels. The measurements from a single tide gauge at Urmia Lake, Northwest Iran, were used to train and validate the GP approach for the period from January 1997 to July 2008. Two statistics, the root mean square error and the correlation coefficient, are used to verify the model by comparing its outputs with those of an Artificial Neural Network model. The results show that both of these artificial intelligence methodologies are satisfactory and can be considered as alternatives to the conventional harmonic analysis.
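
The two verification statistics named above are straightforward to compute; the sketch below shows them in Python on made-up water-level values (the series and variable names are purely illustrative, not data from the study).

```python
import numpy as np

def rmse(observed, predicted):
    """Root mean square error between observed and predicted water levels."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return np.sqrt(np.mean((observed - predicted) ** 2))

def correlation_coefficient(observed, predicted):
    """Pearson correlation coefficient between the two series."""
    return np.corrcoef(observed, predicted)[0, 1]

# Hypothetical daily water-level series (m) for illustration only.
obs = np.array([1271.92, 1271.90, 1271.88, 1271.87, 1271.85])
gp_pred = np.array([1271.91, 1271.91, 1271.89, 1271.86, 1271.86])

print(f"RMSE = {rmse(obs, gp_pred):.4f} m, r = {correlation_coefficient(obs, gp_pred):.4f}")
```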

Keywords: Water-Level variation, forecasting, artificial neural networks, genetic programming, comparative analysis.

355 A Modified Run Length Coding Technique for Test Data Compression Based on Multi-Level Selective Huffman Coding

Authors: C. Kalamani, K. Paramasivam

Abstract:

Test data compression is an efficient method for reducing the test application cost. The problem of reducing test data has been addressed by researchers from three different angles: test data compression, Built-In Self-Test (BIST), and test set compaction. The latter two methods are capable of enhancing fault coverage at the cost of hardware overhead. The drawback of the conventional methods is that, although they reduce test storage and test power, no additional compression is applied when the test data contain redundant run lengths. This paper presents a modified Run Length Coding (RLC) technique with Multi-Level Selective Huffman Coding (MLSHC) to reduce test data volume, test pattern delivery time and power dissipation in scan test applications: where a redundant run length is encountered, the preceding run symbol is replaced with a tiny codeword. Experimental results show that the presented method not only improves the test data compression but also reduces the overall test data volume compared to recent schemes. Experiments on the six largest ISCAS'89 benchmarks show that our method outperforms most known techniques.
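
As a point of reference, the conventional RLC step that the modified scheme builds on simply converts a scan vector into (symbol, run-length) pairs; the short Python sketch below illustrates that baseline only. The authors' tiny-codeword substitution and the MLSHC stage are not reproduced here, and the sample vector is hypothetical.

```python
def run_length_encode(bits):
    """Encode a binary test vector as (symbol, run length) pairs -- the
    conventional RLC step that the paper's modified scheme builds on."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

# Hypothetical scan test slice for illustration.
test_vector = "0000001111000000001"
print(run_length_encode(test_vector))
# [('0', 6), ('1', 4), ('0', 8), ('1', 1)]
```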

Keywords: Modified run length coding, multilevel selective Huffman coding, built-in self-test, modified selective Huffman coding, automatic test equipment.

354 Emission Assessment of Rice Husk Combustion for Power Production

Authors: Thipwimon Chungsangunsit, Shabbir H. Gheewala, Suthum Patumsawad

Abstract:

Rice husk is one of the alternative fuels for Thailand because of its high potential and environmental benefits. Nonetheless, the environmental profile of electricity production from rice husk must be assessed to ensure reduced environmental damage. A 10 MW pilot plant using rice husk as feedstock is the study site. The environmental impacts of the rice husk power plant are evaluated using the Life Cycle Assessment (LCA) methodology. Energy, material and carbon balances have been determined for tracing the system flow. Carbon closure has been used to describe the net amount of CO2 released from the system in relation to the amount being recycled between the power plant and the CO2 absorbed by the rice husk. The transportation of rice husk to the power plant has a significant effect on global warming, but not on acidification and photo-oxidant formation. The results showed that the impact potentials of the rice husk power plant are lower than those of conventional plants for most of the categories considered, except for the photo-oxidant formation potential from CO. The high CO from the rice husk power plant may be due to low boiler efficiency and high moisture content in the rice husk. The performance of the study site can be enhanced by improving the combustion efficiency.

Keywords: Environmental impact, Fossil fuels, Life Cycle Assessment (LCA), Renewable energy, Rice husk

353 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis

Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya

Abstract:

In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. Approximately 45% of the patients had adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features using first order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with one-vs-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and in the automatic classification of tumor stage and subtype.
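
To make the feature-plus-classifier pipeline concrete, here is a minimal Python sketch of one of the feature families named above (GLCM properties) feeding an SVM, assuming scikit-image >= 0.19 and scikit-learn are available. The random arrays stand in for real FDG-PET tumor ROIs and the labels are illustrative, so this is not the study's MATLAB implementation or its 51-feature set.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(img_slice, levels=64):
    """A few GLCM texture features from a quantized slice (an illustrative
    subset of the 51 features described in the abstract)."""
    q = np.digitize(img_slice, np.linspace(img_slice.min(), img_slice.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 2], levels=levels,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean() for p in
            ("contrast", "homogeneity", "energy", "correlation")]

# Hypothetical data: random "tumor ROIs" stand in for real FDG-PET slices.
rng = np.random.default_rng(0)
X = np.array([glcm_features(rng.random((32, 32))) for _ in range(20)])
y = rng.integers(0, 2, size=20)    # 0 = ADC, 1 = SqCC (labels illustrative)

clf = SVC(kernel="rbf").fit(X, y)  # SVC uses one-vs-one internally for multi-class
print(clf.predict(X[:3]))
```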

Keywords: Cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis.

352 Continuous FAQ Updating for Service Incident Ticket Resolution

Authors: Kohtaroh Miyamoto

Abstract:

As enterprise computing becomes more and more complex, the costs and technical challenges of IT system maintenance and support are increasing rapidly. One popular approach to managing IT system maintenance is to prepare and use a FAQ (Frequently Asked Questions) system to manage and reuse systems knowledge. Such a FAQ system can help reduce the resolution time for each service incident ticket. However, there is a major problem: over time, the knowledge in such FAQs tends to become outdated. Much of the knowledge captured in the FAQ requires periodic updates in response to new insights or new trends in the problems addressed in order to maintain its usefulness for problem resolution. These updates require a systematic approach to define the exact portion of the FAQ to change and its content. Therefore, we are working on a novel method to hierarchically structure the FAQ and automate the updates of its structure and content. We use the structured information and the unstructured text information, with the timelines of the information, in the service incident tickets. We cluster the tickets by structured category information, by keywords, and by keyword modifiers for the unstructured text information. We also calculate an urgency score based on trends, resolution times, and priorities. We carefully studied the tickets of one of our projects over a 2.5-year time period. After the first 6 months we started to create FAQs and confirmed they improved the resolution times. We continued observing over the next 2 years to assess the ongoing effectiveness of our method for the automatic FAQ updates. We improved the ratio of tickets covered by the FAQ from 32.3% to 68.9% during this time. Also, the average time reduction of ticket resolution was between 31.6% and 43.9%. Subjective analysis showed that more than 75% of respondents reported that the FAQ system was useful in reducing ticket resolution times.
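
One ingredient of the approach, clustering tickets by the keywords in their unstructured text, can be sketched with off-the-shelf tools. The Python fragment below uses TF-IDF plus k-means on four made-up ticket summaries purely for illustration; it omits the structured categories, keyword modifiers, timelines and urgency scoring described above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical incident-ticket summaries; the real system also uses structured
# categories, keyword modifiers and timelines, which are omitted here.
tickets = [
    "batch job failed with out of memory error",
    "nightly batch aborted, heap exhausted",
    "user cannot log in after password reset",
    "login page rejects valid credentials",
]

X = TfidfVectorizer(stop_words="english").fit_transform(tickets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # tickets grouped into candidate FAQ topics, e.g. [0 0 1 1]
```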

Keywords: FAQ System, Resolution Time, Service Incident Tickets, IT System Maintenance.

351 Computer Aided X-Ray Diffraction Intensity Analysis for Spinels: Hands-On Computing Experience

Authors: Ashish R. Tanna, Hiren H. Joshi

Abstract:

The mineral having the chemical compositional formula MgAl2O4 is called "spinel". The ferrites that crystallize in the spinel structure are known as spinel ferrites or ferro-spinels. The spinel structure has an fcc cage of oxygen ions, and the metallic cations are distributed among tetrahedral (A) and octahedral (B) interstitial voids (sites). The X-ray diffraction (XRD) intensity of each Bragg plane is sensitive to the distribution of cations in the interstitial voids of the spinel lattice. This leads to a method for determining the distribution of cations in spinel oxides through XRD intensity analysis. A computer program for XRD intensity analysis has been developed in the C language and tested against a real experimental situation by synthesizing the spinel ferrite materials Mg0.6Zn0.4AlxFe2-xO4 and characterizing them by X-ray diffractometry. The compositions Mg0.6Zn0.4AlxFe2-xO4 (x = 0.0 to 0.6) were prepared by the ceramic method, and powder X-ray diffraction patterns were recorded. Thus, the authenticity of the program is checked by comparing the theoretically calculated data from the computer simulation with the experimental ones. Further, the deduced cation distributions were used to fit the magnetization data using the localized canting of spins approach, to explain the "recovery" of the collinear spin structure due to Al3+ substitution in Mg-Zn ferrites, which otherwise show A-site magnetic dilution and a non-collinear spin structure. Since the distribution of cations in spinel ferrites plays a very important role with regard to their electrical and magnetic properties, it is essential to determine the cation distribution in the spinel lattice.
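
The kernel of any such intensity analysis is the structure factor of a Bragg plane, which changes as cations of different scattering power are swapped between the A and B sites. The Python sketch below (not the authors' Borland C program) evaluates |F(hkl)|^2 for a toy two-site basis; the coordinates and scattering-factor values are illustrative placeholders rather than the full Fd-3m spinel basis.

```python
import numpy as np

def structure_factor(hkl, sites):
    """|F(hkl)|^2 for a list of (scattering_factor, fractional_xyz) entries.
    The cation distribution enters through which scattering factor occupies
    the A (tetrahedral) and B (octahedral) site coordinates."""
    h, k, l = hkl
    F = sum(f * np.exp(2j * np.pi * (h * x + k * y + l * z))
            for f, (x, y, z) in sites)
    return abs(F) ** 2

# Illustrative two-site toy cell (not the full spinel basis): an Fe3+-like ion
# on an A-type site and an Mg2+-like ion on a B-type site, with rough f values.
sites = [(24.0, (0.0, 0.0, 0.0)),        # heavier cation on the A-type site
         (10.0, (0.625, 0.625, 0.625))]  # lighter cation on a B-type-like site
print(structure_factor((2, 2, 0), sites))
```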

Keywords: Spinel ferrites, Localized canting of spins, X-ray diffraction, Programming in Borland C.

350 A Hybrid Expert System for Generating Stock Trading Signals

Authors: Hosein Hamisheh Bahar, Mohammad Hossein Fazel Zarandi, Akbar Esfahanipour

Abstract:

In this paper, a hybrid expert system is developed using fuzzy genetic network programming with reinforcement learning (GNP-RL). In this system, the frame-based structure uses the trading rules extracted by GNP. These rules are extracted using technical indices of the stock prices in the training time period. For developing this system, we applied fuzzy node transition and decision making in both the processing and judgment nodes of GNP-RL. Consequently, using these methods not only increased the accuracy of node transition and decision making in GNP's nodes, but also extended GNP's binary signals to ternary trading signals. In other words, in our proposed fuzzy GNP-RL model, a No Trade signal is added to the conventional Buy or Sell signals. Finally, the obtained rules are used in a frame-based system implemented in the Kappa-PC software. This trading system has been used to generate trading signals for ten companies listed on the Tehran Stock Exchange (TSE). The simulation results over the testing time period show that the developed system has more favorable performance in comparison with the Buy and Hold strategy.
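
The extension from binary to ternary signals can be illustrated with a toy fuzzy rule: map a single normalized technical index through three membership functions and emit the signal with the highest degree. The Python sketch below is only illustrative; the membership shapes and the indicator are hypothetical and do not reproduce the GNP-extracted rules or the Kappa-PC frame system.

```python
def triangular(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def ternary_signal(indicator):
    """Map a normalized technical index in [-1, 1] to Buy / No Trade / Sell.
    The memberships below are illustrative, not the GNP-extracted rules."""
    memberships = {
        "Sell":     triangular(indicator, -1.5, -1.0, 0.0),
        "No Trade": triangular(indicator, -0.5,  0.0, 0.5),
        "Buy":      triangular(indicator,  0.0,  1.0, 1.5),
    }
    return max(memberships, key=memberships.get)

print(ternary_signal(0.8))   # Buy
print(ternary_signal(0.05))  # No Trade
```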

Keywords: Fuzzy genetic network programming, hybrid expert system, technical trading signal, Tehran stock exchange.

349 Optimal Economic Load Dispatch Using Genetic Algorithms

Authors: Vijay Kumar, Jagdev Singh, Yaduvir Singh, Sanjay Sood

Abstract:

In a practical power system, the power plants are not located at the same distance from the center of loads, and their fuel costs are different. Also, under normal operating conditions, the generation capacity is more than the total load demand and losses. Thus, there are many options for scheduling generation. In an interconnected power system, the objective is to find the real and reactive power scheduling of each power plant in such a way as to minimize the operating cost. This means that the generator's real and reactive powers are allowed to vary within certain limits so as to meet a particular load demand with minimum fuel cost. This is called the optimal power flow problem. In this paper, Economic Load Dispatch (ELD) of real power generation is considered. ELD is the scheduling of generators to minimize the total operating cost of the generating units, subject to the equality constraint of power balance and to the minimum and maximum operating limits of the generating units. Genetic algorithms are used to solve this problem. ELD solutions are found by solving the conventional load flow equations while at the same time minimizing the fuel costs.
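
The ELD problem described above is conventionally written as a quadratic-cost minimization; the formulation below is the textbook statement consistent with the abstract (the coefficient symbols are generic, not values from the paper), and the genetic algorithm searches over generation strings P_i that satisfy these constraints.

```latex
\min_{P_1,\dots,P_N} \; F_T = \sum_{i=1}^{N} \left( a_i + b_i P_i + c_i P_i^2 \right)
\quad \text{subject to} \quad
\sum_{i=1}^{N} P_i = P_D + P_L, \qquad
P_i^{\min} \le P_i \le P_i^{\max},
```

where P_D is the total load demand, P_L the transmission losses, and a_i, b_i, c_i the fuel-cost coefficients of unit i.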

Keywords: ELD, Equality constraints, Genetic algorithms, Strings.

348 Nonlinear Control of a Continuous Bioreactor Based on Cell Population Model

Authors: Mahdi Sharifian, Mohammad Ali Fanaei

Abstract:

Saccharomyces cerevisiae (baker's yeast) can exhibit sustained oscillations during operation in a continuous bioreactor, which adversely affects its stability and productivity. Because of the heterogeneous nature of cell populations, cell population balance models can be used to capture the dynamic behavior of such cultures. In this paper, an unstructured, segregated model based on a population balance equation (PBE) is used. For the simulation, the fourth-order Runge-Kutta method is used for the time dimension, and three methods, finite difference, orthogonal collocation on finite elements, and the Galerkin finite element method, are used for discretization of the cell mass domain. The results indicate that orthogonal collocation on finite elements is not only able to predict the oscillating behavior of the cell culture but also needs much less computation time. Therefore, this method is preferred over the other methods. In the next step, two controllers, a globally linearizing control (GLC) and a conventional proportional-integral (PI) controller, are designed for controlling the total cell mass per unit volume, and the performances of these controllers are compared through simulation. The results show that although the PI controller has a simpler structure, the GLC has better performance.
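
For orientation, the conventional PI controller used as the benchmark has the familiar form u = Kp*e + Ki*integral(e). The Python sketch below runs that law on a toy first-order plant standing in for the total cell mass; the gains and plant dynamics are invented for illustration and are unrelated to the paper's bioreactor model or its GLC design.

```python
def simulate_pi(setpoint, kp, ki, steps=200, dt=0.1):
    """Discrete PI loop on a toy first-order process standing in for total
    cell mass per unit volume; gains and dynamics are illustrative only."""
    y, integral, history = 0.0, 0.0, []
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral          # PI control law
        y += dt * (-0.5 * y + 0.5 * u)          # toy first-order plant
        history.append(y)
    return history

trajectory = simulate_pi(setpoint=1.0, kp=2.0, ki=0.5)
print(f"final value after 20 s: {trajectory[-1]:.3f}")
```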

Keywords: Bioreactor, cell population balance, finite difference, orthogonal collocation on finite elements, Galerkin finite element, feedback linearization, PI controller.

347 Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps

Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li

Abstract:

The widespread popularity of mobile devices and the development of artificial intelligence (AI) have led to the widespread adoption of deep learning (DL) in Android apps. Compared with traditional Android apps (traditional apps), deep learning-based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly used on DL-based apps, as they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Additionally, we propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in Android static analyzers. Using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace outperformed FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.

Keywords: Mobile computing, deep learning apps, sensitive information, static analysis.

346 Comparative Studies of the Effects of Microstructures on the Corrosion Behavior of Micro-Alloyed Steels in Unbuffered 3.5 Wt% NaCl Saturated with CO2

Authors: Lawrence I. Onyeji, Girish M. Kale, M. Bijan Kermani

Abstract:

Corrosion, which exists at every stage of oil and gas production, has been a great challenge to operators in the industry. Conventional carbon steel, with all its inherent advantages, has been adjudged susceptible to the aggressive corrosion environment of the oilfield. This has aroused increased interest in the use of micro-alloyed steels for oil and gas production and transportation. The corrosion behavior of three commercially supplied micro-alloyed steels, designated A, B, and C, has been investigated with API 5L X65 as the reference sample. Electrochemical corrosion tests were conducted in an unbuffered 3.5 wt% NaCl solution saturated with CO2 at 30 °C for 24 hours. Pre-corrosion analyses revealed that samples A, B and X65 consist of ferrite-pearlite microstructures but with different grain sizes, shapes and distributions, whereas sample C has a bainitic microstructure with dispersed acicular ferrite. The results of the electrochemical corrosion tests showed that, within the experimental conditions, the corrosion rates of the samples can be ranked as CR(A) < CR(X65) < CR(B) < CR(C). These results are attributed to differences in the microstructures of the samples, as depicted by the ASTM grain size number in accordance with the ASTM E112-12 standard and the ferrite-pearlite volume fractions determined by the ImageJ Fiji grain size analysis software.

Keywords: Carbon dioxide corrosion, corrosion behavior, micro-alloyed steel, microstructures.

345 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hot spot in the past few years. However, the size of the networks is becoming increasingly large due to the demands of practical applications, which poses a significant challenge for constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also have strict requirements on the performance and power consumption of the hardware devices. Therefore, it is particularly critical to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators under different devices and network models are reviewed and compared against Graphics Processing Units (GPUs), Application-Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs) to present our own critical analysis and comments. Finally, we discuss different perspectives of these acceleration and optimization methods on FPGA platforms to further explore the opportunities and challenges for future research, and we give a prospect for the future development of FPGA-based accelerators.

Keywords: Deep learning, field programmable gate array, FPGA, hardware acceleration, convolutional neural networks, CNN.

344 A New Multi-Target, Multi-Agent Search-and-Rescue Path Planning Approach

Authors: Jean Berger, Nassirou Lo, Martin Noel

Abstract:

Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangean relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.

Keywords: Search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization.

343 Blood Glucose Level Measurement from Breath Analysis

Authors: Tayyab Hassan, Talha Rehman, Qasim Abdul Aziz, Ahmad Salman

Abstract:

The constant monitoring of blood glucose levels is necessary for maintaining the health of patients and for alerting medical specialists to take preemptive measures before the onset of any complication as a result of diabetes. Current clinical monitoring of blood glucose repeatedly uses invasive methods, which are uncomfortable and may result in infections in diabetic patients. Several attempts have been made to develop non-invasive techniques for blood glucose measurement. In this regard, the existing methods are not reliable and are less accurate. Other approaches claiming high accuracy have not been tested on extended datasets, and thus their results are not statistically significant. It is a well-known fact that acetone concentration in breath has a direct relation with blood glucose level. In this paper, we have developed a first-of-its-kind, reliable and high-accuracy breath analyzer for non-invasive blood glucose measurement. The acetone concentration in breath was measured using an MQ-138 sensor in samples collected from local hospitals in Pakistan, involving one hundred patients. The blood glucose levels of these patients were determined using the conventional invasive clinical method. We propose a linear regression model that is trained to map breath acetone level to the collected blood glucose level, achieving high accuracy.
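
The final mapping step is a plain least-squares fit from sensor reading to glucose value; the sketch below shows it with scikit-learn on invented acetone/glucose pairs (the numbers are placeholders, not the study's hundred-patient dataset).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical paired readings: breath acetone from the gas sensor (ppm)
# against invasive blood glucose (mg/dL); values are illustrative only.
acetone_ppm = np.array([[0.8], [1.1], [1.5], [2.0], [2.6], [3.1]])
glucose_mg_dl = np.array([92, 110, 135, 168, 205, 240])

model = LinearRegression().fit(acetone_ppm, glucose_mg_dl)
print(f"predicted glucose at 1.8 ppm acetone: {model.predict([[1.8]])[0]:.0f} mg/dL")
```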

Keywords: Blood glucose level, breath acetone concentration, diabetes, linear regression.

342 Building a Personalized Multidimensional Intelligent Learning System

Authors: Lun-Ping Hung, Nan-Chen Hsieh, Chia-Ling Ho, Chien-Liang Chen

Abstract:

Currently, most distance learning courses can only deliver standard material to students. Students receive course content passively, which leads to the neglect of the goal of education: "to suit the teaching to the ability of students". Providing appropriate course content according to students' ability is the main goal of this paper. Besides offering a series of conventional learning services, abundant information, and instant message delivery, a complete online learning environment should be able to distinguish between students' abilities and provide learning courses that best suit them. However, if a distance learning site contains well-designed course content but fails to provide adaptive courses, students will gradually lose their interest and confidence in learning, resulting in ineffective learning or discontinued learning. In this paper, an intelligent tutoring system is proposed; it consists of several modules working cooperatively in order to build an adaptive learning environment for distance education. The operation of the system is based on the results of a Self-Organizing Map (SOM), which divides students into different groups according to their learning ability and learning interests and then provides them with suitable course content. Accordingly, the problems of information overload and internet traffic can be alleviated, because the amount of traffic accessing the same content is reduced.
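
As a concrete illustration of the grouping step, the Python sketch below trains a very small Self-Organizing Map on hypothetical per-student [ability, interest] scores and reads off which map cell each student lands in. It is a stripped-down stand-in, not the paper's full intelligent tutoring pipeline.

```python
import numpy as np

def train_som(data, grid=(2, 2), epochs=300, lr=0.3, sigma=1.0, seed=0):
    """Minimal Self-Organizing Map: rows of `data` (per-student ability and
    interest scores) are mapped onto a small grid of prototype vectors."""
    rng = np.random.default_rng(seed)
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    weights = rng.random((len(coords), data.shape[1]))
    for t in range(epochs):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
        decay = 1.0 - t / epochs
        # Gaussian neighbourhood around the BMU on the grid
        d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2 * (sigma * decay + 1e-3) ** 2))
        weights += (lr * decay) * h[:, None] * (x - weights)
    return weights

# Hypothetical [ability, interest] scores in [0, 1] for six students.
students = np.array([[0.90, 0.20], [0.85, 0.30], [0.20, 0.80],
                     [0.25, 0.90], [0.50, 0.50], [0.55, 0.45]])
w = train_som(students)
groups = [int(np.argmin(np.linalg.norm(w - s, axis=1))) for s in students]
print(groups)   # students landing in the same cell get the same course track
```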

Keywords: Distance Learning, Intelligent Tutoring System (ITS), Self-Organizing Map (SOM)

341 Data-driven Multiscale Tsallis Complexity: Application to EEG Analysis

Authors: Young-Seok Choi

Abstract:

This work proposes data-driven, multiscale quantitative measures to reveal the underlying complexity of the electroencephalogram (EEG), applied to a rodent model of hypoxic-ischemic brain injury and recovery. Motivated by the fact that real EEG recordings are nonlinear and non-stationary over different frequencies or scales, there is a need for an approach more suitable than the conventional single-scale tools for analyzing EEG data. Here, we present a new framework of complexity measures that considers changing dynamics over multiple oscillatory scales. The proposed multiscale complexity is obtained by calculating entropies of the probability distributions of the intrinsic mode functions extracted by the empirical mode decomposition (EMD) of the EEG. To quantify the EEG recordings of a rat model of hypoxic-ischemic brain injury following cardiac arrest, the multiscale version of the Tsallis entropy is examined. To validate the proposed complexity measure, actual EEG recordings from rats (n=9) experiencing 7 min of cardiac arrest followed by resuscitation were analyzed. Experimental results demonstrate that the use of the multiscale Tsallis entropy leads to better discrimination of the injury levels and improved correlations with the neurological deficit evaluation 72 hours after cardiac arrest, thus suggesting an effective metric as a prognostic tool.
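
The scale-wise measure itself is compact: for each intrinsic mode function one estimates a probability distribution and evaluates the Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1). The Python sketch below applies this to synthetic stand-in "IMFs"; the EMD step (e.g., via a package such as PyEMD) is assumed rather than reproduced, and the signals are not EEG data from the study.

```python
import numpy as np

def tsallis_entropy(signal, q=2.0, bins=32):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1) of a signal's amplitude
    distribution, estimated from a histogram."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Three synthetic "IMFs" stand in for the output of an EMD routine.
t = np.linspace(0, 1, 1000)
imfs = [np.sin(2 * np.pi * 40 * t),                  # fast oscillatory scale
        np.sin(2 * np.pi * 8 * t),                   # intermediate scale
        0.5 * t + 0.1 * np.random.default_rng(0).standard_normal(1000)]  # slow trend

print([round(tsallis_entropy(imf), 3) for imf in imfs])
```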

Keywords: Electroencephalogram (EEG), multiscale complexity, empirical mode decomposition, Tsallis entropy.

340 Exploring the Physical Environment and Building Features in Earthquake Disaster Areas

Authors: Chang Hsueh-Sheng, Chen Tzu-Ling

Abstract:

Earthquakes are unpredictable natural disasters, and intense earthquakes have caused serious impacts on socio-economic systems and on environmental and social resilience. Conventional ways to mitigate earthquake disasters are to enhance building codes and advance structural engineering measures. However, earthquake-induced ground damage such as liquefaction, land subsidence, and landslides occurs in places near earthquake-prone areas or areas with poor soil conditions. Therefore, this study uses spatial statistical analysis to explore the spatial pattern of damaged buildings. Afterwards, principal components analysis (PCA) is applied to categorize the similar features in the different kinds of clustered patterns. The results show that serious landslide-prone areas, proximity to faults, vegetated ground surfaces, and mudslide-prone areas are common among the highly damaged buildings. In addition, the oldest buildings are not necessarily the most vulnerable. In fact, buildings built between 1974 and 1989 appear to have been more fragile during the earthquake. The incorporation of both spatial statistical analyses and PCA can provide more accurate information to support retrofit programs to enhance earthquake resistance in particular areas.

Keywords: Earthquake disaster, spatial statistical analysis, principal components analysis, clustered patterns.

339 Structural Behavior of Laterally Loaded Precast Foamed Concrete Sandwich Panel

Authors: Y. H. Mugahed Amran, Raizal S. M. Rashid, Farzad Hejazi, Nor Azizi Safiee, A. A. Abang Ali

Abstract:

Experimental and analytical studies were carried out to investigate the structural behavior of six precast foamed concrete sandwich panels (PFCSP), acting as one-way slabs, tested under lateral load. The details of the test setup and procedures are illustrated. The results obtained from the experimental tests are discussed, including the observed cracking patterns and the influence of aspect ratio (L/b). An analytical study using finite element analysis was implemented, and the degree of composite action of the test panels was examined in both the experimental and analytical studies. Results show that crack patterns appeared in only one direction, similar to reports on solid slabs, particularly when both concrete wythes act in a composite manner. Foamed concrete is briefly reviewed, and the experimental results are compared with the finite element analysis data, which show a reasonable degree of accuracy. Therefore, based on the results obtained, the PFCSP slab can be used as an alternative to the conventional flooring system.

Keywords: Aspect ratio (L/b), finite element analyses (FEA), foamed concrete (FC), precast foamed concrete sandwich panel (PFCSP), ultimate flexural strength capacity.

338 Intelligent Temperature Controller for Water-Bath System

Authors: Om Prakash Verma, Rajesh Singla, Rajesh Kumar

Abstract:

Conventional controllers usually require prior knowledge of a mathematical model of the process. Inaccuracy in the mathematical model degrades the performance of the process, especially for non-linear and complex control problems. The process used here is a water-bath system, which is widely used and nonlinear to some extent. For the water-bath system, it is necessary to attain the desired temperature within a specified period of time while avoiding overshoot and absolute error and providing good temperature tracking capability; otherwise, the process is disturbed.

To overcome the above difficulties, intelligent controllers based on Fuzzy Logic (FL) and the Adaptive Neuro-Fuzzy Inference System (ANFIS) are proposed in this paper. The fuzzy controller is designed to work with knowledge in the form of linguistic control rules. However, the translation of these linguistic rules into the framework of fuzzy set theory depends on the choice of certain parameters, for which no formal method is known. To design the ANFIS, a fuzzy inference system is combined with the learning capability of a neural network.

The analysis shows that ANFIS is best suited for adaptive temperature control of the above system. Compared to PID and FLC, ANFIS produces a stable control signal. It has much better temperature tracking capability, with almost zero overshoot and minimum absolute error.

Keywords: PID Controller, FLC, ANFIS, Non-Linear Control System, Water-Bath System, MATLAB-7.

337 Distributed Generator Placement for Loss Reduction and Improvement in Reliability

Authors: Priyanka Paliwal, N.P. Patidar

Abstract:

Distributed power generation has gained a lot of attention in recent times due to the constraints associated with conventional power generation and new advancements in DG technologies. The need to operate the power system economically and with optimum levels of reliability has further increased the interest in Distributed Generation. However, it is important to place a Distributed Generator at an optimal location so that the purpose of loss minimization and voltage regulation is duly served on the feeder. This paper investigates the impact of DG unit installation on electrical losses, reliability, and the voltage profile of distribution networks. Our aim is to find the optimal distributed generation allocation for loss reduction, subject to the constraint of voltage regulation in the distribution network. The system is further analyzed for increased levels of reliability. The Distributed Generator offers the additional advantage of an increase in reliability levels, as suggested by the improvements in various reliability indices such as SAIDI, CAIDI, and AENS. Comparative studies are performed and the related results are addressed. An analytical technique is used to find the optimal location of the Distributed Generator, and the suggested technique is implemented in MATLAB. The results clearly indicate that DG can reduce the electrical line losses while simultaneously improving the reliability of the system.
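
For readers unfamiliar with the indices quoted above, the textbook (Billinton-style) definitions are as follows; these are standard formulas, not expressions taken from the paper.

```latex
\mathrm{SAIDI} = \frac{\sum_i U_i N_i}{\sum_i N_i}, \qquad
\mathrm{CAIDI} = \frac{\sum_i U_i N_i}{\sum_i \lambda_i N_i}, \qquad
\mathrm{AENS}  = \frac{\sum_i L_{a,i}\, U_i}{\sum_i N_i},
```

where, for each load point i, \lambda_i is the failure rate, U_i the annual outage duration, N_i the number of customers, and L_{a,i} the average connected load.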

Keywords: AENS, CAIDI, Distributed Generation, loss reduction, Reliability, SAIDI

336 Properties of Fly Ash Brick Prepared in Local Environment of Bangladesh

Authors: Robiul Islam, Monjurul Hasan, Rezaul Karim, M. F. M. Zain

Abstract:

Coal fly ash, an industrial by-product of coal-fired thermal power plants, is considered a hazardous material, and its improper disposal has become an environmental issue. On the other hand, manufacturing conventional clay bricks involves the consumption of a large amount of clay and leads to substantial depletion of topsoil. This paper unveils the possibility of using fly ash as a partial replacement of clay for brick manufacturing, considering the local technology practiced in Bangladesh. The effect of fly ash at different replacement ratios (0%, 20%, 30%, 40%, and 50% by volume) of clay on the properties of bricks was studied. Bricks were made in the field alongside ordinary bricks and marked with specific numbers for the different percentages, to identify them at the time of testing. No physical distortion was observed in the fly ash bricks after burning in the kiln. Results from laboratory tests show that the compressive strength of the bricks decreases with increasing fly ash content, and the maximum compressive strength is found to be 19.6 MPa at 20% fly ash. In addition, the water absorption of the fly ash bricks increases with increasing fly ash content. The abrasion value and specific gravity of coarse aggregate prepared from the fly ash bricks were also studied, and the results suggest that 20% fly ash can be considered the optimum fly ash content for producing good quality bricks using the presently practiced technology.

Keywords: Bangladesh brick, fly ash, clay brick, physical properties, compressive strength.

335 Rating the Importance of Customer Requirements for Green Product Using Analytic Hierarchy Process Methodology

Authors: Lara F. Horani, Shurong Tong

Abstract:

Identification of customer requirements and their preferences is the starting point in the process of product design. Most design methodologies focus on traditional requirements. But in the past decade, green products and environmental requirements have increasingly attracted attention, with the constant increase in the level of consumer awareness of environmental problems (such as the greenhouse effect, global warming, pollution, energy crises, and waste management). Determining the importance weights of the customer requirements is an essential and crucial process. This paper used the analytic hierarchy process (AHP) approach to evaluate and rate the customer requirements for green products. With respect to the ultimate goal of customer satisfaction, surveys are conducted using a five-point scale analysis. With the help of this scale, one can derive the weight vectors. This approach can improve the imprecise ranking of customer requirements inherited from studies based on the conventional AHP. Furthermore, the AHP with extent analysis is simple and easy to implement for prioritizing customer requirements. The research is based on data collected through a questionnaire survey conducted over a sample of 160 people belonging to different age, marital status, education, and income groups, in order to identify customer preferences for green product requirements.
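
The core AHP computation, turning a pairwise-comparison matrix into priority weights plus a consistency check, is compact enough to sketch. The Python snippet below uses the principal-eigenvector method on a hypothetical 3x3 comparison of green-product requirements; it is a generic illustration rather than the paper's survey data or its extent-analysis variant.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix via its
    principal eigenvector, plus the consistency index CI = (lmax - n)/(n - 1)."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - len(A)) / (len(A) - 1)
    return w, ci

# Hypothetical 3x3 comparison of green-product requirements
# (recyclability vs. energy efficiency vs. low-toxicity materials).
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, ci = ahp_weights(A)
print(np.round(weights, 3), round(ci, 4))
```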

Keywords: Analytic hierarchy process, green product, customer requirements for green design, importance weights for the customer requirements.

334 Design and Simulation of Portable Telemedicine System for High Risk Cardiac Patients

Authors: V. Thulasi Bai, Srivatsa S. K.

Abstract:

Deaths from cardiovascular diseases have decreased substantially over the past two decades, largely as a result of advances in acute care and cardiac surgery. These developments have produced a growing population of patients who have survived a myocardial infarction. These patients need to be continuously monitored so that treatment can be initiated within the crucial golden hour. The available conventional methods of monitoring mostly perform offline analysis and restrict the mobility of these patients to within a hospital or room. Hence, the aim of this paper is to design a portable cardiac telemedicine system that helps patients regain their independence and return to an active work schedule, thereby improving their psychological well-being. The portable telemedicine system consists of a Wearable ECG Transmitter (WET) and a slightly modified mobile phone with an inbuilt ECG analyzer. The WET is placed on the body of the patient and continuously acquires ECG signals from the high-risk cardiac patient, who can move around anywhere. The WET transmits the ECG to the patient's Bluetooth-enabled mobile phone using Bluetooth technology. The ECG analyzer inbuilt in the mobile phone continuously analyzes the heartbeats derived from the received ECG signals. In case of any panic condition, the mobile phone alerts the patient's caretaker by SMS and initiates the transmission of a sample ECG signal to the doctor via the mobile network.

Keywords: WET, ECG analyzer, Bluetooth, mobile cellular network, high risk cardiac patients.

333 Implementing ALD in Product Development: The Effect of Geometrical Dimensions on Tubular Member Deformation

Authors: Shigeyuki Haruyama, Aidil Khaidir Bin Muhamad, Tadayuki Kyoutani, Dai-Heng Chen, Ken Kaminishi

Abstract:

The product development process has undergone many changes concomitant with world progress in order to produce products that meet customer needs quickly and inexpensively. Analysis-Led Design (ALD) is one of the latest methods in the product development process. It focuses more on up-front engineering, a product quality optimization process that starts early in the conceptual design stage. Product development and manufacturing through ALD utilizes digital tools extensively for design, analysis and product optimization. This study uses computer-aided design (CAD) and finite element method (FEM) simulation to examine the modes of deformation of tubular members under axial loading. A multiple-combination impact absorption tubular member, referred to as a compress–expand member, is proposed as a substitute for the conventional thin-walled cylindrical tube to be used as a vehicle’s crash box. The study of deformation modes is crucial for evaluating the geometrical dimension limits by which a member can absorb energy efficiently.

Keywords: Analysis-led design, axial collapse, tubular member, finite element method, thin-walled cylindrical tube, compress-expand member, deformation modes.

332 A Codebook-based Redundancy Suppression Mechanism with Lifetime Prediction in Cluster-based WSN

Authors: Huan Chen, Bo-Chao Cheng, Chih-Chuan Cheng, Yi-Geng Chen, Yu Ling Chou

Abstract:

A Wireless Sensor Network (WSN) comprises sensor nodes designed to sense the environment and transmit the sensed data back to the base station via multi-hop routing so that physical phenomena can be reconstructed. Since physical phenomena exhibit significant overlap between temporal redundancy and spatial redundancy, it is necessary for sensor nodes to use Redundancy Suppression Algorithms (RSAs) to lower energy consumption by reducing the transmission of redundant data. A conventional class of RSAs is the threshold-based RSA, which sets a threshold to suppress redundant data. Although many temporal and spatial RSAs have been proposed, temporal-spatial RSAs are seldom proposed because it is difficult to determine when to utilize temporal or spatial RSAs. In this paper, we propose a novel temporal-spatial redundancy suppression algorithm, the Codebook-based Redundancy Suppression Mechanism (CRSM). CRSM adopts vector quantization to generate a codebook, which is easily used to implement a temporal-spatial RSA. CRSM not only achieves power saving and reliability for the WSN, but also provides predictability of the network lifetime. Simulation results show that the network lifetime of CRSM exceeds that of other RSAs by at least 23%.

Keywords: Redundancy Suppression Algorithm (RSA), Threshold-based RSA, Temporal RSA, Spatial RSA and Codebook-based Redundancy Suppression Mechanism (CRSM)

331 A High-Speed Multiplication Algorithm Using Modified Partial Product Reduction Tree

Authors: P. Asadee

Abstract:

Multiplication algorithms have a considerable effect on processor performance. A new high-speed, low-power multiplication algorithm is presented using a modified Dadda tree structure. Three important modifications have been implemented in the inner product generation step, the inner product reduction step, and the final addition step. Optimized algorithms have to be used for basic computation components, such as multiplication algorithms. In this paper, we propose a new algorithm to reduce the power, delay, and transistor count of a multiplication algorithm implemented using a low-power modified counter. This work presents a novel design for Dadda multiplication algorithms. The proposed multiplication algorithm includes structured parts, which have an important effect on the inner product reduction tree. In addition, a 1.3 V, 64-bit carry hybrid adder is presented for fast, low-voltage applications. The new 64-bit adder uses a new circuit to implement the proposed carry hybrid adder and has been implemented in 80 nm CMOS technology at a 700 MHz clock frequency. The proposed multiplication algorithm achieves a 14 percent improvement in transistor count, a 13 percent reduction in delay, and a 12 percent reduction in power consumption compared with conventional designs.

Keywords: adder, CMOS, counter, Dadda tree, encoder.

330 Optimizing of Fuzzy C-Means Clustering Algorithm Using GA

Authors: Mohanad Alata, Mohammad Molhim, Abdullah Ramini

Abstract:

The Fuzzy C-Means clustering algorithm (FCM) is a method that is frequently used in pattern recognition. It has the advantage of giving good modeling results in many cases, although it is not capable of specifying the number of clusters by itself. In the FCM algorithm, most researchers fix the weighting exponent (m) to a conventional value of 2, which might not be appropriate for all applications. Consequently, the main objective of this paper is to use the subtractive clustering algorithm to provide the optimal number of clusters needed by the FCM algorithm, by optimizing the parameters of the subtractive clustering algorithm with an iterative search approach, and then to find an optimal weighting exponent (m) for the FCM algorithm. In order to get an optimal number of clusters, the iterative search approach is used to find the optimal single-output Sugeno-type Fuzzy Inference System (FIS) model by optimizing the parameters of the subtractive clustering algorithm that give the minimum least-squares error between the actual data and the Sugeno fuzzy model. Once the number of clusters is optimized, two approaches are proposed to optimize the weighting exponent (m) in the FCM algorithm, namely, the iterative search approach and genetic algorithms. The above-mentioned approach is tested on data generated from the original function, and optimal fuzzy models are obtained with minimum error between the real data and the obtained fuzzy models.
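
For context, the plain FCM iteration that the weighting exponent m enters is short enough to sketch. The Python fragment below alternates the standard membership and center updates on a hypothetical 1-D data set, and deliberately leaves out the subtractive-clustering initialization and the m-optimization that are the paper's actual contribution.

```python
import numpy as np

def fcm(data, c=2, m=2.0, iters=100, seed=0):
    """Plain Fuzzy C-Means with weighting exponent m: alternate the fuzzy
    membership update and the weighted center update."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(data)))
    u /= u.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m
        centers = um @ data / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(data[None, :, :] - centers[:, None, :], axis=2) + 1e-10
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
    return centers, u

# Hypothetical 1-D samples forming two groups.
x = np.array([[0.1], [0.2], [0.15], [0.9], [1.0], [0.95]])
centers, memberships = fcm(x)
print(np.round(centers.ravel(), 2))   # roughly [0.15, 0.95] (order may vary)
```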

Keywords: Fuzzy clustering, Fuzzy C-Means, Genetic Algorithm, Sugeno fuzzy systems.
