Search results for: efficient market hypothesis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3145


415 Unequal Error Protection of Facial Features for Personal ID Images Coding

Authors: T. Hirner, J. Polec

Abstract:

This paper presents an approach to unequal error protection (UEP) of facial features in the coding of personal ID images. We consider UEP strategies for the efficient progressive transmission of embedded image codes over noisy channels. The method is based on the progressive embedded zerotree wavelet (EZW) image compression algorithm combined with a UEP technique and a defined region of interest (ROI); here, the ROI corresponds to the facial features within the personal ID image. ROI coding is important in applications where different parts of the image have different importance: the chosen ROI is encoded with higher quality than the background (BG). Unequal error protection is provided by applying different coding techniques and by encoding the LL band separately. In the proposed method, the image is divided into two parts (ROI and BG) consisting of more important bytes (MIB) and less important bytes (LIB), respectively. The proposed unequal error protection scheme has shown to be well suited to low-bit-rate applications, producing better output quality for the ROI of the compressed image. The experimental results verify the effectiveness of the design and compare UEP transmission with a facial-feature ROI against equal error protection (EEP) over an additive white Gaussian noise (AWGN) channel.
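
To make the MIB/LIB split concrete, the following Python sketch applies a simple rate-1/3 repetition code to the more important (ROI) bits while leaving the background bits unprotected, and compares bit error rates over a binary symmetric channel used here as a crude stand-in for hard-decision AWGN. The bitstreams and channel parameter are hypothetical; this is not the authors' EZW codec, only an illustration of unequal error protection.

```python
import random

def repeat_encode(bits, n=3):
    # protect each bit by sending it n times (rate-1/n repetition code)
    return [b for b in bits for _ in range(n)]

def repeat_decode(coded, n=3):
    # majority vote over each group of n received bits
    return [int(sum(coded[i:i + n]) > n // 2) for i in range(0, len(coded), n)]

def bsc(bits, p):
    # binary symmetric channel: flips each bit with probability p
    return [b ^ (random.random() < p) for b in bits]

random.seed(0)
p = 0.05                                               # assumed crossover probability
mib = [random.randint(0, 1) for _ in range(3000)]      # ROI bits (more important bytes)
lib = [random.randint(0, 1) for _ in range(3000)]      # background bits (less important)

mib_rx = repeat_decode(bsc(repeat_encode(mib), p))     # protected transmission
lib_rx = bsc(lib, p)                                   # unprotected transmission

ber = lambda a, b: sum(x != y for x, y in zip(a, b)) / len(a)
print(f"ROI/MIB BER: {ber(mib, mib_rx):.4f}   BG/LIB BER: {ber(lib, lib_rx):.4f}")
```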

Keywords: Embedded zerotree wavelet (EZW), equal error protection (EEP), facial features, personal ID images, region of interest (ROI), unequal error protection (UEP)

414 The Automated Soil Erosion Monitoring System (ASEMS)

Authors: George N. Zaimes, Valasia Iakovoglou, Paschalis Koutalakis, Konstantinos Ioannou, Ioannis Kosmadakis, Panagiotis Tsardaklis, Theodoros Laopoulos

Abstract:

Advances in technology allow the development of a new system that can continuously measure surface soil erosion. Continuous soil erosion measurements are required in order to understand erosional processes and propose effective and efficient conservation measures to mitigate surface erosion. Mitigating soil erosion, especially in Mediterranean countries such as Greece, is essential in order to maintain environmental and agricultural sustainability. In this paper, we present the Automated Soil Erosion Monitoring System (ASEMS), which measures surface soil erosion along with other factors that influence the erosional process. Specifically, the system measures ground-level changes (surface soil erosion), rainfall, air temperature, soil temperature, and soil moisture. Another important innovation is that the data are collected by remote communication. In addition, stakeholder awareness is a key factor in helping to reduce any environmental problem, so the different dissemination activities that were utilized are also described. The overall outcome is a new, innovative system that can measure erosion very accurately. The data from the system help in studying the erosion process and in finding the best possible methods to reduce erosion. The dissemination activities enhance stakeholders' and the public's awareness of surface soil erosion problems and should lead to the adoption of more effective soil erosion conservation practices in Greece.

Keywords: Soil management, climate change, new technologies, conservation practices.

413 A Proposed Hybrid Color Image Compression Based on Fractal Coding with Quadtree and Discrete Cosine Transform

Authors: Shimal Das, Dibyendu Ghoshal

Abstract:

Fractal-based digital image compression is a specific technique in the field of color image coding. The method is best suited to images with irregular shapes, such as snow blobs, clouds, flames, and tree leaves, and relies on the fact that parts of an image often resemble other parts of the same image. The technique has drawn much attention in recent years because of the very high compression ratio that can be achieved. Hybrid schemes incorporating fractal compression and speed-up techniques have achieved higher compression ratios than pure fractal compression. Fractal image compression is a lossy compression method in which the self-similarity of an image is exploited; it provides a high compression ratio, short encoding time, and a fast decoding process. In this paper, fractal compression with a quadtree and the discrete cosine transform (DCT) is proposed to compress color images. The proposed hybrid scheme requires four phases. First, the image is segmented and the DCT is applied to each block of the segmented image. Second, the block values are scanned in a zigzag manner so that the zero coefficients are grouped together. Third, the resulting image is partitioned into fractals by the quadtree approach. Fourth, the image is compressed using the run-length encoding technique.
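
As a small illustration of the second and fourth phases described above, the sketch below performs a zigzag scan of one DCT block and run-length encodes the result. The 4x4 block values are made up for illustration and the code is not tied to the authors' implementation.

```python
import numpy as np

def zigzag(block):
    # visit anti-diagonals in alternating direction, as in JPEG-style scanning
    n = block.shape[0]
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [int(block[i, j]) for i, j in order]

def run_length_encode(seq):
    # collapse runs of equal values into (value, run_length) pairs
    out, run = [], 1
    for prev, cur in zip(seq, seq[1:]):
        if cur == prev:
            run += 1
        else:
            out.append((prev, run))
            run = 1
    out.append((seq[-1], run))
    return out

block = np.array([[52, 10, 0, 0],      # hypothetical quantized DCT coefficients
                  [ 8,  0, 0, 0],
                  [ 0,  0, 0, 0],
                  [ 0,  0, 0, 0]])
print(run_length_encode(zigzag(block)))   # the trailing zeros collapse into one run
```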

Keywords: Fractal coding, Discrete Cosine Transform, Iterated Function System (IFS), Affine Transformation, Run length encoding.

412 A Highly Efficient Process Applying SiGe Film to Generate Quasi-Beehive Si Nanostructure for the Growth of Platinum Nanopillars with High Emission Property for X-Ray Tube Applications

Authors: Pin-Hsu Kao, Wen-Shou Tseng, Hung-Ming Tai, Yuan-Ming Chang, Jenh-Yih Juang

Abstract:

We report a lithography-free approach to fabricating biomimetic, quasi-beehive Si nanostructures (QBSNs) on Si substrates. Self-assembled SiGe nanoislands, formed via strain-induced surface roughening (the Asaro-Tiller-Grinfeld instability) during in-situ annealing, play a key role as patterned sacrificial regions for the subsequent reactive ion etching (RIE) process used to fabricate the quasi-beehive nanostructures on the Si substrates. In field emission measurements, the bare QBSNs show poor performance, resulting from the native oxide layer, which forms an insurmountable barrier to electron emission. To dramatically improve the field emission characteristics, platinum nanopillars (Pt-NPs) were deposited on the QBSNs to form Pt-NPs/QBSN heterostructures. The turn-on field of the Pt-NPs/QBSNs is as low as 2.29 V/μm (at a current density of 1 μA/cm2), and the field enhancement factor (β-value) is significantly increased to 6067. More importantly, uniform and continuous electron-excited light emission from the surrounding field emitters of the Pt-NPs/QBSNs can be easily obtained. This approach does not require an expensive photolithographic process and possesses great potential for applications.

Keywords: Biomimetics, quasi-beehive Si, SiGe nanoislands, platinum nanopillars, field emission.

411 Investigating the Regulation System of the Synchronous Motor Excitation Mode Serving as a Reactive Power Source

Authors: Baghdasaryan Marinka, Ulikyan Azatuhi

Abstract:

The efficient use of the compensation capabilities of the synchronous motors in electrical drives used in production processes can essentially improve the technical and economic indices of the process. Reducing the flows of reactive electrical energy through reactive power compensation makes it possible to significantly reduce the load losses of power in the electrical networks. An analysis of the scientific works devoted to regulating the excitation of synchronous motors substantiates the need for a comprehensive investigation and estimation of the excitation mode. By means of the obtained transfer functions, the transition processes of the excitation mode have been studied in the Simulink environment of the MATLAB software package. The Nyquist plot and the transient response obtained justify the need to develop a Proportional-Integral-Derivative (PID) regulator. The transient processes of the system with the PID regulator have been investigated, and the amplitude-phase characteristics of the system have been estimated. The analysis of the obtained results shows that the regulation indices of the developed system are improved. The developed system can be successfully applied for regulating the excitation voltage of synchronous motors of different power ratings operating under a changing load, ensuring a power factor close to 1.

Keywords: Transient process, synchronous motor, excitation mode, regulator, reactive power.

410 Experimental Study on a Solar Heat Concentrating Steam Generator

Authors: Qiangqiang Xu, Xu Ji, Jingyang Han, Changchun Yang, Ming Li

Abstract:

To replace complex solar-concentrating units, this paper presents the design of a solar heat-concentrating, medium-temperature steam-generating system. Solar radiation is collected by a large solar collecting and heat-concentrating plate and is converged to a metal evaporating pipe with highly efficient heat transfer. Meanwhile, heat loss is reduced by employing a double-glazed cover and other heat-insulating structures, so that a high temperature is reached in the metal evaporating pipe. The influence of the system's structural parameters on its performance is analyzed. The steam production rate and the steam production under different solar irradiance, solar collecting and heat-concentrating plate area, plate temperature and heat loss are obtained. The results show that when the solar irradiance is higher than 600 W/m2, the effective heat-collecting area is 7.6 m2 and the double-glazed cover is adopted, the system heat loss is lower than the solar irradiance value. Stable steam is produced in the metal evaporating pipe at 100 °C, 110 °C, and 120 °C, respectively. When the average solar irradiance is about 896 W/m2 and the cumulative steaming time is about 5 hours, the daily steam production of the system is about 6.174 kg. Within a single day, the solar irradiance is larger at noon, so the steam production rate is large at that time; before 9:00 and after 16:00, the solar irradiance is smaller and the steam production rate is almost 0.

Keywords: Heat concentrating, heat loss, medium temperature, solar steam production.

409 Face Recognition Using Principal Component Analysis, K-Means Clustering, and Convolutional Neural Network

Authors: Zukisa Nante, Wang Zenghui

Abstract:

Face recognition is the problem of identifying or recognizing individuals in an image. This paper investigates a possible method to solve this problem: an amalgamation of Principal Component Analysis (PCA), K-Means clustering, and a Convolutional Neural Network (CNN) for a face recognition system. It is trained and evaluated using the ORL dataset, which consists of 400 face images in 40 classes, with 10 images per class. Firstly, PCA enables the use of a smaller network, which reduces the training time of the CNN; redundancy is removed and the variance is preserved with a smaller number of coefficients. Secondly, the K-Means clustering model is trained on the PCA-compressed data, which selects K-Means cluster centers with better characteristics. Lastly, the K-Means features serve as the initial values and input data of the CNN. The accuracy and the performance of the proposed method were tested against other face recognition (FR) techniques, namely PCA, Support Vector Machine (SVM), and K-Nearest Neighbour (kNN). During experimentation, our suggested method achieved the highest performance after 90 epochs: 99% accuracy and F1-score, 99% precision, and 99% recall in 463.934 seconds. It outperformed PCA, which obtained 97%, and kNN, which obtained 84%, in the conducted experiments. Therefore, this method proved to be efficient in identifying faces in images.
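
A rough sketch of the PCA-to-K-Means-to-classifier pipeline is given below using scikit-learn. It uses the Olivetti faces set (an ORL-like collection of 40 subjects with 10 images each) and substitutes a small MLP for the CNN stage, so it only illustrates the data flow, not the authors' network or reported accuracy.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

faces = fetch_olivetti_faces()                     # 400 images, 40 subjects, 10 each
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.2, stratify=faces.target, random_state=0)

pca = PCA(n_components=60, whiten=True).fit(X_tr)  # keep most variance, drop redundancy
km = KMeans(n_clusters=40, n_init=10, random_state=0).fit(pca.transform(X_tr))

# distances to the K-Means centres act as the compact features fed to the classifier
F_tr = km.transform(pca.transform(X_tr))
F_te = km.transform(pca.transform(X_te))

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=2000, random_state=0)  # CNN stand-in
clf.fit(F_tr, y_tr)
print("hold-out accuracy:", accuracy_score(y_te, clf.predict(F_te)))
```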

Keywords: Face recognition, Principal Component Analysis, PCA, Convolutional Neural Network, CNN, Rectified Linear Unit, ReLU, feature extraction.

408 Many-Sided Self Risk Analysis Model for Information Asset to Secure Stability of the Information and Communication Service

Authors: Jin-Tae Lee, Jung-Hoon Suh, Sang-Soo Jang, Jae-Il Lee

Abstract:

Information and communication service providers (ICSP) that are significant in size and provide Internet-based services take administrative, technical, and physical protection measures via the information security check service (ISCS). These protection measures are the minimum action necessary to secure the stability and continuity of the information and communication services (ICS) that they provide. Thus, information assets are essential to providing ICS, and deciding the relative importance of target assets for protection is a critical procedure. The risk analysis model designed to decide the relative importance of information assets, which is described in this study, evaluates information assets from many angles, in order to choose which ones should be given priority when it comes to protection. Many-sided risk analysis (MSRS) grades the importance of information assets, based on evaluation of major security check items, evaluation of the dependency on the information and communication facility (ICF) and influence on potential incidents, and evaluation of major items according to their service classification, in order to identify the ISCS target. MSRS could be an efficient risk analysis model to help ICSPs to identify their core information assets and take information protection measures first, so that stability of the ICS can be ensured.
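
A toy illustration of grading an information asset from several evaluation angles is sketched below; the criteria names, weights, and thresholds are hypothetical and are not the MSRS evaluation items themselves.

```python
# hypothetical assets scored 1-5 on each evaluation angle
ASSETS = {
    "customer-db":   {"security_check": 4, "icf_dependency": 5, "incident_impact": 5, "service_class": 4},
    "internal-wiki": {"security_check": 2, "icf_dependency": 2, "incident_impact": 1, "service_class": 2},
}
# hypothetical weights for combining the angles into one grade
WEIGHTS = {"security_check": 0.30, "icf_dependency": 0.25, "incident_impact": 0.30, "service_class": 0.15}

def grade(scores, weights):
    total = sum(scores[c] * w for c, w in weights.items())   # weighted score, still on a 1-5 scale
    return "A" if total >= 4 else "B" if total >= 3 else "C"

for name, scores in ASSETS.items():
    print(name, "->", grade(scores, WEIGHTS))
```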

Keywords: Information asset, information communication facility, ISCS (Information Security Check Service), evaluation, grade.

407 Bacteriological Quality of Commercially Prepared Fermented Ogi (Akamu) Sold in Some Parts of South Eastern Nigeria

Authors: Alloysius C. Ogodo, Ositadinma C. Ugbogu, Uzochukwu G. Ekeleme

Abstract:

Food poisoning and infection by bacteria are of public health significance to both developing and developed countries. Samples of ogi (akamu) prepared from white and yellow varieties of maize sold in Uturu and Okigwe were analyzed, together with laboratory-prepared ogi, for bacterial quality using standard microbiological methods. The analyses showed total bacterial counts (cfu/g) of 4.0 x 10^7 and 3.9 x 10^7 for the white and yellow varieties of the laboratory-prepared ogi, while the commercial ogi had 5.2 x 10^7 and 4.9 x 10^7, 4.9 x 10^7 and 4.5 x 10^7, and 5.4 x 10^7 and 5.0 x 10^7 for the Eke-Okigwe, Up-gate and Nkwo-Achara markets, respectively. The staphylococcal counts ranged from 2.0 x 10^2 to 5.0 x 10^2 and from 1.0 x 10^2 to 4.0 x 10^2 for the white and yellow varieties from the different markets, while no staphylococcal growth was recorded in the laboratory-prepared ogi. The laboratory-prepared ogi had no coliform growth, while the commercially prepared ogi had counts of 0.5 x 10^3 to 1.6 x 10^3 for the white variety and 0.3 x 10^3 to 1.1 x 10^3 for the yellow variety. Lactic acid bacterial counts of 3.5 x 10^6 and 3.0 x 10^6 were recorded for the laboratory ogi, while the commercially prepared ogi ranged from 3.2 x 10^6 to 4.2 x 10^6 (white variety) and 3.0 x 10^6 to 3.9 x 10^6 (yellow variety). Among the bacterial isolates from the commercial and laboratory-fermented ogi, Lactobacillus sp., Leuconostoc sp. and Citrobacter sp. were present in all the samples; Micrococcus sp. and Klebsiella sp. were isolated from the Eke-Okigwe and ABSU-up-gate market varieties, respectively; E. coli and Staphylococcus sp. were present in the Eke-Okigwe and Nkwo-Achara markets; and Salmonella sp. was isolated from all three markets. Hence, there are chances of contracting food-borne diseases from commercially prepared ogi. There is therefore a need for sanitary measures in the production of fermented cereals so as to minimize the rate of food-borne pathogens during processing and storage.

Keywords: Bacterial quality, fermentation, maize, Ogi.

406 Response Delay Model: Bridging the Gap in Urban Fire Disaster Response System

Authors: Sulaiman Yunus

Abstract:

The need to model the response to urban fire disasters cannot be overemphasized, as recurrent fire outbreaks have gutted most cities of the world. This necessitates a prompt and efficient response system in order to mitigate the impact of the disaster. Promptness, as a function of time, is the fundamental determinant of the efficiency of a response system and of the magnitude of a fire disaster. Delay, resulting from several factors, is one of the major determinants of both the promptness of a response system and the magnitude of a fire disaster. The Response Delay Model (RDM) intends to bridge the gap in urban fire disaster response systems by incorporating and synchronizing the delay moments in measuring the overall efficiency of a response system and determining the magnitude of a fire disaster. The model identifies two delay moments (pre-notification and intra-reflex-sequence delay) that can be elastic and that collectively play a significant role in influencing the efficiency of a response system. Because the elasticity of the delay moments varies, the model provides for measuring the length of delays in order to arrive at a standard average delay moment for different parts of the world, taking into consideration geographic location, level of preparedness and awareness, technological advancement, and socio-economic and environmental factors. It is recommended that participatory research be undertaken locally and globally to determine standard average delay moments within each phase of the system, so as to enable the efficiency of response systems to be determined and fire disaster magnitudes to be predicted.
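
Read simply, the model decomposes total response time into the two elastic delay moments plus the remaining fixed time, as in the minimal sketch below (all figures hypothetical).

```python
def total_response_minutes(pre_notification, intra_reflex_sequence, travel_and_intervention):
    # the two elastic delay moments plus the remaining (fixed) response time
    return pre_notification + intra_reflex_sequence + travel_and_intervention

# e.g. 6 min before the service is notified, 4 min of internal dispatch delay, 12 min travel
print(total_response_minutes(6, 4, 12), "minutes")   # -> 22 minutes
```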

Keywords: Delay moment, fire disaster, reflex sequence, response, response delay moment.

405 The Mediating Effect of MSMEs Export Performance between Technological Advancement Capabilities and Business Performance

Authors: Fawad Hussain, Mohammad Basir Bin Saud, Mohd Azwardi Md Isa

Abstract:

The aim of this study is to empirically investigate the mediating effect of export performance (EP) between technological advancement capabilities and the business performance (BP) of Malaysian manufacturing micro, small and medium-sized enterprises (MSMEs). A firm's technological advancement resources are hypothesized as a platform to enhance both the exports and the BP of manufacturing MSMEs in Malaysia. The study is twofold: first, it investigates whether technological advancement capabilities help to improve the main performance measure, noted in terms of EP; second, it investigates how efficiently and effectively technological advancement capabilities can contribute to overall Malaysian MSME BP. The SmartPLS 3 statistical software is used to examine the association between technological advancement capabilities, MSME EP and BP. The data were collected from Malaysian manufacturing MSMEs in the east coast industrial zones, known as the manufacturing hub of MSMEs. Seven hundred and fifty (750) questionnaires were distributed, but only 148 usable questionnaires were returned. The findings indicate that technological advancement capabilities help to strengthen exports in terms of time and cost efficiency and play a significant role in improving BP. This study is helpful for small and medium enterprise owners who intend to expand their business overseas, as smart technological advancement resources can help them achieve business competitiveness and excellence in both local and international markets.

Keywords: Technological advancement capabilities, export performance, business performance, small and medium manufacturing enterprises, Malaysia.

404 Efficiency in Urban Governance towards Sustainability and Competitiveness of City: A Case Study of Kuala Lumpur

Authors: Hamzah Jusoh, Azmizam Abdul Rashid

Abstract:

Malaysia has successfully applied economic planning to guide the development of the country from an economy of agriculture and mining to a largely industrialised one. Now, with its sights set on attaining the economic level of a fully developed nation by 2020, the planning system must be made even more efficient and focused. It must ensure that every investment made in the country contributes towards creating the desired objective of a strong, modern, internationally competitive, technologically advanced, post-industrial economy. Cities in Malaysia must also be fully aware of the enormous competition they face in a region with rapidly expanding and modernising economies, all contending for the same pool of potential international investments. Efficiency of urban governance is also a fundamental issue in development, characterized by sustainability, subsidiarity, equity, transparency and accountability, civic engagement and citizenship, and security. As described above, city competitiveness is harnessed through city marketing and city management. High-technology and high-skilled industries, together with finance, transportation, tourism, business, information and professional services, shopping and other commercial activities, are the principal components of the nation's economy, which must be developed to a level well beyond where it is now. In this respect, Kuala Lumpur, being the premier city, must play the leading role.

Keywords: Economic planning, sustainability, efficiency, urban governance and city competitiveness.

403 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data

Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer

Abstract:

This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average model of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) model are derived iteratively from some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties, and the forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is proposed in which BINARMA(1,1) count data are generated using a multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through a simulation experiment, and the mean estimates of the model parameters obtained are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, based on some covariates: policemen, daily patrol, speed cameras, traffic lights and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
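
For readers who want to experiment, the following Python sketch simulates a BINARMA(1,1)-type count process using binomial thinning and cross-correlated Poisson innovations built from a common shock. It is a stationary special case with illustrative parameter values, not the paper's non-stationary specification or its CML estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def thin(count, prob):
    # binomial thinning operator: prob ∘ count
    return rng.binomial(count, prob)

def simulate_binarma(T, alpha=(0.4, 0.3), beta=(0.2, 0.25), lam=(1.5, 2.0), lam_common=0.8):
    X = np.zeros((T, 2), dtype=int)
    R_prev = np.zeros(2, dtype=int)
    for t in range(1, T):
        common = rng.poisson(lam_common)            # shared shock induces cross-correlation
        R = np.array([rng.poisson(lam[0]) + common,
                      rng.poisson(lam[1]) + common])
        for k in range(2):
            # X_t = alpha ∘ X_{t-1} + R_t + beta ∘ R_{t-1}
            X[t, k] = thin(X[t - 1, k], alpha[k]) + R[k] + thin(R_prev[k], beta[k])
        R_prev = R
    return X

X = simulate_binarma(500)
print("sample cross-correlation:", round(float(np.corrcoef(X[:, 0], X[:, 1])[0, 1]), 3))
```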

Keywords: Non-stationary, BINARMA(1,1) model, Poisson innovations, CML.

402 Automated Method Time Measurement System for Redesigning Dynamic Facility Layout

Authors: Salam Alzubaidi, G. Fantoni, F. Failli, M. Frosolini

Abstract:

The dynamic facility layout problem is a critical issue in the competitive industrial market; solving it requires robust design and effective simulation systems, and sustainable simulation requires reliable and accurate input data. This paper therefore describes an automated system, integrated into the real environment, that measures the duration of material handling operations, collects the data in real time, and determines the variances between the actual and estimated time schedules of the operations in order to update the simulation software and redesign the facility layout periodically. The automated method-time measurement system collects the real data using Radio Frequency Identification (RFID) and Internet of Things (IoT) technologies: attaching an RFID antenna reader and RFID tags enables the system to identify the location of the objects and gather the time data. The durations gathered are then processed by calculating the moving average duration of the material handling operations, choosing the shortest material handling path, and updating the simulation software to redesign the facility layout in accordance with the shortest/real operation schedule. Periodic simulation in real time is more sustainable and reliable than a simulation system relying on the analysis of historical data. The case study of this methodology was carried out in cooperation with a workshop team producing mechanical parts. Although there are some technical limitations, the methodology is promising and can be significantly useful in redesigning the manufacturing layout.
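
A minimal sketch of the data-handling step is shown below: RFID read events are paired into operation durations and summarized with a moving average. Tag IDs, station names, timestamps, and the window size are all assumed for illustration.

```python
from datetime import datetime
from collections import deque

reads = [  # (tag_id, station, timestamp) as logged from the RFID antenna reader
    ("PART-01", "pickup",  "2023-05-02 08:00:05"),
    ("PART-01", "dropoff", "2023-05-02 08:03:40"),
    ("PART-02", "pickup",  "2023-05-02 08:04:10"),
    ("PART-02", "dropoff", "2023-05-02 08:07:02"),
]

WINDOW = 20                        # moving-average window (number of recent operations)
recent = deque(maxlen=WINDOW)
start = {}

for tag, station, ts in reads:
    t = datetime.fromisoformat(ts)
    if station == "pickup":
        start[tag] = t             # operation begins when the tag is read at the pickup point
    elif station == "dropoff" and tag in start:
        recent.append((t - start.pop(tag)).total_seconds())

if recent:
    print("moving-average handling time:", sum(recent) / len(recent), "s")
```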

Keywords: Dynamic facility layout problem, internet of things, method time measurement, radio frequency identification, simulation.

401 Semi-Analytic Method in Fast Evaluation of Thermal Management Solution in Energy Storage System

Authors: Ya Lv

Abstract:

This article presents the application of the semi-analytic method (SAM) to the thermal management solution (TMS) of an energy storage system (ESS). The TMS studied in this work is fluid cooling. In fluid cooling, both effective heat conduction and heat convection are indispensable due to the heat transfer from solid to fluid. Correspondingly, an efficient TMS requires a design investigation of the following parameters: fluid inlet temperature, ESS initial temperature, fluid flow rate, working C-rate, continuous working time, and material properties. Their variation induces a change in the thermal performance of the battery module, which is usually evaluated by numerical simulation. Compared with the complicated computational resources and long computation times of simulation, the SAM developed in this article can predict the thermal influence within a few seconds. In the SAM, a fast prediction model is constructed by combining numerical simulation with theoretical/empirical equations. The SAM can explore the thermal effect of boundary parameters in both steady-state and transient heat transfer scenarios within a short time. Therefore, the SAM developed in this work can simplify the design cycle of the TMS and inspire more possibilities in TMS design.

Keywords: Semi-analytic method, fast prediction model, thermal influence of boundary parameters, energy storage system.

400 Effect of Different Media and Mannitol Concentrations on Growth and Development of Vandopsis lissochiloides (Gaudich.) Pfitz. under Slow Growth Conditions

Authors: J. Linjikao, P. Inthima, A. Kongbangkerd

Abstract:

In vitro conservation of orchid germplasm provides an effective technique for the ex situ conservation of orchid diversity. In this study, an efficient protocol for the in vitro conservation of Vandopsis lissochiloides (Gaudich.) Pfitz. plantlets under slow growth conditions was investigated. Plantlets were cultured on different strengths of Vacin and Went medium (½VW and ¼VW) supplemented with different concentrations of mannitol (0, 2, 4, 6 and 8%) and sucrose (0 and 3%), plus 50 g/L potato extract and 150 mL/L coconut water. The cultures were incubated at 25±2 °C under a light intensity of 20 µmol/m2s for 24 weeks without subculture. At the end of the preservation period, the plantlets were subcultured to fresh medium for growth recovery. The highest leaf number per plantlet was observed on ¼VW medium without added sucrose and mannitol, while the highest root number per plantlet was found on ½VW with 3% sucrose and no mannitol after 24 weeks of in vitro storage. Overall, the maximum numbers of leaves (5.8) and roots (5.0) of the preserved plantlets were produced on ¼VW medium without added sucrose and mannitol. Therefore, ¼VW medium without added sucrose and mannitol provided the best minimal growth conditions for the medium-term storage of V. lissochiloides plantlets.

Keywords: Preservation, Vandopsis, germplasm, in vitro.

399 Off-Policy Q-learning Technique for Intrusion Response in Network Security

Authors: Zheni S. Stefanova, Kandethody M. Ramachandran

Abstract:

With our increasing dependence on computer devices, we face the necessity of adequate, efficient and effective mechanisms for protecting our networks. There are two main problems that Intrusion Detection Systems (IDS) attempt to solve: (1) detecting the attack by analyzing the incoming traffic and inspecting the network (intrusion detection), and (2) producing a prompt response when the attack occurs (intrusion prevention). It is critical to create an intrusion detection model that detects a breach in the system on time, and it is also challenging to make it provide an automatic response, with an acceptable delay, at every single stage of the monitoring process. We cannot afford to adopt security measures that demand high computational power, and we cannot accept a mechanism that reacts with a delay. In this paper, we propose an intrusion response mechanism that is based on artificial intelligence and, more precisely, on reinforcement learning techniques (RLT). The RLT helps us to create a decision agent that controls the process of interacting with the undetermined environment. The goal is to find an optimal policy that represents the intrusion response; therefore, the reinforcement learning problem is solved using a Q-learning approach. Our agent produces an optimal immediate response in the process of evaluating the network traffic. This Q-learning approach establishes the balance between exploration and exploitation and provides a unique, self-learning and strategic artificial intelligence response mechanism for IDS.
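
The core of the approach is the tabular Q-learning update, sketched below with placeholder states, actions, and rewards rather than the paper's actual IDS environment.

```python
import random
from collections import defaultdict

ACTIONS = ["allow", "rate_limit", "block_ip", "alert_admin"]   # placeholder responses
Q = defaultdict(float)                                          # Q[(state, action)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(state):
    if random.random() < epsilon:                        # exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])     # exploitation

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# one hypothetical transition: a detected scan is blocked and the traffic returns to normal
update(state="port_scan_detected", action="block_ip", reward=1.0, next_state="normal")
print(Q[("port_scan_detected", "block_ip")])
print(choose_action("port_scan_detected"))
```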

Keywords: Intrusion prevention, network security, optimal policy, Q-learning.

398 Optimum Design of Steel Space Frames by Hybrid Teaching-Learning Based Optimization and Harmony Search Algorithms

Authors: Alper Akın, İbrahim Aydoğdu

Abstract:

This study presents a hybrid metaheuristic algorithm to obtain optimum designs for steel space buildings. The optimum design problem of three-dimensional steel frames is mathematically formulated according to the provisions of LRFD-AISC (Load and Resistance Factor Design of the American Institute of Steel Construction). Design constraints such as the strength requirements of structural members, the displacement limitations, the inter-story drift and the other structural constraints are derived from the LRFD-AISC specification. In this study, a hybrid algorithm combining the teaching-learning based optimization (TLBO) and harmony search (HS) algorithms is employed to solve the stated optimum design problem. These algorithms are two of the recent additions to the metaheuristic techniques of numerical optimization and have been efficient tools for solving discrete programming problems. Using these two algorithms in collaboration creates a more powerful tool and mitigates each other's weaknesses. To demonstrate the powerful performance of the presented hybrid algorithm, the optimum design of a large-scale steel building is presented and the results are compared with previously obtained results available in the literature.

Keywords: Optimum structural design, hybrid techniques, teaching-learning based optimization, harmony search algorithm, minimum weight, steel space frame.

397 Design of Low Power and High Speed Digital IIR Filter in 45nm with Optimized CSA for Digital Signal Processing Applications

Authors: G. Ramana Murthy, C. Senthilpari, P. Velrajkumar, Lim Tien Sze

Abstract:

In this paper, a design methodology to implement a low-power and high-speed second-order recursive digital Infinite Impulse Response (IIR) filter is proposed. Since IIR filters suffer from a large number of constant multiplications, the proposed method replaces the constant multiplications with addition/subtraction and shift operations. The proposed new 6T adder cell is used as the Carry-Save Adder (CSA) to implement the addition/subtraction operations in the recursive section of the IIR filter, reducing the propagation delay. Furthermore, high-level algorithms designed to optimize the number of CSA blocks are used to reduce the complexity of the IIR filter. The DSCH3 tool is used to generate the schematic of the proposed 6T-CSA-based shift-and-add architecture, which is analyzed using the Microwind CAD tool to synthesize low-complexity and high-speed IIR filters. The proposed design outperforms MUX-12T and MCIT-7T based CSA adder filter designs in terms of power, propagation delay, area and throughput. The experimental results show that the proposed 6T-based design method can find better IIR filter designs in terms of power and delay than those obtained using efficient general multipliers.
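
The substitution of constant multiplications by shift-and-add operations can be illustrated in a few lines; the coefficient 0.625 below is chosen only because it has a short representation in powers of two and is not taken from the filter design.

```python
def mul_by_0_625(x):
    # 0.625 = 1/2 + 1/8, so the product reduces to two shifts and one add
    return (x >> 1) + (x >> 3)

x = 1024                                   # sample value in integer (fixed-point) form
print(mul_by_0_625(x), int(0.625 * x))     # 640 640: the shift-add form matches the multiplication
```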

Keywords: CSA Full Adder, Delay unit, IIR filter, Low-Power, PDP, Parametric Analysis, Propagation Delay, Throughput, VLSI.

396 Iterative Estimator-Based Nonlinear Backstepping Control of a Robotic Exoskeleton

Authors: Brahmi Brahim, Mohammad Habibur Rahman, Maarouf Saad, Cristóbal Ochoa Luna

Abstract:

Repetitive training movements are an efficient way to improve the ability and movement performance of stroke survivors and to help them recover their lost motor function and acquire new skills. The ETS-MARSE is a seven-degrees-of-freedom (DOF) exoskeleton robot developed to be worn on the lateral side of the right upper extremity to assist and rehabilitate patients with upper-extremity dysfunction resulting from stroke. In practice, rehabilitation activities are repetitive tasks, which makes assistive/robotic systems suffer from repetitive/periodic uncertainties and external perturbations induced by the high-order dynamic model (seven DOF) and the interaction with human muscle; these impact the tracking performance and even the stability of the exoskeleton. To ensure the robustness and stability of the robot, a new nonlinear backstepping controller was implemented, with the designed tests performed by healthy subjects. In order to limit and reject the periodic/repetitive disturbances, an iterative estimator was integrated into the control of the system. The estimator does not need a precise dynamic model of the exoskeleton. Experimental results confirm the robustness and accuracy of the controller in dealing with external perturbations, and the effectiveness of the iterative estimator in rejecting the repetitive/periodic disturbances.

Keywords: Backstepping control, iterative control, rehabilitation, ETS-MARSE.

395 Ultra-Light Overhead Conveyor Systems for Logistics Applications

Authors: Batin Latif Aylak, Bernd Noche

Abstract:

Overhead conveyor systems appeal through their simple construction, wide application range and full compatibility with other manufacturing systems, and are designed according to international standards. Ultra-light overhead conveyor systems are rope-based conveying systems with individually driven vehicles. The vehicles can move automatically on the rope, with energy and signals supplied to them, and crossings are realized by switches. Overhead conveyor systems are used particularly in the automotive industry but also at post offices. They must always be integrated into a logistical process so as to find the best route for a cheaper material flow and to guarantee precise and fast workflows. With their help, any transport can take place without wasting ground and space, without excessive company capacity, lost or damaged products, erroneous delivery or endless travel, and without wasting time. Ultra-light overhead conveyor systems provide an optimal material flow, which produces profit and saves time. This article illustrates the advantages of the structure of ultra-light overhead conveyor systems in logistics applications and explains the steps of their system design. After an illustration of the steps, systems currently available on the market are presented by means of their technical characteristics, and the demands placed on an ultra-light overhead conveyor system, owing to its simple construction, are illustrated.

Keywords: Logistics, material flow, overhead conveyor.

394 Liability Aspects Related to Genetically Modified Food under the Food Safety Legislation in India

Authors: S. K. Balashanmugam, Padmavati Manchikanti, S. R. Subramanian

Abstract:

The question of legal liability for injury arising out of the import and introduction of genetically modified (GM) food emerges as a crucial issue confronting the promotion of GM food and its derivatives. There is a strong possibility that commercialized GM food from an exporting country will enter an importing country where the status of approval is not the same. This necessitates a liability mechanism to address any damage that occurs at the level of transboundary movement or in the market. At the international level, there was widespread consensus to develop the Cartagena Protocol on Biosafety and to provide a dedicated regime on liability and redress in the form of the Nagoya-Kuala Lumpur Supplementary Protocol on Liability and Redress ('N-KL Protocol'). National legal frameworks based on this protocol are not adequately established in the prevailing food legislation of developing countries. A developing economy like India is willing to import GM food and its derivatives after the successful commercialization of Bt cotton in 2002. As a party to the N-KL Protocol, it is indispensable for India to formulate a legal framework and to address the safety, liability and regulatory issues surrounding GM food in conformity with the provisions of the Protocol. The liability mechanism is also important where risk assessment and risk management are still at the implementation stage. Moreover, the country is facing GM infiltration issues with its neighbour Bangladesh. As a precautionary approach, there is a need to formulate rules and procedures of legal liability to address any kind of damage that occurs in transboundary trade. In this context, the proposed work analyzes the liability regime in the existing Food Safety and Standards Act, 2006 from the standpoint of applicability and domestic compliance, and suggests legal and policy options for regulatory authorities.

Keywords: Commercialisation, food safety, FSSAI, genetically modified foods, India, liability.

393 Post Pandemic Mobility Analysis through Indexing and Sharding in MongoDB: Performance Optimization and Insights

Authors: Karan Vishavjit, Aakash Lakra, Shafaq Khan

Abstract:

The COVID-19 pandemic has pushed healthcare professionals to use big data analytics as a vital tool for tracking and evaluating the effects of contagious viruses. To effectively analyse huge datasets, efficient NoSQL databases are needed. This research integrates several datasets, which cuts down query processing time and creates predictive visual artifacts, making it possible to analyse post-COVID-19 health and well-being outcomes and to evaluate the effectiveness of government efforts during the pandemic. We recommend applying sharding and indexing technologies to improve query effectiveness and scalability as the dataset expands. Spreading the datasets across a sharded database and indexing the individual shards enables efficient data retrieval and analysis. The key goal is the analysis of connections between governmental activities, poverty levels and post-pandemic well-being: we evaluate the effectiveness of governmental initiatives to improve health and lower poverty levels by utilising advanced data analysis and visualisations. The findings provide relevant data that support the advancement of the UN sustainable development goals, future pandemic preparation and evidence-based decision-making. This study shows how big data and NoSQL databases may be used to address problems in global health.
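
A minimal PyMongo sketch of the indexing and sharding setup described above follows; the database, collection, and shard-key field names are placeholders, and the sharding commands assume the client is connected to a mongos router.

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")   # for sharding, point this at a mongos
db = client["pandemic_analytics"]                   # placeholder database name

# compound index to speed up country/date range queries on a placeholder collection
db["wellbeing"].create_index([("country", ASCENDING), ("date", ASCENDING)])

# distribute the collection across shards on a hashed key
client.admin.command("enableSharding", "pandemic_analytics")
client.admin.command("shardCollection", "pandemic_analytics.wellbeing",
                     key={"country": "hashed"})
```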

Keywords: COVID-19, big data, data analysis, indexing, NoSQL, sharding, scalability, poverty.

392 An Evaluation of TIG Welding Parametric Influence on Tensile Strength of 5083 Aluminium Alloy

Authors: Lakshman Singh, Rajeshwar Singh, Naveen Kumar Singh, Davinder Singh, Pargat Singh

Abstract:

Tungsten Inert Gas (TIG) welding is a high-quality welding process used to weld thin metals and their alloys. 5083 aluminium alloys play an important role in engineering and metallurgy because of their excellent corrosion properties, ease of fabrication and high specific strength, coupled with the best combination of toughness and formability.

The TIG welding technique is one of the most precise and fastest processes used in the aerospace, ship and marine industries. In this work, the TIG welding process is used to evaluate the influence of the input parameters on the tensile strength of 5083 Al-alloy specimens with dimensions of 100 mm long x 15 mm wide x 5 mm thick. Welding current (I), gas flow rate (G) and welding speed (S) are the input parameters that affect the tensile strength of 5083 Al-alloy welded joints. As the welding speed increases, the tensile strength first increases up to an optimum value and then decreases with any further increase in welding speed. The results of the study show that a maximum tensile strength of 129 MPa of the weld joint is obtained at a welding current of 240 A, a gas flow rate of 7 L/min and a welding speed of 98 mm/min. These are the optimum values of the input parameters, which help to produce an efficient weld joint with good mechanical properties in terms of tensile strength.

Keywords: 5083 Aluminium alloy, Gas flow rate, TIG welding, Welding current, Welding speed and Tensile strength.

391 Comparative Analysis of Ranunculus muricatus and Typha latifolia as Wetland Plants Applied for Domestic Wastewater Treatment in a Mesocosm Scale Study

Authors: S. Aziz, M. Ali, S. Asghar, S. Ahmed

Abstract:

Compared with other methods of wastewater treatment, constructed wetlands are one of the most attractive options because, being a natural process, they are eco-friendly, have low construction and maintenance costs, and have considerable wastewater treatment capability. The current research focused mainly on the comparison of Ranunculus muricatus and Typha latifolia as wetland plants for domestic wastewater treatment, by designing and constructing efficient pilot-scale horizontal subsurface flow mesocosms. Parameters such as chemical oxygen demand (COD), biological oxygen demand (BOD5), phosphates, sulphates, nitrites, nitrates, and pathogen-indicator microbes were monitored continuously through successive treatments. The treatment efficiency of the system increased with time and with increasing temperature. The efficiency of the T. latifolia planted setups in the open environment was fairly good for parameters like COD and BOD5, showing reductions of up to 82.5% for COD and 82.6% for BOD5, while DO increased by up to 125%. The efficiency of the R. muricatus vegetated setup was also good, but lower than that of the T. latifolia planted setup, showing 80.95% removal of COD and BOD5. Ranunculus muricatus was found to be effective in reducing the bacterial count in the wastewater. Both macrophytes were found promising for wastewater treatment.

Keywords: Biological oxygen demand, chemical oxygen demand, horizontal subsurface flow, total suspended solids, wetland.

390 Tide Contribution in the Flood Event of Jeddah City: Mathematical Modelling and Different Field Measurements of the Groundwater Rise

Authors: Aïssa Rezzoug

Abstract:

This paper aims to bring new elements demonstrating that the tide causes the groundwater to rise in the shoreline band on which urban areas occur, especially in the western coastal cities of the Kingdom of Saudi Arabia such as Jeddah. The cause of the recent inundation events in Jeddah was the groundwater rise in the city coupled with a strong precipitation event. This paper illustrates the significant contribution of the tide to raising the groundwater level. It shows that internal groundwater recharge within the urban area is due not only to the excess water supply coming from surrounding areas as a result of human activity, combined with the lack of a sufficient and efficient sewage system, but also to the tide effect. The study follows a quantitative method to assess the risks of groundwater level rise through many in-situ measurements and mathematical modelling. The proposed approach highlights that the groundwater level in the urban areas of the city on the shoreline band reaches the high-tide level without considering any input from precipitation. Despite the small tide in the Red Sea compared with other oceanic coasts, the groundwater level is considerably enhanced by the tide from the seaside and by the freshwater table from the landside of the city. Under these conditions, the groundwater level in the city becomes high and prevents the soil from evacuating the surface flow caused by a storm event quickly enough, as was observed in the historical flood catastrophe of Jeddah in 2009.

Keywords: Flood, groundwater rise, Jeddah, tide.

389 Neural Network Implementation Using FPGA: Issues and Application

Authors: A. Muthuramalingam, S. Himavathi, E. Srinivasan

Abstract:

Hardware realization of a Neural Network (NN) depends, to a large extent, on the efficient implementation of a single neuron. FPGA-based reconfigurable computing architectures are suitable for the hardware implementation of neural networks, but FPGA realization of ANNs with a large number of neurons is still a challenging task. This paper discusses the issues involved in the implementation of a multi-input neuron with linear/nonlinear excitation functions using an FPGA. An implementation method with a resource/speed trade-off is proposed to handle signed decimal numbers. The VHDL coding developed is tested using a Xilinx XCV50hq240 chip. To improve the speed of operation, a lookup table (LUT) method is used, and the problems involved in using an LUT for a nonlinear function are discussed. The percentage saving in resources and the improvement in speed with an LUT for a neuron are reported. An attempt is also made to derive a generalized formula for a multi-input neuron that facilitates an approximate estimate of the total resource requirement and achievable speed for a given multilayer neural network; this helps the designer to choose the FPGA capacity for a given application. Using the proposed implementation method, a neural-network-based application, namely a space vector modulator for a vector-controlled drive, is presented.
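
The lookup-table idea for the nonlinear excitation function can be modelled in software as below; the table size and input range are assumed values, and a real FPGA design would store the table in block RAM rather than compute it at runtime.

```python
import numpy as np

BITS, X_MIN, X_MAX = 8, -8.0, 8.0                 # assumed table size and input range
grid = np.linspace(X_MIN, X_MAX, 2 ** BITS)
LUT = 1.0 / (1.0 + np.exp(-grid))                 # precomputed sigmoid samples

def sigmoid_lut(x):
    # quantize the input to the nearest table entry and read the stored value
    idx = int(round((x - X_MIN) / (X_MAX - X_MIN) * (2 ** BITS - 1)))
    return LUT[min(max(idx, 0), 2 ** BITS - 1)]

for x in (-3.0, 0.0, 2.5):
    exact = 1.0 / (1.0 + np.exp(-x))
    print(f"x={x:+.1f}  LUT={sigmoid_lut(x):.4f}  exact={exact:.4f}")
```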

Keywords: FPGA implementation, multi-input neuron, neural network, NN-based space vector modulator.

388 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space

Authors: Nanjiang Chen

Abstract:

Against the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper presents a continuous visibility algorithm, providing a potentially valuable approach to measuring urban spaces from a human-centric perspective. This study presents a methodological breakthrough by applying this algorithm to urban visibility analysis. Unlike conventional approaches, the technique allows for a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, from urban developers to public policymakers, aiding in the creation of urban spaces that prioritize visual openness and quality of life. This advancement in urban analysis methods could lead to more inclusive, comfortable and well-integrated urban environments, enhancing the spatial experience for communities worldwide.

Keywords: Visual openness, spatial continuity, ray-tracing algorithms, urban computation.

387 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems

Authors: Rodolfo Lorbieski, Silvia Modesto Nassar

Abstract:

Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition and medical diagnostics. The objective of this article is to analyze an adapted stacking method for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training scheme similar to stacking was used, but with three levels, where the final decision-maker (level 2) is trained by combining the outputs of a tree-based pair of meta-classifiers (level 1) from Bayesian families, which are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) dataset, (b) experiment and (c) level. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as with the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
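
A nested-stacking arrangement mirroring the three-level idea can be sketched with scikit-learn as follows; the particular estimators, dataset, and hyperparameters are illustrative stand-ins rather than the classifier families used in the study.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def level1_member(seed):
    # a level-1 meta-classifier trained on the outputs of a pair of level-0 base classifiers
    return StackingClassifier(
        estimators=[("dt", DecisionTreeClassifier(random_state=seed)),
                    ("nb", GaussianNB())],
        final_estimator=RandomForestClassifier(n_estimators=50, random_state=seed))

# level 2: the final decision-maker combines two level-1 meta-classifiers
level2 = StackingClassifier(
    estimators=[("m1", level1_member(0)), ("m2", level1_member(1))],
    final_estimator=LogisticRegression(max_iter=1000))

X, y = load_iris(return_X_y=True)          # small multi-class stand-in dataset
print("CV accuracy:", cross_val_score(level2, X, y, cv=5).mean().round(3))
```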

Keywords: Stacking, multi-layers, ensemble, multi-class.

386 A Software Framework for Predicting Oil-Palm Yield from Climate Data

Authors: Mohd. Noor Md. Sap, A. Majid Awan

Abstract:

Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real-world applications. This paper presents work on developing a software system for predicting crop yield, for example oil-palm yield, from climate and plantation data. At the core of our system is a method for the unsupervised partitioning of data to find spatio-temporal patterns in climate data using kernel methods, which offer the strength to deal with complex data. This work draws inspiration from the notion that a non-linear transformation of the data into some high-dimensional feature space increases the possibility of linear separability of the patterns in the transformed space and therefore simplifies the exploration of the associated structure in the data. Kernel methods implicitly perform a non-linear mapping of the input data into a high-dimensional feature space by replacing the inner products with an appropriate positive definite function. In this paper we present a robust weighted kernel k-means algorithm incorporating spatial constraints for clustering the data. The proposed algorithm can effectively handle noise, outliers and auto-correlation in the spatial data, enabling effective and efficient data analysis by exploring the patterns and structures in the data, and can thus be used for predicting oil-palm yield by analyzing the various factors affecting the yield.
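
A compact weighted kernel k-means routine is sketched below for reference; it uses an RBF kernel and random initialization and omits the paper's spatial-constraint term, so it should be read only as the base procedure the proposed algorithm builds on.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def weighted_kernel_kmeans(K, w, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=K.shape[0])
    for _ in range(n_iter):
        dist = np.full((K.shape[0], k), np.inf)
        for c in range(k):
            mask = labels == c
            wc = w[mask]
            sw = wc.sum()
            if sw == 0:
                continue            # empty cluster: leave its distances at infinity
            # ||phi(x_i) - m_c||^2 = K_ii - 2*sum_j w_j K_ij / sw + sum_{j,l} w_j w_l K_jl / sw^2
            dist[:, c] = (np.diag(K)
                          - 2 * (K[:, mask] @ wc) / sw
                          + wc @ K[np.ix_(mask, mask)] @ wc / sw ** 2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

rng_data = np.random.default_rng(1)
X = np.vstack([rng_data.normal(0.0, 0.3, (30, 2)),     # two synthetic clusters
               rng_data.normal(3.0, 0.3, (30, 2))])
labels = weighted_kernel_kmeans(rbf_kernel(X), w=np.ones(len(X)), k=2)
print(np.bincount(labels))
```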

Keywords: Pattern analysis, clustering, kernel methods, spatial data, crop yield
