Search results for: Existing concept
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2943

303 Packet Forwarding with Multiprotocol Label Switching

Authors: R. N. Pise, S. A. Kulkarni, R. V. Pawar

Abstract:

Multiprotocol Label Switching (MPLS) is an emerging technology that addresses many of the issues associated with packet forwarding in today's internetworking environment. It forwards packets at a high rate of speed by combining the speed and performance of Layer 2 with the scalability and IP intelligence of Layer 3. In a traditional IP (Internet Protocol) routing network, a router analyzes the destination IP address contained in the packet header and independently determines the next hop for the packet using that address and the interior gateway protocol. This process is repeated at each hop to deliver the packet to its final destination. In contrast, in the MPLS forwarding paradigm, routers on the edge of the network (label edge routers) attach labels to packets based on the Forwarding Equivalence Class (FEC). Packets are then forwarded through the MPLS domain according to their associated FECs, with routers in the core of the network, called label switch routers, swapping the labels. Simply swapping the label, instead of looking up the packet's IP header in the routing table at each hop, forwards packets more efficiently, which in turn allows traffic to be forwarded at very high speeds and gives granular control over the path a packet takes. This paper deals with the MPLS forwarding mechanism, the implementation of the MPLS datapath, and test results comparing the performance of MPLS and IP routing. The discussion focuses primarily on MPLS IP packet networks, by far the most common application of MPLS today.
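
To make the label-swapping operation concrete, here is a minimal Python sketch of an incoming label map (ILM) resolving to a next hop label forwarding entry (NHLFE). It illustrates the general mechanism only, not the paper's datapath implementation, and all table entries are hypothetical.

```python
# Minimal model of MPLS label swapping at a label switch router (LSR).
# The incoming label map (ILM) maps an incoming label to a next hop
# label forwarding entry (NHLFE): the outgoing label and the next hop.
ILM = {
    17: {"out_label": 24, "next_hop": "10.0.0.2"},   # hypothetical entries
    24: {"out_label": 31, "next_hop": "10.0.0.6"},
}

def forward(packet):
    """Swap the top label and pick the next hop without touching the IP header."""
    entry = ILM.get(packet["label"])
    if entry is None:
        raise KeyError(f"no NHLFE for label {packet['label']}")
    packet["label"] = entry["out_label"]      # label swap
    return entry["next_hop"]                  # next hop to send on

pkt = {"label": 17, "payload": b"..."}
print(forward(pkt), pkt["label"])             # 10.0.0.2 24
```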

Keywords: Forwarding equivalence class, incoming label map, label, next hop label forwarding entry.

PDF Downloads: 2693
302 An Energy Aware Data Aggregation in Wireless Sensor Network Using Connected Dominant Set

Authors: M. Santhalakshmi, P. Suganthi

Abstract:

Wireless Sensor Networks (WSNs) have many advantages. Their deployment is easier and faster than that of wired sensor networks or other wireless networks, as they need no fixed infrastructure. Nodes are partitioned into many small groups named clusters to aggregate data through network organization. Clustering in a WSN helps guarantee the performance of the sensor nodes: energy consumption is reduced by eliminating redundant energy use and by balancing the energy load across the nodes of the network. The aim of such clustering protocols is to prolong network lifetime. Low Energy Adaptive Clustering Hierarchy (LEACH) is a popular WSN clustering protocol in which random rotation of local cluster heads is used to distribute the energy load among all sensor nodes in the network. This paper proposes cluster formation based on a Connected Dominant Set (CDS). Aggregating data over a CDS is a promising approach to reducing routing overhead, since messages are transmitted only within the virtual backbone formed by the CDS, and aggregation lowers the ratio of responding hosts to the hosts in the virtual backbone. The CDS-based scheme seeks to increase network lifetime by considering parameters such as sensor lifetime and the remaining and consumed energy of nodes, so as to achieve near-optimal data aggregation within the network. Experimental results show that CDS outperforms LEACH in the number of cluster formations, average packet loss rate, average end-to-end delay, lifetime computation, and remaining energy computation.
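
As an illustration of the virtual-backbone idea, the following sketch builds a connected dominating set with a generic greedy heuristic. This is a textbook construction given for orientation, not necessarily the formation algorithm used in the paper, and the topology is made up.

```python
# Greedy connected dominating set (CDS): start from the highest-degree
# node, then repeatedly add the backbone neighbor that dominates
# (covers) the most still-uncovered nodes.
adj = {  # hypothetical sensor network as an adjacency dict
    "a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b", "e"},
    "d": {"b", "f"}, "e": {"c", "f"}, "f": {"d", "e", "g"}, "g": {"f"},
}

def greedy_cds(adj):
    start = max(adj, key=lambda n: len(adj[n]))
    cds = {start}
    covered = {start} | adj[start]
    while covered != set(adj):
        # only nodes adjacent to the backbone keep the set connected
        frontier = {n for c in cds for n in adj[c]} - cds
        best = max(frontier, key=lambda n: len(adj[n] - covered))
        cds.add(best)
        covered |= adj[best] | {best}
    return cds

print(greedy_cds(adj))   # a small backbone that dominates every node
```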

Keywords: Wireless sensor network, connected dominant set, clustering, data aggregation.

PDF Downloads: 1129
301 Authentication Protocol for Wireless Sensor Networks

Authors: Sunil Gupta, Harsh Kumar Verma, A. L. Sangal

Abstract:

Wireless sensor networks can be used to measure and monitor many challenging problems and typically involve monitoring, tracking, and controlling areas such as battlefield monitoring, object tracking, habitat monitoring, and home sentry systems. However, wireless sensor networks pose unique security challenges, including forgery of sensor data, eavesdropping, denial-of-service attacks, and the physical compromise of sensor nodes. Nodes in a sensor network may vanish due to power exhaustion or malicious attacks. To extend the lifespan of the sensor network, new node deployment is needed. In military scenarios, an intruder may directly deploy malicious nodes or manipulate existing nodes to set up malicious new nodes through many kinds of attacks. To keep malicious nodes from joining the sensor network, security is required in the design of sensor network protocols. In this paper, we propose a security framework to provide a complete security solution against the known attacks in wireless sensor networks. Our framework accomplishes node authentication for new nodes with recognition of malicious nodes. When deployed as a framework, a high degree of security is achievable compared with conventional sensor network security solutions. The proposed framework can protect against most of the notorious attacks in sensor networks and attain better computation and communication performance. It differs from conventional authentication methods based on node identity alone: it incorporates both the identity of nodes and a node security timestamp into the authentication procedure. Hence, the security protocols not only verify the identity of each node but also distinguish between new nodes and old nodes.
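
The following sketch illustrates the general idea of binding a node identity to a deployment timestamp during authentication. It is schematic only: an HMAC with a pre-shared key stands in for the paper's ECC-based scheme, and all names and values are hypothetical.

```python
import hmac, hashlib, time

NETWORK_KEY = b"pre-shared network key"   # stand-in for ECC key material
MAX_AGE = 60.0                            # seconds a join request stays valid

def join_request(node_id: str) -> dict:
    """New node builds a join message binding its identity to a timestamp."""
    ts = time.time()
    msg = f"{node_id}|{ts}".encode()
    tag = hmac.new(NETWORK_KEY, msg, hashlib.sha256).hexdigest()
    return {"node_id": node_id, "ts": ts, "tag": tag}

def verify(req: dict, deployed_after: float) -> bool:
    """Base station checks integrity, freshness, and deployment epoch."""
    msg = f"{req['node_id']}|{req['ts']}".encode()
    expected = hmac.new(NETWORK_KEY, msg, hashlib.sha256).hexdigest()
    fresh = 0 <= time.time() - req["ts"] <= MAX_AGE
    is_new_node = req["ts"] >= deployed_after   # distinguishes new from old nodes
    return hmac.compare_digest(expected, req["tag"]) and fresh and is_new_node

req = join_request("sensor-42")
print(verify(req, deployed_after=req["ts"] - 1))  # True for a fresh, valid request
```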

Keywords: Authentication, key management, wireless sensor network, elliptic curve cryptography (ECC).

PDF Downloads: 3824
300 Numerical Approach to a Mathematical Modeling of Bioconvection Due to Gyrotactic Micro-Organisms over a Nonlinear Inclined Stretching Sheet

Authors: Madhu Aneja, Sapna Sharma

Abstract:

The water-based bioconvection of a nanofluid containing motile gyrotactic micro-organisms over a nonlinear inclined stretching sheet has been investigated. The governing nonlinear boundary layer equations of the model are reduced to a system of ordinary differential equations via the Oberbeck-Boussinesq approximation and similarity transformations. The resulting set of equations with the associated boundary conditions is then solved using the Finite Element Method. The impact of the pertinent parameters on the velocity, temperature, nanoparticle concentration, and motile micro-organism density profiles is obtained and analyzed in detail. The results show that with an increase in the angle of inclination δ, velocity decreases while temperature, nanoparticle concentration, and the density of motile micro-organisms increase. Additionally, the skin friction coefficient, Nusselt number, Sherwood number, and density number are computed for various thermophysical parameters. It is noticed that increasing the Brownian motion and thermophoresis parameters leads to an increase in the fluid temperature, which results in a reduction in the Nusselt number. On the contrary, the Sherwood number rises with an increase in the Brownian motion and thermophoresis parameters. The findings have been validated by comparing the results of special cases with existing studies.
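
For orientation, the sketch below solves the classical Blasius equation f''' + (1/2) f f'' = 0, a much-simplified analogue of a similarity-reduced boundary layer system (the paper's full model couples further equations for temperature, nanoparticle concentration, and micro-organism density), and it uses a collocation solver rather than the paper's finite element code.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Blasius equation f''' + 0.5*f*f'' = 0 as a first-order system
# y = [f, f', f''];  boundary conditions f(0) = 0, f'(0) = 0, f'(inf) = 1.
def rhs(eta, y):
    return np.vstack([y[1], y[2], -0.5 * y[0] * y[2]])

def bc(ya, yb):
    return np.array([ya[0], ya[1], yb[1] - 1.0])

eta = np.linspace(0, 10, 100)        # truncate "infinity" at eta = 10
y0 = np.zeros((3, eta.size))
y0[1] = eta / eta[-1]                # crude initial guess for f'
sol = solve_bvp(rhs, bc, eta, y0)
print(f"f''(0) = {sol.y[2, 0]:.4f}") # classical value ~0.3321
```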

Keywords: Bioconvection, inclined stretching sheet, gyrotactic micro-organisms, Brownian motion, thermophoresis, finite element method.

PDF Downloads: 722
299 The CEO Mission II, Rescue Robot with Multi-Joint Mechanical Arm

Authors: Amon Tunwannarux, Supanunt Tunwannarux

Abstract:

This paper presents the design features of a rescue robot named CEO Mission II. Its body is of the track-wheel type with double front flippers for climbing over collapsed structures and rough terrain. A 125 cm long, 5-joint mechanical arm installed on the robot body is deployed not only for surveillance from the top view but also for easier and faster access to victims to read their vital signs. Two cameras and sensors for detecting vital signs are set up at the tip of the multi-joint mechanical arm, and a third camera at the back of the robot supports driving control. The hardware and software that control and monitor the rescue robot are explained. The control system drives the robot locomotion and the 5-joint mechanical arm and turns devices on and off. The monitoring system gathers information from 7 distance sensors, IR temperature sensors, 3 CCD cameras, a voice sensor, robot wheel encoders, yaw/pitch/roll angle sensors, a laser range finder, and 8 spare A/D inputs. All sensor and control data are exchanged with a remote control station via IEEE 802.11b Wi-Fi, while the audio and video data are compressed and sent via a separate IEEE 802.11g Wi-Fi transmitter for real-time response. At the remote control station, the robot locomotion and the mechanical arm are controlled by joystick, and a user-friendly GUI control program based on click-and-drag interaction has been developed to control the movement of the arm easily. The robot's traveling map is plotted by computing the wheel encoder and yaw/pitch data, and a 2D obstacle map is plotted from the laser range finder data. The concept and design of this robot can be adapted to suit many other applications. The robot received the Best Technique award at the Thailand Rescue Robot Championship 2006, and all testing results were satisfactory.
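
The traveling map mentioned above amounts to dead-reckoning odometry. The sketch below shows a generic pose update from wheel encoder counts and a yaw reading; the encoder resolution and the sample log are made-up values, not the authors' implementation.

```python
import math

TICKS_PER_METER = 2000.0   # hypothetical encoder resolution

def update_pose(x, y, ticks, yaw_rad):
    """Advance the robot pose by the encoder distance along the measured yaw."""
    d = ticks / TICKS_PER_METER
    return x + d * math.cos(yaw_rad), y + d * math.sin(yaw_rad)

# Replay a short log of (encoder ticks, yaw) samples into a 2D path.
x = y = 0.0
path = [(x, y)]
for ticks, yaw in [(500, 0.0), (500, 0.0), (400, math.pi / 2)]:
    x, y = update_pose(x, y, ticks, yaw)
    path.append((round(x, 2), round(y, 2)))
print(path)   # [(0.0, 0.0), (0.25, 0.0), (0.5, 0.0), (0.5, 0.2)]
```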

Keywords: Controlling, monitoring, rescue robot, mechanical arm.

PDF Downloads: 1972
298 An Innovative Transient Free Adaptive SVC in Stepless Mode of Control

Authors: U. Gudaru, D. R. Patil

Abstract:

Electrical distribution systems incur large losses because loads are widespread and reactive power compensation facilities are inadequate and improperly controlled. A comprehensive static VAR compensator, consisting of a capacitor bank in five binary sequential steps in conjunction with a thyristor controlled reactor (TCR) of the smallest step size, is employed in this investigative work. The work covers performance evaluation through analytical studies and practical implementation on an existing system. A fast-acting, error-adaptive controller suitable for both contactor- and thyristor-switched capacitors is developed. The switching operations achieved are transient-free, so there is practically no need for inrush-current-limiting reactors; the TCR size is minimal, producing only small percentages of non-triplen harmonics; and the scheme facilitates stepless variation of reactive power according to the load requirement, so as to maintain the power factor near unity at all times. It is an elegant, closed-loop microcontroller system with self-regulation in adaptive mode for automatic adjustment. It was successfully tested on a distribution transformer of three-phase, 50 Hz, Dy11, 11 kV/440 V, 125 kVA capacity, and its functional feasibility and technical soundness were established. The controller developed is new, adaptable to both LT and HT systems, and shown in practice to give reliable performance.
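
The binary sequential bank can be pictured as quantizing the demanded compensation: five capacitor steps sized 1, 2, 4, 8, and 16 units cover 0-31 units, and the small TCR absorbs the remainder so that the net output is stepless. The sketch below shows this selection logic under those assumptions; the step size and demand are illustrative, not the paper's ratings.

```python
STEP_KVAR = 5.0    # hypothetical smallest capacitor step (and TCR size)

def select_steps(required_kvar):
    """Choose the 5-bit capacitor combination just above the demand;
    the TCR then absorbs the excess for stepless net compensation."""
    units = int(min(31, -(-required_kvar // STEP_KVAR)))   # ceiling division
    steps = [bool(units >> bit & 1) for bit in range(5)]   # 1,2,4,8,16 units
    tcr_kvar = units * STEP_KVAR - required_kvar           # inductive remainder
    return steps, tcr_kvar

steps, tcr = select_steps(37.0)
print(steps, tcr)   # [False, False, False, True, False] -> 8 units = 40 kvar, TCR absorbs 3 kvar
```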

Keywords: Binary sequential switched capacitor bank, TCR, non-triplen harmonics, stepless Q control, transient-free.

PDF Downloads: 2336
297 The Service Appraisal of Soldiers of the Army of the Czech Republic in the Context of Personal Expenses

Authors: Tereza Dolečková

Abstract:

This article first compares international norms and standards formulating personal expenses and then illustrates the national concept of personal expenses at the Ministry of Defence. The new salary system of soldiers and the importance of the service appraisal in the context of the personal expenses of the Ministry of Defence are then explained. The first part of the article formulates the approach to defining personal expenses within international norms and standards as well as within the Ministry of Defence of the Czech Republic. It clarifies the structure of the employees of the Ministry of Defence of the Czech Republic in the years 2012-2014, the amount of military expenses, and the share of salary expenses in the Ministry's total expenses, and it compares the amount of military expenses in selected member states of the North Atlantic Treaty Organization. The salary system of professional soldiers in connection with the amendment of Act No. 221/1999 Coll., on Professional Soldiers, is clarified in the second part of the article. The amendment significantly regulates the salary items of soldiers, but it also changes the service appraisal of soldiers, which is reflected in one of the seven salary items, the performance bonus. The aim of this article is to clarify the different approaches to defining personal expenses, with emphasis on the Ministry of Defence of the Czech Republic, as they bear on the service appraisal of soldiers of the Army of the Czech Republic and their salary system. An efficient and objective system of service appraisal, and the use of its results, is connected to the principles of career advancement: only the best soldiers can advance in the system of service careers to higher positions. That is why it is necessary to improve the service appraisal so that it provides maximum information about a soldier's performance and also motivates the soldier in his development. Attention should be paid to the service appraisal of the soldiers of the Army of the Czech Republic in order to achieve as much objectivity as possible.

Keywords: Career, human resource management and development, personal expenses, salary system of soldiers, service appraisal of soldiers, the Army of the Czech Republic.

PDF Downloads: 1344
296 Development and Control of Deep Seated Gravitational Slope Deformation: The Case of Colzate-Vertova Landslide, Bergamo, Northern Italy

Authors: Paola Comella, Vincenzo Francani, Paola Gattinoni

Abstract:

This paper presents the Colzate-Vertova landslide, a Deep Seated Gravitational Slope Deformation (DSGSD) located in the Seriana Valley, Northern Italy. The paper aims at describing the development of the landslide as well as evaluating the factors that influence its evolution. After defining the conceptual model of the landslide, numerical simulations were developed using a finite element numerical model, first with a two-dimensional domain and later with a three-dimensional one. The results of the 2-D model showed a displacement field typical of a sackung, as a consequence of the erosion along the Seriana Valley. The analysis also showed that the groundwater flow could locally affect slope stability, bringing about a reduction in the safety factor, but without reaching failure conditions. The sensitivity analysis carried out on the strength parameters pointed out that slope failure could be reached only for a substantial reduction of the geotechnical characteristics. Such a result does not fit the real conditions observed on site, where a number of small failures often develop all along the hillslope. The 3-D model gave a more comprehensive analysis of the evolution of the DSGSD, also taking border effects into account. The results showed that the convex profile of the slope favors the development of displacements along the lateral valley, with a significant reduction in the safety factor, which explains the existing landslides.

Keywords: Deep seated gravitational slope deformation, Italy, landslide, numerical modeling.

PDF Downloads: 1025
295 Evaluation of Residual Stresses in Human Face as a Function of Growth

Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan

Abstract:

Growth and remodeling of biological structures have gained much attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as prosthetics design and computer-assisted surgical interventions. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically, growth is one of the main sources. Extracting an organ's shape from medical imaging produces no information about the residual stresses existing in that organ. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring such residual stresses can cause erroneous results in numerical simulations, and accounting for residual stresses due to tissue growth can improve the accuracy of mechanical analyses. This paper presents an original computational framework based on gradual growth to determine the residual stresses due to growth. To illustrate the method, we apply it to a finite element model of a healthy human face reconstructed from medical images. The distribution of residual stress in the facial tissues is computed; this stress can counteract the effect of gravity and maintain tissue firmness. Our assumption is that the tissue wrinkles caused by aging could be a consequence of decreasing residual stress, which then no longer counteracts gravity. Taking these stresses into account therefore seems extremely important in maxillofacial surgery, as it would help surgeons estimate tissue changes after surgery.

Keywords: Finite element method, growth, residual stress, soft tissue.

PDF Downloads: 1686
294 DWT-SATS Based Detection of Image Region Cloning

Authors: Michael Zimba

Abstract:

A duplicated image region may be subjected to a number of attacks, such as noise addition, compression, reflection, rotation, and scaling, with the intention of either blending it into its target neighborhood or preventing its detection. In this paper, we present an effective and robust method of detecting duplicated regions, including those affected by the various attacks. In order to reduce the dimension of the image, the proposed algorithm first performs a discrete wavelet transform (DWT) of the suspicious image. However, unlike most existing copy-move image forgery (CMIF) detection algorithms operating in the DWT domain, which extract only the low-frequency subband of the DWT of the suspicious image and thereby leave valuable information in the other three subbands, the proposed algorithm extracts features from all four subbands simultaneously. The extracted features are not only a more accurate representation of the image regions but are also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis by eigenvalue decomposition (PCA-EVD) is applied to reduce the dimension of the features. The extracted features are then sorted using the computationally efficient radix sort algorithm. Finally, same affine transformation selection (SATS), a duplication verification method, is applied to detect the duplicated regions. The proposed algorithm is not only fast but also more robust to attacks than related CMIF detection algorithms. The experimental results show high detection rates.
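
A minimal sketch of the subband-extraction step, assuming the PyWavelets package and a Haar wavelet (choices the abstract does not specify): every block contributes features from all four DWT subbands, followed by a PCA-style reduction via SVD.

```python
import numpy as np
import pywt

def block_features(image, block=8):
    """One-level DWT, then per-block feature vectors drawn from all
    four subbands (LL, LH, HL, HH) of a grayscale image."""
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), "haar")
    feats = []
    h, w = LL.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = [s[i:i + block, j:j + block] for s in (LL, LH, HL, HH)]
            feats.append(np.concatenate([p.ravel() for p in patch]))
    return np.array(feats)

img = np.random.randint(0, 256, (128, 128))    # stand-in for a suspicious image
F = block_features(img)
# PCA-EVD-style reduction via SVD of the centered feature matrix
centered = F - F.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ Vt[:10].T                 # keep 10 principal components
print(F.shape, reduced.shape)
```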

Keywords: Affine Transformation, Discrete Wavelet Transform, Radix Sort, SATS.

PDF Downloads: 1910
293 Intelligent Assistive Methods for Diagnosis of Rheumatoid Arthritis Using Histogram Smoothing and Feature Extraction of Bone Images

Authors: SP. Chokkalingam, K. Komathy

Abstract:

Advances in the field of image processing envision a new era of evaluation techniques and applications of such procedures in various fields. One such field is biomedicine, for the prognosis as well as the diagnosis of diseases. Although this plethora of methods provides a wide range of options to select from, it also creates confusion in selecting the apt process and in finding which one is more suitable. Our objective is to use a series of techniques on bone scans so as to detect the occurrence of rheumatoid arthritis (RA) as accurately as possible. Among the existing techniques in the field, our proposed system tends to be more effective, as it depends on new methodologies that have been proven better and more consistent than others. Computer-aided diagnosis provides a more accurate and consistent evaluation that helps to improve the efficiency of the system. The image first undergoes histogram smoothing and specification, a morphing operation, boundary detection by an edge-following algorithm, and finally image subtraction to determine the presence of rheumatoid arthritis in a more efficient and effective way. In preprocessing, noise is removed from the images; using segmentation, the region of interest is found, and histogram smoothing is applied to that specific portion of the image. Gray-level co-occurrence matrix (GLCM) features such as mean, median, energy, correlation, and bone mineral density (BMD) are then extracted and stored in a database. This dataset, labeled with inflamed and non-inflamed values, is used to train a neural network that checks the status of all new images, and rough set theory is implemented for further reduction.
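
The texture-feature step can be sketched with scikit-image's GLCM utilities, an assumed toolchain since the paper does not name its implementation; the region of interest here is a random stand-in.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi):
    """GLCM texture statistics for an 8-bit grayscale region of interest."""
    glcm = graycomatrix(roi, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    feats = {prop: graycoprops(glcm, prop)[0, 0]
             for prop in ("energy", "correlation", "contrast", "homogeneity")}
    feats["mean"] = float(roi.mean())        # first-order statistics of the ROI
    feats["median"] = float(np.median(roi))
    return feats

roi = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in bone ROI
print(glcm_features(roi))
```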

Keywords: Computer Aided Diagnosis, Edge Detection, Histogram Smoothing, Rheumatoid Arthritis.

PDF Downloads: 2479
292 Taguchi Robust Design for Optimal Setting of Process Wastes Parameters in an Automotive Parts Manufacturing Company

Authors: Charles Chikwendu Okpala, Christopher Chukwutoo Ihueze

Abstract:

As a technique that reduces variation in a product by lessening the sensitivity of the design to sources of variation rather than by controlling those sources, Taguchi Robust Design entails designing ideal goods by developing a product that has minimal variance in its characteristics while meeting the desired exact performance. This paper examines the concept of this manufacturing approach and its application to the brake pad product of an automotive parts manufacturing company. Although the firm claimed that defects, excess inventory, and over-production were the only wastes that grossly affect its productivity and profitability, a careful study and analysis of its manufacturing processes with the application of the Single Minute Exchange of Dies (SMED) tool showed that the waste of waiting is a fourth waste that bedevils the firm. The Taguchi L9 orthogonal array, based on the four parameters and three levels of variation for each parameter, revealed that waiting, with a range of 2.17, is the major waste the company must reduce in order to remain viable. Also, to enhance the company's throughput and profitability, the wastes of over-production, excess inventory, and defects, with ranges of 2.01, 1.46, and 0.82, ranking second, third, and fourth respectively, must be reduced to the barest minimum. After proposing -33.84 as the highest optimum signal-to-noise ratio to be maintained for the waste of waiting, the paper advocates the adoption of the tools and techniques of the Lean Production System (LPS) and Continuous Improvement (CI), and concludes by recommending SMED in order to drastically reduce the setup time that leads to unnecessary waiting.
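
For the smaller-the-better case appropriate to wastes, the Taguchi signal-to-noise ratio is SNR = -10 log10((1/n) Σ y_i²). The sketch below computes it, together with the range statistic used to rank the wastes, on made-up measurements rather than the paper's data.

```python
import numpy as np

def snr_smaller_is_better(y):
    """Taguchi S/N ratio for a smaller-the-better characteristic (dB)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical waiting-time measurements (minutes) from three trials
trials = [[48.0, 52.0, 50.0], [30.0, 33.0, 31.0], [61.0, 58.0, 60.0]]
snrs = [snr_smaller_is_better(t) for t in trials]
print([round(s, 2) for s in snrs])
print("range =", round(max(snrs) - min(snrs), 2))  # larger range -> stronger effect
```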

Keywords: Taguchi Robust Design, signal-to-noise ratio, Single Minute Exchange of Dies, lean production system, waste.

PDF Downloads: 975
291 Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores

Authors: Ankit Sinha, Soham Banerjee, Pratik Chattopadhyay

Abstract:

Automated product recognition in retail stores is an important real-world application in the domain of Computer Vision and Pattern Recognition. In this paper, we consider the problem of automatically identifying the classes of products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon existing approaches in terms of effectiveness and memory requirements by developing a two-stage object detection and recognition pipeline comprising a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned on appropriate data sets for better prediction, and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. This encoder is trained using a triplet loss function following the strategy of online hard negative mining for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product object placed in a rack image. Extensive experiments using the Grozi-32k and GP-180 data sets verify the effectiveness of the proposed model.
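
A condensed sketch of such a two-stage pipeline, under stated assumptions: torchvision's generic pretrained Faster-RCNN and ResNet-18 stand in for the authors' fine-tuned models, and the triplet margin is a guess.

```python
import torch
import torchvision
import torchvision.transforms.functional as TF
from torchvision.models import resnet18

# Stage 1: object localizer (a generic pretrained Faster-RCNN as a stand-in).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# Stage 2: ResNet-18 encoder; the classification head is dropped so the
# network emits embeddings that a triplet loss can shape during training,
# e.g. loss = triplet(anchor_emb, positive_emb, negative_emb).
encoder = resnet18(weights="DEFAULT")
encoder.fc = torch.nn.Identity()
triplet = torch.nn.TripletMarginLoss(margin=0.2)   # margin is an assumption

rack = torch.rand(3, 800, 800)                     # stand-in rack image
with torch.no_grad():
    boxes = detector([rack])[0]["boxes"]           # candidate product regions
    crops = [TF.resized_crop(rack, int(b[1]), int(b[0]),
                             max(int(b[3] - b[1]), 1), max(int(b[2] - b[0]), 1),
                             [224, 224]) for b in boxes[:8]]
    if crops:
        emb = encoder(torch.stack(crops))          # match these embeddings
        print(emb.shape)                           # against the gallery set
```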

Keywords: Retail stores, Faster-RCNN, object localization, ResNet-18, triplet loss, data augmentation, product recognition.

PDF Downloads: 583
290 Hospital Waste Management Practices: A Case Study in Iran

Authors: M. Farzadkia, S. Jorfi

Abstract:

Hospital waste is a category of waste consisting of infectious and non-infectious waste, which poses environmental and health risks. Special planning and management are therefore required due to its potential hazards. The lack of valid and comprehensive information regarding the generation and management of hospital waste in Iran is one of the most important problems in this field. This research aimed to evaluate hospital waste management efficiency in the city of Karaj, Iran. The four largest hospitals in Karaj were selected for this cross-sectional study. Site observations and interviews with employees were conducted. The data were gathered using the hospital waste management questionnaire designed by the World Health Organization for developing countries, and the collected data were analyzed using SPSS software. The average solid waste generated per bed was 2.78 kg, comprising 90% domestic waste and 10% infectious waste. Based on the quantitative analysis of general and infectious waste in these hospitals, the highest contributor to general waste was food waste (37.39%), while textiles (28.06%) were the highest contributor to infectious waste. According to the information contained in the questionnaires, the main defects of waste management in these hospitals were inadequate staffing in the waste management sector, poor disinfection of solid waste containers and temporary storage locations, and a lack of proper infectious waste treatment. According to the results of this research, waste management in these hospitals was far from optimal. In order to improve the existing conditions, the problems mentioned must be solved quickly, and continuous monitoring of waste management in these hospitals should be established.

Keywords: Waste management, hospital wastes, solid wastes, Iran.

PDF Downloads: 2160
289 Evaluation of the Impact of Dataset Characteristics for Classification Problems in Biological Applications

Authors: Kanthida Kusonmano, Michael Netzer, Bernhard Pfeifer, Christian Baumgartner, Klaus R. Liedl, Armin Graber

Abstract:

The availability of high-dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and to separate predefined classes, such as patients with a particular disease versus healthy controls. However, most of the existing research focuses on a specific dataset only; there is a lack of generic comparison between classifiers that might provide a guideline for biologists or bioinformaticians selecting the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, on mock datasets. We mimic common biological scenarios by simulating various proportions of real discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC than the other methods, which may be explained by SVM's ability to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, strongly depend on the ratio of discriminators and perform better when the number of discriminators is higher.
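
A compact sketch of this kind of benchmark, with synthetic high-dimensional data standing in for the paper's mock datasets and illustrative parameter values:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Mock "omics-like" data: many features, few truly discriminating biomarkers.
X, y = make_classification(n_samples=200, n_features=1000,
                           n_informative=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVC(probability=True),
    "LogReg": LogisticRegression(max_iter=2000),
    "k-NN": KNeighborsClassifier(),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
}
for name, clf in models.items():
    clf.fit(Xtr, ytr)
    auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
    print(f"{name:12s} AUC = {auc:.3f}")
```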

Keywords: Classification, high dimensional data, machine learning.

PDF Downloads: 2384
288 A Study on Algorithm Fusion for Recognition and Tracking of Moving Robot

Authors: Jungho Choi, Youngwan Cho

Abstract:

This paper presents an algorithm for the recognition and tracking of moving objects; a 1/10 scale model car is used to verify the performance of the algorithm. The presented algorithm merges the SURF algorithm with the Lucas-Kanade algorithm. SURF is robust to changes in contrast, size, and rotation and can recognize objects, but it is slow due to its computational complexity. The Lucas-Kanade algorithm has a fast processing speed but cannot recognize objects; its optical flow compares the previous and current frames so that the movement of a pixel can be tracked. To solve the problems that occur when fusing the two algorithms, a Kalman filter is used to estimate the next location, and an accumulated-error compensation algorithm is implemented. The resolution of the camera (vision sensor) is fixed at 640x480. To verify the performance of the fusion algorithm, it is compared with the SURF algorithm in three situations: driving straight, driving on a curve, and recognizing cars behind obstacles. Situations similar to actual conditions can be reproduced using the model vehicle. The proposed fusion algorithm showed superior performance and accuracy compared with existing object recognition and tracking algorithms. Future work will improve the performance of the algorithm so that it can be tested on images of actual road environments.
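
The fusion can be pictured as: track cheap Lucas-Kanade flow every frame and smooth/predict with a Kalman filter. A minimal OpenCV sketch under those assumptions follows; the SURF re-detection step is omitted and all parameters are illustrative.

```python
import numpy as np
import cv2

# Constant-velocity Kalman filter over (x, y): state [x, y, vx, vy].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track(prev_gray, gray, pts):
    """One fusion step: Lucas-Kanade flow gives a measurement, the Kalman
    filter predicts and corrects to suppress accumulated drift."""
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = nxt[status.ravel() == 1]
    prediction = kf.predict()[:2].ravel()
    if len(good):
        measured = good.mean(axis=0).reshape(2, 1).astype(np.float32)
        kf.correct(measured)             # fuse the measurement into the estimate
        return kf.statePost[:2].ravel(), good.reshape(-1, 1, 2)
    return prediction, pts               # no flow found: fall back to prediction

prev = np.random.randint(0, 255, (480, 640), np.uint8)   # stand-in frames
curr = np.roll(prev, 2, axis=1)
p0 = cv2.goodFeaturesToTrack(prev, 50, 0.01, 10)
pos, p0 = track(prev, curr, p0)
print(pos)
```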

Keywords: SURF, Lucas-Kanade optical flow, Kalman filter, object recognition, object tracking.

PDF Downloads: 2292
287 A Grid-based Neural Network Framework for Multimodal Biometrics

Authors: Sitalakshmi Venkataraman

Abstract:

Recent scientific investigations indicate that multimodal biometrics overcome the technical limitations of unimodal biometrics, making them ideally suited for everyday applications that require a reliable authentication system. However, for a successful adoption of multimodal biometrics, such systems would require large heterogeneous datasets with complex multimodal fusion and privacy schemes spanning various distributed environments. From experimental investigations of current multimodal systems, this paper reports the various issues related to speed, error recovery, and privacy that impede the diffusion of such systems in real life. This calls for a robust mechanism that caters to the desired real-time performance, robust fusion schemes, interoperability, and adaptable privacy policies. The main objective of this paper is to present a framework that addresses these issues by leveraging the heterogeneous resource-sharing capacities of Grid services and the efficient machine learning capabilities of artificial neural networks (ANN). Hence, this paper proposes a Grid-based neural network framework for adopting multimodal biometrics with a view to overcoming the barriers of performance, privacy, and risk that are associated with shared heterogeneous multimodal data centres. The framework combines the concept of Grid services, for reliable brokering and privacy policy management of shared biometric resources, with a momentum back propagation ANN (MBPANN) model of machine learning for efficient multimodal fusion and authentication schemes. Real-life applications would be able to adopt the proposed framework to cater to varying business requirements and user privacy needs for a successful diffusion of multimodal biometrics in day-to-day transactions.
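
The momentum back propagation idea reduces to two update lines, v ← μv − η∇W and W ← W + v. Below is a tiny numpy sketch of such a training loop with illustrative sizes and rates, not the paper's MBPANN configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 1)) * 0.1   # single-layer weights (illustrative)
v = np.zeros_like(W)                # momentum buffer
lr, mu = 0.1, 0.9                   # learning rate and momentum coefficient

X = rng.normal(size=(32, 4))        # stand-in fused multimodal feature vectors
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

for step in range(100):
    p = 1.0 / (1.0 + np.exp(-X @ W))    # sigmoid output
    grad = X.T @ (p - y) / len(X)       # gradient of the cross-entropy loss
    v = mu * v - lr * grad              # momentum accumulates past gradients
    W = W + v                           # weight update
print("final loss:",
      float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()))
```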

Keywords: Back propagation, Grid services, multimodal biometrics, neural networks.

PDF Downloads: 1917
286 Tailoring of ECSS Standard for Space Qualification Test of CubeSat Nano-Satellite

Authors: B. Tiseo, V. Quaranta, G. Bruno, G. Sisinni

Abstract:

There is an increasing demand for nano-satellite development among universities, small companies, and emerging countries. Low cost and fast delivery are the main advantages of this class of satellites, achieved by the extensive use of commercial off-the-shelf components. On the other side, low reliability and a poor success rate are limiting the use of nano-satellites to educational and technology demonstration missions rather than commercial purposes. Standardizing nano-satellite environmental testing by tailoring the existing test standards for medium/large satellites is therefore a crucial step for their market growth. Thus, it is fundamental to find the right trade-off between improving reliability and keeping the low-cost/fast-delivery advantages. This is even more essential for satellites of the CubeSat family. These miniaturized and standardized satellites have a 10 cm cubic form and a mass of no more than 1.33 kilograms per unit (1U). For this class of nano-satellites, the qualification process is mandatory to reduce the risk of failure during a space mission. This paper reports the description and results of the space qualification test campaign performed on Endurosat's CubeSat nano-satellite and modules. Mechanical and environmental tests have been carried out step by step, from the testing of the single subsystems up to the assembled CubeSat nano-satellite. Functional tests have been performed throughout the test campaign to verify the functionality of the systems. The test durations and levels have been selected by tailoring the European Space Agency standard ECSS-E-ST-10-03C and GEVS GSFC-STD-7000A.

Keywords: CubeSat, Nano-satellite, shock, testing, vibration.

PDF Downloads: 1716
285 A Decade of Creating an Alternative Banking System in Tanzania: The Current State of Affairs of Islamic Banks

Authors: Pradeep Kulshrestha, Maulana Ayoub Ali

Abstract:

The concept of financial inclusion has been tabled across the world, with practitioners, academicians, policy makers, and economists working hard to look for the best possible opportunities to bring the whole of society into the banking cycle. The Islamic banking system is considered to be one of these opportunities. Countries like the United Kingdom, the United States of America, Malaysia, Saudi Arabia, the United Arab Emirates, and many African countries have accommodated Islamic banking within the conventional banking system as one of their financial inclusion strategies. This paper analyses the current state of affairs of the Islamic banking system in Tanzania in order to understand the improvement of the provision of Islamic banking products and services in the country. The paper discusses the historical background of the banking system in Tanzania, the level of penetration of banking products and services, and the arrival of the Islamic banking system in the country. Furthermore, it discusses the banking regulatory bodies and the legal instruments governing banking operations, as well as a number of legal challenges facing Islamic banking operations in the country. Following a critical literature review, the paper found that there is no legal instrument that addresses the introduction and provision of the Islamic banking system in Tanzania. Furthermore, the Islamic banking system has been treated as a banking product, which is incorrect, because Islamic banking is a banking system of its own. In addition, it was found that the lack of a proper regulatory system and of legal instruments to harmonize the conventional and Islamic banking systems has resulted in the closure of one Islamic window in the country, which in the end affects the credibility of the newly introduced banking system. In its concluding remarks, the paper suggests that Tanzania should address all the legal challenges affecting the smooth operation of the Islamic banking system. This could be done by adopting the Islamic banking legal models used in countries like Malaysia, or by borrowing the legal harmonization process adopted by the UK, Uganda, Nigeria, and Kenya.

Keywords: Islamic banking, Islamic Windows, regulations, banks.

PDF Downloads: 913
284 Ventilation Efficiency in the Subway Environment for the Indoor Air Quality

Authors: Kyung Jin Ryu, Makhsuda Juraeva, Sang-Hyun Jeong, Dong Joo Song

Abstract:

Clean air in subway stations is important to passengers. Platform Screen Doors (PSDs) can improve indoor air quality in the subway station; however, the air quality in the subway tunnel is degraded: the tunnel has high CO2 concentrations and particulate matter (PM) levels. The indoor air quality (IAQ) in the subway environment degrades as the frequency of train operation and the number of trains increase, so the ventilation systems of the subway tunnel need improvement to provide better air quality. Numerical analyses can be effective tools for analyzing the performance of subway twin-track tunnel ventilation systems. An existing twin-track tunnel in the metropolitan Seoul subway system was chosen for the numerical simulations, and the ANSYS CFX software was used for unsteady computations of the airflow inside the twin-track tunnel when trains move. The airflow inside the tunnel was simulated for one train running and for two trains running at the same time. The piston effect inside the tunnel was analyzed with all shafts functioning as natural ventilation shafts. The air supplied through the shafts mixes with the pollutant air in the tunnel, and the pollutant air is exhausted by the mechanical ventilation shafts. The supplied and discharged air are balanced when only one train runs in the twin-track tunnel. The pollutant level in the tunnel is high when two trains run simultaneously in opposite directions and all shafts function as natural shafts, as in cases where there is no electrical power supply to the shafts. The pollutant air remaining inside the tunnel enters the station platform when the doors are opened.

Keywords: Indoor air quality, subway twin-track tunnel, train-induced wind.

PDF Downloads: 4343
283 Power Production Performance of Different Wave Energy Converters in the Southwestern Black Sea

Authors: Ajab G. Majidi, Bilal Bingölbali, Adem Akpınar

Abstract:

This study investigates the amount of energy (the economic wave energy potential) that can be obtained from existing wave energy converters in the high-wave-energy-potential region of the Black Sea, and their performance at different depths in the region. The data needed for this purpose were obtained using the calibrated, nested, layered SWAN wave modeling program, version 41.01AB, forced with Climate Forecast System Reanalysis (CFSR) winds from 1979 to 2009. A wave dataset at 2-hour intervals was accumulated for a sub-grid domain around Karaburun beach in Arnavutkoy, a district of Istanbul. Annual sea-state characteristic matrices were calculated over the 31 years for five different depths along a line perpendicular to the coastline. From the power matrices of the different wave energy converter systems and the characteristic matrices for each possible installation depth, probability distribution tables of the specified mean wave period (or wave energy period) and significant wave height were calculated. Then, by using the relationship between these distribution tables, the energy that the wave energy converter systems at each depth can produce under the present wave climate was determined. Thus, the economically feasible potential of the relevant coastal zone was revealed, and the effect of different depths on the energy converter systems is presented. The Oceantic at 50, 75, and 100 m depths and the Oyster at 5 and 25 m depths present the best performance. Within the 31-year period, 1998 was the most dynamic year and 1989 the least.
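
The core calculation pairs each converter's power matrix with the site's sea-state occurrence matrix: annual energy is the element-wise product summed over all (Hs, Te) bins. A toy numpy sketch with made-up 3x3 matrices:

```python
import numpy as np

# Rows: significant wave height bins; columns: energy period bins.
power_kw = np.array([[ 20,  35,  30],     # converter power matrix (kW),
                     [ 60, 110,  95],     # hypothetical values
                     [120, 240, 210]])
hours = np.array([[900, 700, 150],        # annual occurrence of each sea
                  [500, 450,  90],        # state (hours/year), as derived
                  [120, 100,  30]])       # from a wave hindcast

annual_mwh = float((power_kw * hours).sum()) / 1000.0
capacity_factor = annual_mwh * 1000 / (power_kw.max() * 8760)
print(f"annual production ~ {annual_mwh:.0f} MWh, capacity factor {capacity_factor:.2f}")
```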

Keywords: Annual power production, Black Sea, efficiency, power production performance, wave energy converter.

PDF Downloads: 660
282 Public Financial Management in Ghana: A Move beyond Reforms to Consolidation and Sustainability

Authors: Mohammed Sani Abdulai

Abstract:

Ghana’s Public Financial Management reforms have been going on for some two decades now (1997/98 to 2017/18). Given this long period of reforms, Ghana in 2019 is putting together both a Public Financial Management (PFM) strategy and a Ghana Integrated Financial Management Information System (GIFMIS) strategy for the next five years (2020-2024). The primary aim of these dual strategies is to assist the country in moving beyond reforms to consolidation and sustainability. In this paper we first examine the evolution of Ghana’s PFM reforms. Second, we review the legal and institutional reforms undertaken to strengthen the country’s key PFM institutions. Third, we summarize the strengths and weaknesses identified by the 2018 Public Expenditure and Financial Accountability (PEFA) assessment of Ghana’s PFM system, relating to its macro-fiscal framework, budget preparation and approval, budget execution, accounting and fiscal reporting, as well as external scrutiny and audit. Finally, we consider what the country should be doing to achieve its intended goal of PFM consolidation and sustainability. Using a qualitative method of review and analysis of existing documents, we bring to the fore the lessons that could be learnt by other developing countries from Ghana’s PFM reform experience. These lessons include the need to: (a) undergird any PFM reform with a comprehensive PFM reform strategy; (b) undertake legal and institutional reforms of the key PFM institutions; (c) assess the strengths and weaknesses of those reforms using PFM performance evaluation tools such as the PEFA framework; and (d) move beyond reforms to consolidation and sustainability.

Keywords: Public financial management, public expenditure and financial accountability (PEFA), reforms, consolidation, sustainability.

PDF Downloads: 1085
281 The Changing Trend of Collaboration Patterns in the Social Sciences: Institutional Influences on Academic Research in Korea, 2013-2016

Authors: Ho-Dae Chong, Jong-Kil Kim

Abstract:

Collaborative research has become more prevalent and important across disciplines because it stimulates innovation and interaction between scholars. Since existing studies have largely disregarded the institutional conditions that trigger collaborative research, this work analyzes the changing trend in collaborative work patterns among Korean social scientists. The focus of this research is the performance of social scientists who received research grants through the government’s Social Science Korea (SSK) program. Using quantitative statistical methods, collaboration patterns in a total of 2,354 papers published under the umbrella of the SSK program in peer-reviewed scholarly journals from 2013 to 2016 were examined to identify changing trends and the factors triggering collaborative research. A notable finding is that the share of collaborative research is overwhelmingly higher than that of individual research. In particular, the level of collaborative research surpassed 70%, increasing much more quickly than in other social science research. Additionally, the most common composition of collaborative research was two or three researchers conducting joint research as co-authors, and this proportion has also increased steadily. Finally, a strong association between international journals and co-authorship patterns was found for the papers published by SSK program researchers from 2013 to 2016. The SSK program can be seen as the driving force behind collaboration between social scientists: its emphasis on competition through a merit-based financial support system, along with a rigorous evaluation process, seems to have influenced researchers to cooperate with those who have similar research interests.

Keywords: Co-authorship, collaboration, competition, cooperation, Social Science Korea, policy.

PDF Downloads: 983
280 Controller Design for Euler-Bernoulli Smart Structures Using Robust Decentralized POF via Reduced Order Modeling

Authors: T.C. Manjunath, B. Bandyopadhyay

Abstract:

This paper features the modeling and design of a Robust Decentralized Periodic Output Feedback (RDPOF) control technique for the active vibration control of smart, flexible, multimodel Euler-Bernoulli cantilever beams for a multivariable (MIMO) case, retaining the first 6 vibratory modes. The beam structure is modeled in state-space form using piezoelectric theory, the Euler-Bernoulli beam theory, and the Finite Element Method (FEM), by dividing the beam into 4 finite elements and placing piezoelectric sensor/actuator pairs at two finite element locations (positions 2 and 4) as collocated, surface-mounted pairs, thus giving rise to a multivariable model of the smart structure plant with two inputs and two outputs. Five such multivariable models are obtained by varying the dimensions (aspect ratios) of the aluminum beam, giving rise to a multimodel of the smart structure system. Using a model order reduction technique, the reduced-order model of the higher-order system is obtained based on dominant eigenvalue retention and the method of Davison. RDPOF controllers are designed for the above five multivariable multimodel plants. The closed-loop responses with the RDPOF feedback gain and the magnitudes of the control input are observed, and the performance of the proposed multimodel smart structure system with the controller is evaluated for vibration control.

Keywords: Smart structure, Euler-Bernoulli beam theory, Periodic output feedback control, Finite Element Method, State space model, SISO, Embedded sensors and actuators, Vibration control, Reduced order model

PDF Downloads: 2028
279 Application of Griddization Management to Construction Hazard Management

Authors: Lingzhi Li, Jiankun Zhang, Tiantian Gu

Abstract:

Hazard management that can prevent fatal accidents and property losses is a fundamental process during the construction stage of buildings. However, due to a lack of safety supervision resources and to operational pressures, the conduct of hazard management is poor and ineffective in China. In order to improve the quality of construction safety management, it is critical to explore the use of information technologies to ensure that the process of hazard management is efficient and effective. After exploring the existing problems of construction hazard management in China, this paper develops a griddization management model for construction hazard management. First, following the knowledge grid infrastructure, the griddization computing infrastructure for construction hazard management is designed; it includes five layers: the resource entity layer, information management layer, task management layer, knowledge transformation layer, and application layer. This infrastructure serves as the technical support for realizing grid management. Second, this study divides construction hazards into grids at the city, district, and construction-site levels according to grid principles. Last, a griddization management process including hazard identification, assessment, and control is developed, in which all stakeholders of construction safety management, such as owners, contractors, supervision organizations, and government departments, take their corresponding responsibilities. Finally, a case study based on actual construction hazard identification, assessment, and control is used to validate the effectiveness and efficiency of the proposed griddization management model. The advantage of the designed model is that it realizes information sharing and cooperative management among the various safety management departments.

Keywords: Construction hazard, grid management, griddization computing, process.

PDF Downloads: 1577
278 Critical Assessment of Scoring Schemes for Protein-Protein Docking Predictions

Authors: Dhananjay C. Joshi, Jung-Hsin Lin

Abstract:

Protein-protein interactions (PPI) play a crucial role in many biological processes, such as cell signalling, transcription, translation, replication, signal transduction, and drug targeting. Structural information about protein-protein interactions is essential for understanding the molecular mechanisms of these processes. Structures of protein-protein complexes are still difficult to obtain by biophysical methods such as NMR and X-ray crystallography, and therefore protein-protein docking computation is considered an important approach for understanding protein-protein interactions. However, reliable prediction of protein-protein complexes is still under way. In the past decades, several grid-based docking algorithms based on the Katchalski-Katzir scoring scheme were developed, e.g., FTDock, ZDOCK, HADDOCK, RosettaDock, and HEX. However, the success rate of protein-protein docking prediction is still far from ideal. In this work, we first propose a more practical measure for evaluating the success of protein-protein docking predictions, the rate of first success (RFS), which is similar to the concept of mean first passage time (MFPT). Accordingly, we have assessed the ZDOCK bound and unbound benchmarks 2.0 and 3.0. We also created a new benchmark set for protein-protein docking predictions, in which the complexes have experimentally determined binding affinity data. We performed free energy calculations based on the solution of the non-linear Poisson-Boltzmann equation (nlPBE) to improve the binding mode prediction, using the well-studied barnase-barstar system to validate the parameters for the free energy calculations. In addition, the nlPBE-based free energy calculations were conducted for the cases badly predicted by ZDOCK and ZRANK. We found that direct molecular mechanics energetics cannot be used to discriminate the native binding pose from the decoys. Our results indicate that nlPBE-based calculations appear to be one of the promising approaches for improving the success rate of binding pose predictions.

Keywords: Protein-protein docking, protein-protein interaction, molecular mechanics energetics, Poisson-Boltzmann calculations.

PDF Downloads: 1805
277 Localization of Geospatial Events and Hoax Prediction in the UFO Database

Authors: Harish Krishnamurthy, Anna Lafontant, Ren Yi

Abstract:

Unidentified Flying Objects (UFOs) have been an interesting topic for most enthusiasts, and people all over the United States report such findings online at the National UFO Reporting Center (NUFORC). Some of these reports are hoaxes; among those that seem legitimate, our task is not to establish that the events are indeed related to flying objects from aliens in outer space. Rather, we intend to identify whether a report is a hoax, as labeled by the UFO database team with their existing curation criteria. The database also provides a wealth of information that can be exploited for various analyses and insights, such as social reporting, identifying real-time spatial events, and much more. We perform analysis to localize these time-series geospatial events and correlate them with known real-time events. This paper does not confirm any legitimacy of alien activity, but rather attempts to gather information from likely legitimate reports of UFOs by studying the online reports. These events happen in geospatial clusters and are also time-based. We look at cluster density and data visualization to search the space of various cluster realizations and decide on the most probable clusters, which provide information about the proximity of such activity. A random forest classifier is also presented that identifies true events and hoax events using the best features available, such as region, week, time period, and duration. Lastly, we show the performance of the scheme on various days and correlate it with real-time events, where one of the UFO reports strongly correlates with a missile test conducted in the United States.
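
A minimal sketch of such a hoax classifier over the named features; the encoding and the data are placeholders, since the paper's actual preprocessing is not described here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Placeholder feature matrix: region id, week of year, time period id, duration.
X = np.column_stack([rng.integers(0, 50, n),      # region
                     rng.integers(1, 53, n),      # week
                     rng.integers(0, 4, n),       # time period of day
                     rng.exponential(10.0, n)])   # sighting duration (minutes)
y = rng.integers(0, 2, n)                         # 1 = hoax label from curation

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)
print("feature importances:", clf.feature_importances_.round(3))
```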

Keywords: Time-series clustering, feature extraction, hoax prediction, geospatial events.

PDF Downloads: 851
276 Convection through Light Weight Timber Constructions with Mineral Wool

Authors: J. Schmidt, O. Kornadt

Abstract:

The major part of a lightweight timber construction consists of insulation. Mineral wool is the most commonly used insulation due to its cost efficiency and easy handling. The fiber orientation and porosity of this insulation material enable flow-through; its airflow resistance is low. If leakage occurs in the insulated bay section, the convective flow may cause energy losses and infiltration of the exterior wall with moisture and particles. In particular, the infiltrated moisture may lead to thermal bridges and the growth of health-endangering mould and mildew. In order to prevent this problem, different numerical calculation models have been developed, and all models developed so far have potential for improvement. Implementing the flow-through properties of mineral wool insulation may help to improve the existing models. Assuming that the real pressure difference between the interior and exterior surfaces is larger than the pressure difference prescribed in the standard test procedure for mineral wool, ISO 9053 / EN 29053, measurements were performed using the measurement setup for research on convective moisture transfer (MSRCMT). These measurements show that structural inhomogeneities of mineral wool affect the permeability only at higher pressure differences, such as those applied in the MSRCMT. Additional microscopic investigations show that the location of a leak within the construction has a crucial influence on the airflow and the infiltration rate. The results clearly indicate that the empirical values for the acoustic resistance of mineral wool should not be used for the calculation of convective transfer mechanisms.
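
For orientation, low-velocity flow through mineral wool is usually modeled as Darcy-like, Q = Δp·A/(r·d), with the airflow resistivity r being the ISO 9053 quantity. The sketch below evaluates this with illustrative values; the paper's point is precisely that such acoustic-range resistivities can misestimate convective transfer at higher pressure differences.

```python
def volume_flow(dp_pa, area_m2, resistivity, thickness_m):
    """Darcy-type volume flow (m^3/s) through a permeable insulation layer.
    resistivity: airflow resistivity r in Pa*s/m^2 (the ISO 9053 quantity)."""
    return dp_pa * area_m2 / (resistivity * thickness_m)

# Illustrative values: 10 Pa across a 1 m^2, 200 mm mineral wool layer
# with an acoustic-range resistivity of 10 kPa*s/m^2.
q = volume_flow(dp_pa=10.0, area_m2=1.0, resistivity=10_000.0, thickness_m=0.2)
print(f"{q * 3600:.1f} m^3/h")   # ~18 m^3/h if the acoustic value held
```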

Keywords: Convection, convective transfer, infiltration, mineral wool, permeability, resistance, leakage.

PDF Downloads: 2142
275 Development of Storm Water Quality Improvement Strategy Plan for Local City Councils in Western Australia

Authors: Ranjan Sarukkalige, Dinushi Gamage

Abstract:

The aim of this study was to develop a storm water quality improvement strategy plan (WQISP) that assists managers and decision makers of local city councils in enhancing their activities to improve regional water quality. The City of Gosnells in Western Australia is considered as a case study. The procedure for developing the WQISP consists of reviewing existing water quality data, identifying water quality issues in the study areas, and developing a decision-making tool for officers, managers, and decision makers. It was found that land use type is the main factor affecting water quality; therefore, activities, sources, and pollutants related to different land use types, including residential, industrial, agricultural, and commercial, are given high importance in the study. Semi-structured interviews were carried out with coordinators of different management sections of the regional councils in order to understand the associated management framework and issues. The issues identified in these interviews were used in preparing the decision-making tool. The variables associated with the defined "value versus threat" decision-making tool were obtained from an intensive literature review. The main recommendations provided for the improvement of water quality in local city councils include non-structural, structural, and management controls, and consideration of the potential impacts of climate change.

Keywords: Storm water quality, storm water management, land use, strategy plan.

PDF Downloads: 1668
274 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach

Authors: Rajvir Kaur, Jeewani Anupama Ginige

Abstract:

With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from being clinician-oriented to technology-oriented. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree), and Artificial Neural Networks (ANN). The evaluation is carried out based on the standard metrics of precision (P), recall (R), F1-score, and accuracy. The experimental results show that ANN achieved the highest accuracy (99.4%) when tested with the breast cancer dataset. On the other hand, when the classifiers were tested with the cervical cancer dataset, the Ensemble (Bagged Tree) technique gave better accuracy (93.1%) than the other classifiers.
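
The four reported metrics follow directly from the confusion matrix: precision = TP/(TP+FP), recall = TP/(TP+FN), F1 = 2PR/(P+R), and accuracy = (TP+TN)/total. A short sketch with placeholder predictions, not the paper's results:

```python
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

# Placeholder labels: 1 = malignant, 0 = benign (not the paper's data)
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]

print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("Recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1-score: ", f1_score(y_true, y_pred))         # harmonic mean of P and R
print("Accuracy: ", accuracy_score(y_true, y_pred))   # (TP + TN) / total
```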

Keywords: Artificial neural networks, breast cancer, cancer dataset, classifiers, cervical cancer, F-score, logistic regression, machine learning, precision, recall, support vector machine.

PDF Downloads: 1553