Search results for: CHAID Decision Tree Algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4867

487 A Reusability Evaluation Model for OO-Based Software Components

Authors: Parvinder S. Sandhu, Hardeep Singh

Abstract:

The requirement to improve software productivity has promoted research on software metric technology. There are metrics for identifying the quality of reusable components, but the function that makes use of these metrics to determine the reusability of software components is still not clear. If identified in the design phase, or even in the coding phase, these metrics can help reduce rework by improving the quality of reuse of the component and hence improve productivity due to a probable increase in the reuse level. The CK metric suite is the most widely used set of metrics for object-oriented (OO) software; we critically analyzed the CK metrics, tried to remove the inconsistencies, and devised a framework of metrics to obtain a structural analysis of OO-based software components. A neural network can learn new relationships from new input data and can be used to refine fuzzy rules to create a fuzzy adaptive system. Hence, a neuro-fuzzy inference engine can be used to evaluate the reusability of an OO-based component, using its structural attributes as inputs. In this paper, an algorithm is proposed in which the tuned WMC, DIT, NOC, CBO and LCOM values of an OO software component are given as inputs to a neuro-fuzzy system, and the output is obtained in terms of reusability. The developed reusability model has produced high-precision results, as expected by the human experts.

Keywords: CK-Metric, ID3, Neuro-fuzzy, Reusability.

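The abstract describes the inference pipeline but not the trained engine itself. Purely as an illustration, a minimal sketch follows, assuming hypothetical normalization ranges and weights (none of these values come from the paper) to map the five tuned CK metric inputs to a single reusability score:

```python
import numpy as np

# Hypothetical normalization ranges for the five CK metrics (not from the paper).
RANGES = {"WMC": (0, 50), "DIT": (0, 6), "NOC": (0, 10), "CBO": (0, 14), "LCOM": (0, 100)}
# Hypothetical weights standing in for the trained neuro-fuzzy inference engine.
WEIGHTS = {"WMC": -0.25, "DIT": -0.15, "NOC": 0.10, "CBO": -0.30, "LCOM": -0.20}

def reusability(metrics):
    """Map tuned CK metric values to an illustrative reusability score in [0, 1]."""
    score = 0.0
    for name, (lo, hi) in RANGES.items():
        x = (metrics[name] - lo) / (hi - lo)           # normalize to [0, 1]
        score += WEIGHTS[name] * np.clip(x, 0.0, 1.0)  # weighted contribution
    return float(1.0 / (1.0 + np.exp(-4.0 * (score + 0.5))))  # squash to [0, 1]

print(reusability({"WMC": 12, "DIT": 2, "NOC": 3, "CBO": 4, "LCOM": 20}))
```
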
486 Exploring Methods and Strategies for Sustainable Urban Development

Authors: Klio Monokrousou, Maria Giannopoulou

Abstract:

Urban areas, as they have developed and operate today, concentrate a significant number of people and a large number of activities that generate desires and reasons for traveling. The territorial expansion of cities, as well as the need to preserve the importance of central city areas, leads to a continuous increase in transportation needs, which, in the limited urban space, creates serious traffic and operational problems. The modern perception of urban planning is directed towards more holistic approaches and integrated policies that make the city economically competitive, socially just and more environmentally friendly. Over the last 25 years, the goal of sustainable transport development has been central to the agenda of any plan or policy for the city. The modern planning of urban space takes into account the economic and social aspects of the city and the importance of the environment to sustainable urban development. In this context, the European Union promotes direct or indirect related interventions according to its cohesion and environmental policies, and many countries have even had the chance to actually test them. This paper explores the methods and processes that have been developed in this direction and presents a review and systematic presentation of this work. The ultimate purpose of this research is to use this review to create a decision-making methodological framework which can be the basis of a useful operational tool for sustainable urban planning.

Keywords: Sustainable urban development, urban mobility, urban regeneration methods.

485 Improving Packet Latency of Video Sensor Networks

Authors: Arijit Ghosh, Tony Givargis

Abstract:

Video sensor networks operate under stringent latency requirements. Packets have a deadline within which they have to be delivered. Violation of the deadline causes a packet to be treated as lost, and the loss of packets ultimately affects the quality of the application. Network latency is typically a function of many interacting components. In this paper, we propose ways of reducing the forwarding latency of a packet at intermediate nodes. The forwarding latency is caused by a combination of processing delay and queueing delay. The former is incurred in order to determine the next hop in dynamic routing. We show that unless link failures occur in a very specific and unlikely pattern, the vast majority of these lookups are redundant. To counter this, we propose source routing as the routing strategy. However, source routing suffers from issues related to scalability and an inability to adapt to network dynamics. We propose solutions to counter these issues and show that source routing is definitely a viable option for practical-sized video networks. We also propose a fast and fair packet scheduling algorithm that reduces the queueing delay at the nodes. We support our claims through extensive simulation on realistic topologies with practical traffic loads and failure patterns.

Keywords: Sensor networks, Packet latency, Network design, Network performance.

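To make the processing-delay argument concrete, the following sketch (hypothetical packet and table structures, not the authors' implementation) contrasts the per-hop next-hop lookup of dynamic routing with source routing, where an intermediate node simply pops the next hop from the packet header:

```python
from collections import deque

def forward_dynamic(packet, routing_table):
    # Processing delay: every intermediate node performs a next-hop lookup.
    return routing_table[packet["dst"]]

def forward_source_routed(packet):
    # The full route is computed once at the source; intermediate nodes
    # just pop the next hop from the header -- no routing lookup needed.
    return packet["route"].popleft()

packet = {"dst": "sink", "route": deque(["n2", "n5", "sink"]), "payload": b"frame"}
print(forward_source_routed(packet))  # n2
print(forward_source_routed(packet))  # n5
```
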
484 Evaluation of Features Extraction Algorithms for a Real-Time Isolated Word Recognition System

Authors: Tomyslav Sledevič, Artūras Serackis, Gintautas Tamulevičius, Dalius Navakauskas

Abstract:

This paper presents a comparative evaluation of feature extraction algorithms for a real-time isolated word recognition system based on an FPGA. The Mel-frequency cepstral, linear frequency cepstral, linear predictive and linear predictive cepstral coefficients were implemented in a hardware/software design. The proposed system was investigated in speaker-dependent mode for 100 different Lithuanian words. The robustness of the feature extraction algorithms was tested by recognizing speech records at different signal-to-noise ratios. The experiments on clean records show the highest accuracy for Mel-frequency cepstral and linear frequency cepstral coefficients. For records with a 15 dB signal-to-noise ratio, the linear predictive cepstral coefficients give the best results. The hardware and software parts of the system are clocked at 50 MHz and 100 MHz, respectively. For the classification purpose, a pipelined dynamic time warping core was implemented. The proposed word recognition system satisfies the real-time requirements and is suitable for applications in embedded systems.

Keywords: Isolated word recognition, features extraction, MFCC, LFCC, LPCC, LPC, FPGA, DTW.

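The paper implements the DTW core as a pipelined hardware block; for reference, a minimal software version of dynamic time warping between two sequences of feature vectors (e.g., per-frame MFCCs) might look like this:

```python
import numpy as np

def dtw_distance(a, b):
    """DTW between two sequences of feature vectors, shapes (n, d) and (m, d)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
template = rng.normal(size=(40, 12))   # e.g. 40 frames of 12 cepstral coefficients
utterance = rng.normal(size=(46, 12))
print(dtw_distance(template, utterance))
```
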
483 A Two-Stage Airport Ground Movement Speed Profile Design Methodology Using Particle Swarm Optimization

Authors: Zhang Tianci, Ding Meng, Zuo Hongfu, Zeng Lina, Sun Zejun

Abstract:

Automation of airport operations can greatly improve ground movement efficiency. In this paper, we study the speed profile design problem for advanced airport ground movement control and guidance. The problem is constrained by the surface four-dimensional trajectory generated in taxi planning. A decomposed two-stage approach is presented to solve this problem efficiently. In the first stage, speeds are allocated at control points, which ensures that smooth speed profiles can be found later. In the second stage, detailed speed profiles for each taxi interval are generated according to the allocated control point speeds, with the objective of minimizing the overall fuel consumption. We present a swarm intelligence based algorithm for the first-stage problem and a discrete-variable-driven enumeration method for the second-stage problem, since the latter has only a small set of discrete variables. Experimental results demonstrate that the presented methodology performs well on real-world speed profile design problems.

Keywords: Airport ground movement, fuel consumption, particle swarm optimization, smoothness, speed profile design.

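The abstract does not give pseudo-code for the first-stage swarm algorithm; the sketch below is a generic PSO loop over control-point speed allocations, with a placeholder fitness function (assumed, not from the paper) that penalizes abrupt speed changes under a speed cap:

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, v_max = 8, 15.0   # control points along the taxi route; speed cap (m/s), both assumed

def fitness(v):
    # Placeholder objective: smoothness penalty plus a taxi-time proxy.
    return np.sum(np.diff(v) ** 2) + np.sum(v_max - v)

n_particles, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
x = rng.uniform(1.0, v_max, (n_particles, n_points))   # candidate speed allocations
vel = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + vel, 0.5, v_max)
    f = np.array([fitness(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(np.round(gbest, 2))   # best speed allocation found
```
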
482 Tibyan Automated Arabic Correction Using Machine-Learning in Detecting Syntactical Mistakes

Authors: Ashwag O. Maghraby, Nida N. Khan, Hosnia A. Ahmed, Ghufran N. Brohi, Hind F. Assouli, Jawaher S. Melibari

Abstract:

The Arabic language is one of the most important languages. Learning it matters to many people around the world because of its religious and economic importance, and the real challenge lies in practicing it without grammatical or syntactical mistakes. This research focused on detecting and correcting syntactic mistakes in Arabic according to their position in the sentence, and concentrated on two of the main syntactical rules in Arabic: the dual and the plural. The system analyzes each sentence in the text, using the Stanford CoreNLP morphological analyzer and a machine-learning approach, in order to detect the syntactical mistakes and then correct them. A prototype of the proposed system was implemented and evaluated. It uses the support vector machine (SVM) algorithm to detect Arabic grammatical errors and corrects them using a rule-based approach. The prototype system achieved an accuracy of 81%. In general, it shows a set of useful grammatical suggestions that the user may forget about while writing, due to lack of familiarity with grammar or as a result of the speed of writing, such as alerting the user when using a plural term to indicate one person.

Keywords: Arabic Language acquisition and learning, natural language processing, morphological analyzer, part-of-speech.

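The feature pipeline is not published in the abstract; purely to illustrate the SVM detection stage, the sketch below trains a scikit-learn classifier on toy agreement features standing in for the morphological-analyzer output (feature names and data are invented):

```python
from sklearn.svm import SVC
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline

# Toy stand-ins for morphological features: subject number vs. verb number.
X = [{"subj": "dual", "verb": "dual"}, {"subj": "dual", "verb": "plural"},
     {"subj": "plural", "verb": "plural"}, {"subj": "plural", "verb": "singular"}]
y = [0, 1, 0, 1]  # 1 = syntactical mistake (number disagreement)

clf = make_pipeline(DictVectorizer(), SVC(kernel="linear"))
clf.fit(X, y)
print(clf.predict([{"subj": "dual", "verb": "singular"}]))  # flags the disagreement
```
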
481 An Archetype to Sustain Knowledge Management Systems through Intranet

Authors: B. T. Sayed, Nafaâ Jabeur, M. Aref

Abstract:

The creation and maintenance of knowledge management systems has been recognized as an important research area. Consequently, the lack of accurate results from knowledge management systems limits organizations in applying their knowledge management processes. This leads to a failure in getting the right information to the right people at the right time, followed by a deficiency in decision-making processes. An intranet offers a powerful tool for communication and collaboration, for presenting data and information, and the means to create and share knowledge, all in one easily accessible place. This paper proposes an archetype describing how a knowledge management system, with the support of intranet capabilities, could greatly increase the accuracy of capturing, storing and retrieving knowledge-based processes, thereby increasing the efficiency of the system. The system requires a critical mass of usage by the users for the intranet to function as a knowledge management system. This prototype would lead to the design of an application that would impose the creation and maintenance of an effective knowledge management system through an intranet. The aim of this paper is to introduce an effective system to handle the capture, storage and distribution of knowledge in a form that avoids the failures present in most existing systems. The methodology used in the system requires all the employees in the organization to contribute to the maximum to bring the system to a successful arena. The system is still in its initial stage, and the authors are in the process of practically implementing the ideas mentioned here to produce satisfactory results.

Keywords: Knowledge Management Systems, Intranet, Methodology.

480 On The Analysis of a Compound Neural Network for Detecting Atrio Ventricular Heart Block (AVB) in an ECG Signal

Authors: Salama Meghriche, Amer Draa, Mohammed Boulemden

Abstract:

Heart failure is the most common cause of death nowadays, but if medical help is given promptly, the patient's life may be saved in many cases. Numerous heart diseases can be detected by analyzing electrocardiograms (ECG). Artificial neural networks (ANN) are computer-based expert systems that have proved to be useful in pattern recognition tasks. ANN can be used in different phases of the decision-making process, from classification to diagnostic procedures. This work comprises a review followed by a novel method. The purpose of the review is to assess the evidence of healthcare benefits involving the application of artificial neural networks to the clinical functions of diagnosis, prognosis and survival analysis of ECG signals. The developed method is based on a compound neural network (CNN) that classifies ECGs as normal or carrying an atrioventricular heart block (AVB). This method uses three different feed-forward multilayer neural networks. A single output unit encodes the probability of AVB occurrence. A value between 0 and 0.1 is the desired output for a normal ECG; a value between 0.1 and 1 indicates an occurrence of an AVB. The results show that this compound network performs well in detecting AVBs, with a sensitivity of 90.7% and a specificity of 86.05%. The accuracy is 87.9%.

Keywords: Artificial neural networks, Electrocardiogram (ECG), Feed-forward multilayer neural network, Medical diagnosis, Pattern recognition, Signal processing.

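The decision rule at the output unit can be stated directly in code. The sketch below reproduces just that final thresholding stage, using the intervals given in the abstract (the three feed-forward networks themselves are not reproduced):

```python
def classify_ecg(network_output: float) -> str:
    """Decision rule from the abstract: a single output unit in [0, 1]."""
    if not 0.0 <= network_output <= 1.0:
        raise ValueError("output unit must lie in [0, 1]")
    # Below 0.1 is the desired output for a normal ECG; above it infers an AVB.
    return "normal ECG" if network_output <= 0.1 else "AVB detected"

print(classify_ecg(0.04))  # normal ECG
print(classify_ecg(0.83))  # AVB detected
```
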
479 Leveraging Hyperledger Iroha for the Issuance and Verification of Higher-Education Certificates

Authors: Vasiliki Vlachou, Christos Kontzinos, Ourania Markaki, Panagiotis Kokkinakos, Vagelis Karakolis, John Psarras

Abstract:

Higher education is resisting the pull of technology, especially as this concerns the issuance and verification of degrees and certificates. It is widely known that education certificates are largely produced in paper form, making them vulnerable to damage, while holders of such certificates are dependent on the universities and other issuing organisations. QualiChain is an EU Horizon 2020 (H2020) research project aiming to transform and revolutionise the domain of public education and its ties with the job market by leveraging blockchain, analytics and decision support to develop a platform for the verification and sharing of education certificates. Blockchain plays an integral part in the QualiChain solution in providing a trustworthy environment to store, share and manage such accreditations. In the context of this paper, three prominent blockchain platforms (Ethereum, Hyperledger Fabric, Hyperledger Iroha) were considered as a means of experimentation for creating a system with the basic functionalities needed for trustworthy degree verification. The methodology and respective system developed and presented in this paper used Hyperledger Iroha and proved that this specific platform can be used to easily develop decentralized applications. Future papers will attempt to experiment further with other blockchain platforms and assess which has the best potential.

Keywords: Blockchain, degree verification, higher education certificates, Hyperledger Iroha.

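Independent of the chosen platform, the core of certificate verification on a ledger is: hash the certificate off-chain at issuance, anchor the digest on-chain, and re-hash at verification time. A platform-agnostic sketch follows, with the ledger mocked as a dictionary rather than the actual Hyperledger Iroha client API:

```python
import hashlib, json

ledger = {}  # stands in for on-chain storage keyed by certificate id

def issue(cert: dict) -> str:
    digest = hashlib.sha256(json.dumps(cert, sort_keys=True).encode()).hexdigest()
    ledger[cert["cert_id"]] = digest          # the transaction the issuer submits
    return digest

def verify(cert: dict) -> bool:
    digest = hashlib.sha256(json.dumps(cert, sort_keys=True).encode()).hexdigest()
    return ledger.get(cert["cert_id"]) == digest

cert = {"cert_id": "2020-0042", "holder": "A. Student", "degree": "MSc", "grade": "8.5"}
issue(cert)
print(verify(cert))            # True: document matches the anchored digest
cert["grade"] = "9.5"          # tampered copy
print(verify(cert))            # False
```
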
478 Enhancing the Performance of H.264/AVC in Adaptive Group of Pictures Mode Using Octagon and Square Search Pattern

Authors: S. Sowmyayani, P. Arockia Jansi Rani

Abstract:

This paper integrates the Octagon and Square Search pattern (OCTSS) motion estimation algorithm into the H.264/AVC (Advanced Video Coding) video codec in Adaptive Group of Pictures (AGOP) mode. The AGOP structure is computed based on scene changes in the video sequence. The octagon and square search pattern block-based motion estimation method is implemented in the inter-prediction process of H.264/AVC. These two methods reduce the bit rate and the computational complexity, respectively, while maintaining the quality of the video sequence. Experiments were conducted on different types of video sequences. The results substantially prove that the bit rate, computation time and PSNR gain achieved by the proposed method are better than those of the existing H.264/AVC with fixed GOP and AGOP. With a marginal gain in quality of 0.28 dB and an average gain in bit rate of 132.87 kbps, the proposed method reduces the average computation time by 27.31 minutes when compared to the existing state-of-the-art H.264/AVC video codec.

Keywords: Block Distortion Measure, Block Matching Algorithms, H.264/AVC, Motion estimation, Search patterns, Shot cut detection.

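For orientation, the sketch below shows the generic pattern-based block-matching step that OCTSS builds on: evaluate the sum of absolute differences (SAD) at candidate offsets, recentre on the best one, and finish with a finer refinement. The pattern geometries here are illustrative, not the exact ones from the paper:

```python
import numpy as np

# Illustrative search patterns; the paper's exact octagon geometry may differ.
OCTAGON = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2), (1, 1), (1, -1), (-1, 1), (-1, -1)]
SQUARE = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def sad(block, frame, y, x):
    """Sum of absolute differences; inf if the candidate falls outside the frame."""
    h, w = block.shape
    if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
        return np.inf
    return int(np.abs(frame[y:y + h, x:x + w].astype(int) - block.astype(int)).sum())

def best_offset(block, frame, y0, x0, pattern):
    return min(((sad(block, frame, y0 + dy, x0 + dx), y0 + dy, x0 + dx)
                for dy, dx in pattern))[1:]

def octss(block, frame, y0, x0):
    while True:  # coarse octagon steps until the centre wins
        y1, x1 = best_offset(block, frame, y0, x0, OCTAGON)
        if (y1, x1) == (y0, x0):
            break
        y0, x0 = y1, x1
    return best_offset(block, frame, y0, x0, SQUARE)  # fine square refinement

rng = np.random.default_rng(2)
frame = rng.integers(0, 256, (64, 64), dtype=np.uint8)
block = frame[20:36, 22:38]           # a 16x16 block whose true position is (20, 22)
print(octss(block, frame, 16, 18))    # expected to converge near (20, 22)
```
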
477 DCGA Based-Transmission Network Expansion Planning Considering Network Adequacy

Authors: H. Shayeghi, M. Mahdavi, H. Haddadian

Abstract:

Transmission network expansion planning (TNEP) is an important component of power system planning, whose task is to minimize the network construction and operational cost while satisfying demand growth and the imposed technical and economic conditions. To date, various methods have been presented to solve the static transmission network expansion planning (STNEP) problem. In all of these methods, however, the adequacy rate of the lines has not been studied beyond the planning horizon, i.e. the point at which the expanded network loses its adequacy and needs to be expanded again. In this paper, in order to take the condition of the transmission lines after expansion into account from the line loading viewpoint, the adequacy of the transmission network is considered in solving the STNEP problem. To obtain the optimal network arrangement, a decimal codification genetic algorithm (DCGA) is used to minimize the network construction and operational cost. The effectiveness of the proposed idea is tested on Garver's six-bus network. The evaluation of the results reveals that the annual worth of network adequacy has a considerable effect on the network arrangement. In addition, the network obtained with the DCGA has a lower investment cost and a higher adequacy rate. Thus, the network satisfies the requirement of delivering electric power more safely and reliably to load centers.

Keywords: STNEP Problem, Network Adequacy, DCGA.

476 ADA Tool for Satellite InSAR-Based Ground Displacement Analysis: The Granada Region

Authors: M. Cuevas-González, O. Monserrat, A. Barra, C. Reyes-Carmona, R. M. Mateos, J. P. Galve, R. Sarro, M. Cantalejo, E. Peña, M. Martínez-Corbella, J. A. Luque, J. M. Azañón, A. Millares, M. Béjar, J. A. Navarro, L. Solari

Abstract:

Geohazard-prone areas require continuous monitoring to detect risks, understand the phenomena occurring in those regions and prevent disasters. Satellite interferometry (InSAR) has become a trustworthy technique for ground movement detection and monitoring in the last few years. InSAR-based techniques allow large areas to be processed, providing a high number of displacement measurements at low cost. However, the results provided by such techniques are usually not easy for non-experienced users to interpret, which hampers their use by decision makers. This work presents a set of tools developed in the framework of different projects (Momit, Safety, U-Geohaz, Riskcoast), and an example of their use in the Granada coastal area (Spain) is shown. The ADA (Active Displacement Areas) tool has been developed with the aim of easing the management, use and interpretation of InSAR-based results. It provides semi-automatic extraction of the most significant ADAs through the ADAFinder application. This tool aims to support the exploitation of the European Ground Motion Service (EU-GMS), which will offer reliable and systematic information on natural and anthropogenic ground motion phenomena across Europe.

Keywords: Ground displacements, InSAR, natural hazards, satellite imagery.

475 Illumination Invariant Face Recognition using Supervised and Unsupervised Learning Algorithms

Authors: Shashank N. Mathur, Anil K. Ahlawat, Virendra P. Vishwakarma

Abstract:

In this paper, a comparative study of the application of supervised and unsupervised learning algorithms to illumination invariant face recognition has been carried out. The supervised learning has been carried out using a bi-layered artificial neural network with one input, two hidden and one output layer. The gradient descent with momentum and adaptive learning rate backpropagation learning algorithm has been used to implement the supervised learning, in such a way that both the inputs and the corresponding outputs are provided during training of the network; thus there is an inherent clustering and optimized learning of weights, which provides efficient results. The unsupervised learning has been implemented using a modified counterpropagation network. The counterpropagation network involves a clustering process followed by application of the outstar rule to obtain the recognized face. The face recognition system has been developed for recognizing faces with varying illumination intensities, where the database images vary in lighting with respect to the angle of illumination with the horizontal and vertical planes. The supervised and unsupervised learning algorithms have been implemented and tested exhaustively, with and without application of histogram equalization, to get efficient results.

Keywords: Artificial Neural Networks, back propagation, Counterpropagation networks, face recognition, learning algorithms.

474 Gender Based Variability Time Series Complexity Analysis

Authors: Ramesh K. Sunkaria, Puneeta Marwaha

Abstract:

Nonlinear methods of heart rate variability (HRV) analysis are becoming more popular. It has been observed that complexity measures quantify the regularity and uncertainty of cardiovascular RR-interval time series. In the present work, SampEn has been evaluated in healthy normal sinus rhythm (NSR) male and female subjects for different data lengths and tolerance levels r. It is demonstrated that SampEn is small for higher values of tolerance r. Also, the SampEn value of the healthy female group is higher than that of the healthy male group for short data lengths, and with increasing data length the two groups overlap and become difficult to distinguish. SampEn gives inaccurate results by assigning the higher value to the female group, because male subjects have a more complex HRV pattern than female subjects. Therefore, this traditional algorithm exhibits higher complexity for healthy female subjects than for healthy male subjects, which is a misleading observation. This may be due to the fact that SampEn does not account for the multiple time scales inherent in physiologic time series, and the hidden spatial and temporal fluctuations remain unexplored.

Keywords: Heart rate variability, normal sinus rhythm group, RR interval time series, sample entropy.

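For reference, a direct (unoptimized) implementation of SampEn follows; the choices m = 2 and r = 0.2 times the series standard deviation are common conventions, not values taken from the paper:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) of a 1-D series; r defaults to 0.2 * std, a common convention."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def count_matches(mm):
        # Count template pairs whose Chebyshev distance is within tolerance r.
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B)

rr = np.random.default_rng(3).normal(0.8, 0.05, 500)  # synthetic RR intervals (s)
print(sample_entropy(rr))
```
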
473 Palmprint Recognition by Wavelet Transform with Competitive Index and PCA

Authors: Deepti Tamrakar, Pritee Khanna

Abstract:

This manuscript presents palmprint recognition with high accuracy by combining different texture extraction approaches. The region of interest (ROI) is decomposed into different frequency-time sub-bands by wavelet transform up to two levels, and only the approximation image at the second level is selected, which is known as the Approximate Image ROI (AIROI). This AIROI carries the information of the principal lines of the palm. The Competitive Index is used as the palmprint feature: six Gabor filters of different orientations are convolved with the palmprint image to extract orientation information, and a winner-take-all strategy is used to select the dominant orientation for each pixel, which is known as the Competitive Index. Further, PCA is applied to select highly uncorrelated Competitive Index features, to reduce the dimensions of the feature vector, and to project the features onto the eigenspace. The similarity of two palmprints is measured by the Euclidean distance metric. The algorithm is tested on the Hong Kong PolyU palmprint database. AIROIs from different wavelet filter families are also tested with the Competitive Index and PCA. The AIROI of the db7 wavelet filter achieves an Equal Error Rate (EER) of 0.0152% and a Genuine Acceptance Rate (GAR) of 99.67% on the Hong Kong PolyU palm database.

Keywords: DWT, EER, Euclidean Distance, Gabor filter, PCA, ROI.

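The competitive-index step is compact enough to sketch. The code below uses OpenCV's Gabor kernel generator as a stand-in for the paper's filter bank (six orientations, winner-take-all per pixel); the kernel parameters and file name are assumptions:

```python
import numpy as np
import cv2

def competitive_index(roi: np.ndarray) -> np.ndarray:
    """Winner-take-all orientation index per pixel over six Gabor orientations."""
    responses = []
    for k in range(6):
        theta = k * np.pi / 6  # 0, 30, ..., 150 degrees
        kern = cv2.getGaborKernel((17, 17), sigma=4.0, theta=theta,
                                  lambd=8.0, gamma=0.5, psi=0.0)
        responses.append(cv2.filter2D(roi.astype(np.float32), cv2.CV_32F, kern))
    # Palm lines are dark, so the dominant orientation is the most negative response.
    return np.argmin(np.stack(responses), axis=0)

roi = cv2.imread("palm_roi.png", cv2.IMREAD_GRAYSCALE)  # hypothetical AIROI image
if roi is not None:
    idx = competitive_index(roi)
    print(idx.shape, idx.min(), idx.max())  # per-pixel indices in 0..5
```
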
472 Modelling the States of Public Client Participation in Public Private Partnership Arrangements

Authors: Eisa A. Alsafran, Francis T. Edum-Fotwe, Wayne E. Lord

Abstract:

The degree to which a public client actively participates in Public Private Partnership (PPP) schemes is seen as a determinant of the success of the arrangement and, in particular, of efficiency in the delivery of the assets of any infrastructure development. Asset delivery is often an early barometer for judging the overall performance of the PPP. Currently, there are no defined descriptors for the degree of such participation. The lack of defined descriptors makes the association between the degree of participation and the efficiency of asset delivery difficult to establish. This is particularly so if an optimum effect is desired. In addition, such an association is important for the strategic decision to embark on any PPP initiative. This paper presents a conceptual model of the different levels of participation that characterise PPP schemes. The modelling was achieved by a systematic review of reported sources that address essential aspects and structures of PPP schemes, published from 2001 to 2015. As a precursor to the modelling, the common areas of Public Client Participation (PCP) were investigated. Equity and risk emerged as the two dominant factors in the common areas of PCP and were therefore adopted to form the foundation of the modelling. The resultant conceptual model defines the different states of combined PCP. The defined states provide a more rational basis for establishing how the degree of PCP affects the efficiency of asset delivery in PPP schemes.

Keywords: Asset delivery, infrastructure development, public private partnership, public client participation.

471 Cell Phone: A Vital Clue

Authors: Meenakshi Mahajan, Arun Sharma, Navendu Sharma

Abstract:

The increasing use of the cell phone as a medium of human interaction is playing a vital role in solving the riddles of crime as well. A young girl went missing from her home late in the evening in August 2008, when her enraged relatives and villagers physically assaulted and chased her fiancé, who often frequented her home. Two years later, her mother lodged a complaint against the relatives and the villagers, alleging that after abduction her daughter had either been sold or killed, as she had failed to trace her. On investigation, a rusted cell phone with a partially visible IMEI number, clothes, bangles, a human skeleton, etc., recovered from an abandoned well in May 2011, were examined in the lab. All hopes were pinned on the identity of the cell phone, the only linking evidence to fix the scene of occurrence, supported by the call detail record (CDR), and to dispel doubts about the mode of the sudden disappearance or death, as DNA technology did not help in establishing the identity of the deceased. The conventional scientific methods were used without success, and the International Mobile Equipment Identity (IMEI) number of the cell phone could be generated by using statistical analysis followed by online verification.

Keywords: Call detail record, Luhn algorithm, stereomicroscope.

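The Luhn check named in the keywords is what makes a partially visible IMEI recoverable: the 15th digit is a checksum over the first 14, so candidate completions of unreadable digits can be screened before online verification. A sketch, using the textbook example IMEI 490154203237518 with one digit masked:

```python
def luhn_check_digit(digits14: str) -> int:
    """Check digit that makes a 14-digit IMEI body pass the Luhn test."""
    total = 0
    for i, ch in enumerate(digits14):
        d = int(ch)
        if i % 2 == 1:       # double every second digit of the body
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def candidates(partial: str):
    """Enumerate completions of a partial 15-digit IMEI; '?' marks unreadable digits."""
    missing = partial.count("?")
    for n in range(10 ** missing):
        filled = partial
        for d in str(n).zfill(missing):
            filled = filled.replace("?", d, 1)
        if luhn_check_digit(filled[:14]) == int(filled[14]):
            yield filled

for imei in candidates("4901542032375?8"):
    print(imei)   # prints the unique Luhn-consistent completion: 490154203237518
```
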
470 An Enhanced SAR-Based Tsunami Detection System

Authors: Jean-Pierre Dubois, Jihad S. Daba, H. Karam, J. Abdallah

Abstract:

Tsunami early detection and warning systems have proved to be of the utmost importance, especially after the destructive tsunami that hit Japan in March 2011. Such systems are crucial to inform the authorities of any risk of a tsunami and of the degree of its danger, in order to make the right decisions and notify the public of the actions they need to take to save their lives. The purpose of this research is to enhance existing tsunami detection and warning systems. We first propose an automated and miniaturized model of an early tsunami detection and warning system. The operation of the tsunami warning system is simulated using the MATLAB Data Acquisition Toolbox, with measurements acquired from specified internet pages owing to the lack of the required real-life seismic and hydrologic sensors, and a graphical user interface is built for the system. In the second phase of this work, we implement various satellite image filtering schemes to enhance the acquired synthetic aperture radar images of the tsunami-affected region, which are masked by speckle noise. This enables us to conduct a post-tsunami damage extent study and calculate the percentage of damage. We conclude by proposing improvements to the telecommunication infrastructure of existing tsunami warning systems through a migration to IP-based networks and fiber optic links.

Keywords: Detection, GIS, GSN, GTS, GPS, speckle noise, synthetic aperture radar, tsunami, Wiener filter.

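Of the filtering schemes mentioned, the Wiener filter is standard enough to show directly. The sketch below applies SciPy's adaptive Wiener filter to a synthetic image with multiplicative speckle (synthetic data, not the paper's SAR scenes):

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(4)
clean = np.full((128, 128), 0.2)
clean[40:90, 30:100] = 1.0                       # synthetic bright region
speckle = rng.gamma(shape=1.0, scale=1.0, size=clean.shape)
noisy = clean * speckle                          # multiplicative speckle model

filtered = wiener(noisy, mysize=5)               # adaptive Wiener, 5x5 local window

mse = lambda img: float(np.mean((img - clean) ** 2))
print(f"MSE before: {mse(noisy):.4f}  after: {mse(filtered):.4f}")
```
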
469 Directional Drilling Optimization by Non-Rotating Stabilizer

Authors: Eisa Noveiri, Adel Taheri Nia

Abstract:

The Non-Rotating Adjustable Stabilizer / Directional Solution (NAS/DS) imitates a mechanical process or object in a directional drilling operation, responding mathematically and graphically to data and decisions so that the best conditions can be chosen relative to the previous mode. The NAS/DS Auto Guide rotary steerable tool is undergoing final field trials. The point-the-bit tool can use any bit, work at any rotating speed, work with any MWD/LWD system, and there is no pressure drop through the tool. It is a fully closed-loop system that automatically maintains a specified curvature rate. The Non-Rotating Adjustable Stabilizer (NAS) can control the curvature rate by exact positioning, run with the optimum bit, use the most effective weight on bit (WOB) and rotary speed (RPM), and apply all of the available hydraulic energy to the bit. The directional simulator allows the size of the curvature rate performance errors of the NAS tool and the magnitude of the random errors in the survey measurements, called the Directional Solution (DS), to be specified. The combination of these technologies (NAS/DS) will provide smoother boreholes, reduced drilling time, reduced drilling cost and incredible targeting precision. The simulator controls the curvature rate by precisely adjusting the radial extension of the stabilizer blades on a near-bit non-rotating stabilizer, and the control process corrects for the secondary effects caused by formation characteristics, bit and tool wear, and manufacturing tolerances.

Keywords: Non-rotating adjustable stabilizer, simulator, directional drilling, optimization, oil well drilling.

468 Optimal Opportunistic Maintenance Policy for a Two-Unit System

Authors: Nooshin Salari, Viliam Makis, Jane Doe

Abstract:

This paper presents a maintenance policy for a system consisting of two units. Unit 1 gradually deteriorates and is subject to soft failure. Unit 2 has a general lifetime distribution and is subject to hard failure. The condition of unit 1 is monitored periodically, and the unit is considered failed when its deterioration level reaches or exceeds a critical level N. When unit 2 fails, the system is considered failed, and unit 2 is correctively replaced by the next inspection epoch. Unit 1 or unit 2 is preventively replaced when the deterioration level of unit 1 or the age of unit 2 exceeds the related preventive maintenance (PM) level. At the time of corrective or preventive replacement of unit 2, there is an opportunity to replace unit 1 if its deterioration level has reached the opportunistic maintenance (OM) level. If unit 2 fails in an inspection interval, the system stops operating even though unit 1 has not failed. A mathematical model is derived to find the preventive and opportunistic replacement levels for unit 1 and the preventive replacement age for unit 2 that minimize the long-run expected average cost per unit time. The problem is formulated and solved in the semi-Markov decision process (SMDP) framework. A numerical example is provided to illustrate the performance of the proposed model, and a comparison of the proposed model with an optimal policy without an opportunistic maintenance level for unit 1 is carried out.

Keywords: Condition-based maintenance, opportunistic maintenance, preventive maintenance, two-unit system.

467 Investigation of Chord Protocol in Peer to Peer-Wireless Mesh Network with Mobility

Authors: P. Prasanna Murali Krishna, M. V. Subramanyam, K. Satya Prasad

Abstract:

File sharing in networks is generally achieved using peer-to-peer (P2P) applications. Structured P2P approaches are widely used in ad hoc networks due to their distributed and scalable nature. Efficient mechanisms are required to handle the huge amount of data distributed to all peers. The intrinsic characteristics of a P2P system make content distribution easier than in a client-server architecture. All the nodes in a P2P network act as both client and server; thus, distributing data takes less time than with the client-server method. Chord is a resource-routing protocol in which nodes and data items are structured into a one-dimensional ring. The structured lookup algorithm of Chord is advantageous for distributed P2P networking applications. However, while the structured approach improves lookup performance in a high-bandwidth wired network, it can contribute to unnecessary overhead in overlay networks, leading to degradation of network performance. In this paper, the performance of the existing Chord protocol on a Wireless Mesh Network (WMN) with static and dynamic nodes is investigated.

Keywords: Wireless mesh network (WMN), structured P2P networks, peer to peer resource sharing, CHORD protocol, DHT.

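As background for the protocol under test, a compact single-process sketch of Chord's core idea follows: node and key identifiers are hashed onto a ring of size 2^m, and each key is stored at its successor node (finger tables and network transport are omitted):

```python
import hashlib

M = 6                        # identifier bits; ring size 2**M = 64
RING = 2 ** M

def chord_id(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % RING

class ChordRing:
    """Toy in-memory model: successor lookup over a sorted node list."""
    def __init__(self, node_names):
        self.nodes = sorted(chord_id(n) for n in node_names)

    def successor(self, key_id: int) -> int:
        for nid in self.nodes:
            if nid >= key_id:
                return nid
        return self.nodes[0]   # wrap around the ring

ring = ChordRing([f"peer{i}" for i in range(8)])
key = chord_id("video_chunk_17.dat")
print(f"key {key} is stored at node {ring.successor(key)}")
```
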
466 Effectual Reversible Watermarking Method for Hide the Patient Details in Brain Tumor Image

Authors: K. Amudha, C. Nelson Kennedy Babu, S. Balu

Abstract:

The security of medical images and their related data is a major research area to be concentrated on in today's era. Security in a medical image means that the physician may hide patient-related data in the medical image and transfer it safely to a defined location using reversible watermarking. Many reversible watermarking methods have been proposed over the decade. This paper enhances the security level of brain tumor images used to hide the patient's details, which have to be shared with other physicians for their suggestions. The details or information are hidden in the non-ROI area of the image by using a block cipher algorithm. The block cipher uses different keys to extract the details, making it difficult for an intruder to detect all the keys and to spot the details, which is the key advantage of this method. The ROI is the tumor area, and the non-ROI is the rest of the image. The non-ROI should not be spoiled in any case, and the details in the non-ROI should be extracted correctly. The reversible watermarking method proposed in this paper performs well compared to existing methods in the process of extracting the original image and providing information security.

Keywords: Brain tumor images, Block Cipher, Reversible watermarking, ROI.

465 Governance, Risk Management, and Compliance Factors Influencing the Adoption of Cloud Computing in Australia

Authors: Tim Nedyalkov

Abstract:

A business decision to move to the cloud brings fundamental changes in how an organization develops and delivers its information technology solutions. The accelerated pace of digital transformation across businesses and government agencies increases the reliance on cloud-based services. Collecting, managing, and retaining large amounts of data in cloud environments makes information security and data privacy protection essential. It becomes even more important to understand what key factors drive successful cloud adoption following the commencement of the Privacy Amendment (Notifiable Data Breaches) Act 2017 in Australia, as the regulatory changes impact many organizations and industries. This quantitative correlational research investigated the governance, risk management, and compliance factors contributing to cloud security success. The factors influence the adoption of cloud computing within an organizational context after the commencement of the NDB scheme. The results and findings demonstrated that corporate information security policies, data storage location, management understanding of data governance responsibilities, and regular compliance assessments are the factors influencing cloud computing adoption. The research has implications for organizations, future researchers, practitioners, policymakers, and cloud computing providers to meet the rapidly changing regulatory and compliance requirements.

Keywords: Cloud compliance, cloud security, cloud security governance, data governance, privacy protection.

464 A Fast Neural Algorithm for Serial Code Detection in a Stream of Sequential Data

Authors: Hazem M. El-Bakry, Qiangfu Zhao

Abstract:

In recent years, fast neural networks for object/face detection have been introduced, based on cross correlation in the frequency domain between the input matrix and the hidden weights of the neural networks. In our previous papers [3,4], fast neural networks for certain code detection were introduced. It was proved in [10] that for fast neural networks to give the same correct results as conventional neural networks, both the weights of the neural networks and the input matrix must be symmetric. This condition made those fast neural networks slower than conventional neural networks. Another symmetric form for the input matrix was introduced in [1-9] to speed up the operation of these fast neural networks. Here, corrections to the cross correlation equations (given in [13,15,16]) to compensate for the symmetry condition are presented. After these corrections, it is proved mathematically that the number of computation steps required by fast neural networks is less than that needed by classical neural networks. Furthermore, there is no need to convert the input data into symmetric form. Moreover, the new idea is applied to increase the speed of neural networks when processing complex values. Simulation results after these corrections using MATLAB confirm the theoretical computations.

Keywords: Fast Code/Data Detection, Neural Networks, Cross Correlation, real/complex values.

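The speed argument rests on computing cross-correlation in the frequency domain instead of sliding the weights over the input. A minimal 2-D sketch of that equivalence follows (generic, not the corrected equations of the paper):

```python
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(5)
x = rng.normal(size=(64, 64))       # input matrix
w = rng.normal(size=(64, 64))       # hidden-layer weights, same (padded) size

# Cross-correlation via the frequency domain: ifft( conj(FFT(w)) * FFT(x) ).
fast = np.real(ifft2(np.conj(fft2(w)) * fft2(x)))

# Direct (circular) cross-correlation at one shift, for verification.
dy, dx = 3, 5
direct = np.sum(w * np.roll(np.roll(x, -dy, axis=0), -dx, axis=1))
print(np.allclose(fast[dy, dx], direct))   # True
```
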
463 Inverse Heat Conduction Analysis of Cooling on Run Out Tables

Authors: M. S. Gadala, Khaled Ahmed, Elasadig Mahdi

Abstract:

In this paper, we introduce a gradient-based inverse solver to obtain the missing boundary conditions based on the readings of internal thermocouples. The results show that the method is very sensitive to measurement errors and becomes unstable when small time steps are used. Artificial neural networks are shown to be capable of capturing the whole thermal history on the run-out table, but are not very effective in restoring the detailed behavior of the boundary conditions. They also behave poorly in nonlinear cases and where the boundary condition profile is different. GA and PSO are more effective in finding a detailed representation of the time-varying boundary conditions, as well as in nonlinear cases; however, their convergence takes longer. A variation of the basic PSO, called CRPSO, showed the best performance among the three versions. PSO also proved to be effective in handling noisy data, especially when its performance parameters were tuned. An increase in the self-confidence parameter was found to be effective as well, as it increased the global search capabilities of the algorithm. RPSO was the most effective variation in dealing with noise, closely followed by CRPSO. The latter variation is recommended for inverse heat conduction problems, as it combines the efficiency and effectiveness required by these problems.

Keywords: Inverse Analysis, Function Specification, Neural Networks, Particle Swarm, Run Out Table.

462 A Markov Chain Model for Load-Balancing Based and Service Based RAT Selection Algorithms in Heterogeneous Networks

Authors: Abdallah Al Sabbagh

Abstract:

The Next Generation Wireless Network (NGWN) is expected to be a heterogeneous network which integrates all the different Radio Access Technologies (RATs) through a common platform. A major challenge is how to allocate users to the most suitable RAT for them. An optimized solution can maximize the efficient use of radio resources, achieve better performance for service providers and provide Quality of Service (QoS) with low costs to users. Currently, Radio Resource Management (RRM) is implemented efficiently for the RAT for which it was developed; however, it is not suitable for a heterogeneous network. Common RRM (CRRM) was proposed to manage radio resource utilization in the heterogeneous network. This paper presents a user-level Markov model for three co-located RAT networks. The load-balancing based and service based CRRM algorithms have been studied using the presented Markov model. A comparison of the performance of the load-balancing based and service based CRRM algorithms is made in terms of traffic distribution, new call blocking probability, vertical handover (VHO) call dropping probability, and throughput.

Keywords: Heterogeneous Wireless Network, Markov chain model, load-balancing based and service based algorithm, CRRM algorithms, Beyond 3G network.

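At the user level such models reduce to finding the stationary distribution of a Markov chain. The sketch below uses an arbitrary 3-state transition matrix (not the paper's model of three co-located RATs) to show the standard computation:

```python
import numpy as np

# Hypothetical discrete-time transition matrix over three RATs (rows sum to 1).
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.75, 0.15],
              [0.05, 0.20, 0.75]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()
print(np.round(pi, 4))          # long-run share of users on each RAT
print(np.allclose(pi @ P, pi))  # True: pi is stationary
```
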
461 A Deep-Learning Based Prediction of Pancreatic Adenocarcinoma with Electronic Health Records from the State of Maine

Authors: Xiaodong Li, Peng Gao, Chao-Jung Huang, Shiying Hao, Xuefeng B. Ling, Yongxia Han, Yaqi Zhang, Le Zheng, Chengyin Ye, Modi Liu, Minjie Xia, Changlin Fu, Bo Jin, Karl G. Sylvester, Eric Widen

Abstract:

Predicting the risk of pancreatic adenocarcinoma (PA) in advance can benefit the quality of care and potentially reduce population mortality and morbidity. The aim of this study was to develop and prospectively validate a risk prediction model to identify patients at risk of new incident PA as early as 3 months before its onset in a statewide, general population in Maine. The PA prediction model was developed using deep neural networks, a deep learning algorithm, with a 2-year electronic health record (EHR) cohort. Prospective results showed that our model identified 54.35% of all inpatient episodes of PA, and 91.20% of all PA cases that required subsequent chemoradiotherapy, with a lead time of up to 3 months and a true alert rate of 67.62%. The risk assessment tool has attained an improved discriminative ability. It can be immediately deployed to the health system to provide automatic early warnings to adults at risk of PA. It also has the potential to identify personalized risk factors to facilitate customized PA interventions.

Keywords: Cancer prediction, deep learning, electronic health records, pancreatic adenocarcinoma.

460 Information Requirements for Vessel Traffic Service Operations

Authors: Fan Li, Chun-Hsien Chen, Li Pheng Khoo

Abstract:

Operators of a vessel traffic service (VTS) center provide three different types of services to vessels: information service, navigational assistance and traffic organization. To provide these services, operators monitor vessel traffic through a computer interface and provide navigational advice based on information integrated from multiple sources, including the automatic identification system (AIS), the radar system, and the closed-circuit television (CCTV) system. This information is therefore crucial in VTS operation. However, what information the VTS operator actually needs to offer services efficiently and properly is unclear. The aim of this study is to investigate the information requirements for VTS operation. To achieve this aim, field observation was carried out to elicit the information requirements for VTS operation. The study revealed that the most frequent and important tasks were handling arrival vessel reports, potential conflict control and abeam vessel reports. Current location and vessel name were used in all tasks. Hazardous cargo information was particularly required when operators handled arrival vessel reports. The speed, the course, and the distance between two or several vessels were only used in potential conflict control. The information requirements identified in this study can be utilized in designing a human-computer interface that takes into consideration what information should be displayed and when, and might further be used to build the foundation of a decision support system for VTS.

Keywords: Vessel traffic service, information requirements, hierarchy task analysis, field observation.

459 Solving Part Type Selection and Loading Problem in Flexible Manufacturing System Using Real Coded Genetic Algorithms – Part I: Modeling

Authors: Wayan F. Mahmudy, Romeo M. Marian, Lee H. S. Luong

Abstract:

This paper and its companion (Part 2) deal with the modeling and optimization of two NP-hard problems in the production planning of flexible manufacturing systems (FMS): the part type selection problem and the loading problem. The two problems are strongly related and heavily influence the system's efficiency and productivity. The complexity of the problems is greater when operational flexibilities, such as the possibility of an operation being processed on alternative machines with alternative tools, are considered. These problems have been modeled and solved simultaneously using real-coded genetic algorithms (RCGA), which use an array of real numbers as the chromosome representation. These real numbers can be converted into the part type sequence and the machines that are used to process the part types. This first part of the paper focuses on the modeling of the problems and discusses how the novel chromosome representation can be applied to solve them. The second part will discuss the effectiveness of the RCGA in solving various test bed problems.

Keywords: Flexible manufacturing system, production planning, part type selection problem, loading problem, real-coded genetic algorithm

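The chromosome-to-schedule mapping described here is essentially random-key decoding; the sketch below (toy sizes, not the paper's full model) turns an array of reals into a part-type sequence plus machine assignments:

```python
import numpy as np

rng = np.random.default_rng(6)
n_parts = 5
machines_per_part = [2, 3, 2, 2, 3]      # alternative machines per part (toy data)

chromosome = rng.random(2 * n_parts)     # one sequencing gene + one machine gene per part

# First half: the sort order of the genes gives the part-type processing sequence.
sequence = np.argsort(chromosome[:n_parts])

# Second half: each real in [0, 1) is scaled to pick among the alternative machines.
machines = [int(chromosome[n_parts + p] * machines_per_part[p]) for p in range(n_parts)]

print("part sequence :", sequence.tolist())
print("machine choice:", machines)
```
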
458 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-

Authors: Nieto Bernal Wilson, Carmona Suarez Edgar

Abstract:

Organizations have structured and unstructured information in different formats, sources, and systems. Part of this information comes from ERP systems under OLTP processing that support the information system; however, at the OLAP processing level these organizations present some deficiencies. Part of the problem lies in the lack of interest in extracting knowledge from their data sources, as well as the absence of the operational capabilities to tackle this kind of project. Data warehouses and their applications are considered non-proprietary tools of great interest to business intelligence, since they are the repository basis for creating models or patterns (behavior of customers, suppliers, products, social networks and genomics) and facilitate corporate decision making and research. This paper presents a structured, simple methodology inspired by agile development models such as Scrum, XP and AUP. It also draws on object-relational and spatial data models, and on the baseline of data modeling under UML and Big Data; in this way it seeks to deliver an agile methodology for the development of data warehouses that is simple and easy to apply. The methodology naturally takes into account processes for information analysis, visualization and data mining, particularly for the generation of patterns and of models derived from the structured fact objects.

Keywords: Data warehouse, data model, big data, fact object, object-relational fact, data warehouse development process.
