Search results for: modified backtracking search algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7424

6614 Impact of Relaxing Incisions on Maxillofacial Growth Following Sommerlad–Furlow Modified Technique in Patients with Isolated Cleft Palate: A Preliminary Comparative Study

Authors: Sadam Elayah, Yang Li, Bing Shi

Abstract:

Background: The impact of relaxing incisions on maxillofacial growth during palatoplasty remains a topic of debate, and further research is needed to understand its effects fully. The current study is the first long-term study aiming to assess the maxillofacial growth of patients with isolated cleft palate following the Sommerlad-Furlow modified (S.F) technique and to estimate the impact of relaxing incisions on maxillofacial growth after this technique. Methods: A total of 85 participants were included. Fifty-five patients with non-syndromic isolated soft and hard cleft palate underwent primary palatoplasty with our technique: 30 patients received the Sommerlad-Furlow modified technique with a relaxing incision (S.F+RI group) and 25 received it without a relaxing incision (S.F-RI group), with no significant difference between the two groups in cleft type, cleft width, or age at repair. The other 30 participants were normal subjects with a skeletal class I pattern (C group); this control group was matched with the study groups in number, age, and sex. All study variables were measured using stable landmarks, comprising 12 linear and 10 angular variables. Results: The mean ages at collection of cephalograms were 6.03±0.80 years in the S.F+RI group, 5.96±0.76 in the S.F-RI group, and 5.91±0.87 in the C group. Regarding the cranial base, no statistically significant differences were found between the three groups in S-N and S-N-Ba. The S.F+R.I group had a significantly shorter S-Ba than the S.F-R.I and C groups (P=0.01), whereas there was no statistically significant difference between the S.F-R.I and C groups (P=0.80). Regarding the skeletal maxilla, there was no significant difference between the S.F+R.I and S.F-R.I groups in the linear measurements (N-ANS, S-PM and SN-PP) except Co-A: the S.F+R.I group had a significantly shorter Co-A than the S.F-R.I and C groups (P<0.01). For the angular measurement, the S.F+R.I group had a significantly smaller SNA angle than the S.F-R.I and C groups (P<0.01). Regarding the mandible, there were no statistically significant differences in any linear or angular mandibular measurements between the S.F+R.I and S.F-R.I groups. Regarding the intermaxillary relation, the S.F+R.I group differed significantly in Co-Gn - Co-A and ANB from the S.F-R.I and C groups (P<0.01), while there was no statistically significant difference in PP-MP among the three groups. Conclusion: As a preliminary report, the Sommerlad-Furlow modified technique without relaxing incisions achieved good maxillary positioning in the face and a satisfactory intermaxillary relationship compared with the technique with relaxing incisions.

Keywords: relaxing incisions, cleft palate, palatoplasty, maxillofacial growth

Procedia PDF Downloads 111
6613 Multithreading/Multiprocessing Simulation of The International Space Station Multibody System Using A Divide and Conquer Dynamics Formulation with Flexible Bodies

Authors: Luong A. Nguyen, Elihu Deneke, Thomas L. Harman

Abstract:

This paper describes a multibody dynamics algorithm formulated for parallel implementation on multiprocessor computing platforms using the divide-and-conquer approach. The system of interest is a general topology of rigid and elastic articulated bodies with or without loops. The algorithm is an extension of Featherstone’s divide and conquer approach to include the flexible-body dynamics formulation. The equations of motion, configured for the International Space Station (ISS) with its robotic manipulator arm as a system of articulated flexible bodies, are implemented in separate computer processors. The performance of this divide-and-conquer algorithm implementation in multiple processors is compared with an existing method implemented on a single processor.

Keywords: multibody dynamics, multiple processors, multithreading, divide-and-conquer algorithm, computational efficiency, flexible body dynamics

Procedia PDF Downloads 337
6612 Efficient Fuzzy Classified Cryptographic Model for Intelligent Encryption Technique towards E-Banking XML Transactions

Authors: Maher Aburrous, Adel Khelifi, Manar Abu Talib

Abstract:

Transactions performed by financial institutions on a daily basis require XML encryption on a large scale. Encrypting large volumes of messages in full results in both performance and resource issues. In this paper, a novel approach is presented for securing financial XML transactions using classification data mining (DM) algorithms. Our strategy defines the complete process of classifying XML transactions by using a set of classification algorithms; the classified XML documents are then processed using element-wise encryption. Classification algorithms were used to identify the XML transaction rules and factors in order to classify the message content and fetch the important elements within it. We have implemented four classification algorithms to assign an importance-level value to each XML document. Classified content is processed using element-wise encryption for the selected parts with "High", "Medium" or "Low" importance-level values. Element-wise encryption is performed using the AES symmetric encryption algorithm and a proposed modified AES algorithm that overcomes the problem of computational overhead: the SubBytes and ShiftRows steps remain as in the original AES, while the MixColumns operation is replaced by a 128 permutation operation followed by the AddRoundKey operation. An implementation has been conducted using a data set fetched from an e-banking service to demonstrate system functionality and efficiency. Results from our implementation showed a clear improvement in the processing time of encrypting XML documents.
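
As an illustration of the element-wise encryption step, the sketch below encrypts only the XML elements that a toy classifier marks as "High" or "Medium" importance, using standard AES-128-CBC from the Python cryptography library. The element names, key handling, and classification table are hypothetical placeholders, not the paper's classifiers or its modified AES round.

```python
# A minimal sketch of element-wise XML encryption; the element names, key, and
# importance table are hypothetical, not taken from the paper.
import os
import xml.etree.ElementTree as ET
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

KEY = os.urandom(16)          # 128-bit AES key (demo only)

def encrypt_text(plaintext: str) -> bytes:
    """Standard AES-128-CBC encryption of one element's text content."""
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    data = padder.update(plaintext.encode()) + padder.finalize()
    enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
    return iv + enc.update(data) + enc.finalize()

def classify(tag: str) -> str:
    """Stand-in for the paper's DM classifiers: map a tag to an importance level."""
    table = {"account": "High", "amount": "High", "branch": "Medium"}
    return table.get(tag, "Low")

def protect(xml_string: str) -> str:
    root = ET.fromstring(xml_string)
    for elem in root.iter():
        if classify(elem.tag) in ("High", "Medium") and elem.text:
            elem.text = encrypt_text(elem.text).hex()   # encrypt selected elements only
    return ET.tostring(root, encoding="unicode")

print(protect("<txn><account>12345</account><amount>900</amount><memo>rent</memo></txn>"))
```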

Keywords: XML transaction, encryption, Advanced Encryption Standard (AES), XML classification, e-banking security, fuzzy classification, cryptography, intelligent encryption

Procedia PDF Downloads 411
6611 Wastewater Treatment by Modified Bentonite

Authors: Mecabih Zohra

Abstract:

Water is an essential element of many manufacturing processes that use large amounts of chemical substances, and industrial discharge is likely to contaminate the water returned to rivers. These contaminants can be high in suspended solids and chemical oxygen demand (COD). In this study, urban wastewater of Sidi Bel Abbes city (Algeria) was treated by adsorption onto modified bentonite from Magnia (Algeria), and batch experiments were conducted to investigate its equilibrium characteristics and kinetics. The purified bentonite was characterized by CEC, XRF, BET, FTIR, XRD, SEM and 27Al spectroscopy. The results show that the removal of suspended solids exceeds 98.47% and that of COD reaches 99.52%. The maximum COD sorption capacities (qm) calculated using the Langmuir model are 156.23, 64.47 and 17.19 mg/g, respectively, over a pH range of 4 to 9.

Keywords: adsorption, bentonite, COD, wastewater

Procedia PDF Downloads 85
6609 Optimized and Secured Digital Watermarking Using Fuzzy Entropy, Bezier Curve and Visual Cryptography

Authors: R. Rama Kishore, Sunesh

Abstract:

Recent developments in the use of the internet for different purposes create a great threat to the copyright protection of digital images. Digital watermarking can be used to address this problem. This paper presents a detailed review of the different watermarking techniques and the latest trends in the field of secure, robust and imperceptible watermarking. It also discusses the different optimization techniques used in the field of watermarking in order to improve the robustness and imperceptibility of the methods. Different measures for evaluating the performance of a watermarking algorithm are discussed. Finally, this paper proposes a watermarking algorithm using (2, 2) share visual cryptography and a Bezier curve based algorithm to improve the security of the watermark. The proposed method uses a fractional transformation to improve the robustness of the copyright protection, and the algorithm is optimized using fuzzy entropy for better results.

Keywords: digital watermarking, fractional transform, visual cryptography, Bezier curve, fuzzy entropy

Procedia PDF Downloads 366
6608 Modified Montgomery for RSA Cryptosystem

Authors: Rupali Verma, Maitreyee Dutta, Renu Vig

Abstract:

Encryption and decryption in RSA are done by modular exponentiation, which is achieved by repeated modular multiplication. Hence, the efficiency of modular multiplication directly determines the efficiency of the RSA cryptosystem. This paper designs a modified Montgomery modular multiplication in which the addition of operands is computed by a 4:2 compressor. The basic logic operations in the addition are partitioned over two iterations such that parallel computations are performed. This reduces the critical path delay of the proposed Montgomery design. The proposed design and RSA are implemented on Virtex 2 and Virtex 5 FPGAs. The two factors, partitioning and parallelism, have improved the frequency and throughput of the proposed design.
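
For readers unfamiliar with the underlying arithmetic, the sketch below is a plain software version of Montgomery multiplication used inside square-and-multiply exponentiation; it does not model the paper's hardware design, the 4:2 compressor, or the two-iteration partitioning.

```python
# A minimal software sketch of Montgomery reduction (REDC) and its use in
# modular exponentiation, the core operation of RSA.
def montgomery_setup(n: int):
    k = n.bit_length()
    r = 1 << k                        # R = 2^k > n; requires n odd so gcd(R, n) = 1
    n_prime = (-pow(n, -1, r)) % r    # n' = -n^{-1} mod R
    return r, k, n_prime

def redc(t: int, n: int, r: int, k: int, n_prime: int) -> int:
    """Montgomery reduction: returns t * R^{-1} mod n, avoiding division by n."""
    m = ((t & (r - 1)) * n_prime) & (r - 1)   # m = (t mod R) * n' mod R
    u = (t + m * n) >> k                      # exact division by R
    return u - n if u >= n else u

def mont_pow(base: int, exp: int, n: int) -> int:
    r, k, n_prime = montgomery_setup(n)
    x = (base % n) * r % n            # base in Montgomery form
    acc = r % n                       # 1 in Montgomery form
    while exp:
        if exp & 1:
            acc = redc(acc * x, n, r, k, n_prime)
        x = redc(x * x, n, r, k, n_prime)
        exp >>= 1
    return redc(acc, n, r, k, n_prime)  # convert the result back

assert mont_pow(7, 65537, 3233) == pow(7, 65537, 3233)
```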

Keywords: RSA, montgomery modular multiplication, 4:2 compressor, FPGA

Procedia PDF Downloads 414
6607 A Modified Open Posterior Approach for the Fixation of Posterior Cruciate Ligament Tibial Avulsion Fractures

Authors: Babak Mirzashahi, Arvin Najafi, Pejman Mansouri, Mahmoud Farzan

Abstract:

Background: The most effective treatment of posterior cruciate ligament (PCL) tears and the consequences of untreated PCL injuries remain controversial. Objectives: The aim of this study is to assess the outcomes of fixation of tibial posterior cruciate ligament (PCL) avulsion fractures via a modified technique. Patients and Methods: From January 2009 to March 2012, 45 cases of PCL tibial avulsion fracture were referred to our hospital and managed through a modified open posterior approach. The tibial PCL avulsion fractures were fixed by means of a lag screw and washer placed through this modified open posterior approach. Range of motion exercises began on the first postoperative day. Clinical stability, range of motion, gastrocnemius muscle strength, radiographic findings, and the patients' overall quality of life were analyzed at the final follow-up visit. Results: The average overall musculoskeletal functional evaluation score was 15 (range 3-35). All patients achieved union of their fracture and had clinically stable knees at the latest follow-up. The mean preoperative Lysholm score for 15 knees was 62 ± 8 (range, 50-75); the mean postoperative Lysholm score was 92 ± 7 (range, 75-101). A significant difference in Lysholm scores between the preoperative and final follow-up evaluations was found (P < .05). At the first-year follow-up, 42 (93%) patients showed a difference of less than 10 mm in thigh circumference between their injured and healthy knees. Conclusions: The management of displaced large PCL avulsion fractures with a cancellous lag screw and washer placed through the modified open posterior approach leads to satisfactory clinical, radiographic, and functional results, reduces operation time, and causes less blood loss. Level of evidence: IV.

Keywords: posterior cruciate ligament, tibial fracture, lysholm knee score, patient outcome assessment

Procedia PDF Downloads 301
6606 Water Purification By Novel Nanocomposite Membrane

Authors: E. S. Johal, M. S. Saini, M. K. Jha

Abstract:

Currently, 1.1 billion people are at risk due to a lack of clean water, and about 35% of people in the developing world die from water-related problems. To alleviate these problems, water purification technology requires new approaches for the effective management and conservation of water resources. Electrospun nanofibre membranes have potential for water purification due to their large surface area and good mechanical strength. In the present study, a PAMAM dendrimer composite nylon-6 nanofibre membrane was prepared by a crosslinking method using glutaraldehyde. Further, the efficacy of the modified membrane can be renewed by merely exposing the saturated membrane to a solution of acidic pH. The modified membrane can be used as an effective tool for water purification.

Keywords: dendrimer, nanofibers, nanocomposite membrane, water purification

Procedia PDF Downloads 356
6605 Two Stage Assembly Flowshop Scheduling Problem Minimizing Total Tardiness

Authors: Ali Allahverdi, Harun Aydilek, Asiye Aydilek

Abstract:

The two-stage assembly flowshop scheduling problem has many real-life applications. To the best of our knowledge, the two-stage assembly flowshop scheduling problem with a total tardiness performance measure and separate setup times has not been addressed so far; hence, it is addressed in this paper. Different dominance relations are developed and several algorithms are proposed. Extensive computational experiments are conducted to evaluate the proposed algorithms. The computational experiments show that one of the algorithms performs much better than the others. Moreover, the experiments show that the best performing algorithm also performs much better than the best existing algorithm for the case of zero setup times in the literature. Therefore, the proposed best performing algorithm can be used not only for problems with separate setup times but also for the case of zero setup times.
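
To make the objective function concrete, the sketch below evaluates the total tardiness of one job sequence in a two-stage assembly flowshop with two first-stage machines and a single assembly machine; the job data are invented, and setups are modelled as non-anticipatory for simplicity, which may differ from the paper's assumptions.

```python
# A minimal sketch evaluating one job sequence in a two-stage assembly flowshop
# (m parallel first-stage machines feeding one assembly machine) with separate
# setup times; all numbers below are hypothetical, not the paper's instances.
def total_tardiness(sequence, proc, setup, assembly_time, assembly_setup, due):
    """proc[j][i] / setup[j][i]: processing / setup time of job j on stage-1 machine i."""
    m = len(proc[sequence[0]])
    machine_free = [0.0] * m      # when each stage-1 machine becomes free
    assembly_free = 0.0           # when the assembly machine becomes free
    tardiness = 0.0
    for j in sequence:
        # stage 1: the m components of job j are produced in parallel
        for i in range(m):
            machine_free[i] += setup[j][i] + proc[j][i]
        ready = max(machine_free)                 # all components available
        # stage 2: assembly (non-anticipatory setup) starts when components
        # are ready and the assembly machine is free
        start = max(ready, assembly_free)
        assembly_free = start + assembly_setup[j] + assembly_time[j]
        tardiness += max(0.0, assembly_free - due[j])
    return tardiness

proc = {0: [4, 6], 1: [5, 3], 2: [2, 7]}
setup = {0: [1, 1], 1: [2, 1], 2: [1, 2]}
assembly_time = {0: 3, 1: 4, 2: 2}
assembly_setup = {0: 1, 1: 1, 2: 1}
due = {0: 12, 1: 20, 2: 25}
print(total_tardiness([0, 1, 2], proc, setup, assembly_time, assembly_setup, due))
```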

Keywords: scheduling, assembly flowshop, total tardiness, algorithm

Procedia PDF Downloads 344
6604 Phenomena-Based Approach for Automated Generation of Process Options and Process Models

Authors: Parminder Kaur Heer, Alexei Lapkin

Abstract:

Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may be able to attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The different levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is thus the generation of numerous phenomena-based process alternatives and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. For example, separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties or drawbacks of the current process or enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and hence screening is carried out to discard the meaningless ones. For example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function. Combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, which are formed by a list of phenomena for each function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example product purity, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical or biochemical process because of its generic nature.
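
The combination-and-screening step can be pictured with the small sketch below, which enumerates phenomena combinations, discards those violating a feasibility rule (phase change without energy transfer), and encodes the survivors as the (1, 0) vectors passed to model generation. The phenomena list and the single rule are illustrative assumptions, not the paper's knowledge base.

```python
# A minimal sketch of phenomena-combination generation and rule-based screening;
# the phenomena names and the single screening rule are illustrative assumptions.
from itertools import combinations

PHENOMENA = ["reaction", "vapour-liquid_equilibrium", "liquid-liquid_equilibrium",
             "phase_change", "energy_transfer", "mixing"]

def feasible(combo: frozenset) -> bool:
    # Example rule from the text: phase change requires co-presence of energy transfer.
    if "phase_change" in combo and "energy_transfer" not in combo:
        return False
    return True

def generate_options(max_size: int = 3):
    options = []
    for size in range(1, max_size + 1):
        for combo in combinations(PHENOMENA, size):
            if feasible(frozenset(combo)):
                options.append(combo)
    return options

def to_binary(combo) -> list:
    """Encode an option as the (1, 0) vector passed to the model generator."""
    return [1 if p in combo else 0 for p in PHENOMENA]

options = generate_options()
print(len(options), "feasible options, e.g.", options[0], to_binary(options[0]))
```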

Keywords: phenomena, process intensification, process models, process options

Procedia PDF Downloads 232
6603 Pion/Muon Identification in a Nuclear Emulsion Cloud Chamber Using Neural Networks

Authors: Kais Manai

Abstract:

The main part of this work focuses on the study of pion/muon separation at low energy using a nuclear Emulsion Cloud Chamber (ECC) made of lead and nuclear emulsion films. The work consists of two parts: a particle reconstruction algorithm and a neural network that assigns to each reconstructed particle the probability of being a muon or a pion. The pion/muon separation algorithm has been optimized using a detailed Monte Carlo simulation of the ECC and tested on real data. The algorithm achieves a 60% muon identification efficiency with a pion misidentification rate smaller than 3%.

Keywords: nuclear emulsion, particle identification, tracking, neural network

Procedia PDF Downloads 506
6602 Modified RSA in Mobile Communication

Authors: Nagaratna Rajur, J. D. Mallapur, Y. B. Kirankumar

Abstract:

The security of mobile communication is very different from that of the internet or telecommunication because of the poor user interface and limited processing capacity of mobile devices, as well as the combination of complex network protocols. Hence, it poses a challenge to design a security system with low memory usage and low computation cost. Security involves all the activities undertaken to protect the value and ongoing usability of assets and the integrity and continuity of operations. An effective network security strategy requires identifying threats and then choosing the most effective set of tools to combat them. Cryptography is a simple and efficient way to provide security in communication. RSA is an asymmetric key approach that is highly reliable and widely used in internet communication. However, it has not been efficiently implemented in mobile communication due to its computational complexity and large memory utilization. The proposed algorithm modifies the current RSA to make it useful in mobile communication by reducing its computational complexity and memory utilization.
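
The abstract does not spell out the M-RSA modification itself; as general background on reducing RSA's computational load on constrained devices, the sketch below shows the standard Chinese-remainder-theorem (CRT) shortcut for decryption with toy key values. It is not the paper's algorithm.

```python
# Background sketch (not the paper's M-RSA): CRT-based RSA decryption, a standard
# trick for cutting modular-exponentiation cost on constrained devices.
# Toy primes only; real keys are 1024 bits or more.
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def decrypt_plain(c: int) -> int:
    return pow(c, d, n)              # one full-size exponentiation

def decrypt_crt(c: int) -> int:
    dp, dq = d % (p - 1), d % (q - 1)
    q_inv = pow(q, -1, p)
    m1 = pow(c % p, dp, p)           # two half-size exponentiations instead
    m2 = pow(c % q, dq, q)
    h = (q_inv * (m1 - m2)) % p
    return m2 + h * q

c = pow(42, e, n)                    # encrypt the message 42
assert decrypt_plain(c) == decrypt_crt(c) == 42
```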

Keywords: M-RSA, sensor networks, sensor applications, security

Procedia PDF Downloads 342
6601 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique

Authors: Satyasen Panda, Urmila Bhanja

Abstract:

In this paper, we present and analyze three-dimensional (3-D) wavelength/time/space code matrices for optical code division multiple access (OCDMA) networks with the NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3-D OCDMA system is based on measurements of the signal to noise ratio (SNR), BER and eye diagram for different numbers of simultaneous users. Various types of noise and multiple access interference (MAI) effects were also considered in the analysis. The results obtained with the NAND detection technique were compared with those obtained with the OR and AND subtraction techniques. The comparison proved that the NAND detection technique with the 3-D MQC/MP code can accommodate a larger number of simultaneous users over longer fiber distances with minimum BER compared to the OR and AND subtraction techniques. The received optical power is also measured at various levels of BER to analyze the effect of attenuation.

Keywords: Cross Correlation (CC), Three dimensional Optical Code Division Multiple Access (3-D OCDMA), Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA), Multiple Access Interference (MAI), Phase Induced Intensity Noise (PIIN), Three Dimensional Modified Quadratic Congruence/Modified Prime (3-D MQC/MP) code

Procedia PDF Downloads 412
6600 Split Monotone Inclusion and Fixed Point Problems in Real Hilbert Spaces

Authors: Francis O. Nwawuru

Abstract:

The convergence analysis of split monotone inclusion problems and fixed point problems of certain nonlinear mappings is investigated in the setting of real Hilbert spaces. An inertial extrapolation term in the spirit of Polyak is incorporated to speed up the rate of convergence. Under standard assumptions, strong convergence of the proposed algorithm is established without computing the resolvent operator or involving the Yosida approximation method. The stepsize involved in the algorithm does not depend on the spectral radius of the linear operator. Furthermore, applications of the proposed algorithm to solving some related optimization problems are also considered. Our result complements and extends numerous results in the literature.
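
As background, a generic Polyak-type inertial extrapolation step has the form below; the paper's specific iteration, operators, and stepsize rule are not reproduced here.

```latex
% Generic Polyak-type inertial extrapolation (background form, not the paper's
% exact iteration): the extrapolated point $w_n$ feeds the main update step.
\[
  w_n = x_n + \theta_n\,(x_n - x_{n-1}), \qquad \theta_n \in [0,1),
  \qquad x_{n+1} = T_n(w_n),
\]
% where $T_n$ denotes the algorithm's main operator update at iteration $n$.
```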

Keywords: fixed point, Hilbert space, monotone mapping, resolvent operators

Procedia PDF Downloads 52
6599 Linear Frequency Modulation-Frequency Shift Keying Radar with Compressive Sensing

Authors: Ho Jeong Jin, Chang Won Seo, Choon Sik Cho, Bong Yong Choi, Kwang Kyun Na, Sang Rok Lee

Abstract:

In this paper, a radar signal processing technique using LFM-FSK (Linear Frequency Modulation-Frequency Shift Keying) is proposed for reducing the false alarm rate based on compressive sensing. The LFM-FSK method combines an FMCW (Frequency Modulation Continuous Wave) signal with FSK (Frequency Shift Keying). This offers the advantage of suppressing the ghost phenomenon without a complicated CFAR (Constant False Alarm Rate) algorithm. Moreover, a parametric sparse algorithm based on compressive sensing, which efficiently recovers signals from incomplete data samples, is also integrated, reducing the ADC burden in the radar receiver. A 24 GHz FMCW signal with FSK-modulated data is applied and tested in a real environment to verify the proposed algorithm along with the compressive sensing.

Keywords: compressive sensing, LFM-FSK radar, radar signal processing, sparse algorithm

Procedia PDF Downloads 483
6598 Lesson of Moral Teaching of the Sokoto Caliphate in the Quest for Genuine National Development in Nigeria

Authors: Murtala Marafa

Abstract:

It has been 50 years now since we began the desperate search for genuine all-round development as a nation. Painfully, though, like a wild goose chase, the search for that promised land has remained elusive. In this piece, recourse is made to the sound administrative qualities of the 19th-century Sokoto Caliphate leaders, which enabled them to administer the vast entity on the basis of mutual peace and justice and guaranteed a just political order built on a sound and viable economy. The paper is of the view that if Nigerian society allows for a replication of such moral virtues as exemplified by the founding fathers of the Caliphate, Nigeria could transform into the politically coherent and economically viable nation aspired to by all.

Keywords: administration, religion, sokoto caliphate, moral teachings

Procedia PDF Downloads 273
6597 Accuracy of VCCT for Calculating Stress Intensity Factor in Metal Specimens Subjected to Bending Load

Authors: Sanjin Kršćanski, Josip Brnić

Abstract:

The Virtual Crack Closure Technique (VCCT) is a method for calculating the stress intensity factor (SIF) of a cracked body that is easily implemented on top of basic finite element (FE) codes and, as such, can be applied to various component geometries. It is a relatively simple method that does not require any special finite elements and is usually used for calculating stress intensity factors at the crack tip for components made of brittle materials. This paper studies the applicability and accuracy of VCCT applied to standard metal specimens containing a through-thickness crack, subjected to an in-plane bending load. Finite element analyses were performed using regular 4-node, regular 8-node and modified quarter-point 8-node 2D elements. The stress intensity factor was calculated from the FE model results for a given crack length, using data available from the FE analysis and a custom-programmed algorithm based on the virtual crack closure technique. The influence of finite element size on the accuracy of the calculated SIF was also studied. The final part of this paper compares the calculated stress intensity factors with results obtained from analytical expressions found in the available literature and in the ASTM standard. Results calculated by this VCCT-based algorithm were found to be in good correlation with results obtained from the mentioned analytical expressions.
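
For orientation, the sketch below applies the textbook mode-I VCCT formulas (energy release rate from the crack-tip nodal force and the opening displacement behind the tip, then K_I from the effective modulus); the numerical inputs are hypothetical placeholders, not results from the paper's FE models.

```python
# A minimal sketch of mode-I VCCT from 2D FE results; the nodal force, crack
# opening displacement, and material values below are hypothetical placeholders.
import math

def vcct_K_I(F_y, delta_v, delta_a, thickness, E, nu, plane_strain=True):
    """Mode-I SIF from one crack-tip node pair (4-node element form of VCCT).

    F_y      : nodal force normal to the crack plane at the crack tip [N]
    delta_v  : relative opening displacement of the node pair behind the tip [m]
    delta_a  : length of the element at the crack tip [m]
    """
    G_I = (F_y * delta_v) / (2.0 * thickness * delta_a)   # energy release rate
    E_eff = E / (1.0 - nu**2) if plane_strain else E      # effective modulus
    return math.sqrt(E_eff * G_I)                         # K_I = sqrt(E' * G_I)

# Placeholder numbers purely to show the call:
K_I = vcct_K_I(F_y=1.2e3, delta_v=4.0e-6, delta_a=0.5e-3,
               thickness=5.0e-3, E=210e9, nu=0.3)
print(f"K_I = {K_I / 1e6:.1f} MPa*sqrt(m)")
```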

Keywords: VCCT, stress intensity factor, finite element analysis, 2D finite elements, bending

Procedia PDF Downloads 305
6596 Intrusion Detection Using Dual Artificial Techniques

Authors: Rana I. Abdulghani, Amera I. Melhum

Abstract:

With the abnormal growth in the usage of computers over networks, and given the agreement of most computer security experts that the goal of building a fully secure system is never achieved effectively, intrusion detection systems (IDS) have been designed. This research adopts a comparison between two techniques for network intrusion detection. The first uses Particle Swarm Optimization (PSO), which falls within the field of swarm intelligence (SI). Here, the algorithm is enhanced to obtain the minimum error rate by amending the cluster centers whenever a better fitness value is found during the training stages. Results show that this modification gives a more efficient exploration than the original algorithm. The second technique uses a back-propagation (BP) neural network algorithm. Finally, the results of the two methods were compared based on the NSL_KDD data sets for the construction and evaluation of intrusion detection systems. This research is only interested in clustering the given connection records into two categories (Normal and Abnormal). Experiments result in an intrusion detection rate of 99.183818% for EPSO and 69.446416% for the BP neural network.
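
The PSO-based clustering idea can be illustrated with the minimal sketch below, where each particle encodes two cluster centres and the fitness is the within-cluster squared error; the synthetic 2-D data and PSO constants are assumptions and do not reproduce the paper's EPSO enhancement or the NSL_KDD features.

```python
# A minimal PSO sketch that tunes two cluster centres (Normal / Abnormal) to
# minimise the within-cluster squared error; data and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.4, (40, 2)),       # "normal" records
                  rng.normal(3.0, 0.6, (40, 2))])      # "abnormal" records

def fitness(centres):
    """Sum of squared distances from each record to its nearest centre."""
    d = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
    return np.sum(d.min(axis=1) ** 2)

n_particles, n_centres, dim = 20, 2, 2
w, c1, c2 = 0.72, 1.49, 1.49                            # common PSO constants
pos = rng.uniform(-1, 4, (n_particles, n_centres, dim))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.array([fitness(p) for p in pos])
    improved = fit < pbest_fit                          # update personal bests
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()            # update global best

print("best within-cluster error:", round(pbest_fit.min(), 3))
print("cluster centres:\n", gbest.round(2))
```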

Keywords: IDS, SI, BP, NSL_KDD, PSO

Procedia PDF Downloads 382
6595 Designing Floor Planning in 2D and 3D with an Efficient Topological Structure

Authors: V. Nagammai

Abstract:

Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. The development of technology increases the complexity of IC manufacturing, which may vary the power consumption and increase the size and latency period. Topology defines the number of connections within a network. In this project, the NoC topology is generated using the atlas tool, which increases performance, and in turn the determination of constraints is effective. Routing is performed by the XY routing algorithm with wormhole flow control. In NoC topology generation, the values of power, area and latency are predetermined. In previous work, placement, routing and shortest path evaluation were performed using an algorithm called floor planning with cluster reconstruction and path allocation algorithm (FCRPA), using four 3x3 switches, six 4x4 switches, and two 5x5 switches. The usage of the 4x4 and 5x5 switches increases the power consumption and area of the block. In order to avoid this problem, this paper uses one 8x8 switch and four 3x3 switches. This paper uses IPRCA, which consists of three steps: placement, clustering, and shortest path evaluation. Placement is performed using min-cut placement, and clustering is performed using an algorithm called cluster generation. The shortest path is evaluated using Dijkstra's algorithm. The power consumption of each block is determined. The experimental results show that the area, power, and wire length are improved simultaneously.
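
The shortest-path evaluation step can be illustrated with a standard Dijkstra implementation over a small switch graph, as below; the toy topology and edge weights are assumptions, not the paper's 8x8 plus 3x3 switch design.

```python
# A minimal Dijkstra sketch over a toy NoC switch graph (edge weights standing in
# for wire length or hop cost); the topology below is an illustrative assumption.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbour, weight), ...]} -> shortest distance to every node."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                        # stale queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

noc = {                                      # hypothetical switch connectivity
    "S0": [("S1", 2), ("S2", 4)],
    "S1": [("S0", 2), ("S2", 1), ("S3", 7)],
    "S2": [("S0", 4), ("S1", 1), ("S3", 3)],
    "S3": [("S1", 7), ("S2", 3)],
}
print(dijkstra(noc, "S0"))      # e.g. shortest path cost from switch S0 to S3 is 6
```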

Keywords: application specific noc, b* tree representation, floor planning, t tree representation

Procedia PDF Downloads 394
6594 Multi Tier Data Collection and Estimation, Utilizing Queue Model in Wireless Sensor Networks

Authors: Amirhossein Mohajerzadeh, Abolghasem Mohajerzadeh

Abstract:

In this paper, a target parameter is estimated with desirable precision in hierarchical wireless sensor networks (WSN), while the proposed algorithm also tries to prolong the network lifetime as much as possible using an efficient data collecting algorithm. The target parameter distribution function is considered unknown. Sensor nodes sense the environment and send the data to the base station, called the fusion center (FC), using a hierarchical data collecting algorithm. The FC reconstructs the underlying phenomenon based on the collected data. Considering the aggregation level x, the goal is to provide the essential infrastructure to find the best value for the aggregation level in order to prolong network lifetime as much as possible while the desirable accuracy is guaranteed (the required sample size fully depends on the desirable precision). First, the sample size calculation algorithm is discussed; second, the average queue length based on the M/M[x]/1/K queue model is determined and used for the energy consumption calculation. Nodes can decrease the transmission cost by aggregating incoming data. Furthermore, the performance of the new algorithm is evaluated in terms of lifetime and estimation accuracy.
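
For context, a commonly used sample-size bound for estimating a mean to a given precision is shown below; it relies on a normal approximation, so the paper's rule for the unknown-distribution case may differ.

```latex
% A standard normal-approximation sample-size bound for estimating a mean to
% within margin $\varepsilon$ with confidence $1-\alpha$ (background form only;
% the paper's exact rule may differ):
\[
  n \;\ge\; \left( \frac{z_{1-\alpha/2}\,\sigma}{\varepsilon} \right)^{\!2},
\]
% where $\sigma$ is the (estimated) standard deviation of the sensed parameter.
```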

Keywords: aggregation, estimation, queuing, wireless sensor network

Procedia PDF Downloads 186
6593 Extraction of Road Edge Lines from High-Resolution Remote Sensing Images Based on Energy Function and Snake Model

Authors: Zuoji Huang, Haiming Qian, Chunlin Wang, Jinyan Sun, Nan Xu

Abstract:

In this paper, a strategy to extract double road edge lines from an acquired road stripe image is explored. The workflow is as follows: the road stripes are first acquired by a probabilistic boosting tree algorithm and a morphological algorithm, and the road centerlines are detected by a thinning algorithm, so the initial road edge lines can be acquired along the road centerlines. We then refine the results where the local curvature of the centerlines varies strongly. Specifically, the energy function of an edge line is constructed from gradient features and spectral information, and Dijkstra's algorithm is used to optimize the initial road edge lines. A Snake model is constructed to solve the fracture problem at intersections, and a discrete dynamic programming algorithm is used to solve the model. After that, the final road network is obtained. Experimental results show that the strategy proposed in this paper can be used to extract continuous and smooth road edge lines from high-resolution remote sensing images with an accuracy of 88% in our study area.
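
For reference, the classical active-contour (Snake) energy has the form below; the coefficients and the external image term actually used in the paper may differ.

```latex
% The classical active-contour (Snake) energy, as background for the model used
% to bridge fractures at intersections:
\[
  E_{\mathrm{snake}} \;=\; \int_0^1
    \frac{1}{2}\Bigl( \alpha \,\bigl|\mathbf{v}'(s)\bigr|^2
                    + \beta  \,\bigl|\mathbf{v}''(s)\bigr|^2 \Bigr)
    \;+\; E_{\mathrm{ext}}\bigl(\mathbf{v}(s)\bigr)\, \mathrm{d}s ,
\]
% where $\mathbf{v}(s)$ is the contour, $\alpha,\beta$ weight elasticity and
% rigidity, and $E_{\mathrm{ext}}$ is an image-derived (e.g. gradient) term.
```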

Keywords: road edge lines extraction, energy function, intersection fracture, Snake model

Procedia PDF Downloads 338
6592 A Hybrid System of Hidden Markov Models and Recurrent Neural Networks for Learning Deterministic Finite State Automata

Authors: Pavan K. Rallabandi, Kailash C. Patidar

Abstract:

In this paper, we present an optimization technique, or learning algorithm, using a hybrid architecture that combines two of the most popular sequence recognition models: Recurrent Neural Networks (RNNs) and Hidden Markov Models (HMMs). In order to improve sequence or pattern recognition/classification performance through a hybrid neural-symbolic approach, a gradient descent learning algorithm is developed using Real Time Recurrent Learning of the recurrent neural network to process the knowledge represented in trained Hidden Markov Models. The developed hybrid algorithm is implemented on automata theory as a sample test bed, and the performance of the designed algorithm is demonstrated and evaluated on learning deterministic finite state automata.

Keywords: hybrid systems, hidden markov models, recurrent neural networks, deterministic finite state automata

Procedia PDF Downloads 388
6591 Formulation of Extended-Release Gliclazide Tablet Using a Mathematical Model for Estimation of Hypromellose

Authors: Farzad Khajavi, Farzaneh Jalilfar, Faranak Jafari, Leila Shokrani

Abstract:

Formulation of gliclazide as an extended-release tablet in 30 and 60 mg dosage forms was performed using hypromellose (HPMC K4M) as a retarding agent. Drug-release profiles were investigated in comparison with the reference Diamicron MR 30 and 60 mg tablets. The effects of powder particle size, the amount of hypromellose in the formulation, tablet hardness, and halving the tablets on the drug release profile were investigated. A mathematical model describing hypromellose behaviour in the initial times of drug release was proposed for the estimation of the hypromellose content in the modified-release gliclazide 60 mg tablet. This model is based on the erosion of hypromellose in the dissolution medium and is applicable to describing the release profiles of insoluble drugs. Therefore, by using the amount of drug dissolved in the initial times of dissolution together with the model, the amount of hypromellose in a formulation can be predicted. The model was used to predict the HPMC K4M content in modified-release gliclazide 30 mg and extended-release quetiapine 200 mg tablets.

Keywords: Gliclazide, hypromellose, drug release, modified-release tablet, mathematical model

Procedia PDF Downloads 223
6590 An Effective Modification to Multiscale Elastic Network Model and Its Evaluation Based on Analyses of Protein Dynamics

Authors: Weikang Gong, Chunhua Li

Abstract:

Dynamics plays an essential role in the exertion of protein function. The elastic network model (ENM), a harmonic-potential-based and cost-effective computational method, is a valuable and efficient tool for characterizing the intrinsic dynamical properties encoded in biomacromolecule structures and has been widely used to detect the large-amplitude collective motions of proteins. The Gaussian network model (GNM) and the anisotropic network model (ANM) are the two most often used ENM variants. In recent years, many ENM variants have been proposed. Here, we propose a small but effective modification (denoted as modified mENM) to the multiscale ENM (mENM), in which the fitting of the weights of the Kirchhoff/Hessian matrices with the least squares method (LSM) is modified, since the original fitting neglects the details of pairwise interactions. We then compare it with the original mENM, the traditional ENM, and the parameter-free ENM (pfENM) in reproducing the dynamical properties of six representative proteins whose molecular dynamics (MD) trajectories are available at http://mmb.pcb.ub.es/MoDEL/. In the results, for B-factor prediction, mENM achieves the best performance among the four ENM models. Additionally, with the weights of the multiscale Kirchhoff/Hessian matrices modified, the modified mGNM/mANM interestingly still performs much better than the corresponding traditional ENM and pfENM models. As to the dynamical cross-correlation map (DCCM) calculation, taking the data obtained from MD trajectories as the standard, mENM performs the worst, while the results produced by the modified mENM and pfENM models are close to those from the MD trajectories, with the latter a little better than the former. Generally, the ANMs perform better than the corresponding GNMs, except for the mENM. Thus, pfANM and the modified mANM, especially the former, show excellent performance in the dynamical cross-correlation calculation. Compared with the GNMs (except for mGNM), the corresponding ANMs can capture quite a number of positive correlations for residue pairs nearly the largest distances apart, which may be due to the anisotropy considered in the ANMs. Furthermore, and encouragingly, the modified mANM displays the best performance in capturing the functional motional modes, followed by the pfANM and traditional ANM models, while mANM fails in all the cases. This suggests that the consideration of long-range interactions is critical for ANM models to produce protein functional motions. Based on these analyses, the modified mENM is a promising method for capturing the multiple dynamical characteristics encoded in protein structures. This work is helpful for strengthening the understanding of the elastic network model and provides a valuable guide for researchers in utilizing the model to explore protein dynamics.
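
As background for the models being compared, the sketch below performs a conventional single-cutoff GNM calculation, in which predicted B-factors are proportional to the diagonal of the pseudo-inverse of the Kirchhoff matrix; the random "CA trace" and cutoff value are illustrative assumptions, and none of the multiscale or modified weighting schemes of the paper are implemented.

```python
# A minimal sketch of a conventional (single-cutoff) GNM B-factor calculation;
# the fake CA coordinates and the 7.3 A cutoff are illustrative assumptions.
import numpy as np

def gnm_bfactors(coords, cutoff=7.3):
    """Return B-factors proportional to the diagonal of the Kirchhoff pseudo-inverse."""
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    kirchhoff = -(dist <= cutoff).astype(float)            # off-diagonal contacts
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))    # diagonal = contact degree
    gamma_inv = np.linalg.pinv(kirchhoff)                  # pseudo-inverse drops the zero mode
    return np.diag(gamma_inv)                              # B_i proportional to [Gamma^-1]_ii

rng = np.random.default_rng(1)
ca = np.cumsum(rng.normal(0, 2.0, (50, 3)), axis=0)        # fake 50-residue CA trace
b = gnm_bfactors(ca)
print("most flexible residues:", np.argsort(b)[-5:])
```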

Keywords: elastic network model, ENM, multiscale ENM, molecular dynamics, parameter-free ENM, protein structure

Procedia PDF Downloads 121
6589 Comparative Study of IC and Perturb and Observe Method of MPPT Algorithm for Grid Connected PV Module

Authors: Arvind Kumar, Manoj Kumar, Dattatraya H. Nagaraj, Amanpreet Singh, Jayanthi Prattapati

Abstract:

The purpose of this paper is to study and compare two maximum power point tracking (MPPT) algorithms in a photovoltaic simulation system, and to present a simulation study of MPPT for photovoltaic systems using the perturb and observe algorithm and the incremental conductance algorithm. MPPT plays an important role in photovoltaic systems because it maximizes the power output from a PV system for a given set of conditions and therefore maximizes array efficiency and minimizes overall system cost. Since the maximum power point (MPP) varies with irradiation and cell temperature, appropriate algorithms must be utilized to track the MPP and maintain the operation of the system at it. MATLAB/Simulink is used to establish a model of a photovoltaic system with the MPPT function. This system is developed by combining the models of the solar PV module and the DC-DC boost converter. The system is simulated under different climate conditions. Simulation results show that the photovoltaic simulation system can track the maximum power point accurately.
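
The perturb-and-observe logic itself is compact, as the sketch below shows against a crude stand-in P-V curve; the curve, step size, and iteration count are assumptions and do not reproduce the paper's MATLAB/Simulink module or boost-converter model.

```python
# A minimal perturb-and-observe (P&O) sketch; the PV model below is a crude
# stand-in for a module model, and the step size / limits are illustrative.
def pv_power(v):                       # toy P-V curve with a single maximum (~27 V)
    i = max(0.0, 8.0 * (1.0 - (v / 36.0) ** 7))
    return v * i

def perturb_and_observe(v0=20.0, step=0.5, iterations=200):
    v, p_prev, direction = v0, pv_power(v0), +1
    for _ in range(iterations):
        v += direction * step          # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                 # power dropped -> reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point ~{v_mpp:.1f} V, {p_mpp:.1f} W")
```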

Keywords: incremental conductance algorithm, perturb and observe algorithm, photovoltaic system, simulation results

Procedia PDF Downloads 556
6588 Use of Nanoclay in Various Modified Polyolefins

Authors: Michael Tupý, Alice Tesaříková-Svobodová, Dagmar Měřínská, Vít Petránek

Abstract:

Polyethylene (PE), polypropylene (PP), poly(ethylene-vinyl acetate) (EVA) and Surlyn (modified PE) nanocomposite samples were prepared with the montmorillonite fillers Cloisite 93A and Dellite 67G. The amount of modified Na+ montmorillonite (MMT) was fixed at 5% (w/w). A twin-screw kneader was used for compounding the polymer matrix with the chosen nanofillers. The level of MMT intercalation or exfoliation in the nanocomposite systems was studied by transmission electron microscopy (TEM) observations. The properties of the samples were evaluated by dynamic mechanical analysis (E* modulus at 30 °C) and by the measurement of tensile properties (stress and strain at break).

Keywords: polyethylene, polypropylene, polyethylene(vinyl acetate), clay, nanocomposite, montmorillonite

Procedia PDF Downloads 535
6587 Sensitive Determination of Copper(II) by Square Wave Anodic Stripping Voltammetry with Tetracarbonylmolybdenum(0) Multiwalled Carbon Nanotube Paste Electrode

Authors: Illyas Md Isa, Mohamad Idris Saidin, Mustaffa Ahmad, Norhayati Hashim

Abstract:

A highly selective and sensitive carbon paste electrode modified with multiwall carbon nanotubes and a 2,6-diacetylpyridine-di-(1R)-(-)-fenchone diazine tetracarbonylmolybdenum(0) complex was used for the determination of trace amounts of Cu(II) by square wave anodic stripping voltammetry (SWASV). The influences of experimental variables on the proposed electrode, such as pH, supporting electrolyte, preconcentration potential and time, and square wave parameters, were investigated. Under optimal conditions, the proposed electrode showed a linear relationship with concentration in the range of 1.0 × 10⁻¹⁰ to 1.0 × 10⁻⁶ M Cu(II), with a limit of detection of 8.0 × 10⁻¹¹ M. The relative standard deviation (n = 5) for a solution containing 1.0 × 10⁻⁶ M Cu(II) was 0.036. The presence of various cations (at 10- and 100-fold concentrations) did not interfere. Electrochemical impedance spectroscopy (EIS) showed that charge transfer at the electrode-solution interface was favourable. The proposed electrode was applied to the determination of Cu(II) in several water samples. The results agreed very well with those obtained by inductively coupled plasma-optical emission spectrometry. The modified electrode is therefore proposed as an alternative for the determination of Cu(II).

Keywords: chemically modified electrode, Cu(II), square wave anodic stripping voltammetry, tetracarbonylmolybdenum(0)

Procedia PDF Downloads 271
6586 Designing a Corpus Database to Enhance the Learning of Old English Language

Authors: Raquel Mateo Mendaza, Carmen Novo Urraca

Abstract:

The current paper presents the elaboration of a corpus database that aligns two different corpora in order to simplify the search for information both for researchers and for students of Old English. This database comprises the information contained in two main reference corpora, namely the Dictionary of Old English Corpus (DOEC), compiled at the University of Toronto, and the York-Toronto-Helsinki Parsed Corpus of Old English (YCOE). The first provides information on all surviving texts written in the Old English language; the latter offers the syntactical and morphological annotation of several texts included in the DOEC. Although both corpora are closely related, as the YCOE includes the DOE source text identifier, the main problem detected is that there is no alignment of texts that allows whole fragments to be searched and then further analysed in terms of morphology and syntax. The database proposed in this paper gathers all this information and presents it in a simple, more accessible, visual, and educational way. The alignment of fragments has been done in an automatized way; however, some problems emerged during the creation process, particularly related to the lack of correspondence in the division of fragments. For this reason, it has been necessary to revise all the entries manually to obtain a truthful, high-quality product and to carefully indicate the gaps encountered in these corpora. All in all, this database contains more than 60,000 entries corresponding to the DOE fragments annotated by the YCOE. The main strength of the resulting product lies in its research and teaching implications for the study of Old English. The use of this database will help researchers and students in the study of different aspects of the language, such as inflectional morphology, the syntactic behaviour of given words, or translation studies, among others. By searching for words or fragments, the annotated information on morphology and syntax is automatically displayed, automatizing and speeding up the search for data.

Keywords: alignment, corpus database, morphosyntactic analysis, Old English

Procedia PDF Downloads 134
6585 Heart Murmurs and Heart Sounds Extraction Using an Algorithm Process Separation

Authors: Fatima Mokeddem

Abstract:

The phonocardiogram signal (PCG) is a physiological signal that reflects the heart's mechanical activity and is a promising tool for researchers in this field because it is full of indications and useful information for medical diagnosis. PCG segmentation is a basic step in benefiting from this signal. Therefore, this paper presents an algorithm that separates heart sounds and heart murmurs, in case the latter exist, in order to use them in several applications and in heart sound analysis. The separation process presented here is founded on three essential steps: filtering, envelope detection, and heart sound segmentation. The algorithm separates the PCG signal into S1 and S2 and extracts cardiac murmurs.
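
The filtering and envelope-detection stages can be sketched as below on a synthetic PCG-like signal, using a band-pass Butterworth filter and a Hilbert-transform envelope followed by a crude threshold segmentation; the band edges, sampling rate, and threshold are assumptions, not the paper's parameters.

```python
# A minimal sketch of the filtering + envelope-detection stages on a synthetic
# PCG-like signal; all parameters below are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 2000                                            # Hz, assumed sampling rate
t = np.arange(0.0, 2.0, 1.0 / fs)
pcg = 0.02 * np.random.default_rng(0).normal(size=t.size)   # background noise
for onset in (0.10, 0.42, 0.90, 1.22, 1.70):         # synthetic S1/S2-like bursts
    mask = (t >= onset) & (t < onset + 0.08)
    pcg[mask] += np.sin(2 * np.pi * 60 * (t[mask] - onset))

# 1) band-pass filtering (25-150 Hz keeps most heart-sound energy)
b, a = butter(4, [25, 150], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, pcg)

# 2) envelope detection via the Hilbert transform
envelope = np.abs(hilbert(filtered))

# 3) crude segmentation: intervals where the envelope exceeds a threshold
threshold = 0.3 * envelope.max()
active = envelope > threshold
edges = np.flatnonzero(np.diff(active.astype(int)))
segments = [(t[s], t[e]) for s, e in zip(edges[::2], edges[1::2])]
print("detected sound intervals (s):", [(round(s, 2), round(e, 2)) for s, e in segments])
```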

Keywords: phonocardiogram signal, filtering, envelope detection, murmurs, heart sounds

Procedia PDF Downloads 141