Search results for: Great Deluge Algorithm.
604 An Inverse Heat Transfer Algorithm for Predicting the Thermal Properties of Tumors during Cryosurgery
Authors: Mohamed Hafid, Marcel Lacroix
Abstract:
This study aimed at developing an inverse heat transfer approach for predicting the time-varying freezing front and the temperature distribution of tumors during cryosurgery. Using a temperature probe pressed against the tumor layer, the inverse approach is able to predict simultaneously the metabolic heat generation and the blood perfusion rate of the tumor. Once these parameters are predicted, the temperature field and the time-varying freezing front are determined with the direct model. The direct model rests on the one-dimensional Pennes bioheat equation. The phase change problem is handled with the enthalpy method. The Levenberg-Marquardt Method (LMM) combined with the Broyden Method (BM) is used to solve the inverse model. The effects (a) of the thermal properties of the diseased tissues; (b) of the initial guesses for the unknown thermal properties; (c) of the data capture frequency; and (d) of the noise on the recorded temperatures are examined. It is shown that the proposed inverse approach remains accurate for all the cases investigated.
Keywords: Cryosurgery, inverse heat transfer, Levenberg-Marquardt method, thermal properties, Pennes model, enthalpy method.
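As a rough illustration of the inverse step, the sketch below fits two unknown tissue parameters to noisy probe temperatures with the Levenberg-Marquardt routine in SciPy (assumed available). The `probe_temperature` surrogate is a made-up stand-in for the Pennes/enthalpy direct model, and all names and values are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy surrogate for the direct model: probe temperature as a function of time
# for two unknown tissue parameters (q_m: metabolic heat, w_b: blood perfusion).
# This stands in for the 1-D Pennes/enthalpy solver, which is not reproduced here.
def probe_temperature(params, t):
    q_m, w_b = params
    return 37.0 + q_m * np.exp(-w_b * t) - 40.0 * (1 - np.exp(-0.05 * t))

t = np.linspace(0, 120, 60)                       # data-capture instants, s
true = np.array([2.0, 0.01])
data = probe_temperature(true, t) + np.random.normal(0, 0.1, t.size)  # noisy probe record

# Levenberg-Marquardt minimization of the residual between model and probe data.
res = least_squares(lambda p: probe_temperature(p, t) - data,
                    x0=[1.0, 0.05], method='lm')
print("estimated (q_m, w_b):", res.x)
```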
603 Optimized Energy Scheduling Algorithm for Energy Efficient Wireless Sensor Networks
Authors: S. Arun Rajan, S. Bhavani
Abstract:
Wireless sensor networks (WSNs) consist of tiny, low-cost, intelligent sensors connected by advanced communication systems. WSNs have attracted significant attention because industrial as well as medical applications employ them in target monitoring, environmental observation, obstacle detection, movement control, etc. In these applications, sensor nodes are densely deployed in unattended environments with small non-rechargeable batteries. This constraint requires energy-efficient schemes to prolong the network lifetime. There are redundancies in the data sent over the network. To overcome this, multiple virtual backbone scheduling has been presented. Such network problems are called Maximum Lifetime Backbone Scheduling (MLBS) problems. Though this sleep-wake cycle reduces radio usage, improvement can be made in the way in which the cluster heads are selected. Cluster head selection with emphasis on the geometrical relations of the system will enhance load sharing among the nodes. The data are also analyzed to reduce redundant transmission. Multi-hop communication will facilitate lighter loads on the network.
Keywords: WSN, wireless sensor networks, MLBS, maximum lifetime backbone scheduling.
602 Influence of Fiber Packing on Transverse Plastic Properties of Metal Matrix Composites
Authors: Mohammad Tahaye Abadi
Abstract:
The present paper concerns the influence of fiber packing on the transverse plastic properties of metal matrix composites. A micromechanical modeling procedure is used to predict the effective mechanical properties of composite materials at large tensile and compressive deformations. The microstructure is represented by a repeating unit cell (RUC). Two fiber arrays are considered, including ideal square fiber packing and random fiber packing defined by a random sequential algorithm. The micromechanical modeling procedure is implemented for a graphite/aluminum metal matrix composite in which the reinforcement behaves as an elastic, isotropic solid and the matrix is modeled as an isotropic elastic-plastic solid following the von Mises criterion with isotropic hardening and the Ramberg-Osgood relationship between equivalent true stress and logarithmic strain. The deformation is increased to a considerable value to evaluate both elastic and plastic behaviors of metal matrix composites. The yield strength and true elastic-plastic stress are determined for graphite/aluminum composites.
Keywords: Fiber packing, metal matrix composites, micromechanics, plastic deformation, random
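For readers unfamiliar with the matrix constitutive law mentioned above, the snippet below evaluates a commonly used parametrized form of the Ramberg-Osgood relation, ε = σ/E + α(σ₀/E)(σ/σ₀)ⁿ. The material constants are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

def ramberg_osgood_strain(sigma, E, sigma0, alpha, n):
    """Total strain for a given true stress under a common
    parametrized Ramberg-Osgood law: elastic part + power-law plastic part."""
    return sigma / E + alpha * (sigma0 / E) * (sigma / sigma0) ** n

# Illustrative aluminum-like constants (placeholders, not the paper's values).
E, sigma0, alpha, n = 70e9, 250e6, 3.0 / 7.0, 5.0
for s in np.linspace(50e6, 400e6, 8):
    print(f"{s/1e6:7.1f} MPa -> strain {ramberg_osgood_strain(s, E, sigma0, alpha, n):.5f}")
```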
601 Massively-Parallel Bit-Serial Neural Networks for Fast Epilepsy Diagnosis: A Feasibility Study
Authors: Si Mon Kueh, Tom J. Kazmierski
Abstract:
About 1% of the world population suffers from the hidden disability known as epilepsy, and major developing countries are not fully equipped to counter this problem. In order to reduce the inconvenience and danger of epilepsy, different methods have been researched using artificial neural network (ANN) classification to distinguish epileptic waveforms from normal brain waveforms. This paper outlines the aim of achieving massive ANN parallelization through dedicated hardware using bit-serial processing. The design of this bit-serial Neural Processing Element (NPE) is presented, which implements the functionality of a complete neuron using variable accuracy. The proposed design has been tested taking into consideration the non-idealities of a hardware ANN. The NPE consists of a bit-serial multiplier, which uses only 16 logic elements on an Altera Cyclone IV FPGA, a bit-serial ALU, and a look-up table. Arrays of NPEs can be driven by a single controller which executes the neural processing algorithm. In conclusion, the proposed compact NPE design allows the construction of complex hardware ANNs that can be implemented in portable equipment suiting the needs of a single epileptic patient in his or her daily activities, to predict the occurrence of impending tonic-clonic seizures.
Keywords: Artificial Neural Networks, bit-serial neural processor, FPGA, Neural Processing Element.
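To make the bit-serial idea concrete, here is a minimal software model of a shift-add bit-serial multiplier: one partial product is accumulated per clock tick as the multiplier bits stream in LSB-first. This is an illustrative behavioral model, not the authors' FPGA design.

```python
def bit_serial_multiply(a: int, b: int, width: int = 8) -> int:
    """Behavioral model of a shift-add bit-serial multiplier.
    Each 'clock cycle' consumes one bit of b (LSB first) and
    conditionally adds a shifted copy of a to the accumulator."""
    acc = 0
    for cycle in range(width):
        bit = (b >> cycle) & 1          # serial input bit for this cycle
        if bit:
            acc += a << cycle           # add the correspondingly shifted partial product
        # in hardware, the shift comes for free from the register chain
    return acc

assert bit_serial_multiply(13, 11) == 143   # sanity check against ordinary multiplication
```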
600 Modelling a Hospital as a Queueing Network: Analysis for Improving Performance
Authors: Emad Alenany, M. Adel El-Baz
Abstract:
In this paper, the flow of different classes of patients into a hospital is modelled and analyzed by using the queueing network analyzer (QNA) algorithm and discrete event simulation. Input data for QNA are the rate and variability parameters of the arrival and service times, in addition to the number of servers in each facility. Patient flows mostly match the real flows for a hospital in Egypt. Based on the analysis of the waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients, with a higher performance target, to be served apart from the rest of the patients with a lower performance target requires the same capacity while improving performance for the selected group. Besides, it is shown that adopting the shortest processing time and shortest remaining processing time service policies, among other tested policies, would result in 11.47% and 13.75% reductions in average waiting time, respectively, relative to the first come first served policy.
Keywords: Queueing network, discrete-event simulation, health applications, SPT.
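QNA-style analyzers rest on two-moment approximations of each station. The sketch below computes the classic Allen-Cunneen estimate of mean waiting time for a GI/G/c station from the arrival and service rates, their squared coefficients of variation, and the server count; the numbers fed in at the bottom are invented for illustration, not taken from the paper.

```python
import math

def erlang_c(c: int, a: float) -> float:
    """Probability of waiting in an M/M/c queue (Erlang C), a = lambda/mu."""
    rho = a / c
    s = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / (math.factorial(c) * (1 - rho))
    return top / (s + top)

def wq_allen_cunneen(lam: float, mu: float, c: int, ca2: float, cs2: float) -> float:
    """Two-moment GI/G/c approximation: scale the M/M/c wait by (ca^2 + cs^2)/2."""
    a = lam / mu
    wq_mmc = erlang_c(c, a) / (c * mu - lam)
    return wq_mmc * (ca2 + cs2) / 2.0

# Hypothetical clinic unit: 10 patients/h, 12 min mean service, 3 servers.
print(wq_allen_cunneen(lam=10.0, mu=5.0, c=3, ca2=1.4, cs2=0.8), "hours")
```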
599 A Secure Semi-Fragile Watermarking Scheme for Authentication and Recovery of Images Based On Wavelet Transform
Authors: Rafiullah Chamlawi, Asifullah Khan, Adnan Idris, Zahid Munir
Abstract:
Authentication of multimedia contents has gained much attention in recent times. In this paper, we propose a secure semi-fragile watermarking scheme with a choice of two watermarks to be embedded. This technique operates in the integer wavelet domain and makes use of semi-fragile watermarks for achieving better robustness. A self-recovering algorithm is employed that hides the image digest into some wavelet subbands to detect possible malevolent object manipulation undergone by the image (object replacing and/or deletion). The semi-fragility makes the scheme tolerant of JPEG lossy compression down to a quality factor of 70%, while locating the tampered area accurately. In addition, the system ensures more security because the embedded watermarks are protected with private keys. The computational complexity is reduced using a parameterized integer wavelet transform. Experimental results show that the proposed scheme guarantees the safety of the watermark, image recovery, and accurate location of the tampered area.
Keywords: Integer Wavelet Transform (IWT), Discrete Cosine Transform (DCT), JPEG Compression, Authentication and Self-Recovery.
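Integer wavelet transforms are usually built by lifting so that every coefficient stays an integer and the transform inverts exactly, which is what makes them attractive for watermark embedding. Below is a minimal one-level integer Haar (S-transform) forward/inverse pair of the kind such schemes build on; it is a generic sketch, not the parameterized IWT used in the paper.

```python
import numpy as np

def int_haar_fwd(x):
    """One-level integer Haar (S-transform) on an even-length 1-D signal."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - even                 # detail (high-pass)
    s = even + (d >> 1)            # approximation (low-pass), floored average
    return s, d

def int_haar_inv(s, d):
    even = s - (d >> 1)            # exactly undoes the update step
    odd = even + d
    out = np.empty(even.size + odd.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([12, 15, 7, 9, 200, 198, 3, 4])
s, d = int_haar_fwd(x)
assert np.array_equal(int_haar_inv(s, d), x)   # perfect integer reconstruction
```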
598 Hiding Data in Images Using PCP
Authors: Souvik Bhattacharyya, Gautam Sanyal
Abstract:
In recent years, everything is trending toward digitalization, and with the rapid development of Internet technologies, digital media needs to be transmitted conveniently over the network. Attacks, misuse or unauthorized access to information is of great concern today, which makes the protection of documents through digital media a priority problem. This urges us to devise new data hiding techniques to protect and secure data of vital significance. In this respect, steganography often comes to the fore as a tool for hiding information. Steganography is a process that involves hiding a message in an appropriate carrier like an image or audio file. It is of Greek origin and means "covered or hidden writing". The goal of steganography is covert communication: the carrier can be sent to a receiver while no one except the authenticated receiver knows of the existence of the information. A considerable amount of work has been carried out by different researchers on steganography. In this work, the authors propose a novel steganographic method for hiding information within the spatial domain of the gray scale image. The proposed approach works by selecting the embedding pixels using some mathematical function, then finding the 8-neighborhood of each selected pixel and mapping each bit of the secret message to a neighbor pixel coordinate position in a specified manner. Before embedding, a check is performed to find out whether the selected pixel or its neighbor lies at the boundary of the image. This solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.
Keywords: Cover Image, LSB, Pixel Coordinate Position (PCP), Stego Image.
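The abstract gives the idea but not the exact mapping, so the sketch below improvises a PCP-flavored embedder: a simple, hypothetical selection function picks anchor pixels, and successive message bits are written into the LSBs of their 8-neighbors, skipping anchors whose neighborhood touches the border. The stride-based selection function and bit order are our assumptions, not the authors' scheme.

```python
import numpy as np

def embed_pcp_like(cover: np.ndarray, bits: str, step: int = 37) -> np.ndarray:
    """Toy PCP-flavored embedding: anchors chosen by a fixed stride
    (stand-in for the paper's mathematical selection function); each
    message bit goes into the LSB of one 8-neighbor of the anchor."""
    stego = cover.copy()
    h, w = stego.shape
    neigh = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    k = 0
    for idx in range(0, h * w, step):
        r, c = divmod(idx, w)
        if r in (0, h - 1) or c in (0, w - 1):   # boundary check, as in the paper
            continue
        for dr, dc in neigh:
            if k >= len(bits):
                return stego
            stego[r + dr, c + dc] = (stego[r + dr, c + dc] & 0xFE) | int(bits[k])
            k += 1
    return stego

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = embed_pcp_like(cover, bits="0100100001101001")   # "Hi"
print(int(np.abs(stego.astype(int) - cover).max()))      # per-pixel change <= 1
```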
597 Medical Image Fusion Based On Redundant Wavelet Transform and Morphological Processing
Authors: P. S. Gomathi, B. Kalaavathi
Abstract:
The process in which the complementary information from multiple images is integrated to provide a composite image that contains more information than the original input images is called image fusion. Medical image fusion provides useful information from multimodality medical images, giving additional information to the doctor for better diagnosis of diseases. This paper presents a wavelet-based medical image fusion algorithm for different multimodality medical images. In order to fuse the medical images, the images are decomposed using the Redundant Wavelet Transform (RWT). The high-frequency coefficients are convolved with a morphological operator followed by the maximum-selection (MS) rule. The low-frequency coefficients are processed by the MS rule. The reconstructed image is obtained by the inverse RWT. Quantitative measures including mean, standard deviation, average gradient, spatial frequency, and edge-based similarity measures are considered for evaluating the fused images. The performance of the proposed method is compared with pixel averaging, PCA, and DWT fusion methods. When compared with conventional methods, the proposed framework provides better performance for the analysis of multimodality medical images.
Keywords: Discrete Wavelet Transform (DWT), Image Fusion, Morphological Processing, Redundant Wavelet Transform (RWT).
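A redundant (undecimated) wavelet fusion with an MS rule can be prototyped in a few lines with PyWavelets' stationary wavelet transform, assuming `pywt` is available. The morphological pre-processing step is omitted here, so this sketch only shows the decompose / maximum-select / reconstruct skeleton.

```python
import numpy as np
import pywt

def fuse_ms(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "haar"):
    """One-level undecimated wavelet fusion with a maximum-selection rule."""
    (ca, (ch_a, cv_a, cd_a)), = pywt.swt2(img_a, wavelet, level=1)
    (cb, (ch_b, cv_b, cd_b)), = pywt.swt2(img_b, wavelet, level=1)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)   # MS rule
    fused = [(pick(ca, cb), (pick(ch_a, ch_b), pick(cv_a, cv_b), pick(cd_a, cd_b)))]
    return pywt.iswt2(fused, wavelet)

a = np.random.rand(64, 64)   # stand-ins for two registered modality slices
b = np.random.rand(64, 64)
print(fuse_ms(a, b).shape)   # (64, 64)
```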
596 Comparison between Deterministic and Probabilistic Stability Analysis, Featuring Consequent Risk Assessment
Authors: Isabela Moreira Queiroz
Abstract:
Slope stability analyses are largely carried out by deterministic methods and evaluated through a single safety factor. Although it is known that geotechnical parameters can present great dispersion, such analyses treat them as fixed and known. The probabilistic methods, in turn, incorporate the variability of the key input parameters (random variables), resulting in a range of values of safety factors and thus enabling the determination of the probability of failure, which is an essential parameter in the calculation of the risk (probability multiplied by the consequence of the event). Among the probabilistic methods, three are frequently used in the geotechnical community: FOSM (First-Order, Second-Moment), Rosenblueth (Point Estimates) and Monte Carlo. This paper presents a comparison between the results from deterministic and probabilistic analyses (FOSM, Monte Carlo and Rosenblueth methods) applied to a hypothetical slope. The aim was to evaluate the behavior of the slope and perform the consequent risk analysis, which is used to calculate the risk and analyze mitigation and control solutions. It can be observed that the results obtained by the three probabilistic methods were quite close. It should be noticed that the calculation of the risk makes it possible to prioritize the implementation of mitigation measures. Therefore, it is recommended to make a good assessment of the geological-geotechnical model, incorporating the uncertainty in viability, design, construction, operation and closure by means of risk management.
Keywords: Probabilistic methods, risk assessment, risk management, slope stability.
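As a flavor of the Monte Carlo approach, the sketch below estimates a probability of failure for the classical infinite-slope factor of safety, FS = (c' + γ z cos²β tanφ') / (γ z sinβ cosβ), drawing c' and φ' from normal distributions. The slope geometry and parameter statistics are invented for illustration and are not the paper's hypothetical slope.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
gamma, z, beta = 18.0, 5.0, np.radians(30)      # kN/m^3, m, slope angle (hypothetical)
c = rng.normal(10.0, 2.5, N)                    # cohesion c' (kPa), random variable
phi = np.radians(rng.normal(28.0, 3.0, N))      # friction angle phi', random variable

# Infinite-slope factor of safety (no pore pressure).
fs = (c + gamma * z * np.cos(beta)**2 * np.tan(phi)) / (gamma * z * np.sin(beta) * np.cos(beta))

pf = np.mean(fs < 1.0)                          # probability of failure
print(f"mean FS = {fs.mean():.2f}, P(failure) = {pf:.4f}")
```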
595 Statistics over Lyapunov Exponents for Feature Extraction: Electroencephalographic Changes Detection Case
Authors: Elif Derya UBEYLI, Inan GULER
Abstract:
A new approach based on the consideration that electroencephalogram (EEG) signals are chaotic was presented for the automated diagnosis of electroencephalographic changes. This consideration was tested successfully using nonlinear dynamics tools, like the computation of Lyapunov exponents. This paper presents the usage of statistics over the set of Lyapunov exponents in order to reduce the dimensionality of the extracted feature vectors. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Multilayer perceptron neural network (MLPNN) architectures were formulated and used as the basis for detection of electroencephalographic changes. Three types of EEG signals (EEG signals recorded from healthy volunteers with eyes open, epilepsy patients in the epileptogenic zone during a seizure-free interval, and epilepsy patients during epileptic seizures) were classified. The selected Lyapunov exponents of the EEG signals were used as inputs of the MLPNN trained with the Levenberg-Marquardt algorithm. The classification results confirmed that the proposed MLPNN has potential in detecting electroencephalographic changes.
Keywords: Chaotic signal, Electroencephalogram (EEG) signals, Feature extraction/selection, Lyapunov exponents
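The dimensionality-reduction step described above amounts to summarizing each segment's set of Lyapunov exponents with a few statistics. A minimal version (our choice of statistics, not necessarily the authors') looks like this:

```python
import numpy as np

def lyapunov_stats(exponents) -> np.ndarray:
    """Collapse a set of Lyapunov exponents into a compact feature vector."""
    e = np.asarray(exponents, dtype=float)
    return np.array([e.max(), e.min(), e.mean(), e.std()])

segment_exponents = [0.41, 0.12, -0.05, -0.33, -0.80]   # hypothetical values
print(lyapunov_stats(segment_exponents))                # 4-D feature vector for the MLPNN
```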
594 Reliability Analysis of Computer Centre at Yobe State University Using LRU Algorithm
Authors: V. V. Singh, Yusuf Ibrahim Gwanda, Rajesh Prasad
Abstract:
In this paper, we focus on the reliability and performance analysis of the Computer Centre (CC) at Yobe State University, Damaturu, Nigeria. The CC consists of three servers: one database mail server, one redundant server, and one for sharing with the client computers in the CC (called a local server). Observing the different possibilities of the functioning of the CC, the analysis has been done to evaluate various popular measures of reliability such as availability, reliability, mean time to failure (MTTF), and profit due to the operation of the system. The system can ultimately fail due to failure of the router, failure of the redundant server before the mail server is repaired, or switch failure. The system can also partially fail when a local server fails. The failed devices are restored according to the Least Recently Used (LRU) technique. The system can also fail entirely due to a cooling failure of the server, electricity failure, or some natural calamity like an earthquake, fire, tsunami, etc. All the failure rates are assumed to be constant and follow an exponential time distribution, while the repair follows two types of distributions, i.e. general and Gumbel-Hougaard family copula distributions.
Keywords: Reliability, availability, Gumbel-Hougaard family copula, MTTF, internet data center.
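Under the constant-failure-rate assumption, the headline measures are straightforward: a series of exponential failure modes has MTTF = 1/Σλᵢ, and steady-state availability is MTTF/(MTTF + MTTR). The rates below are invented, and the copula-based repair modelling of the paper is not reproduced.

```python
import math

# Hypothetical constant failure rates (per hour) for series failure modes.
rates = {"router": 2e-4, "redundant_server": 5e-4, "switch": 1e-4, "cooling": 3e-4}

lam_total = sum(rates.values())
mttf = 1.0 / lam_total                  # mean time to (full) failure, hours
mttr = 8.0                              # assumed mean repair time, hours
availability = mttf / (mttf + mttr)     # steady-state availability

reliability_30d = math.exp(-lam_total * 30 * 24)   # P(no full failure in 30 days)
print(f"MTTF = {mttf:.0f} h, A = {availability:.4f}, R(30 d) = {reliability_30d:.3f}")
```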
593 Reverse Twin Block with Expansion Screw for Treatment of Skeletal Class III Malocclusion in Growing Patient: Case Report
Authors: Alfrina Marwan, Erna Sulistyawati
Abstract:
Class III malocclusion shows both skeletal and dentoalveolar components. Skeletal Class III malocclusion can have variants in different regions, maxillary or mandibular. Skeletal Class III malocclusion should be treated during the growth period to prevent greater severity in adulthood. Orthopedic treatment of skeletal Class III malocclusion in a growing patient can be performed by using a reverse twin block with an expansion screw to modify the growth pattern. The objective of this case report was to describe the functional correction of skeletal Class III malocclusion using a reverse twin block with an expansion screw in a growing patient. A patient with a concave profile came with a chief complaint of aesthetic problems. The cephalometric analysis showed that the patient had skeletal Class III malocclusion (ANB -5°, SNA 75°, Wits appraisal -3 mm) with anterior crossbite and deep bite (overjet -3 mm, overbite 6 mm). In this case report, the patient was treated with a reverse twin block appliance with an expansion screw. After three months of treatment, the skeletal problems had been corrected (ANB -1°), and the overjet, overbite and aesthetics were improved. The reverse twin block appliance with an expansion screw can be used as an orthopedic treatment for skeletal Class III malocclusion in a growing patient and can improve aesthetics with great satisfaction, which was the main complaint of this patient.
Keywords: Growing patient, maxillary retrognathism, reverse twin blocks, skeletal Class III malocclusion.
592 A Real Time Ultra-Wideband Location System for Smart Healthcare
Authors: Mingyang Sun, Guozheng Yan, Dasheng Liu, Lei Yang
Abstract:
Driven by the demand for intelligent monitoring in rehabilitation centers and hospitals, a high-accuracy real-time location system based on UWB (ultra-wideband) technology is proposed. The system measures the precise location of a specific person, traces his movement and visualizes his trajectory on the screen for doctors or administrators. Therefore, doctors can view the position of the patient at any time and find them immediately and exactly when something urgent happens. In our design process, different algorithms were discussed and their errors were analyzed. In addition, we discuss a simple but effective way of correcting the antenna delay error. By choosing the best algorithm and correcting errors with the corresponding methods, the system attained good accuracy. Experiments indicated that the ranging error of the system is lower than 7 cm, the locating error is lower than 20 cm, and the refresh rate exceeds 5 times per second. In future work, by embedding the system in wearable IoT (Internet of Things) devices, it could provide not only physical parameters but also the activity status of the patient, which would help doctors greatly in delivering healthcare.
Keywords: Intelligent monitoring, IoT devices, real-time location, smart healthcare, ultra-wideband technology.
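UWB localization typically turns anchor-to-tag range measurements into a position fix. The sketch below performs a standard linearized least-squares trilateration from three fixed anchors; the anchor coordinates and ranges are invented, and this is a generic method rather than the paper's specific algorithm.

```python
import numpy as np

def trilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Linearized least-squares position fix from >= 3 anchor ranges (2-D).
    Subtracting the first anchor's circle equation from the others
    yields a linear system A x = b in the unknown position x."""
    p0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 6.0]])   # hypothetical room anchors (m)
true = np.array([3.0, 2.0])
ranges = np.linalg.norm(anchors - true, axis=1) + np.random.normal(0, 0.05, 3)
print(trilaterate(anchors, ranges))    # close to (3.0, 2.0)
```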
591 Local Curvelet Based Classification Using Linear Discriminant Analysis for Face Recognition
Authors: Mohammed Rziza, Mohamed El Aroussi, Mohammed El Hassouni, Sanaa Ghouzali, Driss Aboutajdine
Abstract:
In this paper, an efficient local appearance feature extraction method based on the multi-resolution Curvelet transform is proposed in order to further enhance the performance of the well-known Linear Discriminant Analysis (LDA) method when applied to face recognition. Each face is described by a subset of band-filtered images containing block-based Curvelet coefficients. These coefficients characterize the face texture, and a set of simple statistical measures allows us to form compact and meaningful feature vectors. The proposed method is compared with related feature extraction methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Independent Component Analysis (ICA). Two different multi-resolution transforms, Wavelet (DWT) and Contourlet, were also compared against the block-based Curvelet-LDA algorithm. Experimental results on the ORL, YALE and FERET face databases convince us that the proposed method provides a better representation of the class information and obtains much higher recognition accuracies.
Keywords: Curvelet, Linear Discriminant Analysis (LDA), Contourlet, Discrete Wavelet Transform (DWT), block-based analysis, face recognition (FR).
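The pipeline above boils down to block-wise statistics of subband coefficients followed by LDA. A compressed stand-in using scikit-learn (assumed available), with random arrays in place of real Curvelet subbands:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def block_features(subband: np.ndarray, block: int = 8) -> np.ndarray:
    """Mean and standard deviation of each block of a (Curvelet) subband."""
    h, w = subband.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = subband[i:i + block, j:j + block]
            feats += [blk.mean(), blk.std()]
    return np.array(feats)

rng = np.random.default_rng(1)
X = np.stack([block_features(rng.random((32, 32))) for _ in range(40)])  # stand-in subbands
y = np.repeat(np.arange(4), 10)                                          # 4 subjects
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))     # training accuracy on the toy data
```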
590 A Monte Carlo Method to Data Stream Analysis
Authors: Kittisak Kerdprasop, Nittaya Kerdprasop, Pairote Sattayatham
Abstract:
Data stream analysis is the process of computing various summaries and derived values from large amounts of data which are continuously generated at a rapid rate. The nature of a stream does not allow a revisit of each data element. Furthermore, data processing must be fast to produce timely analysis results. These requirements impose constraints on the design of the algorithms to balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. These techniques can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem. The data stream is both statistically transformed to a smaller size and computationally approximated in its characteristics. We adopt a Monte Carlo method in the approximation step. The data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method is analyzed by a series of experiments. We apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.
Keywords: Data Stream, Monte Carlo, Sampling, Density Estimation.
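The paper's EMR sampling is not spelled out in the abstract, so as a generic illustration of single-pass stream sampling, here is the classic reservoir sampling algorithm (Algorithm R), which keeps a uniform random subset of an unbounded stream without revisiting elements:

```python
import random

def reservoir_sample(stream, k: int):
    """Keep a uniform random sample of size k from a single pass over a stream."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = random.randint(0, i)   # item survives with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(1_000_000), k=5))
```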
589 Steepest Descent Method with New Step Sizes
Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman
Abstract:
The steepest descent method is a simple gradient method for optimization. This method has slow convergence toward the optimal solution, which occurs because of the zigzag form of the steps. Barzilai and Borwein modified this algorithm so that it performs well for problems with large dimensions. The Barzilai and Borwein results have sparked a lot of research on the method of steepest descent, including the alternate minimization gradient method and the Yuan method. Inspired by previous works, we modified the step size of the steepest descent method. We then compare the modification results against the Barzilai and Borwein method, the alternate minimization gradient method and the Yuan method for quadratic function cases in terms of the number of iterations and the running time. The average results indicate that the steepest descent method with the new step sizes provides good results for small dimensions and is able to compete with the results of the Barzilai and Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes have faster convergence compared to the other methods, especially for cases with large dimensions.
Keywords: Convergence, iteration, line search, running time, steepest descent, unconstrained optimization.
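The paper's new step sizes are not given in the abstract, but the Barzilai-Borwein baseline it builds on is standard: use αₖ = sᵀs / sᵀy with s = xₖ − xₖ₋₁ and y = ∇fₖ − ∇fₖ₋₁. A minimal run on a random convex quadratic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD quadratic: f(x) = 0.5 x'Ax - b'x
b = rng.standard_normal(n)
grad = lambda x: A @ x - b

x = np.zeros(n)
g = grad(x)
alpha = 1e-3                          # small safe step before BB information exists
for k in range(200):
    x_new = x - alpha * g
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    alpha = (s @ s) / (s @ y)         # Barzilai-Borwein "BB1" step size
    x, g = x_new, g_new
    if np.linalg.norm(g) < 1e-8:
        break

print(k, np.linalg.norm(A @ x - b))   # iterations used and final residual
```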
588 Traffic Behaviour of VoIP in a Simulated Access Network
Authors: Jishu Das Gupta, Srecko Howard, Angela Howard
Abstract:
Insufficient Quality of Service (QoS) of Voice over Internet Protocol (VoIP) is a growing concern that has led to the need for research and study. In this paper we investigate the performance of VoIP and the impact of resource limitations on the performance of Access Networks. The impact of VoIP performance in Access Networks is particularly important in regions where Internet resources are limited and the cost of improving these resources is prohibitive. It is clear that perceived VoIP performance, as measured by the mean opinion score [2] in experiments where subjects are asked to rate communication quality, is determined by end-to-end delay on the communication path, delay variation, packet loss, echo, the coding algorithm in use, and noise. These performance indicators can be measured, and their effect in the Access Network can be estimated. This paper investigates the contribution of congestion in the Access Network to the overall performance of VoIP services in the presence of other substantial uses of the Internet, and ways in which Access Networks can be designed to improve VoIP performance. Methods for analyzing the impact of the Access Network on VoIP performance are surveyed and reviewed. This paper also considers some approaches for improving the performance of VoIP by carrying out experiments using Network Simulator version 2 (NS2) software, with a view to gaining a better understanding of the design of Access Networks.
Keywords: Codec, DiffServ, Droptail, RED, VoIP.
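One of the indicators above, delay variation, is commonly tracked with the RTP interarrival jitter estimator from RFC 3550, J ← J + (|D| − J)/16. A small demonstration with invented timestamps:

```python
import random

def interarrival_jitter(send_times, recv_times):
    """RFC 3550 running jitter estimate from packet send/receive timestamps (ms)."""
    j = 0.0
    prev_transit = recv_times[0] - send_times[0]
    for s, r in zip(send_times[1:], recv_times[1:]):
        transit = r - s
        d = abs(transit - prev_transit)    # |D(i-1, i)|
        j += (d - j) / 16.0                # exponentially smoothed estimator
        prev_transit = transit
    return j

random.seed(3)
send = [i * 20.0 for i in range(50)]                        # 20 ms voice frames
recv = [s + 40.0 + random.uniform(0, 8) for s in send]      # network delay 40-48 ms
print(f"jitter ~ {interarrival_jitter(send, recv):.2f} ms")
```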
587 Application and Assessment of Artificial Neural Networks for Biodiesel Iodine Value Prediction
Authors: Raquel M. de Sousa, Sofiane Labidi, Allan Kardec D. Barros, Alex O. Barradas Filho, Aldalea L. B. Marques
Abstract:
Several parameters are established in order to measure biodiesel quality. One of them is the iodine value, an important parameter that measures the total unsaturation within a mixture of fatty acids. Limiting unsaturated fatty acids is necessary since heating a higher quantity of them ends in either the formation of deposits inside the motor or damage to the lubricant. Since determination of the iodine value by the official procedure tends to be very laborious, with high costs and toxic reagents, this study uses artificial neural networks (ANN) to predict the iodine value property as an alternative to these problems. The methodology for developing the networks used 13 fatty acid esters as inputs, and convergence algorithms of the back-propagation type were optimized in order to obtain an architecture for prediction of the iodine value. This study allowed us to demonstrate the neural networks' ability to learn the correlation between biodiesel quality properties, in this case the iodine value, and the molecular structures that make it up. The model developed in the study reached a correlation coefficient (R) of 0.99 for both network validation and network simulation with the Levenberg-Marquardt algorithm.
Keywords: Artificial Neural Networks, Biodiesel, Iodine Value, Prediction.
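A property-prediction network of this shape, 13 fatty-acid-ester fractions in and one iodine value out, can be prototyped with scikit-learn's MLPRegressor (assumed available). The data below are random placeholders, not biodiesel measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X = rng.random((200, 13))                       # 13 fatty acid ester fractions (placeholder)
w = rng.uniform(20, 140, 13)                    # pretend each ester contributes linearly
y = X @ w / X.sum(axis=1)                       # synthetic "iodine value" target

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(X[:150], y[:150])
print("validation R:", np.corrcoef(net.predict(X[150:]), y[150:])[0, 1])
```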
586 Viability of Rice Husk Ash Concrete Brick/Block from Green Electricity in Bangladesh
Authors: Mohammad A. N. M. Shafiqul Karim
Abstract:
As a developing country, Bangladesh has to face numerous challenges. Self-sufficiency in electricity, contributing to climate change mitigation by reducing carbon emissions, and bringing the backward population of society into the mainstream are especially challenging. Therefore, it is essential to ensure recycled use of local products to the maximum level in every sector. Some private organizations have already worked alongside the government to bring the backward population into the mainstream by developing their financial capacities. Rice husk is the largest single category of the total energy supply in Bangladesh. As part of this strategy, rice husk can play a great role as a promising renewable energy source, which is readily available, has considerable environmental benefits, and can produce electricity while ensuring multiple uses of its byproducts in construction technology. For the first time in Bangladesh, an experimental multidimensional project depending on rice husk electricity and rice husk ash (RHA) concrete brick/block under Green Eco-Tech Limited has already been started. Project analysis, opportunity, sustainability, the high monitoring component, limitations and finally the evaluated data reflecting the viability of establishing more projects using rice husk are discussed in this paper. The use of RHA, the by-product of green electricity production from rice husk, for making RHA concrete brick/block in the Bangladeshi context is also discussed.
Keywords: Project analysis, rice husk, rice husk ash concrete brick/block, compressive strength of rice husk ash concrete brick/block.
585 Optimal Design of Airfoil with High Aspect Ratio in Unmanned Aerial Vehicles
Authors: Kyoungwoo Park, Ji-Won Han, Hyo-Jae Lim, Byeong-Sam Kim, Juhee Lee
Abstract:
Shape optimization of a high-aspect-ratio airfoil for a long-endurance unmanned aerial vehicle (UAV) is performed by multi-objective optimization technology coupled with computational fluid dynamics (CFD). For predicting the aerodynamic characteristics around the airfoil, a high-fidelity Navier-Stokes solver is employed, and SMOGA (Simple Multi-Objective Genetic Algorithm), developed by the authors, is used for solving the multi-objective optimization problem. To obtain the optimal solutions of the design variables (i.e., sectional airfoil profile, wing taper ratio and sweep) for high performance of UAVs, both the lift and the lift-to-drag ratio are maximized whereas the pitching moment is minimized, simultaneously. It is found that the lift force and lift-to-drag ratio are linearly dependent, and a unique, dominant solution exists. However, a trade-off is observed between the lift-to-drag ratio and the pitching moment. As the result of optimization, sixty-five (65) non-dominated Pareto individuals at the cutting edge of the design space decided by the airfoil shapes can be obtained.
Keywords: Unmanned aerial vehicle (UAV), Airfoil, CFD, Shape optimization, Lift-to-drag ratio.
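The "non-dominated Pareto individuals" mentioned above come from a dominance filter over the objective values. A minimal version for the three objectives here (maximize lift and L/D, minimize pitching moment), with made-up candidate values:

```python
import numpy as np

def non_dominated(obj: np.ndarray) -> np.ndarray:
    """Return a boolean mask of Pareto-optimal rows.
    Convention: every column of obj is to be maximized."""
    n = obj.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i if it is >= in all objectives and > in at least one
            if i != j and np.all(obj[j] >= obj[i]) and np.any(obj[j] > obj[i]):
                keep[i] = False
                break
    return keep

rng = np.random.default_rng(4)
lift, l_over_d, pitch_m = rng.random(50), rng.random(50), rng.random(50)
objectives = np.column_stack([lift, l_over_d, -pitch_m])   # negate to maximize all three
print("Pareto individuals:", np.flatnonzero(non_dominated(objectives)))
```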
584 GenCos' Optimal Bidding Strategy Considering Market Power and Transmission Constraints: A Cournot-based Model
Authors: A. Badri
Abstract:
Restructured electricity markets may provide opportunities for producers to exercise market power, maintaining prices in excess of competitive levels. In this paper an oligopolistic market is presented in which all Generation Companies (GenCos) bid in a Cournot model. A genetic algorithm (GA) is applied to obtain the generation schedule of each GenCo as well as hourly market clearing prices (MCP). In order to consider network constraints, a multi-period framework is presented to simulate the market clearing mechanism, in which the behaviors of market participants are modelled through piecewise block curves. A mixed integer linear programming (MILP) formulation is employed to solve the problem. Impacts of the market clearing process on participants' characteristics and final market prices are presented. Consequently, a novel multi-objective model is addressed for the security-constrained optimal bidding strategy of GenCos. The capability of price-maker GenCos to alter the MCP is evaluated by introducing an effective-supply curve. In addition, the impact of exercising market power on the variation of market characteristics as well as GenCo scheduling is studied.
Keywords: Optimal bidding strategy, Cournot equilibrium, market power, network constraints, market auction mechanism.
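Setting aside the network constraints and GA machinery, the underlying Cournot equilibrium can be illustrated with best-response iteration under a linear inverse demand P = a − bQ and constant marginal costs, where firm i's best response is qᵢ = (a − cᵢ − b Σⱼ≠ᵢ qⱼ) / (2b). All numbers below are illustrative.

```python
import numpy as np

a, b = 100.0, 0.5                       # inverse demand P = a - b * Q (illustrative)
c = np.array([10.0, 15.0, 20.0])        # marginal costs of three GenCos

q = np.zeros(3)
for _ in range(200):                    # sequential best-response (Cournot) iteration
    for i in range(3):
        others = q.sum() - q[i]
        q[i] = max(0.0, (a - c[i] - b * others) / (2 * b))

price = a - b * q.sum()
print("quantities:", q.round(2), "MCP:", round(price, 2))   # -> 52.5, 42.5, 32.5 at 36.25
```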
583 Applying Similarity Theory and Hilbert Huang Transform for Estimating the Differences of Pig's Blood Pressure Signals between Situations of Intestinal Artery Blocking and Unblocking
Authors: Jia-Rong Yeh, Tzu-Yu Lin, Jiann-Shing Shieh, Yun Chen
Abstract:
A mammal's body can be seen as a blood vessel system with complex tunnels. When the heart pumps blood periodically, blood runs through the blood vessels and rebounds from their walls. Blood pressure signals can be measured with complex but periodic patterns. When an artery is clamped during a surgical operation, the spectrum of the blood pressure signals will be different from that of the normal situation. In this investigation, intestinal artery clamping operations were conducted on a pig to simulate the situation of intestinal blocking during a surgical operation. Similarity theory is a convenient and easy tool to prove that the patterns of blood pressure signals under intestinal artery blocking and unblocking are surely different. Moreover, the algorithm of the Hilbert Huang Transform can be applied to extract the character parameters of the blood pressure pattern. In conclusion, the patterns of blood pressure signals in the two different situations, intestinal artery blocking and unblocking, can be distinguished by the character parameters defined in this paper.
Keywords: Blood pressure, spectrum, intestinal artery, similarity theory and Hilbert Huang Transform.
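The Hilbert step of the Hilbert-Huang Transform extracts instantaneous amplitude and frequency from each decomposed mode. Assuming SciPy is available, this is the core operation on a toy oscillation; the EMD sifting stage that precedes it in the full HHT is not shown.

```python
import numpy as np
from scipy.signal import hilbert

fs = 500.0                                   # sampling rate, Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)
x = (1 + 0.3 * np.sin(2 * np.pi * 1 * t)) * np.sin(2 * np.pi * 8 * t)  # toy pressure-like mode

analytic = hilbert(x)                        # x + j * H{x}
amplitude = np.abs(analytic)                 # instantaneous amplitude envelope
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz

print(f"mean instantaneous frequency ~ {inst_freq.mean():.2f} Hz")   # ~8 Hz
```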
582 A Character Detection Method for Ancient Yi Books Based on Connected Components and Regressive Character Segmentation
Authors: Xu Han, Shanxiong Chen, Shiyu Zhu, Xiaoyu Lin, Fujia Zhao, Dingwang Wang
Abstract:
Character detection is an important issue for character recognition of ancient Yi books. The accuracy of detection directly affects the recognition of ancient Yi books. Considering the complex layout, the lack of standard typesetting and the mixed arrangement of images and texts, we propose a character detection method for ancient Yi books based on connected components and regressive character segmentation. First, the scanned images of ancient Yi books are preprocessed with non-local means filtering, and then a modified local adaptive threshold binarization algorithm is used to obtain binary images that segment the foreground and background. Second, the non-text areas are removed by a method based on connected components. Finally, the single characters in the ancient Yi books are segmented by our regressive method. The experimental results show that the method can effectively separate the text areas and non-text areas of ancient Yi books, achieves higher accuracy and recall in the character detection experiments, and effectively solves the problem of character detection and segmentation in character recognition of ancient books.
Keywords: Computing methodologies, interest point, salient region detections, image segmentation.
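The connected-component filtering stage can be reproduced in a few lines with scipy.ndimage (assumed available): label the binary image, then discard components whose size falls outside a plausible character range. The size thresholds here are invented.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
binary = rng.random((100, 100)) > 0.8          # stand-in for a binarized page

labels, n = ndimage.label(binary)              # component labelling (4-connectivity by default)
sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))

# Keep components whose pixel count is plausible for a character (invented bounds).
keep = np.isin(labels, np.flatnonzero((sizes >= 5) & (sizes <= 400)) + 1)
print(f"{n} components found, {int(keep.sum())} pixels kept as candidate text")
```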
581 Ontology of Collaborative Supply Chain for Quality Management
Authors: Jiaqi Yan, Sherry Sun, Huaiqing Wang, Zhongsheng Hua
Abstract:
In the highly competitive and rapidly changing global marketplace, independent organizations and enterprises often come together and form a temporary alignment of virtual enterprises in a supply chain to better provide products or services. As firms adopt the systems approach implicit in supply chain management, they must manage quality from both internal process control and external control of supplier quality and customer requirements. How to incorporate the quality management of upstream and downstream supply chain partners into their own quality management system has recently received a great deal of attention from both academia and practice. This paper investigates the collaborative features and the entities' relationships in a supply chain, and presents an ontology of the collaborative supply chain from an approach of aligning a service-oriented framework with service-dominant logic. This perspective facilitates the segregation of material flow management from manufacturing capability management, which provides a foundation for the coordination and integration of the business process to measure, analyze, and continually improve the quality of products, services, and processes. Further, this approach characterizes the different interests of supply chain partners, providing an innovative approach to analyze the collaborative features of supply chains. Furthermore, this ontology is the foundation for developing a quality management system which internalizes quality management in upstream and downstream supply chain partners and manages quality in the supply chain systematically.
Keywords: Ontology, supply chain quality management, service-oriented architecture, service-dominant logic.
580 Dead Bodies that Matter: A Consensual Qualitative Research on the Lived Experience of Embalmers
Authors: Mark N. Abello, Betina Velanie L. Cruz, Angelo Joachim D. C. De Castro, Arnel A. Diego, John Ezequel V. Murillo
Abstract:
Embalmers are widely recognized as those who mend cadavers, but behind that lies a great deal of work. These professionals are competent in physiology, chemicals, and cosmetics, and they face cadavers day-to-day. Given this background, the researchers intended to find out the lived experience of embalmers. The purpose of the present study is to discover the essence of the work of these professionals, to determine the factors that influence their work and the depths of their lives, and to see how the occupation affects the physical, emotional-mental, spiritual, moral and social aspects. The researchers used Consensual Qualitative Research, and eight embalmers, seven male and one female, from Manila and Bulacan were interviewed using open-ended questions, which were used to triangulate the results. A primary research team conducted the consensus of domains, and an external auditor reviewed the results. A personal data sheet was also used; this helped the researchers group the respondents according to demographic profile. The results of the consensual qualitative research investigation revealed four core components of the lived experience of embalmers: motivation, struggles, acceptance, and contentment. These core components play an important role in their everyday lives as embalmers, their daily hardships, and the sources of their pleasures. The present study will help future researchers, embalmers, and society.
Keywords: Embalmers, consensual qualitative research, lived experience.
579 Long Short-Term Memory Based Model for Modeling Nicotine Consumption Using an Electronic Cigarette and Internet of Things Devices
Authors: Hamdi Amroun, Yacine Benziani, Mehdi Ammi
Abstract:
In this paper, we want to determine whether an accurate prediction of nicotine concentration can be obtained by using a network of smart objects and an e-cigarette. The approach consists of, first, the recognition of factors influencing smoking cessation, such as physical activity and participants' behaviors (using both a smartphone and a smartwatch), and then the prediction of the configuration of the e-cigarette (in terms of nicotine concentration, power, and resistance). The study uses a network of commonly connected objects: a smartwatch, a smartphone, and an e-cigarette carried by the participants during an uncontrolled experiment. The data obtained from the sensors carried in the three devices were trained with a long short-term memory (LSTM) algorithm. Results show that our LSTM-based model allows predicting the configuration of the e-cigarette in terms of nicotine concentration, power, and resistance with root mean square error percentages of 12.9%, 9.15%, and 11.84%, respectively. This study can help to better control the consumption of nicotine and offer an intelligent configuration of the e-cigarette to users.
Keywords: IoT, activity recognition, automatic classification, unconstrained environment, deep neural networks.
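A model with the shape described, sensor sequences in and three regression targets out (nicotine concentration, power, resistance), can be sketched with Keras (assumed available). The window length, feature count, and all data below are placeholders, not the study's sensor streams.

```python
import numpy as np
import tensorflow as tf

T, F = 30, 8           # hypothetical window: 30 time steps of 8 sensor features
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(T, F)),
    tf.keras.layers.Dense(3),   # nicotine concentration, power, resistance
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(256, T, F).astype("float32")   # placeholder sensor windows
y = np.random.rand(256, 3).astype("float32")      # placeholder e-cigarette settings
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))
```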
578 Spacecraft Neural Network Control System Design using FPGA
Authors: Hanaa T. El-Madany, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassen T. Dorrah
Abstract:
Designing and implementing intelligent systems has become a crucial factor for the innovation and development of better products in space technologies. A neural network is a parallel system, capable of resolving paradigms that linear computing cannot. A field programmable gate array (FPGA) is a digital device that offers reprogrammability and robust flexibility. For neural network based instrument prototypes in real-time applications, conventional application-specific VLSI neural chip design suffers limitations in time and cost. With low-precision artificial neural network designs, FPGAs provide higher speed and smaller size for real-time applications than VLSI and DSP chips. Hence, many researchers have made great efforts toward realizing neural networks (NNs) using the FPGA technique. In this paper, a brief introduction to ANNs and the FPGA technique is given. Hardware Description Language (VHDL) code is proposed to implement ANNs, and simulation results with floating point arithmetic are presented. Synthesis results for the ANN controller are developed using Precision RTL. The proposed VHDL implementation provides a flexible, fast method and a high degree of parallelism for implementing ANNs. The implementation of a multi-layer NN using look-up tables (LUTs) reduces the resource utilization and the execution time.
Keywords: Spacecraft, neural network, FPGA, VHDL.
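The look-up-table trick mentioned above replaces an expensive activation evaluation with a pre-computed table indexed by a quantized input, which is what makes the neuron cheap in hardware. A software model of that idea follows; the table size and clamping range are our choices, not the paper's.

```python
import numpy as np

# Pre-computed sigmoid LUT over a clamped input range, as FPGA designs often do.
LUT_BITS, X_MIN, X_MAX = 8, -8.0, 8.0
xs = np.linspace(X_MIN, X_MAX, 2 ** LUT_BITS)
SIGMOID_LUT = 1.0 / (1.0 + np.exp(-xs))

def sigmoid_lut(x: float) -> float:
    """Activation via table look-up: clamp, quantize to an index, read the table."""
    x = min(max(x, X_MIN), X_MAX)
    idx = int(round((x - X_MIN) / (X_MAX - X_MIN) * (2 ** LUT_BITS - 1)))
    return float(SIGMOID_LUT[idx])

print(sigmoid_lut(0.7), 1 / (1 + np.exp(-0.7)))   # LUT value vs exact sigmoid
```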
577 Finite Element Modeling and Mechanical Properties of Aluminum Processed by the Equal Channel Angular Pressing Process
Authors: F. Al-Mufadi, F. Djavanroodi
Abstract:
During the last decade, ultrafine-grained (UFG) and nano-structured (NS) materials have experienced rapid development. In this research work, finite element analysis has been carried out to investigate the plastic strain distribution in the equal channel angular pressing (ECAP) process. The magnitudes of the standard deviation (S. D.) and the inhomogeneity index (Ci) were compared for different ECAP passes. Verification of a three-dimensional finite element model was performed with experimental tests. Finally, the mechanical properties, including impact energy, of ultrafine-grained commercially pure aluminum produced by this severe plastic deformation method were examined. For this aim, an equal channel angular pressing die with a channel angle, outer corner angle and channel diameter of 90°, 20° and 20 mm, respectively, was designed and manufactured. Commercially pure aluminum billets were ECAPed up to four passes by route BC at ambient temperature. The results indicated a great improvement in the hardness, yield strength and ultimate tensile strength after the ECAP process. It is found that the hardness reaches 67 HV from 21 HV after the final stage of the process. Also, about 330% and 285% enhancements in the YS and UTS values were obtained after the fourth pass as compared to the as-received condition, respectively. On the other hand, the elongation to failure and impact energy were reduced by 23% and 50% after imposing four passes of ECAP, respectively.
Keywords: SPD, ECAP, FEM, Pure Al, Mechanical properties.
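For context, the equivalent plastic strain imposed by an ECAP die is commonly estimated with the Iwahashi relation, ε_N = N/√3 [2 cot(Φ/2 + Ψ/2) + Ψ cosec(Φ/2 + Ψ/2)]. Evaluating it for the die above (Φ = 90°, Ψ = 20°) gives roughly 1.05 strain per pass:

```python
import math

def ecap_strain(n_passes: int, phi_deg: float, psi_deg: float) -> float:
    """Iwahashi et al. estimate of accumulated equivalent strain after N ECAP passes."""
    phi, psi = math.radians(phi_deg), math.radians(psi_deg)
    half = phi / 2 + psi / 2
    per_pass = (2 / math.tan(half) + psi / math.sin(half)) / math.sqrt(3)
    return n_passes * per_pass

for n in range(1, 5):
    print(f"pass {n}: equivalent strain ~ {ecap_strain(n, 90, 20):.2f}")
```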
576 Grouping and Indexing Color Features for Efficient Image Retrieval
Authors: M. V. Sudhamani, C. R. Venugopal
Abstract:
Content-based Image Retrieval (CBIR) aims at searching image databases for specific images that are similar to a given query image, based on matching features derived from the image content. This paper focuses on a low-dimensional color-based indexing technique for achieving efficient and effective retrieval performance. In our approach, the color features are extracted using the mean shift algorithm, a robust clustering technique. The cluster (region) mode is then used as the representative of the image in 3-D color space. The feature descriptor consists of the representative color of a region and is indexed using a spatial indexing method based on the R*-tree, thus avoiding the high-dimensional indexing problems associated with the traditional color histogram. Alternatively, the images in the database are clustered based on region feature similarity using the Euclidean distance, and only the representative (centroid) features of these clusters are indexed using the R*-tree, thus improving the efficiency. For similarity retrieval, each representative color in the query image or region is used independently to find regions containing that color. The results of these methods are compared. A JAVA-based query engine supporting query-by-example is built to retrieve images by color.
Keywords: Content-based, indexing, cluster, region.
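The color-feature extraction step, finding the dominant color modes of an image with mean shift, can be sketched with scikit-learn's MeanShift (assumed available). The "image" here is a synthetic pixel array rather than a real photograph, and the bandwidth is an arbitrary choice.

```python
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(5)
# Stand-in image: two dominant colors plus noise, flattened to (n_pixels, 3) RGB.
pixels = np.vstack([
    rng.normal((200, 40, 40), 12, (400, 3)),    # reddish region
    rng.normal((30, 60, 180), 12, (400, 3)),    # bluish region
]).clip(0, 255)

ms = MeanShift(bandwidth=30).fit(pixels)
print("representative colors (cluster modes):")
print(ms.cluster_centers_.round(0))             # these become the indexed descriptors
```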
575 Fluorescence Quenching as an Efficient Tool for Sensing Application: Study on the Fluorescence Quenching of Naphthalimide Dye by Graphene Oxide
Authors: Sanaz Seraj, Shohre Rouhani
Abstract:
Recently, graphene has gained much attention because of its unique optical, mechanical, electrical, and thermal properties. Graphene has been used as a key material in technological applications in various areas such as sensors, drug delivery, supercapacitors, transparent conductors, and solar cells. It has a superior quenching efficiency for various fluorophores. Based on these unique properties, optical sensors with graphene materials as the energy acceptors have demonstrated great success in recent years. During quenching, the emission of a fluorophore is perturbed by a quencher, which can be a substrate or biomolecule, and owing to this phenomenon, fluorophore-quencher pairs have been used for the selective detection of target molecules. Among fluorescent dyes, 1,8-naphthalimide is well known as a typical intramolecular charge transfer (ICT) and photo-induced electron transfer (PET) fluorophore, with strong absorption and emission in the visible region, high photostability, and a large Stokes shift. Derivatives of 1,8-naphthalimide have found applications in some areas, especially fluorescence sensors. Herein, fluorescence quenching by graphene oxide has been studied on a naphthalimide dye as a fluorescent probe model. The quenching ability of graphene oxide on the naphthalimide dye was studied by UV-Vis and fluorescence spectroscopy. This study showed that graphene is an efficient quencher for fluorescent dyes and can therefore be used as a suitable candidate sensing platform. To the best of our knowledge, studies on the quenching and absorption of naphthalimide dyes by graphene oxide are rare.
Keywords: Fluorescence, graphene oxide, naphthalimide dye, quenching.
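Quenching data of this kind are conventionally analyzed with the Stern-Volmer relation, F₀/F = 1 + K_SV[Q]. The snippet below fits K_SV by linear least squares to invented intensity readings, not the paper's measurements.

```python
import numpy as np

# Hypothetical readings: quencher (graphene oxide) concentration vs. fluorescence.
Q = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # quencher concentration (a.u.)
F = np.array([1000, 830, 715, 622, 556, 500.0])      # emission intensity (a.u.)

ratio = F[0] / F                                     # F0 / F
ksv, intercept = np.polyfit(Q, ratio, 1)             # Stern-Volmer: F0/F = 1 + Ksv * [Q]
print(f"K_SV ~ {ksv:.3f} (intercept {intercept:.2f}, ideally 1)")
```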