Search results for: Computer forensics
124 Design and Performance Comparison of Metamaterial Based Antenna for 4G/5G Mobile Devices
Authors: Jalal Khan, Daniyal Ali Sehrai, Shakeel Ahmad
Abstract:
This paper presents the design and performance evaluation of a multiband metamaterial-based antenna operating in the 3.6 GHz (4G), 14.33 GHz, and 28.86 GHz (5G) frequency bands for future mobile and handheld devices. The radiating element of the proposed design is made of a conductive material supported by a 1.524 mm thick Rogers-4003 substrate with a relative dielectric constant of 3.55 and a loss tangent of 0.0027. The substrate is backed by a truncated ground plane. Future mobile communication systems are based on higher frequencies, which are highly affected by atmospheric conditions. Therefore, to overcome the path loss problem, essential enhancements and improvements must be made to the overall performance of the antenna. The traditional ground plane does not provide in-phase reflection and surface wave suppression, due to which side and back lobes are produced. This affects the antenna performance in terms of gain and efficiency. To enhance the overall performance of the antenna, a metamaterial acting as a high impedance surface (HIS) is used as a reflector in the proposed design. The simulated gain of the metamaterial-based antenna is enhanced from 2.76 to 6.47 dB, from 4.83 to 6.71 dB, and from 7.52 to 7.73 dB at 3.6, 14.33 and 28.89 GHz, respectively, relative to the gain of the antenna backed by a traditional ground plane. The proposed antenna radiates efficiently, with a radiation efficiency above 85% in all three frequency bands, both with and without the metamaterial surface. The total volume of the antenna is (L x W x h = 45 x 40 x 1.524) mm3. The antenna can potentially be used for wireless handheld devices and mobile terminals. All the simulations have been performed using the Computer Simulation Technology (CST) software.
Keywords: Multiband, fourth generation (4G), fifth generation (5G), metamaterial, CST MWS.
123 Fast Factored DCT-LMS Speech Enhancement for Performance Enhancement of Digital Hearing Aid
Authors: Sunitha. S.L., V. Udayashankara
Abstract:
Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for sensorineural loss patients. Several investigations on speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal hearing subjects. This paper describes a Discrete Cosine Transform Power Normalized Least Mean Square (DCT-LMS) algorithm to improve the SNR and the convergence rate of the LMS for sensorineural loss patients. Since it requires only real arithmetic, it establishes a faster convergence rate as compared to the time domain LMS, and the transformation also improves the eigenvalue distribution of the input autocorrelation matrix of the LMS filter. The DCT has good ortho-normal, separable, and energy compaction properties. Although the DCT does not separate frequencies, it is a powerful signal decorrelator. It is a real valued function and thus can be effectively used in real-time operation. The advantages of DCT-LMS as compared to the standard LMS algorithm are shown via SNR and eigenvalue ratio computations. Exploiting the symmetry of the basis functions, the DCT transform matrix [AN] can be factored into a series of ±1 butterflies and rotation angles. This factorization results in one of the fastest DCT implementations. There are different ways to obtain factorizations; this work uses the fast factored DCT algorithm developed by Chen and co-workers. The computer simulation results show the superior convergence characteristics of the proposed algorithm, improving the SNR by at least 10 dB for input SNRs less than or equal to 0 dB, with faster convergence speed and better time and frequency characteristics.
Keywords: Hearing impairment, DCT adaptive filter, sensorineural loss patients, convergence rate.
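To make the transform-domain filtering concrete, the following is a minimal sketch of a power-normalized DCT-LMS update in Python (NumPy/SciPy); the filter length, step size, smoothing constant, and the toy signals are illustrative assumptions, not parameters reported by the authors.

import numpy as np
from scipy.fft import dct

def dct_lms(x, d, N=16, mu=0.05, beta=0.9, eps=1e-6):
    # Power-normalized DCT-LMS: the tap vector is decorrelated by an
    # orthonormal DCT and each bin update is normalized by its running power.
    w = np.zeros(N)                          # adaptive weights in the DCT domain
    p = np.full(N, eps)                      # per-bin power estimates
    y, e = np.zeros(len(x)), np.zeros(len(x))
    for n in range(N, len(x)):
        u = dct(x[n-N:n][::-1], norm='ortho')
        p = beta * p + (1.0 - beta) * u**2
        y[n] = w @ u                         # filter output (enhanced sample)
        e[n] = d[n] - y[n]                   # error against the desired signal
        w += mu * e[n] * u / (p + eps)       # normalized LMS update
    return y, e

# Toy usage: adapt toward a smoothed reference of a noisy input.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)                       # noisy input
d = np.convolve(x, np.ones(4) / 4, mode='same')     # desired reference
y, e = dct_lms(x, d)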
122 Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores
Authors: Ankit Sinha, Soham Banerjee, Pratik Chattopadhyay
Abstract:
Automated product recognition in retail stores is an important real-world application in the domain of Computer Vision and Pattern Recognition. In this paper, we consider the problem of automatically identifying the classes of the products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon existing approaches in terms of effectiveness and memory requirement by developing a two-stage object detection and recognition pipeline comprising a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned using appropriate data sets for better prediction, and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. This encoder is trained using a triplet loss function following the strategy of online hard-negative mining for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product object placed in a rack image. Extensive experiments using the Grozi-32k and GP-180 data sets verify the effectiveness of the proposed model.
Keywords: Retail stores, Faster-RCNN, object localization, ResNet-18, triplet loss, data augmentation, product recognition.
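As an illustration of the training objective described above, the sketch below computes a batch-level triplet loss with online hard-negative mining on embedding vectors; the margin, batch size, and embedding dimension are assumptions for demonstration, not the authors' settings.

import numpy as np

def hard_negative_triplet_loss(emb, labels, margin=0.2):
    # emb: (B, D) L2-normalized embeddings; labels: (B,) class ids.
    # For each anchor, use the farthest positive and the closest (hardest)
    # negative in the batch, then apply the hinge triplet loss.
    emb, labels = np.asarray(emb, float), np.asarray(labels)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)  # pairwise distances
    same = labels[:, None] == labels[None, :]
    losses = []
    for i in range(len(emb)):
        pos = d[i][same[i] & (np.arange(len(emb)) != i)]
        neg = d[i][~same[i]]
        if len(pos) and len(neg):
            losses.append(max(0.0, pos.max() - neg.min() + margin))
    return float(np.mean(losses)) if losses else 0.0

emb = np.random.randn(8, 128)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
print(hard_negative_triplet_loss(emb, labels))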
121 Identifying E-Learning Components at North-West University, Mafikeng Campus
Authors: Sylvia Tumelo Nthutang, Nehemiah Mavetera
Abstract:
Educational institutions are under pressure from their competitors. Regulators and community groups need educational institutions to adopt appropriate business and organizational practices. Globally, educational institutions are now using e-learning as the best teaching and learning approach. E-learning is becoming the center of attention for learning institutions, educational systems and software inventors. North-West University (NWU) is currently using eFundi, a Learning Management System (LMS). An LMS encompasses all information systems and procedures that add value to students' learning and support the learning material in text or any multimedia files. With various e-learning tools, students are able to access all the materials related to the course in electronic form. The study was tasked with identifying the e-learning components at the NWU, Mafikeng campus. A quantitative research methodology was used for data collection and descriptive statistics for data analysis. Activity Theory (AT) was used as the theory to guide the study. AT outlines the limitations of e-learning at the macro-organizational level (plan, guiding principle, campus-wide solutions) and the micro-organizational level (daily functioning practice, collaborative transformation, specific adaptation). In a technological environment, AT gives people an opportunity to move beyond concentrating on computers as an area of concern and to understand that technology is part of human activities. The findings identified the university's current IT tools and knowledge of e-learning elements. It was recommended that the university consider buying computer resources that consume less power and practice e-learning effectively.
Keywords: E-learning, information and communication technology, teaching, and virtual learning environment.
120 An Extensible Software Infrastructure for Computer Aided Custom Monitoring of Patients in Smart Homes
Authors: Ritwik Dutta, Marilyn Wolf
Abstract:
This paper describes the tradeoffs and the design from scratch of a self-contained, easy-to-use health dashboard software system that provides customizable data tracking for patients in smart homes. The system is made up of different software modules and comprises a front-end and a back-end component. Built with HTML, CSS, and JavaScript, the front-end allows adding users, logging into the system, selecting metrics, and specifying health goals. The backend consists of a NoSQL Mongo database, a Python script, and a SimpleHTTPServer written in Python. The database stores user profiles and health data in JSON format. The Python script makes use of the PyMongo driver library to query the database and displays formatted data as a daily snapshot of user health metrics against target goals. Any number of standard and custom metrics can be added to the system, and corresponding health data can be fed automatically, via sensor APIs or manually, as text or picture data files. A real-time METAR request API permits correlating weather data with patient health, and an advanced query system is implemented to allow trend analysis of selected health metrics over custom time intervals. Available on the GitHub repository system, the project is free to use for academic purposes of learning and experimenting, or practical purposes by building on it.
Keywords: Flask, Java, JavaScript, health monitoring, long term care, Mongo, Python, smart home, software engineering, webserver.
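For readers unfamiliar with the back-end flow, a minimal PyMongo sketch of the kind of query described above is shown below; the database name, collection name, and field names are hypothetical, since the paper does not list its schema.

from datetime import datetime, timedelta
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # local Mongo instance
db = client["health_dashboard"]                     # hypothetical database name

def daily_snapshot(user_id, metric, days=1):
    # Return the selected metric for the last `days` days, newest first,
    # ready to be compared against the user's target goal.
    since = datetime.utcnow() - timedelta(days=days)
    cursor = db["health_data"].find(
        {"user_id": user_id, "metric": metric, "timestamp": {"$gte": since}}
    ).sort("timestamp", -1)
    return list(cursor)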
119 Image Magnification Using Adaptive Interpolation by Pixel Level Data-Dependent Geometrical Shapes
Authors: Muhammad Sajjad, Naveed Khattak, Noman Jafri
Abstract:
The world has entered the 21st century. The technology of computer graphics and digital cameras is prevalent, and high-resolution displays and printers are available. Therefore, high-resolution images are needed in order to produce high quality display images and high quality prints. However, since high-resolution images are not usually provided, there is a need to magnify the original images. One common difficulty in previous magnification techniques is preserving details, i.e. edges, while at the same time smoothing the data so as not to introduce spurious artefacts. A definitive solution to this is still an open issue. In this paper, an image magnification technique using adaptive interpolation by pixel level data-dependent geometrical shapes is proposed that tries to take into account information about the edges (sharp luminance variations) and smoothness of the image. It calculates a threshold, classifies the interpolation region in the form of geometrical shapes, and then assigns suitable values to the undefined pixels inside the interpolation region while preserving the sharp luminance variations and smoothness at the same time. The results of the proposed technique have been compared qualitatively and quantitatively with five other techniques. The qualitative results show that the proposed method clearly outperforms nearest neighbour (NN), bilinear (BL) and bicubic (BC) interpolation. The quantitative results are competitive and consistent with NN, BL, BC and others.
Keywords: Adaptive, digital image processing, image magnification, interpolation, geometrical shapes, qualitative & quantitative analysis.
118 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation
Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar
Abstract:
Advances in computer technology and commercial computational fluid dynamics (CFD) simulation now give more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluating pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is a convenient tool for resolving that type of problem. Erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multi-phase Euler-Lagrange model was adopted to predict the solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, and evaluates the erosion wear rate for velocities varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and gravitational effect on the solid particulate, which leads to high erosion at the bottom of the pipeline.
Keywords: Computational fluid dynamics, erosion, slurry transportation, k-ε model.
117 Stability of Concrete Moment Resisting Frames in View of Current Codes Requirements
Authors: Mahmoud A. Mahmoud, Ashraf Osman
Abstract:
In this study, the different approaches currently followed by design codes to assess the stability of buildings utilizing the concrete moment resisting frame structural system are evaluated. For this purpose, a parametric study was performed. It involved analyzing a group of concrete moment resisting frames having different slenderness ratios (height/width ratios), designed for different lateral to vertical loads ratios and constructed using ordinary reinforced concrete and high strength concrete, for stability check and overall buckling using code approaches and computer buckling analysis. The objectives were to examine the influence of such parameters, which are directly linked to the frames' lateral stiffness, on the buildings' stability and to evaluate the code approaches in view of the buckling analysis results. Based on this study, it was concluded that the buildings most susceptible to instability and magnification of second order effects are those having high aspect ratios (height/width ratio), a low lateral to vertical loads ratio, and utilizing construction materials of high strength. In addition, the study showed that the instability limits imposed by codes are mainly mathematical limits to ensure reliable analysis rather than physical ones, and that they are in general conservative. Also, it has been shown that the upper limit set by one of the codes, that the second order moment for structural elements should be limited to 1.4 times the first order moment, is not justified; instead, the overall story check is more reliable.
Keywords: Buckling, lateral stability, p-delta, second order.
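The 1.4 upper limit discussed above can be expressed as a simple amplified-sway check; the sketch below uses the classical moment magnification factor 1/(1 − P/Pcr) as a stand-in for a full second-order analysis, so the input numbers and the single-factor approximation are purely illustrative.

def second_order_check(M1, P, P_cr, limit=1.4):
    # M1: first-order moment, P: total gravity load on the story,
    # P_cr: elastic critical (buckling) load of the story or member.
    if P >= P_cr:
        raise ValueError("P exceeds the critical load: the frame is unstable")
    amplification = 1.0 / (1.0 - P / P_cr)   # classical P-delta magnifier
    M2 = amplification * M1                  # approximate second-order moment
    return M2, M2 / M1 <= limit              # True if within the 1.4 code limit

M2, ok = second_order_check(M1=100.0, P=2000.0, P_cr=10000.0)  # ratio 1.25 <= 1.4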
116 Automatic Classification of Lung Diseases from CT Images
Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari
Abstract:
Pneumonia is a kind of lung disease that creates congestion in the chest. Such pneumonic conditions lead to loss of life due to the severity of high congestion. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19 induced pneumonia. The early prediction and classification of such lung diseases help reduce the mortality rate. We propose an automatic Computer-Aided Diagnosis (CAD) system in this paper using the deep learning approach. The proposed CAD system takes input from raw computerized tomography (CT) scans of the patient's chest and automatically predicts the disease classification. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are pre-processed first to enhance their quality for further analysis. We then applied a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract the automatic features from the pre-processed CT image. This CNN model assures feature learning with extremely effective 1D feature extraction for each input CT image. The outcome of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model is related to training and classification using different classifiers. The simulation outcomes using the publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
Keywords: CT scans, COVID-19, deep learning, image processing, pneumonia, lung disease.
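The Min-Max step mentioned above is simple enough to show directly; the sketch below rescales a feature vector extracted by the CNN to the [0, 1] range before it is passed to the downstream classifiers (the feature values are made up for illustration).

import numpy as np

def min_max_normalize(features, eps=1e-12):
    # Rescale the feature vector to [0, 1]: (x - min) / (max - min).
    f = np.asarray(features, dtype=float)
    return (f - f.min()) / (f.max() - f.min() + eps)

cnn_features = np.array([0.3, 2.7, -1.1, 5.0])   # hypothetical 1D CNN output
print(min_max_normalize(cnn_features))            # values now lie in [0, 1]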
115 Image Ranking to Assist Object Labeling for Training Detection Models
Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman
Abstract:
Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
Keywords: Computer vision, deep learning, object detection, semiconductor.
114 The Algorithm to Solve the Extended General Malfatti's Problem in a Convex Circular Triangle
Authors: Ching-Shoei Chiang
Abstract:
Malfatti's problem deals with fitting three circles into a right triangle such that these three circles are tangent to each other, and each circle is also tangent to a pair of the triangle's sides. This problem has been extended to any triangle (called the general Malfatti's problem). Furthermore, the problem has been extended to have 1 + 2 + … + n circles inside the triangle with special tangency properties among circles and triangle sides; this is called the extended general Malfatti's problem. In the extended general Malfatti's problem, call it Tri(Tn), where Tn is the triangle number, there are closed-form solutions for the Tri(T₁) (inscribed circle) problem and the Tri(T₂) (3 Malfatti's circles) problem. These problems become more complex when n is greater than 2. In solving the Tri(Tn) problem, n > 2, algorithms have been proposed to solve these problems numerically. With a similar idea, this paper proposes an algorithm to find the radii of circles with the same tangency properties. Instead of the boundary of the triangle being a straight line, we use a convex circular arc as the boundary and try to find Tn circles inside this convex circular triangle with the same tangency properties among circles and boundary as in Tri(Tn) problems. We call these problems the Carc(Tn) problems. The algorithm is an O(m·Tn) algorithm, where m is the number of iterations in the loop. It takes less than 1000 iterations and less than 1 second for the Carc(T16) problem, which finds 136 circles inside a convex circular triangle with specified tangency properties. This algorithm gives a solution for the circle packing problem inside a convex circular triangle with arbitrarily-sized circles. Many applications concerning circle packing may come from the result of the algorithm, such as logo design, architecture design, etc.
Keywords: Circle packing, computer-aided geometric design, geometric constraint solver, Malfatti’s problem.
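As a reference point for the closed-form Tri(T₁) case mentioned above (the inscribed circle of a straight-sided triangle), the sketch below computes the incircle radius and center from the three vertices; the circular-arc boundary handled by the paper's algorithm is not reproduced here.

import math

def incircle(A, B, C):
    # Incircle of triangle ABC: r = area / s; the center is the
    # side-length-weighted average of the vertices (a opposite A, etc.).
    a = math.dist(B, C); b = math.dist(C, A); c = math.dist(A, B)
    s = (a + b + c) / 2.0
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    cx = (a * A[0] + b * B[0] + c * C[0]) / (a + b + c)
    cy = (a * A[1] + b * B[1] + c * C[1]) / (a + b + c)
    return (cx, cy), area / s

center, r = incircle((0, 0), (4, 0), (0, 3))   # 3-4-5 triangle: r = 1, center (1, 1)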
113 Hierarchies Based On the Number of Cooperating Systems of Finite Automata on Four-Dimensional Input Tapes
Authors: Makoto Sakamoto, Yasuo Uchida, Makoto Nagatomo, Takao Ito, Tsunehiro Yoshinaga, Satoshi Ikeda, Masahiro Yokomichi, Hiroshi Furutani
Abstract:
In theoretical computer science, the Turing machine has played a number of important roles in understanding and exploiting basic concepts and mechanisms in computing and information processing [20]. It is a simple mathematical model of computers [9]. Later, M. Blum and C. Hewitt first proposed two-dimensional automata as a computational model of two-dimensional pattern processing, and investigated their pattern recognition abilities in 1967 [7]. Since then, many researchers in this field have been investigating properties of automata on two- or three-dimensional tapes. On the other hand, the question of whether processing four-dimensional digital patterns is much more difficult than two- or three-dimensional ones is of great interest from the theoretical and practical standpoints. Thus, the study of four-dimensional automata as a computational model of four-dimensional pattern processing has been meaningful [8]-[19],[21]. This paper introduces a cooperating system of four-dimensional finite automata as one model of four-dimensional automata. A cooperating system of four-dimensional finite automata consists of a finite number of four-dimensional finite automata and a four-dimensional input tape where these finite automata work independently (in parallel). Those finite automata whose input heads scan the same cell of the input tape can communicate with each other; that is, every finite automaton is allowed to know the internal states of the other finite automata on the same cell it is scanning at the moment. In this paper, we mainly investigate the accepting powers of a cooperating system of eight- or seven-way four-dimensional finite automata. The seven-way four-dimensional finite automaton is an eight-way four-dimensional finite automaton whose input head can move east, west, south, north, up, down, or in the future, but not in the past, on a four-dimensional input tape.
Keywords: computational complexity, cooperating system, finite automaton, four-dimension, hierarchy, multihead.
112 Memristor: A Promising Candidate for Neural Circuits in Neuromorphic Computing Systems
Authors: Juhi Faridi, Mohd. Ajmal Kafeel
Abstract:
The advancements in the field of Artificial Intelligence (AI) and technology have led to the evolution of an intelligent era. Neural networks, having computational power and learning ability similar to the brain, are one of the key AI technologies. A neuromorphic computing system (NCS) consists of the synaptic device, the neuronal circuit, and the neuromorphic architecture. Memristors are a promising candidate for neuromorphic computing systems, but when it comes to neuromorphic computing, the conductance behavior of the synaptic memristor or neuronal memristor needs to be studied thoroughly from both the neuroscience and the computer science perspectives. Furthermore, more simulation work is needed to utilize existing device properties and to provide guidance for the development of future devices with different performance requirements. This work aims to provide an insight into building neuronal circuits using memristors to achieve a memristor-based NCS. Here we shed light on the research conducted in the field of memristors for building analog and digital circuits, in order to motivate research in the field of NCS by building memristor-based neural circuits for advanced AI applications. This review is a step in that direction: we describe the various key findings about memristors and their analog and digital circuits implemented over the years, which can be further utilized in implementing the neuronal circuits in an NCS. This work aims to help electronic circuit designers understand how the research on memristors has progressed and how these findings can be used in implementing the neuronal circuits needed for recent progress in NCS.
Keywords: Analog circuits, digital circuits, memristors, neuromorphic computing systems.
111 Design and Construction Validation of Pile Performance through High Strain Pile Dynamic Tests for both Contiguous Flight Auger and Drilled Displacement Piles
Authors: S. Pirrello
Abstract:
Sydney's booming real estate market has pushed property developers to invest in historically "no-go" areas, which were previously too expensive to develop. These areas are usually near rivers where the sites are underlain by deep alluvial and estuarine sediments. In these ground conditions, conventional bored pile techniques are often not competitive. Contiguous Flight Auger (CFA) and Drilled Displacement (DD) pile techniques are, on the other hand, suitable for these ground conditions. This paper deals with the design and construction challenges encountered with these piling techniques for a series of high-rise towers in Sydney's West. The advantages of DD over CFA piles, such as reduced overall spoil with substantial cost savings and achievable rock sockets in medium strength bedrock, are discussed. Design performances were assessed with PIGLET. Pile performances are validated in two stages: during construction, with the interpretation of real-time data from the piling rigs' on-board computers, and after construction, with analyses of results from high strain pile dynamic testing (PDA). Results are then presented and discussed. High strain testing data are presented as Case Pile Wave Analysis Program (CAPWAP) analyses.
Keywords: Contiguous flight auger, case pile wave analysis, high strain pile, drilled displacement, pile performance.
110 Verification of Space System Dynamics Using the MATLAB Identification Toolbox in Space Qualification Test
Authors: Y. V. Kim
Abstract:
This article presents an approach to the functional testing of a Space System (SS) that could be a space vehicle (spacecraft, S/C) and/or its equipment and components (S/C subsystems). This test should finalize the Space Qualification Tests (SQT) campaign. It could be considered as a generic test and used for a wide class of SS that, from the point of view of System Dynamics and Control Theory, may be described by ordinary differential equations. The suggested methodology is based on using a semi-natural experimental laboratory stand that does not require complicated, precise and expensive technological control-verification equipment. However, it allows for testing the totally assembled system during Assembling, Integration and Testing (AIT) activities at the final phase of SQT, involving system hardware (HW) and software (SW). The test physically activates the system input (sensors) and output (actuators) and requires recording their outputs in real time. The data are then inserted in a laboratory computer, where they are post-processed by the MATLAB/Simulink Identification Toolbox. It allows for estimating the system dynamics in the form of its differential equation coefficients through the verification experimental test and comparing them with the expected mathematical model, previously verified by mathematical simulation during the design process. Mathematical simulation results presented in the article show that this approach could be applicable and helpful in SQT practice. Further semi-natural experiments should specify detailed requirements for the test laboratory equipment and test procedures.
Keywords: System dynamics, space system ground tests, space qualification, system dynamics identification, satellite attitude control, assembling integration and testing.
109 Classification of Acoustic Emission Based Partial Discharge in Oil Pressboard Insulation System Using Wavelet Analysis
Authors: Prasanta Kundu, N.K. Kishore, A.K. Sinha
Abstract:
The insulation used in transformers is mostly oil pressboard insulation. Insulation failure is one of the major causes of catastrophic failure of transformers. It is established that partial discharges (PD) cause insulation degradation and premature failure of insulation. Online monitoring of PDs can reduce the risk of catastrophic failure of transformers. There are different techniques of partial discharge measurement, such as electrical, optical, acoustic, opto-acoustic and ultra high frequency (UHF). Being non-invasive and not prone to interference, the acoustic emission technique is advantageous for online PD measurement. Acoustic detection of PD is based on the retrieval and analysis of mechanical or pressure signals produced by partial discharges. Partial discharges are classified according to the origin of the discharges. Their effects on insulation deterioration are different for different types. This paper reports experimental results and analysis for the classification of partial discharges using acoustic emission signals of laboratory-simulated partial discharges in an oil pressboard insulation system using three different electrode systems. Acoustic emission signals produced by PD are detected by sensors mounted on the experimental tank surface, stored on an oscilloscope and fed to a computer for further analysis. The measured AE signals are analyzed using discrete wavelet transform analysis and wavelet packet analysis. The energy distribution in different frequency bands of the discrete wavelet decomposed signal and the wavelet packet decomposed signal is calculated. These analyses show a distinct feature useful for PD classification. Wavelet packet analysis can sort out any misclassification arising out of DWT in most cases.
Keywords: Acoustic emission, discrete wavelet transform, partial discharge, wavelet packet analysis.
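To illustrate the band-energy feature described above, a minimal PyWavelets sketch is given below; the mother wavelet, decomposition level, and the synthetic test signal are assumptions for demonstration rather than the experimental settings.

import numpy as np
import pywt

def band_energies(signal, wavelet="db4", level=5):
    # Discrete wavelet decomposition of an AE signal; the relative energy
    # in each band is the kind of feature used for PD classification.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c**2) for c in coeffs])
    return energies / energies.sum()         # normalized energy per band

rng = np.random.default_rng(0)
ae = rng.standard_normal(4096)               # stand-in for a recorded AE burst
print(band_energies(ae))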
108 Simulation Aided Life Cycle Sustainability Assessment Framework for Manufacturing Design and Management
Authors: Mijoh A. Gbededo, Kapila Liyanage, Ilias Oraifige
Abstract:
Decision making for sustainable manufacturing design and management requires critical consideration due to the complexity and partly conflicting issues of economic, social and environmental factors. Although there are tools capable of assessing a combination of one or two of the sustainability factors, existing frameworks have not adequately integrated all three factors. A case study and a review of existing simulation applications also show that current approaches lack integration of the sustainability factors. In this paper, we discuss the development of a simulation-based framework to support a holistic assessment of sustainable manufacturing design and management. To achieve this, a strategic approach is introduced to investigate the strengths and weaknesses of the existing decision supporting tools. The investigation reveals that Discrete Event Simulation (DES) can serve as a foundation for other Life Cycle Analysis frameworks. The Simio DES application optimizes systems for both economic and competitive advantage, Granta CES EduPack and SimaPro collate data for Material Flow Analysis and environmental Life Cycle Assessment, while social and stakeholders' analysis is supported by the Analytical Hierarchy Process (AHP), a Multi-Criteria Decision Analysis method. Such a common and integrated framework creates a platform for companies to build a computer simulation model of a real system and assess the impact of alternative solutions before implementing a chosen solution.
Keywords: Discrete event simulation, life cycle sustainability analysis, manufacturing, sustainability.
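Since the framework leans on the Analytical Hierarchy Process for the social/stakeholder dimension, a small sketch of the standard AHP priority-vector computation is included below; the 3 x 3 pairwise comparison matrix is invented purely to show the mechanics.

import numpy as np

def ahp_weights(pairwise):
    # The principal right eigenvector of the pairwise comparison matrix,
    # normalized to sum to 1, gives the AHP priority weights.
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    weights = np.abs(principal)
    return weights / weights.sum()

# Hypothetical comparison of economic, environmental, and social criteria.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(ahp_weights(A))   # roughly [0.65, 0.23, 0.12]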
107 A Novel Multiple Valued Logic OHRNS Modulo rn Adder Circuit
Authors: Mehdi Hosseinzadeh, Somayyeh Jafarali Jassbi, Keivan Navi
Abstract:
The Residue Number System (RNS) is a modular representation and has proved to be an instrumental tool in many digital signal processing (DSP) applications which require high-speed computations. RNS is an integer, non-weighted number system; it can support parallel, carry-free, high-speed and low power arithmetic. A very interesting correspondence exists between the concepts of Multiple Valued Logic (MVL) and residue number arithmetic. If the number of levels used to represent MVL signals is chosen to be consistent with the moduli which create the finite rings in the RNS, MVL becomes a very natural representation for the RNS. There are two concerns related to the application of this number system: reaching the highest possible speed and the largest dynamic range. There is a conflict when one wants to resolve both of these problems; that is, augmenting the dynamic range reduces the speed at the same time. To achieve the best performance, a method named the "One-Hot Residue Number System" (OHRNS) is considered; in this implementation the propagation delay is only equal to one transistor delay. The problem with this method is the huge increase in the number of transistors, which grows in the order of m². In real applications this is practically impossible. In this paper, combining Multiple Valued Logic and the One-Hot Residue Number System, we present a new method to resolve both of these problems. We present a novel design of an OHRNS-based adder circuit. This circuit is usable for Multiple Valued Logic moduli; in comparison to other RNS designs, it considerably improves the number of transistors and the power consumption.
Keywords: Computer Arithmetic, Residue Number System, Multiple Valued Logic, One-Hot, VLSI.
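To make the residue arithmetic concrete, the following sketch shows carry-free modular addition over a small moduli set and reconstruction by the Chinese Remainder Theorem; the moduli set {3, 5, 7} is a textbook choice, not the one targeted by the proposed circuit.

from math import prod

MODULI = (3, 5, 7)                        # pairwise coprime, dynamic range 105

def to_rns(x, moduli=MODULI):
    return tuple(x % m for m in moduli)   # residue (one-hot encodable) digits

def rns_add(a, b, moduli=MODULI):
    # Digit-wise, carry-free addition: each modulus channel works independently.
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, moduli))

def from_rns(r, moduli=MODULI):
    # Chinese Remainder Theorem reconstruction back to a weighted integer.
    M = prod(moduli)
    return sum(ri * (M // m) * pow(M // m, -1, m) for ri, m in zip(r, moduli)) % M

assert from_rns(rns_add(to_rns(23), to_rns(45))) == (23 + 45) % prod(MODULI)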
106 Analyzing the Usage of Social Media: A Study on Elderly in Malaysia
Authors: Chan Eang Teng, Tang Mui Joo
Abstract:
When social media first became prevalent, the young adult age group made up the largest population of users. However, apart from users becoming younger and younger, the elderly have become a new force on social media, and this age group has grown rapidly. On top of that, the influence of social media on the elderly is becoming more significant, and it is even trending among them, because basic computer knowledge was not instilled in their lives when they were young. This age group tends to become more engrossed than the young, as social media is something new for them and they see it as a new platform for approaching things, so they tend to be more engrossed once they start getting in touch with social media. Generally, most social media have been accepted and accessed by teenagers and young adults, and it would be reasonable to assume that social media are not really accepted among the elderly. Surprisingly, the elderly are more addicted to social media than teenagers. Therefore, this study is to determine and understand the relationship between the elderly and social media, and how they employ social media in their lives. An online survey of 200 elderly people aged 45-80 and an interview with a media expert were conducted to answer the main questions in the research paper. The Uses and Gratifications Approach is employed as the theoretical framework. Findings revealed that the majority of the respondents use social media to connect with family, friends, and for leisure purposes. The findings concluded that the elderly use social media differently according to their needs and wants, which is in line with the emphasis of Uses and Gratifications theory. Considering the significantly large role social media plays in our culture and daily life today, the findings shed some light on the effect of social media on the elderly or senior citizens, who are usually relegated to a minority group in today's age where the internet and social media are of great importance to our society and humanity in general. This may also serve to be useful in understanding behavioral patterns and preferences in terms of social media usage among the elderly.
Keywords: Elderly, Facebook, Malaysia, social media.
105 Investigating the Effectiveness of a 3D Printed Composite Mold
Authors: Peng Hao Wang, Garam Kim, Ronald Sterkenburg
Abstract:
In composite manufacturing, the fabrication of tooling and tooling maintenance contributes to a large portion of the total cost. However, as the applications of composite materials continue to increase, there is also a growing demand for more tooling. The demand for more tooling places heavy emphasis on the industry's ability to fabricate high quality tools while maintaining the tool's cost effectiveness. One of the popular techniques of tool fabrication currently being developed utilizes additive manufacturing technology known as 3D printing. The popularity of 3D printing is due to 3D printing's ability to maintain low material waste, low cost, and quick fabrication time. In this study, a team of Purdue University School of Aviation and Transportation Technology (SATT) faculty and students investigated the effectiveness of a 3D printed composite mold. A steel valve cover from an aircraft reciprocating engine was modeled utilizing 3D scanning and computer-aided design (CAD) to create a 3D printed composite mold. The mold was used to fabricate carbon fiber versions of the aircraft reciprocating engine valve cover. The carbon fiber valve covers were evaluated for dimensional accuracy and quality while the 3D printed composite mold was evaluated for durability and dimensional stability. The data collected from this study provided valuable information in the understanding of 3D printed composite molds, potential improvements for the molds, and considerations for future tooling design.
Keywords: Additive manufacturing, carbon fiber, composite tooling, molds.
104 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code
Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader
Abstract:
In an attempt to enrich the lives of billions of people by providing proper information, security and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes, which are capable of protecting the data onboard the satellites. The paper is aimed towards detecting and correcting such errors using a special algorithm called the Hamming Code, which uses the concept of parity and parity bits to correct single-bit errors onboard a satellite in Low Earth Orbit. This paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming Code matrix to be used for EDAC using computer programs. The most effective version of Hamming Code generated was the Hamming (16, 11, 4) version using MATLAB, and the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming Codes and Cyclic Redundancy Check (CRC), and discusses the limitations of this scheme. This particular version of the Hamming Code guarantees single-bit error correction as well as double-bit error detection. Furthermore, this version of the Hamming Code has proved to be fast, with a checking time of 5.669 nanoseconds; it has a relatively higher code rate and lower bit overhead compared to the other versions, and can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with the proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
Keywords: Bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset.
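A small Python sketch of the Hamming (16, 11, 4) scheme described above is given below: 11 data bits, 4 parity bits at the power-of-two positions, plus an overall parity bit that makes the code single-error-correcting and double-error-detecting. It mirrors the construction the paper generates in MATLAB, but the code itself is only an illustrative reimplementation, not the authors' program.

def hamming_16_11_encode(data):
    # data: list of 11 bits -> 16-bit codeword (15-bit Hamming + overall parity).
    assert len(data) == 11
    code = [0] * 16                        # positions 1..15 used; index 0 unused
    it = iter(data)
    for pos in range(1, 16):
        if pos & (pos - 1):                # not a power of two -> data position
            code[pos] = next(it)
    for p in (1, 2, 4, 8):                 # each parity bit covers positions with bit p set
        code[p] = sum(code[i] for i in range(1, 16) if i & p) % 2
    overall = sum(code[1:]) % 2            # extended parity for double-error detection
    return code[1:] + [overall]

def hamming_16_11_decode(word):
    # Returns (corrected 15-bit Hamming block, status string).
    bits = [0] + list(word[:15])
    syndrome = 0
    for i in range(1, 16):
        if bits[i]:
            syndrome ^= i                  # XOR of positions holding a 1
    overall_even = sum(word) % 2 == 0
    if syndrome and not overall_even:
        bits[syndrome] ^= 1                # single-bit error: correct it
        return bits[1:], "single error corrected"
    if syndrome and overall_even:
        return bits[1:], "double error detected"
    if not syndrome and not overall_even:
        return bits[1:], "overall parity bit corrected"
    return bits[1:], "no error"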
103 Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images
Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj
Abstract:
Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part, because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is to quantify the cellular concentrations of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates to the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescent microscopy FISH images. In this work, the initial steps toward these goals were taken by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods, allowing subtraction of spurious signals and non-biological fluorescent substrata. This method will be a robust and user-friendly approach which will enable users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescent images for quantitative analysis of biofilm heterogeneity.
Keywords: Image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization.
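The core thresholding step can be illustrated with scikit-image; the sketch below applies Otsu's threshold to a counterstain channel and removes small spurious objects. The filename, channel choice, and minimum object size are placeholders, and Otsu is used here as a generic stand-in for the paper's specific thresholding approach.

import numpy as np
from skimage import io, filters, morphology, measure

img = io.imread("fish_counterstain.tif")            # hypothetical FISH image
if img.ndim == 3:
    img = img[..., 0]                                # keep one fluorescence channel

thresh = filters.threshold_otsu(img)                 # global Otsu threshold
mask = img > thresh
mask = morphology.remove_small_objects(mask, min_size=64)   # drop spurious signals

# Mean intensity inside each detected biofilm region, for heterogeneity analysis.
labels = measure.label(mask)
props = measure.regionprops(labels, intensity_image=img)
print([round(p.mean_intensity, 1) for p in props])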
102 Analyzing the Effect of Materials' Selection on Energy Saving and Carbon Footprint: A Case Study Simulation of Concrete Structure Building
Authors: M. Kouhirostamkolaei, M. Kouhirostami, M. Sam, J. Woo, A. T. Asutosh, J. Li, C. Kibert
Abstract:
Construction is one of the most energy-consuming activities in the urban environment and results in a significant amount of greenhouse gas emissions around the world; the impact of the construction industry on global warming is undeniable. Reducing building energy consumption and mitigating carbon production can therefore slow the rate of global warming. The purpose of this study is to determine the amount of energy consumption and carbon dioxide production during the operation phase and the impact of using new shells on energy saving and carbon footprint. A residential building with a reinforced concrete structure is selected in Babolsar, Iran. DesignBuilder software has been used for one year of building operation to calculate the amount of carbon dioxide production and energy consumption in the operation phase of the building. The primary results show the building uses 61,750 kWh of energy each year. Computer simulation analyzes the effect of changing the building shells (using XPS polystyrene and new electrochromic windows) as well as changing the type of lighting on energy consumption reduction and subsequent carbon dioxide production. The results show that the amount of energy and carbon production during building operation has been reduced by approximately 70% by applying the proposed changes. The changes reduce emissions to 11,345 kg CO2e/yr. The results of this study help designers and engineers to consider the material selection process as one of the most important stages of design for improving the energy performance of buildings.
Keywords: Construction materials, green construction, energy simulation, carbon footprint, energy saving, concrete structure, DesignBuilder.
101 Two-Dimensional Observation of Oil Displacement by Water in a Petroleum Reservoir through Numerical Simulation and Application to a Petroleum Reservoir
Authors: Ahmad Fahim Nasiry, Shigeo Honma
Abstract:
We examine two-dimensional oil displacement by water in a petroleum reservoir. The pore fluids are immiscible, and the porous medium is homogeneous and isotropic in the horizontal direction. Buckley-Leverett theory and a combination of the Laplacian and Darcy's law are used to study the fluid flow through the porous medium, and the Laplacian that defines the dispersion and diffusion of fluid in the sand, using heavy oil, is discussed. The reservoir is homogeneous in the horizontal direction, as expressed by the partial differential equation. The two main factors observed are the water saturation and the pressure distribution in the reservoir, and they are evaluated for predicting oil recovery in two dimensions by a physical and mathematical simulation model. We review the numerical simulation that solves the difficult partial differential reservoir equations. Based on the numerical simulations, the saturation and pressure equations are calculated by the iterative alternating direction implicit method and the iterative alternating direction explicit method, respectively, under the finite difference approximation. However, to better understand the displacement of oil by water and the amount of water dispersion in the reservoir, an interpolated contour line of the water distribution of the five-spot pattern, which provides an approximate solution that agrees well with the experimental results, is also presented. Finally, a computer program is developed to solve the equations for pressure and water saturation and to draw the pressure contour line and water distribution contour line for the reservoir.
Keywords: Numerical simulation, immiscible, finite difference, IADI, IADE, waterflooding.
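For intuition on the saturation equation being solved, a one-dimensional, dimensionless Buckley-Leverett sketch with an explicit upwind scheme is included below; the Corey-type fractional flow function and the mobility ratio are assumptions, and the paper's 2D IADI/IADE formulation is not reproduced.

import numpy as np

def fractional_flow(S, M=2.0):
    # Corey-type fractional flow of water (assumed form, not from the paper).
    return S**2 / (S**2 + (1.0 - S)**2 / M)

nx, nsteps = 100, 200
dx, dt = 1.0 / nx, 0.002                 # dt chosen to satisfy the CFL condition
S = np.zeros(nx)
S[0] = 1.0                               # water injected at the left boundary

for _ in range(nsteps):
    f = fractional_flow(S)
    S[1:] -= (dt / dx) * (f[1:] - f[:-1])   # first-order upwind update of dS/dt + df/dx = 0
    S[0] = 1.0

print(S[:10].round(3))                   # water saturation profile near the injector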
100 Optimal Image Representation for Linear Canonical Transform Multiplexing
Authors: Navdeep Goel, Salvador Gabarda
Abstract:
Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted in a multiplexing time-frequency domain channel. Multiplexing involves packing signals together whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4 × 4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials and singular value decomposition (SVD). The results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients are later encoded and used to generate chirps at a target rate of about two chirps per 4 × 4 pixel block, and then submitted to a transmission multiplexing operation in the time-frequency domain.
Keywords: Chirp signals, image multiplexing, image transformation, linear canonical transform, polynomial approximation.
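A minimal sketch of the block-approximation idea follows: a bilinear polynomial is fitted to each 4 × 4 block by least squares, reducing 16 gray levels to 4 coefficients. The choice of basis is an illustrative assumption; the paper itself compares several representations (Vandermonde, Chebyshev, SVD).

import numpy as np

def fit_block(block):
    # Fit z = a + b*x + c*y + d*x*y to a 4x4 pixel block by least squares.
    y, x = np.mgrid[0:4, 0:4]
    A = np.column_stack([np.ones(16), x.ravel(), y.ravel(), (x * y).ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
    return A, coeffs

block = np.random.randint(0, 256, (4, 4)).astype(float)   # one image block
A, coeffs = fit_block(block)
approx = (A @ coeffs).reshape(4, 4)
mse = np.mean((block - approx) ** 2)   # error between original and approximated gray levels
print(coeffs.round(2), round(mse, 2))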
99 Applying Kinect on the Development of a Customized 3D Mannequin
Authors: Shih-Wen Hsiao, Rong-Qi Chen
Abstract:
In the field of fashion design, the 3D mannequin is a kind of assisting tool that can rapidly realize design concepts. When the concept of the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. Thus, it is critical to develop a 3D mannequin module that corresponds to the needs of fashion design. This research proposes a concrete plan for developing and constructing a 3D mannequin system with Kinect. Ergonomic measurements of the subject's features can be attained in real time through the Kinect depth camera, and mesh morphing can then be implemented by transforming the locations of the control points on the model with those ergonomic data, to obtain an exclusive 3D mannequin model. In the proposed methodology, after the scanned points from the Kinect are revised for accuracy and smoothness, a complete human figure is reconstructed by the ICP algorithm together with image processing. The subject's features can also be recognized and analyzed to obtain real measurements. Furthermore, the ergonomic measurement data can be applied to shape morphing for the division of the 3D mannequin reconstructed by feature curves. Since a standardized and customer-oriented 3D mannequin can be generated by subdivision, the research can be applied to fashion design or to the presentation and display of 3D virtual clothes. In order to examine the practicality of the research structure, a 3D mannequin system is constructed with a Java program in this study. Experiments verify the practicability of the research results.
Keywords: 3D mannequin, Kinect scanner, iterative closest point, shape morphing, subdivision.
98 Competency and Strategy Formulation in Automobile Industry
Authors: Chandan Deep Singh
Abstract:
Nowadays, companies face rapid competition in terms of customer requirements to be satisfied, new technologies to be integrated into future products, new safety regulations to be followed, and new computer-based tools to be introduced into design activities that become more scientific. In today's highly competitive market, survival focuses on various factors such as quality, innovation, adherence to standards, and rapid response as the basis for competitive advantage. For competitive advantage, companies have to develop various competencies: for improving the capability of suppliers and for strengthening the process of integrating technology. For more competitiveness, organizations should operate in a strategy-driven way and have a strategic architecture for developing core competencies. Traditional ways to gain such experience and develop competencies tend to take a lot of time and are expensive. A new learning environment, which is built around a gaming engine, supports the development of competencies in specific subject areas. Technology competencies have a significant role in firm innovation and competitiveness; they interact with the competitive environment. Technological competencies vary according to the type of competitive environment, thus enhancing firm innovativeness. Technological competency is gained through extensive experimentation and learning in its research, development and employment in manufacturing. This is a review paper based on competency and the strategic success of the automobile industry. The aim here is to study strategy formulation and competency tools in the industry. This work is a review of literature related to competency and strategy in the automobile industry, involving a review of 34 papers.
Keywords: Competency, competitiveness, manufacturing competency, strategic formulation.
97 Interactive Effects in Blended Learning Mode: Exploring Hybrid Data Sources and Iterative Linkages
Authors: Hock Chuan, Lim
Abstract:
This paper presents an approach for identifying interactive effects using Network Science (NS) supported by Social Network Analysis (SNA) techniques. Based on the general observations that learning processes and behaviors are shaped by social relationships and influenced by the learning environment, the central idea was to understand both the human and non-human interactive effects for a blended learning mode of delivering computer science modules. Important findings include (a) the importance of non-human nodes in influencing centrality and transfer; and (b) the degree of non-human and human connectivity impacts learning. This project reveals that the NS pattern and connectivity, as measured by node relationships, offer an alternative approach for hypothesis generation and the design of qualitative data collection. An iterative process further reinforces the analysis. While the experimental simulation option is itself an interesting alternative, a hybrid combination of experimental simulation and qualitative data collection presents itself as a promising and viable means to study complex scenarios such as the blended learning delivery mode. The primary value of this paper lies in the design of the approach for studying the interactive effects of human (social nodes) and non-human (learning/study environment and Information and Communication Technology (ICT) infrastructure nodes) components. In conclusion, this project adds to the understanding and the use of SNA to model and study interactive effects in blended social learning.
Keywords: Blended learning, network science, social learning, social network analysis, study environment.
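The centrality measurements described above can be reproduced with NetworkX; the sketch below builds a toy network of human (student) and non-human (LMS, lab environment) nodes and compares their degree and betweenness centrality. The node and edge lists are invented for illustration only.

import networkx as nx

G = nx.Graph()
students = ["s1", "s2", "s3", "s4"]                 # human (social) nodes
resources = ["LMS", "lab_env"]                      # non-human nodes
G.add_nodes_from(students, kind="human")
G.add_nodes_from(resources, kind="non-human")
G.add_edges_from([("s1", "s2"), ("s2", "s3"),       # peer interactions
                  ("s1", "LMS"), ("s2", "LMS"), ("s3", "LMS"), ("s4", "LMS"),
                  ("s3", "lab_env"), ("s4", "lab_env")])

deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G)
for node in G:
    print(node, G.nodes[node]["kind"], round(deg[node], 2), round(btw[node], 2))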
96 Numerical Simulation of Free Surface Water Wave for the Flow around NACA 0012 Hydrofoil and Wigley Hull Using VOF Method
Authors: Saadia Adjali, Omar Imine, Mohammed Aounallah, Mustapha Belkadi
Abstract:
Steady three-dimensional and two-dimensional free surface waves generated by moving bodies are presented. The flow problem to be simulated is rich in complexity and poses many modeling challenges because of the existence of breaking waves around the ship hull, and because of the interaction of the two-phase flow with the turbulent boundary layer. The results of several simulations are reported. The first study was performed for the NACA 0012 hydrofoil with different meshes; this section is analyzed at h/c = 1.0345 in 2D. In the second simulation, a mathematically defined Wigley hull form is used to investigate the application of a commercial CFD code in the prediction of the total resistance and its components from tangential and normal forces on the hull wetted surface. The computed resistance and wave profiles are used to estimate the coefficient of the total resistance for the Wigley hull advancing in calm water under steady conditions. The commercial CFD software FLUENT version 12 is used for the computations in the present study. The computational grid is established using the code GAMBIT 2.3.26. The k-ω SST shear stress transport model is used for turbulence modeling, and the volume of fluid technique is employed to simulate the free-surface motion. The second order upwind scheme is used for discretizing the convection terms in the momentum transport equations, and the Modified HRIC scheme for the VOF discretization. The results obtained compare well with the experimental data.
Keywords: Free surface flows, breaking waves, boundary layer, Wigley hull, volume of fluid.