Search results for: image generation
4759 Bioethanol Production from Marine Algae Ulva Lactuca and Sargassum Swartzii: Saccharification and Process Optimization
Authors: M. Jerold, V. Sivasubramanian, A. George, B.S. Ashik, S. S. Kumar
Abstract:
Bioethanol is a sustainable biofuel that can be used as an alternative to fossil fuels. Today, third-generation (3G) biofuel is gaining more attention than first- and second-generation biofuels. The high lignin content of lignocellulosic biomass is the major drawback of second-generation biofuels. Algae are the renewable feedstock used in third-generation biofuel production. Algae contain large amounts of carbohydrates and can therefore be fermented after hydrolysis. Algae fall into two groups: microalgae and macroalgae. In the present investigation, macroalgae were chosen as the raw material for bioethanol production. Two marine algae, Ulva lactuca and Sargassum swartzii, were used for the experimental studies. The algal biomass was characterized using various analytical techniques, such as elemental analysis, scanning electron microscopy, and Fourier transform infrared spectroscopy, to understand its physico-chemical characteristics. Batch experiments were done to study the hydrolysis and operating parameters such as pH, agitation, fermentation time, and inoculum size. Saccharification was done with acid and alkali treatment. The experimental results showed that NaOH treatment enhanced the bioethanol yield. From the hydrolysis study, it was found that 0.5 M alkali treatment is the optimum concentration for the saccharification of polysaccharide sugars to monomeric sugars. The maximum yield of bioethanol was attained at a fermentation time of 9 days. An inoculum volume of 1 mL was found to be the lowest required for ethanol fermentation. The agitation studies showed that fermentation was enhanced during the process. The percentage yields of bioethanol were found to be 22.752% and 14.23%. Elemental analysis showed that S. swartzii contains a higher carbon content. The results confirmed that hydrolysis was not complete, so not all sugar was recovered from the biomass. The specific gravity of the ethanol was found to be 0.8047 and 0.808 for Ulva lactuca and Sargassum swartzii, respectively. The purity of the bioethanol was also studied and found to be 92.55%. Therefore, marine algae can be used as a promising renewable feedstock for the production of bioethanol.
Keywords: algae, biomass, bioethanol, biofuel, pretreatment
Procedia PDF Downloads 160
4758 Digital Watermarking Based on Visual Cryptography and Histogram
Authors: R. Rama Kishore, Sunesh
Abstract:
Nowadays, robust and secure watermarking algorithms and their optimization have become the need of the hour. A watermarking algorithm is presented to achieve copyright protection for the owner based on visual cryptography, the histogram shape property, and entropy. Both the host image and the watermark are preprocessed: the host image with a Butterworth filter, and the watermark with visual cryptography. Applying visual cryptography to the watermark generates two shares. One share is used for embedding the watermark, and the other is used for resolving any dispute with the aid of a trusted authority. Use of the histogram shape makes the process more robust against geometric and signal processing attacks. The combination of visual cryptography, the Butterworth filter, histogram, and entropy makes the algorithm more robust and imperceptible and strengthens the copyright protection of the owner.
Keywords: digital watermarking, visual cryptography, histogram, Butterworth filter
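The (2,2) visual cryptography step described above, generating two shares whose stacking reveals the watermark, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the subpixel patterns and the 2x2 toy watermark are assumptions.

```python
import random

def make_shares(image, rng=random.Random(0)):
    # (2,2) visual cryptography: each pixel expands to a 2-subpixel block
    share1, share2 = [], []
    for row in image:
        r1, r2 = [], []
        for pixel in row:  # 0 = white, 1 = black
            pattern = rng.choice([(0, 1), (1, 0)])
            r1.append(pattern)
            # white pixel: identical patterns; black pixel: complementary patterns
            r2.append(pattern if pixel == 0 else tuple(1 - b for b in pattern))
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def overlay(s1, s2):
    # stacking the transparencies = pixel-wise OR of the subpixels
    return [[tuple(a | b for a, b in zip(p1, p2))
             for p1, p2 in zip(row1, row2)] for row1, row2 in zip(s1, s2)]

watermark = [[1, 0], [0, 1]]
s1, s2 = make_shares(watermark)
rec = overlay(s1, s2)
# black watermark pixels become fully black blocks; white ones stay half-black
assert all(rec[i][j] == (1, 1)
           for i in range(2) for j in range(2) if watermark[i][j] == 1)
assert all(sum(rec[i][j]) == 1
           for i in range(2) for j in range(2) if watermark[i][j] == 0)
```

Either share alone is indistinguishable from random noise, which is why one share can be deposited with a trusted authority for dispute resolution.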
Procedia PDF Downloads 358
4757 Classifier for Liver Ultrasound Images
Authors: Soumya Sajjan
Abstract:
Liver cancer is among the most common cancers worldwide in men and women, and is one of the few cancers still on the rise. Liver disease is the 4th leading cause of death. According to new NHS (National Health Service) figures, deaths from liver diseases have reached record levels, rising by 25% in less than a decade; heavy drinking, obesity, and hepatitis are believed to be behind the rise. In this study, we focus on the development of a diagnostic classifier for ultrasound liver lesions. Ultrasound (US) sonography is an easy-to-use and widely popular imaging modality because of its ability to visualize many human soft tissues/organs without any harmful effect. This paper provides an overview of the underlying concepts, along with algorithms for processing liver ultrasound images. Naturally, ultrasound liver lesion images contain considerable speckle noise, which makes developing a classifier for them a challenging task. We propose a fully automatic machine learning system for developing this classifier. First, we segment the liver image and calculate textural features from the co-occurrence matrix and the run-length method. For classification, a Support Vector Machine (SVM) is used, based on the risk bounds of statistical learning theory. The textural features from the different feature methods are given as input to the SVM individually. Performance analysis on the training and test datasets is carried out separately using the SVM model. Whenever an ultrasonic liver lesion image is given to the SVM classifier system, its features are calculated and the lesion is classified as normal or diseased. We hope the result will help physicians identify liver cancer non-invasively.
Keywords: segmentation, Support Vector Machine, ultrasound liver lesion, co-occurrence matrix
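As a minimal sketch of the co-occurrence-matrix feature step, the following computes a normalized GLCM and three common texture features (contrast, energy, homogeneity) in pure Python. The 4-level toy images, the single offset, and the feature choices are assumptions; the paper's full feature set and its run-length features are not reproduced here.

```python
def glcm(img, dx=1, dy=0, levels=4):
    # normalized gray-level co-occurrence matrix for pixel offset (dx, dy)
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y][x]][img[y2][x2]] += 1
    total = sum(map(sum, m)) or 1.0
    return [[c / total for c in row] for row in m]

def texture_features(p):
    # three classic Haralick-style statistics over the normalized GLCM
    n = len(p)
    pairs = [(i, j) for i in range(n) for j in range(n)]
    return {
        "contrast": sum(p[i][j] * (i - j) ** 2 for i, j in pairs),
        "energy": sum(p[i][j] ** 2 for i, j in pairs),
        "homogeneity": sum(p[i][j] / (1 + abs(i - j)) for i, j in pairs),
    }

flat = [[1] * 4 for _ in range(4)]                           # textureless region
noisy = [[(x + y) % 4 for x in range(4)] for y in range(4)]  # varying region
f_flat = texture_features(glcm(flat))
f_noisy = texture_features(glcm(noisy))
assert f_flat["contrast"] == 0 and abs(f_flat["energy"] - 1.0) < 1e-9
assert f_noisy["contrast"] > f_flat["contrast"]
```

Each feature vector would then be fed to the SVM for normal/diseased classification.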
Procedia PDF Downloads 411
4756 An Investigation of System and Operating Parameters on the Performance of Parabolic Trough Solar Collector for Power Generation
Authors: Umesh Kumar Sinha, Y. K. Nayak, N. Kumar, Swapnil Saurav, Monika Kashyap
Abstract:
The authors investigate the effect of system and operating parameters on the performance of a high-temperature solar concentrator for power generation. The effects were investigated using developed mathematical expressions for collector efficiency, heat removal factor, fluid outlet temperature, power, etc. The results were simulated using a C++ program. The simulated results were plotted to investigate the effects of the thermal and radiative loss parameters on the collector efficiency, heat removal factor, fluid outlet temperature, and temperature rise, and the effect of the mass flow rate on the fluid outlet temperature. In connection with power generation, plots were drawn for the effect of (TM–TAMB) on the variation of concentration efficiency, of concentrator irradiance on PM/PMN, and of evaporation temperature on the thermal-to-electric power efficiency (conversion efficiency) of the plant and the overall efficiency of the solar power plant.
Keywords: parabolic trough solar collector, radiative and thermal loss parameters, collector efficiency, heat removal factor, fluid outlet and inlet temperatures, rise of temperature, mass flow rate, conversion efficiency, concentrator irradiance
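The heat removal factor mentioned above has a standard Hottel-Whillier form. The sketch below evaluates it together with a simple collector efficiency expression; all numerical values are illustrative assumptions, not results from the paper, and the symbols (receiver area A_r, aperture area A_a, loss coefficient U_L, efficiency factor F') follow the common textbook convention.

```python
import math

def heat_removal_factor(m_dot, cp, A_r, U_L, F_prime):
    # Hottel-Whillier form: F_R = (m*cp)/(A_r*U_L) * (1 - exp(-A_r*U_L*F'/(m*cp)))
    a = (m_dot * cp) / (A_r * U_L)
    return a * (1.0 - math.exp(-F_prime / a))

def collector_efficiency(F_R, eta_o, U_L, A_r, A_a, T_in, T_amb, G_b):
    # optical efficiency minus thermal losses referred to the aperture irradiance
    return F_R * (eta_o - U_L * (A_r / A_a) * (T_in - T_amb) / G_b)

# illustrative parabolic-trough numbers (assumed, SI units)
F_R = heat_removal_factor(m_dot=0.5, cp=2300.0, A_r=2.0, U_L=8.0, F_prime=0.95)
eta = collector_efficiency(F_R, eta_o=0.75, U_L=8.0, A_r=2.0, A_a=30.0,
                           T_in=500.0, T_amb=300.0, G_b=900.0)
assert 0.0 < F_R < 0.95          # F_R always lies below the efficiency factor F'
assert eta < 0.75 * F_R + 1e-9   # thermal losses keep efficiency below the optical limit
```

Raising the mass flow rate pushes F_R toward F', which is the trend the plotted parametric curves capture.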
Procedia PDF Downloads 322
4755 Damage Micromechanisms of Coconut Fibers and Chopped Strand Mats of Coconut Fibers
Authors: Rios A. S., Hild F., Deus E. P., Aimedieu P., Benallal A.
Abstract:
The damage micromechanisms of chopped strand mats manufactured by compression of Brazilian coconut fiber, and of coconut fibers subjected to different external conditions (chemical treatment), were studied. Uniaxial tensile tests were performed together with Digital Image Correlation (DIC). The images captured during the tensile tests of the coconut fibers and coconut fiber mats showed a measurement uncertainty on the order of centipixels. The initial modulus (modulus of elasticity) and tensile strength decreased with increasing diameter for the four conditions of coconut fibers. The DIC showed heterogeneous deformation fields for the coconut fibers and mats, and the displacement fields revealed the rupture process of the coconut fiber. Poisson's ratio of the mat was determined from the transverse and longitudinal deformations found in the elastic region.
Keywords: coconut fiber, mechanical behavior, digital image correlation, micromechanism
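Determining Poisson's ratio from DIC reduces to the ratio of transverse to longitudinal strain, each obtainable as the slope of displacement versus position. A minimal sketch, with synthetic displacement fields standing in for measured DIC data:

```python
def strain_from_field(positions, displacements):
    # least-squares slope of displacement vs. position = average engineering strain
    n = len(positions)
    mx = sum(positions) / n
    my = sum(displacements) / n
    num = sum((x - mx) * (u - my) for x, u in zip(positions, displacements))
    den = sum((x - mx) ** 2 for x in positions)
    return num / den

# synthetic DIC fields: 1% axial strain, -0.3% transverse strain (assumed values)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
u_long = [0.01 * x for x in xs]
u_trans = [-0.003 * y for y in xs]
eps_l = strain_from_field(xs, u_long)
eps_t = strain_from_field(xs, u_trans)
nu = -eps_t / eps_l   # Poisson's ratio
assert abs(nu - 0.3) < 1e-9
```

With real DIC data the same fit is applied inside the elastic region only, where the displacement fields are still linear in position.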
Procedia PDF Downloads 459
4754 Classification of Multiple Cancer Types with Deep Convolutional Neural Network
Authors: Nan Deng, Zhenqiu Liu
Abstract:
Thousands of patients with metastatic tumors are diagnosed with cancers of unknown primary site each year. The inability to identify the primary cancer site may lead to inappropriate treatment and unexpected prognosis. Nowadays, a large amount of genomics and transcriptomics cancer data has been generated by next-generation sequencing (NGS) technologies, and The Cancer Genome Atlas (TCGA) database has accrued thousands of human cancer tumors and healthy controls, which provides an abundance of resources to differentiate cancer types. Meanwhile, deep convolutional neural networks (CNNs) have shown high accuracy in classification among a large number of image object categories. Here, we utilize 25 primary tumor types and 3 normal tissues from TCGA, convert their RNA-Seq gene expression profiles to color images, and train, validate, and test a CNN classifier directly on these images. The performance results show that our CNN classifier can achieve >80% test accuracy on most of the tumors and normal tissues. Since the gene expression pattern of distant metastases is similar to that of their primary tumors, the CNN classifier may provide a potential computational strategy for identifying the unknown primary origin of metastatic cancer in order to plan appropriate treatment for patients.
Keywords: bioinformatics, cancer, convolutional neural network, deep learning, gene expression pattern
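The expression-to-image conversion step can be sketched as below. This toy version maps one sample's expression vector to a zero-padded grayscale grid; the min-max normalization, the grid size, and the use of a single grayscale channel (rather than the paper's color images) are all assumptions.

```python
def expression_to_image(expr, side):
    # min-max normalize one sample's gene expression vector to 0..255 intensities
    lo, hi = min(expr), max(expr)
    span = (hi - lo) or 1.0
    pixels = [int(round(255 * (v - lo) / span)) for v in expr]
    pixels += [0] * (side * side - len(pixels))     # zero-pad to fill a square grid
    return [pixels[r * side:(r + 1) * side] for r in range(side)]

sample = [0.0, 2.5, 5.0, 7.5, 10.0, 1.0, 9.0]       # toy RNA-Seq expression values
img = expression_to_image(sample, side=3)
assert len(img) == 3 and all(len(row) == 3 for row in img)
assert max(max(row) for row in img) == 255 and min(min(row) for row in img) == 0
```

Once every sample is rendered this way, a standard image CNN can be trained on the resulting dataset with the usual train/validation/test split.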
Procedia PDF Downloads 299
4753 Impact of the Energy Transition on Security of Supply - A Case Study of Vietnam Power System in 2030
Authors: Phuong Nguyen, Trung Tran
Abstract:
Along with the global ongoing energy transition, Vietnam has indicated a strong commitment at recent COP events to the zero-carbon emission target. However, it is a real challenge for the nation to replace fossil-fired power plants with a significant amount of renewable energy sources (RES) while maintaining security of supply. The unpredictability and variability of RES would cause technical issues for supply-demand balancing, network congestion, and system balancing, among others. It is crucial to take these into account while planning the future grid infrastructure. This study addresses both generation and transmission adequacy and provides a comprehensive analysis of the impact of the ongoing energy transition on the development of the Vietnam power system in 2030. This will provide insight for creating a secure, stable, and affordable pathway for the country in the upcoming years.
Keywords: generation adequacy, transmission adequacy, security of supply, energy transition
Procedia PDF Downloads 86
4752 Measurement of Coal Fineness, Air Fuel Ratio, and Fuel Weight Distribution in a Vertical Spindle Mill’s Pulverized Fuel Pipes at Classifier Vane 40%
Authors: Jayasiler Kunasagaram
Abstract:
In power generation, coal fineness is crucial to maintaining flame stability, ensuring combustion efficiency, and lowering emissions to the environment. In order for the pulverized coal to react effectively in the boiler furnace, at least 70% of the coal particles need to be finer than 74 μm. This paper presents the experimental results for coal fineness, air-fuel ratio, and fuel weight distribution in pulverized fuel pipes at a classifier vane setting of 40%. The aim of this experiment is to extract the pulverized coal isokinetically and investigate the data accordingly. Dirty air velocity, coal sample extraction, and coal sieving experiments were performed to measure coal fineness. The experimental results show that the required coal fineness can be achieved at a 40% classifier vane setting; however, it does not surpass the desired value by a great margin.
Keywords: coal power, emissions, isokinetic sampling, power generation
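The 70%-finer-than-74 μm criterion is checked from the sieving results: the mass passing the 74 μm sieve as a percentage of the total sample. A minimal sketch with illustrative, not measured, sieve masses:

```python
def percent_finer_than(sieve_retained, cutoff_um):
    # sieve_retained maps sieve opening (microns) -> mass retained on it (grams);
    # key 0 is the pan, i.e. everything finer than the smallest sieve
    total = sum(sieve_retained.values())
    finer = sum(m for size, m in sieve_retained.items() if size < cutoff_um)
    return 100.0 * finer / total

# illustrative sieve stack results in grams (assumed values)
retained = {300: 4.0, 150: 8.0, 74: 18.0, 0: 70.0}
fineness = percent_finer_than(retained, 74)
assert abs(fineness - 70.0) < 1e-9
assert fineness >= 70.0   # just meets the "70% finer than 74 um" requirement
```

Material retained on the 74 μm sieve counts as coarse; only the pan fraction passes the cutoff in this toy stack.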
Procedia PDF Downloads 610
4751 Retina Registration for Biometrics Based on Characterization of Retinal Feature Points
Authors: Nougrara Zineb
Abstract:
The unique structure of the blood vessels in the retina has been used for biometric identification. The retinal blood vessel pattern is unique to each individual, and it is almost impossible to forge that pattern in a false individual. The advantages of retina biometrics include high distinctiveness, universality, and stability over time of the blood vessel pattern. Once the creases have been extracted from the images, a registration stage is necessary, since the position of the retinal vessel structure can change between acquisitions due to movements of the eye. Image registration consists of the following steps: feature detection, feature matching, transform model estimation, and image resampling and transformation. In this paper, we present a registration algorithm based on the characterization of retinal feature points. For the experiments, retinal images from the DRIVE database were tested. The proposed methodology achieves good registration results in general.
Keywords: fovea, optic disc, registration, retinal images
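The transform model estimation step can be sketched as a least-squares 2D rigid fit (rotation plus translation) over matched feature points. The closed form below is a generic 2D Kabsch solution on synthetic points; it is not the paper's characterization-based algorithm, and the point coordinates and motion are assumptions.

```python
import math

def estimate_rigid(src, dst):
    # 2D Kabsch: least-squares rotation + translation mapping src points onto dst
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sxx = sum((a[0]-csx)*(b[0]-cdx) + (a[1]-csy)*(b[1]-cdy) for a, b in zip(src, dst))
    sxy = sum((a[0]-csx)*(b[1]-cdy) - (a[1]-csy)*(b[0]-cdx) for a, b in zip(src, dst))
    theta = math.atan2(sxy, sxx)                       # best-fit rotation angle
    tx = cdx - (csx*math.cos(theta) - csy*math.sin(theta))
    ty = cdy - (csx*math.sin(theta) + csy*math.cos(theta))
    return theta, tx, ty

# synthetic vessel bifurcation points rotated by 10 degrees and shifted
pts = [(10.0, 20.0), (40.0, 15.0), (25.0, 50.0), (60.0, 45.0)]
ang = math.radians(10.0)
moved = [(x*math.cos(ang) - y*math.sin(ang) + 5.0,
          x*math.sin(ang) + y*math.cos(ang) - 3.0) for x, y in pts]
theta, tx, ty = estimate_rigid(pts, moved)
assert abs(theta - ang) < 1e-9 and abs(tx - 5.0) < 1e-6 and abs(ty + 3.0) < 1e-6
```

With the transform recovered, the moving image is resampled into the fixed image's frame, completing the registration pipeline.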
Procedia PDF Downloads 266
4750 An Advanced Automated Brain Tumor Diagnostics Approach
Authors: Berkan Ural, Arif Eser, Sinan Apaydin
Abstract:
Medical image processing has become a challenging task, and the processing of brain MRI images is one of the most difficult parts of this area. This study proposes a well-defined hybrid approach consisting of tumor detection, extraction, and analysis steps. The approach centers on a computer-aided diagnostics system for identifying and detecting tumor formation in any region of the brain, used for early prediction of brain tumors by means of advanced image processing and probabilistic neural network methods. Advanced noise removal functions and image processing methods such as automatic segmentation and morphological operations are used to detect the brain tumor boundaries and to obtain the important feature parameters of the tumor region. All stages of the approach are implemented in MATLAB. First, the tumor is detected and the tumor area is contoured with a colored circle by the computer-aided diagnostics program. Then, the tumor is segmented and morphological operations are applied to increase the visibility of the tumor area. During this process, the tumor area and important shape-based features are also calculated. Finally, using the probabilistic neural network method and advanced classification steps, the tumor area and the type of the tumor are obtained. A future aim of this study is to detect the severity of lesions through classes of brain tumor, achieved through advanced multi-class classification and neural network stages, and to create a user-friendly environment using a GUI in MATLAB. In the experimental part of the study, 100 images are used to train the diagnostics system, and 100 out-of-sample images are used to test and check the overall results. The preliminary results demonstrate high classification accuracy for the neural network structure. According to the results, this situation also motivates us to extend this framework to detect and localize tumors in other organs.
Keywords: image processing algorithms, magnetic resonance imaging, neural network, pattern recognition
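The threshold-then-clean-up portion of such a pipeline can be sketched in Python (the paper itself uses MATLAB). Thresholding a toy slice and keeping only the largest connected component stands in for the noise removal and segmentation stages; the image, threshold, and lesion geometry are all assumed for illustration.

```python
from collections import deque

def largest_component(mask):
    # BFS over the binary mask; keep only the largest 4-connected blob
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                blob, q = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while q:
                    y, x = q.popleft()
                    blob.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(blob) > len(best):
                    best = blob
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out

# toy "MRI" slice: a bright 3x3 lesion plus two isolated noise pixels
img = [[0] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(3, 6):
        img[y][x] = 200
img[0][0] = img[7][7] = 180
mask = [[1 if v > 128 else 0 for v in row] for row in img]   # intensity threshold
tumor = largest_component(mask)
area = sum(map(sum, tumor))
assert area == 9                       # noise pixels removed, lesion kept intact
assert tumor[0][0] == 0 and tumor[3][4] == 1
```

The retained blob's area and shape statistics are exactly the kind of features that would then feed the probabilistic neural network classifier.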
Procedia PDF Downloads 418
4749 Investigation of New Gait Representations for Improving Gait Recognition
Authors: Chirawat Wattanapanich, Hong Wei
Abstract:
This study presents new gait representations for improving gait recognition accuracy across gait appearances, such as normal walking, wearing a coat, and carrying a bag. Based on the Gait Energy Image (GEI), two ideas are implemented to generate new gait representations. One is to append lower-knee regions to the original GEI, and the other is to apply convolutional operations to the GEI and its variants. A set of new gait representations is created and used for training multi-class Support Vector Machines (SVMs). Tests are conducted on CASIA dataset B. Various combinations of the gait representations with different convolutional kernel sizes and different numbers of kernels used in the convolutional processes are examined. Both entire images as features and dimension-reduced features obtained by Principal Component Analysis (PCA) are tested in gait recognition. Interestingly, both new techniques, appending the lower-knee regions to the original GEI and the convolutional GEI, contribute significantly to the performance improvement in gait recognition. The experimental results have shown that the average recognition rate can be improved from 75.65% to 87.50%.
Keywords: convolutional image, lower knee, gait
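The convolutional-GEI idea can be sketched as a plain 'valid' 2D convolution over an energy image. The toy GEI and the averaging kernel below are assumptions; the paper examines several kernel sizes and kernel counts rather than this single choice.

```python
def convolve2d_valid(img, kernel):
    # 'valid' 2D convolution (no padding), producing a smaller derived image
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[y + i][x + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for x in range(ow)] for y in range(oh)]

gei = [[(x * y) % 7 / 6.0 for x in range(8)] for y in range(8)]  # toy energy image
k = 3
avg_kernel = [[1.0 / (k * k)] * k for _ in range(k)]             # smoothing kernel
cgei = convolve2d_valid(gei, avg_kernel)
assert len(cgei) == 6 and len(cgei[0]) == 6          # valid size: 8 - 3 + 1
assert all(0.0 <= v <= 1.0 for row in cgei for v in row)
```

Each convolved representation (flattened, or PCA-reduced) then becomes one feature vector for the multi-class SVMs.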
Procedia PDF Downloads 202
4748 Design of a pHEMT Buffer Amplifier in mm-Wave Band around 60 GHz
Authors: Maryam Abata, Moulhime El Bekkali, Said Mazer, Catherine Algani, Mahmoud Mehdi
Abstract:
One major problem of most electronic systems operating in the millimeter-wave band is generating a signal with high purity and a stable carrier frequency. This problem is overcome by combining a low-frequency local oscillator (LO) with several stages of frequency multipliers. The use of these frequency multipliers to create millimeter-wave signals is an attractive alternative to direct signal generation. However, the problem of isolating the local oscillator from the other stages is always present, and various mechanisms can disturb the oscillator performance, so a buffer amplifier is often included at the oscillator output. In this paper, we present the study and design of a buffer amplifier in the mm-wave band using a 0.15 μm pHEMT from the UMS foundry. This amplifier will be used as part of a frequency quadrupler at 60 GHz.
Keywords: mm-wave band, local oscillator, frequency quadrupler, buffer amplifier
Procedia PDF Downloads 545
4747 Performance Evaluation of Extruded-Type Heat Sinks Used in Inverters for Solar Power Generation
Authors: Jung Hyun Kim, Gyo Woo Lee
Abstract:
In this study, the heat release performances of three extruded-type heat sinks that can be used in inverters for solar power generation were evaluated. The numbers of fins in the heat sinks (namely E-38, E-47, and E-76) were 38, 47, and 76, respectively, and their heat transfer areas were 1.8, 1.9, and 2.8 m². The heat release performances of the E-38, E-47, and E-76 heat sinks were measured as 79.6, 81.6, and 83.2%, respectively. These results show that the larger the heat transfer area, the higher the heat release rate. On the other hand, in this experiment, variations in the mass flow rates caused by the different cross-sectional areas of the three heat sinks may not be a major parameter of the heat release. Despite the 47.4% increase in heat transfer area of the E-76 heat sink over the E-47 one, its heat release rate was higher by only 2.0%; this suggests that its heat transfer area needs to be optimized.
Keywords: solar inverter, heat sink, forced convection, heat transfer, performance evaluation
Procedia PDF Downloads 467
4746 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images
Authors: Eiman Kattan, Hong Wei
Abstract:
In using a Convolutional Neural Network (CNN) for classification, there is a set of hyperparameters available for configuration. This study aims to evaluate the impact of a range of parameters in a CNN architecture, i.e., AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on the classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. A set of experiments was conducted to specify the effectiveness of the selected parameters using two implementation approaches, pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128, and 200). The impact of the kernel size of the convolutional filters (1, 3, 5, 7, 10, 15, 20, 25, and 30) was evaluated against the image sizes under testing (64, 96, 128, 180, and 224), which gave us insight into the relationship between the size of the convolutional filters and the image size. To generalize the validation, four remote sensing datasets, AID, RSD, UCMerced, and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as the number of classes, amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments. It has shown efficiency in both training and testing. The results have shown that increasing the number of epochs leads to a higher accuracy rate, as expected; however, the convergence state is highly dataset-dependent. The batch size evaluation has shown that a larger batch size slightly decreases the classification accuracy compared to a small batch size. For example, selecting the value 32 as the batch size on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, setting an increased batch size of 200 decreases the accuracy rate at the 11th epoch to 86.5%, and to 63% when using one epoch only. On the other hand, the choice of kernel size is loosely related to the dataset. From a practical point of view, the filter size 20 produces 70.4286%. The final image size experiment shows a dependency in the accuracy improvement; however, the performance gain was computationally expensive. These conclusions open opportunities toward better classification performance in various applications such as planetary remote sensing.
Keywords: CNNs, hyperparameters, remote sensing, land cover, land use
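The hyperparameter sweep described above can be enumerated as a simple grid. The sketch below pairs the reported epoch, batch size, kernel size, and image size values and screens out any filter larger than the input image; the epoch list and that single screening rule are assumptions for illustration.

```python
from itertools import product

# parameter values reported in the study (epoch list is an assumption)
epochs = [1, 11, 30]
batch_sizes = [32, 64, 128, 200]
kernel_sizes = [1, 3, 5, 7, 10, 15, 20, 25, 30]
image_sizes = [64, 96, 128, 180, 224]

# keep only configurations where the convolutional filter fits inside the image
grid = [dict(epochs=e, batch=b, kernel=ks, image=im)
        for e, b, ks, im in product(epochs, batch_sizes, kernel_sizes, image_sizes)
        if ks < im]

# here no combination is screened out: the largest kernel (30) fits the smallest image (64)
assert len(grid) == len(epochs) * len(batch_sizes) * len(kernel_sizes) * len(image_sizes)
assert grid[0] == dict(epochs=1, batch=32, kernel=1, image=64)
```

Each configuration in the grid corresponds to one training run on the DIGITS platform, which is why the full sweep is computationally expensive.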
Procedia PDF Downloads 169
4745 Rethinking Classical Concerts in the Digital Era: Transforming Sound, Experience, and Engagement for the New Generation
Authors: Orit Wolf
Abstract:
Classical music confronts a crucial challenge: updating cherished concert traditions for the digital age. This paper is a journey, and a quest to make classical concerts resonate with a new generation. It's not just about asking questions; it's about exploring the future of classical concerts and their potential to captivate and connect with today's audience in an era defined by change. The younger generation, known for their love of diversity, interactive experiences, and multi-sensory immersion, cannot be overlooked. This paper explores innovative strategies that forge deep connections with audiences whose relationship with classical music differs from the past. The urgency of this challenge drives the transformation of classical concerts. Examining classical concerts is necessary to understand how they can harmonize with contemporary sensibilities. New dimensions in audiovisual experiences that enchant the emerging generation are sought. Classical music must embrace the technological era while staying open to fusion and cross-cultural collaboration possibilities. The role of technology and Artificial Intelligence (AI) in reshaping classical concerts is under research. The fusion of classical music with digital experiences and dynamic interdisciplinary collaborations breathes new life into the concert experience. It aligns classical music with the expectations of modern audiences, making it more relevant and engaging. Exploration extends to the structure of classical concerts. Conventions are challenged, and ways to make classical concerts more accessible and captivating are sought. Inspired by innovative artistic collaborations, musical genres and styles are redefined, transforming the relationship between performers and the audience. This paper, therefore, aims to be a catalyst for dialogue and a beacon of innovation. A set of critical inquiries integral to reshaping classical concerts for the digital age is presented. 
As the world embraces digital transformation, classical music seeks resonance with contemporary audiences, redefining the concert experience while remaining true to its roots and embracing revolutions in the digital age.
Keywords: new concert formats, reception of classical music, interdisciplinary concerts, innovation in the new musical era, mash-up, cross-culture, innovative concerts, engaging musical performances
Procedia PDF Downloads 65
4744 DeepOmics: Deep Learning for Understanding Genome Functioning and the Underlying Genetic Causes of Disease
Authors: Vishnu Pratap Singh Kirar, Madhuri Saxena
Abstract:
Advancements in sequence data generation technologies are churning out voluminous omics data and posing a massive challenge for annotating biological functional features. With so much data available, the use of machine learning methods and tools to make novel inferences has become obvious. Machine learning methods have been successfully applied to many disciplines, including computational biology and bioinformatics. Researchers in computational biology are interested in developing novel machine learning frameworks to classify the huge amounts of biological data. In this proposal, we plan to employ novel machine learning approaches to aid the understanding of how apparently innocuous mutations (in intergenic DNA and at synonymous sites) cause diseases. We are also interested in discovering novel functional sites in the genome, mutations in which can affect a phenotype of interest.
Keywords: genome-wide association studies (GWAS), next-generation sequencing (NGS), deep learning, omics
Procedia PDF Downloads 98
4743 Phenomena-Based Approach for Automated Generation of Process Options and Process Models
Authors: Parminder Kaur Heer, Alexei Lapkin
Abstract:
Due to the global challenges of increased competition and demand for more sustainable products/processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may be able to attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is therefore the generation of numerous process alternatives based on phenomena, and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, e.g. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. For example, separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties/drawbacks of the current process or can enhance its effectiveness, are added to the list. For instance, the catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and hence screening is carried out to discard the meaningless ones. For example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction, or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function. Combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, each formed by a list of phenomena per function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example product purity, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce, and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical/biochemical process because of its generic nature.
Keywords: phenomena, process intensification, process models, process options
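The combination-and-screening step can be sketched as a powerset enumeration with feasibility rules, plus the binary encoding handed to the model generator. The phenomena list and the single screening rule below are illustrative assumptions, not the paper's full knowledge base.

```python
from itertools import combinations

phenomena = ["reaction", "vapour_liquid_eq", "liquid_liquid_eq",
             "phase_change", "energy_transfer", "mass_transfer"]

def feasible(combo):
    # screening rule from the text: phase change needs co-present energy transfer
    return "phase_change" not in combo or "energy_transfer" in combo

# generate all non-empty phenomena combinations and discard the meaningless ones
options = [c for r in range(1, len(phenomena) + 1)
           for c in combinations(phenomena, r) if feasible(c)]

def to_binary(combo):
    # binary encoding (1 = phenomenon active) passed to the model generator
    return [1 if p in combo else 0 for p in phenomena]

assert all(feasible(c) for c in options)
assert ("phase_change",) not in options    # screened out: no energy transfer present
assert to_binary(("reaction", "mass_transfer")) == [1, 0, 0, 0, 0, 1]
```

In the full methodology each surviving combination is then assigned to the function(s) it can execute, and the binaries activate the corresponding phenomena models.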
Procedia PDF Downloads 232
4742 Optimized Techniques for Reducing the Reactive Power Generation in Offshore Wind Farms in India
Authors: Pardhasaradhi Gudla, Imanual A.
Abstract:
Electrical power generated offshore needs to be transmitted to the onshore grid using subsea cables. Long subsea cables produce reactive power, which should be compensated in order to limit transmission losses, to optimize the transmission capacity, and to keep the grid voltage within safe operational limits. The installation cost of a wind farm includes the structure design cost and the electrical system cost. India has targeted 175 GW of renewable energy capacity by 2022, including offshore wind power generation. Because the sea depth off India is greater, installation costs will be higher than in European countries, where offshore wind farms are already generating successfully, so innovations are required to reduce offshore wind project costs. This paper presents optimized techniques to reduce the installation cost of offshore wind farms with respect to the electrical transmission system. It provides techniques for increasing the current-carrying capacity of the subsea cable by decreasing its reactive power generation (capacitance effect). Many methods for reactive power compensation in wind power plants are already in use, and the main reason compensation is needed is the capacitance effect of the subsea cable. If the cable capacitance is diminished, the reactive power compensation requirement can be reduced or optimized by avoiding an intermediate substation at the midpoint of the transmission network.
Keywords: offshore wind power, optimized techniques, power system, subsea cable
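The capacitance effect discussed above follows the standard charging-power relation Q = 2*pi*f * C' * l * V^2. A sketch with illustrative cable parameters (the voltage, capacitance per km, and length are assumed, not taken from the paper), showing how halving the capacitance halves the compensation requirement:

```python
import math

def cable_reactive_power(V_ll, freq, cap_per_km, length_km):
    # charging reactive power of an AC cable: Q = 2*pi*f * C' * l * V^2
    C = cap_per_km * length_km
    return 2 * math.pi * freq * C * V_ll ** 2

# illustrative 220 kV, 50 Hz export cable, 0.2 uF/km over 100 km (assumed values)
Q = cable_reactive_power(V_ll=220e3, freq=50.0, cap_per_km=0.2e-6, length_km=100.0)
assert 300e6 < Q < 310e6   # roughly 304 Mvar of charging power to be compensated
# halving the cable capacitance halves the reactive power and the compensation need
assert abs(cable_reactive_power(220e3, 50.0, 0.1e-6, 100.0) - Q / 2) < 1e-6
```

Since Q grows linearly with length and quadratically with voltage, reducing the per-km capacitance directly frees current-carrying capacity on long export cables.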
Procedia PDF Downloads 194
4741 Statistical Feature Extraction Method for Wood Species Recognition System
Authors: Mohd Iz'aan Paiz Bin Zamri, Anis Salwa Mohd Khairuddin, Norrima Mokhtar, Rubiyah Yusof
Abstract:
Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at customs checkpoints to avoid mislabeling of timber, which results in loss of income to the timber industry. The system focuses on analyzing the statistical properties of pores in the wood images. This paper proposes a fuzzy-based feature extractor which mimics the experts' knowledge of wood texture to extract the properties of the pore distribution from the wood surface texture. The proposed feature extractor consists of two steps, namely pore extraction and fuzzy pore management. The total number of statistical features extracted from each wood image is 38. Then, a backpropagation neural network is used to classify the wood species based on the statistical features. A comprehensive set of experiments on a database composed of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics the experts' interpretation of wood texture, which allows human involvement when analyzing the texture. Experimental results show the efficiency of the proposed method.
Keywords: classification, feature extraction, fuzzy, inspection system, image analysis, macroscopic images
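One plausible building block of the fuzzy pore management step is a set of membership functions over pore size. A minimal sketch with triangular memberships; the linguistic classes and breakpoints are invented for illustration, and the paper's actual 38 features are not reproduced.

```python
def triangular(x, a, b, c):
    # triangular fuzzy membership: rises on a..b, falls on b..c, zero outside
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pore_memberships(diameter_um):
    # assumed linguistic classes for pore diameter (illustrative breakpoints)
    return {
        "small":  triangular(diameter_um, 0.0, 50.0, 120.0),
        "medium": triangular(diameter_um, 80.0, 150.0, 220.0),
        "large":  triangular(diameter_um, 180.0, 300.0, 400.0),
    }

m = pore_memberships(110.0)
assert m["small"] > 0 and m["medium"] > m["small"]   # overlapping classes
assert m["large"] == 0.0
assert max(m, key=m.get) == "medium"
```

Aggregating such memberships over all detected pores yields graded distribution statistics, which is how expert-style judgments ("mostly medium pores") become numeric features for the neural network.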
Procedia PDF Downloads 426
4740 Late Roman-Byzantine Glass Bracelet Finds at Amorium and Comparison with Other Cultures
Authors: Atilla Tekin
Abstract:
Amorium was one of the biggest cities of the Byzantine Empire, located under and around the modern village of Hisarköy, Emirdağ, Afyonkarahisar Province, Turkey. It was situated on trade routes and on the Byzantine military road from Constantinople to Cilicia, and it was also the center of a bishopric. After the Arab invasion, Amorium gradually lost importance. The research covers 1372 glass bracelet finds, mostly from the 1998-2009 excavations; most were recovered as fragments. The fragments are of various sizes, forms, colors, and decorations. They were first measured and grouped according to their cross-sections. After being photographed, they were sketched in Adobe Illustrator and cut out in Photoshop. All forms, colors, and decorations were identified and compared with one another. On this basis, attempts were made to date the bracelets and to establish their place of manufacture. The importance of the research lies in presenting the perception of image and admiration, and in comparing the finds with those of other cultures.
Keywords: Amorium, glass bracelets, image, Byzantine empire, jewelry
Procedia PDF Downloads 196
4739 Sensitivity Analysis for 14 Bus Systems in a Distribution Network with Distributed Generators
Authors: Lakshya Bhat, Anubhav Shrivastava, Shiva Rudraswamy
Abstract:
There has been formidable interest in the area of distributed generation in recent times. A wide number of loads can be served by distributed generators, with better efficiency too. The major disadvantage of distributed generation, voltage control, is highlighted in this paper. The paper addresses voltage control at buses in the IEEE 14-bus system by regulating reactive power. An analysis is carried out to select the optimum location for placing the distributed generators, using load flow analysis and observing where the voltage profile rises. MATLAB programming is used to simulate the voltage profile at the respective buses after the introduction of DGs. A tolerance limit of +/-5% of the base value has to be maintained; to maintain it, three methods are used. A sensitivity analysis of the three voltage control methods is carried out to determine the priority among them.
Keywords: distributed generators, distributed system, reactive power, voltage control, sensitivity analysis
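The +/-5% tolerance check described above reduces, in code, to flagging buses whose per-unit voltage leaves the band. A minimal sketch (the load flow itself, done in MATLAB in the paper, is not reproduced; the voltages below are hypothetical):

```python
def buses_outside_tolerance(voltages_pu, tol=0.05):
    """Return indices of buses whose per-unit voltage violates the
    +/- tol band around 1.0 p.u. (here the paper's +/-5% limit)."""
    return [i for i, v in enumerate(voltages_pu) if abs(v - 1.0) > tol]

# Hypothetical post-load-flow bus voltages in per unit.
violations = buses_outside_tolerance([1.01, 0.94, 1.06, 1.00])
```

Buses returned by this check are the candidates for reactive power regulation or DG relocation.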
Procedia PDF Downloads 703
4738 CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet
Authors: Amir Moslemi, Amir movafeghi, Shahab Moradi
Abstract:
One of the most challenging factors in medical imaging is noise. Image denoising refers to the improvement of a digital medical image that has been corrupted by noise such as Additive White Gaussian Noise (AWGN). A digital medical image or video can be affected by several types of noise: impulse noise, Poisson noise, and AWGN. Computed tomography (CT) images suffer low quality due to noise, and their quality depends directly on the dose absorbed by the patient: increasing the absorbed radiation, and consequently the absorbed dose to the patient (ADP), enhances CT image quality. Noise reduction techniques that enhance image quality without exposing the patient to excess radiation are therefore a challenging problem in CT image processing. In this work, noise reduction in CT images was performed using two directional two-dimensional (2D) transformations, Curvelet and Contourlet, and Discrete Wavelet Transform (DWT) thresholding methods (BayesShrink and AdaptShrink), compared against each other. We also propose a new threshold in the wavelet domain aimed at both noise reduction and edge retention; the proposed method retains the significant modified coefficients, resulting in good visual quality. Evaluations were carried out using two criteria, namely peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
Keywords: computed tomography (CT), noise reduction, curvelet, contourlet, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), absorbed dose to patient (ADP)
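The shrinkage idea behind BayesShrink-style wavelet denoising, and the PSNR criterion used for evaluation, can be sketched with the standard textbook formulas; this is a NumPy illustration of those formulas, not the authors' proposed threshold:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold wavelet detail coefficients: shrink magnitudes by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def bayes_shrink_threshold(detail_coeffs):
    """Classical BayesShrink: T = sigma_n^2 / sigma_x, with sigma_n from the
    robust median estimator and sigma_x^2 = max(var(y) - sigma_n^2, 0)."""
    sigma_n = np.median(np.abs(detail_coeffs)) / 0.6745
    sigma_x2 = max(np.var(detail_coeffs) - sigma_n ** 2, 0.0)
    return np.max(np.abs(detail_coeffs)) if sigma_x2 == 0 else sigma_n ** 2 / np.sqrt(sigma_x2)

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Soft thresholding would be applied to each detail subband of the DWT, with the approximation band left untouched.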
Procedia PDF Downloads 441
4737 Traffic Sign Recognition System Using Convolutional Neural Network
Authors: Devineni Vijay Bhaskar, Yendluri Raja
Abstract:
We propose a model for traffic sign detection based on Convolutional Neural Networks (CNN). We first convert the original image into a grayscale image with support vector machines, then use convolutional neural networks with fixed and learnable layers for detection and recognition. The fixed layer can reduce the number of regions of interest to detect and can crop boundaries very close to the borders of the traffic signs. The learnable layers can raise the accuracy of detection significantly. Besides, we use bootstrap procedures to improve accuracy and avoid the overfitting problem. On the German Traffic Sign Detection Benchmark, we obtained competitive results, with an area under the precision-recall curve (AUC) of 99.49% in the group "Risk", and an AUC of 96.62% in the group "Obligatory".
Keywords: convolutional neural network, support vector machine, detection, traffic signs, bootstrap procedures, precision-recall curve
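The grayscale conversion mentioned as the pipeline's first stage can be illustrated with the standard luminance weighting; this stand-in is an assumption for illustration, since the abstract performs the conversion with support vector machines:

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

# A 2x2 pure-green test image: every gray pixel should equal the green weight.
img = np.zeros((2, 2, 3))
img[..., 1] = 1.0
gray = to_grayscale(img)
```

The resulting single-channel image would then be fed to the fixed and learnable CNN layers for region-of-interest reduction and classification.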
Procedia PDF Downloads 122
4736 Objects Tracking in Catadioptric Images Using Spherical Snake
Authors: Khald Anisse, Amina Radgui, Mohammed Rziza
Abstract:
Tracking objects in video sequences is a very challenging task in many computer vision applications. However, no prior article treats this topic in catadioptric vision. This paper describes a new approach to omnidirectional image processing based on inverse stereographic projection onto the half-sphere, using the spherical model proposed by Geyer et al. For object tracking, our work is based on the snake method, optimized with the greedy algorithm and with its operators adapted accordingly. The algorithm respects the deformed geometry of omnidirectional images through a spherical neighborhood, a spherical gradient, and a reformulation of the optimization algorithm on the spherical domain. This tracking method, which we call the "spherical snake", makes it possible to follow changes in the shape and size of an object across its displacements in the spherical image.
Keywords: computer vision, spherical snake, omnidirectional image, object tracking, inverse stereographic projection
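The inverse stereographic projection that lifts image-plane points onto the sphere has a closed form; the sketch below uses the textbook formula (projection from the north pole onto the plane z = 0), assumed compatible with the spherical model the abstract refers to:

```python
def inverse_stereographic(x, y):
    """Map an image-plane point (x, y) onto the unit sphere via inverse
    stereographic projection from the north pole (0, 0, 1)."""
    r2 = x * x + y * y
    d = 1.0 + r2
    return (2 * x / d, 2 * y / d, (r2 - 1.0) / d)

# The plane point (1, 0) maps to the equator point (1, 0, 0).
p = inverse_stereographic(1.0, 0.0)
```

Every snake operator (neighborhood, gradient, energy minimization) would then act on such sphere points rather than on plane coordinates.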
Procedia PDF Downloads 402
4735 An Intergenerational Study of Iranian Migrant Families in Australia: Exploring Language, Identity, and Acculturation
Authors: Alireza Fard Kashani
Abstract:
This study reports on the experiences and attitudes of six Iranian migrant families, drawn from two groups, asylum seekers and skilled workers, with regard to their language, identity, and acculturation in Australia. The participants included first-generation parents and 1.5-generation adolescents who had lived in Australia for a minimum of three years. For this investigation, Mendoza's (1984, 2016) acculturation model, as well as poststructuralist views of identity, was employed. The semi-structured interview results highlight that Iranian parents and adolescents face low degrees of intergenerational conflict in most domains of their acculturation. However, the structural and legal patterns in Australia have caused some internal conflicts for the parents, especially fathers (e.g., concerning their power status within the family or their children's freedom). Furthermore, while most participants reported 'cultural eclecticism' as their preferred acculturation orientation, female participants appeared more eclectic than their male counterparts, who showed an inclination to keep more aspects of their home culture. This finding, however, highlights a meaningful effort on the part of husbands: to make their married lives continue well in Australia, they need to reconsider the traditional male-dominated customs they had in Iran. As for identity, not only the parents but also the adolescents proudly identified themselves as Persians. In addition, with respect to linguistic behaviour, almost all adolescents showed enthusiasm for retaining the Persian language at home, so as to maintain contact with relatives and friends in Iran and to enjoy the many other benefits the language may offer them in the future.
Keywords: acculturation, asylum seekers, identity, intergenerational conflicts, language, skilled workers, 1.5 generation
Procedia PDF Downloads 239
4734 Performance Evaluation of Parallel Surface Modeling and Generation on Actual and Virtual Multicore Systems
Authors: Nyeng P. Gyang
Abstract:
Even though past, current, and future trends suggest that multicore and cloud computing systems are increasingly prevalent, this class of parallel systems is nonetheless underutilized in general, and barely used for research on parallel Delaunay triangulation for parallel surface modeling and generation in particular. The performance of actual (physical) and virtual (cloud) multicore machines was evaluated while executing algorithms that implement various parallelization strategies for the incremental insertion technique of the Delaunay triangulation algorithm. T-tests were run on the collected data to determine whether differences in various performance metrics (including execution time, speedup, and efficiency) were statistically significant. Results show that the actual machine is approximately twice as fast as the virtual machine when executing the same programs under the various parallelization strategies. Results on the scalability behavior of the parallelization strategies also show that some of the performance differences between these systems, across different runs of the algorithms, were statistically significant. A few pseudo-superlinear speedup results computed from the raw data are not true superlinear speedups: these pseudo values, which arise from one particular way of computing speedups, disappear and give way to asymmetric speedups, the kind of speedups that actually occur in the experiments performed.
Keywords: cloud computing systems, multicore systems, parallel Delaunay triangulation, parallel surface modeling and generation
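The speedup and efficiency metrics the t-tests were run over are the standard definitions; a minimal sketch with hypothetical timings (an efficiency above 1.0 would flag an apparent, possibly pseudo, superlinear speedup of the kind the abstract discusses):

```python
def speedup(t_serial, t_parallel):
    """Classical speedup: serial time over parallel time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_cores):
    """Efficiency = speedup / cores; values > 1.0 indicate (possibly
    pseudo) superlinear speedup."""
    return speedup(t_serial, t_parallel) / n_cores

# Hypothetical timings: a 12 s serial run finishing in 4 s on 4 cores.
s = speedup(12.0, 4.0)
e = efficiency(12.0, 4.0, 4)
```

Comparing such per-run metrics between the physical and cloud machines is what the paper's t-tests formalize.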
Procedia PDF Downloads 206
4733 Imagology: The Study of Multicultural Imagery Reflected in the Heart of Elif Shafak's 'The Bastard of Istanbul'
Authors: Mohammad Reza Haji Babai, Sepideh Ahmadkhan Beigi
Abstract:
Internationalization and modernization of the globe have played their roles in the process of cultural interaction between globalized societies and, consequently, have found their way into the world of literature under the name of 'imagology'. Imagology makes it possible for the reader to understand an author's thoughts and judgments of others. The present research focuses on the intercultural images portrayed in the novel of the popular Turkish-French writer Elif Shafak concerning the lifestyles, traditions, habits, and social norms of Turks, Americans, and Armenians. The novel seeks to articulate a more intricate multicultural memory of Turkishness by grieving over the Armenian massacre. This study finds that, as a mixture of multiple lifestyles and discourses, The Bastard of Istanbul reflects images not only of oriental culture but also of occidental cultures. The author has thus attempted to maintain selfhood through historical and cultural recollection, which results in the construction of both self and other identities.
Keywords: imagology, Elif Shafak, The Bastard of Istanbul, self-image, other-image
Procedia PDF Downloads 141
4732 Environmental and Space Travel
Authors: Alimohammad
Abstract:
Man's entry into space is one of the most important results of developments and advances in information technology. But this human step, like many others, is not free of danger, as space pollution has today become a major problem for the global community. Paying attention to preserving the space environment is in the interest of all governments and of mankind, and ignoring it can increase the possibility of conflict between countries. What many space powers still overlook is that the freedom to explore and exploit space should be limited by a ban on polluting the space environment; freedom and prohibition are thus complementary and should not be considered conflicting concepts. The legal regime created by the current space treaties has failed to preserve the space environment effectively. Customary international law likewise lacks effective provisions and sufficient enforcement guarantees to prevent damage to the environment. Considering each generation's responsibility for handing the environment on intact to the next, and considering the concept of sustainable development, the space environment must also be passed on to future generations in a healthy and undamaged state. As a result, many environmental policies related to Earth should also be applied to the space environment.
Keywords: law, space, environment, responsibility
Procedia PDF Downloads 85
4731 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation
Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong
Abstract:
Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. CT images are inherently prone to artefacts because of an image formation process in which a large number of independent detectors are involved and assumed to yield consistent measurements. Artefact types include noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, all of which cause serious difficulties in reading images. It is therefore desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that allows better interpretation of the anatomical and pathological characteristics. This is a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked into multiple layers. A denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back to a reconstruction of the same size as the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme is applied, using residual-driven dropout determined from the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with back-propagation.
In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders along with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MSIT (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation
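The Total Variation decomposition performed before training can be approximated by gradient descent on a smoothed ROF objective; the sketch below is an illustrative substitute for the primal-dual solver the abstract mentions, and the step size, regularization weight, and iteration count are arbitrary assumptions:

```python
import numpy as np

def tv_denoise(img, lam=0.1, step=0.1, iters=50, eps=1e-6):
    """Gradient descent on the smoothed ROF objective
    0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps),
    with periodic boundaries via np.roll. Illustrative only."""
    u = img.astype(float).copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u                 # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)          # smoothed gradient norm
        px, py = ux / mag, uy / mag                     # normalized gradient field
        div = (px - np.roll(px, 1, axis=1)
               + py - np.roll(py, 1, axis=0))           # backward-difference divergence
        u -= step * ((u - img) - lam * div)             # descent step
    return u
```

The smooth component u would play the role of the intrinsic representation, and the residual img - u the role of the nuisance/artefact layer fed to the auto-encoders.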
Procedia PDF Downloads 190
4730 Characterization of Anisotropic Deformation in Sandstones Using Micro-Computed Tomography Technique
Authors: Seyed Mehdi Seyed Alizadeh, Christoph Arns, Shane Latham
Abstract:
Geomechanical characterization of rocks in detail, and its possible implications for flow properties, is an important aspect of the reservoir characterization workflow. In order to gain more understanding of the microstructure evolution of reservoir rocks under stress, a series of axisymmetric triaxial tests was performed on two different analogue rock samples. In-situ compression tests were coupled with high-resolution micro-computed tomography to elucidate the changes in the pore/grain network of the rocks under pressurized conditions. Two outcrop sandstones were chosen, representing different cementation states: a well-consolidated and a weakly-consolidated granular system. High-resolution images were acquired while the rocks deformed in a purpose-built compression cell. A detailed analysis of the 3D images in each series of step-wise compression tests (up to the failure point) was conducted, including the registration of the deformed specimen images against the reference pristine dry rock image. A Digital Image Correlation (DIC) technique based on the intensity of the registered 3D subsets, together with particle tracking, is utilized to map the displacement fields in each sample. The results reveal the complex architecture of the localized shear zone in the well-cemented Bentheimer sandstone, whereas for the weakly-consolidated Castlegate sandstone no discernible shear band could be observed even after macroscopic failure. Post-mortem imaging of a sister plug from the friable rock that underwent continuous compression reveals signs of a shear band pattern, suggesting that for friable sandstones at small scales the loading mode may affect the pattern of deformation. Prior to mechanical failure, the continuum digital image correlation approach can reasonably capture the kinematics of deformation. As failure occurs, however, discrete image correlation (i.e., particle tracking) proves superior both in tracking the grains and in quantifying their kinematics (translations/rotations) at any stage of compaction. An attempt was made to quantify the displacement field in compression using continuum Digital Image Correlation, which is based on the intensity correlation between the reference and secondary images. Such an approach has previously been applied only to unconsolidated granular systems under pressure; we apply it to sandstones with various degrees of consolidation. This element of novelty sets the results of this study apart from previous attempts to characterize the deformation pattern in consolidated sands.
Keywords: deformation mechanism, displacement field, shear behavior, triaxial compression, X-ray micro-CT
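The intensity-based subset matching underlying continuum DIC can be sketched, at its simplest, as a brute-force search over integer displacements maximizing normalized cross-correlation; the paper's 3D, subpixel implementation is far more involved, so this 2D toy is an illustrative assumption only:

```python
import numpy as np

def best_shift(reference, deformed, max_shift=3):
    """Brute-force integer displacement (dy, dx) that maximizes the
    normalized cross-correlation between a reference subset and the
    deformed image (periodic shifts via np.roll)."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return 0.0 if denom == 0 else float((a * b).sum() / denom)

    best_score, best_dx = -2.0, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(deformed, -dy, axis=0), -dx, axis=1)
            score = ncc(reference, shifted)
            if score > best_score:
                best_score, best_dx = score, (dy, dx)
    return best_dx
```

Repeating this search for a grid of subsets yields the displacement field; discrete particle tracking replaces the subsets with individual grains.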
Procedia PDF Downloads 190