Search results for: image coding standards
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5093

4193 An Evaluation of the Impact of International Accounting Standards on Financial Reporting Quality: Evidence from Emerging Economies

Authors: Kwadwo Yeboah

Abstract:

Background and Aims: The adoption of International Accounting Standards (IAS) is considered to be one of the most significant developments in the accounting profession. The adoption of IAS aims to improve financial reporting quality by ensuring that financial information is transparent and comparable across borders. However, there is a lack of research on the impact of IAS on financial reporting quality in emerging economies. This study aims to fill this gap by evaluating the impact of IAS on financial reporting quality in emerging economies. Methods: This study uses a sample of firms from emerging economies that have adopted IAS. The sample includes firms from different sectors and industries. The financial reporting quality of these firms is measured using financial ratios, such as earnings quality, financial leverage, and liquidity. The data is analyzed using a regression model that controls for firm-specific factors, such as size and profitability. Results: The results show that the adoption of IAS has a positive impact on financial reporting quality in emerging economies. Specifically, firms that adopt IAS exhibit higher earnings quality and lower financial leverage compared to firms that do not adopt IAS. Additionally, the adoption of IAS has a positive impact on liquidity, suggesting that firms that adopt IAS have better access to financing. Conclusions: The findings of this study suggest that the adoption of IAS has a positive impact on financial reporting quality in emerging economies. The results indicate that IAS adoption can improve transparency and comparability of financial information, which can enhance the ability of investors to make informed investment decisions. The study contributes to the literature by providing evidence of the impact of IAS adoption in emerging economies. 
The findings of this study have implications for policymakers and regulators in emerging economies, as they can use this evidence to support the adoption of IAS and improve financial reporting quality in their respective countries.
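The regression described in the Methods section can be sketched as follows. This is a purely illustrative simulation: the variable names, proxies (log assets for size, return on assets for profitability), and coefficient values are invented for the example and are not the study's data.

```python
import numpy as np

# Hypothetical illustration of the study's model:
# quality_i = b0 + b1*IAS_i + b2*size_i + b3*profitability_i + e_i
rng = np.random.default_rng(0)
n = 200
ias = rng.integers(0, 2, n)          # 1 if the firm adopted IAS (invented)
size = rng.normal(10, 2, n)          # log total assets (assumed size proxy)
prof = rng.normal(0.1, 0.05, n)      # return on assets (assumed profitability proxy)
quality = 0.5 + 0.8 * ias + 0.05 * size + 1.0 * prof + rng.normal(0, 0.1, n)

# ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), ias, size, prof])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)
# beta[1] estimates the IAS-adoption effect after controlling for size and profitability
```

The point of the controls is that `beta[1]` isolates the adoption effect from firm-specific differences that would otherwise confound a raw comparison of adopters and non-adopters.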

Keywords: accounting, international, standards, finance

Procedia PDF Downloads 65
4192 Image Based Landing Solutions for Large Passenger Aircraft

Authors: Thierry Sammour Sawaya, Heikki Deschacht

Abstract:

In commercial aircraft operations, almost half of all accidents happen during the approach or landing phase. Automatic guidance and automatic landing have proven to add significant safety value in this challenging phase of flight. This is why Airbus and ScioTeq have decided to work together to explore the capability of image-based landing solutions as additional landing aids, to further expand the possibility of performing automatic approach and landing on runways where the current guidance systems are either not fitted or not optimal. Current systems for automated landing often depend on radio signals provided by ground infrastructure at the airport or on satellite coverage. In addition, these radio signals may not always be available with the integrity and performance required for safe automatic landing. Being independent of these radio signals would widen the operational possibilities and increase the number of automated landings. Airbus and ScioTeq are joining their expertise in the field of computer vision in the European programme Clean Sky 2 Large Passenger Aircraft, in which they are leading the IMBALS (IMage BAsed Landing Solutions) project. The ultimate goal of this project is to develop, validate, verify and demonstrate a certifiable automatic landing system that guides an airplane during the approach and landing phases based on images captured by an onboard camera system, enabling automatic landing independent of radio signals and without precision landing instruments. Within this project, ScioTeq is responsible for the development of the Image Processing Platform (IPP), while Airbus is responsible for defining the functional and system requirements as well as for the testing and integration of the developed equipment in a Large Passenger Aircraft representative environment. The aim of this paper is to describe the system as well as the associated methods and tools developed for its validation and verification.

Keywords: aircraft landing system, aircraft safety, autoland, avionic system, computer vision, image processing

Procedia PDF Downloads 80
4191 A Method to Ease the Military Certification Process by Taking Advantage of Civil Standards in the Scope of Human Factors

Authors: Burcu Uçan

Abstract:

The certification approach differs between civil and military aviation projects. The sets of criteria and standards created by airworthiness authorities for determining the certification basis are distinct. While civil standards are clearer and easier to understand, because they not only include detailed specifications but are also supported by guidance materials such as Advisory Circulars, military criteria do not provide this level of guidance. Therefore, the specifications that form the certification basis of a military aircraft are more negotiable and sometimes more difficult to reconcile. This study investigates a method for developing a military specification set by taking advantage of civil standards, with regard to the European Military Airworthiness Certification Criteria (EMACC), which establish the airworthiness criteria for aircraft systems. Airworthiness Certification Criteria (MIL-HDBK-516C) is a guidance handbook that contains qualitative evaluation criteria for military aircraft, while the Certification Specifications (CS-29) are published for civil aircraft by the European Union Aviation Safety Agency (EASA). The method compares and contrasts the specifications contained in MIL-HDBK-516C and CS-29 within the scope of Human Factors. Human Factors supports human performance and aims to improve system performance by drawing on knowledge from a range of scientific disciplines; it focuses on how people perform their tasks and on reducing the risk of accidents caused by human physical and cognitive limitations. Hence, regardless of whether a project is civil or military, the specifications must account, at a certain level, for human limits. This study presents an advisory method for this purpose: it develops a solution for the military certification process by identifying, by means of EMACC, the CS requirement corresponding to each criterion in MIL-HDBK-516C. This eases the understanding of the criteria's expectations and the establishment of derived requirements. As a result of this method, it may not always be necessary to derive new requirements; instead, remarks can be added to make the expectations of the criteria and the required verification methods more comprehensible for all stakeholders. This study contributes to creating a certification basis for military aircraft, on which it is otherwise difficult and time-consuming for stakeholders to agree due to gray areas in the military certification process.

Keywords: human factors, certification, aerospace, requirement

Procedia PDF Downloads 60
4190 The Use of Classifiers in Image Analysis of Oil Wells Profiling Process and the Automatic Identification of Events

Authors: Jaqueline Maria Ribeiro Vieira

Abstract:

Different strategies and tools are available in the oil and gas industry for detecting and analyzing tension and possible fractures in borehole walls. Most of these techniques are based on manual observation of the captured borehole images. While this strategy may be feasible and convenient with small images and little data, it becomes difficult and error-prone when large databases of images must be processed. Moreover, the patterns may differ across the image area, depending on many characteristics (drilling strategy, rock components, rock strength, etc.). Previously, we developed and proposed a novel strategy, based on segmented images, capable of detecting patterns in borehole images that may point to regions with tension and breakout characteristics. In this work, we propose the inclusion of data-mining classification strategies in order to create a knowledge database of the segmented curves. After a period in which users manually mark the parts of borehole images that correspond to tension regions and breakout areas, these classifiers allow the system to automatically indicate and suggest new candidate regions with higher accuracy. We evaluate the use of different classification methods in order to achieve different knowledge data set configurations.
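A minimal sketch of the classification idea described above: a simple nearest-centroid rule trained on manually labeled regions. This is not the authors' implementation; the two-dimensional feature vectors and class names below are invented stand-ins for the segmented-curve descriptors.

```python
import numpy as np

class NearestCentroidClassifier:
    """Learn one centroid per class label; predict by the closest centroid."""
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        # distance of every sample to every centroid, then pick the nearest
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[np.argmin(d, axis=1)]

# invented feature vectors for manually labeled borehole regions
rng = np.random.default_rng(1)
tension = rng.normal([0.2, 0.8], 0.05, size=(30, 2))    # "tension" training regions
breakout = rng.normal([0.8, 0.2], 0.05, size=(30, 2))   # "breakout" training regions
X = np.vstack([tension, breakout])
y = np.array(["tension"] * 30 + ["breakout"] * 30)

clf = NearestCentroidClassifier().fit(X, y)
```

After this fit, `clf.predict` can suggest labels for new candidate regions; in practice the paper proposes comparing several such classification methods, not this one specifically.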

Keywords: image segmentation, oil well visualization, classifiers, data-mining, visual computing

Procedia PDF Downloads 286
4189 CMPD: Cancer Mutant Proteome Database

Authors: Po-Jung Huang, Chi-Ching Lee, Bertrand Chin-Ming Tan, Yuan-Ming Yeh, Julie Lichieh Chu, Tin-Wen Chen, Cheng-Yang Lee, Ruei-Chi Gan, Hsuan Liu, Petrus Tang

Abstract:

Whole-exome sequencing, which focuses on the protein-coding regions of disease- or cancer-associated genes based on a priori knowledge, is the most cost-effective method to study the association between genetic alterations and disease. Recent advances in high-throughput sequencing technologies and proteomic techniques have provided an opportunity to integrate genomics and proteomics, making mutated peptides corresponding to mutated genes readily detectable. Since sequence database search is the most widely used method for protein identification in mass spectrometry (MS)-based proteomics, a mutant proteome database is required to better approximate the real protein pool and improve the identification of disease-associated mutated proteins. Large-scale whole-exome/genome sequencing studies have been launched by the National Cancer Institute (NCI), the Broad Institute, and The Cancer Genome Atlas (TCGA); they provide not only comprehensive reports on the analysis of coding variants in diverse samples and cell lines but also an invaluable resource for the research community. However, no existing database collects the mutant protein sequences related to the variants identified in these studies. CMPD is designed to address this issue, serving as a bridge between genomic data and proteomic studies and focusing on protein sequence-altering variations originating from both germline and cancer-associated somatic variations.
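The core sequence-altering operation such a database must perform, turning an annotated missense variant into a mutant protein entry, can be sketched as follows. This is an illustrative helper, not CMPD's actual code; the function name, the 1-based coordinate convention, and the toy sequence are assumptions made for the example.

```python
def apply_missense(protein_seq: str, pos: int, ref_aa: str, alt_aa: str) -> str:
    """Return the mutant protein sequence for a missense variant.

    pos is 1-based, as in protein-level annotations such as p.G12D;
    ref_aa is checked against the reference to catch coordinate errors.
    """
    if protein_seq[pos - 1] != ref_aa:
        raise ValueError(f"reference mismatch at position {pos}")
    return protein_seq[:pos - 1] + alt_aa + protein_seq[pos:]

# e.g. a KRAS-like p.G12D substitution applied to a toy reference sequence
mutant = apply_missense("MTEYKLVVVGAGGVGKSALT", 12, "G", "D")
```

Generating such mutant sequences for every collected germline and somatic variant is what lets an MS search engine match spectra against mutated peptides.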

Keywords: TCGA, cancer, mutant, proteome

Procedia PDF Downloads 576
4186 Application of Medical Information System for Image-Based Second Opinion Consultations – Georgian Experience

Authors: Kldiashvili Ekaterina, Burduli Archil, Ghortlishvili Gocha

Abstract:

Introduction – A medical information system (MIS) is at the heart of information technology (IT) implementation policies in healthcare systems around the world, and different architecture and application models of MIS have been developed. Despite obvious advantages and benefits, the application of MIS in everyday practice remains slow. Objective – Based on an analysis of the existing MIS models in Georgia, a multi-user web-based approach has been created. This presentation describes the architecture of the system and its application for image-based second opinion consultations. Methods – The MIS was created with .Net technology and an SQL database architecture. It provides local (intranet) and remote (internet) access to the system and management of its databases. The MIS is a fully operational system, successfully used for medical data registration and management as well as for the creation, editing, and maintenance of electronic medical records (EMR). Five hundred Georgian-language electronic medical records from the cervical screening activity, illustrated by images, were selected for second opinion consultations. Results – The primary goal of the MIS is patient management; however, the system can also be successfully applied to image-based second opinion consultations. Discussion – The ideal of healthcare in the information age must be to create a situation where healthcare professionals spend more time creating knowledge from medical information and less time managing medical information. The application of easily available and adaptable technology, together with improvement of infrastructure conditions, is the basis for eHealth applications. Conclusion – The MIS is a promising and practical technology solution that can be used successfully and effectively for image-based second opinion consultations.

Keywords: digital images, medical information system, second opinion consultations, electronic medical record

Procedia PDF Downloads 434
4187 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for the high-resolution images used for topographic mapping. Most of these satellites carry push-broom sensors: optical scanners equipped with linear arrays of CCDs, which have been deployed on most Earth observation satellites (EOSs). In addition, the Lunar Reconnaissance Orbiter Camera (LROC) is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5-meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE camera carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image-space coordinates in two or more images to the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we develop a generic push-broom sensor model to process imagery acquired by linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and software with the developed model. We start by defining an image reference coordinate system to unify the image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that these three points must lie on the same line.
The rotation angles for each CCD array at epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by polynomial interpolation functions in time (t), where t is the time elapsed at a certain epoch from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in different situations, and the unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, and the model can be used in different environments with various sensors, although the implementation process is more cost- and effort-consuming.
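The collinearity relation and the polynomial exterior orientation model described above can be sketched in standard photogrammetric notation (the symbols below are assumed, not taken from the paper). For a ground point (X, Y, Z) imaged by the linear array at epoch t:

```latex
% collinearity condition for a push-broom line at epoch t
x = -f\,
\frac{r_{11}(t)\,(X - X_S(t)) + r_{12}(t)\,(Y - Y_S(t)) + r_{13}(t)\,(Z - Z_S(t))}
     {r_{31}(t)\,(X - X_S(t)) + r_{32}(t)\,(Y - Y_S(t)) + r_{33}(t)\,(Z - Z_S(t))},
\qquad y \approx 0,
```

where f is the focal length, the r_ij(t) are elements of the rotation matrix built from the interpolated attitude angles, and the along-track image coordinate y vanishes because each exposure is a single line. The exposure station and attitude follow the polynomial interpolation in t, for example

```latex
X_S(t) = a_0 + a_1 t + a_2 t^2, \qquad
\omega(t) = c_0 + c_1 t + c_2 t^2,
```

with analogous polynomials for Y_S, Z_S, phi, and kappa; the coefficients a_i and c_i are among the unknowns recovered in the bundle adjustment.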

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

Procedia PDF Downloads 53
4186 Ophthalmic Services Covered by Albasar International Foundation in Sudan

Authors: Mohammad Ibrahim

Abstract:

The study was conducted at the Albasar International Foundation ophthalmic hospitals in Sudan to study the burden and patterns of ophthalmic disorders in the sector. A review of the hospital records revealed that the total number of patients examined in the hospitals and in the outreach camps conducted by the hospitals is 10,513,874, the total number of surgeries is 694,015, and the total number of pupils covered by the school program is 230,382. The organization works with a strong management system, high standards, and quality, result-based planning. The study showed that ophthalmic problems in Sudan are widespread and that temporary blindness disorders are prevalent, since the major cases and surgeries were cataract (57.8%), retinal problems (2.9%), glaucoma (2.4%), and orbit and oculoplastic disorders (2.2%); the other disorders were refractive errors, squint and strabismus, corneal, pediatric, and minor ophthalmic disorders.

Keywords: hospital and outreach ophthalmic services, largest coverage of ophthalmic services, nonprofit ophthalmic services, strong management system and standards

Procedia PDF Downloads 390
4185 Grid Pattern Recognition and Suppression in Computed Radiographic Images

Authors: Igor Belykh

Abstract:

Anti-scatter grids used in radiographic imaging for contrast enhancement leave specific artifacts. Those artifacts may be directly visible or may cause a Moiré effect when a digital image is resized on a diagnostic monitor. In this paper, we propose an automated grid artifact detection and suppression algorithm, addressing what remains an open problem. Grid artifact detection is based on a statistical approach in the spatial domain. Grid artifact suppression is based on the design and application of a Kaiser band-stop filter transfer function that avoids ringing artifacts. Experimental results are discussed, and we conclude with a description of the advantages over existing approaches.
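As an illustration of the suppression step, a Kaiser-window band-stop FIR filter can be designed and applied to one image row with SciPy. This is a minimal sketch, not the authors' implementation: the sampling rate, grid frequency, stop-band width, and ripple target below are invented for the example, and zero-phase filtering (`filtfilt`) is one simple way to avoid shifting edges.

```python
import numpy as np
from scipy import signal

fs = 200.0          # samples per unit length along a detector row (invented)
f_grid = 40.0       # detected grid-line frequency (invented)
ripple_db = 60.0    # desired stop-band attenuation
trans_width = 8.0   # transition width, same units as f_grid

# Kaiser design: tap count and shape parameter from the ripple/width spec
numtaps, beta = signal.kaiserord(ripple_db, trans_width / (0.5 * fs))
numtaps |= 1        # a band-stop FIR needs an odd (type I) tap count

taps = signal.firwin(numtaps, [f_grid - 5.0, f_grid + 5.0],
                     window=("kaiser", beta), pass_zero="bandstop", fs=fs)

# one synthetic image row: low-frequency anatomy + high-frequency grid artifact
x = np.arange(1024) / fs
row = np.sin(2 * np.pi * 1.5 * x) + 0.3 * np.sin(2 * np.pi * f_grid * x)

clean = signal.filtfilt(taps, [1.0], row)   # zero-phase application
```

In a real pipeline the grid frequency would come from the statistical detection stage, and the filter would be applied across the image in the direction perpendicular to the grid lines.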

Keywords: grid, computed radiography, pattern recognition, image processing, filtering

Procedia PDF Downloads 262
4184 Implementation of International Standards in the Field of Higher Secondary Education in Kerala

Authors: Bernard Morais Joosa

Abstract:

Kerala, the southern state of India, is known for its accomplishments in universal education and enrollment. Through this mission, the Government proposes comprehensive educational reforms, including upgrading 1000 Government schools to international standards during the first phase. The idea is not only to improve the infrastructural facilities but also to reform the teaching and learning process to present-day needs by introducing ICT-enabled learning and providing smart classrooms. There will be a focus on creating educational programmes that are useful for differently abled students. The mission is also meant to reinforce the teaching-learning process by providing ample opportunities for each student to construct their own knowledge using modern technology tools. The mission will redefine the existing classroom learning process, coordinate resource mobilization efforts, and develop the 'Janakeeya Vidyabhyasa Mathruka'. Special packages to support schools that have been in existence for over 100 years will also be attempted. The implementation will enlist the full involvement and partnership of the Parent Teacher Association. Kerala was the first state in the country to attain 100 percent literacy, more than two and a half decades ago. Since then, the State has not rested on its laurels; it has moved forward in leaps and bounds, conquering targets that no other State could achieve. The government of Kerala is now taking off towards a new goal of comprehensive educational reform. It focuses on the betterment of educational surroundings, the use of technology in education, and the renewal of learning methods: 1000 schools will be upgraded to international standards as Smart Schools, classrooms from standard 9 to 12 in high schools and higher secondary schools will be turned into high-tech classrooms, and a special package will fund the renovation of schools that have completed 50 and 100 years.
The government intends to focus on developing standards one to eight in tune with the times by engaging teachers, parents, and alumni to recapture the relevance of public schools. English learning will be encouraged in schools. The idea is not only to improve the infrastructure facilities but also to reform the curriculum to present-day needs. In keeping with the differently-abled-friendly approach of the government, there will be a focus on creating educational programmes that are useful for differently abled students, and the infrastructural deficiencies faced by such schools will be addressed. There will be special emphasis on ensuring internet connectivity to promote an IT-friendly environment. A task force and a full-time chief executive will be in charge of managing the day-to-day affairs of the mission. The Secretary of the Public Education Department will serve as the Mission Secretary and the Chairperson of the Task Force. As the Task Force will stress teacher training and the use of information technology, experts in the field, as well as the Directors of SCERT, IT School, SSA, and RMSA, will also be part of it.

Keywords: educational standards, methodology, pedagogy, technology

Procedia PDF Downloads 119
4183 Life Expansion: Autobiography, Fictionalized Digital Diaries, and Forged Narratives of Everyday Life on Instagram

Authors: Pablo M. S. Vallejos

Abstract:

The article aims to analyze the autobiographical practices of Instagram users, observing the instrumentalization of image resources in the construction of the visual narratives that make up this archive and digital diary. Through bibliographical review, discourse exploration, and case studies, the research also aims to present a new theoretical perception of the everyday records, edited with a collage of filters and aesthetic tools, that permeate this social network, understanding it as a platform of fictionalization and an expansion of life. The work thus reflects on possible futures in the elaboration of representations and identities in the context of digital spaces in the 21st century.

Keywords: visual culture, social media, autobiography, image

Procedia PDF Downloads 59
4182 Mapping a Data Governance Framework to the Continuum of Care in the Active Assisted Living Context

Authors: Gaya Bin Noon, Thoko Hanjahanja-Phiri, Laura Xavier Fadrique, Plinio Pelegrini Morita, Hélène Vaillancourt, Jennifer Teague, Tania Donovska

Abstract:

Active Assisted Living (AAL) refers to systems designed to improve the quality of life, aid in independence, and create healthier lifestyles for care recipients. As the population ages, there is a pressing need for non-intrusive, continuous, adaptable, and reliable health monitoring tools to support aging in place. AAL has great potential to support these efforts with the wide variety of solutions currently available, but insufficient efforts have been made to address concerns arising from the integration of AAL into care. The purpose of this research was to (1) explore the integration of AAL technologies and data into the clinical pathway, and (2) map data access and governance for AAL technology in order to develop standards for use by policy-makers, technology manufacturers, and developers of smart communities for seniors. This was done through four successive research phases: (1) literature search to explore existing work in this area and identify lessons learned; (2) modeling of the continuum of care; (3) adapting a framework for data governance into the AAL context; and (4) interviews with stakeholders to explore the applicability of previous work. Opportunities for standards found in these research phases included a need for greater consistency in language and technology requirements, better role definition regarding who can access and who is responsible for taking action based on the gathered data, and understanding of the privacy-utility tradeoff inherent in using AAL technologies in care settings.

Keywords: active assisted living, aging in place, internet of things, standards

Procedia PDF Downloads 119
4181 Succinct Perspective on the Implications of Intellectual Property Rights and 3rd Generation Partnership Project in the Rapidly Evolving Telecommunication Industry

Authors: Arnesh Vijay

Abstract:

Ever since its early introduction in the late 1980s, the mobile industry has been evolving rapidly with each passing year. The development witnessed lies not just in its ability to support diverse applications, but also in its extension into diverse technological means to access and offer various services to users. Amongst the various technologies present, radio systems have clearly emerged as a strong contender due to their attributes of accessibility, reachability, interactivity, and cost efficiency. These advancements have no doubt guaranteed unprecedented ease, utility, and sophistication to cell phone users, but they have also caused uncertainty due to the interdependence of various systems, making it extremely complicated to map concepts exactly onto 3GPP (3rd Generation Partnership Project) standards. Although the close interrelation and interdependence of intellectual property rights and mobile standard specifications have been widely acknowledged by the technical and legal communities, there is, however, a requirement for a clear distinction between the scope and future-proofing of inventions intended to influence standards and their marketplace adoptability. For this, collaborative work is required between intellectual property professionals, researchers, standardization specialists, and country-specific legal experts. With the evolution towards next-generation mobile technology, i.e., 5G systems, the need for further work in this field is felt now more than ever before. Along these lines, this poster will briefly describe the importance of intellectual property rights in the European market. More specifically, it will analyse the role played by intellectual property in various standardization institutes, such as 3GPP and the ITU (International Telecommunication Union).
The main intention: to ensure the scope and purpose is well defined, and concerned parties on all four sides are well informed on the clear significance of good proposals which not only bring economic revenue to the company but those that are capable of improving the technology and offer better services to mankind. The poster will comprise different sections. The first segment begins with a background on the rapidly evolving mobile technology, with a brief insight on the industrial impact of standards and its relation to intellectual property rights. Next, section two will succinctly outline the interplay between patents and standards; explicitly discussing the ever changing and rapidly evolving relationship between the two sectors. Then the remaining sections will examine ITU and its role played in international standards development, touching upon the various standardization process and the common patent policies and related guidelines. Finally, it proposes ways to improve the collaboration amongst various sectors for a more evolved and sophisticated next generation mobile telecommunication system. The sole purpose here is to discuss methods to reduce the gap and enhance the exchange of information between the two sectors to offer advanced technologies and services to mankind.

Keywords: mobile technology, mobile standards, intellectual property rights, 3GPP

Procedia PDF Downloads 118
4180 Intelligent Rheumatoid Arthritis Identification System Based Image Processing and Neural Classifier

Authors: Abdulkader Helwan

Abstract:

Rheumatoid arthritis is characterized as a chronic inflammatory disorder that affects the joints by damaging body tissues. There is therefore an urgent need for an effective intelligent identification system for knee rheumatoid arthritis, especially in its early stages. The aim of this paper is to develop a new intelligent system for the identification of rheumatoid arthritis of the knee utilizing image processing techniques and a neural classifier. The system involves two principal stages. The first is the image processing stage, in which the images are processed using techniques such as RGB-to-grayscale conversion, rescaling, median filtering, background extraction, image subtraction, segmentation using Canny edge detection, and feature extraction using pattern averaging. The extracted features are then used as inputs to the neural network, which classifies the X-ray knee images as normal or abnormal (arthritic) using a backpropagation learning algorithm; the network is trained on 400 normal and abnormal X-ray knee images. The system was tested on 400 X-ray images, and the network showed good performance during that phase, resulting in a good identification rate of 97%.
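The preprocessing and feature-extraction steps can be sketched as follows. This is a minimal illustration, assuming that "pattern averaging" means block-wise mean pooling; the grid size, function name, and 3x3 median kernel are choices made for the example, not details from the paper.

```python
import numpy as np
from scipy import ndimage

def pattern_average_features(image, grid=(8, 8)):
    """Median-filter a grayscale image, then reduce it to block-mean features."""
    img = ndimage.median_filter(image.astype(float), size=3)  # denoising step
    h, w = img.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            block = img[i * h // gh:(i + 1) * h // gh,
                        j * w // gw:(j + 1) * w // gw]
            feats.append(block.mean())           # one averaged value per block
    return np.array(feats) / 255.0               # normalized feature vector
```

The resulting 64-element vector is the kind of compact input that would feed the input layer of a backpropagation network for the normal/arthritic decision.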

Keywords: rheumatoid arthritis, intelligent identification, neural classifier, segmentation, backpropagation

Procedia PDF Downloads 517
4179 Early Detection of Breast Cancer in Digital Mammograms Based on Image Processing and Artificial Intelligence

Authors: Sehreen Moorat, Mussarat Lakho

Abstract:

A method of artificial intelligence using digital mammogram data is proposed in this paper for the detection of breast cancer. Many researchers have developed techniques for the early detection of breast cancer, and early diagnosis helps to save many lives. The detection of breast cancer through mammography is an effective method because it detects the cancer before it can be felt, which increases the survival rate. In this paper, we propose an image processing technique for enhancing the image in order to detect the graphical table data and markings. Texture features based on the Gray-Level Co-occurrence Matrix (GLCM) and intensity-based features are extracted from the selected region. For classification, a neural-network-based supervised classifier system is used, which can discriminate between benign and malignant cases. In total, 68 digital mammograms were used to train the classifier. The results show that automated detection of breast cancer is beneficial for early diagnosis and increases the survival rates of breast cancer patients. The proposed system will help radiologists in the better interpretation of breast cancer.
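The texture-feature step can be sketched with a small NumPy implementation of a gray-level co-occurrence matrix for a single horizontal offset. The quantization level and the three features chosen below are conventional examples, not values taken from the paper.

```python
import numpy as np

def glcm_features(image, levels=8):
    """GLCM for the horizontal neighbor offset (0, 1), with three classic features."""
    # quantize 8-bit gray values into `levels` bins
    q = np.clip((image.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()   # horizontally adjacent pixel pairs
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)                   # count co-occurrences
    p = glcm / glcm.sum()                        # normalize to a joint probability
    i, j = np.indices(p.shape)
    return {
        "contrast": float((p * (i - j) ** 2).sum()),
        "energy": float((p ** 2).sum()),
        "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
    }
```

A smooth region yields low contrast and high energy, while a textured region does the opposite; vectors of such features, alongside intensity statistics, are what the supervised classifier would consume.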

Keywords: medical imaging, cancer, processing, neural network

Procedia PDF Downloads 247
4178 Similarity Based Retrieval in Case Based Reasoning for Analysis of Medical Images

Authors: M. Dasgupta, S. Banerjee

Abstract:

Content-Based Image Retrieval (CBIR) coupled with Case-Based Reasoning (CBR) is a paradigm that is becoming increasingly popular in the diagnosis and therapy planning of medical ailments, utilizing the digital content of medical images. This paper presents a survey of some of the promising approaches used in the detection of abnormalities in retina images, as well as in mammographic screening and the detection of regions of interest in MRI scans of the brain. We also describe our proposed algorithm to detect hard exudates in fundus images of the retinas of diabetic retinopathy patients.

Keywords: case based reasoning, exudates, retina image, similarity based retrieval

Procedia PDF Downloads 331
4177 Study of Compatibility and Oxidation Stability of Vegetable Insulating Oils

Authors: Helena M. Wilhelm, Paulo O. Fernandes, Laís P. Dill, Kethlyn G. Moscon

Abstract:

The use of vegetable oil (or natural ester) as an insulating fluid in electrical transformers is a trend that contributes to environmental preservation, since the fluid is biodegradable and non-toxic. Vegetable oil also has high flash and combustion points and is considered a fire-safe fluid. However, vegetable oil is usually less stable towards oxidation than mineral oil. Both insulating fluids, mineral and vegetable oils, need to be tested periodically according to specific standards. Oxidation stability can be determined from the induction period measured by the conductivity (Rancimat) method, which monitors the effectiveness of the oil's antioxidant additives; this methodology is already established for food applications and biodiesel but is still not standardized for insulating fluids. Besides adequate oxidation stability, fluids must be compatible with the transformer's construction materials under normal operating conditions to ensure that damage to the oil and to transformer parts does not occur. The ASTM standard and the Brazilian norm differ in the parameters evaluated, which reveals the need to regulate tests for each oil type. The aim of this study was to assess the oxidation stability and compatibility of vegetable oils in order to suggest the best way to assure viable performance of vegetable oil as a transformer insulating fluid. The induction period of several vegetable insulating oils from the local market was determined with the Rancimat instrument according to the BS EN 14112 standard at different temperatures (110, 120, and 130 °C). The compatibility of vegetable oil was also assessed according to ASTM and ABNT NBR standards. The main results showed that the best temperature for the Rancimat test is 130 °C, which allows a better observation of the conductivity change. The compatibility test results presented differences between the vegetable and mineral oil standards that should be taken into account in oil testing, since materials compatibility and oxidation stability are essential for equipment reliability.

Keywords: compatibility, Rancimat, natural ester, vegetable oil

Procedia PDF Downloads 185
4176 A Nonlinear Parabolic Partial Differential Equation Model for Image Enhancement

Authors: Tudor Barbu

Abstract:

We present a robust nonlinear parabolic partial differential equation (PDE)-based denoising scheme in this article. Our approach is based on a second-order anisotropic diffusion model, which is described first. Then, a consistent and explicit numerical approximation algorithm is constructed for this continuous model using the finite-difference method. Finally, our restoration experiments and a method comparison, which prove the effectiveness of the proposed technique, are discussed.
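A minimal explicit finite-difference step for a second-order anisotropic diffusion model can be sketched as follows; the abstract does not specify the diffusivity function, so the exponential Perona-Malik-style edge-stopping function, the time step, and the toy image are assumptions:

```python
import numpy as np

def diffuse(u, dt=0.1, kappa=0.2, steps=10):
    """Explicit finite-difference scheme for 2-D anisotropic diffusion."""
    u = u.astype(float).copy()

    def g(d):
        # edge-stopping diffusivity: ~1 in flat areas, ~0 across strong edges
        return np.exp(-(d / kappa) ** 2)

    for _ in range(steps):
        p = np.pad(u, 1, mode="edge")   # Neumann (reflecting) boundary
        dn = p[:-2, 1:-1] - u           # differences to the four neighbours
        ds = p[2:, 1:-1] - u
        de = p[1:-1, 2:] - u
        dw = p[1:-1, :-2] - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# noisy step edge: diffusion should flatten the noise but keep the edge
rng = np.random.default_rng(0)
noisy = np.zeros((8, 8))
noisy[:, 4:] = 1.0
noisy += 0.05 * rng.standard_normal((8, 8))
smoothed = diffuse(noisy)
```

The point of the anisotropy is visible in the result: noise inside flat regions is smoothed away while the strong edge, where the diffusivity collapses, survives.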

Keywords: anisotropic diffusion, finite differences, image denoising and restoration, nonlinear PDE model, numerical approximation schemes

Procedia PDF Downloads 301
4175 An 8-Bit, 100-MSPS Fully Dynamic SAR ADC for Ultra-High Speed Image Sensor

Authors: F. Rarbi, D. Dzahini, W. Uhring

Abstract:

In this paper, a dynamic and power-efficient 8-bit, 100-MSPS Successive Approximation Register (SAR) Analog-to-Digital Converter (ADC) is presented. The circuit uses a non-differential capacitive Digital-to-Analog Converter (DAC) architecture segmented by 2. The prototype is produced in a commercial 65-nm 1P7M CMOS technology with a 1.2-V supply voltage. The size of the core ADC is 208.6 x 103.6 µm2. The post-layout noise simulations feature an SNR of 46.9 dB at the Nyquist frequency, corresponding to an effective number of bits (ENOB) of 7.5. The total power consumption of this SAR ADC is only 1.55 mW at 100 MSPS, yielding a figure of merit of 85.6 fJ/conversion-step.
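The reported numbers are mutually consistent under the standard Walden figure of merit, FoM = P / (2^ENOB · f_s), with ENOB = (SNR - 1.76) / 6.02; a quick cross-check using only the values quoted in the abstract:

```python
def enob(snr_db):
    """Effective number of bits from SNR in dB (standard definition)."""
    return (snr_db - 1.76) / 6.02

def fom_fj_per_step(power_w, snr_db, fs_hz):
    """Walden figure of merit in fJ/conversion-step."""
    return power_w / (2 ** enob(snr_db) * fs_hz) / 1e-15

print(enob(46.9))                              # ~7.5 bits
print(fom_fj_per_step(1.55e-3, 46.9, 100e6))   # ~86 fJ/step
```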

Keywords: CMOS analog to digital converter, dynamic comparator, image sensor application, successive approximation register

Procedia PDF Downloads 401
4174 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV

Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran

Abstract:

Ortho-rectification is the process of geometrically correcting an aerial image so that its scale is uniform. The resulting ortho-image is corrected for lens distortion, topographic relief, and camera tilt, and can be used to measure true distances, because it is an accurate representation of the Earth's surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired from a UAV platform. This can only be achieved by comparing such video imagery with an existing digital map, and such a comparison is only possible when the image is ortho-rectified into the same co-ordinate system as the existing map. The video image sequences from the UAV platform must therefore be geo-registered, that is, each video frame must carry the necessary camera information before the ortho-rectification process is performed. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area, which can be compared with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedure for real-time ortho-rectification is: (1) decompilation of the video stream into individual frames; (2) finding the interior camera orientation parameters; (3) finding the relative exterior orientation parameters of the video frames with respect to each other; (4) finding the absolute exterior orientation parameters, using a self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric map, which can be compared with a well-referenced existing digital map for geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to evaluate our method.
Fifteen minutes of video and telemetry data were collected using the UAV and processed with the four-step ortho-rectification procedure. The results demonstrate that geometric measurements of the control field from ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the 6 control points measured by a GPS receiver, is between 3 and 5 meters.
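One of the geometric effects removed by ortho-rectification, the displacement due to topographic relief, follows the classical formula d = r·h/H: a point imaged at radial distance r from the nadir, lying at height h above the datum under flying height H, is shifted radially outward by d. A small sketch with illustrative numbers (not data from the Abuja test field):

```python
def relief_displacement(r, h, flying_height):
    """Radial image displacement d = r*h/H caused by terrain relief."""
    return r * h / flying_height

def ortho_correct(r, h, flying_height):
    """Radial position of the point after removing relief displacement."""
    return r - relief_displacement(r, h, flying_height)

# a target imaged 40 mm from the nadir point, 120 m above the datum,
# with the UAV flying 1200 m above the datum
print(ortho_correct(40.0, 120.0, 1200.0))
```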

Keywords: geo-referencing, ortho-rectification, video frame, self-calibration

Procedia PDF Downloads 464
4173 An Intelligent Nondestructive Testing System of Ultrasonic Infrared Thermal Imaging Based on Embedded Linux

Authors: Hao Mi, Ming Yang, Tian-yue Yang

Abstract:

Ultrasonic infrared nondestructive testing is a testing method offering high speed, accuracy, and localization. However, some problems remain: detection requires manual real-time judgment in the field, and the methods for storing and viewing results are still primitive. An intelligent non-destructive detection system based on embedded Linux is put forward in this paper. The hardware part of the detection system is based on an ARM (Advanced RISC Machine) core, and an embedded Linux system is built to realize image processing and defect detection on the thermal images. The CLAHE algorithm and a Butterworth filter are used to process the thermal image, and then the Boa web server and CGI (Common Gateway Interface) technology are used to transmit the test results over the network to a display terminal for real-time and remote monitoring. The system also reduces manual labor and removes the obstacle of manual judgment. According to the experimental results, the system provides a convenient and quick solution for industrial non-destructive testing.
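The contrast-limited histogram equalisation at the heart of CLAHE can be sketched for a single tile; a full CLAHE implementation also interpolates between tiles, and the bin count, clip limit, and toy tile below are illustrative assumptions:

```python
def clahe_tile_lut(pixels, levels=8, clip=4):
    """Contrast-limited equalisation lookup table for one image tile."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # clip the histogram and redistribute the excess uniformly,
    # which limits the contrast amplification in near-uniform tiles
    excess = sum(max(0, h - clip) for h in hist)
    hist = [min(h, clip) + excess // levels for h in hist]
    # cumulative distribution -> intensity mapping
    cdf, total, lut = 0, sum(hist), []
    for h in hist:
        cdf += h
        lut.append(round((levels - 1) * cdf / total))
    return lut

tile = [0, 0, 0, 0, 0, 1, 1, 2, 3, 7]  # toy 8-level thermal tile
lut = clahe_tile_lut(tile)
print([lut[p] for p in tile])
```

Applying the lookup table spreads the crowded low intensities over a wider range, which is what makes faint thermal defects easier to see.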

Keywords: remote monitoring, non-destructive testing, embedded Linux system, image processing

Procedia PDF Downloads 206
4172 Digital Material Characterization Using the Quantum Fourier Transform

Authors: Felix Givois, Nicolas R. Gauger, Matthias Kabel

Abstract:

Efficient digital material characterization is of great interest in many fields of application. It consists of three steps. First, a 3D reconstruction of 2D scans must be performed. Then, the resulting gray-value image of the material sample is enhanced by image processing methods. Finally, partial differential equations (PDE) are solved on the segmented image, and by averaging the resulting solution fields, effective properties like stiffness or conductivity can be computed. Due to the high resolution of current CT images, the last step is typically performed with matrix-free solvers. Among them, a solver that uses the explicit formula of the Green-Eshelby operator in Fourier space has been proposed by Moulinec and Suquet. Its most complex algorithmic part is the Fast Fourier Transform (FFT). In our talk, we will discuss the potential quantum advantage that can be obtained by replacing the FFT with the Quantum Fourier Transform (QFT). We will especially show that the data transfer for noisy intermediate-scale quantum (NISQ) devices can be improved by using appropriate boundary conditions for the PDE, which also allows using semi-classical versions of the QFT. In the end, we will compare the results of the QFT-based algorithm for simple geometries with the results of the FFT-based homogenization method.
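The FFT/QFT substitution can be checked classically: on n qubits the QFT acts on the 2^n amplitudes as the unitary DFT matrix F[j, k] = ω^(jk)/√N with ω = exp(2πi/N), so up to normalisation and sign convention it must agree with the FFT. A small sketch (the qubit count is illustrative):

```python
import numpy as np

def qft_matrix(n_qubits):
    """Unitary QFT matrix: F[j, k] = exp(2*pi*i*j*k/N) / sqrt(N)."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

n = 3
F = qft_matrix(n)
x = np.random.default_rng(1).standard_normal(2 ** n)

# the QFT uses omega = exp(+2*pi*i/N) while np.fft.fft uses the opposite
# sign, so the QFT corresponds to a rescaled inverse FFT
same = np.allclose(F @ x, np.sqrt(2 ** n) * np.fft.ifft(x))
print(same)
```

The matrix-vector product here is for verification only; the point of the QFT is that the same unitary acts on n qubits with polynomially many gates instead of O(N log N) classical operations.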

Keywords: maximum likelihood quantum amplitude estimation (MLQAE), numerical homogenization, quantum Fourier transform (QFT), NISQ devices

Procedia PDF Downloads 58
4171 View Synthesis of Kinetic Depth Imagery for 3D Security X-Ray Imaging

Authors: O. Abusaeeda, J. P. O. Evans, D. Downes

Abstract:

We demonstrate the synthesis of intermediary views within a sequence of X-ray images that exhibit depth from motion or kinetic depth effect in a visual display. Each synthetic image replaces the requirement for a linear X-ray detector array during the image acquisition process. Scale invariant feature transform, SIFT, in combination with epipolar morphing is employed to produce synthetic imagery. Comparison between synthetic and ground truth images is reported to quantify the performance of the approach. Our work is a key aspect in the development of a 3D imaging modality for the screening of luggage at airport checkpoints. This programme of research is in collaboration with the UK Home Office and the US Dept. of Homeland Security.
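At its core, synthesising an intermediary view by morphing interpolates matched feature positions between the two real views; SIFT matching and the dense epipolar resampling are omitted here, and the point pairs are illustrative placeholders:

```python
def morph_points(pts_a, pts_b, t):
    """Interpolate matched (x, y) positions; t=0 gives view A, t=1 gives view B."""
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(pts_a, pts_b)]

# matched SIFT keypoints in two X-ray views of the same bag;
# the horizontal shift between views encodes depth
view_a = [(10.0, 20.0), (40.0, 22.0)]
view_b = [(18.0, 20.0), (44.0, 22.0)]
print(morph_points(view_a, view_b, 0.5))   # virtual view halfway between
```

A sequence of such virtual views at increasing t is what produces the kinetic depth effect in the display.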

Keywords: X-ray, kinetic depth, KDE, view synthesis

Procedia PDF Downloads 246
4170 Enzymatic Repair Prior To DNA Barcoding, Aspirations, and Restraints

Authors: Maxime Merheb, Rachel Matar

Abstract:

Retrieving ancient DNA sequences, which in turn permits entire-genome sequencing from fossils, has improved extraordinarily in recent years thanks to sequencing technology and other methodological advances. Nevertheless, the quest for ancient DNA is still obstructed by the damage that accumulates in DNA after the death of a living organism. This damage falls into three main categories: (i) physical abnormalities, such as strand breaks, which leave only short DNA fragments; (ii) modified bases (mainly cytosine deamination), which cause sequence errors through the incorporation of an incorrect nucleotide during DNA amplification; and (iii) DNA modifications referred to as blocking lesions, which halt PCR extension and thereby also impair amplification and sequencing. The issues arising from breakage and coding errors have been significantly reduced in recent years: fast sequencing of short DNA fragments has been enabled by high-throughput sequencing platforms, and most coding errors were found to be consequences of cytosine deamination, which can easily be removed from the DNA by enzymatic treatment. The methodology for repairing DNA sequences is still in development; it can basically be described as reintroducing cytosine rather than uracil, a technique thus restricted to amplified DNA molecules. Eliminating every type of damage (particularly lesions that block PCR) still awaits complete repair methodologies, so DNA detection immediately after extraction is highly needed. Before investing resources in extensive, costly, and uncertain repair techniques, it is vital to distinguish between two possibilities: (i) no DNA exists to be amplified in the first place, so the sample is completely unrepairable, or (ii) the DNA is merely refractory to PCR and is worth repairing and amplifying.
Hence, it is extremely important to develop a non-enzymatic technique to detect even the most degraded DNA.

Keywords: ancient DNA, DNA barcoding, enzymatic repair, PCR

Procedia PDF Downloads 390
4169 Analysis of Two Phase Hydrodynamics in a Column Flotation by Particle Image Velocimetry

Authors: Balraju Vadlakonda, Narasimha Mangadoddy

Abstract:

The hydrodynamic behavior of a laboratory flotation column was analyzed using particle image velocimetry (PIV). Complete characterization of column flotation requires determining the flow velocity induced by the bubbles in the liquid phase, the bubble velocity, and the bubble characteristics: diameter, shape, and bubble size distribution. An experimental procedure for simultaneous, phase-separated velocity measurements in two-phase flows is introduced. The non-invasive PIV technique was used to quantify the instantaneous flow field, as well as the time-averaged flow patterns, in selected planes of the column. By combining fluorescent tracer particles, shadowgraphy, and digital phase separation with a masking technique, the PIV measurements yielded the bubble velocity as well as the Reynolds stresses in the column. Axial and radial mean velocities, as well as fluctuating components, were determined for both phases by averaging a sufficient number of image pairs. The bubble size distribution was cross-validated with a high-speed video camera, the average turbulent kinetic energy of the bubbles was analyzed, and different air flow rates were considered in the experiments.
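The core of PIV is that the displacement of a tracer pattern between two consecutive frames is the shift maximising the cross-correlation of an interrogation window; this 1-D sketch with a toy intensity signal stands in for the 2-D window correlation used in practice:

```python
def best_shift(frame1, frame2, max_shift):
    """Shift of frame2 relative to frame1 that maximises cross-correlation."""
    def corr(shift):
        return sum(a * b for a, b in zip(frame1, frame2[shift:]))
    return max(range(max_shift + 1), key=corr)

frame1 = [0, 0, 5, 9, 5, 0, 0, 0, 0, 0]   # tracer blob at sample 3
frame2 = [0, 0, 0, 0, 0, 5, 9, 5, 0, 0]   # same blob, moved 3 samples
print(best_shift(frame1, frame2, 5))
```

Dividing the recovered displacement by the inter-frame time gives the local velocity; doing this per interrogation window yields the instantaneous flow field.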

Keywords: particle image velocimetry (PIV), bubble velocity, bubble diameter, turbulent kinetic energy

Procedia PDF Downloads 488
4168 Automatic Product Identification Based on Deep-Learning Theory in an Assembly Line

Authors: Fidel Lòpez Saca, Carlos Avilés-Cruz, Miguel Magos-Rivera, José Antonio Lara-Chávez

Abstract:

Automated object recognition and identification systems are widely used throughout the world, particularly in assembly lines, where they perform quality control and automatic part selection tasks. This article presents the design and implementation of an object recognition system for an assembly line. The proposed shape-and-color recognition system is based on deep learning theory, with a specially designed convolutional network architecture. The methodology involves stages such as image capture, color filtering, location of object mass centers, detection of horizontal and vertical object boundaries, and object clipping. Once the objects are cut out, they are sent to a convolutional neural network, which automatically identifies the type of figure. The identification system works in real time. The implementation was done on a Raspberry Pi 3 system and on a Jetson Nano device. The system is used in an assembly course of the bachelor's degree in industrial engineering. The results presented include the efficiency of the recognition and the processing time.
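The mass-center and clipping stages of the pipeline can be sketched on a binary mask (a thresholded, color-filtered frame); the tiny mask below is an illustrative placeholder:

```python
def mass_center_and_clip(mask):
    """Mass centre of the foreground pixels and the clipped bounding-box patch."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    rs = [r for r, _ in coords]
    cs = [c for _, c in coords]
    center = (sum(rs) / len(rs), sum(cs) / len(cs))
    # horizontal and vertical boundaries -> clip the object out of the frame
    clip = [row[min(cs):max(cs) + 1] for row in mask[min(rs):max(rs) + 1]]
    return center, clip

mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
center, clip = mass_center_and_clip(mask)
print(center, clip)
```

In the described system, the clipped patch is what gets resized and passed to the convolutional network for identification.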

Keywords: deep-learning, image classification, image identification, industrial engineering

Procedia PDF Downloads 144
4167 A Framework of Product Information Service System Using Mobile Image Retrieval and Text Mining Techniques

Authors: Mei-Yi Wu, Shang-Ming Huang

Abstract:

Online shoppers today often search for product information on the Internet using product keywords. To use this kind of search model, shoppers need a preliminary understanding of the products of interest and must choose the correct keywords. However, products encountered for the first time (for example, the worn clothes or backpack of a passer-by, whose brand is unknown) cannot be retrieved this way due to insufficient information. In this paper, we discuss and study E-commerce applications of image retrieval and text mining techniques. We design an E-commerce application system with a three-layer architecture to provide users with product information. The system automatically searches for and retrieves similar images and their corresponding web pages on the Internet, according to target pictures taken by users. Text mining techniques are then applied to extract important keywords from the retrieved web pages, and a web crawler searches prices at different online shopping stores with these keywords. Finally, users obtain product information, including photos and prices, for their favorite products. The experiments show the efficiency of the proposed system.
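The keyword-extraction stage can be sketched with TF-IDF scoring over the retrieved pages; the abstract does not name a specific weighting scheme, so TF-IDF and the toy token lists below are assumptions:

```python
import math

def tfidf(docs):
    """Per-document TF-IDF scores for lists of tokens."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    return [
        {t: doc.count(t) / len(doc) * math.log(n / df[t]) for t in set(doc)}
        for doc in docs
    ]

# toy token lists from two retrieved pages
pages = [
    ["red", "leather", "backpack", "backpack"],
    ["red", "cotton", "shirt"],
]
scores = tfidf(pages)
print(max(scores[0], key=scores[0].get))   # top keyword of the first page
```

The highest-scoring terms are the ones a crawler would then submit to the online stores to look up prices.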

Keywords: mobile image retrieval, text mining, product information service system, online marketing

Procedia PDF Downloads 340
4166 The Role of Artificial Intelligence Algorithms in Decision-Making Policies

Authors: Marisa Almeida Araújo

Abstract:

Artificial intelligence (AI) tools are already in use (including in the criminal justice system) and are becoming increasingly popular. The many questions that these (future) super-beings pose have their neuralgic center in the old tension between rationality and morality. For instance, if we follow a Kantian perspective in which morality derives from rationality, an AI whose rationality surpasses man's would also surpass him in ethical and moral standards, questioning the nature of mind, consciousness of self and others, and morality itself. Recognizing superior intelligence in a non-human being puts us in the contingency of having to recognize a peer, in a new form of coexistence and social relationship. Consider the humanoid robot Sophia, capable of reasoning and conversation, which has been granted Saudi citizenship, a fact that symbolically demonstrates our empathy with the being. Machines may come to have a more intelligent mind and even, eventually, higher ethical standards, to which, under the alluded categorical imperative, we would have to subject ourselves on pain of contradicting the universal Kantian law. The goal of our work is to recognize these complex ethical and legal issues and their significant impact on human rights and on democratic functioning itself.

Keywords: ethics, artificial intelligence, legal rules, principles, philosophy

Procedia PDF Downloads 181
4165 Training a Neural Network to Segment, Detect and Recognize Numbers

Authors: Abhisek Dash

Abstract:

This study used three coupled convolutional neural networks: one for number segmentation, one for number detection, and one for number recognition, all trained on the MNIST dataset. It was assumed that the images have a lighter background and a darker foreground. The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a 7x7 window over that pixel as the focus, the eight-neighborhood of the focus is checked for further dark pixels, and the segmentation network is trained to move in the directions that contain dark pixels; this is why it has sixteen outputs, arranged as "go east", "don't go east", "go south-east", "don't go south-east", "go south", "don't go south", and so on with respect to the focus window. The focus window was resized to a 28x28 image, and the network was trained to consider the neighborhoods containing dark pixels. These neighborhoods were pushed into a queue in a particular order, popped one at a time, and stitched to the existing partial image of the number, and the network was trained on which neighborhoods to consider when each new partial image was presented. This process was repeated until the image was fully covered by 7x7 neighborhoods and no uncovered dark pixels remained. During testing, the network scans for the first dark pixel; from there, it predicts which neighborhoods to consider and segments the image. The resulting group of neighborhoods is then passed to the detection network, which took 28x28 images as input and had two outputs denoting whether or not a number was detected. Since the ground-truth bounds of each number were known during training, the detection network was trained to output "number not found" until the bounds were met, and vice versa.
The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognizing the digits 0 to 9. It was activated only when the detection network voted in favor of a detected number. This methodology can segment connected and overlapping numbers. Additionally, invoking the recognition unit only when a number is detected minimizes false positives, and because the segmentation is learned, the need for hand-crafted rules of thumb is eliminated. The strategy can also be extended to other characters.
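The scanning strategy described above can be sketched classically: starting from the first dark pixel, neighbouring dark pixels are pushed onto a queue and visited until the digit is covered. Here a plain breadth-first traversal stands in for the segmentation network's learned "go / don't go" decisions, and single pixels stand in for the 7x7 focus windows:

```python
from collections import deque

def segment_digit(img):
    """Collect the connected dark (=1) pixels reachable from the first one found."""
    rows, cols = len(img), len(img[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if img[r][c])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):          # the eight-neighborhood of the focus
            for dc in (-1, 0, 1):
                n = (r + dr, c + dc)
                if (0 <= n[0] < rows and 0 <= n[1] < cols
                        and img[n[0]][n[1]] and n not in seen):
                    seen.add(n)
                    queue.append(n)
    return seen

img = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 1],   # the isolated pixel belongs to another digit
    [0, 0, 0, 0, 0],
]
print(sorted(segment_digit(img)))
```

Unlike this fixed connectivity rule, the trained network can also decide to cross small gaps or stop at touching strokes, which is what lets it separate connected and overlapping digits.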

Keywords: convolutional neural networks, OCR, text detection, text segmentation

Procedia PDF Downloads 142
4164 Depth Estimation in DNN Using Stereo Thermal Image Pairs

Authors: Ahmet Faruk Akyuz, Hasan Sakir Bilge

Abstract:

Depth estimation from stereo images is a challenging problem in computer vision, and many studies have been carried out to solve it. With advances in machine learning, it is now often tackled with neural-network-based solutions. The images used in these studies are mostly in the visible spectrum. However, the need to use the infrared (IR) spectrum for depth estimation has emerged because it gives better results than the visible spectrum under some conditions. We therefore recommend using thermal-thermal (IR) image pairs for depth estimation. In this study, we used two well-known networks (PSMNet, FADNet) with minor modifications to demonstrate the viability of this idea.
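For a rectified stereo pair, whether thermal or visible, depth follows from the disparity d predicted by such networks via the standard relation Z = f·B/d, with focal length f in pixels and baseline B; the camera parameters below are illustrative, not those of the study's rig:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth Z = f*B/d for a rectified stereo pair (f in pixels, B in metres)."""
    return focal_px * baseline_m / disparity_px

# e.g. f = 800 px, B = 0.25 m, network-predicted disparity of 20 px
print(depth_from_disparity(20.0, 800.0, 0.25))   # depth in metres
```

This is why sharper disparity maps from the network translate directly into more accurate depth, and why small disparity errors matter most for distant objects.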

Keywords: thermal stereo matching, deep neural networks, CNN, depth estimation

Procedia PDF Downloads 258