Search results for: Image authentication
620 Shaping of World-Class Delhi: Politics of Marginalization and Inclusion
Authors: Aparajita Santra
Abstract:
In the context of the government's vision of turning Delhi into a green, privatized and slum-free city with a world-class image on par with other global cities, this paper investigates the processes and politics that went into defining spaces in the city and attributing an aesthetic image to it. The paper explores two cases that were forged primarily through one particular type of power relation: the modernist movement adopted by the Nehruvian government post-independence, and special periods such as the Emergency and the Commonwealth Games. Studying these cases helps in understanding the ambivalence embedded in the different rationales that the government and other powerful agencies adopted in order to build world-classness. Through the study, it becomes easier to discern how city spaces were reconfigured in the name of 'good governance'. In this process, it also became important to analyze the double nature of law, both as a protector of people's rights and as a threat to people. What was interesting to note was that, in the process of nation building and creating an image for the city, the government's policies and programs were aimed mostly at the richer sections of society, while the poorer sections and lower income groups kept getting marginalized, subdued, and pushed further away, even geographically. The reconfiguration of city space and the attribution of an aesthetic character to it altered not only the way citizens perceived and engaged with these spaces, but also the way they envisioned their place in the city. Ironically, every attempt to build facilities for the city's elite in turn led to an inevitable removal of the marginalized sections of society as a supposedly necessary step toward a clean, green and world-class city. The paper questions the government's claim of creating a just, equitable city and granting rights to all. An argument is put forth that, in the politics of redistribution of space, the city that has been designed is meant only for the aspirational middle class and elite, who are seen as ideally primed to live in world-class cities. Thus, the aim is to study city spaces, urban form and the associated politics and power plays involved within them, and to understand whether segmented cities are being built in the name of creating sensible, inclusive cities.
Keywords: Aesthetics, ambivalence, governmentality, power, world-class.
619 Ethereum-Based Smart Contracts for Trade and Finance
Authors: Rishabh Garg
Abstract:
Traditionally, business parties build trust through a centralized operating mechanism, such as payment by letter of credit. However, the increase in cyber-attacks and malicious hacking has jeopardized business operations and finance practices. Emerging markets, with their high banking risks and large presence of digital financing, are looking for technology that enables transparency and traceability of any transaction in trade, finance or supply chain management. Blockchain systems, in the absence of any central authority, enable transactions across the globe with the help of decentralized applications (DApps). DApps consist of a front-end, a blockchain back-end, and middleware, that is, the code that connects the two. The front-end can be a sophisticated web or mobile app, which is used to invoke the functions/methods of the smart contract. Web apps can employ technologies such as HTML, CSS, React and Express. In this wake, fintech and blockchain products are appearing in brokerages, digital wallets, exchanges, post-trade clearance, settlement, middleware, infrastructure and base protocols. The present paper provides a technology-driven solution, financial inclusion and an innovative working paradigm for business and finance.
Keywords: Authentication, blockchain, channel, cryptography, DApps, data portability, Decentralized Public Key Infrastructure, Ethereum, hash function, Hashgraph, Privilege creep, Proof of Work algorithm, revocation, storage variables, Zero Knowledge Proof.
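A minimal sketch of the DApp middleware layer described above, using the web3.py client. The RPC endpoint, contract address, ABI and the settleTrade method are illustrative assumptions, not details from the paper:

```python
# Middleware sketch: connect a front end to a smart contract via web3.py.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # hypothetical node endpoint

abi = [{
    "name": "settleTrade", "type": "function",
    "inputs": [{"name": "tradeId", "type": "uint256"}],
    "outputs": [], "stateMutability": "nonpayable",
}]
contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder address
    abi=abi,
)

def settle_trade(trade_id: int, account: str, private_key: str):
    # Build, sign and broadcast a transaction on behalf of the front end.
    tx = contract.functions.settleTrade(trade_id).build_transaction({
        "from": account,
        "nonce": w3.eth.get_transaction_count(account),
    })
    signed = w3.eth.account.sign_transaction(tx, private_key)
    # Attribute is named raw_transaction in web3.py v7+.
    return w3.eth.send_raw_transaction(signed.rawTransaction)
```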
618 Design and Application of NFC-Based Identity and Access Management in Cloud Services
Authors: Shin-Jer Yang, Kai-Tai Yang
Abstract:
In response to a changing world and the fast growth of the Internet, more and more enterprises are replacing web-based services with cloud-based ones. Multi-tenancy technology is becoming more important, especially with Software as a Service (SaaS). This in turn leads to a greater focus on the application of Identity and Access Management (IAM). Conventional Near-Field Communication (NFC) based verification relies on a computer browser and a card reader to access an NFC tag. This type of verification does not support mobile device login or user-based access management functions. This study designs an NFC-based third-party cloud identity and access management scheme (NFC-IAM) that addresses this shortcoming. Data from simulation tests analyzed with Key Performance Indicators (KPIs) suggest that the NFC-IAM not only takes less time in identity identification but also cuts two-factor authentication time by 80% and improves verification accuracy to 99.9% or better. In functional performance analyses, NFC-IAM performed better in scalability and portability. The NFC-IAM app and back-end system to be developed and deployed on mobile devices are to support IAM features while offering users a more user-friendly experience and stronger security protection. In the future, our NFC-IAM can be employed in different environments, including identification for mobile payment systems and permission management for remote equipment monitoring, among other applications.
Keywords: Cloud service, multi-tenancy, NFC, IAM, mobile device.
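As a loose illustration of the second factor in such a scheme, the sketch below implements a generic HMAC challenge-response between a back end and a device-held tag key. It is an assumption-laden stand-in, since the abstract does not publish the actual NFC-IAM protocol:

```python
# Generic second-factor check: the back end challenges the mobile device,
# which answers with an HMAC over the challenge using a key provisioned
# to the NFC tag. All names and framing are illustrative.
import hashlib
import hmac
import os

def issue_challenge() -> bytes:
    return os.urandom(16)

def tag_response(tag_key: bytes, challenge: bytes) -> bytes:
    # Computed on the device from the NFC tag's provisioned key.
    return hmac.new(tag_key, challenge, hashlib.sha256).digest()

def verify(tag_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(tag_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = os.urandom(32)          # provisioned out of band at enrollment
c = issue_challenge()
assert verify(key, c, tag_response(key, c))
```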
617 Adjustment of a PET Scanner for PEPT
Authors: Alireza Sadrmomtaz
Abstract:
Positron emission particle tracking (PEPT) is a technique in which a single radioactive tracer particle can be accurately tracked as it moves. A limitation of PET is that in order to reconstruct a tomographic image it is necessary to acquire a large volume of data (millions of events), so it is difficult to study rapidly changing systems. By comparison, PEPT is a very fast process. In PEPT, detecting both photons defines a line, and the annihilation is assumed to have occurred somewhere along this line. The location of the tracer can be determined to within a few mm from coincident detection of a small number of pairs of back-to-back gamma rays, using triangulation. This can be achieved many times per second, and the track of a moving particle can be reliably followed. The technique was invented at the University of Birmingham [1]. The aim in PEPT is not to form an image of the tracer particle but simply to determine its location over time. If the tracer is followed for a long enough period within a closed, circulating system, it explores all possible types of motion. The application of PEPT to industrial process systems carried out at the University of Birmingham falls into two areas: the behaviour of granular materials and of viscous fluids. Granular materials are processed in industry, for example in the manufacture of pharmaceuticals, ceramics, food and polymers, and PEPT has been used in a number of ways to study the behaviour of these systems [2]. PEPT allows the possibility of tracking a single particle within the bed [3]. PEPT has also been used for studying systems such as fluid flow and viscous fluids in mixers [4], using a neutrally-buoyant tracer particle [5].
Keywords: PET, BGO, particle tracking, ECAT 931, list mode, PEPT.
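The triangulation step lends itself to a compact worked example: the tracer location can be taken as the least-squares point closest to all detected lines of response. The sketch below assumes each line is given by a point and a unit direction; it is a generic construction, not the Birmingham algorithm itself:

```python
import numpy as np

def locate_tracer(points: np.ndarray, dirs: np.ndarray) -> np.ndarray:
    """Least-squares intersection of lines p_i + t * d_i (d_i unit vectors)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, dirs):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to this line
        A += P
        b += P @ p
    # Solves sum_i P_i x = sum_i P_i p_i: the point minimizing the summed
    # squared distance to all lines of response.
    return np.linalg.solve(A, b)
```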
616 An Efficient Biometric Cryptosystem Using Autocorrelators
Authors: R. Bremananth, A. Chitra
Abstract:
Cryptography provides a secure manner of information transmission over an insecure channel. It authenticates messages based on a key, but not on the user, and it requires a lengthy key to encrypt and decrypt messages on the sending and receiving sides. These keys can, however, be guessed or cracked. Moreover, maintaining and sharing lengthy, random keys for enciphering and deciphering is a critical problem in cryptography systems. A new approach is described for generating a crypto key that is acquired from a person's iris pattern. In the biometric field, a template created by a biometric algorithm can only be authenticated by the same person. Among biometric templates, iris features can efficiently distinguish individuals and produce fewer false positives in a larger population. The iris code distribution provides low intra-class variability, which aids the cryptosystem in confidently decrypting messages given an exact match of the iris pattern. In the proposed approach, the iris features are extracted using multiresolution wavelets, producing a 135-bit iris code from each subject that is used for encrypting/decrypting the messages. Autocorrelators are used to recall the original messages from the partially corrupted data produced by the decryption process. The approach is intended to resolve the repudiation and key management problems. Results were analyzed for both a conventional iris cryptography system (CIC) and a non-repudiation iris cryptography system (NRIC). They show that this new approach provides considerably high authentication in the enciphering and deciphering processes.
Keywords: Autocorrelators, biometric cryptography, iris patterns, wavelets.
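To make the key-generation idea concrete, the sketch below derives a symmetric key by hashing a 135-bit code and gates use of the key on a Hamming-distance match. The hash-based derivation and the tolerance value are assumptions standing in for the paper's wavelet features and autocorrelator recall:

```python
import hashlib
import numpy as np

# Stand-in for the 135-bit iris code extracted by multiresolution wavelets.
iris_code = np.random.randint(0, 2, 135, dtype=np.uint8)

# Key derivation: pack the bits and hash them into a 256-bit crypto key.
key = hashlib.sha256(np.packbits(iris_code).tobytes()).digest()

def matches(enrolled: np.ndarray, probe: np.ndarray, max_hamming: int = 10) -> bool:
    # Accept a probe code only if it is close enough to the enrolled one;
    # the threshold is illustrative, not the paper's operating point.
    return int(np.sum(enrolled != probe)) <= max_hamming
```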
615 Lower Energy Gait Pattern Generation in a 5-Link Biped Robot Using Image Processing
Authors: Byounghyun Kim, Youngjoon Han, Hernsoo Hahn
Abstract:
The purpose of this study is to find a natural gait for a biped robot, similar to that of a human being, by analyzing the COG (Center of Gravity) trajectory of human gait. Human gait naturally maintains stability while using minimum energy. This paper seeks a gait pattern for a biped robot that uses minimum energy while maintaining stability, by analyzing the human gait pattern measured from gait images on the sagittal plane and the COG trajectory on the frontal plane. It is not possible to apply the torques of human articulations directly to a biped robot, because the two have different degrees of freedom. Nonetheless, a human and a 5-link biped robot are kinematically similar. We therefore generate the gait pattern of the 5-link biped robot using a GA-based gait adaptation algorithm that utilizes the human ZMP (Zero Moment Point) and the torques of all articulations measured from the human gait pattern. The proposed algorithm creates a gait pattern for the biped robot that is as fluent as a human being's and minimizes energy consumption, because the 5-link model takes into account the torque of each human articulation on the sagittal plane and the ZMP trajectory on the frontal plane. The paper demonstrates the superiority of the proposed algorithm by evaluating two 5-link biped robots, one using a gait pattern generated in the conventional way with inverse kinematics and one using the proposed approach, in terms of visual naturalness and efficiency.
Keywords: 5-link biped robot, gait pattern, COG (Center of Gravity), ZMP (Zero Moment Point).
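A compact sketch of the genetic-algorithm loop implied above, evolving joint-angle trajectories that minimize an energy proxy under a stability penalty. Both cost terms are placeholders for the paper's measured torque and ZMP criteria:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(gait: np.ndarray) -> float:
    energy = np.sum(np.diff(gait, axis=0) ** 2)      # proxy for joint torque effort
    stability = np.sum(np.abs(gait.mean(axis=0)))    # placeholder ZMP-style penalty
    return energy + 10.0 * stability

# Population: 40 candidate gaits, each 20 timesteps x 4 actuated joints.
pop = rng.normal(0.0, 0.3, (40, 20, 4))
for _ in range(200):
    scores = np.array([cost(g) for g in pop])
    parents = pop[np.argsort(scores)[:10]]           # truncation selection
    children = parents[rng.integers(0, 10, 30)] \
        + rng.normal(0.0, 0.05, (30, 20, 4))         # mutation
    pop = np.concatenate([parents, children])        # elitism keeps the best

best = pop[np.argmin([cost(g) for g in pop])]        # lowest-energy stable gait
```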
614 A 10 Giga VPN Accelerator Board for Trust Channel Security System
Authors: Ki Hyun Kim, Jang-Hee Yoo, Kyo Il Chung
Abstract:
This paper proposes a VPN Accelerator Board (VPN-AB) for a trust channel security system (TCSS). TCSS supports a safe communication channel between security nodes on the Internet. It furnishes authentication, confidentiality, integrity, and access control to the security nodes, which transmit data packets using the IPsec protocol. TCSS consists of an internet key exchange block, a security association block, and an IPsec engine block. The internet key exchange block negotiates the crypto algorithm and key used in the IPsec engine block. The security association block sets up and manages security association information. The IPsec engine block processes IPsec packets and provides the networking functions for communication. The IPsec engine block should be implemented in hardware with in-line mode transaction for high-speed IPsec processing. Our VPN-AB is implemented with a high-speed security processor that supports many cryptographic algorithms and in-line mode. We set up a small TCSS communication environment and measured the performance of the VPN-AB in it. The experimental results show that the VPN-AB achieves a maximum throughput of 15.645 Gbps with the IPsec protocol in 3DES-HMAC-MD5 tunnel mode.
Keywords: TCSS (Trust Channel Security System), VPN (Virtual Private Network), IPsec, SSL, security processor, secure communication.
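For reference, the per-packet transform benchmarked above (3DES-CBC encryption with HMAC-MD5 authentication) can be sketched in software with PyCryptodome. This shows the work the board offloads to hardware; key management and IPsec framing are omitted, and the layout is illustrative:

```python
import hashlib
import hmac
import os
from Crypto.Cipher import DES3
from Crypto.Util.Padding import pad

enc_key = DES3.adjust_key_parity(os.urandom(24))   # 3-key triple DES
auth_key = os.urandom(16)

def protect(payload: bytes) -> bytes:
    iv = os.urandom(8)
    cipher = DES3.new(enc_key, DES3.MODE_CBC, iv)
    ct = iv + cipher.encrypt(pad(payload, 8))      # encrypt-then-MAC
    tag = hmac.new(auth_key, ct, hashlib.md5).digest()[:12]  # 96-bit ICV, as in IPsec
    return ct + tag
```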
613 Accurate Positioning Method of Indoor Plastering Robot Based on Line Laser
Authors: Guanqiao Wang, Hongyang Yu
Abstract:
There is a lot of repetitive work in the traditional construction industry, and replacing manual labor with robots in such repetitive tasks can significantly improve production efficiency. Robots therefore appear more and more frequently in the construction industry. Navigation and positioning are very important tasks for construction robots, and the requirements for positioning accuracy are high. Traditional indoor robots mainly use radio frequency or vision methods for positioning. Compared with ordinary robots, an indoor plastering robot needs to be positioned close to the wall it is plastering, so the required positioning accuracy is higher; the traditional navigation and positioning methods have a large error, and without an exact position the robot either cannot plaster the wall or plasters it with large error. A positioning method is proposed that is assisted by line lasers and uses image-processing-based positioning to refine the traditional positioning result. In actual work, filtering, edge detection, the Hough transform and other operations are performed on the images captured by the camera. Each time, the position of the laser line is found and compared with a standard value, and the robot is moved or rotated accordingly to complete the positioning. The experimental results show that this accurate positioning method reduces the actual positioning error to less than 0.5 mm.
Keywords: Indoor plastering robot, navigation, precise positioning, line laser, image processing.
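A hedged sketch of the image pipeline just described, using OpenCV; the file name, thresholds and reference position are illustrative, not the paper's calibration values:

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical camera frame
blur = cv2.GaussianBlur(frame, (5, 5), 0)               # filtering step
edges = cv2.Canny(blur, 50, 150)                        # edge detection step
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=5)  # Hough transform step
if lines is not None:
    x1, y1, x2, y2 = lines[0][0]
    laser_x = (x1 + x2) / 2.0
    offset_px = laser_x - 320.0   # deviation from the stored standard position
    # offset_px, scaled by a mm-per-pixel calibration, drives the move/rotate
    # correction that completes the positioning.
```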
612 An Evaluation of Drivers in Implementing Sustainable Manufacturing in India: Using DEMATEL Approach
Authors: D. Garg, S. Luthra, A. Haleem
Abstract:
Due to growing concern about environmental and social consequences throughout the world, a need has been felt to incorporate sustainability concepts into conventional manufacturing. This paper is an attempt to identify and evaluate drivers for implementing sustainable manufacturing in the Indian context. Nine possible drivers for the successful implementation of sustainable manufacturing have been identified from an extensive review. Further, the Decision Making Trial and Evaluation Laboratory (DEMATEL) approach has been utilized to evaluate these identified drivers and categorize them into cause and effect groups. Five drivers (Societal Pressure and Public Concerns; Regulations and Government Policies; Top Management Involvement, Commitment and Support; Effective Strategies and Activities towards Socially Responsible Manufacturing; and Market Trends) fall into the cause group, and four drivers (Holistic View in Manufacturing Systems; Supplier Participation; Building Sustainable Culture in Organization; and Corporate Image and Benefits) fall into the effect group. "Societal Pressure and Public Concerns" was found to be the most critical driver and "Corporate Image and Benefits" the least critical, or most easily influenced, driver for implementing sustainable manufacturing in the Indian context. This paper may help practitioners better understand these drivers and their priorities for the effective implementation of sustainable manufacturing.
Keywords: Drivers, Decision Making Trial and Evaluation Laboratory (DEMATEL), India, Sustainable Manufacturing (SM).
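The DEMATEL computation itself is compact enough to show as a worked sketch: normalize the expert direct-influence matrix, form the total-relation matrix, and split factors into cause and effect groups by the sign of D - R. The 3 x 3 matrix here is illustrative, not the paper's nine-driver data:

```python
import numpy as np

A = np.array([[0, 3, 2],
              [1, 0, 3],
              [2, 1, 0]], dtype=float)      # expert direct-influence scores

# Normalize by the largest row/column sum, then total relation T = D (I - D)^-1.
D = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())
T = D @ np.linalg.inv(np.eye(len(A)) - D)

r = T.sum(axis=1)          # influence a factor dispatches (D_i)
c = T.sum(axis=0)          # influence a factor receives (R_i)
prominence = r + c         # overall importance (D + R)
relation = r - c           # positive -> cause group, negative -> effect group

cause = np.where(relation > 0)[0]
effect = np.where(relation <= 0)[0]
```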
611 Low Resolution Face Recognition Using Mixture of Experts
Authors: Fatemeh Behjati Ardakani, Fatemeh Khademian, Abbas Nowzari Dalini, Reza Ebrahimpour
Abstract:
Human activity monitoring is a major concern in a wide variety of applications, such as video surveillance, human-computer interfaces and face image database management, and detecting and recognizing faces is a crucial step in these applications. Furthermore, major advancements and initiatives in security applications in the past years have propelled face recognition technology into the spotlight. The performance of existing face recognition systems declines significantly if the resolution of the face image falls below a certain level. This is especially critical in surveillance imagery where, often, only low-resolution video of faces is available. If these low-resolution images are passed to a face recognition system, the performance is usually unacceptable. Hence, resolution plays a key role in face recognition systems. In this paper we introduce a new low resolution face recognition system based on mixture-of-experts neural networks. To produce the low resolution input images, we down-sampled the 48 × 48 ORL images to 12 × 12 using nearest neighbor interpolation; applying bicubic interpolation afterwards yields enhanced images, which are given to a Principal Component Analysis feature extractor. Comparison with some of the most related methods indicates that the proposed model yields an excellent recognition rate in low resolution face recognition, namely 100% for the training set and 96.5% for the test set.
Keywords: Low resolution face recognition, multilayered neural network, mixture of experts neural network, principal component analysis, bicubic interpolation, nearest neighbor interpolation.
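The preprocessing chain described above can be sketched directly; the random array stands in for the ORL faces, and the PCA dimensionality is an assumption:

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

def preprocess(face48: np.ndarray) -> np.ndarray:
    # 48x48 -> 12x12 by nearest neighbour, then bicubic enhancement back to 48x48.
    low = cv2.resize(face48, (12, 12), interpolation=cv2.INTER_NEAREST)
    enhanced = cv2.resize(low, (48, 48), interpolation=cv2.INTER_CUBIC)
    return enhanced.astype(np.float32).ravel()

faces = np.random.rand(100, 48, 48).astype(np.float32)   # stand-in for ORL data
X = np.stack([preprocess(f) for f in faces])
features = PCA(n_components=40).fit_transform(X)   # inputs to the expert networks
```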
610 Development of a Basic Robot System for Medical and Nursing Care for Patients with Glaucoma
Authors: Naoto Suzuki
Abstract:
Medical methods to completely treat glaucoma are yet to be developed, so ophthalmologists manage patients mainly to delay disease progression. Patients with glaucoma are mainly elderly individuals, and equipment in their homes that can provide medical treatment and care can relieve their families of that care. So that elderly people with glaucoma can live by themselves as much as possible, we developed a support robot with five functions: elderly care, ophthalmological examination, trip assistance in the neighborhood, medical treatment, and data referral to a hospital. A medical and nursing care robot should approach the patient within the visual field the patient can see, and at a speed suited to the patient's eyesight, because a robot approaching from an unseen direction would be dangerous. We experimentally developed a robot that brings a white cane to elderly people with glaucoma. The base of the robot is a carriage, a Megarover 1.1, with two infrared sensors; the robot moves along a white line on the floor using these sensors and has a special arm, which uses no electricity, that can scoop up a block attached to the white cane. Next, we developed a direction detector comprising a charge-coupled device camera (SVR41ResucueHD; Sun Mechatronics), goggles (MG-277MLF; Midori Anzen Co. Ltd.), and biconvex lenses with a focal length of 25 mm (Edmund Co.). Some young volunteers were photographed wearing the direction detector. Image processing was performed using Scilab 6.1.0 and Image Processing and Computer Vision Toolbox 4.1.2. To measure a person's line of vision, we calculated the iris's center of gravity using five processes: reduction, trimming, binarization or gray scale, edge extraction, and the Hough transform. Comparing the binarization and gray scale options in image processing, binarization proved better. For edge extraction, we compared five methods: Sobel, Prewitt, Laplacian of Gaussian, fast Fourier transform, and Canny; the Canny method was optimal. We then performed the Hough transform to search for the main coordinates on the iris's edge and found that it could calculate the center point of the iris.
Keywords: Glaucoma, support robot, elderly people, Hough transform, direction detector, line of vision.
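A rough OpenCV rendering of the line-of-vision measurement described above; thresholds and radius bounds are illustrative, and OpenCV's gradient-based Hough transform performs its own internal edge detection:

```python
import cv2
import numpy as np

eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)           # hypothetical input
_, binary = cv2.threshold(eye, 60, 255, cv2.THRESH_BINARY)  # binarization step
edges = cv2.Canny(binary, 50, 150)    # Canny: the best extractor in the study
# HOUGH_GRADIENT applies its own internal edge detection, so the binarized
# image is passed here; `edges` above shows the study's explicit pipeline.
circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=20, minRadius=10, maxRadius=60)
if circles is not None:
    cx, cy, r = circles[0][0]   # iris centre (cx, cy) approximates the line of vision
```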
609 A Supervised Learning Data Mining Approach for Object Recognition and Classification in High Resolution Satellite Data
Authors: Mais Nijim, Rama Devi Chennuboyina, Waseem Al Aqqad
Abstract:
Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data acquired through satellites, radars, and sensors contain important geographical information that can be used for remote sensing applications such as regional planning and disaster management. Spatial data classification and object recognition are important tasks for many applications, but classifying objects and identifying them manually from images is difficult. Object recognition is often considered a classification problem, and the task can be performed using machine-learning techniques. Among the many machine-learning algorithms available, classification is done here with supervised classifiers, namely Support Vector Machines (SVM), since the area of interest is known. We propose a classification method that considers neighboring pixels in a region for feature extraction and evaluates classifications precisely according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset was created for training and testing purposes; we generated the attributes from pixel intensity values and mean reflectance values. We demonstrate the benefits of using knowledge discovery and data-mining techniques, applied to image data, for accurate information extraction and classification from high spatial resolution remote sensing imagery.
Keywords: Remote sensing, object recognition, classification, data mining, waterbody identification, feature extraction.
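A small sketch of the neighborhood-feature idea with an SVM, using stand-in data in place of the satellite imagery; the 3 x 3 mean is one plausible reading of the neighboring-pixel features:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

img = np.random.rand(64, 64).astype(np.float32)   # stand-in single-band reflectance
labels = (img > 0.5).astype(int)                  # stand-in ground truth (e.g. water)

neigh_mean = uniform_filter(img, size=3)          # 3x3 neighbourhood mean feature
X = np.stack([img.ravel(), neigh_mean.ravel()], axis=1)
y = labels.ravel()

clf = SVC(kernel="rbf").fit(X, y)                 # supervised classifier
pred = clf.predict(X).reshape(img.shape)          # per-pixel class map for the ROI
```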
608 Topographical Image Transference Compatibility Generated through the Moiré Technique Applying Parametric Computer-Aided Design Software
Authors: M. V. G. Silva, J. Gazzola, I. M. Dal Fabbro, A. C. L. Lino
Abstract:
Computer-aided design counts on the support of parametric software in the design of machine components as well as of any other pieces of interest. The complexity of the element under study sometimes poses difficulties for computer design, or may even generate mistakes in the final body conception. Reverse engineering techniques are based on the transformation of images of an already conceived body into a matrix of points which can be visualized by the design software. The literature exhibits several techniques for obtaining the dimensional fields of machine components, such as contact instruments (coordinate measuring machines), calipers, and optical methods such as laser scanners, holograms, and moiré methods. The objective of this research work was to analyze the moiré technique as an instrument of reverse engineering, applied to bodies of non-complex geometry, i.e., simple solid figures, creating matrices of points. These matrices were forwarded to the parametric software SolidWorks to generate the virtual object. The volume obtained by mechanical means, i.e., by caliper, the volume obtained through the moiré method, and the volume generated by the SolidWorks software were compared and found to be in close agreement. This research work suggests the application of phase-shifting moiré methods as an instrument of reverse engineering, serving also to support the design of farm machinery elements.
Keywords: Reverse engineering, moiré technique, three-dimensional image generation.
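The phase-shifting moiré method suggested at the end has a standard four-step core, sketched below; converting phase to height requires a calibration that is omitted here:

```python
import numpy as np

def wrapped_phase(i1: np.ndarray, i2: np.ndarray,
                  i3: np.ndarray, i4: np.ndarray) -> np.ndarray:
    # Four fringe images shifted by 90 degrees each give the wrapped phase
    # via the classic four-step formula: phi = atan2(I4 - I2, I1 - I3).
    return np.arctan2(i4 - i2, i1 - i3)

# After unwrapping, each phase value scales to a surface height, yielding the
# (x, y, z) point matrix that is exported to SolidWorks.
```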
607 The Effect of Development of Two-Phase Flow Regimes on the Stability of Gas Lift Systems
Authors: Khalid. M. O. Elmabrok, M. L. Burby, G. G. Nasr
Abstract:
Flow instability during gas lift operation is caused by three major phenomena: density wave oscillation, casing heading pressure, and flow perturbation within the two-phase flow region. This paper focuses on the causes and effects of flow instability during gas lift operation and suggests ways to control it in order to maximise productivity. A laboratory-scale two-phase flow system was designed and built to study the effects of flow perturbation. The apparatus comprises a 2 m long, 66 mm ID transparent PVC pipe with an air injection point situated 0.1 m above the base of the pipe, the point where stabilised bubbles were clearly visible after injection. Air was injected into the water-filled transparent pipe at different flow rates and pressures. The behavior of the different sizes of bubbles generated within the two-phase region was captured using a digital camera, and the images were analysed using an advanced image processing package. It was observed that the average maximum bubble size increased with the length of the vertical pipe column, from 29.72 to 47 mm. Increasing the air injection pressure from 0.5 to 3 bar increased bubble sizes from 29.72 mm to 44.17 mm, after which they decreased as the pressure reached 4 bar. It was observed that at a higher bubble velocity of 6.7 m/s, larger-diameter bubbles coalesce and burst due to high agitation and collision with each other. This collapse of the bubbles causes a pressure drop and reverse flow within the two-phase flow and is the main cause of the flow instability phenomena.
Keywords: Gas lift instability, bubble formation, bubble collapse, image processing.
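The bubble-size measurement can be sketched as a contour analysis; the pixel calibration, threshold choice and minimum area are illustrative assumptions, not the study's settings:

```python
import cv2
import numpy as np

MM_PER_PX = 0.12                                       # hypothetical calibration
frame = cv2.imread("bubbles.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
_, mask = cv2.threshold(frame, 0, 255,
                        cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Equivalent circular diameter of each bubble, in millimetres.
diams_mm = [2.0 * np.sqrt(cv2.contourArea(c) / np.pi) * MM_PER_PX
            for c in contours if cv2.contourArea(c) > 20]
max_bubble = max(diams_mm, default=0.0)   # cf. the 29.72-47 mm range reported
```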
606 Indian License Plate Detection and Recognition Using Morphological Operations and Template Matching
Authors: W. Devapriya, C. Nelson Kennedy Babu, T. Srihari
Abstract:
Automatic license plate recognition (ALPR) is a technology that recognizes the registration plate, or number plate, of a vehicle. In this paper, an Indian vehicle number plate is extracted and its characters are predicted in an efficient manner. ALPR involves four major stages: i) pre-processing, ii) license plate location identification, iii) individual character segmentation, and iv) character recognition. The opening phase, pre-processing, helps to remove noise and enhances the quality of the image using morphological operations and image subtraction. The second and most puzzling stage ascertains the location of the license plate using Canny edge detection, dilation and erosion. In the third phase, each character is segmented using a Connected Component Approach (CCA), and in the final phase, each segmented character is recognized using cross-correlation template matching, a scheme specifically appropriate for fixed formats. Major applications of ALPR include toll collection, border control, parking, stolen car detection, enforcement, access control and traffic control. A database of 500 car images taken under dissimilar lighting conditions was used. The efficiency of the system is 97%. Our future focus is Indian vehicle license plate validation (whether the license plate of a vehicle conforms to the Road Transport and Highways standard).
Keywords: Automatic License plate recognition, Character recognition, Number plate Recognition, Template matching, morphological operation, canny edge detection.
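A sketch of the final cross-correlation template-matching stage; the template file paths are hypothetical, and one template image per character is assumed to exist:

```python
import cv2
import numpy as np

# Hypothetical per-character template images for the fixed plate font.
TEMPLATES = {c: cv2.imread(f"templates/{c}.png", cv2.IMREAD_GRAYSCALE)
             for c in "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"}

def recognize(char_img: np.ndarray) -> str:
    scores = {}
    for label, tmpl in TEMPLATES.items():
        # Resize the segmented character to the template size, then score by
        # normalized cross-correlation.
        resized = cv2.resize(char_img, (tmpl.shape[1], tmpl.shape[0]))
        scores[label] = cv2.matchTemplate(resized, tmpl,
                                          cv2.TM_CCOEFF_NORMED).max()
    return max(scores, key=scores.get)    # best-correlating character wins
```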
605 Energy-Efficient Transmission of Images over a DWT-OFDM System
Authors: Lakshmi Pujitha Dachuri, Nalini Uppala
Abstract:
In many applications, retransmission of lost packets is not permitted. OFDM is a multi-carrier modulation scheme with excellent performance that allows overlapping in the frequency domain, and it offers a simple way of dealing with multipath using relatively simple DSP algorithms.
In this paper, an image frame is compressed using the DWT, and the compressed data are arranged in data vectors, each with an equal number of coefficients. These vectors are quantized and binary coded to get the bit streams, which are then packetized and intelligently mapped to the OFDM system. Based on one-bit channel state information at the transmitter, the descriptions, in order of descending priority, are assigned to the currently good channels, so that poorer sub-channels can only affect the less important data vectors. We consider only one-bit channel state information available at the transmitter, informing only whether each sub-channel is good or bad. For a good sub-channel, the instantaneous received power should be greater than a threshold Pth; otherwise, the sub-channel is in a fading state and considered bad for that batch of coefficients. In order to reduce the system power consumption, the descriptions mapped onto the bad sub-channels are dropped at the transmitter. The binary channel state information gives an opportunity to map the bit streams intelligently and to save a reasonable amount of power. Using MATLAB simulation, we analyse the performance of our proposed scheme in terms of system energy saving without compromising the received quality in terms of peak signal-to-noise ratio.
Keywords: Binary channel state, Channel state feedback, DWT-OFDM system, Energy saving, Fading broadcast channel.
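A hedged sketch of the mapping rule: DWT-compress a frame, order coefficient vectors by priority, and transmit only on sub-channels whose instantaneous power exceeds Pth, per the one-bit feedback. The wavelet choice, threshold and fading model are stand-ins, not the paper's simulation parameters:

```python
import numpy as np
import pywt

img = np.random.rand(64, 64)                       # stand-in image frame
coeffs = pywt.wavedec2(img, "haar", level=2)       # DWT compression step
arr, _ = pywt.coeffs_to_array(coeffs)
priority = np.sort(np.abs(arr).ravel())[::-1]      # descriptions, descending priority

P_TH = 1.0                                         # power threshold Pth
power = np.random.exponential(1.0, size=16)        # Rayleigh-fading sub-channel powers
good = np.where(power > P_TH)[0]                   # "good" per one-bit feedback

# Only the highest-priority descriptions go onto good sub-channels; the rest
# are dropped at the transmitter, which is where the energy saving arises.
transmitted = priority[:len(good)]
```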
604 A Perceptually Optimized Wavelet Embedded Zero Tree Image Coder
Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf
Abstract:
In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that applies perceptual weighting to wavelet transform coefficients prior to SPIHT encoding, in order to reach a targeted bit rate with improved perceptual quality relative to the coding quality obtained using the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS), which plays an important role in our POEZIC quality assessment. Our POEZIC coder is based on a vision model that incorporates various masking effects of HVS perception; it weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) luminance masking and contrast masking, 2) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting, and 3) the Wavelet Error Sensitivity (WES), used to reduce perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique. The experimental results show that our coder demonstrates very good performance in terms of quality measurement.
Keywords: DWT, linear-phase 9/7 filter, 9/7 Wavelet Error Sensitivity (WES), CSF implementation approaches, Just Noticeable Difference (JND), luminance masking, contrast masking, standard SPIHT, objective quality measure, Probability Score (PS).
603 Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography
Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi
Abstract:
Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography that produces a reconstructed image of human soft tissue using near-infrared light. It comprises two steps, called the forward model and the inverse model. The forward model describes light propagation in a biological medium; the inverse model uses the scattered light to recover the optical parameters of the tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is complicated. To overcome this problem, optical properties of the soft tissue such as the absorption coefficient, scattering coefficient and optical flux are processed with the standard regularization technique called Levenberg-Marquardt regularization. Reconstruction algorithms such as the Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) methods are used to reconstruct the image of human soft tissue for tumour detection. Of these algorithms, the Split Bregman method provides better performance than GPSR. Parameters such as the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE) and CPU time for reconstructing images are analyzed to assess performance.
Keywords: Diffuse optical tomography, ill-posedness, Levenberg-Marquardt method, Split Bregman, gradient projection for sparse reconstruction.
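For reference, common forms of the evaluation metrics named above can be sketched as follows; the regions of interest are assumed to be supplied by the user:

```python
import numpy as np

def snr(roi: np.ndarray, background_std: float) -> float:
    # One common form: mean signal over background noise standard deviation.
    return roi.mean() / background_std

def cnr(roi: np.ndarray, background: np.ndarray) -> float:
    # Contrast between target and background, normalized by background noise.
    return abs(roi.mean() - background.mean()) / background.std()

def relative_error(recon: np.ndarray, truth: np.ndarray) -> float:
    return np.linalg.norm(recon - truth) / np.linalg.norm(truth)
```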
602 Evolution of Web Development Techniques in Modern Technology
Authors: Abdul Basit Kiani, Maryam Kiani
Abstract:
The art of web development in new technologies is a dynamic journey, shaped by the constant evolution of tools and platforms. With the emergence of JavaScript frameworks and APIs, web developers are empowered to craft web applications that are not only robust but also highly interactive. The aim is to provide an overview of the developments in the field. The integration of artificial intelligence (AI) and machine learning (ML) has opened new horizons in web development. Chatbots, intelligent recommendation systems, and personalization algorithms have become integral components of modern websites. These AI-powered features enhance user engagement, provide personalized experiences, and streamline customer support processes, revolutionizing the way businesses interact with their audiences. Lastly, the emphasis on web security and privacy has been a pivotal area of progress. With the increasing incidents of cyber threats, web developers have implemented robust security measures to safeguard user data and ensure secure transactions. Innovations such as HTTPS protocol, two-factor authentication, and advanced encryption techniques have bolstered the overall security of web applications, fostering trust and confidence among users. Hence, recent progress in web development has propelled the industry forward, enabling developers to craft innovative and immersive digital experiences. From responsive design to AI integration and enhanced security, the landscape of web development continues to evolve, promising a future filled with endless possibilities.
Keywords: Web development, software testing, progressive web apps, web and mobile native application.
601 Analysis of Aiming Performance for Games Using a Mapping Method of Corneal Reflections Based on Two Different Light Sources
Authors: Yoshikazu Onuki, Itsuo Kumazawa
Abstract:
The fundamental motivation of this paper is how gaze estimation can be utilized effectively in games. In games, precise point-of-gaze estimation is not always important for aiming at targets; the ability to move a cursor to an aiming target accurately is just as significant. Incidentally, from a game-production point of view, expressing head movement and gaze movement separately is sometimes advantageous for conveying a sense of presence; panning a background image with head movement while moving a cursor according to gaze movement is a representative example. The widely used technique for point-of-gaze (POG) estimation is based on the relative position between the center of the corneal reflection of infrared light sources and the center of the pupil. However, calculating the center of the pupil requires relatively complicated image processing, and the resulting delay is a concern, since minimizing input delay is one of the most significant requirements in games. In this paper, a method is proposed to estimate head movement using only the corneal reflections of two infrared light sources in different locations, together with a method to control a cursor using gaze movement as well as head movement. The proposed methods are evaluated with game-like applications; as a result, performance similar to conventional methods is confirmed, and aiming control with lower computational cost and stress-free intuitive operation is obtained.
Keywords: Point-of-gaze, gaze estimation, head movement, corneal reflections, two infrared light sources, game.
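A simplified geometric sketch of the core idea: with only the two corneal glints tracked per frame, head translation and roll can be read from how the glint pair moves and rotates between frames. The mapping from image space to actual head pose is a calibration matter not covered here:

```python
import numpy as np

def head_motion(glints_prev: np.ndarray, glints_now: np.ndarray):
    # glints_*: (2, 2) image coordinates of the two corneal reflections.
    mid_prev, mid_now = glints_prev.mean(axis=0), glints_now.mean(axis=0)
    translation = mid_now - mid_prev            # head shift in image space
    v_prev = glints_prev[1] - glints_prev[0]    # vector between the two glints
    v_now = glints_now[1] - glints_now[0]
    rotation = (np.arctan2(v_now[1], v_now[0])
                - np.arctan2(v_prev[1], v_prev[0]))  # roll of the glint pair
    return translation, rotation
```

Because no pupil segmentation is needed, this per-frame computation is trivially cheap, which is what yields the low input delay the paper targets.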
600 A Process of Forming a Single Competitive Factor in the Digital Camera Industry
Authors: Kiyohiro Yamazaki
Abstract:
This paper considers the process by which a single competitive factor formed in the digital camera industry, from the viewpoint of product platforms. To make product development easier and to increase product introduction rates, companies concentrate their development efforts on improving and strengthening certain product attributes, and in that process a product platform is formed continuously. The formation of such a product platform raises the product development efficiency of individual companies, but it also involves a trade-off: it tends to unify competitive factors across the whole industry. This research analyzes product specification data collected from the web pages of digital camera companies. Specifically, it covers all product specifications released in Japan from 1995 to 2003, analyzes the composition of image sensors and optical lenses, identifies product platforms shared by multiple products, and discusses their application. As a result, this research found that platform formation arose in the development of standard products for the major market segments. Every major company built product platforms of image sensors and optical lenses, and as a result the competitive factors became unified across the entire industry through platform formation. In other words, platform formation brought product development efficiency to individual firms, but it also caused the industry's competitive factors to become unified.
Keywords: Digital camera industry, product evolution trajectory, product platform, unification of competitive factors.
599 Automated Video Surveillance System for Detection of Suspicious Activities during Academic Offline Examinations
Authors: G. Sandhya Devi, G. Suvarna Kumar, S. Chandini
Abstract:
This research work aims to develop a system that analyzes and identifies students who indulge in malpractice or suspicious activities during an academic offline examination. Automated video surveillance provides an optimal solution, helping to monitor the students and identify a malpractice event immediately. The work is organized into three modules. The first module performs an impersonation check using a PCA-based face recognition method, cross-checking each student's profile with the database; the presence or absence of the student is also determined in this module by an image registration technique, in which a grid is formed from all the images registered by the frontal camera at the determined positions. The second module detects facial malpractice, such as a student conversing with another or trying to obtain unauthorized information, based on a threshold range evaluated from the state of the mouth, whether open or closed. The third module identifies unauthorized material or gadgets in the examination hall, with positive samples of each object trained through various stages; here, a top-view camera feed is analyzed to detect suspicious activities. The system automatically alerts the administration when any suspicious activity is identified, thereby reducing the error rate caused by manual monitoring. This work is an improvement over our previous work on identifying suspicious activities by examinees in an offline examination.
Keywords: Impersonation, image registration, incrimination, object detection, threshold evaluation.
598 Near-Infrared Hyperspectral Imaging Spectroscopy to Detect Microplastics and Pieces of Plastic in Almond Flour
Authors: H. Apaza, L. Chévez, H. Loro
Abstract:
Plastic and microplastic pollution in the human food chain is a serious problem for human health that requires elaborate techniques capable of identifying their presence in different kinds of food. Hyperspectral imaging is an optical technique that can detect the presence of different elements in an image and can be used to detect plastics and microplastics in a scene. Doing so requires statistical techniques that need to be evaluated and compared in order to find the most efficient ones. In this work, two problems related to the presence of plastics are addressed: the first is to detect and identify pieces of plastic immersed in almond seeds, and the second is to detect and quantify microplastic in almond flour. To do this, we analyze hyperspectral images taken in the range of 900 to 1700 nm using four unmixing techniques: least squares unmixing (LSU), non-negatively constrained least squares unmixing (NCLSU), fully constrained least squares unmixing (FCLSU), and scaled constrained least squares unmixing (SCLSU). The NCLSU, FCLSU and SCLSU techniques manage to find the region where the plastic is located and also quantify the amount of microplastic contained in the almond flour. The SCLSU technique estimated a 13.03% abundance of microplastics and 86.97% of almond flour, compared to the 16.66% microplastic and 83.33% almond flour abundances prepared for the experiment. The results show the feasibility of applying near-infrared hyperspectral image analysis to the detection of plastic contaminants in food.
Keywords: Food, plastic, microplastic, NIR hyperspectral imaging, unmixing.
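The non-negatively constrained variant (NCLSU) reduces, per pixel, to a standard NNLS problem, sketched below with stand-in spectra; FCLSU would additionally enforce that the abundances sum to one:

```python
import numpy as np
from scipy.optimize import nnls

n_bands = 100                               # spectral samples across 900-1700 nm
E = np.random.rand(n_bands, 2)              # stand-in endmembers: flour, microplastic
y = 0.87 * E[:, 0] + 0.13 * E[:, 1]         # mixed pixel (cf. the ~13% abundance)

# Solve min ||E a - y||^2 subject to a >= 0 for this pixel's abundances.
abundances, _ = nnls(E, y)
fractions = abundances / abundances.sum()   # normalized, for comparison across methods
```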
597 Constitutive Equations for Human Saphenous Vein Coronary Artery Bypass Graft
Authors: Hynek Chlup, Lukas Horny, Rudolf Zitny, Svatava Konvickova, Tomas Adamek
Abstract:
Coronary artery bypass grafts (CABG) are widely studied with respect to the hemodynamic conditions that play an important role in the presence of restenosis. However, papers concerned with the constitutive modeling of CABG are lacking in the literature. The purpose of this study is to find a constitutive model for CABG tissue. A sample of CABG obtained at autopsy underwent an inflation-extension test. Displacements were recorded by CCD cameras and subsequently evaluated by digital image correlation. Pressure-radius and axial force-elongation data were used to fit the material model. The tissue was modeled as a one-layered composite reinforced by two families of helical fibers. The material is assumed to be locally orthotropic, nonlinear, incompressible and hyperelastic. Material parameters are estimated for two strain energy functions (SEFs). The first is the classical exponential; the second is logarithmic, which allows interpretation in terms of limiting (finite) strain extensibility. The material parameters are estimated by optimization based on the radial and axial equilibrium equations for a thick-walled tube. Both material models fit the experimental data successfully, though the exponential model fits the relationship between axial force and axial strain significantly better than the logarithmic one.
Keywords: Constitutive model, coronary artery bypass graft, digital image correlation, fiber reinforced composite, inflation test, saphenous vein.
596 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images
Authors: U. Datta
Abstract:
The main objective of this study is to find a suitable approach to monitoring land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection methods are unable to indicate continuous change occurring over a long period. To achieve the objective, the approach used here estimates a statistical model from a series of multispectral images acquired over a long period, assuming there is no considerable change during that time, and then compares it with multispectral data obtained at a later time. The change is estimated pixel-wise, using a statistical composite hypothesis technique for a defined region. The generalized likelihood ratio test (GLRT) is used to detect changed pixels against the probabilistic model estimated for each pixel, assuming the images have been co-registered prior to estimation. To minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are several challenges in this method. First and foremost is obtaining a sufficiently large number of datasets for multivariate distribution modelling, as many images must be discarded due to cloud coverage; imperfect modelling also yields a high probability of false alarms. The overall conclusion that can be drawn from this work is that the probabilistic method described here has given promising results, which should be pursued further.
Keywords: Co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection.
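A hedged sketch of the pixel-wise test: model each pixel's multispectral history as Gaussian and flag change when the Mahalanobis distance of the new observation, a monotone function of the GLRT statistic under this simplification, exceeds a threshold. A single pooled covariance is used here because a per-pixel covariance estimated from a handful of cloud-free samples would be singular:

```python
import numpy as np

def change_map(history: np.ndarray, new_img: np.ndarray,
               thresh: float) -> np.ndarray:
    # history: (t, h, w, bands) co-registered past images assumed change-free;
    # new_img: (h, w, bands) image acquired at a later time.
    mu = history.mean(axis=0)                      # per-pixel mean spectrum
    t, h, w, b = history.shape
    flat = (history - mu).reshape(-1, b)
    cov_inv = np.linalg.inv(np.cov(flat.T) + 1e-6 * np.eye(b))  # pooled covariance
    d = new_img - mu
    stat = np.einsum("hwi,ij,hwj->hw", d, cov_inv, d)  # squared Mahalanobis distance
    return stat > thresh                           # boolean per-pixel change map
```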
595 A Practice of Zero Trust Architecture in Financial Transactions
Authors: L. Wang, Y. Chen, T. Wu, S. Hu
Abstract:
In order to enhance the security of critical financial infrastructure, this study transforms the architecture of a financial trading terminal into a zero trust architecture (ZTA), constructs an active defense system for cybersecurity, improves the security level of trading services in the Internet environment, enhances the ability to prevent network attacks and unknown risks, and reduces the industry and security risks brought about by cybersecurity threats. The study introduces the Software Defined Perimeter (SDP) technology of ZTA and adapts and applies it to a financial trading terminal to achieve security optimization and fine-grained business grading control. The upgraded architecture moves security protection forward to the user access layer and replaces the VPN to optimize remote access, significantly improving the security protection of Internet transactions. The study achieves: 1. deep integration with the access control architecture of the transaction system; 2. no impact on the performance of terminals and gateways, and no perception of application system upgrades; 3. customized checklist and policy configuration; 4. introduction of industry-leading security technologies such as single-packet authorization (SPA) and secondary authentication. This study carries out a successful application of ZTA in the field of financial trading and provides transformation ideas for other similar systems while improving the security level of financial transaction services in the Internet environment.
Keywords: Zero trust, trading terminal, architecture, network security, cybersecurity.
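Single-packet authorization, mentioned above, can be illustrated with a toy datagram format: the payload is authenticated with a pre-provisioned key and checked for freshness before the gateway exposes any service. The framing is invented for illustration and is not the SDP specification:

```python
import hashlib
import hmac
import json
import os
import time

KEY = os.urandom(32)   # provisioned to the trading terminal out of band

def spa_packet(user: str) -> bytes:
    # One UDP payload carrying identity, timestamp and nonce, plus an HMAC tag.
    body = json.dumps({"user": user, "ts": time.time(),
                       "nonce": os.urandom(8).hex()})
    tag = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    return (body + "|" + tag).encode()

def gateway_accept(pkt: bytes, max_age: float = 5.0) -> bool:
    body, tag = pkt.decode().rsplit("|", 1)
    expected = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    fresh = time.time() - json.loads(body)["ts"] < max_age
    return hmac.compare_digest(expected, tag) and fresh

assert gateway_accept(spa_packet("trader01"))
```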
594 A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding
Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf
Abstract:
In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to wavelet coefficients prior to SPIHT encoding, in order to reach a targeted bit rate with improved perceptual quality around a given fixation point, which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS), which plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception; it weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce considerable high frequencies from peripheral regions, 2) luminance and contrast masking, and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. The experimental results show that our coder demonstrates very good performance in terms of quality measurement.
Keywords: DWT, linear-phase 9/7 filter, foveation filtering, CSF implementation approaches, 9/7 wavelet JND thresholds and Wavelet Error Sensitivity (WES), luminance and contrast masking, standard SPIHT, objective quality measure, Probability Score (PS).
593 A Proposal for a Secure and Interoperable Data Framework for Energy Digitalization
Authors: Hebberly Ahatlan
Abstract:
The process of digitizing energy systems involves transforming traditional energy infrastructure into interconnected, data-driven systems that enhance efficiency, sustainability, and responsiveness. As smart grids become increasingly integral to the efficient distribution and management of electricity from both fossil and renewable energy sources, the energy industry faces strategic challenges associated with digitalization and interoperability — particularly in the context of modern energy business models, such as virtual power plants (VPPs). The critical challenge in modern smart grids is to seamlessly integrate diverse technologies and systems, including virtualization, grid computing and service-oriented architecture (SOA), across the entire energy ecosystem. Achieving this requires addressing issues like semantic interoperability, Information Technology (IT) and Operational Technology (OT) convergence, and digital asset scalability, all while ensuring security and risk management. This paper proposes a four-layer digitalization framework to tackle these challenges, encompassing persistent data protection, trusted key management, secure messaging, and authentication of IoT resources. Data assets generated through this framework enable AI systems to derive insights for improving smart grid operations, security, and revenue generation. Furthermore, this paper also proposes a Trusted Energy Interoperability Alliance as a universal guiding standard in the development of this digitalization framework to support more dynamic and interoperable energy markets.
Keywords: Digitalization, IT/OT convergence, semantic interoperability, TEIA alliance, VPP.
592 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study
Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb
Abstract:
The purpose of this study is to reduce the radiation dose for chest CT examinations by adding tube current modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in both scans, with and without mAs modulation, was estimated by multiplying the dose-length product (DLP) by a conversion factor; the results were compared to those obtained with the CT-Expo software. The size-specific dose estimate (SSDE) values were obtained by multiplying the volume CT dose index (CTDIvol) by a conversion factor related to the phantom's effective diameter. Objective assessment of image quality was performed with signal-to-noise ratio (SNR) measurements in the phantom, and SPSS software was used for data analysis. The results showed that with CARE Dose 4D included, ED was lowered by 48.35% and 51.51% according to DLP and CT-Expo, respectively. ED ranged between 7.01 mSv and 6.6 mSv for the standard protocol, and between 3.62 mSv and 3.2 mSv with TCM. Similar results were found for SSDE: the dose was higher without TCM, at 16.25 mGy, and 48.8% lower with TCM. The SNR values were significantly different (p = 0.03 < 0.05), the highest being measured on images acquired with TCM and reconstructed with filtered back projection (FBP). In conclusion, this study demonstrates the potential of the TCM technique for SSDE and ED reduction while preserving image quality at a high diagnostic reference level for thoracic CT examinations.
Keywords: Anthropomorphic phantom, computed tomography, CT-expo, radiation dose.
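The two dose estimates reduce to one-line arithmetic; the sketch below uses a commonly cited chest conversion factor and illustrative readouts chosen only to reproduce the order of magnitude reported above:

```python
# ED from DLP via a region-specific conversion factor.
K_CHEST = 0.014          # mSv per mGy*cm, a commonly used chest factor (assumed)
DLP = 500.0              # mGy*cm, illustrative scanner readout
ED = K_CHEST * DLP       # -> 7.0 mSv, cf. the standard-protocol range above

# SSDE from CTDIvol via a size factor for the phantom's effective diameter.
CTDI_VOL = 13.0          # mGy, illustrative
F_SIZE = 1.25            # illustrative AAPM Report 204-style size factor
SSDE = F_SIZE * CTDI_VOL # -> 16.25 mGy, matching the no-TCM value reported
```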
591 Anti-Homomorphism in Fuzzy Ideals
Authors: K. Chandrasekhara Rao, V. Swaminathan
Abstract:
The anti-homomorphic image of fuzzy ideals, fuzzy ideals of near-rings, and anti ideals are discussed in this note. A necessary and sufficient condition is established for a near-ring anti ideal to be characteristic.
Keywords: Fuzzy ideals, anti fuzzy subgroup, anti fuzzy ideals, anti-homomorphism, lower α-level cut.
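For readers outside the area, the standard notions behind the keywords can be stated as follows; these are the usual textbook definitions, assumed here since the note itself does not restate them:

```latex
\begin{align*}
&\text{Anti-homomorphism: } h(xy) = h(y)\,h(x) \ \text{for all } x, y.\\
&\text{Anti fuzzy subgroup } \mu:\ \mu(xy) \le \max\{\mu(x), \mu(y)\},
  \qquad \mu(x^{-1}) \le \mu(x).\\
&\text{Lower } \alpha\text{-level cut: } \mu_{\alpha} = \{\, x \mid \mu(x) \le \alpha \,\}.
\end{align*}
```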