Search results for: Digital Memories
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1010


140 Low Value Capacitance Measurement System with Adjustable Lead Capacitance Compensation

Authors: Gautam Sarkar, Anjan Rakshit, Amitava Chatterjee, Kesab Bhattacharya

Abstract:

The present paper describes the development of a low-cost, highly accurate low-capacitance measurement system that can be used over a range of 0–400 pF with a resolution of 1 pF. The range of capacitance may be easily altered by a simple resistance or capacitance variation in the measurement circuit. The system uses the CD4093B quad two-input NAND Schmitt-trigger IC with hysteresis for the measurement and is integrated with a PIC 18F2550 microcontroller for data acquisition. The microcontroller communicates over USB with software developed on the PC side, where an attractive graphical user interface (GUI) provides the user with a real-time, online display of the capacitance under measurement. The system uses a differential mode of capacitance measurement, with reference to a trimmer capacitance, that effectively compensates lead capacitance, a notorious source of error in usual low-capacitance measurements. The hysteresis provided by the Schmitt-trigger circuits enables reliable operation of the system by greatly minimizing the possibility of false triggering due to stray interference, usually regarded as another significant source of error. Real-life testing of the proposed system showed that it produces highly accurate capacitance measurements when compared with cutting-edge, high-end digital capacitance meters.
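
For illustration only, a minimal numerical sketch of the differential period-to-capacitance conversion described above, assuming the Schmitt-trigger RC oscillator period is T = k·R·C with k set by the trigger thresholds; the constant, resistor value and periods below are hypothetical, not taken from the paper:

```python
# Hypothetical values: k depends on the Schmitt-trigger thresholds, R is the timing resistor.
K = 1.38
R_OHM = 2.2e6

def capacitance_pf(period_dut_s, period_ref_s, k=K, r_ohm=R_OHM):
    """Differential capacitance estimate in picofarads: the reference (trimmer)
    reading cancels the lead capacitance common to both measurements."""
    c_dut = period_dut_s / (k * r_ohm)   # total capacitance seen with the DUT connected
    c_ref = period_ref_s / (k * r_ohm)   # lead + trimmer capacitance only
    return (c_dut - c_ref) * 1e12

print(round(capacitance_pf(1.56e-3, 0.95e-3)))  # ~201 pF for these made-up periods
```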

Keywords: Capacitance measurement, NAND Schmitt trigger, microcontroller, GUI, lead compensation, hysteresis.

139 Comparison of Compression Ability Using DCT and Fractal Technique on Different Imaging Modalities

Authors: Sumathi Poobal, G. Ravindran

Abstract:

Image compression is one of the most important applications of Digital Image Processing. Advanced medical imaging requires storage of large quantities of digitized clinical data. Due to constrained bandwidth and storage capacity, however, a medical image must be compressed before transmission and storage. There are two types of compression methods, lossless and lossy. In lossless compression, the original image is recovered without any distortion. In lossy compression, the reconstructed images contain some distortion. Discrete Cosine Transform (DCT) and Fractal Image Compression (FIC) are lossy compression methods. This work shows that lossy compression methods can be chosen for medical image compression without significant degradation of image quality. In this work, DCT and fractal compression using Partitioned Iterated Function Systems (PIFS) are applied on different modalities of images such as CT scan, ultrasound, angiogram, X-ray and mammogram. Approximately 20 images are considered in each modality and the average values of compression ratio and Peak Signal to Noise Ratio (PSNR) are computed and studied. The quality of the reconstructed image is assessed by the PSNR values. Based on the results it can be concluded that DCT has higher PSNR values and FIC has a higher compression ratio. Hence, in medical image compression, DCT can be used wherever picture quality is preferred and FIC wherever compression for storage and transmission is the priority, without losing diagnostic picture quality.
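
A small sketch of the two evaluation metrics used above (PSNR and compression ratio); the random images stand in for a medical slice and its lossy reconstruction and are purely illustrative:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two 8-bit images."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_size_bytes, compressed_size_bytes):
    """Ratio of uncompressed to compressed size."""
    return original_size_bytes / compressed_size_bytes

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
rec = np.clip(img.astype(int) + rng.integers(-3, 4, img.shape), 0, 255).astype(np.uint8)
print(round(psnr(img, rec), 2), "dB,", compression_ratio(64 * 64, 1200), ": 1")
```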

Keywords: DCT, FIC, PIFS, PSNR.

138 Geopotential Models Evaluation in Algeria Using Stochastic Method, GPS/Leveling and Topographic Data

Authors: M. A. Meslem

Abstract:

For precise geoid determination, we use a reference field to subtract the long and medium wavelengths of the gravity field from observation data when applying the remove-compute-restore technique. Therefore, a comparison study between candidate models should be made in order to select the optimal reference gravity field. In this context, two recent global geopotential models have been selected to perform this comparison over Northern Algeria: the Earth Gravitational Model EGM2008 and the Global Gravity Model GECO, the latter conceived as a combination of EGM2008 with the anomalous potential derived from a GOCE satellite-only global model. Free-air gravity anomalies in the area under study have been used to compute residual data using both gravity field models, together with a Digital Terrain Model (DTM) to subtract the residual terrain effect from the gravity observations. The residual data were used to generate local empirical covariance functions and to fit them to a closed form in order to compare their statistical behavior in both cases. Finally, height anomalies were computed from both geopotential models and compared to a set of GPS/levelling points on benchmarks using least-squares adjustment. The results, described in detail in this paper, point to a slight overall advantage of the GECO model through error degree variance comparison and ground-truth evaluation.
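
For illustration, a toy sketch of the "remove" step and of the GPS/levelling ground-truth check mentioned above; every number below is an invented placeholder, not data from the study:

```python
import numpy as np

# Remove step of remove-compute-restore (illustrative values, mGal)
dg_observed = np.array([23.1, 18.4, 30.2])   # free-air anomalies at gravity stations
dg_ggm      = np.array([21.0, 17.2, 28.9])   # anomalies synthesized from EGM2008 or GECO
dg_rtm      = np.array([ 0.8,  0.5,  1.1])   # residual terrain effect from the DTM
dg_residual = dg_observed - dg_ggm - dg_rtm   # input to the empirical covariance functions

# Ground-truth check at GPS/levelling benchmarks: zeta_gps = h_ellipsoidal - H_orthometric
h_gps      = np.array([310.42, 287.15])       # GPS ellipsoidal heights [m]
H_lev      = np.array([265.30, 241.88])       # levelled orthometric heights [m]
zeta_gps   = h_gps - H_lev
zeta_model = np.array([45.05, 45.20])         # height anomalies from the geopotential model
print(dg_residual, zeta_gps - zeta_model)     # residuals fed to the least-squares comparison
```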

Keywords: Quasigeoid, gravity anomalies, covariance, GGM.

137 Embedded Semi-Fragile Signature Based Scheme for Ownership Identification and Color Image Authentication with Recovery

Authors: M. Hamad Hassan, S.A.M. Gilani

Abstract:

In this paper, a novel scheme is proposed for ownership identification and color image authentication by deploying cryptography and digital watermarking. The color image is first transformed from RGB to the YST color space, exclusively designed for watermarking. Following the color space transformation, each channel is divided into 4×4 non-overlapping blocks with selection of the central 2×2 sub-blocks. Depending upon the channel selected, two to three LSBs of each central 2×2 sub-block are set to zero to hold the ownership, authentication and recovery information. The size and position of the sub-block are important for correct localization, enhanced security and fast computation. As the T channel is nearly independent of Y and S, it is suitable for embedding the recovery information apart from the ownership and authentication information; therefore the 4×4 block of the T channel, along with the ownership information, is processed by SHA160 to compute a content-based hash that is unique and invulnerable to birthday attacks or hash collisions, instead of using MD5, which may raise the condition H(m) = H(m'). For recovery, the intensity mean of each 4×4 block of each channel is computed and encoded in up to eight bits. For watermark embedding, key-based mapping of blocks is performed using a 2D Torus Automorphism. Our scheme is oblivious, generates highly imperceptible images with correct localization of tampering within reasonable time, and has the ability to recover the original work with probability near one.
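
A hypothetical sketch of two ingredients named above: a 2D torus automorphism for key-based block mapping (the Arnold cat map when k = 1) and a SHA-1/160-bit content hash of a block plus ownership information. The parameters k, the block-grid size and the owner string are assumptions, not the authors' actual settings:

```python
import hashlib
import numpy as np

def torus_automorphism(x, y, n, k=1):
    """One iteration of a unimodular 2-D torus automorphism on an n x n block grid."""
    return (x + y) % n, (k * x + (k + 1) * y) % n

def block_hash_sha160(block, owner_info: bytes):
    """Content-based 160-bit hash of a block concatenated with ownership data."""
    return hashlib.sha1(block.tobytes() + owner_info).digest()

n_blocks = 8                                   # blocks per side (illustrative)
mapping = [[torus_automorphism(i, j, n_blocks, k=3) for j in range(n_blocks)]
           for i in range(n_blocks)]           # key-based block-to-block scrambling
blk = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(mapping[0][1], block_hash_sha160(blk, b"owner-id").hex()[:16])
```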

Keywords: Hash Collision, LSB, MD5, PSNR, SHA160

136 Innovative Teaching in Systems Analysis and Design - an Action Research Project

Authors: Imelda Smit

Abstract:

Systems Analysis and Design is a key subject in Information Technology courses, but students do not find it easy to cope with, since it is not “precise” like programming and not exact like Mathematics. It is a subject working with many concepts, modeling ideas into visual representations and then translating the pictures into a real-life system. To complicate matters, users who are not necessarily familiar with computers need to give their input to ensure that they get the system they need. Systems Analysis and Design also covers two fields, namely Analysis, focusing on the analysis of the existing system, and Design, focusing on the design of the new system. To be able to test the analysis and design of a system, it is necessary to develop a system, or at least a prototype of it, to test the validity of the analysis and design. The skills necessary for each aspect differ vastly: project management skills, database knowledge and object-oriented principles are all necessary. In the context of a developing country, where students enter tertiary education underprepared and the digital divide is alive and well, students need to be motivated to learn the necessary skills and get an opportunity to test them in a “live” but protected environment – within the framework of a university. The purpose of this article is to improve the learning experience in Systems Analysis and Design through reviewing the underlying teaching principles used, the teaching tools implemented, the observations made and the reflections that will influence future developments in Systems Analysis and Design. Action research principles allow the focus to be on a few problematic aspects during a particular semester.

Keywords: Action Research, Project Development, Systems Analysis and Design, Technology in Teaching.

135 AI-Driven Cloud Security: Proactive Defense Against Evolving Cyber Threats

Authors: Ashly Joseph

Abstract:

Cloud computing has become an essential component of enterprises and organizations globally in the current era of digital technology. The cloud has a multitude of advantages, including scalability, flexibility, and cost-effectiveness, rendering it an appealing choice for data storage and processing. The increasing storage of sensitive information in cloud environments has raised significant concerns over the security of such systems. The frequency of cyber threats and attacks specifically aimed at cloud infrastructure has been increasing, presenting substantial dangers to the data, reputation, and financial stability of enterprises. Conventional security methods can become inadequate when confronted with increasingly intricate and dynamic threats. Artificial Intelligence (AI) technologies possess the capacity to significantly transform cloud security through their ability to promptly identify and thwart attacks, adjust to emerging risks, and offer intelligent perspectives for proactive security actions. The objective of this research is to investigate the utilization of AI technologies in augmenting the security measures within cloud computing systems. This paper aims to offer significant insights and recommendations for businesses seeking to protect their cloud-based assets by analyzing the present state of cloud security, the capabilities of AI, and the possible advantages and obstacles associated with integrating AI into cloud security policies.

Keywords: Machine Learning, Natural Language Processing, Denial-of-Service attacks, Sentiment Analysis, Cloud computing.

134 Evaluation of Coastal Erosion in the Jurisdiction of the Municipalities of Puerto Colombia and Tubará, Atlántico, Colombia in Google Earth Engine with Landsat and Sentinel 2 Images

Authors: Francisco Javier Reyes Salazar, Héctor Mauricio Ramírez

Abstract:

Coastal zones are home to mangrove swamps, coral reefs, and seagrass ecosystems, which are among the most biodiverse and fragile on the planet. These areas support a great diversity of marine life; they are also extraordinarily important for humans in the provision of food, water, wood, and other associated goods and services, and they contribute to climate regulation. The lack of an automated model that generates information on the dynamics of coastline change and coastal erosion is identified as the central problem. In this paper, coastlines were determined from 1984 to 2020 on the Google Earth Engine platform from Landsat and Sentinel images. We then computed the Modified Normalized Difference Water Index (MNDWI) and used the Digital Shoreline Analysis System (DSAS) v5.0. Starting from the 2020 coastline, the 10-year prediction (year 2031) indicates erosion of 238.32 hectares and accretion of 181.96 hectares. The 20-year prediction (year 2041) indicates erosion of 544.04 hectares and accretion of 133.94 hectares. The erosion and accretion of Playa Muelle in the municipality of Puerto Colombia were established; it will register the highest value of erosion. The land cover that presented the greatest change was artificialized territories.
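
A minimal sketch of the water index used above, MNDWI = (Green − SWIR) / (Green + SWIR); the toy reflectance values and the choice of bands (e.g. Sentinel-2 B3/B11) are assumptions about the workflow, not taken from the study:

```python
import numpy as np

def mndwi(green, swir):
    """Modified Normalized Difference Water Index per pixel."""
    green = green.astype(np.float64)
    swir = swir.astype(np.float64)
    return (green - swir) / (green + swir + 1e-12)   # epsilon avoids division by zero

green = np.array([[0.12, 0.30], [0.08, 0.25]])        # toy green-band reflectance
swir  = np.array([[0.20, 0.05], [0.22, 0.04]])        # toy SWIR-band reflectance
water_mask = mndwi(green, swir) > 0                    # positive MNDWI flags open water
print(water_mask)
```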

Keywords: Coastline, coastal erosion, MNDWI, Google Earth Engine, Colombia.

133 Influence of Instructors in Engaging Online Graduate Students in Active Learning in the United States

Authors: Ehi E. Aimiuwu

Abstract:

As of 2017, many online learning professionals, institutions, and journals are still wondering how instructors can keep students engaged in the online learning environment to facilitate active learning effectively. The purpose of this qualitative single-case and narrative research is to explore whether online professors understand their role as mentors and facilitators of students' academic success by keeping students engaged in active learning, based on personalized experience in the field. Data collection tools used in the study included NVivo 12 Plus qualitative software, an interview protocol, a digital audiotape, an observation sheet, and transcription. Seven online professors in the United States recruited from LinkedIn and residencies were interviewed for this study. Eleven online teaching techniques from previous research were used as the study framework. The data analysis process, member checking, and key themes were used to achieve saturation. About 85.7% of professors agreed on rubrics as the preferred online grading technique. About 57.1% agreed on professors logging in daily, students logging in about 2-5 times weekly, knowing students to increase accountability, email as the preferred communication tool, and computer access for adequate online learning. About 42.9% agreed on a syllabus for clear class expectations, participation to show what has been learned, and energizing students for creativity.

Keywords: Class facilitation, class management, online teaching, online education, pedagogy.

132 Comparison between Higher-Order SVD and Third-order Orthogonal Tensor Product Expansion

Authors: Chiharu Okuma, Jun Murakami, Naoki Yamamoto

Abstract:

In digital signal processing it is important to approximate multi-dimensional data by a method called rank reduction, in which we reduce the rank of multi-dimensional data from higher to lower. For 2-dimensional data, singular value decomposition (SVD) is one of the best-known rank reduction techniques. In addition, an outer product expansion extended from SVD was proposed and implemented for multi-dimensional data, and has been widely applied to image processing and pattern recognition. However, the multi-dimensional outer product expansion is computationally expensive and lacks orthogonality between the expansion terms. Therefore, we have proposed an alternative method, the Third-order Orthogonal Tensor Product Expansion, abbreviated 3-OTPE. 3-OTPE uses the power method instead of a nonlinear optimization method to reduce computing time. Around the same time, the group of De Lathauwer proposed Higher-Order SVD (HOSVD), which is also developed as an SVD extension for multi-dimensional data. 3-OTPE and HOSVD are similar with regard to the rank reduction of multi-dimensional data. Using these two methods we can obtain computation results that are partly the same and partly slightly different. In this paper, we compare 3-OTPE with HOSVD in calculation accuracy and computing time, and clarify the difference between the two methods.
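
For readers unfamiliar with the HOSVD side of the comparison, a compact sketch of a truncated HOSVD for a third-order tensor (mode unfoldings, per-mode SVD, core computation); the tensor shape and target ranks are arbitrary examples, and the paper's 3-OTPE algorithm is not reproduced here:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: mode-n fibers become the rows of a matrix."""
    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1))

def mode_dot(tensor, matrix, mode):
    """Mode-n product of a tensor with a matrix."""
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

def hosvd(tensor, ranks):
    """Truncated HOSVD: factor matrices from per-mode SVDs, then the core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = tensor
    for mode, u in enumerate(factors):
        core = mode_dot(core, u.T, mode)
    return core, factors

T = np.random.default_rng(1).normal(size=(5, 6, 7))
core, factors = hosvd(T, ranks=(3, 3, 3))
approx = core
for mode, u in enumerate(factors):
    approx = mode_dot(approx, u, mode)
print(np.linalg.norm(T - approx) / np.linalg.norm(T))  # relative rank-reduction error
```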

Keywords: Singular value decomposition (SVD), higher-order SVD (HOSVD), higher-order tensor, outer product expansion, power method.

131 Integrating Fast Karnough Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions

Authors: Hazem M. El-Bakry

Abstract:

In this paper, a new fast simplification method is presented. The method realizes Karnough maps with a large number of variables. In order to accelerate the operation of the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain: the search operation relies on performing cross-correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross-correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented using a new design for neural networks. Neural networks are used because they are fault tolerant and, as a result, can recognize signals even with noise or distortion. This is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum number of components. This is done by using modular neural networks (MNNs) that divide the input space into several homogeneous regions. This approach is applied to implement the XOR function, 16 logic functions on the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.
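
To make the frequency-domain search idea concrete, a generic sketch of detecting a group of ones in a flattened map by FFT-based cross-correlation; this is the standard formulation, not the paper's optimized variant, and the example cell values are invented:

```python
import numpy as np

def xcorr_fft(signal, pattern):
    """Cross-correlation via the frequency domain: IFFT(FFT(s) * conj(FFT(p)))."""
    n = len(signal) + len(pattern) - 1
    S = np.fft.fft(signal, n)
    P = np.fft.fft(pattern, n)
    return np.real(np.fft.ifft(S * np.conj(P)))

cells = np.array([0, 1, 1, 1, 1, 0, 0, 1], dtype=float)  # flattened map cells (illustrative)
group = np.ones(4)                                        # pattern: a group of four ones
scores = xcorr_fft(cells, group)
hits = np.where(np.isclose(scores, group.sum()))[0]       # offsets where a full group matches
print(hits)                                               # -> [1]: cells[1:5] are all ones
```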

Keywords: Boolean Functions, Simplification, Karnough Map, Implementation of Logic Functions, Modular Neural Networks.

130 Chose the Right Mutation Rate for Better Evolve Combinational Logic Circuits

Authors: Emanuele Stomeo, Tatiana Kalganova, Cyrille Lambert

Abstract:

Evolvable hardware (EHW) is a developing field that applies evolutionary algorithms (EAs) to automatically design circuits, antennas, robot controllers, etc. A lot of research has been done in this area and several different EAs have been introduced to tackle numerous problems, such as scalability, evolvability, etc. However, every time a specific EA is chosen for solving a particular task, all its components, such as population size, initialization, selection mechanism, mutation rate, and genetic operators, should be selected in order to achieve the best results. In the last three decades the selection of the right parameters for the EA's components for solving different “test problems” has been investigated. In this paper, the behaviour of the mutation rate for designing logic circuits, which has not been studied before, is deeply analyzed. The mutation rate for an EHW system modifies the number of inputs of each logic gate, the functionality (for example from AND to NOR) and the connectivity between logic gates. The behaviour of the mutation has been analyzed based on the number of generations, the genotype redundancy and the number of logic gates of the evolved circuits. The experimental results show the behaviour of the mutation rate during evolution for the design and optimization of simple logic circuits, and suggest the best mutation rate to be used for designing combinational logic circuits. The research presented is particularly important for those who would like to implement a dynamic mutation rate inside an evolutionary algorithm for evolving digital circuits. Research on the mutation rate during the last 40 years is also summarized.

Keywords: Design of logic circuit, evolutionary computation, evolvable hardware, mutation rate.

129 Distances over Incomplete Diabetes and Breast Cancer Data Based on Bhattacharyya Distance

Authors: Loai AbdAllah, Mahmoud Kaiyal

Abstract:

Missing values in real-world datasets are a common problem. Many algorithms have been developed to deal with this problem; most of them replace the missing values with a fixed value computed from the observed values. In our work, we used a distance function based on the Bhattacharyya distance, which measures the similarity of two probability distributions, to measure the distance between objects with missing values. The proposed distance distinguishes between known and unknown values: the distance between two known values is the Mahalanobis distance, while when one of them is missing the distance is computed based on the distribution of the known values for the coordinate that contains the missing value. This method was integrated with Wikaya, a digital health company developing a platform that helps improve the prevention of chronic diseases such as diabetes and cancer. In order for Wikaya's recommendation system to work, distances between users need to be measured, and since there are missing values in the collected data, there is a need for a distance function between incomplete user profiles. To evaluate the accuracy of the proposed distance function in reflecting the actual similarity between different objects when some of them contain missing values, we integrated it within the framework of the k-nearest-neighbors (kNN) classifier, since its computation is based only on the similarity between objects. To validate this, we ran the algorithm over the diabetes and breast cancer datasets, standard benchmark datasets from the UCI repository. Our experiments show that the kNN classifier using our proposed distance function outperforms kNN using other existing methods.
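
A simplified, hypothetical sketch of the idea of a missing-value-aware distance: known/known coordinates use a variance-normalized (Mahalanobis-like) term, and a known/missing coordinate uses the expected squared distance over that column's observed distribution. This is a stand-in illustration; the paper's exact Bhattacharyya-based derivation and the fixed penalty for two missing values are not reproduced here:

```python
import numpy as np

def incomplete_distance(x, y, column_values, column_var):
    """Distance between two rows that may contain NaNs (sketch, see lead-in)."""
    d = 0.0
    for j in range(len(x)):
        xj, yj = x[j], y[j]
        if not np.isnan(xj) and not np.isnan(yj):
            d += (xj - yj) ** 2 / column_var[j]            # both known
        elif np.isnan(xj) and np.isnan(yj):
            d += 2.0                                       # both missing: fixed penalty (assumption)
        else:
            known = yj if np.isnan(xj) else xj             # one missing: expected squared distance
            d += np.mean((known - column_values[j]) ** 2) / column_var[j]
    return np.sqrt(d)

# Toy data: rows are patients, columns are measurements, NaN marks a missing value
data = np.array([[5.1, 120.0], [6.3, np.nan], [4.8, 110.0]])
col_vals = [data[~np.isnan(data[:, j]), j] for j in range(data.shape[1])]
col_var = [v.var() + 1e-9 for v in col_vals]
print(incomplete_distance(data[0], data[1], col_vals, col_var))
```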

Keywords: Missing values, distance metric, Bhattacharyya distance.

127 A Proposed Hybrid Color Image Compression Based on Fractal Coding with Quadtree and Discrete Cosine Transform

Authors: Shimal Das, Dibyendu Ghoshal

Abstract:

Fractal-based digital image compression is a specific technique in the field of color image compression. The method is best suited for images with irregular content such as snow blobs, clouds, flames and tree leaves, exploiting the fact that parts of an image often resemble other parts of the same image. This technique has drawn much attention in recent years because of the very high compression ratio that can be achieved. Hybrid schemes incorporating fractal compression and speed-up techniques have achieved high compression ratios compared to pure fractal compression. Fractal image compression is a lossy compression method in which the self-similar nature of an image is used. This technique provides a high compression ratio, less encoding time and a fast decoding process. In this paper, fractal compression with a quadtree and the DCT is proposed to compress the color image. The proposed hybrid scheme requires four phases to compress the color image. First, the image is segmented and the Discrete Cosine Transform is applied to each block of the segmented image. Second, the block values are scanned in a zigzag manner to cluster the zero coefficients. Third, the resulting image is partitioned into fractals by a quadtree approach. Fourth, the image is compressed using a run-length encoding technique.
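
A small sketch of the second and fourth phases mentioned above (zig-zag scanning of a coefficient block followed by run-length encoding); the 8×8 block contents are invented, and the fractal/quadtree phases are not shown:

```python
import numpy as np

def zigzag(block):
    """Zig-zag scan of a square coefficient block (e.g. an 8x8 DCT block)."""
    n = block.shape[0]
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block[i, j] for i, j in order])

def run_length_encode(seq):
    """(zero-run, value) pairs over the zig-zag sequence."""
    pairs, run = [], 0
    for v in seq:
        if v == 0:
            run += 1
        else:
            pairs.append((run, int(v)))
            run = 0
    pairs.append((run, 0))        # trailing zeros / end-of-block marker
    return pairs

coeffs = np.zeros((8, 8), dtype=int)
coeffs[0, 0], coeffs[0, 1], coeffs[1, 0], coeffs[2, 2] = 90, -12, 7, 3
print(run_length_encode(zigzag(coeffs)))
```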

Keywords: Fractal coding, Discrete Cosine Transform, Iterated Function System (IFS), Affine Transformation, Run length encoding.

126 Hydrological Characterization of a Watershed for Streamflow Prediction

Authors: Oseni Taiwo Amoo, Bloodless Dzwairo

Abstract:

In this paper, we extend the versatility and usefulness of GIS as a methodology for river basin hydrologic characteristics analysis (HCA). The Gurara River basin, located in North-Central Nigeria, is presented in this study. This is ongoing research using a spatial Digital Elevation Model (DEM) and Arc-Hydro tools to take inventory of the basin characteristics in order to predict the effect of water abstraction on the streamflow regime. One of the main concerns of hydrological modelling is the quantification of runoff from rainstorm events. In practice, the Soil Conservation Service curve number (SCS) method and the conventional procedure called the rational method are still generally used; these traditional lumped hydrological models convert statistical properties of rainfall in a river basin to observed runoff and hydrographs. However, such models give little or no spatially distributed information on rainfall and basin physical characteristics. Therefore, this paper synthesizes morphometric parameters in generating runoff. The expected results on basin characteristics such as size, area, shape and slope of the watershed, together with stream network analysis, could be useful in estimating streamflow discharge. Water resources managers and irrigation farmers could utilize the tool for determining the net return from available scarce water resources, where past records of land and climate data are sparse.
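
For context on the SCS curve number method cited above, a worked sketch of the standard runoff equation Q = (P − 0.2S)² / (P + 0.8S) with S = 25400/CN − 254 (millimetres); the storm depth and curve number below are hypothetical, not values from the Gurara basin study:

```python
def scs_runoff_mm(rainfall_mm, curve_number):
    """Direct runoff depth from the SCS Curve Number method (metric form)."""
    s = 25400.0 / curve_number - 254.0   # potential maximum retention [mm]
    ia = 0.2 * s                         # initial abstraction [mm]
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm + 0.8 * s)

# Hypothetical storm: 75 mm of rainfall over a catchment with CN = 80
print(round(scs_runoff_mm(75.0, 80), 1), "mm of direct runoff")   # ~30.9 mm
```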

Keywords: Hydrological characteristic, land and climate, runoff discharge, streamflow.

125 Routing Medical Images with Tabu Search and Simulated Annealing: A Study on Quality of Service

Authors: Mejía M. Paula, Ramírez L. Leonardo, Puerta A. Gabriel

Abstract:

In telemedicine, the image repository service is important to increase the accuracy of diagnostic support for medical personnel. This study compares two routing algorithms with regard to quality of service (QoS), in order to analyze the optimal performance when uploading and/or downloading medical images. The study focuses on comparing the performance of Tabu Search with other heuristic and metaheuristic algorithms that improve QoS in telemedicine services in Colombia. For this, the Tabu Search and Simulated Annealing heuristic algorithms are chosen for their high usability in this type of application; the QoS is measured taking into account the following metrics: delay, throughput, jitter and latency. In addition, routing tests were carried out on ten images of 40 MB in Digital Imaging and Communications in Medicine (DICOM) format. These tests were carried out for ten minutes with different traffic conditions, reaching a total of 25 tests, from a server of Universidad Militar Nueva Granada (UMNG) in Bogotá, Colombia, to a remote user at Universidad de Santiago de Chile (USACH), Chile. The results show that Tabu Search presents better QoS performance compared to Simulated Annealing, managing to optimize the routing of medical images, a basic requirement for offering diagnostic imaging services in telemedicine.
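
As background on one of the two metaheuristics compared above, a generic simulated-annealing loop; the cost function, neighbour move and cooling schedule are placeholders (in the study the cost would score a candidate route by delay, throughput, jitter and latency), not the authors' implementation:

```python
import math
import random

def simulated_annealing(cost, neighbour, initial, t0=1.0, cooling=0.95, steps=500):
    """Minimise `cost` by accepting worse moves with probability exp(-delta/T)."""
    current, best = initial, initial
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / max(t, 1e-12)):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling                       # geometric cooling schedule
    return best

# Toy usage: minimise a 1-D "route cost" by jittering a single parameter
print(simulated_annealing(cost=lambda x: (x - 3.2) ** 2,
                          neighbour=lambda x: x + random.uniform(-0.5, 0.5),
                          initial=0.0))
```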

Keywords: Medical image, QoS, simulated annealing, Tabu search, telemedicine.

124 Displacement Fields in Footing-Sand Interactions under Cyclic Loading

Authors: S. Joseph Antony, Z. K. Jahanger

Abstract:

Soils are subjected to cyclic loading in situ in situations such as earthquakes and the compaction of pavements. Investigations on the local-scale measurement of grain displacements and failure patterns within the soil bed under cyclic loading conditions are rather limited. In this paper, using digital particle image velocimetry (DPIV), local-scale displacement fields of a dense sand medium interacting with a rigid footing are measured under the plane-strain condition for two commonly used types of cyclic loading, and for the quasi-static loading condition for purposes of comparison. From the displacement measurements of the grains, the failure envelopes of the sand media are also presented. The results show that the ultimate cyclic bearing capacity (qult-cyc) occurred at a relatively higher settlement value than under quasi-static loading. For the sand media under the cyclic loading conditions considered here, the displacement fields in the soil spread more widely in the horizontal direction and less deeply in the vertical direction than under quasi-static loading. The 'dead zone' in the sand grains beneath the footing is identified for all loading conditions studied here. These grain-scale characteristics have implications for the resulting bulk bearing capacity of sand media in footing-sand interaction problems.

Keywords: Cyclic loading, DPIV, settlement, soil-structure interactions, strip footing.

123 Object Identification with Color, Texture, and Object-Correlation in CBIR System

Authors: Awais Adnan, Muhammad Nawaz, Sajid Anwar, Tamleek Ali, Muhammad Ali

Abstract:

The need for efficient information retrieval has increased more than ever in recent years because of the frequent use of digital information in our lives. We see a lot of work in the area of textual information, but in multimedia information we cannot find much progress. For text-based information, technologies such as data mining and data marts, which grew out of the basic database concepts of the 1960s, are now in operation. In image search, and especially in image identification, computerized systems are at a very early stage. Even in the area of image search we cannot see as much progress as in text-based search techniques. One main reason for this is the widespread roots of image search, where many areas such as artificial intelligence, statistics, image processing and pattern recognition play their role. Human psychology, perception and cultural diversity also have their share in the design of a good and efficient image recognition and retrieval system. A new object-based search technique is presented in this paper, where objects in the image are identified on the basis of their geometrical shapes and other features such as color and texture, and object correlation augments this search process. To stay focused on object identification, simple images are selected for the work to reduce the role of segmentation in the overall process; however, the same technique can also be applied to other images.

Keywords: Object correlation, Geometrical shape, Color, texture, features, contents.

122 Comparisons of Fine Motor Functions in Subjects with Parkinson’s Disease and Essential Tremor

Authors: Nan-Ying Yu, Shao-Hsia Chang

Abstract:

This study explores the clinical features of neurodegenerative disease patients with tremor. We study the motor impairments in patients with Parkinson's disease (PD) and essential tremor (ET). Since uncertainty exists as to whether PD and ET patients have a similar degree of impairment during motor tasks, this study uses a self-developed computerized handwriting movement analysis to characterize the motor functions of these two impairments. The recruited subjects were diagnosed with and confirmed to have one of the neurodegenerative diseases, and they underwent general clinical evaluations by physicians in the first year. We recruited 8 participants with PD and 10 with ET. An additional 12 participants without any neuromuscular dysfunction were recruited as the control group. This study used fine motor control of penmanship on a digital tablet for the sensorimotor function tests. The movement speed in the PD/ET groups was found to be significantly slower than that of subjects in the normal control group. In movement intensity and speed, ET subjects were found to share similar clinical features with PD subjects. The ET group shows smaller and slower movements than the control group, but not to the same extent as the PD group. The results of this study contribute to the early screening and detection of these diseases and the evaluation of disease progression.

Keywords: Parkinson’s disease, essential tremor, motor function, fine motor movement, computerized handwriting evaluation.

121 Standard Deviation of Mean and Variance of Rows and Columns of Images for CBIR

Authors: H. B. Kekre, Kavita Patil

Abstract:

This paper describes a novel and effective approach to content-based image retrieval (CBIR) that represents each image in the database by a vector of feature values called the “standard deviation of mean vectors of color distribution of rows and columns of images for CBIR”. In many areas of commerce, government, academia, and hospitals, large collections of digital images are being created. This paper describes an approach that uses image contents as the feature vector for retrieval of similar images. There are several classes of features that are used to specify queries: color, texture, shape and spatial layout. Color features are often easily obtained directly from the pixel intensities. In this paper, feature extraction is done for the texture descriptors 'variance' and 'variance of variances'. First, the standard deviation of the row means and of the column means is calculated for the R, G, and B planes; these six values form one feature vector for the image. Secondly, we calculate the variance of each row and column of the R, G and B planes of an image, and then the six standard deviations of these variance sequences are calculated to form a feature vector of dimension six. We applied our approach to a database of 300 BMP images. We determined the capability of automatic indexing by analyzing image content, with color and texture as features, and by applying Euclidean distance as the similarity measure.
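
A minimal sketch of the feature extraction described above (per-plane standard deviations of row/column means and of row/column variances, compared with Euclidean distance); the random images are placeholders for database entries, not the 300 BMP images used in the paper:

```python
import numpy as np

def cbir_features(img):
    """img: H x W x 3 RGB array. Returns the two 6-D vectors described in the abstract."""
    mean_feats, var_feats = [], []
    for c in range(3):
        plane = img[:, :, c].astype(np.float64)
        mean_feats += [plane.mean(axis=1).std(), plane.mean(axis=0).std()]  # std of row/col means
        var_feats += [plane.var(axis=1).std(), plane.var(axis=0).std()]     # std of row/col variances
    return np.array(mean_feats), np.array(var_feats)

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

rng = np.random.default_rng(0)
query = rng.integers(0, 256, (32, 32, 3))
candidate = rng.integers(0, 256, (32, 32, 3))
qm, qv = cbir_features(query)
cm, cv = cbir_features(candidate)
print(euclidean(qm, cm), euclidean(qv, cv))   # smaller distance = more similar image
```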

Keywords: Standard deviation, Image retrieval, color distribution, Variance, Variance of Variance, Euclidean distance.

120 Growth and Anatomical Responses of Lycopersicon esculentum (Tomatoes) under Microgravity and Normal Gravity Conditions

Authors: Gbenga F. Akomolafe, Joseph Omojola, Ezekiel S. Joshua, Seyi C. Adediwura, Elijah T. Adesuji, Michael O. Odey, Oyinade A. Dedeke, Ayo H. Labulo

Abstract:

Microgravity is known to be a major abiotic stress in space which affects plants depending on the duration of exposure. In this work, tomato seeds were exposed to long hours of simulated microgravity using a one-axis clinostat. The seeds were sown on a 1.5% combination of plant nutrient and agar-agar solidified medium in three Petri dishes. One of the Petri dishes was mounted on the clinostat and allowed to rotate at a speed of 20 rpm for 72 hours, while the others were subjected to the normal gravity vector. Anatomical sections of both clinorotated and normal-gravity plants were made after 72 hours and observed using a phase-contrast digital microscope. The percentage germination, as well as the growth rate, of the normal-gravity seeds was higher than that of the clinorotated ones. The germinated clinorotated roots followed different directions, unlike the normal-gravity ones, which grew in the direction of the gravity vector. The clinostat was able to switch off gravistimulation. A distinct cellular arrangement was observed for tomatoes under the normal gravity condition, unlike that of the clinorotated ones. The root epidermis and cortex of the normal-gravity plants are thicker than those of the clinorotated ones. This implies that under long-term microgravity influence, plants alter their anatomical features as a way of adapting to the stress condition.

Keywords: Anatomy, Clinostat, Germination, Microgravity, Lycopersicon esculentum.

119 Spacecraft Neural Network Control System Design using FPGA

Authors: Hanaa T. El-Madany, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassen T. Dorrah

Abstract:

Designing and implementing intelligent systems has become a crucial factor for the innovation and development of better products for space technologies. A neural network is a parallel system, capable of resolving paradigms that linear computing cannot. A field programmable gate array (FPGA) is a digital device that offers reprogrammability and robust flexibility. For neural-network-based instrument prototypes in real-time applications, conventional application-specific VLSI neural chip design suffers from limitations in time and cost. With low-precision artificial neural network design, FPGAs offer higher speed and smaller size for real-time applications than VLSI and DSP chips. Hence, many researchers have made great efforts on the realization of neural networks (NNs) using the FPGA technique. In this paper, an introduction to ANN and FPGA techniques is briefly given. Also, Hardware Description Language (VHDL) code is proposed to implement ANNs and to present simulation results with floating-point arithmetic. Synthesis results for the ANN controller are developed using Precision RTL. The proposed VHDL implementation provides a flexible, fast method and a high degree of parallelism for implementing ANNs. The implementation of the multi-layer NN using lookup tables (LUTs) reduces the resource utilization and execution time.
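
To illustrate the lookup-table idea mentioned at the end of the abstract, a small software sketch of a LUT-based sigmoid activation of the kind typically used to avoid computing exp() per neuron in hardware; the table depth, input range and values are assumptions for illustration, not the paper's VHDL design:

```python
import numpy as np

LUT_BITS = 8                      # table depth: 256 entries (assumed precision)
X_RANGE = 8.0                     # inputs clipped to [-8, 8] (assumption)
_xs = np.linspace(-X_RANGE, X_RANGE, 2 ** LUT_BITS)
SIGMOID_LUT = 1.0 / (1.0 + np.exp(-_xs))   # table filled offline, stored in on-chip memory

def sigmoid_lut(x):
    """Activation via table lookup instead of evaluating exp() at run time."""
    x = np.asarray(x, dtype=np.float64)
    idx = np.clip(((x + X_RANGE) / (2 * X_RANGE) * (2 ** LUT_BITS - 1)).astype(int),
                  0, 2 ** LUT_BITS - 1)
    return SIGMOID_LUT[idx]

print(sigmoid_lut([-2.0, 0.0, 2.0]))   # ≈ [0.12, 0.49, 0.88] with this coarse table
```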

Keywords: Spacecraft, neural network, FPGA, VHDL.

118 A Novel Machining Signal Filtering Technique: Z-notch Filter

Authors: Nuawi M. Z., Lamin F., Ismail A. R., Abdullah S., Wahid Z.

Abstract:

A filter is used to remove undesirable frequency information from a dynamic signal. This paper shows that the Z-notch filtering technique can be applied to remove noise nuisance from a machining signal. In machining, the noise components were identified from the sound produced by the operation of the machine components themselves, such as the hydraulic system, the motor and the machine environment. By correlating the noise components with the measured machining signal, the components of interest of the measured machining signal, which are less interfered with by the noise, can be extracted. Thus, the filtered signal is more reliable to analyse in terms of noise content compared to the unfiltered signal. Significantly, the I-kaz method, which comprises a three-dimensional graphical representation and the I-kaz coefficient Z∞, could differentiate between the filtered and the unfiltered signal. A larger scattering space and a higher value of Z∞ demonstrate that the signal is highly contaminated by noise. This method can be utilised as a proactive tool in evaluating the noise content of a signal. The evaluation of noise content, as well as its elimination, is very important, especially for machining operation fault diagnosis. The Z-notch filtering technique was reliable in removing the noise components from the measured machining signal with high efficiency. Even though the measured signal was exposed to high noise disruption, the signal generated from the interaction between the cutting tool and the workpiece could still be acquired. Therefore, noise interference that could change the original signal features and consequently degrade the useful sensory information can be eliminated.
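
For illustration, a generic notch-filtering sketch that suppresses one identified noise frequency from a synthetic "machining" signal; the sampling rate, tone frequencies and filter Q are assumptions, and this is not the authors' Z-notch or I-kaz implementation:

```python
import numpy as np
from scipy import signal

fs = 10_000.0                     # sampling rate in Hz (assumed)
f_noise = 500.0                   # identified noise component to suppress (assumed)

# Digital notch centred on the noise frequency (normalized to Nyquist)
b, a = signal.iirnotch(f_noise / (fs / 2.0), Q=30.0)

t = np.arange(0, 1.0, 1.0 / fs)
machining = np.sin(2 * np.pi * 2000 * t)          # stand-in for the tool/workpiece signal
noisy = machining + 0.8 * np.sin(2 * np.pi * f_noise * t)
filtered = signal.filtfilt(b, a, noisy)           # zero-phase filtering preserves waveform shape

print(np.std(noisy - machining), np.std(filtered - machining))  # residual noise before/after
```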

Keywords: Digital signal filtering, I-kaz method, Machining monitoring, Noise cancelling, Sound

117 A Novel VLSI Architecture for Image Compression Model Using Low power Discrete Cosine Transform

Authors: Vijaya Prakash.A.M, K.S.Gurumurthy

Abstract:

In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction of image quality. This paper describes the hardware architecture of a low-complexity Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is used for the implementation of the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8×8 matrix using two 1-dimensional DCT blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is implemented in MATLAB code. The VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped using 180 nm standard cells. Simulation is done using ModelSim. The simulation results from MATLAB and Verilog HDL are compared. Detailed analysis of power and area was done using RTL Compiler from Cadence. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
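
A short software sketch of the separable row-column decomposition the architecture above exploits (two 1-D DCT passes with a transpose in between); the 8×8 block contents are arbitrary and the hardware-specific sharing of common computations is not modelled:

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    """Separable 2-D DCT: 1-D DCT over columns, then over rows."""
    return dct(dct(block, type=2, norm="ortho", axis=0), type=2, norm="ortho", axis=1)

def idct2(coeffs):
    """Inverse 2-D DCT, reconstructing the original block."""
    return idct(idct(coeffs, type=2, norm="ortho", axis=0), type=2, norm="ortho", axis=1)

block = np.arange(64, dtype=np.float64).reshape(8, 8)     # stand-in for an 8x8 image tile
coeffs = dct2(block)
print(np.allclose(idct2(coeffs), block))                  # True: IDCT recovers the tile
```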

Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Expert Group (JPEG), Low Power Design, Very Large Scale Integration (VLSI).

116 Evolution of Web Development Techniques in Modern Technology

Authors: Abdul Basit Kiani, Maryam Kiani

Abstract:

The art of web development in new technologies is a dynamic journey, shaped by the constant evolution of tools and platforms. With the emergence of JavaScript frameworks and APIs, web developers are empowered to craft web applications that are not only robust but also highly interactive. The aim is to provide an overview of the developments in the field. The integration of artificial intelligence (AI) and machine learning (ML) has opened new horizons in web development. Chatbots, intelligent recommendation systems, and personalization algorithms have become integral components of modern websites. These AI-powered features enhance user engagement, provide personalized experiences, and streamline customer support processes, revolutionizing the way businesses interact with their audiences. Lastly, the emphasis on web security and privacy has been a pivotal area of progress. With the increasing incidents of cyber threats, web developers have implemented robust security measures to safeguard user data and ensure secure transactions. Innovations such as HTTPS protocol, two-factor authentication, and advanced encryption techniques have bolstered the overall security of web applications, fostering trust and confidence among users. Hence, recent progress in web development has propelled the industry forward, enabling developers to craft innovative and immersive digital experiences. From responsive design to AI integration and enhanced security, the landscape of web development continues to evolve, promising a future filled with endless possibilities.

Keywords: Web development, software testing, progressive web apps, web and mobile native application.

115 Privacy Protection Principles of Omnichannel Approach

Authors: Renata Mekovec, Dijana Peras, Ruben Picek

Abstract:

The advent of the Internet, mobile devices and social media is revolutionizing the experience of retail customers by linking multiple sources through various channels. Omnichannel retailing combines multiple channels to allow customers to seamlessly leverage all the distribution information online and offline while shopping. Therefore, data are today an asset more critical than ever for all organizations. Nonetheless, because of their heterogeneity across platforms, developers are currently facing difficulties in dealing with personal data. Considering the possibilities of omnichannel communication, this paper presents a channel categorization that could enhance the customer experience of an omnichannel center called the hyper center. The purpose of this paper is fundamentally to describe the connection between the omnichannel hyper center and the customer, with particular attention to privacy protection. The first phase was finding the most appropriate communication channels for the hyper center. Consequently, a selection of widely used communication channels has been identified and analyzed with regard to the requirements for optimizing user experience. The evaluation criteria are divided into three groups: general, user profile and channel options. For each criterion, the weight of importance for omnichannel communication was defined. The most important consideration was how the hyper center can perform user identification while respecting the privacy protection requirements. The study carried out also shows what customer experience across digital networks would look like, based on an omnichannel approach governed by privacy protection principles.

Keywords: Personal data, privacy protection, omnichannel communication, retail.

114 Identification of the Electronic City Application Obstacles in Iran

Authors: E. Asgharizadeh, M. Ajalli Geshlajoughi, S. R. Safavi Mirmahalleh

Abstract:

The amazing development of information technology, the expansion of communications and the internet, the city managers' need for new ideas to run the city, and higher participation of citizens all encourage us to complete the electronic city as soon as possible. The foundations of this electronic city are in information technology. People's participation in metropolitan management is a crucial topic, and information technology does not impede it; it can improve the populace's participation and foster better interactions between the citizens and the city managers. Citizens can proffer their ideas, beliefs and votes on topical matters through digital mass media based upon the internet and computer networks, and receive appropriate replies and services. They can participate in urban projects by becoming cognizant of the city views. The most significant challenges are information and communication management, altering citizens' views, and legal and administrative documents. Electronic city obstacles have been identified in this research. The required data were gathered through questionnaires to identify the barriers, from a statistical community comprising specialists and practitioners of the Ministry of Information Technology and Communication and the municipal information technology organization. The conclusions demonstrate that the prioritized barriers to electronic city application in Iran are as follows: support problems (non-financial ones); behavioral, cultural and educational problems; security, legal and licensing problems; hardware, terminological and infrastructural constraints; and software and financial problems.

Keywords: Electronic city, urban management, populace's participation, electronic government, electronic services, electronic organization, electronic infrastructure.

113 Experimental Investigation and Sensitivity Analysis for the Effects of Fracture Parameters to the Conductance Properties of Laterite

Authors: Bai Wei, Kong Ling-Wei, Guo Ai-Guo

Abstract:

This experiment investigates the effects of fracture parameters such as depth, length, width, angle and the number of fractures on the conductance properties of laterite, using the DUK-2B digital electrical measurement system combined with a method of simulating the fractures. The experimental results show that changes in the fracture parameters affect the conductance properties of laterite. The conductivity of laterite clearly decreases as the depth, length, width, angle or quantity of fractures gradually increases. When the depth of a fracture exceeds half the thickness of the soil body, the conductivity of laterite shows an evidently non-linear diminishing pattern and the amplitude of the decrease tends to increase. The fracture length has less effect on the conductivity than the depth. When the fracture width reaches certain fixed values, the conductivity becomes less sensitive to further changes in width and remains at a stable level. When the fracture angle is less than 45°, the decrease in conductivity becomes more pronounced as the angle increases; when the angle is more than 45°, the change in conductivity is relatively gentle as the angle increases. An increasing quantity of fractures causes the other fracture parameters to have a greater impact on the change in conductivity. With moisture content and temperature unchanged, the depth and angle of fractures are the major factors affecting the conductivity of laterite soil; quantity, length, and width are minor influencing factors. The sensitivity ranking of the fracture parameters affecting the conductivity of laterite soil is: depth > angle > quantity > length > width.

Keywords: laterite, fracture parameters, conductance properties, conductivity, uniform design, sensitivity analysis

112 The Implementation of Anti-Circumvention Legislations in Thai Copyright System

Authors: Chuencheewin Yimfuang

Abstract:

The WIPO Copyright Treaty (WCT) was established by the World Intellectual Property Organisation (WIPO). This agreement requires the contracting nations to provide adequate protection for technological measures in order to prevent massive copyright infringement on the internet. Thailand had to implement anti-circumvention rules into domestic legislation to comply with this international obligation. The purpose of this paper is to critically discuss the legislative standard under the WCT, to examine the legal development of technological protection measures in Thailand, and to demonstrate that the scope of prohibitions under the Copyright Act 2022 (No. 5) is similar to that of the Digital Millennium Copyright Act 1998 (DMCA) of the United States (US). It is found that the anti-circumvention laws of Thailand prohibit the circumvention of access-control technologies, and a regulation on trafficking in circumvention devices has been added to the latest version of the Thai Copyright Act. These legislative developments reveal an attempt to reinforce the legal protection of technological measures and copyright holders in line with global practices. However, the amendment has problems concerning the legal definitions of an effective technological measure and the prohibited act of circumvention. The vagueness might affect the scope of protection and the boundary of prohibition. In this respect, the DMCA is evaluated and compared in order to gain guidelines for interpretation and enforcement in Thailand. The lessons and experiences learned from this study may be useful to correct the flaws, or at least clarify the ambiguities, embodied in Thai copyright legislation.

Keywords: Legal development, technological protection measure, prohibition, circumvention, Thailand.

111 Component Based Framework for Authoring and Multimedia Training in Mathematics

Authors: Ion Smeureanu, Marian Dardala, Adriana Reveiu

Abstract:

The new programming technologies allow for the creation of components which can be automatically or manually assembled to reach a new experience in understanding and mastering knowledge or in acquiring skills for a specific knowledge area. The project proposes an interactive framework that permits the creation, combination and utilization of components that are specific to mathematical training in high schools. The main framework's objectives are:
• authoring lessons by the teacher or the students; all they need are simple operating skills for the Equation Editor (or something similar, or LaTeX); the rest are just drag & drop operations, inserting data into a grid, or navigating through menus;
• allowing sonorous presentations of mathematical texts and solving hints (more easily understood by the students);
• offering graphical representations of a mathematical function edited in the Equation Editor;
• storing learning objects in a database;
• storing predefined lessons (efficient for expressions and commands, the rest being calculations; this allows high compression);
• viewing and/or modifying predefined lessons, according to the curricula.
The whole framework is focused on a mathematical expression mini-compiler, storing the code that will later be used for different purposes (tables, graphics, and optimisations). Programming technologies used: a Visual C# .NET implementation is proposed. New and innovative digital learning objects for mathematics will be developed; they are capable of interpreting, contextualizing and reacting depending on the architecture in which they are assembled.

Keywords: Adaptor, automatic assembly, learning component, user control.
