Search results for: nonuniform signal processing
3943 Performance Comparison of Wideband Covariance Matrix Sparse Representation (W-CMSR) with Other Wideband DOA Estimation Methods
Authors: Sandeep Santosh, O. P. Sahu
Abstract:
In this paper, a performance comparison of the wideband covariance matrix sparse representation (W-CMSR) method with other existing wideband direction of arrival (DOA) estimation methods is made. W-CMSR relies less on a priori knowledge of the number of incident signals than ordinary subspace-based methods. Consider the perturbation-free covariance matrix of the wideband array output. The diagonal covariance elements are contaminated by unknown noise variance. The covariance matrix of the array output is conjugate symmetric, i.e., its upper-right triangular elements can be represented by the lower-left triangular ones. As the main diagonal elements are contaminated by unknown noise variance, we slide over them and align the lower-left triangular elements column by column to obtain a measurement vector. Simulation results for W-CMSR are compared with those of other wideband DOA estimation methods, namely the coherent signal subspace method (CSSM), Capon, L1-SVD, and JLZA-DOA. W-CMSR separates the two signals very clearly, whereas CSSM, Capon, L1-SVD, and JLZA-DOA fail to do so, and a number of pseudo peaks appear in the spectrum of L1-SVD.
Keywords: W-CMSR, wideband direction of arrival (DOA), covariance matrix, electrical and computer engineering
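As an illustration of the vectorization step described in this abstract, the sketch below (not the authors' code) stacks the strictly lower-triangular covariance elements column by column, skipping the noise-contaminated diagonal:

```python
import numpy as np

def cmsr_measurement_vector(R):
    """Stack the strictly lower-triangular elements of an array covariance
    matrix column by column, skipping the main diagonal, which is
    contaminated by unknown noise variance."""
    M = R.shape[0]
    cols = [R[j + 1:, j] for j in range(M - 1)]  # below-diagonal part of each column
    return np.concatenate(cols)

# Toy example: a conjugate-symmetric covariance estimated from snapshots.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 200)) + 1j * rng.standard_normal((4, 200))
R = X @ X.conj().T / X.shape[1]
y = cmsr_measurement_vector(R)
print(y.shape)  # (6,) for a 4-element array: M*(M-1)/2 entries
```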
Procedia PDF Downloads 469
3942 3D Images Representation to Provide Information on the Type of Castella Beams Hole
Authors: Cut Maisyarah Karyati, Aries Muslim, Sulardi
Abstract:
Digital image processing techniques for obtaining detailed information from an image have been used in various fields, including civil engineering, where solid beam profiles in buildings and bridges have been encountered since the early development of beams. Along with this development, castellated beam profiles have become more diverse in shape, such as hexagons, triangles, pentagons, circles, ellipses, and ovals, which can be a practical solution for optimizing a construction because of their characteristics. The purpose of this research is to create a computer application to detect the edges of the holes in various shapes of castellated beams. A digital image segmentation method has been used to obtain grayscale images, represented in 2D and 3D formats. This application has been successfully built according to the desired function, which is to provide information on the type of castella beam hole.
Keywords: digital image, image processing, edge detection, grayscale, castella beams
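A minimal sketch of such an edge-detection stage, assuming OpenCV and illustrative thresholds (the file name and the vertex-count mapping are hypothetical, not the authors' settings):

```python
import cv2

# Hypothetical input file; the paper's own dataset is not available here.
img = cv2.imread("castella_beam.png")
assert img is not None, "place a beam photograph at this path"
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)  # edge map of the hole outlines

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    n = len(approx)
    # A crude mapping from approximated vertex count to hole type.
    label = {3: "triangle", 5: "pentagon", 6: "hexagon"}.get(n, "circle/ellipse/oval")
    print(n, label)
```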
Procedia PDF Downloads 140
3941 Precursor Signatures of Few Major Earthquakes in Italy Using a Very Low Frequency Signal of 45.9 kHz
Authors: Keshav Prasad Kandel, Balaram Khadka, Karan Bhatta, Basu Dev Ghimire
Abstract:
Earthquakes remain a threatening disaster. Being able to predict earthquakes would certainly help prevent substantial loss of life and property. The Very Low Frequency/Low Frequency (VLF/LF) signal band (3-30 kHz), which is effectively reflected from the D-layer of the ionosphere, can perhaps be established as a tool to predict earthquakes. On May 20 and May 29, 2012, earthquakes of magnitude 6.1 and 5.8, respectively, struck Emilia-Romagna, Italy. Four years later, on August 24, 2016, an earthquake of magnitude 6.2 struck Central Italy (42.706° N, 13.223° E) at 1:36 UT. We present the results obtained from the US Navy VLF transmitter's NSY signal of 45.9 kHz, transmitted from Niscemi, in the province of Sicily, Italy, and received at the Kiel Longwave Monitor, Germany, for 2012 and 2016. We analyzed the terminator times, their individual differences, and nighttime fluctuation counts. We also analyzed trends, dispersion, and nighttime fluctuation, which gave us possible precursors to these earthquakes. Since perturbations in VLF amplitude can also be due to various other factors, such as lightning, geomagnetic activity (storms, auroras, etc.), and solar activity (flares, UV flux, etc.), we filtered out the possible perturbations due to these agents to guarantee that the perturbations seen in the VLF/LF amplitudes were precursors to earthquakes. As our TRGCP path is north-south, the sunrise and sunset times at the transmitter and receiver locations nearly coincide, making the VLF/LF propagation path smoother and the data more natural. We found many clear anomalies (as precursors) in terminator times 5 to 16 days before the earthquakes. Moreover, using the nighttime fluctuation method, we found clear anomalies 5 to 13 days prior to the main earthquakes. This correlates with the findings of previous authors that ionospheric perturbations are seen a few days to one month before seismic activity. In addition, we observed an unexpected decrease in dispersion for certain anomalies where an increase was expected, which partly contradicted our findings. To resolve this problem, we devised a new parameter called nighttime dispersion. Upon analysis, this parameter decreases significantly on days with nighttime anomalies, thereby further supporting the identified precursors.
Keywords: D-layer, TRGCP (Transmitter Receiver Great Circle Path), terminator times, VLF/LF
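A minimal sketch of a nighttime fluctuation count of the kind described here, using synthetic data (the NSY/Kiel records are not reproduced in the abstract); the threshold k is an assumed parameter:

```python
import numpy as np

def nighttime_fluctuation_count(amplitude, quiet_mean, quiet_std, k=2.0):
    """Count nighttime samples whose VLF amplitude departs from the mean
    of quiet nights by more than k standard deviations.  `amplitude` is
    one night of received-signal samples; `quiet_mean`/`quiet_std` are
    per-sample statistics over a set of geomagnetically quiet nights."""
    deviation = np.abs(amplitude - quiet_mean)
    return int(np.sum(deviation > k * quiet_std))

# Toy illustration with synthetic data and an injected perturbation.
rng = np.random.default_rng(1)
quiet_nights = rng.normal(0.0, 1.0, size=(30, 480))  # 30 quiet nights
mean, std = quiet_nights.mean(axis=0), quiet_nights.std(axis=0)
test_night = rng.normal(0.0, 1.0, 480)
test_night[200:220] += 4.0                           # simulated anomaly
print(nighttime_fluctuation_count(test_night, mean, std))
```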
Procedia PDF Downloads 189
3940 Magnetic Survey for the Delineation of Concrete Pillars in Geotechnical Investigation for Site Characterization
Authors: Nuraddeen Usman, Khiruddin Abdullah, Mohd Nawawi, Amin Khalil Ismail
Abstract:
A magnetic survey was carried out in order to locate the remains of construction items, specifically concrete pillars. The conventional Euler deconvolution technique can perform this task, but it requires a fixed structural index (SI), whereas the construction items are made of materials with different shapes that require different (unknown) SIs. A Euler deconvolution technique that estimates the background, horizontal coordinates (x0 and y0), depth, and structural index (SI) simultaneously was therefore prepared and used for this task. The synthetic model study carried out indicated that the new methodology can give a good estimate of location and does not depend on magnetic latitude. For the field data, both the total magnetic field and gradiometer readings were collected simultaneously. The computed vertical derivatives and the gradiometer readings were compared and showed good correlation, signifying the effectiveness of the method. Filtering was carried out using an automated procedure, the analytic signal, and other traditional techniques. The clustered depth solutions coincided with high amplitudes of the analytic signal, and these are the possible target positions of the concrete pillars being sought. The targets under investigation are interpreted to be located at depths between 2.8 and 9.4 meters. A follow-up survey is recommended, as this marks the preliminary stage of the work.
Keywords: concrete pillar, magnetic survey, geotechnical investigation, Euler deconvolution
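For orientation, conventional fixed-SI Euler deconvolution reduces to a linear least-squares problem; the sketch below shows that baseline (the paper's variant additionally estimates the structural index, which this simplified version does not):

```python
import numpy as np

def euler_deconvolution(x, y, z, T, Tx, Ty, Tz, N):
    """Classic window-based Euler deconvolution with a fixed structural
    index N.  Solves, in the least-squares sense,
        Tx*x0 + Ty*y0 + Tz*z0 + N*B = x*Tx + y*Ty + z*Tz + N*T
    for the source position (x0, y0, z0) and regional background B."""
    A = np.column_stack([Tx, Ty, Tz, np.full_like(T, float(N))])
    b = x * Tx + y * Ty + z * Tz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, y0, z0, B = sol
    return x0, y0, z0, B
```

Here Tx, Ty, and Tz are the field gradients within a moving data window (measured by the gradiometer or computed from the total field), and z0 is the window's depth estimate.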
Procedia PDF Downloads 257
3939 Signal Amplification Using Graphene Oxide in Label Free Biosensor for Pathogen Detection
Authors: Agampodi Promoda Perera, Yong Shin, Mi Kyoung Park
Abstract:
The successful detection of pathogenic bacteria in blood provides important information for early detection, diagnosis, and the prevention and treatment of infectious diseases. Silicon microring resonators are refractive-index-based optical biosensors that provide highly sensitive, label-free, real-time multiplexed detection of biomolecules. We demonstrate a technique of using graphene oxide (GO) to enhance the signal output of a silicon microring optical sensor. The activated carboxylic groups in GO molecules bind directly to single-stranded DNA with an amino-modified 5' end. This conjugation amplifies the shift in resonant wavelength in a real-time manner. We designed a 21 bp capture probe for the strain Staphylococcus aureus and a longer, 70 bp complementary target sequence. The mismatched target sequence we used was a 70 bp sequence of Streptococcus agalactiae. GO is added after the complementary binding of the probe and target. GO conjugates to the unbound single-stranded segment of the target and increases the wavelength shift on the silicon microring resonator. Furthermore, our results show that GO could successfully differentiate the mismatched DNA sequences from the complementary DNA sequence. Therefore, the proposed concept could effectively enhance the sensitivity of pathogen detection sensors.
Keywords: label-free biosensor, pathogenic bacteria, graphene oxide, diagnosis
Procedia PDF Downloads 465
3938 Nonlinear Analysis in Investigating the Complexity of Neurophysiological Data during Reflex Behavior
Authors: Juliana A. Knocikova
Abstract:
Methods of nonlinear signal analysis are based on the finding that random behavior can arise in deterministic nonlinear systems with only a few degrees of freedom. In dynamical systems, entropy is usually understood as a rate of information production. Changes in the temporal dynamics of physiological data indicate the evolution of the system in time and, thus, the rate at which new signal patterns are generated. During the last decades, many algorithms have been introduced to assess patterns of physiological responses to external stimuli. However, reflex responses are usually characterized by short periods of time, which is a great limitation for the usual methods of nonlinear analysis. To solve the problem of short recordings, the approximate entropy parameter has been introduced as a measure of system complexity. A low value of this parameter reflects regularity and predictability in the analyzed time series. Conversely, an increase in this parameter means unpredictability and random behavior, hence higher system complexity. Reduced complexity of neurophysiological data has been observed repeatedly when analyzing electroneurogram and electromyogram activities during defence reflex responses. Quantitative phrenic neurogram changes are also evident during severe hypoxia, as well as during airway reflex episodes. In conclusion, the approximate entropy parameter serves as a convenient tool for the analysis of reflex behavior characterized by short-lasting time series.
Keywords: approximate entropy, neurophysiological data, nonlinear dynamics, reflex
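Approximate entropy itself is easy to compute; the following is a standard Pincus-style implementation with the common choices m = 2 and r = 0.2·std, not the authors' code:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy (Pincus).  Low values indicate a regular,
    predictable series; higher values indicate complexity.
    r defaults to 0.2 * std(x), a common choice for short recordings."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(m):
        # Embed the series into overlapping m-length templates.
        templates = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        C = np.mean(dist <= r, axis=1)  # fraction of similar templates
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

# A regular sine is less complex than noise.
t = np.linspace(0, 8 * np.pi, 300)
print(approximate_entropy(np.sin(t)))                                       # low
print(approximate_entropy(np.random.default_rng(0).standard_normal(300)))   # higher
```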
Procedia PDF Downloads 299
3937 Implementation of Congestion Management Strategies on Arterial Roads: Case Study of Geelong
Authors: A. Das, L. Hitihamillage, S. Moridpour
Abstract:
Natural disasters are inevitable. Disasters such as floods, tsunamis, and tornadoes can be brutal, harsh, and devastating. In Australia, flooding is a major issue experienced by different parts of the country. In such crises, delays in evacuation can decide the life and death of the people living in those regions. Congestion management can become a mammoth task if no steps are taken before such situations arise. In the past, congestion in such circumstances was managed with strategies such as converting road shoulders into extra lanes or changing the road geometry by adding more lanes. However, road expansion is no longer considered a viable option for resolving congestion problems. The authorities avoid this option for many reasons, such as a lack of financial support and land space. They tend to focus their attention on optimising the resources they already possess and use traffic signals to overcome congestion problems. A traffic signal management strategy was considered a viable option to alleviate congestion problems in the City of Geelong, Victoria. An arterial road with signalised intersections is considered in this paper, and the traffic data required for modelling were collected from VicRoads. The traffic signalling software SIDRA was used to model the road with the information gathered from VicRoads. Various signal parameters were utilised to assess and improve corridor performance and achieve the best possible Level of Service (LOS) for the arterial road.
Keywords: congestion, constraints, management, LOS
Procedia PDF Downloads 395
3936 Topic-to-Essay Generation with Event Element Constraints
Authors: Yufen Qin
Abstract:
Topic-to-essay generation is a challenging task in natural language processing, which aims to generate novel, diverse, and topic-related text based on user input. Previous research has overlooked the generation of articles under the constraints of event elements, resulting in issues such as incomplete event elements and logical inconsistencies in the generated results. To fill this gap, this paper proposes an event-constrained approach to topic-to-essay generation that enforces the completeness of event elements during the generation process. Additionally, a language model is employed to verify the logical consistency of the generated results. Experimental results demonstrate that the proposed model achieves a better BLEU-2 score and performs better than the baseline in terms of subjective evaluation on a real dataset, indicating its capability to generate higher-quality topic-related text.
Keywords: event element, language model, natural language processing, topic-to-essay generation
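BLEU-2 measures unigram and bigram overlap with a reference text. A typical computation with NLTK looks like the sketch below; the token sequences are invented placeholders, not items from the paper's dataset:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical generated essay and reference, tokenized into words.
reference = ["the", "storm", "destroyed", "the", "village", "overnight"]
generated = ["the", "storm", "destroyed", "a", "village"]

# BLEU-2: uniform weights over unigrams and bigrams, with smoothing
# because short generated texts often have zero higher-order matches.
score = sentence_bleu([reference], generated,
                      weights=(0.5, 0.5),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU-2 = {score:.3f}")
```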
Procedia PDF Downloads 234
3935 A Survey on Lossless Compression of Bayer Color Filter Array Images
Authors: Alina Trifan, António J. R. Neves
Abstract:
Although most digital cameras acquire images in a raw format, based on a color filter array (CFA) that arranges RGB color filters on a square grid of photosensors, most image compression techniques do not use the raw data; instead, they use the RGB result of an interpolation algorithm applied to the raw data. This approach is inefficient: by performing lossless compression of the raw data, followed by pixel interpolation, digital cameras could be more power-efficient and provide images with increased resolution, given that the interpolation step could be shifted to an external processing unit. In this paper, we conduct a survey on the use of lossless compression algorithms with raw Bayer images. Moreover, in order to reduce the effect of the transitions between colors, which increase the entropy of the raw Bayer image, we split the image into three new images corresponding to each channel (red, green, and blue) and study the same compression algorithms applied to each one individually. This simple pre-processing stage yields an improvement of more than 15% in prediction-based methods.
Keywords: Bayer image, CFA, lossless compression, image coding standards
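The channel-splitting pre-processing stage can be sketched as follows, assuming an RGGB mosaic layout (actual layouts vary by sensor) and an arbitrary convention for merging the two green planes:

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split a single-channel Bayer mosaic (RGGB layout assumed) into
    per-channel sub-images so each can be compressed independently,
    avoiding the entropy added by color transitions in the mosaic."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    g  = np.concatenate([g1, g2], axis=0)  # one convention for the two greens
    return r, g, b

raw = np.random.default_rng(0).integers(0, 4096, size=(8, 8), dtype=np.uint16)
r, g, b = split_bayer_rggb(raw)
print(r.shape, g.shape, b.shape)  # (4, 4) (8, 4) (4, 4)
```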
Procedia PDF Downloads 319
3934 The Processing of Implicit Stereotypes in Everyday Scene Perception
Authors: Magali Mari, Fabrice Clement
Abstract:
The present study investigated the influence of implicit stereotypes on adults' visual information processing, using an eye-tracking device. Implicit stereotyping is an automatic and implicit process; it happens relatively quickly, outside of awareness. In the presence of a member of a social group, a set of expectations about the characteristics of this social group appears automatically in people's minds. The study aimed to shed light on the cognitive processes involved in stereotyping and to further investigate the use of eye movements to measure implicit stereotypes. With an eye-tracking device, the eye movements of participants were analyzed while they viewed everyday scenes depicting women and men in congruent or incongruent gender-role activities (e.g., a woman ironing or a man ironing). The settings of these scenes had to be analyzed to infer the character's role. Participants also completed an implicit association test that combined the concept of gender with attributes of occupation (home/work), with reaction times measured to assess participants' implicit stereotypes about gender. The results showed that implicit stereotypes do influence people's visual attention; within a fraction of a second, the number of returns between stereotypical and counter-stereotypical scenes differed significantly, meaning that participants interpreted the scene as a whole before identifying the character: they predicted that, in such a situation, the character was supposed to be a woman or a man. The study also showed that eye movements can be used as a fast and reliable supplement to traditional implicit association tests for measuring implicit stereotypes. Altogether, this research provides further understanding of implicit stereotype processing as well as a natural method to study implicit stereotypes.
Keywords: eye-tracking, implicit stereotypes, social cognition, visual attention
Procedia PDF Downloads 157
3933 A Study on the Improvement of Mobile Device Call Buzz Noise Caused by Audio Frequency Ground Bounce
Authors: Jangje Park, So Young Kim
Abstract:
The market demand for audio quality in mobile devices continues to increase, and the audible buzz noise generated in time-division communication is a chronic problem that goes against this demand. In time-division communication, the RF power amplifier (RF PA) is driven at an audio-frequency cycle, which affects the audio signal in various ways. In this paper, we measured the ground bounce noise generated by the peak current flowing through the ground network of the RF PA at the audio frequency and confirmed that this noise is the cause of the audible buzz noise during a call. In addition, a grounding method for the microphone device that can improve the buzz noise is proposed. Considering that the level of the audio signal generated by the microphone device is -38 dBV at 94 dB sound pressure level (SPL), even ground bounce noise of several hundred microvolts will fall within the range of audible noise if it is induced into the audio amplifier. With the grounding method of the microphone device proposed in this paper, it was confirmed that the audible buzz noise power density at the RF PA driving frequency was improved by more than 5 dB under the conditions of the printed circuit board (PCB) used in the experiment. A fundamental improvement method was thus presented for the buzzing noise during a mobile phone call.
Keywords: audio frequency, buzz noise, ground bounce, microphone grounding
Procedia PDF Downloads 135
3932 Integrated Model for Enhancing Data Security Processing Time in Cloud Computing
Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali
Abstract:
Cloud computing has been an important and promising field in the recent decade. Cloud computing allows the sharing of resources, services, and information among people across the whole world. Although the advantages of using clouds are great, there are many risks in a cloud. Data security is the most important and critical problem of cloud computing. In this research, a new security model for cloud computing is proposed to ensure a secure communication system, hide information from other users, and save users' time. In this proposed model, the Blowfish encryption algorithm is used for exchanging information or data, and the SHA-2 cryptographic hash algorithm is used for data integrity. For the user authentication process, a simple username and password scheme is used; the password is protected with SHA-2 as a one-way hash. The proposed system shows an improvement in the processing time of uploading and downloading files on the cloud in secure form.
Keywords: cloud computing, data security, SaaS, PaaS, IaaS, Blowfish
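A minimal sketch of the model's building blocks, using PyCryptodome and hashlib; the key size, cipher mode, and message layout are illustrative assumptions, not the paper's exact configuration:

```python
import hashlib
from Crypto.Cipher import Blowfish
from Crypto.Util.Padding import pad, unpad

key = hashlib.sha256(b"shared secret").digest()[:16]  # 128-bit Blowfish key (assumed size)

# Confidentiality: Blowfish in CBC mode for the file contents.
data = b"file contents to upload"
enc = Blowfish.new(key, Blowfish.MODE_CBC)
ciphertext = enc.iv + enc.encrypt(pad(data, Blowfish.block_size))

# Integrity: a SHA-2 digest stored alongside the ciphertext.
digest = hashlib.sha256(data).hexdigest()

# Authentication: the password is stored only as its one-way SHA-2 hash.
stored_pw = hashlib.sha256(b"user password").hexdigest()

# Download path: decrypt and verify integrity.
dec = Blowfish.new(key, Blowfish.MODE_CBC, iv=ciphertext[:8])
plain = unpad(dec.decrypt(ciphertext[8:]), Blowfish.block_size)
assert hashlib.sha256(plain).hexdigest() == digest
```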
Procedia PDF Downloads 356
3931 Surveillance of Super-Extended Objects: Bimodal Approach
Authors: Andrey V. Timofeev, Dmitry Egorov
Abstract:
This paper describes an effective solution to the task of remote monitoring of super-extended objects (oil and gas pipelines, railways, national frontiers). The suggested solution is based on the principle of simultaneous monitoring of the seismoacoustic and optical/infrared physical fields. The principle of simultaneous monitoring of these fields is not new, but in contrast to known solutions, the suggested approach allows super-extended objects to be controlled at very limited operational cost. So-called C-OTDR (Coherent Optical Time Domain Reflectometer) systems are used to monitor the seismoacoustic field, and far-CCTV systems are used to monitor the optical/infrared field. Simultaneous processing of the data provided by both systems allows effective detection and classification of target activities that appear in the vicinity of the monitored objects. The results of practical usage have shown the high effectiveness of the suggested approach.
Keywords: C-OTDR monitoring system, bimodal processing, LPBoost, SVM
Procedia PDF Downloads 468
3930 An Experimental Investigation of Air Entrainment Due to Water Jets in Crossflows
Authors: Mina Esmi Jahromi, Mehdi Khiadani
Abstract:
Vertical water jets discharging into free-surface turbulent crossflows result in the ingress of a large amount of air into the body of water and form a region of two-phase air-water flow with a considerable interfacial area. This research presents an experimental study of the two-phase bubbly flow using an image processing technique. The air ingress and the trajectories of bubble swarms under different experimental conditions are evaluated. The rate of air entrainment and the bubble characteristics, such as penetration depth and dispersion pattern, were found to be affected by the most influential parameters of the water jet and crossflow, including the jet-to-crossflow velocity ratio, the jet falling height, and the crossflow depth. This research improves understanding of the underwater flow structure due to water jet impingement in crossflow and advances practical applications of water jets, such as artificial aeration, circulation, and mixing, where crossflow is present.
Keywords: air entrainment, image processing, jet in cross flow, two-phase flow
Procedia PDF Downloads 367
3929 COVID-19 Genomic Analysis and Complete Evaluation
Authors: Narin Salehiyan, Ramin Ghasemi Shayan
Abstract:
In order to investigate coronavirus RNA replication, transcription, recombination, protein processing and transport, virion assembly, the identification of coronavirus-specific cell receptors, and polymerase processing, the manipulation of coronavirus clones and complementary DNAs (cDNAs) of defective-interfering (DI) RNAs is the subject of this chapter. The coronavirus genome is a nonsegmented, single-stranded, positive-sense RNA. Compared to other RNA viruses, its size is significantly greater, ranging from 27 to 32 kb. The gene encoding the large surface glycoprotein is up to 4.4 kb, encoding an imposing trimeric, highly glycosylated protein. This protein extends some 20 nm above the virion envelope, giving the virus the appearance, with a little imagination, of a crown or coronet. Coronavirus research has contributed to the understanding of many aspects of molecular biology in general, such as the mechanism of RNA synthesis, translational control, and protein transport and processing. It remains a treasure capable of producing unexpected insights.
Keywords: covid-19, corona, virus, genome, genetic
Procedia PDF Downloads 70
3928 Newly Designed Ecological Task to Assess Cognitive Map Reading Ability: Behavioral Neuro-Anatomic Correlates of Mental Navigation
Authors: Igor Faulmann, Arnaud Saj, Roland Maurer
Abstract:
Spatial cognition consists of a plethora of high-level cognitive abilities; among them, the ability to learn and to navigate in large-scale environments is probably one of the most complex skills. Navigation is thought to rely on the ability to read a cognitive map, defined as an allocentric representation of one's environment. These representations are, of course, intimately related to the two geometrical primitives of the environment: distance and direction. Also, many recent studies point to a predominant hippocampal and parahippocampal role in spatial cognition, as well as in the more specific cluster of navigational skills. In a previous study in humans, we used a newly validated test assessing cognitive map processing by evaluating the ability to judge relative distances and directions: the Cognitive Map Recall Test (CMRT). This study identified, in topographically disoriented patients, (1) behavioral differences between the evaluation of distances and of directions, and (2) distinct causality patterns assessed via VLSM, i.e., distinct cerebral lesions cause distinct response patterns depending on the modality (distance vs. direction questions). Thus, we hypothesized that (1) if the CMRT really taps into the same resources as real navigation, there would be hippocampal, parahippocampal, and parietal activation, and (2) there exist underlying neuroanatomical and functional differences between the processing of the two modalities. Aiming toward a better understanding of the neuroanatomical correlates of the CMRT in humans, and more generally of how the brain processes the cognitive map, we adapted the CMRT as an fMRI procedure. Twenty-three healthy subjects (11 women, 12 men), all living in Geneva for at least 2 years, underwent the CMRT in fMRI. Results show, for distance and direction taken together, that the most active brain regions are the parietal, frontal, and cerebellar areas. Additionally, and as expected, patterns of brain activation differ between the two modalities. Distance processing seems to rely more on parietal regions (compared to other brain regions in the same modality and also compared to direction); it is interesting to note that no significant activity was observed in the hippocampal or parahippocampal areas for this modality. Direction processing seems to tap more into frontal and cerebellar brain regions (compared to other brain regions in the same modality and also compared to distance), and significant hippocampal and parahippocampal activity was shown only in this modality. These results demonstrate a complex interaction of structures that is compatible with response patterns observed in other navigational tasks, thus showing that the CMRT taps at least partially into the same brain resources as real navigation. Additionally, the differences between the processing of distances and directions lead to the conclusion that the human brain processes each modality distinctly. Further research should focus on the dynamics of this processing, allowing a clearer understanding of the two sub-processes.
Keywords: cognitive map, navigation, fMRI, spatial cognition
Procedia PDF Downloads 294
3927 A Concept in Addressing the Singularity of the Emerging Universe
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times has been studied; this is known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity that cannot be explained by modern physics, where the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion; yet, highly accurate measurements reveal a uniform temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allowed a uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a 'neutral state', with an energy level referred to as 'base energy', capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture, the origin of the base energy should also be identified. This matter is the subject of the first study in the series, 'A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing', where it is discussed in detail. Therefore, the proposed concept in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy being one of the main building blocks of this universe.
Keywords: big bang, cosmic inflation, birth of universe, energy creation
Procedia PDF Downloads 88
3926 Preparation of Silver and Silver-Gold, Universal and Repeatable, Surface Enhanced Raman Spectroscopy Platforms from SERSitive
Authors: Pawel Albrycht, Monika Ksiezopolska-Gocalska, Robert Holyst
Abstract:
Surface Enhanced Raman Spectroscopy (SERS) is a technique of growing importance, not only in purely scientific research related to analytical chemistry. It finds more and more applications in broadly understood testing (medical, forensic, pharmaceutical, food) and works perfectly everywhere, on one condition: that the SERS substrates used for testing give adequate enhancement, repeatability, and homogeneity of the SERS signal. This is a problem that has existed since the invention of the technique. Some laboratories use colloids with silver or gold nanoparticles as SERS amplifiers; others form rough silver or gold surfaces, but the results are generally either weak or unrepeatable. Furthermore, these structures are very often highly specific: they amplify the signal of only a small group of compounds. This means that they work with some kinds of analytes, but only those which were used in the developer's laboratory. For research on different compounds, completely new SERS substrates are required. That underlay our decision to develop universal substrates for SERS spectroscopy. Generally, each compound has a different affinity for silver and gold, which have the best SERS properties, and this affinity determines the signal we get in the SERS spectrum. Our task was to create a platform that gives a characteristic 'fingerprint' of the largest number of compounds with very high repeatability, even at the expense of the enhancement factor (EF) intensity, since the possibility of repeating research results is of the utmost importance. The SERS substrates specified above are offered by the SERSitive company. The applied method is based on cyclic potentiodynamic electrodeposition of silver or silver-gold nanoparticles on the conductive surface of ITO-coated glass at a controlled temperature of the reaction solution. Silver nanoparticles are supplied in the form of silver nitrate (AgNO₃, 10 mM), gold nanoparticles are derived from tetrachloroauric acid (10 mM), while sodium sulfite (Na₂SO₃, 5 mM) is used as the reducing agent. To limit and standardize the size of the SERS surface on which the nanoparticles are deposited, photolithography is used: we mask the desired ITO-coated glass surface and then etch the unprotected ITO layer, which prevents nanoparticles from settling at these sites. On the prepared surface, we carry out the process described above, obtaining a SERS surface with nanoparticles of 50-400 nm in size. The SERSitive platforms present high sensitivity (EF = 10⁵-10⁶), homogeneity, and repeatability (70-80%).
Keywords: electrodeposition, nanoparticles, Raman spectroscopy, SERS, SERSitive, SERS platforms, SERS substrates
Procedia PDF Downloads 154
3925 Optimizing Quantum Machine Learning with Amplitude and Phase Encoding Techniques
Authors: Om Viroje
Abstract:
Quantum machine learning represents a frontier in computational technology, promising significant advancements in data processing capabilities. This study explores the significance of data encoding techniques, specifically amplitude and phase encoding, in this emerging field. By employing a comparative analysis methodology, the research evaluates how these encoding techniques affect the accuracy, efficiency, and noise resilience of quantum algorithms. Our findings reveal that amplitude encoding enhances algorithmic accuracy and noise tolerance, whereas phase encoding significantly boosts computational efficiency. These insights are crucial for developing robust quantum frameworks that can be effectively applied in real-world scenarios. In conclusion, optimizing encoding strategies is essential for advancing quantum machine learning, potentially transforming various industries through improved data processing and analysis.
Keywords: quantum machine learning, data encoding, amplitude encoding, phase encoding, noise resilience
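The two encodings can be stated concisely in state-vector form. The sketch below is a plain NumPy illustration rather than any particular quantum framework: it normalizes a data vector into amplitudes and, alternatively, writes features into phases:

```python
import numpy as np

def amplitude_encode(x):
    """Map a length-2^n data vector onto the amplitudes of an n-qubit
    state: |psi> = sum_i (x_i / ||x||) |i>."""
    x = np.asarray(x, dtype=complex)
    n = int(np.log2(len(x)))
    assert len(x) == 2 ** n, "length must be a power of two (pad if not)"
    return x / np.linalg.norm(x)

def phase_encode(x):
    """Map features into the phases of an equal-superposition state:
    amplitudes (1/sqrt(N)) * exp(i * x_i)."""
    x = np.asarray(x, dtype=float)
    return np.exp(1j * x) / np.sqrt(len(x))

state_a = amplitude_encode([0.5, 1.0, 0.25, 2.0])
state_p = phase_encode([0.1, 0.7, 1.3, 2.1])
print(np.linalg.norm(state_a), np.linalg.norm(state_p))  # both 1.0
```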
Procedia PDF Downloads 9
3924 A Performance Comparison between Conventional and Flexible Box Erecting Machines Using Dispatching Rules
Authors: Min Kyu Kim, Eun Young Lee, Dong Woo Son, Yoon Seok Chang
Abstract:
In this paper, we introduce a flexible box erecting machine (BEM) that swiftly and automatically transforms cardboard into a three-dimensional box. Recently, the parcel service and home-shopping industries have grown rapidly, and there is an increasing need for various box types to ship various products. However, workers cannot fold thousands of boxes manually in a day. As such, automatic BEMs are garnering greater attention. This study takes equipment operation into consideration, as well as mechanical improvements, in order to design a BEM that is able to outperform its conventional counterparts. We analyzed six dispatching rules – First In First Out (FIFO), Shortest Processing Time (SPT), Earliest Due Date (EDD), Setup Avoidance, EDD + SPT, and EDD + Setup Avoidance – to determine which one was most suitable for BEM operation. Consequently, SPT and Setup Avoidance were found to be the most critical rules, followed by EDD + Setup Avoidance, EDD + SPT, EDD, and FIFO. This hierarchy was valid for both our conventional BEM and our new flexible BEM from the viewpoint of processing time. We believe that this research can contribute to flexible BEM management, which has the potential to increase productivity and convenience.
Keywords: automation, box erecting machine, dispatching rule, setup time
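A minimal sketch of how such dispatching rules can be compared on a single machine; the jobs, the setup penalty, and the flow-time objective are illustrative assumptions, not the paper's simulation:

```python
from dataclasses import dataclass

@dataclass
class Job:
    box_type: str
    processing_time: float
    due_date: float

def schedule(jobs, rule):
    """Order jobs for a single BEM under a dispatching rule.  Setup
    Avoidance is approximated by grouping identical box types so the
    machine changes its tooling as rarely as possible."""
    if rule == "FIFO":
        return list(jobs)
    if rule == "SPT":
        return sorted(jobs, key=lambda j: j.processing_time)
    if rule == "EDD":
        return sorted(jobs, key=lambda j: j.due_date)
    if rule == "SETUP_AVOIDANCE":
        return sorted(jobs, key=lambda j: j.box_type)
    raise ValueError(rule)

def total_flow_time(order, setup_time=2.0):
    """Sum of completion times; a setup penalty is paid on every
    box-type change (an assumed parameter, not from the paper)."""
    t, total, prev = 0.0, 0.0, None
    for job in order:
        if prev is not None and job.box_type != prev:
            t += setup_time
        t += job.processing_time
        total += t
        prev = job.box_type
    return total

jobs = [Job("A", 4, 10), Job("B", 1, 5), Job("A", 2, 8), Job("B", 3, 4)]
for rule in ["FIFO", "SPT", "EDD", "SETUP_AVOIDANCE"]:
    print(rule, total_flow_time(schedule(jobs, rule)))
```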
Procedia PDF Downloads 361
3923 Smart Oxygen Deprivation Mask: An Improved Design with Biometric Feedback
Authors: Kevin V. Bui, Richard A. Claytor, Elizabeth M. Priolo, Weihui Li
Abstract:
Oxygen deprivation masks operate through the use of restricting valves to reduce respiratory flow, where flow is inversely proportional to the applied resistance. This produces the same effect as higher altitudes, where lower pressure leads to reduced respiratory flow. Both the increased resistance of restricting valves and the reduced pressure at higher altitudes make breathing more difficult and force the breathing muscles (diaphragm and intercostal muscles) to work harder. The process exercises these muscles, improves their strength, and results in overall better breathing efficiency. Currently, these oxygen deprivation masks are purely mechanical devices without any electronic sensor to monitor the breathing condition; thus, they are not able to provide feedback on breathing effort or to evaluate lung function. That is part of the reason these masks are mainly used by high-level athletes to mimic training in higher-altitude conditions and are not suitable for patients or ordinary customers. The design aims to improve on the current oxygen deprivation mask to include a larger scope of patients and customers while providing the quantitative biometric data that the current design lacks. This will be accomplished by integrating sensors into the mask's breathing valves along with data acquisition and Bluetooth modules for signal processing and transmission. Early stages of the sensor mask will measure breathing rate as a function of the changing air pressure in the mask, with later iterations providing feedback on flow rate. Data regarding breathing rate will be valuable in determining whether training or therapy is improving breathing function and in quantifying this improvement.
Keywords: oxygen deprivation mask, lung function, spirometer, Bluetooth
Procedia PDF Downloads 217
3922 An Energy-Efficient Model of Integrating Telehealth IoT Devices with Fog and Cloud Computing-Based Platform
Authors: Yunyong Guo, Sudhakar Ganti, Bryan Guo
Abstract:
The rapid growth of telehealth Internet of Things (IoT) devices has raised concerns about energy consumption and efficient data processing. This paper introduces an energy-efficient model that integrates telehealth IoT devices with a fog and cloud computing-based platform, offering a sustainable and robust solution to overcome these challenges. Our model employs fog computing as a localized data processing layer while leveraging cloud computing for resource-intensive tasks, significantly reducing energy consumption. We incorporate adaptive energy-saving strategies. Simulation analysis validates our approach's effectiveness in enhancing energy efficiency for telehealth IoT systems integrated with localized fog nodes and both private and public cloud infrastructures. Future research will focus on further optimization of the energy-saving model, exploring additional functional enhancements, and assessing its broader applicability in other healthcare and industry sectors.
Keywords: energy-efficient, fog computing, IoT, telehealth
Procedia PDF Downloads 86
3921 Towards a Complete Automation Feature Recognition System for Sheet Metal Manufacturing
Authors: Bahaa Eltahawy, Mikko Ylihärsilä, Reino Virrankoski, Esko Petäjä
Abstract:
Sheet metal processing is automated, but the step from product models to production machine control still requires human intervention. This may cause time-consuming bottlenecks in the production process and increase the risk of human error. In this paper, we present a system which automatically recognizes features from the CAD model of the sheet metal product. By using these features, the system produces a complete model of the particular sheet metal product, and the model is then used as input for the sheet metal processing machine. The system is currently implemented, capable of recognizing more than 11 of the most common sheet metal structural features, and the procedure is fully automated. This provides remarkable savings in production time and protects against human error. This paper presents the developed system architecture, the applied algorithms, and the system software implementation and testing.
Keywords: feature recognition, automation, sheet metal manufacturing, CAD, CAM
Procedia PDF Downloads 353
3920 Marker-Controlled Level-Set for Segmenting Breast Tumor from Thermal Images
Authors: Swathi Gopakumar, Sruthi Krishna, Shivasubramani Krishnamoorthy
Abstract:
Contactless, painless, and radiation-free thermal imaging technology is one of the preferred screening modalities for the detection of breast cancer. However, the poor signal-to-noise ratio and the inexorable need to preserve the edges defining cancerous and normal cells make the segmentation process difficult and hence unsuitable for computer-aided diagnosis of breast cancer. This paper presents key findings from research conducted on the appraisal of two promising techniques for the detection of breast cancer: (I) marker-controlled level-set segmentation of an anisotropic-diffusion-filtered preprocessed image versus (II) marker-controlled level-set segmentation of a Gaussian-filtered image. Gaussian filtering processes the image uniformly, whereas anisotropic filtering processes only specific areas of the thermographic image. The preprocessed (Gaussian-filtered and anisotropic-filtered) images of breast samples were then used for segmentation. Segmentation of the breast starts with an initial level-set function. In this study, a marker refers to the position in the image to which the initial level-set function is applied. The markers are generally placed on the left and right sides of the breast and may vary with breast size. The proposed method was carried out on images from an online database with samples collected from women of varying breast characteristics. It was observed that the breast could be segmented out from the background by adjustment of the markers. From the results, it was observed that, as a pre-processing technique, anisotropic filtering with level-set segmentation preserved the edges more effectively than Gaussian filtering. The image segmented after anisotropic filtering was found to be more suitable for feature extraction, enabling automated computer-aided diagnosis of breast cancer.
Keywords: anisotropic diffusion, breast, Gaussian, level-set, thermograms
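Anisotropic diffusion is commonly implemented in the Perona-Malik form. The sketch below is a generic textbook version rather than the authors' pipeline; the iteration count and the edge-sensitivity parameter kappa are illustrative assumptions:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
    """Perona-Malik diffusion: smooths homogeneous regions while
    preserving edges, unlike a Gaussian blur which smooths uniformly.
    kappa controls edge sensitivity; lam is the step size (<= 0.25
    for stability with four neighbors)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbors (wrapping edges).
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u,  1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u,  1, axis=1) - u
        # Conduction coefficients shrink across strong edges.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

thermogram = np.random.default_rng(0).normal(size=(64, 64))  # stand-in image
smoothed = anisotropic_diffusion(thermogram)
```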
Procedia PDF Downloads 378
3919 Lexical-Semantic Processing by Chinese as a Second Language Learners
Authors: Yi-Hsiu Lai
Abstract:
The present study aimed to elucidate lexical-semantic processing in learners of Chinese as a second language (CSL). Twenty L1 speakers of Chinese and twenty CSL learners in Taiwan participated in a picture naming task and a category fluency task. Based on their Chinese proficiency levels, the CSL learners were further divided into two sub-groups: ten CSL learners at the elementary Chinese proficiency level and ten at the intermediate level. The instruments for the naming task were sixty black-and-white pictures: thirty-five object pictures and twenty-five action pictures. Object pictures were divided into two categories, living and non-living objects, and action pictures were composed of two categories, action verbs and process verbs. As in the naming task, the category fluency task consisted of two semantic categories: objects (i.e., living and non-living objects) and actions (i.e., action and process verbs). Participants were asked to report as many items within a category as possible in one minute. Oral productions were tape-recorded and transcribed for further analysis. Both error types and error frequencies were calculated, and statistical analysis was conducted to examine the error types and frequencies produced by the CSL learners. Additionally, category effects, pictorial effects, and L2 proficiency are discussed. The findings of the present study help characterize the lexical-semantic processing of Chinese naming in CSL learners of different Chinese proficiency levels and contribute to Chinese vocabulary teaching and learning in the future.
Keywords: lexical-semantic processing, Mandarin Chinese, naming, category effects
Procedia PDF Downloads 460
3918 Composite Kernels for Public Emotion Recognition from Twitter
Authors: Chien-Hung Chen, Yan-Chun Hsing, Yung-Chun Chang
Abstract:
The Internet has grown into a powerful medium for information dispersion and social interaction, leading to the rapid growth of social media, which allows users to easily post their emotions and perspectives regarding certain topics online. Our research aims to use natural language processing and text mining techniques to explore the public emotions expressed on Twitter by analyzing the sentiment behind tweets. In this paper, we propose a composite kernel method that integrates a tree kernel with a linear kernel to simultaneously exploit both the tree representation and the distributed emotion keyword representation, thereby analyzing the syntactic and content information in tweets. The experimental results demonstrate that our method can effectively detect the public emotion of tweets while outperforming the other compared methods.
Keywords: emotion recognition, natural language processing, composite kernel, sentiment analysis, text mining
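With precomputed Gram matrices, a composite kernel is a weighted sum that can be fed directly to an SVM. The sketch below uses stand-in matrices (a real tree kernel would come from an external parser-kernel tool) and an assumed mixing weight:

```python
import numpy as np
from sklearn.svm import SVC

# K_tree stands in for a precomputed tree-kernel Gram matrix over parsed
# tweets; X stands in for distributed emotion-keyword features.  Both
# are synthetic placeholders, not the paper's data.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 16))
L = rng.standard_normal((40, 8))
K_tree = L @ L.T                         # positive semidefinite stand-in
K_linear = X @ X.T
y = rng.integers(0, 2, size=40)

alpha = 0.6                              # mixing weight is a tunable assumption
K_composite = alpha * K_tree + (1 - alpha) * K_linear

clf = SVC(kernel="precomputed").fit(K_composite, y)
# Prediction uses test-vs-train Gram rows; here the first five training rows.
print(clf.predict(K_composite[:5]))
```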
Procedia PDF Downloads 216
3917 Risk-Based Regulation as a Model of Control in the South African Meat Industry
Authors: R. Govender, T. C. Katsande, E. Madoroba, N. M. Thiebaut, D. Naidoo
Abstract:
South African control over meat safety is managed by the Department of Agriculture, Forestry and Fisheries (DAFF). The veterinary services department in each of the nine provinces in the country is tasked with overseeing the farm and abattoir segments of the meat supply chain. Abattoirs are privately owned, and their number has increased over the years. This increase has placed constraints on the government resources required to monitor these abattoirs. This paper presents empirical research results on the hygienic processing of meat in high- and low-throughput abattoirs, and makes a case for the adoption of risk-based regulation as a method of government control over hygienic and safe meat processing at abattoirs in South Africa. Recommendations are made to the DAFF regarding policy considerations on risk-based regulation as a model of control in South Africa.
Keywords: risk-based regulation, abattoir, food control, meat safety
Procedia PDF Downloads 311
3916 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos
Authors: Nassima Noufail, Sara Bouhali
Abstract:
In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the k-means algorithm; the goal is to find groups of frames based on similarity within the video. Applying k-means clustering to all frames is time-consuming; therefore, we start by identifying transition frames, where the scene in the video changes significantly, and then apply k-means clustering to these transition frames. We use two image filters, the Gaussian filter and the Laplacian of Gaussian. Each filter extracts a set of features from the frames: the Gaussian filter blurs the image and omits the higher frequencies, while the Laplacian of Gaussian detects regions of rapid intensity change. We then use this vector of filter responses as input to the k-means algorithm. The output is a set of cluster centers; each video frame pixel is mapped to the nearest cluster center and painted with a corresponding color to form a visual map, in which similar pixels are grouped. We then compute a cluster score indicating how near the clusters are to each other and plot a signal representing frame number vs. clustering score. Our hypothesis is that the evolution of this signal does not change while semantically related events are happening in the scene. We mark the breakpoints at which the root-mean-square level of the signal changes significantly; each breakpoint indicates the beginning of a new video segment. In the second part, for each segment from part one, we randomly select a 16-frame clip and extract spatiotemporal features from every 16 frames using the pre-trained convolutional 3D network C3D. The final C3D output is a 512-dimensional feature vector; hence, we use principal component analysis (PCA) for dimensionality reduction. The final part is classification: the C3D feature vectors are used as input to train a multi-class linear support vector machine (SVM), which is then used to detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and achieved an accuracy that outperforms the state of the art by 1.2%.
Keywords: video segmentation, action detection, classification, Kmeans, C3D
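A minimal sketch of the first stage, clustering per-pixel Gaussian and Laplacian-of-Gaussian filter responses of a transition frame into a visual map; the filter scales and cluster count are assumed values, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans

def frame_filter_features(frame, sigmas=(1.0, 2.0)):
    """Per-pixel responses of Gaussian and Laplacian-of-Gaussian filters,
    stacked into one feature vector per pixel."""
    feats = [gaussian_filter(frame, s) for s in sigmas]
    feats += [gaussian_laplace(frame, s) for s in sigmas]
    return np.stack([f.ravel() for f in feats], axis=1)

def visual_map(frame, k=4):
    """Cluster the pixels of a transition frame by filter response and
    paint each pixel with its cluster index."""
    feats = frame_filter_features(frame.astype(float))
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    return labels.reshape(frame.shape)

frame = np.random.default_rng(0).random((48, 64))  # stand-in grayscale frame
print(visual_map(frame).shape)  # (48, 64)
```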
Procedia PDF Downloads 77
3915 Local Image Features Emerging from Brain Inspired Multi-Layer Neural Network
Authors: Hui Wei, Zheng Dong
Abstract:
Object recognition has long been a challenging task in computer vision. Yet the human brain, with the ability to rapidly and accurately recognize visual stimuli, manages this task effortlessly. In the past decades, advances in neuroscience have revealed some neural mechanisms underlying visual processing. In this paper, we present a novel model inspired by the visual pathway in primate brains. This multi-layer neural network model imitates the hierarchical convergent processing mechanism in the visual pathway. We show that local image features generated by this model exhibit robust discrimination and even better generalization ability compared with some existing image descriptors. We also demonstrate the application of this model in an object recognition task on image data sets. The result provides strong support for the potential of this model.
Keywords: biological model, feature extraction, multi-layer neural network, object recognition
Procedia PDF Downloads 540
3914 A Comparative Analysis of Hyper-Parameters Using Neural Networks for E-Mail Spam Detection
Authors: Syed Mahbubuz Zaman, A. B. M. Abrar Haque, Mehedi Hassan Nayeem, Misbah Uddin Sagor
Abstract:
Every day, e-mails are used by millions of people as an effective form of communication over the Internet. Although e-mail allows high-speed communication, there is a constant threat known as spam. Spam e-mails, often called junk e-mails, are unsolicited and sent in bulk. These unsolicited e-mails cause security concerns among internet users because users are exposed to inappropriate content. There is no guaranteed way to stop spammers, who bypass static filters very easily. In this paper, a smart system is proposed that uses neural networks to approach spam in a different way while also detecting the most relevant features that help to design the spam filter. A comparison of different hyper-parameters for different neural network models is also shown to determine which model works best within suitable parameters.
Keywords: long short-term memory, bidirectional long short-term memory, gated recurrent unit, natural language processing
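A minimal Keras sketch of one of the compared recurrent models; the vocabulary size, sequence length, and layer widths are illustrative assumptions, not the tuned hyper-parameters from the paper:

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, LSTM, GRU, Bidirectional, Dense

VOCAB, MAXLEN = 10000, 100

model = Sequential([
    Embedding(VOCAB, 64),
    Bidirectional(LSTM(32)),          # swap for LSTM(32) or GRU(32) to compare
    Dense(1, activation="sigmoid"),   # spam vs. ham
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy stand-in data: integer-encoded token sequences and labels.
X = np.random.randint(1, VOCAB, size=(200, MAXLEN))
y = np.random.randint(0, 2, size=(200,))
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```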
Procedia PDF Downloads 204