Search results for: image and signal processing
5626 A Guide to the Implementation of Ambisonics Super Stereo
Authors: Alessio Mastrorillo, Giuseppe Silvi, Francesco Scagliola
Abstract:
In this work, we introduce an Ambisonics decoder with an implementation of the C-format, also called Super Stereo. This format is an alternative to conventional stereo and binaural decoding. Unlike those, it conveys audio information from the horizontal plane and works with stereo speakers and headphones. The two C-format channels can also return a reconstructed planar B-format. This work provides an open-source implementation for this format. We implement an all-pass filter for signal quadrature, as required by the decoding equations. This filter consists of six biquads in a cascade configuration, with values for control frequency and quality factor found experimentally. The phase response of the filter exhibits a small error in the 20 Hz to 14,000 Hz range. The decoder has been tested with audio sources at sample rates up to 192 kHz, returning pristine sound quality and a detailed stereo image. It has been included in the Envelop for Live suite and is available as an open-source repository. This decoder has applications in Virtual Reality and 360° audio productions, music composition, and online streaming.
Keywords: ambisonics, UHJ, quadrature filter, virtual reality, Gerzon, decoder, stereo, binaural, biquad
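For illustration, a minimal sketch of an all-pass network built from cascaded biquad sections, as described in the abstract, is given below; the six (frequency, Q) pairs are placeholder assumptions, not the experimentally derived values used by this decoder.

```python
import numpy as np
from scipy.signal import sosfilt

def allpass_biquad_sos(f0, q, fs):
    """Second-order all-pass section (RBJ audio-EQ-cookbook form) as one SOS row."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    cosw0 = np.cos(w0)
    a0 = 1 + alpha
    # [b0, b1, b2, a0, a1, a2], normalized by a0
    return [(1 - alpha) / a0, -2 * cosw0 / a0, (1 + alpha) / a0,
            1.0, -2 * cosw0 / a0, (1 - alpha) / a0]

def quadrature_allpass(x, fs, stages):
    """Cascade of all-pass biquads approximating the phase-shift branch of a quadrature network."""
    sos = np.asarray([allpass_biquad_sos(f0, q, fs) for f0, q in stages])
    return sosfilt(sos, x)

# Six illustrative (frequency in Hz, quality factor) pairs; placeholder values only.
stages = [(30, 0.6), (120, 0.6), (480, 0.6), (1900, 0.6), (7500, 0.6), (15000, 0.6)]
fs = 48000
x = np.random.randn(fs)              # stand-in for one decoded channel
x_shifted = quadrature_allpass(x, fs, stages)
```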
Procedia PDF Downloads 91
5625 Video Processing of a Football Game: Detecting Features of a Football Match for Automated Calculation of Statistics
Authors: Rishabh Beri, Sahil Shah
Abstract:
We applied a range of filters and processing steps in order to extract the various features of the football game, such as the field lines of the football field. Another important aspect was the detection of the players on the field and tagging them according to their teams, distinguished by their jersey colours. This extracted information about the players and the field allowed us to create a virtual field consisting of the playing area and the players mapped to their locations on it.
Keywords: Detect, Football, Players, Virtual
Procedia PDF Downloads 331
5624 Design and Assessment of Traffic Management Strategies for Improved Mobility on Major Arterial Roads in Lahore City
Authors: N. Ali, S. Nakayama, H. Yamaguchi, M. Nadeem
Abstract:
Traffic congestion is a matter of prime concern in developing countries. It can be primarily attributed to poor design practices and the biased allocation of resources based on political will, neglecting technical feasibility in infrastructure design. During the last decade, Lahore has expanded at an unprecedented rate compared to surrounding cities due to greater funding and resource allocation by previous governments. As a result, people from surrounding cities and areas moved to Lahore for better opportunities and quality of life. This migration inflow increased the city's population beyond what the existing infrastructure could accommodate, leading to traffic congestion on the major arterial roads of the city. In this simulation study, a major arterial road was selected to evaluate the performance of five intersections by changing the geometry of the intersections or the signal control type. Simulations were carried out in two software packages: Highway Capacity Software (HCS), and Synchro Studio with SimTraffic. The traffic management strategies employed include actuated signal control, semi-actuated signal control, fixed-time signal control, and roundabouts. For each intersection, the most feasible of these strategies was selected based on the lowest delay time (in seconds) and the improved Level of Service (LOS). The results showed that the Jinnah Hospital and Akbar Chowk intersections achieved delay-time reductions of 92.97% and 92.67%, respectively. These results can be used by traffic planners and policy makers when deciding on the expansion of these intersections, keeping future traffic demand in mind.
Keywords: traffic congestion, traffic simulation, traffic management, congestion problems
Procedia PDF Downloads 470
5623 LTE Performance Analysis in the City of Bogota Northern Zone for Two Different Mobile Broadband Operators over Qualipoc
Authors: Víctor D. Rodríguez, Edith P. Estupiñán, Juan C. Martínez
Abstract:
The evolution of mobile broadband technologies has allowed user download rates to increase for current services. Evaluating technical parameters at the link level is of vital importance to validate the quality and reliability of the connection, thus avoiding large losses of data, time, and productivity. Some of these failures may occur between the eNodeB (Evolved Node B) and the user equipment (UE), so the link between the end device and the base station must be observed. LTE (Long Term Evolution) is considered one of the IP-oriented mobile broadband technologies that works stably for data and for VoIP (Voice over IP) on devices that support it. This research presents a technical analysis of the connection and channeling processes between the UE and the eNodeB using the TAC (Tracking Area Code) variables, together with an analysis of performance variables (throughput, Signal to Interference and Noise Ratio (SINR)). Three measurement scenarios were proposed in the city of Bogotá using QualiPoc, in which two operators were evaluated (Operator 1 and Operator 2). Once the data were obtained, an analysis of the variables was performed, determining that the data obtained in the transmission modes vary depending on the BLER (Block Error Rate), throughput, and SNR (Signal-to-Noise Ratio) parameters. For both operators, differences in transmission modes were detected, and this is reflected in the quality of the signal. In addition, because the two operators work at different frequencies, it can be seen that Operator 1, despite having spectrum in Band 7 (2600 MHz) like Operator 2, is being reassigned to a lower frequency band, AWS (1700 MHz). The difference in signal quality relative to the data connection established by Operator 2, and the difference in the transmission modes determined by the eNodeB for Operator 1, are remarkable.
Keywords: BLER, LTE, network, QualiPoc, SNR
Procedia PDF Downloads 115
5622 Multicasting Characteristics of All-Optical Triode Based on Negative Feedback Semiconductor Optical Amplifiers
Authors: S. Aisyah Azizan, M. Syafiq Azmi, Yuki Harada, Yoshinobu Maeda, Takaomi Matsutani
Abstract:
We introduce the all-optical multicasting characteristics, with wavelength conversion, of a novel all-optical triode based on a negative feedback semiconductor optical amplifier. The scheme was demonstrated at a transfer speed of 10 Gb/s with a non-return-to-zero 2³¹−1 pseudorandom bit sequence. This multi-wavelength converter device can simultaneously provide three channels of output signal, supporting both non-inverted and inverted conversion. We found that all-optical multicasting and wavelength conversion via cross-gain modulation is effective in a semiconductor optical amplifier, which provides an inverted conversion and thus negative feedback. The relationship between the received power of the back-to-back signal and of the output signals at wavelengths of 1535 nm, 1540 nm, 1545 nm, 1550 nm, and 1555 nm and the bit error rate was investigated. The output signal wavelengths were successfully converted and modulated with a power penalty of less than 8.7 dB; the highest was 8.6 dB, while the lowest was 4.4 dB. These results show that all-optical multicasting and wavelength conversion using an optical triode with negative feedback on three channels simultaneously at 10 Gb/s is a promising device for new wavelength conversion technology.
Keywords: cross gain modulation, multicasting, negative feedback optical amplifier, semiconductor optical amplifier
Procedia PDF Downloads 684
5621 Donoho-Stark’s and Hardy’s Uncertainty Principles for the Short-Time Quaternion Offset Linear Canonical Transform
Authors: Mohammad Younus Bhat
Abstract:
The quaternion offset linear canonical transform (QOLCT), which is a time-shifted and frequency-modulated version of the quaternion linear canonical transform (QLCT), provides a more general framework for most existing signal processing tools. For the generalized QOLCT, the classical Heisenberg and Lieb uncertainty principles have been studied recently. In this paper, we first define the short-time quaternion offset linear canonical transform (ST-QOLCT) and derive its relationship with the quaternion Fourier transform (QFT). The crux of the paper lies in the generalization of several well-known uncertainty principles for the ST-QOLCT, including Donoho-Stark's uncertainty principle, Hardy's uncertainty principle, Beurling's uncertainty principle, and the logarithmic uncertainty principle.
Keywords: quaternion Fourier transform, quaternion offset linear canonical transform, short-time quaternion offset linear canonical transform, uncertainty principle
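For context, the classical Donoho-Stark uncertainty principle for the ordinary Fourier transform, the prototype of the statements the paper generalizes to the ST-QOLCT setting, can be stated as follows (standard formulation, not the paper's quaternionic version):

```latex
Let $f \in L^{2}(\mathbb{R})$ with $\|f\|_{2}=1$. Suppose $f$ is
$\varepsilon_{T}$-concentrated on a measurable set $T$ and $\widehat{f}$ is
$\varepsilon_{\Omega}$-concentrated on a measurable set $\Omega$, i.e.
\[
  \Big(\int_{\mathbb{R}\setminus T}\lvert f(t)\rvert^{2}\,dt\Big)^{1/2}\le \varepsilon_{T},
  \qquad
  \Big(\int_{\mathbb{R}\setminus \Omega}\lvert\widehat{f}(\xi)\rvert^{2}\,d\xi\Big)^{1/2}\le \varepsilon_{\Omega}.
\]
Then the two supports cannot both be small:
\[
  \lvert T\rvert\,\lvert\Omega\rvert \;\ge\; \bigl(1-\varepsilon_{T}-\varepsilon_{\Omega}\bigr)^{2}.
\]
```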
Procedia PDF Downloads 211
5620 Changing Body Ideals of Ethnically Diverse Gay and Heterosexual Men and the Proliferation of Social and Entertainment Media
Authors: Cristina Azocar, Ivana Markova
Abstract:
A survey of 565 male undergraduates examined the effects of exposure to social networking sites and entertainment media on young men's body image. Exposure to social and entertainment media was found to have negative effects on men's body satisfaction, social comparison, and thin-ideal internalization. Findings indicated significant differences between men who were more exposed to social and entertainment media and those who were less exposed. Consistent with past studies, gay men were found to be more dissatisfied with their bodies than straight men. Gay men compared themselves to other better-looking individuals and internalized ideal body types seen in media significantly more than their straight counterparts. Surprisingly, straight men seem to care as much about their physical attractiveness/appearance as gay men do, but only in public settings such as at the beach, at athletic events (including gyms), and at social events. Although on average the ethnic groups were more similar than different, small but significant differences occurred, with Asian men indicating significantly higher body dissatisfaction than their White/European and Middle Eastern/Arab counterparts. The study increases our knowledge about SNS and entertainment media use and its associated effects on body image and body satisfaction among low-income ethnic minority men.
Keywords: body dissatisfaction, body image, entertainment media, gay men, race and ethnicity, social economic status, social comparison, social media
Procedia PDF Downloads 133
5619 Relationship among Teams' Information Processing Capacity and Performance in Information System Projects: The Effects of Uncertainty and Equivocality
Authors: Ouafa Sakka, Henri Barki, Louise Cote
Abstract:
Uncertainty and equivocality are defined in the information processing literature as two task characteristics that require different information processing responses from managers. As uncertainty often stems from a lack of information, addressing it is thought to require the collection of additional data. On the other hand, as equivocality stems from ambiguity and a lack of understanding of the task at hand, addressing it is thought to require rich communication between those involved. Past research has provided weak to moderate empirical support for these hypotheses. The present study contributes to this literature by defining uncertainty and equivocality at the project level and investigating their moderating effects on the association between several project information processing constructs and project performance. The information processing constructs considered are the amount of information collected by the project team, and the richness and frequency of formal communications among team members to discuss the project's follow-up reports. Data from 93 information system development (ISD) project managers were collected via a questionnaire survey and analyzed using the Fisher test for correlation differences. The results indicate that the highest project performance levels were observed in projects characterized by high uncertainty and low equivocality, in which project managers were provided with detailed and updated information on project costs and schedules. In addition, our findings show that information about user needs and technical aspects of the project is less useful for managing projects where uncertainty and equivocality are high. Further, while the strongest positive effect of the interactive use of follow-up reports on performance occurred in projects where both uncertainty and equivocality levels were high, its weakest effect occurred when both were low.
Keywords: uncertainty, equivocality, information processing model, management control systems, project control, interactive use, diagnostic use, information system development
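As an illustration of the analysis method named above, a minimal sketch of Fisher's r-to-z test for the difference between two independent correlations follows; the sample correlations and group sizes are hypothetical, not values from the study.

```python
import numpy as np
from scipy import stats

def fisher_r_to_z_test(r1, n1, r2, n2):
    """Test whether two independent Pearson correlations differ (Fisher z-test)."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)          # Fisher r-to-z transform
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))    # standard error of z1 - z2
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))                    # two-tailed p-value
    return z, p

# Hypothetical example: correlation between amount of information collected and
# performance, in a high-uncertainty vs. a low-uncertainty subgroup of projects.
z, p = fisher_r_to_z_test(r1=0.55, n1=46, r2=0.20, n2=47)
print(f"z = {z:.2f}, p = {p:.3f}")
```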
Procedia PDF Downloads 294
5618 DeepNIC a Method to Transform Each Tabular Variable into an Independant Image Analyzable by Basic CNNs
Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.
Abstract:
Introduction: Deep learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL become the universal tool for data classification? Current solutions consist of repositioning the variables in a 2D matrix according to their correlation proximity, thereby obtaining an image whose pixels are the variables. We implement a technology, DeepNIC, that instead produces an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision trees, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in three dimensions: performance, complexity, and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 super-parameters used in the Neurops. By varying these 2 super-parameters, we obtain a 2D matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR; the total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels. The intensity of the pixels is proportional to the probability of the associated NIC, and the color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison across several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification
Procedia PDF Downloads 125
5617 Geomatic Techniques to Filter Vegetation from Point Clouds
Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades
Abstract:
More and more frequently, geomatics techniques such as terrestrial laser scanning or digital photogrammetry, either terrestrial or from drones, are used to obtain digital terrain models (DTM) for the monitoring of geological phenomena that cause natural disasters, such as landslides, rockfalls, and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes in slopes and hillsides, whether caused by erosion, fall, or land movement in the source area or by sedimentation in the deposition zone. To carry out this task, it is necessary to filter from the point clouds all those elements that do not belong to the slopes. Among these elements, vegetation stands out, as it has the greatest presence and changes constantly, both seasonally and daily, since it is affected by factors such as wind. One of the best-known indices to detect vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels and therefore requires a multispectral camera. These cameras generally have lower resolution than conventional RGB cameras, while their cost is much higher. We therefore have to look for alternative indices based on RGB. In this communication, we present the results obtained in the Georisk project (PID2019‐103974RB‐I00/MCIN/AEI/10.13039/501100011033) using the GLI (Green Leaf Index) and ExG (Excess Green), as well as the change to the Hue-Saturation-Value (HSV) color space, with the H coordinate being the one that gives the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filter, or to the one obtained by TLS (Terrestrial Laser Scanning). In this last case, we have also worked with a Riegl VZ400i sensor that allows the reception, as in aerial LiDAR, of several returns of the signal, information that can be used for classification of the point cloud. After applying all the techniques in different locations, the results show that the color-based filters allow correct filtering in those areas where the presence of shadows is not excessive and there is a contrast between the color of the slope lithology and the vegetation. As noted above, in the case of the HSV color space, it is the H coordinate that responds best for this filtering. Finally, the use of the several returns of the TLS signal allows filtering with some limitations.
Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud
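A minimal sketch of the RGB-based vegetation indices mentioned above (GLI and ExG) and an HSV hue mask is given below; the threshold values and hue range are illustrative assumptions, not those used in the Georisk project.

```python
import numpy as np
from skimage.color import rgb2hsv

def vegetation_masks(rgb, gli_thr=0.05, exg_thr=0.10, hue_range=(0.17, 0.45)):
    """Return boolean vegetation masks from an RGB image scaled to [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-6

    # Green Leaf Index: GLI = (2G - R - B) / (2G + R + B)
    gli = (2 * g - r - b) / (2 * g + r + b + eps)

    # Excess Green: ExG = 2g - r - b on chromatic (normalized) coordinates
    total = r + g + b + eps
    exg = 2 * (g / total) - (r / total) - (b / total)

    # Hue channel of HSV; green vegetation falls within a band of hue values
    hue = rgb2hsv(rgb)[..., 0]

    return (gli > gli_thr), (exg > exg_thr), \
           (hue > hue_range[0]) & (hue < hue_range[1])

# Illustrative use on a random stand-in image; masks would feed the SfM pipeline
image = np.random.rand(256, 256, 3)
gli_mask, exg_mask, hue_mask = vegetation_masks(image)
```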
Procedia PDF Downloads 154
5616 Analysis of Nonlinear and Non-Stationary Signal to Extract the Features Using Hilbert Huang Transform
Authors: A. N. Paithane, D. S. Bormane, S. D. Shirbahadurkar
Abstract:
Emotion recognition is an important research topic in the field of human-computer interfaces. A novel technique for feature extraction (FE) is presented here, and a new method based on the HHT has been used for human emotion recognition. This method is suitable for analyzing nonlinear and non-stationary signals. Each signal is decomposed into intrinsic mode functions (IMFs) using EMD, and these functions are used to extract features through fission and fusion processes. The decomposition technique we adopt is a new technique for adaptively decomposing signals. In this perspective, we report the potential usefulness of EMD-based techniques. We evaluated the algorithm on the Augsburg University database, a manually annotated database.
Keywords: intrinsic mode function (IMF), Hilbert-Huang transform (HHT), empirical mode decomposition (EMD), emotion detection, electrocardiogram (ECG)
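A minimal sketch of the HHT pipeline described above, assuming the PyEMD package for EMD and SciPy for the Hilbert transform; the synthetic input signal and the per-IMF features are illustrative, not those of the study.

```python
import numpy as np
from PyEMD import EMD              # pip install EMD-signal
from scipy.signal import hilbert

# Synthetic non-stationary test signal (stand-in for an ECG segment)
fs = 256
t = np.arange(0, 4, 1 / fs)
signal = np.sin(2 * np.pi * (1 + 2 * t) * t) + 0.3 * np.random.randn(t.size)

# 1. Empirical Mode Decomposition into intrinsic mode functions (IMFs)
imfs = EMD().emd(signal)

# 2. Hilbert transform of each IMF -> instantaneous amplitude and frequency
features = []
for imf in imfs:
    analytic = hilbert(imf)
    amp = np.abs(analytic)
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
    # Simple per-IMF features (illustrative): mean amplitude and mean frequency
    features.extend([amp.mean(), inst_freq.mean()])

print(len(imfs), "IMFs ->", len(features), "features")
```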
Procedia PDF Downloads 580
5615 Smart Surveillance with 5G: A Performance Study in Adama City
Authors: Shenko Chura Aredo, Hailu Belay, Kevin T. Kornegay
Abstract:
In light of Adama City’s smart city development vision, this study thoroughly investigates the performance of smart security systems with Fifth Generation (5G) network capabilities. Installing extensive cabling can be logistically difficult, particularly in large or dynamic settings. Moreover, latency issues can affect connected systems, making real-time monitoring difficult. Through a focused analysis that employs Adama City as a case study, the performance has been evaluated in terms of spectral and energy efficiency using empirical data and basic signal processing formulations at different frequency resources. The findings demonstrate that cameras operating at higher 5G frequencies have more capacity than those operating at sub-6 GHz, notwithstanding frequency-related issues. It has also been noted that when the beams of such cameras are adaptively focused based on the distance of the last cell-edge user rather than the maximum cell radius, less energy is required than with conventional fixed power ramping.
Keywords: 5G, energy efficiency, safety, smart security, spectral efficiency
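A minimal sketch of the basic formulations referred to above, spectral efficiency from SINR and energy efficiency as throughput per unit power; the bandwidths, SINR values, and power figures are illustrative assumptions, not measurements from Adama City.

```python
import numpy as np

def spectral_efficiency(sinr_db):
    """Shannon-bound spectral efficiency in bit/s/Hz from SINR in dB."""
    return np.log2(1 + 10 ** (sinr_db / 10))

def energy_efficiency(throughput_bps, power_w):
    """Energy efficiency in bit/Joule: delivered bits per second per Watt."""
    return throughput_bps / power_w

# Illustrative comparison of a sub-6 GHz link and a higher-frequency 5G link
links = [("sub-6 GHz", 20e6, 18.0, 5.0),    # (name, bandwidth Hz, SINR dB, power W)
         ("mmWave",    100e6, 12.0, 8.0)]

for name, bw, sinr, p in links:
    se = spectral_efficiency(sinr)
    thr = se * bw
    print(f"{name}: SE={se:.2f} bit/s/Hz, throughput={thr/1e6:.0f} Mb/s, "
          f"EE={energy_efficiency(thr, p)/1e6:.1f} Mbit/J")
```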
Procedia PDF Downloads 19
5614 Nostalgia in Photographed Books for Children – the Case of Photography Books of Children in the Kibbutz
Authors: Ayala Amir
Abstract:
The paper presents interdisciplinary research which draws on literary study and the cultural study of photography to explore a literary genre defined by nostalgia – the photographed book for children. This genre, which was popular in the second half of the 20th century, presents the romantic, nostalgic image of childhood created in the visual arts in the 18th century (as suggested by Ann Higonnet). At the same time, it capitalizes on the nostalgia inherent in the event of photography as formulated by Jennifer Green-Lewis: photography frames a moment in the present while transforming it into a past longed for in the future. Unlike Freudian melancholy, nostalgia is an affect that enables representation by acknowledging the loss and containing it in the very experience of the object. The representation and preservation of the lost object (nature, childhood, innocence) are at the center of the genre of children's photography books – a modern version of the ancient pastoral. In it, the unique synergy of word and image results in a nostalgic image of childhood in an era already conquered by modernization. The nostalgic effect works both in the representation of space – an Edenic image of nature already shadowed by its demise – and of time – an image of childhood imbued with what Gill Bartholnyes calls the "looking backward aesthetics" – under the sign of loss. Little critical attention has been devoted to this genre, with the exception of the work of Bettina Kümmerling-Meibauer, who noted the nostalgic effect of the well-known series of photography books by Astrid Lindgren and Anna Riwkin-Brick. This research aims to elaborate Kümmerling-Meibauer's approach using theories from the study of photography, word-image studies, and current studies of childhood. These theoretical perspectives are implemented in a case study of photography books created in one of the most innovative social structures of our time – the Israeli kibbutz. This communal way of life designed a society in which children would experience their childhood in a parentless rural environment that would save them from the fate of the Oedipal fall. It is suggested that in documenting these children in a fictional format, photographers and writers, images and words cooperated in creating nostalgic works situated on the border between nature and culture, imagination and reality, utopia and its realization in history.
Keywords: nostalgia, photography, childhood, children's books, kibbutz
Procedia PDF Downloads 142
5613 Application of Hyperspectral Remote Sensing in Sambhar Salt Lake, A Ramsar Site of Rajasthan, India
Authors: Rajashree Naik, Laxmi Kant Sharma
Abstract:
Sambhar Lake is the largest inland salt lake of India, declared a Ramsar site on 23 March 1990. Owing to its high salinity and alkalinity, its biodiversity richness comes from haloalkaliphilic flora and fauna, along with diverse land cover including waterbody, wetland, salt crust, saline soil, vegetation, scrubland, and barren land, which welcome large numbers of flamingos and other migratory birds for winter harboring. But with the gradual increase in irrational salt extraction activities, this ecological diversity is at stake, and there is an urgent need to assess the ecosystem. Advanced technologies like remote sensing and GIS make it possible to look into the past and compare it with the present for future planning and judicious management of natural resources. This paper presents research on vegetation mapping in the typical inland lake environment of the Sambhar wetland using satellite data from NASA's EO-1 Hyperion sensor, launched in November 2000. Data with a spectral range of 0.4 to 2.5 micrometers at approximately 10 nm spectral resolution, 242 bands, 30 m spatial resolution, and a 705 km orbit were used to produce a vegetation map for a portion of the wetland. The vegetation map was tested for classification accuracy against a pre-existing detailed GIS wetland vegetation database. Though the accuracy varied greatly for different classes, the algal communities, which are the major food source for flamingos, were successfully identified. The results from this study have practical implications for the use of spaceborne hyperspectral image data that are now becoming available. Practical limitations of using these satellite data for wetland vegetation mapping include inadequate spatial resolution, the complexity of image processing procedures, and the lack of stereo viewing.
Keywords: algal community, NASA's EO-1 Hyperion, salt-tolerant species, wetland vegetation mapping
Procedia PDF Downloads 135
5612 Metal-Oxide-Semiconductor-Only Process Corner Monitoring Circuit
Authors: Davit Mirzoyan, Ararat Khachatryan
Abstract:
A process corner monitoring circuit (PCMC) is presented in this work. The circuit generates a signal whose logical value depends on the process corner only. The signal can be used in both digital and analog circuits for testing and compensation of process variations (PV). The presented circuit uses only metal-oxide-semiconductor (MOS) transistors, which allows increased detection accuracy and decreased power consumption and area. Due to its simplicity, the presented circuit can be easily modified to monitor parametric variations of only n-type and p-type MOS (NMOS and PMOS, respectively) transistors, resistors, as well as their combinations. Post-layout simulation results prove the correct functionality of the proposed circuit, i.e., its ability to monitor the process corner (equivalently, die-to-die variations) even in the presence of within-die variations.
Keywords: detection, monitoring, process corner, process variation
Procedia PDF Downloads 525
5611 An Efficient Discrete Chaos in Generalized Logistic Maps with Applications in Image Encryption
Authors: Ashish Ashish
Abstract:
In the last few decades, the discrete chaos of difference equations has gained massive attention from academicians and scholars due to its tremendous applications in nearly every branch of science, such as cryptography, traffic control models, secure communications, weather forecasting, and engineering. In this article, a generalized logistic discrete map is established, and discrete chaos is reported through period-doubling bifurcation, period-three orbits, and the Lyapunov exponent. It is interesting to see that the generalized logistic map exhibits superior chaos due to the presence of an extra degree of freedom from an order parameter. The period-doubling bifurcation and Lyapunov exponent are demonstrated for some particular values of the parameter, and the discrete chaos is determined in the sense of Devaney's definition of chaos, theoretically as well as numerically. Moreover, the study discusses an extended chaos-based image encryption and decryption scheme in cryptography using this novel system. Surprisingly, a larger key space for coding and more sensitive dependence on initial conditions are observed for encryption and decryption of text messages, images, and videos, which secure the system strongly against external cyber attacks, coding attacks, statistical attacks, and differential attacks.
Keywords: chaos, period-doubling, logistic map, Lyapunov exponent, image encryption
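A minimal sketch of the numerical Lyapunov-exponent computation mentioned above, shown for the classical logistic map x(n+1) = r·x(n)·(1 − x(n)) rather than the paper's generalized map, whose exact form is not reproduced here.

```python
import numpy as np

def lyapunov_logistic(r, x0=0.4, n_transient=1000, n_iter=10000):
    """Numerical Lyapunov exponent of the classical logistic map x -> r*x*(1-x)."""
    x = x0
    # Discard transient iterations so the orbit settles on its attractor
    for _ in range(n_transient):
        x = r * x * (1 - x)
    # Average log|f'(x)| = log|r*(1 - 2x)| along the orbit
    lyap = 0.0
    for _ in range(n_iter):
        lyap += np.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return lyap / n_iter

for r in (2.8, 3.5, 3.9):   # fixed-point, period-4, and chaotic regimes
    print(f"r = {r}: Lyapunov exponent = {lyapunov_logistic(r):+.3f}")
```

A positive exponent (as for r = 3.9) indicates the sensitive dependence on initial conditions that the encryption scheme relies on.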
Procedia PDF Downloads 152
5610 The Artificial Intelligence Technologies Used in PhotoMath Application
Authors: Tala Toonsi, Marah Alagha, Lina Alnowaiser, Hala Rajab
Abstract:
This report is about the Photomath app, an AI application that uses image recognition technology, specifically optical character recognition (OCR) algorithms. The OCR algorithm translates the images into a mathematical equation, and the app automatically provides a step-by-step solution. The application supports decimals, basic arithmetic, fractions, linear equations, and multiple functions such as logarithms. Testing was conducted to examine the usage of this app; results were collected by surveying ten participants and then analyzed. This paper seeks to answer the question: to what level are the artificial intelligence features accurate, and how fast is the processing in this app? It is hoped this study will inform users about the efficiency of AI in Photomath.
Keywords: photomath, image recognition, app, OCR, artificial intelligence, mathematical equations
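As an illustration of the OCR-to-solver pipeline described above (not Photomath's actual implementation), a minimal sketch assuming the pytesseract and SymPy libraries and a cleanly recognized equation string:

```python
import pytesseract                     # OCR wrapper (assumes Tesseract is installed)
from PIL import Image
from sympy import symbols, Eq, solve, sympify

def solve_equation_from_image(path):
    """OCR a photographed linear equation such as '2*x + 3 = 7' and solve for x."""
    text = pytesseract.image_to_string(Image.open(path)).strip()
    left, right = text.split("=")      # assumes OCR returned a single clean equation
    x = symbols("x")
    equation = Eq(sympify(left), sympify(right))
    return text, solve(equation, x)

# Hypothetical usage; 'equation.png' is an assumed input file
# recognized_text, solution = solve_equation_from_image("equation.png")
# print(recognized_text, "->", solution)
```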
Procedia PDF Downloads 171
5609 A New 3D Shape Descriptor Based on Multi-Resolution and Multi-Block CS-LBP
Authors: Nihad Karim Chowdhury, Mohammad Sanaullah Chowdhury, Muhammed Jamshed Alam Patwary, Rubel Biswas
Abstract:
In content-based 3D shape retrieval systems, achieving high search performance has become an important research problem. A challenging aspect of this problem is finding an effective shape descriptor that can discriminate similar shapes adequately. To address this problem, we propose a new shape descriptor for 3D shape models that combines multi-resolution analysis with the multi-block center-symmetric local binary pattern (CS-LBP) operator. Given an arbitrary 3D shape, we first apply pose normalization and generate a set of multi-viewed 2D rendered images. Second, we apply a Gaussian multi-resolution filter to generate several levels of images from each 2D rendered image. Then, overlapped sub-images are computed for each image level of the multi-resolution image. Our unique multi-block CS-LBP comes next: it allows the center to be composed of m-by-n rectangular pixels instead of a single pixel. This process is repeated for all the 2D rendered images, derived from both 'depth-buffer' and 'silhouette' rendering. Finally, we concatenate all the feature vectors into a one-dimensional histogram as our proposed 3D shape descriptor. Through several experiments, we demonstrate that our proposed 3D shape descriptor outperforms previous methods on a benchmark dataset.
Keywords: 3D shape retrieval, 3D shape descriptor, CS-LBP, overlapped sub-images
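A minimal sketch of the basic (single-pixel-center) CS-LBP operator and a block histogram, to illustrate the building block named above; the threshold assumes images scaled to [0, 1], and the multi-block and multi-resolution extensions of the paper are not reproduced.

```python
import numpy as np

def cs_lbp(image, threshold=0.01):
    """Center-symmetric LBP: compare the 4 opposite neighbour pairs of each pixel."""
    img = image.astype(np.float64)
    # 8-neighbourhood ordered so that pair i is (offsets[i], offsets[i + 4])
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(4):
        dy1, dx1 = offsets[i]
        dy2, dx2 = offsets[i + 4]
        n1 = img[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        n2 = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        codes |= ((n1 - n2) > threshold).astype(np.uint8) << i
    return codes                           # code values in [0, 15]

def block_histogram(codes, bins=16):
    """Normalized 16-bin histogram of CS-LBP codes for one (sub-)image block."""
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)

# Illustrative use on a random stand-in for one rendered view
view = np.random.rand(64, 64)
descriptor_block = block_histogram(cs_lbp(view))
```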
Procedia PDF Downloads 445
5608 Cluster Analysis and Benchmarking for Performance Optimization of a Pyrochlore Processing Unit
Authors: Ana C. R. P. Ferreira, Adriano H. P. Pereira
Abstract:
Given the frequent variation of mineral properties throughout the Araxá pyrochlore deposit, even when good homogenization work has been carried out before feeding the processing plants, operation with highly variable quality and performance is expected. These results could be improved and standardized if the blend composition parameters that most influence the processing route were determined, the types of raw materials then grouped by them, and finally a reference set of operational settings presented for each group. Associating the physical and chemical parameters of a unit operation with a benchmark, or even an optimal reference of metallurgical recovery and product quality, results in reduced production costs, optimization of the mineral resource, and greater stability in the subsequent processes of the production chain that use the mineral of interest. Conducting a comprehensive exploratory data analysis to identify which characteristics of the ore are most relevant to the process route, combined with the use of machine learning algorithms for grouping the raw material (ore) and associating these groups with reference variables in the process benchmark, is a reasonable approach for the standardization and improvement of mineral processing units. Clustering methods based on decision trees and K-means were employed, together with algorithms based on benchmarking theory, with criteria defined by the process team in order to reference the best adjustments for processing the ore piles of each cluster. A clean user interface was created to present the outputs of the created algorithm. The results were measured through the average time needed to adjust and stabilize the process after a new pile of homogenized ore enters the plant, as well as the average time needed to achieve the best processing result. Direct gains in the metallurgical recovery of the process were also measured. The results were promising, with a reduction in the adjustment and stabilization time when starting to process a new ore pile, as well as in reaching the benchmark. Also noteworthy are the gains in metallurgical recovery, which reflect a significant saving in ore consumption and a consequent reduction in production costs, hence a more rational use of the tailings dams and an optimized life of the mineral deposit.
Keywords: mineral clustering, machine learning, process optimization, pyrochlore processing
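A minimal sketch of the clustering step described above, assuming scikit-learn's KMeans on hypothetical blend-composition features; the feature names, number of clusters, and benchmark table are illustrative assumptions, not the plant's actual variables.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical blend-composition features for incoming ore piles
rng = np.random.default_rng(42)
piles = pd.DataFrame({
    "Nb2O5_pct":    rng.uniform(0.5, 3.0, 200),
    "P2O5_pct":     rng.uniform(1.0, 12.0, 200),
    "BaO_pct":      rng.uniform(5.0, 20.0, 200),
    "grind_p80_um": rng.uniform(60, 150, 200),
})

# Standardize and group the piles into raw-material types
X = StandardScaler().fit_transform(piles)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
piles["cluster"] = kmeans.labels_

# Per-cluster reference: here simply the feature means, which in practice would be
# replaced by the best historical operational settings (benchmark) for each cluster
benchmark = piles.groupby("cluster").mean()
print(benchmark.round(2))
```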
Procedia PDF Downloads 143
5607 Nutritional Potential and Functionality of Whey Powder Influenced by Different Processing Temperature and Storage
Authors: Zarmina Gillani, Nuzhat Huma, Aysha Sameen, Mulazim Hussain Bukhari
Abstract:
Whey is an excellent food ingredient owing to its high nutritive value and its functional properties. However, the composition of whey varies depending on the composition of the milk, the processing conditions, the processing method, and its whey protein content. The aim of this study was to prepare a whey powder from raw whey and to determine the influence of different processing temperatures (160 and 180 °C) on its physicochemical and functional properties during 180 days of storage, and on whey protein denaturation. Results have shown that temperature significantly (P < 0.05) affects the pH, acidity, non-protein nitrogen (NPN), protein, total soluble solids, fat, and lactose contents. Significantly (P < 0.05) higher foaming capacity (FC), foam stability (FS), and whey protein nitrogen index (WPNI), and a lower turbidity and solubility index (SI), were observed in whey powder processed at 160 °C compared to whey powder processed at 180 °C. During the 180 days of storage, slow but progressive changes were noticed in the physicochemical and functional properties of the whey powder. Reverse-phase HPLC analysis revealed a significant (P < 0.05) effect of temperature on whey protein contents. Denaturation of β-lactoglobulin is followed by α-lactalbumin, casein glycomacropeptide (CMP/GMP), and bovine serum albumin (BSA).
Keywords: whey powder, temperature, denaturation, reverse phase, HPLC
Procedia PDF Downloads 299
5606 Analysis of Fixed Beamforming Algorithms for Smart Antenna Systems
Authors: Muhammad Umair Shahid, Abdul Rehman, Mudassir Mukhtar, Muhammad Nauman
Abstract:
Smart antennas are a prominent technology that has emerged in recent years to meet the growing demands of wireless communications. In an overcrowded spectrum environment, their application is growing gradually. A methodical evaluation of the performance of fixed beamforming algorithms for smart antennas, such as the Multiple Sidelobe Canceller (MSC), Maximum Signal-to-Interference Ratio (MSIR), and Minimum Variance Distortionless Response (MVDR) beamformers, is comprehensively presented in this paper. Simulation results show that beamforming is helpful in providing an optimized response towards desired directions. The MVDR beamformer provides the most optimal solution.
Keywords: fixed weight beamforming, array pattern, signal to interference ratio, power efficiency, element spacing, array elements, optimum weight vector
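A minimal sketch of the MVDR weight computation for a uniform linear array; the array geometry, covariance construction, and signal parameters are illustrative assumptions, not the simulation setup of the paper.

```python
import numpy as np

def steering_vector(n_elements, d_over_lambda, theta_deg):
    """Steering vector of a uniform linear array for a plane wave from angle theta."""
    n = np.arange(n_elements)
    phase = 2j * np.pi * d_over_lambda * n * np.sin(np.deg2rad(theta_deg))
    return np.exp(phase)

def mvdr_weights(R, a):
    """MVDR: w = R^{-1} a / (a^H R^{-1} a), unity gain toward the desired direction."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Illustrative scenario: 8-element array, half-wavelength spacing,
# desired signal at 0 degrees, strong interferer at 40 degrees, white noise.
N, d = 8, 0.5
a_sig = steering_vector(N, d, 0.0)
a_int = steering_vector(N, d, 40.0)
R = (np.outer(a_sig, a_sig.conj())           # desired signal (unit power)
     + 10 * np.outer(a_int, a_int.conj())    # interferer, 10x stronger
     + 0.1 * np.eye(N))                      # noise floor
w = mvdr_weights(R, a_sig)

print("Gain toward 0 deg :", abs(w.conj() @ a_sig))   # 1 by the distortionless constraint
print("Gain toward 40 deg:", abs(w.conj() @ a_int))   # strongly suppressed
```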
Procedia PDF Downloads 184
5605 Predictive Modelling Approaches in Food Processing and Safety
Authors: Amandeep Sharma, Digvaijay Verma, Ruplal Choudhary
Abstract:
Food processing is an activity carried out across the globe that helps in better handling of agricultural produce, including dairy, meat, and fish. The operations carried out in the food industry include raw material quality authentication; sorting and grading; processing into various products using thermal treatments (heating, freezing, and chilling); packaging; and storage at the appropriate temperature to maximize the shelf life of the products. All this is done to safeguard the food products and to ensure distribution up to the consumer. Approaches to developing predictive models based on mathematical or statistical tools, or on empirical model development, have been reported for various milk processing activities, including plant maintenance and wastage. Recently, AI has become the key factor in the fourth industrial revolution. AI plays a vital role in the food industry, not only in quality and food security but also in different areas such as manufacturing, packaging, and cleaning. A new conceptual model was developed which shows that a smaller sample size would suffice, as only spectra would be required to predict the other values; this leads to savings on the raw materials and chemicals otherwise used for experimentation during research and new product development. It would be a forward-looking approach if these tools could be further combined with mobile phones through software development for real-time application in the field for quality checks and traceability of the product.
Keywords: predictive modelling, ANN, AI, food
Procedia PDF Downloads 82
5604 Challenges and Opportunities: One Stop Processing for the Automation of Indonesian Large-Scale Topographic Base Map Using Airborne LiDAR Data
Authors: Elyta Widyaningrum
Abstract:
LiDAR data acquisition has been recognized as one of the fastest solutions to provide the base data for topographic base mapping in Indonesia. The challenge of accelerating the provision of large-scale topographic base maps as a basis for development planning gives the opportunity to implement an automated scheme in the map production process. One-stop processing will also contribute to accelerating map provision, especially to conform with the Indonesian fundamental spatial data catalog derived from ISO 19110 and with geospatial database integration. Thus, automated LiDAR classification, DTM generation, and feature extraction will be conducted in one GIS software environment to form all layers of the topographic base maps. The quality of the automated topographic base map will be assessed and analyzed based on its completeness, correctness, contiguity, consistency, and possible customization.
Keywords: automation, GIS environment, LiDAR processing, map quality
Procedia PDF Downloads 368
5603 High Efficient Biohydrogen Production from Cassava Starch Processing Wastewater by Two Stage Thermophilic Fermentation and Electrohydrogenesis
Authors: Peerawat Khongkliang, Prawit Kongjan, Tsuyoshi Imai, Poonsuk Prasertsan, Sompong O-Thong
Abstract:
A two-stage thermophilic fermentation and electrohydrogenesis process was used to convert cassava starch processing wastewater into hydrogen gas. The maximum hydrogen yield from the fermentation stage by Thermoanaerobacterium thermosaccharolyticum PSU-2 was 248 mL H2/g-COD at an optimal pH of 6.5. An optimum hydrogen production rate of 820 mL/L/d and a yield of 200 mL/g-COD were obtained at an HRT of 2 days in the fermentation stage. The cassava starch processing wastewater fermentation effluent consisted of acetic acid, butyric acid, and propionic acid. The effluent from the fermentation stage was used as feedstock for hydrogen production by microbial electrolysis cells (MECs) at an applied voltage of 0.6 V in the second stage, producing an additional 657 mL H2/g-COD. Energy efficiency based on the electricity needed for the MEC was 330%, with COD removal of 95%. The overall hydrogen yield was 800-900 mL H2/g-COD. Microbial community analysis of the electrohydrogenesis stage by DGGE showed that exoelectrogens belonging to Acidiphilium sp., Geobacter sulfurreducens, and Thermincola sp. dominated at the anode. These results show that the two-stage thermophilic fermentation and electrohydrogenesis process improved hydrogen production performance, with high hydrogen yields, high gas production rates, and high COD removal efficiency.
Keywords: cassava starch processing wastewater, biohydrogen, thermophilic fermentation, microbial electrolysis cell
Procedia PDF Downloads 343
5602 Multichannel Surface Electromyography Trajectories for Hand Movement Recognition Using Intrasubject and Intersubject Evaluations
Authors: Christina Adly, Meena Abdelmeseeh, Tamer Basha
Abstract:
This paper proposes a system for hand movement recognition using multichannel surface EMG (sEMG) signals obtained from 40 subjects performing 40 different exercises, which are available in the Ninapro (Non-Invasive Adaptive Prosthetics) database. First, we applied processing methods to the raw sEMG signals to convert them to their amplitudes. Second, we used deep learning methods to solve the problem by passing the preprocessed signals to fully connected neural networks (FCNN) and recurrent neural networks (RNN) with Long Short-Term Memory (LSTM). Using intrasubject evaluation, the accuracy of the FCNN is 72%, with a training time of around 76 minutes, while the RNN's accuracy is 79.9%, with a processing time of 8 minutes and 22 seconds. Third, we applied postprocessing methods to improve the accuracy, such as majority voting (MV) and the Movement Error Rate (MER). The accuracy after applying MV is 75% and 86% for the FCNN and RNN, respectively. The MER value has an inverse relationship with the prediction delay when varying the window length used for the MV. The last part uses the RNN with intersubject evaluation. The experimental results showed that, to obtain good accuracy for testing with reasonable processing time, around 20 subjects should be used.
Keywords: hand movement recognition, recurrent neural network, movement error rate, intrasubject evaluation, intersubject evaluation
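A minimal sketch of the majority-voting postprocessing step described above, applied to a stream of per-window class predictions; the window length and the example label stream are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def majority_vote(predictions, window=9):
    """Smooth a stream of per-window class predictions with a causal sliding majority vote."""
    smoothed = []
    for i in range(len(predictions)):
        start = max(0, i - window + 1)           # causal window ending at sample i
        votes = Counter(predictions[start:i + 1])
        smoothed.append(votes.most_common(1)[0][0])
    return np.array(smoothed)

# Illustrative noisy prediction stream for a single repeated hand movement (class 3)
raw = np.array([3, 3, 7, 3, 3, 3, 1, 3, 3, 3, 3, 5, 3, 3])
print(majority_vote(raw, window=5))   # isolated mislabels are voted away
```

A longer voting window removes more isolated errors but increases the prediction delay, which is the trade-off the MER metric captures.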
Procedia PDF Downloads 142
5601 Correlation Mapping for Measuring Platelet Adhesion
Authors: Eunseop Yeom
Abstract:
Platelets can be activated by the surrounding blood flows where a blood vessel is narrowed as a result of atherosclerosis. Numerous studies have been conducted to identify the relation between platelet activation and thrombus formation. To measure platelet adhesion, this study proposes an image analysis technique. Blood samples are delivered into the microfluidic channel, and platelets are then activated by a stenotic micro-channel with 90% severity. By applying the proposed correlation mapping, which visualizes the decorrelation of the streaming blood flow, the area of adhered platelets (APlatelet) was estimated without labeling the platelets. In order to evaluate the performance of correlation mapping in detecting platelet adhesion, the effect of tile size was investigated by calculating 2D correlation coefficients between binary images obtained by manual labeling and by the correlation mapping method with different sizes of the square tile, ranging from 3 to 50 pixels. The maximum 2D correlation coefficient is observed at the optimum tile size of 5×5 pixels. As the area of platelet adhesion increases, the platelets plug the channel and only a small amount of blood flows. This image analysis could provide new insights for a better understanding of the interactions between platelet aggregation and blood flows under various physiological conditions.
Keywords: platelet activation, correlation coefficient, image analysis, shear rate
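A minimal sketch of a tile-wise correlation (decorrelation) map between two consecutive image frames, using the 5×5 tile size mentioned above; the frame source and the adhesion threshold are illustrative assumptions.

```python
import numpy as np

def correlation_map(frame_a, frame_b, tile=5):
    """Tile-wise Pearson correlation between two consecutive grayscale frames.

    Low correlation indicates decorrelation caused by the streaming blood flow;
    tiles that stay highly correlated over time correspond to static regions
    such as adhered platelets.
    """
    h, w = frame_a.shape
    rows, cols = h // tile, w // tile
    cmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            a = frame_a[i*tile:(i+1)*tile, j*tile:(j+1)*tile].ravel()
            b = frame_b[i*tile:(i+1)*tile, j*tile:(j+1)*tile].ravel()
            if a.std() > 0 and b.std() > 0:
                cmap[i, j] = np.corrcoef(a, b)[0, 1]
    return cmap

# Illustrative use with two random frames standing in for microscope images
fa, fb = np.random.rand(100, 100), np.random.rand(100, 100)
adhesion_mask = correlation_map(fa, fb, tile=5) > 0.8   # assumed threshold
print("Tiles flagged as static:", int(adhesion_mask.sum()))
```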
Procedia PDF Downloads 335
5600 An Experimental Study of Bolt Inclination in a Composite Single Bolted Joint
Authors: Youcef Faci, Djillali Allou, Ahmed Mebtouche, Badredine Maalem
Abstract:
The inclination of the bolt in a fastened joint of composite material during a tensile test can be influenced by several parameters, including the material properties, the bolt diameter and length, the type of composite material being used, the bolt preload, the surface preparation, the design and configuration of the joint, and finally the testing conditions. These parameters should be carefully considered and controlled to ensure accurate and reliable results during tensile testing of composite materials with fastened joints. Our work focuses on the effect of the stacking sequence and the geometry of the specimens. An experimental test is carried out to obtain the inclination of a bolt during a tensile test of a composite material using acoustic emission and digital image correlation. Several types of damage were obtained during loading. Digital image correlation techniques make it possible to obtain the bolt inclination angle during the tensile test. We conclude that the inclination of the bolt during a tensile test of a composite material can be related to the damage that occurs in the material: it can cause stress concentrations and localized deformation in the material, leading to damage such as delamination, fiber breakage, matrix cracking, and other forms of failure.
Keywords: damage, digital image correlation, bolt inclination angle, joint
Procedia PDF Downloads 68
5599 Iris Recognition Based on the Low Order Norms of Gradient Components
Authors: Iman A. Saad, Loay E. George
Abstract:
The iris pattern is an important biometric feature of the human body; it has become a very hot topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient, and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, up to a certain extent, against variations that may occur in the contrast or brightness of iris image samples; these variations mostly occur due to lighting differences and camera changes. First, the iris region is located; after that, it is remapped to a rectangular area of size 360x60 pixels. A new method is also proposed for detecting eyelash and eyelid points; it relies on statistical analysis of the image to mark the eyelash and eyelid points as noise. In order to account for feature localization (variation), the rectangular iris image is partitioned into N overlapped sub-images (blocks); then, from each block, a set of different average directional gradient density values is calculated and used as the texture feature vector. The gradient operators are applied along the horizontal, vertical, and diagonal directions, and the low-order norms of the gradient components are used to establish the feature vector. A Euclidean distance based classifier was used as the matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached 99.92%.
Keywords: iris recognition, contrast stretching, gradient features, texture features, Euclidean metric
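A minimal sketch of block-wise directional gradient density features with a low-order (L1) norm and Euclidean matching; the block size, overlap, and feature layout are illustrative assumptions, not the exact configuration of the paper.

```python
import numpy as np

def gradient_block_features(img, block=(15, 30)):
    """Average L1-norm densities of directional gradients per overlapped block."""
    img = img.astype(np.float64)
    # First-order differences along horizontal, vertical and the two diagonal directions
    gx = np.abs(np.diff(img, axis=1))[:-1, :]
    gy = np.abs(np.diff(img, axis=0))[:, :-1]
    gd1 = np.abs(img[1:, 1:] - img[:-1, :-1])            # main diagonal
    gd2 = np.abs(img[1:, :-1] - img[:-1, 1:])            # anti-diagonal
    feats = []
    bh, bw = block
    h, w = gx.shape
    for y in range(0, h - bh + 1, bh // 2):               # 50% overlapped blocks
        for x in range(0, w - bw + 1, bw // 2):
            for g in (gx, gy, gd1, gd2):
                feats.append(g[y:y+bh, x:x+bw].mean())    # density = mean |gradient|
    return np.array(feats)

def match(probe, template):
    """Euclidean distance between probe and template feature vectors."""
    return np.linalg.norm(probe - template)

# Illustrative use on a normalized 360x60 iris strip (random stand-in image)
strip = np.random.rand(60, 360)
f = gradient_block_features(strip)
print(f.shape, match(f, f))    # distance to itself is 0
```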
Procedia PDF Downloads 335
5598 Performance Analysis of Search Medical Imaging Service on Cloud Storage Using Decision Trees
Authors: González A. Julio, Ramírez L. Leonardo, Puerta A. Gabriel
Abstract:
Telemedicine services use a large amount of data, most of which are diagnostic images in Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7) formats. Metadata are generated from each related image to support its identification. This study presents the use of decision trees for the optimization of information search processes for diagnostic images hosted on a cloud server. To analyze the server's performance, the following quality of service (QoS) metrics were evaluated: delay, bandwidth, jitter, latency, and throughput, in five test scenarios for a total of 26 experiments during the uploading and downloading of DICOM images, hosted by the telemedicine group server of the Universidad Militar Nueva Granada, Bogotá, Colombia. By applying decision trees as a data mining technique and comparing them with sequential search, it was possible to evaluate the search times for diagnostic images on the server. The results show that by using the metadata in decision trees, the search times are substantially improved, the computational resources are optimized, and the request management of the telemedicine image service is improved. Based on the experiments carried out, search efficiency increased by 45% relative to sequential search, given that, when downloading a diagnostic image, false positives are avoided in the management and acquisition processes of that information. It is concluded that, for diagnostic image services in telemedicine, the decision tree technique guarantees accessibility and robustness in the acquisition and manipulation of medical images, improving diagnoses and medical procedures for patients.
Keywords: cloud storage, decision trees, diagnostic image, search, telemedicine
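A minimal sketch contrasting a sequential scan of DICOM metadata with a decision tree trained on the same metadata, assuming scikit-learn; the metadata fields, record catalogue, and shard target are hypothetical, not the actual server layout.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical DICOM metadata catalogue (one row per stored image)
rng = np.random.default_rng(0)
catalog = pd.DataFrame({
    "modality":   rng.integers(0, 4, 5000),   # encoded: 0=CT, 1=MR, 2=US, 3=CR
    "body_part":  rng.integers(0, 10, 5000),  # encoded anatomical region
    "study_year": rng.integers(2015, 2024, 5000),
})
catalog["server_shard"] = (catalog["modality"] * 10 + catalog["body_part"]) % 8

def sequential_search(df, modality, body_part):
    """Baseline: scan every record until a matching image location is found."""
    for _, row in df.iterrows():
        if row["modality"] == modality and row["body_part"] == body_part:
            return row["server_shard"]
    return None

# A decision tree learned over the metadata predicts the storage location directly,
# avoiding the full scan and the false positives it can return.
tree = DecisionTreeClassifier(max_depth=8).fit(
    catalog[["modality", "body_part", "study_year"]], catalog["server_shard"])

query = pd.DataFrame([{"modality": 1, "body_part": 3, "study_year": 2020}])
print("sequential:", sequential_search(catalog, 1, 3))
print("tree index:", tree.predict(query)[0])
```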
Procedia PDF Downloads 204
5597 Multimodal Direct Neural Network Positron Emission Tomography Reconstruction
Authors: William Whiteley, Jens Gregor
Abstract:
In recent developments of direct neural network based positron emission tomography (PET) reconstruction, two prominent architectures have emerged for converting measurement data into images: 1) networks that contain fully-connected layers; and 2) networks that primarily use a convolutional encoder-decoder architecture. In this paper, we present a multimodal direct PET reconstruction method called MDPET, a hybrid approach that combines the advantages of both types of networks. MDPET processes raw data in the form of sinograms and histo-images, in concert with attenuation maps, to produce high-quality multi-slice PET images (e.g., 8x440x440). MDPET is trained on a large whole-body patient data set and evaluated both quantitatively and qualitatively against target images reconstructed with the standard PET reconstruction benchmark of iterative ordered-subsets expectation maximization. The results show that MDPET outperforms the best previously published direct neural network methods in measures of bias, signal-to-noise ratio, mean absolute error, and structural similarity.
Keywords: deep learning, image reconstruction, machine learning, neural network, positron emission tomography
Procedia PDF Downloads 111