Search results for: analog signal processing
3574 Study on the Impact of Default Converter on the Quality of Energy Produced by DFIG Based Wind Turbine
Authors: N. Zerzouri, N. Benalia, N. Bensiali
Abstract:
This work is devoted to an analysis of the operation of a doubly fed induction generator (DFIG) integrated into a wind energy system. Power transfer between the stator and the grid is carried out by acting on the rotor via a bidirectional power converter. The analysis focuses on a converter fault caused by an interruption in the control of one semiconductor switch. Simulation results obtained with the MATLAB/Simulink software illustrate the quality of the power generated under the fault condition.
Keywords: doubly fed induction generator (DFIG), wind energy, PWM inverter, modeling
Procedia PDF Downloads 316
3573 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments
Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic
Abstract:
Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences; two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences (a minimal sketch of this step follows). Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder
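As a worked illustration of the decomposition-and-classification step, the following sketch builds a small random dictionary, decomposes signals by matching pursuit, and classifies a test signal by the lowest Euclidean distance between atomic-index probability vectors. The dictionary, signals, and sizes are invented stand-ins, not the paper's TIMIT setup or learned Gabor dictionary.

```python
# Minimal matching-pursuit + nearest-histogram classification sketch.
import numpy as np

def matching_pursuit(x, D, n_atoms=20):
    """Greedy atomic decomposition: returns (indices, weights)."""
    residual = x.copy()
    idx, w = [], []
    for _ in range(n_atoms):
        corr = D.T @ residual                    # correlation with every atom
        k = int(np.argmax(np.abs(corr)))         # best-matching atom
        idx.append(k); w.append(corr[k])
        residual = residual - corr[k] * D[:, k]  # peel the atom off
    return np.array(idx), np.array(w)

def index_histogram(indices, n_dict):
    """Atomic-index probabilities used as the speaker feature."""
    h = np.bincount(indices, minlength=n_dict).astype(float)
    return h / h.sum()

rng = np.random.default_rng(0)
n, n_dict = 256, 64
D = rng.standard_normal((n, n_dict))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms

train = {s: index_histogram(matching_pursuit(rng.standard_normal(n), D)[0], n_dict)
         for s in ("speaker_A", "speaker_B")}
test_hist = index_histogram(matching_pursuit(rng.standard_normal(n), D)[0], n_dict)
# Classify by the lowest Euclidean distance between index probabilities.
label = min(train, key=lambda s: np.linalg.norm(train[s] - test_hist))
print(label)
```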
Procedia PDF Downloads 289
3572 Dairy Products on the Algerian Market: Proportion of Imitation and Degree of Processing
Authors: Bentayeb-Ait Lounis Saïda, Cheref Zahia, Cherifi Thiziri, Kahina Bahmed, Kahina Hallali, Yasmine Abdellaoui, Kenza Adli
Abstract:
Algeria is the leading consumer of dairy products in North Africa; this is a fact. However, the nutritional quality of these products remains unknown. The aim of this study is to characterise the dairy products available on the Algerian market in order to assess whether they constitute a healthy and safe choice. To do this, we collected data on the labelling of 390 dairy products, including cheese, yoghurt, UHT milk and milk drinks, infant formula, and dairy creams. We assessed their degree of processing according to the NOVA classification, as well as the proportion of imitation products. The study was carried out between March 2020 and August 2023. The results show that 88% are ultra-processed: 84% of 'cheese', 92% of dairy creams, 92% of 'yoghurt', 100% of infant formula, 92% of margarines, and 36% of UHT milk/dairy drinks. As for imitation/analogue dairy products, the study revealed the following proportions: 100% for infant formula, 78% for butter/margarine, 18% for UHT milk/milk-based drinks, 54% for cheese, 2% for camembert, and 75% for dairy cream. The harmful effects of consuming ultra-processed products on long-term health are increasingly documented in dozens of publications. The findings of this study sound the alarm about the health risks to which Algerian consumers are exposed. Various scientific, economic, and industrial bodies need to be involved in order to safeguard consumer health in both the short and long term. Food awareness and education campaigns should be organised.
Keywords: dairy, UPF, NOVA, yoghurt, cheese
Procedia PDF Downloads 35
3571 An Analytical Systematic Design Approach to Evaluate Ballistic Performance of Armour Grade AA7075 Aluminium Alloy Using Friction Stir Processing
Authors: Lahari Ramya P., Sudhakar I., Madhu V., Madhusudhan Reddy G., Srinivasa Rao E.
Abstract:
Selection of suitable armour materials for defence applications is crucial with respect to increasing the mobility of the systems as well as maintaining safety. Therefore, determining the material with the lowest possible areal density that successfully resists the predefined threat is required in armour design studies. A number of light metals and alloys have come to the forefront, especially as substitutes for armour grade steels. AA5083 aluminium alloy, which meets the military standards imposed by the US Army, is the foremost nonferrous alloy to consider as a possible replacement for steel to increase the mobility of armoured vehicles and enhance fuel economy. The growing need for AA5083 aluminium alloy paves the way to develop supplementary aluminium alloys that maintain the military standards. It has been witnessed that AA2xxx, AA6xxx, and AA7xxx aluminium alloys are potential materials to supplement AA5083 aluminium alloy. Among the cited aluminium alloy series, the heat-treatable AA7xxx aluminium alloy possesses high strength and can compete with armour grade steels. Earlier investigations revealed that layering of AA7xxx aluminium alloy can prevent spalling of the rear portion of the armour during ballistic impacts. Hence, the present investigation deals with the fabrication of a hard boron carbide layer on AA7075 aluminium alloy using friction stir processing, with the intention of blunting the projectile on initial impact while the tough backing (AA7xxx aluminium alloy) dissipates the residual kinetic energy. An analytical approach has been adopted to unfold the ballistic performance against the projectile. Penetration of the projectile into the armour has been resolved by strain energy model analysis. The perforation shearing areas, i.e., the interface between projectile and armour, are taken into account for the evaluation of penetration into the armour. The fabricated surface composites (targets) were tested as per the military standard (JIS.0108.01) in a ballistic testing tunnel at the Defence Metallurgical Research Laboratory (DMRL), Hyderabad, under standardized testing conditions. The analytical results were well validated against the experimentally obtained ones.
Keywords: AA7075 aluminium alloy, friction stir processing, boron carbide, ballistic performance, target
Procedia PDF Downloads 330
3570 Web and Smart Phone-based Platform Combining Artificial Intelligence and Satellite Remote Sensing Data to Geoenable Villages for Crop Health Monitoring
Authors: Siddhartha Khare, Nitish Kr Boro, Omm Animesh Mishra
Abstract:
Recent food price hikes may signal the end of an era of predictable global grain crop plenty due to climate change, population expansion, and dietary changes. Food consumption will treble in 20 years, requiring enormous production expenditures. Rainfall and seasonal cycles have shifted over the past decade as the climate and atmosphere have changed. India's tropical agriculture relies on evapotranspiration and the monsoons. In places with limited resources, global environmental change affects agricultural productivity and farmers' capacity to adjust to changing moisture patterns. Motivated by these difficulties, satellite remote sensing might be combined with near-surface imaging data (smartphones, UAVs, and PhenoCams) to enable phenological monitoring and fast evaluations of the field-level consequences of extreme weather events on smallholder agricultural output. To accomplish this, we must digitally map every village's agricultural boundaries and crop types. With the improvement of satellite remote sensing technologies, a geo-referenced database can be created for rural Indian agricultural fields, and using AI, we can design digital agricultural solutions for individual farms. The main objective is to geo-enable each farm, along with its seasonal crop information, by combining artificial intelligence (AI) with satellite and near-surface data, and then to carry out long-term crop monitoring through in-depth field analysis and scanning of fields with satellite-derived vegetation indices. We developed an AI-based algorithm that tracks time-lapse vegetation growth from PhenoCam or smartphone images, and an Android application with which users collect images of their fields; these images are sent to our server, where further AI-based processing is done. We are creating digital boundaries of individual farms and connecting these farms with our smartphone application to collect information about farmers and their crops in each season. We extract satellite-based information for each farm through the Google Earth Engine APIs and merge it, by farm location, with the crop data collected through the app, building a database that provides crop quality data for each location; a minimal sketch of the satellite step follows.
Keywords: artificial intelligence, satellite remote sensing, crop monitoring, android and web application
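The satellite step can be illustrated with the Google Earth Engine Python client (earthengine-api). The sketch below computes a seasonal mean NDVI for one farm polygon; the polygon coordinates, dataset choice, and date range are placeholder assumptions, not the platform's actual configuration.

```python
# Hedged sketch: per-farm seasonal NDVI from Sentinel-2 via Earth Engine.
import ee

ee.Initialize()  # assumes Earth Engine credentials are already configured

farm = ee.Geometry.Polygon([[  # hypothetical farm boundary (lon, lat)
    [78.010, 26.210], [78.012, 26.210], [78.012, 26.212], [78.010, 26.212],
]])

def add_ndvi(image):
    """Sentinel-2: NDVI = (B8 - B4) / (B8 + B4)."""
    return image.addBands(image.normalizedDifference(["B8", "B4"]).rename("NDVI"))

collection = (ee.ImageCollection("COPERNICUS/S2_SR")
              .filterBounds(farm)
              .filterDate("2022-06-01", "2022-11-30")
              .map(add_ndvi))

season_ndvi = collection.select("NDVI").mean()   # seasonal NDVI composite
stats = season_ndvi.reduceRegion(
    reducer=ee.Reducer.mean(), geometry=farm, scale=10)
print(stats.getInfo())                           # mean NDVI over the farm
```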
Procedia PDF Downloads 100
3569 Tip-Enhanced Raman Spectroscopy with Plasmonic Lens Focused Longitudinal Electric Field Excitation
Authors: Mingqian Zhang
Abstract:
Tip-enhanced Raman spectroscopy (TERS) is a scanning probe technique for the investigation of individual objects and structured surfaces that provides a wealth of enhanced spectral information with nanoscale spatial resolution and high detection sensitivity. It has become a powerful and promising method for detecting chemical and physical information at the nanometer scale. The TERS technique uses a sharp metallic tip regulated in the near field of a sample surface, which is illuminated with an incident beam meeting the excitation conditions of wave-vector matching. The local electric field, and consequently the Raman scattering from the sample in the vicinity of the tip apex, are both greatly tip-enhanced owing to the excitation of localized surface plasmons and the lightning-rod effect. Typically, a TERS setup is composed of a scanning probe microscope, excitation and collection optical configurations, and a Raman spectrometer. In the illumination configuration, an objective lens or a parabolic mirror is usually used as the most important component, in order to focus the incident beam on the tip apex for excitation. In this research, a novel TERS setup was built by introducing a plasmonic lens to the excitation optics as a focusing device. A plasmonic lens with symmetry-breaking semi-annular slits corrugated on a gold film was designed for the purpose of generating concentrated sub-wavelength light spots with a strong longitudinal electric field. Compared to conventional far-field optical components, the designed plasmonic lens not only focuses an incident beam to a sub-wavelength light spot but also produces a strong z-component that dominates the electric field illumination, which is ideal for the excitation of tip enhancement. Therefore, using a plasmonic lens in the illumination configuration of TERS helps improve the detection sensitivity by both reducing the far-field background and effectively exciting the localized electric field enhancement. The finite-difference time-domain (FDTD) method was employed to investigate the optical near-field distribution resulting from the light-nanostructure interaction, and the optical field distribution was characterized using a scattering-type scanning near-field optical microscope to demonstrate the focusing performance of the lens. The experimental result is in agreement with the theoretically calculated one and verifies the focusing performance of the plasmonic lens. The optical field distribution shows a bright elliptic spot in the lens center and several arc-like side lobes on both sides. After the focusing performance was experimentally verified, the designed plasmonic lens was used as a focusing component in the excitation configuration of the TERS setup to concentrate incident energy and generate a longitudinal optical field. A collimated, linearly polarized laser beam, polarized along the x-axis, was incident from the bottom glass side on the plasmonic lens. The incident light focused by the plasmonic lens interacted with the silver-coated tip apex and locally enhanced the Raman signal of the sample. The scattered Raman signal was gathered by a parabolic mirror and detected with a Raman spectrometer. Then, the plasmonic-lens-based setup was employed to investigate carbon nanotubes in a TERS experiment. Experimental results indicate that the Raman signal is considerably enhanced, which proves that the novel TERS configuration is feasible and promising.
Keywords: longitudinal electric field, plasmonics, Raman spectroscopy, tip-enhancement
Procedia PDF Downloads 373
3568 Sustainable Radiation Curable Palm Oil-Based Products for Advanced Materials Applications
Authors: R. Tajau, R. Rohani, M. S. Alias, N. H. Mudri, K. A. Abdul Halim, M. H. Harun, N. Mat Isa, R. Che Ismail, S. Muhammad Faisal, M. Talib, M. R. Mohamed Zin
Abstract:
Bio-based polymeric materials are increasingly used for a variety of applications, including surface coatings, drug delivery systems, and tissue engineering. These polymeric materials are ideal for the aforementioned applications because they are derived from natural resources, non-toxic, low-cost, biocompatible, and biodegradable, and have promising thermal and mechanical properties. The nature of their hydrocarbon chains, carbon double bonds, and ester bonds allows various edible oil sources, such as soy, sunflower, olive, and oil palm, to have their particular structures fine-tuned in the development of innovative materials. Palm oil can be the most prominent raw material used for manufacturing new and advanced natural polymeric materials by radiation techniques, such as coating resins, nanoparticles, scaffolds, nanotubes, nanocomposites, and lithography, for different branches of industry in countries where oil palm is abundant. The radiation technique is among the most versatile, cost-effective, simple, and efficient methods. Crosslinking, reversible addition-fragmentation chain transfer (RAFT) polymerisation, grafting, and degradation are among the radiation mechanisms, which rely on exposure to gamma, electron beam (EB), UV, or laser irradiation, all commonly used in the development of polymeric materials. Therefore, this review focuses on current radiation processing technologies for the development of various radiation-curable bio-based polymeric materials with a promising future in biomedical and industrial applications. The key focus of this review is on radiation-curable palm oil-based products, which have been reported frequently in recent studies.
Keywords: palm oil, radiation processing, surface coatings, VOC
Procedia PDF Downloads 183
3567 Optimization of Extraction Conditions and Characteristics of Scale Collagen from Sardine: Sardina pilchardus
Authors: F. Bellali, M. Kharroubi, M. Loutfi, N. Bourhim
Abstract:
In Morocco, the fish processing industry is an important source of income and generates a large amount of byproducts, including skins, bones, heads, guts, and scales. These underutilized resources, particularly scales, contain large amounts of protein and calcium. Scales from Sardina pilchardus resulting from the transformation operation have the potential to be used as a raw material for collagen production. Taking into account this strong expectation of the regional fish industry, sardine scale upgrading is well justified. In addition, political and societal demands for sustainability and environment-friendly industrial production systems, coupled with the depletion of fish resources, drive this trend forward. Collagen isolated from fish scales therefore has a wide range of potential applications in the food, cosmetic, and biomedical industries. The main aim of this study is to isolate and characterize the acid-soluble collagen from the scales of the sardine, Sardina pilchardus. Experimental design methodology was adopted in the collagen processing for extraction optimization. The first stage of this work investigates the optimal conditions for sardine scale deproteinization using response surface methodology (RSM). The second part focuses on demineralization with HCl solution or EDTA. The last stage establishes the optimum conditions for the isolation of collagen from fish scales by solvent extraction. The basic principle of RSM is to determine model equations that describe the interrelations between the independent and dependent variables (a minimal fitting sketch follows).
Keywords: Sardina pilchardus, scales, valorization, collagen extraction, response surface methodology
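The RSM principle stated above can be illustrated with a short least-squares fit of a full second-order model. The coded design points and responses below are invented for illustration, not the study's measurements.

```python
# Fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2 by least squares,
# as used to relate extraction conditions (e.g., reagent concentration, time)
# to a response such as collagen yield.
import numpy as np

# Hypothetical coded design points (x1, x2) and measured responses y
X_raw = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0], [0, 0],
                  [1.41, 0], [-1.41, 0], [0, 1.41], [0, -1.41]])
y = np.array([12.1, 15.3, 13.8, 18.9, 16.2, 16.0, 17.5, 11.2, 15.9, 13.1])

x1, x2 = X_raw[:, 0], X_raw[:, 1]
# Design matrix of the full quadratic response surface
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

b0, b1, b2, b12, b11, b22 = coef
print(f"y = {b0:.2f} + {b1:.2f}*x1 + {b2:.2f}*x2 "
      f"+ {b12:.2f}*x1*x2 + {b11:.2f}*x1^2 + {b22:.2f}*x2^2")
```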
Procedia PDF Downloads 417
3566 Calpains; Insights Into the Pathogenesis of Heart Failure
Authors: Mohammadjavad Sotoudeheian
Abstract:
Heart failure (HF) prevalence, as a global cardiovascular problem, is increasing gradually. A variety of molecular mechanisms contribute to HF. Proteins involved in the regulation of cardiac contractility, such as ion channels and calcium-handling proteins, are altered, and epigenetic modifications and changes in gene expression can lead to altered cardiac function. Moreover, inflammation and oxidative stress contribute to HF, and its progression can be attributed to mitochondrial dysfunction that impairs energy production and increases apoptosis. Molecular mechanisms such as these contribute to the development of cardiomyocyte defects and HF and can be therapeutically targeted. The heart's contractile function is controlled by cardiomyocytes, whose function is regulated by calpain and its related molecules, including Bax, VEGF, and AMPK. Bax facilitates and regulates cardiomyocyte apoptosis. Cardiomyocyte survival, contractility, wound healing, and proliferation are all regulated by VEGF, which is produced by cardiomyocytes during inflammation and cytokine stress, while AMPK, an enzyme that plays an active role in energy metabolism, also influences cardiomyocyte proliferation and survival. All of these play key roles in apoptosis, angiogenesis, hypertrophy, and metabolism during myocardial inflammation. Calpains have been linked to several molecular pathways: the calpain pathway plays an important role in signal transduction and apoptosis, as well as in autophagy, endocytosis, and exocytosis. These calcium-dependent cysteine proteases cleave proteins, regulating cell death and survival, and the resulting protein fragments serve various cellular functions. By cleaving adhesion and motility proteins, calpains also contribute to cell migration. Calpain is activated by calcium binding to calmodulin, and stimulation of calpains by calcium increases matrix metalloproteinases (MMPs). In order to develop novel treatments for these diseases, we must understand how this pathway works. A variety of myocardial remodeling processes involve calpains, including remodeling of the extracellular matrix and hypertrophy of cardiomyocytes. Calpains also play a role in maintaining cardiac homeostasis through apoptosis and autophagy. The development of HF may be due, in part, to calpain-mediated pathways promoting cardiomyocyte death. Numerous studies have suggested the importance of the Ca²⁺-dependent protease calpain in cardiac physiology and pathology; Ca²⁺-dependent proteases and calpains contribute to adverse ventricular remodeling and HF in multiple experimental models. Therefore, it is important to consider this pathway when developing and testing therapeutic options in humans that target calpain in HF, and calpain inhibitors, which have been investigated in preclinical models of several conditions in which the enzyme has been implicated, might have therapeutic potential. In this manuscript, we discuss the important roles of the calpain molecular pathway in HF development.
Keywords: calpain, heart failure, autophagy, apoptosis, cardiomyocyte
Procedia PDF Downloads 68
3565 Microbial Dynamics and Sensory Traits of Spanish- and Greek-Style Table Olives (Olea europaea L. cv. Ascolana tenera) Fermented with Sea Fennel (Crithmum maritimum L.)
Authors: Antonietta Maoloni, Federica Cardinali, Vesna Milanović, Andrea Osimani, Ilario Ferrocino, Maria Rita Corvaglia, Luca Cocolin, Lucia Aquilanti
Abstract:
Table olives (Olea europaea L.) are among the most important fermented vegetables all over the world, while sea fennel (Crithmum maritimum L.) is an emerging food crop with interesting nutritional and sensory traits. Both are characterized by the presence of several bioactive compounds with potential beneficial health effects, thus representing two valuable substrates for the manufacture of innovative vegetable-based preserves. Given these premises, the present study was aimed at exploring the co-fermentation of table olives and sea fennel to produce new high-value preserves. Spanish-style and Greek-style processing methods and the use of a multiple-strain starter were explored. The preserves were evaluated for their microbial dynamics and key sensory traits. During the fermentation, a progressive pH reduction was observed. Mesophilic lactobacilli, mesophilic lactococci, and yeasts were the main microbial groups at the end of the fermentation, whereas Enterobacteriaceae decreased during fermentation. An evolution of the microbiota was revealed by metataxonomic analysis, with Lactiplantibacillus plantarum dominating in the late stage of fermentation, irrespective of the processing method and use of the starter. Greek-style preserves were crunchier and less fibrous than Spanish-style ones and were preferred by trained panelists.
Keywords: lactic acid bacteria, Lactiplantibacillus plantarum, metataxonomy, panel test, rock samphire
Procedia PDF Downloads 129
3564 Study of Adaptive Filtering Algorithms and the Equalization of Radio Mobile Channel
Authors: Said Elkassimi, Said Safi, B. Manaut
Abstract:
This paper presents a study of equalization of the transmission channel under the zero-forcing (ZF) and minimum mean square error (MMSE) criteria, applied to the BRAN A channel model, together with the adaptive filtering algorithms LMS and RLS used to estimate the parameters of the equalizer filter. The adaptive filters track the channel estimate and therefore follow the temporal variations of the channel, reducing the error in the transmitted signal. The performance of the ZF and MMSE equalizers is evaluated in the noiseless case, and the performance of the LMS and RLS algorithms is compared (a minimal LMS sketch follows).
Keywords: adaptive filtering, equalizer, LMS, RLS, BRAN A, Proakis (B), MMSE, ZF
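A minimal LMS adaptive equalizer sketch, assuming an invented three-tap channel and BPSK training symbols rather than the BRAN A model used in the paper:

```python
# LMS equalizer: a known training sequence passes through a simple FIR channel
# plus noise, and LMS adapts the equalizer taps to undo the channel.
import numpy as np

rng = np.random.default_rng(1)
N, n_taps, mu = 2000, 11, 0.01           # samples, equalizer length, LMS step
channel = np.array([1.0, 0.5, 0.2])      # hypothetical multipath channel

s = rng.choice([-1.0, 1.0], size=N)      # BPSK training symbols
x = np.convolve(s, channel)[:N]          # channel output
x += 0.05 * rng.standard_normal(N)       # additive noise

w = np.zeros(n_taps)                     # equalizer taps
delay = n_taps // 2                      # decision delay
for n in range(n_taps, N):
    u = x[n - n_taps:n][::-1]            # regressor (most recent sample first)
    y = w @ u                            # equalizer output
    e = s[n - delay] - y                 # error vs. delayed training symbol
    w += mu * e * u                      # LMS tap update

print("final |error|:", abs(e))          # should be small after convergence
```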
Procedia PDF Downloads 313
3563 Coupled Effect of Pulsed Current and Stress State on Fracture Behavior of Ultrathin Superalloy Sheet
Authors: Shuangxin Wu
Abstract:
Superalloy ultra-thin-walled components occupy a considerable proportion of aero engines and play an increasingly important role in structural weight reduction and performance improvement. To address problems such as high deformation resistance and poor formability at room temperature, a pulsed current can be introduced during processing to improve the plasticity of metal materials. However, the mechanism by which the pulsed current influences the forming limit of superalloy ultra-thin sheet is not clear, and clarifying it is of great significance for determining the material processing window and improving the micro-forming process. The effect of pulsed current on the microstructure evolution of superalloy thin plates was observed by optical microscopy (OM) and X-ray diffraction topography (XRT), by applying a pulsed current to GH3039 sheet with a thickness of 0.2 mm under plane strain and uniaxial tensile states. Compared with specimens without pulsed current at the same temperature, the internal void volume fraction is significantly reduced, reflecting the non-thermal effect of the pulsed current on the growth of micro-pores. Electrically deformed (ED) specimens have larger and deeper dimples, but the elongation is not significantly improved because the pulsed current promotes the void coalescence process, resulting in material fracture. The electro-plastic phenomenon is more evident in the plane strain state, which is closely related to the effect of stress triaxiality on void evolution under pulsed current.
Keywords: pulse current, superalloy, ductile fracture, void damage
Procedia PDF Downloads 72
3562 Link Between Intensity-Trajectories of Acute Postoperative Pain and Risk of Chronicization After Breast and Thoracopulmonary Surgery
Authors: Beloulou Mohamed Lamine, Fedili Benamar, Meliani Walid, Chaid Dalila
Abstract:
Introduction: The risk factors for the chronicization of postoperative pain are numerous and often intricately intertwined. Among these, the severity of acute postoperative pain is currently recognized as one of the most determining factors. Mastectomy and thoracotomy are described as among the most painful surgeries and the most likely to lead to chronic post-surgical pain (CPSP). Objective: To examine the aspects of acute postoperative pain potentially involved in the development of chronic pain following breast and thoracic surgery. Patients and Methods: A prospective study involving 164 patients was conducted over a six-month period. Postoperative pain (during mobilization) was assessed using a Visual Analog Scale (VAS) at various time points after surgery: Day 0, 1st, 2nd, 5th days, 1st and 6th months. Moderate to severe pain was defined as a VAS score ≥ 4. A comparative analysis (univariate analysis) of postoperative pain intensities at different evaluation phases was performed on patients with and without CPSP to identify potential associations with the risk of chronicization six months after surgery. Results: At the 6th month post-surgery, the incidence of CPSP was 43.0%. Moderate to severe acute postoperative pain (in the first five days) was observed in 64% of patients. The highest pain scores were reported among thoracic surgery patients. Comparative measures revealed a highly significant association between the presence of moderate to severe acute pain, especially lasting for ≥ 48 hours, and the occurrence of CPSP (p-value <0.0001). Likewise, the persistence of subacute pain (up to 4 to 6 weeks after surgery), especially of moderate to severe intensity, was significantly associated with the risk of chronicization at six months (p-value <0.0001). Conclusion: CPSP after breast and thoracic surgery remains a fairly common morbidity that profoundly affects the quality of life. Severe acute postoperative pain, especially if it is prolonged and/or with a slow decline in intensity, can be an important predictive factor for the risk of chronicization. Therefore, more effective and intensive management of acute postoperative pain, as well as longitudinal monitoring of its trajectory over time, should be an essential component of strategies for preventing chronic pain after surgery.
Keywords: chronic post-surgical pain, acute postoperative pain, breast and thoracic surgery, subacute postoperative pain, pain trajectory, predictive factor
Procedia PDF Downloads 73
3561 Image Recognition Performance Benchmarking for Edge Computing Using Small Visual Processing Unit
Authors: Kasidis Chomrat, Nopasit Chakpitak, Anukul Tamprasirt, Annop Thananchana
Abstract:
Internet of Things (IoT) devices and edge computing have become among the most discussed innovations for their potential to improve and disrupt traditional business and industry alike. New challenges have arisen, such as the COVID-19 pandemic, which endangered workforces and business processes. The business landscape has changed drastically in the ruined aftermath of the global pandemic, with the looming threats of a global energy crisis, global warming, and increasingly heated global politics that risk becoming a new Cold War. Emerging technologies such as edge computing and specially designed visual processing units therefore present great opportunities for business. The literature is reviewed on how the Internet of Things and this disruptive wave will affect business, how businesses need to adapt to changes in the market and the world, and how benchmark testing of newer consumer-marketed devices, such as IoT devices equipped with edge computing hardware, can increase efficiency and reduce the risks posed by current and looming crises. Throughout the paper, we explain, through brief introductions to cloud computing, edge computing, and the Internet of Things, why these technologies are innovations that will change traditional practice and how they will lead into the future.
Keywords: internet of things, edge computing, machine learning, pattern recognition, image classification
Procedia PDF Downloads 155
3560 Features of Normative and Pathological Realizations of Sibilant Sounds for Computer-Aided Pronunciation Evaluation in Children
Authors: Zuzanna Miodonska, Michal Krecichwost, Pawel Badura
Abstract:
Sigmatism (lisping) is a speech disorder in which sibilant consonants are mispronounced. The diagnosis of this phenomenon is usually based on auditory assessment. However, progress in speech analysis techniques creates the possibility of developing computer-aided sigmatism diagnosis tools. The aim of the study is to statistically verify whether specific acoustic features of sibilant sounds may be related to pronunciation correctness. Such knowledge can be of great importance when implementing classifiers and designing novel tools for automatic sibilant pronunciation evaluation. The study covers the analysis of various speech signal measures, including features proposed in the literature for the description of normative sibilant realization. Amplitudes and frequencies of three fricative formants (FF) are extracted based on local spectral maxima of the friction noise. Skewness, kurtosis, four normalized spectral moments (SM), and 13 mel-frequency cepstral coefficients (MFCC) with their 1st and 2nd derivatives (13 Delta and 13 Delta-Delta MFCC) are included in the analysis as well. The resulting feature vector contains 51 measures. The experiments are performed on a speech corpus containing words with the selected sibilant sounds (/ʃ, ʒ/) pronounced by 60 preschool children with proper pronunciation or with natural pathologies. In total, 224 /ʃ/ segments and 191 /ʒ/ segments are employed in the study. The Mann-Whitney U test is employed for the comparison of sigmatism and normative pronunciation (a minimal feature-and-test sketch follows). Statistically significant differences are obtained for most of the proposed features between children divided into these two groups at p < 0.05. All spectral moments and fricative formants appear to be distinctive between pathology and proper pronunciation. These metrics describe the friction noise characteristic of sibilants, which makes them particularly promising for use in sibilant evaluation tools. The correspondences found between phoneme feature values and an expert evaluation of pronunciation correctness encourage the involvement of speech analysis tools in the diagnosis and therapy of sigmatism. The proposed feature extraction methods could be used in computer-assisted sigmatism diagnosis or therapy systems.
Keywords: computer-aided pronunciation evaluation, sigmatism diagnosis, speech signal analysis, statistical verification
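The MFCC part of the feature set and the group comparison can be sketched with librosa and SciPy. The synthetic audio below stands in for the real /ʃ/ segments; group sizes and the effect built into the data are illustrative only.

```python
# 13 MFCCs + deltas + delta-deltas per segment, then a per-feature
# Mann-Whitney U test between two groups.
import numpy as np
import librosa
from scipy.stats import mannwhitneyu

def mfcc_features(y, sr=16000):
    """13 MFCCs + deltas + delta-deltas, averaged over the segment (39 values)."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    feats = np.vstack([mfcc, librosa.feature.delta(mfcc),
                       librosa.feature.delta(mfcc, order=2)])
    return feats.mean(axis=1)

rng = np.random.default_rng(0)
# Stand-ins for normative and pathological sibilant segments
group_a = [mfcc_features(rng.standard_normal(8000)) for _ in range(20)]
group_b = [mfcc_features(0.5 * rng.standard_normal(8000)) for _ in range(20)]
A, B = np.array(group_a), np.array(group_b)

for i in range(A.shape[1]):                      # per-feature group comparison
    stat, p = mannwhitneyu(A[:, i], B[:, i], alternative="two-sided")
    if p < 0.05:
        print(f"feature {i}: U={stat:.1f}, p={p:.4f} (significant)")
```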
Procedia PDF Downloads 301
3559 Epileptic Seizure Prediction by Exploiting Signal Transitions Phenomena
Authors: Mohammad Zavid Parvez, Manoranjan Paul
Abstract:
A seizure prediction method is proposed that extracts global features, using phase correlation between adjacent epochs to detect relative changes, and local features, using fluctuation/deviation within an epoch to determine fine changes in different EEG signals (a minimal phase-correlation sketch follows). A classifier and a regularization technique are applied for the reduction of false alarms and improvement of the overall prediction accuracy. The experiments show that the proposed method outperforms state-of-the-art methods and provides high prediction accuracy (97.70%) with a low false alarm rate, using EEG signals from different brain locations in a benchmark data set.
Keywords: epilepsy, seizure, phase correlation, fluctuation, deviation
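The global-feature idea can be sketched as phase correlation between two adjacent epochs: the normalized cross-power spectrum peaks sharply when one epoch closely matches a shifted copy of the other and flattens when the activity changes. Synthetic signals stand in for EEG here.

```python
# Phase correlation between two equal-length epochs via the cross-power spectrum.
import numpy as np

def phase_correlation(epoch_a, epoch_b):
    """Return the phase-correlation function of two equal-length epochs."""
    A, B = np.fft.fft(epoch_a), np.fft.fft(epoch_b)
    cross = np.conj(A) * B
    cross /= np.abs(cross) + 1e-12       # keep phase only
    return np.real(np.fft.ifft(cross))

rng = np.random.default_rng(0)
epoch1 = rng.standard_normal(512)
epoch2 = np.roll(epoch1, 37) + 0.1 * rng.standard_normal(512)  # shifted copy

pc = phase_correlation(epoch1, epoch2)
print("estimated shift:", int(np.argmax(pc)))  # ~37 for the shifted copy
print("peak height:", pc.max())                # near 1 for similar epochs
```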
Procedia PDF Downloads 467
3558 Effect of Fermentation Time on Some Functional Properties of Moringa (Moringa oleifera) Seed Flour
Authors: Ocheme B. Ocheme, Omobolanle O. Oloyede, S. James, Eleojo V. Akpa
Abstract:
The effect of fermentation time on some functional properties of Moringa (Moringa oleifera) seed flour was examined. Fermentation, an effective processing method used to improve the nutritional quality of plant foods, tends to affect the characteristics of food components and their behaviour in food systems just like other processing methods; hence the need for this study. Moringa seeds were fermented naturally by soaking in potable water and allowing them to stand for 12, 24, 48, and 72 hours. At the end of fermentation, the seeds were oven-dried at 60 °C for 12 hours and then milled into flour. Flour obtained from unfermented seeds served as control, giving a total of five flour samples. The functional properties were analyzed using standard methods. Fermentation significantly (p<0.05) increased the water holding capacity of Moringa seed flour from 0.86 g/g to 2.31 g/g; the highest value was observed after 48 hours of fermentation. The same trend was observed for oil absorption capacity, with values between 0.87 g/g and 1.91 g/g. Flour from unfermented Moringa seeds had a bulk density of 0.60 g/cm³, which was significantly (p<0.05) higher than the bulk densities of flours from seeds fermented for 12, 24, and 48 hours. Fermentation significantly (p<0.05) decreased the dispersibility of Moringa seed flours from 36% to 21, 24, 29, and 20% after 12, 24, 48, and 72 hours of fermentation, respectively. The flours' emulsifying capacities increased significantly (p<0.05) with increasing fermentation time, with values between 50 and 68%. The flour obtained from seeds fermented for 12 hours had a significantly (p<0.05) higher foaming capacity of 16%, while the flours obtained from seeds fermented for 0, 24, and 72 hours had the lowest foaming capacities of 9%. Flours from seeds fermented for 12 and 48 hours had better functional properties than flours from seeds fermented for 24 and 72 hours.
Keywords: fermentation, flour, functional properties, Moringa
Procedia PDF Downloads 688
3557 Distributed Cost-Based Scheduling in Cloud Computing Environment
Authors: Rupali, Anil Kumar Jaiswal
Abstract:
Cloud computing can be defined as one of the prominent technologies that lets a user change, configure, and access services online. It can be seen as a paradigm of computing that helps save a user's cost and time; practically, the use of cloud computing can be found in various fields like education, health, and banking. Cloud computing is an internet-dependent technology; thus, it is the major responsibility of Cloud Service Providers (CSPs) to take care of the data stored by users at data centers. Scheduling in a cloud computing environment plays a vital role: to achieve maximum utilization and user satisfaction, cloud providers need to schedule resources effectively. Job scheduling for cloud computing is analyzed in the following work, and CloudSim 3.0.3 is utilized to recreate the task computation and the distributed scheduling methods. This research work discusses job scheduling for a distributed processing environment; by exploring this issue, we find it works with minimum time and lower cost. In this work, two load balancing techniques have been employed, 'Throttled stack adjustment policy' and 'Active VM load balancing policy', with two brokerage services, 'Advanced Response Time' and 'Reconfigure Dynamically', to evaluate the VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy (a minimal Round Robin baseline sketch follows).
Keywords: physical machines, virtual machines, support for repetition, self-healing, highly scalable programming model
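The Round Robin baseline can be sketched outside CloudSim in a few lines; the VM speeds, cost rates, and job lengths below are invented for illustration, not the paper's simulation setup.

```python
# Round Robin policy: cloudlets (jobs) are assigned to virtual machines in
# cyclic order, and a simple cost / completion-time summary is computed.
from itertools import cycle

vms = [  # hypothetical VMs: (name, MIPS rating, cost per second)
    ("vm0", 1000, 0.010),
    ("vm1", 2000, 0.018),
    ("vm2", 1500, 0.014),
]
jobs = [12000, 5000, 22000, 9000, 15000, 7000]  # job lengths in MI

finish = {name: 0.0 for name, _, _ in vms}      # busy-until time per VM
cost = {name: 0.0 for name, _, _ in vms}

rr = cycle(vms)                                  # round-robin VM iterator
for length in jobs:
    name, mips, rate = next(rr)
    runtime = length / mips                      # seconds to execute the job
    finish[name] += runtime                      # jobs queue on each VM
    cost[name] += runtime * rate

print("makespan:", max(finish.values()))
print("total cost:", round(sum(cost.values()), 4))
```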
Procedia PDF Downloads 168
3556 A Hebbian Neural Network Model of the Stroop Effect
Authors: Vadim Kulikov
Abstract:
The classical Stroop effect is the phenomenon that it takes more time to name the ink color of a printed word if the word denotes a conflicting color than if it denotes the same color. Over the last 80 years, there have been many variations of the experiment revealing various mechanisms behind semantic, attentional, behavioral, and perceptual processing. The Stroop task is known to exhibit asymmetry: reading the words out loud is hardly dependent on the ink color, but naming the ink color is significantly influenced by incongruent words. This asymmetry is reversed if, instead of naming the color, one has to point at a corresponding color patch. Other debated aspects are the notion of automaticity and how much of the effect is due to semantic versus response-stage interference. Is automaticity a continuous or an all-or-none phenomenon? There are many models and theories in the literature tackling these questions, which will be discussed in the presentation. None of them, however, seems to capture all the findings at once. A computational model is proposed which is based on the philosophical idea, developed by the author, that the mind operates as a collection of different information processing modalities, such as different sensory and descriptive modalities, which produce emergent phenomena through mutual interaction and coherence. This is the framework theory, where 'framework' attempts to generalize the concepts of modality, perspective, and 'point of view'. The architecture of this computational model consists of blocks of neurons, each block corresponding to one framework. In the simplest case there are four: visual color processing, text reading, speech production, and attention selection modalities. In experiments where button pressing or pointing is required, a corresponding block is added. In the beginning, the weights of the neural connections are mostly set to zero. The network is trained using Hebbian learning to establish connections (corresponding to 'coherence' in framework theory) between these different modalities (a minimal Hebbian update sketch follows). The amount of data fed into the network is supposed to mimic the amount of practice a human encounters; in particular, it is assumed that converting written text into spoken words is a more practiced skill than converting visually perceived colors to spoken color names. After the training, the network performs the Stroop task. The RTs are measured in a canonical way, as these are continuous-time recurrent neural networks (CTRNN). The above-described aspects of the Stroop phenomenon, along with many others, are replicated. The model is similar to some existing connectionist models but, as will be discussed in the presentation, has many advantages: it predicts more data, and its architecture is simpler and biologically more plausible.
Keywords: connectionism, Hebbian learning, artificial neural networks, philosophy of mind, Stroop
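The core learning rule can be illustrated in a few lines. The sketch below is not the presented CTRNN model; it only shows how unequal amounts of Hebbian training give the word-reading pathway stronger weights than the color-naming pathway, the asymmetry the abstract attributes to practice.

```python
# Hebbian update dW = eta * outer(y, x): weights grow with co-activation.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, eta = 8, 8, 0.05             # block sizes and learning rate
W_word = np.zeros((n_out, n_in))          # word block  -> speech block
W_color = np.zeros((n_out, n_in))         # color block -> speech block

def hebb(W, x, y, eta):
    """One Hebbian learning step for co-activated pre/post patterns."""
    return W + eta * np.outer(y, x)

patterns = [(np.eye(n_in)[i], np.eye(n_out)[i]) for i in range(n_in)]
# Reading is assumed to be ~5x more practiced than color naming.
for _ in range(500):
    x, y = patterns[rng.integers(n_in)]
    W_word = hebb(W_word, x, y, eta)
for _ in range(100):
    x, y = patterns[rng.integers(n_in)]
    W_color = hebb(W_color, x, y, eta)

print("word pathway strength :", np.abs(W_word).sum())   # larger
print("color pathway strength:", np.abs(W_color).sum())  # smaller -> asymmetry
```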
Procedia PDF Downloads 266
3555 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model
Authors: T. Thein, S. Kalyar Myo
Abstract:
Humans use visual information to understand speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise and cross-talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is to find a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings, and the inherently low luminance and chrominance contrast between the lip and non-lip regions. Several researchers have been developing methods to overcome these problems, and it is well known that visual information about speech obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding speech by processing the movement of the lips. Therefore, a lip reading system is one of several supportive technologies for hearing-impaired or elderly people, and it is an active research area. The need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching system that shows hearing-impaired persons in Myanmar how to pronounce words precisely by identifying the features of lip movement. The proposed research builds a lip reading system for Myanmar consonants: one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah)) and two-syllable consonants (က (Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe)၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi)). In the proposed system, there are three subsystems: the first is the lip localization system, which localizes the lips in the digital inputs; the next is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition; and the final one is the classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with the Active Contour Model (ACM) will be used for lip movement feature extraction, and a Support Vector Machine (SVM) classifier is used to determine the class parameters and class labels for the training and testing sets (a minimal sketch of this pipeline follows). Then, experiments will be carried out on the recognition accuracy of Myanmar consonants using only visual information on lip movements. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing-impaired persons as a language learning application. It can also be useful for normal-hearing persons in noisy environments or in conditions where they need to find out what was said by other people without hearing the voice.
Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two-Dimensional Discrete Cosine Transform (2D-DCT)
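The 2D-DCT + LDA + SVM part of the pipeline can be sketched as follows. The synthetic "lip images" and class count are illustrative stand-ins; the real system would feed in ACM-localized mouth regions from the Myanmar corpus.

```python
# 2D-DCT coefficients of a lip image, reduced with LDA, classified with an SVM.
import numpy as np
from scipy.fftpack import dct
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def dct2_features(img, keep=8):
    """2D-DCT of a lip image; keep the low-frequency keep x keep block."""
    coeffs = dct(dct(img.T, norm="ortho").T, norm="ortho")
    return coeffs[:keep, :keep].ravel()

rng = np.random.default_rng(0)
n_classes, per_class = 4, 30             # stand-ins for 4 consonant classes
X, y = [], []
for c in range(n_classes):
    base = rng.standard_normal((32, 32)) # class-specific "lip shape"
    for _ in range(per_class):
        sample = base + 0.3 * rng.standard_normal((32, 32))
        X.append(dct2_features(sample)); y.append(c)
X, y = np.array(X), np.array(y)

lda = LinearDiscriminantAnalysis(n_components=n_classes - 1)
X_lda = lda.fit_transform(X, y)          # discriminative low-dim features
clf = SVC(kernel="rbf").fit(X_lda[::2], y[::2])      # train on half
print("accuracy:", clf.score(X_lda[1::2], y[1::2]))  # test on the other half
```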
Procedia PDF Downloads 286
3554 Quantification Model for Capability Evaluation of Optical-Based in-Situ Monitoring System for Laser Powder Bed Fusion (LPBF) Process
Authors: Song Zhang, Hui Wang, Johannes Henrich Schleifenbaum
Abstract:
Due to the increasing demand for quality assurance and reliability in additive manufacturing, the development of an advanced in-situ monitoring system is required to monitor process anomalies as input for further process control. Optical-based monitoring systems, such as CMOS cameras and NIR cameras, have proved to be effective ways to monitor geometrical distortion and exceptional thermal distribution. Therefore, many studies and applications focus on the availability of optical-based monitoring systems for detecting various types of defects. However, the capability of the monitoring setup itself is usually not quantified. In this study, a quantification model to evaluate the capability of monitoring setups for the LPBF machine, based on monitoring data acquired from a designed test artifact, is presented, and the design of the relevant test artifacts is discussed. The monitoring setup is evaluated based on its hardware properties, the location of the integration, and the light conditions. The data processing methodology used to quantify the capability for each aspect is discussed. The minimal detectable feature size of the monitoring setup in the application is estimated by quantifying its resolution and accuracy. The quantification model is validated using a CCD-camera-based monitoring system for LPBF machines in the laboratory with different setups. The results show that the model quantifies the monitoring system's performance, makes it possible to evaluate monitoring systems with the same concept but different setups for the LPBF process, and provides direction for improving the setups.
Keywords: data processing, in-situ monitoring, LPBF process, optical system, quantification model, test artifact
Procedia PDF Downloads 197
3553 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission
Authors: Tingwei Shu, Dong Zhou, Chengjun Guo
Abstract:
Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission under large data volumes, low SNR, and restricted bandwidth. With the development of deep learning, semantic communication has matured further and is gradually being applied in fields such as the Internet of Things, Unmanned Air Vehicle cluster communication, and remote sensing scenarios. We propose an improved semantic communication system for situations where the data volume is huge and spectrum resources are limited, as in the transmission of remote sensing images. At the transmitter, we need to extract the semantic information of remote sensing images, but there are some problems: a traditional semantic communication system based on a Convolutional Neural Network (CNN) cannot take into account both the global and the local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN-based one, to extract the image semantic features. In this paper, we first perform pre-processing operations on the remote sensing images to improve their resolution and thereby obtain images with more semantic information: we use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the preprocessed image (a minimal sketch of this step follows). We adopt the improved Vision-Transformer structure as the semantic coder to extract and transmit the semantic information of remote sensing images; it trains better on the huge data volume, extracts better image semantic features, and adopts a multi-layer self-attention mechanism to better capture the correlation between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism to linear, so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a CNN-based semantic communication system and with image coding methods such as BPG and JPEG, verifying that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
Keywords: semantic communication, transformer, wavelet transform, data processing
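The wavelet pre-processing step can be sketched with PyWavelets and OpenCV; the wavelet family, scale factor, and random stand-in image are illustrative assumptions, not the paper's exact configuration.

```python
# Decompose with a 2-D wavelet transform, upsample high-frequency subbands
# with bilinear and the low-frequency subband with bicubic interpolation,
# then reconstruct with the inverse transform.
import numpy as np
import cv2
import pywt

img = np.random.rand(256, 256).astype(np.float32)  # stand-in remote sensing image

cA, (cH, cV, cD) = pywt.dwt2(img, "haar")          # low- and high-frequency parts

scale = 2  # target upscaling factor for the subbands
size = (cA.shape[1] * scale, cA.shape[0] * scale)  # (width, height) for cv2
cA_up = cv2.resize(cA, size, interpolation=cv2.INTER_CUBIC)   # bicubic for LF
cH_up = cv2.resize(cH, size, interpolation=cv2.INTER_LINEAR)  # bilinear for HF
cV_up = cv2.resize(cV, size, interpolation=cv2.INTER_LINEAR)
cD_up = cv2.resize(cD, size, interpolation=cv2.INTER_LINEAR)

enhanced = pywt.idwt2((cA_up, (cH_up, cV_up, cD_up)), "haar")
print(img.shape, "->", enhanced.shape)             # resolution roughly doubled
```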
Procedia PDF Downloads 78
3552 The Differences and Similarities in Neurocognitive Deficits in Mild Traumatic Brain Injury and Depression
Authors: Boris Ershov
Abstract:
Depression is the most common mood disorder experienced by patients who have sustained a traumatic brain injury (TBI) and is associated with poorer cognitive and functional outcomes. However, in some cases, similar cognitive impairments can also be observed in depression. There is not enough information about the features of the cognitive deficit in patients with TBI in relation to patients with depression. TBI patients without depressive symptoms (TBInD, n = 25), TBI patients with depressive symptoms (TBID, n = 31), and 28 patients with bipolar II disorder (BP) were included in the study. There were no significant differences between participants with respect to age, handedness, and educational level. The patients' clinical status was determined using the Montgomery-Asberg Depression Rating Scale (MADRS). All participants completed a cognitive battery (the Brief Assessment of Cognition in Affective Disorders (BAC-A)). Additionally, the Rey-Osterrieth Complex Figure (ROCF) was used to assess visuospatial construction abilities and visual memory, as well as planning and organizational skills. Compared to BP, TBInD and TBID showed significant impairments in visuomotor abilities and verbal and visual memory. There were no significant differences between the BP and TBID groups in working memory, speed of information processing, and problem solving. The interference effect (cognitive inhibition) was significantly greater in TBInD and TBID compared to BP. Memory bias towards mood-related information in BP and TBID was greater in comparison with TBInD. These results suggest that depressive symptoms are associated with impairments in some executive functions, combined with a decrease in the speed of information processing.
Keywords: bipolar II disorder, depression, neurocognitive deficits, traumatic brain injury
Procedia PDF Downloads 347
3551 A Review on Cloud Computing and Internet of Things
Authors: Sahar S. Tabrizi, Dogan Ibrahim
Abstract:
Cloud Computing is a convenient model for on-demand network access to shared pools of configurable virtual computing resources, such as servers, networks, storage devices, applications, etc. The cloud serves as an environment in which companies and organizations can use infrastructure resources without making any purchases, accessing such resources wherever and whenever they need. Cloud computing is useful for overcoming a number of problems in various Information Technology (IT) domains such as Geographical Information Systems (GIS), Scientific Research, e-Governance Systems, Decision Support Systems, ERP, Web Application Development, Mobile Technology, etc. Companies can use Cloud Computing services to store large amounts of data that can be accessed from anywhere on Earth and at any time. Such services are rented by client companies, where the actual rent depends upon the amount of data stored on the cloud and the amount of processing power used in a given time period. The resources offered by the cloud service companies are flexible in the sense that the user companies can increase or decrease their storage or processing power requirements at any time, thus minimizing the overall rental cost of the service they receive. In addition, the Cloud Computing service providers offer fast processors and application software that can be shared by their clients. This is especially important for small companies with limited budgets which cannot afford to purchase their own expensive hardware and software. This paper is an overview of Cloud Computing, giving its types, principles, advantages, and disadvantages. In addition, the paper gives some example engineering applications of Cloud Computing and makes suggestions for possible future applications in the field of engineering.
Keywords: cloud computing, cloud systems, cloud services, IaaS, PaaS, SaaS
Procedia PDF Downloads 233
3550 An Image Processing Scheme for Skin Fungal Disease Identification
Authors: A. A. M. A. S. S. Perera, L. A. Ranasinghe, T. K. H. Nimeshika, D. M. Dhanushka Dissanayake, Namalie Walgampaya
Abstract:
Nowadays, skin fungal diseases are mostly found in people of tropical countries like Sri Lanka. A skin fungal disease is a particular kind of illness caused by fungi. These diseases have various dangerous effects on the skin and keep spreading over time, so it becomes important to identify them at their initial stage to keep them from spreading. This paper presents an automated skin fungal disease identification system implemented to speed up the diagnosis process by identifying skin fungal infections in digital images. An image of the diseased skin lesion is acquired, and a comprehensive computer vision and image processing scheme is used to process the image for disease identification. This includes colour analysis using the RGB and HSV colour models; texture classification using the Grey Level Run Length Matrix, Grey Level Co-Occurrence Matrix, and Local Binary Pattern (a minimal sketch of the latter two follows); object detection; shape identification; and more. This paper presents the approach and its outcome for the identification of the four most common skin fungal infections, namely Tinea Corporis, Sporotrichosis, Malassezia, and Onychomycosis. The main intention of this research is to provide an automated skin fungal disease identification system that increases diagnostic quality, shortens the time to diagnosis, and improves the efficiency of detection and successful treatment of skin fungal diseases.
Keywords: Circularity Index, Grey Level Run Length Matrix, Grey Level Co-Occurrence Matrix, Local Binary Pattern, Object detection, Ring Detection, Shape Identification
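Two of the named texture descriptors can be sketched with scikit-image. The synthetic patch and parameter choices (distances, angles, LBP radius) are illustrative assumptions, not the paper's settings.

```python
# GLCM properties plus an LBP histogram as a texture feature vector.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

rng = np.random.default_rng(0)
lesion = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in lesion patch

# Grey Level Co-Occurrence Matrix descriptors
glcm = graycomatrix(lesion, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast").mean()
homogeneity = graycoprops(glcm, "homogeneity").mean()

# Local Binary Pattern histogram (uniform patterns, radius 1, 8 neighbours)
lbp = local_binary_pattern(lesion, P=8, R=1, method="uniform")
hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

features = np.concatenate([[contrast, homogeneity], hist])
print(features.round(3))   # texture feature vector for a classifier
```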
Procedia PDF Downloads 232
3549 Experimental Correlation for Erythrocyte Aggregation Rate in Population Balance Modeling
Authors: Erfan Niazi, Marianne Fenech
Abstract:
Red blood cells (RBCs), or erythrocytes, tend to form chain-like aggregates called rouleaux under low shear rates. This is a reversible process, and rouleaux disaggregate at high shear rates. Therefore, RBC aggregation occurs in the microcirculation, where low shear rates are present, but does not occur under normal physiological conditions in large arteries. Numerical modeling of RBC interactions is fundamental in analytical models of blood flow in the microcirculation. Population Balance Modeling (PBM) is particularly useful for studying problems where particles agglomerate and break up in two-phase flow systems, in order to find flow characteristics. In this method, the elementary particles lose their individual identity due to continuous destruction and recreation by break-up and agglomeration (a minimal constant-kernel sketch follows the abstract). The aim of this study is to determine RBC aggregation in a dynamic situation. A simplified PBM was used previously to find the aggregation rate from a static observation of RBC aggregation in a drop of blood under the microscope. To find the aggregation rate in a dynamic situation, we propose an experimental setup testing RBC sedimentation. In this test, RBCs interact and aggregate to form rouleaux. In this configuration, disaggregation can be neglected due to the low shear stress. A high-speed camera is used to acquire video-microscopic pictures of the process. The sizes of the aggregates and the sedimentation velocity are extracted using image processing techniques. Based on data collected from 5 healthy human blood samples, the aggregation rate was estimated as 2.7×10³ (±0.3×10³) 1/s.
Keywords: red blood cell, rouleaux, microfluidics, image processing, population balance modeling
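The population-balance idea behind the analysis can be sketched as a Smoluchowski coagulation system with a constant aggregation kernel (pure aggregation, no break-up, matching the low-shear sedimentation assumption). The kernel value, number of size classes, and initial counts are illustrative, not the fitted experimental rate.

```python
# Constant-kernel Smoluchowski coagulation: evolution of rouleaux size classes.
import numpy as np
from scipy.integrate import solve_ivp

n_max = 10          # track aggregates of 1..n_max cells
beta = 1e-3         # constant aggregation kernel (illustrative units)

def smoluchowski(t, N):
    dN = np.zeros_like(N)
    for k in range(n_max):                       # class k holds (k+1)-cell rouleaux
        # birth: pairs whose sizes sum to k+1 merge into class k
        birth = 0.5 * sum(beta * N[i] * N[k - 1 - i] for i in range(k))
        death = beta * N[k] * N.sum()            # loss by aggregating with anything
        dN[k] = birth - death
    return dN

N0 = np.zeros(n_max); N0[0] = 1000.0             # start with singlets only
sol = solve_ivp(smoluchowski, (0.0, 5.0), N0, t_eval=np.linspace(0, 5, 6))
print(sol.y[:, -1].round(1))                     # size distribution at t = 5
```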
Procedia PDF Downloads 355
3548 Using Artificial Intelligence Technology to Build the User-Oriented Platform for Integrated Archival Service
Authors: Lai Wenfang
Abstract:
This study describes how artificial intelligence (AI) technology can be used to build a user-oriented platform for integrated archival services. The platform will be launched in 2020 by the National Archives Administration (NAA) in Taiwan. With the progression of information and communication technology (ICT), the NAA has built many systems to provide archival services. In order to cope with new challenges, such as new ICT, artificial intelligence, or blockchain, the NAA will use natural language processing (NLP) and machine learning (ML) techniques to build a training model and propose suggestions based on the data sent to the platform. The NAA expects the platform not only to automatically inform the sending agencies' staff which records catalogues violate the transfer or destruction rules, but also to use the model to find details hidden in the catalogues and suggest to NAA staff whether the records should be kept or not, shortening the auditing time. The platform keeps all users' browsing trails, so that it can predict which kinds of archives a user could be interested in, recommend search terms through visualization, and inform users of newly arrived archives. In addition, according to the Archives Act, NAA staff must spend a lot of time marking or removing personal data, classified data, etc., before archives are provided. To upgrade the archives access service process, the platform will automatically black out such data using text recognition patterns (a minimal sketch follows); staff only need to correct any errors and upload the corrected version, and as the platform learns, its accuracy will increase. In short, the purpose of the platform is to advance the government's digital transformation and implement the vision of a service-oriented smart government.
Keywords: artificial intelligence, natural language processing, machine learning, visualization
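A hedged sketch of the automatic blackout step: regular-expression patterns for a few personal-data formats. The specific patterns (including the Taiwanese ID format) are assumptions for illustration, not the platform's actual rules.

```python
# Replace matched personal-data spans with redaction blocks of equal length.
import re

PATTERNS = {
    "national_id": re.compile(r"\b[A-Z][12]\d{8}\b"),       # assumed Taiwan ID format
    "phone":       re.compile(r"\b09\d{2}-?\d{3}-?\d{3}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def black_out(text: str) -> str:
    """Black out each matched span, preserving its length."""
    for pattern in PATTERNS.values():
        text = pattern.sub(lambda m: "█" * len(m.group()), text)
    return text

record = "Contact: A123456789, 0912-345-678, archivist@example.gov.tw"
print(black_out(record))
```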
Procedia PDF Downloads 174
3547 Visco-Plastic Transition and Transfer of Plastic Material with SGF in Case of Linear Dry Friction Contact on Steel Surfaces
Authors: Lucian Capitanu, Virgil Florescu
Abstract:
Modeling specific tribological processes for laboratory studies often raises special problems. One such problem is reproducing the extremely high temperatures and contact pressures at which the injection or extrusion processing of thermoplastic materials takes place. Tribological problems occur mainly with thermoplastic materials reinforced with glass fibers, which quickly produce advanced wear of the barrels and screws of processing machines. Obtaining temperatures around 210 °C and higher, as well as pressures around 100 MPa, is very difficult in the laboratory. This paper reports a simple and convenient solution for obtaining these conditions, using sliding friction couples with linear contact: a cylindrical liner of glass-fiber-filled plastic on polished and super-finished steel plate samples. C120 steel, a steel for moulds, and Rp3 steel, a high-speed tool steel, were used. The pressure was obtained by continuously loading the liner in rotational movement up to its elasticity limits, where the dry friction coefficient reaches or exceeds the hardness value of 0.5 HB. By dissipation of the power lost to friction on the flat steel sample, contact temperatures are reached at the metal surface that meet and exceed 230 °C, which lies within the temperature range of injection processing. Contact pressures ranging from 16.3 to 36.4 MPa were obtained under the load and material conditions used, depending on the plastic material and the glass fiber content.
Keywords: plastics with glass fibers, dry friction, linear contact, contact temperature, contact pressure, experimental simulation
Procedia PDF Downloads 302
3546 Least Support Orthogonal Matching Pursuit (LS-OMP) Recovery Method for Invisible Watermarking Image
Authors: Israa Sh. Tawfic, Sema Koc Kayhan
Abstract:
In this paper, first, we propose the least support orthogonal matching pursuit (LS-OMP) algorithm to improve the performance of the orthogonal matching pursuit (OMP) algorithm (a minimal OMP baseline sketch follows). The LS-OMP algorithm adaptively chooses the optimum L (least part of the support) at each iteration. This modification helps to reduce the computational complexity significantly, and the algorithm performs better than OMP. Second, we give the procedure for invisible image watermarking in the presence of compressive sampling. The image reconstruction based on a set of watermarked measurements is performed using LS-OMP.
Keywords: compressed sensing, orthogonal matching pursuit, restricted isometry property, signal reconstruction, least support orthogonal matching pursuit, watermark
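The baseline OMP that LS-OMP improves on can be sketched in NumPy. This is the standard OMP, not the proposed LS-OMP: per the abstract, LS-OMP would instead add an adaptively chosen optimum L atoms per iteration. Problem sizes are illustrative.

```python
# OMP: greedily grow the support one atom per iteration and re-solve least
# squares on the chosen support.
import numpy as np

def omp(A, y, sparsity):
    """Recover a sparsity-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        support.append(k)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef         # orthogonal residual update
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, s = 64, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)        # measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                                      # compressive measurements

x_hat = omp(A, y, s)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```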
Procedia PDF Downloads 338
3545 Preparation of Carbon Nanofiber Reinforced HDPE Using Dialkylimidazolium as a Dispersing Agent: Effect on Thermal and Rheological Properties
Authors: J. Samuel, S. Al-Enezi, A. Al-Banna
Abstract:
High-density polyethylene reinforced with carbon nanofibers (HDPE/CNF) has been prepared via melt processing using dialkylimidazolium tetrafluoroborate (an ionic liquid) as a dispersion agent. The prepared samples were characterized by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC). The samples blended with the imidazolium ionic liquid exhibit higher thermal stability. DSC analysis showed clear miscibility of the ionic liquid in the HDPE matrix and a single endothermic peak. The melt rheological analysis of the HDPE/CNF composites was performed using an oscillatory rheometer. The influence of CNF and ionic liquid concentration (0, 0.5, and 1 wt%) on the viscoelastic parameters was investigated at 200 °C over an angular frequency range of 0.1 to 100 rad/s. The rheological analysis shows shear-thinning behavior for the composites (a minimal power-law fit sketch follows). An improvement in the viscoelastic properties was observed as the nanofiber concentration increased; the increase in the modulus values was attributed to the structural rigidity imparted by the high-aspect-ratio CNF. The modulus values and complex viscosity of the composites increased significantly at low frequencies. Composites blended with the ionic liquid exhibit slightly lower complex viscosity and modulus values than the corresponding HDPE/CNF compositions. Therefore, the reduction in melt viscosity, a result of the wetting effect of polymer-ionic liquid combinations, is an additional benefit for polymer composite processing.
Keywords: high-density polyethylene, carbon nanofibers, ionic liquid, complex viscosity
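The reported shear thinning can be quantified with a power-law fit of complex viscosity against angular frequency. The data points below are invented for illustration, not the measured rheometry.

```python
# Fit |eta*| = K * omega**(n - 1) by linear regression in log-log space;
# a power-law index n < 1 indicates shear-thinning behavior.
import numpy as np

# Hypothetical oscillatory-rheometry data at 200 °C
omega = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])   # rad/s
eta = np.array([9500, 7200, 5100, 3500, 2300, 1500, 950.0])  # Pa·s

slope, intercept = np.polyfit(np.log(omega), np.log(eta), 1)
n = slope + 1            # power-law index
K = np.exp(intercept)    # consistency coefficient

print(f"n = {n:.2f} (shear-thinning if n < 1), K = {K:.0f} Pa·s^n")
```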
Procedia PDF Downloads 127