Search results for: conjugate dirichlet kernel
102 Robust Medical Image Watermarking Based on Contourlet and Extraction Using ICA
Authors: S. Saju, G. Thirugnanam
Abstract:
In this paper, a medical image watermarking algorithm based on the contourlet transform is proposed. Medical image watermarking is a special subcategory of image watermarking in the sense that the images have special requirements: watermarked medical images should not differ perceptually from their original counterparts, because the clinical reading of the images must not be affected. Watermarking techniques based on the wavelet transform are reported in many studies, but the robustness and security obtained using the contourlet transform are better than those of the wavelet transform. The main challenge in exploring geometry in images comes from the discrete nature of the data. In this paper, the original image is decomposed to two levels using the contourlet transform, and the watermark is embedded in the resultant sub-bands. Sub-band selection is based on the value of the Peak Signal-to-Noise Ratio (PSNR) calculated between the watermarked and original images. To extract the watermark, kernel ICA is used; its novel characteristic is that it does not require the transformation process to extract the watermark. Simulation results show that the proposed scheme is robust against attacks such as salt-and-pepper noise, median filtering, and rotation. Performance measures such as PSNR and the similarity measure are evaluated and compared with the Discrete Wavelet Transform (DWT) to prove the robustness of the scheme. Simulations are carried out using MATLAB software.
Keywords: digital watermarking, independent component analysis, wavelet transform, contourlet
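As a hedged illustration of the PSNR-based sub-band selection criterion described above, the minimal sketch below computes PSNR between an original and a watermarked image; the function name and the use of NumPy are assumptions (the authors report using MATLAB), not the paper's actual code.

```python
import numpy as np

def psnr(original: np.ndarray, watermarked: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two images of equal shape."""
    mse = np.mean((original.astype(np.float64) - watermarked.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Sub-band selection idea: embed the watermark in each candidate sub-band in turn
# and keep the one whose reconstructed image gives the highest PSNR.
```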
101 Image Segmentation Using Active Contours Based on Anisotropic Diffusion
Authors: Shafiullah Soomro
Abstract:
The active contour is one of the image segmentation techniques, and its goal is to capture the required object boundaries within an image. In this paper, we propose a novel image segmentation method using an active contour method based on an anisotropic diffusion feature enhancement technique. Traditional active contour methods use only pixel information to perform segmentation, which produces inaccurate results when an image has noise or a complex background. We use the Perona and Malik diffusion scheme for feature enhancement, which sharpens the object boundaries and blurs the background variations. Our main contribution is the formulation of a new SPF (signed pressure force) function, which uses global intensity information across the regions. By minimizing an energy function within a partial differential equation framework, the proposed method captures semantically meaningful boundaries instead of catching uninteresting regions. Finally, we use a Gaussian kernel, which eliminates the problem of reinitialization of the level set function. We use several synthetic and real images from different modalities to validate the performance of the proposed method. In the experimental section, we found that the proposed method performs better both qualitatively and quantitatively and yields results with higher accuracy compared to other state-of-the-art methods.
Keywords: active contours, anisotropic diffusion, level-set, partial differential equations
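For readers unfamiliar with the Perona-Malik scheme mentioned above, here is a minimal sketch of the explicit diffusion update; the parameter values, periodic border handling via np.roll, and the exponential edge-stopping function are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def perona_malik(image: np.ndarray, iterations: int = 20, kappa: float = 30.0,
                 dt: float = 0.2) -> np.ndarray:
    """Explicit Perona-Malik diffusion: smooths homogeneous regions, preserves edges."""
    img = image.astype(np.float64).copy()
    for _ in range(iterations):
        # Finite differences toward the four neighbours (borders wrap via np.roll
        # to keep the sketch short).
        dn = np.roll(img, 1, axis=0) - img
        ds = np.roll(img, -1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        # Edge-stopping conductance (Perona-Malik exponential option).
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        img += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return img
```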
100 Survey of Methods for Solutions of Spatial Covariance Structures and Their Limitations
Authors: Joseph Thomas Eghwerido, Julian I. Mbegbu
Abstract:
In modelling environmental processes, we apply multidisciplinary knowledge to explain, explore, and predict the Earth's response to natural and human-induced environmental changes. Thus, in the analysis of spatio-temporal ecological and environmental studies, the spatial parameters of interest are always heterogeneous. This often negates the assumption of stationarity. Hence, the dispersion and transportation of atmospheric pollutants, landscape or topographic effects, and weather patterns depend on a good estimate of the spatial covariance. The generalized linear mixed model, although linear in the expected-value parameters, has a likelihood that varies nonlinearly as a function of the covariance parameters. As a consequence, computing estimates for a linear mixed model requires the iterative solution of a system of simultaneous nonlinear equations. In order to predict the variables at unsampled locations, we need to know the estimates of the presently sampled variables. The geostatistical methods for solving this spatial problem assume that the covariance is stationary (locally defined) and uniform in space, which is apparently not valid because spatial processes often exhibit nonstationary covariance; hence, they have globally defined covariance. We consider different existing methods of solution for the spatial covariance of space-time processes at unsampled locations. This covariance changes with location for multiple time sets, with some asymptotic properties.
Keywords: parametric, nonstationary, Kernel, Kriging
99 Arbitrarily Shaped Blur Kernel Estimation for Single Image Blind Deblurring
Authors: Aftab Khan, Ashfaq Khan
Abstract:
This research paper focuses on an interesting challenge faced in Blind Image Deblurring (BID). It relates to the estimation of arbitrarily shaped or non-parametric Point Spread Functions (PSFs) of motion blur caused by camera shake. These PSFs exhibit much more complex shapes than their parametric counterparts, and deblurring in this case requires intricate ways to estimate the blur and effectively remove it. This work introduces a novel blind deblurring scheme designed for deblurring images corrupted by arbitrarily shaped PSFs. It is based on a Genetic Algorithm (GA) and utilises the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) measure as the fitness function for arbitrarily shaped PSF estimation. The proposed BID scheme has been compared with other single-image motion deblurring schemes as benchmarks. Validation has been carried out on various blurred images, and results for both benchmark and real images are presented. No-reference image quality measures were used to quantify the deblurring results. For benchmark images, the proposed BID scheme using BRISQUE converges in the close vicinity of the original blurring functions.
Keywords: blind deconvolution, blind image deblurring, genetic algorithm, image restoration, image quality measures
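The sketch below illustrates the general shape of a GA loop for PSF estimation as described above. It is only a skeleton under stated assumptions: the Fourier-domain inverse filter and the Laplacian-variance sharpness proxy stand in for the paper's actual non-blind deconvolution step and BRISQUE fitness, and all population/mutation parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def deconvolve(blurred, psf, eps=1e-2):
    """Regularised inverse filtering in the Fourier domain (simple non-blind step)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    B = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(B * np.conj(H) / (np.abs(H) ** 2 + eps)))

def quality_score(image):
    """Sharpness proxy standing in for BRISQUE: negative Laplacian variance (lower = sharper)."""
    lap = (np.roll(image, 1, 0) + np.roll(image, -1, 0) +
           np.roll(image, 1, 1) + np.roll(image, -1, 1) - 4 * image)
    return -lap.var()

def evolve_psf(blurred, psf_size=15, pop_size=30, generations=50, mutation=0.005):
    pop = rng.random((pop_size, psf_size, psf_size))
    pop /= pop.sum(axis=(1, 2), keepdims=True)          # PSFs normalised to unit sum
    for _ in range(generations):
        fitness = np.array([quality_score(deconvolve(blurred, p)) for p in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # keep the sharpest half
        a = parents[rng.integers(len(parents), size=pop_size)]
        b = parents[rng.integers(len(parents), size=pop_size)]
        mask = rng.random(a.shape) < 0.5                       # uniform crossover
        pop = np.clip(np.where(mask, a, b) + mutation * rng.normal(size=a.shape), 0, None)
        pop /= pop.sum(axis=(1, 2), keepdims=True)
    return pop[np.argmin([quality_score(deconvolve(blurred, p)) for p in pop])]
```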
98 Online Handwritten Character Recognition for South Indian Scripts Using Support Vector Machines
Authors: Steffy Maria Joseph, Abdu Rahiman V, Abdul Hameed K. M.
Abstract:
Online handwritten character recognition is a challenging field in Artificial Intelligence. The classification success rate of current techniques decreases when the dataset involves similarity and complexity in stroke styles, variations in the number of strokes, and stroke characteristics. Malayalam is a complex South Indian language spoken by about 35 million people, especially in Kerala and the Lakshadweep islands. In this paper, we consider significant feature extraction for the similar stroke styles of Malayalam. The extracted feature set is also suitable for the recognition of other handwritten South Indian languages such as Tamil, Telugu, and Kannada. A classification scheme based on support vector machines (SVM) is proposed to improve the accuracy in classification and recognition of online Malayalam handwritten characters. SVM classifiers are well suited to real-world applications. The contribution of various features towards the accuracy in recognition is analysed, and the performance of different SVM kernels is also studied. A graphical user interface has been developed for reading and displaying the characters. Different writing styles are taken for each of the 44 letters. Various features are extracted and used for classification after preprocessing of the input data samples. The highest recognition accuracy of 97% is obtained experimentally with the best feature combination and a polynomial kernel in the SVM.
Keywords: SVM, MATLAB, Malayalam, South Indian scripts, online handwritten character recognition
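A minimal, hedged sketch of the kind of polynomial-kernel SVM classification described above, using scikit-learn rather than the authors' MATLAB setup; the grid values, scaling step, and function name are illustrative assumptions.

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_character_classifier(X, y):
    """X: stroke-feature vectors, y: character labels (one label per letter)."""
    model = make_pipeline(StandardScaler(), SVC(kernel="poly"))
    grid = GridSearchCV(model, {"svc__degree": [2, 3, 4], "svc__C": [1, 10, 100]}, cv=5)
    grid.fit(X, y)
    return grid.best_estimator_   # best polynomial-kernel SVM found by cross-validation
```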
97 Production and Leftovers Usage Policies to Minimize Food Waste under Uncertain and Correlated Demand
Authors: Esma Birisci, Ronald McGarvey
Abstract:
One of the common problems in the food service industry is demand uncertainty. This research presents a multi-criteria optimization approach to identify the efficient frontier of points lying between the minimum-waste and minimum-shortfall solutions within an uncertain demand environment. It also addresses correlation across demands for items (e.g., hamburgers are often demanded with french fries). Reducing overproduction food waste (and its corresponding environmental impacts) and avoiding shortfalls (leaving some customers hungry) need to be considered as two contradictory objectives in an all-you-care-to-eat food service operation. We identify optimal production adjustments relative to demand forecasts, demand thresholds for the utilization of leftovers, and percentages of demand to be satisfied by leftovers, considering two alternative metrics for overproduction waste: mass and greenhouse gas emissions. Demand uncertainty and demand correlations are addressed using a kernel density estimation approach. A statistical analysis of the changes in decision variable values across each of the efficient frontiers can then be performed to identify the key variables that could be modified to reduce the amount of wasted food with a minimal increase in shortfalls. We illustrate our approach with an application to empirical data from Campus Dining Services operations at the University of Missouri.
Keywords: environmental studies, food waste, production planning, uncertain and correlated demand
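A minimal sketch of how a kernel density estimate can capture correlated demand as described above; the demand numbers, planned production level, and two-item setup are invented for illustration and are not the Campus Dining Services data.

```python
import numpy as np
from scipy.stats import gaussian_kde

# demand_history: shape (n_items, n_days), e.g. hamburger and fries counts per day,
# so the joint KDE captures the correlation between the two items.
demand_history = np.array([[120, 135, 150, 160, 142, 128, 155],
                           [100, 118, 140, 150, 130, 112, 147]], dtype=float)

kde = gaussian_kde(demand_history)      # joint (correlated) demand distribution
scenarios = kde.resample(1000)          # 1000 joint demand scenarios, shape (2, 1000)

# These scenarios can feed a multi-criteria model trading off waste vs. shortfall,
# e.g. expected overproduction under a hypothetical plan of 150 portions per item:
overproduction = np.maximum(0.0, 150.0 - scenarios).mean(axis=1)
```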
96 Two-Dimensional Analysis and Numerical Simulation of the Navier-Stokes Equations for Principles of Turbulence around Isothermal Bodies Immersed in Incompressible Newtonian Fluids
Authors: Romulo D. C. Santos, Silvio M. A. Gama, Ramiro G. R. Camacho
Abstract:
In the present paper, the thermo-fluid dynamics of mixed convection (natural and forced convection) and the principles of turbulent flow around complex geometries have been studied. In these applications, it was necessary to analyze the interaction between the flow field and the heated immersed body with a constant temperature on its surface. This paper presents a study of an incompressible, two-dimensional Newtonian fluid around an isothermal geometry using the immersed boundary method (IBM) with the virtual physical model (VPM). The numerical code used for all simulations performs the calculation of temperature considering Dirichlet boundary conditions. Important dimensionless quantities are calculated, such as the Strouhal number (obtained using the Fast Fourier Transform, FFT), the Nusselt number, the drag and lift coefficients, velocity, and pressure. Streamlines and isothermal lines are presented for each simulation, showing the flow dynamics and patterns. The Navier-Stokes and energy equations for mixed convection were discretized using the finite difference method in space and second-order Adams-Bashforth and fourth-order Runge-Kutta methods in time, with the fractional step method used to couple the calculation of pressure, velocity, and temperature. For the simulation of turbulence, this work used the Smagorinsky and Spalart-Allmaras models. The first model is based on the local equilibrium hypothesis for small scales and the Boussinesq hypothesis, such that the energy injected into the turbulence spectrum equals the energy dissipated by the convective effects. The Spalart-Allmaras model uses only one transport equation for the turbulent viscosity. The results were compared with numerical data, validating the heat-transfer treatment together with the turbulence models. The IBM/VPM is a powerful tool to simulate flow around complex geometries. The results showed good numerical convergence in relation to the references adopted.
Keywords: immersed boundary method, mixed convection, turbulence methods, virtual physical model
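For reference, a minimal sketch of the Smagorinsky eddy-viscosity evaluation mentioned above, on a uniform 2D grid; the Smagorinsky constant, filter width, and array indexing convention are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, dy, cs=0.17):
    """Eddy viscosity nu_t = (Cs*Delta)^2 * |S| on a uniform 2D grid (u, v indexed [i_x, i_y])."""
    dudx, dudy = np.gradient(u, dx, dy)
    dvdx, dvdy = np.gradient(v, dx, dy)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    strain = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))  # |S| = sqrt(2 Sij Sij)
    delta = np.sqrt(dx * dy)                                   # filter width
    return (cs * delta) ** 2 * strain
```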
95 The Asymmetric Proximal Support Vector Machine Based on Multitask Learning for Classification
Authors: Qing Wu, Fei-Yan Li, Heng-Chang Zhang
Abstract:
Multitask learning support vector machines (SVMs) have recently attracted increasing research attention. Given several related tasks, single-task learning methods train each task separately and ignore the inner cross-relationship among tasks. However, multitask learning can capture the correlation information among tasks and achieve better performance by training all tasks simultaneously. In addition, the asymmetric squared loss function can better improve the generalization ability of the models on asymmetrically distributed data. In this paper, we first make two assumptions on the relatedness among tasks and propose two multitask learning proximal support vector machine algorithms, named MTL-a-PSVM and EMTL-a-PSVM, respectively. MTL-a-PSVM seeks a trade-off between the maximum expectile distance for each task model and the closeness of each task model to the general model. As an extension of MTL-a-PSVM, EMTL-a-PSVM can select appropriate kernel functions for shared information and private information. Besides, two corresponding special cases, named MTL-PSVM and EMTL-PSVM, are proposed by analyzing the asymmetric squared loss function; they can be easily implemented by solving linear systems. Experimental analysis of three classification datasets demonstrates the effectiveness and superiority of our proposed multitask learning algorithms.
Keywords: multitask learning, asymmetric squared loss, EMTL-a-PSVM, classification
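As a hedged illustration of the asymmetric squared (expectile-type) loss referred to above — the exact formulation used by the authors may differ — a minimal sketch:

```python
import numpy as np

def asymmetric_squared_loss(residual: np.ndarray, tau: float = 0.7) -> np.ndarray:
    """Expectile-type loss: weight tau on non-negative residuals, (1 - tau) on negative ones."""
    weight = np.where(residual >= 0, tau, 1.0 - tau)
    return weight * residual ** 2

# tau = 0.5 recovers the ordinary squared loss; tau != 0.5 penalises one side more,
# which is what makes the model less sensitive to asymmetrically distributed noise.
```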
94 A Mechanical Diagnosis Method Based on Vibration Fault Signal down-Sampling and the Improved One-Dimensional Convolutional Neural Network
Authors: Bowei Yuan, Shi Li, Liuyang Song, Huaqing Wang, Lingli Cui
Abstract:
Convolutional neural networks (CNNs) have received extensive attention in the field of fault diagnosis, and many fault diagnosis methods use a CNN for fault type identification. However, when the amount of raw data collected by sensors is massive, the neural network needs to perform a time-consuming classification task. In this paper, a mechanical fault diagnosis method based on vibration signal down-sampling and an improved one-dimensional convolutional neural network is proposed. Through robust principal component analysis, the low-rank feature matrix of a large amount of raw data can be separated, and down-sampling is then applied to reduce the subsequent computation. In the improved one-dimensional CNN, a smaller convolution kernel is used to reduce the number of parameters and the computational complexity, and regularization is introduced before the fully connected layer to prevent overfitting. In addition, the multi-connected layers can better generalize classification results without cumbersome parameter adjustments. The effectiveness of the method is verified by monitoring the signals of a centrifugal pump test bench, and the average test accuracy is above 98%. When compared with the traditional deep belief network (DBN) and support vector machine (SVM) methods, this method has better performance.
Keywords: fault diagnosis, vibration signal down-sampling, 1D-CNN
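A minimal PyTorch sketch of the idea above — small 1D convolution kernels plus regularization before the fully connected layers; the layer sizes, dropout rate, and input length are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class Small1DCNN(nn.Module):
    """Illustrative 1D CNN: small kernels, dropout regularisation before the FC layers."""
    def __init__(self, n_classes: int, in_len: int = 1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                       # regularisation before the FC layer
            nn.Linear(32 * (in_len // 4), 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                          # x: (batch, 1, in_len)
        return self.classifier(self.features(x))

# Example: logits = Small1DCNN(n_classes=4)(torch.randn(8, 1, 1024))
```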
93 Modeling Continuous Flow in a Curved Channel Using Smoothed Particle Hydrodynamics
Authors: Indri Mahadiraka Rumamby, R. R. Dwinanti Rika Marthanty, Jessica Sjah
Abstract:
Smoothed particle hydrodynamics (SPH) was originally created to simulate non-axisymmetric phenomena in astrophysics. However, the method still has several shortcomings, namely the high computational cost required to model values with high resolution and problems with boundary conditions. The difficulty of modeling boundary conditions occurs because the SPH method is affected by particle deficiency, due to the integral of the kernel function being truncated at the boundaries. This research aims to answer whether SPH modeling with a focus on boundary layer interactions and continuous flow can produce quantifiably accurate values with low computational cost. This research combines algorithms and coding in the main meandering-river program, a continuous-flow algorithm, and a solid-fluid algorithm, with the aim of obtaining quantitatively accurate results on solid-fluid interactions with continuous flow in a meandering channel using the SPH method. This study uses the Fortran programming language for modeling the SPH (Smoothed Particle Hydrodynamics) numerical method; the model takes the form of a U-shaped meandering open channel in 3D, where the channel walls are soil particles, and uses a continuous flow with a limited number of particles.
Keywords: smoothed particle hydrodynamics, computational fluid dynamics, numerical simulation, fluid mechanics
92 A Kernel-Based Method for MicroRNA Precursor Identification
Authors: Bin Liu
Abstract:
MicroRNAs (miRNAs) are small non-coding RNA molecules functioning in the transcriptional and post-transcriptional regulation of gene expression. The discrimination of real pre-miRNAs from false ones (such as hairpin sequences with similar stem-loops) is necessary for understanding the role of miRNAs in the control of cell life and death. Because of both their small size and sequence specificity, identification cannot be based on sequence information alone but requires structure information about the miRNA precursor to achieve satisfactory performance. K-mers are convenient and widely used features for modeling the properties of miRNAs and other biological sequences. However, k-mers suffer from the inherent limitation that if the parameter k is increased to incorporate long-range effects, certain k-mers will appear rarely or not at all; as a consequence, most k-mers are absent and a few are present only once. Thus, statistical learning approaches using k-mers as features become susceptible to noisy data once k becomes large. In this study, we proposed a gapped k-mer approach to overcome the disadvantages of k-mers and applied this method to the field of miRNA prediction. Combined with the structure status composition, a classifier called imiRNA-GSSC was proposed. We show improved performance compared to the original imiRNA-kmer and alternative approaches. Trained on human miRNA precursors, this predictor can achieve an accuracy of 82.34% for predicting 4022 pre-miRNAs from eleven species.
Keywords: gapped k-mer, imiRNA-GSSC, microRNA precursor, support vector machine
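A minimal sketch of gapped k-mer counting for RNA sequences, illustrating the idea described above; the choice of k, the number of gapped positions, and the function name are illustrative assumptions and not the exact imiRNA-GSSC feature set.

```python
from collections import Counter
from itertools import combinations

def gapped_kmer_features(seq: str, k: int = 6, gaps: int = 2) -> Counter:
    """Count length-k windows with `gaps` positions wildcarded ('N'), so k-mers that
    differ only at the gapped positions collapse into the same, less sparse feature."""
    counts = Counter()
    for start in range(len(seq) - k + 1):
        window = seq[start:start + k]
        for gap_pos in combinations(range(k), gaps):
            key = "".join("N" if i in gap_pos else c for i, c in enumerate(window))
            counts[key] += 1
    return counts

# Example on a short hairpin-like RNA fragment:
features = gapped_kmer_features("GGCUAGCUAGCUAGCC")
```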
91 Harmonization of Conflict Ahadith between Dissociation and Peaceful Co-Existence with Non-Muslims
Authors: Saheed Biodun Qaasim-Badmusi
Abstract:
A lot has been written on peaceful co-existence with non-Muslims in Islam, but little attention is paid to the conflict between Ahadith relating to dissociation from non-Muslims as a kernel of Islamic faith and those indicating peaceful co-existence with them. Undoubtedly, a proper understanding of seemingly contradictory prophetic traditions is an antidote to the bane of pervasive extremism in our society. This is what calls for the need to shed light on ‘Harmonization of Conflict Ahadith between Dissociation and Peaceful Co-existence with Non-Muslims’. It is in view of the above that efforts are made in this paper to collate Ahadith pertaining to dissociation from non-Muslims as well as co-existence with them. Consequently, a critical study of their authenticity is briefly explained before proceeding to an analysis of their linguistic and contextual meanings. To arrive at the accurate interpretation, harmonization is graphically applied. The result shows that dissociation from non-Muslims as a bedrock of Islamic faith could be explained in the Sunnah by the prohibition of participating in, or deriving satisfaction from, their religious matters and anti-Islamic activities. Also, freedom of apostasy, ignoring da`wah with wisdom, and seeking non-Muslims' support against Muslims are frowned upon in the Sunnah as phenomena of dissociation from non-Muslims. All of the aforementioned are strictly prohibited in the Sunnah, whether under the pretext of enhancing peaceful co-existence with non-Muslims or not. Peaceful co-existence with non-Muslims, on the other hand, is evidenced in the Sunnah by the permissibility of visiting the sick among them, exchanging gifts with them, forgiving the wrong among them, having good relationships with non-Muslim neighbours, maintaining ties of non-Muslim kinship, lawful business transactions with them, and the like. Finally, the degree of peaceful co-existence with non-Muslims is determined by their attitude towards Islam and Muslims.
Keywords: Ahadith, conflict, co-existence, non-Muslims
90 Application of Rapid Eye Imagery in Crop Type Classification Using Vegetation Indices
Authors: Sunita Singh, Rajani Srivastava
Abstract:
Revolutionary remote sensing technology plays a significant role in natural resource management and other Earth observation applications. One such application is the monitoring and classification of crop types at spatial and temporal scales, as it provides the latest, most precise, and cost-effective information. The present study emphasizes the use of three different vegetation indices from Rapid Eye imagery for crop type classification and also analyzes the effect of each index on classification accuracy. Rapid Eye imagery is in high demand and preferred in the agricultural and forestry sectors as it has red-edge and NIR bands. The three indices used in this study were the Normalized Difference Vegetation Index (NDVI), the Green Normalized Difference Vegetation Index (GNDVI), and the Normalized Difference Red Edge Index (NDRE), all of which incorporated the red-edge band. The study area is the Varanasi district of Uttar Pradesh, India, and the Radial Basis Function (RBF) kernel was used for the Support Vector Machine (SVM) classification. Classification was performed with these three vegetation indices, and the contribution of each index to image classification accuracy was also tested with single-band classification. The highest classification accuracy of 85% was obtained using the three vegetation indices. The study concluded that NDRE makes the highest contribution to classification accuracy compared to the other vegetation indices and that Rapid Eye imagery can achieve satisfactory classification accuracy without the original bands.
Keywords: GNDVI, NDRE, NDVI, rapid eye, vegetation indices
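The standard definitions of the three indices named above can be computed directly from the reflectance bands; the sketch below uses NumPy and a small epsilon to avoid division by zero, both of which are assumptions rather than the study's exact processing chain.

```python
import numpy as np

def vegetation_indices(green, red, red_edge, nir, eps=1e-9):
    """NDVI, GNDVI and NDRE from reflectance band arrays of equal shape."""
    ndvi  = (nir - red) / (nir + red + eps)
    gndvi = (nir - green) / (nir + green + eps)
    ndre  = (nir - red_edge) / (nir + red_edge + eps)
    return ndvi, gndvi, ndre

# The stacked index images can then be classified per pixel with an RBF-kernel SVM,
# e.g. sklearn.svm.SVC(kernel="rbf"), as described in the abstract.
```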
89 Evolving Convolutional Filter Using Genetic Algorithm for Image Classification
Authors: Rujia Chen, Ajit Narayanan
Abstract:
Convolutional neural networks (CNN), as typically applied in deep learning, use layer-wise backpropagation (BP) to construct filters and kernels for feature extraction. Such filters are 2D or 3D groups of weights for constructing feature maps at subsequent layers of the CNN and are shared across the entire input. BP, as a gradient descent algorithm, has well-known problems of getting stuck at local optima. The use of genetic algorithms (GAs) for evolving weights between layers of standard artificial neural networks (ANNs) is a well-established area of neuroevolution. In particular, the use of crossover techniques when optimizing weights can help to overcome problems of local optima. However, the application of GAs for evolving the weights of filters and kernels in CNNs is not yet an established area of neuroevolution. In this paper, a GA-based filter development algorithm is proposed. The results of the proof-of-concept experiments described in this paper show that the proposed GA can find filter weights through evolutionary techniques rather than BP learning. For some simple classification tasks, like geometric shape recognition, the proposed algorithm can achieve 100% accuracy. The results for MNIST classification, while not as good as those obtainable through standard filter learning with BP, show that filter and kernel evolution warrants further investigation as a new subarea of neuroevolution for deep architectures.
Keywords: neuroevolution, convolutional neural network, genetic algorithm, filters, kernels
88 Experiments on Weakly-Supervised Learning on Imperfect Data
Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler
Abstract:
Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and an accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data; e.g., the area under the curve for some models is higher than 80% when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation
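A hedged sketch of the kind of label-noise simulation described above: a linear-kernel SVM is trained on labels with a 40% error rate and evaluated against a clean "gold standard" test set. The synthetic dataset, noise model, and parameters are illustrative assumptions, not the authors' clinical data or simulation design.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

X, y = make_classification(n_samples=5000, n_features=20, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Corrupt 40% of the training labels to mimic an imperfect (weak) reference standard.
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.40
noisy[flip] = 1 - noisy[flip]

model = SVC(kernel="linear").fit(X_train, noisy)
print("accuracy of the noisy training labels:", 1 - flip.mean())
print("model accuracy on the clean test set :", model.score(X_test, y_test))
```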
87 Development of a Humanized Anti-CEA Antibody for the Near Infrared Optical Imaging of Cancer
Authors: Paul J Yazaki, Michael Bouvet, John Shively
Abstract:
Surgery for solid gastrointestinal (GI) cancers such as pancreatic, colorectal, and gastric adenocarcinoma remains the mainstay of curative therapy. Complete resection of the primary tumor with negative margins (R0 resection), its draining lymph nodes, and distant metastases offers the optimal surgical benefit. Real-time fluorescence-guided surgery (FGS) promises to improve GI cancer outcomes and is rapidly advancing with tumor-specific antibody-conjugated fluorophores that can be imaged using near-infrared (NIR) technology. Carcinoembryonic Antigen (CEA) is a non-internalizing tumor antigen validated as a surface tumor marker expressed in >95% of colorectal, 80% of gastric, and 60% of pancreatic adenocarcinomas. Our humanized anti-CEA hT84.66-M5A (M5A) monoclonal antibody (mAb) was conjugated with the NHS-IRDye800CW fluorophore and shown to rapidly and effectively NIR optically image orthotopically implanted human colon and pancreatic cancer in mouse models. A limitation observed is that these NIR-800 dye-conjugated mAbs have rapid clearance from the blood, leading to a narrow timeframe for FGS and requiring high doses for effective optical imaging. We developed a novel antibody-fluorophore conjugate by incorporating a PEGylated sidearm linker to shield or mask the IR800 dye’s hydrophobicity, which effectively extended the agent’s blood circulation half-life, leading to increased tumor sensitivity and lowered normal hepatic uptake. We hypothesized that our unique anti-CEA mAb linked to the fluorophore IR800 by a PEGylated sidewinder, M5A-SW-IR800, will become the next-generation optical imaging agent: safe, effective, and widely applicable for intraoperative image-guided surgery in CEA-expressing GI cancers.
Keywords: optical imaging, anti-CEA, cancer, fluorescence-guided surgery
86 An Atomistic Approach to Define Continuum Mechanical Quantities in One Dimensional Nanostructures at Finite Temperature
Authors: Smriti, Ajeet Kumar
Abstract:
We present a variant of the Irving-Kirkwood procedure to obtain the microscopic expressions of the cross-section-averaged continuum fields, such as internal force and moment, in one-dimensional nanostructures in the non-equilibrium setting. In one-dimensional continuum theories for slender bodies, we deal with quantities such as mass, linear momentum, angular momentum, and strain energy densities, all defined per unit length. These quantities are obtained by integrating the corresponding pointwise (per unit volume) quantities over the cross-section of the slender body. However, no well-defined cross-section exists for these nanostructures at finite temperature. We thus define the cross-section of a nanorod to be an infinite plane which remains fixed in space as time progresses and define the above continuum quantities by integrating the pointwise microscopic quantities over this infinite plane. The method yields explicit expressions of both the potential and kinetic parts of the above quantities. We further specialize these expressions for helically repeating one-dimensional nanostructures in order to use them in molecular dynamics studies of extension, torsion, and bending of such nanostructures. As the Irving-Kirkwood procedure does not yield expressions of stiffnesses, we resort to a thermodynamic equilibrium approach to obtain the expressions of axial force, twisting moment, bending moment, and the associated stiffnesses by taking the first and second derivatives of the Helmholtz free energy with respect to conjugate strain measures. The equilibrium approach yields expressions independent of kinetic terms. We then establish the equivalence of the expressions obtained using the two approaches. The derived expressions are used to understand the extension, torsion, and bending of single-walled carbon nanotubes at non-zero temperatures.
Keywords: thermoelasticity, molecular dynamics, one dimensional nanostructures, nanotube buckling
85 Utilizing Topic Modelling for Assessing Mhealth App’s Risks to Users’ Health before and during the COVID-19 Pandemic
Authors: Pedro Augusto Da Silva E Souza Miranda, Niloofar Jalali, Shweta Mistry
Abstract:
BACKGROUND: Software developers utilize automated solutions to scrape users’ reviews and extract meaningful knowledge to identify problems (e.g., bugs, compatibility issues) and possible enhancements (e.g., users’ requests) to their solutions. However, most of these solutions do not consider the health risk aspects to users. Recent works have shed light on the importance of including health risk considerations in the development cycle of mHealth apps to prevent harm to their users. PROBLEM: The COVID-19 pandemic in Canada (and the world) is currently forcing physical distancing upon the general population. This new lifestyle has made the usage of mHealth applications more essential than ever, with a projected market forecast of 332 billion dollars by 2025. However, this surge in mHealth usage comes with possible risks to users’ health due to mHealth app problems (e.g., a wrong insulin dosage indication due to a UI error). OBJECTIVE: This work aims to raise awareness among mHealth developers of the importance of considering risks to users’ health within their development lifecycle. Moreover, this work also aims to help mHealth developers with a proof-of-concept (POC) solution to understand, process, and identify possible health risks to users of mHealth apps based on users’ reviews. METHODS: We conducted a mixed-method study. We developed a crawler to mine the negative reviews of two samples of mHealth apps (my fitness, medisafe) from Google Play store users. For each mHealth app, we performed the following steps: the reviews are divided into two groups, before COVID-19 (reviews submitted before 15 Feb 2019) and during COVID-19 (reviews submitted from 16 Feb 2019 to Dec 2020); for each period, the Latent Dirichlet Allocation (LDA) topic model is used to identify the different clusters of reviews based on similar review topics; the topics before and during COVID-19 are then compared, and significant differences in the frequency and severity of similar topics are identified. RESULTS: We successfully scraped, filtered, and processed the reviews and identified health-related topics in both qualitative and quantitative approaches. The results demonstrated the similarity between topics before and during COVID-19.
Keywords: natural language processing (NLP), topic modeling, mHealth, COVID-19, software engineering, telemedicine, health risks
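A minimal, hedged sketch of LDA topic modelling over two review groups as described above, using scikit-learn; the number of topics, vectorizer settings, and function name are illustrative assumptions rather than the study's configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def review_topics(reviews, n_topics=8, n_top_words=10):
    """reviews: list of negative-review strings for one period (before or during COVID-19)."""
    vectorizer = CountVectorizer(stop_words="english", max_features=5000)
    doc_term = vectorizer.fit_transform(reviews)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(doc_term)
    vocab = vectorizer.get_feature_names_out()
    return [[vocab[i] for i in topic.argsort()[-n_top_words:][::-1]]
            for topic in lda.components_]

# topics_before = review_topics(reviews_before)
# topics_during = review_topics(reviews_during)
# Comparing the two lists highlights topics whose frequency changed with COVID-19.
```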
84 Prediction of Music Track Popularity: A Machine Learning Approach
Authors: Syed Atif Hassan, Luv Mehta, Syed Asif Hassan
Abstract:
Hit song science is a field of investigation wherein machine learning techniques are applied to music tracks in order to extract features from audio signals that capture information which could explain the popularity of the respective tracks. Record companies invest huge amounts of money into recruiting fresh talent and churning out new music each year. Gaining insight into the basis of why a song becomes popular would result in tremendous benefits for the music industry. This paper aims to extract basic musical and more advanced acoustic features from songs while also taking into account external factors that play a role in making a particular song popular. We use a dataset derived from popular Spotify playlists divided by genre. We use ten genres (blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae, rock), chosen on the basis of the clear-to-ambiguous delineation of their typical sound. We feed these features into three different classifiers, namely an SVM with an RBF kernel, a deep neural network, and a recurrent neural network, to build separate predictive models, choosing the best-performing model at the end. Predicting song popularity is particularly important for the music industry, as it would allow record companies to produce better content for the masses, resulting in a more competitive market.
Keywords: classifier, machine learning, music tracks, popularity, prediction
83 Development of a Cost Effective Two Wheel Tractor Mounted Mobile Maize Sheller for Small Farmers in Bangladesh
Authors: M. Israil Hossain, T. P. Tiwari, Ashrafuzzaman Gulandaz, Nusrat Jahan
Abstract:
The two-wheel tractor (power tiller) is a common tillage tool in Bangladeshi agriculture, giving easy access to fragmented land at a price affordable to small farmers. A traditional maize sheller needs to be carried from place to place by hooking it to a two-wheel tractor (2WT) and set up again for the shelling operation, which lengthens the preparation time for maize shelling. The mobile maize sheller eliminates the transportation problem and can start the shelling operation instantly at any place, as it remains attached to the 2WT. It is an axial-flow sheller with a counterclockwise-rotating cylinder, and the grain is separated by the frictional force between the spike teeth and the concave. The maize sheller is attached with nuts and bolts in front of the engine base of the 2WT. The operating power of the sheller comes from the flywheel of the tractor engine through a V-belt pulley arrangement. The average shelling capacity of the mobile sheller is 2.0 t/hr, with 2.2% broken kernels and a shelling efficiency of 97%. The average maize shelling cost is Tk. 0.22/kg, compared with a traditional custom hire rate of Tk. 1.0/kg (1 US$ = Tk. 78.0). The 2WT service provider can transport the mobile maize sheller over long distances while in the operator’s seating position. Manufacturers have started the fabrication of the mobile maize sheller. This mobile maize sheller is also suitable for other countries where the 2WT is available for farming operations.
Keywords: cost effective, mobile maize sheller, maize shelling capacity, small farmers, two wheel tractor
82 Effect of Modified Atmosphere Packaging and Storage Temperatures on Quality of Shelled Raw Walnuts
Authors: M. Javanmard
Abstract:
This study was aimed at analyzing the effects of modified atmosphere packaging (MAP) and preservation conditions on the quality of packaged fresh walnut kernels. A central composite design was used to evaluate the effect of oxygen (0-10%), carbon dioxide (0-10%), and temperature (4-26 °C) on the qualitative characteristics of the walnut kernels. The response surface technique was also used to find the optimal conditions for the interactive effects of the factors, as well as to estimate the best process conditions using the least amount of testing. The measured quality parameters were: peroxide value, color, weight loss, mould and yeast counts, and sensory evaluation. The results showed that the fitted model for peroxide value, color, weight loss, and sensory evaluation is significant (p < 0.001): an increase in temperature causes the peroxide value, color variation, and weight loss to increase and reduces the overall acceptability of the walnut kernels. An increase in oxygen percentage caused the color variation and peroxide value to increase and resulted in lower overall acceptability of the walnuts. An increase in CO2 percentage caused the peroxide value to decrease but did not significantly affect the other indices (p ≥ 0.05). Mould and yeast were not found in any samples. The optimal packaging conditions to achieve maximum walnut quality are 1.46% oxygen, 10% carbon dioxide, and a temperature of 4 °C.
Keywords: shelled walnut, MAP, quality, storage temperature
81 Microscopic Analysis of Bulk, High-Tc Superconductors by Transmission Kikuchi Diffraction
Authors: Anjela Koblischka-Veneva, Michael R. Koblischka
Abstract:
In this contribution, Transmission Kikuchi Diffraction (TKD, sometimes called t-EBSD) is applied to bulk, melt-grown YBa₂Cu₃O₇ (YBCO) superconductors prepared by the MTMG (melt-textured melt-grown) technique and the infiltration growth (IG) technique. The TEM slices required for the analysis were prepared by means of Focused Ion-Beam (FIB) milling using mechanically polished sample surfaces, which enables a proper selection of the regions of interest for the investigations. The required optical transparency was reached by an additional polishing step of the resulting surfaces using FIB Ga-ion and Ar-ion milling. The improved spatial resolution of TKD enabled the investigation of the tiny Y₂BaCuO₅ (Y-211) particles, having a diameter of about 50-100 nm, embedded within the YBCO matrix, and of other added secondary-phase particles. With the TKD technique, the microstructural properties of the YBCO matrix are studied in detail. It is observed that the matrix shows the effects of stress/strain, depending on the size and distribution of the embedded particles, which are important for providing additional flux pinning centers in such superconducting bulk samples. Using the Kernel Average Misorientation (KAM) maps, the strain induced in the superconducting matrix around the particles, which increases the flux pinning effectiveness, can be clearly revealed. This type of analysis of the EBSD/TKD data is, therefore, also important for other material systems where nanoparticles are embedded in a matrix.
Keywords: transmission Kikuchi diffraction, EBSD, TKD, embedded particles, superconductors YBa₂Cu₃O₇
80 Utilization of Bottom Ash as Catalyst in Biomass Steam Gasification for Hydrogen and Syngas Production: Lab Scale Approach
Authors: Angga Pratama Herman, Muhammad Shahbaz, Suzana Yusup
Abstract:
Bottom ash is a solid waste from thermal power plants, and it is usually disposed of into landfills and ash ponds. These disposal methods are not sustainable, since new land needs to be acquired as the landfills and ash ponds fill to capacity. Bottom ash is also classified as a hazardous material, which means these disposal methods may have contributed to environmental effects in the area. Hence, more research needs to be done to explore the potential of recycling bottom ash into a more useful product. The objective of this research is to explore the potential of utilizing bottom ash as a catalyst in biomass steam gasification. In this research, bottom ash was used as a catalyst in the gasification of Palm Kernel Shell (PKS) using a thermogravimetric analyzer coupled with mass spectrometry (TGA/MS). The effects of temperature (650-750 °C), particle size (0.5-1.0 mm), and bottom ash percentage (2%-10%) were studied with and without steam. The experimental arrays were designed using the Central Composite Design (CCD) method. Results show that the maximum yield of hydrogen gas was 34.3 mol% for gasification without steam and 61.4 mol% with steam. A similar trend was observed for syngas production: the maximum syngas yield was 59.5 mol% without steam and reached up to 81.5 mol% with the use of steam. The optimal condition for both product gases was a temperature of 700 °C, a particle size of 0.75 mm, and a coal bottom ash fraction of 0.06. In conclusion, the use of bottom ash as a catalyst is possible for biomass steam gasification, and the product gas compositions are comparable with previous research; however, the results need to be validated in bench- or pilot-scale studies.
Keywords: bottom ash, biomass steam gasification, catalyst, lab scale
79 A Moroccan Natural Solution for Treating Industrial Effluents: Evaluating the Effectiveness of Using Date Kernel Residues for Purification
Authors: Ahmed Salim, A. El Bouari, M. Tahiri, O. Tanane
Abstract:
This research aims to develop and comprehensively characterize a cost-effective activated carbon derived from date residues, with a focus on optimizing its physicochemical properties to achieve superior performance in a variety of applications. The samples were synthesized via a chemical activation process utilizing phosphoric acid (H₃PO₄) as the activating agent. Activated carbon, produced through this method, functions as a vital adsorbent for the removal of contaminants, with a specific focus on methylene blue, from industrial wastewater. This study meticulously examined the influence of various parameters, including carbonization temperature and duration, on both the combustion properties and adsorption efficiency of the resultant material. Through extensive analysis, the optimal conditions for synthesizing the activated carbon were identified as a carbonization temperature of 600°C and a duration of 2 hours. The activated carbon synthesized under optimized conditions demonstrated an exceptional carbonization yield and methylene blue adsorption efficiency of 99.71%. The produced carbon was subsequently characterized using X-ray diffraction (XRD) analysis. Its effectiveness in the adsorption of methylene blue from contaminated water was then evaluated. A comprehensive assessment of the adsorption capacity was conducted by varying parameters such as carbon dosage, contact time, initial methylene blue concentration, and pH levels.
Keywords: environmental pollution, adsorbent, activated carbon, phosphoric acid, date kernels, pollutants, adsorption
78 Effects of the Fractional Order on Nanoparticles in Blood Flow through the Stenosed Artery
Authors: Mohammed Abdulhameed, Sagir M. Abdullahi
Abstract:
In this paper, based on the applications of nanoparticles, the flow of blood carrying nanoparticles through a stenosed artery is studied. The blood is acted on by a periodic body acceleration, an oscillating pressure gradient, and an external magnetic field. The mathematical formulation is based on the Caputo-Fabrizio fractional derivative without a singular kernel. The model of ordinary blood, corresponding to time derivatives of integer order, is obtained as a limiting case. Analytical solutions for the blood velocity and temperature distribution are obtained by means of the Hankel and Laplace transforms. The effects of the order of the Caputo-Fabrizio time-fractional derivative and of three different nanoparticles, i.e., Fe3O4, TiO2, and Cu, are studied. The results highlight that models with fractional derivatives bring significant differences compared to the ordinary model. It is observed that the addition of Fe3O4 nanoparticles reduced the resistance impedance of the blood flow and the temperature distribution through bell-shaped stenosed arteries as compared to TiO2 and Cu nanoparticles. On entering the stenosed area, the blood temperature increases slightly, but it then increases considerably and reaches its maximum value in the stenosis throat. The shear stress varies from a constant in the area without stenosis and is higher in the layers located far from the longitudinal axis of the artery. This fact can be important for some clinical applications in therapeutic procedures.
Keywords: nanoparticles, blood flow, stenosed artery, mathematical models
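For reference, the Caputo-Fabrizio time-fractional derivative with a non-singular exponential kernel, as commonly defined in the literature; the lower limit 0 and the normalisation function M(α) (with M(0) = M(1) = 1) follow the usual convention and may differ in notation from the paper.

```latex
\mathcal{D}_t^{\alpha} f(t) \;=\; \frac{M(\alpha)}{1-\alpha}
\int_{0}^{t} f'(\tau)\,
\exp\!\left[-\frac{\alpha\,(t-\tau)}{1-\alpha}\right]\mathrm{d}\tau,
\qquad 0<\alpha<1 .
```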
77 Simultaneous Determination of Six Characterizing/Quality Parameters of Biodiesels via 1H NMR and Multivariate Calibration
Authors: Gustavo G. Shimamoto, Matthieu Tubino
Abstract:
The characterization and the quality of biodiesel samples are checked by determining several parameters. Considering the large number of analyses to be performed, as well as the disadvantages of the use of toxic solvents and the generation of waste, multivariate calibration is suggested to reduce the number of tests. In this work, hydrogen nuclear magnetic resonance (1H NMR) spectra were used to build multivariate models, from partial least squares (PLS) regression, in order to determine simultaneously six important characterizing and/or quality parameters of biodiesels: density at 20 ºC, kinematic viscosity at 40 ºC, iodine value, acid number, oxidative stability, and water content. Biodiesels from twelve different oil sources were used in this study: babassu, brown flaxseed, canola, corn, cottonseed, macauba almond, microalgae, palm kernel, residual frying, sesame, soybean, and sunflower. 1H NMR reflects the structures of the compounds present in the biodiesel samples and showed suitable correlations with the six parameters. The PLS models were constructed with between 5 and 7 latent variables; the obtained values of r(cal) and r(val) were greater than 0.994 and 0.989, respectively. In addition, the models were considered suitable to predict all six parameters for external samples, taking into account the analytical speed of the method. Thus, the alliance between 1H NMR and PLS proved to be appropriate for characterizing and evaluating the quality of biodiesels, significantly reducing analysis time, the consumption of reagents/solvents, and waste generation. Therefore, the proposed methods can be considered to adhere to the principles of green chemistry.
Keywords: biodiesel, multivariate calibration, nuclear magnetic resonance, quality parameters
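A minimal, hedged sketch of multi-target PLS calibration of the kind described above, using scikit-learn; the number of latent variables, the cross-validation scheme, and the function name are illustrative assumptions rather than the authors' exact modelling procedure.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def fit_pls(X: np.ndarray, Y: np.ndarray, n_components: int = 6) -> PLSRegression:
    """X: 1H NMR spectra (samples x spectral points); Y: six measured parameters (samples x 6)."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(X, Y)
    Y_cv = cross_val_predict(pls, X, Y, cv=10)       # cross-validated predictions
    for j in range(Y.shape[1]):
        r = np.corrcoef(Y[:, j], Y_cv[:, j])[0, 1]   # correlation r per parameter
        print(f"parameter {j}: r(val) = {r:.3f}")
    return pls
```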
76 Microscopic Analysis of Bulk, High-TC Superconductors by Transmission Kikuchi Diffraction
Authors: Anjela Koblischka-Veneva, Michael Koblischka
Abstract:
In this contribution, transmission Kikuchi diffraction (TKD, sometimes called t-EBSD) is applied to bulk, melt-grown YBa2Cu3O7 (YBCO) superconductors prepared by the MTMG (melt-textured melt-grown) technique and the infiltration growth (IG) technique. The TEM slices required for the analysis were prepared by means of focused ion-beam (FIB) milling using mechanically polished sample surfaces, which enables a proper selection of the regions of interest for the investigations. The required optical transparency was reached by an additional polishing step of the resulting surfaces using FIB Ga-ion and Ar-ion milling. The improved spatial resolution of TKD enabled the investigation of the tiny Y2BaCuO5 (Y-211) particles, having a diameter of about 50-100 nm, embedded within the YBCO matrix, and of other added secondary-phase particles. With the TKD technique, the microstructural properties of the YBCO matrix are studied in detail. It is observed that the matrix shows effects of stress/strain, depending on the size and distribution of the embedded particles, which are important for providing additional flux pinning centers in such superconducting bulk samples. Using the Kernel Average Misorientation (KAM) maps, the strain induced in the superconducting matrix around the particles, which increases the flux pinning effectiveness, can be clearly revealed. This type of analysis of the EBSD/TKD data is, therefore, also important for other material systems where nanoparticles are embedded in a matrix.
Keywords: electron backscatter diffraction, transmission Kikuchi diffraction, SEM, YBCO, microstructure, nanoparticles
75 Peptide-Gold Nanocluster as an Optical Biosensor for Glycoconjugate Secreted from Leishmania
Authors: Y. A. Prada, Fanny Guzman, Rafael Cabanzo, John J. Castillo, Enrique Mejia-Ospino
Abstract:
In this work, we show important results on the synthesis of photoluminescent gold nanoclusters using a small peptide as a template for biosensing applications. Interestingly, we designed a peptide (NBC2854) homologous to the conserved domain spanning residues 215-250 of a galactolectin protein, which can recognize the proteophosphoglycans (PPG) from Leishmania. The peptide was synthesized by multiple solid-phase synthesis using the Fmoc group methodology in an acid medium. Finally, the peptide was purified by High-Performance Liquid Chromatography using a Vydac C-18 preparative column, and detection was at 215 nm using a Photo Diode Array detector. The molecular mass of this peptide was confirmed by MALDI-TOF, and Circular Dichroism was used to verify the α-helix structure. By means of the methodology used, we obtained novel fluorescent gold nanoclusters (AuNC) using NBC2854 as a template. In this work, we describe an easy and fast microsonic method for the synthesis of AuNC with a hydrodynamic size of ≈ 3.0 nm and photoemission at 630 nm. The presence of a cysteine residue at the C-terminus of the peptide allows the formation of an Au-S bond, which confers stability on the peptide-based gold nanoclusters. Interactions between the peptide and the gold nanoclusters were confirmed by X-ray Photoemission and Raman Spectroscopy. Notably, the ultrafine spectra shown in the MALDI-TOF analysis, containing only 3-7 kDa species, were assigned to Au₈-₁₈[NBC2854]₂ clusters. Finally, we evaluated the peptide-gold nanocluster as an optical biosensor based on fluorescence spectroscopy, and the fluorescence signal of PPG (0.1 µg·mL⁻¹ to 1000 µg·mL⁻¹) was amplified at the same emission wavelength (≈ 630 nm). This suggests that there is a strong interaction between PPG and Pep@AuNC; therefore, the increase in fluorescence intensity can be related to the association mechanism that takes place when the target molecule is sensed by the Pep@AuNC conjugate. Further spectroscopic studies are necessary to evaluate the fluorescence mechanism involved in the sensing of PPG by the Pep@AuNC. To the best of our knowledge, this is the first report of the fabrication of an optical biosensor based on Pep@AuNC for sensing biomolecules such as proteophosphoglycans, which are secreted in abundance by Leishmania parasites.
Keywords: biosensing, fluorescence, Leishmania, peptide-gold nanoclusters, proteophosphoglycans
74 INRAM-3DCNN: Multi-Scale Convolutional Neural Network Based on Residual and Attention Module Combined with Multilayer Perceptron for Hyperspectral Image Classification
Authors: Jianhong Xiang, Rui Sun, Linyu Wang
Abstract:
In recent years, due to the continuous improvement of deep learning theory, Convolutional Neural Networks (CNNs) have shown superior performance in the research of Hyperspectral Image (HSI) classification. Since HSI contains rich spatial-spectral information, utilizing only a single-dimensional or single-size convolutional kernel limits the detailed feature information received by the CNN, which in turn limits the classification accuracy for HSI. In this paper, we design a multi-scale CNN with an MLP based on residual and attention modules (INRAM-3DCNN) for the HSI classification task. We propose to use multiple 3D convolutional kernels to extract the packet feature information and fully learn the spatial-spectral features of HSI, while designing residual 3D convolutional branches to avoid the decline in classification accuracy due to network degradation. Secondly, we design a 2D Inception module with a joint channel attention mechanism to quickly extract key spatial feature information at different scales of the HSI and to reduce the complexity of the 3D model. Due to the high parallel processing capability and nonlinear global action of the Multilayer Perceptron (MLP), we use it in combination with the preceding CNN structure for the final classification process. The experimental results on two HSI datasets show that the proposed INRAM-3DCNN method has superior classification performance and can perform the classification task excellently.
Keywords: INRAM-3DCNN, residual, channel attention, hyperspectral image classification
73 Synthesis of New Bio-Based Solid Polymer Electrolyte Polyurethane-Liclo4 via Prepolymerization Method: Effect of NCO/OH Ratio on Their Chemical, Thermal Properties and Ionic Conductivity
Authors: C. S. Wong, K. H. Badri, N. Ataollahi, K. P. Law, M. S. Su’ait, N. I. Hassan
Abstract:
A novel bio-based polymer electrolyte was synthesized with LiClO4 as the main source of charge carriers. Initially, polyurethane-LiClO4 polymer electrolytes were synthesized via a polymerization method with different NCO/OH ratios and labelled PU1, PU2, PU3, and PU4. Subsequently, the chemical and thermal properties and the ionic conductivity of the films produced were determined. Fourier transform infrared (FTIR) analysis indicates coordination between the Li+ ion and the polyurethane in PU1, owing to the greatest amount of polyurethane hard segment in PU1, as proven by Soxhlet analysis. The structures of the polyurethanes were confirmed by carbon-13 nuclear magnetic resonance spectroscopy (13C NMR) and FTIR spectroscopy. Differential scanning calorimetry (DSC) analysis indicates that PU1 has the highest glass transition temperature (Tg), corresponding to the most abundant urethane groups, which form the hard segment in PU1. Scanning electron microscopy (SEM) of the PU-LiClO4 shows good miscibility between the lithium salt and the polymer. The study found that PU1 possessed the greatest ionic conductivity (1.19 × 10-7 S.cm-1 at 298 K and 5.01 × 10-5 S.cm-1 at 373 K) and the lowest activation energy, Ea (0.32 eV), because the greatest amount of hard segment formed in PU1 induces coordination between the lithium ion and the oxygen atom of the carbonyl group in the polyurethane. All the polyurethanes exhibited linear Arrhenius behaviour, indicating ion transport via simple lithium-ion hopping in the polyurethane. This research proves that the NCO content in the polyurethane plays an important role in determining the ionic conductivity of this polymer electrolyte.
Keywords: ionic conductivity, palm kernel oil-based monoester-OH, polyurethane, solid polymer electrolyte