1834 Recovery of Value-Added Whey Proteins from Dairy Effluent Using Aqueous Two-Phase System
Authors: Perumalsamy Muthiah, Murugesan Thanapalan
Abstract:
The remains of cheese production contain nutritionally valuable proteins, viz. α-lactalbumin and β-lactoglobulin; the whey represents 80-90% of the total volume of milk entering the process. Although several possibilities for cheese-whey exploitation have been assayed, approximately half of world cheese-whey production is not treated but is discarded as effluent. It is therefore necessary to develop an effective and environmentally benign extraction process for the recovery of value-added cheese whey proteins. Recently, aqueous two-phase systems (ATPS) have emerged as a potential separation process, particularly in the field of biotechnology, owing to the mild conditions of the process, short processing time, and ease of scale-up. In order to design an ATPS process for the recovery of cheese whey proteins, the development of the phase diagram and the effects of system parameters, such as pH, type and concentration of the phase-forming components, and temperature, on the partitioning of proteins were addressed in order to maximize the recovery of proteins. Some of the practical problems encountered in the application of aqueous two-phase systems for the recovery of cheese whey proteins are also discussed.
Keywords: aqueous two-phase system, phase diagram, extraction, cheese whey
Procedia PDF Downloads 410
1833 A Method to Evaluate and Compare Web Information Extractors
Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman
Abstract:
Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella; their common theme is that they all focus on applying knowledge discovery techniques to data gathered from the Web. Sometimes these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information be exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: (a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents. (b) It provides a method that relies on statistically sound tests to support the conclusions drawn; the previous work does not provide clear guidelines or recommend statistically sound tests, but rather a survey that collects many features to take into account as well as related work. (c) We provide a novel method to compute the performance measures for unsupervised proposals; otherwise, these would require the intervention of a user, who would compute them by using the annotations on the evaluation sets and the information extracted. Our contributions will help researchers in this area make sure that they have advanced the state of the art not only conceptually but also from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.
Keywords: web information extractors, information extraction evaluation method, Google Scholar, web
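Since the method's conclusions rest on performance measures computed against annotated evaluation sets, a minimal sketch of how precision, recall, and F1 might be computed for an extractor may be useful (illustrative only; the record representation below is an assumption, not the paper's):

```python
def evaluate_extractor(extracted, annotated):
    """Compare extracted records against gold annotations (iterables of tuples)."""
    extracted, annotated = set(extracted), set(annotated)
    tp = len(extracted & annotated)          # correctly extracted items
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(annotated) if annotated else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy run: 3 of 4 extracted items are correct; 3 of 5 gold items were found.
p, r, f = evaluate_extractor(
    [("title", "A"), ("title", "B"), ("price", "9"), ("price", "bad")],
    [("title", "A"), ("title", "B"), ("price", "9"),
     ("title", "C"), ("price", "7")])
print(round(p, 2), round(r, 2), round(f, 3))  # 0.75 0.6 0.667
```

The statistically sound comparisons the paper advocates would then be run over many such per-dataset scores.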
Procedia PDF Downloads 248
1832 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm
Authors: Annalakshmi G., Sakthivel Murugan S.
Abstract:
This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. The classification of coral reef images using texture features is difficult due to the dissimilarities among class samples. In coral reef image classification, texture features are extracted using the proposed LDEDBP method. The proposed approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts edge information using the local directional pattern (LDP) from the edge response available in a particular region, thereby achieving extra discriminative feature value. Typically, the LDP extracts the edge details in all eight directions. Integrating the edge responses with the local binary pattern yields a texture descriptor more robust than the other descriptors used in texture feature extraction methods. Finally, the proposed technique is applied to an extreme learning machine (ELM) with a meta-heuristic algorithm known as the weighted distance grey wolf optimizer (WDGWO) to optimize the input weights and biases of single-hidden-layer feed-forward neural networks (SLFN). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, compared with other state-of-the-art algorithms. The proposed method achieves the highest overall classification accuracy of 94% compared to the other state-of-the-art methods.
Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization
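As a hedged illustration of the LBP building block the descriptor relies on (the basic 8-neighbour operator only, not the full LDEDBP or LDP stages), each neighbour is thresholded against the centre pixel and the bits are packed into one code:

```python
def lbp_code(patch):
    """8-neighbour local binary pattern code for a 3x3 grayscale patch.

    Each neighbour is thresholded against the centre pixel and the
    resulting bits are packed clockwise from the top-left neighbour.
    """
    center = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:      # neighbour at least as bright as the centre
            code |= 1 << bit
    return code

patch = [[90, 120, 60],
         [70,  80, 95],
         [85,  40, 75]]
print(lbp_code(patch))  # 75: bits 0, 1, 3 and 6 are set
```

A full descriptor would histogram these codes over image regions before classification.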
Procedia PDF Downloads 163
1831 Effectiveness of Computer Video Games on the Levels of Anxiety of Children Scheduled for Tooth Extraction
Authors: Marji Umil, Miane Karyle Urolaza, Ian Winston Dale Uy, John Charle Magne Valdez, Karen Elizabeth Valdez, Ervin Charles Valencia, Cheryleen Tan-Chua
Abstract:
Objective: Distraction techniques can be successful in reducing the anxiety of children during medical procedures. Dental procedures, in particular, are associated with dental anxiety, which has been identified as a significant and common problem in children; however, only limited studies have been conducted to address this problem. Thus, this study determined the effectiveness of computer video games on the levels of anxiety of children between 5 and 12 years old scheduled for tooth extraction. Methods: A pre-test post-test quasi-experimental study was conducted involving 30 randomly assigned subjects, 15 in the experimental group and 15 in the control group. Subjects in the experimental group played computer video games for a maximum of 15 minutes, while no intervention was given to the control group. The modified Yale Pre-operative Anxiety Scale (m-YPAS), with a Cronbach's alpha of 0.9, was used to assess anxiety at two different points: upon arrival in the clinic (pre-test anxiety) and 15 minutes after the first measurement (post-test anxiety). Paired t-test and ANCOVA were used to analyze the gathered data. Results: There is a significant difference between the pre-test and post-test anxiety scores of the control group (p=0.0002), indicating increased anxiety. A significant difference was also noted between the pre-test and post-test anxiety scores of the experimental group (p=0.0002), indicating decreased anxiety. Comparatively, the experimental group showed a lower anxiety score (p<0.0001) than the control group. Conclusion: The use of computer video games is effective in reducing pre-operative anxiety among children and can be an alternative non-pharmacological intervention in pre-operative care.
Keywords: play therapy, preoperative anxiety, tooth extraction, video games
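The paired t-test applied to the pre/post anxiety scores can be sketched as follows; the scores below are invented for illustration and are not the study's data:

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for pre/post scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)                  # sample standard deviation
    return mean_d / (sd_d / math.sqrt(n)), n - 1    # t statistic, df

# Hypothetical m-YPAS-like scores (not the study's data): lower = less anxious.
pre  = [40, 35, 50, 45, 38]
post = [30, 32, 41, 37, 33]
t, df = paired_t(pre, post)
print(round(t, 2), df)  # -5.37 4: a large negative t, i.e. scores dropped
```

The p-value would then be read from the t distribution with the reported df; a statistics package handles that step in practice.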
Procedia PDF Downloads 452
1830 Extraction of Colorant and Dyeing of Gamma Irradiated Viscose Using Cordyline terminalis Leaves Extract
Authors: Urvah-Til-Vusqa, Unsa Noreen, Ayesha Hussain, Abdul Hafeez, Rafia Asghar, Sidrat Nasir
Abstract:
Natural dyes offer a better alternative for application in textiles than synthetic ones. The present study aims to employ a natural dye extracted from the Cordyline terminalis plant and to apply it to viscose under the influence of gamma radiation. Colorant extraction will be done by boiling dracaena leaf powder in aqueous, alkaline, and ethyl acetate media. Both the dye powder and the fabric will be treated with different doses (5-20 kGy) of gamma radiation. The antioxidant, antimicrobial, and hemolytic activities of the extracts will also be determined. Different fabric characterization tests (before and after radiation treatment) will be employed. Dyeing variables such as time, temperature, and material-to-liquor (M:L) ratio will be varied for optimization. Standard ISO methods will be employed to evaluate color fastness to light, washing, and rubbing. For improvement of color strength, 1.5-15.5% of Al, Fe, Cr, and Cu as mordants will be employed through pre-, post-, and meta-mordanting. Color depth (%) and L*, a*, b* and L*, C*, h values will be recorded using a SpectraFlash SF650.
Keywords: natural dyes, gamma radiations, Cordyline terminalis, ecofriendly dyes
Procedia PDF Downloads 595
1829 A Theoretical Model for Pattern Extraction in Large Datasets
Authors: Muhammad Usman
Abstract:
Pattern extraction has been performed in the past to extract hidden and interesting patterns from large datasets. Recently, advancements have been made in these techniques by providing the ability of multi-level mining, effective dimension reduction, and advanced evaluation and visualization support. This paper reviews the current techniques in the literature on the basis of these parameters. The literature review suggests that most of the techniques that provide multi-level mining and dimension reduction do not handle mixed-type data during the process. Patterns are not extracted using advanced algorithms for large datasets. Moreover, the evaluation of patterns is not done using advanced measures suited to high-dimensional data. Techniques that provide visualization support are unable to handle a large number of rules in a small space. We present a theoretical model to handle these issues. The implementation of the model is beyond the scope of this paper.
Keywords: association rule mining, data mining, data warehouses, visualization of association rules
Procedia PDF Downloads 223
1828 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble
Authors: Jaehong Yu, Seoung Bum Kim
Abstract:
Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high-dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. Unsupervised feature selection can be categorized into feature subset selection and feature ranking methods; we focus on unsupervised feature ranking methods, which evaluate features based on importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve higher accuracy and stability. However, most of the ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we propose an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined into ensemble importance scores. Moreover, FRRM does not require the true number of clusters to be determined in advance, thanks to the multiple-k ensemble idea. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. The experimental results demonstrated that the proposed FRRM outperformed the competitors.
Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking
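A toy illustration of the random-subspace, multiple-k idea may help; this is a simplified sketch and not the FRRM algorithm itself: features are scored by a between-to-total variance ratio under clusterings of random feature subsets with several assumed k values:

```python
import random
import statistics

def kmeans(points, k, rng, iters=20):
    """Plain k-means on lists of coordinates; returns cluster labels."""
    centers = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = [statistics.mean(col) for col in zip(*members)]
    return labels

def f_ratio(values, labels):
    """Between-cluster over total variance of one feature (0..1 score)."""
    overall = statistics.mean(values)
    total = sum((v - overall) ** 2 for v in values)
    between = sum(
        len(m) * (statistics.mean(m) - overall) ** 2
        for c in set(labels)
        for m in [[v for v, l in zip(values, labels) if l == c]])
    return between / total if total else 0.0

def subspace_ranking(data, n_features, subspace_size=2, ks=(2, 3), runs=30, seed=0):
    """Average each feature's score over clusterings of random subspaces."""
    rng = random.Random(seed)
    scores = [[] for _ in range(n_features)]
    for _ in range(runs):
        feats = rng.sample(range(n_features), subspace_size)
        sub = [[row[f] for f in feats] for row in data]
        for k in ks:                       # multiple-k: no fixed cluster count
            labels = kmeans(sub, k, rng)
            for f in feats:
                scores[f].append(f_ratio([row[f] for row in data], labels))
    return [statistics.mean(s) if s else 0.0 for s in scores]

# Synthetic data: feature 0 separates two groups, features 1-2 are noise.
rng = random.Random(1)
data = [[rng.gauss(0, 0.3), rng.random(), rng.random()] for _ in range(30)]
data += [[rng.gauss(5, 0.3), rng.random(), rng.random()] for _ in range(30)]
importance = subspace_ranking(data, n_features=3)
print(importance.index(max(importance)))  # feature 0 should rank highest
```

FRRM's actual combination of subspace evaluations differs; the sketch only shows why averaging over many subspaces and k values makes the ranking robust to any single poor clustering.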
Procedia PDF Downloads 339
1827 Slag-Heaps: From Piles of Waste to Valued Topography
Authors: René Davids
Abstract:
Some Western countries are abandoning coal and finding cleaner alternatives, such as solar, wind, hydroelectric, biomass, and geothermal power, for the production of energy. As a consequence, industries have closed, and the toxic, contaminated slag heaps, formed essentially of discarded rock that did not contain coal, are being colonized by spontaneously generated plant communities. In becoming green hiking territory, goat farms, viewing platforms, vineyards, staging posts for experiencing plant and animal species, and skiing slopes, many of the formerly abandoned hills of refuse have become delightful amenities for the surrounding communities. Together with the transformation of many industrial facilities into cultural venues, these changes to the slag hills have allowed the old coal districts to develop a new identity, but in the process they have also literally buried the past. This essay reviews a few case studies to analyze the different ways slag heaps have contributed to the cultural landscape of former coal country, while arguing that, when deciding on their future, it is important to find ways to make visible the environmental damage that the extraction industry caused and to honor the lives of the people who worked in these mines under often appalling conditions.
Keywords: slag-heaps, mines, extraction, remediation, pollution
Procedia PDF Downloads 71
1826 A Two-Step Framework for Unsupervised Speaker Segmentation Using BIC and Artificial Neural Network
Authors: Ahmad Alwosheel, Ahmed Alqaraawi
Abstract:
This work proposes a new speaker segmentation approach for two speakers. It is an online approach that does not require prior information about speaker models. It has two phases: a conventional unsupervised BIC-based approach is utilized in the first phase to detect speaker changes and train a neural network, while in the second phase, the trained parameters output by the neural network are used to predict the next incoming audio stream. Using this approach, accuracy comparable to similar BIC-based approaches is achieved with a significant improvement in computation time.
Keywords: artificial neural network, diarization, speaker indexing, speaker segmentation
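The first-phase BIC change detector can be sketched for 1-D features as follows; this is a minimal illustration only (real systems use multivariate cepstral features, and the penalty weight λ is an assumed tuning parameter):

```python
import math
import random

def delta_bic(window, split, lam=1.0):
    """ΔBIC for a candidate speaker change at `split` in a 1-D feature window.

    Positive values favour modelling the window as two Gaussians
    (i.e. a speaker change) over a single Gaussian.
    """
    def nlog_var(seg):
        n = len(seg)
        mu = sum(seg) / n
        var = max(sum((x - mu) ** 2 for x in seg) / n, 1e-12)
        return n * math.log(var)

    n = len(window)
    left, right = window[:split], window[split:]
    # 2 free parameters (mean, variance) per Gaussian in 1-D.
    penalty = lam * 0.5 * 2 * math.log(n)
    return 0.5 * (nlog_var(window) - nlog_var(left) - nlog_var(right)) - penalty

# Hypothetical stream: speaker A then speaker B with shifted statistics.
rng = random.Random(0)
stream = [rng.gauss(0, 1) for _ in range(100)] + [rng.gauss(4, 1) for _ in range(100)]
best = max(range(20, 180), key=lambda s: delta_bic(stream, s))
print(best, delta_bic(stream, best) > 0)  # change detected near sample 100
```

The second phase of the proposed framework replaces this search with predictions from the trained network, which is where the reported speed-up comes from.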
Procedia PDF Downloads 502
1825 Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis
Authors: Amir Hajian, Sepehr Damavandinejadmonfared
Abstract:
In this paper, the issue of dimensionality reduction is investigated in finger vein recognition systems using kernel Principal Component Analysis (KPCA). One aspect of KPCA is finding the most appropriate kernel function for finger vein recognition, as there are several kernel functions that can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, particularly KPCA, is investigated: the dimension of the feature vector. This aspect is of importance especially when it comes to real-world applications and usage of such algorithms, as a fixed dimension of the feature vector has to be set to reduce the dimension of the input and output data and extract the features from them. A classifier is then applied to classify the data and make the final decision. We analyze KPCA (Polynomial, Gaussian, and Laplacian) in detail and investigate the optimal feature extraction dimension in finger vein recognition using KPCA.
Keywords: biometrics, finger vein recognition, principal component analysis (PCA), kernel principal component analysis (KPCA)
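The three kernels analyzed can be written down directly; below is a minimal sketch of building the Gram matrix that KPCA eigen-decomposes (centring and the eigen-solver are omitted, and the kernel parameters are illustrative assumptions):

```python
import math

def polynomial(x, y, degree=2, c=1.0):
    """Polynomial kernel (x·y + c)^degree."""
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def gaussian(x, y, sigma=1.0):
    """Gaussian (RBF) kernel on squared Euclidean distance."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def laplacian(x, y, sigma=1.0):
    """Laplacian kernel on L1 (Manhattan) distance."""
    d1 = sum(abs(a - b) for a, b in zip(x, y))
    return math.exp(-d1 / sigma)

def kernel_matrix(data, kernel):
    """Gram matrix K; KPCA centres K in feature space, then keeps the
    eigenvectors of the top d eigenvalues (the feature dimension studied)."""
    return [[kernel(x, y) for y in data] for x in data]

vecs = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
K = kernel_matrix(vecs, gaussian)
print(round(K[0][1], 4), round(K[0][2], 4))  # 0.6065 0.1353
```

The "optimal feature extraction dimension" question the paper studies corresponds to choosing how many of K's leading eigenvectors to retain.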
Procedia PDF Downloads 365
1824 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks
Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez
Abstract:
Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as necessary given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audios and their respective transcriptions. A DNN was initialized with one hidden layer, and the number of hidden layers was increased during training to five. A refinement of the weight matrices and bias terms, together with Stochastic Gradient Descent (SGD) training, was also performed; the objective function was the cross-entropy criterion. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system was calculated by means of the Word Error Rate (WER). The test dataset was renewed in order to exclude the new transcriptions added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result obtained an improvement of 6% relative WER. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning
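Step (c) amounts to thresholding a confidence score per decoded utterance; a hedged sketch follows (the weights, threshold, and normalized scores here are illustrative assumptions, whereas the paper combines lattice graph and acoustic costs):

```python
def select_for_retraining(decoded, graph_w=0.5, acoustic_w=0.5, threshold=0.8):
    """Keep automatically decoded utterances whose combined confidence
    (a weighted mix of graph- and acoustic-based scores, both in [0, 1])
    clears a threshold, so they can be added to the training set."""
    kept = []
    for utt_id, hypothesis, graph_conf, acoustic_conf in decoded:
        combined = graph_w * graph_conf + acoustic_w * acoustic_conf
        if combined >= threshold:
            kept.append((utt_id, hypothesis))
    return kept

decoded = [
    ("utt1", "hola mundo", 0.95, 0.90),   # confident: retained
    ("utt2", "buenos dias", 0.60, 0.55),  # uncertain: discarded
    ("utt3", "gracias", 0.85, 0.80),
]
print([u for u, _ in select_for_retraining(decoded)])  # ['utt1', 'utt3']
```

Tightening the threshold trades pseudo-label quantity against quality, which is exactly the trade-off the three lattice-based metrics are meant to manage.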
Procedia PDF Downloads 339
1823 Real-Time Demonstration of Visible Light Communication Based on Frequency-Shift Keying Employing a Smartphone as the Receiver
Authors: Fumin Wang, Jiaqi Yin, Lajun Wang, Nan Chi
Abstract:
In this article, we demonstrate a visible light communication (VLC) system over an 8-meter free-space transmission link based on a commercial LED and a receiver connected to the audio interface of a smartphone. The signal is in FSK modulation format. The successful experimental demonstration validates the feasibility of the proposed system for future wireless communication networks.
Keywords: visible light communication, smartphone communication, frequency shift keying, wireless communication
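A minimal sketch of generating binary FSK samples suitable for an audio-band receiver is shown below; the tone frequencies and baud rate are illustrative assumptions, not the parameters of the demonstrated system:

```python
import math

def fsk_samples(bits, f0=1200.0, f1=2200.0, rate=44100, baud=600):
    """Continuous-phase binary FSK: each bit selects one of two tones,
    sampled at an audio rate so a sound-card/audio-jack receiver can
    digitize it. Frequencies and baud here are illustrative only."""
    samples = []
    spb = rate // baud                       # samples per bit
    phase = 0.0
    for bit in bits:
        freq = f1 if bit else f0             # mark/space tone selection
        for _ in range(spb):
            samples.append(math.sin(phase))
            phase += 2 * math.pi * freq / rate
    return samples

wave = fsk_samples([1, 0, 1, 1])
print(len(wave))  # 4 bits * (44100 // 600) samples per bit = 292
```

On the receive side, the smartphone's audio interface digitizes the photodetector output, and the demodulator decides per bit interval which tone is present.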
Procedia PDF Downloads 392
1822 Satellite Interferometric Investigations of Subsidence Events Associated with Groundwater Extraction in Sao Paulo, Brazil
Authors: B. Mendonça, D. Sandwell
Abstract:
The Metropolitan Region of Sao Paulo (MRSP) has suffered from serious water scarcity. Consequently, the most convenient solution has been drilling wells to extract groundwater from local aquifers. However, this requires constant vigilance to prevent over-extraction and future events that could pose a serious threat to the population, such as subsidence. Radar imaging techniques (InSAR) have allowed continuous investigation of such phenomena. The analysis in the present study covers 23 SAR images dated from October 2007 to March 2011, obtained by the ALOS-1 spacecraft. Data processing was done with the software GMTSAR, using the InSAR technique to create pairs of interferograms of ground displacement over different time spans. First results show a correlation between the locations of 102 wells registered in 2009 and signals of ground displacement equal to or lower than -90 millimeters (mm) in the region. The longest time-span interferogram obtained dates from October 2007 to March 2010. From that interferogram, it was possible to detect the average velocity of displacement in millimeters per year (mm/y) and the areas of the MRSP in which strong signals have persisted. Four specific areas with subsidence signals of 28 mm/y to 40 mm/y were chosen to investigate the phenomenon: Guarulhos (Sao Paulo International Airport), Greater Sao Paulo, Itaquera, and Sao Caetano do Sul. The coverage area of the signals was between 0.6 km and 1.65 km in length. All areas are located above a sedimentary aquifer. Itaquera and Sao Caetano do Sul showed signals varying from 28 mm/y to 32 mm/y. On the other hand, the places most likely to be suffering from stronger subsidence are those in Greater Sao Paulo and Guarulhos, right beside the International Airport of Sao Paulo; the rate of displacement observed in both regions goes from 35 mm/y to 40 mm/y. Previous investigations of water use at the International Airport highlight the risks of the excessive water extraction that was being done through 9 deep wells. Therefore, subsidence events are likely to occur and to cause serious damage in the area. This study reveals a situation that has not been explored with proper importance in the city, given its social and economic consequences. Since the data were only available until 2011, the question that remains is whether the situation still persists. The study does, however, confirm a scenario of risk at the International Airport of Sao Paulo that needs further investigation.
Keywords: ground subsidence, Interferometric Synthetic Aperture Radar (InSAR), metropolitan region of Sao Paulo, water extraction
Procedia PDF Downloads 354
1821 Kinematic Optimization of Energy Extraction Performances for Flapping Airfoil by Using Radial Basis Function Method and Genetic Algorithm
Authors: M. Maatar, M. Mekadem, M. Medale, B. Hadjed, B. Imine
Abstract:
In this paper, numerical simulations have been carried out to study the performance of a flapping wing used as an energy collector. Metamodeling and genetic algorithms are used to detect the optimal configuration, improving the power coefficient and/or efficiency. Radial basis functions (RBF) and genetic algorithms have been applied to solve this problem. Three optimization factors are controlled, namely the dimensionless heave amplitude h₀, the pitch amplitude θ₀, and the flapping frequency f. ANSYS FLUENT software has been used to solve the governing equations at a Reynolds number of 1100, while the heave and pitch motion of a NACA0015 airfoil has been realized using a user-defined function (UDF). The results reveal an average power coefficient and efficiency of 0.78 and 0.338 with an inexpensive low-fidelity model and a total relative error of 4.1% versus the simulation. The performance of the simulated RBF-NSGA-II optimum has been improved by 1.2% compared with the validated model.
Keywords: numerical simulation, flapping wing, energy extraction, power coefficient, efficiency, RBF, NSGA-II
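The RBF metamodel at the core of this optimization can be sketched in one dimension; this is a simplified illustration (the study fits three factors from CFD samples and couples the surrogate to NSGA-II):

```python
import math

def rbf_fit(xs, ys, eps=1.0):
    """Fit a 1-D Gaussian-RBF interpolant by solving Phi * w = y
    with naive Gaussian elimination (fine for a handful of samples)."""
    n = len(xs)
    phi = lambda r: math.exp(-(eps * r) ** 2)
    # Augmented system [Phi | y].
    A = [[phi(abs(xs[i] - xs[j])) for j in range(n)] + [ys[i]] for i in range(n)]
    for col in range(n):                       # forward elimination w/ pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):             # back substitution
        w[r] = (A[r][n] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

# Surrogate of an "expensive" response (here simply sin) from 7 samples;
# in the study each sample would be a full CFD run.
xs = [i * 0.5 for i in range(7)]
ys = [math.sin(x) for x in xs]
model = rbf_fit(xs, ys)
print(round(model(1.25), 3), round(math.sin(1.25), 3))
```

The genetic algorithm then searches this cheap surrogate instead of calling the flow solver at every candidate design, which is what makes the reported low-fidelity model inexpensive.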
Procedia PDF Downloads 43
1820 Advanced Lithium Recovery from Brine: 2D-Based Ion Selectivity Membranes
Authors: Nour S. Abdelrahman, Seunghyun Hong, Hassan A. Arafat, Daniel Choi, Faisal Al Marzooqi
Abstract:
The advancement of lithium extraction methods from water sources, particularly saltwater brine, is gaining prominence in the lithium recovery industry due to its cost-effectiveness. Traditional techniques for metal recovery from seawater or brine, such as recrystallization, chemical precipitation, and solvent extraction, are energy-intensive and exhibit low efficiency. Moreover, the extensive use of organic solvents poses environmental concerns. As a result, there is a growing demand for environmentally friendly lithium recovery methods. Membrane-based separation technology has emerged as a promising alternative, offering high energy efficiency and ease of continuous operation. In our study, we explored the potential of lithium-selective sieve channels constructed from layers of 2D graphene oxide and MXene (transition metal carbides and nitrides), integrated with surface -SO₃⁻ groups. The arrangement of these 2D sheets creates interplanar spacing ranging from 0.3 to 0.8 nm, which forms a barrier against multivalent ions while facilitating lithium-ion movement through nanocapillaries. The introduction of the sulfonate group provides an effective pathway for Li⁺ ions, with a calculated Li⁺ to -SO₃⁻ binding energy of -0.77 eV, the lowest among monovalent species. These modified membranes demonstrated remarkably rapid transport of Li⁺ ions, efficiently distinguishing them from other monovalent and divalent species. This selectivity is achieved through a combination of size exclusion and varying binding affinities. The graphene oxide channels in these membranes showed exceptional inter-cation selectivity, with a Li⁺/Mg²⁺ selectivity ratio exceeding 10⁴, surpassing commercial membranes. Additionally, these membranes achieved over 94% rejection of MgCl₂.
Keywords: ion permeation, lithium extraction, membrane-based separation, nanotechnology
Procedia PDF Downloads 73
1819 Effects and Mechanisms of an Online Short-Term Audio-Based Mindfulness Intervention on Wellbeing in Community Settings and How Stress and Negative Affect Influence the Therapy Effects: Parallel Process Latent Growth Curve Modeling of a Randomized Control
Authors: Man Ying Kang, Joshua Kin Man Nan
Abstract:
The prolonged pandemic has posed alarming public health challenges to various parts of the world, and with face-to-face mental health treatment largely discounted for the control of virus transmission, online psychological services and self-help mental health kits have become essential. Online self-help mindfulness-based interventions have proved their effects in fostering mental health for different populations over the globe. This paper tested the effectiveness of an online short-term audio-based mindfulness (SAM) program in enhancing wellbeing and dispositional mindfulness and reducing stress and negative affect in community settings in China, and explored possible mechanisms of how dispositional mindfulness, stress, and negative affect influenced the intervention effects on wellbeing. Community-dwelling adults were recruited via online social networking sites (e.g., QQ, WeChat, and Weibo). Participants (n=100) were randomized into a mindfulness group (n=50) and a waitlist control group (n=50). In the mindfulness group, participants were advised to spend 10-20 minutes listening to the audio content, including mindful-form practices (e.g., eating, sitting, walking, or breathing), and to practice daily mindfulness exercises for 3 weeks (a total of 21 sessions), whereas those in the control group received the same intervention after data collection in the mindfulness group. Participants in the mindfulness group filled in the World Health Organization Five Well-Being Index (WHO), Positive and Negative Affect Schedule (PANAS), Perceived Stress Scale (PSS), and Freiburg Mindfulness Inventory (FMI) four times: at baseline (T0) and at 1 (T1), 2 (T2), and 3 (T3) weeks, while those in the waitlist control group only filled in the same scales at pre- and post-intervention. Repeated-measures analysis of variance, paired-sample t-tests, and independent-sample t-tests were used to analyze the variable outcomes of the two groups. Parallel process latent growth curve modeling analysis was used to explore the longitudinal moderated mediation effects. The dependent variable was the WHO slope from T0 to T3, the independent variable was group (1=SAM, 2=Control), the mediator was the FMI slope from T0 to T3, and the moderator was T0 NA and T0 PSS separately. The moderator effects on the WHO slope were explored at different levels, including low T0 NA or T0 PSS (mean - SD), medium T0 NA or T0 PSS (mean), and high T0 NA or T0 PSS (mean + SD). The results found that SAM significantly improved and predicted higher levels of the WHO and FMI slopes, as well as significantly reduced NA and PSS. The FMI slope positively predicted the WHO slope and partially mediated the relationship between SAM and the WHO slope. Baseline NA and PSS were found to be significant moderators between SAM and the WHO slope and between SAM and the FMI slope, respectively. The conclusion was that SAM was effective in promoting mental wellbeing, positive affect, and dispositional mindfulness, as well as reducing negative affect and stress, in community settings in China. SAM improved wellbeing faster through the faster enhancement of dispositional mindfulness. In participants with medium-to-high baseline negative affect and stress, the effects of SAM on the speed of wellbeing improvement were buffered.
Keywords: mindfulness, negative affect, stress, wellbeing, randomized control trial
Procedia PDF Downloads 109
1818 Practical Experiences in the Development of a Lab-Scale Process for the Production and Recovery of Fucoxanthin
Authors: Alma Gómez-Loredo, José González-Valdez, Jorge Benavides, Marco Rito-Palomares
Abstract:
Fucoxanthin is a carotenoid that exerts multiple beneficial effects on human health, including antioxidant, anti-cancer, antidiabetic and anti-obesity activity; making the development of a whole process for its production and recovery an important contribution. In this work, the lab-scale production and purification of fucoxanthin in Isocrhysis galbana have been studied. In batch cultures, low light intensities (13.5 μmol/m2s) and bubble agitation were the best conditions for production of the carotenoid with product yields of up to 0.143 mg/g. After fucoxanthin ethanolic extraction from biomass and hexane partition, further recovery and purification of the carotenoid has been accomplished by means of alcohol – salt Aqueous Two-Phase System (ATPS) extraction followed by an ultrafiltration (UF) step. An ATPS comprised of ethanol and potassium phosphate (Volume Ratio (VR) =3; Tie-line Length (TLL) 60% w/w) presented a fucoxanthin recovery yield of 76.24 ± 1.60% among the studied systems and was able to remove 64.89 ± 2.64% of the carotenoid and chlorophyll pollutants. For UF, the addition of ethanol to the original recovered ethanolic ATPS stream to a final relation of 74.15% (w/w) resulted in a reduction of approximately 16% of the protein contents, increasing product purity with a recovery yield of about 63% of the compound in the permeate stream. Considering the production, extraction and primary recovery (ATPS and UF) steps, around a 45% global fucoxanthin recovery should be expected. Although other purification technologies, such as Centrifugal Partition Chromatography are able to obtain fucoxanthin recoveries of up to 83%, the process developed in the present work does not require large volumes of solvents or expensive equipment. 
Moreover, it has the potential for scale-up to commercial production and represents a cost-effective strategy when compared to traditional separation techniques like chromatography.
Keywords: aqueous two-phase systems, fucoxanthin, Isochrysis galbana, microalgae, ultrafiltration
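As a rough sanity check (not taken from the paper itself), the primary-recovery figures quoted above can be chained by multiplying the step yields; the result is consistent with the ~45% global recovery once upstream extraction losses are included.

```python
# Rough check: chain the step yields reported in the abstract.
# The numbers are the abstract's; the product-of-yields reasoning
# is an illustrative assumption, not the paper's calculation.
atps_yield = 0.7624   # ATPS recovery (76.24%)
uf_yield = 0.63       # UF permeate recovery (~63%)

primary_recovery = atps_yield * uf_yield
print(f"ATPS + UF recovery: {primary_recovery:.1%}")  # ~48% before extraction losses
```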
Procedia PDF Downloads 424
1817 Valorisation of Mango Seed: Response Surface Methodology Based Optimization of Starch Extraction from Mango Seeds
Authors: Tamrat Tesfaye, Bruce Sithole
Abstract:
Box-Behnken response surface methodology was used to determine the optimum processing conditions that give maximum extraction yield and whiteness index from mango seed. Steeping times ranged from 2 to 12 hours, and slurrying of the steeped seed in sodium metabisulphite solution (0.1 to 0.5 w/v) was carried out. Experiments were designed according to a Box-Behnken design with these three factors (steeping time, sodium metabisulphite concentration and steeping temperature), and a total of 15 experimental runs were analyzed. At the linear level, the concentration of sodium metabisulphite had a significant positive influence on percentage yield and whiteness index at p<0.05. At the quadratic level, sodium metabisulphite concentration and its squared term had a significant negative influence on starch yield; sodium metabisulphite concentration and the steeping time × temperature interaction had a significant (p<0.05) positive influence on whiteness index. Adjusted R2 values above 0.8 for starch yield (0.906465) and whiteness index (0.909268) showed a good fit of the model to the experimental data. The optimum sodium metabisulphite concentration, steeping time and temperature for starch isolation with maximum starch yield (66.428%) and whiteness index (85%) as optimization goals, with a desirability of 0.91939, were 0.255 w/v, 2 hours and 50 °C, respectively. The experimentally determined value of each response under the optimal conditions was statistically in accordance with the predicted levels at p<0.05. Mango seeds are by-products obtained during mango processing and pose a disposal problem if not handled properly. The substitution of food-based sizing agents with mango seed starch can contribute to resource deployment for value-added product manufacturing and waste utilization, which might play a significant role in food security in Ethiopia.
Keywords: mango, synthetic sizing agent, starch, extraction, textile, sizing
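The quadratic response-surface fit described above can be sketched as an ordinary least-squares fit of a second-order polynomial in coded factors. The data below are synthetic placeholders, not the paper's measurements; only the model form (intercept, linear, interaction, squared terms) and the adjusted-R2 formula follow the abstract.

```python
import numpy as np

# Fit a second-order RSM model y = b0 + b1*x1 + b2*x2 + b12*x1*x2
# + b11*x1^2 + b22*x2^2 by least squares on synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(15, 2))          # 2 coded factors, 15 runs
y = 60 + 5 * X[:, 0] - 3 * X[:, 1] ** 2 + rng.normal(0, 0.1, 15)

# Design matrix: intercept, linear, interaction and quadratic columns
D = np.column_stack([np.ones(15), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)

resid = y - D @ beta
r2 = 1 - resid.var() / y.var()
n, p = D.shape
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p)     # penalize model size
print(f"R^2 = {r2:.3f}, adjusted R^2 = {adj_r2:.3f}")
```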
Procedia PDF Downloads 231
1816 Alveolar Ridge Preservation in Post-extraction Sockets Using Concentrated Growth Factors: A Split-Mouth, Randomized, Controlled Clinical Trial
Authors: Sadam Elayah
Abstract:
Background: One of the most critical competencies in advanced dentistry is alveolar ridge preservation after exodontia. The aim of this clinical trial was to assess the impact of autologous concentrated growth factor (CGF) as a socket-filling material and its ridge preservation properties following lower third molar extraction. Materials and Methods: A total of 60 sides of 30 participants who had completely symmetrical bilateral impacted lower third molars were enrolled. The short-term outcome variables were wound healing, swelling and pain, clinically assessed at different time intervals (1st, 3rd and 7th days). The long-term outcome variables were bone height and width, bone density, and socket surface area in the coronal section. Cone beam computed tomography images were obtained immediately after surgery and three months after surgery. Randomization was achieved by opaque, sealed envelopes. Follow-up data were compared to baseline using paired and unpaired t-tests. Results: The wound healing index was significantly better on the test sides (P=0.001). Regarding facial swelling, the test sides had significantly lower values than the control sides, particularly on the 1st (1.01±.57 vs 1.55±.56) and 3rd days (1.42±0.8 vs 2.63±1.2) postoperatively. Nonetheless, the swelling disappeared within the 7th day on both sides. The visual analog scale pain scores showed no statistically significant difference between the sides on the 1st day; meanwhile, pain scores were significantly lower on the test sides compared with the control sides, especially on the 3rd (P=0.001) and 7th days (P˂0.001) postoperatively. Regarding long-term outcomes, CGF sites had higher values in height and width when compared to control sites (buccal wall 32.9±3.5 vs 29.4±4.3 mm, lingual wall 25.4±3.5 vs 23.1±4 mm, and alveolar bone width 21.07±1.55 vs 19.53±1.90 mm, respectively).
Bone density showed significantly higher values in CGF sites than in control sites (coronal half 200±127.3 vs -84.1±121.3, apical half 406.5±103 vs 64.2±158.6, respectively). There was a significant difference between the sites in the reduction of periodontal pockets. Conclusion: CGF application following surgical extraction provides an easy, low-cost and efficient option for alveolar ridge preservation. Thus, dentists may consider using CGF during dental extractions, particularly when alveolar ridge preservation is required.
Keywords: platelet, extraction, impacted teeth, alveolar ridge, regeneration, CGF
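The paired t-test used to compare test and control sides of the same patient can be sketched as follows; the paired measurements below are synthetic placeholders, not the trial's data.

```python
import numpy as np

# Paired t-test sketch: t = mean(d) / (sd(d) / sqrt(n)) on the
# per-patient differences d. Synthetic illustrative values only.
test_side = np.array([1.0, 1.1, 0.9, 1.2, 1.0, 0.95])
control_side = np.array([1.6, 1.5, 1.4, 1.7, 1.5, 1.55])

d = test_side - control_side           # per-patient paired differences
n = len(d)
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n))
print(f"paired t = {t_stat:.2f} with {n - 1} degrees of freedom")
```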
Procedia PDF Downloads 67
1815 Analyses of Soil Volatile Contaminants Extraction by Hot Air Injection
Authors: Abraham Dayan
Abstract:
Remediation of soil containing volatile contaminants is often conducted by the soil vapor extraction (SVE) technique. The operation is based on injection of air at ambient temperature, with or without thermal soil warming. Thermal enhancement of soil vapor extraction (TESVE) processes is usually achieved by soil heating, sometimes assisted by steam injection. The current study addresses a technique that has not received adequate attention, based on using exclusively hot air as an alternative to common TESVE practices. To demonstrate the merit of the hot air TESVE technique, a sandy soil containing contaminated water is studied. Numerical and analytical tools were used to evaluate the rate of the decontamination process for various geometries and operating conditions. The governing equations are based on Darcy's law and are applied to an expanding compressible flow within a sandy soil. The equations were solved to determine the minimal time required for complete soil remediation. An approximate closed-form solution was developed based on the assumption of local thermodynamic equilibrium and on a linearized representation of the temperature dependence of the vapor-to-air density ratio. The solution is general in nature and offers insight into the governing processes of the soil remediation operation, where self-similar temperature profiles may exist under certain conditions, and into the noticeable role of contaminant evaporation and recondensation processes in affecting the remediation time. Based on analyses of the hot air TESVE technique, it is shown that it is sufficient to heat the air only during a certain period of the decontamination process without compromising its full advantage, thereby minimizing the air-heating energy requirements.
This is in effect achieved by regeneration: the energy stored in the soil during the early period of the remediation process heats the subsequently injected ambient air, which infiltrates through it to decontaminate the remaining untreated soil zone. The characteristic time required to complete SVE operations is calculated as a function of both the injected air temperature and humidity. For a specific set of conditions, it is demonstrated that by elevating the injected air temperature by 20 °C, the hot air injection technique reduces the soil remediation time by 50% while requiring 30% additional energy consumption. These evaluations clearly unveil the advantage of the hot air SVE process: for an insignificant cost of added air-heating energy, the substantial cost expenditures for manpower and equipment utilization are reduced.
Keywords: porous media, soil decontamination, hot air, vapor extraction
Procedia PDF Downloads 11
1814 Rapid Method for Low Level 90Sr Determination in Seawater by Liquid Extraction Technique
Authors: S. Visetpotjanakit, N. Nakkaew
Abstract:
Determination of low-level 90Sr in seawater has been widely developed for the purposes of environmental monitoring and radiological research, because 90Sr is one of the most hazardous radionuclides released into the atmosphere during the testing of nuclear weapons, in waste discharge from nuclear energy generation, and in nuclear accidents occurring at power plants. A liquid extraction technique using bis(2-ethylhexyl)phosphoric acid to separate and purify yttrium, followed by Cherenkov counting with a liquid scintillation counter to determine 90Y in secular equilibrium with 90Sr, was developed to monitor 90Sr in the Asia-Pacific Ocean. The analytical performance was validated against accuracy, precision and trueness criteria. 90Sr determination in seawater was performed at various low concentrations in the range of 0.01-1 Bq/L using 30-liter spiked seawater samples and a 0.5-liter IAEA-RML-2015-01 proficiency test sample for statistical evaluation. The results had a relative bias in the range from 3.41% to 12.28%, below the accepted relative bias of ±25%, passing the criteria and confirming that our analytical approach for the determination of low levels of 90Sr in seawater is acceptable. Moreover, the approach is economical, non-laborious and fast.
Keywords: proficiency test, radiation monitoring, seawater, strontium determination
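The secular-equilibrium step relied on above follows the standard daughter ingrowth law A_Y(t) = A_Sr(1 - exp(-λ_Y t)) after yttrium separation; a minimal sketch, taking the standard nuclear-data value of ~64.1 h for the 90Y half-life:

```python
import math

# 90Y ingrowth toward secular equilibrium with 90Sr after separation.
HALF_LIFE_Y90_H = 64.1                    # 90Y half-life, hours
lam = math.log(2) / HALF_LIFE_Y90_H       # decay constant

def ingrowth_fraction(hours):
    """Fraction of the 90Sr activity reached by its 90Y daughter."""
    return 1.0 - math.exp(-lam * hours)

# After about two weeks the daughter is essentially in equilibrium.
print(f"14-day ingrowth: {ingrowth_fraction(14 * 24):.1%}")
```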
Procedia PDF Downloads 169
1813 Preliminary Study of Hand Gesture Classification in Upper-Limb Prosthetics Using Machine Learning with EMG Signals
Authors: Linghui Meng, James Atlas, Deborah Munro
Abstract:
There is an increasing demand for prosthetics capable of mimicking natural limb movements and hand gestures, but precise movement control of prosthetics using only electrode signals continues to be challenging. This study considers the implementation of machine learning as a means of improving accuracy and presents an initial investigation into hand gesture recognition using models based on electromyographic (EMG) signals. EMG signals, which capture muscle activity, are used as inputs to machine learning algorithms to improve prosthetic control accuracy, functionality and adaptivity. Using logistic regression, a machine learning classifier, this study evaluates the accuracy of classifying two hand gestures from the publicly available Ninapro dataset using two time-series feature extraction approaches: Time Series Feature Extraction (TSFE) and Convolutional Neural Networks (CNNs). Trials were conducted using varying numbers of EMG channels, from one to eight, to determine the impact of channel quantity on classification accuracy. The results suggest that although both algorithms can successfully distinguish between hand gesture EMG signals, CNNs outperform TSFE in extracting useful information, in terms of both accuracy and computational efficiency. In addition, although more channels of EMG signals provide more useful information, they also require more complex and computationally intensive feature extractors and consequently do not perform as well as lower numbers of channels. The findings also underscore the potential of machine learning techniques in developing more effective and adaptive prosthetic control systems.
Keywords: EMG, machine learning, prosthetic control, electromyographic prosthetics, hand gesture classification, CNN, convolutional neural networks, TSFE, time series feature extraction, channel count, logistic regression, Ninapro, classifiers
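The two-gesture logistic-regression step can be sketched in a few lines; the synthetic per-channel features below stand in for the Ninapro EMG data, and the plain gradient-descent training loop is an illustrative assumption, not the study's pipeline.

```python
import numpy as np

# Two-class logistic regression on precomputed EMG channel features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.2, 0.05, (50, 4)),   # gesture A, 4 channels
               rng.normal(0.6, 0.05, (50, 4))])  # gesture B
y = np.repeat([0.0, 1.0], 50)

w, b = np.zeros(4), 0.0
for _ in range(500):                              # plain gradient descent
    z = np.clip(X @ w + b, -30, 30)               # avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))                  # sigmoid probabilities
    w -= 1.0 * (X.T @ (p - y)) / len(y)
    b -= 1.0 * np.mean(p - y)

accuracy = np.mean((p > 0.5) == (y == 1.0))
print(f"training accuracy: {accuracy:.0%}")
```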
Procedia PDF Downloads 31
1812 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder
Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh
Abstract:
In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for the identification of activities in the human brain remains a major challenge because of the random nature of the signals. The feature extraction method is the key issue in solving this problem. Finding features that give distinctive pictures for different activities and similar ones for the same activity is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set. Further, more features result in high computational complexity, while fewer features compromise performance. In this paper, a novel idea for the selection of an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the autoencoder deep neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing gradient problem and to normalize the dataset, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and decoder output. To reduce the feature set to a smaller one, 4 hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layer. The results reveal that higher accuracy can be achieved using optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both.
The performance of the proposed method is validated and compared with two other methods recently reported in the literature, and the comparison reveals that the proposed method is far better than the other two in terms of classification accuracy.
Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization
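The core idea of minimizing reconstruction MSE with a heuristic search rather than backpropagation can be shown with a deliberately tiny stand-in: a single linear encode/decode pair tuned by greedy random search. The data, layer sizes and search settings are illustrative assumptions; the paper's HO-DAE uses four hidden layers and a meta-heuristic optimizer.

```python
import numpy as np

# Toy heuristic-optimized autoencoder: random search on the weights
# of a linear 8 -> 4 -> 8 autoencoder, keeping only improving moves.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))
X[:, 4:] = X[:, :4] + 0.01 * rng.normal(size=(200, 4))  # redundant features

def recon_mse(w_enc, w_dec):
    codes = X @ w_enc                  # encode 8 features into 4 codes
    return np.mean((X - codes @ w_dec) ** 2)

w_enc = rng.normal(scale=0.1, size=(8, 4))
w_dec = rng.normal(scale=0.1, size=(4, 8))
best = init_mse = recon_mse(w_enc, w_dec)
for _ in range(3000):                  # greedy random search
    cand_enc = w_enc + rng.normal(scale=0.05, size=(8, 4))
    cand_dec = w_dec + rng.normal(scale=0.05, size=(4, 8))
    err = recon_mse(cand_enc, cand_dec)
    if err < best:                     # accept only improving moves
        best, w_enc, w_dec = err, cand_enc, cand_dec

print(f"reconstruction MSE: {init_mse:.3f} -> {best:.3f}")
```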
Procedia PDF Downloads 114
1811 Two Kinds of Self-Oscillating Circuits Mechanically Demonstrated
Authors: Shiang-Hwua Yu, Po-Hsun Wu
Abstract:
This study introduces two types of self-oscillating circuits that are frequently found in power electronics applications. Special effort is made to relate the circuits to the analogous mechanical systems of some important scientific inventions: Galileo’s pendulum clock and Coulomb’s friction model. A little touch of related history and philosophy of science will hopefully encourage curiosity, advance the understanding of self-oscillating systems and satisfy the aspiration of some students for scientific literacy. Finally, the two self-oscillating circuits are applied to design a simple class-D audio amplifier.
Keywords: self-oscillation, sigma-delta modulator, pendulum clock, Coulomb friction, class-D amplifier
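The sigma-delta modulator named in the keywords, which is the core of such a self-oscillating class-D stage, can be sketched in a few lines: an integrator accumulating the quantization error of a 1-bit comparator. This is a generic first-order model, not the paper's specific circuit.

```python
# First-order sigma-delta modulator: integrator + 1-bit comparator
# with feedback; the average of the bitstream tracks the input.
def sigma_delta(samples):
    integ, bits = 0.0, []
    for x in samples:                  # x in [-1, 1]
        out = 1.0 if integ > 0 else -1.0
        bits.append(out)
        integ += x - out               # accumulate quantization error
    return bits

bits = sigma_delta([0.25] * 1000)      # constant input level 0.25
print(sum(bits) / len(bits))           # close to 0.25
```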
Procedia PDF Downloads 356
1810 Statistical Optimization of Distribution Coefficient for Reactive Extraction of Lactic Acid Using Tri-n-octyl Amine in Oleyl Alcohol and n-Hexane
Authors: Avinash Thakur, Parmjit S. Panesar, Manohar Singh
Abstract:
The distribution coefficient, KD, for the reactive extraction of lactic acid from aqueous solutions using 10-30% (v/v) tri-n-octylamine (extractant) dissolved in n-hexane (inert diluent) with 20% (v/v) oleyl alcohol (modifier) was optimized by response surface methodology (RSM). A three-level Box-Behnken design was employed for experimental design and analysis of the results, and to depict the combined interactive effect of seven independent variables, viz. lactic acid concentration (cl), pH, TOA concentration in the organic phase (ψ), treat ratio (φ), temperature (T), agitation speed (ω) and batch agitation time (τ), on the distribution coefficient of lactic acid. The regression analysis indicated that the quadratic model is significant (R2 and adjusted R2 of 98.72% and 98.69%, respectively). Numerical optimization resulted in a maximum lactic acid distribution coefficient (KD) of 3.16 at optimized values of the test variables cl, pH, ψ, φ, T, ω and τ of 0.15 [M], 3.0, 22.75% (v/v), 1.0 (v/v), 26 °C, 145 rpm and 23 min, respectively. Good agreement between the predicted values and those obtained experimentally under the optimized conditions was exhibited.
Keywords: distribution coefficient, tri-n-octylamine, lactic acid, response surface methodology
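The quantity being optimized is simply the ratio of solute concentrations between phases; a minimal sketch with illustrative concentrations (not the paper's measurements), plus the standard single-stage mass balance relating KD and the treat ratio φ to the extracted fraction:

```python
# KD = c_org / c_aq at equilibrium; for a single contact stage with
# organic/aqueous volume ratio phi, the extracted fraction is
# E = KD*phi / (1 + KD*phi). Concentrations below are illustrative.
def distribution_coefficient(c_org, c_aq):
    return c_org / c_aq

def extraction_degree(kd, phi=1.0):
    return kd * phi / (1.0 + kd * phi)

kd = distribution_coefficient(c_org=0.114, c_aq=0.036)   # ~3.17
print(f"KD = {kd:.2f}, extracted fraction = {extraction_degree(kd):.1%}")
```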
Procedia PDF Downloads 456
1809 Object Trajectory Extraction by Using Mean of Motion Vectors From Compressed Video Bitstream
Authors: Ching-Ting Hsu, Wei-Hua Ho, Yi-Chun Chang
Abstract:
Video object tracking is one of the popular research topics in the computer graphics area. Trajectories can be applied in security, traffic control and even sports training, where they can be used to analyze an athlete’s performance without traditional sensors. Many related works utilize the mean shift algorithm with background subtraction; such schemes must select a kernel function, which may affect accuracy and performance. In this paper, we consider the motion information in the pre-coded bitstream. The proposed algorithm extracts the trajectory by composing the motion vectors from the pre-coded bitstream: we gather the motion vectors from the area overlapping the object and calculate the mean of the overlapped motion vectors. We implemented and simulated our proposed algorithm in the H.264 video codec. The performance is better than that of related works while preserving the accuracy of the object trajectory. The experimental results show that the proposed method can extract the trajectory from the pre-coded bitstream with high accuracy and achieves higher performance than other related works.
Keywords: H.264, video bitstream, video object tracking, sports training
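The core step described above, averaging the motion vectors that overlap the object and accumulating them into a trajectory, can be sketched as follows; the motion vectors are synthetic placeholders for decoded H.264 motion data.

```python
import numpy as np

# Advance the object position by the mean of the overlapped motion
# vectors for each frame, accumulating a trajectory.
def extend_trajectory(position, overlap_mvs):
    mean_mv = np.mean(overlap_mvs, axis=0)   # mean motion vector
    return position + mean_mv

pos = np.array([100.0, 50.0])
frame_mvs = [np.array([[2.0, 1.0], [2.2, 0.8], [1.8, 1.2]]),  # frame 1
             np.array([[1.0, 0.0], [1.2, 0.2]])]              # frame 2
trajectory = [pos]
for mvs in frame_mvs:
    pos = extend_trajectory(pos, mvs)
    trajectory.append(pos)
print(trajectory[-1])                        # final object position
```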
Procedia PDF Downloads 428
1808 A Spatial Point Pattern Analysis to Recognize Fail Bit Patterns in Semiconductor Manufacturing
Authors: Youngji Yoo, Seung Hwan Park, Daewoong An, Sung-Shick Kim, Jun-Geol Baek
Abstract:
The yield management system is very important for producing high-quality semiconductor chips in the semiconductor manufacturing process. In order to improve the quality of semiconductors, various tests are conducted in the post-fabrication (FAB) process. During the test process, a large amount of data is collected, which includes much information about defects. In general, defects on the wafer are the main cause of yield loss; therefore, analyzing the defect data is necessary to improve the performance of yield prediction. The wafer bin map (WBM) is one kind of data collected in the test process and includes defect information such as fail bit patterns. Fail bits have the characteristics of spatial point patterns. Therefore, this paper proposes a feature extraction method using spatial point pattern analysis. Actual data obtained from the semiconductor process are used for experiments, and the experimental results show that the proposed method recognizes the fail bit patterns more accurately.
Keywords: semiconductor, wafer bin map, feature extraction, spatial point patterns, contour map
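One simple spatial point-pattern feature of the kind such methods compute is the mean nearest-neighbor distance of the fail-bit coordinates, which separates clustered from scattered defect patterns. This is a generic illustration with made-up coordinates, not the paper's specific feature set.

```python
import numpy as np

# Mean nearest-neighbor distance of a 2-D point pattern.
def mean_nn_distance(points):
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    return d.min(axis=1).mean()

clustered = [(0, 0), (0, 1), (1, 0), (1, 1)]   # tight fail-bit cluster
scattered = [(0, 0), (0, 9), (9, 0), (9, 9)]   # spread-out fail bits
print(mean_nn_distance(clustered), mean_nn_distance(scattered))
```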
Procedia PDF Downloads 384
1807 Insight2OSC: Using Electroencephalography (EEG) Rhythms from the Emotiv Insight for Musical Composition via Open Sound Control (OSC)
Authors: Constanza Levicán, Andrés Aparicio, Rodrigo F. Cádiz
Abstract:
The artistic usage of brain-computer interfaces (BCI), initially intended for medical purposes, has increased in the past few years as they have become more affordable and available to the general population. One interesting question that arises from this practice is whether it is possible to compose or perform music by using only the brain as a musical instrument. In order to approach this question, we propose a BCI for musical composition based on the representation of certain mental states as the musician thinks about sounds. We developed software, called Insight2OSC, that allows the usage of the Emotiv Insight device as a musical instrument by sending the EEG data to audio processing software such as MaxMSP through the OSC protocol. We provide two compositional applications bundled with the software, which we call Mapping your Mental State and Thinking On. The signals produced by the brain have different frequencies (or rhythms) depending on the level of activity, and they are classified as one of the following waves: delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), gamma (30-50 Hz). These rhythms have been found to be related to some recognizable mental states. For example, the delta rhythm is predominant in deep sleep, while beta and gamma rhythms have higher amplitudes when the person is awake and very concentrated. Our first application (Mapping your Mental State) produces different sounds representing the mental state of the person (focused, active, relaxed, or in a state similar to deep sleep) by selecting the dominant rhythms provided by the EEG device. The second application relies on the physiology of the brain, which is divided into several lobes: frontal, temporal, parietal and occipital.
The frontal lobe is related to abstract thinking and high-level functions, the parietal lobe conveys stimuli from the body senses, the occipital lobe contains the primary visual cortex and processes visual stimuli, and the temporal lobe processes auditory information and is important for memory tasks. In consequence, our second application (Thinking On) processes the audio output depending on the user’s brain activity as it activates a specific area of the brain that can be measured using the Insight device.
Keywords: BCI, music composition, Emotiv Insight, OSC
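Picking the dominant rhythm from the band definitions listed above can be sketched with an FFT-based band-power comparison; the 10 Hz sine below stands in for a strong alpha rhythm, and the 128 Hz sampling rate is an assumed, EEG-typical value.

```python
import numpy as np

# Band power per EEG rhythm, then the band with the most power.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 50)}
fs = 128                                   # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t)        # 10 Hz, i.e. alpha band

freqs = np.fft.rfftfreq(len(signal), 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

def bandpower(lo, hi):
    return power[(freqs >= lo) & (freqs < hi)].sum()

dominant = max(BANDS, key=lambda b: bandpower(*BANDS[b]))
print(dominant)                            # alpha
```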
Procedia PDF Downloads 322
1806 Pharmacognostic, Phytochemical and Antibacterial Activity of Beaumontia Grandiflora
Authors: Narmeen Mehmood
Abstract:
The current study was conducted to evaluate the pharmacognostic parameters, phytochemical profile and antibacterial activity of the plant. Microscopic studies were carried out to determine various pharmacognostic parameters, and section cutting of the leaf was also done. The study of the aerial parts of Beaumontia grandiflora resulted in the identification of a fatty acid mixture and unsaponifiable matter. Materials and Methods: The study was carried out with all three extracts of Beaumontia grandiflora, i.e. petroleum ether, chloroform and methanol. For the separation of the various constituents of the plant, successive solvent extraction was carried out in the laboratory. Raw data containing the measured zones of inhibition in mm were tabulated. Results: The microscopic studies showed the presence of the upper epidermis in surface view, part of the lamina in section view, cortical parenchyma in longitudinal view, parenchyma with collapsed tissues, parenchyma cells, epidermal cells with part of a covering trichome, starch granules, and reticulately thickened vessels. The transverse section of the leaf of Beaumontia grandiflora showed the upper epidermis, lower epidermis, hairs, vascular bundles and parenchyma. Phytochemical analysis of the leaves of Beaumontia grandiflora indicates that alkaloids are present. There is a possibility of the presence of some bioactive components in the crude extracts, due to which they show strong activity; the petroleum ether extract shows a greater zone of inhibition at low concentrations. Conclusion: Alkaloids possess good antibacterial activity, so the presence of alkaloids may be responsible for the antibacterial activity observed in the crude organic extract of Beaumontia grandiflora.
Keywords: successive solvent extraction, zone of inhibition, microscopy, phytochemical analysis
Procedia PDF Downloads 22
1805 Design and Study of a DC/DC Converter for High Power, 14.4 V and 300 A for Automotive Applications
Authors: Júlio Cesar Lopes de Oliveira, Carlos Henrique Gonçalves Treviso
Abstract:
The lack of options in the automotive market for high-power car audio power supplies led to the development of this work. Thus, we developed a stabilized-voltage source with 4320 W effective power, designed for a voltage of 14.4 V and a choice of two currents: a 30 A option for battery-bank loads and 300 A at full load. This source can also be considered a general-purpose commercial source, with a simple analog control circuit based on discrete components. The power circuit assembly follows a methodology rated for higher power than initially stipulated.
Keywords: DC-DC power converters, converters, power conversion, pulse width modulation converters
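The quoted ratings are mutually consistent: at the design voltage, the full-load current gives exactly the stated effective power (P = V × I).

```python
# Check the quoted ratings: 14.4 V at 300 A full load should give
# the 4320 W effective power stated in the abstract.
voltage = 14.4       # V, design output voltage
full_load = 300      # A, full-load current option
print(f"{voltage * full_load:.0f} W")   # 4320 W
```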
Procedia PDF Downloads 384