Search results for: double nearest proportion feature extraction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5544

3954 A Study of Permission-Based Malware Detection Using Machine Learning

Authors: Ratun Rahman, Rafid Islam, Akin Ahmed, Kamrul Hasan, Hasan Mahmud

Abstract:

Malware is becoming more prevalent, and several threat categories have risen dramatically in recent years. This paper provides a bird's-eye view of the world of malware analysis. This study investigates the efficiency of five machine learning methods (Naive Bayes, K-Nearest Neighbor, Decision Tree, Random Forest, and TensorFlow Decision Forest), combined with features extracted from Android permissions, in categorizing applications as harmful or benign. The test set consists of 1,168 samples (among these Android applications, 602 are malware and 566 are benign), each consisting of 948 features (permissions). Using the permission-based dataset, the machine learning algorithms produce accuracy rates above 80%, except for Naive Bayes, which achieves 65%. Of the algorithms considered, TensorFlow Decision Forest performed best, with an accuracy of 90%.
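The comparison described above can be sketched with scikit-learn on synthetic data. The binary permission matrix, the informative-feature pattern, and the library choices below are illustrative assumptions, not the authors' exact setup (in particular, TensorFlow Decision Forest is omitted):

```python
# Sketch of the classifier comparison on synthetic data: binary permission
# vectors (948 features) labelled malware/benign. The data-generating
# process is an assumption for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, d = 1168, 948                    # sample and feature counts from the abstract
y = rng.integers(0, 2, size=n)     # 0 = benign, 1 = malware
# toy assumption: malware tends to request an extra block of permissions
X = (rng.random((n, d)) < (0.05 + 0.25 * y[:, None] * (np.arange(d) < 40))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "NaiveBayes": BernoulliNB(),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2%}")
```

Binary permission vectors suit BernoulliNB, which models exactly this kind of 0/1 feature; the ensemble methods can additionally capture permission co-occurrence.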

Keywords: android malware detection, machine learning, malware, malware analysis

Procedia PDF Downloads 159
3953 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on encoding schemes (e.g. Fisher Vector, Vector of Locally Aggregated Descriptors) with low-level image features (e.g. SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, scenes contain scattered objects differing in size, category, layout, and number, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit object-centric and scene-centric information, two CNNs trained separately on ImageNet and the Places dataset are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the two CNNs at multiple scales, it is found that each CNN works better in a different scale range; a scale-wise CNN adaptation is reasonable, since objects in a scene occur at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are merged into a single vector by a post-processing step called scale-wise normalization. Since the essence of the Fisher Vector lies in accumulating first- and second-order differences, scale-wise normalization followed by average pooling balances the influence of each scale, as different numbers of features are extracted per scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which suggests that the representation can be applied to other visual recognition tasks.
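The scale-wise normalization step can be illustrated with a minimal NumPy sketch: one Fisher vector per scale is L2-normalized independently and then average-pooled, so a scale that produced more local features does not dominate the merged representation. The dimensions and random "Fisher vectors" here are placeholders, not the paper's actual pipeline:

```python
# Minimal sketch of scale-wise normalization followed by average pooling.
# The random vectors stand in for per-scale Fisher vectors of unequal
# magnitude (as happens when scales yield different feature counts).
import numpy as np

rng = np.random.default_rng(1)
num_scales, fv_dim = 2, 1024
fvs = [rng.normal(size=fv_dim) * (s + 1) * 10 for s in range(num_scales)]

normed = [fv / np.linalg.norm(fv) for fv in fvs]  # per-scale L2 normalization
pooled = np.mean(normed, axis=0)                  # average pooling into one vector

print(pooled.shape)
```

After normalization every scale contributes a unit-norm vector, so the pooled representation weights the scales equally regardless of how many local activations each one produced.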

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 324
3952 In Search of Commonalities in the Determinants of Child Sex Ratios in India and the People's Republic of China

Authors: Suddhasil Siddhanta, Debasish Nandy

Abstract:

The child sex ratio pattern in Asian populations is highly masculine, mainly due to birth masculinity and gender bias in child mortality. The vast and growing literature on the female deficit in the world population points to the diffusion of this child sex ratio pattern into many Asian as well as neighboring European countries. However, little attention has been given to understanding the common factors across different demographic settings that explain the child sex ratio pattern. Such scholarship is extremely important, as the level of gender inequity differs across country settings. Our paper tries to explain the major structural commonalities in the child masculinity pattern in the two demographic billionaires, India and China. The analysis reveals that, apart from the geographical diffusion of sex-selection technology, a patrilocal social structure, proxied by households with more than one generation in China and the proportion of population aged 65 years and above in India, can explain significant variation in missing girl children in these two countries. Even after controlling for individual capacity-building factors such as educational attainment or workforce participation, the measure of social stratification turns out to be the major determinant of child sex ratio variation. Other socio-economic factors that perform well are agency-building factors for females, such as changing marriage customs, proxied by the divorce and remarriage ratio for China and the percentage of females marrying at or after the age of 20 in India, and female workforce participation. The proportion of minorities in the socio-religious composition of the population and gender bias in scholastic attainment in both countries are also found to be significant in modeling child sex ratio variations.
All these significant common factors associated with child sex ratios point toward one single most important factor: the historical evolution of patriarchy and its contemporary perpetuation in both countries. It seems that prohibiting sex selection might not be sufficient to combat the peculiar skew toward excessive maleness in the child population of either country. Demand-side policies are therefore of utmost importance to root out the gender bias in child sex ratios.

Keywords: child sex ratios, gender bias, structural factors, prosperity, patrilocality

Procedia PDF Downloads 152
3951 The Use of Bleomycin and Analogues to Probe the Chromatin Structure of Human Genes

Authors: Vincent Murray

Abstract:

The chromatin structure at the transcription start sites (TSSs) of genes is very important in the control of gene expression. For gene expression to occur, the chromatin structure at the TSS has to be altered so that the transcriptional machinery can be assembled and RNA transcripts can be produced. In particular, the nucleosome structure and positioning around the TSS have to be changed. Bleomycin (BLM) is utilized as an anti-tumor agent to treat Hodgkin's lymphoma, squamous cell carcinoma, and testicular cancer. Bleomycin produces DNA damage in human cells, and DNA strand breaks, especially double-strand breaks, are thought to be responsible for its cancer chemotherapeutic activity. Bleomycin is a large glycopeptide with a molecular weight of approximately 1,500 Daltons, and hence its DNA strand cleavage activity can be utilized as a probe of chromatin structure. In this project, Illumina next-generation DNA sequencing technology was used to determine the positions of DNA double-strand breaks at the TSSs of genes in intact cells. In this genome-wide study, it was found that bleomycin cleavage preferentially occurred at the TSSs of actively transcribed human genes in comparison with non-transcribed genes. There was a correlation between the level of enhanced bleomycin cleavage at TSSs and the degree of transcriptional activity. In addition, bleomycin was able to determine the positions of nucleosomes at the TSSs of human genes. Bleomycin analogues were also utilized as probes of chromatin structure at the TSSs of human genes. In a similar manner to bleomycin, the analogues 6′-deoxy-BLM Z and zorbamycin preferentially cleaved at the TSSs of human genes. Interestingly, this degree of enhanced TSS cleavage correlated inversely with the cytotoxicity (IC50 values) of the BLM analogues, indicating that the degree of cleavage at the TSSs of human genes is very important in the cytotoxicity of bleomycin and its analogues.
It also provides a deeper insight into the mechanism of action of this cancer chemotherapeutic agent, since actively transcribed genes are preferentially targeted.

Keywords: anti-cancer activity, chromatin structure, cytotoxicity, gene expression, next-generation DNA sequencing

Procedia PDF Downloads 112
3950 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study utilizing Supervised Machine Learning

Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz

Abstract:

Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, its underlying pathophysiology is still incompletely understood. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. We extracted samples coding for IBS from the UK BioBank cohort and randomly selected patients without a code for IBS to create a total sample size of 18,000. We selected the comorbidity codes for these cases from 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machine (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS; in our case, we used XGBoost feature importance as the feature selection method and applied the models to the top 10% of features, which numbered 50. The Gradient Boosting, Logistic Regression, and XGBoost algorithms yielded a diagnosis of IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular disease), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety).
This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms to predict the development of IBS from comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods, and even larger and more diverse datasets, may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
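The feature-selection step described above can be sketched as follows, with scikit-learn's GradientBoostingClassifier standing in for XGBoost (whose `feature_importances_` attribute works the same way) and a synthetic binary comorbidity matrix in place of UK BioBank data:

```python
# Hedged sketch: rank comorbidity features by a boosted model's importances,
# keep the top 10%, and refit a simpler model on that subset. The synthetic
# "comorbidity" matrix is an illustrative assumption, not real patient data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, d = 2000, 500                                 # patients x comorbidity codes
y = rng.integers(0, 2, size=n)                   # 1 = IBS, 0 = control
# toy assumption: the first 20 codes are more common among IBS cases
X = (rng.random((n, d)) < (0.05 + 0.15 * y[:, None] * (np.arange(d) < 20))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
top = np.argsort(gb.feature_importances_)[::-1][: d // 10]  # top 10% = 50 features

clf = LogisticRegression(max_iter=1000).fit(X_tr[:, top], y_tr)
print(f"accuracy on selected features: {clf.score(X_te[:, top], y_te):.2%}")
```

Restricting the final model to the 50 highest-importance features mirrors the study's approach of interpreting IBS through its most predictive comorbidities.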

Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics

Procedia PDF Downloads 112
3949 Association of 105A/C IL-18 Gene Single Nucleotide Polymorphism with House Dust Mite Allergy in an Atopic Filipino Population

Authors: Eisha Vienna M. Fernandez, Cristan Q. Cabanilla, Hiyasmin Lim, John Donnie A. Ramos

Abstract:

Allergy is a multifactorial disease affecting a significant proportion of the population. It develops through the interaction of allergens and the presence of certain polymorphisms in various susceptibility genes. In this study, the correlation between the 105A/C single nucleotide polymorphism (SNP) of the IL-18 gene and house dust mite-specific IgE among allergic and non-allergic Filipinos was investigated. Atopic status was defined by a serum total IgE concentration of ≥100 IU/mL, while house dust mite allergy was defined by a specific IgE value ≥ +1 SD of the IgE of non-atopic participants. Two hundred twenty match-paired Filipino cases and controls aged 6-60 years were the subjects of this investigation. Total IgE and specific IgE levels were measured using enzyme-linked immunosorbent assay (ELISA), while polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) analysis was used for SNP detection. Sensitization profiles of the allergic patients revealed that 97.3% were sensitized to Blomia tropicalis, 40.0% to Dermatophagoides farinae, and 29.1% to Dermatophagoides pteronyssinus. Multiple sensitization to HDMs was also observed in 47.27% of the atopic participants. The cases exhibited each of the allergy classes of the atopic triad (allergic asthma: 48.18%; allergic rhinitis: 62.73%; atopic dermatitis: 19.09%), and two or all of these atopic states co-occurred in 26.36% of the cases. A greater proportion of the atopic participants with allergic asthma and allergic rhinitis were sensitized to D. farinae and D. pteronyssinus, while more of those with atopic dermatitis were sensitized to D. pteronyssinus than to D. farinae. Results show an overrepresentation of the "A" allele of the 105A/C IL-18 gene SNP in both the case and control groups.
The predominant genotype in the population is the heterozygous "AC", followed by the homozygous wild-type "AA", with the homozygous variant "CC" the least common. The study confirmed a positive association between serum specific IgE against B. tropicalis and D. pteronyssinus and the "C" allele (Bt P=0.021, Dp P=0.027) and "AC" genotype (Bt P=0.003, Dp P=0.026). Findings also revealed that the genotypes "AA" (OR: 1.217; 95% CI: 0.701-2.113) and "CC" (OR: 3.5; 95% CI: 0.727-16.849) increase the risk of developing allergy. This indicates that the 105A/C IL-18 gene SNP is a candidate genetic marker for HDM allergy in Filipino patients.

Keywords: house dust mite allergy, interleukin-18 (IL-18), single nucleotide polymorphism

Procedia PDF Downloads 455
3948 The Morphing Avatar of Startup Sales - Destination Virtual Reality

Authors: Sruthi Kannan

Abstract:

The ongoing COVID-19 pandemic has accelerated digital transformation like never before. The physical barriers imposed by the pandemic are being bridged by digital alternatives. While basic collaborative activities like voice and video calling and screen sharing have been replicated in these alternatives, several others require a more intimate setup. Pitching, showcasing, and providing demonstrations are an integral part of selling strategies for startups. Traditionally, these have been in-person engagements, enabling a depth of understanding of the startups' offerings. In the new normal of virtual-only connections, startups are feeling the brunt of the lack of in-person contact with potential customers and investors. This poster demonstrates how a virtual reality platform has been conceptualized and custom-built for startups to engage with their stakeholders and redefine their selling strategies. The platform is intended to provide an immersive experience for startup showcases and offers the nearest possible alternative to physical meetings for the startup ecosystem, thereby opening new frontiers for entrepreneurial collaborations.

Keywords: collaboration, sales, startups, strategy, virtual reality

Procedia PDF Downloads 301
3947 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart for diagnosing disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that the initialization of the shape model is often not sufficiently close to the target, especially when dealing with the abnormal shapes seen in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. First, a robust and efficient detector based on a Hough forest is proposed to localize cardiac feature points, and these points are used to predict the initial fitting of the LV shape model. Second, to achieve more accurate and detailed segmentation, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images, mostly of abnormal shapes. The proposed method is compared to several combinations of ASM and existing initialization methods. The experimental results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared to existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops, thus speeding up the whole segmentation process. The proposed method therefore achieves more accurate and efficient segmentation results and is applicable to unusual heart shapes caused by cardiac diseases, such as left atrial enlargement.

Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle

Procedia PDF Downloads 334
3946 Rapid Identification and Diagnosis of the Pathogenic Leptospiras through Comparison among Culture, PCR and Real Time PCR Techniques from Samples of Human and Mouse Feces

Authors: S. Rostampour Yasouri, M. Ghane, M. Doudi

Abstract:

Leptospirosis is one of the most significant infectious and zoonotic diseases, with a global spread. The disease causes economic losses and human fatalities in various countries, including the northern provinces of Iran. Given the disease's varied clinical manifestations and the risk of premature death in patients, the aim of this research is to identify and compare rapid diagnostic techniques for pathogenic leptospiras. In the spring and summer of 2020-2022, 25 fecal samples were collected from suspected leptospirosis patients and 25 fecal samples from mice residing in the rice fields and factories of Tonekabon city. Samples were prepared by centrifugation and passage through membrane filters. A culture technique was used with liquid and solid EMJH media during one month of incubation at 30°C, after which the media were examined microscopically. DNA extraction was conducted with an extraction kit. Diagnosis of leptospiras was performed by PCR and real-time PCR (SYBR Green) techniques using a lipL32-specific primer. Among the patients, 11 samples (44%) and 8 samples (32%) were determined to be pathogenic Leptospira by real-time PCR and PCR, respectively. Among the mice, 9 samples (36%) and 3 samples (12%) were determined to be pathogenic Leptospira by the same techniques, respectively. Although culture is considered the gold standard, it is not a fast technique, owing to the slow growth of pathogenic Leptospira and the lack of colony formation by some species. Real-time PCR allowed rapid diagnosis with much higher accuracy than PCR, since PCR could not reliably identify samples with a lower microbial load.

Keywords: culture, pathogenic leptospiras, PCR, real time PCR

Procedia PDF Downloads 77
3945 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets

Authors: Kothuri Sriraman, Mattupalli Komal Teja

Abstract:

In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images, which may be irregular shapes, digits, or characters. Objects and internal objects are quite difficult to extract when the structure of the image contains a bulk of clusters. Estimation results are easily obtained by identifying the sub-regional objects using the SASK algorithm, the main focus being to recognize the number of internal objects in a given image, shadow-free and error-free. Hard clustering and density clustering of the obtained image rough set are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally the hull detection system. Detecting sub-regional hulls can increase machine learning capability in character detection, and the approach can also be extended to hull recognition in irregularly shaped objects, such as black holes and their intensities in space exploration. Layered hulls are those having structured layers inside, which is useful in military services and traffic monitoring to identify the number of vehicles or persons. The proposed SASK algorithm is thus helpful in identifying such regions and can be useful in subsequent decision processes (e.g., clearing traffic, or identifying the number of persons on the opponent's side in war).

Keywords: chain code, hull regions, Hough transform, hull recognition, layered outline extraction, SASK algorithm

Procedia PDF Downloads 342
3944 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods have seen increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most prominently Abstract Syntax Trees and Code Property Graphs, have seen some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information, but little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the graph's nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification. It uses information from the nodes as well as the structure of the code graph to select the features that are most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested on the SARD Juliet vulnerability test suite to determine its efficacy. It improves on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

Procedia PDF Downloads 151
3943 Using Machine Learning to Predict Answers to Big-Five Personality Questions

Authors: Aadityaa Singla

Abstract:

The big five personality traits are openness, conscientiousness, extraversion, agreeableness, and neuroticism. To gain insight into their personality, many flock to these categories, each of which has a different meaning and set of characteristics. This information is important not only to individuals but also to career professionals and psychologists, who can use it for candidate assessment or job recruitment. The links between AI and psychology have been well studied in cognitive science, but this is still a rather novel development. Various AI classification models can accurately predict the answer to a personality question from ten input questions. This contrasts with the roughly one hundred questions that people normally have to answer to gain a complete picture of their five personality traits. To approach this problem, various AI classification models were applied to a dataset to predict what a user would answer, and each model's prediction was compared to the actual response. Normally, there are five answer choices (a 20% chance of a correct guess), and the models exceed that value to different degrees, proving their significance. The MLP classifier, decision tree, linear model, and K-nearest neighbors models obtained test accuracies of 86.643%, 54.625%, 47.875%, and 52.125%, respectively. These approaches show that there is potential for more nuanced predictions to be made regarding personality in the future.
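The task can be sketched as follows: predict one Likert-scale item (five answer choices) from ten other items and compare against the 20% chance baseline. The synthetic responses below, generated from a latent trait level so that items correlate, are purely illustrative:

```python
# Sketch of predicting one personality item from ten others. The synthetic
# respondents (a latent 1-5 trait level plus per-item noise) are an
# illustrative assumption, not the study's dataset.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
trait = rng.integers(1, 6, size=n)        # latent trait level per respondent
noise = lambda: rng.integers(-1, 2, size=n)
items = np.stack([np.clip(trait + noise(), 1, 5) for _ in range(11)], axis=1)
X, y = items[:, :10], items[:, 10]        # ten input items, one target item

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.1%} (chance = 20%)")
```

Because the items share a latent trait, the classifier comfortably beats the five-way chance baseline, which is the comparison the abstract draws.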

Keywords: machine learning, personality, big five personality traits, cognitive science

Procedia PDF Downloads 143
3942 The Usage of Negative Emotive Words in Twitter

Authors: Martina Katalin Szabó, István Üveges

Abstract:

In this paper, the usage of negative emotive words is examined via NLP methods on the basis of a large Hungarian Twitter database. The data are analysed from a gender point of view, as well as for changes in language usage over time. The term negative emotive word refers to words that, on their own and without context, have semantic content associated with negative emotion, but that in particular cases may function as intensifiers (e.g., rohadt jó 'damn good') or as sentiment expressions with positive polarity despite their negative prior polarity (e.g., brutális, ahogy ez a férfi rajzol 'it's awesome (lit. brutal) how this guy draws'). Based on the findings of several authors, the same phenomenon can be found in other languages, so it is probably a language-independent feature. For the present analysis, 67,783 tweets were collected: 37,818 tweets (19,580 written by females and 18,238 written by males) from 2016 and 48,344 (18,379 written by females and 29,965 written by males) from 2021. The goal of the research was to construct two datasets comparable from the viewpoint of semantic change as well as gender specificities. An exhaustive lexicon of Hungarian negative emotive intensifiers was also compiled (containing 214 words). After basic preprocessing steps, tweets were processed by 'magyarlanc', a toolkit written in Java for the linguistic processing of Hungarian texts. Then, the frequency and collocation features of all these words in our corpus were automatically analyzed (via the parts of speech and sentiment values of the co-occurring words). Finally, the results of all four subcorpora were compared. Some of the main outcomes of our analyses are as follows: there are almost four times fewer cases in the male corpus than in the female corpus in which a negative emotive intensifier modified a negative-polarity word in the tweet (e.g., damn bad).
At the same time, male authors used these intensifiers more frequently to modify a positive-polarity or neutral word (e.g., damn good and damn big). The results also showed that, in contrast to female authors, male authors used these words much more frequently as positive-polarity words themselves (e.g., brutális, ahogy ez a férfi rajzol 'it's awesome (lit. brutal) how this guy draws'). We also observed that male authors use significantly fewer types of emotive intensifiers than female authors, and that the frequency distribution of the words is more balanced in the female corpus. As for changes in language usage over time, some notable differences in the frequency and collocation features of the examined words were identified: some of the words collocate with more positive words in the 2nd subcorpus than in the 1st, which points to a semantic change of these words over time.

Keywords: gender differences, negative emotive words, semantic changes over time, Twitter

Procedia PDF Downloads 199
3941 Pavement Management for a Metropolitan Area: A Case Study of Montreal

Authors: Luis Amador Jimenez, Md. Shohel Amin

Abstract:

Pavement performance models are based on projections of observed traffic loads, which makes it uncertain to study funding strategies in the long run if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic development could change traffic flows. This study addresses both issues through a case study of the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A back-propagation neural network (BPN) with a generalized delta rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming of lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to keep arterial and local roads in Montreal in good condition. Montreal drivers prefer public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, with the number of ESALs doubling every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state seems to be reached.
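The growth figures above are internally consistent, as a quick check shows: a 15-year doubling time implies roughly 4.7% annual growth, which compounds to about a tenfold increase in ESALs over the 50-year simulation horizon.

```python
# Back-of-envelope check of the growth figures: pure arithmetic, no
# assumptions beyond the 15-year doubling time stated in the abstract.
rate = 2 ** (1 / 15) - 1              # annual growth implied by 15-year doubling
growth_50y = (1 + rate) ** 50         # compounded over the 50-year horizon
print(f"annual growth: {rate:.2%}, 50-year factor: {growth_50y:.1f}x")
```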

Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization

Procedia PDF Downloads 457
3940 A Reliable Multi-Type Vehicle Classification System

Authors: Ghada S. Moussa

Abstract:

Vehicle classification is an important task in traffic surveillance and intelligent transportation systems. Classification of vehicle images faces several problems, such as high intra-class vehicle variation, occlusion, shadows, and illumination changes. These problems, among others, must be considered to develop a reliable vehicle classification system. In this study, a reliable multi-type vehicle classification system based on the Bag-of-Words (BoW) paradigm is developed. Our proposed system used and compared four well-known classifiers, Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), k-Nearest Neighbour (KNN), and Decision Tree, to classify vehicles into four categories: motorcycles, small, medium, and large. Experiments on a large dataset show that our approach is efficient and reliable in classifying vehicles, with an accuracy of 95.7%. The SVM outperforms the other classification algorithms in terms of both accuracy and robustness, alongside a considerable reduction in execution time. The innovation of the developed system is that it can serve as a framework for many vehicle classification systems.
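The BoW pipeline described above can be sketched end to end: cluster local descriptors into a visual vocabulary, encode each image as a normalized codeword histogram, and classify with an SVM. Random class-shifted descriptors stand in for the SIFT features, and the sizes are toy values, not the paper's setup:

```python
# Minimal bag-of-words sketch: KMeans builds the visual vocabulary,
# each "image" becomes a histogram of codeword assignments, and a linear
# SVM separates the four vehicle categories. Descriptors are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
num_images, descr_per_image, dim, k = 200, 50, 16, 32
labels = rng.integers(0, 4, size=num_images)     # four vehicle categories
# toy assumption: descriptors drift with the class so histograms separate
descs = rng.normal(size=(num_images, descr_per_image, dim)) + labels[:, None, None]

codebook = KMeans(n_clusters=k, n_init=4, random_state=0).fit(descs.reshape(-1, dim))
hist = np.stack([
    np.bincount(codebook.predict(d), minlength=k) for d in descs
]).astype(float)
hist /= hist.sum(axis=1, keepdims=True)          # normalize each histogram

X_tr, X_te, y_tr, y_te = train_test_split(hist, labels, test_size=0.3, random_state=0)
svm = LinearSVC(max_iter=5000).fit(X_tr, y_tr)
print(f"BoW + SVM accuracy: {svm.score(X_te, y_te):.1%}")
```

Normalizing the histograms makes images with different numbers of detected descriptors comparable, which is one reason BoW representations tolerate the intra-class variation the abstract mentions.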

Keywords: vehicle classification, bag-of-words technique, SVM classifier, LDA classifier, KNN classifier, decision tree classifier, SIFT algorithm

Procedia PDF Downloads 352
3939 Double Liposomes Based Dual Drug Delivery System for Effective Eradication of Helicobacter pylori

Authors: Yuvraj Singh Dangi, Brajesh Kumar Tiwari, Ashok Kumar Jain, Kamta Prasad Namdeo

Abstract:

The potential use of liposomes as drug carriers by i.v. injection is limited by their low stability in the blood stream: phospholipid exchange and transfer to lipoproteins, mainly HDL, destabilizes and disintegrates liposomes with subsequent loss of content. To avoid the pain associated with injection and to obtain better patient compliance, various dosage forms have been developed. Drawbacks of conventional liposomes (unilamellar and multilamellar), such as low entrapment efficiency, poor stability, and release of drug after a single breach in the external membrane, have led to a new type of liposomal system. The challenge has been successfully met in the form of Double Liposomes (DL). DL is a recently developed type of liposome consisting of smaller liposomes enveloped in lipid bilayers. The outer lipid layer of DL can protect the inner liposomes against various enzymes; therefore, DL is thought to be more effective than ordinary liposomes. This concept is also supported by in vitro release characteristics, i.e., DL formation inhibited the release of drugs encapsulated in the inner liposomes. DL consists of several small liposomes encapsulated in large liposomes, i.e., multivesicular vesicles (MVV); therefore, DL should be distinguished from the ordinary classification of multilamellar vesicles (MLV), large unilamellar vesicles (LUV), and small unilamellar vesicles (SUV). However, for these liposomes, the volume of the inner phase is small and the loading volume of water-soluble drugs is low. In the present study, the potential of phosphatidylethanolamine (PE) lipid-anchored double liposomes (DL) to incorporate two drugs in a single system is exploited as a tool to augment the H. pylori eradication rate. Preparation of DL involves two steps: first, formation of primary (inner) liposomes containing one drug by the thin film hydration method; then, addition of a suspension of inner liposomes onto a thin film of lipid containing the other drug.
The success of formation of DL was characterized by optical and transmission electron microscopy. Quantitation of DL-bacterial interaction was evaluated in terms of percent growth inhibition (%GI) on reference strain of H. pylori ATCC 26695. To confirm specific binding efficacy of DL to H. pylori PE surface receptor we performed an agglutination assay. Agglutination in DL treated H. pylori suspension suggested selectivity of DL towards the PE surface receptor of H. pylori. Monotherapy is generally not recommended for treatment of a H. pylori infection due to the danger of development of resistance and unacceptably low eradication rates. Therefore, combination therapy with amoxicillin trihydrate (AMOX) as anti-H. pylori agent and ranitidine bismuth citrate (RBC) as antisecretory agent were selected for the study with an expectation that this dual-drug delivery approach will exert acceptable anti-H. pylori activity.

Keywords: Helicobacter pylori, amoxicillin trihydrate, ranitidine bismuth citrate, phosphatidylethanolamine, multivesicular systems

Procedia PDF Downloads 202
3938 The Long-Term Effects of Immediate Implantation, Early Implantation and Delayed Implantation at Aesthetics Area

Authors: Xing Wang, Lin Feng, Xuan Zou, Hongchen Liu

Abstract:

Immediate implantation after tooth extraction is considered to be the ideal way to retain the alveolar bone, but some scholars believe the aesthetic results of early implantation are more reliable. In this retrospective study, 89 patients were followed for up to 5 years. Assessment indicators included the survival of the implant (peri-implant infection, implant loosening, shedding, crowns, and occlusion), aesthetics (color and fullness of the gums, papilla height, probing depth, X-ray alveolar crest height, the patient's own aesthetic satisfaction, doctors' aesthetic scores), repair of defects around the implant (changes in peri-implant bone height and thickness, whether autologous bone grafts were used, whether absorbable/non-absorbable repair materials were used), treatment time, cost, and the use of antibiotics. The results demonstrated no significant difference in the long-term success rates of immediate, early, and delayed implantation (p > 0.05). However, the results indicated that the immediate implantation group achieved better aesthetic results after two years (p < 0.05), but may carry an increased risk of complications and failures (p < 0.05). High-risk indicators include gingival recession, labial bone wall damage, thin gingival biotypes, and poor implant position and occlusal restoration. No matter which type of implantation was selected, the extraction method and bone defect augmentation techniques were observed to be significant factors in the aesthetic effect (p < 0.05).

Keywords: immediate implantation, long-term effects, aesthetics area, dental implants

Procedia PDF Downloads 353
3937 In Vitro Antioxidant and Cytotoxic Activities Against Human Oral Cancer and Human Laryngeal Cancer of Limonia acidissima L. Bark Extracts

Authors: Kriyapa Lairungruang, Arunporn Itharat

Abstract:

Limonia acidissima L. (LA) (common name: wood apple; Thai name: ma-khwit) is a medicinal plant which has long been used in Thai traditional medicine. Its bark is used for the treatment of diarrhea, abscesses, wound healing, and inflammation, and it is also used against oral cancer. Thus, this research aimed to investigate the antioxidant and cytotoxic activities of LA bark extracts produced by various extraction methods. Different extraction procedures were used to extract LA bark for biological activity testing: boiling in water, maceration with 95% ethanol, maceration with 50% ethanol, and water boiling of the residues of the 95% and 50% ethanolic extractions. All extracts were tested for antioxidant activity using the DPPH radical scavenging assay, and for cytotoxic activity against human laryngeal epidermoid carcinoma (HEp-2) cells and human oral epidermoid carcinoma (KB) cells using the sulforhodamine B (SRB) assay. The results showed that the 95% ethanolic extract of LA bark had the highest antioxidant activity, with an EC50 value of 29.76±1.88 µg/ml. For cytotoxic activity, the 50% ethanolic extract showed the best activity against HEp-2 and KB cells, with IC50 values of 9.55±1.68 and 18.90±0.86 µg/ml, respectively. This study demonstrated that the 95% ethanolic extract of LA bark showed moderate antioxidant activity and the 50% ethanolic extract provided potent cytotoxic activity against HEp-2 and KB cells. These results confirm the traditional use of LA for the treatment of oral cancer and laryngeal cancer, and support its ongoing use.
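The EC50/IC50 values reported above are the concentrations producing 50% of the measured effect in the DPPH and SRB assays. A common way to estimate them from a dilution series is interpolation between the two assay points that bracket 50%; the sketch below (our illustration, not the authors' exact data-reduction procedure) interpolates on a log-concentration scale:

```python
import math

def ic50(concs, inhibition):
    """Estimate the IC50 (concentration giving 50 % inhibition) by linear
    interpolation on a log-concentration scale between the two assay
    points that bracket 50 %.

    concs      -- concentrations (e.g. ug/ml), ascending
    inhibition -- measured % inhibition at each concentration
    """
    points = list(zip(concs, inhibition))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 <= 50 <= i2:
            frac = (50 - i1) / (i2 - i1)  # position of 50 % between the two points
            return 10 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50 % inhibition not bracketed by the data")
```

For instance, hypothetical readings of 20%, 50%, and 80% inhibition at 1, 10, and 100 µg/ml give an IC50 of 10 µg/ml.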

Keywords: antioxidant activity, cytotoxic activity, laryngeal epidermoid carcinoma, Limonia acidissima L., oral epidermoid carcinoma

Procedia PDF Downloads 476
3936 Color Image Compression/Encryption/Contour Extraction using 3L-DWT and SSPCE Method

Authors: Ali A. Ukasha, Majdi F. Elbireki, Mohammad F. Abdullah

Abstract:

Data security is needed in data transmission, storage, and communication. This paper is divided into two parts and deals with color images, which are decomposed into red, green, and blue channels. The blue and green channels are compressed using a 3-level discrete wavelet transform. The Arnold transform is used to change the locations of the red channel's pixels as an image scrambling process. All channels are then encrypted separately using a key image that has the same size as the original and is generated using private keys and modulo operations. XOR and modulo operations are performed between the encrypted channel images in order to change the image pixel values. Contours extracted from the recovered color images can be obtained with an accepted level of distortion using the single step parallel contour extraction (SSPCE) method. Experiments have demonstrated that the proposed algorithm can fully encrypt 2D color images and completely reconstruct them without any distortion. It is also shown that the analyzed algorithm has extremely high security against attacks such as salt-and-pepper noise and JPEG compression. This shows that color images can be protected at a higher security level. The presented method has an easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
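The Arnold transform scrambling step can be illustrated on a square channel. The sketch below applies the standard Arnold cat map, which moves pixel (x, y) of an N×N channel to ((x + y) mod N, (x + 2y) mod N), together with its exact inverse; it is a minimal illustration of the scrambling idea, not the paper's full compression/encryption pipeline:

```python
def arnold_scramble(channel, iterations=1):
    """One or more rounds of the Arnold cat map on a square N x N channel:
    the pixel at (x, y) moves to ((x + y) mod N, (x + 2y) mod N)."""
    n = len(channel)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = channel[x][y]
        channel = out
    return channel

def arnold_unscramble(channel, iterations=1):
    """Exact inverse: the pixel now at (x, y) returns to
    ((2x - y) mod N, (y - x) mod N)."""
    n = len(channel)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(2 * x - y) % n][(y - x) % n] = channel[x][y]
        channel = out
    return channel
```

Running the inverse for the same number of iterations restores the original channel exactly, which is why the decryption side can undo the scrambling without loss.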

Keywords: SSPCE method, image compression, salt-and-pepper attacks, bitplane decomposition, Arnold transform, color image, wavelet transform, lossless image encryption

Procedia PDF Downloads 513
3935 Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition

Authors: Aisultan Shoiynbek, Darkhan Kuanyshbay, Paulo Menezes, Akbayan Bekarystankyzy, Assylbek Mukhametzhanov, Temirlan Shoiynbek

Abstract:

Speech emotion recognition (SER) has received increasing research interest in recent years. It is common practice to use emotional speech collected under controlled conditions, recorded by actors imitating and artificially producing emotions in front of a microphone. There are four issues with that approach: the emotions are not natural, meaning that machines learn to recognize fake emotions; the emotions are limited in quantity and poor in variety of speaking styles; there is some language dependency in SER; and consequently, each time researchers want to start work on SER, they need to find a good emotional database in their language. This paper proposes an approach to creating an automatic tool for speech emotion extraction based on facial emotion recognition and describes the sequence of actions involved in the proposed approach. One of the first objectives in this sequence is the speech detection problem. The paper provides a detailed description of a speech detection model based on a fully connected deep neural network for Kazakh and Russian. Despite the high speech detection results for Kazakh and Russian, the described process is suitable for any language. To investigate the working capacity of the developed model, an analysis of speech detection and extraction from real tasks has been performed.
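The frame-level decision of a fully connected speech detector reduces to a forward pass over acoustic features such as MFCCs. A minimal sketch follows; the layer sizes and weights here are illustrative assumptions, not the trained Kazakh/Russian model:

```python
import math

def mlp_speech_detector(frame, w1, b1, w2, b2):
    """Forward pass of a minimal fully connected network: one hidden
    ReLU layer followed by a sigmoid output interpreted as
    P(frame contains speech).

    frame  -- feature vector for one audio frame (e.g. MFCCs)
    w1, b1 -- hidden-layer weight rows and biases
    w2, b2 -- output-layer weights and bias
    """
    hidden = [max(0.0, sum(w * x for w, x in zip(row, frame)) + b)
              for row, b in zip(w1, b1)]
    z = sum(w * h for w, h in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes to (0, 1)
```

In practice the weights come from training on labeled speech/non-speech frames; a threshold (commonly 0.5) on the returned probability yields the speech/non-speech decision per frame.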

Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset

Procedia PDF Downloads 18
3934 Physico-Mechanical Behavior of Indian Oil Shales

Authors: K. S. Rao, Ankesh Kumar

Abstract:

The search for alternative energy sources to petroleum has intensified because of growing demand and the depletion of petroleum reserves. The importance of oil shales as an economically viable substitute has therefore increased many-fold in the last 20 years. Technologies such as hydro-fracturing have opened the field of oil extraction from these unconventional rocks. Oil shale is a compact laminated rock of sedimentary origin containing organic matter known as kerogen, which yields oil when distilled. Oil shales are formed from the contemporaneous deposition of fine-grained mineral debris and organic degradation products derived from the breakdown of biota. Conditions required for the formation of oil shales include abundant organic productivity, early development of anaerobic conditions, and a lack of destructive organisms. These rocks have not gone through high-temperature and high-pressure conditions in nature. The most common approach to oil extraction is drastically breaking the bonds of the organics, which involves a retorting process. The two approaches to retorting are surface retorting and in-situ processing; the most environmentally friendly is in-situ processing. The three steps involved in this process are fracturing, injection to achieve communication, and fluid migration at the underground location. Upon heating (retorting) oil shale at temperatures in the range of 300 to 400°C, the kerogen decomposes into oil, gas, and residual carbon in a process referred to as pyrolysis. It is therefore very important to understand the physico-mechanical behavior of such rocks in order to improve the technology for in-situ extraction. It is clear from past research and physical observation that these rocks behave anisotropically, so it is very important to understand their mechanical behavior under high pressure at different orientation angles for the economical use of these resources.
Knowing the engineering behavior under the above conditions will allow us to simulate deep-ground retorting conditions numerically and experimentally. Many researchers have investigated the effect of organic content on the engineering behavior of oil shale, but the coupled effect of the organic and inorganic matrix is yet to be analyzed. The favourable characteristics of Assam coal for conversion to liquid fuels have been known for a long time. Studies have indicated that these coals and carbonaceous shales constitute the principal source rocks that have generated the hydrocarbons produced in the region. Rock cores of representative samples were collected by on-site drilling, as coring in the laboratory is very difficult due to the rock's highly anisotropic nature. Different tests were performed to understand the petrology of these samples, and chemical analyses were done to quantify the organic content of these rocks. The mechanical properties of these rocks were investigated at different anisotropy angles. The results obtained from the petrology and chemical analyses are then correlated with the mechanical properties. These properties and correlations will further help in increasing the producibility of these rocks. It is well established that organic content is negatively correlated with tensile strength, compressive strength, and modulus of elasticity.
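The reported negative correlation between organic content and the strength parameters is conventionally quantified by the Pearson coefficient; a minimal sketch of its computation (e.g. organic content against tensile strength):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples,
    e.g. organic content (%) vs. tensile strength (MPa).
    Returns a value in [-1, 1]; negative means the second quantity
    decreases as the first increases."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5
```

A perfectly decreasing relationship gives r = -1, matching the sign of the correlation the abstract describes.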

Keywords: oil shale, producibility, hydro-fracturing, kerogen, petrology, mechanical behavior

Procedia PDF Downloads 343
3933 Numerical Methodology to Support the Development of a Double Chamber Syringe

Authors: Lourenço Bastos, Filipa Carneiro, Bruno Vale, Rita Marques, Joana Silva, Ricardo Freitas, Ângelo Marques, Sara Cortez, Alberta Coelho, Pedro Parreira, Liliana Sousa, Anabela Salgueiro, Bruno Silva

Abstract:

Flushing is considered to be an adequate technique to reduce the risk of infection during the clinical practice of venous catheterization. Nonetheless, there is still a lack of adherence to this method, in part due to the complexity of the procedure. The SeringaDuo project aimed to develop an innovative double-chamber syringe for intravenous sequential administration of drugs and serums. This device serves to improve adherence to the practice by reducing the manipulations needed, which also improves patient safety, and to promote the flushing practice among health professionals by simplifying the task. To assist in the development of this innovative syringe, a numerical methodology was developed and validated to predict the syringe's mechanical and flow behavior during the fluid loading and administration phases, as well as to allow evaluation of the material behavior during production. For this, three commercial numerical simulation packages were used, namely ABAQUS, ANSYS/FLUENT, and MOLDFLOW. This methodology aimed to evaluate the feasibility of the concepts and to optimize the geometries of the syringe's components, creating an iterative product development process based on numerical simulations and validated by the production of prototypes. Through this methodology, it was possible to achieve a final design that fulfils all the defined characteristics and specifications. This iterative process, consisting of consecutive construction and evaluation of new concepts, is a powerful tool for product development: it yields fast and accurate results without the strict need for prototypes and converges on an optimized solution that fulfils all the predefined specifications and requirements.

Keywords: venous catheterization, flushing, syringe, numerical simulation

Procedia PDF Downloads 163
3932 Investigation of Type and Concentration Effects of Solvent on Chemical Properties of Saffron Edible Extract

Authors: Sharareh Mohseni

Abstract:

Purpose: The objective of this study was to find a suitable solvent to produce saffron edible extract with improved chemical properties. Design/methodology/approach: Dried and pulverized stigmas of C. sativus L. (10 g) were extracted with 300 ml of solvents, including distilled water (DW), ethanol/DW, methanol/DW, propylene glycol/DW, heptane/DW, and hexane/DW, for 3 days at 25°C and then centrifuged at 3000 rpm. The extracts were evaporated using a rotary evaporator at 40°C. The fiber- and solvent-free extracts were then analyzed by UV spectrophotometry to determine the saffron quality parameters crocin, picrocrocin, and safranal. Findings: A distilled water/ethanol mixture as the extraction solvent caused larger amounts of the plant constituents to diffuse into the extract compared to the other treatments and the control. Polar solvents, including distilled water, ethanol, and propylene glycol (except methanol), were more effective in extracting crocin, picrocrocin, and safranal than non-polar solvents. Social implications: Due to its enhanced color and flavor, saffron extract is economical compared to natural saffron. Saffron extract saves preparation time and reduces the amount of saffron required to impart the same flavor compared to dry saffron. A liquid extract is easier to use and standardize in food preparations than dry stigmas and can be dosed precisely. Originality/value: No research had been done on the production of saffron edible extract using the solvents studied in this survey; the novelty of this research is high, and the results can be used industrially.

Keywords: Crocus sativus L., saffron extract, solvent extraction, distilled water

Procedia PDF Downloads 444
3931 Effects of Different Mechanical Treatments on the Physical and Chemical Properties of Turmeric

Authors: Serpa A. M., Gómez Hoyos C., Velásquez-Cock J. A., Ruiz L. F., Vélez Acosta L. M., Gañan P., Zuluaga R.

Abstract:

Turmeric (Curcuma longa L.) is an Indian rhizome known for its biological properties, derived from its active compounds such as curcuminoids. Curcumin, the main polyphenol in turmeric, represents only around 3.5% of the dehydrated rhizome, and extraction yields between 41 and 90% have been reported. Therefore, for every 1000 tons of turmeric powder used for the extraction of curcumin, around 970 tons of residues are generated. The present study evaluates the effect of different mechanical treatments (Waring blender, grinder, and high-pressure homogenization) on the physical and chemical properties of turmeric, as an alternative for the transformation of the entire rhizome. Suspensions of turmeric (10, 20, and 30%) were processed in a Waring blender for 3 min at 12000 rpm, while the samples treated in the grinder were processed at two different gaps (-1 and -1.5). Finally, high-pressure homogenization was carried out at 500 bar. According to the results, the luminosity of the samples increases with the severity of the mechanical treatment, due to the stabilization of the color associated with the inactivation of oxidative enzymes. Additionally, according to the microstructure of the samples, the grinder process (gap -1.5) and high-pressure homogenization yielded the largest size reduction, reaching sizes down to 3 µm (measured by optical microscopy). These processes disrupt the cells and break their fragments into small suspended particles. The infrared spectra obtained from the samples using an attenuated total reflectance accessory indicate changes in the 800-1200 cm⁻¹ region, related mainly to changes in the starch structure. Finally, thermogravimetric analysis shows the presence of starch, curcumin, and some minerals in the suspensions.

Keywords: characterization, mechanical treatments, suspensions, turmeric rhizome

Procedia PDF Downloads 160
3930 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicality in medicine. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. First, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Second, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.

Keywords: convolutional neural network, electronic medical record, feature representation, lexical semantics, semantic decision

Procedia PDF Downloads 122
3929 Railway Transport as a Potential Source of Polychlorinated Biphenyls in Soil

Authors: Nataša Stojić, Mira Pucarević, Nebojša Ralević, Vojislava Bursić, Gordan Stojić

Abstract:

Surface soil (0-10 cm) samples from 52 sampling sites along railway tracks in the territory of Srem (the western part of the Autonomous Province of Vojvodina, itself part of Serbia) were collected and analyzed for 7 polychlorinated biphenyls (PCBs) in order to see how the distance from the railroad, on the one hand, and from landfills, on the other, affects the concentration of PCBs (CPCB) in the soil. Samples were taken at distances of 0.03 to 4.19 km from the railway and 0.43 to 3.35 km from the landfills. Soxhlet extraction (USEPA 3540S) was used for the soil extraction. The extracts were purified on a silica-gel column (USEPA 3630C) and analyzed by gas chromatography with tandem mass spectrometry. PCBs were detected at all but two locations. The mean total concentration of PCBs over all other sampling locations was 0.0043 ppm dry weight (dw), with a range of 0.0005 to 0.0227 ppm dw. Factors that affect the concentration of PCBs were isolated from the relevant data with statistical methods (PCA). The data were also analyzed using Pearson's chi-squared test, which showed that the hypothesis of independence between CPCB and the distance from the railway can be rejected. The hypothesis of independence between CPCB and the percentage of humus in the soil can also be rejected; in contrast, for CPCB and the distance from the landfill, the hypothesis of independence cannot be rejected. Based on these results, it can be said that railway transport is a potential source of PCBs. The next step in this research is to establish the positions of transformers located near the sampling sites as another important factor affecting the concentration of PCBs in the soil.
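Pearson's chi-squared test of independence used above reduces to comparing observed counts in a contingency table (e.g. PCB concentration class versus distance class) against the counts expected under independence. A minimal sketch follows, with the binning into classes assumed rather than taken from the paper:

```python
def chi_square_stat(table):
    """Pearson chi-squared statistic for a contingency table
    (a list of rows of observed counts), testing independence of the
    row grouping from the column grouping."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total  # expected under independence
            stat += (obs - exp) ** 2 / exp
    return stat
```

For a 2x2 table the statistic has 1 degree of freedom, so independence is rejected at the 5% level when the statistic exceeds 3.841; a perfectly balanced table gives a statistic of 0.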

Keywords: GC/MS, landfill, PCB, railway, soil

Procedia PDF Downloads 329
3928 Semantic Indexing Improvement for Textual Documents: Contribution of Classification by Fuzzy Association Rules

Authors: Mohsen Maraoui

Abstract:

With the aim of improving natural language processing applications such as information retrieval, machine translation, and lexical disambiguation, we focus on a statistical approach to semantic indexing of multilingual text documents based on a conceptual network formalism. We propose to use this formalism as an indexing language to represent the descriptive concepts and their weighting; these concepts represent the content of the document. Our contribution is based on two steps. In the first step, we extract index terms using the multilingual lexical resource EuroWordNet (EWN). In the second step, we pass from the representation of index terms to the representation of index concepts through the conceptual network formalism. This network is generated using the EWN resource and passes through a classification step based on an association rules model, in an attempt to discover the non-taxonomic or contextual relations between the concepts of a document. These latent relations are buried in the text and carried by the semantic context of the co-occurrence of concepts in the document. Our proposed indexing approach can be applied to text documents in various languages because it is based on a linguistic method adapted to each language through a multilingual thesaurus. We then apply the same statistical process regardless of the language in order to extract the significant concepts and their associated weights. We show that the proposed indexing approach provides encouraging results.
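The association-rules classification step rests on support and confidence computed over concept co-occurrences. A minimal sketch (the transaction sets in the usage note are hypothetical documents, and the fuzzy weighting of the paper's model is omitted):

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent.

    transactions -- list of sets, each the concepts co-occurring in one document
    antecedent   -- set of concepts on the left-hand side of the rule
    consequent   -- set of concepts on the right-hand side
    """
    a, c = set(antecedent), set(consequent)
    n_a = sum(1 for t in transactions if a <= t)         # docs containing the antecedent
    n_ac = sum(1 for t in transactions if (a | c) <= t)  # docs containing both sides
    support = n_ac / len(transactions)
    confidence = n_ac / n_a if n_a else 0.0
    return support, confidence
```

For example, over the hypothetical documents `[{'bank', 'money'}, {'bank', 'river'}, {'bank', 'money', 'loan'}, {'tree'}]`, the rule {bank} -> {money} has support 0.5 and confidence 2/3, which is the kind of contextual (non-taxonomic) relation the classification step is meant to surface.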

Keywords: concept extraction, conceptual network formalism, fuzzy association rules, multilingual thesaurus, semantic indexing

Procedia PDF Downloads 135
3927 Bacterial Recovery of Copper Ores

Authors: Zh. Karaulova, D. Baizhigitov

Abstract:

At the Aktogay deposit, the oxidized ore section has been mined since 2015; by now, the reserves of easily enriched ore are decreasing, and a large quantity of copper-poor, difficult-to-enrich ore has accumulated in the dumps of the KAZ Minerals Aktogay deposit, which is unprofitable to process using traditional mining methods. Hence, another technology needs to be implemented, which will significantly expand the raw material base of copper production in Kazakhstan and ensure the efficient use of natural resources. Heap and dump bacterial recovery are the most suitable technologies for processing low-grade secondary copper sulfide ores. The test objects were copper ores of the Aktogay deposit and the chemolithotrophic bacteria Leptospirillum ferrooxidans (L.f.), Acidithiobacillus caldus (A.c.), and Sulfobacillus acidophilus (S.a.), which were used as a mixed culture in the bacterial oxidation systems. They remain active in the 20-40°C temperature range. These bacteria are the most extensively studied and widely used in sulfide mineral recovery technology. Biocatalytic acceleration was achieved as a result of the bacteria oxidizing iron sulfides to form ferrous sulfate, which subsequently underwent further oxidation to ferric sulfate. The following results were achieved at the initial stage, where the goal was to grow and maintain the bacterial cultures under laboratory conditions. These bacteria grew best within the pH 1.2-1.8 range, with light stirring and in an aerated environment. The optimal growth temperature was 30-33°C; the growth rate decreased by one half for each 4-5°C fall in temperature from 30°C. At best, the number of bacteria doubled every 24 hours. Typically, the maximum concentration of cells that can be grown in ferrous solution is about 10⁷/ml. A further step researched in this case was the adaptation of the microorganisms to the environments of certain metals.
This was followed by mass production of the inoculum and its further cultivation at factory scale. This was done by adding sulfide concentrate, allowing the bacteria to convert the ferrous sulfate as indicated by the Eh (>600 mV), then diluting to double the volume and adding concentrate to achieve the same metal level; this process was repeated until the desired metal levels and volumes were achieved. The final stage of bacterial recovery was the transportation and irrigation of the secondary sulfide copper ores of the oxidized ore section. In conclusion, the project was implemented at the Aktogay mine even though the bioleaching process was prolonged. The method of bacterial recovery can compete well with existing non-biological methods of extracting metals from ores.
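The laboratory growth figures quoted above (doubling every 24 hours up to roughly 10⁷ cells/ml in ferrous solution) imply a simple capped exponential model; a sketch, with the starting density chosen purely for illustration:

```python
def cells_per_ml(n0, hours, doubling_hours=24.0, cap=1e7):
    """Bacterial density after `hours` of growth, doubling every
    `doubling_hours`, capped at the ~10^7 cells/ml ceiling reported
    for growth in ferrous solution.

    n0 -- starting density (cells/ml)
    """
    return min(cap, n0 * 2 ** (hours / doubling_hours))
```

Starting from 10⁵ cells/ml, the culture reaches 4 x 10⁵ cells/ml after two days and hits the 10⁷ cells/ml ceiling in well under ten days, which is why repeated dilution with fresh concentrate is needed to scale the inoculum up.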

Keywords: bacterial recovery, copper ore, bioleaching, bacterial inoculum

Procedia PDF Downloads 67
3926 Application of Aquatic Plants for the Remediation of Organochlorine Pesticides from Keenjhar Lake

Authors: Soomal Hamza, Uzma Imran

Abstract:

Organochlorine pesticides bioaccumulate in the fat of fish, birds, and animals, through which they enter the human food chain. Due to their persistence and stability in the environment, many health impacts are associated with them, most of which are carcinogenic in nature. In this study, the levels of organochlorine pesticides in Keenjhar Lake were detected and remediated using a rhizoremediation technique. Fourteen OC pesticides, namely aldrin, dieldrin, heptachlor, heptachlor epoxide, endrin, endosulfan I and II, DDT, DDE, DDD, and alpha-, beta-, gamma- and delta-BHC, and two plants, water hyacinth and Salvinia molesta, were used in the system in a pot experiment which ran for 11 days. A consortium was inoculated onto both plants to increase efficiency. Water samples were processed using liquid-liquid extraction; sediment and root samples were processed using the Soxhlet method, followed by clean-up and gas chromatography. Delta-BHC was predominant in all samples, with mean concentrations (ppb) and standard deviations of 0.02 ± 0.14, 0.52 ± 0.68, and 0.61 ± 0.06 in the water, sediment, and root samples, respectively. The highest levels were of endosulfan II in the water, sediment, and root samples. Water hyacinth proved to be a better bioaccumulator than Salvinia molesta. The pattern of compound reduction rates by the end of the experiment was delta-BHC > DDD > alpha-BHC > DDT > heptachlor > heptachlor epoxide > dieldrin > aldrin > endrin > DDE > endosulfan I > endosulfan II. No significant difference was observed between the pots with and without the consortium addition. Phytoremediation is a promising technique, but more studies are required to assess the bioremediation potential of different aquatic plants and plant-endophyte relationships.

Keywords: aquatic plant, bioremediation, gas chromatography, liquid-liquid extraction

Procedia PDF Downloads 140
3925 The Study of Spray Drying Process for Skimmed Coconut Milk

Authors: Jaruwan Duangchuen, Siwalak Pathaveerat

Abstract:

Coconut (Cocos nucifera) belongs to the family Arecaceae. Coconut juice and meat are consumed as food and dessert in several regions of the world. Coconut juice contains little protein, with arginine the main amino acid. Coconut meat, the endosperm of the coconut, has nutritional value; it is composed of carbohydrate, protein, and fat. The objective of this study is the utilization of by-products from the virgin coconut oil extraction process by using the skimmed coconut milk as a powder. The skimmed coconut milk separated from the coconut milk in the virgin coconut oil extraction process consists of approximately 6.4% protein, 7.2% carbohydrate, 0.27% dietary fiber, 6.27% sugar, 3.6% fat, and 86.93% moisture. This skimmed coconut milk can be made into a powder as a value-added product by spray drying. The factors affecting the yield and properties of the dried skimmed coconut milk in the spraying process are the inlet and outlet air temperatures and the maltodextrin concentration. Maltodextrin contents (15 and 20%), outlet air temperatures (80 ºC, 85 ºC, 90 ºC), and inlet air temperatures (190 ºC, 200 ºC, 210 ºC) were tested in the skimmed coconut milk spray drying process. The spray dryer was kept at a constant air flow rate (0.2698 m³/s). The powder samples were analyzed for moisture content (2.22-3.23%), bulk density (0.4-0.67 g/mL), solubility in water, wettability (4.04-19.25 min), color, and particle size. The maximum yield (18.00%) of spray-dried coconut milk powder was obtained at an inlet temperature of 210 °C, an outlet temperature of 80 °C, and 20% maltodextrin, with a drying time of 27.27 seconds. Amino acid analysis by HPLC (UV detector) showed that the most abundant amino acids were glutamine (16.28%), arginine (10.32%), and glycine (9.59%).
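The yield figure quoted above is conventionally the mass of powder recovered relative to the solids fed to the dryer; a minimal sketch of that bookkeeping (the feed mass and solids fraction in the usage note are illustrative, not the study's measured values):

```python
def powder_yield(feed_mass, solids_fraction, powder_mass):
    """Process yield (%) of a spray-drying run: mass of powder collected
    relative to the solids fed into the dryer.

    feed_mass       -- mass of liquid feed (g)
    solids_fraction -- fraction of the feed that is solids (0..1)
    powder_mass     -- mass of powder collected (g)
    """
    solids_in = feed_mass * solids_fraction
    return 100.0 * powder_mass / solids_in
```

For instance, feeding 1000 g of a hypothetical 15%-solids feed and collecting 27 g of powder corresponds to an 18% yield, the order of magnitude of the maximum yield reported above.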

Keywords: skimmed coconut milk, spray drying, virgin coconut oil process (VCO), maltodextrin

Procedia PDF Downloads 326