Search results for: synthetic dataset

1640 Phage Capsid for Efficient Delivery of Cytotoxic Drugs

Authors: Simona Dostalova, Dita Munzova, Ana Maria Jimenez Jimenez, Marketa Vaculovicova, Vojtech Adam, Rene Kizek

Abstract:

The boom of nanomedicine in recent years has led to the development of numerous new nanomaterials that can be used as nanocarriers in drug delivery. These nanocarriers can be either synthetic or natural-based. The disadvantage of many synthetic nanocarriers is their toxicity in the patient's body. Protein cages that occur naturally in the human body do not exhibit this disadvantage; however, the release of cargo from some protein cages in target cells can be problematic. The capsids of many viruses, including phages, can serve as a special type of protein cage. Phages infect bacterial cells and are therefore not harmful to human cells. The targeting of phage particles to cancer cells can be achieved by producing empty phage capsids, during which the targeting moieties (e.g. peptides) can be cloned into the phage capsid genes to decorate its surface. Moreover, the produced capsids do not contain viral nucleic acid and are therefore not infectious to beneficial bacteria in the patient's body. The protein cage composed of a viral capsid is larger than the frequently used apoferritin cage, but its size is still small enough to benefit from passive targeting by the Enhanced Permeability and Retention effect. In this work, bacteriophage λ was used, both whole and as its empty capsid, for the delivery of different cytotoxic drugs (cisplatin, carboplatin, oxaliplatin, etoposide and doxorubicin). Large quantities of phage λ were obtained from a phage λ-producing strain of E. coli cultivated in medium with 0.2% maltose. After the E. coli were killed with chloroform and removed by centrifugation, the phage was concentrated by ultracentrifugation at 130,000 g and 4 °C for 3 h. The encapsulation of the drugs was performed by the infusion method, and four different concentrations of the drugs were encapsulated (200; 100; 50; 25 µg/ml). Free drug molecules were removed by dialysis. The encapsulation was verified using spectrophotometric and electrochemical methods. The amount of encapsulated drug increased linearly with the amount of applied drug (determination coefficient R² = 0.8013). 76% of the applied drug was encapsulated in phage λ particles (at a phage concentration of 10 µg/ml), even at the highest applied drug concentration of 200 µg/ml. Only 1% of the encapsulated drug was detected in phage DNA. Similar results were obtained with encapsulation in the empty phage capsid. Therefore, it can be concluded that the encapsulation of drugs into phage particles is efficient and occurs mostly by interaction of the drugs with the protein capsid.
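
To make the reported linearity concrete, the sketch below fits a least-squares line to applied versus encapsulated drug concentrations and computes the determination coefficient in the same way an R² of 0.8013 would be obtained; the numbers here are invented stand-ins, not the study's measurements.

```python
import numpy as np

# Hypothetical calibration data: applied drug concentration (ug/ml) versus
# encapsulated drug measured spectrophotometrically (illustrative values only).
applied = np.array([25.0, 50.0, 100.0, 200.0])
encapsulated = np.array([19.0, 38.5, 74.0, 152.0])

# Least-squares line; the slope approximates the encapsulation efficiency.
slope, intercept = np.polyfit(applied, encapsulated, 1)

# Determination coefficient R^2 of the linear fit.
predicted = slope * applied + intercept
ss_res = np.sum((encapsulated - predicted) ** 2)
ss_tot = np.sum((encapsulated - encapsulated.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"efficiency ~ {slope:.0%} of applied drug, R^2 = {r_squared:.4f}")
```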

Keywords: cytostatics, drug delivery, nanocarriers, phage capsid

Procedia PDF Downloads 493
1639 Thermal Analysis of a Composite of Coconut Fiber and Latex

Authors: Elmo Thiago Lins Cöuras Ford, Valentina Alessandra Carvalho do Vale

Abstract:

Given the unquestionable need for environmental preservation, natural fibers have been seen as a beneficial alternative to synthetic, glass and metallic fibers for the production of composites. In this work, the behavior of a composite made with fiber from the coconut husk as reinforcement and latex as matrix was analyzed when subjected to a heat source. Temperature profiles were recorded on the internal and external surfaces of the composite, as well as the temperature gradient across it. The behavior of this composite when subjected to a cold source was also analyzed. Conclusions were drawn from the responses of the system.

Keywords: natural fiber, composite, temperature, latex, gradient

Procedia PDF Downloads 817
1638 Deep Learning-Based Liver 3D Slicer for Image-Guided Therapy: Segmentation and Needle Aspiration

Authors: Ahmedou Moulaye Idriss, Tfeil Yahya, Tamas Ungi, Gabor Fichtinger

Abstract:

Image-guided therapy (IGT) plays a crucial role in minimally invasive procedures for liver interventions. Accurate segmentation of the liver and precise needle placement are essential for successful interventions such as needle aspiration. In this study, we propose a deep learning-based liver 3D slicer designed to enhance segmentation accuracy and facilitate needle aspiration procedures. The developed 3D slicer leverages state-of-the-art convolutional neural networks (CNNs) for automatic liver segmentation in medical images. The CNN model is trained on a diverse dataset of liver images obtained from various imaging modalities, including computed tomography (CT) and magnetic resonance imaging (MRI). The trained model demonstrates robust performance in accurately delineating liver boundaries, even in cases with anatomical variations and pathological conditions. Furthermore, the 3D slicer integrates advanced image registration techniques to ensure accurate alignment of preoperative images with real-time interventional imaging. This alignment enhances the precision of needle placement during aspiration procedures, minimizing the risk of complications and improving overall intervention outcomes. To validate the efficacy of the proposed deep learning-based 3D slicer, a comprehensive evaluation is conducted using a dataset of clinical cases. Quantitative metrics, including the Dice similarity coefficient and Hausdorff distance, are employed to assess the accuracy of liver segmentation. Additionally, the performance of the 3D slicer in guiding needle aspiration procedures is evaluated through simulated and clinical interventions. Preliminary results demonstrate the effectiveness of the developed 3D slicer in achieving accurate liver segmentation and guiding needle aspiration procedures with high precision. The integration of deep learning techniques into the IGT workflow shows great promise for enhancing the efficiency and safety of liver interventions, ultimately contributing to improved patient outcomes.
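
For readers unfamiliar with the two evaluation metrics named above, the sketch below computes the Dice similarity coefficient and the symmetric Hausdorff distance for a pair of binary masks; the toy rectangles stand in for predicted and ground-truth liver segmentations and are not the study's data.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def hausdorff_distance(pred: np.ndarray, truth: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the coordinates of two masks."""
    p = np.argwhere(pred).astype(float)
    t = np.argwhere(truth).astype(float)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])

# Toy 2D masks standing in for a liver segmentation and its ground truth.
truth = np.zeros((64, 64), dtype=bool)
truth[16:48, 16:48] = True
pred = np.zeros_like(truth)
pred[18:50, 14:46] = True

print("Dice:", round(dice_coefficient(pred, truth), 3))
print("Hausdorff:", hausdorff_distance(pred, truth))
```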

Keywords: deep learning, liver segmentation, 3D slicer, image guided therapy, needle aspiration

Procedia PDF Downloads 48
1637 Comparison of Existing Predictors and Development of a Computational Method for S-Palmitoylation Site Identification in Arabidopsis thaliana

Authors: Ayesha Sanjana Kawser Parsha

Abstract:

S-acylation is a reversible modification in which cysteine residues are linked to the fatty acids palmitate (74%) or stearate (22%), at either the COOH or NH2 terminus, via a thioester linkage. Several experimental methods can be used to identify S-palmitoylation sites; however, since they require a lot of time, computational methods are becoming increasingly necessary. There are not many predictors, however, that can locate S-palmitoylation sites in Arabidopsis thaliana with sufficient accuracy, which motivates the building of a better prediction tool. To identify the type of machine learning algorithm that predicts this site more accurately for the experimental dataset, several prediction tools were examined in this research, including GPS-PALM 6.0, pCysMod, GPS-LIPID 1.0, CSS-PALM 4.0, and NBA-PALM. These analyses were conducted by constructing receiver operating characteristic (ROC) plots and computing the area under the curve (AUC) score. An AI-driven, deep-learning-based prediction tool was then developed using this analysis and three types of sequence-based input data: amino acid composition, binary encoding profile, and autocorrelation features. The model was built with five layers, two activation functions, and the associated parameters and hyperparameters. The model was trained with various combinations of features, and after training and validation it performed best when all the features were present, using the experimental dataset with 8- and 10-fold cross-validation. When tested on unseen and new data, such as the GPS-PALM 6.0 plant data and the pCysMod mouse data, the model performed well, with an area under the curve score near 1. Comparing the 10-fold cross-validation AUC of the new model with the established tools' AUC on their respective training sets demonstrates that this model outperforms the prior tools in predicting S-palmitoylation sites in the experimental dataset. The objective of this study was to develop a prediction tool for Arabidopsis thaliana that is more accurate than current tools, as measured by the AUC score. Both plant food production and the selection of immunological treatment targets can benefit from using this method to forecast S-palmitoylation sites.
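
Two of the three sequence-based inputs named above are simple to compute. The sketch below shows one plausible reading of the amino acid composition and binary encoding features for a window centred on a candidate cysteine; the window string and its length are assumptions for illustration, not taken from the paper.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac_features(window: str) -> list[float]:
    """Amino acid composition: fraction of each residue type in the window."""
    counts = Counter(window)
    return [counts.get(aa, 0) / len(window) for aa in AMINO_ACIDS]

def binary_encoding(window: str) -> list[int]:
    """Binary encoding profile: a one-hot vector per position, concatenated."""
    return [int(aa == ref) for aa in window for ref in AMINO_ACIDS]

# Hypothetical 21-residue window with the candidate cysteine at the centre.
window = "AGVLKSTPED" + "C" + "MNQRHWFYIA"
print(len(aac_features(window)), len(binary_encoding(window)))  # 20 420
```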

Keywords: S-palmitoylation, ROC plot, area under the curve, cross-validation score

Procedia PDF Downloads 76
1636 Developing Primary Care Datasets for a National Asthma Audit

Authors: Rachael Andrews, Viktoria McMillan, Shuaib Nasser, Christopher M. Roberts

Abstract:

Background and objective: The National Review of Asthma Deaths (NRAD) found that asthma management and care was inadequate in 26% of cases reviewed. Major shortfalls identified were adherence to national guidelines and standards and, particularly, the organisation of care, including supervision and monitoring in primary care, with 70% of cases reviewed having at least one avoidable factor in this area. 5.4 million people in the UK are diagnosed with and actively treated for asthma, and approximately 60,000 are admitted to hospital with acute exacerbations each year. The majority of people with asthma receive management and treatment solely in primary care. This has therefore created concern that many people within the UK are receiving sub-optimal asthma care, resulting in unnecessary morbidity and risk of adverse outcome. NRAD concluded that a national asthma audit programme should be established to measure and improve the processes, organisation, and outcomes of asthma care. Objective: To develop a primary care dataset enabling extraction of information from GP practices in Wales, providing robust data from which results and lessons could be drawn to drive service development and improvement. Methods: A multidisciplinary group of experts, including general practitioners, primary care organisation representatives, and asthma patients, was formed and used as a source of governance and guidance. A review of asthma literature, guidance, and standards took place and was used to identify areas of asthma care which, if improved, would lead to better patient outcomes. Modified Delphi methodology was used to gain consensus from the expert group on which of the areas identified were to be prioritised, and an asthma patient and carer focus group was held to seek views and feedback on areas of asthma care that were important to them. Areas of asthma care identified by both groups were mapped to asthma guidelines and standards to inform and develop primary and secondary care datasets covering both adult and paediatric care. Dataset development consisted of expert review and a targeted consultation process in order to seek broad stakeholder views and feedback. Results: The areas of asthma care identified as requiring prioritisation by the National Asthma Audit were: (i) prescribing, (ii) asthma diagnosis, (iii) asthma reviews, (iv) Personalised Asthma Action Plans (PAAPs), and (v) primary care follow-up after discharge from hospital. Methodologies and primary care queries were developed to cover each of the areas of poor and variable asthma care identified, and the queries were designed to extract information directly from electronic patient records. Conclusion: This paper describes the methodological approach followed to develop primary care datasets for a National Asthma Audit. It sets out the principles behind the establishment of a National Asthma Audit programme in response to a national asthma mortality review and describes the development activities undertaken. Key process elements included: (i) mapping identified areas of poor and variable asthma care to national guidelines and standards, (ii) early engagement of experts, including clinicians and patients, in the process, and (iii) targeted consultation on the queries to provide further insight into measures that were collectable, reproducible and relevant.

Keywords: asthma, primary care, general practice, dataset development

Procedia PDF Downloads 175
1635 DEEPMOTILE: Motility Analysis of Human Spermatozoa Using Deep Learning in the Sri Lankan Population

Authors: Chamika Chiran Perera, Dananjaya Perera, Chirath Dasanayake, Banuka Athuraliya

Abstract:

Male infertility is a major problem worldwide, and it is a neglected and sensitive health issue in Sri Lanka. It can be assessed by analyzing human semen samples, and sperm motility is one of several factors used to evaluate a male's fertility potential. In Sri Lanka, this analysis is performed manually. Manual methods are time-consuming and person-dependent, although they can be reliable in expert hands. Machine learning and deep learning technologies are currently being investigated to automate spermatozoa motility analysis, but existing automatic methods are unreliable: they tend to produce false positives and false detections. Current automatic methods rely on different techniques, and some of them are very expensive. Due to geographical variance in spermatozoa characteristics, current automatic methods are not reliable for motility analysis in Sri Lanka. The proposed system, DeepMotile, explores a method to analyze the motility of human spermatozoa automatically and to make it available to andrology laboratories, overcoming the current issues. DeepMotile is a novel deep learning method for analyzing spermatozoa motility parameters in the Sri Lankan population. To implement the current approach, Sri Lankan patient data were collected anonymously as a dataset, and glass slides were used as a low-cost technique for analyzing the semen samples. The problem was framed as microscopic object detection and tracking. YOLOv5 was customized and used as the object detector, achieving 94% mAP (mean average precision), 86% precision, and 90% recall on the gathered dataset. StrongSORT was used as the object tracker and was validated with andrology experts due to the unavailability of annotated ground-truth data. Furthermore, this research has identified many potential directions for further investigation, and andrology experts can use this system to analyze motility parameters with realistic accuracy.
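
The detection stage can be sketched with the public YOLOv5 hub API; the study used a customized YOLOv5 trained on its own sperm dataset, so the pretrained weights and the frame filename below are placeholders only.

```python
import torch

# Load a stock YOLOv5 model from the Ultralytics hub (the paper's customized
# sperm detector is not public; 'yolov5s' stands in here).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run detection on one microscopy frame (hypothetical file path).
results = model("semen_frame_0001.jpg")

# Each row is (x1, y1, x2, y2, confidence, class). A tracker such as
# StrongSORT would link these per-frame boxes into tracks, from which
# motility parameters (e.g., velocities) can be derived.
detections = results.xyxy[0]
print(detections[:5])
```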

Keywords: computer vision, deep learning, convolutional neural networks, multi-target tracking, microscopic object detection and tracking, male infertility detection, motility analysis of human spermatozoa

Procedia PDF Downloads 106
1634 Estimating the Ladder Angle and the Camera Position From a 2D Photograph Based on Applications of Projective Geometry and Matrix Analysis

Authors: Inigo Beckett

Abstract:

In forensic investigations, it is often the case that the most potentially useful recorded evidence derives from coincidental imagery, recorded immediately before or during an incident, and that during the incident (e.g. a 'failure' or fire event) the evidence is changed or destroyed. To an image analysis expert involved in photogrammetric analysis for civil or criminal proceedings, traditional computer vision methods involving calibrated cameras are often not appropriate because image metadata cannot be relied upon. This paper presents an approach for resolving this problem, considering in particular, by way of a case study, the angle of a simple ladder shown in a photograph. The UK Health and Safety Executive (HSE) guidance document published in 2014 (INDG455) advises that a leaning ladder should be erected at 75 degrees to the horizontal axis. Personal injury cases can arise in the construction industry because a ladder is too steep or too shallow. Ad-hoc photographs of such ladders in their incident position provide a basis for analysis of their angle. This paper presents a direct approach for ascertaining the position of the camera and the angle of the ladder simultaneously from the photograph(s) by way of a workflow that encompasses a novel application of projective geometry and matrix analysis. Mathematical analysis shows that for a given pixel ratio of directly measured collinear points (i.e. features that lie on the same line segment) in the 2D digital photograph with respect to a given viewing point, we can constrain the 3D camera position to the surface of a sphere in the scene. Depending on what we know about the ladder, we can enforce another independent constraint on the possible camera positions, which enables us to narrow the possible positions even further. Experiments were conducted using synthetic and real-world data. The synthetic data modeled a ladder on a horizontally flat plane resting against a vertical wall. The real-world data were captured using an Apple iPhone 13 Pro together with 3D laser scan survey data, whereby a ladder was placed at a known location and angle to the vertical axis. For each case, we calculated the camera positions and ladder angles using this method and cross-compared them against their respective 'true' values.
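
The paper's own workflow constrains the camera to a sphere using pixel ratios of collinear points; as a simpler companion illustration, the sketch below measures a ladder angle once a plane-to-image homography is known, for example recovered from reference points of known geometry. The matrix H and all pixel coordinates are hypothetical.

```python
import numpy as np

# Assumed homography mapping image pixels to metric coordinates (metres) on
# the vertical plane containing the ladder.
H = np.array([[0.004, 0.0, -1.5],
              [0.0, -0.004, 7.6],
              [0.0, 1.0e-5, 1.0]])

def to_plane(pixel):
    """Map an image point to plane coordinates via the homography."""
    x, y, w = H @ np.array([pixel[0], pixel[1], 1.0])
    return np.array([x / w, y / w])

foot = to_plane((412, 1890))  # hypothetical pixel of the ladder foot
top = to_plane((700, 900))    # hypothetical pixel of the upper contact point

# Ladder angle to the horizontal, for comparison with the 75 degree guidance.
rise, run = abs(top[1] - foot[1]), abs(top[0] - foot[0])
print(f"ladder angle ~ {np.degrees(np.arctan2(rise, run)):.1f} degrees")
```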

Keywords: image analysis, projective geometry, homography, photogrammetry, ladders, forensics, mathematical modeling, planar geometry, matrix analysis, collinear, cameras, photographs

Procedia PDF Downloads 52
1633 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids

Authors: S. Gariani, I. Shyha

Abstract:

Cutting titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life and high machining costs. This is due to the excessive generation of heat in the cutting zone and difficulties in heat dissipation owing to the relatively low thermal conductivity of this metal. Cooling applications in machining processes are crucial, as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, and enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral-oil-based, synthetic and semi-synthetic fluids are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial to industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants, due to a combination of biodegradability, good lubricating properties, low toxicity, high flash points, low volatility, high viscosity indices and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films. These strong lubricating films give the vegetable oil base stock a greater capability to absorb pressure and a high load-carrying capacity. This paper details preliminary experimental results for the turning of Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials and working conditions was investigated. A full factorial experimental design involving 24 tests was employed to evaluate the influence of the process variables on average surface roughness (Ra), tool wear and chip formation. In general, Ra varied between 0.5 and 1.56 µm, and the Vasco1000 cutting fluid presented comparable performance with the other fluids in terms of surface roughness, while the uncoated coarse-grain WC carbide tool achieved lower flank wear at all cutting speeds. All tool tips exhibited uniform flank wear throughout the cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size at higher cutting speeds.
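
The 24-test full factorial layout can be reproduced mechanically; the sketch below assumes a 4 x 2 x 3 split of the factors (only Vasco1000 and the uncoated coarse-grain WC tool are named in the abstract, so the remaining levels are invented).

```python
from itertools import product

# One plausible 4 x 2 x 3 = 24-run full factorial design for the turning trials.
cutting_fluids = ["Vasco1000", "VO-blend-A", "VO-blend-B", "VO-blend-C"]
tools = ["uncoated coarse-grain WC", "coated WC"]
cutting_speeds = [60, 90, 120]  # m/min, illustrative levels

runs = list(product(cutting_fluids, tools, cutting_speeds))
for i, (fluid, tool, speed) in enumerate(runs, start=1):
    print(f"run {i:02d}: {fluid:10s} | {tool:24s} | {speed} m/min")
print(len(runs), "tests in total")
```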

Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions

Procedia PDF Downloads 279
1632 Comparative Analysis of Feature Extraction and Classification Techniques

Authors: R. L. Ujjwal, Abhishek Jain

Abstract:

In the field of computer vision, most facial variations, such as identity, expression, emotion and gender, have been extensively studied, whereas automatic age estimation has been explored comparatively rarely. As a human ages, the features of the face change. This paper provides a new comparative study of feature extraction algorithms (hybrid features using the Haar cascade and HOG features) and classification algorithms (KNN and SVM) on a training dataset. Using these algorithms, we aim to identify the best-performing classifier; likewise, on the feature extraction side, we compare features extracted with the Haar cascade and HOG. This work is carried out in the context of an age-group classification model.
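
A minimal version of the described pipeline, using OpenCV's stock Haar cascade and scikit-image's HOG descriptor, is sketched below; the classifier fitting lines are left as comments because the age-group dataset itself is not specified in the abstract.

```python
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Haar cascade face detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_hog_features(image_bgr: np.ndarray):
    """Detect the largest face with the Haar cascade, describe it with HOG."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
    return hog(face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# With X (stacked HOG vectors) and y (age-group labels) from a labelled set:
# knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
# svm = SVC(kernel="rbf").fit(X, y)
```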

Keywords: computer vision, age group, face detection

Procedia PDF Downloads 368
1631 Hydrochemical Contamination Profiling and Spatial-Temporal Mapping with the Support of Multivariate and Cluster Statistical Analysis

Authors: Sofia Barbosa, Mariana Pinto, José António Almeida, Edgar Carvalho, Catarina Diamantino

Abstract:

The aim of this work was to test a methodology able to generate spatial-temporal maps that simultaneously synthesize the trends of distinct hydrochemical indicators in an old radium-uranium tailings dam deposit. Dimensionality reduction by principal component analysis, followed by data aggregation through cluster analysis, allows distinct hydrochemical behavioural profiles to be identified and synthetic evolutionary hydrochemical maps to be generated.
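
A minimal sketch of the PCA-then-k-means pipeline described above, with random numbers standing in for the monitoring data (the indicator names in the comment are assumptions):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Rows = monitoring points x sampling campaigns; columns = hydrochemical
# indicators (e.g., pH, sulphate, uranium); random values stand in here.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))

# Reduce the indicator space to a few principal components...
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))

# ...then aggregate the PCA scores into behavioural profiles with k-means.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

# Each sample now carries a profile label that can be posted on a map per
# campaign to build the evolutionary hydrochemical maps.
print(np.bincount(labels))
```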

Keywords: contamination plume migration, K-means of PCA scores, groundwater and mine water monitoring, spatial-temporal hydrochemical trends

Procedia PDF Downloads 234
1630 Predicting Success and Failure in Drug Development Using Text Analysis

Authors: Zhi Hao Chow, Cian Mulligan, Jack Walsh, Antonio Garzon Vico, Dimitar Krastev

Abstract:

Drug development is resource-intensive, time-consuming, and increasingly expensive with each developmental stage. The success rates of drug development are also relatively low, and the resources committed are wasted with each failed candidate. As such, a reliable method of predicting the success of drug development is in demand. The hypothesis was that some failed drug candidates are pushed through developmental pipelines based on false confidence and may possess common linguistic features identifiable through sentiment analysis. Here, the concept of using text analysis to discover such features in research publications and investor reports as predictors of success was explored. RStudio was used to perform text mining and lexicon-based sentiment analysis to identify affective phrases and determine their frequency in each document, and SPSS was then used to determine the relationship between the defined variables and the accuracy of predicting outcomes. A total of 161 publications were collected and categorised into 4 groups: (i) cancer treatment, (ii) neurodegenerative disease treatment, (iii) vaccines, and (iv) others (containing all other drugs that do not fit into the 3 categories). Text analysis was then performed on each document within each drug category using 2 separate lexicons (BING and AFINN) in R to determine the frequency of positive or negative phrases in each document. Relative positivity and negativity values were then calculated by dividing the frequency of phrases by the word count of each document. Regression analysis was then performed in SPSS on each dataset (values obtained using the BING or AFINN lexicon during text analysis) using a random selection of 61 documents to construct a model. The remaining documents were then used to determine the predictive power of the models. The model constructed from BING predicted the outcome of drug performance in clinical trials with an overall accuracy of 65.3%. The AFINN model had lower accuracy at predicting outcomes than the BING model, at 62.5%, and was not effective at predicting the failure of drugs in clinical trials. Overall, the study did not show significant efficacy of the model at predicting outcomes of drugs in development, and many improvements may need to be made in later iterations of the model to sufficiently increase the accuracy.
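
The study performed the scoring in R with the BING and AFINN lexicons; the sketch below mirrors the relative positivity/negativity calculation in Python with a tiny invented lexicon, purely to show the arithmetic.

```python
import re

# Toy lexicon: +1 for positive words, -1 for negative (stand-in for BING,
# which labels polarity, or AFINN, which scores words from -5 to +5).
LEXICON = {"promising": 1, "significant": 1, "effective": 1,
           "failed": -1, "adverse": -1, "terminated": -1}

def relative_sentiment(document: str) -> tuple[float, float]:
    """Frequency of positive/negative words divided by document word count."""
    words = re.findall(r"[a-z']+", document.lower())
    pos = sum(1 for w in words if LEXICON.get(w, 0) > 0)
    neg = sum(1 for w in words if LEXICON.get(w, 0) < 0)
    return pos / len(words), neg / len(words)

doc = ("The candidate showed promising and significant efficacy, "
       "with no adverse events reported.")
print(relative_sentiment(doc))
```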

Keywords: data analysis, drug development, sentiment analysis, text-mining

Procedia PDF Downloads 157
1629 In-situ Raman Spectroscopy of Flexible Graphene Oxide Films Containing Pt Nanoparticles in the Presence of Atomic Hydrogen

Authors: Ali Moafi, Kourosh Kalantarzadeh, Richard Kaner, Parviz Parvin, Ebrahim Asl Soleimani, Dougal McCulloch

Abstract:

In-situ Raman spectroscopy of flexible graphene oxide films was performed upon exposure to hydrogen gas, air, and synthetic air. The changes in the D and G peaks are attributed to defects responding to atomic hydrogen spilled over from the catalytic action of the Pt nanoparticles distributed over the film. High-resolution transmission electron microscopy (HRTEM) as well as electron energy loss spectroscopy (EELS) were carried out to determine the density of the samples.

Keywords: in-situ Raman spectroscopy, EELS, TEM, graphene oxide, graphene, atomic hydrogen

Procedia PDF Downloads 449
1628 Softening Finishing: Teaching and Learning Materials

Authors: C.W. Kan

Abstract:

Softening is applied to textile products for several reasons. First, synthetic detergents remove natural oils and waxes, so the fabric loses its softness. Second, softening compensates for the harsh handle of resin finishing. Softening is also applied to imitate natural fibres and improve the comfort of the fabric. There are different types of softeners for the softening finishing of textiles: nonionic, anionic, cationic and silicone softeners. The aim of this study is to illustrate the proper application of different softeners and their final softening effect on textiles. The results could also serve as a guidance note for students learning this topic. Acknowledgment: The authors would like to thank the Hong Kong Polytechnic University for the financial support of this work.

Keywords: learning materials, softening, textiles, effect

Procedia PDF Downloads 217
1627 Noncovalent Antibody-Nanomaterial Conjugates: A Simple Approach to Produce Targeted Nanomedicines

Authors: Nicholas Fletcher, Zachary Houston, Yongmei Zhao, Christopher Howard, Kristofer Thurecht

Abstract:

One promising approach to enhance nanomedicine therapeutic efficacy is to include a targeting agent, such as an antibody, to increase accumulation at the tumor site. However, the application of such targeted nanomedicines remains limited, in part due to difficulties involved with biomolecule conjugation to synthetic nanomaterials. One approach recently developed to overcome this has been to engineer bispecific antibodies (BsAbs) with dual specificity, whereby one portion binds to methoxy polyethylene glycol (mPEG) epitopes present on synthetic nanomedicines, while the other binds to molecular disease markers of interest. In this way, noncovalent complexes comprising a nanomedicine core, a hyperbranched polymer (HBP) of primarily mPEG, decorated with targeting ligands can be produced by simple mixing. Further work in this area has demonstrated that such complexes targeting the breast cancer marker epidermal growth factor receptor (EGFR) show enhanced binding to tumor cells both in vitro and in vivo. Indeed, the enhanced accumulation at the tumor site resulted in improved therapeutic outcomes compared to untargeted nanomedicines and free chemotherapeutics. The current work on these BsAb-HBP conjugates focuses on further probing antibody-nanomaterial interactions and demonstrating broad applicability to a range of cancer types. Herein are reported BsAb-HBP materials targeted towards prostate-specific membrane antigen (PSMA), and a study of their behavior in vivo using ⁸⁹Zr positron emission tomography (PET) in a dual-tumor prostate cancer xenograft model. In this model, mice bearing both PSMA+ and PSMA- tumors allow PET imaging to discriminate between nonspecific and targeted uptake in tumors and to better quantify the increased accumulation following BsAb conjugation. Also examined is the potential for forming these targeted complexes in situ following injection of the individual components, the aim being to avoid the undesirable clearance of proteinaceous complexes upon injection, which limits the available therapeutic. Ultimately, these results demonstrate BsAb-functionalized nanomaterials as a powerful and versatile approach for producing targeted nanomedicines for a variety of cancers.

Keywords: bioengineering, cancer, nanomedicine, polymer chemistry

Procedia PDF Downloads 141
1626 Potential Applications of Biosurfactants from Corn Steep Liquor in Cosmetics

Authors: J. M. Cruz, X. Vecino, L. Rodríguez-López, J. M. Dominguez, A. B. Moldes

Abstract:

The cosmetic and personal care industry is the field where biosurfactants could have the greatest possibilities of success, because in this kind of product the replacement of synthetic detergents by natural surfactants provides additional added value, while at the same time the harmful effects produced by some synthetic surfactants can be avoided or reduced. Nowadays, consumers are disposed to pay an additional cost if they obtain more natural products. In this work, we provide data about the potential of biosurfactants in the cosmetic and personal care industry. Biosurfactants from corn steep liquor, a fermented and condensed stream, have shown good surface-active properties, substantially reducing the surface tension of water. The bacteria that usually grow in corn steep liquor comprise Lactobacillus species, which are generally recognized as safe. The biosurfactant extracted from CSL consists of a lipopeptide, composed of fatty acids, which can reduce the surface tension of water by more than 30 units. It is a yellow and viscous liquid with a density of 1.053 mg/mL and pH 4. Owing to these properties, it could be introduced into the formulation of cosmetic creams, hair conditioners or shampoos. Moreover, this biosurfactant extracted from corn steep liquor has shown a potent antimicrobial effect against different strains of Streptococcus. Some species of Streptococcus are commonly found living in the human respiratory and genitourinary systems and produce several diseases in humans, including skin diseases. For instance, Streptococcus pyogenes produces many toxins and enzymes that help to establish skin infections; biosurfactants from corn steep liquor may inhibit the mechanisms of the S. pyogenes enzymes. S. pyogenes is an important cause of pharyngitis, impetigo, cellulitis and necrotizing fasciitis. In this work, it was observed that 50 mg/L of the biosurfactant extract obtained from corn steep liquor is able to inhibit the growth of S. pyogenes by more than 50%. Thus, cosmetic and personal care products formulated with biosurfactants from corn steep liquor could have prebiotic properties. The natural biosurfactant presented in this work, obtained from corn milling industry streams, has shown high potential to provide an interesting and sustainable alternative to the chemically synthesized antibacterial and surfactant ingredients used in cosmetic and personal care manufacture, which can cause irritation and often show only short-term effects.

Keywords: antimicrobial activity, biosurfactants, cosmetic, personal care

Procedia PDF Downloads 257
1625 Data Model to Predict Customize Skin Care Product Using Biosensor

Authors: Ashi Gautam, Isha Shukla, Akhil Seghal

Abstract:

Biosensors are analytical devices that use a biological sensing element to detect and measure a specific chemical substance or biomolecule in a sample. These devices are widely used in various fields, including medical diagnostics, environmental monitoring, and food analysis, due to their high specificity, sensitivity, and selectivity. In this research paper, a machine learning model is proposed for predicting the suitability of skin care products based on biosensor readings. The proposed model takes in features extracted from biosensor readings, such as biomarker concentration, skin hydration level, inflammation presence, sensitivity, and free radicals, and outputs the most appropriate skin care product for an individual. This model is trained on a dataset of biosensor readings and corresponding skin care product information. The model's performance is evaluated using several metrics, including accuracy, precision, recall, and F1 score. The aim of this research is to develop a personalised skin care product recommendation system using biosensor data. By leveraging the power of machine learning, the proposed model can accurately predict the most suitable skin care product for an individual based on their biosensor readings. This is particularly useful in the skin care industry, where personalised recommendations can lead to better outcomes for consumers. The developed model is based on supervised learning, which means that it is trained on a labeled dataset of biosensor readings and corresponding skin care product information. The model uses these labeled data to learn patterns and relationships between the biosensor readings and skin care products. Once trained, the model can predict the most suitable skin care product for an individual based on their biosensor readings. The results of this study show that the proposed machine learning model can accurately predict the most appropriate skin care product for an individual based on their biosensor readings. The evaluation metrics used in this study demonstrate the effectiveness of the model in predicting skin care products. This model has significant potential for practical use in the skin care industry for personalised skin care product recommendations. The proposed machine learning model for predicting the suitability of skin care products based on biosensor readings is a promising development in the skin care industry. The model's ability to accurately predict the most appropriate skin care product for an individual based on their biosensor readings can lead to better outcomes for consumers. Further research can be done to improve the model's accuracy and effectiveness.
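
A minimal sketch of the supervised pipeline described above, with simulated readings in place of real biosensor data; the abstract does not name the learning algorithm, so a random forest stands in, and the evaluation uses the four metrics listed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Simulated biosensor features mirroring those named in the abstract:
# biomarker concentration, hydration, inflammation, sensitivity, free radicals.
rng = np.random.default_rng(7)
X = rng.random((300, 5))
# Invented rule linking readings to one of three product classes.
y = (X[:, 0] + X[:, 1] > 1.0).astype(int) + (X[:, 4] > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)
clf = RandomForestClassifier(n_estimators=100, random_state=7).fit(X_tr, y_tr)
pred = clf.predict(X_te)

prec, rec, f1, _ = precision_recall_fscore_support(
    y_te, pred, average="macro", zero_division=0)
print(f"accuracy={accuracy_score(y_te, pred):.2f} "
      f"precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
```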

Keywords: biosensors, data model, machine learning, skin care

Procedia PDF Downloads 97
1624 Conversational Assistive Technology for the Social Interaction of Visually Impaired Persons

Authors: Komal Ghafoor, Tauqir Ahmad, Murtaza Hanif, Hira Zaheer

Abstract:

Assistive technology has been developed to support visually impaired people in their social interactions. Conversation assistive technology is designed to enhance communication skills, facilitate social interaction, and improve the quality of life of visually impaired individuals. This technology includes speech recognition, text-to-speech features, and other communication devices that enable users to communicate with others in real time. The technology uses natural language processing and machine learning algorithms to analyze spoken language and provide appropriate responses. It also includes features such as voice commands and audio feedback to provide users with a more immersive experience. These technologies have been shown to increase the confidence and independence of visually impaired individuals in social situations and have the potential to improve their social skills and relationships with others. Overall, conversation-assistive technology is a promising tool for empowering visually impaired people and improving their social interactions. One of the key benefits of conversation-assistive technology is that it allows visually impaired individuals to overcome communication barriers that they may face in social situations. It can help them to communicate more effectively with friends, family, and colleagues, as well as strangers in public spaces. By providing a more seamless and natural way to communicate, this technology can help to reduce feelings of isolation and improve overall quality of life. The main objective of this research is to give blind users the capability to move around in unfamiliar environments through a user-friendly device with face, object, and activity recognition, and to evaluate the accuracy of its activity recognition. The device captures the front view of the blind user, detects objects, recognizes activities, and answers the user's queries. A local dataset was collected that includes different first-person human activities. The results obtained show identification of the activities that the VGG-16 model was trained on, such as hugging, shaking hands, talking, walking, and waving.
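
One plausible reading of the recognition stage is a VGG-16 backbone with its final layer resized to the locally collected activity classes; the class list and the random input frame below are stand-ins, not the project's data.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed first-person activity classes from the local dataset.
ACTIVITIES = ["hugging", "shaking_hands", "talking", "walking", "waving"]

# VGG-16 backbone; the last fully connected layer is replaced so the
# network predicts the activity classes instead of the 1000 ImageNet ones.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, len(ACTIVITIES))

# A 224x224 RGB frame from the user-facing camera (random tensor stands in).
frame = torch.randn(1, 3, 224, 224)
model.eval()
with torch.no_grad():
    logits = model(frame)
print("predicted activity:", ACTIVITIES[logits.argmax(dim=1).item()])
```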

Keywords: dataset, visually impaired person, natural language processing, human activity recognition

Procedia PDF Downloads 58
1623 Recovery of Cd(II) and Pb(II) under the Effect of Temperature with the Synthetic Zeolite NaA

Authors: Karima Menad, Ahmed Feddag

Abstract:

In this study, large crystals of zeolite NaA were synthesized by the hydrothermal route. The zeolite was then used to recover two of the most dangerous toxic heavy metals, lead and cadmium. The synthesized zeolite was analyzed by XRD and SEM to verify its purity and good morphology; it then underwent ion exchange operations in aqueous solutions of lead and cadmium from two salts, Pb(CH3COO)2 and CdCl2, at different concentrations. The exchange was carried out at two temperatures (25 °C and 60 °C). The contents of Pb2+, Cd2+ and Na+ were analyzed by atomic absorption, and the results are given in the form of exchange rates. Finally, the exchanged samples were analyzed by XRD to confirm the preservation of their zeolite framework. It was found that the exchange rate increases with increasing initial concentration, and the best results were obtained at a temperature of 60 °C.

Keywords: exchange rate, ion exchange, LTA zeolite, zeolite NaA

Procedia PDF Downloads 414
1622 Analysis of Different Classification Techniques Using WEKA for Diabetic Disease

Authors: Usama Ahmed

Abstract:

Data mining is the process of analyzing data to extract helpful information, and it is a field of research that solves various types of problems. In data mining, classification is an important technique for classifying different kinds of data. Diabetes is one of the most common diseases. This paper applies different classification techniques to a diabetes dataset using the Waikato Environment for Knowledge Analysis (WEKA) and determines which algorithm works best. The best classification algorithm for the diabetes data is Naïve Bayes: its accuracy is 76.31%, and it takes 0.06 seconds to build the model.
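
WEKA is a Java workbench; the same experiment can be mirrored in Python, as sketched below with scikit-learn's GaussianNB (the counterpart of WEKA's NaiveBayes) and 10-fold cross-validation. Random numbers stand in for the diabetes records.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Stand-in for a Pima-style diabetes table: 8 numeric attributes and a
# binary outcome; values are simulated, not the dataset used in the paper.
rng = np.random.default_rng(1)
X = rng.normal(size=(768, 8))
y = (X[:, 1] + 0.5 * X[:, 5] + rng.normal(scale=0.8, size=768) > 0).astype(int)

scores = cross_val_score(GaussianNB(), X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.2%}")
```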

Keywords: data mining, classification, diabetes, WEKA

Procedia PDF Downloads 147
1621 Performance Comparison of Cooperative Banks in the EU, USA and Canada

Authors: Matěj Kuc

Abstract:

This paper compares different profitability measures of cooperative banks from two developed regions: the European Union and the United States of America together with Canada. We created a balanced dataset of more than 200 cooperative banks covering the 2011-2016 period, ran a series of tests, and estimated random-effects models on the panel data. We found that American and Canadian cooperatives are more profitable in terms of return on assets (ROA) and return on equity (ROE), while there is no significant difference in net interest margin (NIM). Our results show that the North American cooperative banks have accommodated better to the current market environment.
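
The abstract does not state the estimation software; as one possibility, the random-effects regression can be sketched in Python with the linearmodels package on a simulated balanced panel shaped like the study's (200 banks x 6 years, all values invented).

```python
import numpy as np
import pandas as pd
from linearmodels.panel import RandomEffects

# Toy balanced panel: ROA regressed on two invented bank-level covariates.
rng = np.random.default_rng(3)
idx = pd.MultiIndex.from_product([range(200), range(2011, 2017)],
                                 names=["bank", "year"])
df = pd.DataFrame({"size": rng.normal(size=len(idx)),
                   "equity_ratio": rng.normal(size=len(idx))}, index=idx)
df["roa"] = (0.3 * df["size"] + 0.2 * df["equity_ratio"]
             + rng.normal(size=len(idx)))

res = RandomEffects(df["roa"], df[["size", "equity_ratio"]]).fit()
print(res.params)
```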

Keywords: cooperative banking, panel data, profitability measures, random effects

Procedia PDF Downloads 113
1620 Automated Localization of Palpebral Conjunctiva and Hemoglobin Determination Using a Smartphone Camera

Authors: Faraz Tahir, M. Usman Akram, Albab Ahmad Khan, Mujahid Abbass, Ahmad Tariq, Nuzhat Qaiser

Abstract:

The objective of this study was to evaluate the degree of anemia from pictures of the palpebral conjunctiva taken with a smartphone camera. We first localized the region of interest in the image, extracted features from that region, and trained an SVM classifier on those features; as a result, our system classifies images in real time according to hemoglobin level. The proposed system achieved an accuracy of 70%. The classifier was trained on a locally gathered dataset of 30 patients.

Keywords: anemia, palpebral conjunctiva, SVM, smartphone

Procedia PDF Downloads 505
1619 Experimental Study on Effects of Addition of Rice Husk on Coal Gasification

Authors: M. Bharath, Vasudevan Raghavan, B. V. S. S. S. Prasad, S. R. Chakravarthy

Abstract:

In this experimental study, the effects of the addition of rice husk on coal gasification in a bubbling fluidized bed gasifier, operating at atmospheric pressure with air as the gasifying agent, are reported. Rice husk comprising 6.5% and 13% by mass is added to the coal. Results show that, when rice husk is added, the methane yield increases from 0.56% by volume (with no rice husk) to 2.77% (with 13% rice husk). CO and H2 remain almost unchanged, and CO2 decreases with the addition of rice husk. The calorific value of the synthetic gas is around 2.73 MJ/Nm3. All performance indices, such as cold gas efficiency and carbon conversion, increase with the addition of rice husk.
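
Cold gas efficiency, one of the performance indices mentioned, is the ratio of the chemical energy in the product gas to the energy in the feedstock; only the syngas calorific value below comes from the abstract, while the gas yield and fuel heating value are assumptions.

```python
# Cold gas efficiency = (gas LHV x gas yield) / fuel LHV, per kg of fuel.
gas_lhv = 2.73   # MJ/Nm3, calorific value reported for the syngas
gas_yield = 2.5  # Nm3 of gas per kg of fuel fed (assumed)
fuel_lhv = 18.0  # MJ/kg for the coal/rice-husk blend (assumed)

cold_gas_efficiency = gas_lhv * gas_yield / fuel_lhv
print(f"cold gas efficiency ~ {cold_gas_efficiency:.0%}")
```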

Keywords: bubbling fluidized bed reactor, calorific value, coal gasification, rice husk

Procedia PDF Downloads 228
1618 A Mean–Variance–Skewness Portfolio Optimization Model

Authors: Kostas Metaxiotis

Abstract:

Portfolio optimization is one of the most important topics in finance. This paper proposes a mean-variance-skewness (MVS) portfolio optimization model. Traditionally, the portfolio optimization problem is solved using the mean-variance (MV) framework. In this study, we formulate the proposed model as a three-objective optimization problem, where the portfolio's expected return and skewness are maximized and the portfolio risk is minimized. To solve the proposed three-objective portfolio optimization model, we apply an adapted version of the non-dominated sorting genetic algorithm (NSGA-II). Finally, we use a real dataset from the FTSE-100 to validate the proposed model.
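
The three objective values that NSGA-II would trade off can be written down directly; the sketch below evaluates them for one weight vector over simulated returns (the FTSE-100 data themselves are not reproduced here).

```python
import numpy as np

def mvs_objectives(weights, returns):
    """Mean (maximize), variance (minimize) and skewness (maximize) of the
    portfolio return series implied by a weight vector."""
    port = returns @ weights
    mean = port.mean()
    variance = port.var(ddof=1)
    skewness = (((port - mean) / port.std(ddof=1)) ** 3).mean()
    return mean, variance, skewness

# Simulated daily returns for 5 assets over 250 trading days.
rng = np.random.default_rng(42)
returns = rng.normal(0.0004, 0.01, size=(250, 5))

w = np.full(5, 0.2)  # equally weighted portfolio, weights sum to 1
print(mvs_objectives(w, returns))
# An NSGA-II run evolves a population of such weight vectors against these
# three objectives, subject to sum(w) = 1 and w >= 0.
```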

Keywords: evolutionary algorithms, portfolio optimization, skewness, stock selection

Procedia PDF Downloads 198
1617 iPSCs More Effectively Differentiate into Neurons on PLA Scaffolds with High Adhesive Properties for Primary Neuronal Cells

Authors: Azieva A. M., Yastremsky E. V., Kirillova D. A., Patsaev T. D., Sharikov R. V., Kamyshinsky R. A., Lukanina K. I., Sharikova N. A., Grigoriev T. E., Vasiliev A. L.

Abstract:

The adhesive properties of scaffolds, which depend predominantly on the chemical and structural features of their surface, play the most important role in tissue engineering. The basic requirements for such scaffolds are biocompatibility, biodegradability and high cell adhesion, which promotes cell proliferation and differentiation. In many cases, synthetic polymer scaffolds have proven advantageous because they are easy to shape, tough, and have high tensile properties. The regeneration of nerve tissue still remains a big challenge for medicine, and neural stem cells provide promising therapeutic potential for cell replacement therapy. However, experiments with stem cells have their limitations, such as low cell viability and poor control of cell differentiation, whereas the study of differentiated neuronal cell cultures obtained from newborn mouse brain is limited mainly by cell adhesion. The growth and implantation of neuronal cultures require proper scaffolds, and polymer scaffold implants with neuronal cells may demand a specific morphology. To date, numerous synthetic polymers have been proposed for these purposes, including polystyrene, polylactic acid (PLA), polyglycolic acid, and polylactide-glycolic acid. Tissue regeneration experiments have demonstrated good biocompatibility of PLA scaffolds, despite the hydrophobic nature of the compound. The problem of poor wettability of the PLA scaffold surface can be overcome in several ways: the surface can be pre-treated with poly-D-lysine or polyethyleneimine; the roughness and hydrophilicity of the PLA surface can be increased by plasma treatment; or PLA can be combined with natural fibers, such as collagen or chitosan. This work presents a study of the adhesion of both induced pluripotent stem cells (iPSCs) and mouse primary neuronal cell culture on polylactide scaffolds of various types: oriented and non-oriented fibrous nonwoven materials and sponges, with and without plasma treatment, and composites with collagen and chitosan. To evaluate the effect of the different types of PLA scaffolds on the neuronal differentiation of iPSCs, we assess the expression of NeuN in the differentiated cells by immunostaining. iPSCs differentiate into neurons more effectively on PLA scaffolds with high adhesive properties for primary neuronal cells.

Keywords: PLA scaffold, neurons, neuronal differentiation, stem cells, polylactide

Procedia PDF Downloads 84
1616 Prediction of Mental Health: Heuristic Subjective Well-Being Model on Perceived Stress Scale

Authors: Ahmet Karakuş, Akif Can Kilic, Emre Alptekin

Abstract:

A growing number of studies have been conducted to determine how well-being may be predicted using well-designed models. It is necessary to investigate the background of candidate features in order to construct a viable Subjective Well-Being (SWB) model. We have picked suitable variables from the SWB literature that are applicable to real-world data. The goal of this work is to evaluate the model by feeding it SWB characteristics and then categorizing stress levels using machine learning methods, to see how well it performs on a real dataset. Although this is a multiclass classification problem, we achieved significant metric scores, which may be taken into account for a specific task.

Keywords: machine learning, multiclass classification problem, subjective well-being, perceived stress scale

Procedia PDF Downloads 130
1615 Imaging of Underground Targets with an Improved Back-Projection Algorithm

Authors: Alireza Akbari, Gelareh Babaee Khou

Abstract:

Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow, small subsurface targets such as landmines and unexploded ordnance, and also for through-wall imaging in security applications. For the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of buried objects is essential in most GPR applications. Therefore, the hyperbolic behavior in the space-time GPR image is usually transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal of a typical GPR image is to display the spatial location and the reflectivity of an underground object, so the main challenge of GPR imaging is to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, was first used for image reconstruction. The standard BP algorithm is limited in the presence of strong noise and numerous artifacts, which have adverse effects on subsequent tasks such as target detection. Thus, an improved BP algorithm based on cross-correlation between the received signals is proposed to decrease noise and suppress artifacts. To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared to the standard BP scheme, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to both simulated and real GPR data, and the results showed superior artifact suppression and images of high quality and resolution. To quantitatively describe the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
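
For orientation, the standard delay-and-sum back-projection that the paper improves upon can be written compactly; the sketch below uses random traces and assumed geometry, and the final comment indicates where the proposed cross-correlation weighting would act.

```python
import numpy as np

def back_projection(traces, xs, ts, grid_x, grid_z, velocity):
    """Plain BP: each image pixel accumulates the trace samples found at the
    two-way travel time from every antenna position along the aperture."""
    image = np.zeros((len(grid_z), len(grid_x)))
    dt = ts[1] - ts[0]
    for k, xa in enumerate(xs):                           # antenna positions
        for i, z in enumerate(grid_z):
            for j, x in enumerate(grid_x):
                t = 2.0 * np.hypot(x - xa, z) / velocity  # two-way delay
                n = int(round(t / dt))
                if n < len(ts):
                    image[i, j] += traces[k, n]
    return image

# Assumed geometry: 4 antenna positions, 256 time samples, soil wave speed.
xs = np.linspace(0.0, 1.5, 4)
ts = np.arange(256) * 1e-10
traces = np.random.default_rng(0).normal(size=(4, 256))
img = back_projection(traces, xs, ts,
                      np.linspace(0.0, 1.5, 30), np.linspace(0.1, 1.0, 20),
                      1.0e8)
print(img.shape)
# The paper's improvement weights each pixel by the cross-correlation of the
# time-aligned received signals, down-weighting pixels where the channels
# disagree and thereby suppressing noise and artifacts.
```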

Keywords: algorithm, back-projection, GPR, remote sensing

Procedia PDF Downloads 452
1614 Analysis of Brownfield Soil Contamination Using Local Government Planning Data

Authors: Emma E. Hellawell, Susan J. Hughes

Abstract:

Brownfield sites are currently being redeveloped for residential use. Information on soil contamination at these former industrial sites is collected as part of the planning process by local government. This research project analyses this untapped resource of environmental data, using site investigation data submitted to a local Borough Council in Surrey, UK. Over 150 site investigation reports were collected and interrogated to extract relevant information. The study involved three phases. Phase 1 was the development of a database for soil contamination information from local government reports. This database contained information on the source, history, and quality of the data together with the chemical information on the soil that was sampled. Phase 2 involved obtaining site investigation reports for developments within the study area and extracting the required information for the database. Phase 3 was the data analysis and interpretation of key contaminants to evaluate typical levels of contaminants, their distribution within the study area, and how these results relate to current guideline levels of risk for future site users. Preliminary results have been obtained for a pilot study using a sample of the dataset. This pilot study showed that there is some inconsistency in the quality of the reports and measured data, and careful interpretation of the data is required. Analysis of the information found high levels of lead in shallow soil samples, with mean and median levels exceeding the current guidance for residential use. The data also showed elevated (but below guidance) levels of potentially carcinogenic polyaromatic hydrocarbons. Of particular concern was the high detection rate for asbestos fibers, which were found at low concentrations in 25% of the soil samples tested (although the sample set was small). Contamination levels of the remaining chemicals tested were all below the guidance levels for residential site use. These preliminary pilot study results will be expanded, and results for the whole local government area will be presented at the conference. The pilot study has demonstrated the potential of this extensive dataset to provide greater information on local contamination levels. This can help inform regulators and developers, and lead to more targeted site investigations, improved risk assessments, and better brownfield development.
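
The Phase 3 style of analysis can be illustrated with a miniature table; the concentrations and the screening values below are invented for illustration (real assessments use the current UK guidance for the relevant land use).

```python
import pandas as pd

# Miniature stand-in for the site investigation database: one row per
# shallow soil sample, concentrations in mg/kg.
samples = pd.DataFrame({
    "site": ["A", "A", "B", "B", "C"],
    "lead": [450.0, 120.0, 310.0, 95.0, 700.0],
    "benzo_a_pyrene": [1.2, 0.4, 2.1, 0.3, 0.9],
})

# Assumed residential screening values (illustrative only).
guideline = {"lead": 200.0, "benzo_a_pyrene": 5.0}

for chem, limit in guideline.items():
    exceedance = (samples[chem] > limit).mean()
    print(f"{chem}: median = {samples[chem].median()} mg/kg, "
          f"{exceedance:.0%} of samples above {limit} mg/kg")
```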

Keywords: brownfield development, contaminated land, local government planning data, site investigation

Procedia PDF Downloads 138
1613 Metabolites of Polygonum L. Plants Having Antitumor Properties

Authors: Dmitriy Yu. Korulkin, Raissa A. Muzychkina

Abstract:

This article presents the results of research on the antitumor activity of different structural types of plant flavonoids extracted by the authors from Polygonum L. plants occurring in commercial reserves in the territory of the Republic of Kazakhstan. For the first time, the results of comparative research on the antitumor activity of plant flavonoids of different structural groups and of their synthetic derivatives are presented. The toxicity of the flavonoids under single parenteral infusion conditions was also determined. Experimental substantiation of possible mechanisms of the antiproliferative and cytotoxic action of the flavonoids is suggested, and the prospects of using plant flavonoids as medications and of creating effective dosage forms of antitumor medicines on their basis are substantiated.

Keywords: antitumor activity, cytotoxicity, flavonoids, Polygonum L., secondary metabolites

Procedia PDF Downloads 260
1612 An Improvement of ComiR Algorithm for MicroRNA Target Prediction by Exploiting Coding Region Sequences of mRNAs

Authors: Giorgio Bertolazzi, Panayiotis Benos, Michele Tumminello, Claudia Coronnello

Abstract:

MicroRNAs are small non-coding RNAs that post-transcriptionally regulate the expression levels of messenger RNAs. MicroRNA regulation activity depends on the recognition of binding sites located on mRNA molecules. ComiR (Combinatorial miRNA targeting) is a user-friendly web tool developed to predict the targets of a set of microRNAs, starting from their expression profile. ComiR incorporates miRNA expression in a thermodynamic binding model, and it associates each gene with the probability of being a target of a set of miRNAs. The ComiR algorithms were trained with information regarding binding sites in the 3'UTR region, using a reliable dataset containing the targets of endogenously expressed microRNAs in D. melanogaster S2 cells. This dataset was obtained by comparing the results from two different experimental approaches, i.e., inhibition and immunoprecipitation of the AGO1 protein, a component of the microRNA-induced silencing complex. In this work, we tested whether including coding region binding sites in the ComiR algorithm improves the performance of the tool in predicting microRNA targets. We focused the analysis on the D. melanogaster species and updated the ComiR underlying database with the currently available releases of mRNA and microRNA sequences. As a result, we find that the ComiR algorithm trained with the information related to the coding regions is more efficient in predicting the microRNA targets than the algorithm trained with 3'UTR information. On the other hand, we show that 3'UTR-based predictions can be seen as complementary to the coding-region-based predictions, which suggests that both predictions, from the 3'UTR and from coding regions, should be considered in a comprehensive analysis. Furthermore, we observed that the lists of targets obtained by analyzing data from one experimental approach only, that is, inhibition or immunoprecipitation of AGO1, are not reliable enough to test the performance of our microRNA target prediction algorithm. Further analysis will be conducted to investigate the effectiveness of the tool with data from other species, provided that validated datasets, as obtained from the comparison of RISC protein inhibition and immunoprecipitation experiments, become available for the same samples. Finally, we propose to upgrade the existing ComiR web tool by including the coding-region-based trained model alongside the 3'UTR-based one.
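
ComiR's actual scoring rests on a thermodynamic binding model; as a toy illustration of the combinatorial idea only, the sketch below weights per-miRNA binding probabilities by expression and combines them under the assumption that a gene escapes targeting only if it escapes every miRNA. All numbers are invented.

```python
import numpy as np

def combined_target_probability(site_probs, mirna_expression):
    """Toy combination: expression-weighted per-miRNA probabilities are
    merged as 1 - product of the per-miRNA escape probabilities."""
    w = mirna_expression / mirna_expression.sum()
    effective = np.clip(site_probs * w * len(w), 0.0, 1.0)
    return 1.0 - np.prod(1.0 - effective)

# Three miRNAs: binding probabilities for one gene (3'UTR and coding-region
# sites would both feed into these numbers) and their expression levels.
print(combined_target_probability(np.array([0.4, 0.1, 0.7]),
                                  np.array([120.0, 30.0, 60.0])))
```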

Keywords: AGO1, coding region, Drosophila melanogaster, microRNA target prediction

Procedia PDF Downloads 451
1611 Analysis of Syngas Combustion Characteristics in a Can-Type Combustor Using CFD

Authors: Norhaslina Mat Zian, Hasril Hasini, Nur Irmawati Om

Abstract:

This study focuses on the flow and combustion behavior inside a gas turbine combustor used in a thermal power plant. The combustion process uses synthetic gas, and the baseline solution was obtained for the combustor firing natural gas (100% methane). Attention is given to the effect of the H2/CO ratio on the variation of the flame profile, temperature distribution, and emissions. The H2/CO ratio is varied in the range of 10-80%, with the CH4 fraction fixed at 10% for each case, while the mass flow rate and operating pressure are kept constant. Preliminary results show that the flow inside the can combustor is highly swirling, which indicates good mixing of fuel and air prior to the entry of the mixture into the main combustion zone.

Keywords: CFD, combustion, flame, syngas

Procedia PDF Downloads 284