Search results for: processing map
1202 Elimination of Mixed-Culture Biofilms Using Biological Agents
Authors: Anita Vidacs, Csaba Vagvolgyi, Judit Krisch
Abstract:
The attachment of microorganisms to different surfaces and the development of biofilms can lead to outbreaks of food-borne diseases and economic losses due to perished food. In food processing environments, bacterial communities are generally formed by mixed cultures of different species. Plants are sources of several antimicrobial substances that may be potential candidates for the development of new disinfectants. We aimed to investigate the effect of cinnamon (Cinnamomum zeylanicum), marjoram (Origanum majorana), and thyme (Thymus vulgaris) essential oils and their major components (cinnamaldehyde, terpinene-4-ol, and thymol) on four-species biofilms of E. coli, L. monocytogenes, P. putida, and S. aureus. The experiments had three parts: (i) determination of the minimum bactericidal concentration and the killing time with microdilution methods; (ii) elimination of the four-species 24- and 168-hour-old biofilms from stainless steel, polypropylene, tile, and wood surfaces; and (iii) comparison of the disinfectant effect with an industrially used peracetic acid-based sanitizer (HC-DPE). E. coli and P. putida were more resistant in biofilm to the investigated essential oils and their main components than L. monocytogenes and S. aureus. These Gram-negative bacteria were detected on the surfaces where the plant-based disinfectants did not eliminate the biofilm completely. The most promising solutions were the cinnamon essential oil and terpinene-4-ol, which could eradicate the biofilm from stainless steel, polypropylene, and even tile, and had a better disinfectant effect than HC-DPE. These natural agents can be used as alternative solutions in the battle against bacterial biofilms.
Keywords: biofilm, essential oils, surfaces, terpinene-4-ol
Procedia PDF Downloads 112
1201 Identification of Potential Small Molecule Regulators of PERK Kinase
Authors: Ireneusz Majsterek, Dariusz Pytel, J. Alan Diehl
Abstract:
PKR-like ER kinase (PERK) is a serine/threonine endoplasmic reticulum (ER) transmembrane kinase activated during ER stress. PERK can activate the signaling pathways known as the unfolded protein response (UPR). Attenuation of translation is mediated by PERK via phosphorylation of eukaryotic initiation factor 2α (eIF2α), which is necessary for translation initiation. PERK activation also directly contributes to the activation of Nrf2, which regulates the expression of anti-oxidant enzymes. Increased phosphorylation of eIF2α has been reported in the hippocampus of Alzheimer's disease (AD) patients, indicating that PERK is activated in this disease. Recent data have revealed activation of PERK signaling in non-Hodgkin's lymphomas. Results have also revealed that loss of PERK limits mammary tumor cell growth in vitro and in vivo. Consistent with these observations, activation of the UPR in vitro increases levels of the amyloid precursor protein (APP), the protein from which the beta-amyloid (Aβ) fragments of amyloid plaques are derived. Finally, proteolytic processing of APP, including the cleavages that produce Aβ, largely occurs in the ER, with localization coincident with PERK activity. Thus, we expect that PERK-dependent signaling is critical for the progression of many types of disease (human cancer, neurodegenerative disease, and others). Therefore, modulation of PERK activity may be a useful therapeutic target in the treatment of different diseases that fail to respond to traditional chemotherapeutic strategies, including Alzheimer's disease. Our goal is to develop therapeutic modalities targeting PERK activity.
Keywords: PERK kinase, small molecule inhibitor, neurodegenerative disease, Alzheimer's disease
Procedia PDF Downloads 482
1200 Structural Damage Detection via Incomplete Model Data Using Output Data Only
Authors: Ahmed Noor Al-qayyim, Barlas Özden Çağlayan
Abstract:
Structural failure is caused mainly by damage that often occurs in structures. Many researchers focus on obtaining very efficient tools to detect damage in structures at an early stage. A subject that has received considerable attention in the literature over the past decades is damage detection based on variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique that detects the damage location in an incompletely measured structural system using output data only. The method indicates the damage based on free vibration test data by using the "Two Points Condensation (TPC) technique". This method creates a set of matrices by condensing the structural system to two-degree-of-freedom systems. The current stiffness matrices are obtained by optimizing the equation of motion using the measured test data, and are compared with the original (undamaged) stiffness matrices; high percentage changes in the matrices' coefficients reveal the location of the damage. The TPC technique is applied to the experimental data of a simply supported steel beam model structure after inducing a thickness change in one element, where two cases are considered; the method detects the damage and determines its location accurately in both cases. In addition, the results illustrate that these changes in the stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data. Furthermore, its efficiency proves that this technique can also be used for large structures.
Keywords: damage detection, optimization, signals processing, structural health monitoring, two points condensation
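A minimal NumPy sketch of the comparison step at the heart of such techniques, assuming a hypothetical 4-DOF stiffness matrix with simulated damage; the TPC condensation and the optimization of the equation of motion are not reproduced here:

```python
import numpy as np

def locate_damage(K_baseline, K_current, threshold=0.10):
    """Flag stiffness coefficients whose relative change exceeds a threshold.

    K_baseline, K_current: (n x n) stiffness matrices of the undamaged and
    current structure. Returns indices of coefficients with large changes."""
    rel_change = np.abs(K_current - K_baseline) / (np.abs(K_baseline) + 1e-12)
    return np.argwhere(rel_change > threshold)

# Hypothetical 4-DOF beam: a simulated 20% stiffness loss around DOF 2
K0 = np.array([[ 2., -1.,  0.,  0.],
               [-1.,  2., -1.,  0.],
               [ 0., -1.,  2., -1.],
               [ 0.,  0., -1.,  2.]]) * 1e6
K1 = K0.copy()
K1[1, 1] *= 0.8   # simulated damage
K1[1, 2] *= 0.8
K1[2, 1] *= 0.8

print(locate_damage(K0, K1))  # coefficients around DOF 2 are flagged
```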
Procedia PDF Downloads 365
1199 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets
Authors: Kothuri Sriraman, Mattupalli Komal Teja
Abstract:
In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images that may have irregular, digit, or character shapes. Objects and internal objects are quite difficult to extract when the structure of the image contains a bulk of clusters. Estimation results are easily obtained by identifying the sub-regional objects using the SASK algorithm. The main focus is to recognize the number of internal objects in a given image in a shadow-free and error-free way. Hard clustering and density clustering of the rough set obtained from the image are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally, applying the hull detection system. Detecting sub-regional hulls can increase the machine learning capability in the detection of characters, and the approach can be extended to hull recognition even in irregularly shaped objects, such as black holes in space exploration with their intensities. Layered hulls are those having structured layers inside; they are useful in military services and traffic monitoring to identify the number of vehicles or persons. The proposed SASK algorithm is helpful for identifying such regions and can be useful in decision processes (to clear traffic, or to identify the number of persons on the opponent's side in war).
Keywords: chain code, hull regions, Hough transform, hull recognition, layered outline extraction, SASK algorithm
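SASK itself is not published as code; the following OpenCV sketch only illustrates the three stated steps (pre-processing, boundary extraction, hull detection) and the counting of internal regions via the contour hierarchy, with a hypothetical input file:

```python
import cv2

def count_internal_objects(path):
    """Count outer hulls and internal (hole) objects in a handwritten image,
    mirroring the three steps named in the abstract."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)                       # pre-processing
    _, bw = cv2.threshold(blur, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, hierarchy = cv2.findContours(bw, cv2.RETR_CCOMP,
                                           cv2.CHAIN_APPROX_SIMPLE)  # boundaries
    hulls = [cv2.convexHull(c) for c in contours]                    # hull detection
    # RETR_CCOMP builds a 2-level hierarchy: parent == -1 -> outer boundary
    outer = sum(1 for h in hierarchy[0] if h[3] == -1)
    inner = len(contours) - outer
    return outer, inner, hulls

outer, inner, _ = count_internal_objects("digit.png")  # hypothetical input file
print(f"{outer} outer object(s), {inner} internal region(s)")
```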
Procedia PDF Downloads 348
1198 Event Related Brain Potentials Evoked by Carmen in Musicians and Dancers
Authors: Hanna Poikonen, Petri Toiviainen, Mari Tervaniemi
Abstract:
Event-related potentials (ERPs) evoked by simple tones in the brain have been extensively studied. However, in reality the music surrounding us is spectrally and temporally complex and dynamic. Thus, research using natural sounds is crucial in understanding the operation of the brain in its natural environment. Music is an excellent example of natural stimulation, which, in various forms, has always been an essential part of different cultures. In addition to sensory responses, music elicits vast cognitive and emotional processes in the brain. Compared to laymen, professional musicians have stronger ERP responses when processing individual musical features in simple tone sequences, such as changes in pitch, timbre, and harmony. Here we show that the ERP responses evoked by rapid changes in individual musical features are more intense in musicians than in laymen also while listening to long excerpts of the composition Carmen. Interestingly, for professional dancers, the amplitudes of the cognitive P300 response are weaker than for musicians but still stronger than for laymen. Also, the cognitive P300 latencies of musicians are significantly shorter, whereas the latencies of laymen are significantly longer. In contrast, the sensory N100 does not differ in amplitude or latency between musicians and laymen. These results, acquired with a novel ERP methodology for natural music, suggest that we can take the leap of studying the brain with long pieces of natural music using the ERP method of electroencephalography (EEG), as has already been done with functional magnetic resonance imaging (fMRI), as these two brain imaging methods complement each other.
Keywords: electroencephalography, expertise, musical features, real-life music
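As a rough illustration of the underlying measurement, here is a NumPy sketch that averages EEG epochs and picks N100/P300 amplitudes and latencies; the sampling rate, time windows, and synthetic data are assumptions, not the study's parameters:

```python
import numpy as np

FS = 500  # sampling rate in Hz (assumed)

def erp_peaks(epochs, fs=FS):
    """Average EEG epochs (trials x samples, stimulus at t=0) and measure the
    sensory N100 and cognitive P300 amplitude/latency in their usual windows."""
    erp = epochs.mean(axis=0)                        # event-related potential
    t = np.arange(erp.size) / fs * 1000.0            # time in ms
    def peak(win, sign):
        idx = np.where((t >= win[0]) & (t <= win[1]))[0]
        i = idx[np.argmax(sign * erp[idx])]
        return erp[i], t[i]
    n100 = peak((80, 150), sign=-1.0)                # negative deflection
    p300 = peak((250, 500), sign=+1.0)               # positive deflection
    return n100, p300

# Synthetic demo: 100 trials of noise plus an evoked response
rng = np.random.default_rng(0)
t = np.arange(0, 0.8, 1 / FS)
evoked = (-2e-6 * np.exp(-((t - 0.1) / 0.02) ** 2)
          + 4e-6 * np.exp(-((t - 0.35) / 0.05) ** 2))
trials = evoked + 5e-6 * rng.standard_normal((100, t.size))
print(erp_peaks(trials))  # ((amp, latency) for N100, then for P300)
```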
Procedia PDF Downloads 483
1197 INRAM-3DCNN: Multi-Scale Convolutional Neural Network Based on Residual and Attention Module Combined with Multilayer Perceptron for Hyperspectral Image Classification
Authors: Jianhong Xiang, Rui Sun, Linyu Wang
Abstract:
In recent years, owing to the continuous improvement of deep learning theory, the Convolutional Neural Network (CNN) has shown superior performance in research on Hyperspectral Image (HSI) classification. Since HSI has rich spatial-spectral information, utilizing only a single-dimensional or single-size convolutional kernel limits the detailed feature information received by the CNN, which in turn limits the classification accuracy of HSI. In this paper, we design a multi-scale CNN with an MLP based on residual and attention modules (INRAM-3DCNN) for the HSI classification task. We propose using multiple 3D convolutional kernels to extract packed feature information and fully learn the spatial-spectral features of HSI, while designing residual 3D convolutional branches to avoid the decline of classification accuracy due to network degradation. Secondly, we also design a 2D Inception module with a joint channel attention mechanism to quickly extract key spatial feature information at different scales of the HSI and reduce the complexity of the 3D model. Owing to the high parallel processing capability and nonlinear global action of the Multilayer Perceptron (MLP), we use it in combination with the preceding CNN structure for the final classification step. Experimental results on two HSI datasets show that the proposed INRAM-3DCNN method has superior classification performance and performs the classification task excellently.
Keywords: INRAM-3DCNN, residual, channel attention, hyperspectral image classification
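A short PyTorch sketch of one plausible building block, a multi-scale 3D convolution with a residual branch; channel counts and kernel sizes are illustrative assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class Residual3DBlock(nn.Module):
    """Multi-scale 3D convolution with a residual (identity) branch, in the
    spirit of the residual 3D branches described in the abstract."""
    def __init__(self, channels):
        super().__init__()
        # two kernel sizes capture spectral-spatial detail at different scales
        self.conv3 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.conv5 = nn.Conv3d(channels, channels, kernel_size=5, padding=2)
        self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=1)
        self.bn = nn.BatchNorm3d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.conv3(x), self.conv5(x)], dim=1)
        y = self.bn(self.fuse(y))
        return self.act(y + x)       # residual connection fights degradation

# HSI patch: batch x channels x spectral bands x height x width
x = torch.randn(2, 8, 30, 9, 9)
print(Residual3DBlock(8)(x).shape)   # torch.Size([2, 8, 30, 9, 9])
```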
Procedia PDF Downloads 79
1196 Effect of Saponin Enriched Soapwort Powder on Structural and Sensorial Properties of Turkish Delight
Authors: Ihsan Burak Cam, Ayhan Topuz
Abstract:
Turkish delight has been produced by bleaching the plain delight mix (refined sugar, water, and starch) with soapwort extract and powdered sugar. Soapwort extract, which contains a high amount of saponin, is an additive used in Turkish delight and tahini halvah production; acting as an emulsifier, its bioactive saponin content improves consistency, chewiness, and color. In this study, soapwort powder was produced after determining the optimum process conditions for the soapwort extract using the response surface method. The extract was enriched in saponin by reverse osmosis (to 63% saponin on a dry basis). A Büchi B-290 mini spray dryer was used to produce spray-dried soapwort powder (aw = 0.254) from the enriched soapwort concentrate. The optimized processing steps and the saponin enrichment of the soapwort extract were tested in Turkish delight production. Delight samples produced with the soapwort powder and with a commercial extract (control) were compared in chewiness, springiness, stickiness, adhesiveness, hardness, color, and sensorial characteristics. According to the results, all textural properties except hardness of the delights produced with the powder were statistically different from the control samples. The chewiness, springiness, stickiness, adhesiveness, and hardness values of the samples (delights produced with the powder / control delights) were 361.9/1406.7, 0.095/0.251, -120.3/-51.7, 781.9/1869.3, and 3427.3 g/3118.4 g, respectively. According to the quality analysis run on the end products, neither the soapwort extract nor the soapwort powder had a statistically negative effect on the color and appearance of the Turkish delight.
Keywords: saponin, delight, soapwort powder, spray drying
Procedia PDF Downloads 253
1195 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation
Authors: Ekin Nurbaş
Abstract:
One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DoA) of plane waves impinging on an array of sensors. In recent years, Compressive Sensing based DoA estimation methods have been proposed, and it has been discovered that Compressive Sensing (CS)-based algorithms achieve significant performance for DoA estimation even in scenarios with multiple coherent sources. On the other hand, the Genetic Algorithm, a method that provides a solution strategy inspired by natural selection, has been used in sparse representation problems in recent years and provides significant improvements in performance. With all of this in consideration, this paper proposes a method that combines the Genetic Algorithm (GA) and Multi-Criteria Decision Making (MCDM) approaches for Direction of Arrival (DoA) estimation in the Compressive Sensing (CS) framework. In this method, we generate a multi-objective optimization problem by splitting the norm minimization and reconstruction loss minimization parts of the Compressive Sensing algorithm. With the help of the Genetic Algorithm, multiple non-dominated solutions are obtained for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is selected with multiple MCDM methods. Moreover, the performance of the proposed method is compared with the CS-based methods in the literature.
Keywords: genetic algorithm, direction of arrival estimation, multi criteria decision making, compressive sensing
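A toy NumPy sketch of the overall structure: split the ℓ1-norm and reconstruction-loss objectives, evolve a population, filter non-dominated solutions, and select one with a TOPSIS-style MCDM ranking. The operators, parameters, and random sensing matrix are illustrative stand-ins, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 60))               # stand-in for the array manifold
x_true = np.zeros(60); x_true[[12, 40]] = 1.0   # two sources on a 60-bin DoA grid
y = A @ x_true + 0.05 * rng.standard_normal(20)

def objectives(x):                  # the two split objectives
    return np.array([np.abs(x).sum(), np.linalg.norm(y - A @ x)])

pop = 0.1 * rng.standard_normal((80, 60))       # random initial population
for _ in range(300):
    f = np.array([objectives(p) for p in pop])
    fn = f / (f.max(axis=0) + 1e-12)            # normalize each objective
    score = fn @ rng.random(2)                  # random trade-off each generation
    parents = pop[np.argsort(score)[:40]]
    children = parents + 0.05 * rng.standard_normal(parents.shape)
    children[rng.random(children.shape) < 0.7] = 0.0   # mutation pushing sparsity
    pop = np.vstack([parents, children])

F = np.array([objectives(p) for p in pop])
nd = [i for i in range(len(F))                  # Pareto (non-dominated) filter
      if not any((F[j] <= F[i]).all() and (F[j] < F[i]).any() for j in range(len(F)))]

Z = F[nd] / np.linalg.norm(F[nd], axis=0)       # TOPSIS-style closeness ranking
d_best = np.linalg.norm(Z - Z.min(axis=0), axis=1)
d_worst = np.linalg.norm(Z - Z.max(axis=0), axis=1)
best = nd[int(np.argmax(d_worst / (d_best + d_worst + 1e-12)))]
print("estimated DoA bins:", np.where(np.abs(pop[best]) > 0.3)[0])
```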
Procedia PDF Downloads 147
1194 Plant Identification Using Convolution Neural Network and Vision Transformer-Based Models
Authors: Virender Singh, Mathew Rees, Simon Hampton, Sivaram Annadurai
Abstract:
Plant identification is a challenging task that aims to identify the family, genus, and species according to plant morphological features. Automated deep learning-based computer vision algorithms are widely used for identifying plants and can help users narrow down the possibilities. However, numerous morphological similarities between and within species render correct classification difficult. In this paper, we tested custom convolution neural network (CNN) and vision transformer (ViT) based models using the PyTorch framework to classify plants. We used a large dataset of 88,000 images provided by the Royal Horticultural Society (RHS) and a smaller dataset of 16,000 images from the PlantClef 2015 dataset for classifying plants at the genus and species levels, respectively. Our results show that for classifying plants at the genus level, ViT models perform better than the CNN-based models ResNet50 and ResNet-RS-420 and other state-of-the-art CNN-based models suggested in previous studies on a similar dataset; the ViT model achieved a top accuracy of 83.3% at the genus level. For classifying plants at the species level, ViT models also perform better than the CNN-based models ResNet50 and ResNet-RS-420, with a top accuracy of 92.5%. We show that the correct set of augmentation techniques plays an important role in classification success. In conclusion, these results could help end users, professionals, and the general public alike in identifying plants quicker and with improved accuracy.
Keywords: plant identification, CNN, image processing, vision transformer, classification
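A hedged sketch of the kind of ViT fine-tuning and augmentation pipeline described, using torchvision (>= 0.13); the class count, augmentations, and hyperparameters are assumptions, not the paper's settings:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# augmentations of the kind the paper credits for classification success
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

num_classes = 100                      # e.g., genera; dataset-dependent
model = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
model.heads.head = nn.Linear(model.heads.head.in_features, num_classes)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One supervised fine-tuning step on a batch of augmented images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# smoke test on random tensors standing in for a real DataLoader batch
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, num_classes, (4,))))
```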
Procedia PDF Downloads 104
1193 The Functional Rehabilitation of Peri-Implant Tissue Defects: A Case Report
Authors: Özgür Öztürk, Cumhur Sipahi, Hande Yeşil
Abstract:
Implant-retained restorations commonly consist of a metal framework veneered with ceramic or composite facings. The increasing and expanding use of indirect resin composites in dentistry is a result of innovations in materials and processing techniques. Of special interest to the implant restorative field is the possibility that composites transmit significantly lower peak vertical and transverse forces at the peri-implant level compared to metal-ceramic suprastructures in implant-supported restorations. A 43-year-old male patient was referred to the department of prosthodontics for an implant-retained fixed prosthesis. The clinical and radiographic examination of the patient demonstrated the presence of an implant in the right mandibular first molar region. A considerable amount of marginal bone loss around the implant was detected in radiographic examinations, combined with a remarkable peri-implant soft tissue deficiency. To minimize the chewing loads transmitted to the implant-bone interface, it was decided to fabricate an indirect composite resin veneered single metal crown over a screw-retained abutment. At the end of the treatment, the functional and aesthetic deficiencies were fully compensated. After a 6-month clinical and radiographic follow-up period, no additional pathologic invasion was detected at the implant-bone interface, and the implant-retained restoration did not reveal any severe complication.
Keywords: dental implant, fixed partial dentures, indirect composite resin, peri-implant defects
Procedia PDF Downloads 262
1192 Gas Phase Extraction: An Environmentally Sustainable and Effective Method for The Extraction and Recovery of Metal from Ores
Authors: Kolela J Nyembwe, Darlington C. Ashiegbu, Herman J. Potgieter
Abstract:
Over the past few decades, the demand for metals has increased significantly. This has led to a decline of high-grade ores over time and an increase in mineral complexity and matrix heterogeneity. In addition, there are rising concerns about greener processes and a sustainable environment. Due to these challenges, the mining and metal industry has been forced to develop new technologies that are able to economically process and recover metallic values from low-grade ores, from materials with a metal content locked up in industrially processed residues (tailings and slag), and from complex-matrix mineral deposits. Several methods to address these issues have been developed, among which are ionic liquids (IL), heap leaching, and bioleaching. Recently, the gas phase extraction technique has been gaining interest because it eliminates many of the problems encountered in conventional mineral processing methods. The technique relies on the formation of volatile metal complexes, which can be removed from the residual solids by a carrier gas. The complexes can then be reduced using an appropriate method to obtain the metal and regenerate and recover the organic extractant. Laboratory work on gas phase extraction has been conducted for the extraction and recovery of aluminium (Al), iron (Fe), copper (Cu), chromium (Cr), nickel (Ni), lead (Pb), and vanadium (V). In all cases, the extraction was found to depend on temperature and mineral surface area. The process technology appears very promising, offers the feasibility of recirculation and organic reagent regeneration, and has the potential to deliver on all promises of a "greener" process.
Keywords: gas-phase extraction, hydrometallurgy, low-grade ore, sustainable environment
Procedia PDF Downloads 133
1191 Comparison of Central Light Reflex Width-to-Retinal Vessel Diameter Ratio between Glaucoma and Normal Eyes by Using Edge Detection Technique
Authors: P. Siriarchawatana, K. Leungchavaphongse, N. Covavisaruch, K. Rojananuangnit, P. Boondaeng, N. Panyayingyong
Abstract:
Glaucoma is a disease that causes visual loss in adults. Glaucoma causes damage to the optic nerve, and its overall pathophysiology is still not fully understood. Vasculopathy may be one of the possible causes of nerve damage. Photographic imaging of retinal vessels by fundus camera during eye examination may complement clinical management. This paper presents an innovation for measuring the central light reflex width-to-retinal vessel diameter ratio (CRR) from digital retinal photographs. Using our edge detection technique, CRRs from glaucoma and normal eyes were compared to examine differences and associations. CRRs were evaluated on fundus photographs of participants from Mettapracharak (Wat Raikhing) Hospital in Nakhon Pathom, Thailand. Fifty-five photographs from normal eyes and twenty-one photographs from glaucoma eyes were included; participants with hypertension were excluded. In each photograph, CRRs from four retinal vessels, including arteries and veins in the inferotemporal and superotemporal regions, were quantified using the edge detection technique. From our findings, the mean CRRs of all four retinal arteries and veins were significantly higher in persons with glaucoma than in those without (0.34 vs. 0.32, p < 0.05 for the inferotemporal vein; 0.33 vs. 0.30, p < 0.01 for the inferotemporal artery; 0.34 vs. 0.31, p < 0.01 for the superotemporal vein; and 0.33 vs. 0.30, p < 0.05 for the superotemporal artery). From these results, an increase in the CRRs of retinal vessels, as quantitatively measured from fundus photographs, could be associated with glaucoma.
Keywords: glaucoma, retinal vessel, central light reflex, image processing, fundus photograph, edge detection
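A simplified stand-in for the measurement, assuming a 1-D intensity profile sampled perpendicular to a vessel; the real method works on fundus photographs with a dedicated edge detector:

```python
import numpy as np

def crr(profile):
    """Central light reflex width-to-retinal vessel diameter ratio (CRR) from a
    1-D intensity profile across a vessel (dark vessel, bright central reflex).
    Vessel edges are taken at half-depth crossings; the reflex width is the
    FWHM around the central bright peak. A simplified illustrative measure."""
    p = profile.astype(float)
    baseline = (p[0] + p[-1]) / 2.0
    inside = np.where(p <= (baseline + p.min()) / 2.0)[0]   # half-depth crossings
    left, right = inside.min(), inside.max()
    vessel_d = right - left
    core = p[left:right + 1]
    n = core.size
    peak = n // 4 + np.argmax(core[n // 4:3 * n // 4])      # central reflex peak
    half = (core[peak] + core[n // 4:3 * n // 4].min()) / 2.0
    lo = hi = peak                                           # expand to FWHM
    while lo > 0 and core[lo - 1] >= half:
        lo -= 1
    while hi < n - 1 and core[hi + 1] >= half:
        hi += 1
    return (hi - lo + 1) / vessel_d

# synthetic vessel profile with a narrow central light reflex
x = np.arange(61)
profile = (200 - 120 * np.exp(-((x - 30) / 8.0) ** 2)
           + 60 * np.exp(-((x - 30) / 2.0) ** 2))
print(round(crr(profile), 2))   # ~0.2, the same order as the reported CRRs
```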
Procedia PDF Downloads 325
1190 Effect of Cellular Water Transport on Deformation of Food Material during Drying
Authors: M. Imran Hossen Khan, M. Mahiuddin, M. A. Karim
Abstract:
Drying is a food processing technique in which simultaneous heat and mass transfer take place from the surface to the center of the sample. Deformation of food materials during drying is a common physical phenomenon that affects the textural quality and taste of the dried product. Most plant-based food materials are porous and hygroscopic in nature and contain about 80-90% water in different cellular environments: the intercellular environment and the intracellular environment. Transport of this cellular water has a significant effect on material deformation during drying. However, understanding the scale of deformation is very complex due to the diverse nature and structural heterogeneity of food materials. Knowledge about the effect of cellular water transport on the deformation of material during drying is crucial for increasing energy efficiency and obtaining better quality dried foods. Therefore, the primary aim of this work is to investigate the effect of intracellular water transport on material deformation during drying. In this study, apple tissue was taken for the investigation. The experiment was carried out using 1H-NMR T2 relaxometry with a conventional dryer. The experimental results are consistent with the understanding that transport of intracellular water causes cellular shrinkage associated with the anisotropic deformation of the whole apple tissue. Interestingly, it was found that the deformation of apple tissue takes place at different stages of drying rather than all at one time. Moreover, it was found that the penetration rate of heat energy, together with the pressure gradient between the intracellular and intercellular environments, is the force responsible for rupturing the cell membrane.
Keywords: heat and mass transfer, food material, intracellular water, cell rupture, deformation
Procedia PDF Downloads 221
1189 Linux Security Management: Research and Discussion on Problems Caused by Different Aspects
Authors: Ma Yuzhe, Burra Venkata Durga Kumar
Abstract:
The computer is a great invention. As people use computers more and more frequently, the demand for PCs is growing, and the performance of computer hardware keeps rising to handle more complex processing and operation. However, the operating system, which provides the soul of the computer, stalled in its development for a time. In the face of the high price of UNIX (Uniplexed Information and Computing System), batch after batch of personal computer owners could only give up. The Disk Operating System was too simple and left little room for innovation, which made it a poor choice. And macOS is a dedicated operating system for Apple computers, so it cannot be widely used on personal computers. In this environment, Linux, based on the UNIX system, was born. Linux combines the advantages of existing operating systems and is composed of many small modular components, which makes its core architecture relatively powerful. The Linux system supports all Internet protocols, so it has very good network functionality. Linux supports multiple users, and users' files do not interfere with each other. Linux can also multitask, running different programs independently at the same time. Linux is a completely open source operating system: users can obtain and modify the source code for free. Because of these advantages, Linux has attracted a large number of users and programmers. The Linux system is also constantly upgraded and improved, and many different versions have been issued, suitable for community use and commercial use. The Linux system has good security because it relies on a file partition system. However, due to the constant emergence of vulnerabilities and hazards, the security of the operating system in use also needs more attention. This article focuses on the analysis and discussion of Linux security issues.
Keywords: Linux, operating system, system management, security
Procedia PDF Downloads 108
1188 Classifier for Liver Ultrasound Images
Authors: Soumya Sajjan
Abstract:
Liver cancer is among the most common cancers worldwide in men and women and is one of the few cancers still on the rise, and liver disease is the 4th leading cause of death. According to new NHS (National Health Service) figures, deaths from liver diseases have reached record levels, rising by 25% in less than a decade; heavy drinking, obesity, and hepatitis are believed to be behind the rise. In this study, we focus on the development of a diagnostic classifier for ultrasound liver lesions. Ultrasound (US) sonography is an easy-to-use and widely popular imaging modality because of its ability to visualize many human soft tissues/organs without any harmful effect. This paper provides an overview of the underlying concepts, along with algorithms for the processing of liver ultrasound images. Naturally, ultrasound liver lesion images contain substantial speckle noise, so developing a classifier for them is a challenging task. We propose a fully automatic machine learning system for developing this classifier. First, we segment the liver image and calculate textural features from the co-occurrence matrix and the run-length method. For classification, a Support Vector Machine is used, based on the risk bounds of statistical learning theory. The textural features from the different feature methods are given as input to the SVM individually. Performance analysis on the train and test datasets was carried out separately using the SVM model. Whenever an ultrasonic liver lesion image is given to the SVM classifier system, the features are calculated and the image is classified as a normal or diseased liver lesion. We hope the result will be helpful to physicians in identifying liver cancer non-invasively.
Keywords: segmentation, Support Vector Machine, ultrasound liver lesion, co-occurrence matrix
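A minimal sketch of the co-occurrence-features-plus-SVM pipeline, using scikit-image (>= 0.19) and scikit-learn on synthetic patches; the run-length features and the segmentation step are omitted:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # skimage >= 0.19
from sklearn.svm import SVC

def texture_features(roi):
    """Co-occurrence (GLCM) texture features for one liver ultrasound ROI."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# hypothetical training data: smooth (normal) vs strongly speckled (lesion) patches
rng = np.random.default_rng(0)
normal = [np.clip(rng.normal(120, 5, (64, 64)), 0, 255).astype(np.uint8)
          for _ in range(20)]
lesion = [np.clip(rng.normal(120, 40, (64, 64)), 0, 255).astype(np.uint8)
          for _ in range(20)]
X = np.array([texture_features(r) for r in normal + lesion])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf", C=1.0).fit(X, y)   # SVM motivated by SLT risk bounds
print(clf.predict(texture_features(lesion[0]).reshape(1, -1)))  # -> [1]
```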
Procedia PDF Downloads 411
1187 Design and Testing of Electrical Capacitance Tomography Sensors for Oil Pipeline Monitoring
Authors: Sidi M. A. Ghaly, Mohammad O. Khan, Mohammed Shalaby, Khaled A. Al-Snaie
Abstract:
Electrical capacitance tomography (ECT) is a valuable, non-invasive technique used to monitor multiphase flow processes, especially within industrial pipelines. This study focuses on the design, testing, and performance comparison of ECT sensors configured with 8, 12, and 16 electrodes, aiming to evaluate their effectiveness in imaging accuracy, resolution, and sensitivity. Each sensor configuration was designed to capture the spatial permittivity distribution within a pipeline cross-section, enabling visualization of phase distribution and flow characteristics such as oil and water interactions. The sensor designs were implemented and tested in closed pipes to assess their response to varying flow regimes. Capacitance data collected from each electrode configuration were reconstructed into cross-sectional images, enabling a comparison of image resolution, noise levels, and computational demands. The results indicate that the 16-electrode configuration yields higher image resolution and sensitivity to phase boundaries compared to the 8- and 12-electrode setups, making it more suitable for complex flow visualization. However, the 8- and 12-electrode sensors demonstrated advantages in processing speed and lower computational requirements. This comparative analysis provides critical insights into optimizing ECT sensor design based on specific industrial requirements, from high-resolution imaging to real-time monitoring needs.
Keywords: capacitance tomography, modeling, simulation, electrode, permittivity, fluid dynamics, imaging sensitivity measurement
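For orientation, a toy sketch of the simplest reconstruction used in ECT, linear back projection, on a random stand-in sensitivity matrix; a real sensor would use simulated sensitivity maps and measured, normalized capacitances:

```python
import numpy as np

def lbp_reconstruct(S, c):
    """Linear back projection: smear each normalized capacitance measurement
    back over its sensitivity map.
    S: (m x n) sensitivity matrix (m electrode pairs, n image pixels);
    c: (m,) normalized capacitance vector. Returns an n-pixel grey-level image."""
    g = S.T @ c
    return g / (S.T @ np.ones(S.shape[0]))      # normalize by total sensitivity

# An n-electrode sensor yields n(n-1)/2 independent pair measurements:
# 28 for 8 electrodes, 66 for 12, 120 for 16 - hence the resolution trade-off.
rng = np.random.default_rng(0)
m, n_pixels = 66, 32 * 32                       # 12-electrode sensor, 32x32 grid
S = np.abs(rng.standard_normal((m, n_pixels)))  # stand-in for simulated maps
g_true = np.zeros(n_pixels)
g_true[500:540] = 1.0                           # e.g., an oil region
c = S @ g_true                                  # toy linearized forward model
g_hat = lbp_reconstruct(S, c / c.max())         # normalized measurements
print(g_hat.reshape(32, 32).shape)              # cross-sectional image
```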
Procedia PDF Downloads 10
1186 Histone Deacetylases Inhibitor - Valproic Acid Sensitizes Human Melanoma Cells for alkylating agent and PARP inhibitor
Authors: Małgorzata Drzewiecka, Tomasz Śliwiński, Maciej Radek
Abstract:
The inhibition of histone deacetylases (HDACs) holds promise as a potential anti-cancer therapy because histone and non-histone protein acetylation is frequently disrupted in cancer, leading to cancer initiation and progression. Additionally, histone deacetylase inhibitors (HDACi) such as the class I HDAC inhibitor valproic acid (VPA) have been shown to enhance the effectiveness of DNA-damaging factors such as cisplatin or radiation. In this study, we found that using VPA in combination with talazoparib (BMN-673, a PARP1 inhibitor, PARPi) and/or dacarbazine (DTIC, an alkylating agent) resulted in increased DNA double-strand breaks (DSBs) and reduced survival (while not affecting primary melanocytes) and proliferation of melanoma cells. Furthermore, pharmacologic inhibition of class I HDACs sensitizes melanoma cells to apoptosis following exposure to DTIC and BMN-673. In addition, inhibition of HDACs sensitized melanoma cells to dacarbazine and BMN-673 in melanoma xenografts in vivo. At the mRNA and protein levels, the histone deacetylase inhibitor downregulated RAD51 and FANCD2. This study shows that combining an HDACi, an alkylating agent, and a PARPi could potentially enhance the treatment of melanoma, which is known for being one of the most aggressive malignant tumors. The findings presented here point to a scenario in which HDACs, via enhancing the HR-dependent repair of DSBs created during the processing of DNA lesions, are essential nodes in the resistance of malignant melanoma cells to methylating agent-based therapies.
Keywords: melanoma, HDAC, PARP inhibitor, valproic acid
Procedia PDF Downloads 82
1185 Assessment of Heavy Metals and Radionuclide Concentrations in Mafikeng Waste Water Treatment Plant
Authors: M. Mathuthu, N. N. Gaxela, R. Y. Olobatoke
Abstract:
A study was carried out to assess the heavy metal and radionuclide concentrations of water from the wastewater treatment plant in Mafikeng Local Municipality to evaluate treatment efficiency. Ten water samples were collected from various stages of water treatment, which included the sewage delivered to the plant, the two treatment stages, the effluent, and also the community. The samples were analyzed for heavy metal content using an Inductively Coupled Plasma Mass Spectrometer. The gross α/β activity concentration in the water samples was evaluated by liquid scintillation counting, whereas the concentrations of individual radionuclides were measured by gamma spectroscopy. The results showed a marked reduction in heavy metal concentrations, from 3 µg/L (As)–670 µg/L (Na) in the sewage entering the plant to 2 µg/L (As)–170 µg/L (Fe) in the effluent. Beta activity was not detected in the water samples except in the incoming sewage, where the concentration was within reference limits. However, the gross α activity in all the water samples (7.7-8.02 Bq/L) exceeded the 0.1 Bq/L limit set by the World Health Organization (WHO). Gamma spectroscopy analysis revealed very high concentrations of 235U and 226Ra in the water samples, with the lowest concentrations (9.35 and 5.44 Bq/L, respectively) in the incoming sewage and the highest concentrations (73.8 and 47 Bq/L, respectively) in the community water, suggesting contamination along the water processing line. All the values were considerably higher than the limits of the South African Target Water Quality Range and of the WHO. However, the estimated total doses of the two radionuclides for the analyzed water samples (10.62-45.40 µSv yr-1) were all well below the reference level of the committed effective dose of 100 µSv yr-1 recommended by the WHO.
Keywords: gross α/β activity, heavy metals, radionuclides, 235U, 226Ra, water sample
Procedia PDF Downloads 448
1184 Optimization of Reliability Test Plans: Increase Wafer Fabrication Equipments Uptime
Authors: Swajeeth Panchangam, Arun Rajendran, Swarnim Gupta, Ahmed Zeouita
Abstract:
Semiconductor processing chambers tend to operate under controlled but aggressive conditions (chemistry, plasma, high temperature, etc.). Owing to this, the design of this equipment requires developing robust and reliable hardware and software. Any equipment downtime due to reliability issues can have cost implications both for customers, in terms of tool downtime (reduced throughput), and for equipment manufacturers, in terms of high warranty costs and a customer trust deficit. A thorough reliability assessment of critical parts and a plan for preventive maintenance/replacement schedules need to be made before tool shipment. This helps to save significant warranty costs and tool downtime in the field. However, designing a proper reliability test plan that accurately demonstrates reliability targets with a proper sample size and test duration is quite challenging, mainly because components can fail in different failure modes that fit different Weibull beta value distributions. Without an a priori Weibull beta for the failure mode under consideration, the plan leads to over- or under-utilization of resources, which eventually ends up in false positive or false negative estimates. This paper proposes a methodology to design a reliability test plan with optimal sample size, test duration, or both (independent of the a priori Weibull beta). This methodology can be used in demonstration tests and can be extended to accelerated life tests to further decrease sample size or test duration.
Keywords: reliability, stochastics, preventive maintenance
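To make the beta-sensitivity concrete, here is a sketch of the classical zero-failure (success-run) Weibull demonstration test; it is a textbook formula, not the paper's proposed methodology:

```python
import math

def sample_size(conf, rel, t_test, t_target, beta):
    """Zero-failure Weibull demonstration test: number of units to test for
    t_test hours with no failures to claim reliability >= rel at t_target
    with confidence conf, given Weibull shape beta."""
    ratio = (t_test / t_target) ** beta
    return math.ceil(math.log(1.0 - conf) / (ratio * math.log(rel)))

# trade-off between sample size and test duration for R=0.9 @ 1000 h, 90% conf.
for beta in (0.8, 1.0, 2.0):       # without an a priori beta, the plans differ widely
    sizes = [sample_size(0.90, 0.90, t, 1000.0, beta) for t in (500, 1000, 2000)]
    print(f"beta={beta}: n at t=500/1000/2000 h -> {sizes}")
```

For beta = 1 and a full-length test this reproduces the classic 22-unit success run, while for beta = 2 a doubled test duration cuts the demonstration to 6 units; assuming the wrong beta therefore over- or under-sizes the test, which is exactly the problem the paper targets.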
Procedia PDF Downloads 15
1183 Energy Efficient Massive Data Dissemination Through Vehicle Mobility in Smart Cities
Authors: Salman Naseer
Abstract:
One of the main challenges of operating a smart city (SC) is collecting the massive data generated from multiple data sources (DS) and transmitting them to the control units (CU) for further data processing and analysis. These ever-increasing data demands require not only more and more capacity in the transmission channels but also result in resource over-provisioning to meet the resilience requirements, and thus unavoidable waste because of the data fluctuations throughout the day. In addition, the high energy consumption (EC) and carbon discharges from these data transmissions pose serious issues to the environment we live in. Therefore, to overcome the issues of intensive EC and carbon emissions (CE) of massive data dissemination in smart cities, we propose an energy-efficient and carbon-reducing approach that utilizes the daily mobility of existing vehicles as an alternative communication channel to accommodate the data dissemination in smart cities. To illustrate the effectiveness and efficiency of our approach, we take Auckland City in New Zealand as an example, assuming massive data generated by various sources geographically scattered throughout the Auckland region must reach the control centres located in the city centre. The numerical results show that our proposed approach can provide up to 5 times lower delay in transferring large volumes of data by utilizing existing daily vehicle mobility than the conventional transmission network. Moreover, our proposed approach offers about 30% less EC and CE than the conventional network transmission approach.
Keywords: smart city, delay tolerant network, infrastructure offloading, opportunistic network, vehicular mobility, energy consumption, carbon emission
Procedia PDF Downloads 142
1182 Early Depression Detection for Young Adults with a Psychiatric and AI Interdisciplinary Multimodal Framework
Authors: Raymond Xu, Ashley Hua, Andrew Wang, Yuru Lin
Abstract:
During COVID-19, the depression rate has increased dramatically, and young adults are most vulnerable to the mental health effects of the pandemic. Lower-income families have a higher rate of depression diagnoses than the general population but less access to clinics. This research aims to achieve early depression detection at low cost, large scale, and high accuracy with an interdisciplinary approach, incorporating clinical practices defined by the American Psychiatric Association (APA) as well as a multimodal AI framework. The proposed approach detects the nine depression symptoms with Natural Language Processing sentiment analysis and a symptom-based lexicon uniquely designed for young adults. The experiments were conducted on multimedia survey results from adolescents and young adults and on unbiased Twitter communications. The result was further aggregated with the facial emotional cues analyzed by a Convolutional Neural Network on the multimedia survey videos. Five experiments, each conducted on 10k data entries, reached consistent results with an average accuracy of 88.31%, higher than existing natural language analysis models. This approach can reach the 300+ million daily active Twitter users and is highly accessible to low-income populations, promoting early depression detection to raise awareness in adolescents and young adults and revealing complementary cues to assist clinical depression diagnosis.
Keywords: artificial intelligence, COVID-19, depression detection, psychiatric disorder
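A minimal sketch of the lexicon side of such a framework, matching the nine DSM-5 depression symptom clusters against a post; the keyword lists are illustrative placeholders, not the study's lexicon:

```python
# Illustrative placeholder lexicon keyed by the nine DSM-5 symptom clusters.
SYMPTOM_LEXICON = {
    "depressed_mood": ["sad", "hopeless", "empty", "down"],
    "anhedonia": ["no interest", "nothing is fun", "don't enjoy"],
    "weight_appetite": ["no appetite", "overeating", "lost weight"],
    "sleep": ["can't sleep", "insomnia", "sleeping all day"],
    "psychomotor": ["restless", "slowed down"],
    "fatigue": ["exhausted", "no energy", "tired all the time"],
    "worthlessness_guilt": ["worthless", "guilty", "failure"],
    "concentration": ["can't focus", "can't concentrate"],
    "suicidal_ideation": ["want to die", "end it all"],
}

def detect_symptoms(text):
    """Return the symptom clusters whose keywords appear in the text."""
    t = text.lower()
    return [s for s, kws in SYMPTOM_LEXICON.items() if any(k in t for k in kws)]

post = "I'm exhausted all the time, can't sleep, and nothing is fun anymore."
hits = detect_symptoms(post)
# a simple screening rule in the spirit of symptom counting (threshold assumed)
print(hits, "->", "flag for review" if len(hits) >= 3 else "no flag")
```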
Procedia PDF Downloads 131
1181 Geographical Indication Protection for Agricultural Products: Contribution for Achieving Food Security in Indonesia
Authors: Mas Rahmah
Abstract:
Indonesia is the most populous Southeast Asian nation, and as Indonesia's population is constantly growing, food security has become a crucial trending issue. Although Indonesia has more than enough natural resources and agricultural products to ensure food security for all, it still faces food security problems because of adverse weather conditions, an increasing population, political instability, economic factors (unemployment, rising food prices), and a dependent system of agriculture. This paper argues that Geographical Indication (GI) can aid in transforming Indonesia's dependent agricultural system by tapping the unique attributes of its quality products, since Indonesia has many agricultural products with unique quality and special characteristics associated with geographical factors, such as Toraja coffee, Alor vanilla, Banda nutmeg, Java tea, Deli tobacco, Cianjur rice, etc. The reputation of such agricultural products and their intrinsic quality should be protected under GI because GI provides benefits that support food security programs. Therefore, this paper presents the benefits of GI protection, such as increasing productivity, improving exports of GI products, creating employment, adding economic value to products, and increasing the diversity of the supply of natural and unique quality products, all of which can contribute to food security. The analysis concludes that promoting GI may indirectly contribute to food security by adding value through the incorporation of territory-specific cultural, environmental, and social qualities into the production, processing, and development of unique local, niche, and special agricultural products.
Keywords: geographical indication, food security, agricultural product, Indonesia
Procedia PDF Downloads 369
1180 Evaluation of the Safety Status of Beef Meat During Processing at Slaughterhouse in Bouira, Algeria
Authors: A. Ameur Ameur, H. Boukherrouba
Abstract:
In red meat slaughterhouses, a significant number of organs and carcasses are seized because of the presence of lesions of various origins. The objective of this study was to characterize these lesions and evaluate their frequency at the slaughterhouse of the Wilaya of Bouira. Of the 2646 cattle slaughtered and inspected, 72% of the carcasses underwent no seizure, while 28% underwent at least one. Lungs (325; 44%), livers (164; 22%), and hearts (149; 21%) were the main organs seized; kidneys (38; 5%), udders (33; 4%), and whole carcasses (16; 2%) were seized less often. The main reason for seizure was hydatid cyst for most of the seized organs, such as the lungs (64.5%), livers (51.8%), and hearts (23.2%); hydronephrosis for the kidneys (39.4%); and chronic mastitis (54%) for the udders. In second place, we recorded pneumonia (16%) for the lungs and chronic fascioliasis (25%) for the livers. A significant difference (p < 0.0001) was observed by sex, breed, origin, and age across all seized cattle, and specific seizure patterns and pathologies were recorded by breed. The local breed presented hydatid cysts (75.2%), chronic fascioliasis (95%), and pyelonephritis (60%), whereas the improved breed presented all of the respiratory lesions, including pneumonia (64%), as well as chronic tuberculosis (64%) and mastitis (76%). These results are an important step in implementing the concept of risk assessment as the scientific basis of food legislation, through the identification and characterization of the macroscopic lesions leading to withdrawals of meat and the establishment of the level of inclusion of these lesions within recommended risk assessment systems (HACCP).
Keywords: slaughterhouses, meat safety, seizure patterns, HACCP
Procedia PDF Downloads 465
1179 An Adaptive Back-Propagation Network and Kalman Filter Based Multi-Sensor Fusion Method for Train Location System
Authors: Yu-ding Du, Qi-lian Bao, Nassim Bessaad, Lin Liu
Abstract:
The Global Navigation Satellite System (GNSS) is regarded as an effective approach for replacing the large number of track-side balises used in modern train localization systems. This paper describes a method based on the fusion of data from a GNSS receiver and an odometer that can significantly improve positioning accuracy. A digital track map is needed as another sensor to project the two-dimensional GNSS position to a one-dimensional along-track distance, due to the fact that the train's position is constrained to the track. A model trained by a BP neural network is used to estimate the trend positioning error, which is related to the specific location and the processing of the nearby portion of the digital track map. Considering that in some conditions satellite signal failure will increase the GNSS positioning error, a detection step for the GNSS signal is applied. An adaptive weighted fusion algorithm is presented to reduce the standard deviation of the train speed measurement. Finally, an Extended Kalman Filter (EKF) is used to fuse the projected 1-D GNSS positioning data and the 1-D train speed data to obtain the position estimate. Experimental results suggest that the proposed method performs well and can reduce the positioning error notably.
Keywords: multi-sensor data fusion, train positioning, GNSS, odometer, digital track map, map matching, BP neural network, adaptive weighted fusion, Kalman filter
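A minimal sketch of the fusion step with assumed noise parameters; the map projection and BP error model that make the full system an EKF are abstracted away, leaving a linear Kalman update:

```python
import numpy as np

# State: [along-track position (m), speed (m/s)]. Measurements: map-projected
# GNSS position (trend error already subtracted) and odometer speed.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-speed transition
H = np.eye(2)                                  # both states measured directly
Q = np.diag([0.5, 0.1])                        # process noise (assumed)
R = np.diag([25.0, 0.2])                       # GNSS/odometer noise (assumed)

x = np.array([0.0, 20.0])                      # initial state
P = np.eye(2)

def kf_step(x, P, z, gnss_ok=True):
    x_pred, P_pred = F @ x, F @ P @ F.T + Q    # predict
    R_k = R.copy()
    if not gnss_ok:                            # GNSS failure detected:
        R_k[0, 0] = 1e6                        # effectively ignore its position
    S = H @ P_pred @ H.T + R_k
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)      # update with the innovation
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for k in range(1, 4):                          # three seconds of travel
    z = np.array([20.0 * k + np.random.randn() * 5.0,
                  20.0 + np.random.randn() * 0.4])
    x, P = kf_step(x, P, z, gnss_ok=(k != 2))  # simulate a dropout at k=2
    print(np.round(x, 1))
```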
Procedia PDF Downloads 252
1178 Application of Interferometric Techniques for Quality Control Oils Used in the Food Industry
Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich
Abstract:
The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that the repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. To assess the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, requiring only the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. The interferograms were obtained by means of a Mach-Zehnder interferometer using a beam from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured and analyzed, and its full width at half maximum (FWHM) was measured, using the Amcap and ImageJ software. The FWHM values were organized into three groups. The average of the FWHMs of group A shows behavior that is almost linear; therefore, the exposure time is probably not relevant when the oil is kept at constant temperature. Group B exhibits a slightly exponential trend when the temperature rises from 373 K to 393 K. Results of Student's t-test show a 95% probability (0.05) of a variation in the molecular composition of the two samples. Furthermore, we found a correlation between the iodine indexes (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
Keywords: food industry, interferometric, oils, quality control
Procedia PDF Downloads 372
1177 Q-Map: Clinical Concept Mining from Clinical Documents
Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala
Abstract:
Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, image analytics, etc. Most of the data in the field are well-structured and available in numerical or categorical formats that can be used for experiments directly. But at the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature, found in the form of discharge summaries, clinical notes, and procedural notes, which are in human-written narrative format and have neither a relational model nor any standard grammatical structure. An important step in utilizing these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data, using information retrieval and data mining techniques. To address this problem, the authors present Q-Map in this paper: a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
Keywords: information retrieval, unified medical language system, syntax based analysis, natural language processing, medical informatics
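Q-Map's implementation is not public; the following sketch only illustrates the general idea of indexing a curated knowledge source and scanning a note with longest-first string matching. The concepts and codes below are illustrative placeholders, not actual UMLS content:

```python
# Placeholder concept dictionary standing in for a curated knowledge source.
KNOWLEDGE_SOURCE = {
    "myocardial infarction": "C-0001",
    "diabetes mellitus": "C-0002",
    "diabetes": "C-0003",
    "hypertension": "C-0004",
}
MAX_WORDS = max(len(k.split()) for k in KNOWLEDGE_SOURCE)

def extract_concepts(note):
    """Greedy longest-first matching of indexed concepts in a clinical note."""
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " "
                      for ch in note.lower())
    tokens = cleaned.split()
    found, i = [], 0
    while i < len(tokens):
        for n in range(min(MAX_WORDS, len(tokens) - i), 0, -1):  # longest first
            phrase = " ".join(tokens[i:i + n])
            if phrase in KNOWLEDGE_SOURCE:
                found.append((phrase, KNOWLEDGE_SOURCE[phrase]))
                i += n
                break
        else:
            i += 1
    return found

note = "History of diabetes mellitus and hypertension; rule out myocardial infarction."
print(extract_concepts(note))
# [('diabetes mellitus', 'C-0002'), ('hypertension', 'C-0004'),
#  ('myocardial infarction', 'C-0001')]
```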
Procedia PDF Downloads 133
1176 Using Non-Negative Matrix Factorization Based on Satellite Imagery for the Collection of Agricultural Statistics
Authors: Benyelles Zakaria, Yousfi Djaafar, Karoui Moussa Sofiane
Abstract:
Agriculture is fundamental and remains an important sector of the Algerian economy; based on traditional techniques and structures, it generally serves consumption. The collection of agricultural statistics in Algeria is done using traditional methods, which consist of investigating land use through surveys and field visits. These statistics suffer from problems such as poor data quality, the long delay between collection and final availability, and high cost compared to their limited use. The objective of this work is to develop a processing chain for a reliable inventory of agricultural land by developing and implementing a new method of extracting information. Indeed, this methodology allowed us to combine remote sensing data and field data to collect statistics on the areas of different land types. The contribution of remote sensing to the improvement of agricultural statistics, in terms of area, was studied in the wilaya of Sidi Bel Abbes. It is in this context that we applied a method for extracting information from satellite images. This method, called non-negative matrix factorization (NMF), does not consider the pixel as a single entity but looks for the components within the pixel itself. The results obtained by the application of NMF were compared with field data and with the results obtained by the maximum likelihood method. We observed close agreement between the most important NMF results and the field data. We believe that this method of extracting information from satellite data leads to interesting results for different types of land use.
Keywords: blind source separation, hyper-spectral image, non-negative matrix factorization, remote sensing
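A brief scikit-learn sketch of NMF-based unmixing on a synthetic cube, with an assumed pixel size for the area statistics; a real run would use a calibrated satellite reflectance cube instead:

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy stand-in: a 100x100-pixel scene with 50 spectral bands, mixed from
# three ground-cover signatures (e.g., crop, bare soil, water).
rng = np.random.default_rng(0)
signatures = np.abs(rng.standard_normal((3, 50)))        # endmember spectra
abundance = rng.dirichlet(np.ones(3), size=100 * 100)    # per-pixel fractions
X = abundance @ signatures + 0.01 * rng.random((100 * 100, 50))

model = NMF(n_components=3, init="nndsvd", max_iter=500)
W = model.fit_transform(X)       # (pixels x components): abundance maps
H = model.components_            # (components x bands): recovered spectra

# Area statistics follow from the dominant component per pixel:
dominant = W.argmax(axis=1)
pixel_area_ha = 0.09             # e.g., 30 m x 30 m pixels (assumed)
for k in range(3):
    print(f"class {k}: {np.sum(dominant == k) * pixel_area_ha:.1f} ha")
```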
Procedia PDF Downloads 423
1175 Integrated Free Space Optical Communication and Optical Sensor Network System with Artificial Intelligence Techniques
Authors: Yibeltal Chanie Manie, Zebider Asire Munyelet
Abstract:
5G and 6G technology offers an enhanced quality of service with high data transmission rates, which necessitates the implementation of the Internet of Things (IoT) in the 5G/6G architecture. In this paper, we propose the integration of free space optical communication (FSO) with fiber sensor networks for IoT applications. Recently, free-space optical communications (FSO) have been gaining popularity as an effective alternative technology to the limited availability of radio frequency (RF) spectrum, owing to their flexibility, high achievable optical bandwidth, and low power consumption in several communication applications, such as disaster recovery, last-mile connectivity, drones, surveillance, backhaul, and satellite communications. Hence, high-speed FSO is an optimal choice for wireless networks to satisfy the full potential of 5G/6G technology, offering speeds of 100 Gbit/s or more in IoT applications. Moreover, machine learning must be integrated into the design, planning, and optimization of future optical wireless communication networks in order to actualize this vision of intelligent processing and operation. In addition, fiber sensors are important to achieve real-time, accurate, and smart monitoring in IoT applications. Finally, we propose deep learning techniques to estimate the strain changes and peak wavelengths of multiple Fiber Bragg Grating (FBG) sensors using only the FBG spectrum obtained from a real experiment.
Keywords: optical sensor, artificial intelligence, Internet of Things, free-space optics
Procedia PDF Downloads 63
1174 Fast Return Path Planning for Agricultural Autonomous Terrestrial Robot in a Known Field
Authors: Carlo Cernicchiaro, Pedro D. Gaspar, Martim L. Aguiar
Abstract:
The agricultural sector is becoming more critical than ever in view of the expected overpopulation of the Earth. The introduction of robotic solutions in this field is an increasingly researched topic aimed at making the most of the Earth's resources, avoiding the wear and tear of the human body caused by harsh agricultural work, and opening the possibility of constant, careful processing 24 hours a day. This project was realized for a terrestrial autonomous robot designed to navigate an orchard, collecting fallen peaches below the trees. When it receives the signal indicating a low battery, it has to return to the docking station, where it replaces its battery, and then return to the last work point to resume its routine. The robot follows a preset path through orchard tree rows of variable length, iterating over it using the D* algorithm. In case of a low battery, the D* algorithm is also used to determine the fastest return path to the docking station, as well as the fastest way back from the docking station to the last work point. MATLAB simulations were performed to analyze the flexibility and adaptability of the developed algorithm. The simulation results show enormous potential for adaptability, particularly in view of the irregularity of the orchard field, since it is not flat and undergoes modifications over time from fallen branches as well as from other obstacles and constraints; the D* algorithm determines the best route in spite of the irregularity of the terrain. Moreover, in this work, a possible solution is shown to improve the tracking of initial points and reduce the time between movements.
Keywords: path planning, fastest return path, agricultural autonomous terrestrial robot, docking station
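D* extends A* with incremental replanning when the map changes; this short A* sketch shows the underlying grid search one might use for the fastest return path, on a toy orchard grid (the layout and the Python implementation are illustrative, not the project's MATLAB code):

```python
import heapq

def astar(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no route to the dock

# 0 = free aisle between tree rows (1s); robot returns to the dock at (0, 0)
orchard = [[0, 1, 0, 1, 0],
           [0, 1, 0, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 0, 1, 0]]
print(astar(orchard, start=(3, 4), goal=(0, 0)))
```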
Procedia PDF Downloads 134
1173 Comparative Study of the Effects of Process Parameters on the Yield of Oil from Melon Seed (Cococynthis citrullus) and Coconut Fruit (Cocos nucifera)
Authors: Ndidi F. Amulu, Patrick E. Amulu, Gordian O. Mbah, Callistus N. Ude
Abstract:
A comparative analysis of the properties of melon seed and coconut fruit and their oil yields was carried out in this work using standard AOAC analytical techniques. The results of the analysis revealed that the moisture contents of the samples studied are 11.15% (melon) and 7.59% (coconut), and the crude lipid contents are 46.10% (melon) and 55.15% (coconut). The treatment combinations used (leaching time, leaching temperature, and solute:solvent ratio) showed a significant difference (p < 0.05) in yield between the samples, with melon seed flour having a higher percentage range of oil yield (41.30-52.90%) than coconut (36.25-49.83%). Physical characterization of the extracted oils was also carried out. The values obtained for the refractive index are 1.487 (melon seed oil) and 1.361 (coconut oil), and the viscosities are 0.008 (melon seed oil) and 0.002 (coconut oil). Chemical analysis of the extracted oils shows acid values of 1.00 mg NaOH/g oil (melon oil) and 10.050 mg NaOH/g oil (coconut oil), and saponification values of 187.00 mg/KOH (melon oil) and 183.26 mg/KOH (coconut oil). The iodine value of the melon oil was 75.00 mg I2/g and that of the coconut oil 81.00 mg I2/g. The standard statistical package Minitab version 16.0 was used for the regression analysis and the analysis of variance (ANOVA), and the same software was used to optimize the leaching process. Both samples gave their highest oil yields at the same optimal conditions: ≥ 52% (melon seed) and ≥ 48% (coconut) at a solute-solvent ratio of 40 g/ml, a leaching time of 2 hours, and a leaching temperature of 50 °C. The two samples studied have the potential to yield oil, with melon seed giving the higher yield.
Keywords: coconut, melon, optimization, processing
Procedia PDF Downloads 442