Search results for: circular metrics

302 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kumar Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visual image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (infrared) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, image fusion algorithms based on multi-scale transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the validity of the suggested method. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion approaches, we observed several challenges with popular image fusion methods: although their high computational cost and complex processing steps provide accurate fused results, they are hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capability. The methods presented in this paper therefore offer good results with minimum time complexity.
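
As a rough illustration of the multi-scale-transform fusion described above, the sketch below fuses two grayscale source images with a 2-D discrete wavelet transform and a maximum-absolute-coefficient rule for the detail bands. The wavelet, the level count, and the omission of the region-based consistency-verification step are all assumptions for brevity, not the authors' actual MATLAB implementation.

```python
# Hedged sketch: MST-style fusion via PyWavelets; parameters are placeholders.
import numpy as np
import pywt

def fuse_dwt(visible, infrared, wavelet="db2", levels=3):
    """Fuse two same-sized grayscale images coefficient by coefficient."""
    c_vis = pywt.wavedec2(visible, wavelet, level=levels)
    c_ir = pywt.wavedec2(infrared, wavelet, level=levels)
    fused = [(c_vis[0] + c_ir[0]) / 2.0]  # average the approximation band
    for band_vis, band_ir in zip(c_vis[1:], c_ir[1:]):
        # detail bands: keep whichever coefficient has the larger magnitude
        fused.append(tuple(np.where(np.abs(v) >= np.abs(i), v, i)
                           for v, i in zip(band_vis, band_ir)))
    return pywt.waverec2(fused, wavelet)

vis = np.random.default_rng(0).random((128, 128))   # placeholder images
ir = np.random.default_rng(1).random((128, 128))
print(fuse_dwt(vis, ir).shape)                      # (128, 128)
```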

Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform

Procedia PDF Downloads 110
301 Establishing Combustion Behaviour for Refuse Derived Fuel Firing at Kiln Inlet through Computational Fluid Dynamics at a Cement Plant in India

Authors: Prateek Sharma, Venkata Ramachandrarao Maddali, Kapil Kukreja, B. N. Mohapatra

Abstract:

Waste management is one of the pressing issues of India. Several initiatives by the Indian Government, including the recent “Swachhata hi Seva” campaign launched by the Prime Minister on 15th August 2018, can be game changers for waste disposal. Under this initiative, the government, the cement industry, and other stakeholders are working hand in hand to dispose of single-use plastics in the rotary kilns of cement plants. This is an exemplary effort and a move that establishes the Indian cement industry as one of the key players in a circular economy. One of the cement plants in Southern India has been mandated by the state government to co-process shredded plastic and refuse-derived fuel (RDF) available in nearby regions as an alternative fuel. The plant has set a target of a 25% thermal substitution rate (TSR) by RDF in the next five years. Most cement plants in India and abroad have achieved high TSR through precalciner firing, but this plant does not have a precalciner and has to achieve the daunting task of 25% TSR by firing through the main kiln burner. Since RDF is a heterogeneous waste with varying fuel quality, this is difficult; hence, the plant has to resort to firing some portion of the RDF/plastics at the kiln inlet. However, the kiln inlet has reducing conditions (as observed during measurements under the baseline condition). The combustion behaviour of RDF of different sizes at different firing locations in the riser was studied with the help of a computational fluid dynamics tool. It is concluded that RDF above 50 mm in size results in incomplete combustion leading to CO formation, and that the best firing location appears to be in the bottom portion of the kiln riser.

Keywords: kiln inlet, plastics, refuse derived fuel, thermal substitution rate

Procedia PDF Downloads 121
300 Saving Energy through Scalable Architecture

Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala

Abstract:

In this paper, we focus on the importance of scalable architecture for data centers and buildings in general to help an enterprise achieve environmental sustainability. A scalable architecture helps in many ways: it adapts to business and user requirements and promotes high-availability and disaster recovery solutions that are cost effective and low maintenance. Scalable architecture also plays a vital role in the three core areas of sustainability: economy, environment, and social, also known as the three pillars of a sustainability model. If the architecture is scalable, it has many advantages; for example, it helps businesses and industries adapt to changing technology, drive innovation, promote platform independence, and build resilience against natural disasters. Most importantly, a scalable architecture helps industries bring in cost-effective measures for energy consumption, reduce wastage, increase productivity, and enable a robust environment. It also helps reduce carbon emissions through advanced monitoring and metering capabilities. Scalable architectures help reduce waste by optimizing designs to utilize materials efficiently, minimize resource use, and decrease carbon footprints by using low-impact, environmentally friendly materials. In this paper, we also emphasize the importance of a cultural shift towards the reuse and recycling of natural resources to maintain a balanced ecosystem and a circular economy. Since all of us are involved in the use of computers, much of the scalable architecture we have studied is related to data centers.

Keywords: scalable architectures, sustainability, application design, disruptive technology, machine learning and natural language processing, AI, social media platform, cloud computing, advanced networking and storage devices, advanced monitoring and metering infrastructure, climate change

Procedia PDF Downloads 97
299 Investigating the Effectiveness of Multilingual NLP Models for Sentiment Analysis

Authors: Othmane Touri, Sanaa El Filali, El Habib Benlahmar

Abstract:

Natural Language Processing (NLP) has gained significant attention lately and has proved its ability to analyze and extract insights from unstructured text data in various languages. One of the most popular NLP applications is sentiment analysis, which aims to identify the sentiment expressed in a piece of text, such as positive, negative, or neutral, in multiple languages. While several multilingual NLP models are available for sentiment analysis, there is a need to investigate their effectiveness in different contexts and applications. In this study, we investigate the effectiveness of different multilingual NLP models for sentiment analysis on a dataset of online product reviews in multiple languages. The performance of several NLP models, including Google Cloud Natural Language API, Microsoft Azure Cognitive Services, Amazon Comprehend, Stanford CoreNLP, spaCy, and Hugging Face Transformers, is compared. The models are evaluated on several metrics, including accuracy, precision, recall, and F1 score, and their performance is compared across different categories of product reviews. To run the study, the dataset was preprocessed by cleaning and tokenizing the text data in multiple languages. Each model was then trained and tested using a cross-validation approach, randomly dividing the dataset into training and testing sets and repeating the process multiple times. A grid search was applied to optimize the hyperparameters of each model and select the best-performing model for each category of product reviews and each language. The findings of this study provide insights into the effectiveness of different multilingual NLP models for multilingual sentiment analysis and their suitability for different languages and applications. The strengths and limitations of each model were identified, and recommendations are provided for selecting the most performant model based on the specific requirements of a project. This study contributes to the advancement of research methods in multilingual NLP and provides a practical guide for researchers and practitioners in the field.
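
A minimal sketch of the evaluation protocol described above follows: repeated random train/test splits combined with a grid search per model. The corpus, pipeline, and parameter grid are illustrative stand-ins (the study's hosted-API models cannot be grid-searched locally), not the study's actual setup.

```python
# Hedged sketch: repeated-split cross-validation plus grid search.
from sklearn.datasets import fetch_20newsgroups  # placeholder corpus
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, ShuffleSplit
from sklearn.pipeline import Pipeline

pipe = Pipeline([("tfidf", TfidfVectorizer()),
                 ("clf", LogisticRegression(max_iter=1000))])
grid = {"clf__C": [0.1, 1.0, 10.0]}  # hyperparameters to tune
# Randomly divide into training and testing sets and repeat several times.
cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
search = GridSearchCV(pipe, grid, cv=cv, scoring="f1_macro")
data = fetch_20newsgroups(subset="train",
                          categories=["sci.med", "sci.space"])
search.fit(data.data, data.target)
print(search.best_params_, round(search.best_score_, 3))
```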

Keywords: NLP, multilingual, sentiment analysis, texts

Procedia PDF Downloads 95
298 Influence of Hydrophobic Surface on Flow Past Square Cylinder

Authors: S. Ajith Kumar, Vaisakh S. Rajan

Abstract:

In external flows, vortex shedding behind bluff bodies causes a large number of engineering structures to experience unsteady loads, which can result in structural failure. Vortex shedding can even turn out to be disastrous, as in the Tacoma Bridge failure incident. We need to control vortex shedding to avoid this untoward condition by reducing the unsteady forces acting on the bluff body. On circular cylinders, a hydrophobic surface in an otherwise no-slip surface is found to delay separation and drastically minimize the effects of vortex shedding. Flow over a square cylinder behaves differently, as separation can take place from either of the two corner separation points (front or rear). An attempt is made in this study to numerically elucidate the effect of a hydrophobic surface in flow over a square cylinder. A 2D numerical simulation has been carried out to understand the effects of the slip surface on the flow past a square cylinder. The details of the numerical algorithm will be presented at the time of the conference. A non-dimensional parameter, the Knudsen number, is defined to quantify the slip on the cylinder surface based on Maxwell’s equation. The slip condition of the wall affects the vorticity distribution around the cylinder and the flow separation. In the numerical analysis, we observed that the hydrophobic surface enhances the shedding frequency and damps down the amplitude of oscillations of the square cylinder. We also found that the slip reduces aerodynamic force coefficients such as the coefficient of lift (CL) and the coefficient of drag (CD); hence, replacing the no-slip surface with a hydrophobic surface can be treated as an effective drag reduction strategy. The introduction of a hydrophobic surface is also found to be an effective method for reducing vortex-induced vibrations (VIV), thereby controlling the associated structural failures.
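
For reference, Maxwell's slip relation mentioned above is commonly written in the following first-order form (assuming full tangential momentum accommodation; the paper's exact formulation is not given in the abstract):

```latex
u_s \;=\; \lambda \left.\frac{\partial u}{\partial n}\right|_{\mathrm{wall}}
   \;=\; \mathrm{Kn}\, L \left.\frac{\partial u}{\partial n}\right|_{\mathrm{wall}},
\qquad \mathrm{Kn} = \frac{\lambda}{L},
```

where u_s is the slip velocity at the wall, λ the slip length, n the wall-normal direction, and L a characteristic length such as the cylinder side.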

Keywords: drag reduction, flow past square cylinder, flow control, hydrophobic surfaces, vortex shedding

Procedia PDF Downloads 369
297 Data Mining Model for Predicting the Status of HIV Patients during Drug Regimen Change

Authors: Ermias A. Tegegn, Million Meshesha

Abstract:

Human Immunodeficiency Virus and Acquired Immunodeficiency Syndrome (HIV/AIDS) is a major cause of death in most African countries, and Ethiopia is one of the seriously affected countries in sub-Saharan Africa. Previously in Ethiopia, having HIV/AIDS was almost equivalent to a death sentence. With the introduction of Antiretroviral Therapy (ART), HIV/AIDS has become a chronic but manageable disease. This study applies a data mining technique to predict the future living status of HIV/AIDS patients at the time of drug regimen change, when patients develop toxicity to their current ART drug combination. The data are taken from the University of Gondar Hospital ART program database. A hybrid methodology is followed to explore the application of data mining to the ART program dataset. Data cleaning, handling of missing values, and data transformation were used to preprocess the data. The WEKA 3.7.9 data mining tool, classification algorithms, and domain expertise were utilized as means to address the research problem. Using four different classification algorithms (J48 classifier, PART rule induction, Naïve Bayes, and neural network) and adjusting their parameters, thirty-two models were built on the preprocessed University of Gondar ART program dataset. The performance of the models was evaluated using the standard metrics of accuracy, precision, recall, and F-measure. The most effective model for predicting the status of HIV patients with drug regimen substitution is a pruned J48 decision tree with a classification accuracy of 98.01%. This study extracts relevant attributes such as ever taking Cotrim, ever taking TbRx, CD4 count, age, weight, and gender in order to predict the status at drug regimen substitution. The outcome of this study can be used as an assistant tool for clinicians to help them make more appropriate drug regimen substitutions. Future research directions are suggested to develop an applicable system in the area of the study.
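
As a hedged illustration of the modelling step, the sketch below trains a pruned decision tree and reports the same four metrics; scikit-learn's CART classifier stands in for WEKA's J48, and the data are synthetic placeholders, not the University of Gondar ART dataset.

```python
# Hedged sketch: pruned decision tree evaluated on standard metrics.
from sklearn.datasets import make_classification
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
# ccp_alpha > 0 turns on cost-complexity pruning, loosely analogous to
# the pruning applied to the J48 tree in the study.
tree = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0)
tree.fit(X_tr, y_tr)
pred = tree.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred),
      "precision:", precision_score(y_te, pred),
      "recall:", recall_score(y_te, pred),
      "F-measure:", f1_score(y_te, pred))
```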

Keywords: HIV drug regimen, data mining, hybrid methodology, predictive model

Procedia PDF Downloads 140
296 Safeguarding Product Quality through Pre-Qualification of Material Manufacturers: A Ship and Offshore Classification Society's Perspective

Authors: Sastry Y. Kandukuri, Isak Andersen

Abstract:

Despite recent advances in the manufacturing sector, quality issues remain a frequent occurrence and can result in fatal accidents, equipment downtime, and loss of life. Adequate quality is of high importance in high-risk industries such as sea-going vessels and offshore installations, in which third-party quality assurance and product control play an essential role in ensuring the manufacturing quality of critical components. Classification societies play a vital role in mitigating risk in these industries by making sure that all stakeholders, i.e., manufacturers, builders, and end users, are provided with adequate rules and standards that effectively ensure components are produced at a high level of quality, based on the area of application and the risk of failure. Quality issues have also been linked to a lack of competence or negligence of stakeholders in the supply value chain. However, continued actions and regulatory reforms through the modernization of rules and requirements have provided additional tools for purchasers and manufacturers to confront these issues. Included among these tools are updated ‘approval of manufacturer’ class programs aimed at developing and implementing a set of standardized manufacturing quality metrics for use by the manufacturer and verified by the classification society. The establishment and collection of manufacturing and testing requirements described in these programs could provide various stakeholders, from industry to vessel owners, with greater insight into the state of quality at a given manufacturing facility, and allow stakeholders to better anticipate and address quality issues while simultaneously reducing unnecessary failures that are costly to the industry. This publication introduces, explains, and discusses the critical manufacturing and testing requirements set in a leading classification society’s approval of manufacturer regime, its rationale, and some case studies.

Keywords: classification society, manufacturing, materials processing, materials testing, quality control

Procedia PDF Downloads 350
295 The Detection of Implanted Radioactive Seeds on Ultrasound Images Using Convolution Neural Networks

Authors: Edward Holupka, John Rossman, Tye Morancy, Joseph Aronovitz, Irving Kaplan

Abstract:

A common modality for the treatment of early stage prostate cancer is the implantation of radioactive seeds directly into the prostate. The radioactive seeds are positioned inside the prostate, using transrectal ultrasound imaging, to achieve optimal radiation dose coverage of the prostate. Once all of the planned seeds have been implanted, two-dimensional transaxial transrectal ultrasound images separated by 2 mm are obtained throughout the prostate, beginning at the base of the prostate up to and including the apex. A common deep neural network, called DetectNet, was trained to automatically determine the position of the implanted radioactive seeds within the prostate under ultrasound imaging. The network was trained using 950 training ultrasound images and 90 validation ultrasound images. The commonly used metrics for successful training were used to evaluate the efficacy and accuracy of the trained deep neural network and resulted in a loss_bbox (train) = 0.00, loss_coverage (train) = 1.89e-8, loss_bbox (validation) = 11.84, loss_coverage (validation) = 9.70, mAP (validation) = 66.87%, precision (validation) = 81.07%, and recall (validation) = 82.29%, where train refers to the training image set and validation refers to the validation image set. On the hardware platform used, the training expended 12.8 seconds per epoch, and the network was trained for over 10,000 epochs. In addition, the seed locations as determined by the deep neural network were compared to the seed locations as determined by commercial software based on a CT acquired one to three months after implant. The deep learning approach was within 2.29 mm of the seed locations determined by the commercial software. The deep learning approach to the determination of radioactive seed locations is robust, accurate, and fast, and in close spatial agreement with the gold standard of CT-determined seed coordinates.

Keywords: prostate, deep neural network, seed implant, ultrasound

Procedia PDF Downloads 194
294 Cold Formed Steel Sections: Analysis, Design and Applications

Authors: A. Saha Chaudhuri, D. Sarkar

Abstract:

In steel construction, there are two families of structural members: hot rolled steel and cold formed steel. Cold formed steel sections include steel sheet, strip, plate, or flat bar, and are manufactured in a roll forming machine or by press brake or bending operations. Cold formed steel (CFS) is also known as light gauge steel (LGS). As cold formed steel is a sustainable material, it is widely used in green building: it can be recycled and reused with no degradation in structural properties, and cold formed steel structures can earn credits for green building ratings such as LEED and similar programs. Cold formed steel construction satisfies the international demand for better, more efficient, and affordable buildings. Cold formed steel sections are used in buildings, car bodies, railway coaches, various types of equipment, storage racks, grain bins, highway products, transmission towers, transmission poles, drainage facilities, bridge construction, etc. Various shapes of cold formed steel sections are available, such as C sections, Z sections, I sections, T sections, angle sections, hat sections, box sections, square hollow sections (SHS), rectangular hollow sections (RHS), circular hollow sections (CHS), etc. In building construction, cold formed steel is used for eave struts, purlins, girts, studs, headers, floor joists, braces, diaphragms, and covering for roofs, walls, and floors. Cold formed steel has a high strength-to-weight ratio and high stiffness; it is non-shrinking and non-creeping at ambient temperature, termite proof, and rot proof. CFS is a durable, dimensionally stable, and non-combustible material, and it is economical in transportation and handling. At present, cold formed steel has become a competitive building material. In this paper, present research work related to all these applications is described, and the use of CFS as a blast resistant structural system is examined.

Keywords: cold form steel sections, applications, present research review, blast resistant design

Procedia PDF Downloads 144
293 Phase Composition Analysis of Ternary Alloy Materials for Gas Turbine Applications

Authors: Mayandi Ramanathan

Abstract:

Gas turbine blades see the most aggressive thermal stress conditions within the engine due to turbine entry temperatures in the range of 1500 to 1600°C. The blades rotate at very high rotation rates and remove a significant amount of thermal power from the gas stream. At high temperatures, the major component failure mechanism is creep: during service under high thermal loads, the blade will deform, lengthen, and eventually rupture. High strength and stiffness in the longitudinal direction up to elevated service temperatures are certainly the most needed properties of turbine blades and gas turbine components. The proposed advanced Ti alloy material needs a process that provides a strategic orientation of metallic ordering, uniformity in composition, and high metallic strength. The chemical composition of the proposed Ti alloy material (25% Ta/(Al+Ta) ratio), unlike Ti-47Al-2Cr-2Nb, has less excess Al that could limit the service life of turbine blades. The properties and performance of Ti-47Al-2Cr-2Nb and Ti-6Al-4V will be compared with those of the proposed Ti alloy material to generalize the performance metrics of various gas turbine components. This paper summarizes the effects of additive manufacturing and heat treatment process conditions on the changes in phase composition, grain structure, lattice structure, tensile strength, creep strain rate, thermal expansion coefficient, and fracture toughness at different temperatures. Based on these results, additive manufacturing and heat treatment process conditions will be optimized to fabricate turbine blades with a Ti-43Al matrix alloyed with an optimized amount of refractory Ta metal. Improvement in the service temperature of the turbine blades and the dependence of corrosion resistance on the coercivity of the alloy material will be reported. A correlation of phase composition and creep strain rate will also be discussed.

Keywords: high temperature materials, aerospace, specific strength, creep strain, phase composition

Procedia PDF Downloads 113
292 Numerical Study of a Ventilation Principle Based on Flow Pulsations

Authors: Amir Sattari, Mac Panah, Naeim Rashidfarokhi

Abstract:

To enhance the mixing of fluid in a rectangular enclosure with a circular inlet and outlet, an energy-efficient approach is further investigated through computational fluid dynamics (CFD). Particle image velocimetry (PIV) measurements confirm that pulsation of the inflow velocity improves the mixing performance inside the enclosure considerably without increasing energy consumption. In this study, multiple CFD simulations with different turbulence models were performed, and the results obtained were compared with experimental PIV results. The study investigates small-scale representations of flow patterns in a ventilated rectangular room. The objective is to validate the concept of an energy-efficient ventilation strategy with improved thermal comfort and reduced stagnant air inside the room. Experimental and simulated results confirm that, through pulsation of the inflow velocity, strong secondary vortices are generated downstream of the entrance wall-jet. The pulsatile inflow profile promotes a periodic generation of vortices with stronger eddies despite a relatively low inlet velocity, which leads to a larger boundary layer with increased kinetic energy in the occupied zone. A real-scale study was not conducted; however, it can be concluded that a constant-velocity inflow profile can be replaced with a lower pulsated flow rate profile while preserving the mixing efficiency. Among the turbulence models demonstrated in this study, SST k-ω is the most advantageous, exhibiting a similar global airflow pattern as in the experiments. The detailed near-wall velocity profile is utilized to identify the wall-jet instabilities that consist of mixing and boundary layers. The SAS method was later applied to predict the turbulent parameters in the center of the domain. In both cases, the predictions are in good agreement with the measured results.
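
As a small illustration, a pulsatile inflow of the kind compared against a constant inflow above can be prescribed as a sinusoidally modulated mean velocity; the mean, amplitude, and frequency below are assumed placeholders, not the experiment's settings.

```python
# Hedged sketch: pulsed inlet velocity profile for a CFD boundary condition.
import numpy as np

def inlet_velocity(t, u_mean=1.0, amplitude=0.5, freq_hz=2.0):
    """Mean inflow velocity modulated by a sinusoidal pulsation."""
    return u_mean * (1.0 + amplitude * np.sin(2.0 * np.pi * freq_hz * t))

print(inlet_velocity(0.125))  # velocity at t = 0.125 s
```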

Keywords: CFD, PIV, pulsatile inflow, ventilation, wall-jet

Procedia PDF Downloads 172
291 Harvesting Energy from Lightning Strikes

Authors: Vaishakh Medikeri

Abstract:

Lightning, the marvelous, spectacular, and awesome truth of nature, is one of the greatest energy sources left unharnessed through the ages. A single bolt of lightning contains about 15 billion joules of energy. This huge amount of energy cannot be harnessed completely, but it can be harnessed partially. This paper proposes to harness the energy from lightning strikes. Across the globe, the frequency of lightning is 40-50 flashes per second, totalling about 1.4 billion flashes per year, each carrying an average energy of about 15 billion joules. When a lightning bolt strikes the ground, a tremendous amount of energy is transferred to the earth, which propagates in the form of concentric circular energy waves with a frequency of about 7.83 Hz. Harvesting the lightning bolt directly seems impossible, but harvesting the energy waves produced by the lightning is considerably easier. This can be done using a tricoil energy harnesser, a new device which I have invented. A lightning bolt seeks the path of minimum resistance down to the earth; for this, a lightning rod about 100 meters high can be erected and attached to the tricoil energy harnesser. The tricoil energy harnesser contains three coils whose centers are collinear and which all lie parallel to the ground. The first coil has one end connected to the lightning rod and the other end grounded. A secondary coil is wound on the first coil, with one end grounded and the other end pointing towards the ground, left unconnected, and placed slightly above the ground so that this end of the coil produces more intense currents, hence producing intense energy waves. The first coil produces very high magnetic fields and induces them in the second and third coils; along with these induced magnetic fields, the energy waves, which are currents, also flow through the second and third coils. The second and third coils are connected to a generator, which in turn is connected to a capacitor that stores the electrical energy. The first coil is placed in the middle of the second and the third coil. The stored energy can be used for the transmission of electricity. This new technique of harnessing lightning strikes would be most efficient in places with a higher probability of lightning strikes. Since the lightning rod used is sufficiently tall, the probability of cloud-to-ground strikes is increased. If the proposed apparatus is implemented, it would be a great source of pure and clean energy.
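
As a quick arithmetic check, the global figures quoted above are mutually consistent:

```latex
45\ \tfrac{\text{flashes}}{\text{s}} \times 3.15\times10^{7}\ \tfrac{\text{s}}{\text{yr}}
\;\approx\; 1.4\times10^{9}\ \tfrac{\text{flashes}}{\text{yr}},
\qquad
1.4\times10^{9} \times 1.5\times10^{10}\ \text{J}
\;\approx\; 2.1\times10^{19}\ \text{J per year}.
```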

Keywords: generator, lightning rod, tricoil energy harnesser, harvesting energy

Procedia PDF Downloads 376
290 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach

Authors: James Ladzekpo

Abstract:

Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-nearest neighbors (k-NN), Naive Bayes, random forest, gradient boosting, and a multilayer perceptron neural network, to evaluate their performance. Findings: The gradient boosting classifier excelled, achieving a median recall of 92.17%, a median area under the receiver operating characteristic curve (AUC) of 68%, and median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the gradient boosting classifier and the multilayer perceptron neural network, which demonstrated potential in diabetes outcome prediction. We recommend that future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
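
A minimal sketch of the feature-selection-plus-modelling pipeline described above follows: SVM-based recursive feature elimination with cross-validation, then a gradient boosting classifier scored on recall. The synthetic data and parameters are placeholders, not the study's epigenetic profiles.

```python
# Hedged sketch: RFECV feature selection feeding a gradient boosting model.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=100,
                           n_informative=31, random_state=0)
# A linear-kernel SVM exposes coef_, which RFECV needs to rank features.
selector = RFECV(SVC(kernel="linear"), step=5, cv=5).fit(X, y)
X_sel = selector.transform(X)  # keep only the selected features
model = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(model, X_sel, y, cv=5, scoring="recall")
print("selected features:", selector.n_features_,
      "mean CV recall:", scores.mean())
```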

Keywords: diabetes, machine learning, prediction, biomarkers

Procedia PDF Downloads 51
289 A Comparative Study of the Proposed Models for the Components of the National Health Information System

Authors: M. Ahmadi, Sh. Damanabi, F. Sadoughi

Abstract:

A National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, by using the National Health Information System, one can improve the quality of health data, information, and knowledge used to support decision making at all levels and areas of the health sector. Since full identification of the components of this system seems necessary for better planning and for managing the factors that influence its performance, this study comparatively explores different attitudes towards the components of this system. Methods: This is a descriptive, comparative study. The study material includes printed and electronic documents covering the components of the national health information system in three parts: input, process, and output. In this context, searches using library resources and the internet were conducted, and the data analysis was expressed using comparative tables and qualitative data. Results: The findings showed that there are three different perspectives presenting the components of a national health information system: the Lippeveld, Sauerborn, and Bodart model of 2000, the Health Metrics Network (HMN) model from the World Health Organization of 2008, and Gattini’s model of 2009. In the input section (resources and structure), all three models outlined above require components of management and leadership, planning and design of programs, supply of staff, software and hardware facilities, and equipment. In the process section of all three models, we pointed out the actions ensuring the quality of the health information system, and in the output section, the two models other than the Lippeveld model consider information products and the usage and distribution of information as components of the national health information system. Conclusion: The results showed that all three models briefly discuss the components of health information in the input section. However, the Lippeveld model has overlooked the components of national health information in the process and output sections. Therefore, it seems that the Health Metrics Network model offers a comprehensive presentation of the components of the health system in all three sections: input, process, and output.

Keywords: National Health Information System, components of the NHIS, Lippeveld Model

Procedia PDF Downloads 419
288 Structural and Functional Characterization of the Transcriptional Regulator Rv1176 of Mycobacterium tuberculosis H37Rv

Authors: Vikash Yadav, Ashish Arora

Abstract:

Microorganisms have self-defense mechanisms to protect themselves from toxic environments. Phenolic acid decarboxylase (PAD) is responsible for the defense against toxicity caused by phenolic acids, converting them into less toxic vinyl derivatives. The transcription of the pad gene is regulated by a negative transcription factor, the phenolic acid decarboxylase regulator (PadR), in a substrate-inducible manner. The PadR family members share conserved DNA-binding features and interact with the operator DNA using a winged helix-turn-helix (wHTH) motif, which contains a three-helix motif and a β-stranded wing. The members of this family function as transcriptional regulators involved in various cellular survival processes, such as toxin production, detoxification, multidrug resistance, antibiotic biosynthesis, and carbon catabolism. Rv1176 of Mycobacterium tuberculosis H37Rv has been assigned to the PadR family but remains structurally and functionally uncharacterized. To reveal the structural mechanism by which Rv1176 could regulate effector-responsive transcription, several experiments were performed, including electrophoretic mobility shift assay (EMSA) for DNA-protein interaction, differential scanning calorimetry (DSC) and differential scanning fluorimetry (DSF) for temperature- and ligand-dependent protein stability, and circular dichroism (CD) spectroscopy for secondary structure analysis. Further, to evaluate the functional role of Rv1176, the intracellular survival of recombinant M. smegmatis was examined in the murine macrophage cell line J774A.1 and under different stress conditions, such as oxidative, pH, and nutritive stress. All these studies demonstrated that Rv1176 could behave as a transcriptional regulator and that its expression in recombinant M. smegmatis increases intracellular survival.

Keywords: EMSA, Mycobacterium tuberculosis, PadR family protein, transcriptional regulator

Procedia PDF Downloads 73
287 Modelling of Exothermic Reactions during Carbon Fibre Manufacturing and Coupling to Surrounding Airflow

Authors: Musa Akdere, Gunnar Seide, Thomas Gries

Abstract:

Carbon fibres are fibrous materials with a carbon atom content of more than 90%. They combine excellent mechanical properties with a very low density; thus, carbon fibre reinforced plastics (CFRP) are very often used in lightweight design and construction. The precursor material is usually polyacrylonitrile (PAN) based and wet-spun. During the production of carbon fibre, the precursor has to be stabilized thermally to withstand the high temperatures of up to 1500 °C which occur during carbonization. Even though carbon fibre has been used in aerospace applications since the late 1970s, there is still no general method available to find the optimal production parameters, and the trial-and-error approach is most often the only resort. To gain much better insight into the process, the chemical reactions during stabilization have to be analyzed in detail. Therefore, a model of the chemical reactions (cyclization, dehydration, and oxidation) based on the research of Dunham and Edie has been developed. With the presented model, it is possible to perform a complete simulation of the fibre undergoing all zones of stabilization. The fibre bundle is modeled as several circular fibres with a layer of air in between. Two thermal mechanisms are considered to be the most important: the exothermic reactions inside the fibre and the convective heat transfer between the fibre and the air. The exothermic reactions inside the fibres are modeled as a heat source, and differential scanning calorimetry measurements have been performed to estimate the heat of the reactions. To shorten the required simulation time, the number of fibres is decreased using similitude theory. Experiments were conducted on a pilot-scale stabilization oven to validate the simulated fibre temperatures during stabilization, and a new method was developed to measure the fibre bundle temperature. The comparison of the results shows that the developed simulation model gives good approximations of the temperature profile of the fibre bundle during the stabilization process.
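
As a toy illustration of the two coupled mechanisms named above, the sketch below integrates a lumped heat balance for a single fibre element with an exothermic source term and convective exchange with the surrounding air; all coefficients are invented placeholders, while the actual model resolves the cyclization, dehydration, and oxidation kinetics following Dunham and Edie.

```python
# Hedged sketch: lumped fibre heat balance, reaction source + convection.
from scipy.integrate import solve_ivp

def fibre_temperature(t, T, q_reaction=5e-4, h=120.0, area=1e-6,
                      T_air=520.0, mass=1e-6, cp=1200.0):
    # dT/dt = (reaction heat - convective loss to air) / thermal mass
    return [(q_reaction - h * area * (T[0] - T_air)) / (mass * cp)]

sol = solve_ivp(fibre_temperature, (0.0, 60.0), [300.0], max_step=0.1)
print("final fibre temperature [K]:", round(sol.y[0, -1], 1))
```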

Keywords: carbon fibre, coupled simulation, exothermic reactions, fibre-air-interface

Procedia PDF Downloads 269
286 Analysis of Biomarkers Intractable Epileptogenic Brain Networks with Independent Component Analysis and Deep Learning Algorithms: A Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients

Authors: Bliss Singhal

Abstract:

Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide and 1.2 million Americans. There exist millions of pediatric patients with intractable epilepsy, a condition in which seizures fail to come under control. The occurrence of seizures can result in physical injury, disorientation, unconsciousness, and additional symptoms that could impede children's ability to participate in everyday tasks. Predicting seizures can help parents and healthcare providers take precautions, prevent risky situations, and mentally prepare children to minimize the anxiety and nervousness associated with the uncertainty of a seizure. This research proposes a comprehensive framework to predict seizures in pediatric patients by evaluating machine learning algorithms on unimodal neuroimaging data consisting of electroencephalogram signals. Bandpass filtering and independent component analysis proved effective in reducing noise and artifacts in the dataset. The performance of various machine learning algorithms is evaluated on important metrics such as accuracy, precision, specificity, sensitivity, F1 score, and MCC. The results show that the deep learning algorithms are more successful in predicting seizures than logistic regression and k-nearest neighbors. The recurrent neural network (RNN) gave the highest precision and F1 score, long short-term memory (LSTM) outperformed RNN in accuracy, and the convolutional neural network (CNN) resulted in the highest specificity. This research has significant implications for healthcare providers in proactively managing seizure occurrence in pediatric patients, potentially transforming clinical practices and improving pediatric care.
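
The preprocessing chain named above can be sketched as a Butterworth bandpass filter followed by independent component analysis; the band, sampling rate, and synthetic signals below are assumptions, not the study's EEG recordings.

```python
# Hedged sketch: bandpass filtering + ICA on multichannel EEG-like data.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

fs = 256.0                                   # assumed sampling rate, Hz
eeg = np.random.default_rng(0).standard_normal((18, 2560))  # placeholder

b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, eeg, axis=1)       # zero-phase filter per channel

ica = FastICA(n_components=10, random_state=0)
sources = ica.fit_transform(filtered.T).T    # 10 independent components
print(sources.shape)                         # (10, 2560)
```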

Keywords: intractable epilepsy, seizure, deep learning, prediction, electroencephalogram channels

Procedia PDF Downloads 78
285 Personalization of Context Information Retrieval Model via User Search Behaviours for Ranking Document Relevance

Authors: Kehinde Agbele, Longe Olumide, Daniel Ekong, Dele Seluwa, Akintoye Onamade

Abstract:

One major problem of most existing information retrieval systems (IRS) is that they provide the same access and retrieval results to all users, based solely on the query terms the user issued to the system. When using an IRS, users often present search queries made of ad-hoc keywords, and it is then up to the IRS to obtain a precise representation of the user’s information need and the context of the information. In effect, the volume and range of Internet documents are growing exponentially, which makes it difficult for a user to obtain information that precisely matches the user’s interest. Diverse combination techniques are used to achieve this specific goal. This is due, firstly, to the fact that users often do not present queries to the IRS that optimally represent the information they want, and secondly, to the fact that the measure of a document's relevance is highly subjective across diverse users. In this paper, we address the problem by investigating the optimization of an IRS to individual information needs in order of relevance, and by developing algorithms that optimize the ranking of documents retrieved from the IRS. We take a two-fold approach to retrieving domain-specific documents. Firstly, the context of the information is designed: the context of a query determines the relevance of retrieved information using personalization and context-awareness, so executing the same query in diverse contexts often leads to diverse result rankings based on user preferences. Secondly, the relevant context aspects are incorporated in a way that supports the knowledge domain representing users’ interests. In this paper, the use of evolutionary algorithms is incorporated to improve the effectiveness of the IRS. A context-based information retrieval system that learns individual needs from user-provided relevance feedback is developed, and its retrieval effectiveness is evaluated using precision and recall metrics. The results demonstrate how attributes from user interaction behavior can be used to improve IR effectiveness.

Keywords: context, document relevance, information retrieval, personalization, user search behaviors

Procedia PDF Downloads 458
284 Recycling Broken Photovoltaic Cells into Anodes for Lithium-Ion Batteries Using Open-Source 3D Printing

Authors: Maryam Mottaghi, Joshua M. Pearce

Abstract:

The increasing volume of end-of-life photovoltaic (PV) cells presents a significant environmental challenge and offers an opportunity for resource recovery. This work explores the use of broken silicon PV cells as a sustainable source of silicon for the fabrication of anodes in lithium-ion (Li-ion) batteries. An open-source toolchain provides a low-cost and accessible method for 3D printing anode composites. The silicon used in PV cells has already undergone energy-intensive purification and processing, which makes its reuse in batteries a more resource-efficient approach. While silicon is abundant and offers potential for high-capacity anodes, it faces challenges such as low conductivity and significant volume changes during cycling, which can lead to mechanical degradation and reduced battery performance. In this work, silicon PV waste is first ground into particles smaller than 50 microns using an open-source ball mill. The silicon particles are mixed with a UV-curable resin on an open-source bottle roller to form a printable slurry. This slurry is used to fabricate an acrylate-silicon composite via stereolithography (SLA) 3D printing, which offers the advantage of high precision and the ability to create complex geometries that can enhance the performance of the anode. The printed parts are then pyrolyzed in an inert nitrogen atmosphere, which burns away the volatile components of the resin and leaves behind a carbon residue that enhances conductivity and helps alleviate silicon volume expansion during cycling. The results demonstrate the feasibility of using broken solar cells as battery anodes. This approach is a promising candidate for advancing recycling solutions, and the use of an open-source toolchain promotes resource recovery and shows potential for future developments in the circular economy within energy storage.

Keywords: recycling, silicon anode, Li-ion battery, 3D printing

Procedia PDF Downloads 15
283 FEM for Stress Reduction by Optimal Auxiliary Holes in a Loaded Plate with Elliptical Hole

Authors: Basavaraj R. Endigeri, S. G. Sarganachari

Abstract:

Steel is widely used in machine parts, structural equipment, and many other applications. In many steel structural elements, holes of different shapes and orientations are made with a view to satisfying the design requirements. The presence of holes in steel elements creates stress concentration, which eventually reduces the mechanical strength of the structure. Therefore, it is of great importance to investigate the state of stress around the holes for the safe and proper design of such elements. From a literature survey, it is known that, to date, there is no analytical solution for reducing the stress concentration by providing auxiliary holes of definite location and radii in a steel plate, so a numerical method must be used to determine the optimum location and radii of the auxiliary holes. In the present work, a steel plate with an elliptical hole subjected to uniaxial load is analyzed, and the effect of stress concentration is represented graphically. The introduction of auxiliary holes at an optimum location and radii, and its effect on the stress concentration, is also represented graphically. The finite element analysis package ANSYS 11.0 is used to analyze the steel plate, with the analysis carried out using PLANE42 elements. Further, the ANSYS optimization module is used to determine the location and radii of the auxiliary holes for optimum reduction of stress concentration. All the results for different hole diameter to plate width ratios are presented graphically, in the form of graphs of the stress concentration factor versus the central hole diameter to plate width ratio, for determining the locations and diameters of optimal auxiliary holes. The finite element results of the study indicate that the stress concentration effect of a central elliptical hole in a uniaxially loaded plate can be reduced by introducing auxiliary holes on either side of the central hole.
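
For context, the classical Inglis result gives the stress concentration at an elliptical hole in an infinite uniaxially loaded plate; the finite-width case with auxiliary holes studied here has no such closed form, which is what motivates the numerical optimization:

```latex
K_t \;=\; \frac{\sigma_{\max}}{\sigma_{\infty}} \;=\; 1 + \frac{2a}{b},
```

where a is the hole semi-axis perpendicular to the load and b the semi-axis parallel to it.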

Keywords: finite element method, optimization, stress concentration factor, auxiliary holes

Procedia PDF Downloads 449
282 Hypertension and Obesity: A Cross-National Comparison of BMI and Waist-Height Ratio

Authors: Adam M. Yates, Julie E. Byles

Abstract:

Hypertension has been identified as a prominent co-morbidity of obesity. To improve clinical intervention of hypertension, it is critical to identify metrics that most accurately reflect risk for increased morbidity. Two of the most relevant and accurate measures for increased risk of hypertension due to excess adipose tissue are Body Mass Index (BMI) and Waist-Height Ratio (WHtR). Previous research has examined these measures in cross-national and cross-ethnic studies, but has most often relied on secondary means such as meta-analysis to identify and evaluate the efficacy of individual body mass measures. In this study, we instead use cross-sectional analysis to assess the cross-ethnic discriminative power of BMI and WHtR to predict risk of hypertension. Using the WHO SAGE survey, which collected anthropometric and biometric data from respondents in six middle-income countries (China, Ghana, India, Mexico, Russia, South Africa), we implement logistic regression to examine the discriminative power of measured BMI and WHtR with a known population of hypertensive and non-hypertensive respondents. We control for gender and age to identify whether optimum cut-off points that are adequately sensitive as tests for risk of hypertension may be different between groups. We report results for OR, RR, and ROC curves for each of the six SAGE countries. As seen in existing literature, results demonstrate that both WHtR and BMI are significant predictors of hypertension (p < .01). For these six countries, we find that cut-off points for WHtR may be dependent upon gender, age and ethnicity. While an optimum omnibus cut-point for WHtR may be 0.55, results also suggest that the gender and age relationship with WHtR may warrant the development of individual cut-offs to optimize health outcomes. Trends through multiple countries show that the optimum cut-point for WHtR increases with age while the area under the curve (AUROC) decreases for both men and women. Comparison between BMI and WHtR indicate that BMI may remain more robust than WHtR. Implications for public health policy are discussed.
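
A hedged sketch of the cut-point analysis described above follows: logistic regression of hypertension status on WHtR, ROC/AUC, and an optimum cut-off chosen by Youden's J. The simulated data and risk curve are placeholders, not the WHO SAGE survey.

```python
# Hedged sketch: WHtR cut-point selection via logistic regression + ROC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
whtr = rng.normal(0.52, 0.07, 2000)                    # waist-height ratios
p_true = 1.0 / (1.0 + np.exp(-(whtr - 0.55) * 15.0))   # assumed risk curve
hyper = rng.binomial(1, p_true)                        # hypertension status

model = LogisticRegression().fit(whtr.reshape(-1, 1), hyper)
score = model.predict_proba(whtr.reshape(-1, 1))[:, 1]
fpr, tpr, thresholds = roc_curve(hyper, score)
best = thresholds[np.argmax(tpr - fpr)]                # Youden's J optimum
# Map the probability threshold back to a WHtR cut-point.
cut = (np.log(best / (1 - best)) - model.intercept_[0]) / model.coef_[0, 0]
print("AUC:", round(roc_auc_score(hyper, score), 3),
      "WHtR cut-point:", round(float(cut), 3))
```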

Keywords: hypertension, obesity, Waist-Height ratio, SAGE

Procedia PDF Downloads 474
281 Leveraging Automated and Connected Vehicles with Deep Learning for Smart Transportation Network Optimization

Authors: Taha Benarbia

Abstract:

The advent of automated and connected vehicles has revolutionized the transportation industry, presenting new opportunities for enhancing the efficiency, safety, and sustainability of our transportation networks. This paper explores the integration of automated and connected vehicles into a smart transportation framework, leveraging the power of deep learning techniques to optimize overall network performance. The first aspect addressed in this paper is the deployment of automated vehicles (AVs) within the transportation system. AVs offer numerous advantages, such as reduced congestion, improved fuel efficiency, and increased safety through advanced sensing and decision-making capabilities. The paper delves into the technical aspects of AVs, including their perception, planning, and control systems, highlighting the role of deep learning algorithms in enabling intelligent and reliable AV operations. Furthermore, the paper investigates the potential of connected vehicles (CVs) in creating a seamless communication network between vehicles, infrastructure, and traffic management systems. By harnessing real-time data exchange, CVs enable proactive traffic management, adaptive signal control, and effective route planning. Deep learning techniques play a pivotal role in extracting meaningful insights from the vast amount of data generated by CVs, empowering transportation authorities to make informed decisions for optimizing network performance. The integration of deep learning with automated and connected vehicles paves the way for advanced transportation network optimization. Deep learning algorithms can analyze complex transportation data, including traffic patterns, demand forecasting, and dynamic congestion scenarios, to optimize routing, reduce travel times, and enhance overall system efficiency. The paper presents case studies and simulations demonstrating the effectiveness of deep learning-based approaches in achieving significant improvements in network performance metrics.

Keywords: automated vehicles, connected vehicles, deep learning, smart transportation network

Procedia PDF Downloads 72
280 Characteristics of the Wake behind a Heated Cylinder in Relatively High Reynolds Number

Authors: Morteza Khashehchi, Kamel Hooman

Abstract:

Thermal effects on the dynamics and stability of the flow past a circular cylinder operating in the mixed convection regime are studied experimentally for Reynolds numbers (ReD) between 1000 and 4000 and different cylinder wall temperatures (Tw) between 25 and 75°C by means of particle image velocimetry (PIV). The experiments were conducted in a horizontal wind tunnel with the heated cylinder placed horizontally, so that the thermally induced buoyancy force acting on the fluid surrounding the heated cylinder is perpendicular to the flow direction. In each experiment, 3000 PIV image pairs were acquired while the temperature and Reynolds number of the approach flow were held constant. By adjusting different temperatures at different Reynolds numbers, the corresponding Richardson number (RiD = Gr/Re²) was varied between 0.0 (unheated) and 10, resulting in a change in the heat transfer process from forced convection to mixed convection. With increasing cylinder wall temperature, significant modifications of the wake flow pattern and the wake vortex shedding process were clearly revealed. For a cylinder at low wall temperature, the size of the wake and the vortex shedding process are found to be quite similar to those of an unheated cylinder. With high wall temperature, however, the high temperature gradient in the wake shear layer creates a type of vorticity with opposite sign to that of the shear layer vorticity. This temperature gradient vorticity weakens the strength of the shear layer vorticity, causing a delay in reaching the recirculation point. In addition to the wake characteristics, the shedding frequency for the heated cylinder is determined for all the aforementioned cases. It is found that, as the cylinder wall is heated, the organization of the vortex shedding is altered and the relative position of the first detached vortex with respect to the second one is changed. This movement of the first detached vortex toward the second one increases the frequency of the shedding process. It is also found that the wake closure length decreases with increasing Richardson number.
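
The Richardson number quoted above combines the standard Grashof and Reynolds number definitions (D the cylinder diameter, β the thermal expansion coefficient, ν the kinematic viscosity):

```latex
Ri_D = \frac{Gr_D}{Re_D^{2}}, \qquad
Gr_D = \frac{g\,\beta\,(T_w - T_\infty)\,D^{3}}{\nu^{2}}, \qquad
Re_D = \frac{U_\infty D}{\nu}.
```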

Keywords: heated cylinder, PIV, wake, Reynolds number

Procedia PDF Downloads 388
279 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. Open-source C and C++ code is now available for creating a large-scale machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits, and developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model that can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when semantic and syntactic information is used as the features, but require longer execution times, as the word embedding algorithm adds complexity to the overall system.
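
A minimal sketch of the final classification stage follows: token indices from the intermediate representation are embedded and fed to an LSTM with a binary vulnerable/not-vulnerable head. The vocabulary size, dimensions, and toy batch are placeholders; the study additionally compares BiLSTM, autoencoder, and transformer variants.

```python
# Hedged sketch: LSTM classifier over embedded source-code tokens (PyTorch).
import torch
import torch.nn as nn

class VulnLSTM(nn.Module):
    def __init__(self, vocab=5000, embed_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed_dim)  # or load GloVe/fastText
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)             # two classes

    def forward(self, tokens):
        x = self.embed(tokens)
        _, (h_n, _) = self.lstm(x)   # final hidden state summarizes the code
        return self.head(h_n[-1])

model = VulnLSTM()
batch = torch.randint(0, 5000, (4, 200))  # 4 functions, 200 tokens each
print(model(batch).shape)                 # torch.Size([4, 2])
```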

Keywords: cyber security, vulnerability detection, neural networks, feature extraction

Procedia PDF Downloads 86
278 Optimizing Wind Turbine Blade Geometry for Enhanced Performance and Durability: A Computational Approach

Authors: Nwachukwu Ifeanyi

Abstract:

Wind energy is a vital component of the global renewable energy portfolio, with wind turbines serving as the primary means of harnessing this abundant resource. However, the efficiency and stability of wind turbines remain critical challenges in maximizing energy output and ensuring long-term operational viability. This study proposes a comprehensive approach utilizing computational aerodynamics and aeromechanics to optimize wind turbine performance across multiple objectives. The proposed research aims to integrate advanced computational fluid dynamics (CFD) simulations with structural analysis techniques to enhance the aerodynamic efficiency and mechanical stability of wind turbine blades. By leveraging multi-objective optimization algorithms, the study seeks to simultaneously optimize aerodynamic performance metrics such as lift-to-drag ratio and power coefficient while ensuring structural integrity and minimizing fatigue loads on the turbine components. Furthermore, the investigation will explore the influence of various design parameters, including blade geometry, airfoil profiles, and turbine operating conditions, on the overall performance and stability of wind turbines. Through detailed parametric studies and sensitivity analyses, valuable insights into the complex interplay between aerodynamics and structural dynamics will be gained, facilitating the development of next-generation wind turbine designs. Ultimately, this research endeavours to contribute to the advancement of sustainable energy technologies by providing innovative solutions to enhance the efficiency, reliability, and economic viability of wind power generation systems. The findings have the potential to inform the design and optimization of wind turbines, leading to increased energy output, reduced maintenance costs, and greater environmental benefits in the transition towards a cleaner and more sustainable energy future.
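
For reference, the two aerodynamic objectives named above have the standard textbook definitions (P the extracted power, ρ the air density, A the rotor swept area, V the free-stream wind speed); the abstract does not give the study's exact formulation:

```latex
C_P = \frac{P}{\tfrac{1}{2}\,\rho\,A\,V^{3}}, \qquad
\frac{L}{D} = \frac{C_L}{C_D},
```

with the Betz limit capping C_P at 16/27 ≈ 0.593.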

Keywords: computation, robotics, mathematics, simulation

Procedia PDF Downloads 55
277 The Effect of Perforation Shape on Flexural Behaviour in Castellated Beams

Authors: Harrys Purnama, Wardatul Jannah, Rizkia Nita Hawari

Abstract:

As construction practice has developed, many materials have come into use for building structures. Steel has become one of the most widely used materials in building construction, serving as the main structural material. The steel castellated beam is an innovation in the use of steel in building construction: a beam used for long-span construction (more than 10 meters). A castellated beam is fabricated by joining two steel profiles into one in order to obtain the required profile height. The profile is perforated to minimize its weight, increase efficiency, save costs, and add architectural value. The perforations in a castellated beam can be circular, elliptical, hexagonal, or rectangular. A castellated beam has a height (h) almost 50% greater than the initial profile, which increases the bending capacity and the moment of inertia (Iₓ). In this analysis, three specimens of castellated beams with a 12.1-meter span were used as samples, with varied perforation shapes: circular, hexagonal, and octagonal. The castellated beams were analysed with the computer application Staad Pro V8i by applying a central point load at the middle of the steel beam span, in order to determine the effect of the perforation shape on the bending behaviour of steel castellated beams with test specimen WF 200.100.5.5.8. The analysis captures the behaviour of the steel castellated beam under this central load and yields the load, shear, strain, and deflection (Δ). The Staad Pro V8i results show that different perforation shapes in the castellated steel profile lead to different moments of inertia. The largest moment of inertia gives the greatest stiffness; as the stiffness of the steel castellated beam increases, the deflection becomes smaller, so the beam can withstand a larger moment and force. The results of the analysis show that the most effective and efficient perforation is the hexagonal shape.
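
As a quick sanity check on the stiffness-deflection relationship reported above, the sketch below evaluates the standard simply-supported midspan deflection formula δ = PL³/(48EI) for a few moments of inertia. The load and the inertia values per perforation shape are hypothetical placeholders, not results from the paper's Staad Pro model.

```python
# Midspan deflection of a simply supported beam under a central point load:
# a larger moment of inertia I gives a smaller deflection, as the abstract reports.
E = 200e9          # steel Young's modulus [Pa]
L = 12.1           # span [m], as in the analysed specimens
P = 50e3           # assumed central point load [N]

inertia_by_shape = {            # hypothetical second moments of area [m^4]
    "circular": 1.10e-4,
    "octagonal": 1.15e-4,
    "hexagonal": 1.20e-4,
}

for shape, I in inertia_by_shape.items():
    delta = P * L ** 3 / (48 * E * I)
    print(f"{shape:10s} I = {I:.2e} m^4 -> midspan deflection = {delta * 1000:.2f} mm")
```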

Keywords: Castellated Beam, the moment of inertia, stress, deflection, bending test

Procedia PDF Downloads 165
276 Dynamic Web-Based 2D Medical Image Visualization and Processing Software

Authors: Abdelhalim. N. Mohammed, Mohammed. Y. Esmail

Abstract:

Over recent decades, medical imaging has been dominated by the use of costly film media for the review and archival of medical investigations. However, owing to developments in network technologies and the wide acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web has emerged. Web technologies have been used successfully in telemedicine applications, and here the combination of web technologies with DICOM is used to design a web-based, open-source DICOM viewer. The web server allows querying and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP 'Apache' server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute images inside healthcare facilities. The system offers several advantages over conventional picture archiving and communication systems (PACS): it is easy to install and maintain, platform-independent, displays and manipulates images efficiently, and is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is applied, in which a 2-D discrete wavelet transform decomposes the image; the wavelet coefficients are then thresholded and transmitted with entropy encoding to decrease transmission time, storage cost, and capacity. Compression performance was estimated using image quality metrics such as mean square error (MSE), peak signal-to-noise ratio (PSNR), and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
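
A minimal sketch of the described compression step, using the PyWavelets library: a level-3 2-D DWT with the 'coif3' wavelet, hard thresholding of the detail coefficients, reconstruction, and PSNR measurement. The test image and threshold value are assumptions for illustration; the paper's exact pipeline and parameters may differ.

```python
# 2-D DWT compression sketch with 'coif3' and hard thresholding.
import numpy as np
import pywt

image = np.random.rand(256, 256) * 255            # stand-in for a DICOM slice
coeffs = pywt.wavedec2(image, "coif3", level=3)   # multi-level 2-D DWT

threshold = 10.0                                  # assumed threshold value
new_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(d, threshold, mode="hard") for d in detail)
    for detail in coeffs[1:]
]
reconstructed = pywt.waverec2(new_coeffs, "coif3")[:256, :256]

mse = np.mean((image - reconstructed) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
total_detail = sum(d.size for det in new_coeffs[1:] for d in det)
zeroed = sum(int((d == 0).sum()) for det in new_coeffs[1:] for d in det)
print(f"MSE={mse:.2f}  PSNR={psnr:.2f} dB  zeroed detail coeffs={zeroed / total_detail:.1%}")
```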

Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN

Procedia PDF Downloads 158
275 A Review of Benefit-Risk Assessment over the Product Lifecycle

Authors: M. Miljkovic, A. Urakpo, M. Simic-Koumoutsaris

Abstract:

Benefit-risk assessment (BRA) is a valuable tool applied at multiple stages of a medicine's lifecycle, and it can be conducted in a variety of ways. The aim was to summarize current BRA methods used during approval decisions and in post-approval settings, and to identify possible future directions. Relevant reviews, recommendations, and guidelines published in the medical literature and by regulatory agencies over the past five years were examined. BRA involves the review of two dimensions: benefits (determined mainly by therapeutic efficacy) and risks (comprising the safety profile of a drug). Regulators, industry, and academia have developed various approaches, ranging from descriptive textual (qualitative) to decision-analytic (quantitative) models, to facilitate the BRA of medicines across the product lifecycle (from Phase I trials to the authorization procedure, post-marketing surveillance, and health technology assessment for inclusion in public formularies). These approaches can be classified into the following categories: stepwise structured approaches (frameworks); usually endpoint-specific measures for benefits and risks (metrics); simulation techniques and meta-analysis (estimation techniques); and utility survey techniques to elicit stakeholders' preferences (utilities). All of these approaches share two common goals, to assist the analysis and to improve the communication of decisions, but each has its own specific strengths and limitations. Before using any method, its utility, complexity, the extent to which it is established, and the ease of interpreting its results should be considered. Despite widespread and long-standing use, BRA remains subject to debate, suffers from a number of limitations, and is still under development. The use of formal, systematic, structured approaches to BRA for regulatory decision-making, and of quantitative methods to support BRA during the product lifecycle, is standard practice in medicine and subject to continuous improvement and modernization, not only in methodology but also in cooperation between organizations.
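
To illustrate what a simple decision-analytic (quantitative) BRA of the kind categorised above might look like, the sketch below computes a weighted multi-criteria score for a hypothetical drug and comparator. All criteria names, weights, and scores are invented for demonstration and carry no clinical meaning.

```python
# Toy weighted multi-criteria benefit-risk score (MCDA-style).
criteria = {
    # name: (weight, score_drug, score_comparator); scores on a 0-1 "value"
    # scale (1 = best) after normalising the raw clinical endpoints.
    "symptom relief":         (0.40, 0.80, 0.60),  # benefit
    "time to remission":      (0.20, 0.70, 0.50),  # benefit
    "serious adverse events": (0.25, 0.55, 0.75),  # risk (higher = safer)
    "discontinuations":       (0.15, 0.60, 0.70),  # risk
}

def weighted_score(index):
    # index 0 = drug, index 1 = comparator
    return sum(w * vals[index] for w, *vals in criteria.values())

drug, comparator = weighted_score(0), weighted_score(1)
print(f"drug={drug:.3f}  comparator={comparator:.3f}  "
      f"net difference={drug - comparator:+.3f}")
```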

Keywords: benefit-risk assessment, benefit-risk profile, product lifecycle, quantitative methods, structured approaches

Procedia PDF Downloads 152
274 Threat Modeling Methodology for Supporting Industrial Control Systems Device Manufacturers and System Integrators

Authors: Raluca Ana Maria Viziteu, Anna Prudnikova

Abstract:

Industrial control systems (ICS) have received much attention in recent years due to the convergence of information technology (IT) and operational technology (OT), which has increased the interdependence of the safety and security issues to be considered. These issues require ICS-tailored solutions, which led to the need to create a methodology for supporting ICS device manufacturers and system integrators in carrying out threat modeling of embedded ICS devices in a way that guarantees the quality of the identified threats and minimizes subjectivity in the threat identification process. To research the possibility of creating such a methodology, a set of existing standards, regulations, papers, and publications related to threat modeling in the ICS sector and other sectors was reviewed to identify the various existing methodologies and methods used in threat modeling. The most popular of these were then tested in an exploratory phase on a specific PLC device. The outcome of this exploratory phase was used as a basis for defining specific characteristics of embedded ICS devices and their deployment scenarios, identifying the factors that introduce subjectivity into the threat modeling of such devices, and defining metrics for evaluating the minimum quality requirements of the identified threats associated with the deployment of the devices in existing infrastructures. The threat modeling methodology was then created based on the results of these previous steps. Its usability was evaluated through a set of standardized threat modeling requirements and a standardized comparison method for threat modeling methodologies; the outcomes of these verification methods confirm that the methodology is effective. The full paper includes the outcome of research on the different threat modeling methodologies that can be used in OT, their comparison, and the results of implementing each of them in practice on a PLC device. This research is then used to build a threat modeling methodology tailored to OT environments, of which a detailed description is included. Moreover, the paper includes the results of an evaluation of the created methodology based on a set of parameters specifically created to rate threat modeling methodologies.
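
As a loose illustration of a quality gate over identified threats like the one the methodology describes, the sketch below scores each threat against minimum-quality criteria and rejects those falling below a floor. The metric names, scales, and thresholds are assumptions for illustration, not the paper's actual metric set.

```python
# Hypothetical per-threat quality gate with three 1-5 quality metrics.
from dataclasses import dataclass

@dataclass
class Threat:
    description: str
    specificity: int      # 1-5: tied to a concrete device interface?
    actionability: int    # 1-5: does it imply a testable mitigation?
    reproducibility: int  # 1-5: would independent analysts find the same threat?

MIN_SCORE = 3  # assumed per-metric quality floor

def passes_quality_gate(t: Threat) -> bool:
    return min(t.specificity, t.actionability, t.reproducibility) >= MIN_SCORE

threats = [
    Threat("Unauthenticated firmware update over the maintenance port", 5, 5, 4),
    Threat("Generic 'malware infection' of the PLC", 1, 2, 3),
]
for t in threats:
    print(("ACCEPT" if passes_quality_gate(t) else "REJECT"), "-", t.description)
```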

Keywords: device manufacturers, embedded devices, industrial control systems, threat modeling

Procedia PDF Downloads 76
273 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in the single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. An in-depth analysis of state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single cell data simultaneously, without relying on particular genes or thresholds chosen by eye. It detects communities in a shared nearest neighbour similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with weak cluster membership. This strategy is based on the observation that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a peripheral blood mononuclear cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, taken before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the post-chemotherapy ovarian data, with interesting marker genes that are potentially relevant for medical research.
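
A conceptual sketch of the described filtering loop follows, under illustrative assumptions (random data standing in for a cell-by-gene matrix, k = 15 neighbours, and a 0.5 membership threshold): build a shared nearest neighbour (SNN) graph, detect communities by modularity optimisation (here Louvain), and flag vertices whose neighbours mostly lie outside their own community.

```python
# SNN graph construction, community detection, and weak-membership filtering.
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors

X = np.random.rand(300, 50)                 # stand-in for a cell x gene matrix
k = 15
knn = NearestNeighbors(n_neighbors=k).fit(X)
_, idx = knn.kneighbors(X)
neighbour_sets = [set(row) for row in idx]

G = nx.Graph()
G.add_nodes_from(range(len(X)))
for i in range(len(X)):
    for j in idx[i]:
        if j <= i:
            continue
        shared = len(neighbour_sets[i] & neighbour_sets[int(j)])
        if shared > 0:
            G.add_edge(i, int(j), weight=shared / k)  # SNN similarity

communities = nx.community.louvain_communities(G, weight="weight", seed=0)
label = {n: c for c, comm in enumerate(communities) for n in comm}

def membership_strength(n):
    # Fraction of a vertex's neighbours that share its community label.
    nbrs = list(G.neighbors(n))
    if not nbrs:
        return 0.0
    return sum(label[m] == label[n] for m in nbrs) / len(nbrs)

noise = [n for n in G if membership_strength(n) < 0.5]  # assumed threshold
print(f"{len(communities)} communities, {len(noise)} cells flagged as noise")
```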

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 108