Search results for: universal testing machine
932 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. Large volumes of open-source C and C++ code are now available, making it possible to create a large-scale machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning models such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we proposed a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but they require longer execution time, as the word embedding algorithm adds complexity to the overall system.
Keywords: cyber security, vulnerability detection, neural networks, feature extraction
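For illustration, a minimal sketch of the embedding-plus-BiLSTM classifier described above, assuming a Keras-style setup; the vocabulary size, sequence length, layer sizes, and training details are our assumptions, not the authors' published configuration:

```python
# Hedged sketch: token-embedding + BiLSTM classifier for function-level
# vulnerability detection, in the spirit of the pipeline described above.
# VOCAB_SIZE, EMBED_DIM, MAX_LEN are assumed values; the Embedding weights
# would normally be replaced with pretrained GloVe/fastText vectors.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, EMBED_DIM, MAX_LEN = 20_000, 100, 500  # assumed hyperparameters

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),   # load GloVe/fastText weights here
    layers.Bidirectional(layers.LSTM(64)),     # BiLSTM over code-token sequence
    layers.Dense(1, activation="sigmoid"),     # vulnerable vs. benign function
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.summary()
```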
Procedia PDF Downloads 919

931 Mutations in rpoB, katG and inhA Genes: The Association with Resistance to Rifampicin and Isoniazid in Egyptian Mycobacterium tuberculosis Clinical Isolates
Authors: Ayman K. El Essawy, Amal M. Hosny, Hala M. Abu Shady
Abstract:
The rapid detection of TB and drug resistance both optimizes treatment and improves outcomes. In the current study, respiratory specimens were collected from 155 patients. Conventional susceptibility testing and MIC determination were performed for rifampicin (RIF) and isoniazid (INH). The Genotype MTBDRplus assay, a molecular genetic assay based on DNA-STRIP technology, and specific gene sequencing with primers for the rpoB, katG, and mab-inhA genes were used to detect mutations associated with resistance to rifampicin and isoniazid. In comparison to other categories, most rifampicin-resistant (61.5%) and isoniazid-resistant isolates (47.1%) were from patients who had relapsed during treatment. The genotypic profile (using the Genotype MTBDRplus assay) of multi-drug resistant (MDR) isolates showed the absence of the katG wild-type 1 (WT1) band and the appearance of the katG MUT2 mutation band. For isoniazid mono-resistant isolates, 80% showed katG MUT1; 20% showed katG MUT1 together with inhA MUT1; and 20% showed only inhA MUT1. Accordingly, 100% of isoniazid-resistant strains were detected by this assay. Out of 17 resistant strains, 16 had katG mutation bands, indicating high-level resistance to isoniazid. The assay could clearly detect rifampicin resistance in 66.7% of MDR isolates, which showed the rpoB MUT3 mutation band, while 33.3% of them were considered unknown. One mono-resistant rifampicin isolate did not show rifampicin mutation bands in the Genotype MTBDRplus assay, but it showed an unexpected mutation in codon 531 of rpoB by DNA sequence analysis. Rifampicin resistance in this strain could be associated with a mutation in codon 531 of rpoB (based on molecular sequencing) that the Genotype MTBDRplus assay could not detect. If the results of the Genotype MTBDRplus assay and sequencing are combined, this strain shows a hetero-resistance pattern. Gene sequencing of eight selected isolates, previously tested by the Genotype MTBDRplus assay, could detect resistance mutations mainly in codon 315 (katG gene) and at position -15 in the inhA promoter region for isoniazid resistance, and in codon 531 (rpoB gene) for rifampicin resistance. Genotyping techniques allow distinguishing between recurrent cases of reinfection or reactivation and support epidemiological studies.
Keywords: M. tuberculosis, rpoB, katG, inhA, genotype MTBDRplus
Procedia PDF Downloads 167

930 The Role of Parental Stress and Emotion Regulation in Responding to Children’s Expression of Negative Emotion
Authors: Lizel Bertie, Kim Johnston
Abstract:
Parental emotion regulation plays a central role in the socialisation of emotion, especially when teaching young children to cope with negative emotions. Despite evidence showing that non-supportive parental responses to children’s expression of negative emotions have implications for the social and emotional development of the child, few studies have investigated risk factors which impact parental emotion socialisation processes. The current study aimed to explore the extent to which parental stress contributes to both difficulties in parental emotion regulation and non-supportive parental responses to children’s expression of negative emotions. In addition, the study examined whether parental use of expressive suppression as an emotion regulation strategy facilitates the influence of parental stress on non-supportive responses by testing the relations in a mediation model. A sample of 140 Australian adults, who identified as parents with children aged 5 to 10 years, completed an online questionnaire. The measures explored recent symptoms of depression, anxiety, and stress, the use of expressive suppression as an emotion regulation strategy, and hypothetical parental responses to scenarios related to children’s expression of negative emotions. A mediated regression indicated that parents who reported higher levels of stress also reported higher levels of expressive suppression as an emotion regulation strategy and increased use of non-supportive responses in relation to young children’s expression of negative emotions. These findings suggest that parents who experience heightened symptoms of stress are more likely both to suppress their emotions in parent-child interactions and to engage in non-supportive responses. Furthermore, higher use of expressive suppression strongly predicted the use of non-supportive responses, despite the presence of parental stress. Contrary to expectation, no indirect effect of stress on non-supportive responses was observed via expressive suppression. The findings from the study suggest that parental stress may become a more salient manifestation of psychological distress in a sub-clinical population of parents while contributing to impaired parental responses. As such, the study offers support for targeting overarching factors such as difficulties in parental emotion regulation and stress management, not only as an intervention for parental psychological distress, but also for the detection and prevention of maladaptive parenting practices.
Keywords: emotion regulation, emotion socialisation, expressive suppression, non-supportive responses, parental stress
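The mediation test described above can be reproduced in outline with statsmodels' Mediation class; this is a hedged sketch in which the file name and column names (stress, suppression, nonsupport) are invented placeholders, not the study's data:

```python
# Hedged sketch: does expressive suppression mediate the link between
# parental stress and non-supportive responses? Column names and the CSV
# file are illustrative assumptions only.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.mediation import Mediation

df = pd.read_csv("parenting_survey.csv")  # assumed: stress, suppression, nonsupport

outcome_model  = smf.ols("nonsupport ~ suppression + stress", data=df)
mediator_model = smf.ols("suppression ~ stress", data=df)

med = Mediation(outcome_model, mediator_model,
                exposure="stress", mediator="suppression").fit(n_rep=1000)
print(med.summary())  # the ACME rows estimate the indirect (mediated) effect
```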
Procedia PDF Downloads 160

929 Clinical and Molecular Characterization of Ichthyosis at King Abdulaziz Medical City, Riyadh KSA
Authors: Reema K. AlEssa, Sahar Alshomer, Abdullah Alfaleh, Sultan ALkhenaizan, Mohammed Albalwi
Abstract:
Ichthyosis is a disorder of abnormal keratinization, characterized by excessive scaling, and consists of more than twenty subtypes that vary in severity, mode of inheritance, and the genes involved. There is insufficient data in the literature about the epidemiology and characteristics of ichthyosis locally. Our aim is to identify the histopathological features and genetic profile of ichthyosis. Method: This is an observational retrospective case series study, conducted in March 2020, that included all patients who were diagnosed with ichthyosis, confirmed by histological and molecular findings, over the last 20 years in King Abdulaziz Medical City (KAMC), Riyadh, Saudi Arabia. Molecular analysis was performed by testing genomic DNA and checking genetic variations using the AmpliSeq panel. All disease-causing variants were checked against the HGMD, ClinVar, Genome Aggregation Database (gnomAD), and Exome Aggregation Consortium (ExAC) databases. Result: A total of 60 cases of ichthyosis were identified, with a mean age of 13 ± 9.2. There was an almost equal distribution between female patients, 29 (48%), and males, 31 (52%). The majority of them were Saudis, 94%. More than half of the patients presented with general scaling, 33 (55%), followed by dryness and coarse skin, 19 (31.6%), and hyperlinearity, 5 (8.33%). Family history and history of consanguinity were seen in 26 (43.3%) and 13 (22%) of cases, respectively. A history of collodion baby was found in 6 (10%) cases of ichthyosis. The most frequent genes were ALOX12B, ALOXE3, CERS3, CYP4F22, DOLK, FLG2, GJB2, PNPLA1, SLC27A4, SPINK5, STS, SUMF1, TGM1, TGM5, and VPS33B. The most frequent variations were detected in CYP4F22, in 16 cases (26.6%), followed by ALOXE3, 6 (10%), and STS, 6 (10%), then TGM1, 5 (8.3%), and ALOX12B, 5 (8.3%). Molecular genetic analysis identified 23 different genetic variations in the ichthyosis genes, of which 13 were novel mutations. Homozygous mutations were detected in the majority of ichthyosis cases, 54 (90%), and only 1 case was heterozygous. A few cases, 4 (6.6%), had an unknown type of ichthyosis with a negative genetic result. Conclusion: 13 novel mutations were discovered. Also, about half of the ichthyosis patients had a positive history of consanguinity.
Keywords: ichthyosis, genetic profile, molecular characterization, congenital ichthyosis
Procedia PDF Downloads 197

928 Establishing a Drug Discovery Platform to Progress Compounds into the Clinic
Authors: Sheraz Gul
Abstract:
The requirements for progressing a compound to clinical trials are well established and rely on the results from in-vitro and in-vivo animal tests to indicate that it is likely to be safe and efficacious when tested in humans. The typical data package required will include demonstrating compound safety, toxicity, bioavailability, pharmacodynamics (potential effects of the compound on body systems) and pharmacokinetics (how the compound is potentially absorbed, distributed, metabolised and eliminated after dosing in humans). If the desired criteria are met, the compound meets the clinical Candidate criteria, and it is deemed worthy of further development, a submission can be made to regulatory bodies such as the US Food & Drug Administration for an exploratory Investigational New Drug study. The purpose of this study is to collect data to establish that the compound will not expose humans to unreasonable risks when used in limited, early-stage clinical studies in patients or normal volunteer subjects (Phase I). These studies are also designed to determine the metabolism and pharmacologic actions of the drug in humans, the side effects associated with increasing doses, and, if possible, to gain early evidence of effectiveness. In order to reach the above goals, we have developed a pre-clinical high-throughput Absorption, Distribution, Metabolism and Excretion-Toxicity (ADME-Toxicity) panel of assays to identify compounds that are likely to meet the Lead and Candidate compound acceptance criteria. This panel includes solubility studies in a range of biological fluids, cell viability studies in cancer and primary cell lines, mitochondrial toxicity, off-target effects (across the kinase, protease, histone deacetylase, phosphodiesterase and GPCR protein families), CYP450 inhibition (5 different CYP450 enzymes), CYP450 induction, cardio-toxicity (hERG) and gene-toxicity. This panel of assays has been applied to multiple compound series developed in a number of projects delivering Lead and clinical Candidate compounds, and examples from these will be presented.
Keywords: absorption, distribution, metabolism and excretion-toxicity, drug discovery, food and drug administration, pharmacodynamics
Procedia PDF Downloads 173

927 Exoskeleton Response During Infant Physiological Knee Kinematics And Dynamics
Authors: Breanna Macumber, Victor A. Huayamave, Emir A. Vela, Wangdo Kim, Tamara T. Chamber, Esteban Centeno
Abstract:
Spina bifida is a type of neural tube defect that affects the nervous system and can lead to problems such as total leg paralysis. Treatment requires physical therapy and rehabilitation. Robotic exoskeletons have been used for rehabilitation to train muscle movement and assist in injury recovery; however, current models focus on the adult population and not on the infant population. The proposed framework aims to couple a musculoskeletal infant model with a robotic exoskeleton using vacuum-powered artificial muscles to provide rehabilitation to infants affected by spina bifida. The study that drove the input values for the robotic exoskeleton used motion-capture technology to collect data from the spontaneous kicking movement of a 2.4-month-old infant lying supine. OpenSim was used to develop the musculoskeletal model, and inverse kinematics was used to estimate hip joint angles. A total of 4 kicks (A, B, C, D) were selected, and the selection was based on range, transient response, and stable response. Kicks had at least 5° of range of motion with a smooth transient response and a stable period. The robotic exoskeleton used a vacuum-powered artificial muscle (VPAM) whose structure comprised cells that were clipped in a collapsed state and unclipped when desired to simulate the infant’s age. The artificial muscle works with vacuum pressure: when air is removed, the muscle contracts, and when air is added, the muscle relaxes. Bench testing was performed using a 6-month-old infant mannequin. The previously developed exoskeleton worked well with controlled ranges of motion and frequencies, which are typical of rehabilitation protocols for infants suffering from spina bifida. However, the random kicking motion in this study contained high-frequency kicks, and the exoskeleton was not able to accurately replicate all the investigated kicks. Kick 'A' had a greater error when compared to the other kicks. This study has the potential to advance the infant rehabilitation field.
Keywords: musculoskeletal modeling, soft robotics, rehabilitation, pediatrics
Procedia PDF Downloads 88

926 Coherent Optical Tomography Imaging of Epidermal Hyperplasia in Vivo in a Mouse Model of Oxazolone Induced Atopic Dermatitis
Authors: Eric Lacoste
Abstract:
Laboratory animals are currently widely used as models of human pathologies in dermatology, such as atopic dermatitis (AD). These models provide a better understanding of the pathophysiology of this complex and multifactorial disease, the discovery of potential new therapeutic targets, and a means of testing the efficacy of new therapeutics. However, confirmation of the correct development of AD is mainly based on histology from skin biopsies, requiring invasive surgery or euthanasia of the animals, plus slicing and staining protocols. There are now accessible imaging technologies such as optical coherence tomography (OCT) that allow non-invasive visualization of the main histological structures of the skin (such as the stratum corneum, epidermis, and dermis) and assessment of the dynamics of the pathology or the efficacy of new treatments. Briefly, female immunocompetent hairless mice (SKH1 strain) were sensitized and challenged topically on the back and ears for about 4 weeks. Back-skin and ear thickness were measured with a calliper three times per week, complementing a macroscopic evaluation of atopic dermatitis lesions on the back: erythema, scaling and excoriation scoring. In addition, OCT was performed on the back and ears of the animals. OCT produces a virtual in-depth section (tomography) of the imaged organ using a laser, a camera and image-processing software, allowing fast, non-contact and non-denaturing acquisition of the explored tissues. To perform the imaging sessions, the animals were anesthetized with isoflurane and placed on a support under the OCT for a total examination time of 5 to 10 minutes. The results show a good correlation of the OCT technique with classical HES histology for skin lesion structures such as hyperkeratosis, epidermal hyperplasia, and dermis thickness. This OCT imaging technique can, therefore, be used in live animals at different times for longitudinal evaluation by repeated measurements of lesions in the same animals, in addition to the classical histological evaluation. Furthermore, this original imaging technique speeds up research protocols, reduces the number of animals and refines the use of laboratory animals.
Keywords: atopic dermatitis, mouse model, oxazolone model, histology, imaging
Procedia PDF Downloads 133

925 Radar Fault Diagnosis Strategy Based on Deep Learning
Authors: Bin Feng, Zhulin Zong
Abstract:
Radar systems are critical to modern military, aviation, and maritime operations, and their proper functioning is essential for the success of these operations. However, due to the complexity and sensitivity of radar systems, they are susceptible to various faults that can significantly affect their performance. Traditional radar fault diagnosis strategies rely on expert knowledge and rule-based approaches, which are often limited in effectiveness and require considerable time and resources. Deep learning has recently emerged as a promising approach for fault diagnosis due to its ability to learn features and patterns from large amounts of data automatically. In this paper, we propose a radar fault diagnosis strategy based on deep learning that can accurately identify and classify faults in radar systems. Our approach uses convolutional neural networks (CNNs) to extract features from radar signals and classify those features into fault categories. The proposed strategy is trained and validated on a dataset of measured radar signals with various types of faults. The results show that it achieves high accuracy in fault diagnosis. To further evaluate the effectiveness of the proposed strategy, we compare it with traditional rule-based approaches and other machine learning-based methods, including decision trees, support vector machines (SVMs), and random forests. The results demonstrate that our deep learning-based approach outperforms the traditional approaches in terms of accuracy and efficiency. Finally, we discuss the potential applications and limitations of the proposed strategy, as well as future research directions. Our study highlights the importance and potential of deep learning for radar fault diagnosis and suggests that it can be a valuable tool for improving the performance and reliability of radar systems. In summary, this paper presents a radar fault diagnosis strategy based on deep learning that achieves high accuracy and efficiency in identifying and classifying faults in radar systems. The proposed strategy has significant potential for practical applications and can pave the way for further research.
Keywords: radar system, fault diagnosis, deep learning, radar fault
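A minimal sketch of the kind of 1-D CNN classifier the abstract describes, assuming fixed-length radar signal segments; the architecture, signal length, and number of fault classes are illustrative assumptions, since the paper does not publish its exact network:

```python
# Hedged sketch: 1-D CNN that maps a radar signal segment to a fault class.
# SIGNAL_LEN and N_FAULT_CLASSES are invented values for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

SIGNAL_LEN, N_FAULT_CLASSES = 1024, 5  # assumed

model = models.Sequential([
    layers.Input(shape=(SIGNAL_LEN, 1)),
    layers.Conv1D(32, 16, activation="relu"),   # learn local waveform features
    layers.MaxPooling1D(4),
    layers.Conv1D(64, 8, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(N_FAULT_CLASSES, activation="softmax"),  # fault-class posterior
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```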
Procedia PDF Downloads 92

924 Big Data Analytics and Public Policy: A Study in Rural India
Authors: Vasantha Gouri Prathapagiri
Abstract:
Innovations in the ICT sector facilitate a better quality of life for citizens across the globe. Countries that facilitate the use of new ICT techniques, such as big data analytics, find it easier to fulfil the needs of their citizens. Big data is characterised by its volume, variety, and velocity. Analytics involves processing it in a cost-effective way in order to draw conclusions for useful application. Big data analytics also draws on machine learning and artificial intelligence, leading to accuracy in data presentation that is useful for public policy making. Hence, using data analytics in public policy making is a sound way to march towards the all-round development of any country. Data-driven insights can help a government take important strategic decisions with regard to the socio-economic development of its country. Developed nations like the UK and USA are already far ahead on the path of digitization with the support of big data analytics. India is a huge country and is currently on the path of massive digitization, being realised through the Digital India Mission. Internet connections per household are on the rise every year. This transforms into a massive data set that has the potential to turn the public services delivery system into an effective service mechanism for Indian citizens. In fact, when compared to developed nations, this capacity is being underutilized in India. This is particularly true for the administrative system in rural areas. The present paper focuses on the need for big data analytics adoption in Indian rural administration and its contribution towards faster development of the country. The results of the research point to the need for increasing awareness and serious capacity building among government personnel working in rural development with regard to big data analytics and its utility for the development of the country. Multiple public policies are framed and implemented for rural development, yet the results are not as effective as they should be. Big data has a major role to play in this context, as it can assist in improving both policy making and implementation, aiming at the all-round development of the country.
Keywords: Digital India Mission, public service delivery system, public policy, Indian administration
Procedia PDF Downloads 160

923 Microstructure Dependent Fatigue Crack Growth in Aluminum Alloy
Authors: M. S. Nandana, K. Udaya Bhat, C. M. Manjunatha
Abstract:
In this study, aluminum alloy 7010 was subjected to three different ageing treatments, i.e., peak ageing (T6), over-ageing (T7451), and retrogression and re-ageing (RRA), to study the influence of precipitate microstructure on fatigue crack growth rate behavior. The microstructural modification was studied using a transmission electron microscope (TEM) to examine the change in the size and morphology of precipitates in the matrix and on the grain boundaries. Standard compact tension (CT) specimens were fabricated and tested under constant-amplitude fatigue crack growth tests to evaluate the influence of heat treatment on fatigue crack growth rate properties. The tests were performed in a computer-controlled servo-hydraulic test machine applying a load ratio R = 0.1 at a loading frequency of 10 Hz as per ASTM E647. Fatigue crack growth was measured by the compliance technique using a CMOD gauge attached to the CT specimen. The average size of the matrix precipitates was found to be 16-20 nm in the T7451, 5-6 nm in the RRA, and 2-3 nm in the T6 conditions, respectively. The grain-boundary precipitate, which was continuous in T6, was disintegrated in the RRA and T7451 conditions. The PFZ width was lower in the RRA than in the T7451 condition. The crack growth rate was highest in the T7451 and lowest in the RRA-treated alloy. The RRA-treated alloy also exhibits an increase in the threshold stress intensity factor range (∆Kₜₕ). The measured ∆Kₜₕ was 11.1, 10.3 and 5.7 MPa√m in the RRA, T6 and T7451 alloys, respectively. The fatigue crack growth rate in the RRA-treated alloy was nearly 2-3 times lower than in T6 and one order of magnitude lower than that observed in the T7451 condition. The surface roughness of the RRA-treated alloy was more pronounced than in the other conditions. The reduction in fatigue crack growth rate in the RRA alloy was mainly due to the increase in roughness and partially due to the increase in spacing between the matrix precipitates. The reduction in crack growth rate and increase in threshold stress intensity range are expected to benefit the damage-tolerant capability of aircraft structural components under service loads.
Keywords: damage tolerance, fatigue, heat treatment, PFZ, RRA
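For context, crack growth rates of this kind are commonly summarised by the Paris law, da/dN = C(ΔK)^m; the sketch below integrates it numerically with invented constants, not the paper's measured data:

```python
# Hedged sketch: Paris-law estimate of fatigue life, da/dN = C * (dK)^m with
# dK = Y * dS * sqrt(pi * a). C, m, the geometry factor Y, and the stress
# range dS are illustrative assumptions, not values from this study.
import math

C, m = 1e-11, 3.0                    # assumed Paris constants (m/cycle, MPa*sqrt(m))
Y, dS = 1.12, 100.0                  # assumed geometry factor and stress range (MPa)
a, a_final, da = 1e-3, 10e-3, 1e-5   # grow the crack from 1 mm to 10 mm (metres)

cycles = 0.0
while a < a_final:
    dK = Y * dS * math.sqrt(math.pi * a)  # stress-intensity factor range
    cycles += da / (C * dK**m)            # cycles consumed growing by da
    a += da
print(f"estimated life: {cycles:,.0f} cycles")
```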
Procedia PDF Downloads 154

922 A Framework Based Blockchain for the Development of a Social Economy Platform
Authors: Hasna Elalaoui Elabdallaoui, Abdelaziz Elfazziki, Mohamed Sadgal
Abstract:
Outlines: The social economy is a moral approach to solidarity applied to the development of projects. To reconcile economic activity and social equity, crowdfunding is an alternative means of financing social projects. Several collaborative blockchain platforms exist. Blockchain eliminates the need for a central authority or an untrustworthy middleman; the costs of a successful crowdfunding campaign are also reduced, since there is no commission to be paid to an intermediary. It improves the transparency of record keeping and avoids delegating authority to parties who may be prone to corruption. Objectives: The objectives are: to define a software infrastructure for the participatory financing of projects within a social and solidarity economy, allowing transparent, secure, and fair management; and to provide a financial mechanism that improves financial inclusion. Methodology: The proposed methodology is: a literature review of crowdfunding platforms, a literature review of financing mechanisms, requirements analysis and project definition, a business plan, the platform development process and implementation technology, and testing an MVP. Contributions: The solution consists of proposing a new approach to crowdfunding based on the Islamic financing principle of Musharaka, a financial innovation that integrates ethics and the social dimension into contemporary banking practices. Conclusion: Crowdfunding platforms need to vet projects and allow only quality projects, but they must also offer a wide range of options to funders. Thus, a framework based on blockchain technology and Islamic financing is proposed to manage this arbitration between quality and quantity of options. The proposed financing system, Musharaka, is a mode of financing that prohibits interest and uncertainty. The implementation runs on the Ethereum platform: investors sign and initiate contribution transactions using a digital signature wallet managed by a cryptographic algorithm and smart contracts. Our proposal is illustrated by a crop irrigation project in the Marrakech region.
Keywords: social economy, Musharaka, blockchain, smart contract, crowdfunding
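The Musharaka rule the abstract invokes can be stated in a few lines: profits may be split by a pre-agreed ratio, while losses must follow capital contributions. A hedged sketch with invented figures:

```python
# Hedged sketch of the Musharaka split: profits by agreed ratio (default
# pro-rata to capital), losses strictly in proportion to capital contributed.
# All figures below are invented for illustration only.
def musharaka_split(capitals, result, profit_ratios=None):
    """Return each partner's share of a profit (result >= 0) or loss (result < 0)."""
    total = sum(capitals)
    if result >= 0:
        ratios = profit_ratios or [c / total for c in capitals]  # agreed ratio
        return [result * r for r in ratios]
    return [result * c / total for c in capitals]  # losses always follow capital

print(musharaka_split([6000, 4000], 1000))   # profit:  [600.0, 400.0]
print(musharaka_split([6000, 4000], -500))   # loss:   [-300.0, -200.0]
```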
Procedia PDF Downloads 78

921 Monetary Policy and Assets Prices in Nigeria: Testing for the Direction of Relationship
Authors: Jameelah Omolara Yaqub
Abstract:
One of the main reasons for the existence of a central bank is the belief that central banks have some influence on private sector decisions, which enables the central bank to achieve some of its objectives, especially those of stable prices and economic growth. Under the New Keynesian assumption that prices are not fully flexible in the short run, the central bank can temporarily influence the real interest rate and, therefore, have an effect on real output in addition to nominal prices. There is, therefore, a need for the central bank to monitor, respond to, and influence private sector decisions appropriately. This shows that the central bank and the private sector will both affect and be affected by each other, implying considerable interdependence between the sectors. The interdependence may be simultaneous or not, depending on the level of information readily available and on how sensitive prices are to agents’ expectations about the future. The aim of this paper is, therefore, to determine whether the interdependence between asset prices and monetary policy is simultaneous or not, and how important this relationship is. Studies on the effects of monetary policy have largely used VAR models to identify the interdependence, but most have found small interaction effects. Some earlier studies have ignored the possibility of simultaneous interdependence, while those that have allowed for it used data from developed economies only. This study, therefore, extends the literature by using data from a developing economy, where information might not be readily available to influence agents’ expectations. In this study, the direction of the relationship among the variables of interest will be tested by carrying out the Granger causality test. Thereafter, the interaction between asset prices and monetary policy in Nigeria will be tested. Asset prices will be represented by the NSE index as well as real estate prices, while monetary policy will be represented by the money supply and the MPR, respectively. A VAR model will be used to analyse the relationship between the variables in order to take account of the potential simultaneity of the interdependence. The study will cover the period between 1980 and 2014 due to data availability. It is believed that the outcome of the research will guide monetary policymakers, especially the CBN, to effectively influence private sector decisions and thereby achieve its objectives of price stability and economic growth.
Keywords: asset prices, granger causality, monetary policy rate, Nigeria
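A hedged sketch of the proposed Granger-causality and VAR analysis using statsmodels; the CSV file and column names (nse_index, mpr) are placeholder assumptions, not the study's dataset:

```python
# Hedged sketch: Granger-causality test between an asset-price series and a
# monetary-policy series, then a VAR for the joint dynamics. Differencing is
# used here as a simple stationarity fix; the study's own treatment may differ.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests
from statsmodels.tsa.api import VAR

df = pd.read_csv("nigeria_macro.csv", index_col=0)   # assumed: nse_index, mpr

# Does the policy rate Granger-cause the stock index (second column -> first)?
grangercausalitytests(df[["nse_index", "mpr"]].diff().dropna(), maxlag=4)

# VAR capturing the joint dynamics; lag order chosen by information criterion
res = VAR(df.diff().dropna()).fit(maxlags=4, ic="aic")
print(res.summary())
```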
Procedia PDF Downloads 225

920 The Role of Home Composting in Waste Management Cost Reduction
Authors: Nahid Hassanshahi, Ayoub Karimi-Jashni, Nasser Talebbeydokhti
Abstract:
Due to the economic and environmental benefits of producing less waste, the US Environmental Protection Agency (EPA) identifies source reduction as one of the most important means of dealing with the problems caused by increased landfills and pollution. Waste reduction involves all waste management methods, including source reduction, recycling, and composting, that reduce waste flow to landfills or other disposal facilities. Source reduction of waste can be studied from two perspectives: avoiding waste production, or reducing per-capita waste production; and waste diversion, which indicates the reduction of waste transferred to landfills. The present paper investigates home composting as a managerial solution for reducing waste transfer to landfills. Home composting has many benefits. The use of household waste for the production of compost results in a much smaller amount of waste being sent to landfills, which in turn reduces the costs of waste collection, transportation and burial. Reducing the volume of waste for disposal and using it for the production of compost and plant fertilizer can help to recycle the material in a shorter time and to use it effectively in order to preserve the environment and reduce contamination. Producing compost at home requires a very small piece of land for preparation and recycling compared with other methods. The final product of home-made compost is valuable, helps to grow crops and garden plants, and is also used for modifying soil structure and maintaining its moisture. Food that is transferred to landfills spoils and produces leachate after a while; it also releases methane and greenhouse gases. Composting these materials at home is the best way to manage degradable materials, use them efficiently and reduce environmental pollution. Studies have shown that the benefits from the sale of the compost produced and the reduced costs of collecting, transporting, and burying waste can well cover the costs of purchasing a home composting machine and the related training. Moreover, the process of producing home compost may become profitable within 4 to 5 years and, as a result, can play a major role in reducing waste management costs.
Keywords: compost, home compost, reducing waste, waste management
Procedia PDF Downloads 429

919 Application and Utility of the RALE Score for Assessment of Clinical Severity in COVID-19 Patients
Authors: Naridchaya Aberdour, Joanna Kao, Anne Miller, Timothy Shore, Richard Maher, Zhixin Liu
Abstract:
Background: COVID-19 has been and continues to be a strain on healthcare globally, with the number of patients requiring hospitalization exceeding the level of medical support available in many countries. As chest X-rays are the primary respiratory radiological investigation, the Radiographic Assessment of Lung Edema (RALE) score was used to quantify the extent of pulmonary infection on baseline imaging. The reproducibility of the RALE score and its associations with clinical outcome parameters were then evaluated to determine implications for patient management and prognosis. Methods: A retrospective study was performed including patients who tested positive for COVID-19 on nasopharyngeal swab within a single Local Health District in Sydney, Australia, with baseline X-ray imaging acquired between January and June 2020. Two independent radiologists viewed the studies and calculated the RALE scores. Clinical outcome parameters were collected, and statistical analysis was performed to assess RALE score reproducibility and possible associations with clinical outcomes. Results: A total of 78 patients met the inclusion criteria, with an age range of 4 to 91 years. RALE score concordance between the two independent radiologists was excellent (intraclass correlation coefficient = 0.93, 95% CI = 0.88-0.95, p<0.005). Binomial logistic regression identified a positive correlation with hospital admission (OR 1.87, 95% CI = 1.3-2.6, p<0.005), oxygen requirement (OR 1.48, 95% CI = 1.2-1.8, p<0.005) and invasive ventilation (OR 1.2, 95% CI = 1.0-1.3, p<0.005) for each 1-point increase in RALE score. For each one-year increase in age, there was a negative correlation with recovery (OR 0.05, 95% CI = 0.92-1.0, p<0.01). RALE scores above three were positively associated with hospitalization (Youden index 0.61, sensitivity 0.73, specificity 0.89) and scores above six were positively associated with ICU admission (Youden index 0.67, sensitivity 0.91, specificity 0.78). Conclusion: The RALE score can be used as a surrogate to quantify the extent of COVID-19 infection and has excellent inter-observer agreement. The RALE score could be used to prognosticate and to identify patients at high risk of deterioration. Threshold values may also be applied to predict the likelihood of hospital and ICU admission.
Keywords: chest radiography, coronavirus, COVID-19, RALE score
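A hedged sketch of the per-point odds-ratio and Youden-index calculations the abstract reports; the DataFrame and column names are invented placeholders, and the published numbers come from the authors' own models:

```python
# Hedged sketch: odds ratio per 1-point RALE increase for admission, and a
# Youden-index cut-off from the ROC curve. Columns `rale` and `admitted`
# (0/1) are assumed names, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_curve

df = pd.read_csv("rale_cohort.csv")

fit = smf.logit("admitted ~ rale", data=df).fit()
print(np.exp(fit.params["rale"]))    # odds ratio per 1-point RALE increase

fpr, tpr, thr = roc_curve(df["admitted"], df["rale"])
best = np.argmax(tpr - fpr)          # Youden index J = sensitivity + specificity - 1
print(f"optimal cut-off: RALE > {thr[best]:.0f} (J = {tpr[best]-fpr[best]:.2f})")
```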
Procedia PDF Downloads 178

918 Exploration of Copper Fabric in Non-Asbestos Organic Brake-Pads for Thermal Conductivity Enhancement
Authors: Vishal Mahale, Jayashree Bijwe, Sujeet K. Sinha
Abstract:
The range of thermal conductivity (TC) of friction materials (FMs) is a critical issue, since lower TC leads to accumulation of frictional heat on the working surface, which results in excessive fade, while higher TC leads to excessive heat flow towards the back-plate, resulting in boiling of the brake fluid and 'spongy brakes'. This phenomenon impairs braking action, which is most undesirable. Therefore, the TC of FMs across the brake pads should not be high, while along the brake pad it should be high. To enhance TC, metals in the form of powders and fibers are used in FMs. Apart from TC improvement, metals provide strength and structural integrity to the composites. Due to its higher TC, copper (Cu) powder/fiber is the most preferred metallic ingredient in the FM industry. However, Cu powders/fibers are responsible for the generation of metallic wear debris, which has harmful effects on aquatic organisms. Hence, to eliminate the problem of metallic wear debris generation while keeping the positive effect of TC improvement, incorporation of Cu fabric in NAO brake-pads can be an innovative solution. Keeping this in view, two realistic multi-ingredient FM composites with identical formulations were developed in the form of brake-pads. One composite series contained a single layer of Cu fabric in the body of the brake-pad and was designated C1, while a double layer of Cu fabric was incorporated in another brake-pad series, designated C2. The distance of the Cu fabric layer from the back-plate was kept constant for C1 and C2. One more composite (C0) was developed without Cu fabric for the sake of comparison. The developed composites were characterized for physical properties. Tribological performance was evaluated on a full-scale inertia dynamometer following the JASO C 406 testing standard. It was concluded that the Cu fabric successfully improved fade resistance by increasing the conductivity of the composite and also showed a slight improvement in wear resistance. Worn surfaces of the pads and disc were analyzed by SEM and EDAX to study the wear mechanism.
Keywords: brake inertia dynamometer, copper fabric, non-asbestos organic (NAO) friction materials, thermal conductivity enhancement
Procedia PDF Downloads 132

917 Corneal Confocal Microscopy As a Surrogate Marker of Neuronal Pathology In Schizophrenia
Authors: Peter W. Woodruff, Georgios Ponirakis, Reem Ibrahim, Amani Ahmed, Hoda Gad, Ioannis N. Petropoulos, Adnan Khan, Ahmed Elsotouhy, Surjith Vattoth, Mahmoud K. M. Alshawwaf, Mohamed Adil Shah Khoodoruth, Marwan Ramadan, Anjushri Bhagat, James Currie, Ziyad Mahfoud, Hanadi Al Hamad, Ahmed Own, Peter Haddad, Majid Alabdulla, Rayaz A. Malik
Abstract:
Introduction: We aimed to test the hypothesis that, using corneal confocal microscopy (a non-invasive method for assessing corneal nerve fibre integrity), patients with schizophrenia would show neuronal abnormalities compared with healthy participants. Schizophrenia is a neurodevelopmental and progressive neurodegenerative disease for which there are no validated biomarkers. Corneal confocal microscopy (CCM) is a non-invasive ophthalmic imaging biomarker that can be used to detect neuronal abnormalities in neuropsychiatric syndromes. Methods: Patients with schizophrenia (DSM-V criteria) without other causes of peripheral neuropathy and healthy controls underwent CCM, vibration perception threshold (VPT) and sudomotor function testing. The diagnostic accuracy of CCM in distinguishing patients from controls was assessed using the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. Findings: Participants with schizophrenia (n=17) and controls (n=38) of comparable age (35.7±8.5 vs 35.6±12.2, P=0.96) were recruited. Patients with schizophrenia had significantly higher body weight (93.9±25.5 vs 77.1±10.1, P=0.02) and lower low-density lipoproteins (2.6±1.0 vs 3.4±0.7, P=0.02), but systolic and diastolic blood pressure, HbA1c, total cholesterol, triglycerides and high-density lipoproteins were comparable with control participants. Patients with schizophrenia had significantly lower corneal nerve fiber density (CNFD, fibers/mm2) (23.5±7.8 vs 35.6±6.5, p<0.0001), branch density (CNBD, branches/mm2) (34.4±26.9 vs 98.1±30.6, p<0.0001), and fiber length (CNFL, mm/mm2) (14.3±4.7 vs 24.2±3.9, p<0.0001), but no difference in VPT (6.1±3.1 vs 4.5±2.8, p=0.12) or electrochemical skin conductance (61.0±24.0 vs 68.9±12.3, p=0.23) compared with controls. The diagnostic accuracy of CNFD, CNBD and CNFL to distinguish patients with schizophrenia from healthy controls was, according to the AUC (95% CI): 87.0% (76.8-98.2), 93.2% (84.2-102.3), and 93.2% (84.4-102.1), respectively. Conclusion: CCM can be used to help identify neuronal changes and has a high diagnostic accuracy in distinguishing subjects with schizophrenia from healthy controls.
Procedia PDF Downloads 275

916 Development of a Microfluidic Device for Low-Volume Sample Lysis
Authors: Abbas Ali Husseini, Ali Mohammad Yazdani, Fatemeh Ghadiri, Alper Şişman
Abstract:
We developed a microchip device that uses surface acoustic waves for rapid lysis of low-volume cell samples. The device incorporates sharp-edged glass microparticles for improved performance. We optimized the lysis conditions for high efficiency and evaluated the device's feasibility for point-of-care applications. The microchip contains a 13-finger-pair interdigital transducer with a 30-degree focus angle that generates high-intensity acoustic beams converging 6 mm away. The microchip operates at a frequency of 16 MHz, exciting Rayleigh waves with a 250 µm wavelength on the LiNbO3 substrate. Cell lysis occurs when Candida albicans cells and glass particles are placed within the focal area. The high-intensity surface acoustic waves induce centrifugal forces on the cells and glass particles, resulting in cell lysis through lateral forces from the sharp-edged glass particles. We conducted 42 pilot cell lysis experiments to optimize the surface acoustic wave-induced streaming, varying electrical power, droplet volume, glass particle size, concentration, and lysis time. A regression machine-learning model determined the impact of each parameter on lysis efficiency. Based on these findings, we predicted the optimal conditions: an electrical signal of 2.5 W, a sample volume of 20 µl, a glass particle size below 10 µm, a concentration of 0.2 µg, and a 5-minute lysis period. Downstream analysis successfully amplified a DNA target fragment directly from the lysate. The study presents an efficient microchip-based cell lysis method employing acoustic streaming and microparticle collisions within microdroplets. Integration of a surface acoustic wave-based lysis chip with an isothermal amplification method enables swift point-of-care applications.
Keywords: cell lysis, surface acoustic wave, micro-glass particle, droplet
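A hedged sketch of the regression machine-learning step described above, ranking the influence of each lysis parameter; the file, column names, and the random-forest choice are our assumptions, since the paper does not name its model:

```python
# Hedged sketch: fit a regression model on the 42 pilot runs and rank how much
# each knob (power, volume, particle size/amount, time) drives lysis efficiency.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("lysis_pilot.csv")   # assumed: one row per pilot experiment
X = df[["power_W", "volume_uL", "particle_um", "particle_ug", "time_min"]]
y = df["lysis_efficiency"]

model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
for name, imp in zip(X.columns, model.feature_importances_):
    print(f"{name:12s} importance {imp:.2f}")   # which parameter matters most
```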
Procedia PDF Downloads 79

915 Enabling and Ageing-Friendly Neighbourhoods: An Eye-Tracking Study of Multi-Sensory Experience of Senior Citizens in Singapore
Authors: Zdravko Trivic, Kelvin E. Y. Low, Darko Radovic, Raymond Lucas
Abstract:
Our understanding and experience of the built environment are primarily shaped by multi-sensory, emotional and symbolic modes of exchange with spaces. The sensory and cognitive declines that come with ageing substantially affect the overall quality of life of elderly citizens and the ways they perceive and use the urban environment. Reduced mobility and increased risk of falls, problems with spatial orientation and communication, lower confidence and independence levels, decreased willingness to go out, and social withdrawal are some of the major consequences of sensory decline that challenge almost all segments of seniors' everyday living. However, contemporary urban environments are often either sensorially overwhelming or depleting, resulting in physical, mental and emotional stress. Moreover, the design and planning of housing neighbourhoods hardly go beyond passive 'do-no-harm' and universal design principles and the limited provision of often non-integrated eldercare and inter-generational facilities. This paper explores and discusses the largely neglected relationships between the 'hard' and 'soft' aspects of housing neighbourhoods and urban experience, focusing on seniors' perception and multi-sensory experience as vehicles for the design and planning of high-density housing neighbourhoods that are inclusive and empathetic yet build senior residents' physical and mental abilities at different stages of ageing. The paper outlines methods and key findings from research conducted in two high-density housing neighbourhoods in Singapore that aimed to capture and evaluate the multi-sensorial qualities of the two neighbourhoods from the perspective of senior residents. The research methods employed included: on-site recordings of 'objective' quantitative sensory data (air temperature and humidity, sound level and luminance) using a multi-function environment meter; spatial mapping of patterns of elderly users' transient and stationary activity; socio-sensory perception surveys; and sensorial journeys with local residents using eye-tracking glasses, supplemented by walk-along or post-walk interviews. The paper develops a multi-sensory framework to synthesize, cross-reference and visualise the activity and spatio-sensory rhythms and patterns and to distill key issues pertinent to ageing-friendly and health-supportive neighbourhood design. Key findings show senior residents' concerns with walkability, safety and wayfinding, overall aesthetic qualities, cleanliness, smell, noise and crowdedness in their neighbourhoods, as well as the lack of design support for all-day use in the context of Singapore's tropical climate and for inter-generational social interaction. The (ongoing) analysis of eye-tracking data reveals the spatial elements senior residents look at and interact with most frequently, with the visual range often directed towards the ground. With its capacity to meaningfully combine quantitative and qualitative, measured and experienced sensory data, the multi-sensory framework proves fruitful for distilling key design opportunities based on often ignored aspects of subjective and often taken-for-granted interactions with the familiar outdoor environment. It offers an alternative way of leveraging the potential of housing neighbourhoods to take a more active role in enabling healthful living at all stages of ageing.
Keywords: ageing-friendly neighbourhoods, eye-tracking, high-density environment, multi-sensory approach, perception
Procedia PDF Downloads 156

914 Neighbourhood Walkability and Quality of Life: The Mediating Role of Place Adherence and Social Interaction
Authors: Michał Jaśkiewicz
Abstract:
The relation between walkability, place adherence, social relations and quality of life was explored in a Polish context. A considerable number of studies have suggested that environmental factors may influence quality of life through indirect pathways. The list of possible psychological mediators includes social relations and identity-related variables. Based on the results of Study 1, local identity is a significant mediator of the relationship between neighbourhood walkability and quality of life. It was assumed that pedestrian-oriented neighbourhoods enable residents to interact and that these spontaneous interactions can help to strengthen a sense of local identity, thus influencing quality of life. We therefore conducted further studies, testing the relationship experimentally in Studies 2a and 2b. Participants were exposed to (2a) photos of walkable/non-walkable neighbourhoods or (2b) descriptions of high/low-walkability neighbourhoods. They were then asked to assess the walkability of the neighbourhoods and to evaluate their potential social relations and quality of life in these places. In both studies, social relations with neighbours turned out to be a significant mediator between walkability and quality of life. In Study 3, we used overlapping individual and communal identity (fusion with the neighbourhood) and willingness to engage in collective action as mediators. Living in a walkable neighbourhood was associated with identity fusion with that neighbourhood. Participants who felt more fused expressed greater willingness to engage in collective action with other neighbours. Finally, this willingness was positively related to quality of life in the city. In Study 4, we used commuting time (an aspect of walkability related to the time people spend travelling to work) as the independent variable. The results showed that a shorter average daily commuting time was linked to more frequent social interactions in the neighbourhood. Individuals who assessed their social interactions as more frequent expressed a stronger city identification, which was in turn related to quality of life. To sum up, our research replicated and extended previous findings on the association between walkability and well-being measures. We introduced potential mediators of this relationship: social interactions in the neighbourhood and identity-related variables.
Keywords: walkability, quality of life, social relations, analysis of mediation
Procedia PDF Downloads 327

913 Energy Efficiency Approach to Reduce Costs of Ownership of Air Jet Weaving
Authors: Corrado Grassi, Achim Schröter, Yves Gloy, Thomas Gries
Abstract:
Air jet weaving is the most productive, but also the most energy-consuming, weaving method. Increasing energy costs and environmental impact are a constant challenge for manufacturers of weaving machines. Current technological developments aim at low energy costs, low environmental impact, high productivity, and constant product quality. The method's high energy consumption can be ascribed to its high demand for compressed air. An energy efficiency method is applied to air jet weaving technology. The method identifies and classifies the main relevant energy consumers and processes from the exergy point of view, and it leads to the identification of energy efficiency potentials during the weft insertion process. Starting from the design phase, energy efficiency is considered the central requirement to be satisfied. The initial phase of the method consists of an analysis of the state of the art of the main weft insertion components in order to prioritize the components and processes with high energy demand. The identified major components are then investigated to reduce the high energy demand of the weft insertion process. During the interaction of the flow field coming from the relay nozzles with the profiled reed, only a minor part of the stream actually accelerates the weft yarn, resulting in large energy inefficiency. Different tools such as FEM analysis, CFD simulation models and experimental analysis are used to produce a more energy-efficient design of the components involved in the filling insertion. A different concept for the metal strip of the profiled reed is developed. The developed metal strip allows a reduction of the machine's energy consumption. Based on a parametric and aerodynamic study, the designed reed transmits higher values of flow power to the filling yarn. The innovative reed fulfills both the requirement of raising energy efficiency and compliance with the weaving constraints.
Keywords: air jet weaving, aerodynamic simulation, energy efficiency, experimental validation, weft insertion
Procedia PDF Downloads 197

912 The Persistence of Abnormal Return on Assets: An Exploratory Analysis of the Differences between Industries and Differences between Firms by Country and Sector
Authors: José Luis Gallizo, Pilar Gargallo, Ramon Saladrigues, Manuel Salvador
Abstract:
This study offers an exploratory statistical analysis of the persistence of annual profits across a sample of firms from different European Union (EU) countries. To this end, a hierarchical Bayesian dynamic model has been used which enables the annual behaviour of those profits to be broken down into a permanent structural component and a transitory component, while also distinguishing between general effects affecting the industry as a whole to which each firm belongs and specific effects affecting each firm in particular. This breakdown enables the relative importance of those fundamental components to be evaluated more accurately by country and sector. Furthermore, the Bayesian approach allows different hypotheses to be tested about the homogeneity of the behaviour of the above components with respect to the sector and the country where the firm develops its activity. The data analysed come from a sample of 23,293 firms in EU countries selected from the AMADEUS database. The period analysed ran from 1999 to 2007, and 21 sectors were analysed, chosen in such a way that there was a sufficiently large number of firms in each country-sector combination for the industry effects to be estimated accurately enough for meaningful comparisons to be made by sector and country. The analysis has been conducted by sector and by country from a Bayesian perspective, thus making the study more flexible and realistic, since the estimates obtained do not depend on asymptotic results. In general terms, the study finds that, although the industry effects are significant, the firm-specific effects are more important. That importance varies depending on the sector or country in which the firm carries out its activity. The influence of firm effects accounts for around 81% of total variation, and these effects display a significantly lower degree of persistence, with adjustment speeds oscillating around 34%. However, this pattern is not homogeneous but depends on the sector and country analysed. Industry effects, which also depend on the sector and country analysed, have a more marginal importance and are significantly more persistent, with adjustment speeds oscillating around 7-8%; this degree of persistence is very similar across most of the sectors and countries analysed.
Keywords: dynamic models, Bayesian inference, MCMC, abnormal returns, persistence of profits, return on assets
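One plausible reading of the decomposition described above, written as a state-space sketch; the exact specification and priors belong to the authors' hierarchical Bayesian model, so this is an assumption-laden illustration only:

```latex
% Hedged sketch of the permanent/transitory decomposition with industry and
% firm components; not the authors' exact specification.
\begin{align*}
  y_{ijt} &= s_{jt} + f_{ijt} + \varepsilon_{ijt},
      && \varepsilon_{ijt} \sim N(0,\sigma^2_\varepsilon) \quad \text{(transitory)} \\
  s_{jt}  &= s_{j,t-1} + \lambda_j\,(\mu_j - s_{j,t-1}) + \eta_{jt},
      && \text{industry component, adjustment speed } \lambda_j \\
  f_{ijt} &= f_{ij,t-1} + \phi_j\,(\mu_{ij} - f_{ij,t-1}) + u_{ijt},
      && \text{firm component, adjustment speed } \phi_j
\end{align*}
```

Here $y_{ijt}$ would be the abnormal ROA of firm $i$ in industry $j$ at year $t$; the adjustment speeds $\lambda_j$ and $\phi_j$ correspond to the roughly 7-8% (industry) and 34% (firm) figures the study reports.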
Procedia PDF Downloads 402

911 Testing the Life Cycle Theory on the Capital Structure Dynamics of Trade-Off and Pecking Order Theories: A Case of Retail, Industrial and Mining Sectors
Authors: Freddy Munzhelele
Abstract:
Setting: Empirical research has shown that the life cycle theory has an impact on firms' financing decisions, particularly dividend pay-outs. Accordingly, the life cycle theory posits that as a firm matures, it reaches a level and capacity at which it distributes more cash as dividends. On the other hand, young firms prioritise investment opportunity sets and their financing; thus, they pay little or no dividends. Research on firms' financing decisions has also demonstrated, among other things, the adoption of the trade-off and pecking order theories in the dynamics of firms' capital structure. The trade-off theory holds that firms weigh the costs and benefits of debt in setting their capital structures, while the pecking order theory holds that firms prefer a hierarchical order when choosing financing sources. The life cycle hypothesis explaining financial managers' decisions regarding the dynamics of capital structure appears to be an interesting link, yet this link has been neglected in corporate finance research. If this link is explored empirically, financial decision-making alternatives will be enhanced immensely, since no conclusive evidence has yet been found on the dynamics of capital structure. Aim: The aim of this study is to examine the impact of the life cycle theory on the trade-off and pecking order dynamics of the capital structure of firms listed in the retail, industrial and mining sectors of the JSE. These sectors are among the key contributors to GDP in the South African economy. Design and methodology: Following the postpositivist research paradigm, the study is quantitative in nature and utilises secondary data obtained from the financial statements of the sampled firms for the period 2010-2022. The firms' financial statements will be extracted from the IRESS database. Since the data will be in panel form, a combination of static and dynamic panel data estimators will be used to analyse the data. The overall data analysis will be done using the STATA program. Value add: This study directly investigates the link between the life cycle theory and the dynamics of capital structure decisions, particularly the trade-off and pecking order theories.
Keywords: life cycle theory, trade-off theory, pecking order theory, capital structure, JSE listed firms
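As a hedged sketch of one of the "static and dynamic panel data estimators" mentioned, here is a fixed-effects regression with a lagged dependent variable using linearmodels; the variable names are assumptions, and note that this simple estimator carries Nickell bias, which GMM-type dynamic panel estimators address:

```python
# Hedged sketch: dynamic panel regression of leverage (capital structure) on
# its own lag and life-cycle proxies. Columns `firm`, `year`, `leverage`,
# `firm_age`, `profitability` are invented placeholders.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("jse_firms.csv").set_index(["firm", "year"])
df["lag_leverage"] = df.groupby(level="firm")["leverage"].shift(1)

mod = PanelOLS.from_formula(
    "leverage ~ 1 + lag_leverage + firm_age + profitability + EntityEffects",
    data=df.dropna())
print(mod.fit(cov_type="clustered", cluster_entity=True))
```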
Procedia PDF Downloads 62

910 Milling Simulations with a 3-DOF Flexible Planar Robot
Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden
Abstract:
Manufacturing technologies have become continuously more diversified over the years. The increasing use of robots for various applications such as assembling, painting and welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure gives them great flexibility of motion, allowing them to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints, restricting their use to applications involving low cutting forces, especially finishing operations. Vibratory instabilities may also occur during machining and deteriorate precision, leading to scrap parts. Some researchers are therefore concerned with the identification of optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator aimed at finding optimized cutting parameters, in terms of depth of cut or feed per tooth, for example. The simulation environment combines an in-house milling routine (DyStaMill), which computes the cutting forces and material removal, with an in-house multibody library (EasyDyn), which is used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector submitted to milling forces is controlled through an inverse kinematics scheme, while the positions of its joints are controlled separately. Each joint is actuated through a servomotor whose transfer function has been computed in order to tune the corresponding controller. The output results feature the evolution of the cutting forces, with and without deformability of the robot structure, and the tracking errors of the end-effector. Illustrations of the resulting machined surfaces are also presented. The consideration of link flexibility highlighted an increase in the magnitude of the cutting forces. This proof of concept will enrich the database of results in robotic machining for potential improvements in production.
Keywords: control, milling, multibody, robotic, simulation
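For orientation, a closed-form inverse kinematics routine for a planar 3-R arm of the kind used to command the end-effector; the link lengths and target pose are illustrative, and the paper's actual EasyDyn/DyStaMill implementation is in-house code:

```python
# Hedged sketch: closed-form IK for a planar 3-R arm. Given a tool-tip target
# (x, y) and orientation phi, solve for the three joint angles.
import math

def ik_planar_3r(x, y, phi, L1=0.5, L2=0.4, L3=0.1, elbow=+1):
    """Joint angles (q1, q2, q3) placing the tip at (x, y) with orientation phi."""
    xw, yw = x - L3 * math.cos(phi), y - L3 * math.sin(phi)   # wrist centre
    c2 = (xw**2 + yw**2 - L1**2 - L2**2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("target outside workspace")
    q2 = math.atan2(elbow * math.sqrt(1 - c2**2), c2)         # elbow-up/down branch
    q1 = math.atan2(yw, xw) - math.atan2(L2 * math.sin(q2), L1 + L2 * math.cos(q2))
    q3 = phi - q1 - q2                                        # orientation closure
    return q1, q2, q3

print(ik_planar_3r(0.6, 0.3, math.radians(30)))   # example target pose
```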
Procedia PDF Downloads 249909 Hands on Tools to Improve Knowledge, Confidence and Skill of Clinical Disaster Providers
Authors: Lancer Scott
Abstract:
Purpose: High-quality clinical disaster medicine requires providers working collaboratively to care for multiple patients in chaotic environments; however, many providers lack adequate training. To address this deficit, we created a competency-based, 5-hour Emergency Preparedness Training (EPT) curriculum using didactics, small-group discussion, and kinetic learning. The goal was to evaluate the effect of a short course on improving provider knowledge, confidence and skills in disaster scenarios. Methods: Diverse groups of medical university students, health care professionals, and community members were enrolled between 2011 and 2014. The course consisted of didactic lectures, small-group exercises, and two live, multi-patient mass casualty incident (MCI) scenarios. The outcome measures were based on core competencies and performance objectives developed by a curriculum task force and assessed via trained facilitator observation, pre- and post-testing, and a course evaluation. Results: 708 participants were trained between November 2011 and August 2014, including 49.9% physicians, 31.9% medical students, 7.2% nurses, and 11% from various other healthcare professions. 100% of participants completed the pre-test and 71.9% completed the post-test, with the average proportion of correct answers increasing from 39% to 60%. Following didactics, trainees met 73% and 96% of the performance objectives for the two small-group exercises and 68.5% and 61.1% of the performance objectives for the two MCI scenarios. Average trainee self-assessment improved from 33/100 to 74/100 for overall knowledge and from 33/100 to 77/100 for overall skill with clinical disasters. The course assessment was completed by 34.3% of participants, of whom 91.5% highly recommended the course. Conclusion: A relatively short, intensive EPT course can improve the ability of a diverse group of disaster care providers to respond effectively to mass casualty scenarios. Keywords: clinical disaster medicine, training, hospital preparedness, surge capacity, education, curriculum, research, performance, student, physicians, nurses, health care providers, health care
Procedia PDF Downloads 193908 Role of P53 Codon 72 Polymorphism and Mir-146a Rs2910164 Polymorphism in Cervical Cancer
Authors: Hossein Rassi, Marjan Moradi Fard, Masoud Houshmand
Abstract:
Background: Cervical cancer is a multistep disease that is thought to result from an interaction between genetic background and environmental factors. Human papillomavirus (HPV) infection is the leading risk factor for cervical intraepithelial neoplasia (CIN) and cervical cancer. In addition, certain p53 and miRNA polymorphisms may play an important role in carcinogenesis. This study attempts to clarify the relation of p53 genotypes and the miR-146a rs2910164 polymorphism to cervical lesions. Method: Forty-two archival samples with cervical lesions were retrieved from Khatam Hospital, and 40 samples from healthy persons were used as a control group. A simple and rapid method was used to detect the simultaneous amplification of the HPV consensus L1 region and HPV-16, -18, -11, -31, -33 and -35, along with the b-globin gene as an internal control. Multiplex PCR was used for detection of p53 and miR-146a rs2910164 genotypes in our lab. Finally, data analysis was performed using version 7 of the Epi Info(TM) 2012 software and the chi-square (x2) test for trend. Results: Cervix lesions were collected from 42 patients with squamous metaplasia, cervical intraepithelial neoplasia, and cervical carcinoma. Successful DNA extraction was assessed by PCR amplification of the b-actin gene (99 bp). According to the results, the p53 GG genotype and the miR-146a rs2910164 CC genotype were significantly associated with an increased risk of cervical lesions in the study population. In this study, we detected HPV 18 in 13 of the 42 cervical cancer samples. Conclusion: An association between several SNPs and human papillomavirus has been observed in only a few studies. The differences between studies' findings may be attributable to differences in ethnicity, geographic situation and lifestyle in each region. The present study provided preliminary evidence that the p53 GG genotype and the miR-146a rs2910164 CC genotype may affect cervical cancer risk in the study population, interacting synergistically with the HPV 18 genotype. Our results demonstrate that testing for p53 codon 72 polymorphism genotypes and miR-146a rs2910164 polymorphism genotypes, in combination with HPV 18, can serve as a major risk indicator for the early identification of cervical cancers. Furthermore, the results indicate the possibility of primary prevention of cervical cancer by vaccination against HPV 18 in Iran. Keywords: cervical cancer, HPV18, p53 codon 72 polymorphism, miR-146a rs2910164 polymorphism
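For reference, the chi-square test for trend used here (Epi Info implements a Cochran-Armitage-type statistic) can be reproduced as below; the genotype counts in the example are illustrative, not the study's data.

```python
import numpy as np
from scipy.stats import chi2

def chi2_trend(cases, controls, scores=None):
    """Cochran-Armitage chi-square test for trend across ordered genotypes
    (e.g. GG, GC, CC); uncorrected binomial-variance form, 1 d.f."""
    cases, controls = np.asarray(cases, float), np.asarray(controls, float)
    if scores is None:
        scores = np.arange(len(cases), dtype=float)  # dose scores 0, 1, 2, ...
    n = cases + controls                   # column totals
    N, R = n.sum(), cases.sum()            # grand total, total cases
    p = R / N
    u = np.sum(scores * (cases - n * p))   # weighted observed minus expected
    var = p * (1 - p) * (np.sum(scores**2 * n) - np.sum(scores * n)**2 / N)
    stat = u**2 / var
    return stat, chi2.sf(stat, df=1)

# Illustrative genotype counts (GG, GC, CC) for cases vs controls
stat, pval = chi2_trend(cases=[20, 15, 7], controls=[28, 10, 2])
print(f"chi2 = {stat:.2f}, p = {pval:.4f}")
```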
Procedia PDF Downloads 457907 Enhancing Operational Efficiency and Patient Care at Johns Hopkins Aramco Healthcare through a Business Intelligence Framework
Authors: Muneera Mohammed Al-Dossary, Fatimah Mohammed Al-Dossary, Mashael Al-Shahrani, Amal Al-Tammemi
Abstract:
Johns Hopkins Aramco Healthcare (JAHA), a joint venture between Saudi Aramco and Johns Hopkins Medicine, delivers comprehensive healthcare services to a diverse patient population. Despite achieving high patient satisfaction rates and surpassing several operational targets, JAHA faces challenges such as appointment delays and resource inefficiencies. These issues highlight the need for an advanced, integrated approach to operational management. This paper proposes a Business Intelligence (BI) framework to address these challenges, leveraging tools such as Epic electronic health records and Tableau dashboards. The framework focuses on data integration, real-time monitoring, and predictive analytics to streamline operations and enhance decision-making. Key outcomes include reduced wait times (e.g., a 23% reduction in specialty clinic wait times) and improved operating room efficiency (from 95.83% to 98% completion rates). These advancements align with JAHA’s strategic objectives of optimizing resource utilization and delivering superior patient care. The findings underscore the transformative potential of BI in healthcare, enabling a shift from reactive to proactive operations management. The success of this implementation lays the foundation for future innovations, including machine learning models for more precise demand forecasting and resource allocation.Keywords: business intelligence, operational efficiency, healthcare management, predictive analytics, patient care improvement, data integration, real-time monitoring, resource optimization, Johns Hopkins Aramco Healthcare, electronic health records, Tableau dashboards, predictive modeling, efficiency metrics, resource utilization, patient satisfaction
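As a small illustration of the data-integration layer behind such dashboards, the pandas sketch below computes a per-clinic mean wait time, the kind of KPI a Tableau dashboard would refresh in near real time; the records and column names are hypothetical, not JAHA data, and the real pipeline draws on Epic EHR extracts.

```python
import pandas as pd

# Hypothetical appointment extract; in practice this would come from the
# Epic EHR, and the column names here are assumptions.
visits = pd.DataFrame({
    "clinic":    ["cardiology", "cardiology", "dermatology", "dermatology"],
    "scheduled": pd.to_datetime(["2024-03-01 09:00", "2024-03-01 09:30",
                                 "2024-03-01 10:00", "2024-03-01 10:15"]),
    "seen":      pd.to_datetime(["2024-03-01 09:20", "2024-03-01 09:50",
                                 "2024-03-01 10:05", "2024-03-01 10:40"]),
})

# KPI: mean patient wait in minutes per specialty clinic
visits["wait_min"] = (visits["seen"] - visits["scheduled"]).dt.total_seconds() / 60
print(visits.groupby("clinic")["wait_min"].mean())
```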
Procedia PDF Downloads 8906 Healthcare-SignNet: Advanced Video Classification for Medical Sign Language Recognition Using CNN and RNN Models
Authors: Chithra A. V., Somoshree Datta, Sandeep Nithyanandan
Abstract:
Sign Language Recognition (SLR) is the process of interpreting and translating sign language into spoken or written language using technological systems. It involves recognizing the hand gestures, facial expressions, and body movements that make up sign language communication. The primary goal of SLR is to facilitate communication between the hearing- and speech-impaired communities and those who do not understand sign language. Due to increased awareness and greater recognition of the rights and needs of the hearing- and speech-impaired community, sign language recognition has gained significant importance over the past 10 years. Technological advancements in the fields of Artificial Intelligence and Machine Learning have made it more practical and feasible to create accurate SLR systems. This paper presents a distinct approach to SLR by framing it as a video classification problem using Deep Learning (DL), whereby a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) is used. This research targets the integration of sign language recognition into healthcare settings, aiming to improve communication between medical professionals and patients with hearing impairments. The spatial features of each video frame are extracted using a CNN, which captures essential elements such as hand shapes, movements, and facial expressions. These features are then fed into an RNN that learns the temporal dependencies and patterns inherent in sign language sequences. The INCLUDE dataset has been enhanced with additional videos from the healthcare domain, and the model is evaluated on this enhanced dataset. Our model achieves 91% accuracy, representing state-of-the-art performance in this domain. The results highlight the effectiveness of treating SLR as a video classification task with the CNN-RNN architecture. This approach not only improves recognition accuracy but also offers a scalable solution for real-time SLR applications, significantly advancing the field of accessible communication technologies. Keywords: sign language recognition, deep learning, convolutional neural network, recurrent neural network
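A minimal PyTorch sketch of the CNN-RNN pattern described above follows; the ResNet-18 backbone, LSTM width and clip dimensions are illustrative choices, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class CnnRnnClassifier(nn.Module):
    """CNN-RNN video classifier: a CNN encodes each frame, an LSTM models
    the temporal sequence of frame features, a linear head classifies."""

    def __init__(self, num_classes, hidden_size=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features     # 512 for ResNet-18
        backbone.fc = nn.Identity()            # keep pooled frame features
        self.cnn = backbone
        self.rnn = nn.LSTM(feat_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):                  # clips: (B, T, C, H, W)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w))  # per-frame features
        _, (h_n, _) = self.rnn(feats.reshape(b, t, -1))  # temporal modelling
        return self.head(h_n[-1])              # logits from last hidden state

# Dummy batch: 2 clips of 16 RGB frames at 112x112
logits = CnnRnnClassifier(num_classes=50)(torch.randn(2, 16, 3, 112, 112))
print(logits.shape)  # torch.Size([2, 50])
```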
Procedia PDF Downloads 31905 Image Processing-Based Maize Disease Detection Using Mobile Application
Authors: Nathenal Thomas
Abstract:
In the food chain and in many other agricultural products, corn, also known as maize (scientific name Zea mays subsp.), is a widely produced agricultural product. Corn is highly adaptable: it comes in many different types, is employed in many different industrial processes, and tolerates a wide range of agro-climatic situations. In Ethiopia, maize is among the most widely grown crops, and small-scale corn farming may be a household's only source of food. These facts show that the country's demand for this crop is very high while, conversely, the crop's productivity is very low for a variety of reasons. The most damaging factor contributing to this imbalance between the crop's supply and demand is corn disease. The failure to diagnose diseases in maize plants until it is too late is one of the most important factors limiting crop output in Ethiopia. This study will aid in the early detection of such diseases and support farmers during the cultivation process, directly affecting the amount of maize produced. Diseases of maize plants, such as northern leaf blight and cercospora leaf spot, have distinct, visible symptoms. This study aims to detect the most frequent and damaging maize diseases using deep learning, the most effective subset of machine learning for image processing. Deep learning uses networks that can be trained from unlabeled data without supervision (unsupervised learning), loosely simulating the processes the human brain goes through when digesting data. Its applications include speech recognition, language translation, object classification, and decision-making. The Convolutional Neural Network (CNN), also known as a ConvNet, is a deep learning architecture widely used for image classification, object detection, face recognition, and related problems. This research uses a CNN as the state-of-the-art algorithm to detect maize diseases from photographs of maize leaves taken with a mobile phone. Keywords: CNN, zea mays subsp, leaf blight, cercospora leaf spot
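A sketch of how such a classifier might be set up with transfer learning is shown below; the MobileNetV2 backbone, the three class labels and the overall setup are assumptions for illustration, not the study's published pipeline, and the new head must be fine-tuned on labelled leaf images before its predictions mean anything.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Illustrative label set; the real one depends on the training data.
CLASSES = ["healthy", "northern_leaf_blight", "cercospora_leaf_spot"]

# Transfer learning: reuse ImageNet features, swap in a new classifier head
# (which still needs fine-tuning on labelled leaf photos).
model = models.mobilenet_v2(weights="IMAGENET1K_V1")  # light enough for mobile
model.classifier[1] = nn.Linear(model.last_channel, len(CLASSES))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def classify_leaf(path):
    """Classify one leaf photo, e.g. taken with a phone camera."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    return CLASSES[int(probs.argmax())], float(probs.max())
```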
Procedia PDF Downloads 75904 Trip Reduction in Turbo Machinery
Authors: Pranay Mathur, Carlo Michelassi, Simi Karatha, Gilda Pedoto
Abstract:
Industrial plant uptime is of the utmost importance for reliable, profitable and sustainable operation. Trips and failed starts have a major impact on plant reliability, and all plant operators focus their efforts on minimising them. The performance of these CTQs is measured with two metrics: MTBT (mean time between trips) and SR (starting reliability). These metrics help to identify the top failure modes and the units that need more effort to improve plant reliability. The Baker Hughes trip reduction program is structured to reduce these unwanted trips through: 1. Real-time machine operational parameters available remotely, capturing the signature of malfunctions, including the related boundary conditions. 2. A real-time, analytics-based alerting system available remotely. 3. Remote access to trip logs and alarms from the control system to identify the cause of events. 4. Continuous support to field engineers by remotely connecting them with subject matter experts. 5. Live tracking of key CTQs. 6. Benchmarking against the fleet. 7. Breaking failures down to their cause at the component level. 8. Investigating the top contributors and identifying the design and operational root causes. 9. Implementing corrective and preventive actions. 10. Assessing the effectiveness of implemented solutions using reliability growth models. 11. Developing analytics for predictive maintenance. With this approach, the Baker Hughes team is able to support customers in achieving their reliability key performance indicators for monitored units, with huge cost savings for plant operators. This presentation explains the approach and provides successful case studies, in particular 12 LNG and pipeline operators with about 140 gas compression line-ups that have adopted these techniques, significantly reducing the number of trips and improving MTBT. Keywords: reliability, availability, sustainability, digital infrastructure, weibull, effectiveness, automation, trips, fail start
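Item 10 mentions reliability growth models; a standard choice is the Crow-AMSAA (power-law NHPP) model, sketched below with a hypothetical trip history rather than fleet data. A fitted shape parameter beta < 1 indicates a falling trip rate, i.e. the corrective actions are working.

```python
import numpy as np

def crow_amsaa(trip_times, t_end):
    """Maximum-likelihood fit of the Crow-AMSAA reliability growth model
    for a time-terminated observation window of t_end operating hours."""
    t = np.asarray(trip_times, float)       # cumulative hours at each trip
    n = len(t)
    beta = n / np.sum(np.log(t_end / t))    # MLE shape parameter
    lam = n / t_end**beta                   # MLE scale parameter
    return beta, lam

# Hypothetical trip history over 8000 operating hours
trips = [120, 450, 900, 2100, 3900, 6800]
beta, lam = crow_amsaa(trips, t_end=8000)
# Instantaneous trip intensity is lam * beta * t**(beta - 1); its inverse
# is the current mean time between trips (MTBT).
mtbt_now = 1 / (lam * beta * 8000**(beta - 1))
print(f"beta = {beta:.2f}, current MTBT ~ {mtbt_now:.0f} h")
```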
Procedia PDF Downloads 77903 Understanding the Classification of Rain Microstructure and Estimation of Z-R Relationship using a Micro Rain Radar in Tropical Region
Authors: Tomiwa, Akinyemi Clement
Abstract:
Tropical regions experience diverse and complex precipitation patterns, posing significant challenges for accurate rainfall estimation and forecasting. This study addresses the problem of effectively classifying tropical rain types and refining the Z-R (Reflectivity-Rain Rate) relationship to enhance rainfall estimation accuracy. Through a combination of remote sensing, meteorological analysis, and machine learning, the research aims to develop an advanced classification framework capable of distinguishing between different types of tropical rain based on their unique characteristics. This involves utilizing high-resolution satellite imagery, radar data, and atmospheric parameters to categorize precipitation events into distinct classes, providing a comprehensive understanding of tropical rain systems. Additionally, the study seeks to improve the Z-R relationship, a crucial aspect of rainfall estimation. One year of rainfall data was analyzed using a Micro Rain Radar (MRR) located at The Federal University of Technology Akure, Nigeria, measuring rainfall parameters from ground level to a height of 4.8 km with a vertical resolution of 0.16 km. Rain rates were classified into low (stratiform) and high (convective) based on various microstructural attributes such as rain rates, liquid water content, Drop Size Distribution (DSD), average fall speed of the drops, and radar reflectivity. By integrating diverse datasets and employing advanced statistical techniques, the study aims to enhance the precision of Z-R models, offering a more reliable means of estimating rainfall rates from radar reflectivity data. This refined Z-R relationship holds significant potential for improving our understanding of tropical rain systems and enhancing forecasting accuracy in regions prone to heavy precipitation.Keywords: remote sensing, precipitation, drop size distribution, micro rain radar
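The Z-R refinement amounts to estimating the coefficients of the power law Z = aR^b; a minimal sketch of that fit via least squares in log-log space is shown below, using synthetic values roughly consistent with the classical Marshall-Palmer relation (Z = 200R^1.6) rather than actual MRR measurements.

```python
import numpy as np

def fit_z_r(Z_dBZ, R_mm_h):
    """Estimate a and b in Z = a * R**b from paired reflectivity (dBZ) and
    rain-rate (mm/h) samples, via a straight-line fit in log-log space."""
    Z_lin = 10 ** (np.asarray(Z_dBZ, float) / 10)  # dBZ -> linear mm^6 m^-3
    R = np.asarray(R_mm_h, float)
    b, log_a = np.polyfit(np.log10(R), np.log10(Z_lin), 1)
    return 10**log_a, b

# Synthetic samples generated from the Marshall-Palmer relation
R = np.array([0.5, 1, 2, 5, 10, 20, 50])
Z = 10 * np.log10(200 * R**1.6)
a, b = fit_z_r(Z, R)
print(f"Z = {a:.0f} * R^{b:.2f}")  # recovers a ~ 200, b ~ 1.6
```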
Procedia PDF Downloads 40