Spanish Language Violence Corpus: An Analysis of Offensive Language in Twitter
Authors: Beatriz Botella-Gil, Patricio Martínez-Barco, Lea Canales
Abstract:
The Internet and ICT are integral, omnipresent elements of our daily lives. Technologies have changed the way we see the world and relate to it. The number of companies in the ICT sector increases every year, and more and more work takes place online, from sending e-mails to the way companies promote themselves. In social life, ICTs have gained momentum. Social networks are useful for keeping in contact with family or friends who live far away. This change in how we manage our relationships through electronic devices and social media is experienced differently depending on a person's age. According to currently available data, people are increasingly connected to social media and other forms of online communication. It is therefore no surprise that violent content has also made its way into digital media. One important reason for this is the anonymity provided by social media, which gives the aggressor a sense of impunity. Moreover, it is not uncommon to find derogatory comments attacking a person's physical appearance, hobbies, or beliefs. This is why it is necessary to develop artificial intelligence tools that can track violent comments related to violent events, so that this type of violent online behavior can be deterred. The objective of our research is to create a guide for detecting and recording violent messages. Our annotation guide begins with a study of the problem of violent messages. First, we consider the characteristics a message should contain for it to be categorized as violent. Second, we consider the possibility of establishing different levels of aggressiveness. To build the corpus, we chose the social network Twitter for the ease of obtaining messages freely. We chose two recent, highly visible violent cases that occurred in Spain, both of which received a high degree of social media coverage and user comments.
Our corpus contains a total of 633 messages, manually tagged according to the characteristics we considered important, such as the verbs used and the presence of exclamations, insults, or negations. We consider it necessary to create wordlists of terms present in violent messages as indicators of violence, such as lists of negative verbs, insults, and negative phrases. As a final step, we will use machine learning systems to check the data obtained and the effectiveness of our guide.
Keywords: human language technologies, language modelling, offensive language detection, violent online content
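A wordlist-driven indicator of the kind described above can be sketched in a few lines. The Spanish word lists, token cleanup, and threshold here are invented for illustration and are not the annotation guide's actual resources:

```python
# Illustrative sketch: score a message against indicator wordlists.
# The lists and the threshold below are hypothetical examples.
INSULTS = {"idiota", "imbecil"}          # hypothetical insult list
NEGATIVE_VERBS = {"matar", "golpear"}    # hypothetical negative-verb list
NEGATIONS = {"no", "nunca"}              # hypothetical negation list

def violence_indicators(message: str) -> dict:
    # Count occurrences of each indicator type in a tokenized message.
    tokens = [t.strip("!¡¿?.,") for t in message.lower().split()]
    return {
        "insults": sum(t in INSULTS for t in tokens),
        "negative_verbs": sum(t in NEGATIVE_VERBS for t in tokens),
        "negations": sum(t in NEGATIONS for t in tokens),
        "exclamations": message.count("!"),
    }

def looks_violent(message: str, threshold: int = 2) -> bool:
    # Flag a message when enough indicators co-occur.
    counts = violence_indicators(message)
    return (counts["insults"] + counts["negative_verbs"]
            + counts["exclamations"]) >= threshold

flagged = looks_violent("¡Te voy a matar, idiota!")   # True
benign = looks_violent("buenos dias")                 # False
```

A trained classifier would replace the fixed threshold, but the same wordlist counts can serve as its input features.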
Analyses of Defects in Flexible Silicon Photovoltaic Modules via Thermal Imaging and Electroluminescence
Authors: S. Maleczek, K. Drabczyk, L. Bogdan, A. Iwan
Abstract:
It is known that industrial applications using solar panels constructed from silicon solar cells require high-efficiency performance. One of the main problems in solar panels is mechanical and structural defects, which decrease the generated power. Various techniques are used to analyse defects in solar cells; however, thermal imaging is a fast and simple method for locating them. The main goal of this work was to analyze defects in constructed flexible silicon photovoltaic modules via thermal imaging and the electroluminescence method. This work was realized for the GEKON project (No. GEKON2/O4/268473/23/2016) sponsored by The National Centre for Research and Development and The National Fund for Environmental Protection and Water Management. Thermal behavior was observed using a thermographic camera (VIGOcam v50, VIGO System S.A., Poland) with a conventional DC source. Electroluminescence was observed by Steinbeis Center Photovoltaics (Stuttgart, Germany) using a camera with a Si-CCD, 16 Mpix Kodak KAF-16803-type detector. The camera has a typical spectral response in the range 350 - 1100 nm with a maximum QE of 60 % at 550 nm. In our work, commercial silicon solar cells with a size of 156 × 156 mm were cut into nine parts (called single solar cells) and used to create photovoltaic modules with a size of 160 × 70 cm (containing about 80 single solar cells). Flexible silicon photovoltaic modules on polyamide or polyester fabric were constructed and investigated, taking into consideration anomalies on the surface of the modules. Thermal imaging provided evidence of visible voltage-activated conduction. In electroluminescence images, two regions are noticeable: darker regions, where the solar cell is inactive, and brighter regions corresponding to correctly working photovoltaic cells.
The electroluminescence method is non-destructive and gives greater image resolution, thereby allowing a more precise evaluation of microcracks in solar cells after the lamination process. Our study showed good correlations between defects observed by thermal imaging and electroluminescence. Finally, we can conclude that the thermographic examination of large-scale photovoltaic modules allows fast, simple, and inexpensive localization of defects in single solar cells and modules. Moreover, the thermographic camera was also useful for detecting the electrical interconnection between single solar cells.
Keywords: electro-luminescence, flexible devices, silicon solar cells, thermal imaging
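The darker/brighter distinction in the electroluminescence images lends itself to a simple thresholding sketch. The pixel grid and threshold below are toy values for illustration, not the 16 Mpix camera's actual output or the authors' processing pipeline:

```python
# Toy sketch of locating inactive (dark) regions in an EL image stored
# as a grid of pixel intensities (0-255). The threshold is illustrative;
# real EL analysis would operate on the camera's full-resolution frames.
def inactive_regions(el_image, threshold=60):
    # Return (row, col) positions whose brightness falls below the
    # threshold, i.e. candidate defective / inactive areas of the module.
    return [(r, c)
            for r, row in enumerate(el_image)
            for c, value in enumerate(row)
            if value < threshold]

frame = [[200, 210,  30],
         [195,  25, 205]]
defects = inactive_regions(frame)   # [(0, 2), (1, 1)]
```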
Enhanced CNN for Rice Leaf Disease Classification in Mobile Applications
Authors: Kayne Uriel K. Rodrigo, Jerriane Hillary Heart S. Marcial, Samuel C. Brillo
Abstract:
Rice leaf diseases significantly impact yield in rice-dependent countries, affecting their agricultural sectors. As part of precision agriculture, early and accurate detection of these diseases is crucial for effective mitigation practices and minimizing crop losses. Hence, this study proposes an enhancement to the Convolutional Neural Network (CNN), a widely used method for rice leaf disease image classification, by incorporating MobileViTV2, a recent architecture that combines CNN and Vision Transformer models while maintaining fewer parameters, making it suitable for broader deployment on edge devices. Our methodology utilizes a publicly available rice disease image dataset from Kaggle, which was validated by a university structural biologist following the guidelines provided by the Philippine Rice Research Institute (PhilRice). Modifications to the dataset include renaming certain disease categories and augmenting the rice leaf image data through rotation, scaling, and flipping. The enhanced dataset was then used to train the MobileViTV2 model using the Timm library. Our approach achieved notable performance: 98% accuracy in both training and validation, 6% training and validation loss, per-label Receiver Operating Characteristic (ROC) curve areas ranging from 95% to 100%, and an F1 score of 97%. These metrics demonstrate a significant improvement over a conventional CNN-based approach, which, in a previous 2022 study, achieved only 78% accuracy using 5 convolutional layers and 2 dense layers. We thus conclude that MobileViTV2, with its fewer parameters, outperforms traditional CNN models, particularly when applied to rice leaf disease identification.
For future work, we recommend extending this model to include datasets validated by international rice experts and broadening the scope to accommodate biotic factors such as rice pests, as well as abiotic stressors such as climate, soil quality, and geographic information, which could improve the accuracy of disease prediction.
Keywords: convolutional neural network, MobileViTV2, rice leaf disease, precision agriculture, image classification, vision transformer
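Two of the augmentations mentioned above (rotation and flipping) can be illustrated on a toy pixel grid. This is a minimal stdlib sketch, not the actual pipeline used with the Timm library, which would rely on a library such as torchvision:

```python
# Minimal sketch of two augmentations on an image stored as a list of
# pixel rows: 90-degree clockwise rotation and horizontal flip.
def rotate90(img):
    # Reverse the rows, then transpose: the bottom-left pixel
    # becomes the top-left pixel of the rotated image.
    return [list(row) for row in zip(*img[::-1])]

def hflip(img):
    # Mirror each row left-to-right.
    return [row[::-1] for row in img]

img = [[1, 2],
       [3, 4]]
rotated = rotate90(img)   # [[3, 1], [4, 2]]
flipped = hflip(img)      # [[2, 1], [4, 3]]
```

Applying such transforms to every training image multiplies the effective dataset size without collecting new leaf photographs.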
Digital Transformation and Digitalization of Public Administration
Authors: Govind Kumar
Abstract:
The concept of ‘e-governance’, brought about by the new wave of reforms, namely ‘LPG’ (liberalization, privatization, globalization), in the early 1990s, has been enabling governments across the globe to digitally transform themselves. Digital transformation supports governments with better-informed decisions, optimization of the rational use of resources, facilitation of cost-benefit analyses, and elimination of redundancy and corruption with the help of ICT-based application interfaces. ICT-based applications and technologies have enormous potential for effecting positive change in the social lives of the global citizenry. Supercomputers test and analyze millions of drug molecules for developing candidate vaccines to combat the global pandemic. Further, e-commerce portals help distribute and supply household items and medicines, while videoconferencing tools provide a visual interface between clients and hosts. Besides, crop yields are being maximized with the help of drones and machine learning, whereas satellite data, artificial intelligence, and cloud computing help governments detect illegal mining, tackle deforestation, and manage freshwater resources. Such e-applications have the potential to take governance an extra mile by achieving the five Es of e-governance (effective, efficient, easy, empower, and equity) and the six Rs of sustainable development (reduce, reuse, recycle, recover, redesign, and remanufacture). If such digital transformation gains traction within the government framework, it will replace traditional administration with the digitalization of public administration. On the other hand, it confronts governments with a new set of challenges, such as the digital divide, e-illiteracy, and the technological divide, and problems such as handling e-waste, technological obsolescence, cyber terrorism, e-fraud, hacking, and phishing. Therefore, it is essential to bring in the right mixture of technological and humanistic interventions to address these issues.
This is because technology lacks an emotional quotient, and administration does not work like technology; neither is effective on its own unless a blend of technology and a humane face is brought into the administration. The paper will empirically analyze the significance of the technological framework of digital transformation within the government setup for the digitalization of public administration, on the basis of a synthesis of two case studies drawn from two diverse fields of administration, and will present a future framework for the study.
Keywords: digital transformation, electronic governance, public administration, knowledge framework
Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance in many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these attacks aim to alter the results of a DNN by slightly modifying its inputs. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on adversarial attacks and defenses on DNNs for image classification. We study two types of adversarial attacks: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate input images into different categories. An adversarial attack slightly alters an image to move it over a decision boundary, causing the DNN to misclassify it. The FGSM attack obtains the gradient with respect to the image and updates the image once, based on the gradient, to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also a targeted attack, designed to make the machine classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training: instead of training the neural network only on clean examples, we explicitly let it learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively.
If we use FGSM training as a defense, the classification accuracy improves greatly, from 39.50% to 92.31% under FGSM attacks and from 34.01% to 75.63% under PGD attacks. To further improve classification accuracy under adversarial attacks, we can use the stronger PGD training method, which improves accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that neither FGSM nor PGD training affects the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defenses are overall significantly more effective than their FGSM counterparts.
Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
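The FGSM and PGD updates described above can be sketched on a toy differentiable model. The linear classifier, step sizes, and input point below are illustrative, not the MNIST DNN from the experiments:

```python
import math

# Toy linear classifier on 2-D inputs: f(x) = w.x + b, predicted
# label = sign(f(x)). Its logistic-loss gradient is analytic, so
# both attacks can be written without an autodiff framework.
w, b = [2.0, -1.0], 0.0

def f(x):
    return w[0] * x[0] + w[1] * x[1] + b

def loss_grad(x, y):
    # Gradient w.r.t. x of the logistic loss log(1 + exp(-y * f(x))),
    # for a label y in {-1, +1}.
    s = 1.0 / (1.0 + math.exp(y * f(x)))
    return [-y * s * wi for wi in w]

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm(x, y, eps):
    # One big step of size eps in the direction of the gradient sign.
    g = loss_grad(x, y)
    return [xi + eps * sign(gi) for xi, gi in zip(x, g)]

def pgd(x, y, eps, alpha=0.05, steps=20):
    # Many small signed steps of size alpha, each projected back
    # into the eps-ball (here: box) around the original input x.
    adv = list(x)
    for _ in range(steps):
        g = loss_grad(adv, y)
        adv = [ai + alpha * sign(gi) for ai, gi in zip(adv, g)]
        adv = [min(max(ai, xi - eps), xi + eps) for ai, xi in zip(adv, x)]
    return adv

x, y = [0.3, 0.1], 1          # correctly classified: f(x) = 0.5 > 0
x_adv = fgsm(x, y, eps=0.4)   # crosses the boundary: f(x_adv) < 0
```

Adversarial (FGSM or PGD) training simply generates such perturbed points on the fly and includes them in each training batch.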
European Prosecutor's Office: Chances and Threats; Brief to Polish Perspective
Authors: Katarzyna Stoklosa
Abstract:
Introduction: The European Public Prosecutor’s Office (EPPO) is an independent office in the European Union, established under Article 86 of the Treaty on the Functioning of the European Union, introduced by the Treaty of Lisbon, following the method of enhanced cooperation. EPPO is aimed at combating crimes against the EU’s financial interests and fraud against the EU budget. On the one hand, EPPO offers a chance to fight organized crime effectively; on the other, it appears to be a threat to member states that link the question of sovereignty with justice. It is a new institution that will become effective from 2020, which is why it requires prior analysis. Methodology: The author uses statistical and comparative methods, collecting and analyzing the work of current institutions such as Europol and Eurojust, as well as the future impact of EPPO on the detection and prosecution of crimes. The author will also conduct a questionnaire among students and academic staff concerning the perception of EU institutions and the need to create new entities dealing with inter-agency cooperation in criminal matters. On the basis of this research, the author will outline present forms of cooperation between member states and the changes in fighting financial crimes that will arise under the new regulation. Major Findings of the Study: Analysis and research show that EPPO is an institution based on the principle of mutual recognition, which often does not work in cooperation between Member States. Distrust and problems with the recognition of judgments of other EU Member States may significantly affect the functioning of EPPO. Poland is not part of the EPPO because arguments have been raised that the European Public Prosecutor's Office interferes too much with Member States’ sovereignty and duplicates competences.
The research and analyses carried out by the author show that EPPO has completely new competences; for example, it may file indictments against perpetrators of financial crimes. However, according to the author's research, such competences may undermine sovereignty and the principle of protecting the public order of the EU. Conclusion: The analysis supports the following thesis: EPPO is the only possible way to fight organized financial crime effectively. Nevertheless, Polish doubts should not simply be dismissed. Institutions such as EPPO must properly respect the sovereignty of member states. Even instruments of this kind must not provoke political disputes, because there is no other way to resolve the problem of international crime effectively.
Keywords: criminal trial, economic crimes, European Public Prosecutor's Office, European Union
Anticancer Activity of Milk Fat Rich in Conjugated Linoleic Acid Against Ehrlich Ascites Carcinoma Cells in Female Swiss Albino Mice
Authors: Diea Gamal Abo El-Hassan, Salwa Ahmed Aly, Abdelrahman Mahmoud Abdelgwad
Abstract:
The major conjugated linoleic acid (CLA) isomers have an anticancer effect, especially against breast cancer cells, inhibiting cell growth and inducing cell death. CLA also has several health benefits in vivo, including antiatherogenesis, antiobesity, and modulation of immune function. The present study aimed to assess the safety and anticancer effects of milk fat CLA against in vivo Ehrlich ascites carcinoma (EAC) in female Swiss albino mice. This was based on an acute toxicity study, detection of tumor growth, the life span of EAC-bearing hosts, and simultaneous alterations in the hematological, biochemical, and histopathological profiles. Materials and Methods: One hundred and fifty adult female mice were equally divided into five groups. Groups (1-2) were normal controls, and Groups (3-5) were tumor-transplanted mice (TTM) inoculated intraperitoneally with EAC cells (2×10⁶/0.2 mL). Group (3) was the TTM positive control. Group (4) TTM were fed orally on a balanced diet supplemented with milk fat CLA (40 mg CLA/kg body weight). Group (5) TTM were fed orally on a balanced diet supplemented with the same level of CLA 28 days before tumor cell inoculation. Blood samples and specimens from liver and kidney were collected from each group. The effect of milk fat CLA on tumor growth, the life span of TTM, and simultaneous alterations in the hematological, biochemical, and histopathological profiles were examined. Results: For CLA-treated TTM, a significant decrease in tumor weight, ascitic volume, and viable Ehrlich cell count, accompanied by an increase in life span, was observed. Hematological and biochemical profiles reverted to more or less normal levels, and histopathology showed minimal effects.
Conclusion: The present study proved the safety and anticancer efficiency of milk fat CLA and provides a scientific basis for its medicinal use as an anticancer agent, attributable to the additive or synergistic effects of its isomers.
Keywords: anticancer activity, conjugated linoleic acid, Ehrlich ascites carcinoma, % increase in life span, mean survival time, tumor transplanted mice
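The life-span metrics named in the keywords (mean survival time and % increase in life span) follow standard formulas; a minimal sketch with invented survival data, not the study's actual measurements:

```python
# Standard life-span metrics for tumor-transplanted mice.
# MST = mean survival time of a group; %ILS = percent increase in
# life span of a treated group relative to the untreated tumor control.
def mean_survival_time(days):
    return sum(days) / len(days)

def percent_ils(treated_days, control_days):
    mst_treated = mean_survival_time(treated_days)
    mst_control = mean_survival_time(control_days)
    return (mst_treated - mst_control) / mst_control * 100.0

control = [18, 20, 22]   # hypothetical EAC control survival (days)
treated = [27, 30, 33]   # hypothetical CLA-fed group survival (days)
ils = percent_ils(treated, control)   # 50.0
```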
Laboratory Diagnostic Testing of Peste des Petits Ruminants in Georgia
Authors: Nino G. Vepkhvadze, Tea Enukidze
Abstract:
Every year, countries around the world face the risk of the spread of infectious diseases that bring significant ecological and socio-economic damage. Hence the emphasized importance of food product safety, an issue of interest for many countries. Addressing these risks requires preventive measures against the diseases, accurate diagnostic results, leadership, and management. Peste des petits ruminants (PPR) is caused by a morbillivirus closely related to the rinderpest virus. PPR is a transboundary disease that emerges and evolves, and it is considered one of the most damaging animal diseases. The disease poses a serious threat to sheep breeding at a time when sheep and goat farms are growing significantly within the country. In January 2016, PPR was detected in Georgia. To date, the origin of the virus, the age relationship of affected ruminants, and the distribution of PPRV in Georgia remain unclear. Due to the nature of PPR and breeding practices in the country, re-emergence of the disease in Georgia is highly likely. The purpose of the studies is to provide laboratories with efficient tools allowing the early detection of PPR emergence and re-emergence. This study is being accomplished under the Biological Threat Reduction Program with the support of the Defense Threat Reduction Agency (DTRA). The studies aim to investigate samples and identify areas at high risk of the disease. Georgia has a high density of free-ranging small ruminant herds close to international borders. The Kakheti region, in Eastern Georgia, will be considered an area of high priority for PPR surveillance. For this reason, in 2019, n=484 sheep and goat serum and blood samples from the same animals were investigated in the Kakheti region using serological and molecular biology methods. All samples were negative by RT-PCR, and n=6 sheep samples were seropositive by ELISA-Ab.
Future efforts will be concentrated in areas where the risk of PPR might be high, such as the regions of Georgia bordering other countries. For diagnostics, it is important to integrate PPRV knowledge with epidemiological data. Based on these diagnostics, the relevant agencies will be able to conduct disease surveillance and control.
Keywords: animal disease, especially dangerous pathogen, laboratory diagnostics, virus
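The 6 ELISA-Ab positives out of n=484 reported above correspond to an apparent seroprevalence, to which an uncertainty range can be attached. The Wilson score interval shown here is one standard way to do this, not a method the authors state they used:

```python
import math

def wilson_interval(positives, n, z=1.96):
    # Wilson score interval for a binomial proportion
    # (z = 1.96 for a 95% confidence level).
    p = positives / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Apparent seroprevalence from the survey: 6 positives of 484.
prevalence = 6 / 484              # about 1.24%
lo, hi = wilson_interval(6, 484)  # roughly 0.6% to 2.7%
```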
Emergence of Fluoroquinolone Resistance in Pigs, Nigeria
Authors: Igbakura I. Luga, Alex A. Adikwu
Abstract:
A comparison of resistance to quinolones was carried out on isolates of Shiga toxin-producing Escherichia coli O157:H7 from cattle and mecA- and nuc-gene-harbouring Staphylococcus aureus from pigs. The isolates were tested separately in the first and current decades of the 21st century. The objective was to demonstrate the dissemination of resistance to this frontline class of antibiotic by bacteria from food animals and to bring to the limelight the spread of antibiotic resistance in Nigeria. A total of 10 isolates of E. coli O157:H7 and 9 of mecA- and nuc-gene-harbouring S. aureus were obtained following isolation, biochemical testing, and serological identification using the Remel Wellcolex E. coli O157:H7 test. Shiga toxin production in the E. coli O157:H7 isolates was screened using the verotoxin E. coli reverse passive latex agglutination (VTEC-RPLA) test, and the mecA and nuc genes in S. aureus were identified molecularly. Detection of the mecA and nuc genes was carried out using the protocol of the Technical University of Denmark (DTU) with the following primers: mecA-1: 5'-GGGATCATAGCGTCATTATTC-3', mecA-2: 5'-AACGATTGTGACACGATAGCC-3', nuc-1: 5'-TCAGCAAATGCATCACAAACAG-3', nuc-2: 5'-CGTAAATGCACTTGCTTCAGG-3' for the mecA and nuc genes, respectively. The nuc gene confirms the isolates as S. aureus, and the mecA gene marks them as methicillin-resistant and therefore pathogenic to man. The fluoroquinolones used in the antibiotic resistance testing were norfloxacin (10 µg) and ciprofloxacin (5 µg) for the E. coli O157:H7 isolates and ciprofloxacin (5 µg) for the S. aureus isolates. Susceptibility was tested using the disk diffusion method on Mueller-Hinton agar. Fluoroquinolone resistance was not detected in isolates of E. coli O157:H7 from cattle. However, 44% (4/9) of the S. aureus isolates were resistant to ciprofloxacin. Resistance of up to 44% in isolates of mecA- and nuc-gene-harbouring S. aureus is compelling evidence of the rapid spread of antibiotic resistance in bacteria from food animals in Nigeria. Ciprofloxacin is a drug of choice for the treatment of typhoid fever; therefore, widespread resistance to it in pathogenic bacteria is of great public health significance. The study concludes that antibiotic resistance in bacteria from food animals is on the increase in Nigeria. The National Agency for Food and Drug Administration and Control (NAFDAC) in Nigeria should implement the World Health Organization (WHO) global action plan on antimicrobial resistance. A good starting point would be coordinating the draft tripartite antimicrobial resistance monitoring and evaluation (M&E) framework of the WHO, the World Organisation for Animal Health (OIE), and the Food and Agriculture Organization (FAO) in Nigeria.
Keywords: fluoroquinolone, Nigeria, resistance, Staphylococcus aureus
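Disk diffusion results like those above are read by comparing inhibition-zone diameters against published breakpoints. The breakpoint values in this sketch are illustrative figures in the style of the CLSI tables for ciprofloxacin (5 µg) against S. aureus and must be verified against the current CLSI M100 standard before any real use:

```python
# Sketch of zone-diameter interpretation for the disk diffusion test.
# Breakpoints (in mm) are illustrative, assumed values, not quoted
# from the study or guaranteed to match the current CLSI M100 tables.
BREAKPOINTS = {"ciprofloxacin": {"susceptible": 21, "resistant": 15}}

def interpret(drug, zone_mm):
    # Zones at or above the susceptible breakpoint read "S", at or
    # below the resistant breakpoint read "R", and "I" in between.
    bp = BREAKPOINTS[drug]
    if zone_mm >= bp["susceptible"]:
        return "S"
    if zone_mm <= bp["resistant"]:
        return "R"
    return "I"

result = interpret("ciprofloxacin", 12)   # "R"
```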
Nondecoupling Signatures of Supersymmetry and an Lμ-Lτ Gauge Boson at Belle-II
Authors: Heerak Banerjee, Sourov Roy
Abstract:
Supersymmetry, one of the most celebrated fields of study for explaining experimental observations where the standard model (SM) falls short, is reeling from the lack of experimental vindication. At the same time, the idea of additional gauge symmetry, in particular the gauged Lμ-Lτ symmetric models, has also generated significant interest. Such models have been extensively proposed in order to explain the tantalizing discrepancy between the predicted and measured values of the muon anomalous magnetic moment, alongside several other issues plaguing the SM. While very little parameter space within these models remains unconstrained, this work finds that the γ + Missing Energy (ME) signal at the Belle-II detector will be a smoking gun for supersymmetry (SUSY) in the presence of a gauged U(1)Lμ-Lτ symmetry. A remarkable consequence of breaking the enhanced symmetry appearing in the limit of degenerate (s)leptons is the nondecoupling of the radiative contribution of heavy charged sleptons to the γ-Z΄ kinetic mixing. The signal process, e⁺e⁻ → γZ΄ → γ + ME, is an outcome of this ubiquitous feature. Taking into account the severe constraints on gauged Lμ-Lτ models from several low-energy observables, it is shown that any significant excess in all but the highest photon energy bin would be an undeniable signature of such heavy scalar fields in SUSY coupling to the additional gauge boson Z΄. The number of signal events depends crucially on the logarithm of the ratio of the stau to smuon mass in the presence of SUSY. In addition, the number is inversely proportional to the e⁺e⁻ collision energy, making a low-energy, high-luminosity collider like Belle-II an ideal testing ground for this channel. This process can probe large swathes of the hitherto free slepton mass ratio vs. additional gauge coupling (gₓ) parameter space. More importantly, it can explore the narrow slice of Z΄ mass (MZ΄) vs. gₓ parameter space still allowed in gauged U(1)Lμ-Lτ models for superheavy sparticles.
The finding that the signal significance is independent of the individual slepton masses is a particularly exciting prospect. Further, the prospect that signatures of even superheavy SUSY particles that may have escaped detection at the LHC could show up at the Belle-II detector is an invigorating revelation.
Keywords: additional gauge symmetry, electron-positron collider, kinetic mixing, nondecoupling radiative effect, supersymmetry
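The nondecoupling logarithm described above can be written schematically. The abstract does not quote the loop factor, so the prefactor here is indicative only; the essential point is the dependence on the slepton mass ratio rather than on the individual masses:

```latex
% Schematic one-loop kinetic mixing induced by a non-degenerate
% smuon/stau pair; the overall loop factor is indicative, not taken
% from the paper -- only the logarithmic mass-ratio dependence is.
\epsilon_{\gamma Z'} \;\propto\; \frac{e\, g_X}{16\pi^2}\,
    \ln\!\left(\frac{m_{\tilde{\tau}}^{2}}{m_{\tilde{\mu}}^{2}}\right)
```

Because the mixing vanishes for degenerate sleptons and grows only logarithmically with their ratio, the signal rate probes the ratio itself, consistent with the significance being independent of the individual slepton masses.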
Interpretation of Two Indices for the Prediction of Cardiovascular Risk in Pediatric Obesity
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Obesity and weight gain are associated with an increased risk of developing cardiovascular diseases and with the progression of liver fibrosis. The aspartate transaminase-to-platelet ratio index (APRI) and the fibrosis-4 index (FIB-4) were primarily conceived as formulas capable of differentiating hepatitis from cirrhosis. Recently, they have found clinical use as measures of liver fibrosis and cardiovascular risk. However, their status in children has not yet been evaluated in detail. The aim of this study is to determine APRI and FIB-4 status in obese (OB) children and compare them with the values found in children with a normal body mass index (N-BMI). A total of sixty-eight children examined in the outpatient clinics of the Pediatrics Department of Tekirdag Namik Kemal University Medical Faculty were included in the study. Two groups were constituted. The first group comprised thirty-five children with N-BMI, whose age- and sex-dependent BMI values varied between the 15th and 85th percentiles. The second group comprised thirty-three OB children whose BMI percentile values were between the 95th and 99th percentiles. Anthropometric measurements and routine biochemical tests were performed. Using these parameters, values for the related indices, BMI, APRI, and FIB-4, were calculated. Appropriate statistical tests were used for the evaluation of the study data. The statistical significance level was accepted as p<0.05. In the OB group, the values found for APRI and FIB-4 were higher than those calculated for the N-BMI group; however, the difference between the groups was not statistically significant. A similar pattern was detected for triglyceride (TRG) values. The correlation coefficient and degree of significance between APRI and FIB-4 were r=0.336 and p=0.065 in the N-BMI group, whereas they were r=0.707 and p=0.001 in the OB group.
Associations of these two indices with TRG showed that this parameter was strongly correlated (p<0.001) with both APRI and FIB-4 in the OB group, whereas no correlation was found in children with N-BMI. Triglycerides are associated with an increased risk of fatty liver, which can progress to severe clinical problems such as steatohepatitis, which can in turn lead to liver fibrosis. Triglycerides are also an independent risk factor for cardiovascular disease. In conclusion, the lack of correlation of TRG with APRI and FIB-4 in children with N-BMI, along with the strong correlations of TRG with these indices in OB children, indicates a possible early tendency towards the development of fatty liver in OB children. This finding also points to a potential risk of cardiovascular pathologies in OB children. The nature of the difference between the APRI vs. FIB-4 correlations in the N-BMI and OB groups (no correlation versus high correlation, respectively) may indicate the importance of including the age and alanine transaminase parameters, in addition to AST and PLT, in the formula designed for FIB-4.
Keywords: APRI, children, FIB-4, obesity, triglycerides
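The two indices compared above have standard published definitions: APRI uses AST, its upper limit of normal (ULN), and the platelet count, while FIB-4 adds age and ALT. A minimal sketch with illustrative input values, not the study's data:

```python
import math

# Standard formulas for the two liver-fibrosis indices discussed above.
# AST/ALT in IU/L, platelets in 10^9/L, age in years. The input values
# at the bottom are invented for illustration.
def apri(ast, ast_uln, platelets):
    # APRI = (AST / AST upper limit of normal) * 100 / platelet count.
    return (ast / ast_uln) * 100.0 / platelets

def fib4(age, ast, alt, platelets):
    # FIB-4 = (age * AST) / (platelets * sqrt(ALT)).
    return (age * ast) / (platelets * math.sqrt(alt))

score_apri = apri(ast=30.0, ast_uln=40.0, platelets=250.0)        # 0.3
score_fib4 = fib4(age=10.0, ast=30.0, alt=25.0, platelets=250.0)  # 0.24
```

Note that FIB-4 is the only one of the two that takes age and ALT as inputs, which is the point made above about the differing correlation patterns.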
Human Rabies Survivors in India: Epidemiological, Immunological and Virological Studies
Authors: Madhusudana S. N., Reeta Mani, Ashwini S. Satishchandra P., Netravati, Udhani V., Fiaz A., Karande S.
Abstract:
Rabies is an acute encephalitis that is considered 100% fatal despite occasional reports of survivors. However, in recent times, more cases of human rabies survivors are being reported. In the last 5 years, there have been six laboratory-confirmed human rabies survivors in India alone. All cases were children below 15 years, and all contracted the disease through dog bites. All of them had also received a full or partial course of rabies vaccination, and 4 out of 6 had also received rabies immunoglobulin. All cases were treated in intensive care units in hospitals in Bangalore, Mumbai, Chandigarh, Lucknow, and Goa. We report here the results of immunological and virological studies conducted at our laboratory on these patients. The clinical samples obtained from these patients were serum, CSF, nuchal skin biopsy, and saliva. Serum and CSF samples were subjected to the standard RFFIT for estimation of rabies neutralizing antibodies. Skin biopsy, CSF, and saliva were processed by TaqMan real-time PCR for detection of viral RNA. CSF, saliva, and skin homogenates were also processed for virus isolation by inoculation of suckling mice. The PBMCs isolated from fresh blood were subjected to an ELISPOT assay to determine the type of immune response (Th1/Th2). Both CSF and serum were also investigated for selected cytokines by Luminex assay. The levels of antibodies to the virus G protein and N protein were determined by ELISA. All survivors had very high titers of RVNA in serum and CSF, 100-fold higher than in non-survivors and vaccine controls. A five-fold rise in titer could be demonstrated in 4 out of 6 patients. All survivors had a significant increase in antibodies to G protein in both CSF and serum when compared to non-survivors. There was a profound and robust Th1 response in all survivors, indicating that interferon gamma could be an important factor in virus clearance. We could isolate viral RNA in only one patient, four years after he had developed symptoms.
The partial N gene sequencing revealed 99% homology to the species I strain prevalent in India. Levels of selected cytokines in CSF and serum did not reveal any difference between survivors and non-survivors. To conclude, survival from rabies is mediated by virus-specific immune responses of the host, and clearance of rabies virus from the CNS may involve the participation of both Th2 and Th1 immune responses.
Keywords: rabies, rabies treatment, rabies survivors, immune response in rabies encephalitis
Procedia PDF Downloads 330
302 A Holistic Analysis of the Emergency Call: From in Situ Negotiation to Policy Frameworks and Back
Authors: Jo Angouri, Charlotte Kennedy, Shawnea Ting, David Rawlinson, Matthew Booker, Nigel Rees
Abstract:
Ambulance services need to balance the large volume of emergency (999 in the UK) calls they receive (e.g., West Midlands Ambulance Service reports about 4,000 999 calls per day; about 679,000 calls per year are received in Wales) with dispatching limited resources for on-site intervention to the most critical cases. The process by which Emergency Medical Dispatch (EMD) decisions are made is related to risk assessment and involves the caller and call-taker, as well as clinical teams, negotiating risk levels on a case-by-case basis. The Medical Priority Dispatch System (MPDS, also referred to as the Advanced Medical Priority Dispatch System, AMPDS) is used in the UK by NHS Trusts (e.g., WAST) to process and prioritise 999 calls. MPDS/AMPDS provides structured protocols for call prioritisation and call management. Protocols/policy frameworks have not been examined before in the way we propose in our project. In more detail, the risk factors that play a role in the EMD negotiation between the caller and call-taker have been analysed in both medical and social science research. Research has focused on the structural, morphological and phonological aspects that could improve, and train, human-to-human interaction or automate risk detection, as well as the medical factors that need to be captured from the caller to inform the dispatch decision. There are two significant gaps in our knowledge that we address in our work: 1. the role of backstage clinical teams in translating the caller/call-taker interaction in their internal risk negotiation and, 2. the role of policy frameworks, protocols and regulations in the framing of institutional priorities and resource allocation. We take a multi-method approach and combine the analysis of 999 calls with the analysis of policy documents. We draw on interaction analysis, corpus methodologies and thematic analysis.
In this paper, we report on our preliminary findings and focus in particular on the risk factors we have identified and their relationship with the regulations that create the frame within which teams operate. We close the paper with implications of our study for providing evidence-based policy intervention and recommendations for further research.
Keywords: emergency (999) call, interaction analysis, discourse analysis, ambulance dispatch, medical discourse
Procedia PDF Downloads 103
301 An Investigation of the Structural and Microstructural Properties of Zn1-xCoxO Thin Films Applied as Gas Sensors
Authors: Ariadne C. Catto, Luis F. da Silva, Khalifa Aguir, Valmor Roberto Mastelaro
Abstract:
Zinc oxide (ZnO), pure or doped, is one of the most promising metal oxide semiconductors for gas sensing applications due to its well-known high surface-to-volume ratio and surface conductivity. It has been shown that ZnO is an excellent gas-sensing material for different gases such as CO, O2, NO2 and ethanol. In this context, pure and doped ZnO exhibiting different morphologies and a high surface/volume ratio can be a good option with regard to the limitations of current commercial sensors. Different studies have shown that metal doping (e.g., Co, Fe, Mn) enhances the gas sensing properties of ZnO. Motivated by these considerations, the aim of this study was to investigate the role of Co ions in the structural, morphological and gas sensing properties of nanostructured ZnO samples. ZnO and Zn1-xCoxO (0 < x < 5 wt%) thin films were obtained via the polymeric precursor method. The sensitivity, selectivity, response time and long-term stability were investigated when the samples were exposed to different concentrations of ozone (O3) at different working temperatures. The gas sensing properties were probed by electrical resistance measurements. The long- and short-range order structure around Zn and Co atoms was investigated by X-ray diffraction and X-ray absorption spectroscopy. X-ray photoelectron spectroscopy measurements were performed in order to identify the elements present on the film surface as well as to determine the sample composition. Microstructural characteristics of the films were analyzed by a field-emission scanning electron microscope (FE-SEM). The Zn1-xCoxO XRD patterns were indexed to the wurtzite ZnO structure, and no second phase was observed even at higher cobalt content. Co K-edge XANES spectra revealed the predominance of Co2+ ions.
XPS characterization revealed that Co-doped ZnO samples possessed a higher percentage of oxygen vacancies than the ZnO samples, which also contributed to their excellent gas sensing performance. Gas sensor measurements pointed out that ZnO and Co-doped ZnO samples exhibit good gas sensing performance concerning reproducibility and a fast response time (around 10 s). Furthermore, the Co addition contributed to reducing the working temperature for ozone detection and improved the selective sensing properties.
Keywords: cobalt-doped ZnO, nanostructured, ozone gas sensor, polymeric precursor method
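The response and response-time figures quoted above can be derived from raw resistance-versus-time data. The sketch below is illustrative only and not from the study: it assumes the common convention for an n-type oxide exposed to an oxidizing gas (resistance rises, so response S = Rg/Ra) and a 90% rise criterion for the response time; the resistance values and time constant are invented.

```python
import numpy as np

def sensor_response(r_air, r_gas):
    """Response of an n-type oxide to an oxidizing gas such as ozone:
    resistance rises on exposure, so S = R_gas / R_air (assumed convention)."""
    return r_gas / r_air

def response_time(t, r, r_air, r_gas, fraction=0.9):
    """Time to reach `fraction` of the total resistance change after exposure."""
    target = r_air + fraction * (r_gas - r_air)
    idx = np.argmax(r >= target)  # first sample at/above the target level
    return t[idx]

# Synthetic exponential response with an invented 4 s time constant
t = np.linspace(0, 60, 601)
r_air, r_gas = 1.0e5, 2.0e6            # ohms, illustrative values
r = r_air + (r_gas - r_air) * (1 - np.exp(-t / 4.0))

print(sensor_response(r_air, r_gas))      # 20.0
print(response_time(t, r, r_air, r_gas))  # ~4*ln(10) s for an exponential rise
```

For a reducing gas on an n-type oxide the conventional ratio would be inverted (Ra/Rg); the 90% criterion is one common choice alongside t90 measured from gas removal for recovery time.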
Procedia PDF Downloads 247
300 Immunocytochemical Stability of Antigens in Cytological Samples Stored in In-house Liquid-Based Medium
Authors: Anamarija Kuhar, Veronika Kloboves Prevodnik, Nataša Nolde, Ulrika Klopčič
Abstract:
The decision for immunocytochemistry (ICC) is usually made on the basis of the findings in Giemsa- and/or Papanicolaou-stained smears. More demanding diagnostic cases require the preparation of additional cytological preparations. Therefore, it is convenient to suspend cytological samples in a liquid-based medium (LBM) that preserves antigen and morphological properties. However, the duration for which these properties are preserved in the medium is usually unknown. Eventually, cell morphology becomes impaired and altered, and antigen properties may be lost or become diffuse. In this study, the influence of the length of storage of cytological samples in an in-house liquid-based medium on antigen properties and cell morphology is evaluated. The question is how long cytological samples can be stored in this medium so that the results of immunocytochemical reactions are still reliable and can be safely used in routine cytopathological diagnostics. The stability of the 6 ICC markers most frequently used in everyday routine work was tested: Cytokeratin AE1/AE3, Calretinin, Epithelial specific antigen Ep-CAM (MOC-31), CD 45, Oestrogen receptor (ER), and Melanoma triple cocktail were tested on methanol-fixed cytospins prepared from fresh fine needle aspiration biopsies, effusion samples, and disintegrated lymph nodes suspended in the in-house cell medium. Cytospins were prepared on the day of sampling as well as on the second, fourth, fifth, and eighth day after sample collection. Next, they were fixed in methanol and immunocytochemically stained. Finally, the percentage of positively stained cells, reaction intensity, counterstaining, and cell morphology were assessed using two assessment methods: the internal assessment and the UK NEQAS ICC scheme assessment.
Results show that the antigen properties for Cytokeratin AE1/AE3, MOC-31, CD 45, ER, and Melanoma triple cocktail were preserved even after 8 days of storage in the in-house LBM, while the antigen properties for Calretinin remained unchanged only for 4 days. The key parameters for assessing antigen detection are the proportion of cells with a positive reaction and the intensity of staining. Well-preserved cell morphology is highly important for reliable interpretation of an ICC reaction. Therefore, it would be valuable to perform a similar analysis for other ICC markers to determine the duration for which the antigen and morphological properties are preserved in LBM.
Keywords: cytology samples, cytospins, immunocytochemistry, liquid-based cytology
Procedia PDF Downloads 143
299 Analysis of Brownfield Soil Contamination Using Local Government Planning Data
Authors: Emma E. Hellawell, Susan J. Hughes
Abstract:
Brownfield sites are currently being redeveloped for residential use. Information on soil contamination on these former industrial sites is collected as part of the planning process by local government. This research project analyses this untapped resource of environmental data, using site investigation data submitted to a local Borough Council in Surrey, UK. Over 150 site investigation reports were collected and interrogated to extract relevant information. This study involved three phases. Phase 1 was the development of a database for soil contamination information from local government reports. This database contained information on the source, history, and quality of the data together with the chemical information on the soil that was sampled. Phase 2 involved obtaining site investigation reports for development within the study area and extracting the required information for the database. Phase 3 was the data analysis and interpretation of key contaminants to evaluate typical levels of contaminants and their distribution within the study area, and to relate these results to current guideline levels of risk for future site users. Preliminary results for a pilot study using a sample of the dataset have been obtained. This pilot study showed there is some inconsistency in the quality of the reports and measured data, and careful interpretation of the data is required. Analysis of the information has found high levels of lead in shallow soil samples, with mean and median levels exceeding the current guidance for residential use. The data also showed elevated (but below guidance) levels of potentially carcinogenic polycyclic aromatic hydrocarbons. Of particular concern from the data was the high detection rate for asbestos fibers. These were found at low concentrations in 25% of the soil samples tested (however, the sample set was small). Contamination levels of the remaining chemicals tested were all below the guidance level for residential site use.
These preliminary pilot study results will be expanded, and results for the whole local government area will be presented at the conference. The pilot study has demonstrated the potential for this extensive dataset to provide greater information on local contamination levels. This can help inform regulators and developers and lead to more targeted site investigations, improved risk assessments, and better brownfield development.
Keywords: brownfield development, contaminated land, local government planning data, site investigation
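The kind of summary statistics the pilot study relies on (means, medians, and exceedance rates against a guideline value) can be sketched as follows. All concentrations and the guideline value below are invented for illustration; they are not the project's data or an actual UK residential screening level.

```python
from statistics import mean, median

# Hypothetical shallow-soil lead results (mg/kg); the guideline value
# below is illustrative, not the actual UK residential screening level.
lead_mg_kg = [120, 450, 95, 780, 210, 1500, 60, 340]
GUIDELINE = 200.0  # assumed residential guideline, mg/kg

exceed = [c for c in lead_mg_kg if c > GUIDELINE]
print(f"mean = {mean(lead_mg_kg):.0f} mg/kg, median = {median(lead_mg_kg):.0f} mg/kg")
print(f"{len(exceed)}/{len(lead_mg_kg)} samples exceed the guideline "
      f"({100 * len(exceed) / len(lead_mg_kg):.0f}%)")
```

A skewed distribution like this one (mean well above median) is typical of contamination data and is one reason the study reports both statistics.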
Procedia PDF Downloads 140
298 In vitro Evaluation of Capsaicin Patches for Transdermal Drug Delivery
Authors: Alija Uzunovic, Sasa Pilipovic, Aida Sapcanin, Zahida Ademovic, Berina Pilipović
Abstract:
Capsaicin is a naturally occurring alkaloid extracted from the fruit of various Capsicum species. It has been employed topically to treat many conditions such as rheumatoid arthritis, osteoarthritis, cancer pain and nerve pain in diabetes. The high degree of pre-systemic metabolism of intragastric capsaicin and the short half-life of capsaicin after intravenous administration make topical application of capsaicin advantageous. In this study, we have evaluated differences in the dissolution characteristics of a capsaicin patch 11 mg (purchased from the market) at different dissolution rotation speeds. The patch area is 308 cm2 (22 cm x 14 cm; it contains 36 µg of capsaicin per square centimeter of adhesive). USP Apparatus 5 (Paddle over Disc) is used for transdermal patch testing. The dissolution study was conducted using USP Apparatus 5 (n=6) on an ERWEKA DT800 dissolution tester (paddle-type) with the addition of a disc. A 9 cm2 piece cut from the 308 cm2 patch was placed against a disc (delivery side up), retained with the stainless-steel screen, and exposed to 500 mL of phosphate buffer solution pH 7.4. All dissolution studies were carried out at 32 ± 0.5 °C and different rotation speeds (50 ± 5, 100 ± 5 and 150 ± 5 rpm). 5 mL aliquots of samples were withdrawn at various time intervals (1, 4, 8 and 12 hours) and replaced with 5 mL of dissolution medium. Withdrawn samples were appropriately diluted and analyzed by reversed-phase liquid chromatography (RP-LC). An RP-LC method has been developed, optimized and validated for the separation and quantitation of capsaicin in a transdermal patch. The method uses a ProntoSIL 120-3-C18 AQ 125 x 4.0 mm (3 μm) column maintained at 60 °C. The mobile phase consisted of acetonitrile:water (50:50 v/v), the flow rate was 0.9 mL/min, the injection volume 10 μL and the detection wavelength 222 nm.
The RP-LC method used is simple, sensitive and accurate and can be applied for fast (total chromatographic run time was 4.0 minutes) and simultaneous analysis of capsaicin and dihydrocapsaicin in a transdermal patch. According to the results obtained in this study, we can conclude that the relative difference in the dissolution rate of capsaicin after 12 hours was elevated by an increase in dissolution rotation speed (100 rpm vs 50 rpm: 84.9 ± 11.3%; 150 rpm vs 100 rpm: 39.8 ± 8.3%). Although several apparatus and procedures (USP Apparatus 5, 6, 7 and a paddle over extraction cell method) have been used to study the in vitro release characteristics of transdermal patches, USP Apparatus 5 (Paddle over Disc) could be considered a discriminatory test able to point out the differences in the dissolution rate of capsaicin at different rotation speeds.
Keywords: capsaicin, in vitro, patch, RP-LC, transdermal
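The relative differences quoted above (e.g., 84.9% for 100 rpm vs 50 rpm) follow from a simple percent-change calculation on the cumulative amount dissolved at 12 hours. A minimal sketch with invented dissolution values (not the study's measurements):

```python
def relative_difference_pct(d_high, d_low):
    """Percent increase in amount dissolved at the higher rotation speed."""
    return 100.0 * (d_high - d_low) / d_low

# Hypothetical cumulative capsaicin dissolved at 12 h (% of label claim)
d50, d100, d150 = 20.0, 37.0, 51.7
print(round(relative_difference_pct(d100, d50), 1))   # 85.0
print(round(relative_difference_pct(d150, d100), 1))  # 39.7
```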
Procedia PDF Downloads 227
297 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment
Authors: Ella Sèdé Maforikan
Abstract:
Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training on remote sensing data. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. 
As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment
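The overall accuracy (OA) and Cohen's kappa reported above are standard summaries of the classification confusion matrix. A minimal sketch of both computations; the confusion matrix below is invented, not the study's error matrix:

```python
import numpy as np

def overall_accuracy(cm):
    """Fraction of validation pixels on the confusion-matrix diagonal."""
    return np.trace(cm) / cm.sum()

def cohens_kappa(cm):
    """Agreement corrected for chance: kappa = (po - pe) / (1 - pe)."""
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 3-class confusion matrix (rows: reference, cols: predicted)
cm = np.array([[50, 2, 3],
               [4, 60, 1],
               [2, 3, 75]])
print(round(float(overall_accuracy(cm)), 3))  # 0.925
print(round(float(cohens_kappa(cm)), 3))      # 0.886
```

The same formulas apply to the study's five-class maps; GEE's `errorMatrix` utilities expose equivalent accuracy and kappa methods.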
Procedia PDF Downloads 63
296 Human Identification Using Local Roughness Patterns in Heartbeat Signal
Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori
Abstract:
Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential in human recognition due to its unique rhythms characterizing the variability of human heart structures (chest geometry, sizes, and positions). Moreover, ECG has a real-time vitality characteristic that signifies live signs, which ensures that a legitimate, living individual is identified. However, the detection accuracy of current ECG-based methods is not sufficient due to the high variability of an individual's heartbeats at different instances in time. These variations may occur due to muscle flexure, changes of mental or emotional state, and changes of sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification, which is based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter having cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant of the ECG signal, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Then, binary weights are multiplied with the pattern to come up with the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of individual subjects in the database. One advantage of the proposed feature is that it does not depend on the accuracy of detecting the QRS complex, unlike conventional methods.
Supervised recognition methods are then designed using minimum-distance-to-mean and Bayesian classifiers to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects from the National Metrology Institute of Germany (PTB) database showed that the proposed new method is promising compared to a conventional interval- and amplitude-feature-based method.
Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification
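The moving-window comparison and binary weighting described above can be sketched as a 1D local-binary-pattern extractor. The radius, weighting order, and ">= centre" threshold convention below are our illustrative choices, not necessarily the authors' exact parameters:

```python
import numpy as np

def local_roughness_patterns(signal, radius=4):
    """1D local binary patterns: compare the 2*radius neighbours of each
    sample with the centre value, pack the comparisons into an integer
    code, and return the normalized histogram of codes."""
    n_bits = 2 * radius
    codes = []
    for i in range(radius, len(signal) - radius):
        centre = signal[i]
        neighbours = np.r_[signal[i - radius:i], signal[i + 1:i + radius + 1]]
        bits = (neighbours >= centre).astype(int)          # binary pattern
        codes.append(int((bits * (1 << np.arange(n_bits))).sum()))  # weights
    hist, _ = np.histogram(codes, bins=2**n_bits, range=(0, 2**n_bits))
    return hist / hist.sum()

# Toy heartbeat-like signal (synthetic, for illustration only)
rng = np.random.default_rng(1)
sig = np.sin(np.linspace(0, 6 * np.pi, 300)) + 0.05 * rng.normal(size=300)
hist = local_roughness_patterns(sig)
print(len(hist), round(float(hist.sum()), 6))  # 256 1.0
```

Note that, as the abstract points out, this descriptor needs no QRS detection: it is computed over the raw filtered signal, and subject matching can then compare histograms (e.g., by minimum distance to a subject's mean histogram).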
Procedia PDF Downloads 404
295 Unlocking Health Insights: Studying Data for Better Care
Authors: Valentina Marutyan
Abstract:
Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of and approach to providing healthcare. Healthcare data mining is the process of examining huge amounts of data to extract useful information that can be applied in order to improve patient care, treatment effectiveness, and overall healthcare delivery. This field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories. To accomplish this, it uses advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. This enables doctors to get involved early to prevent problems or improve outcomes for patients. It also assists in early disease detection and customized treatment planning for every person. Doctors can customize a patient's care by looking at their medical history, genetic profile, and current and previous therapies. In this way, treatments can be more effective and have fewer negative consequences. Moreover, beyond helping patients, it improves the efficiency of hospitals. It helps them determine the number of beds or doctors they require with regard to the number of patients they expect. In this project, models like logistic regression, random forests, and neural networks are used for predicting diseases and analyzing medical images. Patients were grouped by algorithms such as k-means, and connections between treatments and patient responses were identified by association rule mining. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment.
Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Finally, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and work more efficiently. It ultimately comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
Keywords: data mining, healthcare, big data, large amounts of data
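As an illustrative sketch of the clustering step mentioned above (k-means for grouping patients), here is a tiny one-dimensional k-means on invented length-of-stay data. This is not the project's code or data, just the algorithm in miniature:

```python
import numpy as np

def kmeans_1d(x, k=2, iters=50, seed=0):
    """Plain Lloyd's algorithm on a 1D array: assign points to the nearest
    centre, then move each centre to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centres = rng.choice(x, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = x[labels == j].mean()
    return centres, labels

# Hypothetical lengths of stay (days): two clearly separated patient groups
los = np.array([1, 2, 2, 3, 9, 10, 11, 12], dtype=float)
centres, labels = kmeans_1d(los, k=2)
print(sorted(centres))  # cluster centres near 2.0 and 10.5
```

In practice one would use multi-feature patient vectors and a library implementation with multiple restarts; the point here is only the assign-update loop.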
Procedia PDF Downloads 76
294 A Modified QuEChERS Method Using Activated Carbon Fibers as r-DSPE Sorbent for Sample Cleanup: Application to Pesticides Residues Analysis in Food Commodities Using GC-MS/MS
Authors: Anshuman Srivastava, Shiv Singh, Sheelendra Pratap Singh
Abstract:
A simple, sensitive and effective gas chromatography tandem mass spectrometry (GC-MS/MS) method was developed for the simultaneous analysis of multiple pesticide residues (organophosphates, organochlorines, synthetic pyrethroids and herbicides) in food commodities using phenolic-resin-based activated carbon fibers (ACFs) as the reversed-dispersive solid phase extraction (r-DSPE) sorbent in a modified QuEChERS (Quick, Easy, Cheap, Effective, Rugged, Safe) method. The acetonitrile-based QuEChERS technique was used for the extraction of the analytes from food matrices, followed by sample cleanup with ACFs instead of the traditionally used primary secondary amine (PSA). Different physico-chemical characterization techniques such as Fourier transform infrared spectroscopy, scanning electron microscopy, X-ray diffraction and Brunauer-Emmett-Teller surface area analysis were employed to investigate the engineering and structural properties of the ACFs. The recovery of pesticides and herbicides was tested at concentration levels of 0.02 and 0.2 mg/kg in different commodities such as cauliflower, cucumber, banana, apple, wheat and black gram. The recoveries of all twenty-six pesticides and herbicides were found within the acceptable limit (70-120%) according to the SANCO guideline, with relative standard deviation values < 15%. The limit of detection and limit of quantification of the method were in the ranges of 0.38-3.69 ng/mL and 1.26-12.19 ng/mL, respectively. In the traditional QuEChERS method, PSA used as the r-DSPE sorbent plays a vital role in the sample clean-up process and demonstrates good recoveries for multiclass pesticides. This study reports that ACFs are better in terms of removal of co-extractives in comparison with PSA, without compromising the recoveries of multiple pesticides from food matrices. Further, ACFs replace the charcoal that is used in addition to PSA in the traditional QuEChERS method to remove pigments.
The developed method will be cost-effective because the ACFs are significantly cheaper than PSA. The proposed modified QuEChERS method is therefore more robust and effective and has better sample cleanup efficiency for multiclass, multi-pesticide residue analysis in different food matrices such as vegetables, grains and fruits.
Keywords: QuEChERS, activated carbon fibers, primary secondary amine, pesticides, sample preparation, carbon nanomaterials
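The SANCO acceptance check described above (mean recovery 70-120%, RSD < 15%) is a straightforward calculation over replicate spike results. A minimal sketch with invented replicate values for a 0.2 mg/kg spike:

```python
from statistics import mean, stdev

def recovery_pct(measured, spiked):
    """Recovery of each replicate as a percentage of the spiked level."""
    return [100.0 * m / spiked for m in measured]

def rsd_pct(values):
    """Relative standard deviation (sample stdev / mean) in percent."""
    return 100.0 * stdev(values) / mean(values)

# Hypothetical replicate results (mg/kg) for a 0.2 mg/kg spike
measured = [0.185, 0.192, 0.178, 0.190, 0.183]
rec = recovery_pct(measured, 0.2)
print(f"mean recovery = {mean(rec):.1f}%, RSD = {rsd_pct(measured):.1f}%")
# SANCO acceptance: mean recovery within 70-120% and RSD < 15%
```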
Procedia PDF Downloads 271
293 Detecting Potential Geothermal Sites by Using Well Logging, Geophysical and Remote Sensing Data at Siwa Oasis, Western Desert, Egypt
Authors: Amr S. Fahil, Eman Ghoneim
Abstract:
Egypt has made significant efforts during the past few years to develop renewable energy sources. Regions in Egypt that have been identified for geothermal potential investigation include the Gulf of Suez and the Western Desert. One of the most promising sites for the development of Egypt's northern Western Desert is Siwa Oasis. The geological setting of the oasis, a tectonically generated depression situated in the northernmost region of the Western Desert, supports the potential for substantial geothermal resources. Field data obtained from 27 deep oil wells along the Western Desert included bottom-hole temperature (BHT) and depth-to-basement measurements, and geological maps; these data were utilized in this study. The major lithological units, elevation, surface gradient, lineament density, and multispectral and topographic remote sensing data were mapped together to generate the related physiographic variables. Eleven thematic layers were integrated in a geographic information system (GIS) to create geothermal maps to aid in the detection of significant potential geothermal spots along the Siwa Oasis and its vicinity. The contribution of total magnetic intensity data with reduction to the pole (RTP) to the first investigation of the geothermal potential of Siwa Oasis is applied in this work. The integration of geospatial data with magnetic field measurements showed a clear correlation between areas of high heat flow and magnetic anomalies. Such anomalies can be interpreted as related to the existence of high geothermal energy and dense rock, which also has high magnetic susceptibility. The outcomes indicated that the study area has a geothermal gradient ranging from 18 to 42 °C/km, a heat flow ranging from 24.7 to 111.3 mW·m−2, a thermal conductivity of 1.3-2.65 W·m−1·K−1, and a maximum measured temperature of 100.7 °C.
The southeastern part of the Siwa Oasis and some sporadic locations in the eastern section of the oasis were found to have significant geothermal potential; consequently, these locations are suitable for future geothermal investigation. The adopted method might be applied to identify significant prospective geothermal energy locations in other regions of Egypt and East Africa.
Keywords: magnetic data, SRTM, depth to basement, remote sensing, GIS, geothermal gradient, heat flow, thermal conductivity
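The gradient and heat-flow ranges quoted above follow from the standard BHT calculation and Fourier's law (q = k dT/dz). A minimal sketch; the well depth, surface temperature, and conductivity below are invented for illustration and are not the study's values:

```python
def geothermal_gradient(bht_c, surface_temp_c, depth_m):
    """Geothermal gradient in degrees C per km from a bottom-hole temperature."""
    return (bht_c - surface_temp_c) / depth_m * 1000.0

def heat_flow(gradient_c_per_km, conductivity_w_mk):
    """Fourier's law q = k * dT/dz; with k in W/(m*K) and the gradient in
    degrees C/km, the result is conveniently in mW/m^2."""
    return conductivity_w_mk * gradient_c_per_km

# Hypothetical well: 100.7 degrees C BHT at 2500 m, 25 degrees C surface
g = geothermal_gradient(100.7, 25.0, 2500)
q = heat_flow(g, 2.0)
print(f"gradient = {g:.2f} degC/km, heat flow = {q:.2f} mW/m^2")
```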
Procedia PDF Downloads 116
292 Evidence-Based in Telemonitoring of Users with Pacemakers at Five Years after Implant: The Poniente Study
Authors: Antonio Lopez-Villegas, Daniel Catalan-Matamoros, Remedios Lopez-Liria
Abstract:
Objectives: The purpose of this study was to analyze clinical data, health-related quality of life (HRQoL) and functional capacity of patients using a telemonitoring follow-up system (TM) compared to patients followed up through standard outpatient visits (HM) 5 years after the implantation of a pacemaker. Methods: This is a controlled, non-randomised, non-blinded clinical trial, with data collection carried out 5 years after pacemaker implantation. The study was developed at Hospital de Poniente (Almeria, Spain) between October 2012 and November 2013. The same clinical outcomes were analyzed in both follow-up groups. Health-related quality of life and functional capacity were assessed through the EuroQol-5D (EQ-5D) questionnaire and the Duke Activity Status Index (DASI), respectively. Sociodemographic characteristics and clinical data were also analyzed. Results: 5 years after pacemaker implantation, 55 of the 82 initial patients finished the study. Users with pacemakers had been assigned to either a conventional hospital follow-up group (HM=34, 50 initially) or a telemonitoring system group (TM=21, 32 initially). No significant differences were found between the two groups in sociodemographic characteristics, clinical data, health-related quality of life or functional capacity according to the medical records and the EQ-5D and DASI questionnaires. In addition, conventional follow-up visits to the hospital were reduced by 44.84% (p < 0.001) in the telemonitoring group relative to the hospital monitoring group. Conclusion: The results obtained in this study suggest that the telemonitoring of users with pacemakers is an equivalent option to conventional follow-up at the hospital in terms of health-related quality of life and functional capacity. Furthermore, it allows for the early detection of cardiovascular and pacemaker-related events and significantly reduces the number of in-hospital visits. Trial registration: ClinicalTrials.gov NCT02234245.
The PONIENTE study has been funded by the General Secretariat for Research, Development and Innovation, Regional Government of Andalusia (Spain), project reference number PI/0256/2017, under the research call 'Development and Innovation Projects in the Field of Biomedicine and Health Sciences', 2017.
Keywords: cardiovascular diseases, health-related quality of life, pacemakers follow-up, remote monitoring, telemedicine
Procedia PDF Downloads 126
291 Fundamental Study on Reconstruction of 3D Image Using Camera and Ultrasound
Authors: Takaaki Miyabe, Hideharu Takahashi, Hiroshige Kikura
Abstract:
The Government of Japan and Tokyo Electric Power Company Holdings, Incorporated (TEPCO) are struggling with the decommissioning of the Fukushima Daiichi Nuclear Power Plants, especially fuel debris retrieval. In fuel debris retrieval, information on the amount of fuel debris, its location, characteristics, and distribution is important. Recently, a survey was conducted using a robot with a small camera. Progress reports from remote robot and camera research have speculated that fuel debris is present both at the bottom of the Primary Containment Vessel (PCV) and inside the Reactor Pressure Vessel (RPV). The investigation found a 'tie plate' at the bottom of the containment; this is a handle on the fuel assembly. As a result, it is assumed that a hole large enough to allow the tie plate to fall has opened at the bottom of the reactor pressure vessel. Therefore, exploring the existence of holes that lead into the RPV is also an issue. Investigations of the lower part of the RPV are currently underway, but no investigations have been made inside or above the PCV. Therefore, a survey must be conducted for future fuel debris retrieval. The environment inside the RPV cannot be imagined due to the effect of the melted fuel. To address this, we need a way to accurately check the internal situation. What we propose here is the adaptation of a technology called 'Structure from Motion' that reconstructs a 3D image from multiple photos taken by a single camera. The plan is to mount a monocular camera on the tip of a long-arm robot, reach it to the upper part of the PCV, and take video. We are currently developing a long-arm robot for use in high-radiation environments. However, the environment above the pressure vessel is not known exactly. Also, fog may be generated by the cooling water of the fuel debris, and the radiation level in the environment may be high.
Since a camera alone cannot provide sufficient sensing in these environments, we further propose using ultrasonic measurement technology in addition to cameras. Ultrasonic sensors are resistant to environmental changes such as fog and to environments with a high radiation dose, so these systems can be used for a long time. The purpose is to develop a system adapted to the inside of the containment vessel by combining a camera and ultrasound. Therefore, in this research, we performed a basic experiment on 3D image reconstruction using a camera and ultrasound. In this report, we identify the conditions under which each sensing modality performs well or poorly, and propose a reconstruction and detection method. The results revealed the strengths and weaknesses of each approach.
Keywords: camera, image processing, reconstruction, ultrasound
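At the core of the Structure-from-Motion pipeline mentioned above, 3D points are recovered by triangulating matched image points from estimated camera poses. A self-contained sketch of the linear (DLT) triangulation step with two synthetic cameras; all camera parameters and the test point are invented, and this is not the authors' system:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 camera
    matrices and the point's pixel coordinates in each view."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null-space of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of a 3D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic cameras: identity pose and a 1 m baseline along x
K = np.diag([800.0, 800.0, 1.0])                       # assumed intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 5.0])
X_hat = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_hat, X_true, atol=1e-6))  # True
```

In a full SfM system the poses themselves are estimated from feature matches (e.g., via the essential matrix) before triangulation; the degraded visibility discussed above is precisely what makes that matching step fragile and motivates the ultrasonic complement.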
Procedia PDF Downloads 104
290 Analyzing Electromagnetic and Geometric Characterization of Building Insulation Materials Using the Transient Radar Method (TRM)
Authors: Ali Pourkazemi
Abstract:
The transient radar method (TRM) is a non-destructive method introduced by the authors a few years ago. TRM can be classified as a wave-based non-destructive testing (NDT) method applicable across a wide frequency range; for any given application it operates in a narrow band selected between a few GHz and a few THz. As a time-of-flight, real-time method, TRM can measure the electromagnetic properties of the sample under test not only quickly and accurately but also blindly, meaning that it requires no prior knowledge of the sample. For multi-layer structures, TRM is not only able to detect changes in any parameter within the structure but can also measure the electromagnetic properties and thickness of each layer individually. Although temperature, humidity, and general environmental conditions may affect the sample under test, they do not affect the accuracy of the blind TRM algorithm. In this paper, the electromagnetic properties as well as the thickness of individual building insulation materials, treated as single-layer structures, are measured experimentally. Finally, the correlation between the reflection coefficients and other technical parameters such as sound insulation, thermal resistance, thermal conductivity, compressive strength, and density is investigated. The samples studied measure 30 cm x 50 cm, with thicknesses ranging from a few millimeters to 6 centimeters. The experiment is performed with both bistatic and differential hardware at 10 GHz. Since TRM is a narrow-band, free-space, real-time sensing method with high-speed analysis, it has a wide range of potential applications, e.g., in the construction, rubber, piping, wind energy, and automotive industries, as well as in biotechnology, the food industry, and pharmaceuticals. Detection of metallic or plastic pipes, wires, etc., through or behind walls is a specific application for the construction industry.
Keywords: transient radar method, blind electromagnetic geometrical parameter extraction technique, ultrafast nondestructive multilayer dielectric structure characterization, electronic measurement systems, illumination, data acquisition performance, submillimeter depth resolution, time-dependent reflected electromagnetic signal blind analysis method, EM signal blind analysis method, time domain reflectometer, microwave, millimeter wave frequencies
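The authors' blind extraction algorithm is not reproduced here, but the time-of-flight relation underlying any radar thickness measurement is simple to sketch. Assuming a single dielectric layer and known relative permittivity (the example values are hypothetical):

```python
# A pulse reflected from the back face of a layer with relative
# permittivity eps_r returns delayed, relative to the front-face echo,
# by dt = 2 * d * sqrt(eps_r) / c.
C = 299_792_458.0  # speed of light in vacuum, m/s

def layer_thickness(dt_s, eps_r):
    """Thickness (m) of a single dielectric layer from the delay dt_s (s)
    between the front-face and back-face reflections."""
    return C * dt_s / (2.0 * eps_r ** 0.5)

# Hypothetical example: a 6 cm insulation board with eps_r = 1.5
# separates the two echoes by about half a nanosecond.
dt = 2 * 0.06 * 1.5 ** 0.5 / C
print(f"{layer_thickness(dt, 1.5) * 100:.1f} cm")  # -> 6.0 cm
```

The blind variant of TRM solves the harder joint problem of estimating permittivity and thickness together from the reflected signal.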
Procedia PDF Downloads 69
289 Ochratoxin-A in Traditional Meat Products from Croatian Households
Authors: Jelka Pleadin, Nina Kudumija, Ana Vulic, Manuela Zadravec, Tina Lesic, Mario Skrivanko, Irena Perkovic, Nada Vahcic
Abstract:
Products of animal origin, such as meat and meat products, can contribute to human mycotoxin intake, either through indirect transfer from farm animals exposed to naturally contaminated grains and feed (carry-over effects) or through direct contamination with moulds or naturally contaminated spice mixtures used in meat production. Ochratoxin A (OTA) is the mycotoxin considered to be of the utmost importance from the public health standpoint in connection with meat products. The aim of this study was to investigate the occurrence of OTA in different traditional meat products circulating on Croatian markets during 2018, produced by a large number of households in eastern and northern Croatian regions using a variety of technologies. Concentrations of OTA were determined in traditional meat products (n = 70), including dry fermented sausages (Slavonian kulen, Slavonian sausage, Istrian sausage and domestic sausage; n = 28), dry-cured meat products (pancetta, pork rack and ham; n = 22) and cooked sausages (liver sausages, black pudding sausages and pate; n = 20). OTA was analyzed using a quantitative screening immunoassay (ELISA) and confirmed for positive samples (above the limit of detection) by liquid chromatography tandem mass spectrometry (LC-MS/MS). While no OTA-contaminated bacon samples were found, OTA levels in dry fermented sausages ranged from 0.22 to 2.17 µg/kg and in dry-cured meat products from 0.47 to 5.35 µg/kg, with 9% of samples positive in total. Besides possible primary contamination of these products arising from improper manufacturing and/or storage conditions, the observed OTA contamination could also be a consequence of secondary contamination resulting from contaminated feed given to the animals.
OTA levels in cooked sausages ranged from 0.32 to 4.12 µg/kg (5% positive) and can probably be linked to contaminated raw materials (liver, kidney and spices) used in sausage production. The results showed occasional OTA contamination of traditional meat products, indicating that, to avoid such contamination, these products should be produced and processed on households under standardized and well-controlled conditions. Further investigations should be performed to identify mycotoxin-producing moulds on the surface of the products and to define preventive measures that can reduce contamination of traditional meat products during household production and storage.
Keywords: Croatian households, ochratoxin-A, traditional cooked sausages, traditional dry-cured meat products
Procedia PDF Downloads 193
288 Design and Evaluation of a Prototype for Non-Invasive Screening of Diabetes – Skin Impedance Technique
Authors: Pavana Basavakumar, Devadas Bhat
Abstract:
Diabetes is a disease which often goes undiagnosed until its secondary effects are noticed. Early detection of the disease is necessary to avoid serious consequences which could lead to the death of the patient. Conventional invasive tests for the screening of diabetes are mostly painful, time consuming and expensive. There is also a risk of infection involved; therefore, it is essential to develop non-invasive methods to screen for diabetes and estimate the level of blood glucose. Extensive research is going on with this perspective, involving various techniques that explore optical, electrical, chemical and thermal properties of the human body that directly or indirectly depend on the blood glucose concentration. Thus, non-invasive blood glucose monitoring has grown into a vast field of research. In this project, an attempt was made to devise a prototype for screening of diabetes by measuring the electrical impedance of the skin and building a model to predict a patient's condition from the measured impedance. The prototype passes a small constant current (0.5 mA) across a subject's index finger through tetrapolar silver electrodes and measures the output voltage across a wide range of frequencies (10 kHz – 4 MHz). The measured voltage is proportional to the impedance of the skin. The impedance was acquired in real time for further analysis. The study was conducted on over 75 subjects, with permission from the institutional ethics committee; along with impedance, each subject's blood glucose value was also recorded using the conventional method. Nonlinear regression analysis was performed on the features extracted from the impedance data to obtain a model that predicts blood glucose values for a given set of features. When the predicted data were plotted on Clarke's error grid, only 58% of the predicted values were clinically acceptable.
Since the objective of the project was to screen for diabetes rather than estimate blood glucose exactly, the data were instead classified into three classes, 'NORMAL FASTING', 'NORMAL POSTPRANDIAL' and 'HIGH', using a linear Support Vector Machine (SVM). The classification accuracy obtained was 91.4%. The developed prototype is economical, fast and pain free, and can thus be used for mass screening of diabetes.
Keywords: Clarke's error grid, electrical impedance of skin, linear SVM, nonlinear regression, non-invasive blood glucose monitoring, screening device for diabetes
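The classification step described above can be sketched with a linear SVM on synthetic data. Everything below is hypothetical: the feature values are randomly generated clusters standing in for impedance-derived features, not the study's measurements.

```python
# Toy sketch of three-class screening with a linear SVM.
# Features are synthetic; in the study they were extracted from
# skin-impedance spectra measured between 10 kHz and 4 MHz.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
classes = ["NORMAL FASTING", "NORMAL POSTPRANDIAL", "HIGH"]
# Hypothetical 2-D features drawn from one separated cluster per class.
centers = np.array([[1.0, 1.0], [2.0, 1.5], [3.5, 3.0]])
X = np.vstack([c + 0.2 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat(np.arange(3), 50)

clf = SVC(kernel="linear").fit(X, y)
print(classes[clf.predict([[3.4, 3.1]])[0]])  # -> HIGH
```

On real impedance features the classes overlap, which is why the reported accuracy is 91.4% rather than the near-perfect separation a toy example gives.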
Procedia PDF Downloads 325
287 Expression of miRNA 335 in Gall Bladder Cancer: A Correlative Study
Authors: Naseem Fatima, A. N. Srivastava, Tasleem Raza, Vijay Kumar
Abstract:
Introduction: Gallbladder carcinoma is the third most common lethal gastrointestinal disease, with the highest incidence and mortality rates among women in Northern India. Several risk factors make a person more likely to develop gallbladder cancer; among these, deregulation of miRNAs has been shown to be one of the most crucial. Changes in the expression of specific miRNA genes affect the control of inflammation, cell cycle regulation, stress response, proliferation, differentiation, apoptosis and invasion, and thus mediate tumorigenesis. The aim of this study was to investigate the role of miRNA-335 as a potential molecular marker for the early detection of gallbladder cancer in suspected cases. Material and Methods: A total of 20 consecutive patients with gallbladder cancer aged between 30 and 75 years were registered for the study. Total RNA was extracted from tissue using the mirVana miRNA isolation kit according to the manufacturer's protocol. miRNA-335- and U6 snRNA-specific cDNAs were reverse-transcribed from total RNA using the TaqMan microRNA reverse-transcription kit according to the manufacturer's protocol. Using TaqMan miRNA probes (hsa-miR-335) and TaqMan Master Mix without AmpErase UNG, individual real-time PCR assays were performed in a 20 μL reaction volume on a real-time PCR system (Applied Biosystems StepOnePlus™) to detect miRNA-335 expression in tissue. Relative quantification of target miRNA expression was evaluated using the comparative cycle threshold (CT) method, correlating the CT values of the target miRNA in gallbladder cancer against those in non-cancerous cholelithiasis gallbladder tissue. Each sample was examined in triplicate. The Newman-Keuls multiple comparison test was used to assess the expression of miR-335.
Results: miRNA-335 was found to be significantly downregulated in gallbladder cancer tissue (P<0.001) compared with non-cancerous cholelithiasis gallbladder cases. Of the 20 cases, 75% showed reduced miRNA-335 expression; these patients were at a late stage of disease with a low overall survival rate, while the remaining 25% showed upregulated miRNA-335 expression and a higher survival rate. Conclusion: The present study showed that reduced miRNA-335 expression is associated with advancement of the disease, and its deregulation may provide important clues to understanding it as a prognostic marker and opportunities for future research.
Keywords: carcinoma gallbladder, downregulation, MiRNA-335, RT-PCR assay
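The comparative CT method mentioned in the abstract is the standard 2^-ΔΔCT calculation, sketched below. The CT values in the example are hypothetical, not data from this study.

```python
# Minimal sketch of the comparative cycle threshold (2^-ddCT) method
# for relative expression. All CT values here are hypothetical.
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of the target miRNA in a case sample versus a
    control sample, each normalized to a reference gene (e.g. U6 snRNA)."""
    d_ct_case = ct_target_case - ct_ref_case
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_case - d_ct_ctrl
    return 2.0 ** -dd_ct

# Example: the target crosses threshold 3 cycles later in the tumour
# sample (at equal reference CTs), i.e. about 8-fold downregulation.
print(fold_change(28.0, 20.0, 25.0, 20.0))  # -> 0.125
```

A fold change below 1 corresponds to downregulation in the case sample, which is the pattern reported here for miRNA-335 in tumour tissue.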
Procedia PDF Downloads 360
286 Management Tools for Assessment of Adverse Reactions Caused by Contrast Media at the Hospital
Authors: Pranee Suecharoen, Ratchadaporn Soontornpas, Jaturat Kanpittaya
Abstract:
Background: Contrast media play an important role in disease diagnosis through the detection of pathologies; however, they can cause adverse reactions after administration. With the commonly used non-ionic contrast media, the incidence of adverse events is relatively low, and the most common reactions found (10.5%) were mild and manageable and/or preventable. Pharmacists can play an important role in evaluating adverse reactions, including awareness of the specific preparation and the type of reaction. As the most common types of adverse reactions are idiosyncratic or pseudo-allergic, common standards need to be established to prevent and control adverse reactions promptly and effectively. Objective: To measure the effect of using tools for symptom evaluation to reduce the severity, or prevent the occurrence, of adverse reactions to contrast media. Methods: Retrospective, descriptive research with data collected on adverse reaction assessment and Naranjo's algorithm between June 2015 and May 2016. Results: Of the 1,500 participants with an adverse event evaluation, 158 (10.53%) had adverse reactions. Of these, 137 (9.13%) had a mild adverse reaction, including hives, nausea, vomiting, dizziness, and headache; such symptoms can be treated (i.e., with antihistamines or anti-emetics) and the patient recovers completely within one day. The group with moderate adverse reactions, numbering 18 cases (1.2%), had hypertension or hypotension and shortness of breath. Severe adverse reactions numbered 3 cases (0.2%) and included swelling of the larynx, cardiac arrest, and loss of consciousness, requiring immediate treatment. No other complications under close medical supervision were recorded (i.e., use of corticosteroids, epinephrine, dopamine, atropine, or life-saving devices).
Under the guideline, therapies are divided into general and specific and are administered according to severity, risk factors and the contrast media agent given. Patients with high-risk factors were screened and treated (i.e., with prophylactic premedication) to prevent severe adverse reactions, especially those with renal failure. Thus, awareness of the need for prescreening of different risk factors is necessary for early recognition and prompt treatment. Conclusion: Studying adverse reactions can be used to develop a model for reducing their severity and to set a guideline for a standardized, multidisciplinary approach to adverse reactions.
Keywords: role of pharmacist, management of adverse reactions, guideline for contrast media, non-ionic contrast media
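The causality assessment used in this study, Naranjo's algorithm, totals the scores of ten questionnaire items and maps the total to a causality category. The standard interpretation of that total is sketched below; the questionnaire items themselves are not reproduced.

```python
# Standard interpretation of the Naranjo total score: >=9 definite,
# 5-8 probable, 1-4 possible, <=0 doubtful.
def naranjo_category(score):
    """Map a Naranjo total score to an adverse-reaction causality category."""
    if score >= 9:
        return "definite"
    if score >= 5:
        return "probable"
    if score >= 1:
        return "possible"
    return "doubtful"

print(naranjo_category(6))  # -> probable
```

In practice the ten items (e.g. temporal relationship, dechallenge, rechallenge) are scored per patient and the category guides whether the contrast medium is deemed the cause.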
Procedia PDF Downloads 303
285 Mobile and Hot Spot Measurement with Optical Particle Counting Based Dust Monitor EDM264
Authors: V. Ziegler, F. Schneider, M. Pesch
Abstract:
With the EDM264, GRIMM offers a solution for mobile short- and long-term measurements in outdoor areas and at production sites, for research as well as for permanent areal observations at near-reference quality. The EDM264 features a powerful and robust measuring cell based on the optical particle counting (OPC) principle, with all the advantages that users of GRIMM's portable aerosol spectrometers are used to. The system is embedded in a compact weather-protection housing with all-weather sampling, a heated inlet system, a data logger, and a meteorological sensor. With TSP, PM10, PM4, PM2.5, PM1, and PMcoarse, the EDM264 provides all fine dust fractions in real time, valid for outdoor applications and calculated with the proven GRIMM enviro-algorithm, as well as six additional dust mass fractions (pm10, pm2.5, pm1, inhalable, thoracic and respirable) for IAQ and workplace measurements. This highly versatile instrument performs real-time monitoring of particle number and particle size, and provides information on particle surface distribution as well as dust mass distribution. GRIMM's EDM264 has 31 equidistant size channels, which are PSL traceable. A high-end data logger enables data acquisition and wireless communication via LTE or WLAN, or wired communication via Ethernet; backup copies of the measurement data are stored in the device itself. The rinsing-air function, which protects the laser and detector in the optical cell, further increases the reliability and long-term stability of the EDM264 under different environmental and climatic conditions. The entire sample volume flow of 1.2 L/min is analyzed in full (100%) in the optical cell, which assures excellent counting efficiency at both low and high concentrations and complies with the ISO 21501-1 standard for OPCs. With all these features, the EDM264 is a world-leading dust monitor for precise monitoring of particulate matter and particle number concentration.
This highly reliable instrument is an indispensable tool for users who need to measure aerosol levels and air quality outdoors, on construction sites, or at production facilities.
Keywords: aerosol research, aerial observation, fence line monitoring, wildfire detection
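GRIMM's enviro-algorithm for converting counts to PM fractions is proprietary; the sketch below shows only the generic conversion from OPC channel counts to a mass concentration, assuming spherical particles of uniform density. The channel diameters, counts, and density are hypothetical.

```python
# Generic OPC counts-to-mass conversion (not GRIMM's enviro-algorithm):
# each channel contributes (number concentration) x (sphere mass).
import math

def pm_mass_ugm3(counts_per_litre, diameters_um,
                 density_g_cm3=1.65, cutoff_um=2.5):
    """Mass concentration (ug/m^3) of particles at or below cutoff_um,
    from per-channel number concentrations (particles per litre of air)."""
    total = 0.0
    for n, d in zip(counts_per_litre, diameters_um):
        if d <= cutoff_um:
            vol_um3 = math.pi / 6.0 * d ** 3           # sphere volume
            mass_ug = vol_um3 * density_g_cm3 * 1e-6   # um^3 -> ug
            total += n * mass_ug * 1000.0              # per L -> per m^3
    return total

# Hypothetical 5-channel size distribution (channel midpoints in um).
diams = [0.3, 0.5, 1.0, 2.5, 5.0]
counts = [5000, 2000, 300, 20, 5]
pm25 = pm_mass_ugm3(counts, diams)
print(f"PM2.5 ~ {pm25:.2f} ug/m^3")
```

Real instruments refine this with calibration factors per channel, which is where a proprietary algorithm such as GRIMM's differs from the naive sum.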
Procedia PDF Downloads 151