Search results for: hate speech detection
601 “Laws Drifting Off While Artificial Intelligence Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology
Authors: Amarendar Reddy Addula
Abstract:
Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is an innovative medium for digital business, according to a new report by Gartner. The last 10 years represent a period of advancement in AI’s development, spurred by a confluence of factors, including the rise of big data, advancements in compute infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. The extension of AI to a broader set of use cases and users is gaining popularity because it improves AI’s versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and things, but it can also be used for code generation, creating synthetic data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects. Frequently, the two are conflated and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, functional, concerns its relationship with the law. Both set up models of social behavior, but they are different in scope and nature. The juridical analysis is grounded on a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a primary step toward the description of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence and the question of the unitary or diverse nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the main legal framework for the regulation of AI.
Keywords: artificial intelligence, ethics & human rights issues, laws, international laws
Procedia PDF Downloads 93
600 Some Codes for Variants in Graphs
Authors: Sofia Ait Bouazza
Abstract:
We consider the problem of finding a minimum identifying code in a graph. This problem was initially introduced in 1998 and has since been fundamentally connected to a wide range of applications (fault diagnosis, location detection, …). Suppose we have a building into which we need to place fire alarms. Suppose each alarm is designed so that it can detect any fire that starts either in the room in which it is located or in any room that shares a doorway with that room. We want not only to detect any fire that may occur but also to use the alarms that are sounding to tell exactly where the fire is located in the building. For reasons of cost, we want to use as few alarms as necessary. The first problem involves finding a minimum dominating set of a graph. If the alarms are three-state alarms capable of distinguishing between a fire in the same room as the alarm and a fire in an adjacent room, we are trying to find a minimum locating dominating set. If the alarms are two-state alarms that can only sound if there is a fire somewhere nearby, we are looking for a differentiating dominating set of the graph. These three areas are the subject of much active research; we primarily focus on the third problem. An identifying code of a graph G is a dominating set C such that every vertex x of G is distinguished from the other vertices by the set of vertices in C that are at distance at most r ≥ 1 from x. When only vertices outside the code are asked to be identified, we get the related concept of a locating dominating set. The problem of finding an identifying code (resp. a locating dominating code) of minimum size is an NP-hard problem, even when the input graph belongs to a number of specific graph classes. Therefore, we study this problem in some restricted classes of undirected graphs such as split graphs, line graphs, and paths in directed graphs. Then we present some results on identifying codes by giving an exact value of the upper total locating domination number and a total 2-identifying code in directed and undirected graphs. Moreover, we determine exact values of the locating dominating code and edge identifying code of thin headless spiders and the locating dominating code of complete suns.
Keywords: identifying codes, locating dominating set, split graphs, thin headless spider
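To make the definition of an identifying code concrete, the following sketch (illustrative only; the graph, candidate sets, and function names are not from the paper) checks whether a vertex subset C of a small undirected graph is an identifying code for r = 1, i.e., whether every closed neighbourhood intersected with C is non-empty and all such intersections are pairwise distinct.

```python
# Minimal sketch: verify an identifying code (r = 1) on a small undirected graph.
# The graph and candidate sets below are hypothetical illustrations.

def closed_neighborhood(graph, v):
    """N[v] = {v} union the neighbours of v."""
    return {v} | set(graph[v])

def is_identifying_code(graph, code):
    """C is an identifying code iff every set N[v] ∩ C is non-empty
    (domination) and all such sets are pairwise distinct (identification)."""
    signatures = {}
    for v in graph:
        sig = frozenset(closed_neighborhood(graph, v) & code)
        if not sig:                       # v is not dominated by C
            return False
        if sig in signatures.values():    # two vertices share the same signature
            return False
        signatures[v] = sig
    return True

# A path on 4 vertices: 0 - 1 - 2 - 3
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_identifying_code(path4, {0, 1, 2}))  # True: all signatures distinct
print(is_identifying_code(path4, {0, 2}))     # False: vertices 2 and 3 both get {2}
```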
Procedia PDF Downloads 476
599 Detection of MspI Polymorphism and SNP of GH Gene in Some Camel Breeds Reared in Egypt
Authors: Sekena H. Abd El-Aziem, Heba A. M. Abd El-Kader, Sally S. Alam, Othman E. Othman
Abstract:
Growth hormone (GH) is an anabolic hormone synthesized and secreted by the somatotroph cells of the anterior lobe of the pituitary gland in a circadian and pulsatile manner, the pattern of which plays an important role in postnatal longitudinal growth and development, tissue growth, lactation, and reproduction, as well as protein, lipid and carbohydrate metabolism. The aim of this study was to detect the genetic polymorphism of the GH gene in five camel breeds reared in Egypt; Sudany, Somali, Mowaled, Maghrabi and Falahy, using the PCR-RFLP technique. This work also aimed to identify the single nucleotide polymorphism between the different genotypes detected in these camel breeds. The 613-bp amplified fragment of camel GH was digested with the restriction enzyme MspI, and the result revealed the presence of three different genotypes, CC, CT and TT, in the tested breeds, with significant differences recorded in the genotype frequencies between these camel breeds. The result showed that the Maghrabi breed, which is classified as a dual-purpose camel, had a higher frequency of allele C (0.75) than the other four tested breeds. Sequence analysis revealed the presence of a SNP (C→T) at position 264 in the amplified fragment, which is responsible for the destruction of the restriction site C^CGG and consequently the appearance of two different alleles, C and T. The nucleotide sequences of camel GH alleles T and C were submitted to the NCBI/BankIt/GenBank nucleotide sequence database and have accession numbers KP143517 and KP143518, respectively. It is concluded that only one SNP, C→T, was detected in the GH gene among the five tested camel breeds reared in Egypt, and this nucleotide substitution can be used as a marker for genetic biodiversity between camel breeds reared in Egypt. Also, due to the possible association between allele C and a higher growth rate, it can be used in marker-assisted selection (MAS) for camels, and camels possessing this allele can be entered into breeding programs as a way to enhance growth traits in camel breeds reared in Egypt.
Keywords: camel breeds in Egypt, GH, PCR-RFLP, SNPs
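As a hedged illustration of how allele frequencies such as the C frequency of 0.75 reported for the Maghrabi breed are obtained from PCR-RFLP genotype counts, the snippet below computes allele frequencies from hypothetical CC/CT/TT counts; the numbers are not the study's data.

```python
# Illustrative sketch (not the authors' data pipeline): allele frequencies of the
# MspI C/T variant from genotype counts in one breed.  Counts are hypothetical.
genotype_counts = {"CC": 30, "CT": 15, "TT": 5}

n_animals = sum(genotype_counts.values())
n_alleles = 2 * n_animals                      # each animal carries two alleles

freq_C = (2 * genotype_counts["CC"] + genotype_counts["CT"]) / n_alleles
freq_T = (2 * genotype_counts["TT"] + genotype_counts["CT"]) / n_alleles

print(f"f(C) = {freq_C:.2f}, f(T) = {freq_T:.2f}")   # here: f(C) = 0.75, f(T) = 0.25
```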
Procedia PDF Downloads 461
598 A Novel Study Contrasting Traditional Autopsy with Post-Mortem Computed Tomography in Falls Leading to Death
Authors: Balaji Devanathan, Gokul G., Abilash S., Abhishek Yadav, Sudhir K. Gupta
Abstract:
Background: As an alternative to the traditional autopsy, a virtual autopsy is carried out using scanning and imaging technologies, mainly post-mortem computed tomography (PMCT). This facility aims to supplement traditional autopsy results and reduce or eliminate internal dissection in subsequent autopsies. For emotional and religious reasons, the deceased's relatives have historically disapproved of such internal dissection. Friends and family would rather have the non-invasive, objective, and preservative PMCT than a traditional autopsy. Additionally, this work aids in the examination of the technologies and the benefits and drawbacks of each, demonstrating the significance of contemporary imaging in the field of forensic medicine. Results: One hundred fatal falls were analysed by the authors. Before the autopsy, each case underwent a PMCT examination using a 16-slice multi-slice spiral CT scanner. Using specialised software, MPR and VR reconstructions were carried out after the raw images were captured. Fractures of the skull, facial bones, clavicle, scapula, and vertebrae were detected more accurately than in a routine autopsy. The interpretation of pneumothorax, pneumoperitoneum, pneumocephalus, and hemosinus is much enhanced by PMCT compared with traditional autopsy. Conclusion: It is useful to visualise the skeletal damage in fall-from-height cases using a virtual autopsy based on PMCT. Thus, a virtual autopsy based on PMCT scans is an ideal tool in trauma cases. When assessing trauma victims, PMCT should be viewed as an additional helpful tool to the traditional autopsy. This is because it can identify additional bone fractures in body parts that are challenging to examine during autopsy, such as posterior regions, which helps the pathologist reconstruct the events and determine the cause of death.
Keywords: PMCT, fall from height, autopsy, fracture
Procedia PDF Downloads 37
597 Jagiellonian-PET: A Novel TOF-PET Detector Based on Plastic Scintillators
Authors: P. Moskal, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, A. Gruntowski, D. Kaminska, L. Kaplon, G. Korcyl, P. Kowalski, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, L. Raczynski, Z. Rudy, P. Salabura, N. G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, W. Wislicki, M. Zielinski, N. Zon
Abstract:
A new concept and results of the performance tests of the TOF-PET detection system developed at the Jagiellonian University will be presented. The novelty of the concept lies in employing long strips of polymer scintillators instead of crystals as detectors of annihilation quanta, and in using predominantly the timing of signals instead of their amplitudes for the reconstruction of Lines-of-Response. The diagnostic chamber consists of plastic scintillator strips read out by pairs of photomultipliers arranged axially around a cylindrical surface. To take advantage of the superior timing properties of plastic scintillators, the signals are probed in the voltage domain with an accuracy of 20 ps by newly developed electronics, and the data are collected by a novel trigger-less and reconfigurable data acquisition system. The hit position and hit time are reconstructed by dedicated reconstruction methods based on compressive sensing theory and a library of synchronized model signals. The solutions are the subject of twelve patent applications. So far, a time-of-flight resolution of ~120 ps (sigma) has been achieved for a double-strip prototype with a 30 cm field-of-view (FOV). This is better by more than a factor of two than the TOF resolution achievable in current TOF-PET modalities, and at the same time the FOV of the 30 cm long prototype is significantly larger with respect to typical commercial PET devices. The Jagiellonian PET (J-PET) detector with plastic scintillators arranged axially also possesses another advantage: its diagnostic chamber is free of any electronic devices and magnetic materials, thus giving unique possibilities of combining J-PET with CT and J-PET with MRI for scanning the same part of a patient at the same time with both methods.
Keywords: PET-CT, PET-MRI, TOF-PET, scintillator
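The timing-based reconstruction described above can be sketched with a simplified model (the actual J-PET reconstruction relies on compressive sensing and a library of model signals; the effective light speed and the times below are assumed values): the axial hit position follows from the arrival-time difference at the two photomultipliers, and the annihilation point along the Line-of-Response follows from the time-of-flight difference between the two hits.

```python
# Minimal, simplified sketch of strip-based timing reconstruction (illustrative only).
C_LIGHT = 299.792458     # mm/ns, speed of light in vacuum
V_EFF   = 126.0          # mm/ns, assumed effective light propagation speed in the strip

def hit_along_strip(t_left_ns, t_right_ns):
    """Axial hit position (mm) from the strip centre, from the PMT time difference."""
    return 0.5 * V_EFF * (t_left_ns - t_right_ns)

def hit_time(t_left_ns, t_right_ns):
    """Interaction time (ns) at the strip, up to a constant offset."""
    return 0.5 * (t_left_ns + t_right_ns)

def lor_offset(t_hit_a, t_hit_b):
    """Annihilation-point offset (mm) from the LOR midpoint, from the TOF difference."""
    return 0.5 * C_LIGHT * (t_hit_a - t_hit_b)

# Example: the photon arrives 0.4 ns earlier at the left PMT of one strip
z = hit_along_strip(10.0, 10.4)   # ≈ -25.2 mm, i.e. towards the left PMT
```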
Procedia PDF Downloads 494
596 Enhanced Photocatalytic H₂ Production from H₂S on Metal Modified CdS-ZnS Semiconductors
Authors: Maali-Amel Mersel, Lajos Fodor, Otto Horvath
Abstract:
Photocatalytic H₂ production by H₂S decomposition is regarded to be an environmentally friendly process to produce carbon-free energy through direct solar energy conversion. For this purpose, sulphide-based materials, as photocatalysts, were widely used due to their excellent solar spectrum responses and high photocatalytic activity. The loading of proper co-catalysts that are based on cheap and earth-abundant materials on those semiconductors was shown to play an important role in the improvement of their efficiency. In this research, CdS-ZnS composite was studied because of its controllable band gap and excellent performance for H₂ evolution under visible light irradiation. The effects of the modification of this photocatalyst with different types of materials and the influence of the preparation parameters on its H₂ production activity were investigated. The CdS-ZnS composite with an enhanced photocatalytic activity for H₂ production was synthesized from ammine complexes. Two types of modification were used: compounds of Ni-group metals (NiS, PdS, and Pt) were applied as co-catalyst on the surface of CdS-ZnS semiconductor, while NiS, MnS, CoS, Ag₂S, and CuS were used as a dopant in the bulk of the catalyst. It was found that 0.1% of noble metals didn’t remarkably influence the photocatalytic activity, while the modification with 0.5% of NiS was shown to be more efficient in the bulk than on the surface. The modification with other types of metals results in a decrease of the rate of H₂ production, while the co-doping seems to be more promising. The preparation parameters (such as the amount of ammonia to form the ammine complexes, the order of the preparation steps together with the hydrothermal treatment) were also found to highly influence the rate of H₂ production. SEM, EDS and DRS analyses were made to reveal the structure of the most efficient photocatalysts. Moreover, the detection of the conduction band electron on the surface of the catalyst was also investigated. The excellent photoactivity of the CdS-ZnS catalysts with and without modification encourages further investigations to enhance the hydrogen generation by optimization of the reaction conditions.Keywords: H₂S, photoactivity, photocatalytic H₂ production, CdS-ZnS
Procedia PDF Downloads 127
595 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru
Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar
Abstract:
Nowadays, heritage building information modeling (HBIM) is considered an efficient tool to represent and manage information of cultural heritage (CH). The basis of this tool relies on a 3D model generally obtained from a cloud-to-BIM procedure. There are different methods to create an HBIM model that goes from manual modeling based on the point cloud to the automatic detection of shapes and the creation of objects. The selection of these methods depends on the desired level of development (LOD), level of information (LOI), grade of generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using Recap Pro, Revit, and Dynamo interface following a three-step methodology. The first step consists of the manual modeling of simple structural (e.g., regular walls, columns, floors, wall openings, etc.) and architectural (e.g., cornices, moldings, and other minor details) elements using the point cloud as reference. Then, Dynamo is used for generative modeling of complex structural elements such as vaults, infills, and domes. Finally, semantic information (e.g., materials, typology, state of conservation, etc.) and pathologies are added within the HBIM model as text parameters and generic models families, respectively. The application of this methodology allows the documentation of CH following a relatively simple to apply process that ensures adequate LOD, LOI, and GOG levels. In addition, the easy implementation of the method as well as the fact of using only one BIM software with its respective plugin for the scan-to-BIM modeling process means that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources since the BIM software used has a free student license.Keywords: cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit
Procedia PDF Downloads 141
594 Development of a Multi-Locus DNA Metabarcoding Method for Endangered Animal Species Identification
Authors: Meimei Shi
Abstract:
Objectives: The identification of endangered species, especially simultaneous detection of multiple species in complex samples, plays a critical role in alleged wildlife crime incidents and prevents illegal trade. This study was to develop a multi-locus DNA metabarcoding method for endangered animal species identification. Methods: Several pairs of universal primers were designed according to the mitochondria conserved gene regions. Experimental mixtures were artificially prepared by mixing well-defined species, including endangered species, e.g., forest musk, bear, tiger, pangolin, and sika deer. The artificial samples were prepared with 1-16 well-characterized species at 1% to 100% DNA concentrations. After multiplex-PCR amplification and parameter modification, the amplified products were analyzed by capillary electrophoresis and used for NGS library preparation. The DNA metabarcoding was carried out based on Illumina MiSeq amplicon sequencing. The data was processed with quality trimming, reads filtering, and OTU clustering; representative sequences were blasted using BLASTn. Results: According to the parameter modification and multiplex-PCR amplification results, five primer sets targeting COI, Cytb, 12S, and 16S, respectively, were selected as the NGS library amplification primer panel. High-throughput sequencing data analysis showed that the established multi-locus DNA metabarcoding method was sensitive and could accurately identify all species in artificial mixtures, including endangered animal species Moschus berezovskii, Ursus thibetanus, Panthera tigris, Manis pentadactyla, Cervus nippon at 1% (DNA concentration). In conclusion, the established species identification method provides technical support for customs and forensic scientists to prevent the illegal trade of endangered animals and their products.Keywords: DNA metabarcoding, endangered animal species, mitochondria nucleic acid, multi-locus
Procedia PDF Downloads 136
593 Impacts of Urbanization on Forest and Agriculture Areas in Savannakhet Province, Lao People's Democratic Republic
Authors: Chittana Phompila
Abstract:
The current population increase pushes up demands for natural resources and living space. In Laos, urban areas have been expanding rapidly in recent years. Rapid urbanization can have negative impacts on landscapes, including forest and agriculture lands. The primary objectives of this research were 1) to map current urban areas in a large city in Savannakhet province, Laos, 2) to compare changes in urbanization between 1990 and 2018, and 3) to estimate the forest and agriculture areas lost due to the expansion of urban areas over the last twenty-plus years within the study area. Landsat 8 data were used, and existing GIS data were collected, including spatial data on rivers, lakes, roads, vegetated areas and other land use/land covers. The GIS data were obtained from government sectors. An object-based classification (OBC) approach was applied in eCognition for image processing and analysis of urban areas. Historical data from other Landsat instruments (Landsat 5 and 7) were used to allow us to compare changes in urbanization in 1990, 2000, 2010 and 2018 in this study area. Only three main land cover classes were considered and classified, namely forest, agriculture and urban areas. A change detection approach was applied to illustrate changes in built-up areas in these periods. Our study shows that the overall accuracy of the map was 95%, with kappa ≈ 0.8. It is found that there is ineffective control over land-use conversions from forest and agriculture to urban areas in many main cities across the province. A large area of agriculture and forest has been lost due to this conversion. Uncontrolled urban expansion and inappropriate land use planning can create pressure on resource utilisation. As a consequence, it can lead to food insecurity and a national economic downturn in the long term.
Keywords: urbanisation, forest cover, agriculture areas, Landsat 8 imagery
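A minimal sketch of the post-classification change detection step is given below (illustrative only; the class codes and the tiny arrays are hypothetical stand-ins for the classified Landsat rasters). It cross-tabulates the 1990 and 2018 class maps into a transition matrix, from which conversions such as forest or agriculture to urban can be counted.

```python
# Hedged sketch of post-classification change detection between two classified rasters.
import numpy as np

CLASSES = {0: "forest", 1: "agriculture", 2: "urban"}

lc_1990 = np.array([[0, 0, 1], [1, 1, 2], [0, 1, 2]])   # hypothetical 1990 class map
lc_2018 = np.array([[0, 2, 2], [1, 2, 2], [0, 2, 2]])   # hypothetical 2018 class map

# Transition (change) matrix: rows = 1990 class, columns = 2018 class.
n = len(CLASSES)
change_matrix = np.zeros((n, n), dtype=int)
np.add.at(change_matrix, (lc_1990.ravel(), lc_2018.ravel()), 1)

for i, j in zip(*np.nonzero(change_matrix)):
    print(f"{CLASSES[i]:>11} -> {CLASSES[j]:<11}: {change_matrix[i, j]} px")
```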
Procedia PDF Downloads 157
592 Empirical Analysis of the Global Impact of Cybercrime Laws on Cyber Attacks and Malware Types
Authors: Essang Anwana Onuntuei, Chinyere Blessing Azunwoke
Abstract:
The study focused on probing the effectiveness of online consumer privacy and protection laws, electronic transaction laws, privacy and data protection laws, and cybercrime legislation amid frequent cyber-attacks and malware types worldwide. An empirical analysis was engaged to uncover ties and causations between the stringency and implementation of these legal structures and the prevalence of cyber threats. A deliberate sample of seventy-eight countries (thirteen countries each from six continents) was chosen as sample size to study the challenges linked with trending regulations and possible panoramas for improving cybersecurity through refined legal approaches. Findings establish if the frequency of cyber-attacks and malware types vary significantly. Also, the result proved that various cybercrime laws differ statistically, and electronic transactions law does not statistically impact the frequency of cyber-attacks. The result also statistically revealed that the online Consumer Privacy and Protection law does not influence the total number of cyber-attacks. In addition, the results implied that Privacy and Data Protection laws do not statistically impact the total number of cyber-attacks worldwide. The calculated value also proved that cybercrime law does not statistically impact the total number of cyber-attacks. Finally, the computed value concludes that combined multiple cyber laws do not significantly impact the total number of cyber-attacks worldwide. Suggestions were produced based on findings from the study, contributing to the ongoing debate on the validity of legal approaches in battling cybercrime and shielding consumers in the digital age.Keywords: cybercrime legislation, cyber attacks, consumer privacy and protection law, detection, electronic transaction law, prevention, privacy and data protection law, prohibition, prosecution
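The kind of significance test described above can be sketched as follows (a hypothetical example, not the study's data or exact procedure): a chi-square test of independence between the presence of a given law and the distribution of attack counts.

```python
# Hedged sketch of one comparison of the type described above: do cyber-attack
# counts differ between countries with and without a given law?  Counts are hypothetical.
import numpy as np
from scipy import stats

# Rows: law present / law absent; columns: counts of two attack/malware categories.
contingency = np.array([[120,  80],
                        [150, 110]])

chi2, p_value, dof, expected = stats.chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A p-value above the chosen alpha (e.g. 0.05) would indicate no statistically
# significant association, mirroring the pattern the study reports.
```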
Procedia PDF Downloads 39
591 Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network
Authors: P. Karthick, K. Mahesh
Abstract:
Video has become an increasingly significant component of our everyday digital communication. With the growth of richer content and higher display resolutions, its sheer volume poses serious obstacles to the objective of receiving, distributing, compressing, and displaying video content of high quality. In this paper, we propose a first step toward a complete deep video compression model that jointly optimizes all video compression components. The video compression method involves splitting the video into frames, comparing the images using convolutional neural networks (CNN) to remove duplicates, repeating a single image instead of the duplicate images by recognizing and detecting minute changes using a generative adversarial network (GAN), and recording them with long short-term memory (LSTM). Instead of the complete image, the small changes generated using the GAN are substituted, which helps in frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over the frame, clustered with K-means, and singular value decomposition (SVD) is applied to every frame in the video for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec and converted to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality demonstrate a significant resampling rate. On average, the result produced had approximately a 10% deviation in quality and more than 50% in size when compared with the original video.
Keywords: video compression, K-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system
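The per-channel SVD step can be illustrated with a short sketch (only this step is shown; the CNN, GAN, and LSTM stages and the codec packing are omitted, and the frame is a random stand-in): each color channel is replaced by a rank-k approximation built from its leading singular values.

```python
# Hedged sketch of the per-channel SVD compression step described above.
import numpy as np

def compress_channel(channel, k):
    """Rank-k approximation of one color channel via truncated SVD."""
    U, s, Vt = np.linalg.svd(channel.astype(float), full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def compress_frame(frame_rgb, k=30):
    """Apply the rank-k approximation to each of the R, G, B channels."""
    return np.stack([compress_channel(frame_rgb[..., c], k) for c in range(3)], axis=-1)

frame = np.random.randint(0, 256, size=(240, 320, 3))   # stand-in for a video frame
approx = compress_frame(frame, k=30)
# Stored factors: k * (240 + 320 + 1) values per channel instead of 240 * 320.
```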
Procedia PDF Downloads 187
590 Predicting Low Birth Weight Using Machine Learning: A Study on 53,637 Ethiopian Birth Data
Authors: Kehabtimer Shiferaw Kotiso, Getachew Hailemariam, Abiy Seifu Estifanos
Abstract:
Introduction: Despite the highest share of low birth weight (LBW) for neonatal mortality and morbidity, predicting births with LBW for better intervention preparation is challenging. This study aims to predict LBW using a dataset encompassing 53,637 birth cohorts collected from 36 primary hospitals across seven regions in Ethiopia from February 2022 to June 2024. Methods: We identified ten explanatory variables related to maternal and neonatal characteristics, including maternal education, age, residence, history of miscarriage or abortion, history of preterm birth, type of pregnancy, number of livebirths, number of stillbirths, antenatal care frequency, and sex of the fetus to predict LBW. Using WEKA 3.8.2, we developed and compared seven machine learning algorithms. Data preprocessing included handling missing values, outlier detection, and ensuring data integrity in birth weight records. Model performance was evaluated through metrics such as accuracy, precision, recall, F1-score, and area under the Receiver Operating Characteristic curve (ROC AUC) using 10-fold cross-validation. Results: The results demonstrated that the decision tree, J48, logistic regression, and gradient boosted trees model achieved the highest accuracy (94.5% to 94.6%) with a precision of 93.1% to 93.3%, F1-score of 92.7% to 93.1%, and ROC AUC of 71.8% to 76.6%. Conclusion: This study demonstrates the effectiveness of machine learning models in predicting LBW. The high accuracy and recall rates achieved indicate that these models can serve as valuable tools for healthcare policymakers and providers in identifying at-risk newborns and implementing timely interventions to achieve the sustainable developmental goal (SDG) related to neonatal mortality.Keywords: low birth weight, machine learning, classification, neonatal mortality, Ethiopia
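The study used WEKA 3.8.2; as a hedged re-creation of the same modelling step in Python, the sketch below runs a 10-fold cross-validated gradient-boosted classifier over the ten listed predictors. The file name and column names are assumptions, and the categorical variables are assumed to be already numerically encoded.

```python
# Hedged sketch of the modelling step (an equivalent of the WEKA workflow, not the
# authors' exact pipeline).  File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_validate

df = pd.read_csv("births.csv")                     # hypothetical extract of the cohort
X = df[["maternal_education", "maternal_age", "residence", "miscarriage_history",
        "preterm_history", "pregnancy_type", "livebirths", "stillbirths",
        "anc_visits", "fetal_sex"]]                # assumed to be numerically encoded
y = df["low_birth_weight"]                         # 1 = LBW, 0 = normal

scores = cross_validate(GradientBoostingClassifier(), X, y, cv=10,
                        scoring=["accuracy", "precision", "recall", "f1", "roc_auc"])
for metric, values in scores.items():
    if metric.startswith("test_"):
        print(f"{metric}: {values.mean():.3f}")
```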
Procedia PDF Downloads 20
589 Tuberculosis Massive Active Case Discovery in East Jakarta 2016-2017: The Role of Ketuk Pintu Layani Dengan Hati and Juru Pemantau Batuk (Jumantuk) Cadre Programs
Authors: Ngabilas Salama
Abstract:
Background: Indonesia has the 2nd highest number of tuberculosis (TB) cases in the world. It accounts for 1,020,000 new cases per year, only 30% of which have been reported. To find the missing 70%, a massive active case discovery was conducted through two programs: Ketuk Pintu Layani Dengan Hati (KPLDH) and Kader Juru Pemantau Batuk (Jumantuk cadres), who also play a role in child TB screening. Methods: Data were collected and analyzed through the Tuberculosis Integrated Online System from 2014 to 2017, involving 129 DOTS facilities with 86 primary health centers in East Jakarta. Results: East Jakarta has a population of 2,900,722 people. The KPLDH program started in February 2016 with 84 teams (310 people). The Jumantuk cadres were formed 4 months later (218 people). The numbers of new TB cases in East Jakarta (primary health centers) from 2014 to June 2017 were as follows: 6,499 (2,637), 7,438 (2,651), 8,948 (3,211), 5,701 (1,830). Meanwhile, the percentage of child TB case discovery in primary health centers was 8.5%, 9.8%, and 12.1% from 2014 to 2016, respectively. In 2017, child TB case discovery was 13.1% for the first 3 months and 16.5% for the next 3 months. Discussion: The increase in the TB incidence rate from 2014 to 2017 was 14.4%, 20.3%, and 27.4%, respectively, in East Jakarta, and 0.5%, 21.1%, and 14% in primary health centers. This reveals the positive role of KPLDH and Jumantuk in TB detection and reporting. Likewise, these programs were responsible for the increase in child TB case discovery, especially in the first 3 months of 2017 (Ketuk Pintu TB Day program) and the next 3 months (active TB screening). Conclusion: KPLDH and Jumantuk are actively involved in increasing TB case discovery in both adults and children.
Keywords: tuberculosis, case discovery program, primary health center, cadre
Procedia PDF Downloads 331
588 Recent Progress in the Uncooled Mid-Infrared Lead Selenide Polycrystalline Photodetector
Authors: Hao Yang, Lei Chen, Ting Mei, Jianbang Zheng
Abstract:
Currently, uncooled PbSe photodetectors in the mid-infrared range (2-5 μm) with sensitization technology extract a larger photoelectric response than traditional ones and enable room-temperature (300 K) photo-detection with high detectivity, which has attracted wide attention in many fields. This technology generally involves film fabrication by vapor phase deposition (VPD) and a sensitizing process with doping of oxygen and iodine. Many works presented in recent years provide a high-temperature activation method with oxygen/iodine vapor diffusion, which reveals that oxygen or iodine plays an important role in the sensitization of PbSe material. In this paper, we provide our latest experimental results and discussions on the stoichiometry of oxygen and iodine and its influence on the polycrystalline structure and photo-response. The experimental results revealed that the crystal orientation was transformed from (200) to (420) by sensitization, and a responsivity of 5.42 A/W was obtained at the optimal stoichiometry of oxygen and iodine, with a molecular density of I₂ of ~1.51×10¹² mm⁻³ and an oxygen pressure of ~1 MPa. We verified that I₂ plays a role in transporting oxygen into the crystal lattice, which is actually not its major role. XPS data revealed that samples sensitized with iodine change the atomic proportion of Pb from 34.5% to 25.0% compared with samples without iodine, which results in a proportion of about 1:1 between Pb and Se atoms through the sublimation of PbI₂ during the sensitization process; the Pb/Se atomic proportion is controlled by the I/O atomic proportion in the polycrystalline grains, which is a very important factor for improving the responsivity of uncooled PbSe photodetectors. Moreover, a novel sensitization and dopant activation method is proposed using oxygen ion implantation with a low ion energy of < 500 eV and a beam current of ~120 μA/cm². These results may be helpful for understanding the sensitization mechanism of polycrystalline lead salt materials.
Keywords: polycrystalline PbSe, sensitization, transport, stoichiometry
Procedia PDF Downloads 347
587 On-Line Super Critical Fluid Extraction, Supercritical Fluid Chromatography, Mass Spectrometry, a Technique in Pharmaceutical Analysis
Authors: Narayana Murthy Akurathi, Vijaya Lakshmi Marella
Abstract:
The literature is reviewed with regard to on-line supercritical fluid extraction (SFE) coupled directly with supercritical fluid chromatography (SFC)-mass spectrometry, which is typically more sensitive than conventional LC-MS/MS and GC-MS/MS. It is becoming increasingly interesting to use on-line techniques that combine sample preparation, separation, and detection in one analytical set-up. This requires less human intervention, uses small amounts of sample and organic solvent, and yields enhanced analyte enrichment in a shorter time. The sample extraction is performed under light shielding and anaerobic conditions, preventing the degradation of thermolabile analytes. It may be able to analyze compounds over a wide polarity range, as SFC generally uses carbon dioxide, which is collected as a by-product of other chemical reactions or from the atmosphere, so it contributes no new chemicals to the environment. The diffusion of solutes in supercritical fluids is about ten times greater than that in liquids and about three times less than in gases, which results in a decrease in resistance to mass transfer in the column and allows fast, high-resolution separations. The drawback of SFC when using carbon dioxide as the mobile phase is that the direct introduction of water samples poses a series of problems; water must therefore be eliminated before it reaches the analytical column. Hundreds of compounds can be analysed simultaneously by simply enclosing the sample in an extraction vessel. This is mainly applicable to the pharmaceutical industry, where it can analyse fatty acids and phospholipids that have many analogues whose UV spectra are very similar, trace additives in polymers, and hundreds of pesticides with good resolution; cleaning validation can also be conducted by placing a swab sample in an extraction vessel.
Keywords: super critical fluid extraction (SFE), super critical fluid chromatography (SFC), LCMS/MS, GCMS/MS
Procedia PDF Downloads 389
586 Development of a Fuzzy Logic Based Model for Monitoring Child Pornography
Authors: Mariam Ismail, Kazeem Rufai, Jeremiah Balogun
Abstract:
A study was conducted to apply fuzzy logic to the development of a monitoring model for child pornography based on associated risk factors, which can be used by forensic experts or integrated into forensic systems for the early detection of child pornographic activities. A number of methods were adopted in the study, including an extensive review of related works to identify the factors associated with child pornography, following which the factors were validated by an expert sex psychologist and guidance counselor, and relevant data were collected. Fuzzy membership functions were used to fuzzify the identified variables, alongside the risk of the occurrence of child pornography, based on the inference rules provided by the experts consulted, and the fuzzy logic expert system was simulated using the Fuzzy Logic Toolbox available in MATLAB Release 2016. The results of the study showed that there were 4 categories of risk factors required for assessing the risk of a suspect committing child pornography offenses. The results showed that 2 and 3 triangular membership functions were used to formulate the risk factors based on the 2 and 3 labels assigned, respectively. The results showed that 5 fuzzy logic models were formulated, such that the first 4 were used to assess the impact of each category on child pornography, while the last one takes the 4 outputs from the 4 fuzzy logic models as the inputs required for assessing the risk of child pornography. The following conclusion was made: there were factors related to personal traits, social traits, history of child pornography crimes, and self-regulatory deficiency traits of the suspects required for the assessment of the risk of child pornography crimes committed by a suspect. Using the values of the identified risk factors selected for this study, the risk of child pornography can be easily assessed from their values in order to determine the likelihood of a suspect perpetrating the crime.
Keywords: fuzzy, membership functions, pornography, risk factors
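A minimal sketch of the fuzzification and rule-firing idea is shown below (the study used the MATLAB Fuzzy Logic Toolbox; this plain-Python version uses hypothetical labels, breakpoints, and a single Mamdani-style rule purely for illustration).

```python
# Hedged sketch of triangular membership functions and one min-style fuzzy rule.
# Labels, breakpoints, and the rule itself are hypothetical illustrations.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical labels over a normalized 0-10 risk-factor score
def low(x):    return tri(x, -0.1, 0.0, 5.0)
def medium(x): return tri(x, 0.0, 5.0, 10.0)
def high(x):   return tri(x, 5.0, 10.0, 10.1)

# One illustrative Mamdani-style rule:
# IF personal-trait score is high AND self-regulatory-deficiency score is high
# THEN risk is high (AND realised as min).
def rule_high_risk(personal_score, self_reg_score):
    return min(high(personal_score), high(self_reg_score))

print(f"firing strength: {rule_high_risk(8.0, 7.0):.2f}")   # 0.40 for these inputs
```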
Procedia PDF Downloads 127
585 Electrochemical Biosensor for the Detection of Botrytis spp. in Temperate Legume Crops
Authors: Marzia Bilkiss, Muhammad J. A. Shiddiky, Mostafa K. Masud, Prabhakaran Sambasivam, Ido Bar, Jeremy Brownlie, Rebecca Ford
Abstract:
A greater achievement in Integrated Disease Management (IDM) to prevent losses would result from early diagnosis and quantitation of the causal pathogen species for accurate and timely disease control. This could significantly reduce costs to the growers and reduce any flow-on impacts to the environment from excessive chemical spraying. The necrotrophic fungal disease botrytis grey mould, caused by Botrytis cinerea and Botrytis fabae, significantly reduces temperate legume yield and grain quality during favourable environmental conditions in Australia and worldwide. Several immunogenic and molecular probe-type protocols have been developed for their diagnosis, but these have varying levels of species-specificity, sensitivity, and consequent usefulness within the paddock. To substantially improve speed, accuracy, and sensitivity, advanced nanoparticle-based biosensor approaches have been developed. For this, two sets of primers were designed for both Botrytis cinerea and Botrytis fabae, which showed species specificity with an initial sensitivity of two genomic copies/µl in pure fungal backgrounds using multiplexed quantitative PCR. During further validation, quantitative PCR detected 100 spores on artificially infected legume leaves. Simultaneously, an electro-catalytic assay was developed for both target fungal DNAs using functionalised magnetic nanoparticles. This was extremely sensitive, able to detect a single spore within a raw total plant nucleic acid extract background. We believe that the translation of this technology to the field will enable quantitative assessment of pathogen load for future accurate decision support for informed botrytis grey mould management.
Keywords: biosensor, botrytis grey mould, sensitive, species specific
Procedia PDF Downloads 172
584 Enzyme Producing Psychrophilic Pseudomonas spp. Isolated from Poultry Meats
Authors: Ali Aydin, Mert Sudagidan, Aysen Coban, Alparslan Kadir Devrim
Abstract:
Pseudomonas spp. (specifically, P. fluorescens and P. fragi) are considered the principal spoilage microorganisms of refrigerated poultry meats. Higher levels of psychrophilic spoilage Pseudomonas spp. on carcasses at the end of processing lead to a decreased shelf life of the refrigerated product. The aims of the study were the identification of psychrophilic Pseudomonas spp. having proteolytic and lipolytic activities from poultry meats by 16S rRNA and rpoB gene sequencing, the investigation of protease- and lipase-related genes, and the determination of the proteolytic activity of Pseudomonas spp. In the isolation procedure, chicken meat samples collected from local markets and slaughterhouses were homogenized, and the lysates were incubated on Standard Method Agar and Skim Milk Agar for the selection of proteolytic bacteria and on Tributyrin Agar for the selection of lipolytic bacteria at +4 °C for 7 days. After detection of proteolytic and lipolytic colonies, the isolates were first analyzed by biochemical tests such as Gram staining, catalase, and oxidase tests. DNA gene sequencing analysis and comparison with GenBank revealed that 126 strongly enzyme-producing Pseudomonas spp. were identified as predominantly P. fluorescens (n=55), P. fragi (n=42), Pseudomonas spp. (n=24), P. cedrina (n=2), P. poae (n=1), P. koreensis (n=1), and P. gessardi (n=1). Additionally, the protease-related aprX gene was screened in the strains and detected in 69/126 strains, whereas the lipase-related lipA gene was found in 9 Pseudomonas strains. Protease activity was determined using a commercially available protease assay kit, and 5 strains showed high protease activity. The results showed that psychrophilic Pseudomonas strains were present in the chicken meat samples and that they can produce protease and lipase at levels important for food spoilage, decreasing food quality and safety.
Keywords: Pseudomonas, chicken meat, protease, lipase
Procedia PDF Downloads 386
583 A Comparative Study of Black Carbon Emission Characteristics from Marine Diesel Engines Using Light Absorption Method
Authors: Dongguk Im, Gunfeel Moon, Younwoo Nam, Kangwoo Chun
Abstract:
Recognition of the need to protect the environment is widespread throughout the world. In the shipping industry, the International Maritime Organization (IMO) has been regulating pollutants emitted from ships through MARPOL 73/78. Recently, the Marine Environment Protection Committee (MEPC) of IMO, at its 68th session, approved the definition of Black Carbon (BC) specified by the following physical properties: light absorption, refractoriness, insolubility, and morphology. The committee also agreed on the need for a protocol for any voluntary measurement studies to identify the most appropriate measurement methods. The Filter Smoke Number (FSN), based on light absorption, is categorized as one of the IMO-relevant BC measurement methods. EUROMOT provided FSN measurement data (measured by smoke meter) for 31 different engines (low-, medium-, and high-speed marine engines) of member companies at the 3rd International Council on Clean Transportation (ICCT) workshop on marine BC. The comparison of FSN indicated that BC emission from low-speed marine diesel engines ranged from 0.009 to 0.179 FSN, while that from medium- and high-speed marine diesel engines ranged from 0.012 to 3.2 FSN. In consideration of the low FSN measured from low-speed engines, an experimental study was conducted using both a low-speed marine diesel engine (2-stroke, power of 7,400 kW at 129 rpm) and a high-speed marine diesel engine (4-stroke, power of 403 kW at 1,800 rpm) under the E3 test cycle. The results revealed that the FSN ranged from 0.01 to 0.16 and from 1.09 to 1.35 for the low- and high-speed engines, respectively. The measurement equipment (smoke meter) ranges from 0 to 10 FSN. Considering this measurement range, the FSN values from low-speed engines are near the detection limit (0.002 FSN or ~0.02 mg/m3). From these results, the measurement range of the smoke meter appears to need adjustment to enhance the measurement accuracy of marine BC and the evaluation of the performance of BC abatement technologies.
Keywords: black carbon, filter smoke number, international maritime organization, marine diesel engine (two and four stroke), particulate matter
Procedia PDF Downloads 273
582 Tool for Maxillary Sinus Quantification in Computed Tomography Exams
Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina
Abstract:
The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, to heat or humidify inspired air, for thermoregulation, to impart resonance to the voice, and others. Thus, the real function of the MS is still uncertain. Furthermore, the MS anatomy is complex and varies from person to person. Many diseases may affect the development process of sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables a quantitative analysis. However, this is not always possible in the clinical routine, and if possible, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic. Additionally, manual methods present inter- and intraindividual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of paranasal sinuses. This study was developed with ethical approval from the authors’ institutions and national review panels. The research involved 30 retrospective exams of the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method, combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM), with features such as pixel value, spatial distribution, shape, and others. The detected pixels are used as seed points for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam, obtaining the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist. For comparison, we used Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. From the statistical analyses for the comparison between both methods, the linear regression showed a strong association and low dispersion between variables. The Bland-Altman analyses showed no significant differences between the analyzed methods. The Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases.
Keywords: maxillary sinus, support vector machine, region growing, volume quantification
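The evaluation metrics mentioned above can be sketched as follows (the masks and voxel spacing are hypothetical stand-ins for real CT segmentations): the Jaccard coefficient between the automatic and manual masks, and the MS volume obtained from the voxel count.

```python
# Hedged sketch of the evaluation step: Jaccard similarity between automatic and
# manual maxillary-sinus masks, and volume from voxel counts.  Data are hypothetical.
import numpy as np

auto_mask   = np.zeros((40, 40, 40), dtype=bool); auto_mask[5:30, 5:30, 5:30] = True
manual_mask = np.zeros((40, 40, 40), dtype=bool); manual_mask[6:30, 5:31, 5:30] = True

intersection = np.logical_and(auto_mask, manual_mask).sum()
union        = np.logical_or(auto_mask, manual_mask).sum()
print(f"Jaccard = {intersection / union:.3f}")   # values > 0.90 were reported in the study

voxel_volume_mm3 = 0.5 * 0.5 * 1.0               # assumed in-plane spacing x slice thickness
volume_ml = auto_mask.sum() * voxel_volume_mm3 / 1000.0
print(f"MS volume ≈ {volume_ml:.1f} mL")
```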
Procedia PDF Downloads 503
581 Land Use Land Cover Changes in Response to Urban Sprawl within North-West Anatolia, Turkey
Authors: Melis Inalpulat, Levent Genc
Abstract:
In the present study, an attempt was made to state the Land Use Land Cover (LULC) transformation over three decades around the urban regions of Balıkesir, Bursa, and Çanakkale provincial centers (PCs) in Turkey. Landsat imageries acquired in 1984, 1999 and 2014 were used to determine the LULC change. Images were classified using the supervised classification technique and five main LULC classes were considered including forest (F), agricultural land (A), residential area (urban) - bare soil (R-B), water surface (W), and other (O). Change detection analyses were conducted for 1984-1999 and 1999-2014, and the results were evaluated. Conversions of LULC types to R-B class were investigated. In addition, population changes (1985-2014) were assessed depending on census data, the relations between population and the urban areas were stated, and future populations and urban area needs were forecasted for 2030. The results of LULC analysis indicated that urban areas, which are covered under R-B class, were expanded in all PCs. During 1984-1999 R-B class within Balıkesir, Bursa and Çanakkale PCs were found to have increased by 7.1%, 8.4%, and 2.9%, respectively. The trend continued in the 1999-2014 term and the increment percentages reached to 15.7%, 15.5%, and 10.2% at the end of 30-year period (1984-2014). Furthermore, since A class in all provinces was found to be the principal contributor for the R-B class, urban sprawl lead to the loss of agricultural lands. Moreover, the areas of R-B classes were highly correlated with population within all PCs (R2>0.992). Depending on this situation, both future populations and R-B class areas were forecasted. The estimated values of increase in the R-B class areas for Balıkesir, Bursa, and Çanakkale PCs were 1,586 ha, 7,999 ha and 854 ha, respectively. Due to this fact, the forecasted values for 2,030 are 7,838 ha, 27,866, and 2,486 ha for Balıkesir, Bursa, and Çanakkale, and thus, 7.7%, 8.2%, and 9.7% more R-B class areas are expected to locate in PCs in respect to the same order.Keywords: landsat, LULC change, population, urban sprawl
Procedia PDF Downloads 260
580 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge
Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi
Abstract:
Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place and the adequate intervention will most probably reduce the future costs with maintenance, minimize downtime and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible decisions must be made at the right time, which implies using systems that can detect abnormalities in their early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructures. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge based exclusively on measurements using the suggested method in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial Neural Networks are trained and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is presently still in operation despite experiencing obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, which comprises concepts like clustering analysis and statistical hypothesis testing. These enable to interpret the obtained prediction errors, draw conclusions about the state of the structure and thus support decision making regarding its maintenance.Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring
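The model-free idea can be sketched as follows (synthetic data and an arbitrary small network; not the authors' architecture or thresholding rule): a neural network is trained to predict one acceleration channel from reference channels on data from the healthy state, and new windows whose prediction error falls outside a simple control limit are flagged.

```python
# Hedged sketch of model-free damage detection via prediction errors.  All data
# below are synthetic; the network size and 3-sigma limit are illustrative choices.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 3))                          # 3 reference sensors, healthy period
y_train = X_train @ np.array([0.5, -0.3, 0.8]) + 0.05 * rng.normal(size=2000)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

errors = y_train - model.predict(X_train)
mu, sigma = errors.mean(), errors.std()                       # baseline error statistics

X_new = rng.normal(size=(10, 3))
y_new = X_new @ np.array([0.5, -0.3, 0.8]) + 0.6              # shifted response, e.g. damage
new_errors = y_new - model.predict(X_new)
flagged = np.abs(new_errors - mu) > 3 * sigma                 # simple 3-sigma control limit
print(f"{flagged.sum()} of {len(flagged)} new windows flagged as anomalous")
```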
Procedia PDF Downloads 207
579 Prevalence and Comparison for Detection Methods of Candida Species in Vaginal Specimens from Pregnant and Non-Pregnant Saudi Women
Authors: Yazeed Al-Sheikh
Abstract:
Pregnancy represents a risk factor for the occurrence of vulvovaginal candidiasis. To investigate the prevalence rate of vaginal carriage of Candida species in Saudi pregnant and non-pregnant women, high vaginal swab (HVS) specimens (707) were examined by direct microscopy (10% KOH and Giemsa staining) and cultured in parallel on Sabouraud Dextrose Agar (SDA) as well as on “CHROMagar Candida” medium. As expected, Candida-positive cultures were observed more frequently in the pregnant group (24%) than in the non-pregnant group (17%). The frequency of positive cultures was correlated with pregnancy (P=0.047), parity (P=0.001), use of contraceptives (P=0.146), use of antibiotics (P=0.128), and diabetes (P < 0.0001). Out of the 707 HVS specimens examined, 157 specimens were yeast-culture positive (22%) on Sabouraud Dextrose Agar or “CHROMagar Candida”. In comparison, the sensitivities of the direct 10% KOH and the Giemsa stain microscopic examination methods were 84% (132/157) and 95% (149/157), respectively, both with 100% specificity. As for the identity of the 157 recovered yeast isolates, based on API 20C carbohydrate assimilation biotype, germ tube, and chlamydospore formation, C. albicans and C. glabrata constituted 80.3% and 12.7%, respectively. The rates of C. tropicalis, C. kefyr, and C. famata or C. utilis were 2.6%, 1.3%, and 0.6%, respectively. Saccharomyces cerevisiae and Rhodotorula mucilaginosa yeasts were also encountered at frequencies of 1.3% and 0.6%, respectively. Finally, among all 157 recovered yeast isolates, strains resistant to ketoconazole were not detected, whereas 5% of the C. albicans and as many as 55% of the non-albicans yeast isolates (majority C. glabrata) showed resistance to fluconazole. Our findings may prove helpful for the continuous determination of the existing vaginal candidiasis causative species during pregnancy, its laboratory diagnosis and/or control, and possible measures to minimize the incidence of disease-associated pre-term delivery.
Keywords: vaginal candidiasis, Candida spp., pregnancy, risk factors, API 20C-yeast biotypes, giemsa stain, antifungal agents
Procedia PDF Downloads 240
578 MRI Findings in Children with Intractable Epilepsy Compared to Children with Medically Responsive Epilepsy
Authors: Susan Amirsalari, Azime Khosrinejad, Elham Rahimian
Abstract:
Objective: Epilepsy is a common brain disorder characterized by a persistent tendency to develop seizures, with neurological, cognitive, and psychological consequences. Magnetic Resonance Imaging (MRI) is a neuroimaging test facilitating the detection of structural epileptogenic lesions. This study aimed to compare MRI findings between patients with intractable and drug-responsive epilepsy. Material & methods: This case-control study was conducted from 2007 to 2019. The research population encompassed all 1-16-year-old patients with intractable epilepsy referred to the Shafa Neuroscience Center (n=72) (the case group) and drug-responsive patients referred to the pediatric neurology clinic of Baqiyatallah Hospital (the control group). Results: There were 72 (23.5%) patients in the intractable epilepsy group and 200 (76.5%) patients in the drug-responsive group. The participants' mean age was 6.70 ± 4.13 years, and there were 126 males and 106 females in this study. A normal brain MRI was noticed in 21 (29.16%) patients in the case group and 184 (92.46%) patients in the control group. Neuronal migration disorder (NMD) was exhibited in 7 (9.72%) patients in the case group and in no patient in the control group. There were hippocampal abnormalities and focal lesions (mass, dysplasia, etc.) in 10 (13.88%) patients in the case group and only 1 (0.05%) patient in the control group. Gliosis and porencephalic cysts were present in 3 (4.16%) patients in the case group and no patient in the control group. Cerebral and cerebellar atrophy was revealed in 8 (11.11%) patients in the case group and 4 (2.01%) patients in the control group. Corpus callosum agenesis, hydrocephalus, brain malacia, and developmental cysts were more frequent in the case group; however, the difference between the groups was not significant. Conclusion: MRI findings such as hippocampal abnormalities, focal lesions (mass, dysplasia), NMD, porencephalic cysts, gliosis, and atrophy are significantly more frequent in children with intractable epilepsy than in those with drug-responsive epilepsy.
Keywords: magnetic resonance imaging, intractable epilepsy, drug responsive epilepsy, neuronal migrational disorder
Procedia PDF Downloads 43577 The Magnitude and Associated Factors of Coagulation Abnormalities Among Liver Disease Patients at the University of Gondar Comprehensive Specialized Hospital, Northwest Ethiopia
Authors: Melkamu A., Woldu B., Sitotaw C., Seyoum M., Aynalem M.
Abstract:
Background: Liver disease is any condition that affects the liver cells and their function. It is directly linked to coagulation disorders, since most coagulation factors are produced by the liver. Therefore, this study aimed to assess the magnitude and associated factors of coagulation abnormalities among liver disease patients. Methods: A cross-sectional study was conducted from August to October 2022 among 307 consecutively selected study participants at the University of Gondar Comprehensive Specialized Hospital. Sociodemographic and clinical data were collected using a structured questionnaire and a data extraction sheet, respectively. About 2.7 mL of venous blood was collected and analyzed on the Genrui CA51 coagulation analyzer. Data were entered into Epi-data and exported to STATA version 14 for analysis. Findings were described in terms of frequencies and proportions, and factors associated with coagulation abnormalities were analyzed by bivariable and multivariable logistic regression. Result: A total of 307 study participants were included. Of them, the proportions with prolonged Prothrombin Time (PT) and Activated Partial Thromboplastin Time (APTT) were 68.08% and 63.51%, respectively. The presence of anemia (AOR = 2.97, 95% CI: 1.26, 7.03), lack of a vegetable feeding habit (AOR = 2.98, 95% CI: 1.42, 6.24), no history of blood transfusion (AOR = 3.72, 95% CI: 1.78, 7.78), and lack of physical exercise (AOR = 3.23, 95% CI: 1.60, 6.52) were significantly associated with prolonged PT, while the presence of anemia (AOR = 3.02; 95% CI: 1.34, 6.76), lack of a vegetable feeding habit (AOR = 2.64; 95% CI: 1.34, 5.20), no history of blood transfusion (AOR = 2.28; 95% CI: 1.09, 4.79), and lack of physical exercise (AOR = 2.35; 95% CI: 1.16, 4.78) were significantly associated with abnormal APTT. Conclusion: Patients with liver disease had substantial coagulation problems. Anemia, blood transfusion history, lack of physical activity, and lack of vegetables in the diet showed significant associations with coagulopathy. Therefore, early detection and management of coagulation abnormalities in liver disease patients are critical. Keywords: coagulation, liver disease, PT, APTT
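The adjusted odds ratios (AOR) with 95% confidence intervals reported above come from multivariable logistic regression; the abstract's analysis was done in STATA 14. Purely for illustration, a sketch of the same kind of model in Python with statsmodels, on randomly generated stand-in data — the variable names and simulated outcome are assumptions, not the study data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical, randomly generated data standing in for the study variables.
rng = np.random.default_rng(0)
n = 307
df = pd.DataFrame({
    "anemia": rng.integers(0, 2, n),
    "no_vegetable_habit": rng.integers(0, 2, n),
    "no_transfusion_history": rng.integers(0, 2, n),
    "no_physical_exercise": rng.integers(0, 2, n),
})
# Outcome: prolonged PT (1) vs normal (0); simulated here for illustration only.
logit = 0.5 * df.sum(axis=1).to_numpy() - 1.0
df["prolonged_pt"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df.drop(columns="prolonged_pt"))
model = sm.Logit(df["prolonged_pt"], X).fit(disp=False)

# Adjusted odds ratios with 95% confidence intervals (the "AOR, 95% CI" format used in the abstract).
aor = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
aor.columns = ["AOR", "2.5%", "97.5%"]
print(aor.round(2))
```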
Procedia PDF Downloads 58576 Determination of Vinpocetine in Tablets with the Vinpocetine-Selective Electrode and Possibilities of Application in Pharmaceutical Analysis
Authors: Faisal A. Salih
Abstract:
Vinpocetine (Vin) is the ethyl ester of apovincamic acid and a semisynthetic derivative of vincamine, an alkaloid from the lesser periwinkle, Vinca minor. This compound stimulates cerebral metabolism: it increases the uptake of glucose and oxygen and their consumption by brain tissue. Vinpocetine enhances blood flow in the brain and has vasodilating, antihypertensive, and antiplatelet effects; it also seems to improve the ability to acquire new memories and to restore memories that have been lost. The drug has been used clinically for the treatment of cerebrovascular disorders such as stroke and dementia-related memory disorders, as well as in ophthalmology and otorhinolaryngology. No side effects or toxicity have been reported, even with long-term use. For the quantitative determination of Vin in dosage forms, HPLC methods are generally used. A promising alternative is potentiometry with a Vin-selective electrode, which does not require expensive equipment or materials. A further advantage of the potentiometric method is that tablets and injection solutions can be analyzed directly, without separation from matrix components, which reduces both analysis time and cost. In this study, it was found that, with a suitable choice of plasticizer, an electrode with the following membrane composition — PVC (32.8 wt.%), ortho-nitrophenyl octyl ether (66.6 wt.%), and tetrakis(4-chlorophenyl)borate (0.6 wt.%) — exhibits excellent analytical performance: a lower detection limit (LDL) of 1.2·10⁻⁷ M, a linear response range (LRR) of 1·10⁻³–3.9·10⁻⁶ M, and an electrode-function slope of 56.2±0.2 mV/decade. Vin masses per average tablet weight, determined by direct potentiometry (DP) and potentiometric titration (PT) for two different sets of 10 tablets from separate blister packs, were 100.35±0.2 and 100.36±0.1 mg. The Vin content of individual tablets, determined by DP, was 9.87±0.02–10.16±0.02 mg, with an RSD of 0.13–0.35%. The procedure has very good reproducibility, and excellent agreement with the declared amounts was observed. Keywords: vinpocetine, potentiometry, ion selective electrode, pharmaceutical analysis
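For direct potentiometry with a Nernstian electrode function E = E₀ + S·log₁₀(C), a sample concentration can be estimated from the measured potential and a single calibration standard. A small sketch using the reported slope of 56.2 mV/decade; the potentials and the standard concentration in the example are hypothetical:

```python
SLOPE_MV_PER_DECADE = 56.2  # slope of the electrode function reported in the abstract

def concentration_from_potential(e_sample_mv, e_standard_mv, c_standard_m,
                                 slope=SLOPE_MV_PER_DECADE):
    """Direct-potentiometry estimate of concentration from a single-point calibration.

    Assumes a Nernstian response E = E0 + S*log10(C), so
    C_sample = C_standard * 10**((E_sample - E_standard) / S).
    """
    return c_standard_m * 10 ** ((e_sample_mv - e_standard_mv) / slope)

# Hypothetical readings: a 1.0e-4 M vinpocetine standard reads 250.0 mV,
# and the diluted tablet solution reads 261.3 mV.
c = concentration_from_potential(e_sample_mv=261.3, e_standard_mv=250.0, c_standard_m=1.0e-4)
print(f"Estimated Vin concentration: {c:.2e} M")  # ~1.6e-4 M for these example numbers
```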
Procedia PDF Downloads 70575 Semi-Automatic Segmentation of Mitochondria on Transmission Electron Microscopy Images Using Live-Wire and Surface Dragging Methods
Authors: Mahdieh Farzin Asanjan, Erkan Unal Mumcuoglu
Abstract:
Mitochondria are cytoplasmic organelles that play a significant role in a variety of cellular metabolic functions. They act as the power plants of the cell and are surrounded by two membranes, and significant morphological alterations are often due to changes in mitochondrial function. Electron microscope tomography is a powerful technique for studying the three-dimensional (3D) structure of mitochondria and its alterations in disease states. Detection of mitochondria in electron microscopy images is challenging because of the presence of various subcellular structures and imaging artifacts; a further challenge is that each image typically contains more than one mitochondrion. Hand segmentation of mitochondria is tedious and time-consuming and requires specialist knowledge, while fully automatic methods tend to over-segment and do not delineate mitochondria properly. Semi-automatic segmentation methods requiring minimal manual effort are therefore needed to edit the results of fully automatic methods. Here, two editing tools were implemented, based on spline surface dragging and interactive live-wire segmentation, and applied separately to the results of fully automatic segmentation; 3D extensions of these tools were also studied and tested. The 2D and 3D Dice coefficients were 0.93 and 0.92 for surface dragging using splines and 0.94 and 0.91 for the live-wire method, respectively. The 2D and 3D root mean square symmetric surface distances were 0.69 and 0.93 for surface dragging and 0.60 and 2.11 for the live-wire tool. Compared with the results of the automatic segmentation method, these editing tools produced results closer to the ground truth, although the required time was higher than the hand-segmentation time. Keywords: medical image segmentation, semi-automatic methods, transmission electron microscopy, surface dragging using splines, live-wire
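The Dice coefficient quoted above measures the overlap between a segmentation mask and the ground truth, Dice = 2|A∩B| / (|A| + |B|). A minimal sketch of that metric on binary masks; the toy masks are illustrative only:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks (2D or 3D numpy arrays)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy example: two overlapping rectangular "mitochondria" masks.
a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[15:45, 15:45] = True
print(f"Dice = {dice_coefficient(a, b):.3f}")  # ~0.69 for these toy masks
```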
Procedia PDF Downloads 168574 Remote Vital Signs Monitoring in Neonatal Intensive Care Unit Using a Digital Camera
Authors: Fatema-Tuz-Zohra Khanam, Ali Al-Naji, Asanka G. Perera, Kim Gibson, Javaan Chahl
Abstract:
Conventional contact-based vital signs monitoring sensors such as pulse oximeters or electrocardiogram (ECG) electrodes may cause discomfort, skin damage, and infections, particularly in neonates with fragile, sensitive skin. Remote monitoring of vital signs is therefore desirable in both clinical and non-clinical settings. Camera-based vital signs monitoring is a recent technology for these applications with many positive attributes; however, camera-based studies on neonates in clinical settings are still limited. In this study, the heart rate (HR) and respiratory rate (RR) of eight infants at the Neonatal Intensive Care Unit (NICU) in Flinders Medical Centre were remotely monitored using a digital camera and color- and motion-based computational methods. The region of interest (ROI) was efficiently selected by incorporating an image decomposition method, and spatial averaging, spectral analysis, band-pass filtering, and peak detection were used to extract both HR and RR. The experimental results were validated against ground truth data from an ECG monitor and showed strong correlation, with Pearson correlation coefficients (PCC) of 0.9794 and 0.9412 for HR and RR, respectively. The RMSE between the camera-based data and the ECG data was 2.84 beats/min for HR and 2.91 breaths/min for RR. A Bland-Altman analysis also showed close agreement between the two data sets, with mean biases of 0.60 beats/min and 1 breath/min and limits of agreement of -4.9 to +6.1 beats/min and -4.4 to +6.4 breaths/min for HR and RR, respectively. Therefore, video camera imaging may replace conventional contact-based monitoring in the NICU and has potential applications in other contexts such as home health monitoring. Keywords: neonates, NICU, digital camera, heart rate, respiratory rate, image decomposition
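The extraction pipeline described above (spatial averaging over the ROI, band-pass filtering, spectral analysis, peak detection) can be sketched roughly as follows; the filter order, band limits, and synthetic test signal are illustrative assumptions, not the authors' exact parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_rate_bpm(signal, fs, low_hz, high_hz):
    """Estimate a periodic rate (beats or breaths per minute) from a 1-D trace.

    Band-pass filters the spatially averaged ROI signal, then takes the
    dominant FFT peak inside the physiological band.
    """
    b, a = butter(3, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal - np.mean(signal))
    freqs = np.fft.rfftfreq(len(filtered), d=1 / fs)
    spectrum = np.abs(np.fft.rfft(filtered))
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 30 fps trace: a 2.5 Hz "cardiac" component (150 bpm, plausible for a neonate) plus noise.
fs = 30.0
t = np.arange(0, 20, 1 / fs)
trace = 0.5 * np.sin(2 * np.pi * 2.5 * t) + 0.1 * np.random.randn(t.size)
print(f"Estimated HR: {estimate_rate_bpm(trace, fs, 1.5, 4.0):.1f} bpm")  # ~150
# Respiratory rate would use a lower band, e.g. 0.4-1.5 Hz, on a motion-based trace.
```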
Procedia PDF Downloads 103573 A Robust Visual Simultaneous Localization and Mapping for Indoor Dynamic Environment
Authors: Xiang Zhang, Daohong Yang, Ziyuan Wu, Lei Li, Wanting Zhou
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) uses cameras to collect information in unknown environments in order to perform localization and build an environment map simultaneously, and it has a wide range of applications in autonomous driving, virtual reality, and related fields. Current VSLAM systems maintain high accuracy in static environments, but in dynamic environments the movement of objects in the scene reduces the stability of the system, resulting in inaccurate localization and mapping or even failure. In this paper, a robust VSLAM method is proposed to deal effectively with dynamic environments. We propose a dynamic-region removal scheme based on semantic segmentation neural networks and geometric constraints. First, a semantic segmentation network is used to extract the prior active-motion region, prior static region, and prior passive-motion region of the environment. Then, a lightweight frame-tracking module initializes the relative pose between the previous frame and the current frame using the prior static region. A motion consistency detection module based on multi-view geometry and scene flow then divides the environment into static and dynamic regions, so that dynamic object regions are eliminated and only the static region is used by the tracking thread. Our work builds on ORBSLAM3, one of the most effective VSLAM systems available. We evaluated the method on the TUM RGB-D benchmark, and the results demonstrate that it improves the accuracy of the original ORBSLAM3 by 70%–98.5% in highly dynamic environments. Keywords: dynamic scene, dynamic visual SLAM, semantic segmentation, scene flow, VSLAM
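The core idea of the dynamic-region removal scheme — combine a semantic prior (points on likely-moving classes) with a geometric motion-consistency check, and feed only the surviving static points to the tracking thread — can be sketched as follows; the function, threshold, and data layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def filter_dynamic_keypoints(keypoints, semantic_mask, reproj_errors, err_thresh=2.0):
    """Keep only keypoints usable for tracking in a dynamic scene.

    keypoints     : (N, 2) array of pixel coordinates (x, y)
    semantic_mask : H x W boolean array, True where a prior dynamic class (e.g. person) was segmented
    reproj_errors : (N,) reprojection/epipolar residuals from the motion-consistency check (pixels)
    """
    xs = keypoints[:, 0].astype(int)
    ys = keypoints[:, 1].astype(int)
    on_dynamic_object = semantic_mask[ys, xs]                 # semantic prior: drop points on dynamic classes
    geometrically_inconsistent = reproj_errors > err_thresh   # geometric check: drop inconsistent points
    keep = ~on_dynamic_object & ~geometrically_inconsistent
    return keypoints[keep]

# Toy example: 4 keypoints, one on a segmented "person", one failing the geometric check.
mask = np.zeros((480, 640), dtype=bool)
mask[100:300, 200:400] = True  # segmented dynamic-class region
kps = np.array([[50, 50], [250, 150], [500, 400], [600, 100]], dtype=float)
errs = np.array([0.5, 0.8, 5.0, 1.0])
print(filter_dynamic_keypoints(kps, mask, errs))  # keeps [50, 50] and [600, 100]
```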
Procedia PDF Downloads 116572 Ion Beam Writing and Implantation in Graphene Oxide, Reduced Graphene Oxide and Polyimide Through Polymer Mask for Sensorics Applications
Authors: Jan Luxa, Vlastimil Mazanek, Petr Malinsky, Alexander Romanenko, Mariapompea Cutroneo, Vladimir Havranek, Josef Novak, Eva Stepanovska, Anna Mackova, Zdenek Sofer
Abstract:
The use of accelerated energetic ions is an interesting method for introducing structural changes into various carbon-based materials. The properties can be altered in two ways: (a) the ions form conductive pathways in graphene oxide structures by eliminating oxygen functionalities, and (b) doping with selected ions forms metal nanoclusters, further increasing the conductivity. In this work, energetic ion beams were employed in two ways to prepare capacitor structures in graphene oxide (GO), reduced graphene oxide (rGO), and polyimide (PI) on the micro-scale. The first method used ion beam writing with a focused ion beam; the second involved ion implantation through a polymeric mask. To prepare the polymeric mask, PMMA was spin-coated directly on top of the foils, patterned by proton beam writing, and developed in isopropyl alcohol; the mask was finally removed with acetone. All three materials were exposed to ion beams with an energy of 2.5–5 MeV and an ion fluence of 3.75×10¹⁴ cm⁻² (1800 nC·mm⁻²). The prepared microstructures were thoroughly characterized by various analytical methods, including Scanning Electron Microscopy (SEM) with Energy-Dispersive X-ray Spectroscopy (EDS), X-ray Photoelectron Spectroscopy (XPS), micro-Raman spectroscopy, Rutherford Backscattering Spectroscopy (RBS), and Elastic Recoil Detection Analysis (ERDA). Finally, these materials were tested as humidity sensors using electrical conductivity measurements. The results clearly demonstrate that the ion species, energy, and fluence all have a significant influence on the sensing properties of the prepared sensors. Keywords: graphene, graphene oxide, polyimide, ion implantation, sensors
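The fluence and the charge density quoted in parentheses above are related by Q/A = φ·q·e, where q is the ion charge state. A quick sketch of that conversion; the charge state is treated as a free parameter here, since the abstract does not state it:

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

def charge_density_nC_per_mm2(fluence_per_cm2, charge_state=1):
    """Deposited charge per unit area (nC/mm^2) for a given ion fluence (ions/cm^2).

    Q/A = fluence * charge_state * e; 1 cm^2 = 100 mm^2, 1 C = 1e9 nC.
    """
    coulomb_per_cm2 = fluence_per_cm2 * charge_state * E_CHARGE
    return coulomb_per_cm2 * 1e9 / 100.0

fluence = 3.75e14  # ions/cm^2, as stated in the abstract
for q in (1, 2, 3):
    print(f"charge state {q}+: {charge_density_nC_per_mm2(fluence, q):.0f} nC/mm^2")
# With q = 3 the result is ~1800 nC/mm^2, which matches the value quoted alongside the fluence;
# whether that was the actual charge state of the beam is an assumption, not stated in the abstract.
```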
Procedia PDF Downloads 83